US 11,876,858 B1
Cloud-based fleet and asset management for edge computing of machine learning and artificial intelligence workloads
Pradeep Nair, Greenbrae, CA (US); Pragyana K Mishra, Seattle, WA (US); Anish Swaminathan, Greenbrae, CA (US); and Janardhan Prabhakara, Greenbrae, CA (US)
Assigned to Armada Systems Inc., Greenbrae, CA (US)
Filed by Armada Systems Inc., Greenbrae, CA (US)
Filed on Sep. 5, 2023, as Appl. No. 18/461,459.
This patent is subject to a terminal disclaimer.
Int. Cl. H04L 67/1008 (2022.01); H04L 67/101 (2022.01); H04L 41/22 (2022.01); H04L 41/16 (2022.01)
CPC H04L 67/1008 (2013.01) [H04L 41/16 (2013.01); H04L 41/22 (2013.01); H04L 67/101 (2013.01)] 19 Claims
OG exemplary drawing
 
1. A method comprising:
receiving monitoring information from each respective edge compute unit of a plurality of edge compute units, wherein the monitoring information includes information associated with one or more machine learning (ML) or artificial intelligence (AI) workloads implemented by the respective edge compute unit;
receiving respective status information corresponding to a plurality of connected edge assets, wherein each connected edge asset is associated with one or more edge compute units of the plurality of edge compute units, and wherein the plurality of edge compute units and the plurality of connected edge assets are included in a fleet of edge devices;
displaying, using a remote fleet management graphical user interface (GUI), at least a portion of the monitoring information or the status information corresponding to a selected subset of the fleet of edge devices, wherein the selected subset is determined based on one or more user selection inputs to the remote fleet management GUI;
receiving, using the remote fleet management GUI, one or more user configuration inputs indicative of an updated configuration for a respective ML or AI workload of at least one edge compute unit of the selected subset of the fleet of edge devices, the respective ML or AI workload corresponding to a pre-trained ML or AI model deployed on the at least one edge compute unit, wherein the updated configuration:
corresponds to the pre-trained ML or AI model and is configured to cause a subset of edge compute units of the fleet of edge devices to perform distributed retraining of the pre-trained ML or AI model; and
includes orchestration information for distributing a retraining workload across the respective edge compute units of the subset of edge compute units; and
transmitting, from a cloud computing environment associated with the remote fleet management GUI, control information corresponding to the updated configuration, wherein the control information is transmitted to the at least one edge compute unit of the selected subset.
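The claimed method can be illustrated with a minimal sketch. This is not the patentee's implementation; all class names, fields, and the shard-based orchestration scheme below are hypothetical assumptions chosen only to mirror the claim steps: receiving monitoring information, selecting a subset of the fleet, building an updated configuration with orchestration information for distributed retraining, and transmitting control information to the selected edge compute units.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeComputeUnit:
    """Hypothetical stand-in for an edge compute unit in the fleet."""
    unit_id: str
    monitoring: dict = field(default_factory=dict)   # ML/AI workload metrics
    control_log: list = field(default_factory=list)  # control info received

class FleetManager:
    """Cloud-side sketch of the claimed fleet-management method."""

    def __init__(self):
        self.units = {}

    def register(self, unit: EdgeComputeUnit) -> None:
        self.units[unit.unit_id] = unit

    def receive_monitoring(self, unit_id: str, info: dict) -> None:
        # Receive monitoring information from a respective edge compute unit.
        self.units[unit_id].monitoring.update(info)

    def select_subset(self, predicate) -> list:
        # Model the GUI's user selection inputs as a predicate over units.
        return [u for u in self.units.values() if predicate(u)]

    def push_retraining_config(self, subset: list, model_name: str) -> int:
        # Build the updated configuration for the pre-trained model, with
        # orchestration information distributing the retraining workload
        # (here, a simple one-shard-per-unit split; an assumption).
        num_shards = len(subset)
        for shard, unit in enumerate(subset):
            control = {
                "model": model_name,
                "action": "distributed_retrain",
                "orchestration": {"shard": shard, "num_shards": num_shards},
            }
            # Transmit control information to each selected edge compute unit.
            unit.control_log.append(control)
        return num_shards
```

A short usage pass: register two units, ingest monitoring data, select the busy unit via a predicate, and push a retraining configuration, after which only the selected unit holds control information in its log.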