US 12,333,462 B2
Cloud-based fleet and asset management for edge computing of machine learning and artificial intelligence workloads
Pradeep Nair, Greenbrae, CA (US); Pragyana K Mishra, Seattle, WA (US); Anish Swaminathan, Greenbrae, CA (US); and Janardhan Prabhakara, Greenbrae, CA (US)
Assigned to Armada Systems Inc., Greenbrae, CA (US)
Filed by Armada Systems Inc., Greenbrae, CA (US)
Filed on Apr. 3, 2024, as Appl. No. 18/626,208.
Application 18/626,208 is a continuation of application No. 18/461,470, filed on Sep. 5, 2023, granted, now Pat. No. 12,014,219.
Prior Publication US 2025/0077992 A1, Mar. 6, 2025
This patent is subject to a terminal disclaimer.
Int. Cl. G06Q 10/0631 (2023.01); G06N 20/00 (2019.01)
CPC G06Q 10/06311 (2013.01) [G06N 20/00 (2019.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
receiving monitoring information from a respective containerized edge compute unit of a plurality of containerized edge compute units included within a fleet of edge devices, wherein the monitoring information is indicative of:
    local telemetry information corresponding to an edge deployment environment of the respective containerized edge compute unit, and
    inference performance information associated with a configured machine learning (ML) or artificial intelligence (AI) inference workload implemented locally within the edge deployment environment by the respective containerized edge compute unit;
displaying, using a remote fleet management graphical user interface (GUI):
    at least a portion of the monitoring information from the respective containerized edge compute unit, and
    additional monitoring information corresponding to a selected subset of the fleet of edge devices,
    wherein the selected subset is determined based on one or more user selection inputs to the remote fleet management GUI, and wherein the selected subset includes the respective containerized edge compute unit;
receiving, using the remote fleet management GUI, one or more user configuration inputs indicative of an updated configuration for the configured ML or AI inference workload implemented by the respective containerized edge compute unit, wherein:
    the configured ML or AI inference workload corresponds to a pre-trained ML or AI model deployed on the respective containerized edge compute unit, and
    the updated configuration corresponds to a request to finetune the pre-trained ML or AI model; and
transmitting, from a cloud computing environment associated with the remote fleet management GUI, control information corresponding to the updated configuration, the control information obtained based on the one or more user configuration inputs and comprising model finetuning information generated responsive to the request, wherein the control information is transmitted to at least the respective containerized edge compute unit of the selected subset, thereby causing the respective containerized edge compute unit of the selected subset to apply the control information to update the configured ML or AI inference workload.
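
The exemplary claim describes a round trip: edge units push monitoring information up to a cloud fleet manager, an operator views a GUI-selected subset of the fleet, and a fine-tuning request flows back down as control information to that subset. The Python sketch below illustrates that flow only; every name in it (MonitoringInfo, ControlInfo, FleetManager, push_finetune, and so on) is a hypothetical illustration, not the patent's implementation or any Armada product API.

    from dataclasses import dataclass


    @dataclass
    class MonitoringInfo:
        """Upstream report from one containerized edge compute unit (hypothetical)."""
        unit_id: str
        telemetry: dict       # local deployment-environment telemetry (e.g., temperature, power)
        inference_perf: dict  # performance of the configured ML/AI inference workload


    @dataclass
    class ControlInfo:
        """Downstream update: control information for the configured workload (hypothetical)."""
        model_id: str
        finetuning_info: dict  # e.g., a reference to updated weights or a tuning recipe


    class FleetManager:
        """Cloud-side service backing the remote fleet management GUI (hypothetical)."""

        def __init__(self) -> None:
            self.latest: dict[str, MonitoringInfo] = {}

        def ingest(self, info: MonitoringInfo) -> None:
            # Claim step 1: receive monitoring information from a respective edge unit.
            self.latest[info.unit_id] = info

        def view(self, selected_ids: list[str]) -> list[MonitoringInfo]:
            # Claim step 2: return monitoring info for the GUI-selected fleet subset.
            return [self.latest[uid] for uid in selected_ids if uid in self.latest]

        def push_finetune(self, selected_ids: list[str], model_id: str,
                          finetuning_info: dict) -> dict[str, ControlInfo]:
            # Claim steps 3-4: turn the operator's fine-tuning request into control
            # information and transmit it to every selected edge unit (the return
            # value stands in for the actual network send).
            control = ControlInfo(model_id=model_id, finetuning_info=finetuning_info)
            return {uid: control for uid in selected_ids}


    if __name__ == "__main__":
        fm = FleetManager()
        fm.ingest(MonitoringInfo("edge-01", {"temp_c": 41}, {"p95_latency_ms": 12.3}))
        print(fm.view(["edge-01"]))
        print(fm.push_finetune(["edge-01"], "detector-v2", {"dataset": "site-a-frames"}))

Keeping subset selection on the cloud side (view and push_finetune take explicit unit IDs) mirrors the claim's division of labor: the GUI determines the selected subset, and the cloud environment transmits one control message per selected containerized edge compute unit.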