US 12,388,719 B2
Creating a global reinforcement learning (RL) model from subnetwork RL agents
Christopher Barber, Ottawa (CA); Sa'di Altamimi, Nepean (CA); Shervin Shirmohammadi, Ottawa (CA); and David Côté, Gatineau (CA)
Assigned to Ciena Corporation, Hanover, MD (US)
Filed by Ciena Corporation, Hanover, MD (US)
Filed on Mar. 9, 2023, as Appl. No. 18/119,586.
Application 18/119,586 is a continuation-in-part of application No. 17/166,383, filed on Feb. 3, 2021, granted, now Pat. No. 11,637,742.
Prior Publication US 2023/0216747 A1, Jul. 6, 2023
Int. Cl. H04L 41/16 (2022.01); H04L 41/12 (2022.01)
CPC H04L 41/16 (2013.01) [H04L 41/12 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A non-transitory computer-readable medium configured to store computer logic having instructions that, when executed, enable a processing device to perform the steps of:
acknowledging a plurality of subnetworks among a whole network, each subnetwork including a plurality of nodes and being represented by a tunnel group having a plurality of end-to-end tunnels through the respective subnetwork;
selecting a first group of subnetworks from the plurality of subnetworks;
generating a Reinforcement Learning (RL) agent for each subnetwork of the first group, each RL agent based on observations of end-to-end metrics of the end-to-end tunnels of the respective subnetwork, the observations being independent of specific topology information of the respective subnetwork;
training a global model based on the RL agents of the first group of subnetworks; and
applying the global model to an Action Recommendation Engine (ARE) configured for recommending actions that can be taken to improve a state of the whole network.
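Claim 1 does not tie the per-subnetwork RL agents to any particular algorithm. The following is a minimal illustrative sketch in Python (using NumPy), assuming a linear Q-learning agent whose observation is a flat vector of end-to-end metrics (e.g., latency, loss, utilization) for each tunnel in the subnetwork's tunnel group, so that no node-level topology enters the state. The identifiers SubnetworkRLAgent, METRICS_PER_TUNNEL, and N_ACTIONS are hypothetical and do not appear in the patent.

    import numpy as np

    METRICS_PER_TUNNEL = 3   # assumed per-tunnel metrics: latency, loss, utilization
    N_ACTIONS = 4            # assumed action set: no-op, reroute, add capacity, rebalance

    class SubnetworkRLAgent:
        """Linear Q-learning agent for one subnetwork (tunnel group).

        The observation is a flat vector of end-to-end tunnel metrics;
        no node-level topology information is used.
        """
        def __init__(self, n_tunnels, lr=0.05, gamma=0.9, epsilon=0.1, seed=0):
            self.obs_dim = n_tunnels * METRICS_PER_TUNNEL
            self.lr, self.gamma, self.epsilon = lr, gamma, epsilon
            self.rng = np.random.default_rng(seed)
            # Q(s, a) approximated linearly: one weight row per action.
            self.W = np.zeros((N_ACTIONS, self.obs_dim))

        def act(self, obs):
            if self.rng.random() < self.epsilon:           # explore
                return int(self.rng.integers(N_ACTIONS))
            return int(np.argmax(self.W @ obs))            # exploit

        def update(self, obs, action, reward, next_obs):
            # One-step temporal-difference update of the linear Q approximation.
            td_target = reward + self.gamma * np.max(self.W @ next_obs)
            td_error = td_target - self.W[action] @ obs
            self.W[action] += self.lr * td_error * obs

    # Example: one training step on synthetic metrics for a 5-tunnel group.
    agent = SubnetworkRLAgent(n_tunnels=5)
    obs = np.random.rand(agent.obs_dim)
    a = agent.act(obs)
    agent.update(obs, a, reward=1.0, next_obs=np.random.rand(agent.obs_dim))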
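Likewise, the claim leaves open how the global model is trained from the subnetwork agents and how the Action Recommendation Engine (ARE) consumes it. The sketch below assumes a federated-averaging-style aggregation of the agents' weight matrices (all tunnel groups reduced to a common observation size) and an ARE that ranks candidate actions by the global model's scores for an observed tunnel-group state; train_global_model and ActionRecommendationEngine are illustrative names only.

    import numpy as np

    def train_global_model(agent_weights):
        """Aggregate per-subnetwork agent weight matrices into one global model.

        A simple element-wise average (in the spirit of federated averaging) is
        assumed here; the claim does not prescribe an aggregation scheme. All
        matrices must share one shape, e.g. by fixing or padding the number of
        tunnels per group.
        """
        return np.mean(np.stack(agent_weights), axis=0)

    class ActionRecommendationEngine:
        """Ranks candidate actions for a tunnel group with the global model."""
        def __init__(self, global_weights):
            self.W = global_weights

        def recommend(self, obs, top_k=3):
            # Score every action against the observed end-to-end metrics and
            # return the top-k (action index, score) pairs.
            scores = self.W @ obs
            ranked = np.argsort(scores)[::-1][:top_k]
            return [(int(a), float(scores[a])) for a in ranked]

    # Example: aggregate three trained agents (random stand-in weights here)
    # and recommend actions for one observed tunnel-group state.
    weights = [np.random.rand(4, 15) for _ in range(3)]   # 4 actions, 5 tunnels x 3 metrics
    are = ActionRecommendationEngine(train_global_model(weights))
    print(are.recommend(np.random.rand(15)))

In this sketch, keeping the observations independent of subnetwork topology is what allows weight-level aggregation across differently structured subnetworks, since every agent then shares the same observation and action dimensions.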