US 12,393,869 B2
Distributed training method between terminal and edge cloud server
Myung Ki Shin, Sejong-si (KR); and Soohwan Lee, Daejeon (KR)
Assigned to ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, Daejeon (KR)
Filed by Electronics and Telecommunications Research Institute, Daejeon (KR)
Filed on Sep. 24, 2021, as Appl. No. 17/484,356.
Claims priority of application No. 10-2020-0124250 (KR), filed on Sep. 24, 2020; and application No. 10-2021-0126550 (KR), filed on Sep. 24, 2021.
Prior Publication US 2022/0092479 A1, Mar. 24, 2022
Int. Cl. G06N 20/00 (2019.01); G06F 11/30 (2006.01); G06N 3/082 (2023.01); G06N 3/084 (2023.01); G06N 3/098 (2023.01); G06N 20/20 (2019.01); H04L 41/0654 (2022.01); H04L 41/149 (2022.01); H04L 67/10 (2022.01)
CPC G06N 20/00 (2019.01) [H04L 67/10 (2013.01); G06F 11/3006 (2013.01); G06N 3/082 (2013.01); G06N 3/084 (2013.01); G06N 3/098 (2023.01); G06N 20/20 (2019.01); H04L 41/0654 (2013.01); H04L 41/149 (2022.05)] 18 Claims
OG exemplary drawing
 
1. A distributed training method performed by an edge cloud server connected to a central cloud server, the distributed training method comprising:
downloading a machine learning model for training from the central cloud server;
identifying a plurality of user terminals located within the local network coverage of the edge cloud server;
providing machine learning models to the plurality of user terminals based on the downloaded machine learning model and current load conditions or current performance conditions of the plurality of user terminals;
aggregating, from the plurality of user terminals, results of the training of the machine learning models when the training of the machine learning models is completed in the plurality of user terminals;
evaluating the results of the training of the machine learning models by the plurality of user terminals;
updating the machine learning models based on evaluation results; and
providing rewards to the plurality of user terminals for training the machine learning models.
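The claimed edge-server workflow (download a global model, distribute it to capable terminals, aggregate local results, update, and reward) resembles a federated-averaging loop. The sketch below is an illustrative assumption, not the patent's implementation: all names, the load threshold, and the sample-proportional reward rule are hypothetical stand-ins for the claim's "load conditions or performance conditions" and "rewards".

```python
# Hypothetical sketch of the claimed edge-cloud workflow using simple
# FedAvg-style aggregation. Names, thresholds, and the reward rule are
# illustrative assumptions, not taken from the patent text.
from dataclasses import dataclass, field

@dataclass
class Terminal:
    tid: str
    load: float                      # current load in [0, 1]; higher = busier
    weights: list = field(default_factory=list)
    num_samples: int = 0             # local training-set size

def provide_models(global_weights, terminals, load_threshold=0.8):
    """Distribute the downloaded model only to terminals whose current
    load permits training (a stand-in for the claim's load/performance
    conditions)."""
    selected = [t for t in terminals if t.load < load_threshold]
    for t in selected:
        t.weights = list(global_weights)   # each terminal gets a copy
    return selected

def aggregate(terminals):
    """Aggregate trained models as a sample-weighted average (FedAvg)."""
    total = sum(t.num_samples for t in terminals)
    agg = [0.0] * len(terminals[0].weights)
    for t in terminals:
        for i, w in enumerate(t.weights):
            agg[i] += w * t.num_samples / total
    return agg

def evaluate_and_reward(terminals, base_reward=1.0):
    """Score contributions and pay a reward proportional to the data
    each terminal contributed (an assumed incentive rule)."""
    total = sum(t.num_samples for t in terminals)
    return {t.tid: base_reward * t.num_samples / total for t in terminals}

# Toy run: local training is faked by shifting the weights by a constant.
global_w = [0.0, 0.0]
ts = [Terminal("t1", load=0.2), Terminal("t2", load=0.5), Terminal("t3", load=0.9)]
selected = provide_models(global_w, ts)          # t3 is too loaded to train
for k, t in enumerate(selected, start=1):
    t.weights = [w + k for w in t.weights]       # stand-in for local training
    t.num_samples = 100 * k
updated = aggregate(selected)                    # new global model
rewards = evaluate_and_reward(selected)          # per-terminal rewards
```

The weighted average here plays the role of the claim's "aggregating results of the training"; a real deployment would replace the fake local-training step with actual on-device gradient updates.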