US 12,314,201 B2
Method and apparatus for distributed training of artificial intelligence model in channel-sharing network environment
Ki-Dong Kang, Daejeon (KR); Hong-Yeon Kim, Daejeon (KR); Baik-Song An, Seoul (KR); and Myung-Hoon Cha, Daejeon (KR)
Assigned to ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, Daejeon (KR)
Filed by ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, Daejeon (KR)
Filed on Jun. 30, 2023, as Appl. No. 18/345,083.
Claims priority of application No. 10-2022-0162976 (KR), filed on Nov. 29, 2022.
Prior Publication US 2024/0176756 A1, May 30, 2024
Int. Cl. G06F 7/00 (2006.01); G06F 9/38 (2018.01); G06F 9/48 (2006.01); G06F 13/362 (2006.01)
CPC G06F 13/3625 (2013.01) [G06F 9/3885 (2013.01); G06F 9/4881 (2013.01)] 12 Claims
OG exemplary drawing
 
1. A method for distributed training of an Artificial Intelligence (AI) model in a channel-sharing network environment including multiple computation devices, comprising:
determining whether data parallel processing is applied;
calculating a computation time and a communication time when the input data is evenly distributed across the multiple computation devices; and
unevenly distributing the input data across the multiple computation devices based on the computation time and the communication time,
wherein unevenly distributing the input data comprises distributing the input data such that the difference between the sizes of the pieces of input data distributed to the respective computation devices is constant, so as to enable the multiple computation devices to sequentially access a channel.
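
The uneven distribution recited in the claim can be understood with a small numerical sketch. The following Python code is illustrative only and is not the patented implementation; the timing model, function names (even_split_times, uneven_split), and parameters (per_sample_compute_s, channel_time_s) are assumptions introduced for this example. It assumes a device's computation time is proportional to its share of the input data and that every device occupies the shared channel for the same fixed time, so spacing consecutive devices' computation-finish times by exactly one channel occupancy lets them access the channel one after another.

def even_split_times(total_samples, num_devices,
                     per_sample_compute_s, channel_time_s):
    """Computation and communication time if the input data is split evenly.

    Assumed model: with a shared channel, transfers are serialized, so
    communication completes only after num_devices back-to-back transfers.
    """
    per_device = total_samples / num_devices
    compute_s = per_device * per_sample_compute_s
    comm_s = num_devices * channel_time_s
    return compute_s, comm_s


def uneven_split(total_samples, num_devices,
                 per_sample_compute_s, channel_time_s):
    """Return per-device sample counts whose sizes differ by a constant.

    The constant difference d (in samples) is chosen so that consecutive
    devices finish their computation channel_time_s apart, letting them
    take turns on the shared channel without contention. This choice of d
    is an assumption made for the sketch, not a quote from the patent.
    """
    d = channel_time_s / per_sample_compute_s
    n = num_devices
    # Sizes form an arithmetic sequence: base, base + d, ..., base + (n-1)d,
    # summing to total_samples.
    base = (total_samples - d * n * (n - 1) / 2) / n
    sizes = [round(base + i * d) for i in range(n)]
    # Absorb rounding drift so the shares still sum to total_samples.
    sizes[-1] += total_samples - sum(sizes)
    return sizes


if __name__ == "__main__":
    sizes = uneven_split(total_samples=4096, num_devices=4,
                         per_sample_compute_s=0.002, channel_time_s=0.5)
    print(sizes)                                      # [649, 899, 1149, 1399]
    print([b - a for a, b in zip(sizes, sizes[1:])])  # [250, 250, 250]

Under these assumed numbers (4 devices, 4,096 samples, 2 ms of computation per sample, 0.5 s of channel time per device), the shares are 649, 899, 1149, and 1399 samples, with a constant difference of 250 samples between consecutive devices, so each device finishes its computation just as the previous one releases the shared channel.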