US 12,229,569 B2
Methods and apparatus for deep learning network execution pipeline on multi-processor platform
Liu Yang, Beijing (CN); and Anbang Yao, Beijing (CN)
Assigned to Intel Corporation, Santa Clara, CA (US)
Filed by Intel Corporation, Santa Clara, CA (US)
Filed on Oct. 27, 2023, as Appl. No. 18/384,714.
Application 18/384,714 is a continuation of application No. 17/887,964, filed on Aug. 15, 2022, granted, now Pat. No. 11,868,782.
Application 17/887,964 is a continuation of application No. 16/475,081, granted, now Pat. No. 11,461,105, issued on Oct. 4, 2022, previously published as PCT/CN2017/079726, filed on Apr. 7, 2017.
Prior Publication US 2024/0143333 A1, May 2, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 9/30 (2018.01); G06F 9/38 (2018.01); G06F 15/80 (2006.01); G06N 3/08 (2023.01); G06N 20/00 (2019.01); G06T 1/20 (2006.01); G06T 15/00 (2011.01)
CPC G06F 9/3867 (2013.01) [G06F 9/3893 (2013.01); G06F 15/80 (2013.01); G06N 3/08 (2013.01); G06N 20/00 (2019.01); G06T 1/20 (2013.01); G06T 15/005 (2013.01)] 20 Claims
OG exemplary drawing
 
1. At least one non-transitory machine-readable medium, comprising a plurality of instructions stored thereon, that if executed by one or more processors, cause the one or more processors to:
determine a computation distribution for a plurality of nodes of a deep neural network (DNN);
assign at least one node of the plurality of nodes to a first group based on the computation distribution;
assign at least one other node of the plurality of nodes to a second group based on the computation distribution;
assign the first group to a processing resource of a plurality of processing resources;
assign the second group to a second processing resource of the plurality of processing resources; and
cause sequential execution of the first group and the second group by the plurality of processing resources.
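The steps recited in claim 1 can be sketched in code: estimate a computation cost per DNN node, partition the nodes into groups so that each group carries a comparable share of the total computation, map each group to a distinct processing resource, and run the groups in sequence. This is only an illustrative sketch of the claimed flow, not the patented implementation; all names (`Node`, `partition`, `execute`, the cost values) are hypothetical.

```python
# Illustrative sketch of claim 1's partition-and-assign flow.
# All identifiers and the greedy balancing heuristic are assumptions,
# not taken from the patent specification.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cost: float  # estimated computation for this DNN node (e.g., a layer)

def partition(nodes, num_groups=2):
    """Split nodes into contiguous groups of roughly equal total cost,
    approximating the 'computation distribution' step of the claim."""
    total = sum(n.cost for n in nodes)
    target = total / num_groups
    groups, current, acc = [], [], 0.0
    for n in nodes:
        current.append(n)
        acc += n.cost
        if acc >= target and len(groups) < num_groups - 1:
            groups.append(current)
            current, acc = [], 0.0
    groups.append(current)
    return groups

def execute(groups, resources):
    """Assign group i to resource i and execute the groups sequentially,
    so the first group's output feeds the second group."""
    trace = []
    for group, resource in zip(groups, resources):
        for node in group:
            trace.append(f"{resource}:{node.name}")
    return trace

# Hypothetical four-node network split across two processing resources.
nodes = [Node("conv1", 4.0), Node("conv2", 3.0),
         Node("fc1", 2.0), Node("fc2", 1.0)]
groups = partition(nodes, num_groups=2)
trace = execute(groups, ["resource0", "resource1"])
```

With these example costs the greedy split yields two groups of two nodes each, and the trace shows the first group running on one resource before the second group runs on the other, mirroring the sequential execution recited in the claim.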