US 12,443,847 B2
Task processing method and device based on neural network
Yi Xiong, Guangzhou (CN); and Songsong Yi, Guangzhou (CN)
Assigned to BIGO TECHNOLOGY PTE. LTD., Singapore (SG)
Appl. No. 17/284,201
Filed by BIGO TECHNOLOGY PTE. LTD., Singapore (SG)
PCT Filed Aug. 23, 2019, PCT No. PCT/CN2019/102139,
§ 371(c)(1), (2) Date Apr. 9, 2021,
PCT Pub. No. WO2020/073742, PCT Pub. Date Apr. 16, 2020.
Claims priority of application No. 201811180174.5 (CN), filed on Oct. 10, 2018.
Prior Publication US 2021/0357759 A1, Nov. 18, 2021
Int. Cl. G06N 3/082 (2023.01); G06F 9/48 (2006.01); G06N 3/063 (2023.01)
CPC G06N 3/082 (2013.01) [G06F 9/485 (2013.01); G06F 9/4881 (2013.01); G06N 3/063 (2013.01)] 18 Claims
OG exemplary drawing
 
10. A task processing device based on a neural network, comprising: a multi-core processor and a memory storing at least one instruction therein;
wherein the instruction, when executed by the multi-core processor, causes the device to execute a task processing method comprising:
acquiring input data, wherein the input data is intended to trigger thread tasks, and is source input data or cache exchange data;
generating processing result data by scheduling at least two corresponding module threads in parallel based on at least two triggered thread tasks to process the input data, wherein the at least two module threads respectively correspond to at least two network modules in the neural network;
outputting the processing result data to a cache, wherein the processing result data is used as the cache exchange data of module threads other than the at least two module threads, or outputting the processing result data, wherein the processing result data is used as a processing result of the source input data;
wherein the at least two module threads at least comprise a start module thread and an end module thread;
wherein generating the processing result data by scheduling at least two corresponding module threads in parallel based on at least two triggered thread tasks to process the input data comprises:
scheduling the start module thread based on the triggered thread task to process the input data provided to the start module thread; and
scheduling the end module thread based on the triggered thread task to process the input data provided to the end module thread; and
executing tasks of the scheduled start module thread on a first core of the multi-core processor and executing tasks of the scheduled end module thread on a different core of the multi-core processor.
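The scheduling arrangement recited in claim 10 can be illustrated with a minimal, non-authoritative sketch. The snippet below assumes a Linux host (os.sched_setaffinity is used for core pinning) and uses two placeholder callables, start_module and end_module, standing in for the claimed network modules; a shared queue plays the role of the cache that carries cache exchange data between the module threads. None of these names or choices come from the patent itself.

```python
# Illustrative sketch only, assuming a Linux host and placeholder "network
# modules"; start_module/end_module and the queue-based cache are hypothetical.
import os
import queue
import threading

cache = queue.Queue()      # holds "cache exchange data" between module threads
results = queue.Queue()    # final processing results of the source input data
STOP = object()            # sentinel used to shut both module threads down

def start_module(x):
    # stand-in for the first network module (e.g. front layers of the network)
    return x * 2

def end_module(x):
    # stand-in for the last network module (e.g. back layers of the network)
    return x + 1

def start_module_thread(source, core):
    os.sched_setaffinity(0, {core})     # run this module thread on a first core
    for item in source:                 # source input data triggers the task
        cache.put(start_module(item))   # result becomes cache exchange data
    cache.put(STOP)

def end_module_thread(core):
    os.sched_setaffinity(0, {core})     # run on a different core
    while True:
        item = cache.get()              # cache exchange data triggers the task
        if item is STOP:
            break
        results.put(end_module(item))   # processing result of the source data

if __name__ == "__main__":
    t1 = threading.Thread(target=start_module_thread, args=(range(4), 0))
    t2 = threading.Thread(target=end_module_thread, args=(1,))
    t1.start(); t2.start()              # the two module threads run in parallel
    t1.join(); t2.join()
    print([results.get() for _ in range(4)])   # -> [1, 3, 5, 7]
```

In CPython, core-level parallelism would in practice come from the module bodies releasing the GIL (as native inference kernels typically do) or from using processes instead of threads; the sketch only mirrors the claimed data flow, cache exchange, and per-core thread assignment.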