US 12,112,200 B2
Pipeline parallel computing using extended memory
Abdullah Kayi, Westchester, NY (US); and Tayfun Gokmen, Briarcliff Manor, NY (US)
Assigned to International Business Machines Corporation, Armonk, NY (US)
Filed by International Business Machines Corporation, Armonk, NY (US)
Filed on Sep. 13, 2021, as Appl. No. 17/473,428.
Prior Publication US 2023/0080480 A1, Mar. 16, 2023
Int. Cl. G06F 9/50 (2006.01); G06N 3/08 (2023.01)
CPC G06F 9/50 (2013.01) [G06N 3/08 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A system, comprising:
compute nodes operatively coupled to a communications network, each compute node comprising processor circuitry and a local memory; and
wherein portions of the local memory of each compute node are pooled to provide an extended memory which comprises a global virtual address space which is shared by the processor circuitry of the compute nodes;
wherein the processor circuitry of the compute nodes communicates over the communications network to perform a pipeline parallel computation process and utilizes the extended memory to exchange data over the communications network to perform the pipeline parallel computation process; and
wherein, in performing the pipeline parallel computation process, data generated by the processor circuitry of a first compute node performing a respective first computation is stored in the extended memory and accessed and utilized by the processor circuitry of a second compute node to perform a respective second computation.
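
For context, the following is a minimal sketch of the kind of staged data exchange the claim describes, using MPI one-sided communication (RMA) to emulate a pooled, globally addressable memory across two processes: rank 0 performs a first computation and puts its output into the window region contributed by rank 1, which then performs a second computation on that data. MPI itself, the rank assignments, the buffer size N, and the toy computations are assumptions of this sketch, not the claimed implementation.

/*
 * Illustrative sketch only: a two-stage pipeline in which rank 0 produces
 * data for a "first computation" and rank 1 consumes it for a "second
 * computation", exchanging results through an MPI one-sided (RMA) window.
 * The window stands in for the pooled "extended memory"; run with at least
 * two ranks, e.g. mpirun -np 2 ./pipeline_sketch
 */
#include <mpi.h>
#include <stdio.h>

#define N 4  /* elements exchanged per pipeline step (arbitrary for the sketch) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank contributes a local buffer to a globally addressable window. */
    double *local;
    MPI_Win win;
    MPI_Win_allocate(N * sizeof(double), sizeof(double),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &local, &win);

    MPI_Win_fence(0, win);
    if (rank == 0) {
        /* First computation: produce intermediate results ... */
        double out[N];
        for (int i = 0; i < N; i++) out[i] = 2.0 * i;
        /* ... and store them in rank 1's portion of the pooled memory. */
        MPI_Put(out, N, MPI_DOUBLE, 1, 0, N, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);  /* completes the Put; data now visible on rank 1 */

    if (rank == 1) {
        /* Second computation: consume the staged data directly from the window. */
        double sum = 0.0;
        for (int i = 0; i < N; i++) sum += local[i] + 1.0;
        printf("rank 1 consumed pipeline stage output, sum = %f\n", sum);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

In this sketch the window allocated by MPI_Win_allocate plays the role of each node's pooled portion of the extended memory, and the fences mark the points at which one pipeline stage's output becomes visible to the next stage over the interconnect.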