US 11,748,302 B2
Engine to enable high speed context switching via on-die storage
Altug Koker, El Dorado Hills, CA (US); Prasoonkumar Surti, Folsom, CA (US); David Puffer, Tempe, AZ (US); Subramaniam Maiyuran, Gold River, CA (US); Guei-Yuan Lueh, San Jose, CA (US); Abhishek R. Appu, El Dorado Hills, CA (US); Joydeep Ray, Folsom, CA (US); Balaji Vembu, Folsom, CA (US); Tomer Bar-On, Petah Tikva (IL); Andrew T. Lauritzen, Victoria (CA); Hugues Labbe, Granite Bay, CA (US); John G. Gierach, Portland, OR (US); and Gabor Liktor, San Francisco, CA (US)
Assigned to INTEL CORPORATION, Santa Clara, CA (US)
Filed by Intel Corporation, Santa Clara, CA (US)
Filed on Dec. 23, 2021, as Appl. No. 17/561,427.
Application 17/561,427 is a continuation of application No. 16/869,223, filed on May 7, 2020, granted, now Pat. No. 11,210,265.
Application 16/869,223 is a continuation of application No. 15/477,027, filed on Apr. 1, 2017, granted, now Pat. No. 10,649,956, issued on May 12, 2020.
Prior Publication US 2022/0206990 A1, Jun. 30, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 16/13 (2019.01); G06F 9/38 (2018.01); G06F 9/30 (2018.01); G06F 16/11 (2019.01); G06F 16/172 (2019.01); G06F 9/46 (2006.01); G06F 12/1036 (2016.01); G06F 12/1045 (2016.01); G06F 12/0831 (2016.01)
CPC G06F 16/13 (2019.01) [G06F 9/30 (2013.01); G06F 9/38 (2013.01); G06F 9/3836 (2013.01); G06F 9/461 (2013.01); G06F 16/113 (2019.01); G06F 16/172 (2019.01); G06F 12/0831 (2013.01); G06F 12/1036 (2013.01); G06F 12/1045 (2013.01); G06F 2201/84 (2013.01)] 21 Claims
OG exemplary drawing
 
1. An apparatus, comprising:
a hardware processor to:
receive a signal from an instruction scheduler indicating an initiation of a preemption process on a workload executing on a general-purpose graphics processing compute block comprising a plurality of graphics processing resources to execute graphics instructions;
stop an execution of an existing context on at least a first of the plurality of graphics processing resources; and
copy context state data from the existing context on the at least a first of the plurality of graphics processing resources to a first shared memory communicatively coupled to the plurality of graphics processing resources in parallel with executing a new context on the plurality of graphics processing resources.
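
Illustrative sketch (not part of the patent): the flow recited in claim 1 can be modeled in host-side C++, where a hypothetical preemption handler stops the existing context on a first processing resource, snapshots that resource's context state, and copies the snapshot to a shared memory region on a background thread while the new context is launched on the processing resources. All type and function names below (ProcessingResource, SharedMemory, preempt) are assumptions chosen only for illustration; they are not drawn from the patent or from any Intel interface.

    // Hypothetical sketch of the claimed preemption flow; not Intel's implementation.
    #include <atomic>
    #include <cstdint>
    #include <thread>
    #include <vector>

    // Hypothetical per-resource context state (registers, counters, etc.).
    struct ContextState {
        std::vector<uint8_t> bytes;
    };

    // Hypothetical processing resource within the compute block.
    struct ProcessingResource {
        std::atomic<bool> running{true};
        ContextState state;

        void stop() { running.store(false, std::memory_order_release); }
        void launch(const ContextState& newState) {
            state = newState;                    // load the new context
            running.store(true, std::memory_order_release);
        }
    };

    // Hypothetical shared memory region coupled to the processing resources.
    struct SharedMemory {
        std::vector<uint8_t> region;
    };

    // Handle a preemption signal from the instruction scheduler: stop the first
    // resource's existing context, then save its state to shared memory in a
    // background thread while the new context begins executing on the resources.
    void preempt(std::vector<ProcessingResource>& resources,
                 SharedMemory& sharedMem,
                 const ContextState& newContext) {
        ProcessingResource& first = resources.front();
        first.stop();                            // halt the existing context

        // Snapshot the existing state before the new context overwrites it.
        ContextState saved = first.state;

        // Copy the saved state to shared memory in parallel with the new context.
        std::thread saver([&sharedMem, saved]() {
            sharedMem.region.assign(saved.bytes.begin(), saved.bytes.end());
        });

        for (auto& r : resources) {
            r.launch(newContext);                // new context begins executing
        }

        saver.join();                            // state save completes in parallel
    }

    int main() {
        std::vector<ProcessingResource> resources(4);
        resources.front().state.bytes.assign(256, 0xAB);  // pretend existing state
        SharedMemory sharedMem;
        ContextState newContext{std::vector<uint8_t>(256, 0xCD)};
        preempt(resources, sharedMem, newContext);
        return sharedMem.region.size() == 256 ? 0 : 1;
    }

In this sketch the snapshot of the existing state is taken before the new context is loaded, so the background copy to shared memory and the new context's execution can proceed concurrently without the save being overwritten, mirroring the claim's "in parallel" limitation.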