US 11,900,665 B2
Graphics neural network processor, method, and system
Barnan Das, Newark, CA (US); Mayuresh M. Varerkar, Folsom, CA (US); Narayan Biswal, Folsom, CA (US); Stanley J. Baran, Chandler, AZ (US); Gokcen Cilingir, San Jose, CA (US); Nilesh V. Shah, Folsom, CA (US); Archie Sharma, Folsom, CA (US); Sherine Abdelhak, Beaverton, OR (US); Praneetha Kotha, Atlanta, GA (US); Neelay Pandit, Beaverton, OR (US); John C. Weast, Portland, OR (US); Mike B. Macpherson, Portland, OR (US); Dukhwan Kim, Mountain View, CA (US); Linda L. Hurd, Cool, CA (US); Abhishek R. Appu, El Dorado Hills, CA (US); Altug Koker, El Dorado Hills, CA (US); and Joydeep Ray, Folsom, CA (US)
Assigned to Intel Corporation, Santa Clara, CA (US)
Filed by Intel Corporation, Santa Clara, CA (US)
Filed on Jul. 25, 2023, as Appl. No. 18/358,067.
Application 18/358,067 is a continuation of application No. 17/966,067, filed on Oct. 14, 2022.
Application 17/966,067 is a continuation of application No. 16/696,854, filed on Nov. 26, 2019, granted, now 11,487,811, issued on Nov. 1, 2022.
Application 16/696,854 is a continuation of application No. 16/123,842, filed on Sep. 6, 2018, granted, now 10,496,697, issued on Dec. 3, 2019.
Application 16/123,842 is a continuation of application No. 15/495,327, filed on Apr. 24, 2017, granted, now 10,108,850, issued on Oct. 23, 2018.
Prior Publication US 2023/0368516 A1, Nov. 16, 2023
Int. Cl. G06V 10/82 (2022.01); G06V 40/10 (2022.01); G06V 10/94 (2022.01); G06V 10/764 (2022.01); G06V 40/20 (2022.01); G06F 16/783 (2019.01); G06F 16/583 (2019.01); G06F 18/2413 (2023.01); G06V 10/10 (2022.01)
CPC G06V 10/82 (2022.01) [G06F 16/5838 (2019.01); G06F 16/784 (2019.01); G06F 18/24143 (2023.01); G06V 10/764 (2022.01); G06V 10/955 (2022.01); G06V 40/10 (2022.01); G06V 40/103 (2022.01); G06V 40/23 (2022.01)] 25 Claims
OG exemplary drawing
 
1. A graphics processor comprising:
a plurality of memory controllers associated with a plurality of memory partitions;
a level-two (L2) cache including a plurality of cache partitions associated with the plurality of memory partitions;
a processing cluster array including a plurality of processing clusters coupled with the plurality of memory controllers, each processing cluster of the plurality of processing clusters including a plurality of streaming multiprocessors, the processing cluster array configured for partitioning into a plurality of partitions, the plurality of partitions including:
a first partition including a first plurality of streaming multiprocessors configured to perform operations for a first neural network, the operations for the first neural network isolated to the first partition; and
a second partition including a second plurality of streaming multiprocessors configured to perform operations for a second neural network, the operations for the second neural network isolated to the second partition and protected from operations performed for the first neural network.
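The partitioning recited in claim 1 can be illustrated with a small software model. This is only an explanatory sketch, not anything disclosed in the patent: the claim recites hardware, and every name below (`ProcessingCluster`, `Partition`, `build_cluster_array`, and so on) is hypothetical. The model builds a processing cluster array of streaming multiprocessors, splits it into two partitions, and checks the claimed isolation property that the two partitions share no streaming multiprocessors.

```python
from dataclasses import dataclass, field

# Hypothetical model of the claimed processing cluster array.
# Names and structure are illustrative only.

@dataclass
class StreamingMultiprocessor:
    sm_id: int

@dataclass
class ProcessingCluster:
    cluster_id: int
    sms: list  # streaming multiprocessors in this cluster

@dataclass
class Partition:
    name: str
    clusters: list = field(default_factory=list)

    def sm_ids(self):
        # All SM identifiers reachable from this partition.
        return {sm.sm_id for c in self.clusters for sm in c.sms}

def build_cluster_array(num_clusters=4, sms_per_cluster=2):
    # Build the cluster array with globally unique SM identifiers.
    clusters, next_id = [], 0
    for cid in range(num_clusters):
        sms = [StreamingMultiprocessor(next_id + i) for i in range(sms_per_cluster)]
        next_id += sms_per_cluster
        clusters.append(ProcessingCluster(cid, sms))
    return clusters

def partition_array(clusters, split):
    # First `split` clusters serve one neural network, the rest serve another;
    # the split point is an arbitrary choice for this sketch.
    return (Partition("first_network", clusters[:split]),
            Partition("second_network", clusters[split:]))

clusters = build_cluster_array()
part_a, part_b = partition_array(clusters, split=2)

# Isolation property from the claim: no SM belongs to both partitions,
# so work for one network cannot touch the other's resources in this model.
assert part_a.sm_ids().isdisjoint(part_b.sm_ids())
```

In actual hardware such isolation would be enforced by the memory controllers and cache partitioning recited in the claim; the disjoint-set check here only mirrors that property at the level of resource assignment.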