US 12,066,945 B2
Dynamic shared cache partition for workload with large code footprint
Prathmesh Kallurkar, Bangalore (IN); Anant Vithal Nori, Bangalore (IN); and Sreenivas Subramoney, Bangalore (IN)
Assigned to Intel Corporation, Santa Clara, CA (US)
Filed by Intel Corporation, Santa Clara, CA (US)
Filed on Dec. 22, 2020, as Appl. No. 17/130,698.
Prior Publication US 2022/0197794 A1, Jun. 23, 2022
Int. Cl. G06F 12/084 (2016.01); G06F 9/50 (2006.01); G06F 12/0811 (2016.01); G06F 12/0846 (2016.01); G06F 12/0871 (2016.01)
CPC G06F 12/084 (2013.01) [G06F 9/5016 (2013.01); G06F 12/0811 (2013.01); G06F 12/0848 (2013.01); G06F 12/0871 (2013.01)] 18 Claims
OG exemplary drawing
 
1. An integrated circuit, comprising:
a core;
a first core cache memory at a first cache level, the first core cache memory coupled to the core;
a shared core cache memory at a second cache level, the shared core cache memory coupled to the core;
a first cache controller coupled to the core and communicatively coupled to the first core cache memory;
a second cache controller coupled to the core and communicatively coupled to the shared core cache memory; and
circuitry coupled to the core and communicatively coupled to the first cache controller and the second cache controller to:
determine if a workload has a large code footprint, comprising the circuitry to determine if, after a counted first number of misses at the first cache level exceeds a threshold, a counted second number of code misses at the first cache level exceeds a counted third number of data misses at the first cache level, and, if so determined,
partition N ways of the shared core cache memory into first and second chunks of ways with the first chunk of M ways reserved for code cache lines from the workload and the second chunk of N minus M ways reserved for data cache lines from the workload, where N and M are positive integer values and N minus M is greater than zero.
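The decision logic of claim 1 can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the structure, function names, counter fields, and the particular value of M are all assumptions introduced for clarity.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the claim-1 logic: once total L1 misses exceed
 * a threshold, compare counted code misses against counted data misses;
 * if code misses dominate, reserve M of the shared cache's N ways for
 * code lines and the remaining N - M ways for data lines. All names
 * here are illustrative, not taken from the patent. */

typedef struct {
    unsigned long code_misses;  /* the counted second number (code misses)  */
    unsigned long data_misses;  /* the counted third number (data misses)   */
} l1_miss_counters;

/* Returns true when the workload is classified as having a large code
 * footprint per the claimed test: total misses (the counted first
 * number) exceed the threshold AND code misses exceed data misses. */
static bool has_large_code_footprint(const l1_miss_counters *c,
                                     unsigned long threshold)
{
    unsigned long total = c->code_misses + c->data_misses;
    return total > threshold && c->code_misses > c->data_misses;
}

/* Partition N shared-cache ways into a first chunk of M ways (code)
 * and a second chunk of N - M ways (data). Requires 0 < M < N so that
 * N minus M is greater than zero, as the claim recites. */
static void partition_ways(unsigned n, unsigned m,
                           unsigned *code_ways, unsigned *data_ways)
{
    assert(m > 0 && m < n);
    *code_ways = m;
    *data_ways = n - m;
}
```

For example, with 600 code misses and 400 data misses against a threshold of 500, the sketch classifies the workload as code-heavy; a 16-way shared cache might then be split into 10 code ways and 6 data ways (the 10/6 split is arbitrary here; the patent only requires both chunks to be non-empty).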