US 12,216,583 B1
Macro-op cache data entry pointers distributed as initial pointers held in tag array and next pointers held in data array for efficient and performant variable length macro-op cache entries
John G. Favor, San Francisco, CA (US); and Michael N. Michael, Folsom, CA (US)
Assigned to Ventana Micro Systems Inc., Cupertino, CA (US)
Filed by Ventana Micro Systems Inc., Cupertino, CA (US)
Filed on Oct. 13, 2023, as Appl. No. 18/380,152.
Application 18/380,152 is a continuation-in-part of application No. 18/240,249, filed on Aug. 30, 2023.
Int. Cl. G06F 12/08 (2016.01); G06F 12/0864 (2016.01)
CPC G06F 12/0864 (2013.01) [G06F 2212/452 (2013.01)] 27 Claims
OG exemplary drawing
 
1. A microprocessor, comprising:
a macro-op (MOP) cache (MOC) comprising:
a MOC tag RAM (MTR) arranged as a set-associative cache of MTR entries; and
a MOC data RAM (MDR) managed as a pool of MDR entries;
wherein a MOC entry (ME) comprises:
one MTR entry; and
one or more MDR entries that hold the MOPs of the ME, wherein the MDR entries of the ME have a program order;
wherein each MDR entry is configured to hold:
one or more MOPs of the ME; and
a next MDR entry pointer;
wherein each MTR entry is configured to hold:
a length that specifies the number of the MDR entries of the ME; and
one or more initial MDR entry pointers;
wherein the MOC is configured to:
during allocation of an ME into the MOC, populate the one or more initial MDR entry pointers and the next MDR entry pointers to point to the one or more MDR entries of the ME based on the program order; and
in response to an access of the MOC that hits upon the MTR entry of an ME, fetch the MDR entries of the ME according to the program order:
initially using the one or more initial MDR entry pointers of the hit-upon MTR entry; and
subsequently using the next MDR entry pointers of fetched MDR entries of the ME, until all the MDR entries of the ME have been fetched from the MDR.
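
The following C sketch is offered only as a behavioral illustration of the pointer scheme recited in claim 1, not as the patented implementation: the tag array (MTR) entry holds an initial MDR entry pointer and a length, each data array (MDR) entry holds a next MDR entry pointer, allocation chains free MDR entries in program order, and a hit walks the chain from the initial pointer until the recited length is exhausted. All names, array sizes, MOP encodings, and the use of a single initial pointer per MTR entry are assumptions chosen for readability.

/*
 * Minimal behavioral sketch of the claimed pointer scheme (illustrative only).
 * NUM_MDR_ENTRIES, MOPS_PER_MDR, and all field choices are assumptions.
 */
#include <stdio.h>
#include <stdbool.h>

#define NUM_MDR_ENTRIES 8   /* size of the MDR pool (assumed) */
#define MOPS_PER_MDR    3   /* MOPs held per MDR entry (assumed) */

/* One MDR entry: a few MOPs plus the next MDR entry pointer of the ME. */
typedef struct {
    unsigned mops[MOPS_PER_MDR]; /* stand-in for decoded macro-ops */
    int next;                    /* next MDR entry pointer (-1 = none) */
    bool in_use;
} mdr_entry_t;

/* One MTR entry: tag plus the initial MDR entry pointer and the ME length. */
typedef struct {
    unsigned tag;     /* fetch-address tag (assumed form) */
    int initial;      /* initial MDR entry pointer */
    unsigned length;  /* number of MDR entries belonging to the ME */
    bool valid;
} mtr_entry_t;

static mdr_entry_t mdr[NUM_MDR_ENTRIES];

/* Allocate an ME: claim free MDR entries, chain them in program order,
 * then record the first pointer and the length in the MTR entry.
 * (A real allocator would reclaim partially claimed entries on failure.) */
static bool allocate_me(mtr_entry_t *mtr, unsigned tag,
                        const unsigned *mops, unsigned num_mdr_needed)
{
    int first = -1, prev = -1;
    unsigned filled = 0;

    for (int i = 0; i < NUM_MDR_ENTRIES && filled < num_mdr_needed; i++) {
        if (mdr[i].in_use)
            continue;
        mdr[i].in_use = true;
        mdr[i].next = -1;
        for (unsigned m = 0; m < MOPS_PER_MDR; m++)
            mdr[i].mops[m] = mops[filled * MOPS_PER_MDR + m];
        if (prev >= 0)
            mdr[prev].next = i;  /* next pointer lives in the data array */
        else
            first = i;           /* initial pointer lives in the tag array */
        prev = i;
        filled++;
    }
    if (filled < num_mdr_needed)
        return false;            /* not enough free MDR entries */

    mtr->tag = tag;
    mtr->initial = first;
    mtr->length = num_mdr_needed;
    mtr->valid = true;
    return true;
}

/* On a hit, walk the ME: start at the initial pointer from the MTR entry,
 * then follow next pointers until 'length' MDR entries have been fetched. */
static void fetch_me(const mtr_entry_t *mtr)
{
    int idx = mtr->initial;
    for (unsigned n = 0; n < mtr->length && idx >= 0; n++) {
        printf("MDR[%d]:", idx);
        for (unsigned m = 0; m < MOPS_PER_MDR; m++)
            printf(" %u", mdr[idx].mops[m]);
        printf("\n");
        idx = mdr[idx].next;
    }
}

int main(void)
{
    unsigned mops[2 * MOPS_PER_MDR] = {10, 11, 12, 13, 14, 15};
    mtr_entry_t mtr = {0};
    if (allocate_me(&mtr, 0x1234, mops, 2))
        fetch_me(&mtr);
    return 0;
}

As the title suggests, distributing the pointers this way keeps each set-associative tag entry small (an initial pointer plus a length) while still permitting a variable number of data entries per MOC entry, since each subsequent data location is discovered as the previous MDR entry is read.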