US 12,130,736 B2
System and method for sharing a cache line between non-contiguous memory areas
Dan Shechter, Tel Aviv (IL); and Elad Raz, Ramat Gan (IL)
Assigned to Next Silicon Ltd, Givatayim (IL)
Filed by Next Silicon Ltd, Givatayim (IL)
Filed on Aug. 7, 2023, as Appl. No. 18/230,689.
Application 18/230,689 is a continuation of application No. 17/553,931, filed on Dec. 17, 2021, granted, now 11,720,491.
Prior Publication US 2023/0393979 A1, Dec. 7, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 12/06 (2006.01); G06F 12/0895 (2016.01)
CPC G06F 12/0607 (2013.01) [G06F 12/0895 (2013.01); G06F 2212/1021 (2013.01)] 21 Claims
OG exemplary drawing
 
1. A method for caching memory, comprising:
caching, in a cache memory for accessing a physical memory area connected to at least one hardware processor, the cache memory comprising a plurality of cache lines each having a cache line amount of bits, at least two data values, each of one of at least two ranges of application memory addresses, each associated with one of a set of execution threads having an identified order of threads and executed by the at least one hardware processor, by:
organizing a plurality of sequences of consecutive address sub-ranges, each sequence associated with one of the set of execution threads and consisting of a consecutive sequence of application memory address sub-ranges of the respective range of application memory addresses associated with the execution thread, each application memory address sub-range having an identified amount of memory bits less than the amount of cache line bits, in an interleaved sequence of address sub-ranges by alternately selecting, for each execution thread in the identified order of threads, a next address sub-range in the respective sequence of address sub-ranges associated therewith;
generating a mapping of the interleaved sequence of address sub-ranges to a range of physical memory addresses in order of the interleaved sequence of address sub-ranges; and
when an execution thread of the set of execution threads accesses an application memory address of the respective range of application memory addresses associated therewith:
storing the at least two data values in one cache line of the plurality of cache lines by accessing the physical memory area according to the mapping using the application memory address.
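The claimed steps can be illustrated with a minimal sketch: sub-ranges from each thread's application address range are selected alternately in thread order, and the resulting interleaved sequence is mapped to consecutive physical addresses, so that sub-ranges from different threads land in the same cache line. All names, sizes, and addresses below are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch of the claimed interleaving, not the patented
# implementation. Sizes and addresses are assumed for the example.

CACHE_LINE_BITS = 512   # assumed 64-byte cache line
SUB_RANGE_BITS = 256    # assumed sub-range size, less than a cache line

def interleave_sub_ranges(thread_ranges):
    """thread_ranges: one list of consecutive application sub-range start
    addresses per thread, given in the identified order of threads.
    Alternately selects the next sub-range of each thread in that order."""
    interleaved = []
    for group in zip(*thread_ranges):
        interleaved.extend(group)
    return interleaved

def build_mapping(interleaved, phys_base=0,
                  sub_range_bytes=SUB_RANGE_BITS // 8):
    """Maps each sub-range, in interleaved order, to a physical address."""
    return {sub: phys_base + i * sub_range_bytes
            for i, sub in enumerate(interleaved)}

# Two threads, each with two consecutive 32-byte sub-ranges:
t0 = [0x1000, 0x1020]   # thread 0 application sub-range starts (assumed)
t1 = [0x8000, 0x8020]   # thread 1 application sub-range starts (assumed)
mapping = build_mapping(interleave_sub_ranges([t0, t1]))
# t0[0] maps to physical offset 0 and t1[0] to offset 32, so one
# 64-byte cache line holds a data value from each of the two threads.
```

When a thread then accesses an application address, the access goes through this mapping, and the cache line fetched for one thread's value also carries the other thread's neighboring value, which is the sharing effect the claim describes.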