US 12,174,739 B2
Method and apparatus to use DRAM as a cache for slow byte-addressable memory for efficient cloud applications
Yao Zu Dong, Shanghai (CN); Kun Tian, Shanghai (CN); Fengguang Wu, Yunnan (CN); and Jingqi Liu, Shanghai (CN)
Assigned to Intel Corporation, Santa Clara, CA (US)
Filed by Intel Corporation, Santa Clara, CA (US)
Filed on Dec. 21, 2023, as Appl. No. 18/392,310.
Application 18/392,310 is a continuation of application No. 17/695,788, filed on Mar. 15, 2022, granted, now 11,921,632.
Application 17/695,788 is a continuation of application No. 17/255,886, granted, now 11,307,985, issued on Apr. 19, 2022, previously published as PCT/CN2018/108206, filed on Sep. 28, 2018.
Prior Publication US 2024/0126695 A1, Apr. 18, 2024
Int. Cl. G06F 12/08 (2016.01); G06F 3/06 (2006.01); G06F 12/0802 (2016.01)
CPC G06F 12/0802 (2013.01) [G06F 3/0604 (2013.01); G06F 3/0647 (2013.01); G06F 3/0667 (2013.01); G06F 3/0673 (2013.01); G06F 2212/651 (2013.01)] 21 Claims
OG exemplary drawing
 
1. A processor, comprising:
a multi-chip package comprising a plurality of integrated circuit (IC) chips, the IC chips including:
a plurality of cores to execute instructions;
interface circuitry to couple the plurality of cores to a plurality of memories including a first memory and a second memory, the first memory associated with a faster access speed than the second memory;
one or more cores of the plurality of cores to translate a plurality of guest virtual addresses to a corresponding plurality of guest physical addresses based on a first set of page tables and to translate the plurality of guest physical addresses to a corresponding plurality of host physical addresses based on a second set of page tables, a first portion of the plurality of host physical addresses associated with a corresponding first plurality of memory pages in the first memory and a second portion of the plurality of host physical addresses associated with a corresponding second plurality of memory pages from the second memory;
a translation lookaside buffer (TLB) to cache translations associated with the corresponding plurality of host physical addresses; and
circuitry operable, at least in part, in accordance with executable code, to migrate a group of memory pages of the second plurality of memory pages from the second memory to the first memory, and to flush from the TLB one or more translations corresponding to the group of memory pages being migrated.
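The mechanism recited in claim 1 can be illustrated with a small software model. The sketch below is purely hypothetical and not the patented implementation: all class and variable names are invented. It models the two-stage translation (guest virtual page to guest physical page via a first set of page tables, then guest physical page to host physical page via a second set), a TLB caching the combined translations, and migration of a group of pages to new host frames (e.g. from the slower second memory to the faster first memory) followed by a flush of the TLB entries made stale by the migration.

```python
# Hypothetical model of the claim-1 flow; names and structure are
# illustrative assumptions, not the patent's implementation.

PAGE = 4096  # assumed page size

class TwoStageMMU:
    def __init__(self, guest_pt, host_pt):
        self.guest_pt = guest_pt  # first set of page tables: GVA page -> GPA page
        self.host_pt = host_pt    # second set of page tables: GPA page -> HPA page
        self.tlb = {}             # caches combined GVA page -> HPA page translations

    def translate(self, gva):
        """Translate a guest virtual address to a host physical address."""
        vpn, off = divmod(gva, PAGE)
        if vpn not in self.tlb:
            gpn = self.guest_pt[vpn]           # stage 1: GVA -> GPA
            self.tlb[vpn] = self.host_pt[gpn]  # stage 2: GPA -> HPA, cached
        return self.tlb[vpn] * PAGE + off

    def migrate(self, gpns, remap):
        """Remap a group of guest-physical pages to new host frames
        (e.g. slow memory -> fast memory) and flush stale TLB entries."""
        for gpn in gpns:
            self.host_pt[gpn] = remap[gpn]
        # Flush only the TLB entries whose translation passed through
        # one of the migrated pages.
        for vpn, gpn in list(self.guest_pt.items()):
            if gpn in gpns:
                self.tlb.pop(vpn, None)
```

Usage of the model: after `migrate`, the next `translate` of an affected address misses the TLB and picks up the new host frame, mirroring the claimed flush of translations corresponding to the migrated group of pages.

```python
mmu = TwoStageMMU(guest_pt={0: 7}, host_pt={7: 100})
mmu.translate(0x10)       # fills the TLB via host frame 100 (slow memory)
mmu.migrate({7}, {7: 5})  # move GPN 7 to host frame 5 (fast memory)
mmu.translate(0x10)       # re-walks both stages, now resolves to frame 5
```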