US 12,298,909 B2
Memory prefetch based on machine learning
Chao Sun, San Jose, CA (US); Qingbo Wang, Irvine, CA (US); Minghai Qin, Fremont, CA (US); Jaco Hofmann, Santa Clara, CA (US); Anand Kulkarni, San Jose, CA (US); Dejan Vucinic, San Jose, CA (US); and Zvonimir Bandic, San Jose, CA (US)
Assigned to Sandisk Technologies, Inc., Milpitas, CA (US)
Filed by Sandisk Technologies, Inc., Milpitas, CA (US)
Filed on Aug. 8, 2023, as Appl. No. 18/231,730.
Claims priority of provisional application 63/430,949, filed on Dec. 7, 2022.
Prior Publication US 2024/0193088 A1, Jun. 13, 2024
Int. Cl. G06F 12/0862 (2016.01); G06N 20/00 (2019.01)
CPC G06F 12/0862 (2013.01) [G06N 20/00 (2019.01)] 19 Claims
OG exemplary drawing
 
1. A memory device, comprising:
a first memory configured to store data;
a second memory configured to cache data stored in the first memory; and
at least one controller, individually or in combination, configured to:
receive page fault information from a host, wherein the page fault information results from a request by the host for data that is stored in the first memory but is not cached in the second memory when requested by the host;
use the received page fault information for one or more inputs into a prefetch model trained by Machine Learning (ML) to generate at least one inference;
based at least in part on the at least one inference, cache prefetch data in the second memory that is stored in the first memory;
receive Operating System (OS) metadata from an OS of the host including at least one of a page fault rate, a plurality of timestamps indicating occurrences of page faults, and resource usage information; and
based on the received OS metadata, determine at least one of an amount of data to prefetch from the first memory to cache in the second memory and when to prefetch data from the first memory to cache in the second memory.
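The claimed flow can be summarized as: the controller collects page fault information from the host, feeds it as features into an ML-trained prefetch model, and uses the model's inference (gated by OS metadata such as page fault rate and resource usage) to decide what to cache in the second memory and how much. The sketch below is purely illustrative: the class names, the stride-averaging "model" standing in for a trained ML model, and the depth policy driven by CPU usage and fault rate are assumptions for exposition, not the patent's actual implementation.

```python
# Illustrative sketch of the claimed prefetch flow. The stride-averaging
# PrefetchModel is a stand-in for a model trained offline by ML; all names
# and policies here are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class PageFaultInfo:
    faulting_page: int   # page whose access missed the cache (second memory)
    timestamp: float     # when the fault occurred


@dataclass
class OSMetadata:
    page_fault_rate: float  # faults per second, as reported by the host OS
    cpu_usage: float        # 0.0-1.0 host resource usage


class PrefetchModel:
    """Stand-in for an ML-trained model; here just a fixed-weight stride predictor."""

    def __init__(self, stride_weight: float = 1.0):
        self.stride_weight = stride_weight

    def infer(self, fault_history: List[PageFaultInfo]) -> int:
        # Inference: estimate the dominant stride between consecutive
        # faulting pages and predict the next page as (last page + stride).
        if len(fault_history) < 2:
            return fault_history[-1].faulting_page + 1
        strides = [b.faulting_page - a.faulting_page
                   for a, b in zip(fault_history, fault_history[1:])]
        stride = round(self.stride_weight * (sum(strides) / len(strides)))
        return fault_history[-1].faulting_page + (stride or 1)


class MemoryDeviceController:
    """Controller that prefetches from first memory into the second-memory cache."""

    def __init__(self, model: PrefetchModel):
        self.model = model
        self.fault_history: List[PageFaultInfo] = []
        self.cache: set = set()  # pages currently held in the second memory

    def on_page_fault(self, info: PageFaultInfo, meta: OSMetadata) -> List[int]:
        self.fault_history.append(info)
        predicted = self.model.infer(self.fault_history)
        # OS metadata determines *how much* to prefetch (an assumed policy):
        # back off to a single page when the host is busy, otherwise scale
        # the prefetch depth with the reported page fault rate.
        if meta.cpu_usage > 0.8:
            depth = 1
        else:
            depth = max(1, int(meta.page_fault_rate // 100))
        prefetched = [predicted + i for i in range(depth)]
        self.cache.update(prefetched)
        return prefetched
```

For example, after faults on pages 10 and 12 the stride predictor infers page 14 next, and a moderate fault rate with low CPU usage widens the prefetch to several consecutive pages.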