US 12,455,841 B2
Adaptive fabric allocation for local and remote emerging memories based prediction schemes
Benjamin Graniello, Gilbert, AZ (US); Francesc Guim Bernat, Barcelona (ES); Karthik Kumar, Chandler, AZ (US); and Thomas Willhalm, Sandhausen (DE)
Assigned to Intel Corporation, Santa Clara, CA (US)
Filed by Intel Corporation, Santa Clara, CA (US)
Filed on Sep. 22, 2023, as Appl. No. 18/371,513.
Application 18/371,513 is a continuation of application No. 16/721,706, filed on Dec. 19, 2019, granted, now 11,789,878.
Prior Publication US 2024/0086341 A1, Mar. 14, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 3/06 (2006.01); G06F 9/50 (2006.01); G06F 11/30 (2006.01); G06F 12/02 (2006.01); G06F 13/16 (2006.01); G06F 15/78 (2006.01)
CPC G06F 13/1663 (2013.01) [G06F 3/061 (2013.01); G06F 3/0635 (2013.01); G06F 3/067 (2013.01); G06F 3/0685 (2013.01); G06F 9/5016 (2013.01); G06F 11/3037 (2013.01); G06F 12/0246 (2013.01); G06F 13/1678 (2013.01); G06F 15/7807 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
coupling a compute host to a memory device via an interconnect structure including one or more interconnect segments including an input/output (I/O) link comprising a Compute Express Link (CXL) flex bus or a memory channel with a plurality of reconfigurable upstream lanes and downstream lanes, the memory device including volatile memory comprising a majority of storage capacity for the memory device;
performing memory reads and writes initiated by the host to read data from the memory device and write data to the memory device via the interconnect structure;
at the memory device,
monitoring memory read and memory write traffic transferred via the I/O link;
predicting, based on the monitored memory read and memory write traffic, expected read and write bandwidths for the I/O link; and
dynamically reconfiguring the plurality of upstream lanes and downstream lanes for the I/O link based on expected memory read and write bandwidths for the I/O link.
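For illustration only, the following is a minimal C sketch of the monitor/predict/reconfigure loop recited in claim 1. It assumes an exponentially weighted moving average as the prediction scheme and a fixed 16-lane budget split between upstream (read-data) and downstream (write-data) directions; the patent does not specify a particular predictor or lane count, and all identifiers here (bw_monitor_sample, link_set_lanes, adaptive_lane_tick, etc.) are hypothetical firmware hooks, not names from the patent.

    /*
     * Illustrative sketch only, not the patented implementation.
     * Assumptions: EWMA bandwidth prediction, 16 reconfigurable lanes,
     * reads carried on upstream lanes and writes on downstream lanes.
     */
    #include <stdint.h>

    #define TOTAL_LANES   16u   /* assumed reconfigurable lane budget        */
    #define MIN_LANES      1u   /* keep at least one lane per direction      */
    #define EWMA_SHIFT     3u   /* smoothing factor alpha = 1/8              */

    struct lane_state {
        uint64_t pred_rd_bw;    /* predicted read (upstream) bandwidth       */
        uint64_t pred_wr_bw;    /* predicted write (downstream) bandwidth    */
        uint32_t up_lanes;      /* lanes currently assigned upstream         */
        uint32_t down_lanes;    /* lanes currently assigned downstream       */
    };

    /* Hypothetical hooks into the device's traffic counters and link layer. */
    extern void bw_monitor_sample(uint64_t *rd_bytes, uint64_t *wr_bytes);
    extern void link_set_lanes(uint32_t up_lanes, uint32_t down_lanes);

    /* Predict: blend the latest observed traffic into a moving average,
     * i.e. pred = pred * 7/8 + sample * 1/8. */
    static void predict(struct lane_state *s, uint64_t rd, uint64_t wr)
    {
        s->pred_rd_bw = s->pred_rd_bw - (s->pred_rd_bw >> EWMA_SHIFT)
                        + (rd >> EWMA_SHIFT);
        s->pred_wr_bw = s->pred_wr_bw - (s->pred_wr_bw >> EWMA_SHIFT)
                        + (wr >> EWMA_SHIFT);
    }

    /* Reconfigure: split the lane budget in proportion to the predicted
     * read/write bandwidths, then push the new split to the link layer. */
    static void reconfigure_lanes(struct lane_state *s)
    {
        uint64_t total = s->pred_rd_bw + s->pred_wr_bw;
        uint32_t up;

        if (total == 0)
            return;                       /* no traffic: keep current split */

        up = (uint32_t)((s->pred_rd_bw * TOTAL_LANES) / total);
        if (up < MIN_LANES)
            up = MIN_LANES;
        if (up > TOTAL_LANES - MIN_LANES)
            up = TOTAL_LANES - MIN_LANES;

        if (up != s->up_lanes) {
            s->up_lanes   = up;
            s->down_lanes = TOTAL_LANES - up;
            link_set_lanes(s->up_lanes, s->down_lanes);
        }
    }

    /* Called once per monitoring interval by the device firmware:
     * monitor traffic, update the prediction, reassign lanes. */
    void adaptive_lane_tick(struct lane_state *s)
    {
        uint64_t rd_bytes, wr_bytes;

        bw_monitor_sample(&rd_bytes, &wr_bytes);
        predict(s, rd_bytes, wr_bytes);
        reconfigure_lanes(s);
    }

In this sketch the proportional split is only one possible policy; a real device could equally apply hysteresis or quantize to the lane widths its link actually supports, which the claim leaves open.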