US 11,934,316 B2
Controlling cache size and priority using machine learning techniques
Shanmugasundaram Alagumuthu, San Jose, CA (US)
Assigned to PayPal, Inc., San Jose, CA (US)
Filed by PAYPAL, INC., San Jose, CA (US)
Filed on Oct. 29, 2021, as Appl. No. 17/515,109.
Application 17/515,109 is a continuation of application No. 17/101,689, filed on Nov. 23, 2020, granted, now Pat. No. 11,200,173.
Application 17/101,689 is a continuation of application No. 16/230,851, filed on Dec. 21, 2018, granted, now Pat. No. 10,846,227, issued on Nov. 24, 2020.
Prior Publication US 2022/0050783 A1, Feb. 17, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 12/0871 (2016.01); G06F 12/0891 (2016.01); G06N 20/00 (2019.01)
CPC G06F 12/0871 (2013.01) [G06F 12/0891 (2013.01); G06N 20/00 (2019.01); G06F 2212/6026 (2013.01)] 20 Claims
OG exemplary drawing
 
1. An apparatus, comprising:
one or more processing elements configured to:
cache, in a software cache that is implemented using one or more hardware storage elements, data for a plurality of different user accounts;
generate, based on a first control value that specifies a size of the cache and a first set of time-to-live values for entries in the cache that are output by a machine learning module, a simulated hit rate and simulated read access times for the cache;
modify the machine learning module based on the simulated hit rate and the simulated read access times;
generate, using the modified machine learning module and based on access patterns to the cache:
a second control value that specifies a size of the cache; and
a second set of time-to-live values for entries in the cache;
control a size of the cache based on the second control value; and
evict data from the cache based on the second set of time-to-live values.
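The Python sketch below is illustrative only, not the patented implementation; the class and function names (TtlCache, simulate, TtlModel) and the simple size/TTL update heuristic are assumptions introduced for exposition. It mirrors the loop recited in claim 1: a model emits a cache-size control value and per-entry time-to-live values, a simulation of the cache under those settings yields a hit rate and read access times, the model is adjusted from those simulated metrics, and the adjusted model's second outputs then set the live cache's size and eviction TTLs.

import time
from collections import Counter, OrderedDict


class TtlCache:
    """Software cache with a size cap and per-entry time-to-live (TTL) eviction."""

    def __init__(self, max_size, default_ttl):
        self.max_size = max_size
        self.default_ttl = default_ttl
        self._store = OrderedDict()  # key -> (value, expiration timestamp)

    def put(self, key, value, ttl=None):
        expires_at = time.monotonic() + (ttl if ttl is not None else self.default_ttl)
        self._store[key] = (value, expires_at)
        self._store.move_to_end(key)
        while len(self._store) > self.max_size:  # enforce the size control value
            self._store.popitem(last=False)      # drop the least recently used entry

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:       # evict on expired TTL
            del self._store[key]
            return None
        self._store.move_to_end(key)
        return value


def simulate(trace, size, ttls, hit_cost=0.1, miss_cost=5.0):
    """Replay an account-access trace against a cache configured with the model's
    outputs; return the simulated hit rate and mean read access time."""
    cache, hits, total_time = TtlCache(size, 60.0), 0, 0.0
    for account in trace:
        if cache.get(account) is not None:
            hits += 1
            total_time += hit_cost
        else:
            total_time += miss_cost
            cache.put(account, object(), ttl=ttls.get(account, 60.0))
    return hits / len(trace), total_time / len(trace)


class TtlModel:
    """Toy stand-in for the machine learning module: proposes a cache-size control
    value plus per-account TTLs, and is adjusted from the simulated metrics."""

    def __init__(self, size=128, base_ttl=60.0):
        self.size = size
        self.base_ttl = base_ttl

    def propose(self, access_counts):
        # Hotter accounts get longer TTLs; the control value is the current size.
        ttls = {acct: self.base_ttl * (1 + count / 10.0)
                for acct, count in access_counts.items()}
        return self.size, ttls

    def update(self, hit_rate, mean_read_time, target_hit_rate=0.9):
        # Crude stand-in for a training step: grow the cache and lengthen TTLs
        # while the simulated hit rate falls short of the target.
        if hit_rate < target_hit_rate:
            self.size = int(self.size * 1.25)
            self.base_ttl *= 1.1


if __name__ == "__main__":
    trace = ["acct_%d" % (i % 500) for i in range(2000)]  # synthetic access pattern
    model = TtlModel()
    size, ttls = model.propose(Counter(trace))            # first control value + TTLs
    hit_rate, read_time = simulate(trace, size, ttls)     # simulated hit rate / read times
    model.update(hit_rate, read_time)                     # modify the "ML module"
    size, ttls = model.propose(Counter(trace))            # second control value + TTLs
    live_cache = TtlCache(size, model.base_ttl)           # control the real cache's size
    print("simulated hit rate %.2f, new cache size %d" % (hit_rate, size))

In the claim, the module is a machine learning module trained from simulated hit rates, simulated read access times, and observed access patterns; the threshold-based update above merely stands in for that training step to keep the sketch self-contained.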