US 11,989,670 B1
System and methods for preemptive caching
Gunjan C. Vijayvergia, San Antonio, TX (US); Anand Shah, Helotes, TX (US); Alan David Chase, Boerne, TX (US); Anil Sanghubattla, San Antonio, TX (US); and Andrew P. Jamison, San Antonio, TX (US)
Assigned to United Services Automobile Association (USAA), San Antonio, TX (US)
Filed by UIPCO, LLC, San Antonio, TX (US)
Filed on Nov. 8, 2021, as Appl. No. 17/520,796.
Claims priority of provisional application 63/111,426, filed on Nov. 9, 2020.
Int. Cl. G06Q 10/00 (2023.01); G06F 12/0802 (2016.01); G06Q 10/0631 (2023.01); G06Q 30/0201 (2023.01)
CPC G06Q 10/0631 (2013.01) [G06F 12/0802 (2013.01); G06F 2212/60 (2013.01); G06Q 30/0201 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A system for improving the delivery of data to a first computing device associated with a first customer of a financial institution comprising:
at least one server in the financial institution housing a machine learning module;
the machine learning module receiving customer profile data from a customer profile database, customer financial accounts data from a customer financial accounts database, and customer app history data from a customer app history database;
each of the customer profile data, the customer financial accounts data, and the customer app history data including data associated with the first customer and also including data associated with a plurality of remote customers associated with the financial institution;
wherein the machine learning module is configured to determine usage patterns among the customer profile data, the customer financial accounts data, and the customer app history data, and to develop rules based upon the usage patterns;
wherein the machine learning module is configured to apply the rules in order to transmit predicted data, that the machine learning module anticipates the first customer might request, to a cache when a likelihood of the predicted data being used by the first customer exceeds a confidence standard; and
the confidence standard is determined based on a first amount of computing resources needed to fulfill a request made by the first customer and based on a second amount of computing resources needed to place the predicted data into the cache.
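The final clause of the claim describes a confidence standard derived from two resource measurements: the cost of fulfilling a request on demand and the cost of placing predicted data into the cache. The following is a minimal, hypothetical sketch of that decision logic; the function names (`confidence_standard`, `should_precache`), the particular threshold formula, and the numeric cost units are illustrative assumptions, not taken from the patent.

```python
def confidence_standard(fulfill_cost: float, caching_cost: float) -> float:
    """Hypothetical threshold: rises as the cost of caching grows
    relative to the cost of fulfilling the request on demand.

    fulfill_cost -- computing resources needed to serve the request live
    caching_cost -- computing resources needed to place predicted data
                    into the cache ahead of time
    """
    return caching_cost / (caching_cost + fulfill_cost)


def should_precache(likelihood: float,
                    fulfill_cost: float,
                    caching_cost: float) -> bool:
    """Transmit predicted data to the cache only when the estimated
    likelihood of use exceeds the confidence standard."""
    return likelihood > confidence_standard(fulfill_cost, caching_cost)
```

Under this illustrative formula, cheap-to-cache, expensive-to-serve data is preemptively cached even at modest likelihoods, while expensive-to-cache data requires near-certainty, which is one plausible reading of conditioning the confidence standard on both resource amounts.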