US 11,954,569 B2
Techniques for parallel model training
Karthick Abiraman, Plano, TX (US); Bing Liu, Frisco, TX (US); Saranya Thangaraj, Frisco, TX (US); Paul Ponce Portugal, Plano, TX (US); and Sang Jin Park, Allen, TX (US)
Assigned to Capital One Services, LLC, McLean, VA (US)
Filed by Capital One Services, LLC, McLean, VA (US)
Filed on Aug. 3, 2022, as Appl. No. 17/880,239.
Application 17/880,239 is a continuation of application No. 16/845,597, filed on Apr. 10, 2020, granted, now Pat. No. 11,436,533.
Prior Publication US 2022/0374777 A1, Nov. 24, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06N 20/00 (2019.01)
CPC G06N 20/00 (2019.01)
20 Claims
OG exemplary drawing
 
1. An apparatus, comprising:
at least one processor; and
a memory coupled to the at least one processor, the memory comprising instructions that, when executed by the at least one processor, cause the at least one processor to perform a parallel model training process operative to:
access a plurality of model specifications for a plurality of computational models,
provide each of the plurality of model specifications associated with at least one model event to one of a plurality of serverless computing clusters, each of the plurality of serverless computing clusters operative to generate model data for each of the plurality of model specifications, and
perform parallel training of each of the plurality of computational models by, via one of the plurality of serverless computing clusters:
initiating an instance of a cloud-computing resource for each of the plurality of computational models, the instance of the cloud-computing resource to generate a trained model specification based on training one of the plurality of computational models.
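The claim recites a fan-out architecture: each model specification is dispatched to its own serverless compute worker, which spins up a resource instance that trains the model and emits a trained specification. The following is a minimal, hypothetical Python sketch of that flow for illustration only; it is not the patented implementation. `ModelSpec`, `train_one_model`, and `parallel_train` are invented names, and a `ThreadPoolExecutor` merely stands in for the plurality of serverless computing clusters and cloud-computing resource instances.

```python
# Hypothetical sketch of the claimed parallel training flow.
# ThreadPoolExecutor stands in for the serverless computing clusters;
# train_one_model stands in for the cloud-computing resource instance
# that produces a trained model specification.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field


@dataclass
class ModelSpec:
    """A model specification associated with at least one model event."""
    model_id: str
    hyperparams: dict
    events: list = field(default_factory=list)


def train_one_model(spec: ModelSpec) -> dict:
    """Stand-in for one cloud-computing resource instance: 'trains' the
    model described by spec and returns a trained model specification."""
    # Placeholder training step; a real instance would fit the model
    # to data associated with the spec's model events.
    trained_weights = {name: 0.0 for name in spec.hyperparams}
    return {"model_id": spec.model_id, "weights": trained_weights}


def parallel_train(specs: list[ModelSpec]) -> list[dict]:
    """Dispatch each specification to its own worker so that all
    computational models are trained in parallel."""
    with ThreadPoolExecutor(max_workers=len(specs)) as pool:
        return list(pool.map(train_one_model, specs))


if __name__ == "__main__":
    specs = [
        ModelSpec("model-a", {"lr": 0.01}, events=["daily-refresh"]),
        ModelSpec("model-b", {"lr": 0.10}, events=["daily-refresh"]),
    ]
    for trained in parallel_train(specs):
        print(trained)
```

In this sketch the one-worker-per-specification mapping mirrors the claim's one-instance-per-model limitation; a thread pool is used purely so the example is self-contained and runnable, whereas the claim contemplates serverless clusters provisioning cloud resources.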