US 12,443,887 B2
Automated model generation platform for recursive model building
Maharaj Mukherjee, Poughkeepsie, NY (US)
Assigned to Bank of America Corporation, Charlotte, NC (US)
Filed by Bank of America Corporation, Charlotte, NC (US)
Filed on Jun. 18, 2024, as Appl. No. 18/746,148.
Application 18/746,148 is a continuation of application No. 18/113,144, filed on Feb. 23, 2023, granted, now Pat. No. 12,050,973.
Application 18/113,144 is a continuation of application No. 16/795,852, filed on Feb. 20, 2020, granted, now Pat. No. 11,631,031, issued on Apr. 18, 2023.
Prior Publication US 2024/0338606 A1, Oct. 10, 2024
Int. Cl. G06Q 30/00 (2023.01); G06N 20/00 (2019.01); G06Q 30/0601 (2023.01)
CPC G06N 20/00 (2019.01) [G06Q 30/0601 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computing platform comprising:
at least one processor;
a communication interface communicatively coupled to the at least one processor; and
memory storing computer-readable instructions that, when executed by the at least one processor, cause the computing platform to:
select, based on an identified service offering and using one or more machine learning algorithms, one or more machine learning models and a corresponding sequence of model application, resulting in machine learning model information, wherein the corresponding sequence of model application indicates an order in which the one or more machine learning models should be applied, and wherein:
the one or more machine learning models include bagging, and wherein performing the bagging comprises:
generating new training data sets by sampling an initial training data set uniformly with replacement, and
fitting the one or more machine learning models using the new training data sets, wherein the bagging improves stability and accuracy of the one or more machine learning models, reduces variance, and avoids overfitting; and
in response to identifying that a service access request corresponds to a problem within the identified service offering, send, to an enterprise service host system, the machine learning model information, wherein sending the machine learning model information to the enterprise service host system causes the enterprise service host system to generate a service output interface by applying the selected one or more machine learning models in the order in which the one or more machine learning models should be applied.
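
The bagging step recited in claim 1 (generating new training data sets by sampling the initial training data set uniformly with replacement, then fitting models to those sets) can be illustrated with a minimal sketch. The sketch assumes NumPy arrays and scikit-learn-style estimators; the function names and the choice of decision trees are illustrative and are not taken from the patent.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_bagged_models(X, y, n_models=10, random_state=0):
    """Fit an ensemble by bagging (bootstrap aggregating).

    Each model is trained on a new training set drawn from the
    initial training data by uniform sampling with replacement.
    """
    rng = np.random.default_rng(random_state)
    n_samples = X.shape[0]
    models = []
    for _ in range(n_models):
        # Sample the initial training data uniformly with replacement.
        idx = rng.integers(0, n_samples, size=n_samples)
        model = DecisionTreeClassifier()
        model.fit(X[idx], y[idx])
        models.append(model)
    return models

def predict_bagged(models, X):
    """Aggregate predictions by majority vote, which is how bagging
    reduces the variance of any single fitted model."""
    votes = np.stack([m.predict(X) for m in models])
    return np.apply_along_axis(
        lambda col: np.bincount(col.astype(int)).argmax(), 0, votes
    )
```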
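
The final claim element has the enterprise service host system apply the selected models in the indicated sequence. A brief sketch of that ordered application follows; passing each stage's predictions forward as the next stage's input is an illustrative choice, since the claim only requires that the models be applied in the selected order, and the helper name is hypothetical.

```python
import numpy as np

def apply_model_sequence(models_in_order, request_features):
    """Apply the selected models in the specified sequence.

    Each stage's output is reshaped and fed to the next stage; the
    final result would feed generation of the service output interface.
    """
    data = np.asarray(request_features)
    for model in models_in_order:
        data = np.asarray(model.predict(data)).reshape(len(data), -1)
    return data
```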