US 11,886,321 B2
System and method for bias evaluation scanning and maturity model
Aquelah Davis, Chicago, IL (US); Iqbal M Khan, Glenview, IL (US); Edwin L Tate, Frankfort, IL (US); Paula Fetterman, Belleair, FL (US); Peter J Evans, Carrollton, TX (US); Stephanie M Cromuel, Tampa, FL (US); Sharon L Williams, Chicago, IL (US); Paroul Bhandari, Lutz, FL (US); Dillon W Sullivan, West Lafayette, IN (US); and Nicoleta Mihai, Rolling Meadows, IL (US)
Assigned to JPMORGAN CHASE BANK, N.A., New York, NY (US)
Filed by JPMorgan Chase Bank, N.A., New York, NY (US)
Filed on Aug. 10, 2021, as Appl. No. 17/444,789.
Prior Publication US 2023/0053115 A1, Feb. 16, 2023
Int. Cl. G06F 9/44 (2018.01); G06F 11/36 (2006.01); G06N 20/00 (2019.01)
CPC G06F 11/3616 (2013.01) [G06N 20/00 (2019.01)] 17 Claims
OG exemplary drawing
 
1. A method for coding out biases in applications, systems, and processes by utilizing one or more processors and one or more memories, the method comprising:
implementing, by at least one processor, a bias code scanning module that includes a receiving module, an implementing module, an identifying module, a mitigating module, a generating module, a coding module, a scanning module, and a certifying module;
applying, by calling the receiving module by a first application programming interface (API), an intake process based on received inventory data to applications, systems, and processes;
implementing, by calling the implementing module by a second API, a machine learning model in response to applying the intake process;
identifying, by calling the identifying module by a third API, areas of potential bias data within the applications, systems, and processes by utilizing the machine learning model based on analyzing response data received during the intake process, wherein the potential bias data includes conscious or unconscious decisions data during development of the applications, systems, and processes based on a set of predefined criteria data;
generating, by calling the generating module by a fourth API, output data that includes bias data and exceptions data identified for the applications, systems, and processes; and
mitigating, by calling the mitigating module by a fifth API, the bias data and exceptions data in response to the output data by implementing a mitigation process; and
wherein, in implementing the machine learning model, the method further comprises:
generating, by calling the generating module by the fourth API, algorithmic measurement data of biases in response to analyzing input data corresponding to the set of predefined criteria data;
coding, by calling the coding module by a sixth API, the algorithmic measurement data of the biases into the bias code scanning module;
scanning, by calling the scanning module by a seventh API, the coded algorithmic measurement data to identify potential biases within the applications, systems, and processes; and
automatically coding out the potential biases in the applications, systems, and processes.
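The claimed pipeline — intake, identification via a machine learning model, output generation, and mitigation, each step exposed through its own API — can be sketched as a minimal Python scaffold. All names below (`BiasCodeScanner`, `receive`, `identify`, and the sample criteria) are illustrative assumptions for this sketch, not terminology from the patent, and a simple keyword match stands in for the machine learning model.

```python
from dataclasses import dataclass


@dataclass
class ScanResult:
    """Output data: identified bias data plus exceptions data."""
    bias_data: list
    exceptions_data: list


class BiasCodeScanner:
    """Hypothetical coordinator for the receiving, identifying,
    generating, and mitigating steps described in claim 1."""

    def __init__(self, criteria):
        # Predefined criteria data used to flag conscious or
        # unconscious decisions made during development.
        self.criteria = criteria

    def receive(self, inventory_data):
        # Intake process: normalize inventory records for analysis.
        return [str(item).lower() for item in inventory_data]

    def identify(self, responses):
        # Stand-in for the machine learning model: flag any response
        # matching one of the predefined criteria.
        return [r for r in responses if any(c in r for c in self.criteria)]

    def generate(self, flagged):
        # Produce output data; exceptions data is empty in this sketch.
        return ScanResult(bias_data=flagged, exceptions_data=[])

    def mitigate(self, result):
        # Mitigation process: queue flagged items for remediation
        # ("coding out" the potential biases).
        return list(result.bias_data)


scanner = BiasCodeScanner(criteria=["zip code", "gender"])
responses = scanner.receive(["Uses ZIP code as proxy", "Logs request latency"])
result = scanner.generate(scanner.identify(responses))
remediation_queue = scanner.mitigate(result)
```

In the claimed system each of these methods would sit behind a distinct API (first through seventh), so the modules could be deployed and versioned independently; the single-class sketch above only shows the data flow between the steps.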