CPC G06Q 40/12 (2013.12) [G06F 18/24 (2023.01); G06F 40/284 (2020.01); G06N 3/02 (2013.01); G06N 3/08 (2013.01); G06N 20/00 (2019.01); G06Q 20/045 (2013.01); G06Q 20/389 (2013.01); G06Q 20/40 (2013.01); G06Q 20/4016 (2013.01); G06T 7/0002 (2013.01); G06T 7/74 (2017.01); G06V 30/224 (2022.01); G06V 30/413 (2022.01); G06V 30/414 (2022.01); G06V 30/418 (2022.01); G06F 16/24564 (2019.01); G06T 2207/20061 (2013.01); G06T 2207/30176 (2013.01)] | 20 Claims |
1. A computer-implemented method comprising:
identifying policy questions associated with at least one policy-enforcer entity, wherein each policy question is associated with at least one policy question answer, wherein policy-enforcer entities enforce respective policies based on respective policy question answers, and wherein each policy question answer corresponds to a conformance or a violation of a policy selected by at least one policy-enforcer entity, wherein identified policy questions associated with a first policy-enforcer entity include a first set of policy questions specific to the first policy-enforcer entity and a second set of policy questions common to multiple policy-enforcer entities, wherein the multiple policy-enforcer entities include the first policy-enforcer entity and at least a second policy-enforcer entity that is a different entity than the first policy-enforcer entity;
training, for each respective policy question in the identified policy questions, a machine learning policy model for the respective policy question based on historical determinations of policy question answers for the respective policy question, wherein the machine learning policy models for the second set of policy questions are trained using data for multiple policy-enforcer entities, and wherein each machine learning policy model is trained to determine whether a given request corresponds to a policy conformance or a policy violation;
receiving data associated with a request associated with the first policy-enforcer entity;
identifying, for each respective policy question associated with the first policy-enforcer entity, the trained machine learning policy model for the respective policy question based on a mapping associated with the first policy-enforcer entity that maps policy questions to machine learning policy models;
identifying, for each identified machine learning policy model, tuning parameters for the first policy-enforcer entity, wherein first tuning parameters for a first machine learning policy model for the first policy-enforcer entity are different than second tuning parameters for the first machine learning policy model for the second policy-enforcer entity;
tuning each identified machine learning policy model for the first policy-enforcer entity using the tuning parameters specific to the first policy-enforcer entity;
using, for each respective policy question associated with the first policy-enforcer entity, the identified machine learning policy model to automatically determine a respective policy question answer to the respective policy question for the request, wherein the respective policy question answer indicates whether the request conforms to or violates the policy associated with the respective policy question; and
in response to determining that a first policy question answer corresponds to a policy violation:
generating an audit alert regarding the policy violation that identifies the request as a fraudulent request; and
providing the audit alert to one or more systems, wherein any of the one or more systems that receives the audit alert regarding the policy violation automatically rejects the request based on the audit alert.
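The following is a minimal Python sketch of the flow recited in claim 1, under the assumption that each trained machine learning policy model can be reduced to a scoring function with an entity-specific decision threshold; all identifiers (PolicyModel, evaluate_request, score_fn, tuning_params, alert_sinks) are hypothetical illustrations and are not drawn from the claim or specification.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class PolicyModel:
    """Stand-in for one trained machine learning policy model for a single policy question."""
    question_id: str
    score_fn: Callable[[dict], float]   # assumed trained scoring function for the question
    threshold: float = 0.5              # default decision threshold before entity-specific tuning

    def tune(self, params: Dict[str, float]) -> "PolicyModel":
        # Apply tuning parameters specific to one policy-enforcer entity
        # (here modeled only as an adjusted decision threshold).
        return PolicyModel(self.question_id, self.score_fn,
                           params.get("threshold", self.threshold))

    def answer(self, request: dict) -> bool:
        # True corresponds to policy conformance, False to a policy violation.
        return self.score_fn(request) < self.threshold

def evaluate_request(request: dict,
                     enforcer_id: str,
                     question_to_model: Dict[str, PolicyModel],
                     tuning_params: Dict[str, Dict[str, float]],
                     alert_sinks: List[Callable[[dict], None]]) -> Dict[str, bool]:
    """Answer every policy question mapped to the enforcer; emit an audit alert on each violation."""
    answers: Dict[str, bool] = {}
    for question_id, model in question_to_model.items():
        # Tune the shared model with the parameters of this policy-enforcer entity.
        tuned = model.tune(tuning_params.get(question_id, {}))
        conforms = tuned.answer(request)
        answers[question_id] = conforms
        if not conforms:
            # Audit alert identifying the request as fraudulent for the violated question.
            alert = {"enforcer": enforcer_id,
                     "question": question_id,
                     "request_id": request.get("id"),
                     "classification": "fraudulent request"}
            for sink in alert_sinks:
                sink(alert)  # each receiving system rejects the request based on the alert
    return answers
```

In this sketch the same underlying model object can serve multiple policy-enforcer entities, with only the tuning parameters (illustrated as a threshold) differing per entity, mirroring the claim's distinction between entity-specific and common policy questions.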