| CPC G10L 17/04 (2013.01) [G10L 17/02 (2013.01); G10L 17/06 (2013.01); G10L 17/24 (2013.01)] | 20 Claims |

1. A method for generating a voice signature using validated input from an authenticated user, the method comprising:
providing an electronic document to a user;
receiving audio input from the user in association with the electronic document, wherein the audio input is captured by one or more microphones at a client device;
authenticating, using the audio input, an identity for the user by:
processing the audio input using one or more first trained machine learning models to generate an audio embedding; and
comparing, via a prediction model, the generated audio embedding to one or more audio references stored in association with the identity of the user, wherein the identity of the user is associated with first code data and the electronic document is associated with second code data;
validating the audio input against one or more validation rules by processing the audio input using one or more second trained machine learning models to extract natural language data from the audio input, wherein a first of the validation rules validates that the extracted natural language data from the audio input corresponds to the first code data and a second of the validation rules validates that the extracted natural language data from the audio input corresponds to the second code data;
generating, after the user is authenticated and the audio input is validated, a voice signature for the user using the audio input, wherein the voice signature represents the user's endorsement of the provided electronic document;
embedding, based on the user authentication and audio input validation, the electronic document with the first code data and/or the second code data; and
storing the electronic document in combination with the generated voice signature.
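The authentication step of claim 1 compares a model-generated audio embedding against audio references stored for the claimed identity. The sketch below illustrates one plausible realization, assuming cosine similarity as the comparison performed by the prediction model and a placeholder encoder standing in for the one or more first trained machine learning models; the function names, embedding size, and threshold are illustrative assumptions and are not recited in the claim.

```python
import numpy as np

EMBEDDING_DIM = 256      # illustrative embedding size
MATCH_THRESHOLD = 0.75   # illustrative acceptance threshold

def extract_embedding(audio_samples: np.ndarray) -> np.ndarray:
    """Stand-in for the 'one or more first trained machine learning models'.

    A real implementation would run a trained speaker-embedding network over
    the audio; here only the output shape is fixed so the sketch runs.
    """
    rng = np.random.default_rng(abs(hash(audio_samples.tobytes())) % (2**32))
    vec = rng.standard_normal(EMBEDDING_DIM)
    return vec / np.linalg.norm(vec)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(audio_samples: np.ndarray,
                 reference_embeddings: list[np.ndarray]) -> bool:
    """Compare the generated embedding to the stored audio references.

    Returns True when the best similarity clears the threshold, i.e. the
    speaker is accepted as the claimed identity.
    """
    probe = extract_embedding(audio_samples)
    best = max(cosine_similarity(probe, ref) for ref in reference_embeddings)
    return best >= MATCH_THRESHOLD
```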
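The validation step extracts natural language data from the same audio input and tests it against two rules, one tied to the first code data (the user's identity) and one tied to the second code data (the electronic document). A minimal sketch follows, assuming the extraction is a speech-to-text transcript and that "corresponds to" means a normalized substring match; the transcribe() stub and the normalization are assumptions made for illustration, not limitations of the claim.

```python
import re

def transcribe(audio_samples) -> str:
    """Stand-in for the 'one or more second trained machine learning models'.

    A real implementation would run a speech-to-text model; the fixed string
    here only keeps the sketch self-contained and runnable.
    """
    return "My code is 4 2 7 and I am signing document alpha nine"

def _normalize(text: str) -> str:
    # Lower-case and strip everything except letters and digits so that
    # "4 2 7" and "427" compare equal.
    return re.sub(r"[^a-z0-9]", "", text.lower())

def validate(audio_samples, first_code_data: str, second_code_data: str) -> bool:
    """Apply the two validation rules of claim 1.

    Rule 1: the extracted natural language data corresponds to the code data
    associated with the user's identity.
    Rule 2: it also corresponds to the code data associated with the
    electronic document.
    """
    spoken = _normalize(transcribe(audio_samples))
    rule_one = _normalize(first_code_data) in spoken
    rule_two = _normalize(second_code_data) in spoken
    return rule_one and rule_two

# Example: both rules pass for this transcript.
assert validate(None, "427", "alpha nine")
```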
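Once authentication and validation both succeed, claim 1 generates a voice signature from the audio input, embeds the first and/or second code data in the electronic document, and stores the document together with the signature. The sketch below shows one way such a signed record could be assembled; the hash-based signature, the field names, and the dictionary representation of the stored document are assumptions chosen for illustration and are not recited in the claim.

```python
import hashlib
import json
from datetime import datetime, timezone

def generate_voice_signature(audio_bytes: bytes, user_id: str) -> dict:
    """Derive a voice signature record from the validated audio input.

    Hashing the raw audio is only one illustrative choice; the claim does not
    specify how the signature is derived from the audio.
    """
    return {
        "user_id": user_id,
        "audio_digest": hashlib.sha256(audio_bytes).hexdigest(),
        "signed_at": datetime.now(timezone.utc).isoformat(),
    }

def store_signed_document(document: dict, audio_bytes: bytes, user_id: str,
                          first_code_data: str, second_code_data: str) -> dict:
    """Embed the code data in the document and store it with the signature."""
    signed = dict(document)
    signed["embedded_codes"] = {"first": first_code_data,
                                "second": second_code_data}
    signed["voice_signature"] = generate_voice_signature(audio_bytes, user_id)
    return signed

# Example: attach a signature to a minimal document record.
record = store_signed_document({"title": "Agreement"}, b"\x00\x01",
                               "user-1", "427", "alpha nine")
print(json.dumps(record, indent=2))
```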