US 12,236,717 B2
Spoof detection based on challenge response analysis
Spandana Vemulapalli, Kansas City, MO (US); and Reza R. Derakhshani, Shawnee, KS (US)
Assigned to JUMIO CORPORATION, Sunnyvale, CA (US)
Filed by Jumio Corporation, Sunnyvale, CA (US)
Filed on May 30, 2023, as Appl. No. 18/325,928.
Application 18/325,928 is a continuation of application No. 17/463,199, filed on Aug. 31, 2021, granted, now Pat. No. 11,710,353.
Prior Publication US 2023/0306792 A1, Sep. 28, 2023
Int. Cl. G06V 40/40 (2022.01); G06V 40/16 (2022.01)
CPC G06V 40/45 (2022.01) [G06V 40/172 (2022.01); G06V 40/174 (2022.01)] 20 Claims
OG exemplary drawing
 
15. A computer-implemented system, comprising:
one or more computers; and
one or more computer memory devices interoperably coupled with the one or more computers and having tangible, non-transitory, machine-readable media storing one or more instructions that, when executed by the one or more computers, perform operations comprising:
causing display of an animated image of an avatar performing a facial expression over a reference time period;
capturing a set of images of a subject as a response of the subject to the display of the animated image of the avatar performing the facial expression during a substantially similar time period to the reference time period, wherein the set of images of the subject includes multiple images depicting a transition in the facial expression over the substantially similar time period;
determining a set of points of interest for the facial expression in a first image of the set of images, the set of points of interest used to determine one or more expression features in the facial expression;
identifying the one or more expression features in the facial expression, the one or more expression features comprising one or more subject-specific features, the one or more expression features identified based on the set of points of interest for the facial expression;
determining, using a deep learning model of a machine learning process, that at least one subject-specific feature identified in the first image exists in a reference image of the subject performing a neutral expression;
determining that the subject in the first image substantially matches the subject in the reference image based, at least in part, on the existence of the at least one subject-specific feature in the reference image and on a dynamic metric representing a quantification of continuous motion of the facial expression based on the multiple images depicting the transition from the neutral expression; and
in response to determining that the subject in the first image substantially matches the subject in the reference image, identifying the subject as a live person.
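The claimed operations can be loosely sketched in code: match a subject-specific feature vector against a neutral-expression reference, and require a nonzero "dynamic metric" of continuous landmark motion across the captured frames before declaring the subject live. Everything below is a hypothetical illustration, not the patented implementation: landmark extraction and the deep-learning feature model are assumed to have already produced point and feature arrays, and the function names and thresholds (`motion_floor`, `threshold`) are invented stand-ins.

```python
import numpy as np

def dynamic_metric(landmark_seq):
    """Quantify continuous motion of a facial expression across frames.

    landmark_seq: array of shape (T, P, 2) -- T frames of P points of
    interest. Returns the mean per-frame landmark displacement. A live
    subject mimicking the avatar shows smooth, nonzero motion, while a
    static spoof (e.g. a printed photo) yields near-zero motion.
    """
    diffs = np.diff(landmark_seq, axis=0)              # (T-1, P, 2) frame-to-frame deltas
    per_frame = np.linalg.norm(diffs, axis=2).mean(axis=1)  # mean point motion per frame
    return float(per_frame.mean())

def feature_match(probe_feat, ref_feat, threshold=0.8):
    """Cosine similarity between subject-specific feature vectors
    (stand-ins for the deep-learning model's embeddings)."""
    sim = float(np.dot(probe_feat, ref_feat) /
                (np.linalg.norm(probe_feat) * np.linalg.norm(ref_feat)))
    return sim >= threshold, sim

def is_live(landmark_seq, probe_feat, ref_feat, motion_floor=0.5):
    """Identify the subject as live only if the probe matches the
    reference AND the expression shows continuous motion."""
    matched, _ = feature_match(probe_feat, ref_feat)
    return matched and dynamic_metric(landmark_seq) > motion_floor
```

For example, a sequence of landmarks that translates steadily between frames clears the motion floor, while ten identical frames (a replayed still image) produce a metric of zero and fail the liveness check even when the identity features match.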