US 12,080,028 B2
Large pose facial recognition based on 3D facial model
Jianxiang Chang, Mountain View, CA (US); and Lin Tao, Campbell, CA (US)
Assigned to Intuit Inc., Mountain View, CA (US)
Filed by Intuit Inc., Mountain View, CA (US)
Filed on Sep. 30, 2021, as Appl. No. 17/490,791.
Prior Publication US 2023/0102682 A1, Mar. 30, 2023
Int. Cl. G06T 7/73 (2017.01); G06F 3/01 (2006.01); G06F 21/32 (2013.01); G06N 20/00 (2019.01); G06V 40/16 (2022.01)
CPC G06T 7/75 (2017.01) [G06F 3/017 (2013.01); G06F 21/32 (2013.01); G06N 20/00 (2019.01); G06V 40/172 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
generating a captured facial object and a captured pose from a captured image, wherein:
the captured facial object comprises a computer generated construct of the captured image, and
the captured pose comprises a first set of angles for the captured facial object;
obtaining a base facial object and a base pose from a base image, wherein:
the base facial object comprises a three-dimensional computer generated construct of the base image,
the base pose comprises a second set of angles for at least one of the base image and the base facial object,
the base image comprises a pre-existing image taken before the captured image was captured, and
the second set of angles is smaller than the first set of angles;
generating a plurality of base pose angles using the base pose, and a plurality of captured pose angles using the captured pose, wherein:
the plurality of captured pose angles comprise first pre-determined angle variations of the first set of angles, and
the plurality of base pose angles comprise second pre-determined angle variations of the second set of angles;
obtaining a plurality of selected base images using the plurality of base pose angles and the base facial object, wherein each of the plurality of selected base images is a corresponding first representation of the base facial object, as modified by the second pre-determined angle variations of the second set of angles;
generating a plurality of selected captured images using the plurality of captured pose angles and the captured facial object, wherein each of the plurality of selected captured images is a corresponding second representation of the captured image, as modified by the first pre-determined angle variations of the first set of angles;
comparing the plurality of selected base images to the plurality of selected captured images to establish a comparison; and
outputting, responsive to the comparison identifying at least one of the plurality of selected captured images as matching or not matching at least one of the plurality of selected base images, a match output.
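The claimed method can be summarized as: derive a pose (a set of angles) for both the captured face and the enrolled base face, expand each pose into a set of pre-determined angle variations, render an image for every variation of each facial object, and compare the two image sets to produce a match output. The sketch below illustrates that flow in minimal Python. It is not the patented implementation: the `render`, `is_match` helpers, the `(yaw, pitch, roll)` pose representation, the ±10° `DELTAS`, and the 5° matching tolerance are all assumptions chosen for illustration; a real system would render a 3D facial model and compare learned face embeddings.

```python
from itertools import product

# Assumed pose representation: (yaw, pitch, roll) in degrees.
# Assumed pre-determined angle variations applied per axis.
DELTAS = (-10.0, 0.0, 10.0)

def pose_variations(pose):
    """Expand one pose into the pre-determined angle variations
    (every combination of per-axis deltas)."""
    return [tuple(a + d for a, d in zip(pose, delta))
            for delta in product(DELTAS, repeat=len(pose))]

def render(facial_object, pose):
    """Stand-in for projecting a 3D facial object at a given pose.
    Here an 'image' is just the (object, pose) pair."""
    return (facial_object, pose)

def is_match(captured_img, base_img, angle_tol=5.0):
    """Stand-in comparison: same identity and poses within a tolerance.
    A real system would compare face-embedding vectors instead."""
    (c_obj, c_pose), (b_obj, b_pose) = captured_img, base_img
    return c_obj == b_obj and all(
        abs(a - b) <= angle_tol for a, b in zip(c_pose, b_pose))

def match_output(captured_obj, captured_pose, base_obj, base_pose):
    """Compare all selected captured images against all selected
    base images and return the match output."""
    captured_images = [render(captured_obj, p)
                       for p in pose_variations(captured_pose)]
    base_images = [render(base_obj, p)
                   for p in pose_variations(base_pose)]
    return any(is_match(ci, bi)
               for ci in captured_images for bi in base_images)
```

Under these assumptions, a large-pose capture (yaw 30°) of the same person enrolled at a small pose (yaw 5°) matches because the variation sets bring the two poses within tolerance, while a different identity or a much larger pose gap does not.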