US 11,657,573 B2
Automatic mesh tracking for 3D face modeling
Yuelong Li, San Jose, CA (US); and Mohammad Gharavi-Alkhansari, San Jose, CA (US)
Assigned to SONY GROUP CORPORATION, Tokyo (JP); and SONY CORPORATION OF AMERICA, New York, NY (US)
Filed by SONY GROUP CORPORATION, Tokyo (JP); and Sony Corporation of America, New York, NY (US)
Filed on May 6, 2021, as Appl. No. 17/313,949.
Prior Publication US 2022/0358722 A1, Nov. 10, 2022
Int. Cl. G06T 17/20 (2006.01); G06T 7/564 (2017.01); G06T 7/11 (2017.01); G06T 7/73 (2017.01); G06T 7/33 (2017.01); G06V 10/25 (2022.01); G06V 10/44 (2022.01); G06N 3/04 (2023.01)
CPC G06T 17/205 (2013.01) [G06N 3/04 (2013.01); G06T 7/11 (2017.01); G06T 7/337 (2017.01); G06T 7/564 (2017.01); G06T 7/75 (2017.01); G06V 10/25 (2022.01); G06V 10/44 (2022.01); G06T 2207/20084 (2013.01)] 39 Claims
OG exemplary drawing
 
1. A method programmed in a non-transitory memory of a device comprising:
inputting unaligned 3D scans;
implementing pose correction via rigid alignment on the unaligned 3D scans to generate aligned meshes;
detecting eye and mouth boundaries on the aligned meshes;
implementing mesh tracking on the aligned meshes; and
outputting a tracked mesh based on the mesh tracking,
wherein detecting the eye and mouth boundaries includes 3D contour detection, wherein the 3D contour detection includes:
applying a Mask Regional Convolutional Neural Network (Mask R-CNN), which results in a segmentation probability, left and right corners, and a region of interest;
generating an edge map from the region of interest; and
using an improved active contour fitting (snake) algorithm.
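
The claim's pose-correction step (rigid alignment of the unaligned 3D scans) is commonly realized as a Procrustes/Kabsch fit to a reference mesh. The sketch below is a minimal illustration under that assumption, taking two corresponded N x 3 NumPy vertex arrays; the function name `rigid_align` and the availability of vertex correspondences are illustrative assumptions, not the patent's disclosed method.

```python
import numpy as np

def rigid_align(source_vertices, target_vertices):
    """Least-squares rigid (rotation + translation) alignment of two
    corresponded N x 3 vertex arrays (Kabsch/Procrustes, no scaling).
    Illustrative stand-in for the claimed pose correction step."""
    # Center both point sets on their centroids.
    src_centroid = source_vertices.mean(axis=0)
    tgt_centroid = target_vertices.mean(axis=0)
    src = source_vertices - src_centroid
    tgt = target_vertices - tgt_centroid

    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(src.T @ tgt)
    R = Vt.T @ U.T
    # Guard against a reflection (det = -1) in the SVD solution.
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T

    t = tgt_centroid - R @ src_centroid
    return source_vertices @ R.T + t  # aligned copy of the source scan
```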
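For the claimed 3D contour detection, one plausible reading of the edge-map and snake steps is Canny edges computed inside the Mask R-CNN region of interest, followed by an active-contour fit. The sketch below uses off-the-shelf stand-ins (`cv2.Canny` and `skimage.segmentation.active_contour`) rather than the patent's improved snake, and assumes the grayscale ROI crop comes from an upstream detector.

```python
import cv2
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def fit_boundary_contour(roi_gray, n_points=80):
    """Fit a closed contour to the dominant boundary inside an ROI.
    roi_gray: uint8 grayscale crop of one eye or mouth region."""
    # Edge map computed from the region of interest.
    edges = cv2.Canny(roi_gray, 50, 150)

    # Initialize the snake as an ellipse roughly filling the ROI.
    h, w = roi_gray.shape
    theta = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([h / 2 + 0.4 * h * np.sin(theta),
                            w / 2 + 0.4 * w * np.cos(theta)])  # (row, col)

    # Standard snake on a smoothed edge image; the patent claims an
    # *improved* active-contour fit, which this generic call is not.
    snake = active_contour(gaussian(edges.astype(float), 3),
                           init, alpha=0.01, beta=1.0, gamma=0.01)
    return snake  # n_points x 2 array of (row, col) contour coordinates
```

The fitted 2D contour points would then be lifted back to the mesh surface (e.g., by ray casting through the camera used to render the ROI) to obtain the 3D eye and mouth boundaries used by the mesh tracking step; that lifting step is only outlined here, not implemented.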