US 12,131,416 B2
Pixel-aligned volumetric avatars
Stephen Anthony Lombardi, Pittsburgh, PA (US); Jason Saragih, Pittsburgh, PA (US); Tomas Simon Kreuz, Pittsburgh, PA (US); Shunsuke Saito, Pittsburgh, PA (US); Michael Zollhoefer, Pittsburgh, PA (US); Amit Raj, Atlanta, GA (US); and James Henry Hays, Decatur, GA (US)
Assigned to Meta Platforms Technologies, LLC, Menlo Park, CA (US)
Filed by Meta Platforms Technologies, LLC, Menlo Park, CA (US)
Filed on Dec. 20, 2021, as Appl. No. 17/556,367.
Claims priority of provisional application 63/129,989, filed on Dec. 23, 2020.
Prior Publication US 2022/0198731 A1, Jun. 23, 2022
Int. Cl. G06T 13/40 (2011.01); G06T 7/00 (2017.01); G06T 7/73 (2017.01)
CPC G06T 13/40 (2013.01) [G06T 7/73 (2017.01); G06T 7/97 (2017.01); G06T 2207/30201 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method, comprising:
receiving multiple two-dimensional images having at least two fields of view of a subject;
extracting multiple image features from the two-dimensional images using a set of learnable weights;
generating predicted features of pixels along a target direction based on the at least two or more fields of view;
generating a summarized feature vector based on information associated with a camera used to collect the two-dimensional images;
projecting the image features along a direction between a three-dimensional model of the subject and a selected observation point for a viewer based on the summarized feature vector, wherein the projecting includes concatenating multiple feature maps produced by each of multiple cameras, each of the multiple cameras having an intrinsic characteristic; and
providing, to the viewer, an image of the three-dimensional model of the subject based on the predicted features and the projecting.
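The claimed method can be read as a pixel-aligned feature pipeline: encode each camera view into a feature map with learnable weights, project a 3-D point of the subject model into each camera, sample the pixel-aligned features there, and concatenate the per-camera results into a summarized feature vector. The following is a minimal illustrative sketch of that reading, not the patented implementation; all function names, shapes, and the single-linear-map "encoder" are assumptions made for clarity.

```python
import numpy as np

# Illustrative sketch only -- names, shapes, and the trivial "encoder"
# are assumptions, not the implementation disclosed in the patent.

def extract_features(image, weights):
    """Stand-in for a learned encoder: one linear map per pixel.

    image: (H, W, 3) RGB; weights: (3, F) "learnable weights".
    Returns an (H, W, F) feature map.
    """
    return image @ weights

def project_point(point, camera):
    """Project a 3-D point into a camera using its intrinsic matrix K
    and extrinsics (R, t), i.e. the camera's intrinsic characteristic."""
    p_cam = camera["R"] @ point + camera["t"]
    uv = camera["K"] @ p_cam
    return uv[:2] / uv[2]  # pixel coordinates (u, v)

def pixel_aligned_feature(feature_map, uv):
    """Sample the feature map at the projected pixel (nearest neighbor)."""
    h, w, _ = feature_map.shape
    u = int(np.clip(round(uv[0]), 0, w - 1))
    v = int(np.clip(round(uv[1]), 0, h - 1))
    return feature_map[v, u]

def summarized_feature(point, cameras, feature_maps):
    """Concatenate the per-camera pixel-aligned features into one
    summarized feature vector for this 3-D point."""
    feats = [pixel_aligned_feature(fm, project_point(point, cam))
             for cam, fm in zip(cameras, feature_maps)]
    return np.concatenate(feats)  # (num_cameras * F,)
```

In this sketch the concatenation step corresponds to the claim's "concatenating multiple feature maps produced by each of multiple cameras"; a volumetric renderer would then query such summarized vectors at points along each viewing ray toward the selected observation point.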