US 12,437,429 B2
Three-dimensional object reconstruction
Riza Alp Guler, London (GB); and Iason Kokkinos, London (GB)
Assigned to Snap Inc., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on Sep. 28, 2023, as Appl. No. 18/477,146.
Application 18/477,146 is a continuation of application No. 17/301,926, filed on Apr. 19, 2021, granted, now 11,816,850.
Application 17/301,926 is a continuation of application No. PCT/EP2019/080897, filed on Nov. 11, 2019.
Claims priority of provisional application 62/768,824, filed on Nov. 16, 2018.
Prior Publication US 2024/0029280 A1, Jan. 25, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 7/50 (2017.01); G06N 3/02 (2006.01); G06T 7/73 (2017.01); G06T 17/00 (2006.01); G06T 19/20 (2011.01); G06V 10/22 (2022.01); G06V 10/44 (2022.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 20/64 (2022.01); G06V 40/10 (2022.01)
CPC G06T 7/50 (2017.01) [G06N 3/02 (2013.01); G06T 7/75 (2017.01); G06T 17/005 (2013.01); G06T 19/20 (2013.01); G06V 10/225 (2022.01); G06V 10/454 (2022.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 20/64 (2022.01); G06V 40/103 (2022.01); G06T 2207/20084 (2013.01); G06T 2207/20221 (2013.01); G06T 2207/30196 (2013.01); G06T 2219/2004 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method for creating a three-dimensional (3D) reconstruction from a two-dimensional (2D) image comprising a plurality of object landmarks corresponding to an object, the method comprising:
extracting a 2D image representation from each object landmark to generate a plurality of 2D images, each corresponding to a different object landmark;
estimating a respective separate 3D representation for each of the plurality of 2D images, each respective separate 3D representation representing a corresponding set of possible orientation angles of a respective one of the different object landmarks, the possible orientation angles referring to all possible positions of an object landmark;
applying a weighting to each orientation angle in the set of possible orientation angles based on a kinematic association of the respective object landmark corresponding to a respective one of the plurality of separate 3D representations;
reducing a weight of the set of possible orientation angles corresponding to one or more object landmarks that are missing from the object; and
combining the respective 3D representations, comprising the plurality of separate 3D representations of each of the different object landmarks, resulting in a fused 3D representation of the object.
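
As a rough illustration of the steps recited in claim 1, and not the patentee's actual implementation, the Python sketch below fuses per-landmark orientation-angle hypotheses into a single 3D representation. The function name, the Euler-angle parameterization, the 0.1 down-weighting factor for missing landmarks, and the weighted-average fusion are all illustrative assumptions; the claim does not specify them, and a practical system would more likely operate on rotation matrices or quaternions rather than averaging angles directly.

    import numpy as np

    def fuse_landmark_orientations(angle_sets, kinematic_weights,
                                   missing=frozenset(), down_weight=0.1):
        # angle_sets:        list of (K_i, 3) arrays of candidate orientation
        #                    angles for each object landmark (illustrative
        #                    Euler-angle parameterization)
        # kinematic_weights: list of (K_i,) arrays scoring each candidate by
        #                    its kinematic association with the landmark
        # missing:           indices of landmarks absent from the object,
        #                    whose candidates are down-weighted
        fused = []
        for i, (angles, weights) in enumerate(zip(angle_sets, kinematic_weights)):
            w = np.asarray(weights, dtype=float)
            if i in missing:
                w = w * down_weight              # reduce weight for a missing landmark
            w = w / (w.sum() + 1e-8)             # normalize the per-landmark weighting
            fused.append(w @ np.asarray(angles, dtype=float))  # weighted combination
        return np.stack(fused)                   # (num_landmarks, 3) fused representation

    # Hypothetical usage with two landmarks, the second of which is missing:
    angle_sets = [np.array([[0.0, 0.1, 0.0], [0.1, 0.0, 0.0]]),
                  np.array([[0.2, 0.0, 0.1]])]
    kinematic_weights = [np.array([0.7, 0.3]), np.array([1.0])]
    fused = fuse_landmark_orientations(angle_sets, kinematic_weights, missing={1})
    print(fused.shape)  # (2, 3)

The weighted average stands in for the claimed "combining" step only as a placeholder; the claim leaves the fusion mechanism open, and a learned combination (e.g., a neural network over the weighted per-landmark representations) would be equally consistent with the claim language.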