US 11,786,129 B2
Systems and methods for human mesh recovery
Srikrishna Karanam, Bangalore (IN); Ziyan Wu, Lexington, MA (US); and Georgios Georgakis, Philadelphia, PA (US)
Assigned to SHANGHAI UNITED IMAGING INTELLIGENCE CO., LTD., Shanghai (CN)
Filed by SHANGHAI UNITED IMAGING INTELLIGENCE CO., LTD., Shanghai (CN)
Filed on Feb. 7, 2022, as Appl. No. 17/666,319.
Application 17/666,319 is a continuation of application No. 16/863,382, filed on Apr. 30, 2020, granted, now Pat. No. 11,257,586.
Claims priority of provisional application 62/941,203, filed on Nov. 27, 2019.
Prior Publication US 2022/0165396 A1, May 26, 2022
Int. Cl. G06K 9/00 (2022.01); A61B 5/00 (2006.01); G16H 30/40 (2018.01); G06T 7/00 (2017.01); G06T 7/90 (2017.01); G06T 17/00 (2006.01); G06T 7/50 (2017.01); G06T 7/70 (2017.01); G06T 17/20 (2006.01); G16H 10/60 (2018.01); G16H 30/20 (2018.01); G06V 20/64 (2022.01); G06V 40/10 (2022.01); G06V 40/20 (2022.01); G06V 20/62 (2022.01); G06F 18/21 (2023.01); G06F 18/214 (2023.01); G06V 10/764 (2022.01); G06V 10/774 (2022.01); G06V 10/778 (2022.01); G06V 10/82 (2022.01); G06V 10/42 (2022.01); G06V 10/40 (2022.01)
CPC A61B 5/0077 (2013.01) [A61B 5/0035 (2013.01); A61B 5/70 (2013.01); G06F 18/21 (2023.01); G06F 18/214 (2023.01); G06F 18/2193 (2023.01); G06T 7/0012 (2013.01); G06T 7/50 (2017.01); G06T 7/70 (2017.01); G06T 7/90 (2017.01); G06T 17/00 (2013.01); G06T 17/20 (2013.01); G06V 10/40 (2022.01); G06V 10/42 (2022.01); G06V 10/764 (2022.01); G06V 10/774 (2022.01); G06V 10/7796 (2022.01); G06V 10/82 (2022.01); G06V 20/62 (2022.01); G06V 20/64 (2022.01); G06V 40/10 (2022.01); G06V 40/20 (2022.01); G16H 10/60 (2018.01); G16H 30/20 (2018.01); G16H 30/40 (2018.01); G06T 2200/08 (2013.01); G06T 2207/10024 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30004 (2013.01); G06T 2207/30196 (2013.01); G06V 2201/033 (2022.01)] 18 Claims
OG exemplary drawing
 
1. An apparatus, comprising:
one or more processors configured to:
obtain an image of a person;
determine, based on one or more machine-learned (ML) models, respective angles of a first plurality of joints of the person based on the image of the person, wherein the first plurality of joints is associated with a root kinematic chain that includes a chest area or a pelvis area of the person, wherein the respective angles of the first plurality of joints include positional information about the first plurality of joints, and wherein the respective angles of the first plurality of joints and the positional information about the first plurality of joints indicate a position of the person as depicted in the image;
determine, based on the one or more ML models, respective angles of a second plurality of joints of the person based on the image of the person and the position of the person, wherein the second plurality of joints is associated with a head kinematic chain that includes a head area of the person or with a limb kinematic chain that includes a limb area of the person, and wherein the respective angles of the second plurality of joints are determined based on a range of joint angle values dictated by the position of the person; and
estimate a human model associated with the person based at least on the respective angles of the first plurality of joints of the person and the respective angles of the second plurality of joints of the person.
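The claimed pipeline can be illustrated with a minimal conceptual sketch: a first model predicts angles for the root kinematic chain (chest/pelvis), a second model predicts the remaining chains conditioned on the resulting position, with the admissible joint-angle range restricted by that position, and the outputs are assembled into a parametric human model. All function names, array shapes, and the random stand-ins for the trained ML models below are hypothetical, chosen only to mirror the structure of the claim, not the patented implementation.

```python
import numpy as np

# Hypothetical stand-ins for the claimed ML models; a real system would use
# trained neural networks. Shapes, names, and values are illustrative only.

def root_chain_model(image):
    """Predict angles for the root kinematic chain (chest/pelvis joints),
    together with a coarse 'position' summary of the person."""
    rng = np.random.default_rng(0)
    angles = rng.uniform(-np.pi, np.pi, size=(3, 3))  # 3 root joints x 3 rotation axes
    position = angles.mean(axis=0)                    # coarse position derived from root angles
    return angles, position

def limb_range_from_position(position):
    """Position-dependent admissible joint-angle range for the limb/head
    chains (the claim's 'range of joint angle values dictated by the
    position of the person'). The linear dependence here is arbitrary."""
    lo = -np.pi / 2 + 0.1 * position
    hi = np.pi / 2 + 0.1 * position
    return lo, hi

def limb_chain_model(image, position):
    """Predict limb-chain joint angles conditioned on the root position,
    clipped to the admissible range."""
    rng = np.random.default_rng(1)
    raw = rng.uniform(-np.pi, np.pi, size=(4, 3))  # 4 limb joints x 3 rotation axes
    lo, hi = limb_range_from_position(position)
    return np.clip(raw, lo, hi)

def estimate_human_model(root_angles, limb_angles):
    """Assemble a flat pose-parameter vector for a parametric human model."""
    return np.concatenate([root_angles.ravel(), limb_angles.ravel()])

image = np.zeros((224, 224, 3))                 # placeholder input image
root_angles, position = root_chain_model(image)
limb_angles = limb_chain_model(image, position)
pose = estimate_human_model(root_angles, limb_angles)
print(pose.shape)  # (21,)
```

The two-stage ordering is the point of the sketch: the second prediction consumes the first stage's position both as an input and as a constraint on its output range, matching the dependency the claim recites.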