US 11,861,860 B2
Body dimensions from two-dimensional body images
Amit Kumar Agrawal, Santa Clara, CA (US); Siddharth Choudhary, San Jose, CA (US); Antonio Criminisi, Cambridge (GB); Ganesh Subramanian Iyer, Sunnyvale, CA (US); JinJin Li, San Jose, CA (US); Prakash Ramu, Portland, OR (US); Brandon Michael Smith, Fremont, CA (US); and Durga Venkata Kiran Yakkala, Eluru (IN)
Assigned to Amazon Technologies, Inc., Seattle, WA (US)
Filed by Amazon Technologies, Inc., Seattle, WA (US)
Filed on Sep. 29, 2021, as Appl. No. 17/489,393.
Prior Publication US 2023/0096013 A1, Mar. 30, 2023
Int. Cl. G06T 7/60 (2017.01); G06T 7/11 (2017.01); G06T 7/70 (2017.01); G06V 40/10 (2022.01); G01B 11/24 (2006.01); G06T 17/20 (2006.01)
CPC G06T 7/60 (2013.01) [G01B 11/24 (2013.01); G06T 7/11 (2017.01); G06T 7/70 (2017.01); G06T 17/20 (2013.01); G06V 40/103 (2022.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30196 (2013.01)] 22 Claims
OG exemplary drawing
 
1. A computer-implemented method, comprising:
receiving a first two-dimensional (“2D”) body image of a human body from a 2D camera;
processing the first 2D body image to segment a first plurality of pixels of the first 2D body image that represent the human body from a second plurality of pixels of the first 2D body image that do not represent the human body to produce a first silhouette of the human body;
processing the first silhouette using a convolutional neural network to produce a plurality of body dimensions corresponding to the human body;
generating, based at least in part on the first silhouette or at least some of the plurality of body dimensions, a personalized three-dimensional (“3D”) body model of the human body;
comparing the personalized 3D body model with at least one of the human body represented in the first 2D body image or the first silhouette to determine a difference between the personalized 3D body model and at least one of the human body represented in the first 2D body image or the first silhouette;
refining, based at least in part on the difference, the first silhouette to produce a refined silhouette; and
processing the refined silhouette using the convolutional neural network to produce a refined plurality of body dimensions corresponding to the human body.
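
The following is a minimal, illustrative sketch of the pipeline recited in claim 1: segment the person from a 2D body image, binarize the result into a silhouette, and regress a vector of body dimensions from that silhouette with a convolutional neural network. The choice of PyTorch/torchvision, the pretrained DeepLabV3 segmenter (which downloads weights on first use), the layer sizes, and the eight-dimension output are all assumptions made for illustration, not the claimed implementation.

# Illustrative sketch only: segmentation -> silhouette -> CNN body-dimension regression.
# Library choices and all sizes are assumptions, not the patent's implementation.
import torch
import torch.nn as nn
import torchvision

def silhouette_from_image(image_bchw: torch.Tensor) -> torch.Tensor:
    """Segment person pixels from background and return a binary silhouette (B, 1, H, W)."""
    seg_model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
    seg_model.eval()
    with torch.no_grad():
        logits = seg_model(image_bchw)["out"]        # (B, 21, H, W) over the VOC label set
    person_class = 15                                # "person" in the VOC label set
    return (logits.argmax(dim=1) == person_class).float().unsqueeze(1)

class BodyDimensionCNN(nn.Module):
    """Toy convolutional regressor: silhouette -> vector of body dimensions."""
    def __init__(self, num_dimensions: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_dimensions)    # e.g. chest, waist, hip, inseam, ...

    def forward(self, silhouette: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(silhouette).flatten(1))

if __name__ == "__main__":
    image = torch.rand(1, 3, 256, 256)               # placeholder 2D body image
    silhouette = silhouette_from_image(image)
    dims = BodyDimensionCNN()(silhouette)
    print(dims.shape)                                # torch.Size([1, 8])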
 
18. A method, comprising:
processing a first two-dimensional (“2D”) body image that includes a representation of a body from a first view to produce a first silhouette of the body;
determining, based at least in part on the first silhouette, a plurality of body dimensions corresponding to the body;
generating, based at least in part on the first silhouette, a three-dimensional (“3D”) model of the body;
comparing the 3D model of the body with at least one of the body represented in the first 2D body image or the first silhouette to determine a difference between the 3D model of the body and at least one of the body represented in the first 2D body image or the first silhouette;
refining, based at least in part on the difference, the first silhouette to produce a refined silhouette;
processing the refined silhouette to produce a refined plurality of body dimensions corresponding to the body;
generating, based at least in part on the refined silhouette, a refined 3D model of the body; and
sending, for presentation, the refined 3D model of the body and at least one dimension of the refined plurality of body dimensions.
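
The following is a minimal, illustrative sketch of the comparison-and-refinement step recited in claims 1 and 18: a mask rendered from the 3D body model is compared against the silhouette, a scalar difference is computed, and the two are blended into a refined silhouette that would then be fed back through the dimension estimator. The soft-probability silhouette, the stand-in rendered mask, the blend weight, and the NumPy formulation are assumptions made for illustration, not the claimed method.

# Illustrative sketch only: compare a rendered 3D-model mask to the silhouette,
# compute a difference score, and blend the two into a refined silhouette.
import numpy as np

def refine_silhouette(silhouette_prob: np.ndarray,
                      model_mask: np.ndarray,
                      blend: float = 0.7) -> tuple[np.ndarray, float]:
    """Return a refined binary silhouette and a scalar difference score.

    silhouette_prob: soft (0..1) per-pixel body probability from segmentation.
    model_mask:      binary mask obtained by projecting the 3D body model into
                     the camera view (renderer not shown here).
    """
    binary_silhouette = (silhouette_prob > 0.5).astype(np.float32)
    difference = float(np.abs(binary_silhouette - model_mask).mean())   # 0 = perfect agreement
    refined_prob = blend * silhouette_prob + (1.0 - blend) * model_mask
    return (refined_prob > 0.5).astype(np.uint8), difference

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    silhouette_prob = rng.random((256, 256))                            # stand-in soft silhouette
    model_mask = (rng.random((256, 256)) > 0.5).astype(np.float32)      # stand-in rendered mask
    refined, diff = refine_silhouette(silhouette_prob, model_mask)
    print(f"difference={diff:.3f}  refined silhouette shape={refined.shape}")
    # A full pipeline would re-run the dimension CNN on `refined`, regenerate the
    # 3D model, and send both the model and at least one dimension for presentation.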