US 12,333,760 B1
System for estimating a three dimensional pose of one or more persons in a scene
Bedirhan Uguz, Pittsburgh, PA (US); Emre Akbas, Ankara (TR); Ozhan Suat, Ankara (TR); Utku Aktas, Ankara (TR); Necip Berme, Worthington, OH (US); and Mohan Chandra Baro, Columbus, OH (US)
Assigned to Bertec Corporation, Columbus, OH (US)
Filed by Bertec Corporation, Columbus, OH (US)
Filed on Jun. 26, 2023, as Appl. No. 18/214,145.
Application 18/214,145 is a continuation in part of application No. 18/074,978, filed on Dec. 5, 2022, granted, now 11,688,139.
Application 18/074,978 is a continuation in part of application No. 17/827,975, filed on May 30, 2022, granted, now 11,521,373, issued on Dec. 6, 2022.
Application 17/827,975 is a continuation in part of application No. 17/533,096, filed on Nov. 22, 2021, granted, now 11,348,279, issued on May 31, 2022.
Application 17/533,096 is a continuation in part of application No. 17/107,845, filed on Nov. 30, 2020, granted, now 11,182,924, issued on Nov. 23, 2021.
Application 17/107,845 is a continuation in part of application No. 16/826,200, filed on Mar. 21, 2020, granted, now 10,853,970, issued on Dec. 1, 2020.
Claims priority of provisional application 62/822,352, filed on Mar. 22, 2019.
Int. Cl. G06T 7/73 (2017.01)
CPC G06T 7/75 (2017.01) [G06T 2207/20084 (2013.01); G06T 2207/30196 (2013.01)] 4 Claims
OG exemplary drawing
 
1. A system for estimating a three dimensional pose of one or more persons in a scene, the system comprising:
one or more cameras, the one or more cameras configured to capture one or more images of the scene; and
a data processor including at least one hardware component, the data processor configured to execute computer executable instructions, the computer executable instructions comprising instructions for:
receiving the one or more images of the scene from the one or more cameras;
extracting features from the one or more images of the scene for providing inputs to a three dimensional pose estimation neural network;
generating, by using the three dimensional pose estimation neural network, vertices of a canonical human mesh model for the one or more images of the scene;
retrieving, from an annotation server, a particular weight matrix of a plurality of weight matrices that corresponds to a desired three dimensional keypoint set, the plurality of weight matrices corresponding to different applications of the system, the desired three dimensional keypoint set being a three dimensional keypoint set for a particular user-desired one of the different applications of the system;
generating the desired three dimensional keypoint set by multiplying the retrieved particular weight matrix with the vertices of the canonical human mesh model; and
wherein, during training of the system, the data processor is further configured to execute computer executable instructions for:
retrieving one or more canonical human mesh samples corresponding to the particular user-desired one of the different applications of the system from the annotation server, and displaying the one or more canonical human mesh samples to one or more human annotators using an annotation interface so that the one or more human annotators are able to annotate three dimensional keypoints that are located inside of the human mesh or on the human mesh to produce user-defined annotated three dimensional keypoint locations;
determining the particular weight matrix, which is used to generate the desired three dimensional keypoint set, from the user-defined annotated three dimensional keypoint locations and the vertices of the canonical human mesh model; and
storing the determined particular weight matrix on the annotation server; and
wherein the vertices of the canonical human mesh model are generated independently from the user-defined annotated three dimensional keypoint locations.
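The keypoint-generation step recited in claim 1 reduces to a single matrix multiplication of a per-application weight matrix with the vertices of the canonical human mesh model. The following is a minimal sketch of that step, assuming a SMPL-like canonical mesh of 6,890 vertices and a 17-point keypoint set; the vertex and keypoint counts, the function name, and the row normalization are illustrative assumptions, not taken from the patent.

```python
import numpy as np

NUM_VERTICES = 6890   # assumption: SMPL-like canonical human mesh
NUM_KEYPOINTS = 17    # assumption: size of one user-desired keypoint set

def regress_keypoints(weight_matrix: np.ndarray,
                      mesh_vertices: np.ndarray) -> np.ndarray:
    """Generate the desired 3D keypoint set by multiplying the retrieved
    weight matrix with the vertices of the canonical human mesh model.

    weight_matrix : (K, V), one row per keypoint; each row linearly
                    combines mesh vertices into a single 3D point.
    mesh_vertices : (V, 3), output of the 3D pose estimation network.
    returns       : (K, 3) keypoint locations.
    """
    return weight_matrix @ mesh_vertices

# Usage sketch: the vertices would come from the pose estimation network and
# the weight matrix from the annotation server; both are mocked here.
vertices = np.random.rand(NUM_VERTICES, 3)
W = np.random.rand(NUM_KEYPOINTS, NUM_VERTICES)
W /= W.sum(axis=1, keepdims=True)   # convex rows keep keypoints inside/on the mesh hull
keypoints = regress_keypoints(W, vertices)
print(keypoints.shape)              # (17, 3)
```

Because only the weight matrix changes between applications, one mesh prediction can serve several user-desired keypoint conventions, which is the point of storing multiple weight matrices on the annotation server.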
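For the training-time step, the claim recites determining the particular weight matrix from the user-defined annotated three dimensional keypoint locations and the vertices of the canonical human mesh model, without specifying a solver. The sketch below assumes an ordinary least-squares (minimum-norm) fit over the annotated samples; the shapes, names, and the suggestion of regularization are illustrative assumptions.

```python
import numpy as np

def fit_weight_matrix(canonical_vertices: np.ndarray,
                      annotated_keypoints: np.ndarray) -> np.ndarray:
    """Solve for W (K, V) such that W @ V_s approximates Y_s for every
    annotated sample s, where V_s are canonical mesh vertices and Y_s are
    the annotator-placed 3D keypoints.

    canonical_vertices  : (S, V, 3) mesh vertices for S annotated samples.
    annotated_keypoints : (S, K, 3) annotated keypoint locations per sample.
    returns             : (K, V) weight matrix to store on the annotation server.
    """
    S, V, _ = canonical_vertices.shape
    K = annotated_keypoints.shape[1]
    # Rewrite W @ V_s = Y_s as V_s.T @ W.T = Y_s.T and stack the S samples
    # into one linear system A @ X = B with X = W.T.
    A = canonical_vertices.transpose(0, 2, 1).reshape(3 * S, V)   # (3S, V)
    B = annotated_keypoints.transpose(0, 2, 1).reshape(3 * S, K)  # (3S, K)
    # Minimum-norm least-squares solution; a practical system might add
    # regularization (e.g., ridge or sparsity) when few samples are annotated.
    X, *_ = np.linalg.lstsq(A, B, rcond=None)
    return X.T

# Usage sketch with synthetic shapes only (not real annotations).
S, V, K = 10, 6890, 17
meshes = np.random.rand(S, V, 3)
labels = np.random.rand(S, K, 3)
W = fit_weight_matrix(meshes, labels)
print(W.shape)  # (17, 6890)
```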