US 12,008,787 B2
Object pose estimation
Shubham Shrivastava, Sunnyvale, CA (US); Gaurav Pandey, College Station, TX (US); and Punarjay Chakravarty, Campbell, CA (US)
Assigned to Ford Global Technologies, LLC, Dearborn, MI (US)
Filed by Ford Global Technologies, LLC, Dearborn, MI (US)
Filed on Jul. 20, 2021, as Appl. No. 17/380,174.
Prior Publication US 2023/0025152 A1, Jan. 26, 2023
Int. Cl. G06T 7/73 (2017.01); B60W 10/04 (2006.01); B60W 10/18 (2012.01); B60W 10/20 (2006.01); G05B 13/02 (2006.01); G06N 3/084 (2023.01)
CPC G06T 7/74 (2017.01) [B60W 10/04 (2013.01); B60W 10/18 (2013.01); B60W 10/20 (2013.01); G05B 13/027 (2013.01); G06N 3/084 (2013.01); G06T 7/75 (2017.01); G06T 2207/10024 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30204 (2013.01); G06T 2207/30232 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computer, comprising:
a processor; and
a memory, the memory including instructions executable by the processor to:
input a depth image of an object to a deep neural network to determine a first four degree-of-freedom pose of the object;
input the first four degree-of-freedom pose and a three-dimensional model of the object to a silhouette rendering program to determine a first two-dimensional silhouette of the object;
threshold the depth image to determine a second two-dimensional silhouette of the object;
determine a loss function based on comparing the first two-dimensional silhouette of the object, which is based on the first four degree-of-freedom pose and the three-dimensional model of the object, to the second two-dimensional silhouette of the object, which is based on the depth image of the object;
optimize deep neural network parameters based on the loss function; and
output the deep neural network.
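
The claim describes a self-supervised training loop: a deep neural network predicts a four degree-of-freedom pose from a depth image, a silhouette rendered from that pose and a three-dimensional model is compared against a silhouette obtained by thresholding the same depth image, and the resulting loss is backpropagated to optimize the network parameters. The sketch below is a minimal PyTorch illustration of that loop, not the patented implementation; the PoseNet architecture, the soft point-splat renderer `render_silhouette`, the camera intrinsics `K`, the depth threshold, and the binary cross-entropy loss are all assumptions introduced for illustration.

```python
# Hypothetical sketch of the claimed training loop:
# depth image -> DNN -> 4-DoF pose (x, y, z, yaw) -> rendered silhouette,
# compared against a silhouette obtained by thresholding the depth image.
# Architecture, renderer, intrinsics, and loss are illustrative assumptions.
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """Tiny CNN mapping a 1-channel depth image to a 4-DoF pose (x, y, z, yaw)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 4)  # x, y, z translation + yaw angle

    def forward(self, depth):
        return self.head(self.features(depth).flatten(1))

def render_silhouette(pose, model_points, K, img_size=(128, 128), sigma=1.5):
    """Differentiable 'soft' silhouette: rotate/translate the model point cloud
    by the 4-DoF pose, project with pinhole intrinsics K, and splat each point
    as a Gaussian so gradients flow back through the pose."""
    x, y, z, yaw = pose[0], pose[1], pose[2], pose[3]
    c, s = torch.cos(yaw), torch.sin(yaw)
    zero, one = torch.zeros_like(c), torch.ones_like(c)
    R = torch.stack([torch.stack([c, -s, zero]),
                     torch.stack([s,  c, zero]),
                     torch.stack([zero, zero, one])])           # yaw rotation
    pts = model_points @ R.T + torch.stack([x, y, z])           # (N, 3) camera frame
    uvw = pts @ K.T                                             # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-3)               # (N, 2) pixel coords
    H, W = img_size
    ys = torch.arange(H, dtype=uv.dtype).view(H, 1, 1)
    xs = torch.arange(W, dtype=uv.dtype).view(1, W, 1)
    d2 = (xs - uv[:, 0]) ** 2 + (ys - uv[:, 1]) ** 2            # (H, W, N) distances
    # Soft union of per-point Gaussians -> silhouette values in (0, 1)
    return 1.0 - torch.prod(1.0 - torch.exp(-d2 / (2 * sigma ** 2)), dim=-1)

# --- one illustrative training step ------------------------------------------
net = PoseNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

depth = torch.rand(1, 1, 128, 128) * 5.0         # placeholder depth image (meters)
model_points = torch.rand(500, 3) - 0.5          # placeholder 3-D model as a point cloud
K = torch.tensor([[100.0, 0.0, 64.0],
                  [0.0, 100.0, 64.0],
                  [0.0,   0.0,  1.0]])           # assumed pinhole intrinsics

optimizer.zero_grad()
pose = net(depth)[0]                                        # first 4-DoF pose from the DNN
rendered_sil = render_silhouette(pose, model_points, K)     # first 2-D silhouette
depth_sil = (depth[0, 0] < 3.0).float()                     # second silhouette via thresholding
loss = nn.functional.binary_cross_entropy(
    rendered_sil.clamp(1e-6, 1 - 1e-6), depth_sil)          # compare the two silhouettes
loss.backward()                                             # backpropagation (cf. G06N 3/084)
optimizer.step()                                            # optimize DNN parameters
# The trained network can then be output, e.g. torch.save(net.state_dict(), ...).
```

The silhouette renderer must be differentiable so that the comparison loss between the rendered and thresholded silhouettes can be backpropagated through the predicted pose into the network parameters; the Gaussian splat above is one simple way to achieve that in a sketch, standing in for whatever silhouette rendering program the claim actually contemplates.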