US 12,290,917 B2
Object pose estimation system, execution method thereof and graphic user interface
Dong-Chen Tsai, Miaoli (TW); Ping-Chang Shih, Yuanlin (TW); Yu-Ru Huang, Hualien (TW); and Hung-Chun Chou, Taipei (TW)
Assigned to INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE, Hsinchu (TW)
Filed by INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE, Hsinchu (TW)
Filed on Oct. 19, 2021, as Appl. No. 17/505,041.
Claims priority of application No. 110117478 (TW), filed on May 14, 2021.
Prior Publication US 2022/0362945 A1, Nov. 17, 2022
Int. Cl. G06K 9/00 (2022.01); B25J 9/16 (2006.01); B25J 13/08 (2006.01); G06T 1/00 (2006.01); G06T 7/73 (2017.01)
CPC B25J 13/089 (2013.01) [B25J 9/163 (2013.01); G06T 1/0014 (2013.01); G06T 7/73 (2017.01); G06T 2207/10028 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30164 (2013.01)] 22 Claims
OG exemplary drawing
 
1. An execution method of an object pose estimation system, comprising:
determining a feature extraction strategy of a pose estimation unit by a feature extraction strategy neural network model according to a scene point cloud;
according to the feature extraction strategy, extracting a model feature from a 3D model of an object and extracting a scene feature from the scene point cloud by the pose estimation unit; and
comparing the model feature with the scene feature by the pose estimation unit to obtain an estimated pose of the object;
wherein the feature extraction strategy comprises a model feature extraction strategy and a scene feature extraction strategy, and the pose estimation unit includes a 3D model feature extractor and a scene point cloud feature extractor;
the model feature is extracted, by the 3D model feature extractor, from the 3D model according to the model feature extraction strategy; and
the scene feature is extracted, by the scene point cloud feature extractor, from the scene point cloud according to the scene feature extraction strategy.
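The claimed pipeline can be read as three stages: (1) a strategy network looks at the scene point cloud and outputs a feature extraction strategy, (2) a model extractor and a scene extractor apply that strategy to the object's 3D model and to the scene respectively, and (3) the two feature sets are compared to estimate the object's pose. The toy Python sketch below traces that data flow only; the patent discloses no source code, so the strategy heuristic (a voxel size chosen from point density, standing in for the neural network), the centroid "feature", and all function names are illustrative assumptions, and the comparison step recovers only a translation rather than a full 6-DoF pose.

```python
import numpy as np

def choose_strategy(scene_points):
    # Stand-in for the feature extraction strategy neural network:
    # derive a "strategy" (here, just a voxel size for downsampling)
    # from the scene point cloud's density. Purely illustrative.
    extent = scene_points.max(axis=0) - scene_points.min(axis=0)
    density = len(scene_points) / max(float(np.prod(extent)), 1e-9)
    voxel = 0.05 if density > 1000 else 0.1
    return {"model_voxel": voxel, "scene_voxel": voxel}

def extract_feature(points, voxel):
    # Toy feature extractor: voxel-downsample the cloud (keep one
    # point per occupied voxel), then return the centroid as a
    # single descriptor. Real extractors would emit learned features.
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)].mean(axis=0)

def estimate_pose(model_points, scene_points):
    # (1) strategy from the scene, (2) per-source extraction under
    # that strategy, (3) compare model vs. scene features.
    strategy = choose_strategy(scene_points)
    model_feat = extract_feature(model_points, strategy["model_voxel"])
    scene_feat = extract_feature(scene_points, strategy["scene_voxel"])
    # Comparing centroid features yields only a translation estimate.
    return scene_feat - model_feat
```

Note the structural point the claim emphasizes: the strategy is computed once from the scene and then governs *both* extractors, which is why `choose_strategy` returns separate `model_voxel` and `scene_voxel` entries even though this sketch sets them equal.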