CPC B25J 13/089 (2013.01) [B25J 9/163 (2013.01); G06T 1/0014 (2013.01); G06T 7/73 (2017.01); G06T 2207/10028 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30164 (2013.01)]

22 Claims

1. An execution method of an object pose estimation system, comprising:
determining a feature extraction strategy of a pose estimation unit by a feature extraction strategy neural network model according to a scene point cloud;
according to the feature extraction strategy, extracting a model feature from a 3D model of an object and extracting a scene feature from the scene point cloud by the pose estimation unit; and
comparing the model feature with the scene feature by the pose estimation unit to obtain an estimated pose of the object;
wherein the feature extraction strategy comprises a model feature extraction strategy and a scene feature extraction strategy, and the pose estimation unit includes a 3D model feature extractor and a scene point cloud feature extractor;
the model feature is extracted, by the 3D model feature extractor, from the 3D model according to the model feature extraction strategy; and
the scene feature is extracted, by the scene point cloud feature extractor, from the scene point cloud according to the scene feature extraction strategy.
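The claim recites a three-step flow: a strategy network reads the scene point cloud and outputs a feature extraction strategy, the pose estimation unit's two extractors apply the model and scene parts of that strategy, and the resulting features are compared to yield an estimated pose. The Python sketch below is a minimal, hedged illustration of one possible reading of that flow; the strategy heuristic, the toy radius-based features, and the nearest-neighbour matching with Kabsch alignment are illustrative stand-ins and are not taken from the patent.

```python
# Illustrative sketch of the claimed flow (all names and heuristics are hypothetical):
# 1) a strategy network maps the scene point cloud to a feature extraction strategy,
# 2) separate extractors compute model and scene features under that strategy,
# 3) the features are compared and a rigid pose is solved (Kabsch).

import numpy as np


def strategy_network(scene_points: np.ndarray) -> dict:
    """Stand-in for the feature extraction strategy neural network model."""
    # Hypothetical rule: denser scenes get a smaller neighbourhood radius.
    radius = 0.05 if len(scene_points) > 5000 else 0.1
    return {"model": {"radius": radius}, "scene": {"radius": radius}}


def extract_features(points: np.ndarray, strategy: dict) -> np.ndarray:
    """Toy local feature: mean offset of neighbours within the strategy radius."""
    radius = strategy["radius"]
    feats = np.zeros_like(points)
    for i, p in enumerate(points):
        mask = np.linalg.norm(points - p, axis=1) < radius
        feats[i] = points[mask].mean(axis=0) - p
    return feats


def estimate_pose(model_pts, model_feat, scene_pts, scene_feat):
    """Compare model and scene features, then solve a rigid transform (Kabsch)."""
    # Nearest-neighbour matching in feature space.
    d = np.linalg.norm(model_feat[:, None, :] - scene_feat[None, :, :], axis=2)
    dst = scene_pts[d.argmin(axis=1)]
    src = model_pts
    # Kabsch: rotation and translation aligning the matched point sets.
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.uniform(-0.1, 0.1, size=(200, 3))       # 3D model points
    true_t = np.array([0.5, 0.0, 0.2])                  # simulated ground-truth pose
    scene = model + true_t                              # simulated scene point cloud

    strategy = strategy_network(scene)                            # step 1: strategy
    model_feat = extract_features(model, strategy["model"])       # step 2: model features
    scene_feat = extract_features(scene, strategy["scene"])       # step 2: scene features
    R, t = estimate_pose(model, model_feat, scene, scene_feat)    # step 3: compare
    print("estimated translation:", np.round(t, 3))
```

In this sketch the strategy only carries a neighbourhood radius; in a real system it could select extractor parameters, sampling density, or network branches, but those specifics are not stated in the claim and are left as assumptions here.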