CPC B25J 9/1612 (2013.01) [B25J 9/08 (2013.01); B25J 9/1697 (2013.01); G05B 19/4155 (2013.01); G06N 3/08 (2013.01); G06T 7/73 (2017.01); G05B 2219/39505 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01)]

20 Claims

1. A method, comprising:
determining a set of data for an object in a scene, each item of the set of data comprising a respective keypoint and a corresponding object component identifier;
determining a set of candidate grasp locations for the object, using the set of data, wherein each candidate grasp location of the set of candidate grasp locations is associated with a respective occlusion score determined by a machine learning model trained on images labeled with predetermined occlusion scores for locations in the images;
determining, based on the set of candidate grasp locations, one or more candidate grasp proposals for grasping the object;
selecting a candidate grasp proposal from the one or more candidate grasp proposals as a final grasp proposal based on one or more criteria; and
storing the final grasp proposal in memory to be retrieved for controlling a robot to grasp the object.
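The claim recites a sequence of data-processing steps: keypoint data with component identifiers, occlusion-scored candidate grasp locations, candidate grasp proposals, selection of a final proposal, and storage for robot control. The Python sketch below illustrates one possible arrangement of those steps; it is not the claimed implementation. All names (KeypointDatum, keypoint_model, occlusion_model, propose_grasp), the data structures, and the minimum-occlusion selection criterion are illustrative assumptions.

```python
# Illustrative sketch only: names, data structures, and the selection
# criterion are assumptions, not the claimed implementation.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class KeypointDatum:
    keypoint: Tuple[float, float]   # image-space keypoint for the object
    component_id: int               # identifier of the object component


@dataclass
class GraspLocation:
    position: Tuple[float, float]
    occlusion_score: float          # predicted by a trained ML model


@dataclass
class GraspProposal:
    location: GraspLocation
    approach_angle: float


def propose_grasp(image, keypoint_model, occlusion_model, memory: dict) -> GraspProposal:
    """Hypothetical end-to-end pipeline mirroring the claimed steps."""
    # 1. Determine a set of data: keypoints with object component identifiers.
    data: List[KeypointDatum] = [
        KeypointDatum(kp, cid) for kp, cid in keypoint_model.predict(image)
    ]

    # 2. Determine candidate grasp locations; each receives an occlusion score
    #    from a model trained on images labeled with occlusion scores.
    locations = [
        GraspLocation(d.keypoint, occlusion_model.predict(image, d.keypoint))
        for d in data
    ]

    # 3. Determine candidate grasp proposals from the candidate locations
    #    (here: one proposal per location, with a fixed approach angle).
    proposals = [GraspProposal(loc, approach_angle=0.0) for loc in locations]

    # 4. Select a final proposal based on one or more criteria
    #    (here: lowest predicted occlusion).
    final = min(proposals, key=lambda p: p.location.occlusion_score)

    # 5. Store the final proposal so a robot controller can retrieve it later.
    memory["final_grasp_proposal"] = final
    return final
```

In this sketch the selection criterion is simply the minimum occlusion score; the claim leaves the criteria open, so any ranking over the candidate grasp proposals could take its place.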