US 12,175,741 B2
Systems and methods for a vision guided end effector
Vage Taamazyan, Moscow (RU); Kartik Venkataraman, San Jose, CA (US); Agastya Kalra, Nepean (CA); and Achuta Kadambi, Los Altos Hills, CA (US)
Assigned to Intrinsic Innovation LLC, Mountain View, CA (US)
Filed by INTRINSIC INNOVATION LLC, Mountain View, CA (US)
Filed on Jun. 22, 2021, as Appl. No. 17/354,924.
Prior Publication US 2022/0405506 A1, Dec. 22, 2022
Int. Cl. G06K 9/00 (2022.01); B25J 9/16 (2006.01); B25J 15/06 (2006.01); G06T 7/11 (2017.01); G06T 7/50 (2017.01); G06T 7/70 (2017.01); G06V 20/10 (2022.01)
CPC G06V 20/10 (2022.01) [B25J 9/1612 (2013.01); B25J 9/1697 (2013.01); B25J 15/0608 (2013.01); G06T 7/11 (2017.01); G06T 7/50 (2017.01); G06T 7/70 (2017.01)] 17 Claims
OG exemplary drawing
 
1. A computer-implemented method for picking an object from a plurality of objects by a robot having an end effector, the method comprising:
obtaining an image of a scene containing the plurality of objects;
generating a segmentation map for the plurality of objects in the scene;
determining shapes of the plurality of objects based on the segmentation map including obtaining, for each of one or more objects of the plurality of objects in the segmentation map, a respective 3D CAD model of the object and generating a respective shape of the object from the 3D CAD model of the object;
adjusting the end effector including shaping the end effector according to a shape belonging to an object of the plurality of objects;
approaching the plurality of objects; and
picking the object of the plurality of objects with the end effector adjusted according to the shape of the object.
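For orientation only, the method of claim 1 can be sketched as a toy pipeline. Every name below (`DetectedObject`, `segment`, `CAD_SHAPES`, `EndEffector`, and so on) is invented for this sketch; the patent claims the method steps, not any particular implementation, and real systems would use actual image segmentation and CAD geometry rather than the stand-ins here.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    object_id: int
    mask: list            # stand-in for the object's region in the segmentation map
    shape: str = ""       # shape derived from the object's 3D CAD model

# Stand-in CAD lookup: maps an object id to a gripping shape derived from
# its 3D CAD model ("obtaining ... a respective 3D CAD model of the object").
CAD_SHAPES = {1: "cylinder", 2: "cuboid"}

def segment(image):
    """Generate a segmentation map for the plurality of objects (stub)."""
    return [DetectedObject(1, [0, 1]), DetectedObject(2, [2, 3])]

def determine_shapes(objects):
    """Generate each object's shape from its CAD model."""
    for obj in objects:
        obj.shape = CAD_SHAPES[obj.object_id]
    return objects

class EndEffector:
    def __init__(self):
        self.shape = None

    def adjust(self, shape):
        # Shape the end effector according to the target object's shape.
        self.shape = shape

    def pick(self, obj):
        assert self.shape == obj.shape, "end effector not adjusted to object"
        return obj.object_id

def pick_object(image, target_id):
    # Obtain image -> segmentation map -> per-object shapes from CAD models.
    objects = determine_shapes(segment(image))
    target = next(o for o in objects if o.object_id == target_id)
    effector = EndEffector()
    effector.adjust(target.shape)   # adjust/shape the end effector
    # ... robot approaches the plurality of objects ...
    return effector.pick(target)    # pick with the adjusted end effector

print(pick_object(image=None, target_id=2))
```

The ordering mirrors the claim: the end effector is adjusted to the target object's CAD-derived shape before the approach and pick steps.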