US 12,067,746 B2
Systems and methods for using computer vision to pick up small objects
Vage Taamazyan, Moscow (RU); Guy Michael Stoppi, Victoria (CA); Bradley Craig Anderson Brown, Oakville (CA); Agastya Kalra, Nepean (CA); Achuta Kadambi, Los Altos Hills, CA (US); and Kartik Venkataraman, San Jose, CA (US)
Assigned to Intrinsic Innovation LLC, Mountain View, CA (US)
Filed by INTRINSIC INNOVATION LLC, Mountain View, CA (US)
Filed on May 7, 2021, as Appl. No. 17/314,929.
Prior Publication US 2022/0375125 A1, Nov. 24, 2022
Int. Cl. G06T 7/73 (2017.01); B25J 9/16 (2006.01); B25J 13/08 (2006.01); G05B 19/4155 (2006.01); G06T 7/269 (2017.01); G06T 7/55 (2017.01)
CPC G06T 7/75 (2017.01) [B25J 9/1697 (2013.01); B25J 13/08 (2013.01); G05B 19/4155 (2013.01); G06T 7/269 (2017.01); G06T 7/55 (2017.01); G05B 2219/50391 (2013.01); G06T 2207/10024 (2013.01)] 32 Claims
OG exemplary drawing
 
1. A method comprising:
receiving, by a processor, an observed image depicting a plurality of objects from a viewpoint;
computing, by the processor, an instance segmentation map identifying a class of the plurality of objects depicted in the observed image;
loading, by the processor, a 3-D model corresponding to the identified class of the plurality of objects, wherein the plurality of objects are homogeneous objects of the same identified class;
computing, by the processor, a rendered image comprising a plurality of renderings of the plurality of objects based on the 3-D model in accordance with respective corresponding initial pose estimates of the plurality of objects and the viewpoint of the observed image;
computing, by the processor, a plurality of dense image-to-object correspondences between the observed image of the plurality of objects and the 3-D model based on the observed image and the rendered image; and
computing, by the processor, a plurality of poses of the plurality of objects based on the dense image-to-object correspondences.
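For orientation only, the following is a minimal, hypothetical Python sketch of the pipeline recited in claim 1 above; it is not the patented implementation. The callables segment_instances, render_scene, and dense_correspondences, together with model_db, camera_matrix, and initial_poses, are assumed placeholders supplied by the reader, and cv2.solvePnPRansac merely stands in for whichever 2-D/3-D pose solver the full method uses.

# Hypothetical sketch of the claimed pose-estimation pipeline.
# segment_instances, render_scene, and dense_correspondences are placeholder
# callables (not specified by the patent); only NumPy and OpenCV calls are real.
import numpy as np
import cv2

def estimate_poses(observed_image, camera_matrix, model_db, initial_poses,
                   segment_instances, render_scene, dense_correspondences):
    # 1. Instance segmentation: masks plus one class label, since the
    #    objects are homogeneous (same identified class).
    masks, class_label = segment_instances(observed_image)

    # 2. Load the 3-D model corresponding to the identified class.
    model_3d = model_db[class_label]

    # 3. Render the objects from the observed viewpoint at their
    #    respective initial pose estimates.
    rendered_image = render_scene(model_3d, initial_poses, camera_matrix,
                                  observed_image.shape[:2])

    # 4. Dense image-to-object correspondences between the observed image
    #    and the 3-D model, guided by the rendered image.
    #    Each entry pairs 2-D image points with 3-D model points per object.
    correspondences = dense_correspondences(observed_image, rendered_image,
                                            masks, model_3d)

    # 5. Solve each object's pose from its 2-D/3-D correspondences
    #    (RANSAC PnP is a stand-in for the unspecified pose solver).
    poses = []
    for points_2d, points_3d in correspondences:
        ok, rvec, tvec, _ = cv2.solvePnPRansac(
            np.asarray(points_3d, dtype=np.float32),
            np.asarray(points_2d, dtype=np.float32),
            camera_matrix, distCoeffs=None)
        poses.append((rvec, tvec) if ok else None)
    return poses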