US 11,883,966 B2
Method and computing system for performing object detection or robot interaction planning based on image information generated by a camera
Xutao Ye, Tokyo (JP); Puttichai Lertkultanon, Tokyo (JP); and Rosen Nikolaev Diankov, Tokyo (JP)
Assigned to MUJIN, INC., Tokyo (JP)
Filed by Mujin, Inc., Tokyo (JP)
Filed on Dec. 9, 2020, as Appl. No. 17/116,436.
Claims priority of provisional application 62/946,973, filed on Dec. 12, 2019.
Prior Publication US 2021/0178593 A1, Jun. 17, 2021
Int. Cl. B25J 19/02 (2006.01); B25J 9/16 (2006.01); G06T 7/73 (2017.01); B25J 13/08 (2006.01); B25J 15/00 (2006.01); B65G 59/02 (2006.01); G05B 19/4155 (2006.01); G06T 7/60 (2017.01); G06F 18/2413 (2023.01); H04N 23/54 (2023.01); H04N 23/695 (2023.01); G06V 10/764 (2022.01); G06V 20/10 (2022.01)
CPC B25J 9/1697 (2013.01) [B25J 9/1612 (2013.01); B25J 9/1653 (2013.01); B25J 9/1664 (2013.01); B25J 9/1669 (2013.01); B25J 9/1671 (2013.01); B25J 13/08 (2013.01); B25J 15/0061 (2013.01); B25J 19/023 (2013.01); B65G 59/02 (2013.01); G05B 19/4155 (2013.01); G06F 18/2413 (2023.01); G06T 7/60 (2013.01); G06T 7/74 (2017.01); G06V 10/764 (2022.01); G06V 20/10 (2022.01); H04N 23/54 (2023.01); H04N 23/695 (2023.01); G05B 2219/40269 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20164 (2013.01); G06T 2207/30244 (2013.01)] 20 Claims
OG exemplary drawing
 
20. A method performed by a computing system, the method comprising:
receiving first image information at the computing system, wherein the computing system is configured to communicate with: (i) a robot having a robot arm and an end effector apparatus disposed at or forming one end of the robot arm, and (ii) a camera mounted on the robot arm and having a camera field of view, wherein the first image information is for representing at least a first portion of an object structure associated with an object, wherein the first image information is generated by the camera when the camera is in a first camera pose in which the camera is pointed at the first portion of the object structure;
generating or updating, based on the first image information, sensed structure information that represents the object structure associated with the object;
identifying, based on the sensed structure information, an object corner associated with the object structure;
outputting, based on the sensed structure information gathered from the first image information received from the camera in the first camera pose, one or more camera placement movement commands which, when executed by the robot, cause the robot arm to move the camera to a second camera pose in which the camera is pointed at the object corner;
receiving second image information for representing the object structure, wherein the second image information is generated by the camera while the camera is in the second camera pose;
updating the sensed structure information based on the second image information to generate updated sensed structure information;
determining, based on the updated sensed structure information, an object type associated with the object;
determining one or more robot interaction locations based on the object type, wherein the one or more robot interaction locations are for interaction between the end effector apparatus and the object; and
outputting one or more robot interaction movement commands for causing the interaction at the one or more robot interaction locations, wherein the one or more robot interaction movement commands are generated based on the one or more robot interaction locations.
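The steps recited in claim 20 can be sketched as a control-flow skeleton. This is purely an illustrative reading of the claim language, not the patentee's implementation: every function, heuristic, and data structure below (`generate_sensed_structure`, the `max`-based corner pick, the point-count object classifier, the lookup of interaction locations by object type) is a hypothetical stand-in for whatever the disclosed system actually uses.

```python
from dataclasses import dataclass


@dataclass
class CameraPose:
    """Hypothetical camera pose: a position plus a description of its aim."""
    position: tuple
    target: str


def generate_sensed_structure(image_info, existing=None):
    """Generate or update sensed structure information (here, a point list)
    from image information; a real system might fuse depth maps or clouds."""
    points = list(existing or [])
    points.extend(image_info["points"])
    return points


def identify_object_corner(sensed_points):
    """Identify an object corner from the sensed structure.
    Placeholder heuristic: take the lexicographically maximal point."""
    return max(sensed_points)


def plan_robot_interaction(first_image_info, capture_image_at):
    """Walk the claimed method: first image -> corner -> second camera pose
    -> updated structure -> object type -> interaction movement commands.
    `capture_image_at` stands in for the camera returning second image
    information once the arm has moved it to the second pose."""
    # Generate sensed structure information from the first image information.
    sensed = generate_sensed_structure(first_image_info)

    # Identify an object corner associated with the object structure.
    corner = identify_object_corner(sensed)

    # Camera placement movement: move the camera to a second pose
    # in which it is pointed at the identified corner.
    second_pose = CameraPose(position=corner, target="object_corner")
    second_image_info = capture_image_at(second_pose)

    # Update the sensed structure information with the second image.
    sensed = generate_sensed_structure(second_image_info, existing=sensed)

    # Determine an object type (invented rule: enough points -> known box).
    object_type = "boxA" if len(sensed) > 4 else "unknown"

    # Determine robot interaction locations based on the object type,
    # then emit movement commands for the end effector apparatus.
    interaction_locations = {"boxA": [corner]}.get(object_type, [])
    return [("move_end_effector", loc) for loc in interaction_locations]
```

A usage sketch: calling `plan_robot_interaction` with dummy point data for the two images yields one `move_end_effector` command per interaction location, mirroring the claim's final "outputting one or more robot interaction movement commands" step.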