US 12,138,815 B2
Method and computing system for performing motion planning based on image information generated by a camera
Xutao Ye, Tokyo (JP); Puttichai Lertkultanon, Tokyo (JP); and Rosen Nikolaev Diankov, Tokyo (JP)
Assigned to MUJIN, INC., Tokyo (JP)
Filed by MUJIN, INC., Tokyo (JP)
Filed on Jun. 8, 2023, as Appl. No. 18/331,650.
Application 18/331,650 is a continuation of application No. 17/385,349, filed on Jul. 26, 2021, granted, now Pat. No. 11,717,971.
Application 17/385,349 is a continuation of application No. 17/084,272, filed on Oct. 29, 2020, granted, now Pat. No. 11,103,998, issued on Aug. 31, 2021.
Claims priority of provisional application 62/946,973, filed on Dec. 12, 2019.
Prior Publication US 2024/0017417 A1, Jan. 18, 2024
Int. Cl. G05B 15/00 (2006.01); B25J 9/16 (2006.01); B25J 13/08 (2006.01); B25J 15/00 (2006.01); B25J 19/02 (2006.01); B65G 59/02 (2006.01); G05B 19/00 (2006.01); G05B 19/4155 (2006.01); G06F 18/2413 (2023.01); G06T 7/60 (2017.01); G06T 7/73 (2017.01); G06V 10/764 (2022.01); G06V 20/10 (2022.01); H04N 23/54 (2023.01); H04N 23/695 (2023.01)
CPC B25J 9/1697 (2013.01) [B25J 9/1612 (2013.01); B25J 9/1653 (2013.01); B25J 9/1664 (2013.01); B25J 9/1669 (2013.01); B25J 9/1671 (2013.01); B25J 13/08 (2013.01); B25J 15/0061 (2013.01); B25J 19/023 (2013.01); B65G 59/02 (2013.01); G05B 19/4155 (2013.01); G06F 18/2413 (2023.01); G06T 7/60 (2013.01); G06T 7/74 (2017.01); G06V 10/764 (2022.01); G06V 20/10 (2022.01); H04N 23/54 (2023.01); H04N 23/695 (2023.01); G05B 2219/40269 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20164 (2013.01); G06T 2207/30244 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computing system comprising:
a communication interface configured to communicate with: (i) a robot having an end effector apparatus having a mounting structure, a first gripper member movable along a first side of the mounting structure, and a second gripper member movable along a second side of the mounting structure, and (ii) a camera mounted on the end effector apparatus and having a camera field of view;
at least one processing circuit configured, when an object is or has been in the camera field of view, to:
determine a first estimate of an object structure associated with the object;
identify, based on the first estimate of the object structure, a corner of the object structure;
determine a camera pose which, when adopted by the camera, causes the camera to be pointed at the corner of the object structure such that the camera field of view encompasses the corner and at least a portion of a first side and a second side of an outer surface of the object structure;
receive image information for representing the object structure, wherein the image information is generated by the camera while the camera is in the camera pose;
determine a second estimate of the object structure based on the image information;
generate a motion plan based on at least the second estimate of the object structure, wherein the motion plan is for causing robot interaction between the robot and the object, including orienting the end effector apparatus such that the first side of the mounting structure aligns with the first side of the outer surface of the object structure, and the second side of the mounting structure aligns with the second side of the outer surface of the object structure; and
output one or more object interaction movement commands for causing the robot interaction, including:
control movement of the first gripper member along the first side of the mounting structure towards a first determined position adjacent the first side of the object structure; and
control movement of the second gripper member along the second side of the mounting structure towards a second determined position adjacent the second side of the object structure.
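For orientation, the sequence recited in claim 1 maps onto a sense-then-plan loop: a coarse first estimate of the object structure, selection of a corner, a camera pose aimed at that corner, a close-up image yielding a refined second estimate, and a motion plan that aligns the mounting structure with the two outer sides and positions both gripper members. The Python sketch below illustrates that loop under simplifying assumptions (an axis-aligned box model and a point cloud standing in for the camera's image information); all names such as ObjectEstimate, camera_pose_for_corner, refine_estimate, and plan_grip are hypothetical illustrations, not the claimed implementation or any vendor API.

```python
# Minimal sketch of the perception-and-planning loop recited in claim 1.
# All classes and functions here are hypothetical stand-ins.
from dataclasses import dataclass
import numpy as np


@dataclass
class ObjectEstimate:
    """Coarse box model of the object structure: a top corner plus two sides."""
    corner: np.ndarray      # 3-D position of a top corner of the object structure
    side1_dir: np.ndarray   # unit vector along the first outer side
    side2_dir: np.ndarray   # unit vector along the second outer side
    side1_len: float
    side2_len: float


def camera_pose_for_corner(est: ObjectEstimate, standoff: float = 0.5) -> np.ndarray:
    """Return a 4x4 camera pose whose optical axis points at the identified
    corner so the field of view covers the corner and parts of both sides."""
    # Offset the camera outward from both sides and upward, so the two outer
    # faces adjacent to the corner remain in view.
    eye = est.corner + standoff * (np.array([0.0, 0.0, 1.0])
                                   - est.side1_dir - est.side2_dir)
    z_axis = est.corner - eye
    z_axis /= np.linalg.norm(z_axis)                 # optical axis toward the corner
    x_axis = np.cross(np.array([0.0, 0.0, 1.0]), z_axis)
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)
    pose = np.eye(4)
    pose[:3, :3] = np.column_stack([x_axis, y_axis, z_axis])
    pose[:3, 3] = eye
    return pose


def refine_estimate(est: ObjectEstimate, point_cloud: np.ndarray) -> ObjectEstimate:
    """Second estimate: tighten the side lengths from the close-up point cloud
    (a trivial bounding-box fit stands in for the real refinement)."""
    extents = np.abs(point_cloud.max(axis=0) - point_cloud.min(axis=0))
    return ObjectEstimate(est.corner, est.side1_dir, est.side2_dir,
                          float(extents[0]), float(extents[1]))


def plan_grip(est: ObjectEstimate) -> dict:
    """Motion plan: align the mounting structure with the two outer sides and
    move each gripper member to a stop position adjacent its side."""
    return {
        "align_side1_with": est.side1_dir,
        "align_side2_with": est.side2_dir,
        "gripper1_travel": est.side1_len,   # first gripper member, first side
        "gripper2_travel": est.side2_len,   # second gripper member, second side
    }


if __name__ == "__main__":
    first = ObjectEstimate(np.array([1.0, 1.0, 0.3]),
                           np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
                           0.4, 0.4)
    pose = camera_pose_for_corner(first)
    # Stand-in for image information generated while the camera is in the pose.
    cloud = first.corner + np.random.rand(500, 3) * np.array([0.38, 0.42, -0.3])
    second = refine_estimate(first, cloud)
    print(plan_grip(second))
```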
 
19. A method performed by a computing system, wherein the computing system is configured to communicate with: (i) a robot having an end effector apparatus having a mounting structure, a first gripper member movable along a first side of the mounting structure, and a second gripper member movable along a second side of the mounting structure, and (ii) a camera mounted on the end effector apparatus and having a camera field of view, the method comprising:
determining a first estimate of an object structure associated with an object that is or has been in the camera field of view;
identifying, based on the first estimate of the object structure, a corner of the object structure;
determining a camera pose which, when adopted by the camera, causes the camera to be pointed at the corner of the object structure such that the camera field of view encompasses the corner and at least a portion of a first side and a second side of an outer surface of the object structure;
receiving image information for representing the object structure, wherein the image information is generated by the camera while the camera is in the camera pose;
determining a second estimate of the object structure based on the image information;
generating a motion plan based on at least the second estimate of the object structure, wherein the motion plan is for causing robot interaction between the robot and the object, including orienting the end effector apparatus such that the first side of the mounting structure aligns with the first side of the outer surface of the object structure, and the second side of the mounting structure aligns with the second side of the outer surface of the object structure; and
outputting one or more object interaction movement commands for causing the robot interaction, including:
controlling movement of the first gripper member along the first side of the mounting structure towards a first determined position adjacent the first side of the object structure; and
controlling movement of the second gripper member along the second side of the mounting structure towards a second determined position adjacent the second side of the object structure.
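Claim 19 recites the same steps as a method. The corner-identification step ("identifying, based on the first estimate of the object structure, a corner") can be pictured with the brief sketch below, which assumes the first estimate is an axis-aligned bounding box and simply selects the top-face corner nearest the end-effector-mounted camera, so one close-up view can cover the corner and portions of both adjacent outer sides. The function pick_corner_for_viewing and its arguments are hypothetical, not the patented method.

```python
# Minimal sketch of corner selection from a coarse first estimate (claim 19).
import numpy as np


def pick_corner_for_viewing(box_min: np.ndarray, box_max: np.ndarray,
                            camera_position: np.ndarray) -> np.ndarray:
    """Return the top-face corner of an axis-aligned box estimate nearest the
    camera; its two adjacent vertical faces serve as the 'first side' and
    'second side' of the outer surface to be imaged."""
    xs = (box_min[0], box_max[0])
    ys = (box_min[1], box_max[1])
    top_z = box_max[2]
    corners = np.array([[x, y, top_z] for x in xs for y in ys])
    dists = np.linalg.norm(corners - camera_position, axis=1)
    return corners[np.argmin(dists)]


if __name__ == "__main__":
    # Coarse first estimate of the object structure as an axis-aligned box.
    box_min = np.array([1.0, 1.0, 0.0])
    box_max = np.array([1.4, 1.4, 0.3])
    camera = np.array([0.0, 0.0, 1.5])   # camera mounted on the end effector
    print(pick_corner_for_viewing(box_min, box_max, camera))
```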