US 11,813,749 B2
Robot teaching by human demonstration
Kaimeng Wang, Fremont, CA (US); and Tetsuaki Kato, Fremont, CA (US)
Assigned to FANUC CORPORATION, Yamanashi (JP)
Filed by FANUC CORPORATION, Yamanashi (JP)
Filed on Apr. 8, 2020, as Appl. No. 16/843,185.
Prior Publication US 2021/0316449 A1, Oct. 14, 2021
This patent is subject to a terminal disclaimer.
Int. Cl. B25J 9/16 (2006.01)
CPC B25J 9/163 (2013.01) [B25J 9/1664 (2013.01); B25J 9/1612 (2013.01); G05B 2219/39546 (2013.01); G05B 2219/40116 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for programming a robot to perform an operation by human demonstration, said method comprising:
demonstrating the operation on a workpiece by a human hand;
analyzing camera images of the hand demonstrating the operation on the workpiece, by a computer receiving the images from a two-dimensional camera, to create demonstration data,
where the demonstration data defines a pick, move and place operation including a grasping step where hand pose and workpiece pose are determined when the hand grasps the workpiece, a move step where hand pose and workpiece pose are determined at a plurality of points defining a move path, and a place step where the workpiece pose is determined when the workpiece becomes stationary after the move step,
where the demonstration data includes a hand coordinate frame and a gripper coordinate frame corresponding to the hand coordinate frame,
where the gripper coordinate frame represents a gripper type selected from a group including a finger-type gripper and a vacuum-type gripper,
and where the hand coordinate frame is computed from the camera images by processing the camera images in a neural network convolution layer to identify key points on the human hand in the camera images, performing a Perspective-n-Point calculation using the key points on the human hand in the camera images and previously determined true lengths of a plurality of segments of digits of the human hand, and calculating a three-dimensional pose of the plurality of segments,
and the true lengths of the plurality of segments of the digits of the human hand were previously determined using a hand size image analysis step including providing a sizing image of the human hand on a fiducial marker grid, analyzing the sizing image to compute transformations from a marker coordinate system to a screen coordinate system, processing the sizing image in a neural network convolution layer to identify key points on the human hand in the sizing image, using the transformations to compute coordinates of the key points in the marker coordinate system, and calculating the true lengths of the segments of the digits of the human hand;
analyzing camera images of a new workpiece to determine an initial position and orientation of the new workpiece;
generating robot motion commands, based on the demonstration data and the initial position and orientation of the new workpiece, to cause the robot to perform the operation on the new workpiece; and
performing the operation on the new workpiece by the robot.
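The hand-size image analysis recited in claim 1 (sizing image on a fiducial marker grid, marker-to-screen transformations, key-point detection, true segment lengths) can be illustrated with a minimal sketch. The code below assumes an OpenCV ArUco-style board, a calibrated camera, and a hypothetical detect_hand_keypoints() wrapper around the convolutional key-point network; these names and the ray/plane intersection approach are illustrative assumptions, not the patent's implementation.

```python
# Sketch of the hand-size calibration step: one "sizing image" of the hand
# resting on a fiducial marker grid is used to recover the true lengths of
# the finger segments. ArUco board, camera intrinsics, and the
# detect_hand_keypoints() callable are assumptions for illustration only.
import cv2
import numpy as np

def calibrate_segment_lengths(sizing_image, board, camera_matrix, dist_coeffs,
                              detect_hand_keypoints, segment_pairs):
    """Return the real-world length of each hand segment, in board units."""
    gray = cv2.cvtColor(sizing_image, cv2.COLOR_BGR2GRAY)

    # 1) Detect the fiducial markers and estimate the transformation between
    #    the marker (board) coordinate system and the image (screen) pixels.
    detector = cv2.aruco.ArucoDetector(board.getDictionary())
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None or len(ids) == 0:
        raise RuntimeError("no fiducial markers detected")
    obj_pts, img_pts = board.matchImagePoints(corners, ids)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("board pose could not be estimated")

    # 2) Detect 2D hand key points (wrist and finger joints) with the CNN.
    keypoints_px = detect_hand_keypoints(sizing_image)   # (N, 2) pixel coords

    # 3) Map each key point from pixels onto the board plane (z = 0 in the
    #    marker frame) by intersecting its viewing ray with that plane.
    R, _ = cv2.Rodrigues(rvec)
    undist = cv2.undistortPoints(keypoints_px.reshape(-1, 1, 2),
                                 camera_matrix, dist_coeffs).reshape(-1, 2)
    keypoints_board = []
    for (x, y) in undist:
        ray_cam = np.array([x, y, 1.0])
        # Choose the scale s so that the point lies on the board plane:
        # (R^T (s*ray - t)).z == 0
        s = (R.T @ tvec.ravel())[2] / (R.T @ ray_cam)[2]
        keypoints_board.append(R.T @ (s * ray_cam - tvec.ravel()))
    keypoints_board = np.array(keypoints_board)

    # 4) True segment length = distance between consecutive joint key points.
    return {pair: np.linalg.norm(keypoints_board[pair[0]] - keypoints_board[pair[1]])
            for pair in segment_pairs}
```

Because the hand lies flat on the grid in the sizing image, mapping each detected joint back onto the board plane yields metric coordinates, and the segment lengths follow directly from joint-to-joint distances.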
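The grasping and move steps rely on computing a three-dimensional hand pose from a single two-dimensional image via a Perspective-n-Point calculation over the detected key points and the previously calibrated segment lengths. The sketch below illustrates one way such a solve could be set up with OpenCV; the nominal fanned-out hand model, the key-point indexing, and the function names are assumptions, not the patent's hand model.

```python
# Sketch of 3D hand-pose recovery: 2D key points from the convolution layer
# plus calibrated segment lengths feed a Perspective-n-Point solve. The
# nominal planar hand model is an illustrative assumption.
import cv2
import numpy as np

def estimate_hand_pose(keypoints_px, segment_lengths, finger_chains,
                       camera_matrix, dist_coeffs):
    """Return (R, t): pose of the hand coordinate frame in camera coordinates."""
    # Build a crude 3D hand model in the hand frame: each finger chain starts
    # at the wrist (key point 0) and is laid out in the palm plane along its
    # own direction, using the calibrated segment lengths as joint spacings.
    model_pts = {0: np.zeros(3)}                               # wrist at origin
    angles = np.linspace(-0.5, 0.5, num=len(finger_chains))    # fan-out, radians
    for chain, ang in zip(finger_chains, angles):              # e.g. (0, 1, 2, 3)
        direction = np.array([np.cos(ang), np.sin(ang), 0.0])
        prev = chain[0]
        for joint in chain[1:]:
            step = segment_lengths[(prev, joint)]
            model_pts[joint] = model_pts[prev] + step * direction
            prev = joint

    ids = sorted(model_pts)
    object_points = np.array([model_pts[i] for i in ids], dtype=np.float64)
    image_points = np.array([keypoints_px[i] for i in ids], dtype=np.float64)

    # Perspective-n-Point: find the hand-frame pose whose reprojection best
    # matches the detected 2D key points.
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("hand pose could not be estimated")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.ravel()
```

The same pose estimate, repeated over the image sequence, supplies the hand coordinate frame at the grasping step and at each point along the move path; a fixed offset then relates that hand frame to the corresponding gripper coordinate frame.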
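Finally, generating robot motion commands from the demonstration data and the detected initial pose of the new workpiece can be sketched with plain homogeneous transforms. The data layout (grasp pose stored relative to the demonstrated workpiece, move path and place pose in the robot base frame) and the send_waypoint() callable are illustrative assumptions; the gripper poses are taken to be the demonstrated hand poses composed with a fixed hand-to-gripper offset.

```python
# Sketch of the replay step: re-anchor the demonstrated grasp to the new
# workpiece's detected pose, follow the demonstrated move path, and place the
# workpiece at the demonstrated place pose. Data keys and send_waypoint()
# are assumptions for illustration.
import numpy as np

def make_pose(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def generate_pick_move_place(demo, T_base_newpiece, send_waypoint):
    """Replay a demonstrated pick-move-place on a newly detected workpiece.

    demo["T_piece_grasp"] : gripper pose at grasp, in the workpiece frame
    demo["move_path"]     : list of gripper poses along the move, robot base frame
    demo["T_base_place"]  : workpiece pose when it becomes stationary, base frame
    T_base_newpiece       : detected initial pose of the new workpiece, base frame
    """
    # 1) Pick: the grasp pose demonstrated relative to the workpiece is
    #    re-anchored to the new workpiece's detected position and orientation.
    send_waypoint(T_base_newpiece @ demo["T_piece_grasp"], gripper="close")

    # 2) Move: follow the demonstrated move path toward the place location.
    for T_base_gripper in demo["move_path"]:
        send_waypoint(T_base_gripper, gripper="hold")

    # 3) Place: to leave the workpiece at the demonstrated place pose, the
    #    gripper goes to that pose composed with the same grasp offset.
    send_waypoint(demo["T_base_place"] @ demo["T_piece_grasp"], gripper="open")
```

Composing the stored grasp offset with the new workpiece pose is what lets a single demonstration generalize: only the initial pose of each new workpiece needs to be observed before the robot performs the operation.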