US 12,145,267 B2
System and method for embodied authoring of human-robot collaborative tasks with augmented reality
Karthik Ramani, West Lafayette, IN (US); Ke Huo, Union City, CA (US); Yuanzhi Cao, Redmond, WA (US); and Tianyi Wang, West Lafayette, IN (US)
Assigned to Purdue Research Foundation, West Lafayette, IN (US)
Filed by Purdue Research Foundation, West Lafayette, IN (US)
Filed on Sep. 16, 2020, as Appl. No. 17/022,216.
Claims priority of provisional application 62/902,007, filed on Sep. 18, 2019.
Prior Publication US 2021/0252699 A1, Aug. 19, 2021
Int. Cl. B25J 9/16 (2006.01)
CPC B25J 9/1605 (2013.01) 11 Claims
OG exemplary drawing
 
1. A method for authoring a human-robot collaborative task in which a robot collaborates with a human, the method comprising:
recording, with at least one sensor, during a first time period, human motions of a human as the human demonstrates the human-robot collaborative task in an environment, the recorded human motions including a plurality of recorded positions of the human in the environment over a period of time;
displaying, on a display, during a second time period that is subsequent to the first time period, an augmented reality graphical user interface including a graphical representation of the recorded human motions that is superimposed on the environment such that the graphical representation appears within the environment at the plurality of recorded positions of the human in the environment;
displaying, in the graphical user interface on the display, during the second time period while the graphical representation of the recorded human motions is displayed, a virtual representation of the robot that is superimposed on the environment and which can be manipulated by the human by providing user inputs;
receiving, via a user interface, during the second time period, user inputs defining manipulations of the virtual representation of the robot, the manipulations being graphically represented by the virtual representation of the robot;
determining, with a processor, during the second time period, a sequence of robot motions to be performed by the robot in concert with a performance of human motions that match the recorded human motions, based on the manipulations of the virtual representation of the robot;
storing, in a memory, during the second time period, the recorded human motions and the sequence of robot motions to be performed by the robot;
detecting, during a third time period that is subsequent to the second time period, the performance of human motions that match the recorded human motions by one of the human and a further human, the detecting including (i) recording, with the at least one sensor, a real-time position of the one of the human and the further human and (ii) comparing, with the processor, the real-time position to the plurality of recorded positions of the human in the recorded human motions; and
generating, with the processor, and transmitting to the robot, with a transceiver, during the third time period, a plurality of commands configured to operate the robot to perform the sequence of robot motions in concert with the performance of the human motions that match the recorded human motions.
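The detection and command-generation steps of the claim (comparing a real-time human position against the plurality of recorded positions, then operating the robot in concert with the matching motions) can be illustrated with a minimal sketch. All names, the nearest-position matching strategy, and the distance tolerance below are illustrative assumptions, not the patent's actual implementation:

```python
import math

def distance(p, q):
    """Euclidean distance between two 3-D positions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

class CollaborativeTask:
    """Hypothetical runtime for an authored human-robot collaborative task."""

    def __init__(self, recorded_positions, robot_commands, tolerance=0.05):
        # Human positions recorded during the first time period (demonstration).
        self.recorded = recorded_positions
        # Robot motion sequence authored during the second time period,
        # keyed by the index of the recorded position it accompanies.
        self.commands = robot_commands
        self.tolerance = tolerance   # metres; assumed matching threshold
        self._progress = 0           # next recorded position expected to match

    def observe(self, live_position):
        """Third-time-period step: advance through the recording when the
        live (sensed) position matches the next recorded position, and
        return any robot commands now due for transmission."""
        due = []
        while (self._progress < len(self.recorded)
               and distance(live_position,
                            self.recorded[self._progress]) <= self.tolerance):
            due.extend(self.commands.get(self._progress, []))
            self._progress += 1
        return due

    @property
    def complete(self):
        return self._progress == len(self.recorded)
```

A usage sketch: authoring yields three recorded waypoints and two robot commands; as the live position reaches each waypoint within tolerance, the corresponding commands are released for transmission to the robot.

```python
task = CollaborativeTask(
    recorded_positions=[(0, 0, 0), (0.5, 0, 0), (1.0, 0, 0)],
    robot_commands={1: ["move_to_handoff"], 2: ["release_gripper"]},
)
task.observe((0.01, 0, 0))  # matches waypoint 0; no command authored there
task.observe((0.5, 0, 0))   # releases "move_to_handoff"
task.observe((1.0, 0, 0))   # releases "release_gripper"; task complete
```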