US 11,893,468 B2
Imitation learning system
Yu-Wei Chao, Seattle, WA (US); De-An Huang, Cupertino, CA (US); Christopher Jason Paxton, Pittsburgh, PA (US); Animesh Garg, Berkeley, CA (US); and Dieter Fox, Seattle, WA (US)
Assigned to NVIDIA Corporation, Santa Clara, CA (US)
Filed by NVIDIA Corporation, Santa Clara, CA (US)
Filed on Jul. 16, 2020, as Appl. No. 16/931,211.
Claims priority of provisional application 62/900,226, filed on Sep. 13, 2019.
Prior Publication US 2021/0081752 A1, Mar. 18, 2021
Int. Cl. G06N 3/008 (2023.01); G06N 20/00 (2019.01)
CPC G06N 3/008 (2013.01) [G06N 20/00 (2019.01)] 28 Claims
OG exemplary drawing
 
1. A computer-implemented method, comprising:
segmenting video data into at least a first segment and a second segment, the first segment comprising video data representative of a first trajectory of a first object manipulated in a demonstration performed in a first set of circumstances, the second segment comprising video data representative of a second trajectory of a second object manipulated in the demonstration;
identifying a motion predicate satisfied by the first trajectory, wherein the motion predicate is identified based, at least in part, on a determination that movement of the first object on the first trajectory enabled movement of the second object on the second trajectory;
identifying a task predicate satisfied by the second trajectory, based at least in part on the second trajectory satisfying a logical condition defined in a domain definition;
identifying a goal of the demonstration based at least in part on the task predicate; and
causing one or more robotic manipulation devices to move from a first pose to a second pose based, at least in part, on performing the goal in a second set of circumstances different from the first set of circumstances.
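The claimed method can be read as a perception-to-planning pipeline: segment a demonstration video into per-object trajectory segments, test motion and task predicates over those segments, infer the demonstrated goal, and command a manipulator to achieve that goal under new circumstances. The Python sketch below is illustrative only and is not drawn from the patent; every name in it (Segment, DomainDefinition, segment_video, robot.plan_pose_for, robot.move_to, and so on) is a hypothetical stand-in for components the claim leaves unspecified.

# Illustrative sketch only; not the patented implementation. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Sequence, Tuple


@dataclass
class Segment:
    """A video segment covering one manipulated object's trajectory."""
    object_id: str
    trajectory: List[Tuple[float, float, float]]  # e.g. a sequence of object positions


@dataclass
class DomainDefinition:
    """Maps task-predicate names to logical conditions over a trajectory segment."""
    task_predicates: Dict[str, Callable[[Segment], bool]]


def segment_video(frames: Sequence) -> List[Segment]:
    """Split a demonstration video into per-object trajectory segments.
    Placeholder: a real system would rely on perception and tracking models."""
    raise NotImplementedError


def motion_predicate(first: Segment, second: Segment) -> Optional[str]:
    """Return a motion predicate satisfied by the first trajectory when moving the
    first object enabled the second object's movement (stand-in geometric test)."""
    if first.trajectory and second.trajectory:
        return f"enables({first.object_id}, {second.object_id})"
    return None


def task_predicate(segment: Segment, domain: DomainDefinition) -> Optional[str]:
    """Return the first task predicate whose logical condition the trajectory satisfies."""
    for name, condition in domain.task_predicates.items():
        if condition(segment):
            return name
    return None


def infer_goal(task_pred: str) -> str:
    """Identify the goal of the demonstration from the satisfied task predicate."""
    return f"achieve({task_pred})"


def execute_goal(goal: str, robot) -> None:
    """Move the manipulator from its current (first) pose to a second pose chosen so
    that the inferred goal is performed under a different set of circumstances."""
    target_pose = robot.plan_pose_for(goal)  # hypothetical planner interface
    robot.move_to(target_pose)               # hypothetical motion command

In this reading, the motion predicate captures the enabling relation between the two trajectories recited in claim 1, while the task predicate is checked against logical conditions supplied by a domain definition, mirroring the claim's use of a domain definition to identify the goal.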