US 12,269,170 B1
Systems, methods, and computer program products for generating robot training data
Mani Ranjbar, Port Coquitlam (CA); and Geoffrey Mantel, Maple Ridge (CA)
Assigned to Sanctuary Cognitive Systems Corporation, Vancouver (CA)
Filed by Sanctuary Cognitive Systems Corporation, Vancouver (CA)
Filed on Feb. 27, 2024, as Appl. No. 18/588,919.
Int. Cl. B25J 9/16 (2006.01); B25J 13/08 (2006.01); B25J 19/02 (2006.01)
CPC B25J 9/1671 (2013.01) [B25J 9/163 (2013.01); B25J 13/085 (2013.01); B25J 19/023 (2013.01)] 19 Claims
OG exemplary drawing
 
1. A method comprising:
accessing action data, the action data comprising sensor data recorded by at least one sensor of a robot body as the robot body performs a task and the sensor data including movement data that defines an action path in space through which at least one actuatable member of the robot body moves as the robot body performs the task;
accessing context data, the context data at least partially representing an environment in which the robot body performs the task, wherein the context data includes chroma key context data corresponding to at least one augmentable region of the environment outside of the action path of the robot body based on the action data;
generating, by at least one processor, a plurality of augmented environment instances based at least in part on the context data, each augmented environment instance different from the environment of the robot body and from other augmented environment instances in the plurality of augmented environment instances in at least one aspect, wherein generating the plurality of augmented environment instances based at least in part on the context data comprises, for at least one augmented environment instance, adding at least one visual virtual object to at least one augmentable region of the environment outside of the action path of the robot body based on the action data; and
generating, by the at least one processor, a plurality of instances of training data for training at least one model to control autonomous movement of the robot body as the robot body performs the task, each instance of training data comprising the action data and a respective augmented environment instance of the plurality of augmented environment instances.
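Claim 1 describes producing many training instances from a single recorded action by holding the action data fixed and varying only the environment: chroma key context data marks augmentable regions outside the robot's action path, virtual objects are composited into those regions, and each resulting augmented environment is paired with the same action data. The sketch below is one hedged illustration of such a pipeline; all names (ActionData, TrainingInstance, augmentable_mask, generate_training_data), the green key color, and the per-frame compositing strategy are assumptions for illustration and are not taken from the patent.

    # Illustrative sketch only; identifiers and the key color are assumptions,
    # not drawn from the patent text.
    from dataclasses import dataclass
    from typing import List
    import numpy as np

    @dataclass
    class ActionData:
        sensor_frames: List[np.ndarray]   # images recorded while the robot body performs the task
        action_path: np.ndarray           # positions of the actuatable member over time

    @dataclass
    class TrainingInstance:
        action: ActionData                # action data reused unchanged in every instance
        environment: List[np.ndarray]     # one augmented environment instance

    def augmentable_mask(frame: np.ndarray, key_color: np.ndarray, tol: float = 30.0) -> np.ndarray:
        """Chroma-key mask: True where the frame matches the key color,
        i.e. augmentable regions outside the action path."""
        return np.linalg.norm(frame.astype(float) - key_color, axis=-1) < tol

    def add_virtual_object(frame: np.ndarray, mask: np.ndarray, obj: np.ndarray) -> np.ndarray:
        """Composite a visual virtual object into the augmentable region only."""
        out = frame.copy()
        out[mask] = obj[mask]
        return out

    def generate_training_data(action: ActionData,
                               virtual_objects: List[np.ndarray],
                               key_color: np.ndarray = np.array([0, 255, 0])) -> List[TrainingInstance]:
        """Pair the fixed action data with each augmented environment instance."""
        instances = []
        for obj in virtual_objects:           # each object yields a distinct environment instance
            augmented = [
                add_virtual_object(frame, augmentable_mask(frame, key_color), obj)
                for frame in action.sensor_frames
            ]
            instances.append(TrainingInstance(action=action, environment=augmented))
        return instances

Under these assumptions, calling generate_training_data once with N virtual objects yields N training instances, each differing from the others only in the augmented environment, which matches the claim's requirement that every instance combine the same action data with a distinct augmented environment instance.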