US 12,131,529 B2
Virtual teach and repeat mobile manipulation system
Jeremy Ma, San Jose, CA (US); Josh Petersen, Mountain View, CA (US); Umashankar Nagarajan, Sunnyvale, CA (US); Michael Laskey, Los Altos, CA (US); Daniel Helmick, Cupertino, CA (US); James Borders, Los Gatos, CA (US); Krishna Shankar, Los Altos, CA (US); Kevin Stone, Menlo Park, CA (US); and Max Bajracharya, Millbrae, CA (US)
Assigned to TOYOTA RESEARCH INSTITUTE, INC., Los Altos, CA (US)
Filed by TOYOTA RESEARCH INSTITUTE, INC., Los Altos, CA (US)
Filed on Jan. 18, 2023, as Appl. No. 18/098,625.
Application 18/098,625 is a continuation of application No. 16/570,852, filed on Sep. 13, 2019, granted, now Pat. No. 11,580,724.
Claims priority of provisional application 62/877,792, filed on Jul. 23, 2019.
Claims priority of provisional application 62/877,793, filed on Jul. 23, 2019.
Claims priority of provisional application 62/877,791, filed on Jul. 23, 2019.
Prior Publication US 2023/0154015 A1, May 18, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. B25J 9/16 (2006.01); G06F 18/214 (2023.01); G06F 18/28 (2023.01); G06N 3/08 (2023.01); G06T 7/246 (2017.01); G06T 7/33 (2017.01); G06T 7/55 (2017.01); G06T 7/73 (2017.01); G06T 19/20 (2011.01); G06V 10/75 (2022.01); G06V 10/764 (2022.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01); G06V 20/10 (2022.01); G06V 20/20 (2022.01)
CPC G06V 20/10 (2022.01) [B25J 9/1605 (2013.01); B25J 9/1661 (2013.01); B25J 9/1664 (2013.01); B25J 9/1671 (2013.01); B25J 9/1697 (2013.01); G06F 18/214 (2023.01); G06F 18/28 (2023.01); G06N 3/08 (2013.01); G06T 7/248 (2017.01); G06T 7/33 (2017.01); G06T 7/55 (2017.01); G06T 7/74 (2017.01); G06T 19/20 (2013.01); G06V 10/751 (2022.01); G06V 10/764 (2022.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01); G06V 20/20 (2022.01); B25J 9/163 (2013.01); G05B 2219/37567 (2013.01); G05B 2219/40543 (2013.01); G05B 2219/40564 (2013.01); G06T 2200/04 (2013.01); G06T 2207/10016 (2013.01); G06T 2207/10024 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20104 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for performing a task by a robotic device, comprising:
mapping a plurality of task image pixel descriptors associated with a first group of pixels in a task image of a task environment to a plurality of teaching image pixel descriptors associated with a second group of pixels in a teaching image of a teaching environment based on positioning the robotic device within the task environment;
determining a relative transform between the task image and the teaching image based on mapping the plurality of task image pixel descriptors, the relative transform indicating a change in one or more points of three-dimensional (3D) space between the task image and the teaching image;
updating one or more parameters of a set of parameterized behaviors associated with the teaching image based on determining the relative transform; and
performing the task associated with the set of parameterized behaviors based on updating the one or more parameters.
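The claimed pipeline can be illustrated with a minimal Python sketch, under assumptions the claim itself does not fix: descriptor matching is shown as brute-force nearest-neighbour search, the relative transform is estimated as a rigid (Kabsch) alignment of the 3D points behind the matched pixels, and the parameterized behavior is taken to be a set of 3D waypoints carried through that transform before execution. All function names, the descriptor dimensionality, and the synthetic data below are hypothetical stand-ins for the dense-descriptor network, depth sensing, and behavior representation a real system would provide.

    import numpy as np


    def match_descriptors(task_desc, teach_desc):
        """Map each task-image pixel descriptor to its nearest teaching-image
        descriptor (brute-force nearest neighbour in descriptor space)."""
        # Pairwise squared Euclidean distances, shape (N_task, N_teach).
        d2 = ((task_desc[:, None, :] - teach_desc[None, :, :]) ** 2).sum(-1)
        return d2.argmin(axis=1)  # matched teaching-pixel index for each task pixel


    def relative_transform(task_pts, teach_pts):
        """Least-squares rigid alignment (Kabsch): find R, t such that
        task_pts[i] ~= R @ teach_pts[i] + t for corresponding 3D points."""
        mu_task, mu_teach = task_pts.mean(axis=0), teach_pts.mean(axis=0)
        H = (teach_pts - mu_teach).T @ (task_pts - mu_task)  # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
        R = Vt.T @ D @ U.T
        t = mu_task - R @ mu_teach
        return R, t


    def update_behavior_parameters(taught_waypoints, R, t):
        """Re-parameterize a taught behavior by carrying its 3D waypoints through
        the relative transform into the task environment."""
        return taught_waypoints @ R.T + t


    # Toy usage with synthetic data standing in for real images and depth.
    rng = np.random.default_rng(0)
    teach_desc = rng.normal(size=(50, 16))            # one descriptor per teaching pixel
    teach_pts = rng.uniform(-1.0, 1.0, size=(50, 3))  # 3D point behind each teaching pixel

    true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # 90 deg yaw
    true_t = np.array([0.5, -0.2, 0.1])

    perm = rng.permutation(50)                        # task image sees the same points
    task_desc = teach_desc[perm] + 0.01 * rng.normal(size=(50, 16))
    task_pts = (teach_pts @ true_R.T + true_t)[perm]  # same points, seen from the task pose

    idx = match_descriptors(task_desc, teach_desc)    # task pixel -> teaching pixel
    R, t = relative_transform(task_pts, teach_pts[idx])

    taught_waypoints = np.array([[0.3, 0.0, 0.2], [0.3, 0.1, 0.05]])  # e.g. grasp approach
    adjusted_waypoints = update_behavior_parameters(taught_waypoints, R, t)

In this toy run the estimated transform recovers the 90-degree yaw and translation used to generate the task-image points, so the taught waypoints are shifted into the task environment before the behavior is executed, mirroring the update and performance steps of claim 1.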