US 12,136,251 B2
Guided domain randomization via differentiable dataset rendering
Sergey Zakharov, Los Altos, CA (US); Rares Ambrus, San Francisco, CA (US); Vitor Guizilini, Santa Clara, CA (US); and Adrien Gaidon, San Jose, CA (US)
Assigned to Toyota Research Institute, Inc., Los Altos, CA (US)
Filed by Toyota Research Institute, Inc., Los Altos, CA (US)
Filed on Jan. 19, 2022, as Appl. No. 17/579,370.
Claims priority of provisional application 63/279,416, filed on Nov. 15, 2021.
Prior Publication US 2023/0154145 A1, May 18, 2023
Int. Cl. G06V 10/70 (2022.01); G06T 15/50 (2011.01); G06V 10/60 (2022.01); G06V 10/75 (2022.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01)
CPC G06V 10/76 (2022.01) [G06T 15/50 (2013.01); G06V 10/60 (2022.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01)] 17 Claims
OG exemplary drawing
 
1. A method, comprising:
receiving an input image having an object and a background;
intrinsically decomposing the object and the background into input image data having a set of features;
augmenting the input image data with a 2.5D differentiable renderer for each feature of the set of features to create a set of augmented images; and
compiling the input image and the set of augmented images into a training data set for training a downstream task network;
wherein augmenting the input image data with the 2.5D differentiable renderer comprises:
receiving with the 2.5D differentiable renderer an input data set having at least a set of material features and a set of lighting features based on the input image data;
generating simulated lighting conditions different than the set of lighting features;
generating simulated material conditions different than the set of material features;
applying the simulated lighting conditions and the simulated material conditions to the input data set to generate an output data set; and
combining the output data set to generate an augmented image.
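The claimed augmentation pipeline — intrinsic decomposition into material and lighting features, randomized relighting and material changes through a 2.5D renderer, and compilation of the results into a training set — can be sketched as below. This is a minimal illustrative sketch only: the function names, the trivial shading-based decomposition, and the Lambertian relighting model are assumptions for exposition, not the patent's actual implementation (which uses learned decomposition and a differentiable renderer).

```python
import numpy as np

rng = np.random.default_rng(0)

def decompose(image):
    # Placeholder intrinsic decomposition: split the image into a
    # material (albedo) map and a lighting (shading) map. A real system
    # would use a learned decomposition network, not a channel mean.
    shading = image.mean(axis=-1, keepdims=True)
    albedo = image / np.clip(shading, 1e-6, None)
    return albedo, shading

def render_25d(albedo, normals, light_dir, light_color):
    # Minimal 2.5D (Lambertian) shading step standing in for the
    # differentiable renderer: color = albedo * light_color * max(0, n.l).
    n_dot_l = np.clip((normals * light_dir).sum(axis=-1, keepdims=True), 0.0, 1.0)
    return np.clip(albedo * light_color * n_dot_l, 0.0, 1.0)

def augment(image, normals, n_augmentations=4):
    # Apply simulated lighting and material conditions, different from
    # those extracted from the input, to produce augmented images.
    albedo, _ = decompose(image)
    augmented = []
    for _ in range(n_augmentations):
        light_dir = rng.normal(size=3)
        light_dir /= np.linalg.norm(light_dir)          # random light direction
        light_color = rng.uniform(0.5, 1.5, size=3)     # random light color/intensity
        jittered_albedo = np.clip(albedo * rng.uniform(0.8, 1.2, size=3), 0.0, 1.0)
        augmented.append(render_25d(jittered_albedo, normals, light_dir, light_color))
    return augmented

# Compile the input image and the augmented images into a training data set.
image = rng.uniform(0.0, 1.0, size=(8, 8, 3))
normals = np.tile(np.array([0.0, 0.0, 1.0]), (8, 8, 1))  # assumed 2.5D surface normals
training_set = [image] + augment(image, normals)
```

In practice the renderer's differentiability is what allows the randomization to be *guided*: gradients from the downstream task network can flow back through the rendering step to steer which lighting and material conditions are sampled.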