US 11,989,847 B2
Photorealistic image simulation with geometry-aware composition
Frieda Rong, Toronto (CA); Yun Chen, Toronto (CA); Shivam Duggal, Toronto (CA); Shenlong Wang, Toronto (CA); Xinchen Yan, San Mateo, CA (US); Sivabalan Manivasagam, Toronto (CA); Ersin Yumer, Burlingame, CA (US); and Raquel Urtasun, Toronto (CA)
Assigned to UATC, LLC, Mountain View, CA (US)
Filed by UATC, LLC, Mountain View, CA (US)
Filed on Feb. 10, 2022, as Appl. No. 17/668,577.
Application 17/668,577 is a continuation of application No. 17/150,989, filed on Jan. 15, 2021, granted, now Pat. No. 11,551,429.
Claims priority of provisional application 63/093,471, filed on Oct. 19, 2020.
Claims priority of provisional application 63/035,573, filed on Jun. 5, 2020.
Prior Publication US 2022/0165043 A1, May 26, 2022
Int. Cl. G06T 19/20 (2011.01); G01B 11/22 (2006.01); G01S 17/89 (2020.01); G01S 17/931 (2020.01); G06N 3/04 (2023.01); G06N 3/08 (2023.01); G06T 3/00 (2006.01); G06T 7/521 (2017.01); G06T 15/20 (2011.01); G06T 17/10 (2006.01); G06T 19/00 (2011.01)
CPC G06T 19/20 (2013.01) [G01B 11/22 (2013.01); G01S 17/89 (2013.01); G01S 17/931 (2020.01); G06N 3/04 (2013.01); G06N 3/08 (2013.01); G06T 3/0093 (2013.01); G06T 7/521 (2017.01); G06T 15/20 (2013.01); G06T 17/10 (2013.01); G06T 19/006 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30252 (2013.01); G06T 2210/12 (2013.01); G06T 2210/21 (2013.01); G06T 2219/2004 (2013.01)] 14 Claims
OG exemplary drawing
 
1. A computer-implemented method, comprising:
obtaining one or more real world images collected by one or more real world sensors of a first vehicle during operation of the first vehicle, wherein the one or more real world images depict an environment at which the first vehicle is located;
generating a depth map of the environment that describes respective depths of objects of the environment;
identifying one or more first objects of the objects of the environment, based at least in part on the depth map, that would occlude a simulated object at an insertion location within the environment;
augmenting at least one of the one or more real world images of the environment to generate an initial augmented image that depicts the simulated object at the insertion location and occluded by the one or more first objects of the environment;
refining the initial augmented image with a machine-learned refinement model to generate a refined augmented image, wherein the machine-learned refinement model processes the initial augmented image to add one or more of texture correction, color correction, or contrast correction to a border between the at least one of the one or more real world images and the simulated object;
generating simulation data based at least in part on the refined augmented image that depicts the simulated object, wherein the simulated object is generated in the refined augmented image at least in part by selecting an object and a source texture to represent the simulated object from an object bank based on a point of view and distance relative to the simulated object from the first vehicle; and
executing, based on the simulation data, a simulation for autonomous vehicle software;
wherein the simulation data comprises road data to test performance features for the autonomous vehicle software during the simulation.
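The occlusion step in the claim (identifying real objects that, per the depth map, would occlude a simulated object at its insertion location, then compositing accordingly) can be illustrated with a minimal depth-test sketch. This is a hypothetical illustration only, not the patented implementation; all function and variable names (`composite_with_occlusion`, `scene_depth`, `obj_depth`, etc.) are assumptions, and the patented method additionally involves object-bank selection and a machine-learned refinement model not shown here.

```python
# Hypothetical sketch: depth-aware compositing of a simulated object
# into a real image. A real pixel occludes the object wherever the
# real scene's depth is smaller (closer to the camera) than the
# object's depth at that pixel.
import numpy as np

def composite_with_occlusion(real_img, scene_depth, obj_img, obj_mask, obj_depth):
    """Paste obj_img into real_img wherever the object is both present
    (obj_mask) and nearer to the camera than the real scene
    (obj_depth < scene_depth); closer real pixels occlude the object."""
    visible = obj_mask & (obj_depth < scene_depth)
    out = real_img.copy()
    out[visible] = obj_img[visible]
    return out, visible

# Tiny 2x2 example: a white object at depth 5 over a black scene whose
# left column lies at depth 10 (object visible) and whose right column
# lies at depth 3 (scene occludes the object).
real = np.zeros((2, 2, 3), dtype=np.uint8)
obj = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.ones((2, 2), dtype=bool)
scene_d = np.array([[10.0, 3.0], [10.0, 3.0]])
obj_d = np.full((2, 2), 5.0)
out, vis = composite_with_occlusion(real, scene_d, obj, mask, obj_d)
```

In this sketch the left column of `out` takes the object's pixels while the right column keeps the real image, matching the claim's requirement that the initial augmented image depict the simulated object "occluded by the one or more first objects of the environment" before refinement.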