US 12,434,630 B2
Below vehicle rendering for surround view systems
Hemant Vijay Kumar Hariyani, Plano, TX (US); Aishwarya Dubey, Plano, TX (US); and Mihir Narendra Mody, Bengaluru (IN)
Assigned to TEXAS INSTRUMENTS INCORPORATED, Dallas, TX (US)
Filed by Texas Instruments Incorporated, Dallas, TX (US)
Filed on Nov. 14, 2023, as Appl. No. 18/389,297.
Application 18/389,297 is a continuation of application No. 17/536,727, filed on Nov. 29, 2021, granted, now 11,858,420.
Prior Publication US 2024/0075876 A1, Mar. 7, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. B60R 1/00 (2022.01); G06T 3/00 (2024.01); G06T 3/047 (2024.01); G06T 3/40 (2024.01); G06T 3/4038 (2024.01); G06T 7/70 (2017.01); G06T 7/80 (2017.01); G06T 17/20 (2006.01); G06V 10/80 (2022.01); G06V 20/56 (2022.01)
CPC B60R 1/00 (2013.01) [G06T 3/047 (2024.01); G06T 3/4038 (2013.01); G06T 7/70 (2017.01); G06T 7/80 (2017.01); G06T 17/20 (2013.01); G06V 10/80 (2022.01); G06V 20/56 (2022.01); B60R 2300/105 (2013.01); B60R 2300/303 (2013.01); B60R 2300/304 (2013.01); B60R 2300/60 (2013.01); G06T 2207/20221 (2013.01); G06T 2207/30252 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A system, comprising:
a memory configured to store program instructions; and
one or more processors configured to execute the program instructions to:
determine that a region is not in a field of view of a set of cameras disposed about a vehicle when the vehicle is at a first location, wherein the region is underneath the vehicle at the first location so as to be not in the field of view of the set of cameras; and
based on determining that the region is not in the field of view of the set of cameras at the first location,
determine that an image of the region was captured by the set of cameras when the vehicle was at a second location before the vehicle moved to the first location, wherein the region was not underneath the vehicle at the second location so as to be in the field of view of the set of cameras at the second location;
determine a set of motion data indicative of a relationship between the first and second locations of the vehicle; and
render an image of the region underneath the vehicle at the first location based on the image of the region captured at the second location and the set of motion data.
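Claim 1 describes rendering the region now hidden under the vehicle from an image captured earlier, at a second location where the region was still visible, using motion data relating the two poses. The sketch below is only an illustration of that general idea, not the patented implementation: it assumes a bird's-eye ground image captured at the second location, planar vehicle motion described by hypothetical odometry values (dx, dy, dtheta), and hypothetical helper names (`pose_delta_transform`, `render_under_vehicle`) that do not appear in the patent.

```python
import numpy as np

def pose_delta_transform(dx, dy, dtheta):
    """2D rigid transform mapping coordinates in the current (first-location)
    vehicle frame back into the earlier (second-location) frame, given the
    vehicle's planar motion between the two poses."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    # Homogeneous transform: p_prev = R(dtheta) @ p_curr + (dx, dy)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0.0, 0.0, 1.0]])

def render_under_vehicle(prev_ground_img, motion, region_coords, px_per_m):
    """For each ground point now underneath the vehicle, find where it was
    in the earlier bird's-eye image and sample that pixel (nearest-neighbor).

    prev_ground_img : 2D array, bird's-eye image captured at the second location
    motion          : (dx, dy, dtheta) relating the two vehicle poses
    region_coords   : list of (x, y) ground points, in the current vehicle frame
    px_per_m        : pixels per meter in the bird's-eye image
    """
    T = pose_delta_transform(*motion)
    out = np.zeros(len(region_coords), dtype=prev_ground_img.dtype)
    h, w = prev_ground_img.shape[:2]
    for i, (x, y) in enumerate(region_coords):
        xp, yp, _ = T @ np.array([x, y, 1.0])
        u = int(round(xp * px_per_m))  # column in the earlier image
        v = int(round(yp * px_per_m))  # row in the earlier image
        if 0 <= v < h and 0 <= u < w:
            out[i] = prev_ground_img[v, u]
    return out
```

A production system would warp the whole region at once (e.g. with an inverse mapping over the output grid and interpolation) rather than looping per pixel, and would refresh the stored ground image continuously as the vehicle moves; this sketch only shows the geometric bookkeeping the claim describes.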