US 11,935,288 B2
Systems and methods for generation of 3D information on a user display from processing of sensor data for objects, components or features of interest in a scene and user navigation thereon
Iven Connary, Atlanta, GA (US); Guy Ettinger, Flowery Branch, GA (US); Habib Fathi, Atlanta, GA (US); Jacob Garland, Peachtree Corners, GA (US); and Daniel Ciprari, Atlanta, GA (US)
Assigned to Pointivo Inc., Atlanta, GA (US)
Filed by Pointivo Inc., Atlanta, GA (US)
Filed on Jan. 3, 2022, as Appl. No. 17/567,347.
Application 17/567,347 is a continuation of application No. 17/108,976, filed on Dec. 1, 2020, granted, now Pat. No. 11,216,663.
Claims priority of provisional application 62/942,171, filed on Dec. 1, 2019.
Prior Publication US 2022/0130145 A1, Apr. 28, 2022
Int. Cl. G06V 20/10 (2022.01); G05D 1/00 (2006.01); G05D 1/10 (2006.01)
CPC G06V 20/176 (2022.01) [G05D 1/0016 (2013.01); G05D 1/0044 (2013.01); G05D 1/101 (2013.01); G06V 20/10 (2022.01)] 11 Claims
OG exemplary drawing
 
1. A method of remotely inspecting a real-life object in a scene or location using a collection of previously acquired object-related data comprising:
a. providing, by a computer, a stored data collection associated with an object of interest in a real-life scene or location, wherein:
i. the stored data collection comprises at least two different data types associated with the object of interest, wherein one of the at least two different data types comprises or is derived from two-dimensional (2D) aerial images including the object of interest, and wherein:
(1) each of the 2D aerial images is acquired by an unmanned aerial vehicle (UAV) configured with an imaging device, wherein the 2D aerial images are acquired while the UAV is navigated in and around the real-life scene or location during one or more UAV imaging events; and
(2) each of the 2D aerial images includes information associated with both the UAV imaging device location and the UAV imaging device orientation in the real-life scene or location when that 2D aerial image was acquired; and
b. generating, by the computer, an object information display in a single user viewport configured on a user device, wherein the object information display:
(1) comprises a first data type and at least one additional data type present in or derived from the stored data collection, wherein one of the data types is defined as a base data type and the at least one additional data type is defined as a dependent data type;
(2) includes a 3D representation of all or part of the object of interest; and
(3) prior to generation of the object information display, each of the data types is synchronized by either of:
(a) registering data for each of the data types in a single coordinate system; or
(b) selecting a target coordinate system and calculating one or more transformations for data in each of the data types, wherein the one or more transformations enable representation of each of the data types in the target coordinate system;
c. navigating, by a user, a scene camera to generate a user-selected positioning of the scene camera relative to the 3D representation of the object of interest as displayed in the single user viewport; and
d. updating, by the computer, the object information display in real time as the scene camera is being navigated by the user in the single user viewport, wherein the updated object information display includes an object-centric visualization of the 3D representation of the object of interest derived from the user's positioning of the scene camera relative to the 3D representation as appearing in the single user viewport, and wherein the updated object information display is provided with a concurrent display of the at least one additional data type.
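
The following sketches illustrate, in order, the stored data collection of step a., the coordinate synchronization of step b.(3), the object-centric camera navigation of step c., and the real-time display update of step d. All are minimal Python sketches; every class, function, and field name in them is a hypothetical illustration, not the patented implementation.

One way the stored data collection of step a. might be organized: each 2D aerial image carries the UAV imaging device's location and orientation at capture time, alongside at least one additional (dependent) data type such as a derived point cloud.

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class AerialImage:
    """One 2D aerial image plus the UAV imaging-device pose at capture time."""
    pixels: np.ndarray            # H x W x 3 image array
    camera_position: np.ndarray   # (3,) imaging-device location in scene coordinates
    camera_rotation: np.ndarray   # (3, 3) imaging-device orientation at capture


@dataclass
class StoredDataCollection:
    """At least two data types associated with one object of interest."""
    object_id: str
    aerial_images: list[AerialImage] = field(default_factory=list)  # base data type
    derived_point_cloud: np.ndarray | None = None                   # dependent data type, N x 3
```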
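Step b.(3)(a) registers data for each data type in a single coordinate system. Given corresponding 3D points observed in two frames, one generic way to compute the needed rigid transformation is the Kabsch least-squares alignment sketched below; the patent does not specify this particular method.

```python
import numpy as np


def rigid_registration(src: np.ndarray, dst: np.ndarray):
    """Rotation R and translation t minimizing sum ||R @ s + t - d||^2 (Kabsch)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t


# Usage: map dependent-data-type points into the base (target) coordinate system.
# R, t = rigid_registration(corresponding_dependent_pts, corresponding_base_pts)
# registered = dependent_points @ R.T + t
```

The same routine serves step b.(3)(b): select a target coordinate system and compute one such transformation per data type.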
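Step c.'s navigation is naturally an orbit (object-centric) camera: user input adjusts azimuth, elevation, and radius about the object of interest, and a look-at view matrix is rebuilt from those parameters. A generic sketch, assuming a Y-up convention and ignoring the degenerate straight-up and straight-down poses:

```python
import numpy as np


def orbit_camera(target: np.ndarray, azimuth: float, elevation: float,
                 radius: float) -> np.ndarray:
    """4x4 view matrix for a scene camera orbiting `target` at the given angles."""
    # Camera position on a sphere centered on the object of interest.
    eye = target + radius * np.array([
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
        np.cos(elevation) * np.cos(azimuth),
    ])
    # Standard look-at basis: forward, right, and up vectors.
    f = target - eye
    f = f / np.linalg.norm(f)
    r = np.cross(f, np.array([0.0, 1.0, 0.0]))
    r = r / np.linalg.norm(r)                  # undefined when looking straight up/down
    u = np.cross(r, f)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = view[:3, :3] @ -eye
    return view
```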
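Finally, step d. redraws the single user viewport every time the scene camera moves, showing the 3D representation from the user's chosen viewpoint together with a concurrent display of the dependent data type. The sketch below assumes a hypothetical `renderer` object (its `clear`, `draw_points`, and `draw_image_overlay` methods are invented for illustration) and reuses the classes sketched above; one plausible policy for the concurrent display, shown here, is to overlay the aerial image captured nearest to the current camera position.

```python
import numpy as np


def update_object_information_display(collection: "StoredDataCollection",
                                      view_matrix: np.ndarray,
                                      renderer) -> None:
    """Redraw the single user viewport for the current scene-camera pose (step d.)."""
    renderer.clear()
    # Object-centric 3D representation, drawn from the user's chosen viewpoint.
    renderer.draw_points(collection.derived_point_cloud, view_matrix)
    # Concurrent display of the dependent data type: pick the aerial image whose
    # capture position is closest to the recovered scene-camera position.
    eye = -view_matrix[:3, :3].T @ view_matrix[:3, 3]
    nearest = min(collection.aerial_images,
                  key=lambda im: float(np.linalg.norm(im.camera_position - eye)))
    renderer.draw_image_overlay(nearest.pixels)
```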