US 12,249,096 B2
Systems and methods to determine object position using images captured from mobile image collection vehicle
Brent Ronald Frei, Bellevue, WA (US); Dwight Galen McMaster, Burien, WA (US); Michael Racine, Bellevue, WA (US); Jacobus du Preez, Snoqualmie, WA (US); William David Dimmit, Seattle, WA (US); Isabelle Butterfield, Bellevue, WA (US); Clifford Holmgren, Spanaway, WA (US); Dafydd Daniel Rhys-Jones, Tacoma, WA (US); Thayne Kollmorgen, Eugene, OR (US); and Vivek Ullal Nayak, Seattle, WA (US)
Assigned to TerraClear Inc., Bellevue, WA (US)
Filed by TerraClear Inc., Bellevue, WA (US)
Filed on Oct. 1, 2021, as Appl. No. 17/492,405.
Application 17/492,405 is a continuation of application No. 16/510,717, filed on Jul. 12, 2019, granted, now Pat. No. 11,138,712.
Claims priority of provisional application 62/697,057, filed on Jul. 12, 2018.
Prior Publication US 2022/0028048 A1, Jan. 27, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 7/73 (2017.01); A01B 43/00 (2006.01); A01B 59/042 (2006.01); A01B 69/00 (2006.01); G05D 1/00 (2024.01); G06F 18/20 (2023.01); G06F 18/21 (2023.01); G06F 18/211 (2023.01); G06F 18/214 (2023.01); G06N 3/045 (2023.01); G06N 3/08 (2023.01); G06T 3/00 (2024.01); G06T 7/00 (2017.01); G06T 7/13 (2017.01); G06T 7/60 (2017.01); G06T 7/62 (2017.01); G06T 7/70 (2017.01); G06V 10/20 (2022.01); G06V 20/00 (2022.01); G06V 20/10 (2022.01); G08G 5/32 (2025.01); G08G 5/55 (2025.01); G08G 5/57 (2025.01); B64U 10/13 (2023.01); B64U 101/30 (2023.01); B64U 101/60 (2023.01)
CPC G06T 7/74 (2017.01) [A01B 43/00 (2013.01); A01B 59/042 (2013.01); A01B 69/001 (2013.01); G05D 1/0038 (2013.01); G05D 1/101 (2013.01); G06F 18/211 (2023.01); G06F 18/214 (2023.01); G06F 18/2163 (2023.01); G06F 18/217 (2023.01); G06F 18/285 (2023.01); G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06T 3/00 (2013.01); G06T 7/0002 (2013.01); G06T 7/13 (2017.01); G06T 7/60 (2013.01); G06T 7/62 (2017.01); G06T 7/70 (2017.01); G06T 7/73 (2017.01); G06V 10/255 (2022.01); G06V 20/10 (2022.01); G06V 20/188 (2022.01); G06V 20/38 (2022.01); G08G 5/32 (2025.01); G08G 5/55 (2025.01); G08G 5/57 (2025.01); B64U 10/13 (2023.01); B64U 2101/30 (2023.01); B64U 2101/60 (2023.01); B64U 2201/20 (2023.01); G06T 2207/10032 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/20104 (2013.01); G06T 2207/30188 (2013.01); G06T 2207/30244 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A method, comprising:
obtaining a first image of a geographical area;
performing a homography transform on the first image to generate a second image having uniform pixel distances based on a capture height and avionic telemetry information associated with an aerial vehicle that captured the first image;
performing image recognition on the second image to identify an object in the second image using an artificial neural network trained on a dataset of trained object parameters, wherein the second image is divided into a plurality of tiles, wherein each of the plurality of tiles is input into the artificial neural network, and wherein a bounding box is generated for the object based on results from the artificial neural network;
determining a first pixel location of the object within the second image based on the bounding box;
performing a reverse homography transform on the first pixel location to determine a second pixel location in the first image for the object;
determining a position of the object within the geographical area based on the second pixel location within the first image and an image position associated with the aerial vehicle capturing the first image; and
storing the determined position of the object.
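The geometric core of the claimed method can be sketched in code: map pixels through a homography and its inverse, split the rectified image into tiles for the detector, and convert a pixel offset into a ground position from the capture height. This is a minimal illustration, not the patented implementation; the function names, the nadir-camera assumption, the pinhole ground-sample-distance model (metres per pixel = height / focal length), and the flat-earth degree conversion are all simplifying assumptions introduced here.

```python
import numpy as np

def apply_homography(H, pt):
    """Map a pixel (x, y) through a 3x3 homography matrix H."""
    x, y = pt
    v = H @ np.array([x, y, 1.0])
    return (v[0] / v[2], v[1] / v[2])

def reverse_homography(H, pt):
    """Map a pixel in the transformed image back to the original image
    (the 'reverse homography transform' of the claim)."""
    return apply_homography(np.linalg.inv(H), pt)

def tile_image(img, tile_h, tile_w):
    """Divide an image array into tiles, yielding each tile with its
    top-left (x, y) offset, for per-tile neural-network inference."""
    h, w = img.shape[:2]
    for y in range(0, h, tile_h):
        for x in range(0, w, tile_w):
            yield (x, y), img[y:y + tile_h, x:x + tile_w]

def pixel_to_geo(pixel, image_center_px, vehicle_lat_lon,
                 capture_height_m, focal_length_px):
    """Estimate object lat/lon from its pixel location, assuming a
    nadir-pointing pinhole camera: ground sample distance
    (metres/pixel) = capture height / focal length in pixels."""
    gsd = capture_height_m / focal_length_px
    east_m = (pixel[0] - image_center_px[0]) * gsd
    north_m = -(pixel[1] - image_center_px[1]) * gsd  # image y grows downward
    lat, lon = vehicle_lat_lon
    dlat = north_m / 111_320.0  # approx. metres per degree of latitude
    dlon = east_m / (111_320.0 * np.cos(np.radians(lat)))
    return (lat + dlat, lon + dlon)
```

For example, a detection at the image centre of a frame captured directly above the vehicle's GPS fix would resolve to the vehicle's own latitude and longitude; offsets from centre scale linearly with capture height under this model.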