US 12,080,024 B2
Systems and methods for generating 3D models from drone imaging
Matthew Laurence Arksey, Seattle, WA (US); Deon Blaauw, Lisse (NL); Lucas Thomas Hahn, Brooklyn, NY (US); John Gordon McQueen, Seattle, WA (US); Satoshi Nakajima, Medina, WA (US); Guy David Byron Shefner, Coeur d'Alene, ID (US); and Richard Chia Tsing Tong, Seattle, WA (US)
Assigned to NETDRONES, INC., Kirkland, WA (US)
Filed by NETDRONES, INC., Kirkland, WA (US)
Filed on Jun. 10, 2022, as Appl. No. 17/838,082.
Claims priority of provisional application 63/209,392, filed on Jun. 11, 2021.
Prior Publication US 2022/0398806 A1, Dec. 15, 2022
Int. Cl. G06T 7/73 (2017.01); B64C 39/02 (2023.01); G01C 15/00 (2006.01); G05D 1/00 (2006.01); G06N 20/00 (2019.01); G06T 17/10 (2006.01); G06V 20/17 (2022.01); G08G 5/00 (2006.01); H04B 7/185 (2006.01); H04B 17/318 (2015.01); H04W 4/02 (2018.01); H04W 4/40 (2018.01); H04W 24/08 (2009.01); H04W 28/24 (2009.01); H04W 64/00 (2009.01); B64U 10/13 (2023.01); B64U 80/86 (2023.01); B64U 101/20 (2023.01); B64U 101/30 (2023.01); H04W 84/04 (2009.01); H04W 84/12 (2009.01)
CPC G06T 7/73 (2017.01) [B64C 39/024 (2013.01); G01C 15/002 (2013.01); G05D 1/0027 (2013.01); G05D 1/104 (2013.01); G06N 20/00 (2019.01); G06T 17/10 (2013.01); G06V 20/17 (2022.01); G08G 5/0013 (2013.01); G08G 5/0026 (2013.01); G08G 5/0039 (2013.01); G08G 5/0043 (2013.01); G08G 5/0069 (2013.01); H04B 7/18504 (2013.01); H04B 17/318 (2015.01); H04W 4/025 (2013.01); H04W 4/40 (2018.02); H04W 24/08 (2013.01); H04W 28/24 (2013.01); H04W 64/003 (2013.01); H04W 64/006 (2013.01); B64U 10/13 (2023.01); B64U 80/86 (2023.01); B64U 2101/20 (2023.01); B64U 2101/30 (2023.01); B64U 2201/102 (2023.01); G05D 1/042 (2013.01); H04W 84/047 (2013.01); H04W 84/12 (2013.01)] 14 Claims
OG exemplary drawing
 
1. A method for generating a model of a scene, comprising:
receiving a plurality of images of a scene captured by at least one drone;
identifying features within the plurality of images;
identifying similar images of the plurality of images based on the features identified within the plurality of images;
comparing the similar images based on the features identified within the similar images to determine a proportion of features shared by the similar images;
selecting a subset of the plurality of images that have a proportion of shared features that meets a predetermined range;
generating a first 3D model of the scene from the subset of images using a first 3D model building algorithm;
generating a second 3D model of the scene from the subset of images using a second 3D model building algorithm;
computing errors for the first and second 3D models; and
selecting as the model of the scene the first or second 3D model depending on the computed errors.
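The claimed steps above can be sketched in outline. The following is a minimal illustration only, not the patented implementation: plain Python sets stand in for real image feature descriptors (e.g. keypoint identifiers), and the names `shared_feature_proportion`, `select_image_subset`, and `choose_model`, the `[low, high]` range, the denominator used for the shared-feature proportion, and the dummy model builders are all hypothetical choices that the claim language does not specify.

```python
from itertools import combinations

def shared_feature_proportion(feats_a, feats_b):
    # Proportion of features shared by two images, measured here against
    # the smaller feature set (one plausible definition; the claim does
    # not fix the denominator).
    if not feats_a or not feats_b:
        return 0.0
    return len(feats_a & feats_b) / min(len(feats_a), len(feats_b))

def select_image_subset(image_features, low=0.3, high=0.8):
    # Keep every image whose shared-feature proportion with at least one
    # other image falls inside the predetermined range [low, high]:
    # enough overlap to register the views, not so much that the images
    # are redundant.
    selected = set()
    for (name_a, fa), (name_b, fb) in combinations(image_features.items(), 2):
        if low <= shared_feature_proportion(fa, fb) <= high:
            selected.update({name_a, name_b})
    return selected

def choose_model(subset, builders, error_fn):
    # Build a candidate 3D model with each algorithm, compute an error
    # for each, and return the model with the lowest error.
    candidates = [build(subset) for build in builders]
    errors = [error_fn(model) for model in candidates]
    return candidates[min(range(len(candidates)), key=errors.__getitem__)]
```

For example, an image sharing half of its features with another image would fall inside the default range and both images would enter the subset, while an image sharing no features with any other would be excluded before either model-building algorithm runs.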