US 11,748,901 B1
Using deep learning and structure-from-motion techniques to generate 3D point clouds from 2D data
Ryan Knuffman, Danvers, IL (US); and Jeremy Carnahan, Normal, IL (US)
Assigned to STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY, Bloomington, IL (US)
Filed by STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY, Bloomington, IL (US)
Filed on Sep. 24, 2020, as Appl. No. 17/31,643.
Claims priority of provisional application 62/972,987, filed on Feb. 11, 2020.
Int. Cl. G06T 7/579 (2017.01); G06T 15/20 (2011.01)
CPC G06T 7/579 (2017.01) [G06T 15/205 (2013.01); G06T 2207/10032 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A server comprising: one or more processors; and a memory storing instructions that, when executed by the one or more processors, cause the server to
receive a plurality of two-dimensional images corresponding to an outdoor scene including outdoor objects,
analyze each of the plurality of two-dimensional images corresponding to the outdoor scene using a trained deep artificial neural network to generate a respective set of one or more labeled points, each of the one or more labeled points corresponding to a respective class label describing one or more physical objects depicted in the two-dimensional images corresponding to the outdoor scene, and at least one of the labeled points including a colorspace value that corresponds to a visible light spectrum value of the physical object,
process the sets of labeled points to identify one or more tie points, and
combine the two-dimensional images corresponding to the outdoor scene into a three-dimensional point cloud using a structure-from-motion technique, wherein the combining includes combining the respective one or more labeled points according to a plurality voting algorithm.
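The claim's final combining step resolves the class label for a reconstructed 3D point by plurality voting over the per-image labels of the 2D observations that were merged into it. The sketch below illustrates one plausible reading of that step; the function name `merge_labeled_points`, the observation format, and the channel-wise color averaging are illustrative assumptions, not the patent's specified implementation.

```python
from collections import Counter

def merge_labeled_points(observations):
    """Merge per-image labeled 2D observations of one reconstructed 3D point.

    Each observation is (class_label, (r, g, b)): the class label the trained
    network assigned in that image, plus the pixel's colorspace value.
    Assumption (not specified in the claim): the merged colorspace value is
    the channel-wise mean of the observed values.
    """
    labels = [label for label, _ in observations]
    # Plurality voting: the class label seen in the most images wins.
    winner, _count = Counter(labels).most_common(1)[0]
    colors = [rgb for _, rgb in observations]
    avg_rgb = tuple(sum(channel) / len(colors) for channel in zip(*colors))
    return winner, avg_rgb

# Three images observe the same tie point; two label it "tree", one "shrub",
# so plurality voting assigns "tree" to the 3D point.
obs = [("tree", (34, 139, 34)), ("tree", (40, 130, 30)), ("shrub", (50, 120, 40))]
label, rgb = merge_labeled_points(obs)
# label == "tree"
```

In a full structure-from-motion pipeline, `observations` would come from the 2D feature tracks that the reconstruction associates with each triangulated point.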