US 12,223,549 B2
Systems and methods for automated data processing using machine learning for vehicle loss detection
Jean-Christophe Bouëtté, Montreal (CA); Jimmy Lévesque, Blainville (CA); Marc Poulin, Saint-Lambert (CA); Satya Krishna Gorti, Toronto (CA); Keyu Long, Toronto (CA); Nicolas Gervais, St-Hubert (CA); and Jennifer Bouchard, Montreal (CA)
Assigned to THE TORONTO-DOMINION BANK, Toronto (CA)
Filed by THE TORONTO-DOMINION BANK, Toronto (CA)
Filed on May 18, 2022, as Appl. No. 17/747,819.
Prior Publication US 2023/0377047 A1, Nov. 23, 2023
Int. Cl. G06Q 40/08 (2012.01); G06V 10/26 (2022.01); G06V 10/80 (2022.01); G06V 10/82 (2022.01); G06V 20/64 (2022.01)
CPC G06Q 40/08 (2013.01) [G06V 10/26 (2022.01); G06V 10/803 (2022.01); G06V 10/82 (2022.01); G06V 20/64 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A computer system for processing digital images, the computer system comprising:
a processor configured to execute instructions;
a non-transient computer-readable medium comprising instructions that, when executed by a processor, cause the processor to:
receive a plurality of possible images of a vehicle;
apply an object detection machine learning model to the plurality of possible images, and based on the application of the object detection machine learning model to the plurality of possible images, determine, within each possible image, a location of the vehicle and define a bounding box surrounding the location;
perform operations that crop each said possible image to display only the vehicle and rotate each said possible image to a defined orientation for subsequent processing thereof;
select, from the cropped and rotated possible images, a set of four distinct images of the vehicle in relation to a claim for the vehicle being damaged, each of the set of four distinct images corresponding to a different angle view of the vehicle selected as being of interest, thereby providing, in combination, an overall view of the vehicle;
generate a tiled image of the vehicle by combining and merging the set of four distinct images into a single image concurrently displaying all images of respective said different angle views in equal portions of the tiled image;
process, via a first convolutional neural network, the tiled image, the first convolutional neural network configured for image processing and trained based on historical tiled image data to extract a first set of image features from tiled images for predicting a first likelihood of total loss for the vehicle;
process, via a second set of distinct and separate convolutional neural networks, a multi-fusion set of images comprising the set of four distinct images provided individually to respective ones of the second set of convolutional neural networks, each associated with one of the different angle views, each of the second set of convolutional neural networks trained for a different non-overlapping view of the vehicle, using historical multi-fusion images, to extract a second set of image features from multi-fusion images;
fuse together the second set of image features to predict, via a classifier trained based on historical image features of vehicles, a second likelihood of total loss for the vehicle;
obtain and process tabular data relating to the vehicle and the overall likelihood into a machine learning model, the machine learning model trained based on historical tabular data and associated features to predict a third likelihood of total loss for the vehicle;
aggregate, via an ensembler, the first, the second, and the third likelihood of total loss for the vehicle to perform an ensemble prediction of a classification of image and tabular data, thereby determining the overall likelihood of whether the vehicle depicted in the set of images is repairable or a total loss; and
present the overall likelihood on a display for the computer system to process the claim.
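The claimed stages can be illustrated with short code sketches. First, a minimal sketch of the object detection, cropping, and orientation operations, assuming torchvision's Faster R-CNN as a stand-in detector and the COCO "car" class as the vehicle label; the claim does not specify a particular detection model, so these choices are illustrative only.

import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

CAR_CLASS = 3  # COCO class index for "car" in torchvision's detection models (assumed stand-in)
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def crop_and_orient(image: Image.Image, score_threshold: float = 0.5) -> Image.Image:
    # Locate the vehicle, crop to its bounding box, and rotate to a defined
    # orientation (landscape is used here as an arbitrary convention).
    with torch.no_grad():
        pred = detector([to_tensor(image)])[0]
    keep = [i for i, (label, score) in enumerate(zip(pred["labels"], pred["scores"]))
            if label.item() == CAR_CLASS and score.item() >= score_threshold]
    if not keep:
        return image  # no confident vehicle detection; leave the image unchanged
    x1, y1, x2, y2 = pred["boxes"][keep[0]].tolist()  # highest-scoring vehicle box
    cropped = image.crop((int(x1), int(y1), int(x2), int(y2)))
    if cropped.height > cropped.width:
        cropped = cropped.rotate(90, expand=True)
    return cropped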
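A sketch of the tiling step that merges the set of four distinct angle views into a single image in equal portions; the view names and the 224x224 tile size are assumptions chosen only so that each view occupies an equal quadrant.

from PIL import Image

def make_tiled_image(views: dict[str, Image.Image],
                     tile_size: tuple[int, int] = (224, 224)) -> Image.Image:
    # Combine four distinct angle views into a single 2x2 tiled image.
    order = ["front", "rear", "left", "right"]  # assumed labels for the four angle views
    resized = [views[name].resize(tile_size) for name in order]
    w, h = tile_size
    tiled = Image.new("RGB", (2 * w, 2 * h))
    for i, view in enumerate(resized):
        tiled.paste(view, ((i % 2) * w, (i // 2) * h))  # each view fills an equal quadrant
    return tiled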
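A sketch of the first convolutional neural network, which scores the tiled image to produce the first likelihood of total loss; ResNet-18 and the 448x448 input size are arbitrary choices, and in the claim this model is trained on historical tiled image data.

import torch
import torch.nn as nn
from torchvision import models, transforms

class TiledImageModel(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)           # assumed backbone
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # single total-loss logit
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Returns the first likelihood of total loss in [0, 1] for each tiled image.
        return torch.sigmoid(self.backbone(x)).squeeze(-1)

preprocess = transforms.Compose([transforms.Resize((448, 448)), transforms.ToTensor()])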
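A sketch of the second, multi-fusion stage: one convolutional backbone per angle view, with the per-view feature vectors fused and passed to a classifier that outputs the second likelihood. The backbone, hidden size, and concatenation-based fusion are assumptions; the claim specifies only that per-view features are extracted by distinct networks and then fused.

import torch
import torch.nn as nn
from torchvision import models

class MultiFusionModel(nn.Module):
    def __init__(self, num_views: int = 4):
        super().__init__()
        backbones = [models.resnet18(weights=None) for _ in range(num_views)]
        feature_dim = backbones[0].fc.in_features
        for net in backbones:
            net.fc = nn.Identity()                  # expose the pooled feature vector
        self.backbones = nn.ModuleList(backbones)   # one CNN per angle view
        self.classifier = nn.Sequential(            # classifier over the fused features
            nn.Linear(num_views * feature_dim, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, views: list[torch.Tensor]) -> torch.Tensor:
        # Each view tensor has shape (batch, 3, H, W); features are concatenated.
        features = [net(x) for net, x in zip(self.backbones, views)]
        fused = torch.cat(features, dim=1)
        return torch.sigmoid(self.classifier(fused)).squeeze(-1)  # second likelihood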
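Finally, a sketch of the tabular model and the ensembler; GradientBoostingClassifier and equal averaging weights are assumptions, since the claim names only a machine learning model trained on historical tabular data and an ensembler that aggregates the three likelihoods into the overall likelihood.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

tabular_model = GradientBoostingClassifier()
# tabular_model.fit(historical_tabular_features, historical_total_loss_labels)

def ensemble(first: float, second: float, third: float,
             weights: tuple[float, float, float] = (1 / 3, 1 / 3, 1 / 3)) -> float:
    # Weighted average of the image-based and tabular likelihoods (equal weights assumed).
    return float(np.dot(weights, [first, second, third]))

# overall_likelihood = ensemble(first_likelihood, second_likelihood,
#                               tabular_model.predict_proba(claim_features)[0, 1])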