US 11,812,184 B2
Systems and methods for presenting image classification results
Micah Price, The Colony, TX (US); Chi-San Ho, Allen, TX (US); and Yue Duan, Plano, TX (US)
Assigned to Capital One Services, LLC, McLean, VA (US)
Filed by Capital One Services, LLC, McLean, VA (US)
Filed on Nov. 30, 2020, as Appl. No. 17/106,851.
Application 17/106,851 is a continuation of application No. 16/534,375, filed on Aug. 7, 2019, granted, now Pat. No. 10,885,099.
Prior Publication US 2021/0081446 A1, Mar. 18, 2021
Int. Cl. G06F 16/51 (2019.01); H04N 5/272 (2006.01); G06F 16/538 (2019.01); G06F 9/451 (2018.01); G06F 16/535 (2019.01); G06F 8/65 (2018.01); G06N 3/08 (2023.01); G06F 3/04817 (2022.01); G06F 3/0482 (2013.01); G06N 3/04 (2023.01); G06F 18/24 (2023.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01)
CPC H04N 5/272 (2013.01) [G06F 3/0482 (2013.01); G06F 3/04817 (2013.01); G06F 8/65 (2013.01); G06F 9/451 (2018.02); G06F 16/51 (2019.01); G06F 16/535 (2019.01); G06F 16/538 (2019.01); G06F 18/24 (2023.01); G06N 3/04 (2013.01); G06N 3/08 (2013.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 2201/08 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A system for generating and implementing patches to improve classification model results based on user feedback through input icons, the system comprising:
a camera;
one or more processors; and
one or more memory devices storing instructions that, when executed by the one or more processors, configure the one or more processors to perform operations comprising:
capturing an image with the camera;
generating a first graphical user interface comprising:
one or more first interactive icons corresponding to first results, the first results comprising object recognition results based on attributes identified in the image using a classification model, wherein the classification model comprises a convolutional neural network and associated model hyperparameters, and wherein the associated model hyperparameters comprise at least one of a number of layers, a number of nodes, and an indication of whether the network is fully connected;
an input icon; and
a first button;
upon receiving a user selection of at least one of the first interactive icons:
performing a search to identify second results, the search being based on the selected at least one of the first interactive icons; and
generating a second graphical user interface displaying the second results, the second graphical user interface being different from the first graphical user interface; and
upon receiving a user selection of the first button:
determining whether the input icon is empty;
in response to determining the input icon is not empty, transmitting, to a server, the image and content in the input icon;
receiving, from the server, a patch for the classification model, the patch comprising updated model hyperparameters and a classification model exception for the identified attributes, and wherein the patch includes a script that modifies a response of the classification model to images with attributes including at least one of make, model, trim and color;
based on the patch, retraining the classification model to include the updated model hyperparameters such that the response of the classification model to images with attributes including at least one of make, model, trim and color is modified, wherein retraining the classification model to include the updated model hyperparameters comprises developing the convolutional neural network using backpropagation with gradient descent based on a training dataset; and
performing a conditional routine to substitute third results based on the classification model exception.
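The patch, retraining, and exception-handling steps recited in claim 1 can be illustrated with a short sketch. The following is a minimal, hypothetical Python/PyTorch example, not taken from the patent: the function names, the hyperparameter dictionary, and the exception table keyed on vehicle attributes (make, model, trim, color) are all assumptions. It shows a classification model rebuilt from patched hyperparameters, retrained by backpropagation with gradient descent over a training dataset, and a conditional routine that substitutes the exception result when an image's attributes match.

# Illustrative sketch only: names, framework, and data formats are assumptions,
# not taken from the patent.
import torch
import torch.nn as nn

def build_model(hparams):
    # Hyperparameters per the claim: number of layers, number of nodes,
    # and an indication of whether the network is fully connected.
    layers, in_ch = [], 3
    for _ in range(hparams["num_layers"]):
        layers += [nn.Conv2d(in_ch, hparams["num_nodes"], 3, padding=1), nn.ReLU()]
        in_ch = hparams["num_nodes"]
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten()]
    if hparams.get("fully_connected", False):
        layers += [nn.Linear(in_ch, in_ch), nn.ReLU()]  # extra dense layer when fully connected
    layers.append(nn.Linear(in_ch, hparams["num_classes"]))
    return nn.Sequential(*layers)

def retrain(model, dataset, epochs=1, lr=1e-3):
    # Backpropagation with (stochastic) gradient descent over a training dataset.
    loader = torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss_fn(model(images), labels).backward()
            optimizer.step()
    return model

def apply_patch(patch, dataset):
    # A patch carries updated hyperparameters and a classification model
    # exception keyed on vehicle attributes.
    model = retrain(build_model(patch["hyperparameters"]), dataset)
    return model, patch["exceptions"]

def classify(model, exceptions, image, attributes):
    # Conditional routine: substitute the exception ("third") result when the
    # image's attributes match a classification model exception.
    key = tuple(attributes.get(k) for k in ("make", "model", "trim", "color"))
    if key in exceptions:
        return exceptions[key]
    with torch.no_grad():
        return model(image.unsqueeze(0)).argmax(dim=1).item()

In this sketch the exception table stands in for the claimed "script that modifies a response of the classification model": matching attribute tuples bypass the retrained network entirely, while all other images are classified by the patched model.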