US 12,437,562 B2
System and method for increasing a resolution of a three-dimensional (3D) image
Ravindran Padmanaban, Chennai (IN); and Srinivasan Selvaraj, Chennai (IN)
Assigned to Bank of America Corporation, Charlotte, NC (US)
Filed by Bank of America Corporation, Charlotte, NC (US)
Filed on Feb. 21, 2023, as Appl. No. 18/171,921.
Prior Publication US 2024/0282125 A1, Aug. 22, 2024
Int. Cl. G06V 20/64 (2022.01); G06T 7/73 (2017.01); G06T 17/20 (2006.01); G06T 19/20 (2011.01); G06V 10/44 (2022.01); G06V 10/77 (2022.01)
CPC G06V 20/647 (2022.01) [G06T 7/73 (2017.01); G06T 17/205 (2013.01); G06T 19/20 (2013.01); G06V 10/44 (2022.01); G06V 10/7715 (2022.01); G06T 2210/36 (2013.01); G06T 2219/2012 (2013.01); G06T 2219/2016 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A system for increasing a resolution of a three-dimensional (3D) image, comprising:
a memory configured to store:
a baseline dataset that comprises:
a first set of feature points that are known for a first category of objects, wherein the first set of feature points indicates physical attributes that are common among the first category of objects; and
a color code associated with each of the first set of feature points;
a first 3D image of a first object, wherein the first object belongs to the first category of objects;
a processor, operably coupled to the memory, and configured to:
determine a set of contours from the first 3D image, wherein each of the set of contours represents a boundary around the first object in a different two-dimensional (2D) plane;
determine a mesh image vector that indicates a set of location coordinates of a second set of feature points on a surface of the first object, wherein the second set of feature points indicates physical attributes of the first object;
for at least a first contour from among the set of contours:
compare the mesh image vector with the first contour;
determine an intersecting feature point where the mesh image vector meets the first contour;
determine that the baseline dataset comprises a first feature point that corresponds to the intersecting feature point;
in response to determining that the baseline dataset comprises the first feature point that corresponds to the intersecting feature point, generate a structural vector by populating the structural vector with the intersecting feature point;
determine a first color code associated with the intersecting feature point based at least in part upon determining that the first feature point is associated with the first color code;
generate a textural vector by populating the textural vector with the first color code;
generate an image vector of the first object by combining the structural vector with the textural vector;
access a test 3D image of a second object;
extract a set of features from the test 3D image, wherein the set of features represents physical attributes of the second object shown in the test 3D image;
compare the image vector of the first object to the extracted set of features; and
determine that the first object is the second object in response to determining that more than a threshold percentage of feature points of the image vector corresponds to counterpart features of the extracted set of features.
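
The contour and mesh-vector elements of claim 1 can be pictured with a short sketch. The following Python is a minimal illustration and not the patented implementation: it assumes the first 3D image is available as an N x 3 NumPy array of surface points, and the helper names (slice_contours, build_mesh_vector), the plane-slicing strategy, and the feature_indices mapping are assumptions introduced here for clarity.

    import numpy as np

    def slice_contours(points, axis=2, num_planes=8, tol=0.01):
        """Approximate the claimed 'set of contours' by cutting the 3D point set
        into thin bands along one axis; each band stands in for the boundary of
        the object in a different 2D plane (illustrative assumption only)."""
        lo, hi = points[:, axis].min(), points[:, axis].max()
        contours = []
        for level in np.linspace(lo, hi, num_planes):
            band = points[np.abs(points[:, axis] - level) < tol]
            if len(band) > 0:
                contours.append(band)
        return contours

    def build_mesh_vector(points, feature_indices):
        """Stand-in for the 'mesh image vector': a mapping from feature-point
        names to their location coordinates on the object's surface."""
        return {name: points[idx] for name, idx in feature_indices.items()}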
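
For the per-contour elements (intersecting feature points, the structural and textural vectors, and their combination into an image vector), one plausible reading is sketched below. The baseline dataset is modeled as a dictionary keyed by feature-point name with an RGB color code per entry; the distance tolerance and the flat vector layout are assumptions, not details taken from the patent.

    import numpy as np

    def build_image_vector(mesh_vector, contour, baseline, tol=0.01):
        """Populate a structural vector with intersecting feature points that the
        baseline dataset recognizes, a textural vector with the associated color
        codes, and combine the two into one image vector (sketch only)."""
        structural, textural = [], []
        for name, coord in mesh_vector.items():
            # 'Intersecting feature point': the mesh image vector meets the contour
            if np.min(np.linalg.norm(contour - coord, axis=1)) < tol:
                entry = baseline.get(name)  # first set of feature points (baseline)
                if entry is not None:
                    structural.append(coord)              # structural vector
                    textural.append(entry["color_code"])  # textural vector (RGB)
        if not structural:
            return np.empty(0)
        return np.concatenate([np.ravel(structural), np.ravel(textural)])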
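
Finally, the comparison against the test 3D image can be approximated as below; treating both the image vector's feature points and the extracted test features as 3D coordinates, and the 90% default threshold, are assumptions for illustration only.

    import numpy as np

    def is_same_object(image_vector_points, test_features, threshold=0.9, tol=0.01):
        """Return True when more than the threshold percentage of the image vector's
        feature points have a counterpart among the features extracted from the
        test 3D image (illustrative matching rule)."""
        if len(image_vector_points) == 0:
            return False
        matched = sum(
            1 for p in image_vector_points
            if np.min(np.linalg.norm(test_features - p, axis=1)) < tol
        )
        return matched / len(image_vector_points) > threshold

Chained together under these assumptions, slice_contours and build_mesh_vector feed build_image_vector, whose structural feature points can then be compared against the features extracted from the test 3D image with is_same_object.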