US 12,462,529 B2
Method and system for image processing and classifying target entities within image
Taniya Saini, Ahmedabad (IN); Niranjan Kumar Manjunath, Davangere (IN); Ashok Ajad, Gorakhpur (IN); Arbaaz Mohammad Shaikh, Satara (IN); and Nisarga Krishnegowda, Mysore (IN)
Assigned to L&T TECHNOLOGY SERVICES LIMITED, Chennai (IN)
Filed by L&T TECHNOLOGY SERVICES LIMITED, Chennai (IN)
Filed on May 9, 2023, as Appl. No. 18/144,881.
Claims priority of application No. 202241052040 (IN), filed on Sep. 12, 2022.
Prior Publication US 2024/0087288 A1, Mar. 14, 2024
Int. Cl. G06V 10/764 (2022.01); G06T 5/30 (2006.01); G06T 5/70 (2024.01); G06T 7/13 (2017.01); G06V 10/56 (2022.01); G06V 10/60 (2022.01); G06V 10/82 (2022.01)
CPC G06V 10/764 (2022.01) [G06T 5/30 (2013.01); G06T 5/70 (2024.01); G06T 7/13 (2017.01); G06V 10/56 (2022.01); G06V 10/60 (2022.01); G06V 10/82 (2022.01); G06T 2207/10024 (2013.01); G06T 2207/20084 (2013.01)] 12 Claims
OG exemplary drawing
 
1. A method of image processing and classifying target entities within an image, the method comprising:
applying a contrast amplification procedure to a Lightness parameter associated with an input image, to amplify contrast of the input image and obtain an amplified-contrast image corresponding to the input image;
de-noising the amplified-contrast image by iteratively performing on the amplified-contrast image a blur correction, an erosion correction, and a dilation correction, to obtain a de-noised image corresponding to the amplified-contrast image;
determining, from the de-noised image, edges of each of one or more target entities associated with the input image using at least one edge detection model; and
identifying the one or more target entities associated with the input image based on the determined edges of each of the one or more target entities, to generate a contoured image,
wherein the method further comprises:
converting BGR (Blue, Green, Red) parameter configuration of the input image into LAB (Lightness, a-axis, b-axis) configuration and LUV (Lightness, u-axis, v-axis) configuration;
identifying a mask associated with each of the one or more target entities associated with the input image using a segmentation model, to generate a mask image comprising the one or more target entities and the mask associated with each of the one or more target entities;
feeding the mask image and the contoured image to a classification model; and
obtaining, from the classification model, classification of the one or more target entities into one or more predefined classes, the one or more predefined classes being associated with one or more entity types.
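The claim's BGR-to-LAB/LUV conversion yields the Lightness channel on which the contrast amplification operates. The patent does not disclose its conversion routine; the following is a minimal numpy sketch using the standard sRGB/D65 definition of CIE L*, which is the Lightness component shared by both the LAB and LUV configurations:

```python
import numpy as np

def bgr_to_lightness(bgr: np.ndarray) -> np.ndarray:
    """Compute the CIE L* (Lightness) channel from an 8-bit BGR image.

    L* is common to both LAB and LUV, so one conversion serves the
    claim's dual configurations. Standard sRGB/D65 constants are used;
    this is illustrative, not the patent's disclosed procedure.
    """
    rgb = bgr[..., ::-1].astype(np.float64) / 255.0        # BGR -> RGB, scaled to 0..1
    # undo the sRGB gamma to get linear intensities
    linear = np.where(rgb <= 0.04045, rgb / 12.92,
                      ((rgb + 0.055) / 1.055) ** 2.4)
    # relative luminance Y under D65
    y = linear @ np.array([0.2126, 0.7152, 0.0722])
    # CIE lightness L* in 0..100
    eps = (6 / 29) ** 3
    f = np.where(y > eps, np.cbrt(y), y / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116.0 * f - 16.0
```

A white pixel maps to L* = 100 and a black pixel to L* = 0, which matches the expected range of the Lightness parameter.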
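The claimed "contrast amplification procedure" applied to the Lightness parameter is not specified beyond its effect; one common choice is a min-max contrast stretch. A minimal numpy sketch, assuming an 8-bit Lightness channel and a linear stretch (both assumptions, not the patent's disclosure):

```python
import numpy as np

def amplify_contrast(lightness: np.ndarray) -> np.ndarray:
    """Stretch a Lightness channel to span the full 0-255 range.

    A simple min-max contrast stretch standing in for the claimed
    contrast amplification procedure; illustrative only.
    """
    l = lightness.astype(np.float64)
    lo, hi = l.min(), l.max()
    if hi == lo:                      # flat image: nothing to stretch
        return lightness.copy()
    stretched = (l - lo) * 255.0 / (hi - lo)
    return stretched.astype(np.uint8)
```

After the stretch the darkest pixel sits at 0 and the brightest at 255, giving the amplified-contrast image the claim refers to.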
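The de-noising step iterates three corrections: blur, erosion, and dilation. A grayscale mean filter for the blur and 3x3 min/max filters for erosion/dilation are reasonable stand-ins (the kernel size, iteration count, and filter choices here are assumptions):

```python
import numpy as np

def _neighborhood_op(img, op):
    """Apply op (np.mean, np.min, or np.max) over every 3x3 neighborhood."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return op(windows, axis=0)

def denoise(img: np.ndarray, iterations: int = 2) -> np.ndarray:
    """Iteratively blur, erode, then dilate, as in the claimed de-noising step."""
    out = img.astype(np.float64)
    for _ in range(iterations):
        out = _neighborhood_op(out, np.mean)  # blur correction
        out = _neighborhood_op(out, np.min)   # erosion correction
        out = _neighborhood_op(out, np.max)   # dilation correction
    return out.astype(np.uint8)
```

Erosion followed by dilation (an opening) removes small bright specks, while the blur suppresses high-frequency noise before each morphological pass.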
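For the step of determining edges with "at least one edge detection model", a basic Sobel gradient-magnitude detector illustrates the idea; the threshold value and kernel are illustrative choices, not the patent's model:

```python
import numpy as np

def detect_edges(img: np.ndarray, threshold: float = 100.0) -> np.ndarray:
    """Mark pixels whose Sobel gradient magnitude exceeds `threshold`.

    A minimal gradient-based detector standing in for the claimed
    edge detection model; returns 255 at edge pixels, 0 elsewhere.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                      # correlate with both kernels
        for j in range(3):
            win = padded[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    mag = np.hypot(gx, gy)                  # gradient magnitude
    return (mag > threshold).astype(np.uint8) * 255
```

On a vertical step edge, only the columns adjacent to the step are marked; tracing the marked pixels around each target entity would yield the contoured image the claim describes.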