US 12,450,878 B2
System and method for detecting object in an adaptive environment using a machine learning model
C. V. Jawahar, Hyderabad (IN); Rohit Saluja, Hyderabad (IN); Chetan Arora, New Delhi (IN); Vineeth N Balasubramanian, Sangareddy (IN); and Vaishnavi Mohan Khindkar, Hyderabad (IN)
Assigned to INTERNATIONAL INSTITUTE OF INFORMATION TECHNOLOGY, HYDERABAD, Hyderabad (IN); INDIAN INSTITUTE OF TECHNOLOGY, DELHI, New Delhi (IN); and INDIAN INSTITUTE OF TECHNOLOGY, HYDERABAD, Sangareddy (IN)
Filed by International Institute of Information Technology, Hyderabad, Hyderabad (IN); Indian Institute of Technology, Delhi, New Delhi (IN); and Indian Institute of Technology, Hyderabad, Sangareddy (IN)
Filed on May 14, 2023, as Appl. No. 18/197,075.
Claims priority of application No. 202241027805 (IN), filed on May 14, 2022.
Prior Publication US 2023/0368498 A1, Nov. 16, 2023
Int. Cl. G06V 10/77 (2022.01); G06F 18/213 (2023.01); G06F 18/24 (2023.01); G06F 18/25 (2023.01); G06N 3/08 (2023.01); G06N 5/04 (2023.01); G06N 20/00 (2019.01); G06T 5/60 (2024.01); G06T 7/246 (2017.01); G06V 10/20 (2022.01); G06V 10/40 (2022.01); G06V 10/70 (2022.01); G06V 10/74 (2022.01); G06V 10/764 (2022.01); G06V 10/80 (2022.01); G06V 10/82 (2022.01); G06V 20/00 (2022.01); G06V 20/05 (2022.01); G06V 20/10 (2022.01); G06V 20/40 (2022.01); G06V 20/69 (2022.01); G06V 30/19 (2022.01)
CPC G06V 10/7715 (2022.01) [G06F 18/213 (2023.01); G06F 18/24 (2023.01); G06F 18/25 (2023.01); G06F 18/251 (2023.01); G06F 18/253 (2023.01); G06F 18/254 (2023.01); G06N 3/08 (2013.01); G06N 5/04 (2013.01); G06N 20/00 (2019.01); G06T 5/60 (2024.01); G06T 7/251 (2017.01); G06V 10/255 (2022.01); G06V 10/40 (2022.01); G06V 10/70 (2022.01); G06V 10/761 (2022.01); G06V 10/764 (2022.01); G06V 10/765 (2022.01); G06V 10/80 (2022.01); G06V 10/806 (2022.01); G06V 10/809 (2022.01); G06V 10/82 (2022.01); G06V 20/00 (2022.01); G06V 20/05 (2022.01); G06V 20/10 (2022.01); G06V 20/35 (2022.01); G06V 20/38 (2022.01); G06V 20/41 (2022.01); G06V 20/47 (2022.01); G06V 20/698 (2022.01); G06V 30/19173 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A processor-implemented method for detecting at least one object in at least one image in a target environment that is adapted to a source environment using a machine learning model, thereby reducing a dissimilarity between a plurality of features of the target environment and the source environment, the method comprising:
extracting a plurality of features (f1) from a source image associated with a source environment and a target image associated with a target environment;
generating a feature map (ATmap1) based on the plurality of features from the source image and the target image;
generating, using an environment classifier, a pixel-wise probability output map (D1) for the plurality of features (f1) from the source image and the target image, wherein the pixel-wise probability output map represents a probability of each pixel in the feature map belonging to the source environment;
determining a first environment invariant feature map (SAmap1) by combining the feature map (ATmap1) with the pixel-wise probability output map (D1), wherein the first environment invariant feature map represents the feature map that is invariant to any environment;
determining a second environment invariant feature map (RSmap1) by combining the first environment invariant feature map (SAmap1) and the plurality of features (f1);
generating a plurality of second environment invariant feature maps (RSmapn) at a plurality of instances;
extracting a plurality of environment invariant features based on the plurality of second environment invariant feature maps; and
detecting the at least one object in the at least one image in the target environment that is adapted to the source environment by training the machine learning model using the plurality of environment invariant features, thereby reducing a dissimilarity between the plurality of features of the target environment and the source environment.
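
The following is a minimal, hypothetical PyTorch sketch of the pipeline recited in claim 1, included only to illustrate the flow of the claimed steps (features f1, attention map ATmap1, pixel-wise probability map D1, invariant maps SAmap1 and RSmap1, and stacked stages RSmapn). All module names, layer sizes, and the exact operators used to combine the maps are assumptions for illustration and are not the patented implementation; the claim does not specify these details.

    import torch
    import torch.nn as nn

    class EnvironmentClassifier(nn.Module):
        """Pixel-wise classifier predicting P(pixel belongs to the source environment)."""
        def __init__(self, in_channels):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 1, kernel_size=1),
            )

        def forward(self, f):
            return torch.sigmoid(self.conv(f))        # D: (N, 1, H, W)

    class EnvironmentInvariantBlock(nn.Module):
        """One stage: features f -> ATmap and D -> SAmap -> RSmap."""
        def __init__(self, in_channels):
            super().__init__()
            self.attn = nn.Conv2d(in_channels, in_channels, kernel_size=1)
            self.env_cls = EnvironmentClassifier(in_channels)

        def forward(self, f):
            at_map = torch.sigmoid(self.attn(f))      # ATmap: attention over features f
            d = self.env_cls(f)                       # D: pixel-wise source probability
            # SAmap: one plausible combination of ATmap and D, down-weighting pixels the
            # classifier identifies confidently as source or target (an assumption; the
            # claim only states that the two maps are combined).
            sa_map = at_map * (1.0 - torch.abs(2.0 * d - 1.0))
            rs_map = f + f * sa_map                   # RSmap: combination of SAmap and f
            return rs_map, d

    class EnvironmentAdaptiveDetector(nn.Module):
        """Stacks several invariant blocks (RSmap1 ... RSmapn) over backbone features,
        then applies a simple detection head to the environment invariant features."""
        def __init__(self, in_channels=256, num_stages=3, num_classes=8):
            super().__init__()
            self.stages = nn.ModuleList(
                [EnvironmentInvariantBlock(in_channels) for _ in range(num_stages)]
            )
            self.det_head = nn.Conv2d(in_channels, num_classes, kernel_size=1)

        def forward(self, features):
            env_probs = []
            x = features
            for stage in self.stages:                 # generate RSmap at a plurality of instances
                x, d = stage(x)
                env_probs.append(d)
            return self.det_head(x), env_probs        # per-location class logits + D maps

    if __name__ == "__main__":
        feats = torch.randn(2, 256, 32, 32)           # stand-in for backbone features f1
        model = EnvironmentAdaptiveDetector()
        logits, env_probs = model(feats)
        print(logits.shape, env_probs[0].shape)       # (2, 8, 32, 32) and (2, 1, 32, 32)

In a full training loop one would, in addition, apply a detection loss on labelled source images and an adversarial (environment classification) loss on the D maps for both source and target images, so that the extracted features become invariant to the environment; the specific losses and training schedule are outside the scope of this sketch.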