US 11,948,350 B2
Method and system for tracking an object
Dragos Dinu, Brasov (RO); Mihai Constantin Munteanu, Brasov (RO); and Alexandru Caliman, Brasov (RO)
Assigned to FotoNation Limited (IE)
Filed by FotoNation Limited, Galway (IE)
Filed on May 27, 2022, as Appl. No. 17/827,574.
Application 17/827,574 is a continuation of application No. 16/746,430, filed on Jan. 17, 2020, granted, now 11,379,719.
Application 16/746,430 is a continuation of application No. 16/532,059, filed on Aug. 5, 2019, granted, now 10,540,586, issued on Jan. 21, 2020.
Application 16/532,059 is a division of application No. 15/426,413, filed on Feb. 7, 2017, granted, now 10,373,052, issued on Aug. 6, 2019.
Application 15/426,413 is a continuation in part of application No. PCT/EP2016/063446, filed on Jun. 13, 2016.
Prior Publication US 2022/0292358 A1, Sep. 15, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06V 10/82 (2022.01); G06F 18/2413 (2023.01); G06N 3/04 (2023.01); G06N 3/08 (2023.01); G06T 7/246 (2017.01); G06T 7/269 (2017.01); G06V 10/44 (2022.01); G06V 10/764 (2022.01)
CPC G06V 10/82 (2022.01) [G06F 18/2413 (2023.01); G06N 3/04 (2013.01); G06N 3/08 (2013.01); G06T 7/246 (2017.01); G06T 7/248 (2017.01); G06T 7/269 (2017.01); G06V 10/454 (2022.01); G06V 10/764 (2022.01); G06T 2207/10016 (2013.01); G06T 2207/10024 (2013.01); G06T 2207/20021 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/20104 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method of tracking an object across a set of image frames, the method comprising:
inputting a first frame of the set of image frames into a first neural network, the first neural network comprising at least one convolutional layer and at least one fully-connected layer;
receiving a first output from the first neural network representative of a first map value associated with the first frame and a region of interest in the first frame that comprises the object;
determining a weight value based at least in part on the first map value and the region of interest;
inputting a second frame of the set of image frames into the first neural network;
receiving a second output from the first neural network representative of a second map value associated with the second frame;
inputting the second map value and the weight value into a second neural network;
receiving a third output from the second neural network identifying a region of interest in the second frame as matching the region of interest in the first frame; and
determining a location of the object in the second frame based at least in part on the third output.
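For illustration only, the following sketch shows one way the two-network flow recited in claim 1 could be realized in PyTorch. The layer sizes, the use of a 1x1 convolution as the fully-connected stage, and the modelling of the second neural network as a cross-correlation layer whose weights are derived from the first frame's region of interest are assumptions made for the example, not details taken from the patent.

    # Illustrative sketch only; shapes and layer choices are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeatureNet(nn.Module):
        # First neural network: convolutional layers followed by a
        # fully-connected stage (a 1x1 convolution applied per position),
        # producing a feature map ("map value") for an input frame.
        def __init__(self, channels=32):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            )
            self.fc = nn.Conv2d(channels, channels, 1)

        def forward(self, frame):                  # frame: (1, 3, H, W)
            return self.fc(self.conv(frame))       # map:   (1, C, H, W)

    def weights_from_roi(feature_map, roi):
        # Weight value derived from the first map value and the region of
        # interest: here simply the ROI crop of the feature map.
        x0, y0, x1, y1 = roi
        return feature_map[:, :, y0:y1, x0:x1]     # (1, C, h, w)

    def match_roi(feature_map, weights):
        # Second stage, modelled here as a cross-correlation layer: the weight
        # value is applied to the second frame's map value and the peak of
        # the response identifies the matching region of interest.
        response = F.conv2d(feature_map, weights)  # (1, 1, H-h+1, W-w+1)
        flat_idx = torch.argmax(response).item()
        rw = response.shape[-1]
        y, x = divmod(flat_idx, rw)
        return x, y                                # top-left corner of the match

    net = FeatureNet()
    frame1 = torch.rand(1, 3, 128, 128)            # first frame of the set
    frame2 = torch.rand(1, 3, 128, 128)            # second frame of the set
    roi = (40, 40, 72, 72)                         # object location in frame 1

    map1 = net(frame1)                             # first output (first map value)
    w = weights_from_roi(map1, roi)                # weight value
    map2 = net(frame2)                             # second output (second map value)
    x, y = match_roi(map2, w)                      # third output: object location in frame 2

In this sketch the cross-correlation step merely stands in for the second neural network of the claim; the claim itself requires only that the second map value and the weight value be inputs to that network and that its output identify the matching region of interest.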