US 11,967,088 B2
Method and apparatus for tracking target
HyunJeong Lee, Seoul (KR); Changbeom Park, Seoul (KR); Hana Lee, Suwon-si (KR); and Sung Kwang Cho, Seoul (KR)
Assigned to Samsung Electronics Co., Ltd., Suwon-si (KR)
Filed by Samsung Electronics Co., Ltd., Suwon-si (KR)
Filed on Dec. 2, 2022, as Appl. No. 18/073,737.
Application 18/073,737 is a continuation of application No. 17/126,513, filed on Dec. 18, 2020, granted, now Pat. No. 11,544,855.
Claims priority of application No. 10-2020-0033213 (KR), filed on Mar. 18, 2020.
Prior Publication US 2023/0115606 A1, Apr. 13, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 7/246 (2017.01); G06F 18/213 (2023.01); G06F 18/22 (2023.01); G06T 7/73 (2017.01); G06V 10/40 (2022.01); G06V 10/44 (2022.01); G06V 10/74 (2022.01); G06V 20/00 (2022.01)
CPC G06T 7/246 (2017.01) [G06F 18/213 (2023.01); G06F 18/22 (2023.01); G06T 7/73 (2017.01); G06V 10/40 (2022.01); G06V 10/451 (2022.01); G06V 10/761 (2022.01); G06V 20/00 (2022.01)] 16 Claims
OG exemplary drawing
 
1. A target tracking method comprising:
obtaining similarity information of a target in a target region in a first image and a searching region in a second image, including extracting feature information of the target in the target region in the first image;
obtaining similarity information of a background in the target region and the searching region in the second image, including extracting feature information of the background in the target region and extracting feature information of the searching region in the second image;
obtaining a score matrix based on the obtained similarity information of the target and the searching region and the obtained similarity information of the background and the searching region; and
estimating a position of the target in the searching region from the obtained score matrix,
wherein the extracting of the feature information of the target in the target region comprises obtaining a first feature map from the target region,
wherein the extracting of the feature information of the background in the target region comprises:
obtaining a second feature map from a region obtained by removing the background from the target region; and
obtaining a third feature map from a region obtained by removing the target from the target region, and
wherein the extracting of the feature information of the searching region in the second image comprises obtaining a fourth feature map from the searching region.
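
The following is a minimal, illustrative sketch in Python/NumPy of the flow recited in claim 1; it is not the patented implementation. The placeholder extract_features function, the sliding-window cross-correlation used as the similarity measure, and the way the target and background similarities are combined into the score matrix are all assumptions introduced for illustration; the claim does not specify these details.

import numpy as np

def extract_features(patch):
    # Placeholder feature extractor (assumption); a real tracker would use a
    # learned backbone network to produce the feature maps.
    return patch.astype(np.float32)

def cross_correlate(search_feat, template_feat):
    # Slide the template feature map over the search feature map and record
    # an inner-product similarity at each offset.
    H, W = search_feat.shape
    h, w = template_feat.shape
    scores = np.zeros((H - h + 1, W - w + 1), dtype=np.float32)
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            scores[y, x] = np.sum(search_feat[y:y + h, x:x + w] * template_feat)
    return scores

def track(target_region, target_mask, searching_region):
    # First feature map: the whole target region of the first image.
    f1 = extract_features(target_region)
    # Second feature map: the target region with the background removed.
    f2 = extract_features(target_region * target_mask)
    # Third feature map: the target region with the target removed.
    f3 = extract_features(target_region * (1 - target_mask))
    # Fourth feature map: the searching region of the second image.
    f4 = extract_features(searching_region)

    # Similarity of the target to the searching region, and of the
    # background to the searching region.
    sim_target = cross_correlate(f4, f1) + cross_correlate(f4, f2)
    sim_background = cross_correlate(f4, f3)

    # Score matrix combining both similarities (the subtraction is an
    # assumed combination, chosen here to suppress background responses).
    score_matrix = sim_target - sim_background

    # Estimate the target position as the location of the maximum score.
    y, x = np.unravel_index(np.argmax(score_matrix), score_matrix.shape)
    return (int(y), int(x)), score_matrix

Under these assumptions, penalizing locations in the searching region that resemble the background of the target region suppresses responses from nearby distractors, so the peak of the score matrix is more likely to fall on the target itself.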