US 11,749,005 B2
User authentication apparatus, user authentication method and training method for user authentication
Heewon Kim, Seoul (KR); Seon Min Rhee, Seoul (KR); Jihye Kim, Anyang-si (KR); Ju Hwan Song, Suwon-si (KR); and Jaejoon Han, Seoul (KR)
Assigned to Samsung Electronics Co., Ltd., Suwon-si (KR)
Filed by Samsung Electronics Co., Ltd., Suwon-si (KR)
Filed on Sep. 20, 2022, as Appl. No. 17/948,450.
Application 17/948,450 is a continuation of application No. 16/875,368, filed on May 15, 2020, granted, now Pat. No. 11,482,042.
Claims priority of application No. 10-2019-0169581 (KR), filed on Dec. 18, 2019.
Prior Publication US 2023/0009696 A1, Jan. 12, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06V 20/64 (2022.01); G06T 7/50 (2017.01); G06F 21/32 (2013.01); G06N 3/08 (2023.01); G06V 40/16 (2022.01); G06F 18/22 (2023.01); G06F 18/214 (2023.01); G06F 18/2413 (2023.01); G06V 10/74 (2022.01); G06V 10/44 (2022.01); G06V 10/75 (2022.01)
CPC G06V 20/647 (2022.01) [G06F 18/214 (2023.01); G06F 18/22 (2023.01); G06F 18/2413 (2023.01); G06F 21/32 (2013.01); G06N 3/08 (2013.01); G06T 7/50 (2017.01); G06V 10/454 (2022.01); G06V 10/751 (2022.01); G06V 10/761 (2022.01); G06V 40/165 (2022.01); G06V 40/168 (2022.01); G06V 40/172 (2022.01); G06T 2207/10028 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30201 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A processor-implemented method comprising:
acquiring a training image pair comprising a frontalized face of a user, wherein any one image of the training image pair includes depth values for pixels corresponding to at least a portion of the face of the user that was viewed when a depth image was captured, and includes pixels, which do not have depth values, corresponding to a portion of the face of the user that was hidden when the depth image was captured;
calculating, using a first neural network, a first confidence map comprising confidence values for the pixels, among pixels included in a first training image, that correspond to at least the portion of the face of the user viewed when the depth image was captured;
calculating, using a second neural network, a second confidence map comprising confidence values, for authenticating a user included in a second training image, for the pixels, among pixels included in the second training image, that correspond to at least the portion of the face of the user viewed when the depth image was captured;
extracting a first feature vector from a first image generated based on the first training image and the first confidence map;
extracting a second feature vector from a second image generated based on the second training image and the second confidence map; and
updating the first neural network and the second neural network based on a correlation between the first feature vector and the second feature vector.
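The following is a minimal, hypothetical sketch of the training step recited in claim 1, written in Python with PyTorch. The network architectures, the use of a single shared feature extractor, the elementwise weighting of each depth image by its confidence map, and the cosine-similarity correlation loss are all illustrative assumptions; the claim does not fix any of these details.

    # Hypothetical sketch of the claim 1 training step; shapes, modules,
    # and the cosine-similarity loss are assumptions, not the patented design.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConfidenceNet(nn.Module):
        """Predicts a per-pixel confidence map for a 1-channel depth image."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # confidences in [0, 1]
            )
        def forward(self, x):
            return self.net(x)

    class FeatureNet(nn.Module):
        """Maps a confidence-weighted depth image to a feature vector."""
        def __init__(self, dim=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, dim)
        def forward(self, x):
            return self.fc(self.conv(x).flatten(1))

    conf_net1 = ConfidenceNet()  # "first neural network"
    conf_net2 = ConfidenceNet()  # "second neural network"
    feat_net = FeatureNet()      # shared extractor (an assumption)
    params = (list(conf_net1.parameters()) + list(conf_net2.parameters())
              + list(feat_net.parameters()))
    opt = torch.optim.Adam(params, lr=1e-3)

    def train_step(img1, img2):
        """img1, img2: (B, 1, H, W) frontalized depth images of the same user;
        pixels without depth values are assumed to be stored as zeros."""
        conf1 = conf_net1(img1)         # first confidence map
        conf2 = conf_net2(img2)         # second confidence map
        # Generate the first/second images by weighting depth values by
        # confidence; hidden pixels (zeros) remain suppressed.
        feat1 = feat_net(img1 * conf1)  # first feature vector
        feat2 = feat_net(img2 * conf2)  # second feature vector
        # Update both networks to increase the correlation (here, cosine
        # similarity) between the two feature vectors.
        loss = 1.0 - F.cosine_similarity(feat1, feat2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # Example: one update on a random batch of 64x64 depth images.
    loss = train_step(torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64))

In this sketch, pixels that lack depth values are zeros, so weighting each image by its predicted confidence map suppresses hidden regions before feature extraction, and maximizing the cosine similarity between the two feature vectors drives both confidence networks to emphasize pixels that are reliable for authenticating the same user.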