CPC G06V 20/647 (2022.01) [G06F 18/214 (2023.01); G06F 18/22 (2023.01); G06F 18/2413 (2023.01); G06F 21/32 (2013.01); G06N 3/08 (2013.01); G06T 7/50 (2017.01); G06V 10/454 (2022.01); G06V 10/751 (2022.01); G06V 10/761 (2022.01); G06V 40/165 (2022.01); G06V 40/168 (2022.01); G06V 40/172 (2022.01); G06T 2207/10028 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30201 (2013.01)]; 20 Claims
1. A processor-implemented method comprising:
acquiring a training image pair comprising images of a frontalized face of a user, wherein any one image of the training image pair includes pixels which correspond to a portion of the face of the user hidden when a depth image of the face was captured and which therefore do not have depth values;
calculating, using a first neural network, a first confidence map comprising confidence values for pixels of a first training image of the training image pair which correspond to at least a portion of the face of the user viewed when the depth image was captured, from among pixels included in the first training image;
calculating, using a second neural network, a second confidence map comprising confidence values for authenticating the user included in a second training image of the training image pair, the confidence values of the second confidence map being calculated for pixels of the second training image which correspond to at least the portion of the face of the user viewed when the depth image was captured, from among pixels included in the second training image;
extracting a first feature vector from a first image generated based on the first training image and the first confidence map;
extracting a second feature vector from a second image generated based on the second training image and the second confidence map; and
updating the first neural network and the second neural network based on a correlation between the first feature vector and the second feature vector.
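Read as a training procedure, claim 1 amounts to: weight each frontalized depth image of the pair by a learned per-pixel confidence map, extract a feature vector from each resulting image, and update both confidence networks so that the correlation between the two feature vectors increases for a genuine pair. The sketch below is a minimal, non-authoritative illustration of that loop, assuming PyTorch; the network architectures, the element-wise weighting used to generate the intermediate images, the shared feature extractor, and the choice of cosine similarity as the correlation measure are all assumptions not specified in the claim, and every name is hypothetical.

```python
# Minimal training-step sketch (assumptions: PyTorch, a shared feature
# extractor, element-wise confidence weighting, cosine similarity as the
# "correlation" between feature vectors; all names are hypothetical).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfidenceNet(nn.Module):
    """Predicts a per-pixel confidence map in [0, 1] for a 1-channel depth image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.body(x)

class FeatureExtractor(nn.Module):
    """Maps a confidence-weighted image to a fixed-length feature vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)
    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

first_net, second_net = ConfidenceNet(), ConfidenceNet()
extractor = FeatureExtractor()
# Assumption: the feature extractor is trained jointly with both confidence networks.
optimizer = torch.optim.Adam(
    list(first_net.parameters()) + list(second_net.parameters()) +
    list(extractor.parameters()), lr=1e-4)

def training_step(first_img, second_img):
    """first_img / second_img: (B, 1, H, W) frontalized depth images of the same user."""
    # First and second confidence maps for the pixels of each training image.
    first_conf = first_net(first_img)
    second_conf = second_net(second_img)
    # Images "generated based on" each training image and its confidence map:
    # here simply the element-wise product (one possible instantiation).
    first_feat = extractor(first_img * first_conf)
    second_feat = extractor(second_img * second_conf)
    # Correlation between the two feature vectors; maximizing it pulls the
    # genuine pair together, so the loss is its negation.
    corr = F.cosine_similarity(first_feat, second_feat, dim=1).mean()
    loss = -corr
    optimizer.zero_grad()
    loss.backward()   # gradients flow back into both confidence networks
    optimizer.step()  # updates the first and the second neural network
    return loss.item()

# Example usage with random stand-in data:
loss = training_step(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64))
```

Because the feature extractor's gradients flow back through both confidence maps, a single backward pass updates the first and second neural networks jointly, which is one way to realize the final updating step of the claim.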