US 11,747,898 B2
Method and apparatus with gaze estimation
Xiabing Liu, Beijing (CN); Hui Zhang, Beijing (CN); Jae-Joon Han, Seoul (KR); Changkyu Choi, Seongnam-si (KR); and Tianchu Guo, Beijing (CN)
Assigned to Samsung Electronics Co., Ltd., Suwon-si (KR)
Filed by Samsung Electronics Co., Ltd., Suwon-si (KR)
Filed on Aug. 5, 2021, as Appl. No. 17/394,653.
Application 17/394,653 is a continuation of application No. 16/722,221, filed on Dec. 20, 2019, granted, now 11,113,842.
Claims priority of application No. 201811582119.9 (CN), filed on Dec. 24, 2018; and application No. 10-2019-0116694 (KR), filed on Sep. 23, 2019.
Prior Publication US 2021/0366152 A1, Nov. 25, 2021
This patent is subject to a terminal disclaimer.
Int. Cl. G06V 10/82 (2022.01); G06F 3/01 (2006.01); G06T 7/73 (2017.01); G06V 10/764 (2022.01); G06V 40/18 (2022.01); G06V 20/20 (2022.01)
CPC G06F 3/013 (2013.01) [G06T 7/74 (2017.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 40/18 (2022.01); G06T 2207/20021 (2013.01); G06T 2207/20076 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30201 (2013.01); G06V 20/20 (2022.01)] 13 Claims
OG exemplary drawing
 
1. A processor-implemented gaze estimation method, comprising:
obtaining an image including an eye region of a user;
extracting, from the obtained image, a first feature of data;
obtaining a second feature of data used for calibration of a neural network model; and
estimating a position of a gaze point in a gaze area of the user from the first feature and the second feature using the neural network model,
wherein the estimating of the position of the gaze point comprises:
determining, from subareas into which the gaze area is divided, a subarea in which the gaze point is included based on a feature difference between the first feature and the second feature, and
determining the position of the gaze point based on the determined subarea.
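The claimed estimation pipeline — extract a first feature from the eye-region image, obtain a second (calibration) feature, form their difference, classify which subarea of the gaze area contains the gaze point, and derive a position from that subarea — can be sketched as follows. This is a minimal illustration only: the feature dimensionality, the 3×3 subarea grid, the screen size, the mean-pool "extractor", the random classifier weights, and the choice of the subarea center as the output position are all assumptions for the sketch, not details disclosed in the claim (a real system would use a trained neural network model for both steps).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- none of these are specified in the claim.
FEATURE_DIM = 16                    # dimensionality of the features
GRID = 3                            # gaze area divided into GRID x GRID subareas
AREA_W, AREA_H = 1920.0, 1080.0     # gaze area extent (illustrative, in pixels)

# Stand-in weights for the subarea classifier; in the patent these would
# be learned parameters of the neural network model.
W_cls = rng.normal(size=(FEATURE_DIM, GRID * GRID))

def extract_first_feature(eye_image: np.ndarray) -> np.ndarray:
    """Placeholder feature extractor: pool the eye-region image into a
    FEATURE_DIM vector (a real system would use a CNN backbone)."""
    flat = eye_image.reshape(-1).astype(np.float64)
    pooled = np.resize(flat, FEATURE_DIM)
    return pooled / (np.linalg.norm(pooled) + 1e-9)

def estimate_gaze_point(first_feat: np.ndarray, second_feat: np.ndarray):
    """Claim steps: (1) form the feature difference between the first and
    second features, (2) determine the subarea containing the gaze point,
    (3) determine the position from that subarea (here, its center)."""
    diff = first_feat - second_feat        # feature difference
    logits = diff @ W_cls                  # per-subarea scores
    subarea = int(np.argmax(logits))       # subarea containing the gaze point
    row, col = divmod(subarea, GRID)
    x = (col + 0.5) * AREA_W / GRID        # subarea center as position
    y = (row + 0.5) * AREA_H / GRID
    return subarea, (x, y)

eye_image = rng.integers(0, 256, size=(32, 64))   # dummy eye-region crop
first = extract_first_feature(eye_image)
second = rng.normal(size=FEATURE_DIM)             # calibration feature
subarea, (gx, gy) = estimate_gaze_point(first, second)
print(subarea, gx, gy)
```

Framing the coarse step as subarea classification over the feature difference, then refining to a position, mirrors the two-stage structure recited in the claim; the snap-to-center refinement here is the simplest stand-in for the second stage.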