US 12,488,488 B2
Personalized neural network for eye tracking
Adrian Kaehler, Los Angeles, CA (US); Douglas Bertram Lee, Redwood City, CA (US); and Vijay Badrinarayanan, Mountain View, CA (US)
Assigned to MAGIC LEAP, INC., Plantation, FL (US)
Filed by Magic Leap, Inc., Plantation, FL (US)
Filed on Apr. 2, 2021, as Appl. No. 17/221,250.
Application 17/221,250 is a continuation of application No. 16/880,752, filed on May 21, 2020, granted, now 10,977,820.
Application 16/880,752 is a continuation of application No. 16/134,600, filed on Sep. 18, 2018, granted, now 10,719,951, issued on Jul. 21, 2020.
Claims priority of provisional application 62/560,898, filed on Sep. 20, 2017.
Prior Publication US 2021/0327085 A1, Oct. 21, 2021
This patent is subject to a terminal disclaimer.
Int. Cl. G06V 10/40 (2022.01); G02B 27/00 (2006.01); G02B 27/01 (2006.01); G06F 1/16 (2006.01); G06F 3/01 (2006.01); G06F 3/0346 (2013.01); G06F 3/04815 (2022.01); G06N 3/02 (2006.01); G06T 7/20 (2017.01); G06T 7/246 (2017.01); G06T 7/70 (2017.01); G06T 19/00 (2011.01)
CPC G06T 7/70 (2017.01) [G02B 27/0093 (2013.01); G02B 27/0172 (2013.01); G06F 1/163 (2013.01); G06F 3/011 (2013.01); G06F 3/013 (2013.01); G06F 3/0346 (2013.01); G06F 3/04815 (2013.01); G06N 3/02 (2013.01); G06T 7/20 (2013.01); G06T 7/246 (2017.01); G02B 2027/0185 (2013.01); G06T 19/006 (2013.01); G06T 2207/10016 (2013.01); G06T 2207/10048 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30041 (2013.01); G06T 2207/30201 (2013.01)] 19 Claims
OG exemplary drawing
 
1. A computing system comprising:
a display device;
a non-transitory computer-readable storage medium configured to store software instructions;
a hardware processor configured to execute the software instructions to cause the computing system to:
capture one or more first images of an eye of a user during or immediately after a first user interface event in which the user activates or deactivates a virtual button of a virtual remote control in a first position, the first images reflecting eye poses of the user which are associated with a particular first portion of a user interface rendered as virtual content;
capture one or more second images of the eye of the user during or immediately after a second user interface event in which the user activates or deactivates the virtual button of the virtual remote control in a second position, the second images reflecting eye poses of the user which are associated with a particular second portion of the user interface, different from the particular first portion, rendered as virtual content;
cause update, based on the captured first and second images as a set of retraining eye images, of a machine learning model configured to output an eye pose based on an input image related to a particular portion of the user interface, wherein the eye pose indicates a plurality of angular parameters relative to a natural resting direction of the eye and wherein the angular parameters indicate an azimuthal deflection and a zenithal deflection; and
identify, during operation of the computing system, a particular eye pose of the user via applying the updated machine learning model to an input image.
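The retraining loop recited in claim 1 can be sketched in code: eye images captured during user-interface events (virtual-button activations at known positions) are paired with the gaze angles those positions imply, and the pairs are used to update a per-user model that maps an input image to an eye pose expressed as two angular deflections (azimuthal and zenithal) relative to the eye's natural resting direction. The sketch below is illustrative only, assuming a simple linear least-squares regressor in place of the patent's neural network; all class, function, and variable names are hypothetical and not drawn from the patent.

```python
import numpy as np

def extract_features(image):
    # Placeholder feature extraction: flatten the eye image into a vector.
    # A real system would use a learned embedding instead.
    return image.reshape(-1).astype(float)

class EyePoseModel:
    """Hypothetical per-user eye-pose regressor (stand-in for the
    claim's machine learning model)."""

    def __init__(self, n_features):
        # Weights map image features to (azimuthal, zenithal) deflection
        # in radians, relative to the eye's natural resting direction.
        self.W = np.zeros((n_features, 2))

    def update(self, retraining_images, ui_event_poses):
        """Refit on user-specific calibration pairs.

        retraining_images: eye images captured during or immediately
            after user interface events (the claim's retraining set).
        ui_event_poses: gaze angles implied by the activated
            virtual-button positions, shape (n_events, 2).
        """
        X = np.stack([extract_features(im) for im in retraining_images])
        Y = np.asarray(ui_event_poses, dtype=float)
        # Least-squares update stands in for neural-network retraining.
        self.W, *_ = np.linalg.lstsq(X, Y, rcond=None)

    def predict(self, image):
        """Identify the eye pose for an input image: returns the
        (azimuthal, zenithal) deflection pair."""
        return extract_features(image) @ self.W

# Usage: eye images from UI events at two different button positions.
rng = np.random.default_rng(0)
images = [rng.random((4, 4)) for _ in range(8)]          # captured eye images
event_poses = rng.uniform(-0.5, 0.5, size=(8, 2))        # implied gaze angles

model = EyePoseModel(n_features=16)
model.update(images, event_poses)                        # personalize the model
azimuth, zenith = model.predict(images[0])               # claim's final step
```

The design point mirrors the claim: the button position at each user-interface event supplies a label for the concurrently captured eye image, so the model is personalized without an explicit calibration session.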