| CPC G06T 7/70 (2017.01) [G02B 27/0093 (2013.01); G02B 27/0172 (2013.01); G06F 1/163 (2013.01); G06F 3/011 (2013.01); G06F 3/013 (2013.01); G06F 3/0346 (2013.01); G06F 3/04815 (2013.01); G06N 3/02 (2013.01); G06T 7/20 (2013.01); G06T 7/246 (2017.01); G02B 2027/0185 (2013.01); G06T 19/006 (2013.01); G06T 2207/10016 (2013.01); G06T 2207/10048 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30041 (2013.01); G06T 2207/30201 (2013.01)] | 19 Claims |

1. A computing system comprising:
a display device;
a non-transitory computer-readable storage medium configured to store software instructions;
a hardware processor configured to execute the software instructions to cause the computing system to:
capture one or more first images of an eye of a user during or immediately after a first user interface event in which the user activates or deactivates a virtual button of a virtual remote control in a first position, the first images reflecting eye poses of the user that are associated with a particular first portion of a user interface rendered as virtual content;
capture one or more second images of the eye of the user during or immediately after a second user interface event in which the user activates or deactivates the virtual button of the virtual remote control in a second position, the second images reflecting eye poses of the user that are associated with a particular second portion of the user interface, different from the particular first portion, rendered as virtual content;
cause an update, based on the captured first and second images as a set of retraining eye images, of a machine learning model configured to output, based on an input image, an eye pose related to a particular portion of the user interface, wherein the eye pose indicates a plurality of angular parameters relative to a natural resting direction of the eye, and wherein the angular parameters indicate an azimuthal deflection and a zenithal deflection; and
identify, during operation of the computing system, a particular eye pose of the user by applying the updated machine learning model to an input image.
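The claim recites a capture, retrain, and apply flow: eye images captured around two virtual-button interactions serve as a retraining set for a machine learning model that maps a single eye image to an eye pose expressed as azimuthal and zenithal deflections from the eye's natural resting direction, and the updated model is then applied at runtime. The sketch below illustrates one way such a flow could look. It is a minimal illustration under stated assumptions, not the patented implementation: the network architecture, the names `EyePoseNet`, `retrain`, and `identify_eye_pose`, the image size, and the assumption that each virtual-button position implies a known (azimuth, zenith) gaze target are all introduced here for illustration only.

```python
# Minimal sketch (not the patented implementation): fine-tuning a small CNN
# that regresses an eye pose -- (azimuth, zenith) deflection in radians from
# the eye's natural resting direction -- from a single eye image. All names
# and sizes below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

IMAGE_SIZE = 64  # assumed square grayscale eye-camera crop


class EyePoseNet(nn.Module):
    """Regresses (azimuthal, zenithal) deflection angles from an eye image."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # outputs [azimuth, zenith]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def retrain(model, first_images, first_pose, second_images, second_pose,
            steps=50, lr=1e-4):
    """Update the model with eye images captured around the two UI events.

    first_pose / second_pose are the (azimuth, zenith) angles of the gaze
    target implied by the virtual button's first and second positions --
    assumed here to be known from the rendered user interface layout.
    """
    images = torch.cat([first_images, second_images])            # (N, 1, H, W)
    targets = torch.cat([first_pose.expand(len(first_images), 2),
                         second_pose.expand(len(second_images), 2)])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(model(images), targets)  # regression on both angles
        loss.backward()
        opt.step()
    return model


@torch.no_grad()
def identify_eye_pose(model, image):
    """Apply the updated model to a single input eye image at runtime."""
    model.eval()
    azimuth, zenith = model(image.unsqueeze(0))[0].tolist()
    return azimuth, zenith
```

In this sketch the labels for the retraining images come from the rendered positions of the virtual button at the time of each user interface event, which mirrors the idea the claim relies on: the interaction pins the user's gaze to a known portion of the user interface, so the captured eye images arrive effectively pre-labeled.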