US 12,475,696 B2
Personalized online learning for artificial reality applications
Syed Shakib Sarwar, Bellevue, WA (US); Manan Suri, New Delhi (IN); Vivek Kamalkant Parmar, Vadodara (IN); Ziyun Li, Redmond, WA (US); Barbara De Salvo, Belmont, WA (US); and Hsien-Hsin Sean Lee, Cambridge, MA (US)
Assigned to Meta Platforms Technologies, LLC, Menlo Park, CA (US)
Filed by META PLATFORMS TECHNOLOGIES, LLC, Menlo Park, CA (US)
Filed on Mar. 29, 2022, as Appl. No. 17/707,149.
Claims priority of application No. 202241007310 (IN), filed on Feb. 11, 2022.
Prior Publication US 2023/0260268 A1, Aug. 17, 2023
Int. Cl. G06V 10/82 (2022.01); G06F 3/01 (2006.01); G06T 7/80 (2017.01); G06T 19/00 (2011.01); G06V 10/764 (2022.01); G06V 40/10 (2022.01); G06V 40/16 (2022.01)
CPC G06V 10/82 (2022.01) [G06F 3/013 (2013.01); G06T 7/80 (2017.01); G06T 19/006 (2013.01); G06V 10/764 (2022.01); G06V 40/11 (2022.01); G06V 40/161 (2022.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01)] 20 Claims
OG exemplary drawing
 
1. An augmented-reality headset storing executable instructions that, when executed, cause the augmented-reality headset to perform steps comprising:
capturing, via a camera communicatively coupled to the augmented-reality headset, a target frame, wherein the target frame includes a portion of a wearer of the augmented-reality headset;
generating, based on the target frame, a target feature embedding representing the target frame;
identifying, from a stored set of embeddings, a sample embedding that is closest to the target feature embedding and a sample calibration frame associated with the sample embedding, wherein the sample embedding is distinct from the target feature embedding and the sample calibration frame is distinct from the target frame;
generating a combined embedding comprising a difference between the target feature embedding and the sample embedding;
providing the target feature embedding, the combined embedding, and information about the sample calibration frame as inputs to a neural network that is trained to predict a difference between an input calibration frame and an input target frame;
generating, using the neural network, a predicted difference between the sample calibration frame and the target frame;
predicting, based on the predicted difference and the neural network, a configuration of the portion of the wearer in the target frame; and
in accordance with a determination that the configuration of the portion of the wearer in the target frame indicates a leftward direction, updating an image displayed at a display that is communicatively coupled to the augmented-reality headset so that the image is panned in the leftward direction.
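The pipeline recited in claim 1 can be sketched in simplified form as follows. Every name and both toy "networks" below are illustrative stand-ins invented for this sketch, not the patent's actual models: `embed` substitutes unit normalization for a trained feature extractor, and `predict_difference` / `predict_configuration` substitute trivial rules for the trained neural network that predicts the difference between a calibration frame and a target frame.

```python
import math

def embed(frame):
    """Stand-in feature extractor: maps a frame (list of floats) to an embedding.
    A real system would run a trained encoder; here we only L2-normalize."""
    norm = math.sqrt(sum(x * x for x in frame)) or 1.0
    return [x / norm for x in frame]

def nearest_sample(target_emb, stored):
    """Identify the stored (sample embedding, calibration info) pair whose
    embedding is closest to the target feature embedding (Euclidean distance)."""
    def dist(emb):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(emb, target_emb)))
    return min(stored, key=lambda pair: dist(pair[0]))

def combined_embedding(target_emb, sample_emb):
    """The combined embedding comprises the difference between the target
    feature embedding and the sample embedding, per the claim."""
    return [t - s for t, s in zip(target_emb, sample_emb)]

def predict_difference(target_emb, combined, calib_info):
    """Stand-in for the trained network taking the target embedding, the
    combined embedding, and calibration-frame info as inputs. Toy rule:
    reuse the combined embedding as the 'predicted difference'."""
    return combined

def predict_configuration(pred_diff, calib_info):
    """Map the predicted difference to a configuration of the captured
    portion of the wearer. Toy rule: sign of the first component."""
    return "left" if pred_diff[0] < 0 else "right"

# --- usage: one pass through the claimed steps ---
stored = [
    ([1.0, 0.0], {"id": "calib_A"}),   # sample embedding + calibration info
    ([0.0, 1.0], {"id": "calib_B"}),
]
target = embed([0.9, 0.1])                            # capture target frame
sample_emb, calib_info = nearest_sample(target, stored)
combined = combined_embedding(target, sample_emb)
diff = predict_difference(target, combined, calib_info)
config = predict_configuration(diff, calib_info)
if config == "left":
    print("pan displayed image in the leftward direction")
```

With the toy inputs above, the target embedding is nearest to `calib_A`, its combined embedding has a negative first component, and the sketch therefore pans leftward; in the claimed system these stand-ins would be replaced by the trained embedding network and difference-prediction network running on the headset.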