US RE50,598 E1
Artificial reality system having a sliding menu
Jonathan Ravasz, London (GB); Jasper Stevens, London (GB); Adam Tibor Varga, London (GB); Etienne Pinchon, London (GB); Simon Charles Tickner, Canterbury (GB); Jennifer Lynn Spurlock, Seattle, WA (US); Kyle Eric Sorge-Toomey, Seattle, WA (US); Robert Ellis, London (GB); and Barrett Fox, Berkeley, CA (US)
Assigned to Meta Platforms Technologies, LLC, Menlo Park, CA (US)
Filed by Meta Platforms Technologies, LLC, Menlo Park, CA (US)
Filed on Jan. 11, 2023, as Appl. No. 18/095,946.
Application 18/095,946 is a reissue of application No. 16/434,919, filed on Jun. 7, 2019, granted, now 10,890,983, issued on Jan. 12, 2021.
Int. Cl. G06F 3/01 (2006.01); G06F 3/03 (2006.01); G06F 3/04815 (2022.01); G06F 3/0482 (2013.01); G06F 3/0484 (2022.01); G06F 3/04842 (2022.01); G06F 3/04847 (2022.01); G06T 19/00 (2011.01)
CPC G06F 3/017 (2013.01) [G06F 3/011 (2013.01); G06F 3/012 (2013.01); G06F 3/0304 (2013.01); G06F 3/04815 (2013.01); G06F 3/0482 (2013.01); G06F 3/04842 (2013.01); G06F 3/04847 (2013.01); G06T 19/006 (2013.01)] 59 Claims
OG exemplary drawing
 
1. An artificial reality system comprising:
an image capture device configured to capture image data;
a head mounted display (HMD) configured to output artificial reality content;
a gesture detector comprising processing circuitry configured to identify, from the image data, a menu activation gesture comprising a configuration of a hand in a substantially upturned orientation of the hand and a pinching configuration of a thumb and a finger of the hand and identify, from the image data and subsequent to the menu activation gesture, a menu sliding gesture comprising the configuration of the hand in combination with a motion of the hand;
a user interface (UI) engine configured to, in response to the menu activation gesture, generate a menu interface and a slidably engageable UI element at a first position relative to the menu interface, and in response to the menu sliding gesture [that, via at least the configuration of the hand, engages the slidably engageable UI element], translate the slidably engageable UI element to a second position relative to the menu interface [by sliding the slidably engageable UI element such that each motion of the hand, in at least one dimension, is translated to a motion of the menu sliding gesture that causes the sliding of the slidably engageable UI element]; and
a rendering engine configured to render the artificial reality content, the menu interface, and the translation of the slidably engageable UI element from the first position relative to the user [menu] interface to the second position relative to the user [menu] interface for display at the HMD.
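As a purely illustrative sketch of the gesture-detector element recited in claim 1 (not drawn from the patent specification; the class, thresholds, and hand-pose fields below are hypothetical), a detector might treat a substantially upturned palm combined with a thumb-and-finger pinch as the menu activation gesture, and subsequent hand motion while that configuration is held as the menu sliding gesture:

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical thresholds, not taken from the patent.
PINCH_DISTANCE_M = 0.02        # thumb and finger tips within 2 cm counts as a pinch
UPTURNED_DOT_THRESHOLD = 0.7   # palm normal mostly aligned with world "up"


@dataclass
class HandPose:
    """Simplified hand pose assumed to be derived from captured image data."""
    palm_normal: np.ndarray    # unit vector normal to the palm, world space
    thumb_tip: np.ndarray      # 3-D position of the thumb tip (meters)
    index_tip: np.ndarray      # 3-D position of the index fingertip (meters)
    palm_center: np.ndarray    # 3-D position of the palm center (meters)


def is_upturned(pose: HandPose) -> bool:
    """Hand is 'substantially upturned' when the palm normal points roughly upward."""
    world_up = np.array([0.0, 1.0, 0.0])
    return float(np.dot(pose.palm_normal, world_up)) > UPTURNED_DOT_THRESHOLD


def is_pinching(pose: HandPose) -> bool:
    """Thumb and finger form a pinching configuration when their tips are close."""
    return float(np.linalg.norm(pose.thumb_tip - pose.index_tip)) < PINCH_DISTANCE_M


class GestureDetector:
    """Tracks whether the activation configuration is held and reports hand motion."""

    def __init__(self) -> None:
        self.menu_active = False
        self._anchor = None  # palm position when the menu activation gesture was detected

    def update(self, pose: HandPose):
        """Return ('activate', None), ('slide', displacement), or (None, None)."""
        holding = is_upturned(pose) and is_pinching(pose)
        if not holding:
            self.menu_active = False
            self._anchor = None
            return None, None
        if not self.menu_active:
            # Configuration first detected: menu activation gesture.
            self.menu_active = True
            self._anchor = pose.palm_center.copy()
            return "activate", None
        # Configuration held after activation: subsequent motion is a menu sliding gesture.
        return "slide", pose.palm_center - self._anchor
```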
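The UI-engine element could then consume those events. The sketch below, again hypothetical and using the displacement reported by the detector above, generates the menu and the slidably engageable UI element at a first position on activation, and on each sliding gesture translates the element along a single slide axis, so that only the component of hand motion in that one dimension moves the element between positions relative to the menu interface:

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class MenuInterface:
    """Hypothetical menu laid out along a single slide axis in world space."""
    origin: np.ndarray      # world-space position of the element's first position
    slide_axis: np.ndarray  # unit vector of the dimension along which the element slides
    length_m: float         # slidable extent of the menu, in meters


class UIEngine:
    """Places the slidably engageable UI element and slides it with the hand."""

    def __init__(self, menu: MenuInterface) -> None:
        self.menu = menu
        self.offset_m = 0.0  # first position: zero offset relative to the menu interface

    def on_menu_activation(self, hand_position: np.ndarray) -> None:
        """Generate the menu interface with the element at its first position."""
        self.menu.origin = hand_position.copy()
        self.offset_m = 0.0

    def on_menu_slide(self, hand_displacement: np.ndarray) -> None:
        """Translate the element: only the component of hand motion along the slide
        axis (one dimension) slides it, clamped to the menu's extent."""
        delta = float(np.dot(hand_displacement, self.menu.slide_axis))
        self.offset_m = float(np.clip(delta, 0.0, self.menu.length_m))

    def element_position(self) -> np.ndarray:
        """Current (second) position of the element relative to the menu interface,
        handed to a rendering engine for display at the HMD."""
        return self.menu.origin + self.offset_m * self.menu.slide_axis
```

In this sketch the rendering engine would simply draw the menu interface and the element at `element_position()` each frame, so the translation from the first to the second position is shown at the HMD as the hand moves.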