US 12,013,985 B1
Single-handed gestures for reviewing virtual content
Karen Stolzenberg, Venice, CA (US); and Ilteris Canberk, Marina Del Rey, CA (US)
Assigned to Snap Inc., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on Jan. 31, 2022, as Appl. No. 17/588,934.
Claims priority of provisional application 63/153,818, filed on Feb. 25, 2021.
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 3/04815 (2022.01); G02B 27/01 (2006.01); G06F 3/01 (2006.01); G06F 3/0485 (2022.01); G06T 19/00 (2011.01); G06V 20/40 (2022.01); G06V 40/10 (2022.01); G06V 40/20 (2022.01)
CPC G06F 3/017 (2013.01) [G02B 27/0101 (2013.01); G02B 27/017 (2013.01); G06F 3/011 (2013.01); G06F 3/04815 (2013.01); G06F 3/0485 (2013.01); G06T 19/006 (2013.01); G06V 20/46 (2022.01); G06V 40/113 (2022.01); G06V 40/28 (2022.01); G02B 2027/0138 (2013.01); G02B 2027/0178 (2013.01)] 15 Claims
OG exemplary drawing
 
1. A method of viewing virtual content in response to hand gestures detected with an eyewear device, the eyewear device comprising a camera system, an image processing system, and a display for presenting the virtual content, the method comprising:
presenting on the display a series of virtual items;
capturing frames of video data with the camera system;
detecting a series of hand shapes in the captured frames of video data with the image processing system;
determining, with the image processing system, whether the detected series of hand shapes matches a predefined hand gesture selected from a plurality of predefined hand gestures, each associated with an action, wherein the plurality of predefined hand gestures and associated actions comprises a combination selected from the group consisting of (a) a neutral gesture associated with an opening action, (b) a leafing gesture associated with a scrolling action, (c) a grasping gesture associated with a selecting action, and (d) a dorsal gesture associated with a closing action, and wherein the detected series of hand shapes matches the predefined leafing gesture;
identifying a first subset of the captured frames of video data, the first subset including the detected leafing gesture;
detecting in the first subset a current finger position and a previous finger position;
measuring a gap distance between the detected current finger position and the detected previous finger position; and
controlling the presentation of the series of virtual items on the display in accordance with the associated action, wherein the controlling comprises presenting a next virtual item in the series in accordance with the measured gap distance, such that the series appears to advance to the next virtual item at a speed correlated with the measured gap distance.
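The determining step of claim 1, matching a detected series of hand shapes against a set of predefined gestures, can be illustrated with a minimal rule-based sketch. It assumes per-frame hand landmarks have already been extracted by the eyewear's camera and image processing systems; the landmark fields, thresholds, and classification rules below are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HandShape:
    """Per-frame hand landmarks (illustrative; values in normalized image coordinates)."""
    fingertip_spread: float   # average distance between adjacent fingertips
    palm_facing_camera: bool  # True if the palmar side of the hand faces the camera
    fist_closure: float       # 0.0 = fully open hand, 1.0 = closed fist

def classify_gesture(shapes: List[HandShape]) -> Optional[str]:
    """Map a detected series of hand shapes to one of four predefined gestures.

    Returns 'neutral' (opening action), 'leafing' (scrolling action),
    'grasping' (selecting action), 'dorsal' (closing action), or None
    if no predefined gesture matches. Thresholds are placeholder assumptions.
    """
    if not shapes:
        return None
    first, last = shapes[0], shapes[-1]
    if not last.palm_facing_camera:
        return "dorsal"                       # back of hand toward camera -> close
    if last.fist_closure - first.fist_closure > 0.4:
        return "grasping"                     # fingers curling toward the palm -> select
    if abs(last.fingertip_spread - first.fingertip_spread) > 0.1:
        return "leafing"                      # fingertips fanning -> scroll the item series
    if last.fist_closure < 0.2:
        return "neutral"                      # relaxed open hand -> open the item series
    return None
```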
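The final limitations tie the scrolling speed to a gap distance measured between a current and a previous finger position within the first subset of frames. A hedged sketch of that control logic follows; the fingertip inputs, the speed_scale constant, and the advance_items helper are assumptions introduced for illustration only.

```python
import math
from typing import Sequence, Tuple

Point = Tuple[float, float]  # normalized (x, y) fingertip position in the image

def gap_distance(current: Point, previous: Point) -> float:
    """Euclidean distance between the detected current and previous finger positions."""
    return math.hypot(current[0] - previous[0], current[1] - previous[1])

def advance_items(positions: Sequence[Point], item_index: int, total_items: int,
                  speed_scale: float = 40.0) -> int:
    """Advance through the series of virtual items at a rate correlated with the gap distance.

    positions: fingertip positions from the frames containing the leafing gesture.
    speed_scale converts distance to a number of items and is an illustrative
    constant, not a value taken from the patent.
    """
    for prev, curr in zip(positions, positions[1:]):
        step = int(gap_distance(curr, prev) * speed_scale)  # larger gap -> faster advance
        item_index = min(item_index + step, total_items - 1)
    return item_index

# Example: a fast leafing motion (larger per-frame gap) advances farther than a slow one.
slow = [(0.50, 0.50), (0.51, 0.50), (0.52, 0.50)]
fast = [(0.50, 0.50), (0.58, 0.50), (0.66, 0.50)]
print(advance_items(slow, 0, 100), advance_items(fast, 0, 100))
```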