US 12,242,666 B2
Artificial reality input using multiple modalities
Roger Ibars Martinez, Seattle, WA (US); Johnathon Simmons, Seattle, WA (US); Pol Pla I Conesa, Portland, OR (US); Nathan Aschenbach, Seattle, WA (US); Aaron Faucher, Seattle, WA (US); Chris Rojas, Seattle, WA (US); Emron Jackson Henry, Duvall, WA (US); and Bryan Sparks, Sammamish, WA (US)
Assigned to Meta Platforms Technologies, LLC, Menlo Park, CA (US)
Filed by Meta Platforms Technologies, LLC, Menlo Park, CA (US)
Filed on Apr. 8, 2022, as Appl. No. 17/716,456.
Prior Publication US 2023/0324986 A1, Oct. 12, 2023
Int. Cl. G06F 3/01 (2006.01); G06F 3/04883 (2022.01); G06F 3/04886 (2022.01)
CPC G06F 3/013 (2013.01) [G06F 3/04883 (2013.01); G06F 3/04886 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for receiving input in an artificial reality environment using multiple modalities, the method comprising:
displaying a virtual keyboard in an artificial reality environment;
receiving user gaze input in relation to the virtual keyboard;
receiving initial user touch input at a surface of a controller device, wherein the initial user touch input generates an ad-hoc mapping between the surface of the controller device and the virtual keyboard;
determining that the ad-hoc mapping meets alignment criteria;
displaying, in response to the initial user touch input, an indicator at a location on the virtual keyboard according to the received user gaze input;
receiving additional user touch input as a swipe segment, with a starting point and a finishing point, across the surface of the controller device, wherein the starting point of the swipe segment is mapped to the displayed indicator;
dynamically moving the indicator on the virtual keyboard according to the relative positions of the starting point and the finishing point on the surface of the controller device;
resolving the swipe segment into two or more selections from the virtual keyboard according to the relative positions of the starting point and the finishing point;
dynamically removing the indicator from the virtual keyboard in response to no longer receiving the additional user touch input at the controller device;
receiving, after the indicator is removed from the virtual keyboard, second initial user touch input at a surface of the controller device, wherein the second initial user touch input generates another ad-hoc mapping between the surface of the controller device and the virtual keyboard;
determining that the other ad-hoc mapping does not meet the alignment criteria; and
triggering, in response to the determining that the other ad-hoc mapping does not meet the alignment criteria, visual user feedback in relation to the virtual keyboard or haptic user feedback via the controller device.
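The claimed flow can be illustrated with a minimal sketch. All identifiers, the particular alignment criterion (distance from the touchpad edge), the keyboard layout, and the gain value below are hypothetical assumptions for illustration; the patent does not specify any of them.

```python
from dataclasses import dataclass

# Hypothetical virtual keyboard layout as rows of key characters.
QWERTY = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]


@dataclass
class AdHocMapping:
    """Ad-hoc mapping created at the initial touch: ties the touch point on
    the controller surface to the gaze-determined indicator location on the
    virtual keyboard."""
    touch_origin: tuple       # (x, y) on controller surface, normalized 0..1
    indicator_origin: tuple   # (col, row) on virtual keyboard, in key cells
    scale: float = 10.0       # hypothetical gain: surface units -> key cells

    def to_keyboard(self, touch_point):
        # Move the indicator by the swipe displacement relative to the origin.
        dx = touch_point[0] - self.touch_origin[0]
        dy = touch_point[1] - self.touch_origin[1]
        return (self.indicator_origin[0] + dx * self.scale,
                self.indicator_origin[1] + dy * self.scale)


def meets_alignment_criteria(touch_origin, margin=0.1):
    # Hypothetical criterion: the initial touch must land far enough from the
    # surface edge that a swipe can still move in any direction; otherwise the
    # system would trigger visual or haptic feedback instead.
    return all(margin <= c <= 1.0 - margin for c in touch_origin)


def key_at(pos):
    # Snap a keyboard-space position to the nearest key.
    col, row = int(round(pos[0])), int(round(pos[1]))
    row = max(0, min(row, len(QWERTY) - 1))
    line = QWERTY[row]
    col = max(0, min(col, len(line) - 1))
    return line[col]


def resolve_swipe(mapping, start, finish):
    # Resolve one swipe segment into selections at its two endpoints.
    return [key_at(mapping.to_keyboard(start)),
            key_at(mapping.to_keyboard(finish))]
```

For example, with gaze initially placing the indicator on the "g" key (column 4, row 1) and a touch at the center of the surface, a short rightward swipe resolves to the selections "g" then "h"; a touch near the surface edge fails the (assumed) alignment check and would instead trigger the feedback step.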