CPC G06F 3/014 (2013.01) [G06F 3/0346 (2013.01); G06T 7/20 (2013.01); G06T 7/70 (2017.01); G06F 2203/0331 (2013.01); G06T 2207/30196 (2013.01); G06T 2207/30241 (2013.01)] — 13 Claims

1. A head-mounted display, comprising:
an image capturing device, being configured to capture a plurality of real-time images including a plurality of fingers of a user, wherein the user wears at least one wearable device on at least one of the fingers, and the at least one wearable device is configured to generate inertial sensing data; and
a processor, being electrically connected to the image capturing device, and being configured to perform the following operations:
determining whether the fingers conform to a contact mode corresponding to an entity plane based on the real-time images and the inertial sensing data;
generating a target finger trajectory based on the real-time images and the inertial sensing data in response to the fingers conforming to the contact mode corresponding to the entity plane; and
generating a tap input signal corresponding to the fingers based on a target input type and the target finger trajectory;
wherein the head-mounted display further comprises:
a display device, being electrically connected to the processor, wherein the display device is configured to display a target object,
wherein the processor is further configured to perform the following operations:
judging the target object displayed on the display device; and
determining an input type corresponding to the target object displayed on the display device to select the target input type from a plurality of candidate input types,
wherein:
in response to the processor judging the target object to be a canvas or a signature area, the processor selects the target input type to be a writing type, and calculates a displacement path corresponding to the target finger trajectory on the entity plane to generate the tap input signal corresponding to the fingers;
in response to the processor judging the target object to be a menu or a drop-down field, the processor selects the target input type to be a cursor type, selects a target cursor action from a plurality of cursor actions based on the target finger trajectory, and generates the tap input signal corresponding to the fingers based on the target finger trajectory and the target cursor action; and
in response to the processor judging the target object to be an input field for inputting an account and a password, the processor selects the target input type to be a keyboard type, and calculates a tap position corresponding to the target finger trajectory on the entity plane to generate the tap input signal corresponding to the fingers, such that an output signal is generated based on key contents of a virtual keyboard.
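The selection logic recited above (canvas/signature area → writing type, menu/drop-down field → cursor type, account-and-password field → keyboard type) can be sketched as a simple dispatch table. This is an illustrative sketch only, not the claimed implementation; the object labels, the `InputType` enumeration, and the `select_target_input_type` function are hypothetical names introduced here for clarity.

```python
from enum import Enum, auto

class InputType(Enum):
    """Candidate input types named in the claim."""
    WRITING = auto()   # displacement path on the entity plane
    CURSOR = auto()    # cursor action selected from candidate actions
    KEYBOARD = auto()  # tap position mapped to virtual-keyboard keys

# Hypothetical mapping from the judged target object to the target
# input type, following the three branches recited in the claim.
OBJECT_TO_INPUT_TYPE = {
    "canvas": InputType.WRITING,
    "signature_area": InputType.WRITING,
    "menu": InputType.CURSOR,
    "drop_down_field": InputType.CURSOR,
    "account_password_field": InputType.KEYBOARD,
}

def select_target_input_type(target_object: str) -> InputType:
    """Select the target input type for the judged target object."""
    return OBJECT_TO_INPUT_TYPE[target_object]
```

For example, judging the target object to be a menu would select the cursor type, after which a target cursor action would be chosen based on the target finger trajectory.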