US 11,755,137 B2
Gesture recognition devices and methods
Thomas J. Moscarillo, Woodbridge, CT (US)
Filed by Thomas J. Moscarillo, Woodbridge, CT (US)
Filed on May 17, 2021, as Appl. No. 17/322,556.
Application 17/322,556 is a continuation of application No. 15/870,023, filed on Jan. 12, 2018, granted, now Pat. No. 11,009,961, issued on May 18, 2021.
Application 15/870,023 is a division of application No. 13/776,439, filed on Feb. 25, 2013, granted, now Pat. No. 9,880,629, issued on Jan. 30, 2018.
Claims priority of provisional application 61/602,704, filed on Feb. 24, 2012.
Prior Publication US 2021/0271340 A1, Sep. 2, 2021
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 3/01 (2006.01); G06F 3/041 (2006.01); G06F 3/0488 (2022.01); G06F 3/042 (2006.01); G06F 3/04886 (2022.01); G06F 21/32 (2013.01); G06F 21/83 (2013.01)
CPC G06F 3/0416 (2013.01) [G06F 3/017 (2013.01); G06F 3/0426 (2013.01); G06F 3/0488 (2013.01); G06F 3/04886 (2013.01); G06F 21/32 (2013.01); G06F 21/83 (2013.01); G06F 2203/04104 (2013.01); G06F 2221/2101 (2013.01)] 17 Claims
OG exemplary drawing
 
1. An input device for a digital data processing system, comprising:
at least one sensor that observes a workspace and generates data indicative of one or more parameters of an input agent within the workspace; and
a processor that identifies gestures made by the input agent from the data generated by the at least one sensor and that generates user input information based on the identified gestures, wherein the processor further comprises
a user profile module that identifies a plurality of anatomical landmarks of at least one hand of the user and determines locations of said plurality of anatomical landmarks within the workspace based on data generated by the at least one sensor;
a classification module that tracks said plurality of anatomical landmarks over time and interprets a stream of active motion variables as a particular user gesture by assessing movement of at least one anatomical landmark relative to another anatomical landmark and assigning a particular gesture to the detected changes in the anatomical landmarks; and
a mode selection module configured to switch the input device between a plurality of operating modes;
wherein the input agent comprises one or more hands, each of the one or more hands comprising a plurality of fingers, and wherein the input device is operable in the plurality of operating modes, the processor being configured to set a current operating mode based at least in part on at least one of a location of the one or more hands, a gesture made by the one or more hands, and a configuration of the one or more hands; and
wherein the plurality of operating modes comprises at least two of: a keyboard input mode, a pointing device input mode, a number pad input mode, a template-based input mode, and a custom pad input mode.
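Claim 1 recites a software architecture in functional terms: a sensor feeding landmark data to a processor whose user profile, classification, and mode selection modules turn relative landmark motion into gestures and operating-mode changes. The Python sketch below is only an illustration of that division of labor under assumed inputs; the class names, landmark labels, workspace coordinates, and thresholds are hypothetical and are not taken from the patent specification.

# Illustrative sketch only; all names and thresholds are assumptions,
# not the patentee's implementation.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Dict, Tuple

Point = Tuple[float, float, float]  # (x, y, z) location within the workspace


class Mode(Enum):
    # The operating modes enumerated in claim 1.
    KEYBOARD = auto()
    POINTING_DEVICE = auto()
    NUMBER_PAD = auto()
    TEMPLATE_BASED = auto()
    CUSTOM_PAD = auto()


@dataclass
class Frame:
    """One sensor observation: named anatomical landmarks and their locations."""
    landmarks: Dict[str, Point]  # e.g. {"index_tip": (x, y, z), "wrist": (x, y, z)}


class ClassificationModule:
    """Tracks landmarks over time and maps relative motion to a gesture label."""

    def __init__(self, tap_threshold: float = 0.01):
        self.tap_threshold = tap_threshold
        self._previous: Dict[str, Point] = {}

    def classify(self, frame: Frame) -> str:
        gesture = "none"
        prev_tip = self._previous.get("index_tip")
        prev_wrist = self._previous.get("wrist")
        tip = frame.landmarks.get("index_tip")
        wrist = frame.landmarks.get("wrist")
        if all(p is not None for p in (prev_tip, prev_wrist, tip, wrist)):
            # Movement of one landmark (fingertip) relative to another (wrist):
            # a downward drop of the tip while the wrist stays put reads as a tap.
            tip_dz = tip[2] - prev_tip[2]
            wrist_dz = wrist[2] - prev_wrist[2]
            if tip_dz < -self.tap_threshold and abs(wrist_dz) < self.tap_threshold:
                gesture = "finger_tap"
        self._previous = dict(frame.landmarks)
        return gesture


class ModeSelectionModule:
    """Switches operating modes based on where the hand sits in the workspace."""

    def select(self, frame: Frame) -> Mode:
        wrist = frame.landmarks.get("wrist", (0.0, 0.0, 0.0))
        # Hypothetical rule: left half of the workspace is keyboard input,
        # right half is pointing-device input.
        return Mode.KEYBOARD if wrist[0] < 0.5 else Mode.POINTING_DEVICE


if __name__ == "__main__":
    classifier = ClassificationModule()
    mode_selector = ModeSelectionModule()
    frames = [
        Frame({"index_tip": (0.30, 0.40, 0.12), "wrist": (0.30, 0.60, 0.10)}),
        Frame({"index_tip": (0.30, 0.40, 0.09), "wrist": (0.30, 0.60, 0.10)}),
    ]
    for frame in frames:
        print(mode_selector.select(frame), classifier.classify(frame))

In this sketch the classification step keys on a fingertip dropping relative to a stationary wrist, which is one simple reading of the claimed assessment of "movement of at least one anatomical landmark relative to another anatomical landmark"; an actual device would track many landmarks per hand and support the full set of recited operating modes.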