US 12,366,920 B2
Systems and methods for gesture inference using transformations
Dexter Ang, Boston, MA (US); David Cipoletta, Boston, MA (US); Xiaofeng Tan, Boston, MA (US); Matt Fleury, Boston, MA (US); and Dylan Pollack, Boston, MA (US)
Assigned to Pison Technology, Inc., Boston, MA (US)
Filed by Pison Technology, Inc., Boston, MA (US)
Filed on Jan. 28, 2023, as Appl. No. 18/161,053.
Application 18/161,053 is a continuation-in-part of application No. 17/935,480, filed on Sep. 26, 2022, granted, now Pat. No. 11,914,791.
Prior Publication US 2024/0103623 A1, Mar. 28, 2024
Int. Cl. G06F 3/01 (2006.01); G06N 5/04 (2023.01)
CPC G06F 3/015 (2013.01) [G06F 3/017 (2013.01); G06N 5/04 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A system for gesture inference, the system comprising:
a wearable device configured to be worn on a portion of an arm of a user, the wearable device comprising:
a biopotential sensor, the biopotential sensor being configured to obtain biopotential data indicating electrical signals generated by nerves and muscles in the arm of the user; and
a motion sensor, the motion sensor being configured to obtain motion data relating to a motion of the portion of the arm of the user, the motion data and biopotential data collectively being sensor data; and
a processing pipeline configured to receive and process the biopotential data and the motion data to generate a gesture inference output using an ML model, wherein the processing pipeline includes:
a pre-process module configured to:
obtain a first set of sensor data;
determine, based on the sensor data or a derivative thereof, a first transformation to the ML model and/or a second transformation to the first set of sensor data; and
apply the first transformation to the ML model to obtain a session ML model and/or apply the second transformation to the first set of sensor data or derivative thereof to obtain mapped sensor data; and
an inference module configured to infer the gesture inference output based on (1) the session ML model and the first set of sensor data, and/or (2) the ML model and the mapped sensor data;
wherein the system is configured to, based on the gesture inference, determine a machine interpretable event, and execute an action corresponding to the machine interpretable event.
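The two claimed branches admit a compact illustration: the pre-process module may either map the session's sensor data into the base model's reference frame ("mapped sensor data") or fold that same mapping into the model itself ("session ML model"), and the inference module then classifies with whichever pair results. The sketch below is hypothetical and not from the patent: a toy linear classifier stands in for the claim's ML model, channel-wise z-score normalization stands in for the first/second transformations, and all names are illustrative.

```python
import numpy as np

# Base "ML model": a toy linear classifier over two sensor channels
# (e.g., a biopotential feature and a motion feature) and three gestures.
W_base = np.array([[1.0, -1.0, 0.0],
                   [0.0,  1.0, -1.0]])
b_base = np.zeros(3)
labels = ["pinch", "swipe", "rest"]

def fit_session_transform(first_sensor_window):
    """Pre-process module: determine a per-session transformation from an
    initial window of sensor data (here, channel-wise mean and scale)."""
    mean = first_sensor_window.mean(axis=0)
    std = first_sensor_window.std(axis=0) + 1e-8
    return mean, std

def map_sensor_data(x, mean, std):
    """Claimed "second transformation": map raw sensor data into the base
    model's reference frame, yielding "mapped sensor data"."""
    return (x - mean) / std

def session_model(W, b, mean, std):
    """Claimed "first transformation": fold the same mapping into the model's
    parameters instead, yielding a "session ML model"."""
    W_s = W / std[:, None]
    b_s = b - (mean / std) @ W
    return W_s, b_s

def infer(x, W, b):
    """Inference module: classify one sample into a gesture label."""
    return labels[int(np.argmax(x @ W + b))]

# Usage: calibrate on an initial sensor window, then infer on a new sample.
calib = np.array([[0.0, 0.0], [2.0, 2.0]])   # first set of sensor data
mean, std = fit_session_transform(calib)
sample = np.array([3.0, 1.0])

g_mapped = infer(map_sensor_data(sample, mean, std), W_base, b_base)  # branch (2)
W_s, b_s = session_model(W_base, b_base, mean, std)
g_session = infer(sample, W_s, b_s)                                   # branch (1)
```

Because the transformation is affine, the two branches are algebraically equivalent here: transforming the data and transforming the model produce identical scores, so `g_mapped` and `g_session` agree, mirroring the claim's "and/or" framing of the two paths.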