US 12,366,923 B2
Systems and methods for gesture inference using ML model selection
Dexter Ang, Boston, MA (US); David Cipoletta, Boston, MA (US); Xiaofeng Tan, Boston, MA (US); Matt Fleury, Boston, MA (US); and Dylan Pollack, Boston, MA (US)
Assigned to Pison Technology, Inc., Boston, MA (US)
Filed by Pison Technology, Inc., Boston, MA (US)
Filed on Jan. 28, 2023, as Appl. No. 18/161,054.
Application 18/161,054 is a continuation-in-part of application No. 17/935,480, filed on Sep. 26, 2022, granted, now Pat. No. 11,914,791.
Prior Publication US 2024/0103628 A1, Mar. 28, 2024
Int. Cl. G06F 3/01 (2006.01); G06F 3/0346 (2013.01); G06N 3/045 (2023.01); G06N 3/09 (2023.01)
CPC G06F 3/017 (2013.01) [G06F 3/015 (2013.01); G06F 3/0346 (2013.01); G06N 3/045 (2023.01); G06N 3/09 (2023.01)] 20 Claims
OG exemplary drawing
 
1. A system for gesture inference, the system comprising:
a wearable device configured to be worn on a portion of an arm of a user, the wearable device comprising:
a biopotential sensor, the biopotential sensor being configured to obtain biopotential data indicating electrical signals generated by nerves and muscles in the arm of the user; and
a motion sensor, the motion sensor being configured to obtain motion data relating to a motion of the portion of the arm of the user, the motion data and biopotential data collectively being sensor data; and
a base ML model;
wherein the system is configured to:
prompt the user to perform a first action;
obtain, using the biopotential sensor and the motion sensor, first sensor data while the user performs the first action;
using at least the base ML model and the first sensor data, determine that the first action was performed by the user;
select, based on at least the first sensor data, a second ML model, the second ML model being selected to provide improved inference accuracy for the user as compared to the base ML model;
obtain, using the biopotential sensor and the motion sensor, second sensor data while the user performs a second action; and
using at least the second ML model and the second sensor data, generate an inference output indicating that the user performed the second action.
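The claimed flow (prompt the user, confirm the first action with a base model, select a second model expected to be more accurate for that user, then infer with the selected model) can be sketched in toy Python. The signal-energy heuristic, the candidate-model pool, and all names below are illustrative assumptions for exposition only; the patent does not disclose a specific selection rule.

```python
# Illustrative sketch of the claimed calibration flow. The energy-threshold
# "models" and the selection heuristic are assumptions, not from the patent.
from dataclasses import dataclass
from typing import List


@dataclass
class SensorSample:
    biopotential: List[float]  # stand-in for EMG-like nerve/muscle signals
    motion: List[float]        # stand-in for IMU-like wrist-motion data


class GestureModel:
    """Toy stand-in for an ML model: classifies by total signal energy."""

    def __init__(self, name: str, threshold: float):
        self.name = name
        self.threshold = threshold

    def infer(self, sample: SensorSample) -> str:
        energy = sum(abs(v) for v in sample.biopotential + sample.motion)
        return "pinch" if energy > self.threshold else "rest"


BASE_MODEL = GestureModel("base", threshold=3.0)
# A pool of candidate per-user models with different decision thresholds.
CANDIDATE_MODELS = [
    GestureModel(f"user-{i}", t) for i, t in enumerate((1.0, 2.0, 4.0))
]


def select_second_model(first_sample: SensorSample) -> GestureModel:
    """Pick the candidate whose threshold sits closest below the user's
    observed signal energy (an illustrative selection rule only)."""
    energy = sum(abs(v) for v in first_sample.biopotential + first_sample.motion)
    eligible = [m for m in CANDIDATE_MODELS if m.threshold < energy]
    return max(eligible, key=lambda m: m.threshold) if eligible else BASE_MODEL


# Prompt the user and obtain first sensor data during the first action,
# then confirm the action with the base model.
first = SensorSample(biopotential=[0.9, 1.1], motion=[0.8, 0.7])
assert BASE_MODEL.infer(first) == "pinch"  # energy 3.5 exceeds 3.0

# Select a second model tuned to this user's signal levels.
second_model = select_second_model(first)

# Obtain second sensor data and infer with the selected model. For this
# weaker signal the base model would output "rest", while the per-user
# model still detects the gesture.
second = SensorSample(biopotential=[0.6, 0.7], motion=[0.5, 0.4])
print(second_model.infer(second))
```

Here the second sample's energy (2.2) falls below the base model's threshold but above the selected per-user threshold, which is the sense in which the second model "provides improved inference accuracy for the user" in this toy setting.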