US 12,229,338 B2
Detecting user input from multi-modal hand bio-metrics
Jamin Hu, Helsinki (FI); Ville Klar, Helsinki (FI); Eemil Visakorpi, Helsinki (FI); and Lauri Tuominen, Helsinki (FI)
Assigned to Doublepoint Technologies Oy, Helsinki (FI)
Filed by Doublepoint Technologies Oy, Helsinki (FI)
Filed on Feb. 17, 2023, as Appl. No. 18/110,979.
Application 18/110,979 is a continuation-in-part of application No. 17/694,758, filed on Mar. 15, 2022, granted, now 11,635,823.
Prior Publication US 2023/0297167 A1, Sep. 21, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 3/01 (2006.01); G06F 1/16 (2006.01); G06N 3/045 (2023.01)
CPC G06F 3/014 (2013.01) [G06F 1/163 (2013.01); G06F 3/017 (2013.01); G06N 3/045 (2023.01)] 19 Claims
OG exemplary drawing
 
1. A multimodal biometric measurement apparatus, the apparatus comprising:
a mounting component configured to be worn by a user,
at least one wrist contour sensor,
at least one bioacoustic sensor comprising a vibration sensor,
at least one inertial measurement unit, IMU, comprising an accelerometer and a gyroscope,
 wherein the at least one wrist contour sensor, the at least one bioacoustic sensor, and the at least one inertial measurement unit are arranged within a single housing, and
a controller comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the controller at least to:
receive a first sensor data stream from the at least one wrist contour sensor,
receive a second sensor data stream from the at least one bioacoustic sensor,
receive a third sensor data stream from the at least one inertial measurement unit,
wherein the first, the second, and the third sensor data streams are received concurrently, and
determine, based on at least one of the first, the second and the third sensor data stream, at least one characteristic of a user action,
determine, based on the determined at least one characteristic of a user action, at least one user action, and
generate at least one user interface, UI, command, based at least in part on the determined at least one user action,
wherein at least one of the first, the second or the third sensor data stream is preprocessed in a separate preprocessing sequence from the other data streams.
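The claim recites a data-flow architecture: three concurrently received sensor data streams, at least one of which is preprocessed in a separate sequence, followed by determination of action characteristics, determination of a user action, and generation of a UI command. The sketch below is an illustrative, non-authoritative rendering of that pipeline only. All function names, thresholds, and the rule-based decision step are hypothetical assumptions; the patent's classification (G06N 3/045) suggests a neural-network model for the determination steps, whose details are not given here.

    # Illustrative sketch only; names and thresholds are hypothetical,
    # and the thresholded decision is a stand-in for a learned model.
    from dataclasses import dataclass
    from typing import List
    import numpy as np


    @dataclass
    class SensorFrame:
        """One concurrently sampled frame from the three sensor modalities."""
        wrist_contour: np.ndarray   # contour channels around the wrist
        bioacoustic: np.ndarray     # raw vibration-sensor samples
        imu: np.ndarray             # [ax, ay, az, gx, gy, gz]


    def preprocess_bioacoustic(samples: np.ndarray) -> np.ndarray:
        """Separate preprocessing sequence for the bioacoustic stream:
        a first-difference high-pass to suppress slow drift."""
        return np.diff(samples, prepend=samples[0])


    def extract_characteristics(frame: SensorFrame) -> dict:
        """Derive per-modality characteristics of a candidate user action."""
        acoustic = preprocess_bioacoustic(frame.bioacoustic)
        return {
            "contour_shift": float(np.ptp(frame.wrist_contour)),      # tendon-driven contour change
            "acoustic_energy": float(np.sum(acoustic ** 2)),          # transient vibration energy
            "motion_magnitude": float(np.linalg.norm(frame.imu[:3])), # accelerometer magnitude
        }


    def determine_user_action(characteristics: dict) -> str:
        """Toy thresholded decision standing in for the claimed determination step."""
        if characteristics["acoustic_energy"] > 0.5 and characteristics["contour_shift"] > 0.2:
            return "pinch_tap"
        return "no_action"


    def generate_ui_command(action: str) -> List[str]:
        """Map a recognized user action to user-interface commands."""
        return ["SELECT"] if action == "pinch_tap" else []


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        frame = SensorFrame(
            wrist_contour=rng.uniform(0.0, 0.5, 8),
            bioacoustic=rng.normal(0.0, 0.3, 256),
            imu=rng.normal(0.0, 1.0, 6),
        )
        action = determine_user_action(extract_characteristics(frame))
        print(action, generate_ui_command(action))

The separate preprocessing of the bioacoustic stream mirrors the final "wherein" clause of the claim; the other two streams are consumed directly by the characteristic-extraction step in this sketch.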