US 12,290,719 B2
Exercise guidance using multi-modal data
Giuseppe Barbalinardo, Berkeley, CA (US); Joshua Ben Shapiro, Toronto (CA); Asim Kadav, Mountain View, CA (US); Ivan Savytskyi, Mississauga (CA); Rajiv Bhan, Mountain View, CA (US); Rustam Paringer, Samara (RU); and Aly E. Orady, Austin, TX (US)
Assigned to Tonal Systems, Inc., San Francisco, CA (US)
Filed by Tonal Systems, Inc., San Francisco, CA (US)
Filed on Oct. 16, 2023, as Appl. No. 18/380,575.
Claims priority of provisional application 63/417,052, filed on Oct. 18, 2022.
Prior Publication US 2024/0123288 A1, Apr. 18, 2024
Int. Cl. A63B 24/00 (2006.01); G06T 7/73 (2017.01); G06V 20/40 (2022.01); G06V 40/20 (2022.01)
CPC A63B 24/0062 (2013.01) [A63B 24/0075 (2013.01); G06T 7/73 (2017.01); G06V 20/41 (2022.01); G06V 40/23 (2022.01); A63B 2024/0065 (2013.01); G06T 2200/24 (2013.01); G06T 2207/10016 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/30196 (2013.01)] 20 Claims
OG exemplary drawing
 
1. An exercise system, comprising:
a first hardware optical sensor;
a second hardware sensor;
a user interface that provides guidance for a movement, wherein the user interface comprises a display on the exercise system and wherein the guidance is based at least in part on:
a first output from the first hardware optical sensor for a user;
a second output from the second hardware sensor for the user;
a position data based at least in part on the second output that is associated with a cable position for a cable coupled to an actuator for the user;
a pose data model for the user based at least in part on historical performance of the movement;
wherein the pose data model comprises pose data generated based at least in part on the first output;
wherein pose data comprises a set of canonical key points each positioned in three-dimensional space at a specific time and wherein each canonical key point of the set of canonical key points represents a joint for the user;
a sensor fusion data comprising the position data combined with the pose data model via preprocessing;
wherein the sensor fusion data is based at least in part on a synchronization of the position data with the pose data model based at least in part on a reference timestamp;
a trigger variable value determined based at least in part on the sensor fusion data; and
a prediction triggered based at least in part on the trigger variable value; and
wherein the prediction provides the guidance for the movement via the display in the event a confidence parameter for the prediction is above a threshold.
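The claim describes timestamp-based fusion of two sensor streams (3-D pose key points from the optical sensor and cable position from the second sensor), a trigger variable derived from the fused data, and guidance gated on a prediction confidence threshold. The sketch below is purely illustrative and not the patented implementation: the data types, the nearest-timestamp pairing, the velocity-style trigger variable, and the `maybe_guide` threshold gate are all assumptions chosen to make the claimed data flow concrete.

```python
import bisect
from dataclasses import dataclass

@dataclass
class PoseFrame:
    t: float          # reference timestamp (seconds)
    keypoints: dict   # joint name -> (x, y, z) canonical key point

@dataclass
class CableSample:
    t: float          # reference timestamp (seconds)
    position: float   # cable extension for the actuator-coupled cable

def fuse(pose_frames, cable_samples, max_skew=0.05):
    """Pair each pose frame with the nearest-in-time cable sample,
    dropping pairs whose timestamps differ by more than max_skew."""
    times = [s.t for s in cable_samples]
    fused = []
    for frame in pose_frames:
        i = bisect.bisect_left(times, frame.t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        j = min(candidates, key=lambda k: abs(times[k] - frame.t))
        if abs(times[j] - frame.t) <= max_skew:
            fused.append((frame, cable_samples[j]))
    return fused

def trigger_value(fused):
    """Toy trigger variable: mean cable velocity over the fused window."""
    if len(fused) < 2:
        return 0.0
    (_, s0), (_, s1) = fused[0], fused[-1]
    dt = s1.t - s0.t
    return (s1.position - s0.position) / dt if dt else 0.0

def maybe_guide(prediction, confidence, threshold=0.8):
    """Surface guidance only when prediction confidence exceeds the threshold."""
    return prediction if confidence > threshold else None
```

In this reading, `fuse` plays the role of the preprocessing/synchronization step, `trigger_value` stands in for the trigger variable computation, and `maybe_guide` models the confidence-gated display of guidance; a real system would run a learned model where `trigger_value` and the prediction appear here as placeholders.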