US 12,102,411 B2
Method for predicting intention of user and apparatus for performing same
Kyu-Jin Cho, Seoul (KR); Sungho Jo, Daejeon (KR); Byunghyun Kang, Seoul (KR); Daekyum Kim, Daejeon (KR); Hyungmin Choi, Seoul (KR); and Kyu Bum Kim, Daejeon (KR)
Assigned to SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION, Seoul (KR); and KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY, Daejeon (KR)
Filed by SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION, Seoul (KR); and KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY, Daejeon (KR)
Filed on Apr. 30, 2021, as Appl. No. 17/246,299.
Application 17/246,299 is a continuation of application No. PCT/KR2019/014855, filed on Nov. 4, 2019.
Claims priority of application No. 10-2018-0133652 (KR), filed on Nov. 2, 2018; and application No. 10-2019-0139747 (KR), filed on Nov. 4, 2019.
Prior Publication US 2021/0256250 A1, Aug. 19, 2021
Int. Cl. G06K 9/00 (2022.01); A61B 5/00 (2006.01); A61H 1/02 (2006.01); G06N 3/08 (2023.01); G06T 7/20 (2017.01); G06T 7/70 (2017.01); G06V 40/20 (2022.01)
CPC A61B 5/0077 (2013.01) [A61H 1/02 (2013.01); G06N 3/08 (2013.01); G06T 7/20 (2013.01); G06T 7/70 (2017.01); G06V 40/20 (2022.01); G06V 40/28 (2022.01); A61H 2201/5007 (2013.01); A61H 2201/5092 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30196 (2013.01); G06T 2207/30241 (2013.01)] 13 Claims
OG exemplary drawing
 
1. A method for predicting an intention of a user with limited motion through an image acquired by capturing the user, the method comprising:
receiving an image acquired by capturing a target object and a body part of the user, the body part being at least one of a hand, an arm, and a foot of the user;
predicting an intention of the user for the user's next body motion for the target object by using spatial information and temporal information about the user and the target object included in the image; and
applying, to a device for assisting the user in performing the user's body motions, a driving signal that operates the device to assist the user in performing the user's next body motion corresponding to the predicted intention,
wherein the spatial information is acquired based on each of a plurality of frames constituting the image and comprises a pose of the body part of the user, a position of the target object, and an interaction between the body part of the user and the target object,
wherein the interaction comprises a distance between the body part and the target object, and a position and a direction of the body part relative to the target object,
wherein the temporal information is acquired based on the spatial information and comprises a speed at which the body part moves toward the target object, changes over time in the pose of the body part as viewed from the user's viewpoint, and changes over time in the interaction between the body part of the user and the target object, and
wherein the device for assisting the user in performing the user's body motions is worn on at least one of the body parts of the user.
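The claim's spatial information (distance and direction between body part and object, per frame) and temporal information (approach speed, i.e. change in that distance over time) can be illustrated with a minimal sketch. This is not the patented implementation: the patent contemplates a learned model (cf. the G06N 3/08 classification), whereas the rule below is a toy stand-in, and all function names, the 2-D coordinates, and the speed threshold are illustrative assumptions.

```python
import math

def spatial_features(hand_pos, obj_pos):
    """Per-frame spatial information: the distance and unit direction
    from the body part (here, a hand) to the target object."""
    dx = obj_pos[0] - hand_pos[0]
    dy = obj_pos[1] - hand_pos[1]
    dist = math.hypot(dx, dy)
    direction = (dx / dist, dy / dist) if dist > 0 else (0.0, 0.0)
    return dist, direction

def approach_speeds(distances, dt):
    """Temporal information derived from the spatial information:
    frame-to-frame decrease in distance divided by the frame interval,
    so a positive value means the hand is moving toward the object."""
    return [(distances[i] - distances[i + 1]) / dt
            for i in range(len(distances) - 1)]

def predict_intention(speeds, threshold=0.05):
    """Toy decision rule standing in for the learned predictor:
    a sustained positive approach speed is read as intent to reach."""
    mean_speed = sum(speeds) / len(speeds)
    return "reach" if mean_speed > threshold else "idle"

# Example: a hand moves toward an object at (1.0, 0.0) over five frames
# captured at 30 frames per second.
obj = (1.0, 0.0)
hands = [(0.0, 0.0), (0.2, 0.0), (0.4, 0.0), (0.6, 0.0), (0.8, 0.0)]
dists = [spatial_features(h, obj)[0] for h in hands]
speeds = approach_speeds(dists, dt=1 / 30)
print(predict_intention(speeds))  # → reach
```

In the claimed method the predicted label would then trigger the driving signal sent to the wearable assistive device; here the label is simply printed.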