| CPC G06F 3/017 (2013.01) [B60K 35/00 (2013.01); B60W 60/00253 (2020.02); G06F 3/0304 (2013.01); B60K 35/28 (2024.01); B60K 35/29 (2024.01); B60K 2360/176 (2024.01); B60K 2360/191 (2024.01)] | 20 Claims |

|
1. A human-machine interaction method, comprising:
obtaining motion track information of a mobile terminal, wherein the motion track information is obtained by using a motion sensor of the mobile terminal;
in response to determining that a predefined operation is performed on the mobile terminal, obtaining first gesture action information of a user, wherein the first gesture action information is obtained by using an optical sensor of an object device that interacts with the user, wherein the first gesture action information comprises gesture action form information and gesture action time information, and the motion track information comprises motion track form information and motion track time information;
determining whether a similarity exists between a first form of a motion of the mobile terminal and a second form of a gesture by processing the gesture action form information and the motion track form information using a machine learning model;
determining whether a consistency exists between a first time of the motion of the mobile terminal and a second time of the gesture by comparing a preset threshold with a difference between the gesture action time information and the motion track time information;
determining that the first gesture action information matches the motion track information in response to determining that the similarity exists and the consistency exists; and
executing first control when the first gesture action information matches the motion track information, wherein the first control comprises control executed according to a control instruction corresponding to the first gesture action information.
|
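The matching logic of claim 1 rests on two tests: a form-similarity check between the gesture action and the motion track (claimed via a machine learning model), and a time-consistency check comparing the difference of their timestamps against a preset threshold. A minimal sketch of that two-test match follows; all names (`form_similarity`, `TIME_THRESHOLD_S`, `sim_threshold`) are hypothetical, and the trivial distance-based similarity below merely stands in for the claimed machine learning model.

```python
# Illustrative sketch only, not the claimed implementation.
# TIME_THRESHOLD_S is a hypothetical "preset threshold" for time consistency.
TIME_THRESHOLD_S = 0.5

def form_similarity(gesture_form, track_form):
    # Stand-in for the claimed machine learning model: a normalized
    # overlap of two equal-length feature sequences in [0, 1].
    assert len(gesture_form) == len(track_form)
    diffs = [abs(g - t) for g, t in zip(gesture_form, track_form)]
    return 1.0 - sum(diffs) / len(diffs)

def gesture_matches_track(gesture_form, gesture_time,
                          track_form, track_time,
                          sim_threshold=0.8):
    # Test 1: similarity between the gesture form and the motion track form.
    similar = form_similarity(gesture_form, track_form) >= sim_threshold
    # Test 2: consistency between gesture time and motion track time,
    # compared against the preset threshold.
    consistent = abs(gesture_time - track_time) <= TIME_THRESHOLD_S
    # The gesture matches the track only when both tests pass.
    return similar and consistent
```

Per the claim, the first control (executing the instruction corresponding to the gesture) would be gated on this function returning a match.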