US 11,755,121 B2
Gesture information processing method and apparatus, electronic device, and storage medium
Xiaolin Hong, Shenzhen (CN); Qingqing Zheng, Shenzhen (CN); Xinmin Wang, Shenzhen (CN); Kai Ma, Shenzhen (CN); and Yefeng Zheng, Shenzhen (CN)
Assigned to TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED, Shenzhen (CN)
Filed by Tencent Technology (Shenzhen) Company Limited, Shenzhen (CN)
Filed on Jan. 20, 2022, as Appl. No. 17/580,545.
Application 17/580,545 is a continuation of application No. PCT/CN2020/130567, filed on Nov. 20, 2020.
Claims priority of application No. 202010033904.X (CN), filed on Jan. 13, 2020.
Prior Publication US 2022/0147151 A1, May 12, 2022
Int. Cl. G06F 3/01 (2006.01); G06T 5/00 (2006.01); G06N 3/045 (2023.01)
CPC G06F 3/017 (2013.01) [G06N 3/045 (2023.01); G06T 5/002 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A gesture information processing method performed by an electronic device, the method comprising:
determining an electromyography signal collection target object in a gesture information usage environment of a gesture recognition model, wherein the gesture recognition model is trained by:
processing a training sample set through the gesture recognition model based on initial parameters of the gesture recognition model;
determining update parameters corresponding to different neural networks of the gesture recognition model;
iteratively updating parameters of the gesture recognition model through the training sample set according to the update parameters corresponding to the different neural networks of the gesture recognition model, so as to recognize different gesture information through the gesture recognition model;
acquiring an electromyography signal sample matching the electromyography signal collection target object, and a corresponding gesture information label;
dividing the electromyography signal sample into different electromyography signals of the target object through a sliding window having a fixed window value and a fixed stride, and denoising the different electromyography signals of the target object;
recognizing the different denoised electromyography signals based on the gesture information label, and determining probabilities of gesture information represented by the different electromyography signals using the gesture recognition model; and
weighting the probabilities of the gesture information represented by the different electromyography signals to determine gesture information matching the target object.
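The training step of the claim — iteratively updating parameters, with separate update parameters for the different neural networks of the model — can be illustrated with a toy sketch. The claim specifies neither the architecture nor the update rule, so everything below (a tanh feature extractor `W1`, a linear classifier `W2`, per-network learning rates) is an assumption for illustration only, not the patented method:

```python
import numpy as np

# Toy stand-in for the claim's training step: two sub-networks (a tanh
# feature extractor W1 and a linear classifier W2), each with its own
# "update parameter" (here, a per-network learning rate). Hypothetical
# names and shapes; the claim fixes none of these details.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(float)              # learnable toy labels

W1 = rng.normal(scale=0.1, size=(8, 4))
W2 = rng.normal(scale=0.1, size=(4, 1))
lr = {"extractor": 0.05, "classifier": 0.1}  # per-network update parameters

losses = []
for _ in range(200):
    h = np.tanh(X @ W1)                      # feature-extractor forward pass
    p = 1 / (1 + np.exp(-(h @ W2).ravel()))  # classifier logits -> sigmoid
    losses.append(-np.mean(y * np.log(p + 1e-9)
                           + (1 - y) * np.log(1 - p + 1e-9)))
    g = (p - y) / len(y)                     # dLoss/dlogits for binary CE
    gW2 = h.T @ g[:, None]
    gh = (g[:, None] @ W2.T) * (1 - h ** 2)  # backprop through tanh
    gW1 = X.T @ gh
    W2 -= lr["classifier"] * gW2             # iterative per-network updates
    W1 -= lr["extractor"] * gW1
```

Using distinct learning rates per sub-network is one common reading of "update parameters corresponding to different neural networks"; per-network schedules or optimizers would fit the same claim language.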
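The sliding-window division and denoising steps can be sketched as follows. The window value, stride, and denoising method are unspecified in the claim, so the values below (a 200-sample window, 100-sample stride, moving-average filter) are illustrative assumptions:

```python
import numpy as np

def segment_emg(signal, window=200, stride=100):
    """Divide a 1-D EMG sample into fixed-size windows with a fixed stride
    (window and stride values here are arbitrary examples)."""
    n = (len(signal) - window) // stride + 1
    return np.stack([signal[i * stride : i * stride + window]
                     for i in range(n)])

def denoise(windows, kernel=5):
    """Moving-average smoothing per window -- a simple stand-in for the
    unspecified denoising in the claim."""
    k = np.ones(kernel) / kernel
    return np.stack([np.convolve(w, k, mode="same") for w in windows])

# Synthetic EMG-like sample: a sinusoid plus noise.
emg = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.randn(1000)
wins = segment_emg(emg, window=200, stride=100)
clean = denoise(wins)
print(wins.shape)   # (9, 200)
print(clean.shape)  # (9, 200)
```

In practice EMG denoising is often a band-pass or notch filter rather than a moving average; the segmentation structure is the same either way.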
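The final weighting step — combining the per-window gesture probabilities into a single gesture decision — can be sketched as a weighted average followed by an argmax. The claim does not fix a specific weighting scheme, so the uniform default below is an assumption:

```python
import numpy as np

def fuse_window_probabilities(probs, weights=None):
    """Weight per-window gesture probabilities and return the index of the
    gesture with the highest combined score. A uniform weighting is used
    when no weights are given (one plausible reading of the claim's
    weighting step, not the patented scheme)."""
    probs = np.asarray(probs, dtype=float)      # (n_windows, n_gestures)
    if weights is None:
        weights = np.ones(len(probs))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()           # normalize to sum to 1
    fused = weights @ probs                     # weighted average per gesture
    return int(np.argmax(fused)), fused

# Three windows, three candidate gestures.
probs = [[0.7, 0.2, 0.1],
         [0.6, 0.3, 0.1],
         [0.2, 0.7, 0.1]]
gesture, fused = fuse_window_probabilities(probs)
print(gesture)  # 0
```

Non-uniform weights (e.g. favoring windows with higher signal quality or more recent windows) drop into the same interface.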