US 12,489,869 B2
Interaction processing method and apparatus, terminal and medium
Wenjing Yin, Shenzhen (CN); Zebiao Huang, Shenzhen (CN); Xianyang Xu, Shenzhen (CN); Shu-Hui Chou, Shenzhen (CN); and Zhimiao Yu, Shenzhen (CN)
Assigned to TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED, Shenzhen (CN)
Filed by Tencent Technology (Shenzhen) Company Limited, Shenzhen (CN)
Filed on Apr. 13, 2023, as Appl. No. 18/134,166.
Application 18/134,166 is a continuation of application No. PCT/CN2022/088332, filed on Apr. 22, 2022.
Claims priority of application No. 202110606182.7 (CN), filed on May 31, 2021.
Prior Publication US 2023/0247178 A1, Aug. 3, 2023
Int. Cl. H04N 7/15 (2006.01); G06F 3/04842 (2022.01); G06T 17/20 (2006.01); G06V 10/44 (2022.01); G06V 40/16 (2022.01); G06V 40/20 (2022.01)
CPC H04N 7/157 (2013.01) [G06F 3/04842 (2013.01); G06T 17/205 (2013.01); G06V 10/44 (2022.01); G06V 40/174 (2022.01); G06V 40/20 (2022.01)] 17 Claims
OG exemplary drawing
 
1. A method performed by a computing device acting as a target terminal in a video session, the method comprising:
displaying, by the target terminal, a video session interface, the video session interface including an image display region for displaying images associated with one or more users participating in the video session;
displaying, by the target terminal, a target virtual image of a target virtual object corresponding to a user of the target terminal in the image display region;
controlling, according to movement information of the user captured by the target terminal, the target virtual image displayed in the image display region to perform a target interaction action corresponding to the movement information of the user, further including:
determining a type of the movement information of the user based on differences between two images of the user captured by the target terminal at two different points in time;
identifying an object element of the target virtual object according to the determined type of the movement information of the user;
acquiring a set of meshes associated with the identified object element of the target virtual object, each mesh having a plurality of vertices and each vertex in the mesh having a corresponding position value and a position relationship value of the vertex relative to other vertices of the same mesh;
performing mesh deformation on the position values and the position relationship values of respective vertices in the set of meshes according to the determined type of the movement information of the user; and
determining movement data of the target virtual image that has the identified object element performing the target interaction action according to the mesh deformation of the set of meshes of the identified object element of the target virtual object; and
transmitting, by the target terminal, the movement data of the target virtual image performing the target interaction action, to terminals of the other users of the video session, wherein the movement data renders the target virtual image to perform the target interaction action on the corresponding terminals.
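The pipeline recited in claim 1 can be illustrated with a minimal sketch. The patent text discloses no concrete data structures, thresholds, or algorithms, so everything below is a hypothetical stand-in: the `Vertex`/`Mesh` types, the frame-difference rule in `classify_movement`, the `ELEMENT_FOR_MOVEMENT` mapping, and the uniform shift in `deform_meshes` are invented for illustration only. A real implementation would use facial/pose landmark detection and proper mesh skinning rather than raw pixel differences and a rigid vertex offset.

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    # "position value" of the vertex, per the claim
    position: tuple  # (x, y, z)
    # "position relationship values" relative to other vertices of the same
    # mesh, modeled here as a map of neighbor index -> offset vector
    neighbor_offsets: dict = field(default_factory=dict)

@dataclass
class Mesh:
    vertices: list  # list[Vertex]

def classify_movement(frame_a, frame_b, threshold=10.0):
    """Determine a movement type from the differences between two images
    captured at two points in time. Hypothetical rule: a large pixel change
    in the upper half of the frame is read as a head nod."""
    h = len(frame_a)
    upper_diff = sum(
        abs(a - b)
        for row_a, row_b in zip(frame_a[:h // 2], frame_b[:h // 2])
        for a, b in zip(row_a, row_b)
    )
    return "head_nod" if upper_diff > threshold else "none"

# Hypothetical mapping: movement type -> object element of the virtual object
ELEMENT_FOR_MOVEMENT = {
    "head_nod": "head",
}

def deform_meshes(meshes, movement_type, amount=0.1):
    """Perform mesh deformation on the vertex positions of the identified
    element's meshes according to the movement type, and return the
    resulting movement data (new per-vertex positions for each mesh)."""
    dy = -amount if movement_type == "head_nod" else 0.0
    movement_data = []
    for mesh in meshes:
        new_positions = []
        for v in mesh.vertices:
            x, y, z = v.position
            new_positions.append((x, y + dy, z))  # rigid downward shift
        movement_data.append(new_positions)
    return movement_data
```

In this sketch the target terminal would call `classify_movement` on two captured frames, look up the affected element via `ELEMENT_FOR_MOVEMENT`, run `deform_meshes` on that element's mesh set, and transmit the returned `movement_data` to the other session participants' terminals, which replay the deformation locally.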