US 11,875,467 B2
Processing method for combining a real-world environment with virtual information according to a video frame difference value to provide an augmented reality scene, terminal device, system, and computer storage medium
Dan Qing Fu, Guangdong (CN); Hao Xu, Guangdong (CN); Cheng Quan Liu, Guangdong (CN); Cheng Zhuo Zou, Guangdong (CN); Ting Lu, Guangdong (CN); and Xiao Ming Xiang, Guangdong (CN)
Assigned to TENCENT TECHNOLOGY (SHENZHEN) COMPANY LTD, Shenzhen (CN)
Filed by TENCENT TECHNOLOGY (SHENZHEN) COMPANY LTD, Shenzhen (CN)
Filed on Jun. 8, 2022, as Appl. No. 17/835,077.
Application 17/835,077 is a continuation of application No. 16/573,397, filed on Sep. 17, 2019, granted, now 11,410,415.
Application 16/573,397 is a continuation of application No. PCT/CN2018/103589, filed on Aug. 31, 2018.
Claims priority of application No. 201710804532.4 (CN), filed on Sep. 8, 2017.
Prior Publication US 2022/0301300 A1, Sep. 22, 2022
Int. Cl. G06T 19/00 (2011.01); G06F 3/048 (2013.01); G11B 27/031 (2006.01); G06V 20/10 (2022.01); G06V 10/94 (2022.01); G06V 20/20 (2022.01)
CPC G06T 19/006 (2013.01) [G06F 3/048 (2013.01); G06V 10/95 (2022.01); G06V 20/10 (2022.01); G06V 20/20 (2022.01); G11B 27/031 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A processing method, performed by at least one processor, for an augmented reality scene, the processing method comprising:
detecting, by the at least one processor, target feature points of a current video frame in a currently captured video;
calculating, by the at least one processor, a video frame difference value according to pixel coordinates of the target feature points of the current video frame and pixel coordinates of target feature points of a previous video frame;
determining, by the at least one processor, the current video frame as a target video frame in the currently captured video based on the video frame difference value not satisfying a preset change condition;
determining, by the at least one processor, an object area in the target video frame; and
performing, by the at least one processor, augmented reality processing on the object area in the target video frame and augmented reality scene information, to obtain the augmented reality scene,
wherein the calculating the video frame difference value comprises calculating, by the at least one processor, the video frame difference value based on:
a first difference between a current mean of the pixel coordinates of the target feature points of the current video frame and a previous mean of the pixel coordinates of the target feature points of the previous video frame; and
a second difference between a current variance of the pixel coordinates of the target feature points of the current video frame and a previous variance of the pixel coordinates of the target feature points of the previous video frame, and
wherein the processing method further comprises determining, by the at least one processor, whether the video frame difference value satisfies the preset change condition by comparing each of the first difference and the second difference with a preset threshold.
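The video-frame-difference calculation recited in claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names, the per-axis handling of the (x, y) pixel coordinates, the use of a single shared threshold for both differences, and the threshold value itself are all assumptions introduced for clarity.

```python
import numpy as np

def frame_difference(curr_pts, prev_pts):
    """Return the first and second differences of claim 1: the absolute
    difference of the per-axis means, and the absolute difference of the
    per-axis variances, of the feature-point pixel coordinates in the
    current and previous video frames.
    """
    curr = np.asarray(curr_pts, dtype=float)  # shape (N, 2): (x, y) pixels
    prev = np.asarray(prev_pts, dtype=float)
    mean_diff = np.abs(curr.mean(axis=0) - prev.mean(axis=0))
    var_diff = np.abs(curr.var(axis=0) - prev.var(axis=0))
    return mean_diff, var_diff

def is_target_frame(curr_pts, prev_pts, threshold=2.0):
    """A frame becomes the target frame when the difference value does NOT
    satisfy the preset change condition, i.e. when neither the mean
    difference nor the variance difference exceeds the preset threshold
    (hypothetical default of 2.0 pixels).
    """
    mean_diff, var_diff = frame_difference(curr_pts, prev_pts)
    change_detected = np.any(mean_diff > threshold) or np.any(var_diff > threshold)
    return not change_detected
```

Under this reading, a nearly static scene (small mean and variance shifts) yields a target frame suitable for augmented reality processing, while large camera or object motion trips the change condition and the frame is skipped.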