US 12,272,076 B2
Image processing method, electronic device and storage medium
Yuanjiao Ma, Yokohama (JP); Jun Luo, Yokohama (JP); and Wei Quan, Yokohama (JP)
Assigned to GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD., Dongguan (CN)
Filed by GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD., Dongguan (CN)
Filed on Aug. 26, 2022, as Appl. No. 17/896,903.
Application 17/896,903 is a continuation of application No. PCT/CN2020/077036, filed on Feb. 27, 2020.
Prior Publication US 2022/0414896 A1, Dec. 29, 2022
Int. Cl. G06T 7/207 (2017.01); G06T 7/11 (2017.01); G06V 10/80 (2022.01)
CPC G06T 7/207 (2017.01) [G06T 7/11 (2017.01); G06V 10/806 (2022.01)] 20 Claims
OG exemplary drawing
 
1. An image processing method, comprising:
obtaining feature information of a first region in a current image frame, wherein the first region comprises a region that is determined in the current image frame by performing motion estimation on the current image frame and a previous image frame based on optical flow;
obtaining feature information of a second region in the current image frame, wherein the second region comprises a region corresponding to pixel points, among a plurality of first pixel points of the current image frame, whose association with pixel points among a plurality of second pixel points of the previous image frame satisfies a condition; and
obtaining a processed current image frame by fusing the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region, wherein the processed current image frame is used as the previous image frame when processing a next image frame,
wherein the obtaining the processed current image frame by fusing the previous image frame and the current image frame based on the feature information of the first region and the feature information of the second region comprises:
based on the feature information of the first region and the feature information of the second region, determining a third region and a fourth region in the current image frame, wherein the third region represents a region where local motion occurs in the current image frame with respect to the previous image frame, and the fourth region represents a region where global motion occurs in the current image frame with respect to the previous image frame; and
based on the third region and the fourth region, performing the fusing of the previous image frame and the current image frame.
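The claimed steps can be read as a frame-fusion (temporal filtering) pipeline: flow-based motion estimation yields the first region, a per-pixel association test yields the second region, the two are combined into local-motion (third) and global-motion (fourth) regions, and the fusion is applied region by region. The sketch below is only an illustration of that reading, not the patented implementation: Farneback dense optical flow stands in for the claimed motion estimation, an absolute intensity difference stands in for the association measure, and the thresholds, the local-versus-global heuristic, the blend weights, and the function name process_frame are all assumptions introduced for the example.

```python
# Illustrative sketch only -- not the patented method.  Thresholds, the
# local/global-motion heuristic, and all names below are assumptions.
import cv2
import numpy as np


def process_frame(prev_frame, curr_frame,
                  flow_thresh=1.0, assoc_thresh=20, blend=0.5):
    """Fuse curr_frame with prev_frame and return the processed frame,
    which the caller feeds back as prev_frame for the next frame."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # "First region": motion estimation based on dense optical flow.
    # Flow is computed from the current frame toward the previous one so
    # it can also be reused for motion compensation below.
    flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow_mag = np.linalg.norm(flow, axis=2)
    first_region = flow_mag > flow_thresh            # assumed threshold

    # "Second region": pixel points of the current frame whose association
    # with the previous frame satisfies a condition; here the (assumed)
    # association measure is a simple absolute intensity difference.
    assoc = cv2.absdiff(curr_gray, prev_gray)
    second_region = assoc > assoc_thresh             # assumed condition

    # "Third"/"fourth" regions: one assumed heuristic is that pixels flagged
    # by both cues correspond to local motion, while pixels flagged by the
    # flow but still well associated behave like a global (camera) shift.
    third_region = first_region & second_region      # local motion
    fourth_region = first_region & ~second_region    # global motion

    # Motion-compensate the previous frame onto the current frame's grid.
    h, w = curr_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    prev_warped = cv2.remap(prev_frame, map_x, map_y, cv2.INTER_LINEAR)

    # Fusion: keep the current frame where local motion occurs, blend with
    # the warped previous frame where motion is global, and blend with the
    # unwarped previous frame in static areas (temporal noise reduction).
    fused_static = cv2.addWeighted(curr_frame, blend, prev_frame, 1 - blend, 0)
    fused_global = cv2.addWeighted(curr_frame, blend, prev_warped, 1 - blend, 0)
    out = np.where(third_region[..., None], curr_frame,
                   np.where(fourth_region[..., None], fused_global, fused_static))
    return out.astype(np.uint8)
```

A driver loop would call process_frame once per frame and pass the returned image back in as prev_frame for the next iteration, mirroring the claim's use of the processed current image frame as the previous image frame for the next image frame.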