US 11,894,023 B2
Video enhancement
Anthony L. Cole, Hampshire (GB); Thomas J. Davison, Southampton (GB); Daniel Del Piccolo, Portsmouth (GB); Daniel Lane, Hampshire (GB); James S. Luke, Isle of Wight (GB); and Martine M. Pulvenis, Hampshire (GB)
Assigned to International Business Machines Corporation, Armonk, NY (US)
Filed by International Business Machines Corporation, Armonk, NY (US)
Filed on Feb. 11, 2019, as Appl. No. 16/272,313.
Application 16/272,313 is a continuation of application No. 14/944,501, filed on Nov. 18, 2015, granted, now Pat. No. 10,276,210.
Prior Publication US 2019/0172497 A1, Jun. 6, 2019
Int. Cl. G06T 7/70 (2017.01); G06T 7/13 (2017.01); G06T 11/60 (2006.01); G06T 19/00 (2011.01); G06T 19/20 (2011.01); G11B 27/036 (2006.01); H04N 7/14 (2006.01)
CPC G11B 27/036 (2013.01) [G06T 7/13 (2017.01); G06T 7/70 (2017.01); G06T 11/60 (2013.01); G06T 19/006 (2013.01); G06T 19/20 (2013.01); H04N 7/142 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
receiving first video data of an environment having an object observed from a first point of view, the first video data comprising a plurality of frames having the object disposed in an area of the plurality of frames; and
generating revised video data of the environment having the object from the first point of view based on the first video data, the revised video data comprising a plurality of revised frames with the object disposed in the area of the plurality of frames being revised based on data corresponding to the object separate from the first video data,
wherein the first video data of the environment comprises the object comprising a virtual reality headset, and the revised video data comprises the plurality of frames with the virtual reality headset disposed in the area of the plurality of frames being revised based on second video data obtained of the environment behind the virtual reality headset during the receiving of the first video data,
wherein the generating comprises identifying the object in a frame of the plurality of frames and retrieving the data corresponding to the object from a database based on the identified object,
wherein the data corresponding to the object separate from the first video data comprises video data obtained of the environment behind the object during the receiving of the first video data,
wherein the receiving and the generating are performed in real time,
wherein the generating comprises determining a peripheral edge of the area of the object in the plurality of frames, and the revised video data comprises a revised object based on the determined peripheral edge,
wherein the data corresponding to the object separate from the first video data comprises the object from a second point of view, and the generating comprises reorienting the object from the second point of view to correspond to the first point of view,
wherein the generating comprises replacing the object disposed in a first area in the plurality of frames of the revised video data with the data corresponding to the object separate from the first video data, and
wherein the generating comprises identifying the area of the object in one of the frames of the plurality of frames, and using the identified area in the one of the frames for identifying the area of the object in a subsequent frame.
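
For concreteness, the sketch below shows one plausible shape for the core loop the claim recites: identifying the headset area in a frame, determining its peripheral edge, replacing the enclosed pixels with second video data captured behind the headset, and seeding the search in the next frame with the area identified in the current one. This is a minimal illustration under stated assumptions, not the patented implementation: the dark-region threshold detector, the file names front.mp4, behind.mp4, and revised.mp4, the fixed 30 fps output, and the assumption that the two streams are frame-aligned at the same resolution are all inventions of the example. The claimed reorientation of the object from a second point of view (e.g., a perspective warp) is omitted for brevity.

# Illustrative sketch only; assumes OpenCV 4.x and that behind.mp4 is the
# second video of the environment captured behind the object, frame-aligned
# with front.mp4 at the same resolution. The threshold detector is a
# hypothetical stand-in for whatever object identification is used.
import cv2
import numpy as np

front = cv2.VideoCapture("front.mp4")    # first video data, first point of view
behind = cv2.VideoCapture("behind.mp4")  # second video data, behind the object
out = None
prev_box = None                          # area identified in the previous frame

while True:
    ok1, frame = front.read()
    ok2, backdrop = behind.read()
    if not (ok1 and ok2):
        break

    # Restrict the search to the neighborhood of the previously identified
    # area, so each frame's result seeds identification in the next frame.
    search, off_x, off_y = frame, 0, 0
    if prev_box is not None:
        x, y, w, h = prev_box
        pad = 40
        off_x, off_y = max(x - pad, 0), max(y - pad, 0)
        search = frame[off_y:y + h + pad, off_x:x + w + pad]

    # Stand-in detector: treat a dark region as the headset.
    gray = cv2.cvtColor(search, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # The largest contour approximates the object's peripheral edge;
        # shift it from search-window coordinates back to full-frame ones.
        edge = max(contours, key=cv2.contourArea)
        edge = (edge + np.array([off_x, off_y])).astype(np.int32)
        full_mask = np.zeros(frame.shape[:2], dtype=np.uint8)
        cv2.drawContours(full_mask, [edge], -1, 255, thickness=cv2.FILLED)

        # Replace pixels inside the edge with the environment captured
        # behind the object, yielding the revised frame.
        frame[full_mask == 255] = backdrop[full_mask == 255]
        prev_box = cv2.boundingRect(edge)
    else:
        prev_box = None  # detection lost; search the whole next frame

    if out is None:
        h, w = frame.shape[:2]
        out = cv2.VideoWriter("revised.mp4",
                              cv2.VideoWriter_fourcc(*"mp4v"), 30.0, (w, h))
    out.write(frame)

front.release()
behind.release()
if out is not None:
    out.release()

Seeding each frame's search with the previous frame's identified area, as the final wherein clause describes, keeps the per-frame work roughly proportional to the object's size rather than the full frame, which is what makes a real-time receive-and-generate loop plausible.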