CPC G11B 27/031 (2013.01) [H04N 17/002 (2013.01)] | 2 Claims |
1. A method of camera tracking and compositing comprising:
receiving a video feed from a video camera;
receiving positional data of the video camera from a tracking device attached to the video camera;
receiving a plurality of lens parameters from an encoder connected to a lens of the video camera;
generating an intrinsic calibration file based on the plurality of lens parameters and the video feed;
generating an extrinsic calibration file based on the positional data and the video feed;
generating a room scale calibration file based on the plurality of lens parameters, the positional data, and the video feed;
generating a composite video feed from the video feed and a virtual camera, wherein a plurality of lens distortion parameters of the virtual camera are set based on the intrinsic calibration file;
displaying the composite video feed in real time;
tracking an object in the composite video feed using camera and tracking offsets derived from the extrinsic calibration file;
adjusting a geometry scale of the virtual camera based on the room scale calibration file;
synchronizing a video frame rate of the video camera and a tracking frame rate of the tracking device by:
generating a timeline from a server system clock;
generating synchronized output for a plurality of frames from the video feed and a plurality of corresponding data points from the positional data;
interpolating the positional data to determine interpolated positions at selected times on the timeline for the plurality of frames and the plurality of corresponding data points; and generating a positional data timeline;
generating an encoder value from a lens gear connected to the lens of the video camera and adjusting a virtual camera focus distance and a virtual camera aperture of the virtual camera based on the encoder value; and
storing a plurality of images captured from the video feed and storing, in association with each respective image of the plurality of images, information about each respective image including scene state data, lens parameter data, video camera parameters, and metadata, wherein the lens parameter data includes focal length and distortion, wherein the video camera parameters include camera model and settings, and wherein the metadata includes a date and a location for the capture of each of the plurality of images.
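The synchronizing step recited above (generating a timeline from a server system clock, then interpolating the positional data at the times of the video frames) might be sketched as follows. This is an illustrative sketch only, not the patented implementation; all names (`interpolate_positions`, `frame_times`, `track_times`, `track_positions`) are hypothetical, and simple linear interpolation between tracker samples is assumed.

```python
from bisect import bisect_left

def interpolate_positions(frame_times, track_times, track_positions):
    """Linearly interpolate tracker positions onto video-frame timestamps.

    frame_times:     sorted frame timestamps on the shared server-clock timeline
    track_times:     sorted timestamps of the tracking-device samples
    track_positions: one (x, y, z) tuple per tracker sample
    Returns one interpolated (x, y, z) tuple per video frame.
    """
    out = []
    for t in frame_times:
        i = bisect_left(track_times, t)
        if i == 0:
            out.append(track_positions[0])        # before first sample: hold it
        elif i == len(track_times):
            out.append(track_positions[-1])       # after last sample: hold it
        else:
            t0, t1 = track_times[i - 1], track_times[i]
            a = (t - t0) / (t1 - t0)              # blend factor in [0, 1]
            p0, p1 = track_positions[i - 1], track_positions[i]
            out.append(tuple(x0 + a * (x1 - x0) for x0, x1 in zip(p0, p1)))
    return out
```

A real system would likely run this per frame against a ring buffer of tracker samples rather than over whole lists, but the interpolation itself is the same.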
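The encoder step (generating an encoder value from the lens gear and adjusting the virtual camera focus distance based on it) implies a mapping from raw encoder counts to physical lens values. A minimal sketch of one such mapping, assuming a hypothetical calibration table of (encoder value, focus distance) pairs with linear interpolation between entries:

```python
def encoder_to_focus(encoder_value, calibration):
    """Map a raw lens-gear encoder value to a focus distance in meters.

    calibration: list of (encoder_value, focus_distance) pairs; interpolation
    is linear between adjacent pairs, clamped at the table's ends.
    """
    pts = sorted(calibration)
    if encoder_value <= pts[0][0]:
        return pts[0][1]                          # below table: clamp low
    if encoder_value >= pts[-1][0]:
        return pts[-1][1]                         # above table: clamp high
    for (e0, d0), (e1, d1) in zip(pts, pts[1:]):
        if e0 <= encoder_value <= e1:
            a = (encoder_value - e0) / (e1 - e0)  # position within segment
            return d0 + a * (d1 - d0)
```

Real lens-focus curves are typically nonlinear, so a production system would use a denser table or a fitted curve; the clamped table lookup is only the simplest form the claimed adjustment could take.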