US 11,776,577 B2
Camera tracking system for live compositing
Donnie Ocean, Burlington, VT (US); Michael Batty, Burlington, VT (US); and John Hoehl, Burlington, VT (US)
Assigned to Mean Cat Entertainment LLC, Burlington, VT (US)
Appl. No. 17/762,442
Filed by Mean Cat Entertainment LLC, Burlington, VT (US)
PCT Filed Sep. 22, 2020, PCT No. PCT/US2020/070564,
§ 371(c)(1), (2) Date Mar. 22, 2022,
PCT Pub. No. WO2021/056030, PCT Pub. Date Mar. 25, 2021.
Claims priority of provisional application 62/903,860, filed on Sep. 22, 2019.
Prior Publication US 2022/0351751 A1, Nov. 3, 2022
Int. Cl. G11B 27/031 (2006.01); H04N 17/00 (2006.01)
CPC G11B 27/031 (2013.01) [H04N 17/002 (2013.01)] 2 Claims
OG exemplary drawing
 
1. A method of camera tracking and compositing comprising: receiving a video feed from a video camera;
receiving positional data of the video camera from a tracking device attached to the video camera;
receiving a plurality of lens parameters from an encoder connected to a lens of the video camera;
generating an intrinsic calibration file based on the plurality of lens parameters and the video feed;
generating an extrinsic calibration file based on the positional data and the video feed;
generating a room scale calibration file based on the plurality of lens parameters, the positional data, and the video feed;
generating a composite video feed from the video feed and a virtual camera, wherein a plurality of lens distortion parameters of the virtual camera are set based on the intrinsic calibration file;
displaying the composite video feed in real time;
tracking an object in the composite video feed using camera and tracking offsets derived from the extrinsic calibration file;
adjusting a geometry scale of the virtual camera based on the room scale calibration file;
synchronizing a video frame rate of the video camera and a tracking frame rate of the tracking device by:
generating a timeline from a server system clock;
generating synchronized output for a plurality of frames from the video feed and a plurality of corresponding data points from the positional data;
interpolating the positional data to determine interpolated positions at selected times on the timeline for the plurality of frames and the plurality of corresponding data points; and
generating a positional data timeline;
generating an encoder value from a lens gear connected to the lens of the video camera and adjusting a virtual camera focus distance and a virtual camera aperture of the virtual camera based on the encoder value; and
storing a plurality of images captured from the video feed and storing, in association with each respective image of the plurality of images, information about each respective image including scene state data, lens parameter data, video camera parameters, and metadata, wherein the lens parameter data includes focal length and distortion, wherein the video camera parameters include camera model and settings, and wherein the metadata includes a date and a location for the capture of each of the plurality of images.
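
The frame-rate synchronization limitation of claim 1 (generating a timeline from a server system clock and interpolating the positional data at selected times) can be illustrated concretely. The following is a minimal Python sketch under stated assumptions: linear interpolation stands in for whatever scheme the specification discloses, and all identifiers (TrackingSample, sync_positions_to_frames) are hypothetical.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrackingSample:
    t: float                            # timestamp on the shared server-clock timeline, in seconds
    pos: Tuple[float, float, float]     # tracked camera position (x, y, z)

def sync_positions_to_frames(frame_times: List[float],
                             samples: List[TrackingSample]) -> List[tuple]:
    """Return one interpolated (x, y, z) position per video frame time.

    Assumes `samples` is sorted by timestamp and brackets every frame time;
    the per-frame results form the "positional data timeline" of claim 1.
    """
    assert len(samples) >= 2, "need at least two tracking samples"
    out = []
    i = 0
    for ft in frame_times:
        # Advance to the pair of tracking samples that brackets this frame time.
        while i + 1 < len(samples) - 1 and samples[i + 1].t < ft:
            i += 1
        a, b = samples[i], samples[i + 1]
        w = (ft - a.t) / (b.t - a.t) if b.t != a.t else 0.0
        out.append(tuple(pa + w * (pb - pa) for pa, pb in zip(a.pos, b.pos)))
    return out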
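
The encoder limitation (generating an encoder value from a lens gear and adjusting the virtual camera's focus distance and aperture) admits a similarly short sketch. The table values and names below are fabricated for illustration only; a real system would build the lookup tables during lens calibration, and `virtual_camera` is assumed to be any object exposing focus_distance and aperture attributes.

import bisect

def interp_table(table, x):
    """Piecewise-linear lookup in a sorted list of (encoder_value, output) pairs."""
    keys = [k for k, _ in table]
    j = bisect.bisect_left(keys, x)
    if j == 0:
        return table[0][1]          # clamp below the table range
    if j == len(table):
        return table[-1][1]         # clamp above the table range
    (x0, y0), (x1, y1) = table[j - 1], table[j]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical calibration tables mapping raw encoder counts to lens values.
FOCUS_TABLE = [(0, 0.3), (2048, 1.5), (4095, 10.0)]      # counts -> meters
APERTURE_TABLE = [(0, 1.4), (2048, 4.0), (4095, 22.0)]   # counts -> f-number

def apply_encoder_value(virtual_camera, encoder_value):
    # Match the virtual camera's depth of field to the physical lens.
    virtual_camera.focus_distance = interp_table(FOCUS_TABLE, encoder_value)
    virtual_camera.aperture = interp_table(APERTURE_TABLE, encoder_value)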
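
Finally, the storing limitation enumerates a per-image record: scene state data, lens parameter data (focal length and distortion), video camera parameters (model and settings), and metadata (date and location). A minimal sketch of such a record follows, with all field names hypothetical.

from dataclasses import dataclass
from datetime import date
from typing import Any, Dict

@dataclass
class LensParameterData:
    focal_length_mm: float
    distortion: Dict[str, float]      # e.g. radial/tangential coefficients

@dataclass
class CameraParameters:
    model: str
    settings: Dict[str, Any]          # shutter, ISO, white balance, ...

@dataclass
class CapturedImageRecord:
    image_path: str                   # where the frame captured from the feed is stored
    scene_state: Dict[str, Any]       # virtual-scene state at capture time
    lens: LensParameterData
    camera: CameraParameters
    capture_date: date                # metadata: date of the capture
    capture_location: str             # metadata: location of the capture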