CPC G06T 19/006 (2013.01) [G06F 3/017 (2013.01); G06T 7/73 (2017.01); G06T 15/20 (2013.01); G06T 17/00 (2013.01); H04N 5/2224 (2013.01); H04N 5/2621 (2013.01); H04N 5/265 (2013.01); H04N 5/272 (2013.01); H04N 13/156 (2018.05); H04N 13/204 (2018.05); H04N 13/275 (2018.05); H04N 23/80 (2023.01); H04N 13/239 (2018.05)]. 41 Claims.
1. A markerless system, the system including:
(i) a hand-held or portable monoscopic video camera, and lens encoders configured to output values including focus and iris data in real-time, the monoscopic video camera including, or in attachment with, the lens encoders;
(ii) sensors including an accelerometer and a gyro sensing over six degrees of freedom;
(iii) two witness cameras forming a stereoscopic system, in which the monoscopic video camera does not form part of the stereoscopic system;
(iv) a camera tracking system in connection with the monoscopic video camera; and
(v) a rendering station in connection with the camera tracking system;
the markerless system being for mixing or compositing, in real-time, computer generated 3D objects and a video feed from the video camera, to generate real-time augmented reality video for TV broadcast, cinema or video games, in which:
(a) the sensors in or attached directly or indirectly to the video camera provide real-time positioning data defining the 3D position and 3D orientation of the video camera, or enabling the 3D position and 3D orientation of the video camera to be calculated, wherein the sensors are configured to output the real-time positioning data to the camera tracking system;
(b) the two witness cameras forming the stereoscopic system are fixed directly or indirectly to the video camera;
(c) the rendering station is configured to receive and to use the focus and iris data, and the real-time positioning data, automatically to create, recall, render or modify computer generated 3D objects;
(d) the rendering station is configured to mix-in or to composite the resulting computer generated 3D objects with the video feed from the video camera to provide augmented reality video for TV broadcast, cinema or video games;
and in which:
(e) the camera tracking system is configured to determine the 3D position and orientation of the video camera with reference to a 3D map of the real-world, wherein the camera tracking system is configured to generate the 3D map of the real-world, at least in part, by using the real-time 3D positioning data from the sensors plus a video flow in which the two witness cameras forming the stereoscopic system survey a scene, and in which the camera tracking system is configured to detect natural markers in the scene that have not been manually or artificially added to that scene.
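Element (a) has an accelerometer and a gyro supplying real-time orientation data. As an illustrative sketch only (not part of the claimed system), a one-axis complementary filter shows the standard way such sensors are fused: the gyro rate is integrated for fast response, while the accelerometer's gravity reference pulls the estimate back to correct gyro drift. The function name and the blend factor `alpha` are assumptions for illustration.

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyro rate (responsive but drifting) with an accelerometer
    tilt estimate (noisy but drift-free) for one rotation axis.

    angle_prev  : previous fused angle estimate (radians)
    gyro_rate   : angular rate from the gyro (radians/second)
    accel_angle : tilt angle inferred from the accelerometer (radians)
    dt          : time step since the last update (seconds)
    alpha       : blend factor; close to 1.0 trusts the gyro short-term
    """
    # Integrate the gyro, then blend toward the accelerometer reference.
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Applied repeatedly with a stationary gyro, the estimate converges to the accelerometer's angle, which is why the drift-free reference dominates in steady state.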
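Element (e) has the two witness cameras surveying the scene so that natural markers can be triangulated into a 3D map. A minimal sketch of that geometry, assuming a rectified stereo pair with known focal length, principal point and baseline (all names and the rectification assumption are illustrative, not taken from the claim): depth follows from the horizontal disparity between the two views via Z = f·b/d.

```python
def triangulate_rectified(uv_left, uv_right, f, cx, cy, baseline):
    """Recover the 3D position (in the left witness camera's frame) of a
    natural feature matched across a rectified stereo pair.

    uv_left, uv_right : (u, v) pixel coordinates of the matched feature
    f                 : focal length in pixels (assumed equal for both cameras)
    cx, cy            : principal point in pixels
    baseline          : distance between the two camera centres (metres)
    """
    d = uv_left[0] - uv_right[0]  # horizontal disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: feature at infinity or bad match")
    z = f * baseline / d                # depth from disparity
    x = (uv_left[0] - cx) * z / f       # back-project through the left camera
    y = (uv_left[1] - cy) * z / f
    return (x, y, z)
```

Each triangulated feature becomes one candidate point in the 3D map of the real-world that the camera tracking system builds; the sensor data from element (a) supplies the scale and pose prior that a monoscopic camera alone could not.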
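Elements (c) and (d) have the rendering station placing computer generated 3D objects into the video feed using the tracked camera pose. A minimal sketch of the two underlying operations, under assumed conventions (world-to-camera rotation R and translation t, a pinhole intrinsic model, and simple source-over blending; every name here is illustrative):

```python
def _mat_vec(R, p):
    """Multiply a 3x3 rotation matrix (nested lists) by a 3-vector."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))

def project_point(p_world, R, t, f, cx, cy):
    """Project a world-space point of a CG object into pixel coordinates,
    given the tracked camera pose (R, t map world to camera coordinates)
    and pinhole intrinsics (f in pixels, principal point cx, cy)."""
    x, y, z = _mat_vec(R, p_world)
    x, y, z = x + t[0], y + t[1], z + t[2]
    if z <= 0:
        return None  # point is behind the camera; nothing to draw
    return (f * x / z + cx, f * y / z + cy)

def over(cg_rgb, alpha, video_rgb):
    """Source-over compositing of a rendered CG pixel onto the video pixel:
    out = alpha * cg + (1 - alpha) * video, per channel."""
    return tuple(alpha * c + (1 - alpha) * v for c, v in zip(cg_rgb, video_rgb))
```

Because R and t come from the camera tracking system in real-time, the projected CG objects stay locked to the scene as the hand-held camera moves, which is what makes the mixed output read as augmented reality.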