US 11,721,076 B2
System for mixing or compositing in real-time, computer generated 3D objects and a video feed from a film camera
Samuel Boivin, Saclas (FR); and Brice Michoud, Chantilly (FR)
Assigned to NCAM TECHNOLOGIES LIMITED, London (GB)
Filed by NCAM TECHNOLOGIES LIMITED, London (GB)
Filed on Nov. 22, 2021, as Appl. No. 17/532,066.
Application 17/532,066 is a continuation of application No. 14/830,494, filed on Aug. 19, 2015, granted, now Pat. No. 11,182,960.
Application 14/830,494 is a continuation of application No. 14/399,632, granted, now Pat. No. 9,600,936, previously published as PCT/GB2013/051205, filed on May 9, 2013.
Claims priority of application No. 1208088 (GB), filed on May 9, 2012.
Prior Publication US 2022/0076501 A1, Mar. 10, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 19/00 (2011.01); G06T 7/73 (2017.01); G06T 15/20 (2011.01); G06T 17/00 (2006.01); G06F 3/01 (2006.01); H04N 5/265 (2006.01); H04N 5/272 (2006.01); H04N 5/222 (2006.01); H04N 5/262 (2006.01); H04N 13/156 (2018.01); H04N 13/204 (2018.01); H04N 13/275 (2018.01); H04N 13/239 (2018.01); H04N 23/80 (2023.01)
CPC G06T 19/006 (2013.01) [G06F 3/017 (2013.01); G06T 7/73 (2017.01); G06T 15/20 (2013.01); G06T 17/00 (2013.01); H04N 5/2224 (2013.01); H04N 5/265 (2013.01); H04N 5/2621 (2013.01); H04N 5/272 (2013.01); H04N 13/156 (2018.05); H04N 13/204 (2018.05); H04N 13/275 (2018.05); H04N 23/80 (2023.01); H04N 13/239 (2018.05)] 33 Claims
OG exemplary drawing
 
1. A markerless system, the system including:
(i) a hand-held or portable monoscopic video camera;
(ii) sensors including an accelerometer and a gyro sensing over six degrees of freedom;
(iii) two witness cameras forming a stereoscopic system, in which the monoscopic video camera does not form part of the stereoscopic system;
(iv) a camera tracking system in connection with the monoscopic video camera; and
(v) a rendering station in wireless connection with the camera tracking system;
the markerless system being for mixing or compositing in real-time, computer generated 3D objects and a video feed from the video camera, to generate real-time augmented reality video for TV broadcast, cinema or video games, in which:
(a) the sensors in or attached directly or indirectly to the video camera provide real-time positioning data defining the 3D position and 3D orientation of the video camera, or enabling the 3D position and 3D orientation of the video camera to be calculated, wherein the sensors are configured to output the real-time positioning data to the camera tracking system;
(b) the two witness cameras forming the stereoscopic system are fixed directly or indirectly to the video camera;
(c) the rendering station is configured to receive wirelessly and to use real-time positioning data automatically to create, recall, render or modify computer generated 3D objects;
(d) the rendering station is configured to mix-in or to composite the resulting computer generated 3D objects with the video feed from the video camera to provide augmented reality video for TV broadcast, cinema or video games;
and in which:
(e) the camera tracking system is configured to determine the 3D position and orientation of the video camera with reference to a 3D map of the real world, wherein the camera tracking system is configured to generate the 3D map of the real world, at least in part, by using the real-time 3D positioning data from the sensors plus a video flow in which the two witness cameras forming the stereoscopic system survey a scene, and in which the camera tracking system is configured to detect natural markers in the scene that have not been manually or artificially added to that scene, and to use a constant velocity model associated with the information provided by the sensors to predict a next position of the monoscopic video camera using the previously correctly computed or confirmed position.
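
Element (e) combines two standard real-time tracking ideas: predicting the camera's next pose with a constant velocity model seeded by the inertial sensors, and confirming that prediction against the natural markers seen by the witness cameras. The following Python sketch illustrates the prediction step only, under assumed conventions; the state layout, the blending weight, and the names CameraState and predict_pose are hypothetical and do not appear in the claim.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class CameraState:
        position: np.ndarray          # 3D position of the film camera
        orientation: np.ndarray       # rotation vector (small-angle approximation)
        velocity: np.ndarray          # linear velocity from previously confirmed poses
        angular_velocity: np.ndarray  # angular velocity estimate

    def predict_pose(prev: CameraState, dt: float,
                     gyro_rate: np.ndarray, accel: np.ndarray,
                     alpha: float = 0.5) -> CameraState:
        """Constant velocity prediction of the next camera pose, blended
        with the accelerometer and gyro readings (hypothetical weighting)."""
        # Blend the last confirmed velocities with the inertial measurements.
        lin_vel = prev.velocity + alpha * accel * dt
        ang_vel = (1.0 - alpha) * prev.angular_velocity + alpha * gyro_rate
        # Constant velocity step from the previously confirmed pose.
        return CameraState(
            position=prev.position + lin_vel * dt,
            orientation=prev.orientation + ang_vel * dt,
            velocity=lin_vel,
            angular_velocity=ang_vel,
        )

In a full tracker, the predicted pose would seed the search for natural markers in the witness-camera frames; the refined pose then becomes the previously confirmed position for the next frame.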
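
Element (d)'s mixing step is, at its simplest, an alpha-over composite of the rendered 3D objects onto each video frame. The sketch below is again an assumption rather than the patented method: it blends a rendered RGBA layer over an RGB camera frame using straight (non-premultiplied) alpha, and composite_over is a hypothetical name.

    import numpy as np

    def composite_over(render_rgba: np.ndarray, frame_rgb: np.ndarray) -> np.ndarray:
        """Blend rendered CG objects (H x W x 4, floats in [0, 1]) over a
        camera frame (H x W x 3, floats in [0, 1]) using straight alpha."""
        alpha = render_rgba[..., 3:4]   # per-pixel coverage of the CG layer
        cg_rgb = render_rgba[..., :3]
        # Standard "over" operator: CG where alpha is high, live video elsewhere.
        return alpha * cg_rgb + (1.0 - alpha) * frame_rgb

In the claimed system, the RGBA layer would be rendered from the tracked camera pose, so the composited objects stay registered with the real scene as the camera moves.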