US 11,967,001 B2
Systems and methods to generate a video of a user-defined virtual reality scene
Gil Baron, Los Angeles, CA (US); Daniel Andrew Bellezza, Van Nuys, CA (US); Jeffrey Scott Dixon, Pasadena, CA (US); William Stuart Farquhar, Hollis, NH (US); Jason Zesheng Hwang, Los Angeles, CA (US); John Henry Kanikula Peters, Los Angeles, CA (US); Nhan Van Khong, Los Angeles, CA (US); Christopher Robert Laubach, Los Angeles, CA (US); Gregory Scott Pease, Burbank, CA (US); and Jonathan Michael Ross, Santa Monica, CA (US)
Assigned to Mindshow Inc., Los Angeles, CA (US)
Filed by Mindshow Inc., Los Angeles, CA (US)
Filed on Apr. 13, 2023, as Appl. No. 18/300,163.
Application 18/300,163 is a continuation of application No. 17/842,292, filed on Jun. 16, 2022, granted, now Pat. No. 11,631,201.
Application 17/842,292 is a continuation of application No. 16/932,473, filed on Jul. 17, 2020, granted, now Pat. No. 11,403,785, issued on Aug. 2, 2022.
Prior Publication US 2023/0252690 A1, Aug. 10, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 11/00 (2006.01); G06F 3/16 (2006.01)
CPC G06T 11/00 (2013.01) [G06F 3/165 (2013.01); G06T 2210/61 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A system for generating a video of a user-defined virtual scene, the system comprising:
one or more physical processors configured by machine-readable instructions to:
obtain a scene definition for a virtual scene, the scene definition defining preset performances of characters within a virtual setting that includes at least inanimate objects, over a scene duration from a scene beginning to a scene end;
obtain camera information for multiple virtual cameras to be used in generating a two-dimensional presentation of the virtual scene, wherein the camera information for individual ones of the multiple virtual cameras defines, as a function of progress through the scene duration, a field of view for the individual virtual cameras, values of camera capture parameters for the individual virtual cameras, and adjustments to the scene definition specific to the individual virtual cameras, wherein the adjustments to the scene definition include changes to values that define a location of individual ones of the inanimate objects, a size of the individual inanimate objects, lighting of the virtual setting, and/or ambient audio of the virtual setting;
obtain camera timing instructions specifying which of the multiple virtual cameras should be used to generate the two-dimensional presentation of the virtual scene as a function of progress through the scene duration, wherein the camera timing instructions include individual timepoints within the scene duration to initiate the individual virtual cameras for different portions of the scene duration; and
generate the two-dimensional presentation of the virtual scene in accordance with the camera timing instructions and the camera information such that, responsive to the camera timing instructions, the two-dimensional presentation of the virtual scene depicts the virtual setting and the characters through the field of view of the individual virtual cameras at the individual timepoints for the different portions of the scene duration, where the values of the camera capture parameters for the individual virtual cameras and the adjustments to the scene definition specific to the individual virtual cameras are implemented at the individual timepoints for the different portions.
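
For orientation only, the following is a minimal Python sketch of how the inputs recited in claim 1 might be organized as data structures: a scene definition with preset character performances within a setting of inanimate objects, per-camera information whose field of view, capture parameter values, and camera-specific scene adjustments vary with progress through the scene duration, and camera timing instructions that map timepoints to cameras. All names (SceneDefinition, CameraInfo, Track, and so on) are hypothetical illustrations and are not drawn from the patent's specification.

# Hypothetical sketch of the claimed inputs; names are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# A time-keyed track: (progress through the scene duration in [0, 1], value).
Track = List[Tuple[float, float]]

def sample(track: Track, t: float) -> float:
    """Return the most recently keyed value at normalized scene time t."""
    value = track[0][1]
    for key_t, key_v in track:
        if key_t <= t:
            value = key_v
    return value

@dataclass
class Character:
    name: str
    # Preset performance: named tracks (e.g. position components) keyed
    # over the scene duration from scene beginning to scene end.
    performance: Dict[str, Track]

@dataclass
class InanimateObject:
    name: str
    location: Tuple[float, float, float]
    size: float

@dataclass
class SceneDefinition:
    duration_s: float
    characters: List[Character]
    setting: List[InanimateObject]      # at least inanimate objects
    lighting: float = 1.0               # scalar intensity, illustrative
    ambient_audio: str = "room_tone"

@dataclass
class CameraInfo:
    name: str
    fov_deg: Track                      # field of view vs. scene progress
    capture_params: Dict[str, Track]    # e.g. focus, exposure vs. progress
    # Camera-specific adjustments to the scene definition: object location
    # or size, lighting, and/or ambient audio overrides.
    scene_adjustments: Dict[str, Track] = field(default_factory=dict)

# Camera timing instructions: timepoints (as scene progress) at which to
# initiate individual cameras for the following portion of the scene.
CameraTiming = List[Tuple[float, str]]

# Example: one camera whose field of view narrows partway through the scene.
wide = CameraInfo(name="wide", fov_deg=[(0.0, 70.0), (0.5, 55.0)],
                  capture_params={"exposure": [(0.0, 0.5)]})
assert sample(wide.fov_deg, 0.75) == 55.0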
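
A second sketch, equally hypothetical and deliberately simplified (frames are emitted as dictionaries rather than rendered images), illustrates the recited generation step: at each sampled point in the scene duration, the camera timing instructions select the active virtual camera, that camera's field of view and capture parameter values are evaluated, its camera-specific adjustments are applied on top of the scene definition, and a frame of the two-dimensional presentation is produced. The dict-based inputs and the names active_camera and generate_presentation are assumptions for illustration, not the patent's own implementation.

# Hypothetical sketch of the generation step; names are illustrative only.
from typing import Any, Dict, List, Tuple

Track = List[Tuple[float, float]]        # (scene progress, value) keyframes

def sample(track: Track, t: float) -> float:
    """Most recently keyed value at normalized scene time t."""
    value = track[0][1]
    for key_t, key_v in track:
        if key_t <= t:
            value = key_v
    return value

def active_camera(timing: List[Tuple[float, str]], t: float) -> str:
    """Camera timing instructions: the camera initiated at the latest
    timepoint not later than t is used for this portion of the scene."""
    name = timing[0][1]
    for start_t, cam in timing:
        if start_t <= t:
            name = cam
    return name

def generate_presentation(
    scene: Dict[str, Any],
    cameras: Dict[str, Dict[str, Any]],
    timing: List[Tuple[float, str]],
    frames: int = 9,
) -> List[Dict[str, Any]]:
    """Produce a stand-in two-dimensional presentation: one frame
    description per sampled timepoint from scene beginning to scene end."""
    presentation = []
    for i in range(frames):
        t = i / (frames - 1)                   # progress through the duration
        cam = cameras[active_camera(timing, t)]
        view = dict(scene)                     # start from the scene definition
        view.update(cam["scene_adjustments"])  # camera-specific overrides
        presentation.append({
            "t": round(t, 2),
            "camera": cam["name"],
            "fov_deg": sample(cam["fov_deg"], t),
            "capture": {k: sample(trk, t)
                        for k, trk in cam["capture_params"].items()},
            "scene": view,
        })
    return presentation

if __name__ == "__main__":
    scene = {"lighting": 1.0, "ambient_audio": "room_tone",
             "chair": {"location": (0.0, 0.0, 0.0), "size": 1.0}}
    cameras = {
        "wide":  {"name": "wide", "fov_deg": [(0.0, 70.0)],
                  "capture_params": {"exposure": [(0.0, 0.5)]},
                  "scene_adjustments": {}},
        "close": {"name": "close", "fov_deg": [(0.5, 35.0), (1.0, 25.0)],
                  "capture_params": {"exposure": [(0.0, 0.6)]},
                  "scene_adjustments": {"lighting": 1.4}},
    }
    timing = [(0.0, "wide"), (0.5, "close")]   # cut to the close-up at midpoint
    for frame in generate_presentation(scene, cameras, timing):
        print(frame)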