US 12,212,731 B2
Methods for controlling scene, camera and viewing parameters for altering perception of 3D imagery
Christopher E. Nolan, Los Angeles, CA (US); Bradley T. Collar, Valencia, CA (US); and Michael D. Smith, Hermosa Beach, CA (US)
Assigned to Warner Bros. Entertainment Inc., Burbank, CA (US)
Filed by WARNER BROS. ENTERTAINMENT INC., Burbank, CA (US)
Filed on Oct. 18, 2022, as Appl. No. 17/968,722.
Application 17/968,722 is a continuation of application No. 17/021,499, filed on Sep. 15, 2020, granted, now 11,477,430.
Application 17/021,499 is a continuation of application No. 16/370,067, filed on Mar. 29, 2019, granted, now 10,778,955, issued on Sep. 15, 2020.
Application 16/370,067 is a continuation of application No. 15/368,456, filed on Dec. 2, 2016, granted, now 10,277,883, issued on Apr. 30, 2019.
Application 15/368,456 is a continuation of application No. 13/482,953, filed on May 29, 2012, granted, now 9,532,027, issued on Dec. 27, 2016.
Claims priority of provisional application 61/533,777, filed on Sep. 12, 2011.
Claims priority of provisional application 61/491,157, filed on May 27, 2011.
Prior Publication US 2023/0291884 A1, Sep. 14, 2023
Int. Cl. H04N 13/189 (2018.01); H04N 13/128 (2018.01); H04N 13/204 (2018.01); H04N 13/275 (2018.01)
CPC H04N 13/189 (2018.05) [H04N 13/128 (2018.05); H04N 13/204 (2018.05); H04N 13/275 (2018.05)] 20 Claims
OG exemplary drawing
 
1. A method for stereography, comprising:
defining, by at least one processor, perception values comprising a three-dimensional (3D) shape ratio and a 3D width magnification factor for an output of a stereographic image generating process based at least in part on parameters of a viewing environment of a display on a screen, wherein the 3D width magnification factor is a ratio of perceived 3D image width to original object width, the 3D shape ratio is a ratio of 3D depth magnification to the 3D width magnification factor, and the 3D depth magnification is a ratio of the change in perceived image depth to the change in original object depth;
generating, by the at least one processor, input parameters of the stereographic image generating process based at least in part on the perception values;
generating, by the at least one processor, the output using the stereographic image generating process and the input parameters;
receiving, by the at least one processor, one or more updated perception values; and
generating, by the at least one processor, an updated output based on the one or more updated perception values.
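The claim above defines the perception values as simple ratios: a width magnification (perceived width over original width), a depth magnification (change in perceived depth over change in original depth), and a shape ratio (depth magnification over width magnification). The sketch below, in Python, illustrates only those recited relationships; the function and field names are hypothetical and are not taken from the patent, and the sketch is not the patented stereographic process itself.

    from dataclasses import dataclass

    @dataclass
    class PerceptionValues:
        width_magnification: float   # perceived 3D image width / original object width
        depth_magnification: float   # change in perceived depth / change in original object depth
        shape_ratio: float           # depth magnification / width magnification

    def perception_values(perceived_width: float,
                          original_width: float,
                          perceived_depth_change: float,
                          original_depth_change: float) -> PerceptionValues:
        # Compute the claim-1 perception values from measured widths and depth changes.
        m_w = perceived_width / original_width
        m_d = perceived_depth_change / original_depth_change
        return PerceptionValues(width_magnification=m_w,
                                depth_magnification=m_d,
                                shape_ratio=m_d / m_w)

    # Example (illustrative numbers only): an object 1.0 m wide is perceived as 2.0 m
    # wide, while a 0.5 m change in object depth is perceived as a 0.8 m change.
    pv = perception_values(perceived_width=2.0, original_width=1.0,
                           perceived_depth_change=0.8, original_depth_change=0.5)
    print(pv)  # width_magnification=2.0, depth_magnification=1.6, shape_ratio=0.8

In this reading, a shape ratio of 1.0 would correspond to perceived depth scaling in proportion to perceived width; values below 1.0, as in the example, would correspond to a flattened ("cardboarded") appearance.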