CPC: G06T 15/506 (2013.01) [G06T 7/10 (2017.01); G06T 7/50 (2017.01); G06T 7/70 (2017.01); G06T 15/04 (2013.01); G06T 15/50 (2013.01); G06T 15/60 (2013.01); G06T 17/20 (2013.01); G06T 19/006 (2013.01); G06T 19/20 (2013.01); G06V 10/70 (2022.01); G06V 20/20 (2022.01); G06V 20/36 (2022.01); H04N 23/74 (2023.01); G06T 2200/08 (2013.01); G06T 2200/24 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/20212 (2013.01); G06T 2207/30244 (2013.01); G06T 2210/04 (2013.01); G06T 2210/56 (2013.01); G06T 2219/2004 (2013.01); G06T 2219/2016 (2013.01)]
54 Claims

1. A method, comprising:
receiving a selection of a camera framing template for capturing one or more images of a scene in an imaging studio, wherein a plurality of camera framing templates, including the selected camera framing template, is generated using one or more machine learning based networks trained to learn frequently occurring camera framings in training images of scenes that are of a same or a similar type as the scene; and
automatically adjusting one or more cameras comprising the imaging studio according to the selected camera framing template;
wherein a set of images of the scene captured by the one or more cameras in the imaging studio is used to at least in part generate an interactive image of the scene, wherein the interactive image of the scene comprises a two-dimensional image with at least partial three-dimensional capabilities but without having an underlying three-dimensional model.
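The steps recited in claim 1 can be sketched as a minimal, hypothetical Python illustration of the workflow: a selection of a pre-generated framing template is received, and each camera in the studio is automatically driven to that template's framing parameters. All class, function, and parameter names below (`CameraFramingTemplate`, `Camera`, `adjust_studio`, the pan/tilt/zoom fields) are assumptions for illustration only; they do not appear in the patent, and the template generation by machine learning networks is treated as already done upstream.

```python
from dataclasses import dataclass

@dataclass
class CameraFramingTemplate:
    # Hypothetical per-camera framing settings; in the claim these templates
    # are generated by ML networks trained on frequently occurring camera
    # framings in images of the same or similar scene types.
    name: str
    pan_deg: float
    tilt_deg: float
    zoom: float

@dataclass
class Camera:
    pan_deg: float = 0.0
    tilt_deg: float = 0.0
    zoom: float = 1.0

    def apply(self, template: CameraFramingTemplate) -> None:
        # "Automatically adjusting" a camera here means driving it to the
        # framing parameters stored in the selected template.
        self.pan_deg = template.pan_deg
        self.tilt_deg = template.tilt_deg
        self.zoom = template.zoom

def adjust_studio(cameras: list[Camera], template: CameraFramingTemplate) -> None:
    # Apply the selected framing template to every camera in the studio.
    for cam in cameras:
        cam.apply(template)

# Usage: receive a selection from a set of pre-generated templates,
# then adjust two studio cameras accordingly.
templates = [
    CameraFramingTemplate("hero-shot", pan_deg=12.0, tilt_deg=-5.0, zoom=2.0),
    CameraFramingTemplate("overview", pan_deg=0.0, tilt_deg=-30.0, zoom=1.0),
]
selected = templates[0]          # "receiving a selection of a camera framing template"
studio = [Camera(), Camera()]
adjust_studio(studio, selected)  # both cameras now match the chosen framing
```

The captured images would then feed the downstream step the claim describes: generating an interactive two-dimensional image with partial three-dimensional capabilities but no underlying 3D model. That stage is outside this sketch.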