| CPC G06T 15/506 (2013.01) [G06T 7/10 (2017.01); G06T 7/50 (2017.01); G06T 7/70 (2017.01); G06T 15/04 (2013.01); G06T 15/50 (2013.01); G06T 15/60 (2013.01); G06T 17/20 (2013.01); G06T 19/006 (2013.01); G06T 19/20 (2013.01); G06V 10/70 (2022.01); G06V 20/20 (2022.01); G06V 20/36 (2022.01); H04N 23/74 (2023.01); G06T 2200/08 (2013.01); G06T 2200/24 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/20212 (2013.01); G06T 2207/30244 (2013.01); G06T 2210/04 (2013.01); G06T 2210/56 (2013.01); G06T 2219/2004 (2013.01); G06T 2219/2016 (2013.01)] | 54 Claims |

1. A method, comprising:
obtaining camera-captured raw images of a scene; and
processing the camera-captured raw images of the scene to generate a two-dimensional interactive image of the scene comprising a plurality of interactive features modifiable by an end user according to user preferences, wherein the generated interactive image of the scene comprises a two-dimensional image with at least partial three-dimensional capabilities but without an underlying three-dimensional model;
wherein a plurality of machine-learning-based networks facilitates capturing and processing of the camera-captured raw images of the scene to generate the interactive image of the scene, and wherein at least one of the plurality of machine-learning-based networks is trained at least in part on training images comprising a prescribed scene type to which the scene belongs and constrained to one or more objects of a corresponding set of objects.
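For illustration only (not part of the claim), the pipeline the claim recites can be sketched in code. The sketch below is a hypothetical, simplified stand-in: the "networks" are replaced by trivial placeholder functions (a brightness threshold for segmentation, a mean-intensity lighting estimate), and the "interactive image" is modeled as the composite 2D image plus per-pixel metadata that lets a feature such as relighting be applied without any underlying 3D model. All function names and data structures here are assumptions for exposition.

```python
import numpy as np

def segment_objects(image):
    # Hypothetical stand-in for a segmentation network:
    # threshold on per-pixel brightness.
    return (image.mean(axis=-1) > 0.5).astype(np.uint8)

def estimate_lighting(image):
    # Hypothetical stand-in for a lighting-estimation network:
    # a single global intensity estimate.
    return float(image.mean())

def generate_interactive_image(raw_images):
    """Combine camera-captured raw images into a 2D 'interactive image':
    the composite plus per-pixel masks and a lighting estimate that allow
    end-user modifications (e.g., relighting a segmented object) without
    an underlying three-dimensional model."""
    composite = np.mean(np.stack(raw_images), axis=0)
    return {
        "image": composite,            # the 2D image itself
        "object_mask": segment_objects(composite),
        "lighting": estimate_lighting(composite),
    }

def relight(interactive, gain):
    # One 'interactive feature': scale brightness of the
    # segmented object region only, clamped to [0, 1].
    out = interactive["image"].copy()
    mask = interactive["object_mask"].astype(bool)
    out[mask] = np.clip(out[mask] * gain, 0.0, 1.0)
    return out
```

In a real system each placeholder function would be one of the claimed machine-learning-based networks; the key structural point the sketch preserves is that every interactive operation acts on the 2D image and its per-pixel metadata, never on a reconstructed 3D model.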