CPC A63F 13/65 (2014.09) [A63F 13/30 (2014.09); A63F 13/57 (2014.09); A63F 13/812 (2014.09); G06T 7/20 (2013.01); G06T 9/00 (2013.01); G06T 13/40 (2013.01); G06T 15/04 (2013.01); G06T 15/40 (2013.01); G06T 2200/04 (2013.01); G06T 2200/08 (2013.01); G06T 2207/30221 (2013.01); G06T 2215/16 (2013.01)]

20 Claims
1. A method, comprising:
defining a physical model describing a plurality of real objects within a real environment extracted from a live action video of the plurality of real objects, representing, for each real object, at least a surface, a mass, a motion, acoustic properties, and an interaction with other real objects;
defining a dynamic motion vector state of the plurality of real objects in the real environment extracted from the live action video of the plurality of real objects;
receiving an input from a user comprising an influence on the defined dynamic motion vector state of at least one real object in the real environment; and
synthesizing, with at least one automated processor, a virtual view and audio output representing the plurality of real objects in the real environment as modified by the received user input, representing an extrapolation of the defined dynamic motion vector state of the plurality of real objects in the real environment and the audio output according to at least the surface, the mass, the motion, the acoustic properties, and the interaction with other real objects according to the physical model modified by the received input from the user, the extrapolation comprising a modification of a movement of the plurality of real objects within the real environment, representing a synthetic interaction of the plurality of real objects in the real environment, different from an interaction of the plurality of real objects according to the defined dynamic motion vector state and the physical model in the real environment absent the received input from the user.
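The claim itself is the authoritative statement of the method. For readers following the recited steps, the Python sketch below illustrates one possible reading: objects extracted from live-action video are represented as point masses with motion vectors, a user input perturbs one object's motion vector, and the modified state is extrapolated forward so the synthetic trajectory differs from the unmodified one. Every name here (RealObject, apply_user_influence, extrapolate), the 2D point-mass model, and the Euler integration are illustrative assumptions, not drawn from the specification; the surfaces, acoustic properties, and inter-object interactions recited in the claim are omitted for brevity.

    # Minimal sketch, assuming a toy 2D point-mass model; not the patented
    # implementation, and all identifiers are hypothetical.
    from dataclasses import dataclass
    from typing import List, Tuple

    Vec2 = Tuple[float, float]

    @dataclass
    class RealObject:
        """A real object in the physical model: mass, a crude surface proxy
        (radius), and a dynamic motion vector (position plus velocity)."""
        name: str
        mass: float          # kg
        radius: float        # m, stand-in for the recited surface
        position: Vec2       # m
        velocity: Vec2       # m/s, the object's motion vector

    def apply_user_influence(obj: RealObject, impulse: Vec2) -> None:
        """Modify the object's motion vector with a user impulse (N*s)."""
        vx, vy = obj.velocity
        obj.velocity = (vx + impulse[0] / obj.mass,
                        vy + impulse[1] / obj.mass)

    def extrapolate(objects: List[RealObject], dt: float,
                    steps: int) -> List[List[Vec2]]:
        """Extrapolate each object's position from its motion vector using
        forward Euler steps; collisions and acoustics are omitted here."""
        trajectory = []
        for _ in range(steps):
            for obj in objects:
                x, y = obj.position
                vx, vy = obj.velocity
                obj.position = (x + vx * dt, y + vy * dt)
            trajectory.append([o.position for o in objects])
        return trajectory

    if __name__ == "__main__":
        ball = RealObject("ball", mass=0.45, radius=0.11,
                          position=(0.0, 0.0), velocity=(3.0, 0.0))
        # Synthetic interaction: the user nudges the ball sideways, so the
        # extrapolated trajectory differs from the uninfluenced one.
        apply_user_influence(ball, impulse=(0.0, 0.9))  # 0.9 N*s upward
        for step, positions in enumerate(extrapolate([ball], 0.1, 5), 1):
            print(f"t={step * 0.1:.1f}s ball at {positions[0]}")

A full system along these lines would replace the Euler step with a physics engine that honors the surface, mass, acoustic, and inter-object interaction terms, and would render the extrapolated state as the claimed virtual view and audio output.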