CPC G06N 3/088 (2013.01) [G06F 16/951 (2019.01); G06F 18/217 (2023.01); G06N 3/04 (2013.01); G06N 3/08 (2013.01); G06T 19/006 (2013.01); G06T 15/005 (2013.01)] | 30 Claims |
1. A method comprising:
at a device including a non-transitory memory, a display, and one or more processors coupled with the non-transitory memory:
while presenting a synthesized reality setting on the display:
instantiating an objective-effectuator into the synthesized reality setting, wherein the objective-effectuator is characterized by a set of predefined actions and a set of visual rendering attributes, and the objective-effectuator represents a character from a source material;
obtaining an objective for the objective-effectuator;
determining contextual information characterizing the synthesized reality setting at least in part by determining a mapping between the synthesized reality setting and a physical setting in which the device is located;
generating a sequence of actions from the set of predefined actions based on the contextual information and the objective, wherein the actions in the sequence of actions are within a degree of similarity to actions that the character performs in the source material; and
manipulating the objective-effectuator to perform the sequence of actions.
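The claim's action-generation step can be sketched in code. The sketch below is a minimal illustration, not the patented implementation: all class, method, and parameter names are hypothetical, and a character-bigram Jaccard score stands in for the claim's unspecified "degree of similarity" between predefined actions and the character's source-material actions.

```python
from dataclasses import dataclass
from typing import Dict, List, Set


def similarity(a: str, b: str) -> float:
    """Jaccard similarity over character bigrams.

    A hypothetical stand-in for the claim's "degree of similarity";
    the patent does not specify a metric.
    """
    grams = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
    ga, gb = grams(a), grams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0


@dataclass
class ObjectiveEffectuator:
    """Hypothetical model of the claimed objective-effectuator."""
    predefined_actions: Set[str]          # the claimed set of predefined actions
    visual_attributes: Dict[str, str]     # the claimed visual rendering attributes
    source_actions: Set[str]              # actions the character performs in the source material

    def generate_sequence(
        self,
        objective: str,
        context: Dict[str, str],
        threshold: float = 0.25,
    ) -> List[str]:
        # Keep only predefined actions that resemble at least one
        # source-material action, per the claim's similarity constraint.
        # A full planner would also rank candidates against the objective
        # and contextual information; this sketch applies the filter only.
        return [
            a for a in sorted(self.predefined_actions)
            if any(similarity(a, s) >= threshold for s in self.source_actions)
        ]
```

For example, an effectuator whose source character walks and waves would retain "walk" and "wave" from its predefined actions but drop an action such as "teleport" that has no counterpart in the source material.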