US 11,676,353 B2
Systems and methods configured to facilitate animation
Jeffrey Scott Dixon, Pasadena, CA (US); and William Stuart Farquhar, Hollis, NH (US)
Assigned to Mindshow Inc., Los Angeles, CA (US)
Filed by Mindshow Inc., Los Angeles, CA (US)
Filed on Jun. 23, 2022, as Appl. No. 17/847,959.
Application 17/847,959 is a continuation of application No. 17/328,943, filed on May 24, 2021, granted, now Pat. No. 11,380,076.
Application 17/328,943 is a continuation of application No. 16/925,964, filed on Jul. 10, 2020, granted, now Pat. No. 11,043,041, issued on Jun. 22, 2021.
Prior Publication US 2022/0351471 A1, Nov. 3, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 19/00 (2011.01)
CPC G06T 19/006 (2013.01) 16 Claims
OG exemplary drawing
 
1. A system configured to facilitate animation, the system comprising:
one or more physical processors configured by machine-readable instructions to:
obtain a first scene definition, the first scene definition including scene information that defines a virtual scene, the virtual scene including integrated motion capture information of entities within a virtual setting over a scene duration from a scene beginning to a scene end, the scene information including entity information, the entity information defining individual ones of the entities and the motion capture information of the entities, wherein the scene information includes first entity information, the first entity information defining a first entity and first motion capture information for the first entity, the first motion capture information characterizing motion and/or sound made by a first user during a first portion of the scene duration such that the first user virtually embodies the first entity;
receive second entity information, the second entity information defining a second entity and second motion capture information characterizing motion and/or sound made by a second user during a second portion of the scene duration such that the second user virtually embodies the second entity, wherein the first portion and the second portion of the scene duration have at least some overlap;
integrate the second entity information into the first scene definition such that a second scene definition is generated, the second scene definition including the first scene definition and the second entity information, wherein the integrated second motion capture information affects the motion capture information of the entities;
for each of the entities of the entity information:
execute a simulation of the virtual scene from the second scene definition for at least a portion of the scene duration;
analyze the second scene definition for deviancy between a given entity and the second motion capture information, wherein the deviancy characterizes the motion capture information of the given entity as incompliant with the second motion capture information based on the integration of the second motion capture information;
indicate, based on the analysis for deviancy, the given entity as deviant; and
re-integrate the given entity into the second scene definition.
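The claim recites a pipeline of obtaining a first scene definition, receiving and integrating second entity information, and then, per entity, simulating the scene, analyzing for deviancy, indicating deviant entities, and re-integrating them. The following Python sketch is a minimal illustration of that control flow and the associated data structures. It is not the patented implementation: every name (SceneDefinition, EntityInfo, MotionCapture, integrate, is_deviant, simulate_and_reintegrate) and the overlap-based deviancy heuristic are hypothetical assumptions made only to show the shape of the recited steps.

```python
# A minimal, illustrative sketch of the structures and steps recited in claim 1.
# All identifiers and the deviancy heuristic are assumptions for illustration;
# the claim does not disclose a concrete data model or algorithm.
from dataclasses import dataclass, field, replace
from typing import Dict, List, Tuple


@dataclass
class MotionCapture:
    """Motion and/or sound made by a user over a portion of the scene duration."""
    start: float                       # seconds from the scene beginning
    end: float                         # seconds from the scene beginning
    samples: List[Tuple[float, dict]]  # (timestamp, pose/audio sample) pairs


@dataclass
class EntityInfo:
    """Defines one entity and the motion capture information embodying it."""
    entity_id: str
    capture: MotionCapture


@dataclass
class SceneDefinition:
    """Scene information: entities within a virtual setting over the scene duration."""
    duration: float
    entities: Dict[str, EntityInfo] = field(default_factory=dict)


def integrate(first_scene: SceneDefinition, new_entity: EntityInfo) -> SceneDefinition:
    """'Integrate the second entity information into the first scene definition
    such that a second scene definition is generated.'"""
    second_scene = replace(first_scene, entities=dict(first_scene.entities))
    second_scene.entities[new_entity.entity_id] = new_entity
    return second_scene


def is_deviant(entity: EntityInfo, new_capture: MotionCapture) -> bool:
    """Assumed deviancy test: an entity whose capture overlaps the newly
    integrated capture is treated as incompliant with it. A real system would
    compare simulated states rather than use this placeholder heuristic."""
    return entity.capture.start < new_capture.end and new_capture.start < entity.capture.end


def simulate_and_reintegrate(second_scene: SceneDefinition,
                             new_entity: EntityInfo) -> SceneDefinition:
    """For each entity: execute a simulation (stubbed here), analyze for
    deviancy, indicate deviant entities, and re-integrate them."""
    for entity_id, entity in second_scene.entities.items():
        if entity_id == new_entity.entity_id:
            continue
        # 'execute a simulation of the virtual scene from the second scene
        # definition for at least a portion of the scene duration' -- stubbed out.
        if is_deviant(entity, new_entity.capture):
            print(f"entity {entity_id} indicated as deviant")
            # 're-integrate the given entity into the second scene definition';
            # here the entity record is simply re-inserted as a stand-in.
            second_scene.entities[entity_id] = entity
    return second_scene
```

The per-entity loop mirrors the claim's "for each of the entities" structure; how the simulation, deviancy analysis, and re-integration are actually performed is left open by the claim language, so the sketch stubs those steps rather than assert a particular method.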