| CPC B25J 9/1671 (2013.01) [B25J 9/1664 (2013.01); G06T 1/0014 (2013.01); G06T 7/20 (2013.01); G06T 7/70 (2017.01); G06T 17/05 (2013.01); G06T 2215/16 (2013.01)] | 15 Claims |

1. A method of implementing object permanence in a simulated environment, the method comprising:
accessing, by at least one processor, the simulated environment, wherein the simulated environment comprises an environment model representing a physical environment;
capturing, by at least one image sensor, first image data representing the physical environment at a first time, the first image data further representing a first object at a first position in the physical environment and a second object at a second position in the physical environment;
detecting, by the at least one processor, a plurality of features of the first object in the first image data;
identifying, by the at least one processor, the first object in the first image data based on the plurality of detected features of the first object in the first image data;
detecting, by the at least one processor, at least one feature of the second object in the first image data;
identifying, by the at least one processor, the second object in the first image data based on the at least one detected feature of the second object in the first image data;
including, based on the first image data, a first representation of the first object at a first position in the environment model and a second representation of the second object at a second position in the environment model;
capturing, by the at least one image sensor, second image data representing the physical environment at a second time after the first time, the second image data further representing the first object at the first position in the physical environment and the second object at a third position in the physical environment;
detecting, by the at least one processor, at least one feature of the second object in the second image data;
identifying, by the at least one processor, the second object in the second image data based on the at least one detected feature of the second object in the second image data;
determining, by the at least one processor, that the second object has moved to the third position in the physical environment in the second image data;
updating, by the at least one processor, the second representation of the second object to a third position in the environment model;
detecting, by the at least one processor, a subset of the plurality of features of the first object in the second image data;
identifying, by the at least one processor, the first object in the second image data based on the subset of detected features of the first object in the second image data;
determining, by the at least one processor, that the first object remains positioned at the first position in the physical environment in the second image data;
maintaining, by the at least one processor, the first representation of the first object at the first position in the environment model;
capturing, by the at least one image sensor, third image data representing the physical environment at a third time after the second time, the third image data further representing the second object at a fourth position in the physical environment;
detecting, by the at least one processor, at least one feature of the second object in the third image data;
identifying, by the at least one processor, the second object in the third image data based on the at least one detected feature of the second object in the third image data;
determining, by the at least one processor, that the second object has moved to the fourth position in the physical environment in the third image data;
updating, by the at least one processor, the second representation of the second object to a fourth position in the environment model;
determining, by the at least one processor, that the first object is not detectable in the third image data;
determining, by the at least one processor, that the first position is occluded by the second object at the fourth position; and
maintaining, by the at least one processor, the first representation of the first object at the first position in the environment model without visually rendering the first object.
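Read as an algorithm, claim 1 describes a per-frame tracking loop: re-identify each modeled object from detected features (a subset suffices), move the representations of objects that have moved, and, when an object goes undetected, maintain its representation in the model without rendering it if its last known position is occluded. The Python sketch below is one illustrative way such a loop could be structured; it is not taken from the patent. All identifiers (`TrackedObject`, `identify`, `occludes`, `update_model`) are hypothetical, the feature matcher is stubbed as a descriptor-set overlap, and a simple ray/sphere intersection stands in for whatever occlusion geometry an actual implementation would use.

```python
"""Illustrative sketch of the per-frame object-permanence loop of claim 1.

Hypothetical names throughout (TrackedObject, identify, occludes,
update_model); the feature matcher is stubbed as descriptor-set overlap
and the occlusion test is a simple ray/sphere intersection.
"""
from dataclasses import dataclass

import numpy as np


@dataclass
class TrackedObject:
    obj_id: str
    features: set          # descriptor set captured when the object was enrolled
    position: np.ndarray   # last known position of the representation in the model
    radius: float = 0.5    # coarse bounding sphere used by the occlusion test
    render: bool = True    # False: maintained in the model but not visually rendered


def identify(detections: dict, known: TrackedObject, min_overlap: float = 0.3) -> bool:
    """Re-identify a known object when enough of its enrolled features are
    detected; a subset suffices, as in the claim's handling of the first
    object in the second image data."""
    seen = detections.get(known.obj_id, set())
    return len(seen & known.features) >= min_overlap * len(known.features)


def occludes(camera: np.ndarray, occluder: TrackedObject, target: np.ndarray) -> bool:
    """Ray/sphere test: does the occluder sit on the camera-to-target line
    of sight, strictly between the camera and the target position?"""
    ray = target - camera
    t = float(np.dot(occluder.position - camera, ray) / np.dot(ray, ray))
    if not 0.0 < t < 1.0:        # occluder must lie between camera and target
        return False
    closest = camera + t * ray   # point on the ray nearest the occluder center
    return float(np.linalg.norm(occluder.position - closest)) <= occluder.radius


def update_model(model: dict, camera: np.ndarray,
                 detections: dict, positions: dict) -> None:
    """One update per captured frame: move re-identified objects first, then
    maintain (without rendering) any undetected object whose last known
    position is occluded by a re-identified object at its new position."""
    seen, missing = [], []
    for obj in model.values():
        (seen if identify(detections, obj) else missing).append(obj)
    for obj in seen:
        obj.position = positions[obj.obj_id]   # moved, or unchanged if in place
        obj.render = True
    for obj in missing:
        if any(occludes(camera, o, obj.position) for o in seen):
            obj.render = False                 # occluded: persist, do not render
        # else: genuinely absent; a real tracker would age the track out here


if __name__ == "__main__":
    camera = np.array([0.0, 0.0, 0.0])
    model = {
        "first":  TrackedObject("first",  set("abcdefgh"), np.array([0.0, 0.0, 4.0])),
        "second": TrackedObject("second", set("wxyz"),     np.array([2.0, 0.0, 4.0])),
    }
    # Second image: first object re-identified in place from a feature subset;
    # second object re-identified and moved (its third position).
    update_model(model, camera,
                 {"first": set("abcd"), "second": set("wxyz")},
                 {"first": np.array([0.0, 0.0, 4.0]),
                  "second": np.array([1.0, 0.0, 4.0])})
    # Third image: second object (fourth position) blocks the line of sight to
    # the first object's last known position; the first object goes undetected.
    update_model(model, camera,
                 {"second": set("wxyz")},
                 {"second": np.array([0.0, 0.0, 2.0])})
    print(model["first"].render)   # False: maintained in the model, unrendered
```

The two-pass update mirrors the claim's ordering: re-identified objects are moved first (the second object to its fourth position), and occlusion is then tested against the updated positions, which is what allows the first object's representation to be maintained at the first position without being visually rendered.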