US 12,217,478 B2
Utilizing prediction models of an environment
Shmuel Ur, Shorashim (IL); Vlad Dabija, Mountain View, CA (US); and David Hirshberg, Haifa (IL)
Assigned to SHMUEL UR INNOVATION LTD., Shorashim (IL)
Filed by SHMUEL UR INNOVATION LTD., Shorashim (IL)
Filed on Jun. 17, 2022, as Appl. No. 17/807,648.
Application 17/807,648 is a continuation of application No. 16/272,655, filed on Feb. 11, 2019, granted, now 11,389,956.
Application 16/272,655 is a division of application No. 15/177,411, filed on Jun. 9, 2016, granted, now 10,245,724.
Prior Publication US 2023/0008007 A1, Jan. 12, 2023
Int. Cl. B25J 9/00 (2006.01); B25J 9/16 (2006.01); G06T 7/12 (2017.01); G06T 7/73 (2017.01); G06V 10/75 (2022.01); G06V 20/10 (2022.01)
CPC G06V 10/751 (2022.01) [B25J 9/1612 (2013.01); B25J 9/1669 (2013.01); B25J 9/1697 (2013.01); G06T 7/12 (2017.01); G06T 7/75 (2017.01); G06V 20/10 (2022.01); G05B 2219/39536 (2013.01); G05B 2219/40425 (2013.01)] 21 Claims
OG exemplary drawing
 
1. A system comprising:
one or more sensors configured to capture scenes of an environment, the environment comprising one or more physical objects;
a processor configured to:
obtain a computer-representation of a first scene of the environment from said one or more sensors;
predict, using a model of the environment and based on the computer-representation of the first scene of the environment, a predicted second scene of the environment, the predicted second scene comprising predicted pixel values;
obtain a computer-representation of an observed second scene of the environment from said one or more sensors, the observed second scene comprising observed pixel values;
determine a difference between the predicted second scene and the observed second scene based on differences between the predicted pixel values and the observed pixel values; and
create a compressed computer-representation of the observed second scene, wherein the compressed computer-representation includes the determined difference between the predicted second scene and the observed second scene, whereby the computer-representation of the observed second scene can be reconstructed by predicting, using the model and the computer-representation of the first scene, the predicted second scene, and by applying the determined difference to the predicted second scene.
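
A minimal sketch of the predict, difference, compress, and reconstruct flow recited in claim 1, assuming scenes are represented as numpy pixel arrays. The predict_next_scene helper, the coordinate-and-value difference encoding, and the trivial static-environment model are illustrative assumptions; the claim does not prescribe a particular model or difference representation.

import numpy as np

def predict_next_scene(model, first_scene):
    # Hypothetical helper: the environment model predicts the pixel values
    # of the next scene from the computer-representation of the first scene.
    return model(first_scene)

def compress(predicted, observed):
    # Keep only the pixels where the observed scene differs from the prediction.
    changed = predicted != observed          # per-pixel difference mask
    coords = np.argwhere(changed)            # locations of the differing pixels
    values = observed[changed]               # observed values at those locations
    return coords, values                    # compressed computer-representation

def reconstruct(predicted, coords, values):
    # Recover the observed scene from the prediction plus the stored difference.
    scene = predicted.copy()
    scene[tuple(coords.T)] = values
    return scene

# Usage with an assumed "static environment" model that predicts no change.
rng = np.random.default_rng(0)
first_scene = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
observed_second = first_scene.copy()
observed_second[1, 2] = 99                   # a single pixel actually changed

model = lambda scene: scene                  # assumed model: next scene equals current scene
predicted_second = predict_next_scene(model, first_scene)

coords, values = compress(predicted_second, observed_second)
restored = reconstruct(predicted_second, coords, values)
assert np.array_equal(restored, observed_second)

Storing only the coordinate-and-value pairs of the differing pixels is one possible difference encoding; any representation from which the observed pixel values can be re-derived from the predicted second scene would serve the same reconstruction purpose.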