CPC G05D 1/0246 (2013.01) [G05D 1/0221 (2013.01); G05D 1/0274 (2013.01); G06N 20/00 (2019.01); G06V 20/58 (2022.01)]
20 Claims

1. An autonomous vehicle comprising:
    a plurality of sensors;
    one or more processors; and
    one or more computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
        receiving, from the plurality of sensors, sensor data representing an environment of the autonomous vehicle in near real time, the environment comprising:
            a visible region that is visible to a first sensor of the plurality of sensors that is associated with a first sensor modality and a second sensor of the plurality of sensors that is associated with a second sensor modality, wherein the first sensor modality is different from the second sensor modality; and
            an occluded region that is occluded to the first sensor and visible to the second sensor;
        inputting a first portion of the sensor data associated with the occluded region and a second portion of the sensor data associated with the visible region into a machine learned model, wherein:
            the second portion comprises an output of the second sensor from the occluded region,
            the machine learned model is trained based on first log data and second log data,
            the first log data represents:
                a second visible region at a first past time, and
                that a previously undetected object was first detected to be present at the second visible region at a second past time, wherein the second past time is after the first past time, and
            the second log data is generated by:
                receiving third log data captured by at least one of the autonomous vehicle or a second vehicle, the third log data representing a third visible region, and
                artificially occluding the third visible region, comprising artificially occluding an object in the third visible region;
        receiving, from the machine learned model, prediction probabilities associated with an occluded object in near real time, the prediction probabilities indicative of the occluded object occupying the visible region during a future period of time;
        determining, based at least in part on the prediction probabilities, an action for the autonomous vehicle to perform; and
        controlling the autonomous vehicle to perform the action.
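The claim's second-log-data generation step (taking log data of a fully visible region and artificially occluding part of it, including an object, so that the object's ground truth can supervise a model that reasons about occluded space) can be sketched as follows. This is a minimal illustration assuming a 2D point representation; the function and parameter names (`artificially_occlude`, `half_angle_deg`, and so on) are hypothetical and do not appear in the claim.

```python
import numpy as np

def artificially_occlude(points, labels, occluder_xy, sensor_xy, half_angle_deg=10.0):
    """Simulate an occlusion: drop every point that falls inside the angular
    shadow cast by a hypothetical occluder, as seen from the sensor origin.
    Returns the artificially occluded point set plus the ground-truth labels
    of the hidden points, which become supervision targets for the model."""
    rel = points - sensor_xy                 # vectors from sensor to each point
    occ = occluder_xy - sensor_xy            # vector from sensor to the occluder
    occ_dist = np.linalg.norm(occ)
    dists = np.linalg.norm(rel, axis=1)
    # Angle between each point's ray and the occluder's ray.
    cos_ang = np.clip(rel @ occ / (dists * occ_dist + 1e-9), -1.0, 1.0)
    ang_deg = np.degrees(np.arccos(cos_ang))
    # A point is shadowed if it lies inside the occluder's angular cone and
    # is farther from the sensor than the occluder itself.
    shadowed = (ang_deg < half_angle_deg) & (dists > occ_dist)
    visible_points = points[~shadowed]       # what the "first sensor" would see
    hidden_labels = labels[shadowed]         # e.g. object present / absent
    return visible_points, hidden_labels
```

In this sketch the retained `visible_points` play the role of the artificially occluded third visible region, while `hidden_labels` record what was actually behind the simulated occluder, giving the training pipeline a known answer for the occluded region.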
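The final limitations (receiving prediction probabilities that an occluded object will occupy the visible region during a future period of time, then determining and performing an action) suggest a simple thresholded decision rule. The sketch below is one illustrative way to map such probabilities onto a discrete action; the threshold values and action names are assumptions for illustration, not values recited in the claim.

```python
def choose_action(prediction_probabilities, caution_threshold=0.3, stop_threshold=0.7):
    """Map per-horizon probabilities that an occluded object will occupy the
    visible region onto a discrete vehicle action. Thresholds are illustrative
    tuning parameters, not values from the claim."""
    p = max(prediction_probabilities)   # worst case over the future period of time
    if p >= stop_threshold:
        return "yield"                  # an occluded object is very likely to emerge
    if p >= caution_threshold:
        return "slow_down"              # hedge against a possible emergence
    return "proceed"                    # occluded region is likely empty
```

Taking the maximum over the horizon means the vehicle reacts to the most pessimistic moment in the predicted window, which is the conservative choice for a safety-relevant decision.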