CPC B60R 1/003 (2013.01) [B60Q 1/22 (2013.01); F21L 4/005 (2013.01); G06T 7/73 (2017.01); H04N 23/611 (2023.01); H04N 23/80 (2023.01); G06T 2207/30252 (2013.01)] | 16 Claims |
1. A method, comprising:
activating, by at least one processor of a device, a camera to capture first image data exterior to the device;
while the camera is capturing the first image data, based on the activation of the camera, activating, by the at least one processor, a first light source at a first controlled intensity, a first controlled frequency, and a first controlled direction;
while the camera is capturing the first image data, based on the activation of the camera, activating, by the at least one processor, the first light source at a second controlled intensity, a second controlled frequency, and a second controlled direction;
receiving, by the at least one processor, the first image data, the first image data comprising pixels having first color values;
identifying, by the at least one processor, first light generated by the first light source at the first controlled intensity, the first controlled frequency, and the first controlled direction while the camera is capturing the first image data;
identifying, by the at least one processor, second light generated by the first light source at the second controlled intensity, the second controlled frequency, and the second controlled direction while the camera is capturing the first image data;
identifying, by the at least one processor, based on the first image data, third light generated by a second light source different from the first light source;
determining, by the at least one processor, a first response of the camera based on a first product of the first controlled intensity and a first power spectral density of the first light source divided by a first area between the first light source and a first pixel of the first image data, added to a first estimate of an illumination of the first light and the third light;
determining, by the at least one processor, a second response of the camera based on a second product of the second controlled intensity and a second power spectral density of the first light source divided by a second area between the first light source and a second pixel of the first image data, added to a second estimate of the illumination of the second light and the third light;
determining, by the at least one processor, based on the first response of the camera and the second response of the camera, the illumination of the second light and the third light;
determining, by the at least one processor, a third response of the camera based on the second product of the second controlled intensity and the second power spectral density of the first light source divided by the second area between the first light source and the second pixel of the first image data, added to the illumination of the second light and the third light;
converting, by the at least one processor, by applying a spectral transform matrix to the third response of the camera, the first image data to second image data in an illumination-invariant color space;
identifying, by the at least one processor, an object represented by the second image data in the illumination-invariant color space; and
causing actuation, by the at least one processor, of a vehicle based on the object.
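The claim above describes a two-measurement scheme: the controlled contribution of the first light source at each setting (intensity times power spectral density divided by area) is known, so two measured camera responses yield two equations from which the uncontrolled illumination can be estimated; the third response is then reconstructed and mapped through a spectral transform matrix. The sketch below, with entirely made-up numbers and an identity matrix standing in for the spectral transform, is one plausible single-pixel reading of that model, not the patentee's implementation.

```python
# Hedged sketch of the claimed per-pixel model (all values hypothetical):
#   response = (controlled_intensity * psd) / area + uncontrolled_illumination

def controlled_term(intensity, psd, area):
    # Known contribution of the first light source at one controlled setting.
    return [intensity * p / area for p in psd]

def estimate_uncontrolled(r1, r2, k1, k2):
    # Two responses minus their known controlled terms leave two estimates
    # of the uncontrolled illumination; average the residuals.
    return [((a - b) + (c - d)) / 2.0 for a, b, c, d in zip(r1, k1, r2, k2)]

def apply_transform(matrix, vec):
    # Apply a (hypothetical) spectral transform matrix to a camera response.
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

# Hypothetical single-pixel example over three color channels.
psd = [0.9, 1.0, 0.8]                       # assumed power spectral density
k1 = controlled_term(2.0, psd, 4.0)         # first intensity over first area
k2 = controlled_term(3.0, psd, 4.5)         # second intensity over second area
ambient = [0.2, 0.25, 0.3]                  # ground-truth uncontrolled light
r1 = [k + a for k, a in zip(k1, ambient)]   # simulated first response
r2 = [k + a for k, a in zip(k2, ambient)]   # simulated second response

est = estimate_uncontrolled(r1, r2, k1, k2)
r3 = [k + e for k, e in zip(k2, est)]       # reconstructed third response

# Identity matrix stands in for the spectral transform (illustrative only).
M = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
invariant = apply_transform(M, r3)
```

Because both simulated responses share the same uncontrolled term, the averaged residual recovers it exactly here; with real sensor noise, the two residuals would differ and the average acts as a simple estimator.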