CPC H04N 23/80 (2023.01) [H04N 25/57 (2023.01)] | 10 Claims |
1. A data simulation method for an event camera, comprising:
decoding an acquired video to be processed to obtain a video frame sequence;
inputting a target video frame from the video frame sequence into a fully convolutional network (UNet) to extract event camera contrast threshold distribution information corresponding to the target video frame;
based on the event camera contrast threshold distribution information, sampling an event camera contrast threshold for each pixel in the target video frame, to obtain an event camera contrast threshold set;
performing pseudo-parallel event data simulation generation processing on the event camera contrast threshold set and the video frame sequence, to obtain simulated event camera data;
performing generative adversarial learning on the simulated event camera data and pre-acquired event camera shooting data to update the event camera contrast threshold distribution information, wherein a similarity between the updated event camera contrast threshold distribution information and real data of a target domain is greater than a first predetermined threshold;
based on the updated event camera contrast threshold distribution information, the video frame sequence, and a preset noise signal, generating the simulated event camera data, wherein a similarity between the simulated event camera data and the pre-acquired event camera shooting data is greater than a second predetermined threshold.
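The sampling and simulation steps of claim 1 can be sketched in code. This is a minimal toy illustration, not the claimed implementation: the Gaussian parameterization of the threshold distribution, all function names, and all numeric values are assumptions introduced here for clarity.

```python
import numpy as np

def sample_thresholds(mu, sigma, rng):
    # Draw a per-pixel contrast threshold from a per-pixel Gaussian
    # (a hypothetical parameterization of the claimed distribution),
    # clipped so thresholds stay strictly positive.
    return np.clip(rng.normal(mu, sigma), 1e-3, None)

def simulate_events(frames, thresholds):
    # Emit an ON (+1) or OFF (-1) event wherever the log-intensity
    # change since the last event at a pixel exceeds its threshold.
    log_frames = np.log(frames.astype(np.float64) + 1e-6)
    ref = log_frames[0].copy()
    events = []  # (frame_index, y, x, polarity)
    for t in range(1, len(log_frames)):
        diff = log_frames[t] - ref
        on = diff >= thresholds
        off = diff <= -thresholds
        for y, x in zip(*np.nonzero(on)):
            events.append((t, int(y), int(x), +1))
        for y, x in zip(*np.nonzero(off)):
            events.append((t, int(y), int(x), -1))
        # Reset the reference level at pixels that fired (simplified model).
        ref = np.where(on | off, log_frames[t], ref)
    return events

rng = np.random.default_rng(0)
frames = rng.integers(1, 256, size=(5, 4, 4))  # toy video frame sequence
mu = np.full((4, 4), 0.2)      # per-pixel threshold mean (assumed value)
sigma = np.full((4, 4), 0.05)  # per-pixel threshold spread (assumed value)
thresholds = sample_thresholds(mu, sigma, rng)
events = simulate_events(frames, thresholds)
```

Sampling a distinct threshold per pixel, rather than using one global constant, is what lets the later adversarial step tune the simulator toward a particular target-domain camera.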
10. A data simulation device for an event camera, comprising:
a decoding unit, configured to decode an acquired video to be processed, to obtain a video frame sequence;
an input unit, configured to input a target video frame from the video frame sequence into a fully convolutional network (UNet) to extract event camera contrast threshold distribution information corresponding to the target video frame;
a sampling unit, configured to sample, based on the event camera contrast threshold distribution information, an event camera contrast threshold for each pixel in the target video frame, to obtain an event camera contrast threshold set;
a pseudo-parallel event data simulation generation processing unit, configured to perform pseudo-parallel event data simulation generation processing on the event camera contrast threshold set and the video frame sequence, to obtain simulated event camera data;
a generative adversarial learning unit, configured to perform generative adversarial learning on the simulated event camera data and pre-acquired event camera shooting data to update the event camera contrast threshold distribution information, wherein a similarity between the updated event camera contrast threshold distribution information and real data of a target domain is greater than a first predetermined threshold;
a generating unit, configured to generate the simulated event camera data based on the updated event camera contrast threshold distribution information, the video frame sequence, and a preset noise signal, wherein a similarity between the simulated event camera data and the pre-acquired event camera shooting data is greater than a second predetermined threshold.
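The adversarial update performed by the generative adversarial learning unit can be illustrated with a one-dimensional toy sketch. Everything here is a hypothetical stand-in: a scalar Gaussian replaces the UNet-predicted per-pixel distribution, a logistic regressor replaces a real discriminator network, and the "real" threshold statistics, learning rates, and step counts are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Real" contrast thresholds inferred from target-domain event camera
# shooting data (hypothetical numbers for illustration only).
real = rng.normal(0.30, 0.05, size=1000)

# Generator: a learnable Gaussian over contrast thresholds, standing in
# for the predicted distribution; its mean starts deliberately far from
# the target domain.
mu = 0.10
SPREAD = 0.05

# Discriminator: logistic regression on the scalar threshold value.
w, b = 0.0, 0.0

def disc(x, w, b):
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

lr_d, lr_g = 0.2, 0.005
for _ in range(4000):
    fake = rng.normal(mu, SPREAD, size=1000)
    # Discriminator: gradient ascent on E[log D(real)] + E[log(1 - D(fake))].
    pr, pf = disc(real, w, b), disc(fake, w, b)
    w += lr_d * (np.mean((1 - pr) * real) - np.mean(pf * fake))
    b += lr_d * (np.mean(1 - pr) - np.mean(pf))
    # Generator: gradient ascent on E[log D(fake)]; since d fake / d mu = 1,
    # d log D / d mu = (1 - D) * w.
    pf = disc(rng.normal(mu, SPREAD, size=1000), w, b)
    mu += lr_g * np.mean((1 - pf) * w)
```

After training, `mu` has drifted from its initial 0.10 toward the target-domain mean of roughly 0.30, which is the sense in which the updated distribution's similarity to target-domain data exceeds the first predetermined threshold.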