| CPC G06V 10/751 (2022.01) [G06V 10/771 (2022.01)] | 16 Claims |

1. A stripe image processing method for camera optical communication, comprising:
obtaining a stripe image data set, wherein the stripe image data set comprises a stripe image sample, a stripe sequence label corresponding to the stripe image sample, and a stripe-free image label; and
based on the stripe image sample, the stripe sequence label corresponding to the stripe image sample, and the stripe-free image label, training a generative adversarial network to obtain an image reconstruction model and a stripe extraction model, wherein the image reconstruction model is configured to reconstruct a stripe image into a stripe-free image, and the stripe extraction model is configured to extract stripes from stripe images;
wherein the image reconstruction model serves as a generator of the generative adversarial network, and the stripe extraction model serves as a discriminator of the generative adversarial network;
the training of the generative adversarial network based on the stripe image sample, the stripe sequence label corresponding to the stripe image sample, and the stripe-free image label to obtain the image reconstruction model and the stripe extraction model comprises:
based on the stripe image sample, the stripe sequence label corresponding to the stripe image sample, and the stripe-free image label, iteratively calculating a loss value and optimizing model parameters until a training stop condition is met;
the calculating of the loss value and the optimizing of the model parameters comprise:
inputting the stripe image sample and the corresponding stripe sequence label to the image reconstruction model to obtain a reconstructed image in a training stage output by the image reconstruction model;
inputting the stripe image sample to the stripe extraction model to obtain a stripe extraction result of the stripe image sample output by the stripe extraction model, inputting the stripe-free image label to the stripe extraction model to obtain a stripe extraction result of the stripe-free image output by the stripe extraction model, and inputting the reconstructed image in the training stage to the stripe extraction model to obtain a stripe extraction result of the reconstructed image output by the stripe extraction model;
based on the reconstructed image in the training stage, the stripe extraction result of the reconstructed image, and the stripe-free image label, obtaining the loss value corresponding to the image reconstruction model, and based on the stripe extraction result of the stripe image sample, the stripe extraction result of the stripe-free image, the stripe extraction result of the reconstructed image, and the stripe sequence label, obtaining the loss value corresponding to the stripe extraction model; and
based on the loss value corresponding to the image reconstruction model and the loss value corresponding to the stripe extraction model, optimizing the model parameters through a back-propagation method;
based on the reconstructed image in the training stage, the stripe extraction result of the reconstructed image, and the stripe-free image label, the obtaining of the loss value corresponding to the image reconstruction model comprises:
comparing a difference between pixels in the reconstructed image in the training stage and corresponding pixels in the stripe-free image label to obtain a first-type generator loss;
comparing a difference between the stripe extraction result of the reconstructed image and a stripe sequence corresponding to the stripe-free image to obtain a second-type generator loss; and
based on the first-type generator loss and the second-type generator loss, performing a weighted sum to obtain the loss value corresponding to the image reconstruction model;
based on the stripe extraction result of the stripe image sample, the stripe extraction result of the stripe-free image, the stripe extraction result of the reconstructed image, and the stripe sequence label, the obtaining of the loss value corresponding to the stripe extraction model comprises:
comparing a difference between the stripe extraction result of the stripe image sample and the stripe sequence label to obtain a first-type discriminator loss;
comparing a difference between the stripe extraction result of the stripe-free image and the stripe sequence corresponding to the stripe-free image to obtain a second-type discriminator loss;
comparing a difference between the stripe extraction result of the reconstructed image and the stripe sequence label to obtain a third-type discriminator loss; and
based on the first-type discriminator loss, the second-type discriminator loss, and the third-type discriminator loss, performing a weighted sum to obtain the loss value corresponding to the stripe extraction model.
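The weighted-sum loss construction recited above can be sketched in plain Python. This is a minimal illustrative sketch only: the claim does not fix the per-term difference measure or the weights, so mean-squared error and the weight values here are assumptions, and the function names (`generator_loss`, `discriminator_loss`, `mse`) and the flat-list stand-ins for images and stripe sequences are hypothetical, not part of the claimed method.

```python
def mse(a, b):
    """Mean-squared difference between two equal-length flat sequences.
    Stands in for the unspecified 'difference' comparison in the claim."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def generator_loss(recon_img, stripe_free_label,
                   d_recon_seq, stripe_free_seq,
                   w1=1.0, w2=0.5):
    """Loss value corresponding to the image reconstruction model:
    weighted sum of
      (1) pixel difference between the reconstructed image in the
          training stage and the stripe-free image label, and
      (2) difference between the stripe extraction result of the
          reconstructed image and the stripe sequence corresponding
          to the stripe-free image."""
    first_type = mse(recon_img, stripe_free_label)
    second_type = mse(d_recon_seq, stripe_free_seq)
    return w1 * first_type + w2 * second_type

def discriminator_loss(d_sample_seq, stripe_seq_label,
                       d_free_seq, stripe_free_seq,
                       d_recon_seq,
                       w1=1.0, w2=1.0, w3=1.0):
    """Loss value corresponding to the stripe extraction model:
    weighted sum of
      (1) stripe extraction result of the stripe image sample vs.
          the stripe sequence label,
      (2) stripe extraction result of the stripe-free image vs. the
          stripe sequence corresponding to the stripe-free image, and
      (3) stripe extraction result of the reconstructed image vs.
          the stripe sequence label."""
    first_type = mse(d_sample_seq, stripe_seq_label)
    second_type = mse(d_free_seq, stripe_free_seq)
    third_type = mse(d_recon_seq, stripe_seq_label)
    return w1 * first_type + w2 * second_type + w3 * third_type
```

In training, both values would be computed each iteration and model parameters updated by back-propagation, per the claim; the adversarial coupling comes from the stripe extraction model (discriminator) scoring the reconstructed image produced by the image reconstruction model (generator).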