CPC G06T 7/557 (2017.01) [B22F 10/31 (2021.01); B22F 10/80 (2021.01); B33Y 50/00 (2014.12); G06T 7/0004 (2013.01); G06T 7/80 (2017.01); H04N 13/271 (2018.05); H04N 13/282 (2018.05); G06T 2207/10052 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30144 (2013.01)]

7 Claims

1. A method for 3D contour reconstruction of additive manufacturing (AM) parts based on light field imaging, characterized by comprising:
calibrating a light field camera to determine an equivalent calibrated parameter set of the light field camera;
constructing an epipolar-plane-image U-Net (EPI-UNet) framework, using a preset light field dataset to construct a training set, obtaining learning labels from disparity maps corresponding to the preset light field dataset, and training the EPI-UNet framework with the training set and the learning labels to obtain a predicted disparity vector model;
capturing, using the light field camera, light field information of an AM part surface of a target to be tested, and obtaining a two-dimensional disparity map of a scene by inputting the light field information into the predicted disparity vector model;
determining a geometric optical path relationship between disparity and depth based on the equivalent calibrated parameter set of the light field camera to obtain 3D coordinate information of the target to be tested (a disparity-to-depth sketch is given after the claim); and
performing disparity mapping on the 3D coordinate information of the target to be tested to obtain 3D contour information of the target to be tested,
wherein constructing the EPI-UNet framework comprises:
determining the EPI-UNet framework that includes a contour feature extraction sub-network, a local feature extraction sub-network, and a detail feature extraction sub-network that are sequentially connected (a minimal architecture sketch is given after the claim), wherein
the contour feature extraction sub-network consists of a 5×5×32 convolution kernel, a residual module, a 5×5×64 convolution kernel, a residual module, and a 5×5×64 convolution kernel,
the local feature extraction sub-network consists of a 3×3×32 convolution kernel, a residual module, a 3×3×64 convolution kernel, a residual module, and a 3×3×64 convolution kernel,
the detail feature extraction sub-network consists of a 2×2×32 convolution kernel, a 2×2×16 convolution kernel, and a 2×2×1 convolution kernel,
wherein using the preset light field dataset to construct the training set and obtaining the learning labels from the disparity maps corresponding to the preset light field dataset comprises:
extracting a plurality of sub-view images from the preset light field dataset, and stacking the sub-view images to form a four-dimensional light field volume;
shearing the four-dimensional light field volume horizontally and vertically, gray-scaling the sheared four-dimensional light field volume, and applying a contrast-limited adaptive histogram equalization (CLAHE) algorithm to it to obtain a plurality of light field epipolar-plane images (LF-EPIs) (an EPI-construction sketch is given after the claim);
dividing the LF-EPIs in the preset light field dataset into the training set and a testing set according to a preset ratio; and
extracting vectors from a plurality of real disparity maps corresponding to the preset light field dataset as the learning labels,
wherein training the EPI-UNet framework with the training set and the learning labels to obtain the predicted disparity vector model comprises:
inputting the training set into the EPI-UNet framework to obtain a plurality of predicted disparity vectors;
calculating a difference between the predicted disparity vectors and the learning labels using a preset loss function;
performing backpropagation on the difference and, after completing one training cycle, inputting the testing set into the trained EPI-UNet framework for accuracy testing; and
iteratively adjusting hyperparameters of the EPI-UNet framework and repeating the training until a value of the preset loss function is less than a loss threshold or a number of training iterations reaches a training iteration threshold, then stopping the training and outputting the predicted disparity vector model (a minimal training-loop sketch is given after the claim).
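The claim recites only kernel shapes and output-channel counts for the three sub-networks. Below is a minimal PyTorch sketch of that three-stage architecture; the internals of the "residual module" (here, two 3×3 convolutions with a skip connection), the ReLU activations, the padding choices, and the single-channel EPI input are all assumptions not recited in the claim.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual module: two 3x3 convolutions with a skip connection
    (an assumption -- the claim only names a 'residual module')."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class EPIUNet(nn.Module):
    """Three sequentially connected sub-networks with the kernel sizes
    and output-channel counts recited in the claim (k x k x C)."""
    def __init__(self, in_channels=1):
        super().__init__()
        # Contour feature extraction: 5x5x32, residual, 5x5x64, residual, 5x5x64.
        self.contour = nn.Sequential(
            nn.Conv2d(in_channels, 32, 5, padding=2), ResidualBlock(32),
            nn.Conv2d(32, 64, 5, padding=2), ResidualBlock(64),
            nn.Conv2d(64, 64, 5, padding=2),
        )
        # Local feature extraction: 3x3x32, residual, 3x3x64, residual, 3x3x64.
        self.local = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), ResidualBlock(32),
            nn.Conv2d(32, 64, 3, padding=1), ResidualBlock(64),
            nn.Conv2d(64, 64, 3, padding=1),
        )
        # Detail feature extraction: 2x2x32, 2x2x16, 2x2x1. Unpadded 2x2
        # convolutions each shrink H and W by one pixel; the claim does
        # not recite padding.
        self.detail = nn.Sequential(
            nn.Conv2d(64, 32, 2),
            nn.Conv2d(32, 16, 2),
            nn.Conv2d(16, 1, 2),
        )

    def forward(self, epi):
        return self.detail(self.local(self.contour(epi)))
```

For a 9-view light field, a horizontal EPI is a 9 × W strip, so `EPIUNet()(torch.randn(1, 1, 9, 512))` yields a `(1, 1, 6, 509)` map from which a predicted disparity vector can be read out.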
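A minimal NumPy/OpenCV sketch of the LF-EPI construction step: the sub-view images are stacked into a four-dimensional volume, sliced ("sheared") horizontally and vertically into epipolar-plane images, gray-scaled, and passed through OpenCV's CLAHE. The `(U, V, H, W)` axis convention, the use of the central view row/column, and the CLAHE settings are illustrative assumptions.

```python
import numpy as np
import cv2

def build_lf_epis(sub_views, clip_limit=2.0, tile=(8, 8)):
    """sub_views: (U, V, H, W, 3) uint8 stack of RGB sub-aperture images,
    i.e. the four-dimensional light field volume. Returns stacks of
    horizontal and vertical LF-EPIs after gray-scaling and CLAHE."""
    U, V, H, W, _ = sub_views.shape
    # Gray-scale every sub-view of the 4-D light field volume.
    gray = np.array([[cv2.cvtColor(sub_views[u, v], cv2.COLOR_RGB2GRAY)
                      for v in range(V)] for u in range(U)])   # (U, V, H, W)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    u_c, v_c = U // 2, V // 2
    # Horizontal shear: fix image row y and the vertical view index v,
    # sweep the horizontal view index u -> one (U, W) EPI per image row.
    h_epis = [clahe.apply(np.ascontiguousarray(gray[:, v_c, y, :]))
              for y in range(H)]
    # Vertical shear: fix image column x and the horizontal view index u.
    v_epis = [clahe.apply(np.ascontiguousarray(gray[u_c, :, :, x]))
              for x in range(W)]
    return np.stack(h_epis), np.stack(v_epis)
```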
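The training procedure in the claim maps onto a standard supervised loop: backpropagate a preset loss, test on the held-out set after each training cycle, and stop on a loss or iteration threshold. In this sketch the "preset loss function" is taken to be L1 and the Adam optimizer, learning rate, and batch size are assumptions not recited in the claim; the hyperparameter adjustment between runs is left to the caller.

```python
import torch
from torch.utils.data import DataLoader

def train_epi_unet(model, train_set, test_set, loss_threshold=1e-4,
                   max_iters=100_000, lr=1e-3, device="cpu"):
    """Train the EPI-UNet until the loss or iteration threshold is met,
    then return the predicted disparity vector model."""
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.L1Loss()          # "preset loss function" (assumed L1)
    loader = DataLoader(train_set, batch_size=16, shuffle=True)
    it = 0
    while it < max_iters:
        for epi, label in loader:
            epi, label = epi.to(device), label.to(device)
            loss = criterion(model(epi), label)  # predicted vectors vs. labels
            opt.zero_grad()
            loss.backward()                # backpropagation on the difference
            opt.step()
            it += 1
            if loss.item() < loss_threshold or it >= max_iters:
                return model
        # Accuracy testing on the testing set after each training cycle.
        with torch.no_grad():
            test_err = sum(criterion(model(x.to(device)), y.to(device)).item()
                           for x, y in DataLoader(test_set, batch_size=16))
        print(f"iter {it}: held-out L1 = {test_err:.4f}")
    return model
```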
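Finally, the disparity-to-depth step: given an equivalent calibrated parameter set (effective focal length, sub-aperture baseline, principal point), the standard triangulation relation Z = f·B/d converts the two-dimensional disparity map into 3D coordinate information. The pinhole-style model and the parameter names below are assumptions; the patent's calibrated optical-path model may differ.

```python
import numpy as np

def disparity_to_depth(disparity, f_eff, baseline, cx, cy, eps=1e-9):
    """Back-project a 2-D disparity map to (H, W, 3) 3-D coordinates using
    Z = f_eff * baseline / d and the equivalent calibrated intrinsics."""
    h, w = disparity.shape
    # Triangulated depth; eps guards against zero disparity.
    Z = f_eff * baseline / np.maximum(np.abs(disparity), eps)
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))   # pixel grid
    X = (xs - cx) * Z / f_eff   # back-project through the principal point
    Y = (ys - cy) * Z / f_eff
    return np.stack([X, Y, Z], axis=-1)
```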