CPC G06V 10/7715 (2022.01) [G06T 7/001 (2013.01); G06V 10/56 (2022.01); G06V 10/751 (2022.01); G06T 2207/10024 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/30108 (2013.01); G06T 2207/30168 (2013.01); G06V 10/20 (2022.01); G06V 2201/06 (2022.01)] | 7 Claims |
1. A deep learning-based quality inspection method applicable to an injection process, comprising:
step 1 of receiving, by an input unit, a non-defective manufactured product image data set;
step 2 of extracting, by a controller, for each of a plurality of images included in the image data set, at least one attribute among an objectness value in the image, a brightness value, a contrast value, and an object angle value in the image;
step 3 of performing, by the controller, statistical analysis including at least one of an average and a standard deviation on the extracted attributes, and calculating a quality score for each of the plurality of images from a result of the statistical analysis;
step 4 of determining, by the controller, an image having a quality score higher than a predetermined quality score among the plurality of images based on the calculated quality scores;
step 5 of preprocessing, by the controller, the determined quality image by applying at least one of resizing and padding processes for feature extraction;
step 6 of extracting, by the controller, non-defective manufactured product features that are criteria for a non-defective manufactured product from the preprocessed quality image;
step 7 of generating, by the controller, a plurality of fake defective manufactured product features;
step 8 of performing, by the controller, learning based on at least some of the determined quality image, the extracted non-defective manufactured product features, and the plurality of fake defective manufactured product features;
step 9 of receiving, by the input unit, an actual image;
step 10 of preprocessing, by the controller, the actual image by applying at least one of resizing and padding processes for the feature extraction;
step 11 of extracting, by the controller, actual features from the preprocessed actual image;
step 12 of determining, by the controller, whether the extracted actual features correspond to the non-defective manufactured product or a defective manufactured product based on the contents learned in step 8;
step 13 of deriving, by the controller, at least one of shape, length, width, diameter, radius, and circumference information of a defective area on the actual image, color information related to the defective area, and number information of the defective area, if the actual features correspond to the defective manufactured product; and
step 14 of determining, by the controller, a first defective type to be matched based on the information derived in step 13 among predetermined defective types,
wherein in step 2, the objectness value in the image is a score obtained by comparing corresponding pixels while sliding a template image from the upper left end to the lower right end of each of the plurality of input images, and the controller determines objectness based on a predetermined threshold,
in step 2,
the image brightness value is calculated as an average value of all pixels of each of the plurality of images,
the image contrast value is calculated as a difference between maximum and minimum values of all pixels in each of the plurality of images, and
the object angle value in the image is calculated by extracting coordinates through detection of an outline of the object and then using a width and a length of the outline.
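The attribute computations recited in the wherein clauses of step 2 can be sketched as follows. This is a minimal pure-NumPy illustration, not the claimed implementation: the 0-to-1 similarity normalisation in `objectness`, the bounding-extent reading of "width and length of the outline", and all function names are assumptions.

```python
import numpy as np

def brightness(img):
    # Image brightness value: average of all pixel values (step 2).
    return float(img.mean())

def contrast(img):
    # Image contrast value: maximum pixel value minus minimum pixel value.
    return float(img.max()) - float(img.min())

def objectness(img, template, threshold=0.9):
    # Objectness: slide the template from the upper-left end to the
    # lower-right end of the image, comparing corresponding pixels at each
    # position; the best similarity score is checked against a
    # predetermined threshold (the 0..1 normalisation is illustrative).
    th, tw = template.shape
    H, W = img.shape
    best = 0.0
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            patch = img[y:y + th, x:x + tw].astype(float)
            diff = np.abs(patch - template.astype(float)).mean()
            best = max(best, 1.0 - diff / 255.0)
    return best, best >= threshold

def object_angle(mask):
    # Object angle value: take the coordinates of the detected outline and
    # derive an angle from the width and length of its extent (one
    # plausible reading of the claim).
    ys, xs = np.nonzero(mask)
    width = xs.max() - xs.min() + 1
    length = ys.max() - ys.min() + 1
    return float(np.degrees(np.arctan2(length, width)))
```

A production system would typically replace the nested loops with a vectorised template-matching routine; the loops here only make the "compare corresponding pixels while sliding" wording explicit.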
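One way to realise the statistical quality scoring of steps 3 and 4 is a z-score over the per-image attributes: images whose attributes sit close to the data set's average score high, outliers score low. The z-score formulation, the sign convention, and the function names are illustrative assumptions, not the claimed method.

```python
import numpy as np

def quality_scores(attrs):
    # attrs: (n_images, n_attributes) matrix of values from step 2.
    a = np.asarray(attrs, dtype=float)
    mean = a.mean(axis=0)           # per-attribute average (step 3)
    std = a.std(axis=0) + 1e-8      # per-attribute standard deviation
    z = np.abs((a - mean) / std)    # deviation from the typical image
    return -z.mean(axis=1)          # higher score = closer to typical

def select_quality_images(images, attrs, min_score):
    # Step 4: keep images whose score exceeds the predetermined threshold.
    scores = quality_scores(attrs)
    return [im for im, s in zip(images, scores) if s > min_score]
```

Because the score is a negative mean absolute z-score, the "predetermined quality score" threshold would be chosen on the same scale (e.g. `-1.0` keeps images within about one standard deviation on average).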
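The resizing-and-padding preprocessing of steps 5 and 10 can be sketched as below. The square target size, nearest-neighbour resampling (chosen only to stay dependency-free), and zero padding on the right/bottom are illustrative assumptions.

```python
import numpy as np

def preprocess(img, target=224):
    # Resize so the longer side equals `target` (nearest-neighbour), then
    # zero-pad the shorter side to a square for feature extraction
    # (steps 5 and 10).
    h, w = img.shape
    scale = target / max(h, w)
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    out = np.zeros((target, target), dtype=img.dtype)
    out[:nh, :nw] = resized  # padding fills the remainder with zeros
    return out
```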
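Step 7's "fake defective manufactured product features" can be synthesised in several ways; a generative model is one option, but the simplest sketch is perturbing non-defective feature vectors with noise so the classifier in step 8 sees negative examples. The Gaussian-perturbation scheme and every parameter name below are assumptions, not the claimed generator.

```python
import numpy as np

def fake_defective_features(good_feats, n_fake, noise_scale=0.5, seed=0):
    # Sample non-defective feature vectors and push them away from the
    # non-defective distribution with Gaussian noise (illustrative only).
    rng = np.random.default_rng(seed)
    base = np.asarray(good_feats, dtype=float)
    idx = rng.integers(0, len(base), size=n_fake)
    return base[idx] + rng.normal(0.0, noise_scale, size=(n_fake, base.shape[1]))
```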
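The defect-area measurements of step 13 (size, colour, and number of defective regions) can be sketched with connected-component labelling over a defect mask. The 4-connectivity flood fill, the bounding-extent width/length, and the dictionary layout are illustrative assumptions.

```python
import numpy as np

def defect_info(defect_mask, image):
    # Label connected defective regions (4-connectivity), then derive each
    # region's width/length, area, and colour statistics (step 13).
    labels = np.zeros(defect_mask.shape, dtype=int)
    count = 0
    H, W = defect_mask.shape
    for sy in range(H):
        for sx in range(W):
            if defect_mask[sy, sx] and labels[sy, sx] == 0:
                count += 1
                stack = [(sy, sx)]
                labels[sy, sx] = count
                while stack:  # flood fill one region
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < H and 0 <= nx < W and \
                           defect_mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            stack.append((ny, nx))
    regions = []
    for lab in range(1, count + 1):
        ys, xs = np.nonzero(labels == lab)
        regions.append({
            "width": int(xs.max() - xs.min() + 1),
            "length": int(ys.max() - ys.min() + 1),
            "area": int(ys.size),
            "mean_color": float(image[ys, xs].mean()),  # colour information
        })
    return {"count": count, "regions": regions}  # number information
```

The per-region width, length, area, colour, and count returned here are the kind of derived information step 14 would match against the predetermined defective types.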