US 12,307,653 B1
Surface defect detection method, system, equipment, and terminal thereof
Kansong Chen, Wuhan (CN); Zhihao Xi, Wuhan (CN); Zuyang Liu, Wuhan (CN); and Yanyu Chen, Wuhan (CN)
Assigned to Hubei University, Wuhan (CN)
Filed by Hubei University, Wuhan (CN)
Filed on Dec. 2, 2024, as Appl. No. 18/965,023.
Claims priority of application No. 202410042044.4 (CN), filed on Jan. 11, 2024.
Int. Cl. G06T 7/00 (2017.01); G06V 10/50 (2022.01); G06V 10/54 (2022.01); G06V 10/75 (2022.01)
CPC G06T 7/001 (2013.01) [G06V 10/507 (2022.01); G06V 10/54 (2022.01); G06V 10/751 (2022.01); G06T 2207/20016 (2013.01)] 5 Claims
OG exemplary drawing
 
1. A surface defect detection method, comprising:
S1, application of an LBP operator: before comparing a benchmark image with an actual shot image, pre-processing both the benchmark image and the actual shot image using an LBP (Local Binary Pattern) texture feature extraction algorithm; and
S2, SIFT feature-point matching: applying a SIFT feature-point matching algorithm, developing a gradient direction formula, calculating key feature points in the image, and matching spatial information of the feature points; and
S3, defect detection: when local feature points of the actual shot image are successfully matched with an image in a benchmark image library, comparing the two images in detail according to a multi-color fusion comparison method, so as to find difference points and complete the defect detection; and
the step S2 specifically comprising:
S201, constructing a Gaussian pyramid; and
S202, establishing a DOG (difference-of-Gaussian) pyramid; and
S203, accurately positioning the key points; and
S204, allocating main directions to the key points; and
S205, describing features of the key points; and
S206, completing feature-point matching by matching the key points using an original SIFT algorithm; and
the step S204 specifically comprising: a direction of a key point is actually a gradient direction of a local region of the image; for the key points detected in the DOG difference pyramid, gradient and direction distribution features of pixels within a 3σ neighborhood window of the Gaussian pyramid images are collected; firstly, calculating an image scale space L(x,y,σ):

L(x,y,σ) = G(x,y,σ) * I(x,y)

wherein x and y represent the pixel coordinates, σ represents the scale of the pixels, G(x,y,σ) represents a Gaussian function, and I(x,y) represents an input image;
then calculating an amplitude and direction of the gradient:
the amplitude of the gradient:

m(x,y) = sqrt[(L(x+1,y) - L(x-1,y))^2 + (L(x,y+1) - L(x,y-1))^2]

the direction of the gradient:

θ(x,y) = arctan[(L(x,y+1) - L(x,y-1)) / (L(x+1,y) - L(x-1,y))]
after calculating all gradient directions in the neighborhood of a key point, the gradient direction with the peak value is taken as the main direction, and any gradient direction whose value reaches 80% of the peak value is taken as an auxiliary direction;
recalculating the main direction and the amplitude for key points with multiple gradient directions, wherein the main direction and the auxiliary directions are weighted to calculate a new main direction; as the multiple gradient directions are ultimately merged, each feature point has only one gradient direction, which significantly improves the probability of successful feature-point matching.
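The LBP pre-processing of step S1 can be illustrated in code. The sketch below is a minimal, assumed implementation of the basic 8-neighbour LBP operator, not the claimed implementation; the function name `lbp_image` and the neighbour ordering are illustrative:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP: each neighbour >= the centre pixel sets one
    bit, so every interior pixel maps to a texture code in [0, 255]."""
    g = np.asarray(gray, dtype=np.int32)
    h, w = g.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Offsets of the 8 neighbours, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = g[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes
```

Running `lbp_image` on both the benchmark image and the actual shot image before comparison, as step S1 describes, replaces raw intensities with texture codes that are less sensitive to uniform illumination changes.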
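Steps S201 and S202 construct a Gaussian pyramid and its DOG (difference-of-Gaussian) pyramid. The following is a minimal NumPy-only sketch under assumed parameters; `sigma0`, the octave count, and the per-octave scale count are illustrative defaults, not values from the patent:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian smoothing with a truncated kernel (edge padding)."""
    radius = max(1, int(3 * sigma + 0.5))
    x = np.arange(-radius, radius + 1)
    kern = np.exp(-x**2 / (2.0 * sigma**2))
    kern /= kern.sum()
    pad = np.pad(img, radius, mode='edge')
    # Convolve rows, then columns; 'valid' restores the original shape.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kern, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode='valid'), 0, tmp)

def dog_pyramid(image, n_octaves=3, scales_per_octave=3, sigma0=1.6):
    """S201-S202: per octave, blur at geometrically spaced scales, then
    subtract adjacent Gaussian layers to obtain the DoG layers."""
    k = 2.0 ** (1.0 / scales_per_octave)
    octaves = []
    base = np.asarray(image, dtype=np.float64)
    for _ in range(n_octaves):
        gaussians = [gaussian_blur(base, sigma0 * k**i)
                     for i in range(scales_per_octave + 3)]
        dogs = [b - a for a, b in zip(gaussians, gaussians[1:])]
        octaves.append(dogs)
        base = gaussians[scales_per_octave][::2, ::2]  # halve resolution per octave
    return octaves
```

Extrema of the DoG layers across space and scale are the candidate key points that step S203 then positions accurately.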
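The gradient amplitude and direction of step S204, together with the 80%-of-peak rule for main and auxiliary directions, can be sketched as follows. This is a simplified illustration: it histograms an entire patch rather than a Gaussian-weighted 3σ window, and all function names are assumptions:

```python
import numpy as np

def gradient_mag_dir(L):
    """Central-difference gradient amplitude m(x,y) and direction θ(x,y),
    as in step S204, for the interior pixels of a smoothed image L."""
    dx = L[1:-1, 2:] - L[1:-1, :-2]   # L(x+1,y) - L(x-1,y)
    dy = L[2:, 1:-1] - L[:-2, 1:-1]   # L(x,y+1) - L(x,y-1)
    m = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)        # radians in (-pi, pi]
    return m, theta

def keypoint_orientations(m, theta, n_bins=36):
    """Magnitude-weighted direction histogram: the peak bin is the main
    direction; bins reaching 80% of the peak are auxiliary directions."""
    bins = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=m.ravel(), minlength=n_bins)
    peak = hist.max()
    main = int(hist.argmax())
    aux = [i for i in range(n_bins) if i != main and hist[i] >= 0.8 * peak]
    return main, aux, hist
```

Merging the main and auxiliary directions into a single weighted direction, as the claim describes, leaves each feature point with one gradient direction before matching.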