Article

Real-Time Defect Detection for Metal Components: A Fusion of Enhanced Canny–Devernay and YOLOv6 Algorithms

1 Guangdong Laboratory for Lingnan Modern Agriculture, College of Engineering, South China Agricultural University, Guangzhou 510642, China
2 Guangdong Zhengzheng Construction Engineering Testing Co., Ltd., Guangzhou 510635, China
3 College of Urban and Rural Construction, Zhongkai University of Agriculture and Engineering, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(12), 6898; https://doi.org/10.3390/app13126898
Submission received: 6 May 2023 / Revised: 5 June 2023 / Accepted: 5 June 2023 / Published: 7 June 2023
(This article belongs to the Special Issue Application of Machine Vision and Deep Learning Technology)

Abstract
Due to the presence of numerous surface defects, the inadequate contrast between defective and non-defective regions, and the resemblance between noise and subtle defects, edge detection poses a significant challenge in dimensional error detection, leading to increased dimensional measurement inaccuracies. These issues are major bottlenecks in the automatic detection of high-precision metal parts. To address these challenges, this research proposes a combined approach that applies the YOLOv6 deep learning network to metal lock body parts for the rapid and accurate detection of surface flaws in metal workpieces. Additionally, an enhanced Canny–Devernay sub-pixel edge detection algorithm is employed to determine the size of the lock core bead hole. The methodology is as follows: the dataset for surface defect detection is annotated using the labeling software labelImg and subsequently used to train the YOLOv6 model and obtain the model weights. For size measurement, the region of interest (ROI) corresponding to the lock cylinder bead hole is first extracted. Gaussian filtering is then applied to the ROI, followed by sub-pixel edge detection using the improved Canny–Devernay algorithm. Finally, the edges are fitted using the least squares method to determine the radius of the fitted circle, and the measured value is obtained through size conversion. In the experiments, the YOLOv6 method identifies surface defects in the lock body workpiece with a mean Average Precision (mAP) of 0.911. Furthermore, the size of the lock core bead hole measured with the improved Canny–Devernay sub-pixel edge detection technique yields an average error of less than 0.03 mm. The findings of this research demonstrate a practical method for applying machine vision to the automatic detection of metal parts.
This achievement is accomplished through the exploration of identification methods and size-measuring techniques for common defects found in metal parts. Consequently, the study establishes a valuable framework for effectively utilizing machine vision in the field of metal parts inspection and defect detection.

1. Introduction

In recent years, the limitations of manual detection, including low accuracy, poor real-time performance, and high labor intensity, have been addressed through the development of automatic detection methods based on machine vision. This technology has emerged as a fast and reliable alternative to manual detection for numerous surface defects, offering advantages such as high automation, dependability, and reduced subjectivity. Machine vision-based inspection has proven to be highly adaptable to different environmental conditions and can operate continuously with high levels of accuracy and efficiency. By implementing machine vision defect detection technology, businesses can benefit from reduced production costs, improved productivity, enhanced product quality, and pave the way for the intelligent transformation of the industrial sector [1].
Industrial defect detection has long been one of the most significant research topics in the field of industrial vision. Machine vision algorithms offer numerous techniques for identifying defects, which can be broadly categorized into two groups: conventional techniques and deep learning techniques. Chetverikov et al. [2] dealt with texture defects based on two fundamental structural qualities, regularity and local orientation (anisotropy), and successfully identified mutant defects on textile surfaces. According to research by Hou et al. [3], the accurate identification and segmentation of texture surface defects can be achieved through a method utilizing the support vector machine classification technique based on Gabor wavelet features. A VMD mode number optimization method based on the maximum envelope kurtosis was proposed by Zheng et al. [4] and has strong generalizability and anti-noise performance. A structure visual detection method based on the Faster region convolutional neural network (Faster R-CNN) [5] was proposed by Cha et al. [6], which enabled the simultaneous, quasi-real-time detection of concrete cracks, medium and high-level steel corrosion, bolt corrosion, and five different types of steel delamination damage. With a resolution of 500 × 375, this method offers a relatively fast detection speed of 0.03 s per image. Tao et al. [7] proposed a method for the automatic detection of insulator defects using aerial photographs; their study focused on achieving precise localization of insulator defects in real inspection environments. To detect strip surface defects, Li et al. [8] developed a fully convolutional YOLO network based on the YOLO (You Only Look Once) principle, which supports the instantaneous discovery of strip surface defects. Wu et al. [9] proposed a unified, end-to-end learning framework for identifying defects in industrial goods or flat surfaces. A display line defect detection method based on the fusion of color features was proposed by Xie et al. [10]; this technique has a detection accuracy of over 90% for multi-background offline defects, which is superior to other widely used algorithms for this task. Gao et al. [11] introduced a hierarchical training-CNN approach with feature alignment, which demonstrated improved performance and has been successfully applied in a real-world case, resulting in significant enhancements. Li et al. [12] presented a novel automatic defect detection solution called YOLO-attention, based on YOLOv4 and deep learning techniques; this approach achieves both fast and accurate defect detection for the Wire Arc Additive Manufacturing (WAAM) process. Yoon et al. [13] proposed a real-time non-destructive evaluation technique that utilizes highly nonlinear solitary waves (HNSWs) and deep learning to detect defects in laminated composites.
Dimensional error detection plays a critical role in ensuring the quality of certified goods during the production and processing of locks. While surface flaws are commonly addressed in defect identification, it is important to consider dimensions as well. The size of the lock core bead hole, for instance, is a crucial dimension that directly impacts the lock’s quality, and manual measurement of this dimension is labor-intensive and time-consuming. As vision technology continues to advance, machine vision measurement is becoming increasingly prevalent in industrial applications due to its high accuracy, speed, and automation capabilities. A method for online binocular stereo vision detection of rubber extrusions was proposed by Liguori et al. [14], with the measurement uncertainty kept to a maximum of 0.1 mm. In the study conducted by Valkenburg et al. [15], a temporally encoded structured light system was explored as a means to achieve precise 3D measurements; this method offers an accuracy of approximately 0.3 mm, providing reliable and detailed dimensional information. Aguilar et al. [16] developed a new automatic measurement system with a measurement range of 1500 × 200 × 4000 mm that limits the uncertainty to within 1 mm. To achieve high-precision measurement of the dimensions of massive vehicle brake pads, Xiang et al. [17] proposed a measurement method based on a dual-camera machine vision system and a relative measurement principle. Li et al. [18] proposed a method for performing coarse edge detection of workpiece images using the Holistic Nested Edge Detection (HED) model; its effectiveness was validated using a shaft workpiece as an example, demonstrating its applicability in practical scenarios. A machine vision-based online measurement method for the dimensional accuracy of cone-spinning workpieces was proposed by Xiao et al. [19], which can obtain a clean profile for a conical spinning workpiece. A highly accurate, well-stabilized machine vision-based straightness measurement system for 1500 mm long seamless steel pipes was developed by Lu et al. [20]. To enhance the image quality of thermal parts, Jia et al. [21] introduced a spectral selection method that significantly enhances the hot spot appearance. Gao et al. [22] presented a fast and high-precision visual inspection method aimed at the rapid non-contact measurement of a revolving workpiece’s radial and axial dimensions. Yan et al. [23] introduced a non-contact strip deviation online measurement method that relies on image detection and machine vision technology; this method adapts to varying strip heights, allowing for easy modification, and obtains accurate measurements of strip deviation without physical contact with the strip. For high-mix, variable-volume production systems, T.K. et al. [24] proposed a method for transferring the workpiece position and orientation across machine tools. A visual measurement system for determining ring size was established by Xiong et al. [25]. Lang et al. [26] proposed a unique monocular vision measurement method for non-horizontal targets to overcome the difficulty of multi-equipment measurement methods and the limitation of monocular measurement methods to horizontal targets. Using machine vision, Zhang et al. [27] suggested a technique for calculating the straightness, roundness, and cylindricity of a workpiece, offering a workable industrial remedy for determining workpiece shape deviation. Luo et al. [28] introduced a contour segment combination algorithm based on a clustering search approach and rotation direction determination; this algorithm enables a proper reorganization of segmented contour segments, accurately calculating and measuring the number and size of berries. Tang et al. [29] proposed a new crack trunk thinning algorithm and width measurement scheme for reservoir dam cracks, which simplifies the redundant data in crack images and improves the efficiency of crack shape estimation.
The detection of surface flaws and dimensions is a critical requirement in the production process of lock bodies to ensure their quality. Manual inspection methods are no longer sufficient due to the high labor intensity involved. To address this issue, this paper proposes the utilization of the YOLOv6 algorithm, leveraging the advantages of deep learning in defect detection, for surface defect identification in lock bodies. Furthermore, a novel automatic measurement method based on the modified Canny–Devernay sub-pixel edge detection algorithm is presented for measuring the lock cylinder bead hole, which has a small size of approximately 3 mm [30]. The contributions of this work can be summarized as follows:
(1)
A conceptual model of an automated detection production line was established for the detection of metal workpieces such as lock bodies.
(2)
To achieve a fast and accurate detection of sub-pixel edge points, the Canny algorithm and the Devernay algorithm are seamlessly combined in the calculation process.
(3)
In order to enhance the accuracy and stability of Devernay’s method in determining sub-pixel edge points, several special cases are handled explicitly, boosting its overall performance.
(4)
This paper explores the identification method and size measurement method of common defects in metal parts, presenting a practical approach for the application of machine vision in the field of the automatic detection of metal parts.
The paper is structured as follows: Section 2 provides an introduction to the experimental environment and dataset used in this study. Section 3 presents the network structure of the YOLOv6 algorithm and the improved Devernay algorithm. The analysis of the experiments and the corresponding results are presented in Section 4. Lastly, Section 5 summarizes the conclusions drawn from this paper and outlines potential areas for future improvements.

2. Experimental Environment and Dataset

The same underlying factor that leads to defects in regular metal workpieces is also responsible for defects in metal lock bodies. In order to categorize surface defects on metal lock bodies, the following categories are used: bad stuff, freckles, scratches, poor contraction, bad cover, bump damage, and poor cut. These intricate surface defects require labor-intensive and time-consuming manual detection methods. One critical parameter in the lock cylinder workpiece is the size of the bead hole, which necessitates the installation of marbles. Manual measurements using Vernier calipers are typically employed, but they are not suitable for every workpiece, and they involve high labor costs and subjective judgments. Hence, the algorithm proposed in this paper is applied to an automated production line, as depicted in Figure 1, presenting its conceptual model. This production process enables rapid and accurate measurement of the lock cylinder’s bead hole size and precise identification of surface defects in the lock body.

2.1. Experiment Environment

For surface defect detection, the experimental environment of this paper is based on the deep learning framework PyTorch 1.8.1, Python 3.7, and the Windows 10 operating system. We used an Intel(R) Core(TM) i7-10700 CPU @ 2.90 GHz processor, 16 GB of memory, and an NVIDIA GeForce RTX 2060 graphics card, with CUDA 10.2 and cuDNN 7.6.5.32 to accelerate GPU processing. The specific configuration is shown in Table 1.
In the experimental setup of this paper for dimensional measurement, a ring LED light source with an outer diameter of 70 mm and an inner diameter of 40 mm was utilized. The size of the bead hole in the lock cylinder was measured using the MV-CE050-30UC industrial camera from Hikvision, specifically the model number MVL-HF1228M-6MP, which was equipped with a 12 mm fixed focus lens. The specifications for the light source, camera, and lens are provided in Table 2. For photography purposes, the lens’s closest focusing distance of 60 mm was employed. The configuration of the measurement environment is illustrated in Figure 2.

2.2. Datasets

In the experiment conducted for defect detection on the surface of the lock body and lock cylinder, this paper classifies the defects into seven categories: bad stuff, freckles, scratches, poor contraction, bad cover, bump damage, and poor cut. In this experiment, 40 lock bodies and 6 defective lock cores were used as samples, and a total of 1780 photos were taken under varying light intensities, shooting angles, and other conditions. Figure 3a,b display images captured from different angles and combinations of various defect categories. The experimental setup described in Section 2.1 was employed to gather a total of 64 images for the lock cylinder bead hole size measurement experiment, as shown in Figure 3c. The exposure time for capturing these images was set to 10,000 ms. The numbers of lock body surface samples and lock cylinder samples are shown in Table 3.

3. Materials and Methods

3.1. Detection Method of Lock Body Defect

Target detection has emerged as a fundamental technology in the field of computer vision, finding widespread application in various industries [31,32,33,34,35,36,37,38,39,40]. Among the different algorithms available, the YOLO (You Only Look Once) series has gained prominence due to its exceptional performance and has become the preferred framework for numerous industrial applications [41,42,43,44,45]. However, many existing algorithms struggle to meet the accuracy and speed requirements of real-world industrial inspections. In the context of lock cylinder and lock body inspection, the wide variation in shape, size, and types of defects necessitates a deep learning algorithm that possesses robust performance, high detection speed, and accuracy. YOLOv6, developed by the Meituan Visual Intelligence Department in June 2022, represents a new generation of target detection networks. This framework incorporates several algorithmic-level changes and optimizations, including network structure and training strategies. YOLOv6 supports the complete industrial application chain, encompassing model training, inference, and deployment across multiple platforms. When evaluated on the COCO dataset, YOLOv6 demonstrates both speed and accuracy, surpassing other algorithms in performance. Figure 4 provides an overview of the general framework of YOLOv6.

3.2. Size Measurement Method

No matter what material a workpiece is made of, its shape and size are among the most important factors affecting its mechanical properties, and locks are no exception [46,47,48,49,50]. Therefore, a high-precision method for measuring the lock cylinder’s bead hole is proposed in this paper. The accurate acquisition of edge information for the size measurement of industrial parts is often hindered by interference regions on the workpiece surface, such as lines, noise, reflections, and other factors, which contribute to increased inaccuracies in size measurement. To address this issue, the size measurement method proposed in this paper relies on the Canny–Devernay sub-pixel edge detection algorithm. Figure 5 illustrates the main flow of the size measurement method, outlining the key steps involved. Additionally, Figure 6 presents a flowchart that specifically outlines the edge detection process utilized in the method.

3.2.1. Region of Interest Acquisition

In Figure 6a, an image measuring 2592 × 1944 pixels is presented, demonstrating uneven light reflection: the middle of the image appears brighter while the two sides are darker, which can be attributed to the structural properties of the lock cylinder. Since the bead hole of the lock cylinder serves as the measurement target, it is crucial to extract the region of interest (ROI) corresponding to the bead hole. To locate the bead hole, this paper employs the RANSAC (Random Sample Consensus) method to detect the circular shape of the lock cylinder’s bead hole; by eliminating the complex background, the effectiveness of image processing can be enhanced. A rectangular area of 270 × 270 pixels is used to extract the ROI following circle detection. Figure 6b illustrates the resulting image of the lock cylinder’s bead hole after ROI processing; the extracted bead hole area has effectively eliminated unnecessary background information.
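As a rough illustration, the ROI-cropping step can be sketched in Python with NumPy. The circle centre is assumed to come from an upstream detector (e.g., RANSAC circle fitting, not shown), and `extract_roi`, `ROI_SIZE`, and the border clamping are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

ROI_SIZE = 270  # square ROI side length in pixels, as used in the paper

def extract_roi(image, center, size=ROI_SIZE):
    """Crop a size x size window around a detected bead-hole centre.

    `center` = (cx, cy) is assumed to be produced by a circle detector
    such as RANSAC circle fitting (upstream step, not shown here).
    The window is clamped so that it never leaves the image bounds.
    """
    h, w = image.shape[:2]
    cx, cy = center
    half = size // 2
    x0 = int(np.clip(cx - half, 0, max(w - size, 0)))
    y0 = int(np.clip(cy - half, 0, max(h - size, 0)))
    return image[y0:y0 + size, x0:x0 + size]

# synthetic 2592 x 1944 frame; the detected centre may lie near a border
frame = np.zeros((1944, 2592), dtype=np.uint8)
roi = extract_roi(frame, center=(100, 972))
```

Even when the detected centre lies within 135 pixels of the image border, the clamped window keeps the full 270 × 270 size, so later processing stages can assume a fixed ROI shape.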

3.2.2. Image Denoising

Figure 6b shows that there is still a significant amount of noise in the region of interest due to the interaction of the lock cylinder’s structural properties and the illumination, and that this noise has a direct impact on the acquisition of the hole edge. Eliminating noise from the region of interest is therefore absolutely required. Gaussian filtering can be used to remove these noises [51]. In this paper, two convolutions are performed using a one-dimensional Gaussian kernel. The one-dimensional Gaussian kernel is defined as indicated in Equation (1). It is depicted in Figure 6c after the noise has been removed.
K = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{x^2}{2\sigma^2}} \qquad (1)
In the formula, K is the Gaussian kernel, σ is the standard deviation of the Gaussian function, and x is the pixel variable of the image pixel point (x, y) in the horizontal (or vertical) direction.
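The two-pass filtering described above can be sketched as follows; `gaussian_kernel_1d` and `gaussian_blur_separable` are illustrative names, and the 3σ truncation radius is an assumption, not a detail given in the paper:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=None):
    """Sampled 1-D Gaussian kernel from Equation (1), normalised to sum 1."""
    if radius is None:
        radius = int(3 * sigma)  # 3-sigma truncation: a common choice (assumption)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    return k / k.sum()

def gaussian_blur_separable(img, sigma):
    """Two 1-D convolutions (rows, then columns), equivalent to a 2-D Gaussian."""
    k = gaussian_kernel_1d(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
```

Because the 2-D Gaussian is separable, two 1-D convolutions give the same result as a full 2-D convolution at lower computational cost, which is why the paper applies the one-dimensional kernel twice.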

3.2.3. Sub-Pixel Edge Detection

The central method of the lock cylinder bead hole size measurement suggested in this paper is sub-pixel edge detection. This process can be roughly separated into two steps: (1) pixel-level edge localization based on the Canny operator [52] and (2) accurate sub-pixel edge localization based on the Devernay [53] sub-pixel edge detection algorithm.
(1)
The steps of the Canny operator to find the edge points are as follows.
  • Smooth the image with a Gaussian filter. The ROI obtained in Section 3.2.2 has already been Gaussian filtered, so this step is omitted.
  • Calculate the gradient magnitude and direction. In this paper, the first-order partial derivative finite difference is used to calculate the gradient magnitude and direction, as shown in Equations (2)–(6).
s_x = \begin{bmatrix} -1 & 1 \\ -1 & 1 \end{bmatrix}, \qquad s_y = \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix} \qquad (2)

P[i,j] = \frac{1}{2}\big( f[i+1,j] - f[i,j] + f[i+1,j+1] - f[i,j+1] \big) \qquad (3)

Q[i,j] = \frac{1}{2}\big( f[i,j] - f[i,j+1] + f[i+1,j] - f[i+1,j+1] \big) \qquad (4)

M[i,j] = \sqrt{P[i,j]^2 + Q[i,j]^2} \qquad (5)

\theta[i,j] = \arctan\big( Q[i,j] / P[i,j] \big) \qquad (6)
where f is the gray value of the image, P represents the gradient amplitude in the X direction, Q represents the gradient amplitude in the Y direction, M is the amplitude of the point, and θ is the gradient direction, that is, the angle.
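A minimal NumPy sketch of the 2 × 2 finite-difference gradient in Equations (3)–(6); the function name `gradient_2x2` and the [i, j] array indexing are assumptions for illustration:

```python
import numpy as np

def gradient_2x2(f):
    """Gradient over 2x2 neighbourhoods, following Equations (3)-(6).

    Returns P (x-direction amplitude), Q (y-direction amplitude),
    magnitude M and direction theta, each of shape (H-1, W-1).
    """
    f = f.astype(float)
    # P[i, j] = (f[i+1, j] - f[i, j] + f[i+1, j+1] - f[i, j+1]) / 2
    P = 0.5 * (f[1:, :-1] - f[:-1, :-1] + f[1:, 1:] - f[:-1, 1:])
    # Q[i, j] = (f[i, j] - f[i, j+1] + f[i+1, j] - f[i+1, j+1]) / 2
    Q = 0.5 * (f[:-1, :-1] - f[:-1, 1:] + f[1:, :-1] - f[1:, 1:])
    M = np.hypot(P, Q)        # Equation (5)
    theta = np.arctan2(Q, P)  # Equation (6), quadrant-aware variant of arctan(Q/P)
    return P, Q, M, theta
```

On a linear ramp f[i, j] = i this yields P = 1, Q = 0, and M = 1 everywhere, matching the finite-difference definitions.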
  • Non-maximum suppression of the gradient magnitude. As shown in Figure 7, g1, g2, g3, and g4 all represent pixel points; they are four of the eight neighbors of point c. Point c in Figure 7 is the point to be judged, and the blue line is its gradient direction. If c is a local maximum, its gradient magnitude M must be greater than or equal to the magnitudes at the two points p and q, where the gradient line intersects segments g1–g2 and g3–g4. However, p and q are not integer pixels but sub-pixels; that is, their coordinates are floating-point values. Linear interpolation is used to find their gradient magnitudes. For example, p lies between g1 and g2, whose magnitudes are known; as long as the ratio of p’s position between g1 and g2 is known, its gradient amplitude can be obtained, and this ratio can be calculated from the included angle θ, which is the gradient direction. As shown in Equations (7) and (8), denoting the amplitude of g1 by M(g1) and the amplitude of g2 by M(g2), M(p) can be easily obtained.
Figure 7. Gradient amplitude for non-maximum suppression.
M(p) = w \, M(g_1) + (1 - w) \, M(g_2) \qquad (7)

w = \operatorname{distance}(p, g_2) / \operatorname{distance}(g_1, g_2) \qquad (8)
Here, distance(p, g2) represents the distance between points p and g2, and distance(g1, g2) represents the distance between points g1 and g2. In fact, w is a scale coefficient, which can be obtained from the gradient direction (the tangent or cotangent of the angle θ).
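The interpolation step can be sketched as below; the helper names are illustrative, and `w` follows the distance ratio of Equation (8), so w = 1 places p at g1 and w = 0 places p at g2:

```python
def interp_mag(m_g1, m_g2, w):
    """Linearly interpolate the gradient magnitude at sub-pixel point p,
    where w = distance(p, g2) / distance(g1, g2)."""
    return w * m_g1 + (1.0 - w) * m_g2

def survives_nms(m_c, m_p, m_q):
    """Point c is kept by non-maximum suppression when its magnitude is
    greater than or equal to both interpolated neighbours p and q."""
    return m_c >= m_p and m_c >= m_q
```

For a gradient direction θ within the sector shown in Figure 7, the ratio w can be taken as |tan θ|, so the interpolation needs no extra square roots.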
The Canny algorithm has been employed to achieve coarse pixel-level positioning of the edge of the lock cylinder’s bead hole. However, it is evident that this method alone is insufficient to meet the high accuracy requirements of industrial measurements. Therefore, the revised Devernay algorithm is utilized in this paper to determine the sub-pixel edge of the lock cylinder’s bead hole, leveraging the information obtained from the edge points. This refined algorithm allows for more precise and accurate determination of the bead hole’s sub-pixel edge.
(2)
Building upon the Devernay sub-pixel edge detection algorithm, accurate sub-pixel edge finding is achieved. Devernay further refined the Canny algorithm by proposing that the new edge point can be determined as the maximum value of the difference between multiple adjacent gradient modulus values. This value can be interpolated using a quadratic function based on the gradient modulus values at three adjacent points along the gradient direction. Figure 8 provides a visual representation of this process.
In Figure 8, ||g(A)||, ||g(B)||, and ||g(C)|| are the three gradient modulus values in the vertical direction in the region from point A to point C. The Canny algorithm selects point B, which has the largest gradient modulus, as the edge point. However, there may be a point η between points A and C whose modulus is the extreme value of this region; in that case, η is more suitable as the edge point. Devernay gave a method for estimating the point η and calculating its offset; the sub-pixel position of η is given in Equation (9):
\eta = \frac{1}{2} \cdot \frac{\lVert g(A) \rVert - \lVert g(C) \rVert}{\lVert g(A) \rVert + \lVert g(C) \rVert - 2 \lVert g(B) \rVert} \qquad (9)
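Equation (9) reduces to a one-line function (the name `devernay_offset` is illustrative): it is the vertex of the parabola through the three gradient moduli at A, B, and C.

```python
def devernay_offset(g_a, g_b, g_c):
    """Sub-pixel offset eta of the edge along the gradient direction,
    per Equation (9). With B a strict local maximum, the offset lies
    in (-0.5, 0.5), i.e. within half a pixel of B."""
    return 0.5 * (g_a - g_c) / (g_a + g_c - 2.0 * g_b)
```

For moduli 1, 4, 3 at A, B, C the offset is 0.25, i.e. the refined edge sits a quarter of a pixel from B toward C.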
Nevertheless, directly following Devernay’s method will still result in numerous errors, which arise from four sources: (1) the error caused by assuming that the gradient satisfies a quadratic curve law; (2) the error caused by determining the gradient value with a finite number of terms; (3) the gradient at some locations in the interpolation calculation not being computed exactly; and (4) data errors. A more effective way to prevent these error sources from affecting edge detection is as follows: a point (x, y) is an edge point in the horizontal direction when the gradient at that point satisfies the following criteria:
\begin{cases} \lVert g(x-1,\, y) \rVert < \lVert g(x,\, y) \rVert \\ \lVert g(x,\, y) \rVert > \lVert g(x+1,\, y) \rVert \\ \left| g_x(x,\, y) \right| > \left| g_y(x,\, y) \right| \end{cases} \qquad (10)
Similarly, the vertical edge points are defined as follows:
\begin{cases} \lVert g(x,\, y-1) \rVert < \lVert g(x,\, y) \rVert \\ \lVert g(x,\, y) \rVert > \lVert g(x,\, y+1) \rVert \\ \left| g_x(x,\, y) \right| < \left| g_y(x,\, y) \right| \end{cases} \qquad (11)
When the gradients in the x and y directions are equal, it defaults to the horizontal edge point.
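The criteria in Equations (10) and (11), including the tie-breaking default to a horizontal edge point, can be sketched as follows (the [x, y] array layout and the function name are assumptions for illustration):

```python
import numpy as np

def edge_orientation(g_mag, gx, gy, x, y):
    """Classify pixel (x, y) per Equations (10)-(11).

    Returns 'horizontal' when the magnitude is a strict local maximum
    along x and |gx| >= |gy| (equal gradients default to horizontal,
    as stated in the text), 'vertical' for the y-direction case, and
    None otherwise. All arrays are 2-D and indexed [x, y].
    """
    m = g_mag[x, y]
    if g_mag[x - 1, y] < m and m > g_mag[x + 1, y] and abs(gx[x, y]) >= abs(gy[x, y]):
        return "horizontal"
    if g_mag[x, y - 1] < m and m > g_mag[x, y + 1] and abs(gx[x, y]) < abs(gy[x, y]):
        return "vertical"
    return None
```

Filtering candidate pixels through this predicate before applying Equation (9) keeps the quadratic interpolation away from the error sources listed above.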

3.2.4. Radius Acquisition

Figure 6 demonstrates the precise determination of the sub-pixel edge of the lock cylinder’s bead hole, as outlined in Section 3.2.1, Section 3.2.2 and Section 3.2.3. However, directly obtaining the diameter of the lock core’s bead hole remains a challenge. To address this, this paper employs the least squares method to fit a circle to the edge of the bead hole. The least squares circle fitting method is a statistically based detection approach: even when the circular target in the image is affected by factors such as uneven light intensity, the least squares method ensures accurate positioning of the circle’s center and detection of its radius, because it relies on the statistical properties of the detected edge points to determine the fit. A circle can be defined mathematically by Equation (12), which allows for precise fitting at the sub-pixel level, given accurate edge positioning and visible contours.
x^2 + y^2 + ax + by + c = 0 \qquad (12)
In the formula, x and y are the coordinates of a point on the circle, and a, b, and c are the three circle parameters. From this, Equations (13)–(15) can be obtained.
d_i^2 = (X_i - A)^2 + (Y_i - B)^2 \qquad (13)

\delta_i = d_i^2 - R^2 = (X_i - A)^2 + (Y_i - B)^2 - R^2 = X_i^2 + Y_i^2 + a X_i + b Y_i + c \qquad (14)

Q(a, b, c) = \sum \delta_i^2 = \sum \big[ X_i^2 + Y_i^2 + a X_i + b Y_i + c \big]^2 \qquad (15)
where δ_i is the difference between the square of the distance from the point (X_i, Y_i) to the center of the circle and the square of the radius, and Q(a, b, c) is the sum of the squares of δ_i. When Q(a, b, c) attains its optimal (minimum) value, the equation of the fitted circle can be solved, and the center coordinates and radius of the fitted circle can be obtained. The fitted circle effect is shown in Figure 9.
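A compact least-squares sketch of the circle fit: minimising Q(a, b, c) in Equation (15) is linear in (a, b, c), so it can be solved directly with `numpy.linalg.lstsq` (the function name `fit_circle` is an illustrative assumption):

```python
import numpy as np

def fit_circle(xs, ys):
    """Least-squares circle fit by minimising Q(a, b, c) in Equation (15).

    Solves the linear system for a, b, c in x^2 + y^2 + a x + b y + c = 0,
    then recovers the centre (A, B) = (-a/2, -b/2) and radius R.
    """
    xs = np.asarray(xs, float)
    ys = np.asarray(ys, float)
    M = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs**2 + ys**2)
    (a, b, c), *_ = np.linalg.lstsq(M, rhs, rcond=None)
    A, B = -a / 2.0, -b / 2.0
    R = np.sqrt(A**2 + B**2 - c)
    return A, B, R

# points on a circle of radius 3 centred at (10, 5)
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
A, B, R = fit_circle(10 + 3 * np.cos(t), 5 + 3 * np.sin(t))
```

On noise-free points the fit recovers the centre and radius exactly; with noisy sub-pixel edge points it returns the statistically best circle in the least-squares sense, which is what makes the method robust to uneven illumination.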

4. Results

4.1. Defect Detection Experiment

4.1.1. Defect Detection Evaluation Index

For defect detection, this experiment uses precision (P), recall (R), and mAP (mean Average Precision) to evaluate the effect of YOLOv6 on surface defect detection. P, R, and mAP are defined by Equations (16)–(18), respectively.
P = \frac{TP}{TP + FP} \qquad (16)

R = \frac{TP}{TP + FN} \qquad (17)

mAP = \frac{\sum_{i=1}^{K} AP_i}{K} \qquad (18)
Among them, TP is the number of positive samples correctly detected as positive by the algorithm, FP is the number of negative samples incorrectly detected as positive, FN is the number of positive samples incorrectly detected as negative, and K is the number of defect types. AP_i is the average precision of class i, defined in Equation (19).
AP = \int_0^1 P(R)\, dR \qquad (19)
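Equations (16)–(18) translate directly into code (function names are illustrative; computing AP itself requires the full precision–recall curve and is not shown):

```python
def precision(tp, fp):
    """Equation (16): fraction of predicted positives that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Equation (17): fraction of actual positives that are detected."""
    return tp / (tp + fn)

def mean_average_precision(ap_per_class):
    """Equation (18): mean of the K per-class average precisions."""
    return sum(ap_per_class) / len(ap_per_class)
```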

4.1.2. Experimental Data and Analysis

Take 1300 images from the collected images and feed them into the YOLOv6 network for training, using a training-to-validation set ratio of 8:2. The training parameters are shown in Table 4. Here, the learning rate refers to the step size of gradient descent on the loss function at each parameter update; batch-size represents the number of images trained in one iteration; epoch represents the number of passes over the training set; img-size indicates the training image size; and workers represents the number of worker processes used for data loading. Figure 10 and Figure 11 depict the change in the loss function value and the change in the validation mAP over 1000 training epochs.
Figure 10 illustrates that, at around the 700th epoch, the loss function value of YOLOv6 declines progressively and approaches convergence. In terms of validation mean Average Precision (mAP), at a confidence threshold greater than 0.5, the mAP is consistently maintained at approximately 0.9 after the initial 300 epochs. Similarly, at confidence thresholds of 0.5 to 0.95, the mAP stabilizes at around 0.6 from approximately the 600th epoch onward, as depicted in Figure 11. Figure 12 showcases the test results of the YOLOv6 algorithm on 440 additional images captured after the 1000 training epochs.
Based on the results displayed in Figure 12, it is evident that the YOLOv6 algorithm effectively detects various surface defects, meeting the requirements for industrial detection. The experiment’s mean Average Precision (mAP) is measured to be 0.911, which complies with industrial testing requirements.
Table 5 provides a comparison of the results obtained from recent iterations of the YOLO algorithm series, namely YOLOv5, YOLOv6, and YOLOv7, in terms of their ability to identify surface defects on lock bodies. According to the table, the YOLOv6 algorithm demonstrates a significantly better average precision than the YOLOv5 algorithm, and its average precision is nearly on par with that of the YOLOv7 algorithm. It is important to note that YOLOv6 achieves this precision with a substantially smaller model weight than YOLOv7. High precision in defect detection is crucial because it reduces the chances of false detections and missed detections, and a smaller weight file enables lightweight inspection in industrial production, which translates to improved productivity, reduced equipment needs, and lower production costs. Therefore, YOLOv6 offers a favorable solution by combining accurate detection with a reduced weight size. In addition, according to Table 5, the inference time of YOLOv6 is better than that of the other two networks, so even though its training time is longer, this does not affect the superiority of YOLOv6.

4.2. Size Measurement Experiments

(1)
Size conversion. In this paper, 64 pictures of the lock cylinder were captured, each containing five bead holes, yielding a total of 320 lock core bead hole images after extraction of the region of interest (ROI). Section 3 describes how the fitting circle radius of the lock cylinder's bead hole is obtained. However, the calculated radius is expressed in pixels and therefore requires a size conversion to verify the accuracy of the algorithm; Equation (20) performs this conversion.
$$k = \frac{1}{n}\sum_{i=1}^{n}\frac{d_{r}}{d_{pixel}} \tag{20}$$
In the formula, n is the number of measurements conducted on the standard workpiece, d_r is the actual measured size, and d_pixel is the diameter in pixels of the fitted circle obtained through sub-pixel edge detection. The variable k is the average pixel equivalent, that is, the actual length represented by each pixel in the experimental environment, in mm/pixel. A vernier caliper with an accuracy of 0.02 mm is used for measurement; the diameter of a standard qualified lock cylinder bead hole measured with this caliper is 3 mm. A total of 45 measurements are performed on the standard workpiece with the vernier caliper, and averaging the resulting pixel equivalents yields an average pixel equivalent of 0.013282 mm/pixel.
(2)
Size calculation. After computing the average pixel equivalent, the measured size can be calculated from Equation (21). The 320 groups of measurement data were divided into six groups, and Table 6 lists the average calculated results for each group.
Table 6 shows that the average measurement error of the proposed algorithm is within 0.03 mm for all six groups of measurement data, and the average measurement time for each ROI of the lock cylinder bead hole area is 20.54 ms. Figure 13 displays three objects: a triangular prism, a quadrangular prism, and a nut. The triangular prism has an equilateral triangle base with a width of 5.06 mm and a side length of 5.84 mm; the quadrangular prism has a side length of 5.02 mm; and the nut is a standard part of model GB_FASTENER_NUT_SN AB1 M4-N, measured with a vernier caliper as 6.88 mm wide and 7.76 mm long. The triangular and quadrangular prisms were fabricated on a 3D printer with a printing accuracy of 0.1 mm, and their sizes were measured with the proposed algorithm. Thirty sets of data were measured for each of the three shapes, and the average values are given in Table 7. The results demonstrate that the proposed algorithm measures different shapes with high accuracy: the average absolute error between the measured and true values is less than 0.03 mm.
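The measurement pipeline in steps (1) and (2), namely a least-squares circle fit to the sub-pixel edge points followed by conversion through the average pixel equivalent k, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the edge points are synthetic, and the closed-form Kåsa formulation is used here as one common least-squares circle fit.

```python
import numpy as np

def pixel_equivalent(real_diameter_mm, pixel_diameters):
    # Equation (20): k = (1/n) * sum(d_r / d_pixel), in mm/pixel.
    return float(np.mean(real_diameter_mm / np.asarray(pixel_diameters)))

def fit_circle(xs, ys):
    # Kasa least-squares fit: minimise the algebraic error of
    # x^2 + y^2 + D*x + E*y + F = 0 over the detected edge points.
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs**2 + ys**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, np.sqrt(cx**2 + cy**2 - F)

# Synthetic edge points on a circle of diameter 226 px (illustrative;
# close to the pixel diameters reported in Table 6).
t = np.linspace(0.0, 2.0 * np.pi, 200)
xs, ys = 320.0 + 113.0 * np.cos(t), 240.0 + 113.0 * np.sin(t)
cx, cy, r_pixel = fit_circle(xs, ys)

k = 0.013282                     # average pixel equivalent from the paper (mm/pixel)
diameter_mm = 2.0 * r_pixel * k  # size conversion, as in Equation (21)
```

With real data, xs and ys would be the sub-pixel edge coordinates returned by the Canny-Devernay step rather than synthetic points.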
Figure 14 compares the measurement errors of the Canny–Zernike algorithm [54,55], the Hough circle algorithm [56], and the algorithm proposed in this paper when measuring the bead hole of the lock cylinder. The proposed method outperforms both alternatives: its measurements exhibit significantly less error and variability than those of the Canny–Zernike algorithm, and it achieves higher accuracy than the Hough circle method. These results indicate that the proposed method offers superior stability and high accuracy for measuring metal workpieces.
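The accuracy advantage over the pixel-level Hough transform comes from the sub-pixel refinement step: after Canny non-maximum suppression, Devernay-style interpolation fits a parabola to the gradient magnitudes on either side of an edge pixel and takes its vertex as the edge position. The sketch below shows that standard 1-D quadratic interpolation; it is a simplified illustration, not the paper's improved algorithm.

```python
def subpixel_offset(g_minus, g_center, g_plus):
    # Vertex of the parabola through three gradient-magnitude samples
    # at offsets -1, 0, +1 across the edge; returns the sub-pixel shift
    # of the true edge relative to the centre pixel.
    denom = g_minus - 2.0 * g_center + g_plus
    if denom == 0:
        return 0.0  # flat gradient profile: keep the pixel position
    return 0.5 * (g_minus - g_plus) / denom

# Gradient peak slightly toward the g_plus neighbour (illustrative values).
print(subpixel_offset(6.0, 10.0, 8.0))  # ≈ 0.167 pixels toward the + side
```

Because the refined edge position carries fractional-pixel information, the subsequent circle fit can resolve diameters well below the one-pixel quantization of Hough-based voting.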

5. Conclusions

The study presented in this paper focuses on the application of machine vision and deep learning algorithms to enhance the conventional industrial production process by automating the detection of metal parts. A dedicated experimental platform for machine vision detection is developed to meet the requirements of detecting surface defects and measuring the size of the lock cylinder’s bead hole. Extensive algorithm research, specifically in the areas of size detection and surface defect detection of lock cylinders and lock bodies, has led to several significant conclusions.
(1)
This paper introduces the YOLOv6 algorithm as the proposed method for identifying surface defects on the lock body and lock cylinder. After a training process of 1000 iterations, YOLOv6 effectively detects seven distinct types of surface defects, achieving a mean Average Precision (mAP) of 0.911 and demonstrating both accuracy and efficiency in defect detection.
(2)
The Devernay algorithm is refined, and a new algorithm based on Canny–Devernay sub-pixel edge detection is proposed for measuring the lock cylinder's bead hole. The average measurement error for each region of interest (ROI) in the lock core bead hole area is less than 0.03 mm, and the average measurement time is 20.54 ms. These results meet the requirements for automatically measuring small features (approximately 3 mm) at high speed and high precision. The proposed algorithm outperforms manual detection in speed and efficiency while maintaining comparable accuracy.
However, this paper has two limitations. Firstly, YOLOv6, although effective, may not be the optimal tool for recognizing small defect targets in the lock body and lock cylinder; the network structure will be improved in future research to enhance small-target detection. Secondly, there is still a 4.063% probability that the proposed size measurement method produces a measurement error greater than 0.05 mm. Future optimizations will therefore focus on improving the algorithm's robustness and enhancing measurement accuracy.

Author Contributions

X.X.: Conceptualization, Methodology, Formal analysis, Writing—Original draft. H.W.: Project administration, Funding acquisition, Writing—Review and editing. Y.L.: Resources, Supervision and writing. D.L.: Visualization and writing. B.L.: Software and writing. Y.T.: Methodology, Writing—Review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Laboratory of Lingnan Modern Agriculture Project under Grant NT2021009 and the Guangdong Basic and Applied Basic Research Foundation (2022A1515140162). We would like to express our gratitude for the support provided by the project “Application of Multi-source Sensing Information Fusion Technology in Engineering Inspection”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, upon reasonable request.

Acknowledgments

The authors would like to thank the anonymous reviewers for their critical comments and suggestions for improving the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Yang, J.; Li, S.; Wang, Z.; Dong, H.; Wang, J.; Tang, S. Using Deep Learning to Detect Defects in Manufacturing: A Comprehensive Survey and Current Challenges. Materials 2020, 13, 5755.
2. Chetverikov, D.; Hanbury, A. Finding defects in texture using regularity and local orientation. Pattern Recogn. 2002, 35, 2165–2180.
3. Hou, Z.; Parker, J.M. Texture Defect Detection Using Support Vector Machines with Adaptive Gabor Wavelet Features. In Proceedings of the 2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION'05)—Volume 1, Breckenridge, CO, USA, 5–7 January 2005; pp. 275–280.
4. Zheng, S.; Zhong, Q.; Chen, X.; Peng, L.; Cui, G. The Rail Surface Defects Recognition via Operating Service Rail Vehicle Vibrations. Machines 2022, 10, 796.
5. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
6. Cha, Y.; Choi, W.; Suh, G.; Mahmoudkhani, S.; Büyüköztürk, O. Autonomous Structural Visual Inspection Using Region-Based Deep Learning for Detecting Multiple Damage Types. Comput-Aided Civ. Inf. 2018, 33, 731–747.
7. Tao, X.; Zhang, D.; Wang, Z.; Liu, X.; Zhang, H.; Xu, D. Detection of Power Line Insulator Defects Using Aerial Images Analyzed with Convolutional Neural Networks. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 1486–1498.
8. Li, J.; Su, Z.; Geng, J.; Yin, Y. Real-time Detection of Steel Strip Surface Defects Based on Improved YOLO Detection Network. IFAC-PapersOnLine 2018, 51, 76–81.
9. Wu, Y.; Guo, D.; Liu, H.; Huang, Y. An end-to-end learning method for industrial defect detection. Assem. Autom. 2020, 40, 31–39.
10. Xie, W.; Chen, H.; Wang, Z.; Liu, B.; Shuai, L. Display Line Defect Detection Method Based on Color Feature Fusion. Machines 2022, 10, 723.
11. Gao, Y.; Gao, L.; Li, X. A hierarchical training-convolutional neural network with feature alignment for steel surface defect recognition. Robot. Comput. Integr. Manuf. 2023, 81, 102507.
12. Li, W.; Zhang, H.; Wang, G.; Xiong, G.; Zhao, M.; Li, G.; Li, R. Deep learning based online metallic surface defect detection method for wire and arc additive manufacturing. Robot. Comput. Integr. Manuf. 2023, 80, 102470.
13. Yoon, S.; Song-Kyoo Kim, A.; Cantwell, W.J.; Yeun, C.Y.; Cho, C.; Byon, Y.; Kim, T. Defect detection in composites by deep learning using solitary waves. Int. J. Mech. Sci. 2023, 239, 107882.
14. Liguori, C.; Paolillo, A.; Pietrosanto, A. An on-line stereo-vision system for dimensional measurements of rubber extrusions. Measurement 2004, 35, 221–231.
15. Valkenburg, R.J.; McIvor, A.M. Accurate 3D measurement using a structured light system. Image Vision Comput. 1998, 16, 99–110.
16. Aguilar, J.J.; Torres, F.; Lope, M.A. Stereo vision for 3D measurement: Accuracy analysis, calibration and industrial applications. Meas. J. Int. Meas. Confed. 1996, 18, 193–200.
17. Xiang, R.; He, W.; Zhang, X.; Wang, D.; Shan, Y. Size measurement based on a two-camera machine vision system for the bayonets of automobile brake pads. Measurement 2018, 122, 106–116.
18. Li, X.; Yang, Y.; Ye, Y.; Ma, S.; Hu, T. An online visual measurement method for workpiece dimension based on deep learning. Measurement 2021, 185, 110032.
19. Xiao, G.; Li, Y.; Xia, Q.; Cheng, X.; Chen, W. Research on the on-line dimensional accuracy measurement method of conical spun workpieces based on machine vision technology. Measurement 2019, 148, 106881.
20. Lu, R.S.; Li, Y.F.; Yu, Q. On-line measurement of the straightness of seamless steel pipes using machine vision technique. Sens. Actuators A Phys. 2001, 94, 95–101.
21. Jia, Z.; Wang, B.; Liu, W.; Sun, Y. An improved image acquiring method for machine vision measurement of hot formed parts. J. Mater. Process. Technol. 2010, 210, 267–271.
22. Gao, P.; Liu, F.; Sun, X.; Wang, F.; Li, J. Rapid non-contact visual measurement method for key dimensions of revolving workpieces. Int. J. Metrol. Qual. Eng. 2021, 12, 10.
23. Yan, S.; Wang, X.; Yang, Q.; Xu, D.; He, H.; Liu, Y. Online deviation measurement system of the strip in the finishing process based on machine vision. Measurement 2022, 202, 111735.
24. Kurita, T.; Kasashima, N.; Matsumoto, M. Development of a vision-based high precision position and orientation measurement system to facilitate automation of workpiece installation in machine tools. CIRP J. Manuf. Sci. Technol. 2022, 38, 509–517.
25. Zhihao, X.; Zhijiang, Z.; Han, L.; Libo, P. Research on dynamic measurement of hot ring rolling dimension based on machine vision. IFAC-PapersOnLine 2022, 55, 125–130.
26. Lang, J.; Mao, J.; Liang, R. Non-horizontal target measurement method based on monocular vision. Syst. Sci. Control. Eng. 2022, 10, 443–458.
27. Zhang, W.; Han, Z.; Li, Y.; Zheng, H.; Cheng, X. A Method for Measurement of Workpiece form Deviations Based on Machine Vision. Machines 2022, 10, 718.
28. Luo, L.; Liu, W.; Lu, Q.; Wang, J.; Wen, W.; Yan, D.; Tang, Y. Grape Berry Detection and Size Measurement Based on Edge Image Processing and Geometric Morphology. Machines 2021, 9, 233.
29. Tang, Y.; Huang, Z.; Chen, Z.; Chen, M.; Zhou, H.; Zhang, H.; Sun, J. Novel visual crack width measurement based on backbone double-scale features for improved detection automation. Eng. Struct. 2023, 274, 115158.
30. Grompone Von Gioi, R.; Randall, G. A Sub-Pixel Edge Detector: An Implementation of the Canny/Devernay Algorithm. Image Process. Line 2017, 7, 347–372.
31. Luo, D.L.; Cai, Y.X.; Yang, Z.H.; Zhang, Y.Z.; Zhou, Y.; Bai, Y. Survey on industrial defect detection with deep learning. Sci. Sin. Inform. 2022, 52, 1002–1039. (In Chinese)
32. Xu, D.; Lu, W.; Li, F. A Review of Typical Object Detection Algorithms in Deep Learning. Comput. Eng. Appl. 2021, 57, 10–25.
33. Wu, F.; Yang, Z.; Mo, X.; Wu, Z.; Tang, W.; Duan, J.; Zou, X. Detection and counting of banana bunches by integrating deep learning and classic image-processing algorithms. Comput. Electron. Agric. 2023, 209, 107827.
34. Wu, F.; Duan, J.; Ai, P.; Chen, Z.; Yang, Z.; Zou, X. Rachis detection and three-dimensional localization of cut off point for vision-based banana robot. Comput. Electron. Agric. 2022, 198, 107079.
35. Tang, Y.; Cheng, Z.; Huang, Z.; Nong, Y.; Li, L. Visual measurement of dam concrete cracks based on U-net and improved thinning algorithm. J. Exp. Mech. 2022, 37, 209–220.
36. Zhang, J.; Sun, Y.; Li, G.; Wang, Y.; Sun, J.; Li, J. Machine-learning-assisted shear strength prediction of reinforced concrete beams with and without stirrups. Eng. Comput. Ger. 2022, 38, 1293–1307.
37. Sun, J.; Ma, Y.; Li, J.; Zhang, J.; Ren, Z.; Wang, X. Machine learning-aided design and prediction of cementitious composites containing graphite and slag powder. J. Build. Eng. 2021, 43, 102544.
38. Sun, J.; Wang, J.; Zhu, Z.; He, R.; Peng, C.; Zhang, C.; Huang, J.; Wang, Y.; Wang, X. Mechanical Performance Prediction for Sustainable High-Strength Concrete Using Bio-Inspired Neural Network. Buildings 2022, 12, 65.
39. Sun, J.; Wang, X.; Zhang, J.; Xiao, F.; Sun, Y.; Ren, Z.; Zhang, G.; Liu, S.; Wang, Y. Multi-objective optimisation of a graphite-slag conductive composite applying a BAS-SVR based model. J. Build. Eng. 2021, 44, 103223.
40. Rifai, A.; Fukuda, R.; Aoyama, H. Surface Roughness Estimation and Chatter Vibration Identification Using Vision-Based Deep Learning. J. Jpn. Soc. Precis. Eng. 2019, 85, 658–666.
41. Zhou, Y.; Tang, Y.; Zou, X.; Wu, M.; Tang, W.; Meng, F.; Zhang, Y.; Kang, H. Adaptive Active Positioning of Camellia oleifera Fruit Picking Points: Classical Image Processing and YOLOv7 Fusion Algorithm. Appl. Sci. 2022, 12, 12959.
42. Zhang, Y.; Liu, X.; Guo, J.; Zhou, P. Surface Defect Detection of Strip-Steel Based on an Improved PP-YOLOE-m Detection Network. Electronics 2022, 11, 2603.
43. Ge, H.; Dai, Y.; Zhu, Z.; Liu, R. A Deep Learning Model Applied to Optical Image Target Detection and Recognition for the Identification of Underwater Biostructures. Machines 2022, 10, 809.
44. Wang, J.; Dai, H.; Chen, T.; Liu, H.; Zhang, X.; Zhong, Q.; Lu, R. Toward surface defect detection in electronics manufacturing by an accurate and lightweight YOLO-style object detector. Sci. Rep. 2023, 13, 7062.
45. Ge, Y.; Lin, S.; Zhang, Y.; Li, Z.; Cheng, H.; Dong, J.; Shao, S.; Zhang, J.; Qi, X.; Wu, Z. Tracking and Counting of Tomato at Different Growth Period Using an Improving YOLO-Deepsort Network for Inspection Robot. Machines 2022, 10, 489.
46. Tang, Y.; Zhu, M.; Chen, Z.; Wu, C.; Chen, B.; Li, C.; Li, L. Seismic performance evaluation of recycled aggregate concrete-filled steel tubular columns with field strain detected via a novel mark-free vision method. Structures 2022, 37, 426–441.
47. Xiong, Z.; Guo, X.; Luo, Y.; Zhu, S.; Liu, Y. Experimental and numerical studies on single-layer reticulated shells with aluminium alloy gusset joints. Thin Walled Struct. 2017, 118, 124–136.
48. Que, Y.; Dai, Y.; Jia, X.; Leung, A.; Chen, Z.; Jiang, Z.; Tang, Y. Automatic classification of asphalt pavement cracks using a novel integrated generative adversarial networks and improved VGG model. Eng. Struct. 2023, 277, 115406.
49. Huang, B.; Wang, J.; Piukovics, G.; Zabihi, N.; Ye, J.; Saafi, M.; Ye, J. Hybrid cement composite-based sensor for in-situ chloride monitoring in concrete structures. Sens. Actuators B Chem. 2023, 385, 133638.
50. Xiong, Z.; Guo, X.; Luo, Y.; Xu, H. Numerical analysis of aluminium alloy gusset joints subjected to bending moment and axial force. Eng. Struct. 2017, 152, 1–13.
51. Getreuer, P. A Survey of Gaussian Convolution Algorithms. Image Process. Line 2013, 3, 286–310.
52. Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698.
53. Devernay, F. A Non-Maxima Suppression Method for Edge Detection with Sub-Pixel Accuracy. Ph.D. Thesis, INRIA, Grenoble, France, 1995.
54. Wang, Z.; Wang, Z.; Huang, N.; Zhao, J. An Improved Canny-Zernike Subpixel Detection Algorithm. Wirel. Commun. Mob. Comput. 2022, 2022, 1488406.
55. Tang, R.; Chen, W.; Wu, Y.; Xiong, H.; Yan, B. A Comparative Study of Structural Deformation Test Based on Edge Detection and Digital Image Correlation. Sensors 2023, 23, 3834.
56. Lin, E.; Tu, C.; Lien, J.J. Nut Geometry Inspection Using Improved Hough Line and Circle Methods. Sensors 2023, 23, 3961.
Figure 1. Automatic production line for lock body defect detection.
Figure 2. Size measurement sample.
Figure 3. Example of datasets acquisition: (a) Lock body surface defects; (b) Lock cylinder surface defects; (c) Lock cylinder bead hole dataset (d stands for diameter).
Figure 4. The framework of YOLOv6.
Figure 5. Edge detection process.
Figure 6. Process of edge detection: (a) Lock cylinder image; (b) obtaining ROI; (c) filtering and denoising; (d) edge detection.
Figure 8. Devernay interpolation principle.
Figure 9. Fitting circle effect.
Figure 10. Training loss function change curve.
Figure 11. mAP verification curve.
Figure 12. Detection results of various surface defects: (a) bump damage, scratches and freckles; (b) freckles, scratches and bump damage; (c) bad stuff; (d) bad cover; (e) freckles and poor contraction; (f) poor cut.
Figure 13. The geometric dimensions of the three shapes. (a) Triangle; (b) square; (c) nut.
Figure 14. Comparison of measurement errors among three methods.
Table 1. Automatic production line for lock body defect detection.
Configuration | Parameter
Operating system | Windows 10
Deep learning framework | PyTorch 1.8.1
Programming language | Python 3.7
GPU accelerated environment | CUDA 10.2
GPU | NVIDIA GeForce RTX 2060
CPU | Intel(R) Core(TM) i7-10700 CPU @ 2.90 GHz
Table 2. Size measurement experiment environment parameters.
Equipment | Parameter | Data
LED ring light | Product number | JHZM-A40-W
LED ring light | Luminous color | White
LED ring light | Number of LEDs | 48 shell LEDs
Industrial camera | Effective pixels | 5 million
Industrial camera | Color | Multicolor
Industrial camera | Cell size | 2.2 μm × 2.2 μm
Industrial camera | Frame rate/resolution | 31 fps @ 2592 × 1944
Camera lens | Focal length | 12 mm
Camera lens | Maximum size of image surface | 1/1.8″ (φ9 mm)
Camera lens | Aperture range | F2.8–F16
Camera lens | Closest shooting distance | 0.06 m
Table 3. Dataset.
Classes | Quantity
Surface defects | 1740
Lock cylinder | 64
Table 4. The training parameters.
Parameter | Learning Rate | Batch Size | Epoch | Img-Size | Workers
Value | 0.001 | 8 | 1000 | 640 | 8
Table 5. Comparison of the detection effects of YOLO algorithms.
Version | mAP | Weight Size | Training Time | Average Inference Time
YOLOv5 | 0.900 | 13.8 MB | 10 h | 15.31 ms
YOLOv6 | 0.911 | 36.2 MB | 16 h | 10.29 ms
YOLOv7 | 0.915 | 71.4 MB | 8 h | 11.23 ms
Table 6. Calculation result record.
Group | Number of Samples | Fitting Circle Diameter/(pixel) | Real Value/(mm) | Predictive Value/(mm) | Mean Absolute Error/(mm)
1 | 45 | 225.891 | 3.00 | 3.001 | 0.027
2 | 50 | 227.495 | 3.00 | 3.022 | 0.023
3 | 60 | 227.611 | 3.04 | 3.024 | 0.019
4 | 50 | 227.246 | 3.04 | 3.019 | 0.023
5 | 50 | 225.031 | 2.98 | 2.989 | 0.028
6 | 65 | 222.217 | 2.98 | 2.952 | 0.028
Table 7. Measurement of geometric dimensions of various shapes.
Value | Triangle Width | Triangle Length | Square Length | Nut Width | Nut Length
Real Value/(mm) | 5.06 | 5.60 | 5.30 | 6.88 | 7.76
Mean Predictive Value/(mm) | 5.050 | 5.595 | 5.315 | 6.91 | 7.742
Mean Absolute Error/(mm) | 0.010 | 0.005 | 0.015 | 0.026 | 0.018

Wang, H.; Xu, X.; Liu, Y.; Lu, D.; Liang, B.; Tang, Y. Real-Time Defect Detection for Metal Components: A Fusion of Enhanced Canny–Devernay and YOLOv6 Algorithms. Appl. Sci. 2023, 13, 6898. https://doi.org/10.3390/app13126898