Flaw Detection Tutorial: Quantifying and Qualifying Flaws
Flaw detection is one of the most fundamental machine vision tasks. Essential to quality control, machine vision allows manufacturers to find contamination, scratches, cracks, blemishes, discoloration, gaps, pits, and other unacceptable flaws via nondestructive methods. Setting up flaw detection is not without its challenges. Manufacturers must work with engineers to quantify and qualify potential flaws, in order to create a system that provides reliable and repeatable results.
Three Scenarios for Flaws
The first challenge is determining the nature of potential flaws that a machine vision system must identify.
In many applications, the flaws are easily detected by standard imaging tools, such as “blob” analysis and intensity measurement. For example, pin-holes and foreign particles are often round, have a known range of diameters, and appear as bright pixels on a dark background, or vice versa.
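To make the easy case concrete, the sketch below thresholds a synthetic dark image and groups bright pixels into connected “blobs,” keeping only those whose area falls within an expected flaw-size range. This is a minimal NumPy illustration of blob analysis under invented thresholds and sizes; a production system would use an optimized vision library, but the logic is the same.

```python
import numpy as np

def find_blobs(image, threshold=128, min_area=2, max_area=50):
    """Label bright 4-connected regions and keep those within a size range.

    A minimal flood-fill labeler for illustration: threshold, group
    connected pixels, then filter blobs by the expected flaw size.
    """
    mask = image > threshold
    labels = np.zeros(mask.shape, dtype=int)
    blobs = []
    next_label = 0
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c] and labels[r, c] == 0:
                next_label += 1
                labels[r, c] = next_label
                stack = [(r, c)]
                pixels = []
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
                if min_area <= len(pixels) <= max_area:
                    blobs.append(pixels)
    return blobs

# Synthetic dark image: one pin-hole-sized bright blob plus one lone noise pixel.
img = np.zeros((10, 10), dtype=np.uint8)
img[2:4, 2:4] = 200          # 4-pixel blob: a plausible pin-hole
img[7, 7] = 200              # single pixel: below min_area, ignored as noise
print(len(find_blobs(img)))  # → 1
```

Filtering by area is what turns raw bright pixels into a decision: anything smaller than the minimum expected flaw is treated as sensor noise, anything larger is handled as a different defect class.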
In slightly more challenging applications, the flaws are less well defined in shape and size, but can still be distinguished from the underlying part. Examples include scuff marks or long, low-contrast line defects, such as scratches or cracks. These types of flaws may require advanced imaging tools for detection, such as the Fourier transform.
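The Fourier-transform idea can be sketched on synthetic data: a regular surface texture concentrates into a few sharp peaks in the 2-D spectrum, so zeroing those peaks and inverting the transform leaves a faint scratch visible. Everything here (image size, texture period, scratch contrast, peak threshold) is invented for illustration.

```python
import numpy as np

# Synthetic part: a strong periodic texture plus a faint vertical scratch.
rows, cols = 64, 64
y, x = np.mgrid[0:rows, 0:cols]
image = 128 + 40 * np.sin(2 * np.pi * x / 8)   # regular surface pattern
image[:, 30] += 10                             # low-contrast scratch

# The periodic texture collapses into a few dominant spectral peaks; zeroing
# them suppresses the texture, while the scratch (spread across many
# frequencies) survives the inverse transform.
spectrum = np.fft.fft2(image - image.mean())
magnitude = np.abs(spectrum)
peaks = magnitude > 0.5 * magnitude.max()      # dominant texture peaks
spectrum[peaks] = 0
residual = np.real(np.fft.ifft2(spectrum))

# After filtering, the strongest column of the residual is the scratch.
print(int(np.argmax(residual.max(axis=0))))  # → 30
```

In the spatial image the scratch is only 10 gray levels against a 40-level texture swing; in the filtered residual it is the dominant feature, which is the whole point of moving to the frequency domain.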
The final, most challenging scenario is when flaws have no pattern definition and may not be easily distinguished from an underlying part. Some such applications include printing defects and foreign matter in a “random” medium (e.g. a small piece of plastic on a conveyor belt of rice).
Challenges to Determining the Nature of Flaws
Determining the nature of potential flaws and designing a successful system gives rise to a number of challenges and considerations.
Many manufacturers are very savvy in terms of flaw definition, and can provide parameters for what constitutes a product flaw. However, there are also those who tell a machine vision engineer, “I can tell you it’s a flaw, but I can’t tell you why.” Unfortunately, it is difficult to create an application that can determine the difference between a flaw and a minor-but-acceptable defect without more specific parameters than “I’ll know it when I see it.”
Similarly, there is sometimes a lack of agreement on what constitutes a failure versus an acceptable imperfection. Ask four different people and you are likely to get four different answers. Sometimes, the task becomes garnering consensus amongst stakeholders on consistent parameters for what constitutes a reject.
Another important part of the process is determining the tolerance for false rejects, which occur when a machine vision system mistakenly tags an object as flawed when it is not. Manufacturers are rarely willing to accept false rejects, because valuable materials are wasted, which affects profitability. Thankfully, technology for identifying well-defined flaws is excellent, and in many such cases flaw detection approaches 100% accuracy. In cases with less well-defined flaws, or flaws with no pattern definition, manufacturers may need to determine their tolerance for false rejects.
Lighting is a complicated and important element of flaw detection. The type, color, and angle of illumination play a crucial role in the application’s success or failure. For example, in an application where machine vision is looking for contaminants in an empty bottle, if light hits the bottle from the front, it may be difficult to see debris. However, if the system can be set up so that the bottle is backlit, white light will shine through the bottle and any debris will appear as black, making it easy to mark as a reject.
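With a backlight, the decision in the bottle example reduces to counting dark pixels: the empty bottle transmits light, and anything that blocks it shows up as a silhouette. The sketch below illustrates this on synthetic images; the threshold and pixel budget are invented values, not figures from the article.

```python
import numpy as np

def inspect_backlit(image, dark_threshold=60, max_dark_pixels=0):
    """Reject a backlit image if too many pixels fall below the threshold.

    Returns (passed, dark_pixel_count). Threshold values are illustrative;
    in practice they are tuned to the actual lighting setup.
    """
    dark = int(np.sum(image < dark_threshold))
    return dark <= max_dark_pixels, dark

# Synthetic backlit views: uniformly bright vs. a small dark speck of debris.
clean = np.full((20, 20), 230, dtype=np.uint8)
dirty = clean.copy()
dirty[10:12, 10:12] = 20        # debris silhouette blocking the backlight

print(inspect_backlit(clean))   # → (True, 0)
print(inspect_backlit(dirty))   # → (False, 4)
```

The simplicity of this code is the point of the lighting discussion: a well-chosen illumination setup can turn a hard segmentation problem into a trivial threshold.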
One final challenge to creating a flaw detection application is defining “perfection,” as well as total failure. Manufacturers will often supply a perfect version of a sample object—or “golden template”—to demonstrate how an unflawed object should appear. However, examples of the range of flaws are also needed to help create an appropriate flaw detection application. In certain industries, the occurrence of a flawed object is very rare, and, therefore, a range of flawed samples can be difficult to procure.
Methods of Flaw Detection
Now that we’ve faced some of the common challenges and identified the type of flaws that might occur, let’s talk about methods of flaw detection.
As previously noted, applications that detect well-defined flaws are the most common. In these cases, “traditional” imaging tools, such as morphology, are very successful. These tools are mature, widely available, and well suited to the job. For example, with proper lighting, defects such as holes or discolorations can be detected with traditional tools and reliable results.
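As one example of a traditional morphology tool, the sketch below implements binary opening (erosion followed by dilation) with a 3×3 structuring element in plain NumPy. Opening removes specks smaller than the element while preserving larger regions, a common way to separate sensor noise from real defects. The example image and sizes are invented for illustration.

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole 3x3
    neighborhood is set (edges padded with False)."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + mask.shape[0],
                     1 + dx:1 + dx + mask.shape[1]]
    return out

def dilate(mask):
    """3x3 binary dilation: a pixel is set if any neighbor is set."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + mask.shape[0],
                     1 + dx:1 + dx + mask.shape[1]]
    return out

def opening(mask):
    """Erosion then dilation: deletes features smaller than the element."""
    return dilate(erode(mask))

mask = np.zeros((12, 12), dtype=bool)
mask[2, 2] = True                    # single-pixel speck: sensor noise
mask[5:10, 5:10] = True              # 5x5 region: a real hole-type defect
cleaned = opening(mask)
print(cleaned[2, 2], cleaned.sum())  # → False 25
```

After opening, the isolated speck is gone while the 5×5 defect survives intact, so a simple area count on the cleaned mask is enough to drive an accept/reject decision.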
Potential flaws with less well-defined patterns require more advanced imaging tools. Such flaws can come in an infinite number of shapes and sizes, and can appear on textured or patterned surfaces that make it difficult to separate true flaws from surface patterns. In these applications, more advanced tools deliver more information, helping to separate the flaw from the background. For instance, the Hough transform, which is available in many vendors’ toolkits, can help separate line-like flaws from a textured background. Imagine looking at leather car seats, which have a pebbled texture. How do you isolate a flaw, such as a blob or bubble, from that pebbled texture? There are tools that will suppress the texture, leaving just the imperfections visible.
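A minimal sketch of the classic Hough line transform shows how a line-like flaw can be pulled out of clutter: every edge point votes for all (θ, ρ) lines passing through it, and collinear points pile their votes into one accumulator cell. The point sets and sizes below are synthetic; a real system would run this on edge pixels extracted from the image.

```python
import numpy as np

def hough_lines(points, shape, n_theta=180):
    """Vote each (y, x) point into a (theta, rho) accumulator and return
    the strongest line as (theta, rho, votes). A bare-bones sketch of the
    Hough line transform, not a production implementation."""
    diag = int(np.ceil(np.hypot(*shape)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, 2 * diag), dtype=int)
    for y, x in points:
        # rho = x*cos(theta) + y*sin(theta) for every candidate angle
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[np.arange(n_theta), rhos + diag] += 1
    t, r = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[t], int(r - diag), int(acc.max())

# A straight scratch (collinear points) buried among scattered texture points.
scratch = [(y, 20) for y in range(5, 25)]            # vertical line x = 20
noise = [(3, 7), (11, 30), (18, 4), (25, 25), (9, 14)]
theta, rho, votes = hough_lines(scratch + noise, (32, 32))
print(int(np.degrees(theta)), rho, votes)  # → 0 20 20
```

The scattered texture points each contribute only isolated votes, while the twenty collinear scratch points all land in the same cell, which is why the transform is robust to exactly the kind of busy background described above.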
Flaws that have no pattern definition require a different approach. In this case, flaws can be identified by differencing an image of the test product with an image of a known good product. Such applications require careful control of lighting and of the alignment and presentation of the two images: any rotation, translation, or scale change will be seen as a defect. Flaw detection in these situations can often be achieved through image differencing or pattern matching.
Image differencing involves mathematically subtracting an image of a perfect sample from an image of the same kind of object. If the result shows no difference, or only a very low-level difference, the sample is not defective. If the new sample has a flaw, there will be a large difference between the two images and the sample will be rejected. The main challenge of this approach is that the two images must line up exactly, or the results will be inaccurate.
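The differencing step itself is short, as the sketch below shows on synthetic, pre-aligned images; the per-pixel tolerance and the flaw-pixel budget are invented values that would normally be tuned per application. A small tolerance absorbs harmless variation such as lighting drift.

```python
import numpy as np

def is_defective(golden, test, pixel_tol=30, max_flaw_pixels=0):
    """Flag a test image whose absolute difference from the golden template
    exceeds pixel_tol at more than max_flaw_pixels locations. Assumes the
    images are already aligned; real systems register them first."""
    diff = np.abs(golden.astype(int) - test.astype(int))
    flaw_pixels = int(np.sum(diff > pixel_tol))
    return flaw_pixels > max_flaw_pixels

golden = np.full((16, 16), 180, dtype=np.uint8)           # perfect sample
good = np.clip(golden.astype(int) + 5, 0, 255).astype(np.uint8)  # lighting drift
flawed = golden.copy()
flawed[8, 2:14] = 60                                      # dark scratch

print(is_defective(golden, good))    # → False
print(is_defective(golden, flawed))  # → True
```

Note how the tolerance split does the real work: the 5-level global shift stays under `pixel_tol` and passes, while the 120-level scratch exceeds it at a dozen pixels and is rejected. Misalignment would push edge pixels over the same tolerance, which is exactly the failure mode described above.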
Pattern matching involves training the machine vision system on a good pattern and then searching a new image for that specific pattern. It is not necessarily looking at the whole object at once, but is seeking out one specific pattern within the image to match against the original perfect pattern. This method could be used to inspect the label of a bottle. In this instance, it would be possible to train a number of different parts of the label (the logo, a cartoon character, some text) as patterns and run each one as a comparison. This would enable discovery of smaller defects in the overall product and pinpoint each defect’s location. Over time, if a large number of labels are rejected, and it is always the same pattern causing the problem, it might be possible to identify an issue with the printing process and resolve it.
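One common way to implement this search is normalized cross-correlation: slide the trained pattern over the new image and score each position, accepting the pattern only if some position scores high enough. The brute-force NumPy sketch below uses a tiny synthetic “logo” patch; vendor tools use faster, rotation-tolerant variants of the same idea, and the images and score threshold here are invented.

```python
import numpy as np

def match_template(image, template, min_score=0.95):
    """Return the (y, x) top-left position where the template best matches
    by normalized cross-correlation, or None if no position scores at
    least min_score. Brute-force sliding window for illustration only."""
    th, tw = template.shape
    t = template.astype(float)
    t = t - t.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw].astype(float)
            w = w - w.mean()
            denom = t_norm * np.sqrt((w ** 2).sum())
            if denom == 0:
                continue                      # flat window, no correlation
            score = float((t * w).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos if best_score >= min_score else None

label = np.zeros((12, 12), dtype=np.uint8)
label[3:6, 4:8] = 200            # “logo” region on the trained good label
logo = label[2:7, 3:9]           # trained pattern, with a little margin

fresh = label.copy()             # new label printed correctly
smeared = label.copy()
smeared[3:6, 4:6] = 0            # half the logo failed to print

print(match_template(fresh, logo))    # → (2, 3)
print(match_template(smeared, logo))  # → None
```

In a label-inspection setup, each trained region (logo, character, text) would get its own `match_template` call, and the returned positions pinpoint where on the label each pattern was or was not found.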
These methods can be used in applications where potential flaws can take infinite shapes and sizes and occur in many locations on the object. That makes it difficult to set constraints, but if the object can be compared to a “perfect” sample, it is possible to create a successful flaw detection application.
The Future of Flaw Detection
The promise of Artificial Intelligence (AI) has been around for decades, and certainly even today, scientists, innovators and engineers continue to pursue the development of AI systems that are able to “learn.” While AI might seem like the future of flaw detection, it is unlikely to happen on a large scale anytime soon.
Currently, 80–90% of machine vision flaw detection applications are effective with zero or near-zero false rejects, and it is unlikely that AI could improve these results. Furthermore, manufacturers use flaw detection on a large scale, so value and efficiency are essential. It is not cost-effective for most manufacturers to set up an expensive, complex AI system.
The future of flaw detection is more likely to include further improvements in current technology, as well as increased efficiency and cost-effectiveness. As manufacturers work to quantify and qualify potential flaws with machine vision, they can ensure higher quality products for their customers.