Vision systems provide peace of mind when it comes to production quality, but they can also generate valuable data that tracks process variability. Diving into the world of vision data can seem overwhelming, but with the right tips and tricks you can set up a system that works for you. Using a vision system to fully automate inspection, rather than relying on hand tools for sampling parts, saves significant time.

When was the last time you designed a sampling process? How time-consuming was it? Did you wonder how valid the process was? Were you sampling across all relevant variables—or just during working hours?

Imagine working through all of those steps, then spending time gathering the measurements to try to figure out what is really going on, only to see an indication that something might have drifted along the way. Still, the nominal was within 95% confidence, so you gathered more samples, but before you could complete the process, management added another hurdle: another part in the process had changed, and the results were no longer relevant. At the end of the day, was a slightly lower p-value really going to change things?

The world of sampling came about because gathering data was so expensive. In fact, it still is if it is done manually, but there are now many technologies on the plant floor that can gather data at minimal cost (electricity and system maintenance) and nearly instantaneously, if you will accept millisecond delays. Chief among these IIoT quality technologies is machine vision. Other IIoT-connected devices exist, but the datasets they provide do not contain images, so it can be difficult to verify that the data is what it purports to be. Outliers are critical to the analysis, and without images a simple product jam can skew an entire dataset. Flipping through images of individual measurements gives more confidence that each measurement is accurate. Taking measurements automatically without vision might work too, but it may require adding an inexpensive recording solution if anomalies start occurring. Remember that only a few bad data points, mixed in, can render a whole dataset unusable.

Advancements in technology have not only changed the quality space but have also altered the role of the quality professional. Previously, the cost to gather data was so high that most of the time was spent designing good experiments and inferring characteristics of the population from the samples in hand. Now, the quality professional can gather enormous amounts of data from vision systems and drive more insights, faster, and with proof; there is confidence that the entire population has been measured. From this dataset, one can then tease out the impact of any particular factor: shift, supplier batch, machine parameters, temperatures, speeds, time since changeover or tooling changes. This data can drive process improvements, and small process improvements add up over time.
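
As a rough sketch of what that factor analysis might look like, the Python snippet below groups a hypothetical export of per-part vision measurements by shift and supplier batch. The file name, column names, nominal value, and drift threshold are illustrative assumptions, not part of any particular vision system.

import pandas as pd

# Hypothetical export of per-part vision measurements; the file and
# column names ("shift", "supplier_batch", "width_mm") are assumed.
df = pd.read_csv("vision_measurements.csv")

# Summarize each factor combination's effect on the measured dimension.
summary = (
    df.groupby(["shift", "supplier_batch"])["width_mm"]
      .agg(["count", "mean", "std"])
)
print(summary)

# Flag combinations whose average drifts from the nominal (assumed 25.00 mm).
NOMINAL = 25.00
print(summary[(summary["mean"] - NOMINAL).abs() > 0.05])

The same grouping can be repeated for any other factor captured alongside the measurement, which is how small, factor-specific improvements are found.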

Why Vision?

My first professional job was for a company that built a statistics package for big retailers. The work involved writing code to process large batches of transactions recorded at cash registers. The datasets were great, except for the oddities, like a purchase followed by a return. Was the transaction a mistake? Was it a store manager making monthly numbers by moving old inventory? An actual return?

In the retail world, as long as those oddities amounted to less than a small percentage of total sales, we could safely drop them from any analysis, but in manufacturing it is difficult to do the same. Some industries are lucky enough that customers will accept an AQL (acceptable quality limit), but many will not. Those oddities are the story, and we have to know what occurred. When looking through retail data, just like logs from machines or other collections of numbers, it is really hard to determine what the oddities actually are unless you have accompanying images. If there is, in fact, video footage or imagery, the story becomes a lot clearer: that questionable return is actually "shrink" (inventory shrinking without anyone paying for it), or spoilage, or whatever else the case might be. Manufacturing, like retail, has all kinds of issues that can arise, and without images to pair with the data there may simply be more questions than answers. The problem for manufacturers is that we often have to get to the bottom of that one part that escaped.

Transitioning to fully automated quality processes without images to explain the oddities can sink projects. Think about a mechanical system on a production line measuring every part by contacting it and recording displacement. Then think of all the things that could occur to generate an invalid measurement. When the process is automated and no one is exercising common sense, all of those things will happen: parts will jam, parts will feed oddly, parts may stack, a broken part may be presented, a part may double-trigger two inspections. Without images, we are left to guess whether any of these occurred and whether we can really trust the data.

What benefits are there?

Quality now depends on measurement error, not sampling error, since every part is being inspected. When sampling, it is important to have a tight standard deviation to say that a bad part is sufficiently rare. When every part is being measured, it is the measurement error that determines whether a part could be out of tolerance.

LSL+ME < Part < USL-ME (where LSL and USL are the lower and upper specification limits and ME is the measurement error).

If I continue remeasuring, there is more certainty as to where the part lies relative to those limits, because the standard deviation of the error decreases as 1/sqrt(n), where n is the number of trials. So if I measure the part 10 times, there is more certainty about whether it is good or bad:

LSL+ME/sqrt(10) < Part < USL-ME/sqrt(10)

For the average of n trials from a normal distribution, the standard deviation equals the standard deviation of a single trial multiplied by 1/sqrt(n). Essentially, the average of trials has a lower standard deviation than a single trial, and that standard deviation declines as the number of trials in the average grows, but only by its square root (i.e., sub-linearly).
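
A minimal sketch of that guard-banded check is below; the spec limits, single-reading measurement error, and example reading are assumed values for illustration only.

import math

# Illustrative values only: spec limits (mm), single-reading measurement
# error (mm), and an averaged reading for one part.
LSL, USL = 24.90, 25.10
ME = 0.02

def passes(avg_reading, n_readings):
    # The averaged reading must clear each spec limit by the measurement
    # error, which shrinks with the square root of the number of readings.
    guard = ME / math.sqrt(n_readings)
    return LSL + guard < avg_reading < USL - guard

print(passes(24.915, 1))    # one reading: too close to LSL to call good
print(passes(24.915, 10))   # ten averaged readings: tighter guard band, passes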

Sampling error is expensive to improve because I must manually sample more parts, and costs are incurred every time sampling is conducted. Measurement error, on the other hand, can keep improving, and those gains are kept. I can continue measuring to reduce measurement error, and because it is so cost effective with machine vision, my costs don't necessarily increase with the number of measurements taken (depending on cycle time).

Obviously, vision cannot solve for the sampling required by destructive testing, but if I can correlate a vision system result with the destructive test results, that game can begin to change as well. If we can build a regression from an observable feature that predicts the result of the destructive test, we can apply 100% inspection to that observable feature.
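
As an illustration (with made-up numbers, not real test data), the sketch below fits a simple linear regression from a vision-measured feature, here a hypothetical weld width, to destructive pull-test results, then uses it to predict the test outcome for an unbroken part.

import numpy as np

# Hypothetical paired data from sampled parts: vision-measured weld width (mm)
# and the destructive pull-test result (N) for the same parts.
weld_width = np.array([1.8, 2.0, 2.1, 2.3, 2.4, 2.6])
pull_force = np.array([410.0, 455.0, 470.0, 505.0, 520.0, 555.0])

# Fit a simple linear model and check how well the feature explains the test.
slope, intercept = np.polyfit(weld_width, pull_force, 1)
predicted = slope * weld_width + intercept
r2 = 1 - np.sum((pull_force - predicted) ** 2) / np.sum((pull_force - pull_force.mean()) ** 2)
print(f"pull_force ~ {slope:.1f} * weld_width + {intercept:.1f}, R^2 = {r2:.3f}")

# If the correlation holds up, the vision feature can be checked on 100% of parts.
print("predicted pull force at 2.2 mm:", round(slope * 2.2 + intercept, 1))

If the fit is strong and stable, periodic destructive tests remain as a check on the model while the vision feature carries the day-to-day inspection load.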

Monitoring the Vision System

If we are going to trust a vision system to generate data, with the system itself as a single point of failure, how do we make sure the vision system isn't drifting?

There are ways to go about this, such as reference parts, which can be certified to known standards and measured periodically to confirm the system still performs. Unfortunately, correcting for lens distortion in vision can also be called calibration, so it is important to be very specific about how deeply the system must be re-calibrated to ensure the correct process is performed. If the system is drifting, it may be appropriate to simply adjust for the slight error in measurement, or it may be necessary to go back to first principles and calibrate for lens distortion.
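
One simple way to watch for drift is to measure a certified reference part on a regular schedule and compare a running average against its known value. The readings, certified value, and alarm threshold below are assumptions for the sketch.

import statistics

# Hypothetical periodic readings (mm) of the same certified reference part.
CERTIFIED = 25.000   # certified value of the reference part (assumed)
ALARM = 0.010        # drift threshold before re-checking the system (assumed)

reference_readings = [24.999, 25.001, 25.003, 25.008, 25.011, 25.013, 25.016]

drift = statistics.mean(reference_readings[-5:]) - CERTIFIED  # recent average

if abs(drift) > ALARM:
    print(f"Drift of {drift:+.3f} mm detected: re-check or re-calibrate the system")
else:
    print(f"Reference part within {ALARM:.3f} mm of certified value ({drift:+.3f} mm)")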

How often should the system be re-calibrated for lens distortion to minimize error? These are questions each company will need to answer for itself, based on the cost of taking a system offline and the need for measurement continuity, and they are questions a vision company can help with as you set up systems.

Conclusion

As the cost to gather individual measurements in a 100% inspection process has fallen dramatically, a new philosophy of quality control is emerging. We now think more in terms of datasets, actionable insights, and better ways to gather more valuable data. The quality engineer is rapidly becoming the data scientist of the manufacturing operation, driving insights and value into areas of the process that previously could only be statistically estimated. Quality now holds some of the most valuable datasets within the organization, bringing it front and center, and often saving money too. Quality departments that lead this revolution are becoming critical to their companies, supplying insights that let them make significant gains against their competition. Vision systems are letting us see the problems we could previously only statistically guess at. V&S