Vision Measurement's New Look
October 29, 2009
Within the past five years, vision measurement has become a radically different animal in terms of capabilities and ease of use. The transformation began with the advent of computer-aided design (CAD)-based programming, which made the application of vision measurement analogous to other types of measuring technologies.
This brought vision measurement into the mainstream a decade or more after most other types of measurement hardware and software combinations had already made the transition to CAD. Therefore, the software had a lot of catching up to do. Now, as vision software continues to evolve, vision systems are becoming more flexible and productive.
In the previous article, “A New Look at Vision Measurement,” some of these improvements were discussed, touching on feature-based measurement and advanced edge detection algorithms that operate beneath the surface of the interface to improve programming efficiency and measurement reliability.
In this article, two additional software enhancements will be explored:
1. Simultaneously capturing data from a group of features to improve measurement throughput.
2. Giving programmers the ability to adjust the RGB profile on vision systems with color cameras, turning what has been considered an image sharpness liability into a measurement contrast advantage.
One Frame, Many Captures
Compared to capturing data points with a tactile probe, one of the key advantages of vision measurement is its ability to capture data from any feature that is within its field of view. Instead of laboriously capturing data one measurement point at a time, it can gobble up all the data points simultaneously, from all visible features within the frame.
Unfortunately, to take advantage of this capability, the programmer has always had to create measurement sequences that bring this multiple capture capability into play. This was no big deal if there were only a few features needing measurement, but it did not result in much improvement in throughput. And when there were many features to measure, the programming became extraordinarily laborious and prone to error.
That said, multiple captures can and do yield substantial improvements in measurement throughput. It is often worth spending the time to program them for production measurement applications, especially when there are dozens of densely packed features to measure. However, it is hard to justify spending the time for short runs or prototyping even though it could significantly reduce the time spent measuring.
To bypass this limitation, the trick is to automate the programming task itself. The software must be "smart enough" to create an optimal measuring sequence, calculating the maximum number of features it can measure with the minimum number of frame captures. This capability now exists in some vision measurement software.
The multiple capture feature automatically finds multiple features that fit within the same field of view and captures their measurement data simultaneously. On completing the measurements in a particular field of view, the software drives the camera to the next cluster of features and measures them in the same way. This continues until the inspection program is complete.
In order to qualify for multiple capture, the features must be in the same field of view and use the same magnification and illumination. If this is the case, they require no programming at all. All the programmer has to do is make sure the multiple capture feature is on. A unique algorithm works in the background, analyzing the program and selecting the minimum number of frame positions needed to measure the features of interest.
Next, the multiple capture algorithm executes its own locate and capture sequence independent of the order in which the features were programmed, with one exception: If the measurement software encounters a clearance move in the program, it turns the multiple capture feature off, executes the clearance move to avoid a potential crash and then resumes in the multiple capture mode.
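The frame-minimization idea described above can be sketched as a greedy covering problem: repeatedly place the camera frame where its field of view takes in the most still-unmeasured features. This is a minimal illustration only; the feature coordinates, field-of-view size, and greedy strategy are assumptions for the sketch, not the actual algorithm any vendor ships.

```python
FOV_W, FOV_H = 4.0, 3.0  # assumed field-of-view size, in mm

def frames_for(features):
    """Greedily pick frame positions so each covers as many
    unmeasured features as possible.

    Returns a list of (frame_origin, covered_features) pairs.
    """
    remaining = list(features)
    frames = []
    while remaining:
        best_origin, best_cover = None, []
        # Try anchoring a frame at each remaining feature and
        # count how many features that frame would cover.
        for x0, y0 in remaining:
            cover = [(x, y) for x, y in remaining
                     if x0 <= x <= x0 + FOV_W and y0 <= y <= y0 + FOV_H]
            if len(cover) > len(best_cover):
                best_origin, best_cover = (x0, y0), cover
        frames.append((best_origin, best_cover))
        remaining = [f for f in remaining if f not in best_cover]
    return frames

# Three closely spaced features fit in one frame; the fourth needs its own.
frames = frames_for([(0, 0), (1, 1), (3, 2), (10, 10)])
```

With the sample coordinates above, the greedy pass settles on two frame positions instead of four individual moves, which is the throughput win the article describes.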
To date, this multiple capture mode of vision measurement typically results in throughput increases of about 35%. However, there have been instances where comparisons of parts measured with and without this feature have shown productivity improvements as great as 70%. These big winners were parts densely populated with features having complex geometries.
The improvements cited were seen using computers with single-core processors. However, multiple cores are an ideal complement to this new technology. Because the software must process the measurement results before the system can execute its next move, multiple cores should substantially improve overall system productivity.
Users do not need to have direct computer controlled (DCC) vision systems to enjoy the benefits of multiple captures. The technology also works with manual vision systems by prompting the user to move the viewfinder to successive groups of features.
Color Cameras Revisited
About half of the vision systems in use employ grayscale cameras and the rest use color. Virtually all of the high-end systems use grayscale cameras. This is because each point registered by a black-and-white camera can be represented by a single grayscale pixel. Color cameras deliver somewhat fuzzier images because each point is a dot of color produced by combining three pixels: red, green and blue. So purists who require high-resolution captures insist on grayscale cameras to avoid the triple error stack of the RGB camera.
However, there is a gray area, so to speak, in which color cameras can be used to improve measurement consistency. This involves allowing the programmer to individually adjust the R, G and B sensitivity of the camera on a per-feature basis and improve the contrast so that edges are identified clearly and accurately.
This capability is particularly useful for measuring medical devices manufactured in colors to support safety protocols. Edges on these parts can be difficult to detect with grayscale cameras. Adjusting lighting to increase contrast and make the edges sharper can be counterproductive because shadows cast by intense lighting can alter the position of the edge as perceived by the grayscale camera. Adjusting the RGB profile, on the other hand, can dramatically heighten the contrast without distorting edge location.
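The per-channel adjustment described above amounts to weighting the red, green and blue signals differently when computing intensity, so that a feature and its background land far apart on the gray axis. The sketch below is an illustrative assumption, not any vendor's API: the pixel values and channel weights are made up to show how re-weighting widens the contrast gap between a gold trace and a green board.

```python
def intensity(pixel, weights=(0.299, 0.587, 0.114)):
    """Weighted grayscale intensity of an (R, G, B) pixel, clipped to 0-255.

    The default weights are the standard BT.601 luma coefficients.
    """
    r, g, b = pixel
    wr, wg, wb = weights
    return min(255, max(0, round(wr * r + wg * g + wb * b)))

gold  = (212, 175, 55)   # assumed color of a gold trace
board = (0, 100, 0)      # assumed color of a green solder mask

# Standard luminance weights: a moderate gap between trace and board.
default_gap = abs(intensity(gold) - intensity(board))

# Boost red and suppress green: the gold trace brightens while the
# board darkens, so edge detection sees a much stronger transition.
tuned = (0.8, 0.1, 0.1)
tuned_gap = abs(intensity(gold, tuned) - intensity(board, tuned))
```

With these sample values the tuned weighting roughly doubles the intensity gap, which is the kind of contrast heightening, without any change in lighting, that the article attributes to adjusting the RGB profile.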
Circuit boards with gold print against a green background are another application in which this approach works very well.
There are many other features that can enhance vision measurement software to improve system performance and reliability. However, they also can require many years of research and development to implement. To compress the product development cycle, good software developers often insert building blocks of code supporting enhancements one or more releases in advance of their introduction. That is why it is a good idea to let your software vendor know what type of functionality you would like to see added to your vision measurement system. If you ask, you may get it sooner than you expect.