Machine vision systems with smart software can provide measurements more efficiently than ever before.
Machine vision is the application of computer vision to the real-time inspection of products in a manufacturing environment. It is an automated vision system tailored to factory use for product characterization, defect detection, gaging, product feature or code identification, as well as product and process control.
It should act reliably despite challenges ranging from acceptable variation in the products themselves to hardware and environmental variation.
Decisions to provide actions are quantitative. A computer digitally decides to accept or reject an action, or it digitally provides a quantitative measurement that is used for either inspection or process control.
Ultimately, a machine vision system is a number cruncher that may respond to noise signals from irrelevant changes in surface texture or color, product displacement, illumination intensity and uniformity, or environmental factors. These change the photodetected image signal that provides the numbers for the decisions.
All parts of the system, and their interactions, can affect this decision. The quality level lives or dies by the numbers used in this decision.
Where possible, the machine vision system should contain smart functionality that anticipates, senses and responds to acceptable changes in features specific to the products being made, the performance of the vision hardware and software used, and the environment. Smart machine vision systems can provide these functions with or without burdening the machine vision computer using smart sensors, smart optics and smart software. Possible interferences with the decision should be a center of attention in design and operation.
The word smart has been applied to various aspects of sensing systems, as well as to entire systems. However, it is used with different meanings in different contexts. Even though smart sensors have an IEEE 1451.4 standard covering different aspects of sensing, signal conditioning and calibration, interfacing and communication, one part of it, the transducer electronic data sheet, is available for calibration of sensors without use of the other parts of the standard.
The word smart is used to promote various products with different specifications and features to suggest responsive functionality.
Smart sensors and software can provide additional intelligence and a dynamic functionality that enhance their performance. Smart sensors can be small and rugged, contain a microprocessor, provide calibration and communicate wirelessly with a processor or network. This provides extended functionality, reliability in the sensor calibration, extended communication capabilities and off-loading of computer processing for faster operation.
Smart sensors are becoming increasingly common. Smart phones carry sensors to detect ambient light to adjust display illumination, proximity sensors to turn off displays when the phone is held to the ear and accelerometers to change a screen’s orientation to match the orientation of the device. Municipalities have used smart sensors in monitoring city-wide utility networks.
For smart machine vision systems, smart sensors can be used to monitor hardware, such as light intensity variations, and provide measurements with a minimum burden on the machine vision system.
Lighting, Optics, Smart Optics
The golden rule for any computer-based sensing system with multiple stages (including machine vision, radar systems and smart bombs) is to obtain the highest signal-to-noise ratio in the first stage. For machine vision systems and smart machine vision systems, the signal is the detected light from the feature of interest and the noise is the detected light from all other causes.
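One simple way to quantify this is to compare the feature-to-background intensity difference against the scatter of the background. The sketch below assumes small lists of 8-bit grayscale pixel values for a feature region and a background region; the sample values are hypothetical.

```python
# Sketch of a first-stage signal-to-noise estimate for machine vision,
# assuming 8-bit grayscale pixel samples. Sample values are hypothetical.
from statistics import mean, stdev

def contrast_snr(feature_pixels, background_pixels):
    """Ratio of the feature/background intensity difference to the
    background noise (sample standard deviation)."""
    signal = mean(feature_pixels) - mean(background_pixels)
    noise = stdev(background_pixels)
    return abs(signal) / noise if noise else float("inf")

feature = [200, 198, 205, 202]      # bright feature of interest
background = [50, 55, 48, 52]       # dark, slightly noisy background
print(round(contrast_snr(feature, background), 1))
```

Raising this ratio optically, through lighting and filtering, pays off in every later stage, since software cannot recover contrast that was never detected.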
Optical techniques to improve the optical contrast are known from photography including the use of filters, polarizers and directional lighting. Smart optics extends this.
According to a November 2010 Economist report, smart optics improve reliability and simplify software and hardware. Optics can even eliminate the need for an image of the object if sufficient information is provided for a decision.
For example, laser diffraction from a wire gives greater accuracy as the wire’s diameter approaches the wavelength of light, with no image of the wire formed. By Babinet’s principle, the diffraction pattern from the wire should match the diffraction pattern from a single slit of the same width, except for the on-axis intensity.
Sensing can even eliminate all irrelevant detail in an imaged object. For example, an optical color filter matched to the color of a feature of interest (FOI) can eliminate all details but that of the FOI. For defect detection of integrated circuits, optical spatial filtering has been used to eliminate the image of the required circuit detail, and provide an image of defects only.
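The same matched-filter idea can be sketched digitally: keep only pixels whose color lies close to the FOI color and suppress everything else. The RGB reference color and match radius below are hypothetical.

```python
# Digital analogue of a matched optical color filter: pass only pixels
# near the feature-of-interest (FOI) color. Values are hypothetical.

FOI_COLOR = (200, 40, 40)   # hypothetical red feature of interest (RGB)
RADIUS = 60                 # hypothetical color-distance threshold

def passes_filter(pixel):
    """True when a pixel's color is within RADIUS of the FOI color."""
    dist2 = sum((p - f) ** 2 for p, f in zip(pixel, FOI_COLOR))
    return dist2 <= RADIUS ** 2

print(passes_filter((195, 45, 50)))   # near the FOI color -> kept
print(passes_filter((60, 180, 60)))   # green background -> suppressed
```

An optical filter does this before detection, which is preferable: it raises first-stage contrast instead of discarding pixels after the noise has already been digitized.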
Measure the Measurement
Software can be used to verify that practical and consistent results have been obtained by the initial image acquisition and digital image processing. Internal consistency of results from expected features of the product’s image can be used to confirm the reliability of detection and measurements.
Gaging the thickness of a coating on a wire, for example, can provide two coating thickness values (one for each edge of the wire) as well as the value of the wire diameter. It should be confirmed that the sum of these three values equals the value of the outer diameter of the assembly.
Additionally, using measurements from multiple scan-lines of the camera, statistical standard deviations of measurements provide a measure of gaging reliability.
Large standard deviations in coating thickness measurements from multiple scan lines have been used to alert operators to the presence of scratches formed by worn-out extrusion tools. For defect detection, periodically measuring the average value of the light intensity along a long featureless product can be used to confirm (or adjust) the threshold intensity value that determines the detection of a defect.
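Both self-checks above are a few lines of code. The sketch below assumes per-scan-line thickness measurements in millimeters, a featureless stretch of product for background intensity, and hypothetical alarm limits.

```python
# Sketch of two software self-checks: scan-to-scan scatter as a tool-wear
# alert, and a defect threshold re-derived from background intensity.
# The limits and sample values are hypothetical.
from statistics import mean, stdev

SCRATCH_STDEV_LIMIT = 0.02   # hypothetical mm limit suggesting tool wear

def scratch_alert(thicknesses):
    """Flag worn extrusion tooling when scan-to-scan scatter is large."""
    return stdev(thicknesses) > SCRATCH_STDEV_LIMIT

def updated_threshold(background_intensities, margin=30):
    """Re-derive a defect-detection threshold from the average background
    intensity along a featureless stretch of product."""
    return mean(background_intensities) + margin

print(scratch_alert([1.50, 1.51, 1.49, 1.50]))   # tight -> no alert
print(scratch_alert([1.50, 1.58, 1.43, 1.55]))   # scattered -> alert
print(updated_threshold([52, 50, 54, 48]))       # adjusted threshold
```

Periodically recomputing the threshold this way lets the system ride out slow illumination drift instead of misreading it as defects.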
The Future of Smart Systems and Sensors
Smart machine vision systems are just a small part of smart systems in the world. In November 2010, The Economist said that countries are spending “large chunks” of their stimulus money on smart infrastructure, with Siemens and General Electric at the helm of smart system innovation.
These smart systems will incorporate increasing numbers of smart sensors to provide information about the product produced, manufacturing machinery operation and degradation, and the environment.
For the near future, one can expect that smart machine vision systems will utilize signals from more smart sensors, both optical and electronic, in addition to the cameras that give these systems their vision.
Machine vision systems continue to expand their functionality and reliability with enhanced software and hardware. The use of smart machine vision systems is expanding as the systems’ own functionality increases and with the increasing availability of simple, small, inexpensive sensors to augment and even replace machine vision functions.
Decreased size and cost of microprocessors have been the driving force for the growth of electronics. Decreased size and cost of smart sensors, and the growth of IT structures will continue to fuel growth of machine vision systems as standalone and networked systems.
For more information on machine vision, visit www.qualitymag.com for the following:
“Can Machine Vision be Your Answer?”
“Why Machine Vision Needs Standards”
“Selecting a World-Class Vision Systems Integrator”
Tech Tips
The machine vision system should contain smart functionality that anticipates, senses and responds to changes in features specific to the products being made.
Software can be used to verify that practical and consistent results have been obtained by the initial image acquisition and digital image processing.
The golden rule for any computer-based sensing system with multiple stages (including machine vision, radar systems and smart bombs) is to obtain the highest signal-to-noise ratio in the first stage.