Quality Magazine

Quality 101: Vision Sensors

June 1, 2007
The sensor reads direct-marked Data Matrix bar codes. Source: Banner Engineering

To ship consistently high-quality products while keeping costs down, companies are incorporating more automated inspections into their manufacturing processes. Photoelectric sensors can handle simple visual inspections, but complex inspections require machine vision. The range of machine vision products available-from off-the-shelf $1,000 vision sensors to custom systems that run tens of thousands of dollars-creates opportunities for automation while reducing waste and improving productivity.

The term machine vision applies to optical systems that include an industrial camera to capture an image and a processor to analyze operator-specified features of that image. In low-cost vision sensors-sometimes called smart cameras-the processor is built into the camera; in custom systems, the processor is in a PC.

Machine vision uses complex processes, but they can be grouped into three main categories: capture, analyze and communicate.
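As a rough illustration, the three stages can be sketched as a loop in Python. All of the function names here are stand-ins for this article, not any vendor's actual API.

```python
# Illustrative sketch of the capture -> analyze -> communicate cycle a
# vision device repeats for every part; all names are hypothetical.

def inspect_part(capture, analyze, communicate):
    image = capture()         # grab a frame from the imager
    result = analyze(image)   # run the operator-configured tools
    communicate(result)       # report pass/fail to the controller
    return result

# Stand-in functions for one cycle: a uniformly bright 4x4 "image"
# passes a simple brightness check.
passed = inspect_part(
    capture=lambda: [[200] * 4 for _ in range(4)],
    analyze=lambda img: all(px > 100 for row in img for px in row),
    communicate=lambda ok: print("PASS" if ok else "FAIL"),
)
```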

The sensor inspects the integrity of a metal part. Source: Banner Engineering


Three factors are critical in capturing a good image: resolution, lighting and lens.

  • Resolution. The amount of detail in the captured image-the resolution, expressed as the pixel count-depends on how many light-sensitive cells the imager has. Unlike a photoelectric sensor, which uses a single photoelectric cell, a vision sensor has an imager chip containing several thousand or even a few million light-sensitive cells to capture an entire scene. That pixel count might seem low to those familiar with consumer digital cameras, but it provides enough information for industrial inspections. The higher the resolution, the clearer the image, whether the camera is centimeters or meters away from the object. The tradeoff is speed, which is why operators should use the minimum resolution necessary for the application.

  • Lighting. The lighting must be sufficiently bright and correctly angled to create contrast that makes the feature of interest stand out from its background. To that end, a machine vision lighting subindustry has evolved, offering specialized lighting such as ring lights that surround the lens, low-angle lights for revealing raised features or texture, or back lights to emphasize an object’s silhouette.

  • Lens. Lens quality determines how much the edges of the image are distorted, how uniformly bright the image is from the center to the edges and how accurately the camera perceives color.
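The resolution-versus-speed tradeoff in the first bullet can be made concrete with some back-of-the-envelope arithmetic; the processing throughput figure below is purely an assumption for illustration.

```python
# Sketch of the resolution/speed tradeoff. The processing throughput
# (pixels analyzed per second) is an assumed, illustrative number.

def max_inspection_rate(width, height, pixels_per_second=50_000_000):
    """Frames per second the processor could analyze at this resolution."""
    return pixels_per_second / (width * height)

for w, h in [(640, 480), (1280, 1024), (2048, 1536)]:
    print(f"{w}x{h}: ~{max_inspection_rate(w, h):.0f} inspections/s")
```

Quadrupling the pixel count roughly quarters the achievable inspection rate, hence the advice to use the minimum resolution the application needs.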

The sensor verifies that stamped knockouts have been completely removed from the part. Source: Banner Engineering


The purpose of a vision device is to reduce large amounts of image data to several pieces of relevant information. Vision devices use image-processing techniques to locate edges, find patterns or locate variations in color or brightness. These image-processing techniques are preconfigured by an operator, but then run over and over again at high speed during production. For example, if the operator sets up the device to detect a full case of a product, the vision device knows to reject a carton with an empty cell. It can make that judgment regardless of where the empty cell is located within the camera’s field of view. It also can inspect asymmetrical objects no matter how they are rotated within a full 360-degree range.

Low-cost vision sensors come with built-in image processing tools. For custom systems, image-processing algorithms are written from the ground up, or customized to suit the application.

Tools fall into two basic groups: linear and area. Both types look for a transition in the image, but in different ways.

Because they are faster and more precise than area tools, linear tools are the best choice when the area of interest is predictable. For example, a vision sensor could use a linear tool such as the edge tool to ensure that vials rushing past on an assembly line all have their lids tightly sealed.

An area tool is the most useful when the location of the target could vary, such as a carton of plastic bottles that could be missing one or more units anywhere in the box. An area tool examines the entire region of interest for any deviation from the norm.
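A minimal sketch of the two tool types, treating an image as a 2D list of 0-255 brightness values; the thresholds and limits here are hypothetical, not taken from any particular product.

```python
# Linear tool: scan a single row of pixels for a dark-to-bright
# transition and report where the edge sits (fast and precise, but
# assumes the feature crosses this line).
def edge_tool(row, threshold=128):
    for i in range(1, len(row)):
        if row[i - 1] < threshold <= row[i]:
            return i            # pixel index of the edge
    return None                 # no edge found on this line

# Area tool: examine the whole region and pass/fail on how many bright
# pixels appear anywhere within it (position-independent, but slower).
def area_tool(image, threshold=128, min_bright=100):
    bright = sum(1 for r in image for px in r if px >= threshold)
    return bright >= min_bright
```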


After the machine vision device determines whether the results of the image processing are within operator-defined tolerances, it communicates with a controller, which triggers an operator-specified action. For example, an indicator light might go on, signaling a worker to scrap the part, or a blower or diverter might move the part off the line.
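The tolerance check itself can be as simple as a comparison; the measurement names and numbers below are invented for illustration.

```python
# Judge one inspection result against operator-defined tolerances.
def judge(measured, nominal, tolerance):
    return "pass" if abs(measured - nominal) <= tolerance else "reject"

# e.g. a hypothetical cap-height check in millimeters:
verdict_ok = judge(measured=10.2, nominal=10.0, tolerance=0.5)   # "pass"
verdict_bad = judge(measured=11.0, nominal=10.0, tolerance=0.5)  # "reject"
```

The controller would map a "reject" to whichever output the operator wired up: an indicator light, a blower or a diverter.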

Increasingly, machine vision is being integrated into networks, using communication protocols such as Modbus TCP and EtherNet/IP. In some cases, vision devices are part of closed-loop systems that self-correct based on data from the vision device. A closed-loop feedback system is more technically challenging to set up than an after-the-fact inspection, but the savings in reduced scrap and rework can make it worthwhile.
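A closed loop of this kind can be sketched as a simple proportional correction; the gain and error values are made up for illustration.

```python
# Each cycle the vision device measures how far a feature (say, a
# label) has drifted from target, and the controller nudges the
# actuator's offset to compensate. Gain and errors are illustrative.
def correct(offset, measured_error, gain=0.5):
    return offset - gain * measured_error

offset = 0.0
for error in [2.0, 1.0, 0.5]:   # simulated drift measured by the sensor
    offset = correct(offset, error)
```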

Machine vision continues to grow as device costs come down and capabilities increase. The challenge now is to make machine designers aware of the innovations in machine vision, so they can benefit from this booming technology.