“I am looking for a camera that rejects parts with incorrect drill-holes.”
Every day, machine vision providers receive such inquiries. Often, prospective customers are surprised at the answer: “Sorry, but cameras can’t do that.”



Prospective customers are often surprised to learn that cameras alone cannot reject parts with incorrect drill-holes. Source: The Imaging Source Taiwan

While a camera is part of the solution, a complete machine vision system is required to solve the aforementioned problem.

A machine vision system consists of illumination, optics, camera, computer and software.

At first glance, the camera appears to be the source of the image. In fact, however, the image is created by the illumination. The charge-coupled device (CCD) chip transforms the photons striking it into an electrical signal. The camera electronics digitize this signal and make it available as a “raw digital image.”
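To make the notion of a raw digital image concrete, the following minimal sketch (Python with NumPy, used purely as an illustration; the 480 x 640 resolution and the intensity values are arbitrary assumptions) treats an 8-bit monochrome image as a two-dimensional array of brightness values:

    import numpy as np

    # A raw 8-bit monochrome image is simply a 2-D array of intensity values
    # between 0 (black) and 255 (white). The 480 x 640 resolution is arbitrary.
    raw_image = np.zeros((480, 640), dtype=np.uint8)

    # A brighter rectangular region, such as a well-illuminated object might produce.
    raw_image[200:280, 250:390] = 180

    # The kind of values a visualization or analysis tool works with.
    print("resolution:", raw_image.shape)   # (480, 640)
    print("mean / min / max intensity:", raw_image.mean(),
          raw_image.min(), raw_image.max())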

The computer, or rather the software it runs, uses this raw digital image for two basic tasks: visualization and automatic image analysis. Automatic image analysis, however, is only reliable with controlled illumination and simple objects. In the context of visualization, the illumination’s influence on the resulting image should not be underestimated.



Illumination

There is no image without light. An image is created by the interaction between an object and photons. As banal as this sounds, in practice illumination is a complex technology. This is true not only for industrial, medical and scientific applications, but also for aesthetically oriented ones. Professional photo studios are dominated not by cameras but by various types of illumination.

The rapid growth of the machine vision market allows some component manufacturers to concentrate entirely on illumination. These manufacturers offer LED illumination in many versions. The range of these components, in features as well as in price, reflects the complexity of the subject.

Optics

Cameras used in the domains of industry, medicine and science are usually shipped without a lens. The operator is therefore able to adapt the optics to his specific requirements. Besides normal lenses, microscopes, endoscopes and telescopes are also used.

Due to their standardized mount, C-mount lenses are widely used in machine vision. Selecting such a lens requires only basic arithmetic: addition, multiplication and division, as the sketch below illustrates.
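The following sketch estimates the focal length of a C-mount lens from sensor size, working distance and required field of view. The thin-lens approximation and all numbers are example assumptions, not values from a data sheet:

    # Rough C-mount lens selection: estimate the focal length needed to cover
    # a given field of view from a given working distance (thin-lens
    # approximation; all numbers are example values).

    def focal_length_mm(sensor_mm, working_distance_mm, field_of_view_mm):
        # f = d * s / (FOV + s), from the thin-lens equation with
        # magnification m = s / FOV.
        return working_distance_mm * sensor_mm / (field_of_view_mm + sensor_mm)

    # Example: 1/3-inch CCD (about 4.8 mm horizontal), part 300 mm away,
    # required field of view 100 mm.
    f = focal_length_mm(4.8, 300.0, 100.0)
    print("required focal length: about %.1f mm" % f)   # roughly 13.7 mm

In practice, one would then choose the nearest standard focal length below this value so that the required field of view is not cut off.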

A machine vision system consists of five basic components: illumination, optics, camera, computer and software. Source: The Imaging Source Taiwan

Camera

As discussed previously, the camera does not create the image but only transforms optical signals (light) into electrical ones (voltage) and digitizes them (raw digital image). Thus, a camera is not able to compensate for incorrectly chosen illumination or optics. On the other hand, it is important to select an appropriate camera to avoid malfunctions of the system or unnecessary costs. The most important decision criteria include:

• Monochrome/color. As a rule of thumb, color cameras are only used if the different colors of an image “carry” information.

For all other applications, it is recommended to use monochrome cameras because color cameras have some disadvantages:
• They are less sensitive than monochrome cameras.
• Assuming the same number of pixels, the effective resolution of a color CCD is lower than that of a monochrome CCD. Every second pixel of a color CCD is sensitive to green, while blue and red share the remaining pixels.
• Because one expects a green, blue and red value for every pixel, the raw digital image of a color CCD has to undergo a color interpolation (see the demosaicing sketch following this list). This interpolation requires extra processing power and bandwidth during the data transfer.
• IR cut filter. In contrast to the human eye, CCDs are also sensitive to the near infrared (IR). To approximate the human eye, cameras are equipped with IR cut filters. However, cameras without IR cut filters offer more flexibility, because the operator is able to adapt them to his requirements with custom filters.
Therefore, manufacturers of modern machine vision cameras usually do not equip their monochrome cameras with IR cut filters and also offer variants of their color cameras without them.
• Format. The format describes the CCD’s size. It is an important parameter when deciding which lens to choose. Typical FireWire camera formats range from ¼ inch to ²⁄³ inch.
• Resolution. To avoid unnecessarily high costs, as well as large amounts of data, the resolution should be as low as possible. For two typical application areas of machine vision there are the following rules of thumb: the measurement of a distance requires at least 10 pixels, while checking the presence of an object requires at least 2 or 3 pixels. These rules of thumb serve only as a basic indicator; the resolution sketch following this list turns them into a quick estimate.
• Frame rate. For visualization purposes, as well as for the setup of illumination and optics, a frame rate of 15 fps (frames per second) is usually sufficient. Automatic image analysis often requires a much lower frame rate.
• I/O and trigger. Digital I/Os (inputs/outputs) and appropriate software allow the camera to control external devices and to respond to signals from external devices. The “trigger” is a special input, comparable to the shutter button of a photo camera. A pulse appearing at this input starts the exposure of an image. After the image has been output, the camera waits for the next trigger pulse.
• Interface. Machine vision pioneers had to take their first steps with pick-up tube cameras, which were based on the analog video standards NTSC and PAL. One of the principal issues was the digitization of the analog video signal.
Meanwhile, pick-up tubes have been almost completely replaced by CCD chips, which already provide a digital signal. In addition, today’s PCs have fast and easy-to-handle digital interfaces, such as FireWire. Therefore, new projects in the domains of industry, medicine and science are mainly based on FireWire cameras.
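To give an impression of the color interpolation mentioned in the list above, the following sketch demosaics a raw Bayer frame with OpenCV. The file name and the BG pixel layout are assumptions, since the actual pattern depends on the sensor:

    import cv2

    # Raw frame from a color CCD: one 8-bit value per pixel, arranged in a
    # Bayer pattern (every second pixel green, the rest red or blue).
    raw = cv2.imread("raw_bayer_frame.png", cv2.IMREAD_GRAYSCALE)

    # Color interpolation ("demosaicing") estimates the two missing color
    # values for every pixel. The BG layout is an assumption; it depends on
    # the camera.
    color = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)

    print("raw frame:  ", raw.shape)    # (height, width)
    print("color frame:", color.shape)  # (height, width, 3) - three times the data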
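The resolution rules of thumb can be turned into a quick estimate in the same spirit; the field of view and hole diameter below are example assumptions:

    # Rough resolution estimate from the rules of thumb above: about 10 pixels
    # across a distance to be measured, 2 to 3 pixels across a feature whose
    # mere presence has to be checked.

    def required_pixels(field_of_view_mm, smallest_feature_mm, pixels_per_feature):
        return round(field_of_view_mm / smallest_feature_mm * pixels_per_feature)

    # Example: 100 mm field of view, 2 mm drill-holes.
    print("measurement:   ", required_pixels(100.0, 2.0, 10), "pixels")  # 500
    print("presence check:", required_pixels(100.0, 2.0, 3), "pixels")   # 150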



Computer

Machine vision in the domains of industry, medicine and science is dominated by PCs and the Windows operating system. The use of modern interfaces such as USB and FireWire requires Windows XP or Windows Vista. Efficient visualization requires graphics hardware with on-board memory. If image sequences are to be recorded, the computer configuration should be similar to that of a video editing system: a fast processor and a fast, separate hard disk.

The requirements of a computer configuration for automatic image analysis vary. In simple applications with one camera and a slow sequence of images, a low-end computer may be sufficient. However, increasing complexity, more cameras and higher frame rates may lead to a processing load that has to be distributed among several PCs.



Software

The software has to perform four tasks. A driver integrates the camera into the operating system, while a programming tool supports the setting of the camera’s parameters as well as the transmission of the images.

The third task is the analysis of the images. Since there is no off-the-shelf analysis software for special cases, users have to develop it themselves. A programming tool can be very helpful here. Independent of the tools, this work rests on two important requirements:

- An expert has to be able to describe the problem, such as incorrect drill-holes, in a way that allows the realization of an algorithm.
- The illumination has to be designed so that the reflected light reliably represents the problem.
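As a sketch of what such an algorithm might look like for the drill-hole example from the introduction, the code below counts roughly circular dark regions on a bright, front-lit part surface and compares the count with the expected one. OpenCV is assumed, the part is assumed to fill the image, and the threshold, minimum area, circularity limit and expected hole count are made-up example values:

    import math
    import cv2

    EXPECTED_HOLES = 4       # example value for the inspected part
    MIN_HOLE_AREA = 50       # pixels; filters out noise (example value)
    MIN_CIRCULARITY = 0.7    # 1.0 would be a perfect circle (example value)

    # Front-lit part filling the frame: drill-holes appear as dark spots on a
    # bright surface. Inverted thresholding turns them into white blobs.
    image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(image, 100, 255, cv2.THRESH_BINARY_INV)

    # OpenCV 4.x: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    holes = 0
    for contour in contours:
        area = cv2.contourArea(contour)
        perimeter = cv2.arcLength(contour, True)
        if area < MIN_HOLE_AREA or perimeter == 0:
            continue
        circularity = 4 * math.pi * area / (perimeter * perimeter)
        if circularity >= MIN_CIRCULARITY:
            holes += 1

    verdict = "PASS" if holes == EXPECTED_HOLES else "REJECT"
    print(verdict, "-", holes, "drill-holes found")

Such a script only works if the illumination makes the holes stand out reliably, which is exactly the second requirement above.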

The fourth task is visualizing the results, which may range from an LED that shines red or green to a complex visualization with archiving in a database.

Machine vision can be used in many applications. From factory automation and quality inspection to medical and microscopy systems, the uses for machine vision are endless. V&S

Tech Tips

These are the most important, very basic rules for the development of a machine vision system:

- The image is created by the interaction between light and object.

- The camera has only one function: producing an electrical representation of the image so that it can be analyzed by a computer.

- The software for this automatic image analysis is usually not available off the shelf. Therefore, it has to be written by a system engineer.

- If the object is not easy to describe, automatic image analysis tends to become complex. In these cases, expert knowledge has to be converted into algorithms.