The lens is responsible for delivering sufficient image quality for the vision system to extract the desired information about the object from the image. In a typical application, the lens must locate features within the field of view (FOV), keep those features in focus, maximize contrast and avoid perspective distortion. Image quality that is adequate for one application may be insufficient for another. This article explains the fundamentals of using optics to optimize a machine vision application.
 

BASICS OF MACHINE VISION OPTICS

The object area imaged by the lens is called the field of view (FOV). The FOV should cover all features that are to be inspected with tolerance for alignment errors. The working distance (WD) is the distance from the front of the lens to the object being imaged.
 
The depth of field (DOF) is the maximum object depth that can be maintained entirely in focus. The sensor size is the size of a camera sensor’s active area, typically specified in the horizontal dimension. The primary magnification is the ratio between the sensor size and the field of view. With primary magnification held constant, reducing the sensor size reduces the field of view and increasing the sensor size increases the field of view. 
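To make the relationship concrete, here is a minimal sketch of the primary magnification arithmetic in Python; the sensor and FOV values are hypothetical, not taken from any particular camera.

    # Primary magnification (PMAG) = sensor size / field of view (same units).
    # All numbers below are illustrative.

    def primary_magnification(sensor_size_mm, fov_mm):
        return sensor_size_mm / fov_mm

    sensor_h = 8.8     # horizontal size of a 2/3-inch sensor, in mm
    fov_h = 100.0      # desired horizontal field of view, in mm
    pmag = primary_magnification(sensor_h, fov_h)
    print(f"PMAG = {pmag:.3f}")                  # 0.088

    # With PMAG held constant, a smaller sensor yields a smaller FOV:
    fov_small = 6.4 / pmag                       # 1/2-inch sensor, ~6.4 mm wide
    print(f"FOV with smaller sensor = {fov_small:.1f} mm")   # ~72.7 mm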
 

RESOLUTION

Resolution is a measurement of the vision system’s ability to reproduce object detail. Take, for instance, an image of two small objects with some finite distance between them. As they are imaged through the lens onto the sensor, they are so close together that they fall on adjacent pixels. If we were to zoom in, we would see a single object two pixels wide, because the sensor cannot resolve the distance between the objects. Now suppose the separation between the objects is increased to the point that there is a pixel of separation between them in the image. This pattern of a pixel on, a pixel off and a pixel on is referred to as a line pair and is used to define the pixel-limited resolution of the system.
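The line-pair arithmetic translates directly into the smallest feature a given sensor can resolve over a given FOV. A minimal sketch; the pixel count and FOV below are invented for illustration:

    # Pixel-limited resolution: the smallest resolvable detail spans one
    # line pair, i.e. two pixels. Numbers are illustrative.

    fov_mm = 50.0        # horizontal field of view
    pixels_h = 2448      # horizontal pixel count (e.g. a 5-megapixel sensor)

    pixel_size_obj = fov_mm / pixels_h    # object-space size of one pixel, mm
    min_feature = 2 * pixel_size_obj      # one line pair = two pixels
    freq_lp_mm = 1.0 / min_feature        # spatial frequency, line pairs/mm

    print(f"Object-space pixel size: {pixel_size_obj * 1000:.1f} um")   # ~20.4
    print(f"Smallest resolvable feature: {min_feature * 1000:.1f} um")  # ~40.8
    print(f"Pixel-limited frequency: {freq_lp_mm:.1f} LP/mm")           # ~24.5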

Figure 3 shows a spark plug being imaged on two sensors with different levels of resolution. Each cell in the grid on the image represents one pixel. The resolution in the image on the left with a 0.5 megapixel sensor is not sufficient to distinguish characteristics such as spacing, scratches or bends in the features of interest. The image on the right with a 2.5 megapixel sensor provides the ability to discern details in the features of interest.

Targets can be used to determine the limiting resolution of a system and how well the sensor and optics complement each other. 
 

CONTRAST

Contrast is the separation in intensity between blacks and whites in an image. The greater the difference between a black and a white line, the better the contrast. Take, for instance, two different images of a UPS label captured with the same high-resolution sensor at the same position and focal length but with different lenses. The difference is that the lens used to take the image on the right provides higher contrast because it is a better match for the high-resolution sensor.
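Contrast is commonly quantified with the Michelson formula, (Imax - Imin) / (Imax + Imin). A minimal sketch; the intensity values are invented to illustrate a well-matched versus a poorly matched lens:

    # Michelson contrast = (Imax - Imin) / (Imax + Imin).
    # Intensity values (0-255) are invented for illustration.

    def contrast(i_max, i_min):
        return (i_max - i_min) / (i_max + i_min)

    # A well-matched lens keeps label lines close to black and white:
    print(f"Well-matched lens: {contrast(230, 25):.0%}")    # ~80%
    # A poorly matched lens washes the same lines toward gray:
    print(f"Poorly matched lens: {contrast(160, 95):.0%}")  # ~25%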
 
Color filtering can be used to increase contrast. A machine vision application designed to distinguish between red and green gel capsules adds either a red or a green filter, increasing the contrast to the point that the vision solution becomes much more robust.
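The effect is easy to see numerically. Below is a sketch of the channel arithmetic with invented RGB values; a physical filter in front of the lens accomplishes the same thing optically:

    import numpy as np

    # Invented 8-bit RGB values for red and green gel capsules.
    red_capsule = np.array([200, 40, 40])     # R, G, B
    green_capsule = np.array([40, 200, 40])

    # In a plain grayscale image the capsules average to the same intensity:
    print("Grayscale:", red_capsule.mean(), green_capsule.mean())  # 93.3, 93.3

    # Looking only at the red channel (the digital analog of a red filter)
    # separates them strongly:
    print("Red channel:", red_capsule[0], green_capsule[0])        # 200 vs 40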
 

DIFFRACTION

In the real world, diffraction, sometimes called lens blur, reduces the contrast at high spatial frequencies, setting a lower limit on image spot size.  Figure 7 shows how these effects degrade the quality of the image. The object on the top of Figure 7 has a relatively low spatial frequency while the object on the bottom has a higher spatial frequency. After passing through the lens, the upper image has 90% contrast while the bottom image has only 20% contrast due to its higher spatial frequency.
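Two standard approximations give a feel for the diffraction floor (these are general optics results, not figures from the article): the Airy disk diameter is about 2.44 x wavelength x F/#, and the incoherent cutoff frequency is about 1 / (wavelength x F/#). A minimal sketch with illustrative values:

    # Diffraction-limited spot size and cutoff frequency.
    # Standard approximations; wavelength and F/# are illustrative.

    wavelength_um = 0.55   # green light, ~550 nm
    f_number = 8.0         # illustrative aperture setting

    airy_diameter_um = 2.44 * wavelength_um * f_number
    cutoff_lp_mm = 1000.0 / (wavelength_um * f_number)   # um -> mm conversion

    print(f"Airy disk diameter: {airy_diameter_um:.1f} um")   # ~10.7 um
    print(f"Diffraction cutoff: {cutoff_lp_mm:.0f} LP/mm")    # ~227 LP/mm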
 
Now let’s look at lens performance across an entire field of view. The three images enclosed in different colors in Figure 8 are close-up views of the boxes shown in the same color on the larger image. The chart at the bottom of Figure 9 shows the modulation transfer function (MTF) of the lens at each position in the field of view. MTF is a measurement of the ability of an optical system to reproduce various levels of detail from the object to the image, as shown by the degree of contrast in the image. The MTF measures the percentage of contrast transferred from the object to the image as a function of spatial frequency, in units of line pairs per millimeter (LP/mm). The lens in Figure 9 has 59% average contrast in the center section, 56% in the bottom middle and 62% in the corner. The image demonstrates the importance of checking the MTF of a lens over the entire area that will be used in the application.
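In practice, a point on the MTF curve is the ratio of measured image contrast to the (ideally 100%) target contrast at one spatial frequency. A sketch with invented line-pair intensities, chosen so the ratios reproduce the percentages quoted above:

    # MTF at one spatial frequency = image contrast / object contrast.
    # The target is ideal black/white (100% contrast); intensities are
    # invented so the ratios match the figures quoted in the text.

    def michelson(i_max, i_min):
        return (i_max - i_min) / (i_max + i_min)

    object_contrast = michelson(255, 0)   # ideal target = 100%

    positions = {
        "center": (200, 52),          # -> 59%
        "bottom middle": (195, 55),   # -> 56%
        "corner": (207, 48),          # -> 62%
    }
    for name, (i_max, i_min) in positions.items():
        mtf = michelson(i_max, i_min) / object_contrast
        print(f"{name}: MTF = {mtf:.0%}")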
 
The same test applied to a different lens with the same focal length and field of view, using the same image sensor, produces contrast reduced to 47% in the center, 42% in the bottom middle and 37% in the corner. A third lens performs well in the center of the image at 52% contrast, drops off to 36% in the corner, and drops even more to 22% in the bottom middle. Note that all three of these lenses have the same FOV, depth of field, resolution and primary magnification. Lens performance can have a dramatic impact on the ability of the sensor to discern the details that are important in the application.
 

DEPTH OF FIELD

Depth of field is the difference between the closest and furthest working distances at which an object may be viewed before an unacceptable blur is observed. The F-stop number (F/#), also called the aperture setting or the iris setting of the lens, helps to determine the depth of field. The F/# is the focal length of the lens divided by the diameter of its aperture. F/#’s for most lenses are specified at a working distance of infinity. As the F/# is increased, the lens collects less light. Making the aperture smaller (increasing the F/#) increases the depth of field, as does increasing the allowable blur. The best focused position lies within the depth of field, closer to the end of the depth of field nearest the lens.
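A common thin-lens approximation (not from the article) makes the F/# relationship concrete: DOF is roughly 2 x N x c x (m + 1) / m^2, where N is the F/#, c the acceptable blur spot (circle of confusion) and m the magnification. A minimal sketch with invented values:

    # F/# = focal length / aperture diameter, plus an often-quoted
    # thin-lens depth of field approximation. Values are invented.

    def f_number(focal_length_mm, aperture_diameter_mm):
        return focal_length_mm / aperture_diameter_mm

    def depth_of_field_mm(n, blur_mm, mag):
        return 2.0 * n * blur_mm * (mag + 1.0) / mag**2

    print(f"F/# = {f_number(25.0, 6.25):.0f}")   # 25 mm lens, 6.25 mm pupil -> F/4

    for n in (4.0, 8.0):                         # closing the iris doubles N...
        dof = depth_of_field_mm(n, 0.02, 0.1)
        print(f"F/{n:.0f}: DOF = {dof:.1f} mm")  # ...and doubles the DOF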
 
Figure 13 shows a depth of field target with a set of lines on a sloping base. A ruler on the target makes it simple to determine how far above and below the best focus the lens is able to resolve the image.
 
Figure 14 shows the performance of a short fixed focal length lens used in machine vision applications. With the aperture completely open, looking far up the target in the area defined by the red box, which is beyond the depth of field range, we see a considerable amount of blur. When we close the iris to the point where very little light comes in, the overall resolution is reduced and both the numbers and lines become less clear.
 
Figure 15 shows the same lens, this time at the best focus position. With the iris completely open, we see the image and numbers clearly. With the iris half open, the image becomes blurred. The resolution degrades even further with the iris mostly closed.
 
A lens with a different focal length, designed specifically for machine vision applications, produces the following results: with the iris completely open, the lines are gray rather than black and white, and the numbers are somewhat legible but highly blurred; with the iris mostly closed, the resolution improves markedly in the area of interest, and the image is sharp throughout the range of working distances shown.
 

DISTORTION

Distortion is an optical error, or aberration, that results in a difference in magnification at different points within the image. Perspective distortion arises because the farther an object is from the camera, the smaller it appears through a lens. Perspective distortion can be minimized by keeping the camera's line of sight perpendicular to the plane being imaged.
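The pinhole model shows why: image size scales as focal length times object size divided by working distance, so identical objects at different distances image at different sizes. A minimal sketch with invented dimensions:

    # Pinhole model: image size = focal_length * object_size / distance.
    # Identical pins at different distances image at different sizes,
    # which is the root of perspective distortion. Values are invented.

    def image_size_mm(f_mm, object_mm, distance_mm):
        return f_mm * object_mm / distance_mm

    f = 25.0                     # focal length, mm
    pin_height = 10.0            # two identical pins, mm
    near, far = 200.0, 250.0     # working distances, mm

    print(f"Near pin images at {image_size_mm(f, pin_height, near):.2f} mm")  # 1.25
    print(f"Far pin images at {image_size_mm(f, pin_height, far):.2f} mm")    # 1.00
    # A telecentric lens holds magnification constant over this range,
    # so both pins would image at the same size.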
 
Perspective distortion can also be minimized optically with a telecentric lens, which maintains constant magnification over a range of working distances. The objects in this example are four pins mounted perpendicular to a base.
 
Figure 21 shows perspective distortion in a real world scenario. The object in the top center appears through two different lenses in the left and right lower images. Using a conventional fixed focal length lens produces the image on the lower left. The two parts appear to be different heights on the monitor even though they are exactly the same height in real life. In the image on the lower right, the telecentric lens has corrected for perspective distortion and the objects can be measured accurately. 
 

CONCLUSION

Optics is very important to the overall success of a machine vision application. The examples shown here demonstrate the importance of considering the overall system, including the optics, lighting and vision system, as opposed to simply picking out components. When you discuss the application with suppliers, be sure to explain the goals of the inspection completely, rather than just asking for specific components, so that the supplier can contribute to the success of the application.

Tech Tips

  • Targets can be used to determine the limiting resolution of a system and how well the sensor and optics complement each other.
  • In the real world, diffraction, sometimes called lens blur, reduces the contrast at high spatial frequencies, setting a lower limit on image spot size.
  • Perspective distortion can also be minimized optically with a telecentric lens.

John Lewis is Market Development Manager of Cognex in Natick, MA.