Handling Distortion and Perspective Errors in Imaging Systems
While all vision systems try to accurately reproduce an object's features, the degree of image resolution and contrast needed is determined by the specific application (see "Now You See It," Quality, May 2002, pg. 8). The application also dictates the acceptable degree of distortion and perspective error. This raises the question: What are these errors, and what role do they play in determining sufficient image quality?
Distortion is an optical error or aberration, caused by the lens, which results in differences in magnification at different points in the image. Although distortion does limit the quality of the image, no image information is lost -- it is merely misplaced.
No lens is perfect. All lenses introduce some distortion, which grows worse toward the edges of the field of view. The difference between the actual (distorted image) position and the predicted (nondistorted object) position -- that is, the amount of distortion -- can be expressed as a percentage relative to the center of the field:
% Distortion = [(AD-PD)/PD] X 100%
where AD = actual distance from the center and PD = predicted distance from the center.
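As a sketch, the formula above can be applied directly. The helper function below is illustrative only and not taken from any particular measurement package; the sign convention follows the formula, so negative values correspond to points pulled toward the center (barrel distortion) and positive values to points pushed outward (pincushion distortion).

```python
def percent_distortion(actual_distance, predicted_distance):
    """Percent distortion at an image point, relative to the field center.

    actual_distance:    AD, measured radial distance of the imaged point
    predicted_distance: PD, radial distance the point would have with no distortion

    Negative result -> barrel distortion; positive -> pincushion distortion.
    """
    return (actual_distance - predicted_distance) / predicted_distance * 100.0

# A point predicted at 10 mm from center that actually lands at 9.8 mm
# shows -2% (barrel) distortion:
print(percent_distortion(9.8, 10.0))
```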
If the distortion at the edge of the sensor is less than the size of a pixel, then the distortion will have no effect on the recorded image. And, if the distortion is less than ~2%, the human eye will not perceive it.
As always, the application dictates how much distortion is acceptable. Distortion is particularly troublesome for applications that measure features within the image, but it can be accounted for and corrected in software. After the distortion has been measured, software can correct for it by factoring the percentage distortion into the calculation of measurements. Distortion error is more difficult to correct in short-focal-length lenses, such as wide-angle or fisheye lenses.
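The correction described above can be sketched by inverting the distortion formula. This is a simplified illustration, assuming the percent distortion at a given radius has already been measured during calibration; real correction software typically fits a polynomial distortion model across the whole field rather than a single percentage.

```python
def correct_radial_distance(measured_distance, pct_distortion):
    """Recover the true (undistorted) radial distance from a measured one,
    given the percent distortion calibrated at that radius.

    From %D = (AD - PD) / PD * 100, solving for the predicted distance PD:
        PD = AD / (1 + %D / 100)
    """
    return measured_distance / (1.0 + pct_distortion / 100.0)

# A feature measured 9.8 mm from center through a lens with -2% distortion
# at that radius is actually 10.0 mm from center:
print(correct_radial_distance(9.8, -2.0))
```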
Perspective error, also known as parallax, is part of our everyday experience in gaging distance. Our brains expect closer objects to appear larger than those far away. Perspective also exists in conventional imaging systems, in which the magnification of the object changes with its distance from the lens.
Perspective is harmless when an image is simply viewed. If, however, an imaging system must measure the length of an object, perspective error becomes critical. Perspective errors are most troublesome in measurement applications involving objects with depth or objects moving relative to the lens.
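The effect of perspective on measurement can be illustrated with a simple thin-lens approximation, in which magnification is roughly the focal length divided by the working distance. The numbers below are hypothetical and the model is a sketch, not a real lens design:

```python
def apparent_size(object_size, focal_length, working_distance):
    """Image-plane size of an object under a simplified thin-lens model,
    where magnification ~ focal_length / working_distance.
    (Illustrative approximation; real lenses deviate from this.)
    """
    return object_size * focal_length / working_distance

# Two identical 10 mm features on a part with 20 mm of depth,
# imaged by a 50 mm lens at a nominal 200 mm working distance:
near = apparent_size(10.0, 50.0, 200.0)  # feature on the near face
far = apparent_size(10.0, 50.0, 220.0)   # feature on the far face
print(near, far, near / far)
```

Although both features are physically identical, the nearer one images about 10% larger; a conventional lens cannot distinguish this perspective error from a genuine size difference, which is what motivates the telecentric designs discussed next.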
Telecentric lenses are useful for these applications because they minimize perspective error. Telecentric lenses optically correct for perspective, thereby allowing objects to remain the same perceived size, independent of their location within a depth of field.
While telecentric lenses do not inherently have more depth of field than conventional lens designs, their images tend to blur symmetrically. The center of the blur corresponds to the center of the object, and as a result, no perspective error is introduced in measuring the center-to-center distance between objects. This is true even if the object is not in focus. It is important to remember that a telecentric lens system can only provide a field of view as large as the front lens in the system. So, increasing field of view for a telecentric system will mean increasing front lens size, which adds to the weight and cost.
Julianne Wagner is product line manager at Edmund Industrial Optics (Barrington, NJ). She can be reached at email@example.com.