Machine vision is a familiar technology in manufacturing and industry, and is used in an ever-growing range of tasks, from simple code reading and assembly verification to robotic guidance and 3-D profiling. The application of machine vision for gaging and metrology is not new, but advances in technology are enabling higher precision and more accurate measurement than could have been achieved even a few years ago. Even as the technology improves, it is important to consider some fundamental techniques for successfully integrating machine vision for metrology, and to know how to avoid problems that could affect the results of a measurement application.

This article focuses only on techniques for production systems, that is, inspection systems designed to perform 100% automated measurement on a production line. There are many camera-based, noncontact metrology devices designed for laboratory or off-line use, and these are highly recommended in that environment. However, those systems are outside the scope of this discussion.

Understand Applied Metrology Concepts

The term metrology often is used interchangeably with measurement and indeed they are closely linked. Actually, metrology is the science of measurement. When we talk about metrology in an industrial process, it refers to applied metrology or industrial metrology: the application of the science of metrology for manufacturing. As we consider metrology in this sense, let’s review some concepts used in measurement and metrology.

The terms precision and accuracy are used in the specification and qualification of a measurement system. Precision is the ability to repeat a measurement, while accuracy is how closely a measurement agrees with a true or known value. It is important to understand that accuracy alone is not a very useful metric for judging a metrology device on the plant floor, because a systematic offset (bias) against a true value can usually be removed by calibration to a reference standard. In specifying and qualifying production machine vision solutions, then, precision is most important, and specifically the uncertainty of the measurement: the amount of variation a measurement system will exhibit over repeated measurements of the same part. To be clear, the goal of the on-line metrology system is to produce a specific measurement with a reliable and repeatable level of uncertainty that meets the application requirements.

For example, a machine vision system might report the diameter of a specific part to be 20.4 millimeters (mm), with an uncertainty over many repetitions of ±0.05 mm (that is, no measurement of the same part deviated from 20.4 mm by more than 0.05 mm). Of course, the actual amount of uncertainty that is acceptable for any application is dependent upon the application specification.

In addition, don’t automatically use machining specifications as inspection specifications. If a part must be manufactured to 24 mm ±0.01 mm in length, what is the acceptable uncertainty for inspecting that length? If the inspection uncertainty is also specified as ±0.01 mm, a good machined part could be 24.01 mm in length, yet the measurement system would falsely reject it roughly half the time: a gage with ±0.01 mm of uncertainty could report that part anywhere between 24.00 mm and 24.02 mm. To be certain of the target measurement criteria, consult with the quality team and determine in advance the allowable inspection uncertainty based upon the needs of the application.
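To see why, here is a minimal Monte Carlo sketch of the 24 mm example above. The uniform gage-error model and the specific numbers are illustrative assumptions, not a characterization of any real gage (real gage error is often closer to Gaussian):

```python
import random

random.seed(42)

TRUE_LENGTH = 24.01               # a good part machined right at the upper limit (mm)
SPEC_LO, SPEC_HI = 23.99, 24.01   # print specification: 24 mm +/- 0.01 mm
GAGE_UNCERTAINTY = 0.01           # inspection uncertainty, +/- mm
TRIALS = 100_000

rejects = 0
for _ in range(TRIALS):
    # model the reported value as the true length plus uniform gage error
    reported = TRUE_LENGTH + random.uniform(-GAGE_UNCERTAINTY, GAGE_UNCERTAINTY)
    if not (SPEC_LO <= reported <= SPEC_HI):
        rejects += 1

print(f"False-reject rate for a borderline good part: {rejects / TRIALS:.1%}")
```

With symmetric gage error centered on a part sitting exactly at the spec limit, about half of the reported values land outside the limit, which is why inspection uncertainty must be tighter than the machining tolerance.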

Use the Correct Image Resolution

A determining factor for delivering high precision and low uncertainty in machine vision metrology is the resolution of the acquired image. In this context, the term resolution (or image resolution) means the size of an individual pixel in real-world units. Simply put, if a camera sensor contained 1,000 pixels in the horizontal direction, and optics were incorporated that acquired an image that was 1 inch in width, a single pixel would represent 0.001 inch. Note that this is a fundamental metric that does not change with camera manufacturer or analysis software. As a gage, the smallest unit of measurement (some exceptions noted later) in a machine vision system is the single pixel. As with any measurement system, in order to make a repeatable and reliable measurement one must use a gage where the smallest measurement unit (as a general rule of thumb) is one tenth of the required measurement tolerance band. In the example just described, the system could be estimated to provide a precision measurement to approximately ±0.005 inch (a tolerance band of 0.01 inch, ten times the gage unit).
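The arithmetic above can be captured in a short helper. This is only a sketch of the rule of thumb from the text; the `required_pixels` function and its 10:1 gage-ratio default are illustrative assumptions, not a standard API:

```python
def required_pixels(field_of_view, tolerance_band, gage_ratio=10):
    """Minimum pixel count across the field of view so that one pixel
    is no larger than 1/gage_ratio of the tolerance band (rule of thumb)."""
    smallest_unit = tolerance_band / gage_ratio
    return field_of_view / smallest_unit

# The article's example: 1,000 pixels across a 1-inch field of view
pixel_size = 1.0 / 1000           # 0.001 inch per pixel
tolerance_band = pixel_size * 10  # rule of thumb: band = 10 gage units
print(tolerance_band)             # 0.01 inch total, i.e. roughly +/- 0.005 inch

# Sizing a camera: measure to +/- 0.02 mm over a 50 mm field of view
print(required_pixels(field_of_view=50.0, tolerance_band=0.04))  # about 12,500 pixels
```

The second example illustrates the next point: 12,500 pixels across one axis exceeds most area-scan sensors, pushing the design toward multiple cameras, a line-scan imager, or multiple views.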

Engineers using machine vision for measurement for the first time often seriously underestimate the number of pixels required to achieve a desired level of measurement uncertainty. In fact, it may require multiple cameras, specialty cameras such as line-scan imagers, or multiple views of a single part to achieve the required resolution for the specified inspection tolerance.

Expand the Resolution If Possible

Sometimes additional resolution can be squeezed out of an imaging system mathematically, using algorithms that locate features with sub-pixel repeatability. Examples include gray-scale edge analysis, geometric or correlation searching, regressions such as circle or line fitting, and, in some cases, connectivity analysis. When these tools deliver reliable sub-pixel results, the smallest unit of measurement can be less than the single pixel described earlier. Note, though, that vendor estimates of sub-pixel capability are only estimates, and usually assume best-case imaging, optics, and part presentation. Take care in using arbitrary sub-pixel expectations as a determining factor when specifying system measurement capability. Test the system with actual parts and images to empirically determine the sub-pixel capability.
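As an illustration of the idea, here is a minimal sub-pixel edge locator that fits a parabola through the gradient peak of a 1-D intensity profile, one common textbook technique. The function name, the three-point fit, and the sample profile are all assumptions for demonstration; commercial edge tools are considerably more sophisticated:

```python
def subpixel_edge(profile):
    """Locate an edge in a 1-D intensity profile to sub-pixel precision.

    Finds the sample with the largest absolute gradient, then fits a
    parabola through that gradient value and its two neighbors; the
    parabola's vertex gives the refined edge position (in pixels).
    """
    # discrete gradient between neighboring pixels
    grad = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]
    k = max(range(1, len(grad) - 1), key=lambda i: abs(grad[i]))
    g0, g1, g2 = abs(grad[k - 1]), abs(grad[k]), abs(grad[k + 1])
    denom = g0 - 2 * g1 + g2
    offset = 0.0 if denom == 0 else 0.5 * (g0 - g2) / denom
    return k + 0.5 + offset  # +0.5: each gradient sample sits between two pixels

# a dark-to-bright edge whose true transition lies between pixels 4 and 5
profile = [10, 10, 10, 12, 60, 118, 120, 120]
print(subpixel_edge(profile))
```

The returned position falls between integer pixel coordinates, which is what allows the effective gage unit to shrink below one pixel when imaging conditions are good.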

Use High-resolution Optics

Imaging is a function of optics and lighting (and as we will discuss later, part presentation). For most applications, the only optics used will be a lens assembly, but the selection of that lens is critical to the metrology application. Beyond delivering an image of the proper real-world size to the sensor, for metrology the lens must reproduce the image as accurately as possible without distortion. Furthermore, lenses have a resolution metric as well, which often is specified as line pairs per millimeter or inch (lp/mm, lp/in), and by extension may have a specification for MTF (modulation transfer function) or more simply the ability of the lens to produce high contrast at high lp/mm. The higher the pixel count, the more important these lens metrics become. Ensure that the specified optics are high-quality, high-resolution products designed for machine vision applications.
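One useful sanity check when matching a lens to a sensor is the sensor's Nyquist frequency: resolving one line pair requires at least two pixels, so the lens should deliver usable contrast (MTF) at roughly 1/(2 × pixel pitch). A small sketch; the 3.45 µm pitch is chosen only because it is a common machine vision sensor value:

```python
def sensor_nyquist_lp_per_mm(pixel_pitch_um):
    """Highest line-pair frequency the sensor can sample (Nyquist limit):
    one line pair needs at least two pixels."""
    pixel_pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pixel_pitch_mm)

# a sensor with 3.45 um pixels, a common pitch in machine vision cameras
print(sensor_nyquist_lp_per_mm(3.45))  # roughly 145 lp/mm at the sensor
```

A lens with little contrast left at that frequency wastes the sensor's pixel count, which is why the lens resolution metrics matter more as pixel counts rise.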

Telecentric lenses are extremely useful for many measurement applications. A telecentric lens uses a combination of optics to hold magnification constant over its working range, virtually eliminating the perspective (parallax) error found in images from conventional lenses. The result is a near-orthographic view in which features appear at the same scale regardless of their distance from the lens, so planar geometric relationships (in the image plane) are preserved, making measurements more direct and straightforward. As always, test the imaging before specification.

For applications that require a very small field of view (for example less than a few millimeters), consider the use of microscope optics and/or high-magnification optics specially made for machine vision. These are available from a number of vendors. It is not recommended that standard optics be pushed to higher magnification using extenders or add-on magnification.

Experiment for the Best Illumination

Lighting is important for any machine vision application, and for metrology the choice of illumination may play an even more critical role. As with most machine vision applications, there is unfortunately no single rule that can be applied to illumination. Many metrology applications benefit from backlighting (taking care with part presentation, as noted below), though physically implementing a backlight in automation on the production line may be a challenge. Front lighting may have difficulty highlighting the feature edges that must be identified for measurement. Consider low-angle or structured lighting to bring out low-contrast features. When attempting to measure very small features (with resolutions below 0.001 mm, for example), use short-wavelength colors such as blue or violet to enhance contrast, since shorter wavelengths also raise the diffraction-limited resolution of the optics. If the part is in motion (or even if not), consider strobing the LED illuminator for the best intensity and lamp life.

In all cases, successful machine vision illumination requires experimentation, both in the lab and on the floor, in order to ensure the correct component selection.

Know What Is Being Inspected

While it might seem obvious, many applications fail because the features being inspected are not what was specified. Often this is because the part specification is not "noncontact measurement friendly," and it is up to the machine vision engineer to determine and specify exactly what is being measured. Take, for example, measurement, to high precision and low uncertainty, of the diameter of a through-bore hole that is small in diameter but quite deep. If front lighting is used, only the top edge of the hole will be gaged, which could be unacceptable if the inspection is meant to mimic an insertion gage. If back lighting is used, the depth of the bore makes it unlikely that the optics will "average" the entire bore in the image; more probably, the optics will be focused at some depth in the bore (top, bottom, or middle), and again this might not be the desired result. Choose lighting, optics, and algorithms carefully to ensure measurement of an agreed-upon surface, and understand that in many cases on-line, noncontact machine vision measurement will not exactly duplicate a physical measuring device for the reasons described earlier.

Pay Close Attention to Part Presentation

Finally, the most overlooked issue in high-resolution, low-uncertainty machine vision metrology for an on-line application is nonrepeatable part presentation. The imaging, optics, resolution, and algorithms might all be perfect in the off-line setup, yet the repeatability and reliability of the on-line inspection turn out to be poor; usually the culprit is inconsistency in part presentation. Sometimes part presentation can even make a certain measurement impossible. Take, for example, the small but deep bore hole described earlier. When the face of that hole is perpendicular to the lens, and the image is taken directly down the depth of the hole, it can be measured successfully. However, if the part tilts even slightly, such a hole can visibly turn into an ellipse, or be obscured completely if backlit. With imaging for noncontact measurement, one must first mitigate all possible variation in part presentation, then understand that part presentation will in any case be responsible for some stack-up error in the measurement. Take that into consideration when determining and specifying resolution, optics, and lighting.
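The tilt sensitivity can be put in numbers: a circle viewed at tilt angle θ foreshortens to an ellipse whose minor axis is d·cos(θ). A quick sketch, where the 20 mm diameter and the tilt angles are illustrative values only:

```python
import math

def apparent_minor_axis(true_diameter, tilt_deg):
    """Apparent minor axis of a circular hole viewed at a tilt:
    the circle foreshortens to an ellipse with minor axis d*cos(theta)."""
    return true_diameter * math.cos(math.radians(tilt_deg))

d = 20.0  # mm, true hole diameter
for tilt in (0.5, 1.0, 2.0, 5.0):
    measured = apparent_minor_axis(d, tilt)
    print(f"tilt {tilt:>4} deg: measured {measured:.4f} mm, "
          f"error {(d - measured) * 1000:.1f} um")
```

Even a 1-degree tilt shrinks the measured diameter of a 20 mm hole by about 3 µm, which can consume a meaningful share of a tight uncertainty budget before any imaging error is counted.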

These few tips and techniques are just a small part of an overall implementation and integration plan for in-production metrology using machine vision. There are certainly many other considerations in any inspection application. Always perform a complete application analysis and project specification before undertaking a system design, and when in doubt seek the advice of vendors and other machine vision professionals.


  • The resolution of the acquired image is a determining factor for delivering high precision and low uncertainty in machine vision metrology.
  • Take part presentation into consideration when determining and specifying resolution, optics and lighting.
  • Always undertake a complete application analysis and project specification before undertaking a system design.