Readers of this column will be familiar with the subject of measurement uncertainty, since I comment on it from time to time, as I did last month. Those who have not paid it much attention will certainly run across it on reports from their calibration sources. However you look at it, uncertainty permeates measurements of all kinds, and without a statement of it, the numbers on a report are just readings with little credibility.
Gage and instrument calibration is a specialist branch of dimensional metrology governed by a number of published standards, the most popular being ISO/IEC 17025. This standard and its supporting documentation stipulate that measurement uncertainty must be included in reports issued by accredited laboratories and outline how it should be calculated. The meaning and application of uncertainty often raise questions that I hope the following notes will answer.
Some companies insist their calibration source maintain an uncertainty ratio of 1:10 for a given measurement. In some cases a ratio of 1:4 may be agreed to by both parties, but in the real world even this is not always achievable due to technical limitations. The problem comes up often when thread gages are calibrated: the typical uncertainty can be up to 50% of the gage tolerance, and there is no way around it. This situation is the result of tolerances that were calculated many years ago, long before the limitations of some measurement processes were understood as well as they are today.
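To see why a 50% uncertainty defeats these ratio requirements, consider a quick sketch. The numbers below are hypothetical, chosen only to match the 50% figure above; real values come from the lab's scope and the gage's tolerance class.

```python
# Hypothetical numbers for illustration only: a thread gage tolerance and a
# lab's expanded uncertainty for that measurement, both in inches.
gage_tolerance = 0.0002
expanded_uncertainty = 0.0001  # 50% of the tolerance, as described above

# Express the relationship as a tolerance-to-uncertainty ratio.
ratio = gage_tolerance / expanded_uncertainty
print(f"Uncertainty is {expanded_uncertainty / gage_tolerance:.0%} of the tolerance")
print(f"Tolerance-to-uncertainty ratio: {ratio:.0f}:1")
```

A 2:1 ratio falls well short of the 1:4, let alone the 1:10, that some purchasing documents demand, which is why those requirements cannot always be met.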
Uncertainty values are based on specific equipment, people, procedures, and laboratories, which means two different labs may have identical hardware but different uncertainties for a given measurement. There is no generic uncertainty you can pull from a book or a website that is valid for every situation.
If the uncertainty is ±.0005, it represents a band around the reported size within which the true value is believed to lie. It does not mean the reading is in error by that amount, only that it could be. The reading may be exactly the same as the true value, but we are unable to prove it. If this is not good enough for you, another lab using different or better equipment may be worth looking into, but it too will issue a statement of uncertainty that you may not like, even if you send the corporate jewels to NIST for calibration.
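The interpretation of that band can be sketched in a couple of lines. The reading below is a hypothetical value I picked for illustration; the ±.0005 is the figure from the text.

```python
# Hypothetical reading, in inches, to show how the uncertainty band works.
reading = 0.2500   # size reported by the lab
U = 0.0005         # expanded uncertainty stated on the report

# The true value is believed to lie somewhere inside this interval;
# the report does not claim the reading itself is off by U.
lower, upper = reading - U, reading + U
print(f"True value believed to lie in [{lower:.4f}, {upper:.4f}]")
```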
Some calibration labs offer on-site calibration of gages or masters that properly require the work to be done in a controlled environment. As a result, the uncertainty shown in the lab's scope will be exceeded in such cases. If a lab shows the same uncertainty for on-site work as for the same work done in its own laboratory, you know that one of the two uncertainties is suspect, or perhaps both of them.
Two labs may show identical uncertainty for a particular measurement using the same principal hardware, yet report different results. This can happen when the uncertainty calculations of one or the other do not include all of the critical factors that should be considered. The problem arises most often when high-precision masters or gages are involved, and it is one of the reasons the ASME B1.25 standard referred to in my last column was created.
Calculating uncertainty is relatively simple; after all, even I can do it. Knowing what to include in the calculations, however, requires knowledge of the process and the hardware being used. Without that knowledge, the stated uncertainty no longer applies. For example, many people compute the uncertainty for measurements under one inch, which is within the measuring range of the instrument being used. But to measure over this size, the instrument must be set to a master that wasn't needed for the under-one-inch measurement, and that master becomes an additional element in the uncertainty calculation.
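A minimal sketch of such a calculation, assuming the usual root-sum-square combination of standard uncertainties with a coverage factor of two, shows how the setting master changes the result. The component names and values here are invented for illustration, not taken from any real lab's budget.

```python
import math

# Hypothetical uncertainty budget (standard uncertainties, in inches).
budget = {
    "instrument repeatability": 0.000010,
    "temperature effects":      0.000008,
    "operator and technique":   0.000006,
}

def expanded_uncertainty(components, k=2):
    """Combine components root-sum-square, then apply coverage factor k."""
    u_c = math.sqrt(sum(u ** 2 for u in components.values()))
    return k * u_c

U_under = expanded_uncertainty(budget)

# Measuring over one inch requires setting to a master, whose own
# calibration uncertainty (an assumed value) joins the budget.
budget["setting master"] = 0.000012
U_over = expanded_uncertainty(budget)

print(f"U (k=2) under one inch: +/-{U_under:.6f}")
print(f"U (k=2) over one inch:  +/-{U_over:.6f}")
```

The point is not the particular numbers but that a budget computed for one situation cannot simply be reused for another with a different set of contributors.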
Some companies try to improve their measurement capability by replacing the key measuring instrument with one of higher resolution. On the surface this looks like a good way to do it, but an uncertainty budget may show that the higher resolution makes little or no difference in the measurement uncertainty of the process.
When you realize the impact reliable uncertainty values have, it’s easy to understand why this aspect of metrology is so important.