Other Dimensions: Certainties About Measurement Uncertainty
April 25, 2008
Considering the nature of the subject, I suppose I had better begin this rant with a simple definition. Measurement uncertainty is about how close your measurement is to reality. These days, “close enough for artillery” won’t cut it and neither will “a couple of tenths or so.”
Today, knowledgeable people want numbers derived from a formula rather than guesswork. And don’t think the latest software will bail you out. If you don’t know your metrology, software will be a waste of money. On the other hand, if you do know your metrology, you won’t need software. A dollar store calculator will handle the math involved.
The greatest certainty about measurement uncertainty is this: even though the calculated value may be quite accurate, the way it is applied will often make a joke of the whole process.
An example of this thumped on my desk as I pondered this month’s column. It was a newly minted standard by a trade association dealing with thread gage calibration. Measurement uncertainty was covered by referring to the usual standards, complete with a neat chart of the ‘worst’ acceptable uncertainties borrowed from a technical paper issued by another organization. The reader is advised that laboratories can either work up their own uncertainty budgets or use the values in the chart.
I can see all those labs that don’t know what their uncertainty really is glomming onto this standard and, of course, some will have to outdo the others. Watch for: “Our measurement uncertainty complies with ABC’s standard #xxx.” Or “Our measurement uncertainty is strictly controlled within ABC’s standard #xxx.”
Hint for those wishing to become properly accredited to ISO 17025: This won’t wash with your assessor because it is misleading, to say the least; when it comes to uncertainty, one number does not fit all. If you claim to meet some generic measurement uncertainty value, you’ll still need an uncertainty budget to prove that you do.
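To make “uncertainty budget” concrete, here is a minimal sketch of how one is typically assembled under the GUM approach: each contributor is reduced to a standard uncertainty, the contributors are combined by root-sum-of-squares, and the result is multiplied by a coverage factor (k = 2 for roughly 95 percent confidence). The component names and values below are made up for illustration; a real budget lists every significant contributor for the specific lab and instrument.

```python
from math import sqrt

# Illustrative standard uncertainties, in inches (made-up values --
# a real budget itemizes every significant contributor).
components = {
    "master gage calibration": 0.000020,
    "instrument resolution":   0.000010,
    "repeatability":           0.000015,
    "temperature":             0.000045,
}

# Combined standard uncertainty: root-sum-of-squares of the components.
u_c = sqrt(sum(u**2 for u in components.values()))

# Expanded uncertainty, coverage factor k = 2 (~95% confidence).
U = 2 * u_c

print(f"combined standard uncertainty: {u_c:.6f} in")
print(f"expanded uncertainty (k=2):    {U:.6f} in")
```

Note how the arithmetic itself is calculator-simple; the hard part is knowing your own components, which is exactly why a borrowed chart proves nothing.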
One of the documents that this standard references is even worse. It indicates that in case of a measurement dispute on a thread gage, if the customer’s measurement falls within the gagemaker’s uncertainty added to the gage tolerance, the gage must be considered good. This simply means that the gage supplier with the worst uncertainty effectively gets the largest tolerance to work to. Worse, it implies that gagemakers have lower uncertainty than gage users, which I know from experience is not always the case.
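The perverse incentive in that dispute rule is easy to see in a few lines of code. This is a sketch of the logic only, with made-up numbers and a hypothetical function name; the referenced document states the rule in prose, not code.

```python
def dispute_accepts(customer_reading, lower_tol, upper_tol, maker_U):
    """The dispute rule as described: the gage is declared good if the
    customer's reading falls within the gage tolerance WIDENED by the
    gagemaker's uncertainty -- so a worse (larger) uncertainty buys the
    maker a larger effective tolerance."""
    return (lower_tol - maker_U) <= customer_reading <= (upper_tol + maker_U)

# Illustrative tolerance band, in inches (made-up numbers).
lower, upper = 0.5000, 0.5004

reading = 0.50045  # a reading outside the stated tolerance

# A precise maker (small U) loses the dispute; a sloppy one (large U) wins it.
print(dispute_accepts(reading, lower, upper, 0.00002))
print(dispute_accepts(reading, lower, upper, 0.00010))
```

The same out-of-tolerance gage is rejected for the careful maker and accepted for the careless one, which is the backwards logic the column objects to.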
Both of these documents are rather self-serving from the gagemaker’s point of view and because my company makes gages, it would be in my best interest to leave them alone. I can’t do this because both defy logic, common sense and proper metrology, and one day that’ll come back to haunt their publishers.
It’s bad enough that documents like these are out there, but when one of the most critical elements in all of this is ignored, they are reduced to creating more problems than they solve.
The critical element I’m referring to is the application of uncertainty. Most of the world’s standards require that a reading of size plus the uncertainty attached to it must fall within the tolerance for that feature. This means that the gagemaker or calibration laboratory with the worst uncertainty has little wiggle room in declaring a gage is within or outside of tolerance.
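The squeeze described above can be sketched in a few lines. The tolerance numbers here are invented for illustration; the decision rule itself (reading expanded by uncertainty must fall inside the tolerance) is the one the column attributes to most of the world’s standards.

```python
def can_accept(reading, lower_tol, upper_tol, U):
    """Common decision rule: the reading, expanded by the measurement
    uncertainty U in both directions, must fall entirely inside the
    tolerance before the feature can be declared in tolerance."""
    return (reading - U) >= lower_tol and (reading + U) <= upper_tol

# Illustrative pitch-diameter tolerance band, in inches (made-up numbers).
lower, upper = 0.5000, 0.5004

# The acceptance window shrinks by 2*U: a lab with a large uncertainty
# has almost no room left to declare a gage in tolerance.
for U in (0.00005, 0.00015):
    window = (upper - lower) - 2 * U
    print(f"U = {U:.5f} in -> acceptance window = {window:.5f} in")
```

With a 0.0004-inch tolerance, tripling the uncertainty from 0.00005 to 0.00015 inch cuts the usable acceptance window from 0.0003 inch to 0.0001 inch, which is the “little wiggle room” the paragraph above describes.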
Documents that offer a generic uncertainty value, as these do, should at least list the hardware and environment required to come close to meeting it. One of them indicates that “laboratory grade” instruments are required, which can mean just about anything, including your trusty handheld micrometer. The other notes that the uncertainty values shown are valid if your environment is within ±2°F of 68°F. The uncertainty shown for the pitch diameter of a 6-inch diameter gage is 0.00015 inch. The effect of such a temperature spread on its own would add about 0.0001 inch of expanded uncertainty for this element alone, which makes me wonder how they arrived at the value shown.
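The temperature figure above can be checked directly. Assuming a steel gage with a typical handbook expansion coefficient of about 6.5 × 10⁻⁶ per °F, and treating the ±2°F allowance as a rectangular distribution in the usual GUM fashion, the k = 2 contribution from temperature alone on a 6-inch diameter works out close to the 0.0001 inch the column cites. The coefficient is an assumption on my part, not a value from either document.

```python
from math import sqrt

cte = 6.5e-6       # per degF -- assumed nominal expansion coefficient for gage steel
length = 6.0       # inches, gage diameter
half_width = 2.0   # degF, the document's +/-2 degF environmental allowance

# Rectangular distribution: standard uncertainty = half-width / sqrt(3).
u_temp = length * cte * (half_width / sqrt(3))

# Expanded uncertainty at coverage factor k = 2.
U_temp = 2 * u_temp

print(f"temperature contribution alone, expanded (k=2): {U_temp:.6f} in")
```

That single contributor comes to roughly 0.00009 inch, leaving very little of the stated 0.00015-inch total for everything else, which is the column’s point.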
One thing is certain in all of this. If documents like these are used to any great extent, there will be more confusion than clarity when discussions turn to measurement uncertainty.