Calibration reports are supposed to be clear and concise so the reader can understand the details. In fact, the ISO/IEC 17025 standard has a section dealing with reporting in specific terms, so it would seem there should be no room for misinterpretation. Unfortunately, as with all standards, if there's room for creativity, someone will get creative, and the results may not be particularly enlightening. In some cases, the intent of the standard is never considered at all.
Laboratories issuing these documents are not always to blame for this situation. In some cases the instructions from their customers are ambiguous, or the reader of the document doesn't understand some of the content even when it is clearly reported. In others, the laboratory is expected to be psychic, or judge and jury all in one. In a perfect world, the standard requires the laboratory to discuss the work with the customer to clarify what is required, but often the customer is not up to speed on this and can offer little guidance. In many cases, such discussions could cost the laboratory more than the job is worth. I hope the following comments will help address some of the causes of these problems.
If you have asked your calibration source to make acceptance decisions for you, you had better be quite specific about what those decisions are to be based on. If you simply say the item should conform to a stated standard, be aware of what that standard requires or your calibration cost could go off the charts. Also be aware that where fixed limit gages are concerned, the tolerances in their standards apply to a new gage, not a used one. In all cases, the gage record for each item should indicate what limits the gage can reach without compromising your work, and those limits should be based on your situation, which is why I say only you should make the call. If you do, you don't need a lab to tell you whether the gage is acceptable. Sometimes labs will decide to make a call on acceptance anyway and make the same mistake regarding what is acceptable, at no extra cost. But there is a cost when a quality auditor asks you to justify using a gage a lab has rejected.
Measurement uncertainty on its own can become a minefield. Instead of reporting the measured values and the uncertainty attached to each, a lab's software may automatically add its uncertainty to the tolerance, declare the result to be the limits for the gage, and then make an acceptability decision based on that, whether you asked for it or not. One problem with this is that you may never know what the actual tolerance was before the software messed with it. Another major problem pops up when the uncertainty used is wishful thinking rather than reality. Assuming accreditation removes such risks is wishful thinking in its own right. A couple of weeks ago I had discussions with a major manufacturer of thread gages on this subject. The uncertainty their accredited lab scope shows is better than NIST's, which, given their operations, is impossible. (They're reviewing it now.)
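To make the arithmetic concrete, here is a minimal sketch of how software can adjust gage limits by the uncertainty. The function name and the gage dimensions are hypothetical; this is not any particular lab's software, just an illustration of the two directions the adjustment can go: pulling the limits inward (a conservative guard band) versus adding the uncertainty to the tolerance, the questionable practice described above, which relaxes the limits.

```python
def guard_banded_limits(lower_tol, upper_tol, uncertainty, widen=False):
    """Apply an expanded uncertainty to a pair of tolerance limits.

    widen=False: conservative guard band -- limits are pulled inward,
                 so the usable band shrinks by twice the uncertainty.
    widen=True:  uncertainty is added to the tolerance, relaxing the
                 limits (the practice criticized in the text).
    """
    if widen:
        return lower_tol - uncertainty, upper_tol + uncertainty
    return lower_tol + uncertainty, upper_tol - uncertainty


# Hypothetical plug gage: diameter tolerance 12.700 to 12.702 mm,
# calibrated with an expanded uncertainty of 0.0005 mm.
lo, hi = guard_banded_limits(12.700, 12.702, 0.0005)
# Conservative guard band: usable limits become about 12.7005-12.7015 mm.
```

Either way, if the report shows only the adjusted limits, the reader never sees the original tolerance, which is the first problem noted above.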
I don’t want to beat up on uncertainty, but when calibration reports are reviewed by someone not up to speed on the realities of fixed limit gages, other problems can arise. Astute readers of reports will encounter situations where reliable levels of uncertainty have been used but the end result indicates (correctly) that the uncertainty exceeds acceptable ratios such as 4:1. The reality here is that while the uncertainty is correct, it’s the tolerances that are wishful thinking. This goes back to standards that were produced long before measurement uncertainty came on the radar at the industrial level. Since gage manufacturers want to appear to be as capable as their competitors, bringing those tolerances into the real world is not likely to happen any time soon. Class XX tolerance plain ring gages or Class W thread gages are examples of what I mean. Some manufacturers downplay their listing of them or increase the cost to discourage customers from asking for them. They know that endless battles over calibrated sizes are sure to result.
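The 4:1 ratio mentioned above can be checked with simple arithmetic. The sketch below uses one common definition of the test uncertainty ratio (tolerance span divided by twice the expanded uncertainty); definitions vary between standards, and the gage numbers are hypothetical, chosen to show how a tight Class XX-style tolerance can fail the ratio even with a perfectly legitimate uncertainty.

```python
def tur(tolerance_span, expanded_uncertainty):
    # One common definition of the test uncertainty ratio:
    # the tolerance span divided by twice the expanded (k=2) uncertainty.
    return tolerance_span / (2 * expanded_uncertainty)


def meets_ratio(tolerance_span, expanded_uncertainty, required=4.0):
    # True when the ratio meets or exceeds the required value (e.g. 4:1).
    return tur(tolerance_span, expanded_uncertainty) >= required


# Hypothetical Class XX-style ring gage: total tolerance span of
# 0.00002 in., measured with a realistic 0.00001 in. expanded uncertainty.
# The ratio works out to 1:1, far below 4:1, even though the
# uncertainty itself is honest -- the tolerance is the problem.
print(meets_ratio(0.00002, 0.00001))
```

As the text argues, when this check fails on such gages it is usually the decades-old tolerance, not the stated uncertainty, that is out of touch with reality.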
I hope these notes will assist those readers who actually read calibration reports, as opposed to those who simply check for red flags, the absence of which ensures the reports get filed away until the you-know-what hits the fan.