Last year at this time I wrote a column in which I criticized the practice of having calibration labs make acceptance decisions in their reports. At that time, the lab could simply note pass or fail beside the item involved, and no data needed to be presented as long as the lab retained it in case it was needed later.

While ISO 17025 still allows that practice, accrediting agencies such as A2LA do not, and for good reason: a bare pass or fail with no data behind it means nothing of value. What hasn't changed is the requirement that the uncertainty of the measurement involved be accounted for in making the decision. How it is to be accounted for is not explained. It isn't outlined because practices differ around the world, especially where fixed limit gages are involved. North American tolerances are often so close that even NIST would have trouble making a pass/fail decision when a third or more of the gage tolerance could be eaten up by measurement uncertainty.
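
To put that in perspective, here is a minimal sketch with purely hypothetical numbers (a tolerance of 0.0001 inch and an expanded uncertainty of 0.000035 inch, taken from no standard or real lab) showing how quickly uncertainty can consume a close gage tolerance.

```python
# Hypothetical values chosen only for illustration.
gage_tolerance = 0.0001    # total gage maker's tolerance, inches
uncertainty = 0.000035     # lab's expanded measurement uncertainty, inches

# Fraction of the gage tolerance consumed by measurement uncertainty.
consumed = uncertainty / gage_tolerance
print(f"Uncertainty consumes {consumed:.0%} of the gage tolerance")  # -> 35%
```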

As can be expected, this uncertainty about presenting uncertainty values has led to confusion and arguments. Some labs add their uncertainty to the tolerance shown for the gage and report their reading next to it. If you're lucky, they tell you what they've done and what their uncertainty actually is. I've not read any standard indicating this is what should be done with uncertainty values.

Other labs report their readings of size, ignore their uncertainty, and base their pass/fail decision on the readings alone. Once again, if luck is on your side, they indicate what they've done and note their uncertainty for each feature measured.

Being a simple guy, I prefer to list the nominal values and tolerances (if known), our readings, and our uncertainty. This way the customer makes the pass/fail decision, which is as it should be. If we see a potential problem, we simply highlight that reading and provide a note to the customer to “review before use.”
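
The differences between these approaches are easier to see in a sketch. What follows is my own illustration with invented limits, uncertainty, and function names; it is not a decision rule from any standard, and the guard-banding shown is only one way a lab might account for its uncertainty.

```python
def decide_on_reading_alone(reading, low_limit, high_limit):
    """Pass/fail on the reading only, ignoring uncertainty."""
    return "pass" if low_limit <= reading <= high_limit else "fail"


def decide_with_guard_band(reading, low_limit, high_limit, uncertainty):
    """Account for uncertainty by shrinking the acceptance zone; any reading
    within the uncertainty of a limit is flagged rather than decided."""
    if low_limit + uncertainty <= reading <= high_limit - uncertainty:
        return "pass"
    if reading < low_limit - uncertainty or reading > high_limit + uncertainty:
        return "fail"
    return "review before use"


def report_only(reading, uncertainty, low_limit=None, high_limit=None):
    """Report the data and let the customer decide, highlighting any
    reading that sits within the uncertainty of a known limit."""
    result = {"reading": reading, "uncertainty": uncertainty}
    if low_limit is not None and high_limit is not None:
        if min(abs(reading - low_limit), abs(high_limit - reading)) <= uncertainty:
            result["note"] = "review before use"
    return result


# Invented example: limits 0.2500-0.2501 in, uncertainty 0.00003 in,
# and a reading that sits just inside the upper limit.
reading = 0.25009
print(decide_on_reading_alone(reading, 0.2500, 0.2501))          # pass
print(decide_with_guard_band(reading, 0.2500, 0.2501, 0.00003))  # review before use
print(report_only(reading, 0.00003, 0.2500, 0.2501))             # flags the reading
```

On those invented numbers, the reading-only approach passes the gage while the guard-banded check flags it, which is exactly the kind of disagreement that leads to the arguments mentioned above.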

As I have noted in previous columns, the nominal values and tolerances used by most labs when making their pass/fail decisions are the specs for a new gage, not a used one. A lot of perfectly good gages are rejected because of this misapplication of the gage maker’s tolerance.

Gage users who follow this practice often demand that their gages be supplied to the high side of the tolerance, all of which is there for the benefit of the gage maker, not the gage user. Such requests are akin to cutting the gage maker's tolerance in half or more, which warrants a higher price few want to pay.

Some gage users have their gage program well enough organized that erroneous or simplistic decisions against improper specifications are avoided. They list the new gage tolerance along with wear limits appropriate to their applications. This keeps anyone from playing games with their uncertainty or trying to make it less conspicuous. European standards for some gages list specifications for what a gage must be when new and what it can wear to while still being considered satisfactory.
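
As a rough illustration of that kind of record, with entirely invented values, a used GO plug gage would be judged against its wear limit rather than the new-gage limits it was purchased to.

```python
# Entirely invented record for a hypothetical 0.2500 in GO plug gage.
gage_record = {
    "nominal": 0.2500,
    "new_low": 0.2500,     # new-gage limits: what the maker must deliver
    "new_high": 0.2501,
    "wear_limit": 0.2499,  # smallest size this user will accept in service
}

def still_serviceable(reading, record):
    """Judge a used GO plug gage against the wear limit,
    not the new-gage tolerance it was bought to."""
    return reading >= record["wear_limit"]

# Worn below the new-gage range but not below the wear limit: still usable.
print(still_serviceable(0.24995, gage_record))  # True
```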

Reputable labs that have knowledgeable technical people on staff can often provide you with good advice on accept/reject criteria regarding fixed limit gages. But not all labs are so equipped even though their people are very good at calibrating your gages. Such advice requires engineering knowledge regarding product tolerances and allowances and the practicalities of gage manufacturing and calibration—the sort of stuff you won’t get from a textbook.

I was speaking with a competitor in the calibration business the other day about this very subject. He said customers seem to know less about these matters than they did ten or twenty years ago, which is why they're passing the buck to the calibration source. It occurred to me that this may be due to the emphasis on computer skills taking precedence over knowledge of what is being measured.

Being old school, I believe calibration laboratories should be neutral, unbiased providers of data. Pass/fail decisions are quality matters best left to those who will be directly impacted by them: the users of the gages involved.