Quality Magazine

Other Dimensions: Reading Between the Lines

August 30, 2011
Decipher the fiction found in calibration laboratories’ claims.



It probably started with comic books when I was a kid and took off from there. Now I usually have a couple of books on the go, with one or more waiting to be read. Since I’m an old-fashioned guy, I should make it clear that I’m talking about old-fashioned books, you know, the type that don’t need batteries to be read. I always pack a book with my jammies and carry one to read while traveling. Today’s high-speed air travel usually means waiting around airports for twice as long as you are airborne, so you need to have a book ready at all times.

While nonfiction books are my preference, I do appreciate some of the fiction found in the claims calibration laboratories make in their scopes of accreditation. Laboratory accrediting agencies are not too enthusiastic about such literary efforts, but sometimes they get through their filters.

I thought some recent examples might be of interest. If you have examples I’ve missed, please let me know, as I’m always adding to my library.

Outside micrometers. One-inch capacity micrometers generate a lot of variation in the uncertainty related to their calibration. Four labs quoted 51, 33, 37.1 and 75 microinches for the task. What makes this interesting to me is the range between them when they all claim to be using gage blocks for the calibration, yet only one referred to the resolution of the micrometer involved. When a 6-inch micrometer was calibrated, the same labs reported uncertainties of 167, 59.8, 52.6 and 150 microinches, respectively.

Having all this wonderful information is not too helpful since you don’t know who forgot what, who was being ultra-conservative or realistic, or who was playing fast and loose with the laws of physics.
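As a rough illustration of why the resolution reference matters, here is a minimal sketch of a GUM-style root-sum-square budget for a micrometer read to 0.0001 inch (100 microinches). The master and repeatability figures are hypothetical, chosen only to show that the resolution term alone sets a floor of roughly 29 microinches on the standard uncertainty, which makes claims in the low 30s hard to credit unless the budget handles resolution some other way.

```python
import math

def rss_uncertainty_uin(components_uin):
    """Root-sum-square combination of standard-uncertainty components, in microinches."""
    return math.sqrt(sum(u * u for u in components_uin))

# Micrometer read to 0.0001 in (100 microinches). Resolution treated as a
# rectangular distribution: standard uncertainty = (half-width) / sqrt(3).
resolution_uin = 100.0
u_resolution = (resolution_uin / 2) / math.sqrt(3)   # about 28.9 microinches

# Illustrative-only contributions for the gage block master and repeatability;
# these are assumed values, not any lab's actual budget.
u_master = 4.0
u_repeatability = 20.0

u_combined = rss_uncertainty_uin([u_resolution, u_master, u_repeatability])
U_expanded = 2 * u_combined   # coverage factor k = 2, roughly 95 % confidence
print(round(U_expanded, 1))   # expanded uncertainty, microinches
```

Even with a near-perfect master, the resolution term dominates this budget, which is why a scope that never mentions resolution invites skepticism.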

Calipers. Uncertainty claims for this type of calibration make comparisons more challenging. For a 6-inch caliper, the labs noted earlier cited uncertainties of 553, 523 and 310 microinches, while one indicated 0.001 inch. You might think the lab quoting 0.001 inch was being very conservative until you realize it claims the same uncertainty for all calipers up to an 80-inch capacity. In this case, I’d consider the labs in the 500-microinch range, subject to seeing their uncertainty budgets, because, once again, only one of them referenced the caliper resolution.

It should be remembered that while some labs indicated they used gage blocks for the caliper calibration, others used masters made for the purpose. One didn’t indicate which hardware was used, and the one claiming the same uncertainty for calipers up to 80 inches used a “length standard” for the work. No, I don’t think they mean an 80-inch gage block, but then again, I don’t really know what they mean.



Thread gages. When such simple instruments produce a wide range of uncertainties, you’d expect an even greater spread for something more involved, such as a thread plug gage. Often that is not the case: the uncertainties claimed by a number of labs are quite close to one another, even though, in some cases, the hardware cited is inadequate. I suspect this is a situation where one lab copies from another and doesn’t really have a budget behind the value quoted.

I find it curious when a lab using the same device to calibrate both a plain plug gage and a plain ring gage, with all that the latter entails, ends up with a higher uncertainty for the plug gage than for the ring.

Sometimes what’s missing may be more important than what the scope shows. Gage blocks come to mind; theirs is usually a 1:1 comparison process. What’s missing from most scopes is the material the blocks are made of. Steel is the assumed material, but because the process is 1:1, I wonder how labs deal with carbide or ceramic blocks if they don’t have a full set of those materials as masters.
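The reason the material matters in a 1:1 comparison is thermal expansion: the mismatch between the master’s coefficient and the test block’s only cancels when the materials match. A minimal sketch, using nominal handbook expansion coefficients (illustrative values, not lab data):

```python
# Nominal coefficients of thermal expansion, per degree C (handbook-style
# illustrative figures; actual values vary by grade and manufacturer).
CTE_PER_C = {"steel": 11.5e-6, "carbide": 5.5e-6, "ceramic": 9.2e-6}

def cte_mismatch_error_uin(length_in, master, test, delta_t_c):
    """Length error, in microinches, from comparing dissimilar materials
    at delta_t_c degrees away from the 20 C reference temperature."""
    mismatch = CTE_PER_C[master] - CTE_PER_C[test]
    return mismatch * delta_t_c * length_in * 1e6

# A 4-inch carbide block compared against a steel master, 1 C off 20 C:
error = cte_mismatch_error_uin(4.0, "steel", "carbide", 1.0)
print(round(error, 1))   # microinches of error from the mismatch alone
```

Even one degree off the reference temperature produces an error of tens of microinches on longer blocks, an effect that silently disappears only when master and test block are the same material.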

So much for reading scopes. To answer the obvious question: yes, I have read a good book lately. In fact, it’s a great book and should be required reading for anyone in government or the media: “Future Babble: Why Expert Predictions Are Next to Worthless, and You Can Do Better,” by Dan Gardner. It’s all about the experts you hear predicting the end of the world, the end of oil, the collapse of the economy, and on they go. Gardner shows just how wrong they usually are, which is good for a laugh. But the scary part is that politicians tend to believe the predictions.