Comparing Calibration Laboratories
Conventional wisdom is usually a safe bet, but not always.
On the surface, it would seem there is little to be done when looking at calibration services. After all, if a lab is accredited by a reputable agency, it would appear there is nothing else you need to know. As long as their scope includes the items you require calibrated, you’re good to go. At least, that is the conventional wisdom and for the most part it’s a safe bet. But it is a bet after all, so the odds may not be in your favor as much as you think.
I was reminded of this recently when comparing the results of a small round-robin study we initiated. It echoed similar but larger studies we've taken part in over the years: the variations in results could be attributed to the fact that, while everyone claims to be measuring the same thing, their procedures often show they are not. Very expensive hardware may have been used for a particular measurement, but that doesn't prevent this from happening, so don't be dazzled by the many digital wonders out there.
Round-robin studies like this for plain plug gages ensure the artifacts are marked to show the axis across which a diameter measurement is to be taken as well as the location(s) along the gage member. This prevents variations in roundness and taper from skewing the results. The participants in this study were all accredited by well-known agencies and were experienced in this type of calibration.
Considering these precautions, you would think the diameter readings would be quite close, but they weren't. The spread of sizes ranged from 80 to 90 millionths of an inch. With more participants, the bulk of the results would likely have clustered much closer together, but if you chose three of the group at random to settle a dispute, the results could mean rejecting a good gage or accepting a bad one. Remember, a Class X tolerance gage of this type has a tolerance of roughly half the spread noted. So what happened here? The same thing that can happen to your measurements if you're not careful.
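To put numbers on the dispute scenario, here is a minimal sketch of the arithmetic. The 40-millionths Class X tolerance is an assumed illustrative figure for a gage in this size range (the column itself only says the tolerance is about half the observed spread), so treat it as an example value, not a quoted specification.

```python
# Illustrative arithmetic only: tolerance value is an assumption for a
# plug gage in this size range, not a figure quoted in the article.
CLASS_X_TOL_UIN = 40  # assumed Class X diameter tolerance, millionths of an inch


def spread_vs_tolerance(spread_uin, tol_uin=CLASS_X_TOL_UIN):
    """Return how many times the lab-to-lab spread exceeds the gage tolerance."""
    return spread_uin / tol_uin


for spread in (80, 90):  # observed round-robin spread from the study
    print(f"spread of {spread} millionths is "
          f"{spread_vs_tolerance(spread):.2f}x the assumed Class X tolerance")
```

A spread roughly twice the gage tolerance is exactly the situation where two accredited labs can legitimately disagree on accept/reject.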
A diameter is a length between two opposed points on the gage. If it is measured using a bench micrometer with 3/8" diameter measuring faces, as is the case with one popular model, the measurement is between two lines of contact, not two points. If the measurement is done using a bench micrometer as a comparator with a stack of gage blocks, wide variations can result from the quality of the 'wring' (the wringing interval, or space between the blocks). Variations in the flatness and parallelism of the blocks in the build-up are a culprit here, in addition to their surface texture. The same type of variation in the measuring faces of the bench micrometer could easily account for half or more of the spread.
The usual suspects when such variations are encountered are temperature and the calibration of the blocks involved. While I was not privy to the various labs' uncertainty budgets, I did notice wide variations in measuring force, ranging from zero to one pound. The ASME standard suggests a force of four ounces for this 0.395" gage size.
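The force spread is easier to appreciate in common units. A quick sketch, using only the values stated above (zero to one pound observed, four ounces suggested):

```python
# Compare observed measuring forces against the ASME-suggested force
# for this 0.395" plug gage. Values are those stated in the text.
OZ_PER_LB = 16
SUGGESTED_OZ = 4.0  # ASME-suggested measuring force, ounces

for force_lb in (0.0, 1.0):  # extremes reported across the labs
    force_oz = force_lb * OZ_PER_LB
    print(f"{force_lb} lb = {force_oz} oz, "
          f"{force_oz / SUGGESTED_OZ:.0f}x the suggested force")
```

One pound is four times the suggested force, while zero force is none at all; that range alone is enough to produce meaningfully different contact deformation from lab to lab.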
If nothing else, the foregoing should suggest that comparing numbers, while helpful, does not always tell a reliable story. The same goes for calibration scopes: the measurement uncertainty values shown on them assume a specific state of the item being measured, a controlled or uncontrolled environment, and a specific process. Using a different process or instrument changes the uncertainty.
So what can you do to avoid compromising calibration results or to ensure everyone is singing from the same song sheet? Check it out yourself. See how the lab does what you want it to do.
Of course, you could put your faith in the odds that the lab knows what it is doing especially if it’s accredited. It all depends on what your stake in the game is. And whether or not you feel lucky.