At first glance, you might think I’m losing it with the title of this month’s rant. After all, who would pay anything for ‘zero’ or nothing? It turns out a lot of people try to get ‘nothing’ or ‘zero’ and end up with more than they bargained for, at a very high cost to get there.
The most common example of this situation occurs when determining tolerances for setting masters used in a variety of measuring applications. Since ‘masters’ are involved, it is normal to want them to be more precise than the devices they are being used to set. Unfortunately, there’s more to it all than shifting a decimal point on a dimension.
The most common ‘masters’ used in dimensional work are gage blocks, so it is natural to use them as a guide when their shape indicates they would be perfect for a particular application. The high order of precision provided by gage blocks is affordable because they are made to standardized sizes and tolerances in quantity. But, as luck would have it, their standardized sizes are rarely the nominal value being sought for a specific application. One way around this is to get a block maker to provide a special size block, which most will do for a small bucket of money. A larger bucket of money will be required if the tolerance requested for this custom-size block is the same as its standard size equivalent.
After the economics of it all sinks in, it’s time to better define what you mean by a ‘master’ as a possible way to get what you need. If a gage block was chosen because it resembles the product feature to be measured, there may be little gained by considering anything else. However, if a custom-made gage block was chosen because of the precision associated with it while the part feature to be measured is cylindrical, it would be an expensive choice.
Once you have decided on the shape of a master, determining a tolerance for it is where most special master requests meet reality. If your master is a mid-limit one with a nominal value of, say, 2.3324”, how close to that value does it have to be? Conventional thinking would suggest it should be one decimal place better, but that now takes it into ‘millionths’ territory, which means higher cost. The lower cost alternative would be to get the master made to within +/-.0002”, which is cheaper, and then get it calibrated to the finer limit. You then ‘tell’ or set your measuring instrument to the actual value it should be displaying using this master and everything is good to go. But if you insist on everything being ‘zero’ along the way you will pay more, and that goes for re-calibration costs as well.
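The arithmetic behind this approach can be sketched in a few lines. This is a minimal illustration only; the nominal size, making tolerance, and calibrated value are hypothetical numbers, not from any standard or real calibration report.

```python
# Illustrative sketch of zero-setting with a calibrated offset master.
# All values are hypothetical.

NOMINAL = 2.3324          # inches: the mid-limit set-point wanted
MADE_TOLERANCE = 0.0002   # inches: the affordable +/- making tolerance

# Calibration reports the master's actual size to a finer limit
# than it was made to (hypothetical result).
calibrated_actual = 2.33247   # inches

# Sanity check: the as-made master is within its making tolerance.
assert abs(calibrated_actual - NOMINAL) <= MADE_TOLERANCE

def set_point_offset(displayed_when_set: float) -> float:
    """Correction so the instrument reads the master's true size."""
    return calibrated_actual - displayed_when_set

# If the instrument showed the nominal value when set on the master,
# this is the correction to dial in.
offset = set_point_offset(NOMINAL)
print(f"Set the instrument {offset:+.5f} in away from nominal")
```

The point of the sketch is that the instrument is set to the master’s known actual value, so the master’s deviation from nominal costs nothing in accuracy, only a note on the calibration report.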
I should make clear that the master in my example is providing a physical set-point and is not suitable for calibration purposes unless used with other masters covering the range of the measuring instrument being used. In this latter case, the other masters could have similar offsets and be useful for calibration purposes as opposed to zero-setting purposes. The key is to know the actual measured values of the masters compared to the tolerances they were made to.
So where do the masters that are typically supplied with outside micrometers over 1” or 25mm fall into the grand scheme of things? They are called setting rods – or discs – for good reason. They provide a reference value or starting point within the range of the instrument. On their own, they are not suitable for calibrating an instrument over a range. To use an old military expression, they get you close enough for artillery but not much else. You need more than one physical master to get you to a bulls-eye situation.
This applies to the many masters made for use with outside micrometers that are supplied in steps of 1” or 25mm. For calibration purposes, you need several masters over a given range to provide meaningful calibration, and many makers offer sets of them for exactly this reason.
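A range check along these lines can be sketched as follows. The master sizes, instrument readings, and acceptance limit are all made-up numbers for illustration; a real check would use the actual calibrated values from the masters’ reports and the tolerance from the relevant specification.

```python
# Hypothetical check of a 1"-2" micrometer at several points across
# its range, using masters whose calibrated actual sizes are known.
# All values are illustrative only.

# (master_actual_size, instrument_reading) pairs, in inches
checks = [
    (1.00002, 1.0000),
    (1.25001, 1.2500),
    (1.49998, 1.5001),
    (1.75000, 1.7500),
    (2.00003, 2.0001),
]

ALLOWED_ERROR = 0.0002  # inches: an assumed acceptance limit

# Instrument error at each point: reading minus the master's true size.
errors = [reading - actual for actual, reading in checks]
worst = max(errors, key=abs)
print(f"Worst error over range: {worst:+.5f} in")
assert all(abs(e) <= ALLOWED_ERROR for e in errors), "Out of tolerance"
```

One point in the middle of the range tells you nothing about the rest of it, which is why the single setting rod in the box is a set-point, not a calibration.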
Save your sanity and keep ‘millionths’ out of your life and equipment if at all possible.