In my last column I noted that I would give some hints to help you avoid problems with assigning and applying acceptance limits, so let’s go back to square one.

When you add a new micrometer, for example, to your inspection equipment, you will create a gage record showing the unique tool number you have assigned to it. Your record should have room to note some or all of the following: make, model number, capacity and/or measuring range, as well as how precise you expect it to be—all the things you will have considered when selecting whatever you have bought, borrowed or stolen for your needs.
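If you keep such records in software, a minimal sketch of the kind of structure that could hold them is shown below. The field names, maker and values are my own illustrations, not a standard format.

```python
from dataclasses import dataclass

# A minimal sketch of a gage record; field names and values are illustrative only.
@dataclass
class GageRecord:
    tool_number: str            # unique ID you assign to the instrument
    make: str                   # manufacturer
    model_number: str
    measuring_range: str        # e.g. "0-1 in"
    resolution_in: float        # smallest increment the readout resolves
    new_accuracy_in: float      # precision expected when the instrument is new
    acceptance_limit_in: float  # how far it may deteriorate before it is unsuitable

example = GageRecord(
    tool_number="MIC-042",
    make="ExampleCo",           # hypothetical maker for illustration
    model_number="EC-100",
    measuring_range="0-1 in",
    resolution_in=0.0001,
    new_accuracy_in=0.0001,
    acceptance_limit_in=0.0002,
)
```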

Another section of that record should show your re-calibration limits, i.e., how far the instrument can deteriorate before it is no longer suitable for your work. Being clever, like all my readers, you will have carefully considered the typical tolerances this micrometer will be used to verify and arrived at a new limit for precision, and that will be your acceptance limit for this device. This limit could be two or more times what the instrument was made to when new. If this will consume too much of your workpiece tolerance, you should consider a different device for the application, or you will be wasting a lot of time with measuring disputes and suspect workpieces.
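As a rough way to judge "too much," here is a sketch that compares an acceptance limit against the workpiece tolerance it will be used on. The 10% threshold is a commonly cited rule of thumb, offered here as an assumption rather than a requirement.

```python
def tolerance_consumed(acceptance_limit_in: float, workpiece_tolerance_in: float) -> float:
    """Fraction of the total workpiece tolerance consumed by the instrument's acceptance limit."""
    return acceptance_limit_in / workpiece_tolerance_in

# Illustrative figures: a micrometer allowed to drift to +/-0.0002 in,
# used on a part with a total tolerance band of 0.002 in (+/-0.001 in).
ratio = tolerance_consumed(0.0002, 0.002)
print(f"{ratio:.0%} of the workpiece tolerance is consumed")  # -> 10%

# A commonly cited rule of thumb (an assumption here, not a universal requirement)
# is to keep this around 10% or less; beyond that, consider a more precise device.
if ratio > 0.10:
    print("Consider a different device for this application.")
```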

When fixed limit gages are involved, you will go through the same exercise except that you will usually have a specification from the gage maker that shows what his/her tolerances were for making the gage. As I mentioned previously, you now have to set your limits for the gage, i.e., how much wear you can accept before it eats up so much of your work tolerance that it drives your machinists crazy.

Taking a Class Z tolerance plug gage, the Go member may have a new tolerance of +.0001” and, since this gage member will be subject to the most wear, you should consider specifying a gage with a larger nominal size to give you some wear allowance before it wears below the limit for size. If you specified a nominal size with a .0002” wear allowance, your workpiece tolerance will be reduced by that amount plus the gage maker’s tolerance in a worst-case scenario.
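To put numbers on that worst case, here is a short sketch using the figures from the example above; the values are illustrative only.

```python
# Worst-case loss of workpiece tolerance from the Go plug gage example above.
gage_maker_tolerance_in = 0.0001  # Class Z Go member made to a +.0001 in tolerance
wear_allowance_in = 0.0002        # extra nominal size specified as a wear allowance

worst_case_reduction_in = gage_maker_tolerance_in + wear_allowance_in
print(f"Worst-case reduction in workpiece tolerance: {worst_case_reduction_in:.4f} in")  # 0.0003 in
```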

You can see from these comments that setting acceptance limits should take a number of factors into account. The ideal way to deal with this would be to have engineering involved in the decision process to ensure they agree with whatever amount of your manufacturing tolerance will be affected, and someone from the quality department with metrology knowledge to ensure that what engineering expects can actually be measured.

There are some dimensional measuring devices whose makers avoid making any claims for precision that you could use to set performance limits. Optical comparators are one such instrument. Their makers will tell you how good their lenses are or how precise the digital readout included with their unit is likely to be, which is handy to know but not what you really need to know.

What you need to know is how precise the device that incorporates all of these elements actually is when you use it: a functional calibration. This isn’t a fixed value because it varies with the quality of the image at the screen and the visual acuity of the operator using it. Factors that vary between users and the items being measured are elements the instrument maker doesn’t know, making it impossible to predict levels of precision on a generalized basis.

To keep your sanity in all of this, always remember that practical acceptance ‘limits’ are a judgement call related to what you need rather than numbers from a chart or what some lab’s software program or new gage specification says they should be.

I’ve saved the worst part for last. If you are setting acceptance limits, you will have to decide how you want the calibration laboratory’s measurement uncertainty accounted for in any ‘acceptance’ decisions. The minefield this represents could make your acceptance limits a bit dodgy at best, so I’ll make it a subject for another day.