A daily regimen of gage verification, combined with the empirical and statistical information gained during scheduled tool calibrations, can help establish the optimum intervals between calibrations, keeping measurement tools on the shop floor and operating at peak efficiency.

Many experts feel that true calibration must be done in an environmentally controlled laboratory, but a daily ritual can help operators track their tools. Daily verification can be as simple as a visual inspection of a caliper to see whether the jaws come together with no light showing between them. Or an inspector can carry a couple of gage blocks to audit key or heavily used gages.

George Schuetz, precision gaging director for Mahr Federal Inc. (Providence, RI), said that true calibration should only be done in the laboratory, but that daily checks can build a manufacturer's confidence in a tool over the interval between calibrations. "Some people check their tools every day," said Schuetz. "They have a gage on the floor and a process in place where every morning they use a zero master, a span master and the tool to make sure they get the correct numbers. Those standards might not be held to the same level of control that they might have in the laboratory, but it is a way to determine every day that the gage is functioning."
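
A trivial sketch of such a morning check, in Python, may make the idea concrete. The master values, acceptance limits and gage names below are illustrative assumptions, not figures from Mahr Federal or the article:

    # Hypothetical morning check of a gage against a zero master and a
    # span master. Nominal values and allowed deviations are assumptions.
    CHECKS = {
        "zero master": (0.0000, 0.0002),   # (nominal in inches, allowed deviation)
        "span master": (1.0000, 0.0002),
    }

    def daily_check(readings: dict) -> bool:
        """Return True if every master reading is within its allowed deviation."""
        ok = True
        for name, (nominal, limit) in CHECKS.items():
            deviation = abs(readings[name] - nominal)
            within = deviation <= limit
            status = "OK" if within else "FAIL - pull gage for calibration"
            print(f"{name}: read {readings[name]:.4f}, deviation {deviation:.4f} -> {status}")
            ok = ok and within
        return ok

    # Example: the span reading drifts 0.0003 in, so the check fails.
    daily_check({"zero master": 0.0001, "span master": 1.0003})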

Not in the shop
Checking the equipment on the shop floor can help detect out-of-tolerance situations more quickly than waiting for the regularly scheduled calibration. Some gage management and calibration management software can tie in with statistical process control (SPC) software to track trends, which can keep the gage in use on the factory floor rather than out of service in the calibration lab. This data can help the operator decide how often to calibrate a tool: too often, and the tool becomes unproductive; too infrequently, and parts may be mismeasured. In general, calibration intervals are based on frequency of use and how often the tool is found out of calibration.

Amosh Kumar, calibration lab manager for Mitutoyo America Corp. (Aurora, IL), suggests that a new tool should be calibrated once a month for several months to develop a tool history. "Enough data can be gathered in 3 to 6 months, depending on the historical data collected, to start increasing the frequency of verification," he said.

Operators should increase the intervals between calibrations until the tool shows signs that it needs to be calibrated, generally at about 6 months, or per the manufacturer's recommendations. When that happens, Kumar said, it is best to begin monthly calibrations again to continue building the history. As the upper limit of the calibration interval is discovered and a routine is established, the frequency of calibration can be adjusted. Keep in mind that calibration intervals are not static; calibration may be required sooner because of factors such as increased use or damage to the tool.
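
A rough sketch of this kind of interval logic follows; the thresholds, step sizes and record layout are hypothetical illustrations of the practice described above, not values from Mitutoyo or the article:

    from dataclasses import dataclass

    # Hypothetical interval-adjustment rule: start monthly, stretch the
    # interval while the gage keeps passing, and fall back to monthly
    # after an out-of-tolerance result.
    MIN_INTERVAL_DAYS = 30    # monthly, for a new tool or after a failure
    MAX_INTERVAL_DAYS = 180   # roughly the six-month guideline cited above

    @dataclass
    class GageRecord:
        gage_id: str
        interval_days: int = MIN_INTERVAL_DAYS
        consecutive_passes: int = 0

    def next_interval(record: GageRecord, passed: bool) -> int:
        """Update and return the calibration interval after a calibration."""
        if not passed:
            # Found out of tolerance: restart monthly calibrations
            # to rebuild the tool history.
            record.consecutive_passes = 0
            record.interval_days = MIN_INTERVAL_DAYS
        else:
            record.consecutive_passes += 1
            if record.consecutive_passes >= 3:
                # After several clean results, extend the interval by half,
                # capped at the upper limit.
                record.interval_days = min(
                    int(record.interval_days * 1.5), MAX_INTERVAL_DAYS)
        return record.interval_days

    caliper = GageRecord("CAL-0042")
    for passed in (True, True, True, True, False):
        print(passed, next_interval(caliper, passed))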

Frequency of use is an important determinant of how long a handheld measuring tool can go between calibrations. An automatic gage that measures thousands of pieces per day will need calibration far sooner than one that is used less often. Some tools also need to be calibrated more often than others because of their design. Torque wrenches, for example, may need frequent calibration.

"How often to calibrate a tool has never been written in stone. It depends on the frequency of the gage's use," said Richard Dalgleish, metrologist for software manufacturer Westechlabs Inc. (Edmonton, Ontario, Canada). "One big mistake is too let a torque wrench go too long between calibrations. They are spring-loaded devices and they are left loaded on a continual basis."

Environmental effects
While some experts feel that calibration should take place in situ, in the place where the tools are used, most experts agree that calibrations should take place in the laboratory, away from the dirt and thermal conditions that affect the gage, the master and the workpiece. This is especially true when tight tolerances are required, as with threaded plug and ring gages.

"If you try to calibrate the gage in the summer time when it is 90 degrees on the shop floor, and use a master from the calibration lab that is 68 degrees, you will get a big difference in results," said Schuetz.

This is because temperature variations of even a few degrees can cause the tool, part and master gage to expand or contract at the microinch level. Temperature can fluctuate from one end of the plant to the other, as well as from one end of a tool to the other. Steel, for example, expands about 11.5 microinches per inch for every 1°C change in temperature; the exact rate depends on the material from which the tool is made. In a lab, the temperature can be held at a constant 20°C (68°F), the internationally accepted standard temperature for dimensional measurement. Even when brought to a lab, the gage is allowed to soak, or stabilize, at the laboratory temperature for up to a day to reach this temperature. "You are looking at a lot of thermal expansion, especially when checking highly accurate parts. In other words, the guy with the best temperature control wins," said Kumar.
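
As a worked example (the 4-inch gage length is an illustrative assumption, not a figure from the article), the standard linear-expansion relation gives:

    \Delta L = \alpha \, L \, \Delta T
             \approx (11.5 \times 10^{-6}\,/\,{}^{\circ}\mathrm{C}) \times (4\ \mathrm{in}) \times (12\,{}^{\circ}\mathrm{C})
             \approx 550\ \mu\mathrm{in}

That 12°C difference is roughly the gap between Schuetz's 90°F shop floor and 68°F lab, and half a thousandth of an inch is larger than the full tolerance on many precision parts.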

Computerize your calibration
If a shop uses 10 to 20 gages, tracking the calibrations and results can be done on 3-by-5 file cards, although not as efficiently as with computer software. If a company has tens of thousands of gages, computerizing the calibration process is a necessity.

Calibration management programs are available from a multitude of companies. Typical features include compliance with ISO 9000 and QS-9000 documentation requirements; gage repeatability and reproducibility analysis; and the ability to create instrument records, print calibration stickers and certificates, run canned calibration procedures, and set high or low limits based on test data. Automatic reminders and due-date notices can be sent via e-mail. Calculating measurement uncertainty is another important feature that many programs offer.

To find software that will meet long-term and audit-day needs, focus on the following features.

Validation. This is a functional test that should be performed by the software supplier at the place of installation to ensure the software is operating correctly. An auditor may require proof that the test was done.

Verification. Daily verification of micrometers, calipers and gages is part of a calibration program that helps a tool perform at its best. The software should be able to produce a complete list of working gages, including gages that are only verified visually and do not require calibration, such as rulers.

Traceability. If a master gage fails, the software should be able to report which gages were calibrated with that master. This eliminates the need to check the entire gage inventory, not to mention every product manufactured since the failure occurred. The software also checks newly entered data against existing tolerances.
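
A minimal sketch of that reverse lookup, assuming calibration records that store the serial number of the master used (the record layout and IDs here are hypothetical):

    from collections import defaultdict

    # Hypothetical calibration log entries: (gage_id, master_serial, date)
    calibration_log = [
        ("MIC-001", "MASTER-7", "2004-01-10"),
        ("CAL-042", "MASTER-7", "2004-01-12"),
        ("MIC-002", "MASTER-3", "2004-01-15"),
    ]

    # Index the log by the master used, so a failed master maps directly
    # to the gages calibrated against it.
    gages_by_master = defaultdict(list)
    for gage_id, master_serial, date in calibration_log:
        gages_by_master[master_serial].append((gage_id, date))

    # If MASTER-7 is later found out of tolerance, recall only the affected
    # gages instead of rechecking the entire inventory.
    print(gages_by_master["MASTER-7"])
    # [('MIC-001', '2004-01-10'), ('CAL-042', '2004-01-12')]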

Calibration procedures. Can calibration procedures be incorporated into the software program? Auditors find it helpful to view a gage's calibration procedure in tandem with its calibration history.

Reports. Does the software meet a company's report-generating needs? Functionality and flexibility are key.

Report generation is an important function of gage management and calibration management software. The ability to create instrument records is helpful during audits or when a company is preparing for ISO 9000 certification. Data fields can be set up to contain information required for specific calibration standards. Audits that previously took days to conduct can be completed in a few hours. In some cases, auditors may not even require hard-copy certifications because the records kept on the software are complete and accessible.

The software also can help fit calibration scheduling around production demands. "You can look at your production schedule and say, 'We are supposed to calibrate next month, but next month will be busy, so let's calibrate the gage this month,'" said Dalgleish.

Gage management and calibration software, as well as checking the tools on a daily basis, can help keep the tools on the shop floor and operating at optimal levels.

Q&A with NIST physicist Dr. Steve Phillips

What is the relationship between calibration and verification?

First, let's examine the standardized meaning of these words: The ISO International Vocabulary of Basic and General Terms in Metrology (VIM) defines calibration as "a set of operations that establish, under specified conditions, the relationship between values of quantities indicated by a measuring instrument or measuring system, or values represented by a material measure or a reference material, and the corresponding values realized by standards."

ISO Guide 25 defines verification as "confirmation by examination and provision of evidence that specified requirements have been met."

Unfortunately, neither of these definitions yields an unambiguous statement. Consequently, several of my colleagues and I have developed a short checklist for ensuring that all aspects of a calibration are addressed. These include:
1) administrative and procedural documentation, e.g., as required by ISO Guide 25, involving such issues as the serial numbers of the master gages in use;
2) a statement of the measurand being calibrated;
3) the calibration result together with an uncertainty statement; and
4) a statement regarding the validity conditions of the calibration.

Calibration is a highly technical topic focused on quantitative issues. In contrast, verification is typically a binary decision; the workpiece or instrument is either within specifications or is not. Verification may also result in a business decision such as downgrading or declaring a piece of equipment obsolete.

What can calibration tell me about the processes and the tolerances that have been set? How does it feed back to design?
Most tolerances are set in the design phase. The calibration certificate includes information about the accuracy of the instrument or gage under consideration, typically expressed as either an uncertainty statement or as a maximum permissible error.

The designer of a tightly toleranced product is well advised to seek out the calibration information on the instrument or gage that will be used in the subsequent measurements. The uncertainty of measurement can be no better than that stated in the calibration certificate.

Consequently, if the uncertainty associated with the calibration represents a significant fraction of the product tolerance, subsequent measurements will generally not be able to conclusively verify conformance to the design specifications. If this situation occurs, the designer should reevaluate whether the tight tolerance is actually required, as it will have economic repercussions.

These costs may include obtaining more accurate calibrations, purchasing new or improved measurement equipment, or accepting that some fraction of good products will be measured as out of specification, resulting in unnecessary scrap, or that out-of-specification product will be measured as good, resulting in warranty costs.
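
To put a number on "significant fraction": a common rule of thumb (used here as an illustration, not a ratio Phillips cites) is to keep the tolerance-to-uncertainty ratio at 4:1 or better:

    \frac{T}{U} \geq 4 \qquad \Longrightarrow \qquad U \leq \frac{T}{4}

For a diameter toleranced at ±0.001 in (a band of T = 0.002 in), the expanded measurement uncertainty U should stay below about 0.0005 in; a calibration whose own uncertainty approaches that figure leaves no room for the measurement itself.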

How does calibration feed back to manufacturing?
A product's tolerance must be partitioned between the manufacturing variability and the measurement uncertainty associated with verifying conformance to specifications. The optimal allotment of the tolerance between manufacturing and metrology is an economic decision and depends on the equipment available.
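
One simple way to picture this partition, sketched here using the common guard-banding idea rather than Phillips' own formulation: if T is the design tolerance band and U the expanded measurement uncertainty, the band left for manufacturing variation is roughly

    T_{\mathrm{mfg}} \approx T - 2U

For example, a 0.002-in band measured with U = 0.0003 in leaves about 0.0014 in for the process itself; the larger the uncertainty, the less of the tolerance manufacturing can use.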

Manufacturing tools are generally more expensive than metrology equipment; hence, a good calibration of the metrology equipment is a worthwhile investment. Additionally, in manufacturing situations using feedback, such as SPC, a poor calibration that results in a large measurement uncertainty may cause unnecessary or erroneous adjustments to the manufacturing process and actually degrade product quality.

How does calibration feed back into test strategy?
Calibration is an essential requirement for measurement traceability. It provides the connection back to the SI unit and supplies an uncertainty statement about the accuracy of the instrument or gage under stated validity conditions. In short, the calibration process tells us what our accuracy is under stated conditions.

Unfortunately, actual measurement conditions may differ significantly from those of the calibration. For example, a CMM might be calibrated to measure simple, point-to-point length using step gages and ball bars at 20°C; however, the product measurement of interest might be the concentricity of two bores at an ambient temperature of 25°C. In this example, the measurand of the measurement, the concentricity, is not even the same as that of the calibration, point-to-point length. This can make the measurement uncertainty budget difficult to produce, as the calibration information is not directly comparable.

In these cases, either a computation is needed that transforms the calibration information into values that are directly comparable to the particular measurement under consideration, or the instrument needs a new calibration that is more relevant to the desired measurement. Consequently, both the type of calibration, meaning the measurand being calibrated, and the numerical values documented in the calibration report, influence subsequent measurements.