New standard forces manufacturers to alter gage block calibration.

Consistent with the philosophy behind the new standards, the computer has become the heart of this comparator.

Source: Mahr Federal Inc

Ten years ago, if a box of Grade 2 gage blocks was sent to an outside lab for calibration, there would have been little need for discussion about what a company needed. If that same box were sent today, without any instructions, a company could be headed for trouble. While a box of Grade 2 gage blocks has not changed, the standards for defining their length have.

In January 2002, ASME issued B89.1.9-2002 with the objective of bringing U.S. gage block calibration practice more in line with international standards. It also accommodates important shifts in the use of gage blocks and reflects current trends in the use of measurement uncertainty. The basic criterion used by the committee was to adhere as closely as possible to ISO 3650, Geometrical Product Specifications (GPS), Length Standards, Gage Blocks, while allowing for standard measuring practice in the United States. Not the least of these allowances was the incorporation of specifications for inch-system gage blocks.

So now, when sending out a box of Grade 2 blocks, the calibration lab should know:

• Whether these are new Grade 2 blocks or old Grade 2 blocks

• Whether the blocks should be calibrated to the new standard or the old standard

For example, under the old standard, the length tolerance on a 0.1-inch Grade 2 block was +4/-2 microinches. On a new Grade 2 block, that same tolerance is ±18 microinches. In fact, the closest equivalent to an old Grade 2 is a new Grade 0, which has a length tolerance of ±5 microinches. The differences are substantial, and lie not only in the dimensions, but also in the basic philosophy behind the calibration process.
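To make the grade differences concrete, the comparison above can be sketched as a small tolerance check. This is only an illustration using the values quoted in this article for a 0.1-inch block, not a reproduction of either standard's tolerance tables.

```python
# (lower, upper) length tolerances in microinches for a 0.1-inch block,
# taken from the figures quoted in the article (illustrative only).
TOLERANCES_UIN = {
    ("old", "Grade 2"): (-2.0, +4.0),    # old ASME B89.1.9 / GGG-G-15C
    ("new", "Grade 2"): (-18.0, +18.0),  # ASME B89.1.9-2002
    ("new", "Grade 0"): (-5.0, +5.0),    # closest new equivalent to old Grade 2
}

def within_tolerance(deviation_uin, standard, grade):
    """Return True if a measured deviation from nominal length
    (in microinches) falls inside the stated tolerance band."""
    lo, hi = TOLERANCES_UIN[(standard, grade)]
    return lo <= deviation_uin <= hi

# A block 3 microinches over nominal passes as an old Grade 2;
# one 3 microinches under nominal does not (the old band was asymmetric):
print(within_tolerance(3.0, "old", "Grade 2"))   # True
print(within_tolerance(-3.0, "old", "Grade 2"))  # False
print(within_tolerance(-3.0, "new", "Grade 2"))  # True
```

The asymmetric old band reflects the traditional American preference for blocks at or slightly above nominal size, while the wide symmetric new band reflects the measure-and-characterize philosophy described below.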

The older version of ASME B89.1.9, issued in 1984 and revised in 1997, was based on an even older standard, GGG-G-15C, a military specification dating back to 1974. The thrust of these standards, the traditional American approach, was to make blocks as close to exact size as possible. Under the European way of thinking, so long as a block is within a certain size range, it does not matter exactly what its size is, provided it is measured well and characterized correctly.

Under this new scheme, measurement is as important as manufacturing. If the variation is known, it can be allowed for. But by the same token, while size tolerances are not very tight under the new ASME standard, form tolerance (how flat the blocks should be and how parallel their surfaces should be) can be very tight.

Under the old standard, the size tolerance applied only at the center of the block, at what was known as the reference point. Under the new standard, size tolerance applies everywhere. This implies that the calibration lab should check a representative number of points on the block's surface, not just the central point.

This fundamental difference in the idea of and approach to calibration has ramifications for everyone involved in the process, from part manufacturers seeking calibration, to labs doing calibration and even for manufacturers that make gage block calibration equipment.

Standard changes

Users of gage blocks are not required to change anything. Many manufacturers have hundreds of sets of gage blocks, some many years old and still perfectly good. They also have processes based on these blocks and the standards for calibrating them that are still perfectly acceptable. However, even if they ignore the new standard, users will need to be aware of this new practice and make sure their blocks are being calibrated correctly. And if they acquire new blocks, or adopt new processes, they will have to be equally aware that Grade 2 is no longer the same.

To make this process a bit less confusing, ASME has done a couple of things. First, it has added the prefix "AS" to Grades 1 and 2 to help avoid misidentification with the old Grades 1 and 2. Second, while the committee "basically agreed with the logic" behind the size tolerance issue, it also added a Grade 00 with tolerances near those of the old U.S. Grade 1 for those who still feel more comfortable with a high-accuracy gage block.

For calibration labs, in addition to the requirements for clearer customer communication, the new standard requires basic changes in their calibration and reporting procedures. While the old standard suggested that gage blocks be checked "in several places," as a matter of common practice, measurements were typically made and certified at a single reference point near the center of the block.

The new standard strongly recommends multiple checks, suggesting in Section 8.4.4 that, "variations between readings at the reference point and at the four corners of the measuring face, approximately 1.5 millimeters or 0.060 inch from the side faces could be regarded as representative." That these points are not mandated allows flexibility for existing commercial practice, but the section goes on to stipulate that if other points are used, "their position shall be described."
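Under the rule that size tolerance applies everywhere, the multi-point check Section 8.4.4 describes reduces to testing every reading against the tolerance band and reporting the spread between points. A minimal sketch, with hypothetical readings at the reference point and the four corners (deviations from nominal length, in microinches):

```python
# Hypothetical readings at the five suggested measurement points,
# expressed as deviations from nominal length in microinches.
readings = {
    "reference": 1.2,
    "corner_1": 2.0,
    "corner_2": 0.8,
    "corner_3": 1.5,
    "corner_4": 2.4,
}

def check_block(readings, lower, upper):
    """Under the new standard, size tolerance applies everywhere,
    so every measured point must fall within (lower, upper).
    The spread between points characterizes the block's variation."""
    in_tol = all(lower <= v <= upper for v in readings.values())
    variation = max(readings.values()) - min(readings.values())
    return in_tol, variation

# Check against the illustrative new Grade 0 band of +/-5 microinches:
ok, spread = check_block(readings, -5.0, +5.0)
print(ok, round(spread, 3))  # True 1.6
```

A single out-of-band corner fails the whole block, which is exactly why comparators that can only reach the central reference point fall short of the new practice.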

As a practical matter, this means that calibrations will take longer, and perhaps be more costly. It also means that many labs will have to upgrade their equipment as existing comparators in most American labs might not have the capability to measure blocks at the corners.

Herein lies what is perhaps the biggest change occasioned by the new standard: the need for equipment manufacturers to redesign and upgrade their high-end gage block comparators.

Dimensional measurement for gage block calibration is possibly the most precise mechanical measurement process on the planet. The environmental conditions under which it is done are as controlled as possible, and the equipment used is the absolute best that can be made. Thus, changes to that equipment are not undertaken lightly.

These are examples of skeletonized microinterferograms of diamond stylus tips showing surface discontinuities. The micrograph is a “contour map” of the tip so that all points on a given ring are equidistant from the reference optical flat. Source: Mahr Federal Inc.

Penetration coefficients

But the most interesting advances, in terms of metrology, are in the area of penetration coefficients. It has been known for many years that contact between a spherical probe tip and a plane surface under an applied force will result in a deformation of that surface. In terms of gage block calibration, this deformation is small, but of significant magnitude. If the blocks being compared are of different materials, a deformation correction, defined by Hertz, needs to be applied to the measurement. Such corrections have long been included in the calibration process, and penetration coefficients for the various block materials are maintained in the system software.
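The Hertzian correction can be sketched with the textbook sphere-on-flat contact formula. This is a simplified illustration, not the correction table an actual comparator carries: the gaging force, effective probe radius and material constants below are assumed, representative values.

```python
def hertz_penetration(force_n, radius_m, e1, nu1, e2, nu2):
    """Elastic approach of a spherical probe (material 1) pressed into a
    flat surface (material 2), per classic Hertz contact theory:
        delta = (9 F^2 / (16 R E*^2)) ** (1/3)
    with the effective modulus 1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2."""
    e_star = 1.0 / ((1 - nu1**2) / e1 + (1 - nu2**2) / e2)
    return (9 * force_n**2 / (16 * radius_m * e_star**2)) ** (1.0 / 3.0)

# Assumed gaging conditions and handbook-style material constants:
F = 0.75                   # gaging force, newtons
R = 1.6e-3                 # effective probe tip radius, meters
DIAMOND = (1.14e12, 0.07)  # Young's modulus (Pa), Poisson's ratio
STEEL = (2.0e11, 0.28)
CERAMIC = (3.0e11, 0.27)

d_steel = hertz_penetration(F, R, *DIAMOND, *STEEL)
d_ceramic = hertz_penetration(F, R, *DIAMOND, *CERAMIC)

# When master and test blocks differ in material, the correction applied
# to the comparison is the difference in probe penetration:
correction_m = d_steel - d_ceramic
print(f"penetration into steel:   {d_steel * 1e9:.0f} nm")
print(f"penetration into ceramic: {d_ceramic * 1e9:.0f} nm")
print(f"material correction:      {correction_m * 1e9:.0f} nm")
```

The formula makes Doiron's point visible: delta depends on the probe radius R under a cube root, so any error in the effective radius, the hardest variable to measure directly, propagates straight into the correction.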

However, in his book, The Gauge Block Handbook, Dr. Ted Doiron, acting group leader of the Engineering Metrology Group at NIST, has shown that the problem is not quite that simple. The amount of deformation depends not only on block material, probe radius (area) and gaging force, but, as Doiron has shown, also on probe material, and more specifically on probe tip geometry and surface finish. Of all these variables, the effective probe radius is probably the most difficult to measure directly.

Using a series of microinterferograms of diamond stylus tips, Doiron has shown that tiny discontinuities in tip geometry can significantly affect the contact area, calling into question the Hertzian correction factors. Further, according to Doiron, "It is very difficult to produce a perfectly spherical surface on diamond because the diamond hardness is not isotropic, i.e., there are 'hard' directions and 'soft' directions, and material tends to be lapped preferentially from the 'soft' directions."

As a result, tungsten carbide may be used to make the contacting probes because it can be polished to a more consistently spherical shape. The effect of the actual probe radius and surface finish can be calibrated, if a manufacturer has gage blocks of different materials but of the same nominal size that have been carefully calibrated. By comparing these blocks to one another, the effect of the different penetration shows up as small differences from the calibrated values.
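The cross-material check described above can be sketched in a few lines. All numbers are hypothetical: the residual between the observed comparator reading and the independently calibrated length difference is attributed to differential probe penetration.

```python
# Independently calibrated deviations from nominal for two blocks of the
# same nominal size but different materials (microinches, hypothetical):
calibrated_dev = {"steel": +1.0, "ceramic": -0.5}

# What the comparator actually reads when comparing the two blocks
# (steel minus ceramic, microinches, hypothetical):
observed_difference = 1.9

# The calibrated (true) difference in length between the blocks:
true_difference = calibrated_dev["steel"] - calibrated_dev["ceramic"]

# The leftover is the differential penetration of this particular probe,
# i.e. the correction it needs when comparing these two materials:
penetration_correction = observed_difference - true_difference
print(round(penetration_correction, 3))  # 0.4
```

A positive residual here would mean the probe sinks further into the steel block than into the ceramic one, which is the small systematic effect the system software must carry for each material pairing.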

The voluntary integration of any new standard is a slow process. Change is costly and typically not undertaken until there is some absolute requirement or justifiable return. Long term, however, implementation of B89.1.9-2002 with its emphasis on measurement will serve to reduce uncertainty, allow even tighter tolerances on manufactured parts and further improve quality.

All user functions are displayed and controlled on screen, and the system provides resolution of 0.1 microinch and repeatability of 0.2 microinch. Source: Mahr Federal Inc.

Dewey Christy is product manager, precision length metrology for Mahr Federal Inc. (Providence, RI). He can be reached at [email protected] or (401) 784-3271.