With the advent of computers and digitization, the process of collecting and analyzing data ballooned, and now there are literally hundreds of different ways to analyze surface characteristics, some codified into national and international standards, others specific to various industries or even individual companies. Source: Mahr Federal Inc.


Since the introduction of optical surface measurement systems some years ago, proponents have lauded the superiority of this technology, and predicted the imminent demise of more traditional tactile-based systems. Optical, they say, is faster, easier, nondestructive, and ultimately a "truer" representation of the surface being measured than the simple tracings of a probe.

Advocates of tactile measurement have countered that optical systems lack sufficient resolution, are "finicky" about set-up and the level of part cleanliness required, have difficulty with transparent or reflective surfaces, and that standards have not been fully developed for 3-D optical measurement.

Both arguments have merit, and it can be said that the truth probably lies somewhere in between. A look at both of these technologies shows their applications tending to diverge rather than compete as technology develops. It may very well be that 3-D optical systems evolve into a separate category of instrument altogether. This can be seen in how these systems collect and analyze data, how surface measurement standards are developing and how the process of surface measurement is used in real-world applications.

There are a wide variety of different methods and technologies used for optical scanning. Early 2-D scanning methods mimicked stylus tracing and collected a series of "linear" data points. More recent 3-D methods collect data from a small area. But regardless of the method used, the smallest increment of resolution available is the pixel in the imaging device. With typical systems this resolution might be on the order of approximately two microns square. Within this area, height information is optically averaged to come up with a single value to represent this square. Because of this, peak heights and valley depths may be somewhat flattened. Source: Mahr Federal Inc.

Surface by the Numbers

The earliest surface measurement devices were reference surfaces machined to various degrees of roughness. Machinists would literally "scratch the surface" of their parts with their thumbnail and compare them with the reference. Using this method, a good operator could replicate surface finish quite accurately, but the process was a simple go/no-go procedure. No data was gathered and no mathematical analysis conducted.

The next developmental step in surface finish was the use of optical microscopes, which allowed a magnified view of surface characteristics. Again, measurement was comparative, but the various degrees of magnification and the differing fields of view they afforded led to the concepts of sampling length and frequency that are central to surface analysis today.

The first attempt to actually capture information about a surface used a trace stylus and a mechanical amplifier whose linkage replicated the trace onto a smoked glass surface. Later, analog circuitry was applied to magnify the motion of the stylus and generate an average amplitude of that motion. Ra and Rq, for example, are nothing more than simple mathematical averages, parameters that arose because this analog circuitry could generate them easily. They say little about how the represented surfaces will function, but they did provide, for the first time, a way to quantify a measurement of surface texture.
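To make the arithmetic concrete, here is a minimal sketch of how Ra and Rq are computed from sampled profile heights. The profile values are invented for illustration; they do not come from any measurement in this article.

```python
# Minimal sketch: Ra and Rq computed from a sampled profile.
# z[] holds height samples (micrometers); values are illustrative only.
z = [0.8, -0.5, 1.2, -1.0, 0.3, -0.6, 0.9, -1.1]

mean = sum(z) / len(z)
dev = [h - mean for h in z]                         # deviations from the mean line

Ra = sum(abs(d) for d in dev) / len(dev)            # arithmetic average roughness
Rq = (sum(d * d for d in dev) / len(dev)) ** 0.5    # root-mean-square roughness

print(f"Ra = {Ra:.3f} um, Rq = {Rq:.3f} um")
```

Analog circuitry produced essentially these same averages electrically; digital instruments simply perform the sum over discrete samples.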

But now came a key step. The digital revolution allowed the creation of instruments that digitize the analog signal from the stylus and generate the typical surface profile trace we are familiar with today. With the profile in hand, it became possible to apply mathematical tools to analyze a variety of profile characteristics. Now there are literally hundreds of different ways to analyze surface characteristics, some codified into national and international standards, others specific to various industries or even individual companies.

Moreover, analysis of 2-D digital profiles has gone way beyond simple mathematical averages, and it is now possible to compute sophisticated functional characteristics such as the ability of surfaces to bear loads, retain lubrication, seal against leaks, or even support the growth and attachment of human tissue.
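One family of such functional characteristics is based on the material (bearing) ratio: the fraction of the profile that would carry load at a given depth below the highest peak. The sketch below is a simplified illustration of the idea, not the exact definition used in any standard, and the profile data is invented.

```python
# Hedged sketch of a material-ratio (bearing-ratio) style calculation:
# the fraction of sampled points at or above a slice level c below the
# highest peak. Simplified illustration only, not a standard's definition.
def material_ratio(profile, c):
    """Fraction of points with height >= (highest peak - c)."""
    level = max(profile) - c
    return sum(1 for h in profile if h >= level) / len(profile)

z = [0.8, -0.5, 1.2, -1.0, 0.3, -0.6, 0.9, -1.1]   # illustrative heights (um)
print(material_ratio(z, 0.5))   # shallow slice: only the tallest peaks bear load
print(material_ratio(z, 2.0))   # deeper slice: more of the surface carries load
```

A surface whose material ratio rises quickly with depth tends to bear loads well; one that rises slowly retains lubricant in its deep valleys.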

Going beyond 2-D, systems that measure multiple 2-D traces side-by-side have been used to generate 3-D profiles that are used in a variety of applications. One such application is the manufacture of automotive safety glass where certain topographic characteristics are required to ensure adhesion between glass and film layers. However, such measurement is quite slow, and irregular spacing between points within a profile and between the multiple traces makes 3-D parameter calculation difficult.

The white light optical sensor enables the rapid, high-precision recording of surface topography on a range of materials. Using white light and a CCD camera, the system is able to collect height information through the field of view of the camera. Source: Mahr Federal Inc.

Correlating 3-D Optical Data

The important point to bear in mind here is that all the parameters discussed thus far are based on an analysis not of the surface itself, but of a representation of the surface generated by a diamond stylus. Change the way that profile is generated, and the results of the analysis may change.

Optical scanning methods do just that: they change the way profiles are generated. This is not to say optical methods are better or worse, but they are different.

There are a variety of different methods and technologies used for optical scanning. Early 2-D scanning methods mimicked stylus tracing and collected a series of linear data points. More recent 3-D methods collect data from a small area. But regardless of the method used, the smallest increment of resolution available is the pixel in the imaging device. With typical systems, this resolution might be on the order of approximately two microns square. Within this area, height information is optically averaged to come up with a single value to represent this square. Because of this, peak heights and valley depths may be somewhat flattened.
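The flattening effect of per-pixel averaging can be sketched in a few lines. The fine-scale height data below is invented for illustration; the point is simply that averaging each pixel-sized window to one value erases peaks and valleys narrower than the window.

```python
# Sketch of optical pixel averaging: heights within each pixel-sized window
# are averaged to a single value, which flattens fine peaks and valleys.
def pixel_average(heights, window):
    """Average consecutive height samples in non-overlapping groups of `window`."""
    return [sum(heights[i:i + window]) / window
            for i in range(0, len(heights) - window + 1, window)]

fine = [0.0, 2.0, 0.0, -2.0, 0.0, 2.0, 0.0, -2.0]   # fine-scale peaks/valleys
coarse = pixel_average(fine, 4)                      # one value per "pixel"
print(max(fine), max(coarse))                        # peak height after averaging
```

Here the 2-unit peaks vanish entirely because they alternate within a single window; real surfaces are less extreme, but the mechanism is the same.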

In contrast, the radius of a diamond stylus is also about two microns, but the actual point of contact with the surface is much, much smaller. There is also an averaging, or filtering, process involved with stylus data, but the resolution is finer than is possible with current optics. The lateral spacing of data points is also closer with contact trace methods, typically 0.25 to 0.5 µm vs. 1 to 2.5 µm for optical. Again, this is not to say one is better or worse, right or wrong. But this resolution is a key difference that affects the correlation of parameter data.

Figure 2, Correlation Example, shows two trace profiles of a Halle roughness standard, one done with a white light optical system and the other with a diamond stylus. The lateral sampling interval is about 2.1 µm for the white light scan and about 0.5 µm for the contact trace. While the profiles appear very similar, there are minor differences in peak height, valley depth and slope representation that may have important effects on the mathematics of parameter calculation.

Regardless of which trace is the more realistic representation of the surface, and hence yields the more accurate calculation of whatever characteristic is being measured, the point is that the results do not always correlate. In fact, of the three surface parameter types, amplitude parameters correlate best, usually to within 20%, while hybrid and spatial parameters are more problematic.

Shown are two trace profiles of a Halle roughness standard, one done with a white light optical system, and the other with a diamond stylus. The lateral sampling interval is about 2.1 µm for the white light scan and about 0.5 µm for the contact trace. While the profiles appear very similar, there are minor differences in peak height, valley depth, and slope representation which may have important effects on the mathematics of parameter calculation. Source: Mahr Federal Inc.

Parameter Evolution

As previously noted, surface finish parameters began as simple arithmetic averages of various profile measurements and have become steadily more sophisticated in terms of surface functionality, particularly since the widespread application of microprocessors. One of the key expectations of 3-D optical systems, and one of the assumptions current in the marketplace, is that they would dramatically improve this window on surface functionality. Surfaces do exist in three dimensions, after all, so being able to represent them in their native form has to improve the ability to assess how they function, right?

Well, hopefully. But so far, all the parameters available in the standards are direct 3-D analogs of what can be done in 2-D. There is Ra, the average height, and there is Sa, which is just the average height over the 3-D area. There is Rz, the ten-point height, and there is Sz, the ten-point height in 3-D. There are Rq and Sq, and so on. Thus far, there really has not been any standardization of parameters beyond these three-dimensional analogs of what we already have in two dimensions.
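The analog relationship is easy to see in code: Sa is the same mean-absolute-deviation calculation as Ra, simply taken over a grid of heights instead of a single trace. The height map below is invented for illustration.

```python
# Sketch: Sa is the areal analog of Ra -- the mean absolute deviation of
# heights over a measurement area instead of along a single 2-D trace.
surface = [                      # illustrative 3 x 3 height map (micrometers)
    [0.5, -0.5, 1.0],
    [-1.0, 0.5, -0.5],
    [0.5, -1.0, 0.5],
]

flat = [h for row in surface for h in row]           # flatten the grid
mean = sum(flat) / len(flat)                         # mean plane height
Sa = sum(abs(h - mean) for h in flat) / len(flat)    # areal average roughness
print(f"Sa = {Sa:.3f} um")
```

Replace the absolute value with a squared deviation and a square root, and the same structure yields Sq, the areal analog of Rq.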

There is currently a great deal of research on how to quantify and characterize 3-D surfaces, and how to develop mathematical algorithms to incorporate three-dimensional surface data, but it is not as easy as it might seem. For example, in the various Rz parameters (there are several), a "peak" is defined in relation to a mean line. In two dimensions this is fairly straightforward, but when a third dimension is added, that mean line becomes a plane extending in an infinite number of directions, and mathematically determining what differentiates a peak from a sub-peak (or a rock on the side of a hill!) is tremendously more difficult.

Just the amount of data that needs to be acquired and processed is intimidating. At 0.25-µm lateral point spacing, a typical 5.6 mm 2-D trace generates 22,400 data points. To cover a 5.6 x 5.6 mm square field of view at this data density, a 3-D optical system would have to generate 501,760,000 data points. This would necessitate a 500-megapixel camera, not to mention a computer capable of processing the data in a timely manner. Even at 0.50 µm point spacing, the 2-D contact trace would have 11,200 data points, while a square field of view at this density would require 125,440,000 data points, which is still not practical with today's technology.
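This point-count arithmetic is easy to verify: points per trace is trace length divided by lateral spacing, and a square field at the same density needs that count squared.

```python
# Point-count arithmetic: a trace of length L at lateral spacing d gives
# L/d points; a square field at the same density needs (L/d)**2 points.
trace_mm = 5.6
for spacing_um in (0.25, 0.50):
    points_2d = round(trace_mm * 1000 / spacing_um)  # points in one 2-D trace
    points_3d = points_2d ** 2                       # square field, same density
    print(f"{spacing_um} um: {points_2d:,} per trace, {points_3d:,} per field")
```

At 0.25 µm spacing this gives 22,400 points per trace and roughly 502 million per field, hence the 500-megapixel camera mentioned above.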

Because of this, most optical systems look at smaller fields of view, use wider lateral point spacing, or both. And while some bright mathematicians are working on native 3-D parameters, and the potential for the future is enormous, none have yet been standardized. Technology continues to develop, and resolution in this range may be achievable in the not-too-distant future, but it may be many more years before it becomes affordable to the average user of surface metrology systems.

Utilizing a lazy-Susan-type staging table with customized tooling to accommodate up to eight measuring stations, this gage incorporates a Perthometer M1 or M2 surface measuring instrument, and supports most common DIN, ISO, ASME and SEP profile parameters, including Ra, Rz, Rmax and RPc. Source: Mahr Federal Inc.

Application Considerations

So what does this mean? Are 3-D optical surface texture measurement systems simply not there yet? Not necessarily. It depends on the application. For an application that was specified and initiated using optical methods, there will probably be no problem at all. However, for parts that have a long data history of contact measurement, the lack of correlation may cause issues. And even in cases where studies have been conducted to verify process parameters with optical systems and establish a correlation, any change in the process-switching to a new grinding wheel composition, for example-may alter the process equation.

Thus, in industries such as automotive, where parts are manufactured in disparate locations and where there is a long history of stylus-generated surface data, a change to optical at this stage would seem to offer few benefits. And even if (when) technology advances to allow equivalent data density, it still may be more economical to measure certain parts with traditional 2-D stylus tracings.

In other industries, however, such as the computer industry, where manufacturing process specifications seem to be rewritten every few years, optical may already be the best choice.

Everything we actually "know" about surfaces is deeply rooted in 2-D, contact-based profiling methods. Source: Mahr Federal Inc.

The Future

People are enamored with the idea of optical measurement, but those who ask about it often do not actually need it. They want it because it is the latest thing and therefore assume it is more accurate. They like the idea of noncontact measurement, of faster measurements, of gathering more data. And optical methods are typically faster and do gather more data overall, but that data is typically less dense, with points not as closely spaced, and results may or may not correlate with contact-based results.

Optical methods also are typically more sensitive to ambient conditions, particularly light and dirt on the parts. Transparent and highly reflective parts also pose difficulty for many technologies, and optical methods are typically quite finicky about having parts precisely lined up with the optics, which may require more time for setup. On the other hand, optical systems will never break a stylus or damage a sensitive part.

Finally, for most people, what we inherently believe we "know" about surfaces, and what has worked in the past and therefore will probably work in the future, is deeply rooted in 2-D, contact-based methods. While time and technology may well open entire new vistas for surface measurement, with new concepts of functionality and entire new fields of application, that tide has not yet turned.

The bottom line is that tactile and optical methods are fundamentally different, and may always be, despite advances in technology. So if a close analysis of an application shows 3-D optical measurement is appropriate now, go for it by all means. If not, there is nothing wrong with dragging that tried-and-true stylus, nor will there be for many years to come. Choose the right method for the application at hand. Do not be fooled into thinking that one is old-fashioned and the other is the latest and greatest without understanding how the choice will impact the application.

Benefits

  • There are literally hundreds of different ways to analyze surface characteristics, some codified into national and international standards, others specific to various industries or even individual companies.

  • Optical scanning methods change the way profiles are generated.

  • Surface finish parameters began as simple arithmetic averages of various profile measurements and have become steadily more sophisticated in terms of surface functionality, particularly since the widespread application of microprocessors.

  • There is currently a lot of research being done on how to quantify and characterize 3-D surfaces, and how to develop mathematical algorithms to incorporate three-dimensional surface data.