Quality 101: Laser Scanning Fundamentals
Laser line scanning is ideal for noncontact measurement applications, including inspection, cloud-to-computer-aided design (CAD) comparison, rapid prototyping, reverse engineering and 3-D modeling. Laser line probes use a triangulation process to find the position of objects in space. A high-performance laser diode inside the unit projects a straight laser stripe onto a surface, and a camera observes the stripe at a known angle to determine the location of each point on the line.
How Laser Scanners Work
A 3-D laser scanner uses laser light to probe objects. It projects a laser line onto the subject and uses a camera to locate the silhouette of the laser line.
Depending on how far away the laser strikes a surface, each point on the laser line profile appears at different places in the camera’s field of view. This technique is called triangulation because the points on the laser profile, the camera and the laser emitter form a triangle.
The length of one side of the triangle, the distance between the camera and the laser emitter, is known. The angle at the laser emitter corner is also known. The angle at the camera corner can be determined from the location of the laser line in the camera’s field of view. These three pieces of information fully determine the shape and size of the triangle, and therefore the location of its third corner: the point on the scan subject.
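The angle-side-angle geometry described above can be sketched in a few lines of code. This is a simplified 2-D illustration, not a vendor implementation: the baseline length and the two angles are assumed inputs, and the law of sines recovers the camera-to-point range.

```python
import math

def triangulate(baseline, laser_angle, camera_angle):
    """Locate the surface point of the camera/emitter/point triangle.

    The camera sits at the origin, the emitter at (baseline, 0), and
    both angles (in radians) are measured from that baseline.
    """
    # The three angles of a triangle sum to pi, so the angle at the
    # surface point is fixed once the other two corners are known.
    apex = math.pi - laser_angle - camera_angle
    # Law of sines: range from camera to point.
    cam_range = baseline * math.sin(laser_angle) / math.sin(apex)
    # Convert range plus camera viewing angle into x, y coordinates.
    return (cam_range * math.cos(camera_angle),
            cam_range * math.sin(camera_angle))

# Symmetric case: 100 mm baseline, both corner angles 60 degrees.
x, y = triangulate(0.100, math.radians(60), math.radians(60))
# the point lands midway along the baseline, about 86.6 mm above it
```

In a real scanner, `camera_angle` is not given directly; it is derived from where the laser line lands on the image sensor, as the article explains next.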
Most laser scanning systems use lens filters to block out ambient light and allow only the laser light to shine through. This improves system performance by reducing noise created by other light sources.
The Laser Line Probe
A laser line probe allows operators to quickly inspect or reverse engineer complex and organic shapes via laser scanning, as well as capture prismatic elements with the high accuracy that contact metrology provides.
A laser line probe can digitize the shape and position of an object that is placed in its field of view. A laser line is generated by fanning out a laser light beam into a sheet of light. When the sheet of light intersects an object, a bright stripe of light can be seen on the surface of the object. The built-in camera views the stripe at an angle and the observed distortions can be translated into height variations.
Converting Light into Points
Data is collected one slice, or cross section, at a time. A measurement arm acts as a referencing device, or localizer, that tracks and communicates to the host application software the position of each cross section in space. As the laser stripe is swept across an object, hundreds of cross sections are instantly captured. When they are collectively rendered in a CAD environment, the end result is a full 3-D digital representation of the object. The collection of raw data is commonly referred to as a point cloud.
Each of the captured cross sections contains hundreds of points. The number of points in each cross section depends on the size of the camera’s image sensor or charge-coupled device (CCD) and how much of the object is in the camera’s field of view. The maximum number of cross sections that can be acquired depends on the capture frame rate of the camera. A 30-hertz CCD can capture 30 frames, or stripes, per second.
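A quick back-of-the-envelope calculation shows how sensor resolution and frame rate combine into scan throughput. The 640-column sensor here is a hypothetical figure chosen for illustration; the 30-hertz rate comes from the text.

```python
# Illustrative throughput estimate: assuming one point per sensor
# column per stripe (a common simplification), throughput is simply
# columns multiplied by frame rate.
columns_per_stripe = 640   # hypothetical CCD horizontal resolution
frame_rate_hz = 30         # stripes captured per second (30 Hz CCD)

points_per_second = columns_per_stripe * frame_rate_hz
# 19,200 points per second at these assumed specifications
```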
The distance between points in a single stripe varies depending on the position within the field of view. Because of the angle of the camera, the field of view is not rectangular but rather trapezoidal. Stripes captured closest to the device, or in near field, will have points closer together than those captured farther away, or in far field. Standoff is the minimum distance required between the laser source and the scan object.
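The near-field versus far-field spacing difference follows from simple geometry: the laser sheet fans out, so stripe width, and with it the spacing between neighboring points, grows linearly with distance. The fan angle and column count below are illustrative assumptions, not specifications of any particular scanner.

```python
import math

FAN_ANGLE = math.radians(30)   # assumed full fan angle of the laser sheet
COLUMNS = 640                  # assumed sensor columns -> points per stripe

def point_spacing(distance_m):
    """Approximate spacing between adjacent points in one stripe."""
    # Width of the projected stripe grows linearly with distance.
    stripe_width = 2 * distance_m * math.tan(FAN_ANGLE / 2)
    return stripe_width / (COLUMNS - 1)

near = point_spacing(0.10)   # near field: 100 mm from the source
far = point_spacing(0.30)    # far field: 300 mm from the source
# far-field spacing is three times the near-field spacing here,
# because 300 mm is three times 100 mm
```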
Higher frame rates and higher resolution CCDs can improve scanning speed and produce high-density point clouds capable of detecting finer details. While this is ideal to increase productivity and data quality, large amounts of scan data can quickly consume computer resources and reduce overall performance, requiring more expensive, high-power computers to get the job done.
To the camera, the laser stripe projected onto a part looks like a thick silhouette or profile. In order to produce a single row of points, the scanner must identify the pixels that run through the center of the profile. One data point corresponds to the position of one pixel in the CCD and each column produces one point on the profile.
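Finding the center of the profile in each sensor column can be sketched with an intensity-weighted centroid, one common approach to sub-pixel peak detection. This is a minimal illustration on a toy image; production scanners use more robust peak-fitting methods.

```python
def stripe_centers(image):
    """Estimate the center row of the laser stripe in each column.

    `image` is a grayscale image as a list of rows. Returns one
    sub-pixel row coordinate per column, or None where no light
    was detected.
    """
    rows, cols = len(image), len(image[0])
    centers = []
    for c in range(cols):
        column = [image[r][c] for r in range(rows)]
        total = sum(column)
        if total == 0:
            centers.append(None)  # no laser light in this column
            continue
        # Intensity-weighted centroid: a sub-pixel center estimate.
        centers.append(sum(r * v for r, v in enumerate(column)) / total)
    return centers

# Toy 5x3 image with a thick, bright stripe around rows 2-3.
img = [[0,  0,  0],
       [10, 0,  5],
       [80, 90, 80],
       [10, 90, 5],
       [0,  0,  0]]
centers = stripe_centers(img)
# → [2.0, 2.5, 2.0]: the middle column's stripe straddles two rows,
# so its center falls between them
```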
The closer the laser line probe gets to the part (near field), the lower on the CCD the profile appears. As the laser scanner is pulled back (far field), the profile moves up on the CCD. The laser line silhouette is better defined in near field than in far field. A simple comparison would be to shine a flashlight on a wall; as the flashlight gets closer to the wall the light is brighter and the center spot is clearly defined, but as the flashlight is pulled back the center spot gets bigger and loses intensity and definition. This means that better accuracy and repeatability can be expected when holding the scanner closer to the part.
Sometimes surface properties such as color, texture and, in particular, reflectivity can diminish the quality of the image on the camera. This makes it more difficult for the laser probe to determine the true center of the profile, thus producing erroneous points that appear as noise in the data. Reflective surfaces can generate double images that typically result in outliers.
Noise in the data is almost inevitable and outliers are typically expected. There are many point cloud processing programs available that employ sophisticated and powerful algorithms to reduce noise and filter outliers.
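One family of outlier filters used by such programs compares each point's mean distance to its nearest neighbors against the cloud-wide average. The sketch below is a minimal brute-force version of that idea, not any specific product's algorithm; real tools use spatial indexes to handle millions of points.

```python
import math
import statistics

def remove_outliers(points, k=3, n_std=1.0):
    """Drop points whose mean distance to their k nearest neighbors
    exceeds the cloud average by more than n_std standard deviations.

    A minimal statistical-outlier-removal sketch; O(n^2), for
    illustration only.
    """
    mean_knn = []
    for p in points:
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(dists[:k]) / k)
    mu = statistics.mean(mean_knn)
    sigma = statistics.pstdev(mean_knn)
    threshold = mu + n_std * sigma
    return [p for p, m in zip(points, mean_knn) if m <= threshold]

# A tight cluster of points plus one far-away outlier, such as a
# double image from a reflective surface might produce.
cloud = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (50, 50, 50)]
clean = remove_outliers(cloud)
# the (50, 50, 50) outlier is filtered out; the cluster survives
```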
Pushing the 3-D Envelope
Laser line scanning provides a quick and effective way to inspect and reverse engineer complex parts and surfaces. This noncontact measurement technology employs high-end optics to convert light into accurate 3-D data.
Noncontact measurement devices are becoming more and more popular, driving manufacturers’ research and development efforts to produce better and more effective ways to turn everyday objects into digital computer models and push the envelope of possibilities in 3-D metrology.