Recent Advancements in Vision Technology for Product Inspection
Today it’s possible to obtain the benefits of machine vision in a wider range of applications while at the same time increasing inspection accuracy and reducing application development time.
In many industries, manual inspection is being replaced by machine vision inspection technology for its higher speed and greater accuracy, which improve product quality and reduce production costs. Another advantage of this adoption is the ability to archive images, providing a permanent quality record for each individual product. A key factor in the increasing use of machine vision is a series of technological advancements that increase accuracy and reduce application development time. One of these advancements is the use of higher resolution cameras that can inspect larger areas at higher levels of accuracy. Three-dimensional vision technologies that measure the height, depth and volume of products make it possible to apply machine vision to many new applications. Finally, self-learning pattern matching vision tools identify objects faster and more consistently while reducing application development time. This article explores each of these advancements and their impact on machine vision inspection applications.
High resolution sensors
Price points for higher resolution vision sensors are dropping, so higher resolution sensors are being used more frequently in inspection applications. High resolution vision sensors are ideal for high accuracy defect detection, and their larger field of view makes it easier for vision tools to accurately find features of interest, even on large parts, products and packages. Typical applications include alignment of tiny electronic components in the consumer electronics industry and inspection of glue beads on automotive body panels, which requires a large field of view. The use of higher resolution vision sensors in turn drives the need for faster processing cycles to keep inspection times within acceptable limits. The processing power available in a vision system is limited by the thermal budget of its package. Machine vision suppliers are overcoming this limitation with new, more efficient algorithms, such as a pattern finding tool that runs four to six times faster than the previous version. Another way around the obstacle is to offload the vision algorithm onto an industrial vision controller in a separate package with a higher thermal budget.
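As a rough illustration of why sensor resolution matters, the smallest defect a system can reliably detect scales with the field of view divided by the sensor's pixel count. The numbers and the three-pixels-per-feature rule of thumb below are assumptions for illustration only, not specifications from any particular vendor:

```python
# Hypothetical sketch: relate sensor resolution, field of view,
# and the smallest defect a vision tool can reliably resolve.

def spatial_resolution(fov_mm: float, pixels: int) -> float:
    """Millimetres covered by one pixel along one axis."""
    return fov_mm / pixels

def min_defect_size(fov_mm: float, pixels: int, pixels_per_defect: int = 3) -> float:
    """Rule-of-thumb minimum detectable defect: a feature should
    span several pixels to be found reliably (assumed here: 3)."""
    return pixels_per_defect * spatial_resolution(fov_mm, pixels)

# A 640-pixel-wide VGA sensor vs a 2448-pixel-wide 5 MP sensor
# imaging the same 100 mm field of view:
vga = min_defect_size(100.0, 640)       # roughly 0.47 mm
five_mp = min_defect_size(100.0, 2448)  # roughly 0.12 mm
```

The same arithmetic shows the trade-off mentioned above: quadrupling linear resolution over a fixed field of view multiplies the pixel count (and hence the processing load) by sixteen.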
A typical example of a high resolution vision application is a medical device that presented a major inspection challenge for an automated testing system. The alignment of two cylinders had to be measured within tolerances too tight to be achieved with conventional vision systems. During system development, inspections were performed manually, which took about one minute per part, much too slow for production operations. Tests showed that conventional 640×480 or 1600×1200 pixel vision systems would have produced too many inaccurate readings. Upgrading to a 5 megapixel vision system with a 5 megapixel telecentric lens provided accurate readings while reducing inspection time to about 200 milliseconds. The automated testing machine ensures extremely high levels of quality while minimizing production costs. Implementing inspection at the point of assembly has improved the quality of the supply chain while reducing the impact of defects that would otherwise be found downstream.
3-D vision systems
Rapid improvements are also being seen in applications better suited to 3-D vision, such as measuring the height, volume and tilt of parts, reading raised characters against a similarly colored background, and performing inspections under variable lighting conditions. A new generation of 3-D vision systems addresses these challenges by using laser triangulation to extract 3-D information from parts. A laser displacement sensor projects a beam onto the object to be measured, and the beam is displaced by the 3-D shape of the object. The image sensor captures the displaced beam as a topographical representation of the object, in which the extent of the image corresponds to the object's footprint and pixel values store the z height data. A complete 3-D image is generated from a series of acquired profiles, where each row of the image corresponds to one profile, resulting in a 16 bit greyscale image. The resulting 3-D image can be used both to locate the object and to accurately register vision tools for measuring 3-D and 2-D features such as length, width and height. It can also easily determine the presence or absence of an object regardless of color or lighting, and yields a high-contrast image from very subtle changes in height.
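The profile-stacking step can be sketched in a few lines. This is a toy illustration, assuming the displacement sensor delivers one row of z heights per profile; the measurement range and image dimensions are invented, not taken from a real sensor API:

```python
import numpy as np

Z_RANGE_MM = 25.0   # assumed measurement range of the displacement sensor
GREY_MAX = 65535    # full scale of a 16-bit greyscale image

def profiles_to_range_image(profiles):
    """Stack per-profile rows of z heights into a 16-bit greyscale
    image where pixel intensity encodes height above the baseline."""
    z = np.vstack(profiles)                     # each row = one acquired profile
    scaled = np.clip(z / Z_RANGE_MM, 0.0, 1.0)  # normalise to the sensor range
    return (scaled * GREY_MAX).astype(np.uint16)

# Fake part for illustration: a plate 5 mm high with a 10 mm boss.
rows = []
for y in range(100):
    profile = np.full(200, 5.0)
    if 40 <= y < 60:
        profile[80:120] = 10.0   # the boss sits proud of the plate
    rows.append(profile)

img = profiles_to_range_image(rows)
```

Because intensity maps directly to height, ordinary 2-D vision tools can then run on the range image, which is what lets the same toolset measure 3-D features.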
Consider the sprocket gear shown in the figure, which looks good in the x and y dimensions. A 3-D inspection, however, reveals that a tooth is broken, with the top 10 millimeters of its height missing. One could also imagine a case where only the top surface of the tooth is missing, as opposed to the entire tooth, a defect that only 3-D inspection could identify. Another typical 3-D inspection application involves measuring the flatness of a critical component such as a printed circuit board (PCB) or a substrate in a carrier. Skewing or warpage of components cannot be detected with 2-D inspection, but 3-D machine vision can easily determine the complete topographical profile of the part surface. Detecting the presence or absence of components whose color is similar to the area where they are to be installed is another very common 3-D inspection application. There are also many robotic guidance applications where 3-D vision is required, such as picking parts out of a bin or guiding a robot arm around multiple surfaces of a part.
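A flatness check of the kind described above can be sketched as a least-squares plane fit over the height map: fit the best plane, then report the peak-to-valley residual. This is only a minimal illustration of the idea; the surfaces are synthetic and the approach is one common choice, not necessarily what any given commercial tool does:

```python
import numpy as np

def flatness_mm(height_map):
    """Peak-to-valley deviation from the best-fit plane, in the
    same units as the height map."""
    h, w = height_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Solve z = a*x + b*y + c in the least-squares sense.
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, height_map.ravel(), rcond=None)
    residuals = height_map.ravel() - A @ coeffs
    return residuals.max() - residuals.min()

flat = np.full((50, 50), 3.0)   # perfectly flat surface at 3 mm
# Warped surface: height bows quadratically along one axis.
warped = flat + 0.2 * np.add.outer(np.linspace(0, 1, 50) ** 2, np.zeros(50))
```

Fitting a plane first is what separates true warpage from a part that is merely tilted in its carrier; a tilted-but-flat part fits the plane exactly and reports near-zero deviation.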
Self-learning composite pattern matching
Conventional pattern matching works by training a pattern based on the features found in a representative or “good” image of the part. In some applications, it may be impossible to acquire a part image that is not affected by noise, clutter, occlusion, or other defects. Attempting to train a conventional pattern from such a degraded image often produces an unusable pattern, since the pattern includes numerous features that are not present in other run-time part images.
Another recent advancement in machine vision inspection involves the use of intelligent self-learning composite pattern matching tools. These tools simplify application setup by learning to distinguish between the important image features, sometimes called the signal, or what makes a good part different from a bad part, and the random, inconsistent differences that are not significant, sometimes called the noise.
The new self-learning pattern matching tools can be used anywhere that conventional pattern matching tools are used today. A typical example is checking the alignment of a chip on a bioanalyzer with its mount. The chip is inscribed with circles that match up with openings in the mount. If the chip is misaligned, the circles will look oval instead of round. The pattern matching vision tool identifies chips with shapes that are circles, within a certain tolerance, as good parts and those with ovals as bad parts. Self-learning pattern matching tools are particularly useful in applications that are difficult to program with conventional pattern matching algorithms such as applications where no representative image of the part exists, applications that need to ignore certain features that vary from part to part, and applications where a part appears in different backgrounds.
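The circle-versus-oval decision in the bioanalyzer example can be illustrated with a simple geometric test: compare the lengths of a shape's principal axes. A ratio near one means the inscribed circle appears round (chip aligned); a ratio well below one means it appears oval (chip tilted or misaligned). The tolerance and the synthetic shapes below are invented for illustration, and this principal-axis approach is just one simple way to score roundness, not the algorithm used by any particular pattern matching tool:

```python
import numpy as np

def axis_ratio(points):
    """Ratio of minor to major principal axis of a 2-D point set.
    1.0 for a perfect circle, smaller for an ellipse."""
    centred = points - points.mean(axis=0)
    # Eigenvalues of the covariance are the squared axis lengths.
    eigvals = np.linalg.eigvalsh(np.cov(centred.T))
    return np.sqrt(eigvals.min() / eigvals.max())

def is_round(points, tolerance=0.95):
    """Pass/fail decision with an assumed roundness tolerance."""
    return axis_ratio(points) >= tolerance

# Boundary points of an aligned (round) and misaligned (oval) circle.
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
oval = np.column_stack([np.cos(t), 0.7 * np.sin(t)])
```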
You can train a nearly ideal pattern matching composite model using multiple degraded training images. The self-learning algorithm collects the common features from each image and unites them into a single ideal model. This approach filters out noise or other random errors from the training images that would otherwise appear in the final composite model, as shown in the figure.
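One simple way to realize the idea of uniting common features from degraded images is a per-pixel median across the training set: clutter that appears in only one or two images drops out of the composite, while features present in every image survive. The commercial tools are certainly more sophisticated; this is only a toy sketch of the principle, with invented images:

```python
import numpy as np

def composite_model(images):
    """Per-pixel median across training images: features common to
    most images are kept, random clutter in any one image drops out."""
    return np.median(np.stack(images), axis=0)

rng = np.random.default_rng(0)
ideal = np.zeros((20, 20))
ideal[5:15, 5:15] = 1.0   # the "true" pattern we never get to see cleanly

# Seven degraded training images, each with its own random clutter.
degraded = []
for _ in range(7):
    img = ideal.copy()
    ys, xs = rng.integers(0, 20, 5), rng.integers(0, 20, 5)
    img[ys, xs] = rng.random(5)   # clutter pixels unique to this image
    degraded.append(img)

model = composite_model(degraded)
```

The composite is closer to the (unseen) ideal pattern than any single training image, which is exactly the property that makes the trained model usable at run time.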
In some applications, you need to be able to locate and align parts in which certain features may be different in different parts. For example, in the case of a product packaging line, packaging lids may have some features in common, such as the product name, while other features are different from part to part, such as the product flavor. Composite pattern matching training can be used to create a pattern that includes the features that are the same for all parts while excluding the features that are different from part to part.
Composite modeling can also be used to filter out background changes. For example, it allows you to train a pattern that can locate a component at different positions on a printed circuit board, where the background, consisting of electrical connections such as tracks and solder, differs at each location.
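The exclusion of part-to-part differences described in the last two paragraphs can be sketched as a consistency mask: keep only the pixels that agree across all training examples, and mask out regions (flavor artwork, PCB background) that vary. The threshold and synthetic "lid" images below are invented for illustration:

```python
import numpy as np

def consistent_mask(images, max_std=0.05):
    """True where the training images agree (low per-pixel spread),
    False where they differ from example to example."""
    stack = np.stack(images)
    return stack.std(axis=0) <= max_std

rng = np.random.default_rng(1)
lids = []
for _ in range(5):
    lid = np.zeros((10, 10))
    lid[0:3, :] = 1.0                  # shared product-name banner
    lid[6:9, :] = rng.random((3, 10))  # per-lid flavor artwork varies
    lids.append(lid)

mask = consistent_mask(lids)   # True on the banner, False on the artwork
```

A pattern trained only on the masked-in pixels locates every lid by its shared features while ignoring the regions that legitimately differ.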
Machine vision solutions help companies improve product quality, eliminate production errors, lower manufacturing costs, and exceed consumer expectations for high quality products at an affordable price. Recent developments in machine vision technology are making it possible to obtain these benefits on a wider range of applications while at the same time increasing inspection accuracy and reducing application development time.