Machine vision quality assurance systems have excelled at automating the location, identification, and inspection of manufactured components through computational image analysis.
But when the component is part of a larger assembly, a complex package, or a kit—such as an automotive assembly or surgical intubation kit—defects, random product placement, variations in lighting, and other factors can quickly overwhelm a traditional machine vision system. For this reason, final inspection of assemblies, packages, and kits is usually conducted manually, to the detriment of overall quality and productivity.
Labor Shortages Compound Assembly Challenges
While manual operators typically outperform automated quality inspection solutions at examining complex assemblies with multiple attached or connected components, it is hard for operators to stay sharp. Studies show most operators can focus on a single task for only 15 to 20 minutes at a time.
Furthermore, while the difficulties of automated inspection of assemblies, packages, and kits aren’t new, manufacturers increasingly struggle to fill assembly jobs. In 2018 more than 63% of manufacturers said they were having difficulty staffing assembly lines. That’s 16 percentage points higher than the previous year’s figure, according to Assembly magazine’s 2018 State of the Profession report—and note that quality inspectors are usually pulled from the ranks of the best assemblers.
In addition to labor shortfalls, the realities of component and assembly manufacturing make inspection tasks more difficult. Common challenges associated with the automated inspection of assemblies, packages, and kits include:
Part-to-part variation. Every component varies from every other component in some small way. A bent or missing lead on a semiconductor is easily identified using traditional vision solutions. But if the defect varies considerably or is cosmetic, such as a blemish on a cast or machined part, programming a traditional vision solution isn’t feasible.
Large numbers of components. An automotive transmission unit rolling off a powertrain production line can contain hundreds of individual parts, including dozens of critical components. With a traditional machine vision approach, programming the size, location, and position of every critical component is time-consuming, and system performance can suffer.
Broad mix of assemblies and layouts. Assembly, packaging, and kitting lines regularly change what they produce (changeover) in response to new sales and production requirements. Changeover might involve packaging different components than in previous production runs or repositioning existing components in a new layout to meet a specific customer’s needs. Programming an inspection system with traditional machine vision tools for every different assembly, grouping, package, or kit would incur the same high offline engineering costs as for assemblies with high component counts.
Machine vision quality assurance systems excel at automating inspection of manufactured components. But when components are combined into larger assemblies, such as a printed circuit board, defects, random product placement, lighting variations, and other factors can quickly overwhelm a traditional vision system. Courtesy of Cognex
Need for Assembly Verification Spans Industries
While assembly, packaging, and kitting applications exist in virtually every manufacturing industry—either as a final inspection step, an interim inspection step prior to adding value, or a combination of both—we are working with leaders in several key markets to optimize deep learning solutions for both final and in-line assembly verification.
Automobile manufacturers build complex assemblies from thousands of individual components, and the resulting cars are both a boon to civilization and a safety hazard. A single missing hose or plug can result in a stalled production line, a seized engine, or a fatally flawed brake system. Automotive manufacturers and their suppliers negotiate contracts that closely stipulate acceptable levels of defects per shipment as well as financial penalties for missing those quality levels. Failing to inspect final assemblies for completeness can result in significant financial penalties, loss of key customers, and safety hazards.
Food and medical packaging and kitting also represent potential hazards to the general population. If someone with severe food allergies mistakenly ingests food containing nuts, for example, the result could be fatal. For surgical kits and medical supplies, the dangers of incomplete packages or incorrect parts are obvious, and just as clearly, they are unacceptable to the manufacturer and customer alike.
Electronics manufacturers work on razor-thin margins in high-volume production. A missed screw in a laptop or cellphone screen assembly can result in thousands of dollars in lost revenue.
Deep Learning Enables Assembly Verification
Consider the common obstacles outlined earlier in this article: variability, mix, and changeover. Unlike traditional vision solutions, where multiple algorithms must be chosen, sequenced, programmed, and configured to identify and locate key features in an image, deep learning tools learn by analyzing images that have been graded and labeled by an experienced quality control technician. Such a tool can be trained to recognize any number of products, as well as any number of component and assembly variations.
For example, verifying a car door panel assembly includes checks for specific window switches and trim pieces, depending on which door is being assembled. The same factory can produce doors for different trim levels as well as for different countries. A single tool can be trained to locate and identify each type of window switch and trim piece by using an image set that introduces these different components. Trained over a range of images, the tool develops an understanding of what each component should look like and can locate and distinguish the components in production.
Unlike traditional machine vision, which requires different algorithms combined in different ways for each object of interest in an image, a deep learning tool can locate any number of different components without explicit programming. By capturing a collection of images, this type of tool incorporates naturally occurring variation into its training, solving the challenges of both product variability and product mix during assembly verification.
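To make this training-by-example workflow concrete, the following sketch shows how a component classifier might be trained on technician-labeled images using a generic open-source framework. It is a minimal, hypothetical example in PyTorch/torchvision, not any vendor’s actual tooling; the folder layout, class names, and hyperparameters are assumptions for illustration only.

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Images graded and labeled by a QC technician, one folder per component class,
# e.g. data/train/window_switch_eu/, data/train/window_switch_us/, data/train/trim_chrome/
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),  # tolerate lighting variation
    transforms.RandomRotation(10),                          # tolerate small placement variation
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("data/train", transform=train_tf)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

# Start from a network pretrained on generic images and adapt the final layer
# to the number of component classes found in the training folders.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

Under these assumptions, adding a new switch or trim variant amounts to adding a labeled folder of example images and rerunning the loop, rather than reprogramming inspection rules.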
The emergence of deep learning software has introduced new tools that “learn” by analyzing images that an experienced quality control technician has first graded and labeled. Vision systems that leverage deep learning algorithms can be trained to recognize any number of component/assembly variations. Courtesy of Cognex
The Solution to Final Inspection
Due to the confluence of factors that go into final assembly verification and the difficulty of training a traditional machine vision solution to handle every conceivable variation, automating final assembly inspection has been nearly impossible until now. Today, with deep learning, customers can automate the assembly verification process by breaking the application into two steps.
First, the deep learning neural network is trained to locate each component type; second, the components found are verified for type correctness and location.
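A rough sketch of the second step is shown below, assuming the trained network from the first step has already returned a list of (label, x, y) detections for one inspection image. The expected layout, tolerance value, and function names are illustrative assumptions, not any product’s API.

from math import hypot

# Step 1 output: components the neural network located in the inspection image.
detections = [
    ("window_switch_eu", 412, 188),
    ("trim_chrome", 640, 305),
]

# Expected layout for this door variant: component type -> nominal (x, y) in pixels.
expected = {
    "window_switch_eu": (410, 190),
    "trim_chrome": (645, 300),
    "lock_button": (520, 240),
}
POSITION_TOL = 15  # allowable offset from the nominal position, in pixels

def verify(detections, expected, tol=POSITION_TOL):
    found = {label: (x, y) for label, x, y in detections}
    failures = []
    for label, (ex, ey) in expected.items():
        if label not in found:
            failures.append(f"missing component: {label}")
        elif hypot(found[label][0] - ex, found[label][1] - ey) > tol:
            failures.append(f"misplaced component: {label}")
    for label in found:
        if label not in expected:
            failures.append(f"unexpected component: {label}")
    return failures

print(verify(detections, expected))  # -> ['missing component: lock_button']

Keeping the verification rules separate from the trained network means a changeover only requires swapping in the expected layout for the new assembly, package, or kit.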
Deep learning users can also save their production images to retrain their systems to account for future manufacturing variations. This may help limit future liability in case unknown defects affect a product that has already shipped.
How Is Deep Learning Different from Traditional Machine Vision?
Unlike traditional machine vision solutions, deep learning machine vision tools are not programmed explicitly. Rather than numerically defining an image feature or object within the overall assembly by shape, size, location, or other factors, deep learning machine vision tools are trained by example.
A well-trained neural network requires a comprehensive set of training images that represents all potential variations in visual appearance that would occur during production.
For feature or component location as part of an assembly process, the image set should capture the various orientations, positions, and lighting variations the system will encounter once deployed.
Training the system to recognize new components or assemblies simply requires the addition of new representative data sets.
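As a hypothetical continuation of the earlier training sketch, adding a new representative data set could be as simple as dropping labeled images into the existing folder tree and rebuilding the dataset before retraining; the paths and class names below are assumptions for illustration.

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# New representative images (a new switch variant introduced at changeover, or
# graded production images saved from the line) are copied into the existing
# folder-per-class tree, e.g. data/train/window_switch_uk/.
tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("data/train", transform=tf)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
print(train_ds.classes)  # now includes the newly added component classes
# The fine-tuning loop from the earlier sketch is then rerun on train_dl.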