Vision-guided robots (VGRs) enable defect-free production by providing important quality information, such as data about flaws and measurement tolerances, that a blind robot programmed to act within a coordinate system or stage cannot deliver.
They can detect defects through inspection, which directly impacts quality. They can also improve quality indirectly through predictability: when the vision system errors, the robotic system stops, flagging an issue in the process.
Both approaches become more effective with Industry 4.0 connectivity, which helps identify and flag defective products. Vision systems can also record and upload quality data to an external system, which operators can use to predict and respond quickly to errors. Some leaders even use the data to enhance deep learning models.
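As an illustration of the record-and-upload pattern described above, here is a minimal Python sketch. The `QualityLog` class, its field names, and the JSON-lines export format are assumptions for illustration only, not any vendor's API:

```python
import json
from collections import deque

class QualityLog:
    """Minimal sketch: record vision-inspection results and prepare them
    for upload to an external quality system (all names illustrative)."""

    def __init__(self, window=100):
        self.records = []                    # full history, ready to upload
        self.recent = deque(maxlen=window)   # rolling window for trend alerts

    def record(self, part_id, passed, defect=None):
        """Log one inspection result as a structured record."""
        rec = {"part_id": part_id, "passed": passed, "defect": defect}
        self.records.append(rec)
        self.recent.append(passed)
        return rec

    def defect_rate(self):
        """Rolling defect rate an operator might watch to respond early."""
        if not self.recent:
            return 0.0
        return 1.0 - sum(self.recent) / len(self.recent)

    def export_jsonl(self):
        """Serialize all records as JSON lines for an external system."""
        return "\n".join(json.dumps(r) for r in self.records)

log = QualityLog(window=10)
log.record("A1", True)
log.record("A2", False, defect="scratch")
log.record("A3", True)
print(f"defect rate: {log.defect_rate():.2%}")
```

A rolling window rather than the full history is one plausible way to surface trends quickly, which is the "predict and respond" use operators are described as having.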
“The Industry 4.0 vision is to have an intelligent connected manufacturing system that is highly data driven; therefore, the accuracy of the data, how fast it is obtained, and the data interpretation is key,” says Frank Stone, national sales manager, Capture 3D.
“The resulting data provides insight into the entire product lifecycle from design to development to production for a modernized lean manufacturing strategy. This data unlocks Quality 4.0 capabilities, such as digital assembly analysis, allowing you to use digitized components to virtually build an assembly for form, fit, and function analysis regardless of the physical location. Simulating the assembly process within the digital space reduces costs and accelerates launch time.”
Machine vision, which is a form of artificial intelligence, is very prominent in robotics today, says Nick Longworth, senior systems application engineer, SICK Inc. The pandemic has only boosted its use as end users look to create more automated and flexible processes due to labor shortages.
“There are both mature and nascent areas to the robotic machine vision field,” Longworth says.
“On one hand, you have traditional rule-based algorithms like pattern matching, optical character recognition, and other tools, which have been allowing robots to complete pick-and-place and inspection tasks for decades. On the other, you have machine learning and deep learning applications that are allowing the industry to complete tasks that seemed impossible a few years ago, like anomaly detection in wood grains.”
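As a rough sketch of the rule-based side Longworth describes, the following pure-Python normalized cross-correlation implements a toy version of pattern matching. The function name and the tiny list-of-lists grayscale image format are illustrative assumptions; a production system would use an optimized vision library instead:

```python
def match_template(image, template):
    """Find the (row, col) offset in `image` where `template` best matches,
    using normalized cross-correlation, a classic rule-based technique."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    # Precompute the template's mean-centered values and norm once.
    t_vals = [v for row in template for v in row]
    t_mean = sum(t_vals) / len(t_vals)
    t_dev = [v - t_mean for v in t_vals]
    t_norm = sum(d * d for d in t_dev) ** 0.5
    best_score, best_pos = -1.0, (0, 0)
    # Slide the template over every valid window in the image.
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w_vals = [image[r + i][c + j] for i in range(th) for j in range(tw)]
            w_mean = sum(w_vals) / len(w_vals)
            w_dev = [v - w_mean for v in w_vals]
            w_norm = sum(d * d for d in w_dev) ** 0.5
            if w_norm == 0 or t_norm == 0:
                continue  # flat window: correlation undefined, skip it
            score = sum(a * b for a, b in zip(t_dev, w_dev)) / (t_norm * w_norm)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Toy usage: a 2x2 "part" embedded in an otherwise empty 5x5 image.
image = [[0] * 5 for _ in range(5)]
image[2][1], image[2][2] = 10, 20
image[3][1], image[3][2] = 30, 40
template = [[10, 20], [30, 40]]
print(match_template(image, template))  # best offset and its score
```

An exact match scores 1.0 at the part's true offset, which is how such a tool localizes a part for a pick-and-place robot.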
The mature, rule-based products are the most popular in robotic machine vision, he explains. At least a few can be found in just about every facility using robotics, and they are easy to use and highly reliable. In contrast, machine learning and deep learning are relatively new to vision robotics.
“Due to complexity and need for further development, they are currently reserved for applications where they are absolutely needed over their traditional rule-based vision counterparts,” Longworth says. “They receive a lot of attention because they have the potential to alter robotics as we know them.”
Experts expect deep learning to reach general productive use within two to five years, he says.
Steve Reff, automation and launch support manager, Capture 3D, says that AI and machine learning typically replace repetitive tasks to achieve faster throughput.
“For quality control and dimensional inspection, AI technology is just not there yet, because the industry needs to adopt full-field data collection as a standard—and it must be good data,” he says. “With complete, high-quality data sets, there is potential for AI to become capable of making intelligent decisions through machine learning and eventually take over more decision-making processes for us in the future, but first, we need to secure consistent access to good data sets.”
Just as humans need good data to make better decisions, so do AI systems.
“The better your data is, whether you're a robot or a human, the better, faster, and more accurate your decision-making is,” he says. “Accurate data is always at the core of every good decision.”
Lavanya Manohar, senior director, Cognex, says she expects the number of vision-enabled robots to grow in the next decade.
“We also expect more and more deep learning to be utilized in the inspection and positioning of robots,” she says. “We expect robots to operate with more intelligence and move into areas of more complex grabbing, positioning, and scene-understanding. We expect to see more adoption of 3D vision within robotics and not just traditional 2D vision.”
Trends in the vision robotics field come down to two words: “simplify” and “complexity,” Longworth says.
“End users are attempting more complex vision applications but want to simplify the way they are built, programmed, and supported,” he says. “Many small- to medium-sized end users also may want to DIY the integration to cut costs. This has led to a rise in more configurable and ‘no-code’ technology. These solutions allow users to build complex applications without advanced knowledge of robotics or machine design.”
Applications involving complex tasks such as bin picking or deep learning were once considered far too complex for practical use, such as in production or a warehouse, he says, but today, companies have developed products to simplify them. For example, PLB software enables users to solve a bin picking application in a few configurable steps and have their robot picking parts within a couple of hours of unboxing the camera, he says. Older technology, such as 2D vision, is getting the same simplifying treatment.
Combining automation and PLB software democratizes vision technology for new users while allowing experienced users to improve their facilities and processes.
“It allows companies with less resources to automate effectively and efficiently, while giving experienced users another avenue for development and continuous improvement,” Longworth says.
Stone echoes this.
“We are seeing exponential growth in the demand for automated solutions. The trend is to go automated to increase throughput and program repetitive processes because, for ROI purposes, everyone wants to streamline processes and cut costs—and the best way to do that is to automate the process,” he says.
For example, lights-out manufacturing is a methodology that allows companies to run an eight- or 12-hour shift without human interaction. Organizations can literally turn off their lights and find inspection reports generated for them the next morning by an automatic part-loading batch processing system.
“In the short-term future, we will see more solutions similar to this because the industry is looking for ways to automate processes and become more efficient and leaner in the way they manufacture goods. As this space becomes more competitive, implementing automation, whether through vision robotics or otherwise, can provide a great ROI.”
These solutions also free up an operator or another resource to do something else.
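The lights-out batch flow described above can be sketched in a few lines of Python. The `inspect` tolerance check, the part fields, and the report wording are hypothetical stand-ins for a real automated inspection cell, not any vendor's implementation:

```python
from datetime import datetime, timezone

def inspect(part):
    """Stand-in for a vision measurement: flag parts out of tolerance.
    The 0.5 mm tolerance and the part fields are illustrative assumptions."""
    return abs(part["measured_mm"] - part["nominal_mm"]) <= 0.5

def run_lights_out_shift(parts):
    """Process a queued batch unattended, then produce a morning report."""
    results = [{"id": p["id"], "pass": inspect(p)} for p in parts]
    passed = sum(r["pass"] for r in results)
    report = (
        f"Shift report {datetime.now(timezone.utc).date()}: "
        f"{passed}/{len(results)} parts in tolerance"
    )
    return results, report

# Toy batch: one part in tolerance, one out.
parts = [
    {"id": "P1", "nominal_mm": 10.0, "measured_mm": 10.2},
    {"id": "P2", "nominal_mm": 10.0, "measured_mm": 11.0},
]
results, report = run_lights_out_shift(parts)
print(report)
```

The point of the sketch is the shape of the flow, not the measurement itself: parts are loaded and measured without an operator, and only the summarized report needs human attention the next morning.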
The robotics industry is increasingly willing to “try out vision,” Manohar says.
Manufacturers continue to turn to deep learning and 3D vision, and robots have also become easier to use. Both are becoming more affordable.
Still, there is room for improvement.
“Despite all the improvements made in the area of vision with 3D and deep learning and traditional high-accuracy 2D, the technology is still relatively lower down the S-curve compared to inline manufacturing use-cases for vision — such as measurement, gaging, identification,” she says. “Continued algorithmic improvements, greater hand-eye flexibility between the robot and vision, and a full-system optimization per use-case will be required to see adoption rates accelerate.”