Today’s robots offer a variety of kinematics for different load capacities and speeds, while vision systems add eyes to these robots for applications in automotive manufacturing and many other fields. Visual robot guidance makes it possible to increase production rates, improve and ensure production quality, and reduce the overall cost of manufacturing.
Automation is steadily advancing in every industry sector and in a wide variety of applications. Wherever flexibility and reliability are required, robots equipped with vision systems are being deployed. They are used in particular for assembly, guidance and control, handling, pick-and-place and quality assessment tasks. Here, powerful image processing systems are the key to increased productivity, higher efficiency and seamless quality assurance.
Vision systems, meanwhile, have advanced to a stage where they can easily meet growing demands for precision workmanship and shorter cycle times. Extremely robust detection algorithms and reliable calibration are important considerations. The key components of an image processing system are the imaging sensors or cameras, the lighting technology and fast image processing hardware, combined with a well-thought-out conceptual design and the appropriate application knowledge.
In addition to the typical tasks (assembly, 3-D robot and seam guidance, pick and place, and in-line measuring technology), the wide range of vision system solutions can be combined to meet the needs of additional applications. This can be achieved, for example, by linking multi-line projection sensors with integrated surface illumination to so-called robot guidance sensors (RGS) and geometry gaging sensors (GGS).
For example, by using these sensors, an in-line quality gaging step can be integrated directly after the assembly process. Three-dimensional form matching, a solution used in robot guidance and in-line gaging technology, assumes the task of quickly and precisely determining the position of objects to be measured in undefined positions.
Image Processing
In the assembly process, the vision system identifies the object type, detects its position and transmits the recorded data to the robot control system. This guarantees trouble-free handling, a critical requirement for the automated production flow. Incorrect or defective parts do not reach the next production steps, preventing additional costs.
When it comes to the assembly of various mounted parts, a high level of flexibility and optimized fitting accuracy are crucial. In this case, it is particularly important to have positional information for both parts that are to be matched up.
In automotive body construction, for example, the best-fit method captures the dimensions of the car body as well as those of the subassemblies based on their geometry. The results are then used to determine the optimal joining position and to minimize component tolerances. As a result, the best possible accuracy can be achieved in each assembly operation.
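In simplified terms, best fit is a least-squares problem: find the rotation and translation that bring the measured feature points of one part closest to their nominal positions on the other. Below is a minimal 2-D sketch in Python with hypothetical feature coordinates; real systems solve this in 3-D, typically with weighting and tolerance constraints.

```python
import math

def best_fit_2d(nominal, measured):
    """Find the rotation angle and translation that best align the
    measured points to their nominal positions (2-D least squares)."""
    n = len(nominal)
    # Centroids of both point sets.
    cx_n = sum(p[0] for p in nominal) / n
    cy_n = sum(p[1] for p in nominal) / n
    cx_m = sum(p[0] for p in measured) / n
    cy_m = sum(p[1] for p in measured) / n
    # Closed-form optimal rotation from the centered coordinates.
    num = den = 0.0
    for (xn, yn), (xm, ym) in zip(nominal, measured):
        ax, ay = xm - cx_m, ym - cy_m   # centered measured point
        bx, by = xn - cx_n, yn - cy_n   # centered nominal point
        num += ax * by - ay * bx
        den += ax * bx + ay * by
    theta = math.atan2(num, den)
    # Translation that maps the rotated measured centroid onto the nominal one.
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_n - (c * cx_m - s * cy_m)
    ty = cy_n - (s * cx_m + c * cy_m)
    return theta, (tx, ty)
```

In the 3-D case the optimal rotation is usually obtained in closed form as well, via an SVD-based method such as the Kabsch algorithm.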
Best-fit assembly does not rely on the surface integrity of the components. Instead, it is based on image processing systems built around components such as robot guidance sensors, which not only deliver highly precise results and maximum availability, but also benefit the operator in terms of cost. Such systems can be installed in stationary, mobile or hybrid configurations.
In automotive manufacturing, these systems are used to assemble front, rear and side windows, as well as to install glass modules in the vehicle’s roof or a panoramic glass roof.
Cost Efficient
A particularly difficult task is the automated and correct mounting of wheels in an assembly line operation, not only on a stationary car body but also on a moving line. During automated wheel mounting, a handling device places the wheel assembly onto the spindle and the lug nuts are fastened. Axle spindles on a moving car body do not have a defined position, and the steering angle of the front wheels can vary significantly. To guide and position the robots, an RGS therefore calculates the exact installation position: the first step is to measure the position of the wheel hub in 3-D and to calculate the exact position of the mounting points. The robot then uses this data to mount the wheel with its tool.
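The hub-pose calculation can be pictured with a small sketch: given the measured 3-D positions of the lug bolts (the coordinates here are hypothetical), the hub center follows from their centroid and the hub axis from the normal of the bolt plane. Production RGS software is of course far more robust than this.

```python
def hub_pose(bolts):
    """Estimate the wheel-hub center and axis direction from the measured
    3-D positions of the lug bolts (assumed roughly coplanar)."""
    n = len(bolts)
    # Hub center: centroid of the bolt circle.
    cx = sum(p[0] for p in bolts) / n
    cy = sum(p[1] for p in bolts) / n
    cz = sum(p[2] for p in bolts) / n
    # Hub axis: normal of the bolt plane, from the cross product of two
    # non-parallel in-plane vectors (first two bolts relative to the centroid).
    ax, ay, az = bolts[0][0] - cx, bolts[0][1] - cy, bolts[0][2] - cz
    bx, by, bz = bolts[1][0] - cx, bolts[1][1] - cy, bolts[1][2] - cz
    nx, ny, nz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (cx, cy, cz), (nx / length, ny / length, nz / length)
```

With the center and axis known, the robot tool can be commanded to approach along the hub axis, even while the body keeps moving.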
Optimum Application Sensors
Another critical application for 3-D robot vision is the application of sealant and adhesive beads. Sealing cosmetic seams, on door seams for example, requires the highest level of precision. The goal is to actively inspect the robots’ work and motions and to regulate them, that is, to guide them during the application. This greatly reduces the need for refinishing on car bodies. New sensors have been developed that not only inspect the width of the bead, ensure a quality application and determine the bead position, but can also measure the height of the bead. This is accomplished at speeds of up to 500 millimeters per second (mm/s), which until now had not been achieved in the market.
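As a rough illustration of the measurement itself: each laser-line scan yields a height profile across the bead, and the bead width and peak height can be read from the samples that rise above a detection threshold. The profile values, sample spacing and threshold below are assumptions for the sketch, not vendor specifications.

```python
def inspect_bead(profile, spacing_mm, min_height_mm=0.5):
    """Measure adhesive-bead width and peak height from one laser-line
    height profile (heights in mm above the panel surface)."""
    # Indices where the profile rises above the detection threshold.
    on_bead = [i for i, h in enumerate(profile) if h >= min_height_mm]
    if not on_bead:
        return None  # no bead found in this profile
    width = (on_bead[-1] - on_bead[0] + 1) * spacing_mm
    height = max(profile[on_bead[0]:on_bead[-1] + 1])
    return width, height
```

At 500 mm/s travel speed, a hypothetical profile rate of 1 kHz would yield one such width/height measurement every 0.5 mm along the seam.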
Identification During Handling
During handling and pick-and-place tasks, image processing systems play a major role in part recognition. Objects are identified by comparing them to patterns that the system was taught during commissioning. Part recognition using vision system products also often provides data about the position of the object.
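A toy version of taught-pattern recognition can be sketched as template matching: slide the taught pattern over the image and score every position, here with a simple sum of absolute differences on tiny grayscale grids. Industrial systems use far more robust algorithms that also tolerate rotation, scale and lighting changes.

```python
def find_pattern(scene, pattern):
    """Locate a taught pattern in a grayscale scene (lists of pixel rows)
    by scoring every placement with the sum of absolute differences;
    returns ((row, col), score) of the best match (score 0 = exact)."""
    ph, pw = len(pattern), len(pattern[0])
    sh, sw = len(scene), len(scene[0])
    best_score, best_pos = float("inf"), None
    for r in range(sh - ph + 1):
        for c in range(sw - pw + 1):
            score = sum(
                abs(scene[r + i][c + j] - pattern[i][j])
                for i in range(ph) for j in range(pw)
            )
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

The best-match coordinates are exactly the positional data that is handed on to the robot for the pick.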
Depending on the application, position determination can be carried out in either 2-D or 3-D. This allows flexible palletizing and depalletizing of rims, unfinished parts or body parts, for example, as well as general handling tasks and loading and unloading operations.
Typical 2-D applications include the loading and unloading of presses, machine tools or parts containers and also performing sorting tasks directly on the conveyor. Three-dimensional robot vision accomplishes more difficult tasks: the loading and unloading of racks or the guidance of loading devices.
In food production, the pick-and-place tasks that robot vision systems carry out face even more demanding challenges. Fast, wide conveyors carrying a wide range of products not only challenge the robot’s kinematics, but also place high demands on the image processing systems. While single or multiple matrix cameras are the most cost-efficient solution in many pick-and-place applications, line scan cameras offer major benefits where accuracy and speed are critical.
Precise 2-D and 3-D Measurements
For demanding gaging tasks, flexible in-line gaging systems can easily be integrated directly into the manufacturing process. These systems provide highly accurate local detection and storage of measured data, as well as initial analysis and visualization of the data. They consist of compact geometric gaging sensors and a gaging cell controller with its own software.
During the manufacturing process, the sensors provide precise 2-D and 3-D gaging of part features and determine their positions in spatial coordinates with a high degree of accuracy. They feature excellent system linearity and stability. Not even differing reflective properties of the gaged parts’ surfaces affect the accuracy of the results.
Exact 3-D gaging is used, among other things, to inspect the assembly accuracy of body parts. To accomplish this, robot-mounted sensors travel along the object’s surface to determine gaging and best-fit accuracy. The system’s fast algorithms ensure short gaging times and maximize the speed of the gaging process.
To prevent gaging errors caused by external influences, aging or the heat of the robot, the concept integrates automatic sensor calibration and compensation for robot temperature. This guarantees a high level of flexibility and cost-efficient automation. Innovative in-line gaging systems are suitable for manufacturing processes that place extremely high demands on gaging and fitting accuracy.
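In its simplest form, such temperature compensation can be pictured as a linear drift correction. The coefficient and reference temperature below are purely illustrative assumptions; real systems determine the correction by automatic calibration against reference artifacts.

```python
def compensate(measured_mm, robot_temp_c, ref_temp_c=20.0, drift_mm_per_c=0.004):
    """Correct a gaged length for thermal growth of the robot arm,
    assuming a linear drift model calibrated at a reference temperature.
    The drift coefficient here is illustrative only."""
    return measured_mm - drift_mm_per_c * (robot_temp_c - ref_temp_c)
```

At the reference temperature the correction vanishes; away from it, the warmer the robot, the larger the subtracted drift term.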
If several gaging cell controllers are used along with these sensors, it is recommended to expand the gaging cell system by adding analysis software in order to optimize production in the plant or even throughout the entire manufacturing operation. In doing so, companies not only have access to significant measurement data that they can use for their applications, but also can access valuable information that can be utilized to optimize the overall process. This makes it possible to seamlessly perform quality inspection and analyze data locally for each measurement cell, throughout the plant and even for all products produced around the globe.
Gaging, visualizing, analyzing and optimizing: companies that use in-line measuring systems benefit from higher productivity, greater efficiency and a seamless quality assurance process. By making use of production decision intelligence, manufacturers get the most out of gaging technology and an understanding of the process as a whole.
Higher Productivity and More Efficiency
Vision systems make the use of robots even more flexible and have demonstrated that they can reduce costs. In addition, vision systems provide documentation of the manufacturing quality, which would not have been possible without them.
In the robotic vision industry, a wide array of suitable robot vision products, ranging from 2-D to 6-D, is available. It is important to work with the right partners, who can combine these products with the expertise needed to configure the ideal customized system for a particular application. With today’s technology, solutions can be found for the most demanding automation tasks in rough industrial environments, delivering reliable results even at the highest processing speeds. Look for systems characterized by robust detection capabilities, flexibility, ease of use and standard interfaces to the most common robots from leading manufacturers. V&S