There is nothing easy about handling a car hood on an assembly line, especially for the human workers in the loop. So when Chrysler wanted to improve this aspect of their manufacturing process, the automaker faced a number of challenges.
Car hoods would arrive from the stamping supplier on a large steel rack, essentially two horizontal arms jutting out that held anywhere from 50 to 100 vertically stacked car hoods. A worker would take a hood off the rail and place it on a precision fixture, which allowed exact identification of the hood's location, a key to completing further operations such as undercoat application. The panels were heavy, oily and sharp, making them difficult for operators to handle manually. Unloading them involved repetitive motion, and the risk of injury was high. Moreover, the racks on which the hoods hung tended to bend and warp over time, the result of repeatedly being loaded into and out of trucks by forklift.
Automating the process was the logical next step. For help, Chrysler turned to the SmartImage Sensor-Robotic Integration solution offered by DVT (Duluth, GA).
The project requirements included:
• locating the car hood on the rails;
• picking it up and moving it to the inspection location;
• determining the position of the hood in space; and
• placing the hood on a fixture to within a 0.002-inch tolerance.
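The four requirements amount to a locate, pick, measure, place loop. A minimal sketch of that cycle follows; every class and method name here is hypothetical, standing in for whatever the DVT and ABB interfaces actually exposed.

```python
# Hypothetical sketch of the locate -> pick -> measure -> place cycle.
# All object and method names are illustrative, not DVT or ABB APIs.

TOLERANCE_IN = 0.002  # placement tolerance from the project requirements

def run_cycle(vision, robot):
    # 1. Locate the next hood on the rack rails (cameras 1 and 2).
    hood_pose = vision.locate_hood_on_rails()
    # 2. Pick it up and move it to the inspection location.
    robot.pick(hood_pose)
    robot.move_to_inspection_station()
    # 3. Determine the hood's position in space (camera 3).
    measured = vision.measure_reference_hole()
    # 4. Place it on the fixture within the 0.002 in tolerance.
    robot.place_on_fixture(measured, tolerance=TOLERANCE_IN)
```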
The DVT solution combines multiple parts of machine vision, including onboard imaging and processing, integrated Ethernet, and digital I/O, says Phil Heil, Ph.D., applied engineering manager at DVT. "We call it our smart camera, because all the intelligence is inside a small box. Literally, you put this device, which is a little bigger than a cell phone, on the line, run an Ethernet cable to it, some power, and maybe some I/O lines to communicate to the device from the automation system on the assembly line," Heil says. "There aren't a whole lot of boxes behind the scenes, and you don't have to keep a Windows machine on the line."
DVT worked with Indicon Corp. (Sterling Heights, MI) and ABB Robotics (Auburn Hills, MI) to design a system for Chrysler, with the former handling the vision programming and the latter developing the robotic programming. Three vision systems were used, with two cameras placed at the beginning of the robot movement (one looking at the front of the panel, the other looking straight down on the panel) to determine a hood's exact location. The robot then picked the hood up and moved it within range of the third camera, which would take a picture of the locating hole stamped into each panel, the hole that would engage a friction pin as the hood was moved to the work cell. "We would take a very precise measurement," Heil says. "That way we would know some offsets from the way the robot was holding the hood and would be able to place it on the pin successfully."
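The offset correction Heil describes can be illustrated with a small 2-D sketch: compare the hole position the third camera measures against its nominal position for a perfect grip, and shift the taught place pose by the difference. The numbers, frames and function names below are invented for illustration; the real system worked in the robot's calibrated coordinate frames.

```python
# Illustrative 2-D grip-offset correction. The third camera measures the
# reference hole; the place pose is shifted by the deviation from the
# nominal grip position. All values and names are hypothetical.

def grip_offset(measured_hole, nominal_hole):
    """Offset between where the robot is actually holding the hood
    and where it was taught to hold it (inches)."""
    return (measured_hole[0] - nominal_hole[0],
            measured_hole[1] - nominal_hole[1])

def corrected_place_pose(taught_pose, offset):
    """Shift the taught place pose so the hole lands on the pin."""
    return (taught_pose[0] - offset[0], taught_pose[1] - offset[1])

measured = (12.108, 4.395)   # hole position seen by camera 3
nominal  = (12.100, 4.400)   # hole position under a perfect grip
offset = grip_offset(measured, nominal)            # ~ (0.008, -0.005)
pose = corrected_place_pose((30.0, 15.0), offset)  # ~ (29.992, 15.005)
```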
The main challenge was locating the hood in 3-D space, reports Brian VanderPryt, vision systems specialist at Indicon. Because the cameras contained their own processors, all the number-crunching took place onboard, which made for a tricky communications network between robot and cameras as they triggered one another and exchanged digital information.
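That trigger-and-reply choreography can be sketched, in highly simplified form, as a request/response exchange: the robot triggers a camera, the camera runs its inspection onboard, and only a compact result record crosses the wire. The message format below is invented for illustration and is not DVT's actual protocol.

```python
# Simplified model of the robot/camera handshake. Because the smart
# camera processes images onboard, only the final result is exchanged.
# The "TRIGGER"/"OK,x,y" message format is invented for illustration.

def camera_inspect(trigger_msg):
    """Stand-in for the smart camera's side of the exchange."""
    if trigger_msg != "TRIGGER":
        return "ERR"
    x_off, y_off = 0.008, -0.005   # pretend onboard measurement result
    return f"OK,{x_off:.3f},{y_off:.3f}"

def robot_cycle():
    """Robot side: send the trigger, parse the reply into offsets."""
    reply = camera_inspect("TRIGGER")
    status, x, y = reply.split(",")
    if status != "OK":
        raise RuntimeError("camera inspection failed")
    return float(x), float(y)

offsets = robot_cycle()   # (0.008, -0.005)
```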
"Originally, no one thought we'd be able to do this," says VanderPryt. "It had never really been done before, and they didn't think we'd be able to handle the offsets inside the camera."
But the system worked, and then some. The customer originally wanted the robotic controller simply to talk directly with the cameras, nothing more. "Just two pieces of hardware, and that's it," Heil says. "But as they got into the project, they decided to add to it." One additional item was a simple PLC (programmable logic controller), placed in the rack, which operated a light to let the forklift driver know when a pallet was empty and needed to be changed out.
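The PLC's job reduces to a small piece of state logic: count down the hoods remaining on the rack and switch the change-out lamp on when it empties. A sketch, with an arbitrary rack capacity and invented names:

```python
# Sketch of the add-on PLC's logic: track hoods remaining on the rack
# and light the change-out lamp for the forklift driver when it empties.
# Capacity and names are illustrative.

class RackMonitor:
    def __init__(self, capacity):
        self.remaining = capacity
        self.lamp_on = False

    def hood_unloaded(self):
        """Called each time the robot takes a hood off the rack."""
        if self.remaining > 0:
            self.remaining -= 1
        self.lamp_on = (self.remaining == 0)

    def rack_replaced(self, capacity):
        """Forklift driver swapped in a full rack; lamp goes off."""
        self.remaining = capacity
        self.lamp_on = False

rack = RackMonitor(capacity=3)
for _ in range(3):
    rack.hood_unloaded()
print(rack.lamp_on)   # True -> signal the forklift driver
```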
The second add-on, a PC on the line running a Visual Basic program, had several purposes. One was to create a push-button calibration routine, which would walk operators through the setup routine if a camera needed to be replaced. And, "In case the robot ever crashed, they wanted to create a simple way for an operator to recalibrate it without going into the DVT user interface or into the robot interface," Heil says. "So the VB program would show them pictures of the cameras as well as give them the interface to the calibration system."
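A push-button walkthrough of this kind is essentially a fixed sequence of confirmed steps. The sketch below models that structure only; the step wording is invented, and the real VB program's steps and screens are not described in the article.

```python
# Sketch of a push-button calibration walkthrough like the VB add-on:
# the operator is stepped through a fixed sequence without touching the
# DVT or robot interfaces. Step text is illustrative.

CALIBRATION_STEPS = [
    "Place the calibration target on the fixture",
    "Press TRIGGER to capture an image from each camera",
    "Confirm the target features are highlighted on screen",
    "Press SAVE to store the new camera-to-robot calibration",
]

def run_walkthrough(confirm):
    """Run each step in order; `confirm(step)` returns True when the
    operator presses the button. Returns the count of completed steps."""
    done = 0
    for step in CALIBRATION_STEPS:
        if not confirm(step):
            break
        done += 1
    return done

completed = run_walkthrough(lambda step: True)  # operator accepts all
print(completed)  # 4
```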
Those pictures were another reason for the PC/VB addition. "If you're showing someone around the plant and wanted them to know what this machine is doing, it's a lot easier if you can go over, point to a picture and say, ‘This is what's going on,' " Heil says. "So it's for operator awareness and visitor awareness."
Some two years after the project began, the system continues to work. "There haven't been any issues, which is nice," Heil says. "And it's a little unusual, because anytime you have vision systems mounted in a robotic work cell, there's always an opportunity for something to get bumped or run out of calibration. But they've been up and running with the system very successfully."