If we were to name one of the most common applications for industrial robotics, it would probably be material handling. Technological advances in servo design and motion control have allowed robots to operate at high speed while maintaining precision and repeatability. Although the speed of a material handling application depends mostly on the robot itself, an essential and often overlooked variable of the process is how the robot will locate and dimensionally recognize the part to be manipulated.

Poor part presentation design can cause many issues in the processing stream; most notably, the robot arm can repeatedly damage the product while handling it. This can lead to costly, time-consuming problems such as premature repairs to the robot arm and reduced production throughput. The most common part presentation solutions include adding guide rails to a conveyor or using a mechanical device to center the part. This might suffice when working with boxes or any material with a square or rectangular shape.

Another approach is to use a fixture to prevent parts from moving, which is appropriate when the robot has to handle the part in a specific orientation. All of these part presentation methods are difficult to implement and can come with a high price tag, especially when dealing with small parts produced at high volume. This is one of the main reasons integrating a vision system into material handling applications is becoming an increasingly popular option in the market.

Current vision systems have come a long way from their predecessors. Most vision manufacturers now offer small cameras with powerful image processing capabilities, which come in handy when dealing with high-volume production. The task of the vision system is to detect the part and send its location to the robot. A simple vision system involves a camera mounted near the area where the robot will pick or place a part. If the process is more complex and parts must be manipulated in different locations around the robot’s workspace, the vision system can include cameras in strategic places that give the robot a sense of dimension and space. If multiple cameras must be avoided, a simple solution is to mount a camera on the robot arm itself, although this will affect the overall speed of the process. The steps below describe the basic way a vision system interacts with the robot arm in a common material handling application; each one requires special attention in order to build a sound value stream process.

STEP 1: Finding the Part

As mentioned previously, most vision systems currently available in the industry have solid image inspection capabilities that can detect an object’s dimensional placement anywhere in the field of view. Nonetheless, in order to find the object, the vision system still requires some “guidelines” as to what to look for. When there is high contrast between the object and the background, it may be sufficient to apply imaging tools such as blob analysis and background subtraction. In other cases the camera may need a reference image of the object so it can extract features and match them against the live view. The camera will process every new image it retrieves according to these settings.
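
As a rough illustration of the blob analysis approach, the sketch below uses the open-source OpenCV library to threshold a high-contrast image, keep the largest connected region, and compute its center of gravity. The image path and threshold value are placeholders to be tuned for a real setup.

```python
import cv2

# Load the most recent image from the camera (path is a placeholder).
image = cv2.imread("part_image.png", cv2.IMREAD_GRAYSCALE)

# With high contrast between part and background, a fixed threshold
# separates the two; the value 127 is an assumption, tuned per setup.
_, mask = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)

# Blob analysis: find the connected regions and keep the largest one,
# assuming it corresponds to the part.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
part = max(contours, key=cv2.contourArea)

# The center of gravity (centroid) in pixels comes from the blob's moments.
m = cv2.moments(part)
col = m["m10"] / m["m00"]  # x (column) in pixels
row = m["m01"] / m["m00"]  # y (row) in pixels
print(f"Part centroid at row={row:.1f}, col={col:.1f}")
```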

In order to retrieve a new image, the camera receives a trigger signal, either directly from the controller or from a sensor that detects that an object is present. Once the vision system detects the object, it obtains a pixel location, a row and a column value, for the object’s center of gravity. Because an image is a projection of the 3-D world space (Xw, Yw, Zw) onto a 2-D space (Xp, Yp), the system requires a calibration against the real world; in other words, the camera needs to know how many millimeters a pixel is equivalent to. This process is called camera calibration.
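
A hypothetical sketch of that pixel-to-millimeter equivalence, assuming the calibration described in the next step has already produced a scale factor for a camera mounted at a fixed height above the work plane:

```python
# Hypothetical calibration result: one pixel spans 0.2 mm on the work
# plane (valid only while the camera-to-plane distance stays fixed).
MM_PER_PIXEL = 0.2

def pixel_to_mm(row: float, col: float) -> tuple[float, float]:
    """Convert the part's pixel centroid to work-plane coordinates in mm."""
    return col * MM_PER_PIXEL, row * MM_PER_PIXEL  # (Xw, Yw)

print(pixel_to_mm(240, 320))  # -> (64.0, 48.0)
```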

STEP 2: Calibrating the Camera to the Robot

The concept of camera calibration is to correlate features (points or lines) in the image plane with their actual locations in the real-world plane. Most vision systems will recommend or even provide a calibration sheet containing features that are easy to extract, for example a checkerboard pattern. The calibration sheet is placed in the field of view, where the camera extracts the features from the calibration grid, and the system requests the real-world characteristics of the grid. For example, with a checkerboard pattern the vision system needs to know how many rows and columns the sheet has as well as the size of each square.
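
OpenCV exposes this checkerboard workflow directly, so a minimal sketch may help make it concrete. The board dimensions and square size below are assumptions that must match the printed calibration sheet.

```python
import cv2
import numpy as np

ROWS, COLS = 6, 9   # inner corners on the sheet (assumed values)
SQUARE_MM = 25.0    # size of each square in millimeters (assumed)

image = cv2.imread("calibration_sheet.png", cv2.IMREAD_GRAYSCALE)

# Extract the grid features: the inner corners of the checkerboard.
found, corners = cv2.findChessboardCorners(image, (COLS, ROWS))
assert found, "Checkerboard not detected; check lighting and focus"

# Real-world characteristics of the grid: each corner's (X, Y, Z) in mm,
# with Z = 0 because the sheet lies flat on the work plane.
world = np.zeros((ROWS * COLS, 3), np.float32)
world[:, :2] = np.mgrid[0:COLS, 0:ROWS].T.reshape(-1, 2) * SQUARE_MM

# Correlate the image features with the world points to compute the
# camera calibration (intrinsics and lens distortion).
_, K, dist, _, _ = cv2.calibrateCamera(
    [world], [corners], image.shape[::-1], None, None)
```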

After the initial calibration process, the camera will store a transformation between the image and the real-world space. The next step is to perform a second calibration between the camera and the robot coordinate frame. The traditional way to calibrate the robot and the camera is to use a calibration sheet with an asymmetrical circle pattern. The camera locates and indexes the center of each circle. The user then manually jogs the robot to the center of every circle and stores each position in the controller’s memory. The vision system then correlates each robot point with the corresponding point extracted from the calibration image to create the robot-camera calibration. For this application it is assumed that the camera is mounted in a fixed location pointing straight down at the object of interest, so the Z value is set to a constant. When correlating the points, it is important to make sure that the point indexing is the same on both sides. Some vision systems can set a coordinate origin by adding a fiducial marker to the calibration sheet; in that case the calibration can be stored on the robot side by setting an offset profile on the robot controller.
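
A minimal sketch of that correlation step, assuming the circle centers have already been located by the camera and taught to the robot in the same index order (all coordinate values below are illustrative):

```python
import cv2
import numpy as np

# Circle centers located by the vision system, in camera millimeters,
# indexed in the same order as the robot points (illustrative values).
camera_pts = np.array([[10.0, 10.0], [90.0, 12.0], [48.0, 70.0]], np.float32)

# The same centers recorded by jogging the robot tool tip to each circle
# and storing the controller's X, Y readout (illustrative values).
robot_pts = np.array([[412.3, -105.6], [491.8, -101.9], [448.7, -44.2]], np.float32)

# Fit a rigid transform (rotation, translation, uniform scale) that maps
# camera coordinates into the robot coordinate frame.
T, _ = cv2.estimateAffinePartial2D(camera_pts, robot_pts)

def camera_to_robot(x: float, y: float) -> np.ndarray:
    """Convert a camera-frame location to robot X, Y (Z stays constant)."""
    return T @ np.array([x, y, 1.0])

print(camera_to_robot(48.0, 70.0))
```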

Calibration is the critical step when working with robot vision guidance, since it determines where the robot should move when a part is found. The next step in the process is setting up the method by which the camera and the robot communicate.

STEP 3: Sending Coordinate Data to the Robot

A vision guidance application involves a constant data exchange between the vision system and the robot controller. The camera requires a trigger input to acquire a new image, usually from an I/O sensor detecting the presence of a part or from the robot controller sending a command. Once the vision system has processed the image and extracted the object’s location (X position, Y position, and rotation angle), it sends the data to the robot controller, which uses it to move toward the object. Depending on the system, it may be possible to extract the locations of multiple objects from a single image, reducing the time the robot waits for image processing. It is also necessary to account for the time the vision system takes to process a new image, and to make sure the robot is not in the field of view, since that could produce corrupted position data.
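
The exact protocol varies by vendor, but a common pattern is a plain TCP socket exchange in which the robot or a supervisory program requests a location and the camera replies with a delimited string. The sketch below is a generic, hypothetical version of that pattern; the address, port, and message format are all assumptions.

```python
import socket

# Hypothetical vision-system endpoint; real systems use vendor-defined
# addresses, ports, and message formats.
HOST, PORT = "192.168.0.10", 3000

with socket.create_connection((HOST, PORT), timeout=2.0) as sock:
    # Ask the camera to trigger a new image and report the part location.
    sock.sendall(b"TRIGGER\n")

    # Assumed reply format: "x_mm,y_mm,angle_deg\n".
    reply = sock.makefile().readline().strip()
    x_mm, y_mm, angle_deg = (float(v) for v in reply.split(","))

    # These values would then be written to the robot controller as a
    # position offset before commanding the pick move.
    print(f"Pick target: X={x_mm} mm, Y={y_mm} mm, R={angle_deg} deg")
```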

Conclusion

As technology progresses, more and more automation solutions combine an industrial robot arm with a vision system. Vision and robotics manufacturers have built relationships and collaborated on tools that reduce the time needed to get started with a vision guidance solution. Some vision systems include a program template for a specific robot, and some robot controllers have libraries that allow communication with a camera from inside the robot’s programming environment. The interaction between the two systems is now so close that robot manufacturers are adding image processing capabilities to their controllers to handle material handling applications in which the object is easy to detect.

Sometimes the idea of using a vision system to compensate for flaws in a system’s design is taken lightly or not considered at all. But time and again, vision systems have proven increasingly necessary for a constant, fluid, repetitive stream of production with high speed and precision. It is also important to keep in mind that, despite their many advantages, vision systems need proper setup and programming. Small oversights like poor image lighting or image distortion due to vibration can render any vision system useless. Starting with simple vision applications is highly recommended; the time and cost will be greatly repaid by the many new options the vision system brings to the value stream process.