Vision guided robotics (VGR) is an automation technology well recognized for enabling greater flexibility and higher productivity in diverse manufacturing tasks across a wide range of industries. In practice, VGR projects range from very straightforward to extremely complex. The keys to a successful and efficient VGR rollout lie in a few system design and implementation “best practices” that help avoid costly and time-consuming trial and error during integration.

Although this discussion focuses exclusively on robotic guidance for picking and placing objects in an automated process, some of these principles transfer to other types of robotic guidance applications as well. VGR in this context is the use of one or more imaging systems configured to provide the location of an object relative to a physical, real-world coordinate system defined by a robot arm. That location might be used for gripping and picking/placing the part, or for other operations such as welding and assembly. Depending on the imaging component, the machine vision system can provide the part location as either a 2D position or a 3D pose.

Ultimately, VGR is a marriage of two technologies that can provide automation with less hard fixturing of parts, greater flexibility in part handling and manipulation, and improvements in the safety and efficiency of many processes. Advanced tasks such as tracking moving parts for picking, identifying and sorting mixed objects, and even bin picking of randomly oriented parts have become common VGR functions.

The following “best practices” are some significant (but certainly not all) guidelines that may help ensure the success of your next VGR project.

 

Perform An Application Analysis

A perhaps obvious, but nonetheless important, first step in any application is a thorough project evaluation prior to system design and implementation. Analyze all aspects of the proposed VGR implementation with particular attention to part and presentation variation, required pick or place precision, target processing rates, and part geometries and gripping limitations, as well as the many other factors that could affect the success of the project.

 

Beyond “Pick And Place”

In analyzing potential applications for VGR, consider the many ways this technology can be used in an automation process. Certainly, the act of picking disparate objects from a somewhat varying location and placing them with precision elsewhere within a work cell is a very familiar and extremely valuable use of VGR. In general, vision guided “pick and place” is relatively easy to implement and can be executed very reliably. And some of the variations on this task can be equally reliable and similarly easy to implement in many situations.

One example is tracking moving objects for robotic picking. This is a straightforward task in robot motion programming that only requires some additional calibration and configuration of the vision components, and it adds value by eliminating the need for the part to stop prior to the pick. Limitations to note are speed of motion and overall throughput, both of which will impact architecture and implementation.
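
To make the tracking arithmetic concrete, here is a minimal sketch, assuming a single-axis conveyor with an encoder count latched at image capture; the names and values (`COUNTS_PER_MM`, the `TrackedPart` fields) are illustrative assumptions, not any vendor’s interface.

```python
# Minimal conveyor-tracking sketch: advance a vision-detected part
# position along the conveyor axis using encoder counts.
# All names and the one-axis conveyor model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TrackedPart:
    x_mm: float        # position along conveyor travel (robot coordinates)
    y_mm: float        # position across the conveyor
    theta_deg: float   # part rotation in the conveyor plane
    detect_count: int  # encoder count latched when the image was taken

COUNTS_PER_MM = 40.0   # encoder resolution, measured during calibration

def current_pick_pose(part: TrackedPart, encoder_count_now: int):
    """Return the part's pose advanced to the current encoder count."""
    travel_mm = (encoder_count_now - part.detect_count) / COUNTS_PER_MM
    return (part.x_mm + travel_mm, part.y_mm, part.theta_deg)

part = TrackedPart(x_mm=120.0, y_mm=35.5, theta_deg=12.0, detect_count=100_000)
print(current_pick_pose(part, encoder_count_now=104_000))  # part has moved 100 mm
```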

Picking randomly oriented, stacked, and even mixed objects from a bin (“bin picking”) or stack (“depalletizing”) is a VGR application that has been evolving over many years and currently has a high level of maturity and capability, at least for a reasonable subset of potential use cases. While more complex, these use cases are well within the reach of many knowledgeable end-users and integrators of vision and robotic technologies. Manufacturers of both 3D imagers and robots have helped advance the implementations by offering configurable software to perform certain bin picking and/or depalletizing functions.

Ultimately, the critical deliverable from the analysis stage of a project, whether you intend to purchase an integrated VGR system or develop it yourself, is the definition of the specific tasks that must be accomplished and the metrics (precision, throughput, reliability, and other outcomes) that the system must deliver.

A complete discussion of application analysis is beyond the scope of this piece, but this step is a critical “best practice” that will lead to a well-implemented application.

 

Specify The System Architecture And Components

The information gained in the application analysis is the starting point for defining the architecture of the VGR system. The “best practices” at this stage are to identify how the application tasks will be achieved and to specify the components that will deliver performance meeting the stated application metrics. A good rule of thumb is to let the needs of the application, not the latest market buzz, drive system architecture and technology selections. That said, here first are a few concepts that might factor into general architecture decisions.

Arm or static mounting: Whether 2D or 3D, an imaging system can be either statically mounted over the pick area or mounted as part of the robot EOAT. Advantages of arm mounting include flexibility and possibly smaller fields of view, with potentially better illumination and imaging as a result. Static mounting limits the imaging to one field of view, but processing might be faster because the robot does not need to make extra moves to position the camera and acquire the image. Note too that arm-mounted VGR might require more complex calibration or location transformation when images are taken at different camera poses.
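
To make that extra transformation concrete, here is a minimal sketch, assuming 4x4 homogeneous transforms and placeholder values, of chaining the robot flange pose at image time with the fixed camera mount transform so an arm-mounted camera’s detection can be expressed in the robot base frame.

```python
# Chaining transforms for an arm-mounted camera (illustrative values only):
# base <- flange <- camera <- part
import numpy as np

def make_transform(rotation: np.ndarray, translation_mm) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation_mm
    return T

# Robot-reported flange pose at the moment of image acquisition.
T_base_flange = make_transform(np.eye(3), [400.0, 0.0, 600.0])

# Fixed camera mount on the flange, found by hand-eye calibration.
T_flange_cam = make_transform(np.eye(3), [0.0, 50.0, 80.0])

# Part pose reported by the vision system in camera coordinates.
T_cam_part = make_transform(np.eye(3), [10.0, -5.0, 300.0])

# Part pose in the robot base frame: compose the chain left to right.
T_base_part = T_base_flange @ T_flange_cam @ T_cam_part
print(T_base_part[:3, 3])  # part position in base coordinates
```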

Pre-locating objects for better pick accuracy: An architecture with both EOAT and static imaging might help in some applications. The static imaging system would “rough-locate” a candidate object, then the arm-mounted camera would be moved to a dynamically computed position to obtain a tighter, more accurate image of the part location. An arm-mounted camera also can be used for inspections following rough location of the object.
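
As a rough illustration of the two-stage flow, the sketch below wires the steps together; every function is an illustrative stub standing in for real vision and robot interfaces.

```python
# Two-stage locate sketch: rough-locate with the static camera, then refine
# with the arm-mounted camera before picking. All calls are illustrative stubs.

def static_camera_rough_locate():
    """Stub: coarse part position (mm) from the fixed overhead camera."""
    return (312.0, -88.0)

def move_camera_over(x_mm, y_mm):
    """Stub: position the arm-mounted camera above the candidate part."""
    print(f"moving camera over ({x_mm}, {y_mm})")

def arm_camera_fine_locate():
    """Stub: refined position from the smaller, closer field of view."""
    return (313.4, -87.2)

rough = static_camera_rough_locate()
move_camera_over(*rough)
fine = arm_camera_fine_locate()
print(f"picking at {fine}")  # the refined result drives the actual pick
```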

Grip then locate or relocate in grip: With any robotic gripping technique, there is some change in the pose of the part introduced by the gripping action itself. Whether vision provides the grip location or the part can initially be gripped without it, a valuable technique for improving precision is to locate or relocate the object with vision once it is in the grip, and use that location as a tool offset when placing the object. In many cases, this VGR technique provides the highest level of accuracy in the guidance task because it mitigates error introduced during the grip.
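
As a sketch of the offset arithmetic, again assuming 4x4 homogeneous transforms with placeholder values (identity rotations keep it short), the in-grip measurement can be folded into the place move roughly like this:

```python
# In-grip relocation as a tool offset (illustrative sketch).
# Vision measures where the part actually sits in the gripper; the place
# target is corrected by the deviation from the nominal grip.
import numpy as np

def pose(translation_mm) -> np.ndarray:
    """4x4 homogeneous transform with identity rotation (enough for a sketch)."""
    T = np.eye(4)
    T[:3, 3] = translation_mm
    return T

T_tool_part_nominal = pose([0.0, 0.0, 120.0])    # where the part should sit in the gripper
T_tool_part_measured = pose([1.5, -0.8, 120.0])  # where vision found it after gripping

# Deviation introduced by the gripping action itself.
T_grip_error = T_tool_part_measured @ np.linalg.inv(T_tool_part_nominal)

# Correct the nominal place pose so the *part*, not the gripper, lands on target.
T_base_place_nominal = pose([650.0, 200.0, 50.0])
T_base_place_corrected = T_base_place_nominal @ np.linalg.inv(T_grip_error)
print(T_base_place_corrected[:3, 3])
```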

Locate place positions: As with gripping, the accuracy of placement can be influenced by the nest or other tooling that receives the part. Using VGR to guide the placement of an object can likewise improve the overall accuracy of the process.

Next, taking a broad look at component specification, here are some basic considerations that fit into the notion of “best practices” for VGR.

 

Imaging – “2D Or Not 2D”

Machine vision systems for VGR can deliver either 2D or 3D positional information about the objects or features in the field of view. While recent advances in 3D imaging components have generated strong interest, it is incorrect to assume that all VGR must or should be 3D. 2D imaging is suitable for the vast majority of VGR use cases, even where the object’s profile has 3D features. When an object is presented in one or more “stable resting states” on a consistent surface, and 2D imaging can locate the part as presented and differentiate which stable resting state is facing the camera, a 2D location plus suitable tool offsets should be sufficient to pick the part successfully.
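
As a small illustration of “2D location and suitable tool offsets,” the sketch below combines a calibrated (x, y, theta) result with a per-resting-state pick height and tool orientation; the state names and values are assumptions for illustration.

```python
# 2D pick with per-resting-state tool offsets (illustrative sketch).
# The vision system returns a calibrated (x, y, theta) in robot coordinates
# plus a resting-state label; a fixed Z and tool offset per state finish the pose.

# Known heights/offsets per stable resting state, measured at setup time.
RESTING_STATES = {
    "flat":    {"pick_z_mm": 12.0, "tool_roll_deg": 0.0},
    "on_edge": {"pick_z_mm": 30.0, "tool_roll_deg": 90.0},
}

def pick_pose_2d(x_mm, y_mm, theta_deg, state: str):
    """Combine a 2D vision result with a per-state Z and tool offset."""
    offsets = RESTING_STATES[state]
    return {
        "x_mm": x_mm,
        "y_mm": y_mm,
        "z_mm": offsets["pick_z_mm"],
        "rz_deg": theta_deg,                 # part rotation in the plane
        "rx_deg": offsets["tool_roll_deg"],  # fixed re-orientation of the tool
    }

print(pick_pose_2d(312.4, -88.1, 47.5, "on_edge"))
```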

While many things may drive the 2D/3D decision, a starting point might include the following technology distinctions. 3D is indicated as the candidate technology: 1) when 2D imaging cannot provide suitable data to clearly segment an object for location (e.g., not enough grayscale or color contrast); 2) when the part or parts are presented in multiple “resting states” and/or are randomly stacked (e.g., in bins or on conveyors or pallets), particularly when multiple part types must be located and picked; or 3) when the height and pose of the part presentation might vary for other reasons.

 

Robot Or Cobot

Industrial collaborative and non-collaborative (traditional) robot arms perform the same basic motion functions. Collaborative robots are limited in speed and force (and therefore payload) so that they can operate safely in proximity to humans without guarding or other safety devices, while traditional robots can move at their highest designed speeds for a given reach and payload but require auxiliary safety guarding or sensing. Cobots often are perceived as “easy to use” depending on the software, but many traditional robots also feature ease of use in their programming and configuration. As noted previously, select the robotic (and any other) component that most effectively and reliably serves the application.

 

Grippers And End Of Arm Tooling (EOAT)

Surprisingly, one of the most challenging components to specify in VGR can be the gripper. It must be able to grasp the part at a point that can be specified by the vision system and do so without too much variation. As mentioned above, sometimes the position or geometry of an object just doesn’t allow a grip using a single technique. In the case of randomly oriented parts and 3D guidance in particular, it might be necessary to rough-grip, fixture, and re-grip a part or even use multiple interchangeable grippers in order to cover various part types or orientations.

 

Project Workflow Best Practices

It is a challenge to limit our discussion of “best practices” in VGR workflow to just a few items. But overall, calibration and certain programming issues seem to cause the most confusion and consume the most time in VGR projects.

Calibration

Calibration for VGR is critical, and often misunderstood. It helps to grasp one primary concept: the task of calibration is always to relate a set of real-world positions to observed features in the camera’s field of view. The real-world positions are stated within a coordinate workspace defined by, and within, the robot’s work envelope. (Strictly speaking, there are two different calibration functions in VGR, calibration of the imaging system itself and establishing the world-camera relationship, although these might be performed simultaneously.) It is important to note also that most 3D imaging systems are pre-calibrated to an internal world coordinate space (often located at the center of a camera sensor), but this space still needs to be related to the robot. Armed with this basic understanding, it is not too difficult to execute VGR calibration, and fortunately many machine vision component manufacturers offer pre-built programs or apps that help perform the calibration.
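
As a concrete instance of relating observed image features to robot positions, here is a minimal single-plane 2D sketch using OpenCV’s findHomography; the point values are placeholders, and in practice the robot-coordinate points come from touch-up as described below.

```python
# Minimal 2D plane calibration sketch: map image pixels to robot-plane
# coordinates with a homography. Point values are illustrative placeholders.
import cv2
import numpy as np

# Features observed in the image (pixels) ...
image_pts = np.array([[100, 120], [850, 110], [840, 700], [110, 710]], dtype=np.float32)
# ... and the same features expressed in robot coordinates (mm),
# typically obtained by touching each point up with the robot.
robot_pts = np.array([[200.0, 50.0], [400.0, 50.0], [400.0, 250.0], [200.0, 250.0]],
                     dtype=np.float32)

H, _ = cv2.findHomography(image_pts, robot_pts)

def pixel_to_robot(u, v):
    """Map a pixel location to robot-plane coordinates via the homography."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

print(pixel_to_robot(475.0, 405.0))  # part location in robot mm
```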

Calibration of a camera for 2D VGR often requires only a single view of a calibration article whose features are known in robot coordinates (this technique can also be used for self-calibrated 3D systems). Limitations of this method include error that may be introduced in the definition of the world points (usually by robot touch-up), and the restriction that the resulting calibration is valid (in most circumstances) only for the exact plane in which the calibration article was imaged. These limitations are mitigated by performing a “hand-eye” calibration. This operation is common for 3D VGR but can also be used in 2D. In execution, a known object or even a calibration grid is held in the robot gripper and presented to the camera system at multiple poses within the field of view.
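
For those building their own hand-eye routine, OpenCV provides a solver. The sketch below shows the call shape for the arm-mounted (eye-in-hand) case, generating synthetic poses from an assumed ground-truth mount transform so it runs self-contained; the static-camera case uses the same solver with inverted robot poses.

```python
# Hand-eye calibration sketch with OpenCV (eye-in-hand: camera on the arm).
# Synthetic poses are derived from an assumed ground-truth mount transform so
# the example is self-contained; real data comes from the robot controller
# and from locating a calibration grid in each acquired image.
import cv2
import numpy as np

def T(rvec, tvec):
    """4x4 transform from a Rodrigues rotation vector and a translation."""
    M = np.eye(4)
    M[:3, :3], _ = cv2.Rodrigues(np.asarray(rvec, dtype=float))
    M[:3, 3] = tvec
    return M

T_cam2gripper_true = T([0.0, 0.0, 0.1], [0.0, 50.0, 80.0])  # "unknown" mount
T_target2base = T([0.0, 0.0, 0.0], [500.0, 0.0, 0.0])       # fixed grid in the cell

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
rng = np.random.default_rng(0)
for _ in range(10):  # several distinct poses with varied rotation
    T_gripper2base = T(rng.uniform(-0.5, 0.5, 3),
                       rng.uniform(-100, 100, 3) + [400, 0, 600])
    T_target2cam = np.linalg.inv(T_gripper2base @ T_cam2gripper_true) @ T_target2base
    R_g2b.append(T_gripper2base[:3, :3]); t_g2b.append(T_gripper2base[:3, 3].reshape(3, 1))
    R_t2c.append(T_target2cam[:3, :3]);   t_t2c.append(T_target2cam[:3, 3].reshape(3, 1))

# Solve for the fixed camera-to-flange mount transform.
R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)
print(t_est.ravel())  # should recover approximately [0, 50, 80]
```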

 

Programming

In either 2D or 3D, one useful practice is to use local frames (sometimes called “user frames”) for the VGR task. In some cases, this can improve precision, but in all situations, it makes the work envelope easier to understand.
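
As a generic illustration (robot vendors expose user frames through their own interfaces, so the helper below is an assumption, not any controller’s API), a local frame is just a fixed transform, and targets authored in it are converted to base coordinates at move time:

```python
# User/local frame sketch: a frame is a fixed transform from the robot base;
# targets are authored in the frame and converted when the move is issued.
# This mirrors, generically, what robot controllers do internally.
import numpy as np

def frame(origin_mm, z_rot_deg=0.0) -> np.ndarray:
    """4x4 user frame: a translation plus a rotation about Z (enough for a sketch)."""
    c, s = np.cos(np.radians(z_rot_deg)), np.sin(np.radians(z_rot_deg))
    F = np.eye(4)
    F[:2, :2] = [[c, -s], [s, c]]
    F[:3, 3] = origin_mm
    return F

# A local frame anchored at the corner of the pick table, rotated 30 degrees.
PICK_TABLE = frame([500.0, -200.0, 40.0], z_rot_deg=30.0)

def to_base(frame_T: np.ndarray, xyz_in_frame):
    """Convert a point expressed in a user frame into robot base coordinates."""
    p = frame_T @ np.array([*xyz_in_frame, 1.0])
    return p[:3]

# A vision result expressed in table coordinates stays meaningful even if
# the table (and its frame definition) moves later.
print(to_base(PICK_TABLE, [100.0, 50.0, 0.0]))
```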

 

Safety

In conclusion, one indisputable best practice that must not be overlooked is incorporating safety mechanisms into the VGR system and program. Safety is always important, but it takes on a different dimension when the robot is autonomously guided. Because the robot is under dynamic control (as opposed to always following a static path to programmed points), motion can be unpredictable. Furthermore, should an error occur, the end point of a move might be completely outside the expected work envelope. Use all available safety envelope limits within the configuration of the robot so that the arm stops moving before hitting a cage wall or other required safety mechanism. And do not consider a collaborative robot to be inherently safe for every use case. Safety standards require a thorough safety analysis for every cobot project to ensure that configuration and usage are correct for the anticipated proximity of humans in the robot workspace.

Use these few best practices, and seek out others, to make the most of this productivity-enabling technology.