Over the years, manufacturers of all types of products have deployed thousands of machine vision systems for applications such as assembly verification, gaging and motion guidance. To successfully implement a new machine vision system, a manufacturer must determine whether the inspected part will be stationary, indexed or moving continuously.
 
Stationary parts do not cause blurring, so image acquisition is straightforward. Indexed parts may be considered stationary so long as they are stopped long enough to acquire an image. Vision cameras may capture images of moving parts without blurring, depending on the target's rate of motion and the shutter speed of the camera. For continuously moving objects, motion must be optically or electronically frozen for a quality image to be obtained.
 
In order for the object to be in the proper location within the image frame, the manufacturer must employ a precise part presence trigger. The greater the speed of the moving part, the more precise that trigger must be. Most inspection systems obtain this input trigger from a simple photoelectric part-detecting sensor.
 
As a part moves, the sensor detects its leading edge and sends a signal that tells the image acquisition sensor when to take a picture. The required time delay between part detection and image acquisition depends upon the speed of the part and the physical distance between the two sensors.
 
This inspection method works well if both the distance between the sensors and the part speed remain constant. However, line speeds commonly change, intentionally or not, while sensors are difficult to move. In instances where part speed varies, replacing the simple photoelectric sensor trigger can be advantageous.
 
When adjusting the trigger input based on part speed, the location of the part's image on the image sensor is the primary concern. Figure 1 illustrates what happens when the speed of the part changes while the distance between the part presence and imaging sensors remains the same.
 
In the center picture, the part is moving at the nominal velocity (VS) and appears at the preferred location within the image window. This location provides maximum flexibility for applying vision tools. As velocity decreases (VS−), the image is displaced in one direction (e.g., to the left), and as part speed increases (VS+), the image is displaced in the other direction (e.g., to the right).
 
Depending on the application, either of these two positional displacements may be acceptable. The extreme at either end, where the image no longer captures the entire part, is never acceptable. This problem can be corrected by providing the vision sensor with an input based on distance rather than time. An encoder can provide such an input.
 

Typical Inspection Process

On a bottling line (Figure 2), a vision sensor inspects for label skew. If the label is improperly applied, the bottle will be removed from the conveyor by a reject mechanism.
 
As a bottle travels to the inspection station, it encounters a photoelectric sensor that tells the vision sensor that a part is present. After a certain period of time based upon line speed and the distance between the photoelectric sensor and vision sensor, the vision sensor acquires an image. The vision sensor determines whether the part is conforming or not, then outputs the appropriate signal to the reject mechanism. After another period of time based upon the line speed and the distance between the vision sensor and the reject mechanism, a nonconforming bottle is rejected.
 
The time/distance relationship may be calculated using the formula:
 
Time = Distance ÷ Velocity

where:

Distance = location of the part presence sensor − location of the vision sensor

Velocity = speed of the bottle (line speed)

Time = input time delay
 
In this example, if the bottle takes 1 second to travel 100 mm (100 mm/second) and the distance between the photoelectric and vision sensors is 50 mm, then the time delay between sensing the part and acquiring the part image is ½ second (500 ms). 
 
0.5 seconds = 50 mm ÷ 100 mm/second  
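 
This relationship is simple enough to compute in software. The Python sketch below is illustrative only: the function name is an assumption, and the 250 mm vision-sensor-to-reject separation is a hypothetical value for demonstration.

    # Trigger delay from the time = distance / velocity relationship.
    def trigger_delay_s(separation_mm: float, line_speed_mm_s: float) -> float:
        """Delay between detecting a part and acting on it, in seconds."""
        return separation_mm / line_speed_mm_s

    # Presence sensor to vision sensor: 50 mm apart at 100 mm/second.
    print(trigger_delay_s(50.0, 100.0))   # 0.5 seconds (500 ms)

    # The same equation covers the vision-sensor-to-reject-mechanism delay,
    # e.g., a (hypothetical) 250 mm separation at the same line speed:
    print(trigger_delay_s(250.0, 100.0))  # 2.5 seconds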

Time-Velocity Examined

In the bottling line example, the part presence sensor is at one location, the vision sensor is at another, and the apparent part position is affected by the time delay between the two. Proper alignment between the two sensors at the calculated time delay should place the part image in the center of the vision sensor's sweet spot. But how can the manufacturer preserve image positioning if the line speed doubles?
 
If the line speed increases to 200 mm/second, then to preserve the image relationship within the field-of-view, either the physical distance between the two sensors must be doubled or the time delay between them must be cut in half. The time/distance equation addresses this concern.
 
Time = Distance/Velocity
 
Initially:
0.5 seconds = 50 mm ÷ 100 mm/second
 
Part speed doubles, so halve the time delay and keep the distance:
0.25 seconds = 50 mm ÷ 200 mm/second
 
Or, double the distance and keep the time delay:
0.5 seconds = 100 mm ÷ 200 mm/second
 
If neither of these adjustments is made, then the image sensor would either 1) see a different part from the one detected by the presence sensor, or 2) see no part at all.
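 
The size of the error is easy to quantify: at acquisition time, the part has traveled line speed × delay past the presence sensor. A minimal Python sketch (the function name is illustrative):

    # Where is the part when the unadjusted trigger fires?
    def position_at_trigger_mm(line_speed_mm_s: float, delay_s: float) -> float:
        """Distance traveled past the presence sensor when the image is taken."""
        return line_speed_mm_s * delay_s

    # Delay tuned for 100 mm/second with a 50 mm sensor separation:
    print(position_at_trigger_mm(100.0, 0.5))  # 50 mm -- part centered under the camera
    # The same 0.5-second delay after the line speed doubles:
    print(position_at_trigger_mm(200.0, 0.5))  # 100 mm -- part is 50 mm past the camera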
 
The equation can also be applied to calculate the relationship between the image sensor and the reject mechanism. Failure to adjust either the distance or the time delay in this case means that either 1) the vision sensor may take pictures of a partial part or the wrong part, or 2) the reject mechanism may reject the wrong part or the system may jam.
 
It is usually easier to change the time delay in software than to adjust the physical distance between sensors, since the mechanical design of inspection work cells seldom provides for moving either sensor.
 

Practical Implications of Variable Line Speed

If a vision system is unable to compensate for changing line speed, there are at least two practical implications to resolve. First, the vision sensor has no way of knowing which part in the queue it is inspecting. Second, the manufacturer may need to apply a locate tool to compensate for part position changes within the image window, a step that requires additional processing time and can limit the application.  
 
For an application where a vision system is used to inspect for the presence of a cap on a bottle, Table 2 illustrates a snapshot of six locations on the bottling line. Each position contains a part at any given time, from part detection (Position 0) to image sensing (Position 1) to part rejection (Position 2) and beyond.
 
The part presence sensor detects the part at Position 0 and, if the speed is constant, the vision sensor detects the missing cap at Position 1 after a suitable time delay. After another suitable time delay, the reject mechanism rejects the nonconforming part at Position 2. In this snapshot of six line positions at various times, the red arrow follows the part, and the hole left after rejection, through the process.
 
Table 3 illustrates what would happen across the line if the line speed were to double. In this case, the faulty part is detected by the part presence sensor at Position 0. But because of the fixed distance and time delay between the part presence sensor and the image sensor, the nonconforming part has actually traveled to Position 2 by the time image acquisition is triggered at Position 1. A bottle with a cap is in front of the camera, so no reject occurs, and the missing cap is never detected. The nonconforming part continues through the process. At this doubled speed, even if there were two missing caps in a row, a reject would still not occur, since the second uncapped bottle also travels past the reject station before the system can react.
 
Less dramatic changes in line speed can still affect inspection results. In an application where a machine vision system inspects parts for a feature that occurs at the extremes of the part, detection of that feature may be unreliable if line speed changes slightly with no change in the distance between the part presence and vision sensors and no adjustment to the time delay. 
 
To compensate for the slight variation in line speed and the resulting part position shift, the manufacturer may add a “locate tool” to the inspection system: software that adjusts for slight variations in part position. The locate tool reliably finds the part itself, and a subsequent inspection tool then determines the presence of the feature on the part. However, real-world execution of a locate tool adds time to the inspection process. Variables such as processor power and the size of the search window also affect that execution time, and in high-speed applications a locate tool may limit the maximum rate of inspection.
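 
No particular locate algorithm is specified here; as one illustration only, template matching with OpenCV behaves like a simple locate tool. The file names, template and threshold below are assumptions.

    import cv2

    # Illustrative locate step: find the part within the image window by
    # template matching, then apply inspection tools relative to its position.
    image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)             # acquired image (assumed file)
    template = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)  # reference part (assumed file)

    scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_xy = cv2.minMaxLoc(scores)

    if best_score > 0.8:  # confidence threshold is application-specific
        x, y = best_xy    # top-left corner of the located part
        print(f"Part located at ({x}, {y}), score {best_score:.2f}")
    else:
        print("Part not found in image window")

    # matchTemplate scans every candidate position, so execution time grows
    # with the size of the search window -- the processing-time penalty
    # described above.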
 

Adding a Distance Component Using an Encoder

An encoder is an electromechanical device that transforms rotational motion into a series of electrical pulses. Encoders provide output signals based upon actual distance and can replace the time-based relationships among inspection system components.
 
When installed on a conveyor, an encoder provides a series of pulses proportional to the linear motion of the belt. If an encoder input is coupled with a part presence trigger, the distance-based image acquisition compensates for any changes in conveyor speed. The image remains centered within the field-of-view, and part inspection is more robust. 
 
This distance-based approach allows the part presence trigger, image acquisition and part reject operations to function independently of line speed. Rather than replacing the part presence sensor, the encoder enables replacement of the time delay function with one based on distance.
 
Typically, an encoder consists of a disk marked with a series of optical or magnetic divisions. The disk rotates about a shaft that is coupled to a motion device such as a conveyor. As the conveyor travels, the encoder shaft and disk rotate. The resulting pulse output is directly related to the physical motion of the conveyor, and the vision sensor can be calibrated in corresponding units of measure so that output pulses directly correspond to distance traveled.
 
Encoders are specified in pulses per revolution (PPR). Calibration of distance to pulses is straightforward using the following formula:
 
Pulses per unit of measure = PPR ÷ (2 × π × shaft radius)
 
For example, if the encoder is rated at 1,024 PPR and is coupled to a 50 mm diameter shaft (25 mm radius), then:

1,024 ÷ (2 × 3.1416 × 25) = 6.52 pulses per mm
 
To carry this example a step further, if the encoder is instead mounted on the motor and the gearing between the motor and the conveyor shaft is 1,000 to 1 (1,000 rotations of the motor for each rotation of the conveyor shaft), then the pulse calculation becomes:

6.52 pulses/mm × 1,000 = 6,520 pulses per mm of conveyor travel

At a line speed of 100 mm/second, the encoder then outputs roughly 652,000 pulses per second.
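 
Either calibration can be computed directly; a minimal Python sketch (function and parameter names are illustrative):

    import math

    # Convert encoder PPR and drive geometry into a pulses-per-mm factor.
    def pulses_per_mm(ppr: int, shaft_radius_mm: float, gear_ratio: float = 1.0) -> float:
        """Encoder pulses per mm of conveyor travel.

        gear_ratio is encoder rotations per conveyor-shaft rotation
        (1.0 when the encoder is mounted directly on the conveyor shaft).
        """
        return ppr * gear_ratio / (2 * math.pi * shaft_radius_mm)

    print(round(pulses_per_mm(1024, 25.0), 2))       # 6.52, direct-coupled
    print(round(pulses_per_mm(1024, 25.0, 1000.0)))  # 6519 (6,520 if 6.52 is rounded first)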
 
This calculated value can be placed into the vision software as a pulse-to-distance calibration factor. Some vision sensors incorporate encoder input calibration into their configuration menu. 
 
Once the part is detected, counting the encoder's pulses replaces the time delay. Because the distance between the part presence sensor and the vision sensor is fixed, part position will not change within the image window, assuring the inspection of the correct part and possibly eliminating the need for a locate tool. This same encoder output may be applied to the relationship between the image sensor and the reject mechanism.
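 
Put together, the distance-based trigger logic amounts to latching the pulse count at part detection and firing the camera after a fixed number of pulses. A minimal sketch, assuming a controller that exposes the running encoder count (all names are illustrative):

    # Fire the camera after the belt moves a fixed distance past the
    # presence sensor, regardless of line speed.
    SENSOR_SEPARATION_MM = 50.0
    PULSES_PER_MM = 6.52  # calibration factor from the example above

    PULSES_TO_TRIGGER = SENSOR_SEPARATION_MM * PULSES_PER_MM  # 326 pulses

    def on_part_detected(current_count: int) -> int:
        """Latch the encoder count when the presence sensor fires."""
        return current_count

    def should_acquire(latched_count: int, current_count: int) -> bool:
        """True once the belt has traveled the sensor separation distance."""
        return (current_count - latched_count) >= PULSES_TO_TRIGGER

    # The trigger fires at the same belt position at any speed, because
    # pulses measure distance, not time.
    latched = on_part_detected(current_count=1000)
    print(should_acquire(latched, 1200))  # False -- about 30.7 mm traveled
    print(should_acquire(latched, 1350))  # True  -- about 53.7 mm traveled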
 
 

TECH TIPS

  • For continuously moving objects, motion must be optically or electronically frozen for a quality image to be obtained. 
  • In order for the object to be in the proper location within the image frame, the manufacturer must employ a precise part presence trigger. 
  • The greater the speed of the moving part, the more precise that trigger must be.