Too often in the process of selecting machine vision components, decisions about how the components will be synchronized are left until the end. This can lead to long and costly system integration, so it is important to examine synchronization issues before selecting the system components.

Most machine vision applications require that a sequence of events happen based on time or the movement of the object under inspection. This can be a simple camera trigger to read a matrix code, or a complex set of triggers and sequences for processing, motion control and sorting for a high-speed inspection system with many cameras, light sources and robots.

Today a decision is generally made between a PC-based and a smart camera/sensor-based system. In simple single-camera applications this can be an easy decision. In complex applications it can be difficult, as there is currently no universal way to solve all of the complex cabling, synchronization and timing issues with a solution that works for both smart cameras and PC-based systems. No matter which solution is selected, the synchronization issues need to be solved, which can cost system integration time.

Synchronization for machine vision may require many different time resolutions, from microseconds (µsec) to seconds, to solve triggering, latency and jitter issues. Some typical time requirements:

  • Human reaction time – 250 milliseconds
  • Air cylinder reaction time – 200 to 500 milliseconds
  • Image processing time – 1 to 300+ milliseconds
  • Ethernet latency time – 100 microseconds to 1 millisecond
  • Internet latency time – 30 to 400 milliseconds
  • PLC response times – 100 microseconds to 2 milliseconds
  • Photo beam response times – 100 microseconds to 3 milliseconds
  • Encoder rates – 25 microseconds to 10 milliseconds
  • Camera trigger latency – 2 to 100 microseconds
  • LED trigger latency – 10 nanoseconds to 10 microseconds
  • Motion control – 10 microseconds to 10 milliseconds
  • Object resolution of 1 mm at 10 m/sec speed requires a maximum exposure time of 100 microseconds (see the sketch after this list).
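The last figure is worth a quick check, since the same arithmetic recurs throughout this article: the exposure must be short enough that the object moves less than one resolution unit while the shutter is open. A minimal sketch in Python (the function name is ours, not from any library):

    # Maximum exposure that keeps motion blur below the object resolution.
    def max_exposure_s(resolution_mm: float, speed_mm_per_s: float) -> float:
        return resolution_mm / speed_mm_per_s

    # 1 mm resolution at 10 m/sec (10,000 mm/sec) -> 100 microseconds
    print(max_exposure_s(1.0, 10_000.0) * 1e6)  # 100.0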

Each application poses different synchronization issues. A 3-D laser-measured profile of a log surface is shown here, with the log surface itself (inset). Source: LMI Technologies Inc.

The ideal solution is a programmable system with a single user interface that supplies a common time base to all components, with timing resolution finer than that required by the fastest component. Of course, this brings a multitude of cabling requirements for each camera, light source, image processor, encoder, I/O point and PLC. In the future, with IEEE 1588 or similar, there may be solutions that reduce the cabling and integration costs for high-speed vision and motion control in Ethernet-based systems.

This synchronization can be centralized around three solutions: the PC, the programmable logic controller (PLC) and the smart camera.

The PC is the most common solution: it can interface to all the encoders, I/O and light controllers, and supply a common user interface for timing and sequencing. It is not without challenges, as most cameras and light controllers have unique communication protocols requiring custom drivers to set parameters such as exposure, delays and strobe times. The PC solution may be complex, but it works for all applications, including GigE, FireWire, analog, Camera Link, smart cameras and vision sensors. An additional benefit is that the PC can perform image processing internally. On a limited basis, a smart camera can supply some of these functions.
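To illustrate the custom-driver burden, here is a sketch of the kind of thin abstraction layer an integrator might write on the PC so that exposure and delay parameters are set the same way for every device. The class and method names are illustrative, not any vendor's actual SDK:

    from abc import ABC, abstractmethod

    class CameraDriver(ABC):
        # Common interface the timing/sequencing code programs against.
        @abstractmethod
        def set_exposure_us(self, exposure: float) -> None: ...
        @abstractmethod
        def set_trigger_delay_us(self, delay: float) -> None: ...

    class SomeGigECamera(CameraDriver):
        # Hypothetical vendor wrapper; the real body would speak that
        # camera's native protocol (GigE Vision registers, serial
        # commands over Camera Link, etc.).
        def set_exposure_us(self, exposure: float) -> None:
            pass  # vendor-specific call goes here
        def set_trigger_delay_us(self, delay: float) -> None:
            pass  # vendor-specific call goes here

    cam = SomeGigECamera()
    cam.set_exposure_us(100.0)       # 100 µs exposure
    cam.set_trigger_delay_us(50.0)   # 50 µs trigger delay

Each new camera or light controller then only requires a new wrapper, while the timing and sequencing logic stays unchanged.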



The PLC is the next most common solution. PLCs interface easily to most devices, including encoders and I/O, but can be difficult to interface to cameras and light controllers because communicating with those devices is more complicated. PLCs are a great solution where the required timing resolution is no finer than a few milliseconds and similar latency times can be tolerated.

The smart camera can do its own synchronization in simple applications. The best way to understand these synchronization issues is to consider a few examples (see sidebar Application Examples).

A central PC will likely be required for this 180-degree 3-D laser profiling application. Source: LMI Technologies Inc.

Starting Point

When starting a system design, answer the following questions:
  • Object size and standoff from the camera?
  • Speed and object spacing?
  • Resolution required on the object or surface being inspected?
  • Number of cameras required?
  • Light sources?
  • Encoder or time-based synchronization?
  • Triggering method?
  • Is inspection done in separate stages or envelopes?
  • Image processing time tolerated?
  • How many objects need to be tracked for sorting or reject?
  • Sorting or reject gate response time and distance to first sort gate?
As can be seen from the examples, the more complex the synchronization and timing, the more critical the need for centralized control of all timing. Whether smart cameras or PC-based imaging is used, centralized timing control is a necessity to keep integration time and costs to a minimum. PCs will likely supply this central synchronization task, with complex cabling, until a simpler solution is brought to market. V&S

Application Examples

The best way to understand these synchronization issues is to use a few examples. The application examples here can generally be implemented with either smart cameras/sensors or PC-based machine vision.

Design 1

For design 1, a matrix code reader on a conveyor, the background information is as follows:

Object size is 80 L x 80 W x 80 H millimeters, with a standoff of 300 millimeters. The speed is 1,000 mm/sec (fixed), with 10 objects per meter and spacing between objects of 100 millimeters ±5 millimeters. Resolution is 0.1 millimeter for a 10-millimeter-wide matrix code centered on the side of the object.

It requires one camera and light source, and is time based, with no encoder required. The trigger is supplied by a photobeam with latency/jitter of 5 ms; one inspection envelope is needed. There is no tracking or sorting; the matrix code is simply sent to the host for later use. Everything is timed from the photobeam trigger signal. Requirements include an exposure/LED on-time of 0.1 mm ÷ 1,000 mm/sec = 100 µsec for 0.1 mm resolution.

No encoder is needed as long as the conveyor speed is consistent within a tolerance. The FOV is driven by trigger jitter: 5 ms of jitter at 1,000 mm/sec gives 5 mm of object position jitter even with a constant conveyor speed. In some applications this 5 millimeters of trigger jitter must be added to the ±5 millimeter position tolerance, giving an uncertainty of ±10 millimeters on object location.

Assuming a consistent conveyor speed, an FOV of 40 to 50 millimeters should allow capture of a 10-millimeter matrix code with plenty of tolerance. Thus a standard 640 x 480 imager with a 100 µs global shutter can be used.
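As a rough check of that FOV choice, under the assumptions stated above (constant conveyor speed, 5 ms trigger jitter, ±5 mm spacing tolerance; variable names are ours):

    speed_mm_s       = 1_000.0   # conveyor speed
    trigger_jitter_s = 0.005     # photobeam latency/jitter, 5 ms
    position_tol_mm  = 5.0       # object spacing tolerance, ±5 mm
    code_width_mm    = 10.0      # matrix code width

    jitter_mm      = speed_mm_s * trigger_jitter_s   # 5 mm of position jitter
    uncertainty_mm = position_tol_mm + jitter_mm     # ±10 mm worst case
    min_fov_mm     = code_width_mm + 2 * uncertainty_mm
    print(min_fov_mm)   # 30 mm minimum; 40 to 50 mm adds comfortable margin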

For this design, the PC solution calls for one GigE, FireWire or Camera Link camera, along with one LED light source and possibly an LED strobe controller, depending on the light intensity needed. The PC receives the trigger from the photobeam and adds any delay needed to trigger the camera/LED. The PC then processes the image, extracts the matrix code and sends the data to a host computer or PLC.
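A hedged sketch of that PC-side sequence follows. The Camera and Strobe classes are hypothetical stand-ins for whatever driver API the chosen hardware exposes, and in practice the delay would usually be programmed into the camera or a hardware timing controller rather than done with a software sleep:

    import time

    class Camera:
        def capture(self, exposure_us: float) -> bytes:
            return b""   # vendor driver call would return an image buffer

    class Strobe:
        def fire(self, on_time_us: float) -> None:
            pass         # vendor driver call would fire the LED

    def on_photobeam_trigger(camera: Camera, strobe: Strobe, delay_s: float) -> bytes:
        time.sleep(delay_s)                     # object travels photobeam -> FOV
        strobe.fire(on_time_us=100)             # LED on-time matches exposure
        return camera.capture(exposure_us=100)  # image is then decoded on the PC

    image = on_photobeam_trigger(Camera(), Strobe(), delay_s=0.05)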

This design is actually perfect for a smart camera with a built-in light source. No PC is required: the photobeam triggers the smart camera, which then sends the matrix code data to a host computer or PLC.

Design 2

The next design is for 180-degree 3-D laser profiling.
The object is 80 L x 80 W x 30 H millimeters at a 300-millimeter standoff, offset ±20 millimeters from the center of the conveyor; the conveyor width is 180 millimeters. The speed is 1,000 mm/sec, with 10 objects per meter and object spacing of 100 millimeters ±50 millimeters (600 objects per minute).
The 3-D object resolution is 0.5 millimeter resolution in X, Y and Z. Two cameras are required, one for each 90 degrees with overlapping FOV.
The time base calls for an encoder, while triggering is self-triggering based on 3-D object detection. One 50-millimeter inspection envelope is needed for both cameras.
Buffer one complete object and send to host (160 profiles max for each camera).

Requirements are the following:

The FOV per camera = (80 + 2 × 20)/2 = 60 mm, plus 20 mm of conveyor for a zero reference and overlap of the camera FOVs, for 80 mm total.
Two cameras, each with an FOV of 80 mm ÷ 0.5 mm = 160 pixels, running at 1,000 mm/sec ÷ 0.5 mm = 2,000 fps.
DOF = 30 mm ÷ 0.5 mm = 60 pixels of depth resolution.
The theoretical imager needed is 160 x 60 pixels in a perfect world.
Maximum 250 µs exposure/laser on-time, set by the 2,000 fps rate and the staggering of the two cameras to prevent laser crosstalk in their overlapping FOVs.
Laser duty cycle = 50% or less, depending on the laser power used.
Cameras trigger on every other encoder pulse (camera 1 on even, camera 2 on odd), requiring a 4 kHz encoder rate (see the sketch after this list).
Object triggering is accomplished by the camera searching for the start of the object above the conveyor reference surface.
The host requires one complete image for processing.
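The timing figures above can be recomputed in a few lines (values from the text; variable names are ours):

    speed_mm_s  = 1_000.0
    profile_res = 0.5            # mm of travel per profile
    n_cameras   = 2
    object_len  = 80.0           # mm

    fps_per_camera = speed_mm_s / profile_res     # 2,000 fps each
    encoder_hz     = fps_per_camera * n_cameras   # 4,000 pulses/sec (4 kHz)
    max_exp_s      = 1.0 / encoder_hz             # 250 µs staggered window
    profiles       = object_len / profile_res     # 160 profiles per camera
    print(fps_per_camera, encoder_hz, max_exp_s * 1e6, profiles)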

For a PC solution, use two GigE, FireWire or Camera Link CMOS cameras with programmable windowing and high frame rates, each mounted to see an 80-millimeter FOV.
Two laser line generators are mounted to create a triangulation angle for the FOV.
An encoder is interfaced to the PC at a 4 kHz rate to generate timing for the cameras/lasers.
The PC programs the cameras to set window size and exposure time.
The PC controls triggering of the cameras and lasers at 2,000 fps each, on even and odd encoder pulses (sketched below).
Each frame is delivered to the PC, for a total of 4,000 fps.
The PC performs triangulation processing on each image, finds the start of an object, builds a complete 3-D profile and sends it to the host.
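A minimal sketch of that even/odd dispatch (the trigger function is a hypothetical placeholder for the real camera/laser I/O):

    def trigger_camera_and_laser(channel: int) -> None:
        pass   # would assert the hardware trigger line for the given pair

    def on_encoder_pulse(pulse_count: int) -> None:
        # Called at 4 kHz; alternating pulses fire the two camera/laser
        # pairs so their 250 µs laser on-times never overlap in the
        # shared FOV.
        if pulse_count % 2 == 0:
            trigger_camera_and_laser(1)   # even pulses: camera/laser 1
        else:
            trigger_camera_and_laser(2)   # odd pulses: camera/laser 2

    for pulse in range(8):                # simulate a few encoder pulses
        on_encoder_pulse(pulse)

In a real system this dispatch would typically run in hardware or on a frame grabber or real-time I/O card, since user-space software on a general-purpose PC cannot guarantee microsecond-level jitter at a 4 kHz rate.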

This is a difficult solution with smart cameras, but it can be implemented with CMOS imager-based smart cameras capable of high-speed windowing.
A central PC will likely be required for high-speed processing of the encoder, triggering of the cameras/lasers and building the complete 3-D profile of the object to forward to the host.

Tech Tips

  • Today operators generally make a decision between a PC-based or smart camera/sensor-based system.
  • No matter which solution is selected, synchronization issues need solving.
  • Synchronization for machine vision may require many different time resolutions from microseconds to seconds to solve triggering, latency and jitter issues.