As computer systems and cameras become faster and more powerful, vision systems are finding their way into increasingly diverse applications. So what’s better than one super powerful camera solving the world’s problems? Well, two or more cameras of course, as long as you can control them precisely enough to meet the particular vision system’s requirements. Nick Tebeau, manager of the vision solutions group at LEONI Engineering Products & Services, offers the following perspective: “As the proliferation of machine vision across manufacturing continues in order to control plant quality, we’ll only see more multiple camera applications requiring precision timing between cameras, lights and other plant equipment.” Achieving this coordination and synchronization between the manufacturing process and the multiple cameras within the vision system, at extremely high speed, is a complex topic. This article looks at the generic use case and how synchronization is achieved on the various standardized camera interfaces.

For a vision system to be useful, it needs to obtain a digital image that can then be analyzed with software algorithms to detect particular conditions, allowing a conclusion to be reached and, often, an action to be taken. Have you ever taken an action shot with a camera, hoping to catch exactly the image you want? As a person, you identify a target, use your eyes and hands to aim the camera and, when you feel the moment is right, use your finger to snap the image. This is relatively easy for still photos but increasingly difficult with a rapidly moving target. Vision systems do basically the same thing. A camera (or group of cameras) is trained on a target and must respond to some stimulus that determines “this is the right time to get the image that I want.”

A Simple Use Case

A simple example in the factory would be a series of objects moving rapidly on a conveyor past a camera that captures an image of each one. A computer system then analyzes the digital data and uses software algorithms to determine if the parts are good or bad. The vision system must (see the sketch after this list):

  • Receive a signal from the conveyor or an external sensor that tells the camera when to acquire an image
  • Possibly strobe a light so that the part is illuminated during image acquisition
  • Process the image to reach some conclusion (say good part or bad part)
  • Initiate some action based on the conclusion
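
To make the loop concrete, here is a minimal sketch of those four steps in Python. All of the hardware hooks (the part sensor, the strobe/acquire call, the inspection algorithm, the reject gate) are simulated stubs, since the real calls depend on your camera SDK and I/O hardware.

```python
import random
import time

# --- Hypothetical hardware hooks; a real system would use a camera SDK
# --- and digital I/O. These stubs simulate the behavior for illustration.

def wait_for_part_sensor():
    """Block until the conveyor's photo-eye reports a part in position."""
    time.sleep(0.05)  # simulated sensor delay

def strobe_and_acquire():
    """Fire the strobe and grab one frame while the part is illuminated."""
    return [[random.random() for _ in range(4)] for _ in range(4)]  # fake image

def inspect(image):
    """Run the inspection algorithm; return True for a good part."""
    return sum(map(sum, image)) / 16 > 0.4  # placeholder threshold test

def actuate_reject_gate():
    """Divert a bad part off the line."""
    print("reject: bad part diverted")

# The four steps from the list above, one loop iteration per part.
for _ in range(3):
    wait_for_part_sensor()        # 1. receive the trigger signal
    image = strobe_and_acquire()  # 2. strobe the light and acquire
    good = inspect(image)         # 3. process the image to a conclusion
    if not good:
        actuate_reject_gate()     # 4. act on the conclusion
```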

A slightly more complicated model adds cameras so that multiple perspectives on the target can be analyzed, either examining each view of the object independently or combining the image data to generate a 3D data set. This is where the ability to synchronize image acquisition across the multiple cameras becomes critical.

Frame Grabber vs. Non-Frame Grabber Systems

Eric Carey, chair of the GigE Vision Technical Committee (TC) and area camera and frame grabber director at Teledyne DALSA, explains that there are basically two main configurations for vision systems: those that use a frame grabber and those that rely solely on native interfaces built into PCs. The associated vision standards are Camera Link, Camera Link HS (CLHS) and CoaXPress (CXP) for frame grabber systems, and GigE Vision and USB3 Vision for non-frame grabber systems.

In a frame grabber system, the frame grabber is the master and ensures that all the cameras receive the trigger signal at the same time to acquire an image. This is a fairly easy way to synchronize multiple cameras and is very fast, as the time required to trigger the cameras is governed primarily by the propagation delay of the electrical signal through the wire. This is on the order of nanoseconds (one nanosecond is 1/1000th of a microsecond). You may have heard Camera Link referred to as having a “real time trigger.” Nanosecond response times are so small that this is essentially “real time,” and it is one of the key strengths of Camera Link. In general, trigger signals can be delivered either over the interface cable (I’ll refer to this as the camera cable subsequently), if the interface provides this capability, or over a separate cable via input/output (I/O) pins. Camera Link does provide the ability to send the trigger signal over the camera cable via four LVDS pairs reserved for general-purpose camera control, defined as camera inputs and frame grabber outputs. These “Camera Control” or “CC” lines in a Camera Link cable allow for a convenient one-cable solution.
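
To put “on the order of nanoseconds” in perspective, here is a quick back-of-the-envelope calculation of trigger propagation delay for a typical cable run. The 0.66 velocity factor is an assumed typical value for copper cable, not something specified by Camera Link.

```python
# Back-of-the-envelope trigger propagation delay over a camera cable.
C = 299_792_458          # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66   # assumed typical fraction of c in copper cable
cable_length_m = 10      # example cable length

delay_s = cable_length_m / (C * VELOCITY_FACTOR)
print(f"{delay_s * 1e9:.0f} ns")  # roughly 51 ns for a 10 m cable
```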

For non-frame grabber systems such as GigE Vision or USB3 Vision, there is no provision for an electrical camera control signal over the camera cable. After all, that was part of the strategy of GigE Vision: to use industry-standard cable to drive down costs. These cameras therefore need a separate connector on the back of the camera to receive inputs or provide outputs. Wired this way, a non-frame grabber system behaves much like a frame grabber system: an electrical signal over a dedicated wire achieves very fast, coordinated triggering of multiple cameras.
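
As a sketch of how such a hardware trigger is typically configured, the snippet below writes the trigger-related features standardized by GenICam’s Standard Features Naming Convention (SFNC). The `Camera` class is a stand-in for whatever handle your vendor’s SDK provides; the feature names themselves (`TriggerSelector`, `TriggerMode`, `TriggerSource`, `TriggerActivation`) are the standardized ones.

```python
class Camera:
    """Stand-in for an SDK camera handle; set_feature mimics writing a
    GenICam node on the device."""
    def __init__(self, name):
        self.name = name
    def set_feature(self, feature, value):
        print(f"{self.name}: {feature} = {value}")

def configure_hardware_trigger(camera):
    # Feature names below are standardized by the GenICam SFNC.
    camera.set_feature("TriggerSelector", "FrameStart")    # trigger starts a frame
    camera.set_feature("TriggerMode", "On")                # enable triggered mode
    camera.set_feature("TriggerSource", "Line0")           # physical input line
    camera.set_feature("TriggerActivation", "RisingEdge")  # fire on rising edge

# Wiring one sensor output to every camera's Line0 input gives all cameras
# the trigger within the cable propagation delay discussed earlier.
for cam in (Camera("cam-top"), Camera("cam-side")):
    configure_hardware_trigger(cam)
```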

Software Controls

However, the protocol within GigE Vision does provide software controls for many camera functions over the camera cable. The advantages are eliminating a separate cable and control mechanism for the trigger, as well as the ability to broadcast a software command to multiple cameras. The trade-off is significant latency in the signal travel time. Latency is defined as the time between two events, in this case between issuing the trigger and the camera acting on it. A related term is jitter: the variation in that latency across multiple occurrences of an event, i.e. its repeatability.

One factor that drives up latency for software triggers is that the operating system (OS) is typically not real time. The Microsoft Windows OS, for example, runs many threads, so at any given moment the trigger command may be waiting, for legitimate reasons, behind an unrelated process. This means additional time must be built into the round-trip signal budget, and only latencies on the order of microseconds are achievable. Depending on your system, this may be fine.

A software trigger can be issued either as a direct command (execute as soon as possible) or as a command to execute at a given future time based on a common time base or timestamp. Version 2.0 of GigE Vision added support for the latter by incorporating IEEE 1588, also known as the Precision Time Protocol (PTP). IEEE 1588 polls all the cameras on the network, elects one clock as the master and then synchronizes every device’s timestamp to that master clock. Once the clocks are synchronized, future action commands can be issued over the network via software to achieve precise camera synchronization, as sketched below. One nice aspect of IEEE 1588 is that a whole factory could conceivably synchronize to the same master clock, allowing multiple inspection systems to communicate and determine exactly when a quality issue occurred.
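
Putting the two ideas together, a host application first enables IEEE 1588 on each camera and then broadcasts a scheduled action command for a common future timestamp. The sketch below assumes a hypothetical SDK wrapper: the `Camera` stand-in, the `send_scheduled_action()` helper and the key/mask values are illustrative only, though the device key, group key and group mask fields are part of the GigE Vision 2.0 action command mechanism, and `GevIEEE1588` is the SFNC feature name commonly exposed by GigE Vision 2.0 devices (newer SFNC revisions call it `PtpEnable`).

```python
import time

class Camera:
    """Stand-in for an SDK camera handle; set_feature mimics writing a
    GenICam node on the device."""
    def __init__(self, name):
        self.name = name
    def set_feature(self, feature, value):
        print(f"{self.name}: {feature} = {value}")

def send_scheduled_action(device_key, group_key, group_mask, act_time_ns):
    """Hypothetical helper; a real SDK would emit the GigE Vision
    ACTION_CMD network packet carrying these fields."""
    print(f"ACTION_CMD (key={device_key:#x}) scheduled for t={act_time_ns} ns")

cameras = [Camera("cam-0"), Camera("cam-1"), Camera("cam-2")]

# Enable IEEE 1588 on every camera; PTP then elects the master clock and
# disciplines every other device clock to it.
for cam in cameras:
    cam.set_feature("GevIEEE1588", True)
time.sleep(2)  # give the clocks time to converge (tune for your network)

# Broadcast one action command for a timestamp 10 ms in the future. Every
# synchronized camera exposes at that instant, so OS and network latency
# on the way there no longer matters; only the clock accuracy does.
now_ns = time.time_ns()  # stand-in for reading the PTP master time
send_scheduled_action(0x1, 0x1, 0xFFFFFFFF, now_ns + 10_000_000)
```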

IEEE 1588 Demonstration

To understand these concepts more fully, a picture is worth a thousand words. To this end, the GigE Vision Technical Committee (TC) arranged a demonstration of this feature at the VISION 2016 show in Stuttgart, Germany. The demonstration showed that multiple cameras that have the standardized feature built in can easily be synchronized to precisely coordinate image acquisition. This is where the true beauty of standardized features shines. Instead of writing all of the software for the demonstration from the ground up, standardized features allowed significant parts of the system to be controlled via already-written code built into the GigE Vision cameras and GigE Vision software. Additionally, there is greater flexibility to use a whole range of standardized cameras or software in the initial system design, or, even more importantly, to change the system in the future without a total re-design.

Vision Standards

Let’s look at the benefits of vision standards a little more closely. Vision standards are important because they provide interoperability of the various elements of the vision system. These elements are primarily the camera, the connectors, the cable and software. If the elements of the system meet the standard, system integrators and users have:

  • greater flexibility in designing the vision system
  • less work in putting the system together
  • speedier deployment
  • greater flexibility should they need to change the system in the future

Additionally, standards drive down costs for vision system manufacturers, which in turn reduces costs for the whole downstream ecosystem.

To learn more about the various vision standards, see the “Global Machine Vision Interface Standards” brochure available from the AIA, EMVA or JIIA websites. It gives a detailed description of each standard and presents key performance characteristics, allowing comparisons so that one can decide which standard is most appropriate for a given situation. In the context of this article, you can compare the system architectures of the various standards, along with their latency and jitter, to determine which is the best fit for your application.