The frame grabber is an add-in computer peripheral board designed to capture one or more still frames from a video input source, typically a camera. For over two decades, this dedicated piece of hardware has been entrusted with capturing images transmitted in a variety of different ways, from legacy analog to higher-performance digital means. The need for this dedicated piece of image acquisition hardware has been called into question over the last few years with the introduction of camera interface standards based on widespread computer connectivity, such as FireWire, USB, and Gigabit Ethernet. These camera interface standards take advantage of the connectivity found on today’s computers to reduce overall system cost and complexity.

TECH TIPS

Almost all of today’s frame grabbers have designs based on field programmable gate array (FPGA) devices.

An FPGA-based design gives frame grabber manufacturers the ability to more easily adapt to different high-speed interfaces.

It also offers the ability to make the frame grabber perform custom image processing operations.


The adoption of these new camera interface standards has been seen mainly in low to mid-range imaging systems in terms of performance and cost. At the high end of the spectrum, intensive research and development efforts have created new camera interface standards that necessitate the use of frame grabbers, with new hardware and software technologies, for many years to come.

The Continued Need for Frame Grabbers

Camera interface standards such as USB3 Vision and GigE Vision that dispense with frame grabbers do offer several advantages, but they also have some marked limitations. They are suited only to low- to medium-bandwidth applications: GigE Vision handles about 125MB/s on a single link, while USB3 Vision handles about 350MB/s. They also do not support truly deterministic camera control and triggering.
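As a rough illustration of those limits, the short calculation below compares a hypothetical camera’s output to the single-link figures quoted above; the resolution and frame rate are arbitrary assumptions, not specifications of any particular product.

```cpp
// Rough bandwidth check for an assumed camera against the single-link limits
// quoted above (125 MB/s for GigE Vision, 350 MB/s for USB3 Vision).
#include <cstdio>

int main() {
    // Assumed example camera: 4096 x 3072 pixels, 8 bits per pixel, 60 frames/s.
    const double width = 4096, height = 3072, bytes_per_pixel = 1.0, fps = 60.0;

    const double mb = 1e6;  // decimal megabytes
    const double camera_rate = width * height * bytes_per_pixel * fps / mb;  // ~755 MB/s

    std::printf("Camera output:    %.0f MB/s\n", camera_rate);
    std::printf("GigE Vision link: 125 MB/s -> %s\n", camera_rate > 125 ? "exceeded" : "ok");
    std::printf("USB3 Vision link: 350 MB/s -> %s\n", camera_rate > 350 ? "exceeded" : "ok");
    return 0;
}
```

A sensor in this class outgrows both interfaces on a single link, which is the gap the higher-end standards discussed next are meant to fill.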

Traditional machine vision applications in industries such as flat panel display, semiconductor/wafer, and web/print inspection are under constant pressure to increase performance in terms of speed, accuracy or efficiency. In response, the machine vision industry, spurred on by specific camera manufacturers, took it upon itself to develop new high-performance camera interface standards, beyond the capabilities of those previously established, to match the output of the latest high-resolution and high-speed sensors. CoaXPress and Camera Link HS are the two resulting interfaces.

CoaXPress (CXP) Features

  • High-speed, point-to-point data transmission at up to 6.25Gbps (gigabits per second, ~800Mbytes/sec for 8-bit images) on a single coaxial cable.
  • Transmission distances of up to 100m.
  • Single cable connection which includes data transmission, camera control, triggering, and power.

Camera Link HS (CLHS) Features

  • Scalable bandwidth of up to 16GBytes/sec.
  • Copper or fiber optic cables, allowing transmission distances of 300 meters or more.
  • Low-latency triggering and camera control in the same cable as is used for image data transmission.

In addition to higher data transmission rates, each standard also implements several features to simplify system design:

  • The frame grabber provides power to the camera over the same cable as is used for image data transmission and device control (CXP provides 13W per cable).

  • The GenICam standard is used to normalize the interaction between the application software, frame grabber driver, and camera (see the sketch after this list).

  • Standard interconnect technologies help lower system cost. CXP uses coaxial cables, which are highly cost-effective and already in widespread use. CLHS leverages components that are available from multiple sources and already used in high volume by various IT sectors.
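As a loose sketch of what GenICam normalization looks like from the application’s point of view, the fragment below sets a few standardized feature names. NodeMap, openCamera, and the stub bodies are hypothetical stand-ins for a vendor’s GenICam/GenTL layer, not a real SDK; the feature names themselves (Width, Height, ExposureTime, AcquisitionFrameRate) follow the standard naming convention.

```cpp
// Illustrative only: NodeMap and openCamera are hypothetical stand-ins for a
// vendor's GenICam/GenTL layer; the stubs simply print what a real
// implementation would forward to the device's node map. The point is that
// features are addressed by standardized names, so the application code does
// not change between compliant cameras and frame grabbers.
#include <cstdint>
#include <cstdio>
#include <string>

struct NodeMap {  // hypothetical handle to a device's GenICam node map
    void setInt(const std::string& name, int64_t v)  { std::printf("%s <- %lld\n", name.c_str(), (long long)v); }
    void setFloat(const std::string& name, double v) { std::printf("%s <- %.1f\n", name.c_str(), v); }
};

NodeMap openCamera(int /*index*/) { return NodeMap{}; }  // hypothetical

int main() {
    NodeMap cam = openCamera(0);
    cam.setInt("Width", 4096);                  // standard feature names
    cam.setInt("Height", 3072);
    cam.setFloat("ExposureTime", 25.0);         // exposure in microseconds
    cam.setFloat("AcquisitionFrameRate", 60.0);
    return 0;
}
```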

Beyond the basic necessity set out by these standards, frame grabber hardware and software design has progressed to accommodate the system-level needs imposed by today’s most demanding applications.

Hardware Advances

Almost all of today’s frame grabbers have designs based on field programmable gate array (FPGA) devices. FPGA technology has improved significantly, offering larger-capacity devices (in terms of logic cells/logic elements) with highly efficient architectures, allowing more functions to be implemented on a single device. In addition, updated peripheral technology such as faster memory interfaces, the PCIe 3.0 bus interface, and high-speed transceivers is present to ensure reliable high-speed data transmission. An FPGA-based design not only provides frame grabber manufacturers the ability to more easily adapt to different high-speed interfaces, it also offers the ability to make the frame grabber perform custom image processing operations. These can be implemented by the frame grabber manufacturer or by the application developer using an FPGA development kit.

With data rates in GBytes/sec making their appearance, the ability to offload image pre-processing operations onto the frame grabber is quite useful, especially in multi-camera applications. This frees up the system hosting the frame grabber to perform higher-level image processing and analysis, thus improving overall system performance.
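The fragment below sketches what such offloading might look like from the application side. Grabber and its methods are purely illustrative (no real SDK is implied), and the specific stages, Bayer demosaicing and flat-field correction, are simply assumed examples of pre-processing an on-board FPGA can handle.

```cpp
// Hypothetical sketch: enabling pre-processing stages on the frame grabber's
// FPGA so the host receives frames that are already corrected and only runs
// the higher-level analysis. All names are illustrative.
#include <cstdio>
#include <vector>

struct Frame { std::vector<unsigned char> pixels; };

struct Grabber {                                      // hypothetical frame grabber handle
    void enableOnBoardDemosaic(bool on)    { std::printf("FPGA demosaic: %d\n", on); }
    void enableFlatFieldCorrection(bool on){ std::printf("FPGA flat-field: %d\n", on); }
    Frame grab()                           { return Frame{}; }  // DMA'd, pre-processed frame
};

void analyze(const Frame&) { /* host-side measurement / defect detection */ }

int main() {
    Grabber fg;
    fg.enableOnBoardDemosaic(true);        // done per pixel on the FPGA, not the CPU
    fg.enableFlatFieldCorrection(true);
    for (int i = 0; i < 10; ++i)
        analyze(fg.grab());                // host spends its cycles on analysis only
    return 0;
}
```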

In high-speed imaging applications, deterministic control of the camera, frame grabber, illuminator, and other devices is required to ensure timely, high-quality image acquisition.

Most frame grabbers provide GPIOs (general purpose inputs/outputs) in different formats (LVDS, TTL, and OPTO, to name a few), which allow connections to devices related to image acquisition and analysis. A line-scan application, where each line is captured based on a trigger from a rotary encoder, is a perfect example of the benefit of integrated I/Os. In such a case, the frame grabber handles the incoming trigger and then sends the proper exposure signal to the camera for accurate image acquisition.
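A configuration for that line-scan scenario might look roughly like the sketch below. The structure, function name, GPIO line number, pulse divider, and exposure value are all illustrative assumptions rather than any vendor’s actual API.

```cpp
// Hypothetical configuration sketch: the grabber's GPIO block receives the
// rotary-encoder pulses, optionally divides them, and issues the exposure
// trigger to the camera for each captured line.
#include <cstdio>

struct LineTriggerConfig {        // illustrative settings, not a real SDK
    int encoderInputLine;         // GPIO line wired to the rotary encoder (e.g. an LVDS pair)
    int pulsesPerLine;            // divide encoder pulses so the line pitch matches the optics
    double exposureMicroseconds;  // exposure signal sent to the camera per line
};

void applyLineTrigger(const LineTriggerConfig& c) {   // hypothetical driver call
    std::printf("Trigger line %d, divide by %d, exposure %.1f us\n",
                c.encoderInputLine, c.pulsesPerLine, c.exposureMicroseconds);
}

int main() {
    // Assumed geometry: one image line per 4 encoder pulses, 20 us exposure.
    applyLineTrigger({ 0, 4, 20.0 });
    return 0;
}
```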

The latest high-performance camera interfaces carry control commands over the same connection as the image data. This allows the frame grabber, via its software, to dynamically change camera settings, shortening the control loop for making camera adjustments. For example, the region of interest on an area-scan camera can be changed by the frame grabber based on the results of image processing performed on the frame grabber itself.
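That short control loop could be sketched as follows. setCameraRoi and locateFeatureOfInterest are hypothetical placeholders; a real implementation would write the camera’s OffsetX/OffsetY/Width/Height features over the interface’s control channel.

```cpp
// Illustrative control loop: after each frame is processed, the region of
// interest is moved over the same link that carries the image data.
#include <cstdio>

struct Roi { int offsetX, offsetY, width, height; };

void setCameraRoi(const Roi& r) {        // would map to OffsetX/OffsetY/Width/Height features
    std::printf("ROI -> x=%d y=%d %dx%d\n", r.offsetX, r.offsetY, r.width, r.height);
}

Roi locateFeatureOfInterest() {          // stand-in for the image analysis step
    return Roi{512, 256, 1024, 1024};
}

int main() {
    Roi roi{0, 0, 4096, 3072};           // start with the full sensor
    for (int frame = 0; frame < 5; ++frame) {
        // ...acquire and process the current frame (omitted)...
        roi = locateFeatureOfInterest(); // analysis decides where to look next
        setCameraRoi(roi);               // short control loop over the same cable
    }
    return 0;
}
```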

Frame grabbers can record information related to each frame captured (for example: time stamp and rotary encoder count) for the application.
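Conceptually, each captured buffer ends up paired with a small metadata record along the lines of the sketch below. The field names and the commented getFrameInfo call are assumptions; the actual mechanism (buffer footer, sideband structure, or API query) varies by vendor.

```cpp
// Illustrative per-frame metadata record attached by the grabber.
#include <cstdint>
#include <cstdio>

struct FrameInfo {
    uint64_t timestampNs;    // hardware timestamp latched by the grabber
    uint32_t encoderCount;   // rotary encoder position at exposure
    uint32_t frameId;        // running frame counter, useful for spotting dropped frames
};

int main() {
    // A real application would obtain one of these per captured buffer,
    // e.g. FrameInfo info = getFrameInfo(bufferIndex);   // hypothetical call
    FrameInfo info{123456789ULL, 4096, 1};
    std::printf("frame %u at %llu ns, encoder %u\n",
                info.frameId, (unsigned long long)info.timestampNs, info.encoderCount);
    return 0;
}
```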

Software Advances

A frame grabber’s Direct Memory Access (DMA) engine is designed to transfer images from board to host memory with minimal host intervention. However, in high-speed imaging applications, where the acquisition rate is measured in the tens of thousands of frames per second, any pause in the host computer’s response to acquisition events (due to processing and analysis, for example) can adversely affect the reliability of image capture. To avoid this, frame grabber manufacturers offer different methods of delivering images to host memory with greater certainty.
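For context, the conventional pattern being described can be pictured as a ring of pre-allocated host buffers with a per-frame notification, as in the simplified sketch below; the buffer count, callback name, and simulated loop are all assumptions.

```cpp
// Conventional pattern, sketched for illustration: the grabber's DMA engine
// cycles through a ring of pre-allocated host buffers and signals the host
// once per frame. At tens of thousands of frames per second, any delay in the
// host's response risks the ring wrapping before a buffer is consumed.
#include <cstdio>
#include <vector>

constexpr int kRingSize = 64;                  // assumed ring depth

struct Buffer { std::vector<unsigned char> data; };

void onFrameReady(int index, const Buffer&) {  // conceptually called per DMA completion
    std::printf("frame in buffer %d\n", index);
    // If this handler is delayed by other host work, the DMA engine may reach
    // this buffer again before it has been processed.
}

int main() {
    std::vector<Buffer> ring(kRingSize);
    for (int frame = 0; frame < 10; ++frame)   // simulate the per-frame notifications
        onFrameReady(frame % kRingSize, ring[frame % kRingSize]);
    return 0;
}
```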

A more autonomous image capture mechanism inside the board’s FPGA is one such method. Such a mechanism is based on an embedded processor (for example, a soft processor inside the FPGA device) that can control the entire image acquisition sequence from a single function call by the application running on the host. Because the frame grabber then operates largely on its own, acquisition-related service interrupts to the host system are greatly reduced or even eliminated, and delays in the host’s response no longer compromise capture.
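In contrast to the per-frame notifications sketched earlier, the autonomous approach reduces the host’s involvement to something like a single start call and a single completion event. The function names below are hypothetical, and the prints merely stand in for work done by the on-board processor.

```cpp
// Illustrative sketch of a single-call, FPGA-managed acquisition sequence.
#include <cstdio>

// Hypothetical: the on-board (soft) processor acquires `frameCount` frames
// into pre-allocated buffers and returns immediately; completion is reported
// later with a single event instead of one interrupt per frame.
void startAutonomousSequence(int frameCount) {
    std::printf("sequence of %d frames handed to the on-board processor\n", frameCount);
}

void onSequenceComplete() {
    std::printf("sequence complete\n");
}

int main() {
    startAutonomousSequence(50000);   // one function call from the host application
    // ...host is free to process and analyze earlier data in the meantime...
    onSequenceComplete();             // simulated completion event
    return 0;
}
```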

Image acquisition software running on a real-time operating system (RTOS) is another method of ensuring greater reliability. In this arrangement, one or more cores of a multi-core host processor are set aside to run an RTOS on which the image acquisition software executes. The image acquisition (as well as processing and analysis) software thus runs independently of, and without delays caused by, other system functions executing on the mainstream (Windows) OS. Running the application in a dedicated real-time environment not only avoids the loss of critical image data but also ensures timely interaction with other devices (for example, an illuminator or ejector).
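A full RTOS-on-dedicated-cores arrangement is vendor and OS specific, but a much simpler relative of the idea can be shown with standard Linux calls: pinning the acquisition thread to a core that the rest of the application leaves alone. Core 3 below is an arbitrary choice, and in practice the core might also be removed from the general scheduler at boot (for example with isolcpus).

```cpp
// Pin the current (acquisition) thread to a reserved CPU core on Linux.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <pthread.h>
#include <sched.h>
#include <cstdio>

int main() {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(3, &set);  // reserve core 3 for the time-critical acquisition loop

    if (pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set) != 0) {
        std::perror("pthread_setaffinity_np");
        return 1;
    }
    std::printf("acquisition thread pinned to core 3\n");
    // ...time-critical acquisition loop would run here...
    return 0;
}
```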

Not Going Extinct but Evolving

The notion of the frame grabber’s disappearance was premature. The emergence of the Camera Link HS and CoaXPress high-end camera interface standards has extended the need for it. This, plus its ability to relieve the host system of response-critical and repetitive tasks, secures a place for the frame grabber in high-performance machine vision systems for the foreseeable future.