Frame Grabbers 101
Frame grabbers continue to evolve and adapt to provide greater value for vision systems. 
Source: Teledyne DALSA
 
Frame grabbers were developed in the early days of machine vision to connect analog cameras, which output NTSC and PAL signals, to minicomputers that required digital data placed directly on their buses for storage in memory. Even after the switchover to digital cameras, there was still a disconnect between digital video outputs and computer bus requirements. Something was needed to feed the video data streaming from the camera into the computer’s memory. That need was met by a piece of hardware, still called a frame grabber, which plugged directly into the computer’s motherboard and provided a physical port for connecting the machine vision camera.
 
In general, however, computers have all sorts of data streaming in through various ports to which they must react in real time. Signals sourced by everything from keyboards and mice to high-speed Internet links arrive at every computer’s door from the world outside. Video data streaming from machine vision cameras is just one source. It may be a source requiring an exceptionally high bandwidth, but it is still just one source out of many. 
 
The introduction of high-speed communication links such as Ethernet, FireWire and USB, along with the advent of multi-core processors, has led many camera and vision system designers to leverage these technologies to provide more cost-effective solutions that reduce the need for a frame grabber in applications with modest data rates. Such designs seek to lower cost by using the host computer for both buffering and image processing. Some of these PC interfaces also have the advantage of supporting longer distances between camera and vision system than frame-grabber-based interfaces such as CameraLink allow.
 
Gigabit Ethernet, for instance, supports data rates up to 120 MBytes/second per cable to distances of 100 meters, with data rate scalability possible through the use of multiple cables transporting data in parallel. The GigE Vision camera interface standard arose to give the generic Ethernet link the kinds of camera-control features that vision systems require, while supporting a variety of Ethernet speed grades.
 
As USB, FireWire and Ethernet grew steadily in speed and throughput, pundits began predicting the demise of the frame grabber. Why plug an extraneous piece of hardware into your PC when smart digital video cameras were perfectly capable of packaging the information into standard-formatted data packets ready to feed directly into the computer’s data-hungry communication ports?
 
Yet, the frame grabber hasn’t gone away. While these new direct-to-PC standards are perfectly adequate for some applications, users have found that today’s frame grabbers offer advantages that continue to make them necessary for a range of machine vision projects. This article looks at developments that have made frame grabber demand persist and will likely make it continue into the future.
 

Frame Grabber Requirements

Perhaps most significantly, image sensors continue to produce higher-resolution images at higher frame and line rates, far exceeding the roughly 120 MB/s limit of a single GigE link. Cameras with 4 Mpixel sensors capable of running at 60 or 120 frames per second are widely available.
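
A quick back-of-the-envelope calculation makes the mismatch concrete. The sketch below assumes 8-bit monochrome pixels (one byte per pixel); color output or deeper bit depths only push the numbers higher.

    # Illustrative bandwidth arithmetic: a 4 Mpixel camera vs. a single GigE link.
    # Assumes 8-bit monochrome pixels (1 byte per pixel).
    PIXELS_PER_FRAME = 4_000_000
    BYTES_PER_PIXEL = 1
    GIGE_LIMIT_MB_S = 120          # practical limit of one GigE cable

    for fps in (60, 120):
        rate_mb_s = PIXELS_PER_FRAME * BYTES_PER_PIXEL * fps / 1e6
        print(f"{fps} fps -> {rate_mb_s:.0f} MB/s "
              f"({rate_mb_s / GIGE_LIMIT_MB_S:.1f}x a single GigE link)")

At 60 frames per second such a camera already produces twice what one GigE cable can carry; at 120 frames per second it produces four times as much.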
 
With increased data rates comes both the need to buffer images for processing and a reduction in the time available to process them, the two problems frame grabbers are best adapted to solving. Frame grabbers not only buffer images, but can also offload image-reconstruction and image-enhancement tasks from the host CPU. They can likewise preprocess images for data reduction, or add information to the image data to help reduce processing time. 
 
Of course, machine vision applications involving image data rates beyond 120 MB/s still require frame-grabber-based solutions. 
 
The inspection of flat panel displays during fabrication calls for cameras with ever-increasing resolution. The advent of HD-quality displays in handheld devices such as smartphones calls for inspection of color sub-pixel filters less than 1 micron (µm) across. Meanwhile, the need for manufacturing speed and efficiency calls for cameras imaging as wide a field as possible as rapidly as possible. The result is an extremely high data-rate requirement. One system currently in operation for automated optical inspection of flat panel displays uses multiple line-scan cameras and must handle nearly 7 Gigabytes/second, far more than even GigE can handle.
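
To see how an aggregate rate of that magnitude arises, consider a rough sketch with purely illustrative parameters; the line length, line rate and camera count below are assumptions chosen to show the scale of the problem, not the published specifications of the system described above.

    # Illustrative aggregate data rate for a multi-camera line-scan inspection system.
    # All parameters are assumptions for illustration only.
    PIXELS_PER_LINE = 16_384       # assumed line-scan sensor width
    LINE_RATE_HZ = 70_000          # assumed lines per second
    BYTES_PER_PIXEL = 1            # assumed 8-bit monochrome
    NUM_CAMERAS = 6                # assumed camera count

    per_camera_gb_s = PIXELS_PER_LINE * LINE_RATE_HZ * BYTES_PER_PIXEL / 1e9
    total_gb_s = per_camera_gb_s * NUM_CAMERAS
    print(f"per camera: {per_camera_gb_s:.2f} GB/s, total: {total_gb_s:.2f} GB/s")

With those assumed figures, each camera alone exceeds 1 GB/s and the system as a whole approaches 7 GB/s.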
 
In addition to image acquisition, frame grabbers perform three major tasks in machine vision systems. One is image reconstruction, which in the original analog video technology meant digitizing the analog signal from the video camera, de-interlacing and re-formatting it as needed. It now means reassembling the series of video frames from information broken up into packets for transmission over serial interface cables. The second frame grabber task is to buffer images until the host CPU is ready to receive them. The frame grabber’s third task is to provide real-time control of the camera for such activities as exposure adjustment and shutter activation, and to react to external events deterministically.
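
The buffering role is easy to picture as a ring of pre-allocated frame buffers: the acquisition side deposits frames at a fixed rate, and the host pulls them whenever it is ready. The sketch below is a conceptual illustration only, not the API of any particular frame grabber SDK.

    # Conceptual sketch of frame buffering: frames arrive at a fixed rate and are
    # held in a bounded ring until the host is ready to process them.
    from collections import deque

    class FrameRing:
        def __init__(self, depth):
            self.buffers = deque(maxlen=depth)   # bounded; oldest frame dropped when full
            self.dropped = 0

        def on_frame(self, frame):
            # Acquisition side (in hardware, the grabber's DMA engine fills a buffer).
            if len(self.buffers) == self.buffers.maxlen:
                self.dropped += 1                # host fell behind; a frame is lost
            self.buffers.append(frame)

        def next_frame(self):
            # Host side: pull the oldest buffered frame, or None if the ring is empty.
            return self.buffers.popleft() if self.buffers else None

    ring = FrameRing(depth=8)
    ring.on_frame(b"frame-0")                    # deposited by the acquisition side
    frame = ring.next_frame()                    # consumed by the host when ready

A deeper ring tolerates longer pauses on the host side before frames are lost, which is exactly the headroom that on-board frame grabber memory provides.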
 
Applications like the inspection of electronic circuit boards and continuous webs need both high speed and high precision, much like flat panel inspection. The speed at which the vision system operates determines the rate at which the factory can produce boards. Processing speed in the host system is typically the limiting factor, however, so anything that vision system designers can do to minimize demands on the host processor contributes directly to manufacturing throughput. The offloading that a frame grabber provides can make a major contribution toward that throughput.
 
Some line scan cameras, for instance, produce more than 1 GByte/second of image data. Without a dedicated frame grabber, the tasks of capturing and reformatting the image data, and of responding to external events deterministically present a considerable burden for a PC. Handling these tasks without dedicated hardware leaves little remaining CPU capacity for image processing and other tasks. With a frame grabber offloading the camera-interface and image-assembly tasks, however, much more of the host’s capacity becomes available.
 
Historically, frame grabber and camera designs have evolved in concert to enhance the digital link between them. The evolution of vision-specific digital interfaces has been greatly influenced by vision systems’ needs for higher sustained throughput, longer cable distances, lower heat generation, and smaller physical footprints than standard computer peripheral interfaces could provide. The first such vision-specific digital interface to become an industry standard was AIA’s CameraLink, which provides high-speed triggering and control signals to the camera as well as a data path to the frame grabber. CameraLink uses a set of 11 shielded twisted pairs to carry the signals and data, providing image data transfer rates up to 850 Mbytes/second over a distance up to 10 meters.
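
The 850 Mbytes/second figure follows directly from the standard’s 85 MHz pixel clock and the number of 8-bit taps each configuration carries per clock; the arithmetic sketch below walks through the common configurations.

    # CameraLink throughput at the 85 MHz pixel clock for the common configurations.
    # Each configuration carries a different number of 8-bit taps per clock cycle.
    PIXEL_CLOCK_HZ = 85_000_000
    TAPS = {"Base": 3, "Medium": 6, "Full": 8, "Deca (80-bit)": 10}

    for name, taps in TAPS.items():
        mb_s = PIXEL_CLOCK_HZ * taps / 1e6       # taps x 8 bits = taps bytes per clock
        print(f"{name}: {mb_s:.0f} MB/s")

The Deca (80-bit) configuration, at 10 bytes per clock, is what yields the 850 Mbytes/second peak rate quoted above.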
 
CameraLink remains the dominant interface for camera-to-frame-grabber applications. It continues to offer two major advantages over other interfaces: its hardware-centric protocol requires minimal software, making it easy to design and integrate, and it provides real-time signaling for camera control and external-event synchronization.
 
CameraLink continues to advance with developments in FPGA and transmission technologies. FPGA implementations of CameraLink challenge the long-held view that 10 meters is the maximum cable distance the CameraLink system can support, as well as the notion of an 85 MHz maximum pixel clock rate. With these two critical limitations overcome, the interface standard is becoming fertile ground for new product innovations.
 

New Machine Vision Standards

To meet growing requirements for higher bandwidth and longer cable lengths with a consistent interface design, several new camera interface standards have emerged, with AIA’s CameraLink HS and JIIA’s CoaXPress being the two most important. The CameraLink HS interface can support as many as 20 cable lanes in parallel, with each lane providing 300 Mbytes/second (3.125 Gbits/second) over a distance up to 15 meters. This scalability allows system developers to use the same interface for systems that require anywhere between 300 and 6,000 Mbytes/second of camera data. Additional features of CameraLink HS include a low-overhead protocol, support for the GenICam software standard and real-time triggering, as well as support for optical fiber for extended camera distances.
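
The scalability works out as simple lane arithmetic: each 3.125 Gbits/second lane uses 8b/10b line coding, leaving roughly 300 Mbytes/second of usable payload, and lanes aggregate linearly. A quick sketch:

    # CameraLink HS lane scalability: 3.125 Gbit/s per lane with 8b/10b line coding,
    # leaving roughly 300 MB/s of usable payload per lane after protocol overhead.
    RAW_GBIT_PER_LANE = 3.125
    ceiling_mb = RAW_GBIT_PER_LANE * 1e9 * 8 / 10 / 8 / 1e6   # ~312 MB/s coding ceiling
    USABLE_MB_PER_LANE = 300                                  # figure cited by the standard

    for lanes in (1, 4, 8, 20):
        print(f"{lanes:2d} lane(s): ~{lanes * USABLE_MB_PER_LANE} MB/s "
              f"(coding ceiling {lanes * ceiling_mb:.0f} MB/s)")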
 
CoaXPress, from the Japan Industrial Imaging Association (JIIA), is another new machine vision standard that requires a dedicated frame grabber. The CoaXPress specification allows transmission of up to 6.25 Gbits/second over a single coax cable, with cable lengths of 100 meters possible at lower bit rates. Systems can use as many as four cables in parallel to achieve higher bandwidths. A 20 Mbps “uplink” from host to camera carries control and configuration data, and power can be provided over the cable at 24 volts, up to 13 watts per cable.
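
Here, too, the aggregate figures come from straightforward cable arithmetic; a short sketch (assuming the top 6.25 Gbits/second bit rate and the standard’s 8b/10b coding) shows the downlink bandwidth and power budget as cables are added.

    # CoaXPress scaling: downlink payload per coax at the 6.25 Gbit/s bit rate
    # (8b/10b coded), plus power-over-coax budget, for 1 to 4 cables.
    BIT_RATE_GBPS = 6.25
    payload_mb_per_cable = BIT_RATE_GBPS * 1e9 * 8 / 10 / 8 / 1e6   # ~625 MB/s per cable
    WATTS_PER_CABLE = 13                                            # 24 V power over coax

    for cables in (1, 2, 4):
        print(f"{cables} cable(s): ~{cables * payload_mb_per_cable:.0f} MB/s, "
              f"up to {cables * WATTS_PER_CABLE} W delivered to the camera")

Four cables in parallel thus provide on the order of 2.5 Gbytes/second to the frame grabber.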
 
With these new interfaces increasing the amount of video data vision systems must handle, frame grabbers will also evolve to offer ever more image storage and preprocessing capabilities.
 
The role of frame grabbers in machine vision, then, is far from over. Increasing demand for higher performance image sensors and the resulting increase in data bandwidth ensures that frame grabbers have a role in formatting and buffering data to offload host processors. In addition, frame grabbers continue to evolve and adapt to provide greater value for vision systems by offloading foundation image processing tasks and preparing the image for efficient vision analysis. Even when the camera can plug directly into a PC for data capture and analysis, the additional processing capacity a frame grabber provides gives it a valuable role in high-performance vision-system designs.
 
TECH TIPS
  • Today’s frame grabbers offer advantages that continue to make them necessary for a range of machine vision projects.
  • With increased data rates comes both the need to buffer images for processing and a reduction in the time available to process them, the two problems frame grabbers are best adapted to solving. 
  • Frame grabbers can also preprocess images for data reduction or add information to the image data to help reduce processing time.