Despite much industry speculation about their imminent demise, frame grabbers remain an important component of machine vision systems.

Far from being a fading technology, frame grabbers are continually evolving and playing a vital role in machine vision. Source: Teledyne DALSA


Once simply used as analog video digitizers and image buffers, frame grabbers now range in function from vision-specific interface boards to full-featured embedded image processing devices that present buffered, conditioned images and metadata to a host system for analysis. Further, frame grabber design has continually evolved to keep pace with changes in camera technology and to offload the host processor for increased system performance.

In addition to image acquisition, frame grabbers perform three major tasks in machine vision systems. The first is image reconstruction, which with the original analog video technology meant digitizing the analog signal from the video camera and, if needed, de-interlacing and re-formatting it. The second task is simply to buffer images until the host CPU is ready to receive them. The third is to provide real-time control of the camera for activities such as exposure control and shutter activation, and to react deterministically to external events.
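The buffering task can be pictured with a minimal sketch: a fixed-capacity ring buffer that the grabber side fills as the camera delivers frames and the host side drains when it is ready. This is an illustration of the concept only, not any vendor's API; real frame grabbers implement this in dedicated hardware with DMA to host memory.

```python
from collections import deque

class FrameBuffer:
    """Minimal frame ring buffer: the grabber side appends frames as the
    camera delivers them; the host pops them when it is ready."""

    def __init__(self, capacity):
        self.frames = deque(maxlen=capacity)  # oldest frame dropped on overflow
        self.dropped = 0

    def push(self, frame):
        if len(self.frames) == self.frames.maxlen:
            self.dropped += 1  # host fell behind; count the overwritten frame
        self.frames.append(frame)

    def pop(self):
        return self.frames.popleft() if self.frames else None

buf = FrameBuffer(capacity=3)
for i in range(5):          # camera delivers 5 frames...
    buf.push(f"frame-{i}")  # ...before the host reads any of them
print(buf.dropped)          # 2 frames were overwritten
print(buf.pop())            # oldest surviving frame: "frame-2"
```

The drop counter matters in practice: a system that silently loses frames when the host stalls is much harder to debug than one that reports the loss.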

As digital technology began to replace analog technology in camera design, the role of the frame grabber changed somewhat. With modern digital cameras, image reconstruction involves taking camera data from one or more serial channels and reordering the pixels into an image. The need for image buffering and camera control, though, has remained and must be done in an ever shorter period of time to enable high-speed camera operations.
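The pixel-reordering step can be sketched for one simple case: a hypothetical two-tap camera whose first channel carries the even-indexed pixels of each line and whose second carries the odd-indexed pixels. Real cameras use many tap geometries (left/right halves, reversed readout, and so on), which the frame grabber must know in order to undo.

```python
import numpy as np

def reconstruct_two_tap(tap_a, tap_b):
    """Interleave two tap streams back into full image lines.

    Assumes an illustrative two-tap readout: tap A holds the even-indexed
    pixels of each line, tap B the odd-indexed pixels."""
    lines, half = tap_a.shape
    image = np.empty((lines, half * 2), dtype=tap_a.dtype)
    image[:, 0::2] = tap_a  # even columns come from channel A
    image[:, 1::2] = tap_b  # odd columns come from channel B
    return image

line = np.arange(8, dtype=np.uint8)  # one 8-pixel line, values 0..7
tap_a, tap_b = line[0::2][None, :], line[1::2][None, :]
print(reconstruct_two_tap(tap_a, tap_b)[0])  # [0 1 2 3 4 5 6 7]
```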

Frame grabber and camera designs have evolved together to enhance the digital link between them (Figure 1). The evolution of vision-specific digital interfaces has been greatly influenced by vision systems’ needs for higher sustained throughput, longer cable distances, lower heat generation and smaller physical footprints than standard computer peripheral interfaces could provide. The first such vision-specific digital interface to become an industry standard provided high-speed triggering and control signals to the camera as well as a data path to the frame grabber.

The ubiquitous presence of PC peripheral interfaces such as Ethernet, FireWire and USB, along with the advent of multi-core processors, has led many camera and vision system designers to leverage these technologies to provide more cost-effective solutions, reducing the need for a frame grabber when data rates are modest. Such designs seek to lower cost by using the host computer for both buffering and image processing. Some of these PC interfaces also have the advantage of supporting a longer distance between camera and vision system. Gigabit Ethernet, for instance, supports data rates up to 100 Mbytes/second per cable at distances up to 100 meters, with data rate scalability possible through the use of multiple cables. The GigE Vision camera interface standard arose to give the generic Ethernet link the kinds of camera control features that vision systems require, supporting a variety of Ethernet speed grades.
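The bandwidth figures above translate directly into a frame-rate ceiling. A rough sketch, using the article's ~100 Mbytes/second per Gigabit Ethernet cable and ignoring protocol overhead (so real sustained rates are somewhat lower):

```python
def max_frame_rate(width, height, bytes_per_pixel, cables, mb_per_cable=100):
    """Rough sustained frame-rate ceiling for a GigE Vision link,
    assuming ~100 Mbytes/s of usable throughput per cable and
    neglecting packet and protocol overhead."""
    frame_bytes = width * height * bytes_per_pixel
    link_bytes_per_s = cables * mb_per_cable * 1e6
    return link_bytes_per_s / frame_bytes

# Example: a 2048 x 2048, 8-bit monochrome camera
print(round(max_frame_rate(2048, 2048, 1, cables=1), 1))  # ~23.8 fps on one cable
print(round(max_frame_rate(2048, 2048, 1, cables=2), 1))  # ~47.7 fps on two
```

The linear scaling with cable count is what the standard's multi-link option exploits.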



The digital interface between camera and frame grabber is evolving to provide higher bandwidth and longer separation distances with the advent of new standards like CameraLink HS. Source: Teledyne DALSA

Camera Performance Challenges the PC's Processing

The use of direct-to-computer camera interfaces, however, may have reduced, but has not eliminated, the need for frame grabbers in the vision industry. Image sensor technology is continually increasing both the speed and resolution of the images cameras can provide. Without a dedicated frame grabber, the tasks of capturing and reformatting the image data and of responding deterministically to external events can represent a considerable burden on the PC. Handling these tasks without dedicated hardware leaves little remaining CPU capacity for image processing and other tasks. With a frame grabber offloading the camera interface and image assembly tasks, however, much of the host’s capacity becomes available for vision processing and other work.

A growing number of applications require cameras of such high, or even higher, performance. The inspection of flat panel displays during fabrication, for instance, needs cameras with ever-increasing resolution. The advent of HD-quality displays in handheld devices such as smartphones now calls for inspection of color sub-pixel filters less than 1 µm across. Meanwhile, the need for manufacturing speed and efficiency calls for cameras imaging as wide a field as possible, as rapidly as possible. The result is extremely high data rates. One system currently in operation for automated optical inspection of flat panel displays uses multiple line-scan cameras and must handle nearly 7 Gigabytes/second.
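A back-of-the-envelope calculation shows how quickly line-scan systems reach that range. The camera count, line length and line rate below are illustrative assumptions, not the specifications of the system mentioned in the article:

```python
def aggregate_rate_gb_s(cameras, pixels_per_line, line_rate_khz, bytes_per_pixel=1):
    """Aggregate line-scan data rate in Gbytes/s.
    All parameter values in the example are hypothetical."""
    per_camera = pixels_per_line * line_rate_khz * 1e3 * bytes_per_pixel
    return cameras * per_camera / 1e9

# e.g. eight hypothetical 8k-pixel line-scan cameras at a 100 kHz line rate:
print(aggregate_rate_gb_s(8, 8192, 100))  # ~6.55 GB/s, the same order as 7 GB/s
```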

The inspection of electronic circuit boards or web inspection applications has similar needs for both high speed and high precision. As with flat panel inspection, the speed at which the vision system can operate determines the rate at which the factory can produce boards. Processing speed in the host system is typically the limiting factor, so anything vision system designers can do to minimize the demands on the host processor contributes directly to manufacturing throughput. The offloading that a frame grabber provides can make a major contribution toward that throughput.



Frame Grabbers Offload Vision Tasks

There are also opportunities for the frame grabber to offload from the host processor tasks beyond image assembly and handling of the camera interface. Many applications have a need for deterministic but time-consuming operations such as color space conversion, multi-spectral image extraction, and the generation of image meta-data. Such tasks can be embedded into the frame grabber, providing pre-processing that helps simplify and accelerate subsequent image analysis in the host processor. Embedded image processing with wide applicability may be built into the frame grabber by the manufacturer, or the frame grabber can serve as a platform for users to develop and embed their own custom image processing.

A food-inspection system, for instance, could use a frame grabber to prepare both visible-color and monochrome infrared images from the camera data simultaneously. The host system can then run different inspection algorithms on each image type without having to manipulate the raw camera data twice. With the images already separated and assembled, the host processor can complete its inspection tasks more efficiently.
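The separation step itself is simple once the sensor layout is known. A sketch, assuming a hypothetical four-channel frame with the planes ordered R, G, B, NIR (real multi-spectral layouts vary by sensor, and the frame grabber must be configured to match):

```python
import numpy as np

def split_visible_and_nir(raw):
    """Split a hypothetical 4-channel R,G,B,NIR frame into the two images
    the host's inspection algorithms consume. Channel order is an
    assumption for illustration."""
    visible = raw[..., :3]  # R, G, B planes for the color algorithms
    nir = raw[..., 3]       # monochrome near-infrared plane
    return visible, nir

raw = np.zeros((480, 640, 4), dtype=np.uint8)  # placeholder frame
visible, nir = split_visible_and_nir(raw)
print(visible.shape, nir.shape)  # (480, 640, 3) (480, 640)
```

Performed in the frame grabber, this split happens once, in hardware, before the data ever crosses the bus to the host.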

This ability to offer pre-processing as well as image assembly is indicative of the ongoing evolution in frame grabber functionality. Frame grabbers are also continuing to evolve as new camera interfaces arise. These new interfaces are needed in part because continual increases in the resolution and frame rates of vision system cameras are pushing the bandwidth limits of existing camera interfaces. Even where high-performance cameras are not required, though, some system designs are calling for greater distances between the camera and the frame grabber or vision processor than the traditional twisted-pair wiring.



New Camera Interfaces Boost Performance

To meet these growing requirements for higher bandwidth and longer cable lengths with a consistent interface design, several new camera interface standards have emerged or are in development.

CoaXPress is one example: a new machine vision standard from the Japan Industrial Imaging Association (JIIA) that requires dedicated frame grabbers. The CoaXPress specification allows transmission of up to 6.25 Gbits/second over a single coax cable at distances up to 100 m. Systems can use as many as four cables in parallel to achieve higher bandwidths. A 20 Mbps “uplink” from host to camera carries control and configuration data, and power can be provided over the cable at 24 V, up to 13 W per cable.
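Converting the 6.25 Gbits/second line rate into usable image bandwidth requires accounting for the link's line encoding. The sketch below assumes 8b/10b encoding (8 payload bits carried per 10 line bits), which CoaXPress uses; packet overhead would trim the result slightly further:

```python
def cxp_payload_mb_s(gbps_per_cable=6.25, cables=1, encoding_efficiency=0.8):
    """Approximate usable CoaXPress payload in Mbytes/s.
    The 0.8 factor assumes 8b/10b line encoding; packet framing
    overhead is ignored in this rough estimate."""
    return gbps_per_cable * 1e9 * encoding_efficiency / 8 / 1e6 * cables

print(cxp_payload_mb_s(cables=1))  # 625.0 Mbytes/s per cable
print(cxp_payload_mb_s(cables=4))  # 2500.0 Mbytes/s with four cables in parallel
```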

The AIA’s GigE Vision standard can currently support high bandwidths using a 10-GigE interface, which delivers up to 1200 Mbytes/second; the 10-GigE standard by itself, however, does not support real-time trigger signaling without special hardware. Adoption of 10-Gigabit Ethernet in mainstream machine vision cameras has nevertheless been slow, due in large part to its heavier protocol overhead and its higher power and heat footprint. If and when the 10-GigE interface becomes a mainstay in machine vision cameras, reconstructing data packets into a usable image at the data rates the link provides will be a task best left to a frame grabber. While a standard off-the-shelf NIC can serve as the camera interface when host CPU utilization is not a key consideration, a 10-GigE frame grabber provides an efficient way to move images, not just data, into the host computer.
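The packet-to-image reconstruction task can be illustrated with a toy sketch: each packet carries a 1-based packet ID and a payload chunk, and packets may arrive out of order, so each payload is written at the offset its ID implies. A real GigE Vision stream adds leader and trailer packets and resend requests, all omitted here.

```python
def reassemble(packets, payload_size, image_size):
    """Toy reassembly of a GVSP-style packet stream into one image buffer.
    packets: iterable of (packet_id, payload) with 1-based IDs, possibly
    out of order. Leader/trailer packets and resends are omitted."""
    image = bytearray(image_size)
    for packet_id, payload in packets:
        offset = (packet_id - 1) * payload_size  # ID determines placement
        image[offset:offset + len(payload)] = payload
    return bytes(image)

packets = [(2, b"WORLD"), (1, b"HELLO")]  # arrived out of order
print(reassemble(packets, payload_size=5, image_size=10))  # b'HELLOWORLD'
```

Done in software, this per-packet work scales with line rate and is exactly the CPU load a 10-GigE frame grabber takes over.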

With these new interfaces increasing the amount of video data vision systems must handle, frame grabbers will also evolve to offer even more image storage and preprocessing capabilities, including the ability to aggregate multiple cameras or links in parallel. Such parallelism greatly simplifies the creation of multi-camera systems for applications involving precision inspection of large objects or continuous web inspection.

The role of frame grabbers in machine vision, then, is far from over. Increasing demand for higher-performance image sensors, and the resulting increase in data bandwidth, ensures that frame grabbers retain a role in real-time image capture, camera control, and the formatting and buffering of data to make efficient use of the host. In addition, frame grabbers continue to evolve and adapt to provide greater value in vision systems by offloading compute-intensive image processing tasks and preparing images for efficient vision analysis. Even when a camera can plug directly into a PC for data capture and analysis, the processing capacity a frame grabber brings to a vision system design gives it a valuable role as a key enabler of high-speed, high-performance vision systems. V&S