When it comes to cameras and frame grabbers, when is dumb smart?

Advances in semiconductors and sensors are driving the industry to adopt smart cameras as the most cost effective approach to most machine vision applications. Source: Alacron


With today’s rapidly evolving technology, including ever-greater miniaturization and processing power, there is some confusion about when and where “intelligence” should be added to an image processing system.

An imaging system usually consists of a sensor with output (a camera), a connection to a processing or storage unit (cabling), and the storage or processing unit (the computer, PC or laptop).

Customers often ask machine vision equipment manufacturers and integrators to evaluate and recommend solutions for many different imaging applications. The best way to make a sale and build customer loyalty is to recommend the “best” solution, whether or not the company’s own products are included. Doing so requires an understanding of the most important cost and performance differentiators that contribute to the optimum solution for each application.

The camera data rate and the number of operations per pixel determine the optimum solution for a given application. Although not illustrated, there may be some additional engineering cost when using a smart frame grabber or camera because a more costly engineering resource is used; that cost becomes insignificant if multiple systems are being built. Usually, cost is king: it is the most important determinant, provided the system is adequate for the required task.
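As a rough, hypothetical sizing sketch (the camera format, frame rate and operations-per-pixel figures are illustrative assumptions, not numbers from the article), both determinants can be estimated directly from the sensor and the algorithm:

```python
# Hypothetical sizing sketch: estimate the two determinants named above.
def camera_data_rate_mb_s(width, height, bits_per_pixel, fps):
    """Raw video data rate in megabytes per second."""
    return width * height * (bits_per_pixel / 8) * fps / 1e6

def required_gigaops(width, height, fps, ops_per_pixel):
    """Processing throughput required, in billions of operations per second."""
    return width * height * fps * ops_per_pixel / 1e9

# Example: a 2048 x 1088, 8-bit camera at 100 frames per second running an
# algorithm that costs roughly 50 operations per pixel (illustrative values).
rate = camera_data_rate_mb_s(2048, 1088, 8, 100)   # about 223 MB/s
ops = required_gigaops(2048, 1088, 100, 50)        # about 11 GigaOPS
print(f"data rate: {rate:.0f} MB/s, processing: {ops:.1f} GigaOPS")
```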

Regular Cameras

A regular camera has a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) sensor, a driver and I/O board that transmits data via a connection to a computer or laptop.

In general, the standard PC-based I/O for cameras is limited to about 50 to 100 megabytes (MB) per second; beyond that range, data is lost or cannot be handled by typical multicore PC processors. The commodity PC interfaces such as USB 2.0, IEEE 1394a or 1394b (FireWire), and GigE all fit this description.

When this rate is exceeded, a “buffer,” or frame grabber, is needed to ensure data integrity. Even with some of the newer standards such as USB 3.0 or 10 GigE, or when multiple inputs (for example, several GigE cameras) are used, a frame grabber will be needed. Several frame grabber manufacturers market multiple-input GigE frame grabbers to overcome this risk of data loss.

Using a regular camera, instead of a smart camera, is the best option when:
  • The host computer has available I/O bandwidth; that is, its absorption rate (how much data it can accept without data loss) is at least as high as the camera data output rate.
  • The video data rate is less than the available bandwidth of the standard PC interconnect (a sizing sketch follows this list). Roughly:
  • GigE: less than 100 MB/second.
  • FireWire 800 (IEEE 1394b): less than 80 MB/second.
  • FireWire 400 (IEEE 1394a): less than 50 MB/second.
  • USB 2.0: less than 35 MB/second.
  • The host computer has processing bandwidth for the application.
  • The host computer operating system can provide the CPU bandwidth for the application.
  • The cost of the hardware is compatible with the application.
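A minimal sketch of that bandwidth check, using the approximate interconnect limits listed above (the example camera parameters are hypothetical):

```python
# Approximate sustainable interconnect bandwidths from the list above, in MB/s.
INTERCONNECT_LIMITS_MB_S = {
    "GigE": 100,
    "FireWire 800 (1394b)": 80,
    "FireWire 400 (1394a)": 50,
    "USB 2.0": 35,
}

def usable_interconnects(width, height, bits_per_pixel, fps):
    """Return the camera's raw data rate and the standard interconnects that can carry it."""
    rate_mb_s = width * height * (bits_per_pixel / 8) * fps / 1e6
    options = [name for name, limit in INTERCONNECT_LIMITS_MB_S.items()
               if rate_mb_s <= limit]
    return rate_mb_s, options

# Hypothetical example: a 1280 x 1024, 8-bit camera at 60 fps (~79 MB/s)
# fits on GigE or FireWire 800; anything faster points toward a frame grabber.
rate, options = usable_interconnects(1280, 1024, 8, 60)
print(f"{rate:.0f} MB/s -> {options if options else 'frame grabber needed'}")
```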


    Smart Cameras: Sensor and "Smarts"

    The current family of smart cameras uses both CCD and CMOS sensors. Frequently, processing is added by attaching a processor to the camera’s internal bus. This approach allows easy programmability in languages such as C or C++, as well as the use of standard, commercially available image processing packages; popular libraries can be incorporated into the camera. The use of a commodity processor or off-the-shelf library limits the camera data rate but is adequate for most machine vision applications.

    As larger format, faster and cheaper sensors enter the mainstream, the processing power of smart cameras will need to keep pace. This can be achieved by adding field-programmable gate arrays (FPGAs), application-specific silicon (ASSIL) or additional processors to the camera, dramatically increasing the processing power into the range of several to hundreds of billions of operations per second (GigaOPS) or floating-point operations per second (GigaFLOPS). For example, a 4-megapixel sensor at 100 frames per second with an algorithm costing 100 operations per pixel already requires roughly 40 GigaOPS.

    Size and Power Limitations

    Cameras, both regular and smart, typically rely on passive thermal solutions. However, packing high performance processing electronics into the constrained camera body makes thermal management far more critical for a smart camera.

    The size of the camera restricts the space available for electronics. In practice, a camera is limited to roughly a four- to five-inch cube; any larger and the camera becomes unwieldy.

    Cooling fans are typically not used because of vibration and reliability concerns. Chillers require too much power or space.

    Passive cooling is usually improved by adding thermal ridges or fins to the case. Increasing the number of fins improves heat dissipation, but only up to the point where convection air flow decreases. The improvement in the thermal power limit is usually not more than about 10 watts.

    Considering these factors, the total camera power is limited to about 25 to 30 watts by cooling and size constraints.
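    A toy power-budget check, with purely hypothetical component figures, illustrates how quickly that envelope is consumed:

```python
# Hypothetical smart-camera power budget; every component figure is illustrative.
POWER_LIMIT_W = 28.0  # middle of the 25-30 W passive-cooling envelope

budget_w = {
    "sensor and readout": 4.0,
    "FPGA preprocessing": 10.0,
    "embedded processor": 8.0,
    "memory and I/O": 4.0,
}

total_w = sum(budget_w.values())
headroom_w = POWER_LIMIT_W - total_w
status = "within" if headroom_w >= 0 else "over"
print(f"total {total_w:.1f} W, headroom {headroom_w:.1f} W ({status} the limit)")
```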

    Today’s increasingly compute-intensive applications will ensure that both smart cameras and smart frame grabbers will continue to play an important role in the machine vision industry. Source: Alacron

    Smart Camera Advantages

    Smart cameras have several advantages that make them almost indispensable for selected application environments. These include the ability to perform the required image processing at the data source, the sensor, which significantly reduces the data stream: only results are sent to the PC. In turn, less expensive standard PC interconnects can be used between the camera and the main PC.

    Finally, scaling up to distributed processing with multiple smart cameras is easy and requires minimal cabling, which may be impossible or extremely difficult with a regular camera and PC model.
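    As a hedged sketch of the processing-at-the-source idea (the thresholding step and the result format are illustrative stand-ins, not any vendor's API), camera-side code can reduce each megapixel frame to a few dozen bytes of results:

```python
import numpy as np

def process_on_camera(frame: np.ndarray, threshold: int = 128) -> dict:
    """Run the inspection inside the camera and return only the result,
    not the image. Here the "inspection" is a stand-in: find bright pixels
    and report their area and centroid."""
    mask = frame > threshold
    area = int(mask.sum())
    if area == 0:
        return {"pass": False, "area": 0}
    ys, xs = np.nonzero(mask)
    return {"pass": True, "area": area,
            "centroid_x": float(xs.mean()), "centroid_y": float(ys.mean())}

# A 1-megapixel, 8-bit frame is roughly 1 MB; the dictionary sent to the PC
# is a few dozen bytes, so a modest interconnect easily carries the results.
frame = (np.random.rand(1024, 1024) * 255).astype(np.uint8)
print(process_on_camera(frame))
```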

    Smart cameras should be used when:
  • The available interconnect methods cannot provide the bandwidth necessary to transport the data.
  • The host computer does not have the available I/O bandwidth.
  • The host computer does not have processing bandwidth for the application.
  • The host computer OS cannot provide the CPU bandwidth for the application.
  • The engineering cost to program the smart camera is compatible with the application.
  • The overall cost of the hardware is compatible with the application.


    Standard Frame Grabbers

    A standard frame grabber consists of a camera interface and the ability to transfer data to the PC. The computer interface is usually PCI or PCI Express, which scales to transfer rates of multiple gigabytes per second if required. If there is a mismatch between the camera delivery rate and the PC, a buffer is needed to ensure that no data is lost.
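    A toy simulation (the rates and frame size are hypothetical) shows why that buffer matters: a brief stall on the PC side is absorbed by on-board memory instead of causing dropped frames.

```python
from collections import deque

FRAME_BYTES = 2_000_000        # 2 MB per frame (illustrative)
buffer = deque()               # stand-in for frame grabber memory
peak_frames = 0

# Simulate 100 ms in 1 ms steps: the camera delivers a frame every 2 ms
# (500 fps); the PC normally drains at the same rate but stalls for 20 ms.
for ms in range(100):
    if ms % 2 == 0:
        buffer.append(FRAME_BYTES)             # new frame from the camera
    pc_stalled = 40 <= ms < 60
    if buffer and not pc_stalled and ms % 2 == 1:
        buffer.popleft()                       # PC absorbs one frame
    peak_frames = max(peak_frames, len(buffer))

print(f"peak buffering: {peak_frames} frames "
      f"(about {peak_frames * FRAME_BYTES / 1e6:.0f} MB of on-board memory)")
```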

    With cameras that use standard PC I/O reaching cost parity with cameras that use legacy interfaces such as RS-170, PAL, low voltage differential signaling (LVDS) or even low speed Camera Link, the “dumb” frame grabber is increasingly being eliminated. An exception is when the I/O rate exceeds the standard PC interfaces and computer image processing power is not a significant issue, for example, when a high speed camera is used to record and store large amounts of data to a disk or disk array.

    This simplified diagram illustrates that the camera data rate and the number of operations per pixel determine the optimum solution for a given application. Source: Alacron

    Accelerated Frame Grabbers or Vision Processors

    When properly programmed, a commodity multicore PC is capable of providing hundreds of millions to several billions of operations per second (OPS) or floating point operations per second (FLOPS) on a sustained basis. When the data processing demand exceeds this, data is either lost or remains unprocessed, and an accelerator is usually needed to assist the PC in its image processing task. Usually the accelerator is combined with a frame grabber and is termed an accelerated or smart frame grabber, or a vision processor.
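    A small sketch of that sizing decision (the sustained host throughput and the example camera are assumptions chosen to make the arithmetic concrete):

```python
# Assumed sustained host capability: "several billions of operations per second."
HOST_SUSTAINED_GOPS = 3.0

def needs_accelerator(width, height, fps, ops_per_pixel,
                      host_gops=HOST_SUSTAINED_GOPS):
    """Compare the required throughput with what the host PC can sustain."""
    required_gops = width * height * fps * ops_per_pixel / 1e9
    return required_gops > host_gops, required_gops

# Hypothetical line-scan inspection: 4096-pixel lines at 70,000 lines/s,
# 40 operations per pixel -> roughly 11.5 GigaOPS.
accelerate, gops = needs_accelerator(4096, 1, 70_000, 40)
verdict = "accelerated frame grabber" if accelerate else "host PC is sufficient"
print(f"required {gops:.1f} GigaOPS -> {verdict}")
```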

    Key components of a smart frame grabber include multiple camera inputs, scalable memory, a scalable-performance system bus interface, a microprocessor, a field-programmable gate array (FPGA) and application-specific silicon (ASSIL).

    Use a smart frame grabber when:
  • The host computer does not have processing bandwidth for the application.
  • The host computer OS cannot provide the CPU bandwidth for the application.
  • The overall cost of the hardware is compatible with the application and is cheaper than the smart camera approach.
  • Applications need to be split between smart cameras and high performance frame grabbers.
  • Processing demands exceed what is possible both in cameras, due to the power and size limitations discussed in the smart camera section earlier, and in general purpose computers.


    Cost vs. Need for Speed

    The best purchasing decision depends on finding the easiest and most cost effective approach by analyzing one’s situation against the factors described here.

    Advances in semiconductors and sensors are driving the industry to adopt smart cameras as the most cost effective approach to most machine vision applications. The exception remains the low end of the market, where regular cameras will still find a niche. But for much of the rapidly growing and increasingly demanding machine vision industry, compute-intensive applications will ensure that both smart cameras and smart frame grabbers continue to play an important role. V&S

    Tech Tips

    Smart cameras should be used when:
  • The host computer does not have the available I/O bandwidth.
  • The host computer does not have processing bandwidth for the application.
  • The overall cost of the hardware is compatible with the application.