Smart, small, and flexible: smart cameras combine a standard machine vision camera, programmable processing, communications, and sometimes lighting in a compact enclosure. The advantages of this kind of “all-in-one” vision system include easier integration, lower cost, and the ability to satisfy a wide range of application needs. Smart cameras can find surface defects, guide robots, verify assemblies, and operate in harsh industrial environments.

This article reviews what to look for in a smart camera and provides a brief history of how the smart camera evolved. We’ll look at today’s “fourth generation” smart camera and envision how smart cameras will improve.

What to look for in a smart camera

A smart camera must balance many competing requirements. First, it must have the computational ability to solve your machine vision task within the cycle time of your process. Machine vision tasks can require a large amount of computation due to large data sets (images) and computationally demanding algorithms.
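As a rough sanity check, you can add up the time your task needs for exposure, processing, and reporting and compare the total with the process cycle time. The numbers in this small Python sketch are purely illustrative assumptions, not measurements from any particular camera:

    # Back-of-the-envelope cycle-time budget for a hypothetical inspection station.
    # All numbers are illustrative assumptions.
    exposure_ms   = 10    # image exposure and readout
    processing_ms = 180   # vision algorithms (locate part, measure, decide)
    reporting_ms  = 15    # sending the result to the PLC

    total_ms = exposure_ms + processing_ms + reporting_ms
    cycle_time_ms = 250   # a new part arrives every 250 ms in this example

    print(f"Vision task needs {total_ms} ms of a {cycle_time_ms} ms cycle")
    if total_ms > cycle_time_ms:
        print("Too slow: simplify the algorithm, speed up the camera, or add a station")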

Second, you want a small smart camera. Small size makes it easy to fit the camera into tight spaces or to retrofit it into existing processes. Small size and weight are also important when the camera is mounted on a moving platform, such as a robot arm. Size limits a smart camera’s heat dissipation, however, and so can limit its computational ability.

Third, the camera must be designed to withstand the environment it is used in. This could include temperature range, splash from wash-down, dust, vibration, and electrical noise. A smart camera should carry an IP67 (Ingress Protection) rating, meaning it is dust-tight and protected against temporary immersion in liquid. Cables and connectors should also be rated for your environment.

Fourth, the smart camera’s ease of use is critical. Someday you might have a sincere discussion with your “very smart camera” about how to do a machine vision task. For example:

You, pointing at a part: “I want you to dimension these parts and reject those that are out of tolerance, as specified by the part’s CAD files.”

Very Smart Camera: “I’m sorry Dave, I’m afraid I can’t do that. My current lens and working distance do not provide a large enough field of view. Let me suggest a different optical configuration.”

But until we have very smart cameras, you will need some basic machine vision skills and you will need to know how to program the smart camera to do your task. That’s why smart camera software should minimize the knowledge you need and make programming as easy as possible. The software “framework” for setting up the smart camera, and the machine vision knowledge built into it, are crucial to your success.

Last, there are features and accessories that make using a smart camera quicker and easier. These include the ability to remotely program and monitor the smart camera; you really don’t want to get a scissor lift and a laptop to access the smart camera on that overhead conveyor! The smart camera should “speak” your PLC and factory protocols, so that it is easy to integrate into your line or SPC system. Most smart cameras have only a few input/output (IO) lines, limited by the space on the camera body for connectors, so you might want an optional IO “expansion box.” You might also want a camera that can control lighting, so that the lighting can be synchronized with the smart camera’s image acquisition.
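As a small illustration of lighting control, here is a hedged sketch of synchronizing a strobe pulse with image acquisition. The SmartCamera class and its methods are invented for this example and are not any vendor’s real API:

    # Hypothetical sketch: pulse a strobe on a digital output, then expose
    # while the part is lit. Class and method names are invented.
    class SmartCamera:
        def pulse_output(self, line, duration_ms):
            """Pulse a digital output wired to the strobe controller."""
            print(f"strobe on line {line} for {duration_ms} ms")

        def acquire(self, exposure_ms):
            """Expose and read out one image."""
            print(f"exposing for {exposure_ms} ms")
            return b"image bytes"

    def inspect_one_part(cam):
        cam.pulse_output(line=0, duration_ms=2)   # fire the strobe
        return cam.acquire(exposure_ms=2)         # expose while the part is lit

    image = inspect_one_part(SmartCamera())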

A Short History of Smart Cameras

We can divide smart camera history into four technology generations. My examples are meant to illustrate this division, and I make no claims of historical accuracy or completeness.

Smart cameras emerged in the 1980s, mostly from universities. Processor technology of the 1980s was difficult to fit into a small package and had very limited computational ability. These first-generation smart cameras often had custom hardware to accelerate specific machine vision algorithms, such as edge detection, and were not easy to program.

Second-generation smart cameras used digital signal processing (DSP) technology for computation. DSP-based smart cameras were introduced in the 1990s and, in my opinion, established the smart camera market. These smart cameras had adequate processing power and a small form factor, and they hid the difficulties of programming a DSP under a user-friendly framework.

A third generation of smart cameras came onto the market in the 2000s. These cameras used laptop personal computer (PC) technology, which provided good computational ability and a familiar Windows programming environment. PC-based smart cameras were bulkier than their DSP counterparts due to the additional components and the power dissipation of their x86 processors.

About the same time (2004), another company used a “neural net” processor in its small smart camera. It was a “learn by showing” smart camera, and so it was an interesting step toward that sincere discussion with your very smart camera.

About six years ago, processors and technology developed for smart phones became capable enough to use in smart cameras. These smart cameras are riding the wave of decreasing price and increasing processor performance generated by the huge volume of consumer smart phones, along with the adaptation of standard software to those phones.

Fourth Generation: Software Becomes the Product

Fourth-generation cameras are small, low-wattage devices with enough computational ability for many low- and medium-speed machine vision tasks. Software is key to how easily you can set up and program the smart camera. Just as with a smart phone, the software becomes the product in the user’s mind. The smart camera’s user interface accomplishes two tasks. First, it makes device programming easier through graphical interaction. Second, it encapsulates domain-specific knowledge so that we don’t have to know or remember technical details.

For ease of programming, a smart camera should have a graphical user interface (GUI) for specifying operations, parameters, and program flow. For example, program flow could be specified by a graph of operation boxes connected by data or control paths. Or you might use the mouse to select image areas and click icons to specify operations on those areas. Lists can specify the order of operations and contingent operations, such as sending a rejection signal for an out-of-tolerance part. Another type of interface uses a graphical tree structure to represent sequential program flow and branching.
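Behind such a GUI, the program is typically stored as an ordered structure of operation “boxes” with parameters and branches. The Python sketch below shows one plausible internal representation; the operation names and parameters are invented for illustration:

    # One plausible internal representation of a GUI-built vision program:
    # an ordered list of operation "boxes" with parameters and a pass/fail branch.
    pipeline = [
        {"op": "acquire_image", "params": {"exposure_ms": 5}},
        {"op": "find_part",     "params": {"search_region": (0, 0, 640, 480)}},
        {"op": "measure_width", "params": {"nominal_mm": 25.0, "tolerance_mm": 0.1}},
        {"op": "decide",        "on_pass": "log_result", "on_fail": "set_reject_output"},
    ]

    for step in pipeline:
        print(step["op"], step.get("params", {}))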

One way to include domain-specific knowledge is to frame operations in terms familiar to a quality control or process engineer. For example, we want a “caliper” operation rather than a “parabolic sub-pixel Sobel edge-to-edge” operation. I’m exaggerating here, but you get the idea. Another way to include domain knowledge is to build “tools,” groups of operations that perform standard machine vision tasks; familiar examples are optical character recognition (OCR) for reading printed text and barcode reading. A third way is to build “experts” into the software, though this is difficult unless the task domain is carefully limited.
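As a rough illustration of how a domain-level tool can hide low-level operations, here is a sketch of a “caliper” built on top of an assumed edge-detection primitive. None of these function names belong to a real camera library:

    # Sketch of a domain-level "caliper" tool wrapping a low-level edge primitive.
    # find_edge_positions() is a placeholder, not a real API.
    def find_edge_positions(image, region):
        """Stand-in for a sub-pixel edge-detection primitive."""
        return [101.2, 352.8]   # two edge locations in pixels (illustrative)

    def caliper(image, region, mm_per_pixel, nominal_mm, tolerance_mm):
        """Measure the distance between two edges and report pass/fail."""
        edges = find_edge_positions(image, region)
        width_mm = abs(edges[1] - edges[0]) * mm_per_pixel
        return width_mm, abs(width_mm - nominal_mm) <= tolerance_mm

    width, ok = caliper(image=None, region=(0, 0, 640, 480),
                        mm_per_pixel=0.1, nominal_mm=25.0, tolerance_mm=0.2)
    print(f"width = {width:.2f} mm, pass = {ok}")

The process engineer thinks in millimeters and tolerances; the edge filters stay hidden inside the tool.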

Looking Forward

I expect smart cameras will get faster and more capable, following the wave of consumer products. I don’t expect much reduction in smart camera size, as you still need space on the camera body for connectors, a lens, and perhaps lighting. Increasing computational performance could require some increase in power dissipation.

Smart cameras will have new forms of connectivity for networks of smart cameras. I also expect the return of application specific processors, such as neural nets, for accelerating specific tasks. Specialized smart cameras have been and will be developed for tasks in high-volume markets, such as pedestrian and obstacle detection in cars or 3-D imaging in entertainment consoles.

Smart cameras have come a long way in the last few years, but that sincere discussion with your very smart camera will have to wait a little longer.