Fast-Evolving Smart Cameras Are Transforming Machine Vision
First introduced in the 1980s, smart cameras or “smart sensors” combine a lens, an embedded image sensor, a processor, interfaces, and software into a small, all-in-one vision system. Besides being inexpensive, their primary advantage is on-board computational power sufficient to solve a vision task independently, without a connection to a host PC. A compact form factor also makes smart cameras easy to fit into tight spaces or to retrofit into an existing process. Because they have few moving parts and do not generate high temperatures, maintenance costs stay low. Smart camera systems also typically ship with GUIs for developing a machine-vision inspection program with little or no programming. Smart cameras that integrate fast CPUs may even allow the use of familiar off-the-shelf software packages designed for host-based systems.
It is important to point out that while many consumer cameras have built-in signal and image processing power, this does not qualify them as “smart cameras.” What differentiates consumer-level cameras from true smart cameras is their purpose. A smart camera uses an application-specific information processing block, or “ASIP,” running analytics algorithms to make decisions for other devices in an automated system. A consumer camera with embedded processing, by contrast, serves only personal enjoyment.
Until the late 2000s, smart cameras featured mainly low-end processors and were deployed for single-purpose applications that required no programming, such as bar code reading, localized pass/fail decision making, OCR, or counting. Difficulty managing pattern-recognition algorithms meant that complex images, or operations requiring rapid analysis, were well outside their scope. Manufacturers proved willing to trade the power and versatility of a host-based system for the “drag and drop” simplicity and compactness of smart cameras.
As processing power improved, however, smart cameras began to compete with host-based systems in applications including assembly verification, 1D and 2D barcode inspection, and robotic guidance, to name just a few. Adding an FPGA to perform low-level image pre-processing was an important step, as it freed the CPU for higher-level image processing tasks, decreasing processing times and latency. Smart cameras have also benefited from recent developments in CMOS sensor technology, with the result that the market now offers an enormous choice of models across an impressive range of resolutions. While smart cameras are now making their way into the healthcare, surveillance, and entertainment industries, they have primarily been used in the manufacturing sector and continue to enjoy increasing rates of adoption, especially where independent inspection is needed at multiple locations along a production line, e.g., in the auto industry. Networks of distributed smart cameras running distributed algorithms are also proving to be a key enabling technology for embedded computer vision, cloud-oriented Internet of Things systems, and many future applications.
Bigger Processors, Bigger Problems
Processing power has long been a limitation for smart cameras, prompting many customers to ask, “Why not incorporate larger processors?” Simply put, a larger processor and FPGA translate into a much larger camera, and the same goes for a larger sensor to increase image resolution. Both defeat one of the main benefits of the smart camera: compact size and simplicity.
In addition, a larger processor consumes more power, generating heat that must be dissipated. A fan isn’t the answer, since it introduces the potential for mechanical failure and can cause vibrations that influence measurements. As a result, system integrators have continued to rely on traditional PC-based cameras for high-speed, complex applications and on smart cameras for low-end, single-purpose applications, which is not an efficient division of labor.
New Imaging Challenges
The pursuit of productivity and quality has spurred the development of a new generation of more advanced smart cameras. These new cameras combine multi-megapixel resolution with speeds comparable to a host-based system, but without the footprint, heat, and power drawbacks of bulky processors or sensors. Easy-to-use yet sophisticated open-source software and image processing libraries are included in the package for rapid development of a variety of inspection tasks, such as pattern matching, filtering, defect analysis, and OCR.
Since connectivity is a key factor defining automation applications, the cameras are built around proven machine vision standards such as Camera Link, GigE Vision, CoaXPress, or USB3. An industrial-grade IP67 housing is now standard to withstand the harsh chemicals, vibration, moisture, extreme temperatures, dust, and other contaminants found in industrial environments. Despite these advances, the new smart cameras still cost less to buy, have a smaller footprint, and deliver imaging quality comparable to their host-based counterparts.
This new breed of smart cameras found inspiration in the wave of miniature, low power technologies that were a byproduct of consumer imaging products. Engineers have transferred ARM processors, CMOS sensors, and other components typically used in products such as smartphones to develop even smarter cameras that meet the demanding reliability, versatility and reproducibility requirements specific to machine vision. As a result, this next generation of smart cameras is leading to higher adoption rates of smart cameras in robotics, surveillance, interactive advertising, self-driving cars, healthcare and entertainment imaging projects, as well as in traditional machine vision.
Conversely, consumer-level cameras have borrowed processing from smart cameras. Google, for example, has introduced Clips, a camera that uses machine learning to automatically take snapshots of people, pets, and other things it finds interesting. Apple’s newest iPhone, similarly, uses facial recognition to unlock the device.
Better, Faster, Stronger
Leveraging their new high-end designs, smart cameras today can deliver speeds of up to 300 frames per second at full 12-megapixel resolution in 10-bit mode, or 140 fps in 12-bit mode, a capability that was unheard of a few short years ago. Reducing the acquisition ROI, thereby lessening the amount of data per image, can further increase speed.
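As a back-of-the-envelope check, the quoted modes imply raw sensor data rates in the tens of gigabits per second, and the ROI trade-off follows directly from the arithmetic. The sketch below uses only the figures quoted above; the half-ROI example is illustrative, assuming readout bandwidth is the limiting factor.

```python
def pixel_throughput_gbps(megapixels, fps, bits_per_pixel):
    """Raw sensor data rate in gigabits per second."""
    return megapixels * 1e6 * fps * bits_per_pixel / 1e9

# Full-frame modes quoted in the article
full_10bit = pixel_throughput_gbps(12, 300, 10)  # 36.0 Gbit/s
full_12bit = pixel_throughput_gbps(12, 140, 12)  # 20.16 Gbit/s

# Illustrative ROI trade-off: halving the readout region (12 MP -> 6 MP)
# halves the data per frame, so the same 36 Gbit/s budget supports
# roughly double the frame rate.
half_roi_fps = full_10bit * 1e9 / (6 * 1e6 * 10)  # 600 fps at 6 MP, 10-bit
```

This is why trimming the ROI to just the region containing the part under inspection is the first lever integrators reach for when an application needs more speed.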
Although smart camera architectures vary, one design that has proven highly effective pairs a dual-core ARM Cortex-A9 processor with a Xilinx FPGA. The FPGA efficiently handles image pre-processing and the communication interfaces to ensure real-time management with minimal latency. Higher resolutions help system integrators do more with fewer cameras and cables, saving money on their designs. Moreover, by swapping host-based cameras for these new smart cameras, the integrator eliminates the need for frame grabbers and the time-consuming set-up of each individual camera.
Modern FPGAs found in these cameras can be tailored to specialized target applications and are compelling alternatives to x86 processors. FPGAs can now manage the image sensor and I/O, a configuration that translates into vastly improved acquisition management and a reduced workload for the CPU. Unlike the complex FPGAs found on legacy smart cameras, new streamlined FPGAs can be programmed easily in VHDL. Users can implement proprietary algorithms directly in the FPGA to offload the CPU, so that its only task is analyzing the data extracted by the FPGA.
For rapid configuration, smart camera manufacturers offer GUIs that help integrators ensure the camera is properly focused and positioned within the correct field of view. Open platforms let integrators develop custom applications using their own code or third-party libraries such as OpenCV, which is ready to use and free of charge. User-defined inspection programs can be built and tested on any computer without a custom cross-compiler or a board support package.
Typically, older smart cameras were closed systems: you could only deploy software from the camera’s own supplier. This approach is in direct contrast with the goals of modern automation, which call for customization, flexibility, and scalability. When a programmer can’t do anything beyond what the installed software allows, they are operating with their hands tied. By deploying smart cameras running a Linux OS, however, the programmer can maximize the productivity of a vision system at lower cost.
In Industry 4.0 settings, devices must be connected with other components and systems involved in the industrial value creation process as well as to the factory’s networks and the internet. Ethernet networking connects smart cameras to automated devices that act on provided information, instantly leading to desired actions without any human intervention. Instead of being a reactive tool to detect defects, today’s smart cameras have become extraction tools that employ Big Data-style statistical and data science techniques to draw insights from images and apply them throughout the enterprise.
For example, with the use of data analytics and smart cameras, a plant manager is able to determine when a piece of equipment will fail before the maintenance crew notices there is a problem. The systems sense warning signs, use data to create maintenance timelines, and preemptively service equipment before trouble starts. Or consider the use of smart cameras to acquire images of molded plastic parts at every stage of the production process, starting at suppliers’ locations. Images recorded of parts can be compared with thousands of others stored in the cloud to identify correlations and trends. Or they can be combined with sensor data from other connected devices to gain insights into how each step and variable impacts the final product.
In addition, smart camera manufacturers are working to accommodate the range of industrial networking standards used in automation so that their cameras are able to communicate across all industrial protocols and with standard discrete I/O. Factory protocols can be directly integrated on some cameras today, but other protocols, such as EtherNet/IP and PROFINET, may require third-party converters.
While current points of view differ over meeting the challenges presented by automation, one thing everyone can agree on is that smart cameras will play a game-changing role in its development and adoption. V&S