Technological developments now allow increasingly fast and efficient processing of digital image data. Programmable cameras equipped with a digital signal processor (DSP) and/or a field programmable gate array (FPGA) module permit image processing
tasks to be performed directly on the camera.
Intelligent Cameras as Fully Embedded Systems

The complete image processing for an application can be implemented on an intelligent camera. These image-processing modules possess an image sensor and a separate processor so that they can operate autonomously: decisions about quality, completeness or identity, for example, are made in the camera. This means that the camera not only records images, but also interprets them independently and can then output corresponding control signals via industrial interfaces such as RS232 or Ethernet. An external PC is not required, thereby eliminating the need for data transmission routing, which is often susceptible to interference in industrial environments. Intelligent image processing modules also are advantageous if compact and lightweight designs are required for an application.
Intelligent cameras normally have a versatile and powerful processor, usually a DSP or a combination of DSP and a general-purpose processor (GPP), such as an ARM or Atom. In addition to the optical charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) sensor, these cameras are equipped with a working memory (RAM) and various interfaces for connection to an external computer or network such as Ethernet, USB, Firewire, RS232, and other input and output signals. As a result, an intelligent camera is a complete embedded system that is able to perform even complex image processing and control tasks fully independently.
Typical image processing tasks that can run on a DSP include image segmentation and feature extraction, as well as object and object position recognition in 2-D and 3-D. Intelligent cameras also are suitable for reading 1-D and 2-D codes. They can be used for pattern and object recognition in conjunction with machine learning algorithms. Intelligent systems also are of interest for access control with biometric data.
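Two of the tasks named above, segmentation and object recognition, can be sketched in a few lines. The following is a minimal, illustrative Python sketch (not a specific camera's API): global thresholding segments the image, and an iterative flood fill counts the resulting blobs. The function names, threshold value, and list-of-lists image representation are assumptions chosen for clarity.

```python
# Illustrative sketch of two on-camera tasks: segmentation by global
# thresholding, then blob (connected-component) counting.
# The "image" is a plain list of lists of grayscale values (0-255).

def segment(image, threshold=128):
    """Binarize: pixel -> 1 (object) if at or above threshold, else 0."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def count_blobs(binary):
    """Count 4-connected regions of 1-pixels with an iterative flood fill."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and binary[y][x] and not seen[y][x]):
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return blobs

frame = [
    [10, 200, 200, 10, 10],
    [10, 200, 200, 10, 230],
    [10, 10, 10, 10, 230],
]
print(count_blobs(segment(frame)))  # two separate bright regions -> 2
```

On an intelligent camera the result of such an analysis, for example a part count or a pass/fail decision, would be emitted directly over RS232 or Ethernet rather than printed.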
There is no standard definition for intelligent or “smart” cameras; different sensors, processors and operating systems are used, depending on the manufacturer and application. Designs range from the size of a shoebox to compact and lightweight board versions and models with a remote sensor. Cameras are available with a freely definable or manufacturer-specific operating system that can be custom-programmed with common programming languages such as C/C++.
There also are cameras with preprogrammed image processing software. This software can be adapted to a specific application so that it then runs autonomously. In this case, the existing algorithms are normally simply selected from an image processing library via a user interface and then configured as required. This means that there are hardly any development costs, but the price of these camera modules is higher than that of freely programmable components.
Programmable FPGA Modules

Intelligent cameras can be significantly enhanced by assigning image processing functions to an FPGA module. The load on the camera central processing unit (CPU) is reduced by transferring standardized functions to an FPGA, and this in turn accelerates the required computing processes.
FPGAs are programmable logic modules that operate in parallel. An FPGA is normally used to control the image sensor. The module also can be used as a buffer to store images. The image data from the sensor can be transferred directly to the FPGA with a high bandwidth. Many image processing algorithms can exploit the parallel architecture of the module. As a result, performance is higher than that of a sequential processor despite lower clock rates.
Typical preprocessing tasks such as lookup tables, binarization, Bayer filtering, downsampling and interpolation of defective pixels exploit the potential of FPGA technology. Even more complex operations such as run-length encoding, JPEG compression, blob analysis and ROI tracking can be realized on an FPGA. Preprocessing of image data in the FPGA starts as soon as the first pixels have been recorded. This means that image processing can even be realized in real time.
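Two of the stages just named, a lookup table and run-length encoding, can be illustrated concisely. The sketch below is plain Python, not FPGA code: on the FPGA these operations run pixel-by-pixel on the data stream from the sensor, while here they are modeled sequentially. The threshold value and function names are assumptions for illustration.

```python
# Illustrative model of two FPGA preprocessing stages:
# 1) binarization via a lookup table (one precomputed output per 8-bit input),
# 2) run-length encoding of the resulting scanline, which compresses the data
#    before it is passed on.

THRESHOLD = 100  # assumed value for illustration
LUT = [1 if v >= THRESHOLD else 0 for v in range(256)]

def run_length_encode(scanline):
    """Encode a scanline as (value, run_length) pairs."""
    runs = []
    for px in scanline:
        if runs and runs[-1][0] == px:
            runs[-1][1] += 1
        else:
            runs.append([px, 1])
    return [(v, n) for v, n in runs]

raw = [12, 15, 200, 220, 210, 90, 30]     # incoming pixel stream
binary = [LUT[px] for px in raw]          # -> [0, 0, 1, 1, 1, 0, 0]
print(run_length_encode(binary))          # -> [(0, 2), (1, 3), (0, 2)]
```

The seven input pixels shrink to three (value, length) pairs; on real images with large uniform regions, this kind of on-camera compression is what keeps the data within standard industrial transmission bandwidths.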
Programming of the FPGA circuit structures is more complex than development of software-based solutions. Nevertheless, it is worth optimizing the overall image processing process by individual programming of the components for larger applications. FPGA modules also can be reprogrammed as required, allowing the same component to be adapted to changed requirements or used for other tasks. Developers can integrate their own functions on FPGA modules with the programming languages VHDL or Verilog.
FPGAs are used in non-intelligent cameras in order to relieve the load on a host system by preprocessing image data on the camera or to speed up computing processes. In industrial applications, for example, cameras that offload specific low-level image processing tasks to an FPGA offer the advantage that less computing power is required in the host system, less heat is generated and the costs of the overall system are reduced. Standard industrial transmission bandwidths can be used through data compression in the FPGA.
The machine vision industry is not sitting still. Cameras, intelligent or not, continue to advance, offering more functionality and speed for a variety of applications. V&S
Application Example

An example of an autonomous image processing module in practical operation is controlling the automatic landing of a drone using an intelligent camera. The lightweight, 500-gram drone with four rotor blades, known as a quadrocopter, can be remote-controlled manually or fly along a route programmed in advance via GPS. Previously, landing had to be performed by remote control by a trained operator, since the GPS technology used does not provide the positional accuracy needed to land on an area of approximately 1.5 square meters.
In addition, the barometric altitude measurement performed during landing is made more difficult by the so-called ground effect: when the drone is close to the ground, air turbulence and vertical lift forces occur under the rotor blades. Automation of landing by an intelligent camera now permits autonomous operation of the drones, such as for installation maintenance or monitoring large and inaccessible areas. For this purpose, a tracking algorithm on the camera evaluates the flying altitude of the drone, its relative position and its orientation in relation to active LED markers at the landing area. The camera takes over the control system for the landing operation on the basis of this position data.
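The position cue such a system derives from the LED markers can be sketched simply: threshold the frame, take the centroid of the bright marker pixels, and report the offset from the image center. The following Python sketch is an assumption-laden illustration of that idea, not the actual drone controller; the threshold, frame layout, and coordinate convention are all chosen for clarity.

```python
# Hedged sketch: locate bright LED markers in a thresholded frame and
# report the lateral offset of their centroid from the image center.
# A real landing controller would also use marker geometry to recover
# altitude and orientation; this shows only the 2-D offset cue.

def marker_centroid(frame, threshold=200):
    """Mean (row, col) of all pixels at or above threshold, or None."""
    pts = [(r, c) for r, row in enumerate(frame)
                  for c, px in enumerate(row) if px >= threshold]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def lateral_offset(frame):
    """Offset of the marker centroid from the image center, in pixels."""
    center = (len(frame) / 2 - 0.5, len(frame[0]) / 2 - 0.5)
    cen = marker_centroid(frame)
    if cen is None:
        return None
    return (cen[0] - center[0], cen[1] - center[1])

frame = [
    [0, 0, 0, 0, 0],
    [0, 0, 0, 255, 0],
    [0, 0, 0, 0, 0],
]
print(lateral_offset(frame))  # marker right of image center -> (0.0, 1.0)
```

In the tracked loop, an offset like this would be fed to the flight controller each frame, steering the drone over the landing area as it descends.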