Embedded Vision Puts Full Power in Compact Footprint
Developers are working to drive out cost and reduce system size while offering enhanced flexibility.
The introduction of the PC and the increasing functionality of integrated circuits created a new market for PC-based single-board computers, frame grabbers, I/O peripherals, graphics, and communications boards: the building blocks of today’s embedded electronics and machine vision systems. Today, developers can choose from a wide range of boards, form factors, and functionality, including products based on the OpenVPX, VME, CompactPCI, cPCI Express, PC/104, PC/104-Plus, EPIC, EBX, and COM Express standards.
Embedded vision can take two tracks: open small-form-factor image processing boards and peripherals based on these computing platforms and the aforementioned standards, or custom designs that combine cameras, processors, frame grabbers, I/O peripherals, and software. While the hardware of open embedded vision systems may be relatively easy to reverse engineer, custom embedded vision designs are more complex, highly proprietary, and may use custom-designed CMOS imagers and custom hardware description language (HDL) code, such as Verilog, embedded in FPGAs and ASICs.
Hardwiring programs for embedded speed
In embedded vision design, many image processing functions that lend themselves to a parallel dataflow are implemented in FPGAs. Altera (now part of Intel) and Xilinx offer IP libraries that accelerate these functions on their FPGAs. Intel’s FPGA Video and Image Processing Suite, for example, is a collection of Intel FPGA intellectual property (IP) functions for developing custom video and image processing (VIP) designs, ranging from simple building blocks, such as color space conversion, to video scaling functions. Likewise, Xilinx offers IP cores for tasks such as color filter interpolation, gamma correction, and color space conversion.
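For a sense of what a color space conversion block actually computes: RGB-to-YCbCr conversion is a fixed-coefficient matrix multiply per pixel, which is why it maps so cleanly to FPGA DSP slices. The sketch below is an illustrative software model using standard BT.601 full-range coefficients, not a representation of any vendor's IP.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one full-range (0-255) RGB pixel to YCbCr
    using the BT.601 coefficients. In an FPGA IP block the
    same multiply-accumulate runs once per pixel per clock."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return round(y), round(cb), round(cr)
```

White maps to maximum luma with neutral chroma, and black to zero luma with neutral chroma, which is a quick sanity check on the coefficients.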
Both Intel and Xilinx offer third-party IP through their partnership programs. Through its Xilinx Alliance Program, for example, Xilinx includes products from companies such as Crucial IP, iWave Systems Technologies, and Xylon, which offer IP for noise reduction, video encoding, and video-to-RGB conversion, respectively.
Figure 2: In a teardown of the iPhone X, iFixit researchers found that the TrueDepth sensor cluster used in the device costs Apple $16.70. Apple declined to comment on the price of such components, but such low costs are not unusual in high-volume consumer products. Source: iFixit iPhone X Teardown
Leveraging the power of FPGAs, camera companies have been quick to recognize the need for peripherals that can be used in open embedded systems. Companies such as Allied Vision and Basler have already introduced camera modules to meet such demands. To reduce the host processing required, camera modules with on-board processing can off-load functions such as noise reduction and debayering, allowing the developer to concentrate on the application software (Figure 1).
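Debayering is a good example of work worth off-loading: every pixel of the raw mosaic must be reconstructed into full color before the host sees it. The sketch below shows a deliberately naive 2x2-block demosaic of an RGGB Bayer pattern; real camera modules use more sophisticated interpolation, and this is only an illustration of the principle, not any vendor's algorithm.

```python
def demosaic_rggb(raw):
    """Naive 2x2-block demosaic of an RGGB Bayer mosaic.
    raw: list of rows of sensor values; height and width even.
    Returns a half-resolution image with one (R, G, B) tuple
    per 2x2 Bayer cell, averaging the cell's two green sites."""
    h, w = len(raw), len(raw[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = raw[y][x]                              # top-left: red
            g = (raw[y][x + 1] + raw[y + 1][x]) / 2    # two greens
            b = raw[y + 1][x + 1]                      # bottom-right: blue
            row.append((r, g, b))
        out.append(row)
    return out
```

Running this per frame on the host is exactly the kind of per-pixel load that an on-module FPGA removes.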
Embedded vision components are being incorporated into a myriad of applications. Even so, a handful of industrial sectors are receiving most of the attention, largely due to economies of scale. These include automotive, medical, security, and consumer applications. Taken together, they spotlight key trends: developers are working to drive out cost and reduce system size while offering enhanced flexibility.
Figure 3: BIKI from Robosea is an underwater drone shaped like a fish that employs a 3840 x 2160 pixel camera, 32 GB memory, and on-board features such as automated balance and obstacle avoidance. Source: BIKI: First Bionic Wireless Underwater Fish Drone
Automotive and Security
Advanced driver assistance systems (ADAS) capabilities such as mirror replacement, driver drowsiness detection, and pedestrian protection systems are pushing the need for enhanced image processing within automobiles. According to the research firm Strategy Analytics, most high-end mass-market vehicles are expected to contain up to 12 cameras within the next few years. In these applications, high-speed computing with low energy consumption is a critical factor, and there are many opportunities for vision innovation to have an impact both inside and outside the vehicle. In the future, custom solutions seem almost inevitable as automakers offer up their own branded cabin configurations of entertainment and information systems.
Automotive camera systems must quickly process and analyze image data under the most extreme conditions, and do so in the face of stringent automotive safety standards. To address these challenges, Arm has developed the Mali-C71, a custom image signal processor (ISP) capable of processing data from up to four cameras and handling 24 stops of dynamic range to capture detail in both bright sunlight and deep shadow. Reference software controls the ISP, sensor, auto-white balance, and auto-exposure. To further leverage the device in the automotive market, the company plans to develop Automotive Safety Integrity Level (ASIL)-compliant automotive software.
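To put the 24-stop figure in perspective: each stop is a doubling of light, so 24 stops corresponds to a scene contrast ratio of 2^24, roughly 16.7 million to one, between the brightest and darkest detail preserved. A quick back-of-the-envelope conversion:

```python
import math

def stops_to_contrast(stops):
    """Each photographic stop doubles the light level,
    so the contrast ratio is 2**stops."""
    return 2 ** stops

def stops_to_db(stops):
    """Sensor dynamic range is commonly quoted as
    20*log10(ratio), making one stop about 6.02 dB."""
    return 20 * math.log10(2 ** stops)
```

For the 24 stops claimed for the Mali-C71, that works out to a 16,777,216:1 contrast ratio, or roughly 144 dB.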
Two major applications of medical embedded systems are endoscopy imaging and X-ray imaging, which in turn enhance diagnosis and treatment. Use of embedded vision within the medical imaging market is growing rapidly, driven by a call for minimally invasive diagnostic and therapeutic procedures, the need to accommodate aging populations, and rising medical costs.
To develop portable products for this market, developers often turn to third-party companies for help. Zibra Corp. turned to NET USA for assistance in the design of its coreVIEW series of borescopes and endoscopes. NET developed a remote camera based on AWAIBA’s 250 x 250-pixel NanEye imager and a camera main board that incorporates an FPGA to perform color adjustment and dead pixel correction. An HDMI output on the controller board allows images captured by the camera to be displayed at distances of up to 25 feet.
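Dead pixel correction is conceptually simple, which is one reason it suits a small on-board FPGA. A common approach, sketched below in software purely for illustration (not necessarily NET's method), replaces a known-bad pixel with the median of its in-bounds neighbors:

```python
def correct_dead_pixel(img, x, y):
    """Replace the known-dead pixel at (x, y) with the median
    of its in-bounds 8-connected neighbors.
    img: list of rows of grayscale values, modified in place."""
    h, w = len(img), len(img[0])
    neighbors = [
        img[ny][nx]
        for ny in range(max(0, y - 1), min(h, y + 2))
        for nx in range(max(0, x - 1), min(w, x + 2))
        if (nx, ny) != (x, y)
    ]
    neighbors.sort()
    img[y][x] = neighbors[len(neighbors) // 2]  # median neighbor
    return img
```

Because the sensor's defect map is fixed at manufacture, the FPGA can apply this correction at line rate with only a small buffer of neighboring rows.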
New embedded vision markets want vision without a PC, a GPU, or a hard drive; they want the system reduced to the minimum. Reducing system cost, however, poses a conundrum for companies traditionally involved in the machine vision market, where high-resolution, high-speed cameras can cost thousands of dollars. In a teardown of the iPhone X, iFixit researchers found that the TrueDepth sensor cluster used in the device costs Apple $16.70. Apple declined to comment on the price of these components, but such low costs are not unusual in high-volume consumer products (Figure 2).
While traditional machine vision camera vendors might not want to compete in the consumer market, other opportunities exist for vendors of smart camera modules. These include prosumer drones that can be used for industrial applications such as thermography to analyze the heat loss of buildings. Consider, for example, the BIKI from Robosea, an underwater drone shaped like a fish that employs a 3840 x 2160 pixel camera, 32 GB memory, and on-board features such as automated balance and obstacle avoidance (Figure 3).
As embedded vision proliferates in automobiles, medical imaging, remote inspection, and consumer electronics, opportunities will continue to arise for vision vendors both traditional and nontraditional in scope. Today, the biggest challenge facing the embedded vision market may be educating an increasingly image-savvy public on the benefits of system-level machine vision designs that can fit in extremely compact locations—including a front pants pocket.