The term machine vision refers to the ability of machines to visually perceive their environment. A typical setup consists of a camera that captures the images, a cable that links the camera to a PC, and the PC itself, which performs the image processing.

The main advantage of such a setup is its simplicity. The plug-and-play components are connected via a standard interface (such as USB or GigE) and use standard protocols (USB3 Vision, GigE Vision). Since they work with a standard operating system such as Windows, the developer can quickly and efficiently create additional software using commercial image processing libraries.

On the other hand, such systems require a lot of space and have a relatively high power consumption. Their most significant drawback is the high system cost, which can easily rise to several thousand dollars. 

However, technological progress in recent years now makes it possible to implement image processing solutions in an “embedded” form factor that delivers the performance previously available only in a system with a high-end camera and PC.

In this new embedded setting, the individual components are combined into a single device: machine vision becomes embedded vision.

An embedded vision system comprises a camera without housing (known as a board-level camera) connected via an inexpensive cable directly to a processing board (also called an embedded board). The processing board handles the same image processing that the PC performed in the classic machine vision setup.

COMPARISON OF EMBEDDED VISION SYSTEMS WITH STANDARD VISION SYSTEMS

Drawing a clear distinction between “embedded vision” systems and standard machine vision systems is not always easy. One way to classify them is to break the field down into three segments.

Segment one comprises the classic vision system with a camera and a separate PC. Segment two includes systems based on board-level cameras combined with application-specific hardware, such as small form-factor PCs. In segment three, you will find highly integrated systems with a strong degree of miniaturization and few or no standardized components at all (see overview chart). For example, in segments one and two you will usually find camera systems using GigE or USB with shielded cables, while in segment three you are more likely to find low-level interfaces such as LVDS with ribbon cables.

In other words, along the path from segment one to segment three the camera becomes smaller and the number of standardized components shrinks.

COST SAVINGS AS A RESULT OF EMBEDDED VISION — THE SOFTWARE ASPECT

The embedded approach does more than just save space and energy compared with the classic PC setup. It can also be implemented at a significantly lower cost. One major contributor to cost reduction in embedded systems is the software. For example, the Linux operating system and OpenCV image processing library are open-source, available for free, and commonly found in the embedded world. So when using this combination, there are no license fees to worry about.
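
As a rough illustration of that point, the sketch below implements a typical inspection step (counting bright “parts” in an image) using nothing but Python, NumPy and OpenCV, all of which are free of license fees. The synthetic test image and the blob-counting task are illustrative assumptions, not taken from any particular application; in practice the same chain would run on frames delivered by the camera.

    # A minimal, license-free vision pipeline: Python + NumPy + OpenCV.
    import cv2
    import numpy as np

    # Synthetic test image (three bright "parts" on a dark background),
    # so the example runs without camera hardware attached.
    frame = np.zeros((240, 320), dtype=np.uint8)
    for center in [(60, 80), (160, 120), (260, 180)]:
        cv2.circle(frame, center, 20, 255, thickness=-1)

    # Typical processing chain: denoise, threshold, extract contours.
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    _, binary = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    print(f"Detected {len(contours)} parts")  # prints: Detected 3 parts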

For now, many customers still hesitate to work with embedded systems because they fear significantly higher development costs compared with classic PC-based vision setups. Software development costs in particular are eyed with concern, since developing software for embedded systems has traditionally been significantly more complex and expensive than for classic PCs.

However, there is good news here: new development is supported by collections of ready-to-use program code known as Software Development Kits (SDKs). They are a major help in quickly producing reliable results for embedded vision. Many camera SDKs support both the Windows and Linux operating systems. However, only a few run both on the classic x86 processors typically found in off-the-rack PCs and on ARM-based architectures.

The family of ARM-based processors is constantly being upgraded. It is known for being affordable and available in various performance classes, including multi-core variants. ARM-based processors currently dominate the embedded field and are much more prevalent there than x86-based processors.
For SDKs that run on both ARM and x86-based architectures, it is usually possible to port the program code, for example from Windows/x86 to Linux/ARM, without a significant investment of time and effort. Reusing already developed code can bring significant cost savings.
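
To make the portability argument concrete, consider the hypothetical sketch below, which assumes OpenCV as the cross-platform layer (a vendor SDK with builds for both architectures would play the same role). Nothing in it is architecture-specific, so the identical file runs on Windows/x86 and on Linux/ARM.

    # Nothing in this script depends on the processor architecture;
    # it runs unchanged on Windows/x86 and Linux/ARM wherever OpenCV
    # is installed.
    import platform
    import cv2

    print(f"Running on {platform.system()}/{platform.machine()}")

    cap = cv2.VideoCapture(0)  # open the first attached camera
    if not cap.isOpened():
        raise SystemExit("No camera found - attach one or adjust the index")

    ok, frame = cap.read()
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        print(f"Captured frame: {gray.shape[1]} x {gray.shape[0]} px")
    cap.release()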

THE HARDWARE ASPECT OF EMBEDDED VISION: SYSTEM ON CHIP (SOC), SYSTEM ON MODULE (SOM) AND COMPUTER ON MODULE (COM)
Processing boards used in the embedded area are typically platforms featuring x86 or ARM processors. The processors used here very often integrate the graphics unit, bus systems and interfaces (USB, GigE, PCIe, etc.) together in one so-called System-on-Chip, or SOC.

The next step of hardware integration is the Computer-on-Module or System-on-Module (COM or SOM; the two terms are used synonymously). Here the SOC, RAM, power management and any other peripherals are combined on a circuit board into a module with plug connectors.

COST REDUCTIONS IN HARDWARE DEVELOPMENT THROUGH SYSTEM-ON-MODULES

Within the scope of hardware development for an embedded application, a developer needs to design only the so-called carrier or baseboard, onto which the SOM is then seated via a suitable plug connector. Taken together, the two form the embedded processing board.

The benefit of this approach is that the most complex portion of the hardware development has already been completed in the SOM. The baseboard, which fundamentally exists to connect the SOM to the external interfaces (USB, GigE, HDMI, etc.), is significantly less complex and vastly more cost-effective to develop than, for example, a full custom design in which all components must be placed on a single circuit board.

A wide variety of SOMs with various SOCs (both x86 and ARM) are available for industrial applications as well. Manufacturers generally design their SOMs to be compatible with one another, so that a lower-performance SOM can easily be replaced with a higher-performance one without adjusting the baseboard.

Several manufacturer-independent standards have also been established, such as COM Express, Qseven and SMARC. In this case, however, the compatibility of the SOMs across different manufacturers’ products typically only covers a subset of the SOM features.

SOMs make developing an embedded vision system attractive even at small unit volumes. While the SOM approach cannot match the low production costs of a full custom design, it still offers a significant cost benefit compared with the classic standard PC setup.

SUMMARY

For standard PCs and embedded systems alike, the developers of image processing solutions must ensure that suitable drivers for the camera interfaces and a programming method (SDK) are available. Several camera makers currently offer SDK packages for x86 (Linux and Windows) and ARM (Linux) that come with optimized drivers and uniform programming interfaces. This simplifies development on the software side, which in turn keeps costs lower. Development costs for embedded hardware can be reduced using module concepts (SOM), which compensates for the higher manufacturing costs. V&S