My company spends its days building and deploying machine vision applications for factory automation, OEM machinery, transportation and security, applications that require fast, accurate, real-time image analysis. Some of these applications involve simultaneous acquisition from 10 cameras, mixed line scan and area scan cameras, and combinations of other specialty cameras.
Not long ago, we discovered an application for a camera from Imaging Development Systems (IDS) called the UI-1008XS. This 8-megapixel color camera weighs only 12 grams, fits in a volume of less than a 1-inch cube (including its built-in autofocus lens), and has trigger capability and a USB 2.0 interface that both powers the camera and provides data connectivity. At less than $500, it is a capable addition to many a machine vision system. We have used these cameras for general inspection, optical character recognition (OCR) and security applications with great success.
Our application, however, also required more standard machine vision cameras with GigE interfaces, and needed the option of adding Camera Link or analog cameras to support legacy hardware pre-installed on older machines. Some of the required cameras could only be purchased from other vendors, including Basler, Dalsa, PixeLink and Point Grey.
Putting such an integrated imaging solution together was, in the past, nearly impossible due to the lack of imaging software that could support these types of devices simultaneously. To this day, none of the off-the-shelf imaging software packages is likely to handle such a mix of sensors seamlessly and at high performance. Furthermore, the more sensor types that are added to a system, the flakier the system often becomes.
Our solution was to write multithreaded software, dedicating a thread to each camera. A thread is a lightweight process that coexists with many other threads as part of the same application. The threads run independently of one another, each waiting, in the appropriate way, for an image from its camera.
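The one-thread-per-camera idea can be sketched with Python's standard threading module. Here a hypothetical camera_grab() stands in for each vendor's blocking acquire call (the camera names, the sleep that simulates waiting on hardware, and the returned frame strings are all illustrative assumptions, not real SDK calls):

```python
import threading
import time

def camera_grab(camera_id, delay, frames):
    """Stand-in for a vendor SDK's blocking acquire call.

    In a real system this would wait on the hardware; here a sleep
    simulates the wait and a string simulates the captured image.
    """
    time.sleep(delay)                      # simulated wait for the device
    frames[camera_id] = f"frame-from-{camera_id}"

# One thread per camera; each waits on its own device independently,
# so a slow camera never stalls acquisition from the others.
frames = {}
threads = [
    threading.Thread(target=camera_grab, args=(cam, 0.01 * i, frames))
    for i, cam in enumerate(["gige-0", "usb-1", "cameralink-2"])
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each thread spends its life blocked inside its own camera's wait call, which is exactly the structure the vendors' single-camera example programs already use.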
By adopting this approach, we were able to rapidly create independent camera interface programs based on the camera vendors' own code, a safe, risk-averse approach that is almost certain to speed deployment.
It turns out that modern multi-core processors and big 64-bit operating systems, including Linux and Windows 7, can manage all of the memory and library interactions required to support armies of threads for cameras with very little effort. The threads are automatically distributed across all of the processor cores to allow true simultaneous execution, making use of all of the hardware that is available for processing.
In our typical applications, each camera has its own grab thread, waiting patiently for the next image. In addition, each camera has a processing thread that analyzes the data most recently received from its camera.
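The grab-thread/processing-thread pairing can be sketched with a one-slot queue acting as a "latest frame" mailbox between the two threads: the grab thread drops any stale frame before depositing the newest one, so the processing thread always analyzes the most recently received data. This is a minimal sketch under assumed names; the frame strings, the uppercase "analysis" and the None end-of-stream sentinel are stand-ins, not the author's actual code:

```python
import queue
import threading

def grab_loop(frame_source, mailbox):
    """Grab thread: fetch frames and keep only the newest in the mailbox."""
    for frame in frame_source:
        try:
            mailbox.get_nowait()           # discard a stale, unprocessed frame
        except queue.Empty:
            pass
        mailbox.put(frame)
    mailbox.put(None)                      # sentinel: acquisition finished

def process_loop(mailbox, results):
    """Processing thread: analyze whatever frame arrived most recently."""
    while True:
        frame = mailbox.get()              # block until a frame is available
        if frame is None:
            break
        results.append(frame.upper())      # stand-in for real image analysis

mailbox = queue.Queue(maxsize=1)           # holds at most the latest frame
results = []
grabber = threading.Thread(target=grab_loop,
                           args=(iter(["f1", "f2", "f3"]), mailbox))
worker = threading.Thread(target=process_loop, args=(mailbox, results))
worker.start()
grabber.start()
grabber.join()
worker.join()
```

If the processing thread falls behind, intermediate frames are simply dropped rather than queued up, which keeps analysis current with the camera instead of drifting further behind it.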
Finally, we add multiple accelerator threads for hard operations, such as OCR, a processor-intensive activity that benefits from the subdivision of the image into separate regions to allow parallelizing of the character recognition tasks.
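The region-splitting idea behind the accelerator threads can be sketched with a thread pool: the image is divided into regions and each region's recognition runs as an independent task. The region structure and the ocr_region() stand-in are hypothetical; a real implementation would hand pixel sub-arrays to an actual OCR engine:

```python
from concurrent.futures import ThreadPoolExecutor

def ocr_region(region):
    """Stand-in for a per-region OCR call; returns the text found there."""
    return region["text"]                  # a real engine would decode pixels

def split_into_regions(image):
    """Split the image into bands, one OCR task per band.

    Here the 'image' is simulated as a list of text lines; a real
    splitter would cut the pixel array along whitespace boundaries.
    """
    return [{"text": line} for line in image]

def parallel_ocr(image, n_workers=4):
    regions = split_into_regions(image)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # map() preserves region order, so the text reassembles correctly
        return " ".join(pool.map(ocr_region, regions))

result = parallel_ocr(["LOT-42", "EXP-2025"])
```

Because each region is recognized independently, the work spreads across however many cores are free, while the ordered map keeps the recombined result deterministic.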
Tricky programming? Sure. But really, this is all just back to basics for programmers, making use of tools that have been available for decades. The camera-specific interface code, often a real bear to write and debug, can be extracted from the vendors’ own example programs, often in just a few hours.
Image processing and feature extraction are designed by the machine vision expert and can be performed with the expert's tool or library of choice. The thread boundaries even ease the chore of using different libraries with different cameras: a custom OCR library, for example, on one camera, while a general-purpose machine vision library runs on all of the others.
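The mixed-library arrangement amounts to binding each camera's thread to its own processing function. In this sketch the pipeline names and the tuple "results" are hypothetical placeholders for calls into two different real libraries:

```python
import threading

def ocr_pipeline(frame):
    """Stand-in for a call into a custom OCR library."""
    return ("ocr", frame)

def general_pipeline(frame):
    """Stand-in for a call into a general-purpose vision library."""
    return ("general", frame)

# Each camera's thread is bound to its own library at setup time.
pipelines = {
    "cam-ocr": ocr_pipeline,
    "cam-a": general_pipeline,
    "cam-b": general_pipeline,
}

results = {}

def camera_worker(name, frame):
    results[name] = pipelines[name](frame)   # library choice is per-thread

threads = [
    threading.Thread(target=camera_worker, args=(name, f"frame-{name}"))
    for name in pipelines
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because no state crosses the thread boundary except the finished result, the two libraries never have to know about each other, which is what makes mixing them safe.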