Vision & Sensors

Integration Corner: Back to the Basics

December 23, 2010
Programmers are making use of tools that have been around for decades.

The UI-1008XS from IDS is an 8-megapixel color camera that weighs only 12 grams and fits in a volume less than a 1-inch cube. Source: Imaging Development Systems


My company spends its days building and deploying machine vision applications for factory automation, OEM machinery, transportation and security, all of which require fast, accurate, real-time image analysis. Some of these applications involve simultaneous acquisition from ten cameras, mixed line scan and area scan cameras, and combinations of other specialty cameras.

Not long ago, we discovered an application for a camera from Imaging Development Systems (IDS) called the UI-1008XS. This 8-megapixel color camera weighs only 12 grams, fits in a volume less than a 1-inch cube (including its built-in auto-focus lens), and has trigger capability and a USB 2.0 interface that both powers the camera and provides data connectivity. At less than $500, it is a capable addition to many a machine vision system. We have used these cameras for general inspection, optical character recognition (OCR) and security applications with great success.

Our application, however, also required more standard machine vision cameras with GigE interfaces, and it needed the option of adding Camera Link or analog cameras to support legacy hardware pre-installed on older machines. Some of the required cameras could only be purchased from other camera vendors, including Basler, Dalsa, PixeLink and Point Grey.

Putting such an integrated imaging solution together in the past was nearly impossible due to the lack of imaging software that could support these types of devices simultaneously. To this day, none of the off-the-shelf imaging software packages is likely to handle such a mix of sensors seamlessly and at high performance. Furthermore, the more sensor types that are added to a system, the flakier the system tends to become.



Shown here is a programming screen capture in Windows 7. Source: Lecky Integration

However, by going back to the basics, we were able to create a simple solution. We took the demo program supplied by each vendor for its own camera interface and wrapped each one into its own thread.

A thread is a lightweight process that coexists with many other threads within the same application. The threads run independently of one another, each waiting, in whatever way its camera's interface requires, for the next image from that camera.

By adopting this approach, we were able to rapidly create independent camera interface programs based on the camera vendors' own code, a safe and risk-averse approach that is almost certain to speed deployment.
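
As a rough sketch of the idea (not our actual production code), the snippet below wraps a vendor-style grab loop in its own thread using standard C++ threads. The vendorOpenCamera and vendorWaitForFrame calls are hypothetical stand-ins for whatever a particular SDK's example program provides; each camera simply gets its own copy of the loop.

// Sketch only: one grab thread per camera, each wrapping vendor-style demo code.
// vendorOpenCamera and vendorWaitForFrame are hypothetical placeholders, not real SDK calls.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

std::atomic<bool> g_running{true};

int vendorOpenCamera(int id) { return id; }   // pretend camera handle
bool vendorWaitForFrame(int /*handle*/)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(33));  // simulate a ~30 fps camera
    return true;
}

void grabLoop(int cameraId)
{
    int handle = vendorOpenCamera(cameraId);
    while (g_running) {
        if (vendorWaitForFrame(handle)) {
            // In a real system, the image would be handed to a processing stage here.
            std::printf("camera %d: frame received\n", cameraId);
        }
    }
}

int main()
{
    // One independent grab thread per camera, regardless of interface type.
    std::vector<std::thread> grabbers;
    for (int id = 0; id < 3; ++id)
        grabbers.emplace_back(grabLoop, id);

    std::this_thread::sleep_for(std::chrono::seconds(1));
    g_running = false;                 // ask every grab loop to finish
    for (auto& t : grabbers) t.join();
    return 0;
}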

It turns out that modern multi-core processors and big 64-bit operating systems, including Linux and Windows 7, can manage all of the memory and library interactions required to support armies of camera threads with very little effort. The threads are automatically distributed across all of the processor cores, allowing true simultaneous execution and making use of all of the hardware available for processing.

In our typical applications, each camera has its own grab thread, waiting patiently for the next image. In addition, each camera has a processing thread that analyzes the most recent image received from that camera.
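
A minimal sketch of that grab/process pairing appears below, under the assumption that the processing thread only ever needs the most recent frame. The Frame and CameraChannel types, and the commented-out analyze() call, are illustrative placeholders rather than pieces of any particular SDK or vision library.

// Sketch only: a grab thread publishes the newest frame, a processing thread consumes it.
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

struct Frame { std::vector<unsigned char> pixels; long sequence = 0; };

struct CameraChannel {
    std::mutex              m;
    std::condition_variable cv;
    Frame                   latest;
    bool                    fresh = false;
    std::atomic<bool>       running{true};
};

// Grab thread: blocks on the camera (simulated here), then publishes the frame.
void grabThread(CameraChannel& ch)
{
    long seq = 0;
    while (ch.running) {
        std::this_thread::sleep_for(std::chrono::milliseconds(33));  // ~30 fps camera
        Frame f;                        // in real code: filled in by the vendor SDK
        f.sequence = ++seq;
        {
            std::lock_guard<std::mutex> lock(ch.m);
            ch.latest = std::move(f);   // an unprocessed older frame is simply replaced
            ch.fresh  = true;
        }
        ch.cv.notify_one();
    }
}

// Processing thread: wakes whenever a new frame arrives and analyzes it.
void processThread(CameraChannel& ch)
{
    while (ch.running) {
        std::unique_lock<std::mutex> lock(ch.m);
        ch.cv.wait(lock, [&] { return ch.fresh || !ch.running; });
        if (!ch.running) break;
        Frame work = std::move(ch.latest);
        ch.fresh = false;
        lock.unlock();
        // analyze(work);   // vision library of choice goes here
        (void)work;
    }
}

int main()
{
    CameraChannel ch;
    std::thread grab(grabThread, std::ref(ch));
    std::thread proc(processThread, std::ref(ch));

    std::this_thread::sleep_for(std::chrono::seconds(1));
    ch.running = false;
    ch.cv.notify_all();   // wake the processing thread so it can exit
    grab.join();
    proc.join();
    return 0;
}

Because the grab thread simply overwrites the shared slot, a slow processing step drops frames gracefully instead of backing up the acquisition.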

Finally, we add multiple accelerator threads for hard operations such as OCR, a processor-intensive activity that benefits from subdividing the image into separate regions so the character recognition work can be parallelized.
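
One plausible way to express that subdivision, again only as an illustration, is to cut the image into horizontal strips and recognize each strip on its own worker thread. The recognizeCharacters() function here is a hypothetical stand-in for the real OCR call, and in practice the strip boundaries would have to be chosen so they do not cut through characters.

// Illustration only: parallelizing OCR by splitting an image into horizontal strips.
// recognizeCharacters is a hypothetical placeholder for the real, expensive OCR call.
#include <cstddef>
#include <future>
#include <string>
#include <vector>

struct Image { int width = 0, height = 0; std::vector<unsigned char> pixels; };

std::string recognizeCharacters(const Image& img, int rowBegin, int rowEnd)
{
    (void)img; (void)rowBegin; (void)rowEnd;
    return std::string();   // the real OCR result for this strip would be returned here
}

std::string parallelOcr(const Image& img, int stripCount)
{
    std::vector<std::future<std::string>> parts;
    const int rowsPerStrip = img.height / stripCount;

    for (int s = 0; s < stripCount; ++s) {
        const int begin = s * rowsPerStrip;
        const int end   = (s + 1 == stripCount) ? img.height : begin + rowsPerStrip;
        // std::launch::async gives each strip its own worker thread.
        parts.push_back(std::async(std::launch::async,
                                   recognizeCharacters, std::cref(img), begin, end));
    }

    std::string text;
    for (auto& part : parts)
        text += part.get();   // collect the strips back in top-to-bottom order
    return text;
}

int main()
{
    Image img;
    img.width  = 640;
    img.height = 480;
    img.pixels.resize(static_cast<std::size_t>(img.width) * img.height);

    std::string text = parallelOcr(img, 4);   // four strips, four worker threads
    (void)text;
    return 0;
}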

Tricky programming? Sure. But really, this is all just back to basics for programmers, making use of tools that have been available for decades. The camera-specific interface code, often a real bear to write and debug, can usually be extracted from the vendors' own example programs in just a few hours.

Image processing and feature extraction are designed by the machine vision expert and can be performed with the expert's tool or library of choice. The thread boundaries even ease the chore of using different libraries with different cameras: a custom OCR library, for example, can serve one camera while a general-purpose machine vision library serves all of the others.


