Quality Blog

Learning with Lecky: Back to Vision Basics

November 24, 2010
By going back to the basics, we were able to rapidly create independent camera interface programs based on the camera vendors' own code, a safe and risk-averse approach that is almost certain to speed deployment.

We just ran into an application for a fantastic little camera from IDS called the UI-1008XS. This 8-megapixel color camera weighs only 12 grams, fits in a volume less than a 1-inch cube including built-in auto-focus lens, and has trigger capability and a USB 2.0 interface that powers the camera as well as providing data connectivity.

Our application, however, also required more standard machine vision cameras with GigE interfaces, and needed the option of adding Camera Link or analog cameras to support some legacy hardware pre-installed on older machines. In addition, some of the cameras required could only be purchased from other camera vendors including Basler, Dalsa, PixeLink, and Point Grey.

By going back to the basics, we were able to create a simple solution. Effectively, we took the demo programs supplied by each vendor for their own camera interface and wrapped each one into its own thread. A thread is a lightweight process that happily coexists with many other threads that are all part of the same application. The threads run independently of one another, each waiting, in the appropriate way, for an image from its camera.
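The wrapping described above can be sketched in a few lines of Python. Here `FakeCamera` and its `grab()` method are hypothetical stand-ins for whatever object each vendor's demo code supplies; the key point is simply that each camera gets its own thread running its own grab loop:

```python
import threading
import queue

class FakeCamera:
    """Hypothetical stand-in for one vendor SDK's camera object."""
    def __init__(self, name):
        self.name = name
        self.frame = 0

    def grab(self):
        # A real SDK call would block here until the next image arrives.
        self.frame += 1
        return f"{self.name}-frame-{self.frame}"

def grab_loop(camera, out_queue, n_frames):
    """One thread per camera: wait for images and hand them off."""
    for _ in range(n_frames):
        out_queue.put(camera.grab())

images = queue.Queue()          # thread-safe hand-off to the rest of the app
cameras = [FakeCamera("GigE-0"), FakeCamera("USB-1")]
threads = [threading.Thread(target=grab_loop, args=(cam, images, 3))
           for cam in cameras]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(images.qsize())  # 6: three frames from each of the two cameras
```

Because each vendor's grab loop lives in its own thread, the waiting style each SDK requires (polling, callbacks, blocking reads) never has to be reconciled with the others.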

By adopting this approach, we were able to rapidly create independent camera interface programs based on the camera vendors' own code, a safe and risk-averse approach that is almost certain to speed deployment. Modern multi-core processors and 64-bit operating systems, including Linux and Windows 7, can manage all of the memory and library interactions required to support armies of camera threads with very little effort. Furthermore, the threads are automatically distributed across all of the processor cores to allow for true simultaneous execution, making use of all of the hardware available for processing.

In our typical applications, each camera has its own grab thread, waiting patiently for the next image. In addition, each camera has a processing thread that spends its time analyzing the data most recently received from its associated camera. Finally, we add multiple accelerator threads for "hard" operations like optical character recognition (OCR), a processor-intensive activity that benefits from subdividing the image into separate regions so the character recognition tasks can run in parallel.
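The accelerator idea can be sketched with a thread pool that farms out image regions. `recognize_region` below is a hypothetical stand-in for a real OCR call; a real OCR library would typically release Python's GIL during its C-level work, which is what lets threads like these genuinely overlap on multiple cores:

```python
from concurrent.futures import ThreadPoolExecutor

def recognize_region(region):
    # Stand-in for a processor-intensive OCR pass over one image region.
    return region.upper()

def parallel_ocr(regions, n_workers=4):
    """Subdivide the work: one OCR task per region, run on a thread pool."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # pool.map preserves region order, so results reassemble cleanly.
        return list(pool.map(recognize_region, regions))

result = parallel_ocr(["abc", "def", "ghi"])
print(result)  # ['ABC', 'DEF', 'GHI']
```

Because `map` returns results in submission order, the recognized text from each region can be stitched back together without any extra bookkeeping.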

Using a similar architecture in your own image processing applications will speed you on your way to writing efficient, sophisticated and stable image analysis applications.
