Learn how the latest machine vision technology helps operators.



Imaging and machine vision are multipurpose technologies widely used in industry, science, medicine and surveillance. New ideas for vision applications keep emerging to improve production processes and address quality challenges.

For imaging software, this means developing faster algorithms and making better use of current hardware, such as multi-core computers. In addition, new matching technologies are arriving that robustly and reliably find objects or workpieces even in images with strong perspective distortion.

Moreover, 3-D vision methods and the processing of extremely large images are becoming more important. Further methods support the large application area of identification, such as bar and data code reading, which is steadily increasing in speed and tolerance.



Multi-grid stereo eliminates the disadvantages of conventional (correlation-based) stereo methods. After processing with multi-grid stereo, areas without texture information appear as proper edges and structures. Source: MVTec

Image Preprocessing

Offloading image preprocessing to the computer’s graphics processing unit (GPU) is much discussed, but for practical use it does not make sense. The GPU itself can execute the processing steps at high speed, yet no currently available hardware transfers the image data to and from the card fast enough. In practice, every application is slowed down by the transfer times, so image preprocessing on the GPU should not be the first option.
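The break-even point can be estimated with a simple calculation. The following sketch is generic C++ with placeholder figures (the bandwidth and timing values are assumptions and must be measured on the actual system); it only illustrates that the PCIe round trip has to be paid on top of the GPU’s own processing time.

```cpp
#include <cstdio>

// Rough break-even estimate for GPU offload of one preprocessing step.
// All figures are placeholders; real values must be measured on the target system.
int main() {
    const double image_bytes  = 5e6;    // e.g. a 5-megapixel 8-bit image
    const double pcie_bytes_s = 3e9;    // assumed effective PCIe throughput
    const double cpu_time_s   = 0.004;  // measured CPU time for the filter
    const double gpu_time_s   = 0.001;  // measured GPU kernel time

    // The image must travel to the card and the result must come back.
    const double transfer_s = 2.0 * image_bytes / pcie_bytes_s;
    const double offload_s  = gpu_time_s + transfer_s;

    std::printf("CPU: %.2f ms, GPU incl. transfer: %.2f ms -> offload %s\n",
                cpu_time_s * 1e3, offload_s * 1e3,
                offload_s < cpu_time_s ? "pays off" : "does not pay off");
    return 0;
}
```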

On a frame grabber, by contrast, free computing capacity can be used without additional data transfer. Software should therefore enable real-time image preprocessing on frame grabbers and primarily focus on increasing the speed of the algorithms themselves.

Currently, image preprocessing should only be outsourced to a frame grabber, not to a graphics card.



Descriptor-based matching detects perspectively distorted objects. To do so, interest points are detected where gray values differ clearly from neighboring areas, for example in brightness, curvature, corners and spots. Source: MVTec

Automatic Parallel Processing

When the first machines with multiple processor cores arrived, developers responded by thinking about automatic operator parallelization to make full use of them for image processing.

Today, multi-core processors are standard PC hardware, and modern machine vision software uses this multi-core technology optimally. Without any action from the programmer, the software automatically determines the number of available cores, splits the image into sub-images and delivers these to a corresponding number of threads. After processing on the individual cores, the computed data are automatically merged into the final result. As the number of processor cores grows, the speed increases accordingly.
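A minimal sketch of this tiling strategy, written in generic C++ with std::thread rather than any vendor’s implementation, splits the image into one horizontal strip per available core, processes each strip in its own thread and leaves the merged result in a shared output buffer (a simple gray-value inversion stands in for a real operator).

```cpp
#include <algorithm>
#include <functional>
#include <thread>
#include <vector>

// Simple gray image: row-major pixels, width w, height h.
struct Image {
    int w = 0, h = 0;
    std::vector<unsigned char> px;
};

// Example per-pixel operation: invert the gray values of rows [y0, y1).
static void invert_rows(const Image& in, Image& out, int y0, int y1) {
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < in.w; ++x)
            out.px[y * in.w + x] = 255 - in.px[y * in.w + x];
}

// Split the image into one horizontal strip per available core and process
// the strips in parallel; the strips are disjoint, so no locking is needed.
Image process_parallel(const Image& in) {
    Image out{in.w, in.h, std::vector<unsigned char>(in.px.size())};
    const int cores = static_cast<int>(std::max(1u, std::thread::hardware_concurrency()));
    const int strip = (in.h + cores - 1) / cores;

    std::vector<std::thread> workers;
    for (int c = 0; c < cores; ++c) {
        const int y0 = c * strip;
        const int y1 = std::min(in.h, y0 + strip);
        if (y0 >= y1) break;
        workers.emplace_back(invert_rows, std::cref(in), std::ref(out), y0, y1);
    }
    for (auto& t : workers) t.join();   // merge step: all strips are now in 'out'
    return out;
}
```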

To meet these requirements, it is necessary to parallelize not only filters, but also the operators and methods that matter for a large number of industrial applications, for example matching, 3-D matching, subpixel extraction and the Fast Fourier Transform (FFT).

A further advantage is the ability to pre-select a free-form region of interest of any orientation in the image. If the software can restrict processing to this pre-selected area, processing time drops dramatically. The speed benefit increases further if the software can also process arrays of images in parallel, as well as arrays of regions resulting from segmentation (such as OCR or blob analysis) and arrays of subpixel-accurate outlines.
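To illustrate why a region of interest saves time, the following generic C++ sketch (the bounding-box-plus-mask representation is an assumption, not a specific product’s region type) computes a mean gray value while visiting only pixels inside the region.

```cpp
#include <vector>

// Arbitrary-shape region of interest: a bounding box plus a per-pixel mask.
struct Roi {
    int x0, y0, x1, y1;              // bounding box, [x0,x1) x [y0,y1)
    std::vector<unsigned char> mask; // 1 = inside the region, row-major within the box
};

// Compute the mean gray value of 'img' (width w) inside the region only.
// Pixels outside the bounding box are never visited, which is where the
// speed advantage of ROI processing comes from.
double mean_in_roi(const std::vector<unsigned char>& img, int w, const Roi& r) {
    long long sum = 0, count = 0;
    for (int y = r.y0; y < r.y1; ++y)
        for (int x = r.x0; x < r.x1; ++x)
            if (r.mask[(y - r.y0) * (r.x1 - r.x0) + (x - r.x0)]) {
                sum += img[y * w + x];
                ++count;
            }
    return count ? static_cast<double>(sum) / count : 0.0;
}
```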

However, parallelization only pays off if enough memory throughput is available, so performance depends on the hardware. Here, too, the software automatically identifies the hardware environment and, based on this, decides which algorithms to parallelize and which not, avoiding unnecessary overhead.

Multichannel images, such as color images, are becoming increasingly important. The software can also parallelize the processing of such multichannel images with an unlimited number of channels.

Modern machine vision and imaging software should parallelize industry-relevant filters, operators and methods completely automatically, and optimize these operations.



Perspective deformable matching recognizes perspectively distorted objects with distinct edge- and area-accented elements, such as a car door. Source: MVTec

Comprehensive 3-D Vision

Machine vision is becoming more and more important for robotics, particularly comprehensive 3-D vision technologies. Three-dimensional vision is an umbrella term for a collection of technologies that determine the 3-D position and orientation of an arbitrary object or reconstruct its 3-D shape.

These technologies include 3-D camera calibration, 3-D matching, perspective matching, circle and rectangle pose recognition, binocular stereo reconstruction (including multi-grid stereo), depth from focus, photometric stereo and sheet-of-light measurement.

3-D Camera Calibration

With 3-D camera calibration, the relationship between camera, object and, if required, a robot is established. Camera calibration should work for area scan as well as line scan cameras. Internal and external camera parameters map image coordinates to world coordinates, which makes robot control easier. For highly accurate and flexible measurements, calibration is essential.
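The mapping that calibration establishes can be written down compactly. The sketch below, in generic C++, projects a world point into the image using the external parameters (rotation R, translation t) and the internal parameters (focal lengths, principal point); it assumes an ideal pinhole model and omits the lens distortion that real calibration also determines.

```cpp
#include <array>

struct Point3 { double x, y, z; };
struct Point2 { double x, y; };

// Internal camera parameters: focal lengths and principal point in pixels.
struct Intrinsics { double fx, fy, cx, cy; };

// External camera parameters: rotation (row-major 3x3) and translation,
// mapping world coordinates into the camera coordinate system.
struct Extrinsics { std::array<double, 9> R; Point3 t; };

// Project a 3-D world point into image coordinates (ideal pinhole model,
// lens distortion omitted for brevity).
Point2 project(const Point3& pw, const Extrinsics& e, const Intrinsics& k) {
    // World -> camera coordinates: pc = R * pw + t
    const double xc = e.R[0] * pw.x + e.R[1] * pw.y + e.R[2] * pw.z + e.t.x;
    const double yc = e.R[3] * pw.x + e.R[4] * pw.y + e.R[5] * pw.z + e.t.y;
    const double zc = e.R[6] * pw.x + e.R[7] * pw.y + e.R[8] * pw.z + e.t.z;
    // Perspective division and conversion to pixel coordinates.
    return { k.fx * xc / zc + k.cx, k.fy * yc / zc + k.cy };
}
```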

3-D Matching

With 3-D matching, it is possible to recognize arbitrary 3-D objects and determine their 3-D pose with only one camera. The object to be found is represented by its computer-aided design (CAD) model. Multiple 2-D views of this model are matched against the camera image, and the object’s position and orientation are detected using shape-based matching technology extended to 3-D.

Perspective Matching

For industry, it was only a matter of time until perspective matching arrived, that is, the matching of perspectively distorted objects with only one camera. Perspective matching can be based on the detection of interest points where gray values differ clearly from neighboring areas, for example in brightness, curvature, corners and spots. Planar textured objects, such as prints, can be located quickly in any pose and tilt. Another method of perspective matching recognizes perspectively distorted objects using shape-based technology. Workpieces and objects with distinct edge- and area-accented elements, such as a car door, can be identified by this method with high accuracy, reliability and robustness.
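As an illustration of how such interest points can be found, the following generic C++ sketch computes a Harris-style corner response from the local gradient structure tensor; positions where gray values change strongly in more than one direction score highly. This is a textbook detector, not the vendor’s descriptor-based method.

```cpp
#include <cstddef>
#include <vector>

// Gray image stored row-major: width w, height h.
struct Image {
    int w, h;
    std::vector<float> px;
    float at(int x, int y) const { return px[y * w + x]; }
};

// Harris-style corner response: R = det(M) - k * trace(M)^2, where M is the
// gradient structure tensor accumulated over a 3x3 window. Large R marks
// interest points whose gray values change strongly in more than one direction.
std::vector<float> corner_response(const Image& img, float k = 0.04f) {
    std::vector<float> R(static_cast<std::size_t>(img.w) * img.h, 0.0f);
    for (int y = 2; y < img.h - 2; ++y) {
        for (int x = 2; x < img.w - 2; ++x) {
            float sxx = 0.0f, syy = 0.0f, sxy = 0.0f;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    // Central-difference gradients inside the window.
                    const float gx = 0.5f * (img.at(x + dx + 1, y + dy) - img.at(x + dx - 1, y + dy));
                    const float gy = 0.5f * (img.at(x + dx, y + dy + 1) - img.at(x + dx, y + dy - 1));
                    sxx += gx * gx;
                    syy += gy * gy;
                    sxy += gx * gy;
                }
            }
            R[static_cast<std::size_t>(y) * img.w + x] =
                sxx * syy - sxy * sxy - k * (sxx + syy) * (sxx + syy);
        }
    }
    return R;
}
```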

Circle and Rectangle Pose Recognition

An object in a perspectively distorted image can be located faster and more easily with only one camera if it contains distinctive circles or rectangles. The known size of the circle or rectangle is used to calculate the object’s distance and tilt angle with respect to the calibrated camera.
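The distance part of this calculation reduces to similar triangles: with a calibrated camera, a circle of known diameter that appears with a certain pixel diameter fixes the distance. A minimal sketch (ideal pinhole model, circle viewed roughly head-on; the tilt angle would additionally use the ellipse shape into which the circle projects):

```cpp
// Estimate the distance of a circle of known real-world diameter from its
// apparent diameter in the image (ideal pinhole model, circle facing the camera).
//   focal_px:    focal length in pixels (from camera calibration)
//   diameter_mm: true diameter of the circle
//   diameter_px: measured diameter in the image
double circle_distance_mm(double focal_px, double diameter_mm, double diameter_px) {
    return focal_px * diameter_mm / diameter_px;  // similar triangles: Z = f * D / d
}
```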

Binocular Stereo, Multi-grid Stereo

With binocular stereo, the 3-D coordinates of visible points on an object’s surface can be determined. This is done by calculating the disparity between the two points of view of a two-camera setup. A frequent problem in stereo processing is that entire areas lack any texture. To close this information gap, multi-grid stereo was developed, eliminating the disadvantages of the conventional stereo method. After processing with multi-grid stereo, the areas without information appear as proper edges and structures. Thus, multi-grid stereo can bridge texture gaps in stereo images and deliver highly accurate results.
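The core relationship of binocular stereo, assuming a rectified two-camera setup with known baseline and focal length, is that depth is inversely proportional to disparity. A minimal sketch:

```cpp
#include <cmath>

// Depth of a surface point from its disparity in a rectified stereo pair.
//   focal_px:     focal length in pixels (both cameras, after rectification)
//   baseline_mm:  distance between the two camera centers
//   disparity_px: horizontal shift of the point between left and right image
// Textureless areas yield no reliable disparity, which is the gap that
// multi-grid stereo interpolates.
double depth_from_disparity(double focal_px, double baseline_mm, double disparity_px) {
    if (std::fabs(disparity_px) < 1e-9) return 0.0;  // no valid measurement
    return focal_px * baseline_mm / disparity_px;    // Z = f * b / d
}
```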

Depth from Focus, Photometric Stereo

The depth from focus method is particularly suitable for very small objects. By adjusting the camera height and calculating a focus measure for every pixel of the resulting image stack, distance information is extracted and the 3-D shape of the object is determined. Photometric stereo acquires multiple images with illumination from different orientations; in this case, the depth information is reconstructed by exploiting the reflectance properties of the object.
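A minimal depth-from-focus sketch in generic C++: for each pixel, a local contrast value serves as the focus measure across a stack of images taken at different camera heights, and the slice index with the sharpest response is kept as the depth estimate. The Laplacian-based measure used here is one common choice, not necessarily the one a given product uses.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One image of the focus stack: row-major gray values, width w, height h.
struct Slice { int w, h; std::vector<float> px; };

// Focus measure at (x, y): absolute Laplacian response, large where the
// local neighborhood is sharply in focus.
static float focus_measure(const Slice& s, int x, int y) {
    const float c = s.px[y * s.w + x];
    return std::fabs(4.0f * c
                     - s.px[y * s.w + (x - 1)] - s.px[y * s.w + (x + 1)]
                     - s.px[(y - 1) * s.w + x] - s.px[(y + 1) * s.w + x]);
}

// For every interior pixel, return the index of the stack slice (i.e. the
// camera height) at which that pixel is sharpest.
std::vector<int> depth_from_focus(const std::vector<Slice>& stack) {
    const int w = stack.front().w, h = stack.front().h;
    std::vector<int> depth(static_cast<std::size_t>(w) * h, 0);
    std::vector<float> best(static_cast<std::size_t>(w) * h, -1.0f);
    for (std::size_t i = 0; i < stack.size(); ++i)
        for (int y = 1; y < h - 1; ++y)
            for (int x = 1; x < w - 1; ++x) {
                const float f = focus_measure(stack[i], x, y);
                if (f > best[y * w + x]) {
                    best[y * w + x] = f;
                    depth[y * w + x] = static_cast<int>(i);
                }
            }
    return depth;
}
```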

Sheet-of-Light Measurement

For objects without any texture, the sheet-of-light method is suitable. It measures an elevation profile of the object by reconstructing the laser line projected onto it, thus generating a 2.5-D model.
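A minimal sheet-of-light sketch in generic C++, under a simplified calibration in which the vertical displacement of the laser line in the image is proportional to the surface height (the scale factor and reference row are assumed to come from calibration): for each image column, the brightest row is taken as the laser line and converted into one height value of the profile.

```cpp
#include <vector>

// One camera frame of the laser line: row-major gray values, width w, height h.
struct Frame { int w, h; std::vector<unsigned char> px; };

// Extract one elevation profile from a single frame.
//   base_row:     row where the laser line falls on the flat reference surface
//   mm_per_pixel: height change corresponding to a one-row shift (from calibration)
// Returns one height value per image column; repeating this while the object
// moves under the laser yields the 2.5-D model.
std::vector<double> profile_from_frame(const Frame& f, int base_row, double mm_per_pixel) {
    std::vector<double> heights(f.w, 0.0);
    for (int x = 0; x < f.w; ++x) {
        int peak_row = 0;
        unsigned char peak_val = 0;
        for (int y = 0; y < f.h; ++y)                 // brightest pixel = laser line
            if (f.px[y * f.w + x] > peak_val) { peak_val = f.px[y * f.w + x]; peak_row = y; }
        heights[x] = (base_row - peak_row) * mm_per_pixel;
    }
    return heights;
}
```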

Modern machine vision and imaging software must provide comprehensive solutions for all demanding 3-D vision challenges.



Bar and Data Codes under Restrictive Conditions

While bar code reading has been common for a long time, data code reading is currently increasing worldwide. Ideally, a data code consists of a dot-print area that makes up the actual code and a frame for orientation and pose identification of the code, the finder pattern. In practice, important parts can be damaged by transport and other mechanical influences, left unprinted, overprinted or defocused. For normal data code readers, such a defective data code often is not readable. Modern imaging software gives data code readers the ability to read such damaged codes, even if the whole finder pattern is missing. Modern software reads ECC200, QR and PDF417 codes of any size, with elements even smaller than 2 by 2 pixels.

All common bar codes must be readable in any orientation. Moreover, the latest software is able to read codes with a bar distance of only 1.5 pixels.
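As a small illustration of the first step of bar code reading, the following generic C++ sketch thresholds a single scanline and collects the run lengths of bars and spaces; a real reader additionally locates the edges with subpixel accuracy (which is what makes bar distances of only 1.5 pixels readable) and then decodes the width pattern according to the symbology. This is not a decoder, only the measurement step.

```cpp
#include <cstddef>
#include <vector>

// Measure the run lengths of bars and spaces along one scanline.
//   scanline:  gray values of one image row crossing the code
//   threshold: gray value separating dark bars from light background
// Returns the widths in pixels, alternating bar/space, which a decoder then
// interprets according to the symbology.
std::vector<int> run_lengths(const std::vector<unsigned char>& scanline, unsigned char threshold) {
    std::vector<int> runs;
    if (scanline.empty()) return runs;
    bool dark = scanline[0] < threshold;
    int length = 1;
    for (std::size_t i = 1; i < scanline.size(); ++i) {
        const bool d = scanline[i] < threshold;
        if (d == dark) {
            ++length;
        } else {
            runs.push_back(length);  // one bar or space finished
            dark = d;
            length = 1;
        }
    }
    runs.push_back(length);
    return runs;
}
```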

Modern machine vision software must not only be able to read all types of code, but also to read damaged and distorted bar and data codes under restrictive conditions.



A modern embedded machine vision software library allows the software part to be developed on a PC and the application then uploaded to the embedded system, providing the full and comprehensive machine vision power. Source: MVTec

Processing Large Images

Industry has demanded it for a long time, and modern imaging software now processes large images of more than 32k by 32k pixels; the image size is not limited. Above all, this is interesting for high-resolution line scan camera applications, as deployed in the print and electronics industries for print and component inspection. If this capability is combined with fast parallel processing, the desired real-time performance is reached despite the high data volume and without outsourcing image preprocessing. Programming thus becomes significantly easier and faster, and the application runs reliably.
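One way to keep such very large line scan images manageable is to process them in horizontal bands of bounded size, so that memory use stays constant and each band can in turn be parallelized as described above. The sketch below is generic C++; the band size and the processing callback are illustrative assumptions, not a specific product interface.

```cpp
#include <functional>

// Process a very large row-major image in horizontal bands of bounded size,
// so memory use stays constant regardless of the total image height.
void process_in_bands(const unsigned char* image, long long width, long long height,
                      long long band_rows,
                      const std::function<void(const unsigned char*, long long, long long)>& op) {
    for (long long y = 0; y < height; y += band_rows) {
        const long long rows = (y + band_rows <= height) ? band_rows : (height - y);
        // Each band is a contiguous block of 'rows' lines; 'op' could itself
        // split the band across threads as in the parallelization sketch above.
        op(image + y * width, width, rows);
    }
}
```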

Modern machine vision software handles large images with minimal effort, so complex line scan camera applications run reliably and accurately.



IDE with High Usability

Modern machine vision software packages must not only be comprehensive in technology, but also in usability. Moreover, the software must provide an integrated development environment (IDE) to significantly speed up programming.

Beyond rapid application development, a machine vision IDE should directly export the machine vision application code to C++, C, C# or Visual Basic, so that the machine vision application can be included directly in a control program. Furthermore, the IDE should allow external procedures created by the programmer to be run and integrated as separate files, or applications programmed by different developers to be integrated. Because external procedures embody the developer’s own expertise, they should be protected by passwords.

An IDE should run on Windows as well as Linux and Unix, and provide capabilities such as a development interface, text editor, compiler or interpreter, linker, debugger and source formatter. An IDE is an important programming aid that supports the operator in quickly implementing an application and improving time to market.



When executing an operation on a quad-core computer, the software automatically splits the image into four parts, which are then processed in parallel by four threads executing the operation. Source: MVTec

Embedded Software

During the past few years, vision sensors and smart cameras have conquered the market. For less complex applications, these devices are a good alternative to a PC-and-camera setup. If the imaging software’s architecture is flexible enough, the software can run on such special platforms and be ported to various microprocessors or digital signal processors, operating systems and compilers.

An embedded software library provides the full and comprehensive machine vision power on embedded systems. It allows the software part of a machine vision application to be developed on a standard platform and thereby greatly eases the programming of an embedded system. Modern machine vision software is portable, allowing development on a PC and execution of the application on an embedded system.

Advances in machine vision, from software to sensors, will no doubt continue to propel the market forward. Take note of these changes to find the best fit for today’s applications. V&S

Dr. Lutz Kreutzer is manager, PR and marketing, at MVTec Software GmbH (Munich, Germany) and a member of the Vision & Sensors advisory board. For more information, call +49 89 457695-0, e-mail [email protected] or visit www.mvtec.com.

VISION & SENSORS ONLINE

For more information on vision software, visit www.visionsensorsmag.com to read the following:

• Case study: “High-Speed Bottle Inspection”

• Machine Vision 101: “Simple Software”



Tech Tips

- Modern machine vision software packages must not only be comprehensive in technology but also in usability.

- The software must provide an integrated development environment to significantly speed programming.

- If the imaging software’s architecture is flexible enough, the software should be able to run on special platforms.