In economics there is a concept called utility. According to Investopedia, “utility is the aggregate sum of satisfaction or benefit that an individual gains from consuming a given amount of goods or services.” Creating products that satisfy and tangibly benefit a customer increases utility, and greater utility in turn drives product sales.
In the competitive world of machine vision, camera manufacturers are constantly looking for ways to increase the utility of their products and provide better value for their customers. Typically this is done in one of two ways:
a) Adding features that improve some aspect of image quality, and with it the quality of the information extracted from the data, and
b) Creating features that concentrate on key data, reducing the cost of transmitting or processing that data to extract the information of interest.
Depending on the application, improvements to image quality can have a number of benefits. For example, in quality control applications, enhancing contrast may make defect detection easier or more consistent, reducing false rejection rates and lowering scrap costs. Alternatively, reducing the quantity of image data processed can mean less cabling, less expensive computer hardware, and faster processing, which together lower system cost and cost of ownership.
There are a number of new features in machine vision today that target improvements in image quality. Understanding these features helps a system designer or OEM system integrator extract the most value from imaging hardware. The most significant trend in machine vision today is probably the transition from CCD to CMOS image sensors, which has a number of intrinsic benefits. Although CMOS sensors have traditionally suffered from higher noise and various artifacts, global investment in CMOS sensor development over the past decade has far outpaced investment in CCD technology, leading to rapid performance improvements that now exceed the capability of CCD sensors for many applications. Furthermore, the inclusion of additional circuitry on CMOS devices allows much greater on-chip integration, reducing size, cost, power, and noise while improving temperature stability.
A complementary trend is the reduction in the cost of processing image data within the camera: the price of FPGAs, microprocessors, and memory continues to drop, while speed and capability continue to increase. These changes allow cameras to include more image processing features, providing significant video performance improvements over vision products of the past.
Multi-exposure or High Dynamic Range
One emerging feature for machine vision is the ability to take multiple synchronized exposures and either extract details from the image of choice or fuse the images into a single combined, high dynamic range image that provides higher contrast than is possible with a single exposure. This feature is very useful for high-contrast scenes with important detail in both the dark and bright areas of the image. This is particularly true when scanning objects with highly reflective features: it can be very challenging to set the illumination intensity to capture detail in dark areas without over-saturating reflective parts of the image. Multi-exposure allows exposure time and gain to be optimized for each exposure in order to capture detail at different intensities. In some cases it is desirable to keep the images independent, allowing complete flexibility in how key detail is extracted from each; in other cases it is better to fuse the images with a generic algorithm, reducing the overall data output from the camera.
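As an illustration, the fusion path can be sketched in a few lines. This is a minimal example, not any vendor's actual algorithm: it assumes two synchronized exposures normalized to [0, 1], a known exposure-time ratio, and a simple saturation-weighted blend.

```python
import numpy as np

def fuse_exposures(short_img, long_img, exposure_ratio, sat_level=0.95):
    """Fuse two synchronized exposures into one high-dynamic-range frame.

    short_img, long_img: float arrays normalized to [0, 1].
    exposure_ratio: long exposure time divided by short exposure time.
    """
    # Rescale the short exposure so both frames estimate the same radiance.
    short_radiance = short_img * exposure_ratio
    # Blend weight: trust the long (low-noise) exposure until it nears saturation.
    w = np.clip((sat_level - long_img) / sat_level, 0.0, 1.0)
    return w * long_img + (1.0 - w) * short_radiance
```

Saturated pixels in the long exposure get zero weight and fall back entirely to the rescaled short exposure, recovering highlight detail that a single exposure would clip.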
Another feature using synchronized exposures is known as cycling mode, where a unique set of exposures is established and then rapidly cycled to create a more comprehensive image. The most common application of this feature is synchronously strobing LEDs of different wavelengths to create a multispectral image. Cycling can also incorporate changes in the angle of illumination, different types of light sources, polarization, or any combination of exposure types that meets the demands of the application. An example is print inspection for high-value consumer packaging, where it is sometimes desirable to inspect not only for color registration and contrast but also to use angled illumination (a sort of dark field) to highlight scratches or wrinkles in the carton material. Although exposure cycling is more frequently used with area scan cameras, it is also becoming more popular for line scan applications; however, for best image quality it requires the ability to rapidly switch flat field calibration coefficient sets within a single line time. Multiple calibration sets allow image quality to be optimized for each illumination source, correcting for illumination non-uniformity and differences in intensity.
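The calibration-switching logic can be sketched as follows, assuming a hypothetical three-source cycle with simple offset/gain flat-field coefficients. The source names and numbers are illustrative, not taken from any real camera.

```python
import numpy as np

def flat_field(raw, offset, gain):
    """Apply flat-field correction: subtract dark offset, apply per-pixel gain."""
    return (raw - offset) * gain

# Hypothetical three-source cycle; each entry pairs an illumination source
# with its own calibration coefficients, switched every line time.
cycle = [
    {"source": "red_led",    "offset": np.full(4, 2.0), "gain": np.full(4, 1.10)},
    {"source": "blue_led",   "offset": np.full(4, 3.0), "gain": np.full(4, 1.05)},
    {"source": "dark_field", "offset": np.full(4, 1.0), "gain": np.full(4, 1.20)},
]

def acquire_cycled(lines):
    """Correct each incoming line with the calibration set for its cycle slot."""
    corrected = []
    for i, raw in enumerate(lines):
        cal = cycle[i % len(cycle)]
        corrected.append((cal["source"], flat_field(raw, cal["offset"], cal["gain"])))
    return corrected
```

Each captured line is tagged with its illumination source, so downstream software can reassemble the per-source images from the interleaved stream.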
Color imaging can also benefit from the increased processing capability within the camera. Color accuracy can be enhanced by calibrating to GretagMacbeth ColorChecker reference charts, and corrections for color aliasing due to encoder speed mismatch are easily handled with subpixel spatial corrections. Even more complex issues, such as parallax distortion due to camera angle, can be corrected within the camera. In a multi-line camera, parallax occurs when the camera images at an acute angle to the object, and is a function of the physical separation between the outermost color lines. In the collected image the uppermost line is focused on the part of the object farthest from the camera, so that line appears slightly shorter, and the nearest line longer (think of vanishing-point exercises in art class, or railroad tracks disappearing in the distance). The parallax correction in the camera realigns the three lines by scaling them slightly so they completely overlap, eliminating color fringing.
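The scaling step of such a parallax correction can be illustrated with simple 1-D resampling. This is a sketch under the assumption of linear interpolation about the line center; a real camera may use a different resampling kernel.

```python
import numpy as np

def rescale_line(line, scale):
    """Resample a 1-D sensor line about its center by `scale`.

    scale > 1 stretches the line (for the far, shorter-looking color line);
    scale < 1 compresses it (for the near, longer-looking line).
    """
    n = len(line)
    center = (n - 1) / 2.0
    # For each output pixel, find its sampling position in the original line.
    x = (np.arange(n) - center) / scale + center
    return np.interp(x, np.arange(n), line)
```

Applying slightly different scale factors to the outermost color lines brings all three into registration, removing the color fringing at the image edges.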
Area of Interest
In addition to providing better image quality than in the past, there are numerous features in machine vision cameras today that target data reduction: transmitting only the image data of interest in order to reduce hardware costs downstream. The extreme example is the so-called “smart” camera, which uses a number of standard algorithms to extract specific data from the image, such as bar code identification, optical character recognition, or object position, and transmits only this data. Such extreme data reduction is possible when generic analysis functions suffice; for more complex applications there are still useful ways to reduce the dataset. One of these features is known as area of interest (AOI): the pre-determined extraction of one or more regions of the image data (in 2-D imaging this is known as windowing). In many cases the field of view is set up to ensure that specific detail at each edge of the view is captured, but images often contain more detail than is actually necessary for analysis. AOI allows unimportant detail to be discarded before leaving the camera, enabling cheaper cabling solutions (for example, GigE instead of Camera Link) and reducing host computer processing requirements. The images on the previous page show railroad tracks, where the rails are more important than the gravel and ties; transmitting only the rail regions reduces the data by three quarters.
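A host-side sketch of the idea follows. Real cameras implement AOI in hardware before readout, and the region coordinates here are hypothetical: two narrow windows covering the rails are kept and everything else is dropped, leaving one quarter of the data.

```python
import numpy as np

def extract_aoi(frame, regions):
    """Return only the configured windows from a full frame.

    regions: list of (row_start, row_stop, col_start, col_stop) half-open windows.
    """
    return [frame[r0:r1, c0:c1] for (r0, r1, c0, c1) in regions]

# A 100 x 200 test frame standing in for the railroad scene.
frame = np.arange(100 * 200).reshape(100, 200)
# Two 25-pixel-wide windows over the rails; the gravel and ties are discarded.
rails = extract_aoi(frame, [(0, 100, 20, 45), (0, 100, 155, 180)])
kept = sum(r.size for r in rails) / frame.size  # fraction of data transmitted
```

With these illustrative window widths, `kept` comes out to 0.25, matching the three-quarters reduction described above.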
A related feature is burst mode, useful for applications with periodic rather than continuous imaging, such as imaging discrete objects on a conveyor. Burst mode allows image capture during object presentation at higher speeds (or resolutions) than the data link can sustain, using the dead time between objects to complete the transmission. In this way the data rate is averaged over the duty cycle of image capture, reducing the overall bandwidth requirements of the system.
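The bandwidth arithmetic can be checked with a small helper. The pixel rates below are illustrative assumptions, not figures from any particular camera or link.

```python
def link_ok(capture_rate_mpix_s, duty_cycle, link_capacity_mpix_s):
    """Check whether burst traffic, averaged over the dead time, fits the link.

    capture_rate_mpix_s: pixel rate while an object is in view.
    duty_cycle: fraction of each cycle spent imaging (0..1); the rest is dead time.
    """
    average_rate = capture_rate_mpix_s * duty_cycle
    return average_rate <= link_capacity_mpix_s
```

For example, capturing at 200 Mpix/s with a 40% duty cycle averages 80 Mpix/s, which fits a link that sustains only 100 Mpix/s, provided the camera buffers each burst internally.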
These and similar features enable optimized vision inspection for companies looking to combine expected capabilities, such as speed and resolution, with the added value of what is sometimes referred to as “adaptive imaging.” Expect these feature sets to continue to evolve, further increasing the utility of machine vision equipment in end applications while improving the ability of image processing algorithms to extract the best information possible from raw image data.