The latest generations of complementary metal oxide semiconductor (CMOS) imaging technology have greatly diminished the historic distinctions between CMOS and its rival, the charge coupled device (CCD). In addition to improving image quality, CMOS sensors have been incorporating additional functionality while increasing both speed and resolution. The result is that CMOS image sensors are becoming the preferred technology for high-speed inspection applications.
Compared to the first generations of CMOS imagers, CCD imagers had a compelling advantage. CCDs were able to take a snapshot-like image using a global shutter that triggered all photodetectors in a 2-D array to capture and store image energy simultaneously. Early CMOS devices could only offer a rolling shutter that triggered image capture across the array one line at a time, which distorted the image of moving objects. CMOS 2-D imagers incorporated global shutters in 1999, but for many years still ranked behind CCD devices in terms of image quality.
Much has changed in the last few years. For one thing, CMOS global shutter technology has improved significantly. One of the early problems with CMOS global shutters was shutter leakage. CMOS shutters work by passing photoelectric energy from an active sensor to a corresponding storage node. The storage node holds that energy for readout while the active sensor is forming its next image pixel. High-intensity illumination on the active sensor, however, can cause energy to “bleed through” and alter the contents of the storage node.
CMOS image sensor vendors developed both voltage domain and charge domain methods for storing pixel information. Voltage domain methods provide a lower noise floor, enabling the sensor to operate effectively at lower light levels. They also provide better resistance to shutter leakage. However, voltage mode pixels are able to collect and store less signal than pixels based on the charge domain method, and therefore suffer from higher noise in applications where shot noise dominates.
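The trade-off between the two storage methods can be illustrated with a back-of-the-envelope SNR calculation. The full-well and read-noise figures below are hypothetical, chosen only to show the direction of the effect, not to represent any particular device:

```python
import math

def snr_db(signal_e: float, read_noise_e: float) -> float:
    """SNR in dB when shot noise (the square root of the signal, in
    electrons) combines in quadrature with the read-noise floor."""
    total_noise = math.sqrt(signal_e + read_noise_e ** 2)
    return 20 * math.log10(signal_e / total_noise)

# Hypothetical pixels: a charge-domain design with a large full well but
# higher read noise vs. a voltage-domain design with a smaller well.
charge_domain_snr = snr_db(signal_e=20_000, read_noise_e=15)
voltage_domain_snr = snr_db(signal_e=8_000, read_noise_e=5)
# In this bright, shot-noise-dominated regime the larger well wins
# despite its higher read noise; at low light the ranking can reverse.
```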
Recent advances in global shutter designs have eroded several of these distinctions. By adding a P-well barrier underneath the storage node, for example, manufacturers of charge domain pixels have been able to significantly reduce shutter leakage. Using such structures, some vendors are now able to achieve shutter leakage of less than 0.02% when the active pixel is illuminated at 10x overexposure, orders of magnitude better than earlier generations. And as silicon process dimensions continue to shrink, voltage mode vendors will be able to reduce the gap in light collection efficiency.
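To put the 0.02% figure in perspective, a quick calculation shows how little signal bleeds through even at heavy overexposure. The full-well capacity below is a hypothetical round number:

```python
# Shutter leakage at the 0.02% level cited above, for a hypothetical
# pixel with a 10,000 e- full well whose active node sits at 10x
# overexposure while the storage node holds the previous frame.
full_well_e = 10_000
overexposure = 10
leakage_ratio = 0.0002  # 0.02%

leaked_e = leakage_ratio * overexposure * full_well_e
# Only about 20 electrons bleed into the storage node -- on the order
# of the sensor's noise floor rather than a visible image artifact.
```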
The incorporation of correlated double sampling (CDS) has also helped reduce the effect of reset noise in charge-domain shutter designs. With CDS the noise associated with the pixel reset operation is eliminated. Recent charge-domain shutter designs have shown an improvement in signal-to-noise ratio (SNR) of 6 decibels (dB) over previous generations while supporting a noise floor of less than ten electrons. The result is an extension to the low-intensity range of CMOS sensors, allowing faster image capture at lower light levels and increasing dynamic range to nearly 4x that of previous generations.
On-Chip Features Simplify Processing

Another improvement in CMOS 2-D image sensors has been integration of corrections for mismatches in multi-tap readouts. Large, high-resolution image sensors have so many pixels that even at high clock rates it takes a long time to read out image data. As a result, such sensors can exhibit low frame rates. To improve frame rate, vendors have designed sensors to have multiple readout taps so that several sections of the image can be read simultaneously. But differences among the analog-to-digital (A/D) converters on each readout path can result in intensity mismatches among sections of the reassembled image. Correcting such mismatches was once the task of the image processor. Today’s CMOS image sensors can perform such corrections on chip.
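A minimal sketch of such a correction, assuming each tap's A/D path can be modeled as raw = gain × signal + offset with known per-tap calibration constants (the numbers below are invented for illustration):

```python
def match_taps(taps, gains, offsets):
    """Undo per-tap A/D mismatches so all readout sections share one
    response. Each tap is modeled as raw = gain * signal + offset."""
    return [
        [(raw - off) / gain for raw in tap]
        for tap, gain, off in zip(taps, gains, offsets)
    ]

# Two taps reading the same flat gray scene (true value 100) through
# mismatched converters:
corrected = match_taps(
    taps=[[210.0, 210.0], [130.0, 130.0]],
    gains=[2.0, 1.25],
    offsets=[10.0, 5.0],
)
# corrected -> [[100.0, 100.0], [100.0, 100.0]]: the seam between the
# two image sections disappears.
```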
There have also been advances among CMOS line scan sensor designs. Unlike their 2-D cousins, line scan sensors capture an image one stripe at a time. They depend upon the motion of the object or the sensor to form successive stripes that build a 2-D image. Line scan CMOS sensors have become blazingly fast, especially in relation to their CCD counterparts, due in part to the advent of integrated high-speed A/D converters.
In today’s line scan CMOS image sensors, the architecture is massively parallel. Rather than scanning out each pixel in the line to a single A/D converter, today’s devices integrate one A/D converter for each pixel. This design not only increases readout speed compared to previous generations, it produces a lower noise floor because the analog sections do not need to operate at high frequencies. Similarly, the design reduces the sensor’s power needs by avoiding high-speed clocking.
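The speed and noise benefits of the per-pixel converter arrangement can be seen in a simple readout-time comparison. The line width and converter rates below are hypothetical:

```python
def line_readout_us(pixels: int, adc_rate_hz: float, n_adcs: int) -> float:
    """Time to digitize one line when `pixels` conversions are shared
    across `n_adcs` converters, each running at `adc_rate_hz`."""
    conversions_per_adc = pixels / n_adcs
    return conversions_per_adc / adc_rate_hz * 1e6

# Hypothetical 4096-pixel line:
serial = line_readout_us(4096, adc_rate_hz=100e6, n_adcs=1)       # one fast ADC
parallel = line_readout_us(4096, adc_rate_hz=200e3, n_adcs=4096)  # one ADC per pixel
# The serial design needs ~41 us with its analog chain clocked at
# 100 MHz; the parallel design finishes in 5 us with each converter
# running at only 200 kHz, which is what lowers noise and power.
```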
Integrated Artifact Reduction

With such massive parallelism, tap matching is an inevitable concern. To suppress the artifacts that might otherwise arise, today’s line scan CMOS sensor designs inject reference dark and light signals into the data stream during row blanking time. The camera can then use this information to correct tap mismatches on the fly.
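Given the two injected references, the camera can solve for each tap's gain and offset with two-point arithmetic. The sketch below assumes a linear tap model and invented reference levels:

```python
def tap_calibration(dark_raw: float, light_raw: float,
                    dark_ref: float, light_ref: float):
    """Recover a tap's gain and offset from the injected dark and light
    reference levels, modeling the tap as raw = gain * signal + offset."""
    gain = (light_raw - dark_raw) / (light_ref - dark_ref)
    offset = dark_raw - gain * dark_ref
    return gain, offset

# A tap that reports 12 counts for the dark reference (true level 0)
# and 212 counts for the light reference (true level 100):
gain, offset = tap_calibration(12.0, 212.0, 0.0, 100.0)
# gain -> 2.0, offset -> 12.0; the camera then applies (raw - 12) / 2
# to every pixel from that tap, refreshing the fit each blanking period.
```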
Another type of artifact addressed in newer line scan sensor designs is the so-called “Black Sun” phenomenon. This artifact appears in overexposed sections of a scene. The incoming light overpowers the sensor’s ability to reset pixels between exposures. As a result, the pixel appears black when it should be white. Newer sensors detect the overexposure condition and replace the faulty pixel information with a saturated signal level.
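The correction logic amounts to a detect-and-clamp step. One plausible detection heuristic, shown here with invented thresholds, is to flag a reset sample that was pulled far from its nominal level by the intense light:

```python
SATURATION = 4095  # hypothetical 12-bit full-scale output

def black_sun_fix(signal_sample: float, reset_sample: float,
                  reset_nominal: float = 100.0,
                  tolerance: float = 50.0) -> int:
    """If intense light disturbed the reset level beyond `tolerance`,
    the CDS difference would read falsely dark; output full scale
    instead of the faulty value."""
    if abs(reset_sample - reset_nominal) > tolerance:
        return SATURATION                     # overexposure detected
    return int(signal_sample - reset_sample)  # normal CDS-style readout

# black_sun_fix(120.0, 900.0)  -> 4095 (reset overwhelmed by light)
# black_sun_fix(1100.0, 100.0) -> 1000 (normal pixel)
```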
Line scan designs have also begun adopting multiline architectures in order to improve their low light sensitivity and provide color capability. Rather than having a single line of photodetectors, these designs have multiple lines. In monochrome imaging, because adjacent lines will capture the same scene view at slightly different times, the camera can add together the outputs of each line in order to get a stronger signal for a given illumination intensity. In color imaging the multiple lines allow placement of color filter patterns onto the different lines. In some sensors, a distributed color pattern may be implemented to create one color pixel for every four photodetectors.
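The sensitivity gain from line summing follows from how signal and noise accumulate: signal adds linearly while uncorrelated noise adds in quadrature. A short sketch with hypothetical electron counts:

```python
import math

def summed_line_snr(signal_e: float, read_noise_e: float,
                    n_lines: int) -> float:
    """SNR after summing n_lines captures of the same scene stripe.
    Signal adds linearly; shot and read noise add in quadrature."""
    total_signal = n_lines * signal_e
    noise = math.sqrt(total_signal + n_lines * read_noise_e ** 2)
    return total_signal / noise

# Summing four lines improves SNR by sqrt(4), i.e. 2x over one line,
# for the same illumination intensity.
snr_four = summed_line_snr(100.0, 10.0, 4)
snr_one = summed_line_snr(100.0, 10.0, 1)
```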
CMOS image sensor vendors have also been continually refining their semiconductor processes. Each generation has been able to offer faster speeds and higher resolutions. With multi-tap designs and improved semiconductor lithography, CMOS sensors are now able to offer pixel counts and frame rates that exceed the data handling capacity of traditional camera interfaces. As a result, camera vendors have had to create new gigabit interfaces in order to keep pace.
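The interface pressure is easy to quantify: raw data rate is just pixel count times frame rate times bit depth. The sensor figures below are hypothetical but representative of the scale involved:

```python
def sensor_data_rate_gbps(width: int, height: int, fps: float,
                          bits_per_pixel: int) -> float:
    """Raw pixel data rate the camera interface must carry."""
    return width * height * fps * bits_per_pixel / 1e9

# A hypothetical 12-megapixel sensor at 100 frames/s with 10-bit pixels:
rate = sensor_data_rate_gbps(4096, 3072, 100, 10)
# rate -> ~12.6 Gb/s, far beyond ~1 Gb/s Gigabit Ethernet and beyond
# even a single 10 Gb/s link -- hence the push for new multi-gigabit
# camera interfaces.
```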
Process improvements have also allowed CMOS image sensors to integrate image processing functionality with the sensor array, an enhancement not feasible with CCD technology. For example, today’s CMOS sensors can implement functions such as flat field correction of the image before sending it out to the camera electronics. Sensors are also now capable of “windowing,” or the selective downloading of an area of interest within the full image field. These built-in functions have not only offloaded such tasks from the host image processor, they have simplified the remaining processing steps the host must implement.
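Flat field correction, for instance, removes per-pixel sensitivity differences using a dark frame and a uniformly illuminated flat frame. A minimal per-pixel sketch, with invented calibration values:

```python
def flat_field_correct(raw, dark, flat):
    """Per-pixel flat field correction: subtract the dark frame, then
    normalize by each pixel's response measured from a uniform flat
    frame, rescaling by the mean response to preserve overall level."""
    response = [f - d for f, d in zip(flat, dark)]
    mean_response = sum(response) / len(response)
    return [(r - d) * mean_response / g
            for r, d, g in zip(raw, dark, response)]

# Two pixels viewing the same scene level but with unequal sensitivity:
out = flat_field_correct(raw=[110.0, 210.0],
                         dark=[10.0, 10.0],
                         flat=[60.0, 110.0])
# out -> [150.0, 150.0]: the sensitivity difference is removed.
```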
Future Developments

One area where CMOS process technology may not continue such steady advancement is in the reduction of pixel size. While smaller pixels would allow increased image resolution for a given sensor size, they come with several problems. One is decreased dynamic range; smaller pixels typically have less full-well capacity and thus saturate at lower illumination levels than larger pixels.
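The dynamic range cost can be sketched numerically as the ratio of full-well capacity to noise floor, expressed in dB. The well capacities and noise floor below are hypothetical:

```python
import math

def dynamic_range_db(full_well_e: float, noise_floor_e: float) -> float:
    """Dynamic range as the ratio of full-well capacity to noise floor."""
    return 20 * math.log10(full_well_e / noise_floor_e)

# Hypothetical pixels: shrinking a 10,000 e- well to 4,000 e- at the
# same 10 e- noise floor costs roughly 8 dB of dynamic range.
large_pixel = dynamic_range_db(10_000, 10)  # 60.0 dB
small_pixel = dynamic_range_db(4_000, 10)   # ~52.0 dB
```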
Perhaps more important, however, is the impact of smaller pixel geometries on camera design. In order to realize the resolution improvements that smaller pixels would allow, cameras would need costly high-resolution lenses. The current machine vision market does not typically support such expensive camera systems.
Many of these advances currently appear only in the latest, high-resolution CMOS image sensors. Most low-resolution devices are based on older process technologies and older designs. That may soon change, however. Some vendors are now in the process of implementing the newer design techniques and technologies in their older, low-resolution product lines. The result will be faster sensors producing higher quality images with wider dynamic range than is currently available at the lower resolutions.
The cumulative impact of all these sensor improvements on machine vision systems has been significant. Cameras are now able to implement many image corrective functions that were once the province of the host image processor. This, in turn, frees the host’s computational capacity to implement more complex processing such as automated feature extraction and precision metrology. The reduced noise levels and wider dynamic range that the sensors provide help make such processing more accurate, as well.
Vision systems based on cameras using today’s CMOS sensors are able to capture high-resolution images faster than ever with a reduced need for high-intensity illumination. The speed improvements have increased the capacity of such systems in automated optical inspection applications to handle larger objects at higher throughput.
Sensor vendors will continue to improve their designs in areas such as reduced power, increased low light sensitivity, and enhanced spectral response to different wavelengths. Such improvements will open the door for new applications in inspection and machine vision with CMOS sensors at their core.
Tech Tips

Cameras using today’s CMOS sensors are able to capture high-resolution images faster than ever.
This speed allows automated optical inspection applications to handle larger objects at higher throughput.
Future improvements would include reduced power, increased low light sensitivity, and enhanced spectral response to different wavelengths.