On the way from incoming light to an electrical signal, the two sensor technologies differ in small but significant ways.

Leading camera manufacturers are now incorporating both CCD and CMOS sensors into their product lines. Source: Basler AG


Due to constant improvement and optimization over the years, today’s charge-coupled device (CCD) sensors stand for excellent image quality. Originally developed in 1969 for data storage, the CCD was soon recognized as a light-sensitive device with great potential. By 1975, the first sensors with a resolution sufficient for television cameras appeared. Complementary metal-oxide semiconductor (CMOS) sensors were invented around the same time, but until the 1990s, CMOS technology suffered from low lithography resolution and a lack of suitable fabrication processes.
With the advent of CMOS active pixel sensor (CMOS-APS) technology, these sensors were virtually reinvented. Since their introduction in the mid-1990s, CMOS sensors have become the first choice for high-speed imaging. They not only exhibit superior speed but also typically cost less. Leading camera suppliers now include both sensor technologies in several different camera types in their portfolios.


Converting Light into an Electrical Signal

Both CCD and CMOS sensors are based on the same physical principle: they convert incoming photons into electrons by means of the photoelectric effect. As a result of their sensor structure, CMOS sensors reach their maximum sensitivity in the red spectral region (650 to 700 nanometers). CCD sensors have their maximum sensitivity at about 550 nanometers, exactly where the human eye is most sensitive.

For a variety of technical reasons, CMOS sensors were in the past considerably less efficient at converting incoming light into an electrical signal. The photosensitive area within each pixel of a CMOS sensor occupied only a fraction of the total pixel area; the rest was taken up by the readout electronics associated with each photosensitive area.

The structure of CCD sensors is different. In CCDs, the electronics that evaluate the charges collected on the sensor surface are located outside the chip, so almost the entire chip surface is available for photosensitive structures.

Over the past few years, design improvements have increased the light-sensitive area of CMOS sensors to near the level of CCD sensors. One example of such an improvement is the micro-lens array that is now applied to the CMOS chip: the lens array collects the light impinging on each pixel area in the sensor and focuses it onto the available light-sensitive region within the pixel.



The Price of Individuality

One set of electronics for all pixels: this is how signal processing works in a CCD sensor, and at first sight it sounds like a trade-off. However, it is an advantage for image quality. Because there is one common electronic path for a large fraction, if not all, of the pixels in a CCD chip, all analog pixel signals are evaluated and processed in the same way, and they are all converted to digital signals in the same way.

CMOS chips are different in this respect: they carry individual processing electronics on board each pixel. This characteristic means that they can be read out faster and that the image area can be accessed in more flexible ways.

However, there are tiny variations within the individual electronic structures used to process each pixel, meaning that the signal offset can differ from pixel to pixel within a CMOS sensor, although the amplification slopes are almost identical. Variations between the offset values of the pixels in a CMOS sensor are typically 10 times larger than those of CCD sensors.

This offset variation represents a difficulty with respect to the sensitivity threshold of the sensor. This is particularly true when a weak signal only slightly above the background noise must be detected. In this situation, a CMOS sensor performs worse than a CCD sensor.

By definition, this threshold is reached when the signal from the sensor is as high as the noise, that is, when the signal-to-noise ratio (SNR) equals one. The technical term that quantitatively describes the pixel-to-pixel variation is fixed pattern noise (FPN), and CMOS sensors exhibit a higher FPN than CCD sensors.
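To illustrate how offset variation raises the noise floor, here is a minimal sketch in Python with purely assumed numbers: it simulates per-pixel fixed offsets for a CCD-like and a CMOS-like case and estimates the resulting FPN and the smallest signal that still reaches SNR = 1.

```python
import numpy as np

rng = np.random.default_rng(0)
read_noise_e = 8.0        # temporal read noise in electrons (assumed)
shape = (480, 640)        # assumed sensor format

# Assumed pixel-to-pixel offset spreads: roughly 10x larger in the CMOS case.
for label, offset_spread_e in [("CCD-like", 2.0), ("CMOS-like", 20.0)]:
    fixed_offsets = rng.normal(0.0, offset_spread_e, size=shape)  # fixed pattern
    fpn = fixed_offsets.std()
    # Uncorrected, the fixed offsets add (in quadrature) to the temporal noise,
    # raising the smallest signal that still reaches SNR = 1.
    threshold_e = float(np.hypot(fpn, read_noise_e))
    print(f"{label}: FPN ≈ {fpn:.1f} e-, sensitivity threshold ≈ {threshold_e:.1f} e-")
```

In practice, fixed offsets can largely be calibrated out with a dark reference frame, but any residual variation still contributes to the sensitivity threshold, which is why the effect matters most for very weak signals.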



Less Sensitivity, More Space for Electrons

CMOS sensors, however, score much better in another area: they can provide a higher full well capacity. The full well capacity is the maximum number of electrons that an individual pixel can hold.

On CCD sensors, this number is often artificially limited to a reduced saturation capacity to avoid certain technical problems. The ratio of the saturation capacity (the usable full well capacity) to the sensitivity threshold determines the sensor’s dynamic range.

In comparison to a CCD sensor, a CMOS sensor gains in saturation capacity what it loses in low-light sensitivity. As a result, CMOS and CCD sensors have almost the same dynamic range. In principle, the maximum SNR equals the square root of the saturation capacity. The CMOS sensor therefore excels in this area, but it needs more light to do so.
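To make the arithmetic tangible, the sketch below computes both figures for two hypothetical sensors; the full well and threshold values are assumptions chosen only for illustration, not measured data for any real chip.

```python
import math

def dynamic_range_db(full_well_e, threshold_e):
    """Dynamic range = saturation capacity / sensitivity threshold, in dB."""
    return 20 * math.log10(full_well_e / threshold_e)

def max_snr(full_well_e):
    """Shot-noise limit: the maximum SNR equals the square root of the saturation capacity."""
    return math.sqrt(full_well_e)

# Purely illustrative (assumed) figures: the CCD has the lower sensitivity
# threshold, the CMOS sensor the larger full well capacity.
sensors = {
    "CCD (assumed)":  {"full_well_e": 20_000, "threshold_e": 10},
    "CMOS (assumed)": {"full_well_e": 45_000, "threshold_e": 25},
}

for name, s in sensors.items():
    dr = dynamic_range_db(s["full_well_e"], s["threshold_e"])
    snr = max_snr(s["full_well_e"])
    print(f"{name}: dynamic range ≈ {dr:.0f} dB, maximum SNR ≈ {snr:.0f}:1")
```

With these assumed numbers, both sensors land near 65 dB of dynamic range, while the CMOS pixel reaches the higher peak SNR, provided there is enough light to fill its larger well.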

As a simplified rule of thumb, one can say that CCD sensors are the preferred choice for applications with little light, and CMOS sensors are a good alternative when there is plenty of light.



CMOS sensors are a good alternative when there is a lot of light. Source: Basler AG

Too Bright for a Sensor?

If particularly bright light is present, operators must sometimes contend with other effects. When a CCD pixel is overexposed, it can generate an excess of electrons that migrate into neighboring pixels. In this situation, very bright image structures seem to extend into the darker structures, an effect known as blooming. CMOS sensors do not suffer from this migration effect and are therefore not prone to blooming.

After a CCD sensor has been exposed to very bright light, the charge transport process can cause bright stripes to appear in the image: the bright structures appear as white, smeared lines. This effect is known as smearing and can be seen, for example, in images acquired at night when a car approaches the camera with its headlights on.

As with blooming, CMOS sensors have an advantage compared to CCD sensors because they do not exhibit smearing. This explains why cameras equipped with CMOS sensors are often preferred for outdoor use.

The Agony of Choice

CMOS sensors have a shorter history than CCD sensors, but they are now technically mature and even represent the best choice for some applications. When choosing the optimum solution for a specific task, many of the aspects mentioned above come into play. In the end, the constraints and details of an application should determine which sensor technology is the better alternative.