A charge-coupled device, or CCD, is by far the most common image capture mechanism used in industries that require high-quality image data. Since its invention at AT&T Bell Labs by George E. Smith and Willard Boyle in 1969, the CCD has become a vital piece of technology for digital imaging, particularly in professional, scientific, and medical applications where clarity is crucial.
The CCD's major rival, however, is the complementary metal-oxide-semiconductor, or CMOS, image sensor. Patented by Fairchild Semiconductor employee Frank Wanlass in 1963, CMOS technology is perhaps best known for producing active-pixel sensors, or APSs, which are found in many commercial imaging applications, from cell phone cameras to web cameras to most digital pocket cameras made after 2010. Because CMOS sensors are cheaper to manufacture, CMOS cameras are typically less expensive than their CCD counterparts. On the other hand, CCD cameras tend to produce higher-quality images with less "rolling shutter" effect, or skewed imaging of moving subjects, than the CMOS alternative.
So which is ultimately better, CCD or CMOS? The short answer is: It depends on your application. For the long answer, Quality spoke to Mike Shahpurwala, founder and president of Aven – a precision technology provider of both CCD and CMOS cameras – about some of the key differences between the two imagers and how good lighting plays a pivotal role.
Mike Shahpurwala: The main advantage of a CCD camera is the image quality, specifically in low-light applications, as CCD cameras provide a much clearer image than CMOS cameras. For example, in scientific applications where you're trying to examine cell structures, or you're working with slides that have very slight variations in grains that you need to differentiate, you would use a CCD camera. These cameras are also used in medical devices and in general assembly inspection; for example, inspecting tiny parts of a cell phone or other electronic device to make sure that the device meets all assembly requirements.
However, the majority of cameras on the market right now are CMOS, because CMOS cameras are much more economical to produce than CCD sensors. The image quality on these cameras generally tends to be okay, but as I've said, when you're working on an application where there is low light, or you need to resolve very slight variations between slides, then you would want to use a CCD camera.
Shahpurwala: [At Aven], we produce CCD and CMOS cameras, but the CCD sensor is made by only a few companies. Sony is one of the largest manufacturers of CCD chips. So normally we buy the chips from companies like Sony and then incorporate our own electronics to process the image. It is the electronics that we design that determine the signal-to-noise ratio, the actual processing of the image, and so forth, and it is the software that we design for the cameras that provides the tools necessary for accurate measurement and inspection.
Let There Be Light
Shahpurwala: With these cameras, lighting is the most important aspect. The level of lighting that you use is very important, and so is the type of lighting that you use.
For example, if you're trying to inspect a card that has a lot of reflection, you would want to use light that has a polarizer built into it, so that the glare from the shiny surface is polarized and does not reflect as much. In some cases, depending on the product, you might want to use only certain types of light; for example, only green light or blue light or white light. In such cases you would use a diffuser, so that the light doesn't strike the object directly but is spread evenly across it. So, as you can see, lighting is a very important part of the inspection process.
Shahpurwala: The next development, I think, is USB 3.0. It's going to become more [prevalent] in cameras, and it's going to support much higher frame rates when compared to cameras that still have USB 2.0. That is the next stage.
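The interface-speed point can be illustrated with a rough back-of-envelope calculation. The numbers below are nominal USB signaling rates and an assumed link efficiency for an uncompressed monochrome stream, not figures from the interview:

```python
# Illustrative sketch: how interface bandwidth caps the frame rate
# of an uncompressed video stream. Rates are nominal signaling rates;
# the 0.8 efficiency factor is an assumption for protocol overhead.
USB2_MBPS = 480   # USB 2.0 "Hi-Speed" signaling rate, megabits/s
USB3_MBPS = 5000  # USB 3.0 "SuperSpeed" signaling rate, megabits/s

def max_fps(megapixels, bits_per_pixel, link_mbps, efficiency=0.8):
    """Upper bound on frames per second over a serial link."""
    bits_per_frame = megapixels * 1e6 * bits_per_pixel
    usable_bps = link_mbps * 1e6 * efficiency
    return usable_bps / bits_per_frame

for mp in (5, 20):
    print(f"{mp} MP, 8-bit mono: "
          f"USB 2.0 ~{max_fps(mp, 8, USB2_MBPS):.1f} fps, "
          f"USB 3.0 ~{max_fps(mp, 8, USB3_MBPS):.1f} fps")
```

Real cameras bin, compress, or use color formats with different bit depths, so actual frame rates vary; the point is the order-of-magnitude gap between the two interfaces.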
After that, I don't see megapixel counts continuing to increase right away, but as the processing power of [PCs] becomes faster and faster, then you will see an increase. Right now, 5-megapixel is the ideal resolution, although there are cameras being produced at 12-megapixel and 19-megapixel; they even go up to 20-megapixel. But those have very slow frame rates, and you can't really see a 20-megapixel image on your screen because your screen's resolution is limited. Most of the time, the way the technology works is that you use a high-megapixel camera for your application, and the image you capture does have that pixel density, in the sense that it has 10 or 20 megapixels, whatever it is, and you can print the image at 20 megapixels, but you can't see all of it on your screen.
PC screens are limited, I believe, to approximately 3-megapixel. So you can have a higher-resolution camera, but you can't see its full detail, and the frame rate also becomes very slow, because think about trying to move a 20-megapixel image onto a screen; that is a very slow process. But as the processing power of computers increases, I think you will see screen resolutions increase, and then you will see cameras with higher resolutions come into play.
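Shahpurwala's roughly 3-megapixel screen figure is easy to sanity-check against the pixel counts of common display modes. The resolutions below are standard display formats chosen for illustration, not numbers taken from the interview:

```python
# Pixel counts of some common display modes, in megapixels.
displays = {
    "1080p (1920x1080)": 1920 * 1080,
    "QHD (2560x1440)": 2560 * 1440,
    "4K UHD (3840x2160)": 3840 * 2160,
}
for name, pixels in displays.items():
    print(f"{name}: {pixels / 1e6:.1f} MP")
# Even a 4K screen shows well under half of a 20-megapixel frame,
# so the image must be scaled down (or panned) to be viewed.
```

A 1080p display of the era held about 2 megapixels and QHD about 3.7, which is consistent with the ballpark figure in the quote.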