Image processing is usually done to extract information of interest from a busy image or to enhance the image.
 
In an imaging system, light is collected from an object and made incident onto an array of sensors in a camera.  All of the information about the object is contained in the intensity variations of this light at the photosensors.
 
However, the light intensity also carries unwanted variations, both from features that are not of interest and from other sources, including scattered and reflected light.
 
Image processing is commonly thought of as digital image processing (DIP), performed by a microprocessor or computer after the light has struck the camera’s photosensitive surface and been converted into an electronic signal.
 
Optical image processing (OIP) modifies the light before the light strikes the photosensitive surface.
 
It can extract features of interest (e.g. defects) from images that also contain details that are required when the unprocessed image is used.
 
Optical image processing (OIP) techniques operate at the speed of light and, hence, are much faster than digital image processing (DIP) techniques.  In many applications, OIP techniques can also provide greater reliability and require less expensive hardware and less software development. 
 

Image Properties

Imaging depends upon the variations in the light from the object due to properties of the light (intensity, color, angles, polarization) as well as properties of the object (geometries, textures, and the materials’ optical properties).  If the object did not somehow modify the light to be imaged, there would be no image. Optical image processing exploits such differences in the image’s features.
 
Some of the techniques used are familiar to users of still and video cameras. They include filters that sort out color and polarization differences.  Such filters are common and useful, but they are limited: they cannot discriminate between features with different geometries (e.g. circular and elliptical vs. rectangular features) unless those features have associated color or polarization variations.
 

Practical Example of Defect Detection in Complex Images

A simple but very powerful example of feature discrimination for defect detection is shown in Figure 1. 
 
Defect detection is required for the inspection of integrated circuits on silicon wafers and of the photomasks used to produce them.
 
The defects can then be easily detected by simple thresholding techniques, which are fast, simple, and reliable. Better still, wafer and photomask inspection both use the same hardware with no additional cost (apart, perhaps, from slightly different mechanical supports).
 
The required features in the integrated circuits have perimeters that are essentially straight lines.
 
The defects have irregular shapes that are far from straight lines and, if averaged out, would look more like circles or ellipses.
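As a concrete illustration, here is a minimal sketch of such a thresholding step in Python with NumPy (the array values, threshold, and function name are hypothetical; a real inspection system would threshold the optically filtered camera frame):

    import numpy as np

    def detect_defects(image, threshold):
        # Flag pixels brighter than the threshold as defect candidates.
        # 'image' is a 2-D array of camera intensities (e.g. 8-bit grayscale).
        mask = image > threshold
        return mask, np.argwhere(mask)

    # Toy example: a dark, optically filtered frame with one bright defect.
    frame = np.zeros((8, 8), dtype=np.uint8)
    frame[3, 5] = 200                      # the defect
    mask, coords = detect_defects(frame, threshold=50)
    print(coords)                          # -> [[3 5]]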
 
We will discuss the basics of how this is done and how to think about applying the techniques, using simple models, to other problems. Our discussion will use generally familiar physical ideas, although many articles on Optical Spatial Filtering begin by presenting several integral equations.
 

Basic Signal Processing

Any electronic signal (varying in time) or optical image (varying in space) can be constructed from, or analyzed into, a suitable sum of sine/cosine functions. The amplitudes and phases of these sines/cosines define the signal’s shape: the variations of the signal in time (for electronic signals) or in space (for optical signals or images).
 
This is the basis of the Fourier Theory of functions and is widely used in processing electronic signals. 
 
In Optics, this is called Fourier Transform Optics or Optical Spatial Filtering.
 
For example, the amplitude variations in a simple time-varying sinusoidal electronic signal can be represented in time-frequency-space as a single time-frequency. Similarly, an image whose intensity varies sinusoidally along the x-axis can be represented in spatial-frequency-space as a single spatial-frequency.
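As a sketch of this idea (using NumPy’s FFT purely for illustration; the sampling rate and frequency are arbitrary choices), a pure sinusoid indeed transforms to a single spike in frequency-space:

    import numpy as np

    # A pure 5 Hz sinusoid sampled for 1 second at 256 Hz.
    fs = 256
    t = np.arange(fs) / fs
    signal = np.sin(2 * np.pi * 5 * t)

    # Its one-sided spectrum: all the energy sits in a single frequency bin.
    spectrum = np.abs(np.fft.rfft(signal)) / (fs / 2)
    freqs = np.fft.rfftfreq(fs, d=1 / fs)
    print(freqs[np.argmax(spectrum)])      # -> 5.0

The two-dimensional analog holds for an image whose intensity varies sinusoidally along one axis: its 2-D transform is a single spatial-frequency spike.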
 
These Fourier techniques are widespread in electronic 1-D signal processing and in 2-D optical image processing to improve signal and image quality.
 
For example, a simple time-varying electronic signal (i.e. amplitude versus time, time-space data such as music) with noise is often represented by its distribution of time-frequencies (its frequency-space), obtained from the time-space data.
 
For electronic signals, the frequencies that contribute most to the noise (often high frequencies, such as static) are eliminated from the frequency-space representation; the remaining lower-frequency spectrum is then converted back to the time domain, yielding a signal with improved signal-to-noise ratio (S/N). Such filtering is often done using simple electronic filters that block the high frequencies.  More generally, a band of frequencies is passed and frequencies outside this band are eliminated using filters, phase-sensitive detectors and lock-in amplifiers.
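A minimal numerical sketch of this filtering chain (the cutoff, noise level, and signal frequency are arbitrary illustrative values):

    import numpy as np

    fs = 1000                                  # sampling rate, Hz
    t = np.arange(fs) / fs
    clean = np.sin(2 * np.pi * 10 * t)         # 10 Hz signal of interest
    noisy = clean + 0.5 * np.random.randn(fs)  # broadband "static"

    # Transform to frequency-space, eliminate everything above a cutoff,
    # then convert back to the time domain.
    spectrum = np.fft.rfft(noisy)
    freqs = np.fft.rfftfreq(fs, d=1 / fs)
    spectrum[freqs > 30] = 0                   # crude low-pass at 30 Hz
    filtered = np.fft.irfft(spectrum, n=fs)

    # The residual noise drops sharply after filtering.
    print(np.std(noisy - clean), np.std(filtered - clean))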
 

Basic Optical Signal Processing and Diffraction

To actually see the Fourier Transform of a pattern or image on a flat transparent plate, shine a parallel, coherent light beam onto this two-dimensional object, place a lens after the object to collect the light, and then look at a viewing screen placed at the focal length of the lens.
 
(Since the detected light at the screen is always positive, this actually provides the intensity pattern of the spatial Fourier transform, not the amplitude pattern.)

It can be very useful to note that the Fourier Transform of an object is the Fraunhofer diffraction pattern of that object.
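This equivalence is easy to explore numerically: in the Fraunhofer (focal-plane) approximation, the intensity on the screen is proportional to the squared magnitude of the 2-D Fourier transform of the aperture’s transmission function. A sketch, with an arbitrary slit size and grid resolution:

    import numpy as np

    # Transmission function of a narrow slit, long in y and narrow in x.
    N = 512
    aperture = np.zeros((N, N))
    aperture[:, N // 2 - 4 : N // 2 + 4] = 1.0

    # Fraunhofer pattern ~ |2-D Fourier transform of the aperture|^2,
    # which is what appears (as intensity) in the lens's focal plane.
    field = np.fft.fftshift(np.fft.fft2(aperture))
    intensity = np.abs(field) ** 2

    # The sinc-squared maxima and minima run along the x (narrow) direction.
    print(intensity[N // 2, N // 2 : N // 2 + 10])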
 
Since the two-dimensional diffraction pattern depends on the geometry of the two-dimensional object, the approximate distribution of light from differently shaped features can be anticipated by simply considering the diffraction of light from those shapes. This is extremely powerful, as may be appreciated from Figure 1 and from the example applications below.
 
One simple application of this has always appeared to me to be magical. Here, the optical spatial frequencies (or Fraunhofer diffraction pattern) from a narrow slit with its long dimension in the y-direction will form a series of maxima and minima along the x-direction. 
 
If the object is a rectangular grid, the primary diffraction pattern has maxima and minima along both the x and y axes. If you put an opaque screen, with a slit in the y-direction, in the plane of the diffraction pattern, and then reimage the light, the resulting image will show only the grid lines in the x-direction! 
 
This demonstrates the power of optical spatial filtering for immediately discriminating different objects by shape and orientation without digital image processing.
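The whole experiment can be mimicked numerically as a sanity check on the optics (the grid period and slit width below are arbitrary; the software mask plays the role of the opaque screen in the diffraction plane):

    import numpy as np

    N = 256
    x = np.arange(N)
    # A rectangular grid: vertical lines plus horizontal lines.
    vertical = (x % 32 < 4)[np.newaxis, :] * np.ones((N, 1))
    horizontal = (x % 32 < 4)[:, np.newaxis] * np.ones((1, N))
    grid = np.clip(vertical + horizontal, 0, 1)

    # "Diffraction plane": the 2-D Fourier transform of the object.
    F = np.fft.fftshift(np.fft.fft2(grid))

    # Opaque screen with a narrow slit along the y-direction.
    mask = np.zeros((N, N))
    mask[:, N // 2 - 2 : N // 2 + 3] = 1.0

    # Reimage: inverse transform of the filtered spectrum.
    image = np.abs(np.fft.ifft2(np.fft.ifftshift(F * mask)))

    # Variation along y (the horizontal grid lines) survives;
    # variation along x (the vertical grid lines) is suppressed.
    print(image.std(axis=0).mean(), image.std(axis=1).mean())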
 

Simple Pattern Recognition/Discrimination with Optical Image Processing

I have found it useful to use the basic concepts of diffraction to generate and test simple, effective solutions.
 
We have just seen how to remove the horizontal or vertical lines from the image of a rectangular grid by reimaging only the part of the diffraction pattern formed by the horizontals or the verticals.
 
This can be applied to more complex situations.
 

Example: Defect Detection in Photomasks with Periodic Array of Circuit Features

Photomasks are used in each process step of the production of multiple integrated circuits on a silicon wafer. Each integrated circuit in the periodic array has the same details as all the others.
 
It is necessary to detect small defects in this large amount of required detail. In this example, the features in the circuits are straight lines in both the vertical and horizontal directions. 
 
The diffraction pattern from this photomask is shown in Figure 2. There are two sets of features.
 
There is a two-dimensional array of small light areas that arises from the diffraction from a periodic array.
 
There is also a large cross-like area of light lying along the horizontal and vertical axes, with little light far off these axes. This large cross-like feature is broader than the light from the smaller light areas and arises from diffraction from the small, linear features within each circuit.
 
Recall that smaller apertures and smaller features diffract light to larger angles than larger features do.
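The single-slit relation sin(theta) = lambda/a for the first minimum makes this quantitative (the wavelength below is just an illustrative HeNe value):

    import numpy as np

    wavelength = 633e-9          # HeNe laser, meters (illustrative)
    for a in (100e-6, 10e-6):    # feature widths: 100 um vs 10 um
        theta = np.arcsin(wavelength / a)   # angle of the first minimum
        print(f"width {a * 1e6:5.1f} um -> first minimum at "
              f"{np.degrees(theta):.2f} deg")
    # The 10x smaller feature diffracts to roughly 10x larger angles.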
 
For defect detection, there have been two approaches. 
 
In one approach, a mask with opaque dots is placed in the diffraction plane. The opaque dots block the light from the small light areas and thereby eliminate the periodic features.
 
This method works and provides the necessary optical image processing. However, it requires making the mask and then aligning its opaque dots with the illuminated dots in the diffraction plane.
 
In the second approach, a blackened metal cross is used in the diffraction plane. The cross is aligned with the diffractive cross of light, and the straight-line features of the circuit are virtually eliminated. 
 
Since the features in the photomask are parallel to the sides of the glass plate that supports the photomask, the axes of the diffracted cross are vertical and horizontal, and it is rather easy to align this blackened metal cross along these axes. The alignment is not critical.
 
Here, examining the diffraction pattern and identifying its features with the geometric properties of the object causing the diffraction provides optical solutions.
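A numerical sketch of the second approach (the feature sizes and cross width are arbitrary; a real system does this filtering with a physical cross in the optical path, not in software):

    import numpy as np

    N = 256
    photomask = np.zeros((N, N))
    photomask[64:192, 120:136] = 1.0   # a straight, axis-aligned circuit line
    photomask[40:46, 200:206] = 1.0    # a small irregular "defect"

    F = np.fft.fftshift(np.fft.fft2(photomask))

    # Blackened cross: block the horizontal and vertical axes of the
    # diffraction plane, where the straight, axis-aligned edges send light.
    cross = np.ones((N, N))
    cross[N // 2 - 3 : N // 2 + 4, :] = 0.0
    cross[:, N // 2 - 3 : N // 2 + 4] = 0.0

    filtered = np.abs(np.fft.ifft2(np.fft.ifftshift(F * cross))) ** 2

    # The straight features are strongly suppressed, so the defect region
    # retains relatively more energy and can be found by thresholding.
    print(filtered[40:46, 200:206].mean() / filtered.mean())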
 
This was an important problem, so digital solutions were also developed. They required digitizing the image and then comparing it with a stored image of a defect-free photomask. This was a slower process, it required different stored images for the different photomasks used in the different process steps of one set of integrated circuits, and the stored and live images had to be perfectly aligned. It was, however, applicable to integrated circuits with features oriented at multiple angles.
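For contrast, the digital approach amounts to an aligned image subtraction, sketched below (the function name and threshold are hypothetical, and perfect alignment is assumed, which in practice is the hard part):

    import numpy as np

    def defect_map(live, reference, threshold):
        # Compare a live image with a stored defect-free reference.
        # Cast to a signed type so the subtraction cannot wrap around.
        diff = np.abs(live.astype(np.int32) - reference.astype(np.int32))
        return diff > threshold

    reference = np.zeros((8, 8), dtype=np.uint8)
    live = reference.copy()
    live[2, 6] = 180                   # a defect
    print(np.argwhere(defect_map(live, reference, threshold=50)))  # [[2 6]]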
 

Other Applications of Optical Image Processing

The diffraction patterns (or Fourier Transforms) can also be used for gauging small dimensions. The linear diffraction pattern from a slit can be used to gauge the diameter of a narrow wire by measuring the spacing between the intensity maxima in the diffraction plane!  [By Babinet’s Principle, an opaque object has the same diffraction pattern as a slit of the same width.]  Since the spacing between the maxima (and minima) increases with decreasing diameter, this technique has the unusual property of becoming more accurate as the wire gets smaller.
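As a worked example, the far-field spacing between adjacent minima is approximately delta_x = lambda * L / d for wire diameter d and screen distance L, so the diameter follows directly from a measured spacing (the wavelength and distance below are illustrative):

    wavelength = 633e-9      # HeNe laser, meters (illustrative)
    L = 1.0                  # wire-to-screen distance, meters

    def wire_diameter(fringe_spacing):
        # Invert delta_x = wavelength * L / d for the diameter d.
        return wavelength * L / fringe_spacing

    for dx in (6.33e-3, 12.66e-3):     # measured fringe spacings, meters
        print(f"spacing {dx * 1e3:5.2f} mm -> "
              f"diameter {wire_diameter(dx) * 1e6:.0f} um")
    # Halving the diameter doubles the fringe spacing, so thinner wires
    # spread their pattern over more of the screen and are gauged more
    # accurately.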

TECH TIPS

  • Image processing is commonly thought of as digital image processing (DIP), done by a microprocessor or computer.
  • Optical image processing (OIP) modifies the light before the light strikes the photosensitive surface.
  • An optical image can be constructed from a suitable sum of sine/cosine functions, the basis of the Fourier Theory of functions.
 