Machine vision has proven its value in factories over the last decade. As with digital devices, advances include improved computing power, reduced physical dimensions, expanded functionality and simplified operations.

This smart camera is 44 by 44 by 39 millimeters plus lens. It contains a 2-D photo-detector, a CPU, a digital signal processor, a field-programmable gate array, communications including Ethernet, serial and I/O options, software and integrated lighting control. Source: Dalsa


Machine vision is generally understood to mean the automated, on-line, real-time inspection of products in a manufacturing environment. A machine vision system is a vision system with its physical configuration and software tailored to factory use for product characterization, defect detection, gaging, and product feature or code identification.

Advances in machine vision, as with most digital systems, are due to significant increases in computer processing power, reduced physical volume, sensor improvements in both pixel performance and pixel count, embedded software performance and operating simplicity. Since machine vision has demonstrated its usefulness over many years, advances in performance, speed and communications now tend to be incremental.

A machine vision system is functionally similar to a digital camera: both require illumination, light-gathering components (usually a lens), a photoelectric converter/sensor, a shutter, and a microprocessor with software to process and enhance the image and to control the optoelectronic, storage and communication functions. Set-up and operation can be simplified in both machine vision systems and digital cameras by using standard application-specific software options, although this can reduce flexibility for special needs. Such options do solve many common problems, however, and should be used when they meet current and projected needs.

Both machine vision systems and digital cameras can use common accessories to enhance the image, such as color sensors, filters, polarizers, fiber optics and software options. Both can have electrical/cable connections to external components such as a computer and display monitor. Vision systems often have hardware and software for output control and system-wide communication. Both are offered in higher and lower performance models.

Special optics are usually applied in custom-tailored machine vision systems because of the unique requirements of an application and because of the optical expertise required.

Because of its importance and increasing use, the critical role of the optics, often simply called illumination, in obtaining simpler, faster and more reliable results is discussed here.

An integrated circuit on silicon is shown here with (left) and without (right) optical image processing applied before any digital processing. The defect can be easily detected in the area marked on the left: the image intensity of the integrated circuit features has been significantly reduced, so there is a large signal-to-noise ratio between the light intensity from the defect and the light intensity from the circuit features. The high S/N ratio permits detection with reduced resolution. Source: Norman N. Axelrod Associates

Optics: Illumination and Light Gathering Components

Optical inspection requires that the best possible optical contrast be obtained for features of interest in situations with significant constraints.

Often, defects must be detected and dimensions gaged against cluttered optical fields, over a large depth of field, and despite acceptable variations in product features such as color and surface roughness.

Simple application of special optical techniques beyond basic lighting, such as uniform and dark-field illumination, can improve detection sensitivity, reliability and speed, and reduce hardware and software demands.

The optical techniques are applied to process the information-carrying light before it strikes the sensor.

The special optical image processing techniques include:
  • optical spatial filtering
  • structured lighting
  • induced optical characteristics
  • shaped and random fiber optic bundles
  • telecentric lenses
  • differential detection
  • wave front coding

    For detection of defects in integrated circuits and photo-masks used to fabricate the integrated circuits, optical spatial filtering can be used to eliminate the integrated circuit information from the image and show only defects in the detected image.

    Images of the same area, before and after optical image processing, can be compared with the same camera settings and software used for both images. For on-line use, relatively low-resolution sensors with digital thresholding can be used once the optical image processing is applied.
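    A minimal sketch of that digital thresholding stage is shown below, assuming the optical stage has already suppressed the circuit features; the image size, intensity values and threshold are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

def detect_defects(frame: np.ndarray, threshold: float) -> np.ndarray:
    """Flag pixels brighter than a fixed threshold.

    Assumes the optical spatial filter has already suppressed the periodic
    circuit features, so any remaining bright pixel is a likely defect.
    """
    return frame > threshold

# Hypothetical stand-in for a low-resolution frame after optical filtering.
rng = np.random.default_rng(0)
frame = 0.05 * rng.random((480, 640))   # weak residual background light
frame[200:203, 300:303] = 0.9           # a bright defect the optics passed through
defect_mask = detect_defects(frame, threshold=0.5)
print("defect pixels flagged:", int(defect_mask.sum()))
```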


    Optics First

    Experts in complex detection systems, including optical and radar, generally recognize that the quality of the information obtained from the first stage in any detection system can be critical to developing a simpler, faster, more sensitive and more reliable system. This first stage in a machine vision system is the optical system that exploits the optical properties of the object or objects of interest.

    By filling in the requirements on the graphics screen rather than writing and testing software code, software development time is reduced. Source: Matrox

    Exploiting Optical Variations

    High signal-to-noise ratios can be obtained optically by exploiting variations in the product's materials, structure and processing to enhance optical contrast. The sample of interest is itself an optical component.
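    One common way to quantify that discrimination is to compare the mean intensity of a feature with the spread of the background. The sketch below computes such a figure of merit on hypothetical intensity samples; both the numbers and the exact metric are illustrative assumptions, not values from the article.

```python
import numpy as np

def contrast_snr(feature: np.ndarray, background: np.ndarray) -> float:
    """A simple figure of merit: mean feature-minus-background intensity
    divided by the standard deviation of the background."""
    return float(abs(feature.mean() - background.mean()) / background.std())

# Hypothetical intensity samples from a defect region and its surroundings.
rng = np.random.default_rng(1)
background = rng.normal(50.0, 5.0, size=1000)
feature = rng.normal(120.0, 5.0, size=50)
print(f"S/N = {contrast_snr(feature, background):.1f}")
```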

    Four product properties, separately or together, can be used to improve discrimination (or S/N) of features from background interference:
  • geometric properties
  • spectral properties
  • polarization
  • induced optical characteristics

    Consider these examples:
  • Optical spatial filtering can exploit how the shapes or geometries of different features change the distribution of the light reflected or transmitted by a semiconductor wafer or photo-mask target. If the light distribution did not change, the light collected by the camera would carry no information from the interaction with the target, and there would be no image.

  • Thermal processing and scratches can result in changes in the surface material and provide a spectral means to detect unannealed areas and surface damage.

  • Alignment of molecules in solids, as in some extruded materials’ processing, results in the polarization of the light transmitted through the material.

  • Varying the pressure inside automobile tires changes how the surface expands over subsurface delaminations. The induced surface changes can then be detected by optically comparing the same optical field at different pressures, as sketched below.
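    A minimal sketch of that comparison follows: two frames of the same field, nominally captured at low and high pressure, are subtracted and the residual thresholded. The frame sizes, intensities and threshold are hypothetical placeholders, not tire-inspection data.

```python
import numpy as np

def pressure_difference_map(low_p: np.ndarray, high_p: np.ndarray,
                            threshold: float) -> np.ndarray:
    """Mask the pixels whose intensity changed between the two pressures.

    The unchanged background cancels in the subtraction, while surface
    expansion over a subsurface delamination leaves a bright residual.
    """
    diff = np.abs(high_p.astype(float) - low_p.astype(float))
    return diff > threshold

# Hypothetical frames of the same optical field at two inflation pressures.
rng = np.random.default_rng(2)
low_p = rng.normal(100.0, 2.0, size=(480, 640))
high_p = low_p.copy()
high_p[240:260, 320:340] += 25.0        # induced change above a delamination
changed = pressure_difference_map(low_p, high_p, threshold=10.0)
print("changed pixels:", int(changed.sum()))
```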


    Improvements in LED performance have been increasingly applied by camera and illumination manufacturers. LED arrays mounted on the camera or housed in separate units provide illumination controlled by the central processing unit (CPU). This reduces the physical space needed by the system and simplifies illumination control.


    Integrated Systems & Smart Cameras

    Machine vision systems range from small, integrated units (smart cameras) to PC-controlled units with frame-grabbers to high-speed multiprocessor systems.

    At the high end, improvements including enhanced communications capabilities have been announced in recent months, permitting use with Ethernet, PC, robots and Fieldbus systems with transfer of data and images to an enterprise file server or display for operator use.

    Dramatic changes also have been made in small integrated units, also called smart cameras, that include the basic camera with lens and camera functions to control speed and exposure, microprocessor, software, means to communicate data and control hardware, and sometimes an integrated light source, usually an LED array.

    A range of functionality is available with these smart cameras, and sensors are available for color as well as black and white. Availability of a 1,600 by 1,200 pixel 1/8-inch charge-coupled device (CCD) for a smart camera operating at 15 frames per second was announced in the summer of 2009; this permits better resolution for a given field of view, or a larger field of view per frame without sacrificing resolution.
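    The tradeoff can be made concrete with a little arithmetic, as in the sketch below. Only the 1,600 by 1,200 pixel count and 15 fps figure come from the text; the fields of view and the one-byte-per-pixel assumption are hypothetical, chosen only to show how pixel count relates to resolvable feature size and data rate.

```python
# Hypothetical fields of view for a 1,600-pixel-wide sensor.
pixels_across = 1600

for fov_mm in (80.0, 160.0):
    mm_per_pixel = fov_mm / pixels_across
    print(f"{fov_mm:5.0f} mm field of view -> {mm_per_pixel:.2f} mm per pixel")

# Rough raw data rate at 15 frames per second, assuming one byte per pixel.
approx_bytes_per_second = 1600 * 1200 * 15
print(f"approximate raw data rate: {approx_bytes_per_second / 1e6:.1f} MB/s")
```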

    And smart cameras are smaller than one might think. Units are available from different manufacturers with dimensions as small as 30 by 30 by 60 millimeters and 44 by 44 by 39 millimeters before adding a lens.

    There are continuing announcements of better functionality in sensor features, software applications, communications and lighting options.



    CCD and CMOS Sensors

    CCD and complementary metal-oxide-semiconductor (CMOS) sensors are the two leading multi-pixel silicon sensors, in both linear and area arrays. Both continue to see performance improvements and cost reductions. The choice usually depends on both their specifications and the specific needs of the application. CMOS performance has improved to the point that CMOS sensors are now used in most cell phones.

    Since signals from individual CMOS pixels can be interrogated and output for use, whereas all of the CCD pixels need to be read before signals from individual pixels can be processed, CMOS sensors provide considerably faster data acquisition when limited areas of interest are sufficient for a particular application.
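    A rough sense of that speed advantage is sketched below under the simplifying assumption that readout time scales with the number of rows read; the frame sizes and rates are hypothetical.

```python
# Simplified model: readout time is assumed proportional to the number of rows read,
# so restricting a CMOS sensor to a small window returns frames proportionally faster.
# All numbers here are hypothetical.
full_rows, full_fps = 1024, 30          # hypothetical full-frame readout
roi_rows = 128                          # hypothetical window around the area of interest
roi_fps = full_fps * (full_rows / roi_rows)
print(f"full frame: {full_fps} fps; {roi_rows}-row window: about {roi_fps:.0f} fps")
```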

    The CCD has been the sensor of choice because of its lower noise levels and more uniform response across the sensor.

    Improvements in both types of sensors continue to be announced.

    A graphics-based screen for use during operation is shown here. This software development screen can be configured with annotations on the displayed image of the inspected part. Source: Matrox

    Software

    Generating software for new situations often has been a time-consuming operation. Development environments with graphics and visual aids have been evolving to permit operators and system integrators to build programs by using graphics, selecting applications and clicking on toolbox menus. The resulting programs are reusable for other applications.

    With some vision systems, scripting tools automatically record a sequence of manual operations performed with a mouse. The recorded sequence can then be used in a program for an automated system and supplemented by routines from software toolboxes and libraries that are fully compatible with the scripted code. These make software development feel like assembling a virtual erector set.
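    The record-and-replay idea can be sketched generically, as below; everything in the example, from the class to the step names, is a hypothetical illustration and does not correspond to any vendor's actual scripting interface.

```python
from typing import List, Tuple

# Hypothetical record-and-replay sketch: each manual operation is logged as a
# named step with its parameters, then the sequence is replayed automatically.
Step = Tuple[str, dict]

class InspectionScript:
    def __init__(self) -> None:
        self.steps: List[Step] = []

    def record(self, name: str, **params) -> None:
        """Log one manual operation (for example, a mouse-driven tool selection)."""
        self.steps.append((name, params))

    def replay(self, toolbox: dict) -> None:
        """Run the recorded sequence using callable routines from a toolbox or library."""
        for name, params in self.steps:
            toolbox[name](**params)

def fake_tool(name: str):
    """Stand-in for a real toolbox routine; it only prints what it would do."""
    def run(**params):
        print(f"{name}({params})")
    return run

# Record a hypothetical manual session, then replay it as an automated sequence.
script = InspectionScript()
script.record("acquire_image", exposure_ms=5)
script.record("threshold", level=128)
script.record("measure_blob_area", min_pixels=20)

toolbox = {n: fake_tool(n) for n in ("acquire_image", "threshold", "measure_blob_area")}
script.replay(toolbox)
```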

    Machine vision has proven its value in factories over the last decade. As with other digital devices, advances have tended to consist of improving computing power, memory, communications and compatibility with other units in the production line; reducing physical dimensions; expanding functionality; and simplifying operations. Application of optical techniques to increase the S/N ratio or discrimination, with resulting improvements in reliability, sensitivity and line speed, is increasing.

    In addition, more effort is being applied to turning specialized systems that require tweaking into more easily automated units. Simpler interoperability between the machine vision system and communications and production control units is increasingly being provided. V&S

    Tech Tips

  • Machine vision systems range from small, integrated units, or smart cameras, to PC-controlled units with frame-grabbers, to high-speed multiprocessor systems.

  • At the high end, recent improvements include enhanced communications capabilities, permitting use with Ethernet, PC, robots and Fieldbus systems with transfer of data and images to an enterprise file server or display for operator use.

  • Dramatic changes also have been made in small integrated units, also called smart cameras, that include the basic camera with lens and camera functions to control speed and exposure, a microprocessor and software.