The importance of lenses in machine vision applications cannot be overstated. At Edmund Optics, a producer of optics, imaging, and photonics technology, the lenses offered are optical components that either focus or diverge light and may consist of single or multiple elements. From there, however, the specifics of each lens become appreciably more complex for non-experts. For example, what is the difference between an achromatic and an aspheric lens? How does one fit a lens to an application? How is optics changing, if at all?

For an inside look at the precise functions of these lenses, and at how some exciting new developments are transforming the optics industry as we know it, Quality spoke to Nicholas Sischka, an optical engineer at Edmund Optics who specializes in machine vision.

ON ACHROMATIC VERSUS ASPHERIC LENSES

Nicholas Sischka, Edmund Optics: The most basic optical element with curvature is either a positive lens or a negative lens. Positive lenses are typically used to converge, or focus, light, and negative lenses are used to diverge light. On their own, these elements can’t do much. They’re typically highly aberrated, meaning there’s a lot of error: physical errors that manifest themselves when the lens is used as an imager. That’s not because the lens has any inherent manufacturing defects, but because you’re trying to image, for example, a large field of view with one element. So if you’re trying to image using a single element, you’re typically going to get poor image quality, especially as you approach larger fields of view.
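As a rough illustration of that converging-versus-diverging behavior (a sketch added here, not something Sischka walks through), the thin-lens equation 1/s_o + 1/s_i = 1/f shows how the sign of the focal length separates the two cases; the distances below are illustrative assumptions:

```python
# Minimal thin-lens sketch: 1/s_o + 1/s_i = 1/f. A positive focal length
# yields a real image (converging); a negative one yields a virtual image
# (diverging). Distances are illustrative, in millimeters.

def image_distance(f_mm: float, object_distance_mm: float) -> float:
    """Return the image distance for a thin lens of focal length f_mm."""
    return 1.0 / (1.0 / f_mm - 1.0 / object_distance_mm)

print(image_distance(50.0, 200.0))   # ~66.7 mm: positive lens converges light
print(image_distance(-50.0, 200.0))  # -40.0 mm: negative lens diverges light
```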

That’s when you need to start looking at combining lenses, such as using positive and negative elements in conjunction with one another. This can be accomplished with machine vision objectives or assemblies, but also with individual lenses like achromats (also called doublets). Achromats are two lenses cemented together, usually with an optical adhesive like NOA-61. These lenses are primarily used for color correction, since an individual lens focuses different wavelengths of light differently, à la [Pink Floyd’s] The Dark Side of the Moon [album] cover. We need to use different glass materials to focus light differently, and so an achromatic lens is one of the most basic examples of using a positive and a negative lens together to focus light better than a single lens alone can, because it corrects mainly for color errors, what we call chromatic aberrations.
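To make the color-correction idea concrete, here is a hedged sketch of the classic achromatic doublet condition, which splits a target power between a positive crown element and a negative flint element so their chromatic focal shifts cancel. The Abbe numbers are illustrative textbook-style values (roughly a crown like N-BK7 and a flint like F2), not Edmund Optics specifications:

```python
# Achromatic doublet power split: choose element powers so that
# phi_c / V_c + phi_f / V_f = 0 (color correction) while phi_c + phi_f = phi.

def achromat_powers(total_power: float, v_crown: float, v_flint: float):
    """Return (crown, flint) element powers for an achromatic doublet."""
    phi_crown = total_power * v_crown / (v_crown - v_flint)
    phi_flint = -total_power * v_flint / (v_crown - v_flint)
    return phi_crown, phi_flint

phi = 1.0 / 100.0                  # target 100 mm focal length (power in 1/mm)
pc, pf = achromat_powers(phi, 64.2, 36.4)
print(1.0 / pc, 1.0 / pf)          # ~43.3 mm crown, ~-76.4 mm flint
```

The two elements fight each other in power but cancel each other’s color error, which is exactly the positive-plus-negative pairing described above.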

So then the next step is combining achromatic elements with additional single lens elements and wrapping a barrel or housing around them: now you have a machine vision lens. Machine vision lenses are typically designed at a specific focal length, which defines the angular field of view; they offer excellent color correction and are generally very well corrected for things like spherical aberration as well.
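As a quick illustration of how focal length sets the angular field of view (this sketch and its 7.2 mm sensor width are assumptions for illustration, not from the interview):

```python
# Angular field of view across one sensor dimension: AFOV = 2 * atan(h / (2f)).

import math

def angular_fov_deg(sensor_dim_mm: float, focal_length_mm: float) -> float:
    """Full angular field of view, in degrees, for sensor dimension h and focal length f."""
    return 2.0 * math.degrees(math.atan(sensor_dim_mm / (2.0 * focal_length_mm)))

for f in (8.0, 16.0, 35.0):
    print(f, round(angular_fov_deg(7.2, f), 1))  # shorter focal length -> wider field of view
```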

Aspheric lenses are also used in machine vision objectives, and aspheres have an advantage in that they can correct for aberrations that traditional spherical lenses cannot. The most common example is spherical aberration. Take a familiar example of an asphere, something like a parabola: a parabola, when it focuses light, has no spherical aberration. In practice, manufactured aspheric elements are usually not specific conic sections like parabolas but more generalized aspheric shapes, and they are widely used to correct for other aberrations that regular spherical lenses cannot correct for.
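For reference, the generalized shapes Sischka mentions are usually described by the standard even-asphere sag equation; a minimal sketch follows, where a conic constant of k = -1 recovers the parabola. The polynomial coefficients here are invented purely for illustration:

```python
# Even-asphere surface sag: base conic plus polynomial departure terms.

import math

def asphere_sag(r: float, radius: float, k: float, coeffs=()) -> float:
    """Sag z(r) for radius of curvature, conic constant k, and (A4, A6, ...) terms."""
    c = 1.0 / radius
    z = c * r**2 / (1.0 + math.sqrt(1.0 - (1.0 + k) * c**2 * r**2))
    for i, a in enumerate(coeffs):
        z += a * r ** (2 * i + 4)   # A4*r^4 + A6*r^6 + ...
    return z

print(asphere_sag(5.0, 50.0, -1.0))          # parabolic surface (k = -1)
print(asphere_sag(5.0, 50.0, 0.0, (1e-6,)))  # sphere plus a small 4th-order departure
```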

Aspheres are more expensive than spherical lenses because the manufacturing and QA process is more time-consuming. Aspheres can take on wildly different form factors than spherical lenses; they go into things like cell phone cameras, and that’s how those lens assemblies are able to get so compact. They can also be used in machine vision objectives to add further correction over spherical lenses alone. Or, if you had the chance to use an asphere, you could potentially use one asphere instead of two or three spherical elements and get similar image quality, lighter system weight, and possibly a reduction in overall cost.

ON FITTING THE LENS TO THE APPLICATION

NS: When I first started at Edmund Optics, I started in what we call our applications engineering group. The applications engineers are effectively tech support, so we would take calls from anyone who used our products. Whenever we started talking with a customer on the phone, we would have no idea what their application was. It could be a laser-based application, a machine vision application, or a life sciences application, and [we needed to have] the ability to narrow down exactly what they needed, in terms of ‘Oh, yes, a single lens would be perfect for this,’ or ‘You need a machine vision objective.’

A prime example of an application where someone needs a single lens is when they are trying to, for lack of a better term, move light from point A to point B. Something like illumination optics, where they’re not too concerned with the MTF (modulation transfer function) of the lens or the overall image quality, but rather need to focus light from an LED source onto a sample, perhaps. That’s a scenario where a single lens would be very useful. If they need to roughly collimate a beam coming out of a fiber optic light guide, that’s another application where a single lens is useful. Where something would benefit from an achromat is if, building off of that same example, it was actually white light coming out of that fiber optic light guide and you needed to maintain a tighter spot size or get slightly better collimation. That’s where the achromatic lens could be more useful.
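As a back-of-the-envelope illustration of that collimation example (the 5 mm light guide diameter and the focal lengths are assumptions for this sketch):

```python
# Collimating an extended source: residual full divergence is roughly the
# source diameter divided by the lens focal length (small-angle approximation).

import math

def residual_divergence_deg(source_dia_mm: float, focal_length_mm: float) -> float:
    """Approximate full-angle divergence, in degrees, after collimation."""
    return math.degrees(source_dia_mm / focal_length_mm)

print(residual_divergence_deg(5.0, 50.0))   # ~5.7 deg with a 50 mm lens
print(residual_divergence_deg(5.0, 100.0))  # ~2.9 deg: longer focal length collimates tighter
```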

Typically, anything that involves a camera (if someone says, ‘I’m trying to image X, Y, Z’) is where you’ll always go to a machine vision objective. You’ll never want to use single elements or aspheres on their own to image onto a camera sensor. For those cases we recommend machine vision objectives, which are designed specifically for camera sensors. So what you would select varies from application to application, of course.

ON CHANGING CURVATURE WITH LIQUID LENSES

NS: One of the things that is interesting in optics these days, and which has been making a splash over the last couple of years (I’ve written a couple of articles about it), is liquid lenses, or focus-tunable lenses.

[Edmund Optics] doesn’t manufacture liquid lens elements, but there are a few companies that do. The three big players that I’m aware of are TAG Optics, Varioptic, and a company called Optotune, which is out of Switzerland. They each use different techniques, but the principle is this: a standard lens is made of a rigid material, typically glass, and has a fixed focal length associated with it. With a liquid lens, you can dynamically change the focal length on the fly by applying a different current or voltage to the element, depending on the architecture of the liquid lens you are using. You are literally changing its curvature. So if you augment something like a machine vision lens with a liquid lens element, you have the ability to change focal positions very quickly.
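A simple sketch of why changing curvature retunes focus, using the thin lensmaker’s equation, 1/f = (n - 1)(1/R1 - 1/R2); the refractive index and radii below are illustrative, not taken from any liquid lens vendor’s data sheet:

```python
# Thin lensmaker's equation: focal length follows directly from surface
# curvature, so an element that can reshape its surface retunes its focus.

def focal_length_mm(n: float, r1_mm: float, r2_mm: float) -> float:
    """Thin-lens focal length; use float('inf') for a flat surface."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

# Tightening the curved interface (second surface flat) shortens the focal length.
for r1 in (40.0, 20.0, 10.0):
    print(r1, focal_length_mm(1.5, r1, float("inf")))  # 80, 40, 20 mm
```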

ON OVERCOMING LIMITATIONS WITH EXTENDED DEPTH OF FIELD TECHNOLOGY

NS: What’s been really exciting me lately on the machine vision side of things is all of the extended depth of field advances going on. Several different technologies make this happen, one of which is light-field photography, which has been commercialized by a company called Lytro; it’s starting to be used in the machine vision world to do things like 3D reconstruction. Typically, systems are effectively limited by depth of field: you have some focus, and then there’s some range over which your object can drift before it goes out of focus. Extended depth of field technology gives you the ability to overcome those static depth of field limitations.
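To put that static limitation in numbers, here is a rough sketch using a common machine vision depth-of-field approximation; the f-numbers, circle of confusion, and magnification are assumptions for illustration:

```python
# Approximate object-side depth of field: DOF ~ 2 * N * c * (m + 1) / m^2,
# for working f-number N, circle of confusion c, and magnification m.

def depth_of_field_mm(f_number: float, coc_mm: float, magnification: float) -> float:
    """Approximate total depth of field in millimeters."""
    m = magnification
    return 2.0 * f_number * coc_mm * (m + 1.0) / (m * m)

# Closing the aperture buys depth of field, but only linearly, and at the cost
# of light; extended depth of field techniques aim past this static trade-off.
for n in (2.8, 5.6, 11.0):
    print(n, round(depth_of_field_mm(n, 0.005, 0.5), 3))  # 0.168, 0.336, 0.66 mm
```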

That’s something that I’m really excited to see, because optics technology fundamentally hasn’t changed in the last 200 years. We’ve been using glass elements to focus light since well before Isaac Newton’s and even Galileo’s time, and Carl Gauss really made the last large leaps in terms of focusing light with glass. So, as a technology that hasn’t changed very much over the last two centuries, we’re now getting to the point where other technologies, things like silicon sensors, are starting to surpass what the optics are physically capable of. For the first time in a very long time, optics is being forced to overcome limitations imposed by that other technology, because the optics had never been the limiting factor in any setup until the last decade or so. It’s very interesting to see innovations starting to develop that never had to in the past. It’s a very exciting time! V&S

This interview has been edited for length and clarity.