With all the advancements and improvements in machine vision technology, the key to selecting the “right” camera depends a lot on the application requirements. Without knowing what is really needed or necessary for the application, it is easy to get caught up in technology and choose the wrong camera.
Since the camera is most likely the key component of the solution, the first step is to get a clear understanding of the customers’ expectations and needs. In many cases, the specification of what they need to measure, or the data they need to collect from the system, will drive the solution selected.
Does the customer have a camera or system in mind? If so, ask how they selected it. Was there a feature or function that drove that choice? If the customer does not request a specific camera or technology, then it is important to get a clear picture of their real needs. Typical questions include the budget and timeline for a solution; how fast the items are moving; how many types of parts there are; and whether there will be more parts in the future. There are also questions that are less obvious or harder to ask: Is there a real budget for the project? Who on the customer's side will be the technical lead? Who will own the solution when it is done? How will support be handled? Working with the customer to identify the needs of the application, and not just the specifications of the measurements, will allow for a more complete solution.
Other things to consider include how the camera needs to be installed. Will it be mounted over a conveyor or integrated into a compact machine? The challenge with installing any vision solution is that each one tends to be a bit different. One of the ways to help customers get the technology up and running is through the use of a network of trained and authorized system integrators. These companies take the camera and implement it in conjunction with adjacent technologies to meet the specific needs of the particular customer’s application.
Needs such as system upgradeability, who will be responsible for programming the next new part made on that production line, and how the data will be provided to the customer's control system are all questions that will influence the selection of the camera.
In many cases, the traditional smart camera is the workhorse of machine vision components. Smart cameras from multiple vendors offer a rugged solution with a flexible, predefined set of tools that differs by vendor. These cameras typically provide multiple lens, lighting and communication options (digital I/O, web interfaces, Ethernet and many fieldbuses).
The toolset may include several types of common tools, like blob inspections, a pattern-matching algorithm and edge inspection tools. These tools, along with filter functions, allow the user to build vision programs for a wide variety of applications.
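To make the blob tool concrete, here is a minimal sketch of what such a tool does internally: threshold a grayscale image, then count the connected groups of bright pixels. The hard-coded toy image and function names are illustrative assumptions, not any vendor's actual toolkit; real tools run on live camera frames and expose these steps as configurable parameters rather than code.

```python
def count_blobs(image, threshold):
    """Count 4-connected blobs of pixels brighter than `threshold`."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]  # flood-fill the rest of this blob
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and image[y][x] > threshold and not seen[y][x]):
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return blobs

# A tiny "frame" with two separate bright regions.
frame = [
    [0, 200, 0,   0, 0],
    [0, 200, 0, 180, 0],
    [0,   0, 0, 180, 0],
]
print(count_blobs(frame, 128))  # two bright regions -> 2
```

On a smart camera, the user never writes this loop; the point is that a "blob count" result is simply this kind of thresholded connected-component analysis with vendor-tuned parameters.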
Smart cameras are becoming easier to program, so people using these solution-based cameras are able to configure everything themselves. This has helped push vision sensors as a viable alternative to smart cameras in many situations.
Vision sensors have increased the number of applications that are solvable with a simple solution. Over the last year, there have been a number of new entries into this field that have continued to expand the application base. These sensors are very similar to the smart camera group, but, in general, they have a smaller, more specific toolset. These tools include pixel counters and pattern tools that can be applied to images simultaneously.
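A pixel counter is among the simplest of these tools: count the pixels inside a region of interest that exceed an intensity threshold, then compare the count against user-set pass/fail limits. The sketch below illustrates the idea; the function names, message strings and limits are assumptions for illustration, not any sensor's actual interface.

```python
def pixel_count(image, roi, threshold):
    """Count pixels above `threshold` inside roi = (top, left, bottom, right)."""
    top, left, bottom, right = roi
    return sum(
        1
        for row in image[top:bottom]
        for px in row[left:right]
        if px > threshold
    )

def inspect(image, roi, threshold, lo, hi):
    """Return 'PASS' if the bright-pixel count falls inside [lo, hi]."""
    n = pixel_count(image, roi, threshold)
    return "PASS" if lo <= n <= hi else "FAIL"

# Toy frame: four bright pixels, which is inside the 3-5 window.
frame = [
    [0,   0, 255, 255],
    [0, 255, 255,   0],
    [0,   0,   0,   0],
]
print(inspect(frame, (0, 0, 3, 4), 128, lo=3, hi=5))  # 4 bright pixels -> PASS
```

On a vision sensor, the operator sets the ROI, threshold and limits through a configuration screen; the device runs the equivalent of this check on every trigger.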
In the past, some suppliers reduced the resolution of the devices to keep speeds high. With improvements in and greater use of FPGAs, customers can now expect similar speeds from VGA and higher-resolution systems. The amount of data available from vision sensors is also increasing. Vision sensors are sending results via Ethernet TCP/IP, EtherNet/IP and other network protocols. This not only provides access to more data, but also lets the user modify and monitor multiple units from a single location over the network.
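At its simplest, reporting results over plain TCP/IP looks like the sketch below. The semicolon-delimited message format ("part_id;result;score") is an assumption made up for this example; every sensor vendor defines its own protocol, and fieldbus options such as EtherNet/IP work differently. A background thread plays the sensor's role so the example is self-contained on the loopback interface.

```python
import socket
import threading

# Listening socket standing in for the vision sensor.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0 -> OS picks a free port
srv.listen(1)
port = srv.getsockname()[1]

def fake_sensor():
    """Accept one connection and send a single inspection result."""
    conn, _ = srv.accept()
    conn.sendall(b"PART-042;PASS;0.97\n")
    conn.close()
    srv.close()

def read_result(port):
    """Connect to the sensor, read one line and parse it into fields."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        line = sock.makefile().readline().strip()
    part_id, verdict, score = line.split(";")
    return {"part": part_id, "result": verdict, "score": float(score)}

threading.Thread(target=fake_sensor, daemon=True).start()
result = read_result(port)
print(result["result"])  # PASS
```

The same client pattern extends to polling many sensors from one monitoring station, which is what makes the networked access described above practical.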
One of the major differences between smart cameras and standard cameras is the way data is processed. The standard camera typically captures the image and transfers the data to a PC for analysis, using either an off-the-shelf image library or a program written specifically for that application. A smart camera, by contrast, makes its own decisions: it captures the image, processes the data and provides some type of output to the user. The analysis is usually done with a toolkit designed around the camera, so there is no low-level programming; the user selects tools and sets parameters instead of writing code.
In many cases, the pros and cons of either type of solution depend on the application. One inherent difference is that a PC-based system requires a PC at runtime, while a smart camera does not. Most of the other components used to solve an application are similar, with lighting and lenses chosen based on the specifics of the application.
Often, applications are solved with a PC-based system because of a need that a smart camera cannot meet. Many end users prefer smart camera solutions because there are no additional PCs on the factory floor that may need maintenance. Still, the flexibility of PC systems, with additional memory and processing horsepower, makes them ideal for some solutions.
There are some interesting trends in smart cameras, such as more and more application-specific vision sensors and 3-D imaging sensors. An increasing number of the traditional photoelectric sensor companies are making a bid for vision business with the use of vision sensors. Though they are called sensors, they pack a lot of vision performance to solve applications that are beyond simple point sensors, but do it in a way that makes vision accessible to everyone on the factory floor.
The other trend is that companies are introducing more 3-D cameras to the smart camera field. The idea is to get even more data into the vision system to make better, more reliable decisions. With 3-D, users can now make decisions based on object height, as well as contrast in the X-Y plane. Some cameras use laser triangulation to gain the third dimension, while others offer systems based on time-of-flight measurement, stereo and structured light. All have their place and solve different types of applications. This trend seems to be gaining momentum, not just on the hardware side, but also by image-analysis software companies adding more tools to work with different types of 3-D data.
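The triangulation geometry behind many of these cameras reduces to simple trigonometry. In the simplified setup sketched below, a laser sheet shines straight down and the camera views it at an angle theta from vertical; raising the surface by height h shifts the laser line sideways in the image by x = h * tan(theta), so h = x / tan(theta). Real systems also calibrate lens distortion and pixel-to-millimeter scale, which is omitted here.

```python
import math

def height_from_shift(shift_mm, camera_angle_deg):
    """Recover surface height from the observed lateral shift of the laser line.

    Assumes the simplified geometry above: laser perpendicular to the
    surface, camera at `camera_angle_deg` from vertical, shift already
    converted from pixels to millimeters.
    """
    return shift_mm / math.tan(math.radians(camera_angle_deg))

# At a 45-degree viewing angle, the lateral shift equals the height.
print(height_from_shift(2.0, 45.0))  # 2.0 mm
```

Steeper viewing angles trade measurement range for height resolution, which is one reason triangulation, time-of-flight, stereo and structured light each suit different applications.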
In addition, there are combination cameras that make many different types of inspections possible. Some are able to take advantage of a technique called "multiscan" that allows a user to get multiple types of images from a single camera in a single scan. The camera can provide, for example, a 3-D image, a 2-D grayscale image, a 3-D color image and a 2-D image, all at the same time. This opens the door to several other applications, as the amount of system hardware required is greatly reduced.
As the number of camera types, communication options, software and analysis tools increases, the best way to pick the right camera is to work with someone you trust. Be it a vendor that you have used on multiple installations, a system integrator with experience in the application at hand or a vendor with a wide range of solutions, the only way to pick the right camera is to keep an open mind and be specific about the results needed. If you have that much figured out, the rest is simple. V&S
Tech Tips

Keep an open mind and be specific about the results needed.
Multiscan cameras can provide a 3-D image, a 2-D grayscale, a 3-D color and 2-D image all at the same time.
Smart cameras eliminate the need for additional PCs on the factory floor.