The drive for improved efficiency and increased profit is critical in the majority of today’s manufacturing facilities where downtime equals significant cost. High-speed video is increasingly used to identify problem areas and to ensure top quality and efficiency throughout the production process. The ability to see a very fast moving process in high-definition slow motion can be a valuable tool for the engineer charged with quickly solving a costly line blockage or extracting the maximum efficiency from a new machine.
In a production process, the high-speed camera can be set to automatically record as many frames as needed to show the cause of the jam. The images are then automatically downloaded to a remote drive, feasibly hundreds of miles away at corporate headquarters, to await later review and analysis.
Slow-motion cameras and their accompanying software provide the ability to save recorded files in a wide range of industry-standard file formats, such as AVI or JPEG. These high-speed videos can be used as a training aid to help personnel recognize the symptoms that contribute to line blockages or failures, regardless of an employee’s geographical location or prior training.
High Speed
All high-speed video cameras operate at a maximum resolution up to a certain speed, and then reduce the resolution to achieve higher speeds. It is important to establish what camera speed, or frame rate, is required to capture the event of interest. When recording a cyclical process, such as labeling or packaging that takes place at x times per second, generally a minimum of three images per cycle are required (more is always better) to understand precisely what is occurring. If a production line assembles boxes at 6,000 units per minute, or 100 boxes per second, the above rule suggests a minimum frame rate of 300 frames per second (i.e., three frames per box) will be needed to clearly see the process.
With non-cyclical events, such as missile launches or vehicle impact tests, careful planning is critical to capture the action at the most significant moment. It is important to determine what temporal detail must be measurable in the finished image sequence or output video. In an automotive crash test (recorded at 1,000 frames per second, fps, per federal mandate), most of the action occurs within 100 milliseconds. In recording a missile launch, the speed of the action can be even higher. If a projectile is traveling at 500 meters per second, and there is a 100-meter field of view (FOV), it will pass through the image window in one fifth of a second, or 200 milliseconds. However, if you need to capture 100 frames within this 100-meter FOV you will need a camera that can take an image every 2 milliseconds which equates to 500 fps. If the FOV is reduced to 10 meters while all other criteria remain the same, it will require ten times that speed, or 5,000 fps, to capture the same 100 frames. Frame rate comes down to how many images you need to see of the event, regardless of whether it is per cycle or the whole event.
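The frame-rate arithmetic above can be sketched in a few lines. This is a minimal illustration only; the function and variable names are hypothetical, not part of any camera vendor's software.

```python
# Hypothetical helpers illustrating the frame-rate rules described above.

def cyclical_frame_rate(cycles_per_second, frames_per_cycle=3):
    """Minimum frame rate for a repeating process (rule of thumb:
    at least three frames per cycle)."""
    return cycles_per_second * frames_per_cycle

def fov_frame_rate(velocity_m_s, fov_m, frames_needed):
    """Frame rate needed to capture a set number of frames while an
    object crosses the field of view."""
    transit_time_s = fov_m / velocity_m_s   # time the object is in view
    return frames_needed / transit_time_s   # required frames per second

# Boxing line: 6,000 units/min = 100 per second, three frames each
print(cyclical_frame_rate(100))        # 300 fps

# Projectile: 500 m/s through a 100 m FOV, 100 frames wanted
print(fov_frame_rate(500, 100, 100))   # 500.0 fps

# Same shot with the FOV narrowed to 10 m
print(fov_frame_rate(500, 10, 100))    # 5000.0 fps
```

Either way, the required rate reduces to the same question posed in the text: how many images of the event are needed, divided by how long the event is visible.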
Recording Duration
Another important area to consider when evaluating a high-speed digital video system is record duration, or record time. This topic is sometimes confused with how the camera is triggered, but the real question is: How much (in seconds) of the event needs to be recorded? Remember, the whole event need not be recorded, only the specific part of interest.
Today’s high-speed video cameras use onboard digital RAM that can be overwritten, negating the requirement to know exactly when the moment of interest will occur. Just set the camera recording and, when the event occurs, feasibly days later, hit the stop button. This is a vast improvement over the old film cameras, which took time to get up to speed and then could only maintain that speed for a few seconds before running out of film. When the digital buffer is full, the first image recorded is automatically overwritten, and the system continues to overwrite the oldest data until it receives a trigger signal, such as an optical or audio trigger, a switch closure, or a digital TTL trigger such as an alarm or keyboard keystroke. Depending upon how the operator has configured the system, it can save all the images recorded before the trigger signal was received, save everything recorded after the signal came in, or save an operator-selected percentage of pre- and post-trigger images. Today’s advanced systems can automatically download some or all of the saved images to a networked hard drive, perhaps thousands of miles away, before automatically re-arming to await the next trigger signal.
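The circular-buffer behavior described above can be sketched as follows. Frame capture and trigger wiring are simulated here; the class and parameter names are illustrative, not taken from any real camera SDK.

```python
from collections import deque

# Minimal sketch of a ring (circular) recording buffer with an
# operator-selected pre/post-trigger split, as described in the text.

class RingRecorder:
    def __init__(self, capacity, pre_trigger_fraction=0.5):
        self.buffer = deque(maxlen=capacity)  # oldest frames overwritten
        self.pre = int(capacity * pre_trigger_fraction)
        self.post = capacity - self.pre

    def record(self, frame):
        self.buffer.append(frame)             # overwrites oldest when full

    def on_trigger(self, frame_source):
        """Keep the last `pre` frames already in memory, then record
        `post` more frames and return the combined clip."""
        clip = list(self.buffer)[-self.pre:]
        for _ in range(self.post):
            clip.append(next(frame_source))
        return clip

# Simulation: frames are integers; the trigger arrives after frame 999.
frames = iter(range(10_000))
rec = RingRecorder(capacity=100, pre_trigger_fraction=0.7)
for _ in range(1000):
    rec.record(next(frames))
clip = rec.on_trigger(frames)
print(len(clip), clip[0], clip[-1])   # 100 frames: 70 pre-, 30 post-trigger
```

The key point the sketch captures is that the camera never needs to know when the event will happen: it records continuously, and the trigger only decides which slice of the buffer is kept.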
Spatial Resolution
Resolution, or more correctly spatial resolution, is an important topic to consider when seeking the ideal system for your specific needs. The best example of why resolution matters is detailed in this real-life scenario. One customer needed to be able to measure within one tenth of an inch in an 8.5-foot field of view. Because 8.5 x 12 = 102, the camera would have 102 inches to cover. In order to measure to one tenth of an inch, it would require ten times this number, or 1,020 pixels. New developments in motion tracking algorithms enable motion analysis software to track very accurately to about one tenth of a pixel. However, it is still recommended that whenever possible, operators have the full quota of pixels needed to enable them to discern what they are viewing. To achieve the desired framing rate (camera speed), one may be forced to sacrifice some of the resolution, as it is currently not possible to record megapixel resolution images at 10,000 fps.
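The customer scenario above reduces to one line of arithmetic. The function below is a hypothetical illustration of that calculation, not part of any analysis package.

```python
# Pixels across the field of view needed so that each pixel spans exactly
# one measurement unit, per the example in the text.

def pixels_required(fov_inches, measurements_per_inch):
    return fov_inches * measurements_per_inch

fov = 8.5 * 12                     # 8.5-foot field of view = 102 inches
print(pixels_required(fov, 10))    # 1020.0 pixels to resolve 0.1 inch
```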
When selecting a camera, it is important to determine what the pixel resolution is at the speed required, because all high-speed video cameras reduce the resolution to achieve higher speeds. Be especially wary of systems that interpolate pixels that do not exist, as these virtual pixels do not lend themselves well to serious analysis.
Dynamic Range
The other form of resolution to consider is called bit depth, sometimes referred to as dynamic range. Bit depth refers to how many shades of gray the sensor uses to transition from pure white to pure black. Older systems used 8 bits, which means they used 256 steps to transition from white to black. Newer systems offer either 10 bits (1,024 steps) or even 12 bits (4,096 steps), which is essential for certain advanced applications. The extra bits are useful in advanced systems because they offer the ability to select which eight of the 10 or 12 bits recorded are displayed. This can be a very effective means of extracting the maximum detail from shadows or other under-exposed areas, or provide an additional means of prolonging the record time.
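Selecting which eight of the recorded bits to display amounts to a bit shift. A real camera does this in hardware or in its viewer software; the sketch below only illustrates the idea, and the names and sample value are hypothetical.

```python
# Choose which 8 of a 12-bit pixel's bits are mapped to the display.

def select_8_bits(pixel12, shift):
    """shift=4 keeps the top 8 bits (normal rendering of a 12-bit value);
    shift=0 keeps the bottom 8 bits, brightening deep shadows."""
    return (pixel12 >> shift) & 0xFF

dark_pixel = 0x02A                   # 42 out of 4,095: nearly black
print(select_8_bits(dark_pixel, 4))  # 2  -> still nearly black on screen
print(select_8_bits(dark_pixel, 0))  # 42 -> shadow detail becomes visible
```

This is why the extra bits help with under-exposed areas: detail that would be crushed into the bottom two steps of an 8-bit image survives in the low-order bits of a 12-bit recording.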
High-speed systems are generally available in both monochrome and color. Both use the same basic monochrome sensor, but the color versions add a color filter that sacrifices some light sensitivity, even when micro lenses are used to maximize the number of photons falling on the light-gathering part of the sensor. Most systems adopt a color filter matrix known as a Bayer pattern to produce acceptable colors from what is in reality a black and white sensor. This interpolated color requires three color values for each pixel, which is why color images carry three times the bit depth: 24 bits vs. eight, or 30 vs. 10. If there is not a critical need for color images, it is best to stick with monochrome systems, as they tend to be less expensive as well as more sensitive while providing comparable image quality.
Light Sensitivity
It is possible to record high-speed images of a mousetrap closing at 1,000 frames per second with no additional shuttering, so the effective shutter, or exposure, time is 1/1,000th of a second. But upon closer examination it will be apparent that the trap jaws are quite blurred. One might assume more frames per second are needed; in fact, there are already sufficient images. They are simply blurred, and the solution is to increase the shutter speed.
Shutter speed is often confused with the framing rate, but they are distinctly different. A 35-millimeter film camera offers shutter speeds ranging from seconds to thousandths of a second, yet it still takes only one to three pictures per second at most. Similarly, if a high-speed camera is recording at 1,000 frames per second, by default it gathers light (exposes the sensor) for one thousandth of a second per frame. With digital gating electronics, the actual time the sensor is exposed to light can be reduced to microseconds or less. In the mousetrap example, if the record rate is kept at 1,000 fps but the shutter is shortened from the reciprocal of the frame rate (0.001 second, or 1 ms) to 100 microseconds, the blur is reduced to one tenth of its former length.
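Because blur length depends only on subject speed and exposure time, not on frame rate, the effect of shortening the shutter is easy to estimate. The jaw speed below is an assumed figure chosen purely for illustration.

```python
# Back-of-envelope motion-blur estimate: blur length equals subject speed
# multiplied by exposure time, independent of the frame rate.

def blur_mm(speed_mm_per_s, exposure_s):
    return speed_mm_per_s * exposure_s

jaw_speed = 5_000                  # assumed mousetrap jaw speed, mm/s
print(blur_mm(jaw_speed, 0.001))   # 5.0 mm of blur at a 1 ms shutter
print(blur_mm(jaw_speed, 0.0001))  # 0.5 mm at 100 microseconds: one tenth
```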
Do not discount blur; it can be a very important consideration when working with high-speed events, especially projectiles, where it can be used to calculate accurately the speed at which a projectile is moving if the framing rate and shutter exposure time are known.
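The speed calculation works in reverse of the blur estimate: the length of the streak in a single frame, divided by the exposure time, gives the velocity. The numbers below are illustrative only.

```python
# Estimate projectile speed from the blur streak in one frame, as noted
# in the text: velocity = streak length / exposure time.

def speed_from_blur(streak_length_m, exposure_s):
    return streak_length_m / exposure_s   # metres per second

# A 0.05 m streak captured with a 100-microsecond shutter
print(speed_from_blur(0.05, 100e-6))      # about 500 m/s
```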
Quantifying the sensor’s light sensitivity is an inexact science when using the more familiar ASA/ISO measurement units used to rate 35-millimeter film. If the subject is light/heat-sensitive or the production environment is problematic (e.g., some production lines use light sensors as safety guards and additional lighting can accidentally trigger them) it is best to avoid using anything less than 4800 ISO/ASA for a monochrome camera, or one-third that for color. It is important to note that color sensors are usually one half to one third as sensitive as their black and white counterparts.
Consider the End Use
The next point to consider in the selection process is the end use. While troubleshooting a gasket manufacturing line, an instant slow-motion review of a high-speed video that was just recorded may be all that is needed. If operators need to save a portion of the image sequence for later review and/or analysis, they will need to settle some fundamental issues: how to get the images out of the camera’s RAM and into the real world, and what to do with the images afterward. The key is to look at the PC used and determine what communication protocols it supports. Alternatively, one may be able to feed the camera’s standard (RS-170) video output directly to a frame grabber or VCR. Most PCs, including laptops, include the means to connect to external devices via Ethernet, FireWire and/or USB. These should be the first choice unless there are more complex requirements, such as the need to operate a camera located several miles away, which is often the case in military tests that involve explosives and/or projectiles.
The most popular option is Gigabit Ethernet (Gig-E), as it offers the ability to connect to an existing infrastructure and is widely supported, with peripherals readily available. Another fast and reliable choice is FireWire, also known as IEEE 1394. It is best to stay away from protocols requiring specialized hardware unless the requirements demand them. It also is important to be able to download the images directly into a recognized and immediately usable format (e.g., AVI, JPEG, TIFF). Some systems require time-consuming, post-mission file conversion, while others download quickly to a dedicated controller but then take far longer to move the files into the real world. Some PC-based systems operate at megapixel resolution up to 1,000 fps directly inside the PC and download directly to the hard drive (or a networked hard drive) via the computer’s PCI bus for the fastest transfer.
System Architecture
What type of physical package does the application require? There is a wide variety of systems available, from inexpensive, low-resolution plastic units for almost disposable usage on the production line, to huge systems specifically built for long record times to cover a missile’s launch or reentry into the Earth’s atmosphere. Some PCI systems use lower cost CCD or super-sensitive megapixel CMOS sensors made for use in personal computers or laptops. More complex systems require housing that is engineered to reliably operate onboard crash vehicles or near missile impacts.
For systems that require use of, or control via, a computer, it is essential to become familiar with the software supplied with the camera. Some manufacturers supply their systems with an SDK (Software Developer’s Kit) to enable advanced users to develop their own control GUI, or to integrate camera control into an existing interface.
Request a Live Demo
Because of these varying and potentially confusing factors, the best advice for finding the right camera system for the right job is to invite the camera vendors in and insist on a live demonstration of the very camera one is interested in. It is easy to demonstrate a system in the conference room, under ideal and controlled conditions, but the purchase should be based on a real-life demonstration with actual conditions, and with the same camera under consideration, in the exact environment in which the system will need to perform.
As with any relatively new technology, there is a lot of seemingly conflicting information. The main question to answer is: What works best for your needs? The answer lies in finding a comfortable fit with one’s requirements and following this single, important rule: Require the system’s manufacturer to demonstrate the camera with a real-life, real-time demonstration, within the actual environment in which the high-speed imaging system will be used. A live, in-situ demo will bring out the best and the worst of the high-speed camera systems. With that, operators will have the information to make the best choice. V&S