
Machine vision, a cornerstone of modern manufacturing and automation, continues to evolve at a rapid pace. Driven by advancements in hardware, software, and novel applications, this field is reshaping quality assurance and process improvement. From groundbreaking innovations like event-based imaging to the seamless integration of AI-driven tools, machine vision technologies are setting new benchmarks for efficiency and precision. This article explores current trends in machine vision, highlighting breakthroughs in event-based imaging, industrial streaming cameras, area scan sensors, AI-driven software, and their practical applications.

Event-based camera
Image Source: IDS

Hardware

Event-Based Imaging

Event-based imaging represents one of the most disruptive innovations in machine vision since AI entered the field. Event-based image sensors are neuromorphic, meaning their design mimics biological systems. Much like how deep neural networks draw inspiration from the human brain, event-based sensors are modeled after the retina. The retina reacts asynchronously to changes in light intensity, ignoring areas without change. Neurons process this data into what we recognize as an image by effectively integrating these deltas over time.

By contrast, digital cameras capture entire images irrespective of changes within the scene. Event cameras, on the other hand, generally do not deliver conventional image information such as color or absolute brightness; they react only to changes, which makes movement in the scene essential for them to produce useful data. Event sensors also demand less bandwidth – much as compressed video only updates altered pixels. And because event-based sensors can distinguish contrast changes through amplifier circuits in dark regions and through logarithmic mapping of high light values in very bright image areas, their dynamic range outpaces that of frame-based sensors.

With temporal resolution in the µs range, an event-based camera is optimized for highly dynamic scenes and can capture fast-moving objects without the loss of information that is unavoidable with image-based sensors due to their fixed frame rates. The temporal resolution, i.e., the minimum measurable time difference between two successive changes in brightness (events), can be less than 100 µs, so even the fastest movements can be detected almost seamlessly. Image-based sensors, by contrast, can suffer motion blur at extreme sampling rates because the scene must be sufficiently illuminated and exposure times cannot be made as short as required.
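
To make the idea of integrating per-pixel brightness changes concrete, here is a minimal sketch. It assumes a generic event stream of (x, y, timestamp, polarity) tuples rather than any particular camera SDK, and accumulates the events from a short time window into a frame-like image:

```python
import numpy as np

def accumulate_events(events, width, height, t_start, t_end):
    """Integrate per-pixel polarity changes over a time window into a frame.

    events: iterable of (x, y, timestamp_us, polarity) tuples,
            polarity being +1 (brighter) or -1 (darker).
    Returns a 2D array whose values approximate relative brightness change.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, polarity in events:
        if t_start <= t < t_end:
            frame[y, x] += polarity  # integrate the delta, retina-style
    return frame

# Hypothetical usage: a 1 ms window over a 640x480 sensor
# frame = accumulate_events(event_stream, 640, 480, t_start=0, t_end=1_000)
```

Vendor SDKs typically deliver events in packed formats and offer their own accumulation utilities, but the underlying principle is the same.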

One example of event-based imaging’s potential is in high-speed quality control. Consider a conveyor belt carrying small components at high velocity. Traditional cameras might struggle to capture details due to motion blur or limited framerate, but event-based cameras can detect and analyze rapid changes in position or appearance, identifying defects in real time. Similarly, in robotics, these cameras can enable precise movement tracking, helping robots adapt to dynamic environments.

Industrial streaming camera
Image Source: IDS

Industrial Streaming Cameras

Industrial streaming cameras blend the functionalities of industrial machine vision cameras, security cameras, and dash cams. They epitomize Industry 4.0 by shifting from centralized processing to interconnected smart devices. While industrial machine vision cameras require a host PC, security cameras function as IoT devices, streaming video data over networks without needing dedicated PCs. Industrial streaming cameras adopt this ease of setup, which is invaluable on the factory floor.

These cameras also inherit the robustness of industrial components, featuring long-term availability, durable housings, and flexible lens options. Unlike security cameras with fixed wide-field lenses, industrial streaming cameras support interchangeable lenses, accommodating various fields of view and specialized needs. Their dash cam-like ability to record video onboard ensures upstream errors are captured even if detected downstream. For example, a factory might use these cameras to monitor bottlenecks in production lines. If a defect is discovered downstream, the recorded footage allows engineers to trace the error’s origin, saving time and reducing waste.
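
As an illustration of the dash-cam-style recording idea, the sketch below keeps a rolling buffer of recent frames from a network stream and dumps it to disk when a downstream problem is flagged. The stream URL, frame rate, and defect check are placeholders, and OpenCV is used purely for convenience:

```python
import cv2
from collections import deque

STREAM_URL = "rtsp://camera.local/stream"   # placeholder address
FPS = 30                                    # assumed stream frame rate
BUFFER_SECONDS = 10

def defect_detected(frame):
    """Placeholder; replace with the real downstream inspection result."""
    return False

ring = deque(maxlen=FPS * BUFFER_SECONDS)   # holds only the most recent frames

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    ring.append(frame)
    if defect_detected(frame):
        # Dump the buffered footage so the error's origin can be traced upstream
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("incident.avi",
                                 cv2.VideoWriter_fourcc(*"MJPG"), FPS, (w, h))
        for buffered in ring:
            writer.write(buffered)
        writer.release()
        break
cap.release()
```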

Another compelling application is in remote monitoring. Industrial streaming cameras can be deployed in harsh environments, such as mines or offshore rigs, where human presence is limited or dangerous. Their ability to stream high-quality video to control centers ensures real-time monitoring and quick response to anomalies.

Area Scan Sensors and NIR Imaging

Standard area scan sensors continue to advance, with Sony’s STARVIS 2 leading the way. This latest rolling shutter sensor features improved pixel geometry, enhancing sensitivity and noise reduction while maintaining cost-effectiveness. STARVIS 2’s wider photodiodes and improved fill-factor boost performance in near-infrared (NIR) wavelengths (750-1000 nm), critical for low-light applications and inspections revealing invisible details.

For example, in the food industry, NIR imaging can detect foreign objects or impurities in products like grains or packaged goods. Similarly, in the semiconductor industry, these sensors can reveal structural inconsistencies in wafers, ensuring higher yields and fewer defects. By offering a cost-effective alternative to dedicated SWIR sensors, the STARVIS 2 enables broader adoption of advanced imaging in industries with tight budget constraints.
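
As a simplified illustration of how such an inspection might flag contaminants, the sketch below assumes that foreign objects absorb more NIR light than the surrounding product and therefore appear as dark blobs; the threshold and area values are purely illustrative:

```python
import cv2

def flag_foreign_objects(nir_image_path, threshold=60, min_area=25):
    """Flag dark regions in a grayscale NIR image as possible foreign objects.

    Assumes contaminants appear darker than the product under NIR
    illumination; threshold and min_area are illustrative values.
    """
    img = cv2.imread(nir_image_path, cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]

# Hypothetical usage:
# suspects = flag_foreign_objects("grain_batch_042.png")
# print(f"{len(suspects)} suspect regions found")
```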


Software

AI and Machine Vision

Software trends in machine vision remain dominated by deep learning and AI. While AI excels in many applications, it’s not a panacea. Deep learning relies on vast datasets to identify relevant image features for specific tasks, but data collection and labeling can be time-intensive. For some tasks, rules-based image processing – leveraging the expertise of skilled engineers – remains more efficient.

Advancements in synthetic training data and pre-trained networks are streamlining the adoption of AI. Synthetic data mitigates the challenges of data collection and labeling by using simulations to generate labeled images. These simulations, rooted in rendering tools and ray tracing, align with the broader trend of digital twins in manufacturing. For instance, a manufacturer could simulate various lighting conditions and object orientations to train a network capable of identifying defects in automotive components, reducing reliance on physical samples.
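
The sketch below illustrates the principle of automatically labeled synthetic data in a deliberately simplified form: procedural image generation with randomized lighting and a synthetic "scratch" defect stands in for the rendering and ray-tracing pipelines mentioned above, and all parameters are illustrative:

```python
import numpy as np
import cv2

def synthesize_sample(width=256, height=256, rng=None):
    """Generate one synthetic, automatically labeled training image.

    A background with random brightness (simulating lighting changes) and,
    half the time, a randomly placed dark line standing in for a scratch.
    Returns (image, label) with label 1 = defect present, 0 = no defect.
    """
    rng = rng or np.random.default_rng()
    brightness = int(rng.integers(120, 220))
    img = np.full((height, width), brightness, dtype=np.uint8)
    has_defect = rng.random() < 0.5
    if has_defect:
        p1 = (int(rng.integers(0, width)), int(rng.integers(0, height)))
        p2 = (int(rng.integers(0, width)), int(rng.integers(0, height)))
        cv2.line(img, p1, p2, color=brightness // 3, thickness=2)
    img = cv2.GaussianBlur(img, (3, 3), 0)          # soften hard edges
    noise = rng.normal(0, 5, img.shape)             # mild sensor noise
    img = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    return img, int(has_defect)

# dataset = [synthesize_sample() for _ in range(10_000)]
```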

Pre-trained networks, meanwhile, provide out-of-the-box solutions for common tasks like person detection, OCR, and defect detection. Fine-tuning these networks with application-specific data often requires significantly less effort than training from scratch. For example, a factory producing glass panels might start with a pre-trained network for surface anomaly detection, then adapt it to identify scratches or cracks specific to their product line.
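
A minimal fine-tuning sketch along these lines, using a generic ImageNet-pre-trained ResNet from torchvision as a stand-in for a task-specific pre-trained network, might look like this (the two-class head and hyperparameters are illustrative):

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pre-trained on ImageNet rather than training from scratch
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the backbone so only the new head is trained at first
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for a two-class task: "ok" vs. "scratch/crack"
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One training step on a batch of application-specific images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```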

Applications

Counting

Consider the task of counting objects – a common machine vision application. Event-based cameras excel in counting fast-moving items, such as cascading parts. Their fine temporal and spatial resolution allows precise tracking of moving objects by detecting transitions pixel by pixel. Image-based cameras struggle in such scenarios, where objects may skip across several pixels between frames. For example, a pharmaceutical company might use event-based cameras to count pills dispensed into bottles, ensuring accuracy at high speeds.
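
A minimal sketch of this kind of counting, assuming a generic event stream with (x, y, timestamp, polarity) columns rather than a specific camera SDK, counts separated bursts of events at a virtual trigger line:

```python
import numpy as np

def count_line_crossings(events, line_x, gap_us=200, min_events=20):
    """Count objects crossing a virtual vertical line in an event stream.

    events: array of shape (N, 4) with columns (x, y, timestamp_us, polarity),
    sorted by timestamp. An object passing the line produces a dense burst of
    events at x == line_x; bursts separated by a quiet gap each count once.
    """
    timestamps = events[events[:, 0] == line_x][:, 2]
    count, burst_size, last_t = 0, 0, None
    for t in timestamps:
        if last_t is not None and t - last_t > gap_us:
            # A quiet gap closed the previous burst; count it if dense enough
            if burst_size >= min_events:
                count += 1
            burst_size = 0
        burst_size += 1
        last_t = t
    if burst_size >= min_events:   # close the final burst
        count += 1
    return count
```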

For stationary objects, standard area scan cameras are more suitable. The choice between deep learning and rules-based approaches depends on the scene’s complexity. Deep learning thrives in challenging environments with varied backgrounds and object positions but falters with tightly packed or overlapping items. In such cases, rules-based methods like frequency analysis or edge detection are more effective. For instance, an electronics manufacturer might use frequency analysis to count solder points on a circuit board, where precision and repeatability are paramount.
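
A rules-based count of this kind can be remarkably compact; the sketch below uses simple thresholding and connected-component analysis in OpenCV, with illustrative threshold and minimum-area values:

```python
import cv2

def count_bright_blobs(image_path, threshold=200, min_area=10):
    """Count bright, compact regions (e.g., solder points) in a grayscale image.

    Thresholding followed by connected-component analysis is repeatable and
    needs no training data; threshold and min_area are illustrative.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; filter the rest by area to reject noise
    return sum(1 for i in range(1, num_labels)
               if stats[i, cv2.CC_STAT_AREA] >= min_area)

# count = count_bright_blobs("pcb_top.png")
```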

NIR versus SWIR Imaging

Short-wave infrared (SWIR) imaging often reveals features invisible under standard lighting by exploiting material-specific interactions with light. Determining the optimal wavelength for highlighting these features is crucial. If the required wavelength is below 1000 nm, Sony’s STARVIS 2 sensors can offer a cost-effective alternative to dedicated SWIR sensors, delivering high sensitivity at a lower price point.

For example, in agriculture, NIR imaging can assess crop health by detecting water stress or pest damage. In plastics recycling, it can differentiate between types of polymers based on their spectral signatures, streamlining sorting processes. These applications highlight the versatility of NIR imaging in addressing diverse industrial challenges.

Surface Inspection

Surface inspection is another area where machine vision technologies shine. Event-based cameras can detect minute vibrations or deformations on surfaces in real time, aiding industries like aerospace where material integrity is critical. AI-powered algorithms can further enhance these inspections by identifying subtle patterns indicative of wear or damage, reducing downtime and preventing failures.
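
As a sketch of how vibration might be quantified from an event stream, the following assumes only a flat array of event timestamps from the inspected region (the bin width is illustrative) and estimates the dominant vibration frequency from the binned event rate:

```python
import numpy as np

def dominant_vibration_hz(event_timestamps_us, bin_us=100):
    """Estimate the dominant vibration frequency of an inspected region.

    The event rate is binned into a time series and its spectrum inspected
    for the strongest periodic component.
    """
    t = np.asarray(event_timestamps_us)
    bins = int((t.max() - t.min()) // bin_us) + 1
    rate, _ = np.histogram(t, bins=bins)
    rate = rate - rate.mean()                            # remove DC component
    spectrum = np.abs(np.fft.rfft(rate))
    freqs = np.fft.rfftfreq(len(rate), d=bin_us * 1e-6)  # bin width in seconds
    return freqs[np.argmax(spectrum)]
```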

The evolution of machine vision – spanning hardware innovations like event-based imaging and industrial streaming cameras, software advancements in AI, and versatile applications – is helping to drive manufacturing’s digital transformation. These technologies empower manufacturers to optimize quality assurance and process improvement, unlocking efficiency and insight. As these trends mature, the potential for machine vision to redefine industrial operations remains boundless. By staying informed and adaptable, professionals in the field can harness these breakthroughs to remain at the forefront of innovation.