AI is broadly defined as a machine solving a problem or completing a task in a way that we would consider “intelligent”, such as solving a math problem. Machine learning is a branch of AI in which machines learn how to solve a specific problem without human intervention, provided they are supplied with data. Deep learning refers to neural networks, which are inspired by how the human brain works. Neural networks “learn” how to solve a problem as they see more and more data.

Content provided by Pleora

Machines may not immediately be fully autonomous, but the goal of training is to develop AI capabilities that can function and make decisions independent of human intervention. In an inspection application, for example, the goal is to train the system to the point where it learns what constitutes an error and can independently identify a defective product.

Hybrid AI Offers a Solution

Hybrid AI allows manufacturers to deploy advanced inspection capabilities alongside proven machine vision processes.

In a classic computer vision application, a developer manually tunes an algorithm for the task to be completed. This can require significant customization if products A and B have different thresholds for what is considered an error. Inaccuracies may generate excessive false positives that stop production and force costly manual secondary inspection, or missed errors that result in defective or poor-quality products going to market.
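To make the tuning burden concrete, here is a minimal sketch of a rule-based defect check in which the pass/fail parameters must be hand-tuned separately for each product. The function names, parameter values, and toy pixel data are illustrative assumptions, not an actual inspection algorithm.

```python
# Sketch only: per-product thresholds must be tuned by hand in a
# classic computer vision pipeline (all values here are illustrative).

def count_defective_pixels(pixels, intensity_cutoff):
    """Count pixels darker than the cutoff -- a stand-in for a real
    blob/flaw detector."""
    return sum(1 for p in pixels if p < intensity_cutoff)

def inspect(pixels, intensity_cutoff, max_defect_pixels):
    """Flag the part as defective when too many pixels fall below
    the intensity cutoff."""
    return count_defective_pixels(pixels, intensity_cutoff) > max_defect_pixels

# Products A and B each need their own hand-tuned parameters:
PRODUCT_PARAMS = {
    "A": {"intensity_cutoff": 60, "max_defect_pixels": 5},
    "B": {"intensity_cutoff": 90, "max_defect_pixels": 2},
}

frame = [120, 130, 55, 58, 140, 135, 50, 125]  # toy grayscale values
print(inspect(frame, **PRODUCT_PARAMS["A"]))   # passes under A's tolerances
print(inspect(frame, **PRODUCT_PARAMS["B"]))   # fails under B's tighter rule
```

Every new product variant means revisiting these numbers by hand, which is exactly the customization cost the article describes.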

Similarly, AI algorithm training has traditionally required multiple time-consuming steps and dedicated coding to input images, label defects, fine-tune detection, and optimize models. More recently, no-code software platforms provide an intuitive drag-and-drop approach to developing “plug-and-play” machine learning quality inspection applications. Users can design and deploy advanced computer vision, AI, machine learning, and deep learning capabilities in minutes instead of days, without requiring specialized skills or external consulting.

AI plug-ins can be developed in any standard web browser on any device with no-code “block-based” tools for vision and AI programming. Comprehensive platforms allow the design of traditional computer vision plug-ins for standard features (for example detection, thresholding, measurement, pattern matching, and barcode reading) and deep learning classification and object detection skills. For more advanced users, platforms provide full flexibility for developers to code and test plug-ins using Python, customize computer vision and AI plug-ins, and create custom plug-ins to run any type of AI model. In addition, “off-the-shelf” plug-in AI skills for quality inspection and hyperspectral imaging let users easily deploy advanced capabilities for common requirements.
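A custom Python plug-in of the kind described might look something like the sketch below. The article does not show the platform's actual API, so the base class, method names, and the toy detection logic are all assumptions made for illustration.

```python
# Hypothetical plug-in interface -- the real platform API is not shown
# in the article; class and method names here are assumptions.

from abc import ABC, abstractmethod

class VisionPlugin(ABC):
    """Base class a custom plug-in might implement: receive a frame,
    return a result dictionary for the inspection application."""

    @abstractmethod
    def process(self, frame):
        ...

class BarcodePresencePlugin(VisionPlugin):
    """Toy plug-in: report whether a dark-to-light edge (a crude proxy
    for a barcode bar) appears anywhere in a 1-D frame."""

    def process(self, frame):
        found = any(frame[i] < 50 <= frame[i + 1] for i in range(len(frame) - 1))
        return {"barcode_present": found}

plugin = BarcodePresencePlugin()
print(plugin.process([200, 30, 210, 25, 220]))  # dark/light transitions found
```

The point of the abstract base class is that the platform can load and run any plug-in that honors the same `process` contract, whether it wraps classic vision code or a trained AI model.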

Edge Processing

A hybrid approach takes advantage of advances in edge processing and embedded technologies to seamlessly add AI capabilities alongside existing infrastructure. Plug-ins are transferred to the embedded platform, which acts as an intermediate device between the camera and host PC. The embedded device “mimics” the camera for existing applications and automatically acquires the images and applies the required AI skills. Processed data is then sent over GigE Vision to the inspection application, which receives it as if it were still connected directly to the camera. The embedded device can also transfer images back to the software platform for continuous offline training.
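The pass-through role the embedded device plays can be sketched conceptually as follows. The camera, host, and "AI skill" stand-ins here are plain Python callables for illustration; a real deployment transports images over GigE Vision, not function calls.

```python
# Conceptual sketch of the embedded device sitting between camera and
# host PC. All components are stand-ins; names are assumptions.

class EdgeDevice:
    def __init__(self, camera, skills, host):
        self.camera = camera    # callable yielding the next frame
        self.skills = skills    # list of functions: frame -> metadata dict
        self.host = host        # callable receiving (frame, metadata)

    def run_once(self):
        frame = self.camera()              # acquire image from the camera
        metadata = {}
        for skill in self.skills:          # apply the loaded AI skills
            metadata.update(skill(frame))
        self.host(frame, metadata)         # forward to the inspection app,
        return metadata                    # which sees a "camera" as before

received = []
device = EdgeDevice(
    camera=lambda: [10, 240, 12],
    skills=[lambda f: {"dark_pixels": sum(1 for p in f if p < 50)}],
    host=lambda frame, meta: received.append((frame, meta)),
)
print(device.run_once())
```

Because the host receives frames exactly as it did when wired directly to the camera, the existing inspection application needs no changes, which is the key to deploying AI alongside proven processes.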

In a potential scenario, a brand owner could begin with a hybrid approach to deploy offline inspection to automate a visual inspection process. For example, AI can enhance human processes by flagging images or products for operator analysis. The device could also be used as a secondary inspection tool by processing imaging data with loaded plug-in skills in parallel to traditional processing tools. If a defect is detected, processed video from the embedded device can confirm or reject results as a secondary inspection.
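The secondary-inspection idea of running an AI skill in parallel with a traditional tool can be reduced to a simple routing policy. The decision rules and labels below are assumptions for illustration, not a prescribed workflow.

```python
# Toy routing policy for parallel inspection: a traditional rule-based
# verdict and an AI verdict are compared (policy names are assumptions).

def secondary_inspection(traditional_defect: bool, ai_defect: bool) -> str:
    """Route a part based on agreement between the two inspectors."""
    if traditional_defect and ai_defect:
        return "reject"            # both agree the part is defective
    if traditional_defect != ai_defect:
        return "operator_review"   # disagreement: flag for a human
    return "pass"                  # both agree the part is good

print(secondary_inspection(True, True))    # reject
print(secondary_inspection(True, False))   # operator_review
print(secondary_inspection(False, False))  # pass
```

Routing disagreements to an operator rather than scrapping the part is what lets the AI enhance, rather than replace, the human process while confidence in the model is still being built.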

As the end-user continuously trains the system with collected data and builds confidence in the results, they can gradually transition to an AI-based inspection model. Over time, the end-user can begin to use inspection data as part of a more comprehensive analytics-based Industry 4.0 initiative focused on driving efficiencies. This can include using data to monitor machine performance for proactive maintenance, and leveraging cloud capabilities to share data across global facilities to improve inspection processes.