It sounds so simple: with little effort and training, an artificial intelligence learns to identify product requirements. Scratches, cracks, shape deviations and other defects are detected reliably and without fatigue. Defective products can then be sorted out before they reach the customer or are processed further. There is no doubt that automated, image-based quality control with artificial intelligence offers many advantages over manual inspection by humans, or even over classic machine vision approaches based on predefined rules. So many benefits, so why is the technology still in the early stages of adoption? One of the biggest obstacles for current AI vision is users' lack of experience. AI vision encompasses so many new methods and subcomponents that companies, especially small and midsize ones, often don't know which are suitable for them. Often they lack the time, manpower or confidence to evaluate the new technology in detail in all its facets.
There is also often the question of whether AI vision is even suitable for a particular task, or capable of solving it. Unfortunately, this chicken-and-egg problem too often means the technology is not evaluated at all. Certainly, the technology still has to mature, especially in industrial environments, to reach the acceptance level of proven classical image processing methods. On the other hand, user-friendly software tools already exist that enable even inexperienced users to evaluate their applications with AI vision and implement them intuitively.
The fact that AI-based methods work completely differently from rule-based approaches is their greatest advantage. It enables providers to develop entirely new tools for image processing that can be used far more intuitively. With these tools, human quality requirements can already be transferred to AI-based image processing systems through machine learning in order to optimize and automate processes. Often, not a single line of source code needs to be written, which opens AI vision up to entirely new target groups that no longer need programming skills. Feasibility analyses can thus be carried out by the employees who know the products and their special features best; during the evaluation phase, companies are no longer necessarily dependent on programmers and image processing experts.
Let's look at the strengths of AI vision with an application example from one of IDS's customers. Rotating axles are often secured with snap rings. However, only a ring fully engaged in the axle's groove guarantees a secure connection. A faulty fit can result in product damage. The quality assurance task seems simple: check that the ring is properly engaged. In practice, however, this inspection is still performed by humans, as no reliable automation solution had been found. Tests with rule-based image processing could only determine whether the snap ring was present or missing. At best, it was possible to detect whether the "ears" of the snap ring were further apart than they should be. But that does not necessarily mean the snap ring is securely engaged; it could simply be lying on top. The subtle image differences in the error case were difficult to describe with rules.
A feasibility analysis using machine learning methods showed that only a few example images of correct and incorrect cases, in this scenario just under 300, were required to train a neural network that could predict incorrect seating of the snap rings with high confidence. Manual visual inspection was then only necessary for the very few uncertain results.
How well a neural network performs after training can be validated with tests on sample images. A test run with images of known error classes provides information about the learning accuracy and the quality of the AI results. The more clearly the probabilities for GOOD and BAD cases differ from each other, the more clearly a decision threshold between GOOD and BAD can be defined, so that as few misclassified GOOD or BAD cases as possible occur later in productive operation. The variance of the GOOD probabilities determined during testing also helps to optimize the production environment: the less the environmental conditions, and thus irrelevant image content, vary, the more concrete the quality statements that can be made about the relevant distinguishing features in the AI analysis.
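Choosing that decision threshold can be sketched as a simple sweep over the test-set probabilities. The probability values below are invented for illustration, not real results from any test run; the logic, counting false rejects and false accepts at each candidate threshold, is the general idea.

```python
import numpy as np

# Hypothetical predicted "BAD" probabilities from a test run with parts
# of known classes (values invented for illustration).
p_good = np.array([0.02, 0.05, 0.08, 0.11, 0.15, 0.22])  # known GOOD parts
p_bad = np.array([0.78, 0.83, 0.88, 0.91, 0.95, 0.99])   # known BAD parts

# Sweep candidate thresholds: a GOOD part at or above the threshold is a
# false reject, a BAD part below it a false accept. Pick the threshold
# with the fewest total errors on the test set.
candidates = np.linspace(0.0, 1.0, 101)
errors = [np.sum(p_good >= t) + np.sum(p_bad < t) for t in candidates]
best = candidates[int(np.argmin(errors))]
print(f"decision threshold: {best:.2f}, test errors: {min(errors)}")
```

Because the two probability clusters are far apart here, a wide band of thresholds produces zero test errors, which is exactly the comfortable situation the paragraph above describes; overlapping clusters would force a trade-off between false rejects and false accepts.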
The fact that AI quality decisions are not traceable to a clearly defined set of rules, and that the algorithm behaves more like a black box, does not mean the results cannot be explained. Tools such as attention maps or anomaly maps visualize where the pixels relevant to a prediction are located in the image and how strongly they contribute. In our snap ring inspection, these overlays highlight the relevant features of the known defect classes, as expected. Anomaly detection in particular makes it possible to sort out unknown, and thus untrained, defect cases. This shows that machine learning methods can go beyond the trained knowledge of known features and precisely signal new, emerging problems. For example, an out-of-focus camera image caused the anomaly map to mark deviations in several places.
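The core idea behind an anomaly map can be sketched with simple per-pixel statistics. This is a deliberately simplified stand-in, scoring each pixel by its deviation from an appearance learned on defect-free samples, and not the method used by any particular tool; image sizes and values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Learn per-pixel mean and spread from defect-free reference images
# (toy 8x8 "images" standing in for real training data).
normals = rng.normal(0.5, 0.02, size=(50, 8, 8))
mean_img = normals.mean(axis=0)
std_img = normals.std(axis=0) + 1e-6

# A test image with an unexpected bright spot the model was never
# trained on, e.g. a reflection or a foreign particle.
test = rng.normal(0.5, 0.02, size=(8, 8))
test[3, 4] += 0.3

# The anomaly map scores each pixel by how many standard deviations it
# lies from the learned normal appearance; high values flag deviations.
anomaly_map = np.abs(test - mean_img) / std_img
hot = np.unravel_index(np.argmax(anomaly_map), anomaly_map.shape)
print(f"strongest anomaly at pixel {hot}")
```

Because the score is "distance from normal" rather than "match to a known defect", any unexpected deviation lights up, which is why an out-of-focus image marks anomalies in several places at once.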
Anomaly detection thus brings another advantage for quality assurance that would be hard to realize with rule-based image processing. The decisive factor is the ability to detect any deviation from the normal case, even deviations that are underrepresented in the training data, in other words, those that were not planned for at all. So where other methods become uncertain about something "unknown", and sometimes even fail, this method ensures that little remains hidden, including everything that may occur at some point during normal operation. Continuous data about a system's condition, for example in the form of increasing product defects or deviations (anomalies), makes it possible to determine an optimal time to maintain the system before product quality drops too low or a worst case such as a plant failure occurs.
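Turning a stream of per-part anomaly verdicts into a maintenance signal can be as simple as watching a rolling defect rate. The function and thresholds below are a hypothetical sketch of that idea, not part of any described system; window size and alarm rate would be tuned to the actual line.

```python
from collections import deque

def maintenance_monitor(results, window=20, max_rate=0.2):
    """Return the index at which the rolling defect rate first exceeds
    max_rate, or None. `results` is an iterable of booleans (True = part
    flagged as defective/anomalous). Thresholds are illustrative."""
    recent = deque(maxlen=window)
    for i, defective in enumerate(results):
        recent.append(defective)
        if len(recent) == window and sum(recent) / window > max_rate:
            return i
    return None

# Simulated line: the defect rate creeps up as a tool wears out.
stream = [False] * 100 + [i % 3 == 0 for i in range(60)]
alarm_at = maintenance_monitor(stream)
print(f"maintenance recommended after part {alarm_at}")
```

The rolling window smooths out isolated defects, so the alarm fires only once deviations accumulate, i.e. before quality drops too far but without reacting to every single bad part.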
User-Friendly Tools
AI vision can be used in many ways in quality assurance and can extend or improve existing applications. It is important to proceed step by step. A feasibility analysis up front helps clarify whether a task can actually be solved with AI vision, before a lot of money and time is spent on expert personnel, knowledge building and AI systems. User-friendly software tools that enable an initial evaluation purely on the basis of images, even in the cloud, already make this possible today. Neither a real vision system with AI capabilities nor a separate training platform is required, which greatly reduces the investment risk. Intuitive user interfaces, easy-to-understand workflows and wizards also create an easy entry point for users who do not yet have much experience in AI, image processing or application programming.
Nevertheless, AI vision requires a certain understanding of what suitable image material must look like for effective training. This is the prerequisite for later drawing trustworthy conclusions that can be evaluated in a comprehensible way. It is also important to bring experienced partners on board who not only promise the best AI system, but can assess and support the entire workflow of machine learning-based quality assurance. Full support from a single source is a success factor in the AI vision environment that should not be underestimated. Using AI vision in quality assurance is therefore perhaps not quite as simple as is claimed everywhere, but it is certainly simpler than often assumed.