Deep learning has become recognized as a useful tool in the integration and implementation of industrial machine vision systems. By using artificial intelligence (AI) to “learn” through continual analysis of labeled data, deep learning provides value in certain industrial quality inspection applications, including defect detection and assembly verification.

Artificial intelligence spans many fields, including natural language processing, speech recognition, and robotics. For that reason, AI is better regarded as a science than as a single technology: it encompasses several technologies and engineering disciplines, including machine vision, computer vision, machine learning, and deep learning.

In the machine vision marketplace, which comprises end users, value-added partners, and even manufacturers — all with a dominant focus on industrial automation tasks in a wide range of vertical markets — the term “AI” typically refers to deep learning platforms that enable industrial automation and inspection. To appreciate the value proposition of AI in this context, it’s helpful to understand how the technology has evolved over the past several decades.


The Evolution Of AI, Machine Vision, Computer Vision And Deep Learning

John McCarthy, a professor of computer science at Stanford University who is considered the “father of AI,” coined the term “artificial intelligence” in 1955. He offered the following definition:

It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

McCarthy developed the first practical AI programming language, LISP, in 1958. The late 1950s also saw the emergence of computer vision, a precursor to machine vision, with several important advancements, including Russell Kirsch’s development of the first digital image scanner, which enabled photographs to be computerized, and Lawrence Roberts’ research on extracting 3D information from digitized 2D images.

In the decade that followed, excitement around AI and computer vision escalated. In 1966, Professor Seymour Papert of MIT proposed the Summer Vision Project, which challenged 10 students to make a computer “see” like a human. Although the ambitious endeavor was not particularly successful, it did serve as a catalyst for later research on and understanding of computer vision. The 1960s also saw the publication of the first computer vision textbook, authored by Professor Azriel Rosenfeld. Much of the math in that book served as fundamental building blocks for the machine vision tools that followed.

Toward the end of the decade, however, the hype around AI deflated when Marvin Minsky and Seymour Papert published a book demonstrating major limitations of the single-layer perceptron, a learning algorithm developed by Frank Rosenblatt in 1958 that had garnered international attention for its supposed ability to learn from data.

AI fell out of favor in the 1970s, a period that has come to be known as the first AI winter. However, as funding and research dwindled for AI, interest in machine vision flourished. Digital cameras came on the scene, and significant technological advancements in gradient-based and contrast-based algorithms fueled growth in machine vision as an industrial automation technology.

Advances continued into the 1980s, a decade acknowledged by many as the golden age of machine vision. AI enjoyed a resurgence as well, with Geoffrey Hinton, David Rumelhart, and Ronald Williams furthering the concept of back propagation, an algorithm used to train multilayer neural networks. During this time, expert systems, forms of AI that use rules and logic derived from the knowledge of experts, gained popularity with corporations around the world. The 1980s also saw AI applied to chess-playing programs like HiTech and Deep Thought, which ultimately defeated human players.

As the 1980s came to a close, public fascination with AI abated once again as commercially viable applications failed to materialize and funding dissipated. The second AI winter had come. Machine vision, however, continued to grow throughout the 1990s and early 2000s, driven by the need for reliable inspection in industrial automation applications. A major revitalization in AI came around 2012 with breakthrough results from multilayer convolutional neural networks, or “deep learning,” and the technology is now being applied in a wide range of applications.


Machine Vision That Learns

Today, the marketplace for machine vision is a mature one. For decades it has provided manufacturers with components and software that have revolutionized inspection systems — and they keep getting better as deep learning and AI complement and enhance their capabilities.

Unlike a rules-based technology, deep learning is not programmed using specific numerical inputs into traditional math algorithms and convolutions. Instead, deep learning “programs” itself by analyzing a database of images that have been labeled and categorized by human experts. Because deep learning software is data-centric, it builds a mathematical model of the minuscule image variations and other cues that humans use to determine if something is “good” or “bad.” That model is then used to inspect new products for quality. In this way, deep learning systems learn much like humans do — by acquiring knowledge from experts and repeating successful operations.
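
To make this concrete, here is a minimal, hypothetical sketch of the kind of training described above: fine-tuning a pretrained convolutional network on a folder of expert-labeled “good” and “bad” part images. The folder layout, network choice, and hyperparameters are illustrative assumptions rather than any particular vendor’s implementation.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: expert-labeled images in labeled_parts/good and labeled_parts/bad.
# ImageFolder maps each folder name to a class index.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("labeled_parts", transform=preprocess)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Start from a pretrained backbone and replace the final layer with a
# two-class (good/bad) head -- this is the "model" described in the text.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a handful of passes over the labeled data for the sketch
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# At inspection time, new part images are passed through the trained model
# and classified as good or bad.
```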

Automated inspection using deep learning can adapt quickly to changes in parts and processes. This capacity to learn makes it well suited to the often-fluctuating industrial automation environment.


A Data-Centric Approach

Variability in a product or process — changes in lighting, subjective features, or subtle defects like smudges or dust — can stymie any machine vision system. Deep learning machine vision solutions create models of what good and bad parts look like in images based on statistical analyses of features that have been expertly tagged by human operators. When properly trained, the technology can be highly forgiving of such variation.
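
One way practitioners typically build in that forgiveness is by augmenting the expert-labeled training images so the model sees each part under many plausible appearance variations. The torchvision transform pipeline below is a sketch of that idea; the specific transforms and parameter values are assumptions for illustration.

```python
from torchvision import transforms

# Augmentations applied to the labeled training images, so the model learns
# to tolerate appearance changes rather than memorizing one lighting setup.
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),  # simulate lighting drift
    transforms.RandomRotation(degrees=5),                   # slight part misalignment
    transforms.RandomHorizontalFlip(),  # only if orientation is not itself a defect cue
    transforms.ToTensor(),
])
# Used in place of the basic preprocessing transform when building the
# training dataset in a sketch like the one shown earlier.
```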

The key to successfully applying any deep learning software lies in the accuracy and quality of the training data. Garbage in, garbage out, as they say. If just 10% of the data used to train a deep learning model is inaccurate, optimizing the model can require three times as much data. For system integrators that develop dozens of projects per year, managing data accuracy is paramount. It requires AI/deep learning design tools that identify incorrectly labeled images and inconsistencies among different expert human labelers.
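
As a simple illustration of that kind of tool, the sketch below flags images whose expert labels disagree so they can be re-reviewed before training. The data structure and agreement threshold are hypothetical; production labeling tools are considerably more sophisticated.

```python
from collections import Counter

# Hypothetical structure: image filename -> labels assigned by different human experts.
expert_labels = {
    "part_0001.png": ["good", "good", "good"],
    "part_0002.png": ["bad", "good", "bad"],   # labelers disagree
    "part_0003.png": ["good", "bad", "good"],  # labelers disagree
}

def flag_inconsistent(labels_by_image, min_agreement=1.0):
    """Return images whose labeler agreement falls below the threshold."""
    flagged = []
    for image, labels in labels_by_image.items():
        counts = Counter(labels)
        agreement = counts.most_common(1)[0][1] / len(labels)
        if agreement < min_agreement:
            flagged.append((image, dict(counts)))
    return flagged

for image, counts in flag_inconsistent(expert_labels):
    print(f"Review {image}: labels {counts}")
```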

In effect, a data-centric AI/deep learning design approach means that the designer focuses on the quality of the data used to train the AI model rather than trying to tweak the resulting model by changing specific values or the statistical methods used to sample images and build the model. This approach opens deep learning system design to nonprogrammers while giving them the tools to create highly effective machine vision solutions.

Machine vision technologies have come a long way over the past 70 years, and while AI/deep learning software solutions may not be right for every application, they do offer tangible benefits where continuous improvement in inspection accuracy is a critical objective on the factory floor.