Before we start discussing this idea, it is worth looking at the definitions of both “cloud” and “machine vision.”

Cloud computing is used to describe a large number of computers connected through a real-time communication network.

Machine vision is the technology and methods used to provide imaging-based automatic inspection and analysis.

Machine vision has historically relied on large multi-core processors to handle the large images streaming quickly through its systems. This hooks very closely into what cloud computing can provide: large, scalable and distributed computing resources. From that characteristic it looks like a great fit, but the downside is the time taken for large data sets (images and videos are not small) to be sent to the distributed resources and worked on. Remember that the backplane inside a modern PC can run at 240 Gbps (gigabits per second), whereas my Internet connection right now can only achieve 30 Mbps, a somewhat slower pipe over which to transfer image files, process them and make real-time decisions. But changes are afoot, with 5G promising in excess of 1 Gbps, maybe as high as 10-20 Gbps, around the 2020 timeframe.
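To put those numbers in perspective, here is a quick back-of-the-envelope calculation in Python (a minimal sketch; the image size and link speeds are illustrative assumptions, not measurements):

```python
# Rough transfer time for one uncompressed image over different links.
# All figures are illustrative assumptions, not measurements.

IMAGE_MEGAPIXELS = 29      # e.g. a 29 MP industrial camera
BITS_PER_PIXEL = 8         # 8-bit monochrome, uncompressed
image_bits = IMAGE_MEGAPIXELS * 1_000_000 * BITS_PER_PIXEL

links_bps = {
    "PC backplane (240 Gbps)": 240e9,
    "Hoped-for 5G (10 Gbps)": 10e9,
    "Broadband today (30 Mbps)": 30e6,
}

for name, bps in links_bps.items():
    print(f"{name}: {image_bits / bps * 1000:,.1f} ms per image")
# Roughly 1 ms on the backplane, 23 ms over 5G, and nearly
# 8 seconds over today's broadband link.
```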

So how will the phenomenon we call cloud computing today come to realize itself in machine vision systems?

One possible solution lies in how machine vision systems are programmed. We are already aware of Mechanical Turk (www.mturk.com), where the reverse of traditional automation is occurring: the site uses a low-paid, globally accessible workforce to work on problems at its own convenience.

What if we were to create machine vision inspection systems in a similar manner? Programming is traditionally the most expensive part of a machine vision system, because of the high levels of expertise and experience required of the machine vision system programmer. Now imagine a world where that level of expertise is no longer required, and a non-machine-vision programmer could develop the next generation of machine vision systems. What if, in the future, you purchased a machine vision system from a manufacturer, put it on your line, marked images as good or defective, and those images were magically transferred to the cloud, with a working inspection program sent back for you to use? (A sketch of that workflow follows the list below.) The business model for using the machine vision system could also change:

What if, instead of paying for the equipment out of capital expenditure, you could pay for the system per part inspected or by the month, buy the hardware from the manufacturer on a credit card, or lease it monthly?

What if, when you were done running that product line, you could redeploy the hardware to the next line, undertake training and start again?
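To make that label-and-upload workflow concrete, here is a minimal Python sketch. Everything in it is hypothetical: the endpoint URL, the folder layout and the returned model file are assumptions for illustration, not a real vendor API.

```python
import json
import pathlib
import urllib.request

# Hypothetical cloud-training endpoint; not a real vendor API.
TRAINING_URL = "https://vision-cloud.example.com/train"

def upload_labeled_images(image_dir: pathlib.Path) -> bytes:
    """Send locally labeled images (in good/ and bad/ subfolders)
    to the cloud and receive a trained inspection program back."""
    payload = []
    for label in ("good", "bad"):
        for img in sorted((image_dir / label).glob("*.png")):
            payload.append({"label": label, "data": img.read_bytes().hex()})
    request = urllib.request.Request(
        TRAINING_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read()  # the "program sent back" for your line

# Usage: mark images on the line, then let the cloud do the programming.
# model = upload_labeled_images(pathlib.Path("line7_images"))
# pathlib.Path("inspection_model.bin").write_bytes(model)
```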

Wouldn’t this be a vast improvement on the way traditional machine vision systems are deployed?

Would this mean the barriers to machine vision systems are reduced? Traditionally, hardware is only 15% to 40% of the cost; the rest is the integration and software development of the algorithms that solve the problem(s).

Could it mean that a machine vision system is no longer a capital expenditure item?

Machine vision would be deployed on a much grander scale. It could become a tool that gets transferred around a facility.

The industry would no longer be dependent on a core set of specialized companies (machine vision integrators) that are in the business of creating these inspection systems. (The author is the CEO of one such company.)

The systems deployed would more accurately reflect what the client wants: not merely the tools available today, but what should be delivered to them tomorrow.

Now, what if we could take a subset of something similar to Mechanical Turk and deploy it to solve the hard problems? Whether it was highly talented individuals or highly complex algorithms running on servers, if it solved those difficult problems, would it matter?

Images increase in size every day at the consumer level, such as the 42 MP camera on a phone. Do we need to transfer all of this data for processing? Searching for lost planes involves using satellites to find potential items of interest, after which planes and boats are sent to investigate further. I see this as a natural progression in machine vision. We have 29 MP cameras available now, and the same problem remains: we can run either high speed or high resolution, but our customers want us to do both at different times. The industry needs to be able to give different views back from the sensor based on what is required. If we could look at a low-resolution full-frame image, find an area of interest, much like the debris spotted at sea, and then request a high-resolution image of just that region, that is what our clients are requesting and what the industry needs.
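As a minimal sketch of that coarse-to-fine idea, in Python with NumPy (the `camera` interface here is hypothetical; real sensors expose the equivalent through vendor-specific ROI and binning controls):

```python
import numpy as np

def find_roi(preview: np.ndarray, threshold: float = 0.8):
    """Locate the brightest region in a low-resolution preview frame.
    Assumes at least one pixel exceeds the threshold."""
    mask = preview > threshold * preview.max()
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()

def coarse_to_fine(camera, scale: int = 8):
    """Grab a downsampled full frame, find the interesting area,
    then request only that window at full resolution.

    `camera` is a hypothetical interface with grab_preview() and
    grab_window(x0, y0, x1, y1) methods."""
    preview = camera.grab_preview()          # e.g. a 1/8-scale frame
    x0, y0, x1, y1 = find_roi(preview)
    # Map preview coordinates back to full-sensor coordinates.
    return camera.grab_window(x0 * scale, y0 * scale,
                              (x1 + 1) * scale, (y1 + 1) * scale)
```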

If it were possible to transfer only the data of interest, would it then be possible to acquire an image, transfer it to the cloud, process it there, and return a result in time? Maybe. At that point it definitely gets closer.
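A rough latency budget makes that “maybe” concrete (a sketch; every figure below is an assumption for illustration, not a measurement):

```python
# Hypothetical round-trip budget for cloud-side inspection of one part.
# Every number here is an assumption, not a measurement.

roi_bits = 1_000_000 * 8        # a 1 MP, 8-bit region of interest
uplink_bps = 30e6               # today's broadband link
cloud_processing_s = 0.050      # assumed cloud-side inference time
result_bits = 1_000 * 8         # pass/fail plus coordinates coming back

round_trip_s = (roi_bits + result_bits) / uplink_bps + cloud_processing_s
print(f"Estimated round trip: {round_trip_s * 1000:.0f} ms per part")
# About 317 ms: too slow for a fast production line today, but within
# reach once 5G-class links cut the transfer term dramatically.
```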

In summary, is machine vision in the cloud a reality? It’s coming, and some of the technologies are here today, but when it does arrive, it will change the business model of machine vision.