'Turing Learning' Promises More Flexible Robots

January 09, 2018      

In 1950, Alan Turing, the famous mathematician and computer scientist who helped Britain crack enemy codes during World War II, published a paper describing a test he called the “Imitation Game.” That test became the basis for a recent film, as well as for “Turing learning.”

The test is straightforward: a “judge” converses with an unseen party and must decide whether it is a human being or a computer. If the judge cannot reliably tell the computer from a human, then the computer has reached a new level of artificial intelligence.

The test, which has come to be known as the “Turing Test,” has been used for decades to measure how advanced computers and software have become. But the Turing Test was designed with humans in mind.

What happens when computers start judging other computers?

Turing learning works without human input

Last summer, researchers at the University of Sheffield in the U.K. announced that they had developed a new kind of technology.

This technology allows machines to learn how “natural and artificial neural systems work” simply by observing them, without any human input.

To test their technology, the researchers designed an experiment based on the Turing Test. Except there were no humans involved, only robots.

They set up two groups of swarm robots and monitored them: an “original” group and a “counterfeit” group.

The software's job was to observe the swarms' movements, learn from them, and identify which swarm was original and which was counterfeit. The researchers called this approach “Turing learning.”

To explain how this technology works, one of the researchers used painting as an example. Today, if software is deployed to identify paintings in the style of Pablo Picasso, a programmer has to supply a benchmark for what counts as a Picasso.

Turing learning turns this on its head: the software itself works out what counts as Picasso-like, without any pre-programming or human input.
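Broadly, the published Turing learning scheme achieves this by coevolving two populations: candidate models that imitate the observed behavior (the “counterfeit” data) and classifiers that try to tell genuine data from imitation. Classifiers are rewarded for telling the two apart; models are rewarded for going unnoticed. The sketch below is a minimal, illustrative version of that loop on a toy problem, a one-dimensional random walk with an unknown step size. The toy behavior, the interval-shaped classifiers, and every name in the code are assumptions made for illustration, not the researchers' actual swarm experiment.

```python
# Minimal, illustrative sketch of the Turing-learning idea: candidate "models"
# of a hidden behaviour and "classifiers" that try to tell genuine data from
# model-generated data improve together. The toy behaviour here (a 1-D random
# walk with an unknown step size) and all names are assumptions for
# illustration, not the Sheffield team's actual swarm experiment.
import random

TRUE_STEP = 0.7    # hidden parameter of the "original" system
SAMPLES = 10       # traces drawn per evaluation
TRACE_LEN = 50     # steps per trace

def trace(step_size):
    """One motion trace of an agent with the given step size."""
    x, out = 0.0, []
    for _ in range(TRACE_LEN):
        x += random.uniform(-step_size, step_size)
        out.append(x)
    return out

def feature(t):
    """Crude summary statistic: mean absolute displacement per step."""
    return sum(abs(b - a) for a, b in zip(t, t[1:])) / (len(t) - 1)

def accepts(clf, t):
    """A classifier is an interval (center, width); it labels a trace
    'original' if the trace's feature lands inside the interval."""
    center, width = clf
    return abs(feature(t) - center) < width

def turing_learning(generations=150, pop=20):
    models = [random.uniform(0.0, 2.0) for _ in range(pop)]   # candidate step sizes
    clfs = [(random.uniform(0.0, 1.0), random.uniform(0.05, 0.5)) for _ in range(pop)]
    for _ in range(generations):
        real = [trace(TRUE_STEP) for _ in range(SAMPLES)]
        fake = {m: [trace(m) for _ in range(SAMPLES)] for m in models}

        def clf_fitness(c):
            # Reward accepting genuine data and rejecting model-generated data.
            tp = sum(accepts(c, t) for t in real) / SAMPLES
            tn = sum(not accepts(c, t) for m in models for t in fake[m]) / (SAMPLES * len(models))
            return tp + tn

        def model_fitness(m):
            # Reward fooling the classifiers into accepting imitation as real.
            return sum(accepts(c, t) for c in clfs for t in fake[m])

        # Truncation selection plus Gaussian mutation for both populations.
        best_clfs = sorted(clfs, key=clf_fitness, reverse=True)[: pop // 2]
        best_models = sorted(models, key=model_fitness, reverse=True)[: pop // 2]
        clfs = best_clfs + [(c + random.gauss(0, 0.05), max(0.01, w + random.gauss(0, 0.02)))
                            for c, w in best_clfs]
        models = best_models + [max(0.0, m + random.gauss(0, 0.05)) for m in best_models]
    return models, clfs

if __name__ == "__main__":
    models, _ = turing_learning()
    print("median inferred step size:", round(sorted(models)[len(models) // 2], 2),
          "(true value:", TRUE_STEP, ")")
```

The point of the sketch is the division of labor: no one tells the system what “genuine” movement looks like. The benchmark emerges from the contest between the two populations.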

Robots exceed design goals

For a robotics company, the implications of this are not obvious at first glance. But Turing learning could be one of the most disruptive changes in robotics, and it is neither a new product nor a government strategy.

In the future, robots won’t be limited to the capabilities they are manufactured with. A new sector is forming within robotics, one that revolves around taking the existing capabilities of robots to a new level, long after they have left the factory.

Alan Turing

This is akin to the over-the-air updates that Tesla provides to its vehicles, giving them new capabilities after they have been sold.

Turing learning could be deployed in practically any setting, giving industrial automation capabilities akin to those of AI-powered autonomous robots.

For example, consider the agricultural robots developed in the Netherlands. One of them automates the process of milking: a cow can walk into a machine and have its milk extracted.

Turing learning could take the capabilities of this machine one step further.

Turing learning could theoretically analyze cows based on their milk and other variables and predict how healthy the cow is and what its potential yield will be. Apply this machine learning to an entire dairy farm, and it gives farmers forecasts they have never had access to before.
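To make the idea concrete, here is a purely hypothetical sketch of the kind of forecast described above: fit a simple model relating a per-cow measurement to observed milk yield, then project yield across the herd. The records, the single feed-intake feature, and all the numbers are invented for illustration; a real system would draw on far more variables.

```python
# Hypothetical illustration only: forecast per-cow milk yield from a single
# observed variable (daily feed intake, in kg) with ordinary least squares.
# The records are invented; a real system would observe many more variables
# (activity, milk conductivity, lactation stage, ...).

records = [  # (feed_intake_kg, milk_yield_litres) per cow per day
    (18.0, 24.1), (20.5, 27.8), (22.0, 30.2), (19.2, 25.5),
    (23.5, 32.0), (17.1, 22.6), (21.0, 28.9), (24.0, 33.1),
]

def fit_line(points):
    """Ordinary least squares fit of y = a * x + b."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    a = cov / var
    return a, mean_y - a * mean_x

a, b = fit_line(records)
for intake in (16.0, 20.0, 25.0):
    print(f"feed {intake:4.1f} kg/day -> predicted yield {a * intake + b:5.1f} L/day")
```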

Service robots empowered by Turing learning

In another example, Turing learning could be applied to humanoid robots. In 2016, AsusTek Computer Inc. unveiled a humanoid robot called Zenbo.

Taiwan-based ASUS designed Zenbo to assist its owners by doing things such as reminding them of appointments or monitoring the environment for “emergency situations.”

At the end of the day, Zenbo, like Pepper, Jibo, and other social robots, is only as good as the software and programming that comes with it. But with Turing learning, Zenbo could take its services to the next level.

What if Zenbo could spot emergencies by monitoring data like movements of occupants or placement of objects? Might it predict when it will be needed the most and move to a specific location? What kind of connections could it make by observing its surroundings day in and day out?
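As a purely hypothetical illustration of the first question: the robot could learn a baseline of household activity from its own observations and flag readings that fall far outside it. The per-hour motion counts, the z-score rule, and the threshold below are all invented assumptions, not anything ASUS has shipped.

```python
# Hypothetical sketch: flag unusual household activity by comparing the
# current hour's motion-event count against a baseline the robot has
# learned for that hour of day. Data and threshold are invented.
from statistics import mean, stdev

# Invented history: motion events per hour observed over the past five days.
history = {
    8:  [14, 16, 15, 13, 17],
    9:  [20, 22, 19, 21, 23],
    14: [6, 7, 5, 8, 6],
    23: [1, 0, 2, 1, 1],
}

def is_unusual(hour, count, k=3.0):
    """True if `count` deviates from the learned baseline for `hour`
    by more than k standard deviations (a simple z-score rule)."""
    baseline = history[hour]
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(count - mu) > k * max(sigma, 1.0)  # floor sigma to avoid over-triggering

print(is_unusual(9, 21))   # typical mid-morning activity -> False
print(is_unusual(14, 0))   # sudden silence in the afternoon -> True
```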

These are just two applications of Turing learning, but they point to something groundbreaking: in both the milking machine and the humanoid robot, robots could learn about their environments and make predictions without any human input.

AI and RaaS

Given the emerging robotics-as-a-service (RaaS) model, could Turing learning be offered alongside hardware for a monthly or annual fee? A robotics supplier could provide different capabilities to a robot depending on what the client wants.

The application of AI and machine learning could give rise to robotics companies that don’t manufacture anything. Instead, they’d provide enhanced capabilities to existing robots through a plug-and-play approach or cloud connectivity.

Both robotics vendors and end users have grown used to the idea that once a product ships, its capabilities are fixed. Soon, autonomous systems could remain flexible or even learn on the job.

How does this change the way you approach robotics?