January 16, 2017      

Robotics is a serious business, encompassing industrial automation, healthcare applications, military strategy, and much more. It can be a serious problem when robots don’t meet performance expectations. Of course, that doesn’t mean we can’t poke a little fun at robots when they fail in their attempts to be more human.

Do the robots know we’re laughing at them? Let’s hope not! Here’s a roundup of the biggest robot fails of 2016.

Fail 5 — Not funny


Facial recognition software can be as fallible or biased as humans.

In December, facial recognition software in New Zealand rejected the passport photo of a citizen of Asian descent because his eyes weren’t open. The problem is, his eyes were open.

The 22-year-old had submitted an online application for passport renewal. The software deemed the first picture invalid but later accepted a second picture.

At least the victim isn’t alone. According to the government, an estimated 20 percent of photos are rejected for “various reasons.” The incident revealed the bias that can exist within so-called intelligent software.

Today it’s the eyes. Tomorrow, will it be the height, skin color, or clothing that robots don’t like?


Promobot was a runaway Russian robot that stopped traffic.

Fail 4 — Wanting to be noticed

Humans escaping — or trying to escape — confinement isn’t a new phenomenon. But robots copying that behavior definitely is.

In Russia, a service robot called “Promobot” caused a traffic jam after it escaped from its lab. Once the robot was captured, the engineers reprogrammed it, hoping that would solve the problem. Nope. The robot tried escaping again, making it about 50 meters into the street before breaking down and causing another traffic jam.


Police arrest Promobot at a Russian political rally.

As if that weren’t enough, in September, Promobot was arrested for taking part in a political rally in Russia. What was it doing there? Recording voters’ opinions for a political party to examine later. Police tried handcuffing the robot and, thankfully, it didn’t put up any resistance — this time.

Clearly, the robot was suffering from some emotional problems and wanted to be noticed by police, drivers, and protestors alike.

Fail 3 — Why am I this way?

Microsoft made headlines in March because of Tay, a Twitter bot that started out on a positive note but turned racist within 24 hours.

Tay learned through interacting with people, but this model, which sounds great on paper, didn’t account for the darker side of human interaction.

The failure isn’t just that Microsoft’s marketing stunt to show off Tay’s AI capabilities was turned completely on its head; it’s also that the application builds its behavior from whomever it interacts with. If Tay’s interactions are negative or racist, that becomes the basis of the AI.

After more than 96,000 tweets, it is only logical that Tay was asking itself an important question: “Why am I this way?”

Fail 2 — I slipped

If the name of a robot is “Fatty,” you are unlikely to think it is unfriendly, let alone be afraid of it attacking you. But that’s exactly what happened at the China Hi-Tech Fair in Shenzhen in November.


Fatty robot injured a person at the China Hi-Tech Fair.

During a demonstration, Fatty rammed into a booth, shattering its glass and sending shards flying. One shard pierced a bystander’s ankle.

This was all a big mistake, of course: someone had pressed the wrong button, sending Fatty forward when it was supposed to go backwards.

There hasn’t been much information about Fatty since the event. Fatal accidents involving industrial robots are relatively rare, but Fatty slipping and sliding into glass reminds us that some robot fails are really the fault of their human operators.


Fail 1 — Wasn’t trained properly

The biggest robot fail of 2016 isn’t a failure in the conventional sense. The robots simply failed at their duties and lost their jobs because of it.


Restaurants in China were forced to lay off robot waiters. (Source: ChinaFotoPress via Getty Images)

In China, three restaurants in Guangzhou fired their robot waiters because of their “utter incompetence.” The robots did so badly that two of the restaurants have closed down, and the third restaurant is re-hiring humans. It is keeping one robot — probably to show the other employees what bad work looks like.

The restaurant proprietors complained that the robots kept breaking down. One expert said that robots can do well at repetitive tasks but can’t find their bearings when they have to interact with humans.

While it is funny to hear how robot fails got them fired, this all points to a real gap between public expectations of robots and what they can actually do.

If robots can’t take orders for food, transport dishes to customers, or handle customer inquiries, then the incoming wave of automation in the restaurant industry appears further away than what’s being predicted and sold.

As far as the robot waiters that were fired, we can only imagine their excuse when they received the bad news. It would probably be something like, “I wasn’t trained properly.”