Executives around the globe generally understand that robotics, artificial intelligence, and drones can make their companies more efficient. However, some investors and early adopters are not considering automation safety, which can cause unexpected headaches after purchases.
Enterprises across industries are turning to automation hardware and software, but they don’t always perform due diligence before signing contracts.
“That’s absolutely happening,” said Svetlana Sicular, a research vice president at Gartner Inc. “They are assuming a lot, or maybe they aren’t even assuming. Maybe they don’t even think of it. They don’t know what they don’t know.”
Sicular cited the example of Microsoft Corp., which in 2016 deployed a sophisticated chatbot, Tay. The company said it would evolve to better understand and communicate with human beings.
But after less than 24 hours of interacting with angry and fearful people online, Tay indeed evolved by mimicking its users. It spewed white nationalist insults before Microsoft shut it down.
Tay was a teachable moment in software history, but at least the stakes were relatively small. To be fair, most robotics and AI makers comply with automation safety requirements, as long as systems operate in prescribed locations within a standard business environment.
Robots and AI may be getting smarter, and they can make manufacturing safer. At the same time, organizations that ignore automation safety can put their workers’ lives at risk. Educating employees is an important part of ensuring automation safety.
Power- and force-limited robots offer the ability for automation to work more closely with humans without traditional cages, but safety assessments are still necessary.
For instance, collaborative robot arm maker Universal Robots has added sensors to its e-Series line, yet most cobot suppliers acknowledge that automation safety depends on process, location, and payload as much as their technology.
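The dependence on payload and speed can be sketched as a simple pre-deployment check. The contact-energy budget below is an illustrative placeholder, not an actual ISO/TS 15066 limit; real assessments use body-region-specific thresholds and measured contact data.

```python
# Hypothetical pre-deployment check for a power- and force-limited cobot.
# The 0.5 J budget below is a made-up illustration, NOT a published limit.

def transient_contact_energy(moving_mass_kg: float, payload_kg: float,
                             speed_m_s: float) -> float:
    """Worst-case kinetic energy (joules) available in a transient contact."""
    return 0.5 * (moving_mass_kg + payload_kg) * speed_m_s ** 2

def max_safe_speed(moving_mass_kg: float, payload_kg: float,
                   energy_limit_j: float) -> float:
    """Highest speed (m/s) that keeps contact energy under the budget."""
    return (2 * energy_limit_j / (moving_mass_kg + payload_kg)) ** 0.5

# Example: arm with 4 kg effective moving mass carrying a 2 kg payload,
# checked against an assumed 0.5 J contact-energy budget.
limit = 0.5
speed_cap = max_safe_speed(4.0, 2.0, limit)
assert transient_contact_energy(4.0, 2.0, speed_cap) <= limit + 1e-9
```

The point of the sketch is that the same arm passes or fails depending on what it carries and how fast it moves, which is why a safety assessment must cover the application, not just the robot.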

Validate on site
But it’s still dangerous to assume all of a system’s bugs have been worked out, especially when dealing with startups. On the buyer’s side, there are vanishingly few cookie-cutter roles for advanced automation today, which means systems may occasionally react in unexpected ways.
Indeed, “you can’t always 100% validate systems in the factory,” said Dan Posner, automation/robotics program manager at TÜV Rheinland, an international inspection-services firm. One of TÜV Rheinland’s services is validating robotics.
In addition, industry standards are less than comprehensive, and specifications can sometimes be contradictory. Manufacturing specifications are voluntary and usually address what is needed for a specific product. Specs can be unique to a single maker.
Standards are consensus agreements, usually applied across an industry. Specs sometimes become standards.
Problems with automation safety have already prompted expensive rework, and oversights could pose risks for a buyer’s products, its other robots, and its employees or customers.
Integrators such as RobotWorx are also crucial to making sure that automation safety takes both the device and the use case into account.
Design for automation safety
Robotics design can inadvertently cause unforeseen problems, said Posner. Increasingly, robots in manufacturing and those that are stationed in public places are designed to look futuristic and friendly, which might present an attractive hazard. People could let down their guard and get injured.

“Some vendors I’ve talked to say robots frighten people,” he said. “Those vendors aren’t operating in reality. That does not happen.”
In autonomous vehicles, fatal accidents have caused setbacks for Tesla and Uber. This is partly because drivers trust the technology more than they should and because the handoff to manual control takes too long. This can weaken public or market confidence in autonomous safety and lead to more restrictive regulation.
The aerial drone space faces similar challenges as a result of highly publicized near-misses and pressure for the U.S. Federal Aviation Administration to allow beyond visual line of sight (BVLOS) and more autonomous operations.
Robotics vendors need to make sure their machines comply with ISO/TS 15066, an international technical specification for collaborative robots operating near people.
Much of advanced automation “is coming from a small group of people, and most of them are not involved in the enterprise,” said Sicular. “Now, everyone wants to catch up.”
“Ask yourself this,” she said. “How many mistakes does someone make while doing something new?”
Watch for AI bias
The Microsoft incident has spurred work on using machine learning and related technologies to reduce bias, but developers and users should still check their own assumptions.
Sicular noted an embarrassing glitch in automated soap dispensers discovered in 2015. The infrared-light detectors were calibrated to people with pale skin and didn’t dispense soap to people with darker complexions.
“I see among my clients those that massively miss the feedback loop,” she said. “They don’t know if they’re doing something right. If no immediate problem arises from their actions, they just keep going.”
Facial recognition software is another example of unintentional racism that algorithms can exhibit, based on the data that they are fed. Culture affects robot design and reactions more than developers may realize.
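The feedback loop Sicular says her clients miss can start as simply as tracking error rates per group. The data and the disparity margin below are fabricated for illustration only.

```python
from collections import defaultdict

# Hypothetical recognition outcomes: (group label, prediction correct?) pairs.
# These records are fabricated solely to illustrate the check.
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def error_rates(records):
    """Per-group error rate: wrong predictions divided by total predictions."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

rates = error_rates(results)
disparity = max(rates.values()) - min(rates.values())
if disparity > 0.2:  # the 0.2 margin is an arbitrary illustrative threshold
    print(f"bias alert: error-rate gap of {disparity:.0%} across groups")
```

Without a monitoring step like this, a system that quietly fails for one group of users produces no immediate error, and, as Sicular warns, the operators “just keep going.”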
All too often, companies miss a built-in flaw with automation safety or AI bias until it gets reported in the news media, Sicular said.