A lot of science fiction, from Star Trek and Ex Machina through Westworld, makes it seem like human-level artificial intelligence is around the corner. However, sentient machines are still pretty far away. In the meantime, ethicists and regulators in Europe and elsewhere have begun considering robotics and AI rules.
The most pressing use cases involve autonomous machines in factories and warehouses. The more functions are performed by the onboard (and cloud-based) “brains” of robots, the more we need to anticipate potential outcomes and determine how to assign legal blame in the case of an accident.
In my previous article, we looked at how the EU may soon create an agency governing safety, liability, and other standards for mobile and social robots and for autonomous vehicles. The concept of electronic “personhood” essentially boils down to the legal status of these machines. Ownership versus independence will be a key factor in determining responsibility for any incident.
Business Takeaways:
- Public agencies and private companies need to collaborate on a consistent legal framework for determining responsibility in the case of accidents involving robots and AI.
- The novel issue of “robot rights” will require innovative, enforceable solutions.
- The EU is leading efforts to anticipate technology development with robotics and AI rules.
A bright future?
In the 20th century, flight evolved from the biplanes introduced before World War I to the moon landing and the space shuttle. We may now be on the verge of a similar evolution in computer science and robotics, one in which truly autonomous, thinking machines could emerge.

Self-driving cars, such as this Volvo Drive Me, are leading the way for AI rules.
Social robots, self-driving cars, and various applications of machine learning offer many commercial opportunities. However, each of these involves not only technical challenges, but societal ones as well.
According to many industry pundits, AI has the potential for economic disruption on the scale of the Industrial Revolution.
Policy makers and businesses need to consider the effects of robotics and AI. If many production and service jobs are replaced through physical and software automation, how will taxation support programs such as Social Security?
Regulations address the physical safety of humans sharing workspaces or roads with robots, but they will also need to account for interactions and employment. Automation has its roots on the manufacturing floor, but now the very nature of work is changing all the way up the corporate ladder.
A legal person is something that is subject to human justice, and I do not believe it is possible in a well-designed system to create the kind of suffering that is intrinsic to human punishment.
— Joanna Bryson, professor at the University of Bath
Some of the key questions to be answered are:
- What will it be like to have machines that can reason (for the most part) like humans?
- Should we define a new legal status for autonomous machines and processes?
- Will they be slaves?
- Will they be the property of their owners?
In the U.S., the Federal Aviation Administration now requires that all drones and drone pilots be registered. I would expect a similar licensing ecosystem for certain robots and AI systems. This registration process could be one source of funding for an insurance pool to cover accident liability.
Sorting it all out
Governments must determine how their legal systems should be modernized to deal with these potential scenarios, up to and including legal personhood for a sentient machine.
Even before then, lawmakers and industry need to come up with a new insurance solution for autonomous vehicles and AI — something akin to “no fault” insurance for accidents when the machine logic is at fault. The question is, how do you fund an insurance system such as this?
One benefit of untangling any eventual “criminal AI” incident should be access to the machine’s “black box” data recorder, because unlike humans, a robot shouldn’t be able to forget or lie. The facts and data from any potential situation should be available to accident investigators.
This record should include both the sensor data and the logic choices made by the AI, which will make fault determination much more straightforward in the courtroom.
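To make that concrete, here is a minimal sketch of what such a recorder might log, assuming a simple append-only JSON-lines file. The names (DecisionRecord, BlackBoxRecorder) and the field layout are hypothetical illustrations, not an existing standard:

```python
import json
import time
from dataclasses import asdict, dataclass
from typing import Any, Dict

@dataclass
class DecisionRecord:
    """One 'black box' entry: the sensor inputs and the logic choice they drove."""
    timestamp: float
    sensor_data: Dict[str, Any]  # e.g., lidar ranges, detected objects
    decision: str                # the action the AI selected
    rationale: str               # the rule or model output behind the choice

class BlackBoxRecorder:
    """Append-only recorder so investigators can replay events after an accident."""
    def __init__(self, path: str):
        self.path = path

    def record(self, entry: DecisionRecord) -> None:
        # One JSON object per line; appending (never rewriting) makes
        # after-the-fact tampering easier to detect.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(entry)) + "\n")

# Example: log a braking decision triggered by a detected pedestrian.
recorder = BlackBoxRecorder("blackbox.jsonl")
recorder.record(DecisionRecord(
    timestamp=time.time(),
    sensor_data={"lidar_min_range_m": 4.2, "camera_objects": ["pedestrian"]},
    decision="emergency_brake",
    rationale="object within minimum stopping distance",
))
```

In practice, such a log would also need to be tamper-evident (for example, cryptographically signed), but even this simple structure pairs every action with the inputs that produced it.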

Is a “kill switch” necessary for automation rules?
The European Parliament Committee on Legal Affairs has even gone so far as to propose requiring a robot “kill switch.” On the factory floor, this has historically been implemented as an emergency stop. “Kill switch” sounds more dramatic, but the question remains whether autonomous robots need some way to be halted if they malfunction or behave dangerously.
We’re already seeing one example of this scenario play out with the latest drone-disabling technologies. While drones aren’t yet fully autonomous, the same basic idea is at play: retaining the ability to remotely stop a device that is doing something undesirable. The U.K. is considering similar measures.
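As a rough sketch of that pattern, the control loop below runs only while a shared stop flag stays clear, so an operator button, a remote command, or a watchdog can halt it. The KillSwitch class here is a hypothetical illustration, not an API from any real robot framework:

```python
import threading
import time

class KillSwitch:
    """A shared stop flag that any channel (operator button, remote command,
    watchdog timeout) can trip to halt the robot."""
    def __init__(self):
        self._stop = threading.Event()

    def trip(self, reason: str) -> None:
        print(f"Kill switch tripped: {reason}")
        self._stop.set()

    def is_tripped(self) -> bool:
        return self._stop.is_set()

def control_loop(kill: KillSwitch) -> None:
    while not kill.is_tripped():
        # ...normal autonomous behavior would run here...
        time.sleep(0.1)
    # Once tripped, stop actuators and enter a safe state immediately.
    print("Motors halted; robot in safe state.")

kill = KillSwitch()
worker = threading.Thread(target=control_loop, args=(kill,))
worker.start()
time.sleep(0.5)                       # simulate normal operation
kill.trip("remote operator command")  # simulate a remote e-stop
worker.join()
```

The design point is that the stop path bypasses the normal decision logic entirely: even if the robot’s “brain” misbehaves, the switch still works.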
Global debate requires unified solutions
As robotics and AI companies continue to innovate and produce ever more autonomous machines, it is critical that we understand the impact of laws in various regions. The debate about allowing self-driving cars on the road alongside human drivers has resulted in some cities outlawing these vehicles, while others have welcomed them.
Across the broad spectrum of robotic applications, complying with a patchwork of local restrictions could be chaotic for manufacturers. This is why it’s important to start exploring reasonable “rules of the road” for all regions now. I hope some uniform and sane guidelines will emerge from the early adopters.
Are AI rules a zero-sum game?
As machines get smarter, we need to balance machine autonomy with effective legal frameworks that protect the rights of the humans those machines could harm.
The EU has taken the lead in exploring potential scenarios. It’s now time for the rest of the world to contribute to this discussion.