A doctor examines a stroke patient located in a rural community in South America from a hospital located thousands of miles away. A stock trade is executed instantly based on real-time information. A new pair of shoes is purchased for a rock-bottom price during an Internet flash sale. A car is guided down the highway at a safe distance from others, automatically adjusting the engine speed, steering controls, and brakes based on real-time traffic information.
In each of these cases, there is no question that robotic technology is making life easier, safer, or more convenient for human beings. Despite these benefits, concerns remain about what happens when robotic technology fails, either unintentionally or by design, resulting in economic loss, property damage, injury, or loss of life.
Certainly, for some robotic systems, the issue of responsibility when an adverse action takes place largely follows traditional product liability case law, according to several legal experts. The manufacturer bears responsibility for a malfunctioning part, and the operator shares that responsibility if it can be proven the operator did not properly maintain the product or component according to the manufacturer’s specifications.
“The way the law works, there is a structure for product liability,” says Peter Asaro, a professor at New York City’s New School and a scholar at the Stanford Law School Center for Internet and Society. “For the most part, it applies to these systems as it always has. There are obviously going to be cases that will be difficult to decide, and there will be new precedents that will be established during that process.”
However, Asaro cautions, “A lot of companies are nervous about entering that marketplace because of the uncertainty about how some of these things will play out. There is a lot of effort right now, especially from Google, to get some structures in place in the law that would limit their liability, or shape it in a particular way that they could manage it.”
Designing Legal Structures to Manage the Unknown
Robots are being designed with increasingly sophisticated decision-making capabilities, rather than simply carrying out instructions pre-programmed by their designers or operators. As these robots act and react based on external, environmental stimuli, the ultimate liability for an accident or incident caused by a split-second decision made by the robot is less clear. In these cases, the robot or autonomous machine may be functioning exactly as the designer intended, without any clear errors or failures; if unintended results still occur, the issues of responsibility and liability must be sorted out.
Guy Fraker, former director of business trends and foresight for insurance giant State Farm and the co-founder and CEO of consultancy Autonomous Stuff, notes that real-world scenarios may crop up that have not been foreseen by robot designers. For example, Fraker notes the complexity of a scenario in which a driverless car makes its way through a parking lot, when a shopping cart and a baby carriage simultaneously roll in front of the car. Whereas human instinct would likely cause the driver to steer closer to the shopping cart (to avoid hurting an infant), it’s unclear whether an autonomous system that simply sees two objects in its path would be able to make a similar choice.
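Fraker’s parking-lot scenario can be made concrete with a small sketch. The point is that a cost-based planner can only prefer the shopping cart over the baby carriage if its perception layer actually distinguishes the two; when both register as generic obstacles, the choice is arbitrary. All class names and cost values below are illustrative assumptions, not any real vendor’s system.

```python
# Hypothetical sketch of a cost-weighted swerve decision.
# Class labels and cost values are invented for illustration only.

def collision_cost(obstacle_class: str) -> float:
    """Return a relative 'harm' cost for steering toward an obstacle."""
    costs = {
        "stroller": 1000.0,      # highest cost: never steer toward people
        "pedestrian": 1000.0,
        "shopping_cart": 10.0,   # property damage only
        "unknown": 500.0,        # unclassified objects get a cautious default
    }
    return costs.get(obstacle_class, costs["unknown"])

def choose_swerve_target(left: str, right: str) -> str:
    """Pick the lower-cost side when both paths are blocked."""
    return "left" if collision_cost(left) < collision_cost(right) else "right"

# With classification, the cart on the right is the cheaper target:
print(choose_swerve_target("stroller", "shopping_cart"))  # -> right
# Without it, both sides cost the same and the tie-break is arbitrary:
print(choose_swerve_target("unknown", "unknown"))
```

A system that merely detects "two objects in its path," as Fraker describes, effectively operates in the second case: the human instinct to protect the infant never enters the cost function.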
“The more we task robotics to act on our behalf, one of the first questions is, ‘who is responsible’ in the moment of truth,” Fraker says, noting that a conscious design decision or shortcoming that results in an accident will likely result in a product liability lawsuit, whereas a human failure to intervene when a system fails to perform as intended is more likely to result in a traditional insurance scenario. “We don’t have an answer for that [question] yet. Key private-sector stakeholders, trade associations, academic institutions, and public policymakers are working on frameworks for how to sort out this and other issues, but we just aren’t there yet.”
Another liability issue that is likely to arise involves the issue of human action, or inaction, in terms of either choosing to override the autonomous robot’s actions in the name of safety, or simply letting the robot continue its intended actions. If an accident occurs, whether with a driverless car, a telepresence robot that is navigating through a hospital hallway, or a drone that is headed toward a civilian village with no military value, the questions of liability are likely to be hotly debated.
“What is going to get really complicated is if these cars appear to be doing something unsafe, and the driver overrides it,” Asaro says. “Was it the manufacturer’s fault, or is it the individual’s fault for taking over? Because you are dealing with people’s perceptions and accounts of what is going on, it is a very fuzzy area that we really have not seen a lot of legal debate about. The answer to those questions is going to come from court cases.”
As robots increasingly are designed to utilize stimuli taken from the same space humans inhabit, the potential for negative consequences increases, as does the potential for lawsuits seeking to assess responsibility and collect damages. This has created a scenario in which manufacturers of robots are focused on perfecting their systems for 100% reliability, which would, in effect, make liability a non-issue. However, this view is misguided, according to some risk assessors.
“The other relevant question that we have to get a handle on is one of performance expectations. For example, with vehicle operation, we know [people] cause 93% of all accidents,” Fraker says. “What is an acceptable level of reliability for autonomous cars? Because if the answer is 99.99%, let’s stop now, let’s quit investing money, because perfection is not a realistic expectation.”
Despite the push among robot or autonomous system manufacturers to perfect their systems, it is likely the government will step in to make sure robotic technology is safe for use. Thus far, four states (Nevada, Texas, Florida, and California) have enacted laws making driverless cars legal, though there are significant conditions to be met, such as a requirement that a driver be sitting in the driver’s seat at all times, ready to take over in the event of an emergency, and that the driver must carry traditional accident insurance.
In the E.U., regulations governing the use of autonomous emergency braking and lane-departure warning systems have been passed, and are seen as signaling a desire to speed the introduction of driverless cars on the continent. Moreover, Fraker notes that in jurisdictions including Europe, China, and Brazil, the risk of litigation against product manufacturers is significantly lower than it is in the United States, given that these governments have taken a far more restrictive view of using the tort system to recover damages in the event of injury or death.
Furthermore, in the medical field, robots currently are and will continue to be heavily regulated in the U.S. by the Food and Drug Administration (FDA).
For example, the RP-VITA, a remote-presence robot developed jointly by InTouch Health and iRobot, essentially comprises two key functions: allowing remote telemedicine sessions to occur through a remote audio and video link (as well as basic sensors), and a mobility component, which uses sensing technology to allow the robot to autonomously travel between rooms.
Early this year, the RP-VITA received 510(k) clearance from the FDA, meaning the agency determined the device is substantially equivalent to a device already on the market. While the 510(k) clearance process does not shield manufacturers from liability claims, the process generally entails manufacturers taking steps to ensure their devices are safe.
There are no specific legal statutes surrounding the use of robots in the workplace, but the idea of robots working as agents on behalf of individuals or companies is raised by Samir Chopra, a professor of philosophy at the Brooklyn College of the City University of New York, and co-author of A Legal Theory for Autonomous Artificial Agents. Chopra says that as robots gain autonomy, it is likely that legal frameworks will be developed that will hold the owner or operator of a robot legally responsible for any accidents or problems that occur with the robot, due to the inability to effectively hold a machine legally liable for its actions.
“When that level of autonomy occurs, that’s when it starts to make less sense to think about these robots as tools of the people who operate them, and more like agents of the employers,” Chopra says, noting that this concept of agency is likely to occur as robots gain a greater level of autonomy. Furthermore, if a case of negligence or act of malice occurs as a result of the robot operating on behalf of its operator, imposing penalties on the robot itself would largely be ineffective. “What you might see down the line is that you would be forced to change [a robot offender’s code or design], so they could not do it again,” Chopra says.
Still, the technologies being used in driverless cars and other automated systems are far more accurate than humans, and are statistically less likely to cause harm than humans doing the same tasks.
“We don’t react well enough,” says Scott Nelson, CEO of MILE Auto Insurance, an automotive insurer founded to offer policies based on miles driven, rather than based on variables such as a driver’s age or location. “The question is, with the regulatory hurdles and the acceptance hurdles, are we going to be willing to turn our cars over [to robots], where we don’t do anything?”
That’s why most lawyers interviewed for this article agree that in the near term, liability and insurance structures are going to move very slowly in addressing the new wave of autonomous technologies, simply because there is little legal precedent for these systems. Further, organizations that are currently deploying robots declined to participate in this article, perhaps wanting to avoid boxing themselves into any specific legal corner if a problem were to arise.
Open Source Opens a Host of Questions
While most of the robotic technology in use today is based on closed, proprietary technology, robots are starting to be developed using open source software. As a result, liability might not be assigned to the developers of that software if something goes wrong, since the software itself is designed to be modified. As such, it is harder to quantify what types of functionality or uses could be considered “off-label” or unauthorized, or even ill-advised, based on an open source platform.
“Liability can be addressed currently with closed robotics, because they have restricted functionality,” says Diana Cooper, an articling associate at LaBarge Weinstein LLP and author of a paper entitled “A Licensing Approach to Regulation of Open Robotics,” which was presented at the We Robot Conference at Stanford Law School in April. Cooper notes the very nature of open source robotics is that the functionality is not predefined, and there are no restrictions in place governing what actions the robot can or cannot carry out. This is creating a considerable amount of confusion for robotics developers who are worried about liability, from the components used in robots to fully completed products.
“The problem is, how will the upstream parties that create certain components that are built into these end products be sheltered from liability from any downstream modifications that might be harmful?” Cooper asks. “That is a whole different ball game, because we cannot provide warnings to the market, since we do not know what that end product will be, and we cannot rely on the defense of product misuse since open robots are intended for modification.”
Legal scholars such as Cooper and Ryan Calo of the Center for Internet and Society have brought up the idea of creating a licensing system, similar to an end-user license agreement found on productivity software. In essence, the license of the robot or robotic component would stipulate that the robot or robotic component would not be used for certain actions (such as creating a weapon, or otherwise harming people, animals, or property), and would indemnify the developer and/or manufacturer of the robot or component against any claims resulting from a robot or component being used in violation of the terms of the license.
Cooper, for one, says the only way open source robotics will see support from manufacturers is if they develop some sort of framework to codify acceptable robot behavior, and create a method for indemnifying upstream suppliers of hardware and software from potential liabilities.
Says Cooper: “Hardcore open-source advocates don’t really want any restrictions on the use, but now that we’re inputting the software into hardware components that have actuators and a physical presence that interacts with people, I think that perhaps we should be looking at imposing some tailored restrictions.”
About the Author
Keith Kirkpatrick is principal of 4K Research & Consulting, LLC, based in Lynbrook, NY.