August 10, 2012      

As military forces worldwide begin adopting autonomous robotic warriors to replace humans on the battlefield, new and disturbing ethical questions are emerging.

What happens, for instance, if a robot engaged in combat suddenly suffers a major software error or hardware malfunction that causes it to kill civilians? Who's at fault: the robot, or the people who created and deployed the system?

This question has major implications, and not only for robot designers and the military. Businesses also have reason to worry. Who's at fault, for instance, if an autonomous caregiver robot in a hospital or nursing home suddenly decides to "pull the plug" on a patient? What happens if an assembly-line robot decides to turn around and lop off its human supervisor's head?

Free Will

Good or bad robot

Many legal and robotics experts hold the view that robots lack free will and therefore cannot be held morally accountable for their actions. But University of Washington psychologists are finding that ordinary people don’t have such a clear-cut view of humanoid robots.

In a paper recently published in the proceedings of the International Conference on Human-Robot Interaction, the researchers argue that people attribute a moderate level of moral accountability, along with other human characteristics, to robots that have social capabilities and are capable of harming humans. In the case described in the paper, the harm was financial rather than life-threatening, yet the researchers say it still demonstrates how humans react to robot errors. The findings, the researchers say, imply that as robots become more sophisticated and humanlike, the public may hold them morally accountable for causing harm.

The Lying, Cheating Robot

In an experiment conducted by a research team led by Peter Kahn, a University of Washington associate professor of psychology, 40 undergraduate students played a scavenger hunt with a humanlike robot named "Robovie." The robot appeared autonomous but was actually remotely controlled by a researcher concealed in another room.

After engaging in some small talk with the robot, each participant had two minutes to locate objects from a list of items in the room. All of them found at least seven, the minimum needed to claim the $20 prize. But when their time was up, Robovie claimed they had found only five.

Then came the crux of the experiment: participants’ reactions to the robot’s miscount.

“Most argued with Robovie,” says study co-author Heather Gary, a University of Washington doctoral student in developmental psychology. “Some accused Robovie of lying or cheating.”

When interviewed, 65 percent of participants said Robovie was to blame, at least to a certain degree, for wrongly scoring the scavenger hunt and unfairly denying them the $20 prize. The result suggests that as robots gain capabilities in language and social interaction, "it is likely that many people will hold a humanoid robot as partially accountable for a harm that it causes," the researchers wrote.


Relevant Across Contexts

"The question we address with our research, which asks who or what is morally accountable when a robot causes humans harm, is relevant across contexts," says Gary. "No matter whether a robot is interacting with people in the home, in an office or industrial setting, at an elder or childcare facility, or in a hospital, robots will inevitably cause harm to humans through hardware malfunctions and programming errors."

Gary says that it's easy to imagine a scenario in which a medication-dispensing robot gives a patient an accidental overdose, or a robot that can open and close doors lets an inpatient out of a psychiatric ward. "Our findings suggest that it is possible that the robot itself will not be perceived by the majority of people as merely an inanimate non-moral technology, but as partly, in some way, morally accountable for the harm it causes," she says. "This psychology will have to be factored into ongoing philosophical debate about robot ethics and jurisprudence."

Gary feels that it's only a matter of time before lawmakers and regulators begin addressing the issue of robot culpability. "There will likely come a point where accountability issues do not outweigh commercial value," she says. "It is therefore critical that in the early stages of the development of personified computational systems … we codify regulations around their use."