PERSPECTIVE: When robots are considered as manufactured products, many of the most practical and pressing issues facing robotics engineers are essentially like those facing other engineers.
It has been argued that a good place to begin thinking about ethics in robots is to look to existing legal frameworks and how these might apply to robots today and in the near future.
In this presentation I want to consider a few of the most significant ways in which we might use jurisprudence in order to get a better understanding of robot ethics.
Robot ethics vs. engineering ethics
Primarily, this will consist in examining the nature of the legal responsibilities involved in the production and use of robots, and will in this sense be very general. I believe this is a better place to start thinking about robot ethics than the immediate issues involved in the specific design choices of, or possible moral prohibitions on, the engineer, because those specific issues require a well-defined sense of how responsibility is focused, transferred, and distributed in and around robots.
I also believe that only by understanding what makes robots unique among technologies can we begin to think about what distinguishes robot ethics from engineering ethics more generally.
To do this, we will focus on the concept of legal responsibility, and begin by considering robots as being like any other technological device or product, and from there move to thinking about what could make robots different in the eyes of the law, and what special considerations robotics engineers might need to make as a result.
Existing legal system tends to do a pretty good job
It is important to be clear that legal responsibility is not exactly the same thing as moral responsibility. Still, I believe it represents an excellent starting point for thinking about robot ethics, for several reasons. As others have already noted, there is no single generally accepted moral theory, and only a few generally accepted morals.
And while there are differing legal interpretations of cases, and differing legal opinions among judges, the legal system ultimately tends to do a pretty good job of settling questions of responsibility. Thus, by beginning to think about these issues from the perspective of legal responsibility, we are more likely to arrive at practical answers.
This is because 1) legal requirements are likely to be how robotics engineers first find themselves compelled to build robots ethically, and so the legal framework will structure those pressures and their technological solutions, and 2) the legal framework provides a system for understanding agency and responsibility, so we need not wait for a final resolution of which moral theory is "right," or of what moral agency "really is," in order to begin addressing the ethical issues currently facing robotics.
We might think of legal responsibility as a subset of moral responsibility. There is certainly a large overlap between what is legal and what is moral, even under differing moral theories. Indeed, the disagreements between them consist of a relatively small set of cases in which what is morally acceptable, or required, is in violation of the law (e.g., civil disobedience, or speeding an injured person to the hospital), and a relatively large set of actions which are legally acceptable but morally despicable (e.g., being rude or obnoxious, making racist statements, or violating someone's trust outside of any legally binding contract).
While these cases do arise in real life, it is safe to assume that the vast majority of practical decisions faced by humans, and potentially by robots, will be of the sort on which legal and moral theories largely agree about the appropriate action. As such, building a robot capable of safeguarding the legal responsibility of those who build and use it would at least be a good start toward building one that has moral responsibility.
Robots as quasi-agents or quasi-persons by law
How then can the law help us in our thinking about robots? There are several relevant aspects of the law, and we will consider each in turn, but first a brief overview. In the most straightforward sense, the law has a highly developed set of cases and principles that apply to product liability, and we can apply these to the treatment of robots as commercial products.
As robots begin to approach more sophisticated, human-like performance, it seems likely that they might be treated by the law as quasi-agents or quasi-persons, enjoying only partial rights and duties. A closely related concept will be that of diminished responsibility, in which agents are considered not fully responsible for their own actions. This will bring us to the more abstract concept of agency itself in the law, and how responsibility is transferred to different agents.
Finally, we will consider corporate punishment, which is relevant both because it applies to cases of wrongdoing in product liability, and because it addresses the problem of legal punishments aimed at nonhuman agents, namely corporations.