September 20, 2013      

“The issues of liability, safety standards and trust are intrinsically linked to the acceptance and ultimate success of healthcare robotics.”
– Dr Catherine Easton


Liability

The introduction of a robot into a healthcare situation complicates the assignment of liability if something goes wrong.

There is the potential for damage to be caused to a patient, medical practitioner or other equipment if a robotic system malfunctions. Ethical issues arise in relation to agency and responsibility, with a need to establish who is in control and at what point a duty arises.

Layers of liability come into play with the potential to apportion blame to designers, programmers, medical staff and even the patient as the end-user.

Asaro[1] believes that these issues need to be addressed by embedding them within existing legal regimes and identifies product liability as a helpful framework.

Creators and manufacturers could be held negligent through both a failure to take proper care and a failure to warn. In this way, healthcare robots could be treated in the same way as any other manufactured product and liability apportioned according to the principles of negligence.

The level of negligence could be determined according to accepted industry standards.

Difficulties arise, however, in relation to sophisticated systems such as those created by the EU-funded ALIZ-E project,[2] which can evolve in a dynamic manner, building upon interactions with humans and the wider world.

This potentially unpredictable aspect leads to the need for a safety framework which both ensures that the experience of the end-user is paramount in the design process and also provides a strong focus on the ethical duties of the creator.

Safety standards

As is evident from the discussion of liability, the development of safety standards is crucial to the wider uptake and acceptance of healthcare robots.


Again, the evolutionary capabilities of some robotic systems cause difficulties in the development of guidelines and principles that cover potentially unforeseeable consequences of the design process.

Such standards need to provide an effective safety framework while also being flexible enough to respond to the rapid pace of technological development.

Harper[3] highlights the importance of mission-worthiness in standards creation: the dependability of the robotic technology in its specific environment.

In 2005 the International Organization for Standardization (ISO) established an Advisory Group (AG) on standards for mobile service robots.

A set of standards relating to non-medical personal care robots is currently under development.[4] At a practical level these cover specific tasks, potential environmental conditions and hazards, while creating validation tests by which to measure safety compliance.

Arguably, a physical malfunction such as a power surge or collision can be easily addressed by focusing upon the nature of the robotic system; what is more difficult to predict is the behavior of the humans with whom the robot is interacting.

Salter[5] identifies a ‘wildness’ in child-robot interactions which introduces numerous variables into the standards-setting process.

In work towards the development of so-called ‘roboethical’ guidelines, Enz argues that no standard is valid without assessment and consideration of human expectations and fears. While there are difficulties inherent in introducing potential human behavior into any guidelines, it cannot be overlooked in the development of reliable, robust standards.


Trust

Robots in the healthcare environment will not be successful if they are not fully trusted by both patients and practitioners.

While safety standards can address potential physical or emotional damage, healthcare robotics will not be effective if users are uneasy about their interaction with the technology.

This goes further than merely feeling safe; patients need to feel comfortable with the technology and able to rely upon it even when in a vulnerable state.[6] Japan, a country in which over 23% of the population is aged 65 and above and in which there is a severe shortage of domestic labor, has been at the forefront of robotic development.

Even in this apparently tech-friendly nation, however, responses to robot aides have not been entirely positive, with certain systems removed from hospitals due to patients’ lack of trust and their desire for human interaction.[7]

Research[8] into human/computer interaction has identified a need to develop an emotion-based architecture based upon regulative, expressive and motivational functions.

Hancock’s work identifies and analyses the link between human trust and a robotic system’s level of automation, behavior, dependability, reliability and predictability. Responses to robotic systems have also been found to differ significantly according to socio-demographic factors such as age, gender, education and even religious and cultural background.[9]

These issues need to be considered in relation to equality of service provision. If healthcare robots are to take on rehabilitative and therapeutic tasks previously carried out by humans, then they need to elicit a level of positive emotional response from users or they are likely to be rejected outright.

Robot rights?

While human suspicion of robotic systems needs to be addressed for this technology to be accepted, ethical issues can also arise when patients develop strong emotional attachments to a robotic healthcare provider.

On-going interaction with sophisticated dynamic systems can lead to a user projecting human qualities onto the technology and perceiving a psychological bond.[10] If this robotic system were to be violently destroyed, damaged or even reprogrammed this could have a detrimental effect on the mental well-being of its user.

Torrance[11] imagines a world in which rights could be extended to robots with humans under a responsibility to treat them ethically. Darling[12] draws an analogy to second-order rights such as animal rights, which are assigned not only due to the need to protect animals from pain but also due to a need to protect societal values.

Humanity, she argues, could be damaged if an entity onto which human characteristics have been projected is seen to be harmed. While this approach raises key ethical questions relating to robotic autonomy, quasi-personhood and, ultimately, consciousness, there is a limit to the extension of rights and corresponding duties to robots.

The human tendency to anthropomorphize does not apply only to robotic technology; similar attachments could be made to, for example, computers, televisions and kitchen implements.

It can be argued that robotic technology is not yet sophisticated enough to warrant its own framework of freedoms and responsibilities.

At a more basic level, healthcare practitioners working with robotic systems need to be aware of the potential for emotional links to develop and be provided with guidance on how best to support patients.


The issues of liability, safety standards and trust are intrinsically linked to the acceptance and ultimate success of healthcare robotics.

There is, however, a constant need to return to the rationale behind the implementation of this technology and the potential benefits it can bring.

Questions need to be raised around the ethics of the implementation of these systems and whether they are being developed for human good and the enhancement of service provision or whether they merely represent a misguided attempt to cut costs and, in turn, cut corners.

Dr Catherine Easton, Lecturer in Law at Lancaster University

[1] Asaro, P. (2007) Robots and Responsibility from a Legal Perspective, Proceedings of the IEEE 2007 International Conference on Robotics and Automation

[2] ALIZ-E (2012) The ALIZ-E Project: Adaptive Strategies for Sustainable Long-Term Social Interaction

[3] Harper (2010) Towards the Development of International Safety Standards for Human Robot Interaction, Int J Soc Robot 2 pp229–234

[4] ISO/DIS 13482 Robots and robotic devices — Safety requirements for non-industrial robots — Non-medical personal care robot

[5] Salter, T. et al (2010) How wild is wild? A taxonomy to characterize the ‘wildness’ of child-robot interaction, Int J Soc Robot 2 pp405–415

[6] Yagoda, R. et al (2012) You Want Me to Trust a ROBOT? The Development of a Human–Robot Interaction Trust Scale, Int J Soc Robot 4 pp235–248

[7] Fitzpatrick, M. (2011) No, robot: Japan’s elderly fail to welcome their robot overlords, 3 February 2011

[8] Hirth, J. and Berns, K. (2011) Emotion-based architecture for social interactive robots, Int J Soc Robot 3 pp273–290

[9] Flandorfer, P. (2012) Population Ageing and Socially Assistive Robots for Elderly Persons: The Importance of Sociodemographic Factors for User Acceptance, International Journal of Population Research

[10] Wada, K., Shibata, T., Musha, T. and Kimura, S. (2008) Robot therapy for elders affected by dementia, IEEE Engineering in Medicine and Biology, pp53–60

[11] Torrance, S. (2008) Ethics and consciousness in artificial agents, Artif Intell Soc 22(4)

[12] Darling, K. (2012) Extending Legal Rights to Social Robots, MIT We Robot Conference, University of Miami, April 2012