October 27, 2015      

“First, do no harm.” — The Hippocratic Oath

“A robot may not injure a human being or, through inaction, allow a human being to come to harm.” — Asimov’s First Law of Robotics

But what happens when an attacker surreptitiously takes control of a robot and transforms it into a rogue machine? The consequences could be quite dire, particularly in the rapidly growing field of healthcare robotics.

For the manufacturers of healthcare robots — ranging from precision surgical systems and mobile pharmaceutical dispensers to various patient-service robots — an emerging threat posed by cyberattacks and real-time hijacks presents a major technical and business challenge.

“In general, we need to be concerned with access to and control of a robot and with the data that lives on or is coming from a robot,” said Cory Kidd, founder and CEO of Catalia Health, a San Francisco company that develops interactive robots for patient engagement.

“In either case,” he said, “attacks will look very similar to any other software-based attack, but the results could be different, depending on whether we’re talking about a robot in a hospital, a home, or another sensitive environment.”

Hacking motivations

Perhaps the most immediate threat to healthcare robots comes from attackers motivated simply by the challenge and notoriety associated with invading a new and exciting technology.

“When we’re talking about healthcare-focused robots, some attacks will likely be motivated by the novelty of the technology,” Kidd said. “That doesn’t make them any less serious, as we don’t often know whether an attack is for that reason or something more nefarious.”

Yet as healthcare robots become increasingly commonplace, attackers are likely to find new reasons for launching cyber assaults.

“What motivates hackers to attack healthcare robots?” asked Tim Lynch, a computer interface analyst in Quincy, Mass. “Money, anarchy, terrorism, retribution, intimidation, getting rid of witnesses to get away with a crime, fame.”

Successful compromise of healthcare robotics could result in different types of damage.

“The possible harm really depends on the functionality of the system,” Kidd said. “Is this a surgical robot? A telehealth interface? A personal healthcare companion? A pill dispenser?” Any of these systems, as well as other devices, could be used to inflict some type of harm on an innocent patient.

Paging robot surgeons

Surgical robotics are becoming increasingly popular worldwide, partly because of perceived cost savings. In 2013, nearly 500,000 surgeries involved a doctor remotely manipulating a device during an operation.

According to Brandon Henry of RBC Capital Markets, 93 percent of robotic general surgeons reported procedure volumes that stayed flat or rose sequentially.

For minimally invasive operations, robots are often preferred over unaided human surgeons because patients tend to recover faster.

Robots are also being used in more major surgeries, such as joint replacements and removal of cancerous kidneys.

The market for surgical robotics is expected to increase from $3.2 billion in 2014 to $20 billion in 2021, according to a report from WinterGreen Research Inc.

Hijacked healthcare robots could cause harm in many ways. “They can deliver the wrong meds or the wrong dosage, crash into things in the ER, upload patient records to an outside party for ID theft, target individual patients, target doctors or nurses for attack by running into them, activate on-board cameras and upload images of patients, especially celebrity patients, and so on,” Lynch said.

Danger detected for robotic surgery

Perhaps the scariest scenario is an attacker hijacking a surgical robot in the middle of an operation, injuring or maybe even killing the patient. The question of whether healthcare robots can be infiltrated by attackers is no longer purely hypothetical.

Earlier this year, University of Washington BioRobotics Lab researchers reported that common cyberattacks could easily interfere with remotely operated robots on public networks.

To expose potential vulnerabilities, the researchers used a Raven II open-source, tele-operated surgical robot developed by University of Washington faculty and students.

By intentionally initiating “man in the middle” attacks that changed the commands traveling between the operator and robot, the researchers were able to interfere with several activities, such as making it difficult for the robot’s arms to grasp various types of objects. In some instances, the attackers were even able to completely override command inputs.
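One standard defense against this kind of in-transit command tampering is to authenticate every command packet with a message authentication code, so a modified packet is rejected rather than executed. Here is a minimal sketch; the packet format, field names, and pre-shared key are hypothetical and not drawn from the Raven II protocol:

```python
import hashlib
import hmac
import json

# Hypothetical pre-shared key; a real deployment would provision keys securely.
SHARED_KEY = b"operator-robot-shared-secret"

def sign_command(command):
    """Serialize a command and append a hex-encoded HMAC-SHA256 tag."""
    payload = json.dumps(command, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"|" + tag

def verify_command(packet):
    """Return the command if the tag checks out, otherwise None (reject)."""
    payload, _, tag = packet.rpartition(b"|")
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return None  # packet was modified in transit
    return json.loads(payload)

# A packet altered in transit fails verification instead of being executed.
packet = sign_command({"arm": "left", "grip": 0.4})
tampered = packet.replace(b"0.4", b"9.9")
assert verify_command(packet) == {"arm": "left", "grip": 0.4}
assert verify_command(tampered) is None
```

Authentication alone does not stop an attacker from delaying or dropping packets, but it prevents the silent command substitution the researchers demonstrated.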

Denial-of-service attacks launched by the researchers also caused issues that would be life-threatening in a real situation. When the researchers overwhelmed the robot with torrents of useless data, the system became increasingly jerky and more difficult to operate. In fact, by using just a single packet of bad data, the attackers were able to trigger the robot’s emergency stop mechanism, rendering it useless.
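Both failure modes described above, the flood of useless data and the single malformed packet, argue for validating and rate-limiting traffic before it ever reaches the controller or the emergency-stop logic. The following sketch assumes a hypothetical dictionary-based command format and an arbitrary rate ceiling:

```python
import time
from collections import deque

MAX_PACKETS_PER_SEC = 100  # hypothetical command-rate ceiling

class CommandGate:
    """Drop malformed or flooding packets before they reach the controller."""

    def __init__(self):
        self.arrivals = deque()

    def accept(self, packet, now=None):
        now = time.monotonic() if now is None else now
        # Rate limit: discard packets beyond the per-second budget.
        while self.arrivals and now - self.arrivals[0] > 1.0:
            self.arrivals.popleft()
        if len(self.arrivals) >= MAX_PACKETS_PER_SEC:
            return False
        # Structural validation: a single malformed field must not
        # propagate to safety logic such as the emergency stop.
        if not isinstance(packet, dict):
            return False
        if packet.get("op") not in {"move", "grip", "stop"}:
            return False
        self.arrivals.append(now)
        return True

gate = CommandGate()
assert gate.accept({"op": "move"}, now=0.0) is True
assert gate.accept(b"\x00garbage", now=0.0) is False  # malformed, dropped
```

A gate like this cannot make a saturated network link responsive again, but it keeps bad data from tripping safety mechanisms the way the single-packet attack did.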

While FDA-approved surgical robots used in clinical settings aren’t typically operated on public networks, the researchers’ proof-of-concept attacks showed what could happen if malicious invaders were somehow able to gain access to a private, secured surgical robot network.

The study also highlighted the threat to robots that may someday be used in remote locations where public networks might be the only available communications option.

“Telerobotic surgery is envisioned to be used in extreme conditions, where robots will have to operate in low-power and harsh conditions, with potentially lossy connection to the Internet,” noted a paper written by the researchers. “The last communication link may potentially even be a wireless link to a drone or a satellite, providing connection to a trusted facility.”

According to the researchers, the easiest attack gateway is likely to be a point between the network uplink and the robot. The researchers were able to divide possible attacks into three distinct categories: intention modification, intention manipulation, and hijacking attacks.

Intention modification attacks occur when an attacker directly interrupts a surgeon’s intended actions by modifying packets while they are in transit and the surgeon has no control over them. These attacks are relatively easy to observe when executed: they show up as unusual delays in robot movements, atypical robot actions, or the robot turning itself on or off unexpectedly.

Intention manipulation attacks happen when an attacker modifies a robot’s feedback streams, such as video feeds and touch feedback. These attacks can be more difficult to mount, simply because of the amount of data that a robot transmits, the researchers said.

Yet, if executed correctly, these attacks can be highly difficult to detect and prevent, since they tend to be quite subtle. Since robot feedback is generally assumed to be accurate, even a surgeon’s valid actions may unintentionally harm a patient.

In hijacking attacks, a malicious individual or team commands the robot to totally ignore the surgeon’s actions and to instead perform some other, most likely harmful, types of activities.

The researchers noted that hijacking attacks include both temporary and permanent robot takeovers and that, depending on the actions the hijacked robot executes, the results can be either very subtle or very noticeable.

In addition, the university team noted that the potential vulnerabilities “are not unique to tele-operated surgery, but are common to all tele-operated robots.”

Securing healthcare systems

While acknowledging the significance of the University of Washington research and the need for safeguards, Walter O’Brien, founder and CEO of Scorpion Computer Services, a Los Angeles security consulting firm, said that the threat of real-world robot attacks may be somewhat overstated.

“Any computer can be hacked, since a robot is just a computer,” said O’Brien, who is also the inspiration for and an executive producer of the CBS TV show Scorpion. “How much damage can be done, and what the motivation for the hacking is, is what’s more important in terms of the risk.”

O’Brien noted that while healthcare robot attacks are theoretically possible, they are unlikely to become commonplace since, unlike most computer attacks, there’s little personal or financial motivation for hijacking a system designed to improve someone’s well-being through surgery, prosthetics or medication delivery.

“Unless someone is trying to assassinate the president or something like that, why would you want to hurt or kill some guy getting a heart bypass; why would you want to dial in and screw that up?” he asked. “There’s no money to be made there.”

Yet there always remains the possibility of a deranged person deciding to hijack a healthcare robot to attack an innocent patient simply for the thrill or notoriety. That’s why robot developers still need to build safeguards into their products, noted O’Brien.

“The first thing is test the system fully,” he said. “If you don’t know how, go find someone and ask how.”

Robot security starts at the network, said Brown.

“The usual best practices for securing systems can help deter some low-hanging exploits,” Brown explained. “Encrypting data in transit, requiring authorization for operators, and restricting methods of interacting with the device can close some common vulnerabilities [that] hackers like to attack.”
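The authorization piece of those best practices can be as simple as a permission table consulted before any command is dispatched. As a minimal sketch, with entirely hypothetical operator IDs and operation names:

```python
# Hypothetical operator registry: each credential maps to the set of
# operations that role is permitted to invoke on the device.
OPERATORS = {
    "surgeon-01": {"move", "grip", "stop"},
    "observer-02": set(),  # may watch feeds but issue no commands
}

def authorize(operator_id, operation):
    """Allow an operation only for a known operator holding that permission."""
    allowed = OPERATORS.get(operator_id)
    return allowed is not None and operation in allowed

assert authorize("surgeon-01", "move") is True
assert authorize("observer-02", "move") is False
assert authorize("unknown", "stop") is False
```

Restricting each credential to the smallest workable set of operations limits what an attacker gains by stealing any single one.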

O’Brien agreed that network security measures are essential for healthcare robotics, but he said that integral safety features will also play a major role in protecting patients from attackers who manage to circumvent network safeguards.

Override technologies that kick in and shut down a robot that is not performing within normal parameters are essential, O’Brien said.
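The core of such an override is a watchdog that compares live telemetry against a safety envelope and latches an emergency stop when the envelope is exceeded. This sketch uses a hypothetical single joint-speed limit; a real system would monitor many parameters:

```python
# Hypothetical safety envelope for one robot-arm joint.
MAX_JOINT_SPEED_DEG_S = 30.0

class SafetyWatchdog:
    """Shut the robot down when telemetry leaves normal parameters."""

    def __init__(self):
        self.estopped = False

    def check(self, joint_speed_deg_s):
        if abs(joint_speed_deg_s) > MAX_JOINT_SPEED_DEG_S:
            self.estopped = True  # latch the emergency stop
        return self.estopped

wd = SafetyWatchdog()
assert wd.check(12.0) is False   # within envelope, keep running
assert wd.check(95.0) is True    # out of bounds: e-stop engages
assert wd.check(5.0) is True     # stop stays latched until a manual reset
```

Latching matters: once abnormal behavior has been seen, the robot should stay stopped until a human deliberately clears the condition, rather than resuming the moment readings look normal again.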

Security-oriented robot design specifications could also be used to frustrate attackers.

“Don’t make stuff stronger than it needs to be,” O’Brien said. “When building a robot arm, for example, make it so that it will never be able to lift more than 10 pounds — then it will be unable to seriously hurt anyone simply because it doesn’t have the strength.”
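O’Brien’s point is about the hardware itself, but the same ceiling can also be enforced in the command path, so that even a hijacked controller cannot ask for more force than the design allows. A minimal sketch, using his 10-pound figure (about 44.5 newtons of force) as the assumed limit:

```python
MAX_LIFT_FORCE_N = 44.5  # roughly 10 pounds-force, per O'Brien's example

def clamp_force(requested_n):
    """Cap any commanded force at the hardware's safe maximum."""
    return max(-MAX_LIFT_FORCE_N, min(MAX_LIFT_FORCE_N, requested_n))

assert clamp_force(20.0) == 20.0     # normal commands pass through
assert clamp_force(500.0) == 44.5    # a hijacked command is physically capped
```

A software clamp is a complement to, not a substitute for, weak-by-design actuators: the hardware limit holds even if the software stack is fully compromised.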

Healthcare robots must have security features designed in, not added on, said Kidd.

“Establishing a foundation of good security practices from the architecture and design levels of a product is a key aspect of building secure systems,” he said.

Lynch agreed, saying, “Robots in healthcare have great potential, but we must be aware of the potential for evil people to abuse them.”