SANTA CLARA, Calif. — At RoboBusiness 2017 here this past week, I was one of the few private-sector attendees not from the robotics industry. I gave a presentation on “Robotics, AI, and IoT Security,” which looked at the possibilities and challenges that can emerge as robot and auto cybersecurity, the Internet of Things, and geopolitics converge. Several audience members asked questions about what to do next.
Automotive cybersecurity is a growing concern, and governments everywhere need to begin paying more attention to it. Some already are. Last month, the U.S. House of Representatives passed the SELF DRIVE Act, which would raise the number of self-driving cars allowed to be tested on public roads to 100,000. Within the proposed legislation are provisions that let automakers choose which safeguards to use.
Separately, the U.K. unveiled eight guidelines in August to steer cybersecurity measures for self-driving vehicles. One of the guidelines calls for automakers to ensure that their vehicles can still operate after being hit by a cyberattack.
- The U.S. and U.K. are among the governments starting to address robot and auto cybersecurity.
- To pre-empt liability legislation, some automakers have said they would take responsibility for accidents involving their self-driving cars.
- In addition, healthcare systems such as surgical robots and automated dispensaries need to be secured.
Laying down rules of the road for auto cybersecurity
What makes the U.S. SELF DRIVE Act so important is that it doesn’t specify exactly what automakers need to do to secure their self-driving or connected vehicles. Instead, it creates parameters that companies need to work within, with each designing its own solution and approach.
A similar trend is emerging in Europe, where governments have put less pressure on automakers to align with specific policies. As a result, both Volvo and Audi have said that they will accept liability for accidents involving their self-driving vehicles.
In addition, the British government in February floated a draft bill that would place responsibility on auto manufacturers if their self-driving vehicles are involved in an accident.
When I shared this point during my talk, an audience member asked a good question: It’s one thing for a self-driving car company to accept liability if its vehicle is in an accident. But who is liable when a self-driving car is hacked and then involved in an accident?
This builds on a chapter of my book called “Technological Terrorism.” In it, I talk about how self-driving cars, smart stoves, and consumer drones might become tools for terrorists to create chaos. For example, I propose that self-driving cars could be hacked by a group and used to attack pedestrians or hurt passengers by ramming the cars into buildings.
The question, then, of who is liable once a self-driving car — or any other robot — is hacked is of immense relevance, not just for the company behind the product, but also for national security.
I responded by saying that I believe governments are responsible because they’re the parties that should begin laying out the framework and roadmap of future risks and challenges. Companies will then align with and adapt to their rules. In other words, governments need to lead, and companies will follow.
Securing healthcare robotics
It isn’t just national security events, such as a self-driving vehicle being hacked by terrorists, that should compel governments to pay attention to robot and auto cybersecurity. Robots are also being deployed in other sensitive settings, including healthcare.
In February, I spoke before the Canadian Senate about how robotics, artificial intelligence, and 3D printing will transform healthcare. I discussed cybersecurity and shared that in the UAE, a robot had begun dispensing medicine.
Back in 2015, security experts demonstrated that a surgical robot could be hacked. Hospitals, factories, and other facilities may be isolated from the general Internet, but with the spread of IoT and a new generation of hackers, multiple layers of security need to be developed.
The winner of the RoboBusiness 2017 Pitchfire startup competition was a Polish company that has designed a robot to dispense medicine in hospitals. Once again, cybersecurity is a concern. What happens if such a machine is hacked and dispenses the wrong medicine or the wrong quantities, resulting in people not receiving proper treatment or potentially overdosing? The U.S. already has laws against drug tampering, but those laws have yet to account for new methods of dispensing and delivery.
Robotic and auto cybersecurity pose new challenges for companies and nations around the world. However, they also create the opportunity for a new breed of companies to solve these security problems, including robotics or AI firms.
As cybersecurity changes how governments view robot arms, consumer drones, robot pharmacists, and more, a question emerges for you, the individual: Are you thinking about cybersecurity when you purchase a robot, be it an automated vacuum cleaner or an AI assistant? And if not, why not?