Robotics, and automation more generally, dominate news headlines, but multinationals need to know which developments could affect them. This week, the global AI race saw a win for IBM, the U.S. government moved to give law enforcement the authority to take down civilian drones, and a group in the U.K. began reviewing self-driving car laws to determine whether new offenses need to be on the books.
Robotics Business Review has partnered with Abishur Prakash at Center for Innovating the Future to provide its members with cutting-edge insights into recent developments in international robotics, artificial intelligence, and unmanned systems. Are you ready to be updated?
French bank relies on Americans in AI race
The competition among countries for “killer apps” in the AI race is intensifying. The country that “wins” may be the one whose companies supply the AI that everyone else uses.
For example, Orange Bank in France launched a new virtual advisor called Djingo. It lets customers interact with the bank 24/7. However, Djingo is not completely French — it is powered by IBM Watson.
Why didn’t Orange tap AI from China or Russia? After all, China-based Alibaba, the world’s largest e-commerce firm, recently made headlines for expanding its AI offerings in Europe to take on U.S. firms like Amazon and Microsoft.
There are two possible reasons for Orange’s decision. First, IBM Watson has built scale and capabilities that rivals have yet to match, making it the straightforward, low-risk choice.
Second, even if a Chinese firm were offering similar capabilities, businesses in France may not be comfortable using Chinese AI. Fears about privacy, government interference, and intellectual property will remain obstacles to Chinese AI going global.
White House looks to protect against civilian drones
Drones pose a new kind of national security risk for countries.
I was one of the first to talk about this. In 2016, in my book Next Geopolitics: The Future of World Affairs (Technology) Volume One, I proposed the concept of “technological terrorism,” whereby certain groups could use drones, smart appliances, or self-driving cars to commit acts of terror. Now it appears the U.S. government is catching on to this risk.
The White House is moving forward with a proposal that would allow law enforcement agencies to bring down civilian drones. Drones could be brought down using radio signal interference technology, which law enforcement is currently banned from using.
Some may view this as government overreach, but the national security risks posed by drones are real. Drones are already being used to carry explosives in conflict zones. How will authorities control their use around music concerts, shopping malls, or public parks?
British body focuses on self-driving car rules
The Law Commission, an independent body in the U.K., is launching a study of self-driving car laws with the aim of shaping new policies.
One area of focus will be criminalizing those who hack, or attempt to hack, into self-driving cars and take control of them. Another priority is establishing clear liability when self-driving cars malfunction or make mistakes.
In the U.K. specifically, insurance companies have warned that they will not pay speeding fines incurred by self-driving cars. That liability rests with the vehicle itself.
But hacking self-driving cars is a different challenge.
Hacking is an increasingly geopolitical phenomenon, so governments must consider what they would do if a foreign nation, cybercriminals, or terrorists hacked into self-driving cars. The goal could be to cause accidents, derail logistics, or simply “send a message.”
Criminalizing domestic hackers who break into self-driving cars for fun is one thing. But what is London’s plan for foreign hackers who break into self-driving cars because of geopolitics?