Robotics & Geopolitics: India Fires Warning Shot at Foreign Firms

Source: iStock

September 14, 2018      

Around the world this week, India banned foreign drone pilots from flying inside the country, DARPA unveiled technology that lets people control aircraft with their minds, and countries found novel ways to export technology as a means of extending their influence.

Robotics Business Review has partnered with Abishur Prakash at Center for Innovating the Future to provide its readers with cutting-edge insights into recent developments in international robotics, artificial intelligence, and unmanned systems. Are you ready to be updated?

India’s latest drone policy puts foreign firms on notice

Robotics development: India has passed laws for drones that change the entire drone landscape in the country. Not only will those wanting to fly drones have to register with an authority, but there will also be rules for flying drones within buildings.

Perhaps the most stringent policy, though, is the one governing foreign operators: foreigners will not be allowed to fly drones in India unless it is for commercial purposes, for which they will need to obtain a license.

Geopolitical significance: For those not paying attention to India, which is expected to be the world’s second-largest economy by 2050, the drone policy may seem to be a one-off. But it isn’t. It is the latest in a long line of rulings against anything foreign that infringes on India’s safety and freedom.

In July 2017, India banned self-driving cars, the first such regulation in the world, to protect local jobs. The latest drone policy may reflect the direction India is moving in when it comes to robotics, AI, and other new technologies. Such extreme policies, while damaging to foreign firms, help India build its own local ecosystem rather than become dependent on foreign technology, a major geopolitical risk going forward.


India continues to protect its companies from foreign firms. Source: Clipart.com

So what else is fair game as India prepares new policies? One area could be data monetization, or creating services powered by data. Last month, Google announced that it was working with India on one such project. One of the pilot programs will see Google using AI to predict floods in India.

However, would Google, or any other technology company, be comfortable sharing its data with other firms, especially its competitors? In a paper on AI laid out by India’s government think tank, NITI Aayog, one of the proposals is a “data marketplace” that would remove restrictions on access to data. The paper specifically mentions benefiting “smaller players.”

Such a policy, if passed, would create major headaches for foreign technology companies, such as Google, that would want to ensure their data benefits them before anyone else.

If foreign firms do not abide by India’s future data laws, they might share the fate of Facebook, whose flagship service, “Free Basics,” was banned in India in 2016 for violating net neutrality rules. India’s policies toward new technologies appear to follow a trend: if a technology comes from a foreign firm and poses a high risk to national security, the economy, or society, it could be banned, period.

South Korea aims to lead Asia in automated regulation

Robotics development: South Korea’s Financial Supervisory Service, the financial watchdog in the country, has unveiled plans to create “machine-readable regulation,” which is a kind of digital or automated regulation.

With this kind of regulation, reporting from financial firms to the government (or vice versa) would be done automatically, without human involvement. It would speed up communications and ensure that illegal financial activity is caught faster. South Korea wants to be the first country in Asia with such regulation.
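To make the idea concrete, here is a minimal sketch of what machine-readable regulation could look like: rules encoded as structured data rather than legal prose, so a regulator’s software can check a firm’s filing automatically. The rule IDs, thresholds, and field names below are hypothetical illustrations, not actual Financial Supervisory Service rules.

```python
# Hypothetical machine-readable rules: each rule is data, not prose,
# so a program can apply it to a filing without human review.
RULES = [
    {"id": "R1", "field": "capital_ratio", "op": "min", "value": 0.08,
     "desc": "Capital adequacy ratio must be at least 8%"},
    {"id": "R2", "field": "single_cash_tx", "op": "max", "value": 10_000_000,
     "desc": "Single cash transaction exceeds reporting threshold"},
]

def check_filing(filing: dict) -> list:
    """Return (rule_id, description) pairs for every rule the filing violates."""
    violations = []
    for rule in RULES:
        value = filing.get(rule["field"])
        if value is None:
            violations.append((rule["id"], "missing field"))
        elif rule["op"] == "min" and value < rule["value"]:
            violations.append((rule["id"], rule["desc"]))
        elif rule["op"] == "max" and value > rule["value"]:
            violations.append((rule["id"], rule["desc"]))
    return violations

# A firm's automated filing is validated the moment it arrives.
filing = {"capital_ratio": 0.06, "single_cash_tx": 5_000_000}
print(check_filing(filing))
```

Because the rules live in data, updating regulation means publishing a new rule set that every firm’s reporting software consumes immediately, which is what makes such a framework exportable as a package.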

Geopolitical significance: At a time when geopolitics is being redefined by technology, there is no clear formula for how nations grow their influence. South Korea is experimenting with a rare type of regulation to grow its leadership in the region.

Only a few other countries are moving forward with machine-readable regulation, such as the U.K. (Financial Conduct Authority) and, to a certain degree, Singapore (Monetary Authority of Singapore) and the European Union (European Securities and Markets Authority).

The real question is how South Korea’s machine-readable regulations will translate into leadership, both in automation and in geopolitics.

One way is by exporting the machine-readable regulations to other countries as a packaged offering or framework. This is exactly what South Korea is doing with smart cities: it has called its smart city project in Kuwait its first “smart city export.”

A similar tone is being set as South Korea discusses smart city projects with India and as South Korean cities, such as Incheon, look to export their own smart city services to the world.

Similarly, South Korea may choose to export its machine-readable regulations to specific countries in Asia and the Middle East where it wants to develop stronger relations. If machine-readable regulations can reduce corruption, bureaucracy, and illegal activity, other governments might jump at the opportunity to have such capabilities.

It’s also possible that South Korea won’t have to approach anyone. Other countries might adopt what South Korea is doing on their own. This is exactly what happened with China.

When Germany announced the world’s first self-driving regulations in 2017, China responded in March that it was examining adopting some of them. In other words, at a time when the world was competing to sell into China, Germany quietly exported its public policy there.

Countries explore brain-computer interfaces for warfare

Robotics development: The U.S. Pentagon’s Defense Advanced Research Projects Agency (DARPA) has unveiled technology that lets someone control multiple “jets” with their mind, a type of telepathic control. DARPA has been working on this technology since 2015, when a paralyzed person with a small microchip implanted in their body could steer a virtual F-35 jet.

Geopolitical significance: The ability to control military assets with the mind could transform warfare. When it comes to brain-computer interfaces, DARPA is not the only organization pushing the envelope.

In July 2016, researchers at Arizona State University, working with the U.S. Army, showed a system that let people control multiple drones with their brains.

In June 2015, Russian scientists unveiled a new “humanoid robot soldier.” The people behind it said they were also working on brain-computer interface systems to create a “soldier of the future.”

In April, China revealed that it was “mining data” from workers’ brains to gauge their mental conditions. According to officials, this technology was also being tested on soldiers in the Chinese military.


Future wars might be planned with brain-computer interfaces, not old maps and generals. Source: Clipart.com

Using the mind to control military assets could also transform who future soldiers are. In fact, when the U.S. wanted to hire drone pilots, it looked to gamers, a shift from whom militaries have traditionally recruited. For future soldiers who can control drones, tanks, or humanoid robots with their minds, who will militaries recruit?

In August 2015, China was reportedly training students to control robots with their mind and make them move. Is this a sign of the demographics of future militaries?

Equally important is what kind of advantage brain-computer interfaces will give to countries. Because people will be directly connected to drones or other assets, they may be able to adapt to changing conditions on the battlefield faster.

In addition, two-way communication between the controller and drones is being developed. In the DARPA trial, the person controlling the drones could also receive signals from the drones themselves. This itself is transformative. It means that in future military operations, where brain-computer interfaces are present, it won’t just be humans telling technology what to do; it could be the other way too.