Robotics & Geopolitics: Basic Income May Herald More Robot Regulations; AI Affects Civil Liberties, Military Reach

August 03, 2018      

For all the benefits that automation can bring, including greater productivity and efficiency, there are concerns about jobs and security that are motivating public policy. The latest experiment in universal basic income, flaws in facial recognition systems, and military AI developments demonstrate why businesses and policymakers must take reactions to robots into account.

Robotics Business Review has partnered with Abishur Prakash at Center for Innovating the Future to provide its readers with cutting-edge insights into recent developments in international robotics, AI, and unmanned systems. Are you ready to be updated?

Basic income on government agendas

Robotics development: The city of Stockton, Calif., is preparing a basic income initiative for 2019 in which 100 residents would receive $500 a month for 18 months, with no conditions or requirements on how they spend the money.


 
 

The initiative in Stockton is being supported by leaders in Silicon Valley, including an organization belonging to the co-founder of Facebook. According to Stockton’s mayor, the reason for the basic income scheme is the “looming threat of automation and displacement.”

Geopolitical significance: What Stockton is doing reflects what many other cities and governments around the world are thinking: Automation will take jobs. For policy purposes, it matters less whether this will actually happen than what governments believe will happen.

Governments that are considering universal basic income (UBI) proposals may transition to other policies, some of which could threaten the operations and profits of robotics businesses. Other locales that are weighing basic income include the Canadian provinces of Ontario and British Columbia, as well as Alaska and Chicago in the U.S.

Also in the U.S., startup accelerator Y Combinator is preparing its own UBI initiative, in which it will give 3,000 people unconditional cash, with 1,000 of them eligible to receive $1,000 a month until 2022.

There’s also Kenya, where a charity has launched a 12-year study that will give 6,000 people in 40 villages around $22 a month. India is also exploring a basic income scheme for 2020, which, if successful, could be the world’s largest. In every country above, the perceived risk of automation replacing jobs is high.
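For a sense of scale, here is a back-of-the-envelope costing of the pilots mentioned above. The figures come from the article itself; the Y Combinator duration is an assumption (the article says only that payments run "until 2022"), so treat that number as illustrative.

```python
# Rough direct-payment cost of the basic income pilots described above.
# All figures are from the article; the Y Combinator duration is an
# assumed 36 months, since the article only says "until 2022".

# Stockton: 100 residents x $500/month x 18 months
stockton = 100 * 500 * 18
print(f"Stockton: ${stockton:,}")          # $900,000

# Y Combinator: 1,000 people x $1,000/month x assumed 36 months
y_combinator = 1_000 * 1_000 * 36
print(f"Y Combinator: ${y_combinator:,}")  # $36,000,000

# Kenya: 6,000 people x ~$22/month x 12 years
kenya = 6_000 * 22 * 12 * 12
print(f"Kenya: ${kenya:,}")                # $19,008,000
```

Even the largest of these pilots is tiny next to a national program: paying, say, 100 million adults Stockton's $500 a month would run $600 billion a year, which is why governments are testing effects on small groups before considering any full rollout.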

As governments experiment with basic income, it may be a stepping stone to a robot tax or a ban on certain types of robots, such as customer-service models or self-driving taxicabs.

However, basic income may not be all that it is hyped up to be. Even if governments hand people money, that alone may not enable them to find new work. The skills that new jobs demand may be fundamentally different from the ones displaced workers have, so large parts of society could become dependent on government support.

Policymakers should think about this as they draft proposals around basic income and other policies intended to mitigate the effects of automation.

When robots mess up, governments may get angry

Robotics development: A test by the American Civil Liberties Union (ACLU) of Amazon’s facial recognition software showed some surprising results. The ACLU compared photos of the 535 members of Congress against 25,000 publicly available mugshots. Amazon’s Rekognition software returned 28 “false matches.” In other words, it incorrectly identified 28 members of Congress as people who had been arrested.
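The size of the mugshot gallery matters here. A quick sketch shows why even a tiny false-match rate per comparison produces many wrong identifications when each face is checked against 25,000 records; the per-comparison probability below is an illustrative assumption, not a published Amazon figure.

```python
# Back-of-the-envelope: false matches at gallery scale.
# The probe/gallery/false-match counts are from the ACLU test above;
# the per-comparison rate p is an illustrative assumption.
probes = 535        # members of Congress
gallery = 25_000    # public mugshots
false_ids = 28      # members wrongly matched to a mugshot

share_misidentified = false_ids / probes
print(f"{share_misidentified:.1%} of Congress misidentified")  # 5.2%

# If each probe-vs-gallery comparison independently false-matches with
# probability p, a probe gets at least one false match with probability
# 1 - (1 - p)**gallery. Even p = one in a million gives:
p = 1e-6
prob_any = 1 - (1 - p) ** gallery
print(f"{prob_any:.1%} chance of a false match per person")    # 2.5%
```

The point: per-comparison accuracy that sounds impressive can still misidentify a meaningful fraction of innocent people once the gallery is large, which is exactly the civil-liberties concern the ACLU raised.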

Geopolitical significance: As robotics, AI, and autonomous vehicles find new uses worldwide, there are risks that these technologies could “mess up.” In addition to Amazon’s Rekognition false matches, there are other instances where machines got it wrong.

For example, researchers in Germany and Belgium simulated the 2018 World Cup in Russia more than 100,000 times and predicted that the winner would be Spain, followed by Germany and Brazil. France won the tournament, so the AI was way off the mark.

At the same time, IBM Watson, which is being used to recommend cancer treatments, has allegedly proposed “unsafe” ones. In one case, Watson suggested that a patient suffering from severe bleeding be given a drug that could worsen the bleeding.

In addition, researchers in the U.S. fooled a Google image-recognition system into classifying a 3D-printed turtle as a rifle, while a media outlet caught Google Translate introducing gender bias when translating from gender-neutral languages.

These examples point to the fragility of machine learning as it exists today and to the additional data and real-world testing such systems need before institutions can rely on them. Companies producing and using automation risk government scrutiny and penalties if there is a lack of public trust. That lack of trust can easily extend to governments using automation themselves.

If a government feels that robot or AI misbehavior threatens its control, culture, or society, it could ban the service outright. This is what happened in San Francisco, where legislators banned delivery robots from sidewalks because of safety concerns.

Tomorrow, it might be Seoul banning U.S.-made financial AI systems or Tokyo banning Russian-made robot arms. Both developers and regulators must be aware of the risks and repercussions of deploying immature technologies.

AI gives countries new military power and reach

Robotics development: Scientists in China are working on small, unmanned submarines that can carry out entire missions on their own, including suicide attacks on enemy ships. The vessels are expected to operate globally.

China said it expects to deploy these AI-driven submarines in the early 2020s in theaters that are strategic to Beijing, such as the South China Sea and Pacific Ocean.

Geopolitical significance: As China moves to modernize its military, AI is playing a leading role. For example, the country is converting its aging Type 59 tanks, a derivative of the Soviet T-54, into robotic tanks that could operate as a swarm. China is also developing AI to assist the commanders of its nuclear submarines.

Through AI and robotics, China wants to be able to match the military power and reach of any geopolitical rival. Here’s the thing, though: China isn’t alone. Almost every world power is investing in military AI. This is a huge opportunity for automation suppliers, but it can also cause headaches because of trade restrictions and skittish investors.

Alongside China is India. Recently, an AI taskforce set up by India’s ministry of defense submitted a report on how to use AI. Prior to this, a government official said that AI will be integrated into India’s air force, navy and army. India wants AI-based weapons systems.

Can India procure all of its AI needs at home, or will it look to companies from other countries? It appears India is already answering that question. It is working with Japan to bring AI and robotics into defense. Both countries will start work on a new unmanned ground vehicle (UGV).

There’s also Singapore, whose chief defense scientist has said that AI will be a “crucial part of defense,” and South Korea, which is working on AI commanders to assist military generals by 2025.

As nations around the world look to AI to give them new geopolitical power, robotics businesses should be thinking about how to capitalize on the trend. However, they should not focus only on large powers such as China or India. Smaller countries arguably have just as much ambition; they simply don’t make the headlines.

Countries throughout South America, Eastern Europe and Africa may be looking to purchase military AI to boost their defense and power. And robotics businesses could benefit. Ironically, it may be AI that tells countries what to buy or do. China has announced that it will be developing AI to help diplomats make foreign-policy decisions — a page right out of my book.