There is no shortage of hype around artificial intelligence, and the need for an international AI approach is growing more acute. For instance, the U.S. this week said it could limit Chinese investment in AI and biotechnology.
The former head of Google China once commented that within the next 10 years, most unicorns (tech companies valued at $1 billion or more) will be AI companies and that 50% of jobs will be replaced by AI. The UK's Defence Science & Technology Laboratory has invested £100,000 ($127,000) in research into AI for defense.
As multiple sectors turn to machine learning and related technologies, there is no denying the value of the innovations and applications being considered. But, within this rapidly growing industry, there remains a huge void. There is little to no international AI policy to guide developers and users.
Who needs AI policy?
For some, an international AI policy may seem counterproductive. Why do we need a set of policies or a framework to guide AI, machine learning, and deep learning? Isn’t this more likely to hinder advances than to propel them?
We don’t know. But AI could be the first technology we create that exceeds all human capabilities, so it needs to be understood for what it is — a great advancement and a huge risk.
There are already several efforts under way to develop an AI framework.
In 2016, during a G7 ministerial meeting in Japan, the Japanese communications minister proposed creating a framework of international AI rules. As reported in The Japan Times, the initial plan was to have eight principles govern the conduct of AI. One of those principles was that AI should be controllable by humans.
Even before approval of the framework, UPI reported that Japan was looking at using AI to draft responses to policy questions in its parliament. Japan's trade ministry first wants the system to study relevant parliamentary records from the past five years.

In addition, a U.K. House of Commons Science and Technology Committee report in 2016 urged the British government to create a strategy to deal with AI and robotics. The report wasn’t focused on regulations, but rather on examining the impact these new technologies would have on British society.
While there hasn’t been an official government strategy to guide AI in the U.S., polls suggest there is demand for one. A March 2018 Gallup poll found that 85% of Americans use at least one of six products with AI elements.
Moreover, an Electronica poll of 1,000 U.S. consumers found that 79% want AI to know its limits — but like the idea of having AI in their devices.
Factoring in multiple jurisdictions
When it comes to developing and implementing international AI rules, there are two main challenges. First, how do you implement rules while still ensuring that AI-driven systems can function properly?
The University of Oxford's LipNet could read lips with 93.4% accuracy by tracking the movement of a person's mouth in a video. Such technology raises new questions of privacy and surveillance by corporations, criminals, or governments.

Will people’s conversations in coffee shops, boardrooms, and other settings now be susceptible to spying? How might government leaders deal with AI repeating what they are saying when the mic is off but the camera is still rolling, so to speak?
When it comes to policies around robotics and AI, how would the creators of LipNet influence or comply with potential international AI rules? For instance, if the Dutch government decided that AI cannot be used for surveillance, that could hurt LipNet commercialization across Europe.
Geopolitics and AI
The second problem is geopolitical. Take the example of the G7 exploring an international AI framework: the U.S., Canada, France, Germany, Italy, the U.K., and Japan would each have to implement the agreed-upon principles.
However, India, China, and Russia are likely to have very different views on what constitutes legitimate use of AI. This has a direct effect on businesses.
In August, 27 semiconductor organizations in China joined forces in a High-End Chip Alliance (HECA) to propel China's semiconductor industry. Shortly after HECA's creation, Focus Taiwan reported warnings from a research institute that Taiwanese semiconductor companies should pay close attention to the medium- to long-term impact of the alliance.
The institute warned about a lack of consistency in international AI standards. For example, China could create an “Artificial Intelligence Regulatory Commission” that has different rules from the G7 but would influence many other nations and companies hoping to serve its market. Such a body may be unlikely, but it highlights the risk of regionalism.
Without common guidelines, international AI and robotics development could diverge even further in an increasingly globalized world. This void could also lead to heightened competition among multinationals, ethics organizations, and governments over the best ways to manage new technologies.
Companies need their own international AI policies
The best way for institutions and businesses interested in AI to manage such a complex environment could be to prepare country-specific AI strategies.
If you're selling to or operating in China, note that its government is likely to be concerned about how data is collected and stored, and whether AI applications will benefit Chinese or foreign firms.
Operating in India? The government isn't worried about anything in particular, which is another way of saying it's worried about everything to do with automation, especially given its Make in India strategy.
The U.S., Germany, South Korea, and the United Arab Emirates all have their own demands and market expectations. As an AI company, you will need to understand them and integrate such requirements into your products and services.
Until recently, AI was limited to laboratories in colleges and universities, startup offices in Silicon Valley and Shenzhen, and the corporate boardrooms of IBM and GE.
Now, AI is on every smartphone, and it’s coming to more autonomous robots and vehicles, intelligent appliances, and back-office process automation. Virtual assistants are advising everyone from the local plumber to the president.
Considering there is no international AI framework, and there is unlikely to be one anytime soon, it will be up to industry to meet the challenge.