The European Union’s General Data Protection Regulation, or GDPR, goes into effect on May 25, 2018. The European privacy rule includes a number of provisions that will likely hinder the development and use of artificial intelligence in Europe, according to experts.
GDPR poses three main problems for businesses using AI: higher costs, practical limitations, and legal risks. The higher costs and legal risks could deter companies from using AI, while the practical limitations will make it difficult to use AI and undermine its effectiveness, according to Nick Wallace, a senior policy analyst at the Center for Data Innovation and one of the authors of “The Impact of the EU’s New Data Protection Regulation on AI.”
“There’s a lot in GDPR that touches AI in different ways,” he said. “The most important provisions in the GDPR are the rules on algorithmic decision-making.”
“For example, there is a general right not to be subject to wholly automated decisions with legal or similarly significant effects, which amounts to a right to have a human review such decisions,” Wallace said, adding that this is “a requirement that will make it more difficult for companies to automate certain processes using AI.”
Similarly, the GDPR states that customers have the right to demand that companies erase all copies of their personal data. Fulfilling such a request could mean tracing all the relevant input and output data, a requirement that may undermine the integrity of algorithmic models, Wallace noted.
Raising the cost of AI
In addition, requiring companies to manually review significant algorithmic decisions could raise the overall cost of AI, said analysts.
“That will impose costs on AI because you have to pay someone to go over a decision that you have [previously] given over to a machine because it can do it faster, more accurately and at a lower cost than a human,” Wallace said. “If you could simply replace algorithmic decisions with human decision making, there’d be little point in investing in AI in the first place. That weakens incentives to use AI because it raises potential costs.”
The right to explanation could also reduce AI accuracy, Wallace said, because there is a tradeoff between the accuracy of an algorithmic decision and its transparency.
“There are going to be certain circumstances when you do need to have transparency when the tradeoff is worth making,” he said. “But the GDPR applies it to so many decisions so broadly that you’re going to have situations where companies are being forced to use less-accurate algorithms in order for them to be transparent to a human reviewer.”
Some people have even expressed concern that the European privacy rule will be an “AI killer,” said Rebecca Wettemann, vice president of research at Nucleus Research.
“When you think about everything we’re doing with AI around recommendations, around decision making, around profiling,” she said, “GDPR requires that I have the right to see the underlying decision factors of any decision: why I was declined for a loan, for example.”
“If companies are taking a black-box approach to AI, using those recommendation engines without having the transparency, they have no way to conform to those GDPR requirements,” noted Wettemann.
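One way out of the black-box problem Wettemann describes is to have the system record the factors behind each automated decision so a human reviewer can inspect them. The sketch below is purely illustrative; the scoring rules, thresholds, and field names are hypothetical, not drawn from any real lender or library.

```python
# Hypothetical sketch: attaching reviewable decision factors to an
# automated loan decision, so a human can later see why it was made.
# All thresholds and field names are illustrative.

def score_applicant(applicant):
    """Return (approved, factors): a decision plus the factors behind it."""
    factors = []
    score = 0

    if applicant["income"] >= 40_000:
        score += 2
        factors.append("income at or above 40,000 threshold (+2)")
    else:
        factors.append("income below 40,000 threshold (+0)")

    if applicant["debt_ratio"] <= 0.35:
        score += 2
        factors.append("debt-to-income ratio at or below 0.35 (+2)")
    else:
        factors.append("debt-to-income ratio above 0.35 (+0)")

    if applicant["missed_payments"] == 0:
        score += 1
        factors.append("no missed payments on record (+1)")
    else:
        factors.append(f"{applicant['missed_payments']} missed payments (+0)")

    # Approval requires a score of at least 4 out of 5 in this toy model.
    return score >= 4, factors

approved, factors = score_applicant(
    {"income": 52_000, "debt_ratio": 0.28, "missed_payments": 0}
)
print("approved:", approved)
for f in factors:
    print(" -", f)
```

A transparent rule-based model like this is easy to review but, as Wallace notes above, typically less accurate than an opaque learned model, which is exactly the tradeoff the GDPR forces.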
Concerns around accuracy
The right to erase data could also reduce the accuracy of some algorithmic models.
“For AI to work requires a ton of data, and the GDPR is about treating personal data with respect,” said Roy Pereira, the CEO of Zoom.Ai, an artificial intelligence provider.
But the European privacy rule restricts the usage of personal data, and in some ways prohibits the use of personal data altogether. That would conflict with machine learning and deep learning’s need for large data sets, he said.
“We are going to see a lot of companies change their strategies around leveraging artificial intelligence technologies,” Pereira said. For example, the right to be forgotten is a key part of the European privacy legislation: it enables end users to request that companies scrub their personal data from their databases.
“The problem is that there are not a lot of source-data providers out there,” he said. “Google is one, maybe Facebook, but there are a tremendous number of companies out there that are processors. They process data that they get from a source or even another processor, and they derive data.”
“This is what AI is really good at, deriving some other data, so the problem then becomes: how does a company ensure that if it deletes a person’s data, it doesn’t also delete the final result, the insight that was derived from that data?” Pereira asked.
That’s going to have some ramifications that aren’t fully understood, he said. Noncompliance with the GDPR’s extremely complex provisions could result in stiff penalties that make advanced data processing a legally and financially risky endeavor.
Because the European privacy rule’s requirements would be impractical — and in some cases impossible — to fulfill, many companies will ultimately limit their uses of AI, Pereira asserted.
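Pereira’s dilemma can be made concrete with a toy example: a processor honors an erasure request by dropping a user’s raw records, yet an aggregate insight already derived from those records lives on. The store layout and data below are entirely hypothetical.

```python
# Hypothetical sketch of the erasure-vs-derived-data problem:
# deleting a user's raw records does not undo insights already
# derived from them. All names and numbers are illustrative.

raw_records = {
    "alice": [120, 95, 130],   # e.g. per-user purchase amounts
    "bob":   [60, 80],
}

def derive_average_basket(records):
    """Derive an aggregate insight from everyone's raw data."""
    amounts = [a for user in records.values() for a in user]
    return sum(amounts) / len(amounts)

insight = derive_average_basket(raw_records)  # computed before any erasure

def erase_user(records, user):
    """Honor an erasure request by dropping the user's raw data only."""
    records.pop(user, None)

erase_user(raw_records, "alice")

assert "alice" not in raw_records   # the raw personal data is gone...
print(round(insight, 2))            # ...but the derived insight remains: 97.0
```

Whether that surviving insight still counts as the data subject’s personal data, and therefore also needs erasing, is precisely the kind of question Pereira says companies do not yet know how to answer.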
Benefits of European privacy protection
Nevertheless, there are some benefits to the GDPR, according to Wallace. For one thing, it establishes a single set of rules for the entire European Union, which is an advantage because it means that companies using data for AI don’t have to work with vastly different regulatory environments throughout Europe.
“The other benefit is that the GDPR makes it illegal for member states to force companies to store personal data within their territories on the basis of privacy, which makes things more competitive in terms of the cloud computing environment that supports AI,” Wallace said.
That’s not perfect because it does impose quite onerous limitations on storing data outside the European Union, which will raise costs for AI. But at least within the union, the European privacy rule relaxes restrictions a little bit, he added.
“The golden opportunity that GDPR presents for AI is an intelligent engine that can look at multiple data sources, multiple ways that information is being processed, take some basic rules and automate a lot of the requirements of GDPR around tagging, around associating consent, around the right to be forgotten,” Wettemann said. “The challenge is getting AI to do that effectively and do it with an audit trail, which is what a lot of AI is missing today.”
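A minimal version of the engine Wettemann describes might apply a simple rule, scrubbing a data subject from every store, while logging each action so auditors can verify compliance. The sketch below is an assumption-laden illustration, not a real compliance product; the store names and record layout are invented.

```python
# Hypothetical sketch: automating right-to-be-forgotten requests
# with an audit trail. Store names and layout are illustrative.

import datetime

audit_trail = []

def log(action, subject):
    """Record every compliance action with a UTC timestamp for auditors."""
    audit_trail.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "subject": subject,
    })

def handle_forget_request(stores, subject):
    """Scrub one data subject from every store, logging as we go."""
    for name, store in stores.items():
        if subject in store:
            del store[subject]
            log(f"erased from {name}", subject)
    log("forget request completed", subject)

stores = {
    "crm":       {"alice": {"email": "alice@example.com"}, "bob": {}},
    "marketing": {"alice": {"consented": True}},
}
handle_forget_request(stores, "alice")

assert all("alice" not in s for s in stores.values())
print(len(audit_trail))  # two erasure entries plus one completion entry: 3
```

The audit trail is the point: without a record of what was erased, when, and why, a company cannot demonstrate to a regulator that the automated process actually did its job.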