Artificial Intelligence Is Setting Up the Internet for a Huge Clash With Europe

Deep learning, the latest in AI technology, could clash with new regulations from the European Union, the world's single largest online market.

Neural networks are changing the Internet. Inspired by the networks of neurons inside the human brain, these deep mathematical models can learn discrete tasks by analyzing enormous amounts of data. They've learned to recognize faces in photos, identify spoken commands, and translate text from one language to another. And that's just a start. They're also moving into the heart of tech giants like Google and Facebook. They're helping to choose what you see when you query the Google search engine or visit your Facebook News Feed.
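
To make the idea concrete, here is a minimal sketch, using the open-source scikit-learn library rather than anything Google or Facebook actually runs, of a small neural network learning a task (recognizing handwritten digits) purely by analyzing labeled examples:

```python
# A minimal sketch of the idea, not how any company's production systems work:
# a small neural network learns to recognize handwritten digits purely by
# analyzing labeled examples, with no hand-written rules.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 pixel images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 64 neurons; training adjusts the weights to fit the examples.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("accuracy on digits the model has never seen:", model.score(X_test, y_test))
```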

All this is sharpening the behavior of online services. But it also means the Internet is poised for an ideological confrontation with the European Union, the world's single largest online market.

In April, the EU laid down new rules for the collection, storage, and use of personal data, including online data. Four years in the making and set to take effect in 2018, the General Data Protection Regulation guards the data of EU citizens even when it is collected by companies based in other parts of the world. It codifies the "right to be forgotten," which lets citizens request that certain links not appear when their name is typed into Internet search engines. And it gives EU authorities the power to fine companies as much as 20 million euros, or four percent of their global revenue, whichever is greater, if they violate the rules.

But that's not all. In a few paragraphs buried in the measure's reams of bureaucrat-speak, the GDPR also restricts what the EU calls "automated individual decision-making." And for the world's biggest tech companies, that's a potential problem, because "automated individual decision-making" is what neural networks do. "They're talking about machine learning," says Bryce Goodman, a philosophy and social science researcher at Oxford University who, together with a fellow Oxford researcher, recently published a paper exploring the potential effects of the new rules.

Hard to Explain

The regulations restrict decisions based solely on automated processing that "significantly affect" EU citizens. This includes techniques that evaluate a person's "performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements." At the same time, the legislation provides what Goodman calls a "right to explanation." In other words, the rules give EU citizens the option of reviewing how a particular service made a particular algorithmic decision.

Both of these stipulations could strike at the heart of major Internet services. At Facebook, for example, machine learning systems are already driving ad targeting, and these systems depend on vast amounts of personal data. What's more, machine learning doesn't exactly lend itself to that "right to explanation." Explaining what goes on inside a neural network is a complicated task even for the experts. These systems operate by analyzing millions of pieces of data, and though they work quite well, it's difficult to determine exactly why they work the way they do. You can't easily trace their precise path to a final answer.

Viktor Mayer-Schönberger, an Oxford expert in Internet governance who helped draft parts of the new legislation, says that the GDPR's description of automated decisions is open to interpretation. But at the moment, he says, the "big question" is how this language affects deep neural networks. Deep neural nets depend on vast amounts of data, and they generate complex algorithms that can be opaque even to those who put these systems in place. "On both those levels, the GDPR has something to say," Mayer-Schönberger says.

Poised for Conflict

Goodman, for one, believes the regulations strike at the center of Facebook's business model. "The legislation has these large multi-national companies in mind," he says. Facebook did not respond to a request for comment on the matter, but the tension here is obvious. The company makes billions of dollars a year targeting ads, and it's now using machine learning techniques to do so. All signs indicate that Google has also applied neural networks to ad targeting, just as it has applied them to "organic" search results. It too did not respond to a request for comment.

But Goodman isn't just pointing at the big Internet players. The latest in machine learning is trickling down from these giants to the rest of the Internet. The new EU regulations, he says, could affect the progress of everything from ordinary online recommendation engines to credit card and insurance companies.

European courts may ultimately find that neural networks don't fall into the automated decision category, that they're more a matter of statistical analysis, says Mayer-Schönberger. Even then, however, tech companies would still be left wrestling with the "right to explanation." As he explains, part of the beauty of deep neural nets is that they're "black boxes." They work beyond the bounds of human logic, which means the myriad businesses that adopt this technology in the coming years will have trouble producing the kind of explanation the EU regulations seem to demand.

"It's not impossible," says Chris Nicholson, the CEO and founder of the neural networking startup Skymind. "But it's complicated."

Human Intervention

One way around this conundrum is for human decision makers to intervene in, or override, automated decisions. In many cases, this already happens, since so many services use machine learning in tandem with other technologies, including rules explicitly defined by humans. This is how the Google search engine works. "A lot of the time, algorithms are just part of the solution, a human-in-the-loop solution," Nicholson says.
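
Here is a hypothetical sketch of what a human-in-the-loop setup of the kind Nicholson describes might look like in code: the model's score is only one input, explicit rules written by people constrain it, and borderline cases are routed to a human reviewer. The loan scenario, function name, and thresholds are illustrative, not drawn from any real service:

```python
# A hypothetical human-in-the-loop decision: the model's score is only one
# input, explicit rules written by people constrain it, and uncertain cases
# go to a human reviewer. Thresholds and the loan scenario are illustrative.
def decide_loan(model_score: float, applicant_is_minor: bool) -> str:
    """Return 'approve', 'deny', or 'human_review'."""
    if applicant_is_minor:       # a rule defined by people, not learned from data
        return "deny"
    if model_score >= 0.9:       # the model is very confident in approval
        return "approve"
    if model_score <= 0.2:       # the model is very confident in denial
        return "deny"
    return "human_review"        # everything in between is decided by a person

print(decide_loan(0.95, applicant_is_minor=False))  # approve
print(decide_loan(0.55, applicant_is_minor=False))  # human_review
```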

But the Internet is moving towards more automation, not less. And in the end, human intervention isn't necessarily the best answer. "Humans are far worse," one commenter wrote on Hacker News, the popular tech discussion site. "We are incredibly biased."

It's a fair argument. And it will only become fairer as machine learning continues to improve. People tend to put their faith in humans over machines, but machines are growing more and more important. This is the same tension at the heart of ongoing discussions over the ethics of self-driving cars. Some say: "We can't let machines make moral decisions." But others say: "You'll change your mind when you see how much safer the roads are." Machines will never be human. But in some cases, they will be better than human.

Beyond Data Protection

Ultimately, as Goodman implies, the conundrums presented by the new EU regulations will extend to everything. Machine learning is the way of the future, whether the task is generating search results, navigating roads, trading stocks, or finding a romantic partner. Google is now on a mission to retrain its staff for this new world order. Facebook offers all sorts of tools that let anyone inside the company tap into the power of machine learning. Google, Microsoft, and Amazon are now offering their machine learning techniques to the rest of the world via their cloud computing services.

The GDPR deals in data protection. But this is just one area of potential conflict. How, for instance, will antitrust laws treat machine learning? Google is now facing a case that accuses the company of discriminating against certain competitors in its search results. But that case was brought years ago. What happens when companies complain that machines are doing the discriminating?

"Refuting the evidence becomes more problematic," says Mayer-Schönbergerd, because even Google may have trouble explaining why a decision is made.