What if You Trained Google's Chatbot on Mein Kampf?

As machines become more adept at learning on their own, we need to ask how they're learning their sense of right and wrong.

Google recently built a chatbot that can learn how to talk to you. Artificial intelligence researchers Oriol Vinyals and Quoc Le trained the thinking machine on reams of old movie dialogue, and it learned to carry on a pretty impressive conversation about the meaning of life.
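The researchers' actual system is a sophisticated neural network, but the basic idea, a program that absorbs the statistical patterns of whatever dialogue you feed it, can be sketched in a few lines of Python. The snippet below is a toy word-level Markov model, not Google's code; the corpus text and function names are invented for illustration.

```python
import random
from collections import defaultdict

# A toy illustration of corpus-driven text generation: a word-level Markov
# chain that learns which word tends to follow which in its training text.
# Google's chatbot is a far more sophisticated neural network, but the core
# lesson is the same: the model can only echo the patterns it was fed.

def train(corpus_text):
    """Build a table mapping each word to the words observed after it."""
    words = corpus_text.split()
    transitions = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)
    return transitions

def generate(transitions, seed_word, length=20):
    """Walk the table, picking a random observed successor at each step."""
    word = seed_word
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break  # dead end: no successor was ever seen for this word
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Train it on movie dialogue and the "bot" sounds like the movies; train it
# on a darker text and it will sound like that instead.
movie_lines = "what is the purpose of life ? the purpose of life is to serve the greater good ."
model = train(movie_lines)
print(generate(model, "the"))
```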

But what happens, asked my friend David Kesterson, if you train it on Mein Kampf?

It's a harrowing question. And it's only one of the questions that concern him. Another is just as troubling: What if you train it on the Bible? The Bible, after all, can be interpreted in so many ways. What, in the end, will the machine learn? "What if it begins to make moral judgments based on its reading and interpretation?" he says.

In some ways, these are just thought experiments. Today, if you train the chatbot on Mein Kampf, it just sounds like Hitler. It doesn't think like him. We're a long way from computers that really think. But as machines become more and more adept at learning on their own, approaching real thought, we must ask the ethical questions early and often. Across the globe, some of the world's brightest minds are working to focus more attention on the social implications of AI. British physicist Stephen Hawking has warned that creating true artificial intelligence could "spell the end of the human race." Elon Musk, who runs electric car company Tesla, recently addressed a conference on AI ethics, warning of an artificial "intelligence explosion" that could catch us off guard.

Prominent AI researchers, such as Facebook's Yann LeCun, have downplayed these fears. LeCun points out that you can build super-intelligent machines without making them autonomous. A machine can play chess at superhuman levels yet lack the ability to do much else, including the ability to reprogram itself. But even LeCun says an AI uprising is certainly something we need to "think about, design precautionary measures, and establish guidelines" for.

One AI company is already working to do those things. Based in Austin, Texas, Lucid is selling a system based on Cyc, an AI tool that researcher Doug Lenat has been developing for more than 30 years. The aim is to produce a system that has "common sense," one that can "reason." Lucid is now bringing this to the business world. In an effort to understand the ramifications of such a system (and, most likely, to protect itself from bad PR), the company is working to set up an "ethics advisory panel" that will examine the big ethical questions of AI.

"We believe, like most pundits in the world, that AI is a hugely disruptive thing—-whether you see dystopian things or pure nirvana or somewhere in between," says Lucid founder and CEO Michael Stewart.

Stewart thinks that Musk's concerns are overblown, but worth listening to. "I'm not sure he has a full understanding of the full architecture of strong AI, any more than I do of electrical cars," Stewart says. "He's putting his finger on real issues—some we think are stretched and some we don't."

Lucid hopes to work hand-in-hand with the likes of Musk and Hawking, and Stewart says the company is already collaborating with researchers at Oxford University and Cambridge (Hawking's home base), and that bioethicist John Harris has agreed to be part of the panel. "We want to engage with that community, and we want to do it in a rational way, with overt knowledge," he says. "We know what this technology is capable of."