OpenAI Wants to Make Ultrapowerful AI. But Not in a Bad Way

OpenAI, a research institute cofounded by Elon Musk, aims to create artificial intelligence that's better than people at everything.

One Saturday last month, five men ages 19 through 26 strode confidently out of a cloud of magenta smoke in a converted auto showroom in San Francisco. They sat at a line of computer keyboards to loud cheers from a crowd of a few hundred. Ninety minutes of intense mouse-clicking later, the players’ smiles had turned sheepish and the applause consolatory. Team OG, champions of the world’s most lucrative videogame, Dota 2, had lost two consecutive games to a collective of artificial intelligence bots.

The result was notable because complex videogames, with their vast numbers of possible moves, hidden information, and long time horizons, are mathematically more challenging for machines than cerebral-seeming board games like chess or Go. Yet leaning against a wall backstage, Sam Altman, CEO of OpenAI, the research institute that created the bots, was as relieved as he was celebratory.

“We were all pretty nervous this morning—I thought we had like a 60-40 chance,” said Altman, a compact figure in a white T-shirt and whiter, showy sneakers. He became OpenAI’s CEO in March after stepping down as president of the influential startup incubator Y Combinator and had reason to be measured about the day’s win. To succeed in his new job, Altman needs bots to do more than beat humans at videogames—he needs them to be better than people at everything.

OpenAI’s stated mission is to ensure that all of humanity benefits from any future AI that’s capable of outperforming “humans at most economically valuable work.” Such technology, dubbed artificial general intelligence, or AGI, does not seem close, but OpenAI says it and others are making progress. The organization has shown it can produce research on par with the best in the world. It has also been accused of hype and fearmongering by AI experts critical of its fixation on AGI and AI technology’s potential hazards.

Under Altman’s plans, OpenAI’s research—and provocations—would accelerate. Previously chair of the organization, he took over as CEO after helping flip most of the nonprofit’s staff into a new for-profit company, in hopes of tapping investors for the billions he claims he needs to shape the destiny of AI and humanity. Altman says the big tech labs at Alphabet and elsewhere need to be pressured by a peer not driven to maximize shareholder value. “I don’t want a world where a single tech company creates AGI and captures all of the value and makes all of the decisions,” he says.

At an MIT event in late 2014, Tesla CEO Elon Musk compared AI research to “summoning the demon.” In the summer of 2015, over dinner, he began talking with Altman and a few others about creating a research lab independent of the tech industry that could steer AI in a positive direction. OpenAI was announced late that year, with Altman and Musk as cochairs. Musk left the board early in 2018, citing potential conflicts with his other roles.

In its short life, OpenAI has established itself as a serious venue for AI research. Ilya Sutskever, a cofounder of the organization who left a plum position in Google’s AI group to lead its research, oversees a staff that includes fellow ex-Googlers and alumni of Facebook, Microsoft, and Intel. Their work on topics such as robotics and machine learning has appeared at top peer-reviewed conferences. The group has teamed up with Google parent Alphabet to research AI safety; beating Team OG in Dota 2 earned respect from experts in AI and gaming.

OpenAI’s metamorphosis into a for-profit corporation was driven by a feeling that keeping pace with giants such as Alphabet will require access to ever-growing computing resources. In 2015, OpenAI said it had $1 billion in committed funding from Altman, Musk, LinkedIn cofounder Reid Hoffman, early Facebook investor Peter Thiel, and Amazon. Altman now says a single billion won’t be enough. “The amount of money we needed to be successful in the mission is much more gigantic than I originally thought,” he says.

OpenAI CTO Greg Brockman, center, shakes hands with members of professional esports team OG after they lost two games of Dota 2 to his researchers’ artificial intelligence bots. Photograph: OpenAI

IRS filings show that in 2017, when OpenAI showed its first Dota-playing bot, it spent $8 million on cloud computing. Its outlay has likely grown significantly since. In 2018, OpenAI disclosed that a precursor to the system that defeated Team OG tied up more than 120,000 processors rented from Google’s cloud division for weeks. The champion-beating version trained for 10 months, playing the equivalent of 45,000 years of Dota against versions of itself. Asked how much that cost, Greg Brockman, OpenAI’s chief technology officer, says only that the project required “millions of dollars,” declining to elaborate.

Altman isn’t sure if OpenAI will continue to rely on the cloud services of rivals—he remains open to buying or even designing AI hardware. The organization is keeping close tabs on new chips being developed by Google and a raft of startups to put more punch behind machine learning algorithms.

To raise the funds needed to ensure access to future hardware, Altman has been trying to sell investors on a scheme wild even for Silicon Valley. Sink money into OpenAI, the pitch goes, and the company will pay you back 100-fold—once it invents bots that outperform humans at most economically valuable work.

Altman says delivering that pitch has been “the most interesting fundraising experience of my life—it doesn’t fit anyone’s model.” The strongest interest comes from AI-curious wealthy individuals, he says. Hoffman and VC firm Khosla Ventures have invested in the new, for-profit OpenAI but didn’t respond to requests for comment. No one is told when to expect returns, and betting on OpenAI is not for the impatient: VC firms are informed they’ll have to extend the duration of their funds beyond the industry-standard decade. “We tell them upfront, you’re not going to get a return in 10 years,” Altman says.

Even as it tries to line up funding, OpenAI is drawing criticism from some leading AI researchers. In February, OpenAI published details of language processing software that could also generate remarkably fluid text. It let some news outlets—including WIRED—try out the software but said the full package and specifications would be kept private out of concern they could be used maliciously, for example to pollute social networks.

That annoyed some prominent names in AI research, including Facebook’s chief AI scientist Yann LeCun. In public Facebook posts, he defended open publication of AI research and joked that, by OpenAI’s logic, people should stop having babies, since babies could one day create fake news. Mark Zuckerberg clicked “like” on the baby joke; LeCun did not respond to a request for comment.

For some, the episode highlighted how OpenAI’s mission leads it to put an ominous spin on work that isn’t radically different from that at other corporate or academic labs. “They’re doing more or less identical research to everyone else but want to raise billions of dollars on it,” says Zachary Lipton, a Carnegie Mellon University professor who works on machine learning and who allows that OpenAI has produced some good results. “The only way to do that is to be a little disingenuous.”

Altman concedes that OpenAI may have sounded the alarm too early—but says that’s better than being too late. “The tech industry has not done a good enough job trying to be proactive about how things may be abused,” he says. A Google cloud executive who helps implement the company’s internal AI ethics rules recently spoke in support of OpenAI’s self-censorship.

After the defeated Team OG departed the stage last month to sympathetic acclaim, OpenAI cued up a second experiment designed to demonstrate the congenial side of superhuman AI. Dota experts—and a few novices, including WIRED—played on teams alongside bots.

The AI software unlucky enough to get WIRED as a teammate mostly evinced superhuman indifference to helping a rookie player. It focused instead on winning the game, following instincts honed by months of expensive training.

Narrow hyper-competence is a hallmark of existing AI systems. A WIRED reporter could play Dota badly while taking occasional notes and talking with an OpenAI researcher, then ride a bicycle home through city traffic. Despite the millions spent on their training, the Dota bots could play only the specific version of the game they were designed for.

There’s little consensus on how to make AI software more flexible, or what components might be needed to make AGI more than a technological fantasy. Even Altman is daunted by the scale of the challenge. “I have days where I’m convinced it’s all going to happen and others where it all feels like a pipe dream,” he says.
