To Make Us All Safer, Robocars Will Sometimes Have to Kill

Not only will robocars fail to completely end traffic deaths, but they’ll be choosing who to sacrifice—all to make the roads of tomorrow a safer place.

Editor's note: *This is the second entry in our new series* Is That a Thing, *in which we explore tech's biggest myths, misconceptions, and—every so often—actual truths. Watch the first episode, about cellphones and cancer, here.*

Let’s say you’re driving down Main Street and your brakes give out. As the terror hits, a gaggle of children spills out into the road. Do you A) swerve into Keith’s Frozen Yogurt Emporium, killing yourself, covering your car in toppings, and sparing the kids or B) assume they’re the Children of the Corn and just power through, killing them and saving your own life? Any decent human would choose the former, of course, because even murderous kiddie farmers have rights.

But would a self-driving car make the right choice? Maybe yes. But even if it does, by programming a machine to save children, you're also programming it to kill the driver. This is known as the trolley problem (it’s older than self-driving cars, you see), and it illustrates a strange truth: Not only will robocars fail to completely eliminate traffic deaths, but on very, very rare occasions, they’ll be choosing who to sacrifice—all to make the roads of tomorrow a far safer place.
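To see why programming in one choice necessarily programs in the other, here's a minimal, purely illustrative sketch in Python. The scenario, numbers, and option names are invented for this article; no automaker has published its actual decision logic. The point is only this: a planner that simply minimizes total casualties will, faced with the yogurt-shop dilemma, select the option that sacrifices its own passenger.

```python
# Toy illustration only (not any automaker's real logic): a planner that
# always picks the option with the fewest total expected deaths will, by
# construction, sometimes pick the option that kills its own passenger.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    pedestrian_deaths: int
    occupant_deaths: int

def choose(options: list[Option]) -> Option:
    # Minimize total expected deaths; break ties in the occupant's favor.
    return min(options, key=lambda o: (o.pedestrian_deaths + o.occupant_deaths,
                                       o.occupant_deaths))

options = [
    Option("swerve into the yogurt shop", pedestrian_deaths=0, occupant_deaths=1),
    Option("power through", pedestrian_deaths=5, occupant_deaths=0),
]

print(choose(options).name)  # -> "swerve into the yogurt shop"
```

Weight the occupant's life more heavily than a stranger's and the same code makes the opposite choice. That weighting is precisely the dial Rahwan and his colleagues are asking the public about.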

Cut the pearl-clutching: Self-driving cars will save countless lives. Humanity needs them, badly—more than 30,000 people die every year in road accidents in the United States alone. Worldwide, it's more than a million. That's because, it turns out, humans are terrible drivers. Machines, by contrast, are consistent, calculating, and incapable of getting drunk, angry, or distracted.

But autonomy can’t save everyone—the technology will never be perfect—and society must understand that well before these cars arrive. It also needs to accept that robocars serve the greater good. “Convincing the public must begin with understanding what the public is worried about and what the psychological mechanisms involved are,” says Iyad Rahwan of the MIT Media Lab, who’s studying just that.

In our little thought experiment with the frozen yogurt, most people would choose to sacrifice their own life for the good of the crowd. But Rahwan has found most people wouldn’t buy a self-driving car that could make the decision to kill them as the passenger. That’s silly and irrational, sure—this would be an exceedingly rare situation and overall you are far safer in the hands of a machine than driving yourself—but this finding poses a serious problem: Robocars may soon be ready to hit the road, but humans aren’t ready to accept the ethical challenges that come along with them.

But, in fairness, these are early days in the self-driving revolution. Researchers need to gather more data about public perception, and automakers in turn need to be open with their customers. “I think everybody is learning,” says Rahwan. “The public is learning, the regulators are learning, and the carmakers are learning as well.” Which means Keith’s Frozen Yogurt Emporium is safe from the merciless robocars.

For the time being.