What Happens if a Driverless Car Has to Choose Between Killing Its Passenger or a Pedestrian?
Philosophers have been debating a similar moral conundrum for years, but now the discussion has a new practical application.
Imagine you’re in a self-driving car, heading towards a collision with a group of pedestrians. The only other option is to drive off a cliff. What should the car do?
Philosophers have been debating a similar moral conundrum for years, but the discussion has a new practical application with the advent of self-driving cars, which are expected to be commonplace on the road in the coming years.
Specifically, self-driving cars from Google, Tesla, and others will need to address a much-debated thought experiment called The Trolley Problem. In the original set-up, a trolley is headed towards five people. You can pull a lever to switch to a different track, where just one person will be in the trolley’s path. Should you kill the one to save five?
Many people believe they should, but this moral instinct is complicated by other scenarios. For example: You’re standing on a footbridge above the track and can see a trolley hurtling towards five people. There’s a fat man standing next to you, and you know that his weight would be enough to stop the trolley. Is it moral to push him off the bridge to save five people?
Go off the cliff
When non-philosophers were asked how driverless cars should handle a situation where the death of either the passenger or a pedestrian is inevitable, most believed that cars should be programmed to avoid hurting bystanders, according to a paper uploaded this month to the scientific research site arXiv.
The researchers, led by psychologist Jean-François Bonnefon from the Toulouse School of Economics, presented a series of collision scenarios to around 900 participants in total. They found that 75% of people thought the car should always swerve and kill the passenger, even to save just one pedestrian.
Among philosophers debating moral theory, this solution is complicated by various arguments that appeal to our moral intuitions but point to different answers. The Trolley Problem is fiercely debated precisely because it is a clear example of the tension between our moral duty to minimize harm and our moral duty not to actively cause it.
One school of thought argues that the moral action is the one that brings the greatest happiness to the greatest number of people, a theory known as utilitarianism. On this reasoning, a driverless car should take whatever action would save the greatest number of people, regardless of whether they are passengers or pedestrians.
If swerving would kill five people inside the car, then the driverless car should continue on its course even if that means hitting an innocent pedestrian. The reasoning may sound simplistic, but the details of utilitarian theory, as set out by John Stuart Mill, are difficult to dispute.
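To make the arithmetic concrete, here is a minimal sketch of such a rule, assuming the only inputs are hypothetical death counts for each maneuver; nothing here reflects how any manufacturer actually programs its cars.

```python
# Minimal, hypothetical sketch of a purely utilitarian rule: choose the
# maneuver that kills the fewest people, regardless of whether the
# victims are passengers or pedestrians.

def utilitarian_choice(deaths_if_continue, deaths_if_swerve):
    """Return whichever maneuver has the lower body count."""
    return "continue" if deaths_if_continue <= deaths_if_swerve else "swerve"

# Five passengers would die if the car swerved; one pedestrian would die
# if it continued. A strictly utilitarian car keeps going.
print(utilitarian_choice(deaths_if_continue=1, deaths_if_swerve=5))  # continue
```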
Who is responsible?
However, other philosophers who have weighed in on the Trolley Problem argue that utilitarianism is a crude approach, and that the correct moral action doesn’t just evaluate the consequences of the action, but also considers who is morally responsible.
Helen Frowe, a professor of practical philosophy at Stockholm University who has given a series of lectures on the Trolley Problem, says self-driving car manufacturers should program vehicles to protect innocent bystanders, as those in the car bear more responsibility for any danger.
“We have pretty stringent obligations not to kill people,” she tells Quartz. “If you decided to get into a self-driving car, then that’s imposing the risk.”
The ethics get particularly complicated when Frowe’s argument points to a different moral action than utilitarian theory would. For example, a self-driving car could contain four passengers, or perhaps two children in the backseat. How does the moral calculus change?
If the car’s passengers are all adults, Frowe believes that they should die to avoid hitting one pedestrian, because the adults have chosen to be in the car and so have more moral responsibility.
Frowe believes that children, who did not choose to be in the car, are not morally responsible, yet she argues that even so it is not morally permissible to kill one person in order to save the lives of two children.
“As you increase the number of children, it will be easier to justify killing the one. But in cases where there are just adults in the car, you’d need to be able to save a lot of them—more than ten, maybe a busload—to make it moral to kill one.”
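Purely as an illustration of how this responsibility-weighted view differs from the utilitarian rule sketched above, the snippet below encodes a threshold rule: adults who chose to ride must be present in much larger numbers than children before hitting the one pedestrian becomes justifiable. The specific thresholds are placeholders taken loosely from Frowe’s remarks, not figures she endorses.

```python
# Hypothetical threshold rule loosely inspired by Frowe's remarks: adults
# chose to impose the risk, so far more of them are needed to justify
# hitting the pedestrian; children bear no such responsibility.
# Both thresholds are placeholders, not values Frowe actually gives.

ADULT_THRESHOLD = 11  # "more than ten, maybe a busload"
CHILD_THRESHOLD = 3   # placeholder: two children are not enough on her view

def frowe_style_choice(adults_in_car, children_in_car):
    """Return 'continue' (hit the pedestrian) only above a threshold."""
    if adults_in_car >= ADULT_THRESHOLD or children_in_car >= CHILD_THRESHOLD:
        return "continue"
    return "swerve"  # the car's occupants bear the risk they imposed

print(frowe_style_choice(adults_in_car=4, children_in_car=0))  # swerve
print(frowe_style_choice(adults_in_car=0, children_in_car=2))  # swerve
```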
It’s better to do nothing
Pity the poor software designers (and, undoubtedly, lawyers) who are trying to figure this out, because it can get much more complicated.
What if a pedestrian acted recklessly, or even stepped out in front of the car with the intention of making it swerve, thereby killing the passenger? (Hollywood screenwriters, start your engines.) Since driverless cars cannot judge pedestrians’ intentions, this ethical wrinkle would be very difficult to take into account in practice.
Philosophers are far from a solution despite the scores of papers that debate every tiny ethical detail. For example, is it more immoral to actively swerve the car into a lone pedestrian than to simply do nothing and allow the vehicle to hit someone?
Former UCLA philosophy professor Warren Quinn explicitly rejected the utilitarian idea that morality should maximize happiness. Instead, he argued that humans have a duty to respect other persons, and so an action that directly and intentionally causes harm is ethically worse than an indirect action that happens to lead to harm.
Of course, cars will very rarely be in a situation where there are only two courses of action and the car can compute, with 100% certainty, that either decision will lead to death. But with enough driverless cars on the road, it’s far from implausible that software will someday have to make such a choice between causing harm to a pedestrian or a passenger. Any safe driverless car should be able to recognize and balance these risks.
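If software ever does face such a choice, it will almost certainly be reasoning with probabilities rather than certainties. The sketch below, again purely hypothetical, weighs each maneuver by estimated per-person probabilities of death and picks the one with the lowest expected casualties; all of the numbers are invented.

```python
# Hypothetical sketch of balancing risks under uncertainty: estimate each
# person's probability of being killed under each maneuver, then pick the
# maneuver with the lowest expected number of casualties. Invented numbers.

def expected_casualties(per_person_risks):
    """Sum the per-person probabilities of death for one maneuver."""
    return sum(per_person_risks)

def safest_maneuver(options):
    """Return the maneuver with the lowest expected casualties."""
    return min(options, key=lambda name: expected_casualties(options[name]))

options = {
    "continue": [0.9],  # one pedestrian, very likely to be struck
    "swerve": [0.3],    # one passenger, at moderate risk in the swerve
}
print(safest_maneuver(options))  # swerve: lower expected harm
```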
Self-driving car manufacturers have yet to reveal their stance on the issue. But, given the lack of philosophical unanimity, it seems unlikely they’ll find a universally acceptable solution. As for philosophers, time will tell if they enjoy having their theories tested in a very real way.