The Trolley Problem, cont. | The Self-Driving Car Problem
Thought experiments in moral philosophy and artificial intelligence
The following news and feature stories refer to "The Trolley Problem," a moral philosophy thought experiment developed by MIT philosopher Judith Jarvis Thomson. For context, some background on her original work follows:
The Trolley Problem | MIT Philosopher Judith Jarvis Thomson
If a runaway trolley is destined to hit a group of five people but can be diverted onto a track where it will hit only one, is it right to divert it? What if it can only be stopped by throwing somebody in front of it?
First posed by the British philosopher Philippa Foot and developed by MIT philosopher Judith Jarvis Thomson, now emerita, the famous "trolley problem" has been debated for more than 40 years, as philosophers the world over struggle to understand what principle underlies the different responses elicited by the two scenarios.
In each case, one person is sacrificed to save five. Yet people overwhelmingly support diverting the trolley and object to throwing a person into its path. Why?
Today this question has crossed disciplines, taken up by researchers in the rapidly growing field of moral psychology, which aims to investigate moral responses empirically. Psychology professor Marc Hauser of Harvard, for example, is investigating the theory that some basic moral sense is hard-wired in the human brain — an idea analogous to MIT Professor emeritus Noam Chomsky's theory of universal grammar. Hauser has incorporated variations on the trolley problem into his "Moral Sense Test," an online survey that initially posed moral questions to 5,000 subjects in 120 countries (the test has since been taken by upwards of 150,000 people). Responses have proved remarkably consistent across gender, age, educational level, ethnicity, religion and national affiliation.
Professor Thomson nevertheless sees polls as irrelevant to the ethical question. In a recent paper re-examining the trolley problem, she writes that, contrary to popular opinion, it is impermissible to kill one person by diverting the trolley to save five; the bystander may choose to do nothing. There is a major moral difference between killing five people and letting five die, Thomson says. Why remains a subject for debate.
Judith Jarvis Thomson, MIT Professor Emerita of Philosophy, at a conference in her honor
What can the Trolley Problem teach self-driving car engineers?
Researchers in the Scalable Cooperation group at the Massachusetts Institute of Technology Media Lab revived and revised the moral quandary in 2016: the trolley became a self-driving car, and the trolley's "switch" became the car's programming, written in advance by godlike engineers. MIT's "Moral Machine" asked users to decide whether the car should, say, kill an elderly woman or an elderly man, or five dogs, or five slightly tubby male pedestrians.
Story at Wired
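The dilemma the Moral Machine poses can be caricatured in a few lines of code. The sketch below is purely illustrative, assuming a naive count-of-lives utility; no real autonomous-vehicle system is programmed this way, and every name in it is hypothetical.

```python
# Illustrative sketch only: a trolley-style dilemma encoded as a choice
# between two outcomes, scored by a crude (and ethically contested)
# count-of-lives utility. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    casualties: int

def choose_outcome(stay: Outcome, swerve: Outcome) -> Outcome:
    """Pick the outcome with fewer casualties. Ties favor inaction,
    echoing Thomson's distinction between killing and letting die."""
    if swerve.casualties < stay.casualties:
        return swerve
    return stay

stay = Outcome("continue straight: five pedestrians", 5)
swerve = Outcome("divert: one pedestrian", 1)
print(choose_outcome(stay, swerve).description)
```

The sketch makes the philosophical point concrete: the hard part is not the comparison itself but deciding what counts in the utility, which is exactly the question the Moral Machine put to its respondents.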
Self-driving cars will have to decide who should live and who should die. Here's who humans would kill.
“We don't suggest that [policymakers] should cater to the public's preferences. They just need to be aware of it, to expect a possible reaction when something happens. If, in an accident, a kid does not get special treatment, there might be some public reaction,” said Edmond Awad, a computer scientist at the Massachusetts Institute of Technology Media Lab who led the work.
Story at Washington Post
In a crash, should self-driving cars save passengers or pedestrians? 2 million people weigh in
Since 2016, scientists have posed this scenario to people around the world through the “Moral Machine,” an online platform hosted by the Massachusetts Institute of Technology that gauges how humans respond to ethical decisions made by artificial intelligence. On Wednesday, the team behind the Moral Machine released responses from more than two million people spanning 233 countries, dependencies and territories.
Coverage at PBS NewsHour