Jason Millar will lead a discussion of Abby Everett Jaques's Why the Moral Machine is a Monster on Saturday, April 13, at 2:30 p.m. at #werobot 2019.
The Moral Machine project, built by the MIT Media Lab's Scalable Cooperation Group, is a game-like platform that presents users with a choice between two outcomes in a scenario in which a self-driving car is going to crash. Users must decide whether the car should continue on its path or swerve to change course; the choice determines how many people are killed, whether pedestrians or passengers are protected, whether people of different ages or social positions are favored, and so on. The idea is that gathering data about people's choices can inform the programming of autonomous vehicles, turning them into, as it says on the tin, moral machines.
In essence, then, the Moral Machine project seeks to crowdsource guidelines for the programming of autonomous vehicles, by using a version of a classic philosophical thought experiment: the trolley problem. The interface is fun, the topic is current, and the platform has gone viral, receiving millions of responses and enormous amounts of press. Unfortunately, the results of the experiment, recently published in Nature, turn out to be morally monstrous.
This paper explains how and why the Moral Machine goes astray. It collects several apparently disparate worries and shows how they flow from a few basic methodological errors. Both explicitly and implicitly, the paper argues, the Moral Machine asks the wrong questions, framing the ethical choices in terms that ensure respondents will fail to make good decisions.