A prevailing rhetoric in human-robot interaction is that automated systems will help humans do their jobs better. Robots will not replace humans, but rather work alongside and supplement human work. Even when most of a system is automated, the concept of keeping a “human in the loop” promises that human judgment will always be able to trump automation. This rhetoric emphasizes fluid cooperation and shared control. In practice, the dynamics of shared control between human and robot are more complicated, especially with respect to accountability. While control has become distributed across multiple actors, our social and legal conceptions of responsibility remain focused on the individual. If there is an accident, we intuitively, and our laws in practice, want someone to take the blame.
The result of this ambiguity is that humans may emerge as “liability sponges” or “moral crumple zones.” Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a robotic system may become simply a component, whether by accident or by design, that bears the brunt of the moral and legal penalties when the overall system fails.
Madeleine Elish’s paper uses the concept of “moral crumple zones” within human-machine systems as a lens through which to think about the limitations of current design paradigms and frameworks for accountability in human-robot systems. It begins by examining historical instances of “moral crumple zones” in aviation, nuclear energy, and automated warfare. For instance, an analysis of the technical, social, and legal histories of aviation autopilots, an early or proto-autonomous technology, reveals a counterintuitive focus on human responsibility even as human action is increasingly replaced by automated control. From the perspective of both legal liability and social perception, the systems that govern autopilots and other flight management technologies have remained remarkably unaccountable in the case of accidents, even while these systems are primarily in control of flight.
In each of the systems discussed, the paper analyzes the dimensions of distributed control at stake and maps the degree to which control over an action and responsibility for it are proportionate. It argues that an analysis of accountability in automated and robotic systems must contend with how and why accountability may be misapplied, and with the structural conditions that enable this misapplication. How do non-human actors in a system effectively deflect accountability onto human actors? And how might future models of robotic accountability keep this deflection in check? At stake, ultimately, is the potential to protect against new forms of consumer and worker harm.
The paper presents the concept of the “moral crumple zone” as both a challenge to and an opportunity for the design and regulation of human-robot systems. By articulating mismatches between control and responsibility, it argues for an updated framework of accountability in human-robot systems, one that can contend with the complicated dimensions of cooperation between human and robot.
Madeleine Elish will present “Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction” on Friday, April 1, at 8:45 AM, with discussant Rebecca Crootof, at the University of Miami Newman Alumni Center in Coral Gables, Florida.