Abeba Birhane, Jelle van Dijk, and Frank Pasquale will present their paper, Debunking Robot Rights: Metaphysically, Ethically and Legally, on Saturday, September 25th at 10:00am at #werobot 2021. Deb Raji will lead the discussion.
In this work we challenge the argument for robot rights on metaphysical, ethical and legal grounds. Metaphysically, we argue that machines are not the kinds of things that could be denied or granted rights. Ethically, we argue that, given machines’ current and potential harms to the most marginalized in society, limits on (rather than rights for) machines should be at the centre of current AI ethics debate. From a legal perspective, the best analogy to robot rights is not human rights but corporate rights, rights which have undermined the US electoral process as well as workers’ and consumers’ rights. The idea of robot rights, we conclude, acts as a smoke screen, allowing theorists to fantasize about benevolently sentient machines, while so much of current AI and robotics is fuelling surveillance capitalism, accelerating environmental destruction, and entrenching injustice and human suffering.
Building on theories of phenomenology, post-Cartesian approaches to cognitive science, and critical race studies, we ground our position in the lived reality of actual humans in an increasingly connected, controlled and surveilled society. What we find is the seamless integration of machinic systems into daily life in the name of convenience and efficiency. The last thing these systems need is legally enforceable “rights” to ensure persons defer to them. Conversely, the ‘autonomous intelligent machine’ is a sci-fi fantasy, a meme that functions to mask the environmental costs and human labour that are the backbone of contemporary AI. The robot rights debate further mystifies and obscures these problems. And it could easily provide a normative rationale for permitting powerful entities developing and selling AI to be absolved from accountability and responsibility, given the general association of rights with responsibility.
Existing robotic systems (from chatbots to humanoid robots) are often portrayed as fully autonomous systems, and that is part of the appeal for granting them rights. However, these systems are never fully autonomous, but always human-machine systems that run on human labour and environmental resources, and are necessarily embedded in social systems from conception to development to deployment and beyond. Yet the “rights” debate proceeds from the assumption that the entity in question is somewhat autonomous, or worse, that it is devoid of exploited human labour. Approaching ethics requires reimagining it from the perspective, needs, and rights of the most marginalized and underserved. This means that any robot rights discussion that overlooks the underpaid and exploited populations who serve as the backbone for “robots”, as well as the environmental cost of creating AI, risks being disingenuous. The question should not be whether robotic systems deserve rights, but rather: if we grant or deny rights to a robotic system, what consequences and implications arise for the people owning, using, developing, and affected by actual robots?
The time has come to change the narrative, from “robot rights” to the duties of the corporations and powerful persons now profiting from sociotechnical systems (including, but not limited to, robots). Damage, harm and suffering have been repeatedly documented as a result of the creation and integration of AI systems into the social world. Rather than speculating about the moral desert of hypothetical machines, the far more urgent conversation concerns robots and AI as concrete artifacts built by powerful corporations, further invading our private, public, and political space, and perpetuating injustice. A purely intellectual and theoretical debate risks obscuring the real threat here: that many of the actual robots that corporations are building are doing people harm both directly and indirectly, and that a premature and speculative robot rights discourse risks further unravelling our frail systems of accountability for technological harms.