This paper examines the possible future application of robotics to policing, on the assumption that such systems will be controlled by programmed computers rather than cyborgs. In particular, it examines the legal and moral requirements for the use of force by police, and asks whether robotic systems of the foreseeable future could meet these requirements, or whether those laws may need to be revised in light of robotic technologies, as some have argued.
Beyond this, the paper considers the racial dimensions of the use of force by police, and how such automation might affect the discriminatory nature of police violence. Many people believe that technologies are politically neutral, and might expect a future RoboCop to be similarly neutral, and consequently free of racial prejudice and bias. On this view, RoboCop might be seen as a technological solution to racist policing. Yet many scholars have argued that technologies embody the values of the society that produces them, and often amplify that society's power disparities and biases. On that view, RoboCop might be seen as an even more powerful, dangerous, and unaccountable embodiment of racist policing.
The paper proceeds by examining the problems of racist policing from a number of diverse perspectives. These include the national and international legal standards for the use of force by police, the guidelines issued by the UN Human Rights Council, the ICRC, and Amnesty International, and the legal implications of designing robotic systems to use violent and lethal force, whether remotely or autonomously.
From another perspective, the paper considers the ways in which digital technologies are not racially neutral, but can embody forms of racism by design, both intentionally and unintentionally. Examples include automatic faucets that fail to recognize dark-skinned hands, the intentional tuning of color film stock to give greater dynamic range to white faces at the expense of black faces, and the numerous challenges of adapting facial recognition technologies to racially diverse faces. In other words, how might automated technologies that are intended to treat everyone equally fail to do so? And further, how might automated technologies be expected to make special considerations for particularly vulnerable populations? The paper also considers the challenges of recognizing individuals in need of special consideration during police encounters, such as the elderly, children, pregnant women, people experiencing health emergencies, the mentally ill, and the physically handicapped, including the deaf, the blind, and those using wheelchairs, canes, prosthetics, and other medical aids and devices.
The paper also considers the systemic nature of racism: the automation of policing might fail to address systemic racism even if it succeeded in eliminating racial bias in individual police encounters. In particular, it considers the likely applications of data-driven policing. Given the efficiency aims of automation, automated patrols would likely be shaped by data from previous police calls and encounters. As is already the case with human policing, robotic police would likely be deployed more heavily in the communities of racial minorities and of the poor and disenfranchised, where they would generate more interactions and more arrests, and thus produce data that further justifies a greater robotic police presence in those communities. That is, automated policing could easily reproduce the racist effects of existing practices, with their explicit and implicit forms of racism.
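The feedback dynamic described above can be made concrete with a toy simulation. The sketch below is purely illustrative and is not drawn from the paper or from any deployed system: it assumes two neighborhoods with identical underlying incident rates, hypothetical parameters for patrol resources and recording rates, and a simple rule that deploys patrols in proportion to previously recorded incidents. Because incidents are only recorded where patrols are present, the initial disparity in the data is reproduced period after period and appears to justify itself.

```python
# Illustrative sketch only: a toy model of the data-driven feedback loop.
# Neighborhoods "A" and "B" have an IDENTICAL underlying incident rate,
# but patrols follow the historical record, and records are generated only
# where patrols are sent. All numbers below are hypothetical.

TRUE_INCIDENTS_PER_PERIOD = 100   # same in both neighborhoods
TOTAL_PATROLS = 10                # fixed patrol resources to allocate
RECORDS_PER_PATROL = 10           # incidents a single patrol records per period

recorded = {"A": 60, "B": 40}     # start from a skewed historical record

for period in range(10):
    total = recorded["A"] + recorded["B"]
    # "Data-driven" deployment: patrols are allocated in proportion to
    # previously recorded incidents.
    patrols = {"A": round(TOTAL_PATROLS * recorded["A"] / total)}
    patrols["B"] = TOTAL_PATROLS - patrols["A"]

    for hood in ("A", "B"):
        # New records depend on deployment, not on the (equal) true rates.
        new = min(patrols[hood] * RECORDS_PER_PATROL, TRUE_INCIDENTS_PER_PERIOD)
        recorded[hood] += new

    share_a = recorded["A"] / (recorded["A"] + recorded["B"])
    print(f"period {period}: patrols A={patrols['A']} B={patrols['B']}, "
          f"A's share of recorded incidents = {share_a:.2f}")
```

Running this, neighborhood A's share of recorded incidents never falls below its initial 60 percent, and the 6-to-4 patrol split persists indefinitely, even though the true incident rates are equal: the data the system collects reflects where patrols were sent rather than where incidents occur.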
Finally, the paper reflects on the need for greater community involvement in establishing police use-of-force standards, in enforcing those standards, and in shaping other norms governing policing. Moreover, as policing becomes increasingly automated through both data-driven and robotic technologies, it becomes increasingly important to involve communities in the design and adoption of the technologies used to keep the peace in those communities. Failing to do so will only deepen the adversarial stance between communities and their police forces.
Peter Asaro will present Will #BlackLivesMatter to RoboCop? on Saturday, April 2nd at 3:15 PM with discussant Mary Anne Franks at the University of Miami Newman Alumni Center in Coral Gables, Florida.