The Legal Construction of Black Boxes

Elizabeth Kumar, Andrew Selbst, and Suresh Venkatasubramanian will present their paper, The Legal Construction of Black Boxes, on Saturday, September 25th, at 10:00 a.m. at #werobot 2021. Ryan Calo will lead the discussion.

Abstraction is a fundamental technique in computer science. Formal abstraction treats a system as defined entirely by its inputs, outputs, and the relationship that transforms inputs to outputs. If a system’s user knows those details, they need not know anything else about how the system works; the internal elements can be hidden from them in a “black box.” Abstraction also entails choices: What are the relevant inputs and outputs? What properties should the transformation between them have? What constitutes the “abstraction boundary”? These choices are necessary, but they have implications for legal proceedings that involve the use of machine learning (ML).
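
To make the idea concrete, here is a minimal Python sketch of a formal abstraction. All names (RiskScorer, LogisticScorer, the feature names) are hypothetical illustrations rather than anything drawn from the paper; the point is only that the caller sees the declared inputs, outputs, and input-output relationship, while everything else stays behind the boundary.

```python
import math
from abc import ABC, abstractmethod


class RiskScorer(ABC):
    """Abstraction boundary: map an applicant's features to a score in [0, 1].

    Everything behind this interface -- training data, feature engineering,
    model family -- is hidden inside the "black box."
    """

    @abstractmethod
    def score(self, applicant: dict) -> float:
        """Return a risk score in [0, 1] for the given input features."""


class LogisticScorer(RiskScorer):
    """One possible implementation; callers need not know it is logistic."""

    def __init__(self, weights: dict, bias: float) -> None:
        self._weights = weights  # internal detail, invisible across the boundary
        self._bias = bias

    def score(self, applicant: dict) -> float:
        z = self._bias + sum(
            w * applicant.get(name, 0.0) for name, w in self._weights.items()
        )
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid keeps the output in [0, 1]


# The user interacts only with the declared inputs and outputs:
scorer: RiskScorer = LogisticScorer({"income": -0.5, "debt": 0.8}, bias=0.1)
print(scorer.score({"income": 1.2, "debt": 0.4}))
```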

This paper makes two arguments about abstraction in ML and legal proceedings. The first is that abstraction choices can be treated as normative and epistemic claims made by developers, claims that compete with judgments properly belonging to courts. Abstraction constitutes a claim as to the division of responsibility: what is inside the black box is the province of the developer; what is outside belongs to the user. Abstraction is also a factual definition, rendering the system an intelligible and interrogable object. Yet the abstraction that defines the boundary of a system is itself a design choice. When courts treat technology as a black box with a fixed outer boundary, they unwittingly abdicate their responsibility to make normative judgments as to the division of responsibility for certain wrongs, and abdicate part of their factfinding role by taking the abstraction boundaries as a given. We demonstrate these effects in discussions of foreseeability in tort law, liability in discrimination law, and evidentiary burdens more broadly.
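
The claim that the boundary is itself a design choice can also be sketched in code. In this hypothetical example (all function names are invented for illustration), the same pipeline is drawn with two different abstraction boundaries, shifting who is responsible for a data-cleaning step without changing the system's behavior at all.

```python
def clean(record: dict) -> dict:
    """Imputation step: fill missing features with zeros (itself a design choice)."""
    return {name: record.get(name, 0.0) for name in ("income", "debt")}


def score_raw(features: dict) -> float:
    """Stand-in for any learned input-output mapping, clipped to [0, 1]."""
    raw = 0.1 + 0.8 * features["debt"] - 0.5 * features["income"]
    return max(0.0, min(1.0, raw))


# Boundary A: cleaning ships *inside* the black box; the user's only
# responsibility is to supply a raw record.
def score_boundary_a(record: dict) -> float:
    return score_raw(clean(record))


# Boundary B: cleaning sits *outside* the box, so responsibility for bad
# imputation shifts to the user, although the overall behavior is identical.
def score_boundary_b(cleaned_features: dict) -> float:
    return score_raw(cleaned_features)


record = {"income": 1.2}  # "debt" is missing; someone must handle that
assert score_boundary_a(record) == score_boundary_b(clean(record))
```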

Our second argument builds on that observation. By interpreting the abstraction as one of many possible design choices, rather than as a simple fact, courts can surface those choices as evidence to draw new lines of responsibility without necessarily interrogating the interior of the black box itself. Courts may draw on evidence about the system as presented to support these alternative lines of responsibility, but by analyzing how the implied abstraction boundary of a system was constructed, they can also consider the context around its development and deployment.

Courts can rely on experts to compare a designer’s choices with emerging standard practices in the field of ML, or can assign users a burden to justify their use of off-the-shelf technology. After resurfacing the normative and epistemic contentions embedded in the technology, courts can use familiar lines of reasoning to assign liability appropriately.