Administering Artificial Intelligence

Alicia Solow-Niederman

Michael Froomkin will lead a discussion of Alicia Solow-Niederman's Administering Artificial Intelligence on Saturday, April 13, at 11:00 a.m. at #werobot 2019.

Calls are increasing for sector-specific regulation or for a federal agency or commission to guide and constrain the development of artificial intelligence (AI). This turn to administrative law is understandable because AI’s regulatory challenges seem similar to those in other technocratic domains, such as the pharmaceutical industry or environmental law. But an “FDA for algorithms” or a federal robotics commission is not a cross-cutting AI solution. AI is unique, even if it is not entirely different. Its distinctiveness comes from technical attributes (speed, complexity, and unpredictability) that strain traditional administrative law tactics, in combination with the institutional settings and incentives, or strategic context, that shape its development path.

This Article puts American AI governance in strategic context. Today, there is an imbalance of state and non-state AI authority: commercial actors dominate research and development, and private resources outstrip public investment. Even if we could redress this baseline, a fundamental yet under-recognized problem remains. Any governance strategy must contend with the ways in which algorithmic applications permit seemingly technical decisions to de facto regulate human behavior, with a greater potential for physical and social impact than ever before. When coding choices functionally operate as policy in this manner, the current trajectory of AI development augurs an era of private governance. Without rethinking our regulatory strategies, we risk losing the democratic accountability that is at the heart of public governance.