Is risk regulation right for AI?

May 17, 2023

In short: Is risk regulation the right legal framework for AI? Margot E. Kaminski argues that it’s our default path. If we want to change that, we’ll need to imagine and advocate for better approaches.

In The Developing Law of AI: A Turn to Risk Regulation, Kaminski explains that the “fast-developing law of AI is shaping up to be … risk regulation. But risk regulation comes with baggage, and there is far more to the risk regulation toolkit than algorithmic impact assessments.”

Further:

Deploying risk regulation, whatever its benefits, is a normative choice with consequences. The type of risk regulation being used to regulate AI systems is light-touch law, centering on impact assessments and internal risk mitigation, and largely eschewing tort liability and individual rights, along with postmarket measures, licensing schemes, and substantive standards. It may well be worth looking at substantive areas of law that take AI risks more seriously—health law, or the law of financial systems—as potential models for better general AI risk regulation.

Multiple aspects of AI systems, and their harms, get lost in the current framing. AI is often the product of surveillance; this fact typically gets lost in discussions of AI risk regulation. The individual gets lost. The harms of AI systems can echo and reify problematic aspects of society. We should be asking, in the first place, whether we really want to measure, scale, and reproduce what are often deeply troubling aspects of existing social systems. The use of AI systems can leave less room for change, discretion, or compassion. There are big normative arguments to be had—and being had—on what this means for marginalized people and communities. These are policy conversations, not decisions to be left for enterprise risk management.