AI Legal Risks Could Increase Due to Loper Decision
Published 10/03/2024
Written by Dan Stocker, with contributions from the CSA AI Governance and Compliance Working Group.
AI and regulation
In just a few short years, artificial intelligence (AI) has gone through a massive hype cycle and is entering a period where it will directly impact the broader population. There are, however, still many concerns about safety and irresponsible use. Across the globe, there is broad consensus that AI technology requires regulation. With the EU AI Act in effect as of August 1, 2024, there is widespread interest, and trepidation, about the prospects for US legislation. Legislation has been proposed both at the US federal level and in many US states.
That unease has deepened with the recent US Supreme Court case of Loper Bright Enterprises v. Raimondo, decided on June 28, 2024. To understand why, we must first tease out the important context of this (otherwise) dry topic. Without getting too deep into the details of the US system of government, the general idea (shared by many other forms of government) is that a legislative authority passes a law, an executive function administers it, and a judicial authority interprets the law when controversy arises over how the executive function has chosen to proceed.
The lawmaking process favors political compromise, which tends toward ambiguity as a means of reaching win-win results. In the US, modern administrative law has evolved to deal with that ambiguity by delegating interpretive judgment to the agency tasked with implementing the law. A similar process can be seen in other jurisdictions. The standard justification is that the agency is best situated, by reason of specialization, expertise, and experience, to reach a reasonable interpretation. This is generally true, and it is especially relevant for highly technical topics. The complexity of modern administrative law, with its technically challenging subject matter and broad sets of stakeholders, is generally thought to preclude comprehensively detailed statutory drafting. That complexity is not abating; rather, it is accelerating.
What is important about the Loper decision?
Loper overturned the landmark 1984 case Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc. The fundamental issue centers on the interpretation of ambiguous statutory language and the role of federal administrative agencies in resolving those questions. The prior standard, commonly known as Chevron deference, required courts to defer to reasonable interpretations made by those agencies. Going forward, courts are directed to exercise independent judgment in deciding how a statute should be disambiguated. This represents a much larger role for courts and greater leverage for challenges to existing agency interpretations. Loper allows courts to consider established agency interpretations or to disregard them entirely; any deference is left to the judgment of the particular court.
While the nominal justification offered in Loper (a return to “traditional tools of statutory construction”) seems straightforward enough, the holding will exacerbate a systemic problem for the American system of government. Administrative law has the largest surface area of rule-making and an outsized impact on market decisions by regulated organizations. The related topics of AI safety and responsible AI were already expected to challenge lawmakers to strike a principled balance: materially addressing the risks of unregulated AI while fostering an environment that incentivizes innovation in responsible AI systems.
Loper puts an entirely new, diverse class of actors (the courts themselves) directly into the critical path of determining the meaning and impact of such rules. Aside from established market leaders favoring the status quo, one specific objection to any AI regulation is that it will engender a years-long period of legal challenges before clarity is achieved. Lack of clarity, it is claimed, will slow growth (i.e., funding) and change the risk calculus for innovators. By encouraging the Balkanization of interpretation, Loper will accentuate this problem.
What does all this mean in the short term?
- The EU AI Act, much like the General Data Protection Regulation (GDPR) before it, will serve as a high-water mark for AI regulation. Implementation will be phased in over the next several years to allow member states time to establish the various authoritative bodies required for enforcement.
- In the short term, in the US, President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence will be most impactful. Its mandates for governance, risk management, and responsible use of AI will help establish norms that subsequent legislation can more easily build upon.
- US state laws will be faster to “market” than US federal legislation, following Justice Louis Brandeis’s astute observation that "a single courageous State may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country." As a practical matter, states with stronger tech-sector economies will exercise outsized influence on companies.
- Loper-based case law could arrive fairly quickly, giving early, if diffuse, signals of how the decision may affect federal AI regulations (when those arrive).
Discover more AI thought leadership from the CSA blog.