Our submission to the NTIA's AI accountability RFC

June 10, 2023

In our submission to the National Telecommunications and Information Administration’s request for comment on AI accountability, we argue that effective regulation must start with the outcomes it wants to achieve – in this case, the prevention of AI-related harms to individuals – and lay out the steps to get there.

For AI accountability to have any force, governments and agencies should not defer to industry on AI policy. Governments serve the public but corporations do not, and it is imprudent to allow self-interested entities too much influence over the rules meant to restrain them.

This comment provides a critique of recent approaches to AI accountability proposed by industry and suggests a better path – one that encourages the responsible development of AI and broad protections for the public.

Who we are and why our perspective matters

We understand the motivations of the AI industry because we are part of it.

Imbue is an independent research company developing the reasoning models that will make human-centric AI agents possible. Our approach creates agents that think in language and are inspectable by design. We develop safety principles in-house and participate in shaping policy, regulation, and industry norms so that people will come before commercial interests.

We are convinced of the incredible potential of what we are building, yet we are concerned that incumbent interests have disproportionately defined today’s AI policy agenda with proposals that are either distracting, like drafting letters of concern, or toothless, like proposing weak governance models. This has inadvertently tilted the policy discourse to favor businesses over the communities we need to protect.

This needs to change.

How to regulate the AI industry to benefit every community

AI systems are too important to be governed exclusively by their creators; we need our laws and industry norms to protect people, even (and especially) when that means limiting the existence, uses, or profit-making opportunities for those systems. We already have legal frameworks and regulatory structures to do this – the question is whether we’ll properly use them. If we don’t, we risk squandering AI’s opportunity and repeating the errors that allowed, as an example, surveillance advertising to become a prevailing business model for technology’s previous era (re Q. 11).

Treat safety as an engineering discipline – and put it first

IN RESPONSE TO QUESTIONS 2, 3, 4, 5, 9, 11, 15, 21, 27, and 31

AI will make many tasks easier – including tasks that may cause serious damage and tasks that are motivated by malicious intent. Our safety opportunity with AI is to pursue a program that minimizes much of that harm both by supporting the development of safe systems and by preventing the deployment of unsafe ones.

At a minimum this will require the following:

  • Treating safety as an engineering discipline. As with nuclear power plants or airplanes, AI safety requires understanding the principles behind the system, and implementing many layers of failsafes for when things go wrong.1
  • Requiring companies to open their models so their soundness can be verified. AI systems can be evaluated by governments or third parties without publicly releasing trade secrets and intellectual property. And unless evaluators have access to the systems themselves, outside-in evaluation will be ineffective.2
  • Making public and private investments in our understanding of AI safety. Rigorous standards like the National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF)3 make it possible to implement processes internally, a necessary precondition to building safe systems. This work should also include promulgating principles for the design and deployment of these systems.
  • Placing the burden on developers to prove that their systems are safe. If a developer cannot do this, they should not be allowed to release such powerful and potentially dangerous systems into our lives.

Above all else, safety should supersede other considerations, including commercial interests and convenience.

Give regulation teeth

IN RESPONSE TO QUESTIONS 1, 6, 8, 16, 18, 22, 24, and 30

We want to challenge the RFC’s implicit framing of AI accountability: as much as we need to encourage best practices within the industry, the purpose of AI policy should be to prevent specific harms (like bias and malicious behaviors) from affecting communities.

It’s the outcomes that matter – accountability processes only have value insofar as they improve those outcomes.

This means that enforcement needs to be strong enough to shift industry behavior. Regulations that can be ignored or sidestepped by paying fines are of little value. For a regulation to have force, it will need not only financial penalties, but the ability to shut down non-compliant systems and to hold irresponsible actors to account.

Right now we appear to be on a default path to risk regulation,4 which presupposes that regulation can anticipate and cover all the risks of a given technology. But what happens when companies follow these rules and their products still cause harm? Risk regulations are usually overseen by expert agencies and focus on collective benefit, which means that individual recourse is much harder and the likelihood that a court will shut something down is much lower.

Disclosure requirements, which feel like they should help, can be a decoy: they give a sense of doing something, but without follow-on consequences, they can be meaningless.

A better approach is to establish and enforce regulations that require systems above certain capability or usage thresholds to be open to evaluation by government agencies or third parties, that carry meaningful consequences up to and including shutdown, and that hold developers to high standards of responsibility until we better understand the dangers of AI systems.

Self-regulation has a role to play in shaping industry norms. Treating safety as an engineering discipline is an example of this, and things like the NIST AI RMF and the safety principles that we are drafting to share with the community can be highly impactful. But they should be understood for what they actually offer: best practices and guidance, not enforcement and compliance.

These regulations need to prioritize actual harms to mitigate today’s dangers, while remaining flexible enough to evolve as AI systems advance.

Avoid anti-competitive traps

IN RESPONSE TO QUESTIONS 7 and 28

Businesses prefer less competition. They often achieve this through technical differentiation (which is what we are now seeing among developers of large language models). Policy can also be a moat for incumbents; early regulations can raise barriers for new entrants or favor industry over the needs of the public.5

To encourage competition, we should avoid:

  • Licensing AI systems (at this stage). This requirement will make it much harder for new entrants and smaller companies to develop AI systems, while its intended goals can be achieved with other policy approaches (see above). Licenses can also create the false sense that a system is safe, even though we may not yet have sufficient technical understanding to make that assertion. This is especially true given the rapid progress in AI capabilities that we are seeing today.
  • An AI-focused federal agency (at this stage). An agency dedicated to AI is likely to conflict with existing agencies that are better positioned to mitigate the emerging harms of AI. This will raise compliance costs, risk regulatory capture, and weaken overall enforcement. We would do better to empower existing agencies and to stand up capabilities (like evaluating large language models) as needed.

Given the incredible investment and economic opportunity associated with its development, regulatory friction is unlikely to slow AI down at this stage. The more likely result of onerous regulation will be to consolidate the power of the industry’s first movers.

Start with the laws and frameworks that we already have

IN RESPONSE TO QUESTION 1e

We should not build interventions from scratch. Existing laws can already address many of AI’s challenges, so we need not spin our wheels waiting for new rules that may simply be redundant. For example, we already have laws that govern deceptive and misleading practices, and we can lean on those to punish the misuse of AI systems that falls within their reach.

Conclusion

If we want society to truly benefit from the incredible potential of AI, our regulations need to start by asking which outcomes we want to prevent and which we want to encourage. From there we can provide sanctions, enforcement mechanisms, and accountability processes to drive those outcomes. Starting with processes gets this backwards.

In this comment we propose four approaches to do this:

  • Treat safety as an engineering discipline – and put it first;
  • Give regulation teeth;
  • Avoid anti-competitive traps; and
  • Start with the laws and frameworks that we already have.

We are grateful for this opportunity to share our perspectives with the NTIA. If you have additional questions please reach out to Imbue at policy@imbue.com.


References

  1. Imbue. “Our Approach to Safety.” Accessed June 7, 2023. https://imbue.com/company/safety/.

  2. Burnell, Ryan, Wout Schellaert, John Burden, Tomer D Ullman, Fernando Martinez-Plumed, Joshua B Tenenbaum, Danaja Rutar, et al. “Rethink Reporting of Evaluation Results in AI.” Science 380, no. 6641 (April 14, 2023): 136–38. https://doi.org/10.1126/science.adf6369.

  3. National Institute of Standards and Technology. “AI Risk Management Framework.” March 30, 2023. https://www.nist.gov/itl/ai-risk-management-framework.

  4. Lawfare. “The Developing Law of AI Regulation: A Turn to Risk Regulation,” April 21, 2023. https://www.lawfareblog.com/developing-law-ai-regulation-turn-risk-regulation.

  5. Desai, Sameeksha, Johan Eklund, and Emma Lappi. “Entry Regulation and Persistence of Profits in Incumbent Firms.” Review of Industrial Organization 57, no. 3 (November 1, 2020): 537–58. https://doi.org/10.1007/s11151-020-09787-7.