Imbue raises $200M to build AI systems that can reason and code

September 7, 2023

We’re excited to announce our latest funding round, a $200M Series B at a valuation of over $1 billion, with participation from Astera Institute, NVIDIA, Cruise CEO Kyle Vogt, Notion co-founder Simon Last, and others. This funding will accelerate our development of AI systems that can reason and code, so they can help us accomplish larger goals in the world.

As part of this, we’re renaming from Generally Intelligent to Imbue to better reflect our focus on imbuing computers with intelligence and human values. We aim to rekindle the dream of the personal computer—for computers to be truly intelligent tools that empower us, giving us freedom, dignity, and agency to do the things we love.

Our goal remains the same: to build practical AI agents that can accomplish larger goals and safely work for us in the real world. To do this, we train foundation models optimized for reasoning. Today, we apply our models to develop agents that we find useful internally, starting with agents that code. Ultimately, we hope to release systems that enable anyone to build robust, custom AI agents that put the productive power of AI at everyone’s fingertips.

Training foundation models for reasoning

Current AI systems are very limited in their ability to complete even simple tasks on their users’ behalf. While we expect capabilities in this area to advance rapidly over the next few years, much remains to be done before AI agents are capable of achieving larger goals in a way that is truly robust, safe, and useful.

We believe reasoning is the primary blocker to effective AI agents. Robust reasoning is necessary for effective action: it involves the ability to deal with uncertainty, to know when to change approach, to ask questions and gather new information, to play out scenarios and make decisions, to make and discard hypotheses, and generally to cope with the complicated, hard-to-predict nature of the real world.

At Imbue, we create foundation models that are tailor-made for reasoning. This means taking advantage of the powerful capabilities afforded by very large language models, while understanding in a detailed, practical way how those models are trained and where they fail. It means creating pre-training data specifically designed to reinforce good reasoning patterns, and developing techniques that spend far more compute at inference time to arrive at robust conclusions and actions.
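To make “spending more compute at inference time” concrete, here is a minimal sketch of one common technique in this family: sampling several independent reasoning chains and majority-voting on the final answer (often called best-of-n or self-consistency). The `generate` callback is a hypothetical stand-in for a model call, not a description of our actual systems.

    from collections import Counter

    def answer_with_more_compute(generate, question, n_samples=16):
        """Sample several independent reasoning chains, then majority-vote
        on the final answer, trading extra inference compute for robustness.

        `generate(question, temperature)` is a hypothetical stand-in for
        any LLM call that returns (chain_of_thought, final_answer).
        """
        answers = []
        for _ in range(n_samples):
            _chain, answer = generate(question, temperature=0.8)
            answers.append(answer)
        # The most frequent answer across sampled chains tends to be more
        # reliable than any single greedy completion.
        best, count = Counter(answers).most_common(1)[0]
        return best, count / n_samples

The returned vote fraction doubles as a rough confidence signal: when the chains disagree widely, an agent can choose to gather more information instead of acting.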

In order to create reasoning models that provide a robust foundation for AI agents, we take a “full stack” approach: training foundation models, prototyping experimental agents and interfaces, building robust tools and infrastructure, and understanding the theoretical underpinnings of how models learn.

  • Models. We pretrain our own very large (>100B parameter) models, optimized to perform well on internal reasoning benchmarks. Our latest funding round lets us operate at a scale few other companies can: our cluster of ~10,000 H100 GPUs allows us to iterate rapidly on everything from training data to architecture and reasoning mechanisms.
  • Agents. On top of our models, we prototype agents that we use ourselves in serious contexts. Right now, we’re primarily working on agents that can code, but we experiment with all sorts of agents to push our models in a useful direction, with the goal of reaching robust, reliable, general-purpose agents that we use every day.
  • Interfaces. Today’s AI chat interfaces are skeuomorphic, much as the first word processors mimicked the ruled lines of a notepad. We think many core problems around agent robustness, trust, and collaboration can be solved through interface invention. Moreover, AI agents that can reason about the world offer an opportunity to rethink the very way we interact with computers, and to create systems that far more authentically support and enable people.
  • Tools. Great tools speed up our iteration loop, so we invest heavily in building our own: simple agent prototypes for fixing type-checking and linting errors, debugging and visualization interfaces on top of our agents and our models, and more sophisticated systems like CARBS, which automates much of our hyperparameter tuning and network architecture search (a toy sketch of this kind of search follows this list).
  • Theory. We believe we must develop a theory of deep learning in order to create models that provide a robust foundation for agents and remain safe in the long run. Our researchers have published on the theoretical underpinnings of self-supervised learning, as well as on fundamental laws that govern how systems like neural networks learn. Our current research focuses on feature learning and on understanding the core mechanisms behind the learning process in large language models.
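To give a flavor of what a system like CARBS automates, here is a deliberately toy, budget-aware random search over a hypothetical hyperparameter space. This is an illustrative sketch, not the actual CARBS API; CARBS itself is considerably more sophisticated, modeling cost and performance together rather than sampling blindly.

    import math
    import random

    def sample_config():
        # Hypothetical two-dimensional search space: log-uniform
        # learning rate and hidden width.
        return {
            "learning_rate": 10 ** random.uniform(-5, -2),
            "hidden_dim": int(2 ** random.uniform(8, 12)),
        }

    def tune(train_and_eval, budget_gpu_hours):
        """Keep sampling configs until the compute budget is spent,
        tracking the best (validation_loss, config) pair seen so far.

        `train_and_eval(config)` is a hypothetical callback that returns
        (validation_loss, gpu_hours_spent) for one training run.
        """
        best_loss, best_config = math.inf, None
        spent = 0.0
        while spent < budget_gpu_hours:
            config = sample_config()
            loss, cost = train_and_eval(config)
            spent += cost
            if loss < best_loss:
                best_loss, best_config = loss, config
        return best_loss, best_config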

Crucially, this full-stack approach unlocks feedback loops that speed up our work. Designing working agents and tools helps us iterate on better models much more quickly, in turn unlocking even more useful agents that can help create even better models. Working on theory helps us understand why neural network architectures learn the way they do, which shapes how we engineer them and, in turn, advances our understanding of theory and safety.

Developing AI agents that can reason and code

While we ultimately want to develop tools to allow anyone to create their own AI agents, our initial focus for these reasoning models is on agents that we use ourselves. This lets us understand how to keep improving the reasoning models specifically for agents, and build tooling to make agents reliable.

Given that coding is the core of our work, many (though not all) of our first prototype agents are designed to help with software engineering in our codebase. There are a few reasons for this initial focus:

  • We think serious use is necessary for invention. Tools and interfaces need to be invented for AI agents to work robustly alongside us. We believe the best way to invent these is as a byproduct of trying (and often failing) to make useful agents for our own daily work, and solving the problems we encounter along the way. We value the practical work of continually refining the tools we use in production every day over making demos, and we believe it will result in much better products.
  • We think code improves reasoning. Training on code helps models learn to reason better; training without code seems to produce models that reason poorly. This is surprising, and may be because code is one of the few examples of explicit reasoning on the Internet. Moreover, because programming problems are so objective—the code either passes the tests or it doesn’t—they form a nearly ideal test-bed for more general reasoning abilities, letting us tell whether we are making meaningful improvements to our underlying systems.
  • We think code is important for action. Generating code is an effective way for agents to take actions on a computer, and stronger coding ability translates directly into agents that are more likely to succeed at complicated tasks. For example, an agent that writes a SQL query to pull information out of a table is much more likely to satisfy a user request than an agent that tries to assemble the same information without using any code (a minimal sketch of this pattern follows this list).
  • We think coding agents are strategically important. As agents improve and take over more of our work, our research and engineering velocity increases. Over time, this helps us not only build software systems, but also prototype what an organization can look like when enabled by truly working AI agents.
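As a concrete illustration of the SQL example above, here is a minimal sketch of code-as-action: an agent proposes a query, a simple guardrail checks it, and the query is executed against a local SQLite database. The `ask_model` callback is a hypothetical stand-in for a model call, not part of our systems.

    import sqlite3

    def run_agent_query(db_path, request, ask_model):
        """Execute a model-written SQL query and return the result rows.

        `ask_model(prompt)` is a hypothetical stand-in that returns a
        single read-only SELECT statement for the user's request.
        """
        query = ask_model(f"Write one SQLite SELECT statement for: {request}")
        # Guardrail: refuse anything that isn't a plain read.
        if not query.lstrip().lower().startswith("select"):
            raise ValueError(f"Refusing non-SELECT query: {query!r}")
        with sqlite3.connect(db_path) as conn:
            return conn.execute(query).fetchall()

Because the query either returns the requested rows or fails, the same objectivity that makes code a good reasoning test-bed also makes it a checkable medium for agent actions.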

We don’t currently intend to productionize our coding agents—we see them as a way to improve agents more generally. Over time, we hope to expose these tools and models so that anyone can create their own AI agents that truly work for them.

Our north star: truly personal computers that give us freedom, dignity, and agency to do what we love

When we build AI agents, we’re actually building computers that can understand our goals, communicate proactively, and work for us in the background. Today, we’re glued to our screens because our computers need to be micromanaged—if we’re not in front of the screen, very little gets done. Truly useful AI agents will fundamentally change this, and free us up to focus on the things we really care about.

If we build this technology thoughtfully, we can live in a world where we don’t have to be glued to our screens anymore, where computers can help us thin the barrier between idea and execution. We will be free to explore our curiosity, discover the laws of the universe, create artistic masterpieces, understand each other more deeply, or just take the time to enjoy life.

We pay close attention to the safety risks of more capable AI systems. We believe safety is largely a result of high-quality engineering, understanding, and regulation, so we work on all three areas:

  • We engineer agents to reason in natural language and to be conditioned entirely by their end users’ goals (see our approach to safety).
  • We pursue fundamental laws of deep learning to help improve our understanding of today’s most important AI systems.
  • We develop tools for policymakers to make sense of the vast number of regulatory proposals and translate them into policy that protects people.

The world we hope to enable—where people have freedom, dignity, and agency—is also something we try to embody in our culture. Whether it’s something obvious like direct coaching and mentorship, or one of the many cultural habits we’ve built up over time, we’re constantly searching for new ways to empower every member of our team.

If you’re excited to design AI agents that can reason and code, and to create a world where our computers, software, and AI systems truly serve human interests, we’re excited to work with you!

Check out our open positions on our careers page! For press-related inquiries, reach out to press@imbue.com.