We’re developing generally capable agents that can safely and reliably act in the digital world.

We care about the practical engineering of large language models, reinforcement learning, and other deep neural networks, informed by theoretical understanding.

We open-source code and tools whenever possible so the community can benefit from and easily reproduce our research.

Read more about our approach.

Research Highlights

Avalon: A Benchmark for RL Generalization Using Procedurally Generated Worlds

Research · October 20, 2022

What Is Avalon? Avalon is a benchmark for generalization in RL. Agents in Avalon must accomplish a wide range of tasks, all with the same…


Update: The state of LLM ethical decision-making

Research · April 01, 2023

We're well on our way to having intelligent agents in our digital lives. In their present form, they're connected to large language models…


Our Research Interests

We're interested in creating intelligent and safe software agents

This goal spans a wide range of topics, including:

Large language models, reinforcement learning, deep learning theory, optimization, generalization, robustness and safety, transformers, language acquisition, continual learning, world models, self-supervised learning, and more.

Whenever possible, we strive to collaborate with the broader research community and open-source our work. However, for practical and safety reasons, we do not distribute all research results publicly. A selection of related projects we have published is shown below.