Generally Intelligent is an independent research company developing AI agents with general intelligence that can be safely deployed in the real world. Our work combines theoretical understanding of deep neural networks with pragmatic engineering, a combination we believe is critical for responsibly building safe AI systems that embody human values.

We believe general-purpose AI systems have the potential to one day unlock extraordinary human creativity and insight. Such systems could empower humans across a wide range of fields, from scientific discovery and materials design, to personal assistants and tutors for every child, to countless other applications we can't yet fathom. As with other foundational technologies, such as electricity or the personal computer, it is hard to imagine today the full transformative impact they will have on our societies and our lives.

Our ultimate aim is to deploy aligned human-level AI systems that can generalize to a wide range of economically useful tasks and assist with scientific research. Until we have such systems, our current focus is on researching and engineering core capabilities, and on developing appropriate frameworks for their governance.

Our Approach

Intelligence is the ability to achieve goals in a wide range of environments

Given this definition, we combine large language models, reinforcement learning, and other deep learning techniques to develop general agents that can accomplish tasks in a wide variety of digital contexts — like the desktop, browser, or editor. To that end, we simultaneously develop our agents' capabilities while interrogating their failures and safety properties, conduct research into the theoretical foundations of deep learning, and study the new interactions and computing interfaces these agents unlock.

We leverage large-scale compute to train our agents, though in a slightly different way from other organizations: we train both large models and many different smaller agent architectures, so that we can explore the space of possibilities and better understand each component.

We believe this fundamental understanding is essential for engineering safe and robust systems. Just as it is difficult to build safe bridges or chemical processes without understanding the underlying theory and components, we think it will be difficult to build safe and capable AI systems without such understanding.

Selected Investors & Advisors

Tim Hanson

Cofounder of Neuralink

Tom Brown

Lead author on GPT-3, Cofounder of Anthropic

Celeste Kidd

Professor of Psychology at UC Berkeley

Jed McCaleb

Founder of the Astera Institute

Jonas Schneider

Former robotics lead at OpenAI, CEO of Daedalus

Drew Houston

CEO of Dropbox

Michael Nielsen

Author of Neural Networks and Deep Learning

Team

Kanjun Qiu

CEO

San Francisco

Josh Albrecht

CTO

San Francisco

Nicole Seo

Head of Talent

San Francisco

Jamie Simon

Research Fellow

San Francisco

Abe Fetterman

Member of Technical Staff

San Francisco

Ellie Kitanidis

Member of Technical Staff

San Francisco

Bryden Fogelman

Member of Technical Staff

San Francisco

Bartosz Wróblewski

Member of Technical Staff

San Francisco

Bas van Opheusden

Member of Technical Staff

San Francisco

Brandi Hagle

Office of the CEO & CTO

Michael Rosenthal

Machine Learning Engineer

Maksis Knutins

Machine Learning Engineer

Zack Polizzi

Machine Learning Engineer

New York