What's next in AI? In conversation with Kanjun Qiu and Reid Hoffman

July 6, 2023

Our CEO Kanjun Qiu recently sat down with Reid Hoffman, co-founder of LinkedIn and Inflection AI, at Figma’s Config 2023 conference for a wide-ranging discussion on the future of AI. Below are some key takeaways from their conversation. Watch the full interview here.



Unlocking reasoning

Kanjun describes generative AI as “Web 1.0” of the AI revolution—while AI systems today possess a powerful understanding of the world, their full potential isn’t yet realized. Kanjun believes the next phase is agentive AI, where trustworthy systems can take action in the world on behalf of humans. To get to this next phase, Kanjun and Reid believe AI agents need to learn how to reason.

Currently, LLMs can model data and make predictions, but they generally lack the capacity for internal reflection. For example, Reid notes, you can tell an LLM its response is incorrect, but it will often double down on its wrong answer rather than analyze the flaws in its approach (e.g., by asking clarifying questions, playing out a scenario, or finding a better way to accomplish the goal). As such, Kanjun and Reid agree that AI in its current form can be powerfully assistive, but that humans must remain in charge of tasks that require reasoning skills.

“Treating AI as a great tool and knowing it will be crazy wrong sometimes is okay.” – Reid Hoffman

AI as ‘amplification intelligence’

Reid likens AI to a “steam engine of the mind.” Whereas the industrial revolution gave humans physical superpowers (e.g., construction, transportation), AI gives humans cognitive superpowers by amplifying everything we do. Rather than replace humans, Reid believes AI will make humans more efficient, effective, and thoughtful by transforming processes that were once a function of computer science into a natural science.

“AI is bigger than the internet, mobile, or cloud because it’s an amplifier. It’s the sum of all of them and it’s what happens when we can bring amplification intelligence to everything we do.” – Reid Hoffman

Rethinking AI interface design

Humans maintain a healthy skepticism of AI agents. To cultivate trust and understanding, new interface design primitives will need to be developed that allow an agent to communicate its progress, show its work, edit its plans, and provide assurances that it understands the user's preferences.

A good example of agent interface design is Inflection's agent Pi, which Reid says uses an "emotive" cursor to convey EQ alongside IQ. Having the cursor "feel" alive fosters a more human interaction, keeping the user from, say, getting impatient while waiting for the agent to respond.

“With AI agents comes new ways to think about [the] interface. How do you know what the computer is doing in the background for you? How can you trust what it’s doing? How do I inspect what’s going on? These are all open questions.” – Kanjun Qiu

But more work is needed. Kanjun believes unlocking the full potential of agentive AI may require rethinking computing paradigms entirely—similar to how the personal computer took off only after we moved away from command lines and invented the graphical user interface. Agentive AI can allow for a new type of computer, designed around its ability to reason and act, that’s much more useful and adaptable.

“When computers can program themselves, then that suddenly, totally changes what a computer is. The AI revolution actually is a transformation of the computer such that we, all of us, have the ability to make computers code, and not just software engineers. They become sculptable, moldable, more like clay.” – Kanjun Qiu

Making AI safer

Kanjun and Reid agreed that guardrails must be instituted to promote AI's amplifying benefits while deterring bad actors. For example, AI can offer medical advice to people who don't have access to a doctor, but what kinds of advice should we allow AI to provide, and what kinds should be reserved for a real-life medical professional?

To make AI tools safe and accessible, Kanjun says we must listen to critical dialogue and treat AI safety as an engineering and design discipline. She likens AI to the automobile: neither is safe by default. However, as a society we’ve taken the time to educate drivers, improve roads, and institute traffic laws in order to make driving safe and accessible. A similar approach is needed for AI.

“There’s no silver bullet. It’s hard work thinking about the high leverage points we can implement to make AI safer.” – Kanjun Qiu

To watch the full session, click here. Interested in building human-like intelligence? We’re hiring.