Ben Eysenbach, CMU: On designing simpler and more principled RL algorithms

March 22, 2023

RSS · Spotify · Apple Podcasts · Pocket Casts

Ben Eysenbach is a Ph.D. student at CMU and a student researcher at Google Brain. He is co-advised by Sergey Levine and Ruslan Salakhutdinov, and his research focuses on developing RL algorithms that achieve state-of-the-art performance while being simpler, more scalable, and more robust. Recent problems he's tackled include long-horizon reasoning, exploration, and representation learning. In this episode, we discuss designing more principled RL algorithms and much more.

Below are some highlights from our conversation as well as links to the papers, people, and groups referenced in the episode.

Some highlights from our conversation

“If we take all the states we’ve seen so far and look at their representations, let’s imagine that those representations have a length of one so we can think about them as points on a sphere. Then after we put each of these points on the sphere, we can turn the sphere around and say, okay, where are most of the points and where are we missing points? And say, well, you’re missing points down near Antarctica or something. And then we can say, okay, let’s try to get down to Antarctica. And then, because we’re learning a goal-conditioned policy, we can say, okay, try to get here or try to get to a state that has this representation.”
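The sphere picture above can be made concrete with a few lines of code. Below is a minimal sketch (my own illustration, not an excerpt from Ben's work) of one way to find an under-visited region on the unit sphere of state representations and hand it to a goal-conditioned policy as the next goal; `encoder`, `policy`, and `replay_buffer` are hypothetical stand-ins.

```python
import numpy as np

def least_visited_direction(visited_reprs: np.ndarray, n_candidates: int = 1024) -> np.ndarray:
    """Sample random unit vectors and return the one whose nearest visited
    representation is farthest away (i.e. the emptiest part of the sphere)."""
    # Put the seen representations on the unit sphere.
    seen = visited_reprs / np.linalg.norm(visited_reprs, axis=1, keepdims=True)

    # Random candidate directions, also normalized.
    cands = np.random.randn(n_candidates, seen.shape[1])
    cands /= np.linalg.norm(cands, axis=1, keepdims=True)

    # Cosine similarity of each candidate to every seen point; a low maximum
    # similarity means no visited state lives near that part of the sphere.
    sims = cands @ seen.T                  # (n_candidates, n_seen)
    emptiness = -sims.max(axis=1)          # higher = more unexplored
    return cands[np.argmax(emptiness)]

# Hypothetical usage with a learned encoder and goal-conditioned policy:
# goal_repr = least_visited_direction(encoder(replay_buffer.states))
# action = policy(current_state, goal=goal_repr)
```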

“One thing that I’m really excited about is thinking about how we can leverage this idea of connecting contrastive learning to reinforcement learning to make use of advances in contrastive learning in other domains like NLP and computer vision. In NLP, we’ve seen really great uses of contrastive learning for things like CLIP, which can connect images with language. And in our contrastive project, we saw how we can connect the states and the actions to the future states. As you might imagine, maybe there’s a way of plugging these components together, and indeed, you can see that mathematically there is. And so one thing I’m really excited about exploring is, ‘can we use this to specify tasks?’ Not in terms of images of what you would want to happen, but rather language descriptions.”
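For readers wondering what "connecting the states and the actions to the future states" looks like in code, here is a rough, hedged sketch of a contrastive (InfoNCE-style) critic in the spirit of contrastive RL; the encoder networks and batch layout are my own simplification, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_rl_loss(sa_encoder, goal_encoder, states, actions, future_states):
    """InfoNCE-style loss: each (state, action) pair should score highest
    against the future state it actually led to."""
    sa_emb = sa_encoder(states, actions)      # (B, d) embeddings of (s, a)
    g_emb = goal_encoder(future_states)       # (B, d) embeddings of reached futures

    # Normalize so the critic is an inner product on the unit sphere.
    sa_emb = F.normalize(sa_emb, dim=-1)
    g_emb = F.normalize(g_emb, dim=-1)

    # Row i's positive is column i; other futures in the batch act as negatives.
    logits = sa_emb @ g_emb.T                 # (B, B)
    labels = torch.arange(len(states), device=logits.device)
    return F.cross_entropy(logits, labels)
```

The same recipe is what makes the CLIP connection plausible: swapping the goal encoder for a text encoder would, at least in principle, let a language description play the role of the future state.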

“I think one of the reasons why I’m particularly excited about these problems is that these language models are trained to maximize the likelihood of the next token, and that draws a really strong connection to this way of treating reinforcement learning problems as predicting probabilities and as maximizing probabilities. And so I think that these tools are actually much, much more similar than they might seem on the surface.”
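One hedged way to see the parallel Ben is drawing (notation mine, not from the episode): language modeling maximizes the log-likelihood of the next token, while goal-conditioned RL can be framed as maximizing the log-probability of reaching a desired future state.

```latex
% Language modeling: maximize the likelihood of the next token.
\max_\theta \; \mathbb{E}\!\left[\log p_\theta(x_{t+1} \mid x_{1:t})\right]

% Goal-conditioned RL viewed as probability maximization: choose a policy
% that makes a desired future state s_g as likely as possible.
\max_\pi \; \mathbb{E}\!\left[\log p^{\pi}\!\left(s_{t^{+}} = s_g \mid s_t, a_t\right)\right]
```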

“I don’t know how controversial it is, but I would like to see more effort on taking even existing methods and applying them to new tasks, to real problems. I think part of this will require a shift in how we evaluate papers: evaluating them not so much on algorithmic novelty but rather on ‘did you actually solve some interesting problem?’”

Referenced in this podcast

Thanks to Tessa Hall for editing the podcast.