Sugandha Sharma, MIT: On biologically inspired neural architectures, how memories can be implemented, and control theory

January 11, 2023

RSS · Spotify · Apple Podcasts · Pocket Casts

Sugandha Sharma is a Ph.D. candidate at MIT advised by Prof. Ila Fiete and Prof. Josh Tenenbaum. She explores the computational and theoretical principles underlying higher cognition in the brain by constructing neuro-inspired models and mathematical tools, asking how the brain navigates the world and how memory mechanisms can be built that avoid catastrophic forgetting. In this episode, we chat about biologically inspired neural architectures, how memory could be implemented, why control theory is underrated, and much more.

Below are some highlights from our conversation as well as links to the papers, people, and groups referenced in the episode.

Some highlights from our conversation

"Neuroscience is such an open field still, it's so nascent, so in its early stages, that there are so many important questions. My perspective is that we really need to sync theory and experiment; whatever questions we address, we really need to think about how we can create this loop between theory and experiments so that we can test the predictions made by our models and then come back and correct our models based on the experiments and findings."

"If you're thinking about spatial navigation, there's also this question of generalization of spatial maps across so many different environments that we encounter in our daily lives. I lived in India and then I lived in Canada and now I'm in the U.S. And I still can go back to India and I'll remember the map of the house that I was living in. [...] That was another question that I was fascinated by...this high level question of how we are so good at generalizing across spaces."

"The basic idea is that if you separate the memory from features, the memory part of it doesn’t need to have an explicit, arbitrary information content. You could use a pre-defined set of states as attractors that form fixed points of dynamical systems. Because these are pre-chosen, they don’t really have any information content in them, so the upper bounds in terms of information theory stop applying and you can store an exponential number of predefined fixed states in this."
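The idea in this quote — pre-chosen states acting as fixed-point attractors, decoupled from arbitrary content — can be illustrated with a toy sketch. This is not the actual model from Sharma's work; the pattern set, network size, and update rule here are illustrative assumptions. Using mutually orthogonal ±1 codewords as the predefined attractors, a single sign update provably snaps a one-bit-corrupted state back onto its codeword:

```python
import numpy as np

# Build a 16x16 Hadamard matrix (Sylvester construction): its rows are
# mutually orthogonal +/-1 vectors, used here as PREDEFINED attractor states.
H = np.array([[1]])
for _ in range(4):
    H = np.block([[H, H], [H, -H]])

patterns = H[:4]           # P = 4 predefined states over N = 16 neurons
W = patterns.T @ patterns  # outer-product (Hopfield-style) weights

# Corrupt one predefined state by flipping a single bit...
x = patterns[0].copy()
x[3] *= -1

# ...and run one synchronous update x <- sign(W x). Because the patterns
# are exactly orthogonal, each is an exact fixed point of the dynamics,
# and the signal term (N - 2) dominates the crosstalk (at most 2(P - 1)),
# so a 1-bit-corrupted state is restored in a single step.
x = np.sign(W @ x)

print(np.array_equal(x, patterns[0]))  # True: the attractor is recovered
```

With only four stored states this is far from the exponential regime the quote describes, but it shows the mechanism: because the attractors are pre-chosen and carry no arbitrary information themselves, content-based capacity bounds apply to the separate feature-binding stage rather than to the attractor dynamics.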

"I think the field has been biased towards experiments maybe also because of the kind of awards you get...if you find grid cells, you'll get a Nobel prize, which is cool. But also I'm not sure whether this is the only factor that's biasing, but traditionally the field has been biased towards experiments a lot, and I feel like definitely we need a lot of theorists to propose experiments so that they can be validated. Because without good theoretical ideas, I don’t think we can make sense of the data we are seeing."

Referenced in this podcast

Thanks to Tessa Hall for editing the podcast.