Martín Arjovsky, INRIA: On benchmarks for robustness and geometric information theory

October 14, 2021

RSS · Spotify · Apple Podcasts · Pocket Casts

Martín Arjovsky (Google Scholar) did his Ph.D. at NYU with Léon Bottou. His well-known works include the Wasserstein GAN and Invariant Risk Minimization, a paradigm for out-of-distribution generalization. In this episode, we discuss out-of-distribution generalization, geometric information theory, and the importance of good benchmarks.

Some highlights from our conversation

“Now everyone is starting to solve robustness without having a benchmark that shows that robustness is a problem… There are many, many anecdotal reports of these problems - on deployed systems, things that really affect people daily! […] I would just like to have a benchmark, something I can test algorithms on.”

“It’s this very counter-intuitive problem where throwing away data points is a form of regularization. […] You throw away things you already know.”
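A toy sketch of that idea (mine, not code from the episode; the feature names `core` and `spurious` are hypothetical): when most data points repeat a pattern the model already captures, discarding some of them weakens the spurious correlation and pushes the fit toward the more informative minority, much like a regularizer would.

```python
# Toy illustration: throwing away redundant, majority-pattern points
# can act like regularization against a spurious feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
y = rng.integers(0, 2, n)                       # binary label
core = y + rng.normal(0, 1.0, n)                # weakly predictive, invariant feature
# Spurious feature agrees with the label 90% of the time, with little noise.
spurious = np.where(rng.random(n) < 0.9, y, 1 - y) + rng.normal(0, 0.1, n)
X = np.column_stack([core, spurious])

full = LogisticRegression().fit(X, y)

# Drop most points where the spurious feature "agrees" with the label --
# the pattern we already know -- keeping the informative minority intact.
agree = spurious.round() == y
keep = ~agree | (rng.random(n) < 0.2)
sub = LogisticRegression().fit(X[keep], y[keep])

print("weights on [core, spurious], full data: ", full.coef_.round(2))
print("weights on [core, spurious], subsampled:", sub.coef_.round(2))
```

On the full data the weight on `spurious` dominates; after subsampling the majority pattern, the weight shifts toward `core`.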

“Information theory for machine learning is mostly useless in its current form. Let me elaborate on this. I do think that it can be fixed, though, and I would be totally fine spending five years of my life doing that. […] All of these information-theoretic tools that they use, like entropy, really make sense only in discrete spaces. Continuous entropy does not make sense in most cases.”
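A standard calculation illustrates the point about continuous entropy (this example is mine, not from the episode): the differential entropy of a uniform distribution can be negative, and it shifts under a simple change of units, neither of which can happen with discrete entropy.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Differential entropy of X ~ Uniform[0, a]:
\[
  h(X) = -\int_0^a \tfrac{1}{a}\log\tfrac{1}{a}\,dx = \log a ,
\]
which is negative whenever $a < 1$. It is also not invariant under
reparameterization: for $Y = cX$ with $c \neq 0$,
\[
  h(Y) = h(X) + \log\lvert c\rvert ,
\]
so merely rescaling the variable changes its entropy, whereas relabeling
the outcomes of a discrete variable leaves discrete entropy unchanged.
\end{document}
```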

“I’m tired of seeing papers that have an introduction about A, an experiment about B, and a theory about C.”


Thanks to Tessa Hall for editing the podcast.