I am a final-year PhD student at the University of California, Berkeley. I am broadly interested in theoretical machine learning. My current focus is optimization, approached mainly from the perspective of continuous time. Active projects include using ideas from numerical integration to develop computationally efficient adaptive step size schemes for optimization, and examining the effect of overparameterization on the dynamics of stochastic gradient descent in matrix factorization problems. My other interests include the statistical mechanics of small systems out of equilibrium, and the three-way interface of computer science, statistics, and statistical mechanics. Most of my work involves searching for structure in optimization problems and exploiting that structure to develop solutions.

My advisors are Michael I. Jordan and Michael R. DeWeese. I am affiliated with the Statistical AI Learning group, the Berkeley AI Research group, and the Redwood Center for Theoretical Neuroscience.

In the summer of 2019, I interned at Google Brain, where I was hosted by Jascha Sohl-Dickstein. During the academic years 2018–21, my work was supported by a Google PhD Fellowship.

Before I came to Berkeley, I was a Junior Research Fellow at the National Centre for Biological Sciences in Bangalore, India. Before that, I completed a Master's degree in theoretical physics at the Perimeter Institute for Theoretical Physics in Waterloo, Canada. I was an undergraduate at Amherst College, where I received a degree in physics.

You can find me at neha *dot* wadia *at* berkeley *dot* edu.

Here is my Google Scholar page.