Hi! I am a Ph.D. student at Stanford University studying the principles and mechanisms that underlie learning in neural networks. My research focuses on:

  • How does data affect what information and computational abilities networks learn?
  • How can we engineer the data distribution to train networks to be more reliable, adaptable, and controllable?

Recently, I have worked on using properties of the data distribution and loss landscape to make training more efficient, both by requiring less data and by training sparser networks. I am currently excited about investigating how the data distribution influences in-context learning, memorization, federated learning, and continual learning.

I am advised by Surya Ganguli and am an intern at Meta AI. Previously, I was at the RegLab at Stanford University, where I worked with partners at the Internal Revenue Service and the Department of Labor on using machine learning to make government administrative processes more efficient. Before graduate school, I completed my undergraduate degree at Brown University, where I studied random matrix theory and its application to black hole physics.

I have also volunteered for SF New Deal, where I helped research and draft their economic impact report. Whenever I get the chance, I like to go out social dancing, and I am part of Cardinal West Coast Swing and the Stanford Tango Club.