natalieabreu [at] g.harvard.edu

Natalie Abreu

Hi, I'm a third-year PhD student at Harvard University advised by Boaz Barak and Sham Kakade. I am broadly interested in the foundations of deep learning, with a focus on large language models (LLMs). Recently, I have been particularly interested in optimization methods for LLMs. I'm grateful to be supported by a Kempner Institute Graduate Fellowship.

Previously, I attended the University of Southern California, where I completed a BS in Computer Science with a minor in Mathematics. During that time, I interned at Google and at MIT Lincoln Laboratory. In the spring of 2025, I interned at Microsoft Research (MSR) with Kwangjun Ahn.

Publications

A Taxonomy of Transcendence
Natalie Abreu, Edwin Zhang, Eran Malach, Naomi Saphra
COLM 2025 | arXiv
Dion: Distributed Orthonormalized Updates
Kwangjun Ahn, Byron Xu, Natalie Abreu, John Langford
Preprint | arXiv
Addressing Discrepancies in Semantic and Visual Alignment in Neural Networks
Natalie Abreu, Nathan Vaska, Victoria Helus
ICML 2023 Workshop on Data-centric Machine Learning Research | arXiv
Addressing Mistake Severity in Neural Networks with Semantic Knowledge
Natalie Abreu, Nathan Vaska, Victoria Helus
NeurIPS 2022 Workshop on Progress and Challenges in Building Trustworthy Embodied AI | arXiv

Teaching

Topics in Foundations of ML: AI Alignment and Safety
CS 2881 @ Harvard, Fall 2025 · Teaching Fellow
Introduction to Algorithms and Their Limitations
CS 1200 @ Harvard, Fall 2024 · Teaching Fellow
Theory of Computation
CSCI 475 @ USC, Spring 2023 · Course Producer
Introduction to Algorithms
CSCI 270 @ USC, Spring 2022 - Spring 2023 · Course Producer
Fundamentals of Computation
CSCI 102 @ USC, Spring 2021 · Course Producer