I design diffusion language models, an emerging approach to text generation.
Unlike traditional autoregressive language models that generate one token at a time, diffusion models generate tokens in parallel. My research explores novel diffusion language model formulations and architectures to improve their quality, generation speed, and training efficiency.
I am a second-year PhD student at Cornell Tech. Previously, I completed my B.S. in Computer Science at UC Santa Barbara's College of Creative Studies.
During my undergrad, I did small projects in geometric deep learning at MIT with Prof. Justin Solomon and in drug design using graph neural networks with Nobel Laureate Frances Arnold.
Selected Papers
News
- May‑30‑25: Invited talk at the ASAP seminar on Block Diffusion.
- Apr‑24‑25: Presenting Block Diffusion as an oral in the ICLR 2025 main track.
- Apr‑16‑25: Invited talk at Amazon AGI on Block Diffusion.