I design diffusion language models, a new paradigm for parallel text generation.
Unlike traditional autoregressive language models, which generate one token at a time, diffusion language models generate many tokens in parallel. My research explores novel diffusion language models and architectures to improve their quality, generation speed, and training efficiency. The sketch below illustrates the contrast.
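To make the contrast concrete, here is a toy sketch (not code from any of my papers): an autoregressive decoder needs one model call per token, while a masked-diffusion-style decoder commits several positions per call. The `dummy_predict` stand-in is hypothetical and just samples random tokens; a real diffusion LM would condition on the partially masked sequence.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat"]
MASK = "<mask>"

def dummy_predict(seq):
    # Hypothetical stand-in for a trained model: propose a token for
    # every masked position at once. A real diffusion LM would condition
    # on the full, partially masked sequence.
    return [random.choice(VOCAB) if tok == MASK else tok for tok in seq]

def autoregressive_decode(length):
    # Baseline: one model call per token, strictly left to right.
    seq = []
    for _ in range(length):  # `length` sequential model calls
        seq.append(random.choice(VOCAB))
    return seq

def diffusion_decode(length, steps=3):
    # Toy masked-diffusion decoding: start fully masked, then at each
    # step commit a batch of positions in parallel.
    seq = [MASK] * length
    order = list(range(length))
    random.shuffle(order)
    per_step = -(-length // steps)  # ceil division: positions unmasked per step
    for step in range(steps):       # only `steps` model calls, with steps << length
        proposal = dummy_predict(seq)
        for i in order[step * per_step:(step + 1) * per_step]:
            seq[i] = proposal[i]
    return seq

print("AR  :", autoregressive_decode(6))   # 6 model calls
print("Diff:", diffusion_decode(6, 3))     # 3 model calls
```

The speedup comes from trading sequential model calls for parallel ones; the research questions are how to keep quality high while taking few denoising steps.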
I previously interned at Runway AI, where I worked with Anastasis Germanidis. I completed my B.S. in Computer Science at UC Santa Barbara's College of Creative Studies. During my undergrad, I did small projects in geometric deep learning at MIT CSAIL with Prof. Justin Solomon and in drug design using graph neural networks with Nobel Laureate Frances Arnold.
Selected Works
Arriola, M., Gokaslan, A., Chiu, J. T., Yang, Z., Qi, Z., Han, J., Sahoo, S. S., Kuleshov, V. Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models. ICLR 2025 (Oral, Top 1.77%). [Paper] [Blog] [Code]

Arriola, M., Venkat, N., Granskog, J., Germanidis, A. Adapting Autoregressive Vision Language Models for Parallel Diffusion Decoding. Runway Research Blog. [Blog]

Arriola, M.*, Schiff, Y.*, Phung, H., Gokaslan, A., Kuleshov, V. Encoder-Decoder Block Diffusion Language Models for Efficient Training and Inference. NeurIPS 2025. [Paper: WIP] [Blog: WIP] [Code: WIP]
News
- Oct-25: Encoder-decoder diffusion LMs paper accepted to NeurIPS 2025. I was also recognized as a Top Reviewer!
- Jun-25: Started a summer internship at Runway in NYC!
- Apr-25: Presenting Block Diffusion as an oral in the ICLR 2025 main track.
- Apr-25: Invited talk at Amazon AGI on Block Diffusion.