About
Hi, I'm Dylan! I'm a safety researcher at OpenAI, where I'm working on making language models safer.
I'm interested in curating better/safer training data and monitoring models for harmful behavior.
I'm also a final year PhD student in the Machine Learning Department at CMU, where I am fortunate to be advised by Zico Kolter.
Previously, I studied math and computer science at Brown University, where I was advised by Stephen Bach.
I've also worked as a student researcher at Google Research, and as a research intern at GraySwan AI, Amazon AWS, the Bosch Center for AI, and NASA JPL.
If you are interested in my work, feel free to get in touch. I am always happy to chat about research!
News
- [Dec 2025]: I'm traveling to NeurIPS 2025 to present work on Safety Pretraining, black-box monitoring of model behavior, and measuring diversity in data curation!
- [Oct 2025]: I've joined OpenAI to make language models safer!
- [May 2024]: Spending the summer at Google Research in NYC, where I'll be working on data curation for LLM pretraining.
- [Apr 2024]: I'll be traveling to AISTATS to present "Auditing Fairness under Unobserved Confounding" and to ICLR to present work on generalization bounds for prompt engineering with VLMs.
- [Jul 2023]: I'm giving a talk on learning data-driven priors for BNNs that incorporate interpretable domain knowledge at the KLR workshop @ ICML 2023!