I am a PhD candidate in Computer Science in the Stanford NLP group. My research focuses on illuminating and addressing the societal risks of conversational AI systems, such as sycophancy and anthropomorphism. I am advised by Dan Jurafsky and supported by the Knight-Hennessy Scholarship and the NSF Graduate Research Fellowship.

Previously, I completed my undergraduate degree at Caltech, where I double-majored in computer science and history. I have also spent time at Microsoft Research (on the FATE team with Alexandra Olteanu and Su Lin Blodgett, and with Adam Kalai) and DeepMind.

Contact

My email is myra [at] cs [dot] stanford [dot] edu.

Updates

March 2026: Our work on AI sycophancy is the Science cover story!

June 2025: Our paper on how computer vision powers surveillance is out in Nature!

May 2025: Our work on social sycophancy is featured in MIT Technology Review!

May 2025: Two papers on measuring and mitigating anthropomorphic LLM outputs accepted to ACL 2025.

April 2025: Our paper on using metaphors to understand public perceptions of AI accepted to FAccT 2025.

October 2024: Attending AIES.

October 2024: New paper: “I Am the One and Only, Your Cyber BFF”: Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI.