Hi! I’m a PhD student at Carnegie Mellon University, fortunate to be advised by Prof. Shinji Watanabe. My research focuses on self-supervised learning for audio and multimodal alignment. I’m especially interested in using sound to provide richer context for speech language models.

Previously, I was a pre-doctoral researcher at Google DeepMind, where I worked on low-resource language capabilities for Gemini with Dr. Partha Talukdar, Dr. Sriram Ganapathy, and Dr. Shikhar Vashishth.

For more information, please refer to my CV and publications.

Outside of research, I find joy in playing snooker and freestyle football (soccer). I’m always open to collaborations, so feel free to reach out!