I am a final-year PhD student at ETH Zurich advised by Prof. Niao He. My research focuses on developing theoretically grounded algorithms for machine learning and optimization, with particular emphasis on large-scale optimization, reinforcement learning, and their theoretical foundations. Previously, I had the honor of working with Prof. Boris Polyak on problems in control theory and with Prof. Peter Richtárik on federated learning, focusing on communication-efficient distributed training.
My research contributions have appeared in leading venues across machine learning and optimization:
• Conferences: NeurIPS, ICML, AISTATS.
• Journals: SIAM Journal on Control and Optimization, Journal of Machine Learning Research.
I am currently supported by the ETH AI Center Doctoral Fellowship. During my master's studies in Germany, I was awarded a DAAD scholarship.
Research Interests
My research centers on developing theoretical foundations and practical algorithms for machine learning and optimization. My main research areas are:
Large-Scale Optimization
Designing scalable optimization algorithms for efficient training of modern machine learning models.
My research approach combines theoretical rigor with practical efficiency: each project aims to develop algorithms that come with provable guarantees while remaining efficient in practice, with a particular focus on understanding their fundamental properties.