PhD Student at ETH AI Center and Computer Science Department of ETH Zurich, Switzerland
I am a final-year PhD student at ETH Zurich, advised by Prof. Niao He. My research focuses on developing theoretically grounded algorithms for machine learning and optimization, with an emphasis on data efficiency, scalability, and safety. Previously, I had the honor of working with Prof. Boris Polyak on control theory problems and with Prof. Peter Richtárik on federated learning, focusing on communication-efficient distributed training.
My research contributions have appeared in leading machine learning venues, including NeurIPS, ICML, AISTATS, and the Journal of Machine Learning Research, as well as the SIAM Journal on Optimization and the SIAM Journal on Control and Optimization.
I am currently supported by the ETH AI Center Doctoral Fellowship and previously received a DAAD Scholarship for my Master's studies in Germany.
My work centers on three interconnected pillars that address fundamental challenges in modern machine learning:
🔬 Non-convex Optimization: I develop rigorous mathematical frameworks to understand complex optimization landscapes, including hidden convexity structures that enable global solutions to seemingly intractable non-convex problems.
⚡ Data Efficiency and Robustness: I design robust algorithms that maintain performance under challenging statistical conditions, such as heavy-tailed noise and limited data, with particular relevance to policy gradient methods in reinforcement learning.
🚀 Scalable Systems: I create communication-efficient distributed training algorithms that enable large-scale machine learning while preserving theoretical guarantees, including the popular EF21 algorithm.
March 2026 – Attending the ELLIIT symposium and focus period on Optimization for Learning in Lund, Sweden (Invited Visiting Scholar).