Theoretical Foundations: Hidden Convexity and Global Optimization

The Challenge: Most practically successful machine learning models involve highly non-convex optimization problems, yet their landscapes remain poorly understood. Traditional optimization theory often fails to explain why gradient methods work so well in practice.

My Approach: I develop the theory of "hidden convexity" — the idea that many seemingly non-convex problems actually admit equivalent convex structures, even if these structures are not directly computable. This framework provides rigorous explanations for the surprising success of optimization methods and enables the design of global solution algorithms.
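
To make the idea concrete, here is a minimal sketch in illustrative notation (not tied to any single paper below): the original problem is to minimize a non-convex $f$ over $\mathcal{X}$, but there exists a (possibly unknown and never computed) invertible reparametrization $c:\mathcal{U}\to\mathcal{X}$ under which the problem becomes convex,

\[
\min_{x \in \mathcal{X}} f(x), \qquad x = c(u), \qquad F(u) := f(c(u)) \ \text{convex on a convex set } \mathcal{U}.
\]

Under suitable regularity, an interior stationary point of $f$ pulls back to a stationary point of the convex $F$ and is therefore globally optimal, which is why plain gradient methods on $f$ can reach global solutions. A toy instance, assuming this simple one-dimensional setup for illustration:

\[
f(x) = x^4 - x^2 \ \text{(non-convex in } x\text{)}, \qquad u = x^2 \ge 0, \qquad F(u) = u^2 - u \ \text{(convex)},
\]

so the global minimizers $x^\star = \pm 1/\sqrt{2}$ correspond to $u^\star = 1/2$; the spurious stationary point $x = 0$ sits exactly at the degenerate boundary point $u = 0$ of the reparametrization, illustrating why such points require separate care.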

Key Contributions:

  • Analysis of optimization algorithms under hidden convexity (SIAM J. Optim., 2025); helps resolve open questions in reinforcement learning and variational inference.
  • Using properties of the hidden convex structure to design faster policy gradient methods for general-utility RL (ICML, 2023).
  • Discovering new landscape structures in zero-sum game settings, which allow us to improve complexity bounds by orders of magnitude (SIAM J. Contr. Optim., 2025).
  • Analyzing the fundamental structure of the linear quadratic regulator with output control and establishing convergence of a carefully designed gradient method (SIAM J. Contr. Optim., 2021).

Impact: This line of work has substantially advanced our understanding of the efficiency of policy gradient methods and provides a principled path to analyzing non-convex optimization landscapes in safety-critical applications.

Research Impact

This theoretical framework helps explain why modern optimization methods succeed on problems that appear non-convex, and it provides mathematical foundations for reliable AI systems in safety-critical applications.