Publications
Ilyas Fatkhullin, Niao He. Taming Nonconvex Stochastic Mirror Descent with General Bregman Divergence, AISTATS 2024.
Ilyas Fatkhullin, Niao He, Yifan Hu. Stochastic Optimization under Hidden Convexity, OptML workshop at NeurIPS 2023.
Jiduan Wu, Anas Barakat, Ilyas Fatkhullin, Niao He. Learning Zero-Sum Linear Quadratic Games with Improved Sample Complexity, OptML workshop at NeurIPS 2023.
Ilyas Fatkhullin, Alexander Tyurin, Peter Richtárik. Momentum Provably Improves Error Feedback!, NeurIPS 2023.
Junchi Yang, Xiang Li, Ilyas Fatkhullin, Niao He. Two Sides of One Coin: the Limits of Untuned SGD and the Power of Adaptive Methods, NeurIPS 2023.
Anas Barakat, Ilyas Fatkhullin, Niao He. Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space, ICML 2023.
Ilyas Fatkhullin, Anas Barakat, Anastasia Kireeva, Niao He. Stochastic Policy Gradient Methods: Improved Sample Complexity for Fisher-non-degenerate Policies, ICML 2023.
Ilyas Fatkhullin, Jalal Etesami, Niao He, Negar Kiyavash. Sharp Analysis of Stochastic Optimization under Global Kurdyka-Łojasiewicz Inequality, NeurIPS 2022.
Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin, Elnur Gasanov, Eduard Gorbunov, Zhize Li. 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation, ICML 2022 (spotlight).
Ilyas Fatkhullin, Igor Sokolov, Eduard Gorbunov, Zhize Li, Peter Richtárik. EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback, OptML workshop at NeurIPS 2021.
Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin. EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback, NeurIPS 2021 (oral).
Ilyas Fatkhullin, Boris Polyak. Optimizing Static Linear Feedback: Gradient Method, SIAM Journal on Control and Optimization 59, 3887–3911 (2021).
Boris Polyak, Ilyas Fatkhullin. Use of Projective Coordinate Descent in the Fekete Problem, Computational Mathematics and Mathematical Physics 60, 795–807 (2020).