Optimization Algorithms with Improved Convergence for Minimax in Strongly Convex and Nonconvex Settings
DOI:
https://doi.org/10.29304/jqcsm.2025.17.42547

Keywords:
Optimization algorithms, minimax problem, convex and non-convex, AGD

Abstract
This paper addresses key problems in smooth minimax optimization by developing fast algorithms for the strongly convex-concave and nonconvex-concave settings. For the strongly convex-concave case, a new method is presented that combines the momentum of Nesterov's accelerated gradient descent (AGD) with the stability of the Mirror-Prox method; this hybrid achieves a near-optimal convergence rate and significantly outperforms the previous benchmark. For the harder nonconvex-concave case, an inexact proximal point framework is developed whose stationary-point convergence rate substantially improves on earlier results. Applied to finite-sum problems, the framework yields a rate with only logarithmic (log m) dependence on the number of components, improving efficiency in high-dimensional settings. The main advances are the successful integration of acceleration techniques and the adoption of adaptive error-control mechanisms to handle nonconvexity. These contributions close important theoretical gaps in minimax optimization and offer practical benefits for applications such as adversarial training, game-equilibrium computation, and robust learning systems. Extensive analysis shows how acceleration strategies can be matched to fundamental problem structure, yielding improved methods for challenging optimization scenarios.
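To make the strongly convex-concave idea concrete, the sketch below shows the general flavor of combining a Mirror-Prox (extragradient) update with a Nesterov-style momentum extrapolation. This is a minimal illustrative sketch, not the paper's algorithm: the toy problem f(x, y) = (mu/2)||x||^2 + x'Ay - (mu/2)||y||^2, the step size eta, and the momentum weight beta are all assumptions chosen for demonstration.

# Illustrative sketch only: one Mirror-Prox (extragradient) iteration with a
# Nesterov-style momentum extrapolation, on a toy strongly convex-concave
# problem f(x, y) = (mu/2)||x||^2 + x'Ay - (mu/2)||y||^2 whose unique saddle
# point is (0, 0). Problem data, eta, and beta are hypothetical choices,
# not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, mu = 10, 0.1
A = rng.standard_normal((n, n)) / n        # keep the coupling term well scaled

def grad_x(x, y):                          # gradient of f with respect to x
    return mu * x + A @ y

def grad_y(x, y):                          # gradient of f with respect to y
    return A.T @ x - mu * y

eta, beta = 0.1, 0.3                       # step size and momentum (assumed)
x = rng.standard_normal(n)
y = rng.standard_normal(n)
x_prev = x.copy()

for t in range(2000):
    x_ex = x + beta * (x - x_prev)         # Nesterov-style extrapolation
    # Mirror-Prox: a gradient "lookahead" half-step from the extrapolated point
    x_half = x_ex - eta * grad_x(x_ex, y)
    y_half = y + eta * grad_y(x_ex, y)
    # ...then the actual step, using gradients evaluated at the lookahead point
    x_prev = x
    x = x_ex - eta * grad_x(x_half, y_half)
    y = y + eta * grad_y(x_half, y_half)

# Both iterates should be driven toward the saddle point at the origin.
print(np.linalg.norm(x), np.linalg.norm(y))

For the nonconvex-concave setting, the inexact proximal point framework described in the abstract would instead wrap an inner solver of this kind around regularized concave subproblems; that outer loop is not reproduced here.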
License
Copyright (c) 2026 Raghad Hasan Radhi

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.