HNAG++: A Super-Fast Accelerated Gradient Method for Strongly Convex Optimization

Long Chen and Zeyi Xu

Submitted

ABSTRACT:

We introduce and analyze two methods, {\it HNAG+} and {\it HNAG++}, for
minimizing strongly convex functions with large condition number
$\kappa$. For {\it HNAG+}, we prove a global linear convergence rate of
$1 - 2/\sqrt{\kappa}$, achieving the information-theoretically optimal
rate. For {\it HNAG++}, we establish a global asymptotic linear rate of
$1 - 2\sqrt{2/\kappa}$ for functions with H\"older continuous Hessians,
the fastest known rate among globally convergent first-order methods.
Extensive numerical experiments on linear and nonlinear problems show
that {\it HNAG++} consistently outperforms existing accelerated
gradient methods.
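
For context, the classical Nesterov accelerated gradient method for an
$L$-smooth, $\mu$-strongly convex objective contracts at the rate
$1 - 1/\sqrt{\kappa}$ per iteration, where $\kappa = L/\mu$; the rates
claimed above improve on this constant. Below is a minimal sketch of
that standard baseline, not the HNAG+ or HNAG++ updates (which the
abstract does not specify); the function and parameter names are
illustrative.

    import numpy as np

    def nesterov_strongly_convex(grad, x0, L, mu, num_iters=1000):
        """Standard Nesterov accelerated gradient for an L-smooth,
        mu-strongly convex objective; contracts roughly by a factor
        1 - 1/sqrt(kappa) per step, where kappa = L / mu.
        Illustrative baseline only, not the authors' method."""
        kappa = L / mu
        # Classical momentum weight for the strongly convex setting.
        beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
        x, y = x0.copy(), x0.copy()
        for _ in range(num_iters):
            x_next = y - grad(y) / L          # gradient step at extrapolated point
            y = x_next + beta * (x_next - x)  # momentum extrapolation
            x = x_next
        return x

    # Example: ill-conditioned quadratic f(x) = 0.5 * x^T A x with kappa = 1e4.
    A = np.diag(np.linspace(1.0, 1e4, 50))
    x_min = nesterov_strongly_convex(lambda x: A @ x, np.ones(50), L=1e4, mu=1.0)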