Abstract
We study accelerated optimization methods in the Gaussian phase retrieval
problem. In this setting, we prove that gradient methods with Polyak (heavy-ball) or
Nesterov momentum enjoy implicit regularization similar to that of gradient descent.
This implicit regularization keeps the iterates in a benign region where the cost
function, although nonconvex in general, is strongly convex and smooth; as a
consequence, the accelerated methods achieve provably faster convergence rates
than gradient descent. Experimental evidence
demonstrates that the accelerated methods converge faster than gradient descent
in practice.
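
For concreteness, with step size $\alpha > 0$ and momentum parameter $\beta \in [0,1)$ (symbols chosen here for illustration; the precise parameter choices analyzed in the paper may differ), the two momentum schemes referred to above take their standard forms:
\begin{align*}
\text{Polyak (heavy ball):}\quad & x_{k+1} = x_k - \alpha \nabla f(x_k) + \beta\,(x_k - x_{k-1}),\\
\text{Nesterov:}\quad & y_k = x_k + \beta\,(x_k - x_{k-1}), \qquad x_{k+1} = y_k - \alpha \nabla f(y_k),
\end{align*}
where $f$ denotes the (generally nonconvex) phase retrieval cost function.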