Scholarship list
Conference paper
Stochastic and Private Nonconvex Outlier-Robust PCA
Date presented 08/16/2022
Mathematical and Scientific Machine Learning, 08/15/2022–08/17/2022, Beijing
We develop theoretically guaranteed stochastic methods for outlier-robust PCA, which seeks an underlying low-dimensional linear subspace in a dataset corrupted with outliers. Through a novel convergence analysis, we show that our methods, variants of stochastic geodesic gradient descent over the Grassmannian manifold, converge and recover an underlying subspace in various regimes. The main application of this method is an effective differentially private algorithm for outlier-robust PCA that uses a Gaussian noise mechanism within the stochastic gradient method. Our results emphasize the advantages of the nonconvex methods over a convex approach to outlier-robust PCA in the differentially private setting. Experiments on synthetic and stylized data verify these results.
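A minimal sketch of the kind of method described above, not the paper's implementation: stochastic Riemannian descent for least-absolute-deviations subspace recovery, with a QR retraction standing in for an exact geodesic step and a plain Gaussian perturbation standing in for a calibrated differential-privacy mechanism. The function name, step sizes, and noise scale are illustrative assumptions.

```python
import numpy as np

def noisy_subspace_descent(X, d, step=0.05, iters=300, noise_scale=0.0, seed=0):
    """Stochastic (optionally noisy) gradient sketch for
    min_U sum_i ||x_i - U U^T x_i||  over orthonormal U (D x d).
    A QR retraction replaces the exact geodesic step on the Grassmannian."""
    rng = np.random.default_rng(seed)
    n, D = X.shape
    U, _ = np.linalg.qr(rng.standard_normal((D, d)))   # random orthonormal init
    for _ in range(iters):
        x = X[rng.integers(n)]                         # sample one data point
        r = x - U @ (U.T @ x)                          # residual orthogonal to span(U)
        nr = np.linalg.norm(r)
        if nr < 1e-12:
            continue                                   # point already on the subspace
        g = -np.outer(r, U.T @ x) / nr                 # Riemannian gradient of ||r||
        g += noise_scale * rng.standard_normal(g.shape)  # stand-in for the Gaussian mechanism
        U, _ = np.linalg.qr(U - step * g)              # retraction back to orthonormal frames
    return U
```

Setting `noise_scale=0.0` recovers the non-private stochastic method; a private variant would calibrate the noise to the gradient's sensitivity and privacy budget.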
Conference paper
Scalable Cluster-Consistency Statistics for Robust Multi-Object Matching
Date presented 12/03/2021
International Conference on 3D Vision 2021, 12/01/2021–12/03/2021, Online
We develop new statistics for robustly filtering corrupted keypoint matches in the structure-from-motion pipeline. The statistics are based on consistency constraints that arise within the clustered structure of the graph of keypoint matches, and they are designed to assign smaller values to corrupted matches than to uncorrupted ones. These statistics are combined with an iterative reweighting scheme to filter keypoint matches, which can then be fed into any standard structure-from-motion pipeline. The filtering method can be implemented efficiently and scaled to massive datasets, since it only requires sparse matrix multiplication. We demonstrate the efficacy of this method on synthetic and real structure-from-motion datasets, and show that it achieves state-of-the-art accuracy and speed on these tasks.
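An illustrative sketch of the general idea, not the paper's statistics: score each edge of the match graph by the (weighted) triangles through it, computable with one sparse matrix product per iteration, and iteratively renormalize so that edges lacking cluster support are driven to zero. The function name and normalization are assumptions for illustration.

```python
from scipy.sparse import csr_matrix

def consistency_reweight(A, iters=3, eps=1e-8):
    """Toy cycle-consistency reweighting on a sparse symmetric 0/1
    adjacency matrix A of the keypoint-match graph. Each iteration
    needs only one sparse matrix multiplication."""
    W = A.astype(float)
    for _ in range(iters):
        T = (W @ W).multiply(A)   # weighted triangle count on each retained edge
        m = T.max()
        W = T / (m + eps)         # rescale the statistics into [0, 1]
    return W.tocsr()
```

Edges inside a consistent cluster accumulate many supporting triangles and keep weight near 1, while a spurious match with no common neighbors is zeroed out after a single iteration.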
Conference paper
Acceleration and Implicit Regularization in Gaussian Phase Retrieval
International Conference on Artificial Intelligence and Statistics, 05/02/2024–05/04/2024, Valencia, Spain
We study accelerated optimization methods for the Gaussian phase retrieval problem. In this setting, we prove that gradient methods with Polyak or Nesterov momentum enjoy implicit regularization similar to that of gradient descent. This implicit regularization keeps the iterates in a benign region where the cost function is strongly convex and smooth, despite being nonconvex in general, so that the accelerated methods achieve faster rates of convergence than gradient descent. Experimental evidence demonstrates that the accelerated methods also converge faster than gradient descent in practice.
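As a concrete sketch of the setting, not the paper's analysis: heavy-ball (Polyak momentum) iterations on the standard smooth phase retrieval loss f(x) = (1/4m) Σ_i ((aᵢᵀx)² − yᵢ)², with the initialization passed in explicitly. The function name and default parameters are illustrative assumptions.

```python
import numpy as np

def phase_retrieval_heavy_ball(A, y, x0, step=0.05, beta=0.5, iters=500):
    """Heavy-ball iterations x_{k+1} = x_k - step * grad f(x_k)
    + beta * (x_k - x_{k-1}) for the Gaussian phase retrieval loss
    f(x) = (1/4m) sum_i ((a_i^T x)^2 - y_i)^2."""
    m = A.shape[0]
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        z = A @ x
        grad = A.T @ ((z**2 - y) * z) / m              # gradient of f at x
        x, x_prev = x - step * grad + beta * (x - x_prev), x
    return x
```

Since y only determines the signal up to sign, success is measured by the distance to ±x*; setting `beta=0` recovers plain gradient descent for comparison.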
Conference paper
Gradient descent algorithms for Bures-Wasserstein barycenters
Conference on Learning Theory, 07/09/2020–07/12/2020
We study first-order methods for computing the barycenter of a probability distribution P over the space of probability measures with finite second moment. We develop a framework for deriving global rates of convergence for both gradient descent and stochastic gradient descent, despite the fact that the barycenter functional is not geodesically convex. Our analysis overcomes this technical hurdle by employing a Polyak-Łojasiewicz (PL) inequality and relies on tools from optimal transport and metric geometry. In turn, we establish a PL inequality when P is supported on the Bures-Wasserstein manifold of Gaussian probability measures. This leads to the first global rates of convergence for first-order methods in this context.
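A minimal sketch of gradient descent on the Bures-Wasserstein manifold for finitely many centered Gaussians, assuming the standard form of the optimal transport maps between Gaussians; with unit step size this reduces to the classical fixed-point iteration for the barycenter covariance. Function and variable names are illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein_barycenter(covs, step=1.0, iters=20):
    """Geodesic gradient descent sketch for the barycenter of centered
    Gaussians N(0, S_i). Each step averages the optimal transport maps
    T_i = S^{-1/2} (S^{1/2} S_i S^{1/2})^{1/2} S^{-1/2} from the current
    iterate S to each S_i and pushes S forward along the averaged map."""
    d = covs[0].shape[0]
    S = np.eye(d)                             # initial covariance iterate
    for _ in range(iters):
        rS = np.real(sqrtm(S))                # S^{1/2}
        rS_inv = np.linalg.inv(rS)
        T = sum(rS_inv @ np.real(sqrtm(rS @ Si @ rS)) @ rS_inv
                for Si in covs) / len(covs)   # average of OT maps
        M = (1 - step) * np.eye(d) + step * T
        S = M @ S @ M                         # gradient step via pushforward
    return S
```

For commuting (e.g. diagonal) covariances the barycenter is available in closed form, the elementwise square of the averaged square roots, which gives a simple sanity check.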