Scholarship list
Journal article
Simplifying Optimal Transport through Schatten-p Regularization
Published 02/24/2026
Transactions on Machine Learning Research
We propose a new general framework for recovering low-rank structure in optimal transport using Schatten-p norm regularization. Our approach extends existing methods that promote sparse and interpretable transport maps or plans, while providing a unified and principled family of convex programs that encourage low-dimensional structure. The convexity of our formulation enables direct theoretical analysis: we derive optimality conditions and prove recovery guarantees for low-rank couplings, barycentric displacements, and cross-covariances in simplified settings. To efficiently solve the proposed program, we develop a mirror descent algorithm with convergence guarantees in the convex setting. Experiments on synthetic and real data demonstrate the method's efficiency, scalability, and ability to recover low-rank transport structures. In particular, we demonstrate its utility on a machine-learning task: learning transport between high-dimensional cell perturbations for biological applications. All code is publicly available at https://github.com/twmaunu/schatten_ot.
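The entropic mirror descent scheme the abstract describes can be sketched in its simplest discrete form. This is a generic illustration, not the released implementation from the repository: the objective (linear transport cost plus lam * ||P||_{S_p}^p with p = 1.5), the step size, and the KL projection back onto the transport polytope via Sinkhorn scalings are all illustrative assumptions.

```python
# A minimal sketch (not the paper's code) of entropic mirror descent for
# Schatten-p regularized optimal transport:
#   min_{P in U(a, b)}  <C, P> + lam * ||P||_{S_p}^p
# The mirror step is multiplicative; the KL projection back onto the
# transport polytope U(a, b) is done with Sinkhorn-style scalings.
import numpy as np

def schatten_p_grad(P, p):
    """Gradient of ||P||_{S_p}^p = sum_i s_i^p, via the SVD of P."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    return U @ np.diag(p * s ** (p - 1)) @ Vt

def kl_project(P, a, b, n_iters=200):
    """Scale P so its row/column marginals match (a, b)."""
    for _ in range(n_iters):
        P = P * (a / P.sum(axis=1))[:, None]
        P = P * (b / P.sum(axis=0))[None, :]
    return P

def mirror_descent_ot(C, a, b, lam=0.1, p=1.5, eta=0.3, n_steps=50):
    P = np.outer(a, b)                       # feasible initial coupling
    for _ in range(n_steps):
        G = C + lam * schatten_p_grad(P, p)  # gradient of the objective
        P = P * np.exp(-eta * G)             # entropic mirror step
        P = kl_project(P, a, b)              # back onto U(a, b)
    return P

rng = np.random.default_rng(0)
C = rng.random((5, 4))
a = np.full(5, 1 / 5)
b = np.full(4, 1 / 4)
P = mirror_descent_ot(C, a, b)
print(np.allclose(P.sum(axis=0), b))  # True: columns were scaled last
```

The entropic mirror map keeps every iterate strictly positive, so each multiplicative step stays inside the positive orthant and only the marginal constraints need re-projection.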
Conference proceeding
First-Order Algorithms for Optimization over Graph Laplacians
Published 07/10/2023
2023 International Conference on Sampling Theory and Applications (SampTA), 1–11
When solving an optimization problem over the set of graph Laplacian matrices, one must handle a large number of constraints as well as a high-dimensional optimization variable. In this paper we explore first-order methods for optimization over graph Laplacian matrices. These methods include two popular methods for constrained optimization: the mirror descent algorithm and the Frank-Wolfe (conditional gradient) algorithm. We derive efficiently implementable formulations of these algorithms over graph Laplacians, and use existing theory to show their iteration complexity in various regimes. Experiments demonstrate the efficiency of these methods over alternatives like interior point methods.
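The Frank-Wolfe variant mentioned above admits a particularly simple sketch: the trace-bounded Laplacian set is the convex hull of the zero matrix and scaled single-edge Laplacians, so the linear minimization oracle just scans edges. This is a hedged illustration, not the paper's derivation; the least-squares objective and the trace bound tau are illustrative choices.

```python
# A sketch of Frank-Wolfe (conditional gradient) over graph Laplacians
# with trace at most tau.  Vertices of this set are 0 and the scaled
# edge Laplacians (tau/2) * E_ij, so the LMO reduces to an edge scan.
# Objective and target M are illustrative, not from the paper.
import numpy as np

def edge_laplacian(n, i, j):
    E = np.zeros((n, n))
    E[i, i] = E[j, j] = 1.0
    E[i, j] = E[j, i] = -1.0
    return E

def frank_wolfe_laplacian(M, tau=4.0, n_steps=200):
    """Minimize 0.5 * ||L - M||_F^2 over trace-bounded Laplacians."""
    n = M.shape[0]
    L = np.zeros((n, n))                      # 0 is a vertex of the set
    for k in range(n_steps):
        G = L - M                             # gradient of the objective
        # LMO: <G, E_ij> = G_ii + G_jj - 2 G_ij, minimized over edges.
        best, best_edge = 0.0, None
        for i in range(n):
            for j in range(i + 1, n):
                score = G[i, i] + G[j, j] - 2.0 * G[i, j]
                if score < best:
                    best, best_edge = score, (i, j)
        S = np.zeros((n, n)) if best_edge is None \
            else (tau / 2.0) * edge_laplacian(n, *best_edge)
        gamma = 2.0 / (k + 2.0)               # standard FW step size
        L = (1.0 - gamma) * L + gamma * S
    return L

# Illustrative target: the Laplacian of a path graph on 4 nodes.
M = (edge_laplacian(4, 0, 1) + edge_laplacian(4, 1, 2)
     + edge_laplacian(4, 2, 3))
L = frank_wolfe_laplacian(M)
print(np.allclose(L.sum(axis=1), 0.0))  # rows of a Laplacian sum to 0
```

Because every iterate is a convex combination of Laplacians, constraints like symmetry, zero row sums, and the trace bound hold by construction rather than through explicit projection.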
Conference proceeding
Bures-Wasserstein Barycenters and Low-Rank Matrix Recovery
Date presented 04/25/2023
Proceedings of Machine Learning Research, 206
International Conference on Artificial Intelligence and Statistics (AISTATS), 04/25/2023–04/27/2023, Valencia, Spain
We revisit the problem of recovering a low-rank positive semidefinite matrix from rank-one projections using tools from optimal transport. More specifically, we show that a variational formulation of this problem is equivalent to computing a Wasserstein barycenter. In turn, this new perspective enables the development of new geometric first-order methods with strong convergence guarantees in Bures-Wasserstein distance. Experiments on simulated data demonstrate the advantages of our new methodology over existing methods.
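The Bures-Wasserstein barycenter the abstract refers to can be computed, for a small set of SPD matrices, with the standard fixed-point iteration for Wasserstein barycenters of centered Gaussians. This is a generic textbook sketch under that assumption, not the geometric first-order methods developed in the paper.

```python
# Standard fixed-point iteration for the Bures-Wasserstein barycenter
# of SPD matrices (a generic sketch, not the paper's method):
#   S <- S^{-1/2} ( sum_i w_i (S^{1/2} Sigma_i S^{1/2})^{1/2} )^2 S^{-1/2}
import numpy as np

def sqrtm_spd(A):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def bw_barycenter(sigmas, weights, n_iters=100):
    S = np.mean(sigmas, axis=0)               # SPD initialization
    for _ in range(n_iters):
        R = sqrtm_spd(S)
        R_inv = np.linalg.inv(R)
        T = sum(w * sqrtm_spd(R @ Sig @ R) for w, Sig in zip(weights, sigmas))
        S = R_inv @ T @ T @ R_inv
    return S

# Sanity check: for commuting (here diagonal) inputs, the barycenter's
# square root is the weighted average of the inputs' square roots.
sigmas = [np.diag([1.0, 4.0]), np.diag([9.0, 1.0])]
S = bw_barycenter(sigmas, weights=[0.5, 0.5])
print(np.allclose(sqrtm_spd(S), np.diag([2.0, 1.5])))
```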
Journal article
Depth Descent Synchronization in SO(D)
Published 01/03/2023
International Journal of Computer Vision, 131, 968–986
Conference proceeding
Score-based Generative Neural Networks for Large-Scale Optimal Transport
Published 01/01/2021
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 34
We consider the fundamental problem of sampling the optimal transport coupling between given source and target distributions. In certain cases, the optimal transport plan takes the form of a one-to-one mapping from the source support to the target support, but learning or even approximating such a map is computationally challenging for large and high-dimensional datasets due to the high cost of linear programming routines and an intrinsic curse of dimensionality. We study instead the Sinkhorn problem, a regularized form of optimal transport whose solutions are couplings between the source and target distributions. We introduce a novel framework for learning the Sinkhorn coupling between two distributions in the form of a score-based generative model. Conditioned on source data, our procedure iterates Langevin dynamics to sample target data according to the regularized optimal coupling. Key to this approach is a neural network parametrization of the Sinkhorn problem, and we prove convergence of gradient descent with respect to network parameters in this formulation. We demonstrate its empirical success on a variety of large-scale optimal transport tasks.
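The Sinkhorn problem that this paper scales up with neural networks has a simple discrete baseline: entropically regularized OT solved by alternating marginal scalings. The sketch below is that textbook baseline under illustrative choices of cost, regularization strength, and data, not the paper's score-based parametrization.

```python
# The Sinkhorn problem from the abstract, in its simplest discrete form:
# entropic OT between two empirical distributions, solved by alternating
# marginal scalings.  A generic textbook sketch, not the paper's method.
import numpy as np

def sinkhorn_coupling(C, a, b, eps=0.25, n_iters=500):
    """Return the entropically regularized OT coupling between a and b."""
    K = np.exp(-C / eps)           # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)          # match column marginals
        u = a / (K @ v)            # match row marginals
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 2))        # source support points
y = rng.normal(size=(5, 2))        # target support points
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared distances
C = C / C.max()                    # normalize so eps is on a fixed scale
a = np.full(6, 1 / 6)
b = np.full(5, 1 / 5)
P = sinkhorn_coupling(C, a, b)
print(np.allclose(P.sum(axis=1), a))  # rows match the source marginal
```

In the paper's setting this dense matrix scaling is exactly what becomes infeasible at scale, which motivates replacing the explicit coupling with a conditional sampler.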