Using random matrix theory in numerical linear algebra: Fast and stable randomized low-rank matrix approximation

20 November 2020
Yuji Nakatsukasa

In this new session, a speaker tells us how their area of mathematics can be used in different applications.

In this talk, Yuji Nakatsukasa tells us about how random matrix theory can be used in numerical linear algebra. 



Randomized SVD is a topic in numerical linear algebra that draws heavily on random matrix theory. It has become an extremely successful approach for efficiently computing low-rank approximations of matrices. In particular, the paper by Halko, Martinsson, and Tropp (SIREV 2011) contains extensive analysis and has made it a very popular method. The classical Nyström method is much faster, but is only applicable to positive semidefinite matrices. This work studies a generalization of Nyström's method that is applicable to general matrices, and shows that (i) it has near-optimal approximation quality comparable to competing methods, (ii) its computational cost is a near-optimal O(mn log n + r^3) for a rank-r approximation of a dense m×n matrix, and (iii) crucially, it can be implemented in a numerically stable fashion despite the presence of an ill-conditioned pseudoinverse. Numerical experiments illustrate that generalized Nyström can significantly outperform state-of-the-art methods. In this talk I will highlight the crucial role played by a classical result in random matrix theory, namely the Marchenko-Pastur law, and also briefly mention its other applications in least-squares problems and compressed sensing.
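To illustrate the structure of generalized Nyström described above, here is a minimal Python sketch. It forms the approximation A ≈ (AX) (Yᵀ A X)⁺ (Yᵀ A) with Gaussian sketch matrices X and an oversampled Y; the function name, the oversampling parameter p, and the use of a least-squares solve in place of an explicit pseudoinverse are illustrative assumptions, not the paper's exact stabilized algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def generalized_nystrom(A, r, p=10):
    """Rank-r approximation A ~ (A X) pinv(Y^T A X) (Y^T A).

    Illustrative sketch: X and Y are Gaussian sketch matrices,
    with Y oversampled by p columns relative to X.
    """
    m, n = A.shape
    X = rng.standard_normal((n, r))        # right sketch, n x r
    Y = rng.standard_normal((m, r + p))    # oversampled left sketch
    AX = A @ X                             # m x r
    YA = Y.T @ A                           # (r+p) x n
    YAX = Y.T @ AX                         # (r+p) x r core matrix
    # Apply the pseudoinverse via a least-squares solve rather than
    # forming pinv(YAX) explicitly, which helps when YAX is
    # ill-conditioned.
    Z = np.linalg.lstsq(YAX, YA, rcond=None)[0]   # r x n
    return AX @ Z                          # m x n approximation

# Example: approximate a numerically rank-20 matrix.
U = rng.standard_normal((500, 20))
V = rng.standard_normal((20, 400))
A = U @ V + 1e-8 * rng.standard_normal((500, 400))
Ahat = generalized_nystrom(A, r=20)
print(np.linalg.norm(A - Ahat) / np.linalg.norm(A))  # small relative error
```

The dominant costs are the two sketched products A @ X and Y.T @ A plus an O(r^3)-sized dense solve, which matches the cost profile quoted in the abstract (structured sketches would be needed to reach the mn log n term).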

The join button will be published on the right (above the "View all" button) 30 minutes before the seminar starts (login required).