We discuss sketching techniques for sparse linear least squares (LLS) problems, which perform a randomised dimensionality reduction to enable more efficient and scalable solutions. We give theoretical bounds on the accuracy of the sketched solution and residual when hashing matrices are used for sketching, carefully quantifying the trade-off between the coherence of the original, un-sketched matrix and the sparsity of the hashing matrix. We then use these bounds to quantify the success of our algorithm, which employs a sparse factorisation of the sketched matrix as a preconditioner for the original LLS problem before applying LSQR. We extensively compare our algorithm to state-of-the-art direct and iterative solvers for large-scale sparse LLS, with encouraging results.
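The sketch-and-precondition approach described above can be illustrated with a minimal Python sketch. This is an assumption-laden illustration, not the speakers' implementation: it builds a sparse hashing sketch with a fixed number of ±1 entries per column, takes a dense QR factorisation of the sketched matrix (rather than the sparse factorisation the talk describes), and uses the triangular factor R as a right preconditioner for SciPy's LSQR on the original problem. The function names and parameters (`hashing_sketch`, `nnz_per_col`, sketch size `s`) are hypothetical.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def hashing_sketch(m, s, nnz_per_col=2, seed=None):
    """Sparse hashing sketch S of size s x m: each column holds
    nnz_per_col entries of value +-1/sqrt(nnz_per_col) in random rows.
    (Colliding entries in a column are summed, a harmless simplification.)"""
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, s, size=(nnz_per_col, m)).ravel()
    cols = np.tile(np.arange(m), nnz_per_col)
    vals = rng.choice([-1.0, 1.0], size=nnz_per_col * m) / np.sqrt(nnz_per_col)
    return sp.coo_matrix((vals, (rows, cols)), shape=(s, m)).tocsr()

def sketch_precondition_lsqr(A, b, s, seed=0):
    """Solve min ||Ax - b||_2 by sketching A, factorising the small
    sketched matrix, and running preconditioned LSQR on the original A."""
    S = hashing_sketch(A.shape[0], s, seed=seed)
    SA = S @ A                                   # s x n sketched matrix
    SA = SA.toarray() if sp.issparse(SA) else SA
    _, R = np.linalg.qr(SA, mode="reduced")      # R is the n x n preconditioner
    Rinv = np.linalg.inv(R)                      # small n: explicit inverse is
                                                 # fine; use triangular solves
                                                 # in a serious implementation
    y = lsqr(A @ Rinv, b, atol=1e-10, btol=1e-10)[0]  # min ||A R^{-1} y - b||
    return Rinv @ y                              # recover x = R^{-1} y
```

If the sketch is an accurate subspace embedding, A R⁻¹ is well conditioned and LSQR converges in few iterations, which is the mechanism the accuracy bounds in the talk are designed to guarantee.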
- Numerical Analysis Group Internal Seminar