Tue, 25 Feb 2020
14:00
L2

Fast and stable randomized low-rank matrix approximation

Yuji Nakatsukasa
(Oxford)
Abstract

Randomized SVD has become an extremely successful approach for efficiently computing a low-rank approximation of matrices. In particular, the paper by Halko, Martinsson (who is speaking twice this week), and Tropp (SIREV 2011) contains an extensive analysis and made it a very popular method.
The complexity for $m\times n$ matrices is $O(Nr+(m+n)r^2)$, where $N$ is the cost of a (fast) matrix-vector multiplication; for dense matrices this becomes $O(mn\log n+(m+n)r^2)$. This work uses classical results in numerical linear algebra to reduce the computational cost to $O(Nr)$ without sacrificing numerical stability. The cost is essentially optimal for many classes of matrices, including $O(mn\log n)$ for dense matrices. The method can also be adapted for updating, downdating and perturbing the matrix, and is especially efficient relative to previous algorithms for such purposes.
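For reference, the baseline this work accelerates can be sketched in a few lines of NumPy. This is a generic illustration of the Halko-Martinsson-Tropp scheme, not the speaker's new algorithm; the matrix, target rank and oversampling parameter are placeholders:

```python
import numpy as np

def randomized_svd(A, r, oversample=10):
    """Basic randomized SVD (Halko-Martinsson-Tropp): rank-r approximation."""
    m, n = A.shape
    # Sketch the range of A with a Gaussian test matrix: r + p matvecs
    Omega = np.random.randn(n, r + oversample)
    Q, _ = np.linalg.qr(A @ Omega)              # orthonormal range basis
    # Project onto the small subspace and compute an exact SVD there
    B = Q.T @ A                                 # (r+p) x n
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :r], s[:r], Vt[:r, :]

# Example usage on a synthetic low-rank matrix
A = np.random.randn(500, 40) @ np.random.randn(40, 300)
U, s, Vt = randomized_svd(A, 40)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```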

 

Tue, 25 Feb 2020
14:30
L2

Low-rank plus sparse matrices: ill-posedness and guaranteed recovery

Simon Vary
(Oxford)
Abstract

Robust principal component analysis and low-rank matrix completion are extensions of PCA that allow for outliers and missing entries, respectively. Solving these problems requires a low coherence between the low-rank matrix and the canonical basis. However, in both problems the well-posedness issue is even more fundamental: in some cases, both Robust PCA and matrix completion can fail to have any solution because the set of low-rank plus sparse matrices is not closed. Another consequence of this fact is that the lower restricted isometry property (RIP) bound cannot be satisfied for some low-rank plus sparse matrices unless further restrictions are imposed on the constituents. By restricting the energy of one of the components, we close the set and are able to derive the RIP over the set of low-rank plus sparse matrices for operators satisfying concentration of measure inequalities. We show that the RIP of an operator implies that exact recovery of a low-rank plus sparse matrix is possible with computationally tractable algorithms such as convex relaxations or line-search methods. We propose two efficient iterative methods, called Normalized Iterative Hard Thresholding (NIHT) and Normalized Alternating Hard Thresholding (NAHT), that provably recover a low-rank plus sparse matrix from subsampled measurements taken by an operator satisfying the RIP.
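A minimal sketch of the hard-thresholding idea behind these methods, assuming a generic measurement operator `Aop` with adjoint `Aadj` (simplified relative to the talk: a fixed step size rather than the normalized one, and simultaneous rather than alternating updates):

```python
import numpy as np

def hard_threshold_sparse(M, s):
    """Keep the s largest-magnitude entries of M, zero the rest."""
    out = np.zeros_like(M)
    idx = np.unravel_index(np.argsort(np.abs(M), axis=None)[-s:], M.shape)
    out[idx] = M[idx]
    return out

def project_rank(M, r):
    """Best rank-r approximation of M via a truncated SVD."""
    U, sig, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * sig[:r]) @ Vt[:r, :]

def iht_low_rank_plus_sparse(y, Aop, Aadj, shape, r, s, iters=300, step=1.0):
    """Schematic IHT for y = A(L + S): gradient steps followed by
    projections onto the rank-r and s-sparse sets."""
    L, S = np.zeros(shape), np.zeros(shape)
    for _ in range(iters):
        G = Aadj(y - Aop(L + S))        # gradient of 0.5 ||y - A(L+S)||^2
        L = project_rank(L + step * G, r)
        S = hard_threshold_sparse(S + step * G, s)
    return L, S

# Toy usage: subsampled entrywise measurements (masking is self-adjoint)
rng = np.random.default_rng(0)
mask = rng.random((60, 60)) < 0.6
L0 = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 60))
S0 = hard_threshold_sparse(rng.standard_normal((60, 60)) * 10, 40)
y = mask * (L0 + S0)
L, S = iht_low_rank_plus_sparse(y, lambda M: mask * M, lambda R: mask * R,
                                (60, 60), 5, 40)
print("relative error in L (toy demo):",
      np.linalg.norm(L - L0) / np.linalg.norm(L0))
```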
 

Tue, 18 Feb 2020
14:30
L5

An element-based preconditioner for mixed finite element problems

Michael Wathen
(Rutherford Appleton Laboratory)
Abstract

We introduce a new and generic approximation to Schur complements arising from inf-sup stable mixed finite element discretizations of self-adjoint multi-physics problems. The approximation exploits the discretization mesh by forming local, or element, Schur complements and projecting them back to the global degrees of freedom. The resulting Schur complement approximation is sparse, has low construction cost (the same order of operations as assembling a general finite element matrix), and can be solved using off-the-shelf techniques, such as multigrid. Using the Ladyzhenskaya-Babu\v{s}ka-Brezzi condition, we show that this approximation has favorable eigenvalue distributions with respect to the global Schur complement. We present several numerical results to demonstrate the viability of this approach on a range of applications. Interestingly, the numerical results show that the method gives an effective approximation to the non-symmetric Schur complement arising from the steady-state Navier-Stokes equations.
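The assembly idea can be sketched as follows, with a hypothetical data layout in which each element supplies its local velocity block, its local coupling block, and its global pressure dof indices (the actual method is formulated for general inf-sup stable mixed discretizations):

```python
import numpy as np
import scipy.sparse as sp

def element_schur_preconditioner(elements, n_pressure):
    """Sketch of an element-based Schur complement approximation.

    `elements` is a list of (A_e, B_e, pdofs) tuples: the local
    velocity block A_e, the local coupling block B_e (pressure x velocity),
    and the element's global pressure dof indices. Each local Schur
    complement S_e = B_e A_e^{-1} B_e^T is scattered into a global sparse
    matrix, exactly as one assembles a finite element matrix.
    """
    rows, cols, vals = [], [], []
    for A_e, B_e, pdofs in elements:
        S_e = B_e @ np.linalg.solve(A_e, B_e.T)     # local Schur complement
        for i, gi in enumerate(pdofs):
            for j, gj in enumerate(pdofs):
                rows.append(gi); cols.append(gj); vals.append(S_e[i, j])
    # Duplicate (row, col) entries are summed, as in standard FE assembly
    return sp.csr_matrix((vals, (rows, cols)),
                         shape=(n_pressure, n_pressure))
```

A multigrid cycle applied to this sparse matrix can then serve as the Schur complement block of the preconditioner.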
 

Tue, 18 Feb 2020
14:00
L5

FitBenchmarking: A tool for comparing fitting routines for our National Facilities (and beyond)

Tyrone Rees
(Rutherford Appleton Laboratory)
Abstract

In STFC's Computational Mathematics Group, we work alongside scientists at large-scale National Facilities such as the ISIS Neutron and Muon Source, the Diamond Light Source, and the Central Laser Facility. For each of these groups, non-linear least-squares fitting is a vital part of their scientific workflow. In this talk I will describe FitBenchmarking, a software tool we have developed to benchmark the performance of different non-linear least-squares solvers on real-world data. It is designed to be easily extended, so that new data formats and new minimizers can be added. FitBenchmarking will allow (i) scientists to determine objectively which fitting engine is optimal for solving their problem on their computing architecture, (ii) scientific software developers to quickly test state-of-the-art algorithms in their data analysis software, and (iii) mathematicians and numerical software developers to test novel algorithms against realistic datasets, and to highlight characteristics of problems where the current best algorithms are not sufficient.
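FitBenchmarking has its own problem and minimizer interfaces, which are not reproduced here; the kind of head-to-head comparison it automates can be illustrated with SciPy alone, on synthetic exponential-decay data and SciPy's built-in least-squares methods:

```python
import time
import numpy as np
from scipy.optimize import least_squares

# Synthetic decay-curve data of the kind fitted at neutron/muon sources
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 200)
y = 2.5 * np.exp(-1.3 * t) + 0.5 + 0.02 * rng.standard_normal(t.size)

def residuals(p):
    a, b, c = p
    return a * np.exp(-b * t) + c - y

x0 = np.array([1.0, 1.0, 0.0])
for method in ("trf", "lm", "dogbox"):     # three SciPy minimizers
    tic = time.perf_counter()
    sol = least_squares(residuals, x0, method=method)
    print(f"{method:7s} cost={sol.cost:.3e} nfev={sol.nfev:3d} "
          f"time={time.perf_counter() - tic:.4f}s")
```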
 

Tue, 11 Feb 2020
14:30
L5

Adaptive Cubic Regularisation Methods under Dynamic Inexact Hessian Information for Nonconvex Optimisation

Gianmarco Gurioli
(Università di Firenze)
Abstract

Adaptive Regularisation with Cubics (ARC) methods are Newton-type solvers for unconstrained, possibly nonconvex, optimisation problems. In this context, a novel variant based on dynamic inexact Hessian information is discussed. The approach preserves the optimal complexity of the basic framework, and the main probabilistic complexity and convergence results in the finite-sum minimisation setting will be presented. Finally, numerical tests on machine learning problems, ill-conditioned datasets and real-life applications will be given, in order to confirm the theoretical results. Joint work with Stefania Bellavia and Benedetta Morini (Università di Firenze).
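A minimal sketch of an ARC-type loop with a subsampled (hence inexact) Hessian on a toy finite-sum least-squares problem; the cubic subproblem is solved crudely with a general-purpose minimiser, and the batch size is fixed rather than dynamically adapted as in the talk:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, d = 1000, 10
A = rng.standard_normal((N, d)); b = rng.standard_normal(N)

f = lambda x: 0.5 * np.mean((A @ x - b) ** 2)     # finite-sum objective
grad = lambda x: A.T @ (A @ x - b) / N

def subsampled_hessian(batch):
    """Inexact Hessian built from a subsample of the N terms."""
    Ab = A[batch]
    return Ab.T @ Ab / len(batch)

x, sigma = np.zeros(d), 1.0
for k in range(30):
    H = subsampled_hessian(rng.choice(N, size=200, replace=False))
    g = grad(x)
    # Cubic-regularised model; its approximate minimiser is the trial step
    model = lambda s: g @ s + 0.5 * s @ H @ s + (sigma / 3) * np.linalg.norm(s) ** 3
    s = minimize(model, np.zeros(d)).x
    rho = (f(x) - f(x + s)) / max(-model(s), 1e-16)  # actual vs predicted decrease
    if rho >= 0.1:                   # successful: accept the step
        x = x + s
        if rho >= 0.9:
            sigma = max(sigma / 2, 1e-8)   # very successful: relax
    else:
        sigma *= 2                   # unsuccessful: regularise more
print("f(x) =", f(x))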

Tue, 04 Feb 2020
14:00
L5

Matrix Factorization with Expander Graphs

Michael Murray
(Oxford)
Abstract

Many computational techniques in data science involve the factorization of a data matrix into the product of two or more structured matrices. Examples include PCA, which relies on computing an SVD, recommendation systems, which leverage non-negative matrix factorization, infilling missing entries with low-rank matrix completion, and finding sparse representations via dictionary learning. In our work we study a new matrix factorization problem, involving the recovery of $\textbf{A}$ and $\textbf{X}$ from $\textbf{Y} := \textbf{A}\textbf{X}$ under the following assumptions: $\textbf{A}$ is an $m \times n$ sparse binary matrix with a fixed number $d$ of nonzeros per column, and $\textbf{X}$ is an $n \times N$ sparse real matrix whose columns have $k$ nonzeros and are dissociated. This setup is inspired and motivated by similar models studied in the dictionary learning literature, as well as by potential connections with both stochastic block models and combinatorial compressed sensing. In this talk we present a new algorithm, EBR, for solving this problem, as well as recovery guarantees in the context of a particular probabilistic data model. Using the properties of expander graphs, we are able to show, under certain assumptions, that with just $N = O(\log^2(n))$ samples EBR recovers the factorization up to permutation with high probability.
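The data model can be generated as follows (a sketch of the setup only; the EBR algorithm itself is not reproduced here). Drawing the nonzeros of $\textbf{X}$ from a continuous distribution makes its columns dissociated almost surely:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_binary_A(m, n, d):
    """m x n binary matrix with exactly d ones per column (a random
    bipartite graph; for suitable m, n, d it is an expander w.h.p.)."""
    A = np.zeros((m, n))
    for j in range(n):
        A[rng.choice(m, size=d, replace=False), j] = 1.0
    return A

def sparse_X(n, N, k):
    """n x N matrix whose columns have k nonzero Gaussian entries."""
    X = np.zeros((n, N))
    for j in range(N):
        X[rng.choice(n, size=k, replace=False), j] = rng.standard_normal(k)
    return X

m, n, d, k, N = 200, 100, 7, 5, 50
A = sparse_binary_A(m, n, d)
X = sparse_X(n, N, k)
Y = A @ X      # observed data: the goal is to recover A and X from Y
```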

Tue, 04 Feb 2020
14:30
L5

Lightning Laplace and Stokes solvers

Pablo Brubeck
(Oxford)
Abstract

We extend the lightning Laplace solver (Gopal and Trefethen, SINUM 2019) to unbounded domains and to the biharmonic equation. Illustrating the high accuracy of such methods, we get beautiful contour plots of Moffatt eddies.
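A minimal lightning-style sketch for the Laplace equation on the unit square, assuming the standard ingredients of Gopal and Trefethen: simple poles clustered exponentially near each corner plus a low-degree polynomial, fitted to Dirichlet boundary data by linear least squares (refinements such as corner-weighted sampling are omitted):

```python
import numpy as np

corners = np.array([0, 1, 1 + 1j, 1j])
center = 0.5 + 0.5j
sigma, n_per_corner, deg = 4.0, 20, 10

# Poles exponentially clustered towards each corner, outside the domain
poles = np.concatenate([
    c + (c - center) / abs(c - center)
      * np.exp(-sigma * np.sqrt(np.arange(1, n_per_corner + 1)))
    for c in corners
])

# Boundary sample points, traversing the four sides of the square
t = np.linspace(0, 1, 400, endpoint=False)
Z = np.concatenate([t, 1 + 1j * t, 1 + 1j - t, 1j * (1 - t)])

def basis(z):
    """Columns: Re and Im of z^k and of 1/(z - p) for each pole p."""
    cols = [np.ones(z.size)]
    for k in range(1, deg + 1):
        cols += [(z ** k).real, (z ** k).imag]
    for p in poles:
        w = 1.0 / (z - p)
        cols += [w.real, w.imag]
    return np.column_stack(cols)

g = np.abs(Z - center)                  # sample Dirichlet boundary data
coef, *_ = np.linalg.lstsq(basis(Z), g, rcond=None)
print("boundary misfit:", np.linalg.norm(basis(Z) @ coef - g, np.inf))
```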

Tue, 28 Jan 2020
14:00
L5

Stable Computation of Generalized Polar Decompositions

Carolin Penke
(MPI-Magdeburg)
Abstract

The QDWH algorithm can compute the polar decomposition of a matrix in a stable and efficient way. We generalize this method in order to compute generalized polar decompositions with respect to signature matrices. Here, the role of the QR decomposition is played by the hyperbolic QR decomposition. However, it does not enjoy the same favorable stability properties as its orthogonal counterpart. Remedies are found by exploiting connections to the $LDL^T$ factorization and by employing well-conditioned permuted graph bases. The computed polar decomposition is used to formulate a structure-preserving spectral divide-and-conquer method for pseudosymmetric matrices. Applications of this method are found in computational quantum physics, where eigenvalues and eigenvectors describe optical properties of condensed matter or molecules. Additional properties guarantee fast convergence and a reduction to symmetric definite eigenvalue problems after just one step of spectral divide-and-conquer.
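For orientation, here is the QR-based iteration underlying QDWH for the standard polar decomposition, specialized to fixed weights $(a,b,c)=(3,1,3)$, i.e. Halley's iteration; the actual QDWH chooses the weights dynamically for faster convergence, and the talk's generalization replaces the QR step with a hyperbolic one:

```python
import numpy as np

def polar_halley_qr(A, tol=1e-12, maxit=100):
    """Polar decomposition A = U H via the inverse-free QR-based Halley
    iteration (QDWH with fixed weights a=3, b=1, c=3)."""
    m, n = A.shape
    X = A / np.linalg.norm(A, 2)    # scale so all singular values <= 1
    I = np.eye(n)
    for _ in range(maxit):
        # Thin QR of [sqrt(3) X; I] = [Q1; Q2] R
        Q, _ = np.linalg.qr(np.vstack([np.sqrt(3.0) * X, I]))
        Q1, Q2 = Q[:m], Q[m:]
        # X_{k+1} = (b/c) X + (1/sqrt(c)) (a - b/c) Q1 Q2^T
        X_new = X / 3.0 + (8.0 / (3.0 * np.sqrt(3.0))) * Q1 @ Q2.T
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X_new):
            X = X_new
            break
        X = X_new
    U = X
    H = U.T @ A                     # Hermitian polar factor
    return U, (H + H.T) / 2         # symmetrize against rounding

A = np.random.randn(6, 6)
U, H = polar_halley_qr(A)
print(np.linalg.norm(U.T @ U - np.eye(6)), np.linalg.norm(U @ H - A))
```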

Tue, 28 Jan 2020
14:30
L5

Dimensionality reduction techniques for global optimization

Adilet Otemissov
(Oxford)
Abstract

We show that the scalability challenges of global optimisation algorithms can be overcome for functions with low effective dimensionality, which are constant along certain linear subspaces. Such functions can often be found in applications, for example in hyper-parameter optimisation for neural networks, heuristic algorithms for combinatorial optimisation problems, and complex engineering simulations. We propose the use of random subspace embeddings within a(ny) global minimisation algorithm, extending the approach of Wang et al. (2016). Using tools from random matrix theory and conic integral geometry, we investigate the efficacy and convergence of our random subspace embedding approach, in static and adaptive formulations. We illustrate our algorithmic proposals and theoretical findings numerically, using state-of-the-art global solvers. This work is joint with Coralia Cartis.
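The embedding idea can be sketched as follows, on a toy function with effective dimensionality 2 inside $\mathbb{R}^{100}$; `differential_evolution` stands in for a(ny) global solver applied to the reduced problem:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
D, d = 100, 5                  # ambient dimension, embedding dimension
Q = np.linalg.qr(rng.standard_normal((D, 2)))[0]   # effective subspace

def f(x):
    """Toy objective, constant orthogonal to span(Q): effective dim = 2."""
    u = Q.T @ x
    return (u[0] - 1.0) ** 2 + (u[1] + 0.5) ** 2   # global minimum value 0

A = rng.standard_normal((D, d))    # random subspace embedding
g = lambda y: f(A @ y)             # reduced d-dimensional problem

res = differential_evolution(g, bounds=[(-5.0, 5.0)] * d, seed=1)
print("reduced minimum found:", res.fun)   # ~0 with high probability
```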
 
