Tue, 18 Feb 2020
14:30
L5

An element-based preconditioner for mixed finite element problems

Michael Wathen
(Rutherford Appleton Laboratory)
Abstract

We introduce a new and generic approximation to Schur complements arising from inf-sup stable mixed finite element discretizations of self-adjoint multi-physics problems. The approximation exploits the discretization mesh by forming local, or element, Schur complements and projecting them back to the global degrees of freedom. The resulting Schur complement approximation is sparse, has low construction cost (with the same order of operations as a general finite element matrix), and can be solved using off-the-shelf techniques, such as multigrid. Using the Ladyzhenskaya-Babuška-Brezzi condition, we show that this approximation has favorable eigenvalue distributions with respect to the global Schur complement. We present several numerical results to demonstrate the viability of this approach on a range of applications. Interestingly, numerical results show that the method gives an effective approximation to the non-symmetric Schur complement from the steady-state Navier-Stokes equations.
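A minimal NumPy sketch of the element-assembly idea described in the abstract, on a toy 1D "mesh" (an illustrative setup, not the paper's discretization): each element's local Schur complement is formed and scattered back to the global pressure degrees of freedom, giving a sparse approximation to the dense global Schur complement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D mesh: ne elements; element e couples velocity dofs {e, e+1}
# and its own pressure dof {e}.
ne = 8
nu, npr = ne + 1, ne                      # global velocity / pressure dofs
A = np.zeros((nu, nu))                    # velocity block (SPD)
B = np.zeros((npr, nu))                   # divergence block
S_hat = np.zeros((npr, npr))              # element Schur complement approximation

for e in range(ne):
    v = [e, e + 1]                        # local-to-global velocity map
    M = rng.standard_normal((2, 2))
    Ae = M @ M.T + 2 * np.eye(2)          # local SPD velocity matrix
    Be = rng.standard_normal((1, 2))      # local divergence matrix
    A[np.ix_(v, v)] += Ae
    B[np.ix_([e], v)] += Be
    # local Schur complement, projected back to global pressure dofs
    S_hat[e, e] += (Be @ np.linalg.solve(Ae, Be.T))[0, 0]

S = B @ np.linalg.solve(A, B.T)           # exact (dense) global Schur complement
print("nonzeros: S_hat =", np.count_nonzero(S_hat), ", S =", np.count_nonzero(S))
```

In this toy setup each element owns a single pressure dof, so the approximation is diagonal; the point is that its cost and sparsity match ordinary finite element assembly, while the exact Schur complement is dense.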
 

Tue, 18 Feb 2020
14:00
L5

FitBenchmarking: A tool for comparing fitting routines for our National Facilities (and beyond)

Tyrone Rees
(Rutherford Appleton Laboratory)
Abstract

In STFC's Computational Mathematics Group, we work alongside scientists at large-scale National Facilities, such as ISIS Neutron and Muon source, Diamond Light Source, and the Central Laser Facility. For each of these groups, non-linear least squares fitting is a vital part of their scientific workflow. In this talk I will describe FitBenchmarking, a software tool we have developed to benchmark the performance of different non-linear least squares solvers on real-world data. It is designed to be easily extended, so that new data formats and new minimizers can be added. FitBenchmarking will allow (i) scientists to determine objectively which fitting engine is optimal for solving their problem on their computing architecture, (ii) scientific software developers to quickly test state-of-the-art algorithms in their data analysis software, and (iii) mathematicians and numerical software developers to test novel algorithms against realistic datasets, and to highlight characteristics of problems where the current best algorithms are not sufficient.
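The kind of comparison the abstract describes can be sketched in a few lines with SciPy; this is a hypothetical mini-benchmark comparing three non-linear least squares methods on a synthetic exponential-decay fit, not the FitBenchmarking API itself.

```python
import time
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
true = np.array([2.5, 0.4])
y = true[0] * np.exp(-true[1] * x) + 0.01 * rng.standard_normal(x.size)

def residual(p):
    # residual of the model a * exp(-b * x) against the noisy data
    return p[0] * np.exp(-p[1] * x) - y

results = {}
for method in ("trf", "dogbox", "lm"):
    t0 = time.perf_counter()
    sol = least_squares(residual, x0=[1.0, 1.0], method=method)
    results[method] = (sol.cost, time.perf_counter() - t0, sol.nfev)

for m, (cost, t, nfev) in results.items():
    print(f"{m}: cost={cost:.3e}, nfev={nfev}, time={t:.4f}s")
```

FitBenchmarking automates exactly this kind of table across many solver back-ends and real facility data formats.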
 

Tue, 11 Feb 2020
14:30
L5

Adaptive Cubic Regularisation Methods under Dynamic Inexact Hessian Information for Nonconvex Optimisation

Gianmarco Gurioli
(Università di Firenze)
Abstract

Adaptive Regularisation with Cubics (ARC) methods are Newton-type solvers for unconstrained, possibly nonconvex, optimisation problems. In this context, a novel variant based on dynamic inexact Hessian information is discussed. The approach preserves the optimal complexity of the basic framework, and the main probabilistic results on the complexity and convergence analysis in the finite-sum minimisation setting will be shown. At the end, some numerical tests on machine learning problems, ill-conditioned datasets and real-life applications will be given, in order to confirm the theoretical achievements. Joint work with Stefania Bellavia and Benedetta Morini (Università di Firenze).
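A minimal sketch of the basic (exact-Hessian) ARC framework that the talk's inexact-Hessian variant builds on: at each iterate a cubic-regularised model is approximately minimised, the step is accepted or rejected by a ratio test, and the regularisation weight is adapted. The subproblem solver and constants here are illustrative choices, not the talk's.

```python
import numpy as np
from scipy.optimize import minimize

def arc(f, grad, hess, x0, sigma=1.0, tol=1e-8, max_iter=500):
    """Basic ARC sketch: cubic model, ratio test, adaptive regularisation."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        # cubic-regularised model m(s) = g.s + 0.5 s'Hs + (sigma/3)||s||^3,
        # minimised approximately with a generic solver
        m = lambda s: g @ s + 0.5 * s @ H @ s + sigma / 3 * np.linalg.norm(s) ** 3
        s = minimize(m, np.zeros_like(x)).x
        pred = -m(s)                               # predicted decrease
        rho = (f(x) - f(x + s)) / pred if pred > 0 else -1.0
        if rho > 0.1:                              # accept the step
            x = x + s
        sigma = 0.5 * sigma if rho > 0.9 else (sigma if rho > 0.1 else 2 * sigma)
    return x

# Rosenbrock test problem with analytic gradient and Hessian
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
g = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                        200*(x[1] - x[0]**2)])
h = lambda x: np.array([[2 - 400*x[1] + 1200*x[0]**2, -400*x[0]],
                        [-400*x[0], 200.0]])
x_star = arc(f, g, h, [-1.2, 1.0])
```

The dynamic-inexactness idea of the talk replaces `hess(x)` by a subsampled or otherwise approximate Hessian whose accuracy requirement adapts along the iterations.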

Tue, 04 Feb 2020
14:00
L5

Matrix Factorization with Expander Graphs

Michael Murray
(Oxford)
Abstract

Many computational techniques in data science involve the factorization of a data matrix into the product of two or more structured matrices. Examples include PCA, which relies on computing an SVD, recommendation systems, which leverage non-negative matrix factorization, infilling missing entries with low rank matrix completion, and finding sparse representations via dictionary learning. In our work we study a new matrix factorization problem, involving the recovery of $\textbf{A}$ and $\textbf{X}$ from $\textbf{Y} := \textbf{A}\textbf{X}$ under the following assumptions: $\textbf{A}$ is an $m \times n$ sparse binary matrix with a fixed number $d$ of nonzeros per column and $\textbf{X}$ is an $n \times N$ sparse real matrix whose columns have $k$ nonzeros and are dissociated. This setup is inspired and motivated by similar models studied in the dictionary learning literature as well as potential connections both with stochastic block models and combinatorial compressed sensing. In this talk we present a new algorithm, EBR, for solving this problem, as well as recovery guarantees in the context of a particular probabilistic data model. Using the properties of expander graphs we are able to show, under certain assumptions, that with just $N = \mathcal{O}( \log^2(n))$ samples, EBR recovers the factorization up to permutation with high probability.
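The probabilistic data model described in the abstract is easy to instantiate; the sketch below generates the observed matrix $\textbf{Y} = \textbf{A}\textbf{X}$ from the stated assumptions (the recovery algorithm EBR itself is the subject of the talk and is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, N, d, k = 60, 40, 30, 5, 3

# A: m x n sparse binary matrix with exactly d ones per column; such a matrix
# is the adjacency matrix of a bipartite graph, which for suitable (m, d) is
# an expander with high probability.
A = np.zeros((m, n))
for col in range(n):
    A[rng.choice(m, size=d, replace=False), col] = 1.0

# X: n x N sparse real matrix with k nonzeros per column (whose values are
# dissociated in the talk's model; Gaussian entries satisfy this a.s.)
X = np.zeros((n, N))
for col in range(N):
    X[rng.choice(n, size=k, replace=False), col] = rng.standard_normal(k)

Y = A @ X   # observed data; the factorization problem is to recover A and X
```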

Tue, 04 Feb 2020
14:30
L5

Lightning Laplace and Stokes solvers

Pablo Brubeck
(Oxford)
Abstract

We extend the lightning Laplace solver (Gopal and Trefethen, SINUM 2019) to unbounded domains and to the biharmonic equation. Illustrating the high accuracy of such methods, we get beautiful contour plots of Moffatt eddies.
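A simplified sketch of the lightning idea on a bounded domain (not the Gopal-Trefethen implementation): solve a Laplace boundary value problem on the unit square by least-squares fitting of the real part of a rational function whose poles cluster exponentially toward the corners, where singularities would live. The boundary data and pole counts are illustrative choices.

```python
import numpy as np

# Laplace problem on the square [-1,1]^2 with boundary data g = Re(exp(z));
# the approximation u = Re(polynomial + simple poles) is harmonic inside,
# so by the maximum principle the interior error is bounded by the boundary fit.
corners = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])
t = np.linspace(-1, 1, 100)
bnd = np.concatenate([t + 1j, t - 1j, 1 + 1j * t, -1 + 1j * t])  # boundary samples
g = np.real(np.exp(bnd))

npoles = 12
sig = np.exp(-np.arange(npoles) / np.sqrt(npoles))   # exponential clustering
poles = np.concatenate([c * (1 + sig) for c in corners])  # just outside each corner

def basis(z):
    # real and imaginary parts of z^k and 1/(z - pole)
    cols = [z**k for k in range(11)] + [1 / (z - p) for p in poles]
    Acplx = np.stack(cols, axis=1)
    return np.concatenate([Acplx.real, Acplx.imag], axis=1)

coef, *_ = np.linalg.lstsq(basis(bnd), g, rcond=None)
err = np.max(np.abs(basis(bnd) @ coef - g))
print(f"max boundary error: {err:.2e}")
```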

Tue, 28 Jan 2020
14:00
L5

Stable Computation of Generalized Polar Decompositions

Carolin Penke
(MPI-Magdeburg)
Abstract

The QDWH algorithm can compute the polar decomposition of a matrix in a stable and efficient way. We generalize this method in order to compute generalized polar decompositions with respect to signature matrices. Here, the role of the QR decomposition is played by the hyperbolic QR decomposition. However, it doesn't show the same favorable properties concerning stability as its orthogonal counterpart. Remedies are found by exploiting connections to the LDL^T factorization and by employing well-conditioned permuted graph bases. The computed polar decomposition is used to formulate a structure-preserving spectral divide-and-conquer method for pseudosymmetric matrices. Applications of this method are found in computational quantum physics, where eigenvalues and eigenvectors describe optical properties of condensed matter or molecules. Additional properties guarantee fast convergence and a reduction to symmetric definite eigenvalue problems after just one step of spectral divide-and-conquer.
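For readers unfamiliar with QDWH: it is a dynamically weighted, QR-based acceleration of the Halley iteration for the polar decomposition. The sketch below implements the plain (fixed-weight) Halley iteration for a square matrix, which QDWH refines; it is a simplified stand-in, not the QDWH algorithm itself, and certainly not the hyperbolic generalization of the talk.

```python
import numpy as np

def polar_halley(A, tol=1e-12, max_iter=50):
    """Polar decomposition A = U H of a square nonsingular A via Halley's
    iteration X -> X (3I + X'X)(I + 3X'X)^{-1}; QDWH replaces the fixed
    weights (3, 1, 3) by dynamic ones and uses a QR-based update."""
    n = A.shape[0]
    X = A / np.linalg.norm(A, 2)          # scale so singular values lie in (0, 1]
    for _ in range(max_iter):
        G = X.T @ X
        Xn = X @ (3 * np.eye(n) + G) @ np.linalg.inv(np.eye(n) + 3 * G)
        done = np.linalg.norm(Xn - X) < tol
        X = Xn
        if done:
            break
    U = X                                  # orthogonal polar factor
    H = U.T @ A
    H = (H + H.T) / 2                      # symmetrize the positive factor
    return U, H

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
U, H = polar_halley(A)
```

The explicit inverse here is exactly what the QR-based formulation of QDWH avoids for stability.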

Tue, 28 Jan 2020
14:30
L5

Dimensionality reduction techniques for global optimization

Adilet Otemissov
(Oxford)
Abstract

We show that the scalability challenges of global optimisation algorithms can be overcome for functions with low effective dimensionality, which are constant along certain linear subspaces. Such functions can often be found in applications, for example, in hyper-parameter optimisation for neural networks, heuristic algorithms for combinatorial optimisation problems and complex engineering simulations. We propose the use of random subspace embeddings within a(ny) global minimisation algorithm, extending the approach in Wang et al. (2016). Using tools from random matrix theory and conic integral geometry, we investigate the efficacy and convergence of our random subspace embeddings approach, in a static and/or adaptive formulation. We illustrate our algorithmic proposals and theoretical findings numerically, using state-of-the-art global solvers. This work is joint with Coralia Cartis.
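The core embedding idea can be sketched in a few lines: if $f$ has low effective dimensionality, minimising the reduced function $y \mapsto f(Ay)$ over a random low-dimensional subspace recovers the global minimum with high probability. The dimensions, test function, and local solver below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
D, d_eff, d = 100, 2, 5          # ambient, effective, and embedding dimensions

# A function of 100 variables that is constant along a 98-dimensional subspace:
# it depends only on the two coordinates z = Q^T x.
Q, _ = np.linalg.qr(rng.standard_normal((D, d_eff)))

def f(x):
    z = Q.T @ x
    return (z[0] - 1.0)**2 + (z[1] + 2.0)**2     # global minimum value 0

# Random subspace embedding: minimise the d-dimensional reduced problem instead.
A = rng.standard_normal((D, d))
res = minimize(lambda y: f(A @ y), np.zeros(d), method="BFGS")
print(f"reduced-problem minimum: {res.fun:.2e}")
```

Since the embedding dimension (5) exceeds the effective dimension (2), the reduced problem attains the global minimum almost surely, despite the solver never seeing the 100-dimensional space.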
 

Tue, 21 Jan 2020
14:30
L5

Nonlinear Subspace Correction Methods

Thomas Roy
(Oxford)
Abstract

Successive subspace correction (SSC) methods are based on a "divide and conquer" strategy, where a global problem is divided into a sequence of local ones. This framework includes iterative methods such as Jacobi iteration, domain decomposition, and multigrid methods. We are interested in nonlinear PDEs exhibiting highly localized nonlinearities, for which Newton's method can take many iterations. Instead of doing all this global work, nonlinear SSC methods tackle the localized nonlinearities within subproblems. In this presentation, we describe the SSC framework, and investigate combining Newton's method with nonlinear SSC methods to solve a regularized Stefan problem.
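A minimal instance of nonlinear SSC is nonlinear block Gauss-Seidel: sweep over subsets of unknowns, solving each local nonlinear subproblem by Newton's method while the rest are frozen. The sketch below applies this to a toy 1D nonlinear system (an illustrative problem, not the talk's Stefan problem).

```python
import numpy as np

# Nonlinear system F(u) = A u + u^3 - b = 0 with A the 1D Laplacian stencil.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
u = np.zeros(n)

blocks = [list(range(0, 10)), list(range(10, 20))]   # two overlapping-free subspaces

for sweep in range(100):                  # nonlinear Gauss-Seidel sweeps
    for idx in blocks:
        for _ in range(20):               # local Newton solve, others frozen
            Floc = (A @ u + u**3 - b)[idx]
            if np.linalg.norm(Floc) < 1e-12:
                break
            Jloc = A[np.ix_(idx, idx)] + np.diag(3 * u[idx]**2)
            u[idx] -= np.linalg.solve(Jloc, Floc)

res = np.linalg.norm(A @ u + u**3 - b)
print(f"global residual after sweeps: {res:.2e}")
```

All the Newton work happens in 10-dimensional subproblems; a global Newton method would factor the full Jacobian at every iteration even when the nonlinearity is confined to one block.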
 

Tue, 21 Jan 2020
14:00
L5

Vandermonde with Arnoldi

Nick Trefethen
(Oxford)
Abstract

Vandermonde matrices are exponentially ill-conditioned, rendering the familiar “polyval(polyfit)” algorithm for polynomial interpolation and least-squares fitting ineffective at higher degrees. We show that Arnoldi orthogonalization fixes the problem.
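The fix can be stated compactly: instead of forming the Vandermonde matrix column by column as powers of $x$, build a well-conditioned basis for the same polynomial space by Arnoldi orthogonalization, and reuse the recurrence (stored in a Hessenberg matrix) to evaluate at new points. The sketch below is a NumPy transcription of this idea under a discretely weighted inner product.

```python
import numpy as np

def polyfitA(x, f, n):
    """Degree-n least-squares polynomial fit of data f at points x,
    using Arnoldi orthogonalization instead of a Vandermonde matrix."""
    m = len(x)
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = 1.0
    for k in range(n):
        q = x * Q[:, k]                       # multiply by x (the "Krylov" step)
        for j in range(k + 1):
            H[j, k] = Q[:, j] @ q / m         # orthogonalize in <u,v> = u.v/m
            q = q - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(q) / np.sqrt(m)
        Q[:, k + 1] = q / H[k + 1, k]
    d, *_ = np.linalg.lstsq(Q, f, rcond=None)  # well-conditioned least squares
    return d, H

def polyvalA(d, H, s):
    """Evaluate the fitted polynomial at points s by replaying the
    Arnoldi recurrence stored in H."""
    M, n = len(s), H.shape[1]
    W = np.zeros((M, n + 1))
    W[:, 0] = 1.0
    for k in range(n):
        w = s * W[:, k]
        for j in range(k + 1):
            w = w - H[j, k] * W[:, j]
        W[:, k + 1] = w / H[k + 1, k]
    return W @ d

x = np.linspace(-1, 1, 500)
d, H = polyfitA(x, np.exp(x), 20)
err = np.max(np.abs(polyvalA(d, H, x) - np.exp(x)))
print(f"degree-20 fit of exp on equispaced points: max error {err:.2e}")
```

The same degree-20 fit through `polyval(polyfit(...))` on equispaced points already suffers visibly from the Vandermonde matrix's exponential ill-conditioning.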
