Tue, 03 Mar 2020
14:30
L2

### Stochastic rounding: effect on linear algebra operations and application to time-dependent PDEs

Matteo Croci
(Oxford)
Abstract

The standard rounding procedure in floating-point computations is round-to-nearest (RN). In this talk we consider an alternative strategy called stochastic rounding (SR), which has the appealing property of being exact in expectation: each rounded quantity is an unbiased estimate of the exact one. In the first part of the talk we discuss recent developments in probabilistic rounding error analysis and show that rounding errors grow at rate $O(\sqrt{n})$ rather than $O(n)$ when SR is employed, so that Wilkinson's rule of thumb provably holds for this type of rounding. In the second part of the talk we consider the application of SR to parabolic PDEs admitting a steady-state solution. We show that when the heat equation is solved in half precision, RN fails to compute an accurate solution, while SR solves the problem to good accuracy.
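
The mechanism is easy to emulate in software. Below is a minimal NumPy sketch (a toy emulation, not the talk's half-precision experiments): numbers are rounded to a fixed grid of spacing `eps`, standing in for a low-precision format, and a toy summation shows the stagnation of RN that SR avoids.

```python
import numpy as np

rng = np.random.default_rng(0)

def sr(x, eps):
    """Stochastically round x to the grid {k*eps}: round up with probability
    proportional to the distance from the grid point below, so E[sr(x)] = x."""
    lo = np.floor(x / eps) * eps
    return lo + eps * (rng.random() < (x - lo) / eps)

def rn(x, eps):
    """Round to nearest on the same grid (ties are immaterial here)."""
    return np.round(x / eps) * eps

# Summing n terms smaller than half the grid spacing: RN stagnates at 0,
# while SR lands near the exact answer (errors grow like sqrt(n), not n).
eps, n, term = 2.0**-10, 100_000, 1e-4
s_rn = s_sr = 0.0
for _ in range(n):
    s_rn = rn(s_rn + term, eps)
    s_sr = sr(s_sr + term, eps)
print(s_rn, s_sr, n * term)   # exact answer: 10.0
```

Because each term is below half the grid spacing, RN rounds every partial sum back down and the computed sum never leaves zero, while the SR sum is correct in expectation.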

Tue, 03 Mar 2020
14:00
L2

### Deterministic Dynamic Pricing via Iterative Quadratic Programming

Jari Fowkes
(Oxford)
Abstract

We consider the problem of dynamically pricing multiple products on a network of resources, such as that faced by an airline selling tickets on its route network. For computational reasons this inherently stochastic problem is often approximated deterministically; however, even the deterministic dynamic pricing problem can be impractical to solve. For this reason we have derived a novel iterative Quadratic Programming (QP) approximation to the deterministic dynamic pricing problem that is not only very fast to solve in practice but also exhibits a provably linear rate of convergence. This is joint work with Saksham Jain and Raphael Hauser.
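
The abstract does not spell out the algorithm, but the flavour of an iterative QP scheme can be sketched (this is an illustrative stand-in with made-up numbers, not the authors' method): nonlinear demand curves are re-linearized around the current prices at each iteration, so every subproblem is a small quadratic program, and the price iterates converge linearly.

```python
import numpy as np
from scipy.optimize import minimize

# Toy pricing model: demand d_j(p) = a_j * exp(-b_j * p_j) for two products
# sharing one capacity. Re-linearizing the demand at the current prices makes
# the revenue quadratic and the constraint linear, i.e. each subproblem a QP.
a, b, cap = np.array([100.0, 80.0]), np.array([0.02, 0.03]), 120.0
d  = lambda p: a * np.exp(-b * p)
dd = lambda p: -b * a * np.exp(-b * p)       # derivative of the demand

p = np.array([10.0, 10.0])
for it in range(40):
    d0, g = d(p), dd(p)                      # linearize demand at current p
    neg_rev = lambda q: -q @ (d0 + g * (q - p))            # QP objective
    cons = [{'type': 'ineq',
             'fun': lambda q: cap - np.sum(d0 + g * (q - p))}]
    p_new = minimize(neg_rev, p, bounds=[(0.0, None)] * 2,
                     constraints=cons, method='SLSQP').x
    if np.linalg.norm(p_new - p) < 1e-8:
        break
    p = p_new

print(p, d(p))   # converges linearly to the revenue-maximizing prices
```

For this toy demand model the fixed-point iteration contracts with rate 1/2, a concrete instance of the linear convergence the abstract refers to.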

Tue, 25 Feb 2020
14:00
L2

### Fast and stable randomized low-rank matrix approximation

Yuji Nakatsukasa
(Oxford)
Abstract

Randomized SVD has become an extremely successful approach for efficiently computing a low-rank approximation of matrices. In particular, the paper by Halko, Martinsson (who is speaking twice this week), and Tropp (SIREV 2011) contains extensive analysis and made it a very popular method.
The complexity for $m\times n$ matrices is $O(Nr+(m+n)r^2)$, where $N$ is the cost of a (fast) matrix-vector multiplication; this becomes $O(mn\log n+(m+n)r^2)$ for dense matrices. This work uses classical results in numerical linear algebra to reduce the computational cost to $O(Nr)$ without sacrificing numerical stability. The cost is then essentially optimal for many classes of matrices, including $O(mn\log n)$ for dense matrices. The method can also be adapted for updating, downdating and perturbing the matrix, and is especially efficient relative to previous algorithms for such purposes.
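
For reference, the classical Halko-Martinsson-Tropp method that this work accelerates can be sketched in a few lines of NumPy (a baseline sketch, not the new algorithm from the talk):

```python
import numpy as np

def randomized_svd(A, r, p=10, q=1):
    """Basic HMT randomized SVD: rank-r approximation with oversampling p
    and q power iterations for slowly decaying spectra."""
    m, n = A.shape
    G = np.random.default_rng(0).standard_normal((n, r + p))  # random sketch
    Y = A @ G                                 # sample the range of A
    for _ in range(q):
        Y = A @ (A.T @ Y)                     # optional power iteration
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis for range(Y)
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U)[:, :r], s[:r], Vt[:r]      # rank-r factors
```

The $O((m+n)r^2)$ term in the complexity above comes from the QR and small SVD steps of this baseline; the work described in the talk removes that term while keeping the method stable.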

Tue, 25 Feb 2020
14:30
L2

### Low-rank plus sparse matrices: ill-posedness and guaranteed recovery

Simon Vary
(Oxford)
Abstract

Robust principal component analysis and low-rank matrix completion are extensions of PCA that allow for outliers and missing entries, respectively. Solving these problems requires low coherence between the low-rank matrix and the canonical basis. However, in both problems the well-posedness issue is even more fundamental: in some cases, both robust PCA and matrix completion can fail to have any solution because the set of low-rank plus sparse matrices is not closed. Another consequence of this fact is that the lower restricted isometry property (RIP) bound cannot be satisfied for some low-rank plus sparse matrices unless further restrictions are imposed on the constituents. By restricting the energy of one of the components, we close the set and are able to derive the RIP over the set of low-rank plus sparse matrices for operators satisfying concentration of measure inequalities. We show that the RIP of an operator implies that exact recovery of a low-rank plus sparse matrix is possible with computationally tractable algorithms such as convex relaxations or line-search methods. We propose two efficient iterative methods, called Normalized Iterative Hard Thresholding (NIHT) and Normalized Alternative Hard Thresholding (NAHT), that provably recover a low-rank plus sparse matrix from subsampled measurements taken by an operator satisfying the RIP.
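
As a concrete illustration of the hard-thresholding flavour of such methods, here is a minimal alternating sketch for the fully observed case $Y = L + S$ (a simplified stand-in, not the NIHT/NAHT algorithms from the talk, which handle subsampled measurements):

```python
import numpy as np

def project_rank(M, r):
    """Best rank-r approximation of M via truncated SVD."""
    U, sig, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * sig[:r]) @ Vt[:r]

def hard_threshold(M, s):
    """Keep the s largest-magnitude entries of M, zero out the rest."""
    out = np.zeros_like(M)
    idx = np.unravel_index(np.argsort(np.abs(M), axis=None)[-s:], M.shape)
    out[idx] = M[idx]
    return out

def split_lowrank_sparse(Y, r, s, iters=100):
    """Alternately fit a rank-r matrix and an s-sparse matrix to Y."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(iters):
        L = project_rank(Y - S, r)       # low-rank fit to the residual
        S = hard_threshold(Y - L, s)     # sparse fit to the residual
    return L, S
```

The non-closedness discussed above shows why such iterations need an energy restriction on one component: without it, a minimizing sequence of rank-$r$ plus $s$-sparse matrices can fail to converge to a matrix of that form.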

Tue, 11 Feb 2020
14:00
L5

### Binary Matrix Completion for Recommender Systems

Melanie Beckerleg
(Oxford)

Tue, 04 Feb 2020
14:00
L5

### Matrix Factorization with Expander Graphs

Michael Murray
(Oxford)
Abstract

Many computational techniques in data science involve the factorization of a data matrix into the product of two or more structured matrices. Examples include PCA, which relies on computing an SVD; recommendation systems, which leverage non-negative matrix factorization; infilling missing entries with low-rank matrix completion; and finding sparse representations via dictionary learning. In our work we study a new matrix factorization problem, involving the recovery of $\textbf{A}$ and $\textbf{X}$ from $\textbf{Y} := \textbf{A}\textbf{X}$ under the following assumptions: $\textbf{A}$ is an $m \times n$ sparse binary matrix with a fixed number $d$ of nonzeros per column, and $\textbf{X}$ is an $n \times N$ sparse real matrix whose columns have $k$ nonzeros and are dissociated. This setup is inspired by similar models studied in the dictionary learning literature, as well as by potential connections with stochastic block models and combinatorial compressed sensing. In this talk we present a new algorithm, EBR, for solving this problem, together with recovery guarantees in the context of a particular probabilistic data model. Using the properties of expander graphs we are able to show, under certain assumptions, that with just $N = O(\log^2(n))$ samples EBR recovers the factorization up to permutation with high probability.
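
The data model is easy to instantiate. The sketch below (illustrative sizes, not the EBR algorithm itself) generates an instance of the kind studied in the talk; i.i.d. Gaussian values make the columns of $\textbf{X}$ dissociated (all subset sums distinct) with probability one.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, N, d, k = 64, 128, 200, 7, 5    # illustrative sizes

# A: m x n binary with exactly d ones per column, i.e. the adjacency
# matrix of a left-d-regular bipartite graph (an expander w.h.p.)
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

# X: n x N with k nonzeros per column, filled with Gaussian values
X = np.zeros((n, N))
for j in range(N):
    X[rng.choice(n, size=k, replace=False), j] = rng.standard_normal(k)

Y = A @ X   # observed matrix; the task is to recover A and X up to permutation
```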

Tue, 04 Feb 2020
14:30
L5

### Lightning Laplace and Stokes solvers

Pablo Brubeck
(Oxford)
Abstract

We extend the lightning Laplace solver (Gopal and Trefethen, SINUM 2019) to unbounded domains and to the biharmonic equation. As an illustration of the high accuracy of such methods, we obtain beautiful contour plots of Moffatt eddies.
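
To give the flavour of a lightning solver, here is a bare-bones least-squares sketch for the Laplace equation on the unit square with poles clustered exponentially at the corners (illustrative parameters; it omits the boundary weighting and Arnoldi stabilization of Gopal and Trefethen's method, and says nothing about the unbounded-domain and biharmonic extensions from the talk):

```python
import numpy as np

corners = np.array([0, 1, 1 + 1j, 1j])          # unit square in the complex plane
n_poles, sigma = 24, 4.0

# poles clustered exponentially outside the domain near each corner
poles = []
for c in corners:
    out = (c - (0.5 + 0.5j)) / abs(c - (0.5 + 0.5j))   # outward direction
    poles.append(c + out * np.exp(-sigma * np.sqrt(np.arange(1, n_poles + 1))))
poles = np.concatenate(poles)

# boundary samples, clustered toward the corners of each side
t = 0.5 * (1 - np.cos(np.pi * np.linspace(0, 1, 120)))
Z = np.concatenate([c0 + t * (c1 - c0)
                    for c0, c1 in zip(corners, np.roll(corners, -1))])
g = Z.real**2                                    # Dirichlet data u = x^2

def basis(z):
    """Re/Im parts of simple poles plus a low-degree polynomial tail;
    real combinations of these are harmonic inside the square."""
    F = np.hstack([1.0 / (z[:, None] - poles[None, :]),
                   z[:, None] ** np.arange(6)])
    return np.hstack([F.real, F.imag])

coef, *_ = np.linalg.lstsq(basis(Z), g, rcond=None)   # fit boundary data

z0 = np.array([0.3 + 0.4j])                       # a point inside the domain
print(basis(z0) @ coef)                           # harmonic approximation at z0
```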

Tue, 28 Jan 2020
14:30
L5

### Dimensionality reduction techniques for global optimization

(Oxford)
Abstract

We show that the scalability challenges of global optimisation algorithms can be overcome for functions with low effective dimensionality, which are constant along certain linear subspaces. Such functions can often be found in applications, for example in hyper-parameter optimisation for neural networks, heuristic algorithms for combinatorial optimisation problems, and complex engineering simulations. We propose the use of random subspace embeddings within a(ny) global minimisation algorithm, extending the approach of Wang et al. (2016). Using tools from random matrix theory and conic integral geometry, we investigate the efficacy and convergence of our random subspace embeddings approach, in static and adaptive formulations. We illustrate our algorithmic proposals and theoretical findings numerically, using state-of-the-art global solvers. This work is joint with Coralia Cartis.
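
The core trick fits in a few lines (a toy sketch under made-up assumptions: the objective below depends on only 2 of its 100 variables, and a local solver stands in for the global solvers used in the talk):

```python
import numpy as np
from scipy.optimize import minimize

D, d = 100, 5                                  # ambient and embedded dimensions
rng = np.random.default_rng(1)

def f(x):
    """Toy objective with low effective dimensionality: only x[0], x[1] matter."""
    return (x[0] - 1.0)**2 + (x[1] + 2.0)**2

A = rng.standard_normal((D, d))                # random subspace embedding
g = lambda y: f(A @ y)                         # reduced d-dimensional problem

res = minimize(g, np.zeros(d))                 # any (global) solver goes here
print(f(A @ res.x))                            # near 0: the minimum was captured
```

With the embedded dimension at least the effective dimension, the reduced problem here contains a global minimizer with probability one; quantifying such success probabilities in general, via conic integral geometry, is the subject of the talk.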

Tue, 21 Jan 2020
14:30
L5

### Nonlinear Subspace Correction Methods

Thomas Roy
(Oxford)
Abstract

Subspace correction (SC) methods are based on a "divide and conquer" strategy, in which a global problem is divided into a sequence of local ones. This framework includes iterative methods such as Jacobi iteration, domain decomposition, and multigrid methods. We are interested in nonlinear PDEs exhibiting highly localized nonlinearities, for which Newton's method can take many iterations. Instead of doing all this global work, nonlinear SC methods tackle the localized nonlinearities within subproblems. In this presentation, we describe the SC framework and investigate combining Newton's method with nonlinear SC methods to solve a regularized Stefan problem.
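
A minimal example of the idea (a block nonlinear Gauss-Seidel sketch for a toy problem, not the regularized Stefan problem from the talk): solve $-u'' + u^3 = 1$ on $(0,1)$ with homogeneous Dirichlet conditions, sweeping over blocks of unknowns and applying a few local Newton steps per block while the other unknowns stay frozen.

```python
import numpy as np

n, nblocks, h = 64, 4, 1.0 / 65
u = np.zeros(n)

def residual(u):
    """Residual of the centred finite-difference system -u'' + u^3 - 1 = 0."""
    up = np.concatenate([[0.0], u, [0.0]])           # Dirichlet boundary values
    return (-up[:-2] + 2 * up[1:-1] - up[2:]) / h**2 + u**3 - 1.0

blocks = np.array_split(np.arange(n), nblocks)
for sweep in range(20):
    for B in blocks:                                  # subspace correction sweep
        for _ in range(3):                            # local Newton steps
            m = len(B)
            # Jacobian of the block subproblem: tridiagonal + diagonal 3u^2 term
            J = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2 \
                + np.diag(3 * u[B]**2)
            u[B] -= np.linalg.solve(J, residual(u)[B])

print(np.linalg.norm(residual(u)))    # global residual after the sweeps
```

If the cubic term were strongly localized, only the blocks containing it would need Newton iterations, which is the saving the talk exploits.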

Tue, 21 Jan 2020
14:00
L5

### Vandermonde with Arnoldi

Nick Trefethen
(Oxford)
Abstract

Vandermonde matrices are exponentially ill-conditioned, rendering the familiar “polyval(polyfit)” algorithm for polynomial interpolation and least-squares fitting ineffective at higher degrees. We show that Arnoldi orthogonalization fixes the problem.
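
The fix is short enough to state in full. Below is a NumPy sketch of the approach, modelled on the MATLAB codes in Brubeck, Nakatsukasa and Trefethen's "Vandermonde with Arnoldi" paper: instead of forming the monomial basis, an orthogonal basis for the Krylov space spanned by $\{1, x, x^2, \ldots\}$ is built column by column, and the Hessenberg matrix $H$ records the recurrence so the basis can be regenerated at new points.

```python
import numpy as np

def polyfitA(x, f, n):
    """Degree-n least-squares fit on points x via Arnoldi orthogonalization.
    Returns coefficients d and the Hessenberg recurrence matrix H."""
    m = len(x)
    Q = np.zeros((m, n + 1)); Q[:, 0] = 1.0
    H = np.zeros((n + 1, n))
    for k in range(n):
        q = x * Q[:, k]                        # next Krylov vector
        for j in range(k + 1):                 # orthogonalize (modified GS)
            H[j, k] = Q[:, j] @ q / m
            q -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(q) / np.sqrt(m)
        Q[:, k + 1] = q / H[k + 1, k]
    d = np.linalg.lstsq(Q, f, rcond=None)[0]   # well-conditioned LS problem
    return d, H

def polyvalA(d, H, s):
    """Evaluate the fit at points s by replaying the Arnoldi recurrence."""
    W = np.zeros((len(s), H.shape[0])); W[:, 0] = 1.0
    for k in range(H.shape[1]):
        w = s * W[:, k]
        for j in range(k + 1):
            w -= H[j, k] * W[:, j]
        W[:, k + 1] = w / H[k + 1, k]
    return W @ d

# high-degree fit of the Runge function on Chebyshev-distributed points
x = np.cos(np.linspace(0, np.pi, 1000))
d, H = polyfitA(x, 1 / (1 + 25 * x**2), 80)
print(np.max(np.abs(polyvalA(d, H, x) - 1 / (1 + 25 * x**2))))
```

At degree 80 the plain Vandermonde route through `np.polyfit` typically loses all accuracy to the exponential ill-conditioning, while the Arnoldi version returns the fit to near machine-precision levels of the underlying approximation error.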
