Past Numerical Analysis Group Internal Seminar

1 May 2018
14:30
Alberto Paganini
Abstract

We construct a space of vector fields that are normal to differentiable curves in the plane. Its basis functions are defined via saddle point variational problems in reproducing kernel Hilbert spaces (RKHSs). First, we study the properties of these basis vector fields and show how to approximate them. Then, we employ this basis to discretise shape Newton methods and investigate the impact of this discretisation on convergence rates.

  • Numerical Analysis Group Internal Seminar
1 May 2018
14:00
Lindon Roberts
Abstract

Structure from Motion (SfM) is a problem which asks: given photos of an object from different angles, can we reconstruct the object in 3D? This problem is important in computer vision, with applications including urban planning and autonomous navigation. A key part of SfM is bundle adjustment, where initial estimates of 3D points and camera locations are refined to match the images. This results in a high-dimensional nonlinear least-squares problem, which is typically solved using the Gauss-Newton method. In this talk, I will discuss how dimensionality reduction methods such as block coordinates and randomised sketching can be used to improve the scalability of Gauss-Newton for bundle adjustment problems.
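For illustration, a minimal sketch (not the speaker's implementation) of a Gauss-Newton iteration in which each tall least-squares subproblem is compressed with a random Gaussian sketch before being solved; the toy exponential-fit residual, sketch size and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual(x, t, y):
    # toy model: fit y ~ x0 * exp(x1 * t); bundle adjustment has the same
    # nonlinear least-squares structure, just with far more residuals/variables
    return x[0] * np.exp(x[1] * t) - y

def jacobian(x, t):
    J = np.empty((t.size, 2))
    J[:, 0] = np.exp(x[1] * t)
    J[:, 1] = x[0] * t * np.exp(x[1] * t)
    return J

def sketched_gauss_newton(x0, t, y, sketch_rows=50, iters=20):
    x = x0.copy()
    for _ in range(iters):
        r = residual(x, t, y)
        J = jacobian(x, t)
        # Gaussian sketch compresses the tall least-squares subproblem
        S = rng.standard_normal((sketch_rows, t.size)) / np.sqrt(sketch_rows)
        step, *_ = np.linalg.lstsq(S @ J, -(S @ r), rcond=None)
        x = x + step
    return x

t = np.linspace(0.0, 1.0, 2000)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * rng.standard_normal(t.size)
print(sketched_gauss_newton(np.array([1.0, -1.0]), t, y))   # roughly [2.0, -1.5]
```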

  • Numerical Analysis Group Internal Seminar
24 April 2018
14:30
Abinand Gopal
Abstract

Over the past decade, the randomized singular value decomposition (RSVD) algorithm has proven to be an efficient, reliable alternative to classical algorithms for computing low-rank approximations in a number of applications. However, in cases where no information is available on the singular value decay of the data matrix or the data matrix is known to be close to full-rank, the RSVD is ineffective. In recent years, there has been great interest in randomized algorithms for computing full factorizations that excel in this regime.  In this talk, we will give a brief overview of some key ideas in randomized numerical linear algebra and introduce a new randomized algorithm for computing a full, rank-revealing URV factorization.
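For illustration, a minimal sketch of the basic RSVD (range sampling, QR, small SVD), included only to fix ideas; the oversampling and power-iteration choices are illustrative, and the new URV algorithm from the talk is not reproduced here:

```python
import numpy as np

def rsvd(A, k, oversample=10, power_iters=1, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # sample the range of A with a random Gaussian test matrix
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega
    for _ in range(power_iters):             # optional power iterations sharpen the basis
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                   # orthonormal basis for the sampled range
    B = Q.T @ A                              # small (k + oversample) x n matrix
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k, :]

# quick check on a matrix with rapidly decaying singular values
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 200)) * (0.5 ** np.arange(200))
U, s, Vt = rsvd(A, k=10)
print(np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))
```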

  • Numerical Analysis Group Internal Seminar
24 April 2018
14:00
Thomas Roy
Abstract

In oil and gas reservoir simulation, standard preconditioners involve solving a restricted pressure system with AMG. Initially designed for isothermal models, this approach is often used in the thermal case. However, it does not incorporate heat diffusion or the effects of temperature changes on fluid flow through viscosity and density. We seek to develop preconditioners which consider this cross-coupling between pressure and temperature. In order to study the effects of both pressure and temperature on fluid and heat flow, we first consider a model of non-isothermal single phase flow through porous media. By focusing on single phase flow, we are able to isolate the properties of the pressure-temperature subsystem. We present a numerical comparison of different preconditioning approaches including block preconditioners.
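For illustration, a minimal sketch of a block upper-triangular preconditioner for a coupled pressure-temperature system applied inside GMRES; the matrix blocks below are random stand-ins and the sub-block solves use sparse LU in place of AMG, so this only shows the preconditioner structure, not the reservoir model:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
rng = np.random.default_rng(0)
# random stand-ins for the discretised pressure and temperature blocks
App = 2.0 * sp.identity(n) + 0.1 * sp.random(n, n, density=0.01, random_state=0)
ATT = 3.0 * sp.identity(n) + 0.1 * sp.random(n, n, density=0.01, random_state=1)
ApT = 0.1 * sp.random(n, n, density=0.01, random_state=2)   # coupling blocks
ATp = 0.1 * sp.random(n, n, density=0.01, random_state=3)
A = sp.bmat([[App, ApT], [ATp, ATT]]).tocsc()
b = rng.standard_normal(2 * n)

# sub-block solvers: sparse LU here, AMG V-cycles in a realistic setting
App_solve = spla.splu(App.tocsc()).solve
ATT_solve = spla.splu(ATT.tocsc()).solve

def apply_prec(r):
    # block upper-triangular preconditioner: solve the temperature block first,
    # then correct the pressure right-hand side with the coupling term
    rp, rT = r[:n], r[n:]
    T = ATT_solve(rT)
    p = App_solve(rp - ApT @ T)
    return np.concatenate([p, T])

M = spla.LinearOperator((2 * n, 2 * n), matvec=apply_prec)
x, info = spla.gmres(A, b, M=M)
print("GMRES converged:", info == 0)
```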

  • Numerical Analysis Group Internal Seminar
22 March 2018
14:00
Simon Foucart
Abstract

The restricted isometry property is arguably the most prominent tool in the theory of compressive sensing. In its classical version, it features l_2 norms as inner and outer norms. The modified version considered in this talk features the l_1 norm as the inner norm, while the outer norm depends a priori on the distribution of the random entries populating the measurement matrix.  The modified version holds for a wider class of random matrices and still accounts for the success of sparse recovery via basis pursuit and via iterative hard thresholding. In the special case of Gaussian matrices, the outer norm actually reduces to an l_2 norm. This fact allows one to retrieve results from the theory of one-bit compressive sensing in a very simple way. Extensions to one-bit matrix recovery are then straightforward.
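For illustration, a minimal sketch of iterative hard thresholding, one of the recovery algorithms whose success the (modified) restricted isometry property accounts for; the Gaussian measurement matrix, unit step size and problem dimensions are illustrative choices:

```python
import numpy as np

def hard_threshold(x, s):
    # keep the s largest-magnitude entries, zero out the rest
    out = x.copy()
    out[np.argsort(np.abs(x))[:-s]] = 0.0
    return out

def iht(A, y, s, step=1.0, iters=300):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), s)
    return x

rng = np.random.default_rng(0)
m, n, s = 200, 400, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_hat = iht(A, A @ x_true, s)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```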

  • Numerical Analysis Group Internal Seminar
6 March 2018
14:30
Paul Moore
Abstract

Forecasting a diagnosis of Alzheimer’s disease is a promising means of selection for clinical trials of Alzheimer’s disease therapies. A positive PET scan is commonly used as part of the inclusion criteria for clinical trials, but PET imaging is expensive, so when a positive scan is one of the inclusion criteria it is desirable to avoid screening failures. In this talk I will describe a scheme for pre-selecting participants using statistical learning methods, and investigate how brain regions change as the disease progresses. As a means of generating features I apply the Chen path signature. This is a systematic way of providing feature sets for multimodal data that can probe the nonlinear interactions in the data, as an extension of the usual linear features. While it can easily perform a traditional analysis, it can also probe second- and higher-order events for their predictive value. Combined with Lasso regularisation, one can automatically detect situations where the observed data contain nonlinear information.
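For illustration, a minimal sketch of low-order path-signature features (levels 1 and 2) combined with Lasso on synthetic trajectories; the random-walk data, truncation level and regression target are illustrative assumptions, not the clinical pipeline from the talk:

```python
import numpy as np
from sklearn.linear_model import Lasso

def signature_level2(path):
    # path: (T, d) array; returns level-1 increments and level-2 iterated integrals
    inc = np.diff(path, axis=0)               # increments, shape (T-1, d)
    level1 = inc.sum(axis=0)                  # total increment per channel
    centred = path[:-1] - path[0]             # x_i(t) - x_i(0) at left endpoints
    level2 = centred.T @ inc                  # approximates \int (x_i - x_i(0)) dx_j
    return np.concatenate([level1, level2.ravel()])

rng = np.random.default_rng(0)
n_subjects, T, d = 200, 50, 3
X, y = [], []
for _ in range(n_subjects):
    path = np.cumsum(rng.standard_normal((T, d)), axis=0)   # random-walk "trajectory"
    feats = signature_level2(path)
    X.append(feats)
    # synthetic response driven by one level-1 and one level-2 term
    y.append(2.0 * feats[0] + 0.5 * feats[d] + 0.1 * rng.standard_normal())
X, y = np.array(X), np.array(y)

model = Lasso(alpha=0.1).fit(X, y)
print(np.round(model.coef_, 3))   # Lasso should concentrate weight on the informative terms
```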

  • Numerical Analysis Group Internal Seminar
6 March 2018
14:00
Oliver Sheridan-Methven
Abstract

The latest CPUs by Intel and ARM support vectorised operations, where a single instruction (e.g. add, multiply, bit shift, XOR) is performed in parallel on small batches of data. This can provide great performance improvements if every lane performs the same operation, but carries the risk of performance loss if the lanes need to perform different tasks (e.g. if-else conditions). I will present the work I have done so far on recovering the full performance of the hardware, and some of the challenges faced when trading off ever larger parallel tasks against the risk of tasks diverging, as well as how certain coding styles might be modified for memory-bandwidth-limited applications. Examples will be taken from finance and Monte Carlo applications, inspecting some standard maths library functions and possibly random number generation.
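For illustration, a minimal NumPy sketch (standing in for SIMD intrinsics) of replacing a data-dependent branch with a branchless select, which is how divergent if-else work is typically mapped onto vector lanes; the payoff-style function is an illustrative assumption, not an example from the talk:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000_000)

def branchy(x):
    # scalar loop with a data-dependent (divergent) branch
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        out[i] = np.exp(xi) if xi > 0.0 else 1.0 + xi
    return out

def branchless(x):
    # evaluate both sides on all lanes, then select: no divergence,
    # at the cost of some redundant work
    return np.where(x > 0.0, np.exp(x), 1.0 + x)

t0 = time.perf_counter(); a = branchless(x); t1 = time.perf_counter()
print(f"branchless / vectorised: {t1 - t0:.3f} s")
t0 = time.perf_counter(); b = branchy(x[:100_000]); t1 = time.perf_counter()
print(f"branchy scalar loop (100k elements only): {t1 - t0:.3f} s")
print("results agree:", np.allclose(a[:100_000], b))
```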

  • Numerical Analysis Group Internal Seminar
27 February 2018
14:30
Simon Vary
Abstract

Low-rank plus sparse matrices arise in many data-oriented applications, most notably in foreground-background separation from a moving camera. It is known that low-rank matrix recovery from a few entries (low-rank matrix completion) requires low coherence (Candès et al. 2009): in the extreme case where the low-rank matrix is also sparse, matrix completion can miss information and fail to recover it. However, the requirement of low coherence does not suffice in the low-rank plus sparse model, as the set of low-rank plus sparse matrices is not closed. We will discuss the relation of this non-closedness to the notion of the matrix rigidity function in complexity theory.
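For illustration, a minimal sketch of a simple alternating low-rank/sparse separation heuristic (truncated SVD for the low-rank part, entrywise hard thresholding for the sparse part) on synthetic data; it is only meant to fix the model in the foreground-background spirit, not the coherence or rigidity analysis discussed in the talk:

```python
import numpy as np

def svd_truncate(M, r):
    # best rank-r approximation via the truncated SVD
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def keep_largest(M, k):
    # keep the k largest-magnitude entries of M, zero out the rest
    out = np.zeros_like(M)
    idx = np.unravel_index(np.argsort(np.abs(M), axis=None)[-k:], M.shape)
    out[idx] = M[idx]
    return out

def separate(M, r, k, iters=30):
    L = np.zeros_like(M)
    for _ in range(iters):
        S = keep_largest(M - L, k)
        L = svd_truncate(M - S, r)
    return L, S

rng = np.random.default_rng(0)
m, n, r, k = 100, 80, 3, 200
L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))      # "background"
S_true = np.zeros((m, n))
S_true[rng.integers(0, m, k), rng.integers(0, n, k)] = 10.0 * rng.standard_normal(k)  # "foreground"
L_hat, S_hat = separate(L_true + S_true, r, k)
print("relative error in L:", np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
```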

  • Numerical Analysis Group Internal Seminar
27 February 2018
14:00
Tabea Tscherpel
Abstract

The object of this talk is a class of generalised Newtonian fluids with implicit constitutive law.
Both in the steady and the unsteady case, existence of weak solutions was proven by Bulíček et al. (2009, 2012), and the main challenge is the small growth exponent q and the implicit law.
I will discuss the application of a splitting and regularising strategy to show convergence of FEM approximations to weak solutions of the flow. 
In the steady case this allows us to cover the full range of growth exponents and thus generalises existing work of Diening et al. (2013). If time permits, I will also address the unsteady case.
This is joint work with Endre Süli.

  • Numerical Analysis Group Internal Seminar
20 February 2018
14:30
Bogdan Toader
Abstract

We consider the problem of localising non-negative point sources, namely finding their locations and amplitudes from noisy samples which consist of the convolution of the input signal with a known kernel (e.g. Gaussian). In contrast to the existing literature, which focuses on TV-norm minimisation, we analyse the feasibility problem. In the presence of noise, we show that the localisation error is proportional to the noise level and depends on the distance between each source and the closest samples. This is achieved using duality and by considering the spectrum of the associated sampling matrix.
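For illustration, a minimal sketch of the sampling setup: non-negative sources on a fine grid convolved with a Gaussian kernel, noisy samples at a few locations, and a non-negative least-squares fit; the grid, kernel width and noise level are illustrative assumptions, and the duality-based feasibility analysis from the talk is not reproduced:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 401)             # candidate source locations
samples = np.linspace(0.0, 1.0, 60)           # sample locations
sigma = 0.05                                  # width of the Gaussian kernel

# sampling matrix: kernel evaluated at (sample, candidate-location) pairs
Phi = np.exp(-(samples[:, None] - grid[None, :]) ** 2 / (2.0 * sigma ** 2))

# two well-separated positive sources
x_true = np.zeros(grid.size)
x_true[np.searchsorted(grid, 0.3)] = 1.0
x_true[np.searchsorted(grid, 0.7)] = 0.6
y = Phi @ x_true + 1e-3 * rng.standard_normal(samples.size)

x_hat, _ = nnls(Phi, y)                        # non-negative least-squares fit
support = np.flatnonzero(x_hat > 0.05)
print("estimated source locations:", grid[support])
print("estimated amplitudes:", np.round(x_hat[support], 3))
```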
