Mon, 12 Feb 2018

14:15 - 15:15
L3

Regularization by noise and path-by-path uniqueness for SDEs and SPDEs.

OLEG BUTKOVSKY
(Technion Israel)
Abstract

(Joint work with Siva Athreya & Leonid Mytnik).

It is well known from the literature that ordinary differential equations (ODEs) regularize in the presence of noise. Even if an ODE is “very bad” and has no solutions (or has multiple solutions), the addition of a random noise almost surely leads to a “nice” ODE with a unique solution. The first part of the talk will be devoted to SDEs with distributional drift driven by alpha-stable noise. These equations are not well-posed in the classical sense. We define a natural notion of a solution to such equations and show its existence and uniqueness whenever the drift belongs to a certain negative Besov space. This generalizes results of E. Priola (2012) and extends to the context of stable processes the classical results of A. Zvonkin (1974) as well as the more recent results of R. Bass and Z.-Q. Chen (2001).

In the second part of the talk we investigate the same phenomenon for a 1D heat equation with an irregular drift. We prove existence and uniqueness of the flow of solutions and, as a byproduct of our proof, we also establish path-by-path uniqueness. This extends recent results of A. Davie (2007) to the context of stochastic partial differential equations.

[1] O. Butkovsky, L. Mytnik (2016). Regularization by noise and flows of solutions for a stochastic heat equation. arXiv:1610.02553. To appear in Ann. Probab.

[2] S. Athreya, O. Butkovsky, L. Mytnik (2018). Strong existence and uniqueness for stable stochastic differential equations with distributional drift. arXiv:1801.03473.
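
As a minimal numerical illustration of the regularization-by-noise phenomenon described above (a toy sketch only, not the construction used in the talk: the drift is the classical square-root example, and for simplicity the noise here is Brownian rather than alpha-stable):

```python
import math
import random

def euler_maruyama(x0, drift, T=1.0, n=1000, sigma=1.0, seed=0):
    """Simulate dX = drift(X) dt + sigma dW with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    dt = T / n
    x, path = x0, [x0]
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x = x + drift(x) * dt + sigma * dw
        path.append(x)
    return path

# The ODE x' = sqrt(|x|) has infinitely many solutions started from x0 = 0
# (x = 0 and x = t^2/4 both solve it); with additive noise the SDE is well posed.
drift = lambda x: math.sqrt(abs(x))
path = euler_maruyama(0.0, drift)
```

Running the scheme twice with the same Brownian path (same seed) produces the same trajectory, which is the discrete shadow of pathwise uniqueness.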

Mon, 22 Jan 2018

14:15 - 15:15
L3

Smooth Gaussian fields and critical percolation

DMITRY BELYAEV
(University of Oxford)
Abstract

Smooth Gaussian functions appear naturally in many areas of mathematics. Most of the talk will be about two special cases: the random plane wave model and the Bargmann-Fock ensemble. Random plane waves are conjectured to be a universal model for high-energy eigenfunctions of the Laplace operator in a generic domain. The Bargmann-Fock ensemble appears in quantum mechanics and is the scaling limit of the Kostlan ensemble, which is a good model for a 'typical' projective variety. It is believed that these models, despite their very different origins, have something in common: their scaling limits are described by the critical percolation model. This ties together ideas and methods from many different areas of mathematics: probability, analysis on manifolds, partial differential equations, projective geometry, number theory and mathematical physics. In the talk I will introduce all these models, explain the conjectures relating them, and talk about recent progress in understanding these conjectures.
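
A random plane wave can be approximated by a finite superposition of plane waves with random directions and phases; the percolation questions above concern the geometry of the field's sign (its nodal domains). The following sketch (parameters illustrative, not from the talk) samples such a field:

```python
import math
import random

def random_plane_wave(n_waves=200, k=1.0, seed=1):
    """Finite superposition approximating the random plane wave model:
    f(x, y) = sqrt(2/N) * sum_j cos(k (x cos a_j + y sin a_j) + phase_j),
    with i.i.d. uniform directions a_j and phases; as N -> infinity this
    approaches a stationary Gaussian field with unit variance."""
    rng = random.Random(seed)
    angles = [rng.uniform(0, 2 * math.pi) for _ in range(n_waves)]
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n_waves)]
    norm = math.sqrt(2.0 / n_waves)
    def f(x, y):
        return norm * sum(math.cos(k * (x * math.cos(a) + y * math.sin(a)) + p)
                          for a, p in zip(angles, phases))
    return f

f = random_plane_wave()
# The percolation-type questions concern the sign pattern of f on large scales.
signs = [[1 if f(0.5 * i, 0.5 * j) > 0 else 0 for j in range(20)]
         for i in range(20)]
```

The conjecture is, roughly, that connected components of the positive set of such fields behave on large scales like clusters of critical percolation.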

Tue, 06 Mar 2018

14:30 - 15:00
L5

Predicting diagnosis and cognitive measures for Alzheimer’s disease

Paul Moore
(Oxford University)
Abstract

Forecasting a diagnosis of Alzheimer’s disease is a promising means of selection for clinical trials of Alzheimer’s disease therapies. A positive PET scan is commonly used as part of the inclusion criteria for clinical trials, but PET imaging is expensive, so when a positive scan is one of the trial inclusion criteria it is desirable to avoid screening failures. In this talk I will describe a scheme for pre-selecting participants using statistical learning methods, and investigate how brain regions change as the disease progresses. As a means of generating features, I apply the Chen path signature. This is a systematic way of providing feature sets for multimodal data that can probe the nonlinear interactions in the data as an extension of the usual linear features. While it can easily perform a traditional analysis, it can also probe second and higher order events for their predictive value. Combined with Lasso regularisation, one can automatically detect situations where the observed data contains nonlinear information.
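
To make the feature construction concrete, here is a minimal sketch (not the talk's pipeline) of the first two levels of the path signature, computed exactly for a piecewise-linear path; level 1 gives the usual linear increments, level 2 the second-order iterated integrals:

```python
def signature_level_1_2(path):
    """Depth-1 and depth-2 terms of the (Chen) path signature, computed
    exactly for a piecewise-linear path given as a list of d-dim points:
    S1[i] = total increment of coordinate i;
    S2[i][j] = iterated integral of (X^i - X^i_0) dX^j."""
    d = len(path[0])
    s1 = [0.0] * d
    s2 = [[0.0] * d for _ in range(d)]
    running = [0.0] * d          # X_t - X_0 at the start of each segment
    for prev, cur in zip(path, path[1:]):
        dx = [c - p for c, p in zip(cur, prev)]
        for i in range(d):
            for j in range(d):
                # exact on a linear segment: (X_start - X_0) dX + dX (x) dX / 2
                s2[i][j] += running[i] * dx[j] + 0.5 * dx[i] * dx[j]
        for i in range(d):
            running[i] += dx[i]
            s1[i] += dx[i]
    return s1, s2

path = [(0.0, 0.0), (1.0, 0.5), (1.5, 2.0), (0.5, 2.5)]
s1, s2 = signature_level_1_2(path)
```

A useful sanity check is the shuffle identity S2[i][j] + S2[j][i] = S1[i] * S1[j], which holds for any path; the genuinely new second-order information sits in the antisymmetric part (the Lévy area).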

Tue, 06 Mar 2018

14:00 - 14:30
L5

Achieving high performance through effective vectorisation

Oliver Sheridan-Methven
(InFoMM)
Abstract

The latest CPUs by Intel and ARM support vectorised operations, where a single instruction (e.g. add, multiply, bit shift, XOR) is applied in parallel to small batches of data. This can provide great performance improvements if every element in a batch performs the same operation, but carries the risk of performance loss if elements need to perform different tasks (e.g. if-else branches). I will present the work I have done so far looking into how to recover the full performance of the hardware, and some of the challenges faced when trading off between ever larger parallel tasks, the risk of tasks diverging, and how certain coding styles might be modified for memory-bandwidth-limited applications. Examples will be taken from finance and Monte Carlo applications, inspecting some standard maths library functions and possibly random number generation.
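
The core transformation for avoiding divergence is predication: replace a data-dependent branch with arithmetic that every element executes identically. A Python sketch of the idea (the hardware-level version would use compiler auto-vectorisation or intrinsics; the call-option payoff is an illustrative finance-flavoured example, not taken from the talk):

```python
def payoff_branchy(xs, strike):
    """Per-element branch: SIMD lanes would diverge on the if/else."""
    out = []
    for x in xs:
        if x > strike:
            out.append(x - strike)
        else:
            out.append(0.0)
    return out

def payoff_branchless(xs, strike):
    """Predicated form: every element executes the same instructions
    (compare, subtract, multiply), so the loop maps directly onto
    vector (SIMD) lanes with no divergence."""
    return [(x - strike) * (x > strike) for x in xs]

xs = [0.5, 1.2, 2.0, 0.9]
assert payoff_branchy(xs, 1.0) == payoff_branchless(xs, 1.0)
```

The branchless version does slightly more arithmetic per element, but because all lanes follow one code path it vectorises cleanly, which is usually the better trade on wide SIMD units.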

Tue, 27 Feb 2018

14:30 - 15:00
L5

Low-rank plus Sparse matrix recovery and matrix rigidity

Simon Vary
(Oxford University)
Abstract

Low-rank plus sparse matrices arise in many data-oriented applications, most notably in foreground-background separation for a moving camera. It is known that low-rank matrix recovery from a few entries (low-rank matrix completion) requires low coherence (Candes et al. 2009): in the extreme case when the low-rank matrix is also sparse, matrix completion can miss information and fail to recover the matrix. However, the requirement of low coherence does not suffice in the low-rank plus sparse model, as the set of low-rank plus sparse matrices is not closed. We will discuss the relation of this non-closedness of the low-rank plus sparse model to the notion of the matrix rigidity function in complexity theory.
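
A toy instance of the model (illustrative only; it shows the decomposition itself, not the recovery algorithms discussed in the talk). A rank-1 "background" is perturbed by a sparse "foreground"; rank at most 1 is certified by checking that all 2x2 minors vanish:

```python
def outer(u, v):
    """Rank-1 matrix u v^T."""
    return [[a * b for b in v] for a in u]

def is_rank_at_most_1(M, tol=1e-12):
    """A matrix has rank <= 1 iff every 2x2 minor vanishes."""
    m, n = len(M), len(M[0])
    return all(abs(M[i][j] * M[k][l] - M[i][l] * M[k][j]) <= tol
               for i in range(m) for k in range(i + 1, m)
               for j in range(n) for l in range(j + 1, n))

# M = L + S: rank-1 background plus a sparse foreground of two changed pixels.
L = outer([1.0, 2.0, 3.0], [1.0, 1.0, 0.5])
S = [[0.0] * 3 for _ in range(3)]
S[0][2], S[2][0] = 5.0, -4.0
M = [[L[i][j] + S[i][j] for j in range(3)] for i in range(3)]

assert is_rank_at_most_1(L) and not is_rank_at_most_1(M)
```

The decomposition M = L + S is what the recovery problem must undo from observations of M alone; the subtlety discussed in the talk is that the set of such sums is not closed, so nearby matrices need not admit a nearby decomposition.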

Tue, 27 Feb 2018

14:00 - 14:30
L5

Finite element approximation of the flow of incompressible fluids with implicit constitutive law

Tabea Tscherpel
(PDE-CDT)
Abstract

The object of this talk is a class of generalised Newtonian fluids with implicit constitutive law.
Both in the steady and the unsteady case, existence of weak solutions was proven by Bulíček et al. (2009, 2012), and the main challenges are the small growth exponent q and the implicit law.
I will discuss the application of a splitting and regularising strategy to show convergence of FEM approximations to weak solutions of the flow. 
In the steady case this allows us to cover the full range of growth exponents and thus generalises existing work of Diening et al. (2013). If time permits, I will also address the unsteady case.
This is joint work with Endre Süli.

Tue, 20 Feb 2018

14:30 - 15:00
L5

Sparse non-negative super-resolution - simplified and stabilised

Bogdan Toader
(InFoMM)
Abstract

We consider the problem of localising non-negative point sources, namely finding their locations and amplitudes from noisy samples which consist of the convolution of the input signal with a known kernel (e.g. Gaussian). In contrast to the existing literature, which focuses on TV-norm minimisation, we analyse the feasibility problem. In the presence of noise, we show that the localisation error is proportional to the level of noise and depends on the distance between each source and the closest samples. This is achieved using duality and considering the spectrum of the associated sampling matrix.
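
The measurement model analysed above can be sketched as follows (a noiseless toy version with illustrative kernel width, grid, and sources; the localisation step itself, the feasibility problem, is not implemented here):

```python
import math

def gaussian_kernel(t, sigma=0.1):
    """Known point-spread function."""
    return math.exp(-t * t / (2 * sigma * sigma))

def samples(sources, sample_points, sigma=0.1):
    """Each sample is the sum of the non-negative point sources
    convolved with the known Gaussian kernel, evaluated at the sample point."""
    return [sum(a * gaussian_kernel(t - x, sigma) for x, a in sources)
            for t in sample_points]

# Two non-negative sources: locations 0.3 and 0.7, amplitudes 1.0 and 0.5.
sources = [(0.3, 1.0), (0.7, 0.5)]
grid = [k / 20 for k in range(21)]
y = samples(sources, grid)
```

The inverse problem is to recover the (location, amplitude) pairs from y alone; the error bound in the talk says the recovery degrades gracefully with the noise level and with how far each source sits from its nearest samples.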

Tue, 20 Feb 2018

14:00 - 14:30
L5

Inverse Problems in Electrochemistry

Katherine Gillow
(Oxford University)
Abstract

A simple experiment in the field of electrochemistry involves controlling the applied potential in an electrochemical cell. This causes electron transfer to take place at the electrode surface and in turn this causes a current to flow. The current depends on parameters in the system and the inverse problem requires us to estimate these parameters given an experimental trace of the current. We briefly describe recent work in this area from simple least squares approximation of the parameters, through bootstrapping to estimate the distributions of the parameters, to MCMC methods which allow us to see correlations between parameters.
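
The least-squares stage can be sketched as follows. The abstract does not specify the electrochemical forward model, so a hypothetical exponential-decay current i(t) = p0 * exp(-p1 * t) stands in for it, and a brute-force grid search stands in for the iterative solvers one would use in practice:

```python
import math

def model_current(t, p0, p1):
    """Hypothetical forward model i(t) = p0 * exp(-p1 * t); a stand-in for
    the real electrochemical model, which the abstract does not give."""
    return p0 * math.exp(-p1 * t)

def fit_least_squares(times, observed):
    """Least squares by exhaustive search over a parameter grid."""
    best = None
    for p0 in [0.5 + 0.1 * i for i in range(16)]:     # 0.5 .. 2.0
        for p1 in [0.1 * j for j in range(1, 31)]:    # 0.1 .. 3.0
            sse = sum((model_current(t, p0, p1) - y) ** 2
                      for t, y in zip(times, observed))
            if best is None or sse < best[0]:
                best = (sse, p0, p1)
    return best[1], best[2]

times = [0.1 * k for k in range(20)]
data = [model_current(t, 1.2, 0.8) for t in times]    # noiseless synthetic trace
p0_hat, p1_hat = fit_least_squares(times, data)
```

With a noiseless trace the fit recovers the generating parameters; bootstrapping then resamples the residuals to estimate parameter distributions, and MCMC explores the full posterior, exposing correlations between parameters.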

Tue, 13 Feb 2018

14:30 - 15:00
L5

From Convolutional Sparse Coding to Deep Sparsity and Neural Networks

Jeremias Sulam
(Technion Israel)
Abstract

Within the wide field of sparse approximation, convolutional sparse coding (CSC) has gained considerable attention in the computer vision and machine learning communities. While several works have been devoted to the practical aspects of this model, a systematic theoretical understanding of CSC seems to have been left aside. In this talk, I will present a novel analysis of the CSC problem based on the observation that, while being global, this model can be characterized and analyzed locally. By imposing only local sparsity conditions, we show that uniqueness of solutions, stability to noise contamination and success of pursuit algorithms are globally guaranteed. I will then present a Multi-Layer extension of this model and show its close relation to Convolutional Neural Networks (CNNs). This connection brings a fresh view to CNNs, as one can attribute to this architecture theoretical claims under local sparse assumptions, which shed light on ways of improving the design and implementation of these networks. Last, but not least, we will derive a learning algorithm for this model and demonstrate its applicability in unsupervised settings.
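
The global-yet-local structure of the CSC model can be seen in its synthesis equation: the signal is a sparse superposition of shifted copies of a small filter, i.e. a banded (convolutional) dictionary times a sparse code. A minimal one-filter sketch (filter and code values are illustrative):

```python
def conv_synthesize(filter_, code, n):
    """Convolutional sparse coding synthesis: place a shifted, scaled copy
    of the local filter at each active code position and sum.  Equivalent
    to multiplying a banded convolutional dictionary by a sparse vector."""
    x = [0.0] * n
    for pos, coeff in code:          # sparse code: (shift, coefficient) pairs
        for k, f in enumerate(filter_):
            if pos + k < n:
                x[pos + k] += coeff * f
    return x

filt = [1.0, -1.0, 0.5]              # small local filter (atom)
code = [(2, 2.0), (7, -1.0)]         # only two active coefficients
x = conv_synthesize(filt, code, 12)
```

Because each atom touches only a few consecutive entries, sparsity of the global code translates into local sparsity of every signal patch, which is exactly the local viewpoint the analysis in the talk exploits.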

Tue, 13 Feb 2018

14:00 - 14:30
L5

Cubic Regularization Method Revisited: Quadratic Convergence to Degenerate Solutions and Applications to Phase Retrieval and Low-rank Matrix Recovery

Man-Chung Yue
(Imperial College)
Abstract

In this talk, we revisit the cubic regularization (CR) method for solving smooth non-convex optimization problems and study its local convergence behaviour. In their seminal paper, Nesterov and Polyak showed that the sequence of iterates of the CR method converges quadratically to a local minimum under a non-degeneracy assumption, which implies that the local minimum is isolated. However, many optimization problems from applications such as phase retrieval and low-rank matrix recovery have non-isolated local minima. In the absence of the non-degeneracy assumption, the result was downgraded to the superlinear convergence of function values. In particular, they showed that the sequence of function values enjoys a superlinear convergence of order 4/3 (resp. 3/2) if the function is gradient dominated (resp. star-convex and globally non-degenerate). To remedy the situation, we propose a unified local error bound (EB) condition and show that the sequence of iterates of the CR method converges quadratically to a local minimum under the EB condition. Furthermore, we prove that the EB condition holds if the function is gradient dominated or if it is star-convex and globally non-degenerate, thus improving the results of Nesterov and Polyak in three aspects: weaker assumption, faster rate and iterate instead of function value convergence. Finally, we apply our results to two concrete non-convex optimization problems that arise from phase retrieval and low-rank matrix recovery. For both problems, we prove that with overwhelming probability, the local EB condition is satisfied and the CR method converges quadratically to a global optimizer. We also present some numerical results on these two problems.
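
For orientation, one CR iteration minimises the second-order Taylor model plus a cubic penalty on the step. A 1D toy sketch on f(x) = x^4, whose minimiser at 0 is degenerate (f''(0) = 0, so the classical non-degeneracy assumption fails); the subproblem is solved here by crude grid search, standing in for the exact subproblem solvers used in practice, and M = 10 is an arbitrary illustrative choice:

```python
def cubic_reg_step(x, grad, hess, M=10.0):
    """One step of the cubic regularization (CR) method in 1D: minimise the
    model m(h) = g*h + H*h^2/2 + M*|h|^3/6 over a grid of trial steps.
    The grid limits the attainable accuracy to its resolution (0.001)."""
    g, H = grad(x), hess(x)
    best_h, best_m = 0.0, 0.0
    for i in range(-2000, 2001):
        h = i / 1000.0                        # trial steps in [-2, 2]
        m = g * h + 0.5 * H * h * h + (M / 6.0) * abs(h) ** 3
        if m < best_m:
            best_h, best_m = h, m
    return x + best_h

# f(x) = x^4: gradient 4x^3, Hessian 12x^2, degenerate at the minimiser 0.
grad = lambda x: 4.0 * x ** 3
hess = lambda x: 12.0 * x ** 2
x = 1.0
for _ in range(30):
    x = cubic_reg_step(x, grad, hess)
```

The iterates decrease steadily towards the degenerate minimiser (down to the grid resolution); the talk's contribution is to show that, under the EB condition, this convergence is in fact quadratic even in such degenerate situations.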
