Past Numerical Analysis Group Internal Seminar

20 February 2018
14:00
Katherine Gillow
Abstract

A simple experiment in the field of electrochemistry involves controlling the applied potential in an electrochemical cell. This causes electron transfer to take place at the electrode surface and in turn this causes a current to flow. The current depends on parameters in the system and the inverse problem requires us to estimate these parameters given an experimental trace of the current. We briefly describe recent work in this area from simple least squares approximation of the parameters, through bootstrapping to estimate the distributions of the parameters, to MCMC methods which allow us to see correlations between parameters.
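
A minimal sketch of the least-squares stage, followed by a residual bootstrap for the parameter distributions, is given below; the two-parameter model current(t, theta) is a hypothetical placeholder rather than the electrochemical model used in the talk.

    # Hedged sketch: least-squares fit of parameters to a current trace,
    # then a residual-resampling bootstrap. The model is illustrative only.
    import numpy as np
    from scipy.optimize import least_squares

    def current(t, theta):
        # hypothetical two-parameter current model (placeholder)
        k, alpha = theta
        return k * np.exp(-alpha * t)

    t_obs = np.linspace(0.0, 1.0, 200)
    i_obs = current(t_obs, [2.0, 3.0]) + 0.01 * np.random.randn(t_obs.size)

    residual = lambda theta: current(t_obs, theta) - i_obs
    fit = least_squares(residual, x0=[1.0, 1.0])
    print("least-squares estimate:", fit.x)

    # residual-resampling bootstrap to estimate parameter distributions
    res = i_obs - current(t_obs, fit.x)
    boot = []
    for _ in range(200):
        i_star = current(t_obs, fit.x) + np.random.choice(res, size=res.size)
        boot.append(least_squares(lambda th: current(t_obs, th) - i_star, x0=fit.x).x)
    print("bootstrap parameter std:", np.std(boot, axis=0))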

  • Numerical Analysis Group Internal Seminar
13 February 2018
14:30
Jeremias Sulam
Abstract

Within the wide field of sparse approximation, convolutional sparse coding (CSC) has gained considerable attention in the computer vision and machine learning communities. While several works have been devoted to the practical aspects of this model, a systematic theoretical understanding of CSC seems to have been left aside. In this talk, I will present a novel analysis of the CSC problem based on the observation that, while being global, this model can be characterized and analyzed locally. By imposing only local sparsity conditions, we show that uniqueness of solutions, stability to noise contamination and success of pursuit algorithms are globally guaranteed. I will then present a Multi-Layer extension of this model and show its close relation to Convolutional Neural Networks (CNNs). This connection brings a fresh view to CNNs, as one can attribute to this architecture theoretical claims under local sparse assumptions, which shed light on ways of improving the design and implementation of these networks. Last, but not least, we will derive a learning algorithm for this model and demonstrate its applicability in unsupervised settings.
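
As a rough illustration of (single-layer) convolutional sparse coding, the sketch below runs ISTA on a 1-D problem with circular convolutions; the filters and the sparsity weight lam are arbitrary placeholders, not the setting analysed in the talk.

    # Hedged sketch: ISTA for 1-D convolutional sparse coding with circular
    # convolutions, so the adjoint is exact via the FFT.
    import numpy as np

    def conv(d, z):              # circular convolution d * z
        return np.real(np.fft.ifft(np.fft.fft(d, n=z.size) * np.fft.fft(z)))

    def conv_adj(d, r):          # adjoint of z -> d * z
        return np.real(np.fft.ifft(np.conj(np.fft.fft(d, n=r.size)) * np.fft.fft(r)))

    def soft(v, t):              # soft-thresholding operator
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def csc_ista(x, filters, lam=0.1, iters=200):
        # step size 1/L, with L the largest eigenvalue of D^T D over frequencies
        step = 1.0 / sum(np.abs(np.fft.fft(d, n=x.size))**2 for d in filters).max()
        Z = [np.zeros_like(x) for _ in filters]
        for _ in range(iters):
            r = sum(conv(d, z) for d, z in zip(filters, Z)) - x
            Z = [soft(z - step * conv_adj(d, r), step * lam) for d, z in zip(filters, Z)]
        return Z

    filters = [np.array([1.0, 2.0, 1.0]), np.array([1.0, -1.0])]
    z_true = np.zeros(128); z_true[[10, 50, 90]] = 1.0
    x = conv(filters[0], z_true)
    Z = csc_ista(x, filters, lam=0.05)
    print("nonzeros per map:", [int(np.count_nonzero(z)) for z in Z])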

  • Numerical Analysis Group Internal Seminar
13 February 2018
14:00
Man-Chung Yue
Abstract

In this talk, we revisit the cubic regularization (CR) method for solving smooth non-convex optimization problems and study its local convergence behaviour. In their seminal paper, Nesterov and Polyak showed that the sequence of iterates of the CR method converges quadratically to a local minimum under a non-degeneracy assumption, which implies that the local minimum is isolated. However, many optimization problems from applications such as phase retrieval and low-rank matrix recovery have non-isolated local minima. In the absence of the non-degeneracy assumption, the result was downgraded to the superlinear convergence of function values. In particular, they showed that the sequence of function values enjoys a superlinear convergence of order 4/3 (resp. 3/2) if the function is gradient dominated (resp. star-convex and globally non-degenerate). To remedy the situation, we propose a unified local error bound (EB) condition and show that the sequence of iterates of the CR method converges quadratically to a local minimum under the EB condition. Furthermore, we prove that the EB condition holds if the function is gradient dominated or if it is star-convex and globally non-degenerate, thus improving the results of Nesterov and Polyak in three aspects: weaker assumption, faster rate and iterate instead of function value convergence. Finally, we apply our results to two concrete non-convex optimization problems that arise from phase retrieval and low-rank matrix recovery. For both problems, we prove that with overwhelming probability, the local EB condition is satisfied and the CR method converges quadratically to a global optimizer. We also present some numerical results on these two problems.
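
A minimal sketch of the CR iteration on a toy problem with non-isolated minima is given below; the cubic subproblem is handled by a generic local solver as a crude stand-in for the exact subproblem solution, and the regularization weight M is arbitrary.

    # Hedged sketch: cubic regularization steps, each minimising the cubic
    # model g's + 0.5 s'Hs + (M/6)||s||^3 with a generic local solver.
    import numpy as np
    from scipy.optimize import minimize

    def cubic_reg(grad, hess, x0, M=10.0, iters=30):
        x = x0.copy()
        for _ in range(iters):
            g, H = grad(x), hess(x)
            model = lambda s: g @ s + 0.5 * s @ H @ s + (M / 6.0) * np.linalg.norm(s)**3
            model_grad = lambda s: g + H @ s + (M / 2.0) * np.linalg.norm(s) * s
            x = x + minimize(model, np.zeros_like(x), jac=model_grad).x
        return x

    # toy function f(x) = (||x||^2 - 1)^2, whose minimisers form the unit
    # circle (non-isolated minima, as in the abstract)
    grad = lambda x: 4.0 * (x @ x - 1.0) * x
    hess = lambda x: 8.0 * np.outer(x, x) + 4.0 * (x @ x - 1.0) * np.eye(x.size)
    x_star = cubic_reg(grad, hess, np.array([0.3, -0.2]))
    print(x_star, "norm:", np.linalg.norm(x_star))   # norm should be close to 1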

  • Numerical Analysis Group Internal Seminar
6 February 2018
14:30
Patrick Farrell
Abstract


The question of what happens to the eigenvalues of a matrix after an additive perturbation has a long history, with notable contributions from Wilkinson, Sorensen, Golub, Hörmander, Ipsen and Mehrmann, among many others. If the perturbed matrix $C \in \mathbb{C}^{n \times n}$ is given by $C = A + B$, these theorems typically consider the case where $A$ and/or $B$ are symmetric and $B$ has rank one. In this talk we will prove a theorem that bounds the number of distinct eigenvalues of $C$ in terms of the number of distinct eigenvalues of $A$, the diagonalisability of $A$, and the rank of $B$. This new theorem is more general in that it applies to arbitrary matrices $A$ perturbed by matrices of arbitrary rank $B$. We will also discuss various refinements of this bound recently developed by other authors.
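
For concreteness, the short experiment below builds a diagonalisable A with few distinct eigenvalues, adds a random low-rank B, and counts the distinct eigenvalues of C = A + B; the sizes, rank and rounding tolerance are arbitrary choices for illustration, and the bound itself is not restated here.

    # Hedged sketch: compute the quantities entering the theorem for a
    # random diagonalisable A and a random rank-r perturbation B.
    import numpy as np

    def distinct_eigs(M, tol=1e-6):
        kept = []
        for lam in np.linalg.eigvals(M):
            if all(abs(lam - mu) > tol for mu in kept):
                kept.append(lam)
        return len(kept)

    n, r = 20, 2
    rng = np.random.default_rng(0)
    X = rng.standard_normal((n, n))                                  # eigenvector basis
    A = X @ np.diag(rng.integers(1, 4, size=n)) @ np.linalg.inv(X)   # at most 3 distinct eigenvalues
    B = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))    # rank-r perturbation
    C = A + B

    print("distinct eigenvalues of A:", distinct_eigs(A))
    print("rank of B                :", np.linalg.matrix_rank(B))
    print("distinct eigenvalues of C:", distinct_eigs(C))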
 

  • Numerical Analysis Group Internal Seminar
6 February 2018
14:00
Seungchan Ko
Abstract

We consider a system of nonlinear partial differential equations modelling the steady motion of an incompressible non-Newtonian fluid, which is chemically reacting. The governing system consists of a steady convection-diffusion equation for the concentration and the generalized steady Navier–Stokes equations, where the viscosity coefficient is a power-law type function of the shear-rate, and the coupling between the equations results from the concentration-dependence of the power-law index. This system of nonlinear partial differential equations arises in mathematical models of the synovial fluid found in the cavities of moving joints. We construct a finite element approximation of the model and perform the mathematical analysis of the numerical method. Key technical tools include discrete counterparts of the Bogovski operator, De Giorgi’s regularity theorem and the Acerbi–Fusco Lipschitz truncation of Sobolev functions, in function spaces with variable integrability exponents.
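
A minimal sketch of the kind of constitutive law described above, with a concentration-dependent power-law index, is given below; the specific form of p(c) and all constants are illustrative assumptions, not those of the talk.

    # Hedged sketch: generalized Newtonian viscosity with a power-law index
    # that depends on the concentration c.
    import numpy as np

    def power_law_index(c):
        # hypothetical concentration-dependent exponent, decreasing with c
        return 2.0 - 0.5 * np.clip(c, 0.0, 1.0)

    def viscosity(shear_rate, c, mu0=1.0, eps=1e-8):
        # nu(|D|, c) = mu0 * (eps + |D|^2)^((p(c) - 2)/2)
        p = power_law_index(c)
        return mu0 * (eps + shear_rate**2) ** ((p - 2.0) / 2.0)

    print(viscosity(np.array([0.1, 1.0, 10.0]), c=0.5))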

  • Numerical Analysis Group Internal Seminar
30 January 2018
14:30
Jinyun Yuan
Abstract

In this talk we discuss the convergence rate of Newton's method for finding a singular point of a vector field. It is well known that Newton's method has a local quadratic convergence rate under nonsingularity and Lipschitz conditions. Here we relax the Lipschitz condition: with nonsingularity alone, Newton's method still converges superlinearly. If time permits, we will also present a damped Newton method for finding singular points of vector fields, with superlinear convergence under the nonsingularity condition only.
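
A minimal sketch of the plain Newton iteration for a singular point (zero) of a vector field is given below; the example field and starting point are illustrative.

    # Hedged sketch: Newton iteration x <- x - J(x)^{-1} F(x) for F(x) = 0.
    import numpy as np

    def newton(F, J, x0, tol=1e-12, max_iter=50):
        x = x0.copy()
        for _ in range(max_iter):
            step = np.linalg.solve(J(x), -F(x))
            x = x + step
            if np.linalg.norm(step) < tol:
                break
        return x

    # toy vector field with a singular point at the origin
    F = lambda x: np.array([x[0]**2 - x[1], np.sin(x[1]) + x[0]])
    J = lambda x: np.array([[2 * x[0], -1.0], [1.0, np.cos(x[1])]])
    print(newton(F, J, np.array([0.5, 0.5])))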

  • Numerical Analysis Group Internal Seminar
30 January 2018
14:00
Graham Baird
Abstract

In this talk we consider the issue of mass loss in fragmentation models due to 'shattering'. As a solution we propose a hybrid discrete/continuous model whereby the smaller particles are considered as having discrete mass, whilst above a certain cut-off, mass is taken to be a continuous variable. The talk covers the development of such a model, its initial analysis via the theory of operator semigroups and its numerical approximation using a finite volume discretisation.
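
A minimal sketch of the mass bookkeeping in such a hybrid representation is given below; the cut-off, densities and cell layout are placeholders rather than the model of the talk.

    # Hedged sketch: total mass when sizes up to a cut-off N are discrete and
    # larger sizes carry a density on finite-volume cells.
    import numpy as np

    N = 10                                   # cut-off: sizes 1..N are discrete
    n = np.ones(N)                           # number density of size-i particles
    edges = np.linspace(N, 100.0, 46)        # finite-volume cell edges above the cut-off
    centres = 0.5 * (edges[:-1] + edges[1:])
    widths = np.diff(edges)
    u = np.exp(-centres / 20.0)              # continuous number density per unit size

    discrete_mass = np.sum(np.arange(1, N + 1) * n)
    continuous_mass = np.sum(centres * u * widths)   # midpoint/finite-volume quadrature
    print("total mass:", discrete_mass + continuous_mass)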

  • Numerical Analysis Group Internal Seminar
23 January 2018
14:30
Niall Bootland
Abstract

We explore the use of multiple preconditioners for solving linear systems arising in simulations of incompressible two-phase flow. In particular, we use a selective MPGMRES algorithm, for which the search space grows only linearly throughout the iterative solver, and block preconditioners based on Schur complement approximations.
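
A hedged sketch of a block preconditioner built from a Schur complement approximation, applied inside standard GMRES from SciPy, is given below; it is not the selective MPGMRES algorithm of the talk, and the saddle-point blocks are random stand-ins for the two-phase flow matrices.

    # Hedged sketch: block upper-triangular preconditioner with an approximate
    # Schur complement, used inside SciPy's GMRES on a toy saddle-point system.
    import numpy as np
    import scipy.sparse.linalg as spla

    rng = np.random.default_rng(1)
    n, m = 60, 20
    G = rng.standard_normal((n, n))
    A = G @ G.T / n + np.eye(n)                      # SPD "velocity" block (toy)
    B = rng.standard_normal((m, n))                  # "divergence" block (toy)
    K = np.block([[A, B.T], [B, np.zeros((m, m))]])
    b = rng.standard_normal(n + m)

    S_hat = B @ np.diag(1.0 / np.diag(A)) @ B.T      # Schur complement approximation

    def apply_prec(r):
        # solve the block system [[A, B^T], [0, -S_hat]] z = r
        z2 = np.linalg.solve(-S_hat, r[n:])
        z1 = np.linalg.solve(A, r[:n] - B.T @ z2)
        return np.concatenate([z1, z2])

    M = spla.LinearOperator((n + m, n + m), matvec=apply_prec)
    x, info = spla.gmres(K, b, M=M, restart=n + m)
    print("converged:", info == 0, " residual:", np.linalg.norm(K @ x - b))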

  • Numerical Analysis Group Internal Seminar
23 January 2018
14:00
Ellya Kawecki
Abstract

We introduce a discontinuous Galerkin finite element method (DGFEM) for Hamilton–Jacobi–Bellman equations on piecewise curved domains, and prove that the method is consistent, stable, and produces optimal convergence rates. Upon utilising a long-standing result due to N. Krylov, we may characterise the Monge–Ampère equation as an HJB equation; in two dimensions, this HJB equation can be characterised further as a uniformly elliptic HJB equation, allowing for the application of the DGFEM.
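
The toy computation below illustrates the kind of determinant identity that underlies the Monge–Ampère/HJB connection in two dimensions; this is a paraphrase of the classical identity 2*sqrt(det M) = inf { trace(AM) : A SPD, det A = 1 }, not a statement of Krylov's result or of the DGFEM.

    # Hedged sketch: check the 2x2 identity numerically. The infimum is
    # attained at A* = sqrt(det M) * inv(M), which has det A* = 1.
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.standard_normal((2, 2))
    M = X @ X.T + np.eye(2)                 # random SPD test matrix

    A_star = np.sqrt(np.linalg.det(M)) * np.linalg.inv(M)
    print("det(A*)      :", np.linalg.det(A_star))            # ~ 1
    print("trace(A* M)  :", np.trace(A_star @ M))             # ~ 2*sqrt(det M)
    print("2*sqrt(det M):", 2.0 * np.sqrt(np.linalg.det(M)))

    # random admissible A's never beat the optimum (up to round-off)
    vals = []
    for _ in range(2000):
        Y = rng.standard_normal((2, 2))
        A = Y @ Y.T + 1e-3 * np.eye(2)
        A /= np.sqrt(np.linalg.det(A))      # rescale so that det(A) = 1
        vals.append(np.trace(A @ M))
    print("best sampled :", min(vals))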

  • Numerical Analysis Group Internal Seminar
16 January 2018
14:30
Ozzy Nilsen
Abstract

We propose a new parameter estimation technique for SDEs, based on the inverse problem of finding a forward operator describing the evolution of temporal data. Nonlinear dynamical systems on a state-space can be lifted to linear dynamical systems on spaces of higher, often infinite, dimension. Recently, much work has gone into approximating these higher-dimensional systems with linear operators calculated from data, using what is called Dynamic Mode Decomposition (DMD). For SDEs, this linear system is given by a second-order differential operator, which we can quickly calculate and compare to the DMD operator.
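
A minimal sketch of DMD from snapshot data is given below, on a toy deterministic linear system rather than an SDE; the DMD operator is the least-squares fit between successive snapshots.

    # Hedged sketch: Dynamic Mode Decomposition from a single trajectory of a
    # toy linear system; the fitted operator recovers the true eigenvalues.
    import numpy as np

    rng = np.random.default_rng(3)
    A_true = np.array([[0.95, 0.10], [0.00, 0.90]])   # toy forward operator
    snaps = [rng.standard_normal(2)]
    for _ in range(100):
        snaps.append(A_true @ snaps[-1])
    Z = np.array(snaps).T
    X, Y = Z[:, :-1], Z[:, 1:]

    A_dmd = Y @ np.linalg.pinv(X)                     # least-squares fit Y ≈ A X
    print("DMD eigenvalues :", np.linalg.eigvals(A_dmd))
    print("true eigenvalues:", np.linalg.eigvals(A_true))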
