Past Numerical Analysis Group Internal Seminars

6 March 2018
14:00
Oliver Sheridan-Methven
Abstract

The latest CPUs by Intel and ARM support vectorised operations, where a single set of instructions (e.g. add, multiply, bit shift, XOR, etc.) is performed in parallel for small batches of data. This can provide great performance improvements if each parallel instruction performs the same operation, but carries the risk of performance loss if each needs to perform different tasks (e.g. if-else conditions). I will present the work I have done so far on how to recover the full performance of the hardware, and some of the challenges faced when trading off between ever larger parallel tasks, the risk of tasks diverging, and how certain coding styles might be modified for memory-bandwidth-limited applications. Examples will be taken from finance and Monte Carlo applications, inspecting some standard maths library functions and possibly random number generation.
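
As a flavour of the trade-off described above, here is a minimal sketch (my own illustration, not code from the talk; the payoff example, function names and the strike parameter are purely illustrative) of how a data-dependent branch can be rewritten as a branch-free select, so that every SIMD lane executes the same instructions and compilers such as gcc or clang can auto-vectorise the loop:

    #include <stddef.h>

    /* Branchy version: each element may take a different path through the code,
     * so the loop only vectorises if the compiler can if-convert the condition. */
    void payoff_branchy(const double *x, double *y, size_t n, double strike) {
        for (size_t i = 0; i < n; ++i) {
            if (x[i] > strike)
                y[i] = x[i] - strike;
            else
                y[i] = 0.0;
        }
    }

    /* Branch-free version: every element performs the same arithmetic, so the
     * loop maps naturally onto vector (SIMD) lanes; the select typically
     * compiles to a vector max/blend instruction. */
    void payoff_branch_free(const double *x, double *y, size_t n, double strike) {
        for (size_t i = 0; i < n; ++i) {
            double d = x[i] - strike;
            y[i] = (d > 0.0) ? d : 0.0;
        }
    }

When the two branches do genuinely different work, the lanes diverge and the vector units sit partly idle, which is the performance loss referred to above.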

  • Numerical Analysis Group Internal Seminar
27 February 2018
14:00
Jared Tanner
Abstract

Topological data analysis (TDA) is a method by which one seeks to uncover the topology consistent with a data set. Persistent homology considers the process of small balls growing around data points until they (sufficiently) intersect, at which point the associated points are connected and a simplicial complex is formed. The duration for which a given topological feature persists is then computed by forming and reducing a boundary matrix that records when the faces of a simplex are formed. Reduction of the boundary matrix is qualitatively similar to Gaussian elimination over a finite field, and historically has been implemented in a very sequential manner. As the boundary matrix size grows polynomially in the dimension, a sequential method is not ideal for large data sets. In this talk I will sketch the above process and a new algorithm, developed with Rodrigo Mendoza-Smith, by which the boundary matrix can be reduced in a massively parallel fashion.
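
For orientation, here is a minimal sketch (my own illustration of the classical algorithm, not the new parallel method from the talk) of the standard sequential reduction of a small dense 0/1 boundary matrix: columns are processed left to right, and each is repeatedly added to by an earlier column sharing the same lowest non-zero row, which is exactly the left-to-right dependency that makes the method hard to parallelise.

    #define N 5  /* number of simplices (columns); chosen only for illustration */

    /* Row index of the lowest non-zero entry of column j, or -1 if the column is zero. */
    static int low(int B[N][N], int j) {
        for (int i = N - 1; i >= 0; --i)
            if (B[i][j]) return i;
        return -1;
    }

    /* Standard sequential reduction of a 0/1 boundary matrix over the field with
     * two elements: while an earlier column shares the same lowest non-zero row
     * as column j, add it (entrywise XOR) into column j. */
    static void reduce_boundary_matrix(int B[N][N]) {
        for (int j = 0; j < N; ++j) {
            for (;;) {
                int lj = low(B, j);
                if (lj < 0) break;                /* column j is already zero   */
                int k = -1;
                for (int c = 0; c < j; ++c)       /* search the earlier columns */
                    if (low(B, c) == lj) { k = c; break; }
                if (k < 0) break;                 /* column j is fully reduced  */
                for (int i = 0; i < N; ++i)
                    B[i][j] ^= B[i][k];           /* column addition modulo 2   */
            }
        }
    }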

  • Numerical Analysis Group Internal Seminar
20 February 2018
14:30
Bogdan Toader
Abstract

We consider the problem of localising non-negative point sources, namely finding their locations and amplitudes from noisy samples which consist of the convolution of the input signal with a known kernel (e.g. Gaussian). In contrast to the existing literature, which focuses on TV-norm minimisation, we analyse the feasibility problem. In the presence of noise, we show that the localisation error is proportional to the level of noise and depends on the distance between each source and the closest samples. This is achieved using duality and considering the spectrum of the associated sampling matrix.
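
In symbols, the setting can be summarised as follows (a schematic of the model as I read the abstract; the notation is mine):

\[
  y_j \;=\; \sum_{i=1}^{k} a_i\,\phi(s_j - x_i) + \varepsilon_j,
  \qquad a_i \ge 0, \quad |\varepsilon_j| \le \delta,
\]

where the $x_i$ are the unknown source locations, the $a_i$ their amplitudes, $\phi$ the known kernel (e.g. a Gaussian), $s_j$ the sample points and $\delta$ the noise level. The feasibility problem then asks for any non-negative signal whose samples match the data to within $\delta$, rather than for a minimiser of the TV norm.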

  • Numerical Analysis Group Internal Seminar
13 February 2018
14:30
Abstract

The object of this talk is a class of generalised Newtonian fluids with an implicit constitutive law.
In both the steady and the unsteady case, existence of weak solutions was proven by Bulíček et al. (2009, 2012); the main challenges are the small growth exponent $q$ and the implicit law.
I will discuss the application of a splitting and regularising strategy to show convergence of FEM approximations to weak solutions of the flow.
In the steady case this allows us to cover the full range of growth exponents and thus generalises existing work of Diening et al. (2013). If time permits, I will also address the unsteady case.
This is joint work with Endre Süli.
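
To fix ideas, implicit constitutive laws of the kind studied in this framework are typically of the schematic form (my notation, not necessarily the speaker's):

\[
  \mathbf{G}\big(\mathbf{S}, \mathbf{D}(\mathbf{u})\big) = \mathbf{0},
  \qquad
  \mathbf{S} : \mathbf{D}(\mathbf{u}) \;\ge\; c\big(|\mathbf{D}(\mathbf{u})|^{q} + |\mathbf{S}|^{q'}\big) - C,
\]

relating the stress $\mathbf{S}$ and the symmetric velocity gradient $\mathbf{D}(\mathbf{u})$, with $q' = q/(q-1)$; the familiar explicit power-law viscosity $\mathbf{S} = \nu(|\mathbf{D}(\mathbf{u})|)\,\mathbf{D}(\mathbf{u})$ is recovered as the special case of a single-valued relation. The smaller the growth exponent $q$, the weaker the a priori integrability of $\mathbf{D}(\mathbf{u})$, which is the source of the analytical difficulty.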

  • Numerical Analysis Group Internal Seminar
13 February 2018
14:00
Man-Chung Yue
Abstract

In this talk, we revisit the cubic regularization (CR) method for solving smooth non-convex optimization problems and study its local convergence behaviour. In their seminal paper, Nesterov and Polyak showed that the sequence of iterates of the CR method converges quadratically to a local minimum under a non-degeneracy assumption, which implies that the local minimum is isolated. However, many optimization problems from applications such as phase retrieval and low-rank matrix recovery have non-isolated local minima. In the absence of the non-degeneracy assumption, the result was downgraded to superlinear convergence of function values. In particular, they showed that the sequence of function values enjoys superlinear convergence of order 4/3 (resp. 3/2) if the function is gradient dominated (resp. star-convex and globally non-degenerate). To remedy the situation, we propose a unified local error bound (EB) condition and show that the sequence of iterates of the CR method converges quadratically to a local minimum under the EB condition. Furthermore, we prove that the EB condition holds if the function is gradient dominated or if it is star-convex and globally non-degenerate, thus improving the results of Nesterov and Polyak in three aspects: a weaker assumption, a faster rate, and convergence of iterates instead of function values. Finally, we apply our results to two concrete non-convex optimization problems that arise from phase retrieval and low-rank matrix recovery. For both problems, we prove that with overwhelming probability, the local EB condition is satisfied and the CR method converges quadratically to a global optimizer. We also present some numerical results on these two problems.
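
For reference, the cubic regularization step of Nesterov and Polyak computes each iterate by minimising a cubically regularised second-order model of the objective $f$ (standard notation, not taken verbatim from the talk):

\[
  x_{k+1} \;\in\; \operatorname*{arg\,min}_{x}\;
  f(x_k) + \nabla f(x_k)^{\top}(x - x_k)
  + \tfrac{1}{2}(x - x_k)^{\top}\nabla^2 f(x_k)\,(x - x_k)
  + \tfrac{M}{6}\,\|x - x_k\|^{3},
\]

where $M > 0$ is the regularization parameter. An error bound condition, roughly speaking, controls the distance from a point to the set of minima by a computable residual such as the gradient norm, and it is this control that replaces the non-degeneracy assumption in the quadratic convergence analysis.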

  • Numerical Analysis Group Internal Seminar
6 February 2018
14:00
Seungchan Ko
Abstract

We consider a system of nonlinear partial differential equations modelling the steady motion of an incompressible non-Newtonian fluid, which is chemically reacting. The governing system consists of a steady convection-diffusion equation for the concentration and the generalized steady Navier–Stokes equations, where the viscosity coefficient is a power-law type function of the shear-rate, and the coupling between the equations results from the concentration-dependence of the power-law index. This system of nonlinear partial differential equations arises in mathematical models of the synovial fluid found in the cavities of moving joints. We construct a finite element approximation of the model and perform the mathematical analysis of the numerical method. Key technical tools include discrete counterparts of the Bogovski operator, De Giorgi’s regularity theorem and the Acerbi–Fusco Lipschitz truncation of Sobolev functions, in function spaces with variable integrability exponents.
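
Schematically (my own summary of the type of coupled system described, not the speaker's notation), the model couples a convection-diffusion equation for the concentration $c$ to generalized Navier–Stokes equations whose power-law index depends on $c$:

\[
  \operatorname{div}(\mathbf{u} \otimes \mathbf{u})
  - \operatorname{div}\big(\nu(c, |\mathbf{D}(\mathbf{u})|)\,\mathbf{D}(\mathbf{u})\big)
  + \nabla \pi = \mathbf{f},
  \qquad
  \operatorname{div}\mathbf{u} = 0,
  \qquad
  \operatorname{div}(c\,\mathbf{u}) - \operatorname{div}(\kappa \nabla c) = 0,
\]

with a viscosity behaving like $\nu(c, s) \sim s^{\,p(c)-2}$. Because the integrability of $\mathbf{D}(\mathbf{u})$ then varies with the concentration, the natural setting for the analysis is Lebesgue and Sobolev spaces with variable integrability exponents.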
