Numerical Analysis Group Internal Seminar

Past events in this series
23 January 2018
14:00
Ellya Kawecki
Abstract

We introduce a discontinuous Galerkin finite element method (DGFEM) for Hamilton–Jacobi–Bellman (HJB) equations on piecewise curved domains, and prove that the method is consistent, stable, and produces optimal convergence rates. By utilising a long-standing result due to N. Krylov, we may characterise the Monge–Ampère equation as an HJB equation; in two dimensions, this HJB equation can be characterised further as a uniformly elliptic HJB equation, allowing for the application of the DGFEM.
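One standard way of writing the two-dimensional characterisation alluded to above (a sketch following Krylov's result; the exact form used in the talk may differ) is
\[
\det D^2 u = f, \quad u \ \text{convex}
\qquad\Longleftrightarrow\qquad
\sup_{B \in \mathcal{S}_1} \bigl\{ -B : D^2 u + 2\sqrt{f \det B} \bigr\} = 0,
\]
where $\mathcal{S}_1$ denotes the set of symmetric positive semidefinite $2\times 2$ matrices with unit trace; the supremum defines a uniformly elliptic HJB operator.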

23 January 2018
14:30
Niall Bootland
Abstract

We explore the use of multiple preconditioners for solving linear systems arising in simulations of incompressible two-phase flow. In particular, we use a selective MPGMRES algorithm, for which the search space grows only linearly throughout the iterative solver, and block preconditioners based on Schur complement approximations.
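The selective MPGMRES algorithm applies several preconditioners at once and is not part of standard libraries; the sketch below only illustrates the second ingredient, a block preconditioner built from a Schur complement approximation, applied with ordinary single-preconditioner GMRES. The matrices A (velocity block), B (divergence block) and Mp (Schur complement approximation, e.g. a scaled pressure mass matrix) are illustrative placeholders, not taken from the talk.

```python
# Sketch: block upper-triangular preconditioner with a Schur complement
# approximation for a saddle-point system [[A, B^T], [B, 0]] [u; p] = [f; g].
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def saddle_point_gmres(A, B, f, g, Mp):
    n = A.shape[0]
    K = sp.bmat([[A, B.T], [B, None]], format="csr")   # full saddle-point matrix
    rhs = np.concatenate([f, g])

    A_lu = spla.splu(A.tocsc())    # factorise the (1,1) block
    S_lu = spla.splu(Mp.tocsc())   # factorise the Schur complement approximation

    def apply_prec(r):
        # Apply P^{-1} with P = [[A, B^T], [0, Mp]]: pressure solve first,
        # then a velocity solve (sign/scaling conventions for the Schur block
        # vary with the formulation).
        ru, rp = r[:n], r[n:]
        p = S_lu.solve(rp)
        u = A_lu.solve(ru - B.T @ p)
        return np.concatenate([u, p])

    M = spla.LinearOperator(K.shape, matvec=apply_prec)
    x, info = spla.gmres(K, rhs, M=M)
    return x[:n], x[n:], info
```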

30 January 2018
14:30
Jinyun Yuan
Abstract

In this talk we discuss the convergence rate of Newton's method for finding a singularity of a vector field. It is well known that Newton's method has a local quadratic convergence rate under nonsingularity and a Lipschitz condition. Here we drop the Lipschitz condition: with nonsingularity alone, Newton's method still converges superlinearly. If time permits, we will also briefly present a damped Newton method for finding singularities of vector fields, which converges superlinearly under the nonsingularity condition only.
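A minimal sketch of the (damped) Newton iteration for a singularity, i.e. a zero, of a vector field; the example field F and Jacobian J below are illustrative placeholders, not problems from the talk.

```python
# Newton's method for F(x) = 0, with an optional damping factor.
import numpy as np

def newton(F, J, x0, damping=1.0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - damping * np.linalg.solve(J(x), Fx)   # requires J(x) nonsingular
    return x

# Illustrative vector field on R^2 and its Jacobian.
F = lambda x: np.array([x[0]**2 - x[1], np.sin(x[1]) + x[0]])
J = lambda x: np.array([[2.0 * x[0], -1.0], [1.0, np.cos(x[1])]])

print(newton(F, J, x0=[1.0, 1.0]))                 # full Newton step
print(newton(F, J, x0=[1.0, 1.0], damping=0.5))    # damped variant
```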

6 February 2018
14:00
Seungchan Ko
Abstract

We consider a system of nonlinear partial differential equations modelling the steady motion of an incompressible non-Newtonian fluid, which is chemically reacting. The governing system consists of a steady convection-diffusion equation for the concentration and the generalized steady Navier–Stokes equations, where the viscosity coefficient is a power-law type function of the shear-rate, and the coupling between the equations results from the concentration-dependence of the power-law index. This system of nonlinear partial differential equations arises in mathematical models of the synovial fluid found in the cavities of moving joints. We construct a finite element approximation of the model and perform the mathematical analysis of the numerical method. Key technical tools include discrete counterparts of the Bogovski operator, De Giorgi’s regularity theorem and the Acerbi–Fusco Lipschitz truncation of Sobolev functions, in function spaces with variable integrability exponents.
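A representative constitutive law of the kind described (a sketch with notation of my choosing; the exact form analysed in the talk may differ) is
\[
S(c, D(u)) = \mu_0 \bigl(\kappa + |D(u)|^2\bigr)^{\frac{p(c)-2}{2}} D(u),
\qquad
D(u) = \tfrac{1}{2}\bigl(\nabla u + (\nabla u)^{\mathsf T}\bigr),
\]
where the power-law index $p(c)$ depends on the concentration $c$; it is this dependence that couples the convection-diffusion equation to the generalized Navier–Stokes system.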

13 February 2018
14:00
Man-Chung Yue
Abstract

In this talk, we revisit the cubic regularization (CR) method for solving smooth non-convex optimization problems and study its local convergence behaviour. In their seminal paper, Nesterov and Polyak showed that the sequence of iterates of the CR method converges quadratically to a local minimum under a non-degeneracy assumption, which implies that the local minimum is isolated. However, many optimization problems from applications such as phase retrieval and low-rank matrix recovery have non-isolated local minima. In the absence of the non-degeneracy assumption, the result was downgraded to the superlinear convergence of function values. In particular, they showed that the sequence of function values enjoys superlinear convergence of order 4/3 (resp. 3/2) if the function is gradient dominated (resp. star-convex and globally non-degenerate). To remedy the situation, we propose a unified local error bound (EB) condition and show that the sequence of iterates of the CR method converges quadratically to a local minimum under the EB condition. Furthermore, we prove that the EB condition holds if the function is gradient dominated or if it is star-convex and globally non-degenerate, thus improving the results of Nesterov and Polyak in three aspects: weaker assumptions, faster rates, and convergence of the iterates rather than of the function values. Finally, we apply our results to two concrete non-convex optimization problems that arise from phase retrieval and low-rank matrix recovery. For both problems, we prove that with overwhelming probability, the local EB condition is satisfied and the CR method converges quadratically to a global optimizer. We also present some numerical results on these two problems.
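A rough sketch of the CR iteration: at each step the cubic model $m(s) = f(x) + \nabla f(x)^{\mathsf T} s + \tfrac12 s^{\mathsf T} \nabla^2 f(x)\, s + \tfrac{\sigma}{3}\lVert s\rVert^3$ is minimised to obtain the step. In the sketch below the subproblem is solved naively with a general-purpose optimiser and $\sigma$ is kept fixed, which suffices for illustration but is not how an efficient or theoretically faithful CR solver is implemented; the test function is an illustrative placeholder, not one from the talk.

```python
# Cubic-regularised Newton iteration with a naive subproblem solve.
import numpy as np
from scipy.optimize import minimize

def cubic_regularisation(grad, hess, x0, sigma=1.0, tol=1e-8, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        # Cubic model of the objective around x (constant term omitted).
        model = lambda s: g @ s + 0.5 * s @ H @ s + (sigma / 3.0) * np.linalg.norm(s) ** 3
        s = minimize(model, -g / (np.linalg.norm(H) + sigma)).x   # crude subproblem solve
        x = x + s
    return x

# Illustrative smooth non-convex test function: f(x) = (x0^2 - 1)^2 + x1^2.
grad = lambda x: np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])
hess = lambda x: np.array([[12.0 * x[0] ** 2 - 4.0, 0.0], [0.0, 2.0]])

print(cubic_regularisation(grad, hess, x0=[0.5, 1.0]))   # should approach (+-1, 0)
```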

13 February 2018
14:30
Abstract

The object of this talk is a class of generalised Newtonian fluids with implicit constitutive law.
Both in the steady and the unsteady case, existence of weak solutions was proven by Bulíček et al. (2009, 2012), and the main challenge is the small growth exponent $q$ and the implicit law.
I will discuss the application of a splitting and regularising strategy to show convergence of FEM approximations to weak solutions of the flow. 
In the steady case this allows us to cover the full range of growth exponents and thus generalises existing work of Diening et al. (2013). If time permits, I will also address the unsteady case.
This is joint work with Endre Süli.

20 February 2018
14:30
Bogdan Toader
Abstract

We consider the problem of localising non-negative point sources, namely finding their locations and amplitudes from noisy samples which consist of the convolution of the input signal with a known kernel (e.g. Gaussian). In contrast to the existing literature, which focuses on TV-norm minimisation, we analyse the feasibility problem. In the presence of noise, we show that the localisation error is proportional to the level of noise and depends on the distance between each source and the closest samples. This is achieved using duality and by considering the spectrum of the associated sampling matrix.
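As a sketch of the setting (notation mine, not taken from the abstract): the data are noisy samples of the convolved source measure,
\[
y_j = \int \phi(s_j - x)\,\mathrm{d}\mu(x) + w_j,
\qquad
\mu = \sum_{i} a_i\, \delta_{x_i}, \quad a_i \ge 0,
\]
with known kernel $\phi$ (e.g. Gaussian), sample points $s_j$ and noise $w_j$; rather than minimising the TV norm of $\mu$ subject to a data constraint, one studies the feasibility problem of finding any non-negative measure consistent with the data up to the noise level, e.g. $\lvert y_j - (\phi * \mu)(s_j)\rvert \le \delta$ for all $j$.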
