Past Numerical Analysis Group Internal Seminar

28 November 2017
14:30
Charles Dapogny
Abstract

The purpose of this work is to introduce a new constraint functional for shape optimization problems which enforces constructibility by additive manufacturing processes and helps prevent the appearance of overhang features: large regions hanging over void, which are notoriously difficult to build with such technologies. The proposed constraint relies on a simplified model of the construction process: it involves a continuum of shapes, namely the intermediate shapes corresponding to the stages of the construction process at which the final shape has been erected only up to a certain level. The shape differentiability of this constraint functional is analyzed, which is not a standard issue because of its peculiar structure. Several numerical strategies and examples are then presented. This is joint work with G. Allaire, R. Estevez, A. Faure and G. Michailidis.
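
Schematically, such a constraint aggregates a criterion over the continuum of intermediate shapes. The following is only an illustrative sketch, with all notation assumed here rather than taken from the talk:

```latex
% Illustrative sketch only (notation assumed, not taken from the abstract):
% \Omega_h is the intermediate shape built up to height h along the build
% direction, H is the total build height, and j is some mechanical criterion
% (e.g. a compliance-type functional) evaluated on the intermediate shape.
\[
  P(\Omega) = \int_0^{H} j(\Omega_h)\,\mathrm{d}h,
  \qquad
  \Omega_h = \Omega \cap \{\, x : x_3 < h \,\}.
\]
```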

  • Numerical Analysis Group Internal Seminar
21 November 2017
14:30
Andrew Thompson
Abstract

I will describe a novel algorithm for computing the Walsh-Hadamard Transform (WHT) which consists entirely of Haar wavelet transforms. The algorithm shares precisely the same serial complexity as the popular divide-and-conquer algorithm for the WHT. There is also a natural way to parallelize the algorithm, which appears to have a number of attractive features.
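
For context, the divide-and-conquer algorithm mentioned above can be sketched as follows. This is the standard fast WHT, shown only as the baseline; it is not the new Haar-based algorithm of the talk:

```python
import numpy as np

def fwht(a):
    """Standard fast Walsh-Hadamard transform (unnormalised).

    Classical divide-and-conquer algorithm, O(n log n) for n a power of two;
    shown only as the baseline against which the talk's algorithm is compared.
    """
    a = np.asarray(a, dtype=float).copy()
    n = a.size
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y   # butterfly: sum and difference
        h *= 2
    return a
```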

  • Numerical Analysis Group Internal Seminar
21 November 2017
14:00
Joseph Field
Abstract

Medical imaging is a key diagnostic tool, and is paramount for disease detection and for patient monitoring during ongoing care. To reduce the amount of radiation to which a patient is exposed, there is a strong incentive to consider image reconstruction from incomplete sets of measurements, and so the imaging process is often formulated as a compressed sensing problem.

In this talk, we will focus on compressed sensing for digital tomosynthesis (DTS), in which three-dimensional images are reconstructed from a set of two-dimensional X-ray projections. We first discuss a reconstruction approach for static bodies, with a particular interest in the choice of basis for the image representation. We will then focus on the need for accurate image reconstructions when the body of interest is not stationary, but is undergoing simple motion, discussing two different approaches for tackling this dynamic problem.
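
A generic formulation of the kind of reconstruction problem described above (the symbols here are assumed, not taken from the talk) seeks the image that is sparsest in a chosen basis while fitting the projection data:

```latex
% Illustrative compressed-sensing formulation (symbols assumed): b collects
% the 2D X-ray projections, A is the forward projection operator, W is a
% sparsifying transform (e.g. a wavelet basis), and \varepsilon the noise level.
\[
  \min_{x} \; \| W x \|_1
  \quad \text{subject to} \quad
  \| A x - b \|_2 \le \varepsilon .
\]
```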

  • Numerical Analysis Group Internal Seminar
14 November 2017
14:30
Florian Wechsung
Abstract

The design of shapes that are in some sense optimal is a task faced by engineers in a wide range of disciplines. In shape optimisation one aims to improve a given initial shape by iteratively deforming it; if the shape is represented by a mesh, this means that the mesh itself has to be deformed. This is a delicate problem, as overlapping or highly stretched meshes lead to poor accuracy of numerical methods.

In the presented work we consider a novel mesh deformation method motivated by the Riemann mapping theorem and based on conformal mappings.
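
As background (a reminder only, not the precise formulation used in the work), a planar deformation (x, y) -> (u(x, y), v(x, y)) is conformal exactly when it satisfies the Cauchy-Riemann equations; one natural way to exploit this in mesh deformation is to penalise deviation from these equations so that elements are rotated and scaled but not sheared:

```latex
% Background only: conformality of a planar deformation (u, v) is
% characterised by the Cauchy-Riemann equations.
\[
  \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},
  \qquad
  \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}.
\]
```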

  • Numerical Analysis Group Internal Seminar
14 November 2017
14:00
Federico Danieli
Abstract

Time parallelisation techniques provide an additional direction for parallelising the solution of time-dependent PDEs or systems of ODEs. In particular, the Parareal algorithm has established itself as the canonical choice for achieving parallelisation in time, not least because of its simplicity and flexibility. The algorithm works by splitting the time domain into chunks and iteratively alternating between a prediction step (parallel), in which a "fine" solver is employed to achieve a high-accuracy solution within each chunk, and a correction step (serial), in which a "coarse" solver is used to quickly propagate the update between chunks. However, the stability of the method has proven to be highly sensitive to the choice of fine and coarse solvers, even more so when applied to chaotic systems or advection-dominated problems.
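
For reference, the classical Parareal iteration combines the (parallel) fine solves with a (serial) coarse correction sweep. The sketch below shows this standard iteration with placeholder propagators; it is not the modified formulation discussed in the talk:

```python
import numpy as np

def parareal(coarse, fine, u0, t0, T, n_chunks, n_iters):
    """Classical Parareal iteration (illustrative sketch with placeholder
    propagators; not the sensitivity-based variant presented in the talk).

    coarse(u, t_start, t_end): cheap propagator over one chunk
    fine(u, t_start, t_end):   accurate propagator over one chunk
    """
    ts = np.linspace(t0, T, n_chunks + 1)

    # Initial serial sweep with the coarse solver
    U = [u0]
    for n in range(n_chunks):
        U.append(coarse(U[n], ts[n], ts[n + 1]))

    for _ in range(n_iters):
        # Prediction (parallel): independent fine solves on each chunk
        F = [fine(U[n], ts[n], ts[n + 1]) for n in range(n_chunks)]
        # Correction (serial): coarse sweep propagating the update between chunks
        U_new = [u0]
        for n in range(n_chunks):
            G_new = coarse(U_new[n], ts[n], ts[n + 1])
            G_old = coarse(U[n], ts[n], ts[n + 1])
            U_new.append(G_new + F[n] - G_old)   # standard Parareal update
        U = U_new
    return ts, U
```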


In this presentation, an alternative formulation of Parareal is discussed. It performs the update by directly estimating the sensitivity of the solution with respect to the initial conditions, thus eliminating altogether the need to choose a suitable coarse solver, and potentially improving the method's convergence properties.


  • Numerical Analysis Group Internal Seminar
7 November 2017
14:30
Yuji Nakatsukasa
Abstract

Classical algorithms for numerical integration (quadrature/cubature) proceed by approximating the integrand with a simple function (e.g. a polynomial) and integrating the approximant exactly. In high-dimensional integration, such methods quickly become infeasible due to the curse of dimensionality.


A common alternative is the Monte Carlo method (MC), which simply takes the average of random samples, improving the estimate as more and more samples are taken. The main issue with MC is its slow "sqrt(variance/#samples)" convergence, and various techniques have been proposed to reduce the variance.


In this work we reveal a numerical analyst's interpretation of MC: it approximates the integrand with a simple(st) function, and integrates that function exactly. This observation leads naturally to MC-like methods that combine MC with function approximation theory, including polynomial approximation and sparse grids. The resulting method can be regarded as another variance reduction technique for Monte Carlo.
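
The following toy sketch (the integrand and all names are placeholders, not taken from the talk) contrasts plain MC, which effectively fits a constant and integrates it exactly, with an MC estimate combined with a polynomial approximation of the integrand, in the spirit of a control variate:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(np.sin(3 * x))        # toy integrand on [0, 1]
n = 10_000
x = rng.random(n)

# Plain Monte Carlo: fit the "simplest" approximant (a constant, the sample
# mean) and integrate it exactly over [0, 1].
mc_estimate = f(x).mean()

# MC combined with function approximation: fit a low-degree polynomial p to
# the samples, integrate p exactly, and use MC only on the residual f - p,
# which typically has much smaller variance.
p = np.polynomial.Polynomial.fit(x, f(x), deg=5, domain=[0, 1])
P = p.integ()                              # antiderivative of p
exact_part = P(1.0) - P(0.0)               # integral of p over [0, 1]
residual_part = (f(x) - p(x)).mean()       # MC on the residual
cv_estimate = exact_part + residual_part
```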

  • Numerical Analysis Group Internal Seminar
7 November 2017
14:00
Bartolomeo Stellato
Abstract

We develop a general-purpose solver for quadratic programs based on operator splitting. We introduce a novel splitting that requires the solution of a quasi-definite linear system with the same coefficient matrix at each iteration. The resulting algorithm is very robust and, once the initial factorization is carried out, division-free; it also eliminates requirements on the problem data such as positive definiteness of the objective function or linear independence of the constraint functions. Moreover, it is able to detect primal or dual infeasible problems and to provide infeasibility certificates. The method supports caching the factorization of the quasi-definite system and warm starting, making it efficient for solving parametrized problems arising in finance, control, and machine learning. Our open-source C implementation OSQP has a small footprint and is library-free. Numerical benchmarks on problems from several application domains show that OSQP is typically 10x faster than interior-point methods, especially when factorization caching or warm starting is used.
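
As an illustration of this workflow, here is a minimal sketch using the OSQP Python interface with placeholder problem data; after the one-off setup, updating only the vectors reuses the cached factorization:

```python
import numpy as np
import scipy.sparse as sparse
import osqp

# Toy QP data (placeholders): minimise 0.5 x'Px + q'x  subject to  l <= Ax <= u
P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u, warm_start=True)   # factorization carried out here
res = prob.solve()

# Parametrised re-solve: update the linear term only; the quasi-definite
# system is unchanged, so the cached factorization and warm start are reused.
prob.update(q=np.array([2.0, 3.0]))
res = prob.solve()
```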


This is joint work with Goran Banjac, Paul Goulart, Alberto Bemporad and Stephen Boyd.

  • Numerical Analysis Group Internal Seminar
31 October 2017
14:30
Athena Picarelli
Abstract

We provide the rate of convergence of general monotone numerical schemes for parabolic Hamilton-Jacobi-Bellman (HJB) equations in bounded domains with Dirichlet boundary conditions. The so-called "shaking coefficients" technique introduced by Krylov is used. This technique is based on a perturbation of the dynamics followed by a regularization step by convolution. When the equation is restricted to a bounded domain, the perturbed problem may no longer respect that restriction, so special treatment near the boundary is necessary.
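
For orientation, a generic model problem of the kind referred to above (with notation assumed here rather than taken from the talk) is:

```latex
% Generic parabolic HJB model problem on a bounded domain \mathcal{O} with
% Dirichlet data (notation assumed): b, \sigma and f are the controlled drift,
% diffusion and running cost, and
%   \mathcal{L}^a u = \tfrac12 \operatorname{tr}(\sigma\sigma^{\top}(x,a) D^2 u)
%                     + b(x,a)\cdot\nabla u.
\[
  -\partial_t u + \sup_{a \in \mathcal{A}} \bigl\{ -\mathcal{L}^a u - f(x,a) \bigr\} = 0
  \quad \text{in } (0,T) \times \mathcal{O},
\]
\[
  u = g \ \text{ on } (0,T) \times \partial\mathcal{O},
  \qquad
  u(T,\cdot) = \psi \ \text{ in } \mathcal{O}.
\]
```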

  • Numerical Analysis Group Internal Seminar
31 October 2017
14:00
Matthew Geleta
Abstract


The phenomenon of poor algorithmic scalability is a critical problem in large-scale machine learning and data science. This has led to a resurgence in the use of first-order (Hessian-free) algorithms from classical optimisation. One major drawback is that first-order methods tend to converge extremely slowly; however, there exist techniques for accelerating them efficiently.
    
The topic of this talk is the Dual Regularisation Nonlinear Acceleration (DRNA) algorithm (Geleta, 2017) for nonconvex optimisation. Numerical studies using the CUTEst optimisation problem set show that the method accelerates several nonconvex optimisation algorithms, including the quasi-Newton BFGS and steepest descent methods. DRNA compares favourably with a number of existing accelerators in these studies.
    
DRNA extends to the nonconvex setting a recent acceleration algorithm due to Scieur et al. (Advances in Neural Information Processing Systems 29, 2016). We have proven theorems relating DRNA to the Krylov subspace method GMRES, as well as to Anderson's acceleration method and the family of multi-secant quasi-Newton methods.
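
For background, the regularised nonlinear acceleration of Scieur et al. that DRNA builds on can be sketched as follows; this is a generic sketch of that earlier method, not of DRNA itself:

```python
import numpy as np

def regularised_nonlinear_acceleration(X, lam=1e-8):
    """Extrapolate from the iterates stored as columns X = [x_0, ..., x_k].

    Background sketch of regularised nonlinear acceleration (Scieur et al.,
    NIPS 2016): weights are obtained from a small regularised least-squares
    problem on the residuals and constrained to sum to one. Not DRNA itself.
    """
    R = np.diff(X, axis=1)                  # residuals r_i = x_{i+1} - x_i
    K = R.T @ R
    K += lam * np.linalg.norm(K, 2) * np.eye(K.shape[0])   # regularisation
    ones = np.ones(K.shape[0])
    c = np.linalg.solve(K, ones)
    c /= c.sum()                            # weights summing to one
    return X[:, :-1] @ c                    # extrapolated point
```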
