Computational Mathematics and Applications

Thu, 14/10/2010
14:00
Prof. Klaus Böhmer (Philipps University Marburg), Gibson Grd floor SR
We extend for the first time the linear discretization theory of Schaback, developed for meshfree methods, to nonlinear operator equations, relying heavily on methods of Böhmer, Vol I. There is no restriction to elliptic problems or to symmetric numerical methods like Galerkin techniques. Trial spaces can be arbitrary, but have to approximate the solution well, and testing can be weak or strong. We present Galerkin techniques as an example. On the downside, stability is not easy to prove for special applications, and numerical methods have to be formulated as optimization problems. Results of this discretization theory cover error bounds and convergence rates. These results remain valid for the general case of fully nonlinear elliptic differential equations of second order. Some numerical examples are added for illustration.
Thu, 21/10/2010
14:00
Prof. Axel Voigt (Dresden University of Technology), Gibson Grd floor SR
Starting from a Navier-Stokes-Cahn-Hilliard equation for a two-phase flow problem, we discuss efficient numerical approaches based on adaptive finite element methods. Various extensions of the model are discussed: a) we consider the model on implicitly described geometries, which is used to simulate the sliding of droplets over nano-patterned surfaces; b) we consider the effect of soluble surfactants and show their influence on tip splitting of droplets under shear flow; and c) we consider bijels as a new class of soft matter materials, in which colloidal particles are jammed on the fluid-fluid interface and affect the motion of the interface through an elastic force. This is joint work with Sebastian Aland (TU Dresden), John Lowengrub (UC Irvine) and Knut Erik Teigen (U Trondheim).
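The talk's solver is an adaptive finite element method for the coupled Navier-Stokes-Cahn-Hilliard system; as a far more modest illustration of the Cahn-Hilliard ingredient alone, the following is a 1D periodic pseudo-spectral sketch with a semi-implicit time step (all parameter values are illustrative, not taken from the talk):

```python
import numpy as np

def cahn_hilliard_1d(u0, eps=0.05, dt=1e-4, steps=200):
    """Semi-implicit pseudo-spectral stepping for the 1D Cahn-Hilliard
    equation u_t = (u^3 - u - eps^2 u_xx)_xx on [0, 2*pi], periodic BCs."""
    n = u0.size
    # angular wavenumbers 0, 1, ..., n/2-1, -n/2, ..., -1
    k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi
    u = u0.copy()
    for _ in range(steps):
        nl = np.fft.fft(u**3 - u)          # nonlinear part, explicit
        # stiff fourth-order term treated implicitly
        u_hat = (np.fft.fft(u) - dt * k**2 * nl) / (1.0 + dt * eps**2 * k**4)
        u = np.real(np.fft.ifft(u_hat))
    return u

rng = np.random.default_rng(0)
u0 = 0.1 * rng.standard_normal(128)        # small random initial data
u = cahn_hilliard_1d(u0)
```

The stiff fourth-order term is treated implicitly and the nonlinearity explicitly; the k = 0 mode is never modified, so the scheme conserves mass exactly.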
Thu, 28/10/2010
14:00
Prof. Yvan Notay (Université Libre de Bruxelles), Gibson Grd floor SR

Algebraic multigrid methods are nowadays popular for solving linear systems arising from the discretization of elliptic PDEs. They try to combine the efficiency of well-tuned specific schemes, like classical (geometric) multigrid methods, with the ease of use of general-purpose preconditioning techniques. This requires defining automatic coarsening procedures, which set up a hierarchy of coarse representations of the problem at hand using only information from the system matrix.

In this talk, we focus on aggregation-based algebraic multigrid methods. With these, the coarse unknowns are formed simply by grouping variables into disjoint subsets called aggregates.

In the first part of the talk, we consider symmetric M-matrices with nonnegative row sums. We show how aggregates can then be formed in such a way that the resulting method satisfies a prescribed bound on its convergence rate. That is, instead of the classical paradigm of applying an algorithm and then performing its analysis, the analysis is integrated into the setup phase so as to enforce minimal quality requirements. As a result, we obtain one of the first algebraic multigrid methods with a full convergence proof. The efficiency of the method is further illustrated by numerical results on finite difference and linear finite element discretizations of second-order elliptic PDEs; the set of test problems includes problems with jumps, anisotropy, re-entrant corners and/or unstructured meshes, sometimes with local refinement.

In the second part of the talk, we discuss nonsymmetric problems. We show how the previous approach can be extended to M-matrices with both row and column sums nonnegative, a class that includes some stable discretizations of convection-diffusion equations with divergence-free convective flow. Some (preliminary) numerical results are also presented.

This is joint work with Artem Napov.
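As a toy illustration of the aggregation idea (not the adaptive, quality-controlled aggregation of the talk), here is a two-grid cycle with fixed pairwise aggregates, piecewise-constant prolongation and Gauss-Seidel smoothing on a 1D Poisson matrix:

```python
import numpy as np

def poisson_1d(n):
    """Dense 1D Dirichlet Laplacian stencil matrix (illustrative size only)."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def gauss_seidel(A, x, b, sweeps=1):
    """Forward Gauss-Seidel smoothing sweeps, in place."""
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

n = 64
A = poisson_1d(n)
# pairwise aggregation: unknowns {2j, 2j+1} form aggregate j;
# P is the piecewise-constant prolongation over the aggregates
P = np.zeros((n, n // 2))
for j in range(n // 2):
    P[2*j, j] = P[2*j + 1, j] = 1.0
Ac = P.T @ A @ P                          # Galerkin coarse-grid matrix

rng = np.random.default_rng(1)
b = rng.standard_normal(n)
x = np.zeros(n)
res0 = np.linalg.norm(b)
for cycle in range(200):                  # two-grid cycles
    x = gauss_seidel(A, x, b)             # pre-smoothing
    r = b - A @ x
    x += P @ np.linalg.solve(Ac, P.T @ r) # coarse-grid correction
    x = gauss_seidel(A, x, b)             # post-smoothing
    if np.linalg.norm(b - A @ x) < 1e-8 * res0:
        break
```

Because the aggregates have bounded size, the two-grid convergence factor here stays bounded away from one independently of the mesh; the talk's contribution is to choose the aggregates so that such a bound is guaranteed a priori.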

Thu, 04/11/2010
14:00
Prof. Eric de Sturler (Virginia Tech), Gibson Grd floor SR
The Bi-Conjugate Gradient method (BiCG) is a well-known iterative solver (Krylov method) for linear systems of equations, proposed about 35 years ago and the basis for some of today's most successful iterative methods, such as BiCGSTAB. Nevertheless, its convergence behavior is poorly understood. The method satisfies a Petrov-Galerkin property, and hence its residual is constrained to a space of decreasing dimension (decreasing by one per iteration). However, that does not explain why, for many problems, the method converges in, say, a hundred or a few hundred iterations for problems involving a hundred thousand or a million unknowns. For many problems, BiCG converges not much more slowly than an optimal method like GMRES, even though it satisfies no optimality property. In fact, Anne Greenbaum showed that every three-term recurrence is, for the first (n/2)+1 iterations (for a system of dimension n), BiCG for some initial 'left' starting vector. So why does the method work so well in most cases? We will introduce Krylov methods, discuss the convergence of optimal methods, describe the BiCG method, and provide an analysis of its convergence behavior.
Thu, 11/11/2010
14:00
Prof. Jean-Paul Berrut (Université de Fribourg), Gibson Grd floor SR

Efficient linear and infinitely smooth approximation of functions from equidistant samples is a fascinating problem, at least since Runge showed in 1901 that it is not delivered by the interpolating polynomial.

In 1988, I suggested substituting linear rational interpolation for polynomial interpolation by replacing the denominator 1 with a polynomial that depends on the nodes, though not on the interpolated function. Unfortunately, the interpolant so obtained converges merely as the square of the mesh size. In 2007, Floater and Hormann gave, for every integer, a denominator that yields convergence of that prescribed order.

In this talk I shall present the corresponding interpolant as well as some of its applications to differentiation, integration and the solution of boundary value problems. This is joint work with Georges Klein and Michael Floater.
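A sketch of the barycentric rational interpolant with the Floater-Hormann weights on equispaced nodes; d is the order parameter (d = 0 recovers the 1988 interpolant), and the node count and test function below are illustrative:

```python
import numpy as np
from math import comb

def fh_weights(n, d):
    """Floater-Hormann barycentric weights for n+1 equispaced nodes."""
    w = np.empty(n + 1)
    for k in range(n + 1):
        s = sum(comb(d, k - i) for i in range(max(0, k - d), min(k, n - d) + 1))
        w[k] = (-1) ** (k - d) * s
    return w

def bary_interp(x, nodes, fvals, w):
    """Evaluate the barycentric rational interpolant at a scalar x."""
    diff = x - nodes
    exact = np.nonzero(diff == 0.0)[0]
    if exact.size:                    # x coincides with a node
        return fvals[exact[0]]
    c = w / diff
    return (c @ fvals) / c.sum()

n, d = 20, 3
nodes = np.linspace(-1.0, 1.0, n + 1)
f = lambda t: 1.0 / (1.0 + 25.0 * t * t)   # Runge's example
w = fh_weights(n, d)
vals = f(nodes)
```

By construction the interpolant blends local degree-d polynomial interpolants, so it reproduces polynomials of degree at most d exactly and converges at order d+1 as the mesh is refined.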

Thu, 18/11/2010
14:00
Mr. Andreas Potschka (University of Heidelberg), Gibson Grd floor SR
Optimization problems with time-periodic parabolic PDE constraints can arise in important chemical engineering applications, e.g., in periodic adsorption processes. I will present a novel direct numerical method for this problem class. The main numerical challenges are the high nonlinearity and high dimensionality of the discretized problem. The method is based on Direct Multiple Shooting and inexact Sequential Quadratic Programming with globalization of convergence based on natural level functions. I will highlight the use of a generalized Richardson iteration with a novel two-grid Newton-Picard preconditioner for the solution of the quadratic subproblems. At the end of the talk I will explain the principle of Simulated Moving Bed processes and conclude with numerical results for optimization of such a process.
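On a toy scale, the periodicity constraint at the heart of such problems is a shooting equation x0 = φ(T; x0) for the flow φ of the dynamics. The sketch below uses single shooting with a finite-difference Newton step on a scalar linear ODE; the talk's method uses Direct Multiple Shooting on the PDE-constrained problem, which this does not attempt to reproduce:

```python
import numpy as np

def flow(x0, T, rhs, steps=2000):
    """Integrate x' = rhs(t, x) from 0 to T with classical RK4."""
    h = T / steps
    t, x = 0.0, x0
    for _ in range(steps):
        k1 = rhs(t, x)
        k2 = rhs(t + h/2, x + h/2 * k1)
        k3 = rhs(t + h/2, x + h/2 * k2)
        k4 = rhs(t + h, x + h * k3)
        x += h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return x

# toy time-periodic problem: x' = -x + cos(t), period T = 2*pi
rhs = lambda t, x: -x + np.cos(t)
T = 2.0 * np.pi

# single shooting: solve F(x0) = flow(x0) - x0 = 0 by Newton,
# with a finite-difference approximation of F'
x0 = 0.0
for _ in range(10):
    F = flow(x0, T, rhs) - x0
    if abs(F) < 1e-12:
        break
    eps = 1e-6
    dF = (flow(x0 + eps, T, rhs) - (x0 + eps) - F) / eps
    x0 -= F / dF
# exact periodic orbit is x(t) = (cos t + sin t)/2, so x0 should tend to 1/2
```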
Thu, 25/11/2010
14:00
Dr. Vanessa Styles (University of Sussex), Rutherford Appleton Laboratory, nr Didcot
We propose and analyze a primal-dual active set method for local and non-local vector-valued Allen-Cahn variational inequalities. We show existence and uniqueness of a solution for the non-local vector-valued Allen-Cahn variational inequality in a formulation involving Lagrange multipliers for local and non-local constraints. Furthermore, convergence of the algorithm is shown by interpreting the approach as a semi-smooth Newton method, and numerical simulations are presented.
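As a minimal illustration of the primal-dual active set idea, here it is applied to a scalar obstacle problem with a discrete Laplacian (not the vector-valued Allen-Cahn setting of the talk; all data below is illustrative):

```python
import numpy as np

def pdas(A, b, c=1.0, maxit=50):
    """Primal-dual active set for  min 1/2 u^T A u - b^T u  s.t.  u >= 0.
    KKT system: A u - b - lam = 0,  u >= 0,  lam >= 0,  lam^T u = 0."""
    n = len(b)
    u = np.zeros(n)
    lam = np.zeros(n)
    active = np.zeros(n, dtype=bool)
    for it in range(maxit):
        new_active = (lam - c * u) > 0     # predict where u = 0
        if it > 0 and np.array_equal(new_active, active):
            break                          # active set settled: converged
        active = new_active
        inactive = ~active
        u = np.zeros(n)
        lam = np.zeros(n)
        # u = 0 on the active set; solve the reduced system elsewhere
        u[inactive] = np.linalg.solve(A[np.ix_(inactive, inactive)], b[inactive])
        lam[active] = (A @ u - b)[active]  # multiplier from stationarity
    return u, lam

# obstacle problem with a 1D finite-difference Laplacian
n = 50
h = 1.0 / (n + 1)
grid = np.linspace(h, 1.0 - h, n)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
b = 50.0 * np.sin(3.0 * np.pi * grid)      # forcing with a negative part
u, lam = pdas(A, b)
```

Each pass is one semi-smooth Newton step on the complementarity function lam - max(0, lam - c*u), which is why convergence of the active sets, once stabilized, certifies the KKT conditions.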
Thu, 02/12/2010
14:00
Dr. Julian Hall (University of Edinburgh), Rutherford Appleton Laboratory, nr Didcot
Implementations of the revised simplex method for solving large scale sparse linear programming (LP) problems are highly efficient for single-core architectures. This talk will discuss the limitations of the underlying techniques in the context of modern multi-core architectures, in particular with respect to memory access. Novel techniques for implementing the dual revised simplex method will be introduced, and their use in developing a dual revised simplex solver for multi-core architectures will be described.
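For orientation, a dense toy sketch of one revised simplex loop (maximization from a slack starting basis, Dantzig pricing, no anti-cycling). A real solver maintains and updates a factorization of the basis matrix rather than re-solving from scratch, and the sparse, multi-core case discussed in the talk is precisely where this dense sketch breaks down:

```python
import numpy as np

def revised_simplex_max(A, b, c, maxit=100):
    """Maximize c@x s.t. A@x <= b, x >= 0, with b >= 0 (dense toy version)."""
    m, n = A.shape
    T = np.hstack([A, np.eye(m)])            # structural + slack columns
    cost = np.concatenate([c, np.zeros(m)])
    basis = list(range(n, n + m))            # start from the slack basis
    for _ in range(maxit):
        B = T[:, basis]
        xB = np.linalg.solve(B, b)                 # basic solution
        y = np.linalg.solve(B.T, cost[basis])      # simplex multipliers
        reduced = cost - y @ T                     # reduced costs (pricing)
        reduced[basis] = 0.0
        j = int(np.argmax(reduced))
        if reduced[j] <= 1e-9:                     # no improving column
            break
        d = np.linalg.solve(B, T[:, j])            # entering direction
        mask = d > 1e-12
        if not mask.any():
            raise ValueError("unbounded LP")
        ratios = np.full(m, np.inf)
        ratios[mask] = xB[mask] / d[mask]          # ratio test
        basis[int(np.argmin(ratios))] = j          # basis exchange
    xB = np.linalg.solve(T[:, basis], b)
    x = np.zeros(n + m)
    x[basis] = xB
    return x[:n], cost[basis] @ xB

# textbook LP: max 3x + 5y s.t. x <= 4, 2y <= 12, 3x + 2y <= 18
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
c = np.array([3.0, 5.0])
x, obj = revised_simplex_max(A, b, c)      # optimum 36 at (2, 6)
```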