Forthcoming events in this series
Lattice rules in a nutshell
Abstract
Lattice rules are equal-weight quadrature/cubature rules for the approximation of multivariate integrals which use lattice points as the cubature nodes. The quality of such cubature rules is directly related to the discrepancy between the uniform distribution and the discrete distribution of these points in the unit cube, and so lattice points are a kind of low-discrepancy point set. Since low-discrepancy cubature rules look like Monte Carlo rules, except that they use cleverly chosen deterministic points, they are sometimes called quasi-Monte Carlo rules.
\\
\\
The talk starts by motivating the use of Monte Carlo and then quasi-Monte Carlo methods, after which some more recent developments are discussed. Topics include: worst-case errors in reproducing kernel Hilbert spaces, weighted spaces, and the construction of lattice rules and sequences.
\\
\\
In the minds of many, quasi-Monte Carlo methods seem to share the bad reputation of the Monte Carlo method: a brute force method of last resort with slow order of convergence, i.e., $O(N^{-1/2})$. This is not so.
While the standard rate of convergence for quasi-Monte Carlo is rather slow, being $O(N^{-1})$, the theory shows that these methods achieve the optimal rate of convergence in many interesting function spaces.
For example, in function spaces of higher smoothness one can achieve $O(N^{-\alpha})$ with $\alpha > 1$. This will be illustrated by numerical examples.
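To make the contrast concrete, here is a minimal sketch (in Python, not taken from the talk) comparing plain Monte Carlo with a rank-1 lattice rule $x_k = \{k z / N\}$ on a smooth periodic integrand; the generating vector $z$ is an illustrative, unoptimized choice.
\begin{verbatim}
# Minimal sketch: plain Monte Carlo vs. a rank-1 lattice rule on a smooth
# periodic integrand over [0,1]^d with known integral 1. The generating
# vector z is an illustrative choice, not an optimized one.
import numpy as np

rng = np.random.default_rng(0)
d = 4

def f(x):
    return np.prod(1.0 + 0.5 * np.sin(2.0 * np.pi * x), axis=1)  # integral = 1

z = np.array([1, 433461, 315689, 441789])    # illustrative generating vector
for N in [2**m for m in range(7, 14)]:
    mc = abs(f(rng.random((N, d))).mean() - 1.0)
    pts = np.outer(np.arange(N), z) % N / N  # lattice points x_k = frac(k z / N)
    qmc = abs(f(pts).mean() - 1.0)
    print(f"N={N:6d}  MC error={mc:.1e}  lattice error={qmc:.1e}")
\end{verbatim}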
Discontinuous Galerkin Methods for Modeling the Coastal Ocean
Abstract
The coastal ocean contains a diversity of physical and biological
processes, often occurring at vastly different scales. In this talk,
we will outline some of these processes and their mathematical
description. We will then discuss how finite element methods are used
in coastal ocean modeling and recent research into
improvements to these algorithms. We will also highlight some of the
successes of these methods in simulating complex events, such as
hurricane storm surges. Finally, we will outline several interesting
challenges which are ripe for future research.
The FEAST eigenvalue algorithm and solver with new perspectives for first-principle electronic structure calculations
Abstract
FEAST is a new general purpose eigenvalue algorithm that takes its inspiration from the density-matrix representation and contour integration technique in quantum mechanics [Phys. Rev. B 79, 115112, (2009)], and it can be understood as a subspace iteration algorithm using approximate spectral projection [http://arxiv.org/abs/1302.0432 (2013)]. The algorithm combines simplicity and efficiency and offers many important capabilities for achieving high performance, robustness, accuracy, and multi-level parallelism on modern computing platforms. FEAST is also the name of a comprehensive numerical library package which currently (v2.1) focuses on solving symmetric eigenvalue problems on both shared-memory architectures (i.e. the FEAST-SMP version -- also integrated into Intel MKL since Feb 2013) and distributed architectures (i.e. the FEAST-MPI version), offering three levels of parallelism (MPI-MPI-OpenMP).
\\
\\
In this presentation, we aim at expanding the current capabilities of the FEAST eigenvalue algorithm and developing a unified numerical approach for solving linear, non-linear, symmetric and non-symmetric eigenvalue problems. The resulting algorithms retain many of the properties of the symmetric FEAST, including the multi-level parallelism. We will also outline how the development roadmap for FEAST is closely tied to the need to address the variety of eigenvalue problems arising in computational nanosciences. Consequently, FEAST will also be presented beyond the "black-box" solver, as a fundamental modeling framework for electronic structure calculations.
\\
\\
Three problems will be presented and discussed: (i) a highly efficient and robust FEAST-based alternative to the traditional self-consistent field (SCF) procedure for solving the non-linear eigenvector problem [J. Chem. Phys. 138, p194101 (2013)]; (ii) a fundamental and practical solution of the exact muffin-tin problem for performing both accurate and scalable all-electron electronic structure calculations using FEAST on parallel architectures [Comp. Phys. Comm. 183, p2370 (2012)]; (iii) a FEAST-spectral-based time-domain propagation technique for performing real-time TDDFT simulations. In order to illustrate the efficiency of the FEAST framework, numerical examples are provided for various molecules and carbon-based materials using our in-house all-electron real-space FEM implementation and both the DFT/Kohn-Sham/LDA and TDDFT/ALDA approaches.
Compressive Imaging: Stable Sampling Strategies using Shearlets
Abstract
In imaging science, efficient acquisition of images from few samples, with the possibility to precisely recover the complete image, is a topic of significant interest. The area of compressed sensing, which in particular advocates random sampling strategies, has already had a tremendous impact on both theory and applications. The necessary requirement for such techniques to be applicable is the sparsity of the original data within some transform domain. Recovery is then achieved by, for instance, $\ell_1$ minimization. Various applications however do not allow complete freedom in the choice of the samples. Take Magnetic Resonance Imaging (MRI) for example, which only provides access to Fourier samples. For this particular application, empirical results have nevertheless shown the superior performance of compressed sensing techniques.
\\
\\
In this talk, we focus on sparse sampling strategies under the constraint that only Fourier samples can be accessed. Since images -- and in particular images from MRI -- are governed by anisotropic features and shearlets do provide optimally sparse approximations of those, compactly supported shearlet systems will be our choice for the reconstruction procedure. Our sampling strategy then exploits a careful variable density sampling of the Fourier samples with $\ell_1$-analysis based reconstruction using shearlets. Our main result provides success guarantees and shows that this sampling and reconstruction strategy is optimal.
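As a minimal illustration of the variable density idea only (the shearlet-based reconstruction itself is beyond a short sketch), the following fragment draws Fourier sample locations with probability decaying in the frequency radius; the inverse power-law density is an assumption for illustration, not the sampling scheme of the talk.
\begin{verbatim}
# Minimal sketch of variable-density Fourier sampling: low frequencies are
# sampled densely, high frequencies sparsely. Illustrative density only.
import numpy as np

rng = np.random.default_rng(1)
n = 256                                        # image side length
kx, ky = np.meshgrid(np.fft.fftfreq(n) * n, np.fft.fftfreq(n) * n)
radius = np.hypot(kx, ky)                      # distance from the zero frequency
prob = np.minimum(1.0, 8.0 / (1.0 + radius))   # assumed power-law density
mask = rng.random((n, n)) < prob               # True where a sample is taken
print(f"sampling rate: {mask.mean():.1%} of all Fourier coefficients")
\end{verbatim}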
\\
\\
This is joint work with Wang-Q Lim (Technische Universit\"at Berlin).
Numerical Modeling of Vesicles: Inertial Flow and Electric Fields
Abstract
The behavior of lipid vesicles is due to a complex interplay between the mechanics of the vesicle membrane, the surrounding fluids, and any external fields which may be present. In this presentation two aspects of vesicle behavior are explored: vesicles in inertial flows and vesicles exposed to electric fields.
The first half of the talk will present work done investigating the behavior of two-dimensional vesicles in inertial flows. A level set representation of the interface is coupled to a Navier-Stokes projection solver. The standard projection method is modified to take into account not only the volume incompressibility of the fluids but also the surface incompressibility of the vesicle membrane. The closest point level set method is used to compute the surface tension field needed to enforce this surface incompressibility. Results indicate that as inertial effects increase, vesicles change from a tumbling behavior to a stable tank-treading configuration. The numerical results bear a striking similarity to those for rigid particles in inertial flows. Using rigid particles as a guide, scaling laws are determined for vesicle behavior in inertial flows.
The second half of the talk will move to immersed interface solvers for three-dimensional vesicles exposed to electric fields. The jump conditions for pressure and fluid velocity will be developed for the three-dimensional Stokes flow or constant density Navier-Stokes equations assuming a piecewise continuous viscosity and an inextensible interface. An immersed interface solver is implemented to determine the fluid and membrane state. Analytic test cases have been developed to examine the convergence of the fluids solver.
Time permitting, an immersed interface solver for the electric field around a vesicle exposed to an external field will be discussed. Unlike other multiphase systems, vesicle membranes carry a time-varying potential which must be taken into account. This potential is implicitly determined along with the overall electric potential field.
Superconvergence for Discontinuous Galerkin solutions: Making it Useful
Abstract
The discontinuous Galerkin (DG) method has recently become one of the most widely researched and utilized discretization methodologies for solving problems in science and engineering. It is fundamentally based upon the mathematical framework of variational methods, and provides hope that computationally fast, efficient and robust methods can be constructed for solving real-world problems. By not requiring the solution to be continuous across element boundaries, DG provides a flexibility that can be exploited both for geometric and solution adaptivity and for parallelization. This freedom comes at a cost, however: lack of smoothness across elements can hamper simulation post-processing such as feature extraction and visualization. These limitations can be overcome by taking advantage of an additional property of DG - that of superconvergence. Superconvergence can aid in addressing the lack of continuity through the development of Smoothness-Increasing Accuracy-Conserving (SIAC) filters. These filters respect the mathematical properties of the data while providing levels of smoothness so that commonly used visualization tools can be used appropriately, accurately, and efficiently. In this talk, the importance of superconvergence in applications such as visualization will be discussed, as well as ways of overcoming the mathematical barriers to making superconvergence useful in applications.
Barycentric Interpolation
Abstract
In this talk I will focus on the method of barycentric interpolation, which ties in with the ideas that August Ferdinand Möbius published in his seminal work "Der barycentrische Calcül" in 1827. For univariate data, this gives a special kind of rational interpolant which is guaranteed to have no poles and has favourable approximation properties.
I further discuss how to extend this idea to bivariate data, where it leads to the concept of generalized barycentric coordinates and to an efficient method for interpolating data given at the vertices of an arbitrary polygon.
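For the univariate case, here is a minimal sketch of the simplest pole-free member of this family, Berrut's rational interpolant with weights $w_k = (-1)^k$, applied to Runge's function on equispaced nodes (an illustrative example, not one from the talk):
\begin{verbatim}
import numpy as np

def berrut(xk, fk, x):
    """Evaluate Berrut's pole-free rational barycentric interpolant at x."""
    w = (-1.0) ** np.arange(len(xk))        # Berrut weights w_k = (-1)^k
    d = x[:, None] - xk[None, :]
    hit = np.isclose(d, 0.0)                # x coincides with a node
    d[hit] = 1.0                            # dummy value, fixed up below
    c = w / d
    v = (c @ fk) / c.sum(axis=1)            # barycentric formula
    rows = hit.any(axis=1)
    v[rows] = fk[hit.argmax(axis=1)[rows]]  # exact values at the nodes
    return v

runge = lambda t: 1.0 / (1.0 + 25.0 * t**2)
xk = np.linspace(-1.0, 1.0, 21)             # equispaced nodes
x = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(berrut(xk, runge(xk), x) - runge(x)))
print(f"max error: {err:.2e}")              # no Runge-type blow-up
\end{verbatim}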
Scalable Data Analytics
Abstract
Very-large scale data analytics are an alleged golden goose for efforts in parallel and distributed computing, and yet contemporary statistics remain somewhat of a dark art for the uninitiated. In this presentation, we are going to take a mathematical and algorithmic look beyond the veil of Big Data by studying the structure of the algorithms and data, and by analyzing the fit to existing and proposed computer systems and programming models. Towards highly scalable kernels, we will also discuss some of the promises and challenges of approximation algorithms using randomization, sampling, and decoupled processing, touching some contemporary topics in parallel numerics.
The exponentially convergent trapezoid rule
Abstract
It is well known that the trapezoid rule converges geometrically when applied to analytic functions on periodic intervals or the real line. The mathematics and history of this phenomenon are reviewed and it is shown that far from being a curiosity, it is linked with powerful algorithms all across scientific computing, including double exponential and Gauss quadrature, computation of inverse Laplace transforms, special functions, computational complex analysis, the computation of functions of matrices and operators, rational approximation, and the solution of partial differential equations.
This talk represents joint work with Andre Weideman of the University of Stellenbosch.
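The phenomenon is easy to observe; a minimal sketch with an illustrative integrand (not an example taken from the talk):
\begin{verbatim}
# The N-point trapezoid rule applied to a periodic analytic function
# converges geometrically, far faster than the generic O(N^{-2}) rate.
import numpy as np

f = lambda t: 1.0 / (2.0 + np.cos(t))        # analytic and 2*pi-periodic
exact = 2.0 * np.pi / np.sqrt(3.0)           # closed form of the integral
for N in [4, 8, 16, 32, 64]:
    t = 2.0 * np.pi * np.arange(N) / N       # equispaced nodes on [0, 2*pi)
    approx = 2.0 * np.pi / N * np.sum(f(t))  # periodic trapezoid rule
    print(f"N={N:3d}  error={abs(approx - exact):.2e}")
\end{verbatim}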
The How and Why of Balancing
Abstract
We consider the problem of taking a matrix A and finding diagonal matrices D and E such that the rows and columns of B = DAE satisfy some specific constraints. Examples of constraints are that
* the row and column sums of B should all equal one;
* the norms of the rows and columns of B should all be equal;
* the row and column sums of B should take values specified by vectors p and q.
Simple iterative algorithms for solving these problems have been known for nearly a century. We provide a simple framework for describing these algorithms that allows us to develop robust convergence results and a straightforward approach to accelerating the rate of convergence; a sketch of the classical iteration for the first constraint appears below.
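A minimal sketch of that classical iteration (often attributed to Sinkhorn and Knopp), assuming an entrywise positive matrix:
\begin{verbatim}
# Alternately rescale rows and columns of a positive matrix until
# B = D A E has unit row and column sums (doubly stochastic).
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((5, 5)) + 0.1        # entrywise positive matrix
d = np.ones(5)
e = np.ones(5)
for _ in range(200):
    d = 1.0 / (A @ e)               # scale rows to sum to one
    e = 1.0 / (A.T @ d)             # scale columns to sum to one
B = np.diag(d) @ A @ np.diag(e)
print(np.abs(B.sum(axis=1) - 1).max(),   # row-sum deviation
      np.abs(B.sum(axis=0) - 1).max())   # column-sum deviation
\end{verbatim}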
We describe some of the diverse applications of balancing with examples from preconditioning, clustering, network analysis and psephology.
This is joint work with Kerem Akartunali (Strathclyde), Daniel Ruiz (ENSEEIHT, Toulouse) and Bora Ucar (ENS, Lyon).
Introduction to tensor numerical methods in higher dimensions
Abstract
Tensor numerical methods provide efficient separable representations of multivariate functions and operators discretized on large $n^{\otimes d}$-grids, providing a basis for the solution of $d$-dimensional PDEs with linear complexity scaling in the dimension, $O(d n)$. Modern methods of separable approximation combine the canonical, Tucker, matrix product states (MPS) and tensor train (TT) low-parametric data formats.
\\
\\
The recent quantized-TT (QTT) approximation method is proven to provide logarithmic data compression on a wide class of functions and operators. Furthermore, QTT approximation makes it possible to represent multi-dimensional steady-state and dynamical equations in quantized tensor spaces with log-volume complexity scaling in the full-grid size, $O(d \log n)$, instead of $O(n^d)$.
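A minimal sketch of the quantization step (an illustration, not the speaker's software): reshape $2^d$ samples of a function into a $2 \times 2 \times \cdots \times 2$ tensor and read off its QTT ranks by successive truncated SVDs; smooth, structured functions yield small ranks.
\begin{verbatim}
import numpy as np

def qtt_ranks(vec, eps=1e-10):
    """TT ranks of a length-2^d vector quantized to a 2 x ... x 2 tensor."""
    d = int(np.log2(vec.size))
    ranks, c, r = [], vec.reshape(1, -1), 1
    for _ in range(d - 1):
        c = c.reshape(2 * r, -1)                 # next unfolding
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))  # truncated TT rank
        ranks.append(r)
        c = s[:r, None] * vt[:r]                 # carry the remainder
    return ranks

x = np.linspace(0.0, 1.0, 2**12, endpoint=False)
print("exp:  ", qtt_ranks(np.exp(x)))            # exactly rank 1 in QTT
print("sin:  ", qtt_ranks(np.sin(50.0 * x)))     # rank 2
print("noise:", qtt_ranks(np.random.default_rng(3).random(2**12)))
\end{verbatim}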
\\
\\
We show how grid-based tensor approximation in quantized tensor spaces applies to the super-compressed representation of functions and operators (super-fast convolution and FFT, spectrally close preconditioners) as well as to hard problems arising in electronic structure calculations, such as multi-dimensional convolution and the factorization of two-electron integrals in the framework of Hartree-Fock calculations. The QTT method also applies to the time-dependent molecular Schr{\"o}dinger, Fokker-Planck and chemical master equations.
\\
\\
Numerical tests are presented indicating the efficiency of tensor methods in approximation of functions, operators and PDEs in many dimensions.
Optimization meets Statistics: Fast global convergence for high-dimensional statistical recovery
Abstract
Many methods for solving high-dimensional statistical inverse problems are based on convex optimization problems formed by the weighted sum of a loss function with a norm-based regularizer.
\\
Particular examples include $\ell_1$-based methods for sparse vectors and matrices, nuclear norm for low-rank matrices, and various combinations thereof for matrix decomposition and robust PCA. In this talk, we describe an interesting connection between computational and statistical efficiency, in particular showing that the same conditions that guarantee that an estimator has good statistical error can also be used to certify fast convergence of first-order optimization methods up to statistical precision.
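As a minimal sketch of the $\ell_1$ case (an illustration of the class of first-order methods in question, not the estimator or analysis of the paper), proximal gradient descent on the Lasso objective, with illustrative problem sizes and regularization level:
\begin{verbatim}
# ISTA on 0.5*||y - X w||^2 + lam*||w||_1 for a sparse ground truth.
import numpy as np

rng = np.random.default_rng(4)
n, p, s = 200, 500, 10
X = rng.standard_normal((n, p)) / np.sqrt(n)
w_true = np.zeros(p)
w_true[:s] = 1.0                                 # s-sparse ground truth
y = X @ w_true + 0.05 * rng.standard_normal(n)

lam = 0.02                                       # illustrative choice
step = 1.0 / np.linalg.norm(X, 2) ** 2           # 1/L, L = Lipschitz constant
w = np.zeros(p)
for _ in range(500):
    g = X.T @ (X @ w - y)                        # gradient of the smooth part
    z = w - step * g
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
print(f"estimation error ||w - w*||_2 = {np.linalg.norm(w - w_true):.3f}")
\end{verbatim}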
\\
\\
Joint work with Alekh Agarwal and Sahand Negahban. Preprint (to appear in Annals of Statistics):
\\
http://www.eecs.berkeley.edu/~wainwrig/Papers/AgaNegWai12b_SparseOptFul…
High frequency acoustic scattering by screens: computation and analysis
Abstract
We address, in the study of acoustic scattering by 2D and 3D planar screens, three inter-related and challenging questions, each focusing particularly on the formulation of these problems as boundary integral equations. The first question is, roughly: does it make sense to consider scattering by screens which occupy arbitrary open sets in the plane, and do different screens cause the same scattering if the open sets they occupy have the same closure? This question is intimately related to rather deep properties of fractional Sobolev spaces on general open sets, and to the capacity and Hausdorff dimension of their boundary. The second question is whether, in answering the first, we can understand explicitly and quantitatively the influence of the frequency of the incident time-harmonic wave. It turns out that we can: the problems have variational formulations with sesquilinear forms which are bounded and coercive on fractional Sobolev spaces, and we can determine explicitly how the continuity and coercivity constants depend on the frequency. The third question is: can we design computational methods, adapted to the highly oscillatory solution behaviour at high frequency, which have computational cost essentially independent of the frequency? The answer here is that in 2D we can provably achieve solutions to any desired accuracy using a number of degrees of freedom which grows only logarithmically with the frequency, and that some extension to 3D looks promising.
\\
\\
This is joint work with Dave Hewett, Steve Langdon, and Ashley Twigger, all at Reading.
Pointwise convergence of the feasibility violation for Moreau-Yosida regularized optimal control problems
Abstract
Subtitle:
And applications to problems involving pointwise constraints on the gradient of the state on non-smooth polygonal domains
\\
\\
In this talk we are concerned with an analysis of Moreau-Yosida regularization of pointwise state constrained optimal control problems. As recent analysis has revealed, the convergence of the primal variables is dominated by the reduction of the feasibility violation in the maximum norm.
\\
\\
We will use a new method to derive convergence of the feasibility violation in the maximum norm, improving the known convergence rates.
\\
\\
Finally we will employ these techniques to analyze optimal control problems governed by elliptic PDEs with pointwise constraints on the gradient of the state on non-smooth polygonal domains. For these problems, standard analysis fails because the control-to-state mapping does not yield sufficient regularity for the states to be continuously differentiable on the closure of the domain. Nonetheless, these problems are well posed. In particular, the results of the first part will be used to derive convergence rates for the primal variables of the regularized problem.
On the Origins of Domain Decomposition Methods
Abstract
Domain decomposition methods have been developed in various contexts, and with very different goals in mind. I will start my presentation with the historical inventions of the Schwarz method, the Schur methods and Waveform Relaxation. I will show for a simple model problem how all these domain decomposition methods function, give precise results for the model problem, and also explain the most general convergence results available currently for these methods. I will conclude with the parareal algorithm as a new variant for parallelization of evolution problems in the time direction.
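A minimal sketch of the alternating Schwarz method on the model problem $-u'' = 1$ on $(0,1)$ with two overlapping subdomains (an illustrative setup, not necessarily the model problem of the talk):
\begin{verbatim}
# Each sweep solves one subdomain exactly, using the latest values of the
# other subdomain as Dirichlet data at the artificial interface.
import numpy as np

m = 101                                    # global grid points, h = 1/(m-1)
h = 1.0 / (m - 1)
u = np.zeros(m)
xg = np.linspace(0.0, 1.0, m)
exact = 0.5 * xg * (1.0 - xg)              # exact solution x(1-x)/2

def solve_dirichlet(n_int, left, right):
    """Solve -u'' = 1 on a subinterval with given boundary values."""
    A = (np.diag(2.0 * np.ones(n_int)) - np.diag(np.ones(n_int - 1), 1)
         - np.diag(np.ones(n_int - 1), -1)) / h**2
    b = np.ones(n_int)
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

a, c = 60, 40          # subdomains [0, a] and [c, m-1], overlap of 20 cells
for it in range(10):
    u[1:a] = solve_dirichlet(a - 1, 0.0, u[a])                # left solve
    u[c + 1:m - 1] = solve_dirichlet(m - 2 - c, u[c], 0.0)    # right solve
    print(f"sweep {it + 1}: error = {np.max(np.abs(u - exact)):.2e}")
\end{verbatim}
The printed errors contract by a fixed factor per sweep, the geometric convergence characteristic of overlapping Schwarz methods.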
A hybrid finite element-Lagrangian marker technique for geodynamics: Spatial discretisations, implicit solvers and numerics
Abstract
Over million-year time scales, the evolution and deformation of rocks on Earth can be described by the equations governing the motion of a very viscous, incompressible fluid. In this regime, the rocks within the crust and mantle lithosphere exhibit both brittle and ductile behaviour. Collectively, these rheologies result in an effective viscosity which is non-linear and may exhibit extremely large variations in space. In the context of geodynamics applications, we are interested in studying large deformation processes both prior and subsequent to the onset of material failure.
\\
\\
Here I introduce a hybrid finite element (FE) - Lagrangian marker discretisation which has been specifically designed to enable the numerical simulation of geodynamic processes. In this approach, a mixed FE formulation is used to discretise the incompressible Stokes equations, whilst the markers are used to discretise the material lithology.
\\
\\
First I will show the a priori error estimates associated with this hybrid discretisation and demonstrate the convergence characteristics via several numerical examples. Then I will discuss several multi-level preconditioning strategies for the saddle point problem which are robust with respect to both large variations in viscosity and the underlying topological structure of the viscosity field.
\\
Finally, I will describe an extension of the multi-level preconditioning strategy that enables high-resolution, three-dimensional simulations to be performed with a small memory footprint and which is performant on multi-core, parallel architectures.
Multi-task Learning and Structured Sparsity
Abstract
We discuss the problem of estimating a structured matrix with a large number of elements. A key motivation for this problem occurs in multi-task learning. In this case, the columns of the matrix correspond to the parameters of different regression or classification tasks, and there is structure due to relations between the tasks. We present a general method to learn the tasks' parameters as well as their structure. Our approach is based on solving a convex optimization problem involving a data term and a penalty term. We highlight different types of penalty terms which are of practical and theoretical importance. They implement structural relations between the tasks and achieve sparse representations of the parameters. We address computational issues as well as the predictive performance of the method. Finally, we discuss how these ideas can be extended to learn non-linear task functions by means of reproducing kernels.
Packing Ellipsoids with Overlap
Abstract
Problems of packing shapes with maximal density, sometimes into a
container of restricted size, are classical in discrete
mathematics. We describe here the problem of packing a given set of
ellipsoids of different sizes into a finite container, in a way that
allows overlap but that minimizes the maximum overlap between adjacent
ellipsoids. We describe a bilevel optimization algorithm for finding
local solutions of this problem, both in the general case and in the simpler
special case in which the ellipsoids are spheres. Tools from conic
optimization, especially semidefinite programming, are key to the
algorithm. Finally, we describe the motivating application -
chromosome arrangement in cell nuclei - and compare the computational
results obtained with this approach to experimental observations.
\\
\\
This talk represents joint work with Caroline Uhler (IST Austria).
A locally adaptive Cartesian finite-volume framework for solving PDEs on surfaces
Abstract
We describe our current efforts to develop finite volume
schemes for solving PDEs on logically Cartesian locally adapted
surface meshes. Our methods require an underlying smooth or
piecewise smooth grid transformation from a Cartesian computational
space to 3d surface meshes, but do not rely on analytic metric terms
to obtain second order accuracy. Our hyperbolic solvers are based on
Clawpack (R. J. LeVeque) and the parabolic solvers are based on a
diamond-cell approach (Y. Coudi\`ere, T. Gallou\"et, R. Herbin et
al). If time permits, I will also discuss Discrete Duality Finite
Volume methods for solving elliptic PDEs on surfaces.
\\
\\
To do local adaptation and time subcycling in regions requiring high
spatial resolution, we are developing ForestClaw, a hybrid adaptive
mesh refinement (AMR) code in which non-overlapping fixed-size
Cartesian grids are stored as leaves in a forest of quad- or
oct-trees. The tree-based code p4est (C. Burstedde) manages the
multi-block connectivity and is highly scalable in realistic
applications.
\\
\\
I will present results from reaction-diffusion systems on surface
meshes, and test problems from the atmospheric sciences community.
Domain decomposition for total variation regularisation and applications
Abstract
Domain decomposition methods were introduced as techniques for solving partial differential equations based on a decomposition of the spatial domain of the problem into several subdomains. The initial equation restricted to the subdomains defines a sequence of new local problems. The main goal is to solve the initial equation via the solution of the local problems. This procedure induces a dimension reduction which is largely responsible for the success of such methods. Indeed, one of the principal motivations is the formulation of solvers which can be easily parallelized.
In this presentation we shall develop a domain decomposition algorithm for the minimization of functionals with total variation constraints. In this case the interesting solutions may be discontinuous, e.g., along curves in 2D. These discontinuities may cross the interfaces of the domain decomposition patches. Hence, the crucial difficulty is the correct treatment of the interfaces, preserving crossing discontinuities while correctly matching the solution where it is continuous. I will present our domain decomposition strategy, including convergence results for the algorithm and numerical examples for its application in image inpainting and magnetic resonance imaging.
Optimally Blended Spectral-Finite Element Scheme for Wave Propagation and Non-Standard Reduced Integration
Abstract
We study the dispersion and dissipation of the numerical scheme obtained by taking a weighted average of the consistent (finite element) mass matrix and the lumped (spectral element) mass matrix, in the small wave number limit. We find and prove that for the optimum blending the resulting scheme
(a) provides $2p+4$ order accuracy for the $p$th order method (two orders more accurate than the finite and spectral element schemes);
(b) has an absolute accuracy which is $\mathcal{O}(p^{-3})$ and $\mathcal{O}(p^{-2})$ times better than that of the pure finite and spectral element schemes, respectively;
(c) tends to exhibit phase lag.
Moreover, we show that the optimally blended scheme can be efficiently implemented merely by replacing the usual Gaussian quadrature rule used to assemble the mass and stiffness matrices by novel nonstandard quadrature rules which are also derived.
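For $p = 1$, where the spectral element mass matrix is the usual lumped mass matrix, a short computation with the discrete symbols suggests that the equal-weight blend cancels the leading dispersion error; a minimal sketch of that comparison (the optimal blending and quadrature rules for general $p$ are the subject of the talk):
\begin{verbatim}
# Discrete dispersion relation (omega*h)^2 = K(theta)/M(theta) for linear
# elements on u_tt = u_xx, theta = k*h: consistent mass, lumped mass, and
# their 50/50 blend. The blended error decays two orders faster in theta.
import numpy as np

for theta in [0.4, 0.2, 0.1, 0.05]:
    K = 2.0 * (1.0 - np.cos(theta))             # stiffness symbol times h
    m_cons = (4.0 + 2.0 * np.cos(theta)) / 6.0  # consistent mass symbol / h
    m_lump = 1.0                                # lumped mass symbol / h
    m_blend = 0.5 * (m_cons + m_lump)           # equal-weight blend (p = 1)
    errs = [abs(K / m - theta**2) for m in (m_cons, m_lump, m_blend)]
    print(f"kh={theta:5.2f}  consistent={errs[0]:.2e}  "
          f"lumped={errs[1]:.2e}  blended={errs[2]:.2e}")
\end{verbatim}
Halving $kh$ reduces the consistent and lumped errors by a factor of 16 ($O(\theta^4)$) but the blended error by a factor of 64 ($O(\theta^6)$), consistent with the $2p+4$ claim for $p=1$.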
On the design and error control of higher order in time ALE formulations
Abstract
ALE formulations are useful when approximating solutions of problems in deformable domains, such as fluid-structure interactions. For realistic simulations involving fluids in 3d, it is important that the ALE method is at least second-order accurate. Second order ALE methods in time, without any constraint on the time step, do not exist in the literature, and the role of the so-called geometric conservation law (GCL) for stability and accuracy is not clear. We propose discontinuous Galerkin (dG) methods of any order in time for a time dependent advection-diffusion model problem in moving domains. We prove that our proposed schemes are unconditionally stable and that the conservative and non-conservative formulations are equivalent. The same results remain true when appropriate quadrature is used for the approximation of the involved integrals in time. The analysis hinges on the validity of a discrete Reynolds' identity and generalises the GCL to higher order methods. We also prove that the computationally less intensive Runge-Kutta-Radau (RKR) methods of any order are stable, subject to a mild ALE constraint. A priori and a posteriori error analyses are provided. The final estimates are of optimal order of accuracy. Numerical experiments confirm and complement our theoretical results.
This is joint work with Andrea Bonito and Ricardo H. Nochetto.
Discontinuous Galerkin Methods for Surface PDEs
Abstract
The Discontinuous Galerkin (DG) method has been used to solve a wide range of partial differential equations. Especially for advection-dominated problems it has proven very reliable and accurate. But even for elliptic problems it has advantages over continuous finite element methods, especially when parallelization and local adaptivity are considered.
In this talk we will first present a variation of the compact DG method for elliptic problems with varying coefficients. For this method we can prove stability on general grids, providing a computable bound for all free parameters. We developed this method to solve the compressible Navier-Stokes equations and demonstrated its efficiency in the case of meteorological problems using our implementation within the DUNE software framework, comparing it to the operational code COSMO used by the German weather service.
After introducing the notation and analysis for DG methods in Euclidean spaces, we will present a priori error estimates for the DG method on surfaces. The surface finite element method with continuous ansatz functions was analysed a few years ago by Dziuk/Elliott; we extend their results to the interior penalty DG method, where the non-smooth approximation of the surface introduces some additional challenges.
Numerical Methods for PDEs with Random Coefficients
Abstract
Partial differential equations (PDEs) with random coefficients are used in computer simulations of physical processes in science, engineering and industry applications with uncertain data. The goal is to obtain quantitative statements on the effect of input data uncertainties for a comprehensive evaluation of simulation results. However, these equations are formulated on a physical domain coupled with a sample space generated by random parameters, and are thus very computationally intensive.
We outline the key computational challenges by discussing a model elliptic PDE of single phase subsurface flow in random media. In this application the coefficients are often rough, highly variable and require a large number of random parameters which puts a limit on all existing discretisation methods. To overcome these limits we employ multilevel Monte Carlo (MLMC), a novel variance reduction technique which uses samples computed on a hierarchy of physical grids. In particular, we combine MLMC with mixed finite element discretisations to calculate travel times of particles in groundwater flows.
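A minimal sketch of the MLMC idea on a toy problem (Euler-Maruyama paths of geometric Brownian motion, estimating $E[S_T]$; an illustrative stand-in, not the subsurface flow application): correction terms $E[P_\ell - P_{\ell-1}]$ are estimated with coupled samples, using far fewer samples on the expensive fine levels.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)

def payoff_pair(level, n_samples, T=1.0, s0=1.0, mu=0.05, sigma=0.2):
    """Coupled fine/coarse Euler-Maruyama estimates of S_T, same noise."""
    nf = 2**level
    dWf = rng.standard_normal((n_samples, nf)) * np.sqrt(T / nf)
    sf = s0 * np.prod(1.0 + mu * T / nf + sigma * dWf, axis=1)   # fine path
    if level == 0:
        return sf, np.zeros(n_samples)
    dWc = dWf.reshape(n_samples, nf // 2, 2).sum(axis=2)  # same Brownian path
    sc = s0 * np.prod(1.0 + mu * T * 2 / nf + sigma * dWc, axis=1)
    return sf, sc

L = 6
est = 0.0
for level in range(L + 1):
    n = 4**(L - level) * 100          # fewer samples on expensive fine levels
    fine, coarse = payoff_pair(level, n)
    est += np.mean(fine - coarse)     # telescoping sum of corrections
print(f"MLMC estimate {est:.4f}  vs exact E[S_T] = {np.exp(0.05):.4f}")
\end{verbatim}
Because the fine and coarse paths share the same Brownian increments, the corrections have small variance on fine levels, which is what allows the steeply decreasing sample counts.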
For coefficients which can be parameterised by a small number of random variables we employ spectral stochastic Galerkin (SG) methods which give rise to a coupled system of deterministic PDEs. Since the standard SG formulation of the model elliptic PDE requires expensive matrix-vector products we reformulate it as a convection-diffusion problem with random convective velocity. We construct and analyse block-diagonal preconditioners for the nonsymmetric Galerkin matrix for use with Krylov subspace methods such as GMRES.