Forthcoming events in this series


Thu, 27 Feb 2014

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Alternating minimal energy methods for linear systems in higher dimensions

Dr Dmitry Savostyanov
(University of Southampton)
Abstract

Where high-dimensional problems are concerned, not many algorithms can break the curse of dimensionality and solve them efficiently and reliably. Among those that can, tensor product algorithms, which implement the idea of separation of variables for multi-index arrays (tensors), seem to be the most general and also very promising. They originated in quantum physics and chemistry and descend broadly from the density matrix renormalization group (DMRG) and matrix product states (MPS) formalisms. The same tensor formats were recently re-discovered in the numerical linear algebra (NLA) community as the tensor train (TT) format.
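
To fix ideas, here is a minimal sketch (ours, not from the talk) of TT compression of a multi-index array by sequential truncated SVDs, in Python/NumPy; the tolerance and the test tensor are illustrative:

    import numpy as np

    def tt_svd(tensor, tol=1e-10):
        """Compress a d-dimensional array into TT cores by sequential
        truncated SVDs; ranks are chosen adaptively from the tolerance."""
        dims = tensor.shape
        cores, rank = [], 1
        mat = tensor.reshape(dims[0], -1)
        for k in range(len(dims) - 1):
            u, s, vt = np.linalg.svd(mat, full_matrices=False)
            r = max(1, int(np.sum(s > tol * s[0])))  # truncated TT rank
            cores.append(u[:, :r].reshape(rank, dims[k], r))
            mat = (s[:r, None] * vt[:r]).reshape(r * dims[k + 1], -1)
            rank = r
        cores.append(mat.reshape(rank, dims[-1], 1))
        return cores

    # a 20x20x20 tensor with separable structure compresses to tiny TT ranks
    x = np.fromfunction(lambda i, j, k: np.sin(i + j + k), (20, 20, 20))
    print([c.shape for c in tt_svd(x)])  # e.g. (1,20,2), (2,20,2), (2,20,1)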

Algorithms developed in the quantum physics community are based on optimisation over tensor formats, performed successively for all components of the format (i.e. all sites or modes).
The DMRG/MPS schemes are very efficient but very difficult to analyse, and at the moment only local convergence results for the simplest algorithm are available. In the NLA community, a common approach is to use a classical iterative scheme (e.g. GMRES) and enforce compression to a tensor format at every step. The formal analysis is quite straightforward, but the tensor ranks of the vectors which span the Krylov subspace grow rapidly with the iterations, and the methods struggle in practice.

The first attempt to merge classical iterative algorithms and DMRG/MPS methods was made by White (2005), where the second Krylov vector is used to expand the search space at the optimisation step.
The idea proved to be useful, but the implementation relied on a fair amount of physical intuition, and the algorithm is not completely justified.

We have recently proposed the AMEn algorithm for linear systems, which also injects the gradient direction at the optimisation step, but in a way that allows us to prove global convergence of the resulting scheme. The scheme can easily be applied to the computation of the ground state --- the differences from the algorithm of S. White are emphasized in Dolgov and Savostyanov (2013).
The AMEn scheme has already been acknowledged in the NLA community --- for example, it was recently applied to the computation of extreme eigenstates by Kressner, Steinlechner and Uschmajew (2013), using the block-TT format proposed by Dolgov, Khoromskij, Oseledets and Savostyanov (2014).

To date, the AMEn algorithm has been applied
 - to simulate the NMR spectra of large molecules (such as ubiquitin),
 - to solve the Fokker-Planck equation for non-Newtonian polymeric flows,
 - to the chemical master equation describing the mesoscopic model of gene regulatory networks,
 - to solve the Heisenberg model problem for a periodic spin chain.
We aim to extend this framework and its analysis to other problems of NLA: eigenproblems, time-dependent problems, high-dimensional interpolation, and matrix functions; as well as to a wider class of high-dimensional problems.

This is joint work with Sergey Dolgov from the Max-Planck Institute for Mathematics in the Sciences, Leipzig, Germany.

Thu, 13 Feb 2014

14:00 - 15:00
L5

Finite element approximation of a quasi-static model of rock detachment

Dr Leonardo Figueroa
(Universidad de Concepción)
Abstract

We report on a numerical implementation of a quasi-static model of rock detachment based on Allaire, Jouve and Van Goethem's implementation of Francfort and Marigo's model of damage in brittle solids. As such, local minimizers of a cost functional involving both stored elastic energy and a damage penalization term are sought by using a procedure which alternates between approximately solving a linear elasticity system and advancing a transport equation for a level set function describing the loci of still-attached rock. We pay special attention to the mixed finite element method used in the approximation of the linear elasticity system.

Thu, 06 Feb 2014

14:00 - 15:00
L5

Approximation on surfaces with radial basis functions: from global to local methods

Professor Grady Wright
(Boise State University)
Abstract

Radial basis function (RBF) methods are becoming increasingly popular for numerically solving partial differential equations (PDEs) because they are geometrically flexible, algorithmically accessible, and can be highly accurate. There have been many successful applications of these techniques to various types of PDEs defined on planar regions in two and higher dimensions, and to PDEs defined on the surface of a sphere. Originally, these methods were based on global approximations and their computational cost was quite high. Recent efforts have focused on reducing the computational cost by using "local" techniques, such as RBF generated finite differences (RBF-FD).
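
As background, the core of a global RBF method is interpolation on scattered nodes via one dense linear solve; a minimal sketch (ours; the Gaussian kernel and shape parameter are illustrative):

    import numpy as np

    def rbf_interpolant(nodes, values, eps=2.0):
        """Global RBF interpolation with the Gaussian kernel
        phi(r) = exp(-(eps*r)^2); nodes: (N, d) points, values: (N,) data."""
        r = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
        coeffs = np.linalg.solve(np.exp(-(eps * r) ** 2), values)  # dense solve
        def evaluate(x):
            rx = np.linalg.norm(x[:, None, :] - nodes[None, :, :], axis=-1)
            return np.exp(-(eps * rx) ** 2) @ coeffs
        return evaluate

The dense solve is exactly the cost that local RBF-FD techniques avoid: they assemble differentiation weights from many small systems of this type, one per stencil.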

In this talk, we first describe our recent work on developing a new, high-order, global RBF method for numerically solving PDEs on relatively general surfaces, with a specific focus on reaction-diffusion equations. The method is quite flexible, only requiring a set of "scattered" nodes on the surface and the corresponding normal vectors to the surface at these nodes. We next present a new scalable local method based on the RBF-FD approach with this same flexibility. This is the first application of the RBF-FD method to general surfaces. We conclude with applications of these methods to some biologically relevant problems.

This talk represents joint work with Edward Fuselier (High Point University), Aaron Fogelson, Mike Kirby, and Varun Shankar (all at the University of Utah).

Thu, 23 Jan 2014

14:00 - 15:00
L5

Direct Search Based on Probabilistic Descent

Professor Luis Nunes Vicente
(University of Coimbra)
Abstract

Direct-search methods are a class of popular derivative-free algorithms characterized by evaluating the objective function using a step size and a number of (polling) directions. When applied to the minimization of smooth functions, the polling directions are typically taken from positive spanning sets, which in turn must have at least n+1 vectors in an n-dimensional variable space. In addition, to ensure the global convergence of these algorithms, the positive spanning sets used throughout the iterations must be uniformly non-degenerate in the sense of having a positive (cosine) measure bounded away from zero.

However, recent numerical results indicated that randomly generating the polling directions without imposing the positive spanning property can improve the performance of these methods, especially when the number of directions is considerably smaller than n+1.

In this talk, we analyze direct-search algorithms when the polling directions are probabilistic descent, meaning that with a certain probability at least one of them is of descent type. Such a framework enjoys almost-sure global convergence. More interestingly, we will show a global decaying rate of $1/\sqrt{k}$ for the gradient size, with overwhelmingly high probability, matching the corresponding rate for the deterministic versions of the gradient method or of direct search. Our analysis helps to understand numerical behavior and the choice of the number of polling directions.
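
For orientation, a schematic of such a method in Python (ours; the forcing function, expansion/contraction factors and the two random directions per iteration are illustrative choices):

    import numpy as np

    def direct_search(f, x, step=1.0, m=2, theta=0.5, gamma=2.0, iters=500):
        """Direct search polling m random unit directions per iteration
        (possibly far fewer than n+1), with sufficient-decrease acceptance."""
        rng = np.random.default_rng(0)
        fx = f(x)
        for _ in range(iters):
            dirs = rng.standard_normal((m, x.size))
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
            for d in dirs:                       # poll the random directions
                ft = f(x + step * d)
                if ft < fx - step ** 2:          # sufficient decrease rho(step)
                    x, fx = x + step * d, ft
                    step *= gamma                # successful: expand the step
                    break
            else:
                step *= theta                    # unsuccessful: contract
        return x, fx

    # minimize a quadratic in R^10 while polling only two directions per step
    xbest, fbest = direct_search(lambda z: float((z ** 2).sum()), np.ones(10))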

This is joint work with Clément Royer, Serge Gratton, and Zaikun Zhang.

Thu, 05 Dec 2013

14:00 - 15:00
L5

Certified upper and lower bounds for the eigenvalues of the Maxwell operator

Dr Gabriel Barrenechea
(University of Strathclyde)
Abstract

We propose a strategy which allows the computation of eigenvalue enclosures for the Maxwell operator by means of the finite element method. The origins of this strategy can be traced back over 20 years. One of its main features lies in the fact that it can be implemented on any type of regular mesh (structured or otherwise) and any type of elements (nodal or otherwise). In the first part of the talk we formulate a general framework which is free from spectral pollution and allows estimation of eigenfunctions.

We then prove the convergence of the method, which implies precise convergence rates for nodal finite elements. Various numerical experiments on benchmark geometries, with and without symmetries, are reported.

Thu, 28 Nov 2013

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Block LU factorization with panel Rank Revealing Pivoting and its Communication Avoiding version

Dr Amal Khabou
(University of Manchester)
Abstract

We present a block LU factorization with panel rank revealing pivoting (block LU_PRRP), an algorithm based on strong rank revealing QR for the panel factorization. Block LU_PRRP is more stable than Gaussian elimination with partial pivoting (GEPP), with a theoretical upper bound of the growth factor of $(1+ \tau b)^{(n/b)-1}$, where $b$ is the size of the panel used during the block factorization, $\tau$ is a parameter of the strong rank revealing QR factorization, and $n$ is the number of columns of the matrix. For example, if the size of the panel is $b = 64$ and $\tau = 2$, then $(1+2b)^{(n/b)-1} = (1.079)^{n-64} \ll 2^{n-1}$, where $2^{n-1}$ is the upper bound of the growth factor of GEPP. Our extensive numerical experiments show that the new factorization scheme is as numerically stable as GEPP in practice, but it is more resistant to some pathological cases where GEPP fails. We note that block LU_PRRP performs only $O(n^2 b)$ additional floating-point operations compared to GEPP.
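
For concreteness, the quoted numbers are easy to reproduce in a few lines of Python (values taken from the abstract):

    b, tau, n = 64, 2.0, 1024
    blu_prrp = (1 + tau * b) ** (n / b - 1)  # growth bound for block LU_PRRP
    gepp = 2.0 ** (n - 1)                    # growth bound for GEPP
    print((1 + tau * b) ** (1 / b))          # per-column base, ~1.0789
    print(blu_prrp < gepp)                   # True: a vastly smaller bound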

Thu, 21 Nov 2013

14:00 - 15:00
L5

Sparse dictionary learning in the presence of noise and outliers

Dr Rémi Gribonval
(INRIA Rennes)
Abstract

A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary. While this paradigm has led to numerous empirical successes in various fields ranging from image to audio processing, there have been only a few theoretical arguments supporting this evidence. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not yet been fully analyzed. Considering a probabilistic model of sparse signals, we show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals. Our study takes into account the case of over-complete dictionaries and noisy signals, thus extending previous work limited to noiseless settings and/or under-complete dictionaries. The analysis we conduct is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the level of noise, can scale with respect to the dimension of the signals, the number of atoms, the sparsity and the number of observations.

This is joint work with Rodolphe Jenatton & Francis Bach.

Thu, 14 Nov 2013

14:00 - 15:00
L5

Range space Krylov methods for data assimilation in meteorology and oceanography

Professor Philippe Toint
(University of Namur)
Abstract

The context of data assimilation in oceanography will be described as well as the computational challenges associated with it. A class of numerical linear algebra methods is described whose purpose is to exploit the problem structure in order to reduce the computational burden and provide provable convergence results for what remains a (very large) nonlinear problem. This class belongs to the Krylov-space family of methods and the special structure used is the imbalance between the dimensions of the state space and the observation space. It is also shown how inexact matrix-vector products can be exploited. Finally, preconditioning issues and resulting adaptations of the trust-region methodology for nonlinear minimization will also be outlined.

By Serge Gratton, Selime Gurol, Philippe Toint, Jean Tshimanga and Anthony Weaver.

Thu, 07 Nov 2013

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Sparse multifrontal QR factorization on the GPU

Professor Tim Davis
(University of Florida)
Abstract

Sparse matrix factorization involves a mix of regular and irregular computation, which is a particular challenge when trying to obtain high performance on the highly parallel general-purpose computing cores available on graphics processing units (GPUs). We present a sparse multifrontal QR factorization method that meets this challenge, and is up to ten times faster than a highly optimized method on a multicore CPU. Our method is unique compared with prior methods, since it factorizes many frontal matrices in parallel, and keeps all the data transmitted between frontal matrices on the GPU. A novel bucket scheduler algorithm extends the communication-avoiding QR factorization for dense matrices, by exploiting more parallelism and by exploiting the staircase form present in the frontal matrices of a sparse multifrontal method.

This is joint work with Nuri Yeralan and Sanjay Ranka.

Thu, 31 Oct 2013

14:00 - 15:00
L5

Don't be afraid of the 1001st (numerical) derivative

Professor Folkmar Bornemann
(Technical University Munich)
Abstract

The accurate and stable numerical calculation of higher-order derivatives of holomorphic functions (as required, e.g., in random matrix theory to extract probabilities from a generating function) turns out to be a surprisingly rich topic: there are connections to asymptotic analysis, the theory of entire functions, and to algorithmic graph theory.
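
The starting point is classical: the Taylor coefficients of a holomorphic function are Cauchy integrals, and the trapezoid rule on a circle turns them into a single FFT. A minimal sketch (ours; the choice of the radius r, which is the delicate issue, is left naive here):

    import numpy as np

    def taylor_coeffs(f, kmax, r=1.0, m=2048):
        """Coefficients a_k = f^(k)(0)/k! from the Cauchy integral
        a_k = (1/2 pi i) contour-integral of f(z) z^{-k-1} dz on |z| = r."""
        z = r * np.exp(2j * np.pi * np.arange(m) / m)  # equispaced contour nodes
        c = np.fft.fft(f(z)) / m                       # trapezoid rule = one FFT
        return c[:kmax + 1] / r ** np.arange(kmax + 1)

    print(taylor_coeffs(np.exp, 4).real)  # ~ 1, 1, 1/2, 1/6, 1/24

For f = exp and r = 1 this is accurate to machine precision for small k, but for the 1001st coefficient a naive fixed radius under- or overflows; balancing the radius against rounding and truncation errors is where the asymptotic analysis enters.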

Thu, 24 Oct 2013

14:00 - 15:00
L5

A geometric theory of phase transitions in convex optimization

Dr Martin Lotz
(University of Manchester)
Abstract

Convex regularization has become a popular approach to solve large-scale inverse or data separation problems. A prominent example is the problem of identifying a sparse signal from linear samples by minimizing the $\ell_1$ norm under linear constraints. Recent empirical research indicates that many convex regularization problems on random data exhibit a phase transition phenomenon: the probability of successfully recovering a signal changes abruptly from zero to one as the number of constraints increases past a certain threshold. We present a rigorous analysis that explains why phase transitions are ubiquitous in convex optimization. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems, to demixing problems, and to cone programs with random affine constraints. These applications depend on a new summary parameter, the statistical dimension of cones, that canonically extends the dimension of a linear subspace to the class of convex cones.

Joint work with Dennis Amelunxen, Mike McCoy and Joel Tropp.

Thu, 17 Oct 2013

14:00 - 15:00
L5

Model Selection with Piecewise Regular Gauges

Dr Gabriel Peyre
(Université Paris Dauphine)
Abstract

In this talk, we investigate in a unified way the structural properties of a large class of convex regularizers for linear inverse problems. We consider regularizations with convex positively 1-homogeneous functionals (so-called gauges) which are piecewise smooth. Singularities of such functionals are crucial to force the solution of the regularization to belong to a union of linear spaces of low dimension. These spaces (the so-called "models") allow one to encode many priors on the data to be recovered, conforming to some notion of simplicity/low complexity. This family of priors encompasses many special instances routinely used in regularized inverse problems, such as L^1, L^1-L^2 (group sparsity), the nuclear norm, or the L^infty norm. The piecewise-regular requirement is flexible enough to cope with analysis-type priors that include a pre-composition with a linear operator, such as the total variation and polyhedral gauges. This notion is also stable under summation of regularizers, thus making it possible to handle mixed regularizations.

The main set of contributions of this talk is dedicated to assessing the theoretical recovery performance of this class of regularizers. We provide sufficient conditions that allow us to provably control the deviation of the recovered solution from the true underlying object, as a function of the noise level. More precisely, we establish two main results. The first one ensures that the solution to the inverse problem is unique and lives on the same low-dimensional subspace as the true vector to recover, provided that the minimal signal-to-noise ratio is large enough. This extends previous results well known for the L^1 norm [1], the analysis L^1 semi-norm [2], and the nuclear norm [3] to the general class of piecewise smooth gauges. In the second result, we establish L^2 stability by showing that the L^2 distance between the recovered and true vectors is within a factor of the noise level, thus extending results that hold for coercive convex positively 1-homogeneous functionals [4].
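
In symbols (notation ours, following the references), the setting is the regularized inverse problem

$$ x^\star \in \underset{x}{\mathrm{argmin}} \; \tfrac{1}{2} \| y - \Phi x \|_2^2 + \lambda J(x), \qquad y = \Phi x_0 + w, $$

where $J$ is a piecewise regular gauge such as L^1 or the nuclear norm and $w$ is the noise; the two results respectively locate $x^\star$ on the low-dimensional model space of $x_0$ and bound $\|x^\star - x_0\|_2$ by a multiple of the noise level $\|w\|_2$.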

This is a joint work with S. Vaiter, C. Deledalle, M. Golbabaee and J. Fadili. For more details, see [5].

Bibliography:
[1] J.J. Fuchs, On sparse representations in arbitrary redundant bases. IEEE Transactions on Information Theory, 50(6):1341-1344, 2004.
[2] S. Vaiter, G. Peyré, C. Dossal, J. Fadili, Robust Sparse Analysis Regularization, to appear in IEEE Transactions on Information Theory, 2013.
[3] F. Bach, Consistency of trace norm minimization, Journal of Machine Learning Research, 9, 1019-1048, 2008.
[4] M. Grasmair, Linear convergence rates for Tikhonov regularization with positively homogeneous functionals. Inverse Problems, 27(7):075014, 2011.
[5] S. Vaiter, M. Golbabaee, J. Fadili, G. Peyré, Model Selection with Piecewise Regular Gauges, Preprint hal-00842603, 2013.

Thu, 13 Jun 2013

14:00 - 15:00
Gibson Grd floor SR

Lattice rules in a nutshell

Dr Dirk Nuyens
(KU Leuven)
Abstract

Lattice rules are equal-weight quadrature/cubature rules for the approximation of multivariate integrals which use lattice points as the cubature nodes. The quality of such cubature rules is directly related to the discrepancy between the uniform distribution and the discrete distribution of these points in the unit cube, and so they constitute a set of low-discrepancy sampling points. Since low-discrepancy based cubature rules look like Monte Carlo rules, except that they use cleverly chosen deterministic points, they are sometimes called quasi-Monte Carlo rules.

The talk starts by motivating the usage of Monte Carlo and then quasi-Monte Carlo methods after which some more recent developments are discussed. Topics include: worst-case errors in reproducing kernel Hilbert spaces, weighted spaces and the construction of lattice rules and sequences.

In the minds of many, quasi-Monte Carlo methods seem to share the bad reputation of the Monte Carlo method: a brute-force method of last resort with slow order of convergence, i.e., $O(N^{-1/2})$. This is not so. While the standard rate of convergence for quasi-Monte Carlo is rather slow, being $O(N^{-1})$, the theory shows that these methods achieve the optimal rate of convergence in many interesting function spaces. E.g., in function spaces with higher smoothness one can have $O(N^{-\alpha})$, $\alpha > 1$. This will be illustrated by numerical examples.
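
The method itself fits in a few lines; here is a sketch (ours) with the classical two-dimensional Fibonacci lattice, whose generating vector $(1, F_{k-1})$ with $N = F_k$ points is a standard good choice:

    import numpy as np

    def lattice_rule(f, z, N):
        """Rank-1 lattice rule: equal-weight average of f over the
        N points {(k*z/N) mod 1}, k = 0, ..., N-1, in the unit cube."""
        k = np.arange(N)[:, None]
        x = (k * np.asarray(z, float)[None, :] / N) % 1.0
        return f(x).mean()

    # Fibonacci lattice: N = 1597, z = (1, 987); a smooth periodic integrand
    f = lambda x: np.exp(np.sin(2 * np.pi * x[:, 0]) * np.sin(2 * np.pi * x[:, 1]))
    print(lattice_rule(f, z=(1, 987), N=1597))  # estimate over the unit square

In higher dimensions good generating vectors are not known in closed form and are found by computer search, e.g. component-by-component constructions.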

Thu, 06 Jun 2013

14:00 - 15:00
Gibson Grd floor SR

Discontinuous Galerkin Methods for Modeling the Coastal Ocean

Professor Clint Dawson
(University of Texas at Austin)
Abstract

The coastal ocean contains a diversity of physical and biological

processes, often occurring at vastly different scales. In this talk,

we will outline some of these processes and their mathematical

description. We will then discuss how finite element methods are used

in coastal ocean modeling and recent research into

improvements to these algorithms. We will also highlight some of the

successes of these methods in simulating complex events, such as

hurricane storm surges. Finally, we will outline several interesting

challenges which are ripe for future research.

Thu, 30 May 2013

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

The FEAST eigenvalue algorithm and solver with new perspectives for first-principle electronic structure calculations

Professor Eric Polizzi
(University of Massachusetts)
Abstract

FEAST is a new general-purpose eigenvalue algorithm that takes its inspiration from the density-matrix representation and contour integration technique in quantum mechanics [Phys. Rev. B 79, 115112, (2009)], and it can be understood as a subspace iteration algorithm using approximate spectral projection [http://arxiv.org/abs/1302.0432 (2013)]. The algorithm combines simplicity and efficiency and offers many important capabilities for achieving high performance, robustness, accuracy, and multi-level parallelism on modern computing platforms. FEAST is also the name of a comprehensive numerical library package which currently (v2.1) focuses on solving symmetric eigenvalue problems on both shared-memory architectures (i.e. FEAST-SMP version -- also integrated into Intel MKL since Feb 2013) and distributed architectures (i.e. FEAST-MPI version) including three levels of parallelism MPI-MPI-OpenMP.
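
To convey the flavour, here is a stripped-down dense sketch (ours; not the library's interface) of subspace iteration with an approximate spectral projector obtained by quadrature of the resolvent around the search interval:

    import numpy as np

    def feast_like(A, emin, emax, m0=10, nq=16, iters=4):
        """Toy FEAST-style iteration for real symmetric A: push a random block
        through a quadrature of (1/2 pi i) * contour-integral of (zI - A)^{-1} dz,
        then apply a Rayleigh-Ritz step."""
        n = A.shape[0]
        Y = np.random.default_rng(0).standard_normal((n, m0))
        c, r = (emin + emax) / 2, 1.05 * (emax - emin) / 2
        t = 2 * np.pi * (np.arange(nq) + 0.5) / nq      # midpoint nodes on circle
        for _ in range(iters):
            Q = np.zeros((n, m0))
            for tj in t:                                # quadrature of resolvent
                zj = c + r * np.exp(1j * tj)
                Q += np.real(r * np.exp(1j * tj)
                             * np.linalg.solve(zj * np.eye(n) - A, Y)) / nq
            Q = np.linalg.qr(Q)[0]
            evals, V = np.linalg.eigh(Q.T @ A @ Q)      # Rayleigh-Ritz step
            Y = Q @ V
        keep = (evals >= emin) & (evals <= emax)
        return evals[keep], Y[:, keep]

Each quadrature node costs one independent linear solve with a shifted matrix, which is the source of the algorithm's multi-level parallelism.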

In this presentation, we aim at expanding the current capabilities of the FEAST eigenvalue algorithm and developing a unified numerical approach for solving linear, non-linear, symmetric and non-symmetric eigenvalue problems. The resulting algorithms retain many of the properties of the symmetric FEAST including the multi-level parallelism. It will also be outlined that the development strategy roadmap for FEAST is closely tied together with the need to address the variety of eigenvalue problems arising in computational nanosciences. Consequently, FEAST will also be presented beyond the "black-box" solver as a fundamental modeling framework for electronic structure calculations.

Three problems will be presented and discussed: (i) a highly efficient and robust FEAST-based alternative to the traditional self-consistent field (SCF) procedure for solving the non-linear eigenvector problem [J. Chem. Phys. 138, p194101 (2013)]; (ii) a fundamental and practical solution of the exact muffin-tin problem for performing both accurate and scalable all-electron electronic structure calculations using FEAST on parallel architectures [Comp. Phys. Comm. 183, p2370 (2012)]; (iii) a FEAST-spectral-based time-domain propagation technique for performing real-time TDDFT simulations. In order to illustrate the efficiency of the FEAST framework, numerical examples are provided for various molecules and carbon-based materials using our in-house all-electron real-space FEM implementation and both the DFT/Kohn-Sham/LDA and TDDFT/ALDA approaches.

Thu, 23 May 2013

14:00 - 15:00
Gibson Grd floor SR

Compressive Imaging: Stable Sampling Strategies using Shearlets

Professor Gitta Kutyniok
(TU Berlin)
Abstract

In imaging science, efficient acquisition of images by few samples with the possibility to precisely recover the complete image is a topic of significant interest. The area of compressed sensing, which in particular advocates random sampling strategies, has already had a tremendous impact on both theory and applications. The necessary requirement for such techniques to be applicable is the sparsity of the original data within some transform domain. Recovery is then achieved by, for instance, $\ell_1$ minimization. Various applications however do not allow complete freedom in the choice of the samples. Take Magnetic Resonance Imaging (MRI) for example, which only provides access to Fourier samples. For this particular application, empirical results still showed superior performance of compressed sensing techniques.

In this talk, we focus on sparse sampling strategies under the constraint that only Fourier samples can be accessed. Since images -- and in particular images from MRI -- are governed by anisotropic features and shearlets do provide optimally sparse approximations of those, compactly supported shearlet systems will be our choice for the reconstruction procedure. Our sampling strategy then exploits a careful variable density sampling of the Fourier samples with $\ell_1$-analysis based reconstruction using shearlets. Our main result provides success guarantees and shows that this sampling and reconstruction strategy is optimal.

This is joint work with Wang-Q Lim (Technische Universität Berlin).

Thu, 16 May 2013

14:00 - 15:00
Gibson Grd floor SR

Numerical Modeling of Vesicles: Inertial Flow and Electric Fields

Dr David Salac
(University at Buffalo)
Abstract

The behavior of lipid vesicles is due to a complex interplay between the mechanics of the vesicle membrane, the surrounding fluids, and any external fields which may be present. In this presentation two aspects of vesicle behavior are explored: vesicles in inertial flows and vesicles exposed to electric fields.

The first half of the talk will present work done investigating the behavior of two-dimensional vesicles in inertial flows. A level set representation of the interface is coupled to a Navier-Stokes projection solver. The standard projection method is modified to take into account not only the volume incompressibility of the fluids but also the surface incompressibility of the vesicle membrane. This surface incompressibility is enforced by using the closest point level set method to calculate the surface tension needed to enforce the surface incompressibility. Results indicate that as inertial effects increase, vesicles change from a tumbling behavior to a stable tank-treading configuration. The numerical results bear a striking similarity to rigid particles in inertial flows. Using rigid particles as a guide, scaling laws are determined for vesicle behavior in inertial flows.

The second half of the talk will move to immersed interface solvers for three-dimensional vesicles exposed to electric fields. The jump conditions for pressure and fluid velocity will be developed for the three-dimensional Stokes flow or constant density Navier-Stokes equations assuming a piecewise continuous viscosity and an inextensible interface. An immersed interface solver is implemented to determine the fluid and membrane state. Analytic test cases have been developed to examine the convergence of the fluids solver.

Time permitting, an immersed interface solver developed to calculate the electric field for a vesicle exposed to an electric field will be discussed. Unlike other multiphase systems, vesicle membranes have a time-varying potential which must be taken into account. This potential is implicitly determined along with the overall electric potential field.

Thu, 09 May 2013

14:00 - 15:00
Gibson Grd floor SR

Superconvergence for Discontinuous Galerkin solutions: Making it Useful

Dr Jennifer Ryan
(University of East Anglia)
Abstract

The discontinuous Galerkin (DG) method has recently become one of the most widely researched and utilized discretization methodologies for solving problems in science and engineering. It is fundamentally based upon the mathematical framework of variational methods, and provides hope that computationally fast, efficient and robust methods can be constructed for solving real-world problems. By not requiring the solution to be continuous across element boundaries, DG provides a flexibility that can be exploited both for geometric and solution adaptivity and for parallelization. This freedom comes at a cost. Lack of smoothness across elements can hamper simulation post-processing like feature extraction and visualization. However, these limitations can be overcome by taking advantage of an additional property of DG - that of superconvergence. Superconvergence can aid in addressing the lack of continuity through the development of Smoothness-Increasing Accuracy-Conserving (SIAC) filters. These filters respect the mathematical properties of the data while providing levels of smoothness so that commonly used visualization tools can be used appropriately, accurately, and efficiently. In this talk, the importance of superconvergence in applications such as visualization will be discussed as well as overcoming the mathematical barriers in making superconvergence useful for applications.

Thu, 02 May 2013

14:00 - 15:00
Gibson Grd floor SR

Barycentric Interpolation

Dr Kai Hormann
(University of Lugano)
Abstract

In this talk I will focus on the method of barycentric interpolation, which goes back to the ideas that August Ferdinand Möbius published in his seminal work "Der barycentrische Calcül" in 1827. For univariate data, this gives a special kind of rational interpolant which is guaranteed to have no poles and has favourable approximation properties.
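
For reference, the univariate formula is short enough to state in code; a sketch (ours) using Berrut's weights $w_j = (-1)^j$, which give a rational interpolant with no real poles:

    import numpy as np

    def barycentric(xj, fj, w, x):
        """Evaluate r(x) = (sum_j w_j f_j/(x-x_j)) / (sum_j w_j/(x-x_j));
        by construction r interpolates the data (xj, fj)."""
        d = np.asarray(x, float)[:, None] - xj[None, :]
        at_node = np.isclose(d, 0.0)
        d[at_node] = 1.0                       # placeholder; fixed below
        C = w[None, :] / d
        r = (C @ fj) / C.sum(axis=1)
        hit = at_node.any(axis=1)              # return exact values at the nodes
        r[hit] = fj[at_node.argmax(axis=1)[hit]]
        return r

    xj = np.linspace(-1, 1, 21)
    fj = 1 / (1 + 25 * xj ** 2)                # Runge's function
    w = (-1.0) ** np.arange(xj.size)           # Berrut's pole-free weights
    print(barycentric(xj, fj, w, np.array([0.3, 0.55])))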

I further discuss how to extend this idea to bivariate data, where it leads to the concept of generalized barycentric coordinates and to an efficient method for interpolating data given at the vertices of an arbitrary polygon.

Thu, 25 Apr 2013

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Scalable Data Analytics

Dr Tobias Berka
(University of Cambridge)
Abstract

Very large-scale data analytics are an alleged golden goose for efforts in parallel and distributed computing, and yet contemporary statistics remain somewhat of a dark art for the uninitiated. In this presentation, we are going to take a mathematical and algorithmic look beyond the veil of Big Data by studying the structure of the algorithms and data, and by analyzing the fit to existing and proposed computer systems and programming models. Towards highly scalable kernels, we will also discuss some of the promises and challenges of approximation algorithms using randomization, sampling, and decoupled processing, touching on some contemporary topics in parallel numerics.

Thu, 18 Apr 2013

14:00 - 15:00
Gibson Grd floor SR

The exponentially convergent trapezoid rule

Professor Nick Trefethen
(University of Oxford)
Abstract

It is well known that the trapezoid rule converges geometrically when applied to analytic functions on periodic intervals or the real line. The mathematics and history of this phenomenon are reviewed and it is shown that far from being a curiosity, it is linked with powerful algorithms all across scientific computing, including double exponential and Gauss quadrature, computation of inverse Laplace transforms, special functions, computational complex analysis, the computation of functions of matrices and operators, rational approximation, and the solution of partial differential equations.
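
A short experiment (ours) shows the phenomenon: for the periodic analytic integrand $f(t) = e^{\sin t}$ on $[0, 2\pi]$, the error of the $N$-point trapezoid rule roughly squares each time $N$ doubles:

    import numpy as np

    def trap(N):  # N-point periodic trapezoid rule for exp(sin t) on [0, 2*pi]
        t = 2 * np.pi * np.arange(N) / N
        return 2 * np.pi * np.mean(np.exp(np.sin(t)))

    exact = trap(4096)                         # converged reference value
    for N in (4, 8, 16, 32):
        print(N, abs(trap(N) - exact))         # geometric decay of the error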

This talk represents joint work with Andre Weideman of the University of Stellenbosch.