Forthcoming events in this series


Thu, 20 Feb 2020

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Learning with nonlinear Perron eigenvectors

Francesco Tudisco
(Gran Sasso Science Institute GSSI)
Abstract

In this talk I will present a Perron-Frobenius type result for nonlinear eigenvector problems which allows us to compute the global maximum of a class of constrained nonconvex optimization problems involving multihomogeneous functions.

I will structure the talk into three main parts:

First, I will motivate the optimization of homogeneous functions from a graph partitioning point of view, showing an intriguing generalization of the famous Cheeger inequality.

Second, I will define the concept of multihomogeneous function and I will state our main Perron-Frobenius theorem. This theorem exploits the connection between optimization of multihomogeneous functions and nonlinear eigenvectors to provide an optimization scheme that has global convergence guarantees.

Third, I will discuss a few example applications in network science and machine learning that require the optimization of multihomogeneous functions and that can be solved using nonlinear Perron eigenvectors.
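As a hedged illustration of the computational side of such results (a generic sketch, not the specific method or theorem of the talk), the following Python snippet runs a power-type fixed-point iteration x ← F(x)/||F(x)|| for a nonnegative, homogeneous map F built from a nonnegative matrix; Perron-Frobenius-type theorems of the kind described above give conditions under which such an iteration converges globally to the unique positive eigenvector.

```python
import numpy as np

def nonlinear_power_method(F, x0, tol=1e-10, maxit=1000):
    """Fixed-point iteration x <- F(x)/||F(x)|| for an order-preserving,
    homogeneous map F. Nonlinear Perron-Frobenius theory gives conditions
    under which this converges to the unique positive eigenvector."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(maxit):
        y = F(x)
        y = y / np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            return y
        x = y
    return x

# Toy example (an assumption for illustration): an entrywise power composed
# with a nonnegative matrix, F(x) = A @ x**p with 0 < p < 1.
rng = np.random.default_rng(0)
A = rng.random((5, 5))
F = lambda x: A @ (x ** 0.5)
v = nonlinear_power_method(F, np.ones(5))
print(v, np.linalg.norm(F(v)))  # eigenvector and its "eigenvalue" (the norm of F(v))
```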

 

 

Thu, 13 Feb 2020

14:00 - 15:00
L4

Numerical real algebraic geometry and applications

Jonathan Hauenstein
(University of Notre Dame)
Abstract

Systems of nonlinear polynomial equations arise in a variety of fields in mathematics, science, and engineering.  Many numerical techniques for solving and analyzing solution sets of polynomial equations over the complex numbers, collectively called numerical algebraic geometry, have been developed over the past several decades.  However, since real solutions are the only solutions of interest in many applications, there is a current emphasis on developing new methods for computing and analyzing real solution sets.  This talk will summarize some numerical real algebraic geometric approaches as well as recent successes of these methods for solving a variety of problems in science and engineering.
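As a hedged, one-variable illustration of the basic machinery behind numerical algebraic geometry (a generic sketch, not any specific method from the talk), the following Python snippet tracks the roots of a start polynomial to those of a target polynomial by homotopy continuation, using an Euler predictor, a Newton corrector, and the standard random "gamma trick" to keep the paths away from singularities.

```python
import numpy as np

# Target f, start g (with known roots: the cube roots of unity), and their derivatives.
f,  df = lambda x: x**3 - 2*x + 1, lambda x: 3*x**2 - 2
g,  dg = lambda x: x**3 - 1,       lambda x: 3*x**2

gamma = 0.8 + 0.6j                       # random complex constant ("gamma trick")
H  = lambda x, t: gamma * (1 - t) * g(x) + t * f(x)
Hx = lambda x, t: gamma * (1 - t) * dg(x) + t * df(x)
Ht = lambda x, t: f(x) - gamma * g(x)

start_roots = np.exp(2j * np.pi * np.arange(3) / 3)
ts = np.linspace(0.0, 1.0, 101)
for x in start_roots:
    for t0, t1 in zip(ts[:-1], ts[1:]):
        x = x - (t1 - t0) * Ht(x, t0) / Hx(x, t0)   # Euler predictor along the path
        for _ in range(5):                          # Newton corrector at t = t1
            x = x - H(x, t1) / Hx(x, t1)
    print(np.round(x, 6))   # roots of f: 1 and (-1 +/- sqrt(5))/2, up to tiny imaginary parts
```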

Thu, 06 Feb 2020

14:00 - 15:00
L4

Quantifying the Estimation Error of Principal Component Analysis

Raphael Hauser
(University of Oxford)
Abstract

(Joint work with: Jüri Lember, Heinrich Matzinger, Raul Kangro)

Principal component analysis (PCA) is an important pattern recognition and dimensionality reduction tool in many applications. The principal components are computed as eigenvectors of a maximum likelihood covariance matrix that approximates a population covariance, and these eigenvectors are often used to extract structural information about the variables (or attributes) of the studied population. Since PCA is based on the eigen-decomposition of the proxy covariance rather than the ground truth, it is important to understand the approximation error in each individual eigenvector as a function of the number of available samples. The combination of recent results of Koltchinskii & Lounici [8] and Yu, Wang & Samworth [11] yields such bounds. In the presented work we sharpen these bounds and show that eigenvectors can often be reconstructed to a required accuracy from a sample of strictly smaller size order.
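A hedged numerical illustration of the quantity being bounded (a generic experiment, not the sharpened bounds of the paper): compare the leading eigenvector of a sample covariance with the leading eigenvector of the known population covariance as the number of samples grows.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 50
# Population covariance with a well separated leading eigenvalue.
pop_eigvals = np.array([10.0, 5.0] + [1.0] * (d - 2))
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
Sigma = Q @ np.diag(pop_eigvals) @ Q.T
u_true = Q[:, 0]                                  # leading population eigenvector

for n in (100, 1000, 10000):
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    S = np.cov(X, rowvar=False)                   # proxy (sample) covariance
    _, V = np.linalg.eigh(S)
    u_hat = V[:, -1]                              # leading sample eigenvector
    err = min(np.linalg.norm(u_hat - u_true), np.linalg.norm(u_hat + u_true))
    print(f"n = {n:6d}   eigenvector error = {err:.3e}")
```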

Thu, 30 Jan 2020

14:00 - 15:00
L4

Using shared and distributed memory in the solution of large sparse systems

Iain Duff
(Rutherford Appleton Laboratory)
Abstract

We discuss the design of algorithms and codes for the solution of large sparse systems of linear equations on extreme-scale computers that are characterized by having many nodes with multi-core CPUs or GPUs.

We first use two approaches to get good single-node performance. For symmetric systems we use task-based algorithms based on an assembly tree representation of the factorization, and we then use runtime systems for scheduling the computation on both multicore CPU nodes and GPU nodes [6]. In this work, we are also concerned with the efficient parallel implementation of the solve phase using the computed sparse factors, and we show impressive results relative to other state-of-the-art codes [3]. Our second approach was to design a new parallel threshold Markowitz algorithm [4] based on Luby's method [7] for obtaining a maximal independent set in an undirected graph. This is a significant extension since our graph model is a directed graph.

We then extend the scope of both these approaches to exploit distributed memory parallelism. In the first case, we base our work on the block Cimmino algorithm [1] using the ABCD software package coded by Zenadi in Toulouse [5, 8]. The kernel for this algorithm is the direct factorization of a symmetric indefinite submatrix, for which we use the above symmetric code. To extend the unsymmetric code to distributed memory, we use the Zoltan code from Sandia [2] to partition the matrix to singly bordered block diagonal form and then use the above unsymmetric code on the blocks on the diagonal. In both cases, we illustrate the added parallelism obtained from combining the distributed memory parallelism with the high single-node performance and show that our codes out-perform other state-of-the-art codes.

This work is joint with a number of people. We developed the algorithms and codes in an EU Horizon 2020 project, called NLAFET, that finished on 30 April 2019. Co-workers in this were: Sebastien Cayrols, Jonathan Hogg, Florent Lopez, and Stojce Nakov. Collaborators in the block Cimmino part of the project were: Philippe Leleux, Daniel Ruiz, and Sukru Torun. Our codes are available on the GitHub repository https://github.com/NLAFET.
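Since Luby's method is the combinatorial kernel behind the parallel Markowitz algorithm mentioned above, here is a hedged, serial Python sketch of its classical undirected-graph form (the directed-graph extension of the talk is not reproduced):

```python
import numpy as np

def luby_mis(adj, seed=0):
    """Luby's randomized maximal independent set algorithm for an undirected
    graph given as an adjacency dict {v: set of neighbours}. Each round,
    active vertices draw random values and local minima join the set."""
    rng = np.random.default_rng(seed)
    active, mis = set(adj), set()
    while active:
        val = {v: rng.random() for v in active}
        chosen = {v for v in active
                  if all(val[v] < val[u] for u in adj[v] if u in active)}
        mis |= chosen
        # Remove the chosen vertices and all of their neighbours.
        active -= chosen | {u for v in chosen for u in adj[v]}
    return mis

# 5-cycle: any maximal independent set has two vertices.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
print(luby_mis(adj))
```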

References

[1] M. Arioli, I. S. Duff, J. Noailles, and D. Ruiz, A block projection method for sparse matrices, SIAM J. Scientific and Statistical Computing, 13 (1992), pp. 47–70.

[2] E. Boman, K. Devine, L. A. Fisk, R. Heaphy, B. Hendrickson, C. Vaughan, U. Catalyurek, D. Bozdag, W. Mitchell, and J. Teresco, Zoltan 3.0: Parallel Partitioning, Load-balancing, and Data Management Services; User's Guide, Sandia National Laboratories, Albuquerque, NM, 2007. Tech. Report SAND2007-4748W, http://www.cs.sandia.gov/Zoltan/ug_html/ug.html.

[3] S. Cayrols, I. S. Duff, and F. Lopez, Parallelization of the solve phase in a task-based Cholesky solver using a sequential task flow model, Int. J. of High Performance Computing Applications, to appear (2019). NLAFET Working Note 20. RAL-TR-2018-008.

[4] T. A. Davis, I. S. Duff, and S. Nakov, Design and implementation of a parallel Markowitz threshold algorithm, Technical Report RAL-TR-2019-003, Rutherford Appleton Laboratory, Oxfordshire, England, 2019. NLAFET Working Note 22. Submitted to SIMAX.

[5] I. S. Duff, R. Guivarch, D. Ruiz, and M. Zenadi, The augmented block Cimmino distributed method, SIAM J. Scientific Computing, 37 (2015), pp. A1248–A1269.

[6] I. S. Duff, J. Hogg, and F. Lopez, A new sparse symmetric indefinite solver using a posteriori threshold pivoting, SIAM J. Scientific Computing, to appear (2019). NLAFET Working Note 21. RAL-TR-2018-012.

[7] M. Luby, A simple parallel algorithm for the maximal independent set problem, SIAM J. Computing, 15 (1986), pp. 1036–1053.

[8] M. Zenadi, The solution of large sparse linear systems on parallel computers using a hybrid implementation of the block Cimmino method, Thèse de Doctorat, Institut National Polytechnique de Toulouse, Toulouse, France, December 2013.

Thu, 23 Jan 2020

14:00 - 15:00
L4

Computational boundary element methods with Bempp

Timo Betcke
(UCL)
Abstract

Boundary integral equations are an elegant tool to model and simulate a range of physical phenomena in bounded and unbounded domains.

While mathematically well understood, the numerical implementation (e.g. via boundary element methods) still poses a number of computational challenges, from the efficient assembly of the underlying linear systems up to the fast preconditioned solution in complex applications. In this talk we provide an overview of some of these challenges and demonstrate the efficient implementation of boundary element methods on modern CPU and GPU architectures. As part of the talk we will present a number of practical examples using Bempp-cl, our next-generation boundary element package, which has been developed in Python and supports modern vectorized CPU instruction sets and a number of GPU types.
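As a hedged taster of the kind of workflow the talk demonstrates, the sketch below follows the pattern of the published Bempp-cl tutorials (function and module names are taken from those examples and may differ between versions): build a sphere mesh, assemble the Laplace single-layer boundary operator, and solve the resulting weakly singular integral equation with GMRES.

```python
import bempp.api

# Triangulated unit sphere and a continuous piecewise-linear boundary element space.
grid = bempp.api.shapes.regular_sphere(3)
space = bempp.api.function_space(grid, "P", 1)

# Laplace single-layer boundary operator V (weakly singular kernel).
V = bempp.api.operators.boundary.laplace.single_layer(space, space, space)

@bempp.api.real_callable
def data(x, n, domain_index, result):
    result[0] = x[0] + x[1]          # illustrative surface data (an assumption)

g = bempp.api.GridFunction(space, fun=data)
u, info = bempp.api.linalg.gmres(V, g, tol=1e-6)   # solve V u = g
print(info, u.coefficients[:5])
```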

Thu, 28 Nov 2019

14:00 - 15:00
L4

Minimizing convex quadratics with variable precision Krylov methods

Philippe Toint
(University of Namur)
Abstract

Iterative algorithms for the solution of convex quadratic optimization problems that exploit inaccurate matrix-vector products are investigated. Theoretical bounds on the performance of a conjugate gradient method with such inexact products are derived, the quantities occurring in these bounds are estimated, and a new practical algorithm is obtained. Numerical experiments suggest that the new method has significant potential, including in the steadily more important context of multi-precision computations.
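A hedged sketch of the setting (not the algorithm or bounds derived in the talk): conjugate gradients in which every matrix-vector product is contaminated by a prescribed relative error, mimicking products computed in reduced or variable precision.

```python
import numpy as np

def inexact_cg(A, b, eta, tol=1e-6, maxit=500, seed=0):
    """CG for SPD A where each product A @ p is perturbed by a relative error
    of size eta(k) at iteration k (a crude model of variable-precision products)."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        Ap = A @ p
        Ap = Ap + eta(k) * np.linalg.norm(Ap) * rng.standard_normal(len(b))
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, k

n = 200
A = np.diag(np.linspace(1.0, 100.0, n))            # simple SPD test matrix
b = np.ones(n)
# Accuracy of the products is relaxed as the iteration proceeds (capped at 1e-7).
x, k = inexact_cg(A, b, eta=lambda k: min(1e-10 * 2.0**k, 1e-7))
print(k, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```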

Thu, 21 Nov 2019

14:00 - 15:00
L4

Krylov methods for the solution of the nonlinear eigenvalue problem

Karl Meerbergen
(Catholic University of Leuven)
Abstract

Everybody is familiar with the concept of eigenvalues of a matrix. In this talk, we consider the nonlinear eigenvalue problem: problems for which the eigenvalue parameter appears in a nonlinear way in the equation. In physics, the Schroedinger equation for determining the bound states in a semiconductor device introduces terms with square roots of different shifts of the eigenvalue. In mechanical and civil engineering, new materials often have nonlinear damping properties; for the vibration analysis of such materials, this leads to nonlinear functions of the eigenvalue in the system matrix.

One particular example is the sandwich beam problem, where a layer of damping material is sandwiched between two layers of steel. Another example is the stability analysis of the Helmholtz equation with a noise excitation produced by burners in a combustion chamber. The burners lead to a boundary condition with delay terms (exponentials of the eigenvalue).


We often receive the question: “How can we solve a nonlinear eigenvalue problem?” This talk explains the different steps to be taken for using Krylov methods. The general approach works as follows: 1) approximate the nonlinearity by a rational function; 2) rewrite this rational eigenvalue problem as a linear eigenvalue problem and then 3) solve this by a Krylov method. We explain each of the three steps.
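To make step 2 concrete, here is a hedged toy sketch (illustrative matrices of my choosing, not the talk's algorithm): a quadratic eigenvalue problem stands in for the rational approximation from step 1, it is linearized by a companion pencil, and the linearization is solved with a dense eigensolver where step 3 would instead use a Krylov method such as shift-and-invert Arnoldi.

```python
import numpy as np
from scipy.linalg import eig

# Quadratic eigenvalue problem (lam^2*M + lam*C + K) x = 0.
n = 50
rng = np.random.default_rng(0)
M = np.eye(n)
C = 0.1 * np.eye(n)                       # light damping
R = rng.standard_normal((n, n))
K = R @ R.T + n * np.eye(n)               # symmetric positive definite stiffness

# Step 2: first companion linearization, A z = lam * B z with z = [x; lam*x].
Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[Z, I], [-K, -C]])
B = np.block([[I, Z], [Z, M]])

# Step 3 (dense stand-in for a Krylov solver): the eigenvalues of the pencil
# (A, B) are exactly the eigenvalues of the quadratic problem.
lam, V = eig(A, B)
print(np.sort_complex(lam)[:5])
```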

Thu, 14 Nov 2019

14:00 - 15:00
L4

On the preconditioning of coupled multi-physics problems

Massimiliano Ferronato
(University of Padua)
Abstract

The fully coupled numerical simulation of different physical processes, which can typically occur at variable time and space scales, is often a very challenging task. A common feature of such models is that their discretization gives rise to systems of linearized equations with an inherent block structure, which reflects the properties of the set of governing PDEs. The efficient solution of a sequence of systems with matrices in a block form is usually one of the most time- and memory-demanding issues in a coupled simulation. This effort can be carried out by using either iteratively coupled schemes or monolithic approaches, which tackle the problem of the system solution as a whole.

This talk aims to discuss recent advances in the monolithic solution of coupled multi-physics problems, with application to poromechanical simulations in fractured porous media. The problem is addressed either by proper sparse approximations of the Schur complements or by special splittings that can partially uncouple the variables related to different physical processes. The selected approaches can be included in a more general preconditioning framework that can help accelerate the convergence of Krylov subspace solvers. The generalized preconditioner relies on approximately decoupling the different processes, so as to address each single-physics problem independently of the others. The objective is to provide an algebraic framework that can be employed as a general "black-box" tool and can be regarded as a common starting point to be later specialized for the particular multi-physics problem at hand.
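The following Python sketch illustrates the generic algebraic idea described above on a toy 2x2 block system of the kind arising in poromechanics, using a block-diagonal preconditioner built from the (1,1) block and a sparse approximation of the Schur complement; it is a hedged illustration under assumed matrices, not the specific preconditioners of the talk.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
rng = np.random.default_rng(0)
E = sp.random(n, n, density=0.01, random_state=0)
A = E @ E.T + 4.0 * sp.eye(n)                     # SPD "mechanics" block
B = sp.random(n, n, density=0.01, random_state=1) # coupling block
C = 1e-2 * sp.eye(n)                              # "flow" block
K = sp.bmat([[A, B.T], [B, -C]], format="csc")    # coupled block system

# Sparse approximation of the Schur complement: replace A^{-1} by diag(A)^{-1}.
S = -(C + B @ sp.diags(1.0 / A.diagonal()) @ B.T)

A_lu = spla.splu(sp.csc_matrix(A))
S_lu = spla.splu(sp.csc_matrix(S))

def apply_prec(r):
    # Block-diagonal preconditioner: one solve with A, one with the approximate Schur complement.
    return np.concatenate([A_lu.solve(r[:n]), S_lu.solve(r[n:])])

M = spla.LinearOperator(K.shape, matvec=apply_prec)
b = rng.standard_normal(2 * n)
x, info = spla.gmres(K, b, M=M)
print(info, np.linalg.norm(K @ x - b) / np.linalg.norm(b))
```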

Numerical experiments, taken from real-world examples of poromechanical problems and fractured media, are used to investigate the behaviour and the performance of the proposed strategies.

Thu, 07 Nov 2019

14:00 - 15:00
L4

A posteriori error analysis for domain decomposition

Simon Tavener
(Colorado State University)
Abstract

Domain decomposition methods are widely employed for the numerical solution of partial differential equations on parallel computers. We develop an adjoint-based a posteriori error analysis for overlapping multiplicative Schwarz domain decomposition and for overlapping additive Schwarz. In both cases the numerical error in a user-specified functional of the solution (quantity of interest) is decomposed into a component that arises from the spatial discretization and a component that results from the finite iteration between the subdomains. The spatial discretization error can be further decomposed into the errors arising on each subdomain. This decomposition of the total error can then be used as part of a two-stage approach to construct a solution strategy that efficiently reduces the error in the quantity of interest.
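A hedged sketch of the iteration whose error is being quantified (the adjoint-based estimators themselves are not reproduced): overlapping multiplicative Schwarz for a 1D Poisson problem, monitoring the error in a pointwise quantity of interest as the subdomain iteration proceeds.

```python
import numpy as np

# -u'' = 1 on (0, 1), u(0) = u(1) = 0, exact solution u(x) = x(1 - x)/2.
N = 101
h = 1.0 / (N - 1)
f = np.ones(N)
u = np.zeros(N)

left = np.arange(1, 60)          # two overlapping subdomains (interior indices)
right = np.arange(40, N - 1)

def subdomain_solve(u, idx):
    """Solve the local Dirichlet problem on subdomain idx, taking the current
    values of u just outside the subdomain as boundary data."""
    m = len(idx)
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    b = f[idx].copy()
    b[0] += u[idx[0] - 1] / h**2
    b[-1] += u[idx[-1] + 1] / h**2
    u[idx] = np.linalg.solve(A, b)

qoi_exact = 0.125                # quantity of interest: u(0.5) = 1/8
for k in range(10):
    subdomain_solve(u, left)     # multiplicative: the right solve sees the updated left values
    subdomain_solve(u, right)
    # The finite difference scheme is nodally exact for this problem, so the
    # printed number is essentially pure iteration error in the quantity of interest.
    print(k, abs(u[N // 2] - qoi_exact))
```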

Thu, 31 Oct 2019

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

On coarse spaces for solving the heterogeneous Helmholtz equation with domain decomposition methods

Niall Bootland
(University of Strathclyde)
Abstract

The development of effective solvers for high frequency wave propagation problems, such as those described by the Helmholtz equation, presents significant challenges. One promising class of solvers for such problems are parallel domain decomposition methods; however, an appropriate coarse space is typically required in order to obtain robust behaviour (scalable with respect to the number of domains, weakly dependent on the wave number and on the heterogeneity of the physical parameters). In this talk we introduce a coarse space based on generalised eigenproblems in the overlap (GenEO) for the Helmholtz equation. Numerical results within FreeFEM demonstrate convergence that is effectively independent of the wave number and contrast in the heterogeneous coefficient as well as good performance for minimal overlap.

Thu, 24 Oct 2019

14:00 - 15:00
L4

Reliable Real Computing

Fredrik Johansson
(University of Bordeaux)
Abstract

Can we get rigorous answers when computing with real and complex numbers? There are now many applications where this is possible thanks to a combination of tools from computer algebra and traditional numerical computing. I will give an overview of such methods in the context of two projects I'm developing. The first project, Arb, is a library for arbitrary-precision ball arithmetic, a form of interval arithmetic enabling numerical computations with rigorous error bounds. The second project, Fungrim, is a database of knowledge about mathematical functions represented in symbolic form. It is intended to function both as a traditional reference work and as a software library to support symbolic-numeric methods for problems involving transcendental functions. I will explain a few central algorithmic ideas and discuss the research goals of these projects.
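A hedged taste of ball arithmetic through the python-flint bindings to Arb (the package and method names are an assumption based on its documentation; the underlying C library is what the talk describes): each result is a ball, a midpoint together with a radius that rigorously encloses the true value.

```python
import flint

flint.ctx.dps = 30                      # working precision, in decimal digits
x = flint.arb(2).sqrt()                 # ball rigorously enclosing sqrt(2)
y = flint.arb(1).exp()                  # ball rigorously enclosing e
print(x)                                # printed as a midpoint with an error radius
print(x * x - 2)                        # a tiny ball straddling zero, as it must
print(y)
```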

Thu, 20 Jun 2019

14:00 - 15:00
L4

Overcoming the curse of dimensionality: from nonlinear Monte Carlo to deep artificial neural networks

Professor Arnulf Jentzen
(ETH Zurich)
Abstract

Partial differential equations (PDEs) are among the most universal tools used in modelling problems in nature and man-made complex systems. For example, stochastic PDEs are a fundamental ingredient in models for nonlinear filtering problems in chemical engineering and weather forecasting, deterministic Schroedinger PDEs describe the wave function in a quantum physical system, deterministic Hamilton-Jacobi-Bellman PDEs are employed in operations research to describe optimal control problems where companies aim to minimise their costs, and deterministic Black-Scholes-type PDEs are widely employed in portfolio optimization models as well as in state-of-the-art pricing and hedging models for financial derivatives. The PDEs appearing in such models are often high-dimensional, as the number of dimensions, roughly speaking, corresponds to the number of all involved interacting substances, particles, resources, agents, or assets in the model. For instance, in the case of the above-mentioned financial engineering models the dimensionality of the PDE often corresponds to the number of financial assets in the involved hedging portfolio.

Such PDEs can typically not be solved explicitly, and it is one of the most challenging tasks in applied mathematics to develop approximation algorithms which are able to approximately compute solutions of high-dimensional PDEs. Nearly all approximation algorithms for PDEs in the literature suffer from the so-called "curse of dimensionality" in the sense that the number of computational operations required to achieve a given approximation accuracy grows exponentially in the dimension of the considered PDE. With such algorithms it is impossible to approximately compute solutions of high-dimensional PDEs even when the fastest currently available computers are used. In the case of linear parabolic PDEs and approximations at a fixed space-time point, the curse of dimensionality can be overcome by means of Monte Carlo approximation algorithms and the Feynman-Kac formula.

In this talk we introduce new nonlinear Monte Carlo algorithms for high-dimensional nonlinear PDEs. We prove that such algorithms do indeed overcome the curse of dimensionality in the case of a general class of semilinear parabolic PDEs, and we thereby prove, for the first time, that a general semilinear parabolic PDE with a nonlinearity depending on the PDE solution can be solved approximately without the curse of dimensionality.
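A hedged sketch of the linear case mentioned above (a generic textbook construction, not the new nonlinear algorithms of the talk): by the Feynman-Kac formula, the solution of the heat equation u_t = (1/2) Δu with u(0, x) = g(x) at a single space-time point is u(t, x) = E[g(x + W_t)], so a plain Monte Carlo average over Brownian increments approximates it at a cost that grows only polynomially in the dimension d.

```python
import numpy as np

rng = np.random.default_rng(0)
d, t, n_samples = 100, 1.0, 10**5
x = np.zeros(d)

# With g(y) = |y|^2 the exact solution is u(t, x) = |x|^2 + d*t, so the error can be checked.
g = lambda y: np.sum(y**2, axis=-1)

W = np.sqrt(t) * rng.standard_normal((n_samples, d))   # Brownian increments W_t
u_mc = np.mean(g(x + W))                               # Monte Carlo estimate of u(t, x)
u_exact = np.sum(x**2) + d * t
print(u_mc, u_exact, abs(u_mc - u_exact))
```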

Thu, 13 Jun 2019

14:00 - 15:00
L4

A structure-preserving finite element method for uniaxial nematic liquid crystals

Professor Ricardo Nochetto
(University of Maryland)
Abstract

The Landau-DeGennes Q-model of uniaxial nematic liquid crystals seeks a rank-one traceless tensor Q that minimizes a Frank-type energy plus a double well potential that confines the eigenvalues of Q to lie between -1/2 and 1. We propose a finite element method (FEM) which preserves this basic structure and satisfies a discrete form of the fundamental energy estimates. We prove that the discrete problem Gamma-converges to the continuous one as the meshsize tends to zero, and propose a discrete gradient flow to compute discrete minimizers. Numerical experiments confirm the ability of the scheme to approximate configurations with half-integer defects, and to deal with colloidal and electric field effects. This work, joint with J.P. Borthagaray and S. Walker, builds on our previous work for the Ericksen model, which we review briefly.

Thu, 06 Jun 2019

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Parallel numerical algorithms for resilient large scale simulations

Dr Mawussi Zounon
(Numerical Algorithms Group & University of Manchester)
Abstract

As parallel computers approach Exascale (10^18 floating point operations per second), processor failure and data corruption are of increasing concern. Numerical linear algebra solvers are at the heart of many scientific and engineering applications, and with the increasing failure rates, they may fail to compute a solution or produce an incorrect solution. It is therefore crucial to develop novel parallel linear algebra solvers capable of providing correct solutions on unreliable computing systems. The common way to mitigate failures in high performance computing systems consists of periodically saving data onto a reliable storage device such as a remote disk. But considering the increasing failure rate and the ever-growing volume of data involved in numerical simulations, the state-of-the-art fault-tolerant strategies are becoming too time-consuming, and therefore unsuitable, for large-scale simulations. In this talk, we will present a novel class of fault-tolerant algorithms that do not require any additional resources. The key idea is to leverage the knowledge of numerical properties of the solvers involved in a simulation to regenerate data lost due to system failures. We will also share the lessons learned and report on the numerical properties and the performance of the new resilience algorithms.
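A hedged toy example of the kind of strategy meant by "leveraging numerical properties to regenerate lost data" (a simple interpolation-style recovery under assumed data, not necessarily the algorithms of the talk): if a failure wipes out the block of the iterate owned by one node, the lost entries can be regenerated from the surviving ones by solving a small local subsystem.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = np.diag(4.0 * np.ones(n)) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)                  # stand-in for the current iterate of a solver

lost = np.arange(100, 150)                 # entries owned by the failed node
keep = np.setdiff1d(np.arange(n), lost)
x_damaged = x.copy()
x_damaged[lost] = 0.0                      # the failure destroys this block

# Regenerate the lost block from A, b and the surviving entries:
# solve A[lost, lost] x_lost = b[lost] - A[lost, keep] x_keep.
rhs = b[lost] - A[np.ix_(lost, keep)] @ x_damaged[keep]
x_damaged[lost] = np.linalg.solve(A[np.ix_(lost, lost)], rhs)
print(np.linalg.norm(x_damaged - x))       # exact recovery here because x solved Ax = b exactly
```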

Thu, 30 May 2019

14:00 - 15:00
L4

Near-best adaptive approximation

Professor Peter Binev
(University of South Carolina)
Abstract

One of the major steps in the adaptive finite element methods (AFEM) is the adaptive selection of the next partition. The process is usually governed by a strategy based on carefully chosen local error indicators and aims at convergence results with optimal rates. One can formally relate the refinement of the partitions with growing an oriented graph or a tree. Then each node of the tree/graph corresponds to a cell of a partition, and the approximation of a function on adaptive partitions can be expressed through the local errors related to the cell, i.e., the node. The total approximation error is then calculated as the sum of the errors on the leaves (the terminal nodes) of the tree/graph, and the problem of finding an optimal error for a given budget of nodes is known as tree approximation. Establishing a near-best tree approximation result is a key ingredient in proving optimal convergence rates for AFEM.

 

The classical tree approximation problems are usually related to the so-called h-adaptive approximation, in which the improvements are due to reducing the size of the cells in the partition. This talk will also consider an extension of this framework to hp-adaptive approximation, allowing different polynomial spaces to be used for the local approximations at different cells while maintaining near-optimality in terms of the combined number of degrees of freedom used in the approximation.

 

The problem of conformity of the resulting partition will be discussed as well. Typically in AFEM, certain elements of the current partition are marked and subdivided together with some additional ones to maintain desired properties of the partition, like conformity. This strategy is often described as “mark → subdivide → complete”. The process is very well understood for triangulations obtained via the newest vertex bisection procedure. In particular, it is proven that the number of elements in the final partition is bounded by a constant times the number of marked cells. This hints at the possibility of designing a marking procedure that is limited only to cells of the partition whose subdivision will result in a conforming partition, so that no completion step would be necessary. This talk will present such a strategy together with theoretical results about its near-optimal performance.
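A hedged sketch of the h-adaptive bookkeeping described above (a plain greedy strategy, not the near-best algorithms of the talk): the leaves of a binary tree are subintervals of [0, 1], the local error indicator is the L2 error of the best constant on each leaf, and the leaf with the largest indicator is refined.

```python
import numpy as np

f = lambda x: np.sqrt(np.abs(x - 0.3))        # target function with a kink at x = 0.3

def local_error(a, b, m=200):
    """Squared L2 error of the best constant approximation of f on [a, b]."""
    vals = f(np.linspace(a, b, m))
    return (b - a) * np.var(vals)

leaves = [(0.0, 1.0)]                         # the root of the tree
for _ in range(30):
    errs = [local_error(a, b) for a, b in leaves]
    i = int(np.argmax(errs))                  # greedily refine the worst leaf
    a, b = leaves.pop(i)
    mid = 0.5 * (a + b)
    leaves += [(a, mid), (mid, b)]

total_err = sum(local_error(a, b) for a, b in leaves) ** 0.5
smallest = min(leaves, key=lambda ab: ab[1] - ab[0])
print(len(leaves), total_err, smallest)       # the finest cells cluster around the kink
```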

Thu, 23 May 2019

14:00 - 15:00
L4

Operator preconditioning and some recent developments for boundary integral equations

Dr Carolina Urzua Torres
(Mathematical Institute (University of Oxford))
Abstract

In this talk, I am going to give an introduction to operator preconditioning as a general and robust strategy to precondition linear systems arising from Galerkin discretization of PDEs or Boundary Integral Equations. Then, in order to illustrate the applicability of this preconditioning technique, I will discuss the simple case of weakly singular and hypersingular integral equations, arising from exterior Dirichlet and Neumann BVPs for the Laplacian in 3D. Finally, I will show how we can also tackle operators with a more difficult structure, like the electric field integral equation (EFIE) on screens, which models the scattering of time-harmonic electromagnetic waves at perfectly conducting bounded infinitely thin objects, like patch antennas in 3D.

Thu, 16 May 2019

14:00 - 15:00
L4

Parallel preconditioning for time-dependent PDEs and PDE control

Professor Andy Wathen
(Department of Mathematics)
Abstract

We present a novel approach to the solution of time-dependent PDEs via the so-called monolithic or all-at-once formulation.

This approach will be explained for simple parabolic problems and its utility in the context of PDE constrained optimization problems will be elucidated.

The underlying linear algebra includes circulant matrix approximations of Toeplitz-structured matrices and allows for effective parallel implementation. Simple computational results will be shown for the heat equation and the wave equation, which indicate the potential of this approach as a parallel-in-time method.
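A hedged scalar sketch of the circulant-for-Toeplitz idea mentioned above (the all-at-once systems in the talk are block versions of this): the backward Euler time-stepping matrix for u' = -u is lower bidiagonal Toeplitz, a circulant built from the same stencil is inverted with two FFTs, and GMRES is applied to the preconditioned system.

```python
import numpy as np
import scipy.linalg as sla
import scipy.sparse.linalg as spla

nt, dt = 256, 0.01
col = np.zeros(nt)
col[0], col[1] = 1.0 + dt, -1.0                        # backward Euler stencil for u' = -u
T = sla.toeplitz(col, np.r_[col[0], np.zeros(nt - 1)]) # all-at-once (Toeplitz) time-stepping matrix

# Circulant approximation with the same first column; applying its inverse is
# a circular deconvolution, i.e. two FFTs and a diagonal scaling.
eigs = np.fft.fft(col)
apply_Cinv = lambda r: np.real(np.fft.ifft(np.fft.fft(r) / eigs))

M = spla.LinearOperator((nt, nt), matvec=apply_Cinv)
b = np.ones(nt)
x, info = spla.gmres(T, b, M=M)
print(info, np.linalg.norm(T @ x - b))
```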

This is joint work with Elle McDonald (CSIRO, Australia), Jennifer Pestana (Strathclyde University, UK) and Anthony Goddard (Durham University, UK).

Thu, 09 May 2019

14:00 - 15:00
L4

Quasi-optimal and pressure robust discretizations of the Stokes equations

Dr Pietro Zanotti
(TU Dortmund)
Abstract

We approximate the solution of the stationary Stokes equations with various conforming and nonconforming inf-sup stable pairs of finite element spaces on simplicial meshes. Based on each pair, we design a discretization that is quasi-optimal and pressure robust, in the sense that the velocity H^1-error is proportional to the best H^1-error to the analytical velocity. This shows that such a property can be achieved without using conforming and divergence-free pairs. We also bound the pressure L^2-error, only in terms of the best approximation errors to the analytical velocity and the analytical pressure. Our construction can be summarized as follows. First, a linear operator acts on discrete velocity test functions, before the application of the load functional, and maps the discrete kernel into the analytical one.

Second, in order to enforce consistency, we  possibly employ a new augmented Lagrangian formulation, inspired by Discontinuous Galerkin methods.

Thu, 07 Mar 2019

14:00 - 15:00
L4

Flexible computational abstractions for complex preconditioners

Dr Lawrence Mitchell
(Durham University)
Abstract

Small-block overlapping and non-overlapping Schwarz methods are theoretically highly attractive as multilevel smoothers for a wide variety of problems that are not amenable to point relaxation methods. Examples include monolithic Vanka smoothers for Stokes, overlapping vertex-patch decompositions for $H(\text{div})$ and $H(\text{curl})$ problems, along with nearly incompressible elasticity and augmented Lagrangian schemes.

While it is possible to manually program these different schemes, their use in general purpose libraries has been held back by a lack of generic, composable interfaces. We present a new approach to the specification and development of such additive Schwarz methods in PETSc that cleanly separates the topological space decomposition from the discretisation and assembly of the equations. Our preconditioner is flexible enough to support overlapping and non-overlapping additive Schwarz methods, and can be used to formulate line and plane smoothers, Vanka iterations, and others. I will illustrate these new features with some examples utilising the Firedrake finite element library, in particular how the design of an appropriate computational interface enables these schemes to be used as building blocks inside block preconditioners.

This is joint work with Patrick Farrell and Florian Wechsung (Oxford), and Matt Knepley (Buffalo).

Thu, 21 Feb 2019

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Tomographic imaging with flat-field uncertainty

Prof Martin Skovgaard Andersen
(Danish Technical University)
Abstract

Classical methods for X-ray computed tomography (CT) are based on the assumption that the X-ray source intensity is known. In practice, however, the intensity is measured and hence uncertain. Under normal circumstances, when the exposure time is sufficiently high, this kind of uncertainty typically has a negligible effect on the reconstruction quality. However, in time- or dose-limited applications such as dynamic CT, this uncertainty may cause severe and systematic artifacts known as ring artifacts.
By modeling the measurement process and by taking uncertainties into account, it is possible to derive a convex reconstruction model that leads to improved reconstructions when the signal-to-noise ratio is low. We discuss some computational challenges associated with the model and illustrate its merits with some numerical examples based on simulated and real data.

Thu, 14 Feb 2019

14:00 - 15:00
L4

Derivation, analysis and approximation of coupled PDEs on manifolds with high dimensionality gap

Prof Paolo Zunino
(Politecnico di Milano)
Abstract

Multiscale methods based on coupled partial differential equations defined on bulk and embedded manifolds are still poorly explored from the theoretical standpoint, although they are successfully used in applications such as microcirculation and flow in perforated subsurface reservoirs. This work aims at shedding light on some theoretical aspects of a multiscale method consisting of coupled partial differential equations defined on one-dimensional domains embedded into three-dimensional ones. Mathematical issues arise because the dimensionality gap between the bulk and the inclusions is larger than one; we refer to this as the high dimensionality gap case. First, we show that such a model derives from a system of fully three-dimensional equations by the application of a topological model reduction approach. Second, we rigorously analyze the problem, showing that the averaging operators applied for the model reduction introduce a regularization effect that resolves the issues due to the singularity of solutions and to the ill-posedness of restriction operators. Then, we discretize the problem by means of the finite element method and we analyze the approximation error. Finally, we exploit the structure of the model reduction technique to analyze the modeling error. This study confirms that for infinitesimally small inclusions, the modeling error vanishes.

This is a joint work with Federica Laurino, Department of Mathematics, Politecnico di Milano.

Thu, 07 Feb 2019

14:00 - 15:00
L4

Polynomial approximation of high-dimensional functions - from regular to irregular domains

Prof. Ben Adcock
(Simon Fraser University)
Abstract

Driven by its numerous applications in computational science, the approximation of smooth, high-dimensional functions via sparse polynomial expansions has received significant attention in the last five to ten years.  In the first part of this talk, I will give a brief survey of recent progress in this area.  In particular, I will demonstrate how the proper use of compressed sensing tools leads to new techniques for high-dimensional approximation which can mitigate the curse of dimensionality to a substantial extent.  The rest of the talk is devoted to approximating functions defined on irregular domains.  The vast majority of works on high-dimensional approximation assume the function in question is defined over a tensor-product domain.  Yet this assumption is often unrealistic.  I will introduce a method, known as polynomial frame approximation, suitable for broad classes of irregular domains and present theoretical guarantees for its approximation error, stability, and sample complexity.  These results show the suitability of this approach for high-dimensional approximation through the independence (or weak dependence) of the various guarantees on the ambient dimension d.  Time permitting, I will also discuss several extensions.
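A hedged two-dimensional sketch of polynomial frame approximation (illustrative parameters and domain of my choosing, not those analysed in the talk): tensor Legendre polynomials on the bounding box are restricted to an irregular subdomain and fitted by truncated-SVD least squares, the truncation providing the regularization that makes the redundant frame usable.

```python
import numpy as np

rng = np.random.default_rng(0)
deg = 10
f = lambda x, y: np.exp(x * y)

# Irregular domain: part of the unit disk inside the bounding box [-1, 1]^2.
in_domain = lambda x, y: (x**2 + y**2 < 1.0) & (y > -0.2)
pts = rng.uniform(-1, 1, size=(4000, 2))
pts = pts[in_domain(pts[:, 0], pts[:, 1])]

def legendre_design(pts, deg):
    """Tensor-product Legendre basis of the bounding box, sampled on pts."""
    Px = np.polynomial.legendre.legvander(pts[:, 0], deg)
    Py = np.polynomial.legendre.legvander(pts[:, 1], deg)
    return np.einsum("ij,ik->ijk", Px, Py).reshape(len(pts), -1)

A = legendre_design(pts, deg)
b = f(pts[:, 0], pts[:, 1])

# Truncated-SVD least squares: discard directions with tiny singular values.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-8 * s[0]
c = np.zeros_like(s)
c[keep] = (U.T @ b)[keep] / s[keep]
coef = Vt.T @ c
print(A.shape, np.max(np.abs(A @ coef - b)))   # max error on the sample points
```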