# Past Computational Mathematics and Applications Seminar

23 May 2013
14:00
Professor Gitta Kutyniok
Abstract
In imaging science, efficiently acquiring an image from few samples while retaining the ability to recover it precisely is a topic of significant interest. The area of compressed sensing, which in particular advocates random sampling strategies, has already had a tremendous impact on both theory and applications. The necessary requirement for such techniques to be applicable is the sparsity of the original data within some transform domain. Recovery is then achieved by, for instance, $\ell_1$ minimization. Various applications, however, do not allow complete freedom in the choice of samples. Take Magnetic Resonance Imaging (MRI), for example, which only provides access to Fourier samples. For this particular application, empirical results have nonetheless shown superior performance of compressed sensing techniques.

In this talk, we focus on sparse sampling strategies under the constraint that only Fourier samples can be accessed. Since images -- and in particular images from MRI -- are governed by anisotropic features, and shearlets provide optimally sparse approximations of those, compactly supported shearlet systems will be our choice for the reconstruction procedure. Our sampling strategy then exploits a careful variable density sampling of the Fourier samples, with $\ell_1$-analysis based reconstruction using shearlets. Our main result provides success guarantees and shows that this sampling and reconstruction strategy is optimal.

This is joint work with Wang-Q Lim (Technische Universität Berlin).
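As a toy illustration of the $\ell_1$-minimization recovery step mentioned above (not the talk's shearlet-based method), the sketch below recovers a sparse vector from a few random measurements by solving the basis pursuit problem as a linear program. The matrix sizes, the sparsity level, and the LP split `x = u - v` are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                 # signal length, number of samples, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
b = A @ x_true                                  # the few measured samples

# Basis pursuit:  min ||x||_1  subject to  Ax = b,
# rewritten as an LP via the split x = u - v with u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_rec = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_rec - x_true)))           # small: exact recovery succeeds
```

With $m$ well above the sparsity $k$, the $\ell_1$ minimizer coincides with the true sparse signal, which is the phenomenon the talk's guarantees make precise for Fourier sampling.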
• Computational Mathematics and Applications Seminar
16 May 2013
14:00
Dr David Salac
Abstract

The behavior of lipid vesicles is due to a complex interplay between the mechanics of the vesicle membrane, the surrounding fluids, and any external fields which may be present. In this presentation two aspects of vesicle behavior are explored: vesicles in inertial flows and vesicles exposed to electric fields.

The first half of the talk will present work investigating the behavior of two-dimensional vesicles in inertial flows. A level set representation of the interface is coupled to a Navier-Stokes projection solver. The standard projection method is modified to take into account not only the volume incompressibility of the fluids but also the surface incompressibility of the vesicle membrane. The surface incompressibility is enforced using the closest point level set method to calculate the required surface tension. Results indicate that as inertial effects increase, vesicles change from a tumbling behavior to a stable tank-treading configuration. The numerical results bear a striking similarity to those for rigid particles in inertial flows. Using rigid particles as a guide, scaling laws are determined for vesicle behavior in inertial flows.
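A minimal sketch of the level set idea used above, assuming a simple circular interface rather than a deforming vesicle: the membrane is the zero contour of a signed distance function, and geometric quantities such as curvature come from derivatives of that function.

```python
import numpy as np

# Level set representation of a circular "vesicle": phi < 0 inside, phi > 0 outside.
n = 200
x = np.linspace(-2.0, 2.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
R = 1.0
phi = np.sqrt(X**2 + Y**2) - R           # signed distance to the circle |x| = R

# Curvature kappa = div( grad(phi) / |grad(phi)| ) via central finite differences.
h = x[1] - x[0]
px, py = np.gradient(phi, h)
norm = np.sqrt(px**2 + py**2) + 1e-12    # avoid division by zero at the centre
nx, ny = px / norm, py / norm
kappa = np.gradient(nx, h)[0] + np.gradient(ny, h)[1]

# On the interface the curvature of a circle of radius R is 1/R.
band = np.abs(phi) < h                   # grid points adjacent to the zero contour
print(kappa[band].mean())                # close to 1.0
```

In a full solver this curvature (and the surface divergence of the velocity) feeds into the modified projection step; here it only checks that the level set machinery reproduces the known curvature of a circle.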

The second half of the talk will move to immersed interface solvers for three-dimensional vesicles exposed to electric fields. The jump conditions for pressure and fluid velocity will be developed for the three-dimensional Stokes flow or constant density Navier-Stokes equations assuming a piecewise continuous viscosity and an inextensible interface. An immersed interface solver is implemented to determine the fluid and membrane state. Analytic test cases have been developed to examine the convergence of the fluids solver.

Time permitting, an immersed interface solver developed to calculate the electric field for a vesicle exposed to an electric field will be discussed. Unlike other multiphase systems, vesicle membranes carry a time-varying potential which must be taken into account. This potential is determined implicitly along with the overall electric potential field.

• Computational Mathematics and Applications Seminar
9 May 2013
14:00
Dr Jennifer Ryan
Abstract
The discontinuous Galerkin (DG) method has recently become one of the most widely researched and utilized discretization methodologies for solving problems in science and engineering. It is fundamentally based upon the mathematical framework of variational methods, and provides hope that computationally fast, efficient and robust methods can be constructed for solving real-world problems. By not requiring the solution to be continuous across element boundaries, DG provides a flexibility that can be exploited both for geometric and solution adaptivity and for parallelization.

This freedom comes at a cost. Lack of smoothness across elements can hamper simulation post-processing like feature extraction and visualization. However, these limitations can be overcome by taking advantage of an additional property of DG: superconvergence. Superconvergence can aid in addressing the lack of continuity through the development of Smoothness-Increasing Accuracy-Conserving (SIAC) filters. These filters respect the mathematical properties of the data while providing levels of smoothness so that commonly used visualization tools can be applied appropriately, accurately, and efficiently. In this talk, the importance of superconvergence in applications such as visualization will be discussed, as well as overcoming the mathematical barriers to making superconvergence useful for applications.
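A toy sketch of the filtering idea, not the actual SIAC construction (which convolves the full DG polynomial with a linear combination of B-splines): convolving a piecewise-constant, DG-like approximation with a B-spline kernel reduces the inter-element jumps. The cell count and the quadratic B-spline stencil `[1/8, 3/4, 1/8]` (its values at integer offsets) are illustrative choices.

```python
import numpy as np

# A piecewise-constant (p = 0, DG-like) approximation of sin(x): cell averages
# that are discontinuous across cell boundaries.
n = 200
edges = np.linspace(0.0, 2.0 * np.pi, n + 1)
mid = 0.5 * (edges[:-1] + edges[1:])
u = np.sin(mid)

# Quadratic B-spline sampled at integer offsets, normalized to unit mass.
w = np.array([1.0, 6.0, 1.0]) / 8.0

# Periodic convolution: wrap one cell on each side, then take the valid part.
u_ext = np.concatenate([u[-1:], u, u[:1]])
u_filt = np.convolve(u_ext, w, mode="valid")

# The positive, mass-one kernel averages neighbours, so jumps cannot grow.
jump = np.max(np.abs(np.diff(u)))
jump_filt = np.max(np.abs(np.diff(u_filt)))
print(jump_filt <= jump)                 # True: the filtered data is smoother
```

The real SIAC filter goes further: it is designed so the convolution also *raises* the accuracy to the superconvergent order, rather than merely smoothing.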
• Computational Mathematics and Applications Seminar
2 May 2013
14:00
Dr Kai Hormann
Abstract
In this talk I will focus on the method of barycentric interpolation, which goes back to the ideas that August Ferdinand Möbius published in his seminal work "Der barycentrische Calcul" in 1827. For univariate data, this gives a special kind of rational interpolant which is guaranteed to have no poles and has favourable approximation properties. I will further discuss how to extend this idea to bivariate data, where it leads to the concept of generalized barycentric coordinates and to an efficient method for interpolating data given at the vertices of an arbitrary polygon.
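A minimal sketch of the univariate case: Berrut's first barycentric rational interpolant, which uses the simple weights $(-1)^i$ and is pole-free on any set of distinct real nodes. The function name `berrut` and the Runge-function test are illustrative choices, not taken from the talk.

```python
import numpy as np

def berrut(x_nodes, f_nodes, x):
    """Berrut's barycentric rational interpolant with weights (-1)^i,
    guaranteed pole-free for distinct real nodes."""
    w = (-1.0) ** np.arange(len(x_nodes))
    diff = x[:, None] - x_nodes[None, :]
    exact = np.isclose(diff, 0.0)            # evaluation points that hit a node
    diff[exact] = 1.0                        # dummy value to avoid 0-division
    terms = w / diff
    vals = (terms @ f_nodes) / terms.sum(axis=1)
    # At a node the interpolant equals the data value exactly.
    rows, cols = np.nonzero(exact)
    vals[rows] = f_nodes[cols]
    return vals

nodes = np.linspace(-1.0, 1.0, 21)           # equispaced nodes are fine here
f = 1.0 / (1.0 + 25.0 * nodes**2)            # Runge's function
xx = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(berrut(nodes, f, xx) - 1.0 / (1.0 + 25.0 * xx**2)))
print(err)
```

Unlike polynomial interpolation at the same equispaced nodes, the error stays moderate with no Runge oscillations; the Floater-Hormann family generalizes these weights to higher approximation orders.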
• Computational Mathematics and Applications Seminar
Dr Tobias Berka
Abstract
Very-large-scale data analytics is an alleged golden goose for efforts in parallel and distributed computing, and yet contemporary statistics remains somewhat of a dark art for the uninitiated. In this presentation, we take a mathematical and algorithmic look beyond the veil of Big Data by studying the structure of the algorithms and data, and by analyzing their fit to existing and proposed computer systems and programming models. Towards highly scalable kernels, we will also discuss some of the promises and challenges of approximation algorithms using randomization, sampling, and decoupled processing, touching on some contemporary topics in parallel numerics.
• Computational Mathematics and Applications Seminar
18 April 2013
14:00
Professor Nick Trefethen
Abstract

It is well known that the trapezoid rule converges geometrically when applied to analytic functions on periodic intervals or the real line. The mathematics and history of this phenomenon are reviewed and it is shown that far from being a curiosity, it is linked with powerful algorithms all across scientific computing, including double exponential and Gauss quadrature, computation of inverse Laplace transforms, special functions, computational complex analysis, the computation of functions of matrices and operators, rational approximation, and the solution of partial differential equations.
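The phenomenon is easy to observe numerically. A hedged sketch: the trapezoid rule on a periodic interval reduces to an equal-weight sum, and for the analytic periodic function $f(t) = e^{\cos t}$ (an illustrative choice; the exact integral is $2\pi I_0(1)$) the error falls geometrically with the number of points.

```python
import numpy as np
from scipy.special import i0

# Periodic trapezoid rule for f(t) = exp(cos t) on [0, 2*pi].
# Exact value: 2*pi*I_0(1), with I_0 the modified Bessel function.
exact = 2.0 * np.pi * i0(1.0)

errors = []
for n in (4, 8, 16, 32):
    t = 2.0 * np.pi * np.arange(n) / n       # n equispaced nodes, equal weights
    approx = (2.0 * np.pi / n) * np.sum(np.exp(np.cos(t)))
    errors.append(abs(approx - exact))
print(errors)
```

Each doubling of $n$ roughly squares the accuracy; by $n \approx 16$ the result is at machine precision, which is the geometric convergence the talk traces through so many algorithms.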

This talk represents joint work with Andre Weideman of the University of Stellenbosch.

• Computational Mathematics and Applications Seminar
Dr Philip Knight
Abstract

We consider the problem of taking a matrix A and finding diagonal matrices D and E such that the rows and columns of B = DAE satisfy some specific constraints. Examples of constraints are that

* the row and column sums of B should all equal one;
* the norms of the rows and columns of B should all be equal;
* the row and column sums of B should take values specified by vectors p and q.

Simple iterative algorithms for solving these problems have been known for nearly a century. We provide a simple framework for describing these algorithms that allows us to develop robust convergence results and a straightforward approach to accelerating the rate of convergence.
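The century-old iteration for the first constraint (unit row and column sums) is Sinkhorn-Knopp: alternately rescale rows and columns until both sets of sums are one. A minimal sketch, assuming a strictly positive matrix so that convergence is guaranteed:

```python
import numpy as np

def sinkhorn(A, tol=1e-12, max_iter=10_000):
    """Alternate row/column normalization of a positive matrix A.
    Returns scalings d, e with B = diag(d) @ A @ diag(e) doubly stochastic."""
    d = np.ones(A.shape[0])
    e = np.ones(A.shape[1])
    for _ in range(max_iter):
        d = 1.0 / (A @ e)            # force the row sums of diag(d) A diag(e) to one
        e = 1.0 / (A.T @ d)          # then force the column sums to one
        B = d[:, None] * A * e[None, :]
        if (np.max(np.abs(B.sum(axis=0) - 1.0)) < tol
                and np.max(np.abs(B.sum(axis=1) - 1.0)) < tol):
            break
    return d, e

rng = np.random.default_rng(1)
A = rng.random((5, 5)) + 0.1         # strictly positive entries
d, e = sinkhorn(A)
B = d[:, None] * A * e[None, :]
print(B.sum(axis=0), B.sum(axis=1))  # both all ones
```

The other constraints fit the same pattern (normalize by row/column norms, or by target vectors p and q instead of ones); the talk's framework covers these variants and how to accelerate the basic iteration.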

We describe some of the diverse applications of balancing with examples from preconditioning, clustering, network analysis and psephology.

This is joint work with Kerem Akartunali (Strathclyde), Daniel Ruiz (ENSEEIHT, Toulouse) and Bora Ucar (ENS, Lyon).

• Computational Mathematics and Applications Seminar
28 February 2013
14:00
Dr Boris Khoromskij
Abstract
Tensor numerical methods provide efficient separable representations of multivariate functions and operators discretized on large $n^{\otimes d}$-grids, providing a basis for the solution of $d$-dimensional PDEs with linear complexity scaling in the dimension, $O(dn)$. Modern methods of separable approximation combine the canonical, Tucker, matrix product states (MPS) and tensor train (TT) low-parametric data formats.

The recent quantized-TT (QTT) approximation method is proven to provide logarithmic data compression on a wide class of functions and operators. Furthermore, QTT approximation makes it possible to represent multi-dimensional steady-state and dynamical equations in quantized tensor spaces with log-volume complexity scaling in the full-grid size, $O(d \log n)$, instead of $O(n^d)$.

We show how grid-based tensor approximation in quantized tensor spaces applies to the super-compressed representation of functions and operators (super-fast convolution and FFT, spectrally close preconditioners) as well as to hard problems arising in electronic structure calculations, such as multi-dimensional convolution and the factorization of two-electron integrals in the framework of Hartree-Fock calculations. The QTT method also applies to the time-dependent molecular Schrödinger, Fokker-Planck and chemical master equations.

Numerical tests are presented indicating the efficiency of tensor methods in the approximation of functions, operators and PDEs in many dimensions.

http://personal-homepages.mis.mpg.de/bokh
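The quantization idea can be seen directly, in a hedged sketch: sample a function on a grid of $2^d$ points, view the vector as a $2 \times 2 \times \cdots \times 2$ tensor, and check the ranks of its unfoldings, which bound the QTT ranks. For the exponential (an illustrative choice with exactly rank-one unfoldings, by the functional equation $e^{a+b} = e^a e^b$):

```python
import numpy as np

# f(x) = exp(x) sampled on a uniform grid of 2^d points.
d = 16
x = np.linspace(0.0, 1.0, 2**d, endpoint=False)
v = np.exp(x)

# Each unfolding M[i, j] = v[i * 2^(d-k) + j] factors as an outer product
# exp(i * 2^(d-k) * h) * exp(j * h), so every unfolding has rank one.
ratios = []
for k in (2, 4, 8):
    M = v.reshape(2**k, 2**(d - k))
    s = np.linalg.svd(M, compute_uv=False)
    ratios.append(s[1] / s[0])       # second singular value, relatively
print(ratios)                        # all at machine-precision level
```

So the $2^d$ samples compress to $O(d)$ parameters in the QTT format; the classes of functions and operators for which such logarithmic compression provably holds are the subject of the talk.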
• Computational Mathematics and Applications Seminar
21 February 2013
14:00
Professor Martin Wainwright
Abstract
Many methods for solving high-dimensional statistical inverse problems are based on convex optimization problems formed by the weighted sum of a loss function with a norm-based regularizer. Particular examples include $\ell_1$-based methods for sparse vectors and matrices, the nuclear norm for low-rank matrices, and various combinations thereof for matrix decomposition and robust PCA. In this talk, we describe an interesting connection between computational and statistical efficiency, in particular showing that the same conditions that guarantee that an estimator has good statistical error can also be used to certify fast convergence of first-order optimization methods up to statistical precision.

Joint work with Alekh Agarwal and Sahand Negahban. Preprint (to appear in the Annals of Statistics):
http://www.eecs.berkeley.edu/~wainwrig/Papers/AgaNegWai12b_SparseOptFull.pdf
• Computational Mathematics and Applications Seminar
14 February 2013
14:00
Professor Simon Chandler-Wilde
Abstract
We address, in the study of acoustic scattering by 2D and 3D planar screens, three inter-related and challenging questions. Each of these questions focuses particularly on the formulation of these problems as boundary integral equations. The first question is, roughly: does it make sense to consider scattering by screens which occupy arbitrary open sets in the plane, and do different screens cause the same scattering if the open sets they occupy have the same closure? This question is intimately related to rather deep properties of fractional Sobolev spaces on general open sets, and to the capacity and Hausdorff dimension of their boundary. The second question is, roughly: in answering the first question, can we understand explicitly and quantitatively the influence of the frequency of the incident time-harmonic wave? It turns out that we can, that the problems have variational formulations with sesquilinear forms which are bounded and coercive on fractional Sobolev spaces, and that we can determine explicitly how the continuity and coercivity constants depend on the frequency. The third question is: can we design computational methods, adapted to the highly oscillatory solution behaviour at high frequency, whose computational cost is essentially independent of the frequency? The answer here is that in 2D we can provably achieve solutions to any desired accuracy using a number of degrees of freedom which grows only logarithmically with the frequency, and that some extension to 3D looks promising.

This is joint work with Dave Hewett, Steve Langdon, and Ashley Twigger, all at Reading.