Past Computational Mathematics and Applications Seminar

Professor Tim Davis
Abstract

Sparse matrix factorization involves a mix of regular and irregular computation, which is a particular challenge when trying to obtain high performance on the highly parallel general-purpose computing cores available on graphics processing units (GPUs). We present a sparse multifrontal QR factorization method that meets this challenge and is up to ten times faster than a highly optimized method on a multicore CPU. Unlike prior methods, ours factorizes many frontal matrices in parallel and keeps all the data transmitted between frontal matrices on the GPU. A novel bucket scheduler algorithm extends the communication-avoiding QR factorization for dense matrices, both by exploiting more parallelism and by exploiting the staircase form present in the frontal matrices of a sparse multifrontal method.
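
As a rough Python sketch of the structural idea (the names and the serial loop are illustrative; the actual method is a parallel GPU kernel), a Householder QR can restrict each reflection to the rows of the frontal matrix that the staircase has made active:

    import numpy as np

    def staircase_qr(F, stair):
        """Householder QR of a frontal matrix F whose row i is zero in
        columns < stair[i] (stair nondecreasing).  For column j only rows
        with stair[i] <= j can be nonzero, so each reflection acts on a
        window that grows down the staircase -- the structure the bucket
        scheduler exploits to expose parallelism."""
        m, n = F.shape
        R = np.array(F, dtype=float)
        for j in range(min(m, n)):
            hi = int(np.searchsorted(stair, j, side="right"))  # rows 0..hi-1 active
            x = R[j:hi, j]
            if x.size <= 1:
                continue
            v = x.copy()
            v[0] += np.copysign(np.linalg.norm(x), x[0])
            nv = np.linalg.norm(v)
            if nv > 0:
                v /= nv
                R[j:hi, j:] -= 2.0 * np.outer(v, v @ R[j:hi, j:])
        return R  # leading rows now hold the upper triangular factor

    # e.g. a 5-by-4 frontal matrix whose staircase opens one row per column:
    # staircase_qr(np.random.rand(5, 4), stair=[0, 0, 1, 2, 3])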

This is joint work with Nuri Yeralan and Sanjay Ranka.

  • Computational Mathematics and Applications Seminar
31 October 2013
14:00
Professor Folkmar Bornemann
Abstract
The accurate and stable numerical calculation of higher-order derivatives of holomorphic functions (as required, e.g., in random matrix theory to extract probabilities from a generating function) turns out to be a surprisingly rich topic: there are connections to asymptotic analysis, the theory of entire functions, and to algorithmic graph theory.
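
As a small, hedged illustration of the basic mechanism (the trapezoidal rule applied to the Cauchy integral; the radius r is left free, and its choice is exactly the delicate stability question the talk addresses), a minimal Python sketch:

    import math
    import numpy as np

    def cauchy_derivatives(f, z0, nmax, r=1.0, m=256):
        """Approximate f^(n)(z0), n = 0..nmax, for holomorphic f via the
        Cauchy integral formula, discretized by the m-point trapezoidal
        rule on a circle of radius r (spectrally accurate in m; the
        choice of r governs the numerical stability)."""
        k = np.arange(m)
        fz = f(z0 + r * np.exp(2j * np.pi * k / m))   # contour samples
        n = np.arange(nmax + 1)
        # the trapezoidal sums are exactly a DFT of the contour samples
        taylor = np.fft.fft(fz)[: nmax + 1] / (m * r ** n)
        return taylor * np.array([math.factorial(j) for j in n])

    # e.g. all derivatives of exp at 0 are 1:
    # cauchy_derivatives(np.exp, 0.0, 10).real  ->  approx. [1, 1, ..., 1]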
  • Computational Mathematics and Applications Seminar
24 October 2013
14:00
Dr Martin Lotz
Abstract

Convex regularization has become a popular approach to solving large-scale inverse or data separation problems. A prominent example is the problem of identifying a sparse signal from linear samples by minimizing the $\ell_1$ norm under linear constraints. Recent empirical research indicates that many convex regularization problems on random data exhibit a phase transition phenomenon: the probability of successfully recovering a signal changes abruptly from zero to one as the number of constraints increases past a certain threshold. We present a rigorous analysis that explains why phase transitions are ubiquitous in convex optimization. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems, to demixing problems, and to cone programs with random affine constraints. These applications depend on a new summary parameter, the statistical dimension of cones, which canonically extends the dimension of a linear subspace to the class of convex cones.
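
For the $\ell_1$ case the statistical dimension has a variational formula; the Python/SciPy sketch below is a direct transcription of that formula as I understand it from the authors' work, and locates the predicted transition for an s-sparse vector in R^d:

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar

    def l1_statistical_dimension(s, d):
        """Statistical dimension of the descent cone of the l1 norm at an
        s-sparse vector in R^d: the number of random measurements at
        which the recovery phase transition is predicted to occur."""
        def excess(tau):  # E[(|g| - tau)_+^2] for g ~ N(0, 1)
            val, _ = quad(lambda u: (u - tau) ** 2 * np.exp(-u * u / 2),
                          tau, np.inf)
            return np.sqrt(2 / np.pi) * val

        obj = lambda tau: s * (1 + tau ** 2) + (d - s) * excess(tau)
        return minimize_scalar(obj, bounds=(0, 10), method="bounded").fun

    # e.g. l1_statistical_dimension(50, 1000) predicts a transition at
    # roughly 300 measurements for a 5%-sparse signal in dimension 1000.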

Joint work with Dennis Amelunxen, Mike McCoy and Joel Tropp.

  • Computational Mathematics and Applications Seminar
17 October 2013
14:00
Dr Gabriel Peyre
Abstract

In this talk, we investigate in a unified way the structural properties of a large class of convex regularizers for linear inverse problems. We consider regularizations with convex, positively 1-homogeneous functionals (so-called gauges) which are piecewise smooth. Singularities of such functionals are crucial: they force the solution of the regularized problem to belong to a union of low-dimensional linear spaces. These spaces (the so-called "models") allow one to encode many priors on the data to be recovered, conforming to some notion of simplicity/low complexity. This family of priors encompasses many special instances routinely used in regularized inverse problems, such as L^1, L^1-L^2 (group sparsity), the nuclear norm, or the L^infty norm. The piecewise-regular requirement is flexible enough to cope with analysis-type priors that include a pre-composition with a linear operator, such as the total variation and polyhedral gauges. This notion is also stable under summation of regularizers, thus making it possible to handle mixed regularizations.

The main contributions of this talk are dedicated to assessing the theoretical recovery performance of this class of regularizers. We provide sufficient conditions that allow one to provably control the deviation of the recovered solution from the true underlying object, as a function of the noise level. More precisely, we establish two main results. The first ensures that the solution to the inverse problem is unique and lives on the same low-dimensional subspace as the true vector to recover, provided the minimal signal-to-noise ratio is large enough. This extends previous results well known for the L^1 norm [1], the analysis L^1 semi-norm [2], and the nuclear norm [3] to the general class of piecewise smooth gauges. In the second result, we establish L^2 stability by showing that the L^2 distance between the recovered and true vectors is within a factor of the noise level, thus extending results that hold for coercive, convex, positively 1-homogeneous functionals [4].
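
A minimal numerical illustration of the first result, using the simplest member of this family of gauges (the L^1 norm) and a plain ISTA solver; all parameter choices below are illustrative, not taken from the paper. For generic draws at this signal-to-noise ratio, the recovered support coincides with the true one:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, s = 80, 200, 5
    x0 = np.zeros(d)
    x0[rng.choice(d, s, replace=False)] = 5.0 * rng.normal(size=s)
    A = rng.normal(0, 1 / np.sqrt(n), (n, d))
    y = A @ x0 + 0.01 * rng.normal(size=n)           # high SNR regime

    lam = 0.05
    t = 1.0 / np.linalg.norm(A, 2) ** 2              # step size below 1/L
    x = np.zeros(d)
    for _ in range(5000):                            # ISTA iterations
        g = x - t * (A.T @ (A @ x - y))              # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft threshold

    # check that the recovered model (support) matches the true one
    print(set(np.flatnonzero(np.abs(x) > 1e-3)) == set(np.flatnonzero(x0)))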

This is a joint work with S. Vaiter, C. Deledalle, M. Golbabaee and J. Fadili. For more details, see [5].

Bibliography:
[1] J.J. Fuchs, On sparse representations in arbitrary redundant bases. IEEE Transactions on Information Theory, 50(6):1341-1344, 2004.
[2] S. Vaiter, G. Peyré, C. Dossal, J. Fadili, Robust Sparse Analysis Regularization, to appear in IEEE Transactions on Information Theory, 2013.
[3] F. Bach, Consistency of trace norm minimization, Journal of Machine Learning Research, 9, 1019-1048, 2008.
[4] M. Grasmair, Linear convergence rates for Tikhonov regularization with positively homogeneous functionals. Inverse Problems, 27(7):075014, 2011.
[5] S. Vaiter, M. Golbabaee, J. Fadili, G. Peyré, Model Selection with Piecewise Regular Gauges, Preprint hal-00842603, 2013.

  • Computational Mathematics and Applications Seminar
13 June 2013
14:00
Dr Dirk Nuyens
Abstract
Lattice rules are equal-weight quadrature/cubature rules for the approximation of multivariate integrals which use lattice points as the cubature nodes. The quality of such cubature rules is directly related to the discrepancy between the uniform distribution and the discrete distribution of these points in the unit cube, and so they are a kind of low-discrepancy sampling points. As low-discrepancy based cubature rules look like Monte Carlo rules, except that they use cleverly chosen deterministic points, they are sometimes called quasi-Monte Carlo rules.

The talk starts by motivating the use of Monte Carlo and then quasi-Monte Carlo methods, after which some more recent developments are discussed. Topics include: worst-case errors in reproducing kernel Hilbert spaces, weighted spaces, and the construction of lattice rules and sequences.

In the minds of many, quasi-Monte Carlo methods seem to share the bad reputation of the Monte Carlo method: a brute-force method of last resort with a slow order of convergence, i.e., $O(N^{-1/2})$. This is not so. While the standard rate of convergence for quasi-Monte Carlo, $O(N^{-1})$, is still rather slow, the theory shows that these methods achieve the optimal rate of convergence in many interesting function spaces. E.g., in function spaces with higher smoothness one can have $O(N^{-\alpha})$, $\alpha > 1$. This will be illustrated by numerical examples.
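
To make the construction concrete, here is a minimal rank-1 lattice rule in Python; the Korobov-style generating vector is an illustrative choice, not one produced by the component-by-component constructions discussed in the talk:

    import numpy as np

    def lattice_rule(f, N, z):
        """Rank-1 lattice rule: the equal-weight average of f over the
        node set { frac(k*z/N) : k = 0, ..., N-1 } in the unit cube."""
        k = np.arange(N)[:, None]
        return f((k * z[None, :] / N) % 1.0).mean()

    # Smooth periodic test integrand on [0,1]^4 with exact integral 1,
    # and an (illustrative, unoptimized) Korobov generating vector.
    d, N, a = 4, 2**13, 1571
    z = np.array([pow(a, j, N) for j in range(d)], dtype=np.int64)
    f = lambda x: np.prod(1.0 + 0.5 * np.sin(2 * np.pi * x), axis=1)
    print(abs(lattice_rule(f, N, z) - 1.0))  # typically far below N**-0.5
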
  • Computational Mathematics and Applications Seminar
6 June 2013
14:00
Professor Clint Dawson
Abstract
The coastal ocean contains a diversity of physical and biological processes, often occurring at vastly different scales. In this talk, we will outline some of these processes and their mathematical description. We will then discuss how finite element methods are used in coastal ocean modeling and recent research into improvements to these algorithms. We will also highlight some of the successes of these methods in simulating complex events, such as hurricane storm surges. Finally, we will outline several interesting challenges which are ripe for future research.
  • Computational Mathematics and Applications Seminar
Professor Eric Polizzi
Abstract
FEAST is a new general-purpose eigenvalue algorithm that takes its inspiration from the density-matrix representation and contour integration technique in quantum mechanics [Phys. Rev. B 79, 115112, (2009)], and it can be understood as a subspace iteration algorithm using approximate spectral projection [http://arxiv.org/abs/1302.0432 (2013)]. The algorithm combines simplicity and efficiency and offers many important capabilities for achieving high performance, robustness, accuracy, and multi-level parallelism on modern computing platforms. FEAST is also the name of a comprehensive numerical library package which currently (v2.1) focuses on solving symmetric eigenvalue problems on both shared-memory architectures (the FEAST-SMP version -- also integrated into Intel MKL since Feb 2013) and distributed architectures (the FEAST-MPI version), including three levels of parallelism: MPI-MPI-OpenMP.

In this presentation, we aim at expanding the current capabilities of the FEAST eigenvalue algorithm and developing a unified numerical approach for solving linear, non-linear, symmetric and non-symmetric eigenvalue problems. The resulting algorithms retain many of the properties of symmetric FEAST, including the multi-level parallelism. It will also be outlined that the development roadmap for FEAST is closely tied to the need to address the variety of eigenvalue problems arising in computational nanosciences. Consequently, FEAST will also be presented beyond the "black-box" solver as a fundamental modeling framework for electronic structure calculations.

Three problems will be presented and discussed: (i) a highly efficient and robust FEAST-based alternative to the traditional self-consistent field (SCF) procedure for solving the non-linear eigenvector problem [J. Chem. Phys. 138, p194101 (2013)]; (ii) a fundamental and practical solution of the exact muffin-tin problem for performing both accurate and scalable all-electron electronic structure calculations using FEAST on parallel architectures [Comp. Phys. Comm. 183, p2370 (2012)]; (iii) a FEAST-spectral-based time-domain propagation technique for performing real-time TDDFT simulations. In order to illustrate the efficiency of the FEAST framework, numerical examples are provided for various molecules and carbon-based materials using our in-house all-electron real-space FEM implementation and both the DFT/Kohn-Sham/LDA and TDDFT/ALDA approaches.
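
A toy dense-matrix transcription of the symmetric FEAST idea (contour quadrature of the resolvent followed by Rayleigh-Ritz); the quadrature, the dense solver, and the iteration counts below are illustrative stand-ins for the library's tuned choices:

    import numpy as np

    def feast_sketch(A, lmin, lmax, m0, iters=4, nq=8, seed=0):
        """Toy FEAST iteration for real symmetric A: filter a random
        subspace through a quadrature approximation of the spectral
        projector onto (lmin, lmax), then Rayleigh-Ritz, and repeat.
        m0 must be at least the eigenvalue count inside the interval."""
        n = A.shape[0]
        c, r = 0.5 * (lmin + lmax), 0.5 * (lmax - lmin)
        t, w = np.polynomial.legendre.leggauss(nq)   # Gauss nodes on [-1, 1]
        Y = np.random.default_rng(seed).normal(size=(n, m0))
        for _ in range(iters):
            Q = np.zeros((n, m0))
            for tk, wk in zip(t, w):
                z = c + r * np.exp(1j * np.pi * (tk + 1) / 2)
                # one shifted solve per quadrature node -- the costly,
                # naturally parallel step
                Q += 0.5 * wk * np.real((z - c) *
                                        np.linalg.solve(z * np.eye(n) - A, Y))
            Q, _ = np.linalg.qr(Q)                   # basis of filtered subspace
            evals, V = np.linalg.eigh(Q.T @ A @ Q)   # Rayleigh-Ritz step
            Y = Q @ V
        inside = (evals > lmin) & (evals < lmax)
        return evals[inside], Y[:, inside]
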
  • Computational Mathematics and Applications Seminar
23 May 2013
14:00
Professor Gitta Kutyniok
Abstract
In imaging science, efficient acquisition of images from few samples, with the possibility to precisely recover the complete image, is a topic of significant interest. The area of compressed sensing, which in particular advocates random sampling strategies, has already had a tremendous impact on both theory and applications. The necessary requirement for such techniques to be applicable is the sparsity of the original data within some transform domain. Recovery is then achieved by, for instance, $\ell_1$ minimization. Various applications, however, do not allow complete freedom in the choice of the samples. Take Magnetic Resonance Imaging (MRI), for example, which only provides access to Fourier samples. For this particular application, empirical results still showed superior performance of compressed sensing techniques.

In this talk, we focus on sparse sampling strategies under the constraint that only Fourier samples can be accessed. Since images -- and in particular images from MRI -- are governed by anisotropic features, and shearlets do provide optimally sparse approximations of those, compactly supported shearlet systems will be our choice for the reconstruction procedure. Our sampling strategy then exploits a careful variable-density sampling of the Fourier samples, with $\ell_1$-analysis based reconstruction using shearlets. Our main result provides success guarantees and shows that this sampling and reconstruction strategy is optimal.

This is joint work with Wang-Q Lim (Technische Universität Berlin).
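
A hedged Python sketch of the sampling side only: the power-law density and the exponent p below are generic illustrative choices, not the optimal densities of the paper, and the shearlet reconstruction step is omitted entirely:

    import numpy as np

    def variable_density_mask(n, m, p=2.0, rng=np.random.default_rng(0)):
        """Draw m of the n*n two-dimensional Fourier samples with a
        probability that decays polynomially in the distance to the
        zero frequency, so low frequencies are sampled densely."""
        k = np.fft.fftfreq(n) * n
        kx, ky = np.meshgrid(k, k)
        w = 1.0 / (1.0 + np.hypot(kx, ky)) ** p     # weight low frequencies
        prob = w.ravel() / w.sum()
        idx = rng.choice(n * n, size=m, replace=False, p=prob)
        mask = np.zeros(n * n, dtype=bool)
        mask[idx] = True
        return mask.reshape(n, n)                   # True = acquired sample
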
  • Computational Mathematics and Applications Seminar
16 May 2013
14:00
Dr David Salac
Abstract

The behavior of lipid vesicles arises from a complex interplay between the mechanics of the vesicle membrane, the surrounding fluids, and any external fields which may be present. In this presentation two aspects of vesicle behavior are explored: vesicles in inertial flows and vesicles exposed to electric fields.

The first half of the talk will present work investigating the behavior of two-dimensional vesicles in inertial flows. A level set representation of the interface is coupled to a Navier-Stokes projection solver. The standard projection method is modified to take into account not only the volume incompressibility of the fluids but also the surface incompressibility of the vesicle membrane, which is enforced by using the closest point level set method to calculate the required surface tension field. Results indicate that as inertial effects increase, vesicles change from a tumbling behavior to a stable tank-treading configuration. The numerical results bear a striking similarity to those for rigid particles in inertial flows. Using rigid particles as a guide, scaling laws are determined for vesicle behavior in inertial flows.
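
As a simplified two-dimensional illustration of one ingredient (a generic finite-difference stencil, not the authors' closest point method), the interface curvature that a level set solver feeds into the membrane force can be computed as:

    import numpy as np

    def curvature(phi, h):
        """Curvature kappa = div(grad phi / |grad phi|) of a level set
        function phi on a uniform 2-D grid with spacing h; on the zero
        level set this is the curvature of the represented interface."""
        px, py = np.gradient(phi, h)
        norm = np.sqrt(px ** 2 + py ** 2) + 1e-12   # avoid division by zero
        nxx = np.gradient(px / norm, h, axis=0)
        nyy = np.gradient(py / norm, h, axis=1)
        return nxx + nyy

    # e.g. for a circle of radius R, phi = sqrt(x^2 + y^2) - R, the
    # computed curvature near the interface approaches 1/R.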

The second half of the talk will move to immersed interface solvers for three-dimensional vesicles exposed to electric fields. The jump conditions for pressure and fluid velocity will be developed for three-dimensional Stokes flow or the constant-density Navier-Stokes equations, assuming a piecewise continuous viscosity and an inextensible interface. An immersed interface solver is implemented to determine the fluid and membrane state. Analytic test cases have been developed to examine the convergence of the fluid solver.

Time permitting, an immersed interface solver developed to calculate the electric field around a vesicle will be discussed. Unlike other multiphase systems, vesicle membranes have a time-varying potential which must be taken into account. This potential is implicitly determined along with the overall electric potential field.
