Past Computational Mathematics and Applications Seminar

Prof. Nancy Nichols
Abstract

To predict the behaviour of a dynamical system using a mathematical model, an accurate estimate of the current state of the system is needed in order to initialize the model. Complete information on the current state is, however, seldom available. The aim of optimal state estimation, known in the geophysical sciences as ‘data assimilation’, is to determine a best estimate of the current state using measured observations of the real system over time, together with the model equations. The problem is commonly formulated in variational terms as a very large nonlinear least-squares optimization problem. The lack of complete data, coupled with errors in the observations and in the model, leads to a highly ill-conditioned inverse problem that is difficult to solve.

To understand the nature of the inverse problem, we examine how different components of the assimilation system influence the conditioning of the optimization problem. First we consider the case where the dynamical equations are assumed to model the real system exactly. We show, counter-intuitively, that with increasingly dense and precise observations, the problem becomes harder to solve accurately. We then extend these results to a ‘weak-constraint’ form of the problem, where the model equations are assumed not to be exact, but to contain random errors. Two different, but mathematically equivalent, forms of the problem are derived. We investigate the conditioning of these two forms and find, surprisingly, that they behave quite differently.
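
The following minimal sketch illustrates the strong-constraint (perfect-model) form of the variational cost function described above; all names (`model`, `H`, the covariance inverses `B_inv` and `R_inv`) are illustrative stand-ins rather than the speaker's formulation.

```python
import numpy as np

def strong_constraint_cost(x0, x_b, B_inv, R_inv, H, model, obs):
    """4D-Var-style cost: a background (prior) misfit plus observation
    misfits, with the model assumed perfect and enforced exactly by
    propagating the initial state x0 through it."""
    J = 0.5 * (x0 - x_b) @ B_inv @ (x0 - x_b)  # background term
    x = x0
    for y in obs:                              # observations over time
        d = H @ x - y                          # innovation at this time
        J += 0.5 * d @ R_inv @ d               # weighted observation misfit
        x = model(x)                           # advance the model one step
    return J
```

In the weak-constraint form, the model equations are not enforced exactly: each model step acquires its own error term in the cost, which is the setting in which the two mathematically equivalent formulations mentioned above arise.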

  • Computational Mathematics and Applications Seminar
2 June 2016
14:00
Professor Mark Embree
Abstract
Interpolatory matrix factorizations provide alternatives to the singular value decomposition for obtaining low-rank approximations; this class includes the CUR factorization, where the C and R matrices are subsets of columns and rows of the target matrix. While interpolatory approximations lack the SVD's optimality, their ingredients are easier to interpret than singular vectors: since they are copied from the matrix itself, they inherit the data's key properties (e.g., nonnegative/integer values, sparsity, etc.). We shall provide an overview of these approximate factorizations, describe how they can be analyzed using interpolatory projectors, and introduce a new method for their construction based on the Discrete Empirical Interpolation Method (DEIM). To conclude, we will use this algorithm to gain insight into accelerometer data from an instrumented building. (This talk describes joint work with Dan Sorensen (Rice) and collaborators in Virginia Tech's Smart Infrastructure Lab.)
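
As a concrete sketch of a DEIM-based CUR construction in the spirit of the talk, the following code selects rows and columns greedily from the singular vectors; the interface and variable names are assumptions made for illustration only.

```python
import numpy as np

def deim_indices(V):
    """Greedy DEIM selection: pick the index of the largest residual
    entry as each new column of V is interpolated by the previous ones."""
    n, k = V.shape
    p = [int(np.argmax(np.abs(V[:, 0])))]
    for j in range(1, k):
        c = np.linalg.solve(V[p, :j], V[p, j])   # interpolate column j
        r = V[:, j] - V[:, :j] @ c               # interpolation residual
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

def deim_cur(A, k):
    """Rank-k CUR approximation A ~ C @ U @ R with DEIM-selected
    rows and columns copied directly from A."""
    Usv, s, Vt = np.linalg.svd(A, full_matrices=False)
    rows = deim_indices(Usv[:, :k])   # left singular vectors pick rows
    cols = deim_indices(Vt[:k].T)     # right singular vectors pick columns
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R
```
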
  • Computational Mathematics and Applications Seminar
19 May 2016
14:00
Dr. Melina Freitag
Abstract

The requirement to compute Jordan blocks for multiple eigenvalues arises in a number of physical problems: for example, panel flutter in aerodynamic stability analysis, the stability of electrical power systems, and quantum mechanics. We introduce a general method for computing a 2-dimensional Jordan block in a parameter-dependent matrix eigenvalue problem, based on the so-called Implicit Determinant Method. This is joint work with Alastair Spence (Bath).
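
To fix ideas, here is a minimal sketch of the bordered-system construction underlying the Implicit Determinant Method, shown for locating a single eigenvalue of a fixed matrix by Newton's method; the talk's extension to parameter-dependent problems and 2-dimensional Jordan blocks builds on this ingredient. The bordering vectors and interface are assumptions for illustration.

```python
import numpy as np

def implicit_determinant(A, lam0, tol=1e-12, maxit=50):
    """Newton iteration on the scalar f(lam) defined by the bordered
    system [[A - lam*I, b], [c^T, 0]] [x; f] = [0; 1]; f(lam) = 0
    exactly when lam is an eigenvalue of A (for generic b, c)."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    b, c = rng.standard_normal(n), rng.standard_normal(n)
    lam = lam0
    for _ in range(maxit):
        M = np.block([[A - lam * np.eye(n), b[:, None]],
                      [c[None, :], np.zeros((1, 1))]])
        sol = np.linalg.solve(M, np.r_[np.zeros(n), 1.0])
        x, f = sol[:n], sol[n]
        if abs(f) < tol:
            break
        # differentiating the bordered system gives M [x'; f'] = [x; 0]
        fprime = np.linalg.solve(M, np.r_[x, 0.0])[n]
        lam -= f / fprime                    # Newton update
    return lam, x
```
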

  • Computational Mathematics and Applications Seminar
Dr Sam Relton
Abstract

In many applications we need to find or estimate the $p \ge 1$ largest elements of a matrix, along with their locations. This is required, for example, in recommender systems of the kind used by Amazon and Netflix, in link prediction in graphs, and in finding the most important links in a complex network.

Our algorithm uses only matrix-vector products and is based upon a power method for mixed subordinate norms. We have obtained theoretical results on the convergence of this algorithm via a comparison with rook pivoting for the LU decomposition. We have also improved the practicality of the algorithm by producing a blocked version iterating on $n \times t$ matrices, as opposed to vectors, where $t$ is a tunable parameter. For $p > 1$ we show how deflation can be used to improve the convergence of the algorithm.

Finally, numerical experiments on both randomly generated matrices and real-life datasets (the latter for $A^TA$ and $e^A$) show how our algorithms can reliably estimate the largest elements of a matrix whilst obtaining considerable speedups when compared to forming the matrix explicitly: over 1000x in some cases.
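
A minimal single-element sketch of such a rook-pivoting-style search, using only products with $A$ and $A^T$, is given below; the blocked iteration on $n \times t$ matrices and the deflation for $p > 1$ are not shown, and the interface is assumed for illustration.

```python
import numpy as np

def unit_vector(n, k):
    e = np.zeros(n)
    e[k] = 1.0
    return e

def estimate_max_element(matvec, rmatvec, n, j=0, maxit=20):
    """Estimate max_ij |a_ij| of an implicit n x n matrix given only
    matvec(v) = A @ v and rmatvec(v) = A.T @ v.  As in rook pivoting,
    the search alternates between the largest entry of the current
    column and of the current row."""
    best, i = -np.inf, -1
    for _ in range(maxit):
        col = matvec(unit_vector(n, j))        # column j of A
        i_new = int(np.argmax(np.abs(col)))
        row = rmatvec(unit_vector(n, i_new))   # row i_new of A
        j_new = int(np.argmax(np.abs(row)))
        if abs(row[j_new]) <= best:            # no strict growth: stop
            break
        best, i, j = abs(row[j_new]), i_new, j_new
    return best, i, j
```

For matrices such as $A^TA$ or $e^A$, the two products can be applied without ever forming the matrix explicitly, which is where the large speedups come from.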

  • Computational Mathematics and Applications Seminar
5 May 2016
14:00
Professor Nilima Nigam
Abstract
Eigenfunctions of the Laplace operator with mixed Dirichlet-Neumann boundary conditions may possess singularities, especially if the Dirichlet-Neumann junction occurs at angles $\geq \frac{\pi}{2}$. This suggests the use of boundary integral strategies to solve such eigenproblems. As with boundary value problems, integral-equation methods allow for a reduction of dimension, and the resolution of singular behaviour which may otherwise present challenges to volumetric methods.

In this talk, we present a novel integral-equation algorithm for mixed Dirichlet-Neumann eigenproblems. This is based on joint work with Oscar Bruno and Eldar Akhmetgaliyev (Caltech).

For domains with smooth boundary, the singular behaviour of the eigenfunctions at Dirichlet-Neumann junctions is incorporated as part of the discretization strategy for the integral operator. The discretization we use is based on the high-order Fourier Continuation method (FC).

For non-smooth (Lipschitz) domains, an alternative high-order discretization is presented which achieves high-order accuracy on the basis of graded meshes.

In either case (smooth or Lipschitz boundary), eigenvalues are evaluated by examining the minimal singular values of a suitable discrete system. A naive implementation will not succeed even in simple situations; we instead implement a strategy inspired by one suggested by Trefethen and Betcke, who developed a modified method of particular solutions.

The method is conceptually simple, and allows for highly accurate and efficient computation of eigenvalues and eigenfunctions, even in challenging geometries.
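
In outline, the eigenvalue-location step might look like the sketch below, in which `assemble(k)` is a hypothetical user-supplied routine returning the discretized boundary-integral operator at trial frequency $k$; the Betcke-Trefethen-style safeguards that make this robust in practice are deliberately omitted.

```python
import numpy as np

def locate_eigenvalues(assemble, k_grid):
    """Scan trial frequencies and record the smallest singular value
    of the discretized operator A(k); interior eigenvalues appear as
    sharp local minima (dips) of this curve."""
    sig = np.array([np.linalg.svd(assemble(k), compute_uv=False)[-1]
                    for k in k_grid])
    dips = (sig[1:-1] < sig[:-2]) & (sig[1:-1] < sig[2:])  # local minima
    return k_grid[1:-1][dips], sig
```
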
  • Computational Mathematics and Applications Seminar
28 April 2016
14:00
Professor Rob Kirby
Abstract

For many years, sum-factored algorithms for finite elements in rectangular reference geometry have combined low complexity with the mathematical power of high-order approximation.  However, such algorithms rely heavily on the tensor product structure inherent in the geometry and basis functions, and similar algorithms for simplicial geometry have proven elusive.

Bernstein polynomials are totally nonnegative, rotationally symmetric, geometrically decomposed bases with many other remarkable properties that lead to optimal-complexity algorithms for element-wise finite element computations. They also form natural building blocks for the finite element exterior calculus bases for the de Rham complex, so that H(div) and H(curl) bases have efficient representations as well. We will also discuss their relevance for explicit discontinuous Galerkin methods, where the element mass matrix requires special attention.
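
As a small illustration of the structure involved, the sketch below evaluates a one-dimensional Bernstein expansion by the de Casteljau recurrence; every level is a convex combination of neighbouring coefficients, the recursive structure that simplicial sum-factored algorithms exploit. The code is a generic illustration, not taken from the talk.

```python
import numpy as np

def de_casteljau(coeffs, t):
    """Evaluate a polynomial given by Bernstein coefficients at t.
    Each pass replaces neighbouring coefficients by their convex
    combination, shrinking the array by one until a scalar remains."""
    b = np.asarray(coeffs, dtype=float)
    for _ in range(len(b) - 1):
        b = (1.0 - t) * b[:-1] + t * b[1:]
    return b[0]
```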

  • Computational Mathematics and Applications Seminar
Dr Salvatore Filippone
Abstract

We will review the basic building blocks of iterative solvers, i.e. sparse matrix-vector multiplication, in the context of GPU devices such as those produced by NVIDIA; we will then discuss some techniques in preconditioning by approximate inverses, and we will conclude with an application to an image processing problem from the biomedical field.
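
For reference, a plain CSR sparse matrix-vector product is sketched below; on a GPU the row loop is what gets parallelized, and the irregular gathers from `x` are the main performance obstacle that alternative storage formats try to regularize. This is a generic illustration, not code from the speaker's libraries.

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """Reference y = A @ x for A stored in CSR format: indptr marks the
    extent of each row inside the flat indices/data arrays."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(indptr) - 1):
        lo, hi = indptr[row], indptr[row + 1]
        y[row] = data[lo:hi] @ x[indices[lo:hi]]   # gather from x
    return y
```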

  • Computational Mathematics and Applications Seminar
25 February 2016
14:00
Michal Kocvara
Abstract

The aim of this talk is to design an efficient multigrid method for constrained convex optimization problems arising from discretization of some underlying infinite-dimensional problems. Due to the problem-dependent nature of this approach, we only consider bound constraints with (possibly) a linear equality constraint. As our aim is to target large-scale problems, we want to avoid computation of second derivatives of the objective function, thus excluding Newton-like methods. We propose a smoothing operator that only uses first-order information and study the computational efficiency of the resulting method. In the second part, we consider the application of multigrid techniques to more general optimization problems, in particular the topology design problem.
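
A first-order smoother of the kind the talk calls for might, in its simplest form, be a few projected-gradient sweeps, as in the hedged sketch below; the smoothing operator actually proposed in the talk may differ.

```python
import numpy as np

def projected_gradient_smoother(grad, x, lower, upper, step, nsweeps=3):
    """Smoothing for a bound-constrained convex problem
    min f(x) s.t. lower <= x <= upper, using only first-order
    information (no Hessians, hence no Newton-like steps)."""
    for _ in range(nsweeps):
        x = np.clip(x - step * grad(x), lower, upper)  # project onto bounds
    return x
```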
