Past Computational Mathematics and Applications Seminar

13 October 2016
14:00
Prof. Coralia Cartis
Abstract


We present global rates of convergence for a general class of methods for nonconvex smooth optimization that include linesearch, trust-region and regularisation strategies, but that allow inaccurate problem information. Namely, we assume the local (first- or second-order) models of our function are only sufficiently accurate with a certain probability, and they can be arbitrarily poor otherwise. This framework subsumes certain stochastic gradient analyses and derivative-free techniques based on random sampling of function values. It can also be viewed as a robustness assessment of deterministic methods and their resilience to inaccurate derivative computation, such as that due to processor failure in a distributed framework. We show that, in terms of the order of accuracy, the evaluation complexity of such methods is the same as that of their counterparts that use deterministic, accurate models; the use of probabilistic models only increases the complexity by a constant, which depends on the probability of the models being good. Time permitting, we also discuss the case of inaccurate, probabilistic function-value information, which arises in stochastic optimization. This work is joint with Katya Scheinberg (Lehigh University, USA).
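As a toy illustration of the setting (not the authors' algorithm), the sketch below runs a backtracking linesearch in which the gradient model is accurate only with a prescribed probability and arbitrarily poor otherwise; the test function and all identifiers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Simple smooth nonconvex test function (illustrative choice)
    return np.sum(x**2) + np.sin(x[0])

def true_grad(x):
    g = 2 * x
    g[0] += np.cos(x[0])
    return g

def probabilistic_grad(x, p=0.7):
    """Accurate gradient with probability p, arbitrarily poor otherwise."""
    if rng.random() < p:
        return true_grad(x)
    return rng.standard_normal(x.shape)   # arbitrarily bad model

def linesearch_step(x, p=0.7, alpha0=1.0, beta=0.5, c=1e-4, max_backtracks=30):
    g = probabilistic_grad(x, p)
    alpha = alpha0
    for _ in range(max_backtracks):
        trial = x - alpha * g
        # Armijo test against the (possibly inaccurate) model slope
        if f(trial) <= f(x) - c * alpha * np.dot(g, g):
            return trial
        alpha *= beta
    return x   # unsuccessful step: keep the current iterate

x = np.array([3.0, -2.0])
for k in range(200):
    x = linesearch_step(x)
print(x, f(x))
```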
 

  • Computational Mathematics and Applications Seminar
16 June 2016
14:00
Prof. Serkan Gugercin
Abstract

For linear dynamical systems, model reduction has achieved great success. In the case of linear dynamics, we know how to construct, at modest cost, (locally) optimal, input-independent reduced models; that is, reduced models that are uniformly good over all inputs having bounded energy. In addition, in some cases we can achieve this goal using only input/output data, without a priori knowledge of the internal dynamics. Even though model reduction has also been applied successfully and effectively to nonlinear dynamical systems, in that setting both the reduction process and the reduced models are input dependent, and the high fidelity of the resulting approximation is generically restricted to the training input/data. In this talk, we will offer remedies to this situation.

 
First, we will review model reduction for linear systems using rational interpolation as the underlying framework. The concept of the transfer function will prove fundamental in this setting. Then, we will show how rational interpolation and transfer-function concepts can be extended to nonlinear dynamics, specifically to bilinear systems and quadratic-in-state systems, allowing us to construct input-independent reduced models in this setting as well. Several numerical examples will be presented to illustrate the discussion.
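As a concrete reminder of the linear case, here is a minimal sketch (illustrative code, not from the talk) of interpolatory model reduction: a two-sided projection built from solves with $sE - A$, so that the reduced transfer function matches $H(s) = C(sE - A)^{-1}B$ at the chosen interpolation points.

```python
import numpy as np

def transfer_function(E, A, B, C, s):
    """Evaluate H(s) = C (sE - A)^{-1} B for a SISO system."""
    return (C @ np.linalg.solve(s * E - A, B)).item()

def interpolatory_reduction(E, A, B, C, points):
    """Two-sided projection: the reduced model matches H (and H') at each point."""
    V = np.column_stack([np.linalg.solve(s * E - A, B).ravel() for s in points])
    W = np.column_stack([np.linalg.solve((s * E - A).conj().T, C.conj().T).ravel()
                         for s in points])
    Er, Ar = W.conj().T @ E @ V, W.conj().T @ A @ V
    Br, Cr = W.conj().T @ B, C @ V
    return Er, Ar, Br, Cr

# Random stable-ish SISO example (illustrative data, not from the talk)
rng = np.random.default_rng(1)
n = 100
A = -np.diag(rng.uniform(1, 10, n)) + 0.1 * rng.standard_normal((n, n))
E = np.eye(n)
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

points = [1j * w for w in (0.5, 2.0, 8.0)]
Er, Ar, Br, Cr = interpolatory_reduction(E, A, B, C, points)

for s in points:   # interpolation errors should be near machine precision
    print(abs(transfer_function(E, A, B, C, s) - transfer_function(Er, Ar, Br, Cr, s)))
```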
  • Computational Mathematics and Applications Seminar
Prof. Nancy Nichols
Abstract

To predict the behaviour of a dynamical system using a mathematical model, an accurate estimate of the current state of the system is needed in order to initialize the model. Complete information on the current state is, however, seldom available. The aim of optimal state estimation, known in the geophysical sciences as ‘data assimilation’, is to determine a best estimate of the current state using measured observations of the real system over time, together with the model equations. The problem is commonly formulated in variational terms as a very large nonlinear least-squares optimization problem. The lack of complete data, coupled with errors in the observations and in the model, leads to a highly ill-conditioned inverse problem that is difficult to solve.

To understand the nature of the inverse problem, we examine how different components of the assimilation system influence the conditioning of the optimization problem. First we consider the case where the dynamical equations are assumed to model the real system exactly. We show, counter-intuitively, that with increasingly dense and precise observations the problem becomes harder to solve accurately. We then extend these results to a 'weak-constraint' form of the problem, where the model equations are assumed not to be exact, but to contain random errors. Two different, but mathematically equivalent, forms of the problem are derived. We investigate the conditioning of these two forms and find, surprisingly, that they have quite different behaviour.
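For reference, in standard notation (not taken from the talk), the strong-constraint form of the variational problem minimises

$$ J(x_0) \;=\; \tfrac{1}{2}\,(x_0 - x^b)^T B^{-1} (x_0 - x^b) \;+\; \tfrac{1}{2}\sum_{k=0}^{N} \bigl(y_k - \mathcal{H}_k(x_k)\bigr)^T R_k^{-1} \bigl(y_k - \mathcal{H}_k(x_k)\bigr), \qquad x_{k+1} = \mathcal{M}_k(x_k), $$

where $x^b$ is the background (prior) state, $B$ and $R_k$ are the background- and observation-error covariances, $\mathcal{H}_k$ the observation operators and $\mathcal{M}_k$ the model. The weak-constraint form relaxes the perfect-model assumption by treating the model errors $\eta_k = x_{k+1} - \mathcal{M}_k(x_k)$ as additional unknowns, penalised against a model-error covariance $Q_k$.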

  • Computational Mathematics and Applications Seminar
2 June 2016
14:00
Professor Mark Embree
Abstract
Interpolatory matrix factorizations provide alternatives to the singular value decomposition for obtaining low-rank approximations; this class includes the CUR factorization, where the C and R matrices are subsets of columns and rows of the target matrix. While interpolatory approximations lack the SVD's optimality, their ingredients are easier to interpret than singular vectors: since they are copied from the matrix itself, they inherit the data's key properties (e.g., nonnegative/integer values, sparsity, etc.). We shall provide an overview of these approximate factorizations, describe how they can be analyzed using interpolatory projectors, and introduce a new method for their construction based on the Discrete Empirical Interpolation Method (DEIM). To conclude, we will use this algorithm to gain insight into accelerometer data from an instrumented building. (This talk describes joint work with Dan Sorensen (Rice) and collaborators in Virginia Tech's Smart Infrastructure Lab.)
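A minimal sketch of the idea, assuming the standard DEIM index-selection rule applied to leading singular vectors to pick rows and columns (a simplification of the published DEIM-CUR procedure; the test matrix is illustrative):

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM index selection applied to the columns of U (n x k)."""
    n, k = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, k):
        # Interpolate the next column at the indices chosen so far
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ c           # interpolation residual
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

def deim_cur(A, k):
    """CUR approximation with rows/columns picked by DEIM on leading singular vectors."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    rows = deim_indices(U[:, :k])
    cols = deim_indices(Vt[:k, :].T)
    C, R = A[:, cols], A[rows, :]
    # Middle factor chosen so that C*Umid*R is a projection-style approximation of A
    Umid = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, Umid, R

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 15)) @ rng.standard_normal((15, 150))  # exact rank 15
C, Umid, R = deim_cur(A, 15)
print(np.linalg.norm(A - C @ Umid @ R) / np.linalg.norm(A))
```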
  • Computational Mathematics and Applications Seminar
19 May 2016
14:00
Dr. Melina Freitag
Abstract

The requirement to compute Jordan blocks for multiple eigenvalues arises in a number of physical problems, for example panel flutter problems in aerodynamical stability, the stability of electrical power systems, and in quantum mechanics. We introduce a general method for computing a two-dimensional Jordan block in a parameter-dependent matrix eigenvalue problem, based on the so-called Implicit Determinant Method. This is joint work with Alastair Spence (Bath).
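As a toy illustration of the implicit determinant idea, reduced to locating a single simple eigenvalue via a bordered linear system and Newton's method (the bordering vectors and test matrix are hypothetical; the talk's treatment of two-dimensional Jordan blocks is more involved):

```python
import numpy as np

def implicit_determinant_newton(A, lam0, b, c, tol=1e-12, maxit=50):
    """Newton's method on the implicit determinant g(lam): g(lam) = 0 exactly
    when A - lam*I is singular (for suitable bordering vectors b, c)."""
    n = A.shape[0]
    lam = lam0
    for _ in range(maxit):
        M = np.block([[A - lam * np.eye(n), b.reshape(-1, 1)],
                      [c.reshape(1, -1), np.zeros((1, 1))]])
        rhs = np.zeros(n + 1); rhs[-1] = 1.0
        sol = np.linalg.solve(M, rhs)          # [x; g(lam)]
        x, g = sol[:n], sol[n]
        # Differentiating the bordered system gives M [x'; g'] = [x; 0]
        g_prime = np.linalg.solve(M, np.concatenate([x, [0.0]]))[n]
        lam_next = lam - g / g_prime
        if abs(lam_next - lam) <= tol * (1.0 + abs(lam)):
            return lam_next
        lam = lam_next
    return lam

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = Q @ np.diag([-2.0, -1.0, 0.5, 1.3, 2.0, 3.0]) @ Q.T   # known eigenvalues
b, c = rng.standard_normal(6), rng.standard_normal(6)     # hypothetical bordering vectors
print(implicit_determinant_newton(A, lam0=1.0, b=b, c=c))  # converges to a nearby eigenvalue
```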

  • Computational Mathematics and Applications Seminar
Dr Sam Relton
Abstract


In many applications we need to find or estimate the $p \ge 1$ largest elements of a matrix, along with their locations. This is required, for example, in recommender systems used by Amazon and Netflix, in link prediction in graphs, and in finding the most important links in a complex network.

Our algorithm uses only matrix-vector products and is based upon a power method for mixed subordinate norms. We have obtained theoretical results on the convergence of this algorithm via a comparison with rook pivoting for the LU decomposition. We have also improved the practicality of the algorithm by producing a blocked version iterating on $n \times t$ matrices, as opposed to vectors, where $t$ is a tunable parameter. For $p > 1$ we show how deflation can be used to improve the convergence of the algorithm.
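A minimal sketch in the spirit of this approach for $p = 1$, estimating $\max_{ij}|a_{ij}|$ (the mixed $(1,\infty)$ subordinate norm) using only products with $A$ and $A^T$; identifiers and the stopping rule are illustrative simplifications of the actual algorithm.

```python
import numpy as np

def estimate_largest_element(matvec, rmatvec, n, m, maxit=20):
    """Power-method-style estimate of max_ij |a_ij| using only products with
    A (matvec, m x n) and A^T (rmatvec). Returns (estimate, i, j)."""
    x = np.ones(n) / n                      # starting vector with unit 1-norm
    est, i, j = 0.0, -1, -1
    for _ in range(maxit):
        y = matvec(x)                       # y = A x
        i = int(np.argmax(np.abs(y)))
        z = np.zeros(m); z[i] = np.sign(y[i]) if y[i] != 0 else 1.0
        w = rmatvec(z)                      # w = A^T z, i.e. (signed) row i of A
        j_new = int(np.argmax(np.abs(w)))
        if abs(w[j_new]) <= est:            # no further growth: a "rook point"
            break
        est, j = abs(w[j_new]), j_new
        x = np.zeros(n); x[j] = np.sign(w[j]) if w[j] != 0 else 1.0
    return est, i, j

rng = np.random.default_rng(5)
A = rng.standard_normal((500, 400))        # illustrative dense test matrix
est, i, j = estimate_largest_element(lambda v: A @ v, lambda v: A.T @ v, 400, 500)
print(est, abs(A[i, j]), np.max(np.abs(A)))
```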

Finally, numerical experiments on both randomly generated matrices and real-life datasets (the latter for $A^TA$ and $e^A$) show how our algorithms can reliably estimate the largest elements of a matrix whilst obtaining considerable speedups when compared to forming the matrix explicitly: over 1000x in some cases.

  • Computational Mathematics and Applications Seminar
5 May 2016
14:00
Professor Nilima Nigam
Abstract
Eigenfunctions of the Laplace operator with mixed Dirichlet-Neumann boundary conditions may possess singularities, especially if the Dirichlet-Neumann junction occurs at angles $\geq \frac{\pi}{2}$. This suggests the use of boundary integral strategies to solve such eigenproblems. As with boundary value problems, integral-equation methods allow for a reduction of dimension and the resolution of singular behaviour which may otherwise present challenges to volumetric methods.
 
In this talk, we present a  novel integral-equation algorithm for mixed Dirichlet-Neumann eigenproblems. This is based on joint work with Oscar Bruno and Eldar Akhmetgaliyev (Caltech).
 
For domains with smooth boundary, the singular behaviour of the eigenfunctions at  Dirichlet-Neumann junctions is incorporated as part of the discretization strategy for the integral operator.  The discretization we use is based on the high-order Fourier Continuation method (FC). 
 
For non-smooth (Lipschitz) domains, an alternative discretization is presented which achieves high-order accuracy on the basis of graded meshes.
 
 In either case (smooth or Lipschitz boundary), eigenvalues are evaluated by examining the minimal singular values of a suitable discrete system. A naive implementation will not succeed even in simple situations. We implement a strategy inspired by one suggested by Trefethen and Betcke, who developed a modified method of particular solutions.
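To illustrate just the final step in a toy setting, the sketch below scans a spectral parameter and flags local minima of the smallest singular value; the trivial stand-in $A(k) = B - kI$ replaces the boundary-integral discretisation, and this naive scan is precisely the kind of approach the modified strategy mentioned above is designed to improve upon.

```python
import numpy as np

def smallest_singular_value(assemble, k):
    """sigma_min of the discretised operator A(k); eigenvalues of the underlying
    problem show up as (near-)zeros of this function."""
    return np.linalg.svd(assemble(k), compute_uv=False)[-1]

def locate_eigenvalues(assemble, k_grid):
    """Return grid points where sigma_min(A(k)) has a local minimum."""
    sigmas = np.array([smallest_singular_value(assemble, k) for k in k_grid])
    interior = np.arange(1, len(k_grid) - 1)
    mins = interior[(sigmas[interior] < sigmas[interior - 1]) &
                    (sigmas[interior] < sigmas[interior + 1])]
    return k_grid[mins], sigmas[mins]

# Toy stand-in assembly routine: A(k) = B - k*I, so the "eigenvalues" are eig(B)
rng = np.random.default_rng(6)
B = rng.standard_normal((8, 8)); B = (B + B.T) / 2
assemble = lambda k: B - k * np.eye(8)

k_grid = np.linspace(-6, 6, 3001)
ks, sig = locate_eigenvalues(assemble, k_grid)
print(np.sort(ks))
print(np.sort(np.linalg.eigvalsh(B)))
```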
 
The method is conceptually simple, and allows for highly accurate and efficient computation of eigenvalues and eigenfunctions, even in challenging geometries. 
  • Computational Mathematics and Applications Seminar
28 April 2016
14:00
Professor Rob Kirby
Abstract

For many years, sum-factored algorithms for finite elements in rectangular reference geometry have combined low complexity with the mathematical power of high-order approximation.  However, such algorithms rely heavily on the tensor product structure inherent in the geometry and basis functions, and similar algorithms for simplicial geometry have proven elusive.

Bernstein polynomials form totally nonnegative, rotationally symmetric, geometrically decomposed bases with many other remarkable properties that lead to optimal-complexity algorithms for element-wise finite element computations. They also form natural building blocks for the finite element exterior calculus bases for the de Rham complex, so that H(div) and H(curl) bases have efficient representations as well. We will also discuss their relevance for explicit discontinuous Galerkin methods, where the element mass matrix requires special attention.
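As a small self-contained illustration (not the element-level algorithms from the talk), here is the degree-$n$ Bernstein basis on $[0,1]$ and its evaluation by repeated convex combinations (de Casteljau), which reflects the recursive structure the fast algorithms exploit.

```python
import numpy as np
from math import comb

def bernstein_basis(n, x):
    """Values B_{i,n}(x) = C(n,i) x^i (1-x)^(n-i), i = 0..n, on [0, 1]."""
    x = float(x)
    return np.array([comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)])

def de_casteljau(coeffs, x):
    """Evaluate sum_i coeffs[i] * B_{i,n}(x) by repeated convex combinations."""
    c = np.array(coeffs, dtype=float)
    for _ in range(len(c) - 1):
        c = (1 - x) * c[:-1] + x * c[1:]
    return c[0]

n = 5
coeffs = np.array([1.0, 0.5, -0.2, 0.3, 1.1, 2.0])   # illustrative coefficients
x = 0.37
print(coeffs @ bernstein_basis(n, x), de_casteljau(coeffs, x))  # the two agree
```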

  • Computational Mathematics and Applications Seminar
