Numerical Analysis Group Internal Seminar

26 September 2017
14:00
Abstract

We consider a generalization of low-rank matrix completion to the case where the data belongs to an algebraic variety, i.e., each data point is a solution to a system of polynomial equations. In this case, the original matrix is possibly high-rank, but it becomes low-rank after mapping each column to a higher-dimensional space of monomial features. Many well-studied extensions of linear models, including affine subspaces and their unions, can be described by a variety model. We study the sampling requirements for matrix completion under a variety model, with a focus on a union of subspaces. We also propose an efficient matrix completion algorithm that minimizes a surrogate of the rank of the matrix of monomial features, which is able to recover synthetically generated data up to the predicted sampling complexity bounds. The proposed algorithm also outperforms standard low-rank matrix completion and subspace clustering techniques in experiments with real data.
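As a toy illustration of the lifting idea (a hypothetical example, not code from the talk): points sampled from the unit circle form a full-rank $2 \times n$ matrix, yet the matrix of their degree-2 monomial features is rank-deficient, because the defining polynomial of the variety induces a linear relation among the features.

    import numpy as np

    # Hypothetical demo: data on the variety x^2 + y^2 = 1 (the unit circle).
    rng = np.random.default_rng(0)
    n = 50
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    X = np.vstack([np.cos(theta), np.sin(theta)])        # 2 x n data matrix

    def lift(col):
        """Map a point (x, y) to its monomial features of degree <= 2."""
        x, y = col
        return np.array([1.0, x, y, x * x, x * y, y * y])

    Phi = np.column_stack([lift(X[:, j]) for j in range(n)])  # 6 x n lifted matrix

    print(np.linalg.matrix_rank(X))    # 2: the raw data has no low-rank structure
    print(np.linalg.matrix_rank(Phi))  # 5: the relation x^2 + y^2 - 1 = 0 drops the rank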

  • Numerical Analysis Group Internal Seminar
10 October 2017
14:00
Hendrik Ranocha
Abstract

High-order methods for conservation laws can be highly efficient if their stability is ensured. A suitable means of mimicking estimates from the continuous level is provided by summation-by-parts (SBP) operators and the weak enforcement of boundary conditions. Recently, there has been increasing interest in generalised SBP operators in both the finite difference and the discontinuous Galerkin spectral element frameworks.
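As a point of reference, the following sketch constructs a classical second-order diagonal-norm SBP finite difference operator (an illustrative textbook example, not one of the generalised operators discussed in the talk) and verifies the SBP property $H D + (H D)^T = B$, which discretely mimics integration by parts:

    import numpy as np

    N = 8                                   # grid nodes on [0, 1]
    h = 1.0 / (N - 1)                       # uniform spacing

    H = h * np.eye(N)                       # diagonal norm (quadrature) matrix
    H[0, 0] = H[-1, -1] = h / 2.0

    Q = 0.5 * (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5          # boundary closure: Q + Q^T = B

    D = np.linalg.solve(H, Q)               # difference operator D = H^{-1} Q

    B = np.zeros((N, N))
    B[0, 0], B[-1, -1] = -1.0, 1.0
    print(np.allclose(H @ D + (H @ D).T, B))   # True: SBP property holds
    print(np.allclose(D @ np.ones(N), 0.0))    # True: constants are differentiated exactly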

However, if generalised SBP operators are used, the treatment of boundaries becomes more difficult, since some properties of the continuous level are no longer mimicked discretely: interpolating the product of two functions will in general result in a value different from the product of the interpolations. Thus, desired properties such as conservation and stability are more difficult to obtain.

In this talk, the concept of generalised SBP operators and their application to entropy stable semidiscretisations will be presented. Several recent ideas extending the range of possible methods are discussed, highlighting both their advantages and their shortcomings.

  • Numerical Analysis Group Internal Seminar
10 October 2017
14:30
Jan Glaubitz
Abstract

Spectral element methods (SEM), such as discontinuous Galerkin (DG) and recent flux reconstruction (FR) methods, are a commonly used tool in the numerical treatment of hyperbolic conservation laws, $\partial_t u + \partial_x f(u) = 0$. When space and time are decoupled by the method of lines, SEMs are designed as semidiscretisations using piecewise polynomial approximations. By first subdividing the spatial domain into smaller elements and then using a high-order polynomial approximation in each element, they combine ideas from finite volume as well as spectral methods. In areas where the approximated (unknown) solution is sufficiently smooth, SEMs achieve very high orders of accuracy. Yet already the simplest non-linear example, Burgers' equation, is known to develop jump discontinuities (shocks) in finite time, even for smooth initial conditions. Further, when approximating discontinuous functions by high-order polynomials, the Gibbs phenomenon arises: the polynomial approximation shows spurious oscillations, which pollute the numerical solution and might cause instabilities.

In this talk, the artificial viscosity (AV) method will be addressed as a strategy to overcome such spurious oscillations. First proposed by von Neumann and Richtmyer in 1950, the AV method augments the original conservation law by a parabolic viscosity term,

\begin{equation*}
    \partial_t u + \partial_x f(u) = 0
    \quad \mapsto \quad
    \partial_t u + \partial_x f(u) = \varepsilon \partial_{xx} u.
\end{equation*}

The overall concept is to approximate (discontinuous) solutions by smoother ones of a parabolic equation and to apply the numerical method to this new equation, in which shocks are replaced by thin but continuous layers. Since then, a great number of AV methods, differing from one another sometimes more and sometimes less, have been proposed.
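As a minimal sketch of this concept (with illustrative assumptions: a periodic domain, a constant viscosity $\varepsilon$, and simple central differences with forward Euler time stepping; this is not one of the methods analysed in the talk), consider Burgers' equation with the added viscosity term:

    import numpy as np

    N, eps = 200, 1.0e-2                    # grid size and (assumed constant) viscosity
    x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    h = x[1] - x[0]
    u = np.sin(x)                           # smooth data; the inviscid solution shocks at t = 1

    dt = 0.2 * min(h, h * h / (2.0 * eps))  # crude explicit stability restriction
    t, T = 0.0, 1.5
    while t < T:
        f = 0.5 * u * u
        dfdx = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * h)
        d2u = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / (h * h)
        u = u + dt * (-dfdx + eps * d2u)    # viscosity smooths the developing shock
        t += dt
    # With eps = 0 this central scheme produces spurious oscillations near the
    # shock; the viscosity term replaces the jump by a thin continuous layer.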

In this talk, the most prominent ones will be reviewed with respect to the design principles of conservation and entropy stability. The question I would like to tackle is: \emph{Why do some AV methods perform better or worse than others?} The insight from this analysis will be used to highlight certain bottlenecks of commonly used shock-capturing methods. Finally, novel viscosity terms are proposed in order to overcome these shortcomings.

  • Numerical Analysis Group Internal Seminar
17 October 2017
14:00
Abdul-Lateef Haji-Ali
Abstract

We propose and analyze a multilevel weighted least squares polynomial approximation method. Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single-level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method, which employs samples with different accuracies and is able to match the accuracy of single-level approximations at reduced computational work. We prove complexity bounds under certain assumptions on polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Numerical experiments underline the practical applicability of our method.
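As a rough illustration of the single-level building block (a hypothetical example, not the authors' multilevel algorithm), the following sketch performs weighted least squares approximation in a Legendre basis on $[-1, 1]$, drawing samples from the Chebyshev (arcsine) density and re-weighting by the density ratio $w(x) = (\pi/2)\sqrt{1 - x^2}$:

    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(1)
    deg = 10                                   # subspace dimension is deg + 1
    m = 4 * (deg + 1)                          # oversample linearly in the dimension
    x = np.cos(np.pi * rng.uniform(size=m))    # Chebyshev-distributed sample locations
    w = 0.5 * np.pi * np.sqrt(1.0 - x * x)     # importance weights (uniform / Chebyshev)

    f = lambda t: np.exp(t) * np.sin(5.0 * t)  # test function (hypothetical choice)

    V = legendre.legvander(x, deg)             # Vandermonde matrix in the Legendre basis
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)

    xx = np.linspace(-1.0, 1.0, 1000)
    print(np.max(np.abs(f(xx) - legendre.legval(xx, coef))))   # approximation error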

  • Numerical Analysis Group Internal Seminar
7 November 2017
14:00
Bartolomeo Stellato
Abstract

We develop a general-purpose solver for quadratic programs based on operator splitting. We introduce a novel splitting that requires the solution of a quasi-definite linear system with the same coefficient matrix at each iteration. The resulting algorithm is very robust and, once the initial factorization is carried out, division-free; it also eliminates requirements on the problem data such as positive definiteness of the objective function or linear independence of the constraint functions. Moreover, it is able to detect primal or dual infeasible problems, providing infeasibility certificates. The method supports caching the factorization of the quasi-definite system and warm starting, making it efficient for solving parametrized problems arising in finance, control, and machine learning. Our open-source C implementation, OSQP, has a small footprint and is library-free. Numerical benchmarks on problems from several application domains show that OSQP is typically 10x faster than interior-point methods, especially when factorization caching or warm starts are used.


This is joint work with Goran Banjac, Paul Goulart, Alberto Bemporad, and Stephen Boyd.
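As a minimal sketch of the solver's Python interface (the problem data below is made up for illustration), one can solve a small QP, then update the linear cost and re-solve, reusing the cached factorization and the previous solution as a warm start:

    import numpy as np
    import osqp
    from scipy import sparse

    # minimize 0.5 x'Px + q'x  subject to  l <= Ax <= u  (made-up data)
    P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
    q = np.array([1.0, 1.0])
    A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
    l = np.array([1.0, 0.0, 0.0])
    u = np.array([1.0, 0.7, 0.7])

    prob = osqp.OSQP()
    prob.setup(P, q, A, l, u)
    res = prob.solve()
    print(res.x)

    # Updating only the vectors keeps the cached KKT factorization, and the
    # previous solution warm starts the next solve.
    prob.update(q=np.array([2.0, 3.0]))
    res = prob.solve()
    print(res.x)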
 

  • Numerical Analysis Group Internal Seminar