Past Numerical Analysis Group Internal Seminar

27 May 2014
14:00
Abstract
We study the behaviour of orthogonal polynomials on triangles and their coefficients in the context of spectral approximations of partial differential equations. For spectral approximation we consider series expansions $u=\sum_{k=0}^{\infty} \hat{u}_k \phi_k$ in terms of orthogonal polynomials $\phi_k$. We show that for any function $u \in C^{\infty}$ the series expansion converges faster than any polynomial order. With these results we are able to employ the polynomials $\phi_k$ in the spectral difference method in order to solve hyperbolic conservation laws.

It is a well-known fact that discontinuities can arise, leading to oscillatory numerical solutions. We compare standard filtering and the super spectral vanishing viscosity method, which uses exponential filters built from the differential operator of the respective orthogonal polynomials. We will extend the spectral difference method to unstructured grids by using classical orthogonal polynomials and exponential filters. Finally we consider some numerical test cases.
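The exponential filtering mentioned above can be sketched in one dimension: damp the high-order coefficients $\hat{u}_k$ of a spectral expansion by a factor $\sigma_k = \exp(-\alpha (k/N)^{2p})$. This is a minimal illustration using a 1-D Chebyshev expansion as a stand-in for the triangle polynomials of the talk; the parameters `alpha` and `p` are illustrative choices, not values from the abstract.

```python
import numpy as np

def exp_filter(coeffs, alpha=36.0, p=8):
    """Apply the exponential filter sigma_k = exp(-alpha*(k/N)^(2p))
    to spectral coefficients; high modes are damped to suppress the
    oscillations caused by discontinuities."""
    N = len(coeffs) - 1
    k = np.arange(N + 1)
    sigma = np.exp(-alpha * (k / N) ** (2 * p))
    return coeffs * sigma

# Chebyshev expansion of a step function (a stand-in for a
# discontinuous conservation-law solution)
x = np.cos(np.pi * np.arange(33) / 32)        # Chebyshev points
f = np.sign(x)                                 # discontinuous data
c = np.polynomial.chebyshev.chebfit(x, f, 32)  # interpolation coefficients
c_filtered = exp_filter(c)
```

The filter leaves the low-order coefficients essentially untouched while the highest modes are reduced by many orders of magnitude, which is what tames the Gibbs oscillations.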
  • Numerical Analysis Group Internal Seminar
20 May 2014
14:00
Jared L Aurentz
Abstract
A fast method for computing eigenpairs of positive definite matrices using GPUs is presented. The method uses Chebyshev polynomial spectral transformations to map the desired eigenvalues of the original matrix $A$ to exterior eigenvalues of the transformed matrix $p(A)$, making them easily computable using existing Krylov methods. The construction of the transforming polynomial $p(z)$ can be done efficiently and only requires knowledge of the spectral radius of $A$. Computing $p(A)v$ can be done using only the action of $Av$. This requires no extra memory and is typically easy to parallelize. The method is implemented using the highly parallel GPU architecture and, for specific problems, has a factor of 10 speedup over current GPU methods and a factor of 100 speedup over traditional shift and invert strategies on a CPU.
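The key property, that $p(A)v$ needs only the action $w \mapsto Aw$, follows from the Chebyshev three-term recurrence. A minimal sketch (not the speaker's implementation): assume the spectrum of $A$ lies in $[0, \rho]$ and map it to $[-1, 1]$ before applying the recurrence.

```python
import numpy as np

def cheb_poly_times_vector(matvec, v, coeffs, rho):
    """Evaluate p(A)v, where p is given by Chebyshev coefficients, using
    only matrix-vector products. The spectrum of A is assumed to lie in
    [0, rho]; s(A) = (2/rho) A - I maps it into [-1, 1]."""
    s = lambda w: (2.0 / rho) * matvec(w) - w
    t_prev, t_curr = v, s(v)                   # T_0(s)v and T_1(s)v
    result = coeffs[0] * t_prev
    if len(coeffs) > 1:
        result = result + coeffs[1] * t_curr
    for c in coeffs[2:]:
        # three-term recurrence: T_{k+1} = 2 s T_k - T_{k-1}
        t_prev, t_curr = t_curr, 2.0 * s(t_curr) - t_prev
        result = result + c * t_curr
    return result

# Toy check on a diagonal matrix with p(z) = T_2(2z/rho - 1), rho = 4
A = np.diag([1.0, 2.0, 3.0])
v = np.ones(3)
w = cheb_poly_times_vector(lambda x: A @ x, v, [0.0, 0.0, 1.0], 4.0)
```

Only two work vectors are carried through the recurrence, which is why the method needs essentially no extra memory beyond the matrix-vector product itself.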
  • Numerical Analysis Group Internal Seminar
13 May 2014
14:30
Ingrid von Glehn
Abstract

Partial differential equations defined on surfaces appear in various applications, for example image processing and reconstruction of non-planar images. In this talk, I will present a penalty method for evolution equations, based on an implicit representation of the surface. I will derive a simple equation in the surrounding space, formulated with an extension operator, and then show some analysis and applications of the method.

  • Numerical Analysis Group Internal Seminar
13 May 2014
14:00
Iain Smears
Abstract

Several problems, such as preconditioning in finite element methods or data compression and image processing, lead to the question of how well a fine grid function can be approximated by a coarse grid function. Particular challenges in answering this question arise when the functions may be only piecewise-continuous, or when the coarse space is not nested in the fine space. In this talk, we solve the problem by using a stable approximation from a space of globally smooth functions as an intermediate step, thereby allowing the use of known approximation results to establish the approximability by a coarse space. We outline the proof, which relies on techniques from the theory of discontinuous Galerkin methods and on the theory of Helmholtz decompositions. Finally, we present an application of our results to nonoverlapping domain decomposition preconditioners for hp-version DGFEM.

  • Numerical Analysis Group Internal Seminar
6 May 2014
14:30
Chris Farmer
Abstract

Given a model dynamical system, a model of any measuring apparatus relating states to observations, and a prior assessment of uncertainty, the probability density of subsequent system states, conditioned upon the history of the observations, is of some practical interest.

When observations are made at discrete times, it is known that the evolving probability density is a solution of the Bayesian filtering equations. This talk will describe the difficulties in approximating the evolving probability density using a Gaussian mixture (i.e. a sum of Gaussian densities). In general this leads to a sequence of optimisation problems and related high-dimensional integrals. There are other problems too, related to the necessity of using a small number of densities in the mixture, the requirement to maintain sparsity of any matrices and the need to compute first and, somewhat disturbingly, second derivatives of the misfit between predictions and observations. Adjoint methods, Taylor expansions, Gaussian random fields and Newton's method can be combined to, possibly, provide a solution. The approach is essentially a combination of filtering methods and '4-D Var' methods and some recent progress will be described.
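The Gaussian-mixture idea can be illustrated in the simplest possible setting: a scalar state, a prior that is a sum of two Gaussians, and a direct observation with Gaussian noise. Each component then updates by the scalar Kalman formulas and the mixture weights are rescaled by each component's marginal likelihood. This is a hypothetical toy, not the talk's method, which involves the far harder nonlinear, high-dimensional case.

```python
import numpy as np

def mixture_update(weights, means, variances, y, R):
    """Condition a 1-D Gaussian mixture prior on the observation
    y = x + N(0, R). Each component is updated by the scalar Kalman
    formulas; the weights are reweighted by the marginal likelihood
    of y under each component and renormalised."""
    w, m, P = map(np.asarray, (weights, means, variances))
    S = P + R                        # innovation variance per component
    K = P / S                        # scalar Kalman gain
    m_new = m + K * (y - m)          # posterior means
    P_new = (1.0 - K) * P            # posterior variances
    lik = np.exp(-0.5 * (y - m) ** 2 / S) / np.sqrt(2 * np.pi * S)
    w_new = w * lik
    return w_new / w_new.sum(), m_new, P_new

# Two equally weighted components at -1 and +1; observe y = 0.9
w, m, P = mixture_update([0.5, 0.5], [-1.0, 1.0], [1.0, 1.0], 0.9, 0.5)
```

After the update the component near the observation dominates the mixture, which hints at why a small number of components can be fragile: weights collapse quickly onto whichever components happen to fit the data.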

  • Numerical Analysis Group Internal Seminar
6 May 2014
14:00
Nick Trefethen
Abstract

Everybody has heard of the Faraday cage effect, in which a wire mesh does a good job of blocking electric fields and electromagnetic waves. For example, the screen on the front of your microwave oven keeps the microwaves from getting out, while light with its smaller wavelength escapes so you can see your burrito.  Surely the mathematics of such a famous and useful phenomenon has been long ago worked out and written up in the physics books, right?

Well, maybe. Dave Hewett and I have communicated with dozens of mathematicians, physicists, and engineers on this subject so far, and we've turned up amazingly little. Everybody has a view of why the Faraday cage mathematics is obvious, and most of their views are different. Feynman discusses the matter in his Lectures on Physics, but so far as we can tell, he gets it wrong.

For the static case at least (the Laplace equation), Hewett and I have made good progress with numerical explorations based on Mikhlin's method backed up by a theorem. The effect seems to be much weaker than we had imagined -- are we missing something? For time-harmonic waves (the Helmholtz equation), our simulations lead to further puzzles. We need advice! Where in the world is the literature on this problem?

  • Numerical Analysis Group Internal Seminar
29 April 2014
14:00
David Bindel
Abstract

In 1890, G. H. Bryan demonstrated that when a ringing wine glass rotates, the shape of the vibration pattern precesses, and this effect is the basis for a family of high-precision gyroscopes. Mathematically, the precession can be described in terms of a symmetry-breaking perturbation due to gyroscopic effects of a geometrically degenerate pair of vibration modes.  Unfortunately, current attempts to miniaturize these gyroscope designs are subject to fabrication imperfections that also break the device symmetry. In this talk, we describe how these devices work and our approach to accurate and efficient simulations of both ideal device designs and designs subject to fabrication imperfections.

  • Numerical Analysis Group Internal Seminar
11 March 2014
14:00
Arnaud Doucet
Abstract

State-space models are a very popular class of time series models which have found thousands of applications in engineering, robotics, tracking, vision, econometrics, etc. Except for linear and Gaussian models where the Kalman filter can be used, inference in non-linear non-Gaussian models is analytically intractable. Particle methods are a class of flexible and easily parallelizable simulation-based algorithms which provide consistent approximations to these inference problems. The aim of this talk is to introduce particle methods and to present the most recent developments in this area.
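The basic particle method is short enough to sketch. Below is a minimal bootstrap particle filter for a toy linear-Gaussian model (chosen only so the code is self-contained; the model, step count, and particle count are illustrative, not from the talk): propagate particles through the dynamics, weight them by the observation likelihood, and resample.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(y, n_particles=500):
    """Bootstrap particle filter for the toy state-space model
        x_t = 0.9 x_{t-1} + N(0, 1),   y_t = x_t + N(0, 1).
    Returns the filtering means E[x_t | y_1..t]."""
    x = rng.normal(size=n_particles)           # sample from the prior
    means = []
    for yt in y:
        x = 0.9 * x + rng.normal(size=n_particles)   # propagate dynamics
        logw = -0.5 * (yt - x) ** 2                  # Gaussian log-likelihood
        w = np.exp(logw - logw.max())                # stabilised weights
        w /= w.sum()
        means.append(np.sum(w * x))                  # weighted filtering mean
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x = x[idx]                                   # multinomial resampling
    return np.array(means)

est = bootstrap_particle_filter(np.array([0.5, 0.7, 0.6]))
```

Each particle evolves independently until the resampling step, which is the structural reason these algorithms parallelize so easily.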

  • Numerical Analysis Group Internal Seminar
4 March 2014
14:00
Mohsin Javed
Abstract

The Euler-Maclaurin formula is a quadrature rule based on corrections to the trapezoid rule using odd derivatives at the end-points of the function being integrated. It appears that no one has ever thought about a related function approximation that will give us the Euler-Maclaurin quadrature rule. That is, just as we can derive Newton-Cotes quadrature by integrating polynomial approximations of the function, we investigate what function approximation integrates exactly to give the corresponding Euler-Maclaurin quadrature. It turns out that the right function approximation is a combination of a trigonometric interpolant and a polynomial.

To make the method more practical, we also look at the closely related Newton-Gregory quadrature, which is very similar to the Euler-Maclaurin formula but instead of derivatives, uses finite differences. Following almost the same procedure, we find another mixed function approximation, derivative free, whose exact integration yields the Newton-Gregory quadrature rule.
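The endpoint corrections referred to above are easy to see numerically. A minimal sketch of the trapezoid rule with the first two Euler-Maclaurin correction terms, $-\frac{h^2}{12}\,[f'(b)-f'(a)] + \frac{h^4}{720}\,[f'''(b)-f'''(a)]$, tested on $\int_0^1 e^x\,dx = e - 1$:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on n equal subintervals."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    y = f(x)
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def euler_maclaurin(f, df, d3f, a, b, n):
    """Trapezoid rule plus the first two Euler-Maclaurin corrections,
    built from the odd derivatives of f at the endpoints."""
    h = (b - a) / n
    T = trapezoid(f, a, b, n)
    T -= h**2 / 12 * (df(b) - df(a))      # first correction term
    T += h**4 / 720 * (d3f(b) - d3f(a))   # second correction term
    return T

# f = exp has all derivatives equal to itself, so df = d3f = np.exp
exact = np.e - 1
t = trapezoid(np.exp, 0, 1, 8)
em = euler_maclaurin(np.exp, np.exp, np.exp, 0, 1, 8)
```

With the corrections the error drops from the trapezoid rule's $O(h^2)$ to $O(h^6)$, which is the effect the derivative-free Newton-Gregory variant reproduces using finite differences in place of $f'$ and $f'''$.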
