Many different mechanisms for achieving such atomistic/continuum couplings have been developed. To assess their relative merits, in particular the accuracy/cost ratio, I will present a numerical analysis framework. I will use this framework to analyse in more detail a particularly popular scheme (the BQCE scheme), identifying key approximation parameters which can then be balanced (in a surprising way) to obtain an optimised formulation.
Finally, I will demonstrate that this analysis shows how to remove a severe bottleneck in the BQCE scheme, leading to a new scheme with an optimal convergence rate.
the sphere using the framework of finite element exterior calculus. The
formulation can be viewed as an extension of the classical staggered
C-grid energy-enstrophy conserving and
energy-conserving/enstrophy-dissipating schemes which were defined on
latitude-longitude grids. This work is motivated by the need to use
pseudo-uniform grids on the sphere (such as an icosahedral grid or a
cube grid) in order to achieve good scaling on massively parallel
computers, and forms part of the multi-institutional UK “Gung Ho”
project which aims to design a next generation dynamical core for the
Met Office Unified Model climate and weather prediction system. The
rotating shallow water equations are a single layer model that is
used to benchmark the horizontal component of numerical schemes for
weather prediction models.
We show, within the finite element exterior calculus framework, that it
is possible
to build numerical schemes with horizontal velocity and layer depth that
have a conserved diagnostic potential vorticity field, by making use of the
geometric properties of the scheme. The schemes also conserve energy and
enstrophy, which arise naturally as conserved quantities out of a
Poisson bracket formulation. We show that it is possible to modify the
discretisation, motivated by physical considerations, so that enstrophy
is dissipated, either by using the Anticipated Potential Vorticity
Method, or by inducing stabilised advection schemes for potential
vorticity such as SUPG or higher-order Taylor-Galerkin schemes. We
illustrate our results with convergence tests and numerical experiments
obtained from a FEniCS implementation on the sphere.
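For readers unfamiliar with the model, the rotating shallow water equations in vector-invariant form, together with the diagnostic and conserved quantities mentioned above, can be written (in standard notation, not specific to our discretisation) as
\[
\partial_t\mathbf{u} + q\,h\,\mathbf{u}^{\perp} + \nabla\Bigl(gh + \tfrac12|\mathbf{u}|^{2}\Bigr) = 0,
\qquad
\partial_t h + \nabla\cdot(h\mathbf{u}) = 0,
\]
\[
q = \frac{\zeta + f}{h},
\qquad
E = \int \Bigl(\tfrac12 h|\mathbf{u}|^{2} + \tfrac12 g h^{2}\Bigr)\,\mathrm{d}x,
\qquad
Z = \int \tfrac12\, h\,q^{2}\,\mathrm{d}x,
\]
where $\mathbf{u}$ is the horizontal velocity, $h$ the layer depth, $f$ the Coriolis parameter, $\zeta = \nabla^{\perp}\cdot\mathbf{u}$ the relative vorticity, $q$ the potential vorticity, $E$ the energy and $Z$ the potential enstrophy.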
Computer simulation of the propagation and interaction of linear waves
is a core task in computational science and engineering.
The finite element method represents one of the most common
discretisation techniques for Helmholtz and Maxwell's equations, which
model time-harmonic acoustic and electromagnetic wave scattering.
At medium and high frequencies, resolution requirements and the
so-called pollution effect entail excessive computational effort
and prevent the effective use of standard finite element schemes.
Wave-based Trefftz methods offer a possible way to deal with this
problem: trial and test functions are special solutions of the
underlying PDE inside each element, thus the information about the
frequency is directly incorporated in the discrete spaces.
This talk is concerned with a family of those methods: the so-called
Trefftz-discontinuous Galerkin (TDG) methods, which include the
well-known ultraweak variational formulation (UWVF).
We derive a general formulation of the TDG method for Helmholtz
impedance boundary value problems and we discuss its well-posedness
and quasi-optimality.
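For concreteness, the model problem and the Trefftz property can be stated as follows (standard formulation, with $k$ the wavenumber and $\vartheta$ an impedance parameter):
\[
-\Delta u - k^{2} u = 0 \ \text{in } \Omega,
\qquad
\nabla u\cdot\mathbf{n} + \mathrm{i}\,k\,\vartheta\, u = g \ \text{on } \partial\Omega,
\]
and the discrete trial and test functions satisfy the Helmholtz equation element-wise, a typical choice being local plane waves $x \mapsto e^{\mathrm{i} k\, \mathbf{d}_{\ell}\cdot x}$ with unit propagation directions $\mathbf{d}_{\ell}$, or circular/spherical waves.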
A complete theory for the (a priori) h- and p-convergence for plane
and circular/spherical wave finite element spaces has been developed,
relying on new best approximation estimates for the considered
discrete spaces.
In the two-dimensional case, on meshes with very general element
shapes geometrically graded towards domain corners, we prove
exponential convergence of the discrete solution in terms of number of
unknowns.
This is joint work with Ralf Hiptmair, Christoph Schwab (ETH Zurich,
Switzerland) and Ilaria Perugia (Vienna, Austria).
This theoretical and numerical study concerns an infinite-rate chemistry diffusion flame in a tube, which is a simple model for a flame in a combustion chamber. The problem is linearized around the non-oscillating state in order to derive the direct and adjoint equations governing the evolution of infinitesimal oscillations.
The direct equations are used to predict the frequency, growth rate, and mode shape of the most unstable thermoacoustic oscillations. The adjoint equations are then used to calculate how the frequency and growth rate change in response to (i) changes to the base state, such as the flame shape or the composition of the fuel, and (ii) generic passive feedback mechanisms that could be added to the device. This information can be used to stabilize the system, which is verified by subsequent experiments.
This analysis reveals that, as expected from a simple model, the phase delay between velocity and heat-release fluctuations is the key parameter in determining the sensitivities. It also reveals that this thermoacoustic system is exceedingly sensitive to changes in the base state. This analysis can be extended to more accurate models and is a promising new tool for the analysis and control of thermoacoustic oscillations.
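The adjoint-based sensitivities above follow from standard first-order eigenvalue perturbation theory (sketched here in generic notation, not the specific flame model): if $\mathcal{L}\hat q = s\hat q$ governs the linearized oscillations, with complex eigenvalue $s$ encoding growth rate and frequency, and $\hat q^{\dagger}$ is the corresponding adjoint eigenfunction, then a perturbation $\delta\mathcal{L}$ of the operator shifts the eigenvalue by
\[
\delta s = \frac{\langle \hat q^{\dagger},\, \delta\mathcal{L}\,\hat q\rangle}{\langle \hat q^{\dagger},\, \hat q\rangle},
\]
so a single adjoint solve yields the change in growth rate and frequency for any candidate modification $\delta\mathcal{L}$, whether it arises from a base-state change or a passive feedback device.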
optimal asymptotic rates. Practical experience shows that AFEMs with a maximum marking strategy
produce optimal results while being less sensitive to the choice of the marking parameter.
\\
\\
In this talk, we prove that an AFEM with a modified maximum strategy is even instance optimal for the
total error, i.e., for the sum of the error and the oscillation. This is a non-asymptotic optimality result.
Our approach uses new techniques based on the minimisation of the Dirichlet energy and a newly
developed tree structure of the nodes of admissible triangulations.
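As a point of reference, the plain (unmodified) maximum marking strategy is simple to state; the following minimal Python sketch (ours, for illustration only, since the talk concerns a modified maximum strategy) marks every element whose error indicator is at least a fraction $\theta$ of the largest indicator.

    import numpy as np

    # Plain maximum marking: refine every element whose indicator eta_T is at
    # least theta times the largest indicator.  (The AFEM in the talk uses a
    # *modified* maximum strategy; this only illustrates the basic rule.)
    def maximum_marking(eta, theta=0.5):
        eta = np.asarray(eta, dtype=float)
        return np.flatnonzero(eta >= theta * eta.max())

    # Example: with theta = 0.5 the elements with indicators 0.9, 0.5 and 0.85
    # are marked, since each exceeds 0.5 * 0.9 = 0.45.
    marked = maximum_marking([0.1, 0.9, 0.5, 0.85], theta=0.5)   # -> [1, 2, 3]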
require the determination of an unknown
function from a finite set of indirect measurements.
Examples include oceanography, oil recovery,
water resource management and weather forecasting.
The Bayesian approach to these problems
is natural for many reasons, including the
under-determined and ill-posed nature of the inversion,
the noise in the data and the uncertainty in
the differential equation models used to describe
complex multiscale physics. The object of interest
in the Bayesian approach is the posterior
probability distribution on the unknown field [1].
\\
\\
However, the Bayesian approach presents a
formidable computational task, as it
results in the need to probe a probability
measure on a separable Banach space. Markov
chain Monte Carlo (MCMC) methods may be
used to achieve this [2], but can be
prohibitively expensive. In this talk I
will discuss approximation of probability measures
by a Gaussian measure, looking for the closest
approximation with respect to the Kullback-Leibler
divergence. This methodology is widely
used in machine-learning [3]. In the context of
target measures on separable Banach space
which themselves have density with respect to
a Gaussian, I will show how to make sense of the
resulting problem in the calculus of variations [4].
Furthermore I will show how the approximate
Gaussians can be used to speed up MCMC
sampling of the posterior distribution [5].
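Schematically (in standard notation, with $\mu_0$ the Gaussian prior and $\Phi$ the negative log-likelihood of the data), the posterior $\mu$ and the best Gaussian approximation sought here are
\[
\frac{\mathrm{d}\mu}{\mathrm{d}\mu_0}(u) \;\propto\; \exp\bigl(-\Phi(u)\bigr),
\qquad
\nu^{\ast} \;=\; \operatorname*{arg\,min}_{\nu = N(m,C)} D_{\mathrm{KL}}(\nu\,\|\,\mu),
\qquad
D_{\mathrm{KL}}(\nu\,\|\,\mu) = \int \log\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\,\mathrm{d}\nu,
\]
with the minimisation taken over a suitable class of Gaussian measures $N(m,C)$ on the Banach space.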
\\
\\
[1] A.M. Stuart. "Inverse problems: a Bayesian
perspective." Acta Numerica 19(2010) and
http://arxiv.org/abs/1302.6989
\\
[2] S.L. Cotter, G.O. Roberts, A.M. Stuart and D. White,
"MCMC methods for functions: modifying old algorithms
to make them faster". Statistical Science 28(2013).
http://arxiv.org/abs/1202.0709
\\
[3] C.M. Bishop, "Pattern recognition and machine learning".
Springer, 2006.
\\
[4] F.J. Pinski, G. Simpson, A.M. Stuart and H. Weber, "Kullback-Leibler
Approximations for measures on infinite dimensional spaces."
http://arxiv.org/abs/1310.7845
\\
[5] F.J. Pinski, G. Simpson, A.M. Stuart and H. Weber, "Algorithms
for Kullback-Leibler approximation of probability measures in
infinite dimensions." In preparation.
When high-dimensional
problems are concerned, few algorithms can break the curse of
dimensionality and solve them efficiently and reliably. Among these, tensor
product algorithms, which implement the idea of separation of variables for
multi-index arrays (tensors), seem to be the most general and also very
promising. They originated in quantum physics and chemistry and descend broadly
from the density matrix renormalization group (DMRG) and matrix
product states (MPS) formalisms. The same tensor formats were recently
re-discovered in the numerical linear algebra (NLA) community as the tensor
train (TT) format.
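To fix ideas, here is a minimal Python illustration (ours, not from the talk) of the TT/MPS representation: a $d$-dimensional array is stored as a product of three-dimensional cores, and any entry is recovered by multiplying $d$ small matrices.

    import numpy as np

    # Reconstruct a full tensor from its TT/MPS cores G[k] of shape
    # (r_{k-1}, n_k, r_k), with boundary ranks r_0 = r_d = 1, so that
    # A[i1,...,id] = G[0][:, i1, :] @ G[1][:, i2, :] @ ... @ G[d-1][:, id, :].
    def tt_to_full(cores):
        full = cores[0]                                  # shape (1, n_1, r_1)
        for G in cores[1:]:
            # contract the trailing rank index of `full` with the leading one of G
            full = np.tensordot(full, G, axes=(full.ndim - 1, 0))
        return full[0, ..., 0]                           # drop the boundary ranks

    # Example: a 4 x 5 x 6 tensor with TT ranks (1, 2, 3, 1) -- three small cores
    # instead of 120 entries; the gain grows exponentially with the dimension d.
    cores = [np.random.rand(1, 4, 2), np.random.rand(2, 5, 3), np.random.rand(3, 6, 1)]
    A = tt_to_full(cores)
    print(A.shape)    # (4, 5, 6)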
Algorithms developed in the quantum physics community are based on the
optimisation in tensor formats, which is performed successively over all
components of a tensor format (i.e. all sites or modes).
The DMRG/MPS schemes are very efficient but very difficult to analyse, and at
the moment only local convergence results for the simplest algorithm are
available. In the NLA community, a common approach is to use a classical
iterative scheme (e.g. GMRES) and enforce the compression to a tensor format at
every step. The formal analysis is quite straightforward, but tensor ranks of
the vectors which span the Krylov subspace grow rapidly with iterations, and
the methods struggle in practice.
The first attempt to merge classical iterative algorithms and DMRG/MPS methods
was made by White (2005), where the second Krylov vector is used to expand the
search space on the optimisation step.
The idea proved to be useful, but the implementation was based on a fair
amount of physical intuition, and the algorithm was not completely justified.
We have recently proposed the AMEn algorithm for linear systems, which also
injects the gradient direction in the optimisation step, but in a way that
allows us to prove the global convergence of the resulting scheme. The
scheme can easily be applied to the computation of the ground state --- the
differences from the algorithm of S. White are emphasized in Dolgov and
Savostyanov (2013).
The AMEn scheme has already been acknowledged in the NLA community --- for example, it
was recently applied to the computation of extreme eigenstates by Kressner,
Steinlechner and Uschmajew (2013), using the block-TT format proposed in
Dolgov, Khoromskij, Oseledets and Savostyanov (2014).
To date, the AMEn algorithm has been applied
- to simulate the NMR spectra of large molecules (such as ubiquitin),
- to solve the Fokker-Planck equation for non-Newtonian polymeric flows,
- to the chemical master equation describing the mesoscopic model of gene
regulatory networks,
- to solve the Heisenberg model problem for a periodic spin chain.
We aim to extend this framework and the analysis to other problems of NLA:
eigenproblems, time-dependent problems, high-dimensional interpolation, and
matrix functions; as well as to a wider list of high-dimensional
problems.
This is joint work with Sergey Dolgov from the Max-Planck Institute for
Mathematics in the Sciences, Leipzig, Germany.
rock detachment based on Allaire, Jouve and Van Goethem's
implementation of Francfort and Marigo's model of damage in brittle
solids. As such, local minimizers of a cost functional involving both
stored elastic energy and a damage penalization term are sought by
using a procedure which alternates between approximately solving a
linear elasticity system and advancing a transport equation for a
level set function describing the loci of still-attached rock. We pay
special attention to the mixed finite element method used in the
approximation of the linear elasticity system.
In this talk, we first describe our recent work on developing a new, high-order, global RBF method for numerically solving PDEs on relatively general surfaces, with a specific focus on reaction-diffusion equations. The method is quite flexible, only requiring a set of ``scattered'' nodes on the surface and the corresponding normal vectors to the surface at these nodes. We next present a new scalable local method based on the RBF-FD approach with this same flexibility. This is the first application of the RBF-FD method to general surfaces. We conclude with applications of these methods to some biologically relevant problems.
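The following small Python sketch (ours, purely illustrative; the talk's methods additionally use the surface normals and, for RBF-FD, local stencils) shows the basic RBF machinery on scattered nodes: build a kernel matrix, solve for coefficients, and evaluate the resulting interpolant.

    import numpy as np

    # Global RBF interpolation with a Gaussian kernel phi(r) = exp(-(eps*r)^2):
    # solve A c = f with A_ij = phi(|x_i - x_j|), then evaluate
    # s(x) = sum_j c_j phi(|x - x_j|).
    def rbf_interpolant(nodes, values, eps=2.0):
        dist = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
        coeffs = np.linalg.solve(np.exp(-(eps * dist) ** 2), values)
        def s(x):
            r = np.linalg.norm(x - nodes, axis=-1)
            return np.exp(-(eps * r) ** 2) @ coeffs
        return s

    # Example: interpolate f(x, y) = x*y from 50 scattered nodes in the unit square
    nodes = np.random.rand(50, 2)
    s = rbf_interpolant(nodes, nodes[:, 0] * nodes[:, 1])
    print(s(np.array([0.3, 0.7])))    # approximately 0.21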
This talk represents joint work with Edward Fuselier (High Point University), Aaron Fogelson, Mike Kirby, and Varun Shankar (all at the University of Utah).
algorithms characterized by evaluating the objective function
using a step size and a number of (polling) directions.
When applied to the minimization of smooth functions, the
polling directions are typically taken from positive spanning sets
which in turn must have at least n+1 vectors in an n-dimensional variable space.
In addition, to ensure the global convergence of these algorithms,
the positive spanning sets used throughout the iterations
must be uniformly non-degenerate in the sense of having a positive
(cosine) measure bounded away from zero.
\\
\\
However, recent numerical results indicated that randomly generating
the polling directions without imposing the positive spanning property
can improve the performance of these methods, especially when the number
of directions is chosen considerably smaller than n+1.
\\
\\
In this talk, we analyze direct-search algorithms when the polling
directions are probabilistic descent, meaning that with a certain
probability at least one of them is of descent type. Such a framework
enjoys almost-sure global convergence. More interestingly, we will show
a global decay rate of $1/\sqrt{k}$ for the gradient size, with
overwhelmingly high probability, matching the corresponding rate for
the deterministic versions of the gradient method or of direct search.
Our analysis helps to explain the observed numerical behavior and guides the choice of
the number of polling directions.
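The following toy Python sketch (ours, a simplified instance of the framework rather than the algorithm analyzed in the talk) illustrates direct search with a small number m of randomly generated polling directions and a sufficient-decrease acceptance test.

    import numpy as np

    # Direct search with m random unit polling directions per iteration: accept a
    # trial point only if it improves f by at least c * alpha^2, otherwise halve
    # the step size alpha; expand alpha after a successful iteration.
    def direct_search(f, x0, m=2, alpha=1.0, c=0.5, max_iter=1000, tol=1e-8):
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(max_iter):
            if alpha < tol:
                break
            success = False
            for _ in range(m):
                d = np.random.randn(x.size)
                d /= np.linalg.norm(d)               # random unit polling direction
                trial = x + alpha * d
                f_trial = f(trial)
                if f_trial < fx - c * alpha ** 2:    # sufficient decrease
                    x, fx, success = trial, f_trial, True
                    break
            alpha = 2.0 * alpha if success else 0.5 * alpha
        return x, fx

    # Example: minimize a quadratic in R^10 while polling only m = 2 directions (< n+1)
    x_best, f_best = direct_search(lambda x: float(np.sum(x ** 2)), np.ones(10))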
\\
\\
This is joint work with Clément Royer, Serge Gratton, and Zaikun Zhang.
We propose a strategy which allows the computation of eigenvalue enclosures for the Maxwell operator by means of the finite element method. The origins of this strategy can be traced back over 20 years. One of its main features lies in the fact that it can be implemented on any type of regular mesh (structured or otherwise) and any type of elements (nodal or otherwise). In the first part of the talk we formulate a general framework which is free from spectral pollution and allows estimation of eigenfunctions.
We then prove the convergence of the method, which implies precise convergence rates for nodal finite elements. Various numerical experiments on benchmark geometries, with and without symmetries, are reported.
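For context, the underlying cavity eigenvalue problem (written here in its standard form, for a perfectly conducting boundary) reads
\[
\nabla\times\bigl(\mu^{-1}\,\nabla\times\mathbf{E}\bigr) = \omega^{2}\,\varepsilon\,\mathbf{E}
\ \text{in }\Omega,
\qquad
\nabla\cdot(\varepsilon\mathbf{E}) = 0 \ \text{in }\Omega,
\qquad
\mathbf{E}\times\mathbf{n} = 0 \ \text{on }\partial\Omega,
\]
whose naive Galerkin discretisations are well known to produce spurious eigenvalues; the strategy described above is designed to rule these out while still delivering certified two-sided bounds.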
pivoting (block LU_PRRP), an algorithm based on strong
rank revealing QR for the panel factorization.
Block LU_PRRP is more stable than Gaussian elimination with partial
pivoting (GEPP), with a theoretical upper bound on the growth factor
of $(1+ \tau b)^{(n/ b)-1}$, where $b$ is the size of the panel used
during the block factorization, $\tau$ is a parameter of the strong
rank revealing QR factorization, and $n$ is the number of columns of
the matrix. For example, if the size of the panel is $b = 64$, and
$\tau = 2$, then $(1+2b)^{(n/b)-1} = (1.079)^{n-64} \ll 2^{n-1}$, where
$2^{n-1}$ is the upper bound of the growth factor of GEPP. Our
extensive numerical experiments show that the new factorization scheme
is as numerically stable as GEPP in practice, but it is more resistant
to some pathological cases where GEPP fails. We note that the block LU_PRRP
factorization performs only $O(n^2 b)$ additional floating-point operations
compared to GEPP.
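A quick Python check (ours) of the arithmetic behind the bounds quoted above, for panel size b = 64 and tau = 2:

    import math

    # Compare the block LU_PRRP growth-factor bound (1 + tau*b)^(n/b - 1) with the
    # classical GEPP bound 2^(n-1), on a logarithmic scale to avoid overflow.
    b, tau = 64, 2.0
    per_column = (1.0 + tau * b) ** (1.0 / b)        # 129**(1/64) ~ 1.079
    print(per_column)
    for n in (128, 1024, 4096):
        log2_lu_prrp = (n / b - 1) * math.log2(1.0 + tau * b)   # log2 of LU_PRRP bound
        log2_gepp = n - 1                                        # log2 of GEPP bound
        print(n, log2_lu_prrp, log2_gepp)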
A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary. While this paradigm has led to numerous empirical successes in various fields ranging from image to audio processing, there have only been a few theoretical arguments supporting this evidence. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not been fully analyzed yet. Considering a probabilistic model of sparse signals, we show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals. Our study takes into account the case of over-complete dictionaries and noisy signals, thus extending previous work limited to noiseless settings and/or under-complete dictionaries. The analysis we conduct is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the level of noise, can scale with respect to the dimension of the signals, the number of atoms, the sparsity and the number of observations.
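For reference, a standard formulation of the (non-convex) sparse coding problem discussed above is
\[
\min_{D\in\mathcal{D},\;\alpha}\;\; \tfrac12\,\|X - D\alpha\|_{F}^{2} \;+\; \lambda \sum_{i=1}^{n}\|\alpha_{i}\|_{1},
\]
where the columns of $X$ are the $n$ observed signals, the columns of $D$ are the (possibly over-complete) dictionary atoms, typically constrained to have unit norm, and $\alpha_{i}$ is the sparse code of the $i$-th signal; the local minima of this joint problem are the objects analyzed here.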
This is joint work with Rodolphe Jenatton & Francis Bach.
The context of data assimilation in oceanography will be described as well as the computational challenges associated with it. A class of numerical linear algebra methods is described whose purpose is to exploit the problem structure in order to reduce the computational burden and provide provable convergence results for what remains a (very large) nonlinear problem. This class belongs to the Krylov-space family of methods and the special structure used is the imbalance between the dimensions of the state space and the observation space. It is also shown how inexact matrix-vector products can be exploited. Finally, preconditioning issues and resulting adaptations of the trust-region methodology for nonlinear minimization will also be outlined.
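For context, the variational data assimilation problems in question are nonlinear weighted least-squares problems of the form (in standard notation)
\[
\min_{x}\;\; \tfrac12\,(x - x_{b})^{T} B^{-1} (x - x_{b})
\;+\; \tfrac12 \sum_{i}\bigl(y_{i} - \mathcal{H}_{i}(x)\bigr)^{T} R_{i}^{-1}\bigl(y_{i} - \mathcal{H}_{i}(x)\bigr),
\]
where $x$ is the (very high-dimensional) ocean state, $x_{b}$ a background estimate, $y_{i}$ the observations, $\mathcal{H}_{i}$ the observation operators, and $B$, $R_{i}$ the background and observation error covariances; the number of observations being much smaller than the state dimension is precisely the structural imbalance exploited by the methods above.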
By Serge Gratton, Selime Gurol, Philippe Toint, Jean Tshimanga and Anthony Weaver.
Sparse matrix factorization involves a mix of regular and irregular computation, which is a particular challenge when trying to obtain high performance on the highly parallel general-purpose computing cores available on graphics processing units (GPUs). We present a sparse multifrontal QR factorization method that meets this challenge, and is up to ten times faster than a highly optimized method on a multicore CPU. Our method is unique compared with prior methods, since it factorizes many frontal matrices in parallel, and keeps all the data transmitted between frontal matrices on the GPU. A novel bucket scheduler algorithm extends the communication-avoiding QR factorization for dense matrices, by exploiting more parallelism and by exploiting the staircase form present in the frontal matrices of a sparse multifrontal method.
This is joint work with Nuri Yeralan and Sanjay Ranka.
derivatives of holomorphic functions (as required, e.g., in random matrix
theory to extract probabilities from a generating function) turns out to
be a surprisingly rich topic: there are connections to asymptotic analysis,
the theory of entire functions, and to algorithmic graph theory.
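The basic tool, stated here for context, is the Cauchy integral representation of the derivatives, discretized by the trapezoidal rule on a circle of radius $r$ with $m$ points:
\[
f^{(n)}(0) \;=\; \frac{n!}{2\pi \mathrm{i}}\oint_{|z|=r}\frac{f(z)}{z^{\,n+1}}\,\mathrm{d}z
\;\approx\; \frac{n!}{m\,r^{\,n}}\sum_{j=0}^{m-1} f\!\bigl(r e^{2\pi \mathrm{i} j/m}\bigr)\, e^{-2\pi \mathrm{i} j n/m};
\]
the interplay between the contour radius $r$, the growth of $f$, and rounding errors is where the asymptotic analysis and the theory of entire functions enter.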
Convex regularization has become a popular approach to solve large-scale inverse or data separation problems. A prominent example is the problem of identifying a sparse signal from linear samples by minimizing the l_1 norm under linear constraints. Recent empirical research indicates that many convex regularization problems on random data exhibit a phase transition phenomenon: the probability of successfully recovering a signal changes abruptly from zero to one as the number of constraints increases past a certain threshold. We present a rigorous analysis that explains why phase transitions are ubiquitous in convex optimization. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems, to demixing problems, and to cone programs with random affine constraints. These applications depend on a new summary parameter, the statistical dimension of cones, that canonically extends the dimension of a linear subspace to the class of convex cones.
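For reference, the statistical dimension of a closed convex cone $C\subset\mathbb{R}^{d}$ is
\[
\delta(C) \;=\; \mathbb{E}\bigl[\|\Pi_{C}(g)\|_{2}^{2}\bigr],
\qquad g \sim \mathcal{N}(0, I_{d}),
\]
where $\Pi_{C}$ denotes the Euclidean projection onto $C$; for a linear subspace $L$ this reduces to $\delta(L)=\dim L$, which is the sense in which it canonically extends the dimension.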
Joint work with Dennis Amelunxen, Mike McCoy and Joel Tropp.
In this talk, we investigate in a unified way the structural properties of a large class of convex regularizers for linear inverse problems. We consider regularizations with convex positively 1-homogeneous functionals (so-called gauges) which are piecewise smooth. Singularities of such functionals are crucial to force the solution of the regularization to belong to a union of linear spaces of low dimension. These spaces (the so-called "models") allow one to encode many priors on the data to be recovered, conforming to some notion of simplicity/low complexity. This family of priors encompasses many special instances routinely used in regularized inverse problems such as L^1, L^1-L^2 (group sparsity), nuclear norm, or the L^infty norm. The piecewise-regular requirement is flexible enough to cope with analysis-type priors that include a pre-composition with a linear operator, such as for instance the total variation and polyhedral gauges. This notion is also stable under summation of regularizers, thus making it possible to handle mixed regularizations.
The main set of contributions of this talk is dedicated to assessing the theoretical recovery performance of this class of regularizers. We provide sufficient conditions that allow one to provably control the deviation of the recovered solution from the true underlying object, as a function of the noise level. More precisely we establish two main results. The first one ensures that the solution to the inverse problem is unique and lives on the same low-dimensional subspace as the true vector to recover, with the proviso that the minimal signal-to-noise ratio is large enough. This extends previous results well known for the L^1 norm [1], the analysis L^1 semi-norm [2], and the nuclear norm [3] to the general class of piecewise smooth gauges. In the second result, we establish L^2 stability by showing that the L^2 distance between the recovered and true vectors is within a factor of the noise level, thus extending results that hold for coercive convex positively 1-homogeneous functionals [4].
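Schematically, the regularized inverse problems considered are of the form (with $\Phi$ the forward operator, $w$ the noise, and $J$ a piecewise smooth gauge)
\[
x^{\star} \in \operatorname*{arg\,min}_{x}\;\; \tfrac12\,\|y - \Phi x\|_{2}^{2} + \lambda\, J(x),
\qquad y = \Phi x_{0} + w,
\]
with $J$ ranging over the priors listed above ($L^1$, group $L^1$-$L^2$, nuclear norm, $L^\infty$, analysis priors $J(x)=\|D^{*}x\|_{1}$, ...); the two results then describe when $x^{\star}$ identifies the low-dimensional model of $x_{0}$ and how $\|x^{\star}-x_{0}\|_{2}$ scales with the noise level.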
This is joint work with S. Vaiter, C. Deledalle, M. Golbabaee and J. Fadili. For more details, see [5].
Bibliography:
[1] J.J. Fuchs, On sparse representations in arbitrary
redundant bases. IEEE Transactions on Information Theory, 50(6):1341-1344,
2004.
[2] S. Vaiter, G. Peyré, C. Dossal, J. Fadili, Robust
Sparse Analysis Regularization, to appear in IEEE Transactions on Information
Theory, 2013.
[3] F. Bach, Consistency of trace norm minimization,
Journal of Machine Learning Research, 9, 1019-1048, 2008.
[4] M. Grasmair, Linear convergence rates for Tikhonov
regularization with positively homogeneous functionals. Inverse Problems,
27(7):075014, 2011.
[5] S. Vaiter, M. Golbabaee, J. Fadili, G. Peyré, Model
Selection with Piecewise Regular Gauges, Preprint hal-00842603, 2013.