Forthcoming events in this series


Thu, 12 Jun 2014
14:00
L5

Cyclic Schemes for PDE-Based Image Analysis

Professor Joachim Weickert
(Universität des Saarlandes)
Abstract

Many successful methods in image processing and computer vision involve parabolic and elliptic partial differential equations (PDEs). Thus, there is a growing demand for simple and highly efficient numerical algorithms that work for a broad class of problems. Moreover, these methods should also be well-suited for low-cost parallel hardware such as GPUs.

In this talk we show that two of the simplest methods for the numerical analysis of PDEs can lead to remarkably efficient algorithms when they are only slightly modified. To this end, we consider cyclic variants of the explicit finite difference scheme for approximating parabolic problems, and of the Jacobi overrelaxation method for solving systems of linear equations.
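
To make the flavour of such schemes concrete, here is a minimal sketch (assuming the classical Super Time Stepping parameters of Alexiades et al., not the novel cycles discussed in the talk) of an explicit 1D diffusion scheme driven by a cyclic step-size sequence: individual steps violate the usual stability limit $\tau \le h^2/2$, but each cycle is stable as a whole.

import numpy as np

def sts_cycle(n_steps, tau_stab, nu=0.05):
    # Super Time Stepping cycle (Alexiades et al. 1996): step sizes built on
    # Chebyshev nodes; some exceed tau_stab, yet the whole cycle is stable.
    j = np.arange(1, n_steps + 1)
    return tau_stab / ((nu - 1.0) * np.cos((2 * j - 1) * np.pi / (2 * n_steps)) + 1.0 + nu)

def diffuse_cyclic(u, h, n_cycles=10, n_steps=8):
    # Explicit 1D linear diffusion u_t = u_xx with cyclically varying steps;
    # homogeneous Dirichlet boundary values are kept fixed.
    taus = sts_cycle(n_steps, tau_stab=0.5 * h * h)
    for _ in range(n_cycles):
        for tau in taus:
            u[1:-1] += tau / (h * h) * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

u = np.zeros(101)
u[50] = 1.0                      # discrete impulse
u = diffuse_cyclic(u, h=0.01)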

Although cyclic algorithms have been around in the numerical analysis community for a long time, they have never been very popular, for a number of reasons. We argue that most of these reasons have become obsolete and that cyclic methods ideally satisfy the needs of modern image processing applications. Interestingly, this transfer of knowledge is not a one-way road from numerical analysis to image analysis: by considering a factorisation of general smoothing filters, we introduce novel, signal-processing-based ways of deriving cycle parameters. They lead to hitherto unexplored methods with alternative parameter cycles. These methods offer better smoothing properties than classical numerical concepts such as Super Time Stepping and the cyclic Richardson algorithm.

We present a number of prototypical applications that demonstrate the wide applicability of our cyclic algorithms. They include isotropic and anisotropic nonlinear diffusion processes, higher dimensional variational problems, and higher order PDEs.

Thu, 29 May 2014
14:00
L5

Atomistic/Continuum Multiscale Methods

Christoph Ortner
(University of Warwick)
Abstract

For many questions of scientific interest, all-atom molecular simulations are still out of reach, in particular in materials engineering, where large numbers of atoms and often expensive force fields are required. A long-standing challenge has been to construct concurrent atomistic/continuum coupling schemes that use atomistic models in regions of space where high accuracy is required, together with computationally efficient continuum models (e.g., FEM) of the elastic far fields.

Many different mechanisms for achieving such atomistic/continuum couplings have been developed. To assess their relative merits, in particular accuracy/cost ratio, I will present a numerical analysis framework. I will use this framework to analyse in more detail a particularly popular scheme (the BQCE scheme), identifying key approximation parameters which can then be balanced (in a surprising way) to obtain an optimised formulation.
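
Schematically (a sketch of the blended-energy idea with notation assumed here, not taken from the talk), the BQCE energy interpolates between a site energy $V_\ell$ and its Cauchy-Born continuum counterpart $W$ via a blending function $\beta$:

\[ E^{\beta}(y) = \sum_{\ell \in \mathcal{L}} \big(1 - \beta(\ell)\big)\, V_\ell(y) + \int_{\Omega} \beta\, W(\nabla y)\, \mathrm{d}x, \]

with $\beta = 0$ in the atomistic region of interest and $\beta = 1$ in the continuum far field; how $\beta$ and the mesh are chosen is exactly where the approximation parameters enter.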

Finally, I will demonstrate that this analysis shows how to remove a severe bottleneck in the BQCE scheme, leading to a new scheme with an optimal convergence rate.

Thu, 22 May 2014
14:00
L5

A finite element exterior calculus framework for the rotating shallow water equations

Dr Colin Cotter
(Imperial College, London)
Abstract

We describe discretisations of the shallow water equations on the sphere using the framework of finite element exterior calculus. The formulation can be viewed as an extension of the classical staggered C-grid energy-enstrophy conserving and energy-conserving/enstrophy-dissipating schemes which were defined on latitude-longitude grids. This work is motivated by the need to use pseudo-uniform grids on the sphere (such as an icosahedral grid or a cube grid) in order to achieve good scaling on massively parallel computers, and forms part of the multi-institutional UK “Gung Ho” project, which aims to design a next generation dynamical core for the Met Office Unified Model climate and weather prediction system. The rotating shallow water equations are a single-layer model that is used to benchmark the horizontal component of numerical schemes for weather prediction models.
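
For reference (standard form, with a flat bottom assumed), the rotating shallow water equations for velocity $u$ and layer depth $h$ read

\[ \frac{\partial u}{\partial t} + (u \cdot \nabla) u + f\, u^{\perp} + g \nabla h = 0, \qquad \frac{\partial h}{\partial t} + \nabla \cdot (h u) = 0, \]

where $f$ is the Coriolis parameter, $g$ the gravitational acceleration, and $u^{\perp}$ the velocity rotated through 90 degrees.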

We show, within the finite element exterior calculus framework, that it is possible to build numerical schemes with horizontal velocity and layer depth that have a conserved diagnostic potential vorticity field, by making use of the geometric properties of the scheme. The schemes also conserve energy and enstrophy, which arise naturally as conserved quantities out of a Poisson bracket formulation. We show that it is possible to modify the discretisation, motivated by physical considerations, so that enstrophy is dissipated, either by using the Anticipated Potential Vorticity Method, or by inducing stabilised advection schemes for potential vorticity such as SUPG or higher-order Taylor-Galerkin schemes. We illustrate our results with convergence tests and numerical experiments obtained from a FEniCS implementation on the sphere.

Thu, 15 May 2014
14:00
L5

Plane Wave Discontinuous Galerkin Methods: Exponential Convergence of the hp-version

Andrea Moiola
(Reading University)
Abstract

Computer simulation of the propagation and interaction of linear waves
is a core task in computational science and engineering.
The finite element method represents one of the most common
discretisation techniques for Helmholtz and Maxwell's equations, which
model time-harmonic acoustic and electromagnetic wave scattering.
At medium and high frequencies, resolution requirements and the
so-called pollution effect entail an excessive computational effort
and prevent the effective use of standard finite element schemes.
Wave-based Trefftz methods offer a possible way to deal with this
problem: trial and test functions are special solutions of the
underlying PDE inside each element, thus the information about the
frequency is directly incorporated in the discrete spaces.
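
Concretely, for the Helmholtz equation $-\Delta u - k^2 u = 0$, the local Trefftz spaces are spanned by plane waves

\[ u_j(x) = e^{\,i k\, d_j \cdot x}, \qquad |d_j| = 1, \quad j = 1, \dots, p, \]

each of which solves the equation exactly, so the wavenumber $k$ is built directly into the discrete space.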

This talk is concerned with a family of those methods: the so-called
Trefftz-discontinuous Galerkin (TDG) methods, which include the
well-known ultraweak variational formulation (UWVF).
We derive a general formulation of the TDG method for Helmholtz
impedance boundary value problems and we discuss its well-posedness
and quasi-optimality.
A complete theory for the (a priori) h- and p-convergence for plane
and circular/spherical wave finite element spaces has been developed,
relying on new best approximation estimates for the considered
discrete spaces.
In the two-dimensional case, on meshes with very general element
shapes geometrically graded towards domain corners, we prove
exponential convergence of the discrete solution in terms of the number of
unknowns.

This is joint work with Ralf Hiptmair, Christoph Schwab (ETH Zurich,
Switzerland) and Ilaria Perugia (Vienna, Austria).

Thu, 01 May 2014
14:00
L5

Adjoint sensitivity analysis in Thermoacoustics

Dr Matthew Juniper
(Cambridge)
Abstract

Thermoacoustic oscillations occur in combustion chambers when heat release oscillations lock into pressure oscillations. They were first observed in lamps in the 18th century, in rockets in the 1930s, and are now one of the most serious problems facing gas turbine manufacturers.

This theoretical and numerical study concerns an infinite-rate chemistry diffusion flame in a tube, which is a simple model for a flame in a combustion chamber. The problem is linearized around the non-oscillating state in order to derive the direct and adjoint equations governing the evolution of infinitesimal oscillations.

The direct equations are used to predict the frequency, growth rate, and mode shape of the most unstable thermoacoustic oscillations. The adjoint equations are then used to calculate how the frequency and growth rate change in response to (i) changes to the base state, such as the flame shape or the composition of the fuel, and (ii) generic passive feedback mechanisms that could be added to the device. This information can be used to stabilize the system, which is verified by subsequent experiments.
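
The mechanism can be summarised by standard first-order eigenvalue perturbation theory (a generic statement, not the talk's specific flame model): if the linearised problem is $L(p)\, q = \lambda\, q$ with adjoint eigenfunction $q^\dagger$, then a perturbation $\delta L$ of the operator shifts the eigenvalue by

\[ \delta\lambda = \frac{\langle q^\dagger, \delta L\, q \rangle}{\langle q^\dagger, q \rangle}, \]

so one direct solve and one adjoint solve give the sensitivity of the growth rate and frequency to every candidate modification at once.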

This analysis reveals that, as expected from a simple model, the phase delay between velocity and heat-release fluctuations is the key parameter in determining the sensitivities. It also reveals that this thermoacoustic system is exceedingly sensitive to changes in the base state. The analysis can be extended to more accurate models and is a promising new tool for the analysis and control of thermoacoustic oscillations.

Thu, 24 Apr 2014
14:00
L4

Modeling of reactive events

Professor Eric Van den Eijnden
(New York University)
Abstract

Dynamics in nature often proceed in the form of reactive events, also known as activated processes. The system under study spends very long periods of time in various metastable states; only very rarely does it transition from one such state to another. Understanding the dynamics of such events requires us to study the ensemble of transition paths between the different metastable states. Transition path theory (TPT) is a general mathematical framework developed for this purpose. It is also the foundation for modern numerical algorithms such as the string method for finding transition pathways, or milestoning for calculating reaction rates, and it can also be used in the context of Markov State Models (MSMs). In this talk, I will review the basic ingredients of transition path theory and discuss connections with transition state theory (TST), as well as approaches to metastability based on potential theory and large deviation theory. I will also discuss how the string method arises when seeking approximate solutions within the framework of transition path theory, the connections between milestoning and TPT, and the way the theory helps in building MSMs. The concepts and methods will be illustrated using examples from molecular dynamics, materials science and atmosphere/ocean science.
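
A central object of TPT is the committor function (stated here in its generic form): for metastable sets $A$ and $B$, $q(x)$ is the probability that the dynamics started at $x$ reaches $B$ before $A$, and it satisfies the backward Kolmogorov equation

\[ \mathcal{L} q = 0 \ \text{in } (A \cup B)^c, \qquad q|_{A} = 0, \qquad q|_{B} = 1, \]

where $\mathcal{L}$ is the generator of the dynamics; rates and probability currents of the transition path ensemble are then expressed in terms of $q$ and the equilibrium density.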

Thu, 13 Mar 2014

14:00 - 15:00
L5

Instance optimality of an AFEM with maximum marking strategy

Professor Christian Kreuzer
(Ruhr University Bochum)
Abstract

Adaptive finite element methods (AFEMs) with Dörfler's marking strategy are known to converge with optimal asymptotic rates. Practical experience shows that AFEMs with a maximum marking strategy produce optimal results while being less sensitive to the choice of the marking parameter.


In this talk, we prove that an AFEM with a modified maximum strategy is even instance optimal for the total error, i.e., for the sum of the error and the oscillation. This is a non-asymptotic optimality result. Our approach uses new techniques based on the minimisation of the Dirichlet energy and a newly developed tree structure on the nodes of admissible triangulations.
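
For concreteness, given elementwise error indicators $\eta_T$ and a parameter $\theta \in (0,1]$, the two strategies mark

\[ \text{maximum strategy:}\quad \mathcal{M} = \{\, T : \eta_T \ge \theta \max_{T'} \eta_{T'} \,\}, \qquad \text{Dörfler:}\quad \text{smallest } \mathcal{M} \text{ with } \sum_{T \in \mathcal{M}} \eta_T^2 \ge \theta \sum_{T} \eta_T^2. \]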

Thu, 06 Mar 2014

14:00 - 15:00
L5

Kullback-Leibler Approximation Of Probability Measures

Professor Andrew Stuart
(University of Warwick)
Abstract

Many problems in the physical sciences require the determination of an unknown function from a finite set of indirect measurements. Examples include oceanography, oil recovery, water resource management and weather forecasting. The Bayesian approach to these problems is natural for many reasons, including the under-determined and ill-posed nature of the inversion, the noise in the data and the uncertainty in the differential equation models used to describe complex multiscale physics. The object of interest in the Bayesian approach is the posterior probability distribution on the unknown field [1].


However, the Bayesian approach presents a computationally formidable task, as it results in the need to probe a probability measure on a separable Banach space. Markov chain Monte Carlo (MCMC) methods may be used to achieve this [2], but can be prohibitively expensive. In this talk I will discuss the approximation of such probability measures by a Gaussian measure, looking for the closest approximation with respect to the Kullback-Leibler divergence. This methodology is widely used in machine learning [3]. In the context of target measures on a separable Banach space which themselves have a density with respect to a Gaussian, I will show how to make sense of the resulting problem in the calculus of variations [4]. Furthermore, I will show how the approximate Gaussians can be used to speed up MCMC sampling of the posterior distribution [5].
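
In symbols (a generic statement of the variational problem, with notation assumed here): with target measure $\mu$ and $\mathcal{A}$ the class of Gaussian measures, one seeks

\[ \nu^{\star} = \arg\min_{\nu \in \mathcal{A}} D_{\mathrm{KL}}(\nu \,\|\, \mu), \qquad D_{\mathrm{KL}}(\nu \,\|\, \mu) = \int \log\Big(\frac{d\nu}{d\mu}\Big)\, d\nu, \]

where the direction of the divergence (the approximation $\nu$ relative to the target $\mu$) is the one used in variational inference.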


[1] A.M. Stuart, "Inverse problems: a Bayesian perspective", Acta Numerica 19 (2010). http://arxiv.org/abs/1302.6989

[2] S.L. Cotter, G.O. Roberts, A.M. Stuart and D. White, "MCMC methods for functions: modifying old algorithms to make them faster", Statistical Science 28 (2013). http://arxiv.org/abs/1202.0709

[3] C.M. Bishop, "Pattern recognition and machine learning", Springer, 2006.

[4] F.J. Pinski, G. Simpson, A.M. Stuart and H. Weber, "Kullback-Leibler approximations for measures on infinite dimensional spaces". http://arxiv.org/abs/1310.7845

[5] F.J. Pinski, G. Simpson, A.M. Stuart and H. Weber, "Algorithms for Kullback-Leibler approximation of probability measures in infinite dimensions". In preparation.

Thu, 27 Feb 2014

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Alternating minimal energy methods for linear systems in higher dimensions

Dr Dmitry Savostyanov
(University of Southampton)
Abstract

Where high-dimensional problems are concerned, few algorithms can break the curse of dimensionality and solve them efficiently and reliably. Among those, tensor product algorithms, which implement the idea of separation of variables for multi-index arrays (tensors), seem to be the most general and also very promising. They originated in quantum physics and chemistry and descend broadly from the density matrix renormalization group (DMRG) and matrix product states (MPS) formalisms. The same tensor formats were recently re-discovered in the numerical linear algebra (NLA) community as the tensor train (TT) format.
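
For reference, in the TT/MPS representation a $d$-dimensional array is factorised as a product of matrices depending on one index each (standard definition):

\[ A(i_1, i_2, \ldots, i_d) = G_1(i_1)\, G_2(i_2) \cdots G_d(i_d), \qquad G_k(i_k) \in \mathbb{R}^{r_{k-1} \times r_k}, \quad r_0 = r_d = 1, \]

so that storage and work scale linearly in $d$ for moderate ranks $r_k$.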

Algorithms developed in the quantum physics community are based on optimisation in tensor formats, performed sequentially over all components of the format (i.e. all sites or modes).
The DMRG/MPS schemes are very efficient but very difficult to analyse, and at the moment only local convergence results for the simplest algorithm are available. In the NLA community, a common approach is to use a classical iterative scheme (e.g. GMRES) and enforce compression to a tensor format at every step. The formal analysis is quite straightforward, but the tensor ranks of the vectors which span the Krylov subspace grow rapidly with the iterations, and the methods struggle in practice.

The first attempt to merge classical iterative algorithms and DMRG/MPS methods was made by White (2005), where the second Krylov vector is used to expand the search space at the optimisation step.
The idea proved to be useful, but the implementation was based on a fair amount of physical intuition, and the algorithm is not completely justified.

We have recently proposed the AMEn algorithm for linear systems, which also injects the gradient direction at the optimisation step, but in a way that allows one to prove the global convergence of the resulting scheme. The scheme can easily be applied to the computation of the ground state; the differences from the algorithm of S. White are emphasized in Dolgov and Savostyanov (2013).
The AMEn scheme is already acknowledged in the NLA community; for example, it was recently applied to the computation of extreme eigenstates by Kressner, Steinlechner and Uschmajew (2013), using the block-TT format proposed in Dolgov, Khoromskij, Oseledets and Savostyanov (2014).

To date, the AMEn algorithm has been applied
 - to simulate the NMR spectra of large molecules (such as ubiquitin),
 - to solve the Fokker-Planck equation for non-Newtonian polymeric flows,
 - to the chemical master equation describing the mesoscopic model of gene regulatory networks,
 - to solve the Heisenberg model problem for a periodic spin chain.
We aim to extend this framework and the analysis to other problems of NLA (eigenproblems, time-dependent problems, high-dimensional interpolation, and matrix functions), as well as to a wider list of high-dimensional problems.

This is joint work with Sergey Dolgov from the Max-Planck Institute for Mathematics in the Sciences, Leipzig, Germany.

Thu, 13 Feb 2014

14:00 - 15:00
L5

Finite element approximation of a quasi-static model of rock detachment

Dr Leonardo Figueroa
(Universidad de Concepción)
Abstract

We report on a numerical implementation of a quasi-static model of rock detachment based on Allaire, Jouve and Van Goethem's implementation of Francfort and Marigo's model of damage in brittle solids. As such, local minimizers of a cost functional involving both stored elastic energy and a damage penalization term are sought by a procedure which alternates between approximately solving a linear elasticity system and advancing a transport equation for a level set function describing the loci of still-attached rock. We pay special attention to the mixed finite element method used in the approximation of the linear elasticity system.

Thu, 06 Feb 2014

14:00 - 15:00
L5

Approximation on surfaces with radial basis functions: from global to local methods

Professor Grady Wright
(Boise State University)
Abstract

Radial basis function (RBF) methods are becoming increasingly popular for numerically solving partial differential equations (PDEs) because they are geometrically flexible, algorithmically accessible, and can be highly accurate. There have been many successful applications of these techniques to various types of PDEs defined on planar regions in two and higher dimensions, and to PDEs defined on the surface of a sphere. Originally, these methods were based on global approximations and their computational cost was quite high. Recent efforts have focused on reducing the computational cost by using "local" techniques, such as RBF-generated finite differences (RBF-FD).

In this talk, we first describe our recent work on developing a new, high-order, global RBF method for numerically solving PDEs on relatively general surfaces, with a specific focus on reaction-diffusion equations. The method is quite flexible, requiring only a set of "scattered" nodes on the surface and the corresponding normal vectors to the surface at these nodes. We next present a new scalable local method, based on the RBF-FD approach, with this same flexibility. This is the first application of the RBF-FD method to general surfaces. We conclude with applications of these methods to some biologically relevant problems.
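
In generic form (the standard RBF-FD construction, stated here without the surface-specific and polynomial-augmentation details): the weights $w_j$ of a stencil approximating a linear operator $\mathcal{L}$ at a node $x_c$ from function values at neighbouring nodes $x_1, \dots, x_n$ are obtained by requiring exactness on the RBF basis $\phi$,

\[ \sum_{j=1}^{n} \phi(\|x_i - x_j\|)\, w_j = \mathcal{L}\,\phi(\|x - x_i\|)\Big|_{x = x_c}, \qquad i = 1, \dots, n, \]

which yields sparse, finite-difference-like formulas on scattered nodes.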

This talk represents joint work with Edward Fuselier (High Point University), Aaron Fogelson, Mike Kirby, and Varun Shankar (all at the University of Utah).

Thu, 23 Jan 2014

14:00 - 15:00
L5

Direct Search Based on Probabilistic Descent

Professor Luis Nunes Vicente
(University of Coimbra)
Abstract

Direct-search methods are a popular class of derivative-free algorithms characterized by evaluating the objective function using a step size and a number of (polling) directions. When applied to the minimization of smooth functions, the polling directions are typically taken from positive spanning sets, which in turn must contain at least n+1 vectors in an n-dimensional variable space. In addition, to ensure the global convergence of these algorithms, the positive spanning sets used throughout the iterations must be uniformly non-degenerate, in the sense of having a positive (cosine) measure bounded away from zero.


However, recent numerical results indicate that randomly generating the polling directions without imposing the positive spanning property can improve the performance of these methods, especially when the number of directions is chosen considerably smaller than n+1.


In this talk, we analyze direct-search algorithms whose polling directions are probabilistic descent directions, meaning that with a certain probability at least one of them is of descent type. Such a framework enjoys almost-sure global convergence. More interestingly, we will show a global decay rate of $1/\sqrt{k}$ for the gradient size, with overwhelmingly high probability, matching the corresponding rate for the deterministic versions of the gradient method or of direct search. Our analysis helps to understand the numerical behavior of these methods and the choice of the number of polling directions.
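
A minimal sketch of the setting (the step-size update and the sufficient-decrease test below are illustrative assumptions, not the authors' exact method): poll along m random unit directions per iteration, where m may be far smaller than n+1.

import numpy as np

def direct_search(f, x, alpha=1.0, m=2, c=1e-4, max_iter=500, seed=0):
    # Direct search with randomly generated polling directions
    # (probabilistic descent): only m unit directions per iteration.
    rng = np.random.default_rng(seed)
    fx = f(x)
    for _ in range(max_iter):
        D = rng.standard_normal((m, x.size))
        D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit polling directions
        for d in D:
            ft = f(x + alpha * d)
            if ft < fx - c * alpha**2:         # sufficient decrease: successful poll
                x, fx, alpha = x + alpha * d, ft, 2.0 * alpha
                break
        else:
            alpha *= 0.5                       # all polls failed: shrink the step
    return x, fx

x_opt, f_opt = direct_search(lambda v: np.sum(v**2), np.ones(10))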


This is joint work with Clément Royer, Serge Gratton, and Zaikun Zhang.

Thu, 05 Dec 2013

14:00 - 15:00
L5

Certified upper and lower bounds for the eigenvalues of the Maxwell operator

Dr Gabriel Barrenechea
(University of Strathclyde)
Abstract

We propose a strategy for computing eigenvalue enclosures for the Maxwell operator by means of the finite element method. The origins of this strategy can be traced back over 20 years. One of its main features is that it can be implemented on any type of regular mesh (structured or otherwise) and any type of elements (nodal or otherwise). In the first part of the talk we formulate a general framework which is free from spectral pollution and allows estimation of eigenfunctions.

We then prove the convergence of the method, which implies precise convergence rates for nodal finite elements. Various numerical experiments on benchmark geometries, with and without symmetries, are reported.

Thu, 28 Nov 2013

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Block LU factorization with panel Rank Revealing Pivoting and its Communication Avoiding version

Dr Amal Khabou
(University of Manchester)
Abstract

We present a block LU factorization with panel rank revealing pivoting (block LU_PRRP), an algorithm based on strong rank revealing QR for the panel factorization. Block LU_PRRP is more stable than Gaussian elimination with partial pivoting (GEPP), with a theoretical upper bound on the growth factor of $(1+ \tau b)^{(n/ b)-1}$, where $b$ is the size of the panel used during the block factorization, $\tau$ is a parameter of the strong rank revealing QR factorization, and $n$ is the number of columns of the matrix. For example, if the size of the panel is $b = 64$ and $\tau = 2$, then $(1+2b)^{(n/b)-1} = (1.079)^{n-64} \ll 2^{n-1}$, where $2^{n-1}$ is the upper bound on the growth factor of GEPP. Our extensive numerical experiments show that the new factorization scheme is as numerically stable as GEPP in practice, but is more resistant to some pathological cases where GEPP fails. We note that the block LU_PRRP factorization performs only $O(n^2 b)$ additional floating point operations compared to GEPP.
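
The quoted comparison is easy to verify numerically (a back-of-the-envelope check using the stated parameter values):

b, tau, n = 64, 2.0, 1024                 # panel size, RRQR parameter, matrix columns
g_blu = (1 + tau * b) ** (n / b - 1)      # block LU_PRRP bound: 129**15, about 4.6e31
g_gepp = 2.0 ** (n - 1)                   # GEPP bound: 2**1023, about 9e307
per_column = (1 + tau * b) ** (1.0 / b)   # 129**(1/64) = 1.079..., versus 2 for GEPP
print(g_blu, g_gepp, per_column)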

Thu, 21 Nov 2013

14:00 - 15:00
L5

Sparse dictionary learning in the presence of noise and outliers

Dr Rémi Gribonval
(INRIA Rennes)
Abstract

A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary. While this paradigm has led to numerous empirical successes in various fields ranging from image to audio processing, only a few theoretical arguments support this evidence. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not yet been fully analyzed. Considering a probabilistic model of sparse signals, we show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals. Our study takes into account the case of over-complete dictionaries and noisy signals, thus extending previous work limited to noiseless settings and/or under-complete dictionaries. The analysis we conduct is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the level of noise, can scale with respect to the dimension of the signals, the number of atoms, the sparsity and the number of observations.
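
In its most common form (an $\ell^1$-penalised variant is assumed here for concreteness), the non-convex problem in question reads

\[ \min_{D \in \mathcal{D},\ (\alpha_i)} \; \sum_{i=1}^{N} \Big( \tfrac12 \| x_i - D \alpha_i \|_2^2 + \lambda \| \alpha_i \|_1 \Big), \]

where the $x_i$ are the training signals and $\mathcal{D}$ constrains the atoms (columns of $D$) to the unit sphere; the question is whether the generating dictionary is, with high probability, a local minimum of this objective.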

This is joint work with Rodolphe Jenatton & Francis Bach.

Thu, 14 Nov 2013

14:00 - 15:00
L5

Range space Krylov methods for data assimilation in meteorology and oceanography

Professor Philippe Toint
(University of Namur)
Abstract

The context of data assimilation in oceanography will be described as well as the computational challenges associated with it. A class of numerical linear algebra methods is described whose purpose is to exploit the problem structure in order to reduce the computational burden and provide provable convergence results for what remains a (very large) nonlinear problem. This class belongs to the Krylov-space family of methods and the special structure used is the imbalance between the dimensions of the state space and the observation space. It is also shown how inexact matrix-vector products can be exploited. Finally, preconditioning issues and resulting adaptations of the trust-region methodology for nonlinear minimization will also be outlined.

By Serge Gratton, Selime Gurol, Philippe Toint, Jean Tshimanga and Anthony Weaver.

Thu, 07 Nov 2013

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Sparse multifrontal QR factorization on the GPU

Professor Tim Davis
(University of Florida)
Abstract

Sparse matrix factorization involves a mix of regular and irregular computation, which is a particular challenge when trying to obtain high-performance on the highly parallel general-purpose computing cores available on graphics processing units (GPUs). We present a sparse multifrontal QR factorization method that meets this challenge, and is up to ten times faster than a highly optimized method on a multicore CPU. Our method is unique compared with prior methods, since it factorizes many frontal matrices in parallel, and keeps all the data transmitted between frontal matrices on the GPU. A novel bucket scheduler algorithm extends the communication-avoiding QR factorization for dense matrices, by exploiting more parallelism and by exploiting the staircase form present in the frontal matrices of a sparse multifrontal method.

This is joint work with Nuri Yeralan and Sanjay Ranka.

Thu, 31 Oct 2013

14:00 - 15:00
L5

Don't be afraid of the 1001st (numerical) derivative

Professor Folkmar Bornemann
(Technical University Munich)
Abstract

The accurate and stable numerical calculation of higher-order derivatives of holomorphic functions (as required, e.g., in random matrix theory to extract probabilities from a generating function) turns out to be a surprisingly rich topic: there are connections to asymptotic analysis, the theory of entire functions, and algorithmic graph theory.
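
The basic tool can be sketched in a few lines (a naive version; the talk's point is precisely that choices such as the radius r must be made carefully to keep the computation accurate and stable): by Cauchy's integral formula, the normalised derivatives $f^{(n)}(0)/n!$ are Fourier coefficients of $f$ on a circle, computable with the FFT.

import numpy as np

def taylor_coeffs(f, n_max, r=1.0, m=2048):
    # Trapezoidal rule on |z| = r applied to Cauchy's formula
    # f^(n)(0)/n! = (1/(2 pi i)) \oint f(z)/z^(n+1) dz:
    # the FFT of the samples, rescaled by r^n, gives the coefficients.
    z = r * np.exp(2j * np.pi * np.arange(m) / m)
    c = np.fft.fft(f(z)) / m
    return c[: n_max + 1] / r ** np.arange(n_max + 1)

# f(z) = exp(z): the n-th Taylor coefficient is 1/n!, so the 10th is about 2.76e-7
print(taylor_coeffs(np.exp, 10)[10].real)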

Thu, 24 Oct 2013

14:00 - 15:00
L5

A geometric theory of phase transitions in convex optimization

Dr Martin Lotz
(University of Manchester)
Abstract

Convex regularization has become a popular approach to solving large-scale inverse or data separation problems. A prominent example is the problem of identifying a sparse signal from linear samples by minimizing the l_1 norm under linear constraints. Recent empirical research indicates that many convex regularization problems on random data exhibit a phase transition phenomenon: the probability of successfully recovering a signal changes abruptly from zero to one as the number of constraints increases past a certain threshold. We present a rigorous analysis that explains why phase transitions are ubiquitous in convex optimization. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems, to demixing problems, and to cone programs with random affine constraints. These applications depend on a new summary parameter, the statistical dimension of cones, which canonically extends the dimension of a linear subspace to the class of convex cones.
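
The parameter in question is defined as follows (the standard definition from this line of work):

\[ \delta(C) = \mathbb{E}\,\|\Pi_C(g)\|_2^2, \qquad g \sim \mathcal{N}(0, I), \]

where $\Pi_C$ denotes the Euclidean projection onto the convex cone $C$; for a linear subspace this reduces to the usual dimension, and the phase transition occurs as the number of measurements crosses $\delta(C)$.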

Joint work with Dennis Amelunxen, Mike McCoy and Joel Tropp.

Thu, 17 Oct 2013

14:00 - 15:00
L5

Model Selection with Piecewise Regular Gauges

Dr Gabriel Peyre
(Université Paris Dauphine)
Abstract

In this talk, we investigate in a unified way the structural properties of a large class of convex regularizers for linear inverse problems. We consider regularizations with convex, positively 1-homogeneous functionals (so-called gauges) which are piecewise smooth. Singularities of such functionals are crucial in forcing the solution of the regularized problem to belong to a union of low-dimensional linear spaces. These spaces (the so-called "models") allow one to encode many priors on the data to be recovered, conforming to some notion of simplicity/low complexity. This family of priors encompasses many special instances routinely used in regularized inverse problems, such as L^1, L^1-L^2 (group sparsity), the nuclear norm, or the L^infty norm. The piecewise-regular requirement is flexible enough to cope with analysis-type priors that include a pre-composition with a linear operator, such as the total variation and polyhedral gauges. This notion is also stable under summation of regularizers, thus enabling one to handle mixed regularizations.
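
Typical members of this family (standard examples, listed here for orientation):

\[ J(x) = \|x\|_1, \qquad J(x) = \sum_{g \in \mathcal{G}} \|x_g\|_2, \qquad J(X) = \|X\|_*, \qquad J(x) = \|D^* x\|_1, \]

i.e. the sparsity-promoting L^1 norm, the group-sparsity L^1-L^2 norm, the nuclear norm (sum of singular values, promoting low rank), and analysis-type priors obtained by pre-composition with an operator $D^*$, such as the total variation.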

The main set of contributions of this talk is dedicated to assessing the theoretical recovery performance of this class of regularizers. We provide sufficient conditions that allow one to provably control the deviation of the recovered solution from the true underlying object, as a function of the noise level. More precisely, we establish two main results. The first ensures that the solution to the inverse problem is unique and lives on the same low-dimensional subspace as the true vector to recover, provided that the minimal signal-to-noise ratio is large enough. This extends previous results well known for the L^1 norm [1], the analysis L^1 semi-norm [2], and the nuclear norm [3] to the general class of piecewise smooth gauges. In the second result, we establish L^2 stability by showing that the L^2 distance between the recovered and true vectors is within a factor of the noise level, thus extending results that hold for coercive convex positively 1-homogeneous functionals [4].

This is joint work with S. Vaiter, C. Deledalle, M. Golbabaee and J. Fadili. For more details, see [5].

Bibliography:
[1] J.J. Fuchs, On sparse representations in arbitrary redundant bases. IEEE Transactions on Information Theory, 50(6):1341-1344, 2004.
[2] S. Vaiter, G. Peyré, C. Dossal, J. Fadili, Robust Sparse Analysis Regularization, to appear in IEEE Transactions on Information Theory, 2013.
[3] F. Bach, Consistency of trace norm minimization, Journal of Machine Learning Research, 9, 1019-1048, 2008.
[4] M. Grasmair, Linear convergence rates for Tikhonov regularization with positively homogeneous functionals. Inverse Problems, 27(7):075014, 2011.
[5] S. Vaiter, M. Golbabaee, J. Fadili, G. Peyré, Model Selection with Piecewise Regular Gauges, Preprint hal-00842603, 2013