InFoMM CDT Group Meeting
Nonlinear aggregation-diffusion equations in the diffusion-dominated and fair competitions regimes
Abstract
We analyse the conditions under which equilibration occurs between two competing effects: repulsion modelled by nonlinear diffusion and attraction modelled by nonlocal interaction. I will discuss several regimes that appear in aggregation-diffusion problems with homogeneous kernels. I will first concentrate on the fair-competition case, distinguishing between porous-medium-like and fast-diffusion-like cases. I will discuss the main qualitative properties in terms of stationary states and minimizers of the free energies. In particular, all the porous-medium cases are critical while the fast-diffusion cases are not. In the second part, I will discuss the diffusion-dominated case, in which this balance leads to continuous, compactly supported, radially decreasing equilibrium configurations for all masses. All stationary states with suitable regularity are shown to be radially symmetric by means of continuous Steiner symmetrisation techniques. Calculus of variations tools allow us to show the existence of global minimizers among these equilibria. Finally, in the particular case of Newtonian interaction in two dimensions, these results lead to uniqueness of equilibria for any given mass up to translation, and to the convergence of solutions of the associated nonlinear aggregation-diffusion equations towards this unique equilibrium profile, up to translations, as time tends to infinity. This talk is based on works in collaboration with S. Hittmeir, B. Volzone and Y. Yao and with V. Calvez and F. Hoffmann.
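For orientation, a common way to write this class of models in the aggregation-diffusion literature (the precise exponents and kernels treated in the talk are as described above; this display is added only as a reference point) is
$$
\partial_t \rho = \Delta \rho^m + \nabla\cdot\big(\rho\,\nabla (W_k * \rho)\big), \qquad W_k(x) = \frac{|x|^k}{k},\quad k\neq 0,
$$
with associated free energy (for $m\neq 1$; a logarithmic entropy replaces the first term when $m=1$)
$$
\mathcal{F}[\rho] = \frac{1}{m-1}\int_{\mathbb{R}^d} \rho^m(x)\,dx + \frac{1}{2}\iint_{\mathbb{R}^d\times\mathbb{R}^d} W_k(x-y)\,\rho(x)\,\rho(y)\,dx\,dy.
$$
In the usual terminology, the fair-competition regime corresponds to the critical exponent $m = 1 - k/d$, for which diffusion and interaction scale identically under mass-preserving dilations, while the diffusion-dominated regime corresponds to $m > 1 - k/d$.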
Each summer, a group of very enthusiastic teenage mathematicians come to spend six weeks in Oxford, working intensively on mathematics. They are participants in the PROMYS Europe programme, now in its fourth year and modelled on PROMYS in Boston, which was founded in 1989. One of the distinctive features of the PROMYS philosophy is that the students spend most of the programme discovering mathematical ideas and making connections for themselves, thereby getting a taste for life as a practising mathematician.
Augmented Arnoldi-Tikhonov Methods for Ill-posed Problems
Abstract
$$
\def\curl#1{\left\{#1\right\}}
\def\vek#1{\mathbf{#1}}
$$
Ill-posed problems arise often in the context of scientific applications in which one cannot directly observe the object or quantity of interest. However, indirect observations or measurements can be made, and the observable data $y$ can be represented as the sought quantity $x$ acted upon by an operator $\mathcal{A}$. Thus we want to solve the operator equation \begin{equation}\label{eqn.Txy} \mathcal{A} x = y, \tag{1} \end{equation} often formulated in some Hilbert space $H$ with $\mathcal{A}:H\rightarrow H$ and $x,y\in H$. The difficulty is that these problems are generally ill-posed, and thus $x$ does not depend continuously on the right-hand side. As $y$ is often derived from measurements, one has instead a perturbed $y^{\delta}$ such that $\left\|y - y^{\delta}\right\|_{H}<\delta$. Due to the ill-posedness, solving (1) with $y^{\delta}$ is therefore not guaranteed to produce a meaningful solution. One class of techniques for treating such problems is Tikhonov regularization. In reconstructing the solution, one balances fidelity to the data against the size of some functional of the reconstruction (e.g., its norm) to mitigate the effects of the ill-posedness. For some $\lambda>0$, we solve \begin{equation}\label{eqn.tikh} x_{\lambda} = \textrm{argmin}_{\widetilde{x}\in H}\left\lbrace \left\|y^{\delta} - \mathcal{A}\widetilde{x} \right\|_{H}^{2} + \lambda \left\|\widetilde{x}\right\|_{H}^{2} \right\rbrace. \end{equation} In this talk, we discuss some new strategies for treating discretized versions of this problem. Here, we consider a discretized, finite-dimensional version of (1), \begin{equation}\label{eqn.Axb} Ax = b \mbox{ with } A\in \mathbb{R}^{n\times n}\mbox{ and } b\in\mathbb{R}^{n}, \end{equation} which inherits a discrete version of the ill-conditioning of (1). We propose methods built on top of the Arnoldi-Tikhonov method of Lewis and Reichel, whereby one builds the Krylov subspace \begin{equation}
\mathcal{K}_{j}(\vek A,\vek w) = {\rm span\,}\curl{\vek w,\vek A\vek w,\vek A^{2}\vek w,\ldots,\vek A^{j-1}\vek w}\mbox{ where } \vek w\in\curl{\vek b,\vek A\vek b}
\end{equation}
and solves the discretized Tikhonov minimization problem projected onto that subspace. We propose to extend this strategy to the setting of augmented Krylov subspace methods. Thus, we project onto a sum of subspaces of the form $\mathcal{U} + \mathcal{K}_{j}$, where $\mathcal{U}$ is a fixed subspace and $\mathcal{K}_{j}$ is a Krylov subspace. It turns out that there are multiple ways to do this, leading to different algorithms. We will explain how these different methods arise mathematically and demonstrate their effectiveness on a few example problems. Along the way, some new mathematical properties of the Arnoldi-Tikhonov method are also proven.
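As a rough illustration of the un-augmented building block (a minimal sketch, not the authors' implementation), the following Python code builds $\mathcal{K}_{j}(A,b)$ with the Arnoldi process, taking $\vek w = \vek b$, and solves the projected Tikhonov problem as a small regularized least-squares problem; all function and variable names are assumptions made for this example.

```python
# Minimal sketch of one Arnoldi-Tikhonov step, assuming w = b.
import numpy as np

def arnoldi(A, w, j):
    """Arnoldi process: V has orthonormal columns spanning K_j(A, w)
    (plus one extra vector) and H is (j+1) x j Hessenberg with
    A V[:, :j] = V @ H."""
    n = w.shape[0]
    V = np.zeros((n, j + 1))
    H = np.zeros((j + 1, j))
    V[:, 0] = w / np.linalg.norm(w)
    for k in range(j):
        v = A @ V[:, k]
        for i in range(k + 1):              # modified Gram-Schmidt
            H[i, k] = V[:, i] @ v
            v = v - H[i, k] * V[:, i]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] < 1e-14:             # breakdown: invariant subspace found
            return V[:, :k + 1], H[:k + 1, :k]
        V[:, k + 1] = v / H[k + 1, k]
    return V, H

def arnoldi_tikhonov(A, b, j, lam):
    """Approximate argmin ||b - A x||^2 + lam ||x||^2 over x in K_j(A, b).

    With x = V_j y and b = beta * V e_1, the residual reduces to
    beta e_1 - H y, so only a small least-squares problem in y is solved."""
    V, H = arnoldi(A, b, j)
    m = H.shape[1]                          # subspace dimension actually reached
    rhs = np.zeros(H.shape[0] + m)
    rhs[0] = np.linalg.norm(b)
    # Append the Tikhonov term as extra least-squares rows.
    M = np.vstack([H, np.sqrt(lam) * np.eye(m)])
    y, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return V[:, :m] @ y
```

One natural way to augment this sketch (an assumption here, not necessarily the construction used in the talk) is to orthogonalise each new Krylov vector against a basis of $\mathcal{U}$ as well, so that the projection space becomes $\mathcal{U} + \mathcal{K}_{j}$; the abstract indicates that several such choices are possible and lead to different algorithms.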