Date
Thu, 25 Oct 2018
Time
14:00 - 15:00
Location
L4
Speaker
Prof Kirk Soodhalter
Organisation
Trinity College Dublin

$$
\def\curl#1{\left\{#1\right\}}
\def\vek#1{\mathbf{#1}}
$$
lll-posed problems arise often in the context of scientific applications in which one cannot directly observe the object or quantity of interest. However, indirect observations or measurements can be made, and the observable data $y$ can be represented as the wanted observation $x$ being acted upon by an operator $\mathcal{A}$. Thus we want to solve the operator equation \begin{equation}\label{eqn.Txy} \mathcal{A} x = y, \end{equation} (1) often formulated in some Hilbert space $H$ with $\mathcal{A}:H\rightarrow H$ and $x,y\in H$. The difficulty then is that these problems are generally ill-posed, and thus $x$ does not depend continuously on the on the right-hand side. As $y$ is often derived from measurements, one has instead a perturbed $y^{\delta}$ such that ${y - y^{\delta}}_{H}<\delta$. Thus due to the ill-posedness, solving (1) with $y^{\delta}$ is not guaranteed to produce a meaningful solution. One such class of techniques to treat such problems are the Tikhonov-regularization methods. One seeks in reconstructing the solution to balance fidelity to the data against size of some functional evaluation of the reconstructed image (e.g., the norm of the reconstruction) to mitigate the effects of the ill-posedness. For some $\lambda>0$, we solve \begin{equation}\label{eqn.tikh} x_{\lambda} = \textrm{argmin}_{\widetilde{x}\in H}\left\lbrace{\left\|{b - A\widetilde{x}} \right\|_{H}^{2} + \lambda \left\|{\widetilde{x}}\right\|_{H}^{2}} \right\rbrace. \end{equation} In this talk, we discuss some new strategies for treating discretized versions of this problem. Here, we consider a discreditized, finite dimensional version of (1), \begin{equation}\label{eqn.Axb} Ax =  b \mbox{ with }  A\in \mathbb{R}^{n\times n}\mbox{ and } b\in\mathbb{R}^{n}, \end{equation} which inherits a discrete version of ill conditioning from [1]. We propose methods built on top of the Arnoldi-Tikhonov method of Lewis and Reichel, whereby one builds the Krylov subspace \begin{equation}
\mathcal{K}_{j}(\vek A,\vek w) = {\rm span\,}\curl{\vek w,\vek A\vek w,\vek A^{2}\vek w,\ldots,\vek A^{j-1}\vek w}\mbox{ where } \vek w\in\curl{\vek b,\vek A\vek b}
\end{equation}
and solves the discretized Tikhonov minimization problem projected onto that subspace. We propose to extend this strategy to the setting of augmented Krylov subspace methods. Thus, we project onto a sum of subspaces of the form $\mathcal{U} + \mathcal{K}_{j}$, where $\mathcal{U}$ is a fixed subspace and $\mathcal{K}_{j}$ is a Krylov subspace. It turns out there are multiple ways to do this, leading to different algorithms. We will explain how these different methods arise mathematically and demonstrate their effectiveness on a few example problems. Along the way, some new mathematical properties of the Arnoldi-Tikhonov method are also proven.
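To make the projected Tikhonov step concrete, the following is a minimal NumPy sketch of the basic (unaugmented) Arnoldi-Tikhonov idea: build an orthonormal basis of $\mathcal{K}_{j}(A,b)$ with the Arnoldi process, then solve the small projected regularized least-squares problem. The function names and the stacked least-squares formulation are illustrative assumptions, not the implementation discussed in the talk.

```python
import numpy as np

def arnoldi(A, w, j):
    """Arnoldi process: orthonormal basis V of K_j(A, w) and Hessenberg H.

    Returns V (n x (j+1)) and H ((j+1) x j) with A @ V[:, :j] = V @ H.
    Assumes no breakdown (H[k+1, k] != 0) for the chosen j.
    """
    n = w.size
    V = np.zeros((n, j + 1))
    H = np.zeros((j + 1, j))
    V[:, 0] = w / np.linalg.norm(w)
    for k in range(j):
        u = A @ V[:, k]
        for i in range(k + 1):          # modified Gram-Schmidt orthogonalization
            H[i, k] = V[:, i] @ u
            u -= H[i, k] * V[:, i]
        H[k + 1, k] = np.linalg.norm(u)
        V[:, k + 1] = u / H[k + 1, k]
    return V, H

def arnoldi_tikhonov(A, b, lam, j):
    """Minimize ||b - A x||^2 + lam * ||x||^2 over x in K_j(A, b).

    Since b = beta * V[:, 0] and A V_j = V_{j+1} H, the fidelity term
    reduces to ||beta * e1 - H y||^2 for x = V_j y, and ||x|| = ||y||.
    """
    V, H = arnoldi(A, b, j)
    beta = np.linalg.norm(b)
    rhs = np.zeros(j + 1)
    rhs[0] = beta
    # Stacked least-squares form of the projected Tikhonov problem.
    M = np.vstack([H, np.sqrt(lam) * np.eye(j)])
    r = np.concatenate([rhs, np.zeros(j)])
    y, *_ = np.linalg.lstsq(M, r, rcond=None)
    return V[:, :j] @ y
```

Because the projected problem is only $(j+1)\times j$ plus the regularization rows, one can cheaply re-solve it for many values of $\lambda$ on the same Krylov basis, which is a standard attraction of this family of methods.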

Last updated on 04 Apr 2022 14:57.