Date: Thu, 11 Feb 2016
Time: 14:00 - 15:00
Location: L5
Speaker: Dr. Sergey Dolgov
Organisation: Bath University

Partial differential equations with more than three coordinates arise naturally when the model features certain kinds of stochasticity. Typical examples are the Schrödinger, Fokker-Planck and Master equations in quantum mechanics or cell biology, as well as uncertainty quantification.
The principal difficulty of a straightforward numerical solution of such equations is the "curse of dimensionality": the storage cost of the discrete solution grows exponentially with the number of coordinates (dimensions).
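For instance, a uniform grid with n points in each of d coordinates stores n^d values: already n = 100 points per coordinate in d = 10 dimensions would require 100^10 = 10^20 entries.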

One way to reduce the complexity is a low-rank separation of variables. All discrete data (such as the solution) can be seen as multi-index arrays, or tensors. These large tensors are never stored directly: we approximate them by a sum of products of smaller factors, each carrying only one of the original variables. I will present one of the simplest yet most powerful such representations, the Tensor Train (TT) decomposition, which generalizes low-rank matrix approximation to the tensor case. Many interesting models turn out to admit such approximations with a significant reduction in storage demands.
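For illustration, in the TT format a d-dimensional array A is written element-wise as a product of small matrices,
\[
A(i_1, i_2, \dots, i_d) \approx G_1(i_1)\, G_2(i_2) \cdots G_d(i_d),
\]
where each G_k(i_k) is an r_{k-1} x r_k matrix with r_0 = r_d = 1. If every coordinate takes n values and every TT rank is bounded by r, the storage is of order d n r^2 instead of n^d.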

A workhorse approach to computations with the TT and other tensor product decompositions is the alternating optimization of factors. The simplest realization is, however, prone to convergence issues. I will show some recent improvements that are indispensable for truly high-dimensional problems, or for the solution of linear systems with non-symmetric or indefinite matrices.
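As an illustrative sketch only (not the speaker's implementation), the alternating principle in its simplest, two-dimensional special case is easy to state in code: each step fixes one factor of a rank-r matrix factorization and solves a small least-squares problem for the other. The TT algorithms discussed in the talk sweep over the TT cores in the same spirit. The snippet assumes NumPy; the function and variable names are made up for the example.

```python
import numpy as np

def als_lowrank(M, r, sweeps=20, seed=0):
    """Alternating least squares for a rank-r factorization M ~= U @ V.T.

    Illustrates the alternating principle only: each half-step fixes one
    factor and solves a linear least-squares problem for the other.
    """
    m, n = M.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(sweeps):
        # Fix V, update U: solve V @ U.T ~= M.T in the least-squares sense.
        U = np.linalg.lstsq(V, M.T, rcond=None)[0].T
        # Fix U, update V: solve U @ V.T ~= M in the least-squares sense.
        V = np.linalg.lstsq(U, M, rcond=None)[0].T
    return U, V

# Example: recover an exactly rank-3 matrix; the relative residual should be tiny.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
U, V = als_lowrank(M, r=3)
print(np.linalg.norm(M - U @ V.T) / np.linalg.norm(M))
```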
