12:30
Models for subglacial floods during surface lake drainage events
Abstract
As temperatures increase, so does the presence of meltwater lakes sitting on the surface of the Greenland Ice Sheet. Such lakes can drain through cracks in the ice to the bedrock. Observations have shown that these lakes can drain at three times the flow rate of Niagara Falls. Current models of subglacial drainage systems are unable to cope with such a large and sudden volume of water. This motivates the idea of a 'subglacial blister' which propagates and slowly dissipates underneath the ice sheet. We present a basic hydrofracture model for understanding this process, before carrying out a number of extensions to observe the effects of turbulence, topography, leak-off and finite ice thickness.
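As context for the kind of model the abstract refers to, a generic blister/hydrofracture description couples lubrication flow under the ice to the elastic bending of the overlying sheet. The following is a standard textbook-style sketch under lubrication and thin-elastic-sheet assumptions, not necessarily the specific model of the talk:

$$
\frac{\partial h}{\partial t} = \nabla\cdot\left(\frac{h^{3}}{12\mu}\,\nabla p\right) + q,
\qquad
p = B\,\nabla^{4} h,
$$

where $h$ is the water-blister thickness, $\mu$ the water viscosity, $q$ a source (or leak-off, if negative) term, $p$ the fluid pressure, and $B$ the flexural rigidity of the ice. Extensions such as turbulence or bed topography modify the flux law and the pressure relation, respectively.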
15:30
Foundations for derived analytic and differential geometry
Abstract
In this talk I will describe how bornological spaces give a foundation for derived geometries. This approach works over any Banach ring, allowing one to define analytic and differential geometry over the integers. I will discuss applications of this approach such as the representability of certain moduli spaces and Galois actions on the cohomology of differentiable manifolds admitting a $\mathbb{Q}$-form.
IterativeCUR: One small sketch for big matrix approximations
Abstract
The computation of accurate low-rank matrix approximations is central to improving the scalability of various techniques in machine learning, uncertainty quantification, and control. Traditionally, low-rank approximations are constructed using SVD-based approaches such as truncated SVD or RandomizedSVD. Although these SVD approaches---especially RandomizedSVD---have proven to be very computationally efficient, other low-rank approximation methods can offer even greater performance. One such approach is the CUR decomposition, which forms a low-rank approximation using direct row and column subsets of a matrix. Because CUR uses direct matrix subsets, it is also often better able to preserve native matrix structures like sparsity or non-negativity than SVD-based approaches and can facilitate data interpretation in many contexts. This paper introduces IterativeCUR, which draws on previous work in randomized numerical linear algebra to build a new algorithm that is highly competitive compared to prior work: (1) It is adaptive in the sense that it takes as an input parameter the desired tolerance, rather than an a priori guess of the numerical rank. (2) It typically runs significantly faster than both existing CUR algorithms and techniques such as RandomizedSVD, in particular when these methods are run in an adaptive rank mode. Its asymptotic complexity is $\mathcal{O}(mn + (m+n)r^2 + r^3)$ for an $m\times n$ matrix of numerical rank $r$. (3) It relies on a single small sketch from the matrix that is successively downdated as the algorithm proceeds.
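The basic CUR idea (not the IterativeCUR algorithm itself) can be sketched in a few lines: select column and row subsets of the matrix, then solve a small least-squares problem for the coupling matrix. A minimal illustration using a randomized sketch plus column-pivoted QR for the subset selection; the function name and its parameters are illustrative, not from the paper:

```python
import numpy as np
from scipy.linalg import qr

def basic_cur(A, r, oversample=5, seed=0):
    """Rank-r CUR sketch of A: pick r columns and r rows via
    randomized column-pivoted QR, then solve for the coupling matrix U."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Column selection: sketch A from the left, pivoted QR on the sketch.
    Y = rng.standard_normal((r + oversample, m)) @ A
    _, _, col_piv = qr(Y, pivoting=True, mode='economic')
    J = col_piv[:r]
    # Row selection: same idea applied to A^T.
    Z = rng.standard_normal((r + oversample, n)) @ A.T
    _, _, row_piv = qr(Z, pivoting=True, mode='economic')
    I = row_piv[:r]
    C, R = A[:, J], A[I, :]
    # Coupling matrix: U = argmin_U ||A - C U R||_F = C^+ A R^+.
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R, I, J
```

Because `C` and `R` are literal slices of `A`, they inherit structure such as sparsity or non-negativity, which is the interpretability advantage the abstract mentions. Note this sketch reads all of `A` to form `U`; the adaptivity and downdating of a single small sketch are what distinguish the IterativeCUR algorithm.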
Random Embeddings for Global Optimization: Convergence Results Beyond Low Effective Dimension
Abstract
Roy Makhlouf will talk about: 'Random Embeddings for Global Optimization: Convergence Results Beyond Low Effective Dimension'
Many contemporary optimization problems are high-dimensional, calling for dimensionality reduction techniques to solve them efficiently. The random embedding strategy, which optimizes the objective along a low-dimensional subspace of the search space, is arguably the simplest possible dimensionality reduction method. Recent works quantify the probability of success of this strategy to solve the original problem by lower bounding the probability of a random subspace to intersect the set of approximate global minimizers. These works showed that, when the objective has low effective dimension (i.e., varies only along a low-dimensional subspace of the search space), random embeddings of sufficiently large dimension solve the original high-dimensional problem with probability one. In this work, we relax the low effective dimension assumption by considering objectives with anisotropic variability, namely, Lipschitz continuous functions whose Lipschitz constant is small (though nonzero) when the function is restricted to a high-dimensional subspace. Exploiting tools from stochastic geometry, we lower bound the probability for a random subspace to intersect the set of approximate global minimizers of these objectives, hence, the probability of random embeddings to succeed in solving (approximately) the original global optimization problem. Our findings offer deeper insights into the role of the dimension of the optimization problem in this probability of success.
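The random embedding strategy itself is simple to state: draw a Gaussian matrix A mapping a low-dimensional variable y into the full search space, and minimize the reduced objective g(y) = f(Ay). A minimal sketch, assuming SciPy and a generic local solver on the reduced problem; the function name and arguments are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def random_embedding_minimize(f, D, d, seed=0):
    """Minimize f : R^D -> R over a random d-dimensional subspace:
    draw Gaussian A in R^{D x d} and solve the reduced problem g(y) = f(A y)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((D, d)) / np.sqrt(d)   # random embedding matrix
    g = lambda y: f(A @ y)                         # reduced d-dimensional objective
    res = minimize(g, np.zeros(d), method='BFGS')  # any solver on the small problem
    return A @ res.x, res.fun                      # candidate minimizer in R^D
```

When f has effective dimension d_e (here a toy quadratic varying only along two coordinates, so d_e = 2) and d > d_e, a random subspace intersects the set of minimizers with probability one, which is the classical result the abstract builds on; the talk's contribution concerns objectives that vary weakly, rather than not at all, off a subspace.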
A Very Short Introduction to Ptychographic Image Reconstruction
Abstract
Dr Jari Fowkes will talk about: 'A Very Short Introduction to Ptychographic Image Reconstruction'
I will present a very short introduction to the mathematics behind the scientific imaging technique known as ptychography, starting with a brief overview of the physics model and the various simplifications required, before moving on to the main ptychography inverse problem and the three principal classes of optimization algorithms currently being used in practice.
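For orientation, the standard far-field ptychographic forward model (under the usual multiplicative thin-object approximation the abstract alludes to) records, at each scan position, the squared modulus of the Fourier transform of the probe times the illuminated object patch. A minimal simulation of this forward map, assuming NumPy; the function and variable names are illustrative:

```python
import numpy as np

def ptycho_forward(obj, probe, shifts):
    """Far-field ptychography forward model: for each scan position (r, c),
    the detector records |FFT2(probe * object_patch)|^2 (phase is lost)."""
    k = probe.shape[0]
    patterns = []
    for (r, c) in shifts:
        patch = obj[r:r + k, c:c + k]     # region of the object illuminated
        exit_wave = probe * patch         # multiplicative (thin-object) model
        patterns.append(np.abs(np.fft.fft2(exit_wave))**2)
    return np.array(patterns)
```

The inverse problem is to recover the complex-valued `obj` (and often `probe` too) from these phaseless intensities, with overlapping scan positions providing the redundancy that makes the reconstruction tractable.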