Random matrices, random Young diagrams, and some random operators
Abstract
The rows of a Young diagram chosen at random with respect to the Plancherel measure are known to share some features with the eigenvalues of the Gaussian Unitary Ensemble. We shall discuss several ideas, going back to the work of Kerov and developed by Biane and by Okounkov, which to some extent clarify this similarity. Partially based on joint work with Jeong and on joint works in progress with Feldheim and Jeong and with Täufer.
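For orientation, the best-known instance of this similarity can be summarised in two standard limit statements (Baik–Deift–Johansson for the first Plancherel row, Tracy–Widom for the largest GUE eigenvalue); these are classical results quoted here for context, not claims made in the abstract:

```latex
% Plancherel measure on partitions \lambda of n, with \dim\lambda the number of
% standard Young tableaux of shape \lambda:
\mathbb{P}_n(\lambda) = \frac{(\dim\lambda)^2}{n!}.
% Fluctuations of the first row match those of the largest GUE eigenvalue
% (F_2 denotes the Tracy--Widom GUE distribution):
\frac{\lambda_1 - 2\sqrt{n}}{n^{1/6}} \xrightarrow{\;d\;} F_2,
\qquad
n^{1/6}\bigl(\lambda_{\max}^{\mathrm{GUE}_n} - 2\sqrt{n}\bigr) \xrightarrow{\;d\;} F_2.
```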
Abstract
This presentation introduces a rigorous framework for the study of commonly used machine learning techniques (kernel methods, random feature maps, etc.) in the regime of large-dimensional and numerous data. Exploiting the fact that very realistic data can be modelled by generative models (such as GANs), whose outputs are concentrated random vectors in the theoretical sense, we introduce a joint random matrix and concentration-of-measure theory for data processing. Specifically, we present fundamental random matrix results for concentrated random vectors, which we apply to the performance estimation of spectral clustering on real image datasets.
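As a heavily simplified illustration of this setting (not the speaker's framework or code): the sketch below clusters synthetic high-dimensional Gaussian data, standing in for concentrated GAN-generated vectors, with a Gaussian-kernel spectral clustering pipeline. All dimensions, bandwidths and class separations are illustrative assumptions.

```python
# Illustrative sketch: spectral clustering of high-dimensional data via a Gaussian
# kernel, the regime in which the random-matrix analysis of the abstract applies.
# Data here are synthetic Gaussian mixtures standing in for GAN-generated images.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
p, n_per_class = 400, 250                      # large dimension, numerous data
mu = np.zeros(p); mu[0] = 2.0                  # small mean separation between classes
X = np.vstack([rng.normal(-mu, 1.0, (n_per_class, p)),
               rng.normal(+mu, 1.0, (n_per_class, p))])
y_true = np.repeat([0, 1], n_per_class)

# Gaussian kernel with dimension-scaled bandwidth: K_ij = exp(-||x_i - x_j||^2 / (2p))
sq_dists = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
K = np.exp(-sq_dists / (2 * p))

# Spectral clustering: embed with the top eigenvectors of K, then run k-means
vals, vecs = eigh(K)                           # eigenvalues in ascending order
embedding = vecs[:, -2:]                       # two leading eigenvectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
accuracy = max(np.mean(labels == y_true), np.mean(labels != y_true))
print(f"clustering accuracy: {accuracy:.3f}")
```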
Randomised algorithms for computing low rank approximations of matrices
Abstract
The talk will describe how ideas from random matrix theory can be leveraged to effectively, accurately, and reliably solve important problems that arise in data analytics and large scale matrix computations. We will focus in particular on accelerated techniques for computing low rank approximations to matrices. These techniques rely on randomised embeddings that reduce the effective dimensionality of intermediate steps in the computation. The resulting algorithms are particularly well suited for processing very large data sets.
The algorithms described are supported by rigorous analysis that depends on probabilistic bounds on the singular values of rectangular Gaussian matrices. The talk will briefly review some representative results.
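A minimal sketch of the core idea, following the standard randomised range-finder template (in the Halko–Martinsson–Tropp style) rather than the specific accelerated variants of the talk; the function name and parameters below are illustrative assumptions.

```python
# Sketch of randomised low-rank approximation: project onto a random Gaussian
# embedding, orthonormalise the sample, then take a small deterministic SVD.
import numpy as np

def randomized_svd(A, rank, oversample=10, n_power_iter=1, rng=None):
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Random Gaussian test matrix reduces the effective dimension to rank + oversample
    Omega = rng.standard_normal((n, rank + oversample))
    Y = A @ Omega
    # Optional power iterations sharpen the singular-value decay
    for _ in range(n_power_iter):
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for the sampled range
    B = Q.T @ A                        # small (rank + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank, :]

# Usage: approximate an exactly rank-50 matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 1500))
U, s, Vt = randomized_svd(A, rank=50)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```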
Note: There is a related talk in the Computational Mathematics and Applications seminar on Thursday Feb 27, at 14:00 in L4. There, the ideas introduced in this talk will be extended to the problem of solving large systems of linear equations.
Eigenvector overlaps for large random matrices and applications to financial data
Abstract
Whereas the spectral properties of random matrices have been the subject of numerous studies and are well understood, the statistical properties of the corresponding eigenvectors have only been investigated in the last few years. We will review several recent results and emphasise their importance for cleaning empirical covariance matrices, a subject of great importance for financial applications.
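As a hedged illustration of what "cleaning" can mean in practice (one standard recipe, not necessarily the estimator discussed in the talk): eigenvalue clipping of a sample correlation matrix at the Marchenko–Pastur bulk edge.

```python
# Sketch of eigenvalue clipping: eigenvalues of the sample correlation matrix below
# the Marchenko-Pastur edge (1 + sqrt(q))^2, q = p/n, are treated as noise and
# replaced by their average so that the trace is preserved.
import numpy as np

def clip_clean(returns):
    n, p = returns.shape                      # n observations of p assets
    X = (returns - returns.mean(0)) / returns.std(0)
    E = (X.T @ X) / n                         # sample correlation matrix
    vals, vecs = np.linalg.eigh(E)
    lambda_plus = (1 + np.sqrt(p / n)) ** 2   # Marchenko-Pastur upper edge
    noise = vals < lambda_plus
    vals_clean = vals.copy()
    vals_clean[noise] = vals[noise].mean()    # flatten the noise bulk, keep the trace
    return (vecs * vals_clean) @ vecs.T

rng = np.random.default_rng(0)
R = rng.standard_normal((1000, 400))          # pure-noise returns: cleaned matrix ~ identity
print(np.round(np.diag(clip_clean(R))[:5], 2))
```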
The Statistical Finite Element Method
Abstract
The finite element method (FEM) is one of the great triumphs of applied mathematics, numerical analysis and software development. Recent developments in sensor and signalling technologies enable the phenomenological study of systems. The connection between sensor data and the FEM is typically restricted to solving inverse problems, placing unwarranted faith in the fidelity of the mathematical description of the system. If one concedes mis-specification between generative reality and the FEM, then a framework to systematically characterise this uncertainty is required. This talk will present a statistical construction of the FEM which systematically blends mathematical description with observations.
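A minimal sketch of the flavour of such a construction, under strong simplifying assumptions and not the published statFEM formulation: a 1D Poisson finite element solution is given a Gaussian prior through a random forcing term and then conditioned on a few noisy point observations by a standard Gaussian update. All covariance scales and sensor locations are illustrative.

```python
# Simplified sketch: FEM prior for -u'' = f on (0,1), u(0)=u(1)=0, blended with
# noisy point observations by Gaussian conditioning.
import numpy as np

n_el = 50
h = 1.0 / n_el
nodes = np.linspace(0, 1, n_el + 1)[1:-1]            # interior nodes
K = (np.diag(2 * np.ones(n_el - 1)) - np.diag(np.ones(n_el - 2), 1)
     - np.diag(np.ones(n_el - 2), -1)) / h           # piecewise-linear stiffness matrix
f_mean = h * np.ones(n_el - 1)                       # load vector for f = 1

# Prior: random forcing f ~ N(f_mean, G) pushed through the solver u = K^{-1} f
G = 0.1 * h**2 * np.exp(-np.abs(nodes[:, None] - nodes[None, :]) / 0.2)
Kinv = np.linalg.inv(K)
prior_mean = Kinv @ f_mean
prior_cov = Kinv @ G @ Kinv.T

# Synthetic observations at three sensor nodes, with a deliberate model mismatch
obs_idx = np.array([10, 25, 40]) - 1
H = np.zeros((3, n_el - 1)); H[np.arange(3), obs_idx] = 1.0
sigma_e = 1e-3
y = 1.2 * prior_mean[obs_idx] + sigma_e * np.random.default_rng(0).standard_normal(3)

# Gaussian conditioning: posterior blends the mathematical model with the data
S = H @ prior_cov @ H.T + sigma_e**2 * np.eye(3)
gain = prior_cov @ H.T @ np.linalg.inv(S)
post_mean = prior_mean + gain @ (y - H @ prior_mean)
post_cov = prior_cov - gain @ H @ prior_cov
print(post_mean[obs_idx], y)
```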
Adaptive & Multilevel Stochastic Galerkin Approximation for PDEs with Random Inputs
Randomised algorithms for solving systems of linear equations
Abstract
The task of solving large scale linear algebraic problems such as factorising matrices or solving linear systems is of central importance in many areas of scientific computing, as well as in data analysis and computational statistics. The talk will describe how randomisation can be used to design algorithms that in many environments have both better asymptotic complexities and better practical speed than standard deterministic methods.
The talk will in particular focus on randomised algorithms for solving large systems of linear equations. Both direct solution techniques based on fast factorisations of the coefficient matrix, and techniques based on randomised preconditioners, will be covered.
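As a hedged illustration of the preconditioning idea (a plain Gaussian sketch is used below in place of the faster structured embeddings such methods typically employ): for an overdetermined least-squares system, a QR factorisation of a sketched matrix supplies a preconditioner under which an iterative solver converges in few iterations. The problem sizes and tolerances are illustrative assumptions.

```python
# Sketch-and-precondition for least squares: factorise a sketched copy of A,
# use its R factor as a right preconditioner, and solve iteratively with LSQR.
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
m, n = 10000, 100
A = rng.standard_normal((m, n)) @ np.diag(10.0 ** np.linspace(0, -6, n))  # ill-conditioned
b = rng.standard_normal(m)

# Sketch: compress the m rows down to ~4n rows with a Gaussian embedding
S = rng.standard_normal((4 * n, m)) / np.sqrt(4 * n)
_, R = np.linalg.qr(S @ A)            # R from the sketched matrix is the preconditioner

# Solve min ||A R^{-1} z - b||_2 iteratively, then recover x = R^{-1} z
ARinv = LinearOperator(
    (m, n),
    matvec=lambda z: A @ solve_triangular(R, z),
    rmatvec=lambda v: solve_triangular(R, A.T @ v, trans='T'),
)
z = lsqr(ARinv, b, atol=1e-12, btol=1e-12)[0]
x = solve_triangular(R, z)
print("residual norm:", np.linalg.norm(A @ x - b))
```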
Note: There is a related talk in the Random Matrix Seminar on Tuesday Feb 25, at 15:30 in L4. That talk describes randomised methods for computing low rank approximations to matrices. The two talks are independent, but the Tuesday one introduces some of the analytical framework that supports the methods described here.