Understanding Concentration and Separation in Deep Neural Networks
Abstract
Deep convolutional networks achieve spectacular performance that remains largely unexplained. Numerical experiments show that they classify by progressively concentrating each class in separate regions of a low-dimensional space. To explain these properties, we introduce a concentration and separation mechanism based on multiscale tight frame contractions. Applications are shown for image classification and for statistical physics models of cosmological structures and turbulent fluids.
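As an illustration of the kind of numerical experiment described above, the sketch below computes a simple separation statistic across network layers: the ratio of within-class variance to between-class variance of the features. The metric and the names (class_separation, layer_outputs) are illustrative choices made here, not quantities taken from the paper.

    import numpy as np

    def class_separation(features, labels):
        """Ratio of mean within-class variance to between-class variance.

        A small value indicates that classes are concentrated in
        well-separated regions of the feature space.
        """
        classes = np.unique(labels)
        centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
        within = np.mean([features[labels == c].var(axis=0).sum() for c in classes])
        between = centroids.var(axis=0).sum()
        return within / between

    # layer_outputs: a list of (n_samples, dim) feature arrays, one per layer
    # of a trained network (hypothetical variable).  The concentration claim
    # above suggests this ratio should decrease with depth:
    # scores = [class_separation(F, labels) for F in layer_outputs]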
The C*-algebras associated to a Wieler solenoid
Part of UK virtual operator algebras seminar: https://sites.google.com/view/uk-operator-algebras-seminar/home
Abstract
Wieler has shown that every irreducible Smale space with totally disconnected stable sets is a solenoid (i.e., obtained via a stationary inverse limit construction). Through examples, I will discuss how this allows one to compute the K-theory of the stable algebra, S, and the stable Ruelle algebra, S ⋊ Z. These computations involve writing S as a stationary inductive limit and S ⋊ Z as a Cuntz-Pimsner algebra. These constructions reemphasize the viewpoint that Smale space C*-algebras are higher-dimensional generalizations of Cuntz-Krieger algebras. The main results are joint work with Magnus Goffeng and Allan Yashinski.
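For orientation, a stationary inverse limit, in notation chosen here rather than taken from the talk, can be written as follows: starting from a single continuous map g : Y → Y, one forms

    % Stationary inverse limit of a single map g : Y -> Y (illustrative notation).
    \[
      X \;=\; \varprojlim\,(Y, g)
        \;=\; \bigl\{ (y_0, y_1, y_2, \dots) \in Y^{\mathbb{N}} : g(y_{i+1}) = y_i \bigr\},
      \qquad
      \varphi(y_0, y_1, y_2, \dots) \;=\; \bigl(g(y_0), y_0, y_1, \dots\bigr),
    \]
    % so (X, \varphi) is the solenoid, with \varphi the induced self-map
    % (a homeomorphism when g is surjective).

Wieler's theorem identifies every irreducible Smale space with totally disconnected stable sets with such a system for a suitable choice of (Y, g).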
Von Neumann algebras and equivalences between groups
Part of UK virtual operator algebras seminar: https://sites.google.com/view/uk-operator-algebras-seminar/home
Abstract
We have various ways of describing the extent to which two countably infinite groups are "the same." Are they isomorphic? If not, are they commensurable? Measure equivalent? Quasi-isometric? Orbit equivalent? W*-equivalent? Von Neumann equivalent? In this expository talk, we will define these notions of equivalence, discuss the known relationships between them, and work out some examples. Along the way, we will describe recent joint work with Ishan Ishan and Jesse Peterson.
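For concreteness, one of the notions mentioned, measure equivalence (due to Gromov), admits the following standard formulation; it is recalled here for orientation and is not part of the abstract.

    % Measure equivalence: two countable groups \Gamma and \Lambda are measure
    % equivalent if there is a standard measure space (\Omega, \mu) carrying
    % commuting, free, measure-preserving actions
    \[
      \Gamma \curvearrowright (\Omega, \mu) \curvearrowleft \Lambda,
    \]
    % each of which admits a fundamental domain of finite measure.
    % Standard example: any two lattices in the same locally compact, second
    % countable group are measure equivalent.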
On the Happy Marriage of Kernel Methods and Deep Learning
datasig.ox.ac.uk/events
Abstract
In this talk, we present simple ideas for combining nonparametric approaches based on positive definite kernels with deep learning models. There are many good reasons for bridging these two worlds. On the one hand, we want to provide regularization mechanisms and a geometric interpretation for deep learning models, as well as a functional space in which to study their theoretical properties (e.g., invariance and stability). On the other hand, we want to bring more adaptivity and scalability to traditional kernel methods, which they crucially lack. We will start this presentation by introducing models to represent graph data, then move to biological sequences and images, showing that our hybrid models can achieve state-of-the-art results for many predictive tasks, especially when large amounts of annotated data are not available. This presentation is based on joint work with Alberto Bietti, Dexiong Chen, and Laurent Jacob.
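As a toy illustration of one way to bridge these two worlds (a minimal NumPy sketch; the Gaussian kernel, the Nyström construction, and all names below are choices made here for exposition, not the speaker's models), one can build a finite-dimensional embedding that approximates a kernel feature map and can be stacked and trained like a network layer.

    import numpy as np

    def gaussian_kernel(X, Z, sigma=1.0):
        """Positive definite Gaussian kernel between the rows of X and Z."""
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def nystrom_features(X, anchors, sigma=1.0, eps=1e-6):
        """Finite-dimensional embedding approximating the kernel feature map.

        The rows of `anchors` play the role of learned filters; the embedding
        can be composed layer-wise and trained like a neural network.
        """
        K_xa = gaussian_kernel(X, anchors, sigma)        # (n, m)
        K_aa = gaussian_kernel(anchors, anchors, sigma)  # (m, m)
        # Whiten by K_aa^{-1/2} so that inner products of embeddings
        # approximate kernel evaluations.
        w, V = np.linalg.eigh(K_aa + eps * np.eye(len(anchors)))
        return K_xa @ (V @ np.diag(w ** -0.5) @ V.T)

Learning the anchor points end-to-end, rather than sampling them, is one way such hybrid layers can gain the adaptivity discussed above while retaining the geometry of the kernel's functional space.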