Fri, 31 May 2024

15:00 - 16:00
L5

Applied Topology TBC

Bernadette Stolz-Pretzer
(École Polytechnique Fédérale de Lausanne (EPFL))


Fri, 17 May 2024

15:00 - 16:00
L5

Cohomology classes in the RNA transcriptome

Kelly Spry Maggs
(École Polytechnique Fédérale de Lausanne (EPFL))


Abstract

 

Single-cell sequencing data consists of a point cloud in which each point is a cell whose coordinates are the RNA expression levels of its genes. Since the tissue is destroyed by the sequencing procedure, the dynamics of gene expression must be inferred from the structure and geometry of the point cloud. In this talk, we will build a biological interpretation of one-dimensional cohomology classes in hallmark gene subsets as models for transient biological processes. Such processes include the cell cycle and, more generally, homeostatic negative feedback loops. Our procedure uses persistent cohomology to identify features, and integration of differential forms to estimate the cascade of genes associated with the underlying dynamics of gene expression.

This is joint work with Markus Youssef and Tâm Nguyen at EPFL.
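
To make the first step concrete, here is a minimal sketch (not the speakers' pipeline: the random placeholder data, the choice of the ripser library and all parameters are assumptions) of computing one-dimensional persistent cohomology, with representative cocycles, on an expression matrix restricted to a gene subset.

```python
# Minimal sketch: persistent cohomology of a single-cell expression point cloud.
# Assumes an (n_cells x n_genes) array X already restricted to a hallmark gene
# subset and suitably normalised; ripser.py is one library that returns cocycles.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))        # placeholder for a real expression matrix

res = ripser(X, maxdim=1, coeff=47, do_cocycles=True)   # prime-field coefficients
h1 = res["dgms"][1]                                     # H^1 persistence diagram
if len(h1):
    longest = int(np.argmax(h1[:, 1] - h1[:, 0]))       # most persistent class
    cocycle = res["cocycles"][1][longest]               # a representative cocycle
    print("most persistent H^1 bar:", h1[longest])
    print("representative cocycle with", len(cocycle), "entries")
```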

Mon, 14 Nov 2022
14:15
L5

K-theoretic DT/PT invariants on Calabi-Yau 4-(orbi)folds

Sergej Monavari
(École Polytechnique Fédérale de Lausanne (EPFL))
Abstract

Donaldson-Thomas theory is classically defined for moduli spaces of sheaves over a Calabi-Yau threefold. Thanks to recent foundational work of Cao-Leung, Borisov-Joyce and Oh-Thomas, DT theory has been extended to Calabi-Yau 4-folds. We discuss how, in this context, one can define natural K-theoretic refinements of Donaldson-Thomas invariants (virtual counts of Hilbert schemes) and Pandharipande-Thomas invariants (virtual counts of moduli spaces of stable pairs) and how, conjecturally, they are related. Finally, we introduce an extension of DT invariants to Calabi-Yau 4-orbifolds, and propose a McKay-type correspondence, which we expect to be suitably interpreted as a wall-crossing phenomenon. Joint work (in progress) with Yalong Cao and Martijn Kool.

Thu, 17 Nov 2022

12:00 - 13:00
L1

Idealised and Real Contact Sets in Knots and other Tight Structures

Prof. John Maddocks
(École Polytechnique Fédérale de Lausanne (EPFL))
Further Information

Born in Scotland and a former member of the British Olympic sailing team, John Maddocks obtained his doctorate in Oxford. After several years as a professor of mathematics in Maryland, USA, he returned to Europe, to the École Polytechnique Fédérale de Lausanne (EPFL), where he has worked for nearly 20 years.

John Maddocks is a prominent expert in the multiscale modeling of DNA, the nucleic acid-based biological molecule that carries genetic information. He is interested above all in the nanomechanical properties of DNA molecules. These properties determine how DNA is "packed" and stored in our cells.

Text adapted from TU Berlin

Abstract

It has been known for some time that the contact sets between self-avoiding idealised tubes (i.e. with exactly circular, normal cross-sections) in various highly compact, tight structures comprise double lines of contact. I will re-visit those results for two canonical examples, namely the orthogonal clasp and the ideal trefoil knot. I will then show experimental and 3D FEM simulation data for deformable elastic tubes (obtained within the group of Pedro Reis at the EPFL) which reveal that the ideal contact set lines bound (in a non-rigorous sense) the actual contact patches that arise in reality.

[1] The shapes of physical trefoil knots, P. Johanns, P. Grandgeorge, C. Baek, T. G. Sano, J. H. Maddocks, P. M. Reis, Extreme Mechanics Letters 43 (2021), p. 101172. DOI:10.1016/j.eml.2021.101172
[2] Mechanics of two filaments in tight orthogonal contact, P. Grandgeorge, C. Baek, H. Singh, P. Johanns, T. G. Sano, A. Flynn, J. H. Maddocks, and P. M. Reis, Proceedings of the National Academy of Sciences of the United States of America 118, no. 15 (2021), p. e2021684118. DOI:10.1073/pnas.2021684118

Fri, 03 Jun 2022
15:00
L3

Projected barcodes: a new class of invariants and distances for multi-parameter persistence modules

Nicolas Berkouk
(École Polytechnique Fédérale de Lausanne (EPFL))
Abstract

In this talk, we will present a new class of invariants of multi-parameter persistence modules: projected barcodes. Relying on Grothendieck's six operations for sheaves, projected barcodes are defined as derived pushforwards of persistence modules (which can be seen as sheaves on a vector space in a precise sense) onto R. We will prove that the well-known fibered barcode is a particular instance of projected barcodes. Moreover, our construction is able to distinguish persistence modules that have the same fibered barcodes but are not isomorphic. We will present a systematic study of the stability of projected barcodes. Given a subset F of the 1-Lipschitz functions, this leads us to define a new class of well-behaved distances between persistence modules, the F-Integral Sheaf Metrics (F-ISM), as the supremum over p in F of the bottleneck distance between the projected barcodes along p of two persistence modules.
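
In symbols (with $\Pi_p$ denoting the projected barcode along $p$, a notation assumed here for readability), the definition above reads

$$ d_{F\text{-ISM}}(M, N) \;=\; \sup_{p \in F} \, d_B\big(\Pi_p(M), \Pi_p(N)\big), $$

where $d_B$ is the bottleneck distance.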

In the case where M is the collection, in all degrees, of the sublevel-set persistence modules of a function f : X -> R^n, we prove that the projected barcode of M by a linear map p : R^n -> R is nothing but the collection of sublevel-set barcodes of the post-composition of f by p. In particular, it can be computed using already existing software, without having to compute M in its entirety. We also provide an explicit formula for the gradient with respect to p of the bottleneck distance between projected barcodes, allowing a gradient-ascent scheme to be used to approximate the linear ISM. This is joint work with François Petit.
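
A minimal sketch of that computational shortcut (the toy complex, the function values and the use of the gudhi library are assumptions, not the authors' code): for a linear map p, the projected barcode is obtained as the ordinary lower-star (sublevel-set) barcode of p∘f.

```python
# Sketch: projected barcode along a linear map p as the sublevel-set barcode of p∘f.
# f is sampled on the vertices of a small simplicial complex; gudhi is one library
# that computes lower-star (sublevel-set) persistence.
import numpy as np
import gudhi

# Toy complex: a triangulated square (4 vertices, 5 edges, 2 triangles).
simplices = [[0], [1], [2], [3], [0, 1], [1, 2], [2, 3], [0, 3], [0, 2],
             [0, 1, 2], [0, 2, 3]]
f_vals = np.array([[0.0, 1.0], [1.0, 0.5], [0.3, 2.0], [2.0, 0.1]])  # f: vertices -> R^2
p = np.array([0.7, 0.3])                     # a linear form p: R^2 -> R
g = f_vals @ p                               # g = p∘f on the vertices

st = gudhi.SimplexTree()
for s in simplices:
    st.insert(s, filtration=max(g[v] for v in s))   # lower-star filtration of g
st.persistence()                                    # barcode of the sublevel sets of p∘f
print(st.persistence_intervals_in_dimension(0))
```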

 

Thu, 10 Mar 2022

12:00 - 13:00
L1

Topological classification and synthesis of neuron morphologies

Kathryn Hess
(École Polytechnique Fédérale de Lausanne (EPFL))
Abstract

Motivated by the desire to automate classification of neuron morphologies, we designed a topological signature, the Topological Morphology Descriptor (TMD), that assigns a so-called "barcode" to any geometric tree (i.e., any finite binary tree embedded in R^3). We showed that the TMD effectively determines reliable clusterings of random and neuronal trees. Moreover, using the TMD we performed an objective, stable classification of pyramidal cells in the rat neocortex, based only on the shape of their dendrites.
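
As a rough illustration of how such a descriptor can be computed (an elder-rule-style sketch under assumed conventions, not the published TMD implementation): each leaf opens a bar at its distance from the root, and at every branch point all but the most persistent incoming branch are closed.

```python
# Sketch of a TMD-style barcode for a rooted tree, filtered by distance from the root.
# `children`: node -> list of child nodes; `dist`: node -> distance to the root.
def tmd_barcode(children, dist, root):
    bars = []

    def process(node):
        # Returns the largest leaf distance in the subtree (the "surviving" branch).
        if not children.get(node):              # leaf: opens a potential bar
            return dist[node]
        values = sorted((process(c) for c in children[node]), reverse=True)
        for v in values[1:]:                    # all but the most persistent branch die here
            bars.append((dist[node], v))
        return values[0]

    survivor = process(root)
    bars.append((dist[root], survivor))         # the longest branch survives to the root
    return bars

# Toy binary tree: root 0 with two branches of different lengths.
children = {0: [1, 2], 1: [3, 4], 2: []}
dist = {0: 0.0, 1: 1.0, 2: 2.5, 3: 3.0, 4: 1.5}
print(tmd_barcode(children, dist, 0))   # [(1.0, 1.5), (0.0, 2.5), (0.0, 3.0)]
```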

We have also reverse-engineered the TMD, in order to digitally synthesize dendrites, to compensate for the relatively small number of available biological reconstructions. The algorithm we developed, called Topological Neuron Synthesis (TNS), stochastically generates a geometric tree from a barcode, in a biologically grounded manner. The synthesized neurons are statistically indistinguishable from real neurons of the same type, in terms of morpho-electrical properties and connectivity. We synthesized networks of structurally altered neurons, revealing principles linking branching properties to the structure of large-scale networks. We have also successfully applied these classification and synthesis techniques to microglia and astrocytes, two other types of cells that populate the brain.

In this talk I will provide an overview of the TMD and the TNS and then describe the results of our theoretical and computational analysis of their behavior and properties.

This talk is based on work in collaborations led by Lida Kanari at the Blue Brain Project.

 

Fri, 26 Feb 2021

15:00 - 16:00

A simplicial extension of node2vec

Celia Hacker
(École Polytechnique Fédérale de Lausanne (EPFL))
Abstract

The well-known node2vec algorithm has been used to explore network structures and to represent the nodes of a graph in a vector space in a way that reflects the structure of the graph. Random walks in node2vec have been used to study the local structure through pairwise interactions. Our motivation for this project comes from a desire to understand higher-order relationships by a similar approach. To this end, we propose an extension of node2vec to a method for representing the k-simplices of a simplicial complex in Euclidean space.

In this talk I outline a way to do this by performing random walks on simplicial complexes, which have a greater variety of adjacency relations to take into account than in the case of graphs. The walks on simplices are then used to obtain a representation of the simplices. We will show cases in which this method can uncover the roles of higher order simplices in a network and help understand structures in graphs that cannot be seen by using just the random walks on the nodes. 
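
A minimal sketch of the walk-generation step, under one possible adjacency convention (two k-simplices are considered adjacent when they share a (k-1)-face; the toy complex, parameters and this convention are assumptions, and the actual method may use a richer set of adjacency relations): the resulting walks can then be fed to a word2vec-style embedder, exactly as node2vec does for nodes.

```python
# Sketch: random walks on the k-simplices of a small complex, node2vec-style.
# Adjacency convention assumed: two k-simplices are adjacent if they share a (k-1)-face.
import random
from itertools import combinations

def walks_on_simplices(simplices, k, num_walks=10, walk_length=8, seed=0):
    rng = random.Random(seed)
    k_simplices = [tuple(sorted(s)) for s in simplices if len(s) == k + 1]
    # Shared (k-1)-face  <=>  the two k-simplices overlap in exactly k vertices.
    adj = {s: [t for t in k_simplices
               if t != s and len(set(s) & set(t)) == k] for s in k_simplices}
    walks = []
    for s in k_simplices:
        for _ in range(num_walks):
            walk, cur = [s], s
            for _ in range(walk_length - 1):
                if not adj[cur]:
                    break
                cur = rng.choice(adj[cur])
                walk.append(cur)
            walks.append([str(x) for x in walk])   # stringify for a word2vec-style model
    return walks

# Toy complex given by its triangles; walks on the 1-simplices (edges).
triangles = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
edges = {e for t in triangles for e in combinations(t, 2)}
walks = walks_on_simplices(list(edges), k=1)
print(walks[0])
# e.g. gensim's Word2Vec(walks, vector_size=16, window=3, min_count=1, sg=1) could
# then embed each edge, by analogy with node2vec's skip-gram step.
```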

Thu, 11 Feb 2021

14:00 - 15:00
Virtual

From design to numerical analysis of partial differential equations: a unified mathematical framework

Annalisa Buffa
(École Polytechnique Fédérale de Lausanne (EPFL))
Abstract

Computer-based simulation of partial differential equations (PDEs) involves approximating the unknowns and relies on a suitable description of geometrical entities such as the computational domain and its properties. The Finite Element Method (FEM) is by far the most popular technique for the computer-based simulation of PDEs and hinges on the assumption that the discretized domain and the unknown fields are both represented by piecewise polynomials, on tetrahedral or hexahedral partitions. In reality, the simulation of PDEs is one brick within a workflow in which, at the beginning, the geometrical entities are created, described and manipulated with a geometry processor, often through Computer-Aided Design (CAD) systems, and then used to simulate the mechanical behaviour of the designed object. This workflow is often repeated many times as part of a shape optimisation loop. Within this loop, the use of FEM on CAD geometries (which are mainly represented through their boundaries) then calls for (re-)meshing and re-interpolation techniques that often require human intervention and result in inaccurate solutions and a lack of robustness of the whole process.

In my talk, I will present the mathematical counterpart of this problem. I will discuss the mismatch between the mathematical representations of geometries and of PDE unknowns, and introduce a promising framework in which geometric objects and PDE unknowns are represented in a compatible way. Within this framework, the challenges to be addressed in order to construct robust PDE solvers are many, and I will discuss some of them. Mathematical results will be supported by numerical validation.
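
As an illustration of the compatible-representation idea (a sketch only; the spline space, degree and data below are assumptions, not the speaker's framework), a single B-spline basis can describe both the geometry map and an unknown field defined on it.

```python
# Sketch: one B-spline basis represents both the geometry (a planar curve) and a
# scalar field on it, so no re-meshing/re-interpolation is needed between the two.
import numpy as np
from scipy.interpolate import BSpline

degree = 2
knots = np.array([0, 0, 0, 0.5, 1, 1, 1])          # open knot vector, 4 basis functions

# Control points of the geometry map G : [0, 1] -> R^2 ...
ctrl_pts = np.array([[0.0, 0.0], [0.5, 1.0], [1.5, 1.0], [2.0, 0.0]])
geometry = BSpline(knots, ctrl_pts, degree)

# ... and coefficients of a scalar field u in the *same* spline space.
u_coeffs = np.array([0.0, 0.3, 0.7, 1.0])
field = BSpline(knots, u_coeffs, degree)

s = np.linspace(0, 1, 5)
print(geometry(s))    # points on the curve, shape (5, 2)
print(field(s))       # field values at the same parameters, shape (5,)
```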

Fri, 21 Aug 2020

15:00 - 16:00
Virtual

Noisy neurons and rainbow worms: theoretical and statistical perspectives on trees and their barcodes

Adélie Garin
(École Polytechnique Fédérale de Lausanne (EPFL))
Abstract

The TMD algorithm (Kanari et al. 2018) computes the barcode of a neuron (tree) with respect to the radial or path distance from the soma (root). We are interested in the inverse problem: how to understand the space of trees that are represented by the same barcode. Our tool to study this space is the stochastic TNS algorithm (Kanari et al. 2020), which generates trees from a given barcode in a biologically meaningful way.

I will present some theoretical results on the space of trees that have the same barcode, as well as the effect of adding noise to the barcode. In addition, I will provide a more combinatorial perspective on the space of barcodes, expressed in terms of the symmetric group. I will illustrate these results with experiments based on the TNS.

This is joint work with L. Kanari and K. Hess. 

Mon, 09 Mar 2020

15:45 - 16:45
L3

Infinite limit of (fully connected) neural networks: Gaussian processes and kernel methods

Franck Gabriel
(École Polytechnique Fédérale de Lausanne (EPFL))
Abstract

In practice, it is standard to initialize Artificial Neural Networks (ANNs) with random parameters. We will see that this makes it possible to describe, in function space, the limit of the evolution of (fully connected) ANNs as their width tends to infinity. In this limit, an ANN is initially a Gaussian process and follows, during learning, a kernel gradient descent governed by a kernel called the Neural Tangent Kernel.
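
A minimal numerical sketch of the objects in play (a one-hidden-layer ReLU network with scalar inputs; the architecture, widths and seeds are assumptions chosen only for illustration): the empirical tangent kernel fluctuates across random initializations at small width and concentrates as the width grows.

```python
# Sketch: empirical Neural Tangent Kernel of a one-hidden-layer ReLU network,
#   f(x) = (1/sqrt(m)) * a^T relu(w x),  with random Gaussian parameters a, w.
# NTK(x, x') = <grad_theta f(x), grad_theta f(x')>; at large width m it concentrates.
import numpy as np

def empirical_ntk(x1, x2, w, a):
    m = w.shape[0]
    h1, h2 = np.maximum(w * x1, 0.0), np.maximum(w * x2, 0.0)   # hidden activations
    d1, d2 = (w * x1 > 0).astype(float), (w * x2 > 0).astype(float)
    grad_a = lambda h: h / np.sqrt(m)                            # d f / d a_i
    grad_w = lambda d, x: a * d * x / np.sqrt(m)                 # d f / d w_i
    return grad_a(h1) @ grad_a(h2) + grad_w(d1, x1) @ grad_w(d2, x2)

rng = np.random.default_rng(0)
x1, x2 = 0.8, 0.3
for m in (10, 1_000, 100_000):          # kernel values fluctuate less as the width grows
    vals = []
    for _ in range(5):                  # a few independent random initializations
        w, a = rng.normal(size=m), rng.normal(size=m)
        vals.append(empirical_ntk(x1, x2, w, a))
    print(m, np.round(vals, 3))
```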

This description allows a better understanding of the convergence properties of neural networks and of how they generalize to examples during learning, and has practical implications for the training of wide ANNs.
