13:00
In this talk, we present a topological framework for interpreting the latent representations of Multilayer Perceptrons (MLPs) [1] using tools from Topological Data Analysis. Our approach constructs a simplicial tower, a sequence of simplicial complexes linked by simplicial maps, to capture how the topology of data evolves across network layers. This construction is based on the pullback of a cover tower on the output layer and is inspired by the Multiscale Mapper algorithm. The resulting commutative diagram enables a dual analysis: layer persistence, which tracks topological features within individual layers, and MLP persistence, which monitors how these features transform across layers. Through experiments on both synthetic and real-world medical datasets, we demonstrate how this method reveals critical topological transitions, identifies redundant layers, and provides interpretable insights into the internal organization of neural networks.
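To make the construction concrete, the following is a minimal sketch, not the authors' implementation: it assumes a toy two-layer MLP with random weights, uses an overlapping interval cover of the scalar output layer as a stand-in for the cover tower, pulls that cover back sample-wise to each layer, and builds a Mapper-style nerve (1-skeleton) per layer by clustering each pulled-back cover element in that layer's activation space. The helper names (`interval_cover`, `mapper_nerve`) and all parameter values are illustrative assumptions.

```python
# A minimal sketch, NOT the talk's implementation: a toy two-layer MLP,
# an overlapping interval cover of its scalar output (standing in for the
# cover tower), and a Mapper-style nerve per layer built from the
# sample-wise pullback of that cover. All parameter values are arbitrary.

import numpy as np
import networkx as nx
from sklearn.cluster import DBSCAN


def interval_cover(values, n_intervals=6, overlap=0.25):
    """Overlapping 1-D interval cover of the output values.

    Returns one boolean mask per cover element, marking the samples
    whose output falls inside that (enlarged) interval."""
    lo, hi = values.min(), values.max()
    length = (hi - lo) / n_intervals
    masks = []
    for i in range(n_intervals):
        start = lo + i * length - overlap * length
        end = lo + (i + 1) * length + overlap * length
        masks.append((values >= start) & (values <= end))
    return masks


def mapper_nerve(acts, masks, eps=1.5):
    """1-skeleton of the Mapper-style nerve of one layer.

    Each pulled-back cover element (a set of sample indices) is clustered
    in this layer's activation space; clusters become vertices, and two
    vertices are joined when they share at least one sample."""
    acts = acts.reshape(len(acts), -1)
    vertices = []  # each vertex is a set of sample indices
    for mask in masks:
        idx = np.flatnonzero(mask)
        if idx.size == 0:
            continue
        labels = DBSCAN(eps=eps, min_samples=1).fit_predict(acts[idx])
        for lab in np.unique(labels):
            vertices.append(set(idx[labels == lab]))
    g = nx.Graph()
    g.add_nodes_from(range(len(vertices)))
    for i in range(len(vertices)):
        for j in range(i + 1, len(vertices)):
            if vertices[i] & vertices[j]:
                g.add_edge(i, j)
    return g


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                 # toy input data
    W1 = rng.normal(size=(4, 8))
    W2 = rng.normal(size=(8, 1))

    h1 = np.maximum(X @ W1, 0)                    # hidden ReLU activations
    out = (h1 @ W2).ravel()                       # scalar output layer

    # Cover on the output layer, pulled back to every layer through the
    # sample indices (each sample is present in all layers).
    masks = interval_cover(out)
    for name, acts in [("input", X), ("hidden", h1), ("output", out)]:
        nerve = mapper_nerve(acts, masks)
        print(f"{name:>6}: {nerve.number_of_nodes()} vertices, "
              f"{nerve.number_of_edges()} edges")
```

Because every layer is indexed by the same samples, maps between the per-layer complexes can in principle be read off from shared sample indices; the talk's construction formalizes this as a simplicial tower whose persistence is then computed within layers (layer persistence) and across layers (MLP persistence).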