Date: Fri, 16 May 2025, 13:00
Location: L6
Speaker: Andrea Guidolin
Organisation: University of Southampton


Deep learning models are known to be vulnerable to small, maliciously crafted perturbations that produce so-called adversarial examples. This vulnerability is of particular concern for models deployed in security- and safety-critical settings. As a consequence, the study of robustness properties of deep learning models has recently attracted significant attention.

In this talk we discuss how the stability results for the invariants of Topological Data Analysis can be exploited to design machine learning models with robustness guarantees. We propose a neural network architecture that can learn discriminative geometric representations of data from persistence diagrams. The learned representations enjoy Lipschitz stability with a controllable Lipschitz constant. In adversarial learning, this stability can be used to certify robustness for samples in a dataset, as we demonstrate on synthetic data.
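As a rough illustration of how a Lipschitz constant translates into a robustness certificate, the sketch below is a minimal, hedged example rather than the architecture from the talk: the network, its dimensions, and the margin/(2L) certificate are illustrative assumptions. It bounds the Lipschitz constant of a small ReLU network by the product of its layers' spectral norms and converts the logit margin at a point into a radius within which the prediction cannot change. In the setting of the talk, the classical stability theorems of persistent homology, which bound how far a persistence diagram can move when the underlying data are perturbed, would supply the analogous bound for the TDA-based representation.

```python
# Minimal sketch (illustrative only, not the speakers' architecture):
# Lipschitz-margin certification for a small ReLU network.
import numpy as np

rng = np.random.default_rng(0)

def spectral_norm(W):
    """Largest singular value = Lipschitz constant (L2) of the linear map x -> W @ x."""
    return np.linalg.svd(W, compute_uv=False)[0]

class LipschitzMLP:
    """Tiny ReLU network; since ReLU is 1-Lipschitz, its Lipschitz constant
    (w.r.t. the L2 norm) is bounded by the product of the layers' spectral norms."""
    def __init__(self, dims):
        self.weights = [rng.standard_normal((m, n)) / np.sqrt(n)
                        for n, m in zip(dims[:-1], dims[1:])]

    def lipschitz_bound(self):
        return float(np.prod([spectral_norm(W) for W in self.weights]))

    def __call__(self, x):
        for W in self.weights[:-1]:
            x = np.maximum(W @ x, 0.0)   # ReLU
        return self.weights[-1] @ x      # logits

def certified_radius(logits, lipschitz_const):
    """Radius within which the predicted class cannot change.
    Each logit moves by at most L * eps, so the margin shrinks by at most
    2 * L * eps; margin / (2L) is therefore a (conservative) certified radius."""
    top2 = np.sort(logits)[-2:]
    margin = top2[1] - top2[0]
    return margin / (2.0 * lipschitz_const)

# Toy input standing in for a fixed-size vectorization of a persistence diagram
# (e.g. a persistence image); the diagram's own stability under data
# perturbations is what the TDA stability theorems would guarantee.
x = rng.standard_normal(32)
model = LipschitzMLP([32, 64, 64, 3])
logits = model(x)
L = model.lipschitz_bound()
print("predicted class:", int(np.argmax(logits)))
print("Lipschitz bound:", round(L, 3))
print("certified L2 radius:", round(certified_radius(logits, L), 6))
```

The key design choice this mirrors is controlling the Lipschitz constant of the learned representation, so that a margin measured at a clean sample can be turned into a certificate valid for every perturbation within the corresponding radius.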