Seminar series
Date
Thu, 22 Oct 2020
Time
14:00 - 15:00
Location
Virtual
Speaker
Carl-Johann Simon-Gabriel
Organisation
ETH Zurich

Any binary classifier (or score function) can be used to define a dissimilarity between two distributions of points with positive and negative labels. In fact, many well-known distribution dissimilarities are classifier-based: the total variation distance, the KL and JS divergences, the Hellinger distance, etc. And many popular recent generative modelling algorithms, e.g. GANs and their variants, compute or approximate these dissimilarities by explicitly training a classifier. After a brief introduction to classifier-based dissimilarities, I will focus on the influence of the classifier's capacity. I will start with some theoretical considerations, illustrated on maximum mean discrepancies (a weak form of total variation that has grown popular in machine learning), and then focus on deep feed-forward networks and their vulnerability to adversarial examples. We will see that this vulnerability is already rooted in the design and capacity of our current networks, and we will discuss ideas to tackle it in the future.
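For readers unfamiliar with the maximum mean discrepancy mentioned in the abstract, the following sketch (not part of the talk) shows the standard unbiased estimate of the squared MMD between two samples using a Gaussian kernel; the sample sizes, bandwidth, and all names are illustrative assumptions.

import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # Gaussian (RBF) kernel matrix between the rows of x and the rows of y.
    sq_dists = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    # Unbiased estimator of MMD^2 between samples x ~ P and y ~ Q.
    m, n = len(x), len(y)
    k_xx = gaussian_kernel(x, x, bandwidth)
    k_yy = gaussian_kernel(y, y, bandwidth)
    k_xy = gaussian_kernel(x, y, bandwidth)
    # Drop diagonal terms so the within-sample averages are unbiased.
    term_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
    term_yy = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
    return term_xx + term_yy - 2 * k_xy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(500, 2))   # sample from P
y = rng.normal(0.5, 1.0, size=(500, 2))   # sample from a shifted Q
print(mmd2_unbiased(x, y))                # noticeably above 0 when P and Q differ

When P and Q coincide, the estimate fluctuates around zero; the choice of kernel and bandwidth determines how sensitive the estimator is to differences between the distributions.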

Further Information

datasig.ox.ac.uk/events
