Date: Fri, 16 Jun 2023
Time: 15:00 - 16:00
Location: Lecture room 5
Speaker: Bei Wang

Deep convolutional neural networks such as GoogLeNet and ResNet have become ubiquitous in image classification tasks, whereas
transformer-based language models such as BERT and its variants have found widespread use in natural language processing. In this talk, I
will discuss recent efforts in exploring the topology of artificial neuron activations in deep learning, from images to word embeddings.
First, I will discuss the topology of convolutional neural network activations, which provides semantic insight into how these models
organize hierarchical class knowledge at each layer. Second, I will discuss the topology of word embeddings from transformer-based models.
I will explore how the topology of word embeddings changes during the fine-tuning of various models, and how these changes reveal model confusions in the embedding space. If time permits, I will also discuss ongoing work studying the topology of neural activations under adversarial attacks.
 
