Date: Mon, 28 Apr 2014
Time: 14:15 - 15:15
Location: Oxford-Man Institute
Speaker: Yann Ollivier
Organisation: Paris-Sud University

Simple probabilistic models for sequential data (text, music, ...), e.g., hidden Markov models, cannot capture certain structures hidden in the data, such as long-term dependencies or combinations of simultaneous patterns and probabilistic rules. On the other hand, models such as recurrent neural networks can in principle handle any structure, but are notoriously hard to train from data. By analyzing the structure of neural networks from the viewpoint of Riemannian geometry and information theory, we build better learning algorithms, which perform well on difficult toy examples at small computational cost and provide added robustness.
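To make the flavour of such methods concrete, here is a minimal sketch (not the algorithm presented in the talk) of a diagonal, Fisher-style natural-gradient update in Python: each gradient coordinate is rescaled by a running estimate of its squared magnitude, a crude diagonal approximation of a Riemannian metric on parameter space. The toy Gaussian-mean problem, the decay constant 0.99, and all names below are illustrative assumptions.

import numpy as np

def natural_gradient_step(theta, grad, fisher_diag, lr=0.01, eps=1e-8):
    # Precondition the plain gradient by the inverse of a diagonal
    # Fisher-style metric estimate (illustrative sketch only).
    return theta - lr * grad / (fisher_diag + eps)

# Toy problem: estimate the mean of a unit-variance Gaussian stream.
rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=2000)

theta = np.zeros(1)   # parameter: current mean estimate
fisher = np.zeros(1)  # running diagonal metric estimate
for x in data:
    grad = theta - x                          # gradient of 0.5 * (theta - x)**2
    fisher = 0.99 * fisher + 0.01 * grad**2   # EMA of squared gradients
    theta = natural_gradient_step(theta, grad, fisher)

print(theta)  # close to the true mean, 3.0

The point of the preconditioning is invariance: the effective step size adapts per parameter to an estimate of the local metric rather than to the arbitrary parameterization, which is one reason such updates can be more robust than plain gradient descent.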
