Seminar series
Date
Mon, 22 Jan 2024
Time
14:00 - 15:00
Location
Lecture Room 3
Speaker
Prof. Justin Sirignano
Organisation
Mathematical Institute, University of Oxford

Mathematical methods are developed to characterize the asymptotics of recurrent neural networks (RNNs) as the number of hidden units, data samples in the sequence, hidden state updates, and training steps simultaneously grow to infinity. In the case of an RNN with a simplified weight matrix, we prove convergence of the RNN to the solution of an infinite-dimensional ODE coupled with the fixed point of a random algebraic equation.
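For orientation, a generic RNN with N hidden units processes a data sequence x^1, x^2, … through a hidden-state recursion of the form (the notation and the 1/N normalization here are illustrative, not necessarily those used in the talk):

  h_i^{k+1} = σ( (1/N) Σ_{j=1}^{N} W_{ij} h_j^k + u_i x^{k+1} ),   i = 1, …, N,

with an output such as ŷ^k = (1/N) Σ_{i=1}^{N} c_i h_i^k. The asymptotic regime of interest is the one in which N, the sequence length k, and the number of training steps grow together.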
The analysis requires addressing several challenges that are unique to RNNs. In typical mean-field applications (e.g., feedforward neural networks), discrete updates are of magnitude O(1/N) and the number of updates is O(N). The system can therefore be represented as an Euler approximation of an appropriate ODE/PDE, to which it converges as N → ∞. The RNN hidden-layer updates, however, are O(1), so RNNs cannot be represented as a discretization of an ODE/PDE and standard mean-field techniques cannot be applied. Instead, we develop a fixed point analysis for the evolution of the RNN memory state, with convergence estimates in terms of the number of update steps and the number of hidden units. The RNN hidden layer is studied as a function in a Sobolev space, whose evolution is governed by the data sequence (a Markov chain), the parameter updates, and its dependence on the RNN hidden layer at the previous time step. Due to the strong correlation between updates, a Poisson equation must be used to bound the fluctuations of the RNN around its limit equation. These mathematical methods allow us to prove a neural tangent kernel (NTK) limit for RNNs trained on data sequences as the number of data samples and the size of the neural network grow to infinity.
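As a point of comparison (a minimal sketch with generic notation, not taken from the talk), the feedforward mean-field argument rests on the observation that an O(1/N) update repeated O(N) times is an Euler scheme: if

  θ^{k+1} = θ^k + (1/N) G(θ^k),   k = 0, 1, …, ⌊NT⌋,

then θ^{⌊Nt⌋} → Θ_t as N → ∞, where dΘ_t/dt = G(Θ_t). The RNN hidden-state update h^k → h^{k+1} is O(1) rather than O(1/N), so no step size vanishes in the limit and this Euler interpretation is unavailable, which is what motivates the fixed point analysis described above.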
