16:00
Particle filters for Data Assimilation
Note: we recommend joining the meeting via the Teams client for the best user experience.
Abstract
Modern Data Assimilation (DA) can be traced back to the 1960s and owes a lot to earlier developments in linear filtering theory. Since then, DA has evolved largely independently of filtering theory. Today it is a massively important area of research due to its many applications in meteorology, ocean prediction, hydrology, oil reservoir exploration, etc. The field has been driven largely by practitioners; in recent years, however, an increasing body of theoretical work has been devoted to it. In this talk, I will advocate the interpretation of DA through the language of stochastic filtering. This interpretation allows us to make use of advanced particle filters to produce rigorously validated DA methodologies. I will present a particle filter that incorporates three additional add-on procedures: nudging, tempering and jittering. The particle filter is tested on a two-layer quasi-geostrophic model with O(10^6) degrees of freedom, of which only a minute fraction are noisily observed.
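To make two of the add-on procedures concrete, here is a minimal Python sketch of a bootstrap particle filter with tempering and jittering on a toy one-dimensional state-space model. The dynamics, noise levels, particle count and number of tempering stages are illustrative assumptions, not the setup used in the talk, and nudging is omitted for brevity.

```python
# Minimal sketch (illustrative, not the speaker's implementation): bootstrap
# particle filter on a toy 1-D state-space model, with tempering and jittering.
import numpy as np

rng = np.random.default_rng(0)

N = 500            # number of particles
T = 50             # number of assimilation steps
sigma_x = 0.5      # model (signal) noise std
sigma_y = 0.3      # observation noise std
n_temper = 4       # number of tempering stages per assimilation step
jitter_std = 0.05  # std of the jittering perturbation

def model_step(x):
    # Toy nonlinear signal dynamics with additive noise.
    return 0.9 * np.sin(x) + sigma_x * rng.standard_normal(x.shape)

def log_likelihood(y, x):
    # Gaussian observation density y ~ N(x, sigma_y^2), up to a constant.
    return -0.5 * ((y - x) / sigma_y) ** 2

# Synthetic truth and noisy observations.
truth = np.zeros(T)
obs = np.zeros(T)
for t in range(1, T):
    truth[t] = model_step(np.array([truth[t - 1]]))[0]
    obs[t] = truth[t] + sigma_y * rng.standard_normal()

particles = rng.standard_normal(N)  # initial ensemble
estimates = np.zeros(T)

for t in range(1, T):
    # Forecast: propagate every particle through the model.
    particles = model_step(particles)

    # Tempering: assimilate the observation in n_temper fractional steps,
    # resampling and jittering after each stage to keep the ensemble diverse.
    for _ in range(n_temper):
        logw = log_likelihood(obs[t], particles) / n_temper
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)                   # multinomial resampling
        particles = particles[idx]
        particles += jitter_std * rng.standard_normal(N)   # jittering

    estimates[t] = particles.mean()

print("RMSE of posterior-mean estimate:", np.sqrt(np.mean((estimates - truth) ** 2)))
```

Tempering splits the likelihood into several fractional updates so that no single resampling step collapses the ensemble, while jittering adds a small perturbation after each resampling to restore particle diversity; the high-dimensional quasi-geostrophic application in the talk additionally relies on nudging.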
Asymptotic Analysis of Deep Residual Networks
Abstract
Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of the network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation (SDE) or neither of these. Furthermore, we formally prove the linear convergence of gradient descent to a global optimum for the training of deep residual networks with constant layer width and smooth activation function. We further prove that if the trained weights, as a function of the layer index, admit a scaling limit as the depth increases, then the limit has finite 2-variation.
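As a rough illustration of the depth-scaling question (a simplified sketch, not the experiments or proofs of the talk), the following Python snippet runs a forward pass of a toy residual network h_{k+1} = h_k + L^{-alpha} f(h_k; W_k) with untrained i.i.d. weights, contrasting the alpha = 1 scaling usually associated with a neural ODE limit against a diffusive alpha = 1/2 scaling. The width, depth and weight distribution are arbitrary choices for illustration.

```python
# Minimal sketch (assumptions, not the talk's setup): forward pass of a toy
# residual network under two depth scalings of the residual blocks,
#   h_{k+1} = h_k + L**(-alpha) * f(h_k; W_k).
# With alpha = 1 the updates resemble an Euler scheme for an ODE; with
# alpha = 1/2 and i.i.d. weights the hidden state behaves more diffusively.
import numpy as np

rng = np.random.default_rng(1)

d = 16      # layer width (constant across layers)
L = 1000    # network depth

def forward(h0, alpha):
    h = h0.copy()
    for k in range(L):
        W = rng.standard_normal((d, d)) / np.sqrt(d)  # untrained i.i.d. weights
        h = h + L ** (-alpha) * np.tanh(W @ h)        # smooth activation
    return h

h0 = rng.standard_normal(d)
print("alpha = 1.0 (ODE-like scaling):  ||h_L - h_0|| =",
      np.linalg.norm(forward(h0, 1.0) - h0))
print("alpha = 0.5 (diffusive scaling): ||h_L - h_0|| =",
      np.linalg.norm(forward(h0, 0.5) - h0))
```

The abstract's question is which scaling, if any, the weights actually trained by stochastic gradient descent follow as the depth grows, and hence which continuous-depth limit (ODE, SDE or neither) is appropriate.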