Given a model dynamical system, a model of the measuring apparatus relating states to observations, and a prior assessment of uncertainty, the probability density of subsequent system states, conditioned on the history of the observations, is of considerable practical interest.
When observations are made at discrete times, the evolving probability density is known to satisfy the Bayesian filtering equations. This talk will describe the difficulties in approximating the evolving probability density by a Gaussian mixture (i.e. a sum of Gaussian densities). In general this leads to a sequence of optimisation problems and associated high-dimensional integrals. Further difficulties arise from the necessity of using a small number of densities in the mixture, the requirement to maintain sparsity of any matrices, and the need to compute first and, somewhat disturbingly, second derivatives of the misfit between predictions and observations. Adjoint methods, Taylor expansions, Gaussian random fields and Newton's method can be combined to, possibly, provide a solution. The approach is essentially a combination of filtering methods and '4-D Var' methods, and some recent progress will be described.
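To fix ideas, the discrete-time filtering update with a Gaussian-mixture approximation can be sketched as follows. This is a minimal illustration, not the method of the talk: it assumes a scalar state with linear dynamics x' = a·x + noise and a direct observation y = x + noise, so that each mixture component admits an exact per-component Kalman update and the mixture weights are reweighted by each component's marginal likelihood of the observation. The function name and parameters (`a`, `q`, `r`) are hypothetical.

```python
import numpy as np

def gm_filter_step(weights, means, variances, y, a=0.9, q=0.05, r=0.1):
    """One discrete-time Bayesian filtering step for a scalar state whose
    density is approximated by a Gaussian mixture.

    Illustrative sketch only: assumes linear scalar dynamics x' = a*x with
    process-noise variance q, and observation y = x with noise variance r,
    so each component stays Gaussian and the update is exact per component.
    """
    # Prediction: push each Gaussian component through the linear dynamics.
    m_pred = a * means
    v_pred = a**2 * variances + q

    # Update: per-component Kalman gain and posterior moments.
    s = v_pred + r                      # innovation variance per component
    k = v_pred / s                      # Kalman gain per component
    m_post = m_pred + k * (y - m_pred)
    v_post = (1.0 - k) * v_pred

    # Reweight each component by its marginal likelihood N(y; m_pred, s),
    # then normalise so the weights again sum to one.
    lik = np.exp(-0.5 * (y - m_pred) ** 2 / s) / np.sqrt(2.0 * np.pi * s)
    w = weights * lik
    w = w / w.sum()
    return w, m_post, v_post
```

In the general (nonlinear, high-dimensional) setting described in the abstract, the prediction and update steps above become the optimisation problems and high-dimensional integrals referred to in the talk, and keeping the number of components small typically requires an additional mixture-reduction step, omitted here.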