Particle exchange models with several conservation laws
Abstract
In this talk I will present an exclusion process with three types of particles, A, B and C, the last of which can be understood as holes. Two scaling limits will be discussed: hydrodynamic limits in the boundary-driven setting, and equilibrium fluctuations for an evolution on the torus. In the latter case, we distinguish several regimes, depending on the choice of the jump rates, in which the limit is either the stochastic Burgers equation or the Ornstein-Uhlenbeck equation (schematic forms of both are sketched after this abstract). These results match the predictions of non-linear fluctuating hydrodynamics.
(Joint work with G. Cannizzaro, A. Occelli and R. Misturini).
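As a quick reference for the two limiting equations named in the abstract, here is a schematic, single-component sketch in the form usually written in non-linear fluctuating hydrodynamics. The coefficients ν, λ and D are placeholders, and the actual (possibly coupled, multi-component) equations arising from a model with several conservation laws may differ.

```latex
% Schematic single-component forms (assumes amsmath); \nu, \lambda, D are
% placeholder coefficients and \xi denotes space-time white noise on the torus.
\begin{align*}
  \partial_t u &= \nu\,\partial_x^2 u + \sqrt{2D}\,\partial_x \xi
    && \text{(Ornstein--Uhlenbeck equation)} \\
  \partial_t u &= \nu\,\partial_x^2 u + \lambda\,\partial_x\!\big(u^2\big) + \sqrt{2D}\,\partial_x \xi
    && \text{(stochastic Burgers equation)}
\end{align*}
```

The quadratic term λ ∂ₓ(u²) is what separates the Burgers regime from the Ornstein-Uhlenbeck regime; whether it survives in the limit depends on the choice of jump rates mentioned in the abstract.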
Recovering scattering distributions from covariance-map images of product distributions
Abstract
Molecules can be broken apart with a high-powered laser or an electron beam. The positions of the charged fragments can then be detected on a screen, and from the mass-to-charge ratio the identity of the fragments can be determined. The covariance of two fragments then gives us the projection of a distribution related to the initial scattering distribution. We formulate the mathematical transformation from the scattering distribution to the covariance distribution obtained from experiments. We expand the scattering distribution in terms of basis functions to obtain a linear system for the coefficients, which we use to solve the inverse problem. Finally, we show the results of our method on three examples of test data, as well as on experimental data.
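To illustrate the inversion step described in the abstract (expand in basis functions, obtain a linear system for the coefficients, solve the inverse problem), here is a minimal numerical sketch. It is not the authors' method: the forward-mapped basis images, the grid sizes, the noise level and the Tikhonov regularisation term are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): recover the basis coefficients of a
# scattering distribution from a measured covariance image by linear least squares.
import numpy as np

rng = np.random.default_rng(0)

n_basis = 8      # number of basis functions in the expansion (illustrative)
n_pixels = 200   # number of pixels in the flattened covariance image (illustrative)

# Column j of A stands in for the covariance image produced by the j-th basis
# function under the linear forward transformation described in the abstract.
A = rng.normal(size=(n_pixels, n_basis))

# Synthetic "measured" covariance image: a known coefficient vector plus noise.
c_true = rng.normal(size=n_basis)
y = A @ c_true + 0.01 * rng.normal(size=n_pixels)

# Solve the linear system for the coefficients, with a small Tikhonov term
# to stabilise the inversion against measurement noise.
lam = 1e-3
c_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_basis), A.T @ y)

print("relative error:", np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true))
```

In an application, the columns of A would come from pushing each basis function through the formulated transformation, and the fitted coefficients would reconstruct the scattering distribution as the corresponding linear combination.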
Solving Continuous Control via Q-Learning
Abstract
While there have been substantial successes of actor-critic methods in continuous control, simpler critic-only methods such as Q-learning often remain intractable in the associated high-dimensional action spaces. Most actor-critic methods, however, come at the cost of added complexity: heuristics for stabilisation, compute requirements, and wider hyperparameter search spaces. To address this, we demonstrate in two stages how a simple variant of Deep Q-Learning matches state-of-the-art continuous actor-critic methods when learning from simpler features or even directly from raw pixels. First, we take inspiration from control theory and shift from continuous control with policy distributions whose support covers the entire action space to pure bang-bang control via Bernoulli distributions. Second, we combine this approach with naive value decomposition, framing single-agent control as cooperative multi-agent reinforcement learning (MARL). Finally, we add illustrative examples from control theory as well as classical bandit examples from cooperative MARL to provide intuition for 1) when action extrema are sufficient and 2) how decoupled value functions leverage state information to coordinate joint optimisation.
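To make the two ingredients concrete (bang-bang control over action extrema, and naive value decomposition across action dimensions), here is a minimal numpy sketch. It is an editor's illustration under assumed shapes and a linear critic, not the implementation from the talk: action selection is approximated with an epsilon-greedy rule over the two extrema of each dimension rather than learned Bernoulli policies, and names such as q_values, act and td_target are hypothetical.

```python
# Minimal sketch (illustrative, not the paper's implementation): each action
# dimension gets its own pair of Q-values, one per extremum, and the joint
# value is their mean, as in naive value decomposition from cooperative MARL.
import numpy as np

rng = np.random.default_rng(0)

obs_dim, act_dim = 6, 3                                  # illustrative sizes
W = rng.normal(scale=0.1, size=(obs_dim, act_dim, 2))    # linear critic weights

def q_values(obs):
    """Per-dimension Q-values: entry [i, k] scores extremum k of action dimension i."""
    return np.einsum("o,oik->ik", obs, W)

def act(obs, eps=0.1):
    """Epsilon-greedy bang-bang policy: pick -1 or +1 independently per dimension."""
    q = q_values(obs)
    greedy = q.argmax(axis=1)                            # 0 -> low extremum, 1 -> high
    explore = rng.random(act_dim) < eps
    choice = np.where(explore, rng.integers(0, 2, act_dim), greedy)
    return 2.0 * choice - 1.0                            # map {0, 1} to {-1, +1}

def td_target(reward, next_obs, gamma=0.99):
    """Decomposed bootstrap value: mean over the per-dimension maxima."""
    return reward + gamma * q_values(next_obs).max(axis=1).mean()

obs = rng.normal(size=obs_dim)
print("action:", act(obs), "target:", td_target(0.5, rng.normal(size=obs_dim)))
```

Restricting each dimension to its two extrema keeps the per-dimension argmax trivial, so greedy action selection scales linearly rather than exponentially with the number of action dimensions, while the averaged per-dimension values play the role of the joint critic.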