15:30
Let $X_1, X_2, \ldots$ be i.i.d. copies of some real random variable $X$. For any $\varepsilon_2, \varepsilon_3, \ldots$ in $\{0,1\}$, a basic algorithm introduced by H. A. Simon yields a reinforced sequence $\hat{X}_1, \hat{X}_2, \ldots$ as follows. If $\varepsilon_n=0$, then $\hat{X}_n$ is a uniform random sample from $\hat{X}_1, \ldots, \hat{X}_{n-1}$; otherwise $\hat{X}_n$ is a new independent copy of $X$. The purpose of this talk is to compare the scaling exponent of the usual random walk $S(n)=X_1+\ldots+X_n$ with that of its step-reinforced version $\hat{S}(n)=\hat{X}_1+\ldots+\hat{X}_n$. Depending on the tail of $X$ and on the asymptotic behavior of the sequence $(\varepsilon_j)$, we show that step reinforcement may speed up the walk, or on the contrary slow it down, or leave the scaling exponent unaffected. Our motivation partly stems from the study of random walks with memory, notably the so-called elephant random walk and its variations.
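The reinforcement step described above can be sketched in a few lines of Python. This is only an illustrative simulation, not code from the talk; the convention $\hat{X}_1 = X_1$ (the first step is always a fresh sample) and the function names are assumptions for the sketch.

```python
import random

def reinforced_steps(xs, eps):
    """Simon's reinforcement algorithm (sketch).

    xs  : i.i.d. samples X_1, X_2, ... of X
    eps : bits eps_2, eps_3, ... (1 = take a new independent copy,
          0 = repeat a uniformly chosen past step)
    Convention assumed here: the first reinforced step is X-hat_1 = X_1.
    """
    hat = [xs[0]]  # X-hat_1 = X_1 (assumed convention)
    k = 1          # index of the next unused fresh sample in xs
    for e in eps:
        if e == 1:
            hat.append(xs[k])               # innovation: new copy of X
            k += 1
        else:
            hat.append(random.choice(hat))  # repetition: uniform past step
    return hat

def partial_sums(steps):
    """Partial sums, i.e. S(n) or S-hat(n) depending on the input steps."""
    total, sums = 0.0, []
    for x in steps:
        total += x
        sums.append(total)
    return sums
```

For example, if every $\varepsilon_n = 1$ the reinforced walk reduces to the ordinary walk, while if every $\varepsilon_n = 0$ the walk keeps repeating its first step.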
Further Information
Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.