Episode 3

OOMC episode 3 probability with Lauren

In episode 3, we look at some probability with Lauren (3rd year, Maths & Stats), and we approximate some functions with James.

Random Walks

Lauren’s slides are here.

For the frog jumping on a line of lilypads, Lauren showed that the probability that the frog gets back to the starting lilypad is 1. We also mentioned that if the frog was instead allowed to jump around on a 2D grid of lilypads, then the probability of eventually returning to the starting lilypad is also 1. But with a 3D grid of lilypads (and a frog that’s allowed to jump left/right/forwards/backwards/up/down) the probability is not 1. In fact, it’s about 34%. This is one of Pólya’s Random Walk Constants. That MathWorld page has a terrifying triple-integral (!) expression for the probability. We don’t have a simple exact expression for the probability in dimensions higher than three, but it gets smaller the more dimensions you have (the table of “numerical values” on that MathWorld page gives numbers that have been calculated numerically by a computer).
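If you’d like to see these probabilities emerge for yourself, here’s a quick simulation sketch (my own code, not from the livestream). It estimates the return probability by running lots of random walks; note that each walk is cut off after a fixed number of steps, so the 1D and especially the 2D estimates come out a little below the true answer of 1.

```python
import random

def returns_to_start(dim, max_steps=5000):
    """Simulate one walk on the integer grid in `dim` dimensions; report
    whether it revisits the starting point within max_steps steps."""
    pos = [0] * dim
    for _ in range(max_steps):
        axis = random.randrange(dim)          # pick a coordinate direction
        pos[axis] += random.choice((-1, 1))   # hop one lilypad along it
        if all(c == 0 for c in pos):
            return True
    return False

def estimate_return_probability(dim, trials=1000, max_steps=5000):
    """Monte Carlo estimate of the chance of ever returning to the start."""
    hits = sum(returns_to_start(dim, max_steps) for _ in range(trials))
    return hits / trials

# 1D and 2D walks return with probability 1 (the 2D estimate is noticeably
# below 1 because returns there can take a very long time); in 3D the true
# answer is Pólya's constant, about 0.34.
for d in (1, 2, 3):
    print(d, estimate_return_probability(d))
```

The step cap is the price of a simulation: a walk that hasn’t returned yet might still return later, so these are always slight underestimates.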

Lauren also mentioned some general theory about recurrence relations. I don’t think you learn this at A-level or equivalent (it depends how much “discrete mathematics” you’re doing), but it’s a bit of first-year university maths. The part that looks like magic is the step where Lauren took an equation relating together different terms of a sequence, tried $\lambda^n$ (lambda to the power of $n$), and it worked.
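To see the trick on a simpler example (a made-up one, not Lauren’s exact recurrence): take $a_{n+1} = a_n + a_{n-1}$. Guessing $a_n = \lambda^n$ gives

$$\lambda^{n+1} = \lambda^n + \lambda^{n-1},$$

and dividing through by $\lambda^{n-1}$ leaves the “characteristic equation” $\lambda^2 = \lambda + 1$, with roots $\lambda = (1 \pm \sqrt{5})/2$. Any combination $A\lambda_1^n + B\lambda_2^n$ then satisfies the recurrence, and the constants $A$ and $B$ are pinned down by the first two terms of the sequence.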

There’s a textbook entry on the theory that Lauren is using here with some examples and an explanation of the general technique. I’m linking to a textbook rather than giving you a simplified explanation because I think that at least some of you just want to see the literal technique that you might be taught at university. It's a tough read though; reading maths textbooks is a huge amount of work, and usually involves getting stuck or confused, stopping, going back to try to understand something earlier, then coming back and thinking again about the bit that didn’t make sense.

Binomial series and Taylor series

I rushed through the binomial series a bit on the livestream (because I wanted to get to the Taylor Swift joke at the end). It’s in A-level and equivalent courses, so you will see it again! There’s a nice clean statement over on Wikipedia https://en.wikipedia.org/wiki/Binomial_series, but be warned: the statement there uses complex numbers. At A-level, you will not be told about the binomial series with complex numbers, which is a shame, because it’s exactly the same expression as the binomial series with real numbers.
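For reference, the statement with real numbers (valid for $|x| < 1$ and any real $\alpha$) is

$$(1+x)^\alpha = \sum_{k=0}^{\infty} \binom{\alpha}{k} x^k, \qquad \binom{\alpha}{k} = \frac{\alpha(\alpha-1)\cdots(\alpha-k+1)}{k!}.$$

When $\alpha$ is a positive whole number, the coefficients are eventually all zero and this collapses to the familiar finite binomial expansion.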

Here’s the Desmos page that Sergey sent in about binomial expansions: https://www.desmos.com/calculator/qbhr9khqep

And here’s the Desmos page that I made for the Taylor series for cosine: https://www.desmos.com/calculator/vuksdkymfr

The expressions that we’ve put into Desmos here are a bit hard to read. Sergey is using sigma notation for a sum (addition of terms with different $k$) and pi notation for a product (multiplication of terms with different $l$), which you might not have seen before. It’s tricky because Desmos needs to be told exactly what to do; it can’t deal with dots to indicate “and so on”. On the Wikipedia page for Binomial Series, you can see that there are two sets of “and so on” dots; one to indicate that the powers of $x$ keep going, and one to indicate that there are lots of brackets multiplied together in each coefficient. Sergey’s sum (big capital sigma, $\Sigma$) is to deal with the first set of “and so on” dots, and the product (big capital pi, $\Pi$) is to deal with the second set of “and so on” dots. Try moving the sliders to see how increasing the number of terms ($m$) gives a better approximation.
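If it helps to see the same $\Sigma$/$\Pi$ structure outside Desmos, here’s a rough Python translation (the variable names are mine, not Sergey’s): an outer loop for the sum over $k$, and an inner loop building each coefficient as a product.

```python
def binomial_partial_sum(a, x, m):
    """Partial sum of the binomial series for (1 + x)^a, written to mirror
    the Desmos expression: an outer sum (the Sigma) over k, and an inner
    product (the Pi) building the k-th coefficient."""
    total = 0.0
    for k in range(m + 1):            # Sigma: one term for each power of x
        coeff = 1.0
        for l in range(k):            # Pi: the k brackets multiplied together
            coeff *= (a - l) / (l + 1)
        total += coeff * x ** k
    return total

# (1 + 0.2)^0.5 is the square root of 1.2; more terms approximate it better
for m in (1, 3, 10):
    print(m, binomial_partial_sum(0.5, 0.2, m))
print(1.2 ** 0.5)
```

Just like the slider in Desmos, increasing $m$ here adds more terms and tightens the approximation (for $|x| < 1$).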

My expression for the cosine approximations is much worse, because I need to work out lots of derivatives of cosine at an arbitrary point. There is a pattern to these derivatives (you learn this pattern at some point in A-level), and I’ve written some fairly messy expressions to teach Desmos that pattern of derivatives. It’s not worth looking too closely at the expression! Instead, play around with the sliders, and try to see if you can get a feel for how the blue polynomial approximates the red cosine curve near to the black point. The $m$-slider controls the number of terms in the approximation, which is also the “degree” (highest power) of the polynomial.
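Outside of Desmos the same idea is much tidier. Here’s a sketch in Python (my own code, nothing to do with the Desmos internals), using the fact that the derivatives of cosine cycle with period four, so the $k$-th derivative of $\cos$ at $a$ is $\cos(a + k\pi/2)$:

```python
import math

def cos_taylor(x, a, m):
    """Degree-m Taylor polynomial of cos about the point a.

    The derivatives of cos cycle: cos, -sin, -cos, sin, cos, ...
    which is exactly cos shifted by k * pi/2.
    """
    total = 0.0
    for k in range(m + 1):
        deriv_at_a = math.cos(a + k * math.pi / 2)   # k-th derivative at a
        total += deriv_at_a * (x - a) ** k / math.factorial(k)
    return total

# near the point a, a handful of terms already matches cos closely
print(cos_taylor(1.0, a=0.0, m=6), math.cos(1.0))
```

Moving the “black point” in the Desmos page corresponds to changing $a$ here; moving the $m$-slider corresponds to changing `m`.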


If you want to get in touch with us about any of the mathematics in the video or the further reading, feel free to email us at oomc [at] maths.ox.ac.uk.