Season 9 Episode 8
Georgina joins us on the Oxford Online Maths Club to explore what happens to the roots of polynomials as you change the parameters.

Further Reading
Taylor Series
Imagine that you need to tell someone about values of \(y=e^x\). Bad news: they only know about polynomials. Good news: they only need to calculate this for small values of \(x\), and they’re not worried about being a little bit wrong.
The easiest thing to say is “eh, \(e^x\) is 1, give or take, for small \(x\)”. And maybe that will do! But can we do better?
To use the language from the livestream, we might think of that as a leading-order approximation to \(e^x\) for small \(x\), and we might try to find a next-order correction to it.
Something that our approximation has not captured is that \(e^x\) is an increasing function. Maybe that gives us the idea to add a term that increases with \(x\), and we might remember that \(e^x\) has derivative 1 at \(x=0\); we know about exponentials, even if our friend doesn’t. So we might improve our approximation by writing \(e^x \approx 1+x\). This does a bit better: look at this Desmos plot and zoom in near \((0,1)\), and you’ll see that the green straight line is closer to the red exponential curve than the yellow horizontal line is.
The big idea is that if we’re willing to work with a polynomial of higher degree, we can find a better approximation to \(e^x\). I’ve included two more polynomials in that Desmos plot. How did I find the coefficients? Well, for the quadratic I made sure that not only did it have the same value and derivative as \(e^x\) at \(x=0\), but also the same second derivative. And then for the cubic, I also matched the third derivative.
I also want you to notice that none of these polynomials does a particularly good job of approximating \(e^x\) when \(x\) isn’t a small number! In particular, if \(x\) is very negative, then they all behave completely differently from \(e^x\). More subtly, if \(x\) is very positive, then they don’t grow quickly enough (exponentials grow faster than polynomials; this is true, but it isn’t obvious).
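As a quick check of that recipe, here’s a short Python sketch (my own illustration, not anything from the livestream): matching the \(k\)-th derivative of \(e^x\) at \(x=0\) (they’re all 1) forces the coefficient of \(x^k\) to be \(1/k!\), and raising the degree really does shrink the error for small \(x\).

```python
import math

def exp_taylor(x, degree=3):
    # Maclaurin polynomial for e^x: matching the k-th derivative at x=0
    # (all equal to 1) forces the coefficient of x^k to be 1/k!.
    return sum(x ** k / math.factorial(k) for k in range(degree + 1))

# Each extra degree gets us closer to the true value near x = 0.
for d in (0, 1, 2, 3):
    print(d, exp_taylor(0.1, d), math.exp(0.1))
```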
This works for other functions, not just \(e^x\). If you can differentiate it then you can approximate it. For example, here are some polynomial approximations to \(\cos(x)\) (in radians) in Desmos.
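Here’s a similar sketch (again my own, not the Desmos plot itself) for \(\cos(x)\); it also shows the breakdown away from \(x=0\) mentioned above.

```python
import math

def cos_taylor(x, n_terms=3):
    # Maclaurin polynomial for cos(x): sum over k of (-1)^k x^(2k) / (2k)!
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(n_terms))

# Very close to cos near x = 0, drifting apart as x grows.
for x in (0.3, 1.5, 3.0):
    print(x, cos_taylor(x), math.cos(x))
```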
This sort of approximation is called a Taylor series (or perhaps a Maclaurin series, since we’re talking about approximations near the point \(x=0\) in particular). See Wolfram MathWorld for a definition, and see 3Blue1Brown for an excellent video explainer: Taylor series | Chapter 11, Essence of calculus | YouTube.
Solving Equations
Georgina’s plan in the livestream was to try a series expansion for the location of the root. Instead of differentiation, just plug it in and see what happens.
For context, this content is in the second year of some Mathematics degrees (like the one at Oxford), and might not be covered until masters level in some other Mathematics degree courses. Well done for following it so far!
We saw one example near the end where one of the roots escaped to infinity: the roots of $$ \epsilon x^2 + x - 1 = 0 $$
In this case, Georgina suggested that we should try a series expansion that started with a term proportional to \(\epsilon^{-1}\). If you try \( a\epsilon^{-1} \) then you find \( a^2+a=0\) at leading order, and from there you can take \( a=-1\) (the other solution, \(a=0\), corresponds to the root that stays put) and add as many correction terms as you like, solving for one coefficient at a time, generating better and better approximations to the location of the root as it runs off to infinity.
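To see how good that expansion is, here’s a small numerical sketch. It uses the quadratic formula rather than the coefficient-by-coefficient method, and the first three terms \( -\epsilon^{-1} - 1 + \epsilon \) come from my own working through that method (do check them yourself).

```python
import math

def escaping_root(eps):
    # Exact root of eps*x^2 + x - 1 = 0 that escapes to infinity:
    # the "minus" branch of the quadratic formula.
    return (-1 - math.sqrt(1 + 4 * eps)) / (2 * eps)

def asymptotic(eps):
    # First three terms of the expansion: -1/eps - 1 + eps
    # (coefficients from solving one order at a time; verify these!)
    return -1 / eps - 1 + eps

eps = 0.01
print(escaping_root(eps), asymptotic(eps))
```

Shrinking \(\epsilon\) makes the agreement better, even though the root itself is running off to infinity.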
Harder problems
Here’s an example where something like that goes wrong. $$ \epsilon x^2 - 1 =0 $$
If you check in Desmos (or solve the equation), you’ll find that both roots run off to infinity as \(\epsilon\) gets close to zero. Solving exactly, the roots are at $\pm \epsilon^{-1/2}$. That doesn’t fit the pattern we’ve had so far; the power isn’t a whole number! In hindsight, $\epsilon^{-1/2}$ is the right power to get the two terms to be the same size as each other (indeed, in this example the two terms are equal, because that’s all we’ve got!).
In general when we’re analysing problems like this, we choose this power of $\epsilon$ so that at least two terms in the equation are the same size.
Let’s try this for the equation $$ x^3-\epsilon x + \epsilon =0. $$
There's a root near $x=0$, but if we try $x=a\epsilon$ then we get a nasty surprise; that perturbation is too small, and the left-hand side becomes $\epsilon$ plus some tiny terms. Oops. That doesn’t even involve $a$, so we can’t solve it for $a$. Instead, we realise that we’ve got to do something about this $+\epsilon$ term, and choose $x=a\epsilon^{1/3}$ instead. That balances the $x^3$ term with the $\epsilon$ term, without making the $\epsilon x$ term problematically large: the equation becomes $a^3\epsilon+\epsilon=0$ plus smaller terms, so $a^3=-1$ and $a=-1$. Then we might work carefully from there, trying not to assume anything about the next power of $\epsilon$, except that it will be a higher power than $1/3$, so that the next correction is smaller than what we’ve currently got.
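Here’s a sketch of that check in Python: a simple bisection finds the small root numerically, and we compare it with the leading term $-\epsilon^{1/3}$ and with a two-term approximation whose second term, $-\tfrac{1}{3}\epsilon^{2/3}$, comes from my own working through the next balance (worth verifying yourself).

```python
def f(x, eps):
    # Left-hand side of x^3 - eps*x + eps = 0.
    return x ** 3 - eps * x + eps

def bisect(lo, hi, eps, iters=60):
    # Plain bisection; f(lo) and f(hi) must have opposite signs.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo, eps) * f(mid, eps) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

eps = 1e-3
root = bisect(-0.2, 0.0, eps)
leading = -eps ** (1 / 3)                   # from balancing x^3 against +eps
corrected = leading - eps ** (2 / 3) / 3    # hypothetical next term; check it!
print(root, leading, corrected)
```

Each extra term in the expansion cuts the error down by roughly another factor of $\epsilon^{1/3}$, which is exactly the “smaller correction” behaviour we hoped for.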
Boundary Layers for Flight
At the end, Georgina described some fluid dynamics to do with the flow around a wing. For this problem, the small parameter is the inverse of the Reynolds number; the Reynolds number measures how large inertial forces are compared with viscous forces, so for fast flow past a wing it is huge, and its inverse is a measure of the stickiness or internal friction of the air. Here’s a link to a Sixty Symbols video on Reynolds number.
The first thing you might try is to just ignore friction (let \(\epsilon=0\)). This leads to strange consequences, like the result that the drag on a cylinder vanishes for inviscid flow. This is called D’Alembert’s paradox and if it were how the real world worked, then you could stand outside on a windy day and you’d feel zero force from the wind. If you were a cylinder. It’s not a perfect analogy!
If you want to get in touch with us about any of the mathematics in the video or the further reading, feel free to email us on oomc [at] maths.ox.ac.uk.