Week 5: Taylor Approximations#

Demo by Christian Mikkelstrup, Hans Henrik Hermansen, Jakob Lemvig, Karl Johan Måstrup Kristensen, and Magnus Troen

from sympy import *
from sympy.abc import x,y,z,u,v,w,t
from dtumathtools import *
init_printing()

Taylor Polynomials for Functions of one Variable#

We would like to approximate \(\ln(x)\) and \(\sin(x)\) via Taylor polynomials and investigate how the degree of the polynomial influences the approximation of the original function.

The command for Taylor expansion in SymPy is series and it has the following format:

\(\verb|series(function, variable, x0, K+1)|\)

NOTE!!: It is important to remember that the last argument in SymPy’s function call must be one larger than the degree \(K\) described in the textbook. So, if one for instance wants the approximating polynomial of sixth degree, \(P_6(x)\), for the function \(\cos(x)\) with the expansion point \(x_0 = 0\), one must in SymPy write:

series(cos(x), x, 0, 7)
\[\displaystyle 1 - \frac{x^{2}}{2} + \frac{x^{4}}{24} - \frac{x^{6}}{720} + O\left(x^{7}\right)\]

Furthermore one can see that SymPy adds the term \(O(x^{K+1})\). This notation is called “Big O” and means, roughly speaking, that the error approaches zero at least as fast as \(x^{K+1}\) when \(x \to x_0\). It is thus a description of the remainder term \(R_K(x)\) from Taylor’s formula 4.3.2 in the textbook. It cannot, however, be used to find an explicit expression for the remainder function; for that you must yourself investigate the function on the intended interval.
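To make the connection between the two notations explicit (this spelled-out identity is not part of the original demo, only an illustration), Taylor’s formula for \(\cos(x)\) with \(K = 6\) and \(x_0 = 0\) reads

\[\begin{equation*} \cos(x) = \underbrace{1 - \frac{x^2}{2} + \frac{x^4}{24} - \frac{x^6}{720}}_{P_6(x)} + R_6(x), \qquad R_6(x) = O\left(x^{7}\right) \text{ for } x \to 0, \end{equation*}\]

so the \(O\)-term tells us how fast the error vanishes near the expansion point, but not how large it is on a given interval.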

If we only want the Taylor polynomial (and not information about the size of the error), we will remove the \(O(x^{K+1})\) term using \(\verb|.removeO()|\):

series(cos(x), x, 0, 7).removeO()
\[\displaystyle 1 - \frac{x^{2}}{2} + \frac{x^{4}}{24} - \frac{x^{6}}{720}\]

This is now a polynomial we can evaluate. So, with this information we can now investigate functions and their approximations. Consider for example the function \(f:=\ln(x)\):

Here we will first create a plot with \(\verb|show = False|\), in order for us to add the other plots to this same plot with \(\verb|.extend()|\). The cell that produces the \(\ln(x)\) plot is not included in this export; a minimal sketch, assuming the expansion point \(x_0 = 1\) (\(\ln\) is not defined at \(0\)) and the degrees \(K = 0, 1, 2, 3\), could look as follows:
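pl = plot(ln(x), (x, 0.1, 3), ylim=(-3, 2), show=False, legend=True)
for K in [0, 1, 2, 3]:
    # Taylor polynomial of degree K of ln(x) with expansion point x0 = 1
    newseries = series(ln(x), x, 1, K + 1).removeO()
    display(Eq(Function(f'P_{K}')(x), newseries))
    newplot = plot(newseries, (x, 0.1, 3), label=f"K = {K}", show=False)
    pl.extend(newplot)
pl.show()

From the resulting plot it is clear that when we increase \(K\), our approximation becomes better. We briefly check the same thing for \(\sin(x)\), this time with \(x_0 = 0\):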

pl = plot(sin(x), xlim=(-3, 3), ylim=(-3, 3), show=False, legend=True)
for K in [0, 1, 2, 3, 6]:
    # Taylor polynomial of degree K of sin(x) with expansion point x0 = 0
    newseries = series(sin(x), x, 0, K + 1).removeO()
    display(Eq(Function(f'P_{K}')(x), newseries))
    newplot = plot(newseries, label=f"K = {K}", show=False)
    pl.extend(newplot)
pl.show()
\[\begin{gather*} P_0(x) = 0, \quad P_1(x) = x, \quad P_2(x) = x, \quad P_3(x) = x - \frac{x^3}{6}, \quad P_6(x) = x - \frac{x^3}{6} + \frac{x^5}{120} \end{gather*}\]
(followed by a plot of \(\sin(x)\) together with these Taylor polynomials)

Here we can see that only four different curves are clearly visible, even though five Taylor polynomials were plotted. If one looks at the displayed polynomials above the plot, it is clear why. By definition we know that

\[\begin{align*} P_0(x) &= f(x_0) = \sin(0) = 0 \\ P_1(x) &= f(x_0) + f'(x_0)(x-x_0)\\ &= \sin(0) + \cos(0)\, x \\ &= x \\ P_2(x) &= f(x_0) + f'(x_0)(x-x_0) + \frac{1}{2} f''(x_0)(x-x_0)^2 \\ &= \sin(0) + \cos(0)\, x - \frac{1}{2}\sin(0)\, x^2 \\ &= x \end{align*}\]

for \(x\in\mathbb{R}\).

Evaluation of Remainder Function using Taylor’s Formula#

We want to try to find an approximate value of \(\ln\left(\frac{5}{4}\right)\) using the approximating polynomial \(P_3(x)\) expanded from the point \(x_0=1\).

We shall first determine \(P_3(x)\):

x = symbols("x")
P3 = series(ln(x),x,1,4).removeO()
P3
\[\displaystyle \left(x - 1\right) - \frac{\left(x - 1\right)^{2}}{2} + \frac{\left(x - 1\right)^{3}}{3}\]
val = P3.subs(x, Rational(5, 4))
val, val.evalf()
\[\displaystyle \left( \frac{43}{192}, \ 0.223958333333333\right)\]

We know from Taylor’s formula 4.3.1 that a \(\xi \in ]1;\frac{5}{4}[\) exists such that the error \(R_3(\frac{5}{4})\) can be written as:

\[\begin{equation*} R_3\left(\frac{5}{4}\right) = \frac{f^{(4)}(\xi)}{4!}\cdot\left(\frac{5}{4} - 1\right)^4. \end{equation*}\]

We shall first find out which value of \(\xi\) in the interval leads to the largest possible error. If we bound the error using this value, we can be certain that the true error is smaller. \(f^{(4)}(\xi)\) is the only factor that depends on \(\xi\), and it is determined to be

xi = symbols("xi")
diff(ln(x),x,4).subs(x,xi)
\[\displaystyle - \frac{6}{\xi^{4}}\]

Here we get the result \(-\frac{6}{\xi^4}\). Now we just have to analyse the expression to find out which \(\xi\) makes its magnitude as large as possible. In this case \(\xi\) appears in the denominator, so the expression grows in magnitude when \(\xi\) decreases. That means we simply have to choose the smallest possible value of \(\xi\), which in this case is \(1\).
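Before carrying out the evaluation we can, if we like, let SymPy confirm this choice. This check is not part of the original demo; it assumes the helper \(\verb|maximum|\) from \(\verb|sympy.calculus.util|\), which recent SymPy versions also expose through \(\verb|from sympy import *|\):

# |f''''(xi)| = 6/xi**4 attains its largest value on [1, 5/4] at the left endpoint xi = 1
maximum(6/xi**4, xi, Interval(1, Rational(5, 4)))

Now we can carry out our evaluation of the error.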

# largest possible |R_3(5/4)|, obtained by inserting xi = 1
R3 = abs(diff(ln(x),x,4).subs(x,1) * (5/4 - 1) ** 4 /(factorial(4)))
display(Eq(Function('R_3')(S('5/4')), R3))
print('The correct value of ln(5/4) is in the interval:')
Interval(val - R3, val + R3)
\[\displaystyle R_{3}\left(\frac{5}{4}\right) = 0.0009765625\]
The correct value of ln(5/4) is in the interval:
\[\displaystyle \left[0.222981770833333, 0.224934895833333\right]\]

We have now bounded the error, and we can guarantee that the true value of \(\ln(\frac{5}{4})\) is within the interval \(]0.2229;0.2250[\) (open due to rounding). Let us compare this with SymPy’s value (which in itself is an approximation),

ln(5/4), Interval(val - R3, val + R3).contains(ln(5/4))
\[\displaystyle \left( 0.22314355131421, \ \text{True}\right)\]
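As an extra sanity check (not part of the original demo), we can also compute the actual error directly and compare it with the bound \(R_3\):

# actual error |ln(5/4) - P_3(5/4)| versus the bound R_3
actual_error = abs(ln(Rational(5, 4)) - val)
actual_error.evalf(), R3

As expected, the actual error is smaller than the bound.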

Limit Values using Taylor’s Limit Formula#

We will now use Taylor’s limit formula to determine the limit values of different expressions. This is often useful for fractions where the numerator, the denominator, or both contain expressions that are not easy to work with. The way to access Taylor’s limit formula in SymPy is via the function \(\verb|series|\), where we now avoid using \(\verb|.removeO()|\) afterwards. In the textbook, epsilon functions are used, but SymPy uses the \(O\) symbol. The two concepts are not entirely comparable, but if SymPy writes \(O(x^{K+1})\), we can replace that by \(\varepsilon(x) \, x^{K}\), where \(\varepsilon(x) \to 0\) for \(x \to 0\).

Example 1#

We will first investigate the expression \(\frac{\sin(x)}{x}\) when \(x\rightarrow 0\),

series(sin(x),x,0,n=3)
\[\displaystyle x + O\left(x^{3}\right)\]

This gives us

\[\begin{gather*} \frac{\sin(x)}{x} = \frac{x + \epsilon (x) \cdot x^2}{x} = 1 + \epsilon (x)x \rightarrow 1, \text{ when } x \rightarrow 0 \end{gather*}\]
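For completeness (this check is not part of the original demo), the result can be confirmed with the \(\verb|limit()|\) function, just as is done in Example 2 below:

# limit of sin(x)/x as x -> 0; gives 1
limit(sin(x)/x, x, 0)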

Example 2#

It can sometimes feel like guesswork to figure out how many terms one needs to include when finding a limit value using Taylor’s limit formula. Let us for example try

\[\begin{equation*} \frac{\mathrm e^x - \mathrm e^{-x} - 2x}{x-\sin(x)}, \end{equation*}\]

where \(x \rightarrow 0\). Let us evaluate the numerator and denominator separately and see what happens when we include more and more terms.

T = E ** x - E ** (-x) - 2*x   # numerator
N = x - sin(x)                 # denominator
series(T,x,0,2), series(N,x,0,2)
\[\displaystyle \left( O\left(x^{2}\right), \ O\left(x^{2}\right)\right)\]

Too imprecise, since we only get the remainder term.

series(T,x,0,3), series(N,x,0,3)
\[\displaystyle \left( O\left(x^{3}\right), \ O\left(x^{3}\right)\right)\]

Still too imprecise.

series(T,x,0,4), series(N,x,0,4)
\[\displaystyle \left( \frac{x^{3}}{3} + O\left(x^{4}\right), \ \frac{x^{3}}{6} + O\left(x^{4}\right)\right)\]

Now both the numerator and the denominator contain a usable term, and we get:

\[\begin{gather*} \frac{\frac{x^3}{3}+\epsilon(x)x^3}{\frac{x^3}{6}+\epsilon(x)x^3} = \\ \frac{\frac{1}{3} + \epsilon(x)}{\frac{1}{6}+\epsilon(x)} \rightarrow \frac{\frac{1}{3}}{\frac{1}{6}} = 2, \text{ when } x \rightarrow 0 \end{gather*}\]

As a check we can use the \(\text{limit()}\) function

limit(T/N,x,0)
\[\displaystyle 2\]

Taylor Polynomials for Functions of Two Variables#

We consider the following function of two variables:

\[ f:\mathbb{R}^2 \to \mathbb{R},\quad f(x,y) = \sin(x^2 + y^2). \]

It is plotted below:

x,y = symbols("x y", real = True)
f = sin(x ** 2 + y ** 2)
dtuplot.plot3d(f,(x,-1.5,1.5),(y,-1.5,1.5),rendering_kw={"color" : "blue"})
(Surface plot of \(f(x,y) = \sin(x^2 + y^2)\) over \([-1.5, 1.5] \times [-1.5, 1.5]\).)

Let us determine the approximating first-degree polynomial with expansion point \((0,0)\)

P1 = dtutools.taylor(f,[x,0,y,0],degree=2)
P1
\[\displaystyle 0\]
p = dtuplot.plot3d(P1,(x,-1.5,1.5),(y,-1.5,1.5),show=False,rendering_kw={"alpha" : 0.5},camera={"azim":-81,"elev":15})
p.extend(dtuplot.plot3d(f,(x,-1.5,1.5),(y,-1.5,1.5),show=False))
p.show()
(Plot of the graph of \(f\) together with the first-degree polynomial \(P_1 = 0\), which lies in the \((x,y)\)-plane.)

Here we can see that \(P_1\) is located in the \((x,y)\)-plane, since \(P_1 = 0\). With the expansion point \((1/10,0)\) we have:

p = dtuplot.plot3d(dtutools.taylor(f,[x,0.1,y,0],degree=2),(x,-1.5,1.5),(y,-1.5,1.5),show=False,rendering_kw={"alpha" : 0.5},camera={"azim":-81,"elev":15})
p.extend(dtuplot.plot3d(f,(x,-1.5,1.5),(y,-1.5,1.5),show=False))
p.show()
(Plot of the graph of \(f\) together with the tangent plane at the expansion point \((1/10, 0)\).)
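As a small cross-check (not part of the original demo, and assuming that \(\verb|dtutools.gradient|\) accepts two variables in the same way it is used with three variables further below), the first-degree polynomial from \(\verb|dtutools.taylor|\) should agree with the tangent-plane formula \(f(\boldsymbol{x}_0) + \left<\boldsymbol{x} - \boldsymbol{x}_0, \nabla f(\boldsymbol{x}_0)\right>\):

# tangent plane of f at (1/10, 0), built directly from the gradient
x0_, y0_ = Rational(1, 10), 0
grad = dtutools.gradient(f, (x, y)).subs([(x, x0_), (y, y0_)])
tangent = f.subs([(x, x0_), (y, y0_)]) + grad.dot(Matrix([x - x0_, y - y0_]))
# difference from the first-degree polynomial produced by dtutools.taylor; we expect 0
simplify(tangent - dtutools.taylor(f, [x, x0_, y, y0_], degree=2))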

We will return to the expansion point \((0,0)\). Let us see how the approximating second-degree polynomial looks:

P2 = dtutools.taylor(f,[x,0,y,0],3)
P2
\[\displaystyle x^{2} + y^{2}\]
dtuplot.plot3d(f,P2,(x,-1.5,1.5),(y,-1.5,1.5))
(Plot of the graph of \(f\) together with the second-degree polynomial \(P_2 = x^2 + y^2\).)

This time the approximating polynomial is an elliptic paraboloid, \(P_2 = x^2 + y^2\), which matches the fact that \(\sin(u) \approx u\) for small \(u\), here with \(u = x^2 + y^2\). Lastly, let us see how the approximating sixth-degree polynomial looks:

P6 = dtutools.taylor(f,[x,0,y,0],7)
# plot the graph of f together with P6
p = dtuplot.plot3d(f, P6, (x,-1.5,1.5), (y,-1.5,1.5), show=False)
p.legend = True
p.show()
(Plot of the graph of \(f\) together with the sixth-degree Taylor polynomial \(P_6\).)

As expected, they are now much closer to each other. Let us investigate the error for these polynomials at different points to see how well they fit. Let us begin with \((0.2,0.2)\):

f_p1 = f.subs([(x, 1/5), (y, 1/5)])
P1_p1 = P1.subs([(x, 1/5), (y, 1/5)])
P2_p1 = P2.subs([(x, 1/5), (y, 1/5)])
P6_p1 = P6.subs([(x, 1/5), (y, 1/5)])

# the errors f - P_K at the point (1/5, 1/5); these are the remainders R_K there
RHS_list = (f_p1 - P1_p1, f_p1 - P2_p1, f_p1 - P6_p1)
displayable_equations = [ Eq(Function(f'R_{i}')(S('1/5'), S('1/5')), expression) for i, expression in zip((1,2,6), RHS_list) ]

display(*displayable_equations)
(Output: the errors \(f - P_1\), \(f - P_2\), and \(f - P_6\) evaluated at the point \((0.2, 0.2)\).)

It all looks right. The error is much smaller for the approximating polynomials of higher degrees. Let us try with \((0.5,0.5)\):

f_p2 = f.subs([(x,1/2),(y,1/2)])
P1_p2 = P1.subs([(x,1/2),(y,1/2)])
P2_p2 = P2.subs([(x,1/2),(y,1/2)])
P6_p2 = P6.subs([(x,1/2),(y,1/2)])

# the errors f - P_K at the point (1/2, 1/2)
RHS_list = (f_p2 - P1_p2, f_p2 - P2_p2, f_p2 - P6_p2)
displayable_equations = [ Eq(Function(f'R_{i}')(S('1/2'), S('1/2')), expression) for i, expression in zip((1,2,6), RHS_list) ]

display(*displayable_equations)
(Output: the errors \(f - P_1\), \(f - P_2\), and \(f - P_6\) evaluated at the point \((0.5, 0.5)\).)

The farther away from the expansion point \((0,0)\) we go, the larger an error we must expect.

(It should be mentioned that our comparisons are based on the assumption that SymPy’s own approximations are much better than ours. This is most likely quite a good assumption in this case, but it is important to know that SymPy, Maple, and all other computer tools also perform approximations.)

Taylor Polynomials for Functions of Three Variables#

Consider the function:

\[\begin{equation*} f: \mathbb{R}^3\to \mathbb{R},\quad f(x_1,x_2,x_3) = \sin(x_1^2 - x_2)e^{x_3}. \end{equation*}\]

We wish to determine the second-degree Taylor polynomial with expansion point \(\boldsymbol{x}_0 = (1,1,0)\).

x1,x2,x3 = symbols('x1,x2,x3', real = True)
f = sin(x1**2 - x2)*exp(x3)
f
\[\displaystyle e^{x_{3}} \sin\left(x_{1}^{2} - x_{2}\right)\]

The second-degree Taylor polynomial for a function of multiple variables is given by

\[\begin{equation*} P_2(\boldsymbol{x}) = f(\boldsymbol{x}_0) + \left<(\boldsymbol{x} - \boldsymbol{x}_0), \nabla f(\boldsymbol{x}_0)\right> + \frac{1}{2}\left<(\boldsymbol{x} - \boldsymbol{x}_0), \boldsymbol{H}_f(\boldsymbol{x}_0)(\boldsymbol{x}-\boldsymbol{x}_0)\right> \end{equation*}\]

First we define \(\boldsymbol{x}_0\) and \(\boldsymbol{x}\):

x = Matrix([x1,x2,x3])
x0 = Matrix([1,1,0])

Then we find \(\nabla f(\boldsymbol{x}_0)\) and \(\boldsymbol{H}_f(\boldsymbol{x}_0)\):

nabla_f = dtutools.gradient(f,(x1,x2,x3)).subs([(x1,x0[0]),(x2,x0[1]),(x3,x0[2])])
nabla_f
\[\begin{split}\displaystyle \left[\begin{matrix}2\\-1\\0\end{matrix}\right]\end{split}\]
Hf = dtutools.hessian(f,(x1,x2,x3)).subs([(x1,x0[0]),(x2,x0[1]),(x3,x0[2])])
Hf
\[\begin{split}\displaystyle \left[\begin{matrix}2 & 0 & 2\\0 & 0 & -1\\2 & -1 & 0\end{matrix}\right]\end{split}\]

Now \(P_2\) can be determined:

# P_2 = f(x0) + <x - x0, grad f(x0)> + 1/2 <x - x0, H_f(x0)(x - x0)>
P2 = f.subs([(x1,x0[0]),(x2,x0[1]),(x3,x0[2])]) + nabla_f.dot(x - x0) + S('1/2')* (x - x0).dot(Hf*(x - x0))
P2.simplify()
\[\displaystyle x_{1}^{2} + 2 x_{1} x_{3} - x_{2} x_{3} - x_{2} - x_{3}\]
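As a check (not part of the original demo), \(P_2\) must have the same value, gradient, and Hessian as \(f\) at the expansion point, which we can verify with the tools already used above:

# P2 should agree with f at x0 up to second order: same value, gradient, and Hessian
point = {x1: 1, x2: 1, x3: 0}
display(P2.subs(point) - f.subs(point))                           # expect 0
display(dtutools.gradient(P2, (x1,x2,x3)).subs(point) - nabla_f)  # expect the zero vector
display(dtutools.hessian(P2, (x1,x2,x3)).subs(point) - Hf)        # expect the zero matrix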

We can now have a look at the difference between the approximating polynomial and the true function at some chosen values:

v1 = Matrix([1,1,0])
v2 = Matrix([1,0,1])
v3 = Matrix([0,1,1])
v4 = Matrix([1,2,3])
vs = [v1,v2,v3,v4]
# print the error f - P_2 at each of the four points (v1 is the expansion point itself)
for v in vs:
    print((f.subs({x1:v[0],x2:v[1],x3:v[2]}) - P2.subs({x1:v[0],x2:v[1],x3:v[2]})).evalf())
0
0.287355287178842
0.712644712821158
-12.9013965351501

Again we see that the error increases when we move farther away from the expansion point.