Truncation error#
Truncation error occurs when we approximate an exact mathematical expression with a finite number of terms.
Taylor series#
Recall the EVER SO USEFUL Taylor series:
\(f(x+\Delta x) = f(x) + f'(x) \Delta x + f''(x) \frac{\Delta x ^2}{2} + f'''(x) \frac{\Delta x ^3}{6} + ...\)
\(= \sum_{n=0}^{\infty} \frac{f^{(n)}(x)}{n!} \Delta x ^n\)
but this is not useful unless we have an infinite amount of time and resources.
If \(\Delta x\) is small, \(\Delta x ^2\) is smaller, and \(\Delta x ^3\) smaller still. In fact, as long as \(f(x)\) is well behaved (loosely defined as continuous, smooth, differentiable, not infinite, etc) the derivatives don’t explode exponentially and the rightmost terms get very small.
So let’s truncate the series and keep only the first \(k\) terms (shown here for \(k=3\)):
\(f(x+\Delta x) = f(x) + f'(x) \Delta x + f''(x) \frac{\Delta x ^2}{2} + E_k \)
where
\(E_k = \sum_{n=k}^{\infty} \frac{f^{(n)}(x)}{n!} \Delta x ^n\)
is the truncation error.
This quantity is akin to a True Error in that if we knew what \(E_k\) was exactly, we would have the true function \(f\)!
It is more useful to define the order of the error. Noting that the leading term is \(\propto \Delta x ^k\), we would say:
\(f(x_0+\Delta x) \approx f(x_0) + f'(x_0) \Delta x + f''(x_0) \frac{\Delta x ^2}{2} + 𝒪(\Delta x ^3)\)
or that this is a third order approximation.
This is a useful statement, since it indicates the payoff for tuning the numerical parameters. In this case, halving the step size reduces the error by a factor of \(2^3 = 8\).
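As a quick numerical sketch of this scaling, consider the three-term Taylor approximation of \(f(x) = \sin(x)\) at \(x_0 = 1\) (the test function and point are arbitrary choices for illustration):

```python
import math

def taylor3(x0, dx):
    # Keep the first three Taylor terms of sin(x0 + dx):
    # sin(x0) + cos(x0)*dx - sin(x0)*dx^2/2, leaving an O(dx^3) error
    return math.sin(x0) + math.cos(x0) * dx - math.sin(x0) * dx**2 / 2

x0 = 1.0
for dx in [0.1, 0.05, 0.025]:
    err = abs(math.sin(x0 + dx) - taylor3(x0, dx))
    print(f"dx = {dx:5.3f}   error = {err:.3e}")
```

Each halving of \(\Delta x\) shrinks the error by roughly a factor of \(2^3 = 8\), consistent with the \(𝒪(\Delta x ^3)\) label.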
Example: Find the order of approximate derivative we calculated previously (known as the forward difference).#
\(f'(x) \approx \frac{f(x+\Delta x) - f(x)}{\Delta x}\)
We can substitute the Taylor series for \(f(x+\Delta x)\)
\(f'(x) \approx \frac{f(x) + f'(x) \Delta x + f''(x) \frac{\Delta x ^2}{2} + \cdots - f(x)}{\Delta x}\)
\(\approx \frac{f'(x) \Delta x + f''(x) \frac{\Delta x ^2}{2} + \cdots}{\Delta x}\)
\(\approx f'(x) + f''(x) \frac{\Delta x}{2} + \cdots\)
\(\approx f'(x) + 𝒪(\Delta x)\)
Therefore this approximation, called the forward difference, is a first order algorithm.
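We can confirm the first order behaviour numerically; here \(f(x) = \sin(x)\) at \(x = 1\) is an arbitrary test case of my choosing:

```python
import math

def forward_diff(f, x, dx):
    # Forward difference approximation of f'(x)
    return (f(x + dx) - f(x)) / dx

x = 1.0
exact = math.cos(x)  # exact derivative of sin
for dx in [0.1, 0.05, 0.025]:
    err = abs(forward_diff(math.sin, x, dx) - exact)
    print(f"dx = {dx:5.3f}   error = {err:.3e}")
```

Halving \(\Delta x\) roughly halves the error, as expected for a first order method.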
Example 2: Find the order of approximation of the central difference formula#
\(f'(x) \approx \frac{f(x+\Delta x) - f(x-\Delta x)}{2 \Delta x}\)
Substituting the (3rd order) Taylor series for \(f(x+\Delta x)\) and \(f(x-\Delta x)\):
\(f'(x) \approx \frac{f(x) + f'(x) \Delta x + f''(x) \frac{\Delta x ^2}{2} + f'''(x) \frac{\Delta x ^3}{6} + \cdots - [f(x) - f'(x) \Delta x + f''(x) \frac{\Delta x ^2}{2} - f'''(x) \frac{\Delta x ^3}{6} + \cdots]}{2 \Delta x}\)
\(\approx \frac{2 f'(x) \Delta x + 2 f'''(x) \frac{\Delta x ^3}{6} + \cdots}{2 \Delta x}\)
\(\approx f'(x) + f'''(x) \frac{\Delta x ^2}{6} + \cdots\)
\(\approx f'(x) + 𝒪(\Delta x ^2)\)
Therefore, the central difference approximation is a second order algorithm (for the same number of function calls!). By halving the step size, the error is quartered!
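A numerical sketch of the quartering, using the same arbitrary test case \(f(x) = \sin(x)\) at \(x = 1\):

```python
import math

def central_diff(f, x, dx):
    # Central difference approximation of f'(x)
    return (f(x + dx) - f(x - dx)) / (2 * dx)

x = 1.0
exact = math.cos(x)  # exact derivative of sin
for dx in [0.1, 0.05, 0.025]:
    err = abs(central_diff(math.sin, x, dx) - exact)
    print(f"dx = {dx:5.3f}   error = {err:.3e}")
```

Each halving of \(\Delta x\) cuts the error by roughly \(2^2 = 4\), confirming second order behaviour.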
Common mathematical functions#
Computers are very good at addition / subtraction, multiplication / division, and exponentiation. How should we calculate other functions?
Let’s examine some Taylor expansions:
| Function | Taylor Expansion |
|---|---|
| \(\sin(x)\) | \( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \) |
| \(\cos(x)\) | \( 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots \) |
| \(\exp(x)\) | \( 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots \) |
| \(\ln(1+x)\) | \( x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots \) |
On the surface these look good, and in infinite precision they are globally convergent.
But we are not in infinite precision.
Example: Examine the terms of \(\sin(x)\) for small \(x\)#
import numpy as np
import math
def taylor_series_sin(x, n_terms):
"""
Calculate the Taylor series expansion of sin(x) up to n_terms.
Parameters:
x (float): The point at which to evaluate the Taylor series.
n_terms (int): The number of terms to include in the expansion.
Returns:
list: A list of terms in the Taylor series expansion.
"""
terms = []
for n in range(n_terms):
term = ((-1)**n * x**(2*n + 1)) / math.factorial(2*n + 1)
terms.append(term)
print("The terms are as follows:")
for i, term in enumerate(terms):
print(f"Term {i+1}: {term:.10f}")
print(f"Approximate sin({x}): {sum(terms):.10f}, and it should be {math.sin(x):.10f}")
For small \(x\):
taylor_series_sin(1, 10)
The terms are as follows:
Term 1: 1.0000000000
Term 2: -0.1666666667
Term 3: 0.0083333333
Term 4: -0.0001984127
Term 5: 0.0000027557
Term 6: -0.0000000251
Term 7: 0.0000000002
Term 8: -0.0000000000
Term 9: 0.0000000000
Term 10: -0.0000000000
Approximate sin(1): 0.8414709848, and it should be 0.8414709848
NB: The terms are flipping signs (potential for roundoff error), but more importantly they are decreasing.
-> No problem
Example: Examine the terms of \(\sin(x)\) for large \(x\)#
taylor_series_sin(10, 10)
The terms are as follows:
Term 1: 10.0000000000
Term 2: -166.6666666667
Term 3: 833.3333333333
Term 4: -1984.1269841270
Term 5: 2755.7319223986
Term 6: -2505.2108385442
Term 7: 1605.9043836822
Term 8: -764.7163731820
Term 9: 281.1457254346
Term 10: -82.2063524662
Approximate sin(10): -16.8118501374, and it should be -0.5440211109
Getting a bit funny…
taylor_series_sin(100, 10)
The terms are as follows:
Term 1: 100.0000000000
Term 2: -166666.6666666667
Term 3: 83333333.3333333284
Term 4: -19841269841.2698402405
Term 5: 2755731922398.5888671875
Term 6: -250521083854417.1875000000
Term 7: 16059043836821614.0000000000
Term 8: -764716373181981696.0000000000
Term 9: 28114572543455207424.0000000000
Term 10: -822063524662433021952.0000000000
Approximate sin(100): -794697857233433001984.0000000000, and it should be -0.5063656411
Completely wrong.
Why you should use a package#
In this case, the remedy is fairly simple (reduce the argument modulo \(2\pi\) before expanding), but if you are not careful, these functions can behave very strangely. In practice, the means of calculation are very sophisticated for performance and stability, including other expansion techniques and sometimes even look-up tables.
This is why we use packages! :-)
The Taylor expansion is still useful to consider limiting behaviour.
For small \(x\),
\(\exp(x) \approx 1+x\), so evaluating \(\exp(x)-1\) directly loses most of its significant digits to roundoff. Therefore packages like numpy provide special functions like \(expm1(x) = \exp(x)-1\), computed accurately for small \(x\).
# Poorer approximation
print(np.exp(1e-10) - 1)
#Better approximation
print(np.expm1(1e-10))
1.000000082740371e-10
1.00000000005e-10
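The same cancellation issue affects the logarithm: for small \(x\), \(\ln(1+x) \approx x\), and forming \(1+x\) first rounds away most digits of \(x\). numpy provides `log1p` for this case (this example is mine, mirroring the `expm1` one above):

```python
import numpy as np

x = 1e-10

# Poorer: 1 + x rounds away most digits of x before the log is taken
print(np.log(1 + x))

# Better: log1p evaluates ln(1+x) accurately for small x
print(np.log1p(x))
```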