Truncation and Rounding Error

I will only give examples here; if you want definitions, read a book!

Truncation Error

Given a finite difference formula such as
$$ f'(x)\approx f'_{forward}(x)=\frac{f(x+h)-f(x)}{h},$$
one can estimate the truncation error by Taylor expanding \(f(x+h)=f(x)+hf'(x)+\frac{1}{2}h^2f''(x)+O(h^3)\):
$$ E_{trunc}=|f'_{forward}(x)-f'(x)|=\frac{1}{2}h|f''(x)|+O(h^2)=O(h),$$
assuming \(f(x)\) is well-behaved such that \(|f''(x)|\sim1\).
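As a quick numerical check (my own sketch; the test function \(f=\sin\) and the point \(x=1\) are arbitrary choices, not from the text), the error of the forward difference should shrink linearly with \(h\) while \(h\) is still large enough that rounding is negligible:

```python
import math

# Forward-difference approximation of f'(x); truncation error is O(h).
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

# Arbitrary smooth test function: f = sin, so f'(1) = cos(1).
x = 1.0
exact = math.cos(x)
for h in (1e-1, 1e-2, 1e-3):
    err = abs(forward_diff(math.sin, x, h) - exact)
    print(f"h = {h:.0e}   error = {err:.2e}")  # error drops ~10x per decade of h
```

Each tenfold reduction of \(h\) cuts the error by roughly a factor of ten, as the \(O(h)\) estimate predicts.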

Rounding Error

In the same example as above, we may not be able to represent \(f(x+h)\) and \(f(x)\) exactly with floating-point numbers (\(h\) is an input and is therefore representable by definition). In the worst case, \(f(x+h)\) and \(f(x)\) each deviate by the machine precision \(\epsilon_{mach}\) in opposite directions (taking \(|f(x)|\sim1\), so that relative and absolute errors coincide), giving a rounding error of

$$ E_{round}=|f'_{round}(x)-f'(x)|=\frac{2\epsilon_{mach}}{h}=O\left(\frac{\epsilon_{mach}}{h}\right).$$
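To see the rounding error take over (again my own sketch with \(f=\sin\) at \(x=1\), both arbitrary), push \(h\) down toward \(\epsilon_{mach}\approx2.2\times10^{-16}\): the subtraction \(f(x+h)-f(x)\) loses more and more significant digits, and once \(h\) is so small that \(x+h\) rounds to \(x\), the formula returns exactly zero:

```python
import math

# Same forward difference as before.
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)
# Shrinking h no longer helps: the O(eps_mach/h) rounding error grows.
for h in (1e-8, 1e-10, 1e-12, 1e-17):
    err = abs(forward_diff(math.sin, x, h) - exact)
    print(f"h = {h:.0e}   error = {err:.2e}")

# At h = 1e-17 we have x + h == x in double precision, so the computed
# "derivative" is exactly 0 and the error is the full |f'(x)|.
```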

Interpretation

Both truncation and rounding error measure the distance between what we have and the true answer, \(E=|f_{we~have}(x)-f_{true}(x)|\). The difference is where the error comes from: for truncation error it comes from the mathematics (truncating a formula such as a Taylor expansion), whereas for rounding error it comes from the computer (floating-point representation instead of exact arithmetic).

There’s a game one can play to minimize “error” when only truncation and rounding error exist. I put quotes on “error” because in the real world, the error of a calculation is usually dominated by the physical approximations we make in our model (or by our stupidity, if you will), rather than by the small, well-controlled approximations made in the mathematics and/or in (modern) floating-point representation. But let’s play this game anyway. Setting \(E_{trunc}=E_{round}\) in the above example, we get \(h=O(\epsilon_{mach}^{1/2})\), and thus the “error” \(E=O(\frac{\epsilon_{mach}}{h})=O(h)=O(\epsilon_{mach}^{1/2})\).
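The balance point can be seen numerically (same hedged setup as before, \(f=\sin\) at \(x=1\)): sweeping \(h\) over many decades, the total error bottoms out near \(h\sim\sqrt{\epsilon_{mach}}\approx1.5\times10^{-8}\) for double precision:

```python
import math
import sys

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)

# Total error vs h: truncation O(h) dominates for large h,
# rounding O(eps_mach/h) for small h; the minimum sits near sqrt(eps_mach).
errors = {}
for k in range(1, 15):
    h = 10.0 ** -k
    errors[h] = abs(forward_diff(math.sin, x, h) - exact)

h_best = min(errors, key=errors.get)
print(f"sqrt(eps_mach) ~ {math.sqrt(sys.float_info.epsilon):.1e}")
print(f"best h = {h_best:.0e}, error = {errors[h_best]:.2e}")
```

The winning \(h\) lands within a decade or so of \(\sqrt{\epsilon_{mach}}\), with an error of order \(\epsilon_{mach}^{1/2}\sim10^{-8}\), matching the estimate above.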