A pure mathematician, an applied mathematician and an accountant all apply for the same job on Wall Street, and during their interviews they are all asked the same question: "What is the value of $(1-1/N)\times N + 1 - N$?" The pure mathematician replies "Zero, of course!" The applied mathematician asks "How large is $N$?" The accountant asks "What would you like it to be?" The accountant gets the job ...
... but the applied mathematician is on the right track, especially in an era where much research and virtually all application of mathematics is done on digital computers. I've been meaning to do some posts on
numerical analysis for a while now, motivated in part by a large number of questions appearing on optimization support forums that indicate the topic is foreign to a sizable number of people using mathematical programming software. (The poster child for questions of this type: "Why did solver XXX say that the optimal value of my variable is 0.999999823 when it is a 0-1 variable??")
I was a math major end to end. In my first quarter of graduate school, I took a course on numerical analysis -- I can't recall now why, as I was a "pure" mathematician in those days -- and it was a real eye-opener. Identities that hold in symbolic mathematics (including "chalkboard arithmetic") don't necessarily survive the conversion to a digital medium. I have no intention of trying to compress that first course in numerical analysis into a blog post, or even a series of posts, but perhaps an example or two will at least get the attention of some people who previously had not heard of it. The numerical analysis course turned out to be one of the most valuable math courses I took (at any level), and I invariably recommend it whenever someone asks "What courses should I take to prepare for a career in Operations Research?"
I tested the formula in the above joke in a small Java program, using
double-precision arithmetic, and the reported answer was zero no matter how large a value of $N$ I tried. This might or might not be a consequence of
compiler optimizations -- the compiler doing the arithmetic in a different order than the one I specified. So I'll fall back on an example from that graduate course: computing a
partial sum of the (divergent)
harmonic series $\sum_{n=1}^\infty 1/n$.
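(In case you're curious, the joke test was something along the following lines. This is a sketch rather than the exact program I ran; the class name and the range of $N$ are arbitrary.)

```java
// Evaluate the joke's formula (1 - 1/N)*N + 1 - N in double precision
// for a range of N values and print whatever the computer reports.
public class JokeFormulaTest {
    public static void main(String[] args) {
        for (double n = 10.0; n <= 1.0e12; n *= 10.0) {
            double value = (1.0 - 1.0 / n) * n + 1.0 - n;
            System.out.println("N = " + n + "  ->  " + value);
        }
    }
}
```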
One of the first things anyone learns about math, at an early age, is that addition is
commutative and associative ... except, as it turns out, when a computer is involved. I wrote a small Java program that computes $\sum_{n=1}^N 1/n$ twice, in one case summing in ascending order of $n$ ($1 + 1/2 + 1/3 + \dots + 1/N$) and in the other case summing in descending order of $n$ ($1/N + 1/(N-1) + \dots + 1$). I start with $N=1000$ and double $N$ each time through the main loop. To save myself repetition, let me stipulate that henceforth "large" and "small" refer to the magnitudes of numbers, without regard for their sign. If addition is commutative and associative on the computer, the difference between summing in ascending and descending order should be zero.
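The full source isn't reproduced here, but a sketch along the following lines (class and variable names are placeholders of mine) does the same computation:

```java
// Sum 1/n in ascending and in descending order of n, doubling N each pass,
// and print the two partial sums plus their difference.
public class HarmonicSums {
    public static void main(String[] args) {
        for (int n = 1000; n <= 1024000; n *= 2) {
            double ascending = 0.0;
            for (int i = 1; i <= n; i++) {
                ascending += 1.0 / i;
            }
            double descending = 0.0;
            for (int i = n; i >= 1; i--) {
                descending += 1.0 / i;
            }
            System.out.println(n + " | " + ascending + " | " + descending
                    + " | " + (ascending - descending));
        }
    }
}
```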
Here is the output it generated:
| N | 1 + ... + 1/N (ascending) | 1/N + ... + 1 (descending) | Difference |
|---|---|---|---|
| 1000 | 7.485470860550343 | 7.485470860550341 | 2.6645352591003757E-15 |
| 2000 | 8.178368103610284 | 8.178368103610278 | 5.329070518200751E-15 |
| 4000 | 8.871390299795198 | 8.871390299795223 | -2.4868995751603507E-14 |
| 8000 | 9.564474984261391 | 9.564474984261425 | -3.375077994860476E-14 |
| 16000 | 10.257590915797937 | 10.257590915797914 | 2.3092638912203256E-14 |
| 32000 | 10.95072247160203 | 10.950722471602038 | -8.881784197001252E-15 |
| 64000 | 11.64386183972301 | 11.643861839723002 | 8.881784197001252E-15 |
| 128000 | 12.337005114048072 | 12.337005114048194 | -1.2256862191861728E-13 |
| 256000 | 13.030150341487087 | 13.030150341486952 | 1.3500311979441904E-13 |
| 512000 | 13.72329654548546 | 13.723296545485356 | 1.0480505352461478E-13 |
| 1024000 | 14.41644323776362 | 14.416443237764337 | -7.176481631177012E-13 |
The first takeaway is that we never see a difference of zero at these values of $N$. (For $N$ up to about 40, the reported difference actually is zero.) The differences are small, but not zero. There is no obvious pattern to their sign, nor is their size changing monotonically, but they do seem generally to be growing with $N$, which among other things means that if we have a really large value of $N$, we could get a seriously inaccurate answer (at least in absolute, rather than relative, terms).
Why does this happen? The short answer is that computers store non-integral numbers only to a specific finite precision. That creates "
truncation errors" in the representations of the numbers (0.1 as stored in a computer is not exactly 1/10) and "
rounding errors" when arithmetic is performed. In particular, adding or subtracting a large number and a small number can produce rounding errors. In the extreme, the small number is smaller than the precision at which the large number is stored, and so the sum or difference of the two is identical to the large number as far as the computer is concerned. That sort of error is more likely to occur when $n$ is ascending (we are adding progressively smaller terms $1/n$ to progressively larger partial sums) than when $n$ is descending (at the outset, we add small numbers to small partial sums, and eventually larger numbers to larger sums); so I am inclined to trust the second set of partial sums more than I do the first. Neither, though, is exactly correct.
You might not be too excited about the differences in the table -- even at $N=1,024,000$ the difference is in the thirteenth decimal place -- but consider the following coding error probably made by every rookie programmer to tackle a mathematical computation: testing whether the difference of two floating point (non-integer) numbers
equals zero when they should be testing whether it is
close enough to zero. Many a program has entered an indefinite loop thanks to a test for equality and a smidgen of rounding error.
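As a toy illustration of that mistake (a contrived example of mine, with a safety cap so the demo actually terminates):

```java
// Step x by 0.1 and test for exact equality with 1.0. Accumulated rounding
// error means x never hits 1.0 exactly, so the "exact" loop would spin
// forever without the cap; the tolerance-based loop stops after 10 steps.
public class EqualityTrap {
    public static void main(String[] args) {
        double x = 0.0;
        int steps = 0;
        while (x != 1.0 && steps < 1000) { // buggy test, plus a safety cap
            x += 0.1;
            steps++;
        }
        System.out.println("Exact test: stopped after " + steps + " steps, x = " + x);

        x = 0.0;
        steps = 0;
        while (Math.abs(x - 1.0) > 1e-9) { // "close enough to zero" test
            x += 0.1;
            steps++;
        }
        System.out.println("Tolerance test: stopped after " + steps + " steps, x = " + x);
    }
}
```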
Update: John D. Cook has a
related post in a different context (probability and statistics).