Back on “Pi Day” I shared a terrible way of calculating the digits of π. It’s neat in principle, yes. Drop a needle randomly on a uniformly lined surface. Keep track of how often the needle crosses over a line. From this you can work out the numerical value of π. But it’s a terrible method. To be sure that π is about 3.14, rather than 3.12 or 3.38, you can expect to need to do over three and a third million needle-drops. So I described this as a terrible way to calculate π.

A friend on Twitter asked if it was worse than adding up 4 * (1 – 1/3 + 1/5 – 1/7 + … ). It’s a good question. The answer is yes, it’s far worse than that. But I want to talk about working π out that way.

I suppose it’s not amazing by itself that there are many formulas to work out π. After all, properly, there are many formulas you could use to work out the value of “8”. But there are many formulas with forms that are pleasing to the eye and which add up to π. At least they’re pleasing to the early-21st-century western-civilization mathematician’s eye. One of the simplest is known as the Leibniz formula, or the Leibniz-Gregory formula. It was famously known to the mathematician-philosopher-lawyer-historian Gottfried Wilhelm Leibniz and to his contemporary, the mathematician-astronomer and telescope-designer James Gregory. The formula is this:

π = 4 ( 1 − 1/3 + 1/5 − 1/7 + 1/9 − 1/11 + ⋯ )

— or equivalently —

π = 4/1 − 4/3 + 4/5 − 4/7 + 4/9 − 4/11 + ⋯

The dots at the end mean “continue along this pattern forever”. That’s a common mathematical shorthand for people who believe they’ve made the pattern clear enough. That’s probably true here. Take each of the odd numbers in order, divide the number 4 by each of those numbers, and then alternately add and subtract the results of those divisions. When you’ve added all that up you will have exactly π.
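If it helps to see that recipe as code, here’s a minimal Python sketch. The function name `leibniz_partial_sum` is my own label, not anything official:

```python
def leibniz_partial_sum(n_terms):
    """Add up the first n_terms of 4/1 - 4/3 + 4/5 - 4/7 + ..."""
    total = 0.0
    sign = 1.0
    for k in range(n_terms):
        denominator = 2 * k + 1   # the odd numbers: 1, 3, 5, 7, ...
        total += sign * 4.0 / denominator
        sign = -sign              # alternately add and subtract
    return total

# After 1000 terms the running total is still off from pi by about 0.001.
print(leibniz_partial_sum(1000))
```

Of course this only gives a running total after some finite number of terms, which is the whole problem the rest of the essay worries about.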

The catch is that this is an infinite series. To work it out *exactly* we have to add together infinitely many terms. Three and a third million needle-drops is a lot. But that’s not so many compared to an infinitely great number of terms. We want to do a little less than an unbounded amount of work.

This makes us turn to calculus. If someone asks what calculus is good for, one answer is it can turn infinitely long problems into things we can actually do. The addition of a long sequence of numbers we call a “series”. This one is an “infinite series”. It’s right in calculus’s wheelhouse. You’ll find it in your high school or freshman calculus book in a chapter that looks like a strange digression from derivatives and integrals.

For some infinite series (I’m sorry the word doesn’t pluralize well) calculus can turn the whole series into an equivalent and simple formula. For example, it can tell us that a series like 1 + 1/3 + 1/9 + 1/27 + ⋯ must be equal to 3/2, or 1.5. I promise. Unfortunately, there’s no doing that for this series.
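If you’d like to see calculus’s promise checked numerically, here’s a quick Python sketch. I’m using a geometric series as the example, one of the standard cases where the exact sum is known (the closed form 1/(1 − r) for ratio r between −1 and 1):

```python
# A geometric series is one kind of infinite series calculus can sum exactly:
# 1 + 1/3 + 1/9 + 1/27 + ... = 1 / (1 - 1/3) = 3/2
partial = sum((1 / 3) ** k for k in range(60))   # 60 terms is plenty here
closed_form = 1 / (1 - 1 / 3)

# Both come out as 1.5, up to floating-point rounding.
print(partial, closed_form)
```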

The trick we can fall back on is truncation. Instead of working out infinitely many terms, which we can’t do, we work out only as many as we need to get close enough to the correct answer. This is a good time to snort with derision. How can we know we’re close enough to a correct answer we presumably don’t know? I mean, yes, we know the digits of π, since we have Google. But if we were working out an infinite series for some other number we wouldn’t know what the right answer was.

Here again calculus rescues us. For many infinite series we can work out “error estimates”. These tell us that the difference between working out infinitely many terms and working out finitely many terms won’t be larger than some number. If that some-number is small enough, good! We don’t have to work out more than that many terms. The error estimate is not the actual error. The actual error is the difference between the thing we’ve worked out and the thing we wanted. If we knew the actual error we would know what the actual answer was. What we have is an error estimate: some number we know is bigger than the actual error is. The actual answer must be somewhere between the truncated series minus the error estimate and the truncated series plus the error estimate.

The Leibniz formula is a type of series called an “alternating series”, or sometimes “alternating sign series”. It’s called that because the terms in it are alternately added and subtracted. (If you prefer, the terms are alternately positive and negative.) And each term is closer to zero than the one before it. 4/5 is smaller than 4/3; 4/7 is smaller than 4/5; 4/9 is smaller than 4/7, and so on. (For this we don’t care whether the term is positive or negative, just how big it is.)

This means that the error estimate of stopping after, say, 50 terms is easy to work out. The error will be no bigger than the 51st term is. If we worked out the first 100 terms, then the error would be no bigger than the 101st term. If we worked out the first 2,038 terms, the error would be no bigger than the 2,039th term. Alternating series are wonderful.
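If you don’t want to take my word for it, here’s a Python check of that estimate for a few truncation points. The helper name `partial_sum` is my own:

```python
# Check the alternating-series error estimate: after n terms, the distance
# from the running total to pi is no bigger than the size of term n+1.
import math

def partial_sum(n):
    """The first n terms of the Leibniz series for pi."""
    return sum((-1) ** k * 4 / (2 * k + 1) for k in range(n))

for n in (50, 100, 2038):
    next_term = 4 / (2 * n + 1)            # size of the first omitted term
    actual_error = abs(math.pi - partial_sum(n))
    assert actual_error <= next_term       # the estimate really does bound it
    print(n, actual_error, next_term)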

We can prove that this estimate of the error is true. By true I mean that the actual error is smaller than this error estimate. But the proof is probably more convincing, and a good bit quicker, if you picture it:

After adding together fifty terms (or whatever; pick the number of terms you like) the sum reaches some number, the running total. The fifty-first (or whichever is next) term increases the running total a little. The 52nd term decreases the running total a little less than the 51st term increased it. The 53rd term increases the running total by a little less than the 52nd term decreased it. And so on. If we added together all the possible terms we’d settle on some number. That infinite sum can’t be any more than the 51st term’s size away from where the running total was after 50 terms.
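That straddling behavior is easy to watch in a few lines of Python (a small sketch; the variable names are mine):

```python
# Consecutive running totals of the Leibniz series straddle pi: one sits
# above it, the next below, and the gap between them keeps shrinking,
# which pins the infinite sum between each successive pair.
import math

total = 0.0
previous = None
for k in range(6):
    total += (-1) ** k * 4 / (2 * k + 1)
    if previous is not None:
        lo, hi = sorted((previous, total))
        assert lo < math.pi < hi   # pi is always between successive totals
    previous = total
```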

(If the 51st term were negative, and the 52nd positive, then the 51st term would decrease the running total and the 52nd increase it, the 53rd decrease the running total and the 54th increase it, and so on. We handle this case by using the same picture as above, but turned upside-down.)

The 51st term might be a lot bigger than the error. That’s all right. The 51st term is only our *estimate* of the error. The *actual* error is the difference between the running total after 50 terms and the actual sum. We might never know what the actual error is. But if the estimate of the error is small enough for our needs, then the *actual* error is *certainly* small enough.

To work out how many terms we need to find π to two decimal places, then, we need to find for what odd number N the term 4/N is smaller than 0.005. If I’m doing this right, that would be N = 801. Since the denominators are only the odd numbers, 4/801 is the four hundred and first term in the series, so we can stop after four hundred terms and use it as our error estimate. (If I’m not doing this right, let me know, and I promise to fume about it for *days*.) And again, if I’m doing this right, after we’ve added together four hundred terms from the Leibniz formula we have an estimate of π of about 3.1391, which is correct to two decimal places. (See previous promise.) In practice, three hundred terms is enough. But we wouldn’t have grounds to be sure of that.
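If you’d like to check the arithmetic for yourself rather than write in, here’s a short Python sketch of that count:

```python
# The first term smaller than 0.005 has denominator 801 (4/801 is about
# 0.00499), so stopping just before it means adding up 400 terms, and the
# running total should then round to 3.14.
assert 4 / 801 < 0.005
assert 4 / 799 >= 0.005   # one term earlier is not yet small enough

estimate = sum((-1) ** k * 4 / (2 * k + 1) for k in range(400))
print(round(estimate, 4))   # rounds to 3.14 at two decimal places
```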

Taking four divided by some number four hundred times and adding-or-subtracting the results is a pain. But it’s much less work than dropping over three million needles. So this series is a better method after all.

I tried memorising pi once, but for some reason I couldn’t finish. It wasn’t very rational of me. I sort of had to say that. (Actually, I probably didn’t…)


Aw, not to fear. I don’t think worse of you for saying it. It is the kind of joke people have to say, after all.


It’s really difficult to manually calculate pi using a series. William Shanks claimed to have calculated pi manually to more than 700 digits using Machin’s formula, but he erred on the 528th digit, I think. It was a very amazing achievement nonetheless.


Shanks’s case is interesting, not just because of his great work and tragic error. There is also that museum rotunda that tries to honor him by displaying the digits of pi; it was built before his error was found.

So the question is: keep the digits he calculated which are wrong, or replace them with the digits he would have calculated had he done the work right? Bearing in mind the purpose is to honor Shanks’s work, and nobody is going to get the digits of pi from reading what is essentially a piece of memorial art.

From what I know, the Gregory-Leibniz series, while theoretically correct, converges very slowly to the desired value. I tried it once, up to around eight hundred terms. It was a nightmare trying to get the figure to converge to a reasonably good number of decimal places. Some other formulas are more useful for this purpose. This series remains one of theoretical interest and mathematical beauty.


Oh, there’s no need to disparage the series as ‘theoretically’ correct; it’s right, no question about that. It’s just a matter of how much work is required to get what you want out of it. As series approximations for pi go, it’s not very efficient. It takes a lot of work to get a few meager decimal places right. But at least it’s very easy to understand.

If you were stranded on a desert island and needed to calculate the digits of pi for some reason, you could remember this formula well enough and work out its terms well enough. Other formulas would get you more decimal places with fewer terms being calculated, but you have to remember and apply the formulas, and that’s a pain.

Interestingly, it’s possible to calculate an arbitrary binary digit of pi without working out all the binary digits that come before it. There’s no way to do that for the decimal digits of pi; I forget whether there’s merely no known way to do that, or if it’s known to be impossible. But the result is if you wanted to know just the (say) 2,038 trillionth binary digit of pi, you could work that out without knowing anything about the 2,037,999,999,999,999 digits that came before it.
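(For the curious: the digit-extraction result rests on the Bailey-Borwein-Plouffe formula. Here’s a Python sketch that just sums that series to get π itself; the actual trick of isolating one distant digit needs some extra modular arithmetic I’m not showing.)

```python
# The Bailey-Borwein-Plouffe formula writes pi as a base-16 series. Its
# special structure is what allows extracting a lone binary/hex digit;
# summed directly, as here, it simply converges to pi very quickly.
import math

pi_bbp = sum(
    (1 / 16 ** k)
    * (4 / (8 * k + 1) - 2 / (8 * k + 4) - 1 / (8 * k + 5) - 1 / (8 * k + 6))
    for k in range(12)   # a dozen terms already exhausts double precision
)
print(pi_bbp)
```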