## Something Cute I Never Noticed Before About Infinite Sums

This is a trifle, for which I apologize. I’ve been sick. But I ran across this while reading Carl B. Boyer’s The History of the Calculus and Its Conceptual Development. This is from the chapter “A Century Of Anticipation”, about developments leading up to Newton and Leibniz and The Calculus As We Know It. In particular, while working out the indefinite integrals for simple powers — x raised to a whole number — John Wallis, whom you’ll remember from such things as the first use of the ∞ symbol and beating up Thomas Hobbes for his lunch money, noted this:

$\frac{0 + 1}{1 + 1} = \frac{1}{2}$

Which is fine enough. But then Wallis also noted that

$\frac{0 + 1 + 2}{2 + 2 + 2} = \frac{1}{2}$

And furthermore that

$\frac{0 + 1 + 2 + 3}{3 + 3 + 3 + 3} = \frac{1}{2}$

$\frac{0 + 1 + 2 + 3 + 4}{4 + 4 + 4 + 4 + 4} = \frac{1}{2}$

$\frac{0 + 1 + 2 + 3 + 4 + 5}{5 + 5 + 5 + 5 + 5 + 5} = \frac{1}{2}$

And isn’t that neat? Wallis goes on to conclude that this is true not just for finitely many terms in the numerator and denominator, but also if you carry on infinitely far. This seems like a dangerous leap to make, but they treated infinities and infinitesimals dangerously in those days.

What makes this work is — well, it’s just true; explaining how that can be is kind of like explaining how it is circles have a center point. All right. But we can prove that this has to be true at least for finite terms. A sum like 0 + 1 + 2 + 3 is an arithmetic progression. It’s the sum of a finite number of terms, each of them an equal difference from the one before or the one after (or both).

Its sum will be equal to the number of terms times the arithmetic mean of the first and last terms. That is, it’ll be the number of terms times the sum of the first and last terms, divided by two. If we have the sum 0 + 1 + 2 + 3 + up to whatever number you like, which we’ll call ‘N’, then there are N + 1 terms, and the sum’s value has to be (N + 1) times N divided by 2. That takes care of the numerator.

The denominator, well, that’s (N + 1) cases of the number N being added together. Its value has to be (N + 1) times N. So the fraction is (N + 1) times N divided by 2, itself divided by (N + 1) times N. That’s got to be one-half except when N is zero. And if N were zero, well, that fraction would be 0 over 0 and we know what kind of trouble that is.
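The argument is easy to check numerically. Here’s a short Python sketch (the function name is my own choice) verifying Wallis’s ratio exactly, using rational arithmetic so no floating-point rounding sneaks in:

```python
from fractions import Fraction

def wallis_ratio(n):
    # Numerator: 0 + 1 + ... + n, which is n*(n+1)/2.
    # Denominator: n added to itself (n+1) times, which is n*(n+1).
    numerator = sum(range(n + 1))
    denominator = n * (n + 1)
    return Fraction(numerator, denominator)

# The ratio is exactly one-half for every positive n.
print(all(wallis_ratio(n) == Fraction(1, 2) for n in range(1, 101)))
```

The `Fraction` type keeps the comparison exact, which matches the spirit of the proof: the fraction is one-half, not merely close to it.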

It’s a tiny bit, although you can use it to make an argument about what to expect from $\int{x^n dx}$, as Wallis did. And it delighted me to see and to understand why it should be so.

• #### elkement (Elke Stangl) 4:38 pm on Monday, 5 June, 2017 Permalink | Reply

It reminds me of the famous story about young Gauss, when he baffled his teacher with a somewhat related ‘trick’ for adding up the numbers from 1 to 100 very quickly (by actually calculating 101*50).

• #### Joseph Nebus 1:09 am on Wednesday, 7 June, 2017 Permalink | Reply

That’s exactly what crossed my mind, especially as I realized I was doing the sum of 1 through 100 at least implicitly. It feels so playful to have something like that turn up.

## Calculating Pi Less Terribly

Back on “Pi Day” I shared a terrible way of calculating the digits of π. It’s neat in principle, yes. Drop a needle randomly on a uniformly lined surface. Keep track of how often the needle crosses over a line. From this you can work out the numerical value of π. But it’s a terrible method: to be confident that π is about 3.14, rather than 3.12 or 3.38, you can expect to need over three and a third million needle-drops.
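For the curious, here is a minimal Monte Carlo sketch of that needle-drop experiment. The function name and the unit needle and line-gap lengths are my choices; by symmetry it’s enough to track the needle’s center distance to the nearest line and its tilt:

```python
import math
import random

def buffon_estimate(drops, needle=1.0, gap=1.0):
    # Drop a needle of length `needle` on lines spaced `gap` apart.
    # A drop crosses a line when the needle's center is close enough
    # to a line for its tilted half-length to reach it.
    crossings = 0
    for _ in range(drops):
        center = random.uniform(0.0, gap / 2)      # distance to nearest line
        angle = random.uniform(0.0, math.pi / 2)   # needle's tilt
        if center <= (needle / 2) * math.sin(angle):
            crossings += 1
    # P(crossing) = 2*needle / (pi*gap), so invert to estimate pi.
    return 2 * needle * drops / (gap * crossings)

random.seed(314)
print(buffon_estimate(200_000))   # hovers around 3.14, only slowly improving
```

Even at two hundred thousand drops the estimate typically wobbles in the second decimal place, which is the whole complaint.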

A friend on Twitter asked if it was worse than adding up 4 * (1 – 1/3 + 1/5 – 1/7 + … ). It’s a good question. The answer is yes, it’s far worse than that. But I want to talk about working π out that way.

Tom Batiuk’s Funky Winkerbean for the 17th of May, 2015. The worst part of this strip is that Science Teacher Mark Twain will go back to the teachers’ lounge and complain that none of his students got it.

This isn’t part of the main post. But the comic strip happened to mention π on a day when I’m talking about π so who am I to resist coincidence?


• #### Matthew Wright 9:30 pm on Sunday, 17 May, 2015 Permalink | Reply

I tried memorising pi once, but for some reason I couldn’t finish. It wasn’t very rational of me. I sort of had to say that. (Actually, I probably didn’t…)

• #### Joseph Nebus 5:27 pm on Wednesday, 20 May, 2015 Permalink | Reply

Aw, not to fear. I don’t think worse of you for saying it. It is the kind of joke people have to say, after all.

• #### abyssbrain 3:40 am on Monday, 18 May, 2015 Permalink | Reply

It’s really difficult to manually calculate pi using a series. William Shanks claimed to have calculated pi manually up to more than 700 digits using Machin’s formula,

$\frac{\pi}{4}=4\arctan \frac{1}{5}-\arctan \frac{1}{239}$

but he erred on the 528th digit, I think. It was a very amazing achievement nonetheless.
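Machin’s identity is easy to verify numerically. A floating-point check like the sketch below (function name mine) only reaches machine precision, of course; Shanks’s hand computation expanded both arctangents as power series carried to hundreds of digits:

```python
import math

def machin_pi():
    # Machin's 1706 identity: pi/4 = 4*arctan(1/5) - arctan(1/239).
    # Both arctangent arguments are small, so their power series
    # converge quickly -- the key to hand-computing many digits.
    return 4 * (4 * math.atan(1 / 5) - math.atan(1 / 239))

print(machin_pi())
```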

• #### Joseph Nebus 5:31 pm on Wednesday, 20 May, 2015 Permalink | Reply

Shanks’s case is interesting, not just because of his great work and tragic error. There is also that museum rotunda that tries to honor him by displaying the digits of pi; it was built before his error was found.

So the question is: keep the digits he calculated which are wrong, or replace them with the digits he would have calculated had he done the work right? Bearing in mind the purpose is to honor Shanks’s work, and nobody is going to get the digits of pi from reading what is essentially a piece of memorial art.

• #### Chow Kim Wan 1:47 am on Wednesday, 3 June, 2015 Permalink | Reply

From what I know, the Gregory-Leibniz series, while theoretically correct, converges very slowly to the desired value. I tried it once, up to around eight hundred terms. It was a nightmare trying to get the figure to converge to a reasonably good number of decimal places. Some other formulas are more useful for this purpose. This series remains one of theoretical interest and mathematical beauty.

• #### Joseph Nebus 10:40 pm on Friday, 5 June, 2015 Permalink | Reply

Oh, there’s no need to disparage the series as ‘theoretically’ correct; it’s right, no question about that. It’s just a matter of how much work is required to get what you want out of it. As series approximations for pi go, it’s not very efficient. It takes a lot of work to get a few meager decimal places right. But at least it’s very easy to understand.

If you were stranded on a desert island and needed to calculate the digits of pi for some reason, you could remember this formula well enough and work out its terms well enough. Other formulas would get you more decimal places with fewer terms being calculated, but you have to remember and apply the formulas, and that’s a pain.

Interestingly, it’s possible to calculate an arbitrary binary digit of pi without working out all the binary digits that come before it. There’s no known way to do that for the decimal digits of pi; I forget whether it’s merely that nobody has found one, or whether it’s known to be impossible. But the result is that if you wanted to know just the (say) 2,038 trillionth binary digit of pi, you could work that out without knowing anything about the 2,037,999,999,999,999 digits that came before it.
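The result being alluded to is the Bailey–Borwein–Plouffe formula of 1995, which gives hexadecimal (and so binary) digits. The digit-extraction trick works by doing modular arithmetic on this sum; the sketch below (function name mine) only verifies the series itself, which already converges by a factor of 16 per term:

```python
import math

def bbp_partial(terms):
    # Partial sum of the Bailey-Borwein-Plouffe series for pi.
    # Each successive term is 16 times smaller; the 1/16^k factor
    # is what lets a single hex digit be isolated without the rest.
    return sum(
        (1 / 16 ** k)
        * (4 / (8 * k + 1) - 2 / (8 * k + 4)
           - 1 / (8 * k + 5) - 1 / (8 * k + 6))
        for k in range(terms)
    )

print(abs(bbp_partial(12) - math.pi))   # well below 1e-12 after a dozen terms
```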

## But How Interesting Is A Basketball Score?

When I worked out how interesting, in an information-theory sense, a basketball game — and from that, a tournament — might be, I supposed there was only one thing that might be interesting about the game: who won? Or to be exact, “did (this team) win”? But that isn’t everything we might want to know about a game. For example, we might want to know what a team scored. People often do. So how to measure this?

The answer was given, in embryo, in my first piece about how interesting a game might be. If you can list all the possible outcomes of something that has multiple outcomes, and how probable each of those outcomes is, then you can describe how much information there is in knowing the result. It’s the sum, over all the possible results, of negative one times the probability of the result times the logarithm-base-two of that probability: $-\sum_i p_i \log_2 p_i$.

When we were interested in only whether a team won or lost, there were just the two outcomes possible, which made for some fairly simple calculations. It indicates that the information content of a game can be as high as 1 — if the team is equally likely to win or to lose — or as low as 0 — if the team is sure to win, or sure to lose. And the units of this measure are bits, the same kind of thing we use to measure (in groups of bits called bytes) how big a computer file is.
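That sum is Shannon entropy, and it’s a one-liner to compute. A minimal sketch (function name mine), checking the two extremes just described:

```python
import math

def entropy_bits(probabilities):
    # Shannon entropy: -sum of p * log2(p) over the possible outcomes.
    # Outcomes with probability zero contribute nothing to the sum.
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.5, 0.5]))   # an evenly matched game: 1 bit
print(entropy_bits([1.0]))        # a foregone conclusion: 0 bits
```

Feeding in a full distribution over possible scores, rather than just win/lose, is the same calculation with a longer list.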

## It Would Have Been One More Ride Because

I apologize for being slow writing the conclusion of the explanation for why my Dearly Beloved and I would expect one more ride following our plan to keep re-riding Disaster Transport as long as a fairly flipped coin came up tails. It’s been a busy week, and actually, I’d got stuck trying to think of a way to explain the sum I needed to take using only formulas that a normal person might find, or believe. I think I have it.

## Proving A Number Is Not 1

I want to do some more tricky examples of using this ε idea, where I show two numbers have to be the same because the difference between them is smaller than every positive number. Before I do, I want to put out a problem where we can show two numbers are not the same, since I think that makes it easier to see why the proof works where it does. It’s easy to get hypnotized by the form of an argument and not notice that the result doesn’t actually hold, particularly if all you ever see are proofs where things work out, and never any cases where the argument is invalid.

## What Numbers Equal Zero?

I want to give some examples of showing numbers are equal by showing the difference between them is ε. It’s a fairly abstruse idea, but when it works, amazing things become possible.

The easy example, although one that produces strong resistance, is showing that the number 1 is equal to the number 0.9999…. But here I have to say what I mean by that second number. It’s obvious to me that I mean a number formed by putting a decimal point up, and then filling in a ‘9’ to every digit past the decimal, repeating forever and ever without end. That’s a description so easy to grasp it looks obvious. I can give a more precise, less intuitively obvious, description, though, which makes it easier to prove what I’m going to be claiming.
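The heart of the claim can be checked with exact arithmetic. A minimal sketch (function name mine): truncate 0.9999… after n nines, and the gap from 1 is exactly $10^{-n}$, which drops below any positive ε you care to name once n is large enough.

```python
from fractions import Fraction

def partial_nines(n):
    # 0.999...9 with n nines, kept as an exact rational number.
    return sum(Fraction(9, 10 ** (k + 1)) for k in range(n))

# The gap from 1 after n nines is exactly 10^(-n).
print(1 - partial_nines(6))   # 1/1000000
```

The full number, with the nines never ending, therefore can’t differ from 1 by any positive amount, which is the ε argument in miniature.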

• #### plaidfluff 10:37 pm on Friday, 16 March, 2012 Permalink | Reply

I prefer this proof, although some people take issue with it:

x = 0.9999…
10x = 9.9999…
10x − x = 9.9999… − 0.9999…
9x = 9
x = 1

• #### Joseph Nebus 3:37 am on Saturday, 17 March, 2012 Permalink | Reply

That’s a fine demonstration and maybe more intuitively obvious, although it has the problem for me right here that it doesn’t let me show off the idea of two numbers being the same because their difference is smaller than every positive number. I’d wanted to start with something fairly simple, and then go on to proving more complicated things equal to each other.
