Why Not Infinitely Many More Rides?


The Disaster Transport building was used as a Toy Factory haunted house during Halloweekends.

Returning to the Disaster Transport ride problem: after each ride of the roller coaster we’d flip a coin to decide whether to go around again. How many more times could I expect to ride? Using the letter k to represent the number of rides, and p(k) to represent the probability of getting that many rides, it’s a straightforward use of the formula for expectation value (the sum of all the possible outcomes times the probability of that particular outcome) to find the expected number of rides.
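
Written out symbolically, that formula for the expected number of rides is the sum (the particular p(k) for coin flipping gets filled in a little further on):

E[\mbox{rides}] = \sum_{k = 0}^{\infty} k \cdot p(k)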

Where this gets to be a bit of a bother is that there are, properly speaking, infinitely many possible outcomes. There’s no reason, in theory, that a coin couldn’t come up tails every single time, and only the impatience of the Cedar Point management would keep us from riding a million times, a billion times, an infinite number of times. Common sense tells us this can’t happen; the chance of getting a billion tails in a row is just impossibly tiny. But how do we know that all these incredibly unlikely outcomes don’t add up to something moderately likely? It happens in integral calculus all the time that a huge enough pile of tiny things adds up to a moderate thing, so why not here?

Well, it doesn’t, and not just because my computer’s symbolic algebra package told me so. It’s a neat problem in the area of mathematics termed sequences and series. A sequence is pretty nearly what the word means in English; it’s a bunch of quantities, with some kind of index. Generically these are usually written with some nice familiar letter like a or b or so, with a subscript like i or j or k or even something up to n, those letters in the middle of the alphabet that usually mean “some whole number goes here”. The advantage in writing a sequence symbolically like this is that we’ll be able to introduce some tools that work, in principle, on all sequences, instead of bringing in a lot of tools for just this one little problem.

A series, well, this is a bit of terminology that confused me when I first learned it, and I admit I’m still not sure why it’s called a “series”, which to me suggests an organized bunch of things. No. In this context, a “series” is just “the sum of a sequence”. That is, it’s a number, a single number, and not a string of them.
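
In symbols: if the sequence is the string of terms a_1, a_2, a_3, and so on, then the corresponding series is the single number

\sum_{k = 1}^{\infty} a_k = a_1 + a_2 + a_3 + \cdots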

My little Disaster Transport expected-value problem is an infinite series, because it’s the sum of a sequence with infinitely many terms in it. Infinite series are wonderful things: they weave through numerical and applied mathematics with astounding breadth and power. They’re also a pain because they don’t always act the way normal everyday addition suggests they should.

For example: if you add together a finite series, only a finite number of terms, however many they are (ten, a hundred, a hundred million billion trillion), you get a finite number out of the sum. You can’t count on that with infinite series. Most famously, the series created by taking the sum of one and a half and a third and a quarter and a fifth and a sixth and on and on is bigger than any number you can imagine. It’s infinitely large; it’s “divergent”, to use the lingo.
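
That’s the harmonic series, which written out is:

\sum_{k = 1}^{\infty} \frac{1}{k} = 1 + \frac12 + \frac13 + \frac14 + \frac15 + \cdots = \infty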

For another example: if you add together a finite series, you can add in whatever order you like. The sum commutes. If you want to add together the first, third, fifth, and tenth terms, and then everything else, fine. This is one of the essential tricks to doing arithmetic in your head: you can add long columns of numbers by picking out, oh, this pair of digits adds up to ten, and this one adds up to fifteen, and this one adds up to five, and so on, rather than wasting your brain trying to remember what is seven plus six (nobody knows). But for an infinite series you can’t necessarily count on that. The series made by adding one and negative a half and positive a third and negative a quarter and positive a fifth and negative a sixth and so on, in that order, adds up to the natural logarithm of two, a little under 0.7. But by changing the order around, taking more of the positive numbers up front and waiting on the negative numbers, you can make it add up to 1.4. Or to π. Or to a hundred thousand. Or you can front-load the negative numbers, and make this series add up to zero. Or to negative a million. Or to absolutely any number you like.
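
If you’d like to watch that rearrangement trick happen, here’s a minimal sketch in Python (my illustration, nothing from the original problem): greedily take positive terms until the running sum passes the target, then negative terms until it drops back below, and repeat. The partial sums creep toward whatever target you name.

import math

# Riemann's rearrangement trick on the alternating harmonic series
# 1 - 1/2 + 1/3 - 1/4 + ...: take positive terms (odd denominators)
# until the running sum passes the target, then negative terms
# (even denominators) until it drops below, and repeat.
def rearranged_partial_sum(target, steps=100000):
    pos, neg = 1, 2   # next odd and even denominators to use
    total = 0.0
    for _ in range(steps):
        if total <= target:
            total += 1.0 / pos
            pos += 2
        else:
            total -= 1.0 / neg
            neg += 2
    return total

print(rearranged_partial_sum(1.4))      # creeps toward 1.4
print(rearranged_partial_sum(math.pi))  # creeps toward pi
print(rearranged_partial_sum(0.0))      # creeps toward 0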

Clearly, infinite series have issues.

Fortunately, many infinite series behave very nicely: they add up to a certain number, and better, they always add up to the same number no matter how you reorganize them. These are called “convergent” series. If calculus classes are still run as they were when I took one, they’re the subject of a very confusing month spent on an apparently endless string of baffling-looking tests one can put a series to, to learn whether it converges or not. In practice, I use maybe three of these tests, because those are all I really got good at before the test on the sequences-and-series chapter. (They’re also the ones it’s easiest to make up problems for, and so are probably the most testable.)

I actually started out with the hardest (for this problem) of the three I always use, the first time I worked the problem out. I’ll spare you that one (the integral test) and jump right to the one I did use, the ratio test. In this, we look at the ratio of the sequence’s successive terms as the index grows very large: that is, what is the millionth term compared to the 999,999th? The billionth term compared to the 999,999,999th? The (k + 1)-st term divided by the k-th term? If that ratio settles down, as k grows without bound, to a number strictly between -1 and 1, then the series converges. If it settles to something bigger than 1 (or less than -1), then the series diverges. If it settles at exactly positive 1 or exactly negative 1, then the test can’t tell us what happens. This is why we have other tools.
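
Stated with the limit talk put in, the textbook version of the ratio test goes: compute

L = \lim_{k \to \infty} \left| \frac{a_{k + 1}}{a_k} \right|

and then the series converges if L < 1, diverges if L > 1, and the test is silent if L = 1.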

So how does this apply to my expectation value problem? The infinite series was:

\sum_{k = 0}^{\infty} k \cdot \left(\frac12\right)^{k + 1} = 0 \times \left(\frac12\right)^1 + 1 \times \left(\frac12\right)^2 + 2 \times \left(\frac12\right)^3 + 3 \times \left(\frac12\right)^4 + 4 \times \left(\frac12\right)^5 + \cdots

For this, each a_k = k\times \left(\frac12\right)^{k + 1}. The ratio, the quotient of one term divided by the one before it, is:

\frac{a_{k + 1}}{a_k} = \frac{(k + 1)\times \left(\frac12\right)^{(k + 1) + 1}}{k\times \left(\frac12\right)^{k + 1}} = \frac{k + 1}{k} \times \frac{\left(\frac12\right)^{(k + 1) + 1}}{\left(\frac12\right)^{k + 1}} = \frac{k + 1}{k} \times \frac{\frac12 \cdot \left(\frac12\right)^{k + 1}}{\left(\frac12\right)^{k + 1}} = \frac{k + 1}{k} \times \frac12

Now, if k is a very large number, the difference between k and k + 1 is almost ignorable. A billion and one divided by a billion is very, very close to 1, and the resemblance only gets more dramatic as k grows. [ Note: don’t write stuff like this on your calculus test. You will fail, and correctly so, because I’m leaving out talk about limits and instead just plugging in some numbers that look kind of biggish to us. But we’re just feeling out what the series seems to be doing, so plugging in a couple of numbers and seeing what happens is tolerable. We’re experimenting so we know what the right answer should look like. ] So if k is a very large number, then the ratio between the (k + 1)-st and the k-th term in the series is going to be one-half, which is definitely between negative and positive one.
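
Done properly, with the limits put back in, that hand-waving becomes:

\lim_{k \to \infty} \frac{a_{k + 1}}{a_k} = \lim_{k \to \infty} \frac{k + 1}{k} \times \frac12 = 1 \times \frac12 = \frac12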

Therefore, the series converges. If we add up all these fractions, it will come to some finite number. And better, even if we change the order in which we add up these fractions, it will add up to the same number. We can do pretty much anything we like, without fear. We can work out the problem in any convenient or attractive fashion and be confident that whatever errors turn up are not going to be caused by the series itself being malevolent.
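
As a quick numerical sanity check (my illustration again, not part of the original argument), the partial sums settle down promptly instead of running off to infinity:

# Partial sums of sum_{k=0}^{n} k * (1/2)**(k+1).  If the ratio
# test steered us right, these settle toward a fixed value
# (they approach 1) rather than growing without bound.
for n in (10, 20, 40, 80):
    partial = sum(k * 0.5 ** (k + 1) for k in range(n + 1))
    print(n, partial)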

(And by the way: the stuff that Bug was talking about, which I didn’t quite understand the first time through? Bug quite cleverly rewrote the problem so it used a simpler series, one that’s much more familiar to everybody who does infinite series, and good on Bug for it. Turning a problem into a more familiar but equivalent problem is usually the right move, because we do familiar problems better. In fact, Bug’s series is rewritten in such a way that this convergence test is obvious at a glance, so it only needs to be stated explicitly for someone who doesn’t know sequences and series extremely well. But the test is still lurking there.)

