Wronski’s Formula For Pi: My Boring Mistake


So, I must confess failure. Not about deciphering Józef Maria Hoëne-Wronski’s attempted definition of π. He’d tried this crazy method throwing a lot of infinities and roots of infinities and imaginary numbers together. I believe I translated it into the language of modern mathematics fairly. And my failure is not that I found the formula actually described the number -½π.

Oh, I had an error in there, yes. And I’d found where it was. It was all the way back in the essay which first converted Wronski’s formula into something respectable. It was a small error, first appearing in the last formula of that essay and never corrected from there. This reinforces my suspicion that when normal people see formulas they mostly look at them to confirm there is a formula there. With luck they carry on and read the sentences around them.

My failure is I wanted to write a bit about boring mistakes. The kinds which you make all the time while doing mathematics work, but which you don’t worry about. Dropped signs. Constants which aren’t divided out, or which get multiplied in incorrectly. Stuff like this which you only detect because you know, deep down, that you should have gotten to an attractive simple formula and you haven’t. Mistakes which are tiresome to make, but never make you wonder if you’re in the wrong job.

The trouble is I can’t think of how to make an essay of that. We don’t tend to rate little mistakes like the wrong sign or the wrong multiple or a boring unnecessary added constant as important. This is because they’re not. The interesting stuff in a mathematical formula is usually the stuff representing variations. Change is interesting. The direction of the change? Eh, nice to know. A swapped plus or minus sign alters your understanding of the direction of the change, but that’s all. Multiplying or dividing by a constant wrongly changes your understanding of the size of the change. But that doesn’t alter what the change looks like. Just the scale of the change. Adding or subtracting the wrong constant alters what you think the change is varying from, but not what the shape of the change is. Once more, not a big deal.

But you also know that instinctively, or at least you get it from seeing how it’s worth one or two points on an exam to write -sin where you mean +sin. Or how if you ask the instructor in class about that 2 where a ½ should be, she’ll say, “Oh, yeah, you’re right” and do a hurried bit of erasing before going on.

Thus my failure: I don’t know what to say about boring mistakes that has any insight.

For the record here’s where I got things wrong. I was creating a function, named ‘f’ and using as a variable ‘x’, to represent Wronski’s formula. I’d gotten to this point:

f(x) = -4 \imath x 2^{\frac{1}{2}\cdot \frac{1}{x}} \left\{ e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} -  e^{- \imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}

And then I observed how the stuff in curly braces there is “one of those magic tricks that mathematicians know because they see it all the time”. And I wanted to call in this formula, correctly:

\sin\left(\phi\right) = \frac{e^{\imath \phi} - e^{-\imath \phi}}{2\imath }
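That identity is easy to poke at numerically, if you don't trust your memory of Euler's Formula. Here's a little Python check (my own sketch, not part of the original essay):

```python
import cmath
import math

def sine_via_exponentials(phi):
    # right-hand side of the identity: (e^(i*phi) - e^(-i*phi)) / (2i)
    return (cmath.exp(1j * phi) - cmath.exp(-1j * phi)) / (2j)

# compare against the ordinary sine at a few sample angles
for phi in (0.3, math.pi / 4, 2.0):
    assert cmath.isclose(sine_via_exponentials(phi), math.sin(phi))
```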

So here’s where I went wrong. I took the -4\imath way off in the front of that first formula and combined it with the stuff in braces to make 2 times a sine of some stuff. I apologize for this. I must have been writing stuff out faster than I was thinking about it. If I had thought, I would have gone through this intermediate step:

f(x) = -4 \imath x 2^{\frac{1}{2}\cdot \frac{1}{x}} \left\{ e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} -  e^{- \imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\} \cdot \frac{2\imath}{2\imath}

Because with that form in mind, it’s easy to combine the stuff in curled braces with the 2\imath in the denominator. From that we get, correctly, \sin\left(\frac{\pi}{4}\cdot\frac{1}{x}\right) . And then the -4\imath on the far left of that expression and the 2\imath on the right multiply together to produce the number 8.

So the function ought to have been, all along:

f(x) = 8 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

Not very different, is it? Ah, but it makes a huge difference. Carry through with all the L’Hôpital’s Rule stuff described in previous essays. All the complicated formula work is the same. There’s a different number hanging off the front, waiting to multiply in. That’s all. And what you find, redoing all the work but using this corrected function, is that Wronski’s original mess —

\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} -  \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}

— should indeed equal:

2\pi
All right, there’s an extra factor of 2 here. And I don’t think that is my mistake. Or if it is, other people come to the same mistake without my prompting.

Possibly the book I drew this from misquoted Wronski. It’s at least as good to have a formula for 2π as it is to have one for π. Or Wronski had a mistake in his original formula, and had a constant multiplied out front which he didn’t want. It happens to us all.
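If you’d rather not redo all the L’Hôpital work by hand, a quick numeric check (a Python sketch of mine, assuming only the corrected formula above) shows the corrected function settling toward 2π:

```python
import math

def f(x):
    # corrected reading of Wronski's formula: f(x) = 8 x 2^(1/(2x)) sin(pi/(4x))
    return 8 * x * 2 ** (1 / (2 * x)) * math.sin(math.pi / (4 * x))

# push x toward infinity and watch f(x) settle near 2*pi = 6.2831853...
for x in (10.0, 1000.0, 1000000.0):
    print(x, f(x))
```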



Wronski’s Formula For Pi: How Close We Came


Józef Maria Hoëne-Wronski had an idea for a new, universal, culturally-independent definition of π. It was this formula that nobody went along with because they had looked at it:

\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} -  \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}

I made some guesses about what he would want this to mean. And how we might put that in terms of modern, conventional mathematics. I describe those in the above links. In terms of limits of functions, I got this:

\displaystyle  \lim_{x \to \infty} f(x) = \lim_{x \to \infty} -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

The trouble is that limit took more work than I wanted to do to evaluate. If you try evaluating that ‘f(x)’ at ∞, you get an expression that looks like zero times ∞. This calls for the use of L’Hôpital’s Rule, which tells you how to find the limit for something that looks like zero divided by zero, or like ∞ divided by ∞. Do a little rewriting — replacing that first ‘x’ with ‘\frac{1}{1 / x}’ — and this ‘f(x)’ behaves like L’Hôpital’s Rule needs.

The trouble is, that’s a pain to evaluate. L’Hôpital’s Rule works on functions that look like one function divided by another function. It does this by calculating the derivative of the numerator function divided by the derivative of the denominator function. And I decided that was more work than I wanted to do.

Where trouble comes up is all those parts where \frac{1}{x} turns up. The derivatives of functions with a lot of \frac{1}{x} terms in them get more complicated than the original functions were. Is there a way to get rid of some or all of those?

And there is. Do a change of variables. Let me summon the variable ‘y’, whose value is exactly \frac{1}{x} . And then I’ll define a new function, ‘g(y)’, whose value is whatever ‘f’ would be at \frac{1}{y} . That is, and this is just a little bit of algebra:

g(y) = -2 \cdot \frac{1}{y} \cdot 2^{\frac{1}{2} y } \cdot \sin\left(\frac{\pi}{4} y\right)
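A change of variables is exactly the kind of step where a boring mistake loves to hide, so here’s a quick numeric check (my Python sketch, using only the formulas above) that ‘f(x)’ and ‘g(1/x)’ really do agree:

```python
import math

def f(x):
    # the function from the earlier essays
    return -2 * x * 2 ** (1 / (2 * x)) * math.sin(math.pi / (4 * x))

def g(y):
    # the same thing after substituting y = 1/x
    return -2 * (1 / y) * 2 ** (y / 2) * math.sin(math.pi * y / 4)

# f(x) and g(1/x) should match for any positive x
for x in (2.0, 10.0, 500.0):
    assert math.isclose(f(x), g(1 / x))
```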

The limit of ‘f(x)’ for ‘x’ at ∞ should be the same number as the limit of ‘g(y)’ for ‘y’ at … you’d really like it to be zero. If ‘x’ is incredibly huge, then \frac{1}{x} has to be incredibly small. But we can’t just swap the limit of ‘x’ at ∞ for the limit of ‘y’ at 0. The limit of a function at a point reflects the value of the function at a neighborhood around that point. If the point’s 0, this includes positive and negative numbers. But looking for the limit at ∞ gets at only positive numbers. You see the difference?

… For this particular problem it doesn’t matter. But it might. Mathematicians handle this by taking a “one-sided limit”, or a “directional limit”. The normal limit at 0 of ‘g(y)’ is based on what ‘g(y)’ looks like in a neighborhood of 0, positive and negative numbers. In the one-sided limit, we just look at a neighborhood of 0 that’s all values greater than 0, or less than 0. In this case, I want the neighborhood that’s all values greater than 0. And we write that by adding a little + in superscript to the limit. For the other side, the neighborhood less than 0, we add a little – in superscript. So I want to evaluate:

\displaystyle  \lim_{y \to 0^+} g(y) = \lim_{y \to 0^+}  -2\cdot\frac{2^{\frac{1}{2}y} \cdot \sin\left(\frac{\pi}{4} y\right)}{y}

Limits and L’Hôpital’s Rule and stuff work for one-sided limits the way they do for regular limits. So there’s that mercy. The first attempt at this limit, seeing what ‘g(y)’ is if ‘y’ happens to be 0, gives -2 \cdot \frac{1 \cdot 0}{0} . A zero divided by a zero is promising. That’s not defined, no, but it’s exactly the format that L’Hôpital’s Rule likes. The numerator is:

-2 \cdot 2^{\frac{1}{2}y} \sin\left(\frac{\pi}{4} y\right)

And the denominator is:

y
The first derivative of the denominator is blessedly easy: the derivative of y, with respect to y, is 1. The derivative of the numerator is a little harder. It demands the use of the Product Rule and the Chain Rule, just as last time. But these chains are easier.

The first derivative of the numerator is going to be:

-2 \cdot 2^{\frac{1}{2}y} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} y\right) + -2 \cdot 2^{\frac{1}{2}y} \cdot \cos\left(\frac{\pi}{4} y\right) \cdot \frac{\pi}{4}
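Product Rule plus Chain Rule is a fine place for more boring mistakes, so it’s worth checking this derivative against a finite-difference approximation. A Python sketch (mine, built only from the formulas above):

```python
import math

def numerator(y):
    # the numerator: -2 * 2^(y/2) * sin(pi*y/4)
    return -2 * 2 ** (y / 2) * math.sin(math.pi * y / 4)

def numerator_prime(y):
    # the derivative worked out by the Product and Chain Rules
    return (-2 * 2 ** (y / 2) * math.log(2) * 0.5 * math.sin(math.pi * y / 4)
            + -2 * 2 ** (y / 2) * math.cos(math.pi * y / 4) * math.pi / 4)

def central_difference(func, y, h=1e-6):
    # numerical estimate of the derivative
    return (func(y + h) - func(y - h)) / (2 * h)

for y in (0.0, 0.5, 2.0):
    assert math.isclose(numerator_prime(y), central_difference(numerator, y),
                        rel_tol=1e-6, abs_tol=1e-6)
```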

Yeah, this is the simpler version of the thing I was trying to figure out last time. Because this is what’s left if I write the derivative of the numerator over the derivative of the denominator:

\displaystyle  \lim_{y \to 0^+} \frac{ -2 \cdot 2^{\frac{1}{2}y} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} y\right) + -2 \cdot 2^{\frac{1}{2}y} \cdot \cos\left(\frac{\pi}{4} y\right) \cdot \frac{\pi}{4} }{1}

And now this is easy. Promise. There’s no expressions of ‘y’ divided by other expressions of ‘y’ or anything else tricky like that. There’s just a bunch of ordinary functions, all of them defined for when ‘y’ is zero. If this limit exists, it’s got to be equal to:

\displaystyle  -2 \cdot 2^{\frac{1}{2} 0} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} \cdot 0\right) + -2 \cdot 2^{\frac{1}{2} 0 } \cdot \cos\left(\frac{\pi}{4} \cdot 0\right) \cdot \frac{\pi}{4}

\frac{\pi}{4} \cdot 0 is 0. And the sine of 0 is 0. The cosine of 0 is 1. So all this gets to be a lot simpler, really fast.

\displaystyle  -2 \cdot 2^{0} \cdot \log(2) \cdot \frac{1}{2} \cdot 0 + -2 \cdot 2^{ 0 } \cdot 1 \cdot \frac{\pi}{4}

And 2^{0} is equal to 1. So the part to the left of the + sign there is all zero. What remains is:

\displaystyle   0 + -2 \cdot \frac{\pi}{4}
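As a sanity check (my own Python sketch, not from the essay), pushing small positive values of ‘y’ into ‘g(y)’ should creep toward this same -2 \cdot \frac{\pi}{4} :

```python
import math

def g(y):
    # g(y) = -2 * 2^(y/2) * sin(pi*y/4) / y
    return -2 * 2 ** (y / 2) * math.sin(math.pi * y / 4) / y

# approach 0 from the positive side, as the one-sided limit demands
for y in (0.1, 0.001, 0.00001):
    print(y, g(y))
```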

And so, finally, we have it. Wronski’s formula, as best I make it out, is a function whose value is …

-\frac{1}{2}\pi
… So, what Wronski had been looking for, originally, was π. This is … oh, so very close to right. I mean, there’s π right there, it’s just multiplied by an unwanted -\frac{1}{2} . The question is, where’s the mistake? Was Wronski wrong to start with? Did I parse him wrongly? Is it possible that the book I copied Wronski’s formula from made a mistake?

Could be any of them. I’d particularly suspect I parsed him wrongly. I returned the library book I had got the original claim from, and I can’t find it again before this is set to publish. But I should check whether Wronski was thinking to find π, the ratio of the circumference to the diameter of a circle. Or might he have looked to find the ratio of the circumference to the radius of a circle? Either is an interesting number worth finding. We’ve settled on the circumference-over-diameter as valuable, likely for practical reasons. It’s much easier to measure the diameter than the radius of a thing. (Yes, I have read the Tau Manifesto. No, I am not impressed by it.) But if you know 2π, then you know π, or vice-versa.

The next question: yeah, but I turned up -½π. What am I talking about 2π for? And the answer there is, I’m not the first person to try working out Wronski’s stuff. You can try putting the expression, as best you parse it, into a tool like Mathematica and see what makes sense. Or you can read, for example, Quora commenters giving answers with way less exposition than I do. And I’m convinced: somewhere along the line I messed up. Not in an important way, but, essentially, doing something equivalent to dividing by -2 when I should have multiplied by it.

I’ve spotted my mistake. I figure to come back around to explaining where it is and how I made it.

Wronski’s Formula For Pi: Two Weird Tricks For Limits That Mathematicians Keep Using


So now a bit more on Józef Maria Hoëne-Wronski’s attempted definition of π. I had got it rewritten to this form:

\displaystyle  \lim_{x \to \infty} f(x) = \lim_{x \to \infty} -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

And I’d tried the first thing mathematicians do when trying to evaluate the limit of a function at a point. That is, take the value of that point and put it in whatever the formula is. If that formula evaluates to something meaningful, then that value is the limit. That attempt gave this:

-2 \cdot \infty \cdot 1 \cdot 0

Because the limit of ‘x’, for ‘x’ at ∞, is infinitely large. The limit of ‘2^{\frac{1}{2}\cdot\frac{1}{x}}’ for ‘x’ at ∞ is 1. The limit of ‘\sin(\frac{\pi}{4}\cdot\frac{1}{x})’ for ‘x’ at ∞ is 0. We can take limits that are 0, or limits that are some finite number, or limits that are infinitely large. But multiplying a zero times an infinity is dangerous. Could be anything.

Mathematicians have a tool. We know it as L’Hôpital’s Rule. It’s named for the French mathematician Guillaume de l’Hôpital, who discovered it in the works of his tutor, Johann Bernoulli. (They had a contract giving l’Hôpital publication rights. If Wikipedia’s right, the preface of the book credited Bernoulli, although apparently not for this result specifically. The full story is more complicated and ambiguous. The previous sentence may be said about most things.)

So here’s the first trick. Suppose you’re finding the limit of something that you can write as the quotient of one function divided by another. So, something that looks like this:

\displaystyle  \lim_{x \to a} \frac{h(x)}{g(x)}

(Normally, this gets presented as ‘f(x)’ divided by ‘g(x)’. But I’m already using ‘f(x)’ for another function and I don’t want to muddle what that means.)

Suppose it turns out that at ‘a’, both ‘h(x)’ and ‘g(x)’ are zero, or both ‘h(x)’ and ‘g(x)’ are ∞. Zero divided by zero, or ∞ divided by ∞, looks like danger. It’s not necessarily so, though. If this limit exists, then we can find it by taking the first derivatives of ‘h’ and ‘g’, and evaluating:

\displaystyle  \lim_{x \to a} \frac{h'(x)}{g'(x)}

That ‘ mark is a common shorthand for “the first derivative of this function, with respect to the only variable we have around here”.

This doesn’t look like it should help matters. Often it does, though. There’s an excellent chance that ‘h'(x)’ and ‘g'(x)’ aren’t simultaneously zero, or ∞, at ‘a’. And once that’s so, we’ve got a meaningful limit. This doesn’t always work. Sometimes we have to use this l’Hôpital’s Rule trick a second time, or a third, and so on. But it works so very often for the kinds of problems we like to do. It reaches the point that if it doesn’t work, we have to suspect we’re calculating the wrong thing.
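A worked example may help, using the classic 0-divided-by-0 case of \frac{\sin(x)}{x} at zero (my example, not one from these essays). The rule says to evaluate the derivative of the numerator over the derivative of the denominator, \frac{\cos(x)}{1} , at zero instead:

```python
import math

def quotient(x):
    # sin(x)/x: both numerator and denominator are 0 at x = 0
    return math.sin(x) / x

def lhopital_value():
    # derivative of numerator over derivative of denominator, evaluated at 0
    return math.cos(0.0) / 1.0

# the original quotient, evaluated ever closer to 0, approaches that value (1)
for x in (0.5, 0.01, 0.0001):
    print(x, quotient(x))
```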

But wait, you protest, reasonably. This is fine for problems where the limit looks like 0 divided by 0, or ∞ divided by ∞. What Wronski’s formula got me was 0 times 1 times ∞. And I won’t lie: I’m a little unsettled by having that 1 there. I feel like multiplying by 1 shouldn’t be a problem, but I have doubts.

That zero times ∞ thing, though? That’s easy. Here’s the second trick. Let me put it this way: isn’t ‘x’ really the same thing as \frac{1}{ 1 / x } ?

I expect your answer is to slam your hand down on the table and glare at my writing with contempt. So be it. I told you it was a trick.

And it’s a perfectly good one. And it’s perfectly legitimate, too. \frac{1}{x} is a meaningful number if ‘x’ is any finite number other than zero. So is \frac{1}{ 1 / x } . Mathematicians accept a definition of limit that doesn’t really depend on the value of your expression at a point. So that \frac{1}{x} wouldn’t be meaningful for ‘x’ at zero doesn’t mean we can’t evaluate its limit for ‘x’ at zero. And just because we might not be sure what \frac{1}{x} would mean for infinitely large ‘x’ doesn’t mean we can’t evaluate its limit for ‘x’ at ∞.

I see you, person who figures you’ve caught me. The first thing I tried was putting in the value of ‘x’ at the ∞, all ready to declare that this was the limit of ‘f(x)’. I know my caveats, though. Plugging in the value you want the limit at into the function whose limit you’re evaluating is a shortcut. If you get something meaningful, then that’s the same answer you would get finding the limit properly. Which is done by looking at the neighborhood around but not at that point. So that’s why this reciprocal-of-the-reciprocal trick works.

So back to my function, which looks like this:

\displaystyle  f(x) = -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

Do I want to replace ‘x’ with \frac{1}{1 / x} , or do I want to replace \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right) with \frac{1}{1 / \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)} ? I was going to say something about how many times in my life I’ve been glad to take the reciprocal of the sine of an expression of x. But just writing the symbols out like that makes the case better than being witty would.

So here is a new, L’Hôpital’s Rule-friendly, version of my version of Wronski’s formula:

\displaystyle f(x) = -2 \frac{2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)}{\frac{1}{x}}

I put that -2 out in front because it’s not really important. The limit of a constant number times some function is the same as that constant number times the limit of that function. We can put that off to the side, work on other stuff, and hope that we remember to bring it back in later. I manage to remember it about four-fifths of the time.

So these are the numerator and denominator functions I was calling ‘h(x)’ and ‘g(x)’ before:

h(x) = 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

g(x) = \frac{1}{x}

The limit of both of these at ∞ is 0, just as we might hope. So we take the first derivatives. That for ‘g(x)’ is easy. Anyone who’s reached week three in Intro Calculus can do it. This may only be because she’s gotten bored and leafed through the formulas on the inside front cover of the textbook. But she can do it. It’s:

g'(x) = -\frac{1}{x^2}
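Even the easy derivative deserves a check, since dropped signs happen exactly here. A finite-difference comparison, sketched in Python (mine):

```python
def g(x):
    return 1 / x

def g_prime(x):
    # the claimed derivative: -1/x^2
    return -1 / x ** 2

def central_difference(func, x, h=1e-6):
    # numerical estimate of the derivative
    return (func(x + h) - func(x - h)) / (2 * h)

for x in (1.0, 3.0, 10.0):
    assert abs(g_prime(x) - central_difference(g, x)) < 1e-6
```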

The derivative for ‘h(x)’ is a little more involved. ‘h(x)’ we can write as the product of two expressions, that 2^{\frac{1}{2}\cdot \frac{1}{x}} and that \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right) . And each of those expressions contains within themselves another expression, that \frac{1}{x} . So this is going to require the Product Rule, of two expressions that each require the Chain Rule.

This is as far as I got with that before slamming my hand down on the table and glaring at the problem with disgust:

h'(x) = 2^{\frac{1}{2}\frac{1}{x}} \cdot \log(2) \cdot \frac{1}{2} \cdot (-1) \cdot \frac{1}{x^2} + 2^{\frac{1}{2}\frac{1}{x}} \cdot \cos( arg ) bleah

Yeah I’m not finishing that. Too much work. I’m going to reluctantly try thinking instead.

(If you want to do that work — actually, it isn’t much more past there, and if you followed that first half you’re going to be fine. And you’ll see an echo of it in what I do next time.)

Wronski’s Formula For Pi: A First Limit


When I last looked at Józef Maria Hoëne-Wronski’s attempted definition of π I had gotten it to this. Take the function:

f(x) = -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

And find its limit when ‘x’ is ∞. Formally, you want to do this by proving there’s some number, let’s say ‘L’. And ‘L’ has the property that you can pick any margin-of-error number ε that’s bigger than zero. And whatever that ε is, there’s some number ‘N’ so that whenever ‘x’ is bigger than ‘N’, ‘f(x)’ is larger than ‘L – ε’ and also smaller than ‘L + ε’. This can be a lot of mucking about with expressions to prove.

Fortunately we have shortcuts. There’s work we can do that gets us ‘L’, and we can rely on other proofs that show that this must be the limit of ‘f(x)’ at some value ‘a’. I use ‘a’ because that doesn’t commit me to talking about ∞ or any other particular value. The first approach is to just evaluate ‘f(a)’. If you get something meaningful, great! We’re done. That’s the limit of ‘f(x)’ at ‘a’. This approach is called “substitution” — you’re substituting ‘a’ for ‘x’ in the expression of ‘f(x)’ — and it’s great. Except that if your problem’s interesting then substitution won’t work. Still, maybe Wronski’s formula turns out to be lucky. Fit in ∞ where ‘x’ appears and we get:

f(\infty) = -2 \infty 2^{\frac{1}{2}\cdot \frac{1}{\infty}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{\infty}\right)

So … all right. Not quite there yet. But we can get there. For example, \frac{1}{\infty} has to be — well. It’s what you would expect if you were a kid and not worried about rigor: 0. We can make it rigorous if you like. (It goes like this: Pick any ε larger than 0. Then whenever ‘x’ is larger than \frac{1}{\epsilon} then \frac{1}{x} is less than ε. So the limit of \frac{1}{x} at ∞ has to be 0.) So let’s run with this: replace all those \frac{1}{\infty} expressions with 0. Then we’ve got:

f(\infty) = -2 \infty 2^{0} \sin\left(0\right)

The sine of 0 is 0. 2^{0} is 1. So substitution tells us the limit is -2 times ∞ times 1 times 0. That there’s an ∞ in there isn’t a problem. A limit can be infinitely large. Think of the limit of ‘x^2’ at ∞. An infinitely large thing times an infinitely large thing is fine. The limit of ‘x e^x’ at ∞ is infinitely large. A zero times a zero is fine; that’s zero again. But having an ∞ times a 0? That’s trouble. ∞ times something should be huge; anything times zero should be 0; which term wins?
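That parenthetical ε-and-N argument for \frac{1}{\infty} a couple paragraphs back is concrete enough to poke at in code, if you like seeing such things. A Python sketch (mine):

```python
# pick any margin-of-error eps > 0; then whenever x > 1/eps, 1/x < eps
for eps in (0.1, 0.001, 0.000001):
    N = 1 / eps
    for x in (N * 2, N * 100):  # any x bigger than N will do
        assert 1 / x < eps
```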

So we have to fall back on alternate plans. Fortunately there’s a tool we have for limits when we’d otherwise have to face an infinitely large thing times a zero.

I hope to write about this next time. I apologize for not getting through it today but time wouldn’t let me.

As I Try To Make Wronski’s Formula For Pi Into Something I Like


I remain fascinated with Józef Maria Hoëne-Wronski’s attempted definition of π. It had started out like this:

\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} -  \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}

And I’d translated that into something that modern mathematicians would accept without flinching. That is to evaluate the limit of a function that looks like this:

\displaystyle \lim_{x \to \infty} f(x)


f(x) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{x}} -  \left(1 - \imath\right)^{\frac{1}{x}} \right\}

So. I don’t want to deal with that f(x) as it’s written. I can make it better. One thing that bothers me is seeing the complex number 1 + \imath raised to a power. I’d like to work with something simpler than that. And I can’t see that number without also noticing that I’m subtracting from it 1 - \imath raised to the same power. 1 + \imath and 1 - \imath are a “conjugate pair”. It’s usually nice to see those. It often hints at ways to make your expression simpler. That’s one of those patterns you pick up from doing a lot of problems as a mathematics major, and that then look like magic to the lay audience.

Here’s the first way I figure to make my life simpler. It’s in rewriting that 1 + \imath and 1 - \imath stuff so it’s simpler. It’ll be simpler by using exponentials. Shut up, it will too. I get there through Gauss, Descartes, and Euler.

At least I think it was Gauss who pointed out how you can match complex-valued numbers with points on the two-dimensional plane. On a sheet of graph paper, if you like. The number 1 + \imath matches to the point with x-coordinate 1, y-coordinate 1. The number 1 - \imath matches to the point with x-coordinate 1, y-coordinate -1. Yes, yes, this doesn’t sound like much of an insight Gauss had, but his work goes on. I’m leaving it off here because that’s all that I need for right now.

So these two numbers that offended me I can think of as points. They have Cartesian coordinates (1, 1) and (1, -1). But there’s never only one coordinate system for something. There may be only one that’s good for the problem you’re doing. I mean that makes the problem easier to study. But there are always infinitely many choices. For points on a flat surface like a piece of paper, and where the points don’t represent any particular physics problem, there’s two good choices. One is the Cartesian coordinates. In it you refer to points by an origin, an x-axis, and a y-axis. How far is the point from the origin in a direction parallel to the x-axis? (And in which direction? This gives us a positive or a negative number.) How far is the point from the origin in a direction parallel to the y-axis? (And in which direction? Same positive or negative thing.)

The other good choice is polar coordinates. For that we need an origin and a positive x-axis. We refer to points by how far they are from the origin, heedless of direction. And then to get direction, what angle the line segment connecting the point with the origin makes with the positive x-axis. The first of these numbers, the distance, we normally label ‘r’ unless there’s compelling reason otherwise. The other we label ‘θ’. ‘r’ is always going to be a positive number or, possibly, zero. ‘θ’ might be any number, positive or negative. By convention, we measure angles so that positive numbers are counterclockwise from the x-axis. I don’t know why. I guess it seemed less weird for, say, the point with Cartesian coordinates (0, 1) to have a positive angle rather than a negative angle. That angle would be \frac{\pi}{2} , because mathematicians like radians more than degrees. They make other work easier.

So. The point 1 + \imath corresponds to the polar coordinates r = \sqrt{2} and \theta = \frac{\pi}{4} . The point 1 - \imath corresponds to the polar coordinates r = \sqrt{2} and \theta = -\frac{\pi}{4} . Yes, the θ coordinates being negative one times each other is common in conjugate pairs. Also, if you have doubts about my use of the word “the” before “polar coordinates”, well-spotted. If you’re not sure about that thing where ‘r’ is not negative, again, well-spotted. I intend to come back to that.
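Python’s standard library happens to know this conversion, so here’s a quick check of those coordinates (a sketch of mine, using cmath.polar, which makes the same preferred-θ choice I describe below):

```python
import cmath
import math

# convert 1+i and 1-i from Cartesian to polar coordinates
r1, theta1 = cmath.polar(1 + 1j)
r2, theta2 = cmath.polar(1 - 1j)

# both points sit sqrt(2) away from the origin
assert math.isclose(r1, math.sqrt(2))
assert math.isclose(r2, math.sqrt(2))

# at angles pi/4 and -pi/4 off the positive x-axis
assert math.isclose(theta1, math.pi / 4)
assert math.isclose(theta2, -math.pi / 4)
```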

With the polar coordinates ‘r’ and ‘θ’ to describe a point I can go back to complex numbers. I can match the point to the complex number with the value given by r e^{\imath\theta} , where ‘e’ is that old 2.71828something number. Superficially, this looks like a big dumb waste of time. I had some problem with imaginary numbers raised to powers, so now, I’m rewriting things with a number raised to imaginary powers. Here’s why it isn’t dumb.

It’s easy to raise a number written like this to a power. r e^{\imath\theta} raised to the n-th power is going to be equal to r^n e^{\imath\theta \cdot n} . (Because (a \cdot b)^n = a^n \cdot b^n and we’re going to go ahead and assume this stays true if ‘b’ is a complex-valued number. It does, but you’re right to ask how we know that.) And this turns into raising a real-valued number to a power, which we know how to do. And it involves multiplying the angle by that power, which is also easy.
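Here’s that claim checked numerically for one sample power (my Python sketch; the power \frac{1}{3} is an arbitrary choice):

```python
import cmath
import math

z = 1 + 1j
r, theta = math.sqrt(2), math.pi / 4  # polar coordinates of 1+i

n = 1 / 3  # an arbitrary sample power
direct = z ** n                                 # raise the complex number directly
via_polar = r ** n * cmath.exp(1j * theta * n)  # r^n e^(i*theta*n)

assert cmath.isclose(direct, via_polar)
```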

And we can get back to something that looks like 1 + \imath too. That is, something that’s a real number plus \imath times some real number. This is through one of the many Euler’s Formulas. The one that’s relevant here is that e^{\imath \phi} = \cos(\phi) + \imath \sin(\phi) for any real number ‘φ’. So, that’s true also for ‘θ’ times ‘n’. Or, looking to where everybody knows we’re going, also true for ‘θ’ divided by ‘x’.

OK, on to the people so anxious about all this. I talked about the angle made between the line segment that connects a point and the origin and the positive x-axis. “The” angle. “The”. If that wasn’t enough explanation of the problem, mention how your thinking’s done a 360 degree turn and you see it different now. In an empty room, if you happen to be in one. Your pedantic know-it-all friend is explaining it now. There’s an infinite number of angles that correspond to any given direction. They’re all separated by 360 degrees or, to a mathematician, 2π.

And more. What’s the difference between going out five units of distance in the direction of angle 0 and going out minus-five units of distance in the direction of angle -π? That is, between walking forward five paces while facing east and walking backward five paces while facing west? Yeah. So if we let ‘r’ be negative we’ve got twice as many infinitely many sets of coordinates for each point.

This complicates raising numbers to powers. θ times n might match with some point that’s very different from θ-plus-2-π times n. There might be a whole ring of powers. This seems … hard to work with, at least. But it’s, at heart, the same problem you get thinking about the square root of 4 and concluding it’s both plus 2 and minus 2. If you want “the” square root, you’d like it to be a single number. At least if you want to calculate anything from it. You have to pick out a preferred θ from the family of possible candidates.

For me, that’s whatever set of coordinates has ‘r’ that’s positive (or zero), and that has ‘θ’ between -π and π. Or between 0 and 2π. It could be any strip of numbers that’s 2π wide. Pick what makes sense for the problem you’re doing. It’s going to be the strip from -π to π. Perhaps the strip from 0 to 2π.

What this all amounts to is that I can turn this:

f(x) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{x}} -  \left(1 - \imath\right)^{\frac{1}{x}} \right\}

into this:

f(x) = -4 \imath x \left\{ \left(\sqrt{2} e^{\imath \frac{\pi}{4}}\right)^{\frac{1}{x}} -  \left(\sqrt{2} e^{-\imath \frac{\pi}{4}} \right)^{\frac{1}{x}} \right\}

without changing its meaning any. Raising a number to the one-over-x power looks different from raising it to the n power. But the work isn’t different. The function I wrote out up there is the same as this function:

f(x) = -4 \imath x \left\{ \sqrt{2}^{\frac{1}{x}} e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} - \sqrt{2}^{\frac{1}{x}} e^{-\imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}

I can’t look at that number, \sqrt{2}^{\frac{1}{x}} , sitting there, multiplied by two things added together, and leave that. (OK, subtracted, but same thing.) I want to something something distributive law something and that gets us here:

f(x) = -4 \imath x \sqrt{2}^{\frac{1}{x}} \left\{ e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} -  e^{- \imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}

Also, yeah, that square root of two raised to a power looks weird. I can turn that square root of two into “two to the one-half power”. That gets to this rewrite:

f(x) = -4 \imath x 2^{\frac{1}{2}\cdot \frac{1}{x}} \left\{ e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} -  e^{- \imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}
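Before trusting all that rewriting, it’s worth a numeric spot-check that this form agrees with the original -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{x}} -  \left(1 - \imath\right)^{\frac{1}{x}} \right\} . A Python sketch (mine; Python’s complex powers use the same principal-branch choice of θ discussed above):

```python
import cmath

def original(x):
    # the starting form, with complex numbers raised to the 1/x power
    return -4j * x * ((1 + 1j) ** (1 / x) - (1 - 1j) ** (1 / x))

def rewritten(x):
    # the rewrite via polar coordinates:
    # -4i x 2^(1/(2x)) { e^(i pi/(4x)) - e^(-i pi/(4x)) }
    return (-4j * x * 2 ** (1 / (2 * x))
            * (cmath.exp(1j * cmath.pi / (4 * x)) - cmath.exp(-1j * cmath.pi / (4 * x))))

for x in (2.0, 7.0, 100.0):
    assert cmath.isclose(original(x), rewritten(x))
```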

And then. Those parentheses. e raised to an imaginary number minus e raised to minus-one-times that same imaginary number. This is another one of those magic tricks that mathematicians know because they see it all the time. Part of what we know from Euler’s Formula, the one I waved at back when I was talking about coordinates, is this:

\sin\left(\phi\right) = \frac{e^{\imath \phi} - e^{-\imath \phi}}{2\imath }

That’s good for any real-valued φ. For example, it’s good for the number \frac{\pi}{4}\cdot\frac{1}{x} . And that means we can rewrite that function into something that, finally, actually looks a little bit simpler. It looks like this:

f(x) = -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

And that’s the function whose limit I want to take at ∞. No, really.
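I can't resist peeking, though. A few evaluations in Python, a numeric sketch only and no substitute for actually taking the limit, hint at where this is headed:

```python
import math

def f(x):
    # The simplified real-valued form: -2 x 2^(1/(2x)) sin(pi/(4x))
    return -2 * x * 2 ** (1 / (2 * x)) * math.sin(math.pi / (4 * x))

for x in (10, 100, 10_000, 1_000_000):
    print(x, f(x))
# The values settle toward about -1.5708, which looks a lot more like
# -pi/2 than like pi. Something to keep an eye on.
```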

Deciphering Wronski, Non-Standardly

I ran out of time to do my next bit on Wronski’s attempted definition of π. Next week, all goes well. But I have something to share anyway. The author of the Boxing Pythagoras blog was intrigued by the starting point. And as a fan of studying how people understand infinity and infinitesimals (and how they don’t), this two-century-old example of mixing the enormous and the tiny set his course.


So here’s his essay, trying to work out Wronski’s beautiful weird formula from a non-standard analysis perspective. Non-standard analysis is a field that’s grown in the last fifty years. It’s probably fairly close in spirit to what (I think) Wronski might have been getting at, too. Non-standard analysis works with ideas that seem to match many people’s intuitive feelings about infinitesimals and infinities.

For example, can we speak of a number that’s larger than zero, but smaller than the reciprocal of any positive integer? It’s hard to imagine such a thing. But what if we can show that if we suppose such a number exists, then we can do this logically sound work with it? If you want to say that isn’t enough to show a number exists, then I have to ask how you know imaginary numbers or negative numbers exist.

Standard analysis, you probably guessed, doesn’t do that. It developed over the 19th century, when the logical problems of these kinds of numbers seemed unsolvable. Mostly it works by limits, showing that a thing must be true whenever some quantity is small enough, or large enough. It seems safe to trust that the infinitesimally small is small enough, and the infinitely large is large enough. And it’s not like mathematicians back then were bad at their job. Mathematicians learned a lot about how infinitesimals and infinities work over the late 19th and early 20th centuries. It makes modern work possible.

Anyway, Boxing Pythagoras goes over what a non-standard analysis treatment of the formula suggests. I think it’s accessible even if you haven’t had much non-standard analysis in your background. At least it worked for me and I haven’t had much of the stuff. I think it’s also accessible if you’re good at following logical argument and won’t be thrown by Greek letters as variables. Most of the hard work is really arithmetic with funny letters. I recommend going and seeing if he did get to π.

As I Try To Figure Out What Wronski Thought ‘Pi’ Was

A couple weeks ago I shared a fascinating formula for π. I got it from Carl B Boyer’s The History of Calculus and its Conceptual Development. He got it from Józef Maria Hoëne-Wronski, early 19th-century Polish mathematician. His idea was that an absolute, culturally-independent definition of π would come not from thinking about circles and diameters but rather this formula:

\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} -  \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}

Now, this formula is beautiful, at least to my eyes. It’s also gibberish. At least it’s ungrammatical. Mathematicians don’t like to write stuff like “four times infinity”, at least not as more than a rough draft on the way to a real thought. What does it mean to multiply four by infinity? Is arithmetic even a thing that can be done on infinitely large quantities? Among Wronski’s problems is that mathematicians of his day didn’t have a clear answer to this. We’re a little more advanced in our mathematics now. We’ve had a century and a half of rather sound treatment of infinitely large and infinitely small things. Can we save Wronski’s work?

Start with the easiest thing. I’m offended by those \sqrt{-1} bits. Well, no, I’m more unsettled by them. I would rather have \imath in there. The difference? … More taste than anything sound. I prefer, if I can get away with it, using the square root symbol to mean the positive square root of the thing inside. There is no positive square root of -1, so, pfaugh, away with it. Mere style? All right, well, how do you know whether those \sqrt{-1} terms are meant to be \imath or its additive inverse, -\imath ? How do you know they’re all meant to be the same one? See? … As with all style preferences, it’s impossible to be perfectly consistent. I’m sure there are times I accept a big square root symbol over a negative or a complex-valued quantity. But I’m not forced to have it here so I’d rather not. First step:

\pi = \frac{4\infty}{\imath}\left\{ \left(1 + \imath\right)^{\frac{1}{\infty}} -  \left(1 - \imath\right)^{\frac{1}{\infty}} \right\}

Also dividing by \imath is the same as multiplying by -\imath so the second easy step gives me:

\pi = -4 \imath \infty \left\{ \left(1 + \imath\right)^{\frac{1}{\infty}} -  \left(1 - \imath\right)^{\frac{1}{\infty}} \right\}
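If you don't trust that dividing by i is the same as multiplying by -i, Python's complex arithmetic will vouch for it. A tiny sketch of mine:

```python
# 1/i equals -i, so dividing by i is the same as multiplying by -i.
assert 1 / 1j == -1j

# So 4/i times anything matches -4i times the same thing.
z = 2.5 + 0.75j  # an arbitrary complex number
assert (4 / 1j) * z == (-4j) * z
```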

Now the hard part. All those infinities. I don’t like multiplying by infinity. I don’t like dividing by infinity. I really, really don’t like raising a quantity to the one-over-infinity power. Most mathematicians don’t. We have a tool for dealing with this sort of thing. It’s called a “limit”.

Mathematicians developed the idea of limits over … well, since they started doing mathematics. In the 19th century limits got sound enough that we still trust the idea. Here’s the rough way it works. Suppose we have a function which I’m going to name ‘f’ because I have better things to do than give functions good names. Its domain is the real numbers. Its range is the real numbers. (We can define functions for other domains and ranges, too. Those definitions look a lot like the one here.)

I’m going to use ‘x’ for the independent variable. It’s any number in the domain. I’m going to use ‘a’ for some point. We want to know the limit of the function “at a”. ‘a’ might be in the domain. But — and this is genius — it doesn’t have to be. We can talk sensibly about the limit of a function at some point where the function doesn’t exist. We can say “the limit of f at a is the number L”. I hadn’t introduced ‘L’ into evidence before, but … it’s a number. It has some specific set value. Can’t say which one without knowing what ‘f’ is and what its domain is and what ‘a’ is. But I know this about it.

Pick any error margin that you like. Call it ε because mathematicians do. However small this (positive) number is, there’s at least one neighborhood in the domain of ‘f’ that surrounds ‘a’. Check every point in that neighborhood other than ‘a’. The value of ‘f’ at all those points in that neighborhood other than ‘a’ will be larger than L – ε and smaller than L + ε.

Yeah, pause a bit there. It’s a tricky definition. It’s a nice common place to crash hard in freshman calculus. Also again in Intro to Real Analysis. It’s not just you. Perhaps it’ll help to think of it as a kind of mutual challenge game. Try this.

  1. You draw whatever error bar, as big or as little as you like, around ‘L’.
  2. But I always respond by drawing some strip around ‘a’.
  3. You then pick absolutely any ‘x’ inside my strip, other than ‘a’.
  4. Is f(x) always within the error bar you drew?

Suppose f(x) is. Suppose that you can pick any error bar however tiny, and I can answer with a strip however tiny, and every single ‘x’ inside my strip has an f(x) within your error bar … then, L is the limit of f at a.

Again, yes, tricky. But mathematicians haven’t found a better definition that doesn’t break something mathematicians need.
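The challenge game can even be played mechanically. Here's a Python sketch with a made-up example function, f(x) = (x² − 1)/(x − 1), which isn't defined at a = 1 but has the limit L = 2 there. For this particular f, answering your error bar ε with a strip of half-width δ = ε happens to win; every function needs its own strategy:

```python
def f(x):
    # Not defined at x = 1, but the limit there is 2.
    return (x * x - 1) / (x - 1)

a, L = 1.0, 2.0
for eps in (0.5, 0.01, 0.0001):   # your error bars, shrinking
    delta = eps                    # my answering strip
    # sample many x inside the strip, skipping a itself
    for k in range(1, 1000):
        x = a + delta * k / 1000   # points to the right of a
        assert L - eps < f(x) < L + eps
        x = a - delta * k / 1000   # and to the left
        assert L - eps < f(x) < L + eps
```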

To write “the limit of f at a is L” we use the notation:

\displaystyle \lim_{x \to a} f(x) = L

The ‘lim’ part probably makes perfect sense. And you can see where ‘f’ and ‘a’ have to enter into it. ‘x’ here is a “dummy variable”. It’s the falsework of the mathematical expression. We need some name for the independent variable. It’s clumsy to do without. But it doesn’t matter what the name is. It’ll never appear in the answer. If it does then the work went wrong somewhere.

What I want to do, then, is turn all those appearances of ‘∞’ in Wronski’s expression into limits of something at infinity. And having just said what a limit is I have to do a patch job. In that talk about the limit at ‘a’ I talked about a neighborhood containing ‘a’. What’s it mean to have a neighborhood “containing ∞”?

The answer is exactly what you’d think if you got this question and were eight years old. The “neighborhood of infinity” is “all the big enough numbers”. To make it rigorous, it’s “all the numbers bigger than some finite number that let’s just call N”. So you give me an error bar around ‘L’. I’ll give you back some number ‘N’. Every ‘x’ that’s bigger than ‘N’ has f(x) inside your error bars. And note that I don’t have to say what ‘f(∞)’ is or even commit to the idea that such a thing can be meaningful. I only ever have to think directly about values of ‘f(x)’ where ‘x’ is some real number.
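Here's what that game looks like in code, with a toy example of my own: f(x) = (3x + 1)/x, which has the limit 3 at infinity. Since |f(x) − 3| = 1/x, my answer to your ε is N = 1/ε:

```python
def f(x):
    return (3 * x + 1) / x   # = 3 + 1/x, so the limit at infinity is 3

L = 3.0
for eps in (0.5, 0.01, 0.0001):  # your error bars
    N = 1 / eps                  # my "all the numbers bigger than N"
    for x in (N * 2, N * 10, N * 1000):
        assert L - eps < f(x) < L + eps
```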

So! First, let me rewrite Wronski’s formula as a function, defined on the real numbers. Then I can replace each ∞ with the limit of something at infinity and … oh, wait a minute. There’s three ∞ symbols there. Do I need three limits?

Ugh. Yeah. Probably. This can be all right. We can do multiple limits. This can be well-defined. It can also be a right pain. The challenge-and-response game needs a little modifying to work. You still draw error bars. But I have to draw multiple strips. One for each of the variables. And every combination of values inside all those strips has to give an ‘f’ that’s inside your error bars. There’s room for great mischief. You can arrange combinations of variables that look likely to break ‘f’ outside the error bars.

So. Three independent variables, all taking a limit at ∞? That’s not guaranteed to be trouble, but I’d expect trouble. At least I’d expect something to keep the limit from existing. That is, we could find there’s no number ‘L’ so that this drawing-neighborhoods thing works for all three variables at once.
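Here's the flavor of that mischief with a simpler, two-variable example of my own, g(x, y) = x/y. March both variables out to infinity along different paths and you get different answers, so no single ‘L’ can work:

```python
def g(x, y):
    return x / y

# March x and y out to infinity together ... but along different paths.
for t in (10.0, 1000.0, 100000.0):
    assert g(t, t) == 1.0          # along the path y = x
    assert g(t, 2 * t) == 0.5      # along the path y = 2x
    # and along y = x*x the values shrink toward zero:
    assert g(t, t * t) == 1.0 / t
```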

Let’s try. One of the ∞ will be a limit of a variable named ‘x’. One of them a variable named ‘y’. One of them a variable named ‘z’. Then:

f(x, y, z) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{y}} -  \left(1 - \imath\right)^{\frac{1}{z}} \right\}

Without doing the work, my hunch is: this is utter madness. I expect it’s probably possible to make this function take on many wildly different values by the judicious choice of ‘x’, ‘y’, and ‘z’. Particularly ‘y’ and ‘z’. You maybe see it already. If you don’t, you maybe see it now that I’ve said you maybe see it. If you don’t, I’ll get there, but not in this essay. But let’s suppose that it’s possible to make f(x, y, z) take on wildly different values like I’m getting at. This implies that there’s not any limit ‘L’, and therefore Wronski’s work is just wrong.
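A numeric sketch (not the careful treatment, which I'm putting off) shows the trouble. Send all three variables out in step and the values stay bounded; let ‘x’ race ahead of ‘y’ and ‘z’ and they grow without bound, which is the sort of thing that kills a limit:

```python
def f(x, y, z):
    # Wronski's expression with each infinity given its own variable
    return -4j * x * ((1 + 1j) ** (1 / y) - (1 - 1j) ** (1 / z))

t = 1000.0
together = f(t, t, t)      # x, y, z marching to infinity in step
x_ahead = f(t * t, t, t)   # x at a million while y and z are a thousand

# The braced part shrinks like 1/t, so an x of t*t instead of t makes
# the product roughly a thousand times bigger in magnitude.
assert abs(x_ahead) > 100 * abs(together)
```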

Thing is, Wronski wouldn’t have thought that. Deep down, I am certain, he thought the three appearances of ∞ were the same “value”. And that to translate him fairly we’d use the same name for all three appearances. So I am going to do that. I shall use ‘x’ as my variable name, and replace all three appearances of ∞ with the same variable and a common limit. So this gives me the single function:

f(x) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{x}} -  \left(1 - \imath\right)^{\frac{1}{x}} \right\}

And then I need to take the limit of this at ∞. If Wronski is right, and if I’ve translated him fairly, it’s going to be π. Or something easy to get π from.

I hope to get there next week.

What Only One Person Ever Has Thought ‘Pi’ Means, And Who That Was

I’ve been reading Carl B Boyer’s The History of Calculus and its Conceptual Development. It’s been slow going, because reading about how calculus’s ideas developed is hard. The ideas underlying it are subtle to start with. And the ideas have to be discussed using vague, unclear definitions. That’s not because dumb people were making arguments. It’s because these were smart people studying ideas at the limits of what we understood. When we got clear definitions we had the fundamentals of calculus understood. (By our modern standards. The future will likely see us as accepting strange ambiguities.) And I still think Boyer whiffs the discussion of Zeno’s Paradoxes in a way that mathematics and science-types usually do. (The trouble isn’t imagining that infinite series can converge. The trouble is that things are either infinitely divisible or they’re not. Either way implies things that seem false.)

Anyway. Boyer got to a part about the early 19th century. This was when mathematicians were discovering that infinities and infinitesimals are amazing tools. Also that mathematicians should maybe learn whether those tools follow any rules. Because you can just plug symbols into formulas, grind out what it looks like they might mean, and get answers. Sometimes this works great. Grind through the formulas for solving cubic polynomials as though square roots of negative numbers make sense. You get good results. Later, we worked out a coherent scheme of “complex-valued numbers” that justified it all. We can get lucky with infinities and infinitesimals, sometimes.

And this brought Boyer to an argument made by Józef Maria Hoëne-Wronski. He was a Polish mathematician whose fantastic ambition in … everything … didn’t turn out many useful results. Algebra, the Longitude Problem, building a rival to the railroad, even the Kosciuszko Uprising, none quite panned out. (And that’s not quite his name. The ‘n’ in ‘Wronski’ should have an acute mark over it. But WordPress’s HTML engine doesn’t want to imagine such a thing exists. Nor do many typesetters writing calculus or differential equations books, Boyer’s included.)

But anyone who studies differential equations knows his name, for a concept called the Wronskian. It’s a matrix determinant that anyone who studies differential equations hopes they won’t ever have to do after learning it. And, says Boyer, Wronski had this notion for an “absolute meaning of the number π”. (By “absolute” Wronski means one not drawn from cultural factors like the weird human interest in circle perimeters and diameters. Compare it to the way we speak of “absolute temperature”, where the zero means something not particular to western European weather.)

\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} -  \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}


I will admit I’m not fond of “real” alternate definitions of π. They seem to me mostly to signal how clever the definition-originator is. The only one I like at all defines π as the smallest positive root of the simple-harmonic-motion differential equation. (With the right starting conditions and all that.) And I’m not sure that isn’t “circumference over diameter” in a hidden form.

And yes, that definition is a mess of early-19th-century wild, untamed casualness in the use of symbols. But I admire the crazypants beauty of it. If I ever get a couple free hours I should rework it into something grammatical. And then see if, turned into something tolerable, Wronski’s idea is something even true.

Boyer allows that “perhaps” because of the strange notation and “bizarre use of the symbol ∞” Wronski didn’t make much headway on this point. I can’t fault people for looking at that and refusing to go further. But isn’t it enchanting as it is?

Reading the Comics, September 8, 2017: First Split Week Edition, Part 1

It was looking like another slow week for something so early in the (United States) school year. Then Comic Strip Master Command sent a flood of strips in for Friday and Saturday, so I’m splitting the load. It’s not a heavy one, as back-to-school jokes are on people’s minds. But here goes.

Marcus Hamilton and Scott Ketcham’s Dennis the Menace for the 3rd of September, 2017 is a fair strip for this early in the school year. It’s an old joke about making subtraction understandable.

Dennis's Mom: 'How was school today?' Dennis: 'Not great. We just learned how to add and they're expecting us to subtract!' Mom: 'Let me see if I can help. If you have five pieces of candy, and you give Margaret three pieces of candy, what do you have?' Dennis: 'TEMPORARY INSANITY!!'
Marcus Hamilton and Scott Ketcham’s Dennis the Menace for the 3rd of September, 2017. The joke pretty well explains itself, but I would like to point out the great use of color for highlighting here. The different shades are done in a way very consistent with the mid-century stylings of the characters, but are subtler than could have been done when Hank Ketcham started the comic in the 1950s. For that matter, it’s subtler than could have been printed until quite recently in the newspaper industry. It’s worth noticing.

Mark Anderson’s Andertoons for the 3rd is the Mark Anderson installment for this week, so I’m glad to have that. It’s a good old classic cranky-students setup and it reminds me that “unlike fractions” is a thing. I’m not quibbling with the term, especially not after the whole long-division mess a couple weeks back. I just hadn’t thought in a long while about how different denominators do make adding fractions harder.

Jeff Harris’s Shortcuts informational feature for the 3rd I couldn’t remember why I put on the list of mathematically-themed comic strips. The reason’s in there. There’s a Pi Joke. But my interest was more in learning that strawberries are a hybrid created in France from a North American and a Chilean breed. Isn’t that intriguing stuff?

Mom-type showing a flashcard, '5 x 7 = ?', to two kids. Boy: 'Isn't there an app for this sort of thing?'
Bill Abbott’s Specktickles for the 8th of September, 2017. I confess that I don’t know whether this comic is running in any newspapers. But I could find it easily enough so that’s why I read it and look for panels that touch on mathematics topics.

Bill Abbott’s Specktickles for the 8th uses arithmetic — multiplication flash cards — as emblem of stuff to study. About all I can say for that.

Reading the Comics, August 17, 2017: Professor Edition

To close out last week’s mathematically-themed comic strips … eh. There’s only a couple of them. One has a professor-y type and another has Albert Einstein. That’s enough for my subject line.

Joe Martin’s Mr Boffo for the 15th I’m not sure should be here. I think it’s a mathematics joke. That the professor’s shown with a pie chart suggests some kind of statistics, at least, and maybe the symbols are mathematical in focus. I don’t know. What the heck. I also don’t know how to link to these comics in a way that gives attention to the comic strip artist. I like to link to the site from which I got the comic, but the Mr Boffo site is … let’s call it home-brewed. I can’t figure how to make it link to a particular archive page. But I feel bad enough losing Jumble. I don’t want to lose Joe Martin’s comics on top of that.

Professor, by a pie chart, reading a letter: 'Dear Professor: We are excited about your new theory. Would you build us a prototype? And how much would you charge for each slice? - Sara Lee.'
Joe Martin’s Mr Boffo for the 15th of August, 2017. I am curious what sort of breakthrough in pie-slicing would be worth the Sara Lee company’s attention. Occasionally you’ll see videos of someone who cuts a pie (or cake or whatever) into equal-area slices using some exotic curve, but that’s to show off that something can be done, not that something is practical.

Charlie Podrebarac’s meat-and-Elvis-enthusiast comic Cow Town for the 15th is captioned “Elvis Disproves Relativity”. Of course it hasn’t anything to do with experimental results or even a good philosophical counterexample. It’s all about the famous equation. Have to expect that. Elvis Presley having an insight that challenges our understanding of why relativity should work is the stuff for sketch comedy, not single-panel daily comics.

Paul Trap’s Thatababy for the 15th has Thatadad win his fight with Alexa by using the old Star Trek Pi Gambit. To give a computer an unending task any number would work. Even the decimal digits of, say, five would do. They’d just be boring if written out in full, which is why we don’t. But irrational numbers at least give us a nice variety of digits. We don’t know that Pi is normal, but it probably is. So there should be a never-ending variety of what Alexa reels out here.

By the end of the strip Alexa has only got to the 55th digit of Pi after the decimal point. For this I use The Pi-Search Page, rather than working it out by myself. That’s what follows the digits in the second panel. So the comic isn’t skipping any time.
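If you'd rather not trust The Pi-Search Page either, the digits can be ground out from scratch. Here's a Python sketch of mine using Machin's formula, π = 16 arctan(1/5) − 4 arctan(1/239), in whole-number arithmetic. (The last few digits are corrupted by truncation; the 55th is well inside the safe zone.)

```python
def arctan_recip(x, scale):
    # arctan(1/x), scaled up by `scale` and truncated to an integer,
    # from the series 1/x - 1/(3 x^3) + 1/(5 x^5) - ...
    term = scale // x
    total = term
    n, sign = 1, 1
    while term:
        term //= x * x
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

# Machin's formula: pi = 16 arctan(1/5) - 4 arctan(1/239)
scale = 10 ** 70
pi_scaled = 16 * arctan_recip(5, scale) - 4 * arctan_recip(239, scale)
digits = str(pi_scaled)   # '31415926535...'; digits[1:] follow the decimal point
print(digits[55])         # the 55th digit after the decimal point
```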

Gene Mora’s Graffiti for the 16th, if you count this as a comic strip, includes a pun, if you count this as a pun. Make of it what you like.

Mark Anderson’s Andertoons for the 17th is a student-misunderstanding-things problem. That’s a clumsy way to describe the joke. I should look for a punchier description, since there are a lot of mathematics comics that amount to the student getting a silly wrong idea of things. Well, I learned greater-than and less-than with alligators that eat the smaller number first. Though they turned into fish eating the smaller number first because who wants to ask a second-grade teacher to draw alligators all the time? Cartoon goldfish are so much easier.

Reading the Comics, August 12, 2017: August 10 and 12 Edition

The other half of last week’s comic strips didn’t have any prominent pets in them. The six of them appeared on two days, though, so that’s as good as a particular theme. There’s also some π talk, but there’s enough of that I don’t want to overuse Pi Day as an edition name.

Mark Anderson’s Andertoons for the 10th is a classroom joke. It’s built on a common problem in teaching by examples. The student can make the wrong generalization. I like the joke. There’s probably no particular reason seven was used as the example number to have zero interact with. Maybe it just sounded funnier than the other numbers under ten that might be used.

Mike Baldwin’s Cornered for the 10th uses a chalkboard of symbols to imply deep thinking. The symbols on the board look to me like they’re drawn from some real mathematics or physics source. There’s force equations appropriate for gravity or electric interactions. I can’t explain the whole board, but that’s not essential to work out anyway.

Marty Links’s Emmy Lou for the 17th of March, 1976 was rerun the 10th of August. It name-drops the mathematics teacher as the scariest of the set. Fortunately, Emmy Lou went to her classes in a day before Rate My Professor was a thing, so her teacher doesn’t have to hear about this.

Scott Hilburn’s The Argyle Sweater for the 12th is a timely reminder that Scott Hilburn has way more Pi Day jokes than we have Pi Days for them. Also he has octopus jokes. It’s up to you to figure out whether the etymology of the caption makes sense.

John Zakour and Scott Roberts’s Working Daze for the 12th presents the “accountant can’t do arithmetic” joke. People who ought to be good at arithmetic being lousy at figuring tips is an ancient joke. I’m a touch surprised that Christopher Miller’s American Cornball: A Laffopedic Guide to the Formerly Funny doesn’t have an entry for tips (or mathematics). But that might reflect Miller’s mission to catalogue jokes that have fallen out of the popular lexicon, not merely that are old.

Michael Cavna’s Warped for the 12th is also a Pi Day joke that couldn’t wait. It’s cute and should fit on any mathematics teacher’s office door.

Terrible and Less-Terrible Pi

As the 14th of March comes around it’s the time for mathematics bloggers to put up whatever they can about π. I will stir from my traditional crankiness about Pi Day (look, we don’t write days of the year as 3.14 unless we’re doing fake stardates) to bring back my two most π-relevant posts:

  • Calculating Pi Terribly is about a probability-based way to calculate just what π’s digits are. It’s a lousy way to do it, but it works, technically.
  • Calculating Pi Less Terribly is about an analysis-based way to calculate just what π’s digits are. It’s a less bad way to do it, although we actually use better-yet ways to work out the digits of a number like this.
  • And what the heck, Normal Numbers, from an A To Z sequence. We do not actually know that π is a normal number. It’s the way I would bet, though, and here’s something about why I’d bet that way.

Reading the Comics, March 4, 2017: Frazz, Christmas Trees, and Weddings Edition

It was another of those curious weeks when Comic Strip Master Command didn’t send quite enough comics my way. Among those they did send were a couple of strips in pairs. I can work with that.

Samson’s Dark Side Of The Horse for the 26th is the Roman Numerals joke for this essay. I apologize to Horace for being so late in writing about Roman Numerals but I did have to wait for Cecil Adams to publish first.

In Jef Mallett’s Frazz for the 26th Caulfield ponders what we know about Pythagoras. It’s hard to say much about the historical figure: he built a cult that sounds outright daft around himself. But it’s hard to say how much of their craziness was actually their craziness, how much was just that any ancient society had a lot of what seems nutty to us, and how much was jokes (or deliberate slander) directed against some weirdos. What does seem certain is that Pythagoras’s followers attributed many of their discoveries to him. And what’s certain is that the Pythagorean Theorem was known, at least as a thing that could be used to measure things, long before Pythagoras was on the scene. I’m not sure if it was proved as a theorem or whether it was just known that making triangles with the right relative lengths meant you had a right triangle.

Greg Evans’s Luann Againn for the 28th of February — reprinting the strip from the same day in 1989 — uses a bit of arithmetic as generic homework. It’s an interesting change of pace that the mathematics homework is what keeps one from sleep. I don’t blame Luann or Puddles for not being very interested in this, though. Those sorts of complicated-fraction-manipulation problems, at least when I was in middle school, were always slogs of shuffling stuff around. They rarely got to anything we’d like to know.

Jef Mallett’s Frazz for the 1st of March is one of those little revelations that statistics can give one. Myself, I was always haunted by the line in Carl Sagan’s Cosmos about how, in the future, with the Sun ageing and (presumably) swelling in size and heat, the Earth would see one last perfect day. That there would most likely be quite fine days after that didn’t matter, and that different people might disagree on what made a day perfect didn’t matter. Setting out the idea of a “perfect day” and realizing there would someday be a last gave me chills. It still does.

Richard Thompson’s Poor Richard’s Almanac for the 1st and the 2nd of March have appeared here before. But I like the strip so I’ll reuse them too. They’re from the strip’s guide to types of Christmas trees. The Cubist Fur is described as “so asymmetrical it no longer inhabits Euclidean space”. Properly neither do we, but we can’t tell by eye the difference between our space and a Euclidean space. “Non-Euclidean” has picked up connotations of being so bizarre or even horrifying that we can’t hope to understand it. In practice, it means we have to go a little slower and think about, like, what would it look like if we drew a triangle on a ball instead of a sheet of paper. The Platonic Fir, in the 2nd of March strip, looks like a geometry diagram and I doubt that’s coincidental. It’s very hard to avoid thoughts of Platonic Ideals when one does any mathematics with a diagram. We know our drawings aren’t very good triangles or squares or circles especially. And three-dimensional shapes are worse, as see every ellipsoid ever done on a chalkboard. But we know what we mean by them. And then we can get into a good argument about what we mean by saying “this mathematical construct exists”.

Mark Litzler’s Joe Vanilla for the 3rd uses a chalkboard full of mathematics to represent the deep thinking behind a silly little thing. I can’t make any of the symbols out to mean anything specific, but I do like the way it looks. It’s quite well-done in looking like the shorthand that, especially, physicists would use while roughing out a problem. That there are subscripts with forms like “12” and “22” with a bar over them reinforces that. I would, knowing nothing else, expect this to represent some interaction between particles 1 and 2, and 2 with itself, and that the bar means some kind of complement. This doesn’t mean much to me, but with luck, it means enough to the scientist working it out that it could be turned into a coherent paper.

'Has Carl given you any reason not to trust him?' 'No, not yet. But he might.' 'Fi ... you seek 100% certainty in people, but that doesn't exist. In the end,' and Dethany is drawn as her face on a pi symbol, 'we're *all* irrational numbers.'
Bill Holbrook’s On The Fastrack for the 3rd of March, 2017. Fi’s dress isn’t one of those … kinds with the complicated pattern of holes in it. She got it torn while trying to escape the wedding and falling into the basement.

Bill Holbrook’s On The Fastrack is this week about the wedding of the accounting-minded Fi. And she’s having last-minute doubts, which is why the strip of the 3rd brings in irrational and anthropomorphized numerals. π gets called in to serve as emblematic of the irrational numbers. Can’t fault that. I think the only more famously irrational number is the square root of two, and π anthropomorphizes more easily. Well, you can draw an established character’s face onto π. The square root of 2 is, necessarily, at least two disconnected symbols and you don’t want to raise distracting questions about whether the root sign or the 2 gets the face.

That said, it’s a lot easier to prove that the square root of 2 is irrational. Even the Pythagoreans knew it, and a bright child can follow the proof. A really bright child could create a proof of it. To prove that π is irrational is not at all easy; it took mathematicians until the late 18th century, when Johann Lambert did it. And the best proof I know of the fact does it by a roundabout method. We prove that the tangent of any nonzero rational number must be irrational. The tangent of π/4 is 1, which is as rational as numbers get; so π/4 can’t be a nonzero rational number, and therefore π must be irrational. I know you’ll all trust me on that argument, but I wouldn’t want to sell it to a bright child.

'Fi ... humans are complicated. Like the irrational number pi, we can go on forever. You never get to the bottom of us! But right now, upstairs, there are two variables who *want* you in their lives. Assign values to them.' Carl, Fi's fiancé, is drawn as his face with a y; his kid as a face on an x.
Bill Holbrook’s On The Fastrack for the 4th of March, 2017. I feel bad that I completely forgot Carl had a kid and that the face on the x doesn’t help me remember anything.

Holbrook continues the thread on the 4th, extending the anthropomorphic-mathematics stuff to call people variables. There’s ways that this is fair. We use a variable for a number whose value we don’t know or don’t care about. A “random variable” is one that could take on any of a set of values. We don’t know which one it does, in any particular case. But we do know — or we can find out — how likely each of the possible values is. We can use this to understand the behavior of systems even if we never actually know what any one part of the system does. You see how I’m going to defend this metaphor, then, especially if we allow that what people are likely or unlikely to do will depend on context and evolve in time.

Reading the Comics, February 15, 2017: SMBC Cuts In Line Edition

It’s another busy enough week for mathematically-themed comic strips that I’m dividing the harvest in two. There’s a natural cutting point since there weren’t any comics I could call relevant for the 15th. But I’m moving a Saturday Morning Breakfast Cereal, of course, from the 16th into this pile. That’s because there’s another Saturday Morning Breakfast Cereal from after the 16th that I might include. I’m still deciding if it’s close enough to on topic. We’ll see.

John Graziano’s Ripley’s Believe It Or Not for the 12th mentions the “Futurama Theorem”. The trivia is true, in that writer Ken Keeler did create a theorem for a body-swap plot he had going. The premise was that any two bodies could swap minds at most one time. So, after a couple people had swapped bodies, was there any way to get everyone back to their correct original body? There is, if you bring two more people in to the body-swapping party. It’s clever.

From reading comment threads about the episode I conclude people are really awestruck by the idea of creating a theorem for a TV show episode. The thing is that “a theorem” isn’t necessarily a mind-boggling piece of work. It’s just the name mathematicians give when we have a clearly-defined logical problem and its solution. A theorem and its proof can be a mind-wrenching bit of work, like Fermat’s Last Theorem or the Four-Color Map Theorem are. Or it can be on the verge of obvious. Keeler’s proof isn’t on the obvious side of things. But it is the reasoning one would have to do to solve the body-swap problem the episode posited without cheating. Logic and good story-telling are, as often, good partners.

Teresa Burritt’s Frog Applause is a Dadaist nonsense strip. But for the 13th it hit across some legitimate words, about a 14 percent false-positive rate. This is something run across in hypothesis testing. The hypothesis is something like “is whatever we’re measuring so much above (or so far below) the average that it’s not plausibly just luck?” A false positive is what it sounds like: our analysis said yes, this can’t just be luck, and it turns out that it was. This turns up most notoriously in medical screenings, when we want to know if there’s reason to suspect a health risk, and in forensic analysis, when we want to know if a particular person can be shown to have been a particular place at a particular time. A 14 percent false positive rate doesn’t sound very good — except.

Suppose we are looking for a rare condition. Say, something one person out of 500 will have. A test that’s 99 percent accurate will turn up positives for the one person who has got it and for five of the people who haven’t. It’s not that the test is bad; it’s just there are so many negatives to work through. If you can screen out a good number of the negatives, though, the people who haven’t got the condition, then the good test will turn up fewer false positives. So suppose you have a cheap or easy or quick test that doesn’t miss any true positives but does have a 14 percent false positive rate. That would screen out some 429 of the people who haven’t got whatever we’re testing for, leaving only 71 people (the one true positive and about 70 false ones) who need the 99-percent-accurate test. This can make for a more effective use of resources.
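The arithmetic above can be checked in a few lines. The inputs (one case in 500, a 99-percent-accurate test, a 14 percent false positive rate) are the ones from this paragraph; everything else is just counting.

```python
# Hypothetical screening numbers from the discussion above.
population = 500
true_positives = 1                      # one person in 500 has the condition
healthy = population - true_positives   # 499 people do not

# The 99-percent-accurate test alone: it flags the true case,
# plus about 1 percent of the healthy people.
false_alarms_without_screen = round(0.01 * healthy)

# A cheap screen with a 14 percent false-positive rate and no misses:
# it passes the true case plus 14 percent of the healthy people.
passed_screen = true_positives + round(0.14 * healthy)
screened_out = population - passed_screen

print(false_alarms_without_screen)  # 5
print(passed_screen)                # 71 people still need the good test
print(screened_out)                 # 429 people spared it
```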

Gary Wise and Lance Aldrich’s Real Life Adventures for the 13th is an algebra-in-real-life joke and I can’t make something deeper out of that.

Mike Shiell’s The Wandering Melon for the 13th is a spot of wordplay built around statisticians. Good for taping to the mathematics teacher’s walls.

Eric the Circle for the 14th, this one by “zapaway”, is another bit of wordplay. Tans and tangents.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 16th identifies, aptly, a difference between scientists and science fans. Weinersmith is right that loving trivia is a hallmark of a fan. Expertise — in any field, not just science — is more about recognizing patterns of problems and concepts, ways to bring approaches from one field into another, this sort of thing. And the digits of π are great examples of trivia. There’s no need for anyone to know the 1,681st digit of π. There are few calculations you could ever do that would need more than three dozen digits. But if memorizing digits seems like fun then π is a great set to learn. e is the only other number at all compelling.

The thing is, it’s very hard to become an expert in something without first being a fan of it. It’s possible, but if a field doesn’t delight you why would you put that much work into it? So even though the scientist might have long since gotten past caring about the digits of π, it’s awfully hard to get something memorized in the flush of fandom out of your head.

I know you’re curious. I can only remember π out to 3.14158926535787962. I might have gotten farther if I’d tried, but I actually got a digit wrong, inserting a ‘3’ before that last ’62’, and the effort to get that mistake out of my head obliterated any desire to waste more time memorizing digits. For e I can only give you 2.718281828. But there’s almost no hope I’d know that far if it weren’t for how e happens to repeat that 1828 stanza right away.

The End 2016 Mathematics A To Z: Normal Numbers

Today’s A To Z term is another of gaurish’s requests. It’s also a fun one so I’m glad to have reason to write about it.

Normal Numbers

A normal number is any real number you never heard of.

Yeah, that’s not what we say a normal number is. But that’s what a normal number is. If we could imagine the real numbers to be a stream, and that we could reach into it and pluck out a water-drop that was a single number, we know what we would likely pick. It would be an irrational number. It would be a transcendental number. And it would be a normal number.

We know normal numbers — or we would, anyway — by looking at their representation in digits. For example, π is a number that starts out 3.14159265358979323846264338327950288419 and so on forever. Look at those digits. Some of them are 1’s. How many? How many are 2’s? How many are 3’s? Are there more than you would expect? Are there fewer? What would you expect?

Expect. That’s the key. What should we expect in the digits of any number? The numbers we work with don’t offer much help. A whole number, like 2? That has a decimal representation of a single ‘2’ and infinitely many zeroes past the decimal point. Two and a half? A single ‘2’, a single ‘5’, and then infinitely many zeroes past the decimal point. One-seventh? Well, we get infinitely many 1’s, 4’s, 2’s, 8’s, 5’s, and 7’s. Never any 3’s, nor any 0’s, nor 6’s or 9’s. This doesn’t tell us anything about how often we would expect ‘8’ to appear in the digits of π.

In a normal number we get all the decimal digits. And we get each of them about one-tenth of the time. If all we had was a chart of how often digits turn up we couldn’t tell the summary of one normal number from the summary of any other normal number. Nor could we tell either from the summary of a perfectly uniform randomly drawn number.

It goes beyond single digits, though. Look at pairs of digits. How often does ’14’ turn up in the digits of a normal number? … Well, something like once for every hundred pairs of digits you draw from the number. Look at triplets of digits. ‘141’ should turn up about once in every thousand sets of three digits. ‘1415’ should turn up about once in every ten thousand sets of four digits. Any finite string of digits should turn up, and exactly as often as any other finite string of the same length.

That’s in the full representation. If you look at all the infinitely many digits the normal number has to offer. If all you have is a slice then some digits are going to be more common and some less common. That’s similar to how if you fairly toss a coin (say) forty times, there’s a good chance you’ll get tails something other than exactly twenty times. Look at the first 30 or so digits of π and there’s not a zero to be found. But as you survey more digits you get closer and closer to the expected average frequency. It’s the same way coin flips get closer and closer to 50 percent tails. Zero is a rarity in the first 30 digits. It’s about one-tenth of the first 3500 digits.
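This convergence is easy to watch empirically. The sketch below generates digits of π with Gibbons’ unbounded spigot algorithm — a standard method, nothing particular to this essay — and tallies how often zero shows up:

```python
from collections import Counter

def pi_digits():
    """Yield the decimal digits of pi: 3, 1, 4, 1, 5, 9, ...
    (Gibbons' unbounded spigot algorithm, exact integer arithmetic.)"""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4*q + r - t < n*t:
            yield n
            q, r, t, k, n, l = 10*q, 10*(r - n*t), t, k, (10*(3*q + r))//t - 10*n, l
        else:
            q, r, t, k, n, l = q*k, (2*q + r)*l, t*l, k + 1, (q*(7*k + 2) + r*l)//(t*l), l + 2

digits = pi_digits()
next(digits)                                    # skip the leading 3
first_3500 = [next(digits) for _ in range(3500)]

print(first_3500[:30].count(0))                 # no zeroes among the first 30 digits
zero_rate = Counter(first_3500)[0] / 3500
print(round(zero_rate, 3))                      # close to one-tenth
```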

The digits of a specific number are not random, not if we know what the number is. But we can be presented with a subset of its digits and have no good way of guessing what the next digit might be. That is getting into the same strange territory in which we can speak about the “chance” of a month having a Friday the 13th even though the appearances of Fridays the 13th have absolutely no randomness to them.

This has staggering implications. Some of them inspire an argument in the science fiction Usenet newsgroup rec.arts.sf.written every two years or so. Probably it does so in other venues; Usenet is just my first home and love for this. As a minor point in Carl Sagan’s novel Contact, possibly-imaginary aliens reveal there’s a pattern hidden in the digits of π. (It’s not in the movie version, which is a shame. But to include it would require people watching a computer. So that could not make for a good movie scene, we now know.) Look far enough into π, says the book, and there’s suddenly a string of digits that are nearly all zeroes, interrupted with a few ones. Arrange the zeroes and ones into a rectangle and it draws a pixel-art circle. And the aliens don’t know how something astounding like that could be.

Nonsense, responds the kind of science fiction reader who likes to identify what the nonsense in science fiction stories is. (Spoiler: it’s the science. In this case, the mathematics too.) In a normal number every finite string of digits appears. It would be truly astounding if there weren’t an encoded circle in the digits of π. Indeed, it would be impossible for there not to be infinitely many circles of every possible size encoded in every possible way in the digits of π. If the aliens are amazed by that they would be amazed to find that every triangle has three corners.

I’m a more forgiving reader. And I’ll give Sagan this amazingness. I have two reasons. The first reason is on the grounds of discoverability. Yes, the digits of a normal number will have in them every possible finite “message” encoded every possible way. (I put the quotes around “message” because it feels like an abuse to call something a message if it has no sender. But it’s hard not to see as a “message” something that seems to mean something, since we live in an era that accepts the Death of the Author as a concept at least.) Pick your classic cypher ‘1 = A, 2 = B, 3 = C’ and so on, and take any normal number. If you look far enough into its digits you will find every message you might ever wish to send, every book you could read. Every normal number holds Jorge Luis Borges’s Library of Babel, and almost every real number is a normal number.

But. The key there is if you look far enough. Look above; the first 30 or so digits of π have no 0’s, when you would expect three of them. There’s no 22’s, even though that number has as much right to appear as does 15, which does get in. And we will only ever know finitely many digits of π. It may be staggeringly many digits, sure. It already is. But it will never be enough to be confident that a circle, or any other long enough “message”, must appear. It is staggering that a detectable “message” that long should be in the tiny slice of digits that we might ever get to see.

And it’s harder than that. Sagan’s book says the circle appears in whatever base π gets represented in. So not only does the aliens’ circle pop up in base ten, but also in base two and base sixteen and all the other, even less important bases. The circle happening to appear in the accessible digits of π might be an imaginable coincidence in some base. There’s infinitely many bases, one of them has to be lucky, right? But to appear in the accessible digits of π in every one of them? That’s staggeringly impossible. I say the aliens are correct to be amazed.

Now to my second reason to side with the book. It’s true that any normal number will have any “message” contained in it. So who says that π is a normal number?

We think it is. It looks like a normal number. We have figured out many, many digits of π and they’re distributed the way we would expect from a normal number. And we know that nearly all real numbers are normal numbers. If I had to put money on it I would bet π is normal. It’s the clearly safe bet. But nobody has ever proved that it is, nor that it isn’t. Whether π is normal or not is a fit subject for conjecture. A writer of science fiction may suppose anything she likes about its normality without current knowledge saying she’s wrong.

It’s easy to imagine numbers that aren’t normal. Rational numbers aren’t, for example. If you followed my instructions and made your own transcendental number then you made a non-normal number. It’s possible that π should be non-normal. The first thirty million digits or so look good, though, if you think normal is good. But what’s thirty million against infinitely many possible counterexamples? For all we know, there comes a time when π runs out of interesting-looking digits and turns into an unpredictable little fluttering between 6 and 8.

It’s hard to prove that any numbers we’d like to know about are normal. We don’t know about π. We don’t know about e, the base of the natural logarithm. We don’t know about the natural logarithm of 2. There is a proof that the square root of two (and other non-square whole numbers, like 3 or 5) is normal in base two. But my understanding is it’s a nonstandard approach that isn’t quite satisfactory to experts in the field. I’m not expert so I can’t say why it isn’t quite satisfactory. If the proof’s authors or grad students wish to quarrel with my characterization I’m happy to give space for their rebuttal.

It’s much the way transcendental numbers were in the 19th century. We understand there to be this class of numbers that comprises nearly every number. We just don’t have many examples. And we’re still short on examples of transcendental numbers, come to that. So maybe we’re not that badly off with normal numbers.

We can construct normal numbers. For example, there’s the Champernowne Constant. It’s the number you would make if you wanted to show you could make a normal number. It’s 0.12345678910111213141516171819202122232425 and I bet you can imagine how that develops from that point. (David Gawen Champernowne proved it was normal, which is the hard part.) There’s other ways to build normal numbers too, if you like. But those numbers aren’t of any interest except that we know them to be normal.
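How the Champernowne Constant develops is short enough to sketch in code: just concatenate the counting numbers and read off digits.

```python
from itertools import count, islice

def champernowne_digits():
    """Yield the digits after the decimal point of the Champernowne Constant:
    the counting numbers 1, 2, 3, ... written out one after another."""
    for n in count(1):
        for ch in str(n):
            yield int(ch)

first_forty = ''.join(str(d) for d in islice(champernowne_digits(), 40))
print('0.' + first_forty)   # 0.1234567891011121314151617181920212223242
```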

Mere normality is tied to a base. A number might be normal in base ten (the way normal people write numbers) but not in base two or base sixteen (which computers and people working on computers use). It might be normal in base twelve, used by nobody except mathematics popularizers of the 1960s explaining bases, but not normal in base ten. There can be numbers normal in every base. They’re called “absolutely normal”. Nearly all real numbers are absolutely normal. Wacław Sierpiński constructed the first known absolutely normal number in 1917. If you got in on the fractals boom of the 80s and 90s you know his name, although without the Polish spelling. He did stuff with gaskets and curves and carpets you wouldn’t believe. I’ve never seen Sierpiński’s construction of an absolutely normal number. From my references I’m not sure if we know how to construct any other absolutely normal numbers.

So that is the strange state of things. Nearly every real number is normal. Nearly every number is absolutely normal. We know a couple normal numbers. We know at least one absolutely normal number. But we haven’t (to my knowledge) proved any number that’s otherwise interesting is also a normal number. This is why I say: a normal number is any real number you never heard of.

Reading the Comics, October 19, 2016: An Extra Day Edition

I didn’t make noise about it, but last Sunday’s mathematics comic strip roundup was short one day. I was away from home and normal computer stuff Saturday. So I posted without that day’s strips under review. There was just the one, anyway.

Also I want to remind folks I’m doing another Mathematics A To Z, and taking requests for words to explain. There are many appealing letters still unclaimed, including ‘A’, ‘T’, and ‘O’. Please put requests in over on that page, because it’s easier for me to keep track of what’s been claimed that way.

Matt Janz’s Out of the Gene Pool rerun for the 15th missed last week’s cut. It does mention the Law of Cosines, which is what the Pythagorean Theorem looks like if you don’t have a right triangle. You still have to have a triangle. Bobby-Sue recites the formula correctly, if you know the notation. The formula’s c^2 = a^2 + b^2 - 2 a b \cos\left(C\right) . Here ‘a’ and ‘b’ and ‘c’ are the lengths of the sides of the triangle. ‘C’, the capital letter, is the size of the angle opposite the side with length ‘c’. That’s a common notation. ‘A’ would be the size of the angle opposite the side with length ‘a’. ‘B’ is the size of the angle opposite the side with length ‘b’. The Law of Cosines is a generalization of the Pythagorean Theorem. It’s a result that tells us something like the original theorem but for cases the original theorem can’t cover. And if it happens to be a right triangle the Law of Cosines gives us back the original Pythagorean Theorem. In a right triangle C is the size of a right angle, and the cosine of that is 0.
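In code, with hypothetical side lengths, the collapse to the Pythagorean Theorem looks like this; the variable names follow the notation above.

```python
import math

def law_of_cosines(a, b, C):
    """Length of the side opposite angle C (in radians), given sides a and b."""
    return math.sqrt(a**2 + b**2 - 2*a*b*math.cos(C))

# With a right angle the cosine term drops out, leaving sqrt(a^2 + b^2):
print(law_of_cosines(3, 4, math.pi/2))     # 5.0, up to floating-point rounding

# A wider angle makes the opposite side longer than the right-angle case:
print(law_of_cosines(3, 4, 2*math.pi/3))   # sqrt(37), a little over 6
```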

That said Bobby-Sue is being fussy about the drawings. No geometrical drawing is ever perfectly right. The universe isn’t precise enough to let us draw a right triangle. Come to it we can’t even draw a triangle, not really. We’re meant to use these drawings to help us imagine the true, Platonic ideal, figure. We don’t always get there. Mock proofs, the kind of geometric puzzle showing something we know to be nonsense, rely on that. Give chalkboard art a break.

Samson’s Dark Side of the Horse for the 17th is the return of Horace-counting-sheep jokes. So we get a π joke. I’m amused, although I couldn’t sleep trying to remember digits of π out quite that far. I do better working out Collatz sequences.

Hilary Price’s Rhymes With Orange for the 19th at least shows the attempt to relieve mathematics anxiety. I’m sympathetic. It does seem like there should be ways to relieve this (or any other) anxiety, but finding which ones work, and which ones work best, is partly a mathematical problem. As often happens with Price’s comics I’m particularly tickled by the gag in the title panel.

The Help Session ('Be sure to show your work'). 'It's simple --- if 3 deep breaths take 4.2 seconds, and your dread to confidence ratio is 2:1, how long will it take to alleviate your math anxiety?'
Hilary Price’s Rhymes With Orange for the 19th of October, 2016. I don’t think there’s enough data given to solve the problem. But it’s a start at least. Start by making a note of it on your suspiciously large sheet of paper.

Norm Feuti’s Gil rerun for the 19th builds on the idea calculators are inherently cheating on arithmetic homework. I’m sympathetic to both sides here. If Gil just wants to know that his answers are right there’s not much reason not to use a calculator. But if Gil wants to know that he followed the right process then the calculator’s useless. By the right process I mean, well, the work to be done. Did he start out trying to calculate the right thing? Did he pick an appropriate process? Did he carry out all the steps in that process correctly? If he made mistakes on any of those he probably didn’t get to the right answer, but it’s not impossible that he would. Sometimes multiple errors conspire and cancel one another out. That may not hurt you with any one answer, but it does mean you aren’t doing the problem right and a future problem might not be so lucky.

Zach Weinersmith’s Saturday Morning Breakfast Cereal rerun for the 19th has God crashing a mathematics course to proclaim there’s a largest number. We can suppose there is such a thing. That’s how arithmetic modulo a number is done, for one. It can produce weird results in which stuff we just naturally rely on doesn’t work anymore. For example, in ordinary arithmetic we know that if one number times another equals zero, then either the first number or the second, or both, were zero. We use this in solving polynomials all the time. But in arithmetic modulo 8 (say), 4 times 2 is equal to 0.
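A quick check of that claim, in arithmetic modulo 8:

```python
# In arithmetic modulo 8, nonzero numbers can multiply to zero.
print((4 * 2) % 8)   # 0, even though neither factor is 0

# All the nonzero pairs that multiply to zero modulo 8:
zero_divisors = [(a, b) for a in range(1, 8) for b in range(1, 8)
                 if (a * b) % 8 == 0]
print(zero_divisors)
```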

And if we recklessly talk about “infinity” as a number then we get outright crazy results, some of them teased in Weinersmith’s comic. “Infinity plus one”, for example, is “infinity”. So is “infinity minus one”. “Infinity minus infinity” can be “infinity”, or maybe zero, or really any number you want. We can avoid these logical disasters — so far, anyway — by being careful. We have to understand that “infinity” is not a number, though we can use numbers growing infinitely large.

Induction, meanwhile, is a great, powerful, yet baffling form of proof. When it solves a problem it solves it beautifully. And easily, too, usually by doing something like testing two special cases. Maybe three. At least a couple special cases of whatever you want to know. But picking the cases, and setting them up so that the proof is valid, is not easy. There’s logical pitfalls and it is so hard to learn how to avoid them.
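A standard first example of induction — not one from the strip — is the sum of the first n whole numbers. There’s one base case and one inductive step:

```latex
\textbf{Claim.} $1 + 2 + \cdots + n = \frac{n(n+1)}{2}$ for every positive whole number $n$.

\textbf{Base case.} For $n = 1$: the left side is $1$ and the right side is $\frac{1 \cdot 2}{2} = 1$.

\textbf{Inductive step.} Suppose the claim holds for some $n$. Then
\[
  1 + 2 + \cdots + n + (n+1) = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2},
\]
which is the claim with $n+1$ in place of $n$. By induction the claim holds for every $n$.
```

The trap, as the paragraph says, is in the setup: if the base case or the step is stated for the wrong thing, the whole proof silently proves nothing.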

Jon Rosenberg’s Scenes from a Multiverse for the 19th plays on a wonderful paradox of randomness. Randomness is … well, unpredictable. If I tried to sell you a sequence of random numbers and they were ‘1, 2, 3, 4, 5, 6, 7’ you’d be suspicious at least. And yet, perfect randomness will sometimes produce patterns. If there were no little patches of order we’d have reason to suspect the randomness was faked. There is no reason that a message like “this monkey evolved naturally” couldn’t be encoded into a genome by chance. It may just be so unlikely we don’t buy it. The longer the patch of order the less likely it is. And yet, incredibly unlikely things do happen. The study of impossibly unlikely events is a good way to quickly break your brain, in case you need one.

Reading the Comics, September 6, 2016: Oh Thank Goodness We’re Back Edition

That’s a relief. After the previous week’s suspicious silence Comic Strip Master Command sent a healthy number of mathematically-themed comics my way. They cover a pretty normal spread of topics. So this makes for a nice normal sort of roundup.

Mac King and Bill King’s Magic In A Minute for the 4th is an arithmetic-magic-trick. Like most arithmetic-magic it depends on some true but, to me, dull bit of mathematics. In this case, that 81,234,567 minus 12,345,678 is equal to something. As a kid this sort of trick never impressed me because, well, anyone can do subtraction. I didn’t appreciate that the fun of stage magic is in presenting the mundane well.

Jerry Scott and Jim Borgman’s Zits for the 5th is an ordinary mathematics-is-hard joke. But it’s elevated by the artwork, which shows off the expressive and slightly surreal style that makes the comic so reliable and popular. The formulas look fair enough, the sorts of things someone might’ve been cramming before class. If they’re a bit jumbled up, well, Pierce hasn’t been well.

'Are you okay, Pierce? You don't look so good.' Pierce indeed throws up, nastily. 'I don't have a stomach for math.' He's vomited a table full of trigonometry formulas, some of them gone awry.
Jerry Scott and Jim Borgman’s Zits for the 5th of September, 2016. It sure looks to me like there’s more things being explicitly multiplied by ‘1’ than are needed, but it might be the formulas got a little scrambled as Pierce vomited. We’ve all been there. Fun fact: apart from a bit in Calculus I where they drill you on differentiation formulas you never really need the secant. It makes a couple formulas a little more compact and that’s it, so if it’s been nagging at your mind go ahead and forget it.

Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 6th is an anthropomorphic-shapes joke and I feel like it’s been here before. Ah, yeah, there it is, from about this time last year. It’s a fair one to rerun.

Mustard and Boloney popped back in on the 8th with a strip I don’t have in my archive at least. It’s your standard Pi Pun, though. If they’re smart they’ll rerun it in March. I like the coloring; it’s at least a pleasant panel to look at.

Percy Crosby’s Skippy from the 9th of July, 1929 was rerun the 6th of September. It seems like a simple kid-saying-silly-stuff strip: what is the difference between the phone numbers Clinton 2651 and Clinton 2741 when they add to the same number? (And if Central knows what the number is why do they waste Skippy’s time correcting him? And why, 87 years later, does the phone yell at me for not guessing correctly whether I need the area code for a local number and whether I need to dial 1 before that?) But then who cares what the digits in a telephone number add to? What could that tell us about anything?

As phone numbers historically developed, the sum can’t tell us anything at all. But if we had designed telephone numbers correctly we could have made it … not impossible to dial a wrong number, but at least made it harder. This insight comes to us from information theory, which, to be fair, we have because telephone companies spent decades trying to work out solutions to problems like people dialing numbers wrong or signals getting garbled in the transmission. We can allow for error detection by schemes as simple as passing along, besides the numbers, the sum of the numbers. This can allow for the detection of a single error: had Skippy called for number 2641 instead of 2741 the problem would be known. But it’s helpless against a pair of errors that cancel, calling for 2651 instead of 2741; the digits add to the same 14 either way. But we could detect a second error by calculating some second term based on the number we wanted, and sending that along too.
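A sketch of that simplest scheme, using the strip’s Clinton exchange numbers: the sender appends the digit sum, and the receiver re-adds to check.

```python
def with_checksum(digits):
    """Append the digit sum as a crude check value."""
    return digits + [sum(digits)]

def looks_ok(message):
    """Re-add the digits and compare against the transmitted sum."""
    *digits, checksum = message
    return sum(digits) == checksum

sent = with_checksum([2, 7, 4, 1])   # Clinton 2741, checksum 14
print(looks_ok(sent))                # True

one_error = [2, 6, 4, 1, 14]         # dialed 2641: the sum is 13, caught
print(looks_ok(one_error))           # False

cancelling = [2, 6, 5, 1, 14]        # dialed 2651: the errors cancel, slips by
print(looks_ok(cancelling))          # True
```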

By adding some more information, other modified sums of the digits we want, we can even start correcting errors. We understand the logic of this intuitively. When we repeat a message twice after sending it, we are trusting that even if one copy of the message is garbled the recipient will take the version received twice as more likely what’s meant. We can design subtler schemes, ones that don’t require we repeat the number three times over. But that should convince you that we can do it.

The tradeoff is obvious. We have to say more digits of the number we want. It isn’t hard to reach the point we’re sending more error-detecting and error-correcting numbers than we are numbers we want. And what if we make a mistake in the error-correcting numbers? (If we used a smart enough scheme, we can work out that the error was in the error-correcting number, and relax.) If it’s important that we get the message through, we shrug and accept this. If there’s no real harm done in getting the message wrong — if we can shrug off the problem of accidentally getting the wrong phone number — then we don’t worry about making a mistake.

And at this point we’re only a few days into the week. I have enough hundreds of words on the close of the week I’ll put off posting that a couple of days. It’s quite good having the comics back to normal.

Reading the Comics, June 26, 2016: June 23, 2016 Plus Golden Lizards Edition

And now for the huge pile of comic strips that had some mathematics-related content on the 23rd of June. I admit some of them are just using mathematics as a stand-in for “something really smart people do”. But first, another moment with the Magic Realism Bot:

So, you know, watch the lizards and all.

Tom Batiuk’s Funky Winkerbean name-drops E = mc^2 as the sort of thing people respect. If the strip seems a little baffling then you should know that Mason’s last name is Jarr. He was originally introduced as a minor player in a storyline that wasn’t about him, so the name just had to exist. But since then Tom Batiuk’s decided he likes the fellow and promoted him to major-player status. And maybe Batiuk regrets having a major character with a self-consciously Funny Name, which is an odd thing considering he named his long-running comic strip for original lead character Funky Winkerbean.

'I don't know how adding an E to your last name will make much of a difference, Mason.' 'It will immediately give my name more gravitas ... like Shiela E ... the E Street Band ... e e commungs ... E = mc^2 ... ' And he smirks because that's just what the comic strip is really about.
Tom Batiuk’s Funky Winkerbean for the 23rd of June, 2016. They’re in the middle of filming one or possibly two movies about the silver-age comic book hero Starbuck Jones. This is all the comic strip is about anymore, so if you go looking for its old standbys — people dying — or its older standbys — band practice being rained on — sorry, you’ll have to look somewhere else. That somewhere else would be the yellowed strips taped to the walls in the teachers lounge.

Charlie Podrebarac’s CowTown depicts the harsh realities of Math Camp. I assume they’re the realities. I never went to one myself. And while I was on the Physics Team in high school I didn’t make it over to the competitive mathematics squad. Yes, I noticed that the not-a-numbers-person Jim Smith can’t come up with anything other than the null symbol, representing nothing, not even zero. I like that touch.

Ryan North’s Dinosaur Comics rerun is about Richard Feynman, the great physicist whose classic memoir What Do You Care What Other People Think? is hundreds of pages of stories about how awesome he was. Anyway, the story goes that Feynman noticed one of the sequences of digits in π and thought of the joke which T-Rex shares here.

π is believed but not proved to be a “normal” number. This means several things. One is that any finite sequence of digits you like should appear in its representation, somewhere. Feynman and T-Rex look for the sequence ‘999999’, which sure enough happens less than eight hundred digits past the decimal point. Lucky stroke there. There’s no reason to suppose the sequence should be anywhere near the decimal point. There’s no reason to suppose the sequence has to be anywhere in the finite number of digits of π that humanity will ever know. (This is why Carl Sagan’s novel Contact, which has as a plot point the discovery of a message apparently encoded in the digits of π, is not building on a stupid idea. That any finite message exists somewhere is kind-of certain. That it’s findable is not.)
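You can find the run yourself. This sketch uses Gibbons’ unbounded spigot algorithm — a standard digit-generating method, not anything from the strip — and searches the early decimals for the six 9’s:

```python
def pi_digits():
    """Yield the decimal digits of pi: 3, 1, 4, 1, 5, 9, ...
    (Gibbons' unbounded spigot algorithm, exact integer arithmetic.)"""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4*q + r - t < n*t:
            yield n
            q, r, t, k, n, l = 10*q, 10*(r - n*t), t, k, (10*(3*q + r))//t - 10*n, l
        else:
            q, r, t, k, n, l = q*k, (2*q + r)*l, t*l, k + 1, (q*(7*k + 2) + r*l)//(t*l), l + 2

gen = pi_digits()
next(gen)                                # drop the leading 3
decimals = ''.join(str(next(gen)) for _ in range(1000))

position = decimals.find('999999') + 1   # 1-indexed place after the decimal point
print(position)                          # 762: under eight hundred, as promised
```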

e, mentioned in the last panel, is similarly thought to be a normal number. It’s also not proved to be. We are able to say that nearly all numbers are normal. It’s in much the way we can say nearly all numbers are irrational. But it is hard to prove that any numbers are. I believe that the only numbers humans have proved to be normal are a handful of freaks created to show normal numbers exist. I don’t know of any number that’s interesting in its own right that’s also been shown to be normal. We just know that almost all numbers are.

But it is imaginable that π or e aren’t. They look like they’re normal, based on how their digits are arranged. It’s an open question and someone might make a name for herself by answering the question. It’s not an easy question, though.

Missy Meyer’s Holiday Doodles breaks the news to me the 23rd was SAT Math Day. I had no idea and I’m not sure what that even means. The doodle does use the classic “two trains leave Chicago” introduction, the “it was a dark and stormy night” of Boring High School Algebra word problems.

Stephan Pastis’s Pearls Before Swine is about everyone who does science and mathematics popularization, and what we worry someone’s going to reveal about us. Um. Except me, of course. I don’t do this at all.

Ashleigh Brilliant’s Pot-Shots rerun is a nice little averages joke. It does highlight something which looks paradoxical, though. Typically if you look at the distributions of values of something that can be measured you get a bell curve, like Brilliant drew here. The value most likely to turn up — the mode, mathematicians say — is also the arithmetic mean, at least for a symmetric curve like this one. “The average” is what everybody except mathematicians say. And even they say that most of the time. But almost nobody is at the average.

Looking at a drawing, Brilliant’s included, explains why. The exact average is a tiny slice of all the data, the “population”. Look at the area in Brilliant’s drawing underneath the curve that’s just the blocks underneath the upside-down fellow. Most of the area underneath the curve is away from that.

There are a lot of results that are close to, but not exactly at, the arithmetic mean. Most of the results are going to be close to it. Look at how much area there is under the curve and within four vertical lines of the upside-down fellow. That’s nearly everything. So we have this apparent contradiction: the most likely result is the average. But almost nothing is average. And yet almost everything is nearly average. This is why statisticians have their own departments, or get to make the mathematics department brand itself the Department of Mathematics and Statistics.
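A quick simulation shows the three statements living together comfortably. This is just an illustrative sketch; the mean of 0 and standard deviation of 1 are arbitrary picks of mine:

```python
import random

random.seed(2016)
N = 100_000
samples = [random.gauss(0.0, 1.0) for _ in range(N)]

# Almost nothing is exactly average: a continuous draw never
# (in practice) lands on the mean precisely.
exactly_average = sum(1 for s in samples if s == 0.0)

# Yet almost everything is nearly average: within one standard
# deviation of the mean sits roughly 68 percent of the population.
nearly_average = sum(1 for s in samples if abs(s) < 1.0)

print(exactly_average)     # 0
print(nearly_average / N)  # about 0.68
```

The “within one standard deviation” band is my stand-in for the “four vertical lines” in Brilliant’s drawing; any reasonable band around the mean makes the same point.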

Reading the Comics, April 15, 2016: Remarkably, No Income Tax Comics Edition

I’m as startled as you are. While a couple comic strips mentioned United States Income Tax Day, they didn’t do so in a way that seemed on-point enough for this Reading The Comics post. Of course, United States Income Tax Day happens to be the 18th this year. I haven’t seen Sunday’s comics yet.

David L Hoyt and Jeff Knurek’s Jumble for the 11th of April once again uses arithmetic puns for its business. Also, if some science fiction writer doesn’t take hold of “Gribth” as a name for something they’re missing a fine syllable. “Tahew” is no slouch in the made-up word leagues either.

TAHEW O - - - O; NIRKB - - O - O; CLEANC O - O - O -; GRIBTH - - O - O O; She knew what two times two equaled and didn't have to - - - - - - - - - -.
David L Hoyt and Jeff Knurek’s Jumble for the 11th of April, 2016. The link will probably expire sometime before the year 2112.

Ryan North’s Dinosaur Comics for the 12th of April obviously originally ran sometime in mid-March. I have similarly ambiguous feelings about the value of Pi Day. I suppose it’s nice for people to think of “fun” and “mathematics” close together. Utahraptor’s distinction between “Pi Day” of March 14 and “Approximate Pi Day” of the 22nd of July is a curious one, though. It’s not as though 3.14 is any more exactly π than 22/7 is. I suppose you can argue that on 3/14, sometime between 1:59:26 and 1:59:27, there’s a moment that’s exactly 1:59:26.5358979 and so on, the digits going on forever. But that assumes that time is a continuous thing, and it’s not like you’ll ever know what that moment is. By the time you might recognize it, it’s passed. They are all Approximate Pi Days; we just have to decide what the approximation is.
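For what it’s worth, the two Approximate Pi Days aren’t even equally approximate, and it’s the July date that comes out ahead. A couple of lines of Python make the comparison:

```python
import math

# How far each "Pi Day" value sits from pi itself.
error_march = abs(3.14 - math.pi)   # about 1.59e-3
error_july = abs(22 / 7 - math.pi)  # about 1.26e-3

# 22/7 is the closer approximation, if only slightly.
print(error_july < error_march)  # True
```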

Bill Schorr’s The Grizzwells for the 12th is a silly-homework problem question. I know the point is to joke about how Fauna misunderstands a word. But if we pretend the assignment is for real, what might its point be? To show that students know the parts of a right triangle? I guess that’s all right, but it doesn’t seem like much of an assignment. I don’t blame her for getting snarky in the face of that.

Rick Kirkman and Jerry Scott’s Baby Blues for the 13th is a gag about picking random numbers for arithmetic homework. The approach is doomed, surely, although it’s probably not completely doomed. I’m not sure Hammie’s age, but if his homework is about adding and subtracting numbers he probably mostly gets problems that give results between zero and twenty, and almost always less than a hundred. He might hit some by luck.

'Quick! Give me five random numbers.' 'Nineteen, three, eleven, six, and eighty-one.' 'Perfect!' 'Wait --- why did you need five random numbers?' 'I had five homework problems left.' 'I can't wait to see your math grade.'
Rick Kirkman and Jerry Scott’s Baby Blues for the 13th of April, 2016. It’s only after Hammie walks away that Zoe wonders why he needed five random numbers.

I’ve mentioned some how people are awful at picking “random” numbers in their heads. Zoe shows off one of the ways people are bad at it. People asked to name numbers “randomly” pick odd numbers more than even numbers. Somehow they just feel random. I doubt Kirkman and Scott were thinking of that; among other things, five numbers is a very small sample. Four odds out of five isn’t peculiar, not yet. They were probably just trying to pick numbers that sounded funny while fitting the space available. I’m a bit surprised 37 didn’t make the list.
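Just how un-peculiar is four odds out of five? If each pick were truly random, odd with probability one-half, the binomial distribution gives the answer; here's the arithmetic as a Python check:

```python
from math import comb

# Probability that at least four of five fair fifty-fifty picks
# come up odd: C(5,4)/2^5 + C(5,5)/2^5 = 6/32.
p = sum(comb(5, k) for k in (4, 5)) / 2 ** 5
print(p)  # 0.1875
```

Nearly one time in five, which is to say nothing a statistician would raise an eyebrow at.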

Mark Anderson’s Andertoons for the 13th is Mark Anderson’s Andertoons entry for this essay. I like the teacher’s answer, though.

Patrick Roberts’s Todd the Dinosaur for the 14th just uses arithmetic as the most economical way to fit several problems on-screen at once. They’ve got a compactness that sentence-diagramming just can’t match.

'It's just not coming to me, teacher!' 'That's okay, Todd. You can have this [ lollipop ] just for trying!' He licks it and suddenly answers the three arithmetic problems on the board. 'Good stuff, those Red Bull lollipops!'
Patrick Roberts’s Todd the Dinosaur for the 14th of April, 2016. No fair wondering why his more distant eye is always the larger one.

Greg Cravens’s The Buckets for the 15th amuses me with its use of coin-tossing as a way of making choices. I’m also amused the coin might be wrong only about half the time.

John Deering’s Strange Brew for the 15th is a visual puzzle. It’s intending to make use of a board full of mathematical symbols to represent deep thought. But the symbols aren’t quite mathematics. They look much more like LaTeX, a typesetting code used to express mathematics in print. Some of the symbols are obscured, so I can’t say exactly what’s meant. But it should be something like this:

F = \{F_{x} \in F_{c} : (\text{is} \ldots (1)) \cap (\text{minPixels} < \|s\| < \text{maxPixels}) \\ \partial{P} \\ (\text{is}_{\text{connected}}| > |s| - \epsilon) \}

At the risk of disappointing, this appears to me to be gibberish. The appearance of words like ‘minPixels’ and ‘maxPixels’ suggests a bit of computer code. So does having a subscript that’s the full word “connected”. I wonder where Deering drew this example from.

Reading the Comics, March 14, 2016: Pi Day Comics Event

Comic Strip Master Command had the regular pace of mathematically-themed comic strips the last few days. But it remembered what the 14th would be. You’ll see that when we get there.

Ray Billingsley’s Curtis for the 11th of March is a student-resists-the-word-problem joke. But it’s a more interesting word problem than usual. It’s your classic problem of two trains meeting, but rather than ask when they’ll meet it asks where. It’s just an extra little step once the time of meeting is worked out, but that’s all right by me. Anything to freshen the scenario up.

'Please answer this math question, Mr Wilkins. John is traveling east from San Francisco on a train at a speed of 80 miles per hour. Tom is going to that same meeting from New York, headed west, on a train traveling 100 miles per hour. In what state will they meet?' 'Couldn't they just Skype?'
Ray Billingsley’s Curtis for the 11th of March, 2016. I am curious what the path of the rail line is.

Tony Carrillo’s F Minus for the 11th was apparently our Venn Diagram joke for the week. I’m amused.

Mason Mastroianni, Mick Mastroianni, and Perri Hart’s B.C. for the 12th of March name-drops statisticians. Statisticians are almost expected to produce interesting pictures of their results. It is the field that gave us bar charts, pie charts, scatter plots, and many more. Statistics is, in part, about understanding a complicated set of data with a few numbers. It’s also about turning those numbers into recognizable pictures, all in the hope of finding meaning in a confusing world (ours).

Brian Anderson’s Dog Eat Doug for the 13th of March uses walls full of mathematical scrawl as signifier for “stuff thought deeply about”. I don’t recognize any of the symbols specifically, although some of them look plausibly like calculus. I would not be surprised if Anderson had copied equations from a book on string theory. I’d do it to tell this joke.

And then came the 14th of March. That gave us a bounty of Pi Day comics. Among them:

'Happy Pi Day.' 'Mmm. I love apple pie.' 'Pi day, not Pie Day. Pi ... you know ... 3.14 ... March 14th. Get it?' 'Today is a pie-eating holiday?' 'Sort of. They do celebrate it with pie, but it's mostly about pi.' 'I don't understand what that kid says half the time.'
John Hambrock’s The Brilliant Mind of Edison Lee for the 14th of March, 2016. The strip is like this a lot.

John Hambrock’s The Brilliant Mind of Edison Lee trusts that the name of the day is wordplay enough.

Scott Hilburn’s The Argyle Sweater is also a wordplay joke, although it’s a bit more advanced.

Tim Rickard’s Brewster Rockit fuses the pun with one of its running, or at least rolling, gags.

Bill Whitehead’s Free Range makes an urban legend out of the obsessive calculation of digits of π.

And Missy Meyer’s informational panel cartoon Holiday Doodles mentions that besides “National” Pi Day it was also “National” Potato Chip Day, “National” Children’s Craft Day, and “International” Ask A Question Day. My question: for the first three days, which nation?

Edited To Add: And I forgot to mention, after noting to myself that I ought to mention it. The Price Is Right (the United States edition) hopped onto the Pi Day fuss. It used the day as a thematic link for its Showcase prize packages, noting how you could work out π from the circumference of your new bicycles, or how π was a letter from your vacation destination of Greece, and if you think there weren’t brand-new cars in both Showcases you don’t know the game show well. Did anyone learn anything mathematical from this? I am skeptical. Do people come away thinking mathematics is more fun after this? … Conceivably. At least it was a day fairly free of people declaring they Hate Math and Can Never Do It.