## Reading the Comics, July 12, 2022: Numerals Edition

The small set of comic strips with some interesting mathematical content for the first third of the month include an anthropomorphic numerals one and one about the representation of infinity. That’s enough to make a title.

Richard Thompson’s Cul de Sac repeat for the 3rd is making its third appearance in my column! I had mentioned it when it first ran, in July 2012, and then again in its July 2017 repeat. But on neither of those occasions did I actually include the comic, as I felt it likely GoComics would keep the link to it stable. I’m less confident now that they will keep the link up, as Thompson has died and his comic strip — this century’s best, to date — is in perpetual rerun.

I admit not having many thoughts which I haven’t said twice already. It’s a joke about making a character out of the representation of a number. Alice gives it personality and backstory and, as the kids say these days, lore. Anyway, be sure to check out this blog for the comic’s repeat on the 4th of July, 2027. I hope to still be reading then.

I think mathematicians do tend to give, if not personality, at least traits to mathematical constructs. Like, 60, a number with abundant divisors, is likely to be seen as a less difficult number to calculate with than 58 is. A mathematician is likely to see $e^{\imath \sin(x)}$ as a pleasant, well-behaved function, but is likely to see $\sin(e^{\frac{\imath}{x}})$ as a troublesome one. These examples all tie to how easy they are to do other stuff with. But it is natural to think fondly of things you see a lot that work nicely for you. These don’t always have to be “nice” things. If you want to test an idea about continuous curves, for example, it’s convenient to have handy the Koch curve. It’s this spiky fractal that’s nothing but corners, and you can use it to see if the idea holds up. Do this enough, and you come to see it as a reliable partner in your work.

Bill Amend’s FoxTrot for the 3rd is the one built on representing infinity. Best that one could hope for given Peter’s ambitious hopes here. I know the characters in this strip, and I just bet little brother Jason wanted a Möbius-strip burger.

Tauhid Bondia’s Crabgrass for the 7th — a strip new to newspaper syndication, by the way — is a cryptography joke. This sort of thing is deeply entwined with mathematics, most deeply with probability. This is because we know that a three-letter (English) word is more likely to be ‘the’ or ‘and’ or ‘you’ than it is to be ‘qua’ or ‘ado’ or ‘pyx’. And any of those is more likely than ‘pqd’. So, if it’s a simple substitution, a coded word like ‘zpv’ gives a couple likely decipherings. The longer the original message, and the more it’s like regular English, the more likely it is that we can work out the encryption scheme.
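To make the frequency idea concrete, here is a minimal Python sketch. It assumes the code is a Caesar-style shift, the simplest kind of simple substitution, and the word list is my own small stand-in for real English frequency data:

```python
from string import ascii_lowercase as alpha

# A tiny stand-in for a real table of common English words.
COMMON = {"the", "and", "you", "for", "are", "but", "not"}

def shift(word, k):
    """Shift each letter of a lowercase word back by k places."""
    return "".join(alpha[(alpha.index(c) - k) % 26] for c in word)

def likely_decipherings(coded):
    """Try every possible shift; keep those that land on common words."""
    return {k: shift(coded, k) for k in range(26) if shift(coded, k) in COMMON}

print(likely_decipherings("zpv"))  # a shift of 1 turns 'zpv' into 'you'
```

Longer messages give more words to cross-check against the frequency table, which is why length helps the codebreaker.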

But this is simple substitution. What’s a complex substitution? There are many possible schemes here. Their goal is to make every three-letter combination appear in the coded text about as often as every other. That is, so that we don’t see ‘zpv’ happen any more, or less, than ‘sgd’ or ‘zmc’ do, so there’s no telling which word is supposed to be ‘the’ and which is ‘pyx’. Doing that well is a genuinely hard problem, and why cryptographers are paid, I assume, big money. It demands both excellent design of the code and excellent implementation of it. (One cryptography success for the Allies in World War II came about because some German weather stations needed to transmit their observations using two different cipher schemes. The one which the Allies had already cracked then gave an edge to working out the other.)
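One classical step in that direction is a polyalphabetic cipher, such as the Vigenère scheme, where the shift changes from letter to letter. This sketch is my own illustration, not anything from the strip:

```python
from string import ascii_lowercase as alpha

def vigenere(text, key):
    """Encrypt lowercase text by shifting each letter forward by the
    matching key letter, cycling through the key."""
    out = []
    for i, c in enumerate(text):
        k = alpha.index(key[i % len(key)])
        out.append(alpha[(alpha.index(c) + k) % 26])
    return "".join(out)

coded = vigenere("thethe", "ab")
print(coded)  # 'tieuhf': the two copies of 'the' encrypt differently
```

Because the same plaintext word can encrypt many different ways, counting three-letter combinations no longer points straight at ‘the’. (Vigenère is still breakable, of course; it just takes more work than counting trigrams.)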

It also requires thinking of the costs of implementation. Kevin and Miles could work out a more secure code, but would it be worth it? They just need people to decide their message is too much effort to be worth cracking. Mrs Campbell seems to have reached that conclusion, at a glance. Not sure what Principal Sanders would have decided, were Miles not eager to get out of there. Operational security is always a challenge.

And that’s enough for the start of the month. All my Reading the Comics posts should be at this link. I hope to have some more of them to discuss next week. We’ll see what happens.

## Reading the Comics, May 25, 2021: Hilbert’s Hotel Edition

I have only a couple strips this time, and from this week. I’m not sure when I’ll return to full-time comics reading, but I do want to share strips that inspire something.

Carol Lay’s Lay Lines for the 24th of May riffs on Hilbert’s Hotel. This is a metaphor often used in pop mathematics treatments of infinity. So often, in fact, a friend snarked that he wished for any YouTube mathematics channel that didn’t do the same three math theorems. Hilbert’s Hotel was among them. I think I’ve never written a piece specifically about Hilbert’s Hotel. In part because every pop mathematics blog has one, so there are better expositions available. I have a similar restraint against a detailed exploration of the different sizes of infinity, or of the Monty Hall Problem.

Hilbert’s Hotel is named for David Hilbert, of Hilbert problems fame. It’s a thought experiment to explore weird consequences of our modern understanding of infinite sets. It presents various cases about matching elements of a set to the whole numbers, by making it about guests in hotel rooms. And then translates things we accept in set theory, like combining two infinitely large sets, into material terms. In material terms, the operations seem ridiculous. So the set of thought experiments gets labelled “paradoxes”. This is not in the logician sense of being things both true and false, but in the ordinary sense that we are asked to reconcile our logic with our intuition.

So the Hotel serves a curious role. It doesn’t make a complex idea understandable, the way many demonstrations do. It instead draws attention to the weirdness in something a mathematics student might otherwise nod through. It does serve some role, or it wouldn’t be so popular now.

It hasn’t always been popular, though. Hilbert introduced the idea in 1924, though, per a paper by Helge Kragh, only to address one question. A modern pop mathematician would have a half-dozen problems. George Gamow’s 1947 book One Two Three … Infinity brought it up again, but it didn’t stay in the public eye. It wasn’t until the 1980s that it got a secure place in pop mathematics culture, and that by way of philosophers and theologians. If you aren’t up to reading the whole of Kragh’s paper, I did summarize it a bit more completely in this 2018 Reading the Comics essay.

Anyway, Carol Lay does a great job making a story of it.

Leigh Rubin’s Rubes for the 25th of May I’ll toss in here too. It’s a riff on the art convention of a blackboard equation being meaningless. Normally, of course, the content of the equation doesn’t matter. So it gets simplified and abstracted, for the same reason one draws a brick wall as four separate patches of two or three bricks together. It sometimes happens that a cartoonist makes the equation meaningful. That’s because they’re a recovering physics major like Bill Amend of FoxTrot. Or it’s because the content of the blackboard supports the joke. Which, in this case, it does.

The essays I write about comic strips I tag so they appear at this link. You may enjoy some more pieces there.

## Reading the Comics, May 12, 2020: Little Oop Counts For More Edition

The past week had a fair number of comic strips mentioning some aspect of mathematics. One of them is, really, fairly slight. But it extends a thread in the comic strip that I like, and so I will feature it here.

Jonathan Lemon and Joey Alison Sayers’s Little Oop for the 10th continues the thread of young Alley Oop’s time discovering numbers. (This in a storyline that’s seen him brought to the modern day.) The Moo researchers of the time have found numbers larger than three. As I’d mentioned when this joke was first done, that Oop might not have had a word for “seven” until recently doesn’t mean he wouldn’t have understood that seven of a thing was more than five of a thing, or less than twelve of a thing. At least if he could compare them.

Sam Hurt’s Eyebeam for the 11th uses heaps of mathematical expressions, graphs, charts, and Venn diagrams to represent the concept of “data”. It’s spilled all over to represent “sloppy data”. Usually by the term we mean data that we feel is unreliable: measurements that are imprecise, or that can’t be trusted to repeat. Precision is, roughly, how many significant digits your measurement has. Reliability is, roughly, whether you would get about the same number if you repeated the measurement.
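A rough way to see reliability in code, with hypothetical repeated measurements of the same quantity:

```python
import statistics

# Hypothetical repeated measurements of one quantity.
steady = [9.81, 9.82, 9.80, 9.81, 9.83]  # repeats agree: reliable
jumpy = [9.8, 11.2, 8.4, 10.9, 9.1]      # repeats scatter: unreliable

# The spread of the repeats is one crude gauge of reliability.
print(statistics.stdev(steady))  # small
print(statistics.stdev(jumpy))   # much larger
```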

Nate Fakes’s Break of Day for the 12th is the anthropomorphic numerals joke for the week.

Ryan North’s Dinosaur Comics for the 12th talks about immortality. And what the probability of events means when there are infinitely many opportunities for a thing to happen.

We’re accustomed in probability to thinking of the expectation value. This is the number of times we’d expect something to happen, given some number N of opportunities, if at each opportunity it has the probability p of happening. Let me assume the probability is always the same number. If it’s not, our work gets harder, although it’s basically the same kind of work. But, then, the expectation value is N times p. Which, as Utahraptor points out, has to reach at least 1 for any event, however unlikely, given enough chances. So it should be.
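In code the expectation value, and the chance of seeing the event at least once, look like this (the one-in-a-million figure is just an example of mine):

```python
def expected_occurrences(n, p):
    """Expected number of times an event of probability p happens in
    n independent opportunities: simply n times p."""
    return n * p

def chance_of_at_least_one(n, p):
    """Probability the event happens at least once in n opportunities."""
    return 1 - (1 - p) ** n

# A one-in-a-million event, given ten million chances:
print(expected_occurrences(10_000_000, 1e-6))    # about 10
print(chance_of_at_least_one(10_000_000, 1e-6))  # very close to 1
```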

But, then, to take Utahraptor’s example: what is the probability that an immortal being never trips down the stairs? At least not badly enough to do harm? Why should we think that’s zero? It’s not as if there’s a physical law that compels someone to go to the stairs and then to fall down them to their death. And, if there’s any nonzero chance of someone not dying this way? Then, if there are enough immortals, there’s someone who will go forever without falling down stairs.

That covers just the one way to die, of course. But the same reasoning holds for every possible way to die. If there’s enough immortals, there’s someone who would not die from falling down stairs and from never being struck by a meteor. And someone who’d never fall down stairs and never be struck by a meteor and never fall off a cliff trying to drop an anvil on a roadrunner. And so on. If there are infinitely many people, there’s at least one who’d avoid all possible accidental causes of death.

More. If there’s infinitely many immortals, then there are going to be a second and a third — indeed, an infinite number — of people who happen to be lucky enough to never die from anything. Infinitely many immortals die of accidents, sure, but somehow not all of them. We can’t even say that more immortals die of accidents than don’t.

My point is that probability gets really weird when you try putting infinities into it. Proceed with extreme caution. But the results of basic, incautious, thinking can be quite heady.

Bill Amend’s FoxTrot Classics for the 12th has Paige cramming for a geometry exam. Don’t cram for exams; it really doesn’t work. It’s regular steady relaxed studying that you need. That and rest. There is nothing you do that you do better for being sleep-deprived.

Bob Weber Jr and Jay Stephens’s Oh Brother for the 12th has Lily tease her brother with a story problem. I believe the strip’s a rerun, but it had been gone altogether for more than a year. It’s nice to see it returned anyway.

And while I don’t regularly cover web-only comics here, Norm Feuti has carried on his Gil as a Sunday-only web comic. The strip for the 10th of May has Gil using a calculator for mathematics homework, with a teacher who didn’t say he couldn’t. I’m surprised she hadn’t set a guideline.

This carries me through half a week. I’ll have more mathematically-themed comic strips at this link soon. Thanks for reading.

## Reading the Comics, May 5, 2018: Does Anyone Know Where The Infinite Hotel Comes From Edition

With a light load of mathematically-themed comic strips I’m going to have to think of things to write about twice this coming week. Fortunately, I have plans. We’ll see how that works out for me. So far this year I’m running about one-for-eight on my plans.

Mort Walker and Dik Browne’s Hi and Lois for the 1st of November, 1960 looks pretty familiar somehow. Having noticed what might be the first appearance of “the answer is twelve?” in Peanuts I’m curious why Chip started out by guessing twelve. Probably just coincidence. Possibly that twelve is just big enough to sound mathematical without being conspicuously funny, like 23 or 37 or 42 might be. I’m a bit curious that after the first guess Sally looked for smaller numbers than twelve, while Chip (mostly) looked for larger ones. And I see a logic in going from a first guess of 12 to a second guess of either 4 or 144. The 32 is a weird one.

Tom Toles’s Randolph Itch, 2 am for the 30th of April, 2018 is on at least its third appearance around here. I suppose I have to retire the strip from consideration for these comics roundups. It didn’t run that long, sad to say, and I think I’ve featured all its mathematical strips. I’ll go on reading, though, as I like the style and Toles’s sense of humor.

Mark Tatulli’s Heart of the City for the 3rd of May is a riff on the motivation problem. For once, not about the motivation of the people in story problems to do what they do. It’s instead about why the student should care what the story people do. And, fair enough, really. It’s easy to calculate something you’d like to know the answer to. But give the teacher or textbook writer a break. There’s nothing that’s interesting to everybody. No, not even what minimum grade they need on this exam to get an A in the course. After a moment of clarity in fifth grade I never cared what my scores were. I just did my work and accepted the assessment. My choice not to worry about my grades had more good than bad results, but I admit, there were bad results too.

John McNamee’s Pie Comic for the 4th of May riffs on some ancient story-problems built on infinite sets. I don’t know the original source. I assume a Martin Gardner pop-mathematics essay. I don’t know, though, and I’m curious if anyone does know.

Often I see these kinds of problem as set at the Hilbert Hotel. This references David Hilbert, the late-19th/early-20th century mastermind behind the 20th century’s mathematics field. They try to challenge people’s intuitions about infinitely large sets. Ponder a hotel with one room for each of the counting numbers. Suppose it’s full. How many guests can you add to it? Can you add infinitely many more guests, and still have room for them all? If you do it right, and if “infinitely many more guests” means something particular, yes. If certain practical points don’t get in the way. I mean practical for a hotel with infinitely many rooms.

This is a new-tag comic.

Dave Whamond’s Reality Check for the 4th is a riff on Albert Einstein’s best-known equation. He had some other work, granted. But who didn’t?

## Wronski’s Formula For Pi: My Boring Mistake

Previously:

So, I must confess failure. Not about deciphering Józef Maria Hoëne-Wronski’s attempted definition of π. He’d tried this crazy method throwing a lot of infinities and roots of infinities and imaginary numbers together. I believe I translated it into the language of modern mathematics fairly. And my failure is not that I found the formula actually described the number -½π.

Oh, I had an error in there, yes. And I’d found where it was. It was all the way back in the essay which first converted Wronski’s formula into something respectable. It was a small error, first appearing in the last formula of that essay and never corrected from there. This reinforces my suspicion that when normal people see formulas they mostly look at them to confirm there is a formula there. With luck they carry on and read the sentences around them.

My failure is I wanted to write a bit about boring mistakes. The kinds which you make all the time while doing mathematics work, but which you don’t worry about. Dropped signs. Constants which aren’t divided out, or which get multiplied in incorrectly. Stuff like this which you only detect because you know, deep down, that you should have gotten to an attractive simple formula and you haven’t. Mistakes which are tiresome to make, but never make you wonder if you’re in the wrong job.

The trouble is I can’t think of how to make an essay of that. We don’t tend to rate little mistakes like the wrong sign or the wrong multiple or a boring unnecessary added constant as important. This is because they’re not. The interesting stuff in a mathematical formula is usually the stuff representing variations. Change is interesting. The direction of the change? Eh, nice to know. A swapped plus or minus sign alters your understanding of the direction of the change, but that’s all. Multiplying or dividing by a constant wrongly changes your understanding of the size of the change. But that doesn’t alter what the change looks like. Just the scale of the change. Adding or subtracting the wrong constant alters what you think the change is varying from, but not what the shape of the change is. Once more, not a big deal.

But you also know that instinctively, or at least you get it from seeing how it’s worth one or two points on an exam to write -sin where you mean +sin. Or how if you ask the instructor in class about that 2 where a ½ should be, she’ll say, “Oh, yeah, you’re right” and do a hurried bit of erasing before going on.

Thus my failure: I don’t know what to say about boring mistakes that has any insight.

For the record here’s where I got things wrong. I was creating a function, named ‘f’ and using as a variable ‘x’, to represent Wronski’s formula. I’d gotten to this point:

$f(x) = -4 \imath x 2^{\frac{1}{2}\cdot \frac{1}{x}} \left\{ e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} - e^{- \imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}$

And then I observed how the stuff in curly braces there is “one of those magic tricks that mathematicians know because they see it all the time”. And I wanted to call in this formula, correctly:

$\sin\left(\phi\right) = \frac{e^{\imath \phi} - e^{-\imath \phi}}{2\imath }$
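That identity is easy to spot-check numerically, say with Python’s cmath module:

```python
import cmath

def sine_from_exponentials(phi):
    """sin(phi) rebuilt from complex exponentials:
    (e^(i phi) - e^(-i phi)) / (2i)."""
    return (cmath.exp(1j * phi) - cmath.exp(-1j * phi)) / 2j

# Agrees with the ordinary sine at a few sample points.
for phi in (0.0, 0.5, 1.0, cmath.pi / 4):
    assert abs(sine_from_exponentials(phi) - cmath.sin(phi)) < 1e-12
```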

So here’s where I went wrong. I took the $4\imath$ way off in the front of that first formula and combined it with the stuff in braces to make 2 times a sine of some stuff. I apologize for this. I must have been writing stuff out faster than I was thinking about it. If I had thought, I would have gone through this intermediate step:

$f(x) = -4 \imath x 2^{\frac{1}{2}\cdot \frac{1}{x}} \left\{ e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} - e^{- \imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\} \cdot \frac{2\imath}{2\imath}$

Because with that form in mind, it’s easy to match the stuff in curly braces, over the $2\imath$ in the denominator, to the sine formula. From that we get, correctly, $\sin\left(\frac{\pi}{4}\cdot\frac{1}{x}\right)$. And then the $-4\imath$ on the far left of that expression and the $2\imath$ on the right multiply together to produce the number 8.

So the function ought to have been, all along:

$f(x) = 8 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$

Not very different, is it? Ah, but it makes a huge difference. Carry through with all the L’Hôpital’s Rule stuff described in previous essays. All the complicated formula work is the same. There’s a different number hanging off the front, waiting to multiply in. That’s all. And what you find, redoing all the work but using this corrected function, is that Wronski’s original mess —

$\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} - \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}$

— should indeed equal:

$2\pi$

All right, there’s an extra factor of 2 here. And I don’t think that is my mistake. Or if it is, other people come to the same mistake without my prompting.

Possibly the book I drew this from misquoted Wronski. It’s at least as good to have a formula for 2π as it is to have one for π. Or Wronski had a mistake in his original formula, and had a constant multiplied out front which he didn’t want. It happens to us all.
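As a final numerical sanity check, evaluating the corrected function at ever larger x does settle down near 2π:

```python
import math

def f(x):
    """The corrected function: 8 x 2^(1/(2x)) sin(pi/(4x))."""
    return 8 * x * 2 ** (1 / (2 * x)) * math.sin(math.pi / (4 * x))

for x in (10, 1_000, 100_000):
    print(x, f(x))
print(2 * math.pi)  # the value the sequence is closing in on
```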

Fin.

## Wronski’s Formula For Pi: How Close We Came

Previously:

Józef Maria Hoëne-Wronski had an idea for a new, universal, culturally-independent definition of π. It was this formula that nobody went along with because they had looked at it:

$\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} - \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}$

I made some guesses about what he would want this to mean. And how we might put that in terms of modern, conventional mathematics. I describe those in the above links. In terms of limits of functions, I got this:

$\displaystyle \lim_{x \to \infty} f(x) = \lim_{x \to \infty} -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$

The trouble is that limit took more work than I wanted to do to evaluate. If you try evaluating that ‘f(x)’ at ∞, you get an expression that looks like zero times ∞. This begs for the use of L’Hôpital’s Rule, which tells you how to find the limit for something that looks like zero divided by zero, or like ∞ divided by ∞. Do a little rewriting — replacing that first ‘x’ with ‘$\frac{1}{1 / x}$’ — and this ‘f(x)’ behaves like L’Hôpital’s Rule needs.

The trouble is, that’s a pain to evaluate. L’Hôpital’s Rule works on functions that look like one function divided by another function. It does this by calculating the derivative of the numerator function divided by the derivative of the denominator function. And I decided that was more work than I wanted to do.

Where trouble comes up is all those parts where $\frac{1}{x}$ turns up. The derivatives of functions with a lot of $\frac{1}{x}$ terms in them get more complicated than the original functions were. Is there a way to get rid of some or all of those?

And there is. Do a change of variables. Let me summon the variable ‘y’, whose value is exactly $\frac{1}{x}$. And then I’ll define a new function, ‘g(y)’, whose value is whatever ‘f’ would be at $\frac{1}{y}$. That is, and this is just a little bit of algebra:

$g(y) = -2 \cdot \frac{1}{y} \cdot 2^{\frac{1}{2} y } \cdot \sin\left(\frac{\pi}{4} y\right)$

The limit of ‘f(x)’ for ‘x’ at ∞ should be the same number as the limit of ‘g(y)’ for ‘y’ at … you’d really like it to be zero. If ‘x’ is incredibly huge, then $\frac{1}{x}$ has to be incredibly small. But we can’t just swap the limit of ‘x’ at ∞ for the limit of ‘y’ at 0. The limit of a function at a point reflects the value of the function at a neighborhood around that point. If the point’s 0, this includes positive and negative numbers. But looking for the limit at ∞ gets at only positive numbers. You see the difference?

… For this particular problem it doesn’t matter. But it might. Mathematicians handle this by taking a “one-sided limit”, or a “directional limit”. The normal limit at 0 of ‘g(y)’ is based on what ‘g(y)’ looks like in a neighborhood of 0, positive and negative numbers. In the one-sided limit, we just look at a neighborhood of 0 that’s all values greater than 0, or less than 0. In this case, I want the neighborhood that’s all values greater than 0. And we write that by adding a little + in superscript to the limit. For the other side, the neighborhood less than 0, we add a little – in superscript. So I want to evaluate:

$\displaystyle \lim_{y \to 0^+} g(y) = \lim_{y \to 0^+} -2\cdot\frac{2^{\frac{1}{2}y} \cdot \sin\left(\frac{\pi}{4} y\right)}{y}$
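Before doing any calculus, it’s worth peeking at g(y) numerically for small positive y, to watch the quotient settling down:

```python
import math

def g(y):
    """g(y) = -2 * 2^(y/2) * sin(pi y / 4) / y, for y > 0."""
    return -2 * 2 ** (y / 2) * math.sin(math.pi * y / 4) / y

# Creep toward 0 from the right and watch where the values head.
for y in (0.1, 0.001, 0.00001):
    print(y, g(y))
```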

Limits and L’Hôpital’s Rule and stuff work for one-sided limits the way they do for regular limits. So there’s that mercy. The first attempt at this limit, seeing what ‘g(y)’ is if ‘y’ happens to be 0, gives $-2 \cdot \frac{1 \cdot 0}{0}$. A zero divided by a zero is promising. That’s not defined, no, but it’s exactly the format that L’Hôpital’s Rule likes. The numerator is:

$-2 \cdot 2^{\frac{1}{2}y} \sin\left(\frac{\pi}{4} y\right)$

And the denominator is:

$y$

The first derivative of the denominator is blessedly easy: the derivative of y, with respect to y, is 1. The derivative of the numerator is a little harder. It demands the use of the Product Rule and the Chain Rule, just as last time. But these chains are easier.

The first derivative of the numerator is going to be:

$-2 \cdot 2^{\frac{1}{2}y} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} y\right) + -2 \cdot 2^{\frac{1}{2}y} \cdot \cos\left(\frac{\pi}{4} y\right) \cdot \frac{\pi}{4}$

Yeah, this is the simpler version of the thing I was trying to figure out last time. Because this is what’s left if I write the derivative of the numerator over the derivative of the denominator:

$\displaystyle \lim_{y \to 0^+} \frac{ -2 \cdot 2^{\frac{1}{2}y} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} y\right) + -2 \cdot 2^{\frac{1}{2}y} \cdot \cos\left(\frac{\pi}{4} y\right) \cdot \frac{\pi}{4} }{1}$

And now this is easy. Promise. There’s no expressions of ‘y’ divided by other expressions of ‘y’ or anything else tricky like that. There’s just a bunch of ordinary functions, all of them defined for when ‘y’ is zero. If this limit exists, it’s got to be equal to:

$\displaystyle -2 \cdot 2^{\frac{1}{2} 0} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} \cdot 0\right) + -2 \cdot 2^{\frac{1}{2} 0 } \cdot \cos\left(\frac{\pi}{4} \cdot 0\right) \cdot \frac{\pi}{4}$

$\frac{\pi}{4} \cdot 0$ is 0. And the sine of 0 is 0. The cosine of 0 is 1. So all this gets to be a lot simpler, really fast.

$\displaystyle -2 \cdot 2^{0} \cdot \log(2) \cdot \frac{1}{2} \cdot 0 + -2 \cdot 2^{ 0 } \cdot 1 \cdot \frac{\pi}{4}$

And $2^0$ is equal to 1. So the part to the left of the + sign there is all zero. What remains is:

$\displaystyle 0 + -2 \cdot \frac{\pi}{4}$
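The same arithmetic, replayed in a few lines of code as a sanity check:

```python
import math

# The two terms of the derivative quotient, evaluated at y = 0.
left = -2 * 2 ** 0 * math.log(2) * 0.5 * math.sin(0.0)  # sine term: zero
right = -2 * 2 ** 0 * math.cos(0.0) * (math.pi / 4)     # cosine term
print(left + right)  # -pi/2
```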

And so, finally, we have it. Wronski’s formula, as best I make it out, is a function whose value is …

$-\frac{\pi}{2}$

… So, what Wronski had been looking for, originally, was π. This is … oh, so very close to right. I mean, there’s π right there, it’s just multiplied by an unwanted $-\frac{1}{2}$. The question is, where’s the mistake? Was Wronski wrong to start with? Did I parse him wrongly? Is it possible that the book I copied Wronski’s formula from made a mistake?

Could be any of them. I’d particularly suspect I parsed him wrongly. I returned the library book I had got the original claim from, and I can’t find it again before this is set to publish. But I should check whether Wronski was thinking to find π, the ratio of the circumference to the diameter of a circle. Or might he have looked to find the ratio of the circumference to the radius of a circle? Either is an interesting number worth finding. We’ve settled on the circumference-over-diameter as valuable, likely for practical reasons. It’s much easier to measure the diameter than the radius of a thing. (Yes, I have read the Tau Manifesto. No, I am not impressed by it.) But if you know 2π, then you know π, or vice-versa.

The next question: yeah, but I turned up -½π. What am I talking about 2π for? And the answer there is, I’m not the first person to try working out Wronski’s stuff. You can try putting the expression, as best you parse it, into a tool like Mathematica and see what makes sense. Or you can read, for example, Quora commenters giving answers with way less exposition than I do. And I’m convinced: somewhere along the line I messed up. Not in an important way, but, essentially, doing something equivalent to dividing by -2 when I should have multiplied by it.

I’ve spotted my mistake. I figure to come back around to explaining where it is and how I made it.

## Wronski’s Formula For Pi: Two Weird Tricks For Limits That Mathematicians Keep Using

Previously:

So now a bit more on Józef Maria Hoëne-Wronski’s attempted definition of π. I had got it rewritten to this form:

$\displaystyle \lim_{x \to \infty} f(x) = \lim_{x \to \infty} -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$

And I’d tried the first thing mathematicians do when trying to evaluate the limit of a function at a point. That is, take the value of that point and put it in whatever the formula is. If that formula evaluates to something meaningful, then that value is the limit. That attempt gave this:

$-2 \cdot \infty \cdot 1 \cdot 0$

Because the limit of ‘x’, for ‘x’ at ∞, is infinitely large. The limit of ‘$2^{\frac{1}{2}\cdot\frac{1}{x}}$’ for ‘x’ at ∞ is 1. The limit of ‘$\sin(\frac{\pi}{4}\cdot\frac{1}{x})$’ for ‘x’ at ∞ is 0. We can take limits that are 0, or limits that are some finite number, or limits that are infinitely large. But multiplying a zero times an infinity is dangerous. Could be anything.

Mathematicians have a tool. We know it as L’Hôpital’s Rule. It’s named for the French mathematician Guillaume de l’Hôpital, who discovered it in the works of his tutor, Johann Bernoulli. (They had a contract giving l’Hôpital publication rights. If Wikipedia’s right, the preface of the book credited Bernoulli, although it doesn’t appear to be specifically for this. The full story is more complicated and ambiguous. The previous sentence may be said about most things.)

So here’s the first trick. Suppose you’re finding the limit of something that you can write as the quotient of one function divided by another. So, something that looks like this:

$\displaystyle \lim_{x \to a} \frac{h(x)}{g(x)}$

(Normally, this gets presented as ‘f(x)’ divided by ‘g(x)’. But I’m already using ‘f(x)’ for another function and I don’t want to muddle what that means.)

Suppose it turns out that at ‘a’, both ‘h(x)’ and ‘g(x)’ are zero, or both ‘h(x)’ and ‘g(x)’ are ∞. Zero divided by zero, or ∞ divided by ∞, looks like danger. It’s not necessarily so, though. If this limit exists, then we can find it by taking the first derivatives of ‘h’ and ‘g’, and evaluating:

$\displaystyle \lim_{x \to a} \frac{h'(x)}{g'(x)}$

That ‘ mark is a common shorthand for “the first derivative of this function, with respect to the only variable we have around here”.

This doesn’t look like it should help matters. Often it does, though. There’s an excellent chance that either ‘h'(x)’ or ‘g'(x)’ — or both — aren’t simultaneously zero, or ∞, at ‘a’. And once that’s so, we’ve got a meaningful limit. This doesn’t always work. Sometimes we have to use this l’Hôpital’s Rule trick a second time, or a third or so on. But it works so very often for the kinds of problems we like to do. It reaches the point that if it doesn’t work, we have to suspect we’re calculating the wrong thing.
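A standard textbook instance, sketched in code: sin(x)/x is a 0/0 form at zero, and the rule trades it for cos(x)/1, which is perfectly well-behaved there.

```python
import math

# h(x) = sin(x) and g(x) = x are both 0 at x = 0: a 0/0 form.
# L'Hopital's Rule says the limit equals h'(0)/g'(0) = cos(0)/1.
print(math.cos(0.0) / 1.0)  # 1.0

# And sin(x)/x really does creep toward that value:
for x in (0.1, 0.01, 0.001):
    print(x, math.sin(x) / x)
```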

But wait, you protest, reasonably. This is fine for problems where the limit looks like 0 divided by 0, or ∞ divided by ∞. What Wronski’s formula got me was 0 times 1 times ∞. And I won’t lie: I’m a little unsettled by having that 1 there. I feel like multiplying by 1 shouldn’t be a problem, but I have doubts.

That zero times ∞ thing, though? That’s easy. Here’s the second trick. Let me put it this way: isn’t ‘x’ really the same thing as $\frac{1}{ 1 / x }$?

I expect your answer is to slam your hand down on the table and glare at my writing with contempt. So be it. I told you it was a trick.

And it’s a perfectly good one. And it’s perfectly legitimate, too. $\frac{1}{x}$ is a meaningful number if ‘x’ is any finite number other than zero. So is $\frac{1}{ 1 / x }$. Mathematicians accept a definition of limit that doesn’t really depend on the value of your expression at a point. So that $\frac{1}{x}$ wouldn’t be meaningful for ‘x’ at zero doesn’t mean we can’t evaluate its limit for ‘x’ at zero. And just because we might not be sure what $\frac{1}{x}$ would mean for infinitely large ‘x’ doesn’t mean we can’t evaluate its limit for ‘x’ at ∞.

I see you, person who figures you’ve caught me. The first thing I tried was putting in the value of ‘x’ at the ∞, all ready to declare that this was the limit of ‘f(x)’. I know my caveats, though. Plugging in the value you want the limit at into the function whose limit you’re evaluating is a shortcut. If you get something meaningful, then that’s the same answer you would get finding the limit properly. Which is done by looking at the neighborhood around but not at that point. So that’s why this reciprocal-of-the-reciprocal trick works.
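Here is a small numerical sketch of the trick in action (mine, for illustration): the ∞-times-0 product $x \sin\left(\frac{1}{x}\right)$, rewritten as a 0-over-0 ratio, visibly settles toward 1.

```python
import math

# x * sin(1/x) is an infinity-times-zero form as x grows large.
# Rewritten as sin(1/x) divided by (1/x), it's a 0/0 form that
# l'Hopital's Rule can handle, and numerically it heads to 1.
def product_form(x):
    return x * math.sin(1.0 / x)

def ratio_form(x):
    return math.sin(1.0 / x) / (1.0 / x)

for x in (10.0, 1000.0, 100000.0):
    print(x, product_form(x), ratio_form(x))  # the two forms agree
```

The two columns agree because nothing has changed but the bookkeeping; that is the whole trick.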

So back to my function, which looks like this:

$\displaystyle f(x) = -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$

Do I want to replace ‘x’ with $\frac{1}{1 / x}$, or do I want to replace $\sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$ with $\frac{1}{1 / \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)}$? I was going to say something about how many times in my life I’ve been glad to take the reciprocal of the sine of an expression of x. But just writing the symbols out like that makes the case better than being witty would.

So here is a new, L’Hôpital’s Rule-friendly, version of my version of Wronski’s formula:

$\displaystyle f(x) = -2 \frac{2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)}{\frac{1}{x}}$

I put that -2 out in front because it’s not really important. The limit of a constant number times some function is the same as that constant number times the limit of that function. We can put that off to the side, work on other stuff, and hope that we remember to bring it back in later. I manage to remember it about four-fifths of the time.

So these are the numerator and denominator functions I was calling ‘h(x)’ and ‘g(x)’ before:

$h(x) = 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$

$g(x) = \frac{1}{x}$

The limit of both of these at ∞ is 0, just as we might hope. So we take the first derivatives. That for ‘g(x)’ is easy. Anyone who’s reached week three in Intro Calculus can do it. This may only be because she’s gotten bored and leafed through the formulas on the inside front cover of the textbook. But she can do it. It’s:

$g'(x) = -\frac{1}{x^2}$
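Before tackling h′, a purely numerical sanity check is cheap. This check is my own, running ahead of the essay’s argument: just evaluate the ratio for ever-bigger x and see whether it settles anywhere.

```python
import math

# h and g as defined in the essay; their ratio is f(x) with the -2 factored out.
def h(x):
    return 2.0 ** (0.5 / x) * math.sin(math.pi / (4.0 * x))

def g(x):
    return 1.0 / x

for x in (10.0, 1000.0, 100000.0):
    print(x, h(x) / g(x))
# Numerically the ratio creeps toward 0.785398... = pi/4. With the factored-out
# -2 restored, that suggests (but does not prove) a limit of -pi/2 for f.
```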

The derivative for ‘h(x)’ is a little more involved. ‘h(x)’ we can write as the product of two expressions, that $2^{\frac{1}{2}\cdot \frac{1}{x}}$ and that $\sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$. And each of those expressions contains within themselves another expression, that $\frac{1}{x}$. So this is going to require the Product Rule, of two expressions that each require the Chain Rule.

This is as far as I got with that before slamming my hand down on the table and glaring at the problem with disgust:

$h'(x) = 2^{\frac{1}{2}\frac{1}{x}} \cdot \log(2) \cdot \frac{1}{2} \cdot (-1) \cdot \frac{1}{x^2} + 2^{\frac{1}{2}\frac{1}{x}} \cdot \cos( arg ) bleah$

Yeah I’m not finishing that. Too much work. I’m going to reluctantly try thinking instead.

(If you want to do that work — actually, it isn’t much more past there, and if you followed that first half you’re going to be fine. And you’ll see an echo of it in what I do next time.)
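For the record, if you do push through the rest of that grind, you can at least check the answer numerically. The completed derivative below is my own working, not the essay’s, so trust the finite-difference comparison over my algebra.

```python
import math

def h(x):
    return 2.0 ** (0.5 / x) * math.sin(math.pi / (4.0 * x))

# The full product-rule-plus-chain-rule derivative, written out.
# With u = 1/x: h = 2^(u/2) * sin(pi*u/4), and du/dx = -1/x^2.
def h_prime(x):
    u = 1.0 / x
    inner = (2.0 ** (u / 2) * math.log(2) * 0.5 * math.sin(math.pi * u / 4)
             + 2.0 ** (u / 2) * (math.pi / 4) * math.cos(math.pi * u / 4))
    return inner * (-1.0 / x ** 2)   # chain rule factor from u = 1/x

# Central-difference check that the formula is right:
x, dx = 3.0, 1e-6
numeric = (h(x + dx) - h(x - dx)) / (2 * dx)
print(h_prime(x), numeric)  # these should agree to several decimal places
```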

## Reading the Comics, January 13, 2018: Barney Google Is Messing With My Head For Some Reason Edition

I do not know what’s possessed John Rose, cartoonist for Barney Google and Snuffy Smith — possibly the oldest syndicated comic strip not in perpetual reruns — to decide he needs to mess with my head. So far as I’m aware we haven’t ever even had any interactions. While I’ll own up to snarking about the comic strip here and there, I mean, the guy draws Barney Google and Snuffy Smith. He won’t attract the snark community of, say, Marmaduke, but he knew the job was dangerous when he took it. There’s lots of people who’ve said worse things about the comic than I ever have. He can’t be messing with them all.

There’s no mathematical content to it, but here, continuing the curious thread of Elviney and Miss Prunelly looking the same, and Elviney turning out to have a twin sister, is the revelation that Elviney’s husband also has a twin.

This means something and I don’t know what.

To mathematics:

Zach Weinersmith’s Saturday Morning Breakfast Cereal gets my attention again for the 10th. There is this famous quotation from Leopold Kronecker, one of the many 19th century German mathematicians who challenged, and set, our ideas of what mathematics is. In debates about what should count as a proof Kronecker said something translated into English as, “God created the integers, all else is the work of man”. He favored proofs that only used finite numbers, and only finitely many operations, and was skeptical of existence proofs. Those are ones that show something with desired properties must exist, without necessarily showing how to find it. Most mathematicians accept existence proofs. If you can show how to find that thing, that’s a constructive proof. Usually mathematicians like those better.

Mark Tatulli’s Heart of the City for the 11th uses a bunch of arithmetic and word problems to represent all of Dean’s homework. It all looks like reasonable homework, given my best guess about his age.

Jon Rosenberg’s Scenes From A Multiverse for the 11th is a fun, simple joke with some complex stuff behind it. It’s riffing on the kind of atheist who wants moral values to come from something in the STEM fields. So here’s a mathematical basis for some moral principles. There are, yes, ethical theories that have, or at least imply having, mathematics behind them. Utilitarianism at least supposes that ethical behavior can be described as measurable and computable quantities. Nobody actually does that except maybe to make video games more exciting. But it’s left with the idea that one could, and hope that this would lead to guidance that doesn’t go horribly wrong.

Don Asmussen’s Bad Reporter for the 12th uses knowledge of arithmetic as a signifier of intelligence. Common enough joke style.

Thom Bluemel’s Birdbrains for the 13th starts Pi Day observances early, or maybe supposed the joke would be too out of season were it to come in March.

Greg Evans and Karen Evans’s Luann for the 13th uses mathematics to try building up the villainy of one of the strip’s designated villains. Ann Eiffel, there, uses a heap of arithmetic to make her lingerie sale sound better. This isn’t simply a riff on people not wanting to do arithmetic, although I understand people not wanting to work out what five percent of a purchase of over $200 is. There’s a good deal of weird psychology in getting people to buy things. Merely naming a number, for example, gets people to “anchor” their expectations to it. To speak of a free gift worth $75 makes any purchase below $75 seem more economical. To speak of a chance to win $1,000 prepares people to think they’ve got a thousand dollars coming in, and that they can safely spend under that. It’s amazing stuff to learn about, and it isn’t all built on people being too lazy to figure out what five percent off of $220 would be.

T Lewis and Michael Fry’s Over the Hedge for the 13th uses ∞ along the way to making nonsense out of ice-skating judging. It’s a good way to make a hash of a rating system. Most anything done with infinitely large numbers or infinitely large sets challenges one’s intuition at least. This is part of what Leopold Kronecker was talking about.

## Wronski’s Formula For Pi: A First Limit

Previously:

When I last looked at Józef Maria Hoëne-Wronski’s attempted definition of π I had gotten it to this. Take the function:

$f(x) = -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$

And find its limit when ‘x’ is ∞. Formally, you want to do this by proving there’s some number, let’s say ‘L’. And ‘L’ has the property that you can pick any margin-of-error number ε that’s bigger than zero. And whatever that ε is, there’s some number ‘N’ so that whenever ‘x’ is bigger than ‘N’, ‘f(x)’ is larger than ‘L – ε’ and also smaller than ‘L + ε’. This can be a lot of mucking about with expressions to prove. Fortunately we have shortcuts.
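The ε-and-N bookkeeping is easy to play out in code. A toy sketch (mine), using the simpler claim that the limit of 1/x at ∞ is 0, where the choice N = 1/ε always answers the challenge:

```python
# The epsilon-N game for: limit of f(x) = 1/x at infinity is L = 0.
# For any margin epsilon, N = 1/epsilon works: every x > N lands in the margin.
def f(x):
    return 1.0 / x

L = 0.0
for epsilon in (0.1, 0.001, 0.00001):
    N = 1.0 / epsilon
    samples = [N * 2, N * 10, N * 1000]   # spot-check a few x beyond N
    assert all(L - epsilon < f(x) < L + epsilon for x in samples)
    print("epsilon =", epsilon, "-> N =", N, "works")
```

A real proof covers every x beyond N at once, of course; the code only spot-checks, which is exactly the gap the shortcuts below let us avoid arguing through by hand.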
There’s work we can do that gets us ‘L’, and we can rely on other proofs that show that this must be the limit of ‘f(x)’ at some value ‘a’. I use ‘a’ because that doesn’t commit me to talking about ∞ or any other particular value.

The first approach is to just evaluate ‘f(a)’. If you get something meaningful, great! We’re done. That’s the limit of ‘f(x)’ at ‘a’. This approach is called “substitution” — you’re substituting ‘a’ for ‘x’ in the expression of ‘f(x)’ — and it’s great. Except that if your problem’s interesting then substitution won’t work. Still, maybe Wronski’s formula turns out to be lucky. Fit in ∞ where ‘x’ appears and we get:

$f(\infty) = -2 \infty 2^{\frac{1}{2}\cdot \frac{1}{\infty}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{\infty}\right)$

So … all right. Not quite there yet. But we can get there. For example, $\frac{1}{\infty}$ has to be — well. It’s what you would expect if you were a kid and not worried about rigor: 0. We can make it rigorous if you like. (It goes like this: Pick any ε larger than 0. Then whenever ‘x’ is larger than $\frac{1}{\epsilon}$ then $\frac{1}{x}$ is less than ε. So the limit of $\frac{1}{x}$ at ∞ has to be 0.) So let’s run with this: replace all those $\frac{1}{\infty}$ expressions with 0. Then we’ve got:

$f(\infty) = -2 \infty 2^{0} \sin\left(0\right)$

The sine of 0 is 0. $2^0$ is 1. So substitution tells us the limit is -2 times ∞ times 1 times 0. That there’s an ∞ in there isn’t a problem. A limit can be infinitely large. Think of the limit of ‘$x^2$’ at ∞. An infinitely large thing times an infinitely large thing is fine. The limit of ‘$x e^x$’ at ∞ is infinitely large. A zero times a zero is fine; that’s zero again. But having an ∞ times a 0? That’s trouble. ∞ times something should be huge; anything times zero should be 0; which term wins?

So we have to fall back on alternate plans. Fortunately there’s a tool we have for limits when we’d otherwise have to face an infinitely large thing times a zero.
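That which-term-wins question is the whole problem. A tiny sketch (my own) of three ∞-times-0 products with three different fates shows why there is no general answer:

```python
# Three infinity-times-zero products, three different outcomes as x grows:
#   x * (1/x)     -> 1         (a draw)
#   x * (1/x**2)  -> 0         (the zero wins)
#   x**2 * (1/x)  -> infinity  (the infinity wins)
for x in (10.0, 1000.0, 100000.0):
    print(x, x * (1 / x), x * (1 / x ** 2), x ** 2 * (1 / x))
```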
I hope to write about this next time. I apologize for not getting through it today but time wouldn’t let me.

## Deciphering Wronski, Non-Standardly

I ran out of time to do my next bit on Wronski’s attempted definition of π. Next week, if all goes well. But I have something to share anyway.

The author of the Boxing Pythagoras blog was intrigued by the starting point. And as a fan of studying how people understand infinity and infinitesimals (and how they don’t), this two-century-old example of mixing the numerous and the tiny set his course. So here’s his essay, trying to work out Wronski’s beautiful weird formula from a non-standard analysis perspective.

Non-standard analysis is a field that’s grown in the last fifty years. It’s probably fairly close in spirit to what (I think) Wronski might have been getting at, too. Non-standard analysis works with ideas that seem to match many people’s intuitive feelings about infinitesimals and infinities. For example, can we speak of a number that’s larger than zero, but smaller than the reciprocal of any positive integer? It’s hard to imagine such a thing. But what if we can show that if we suppose such a number exists, then we can do logically sound work with it? If you want to say that isn’t enough to show a number exists, then I have to ask how you know imaginary numbers or negative numbers exist.

Standard analysis, you probably guessed, doesn’t do that. It developed over the 19th century when the logical problems of these kinds of numbers seemed unsolvable. Mostly that’s done by limits, showing that a thing must be true whenever some quantity is small enough, or large enough. It seems safe to trust that the infinitesimally small is small enough, and the infinitely large is large enough. And it’s not like mathematicians back then were bad at their job. Mathematicians learned a lot of things about how infinitesimals and infinities work over the late 19th and early 20th century.
It makes modern work possible.

Anyway, Boxing Pythagoras goes over what a non-standard analysis treatment of the formula suggests. I think it’s accessible even if you haven’t had much non-standard analysis in your background. At least it worked for me and I haven’t had much of the stuff. I think it’s also accessible if you’re good at following logical argument and won’t be thrown by Greek letters as variables. Most of the hard work is really arithmetic with funny letters. I recommend going and seeing if he did get to π.

## As I Try To Figure Out What Wronski Thought ‘Pi’ Was

A couple weeks ago I shared a fascinating formula for π. I got it from Carl B Boyer’s The History of Calculus and its Conceptual Development. He got it from Józef Maria Hoëne-Wronski, an early 19th-century Polish mathematician. His idea was that an absolute, culturally-independent definition of π would come not from thinking about circles and diameters but rather this formula:

$\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} - \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}$

Now, this formula is beautiful, at least to my eyes. It’s also gibberish. At least it’s ungrammatical. Mathematicians don’t like to write stuff like “four times infinity”, at least not as more than a rough draft on the way to a real thought. What does it mean to multiply four by infinity? Is arithmetic even a thing that can be done on infinitely large quantities? Among Wronski’s problems is that mathematicians of his day didn’t have a clear answer to this. We’re a little more advanced in our mathematics now. We’ve had a century and a half of rather sound treatment of infinitely large and infinitely small things. Can we save Wronski’s work?

Start with the easiest thing. I’m offended by those $\sqrt{-1}$ bits. Well, no, I’m more unsettled by them. I would rather have $\imath$ in there. The difference? … More taste than anything sound.
I prefer, if I can get away with it, using the square root symbol to mean the positive square root of the thing inside. There is no positive square root of -1, so, pfaugh, away with it. Mere style? All right, well, how do you know whether those $\sqrt{-1}$ terms are meant to be $\imath$ or its additive inverse, $-\imath$? How do you know they’re all meant to be the same one? See? … As with all style preferences, it’s impossible to be perfectly consistent. I’m sure there are times I accept a big square root symbol over a negative or a complex-valued quantity. But I’m not forced to have it here so I’d rather not. First step:

$\pi = \frac{4\infty}{\imath}\left\{ \left(1 + \imath\right)^{\frac{1}{\infty}} - \left(1 - \imath\right)^{\frac{1}{\infty}} \right\}$

Also dividing by $\imath$ is the same as multiplying by $-\imath$, so the second easy step gives me:

$\pi = -4 \imath \infty \left\{ \left(1 + \imath\right)^{\frac{1}{\infty}} - \left(1 - \imath\right)^{\frac{1}{\infty}} \right\}$

Now the hard part. All those infinities. I don’t like multiplying by infinity. I don’t like dividing by infinity. I really, really don’t like raising a quantity to the one-over-infinity power. Most mathematicians don’t. We have a tool for dealing with this sort of thing. It’s called a “limit”.

Mathematicians developed the idea of limits over … well, since they started doing mathematics. In the 19th century limits got sound enough that we still trust the idea. Here’s the rough way it works. Suppose we have a function which I’m going to name ‘f’ because I have better things to do than give functions good names. Its domain is the real numbers. Its range is the real numbers. (We can define functions for other domains and ranges, too. Those definitions look like what they do here.) I’m going to use ‘x’ for the independent variable. It’s any number in the domain. I’m going to use ‘a’ for some point. We want to know the limit of the function “at a”. ‘a’ might be in the domain.
But — and this is genius — it doesn’t have to be. We can talk sensibly about the limit of a function at some point where the function doesn’t exist.

We can say “the limit of f at a is the number L”. I hadn’t introduced ‘L’ into evidence before, but … it’s a number. It has some specific set value. Can’t say which one without knowing what ‘f’ is and what its domain is and what ‘a’ is. But I know this about it. Pick any error margin that you like. Call it ε because mathematicians do. However small this (positive) number is, there’s at least one neighborhood in the domain of ‘f’ that surrounds ‘a’. Check every point in that neighborhood other than ‘a’. The value of ‘f’ at all those points in that neighborhood other than ‘a’ will be larger than L – ε and smaller than L + ε.

Yeah, pause a bit there. It’s a tricky definition. It’s a nice common place to crash hard in freshman calculus. Also again in Intro to Real Analysis. It’s not just you. Perhaps it’ll help to think of it as a kind of mutual challenge game. Try this.

1. You draw whatever error bar, as big or as little as you like, around ‘L’.
2. But I always respond by drawing some strip around ‘a’.
3. You then pick absolutely any ‘x’ inside my strip, other than ‘a’.
4. Is f(x) always within the error bar you drew?

Suppose f(x) is. Suppose that you can pick any error bar however tiny, and I can answer with a strip however tiny, and every single ‘x’ inside my strip has an f(x) within your error bar … then, L is the limit of f at a.

Again, yes, tricky. But mathematicians haven’t found a better definition that doesn’t break something mathematicians need.

To write “the limit of f at a is L” we use the notation:

$\displaystyle \lim_{x \to a} f(x) = L$

The ‘lim’ part probably makes perfect sense. And you can see where ‘f’ and ‘a’ have to enter into it. ‘x’ here is a “dummy variable”. It’s the falsework of the mathematical expression. We need some name for the independent variable. It’s clumsy to do without.
But it doesn’t matter what the name is. It’ll never appear in the answer. If it does then the work went wrong somewhere.

What I want to do, then, is turn all those appearances of ‘∞’ in Wronski’s expression into limits of something at infinity. And having just said what a limit is I have to do a patch job. In that talk about the limit at ‘a’ I talked about a neighborhood containing ‘a’. What’s it mean to have a neighborhood “containing ∞”? The answer is exactly what you’d think if you got this question and were eight years old. The “neighborhood of infinity” is “all the big enough numbers”. To make it rigorous, it’s “all the numbers bigger than some finite number that let’s just call N”. So you give me an error bar around ‘L’. I’ll give you back some number ‘N’. Every ‘x’ that’s bigger than ‘N’ has f(x) inside your error bars. And note that I don’t have to say what ‘f(∞)’ is or even commit to the idea that such a thing can be meaningful. I only ever have to think directly about values of ‘f(x)’ where ‘x’ is some real number.

So! First, let me rewrite Wronski’s formula as a function, defined on the real numbers. Then I can replace each ∞ with the limit of something at infinity and … oh, wait a minute. There’s three ∞ symbols there. Do I need three limits?

Ugh. Yeah. Probably. This can be all right. We can do multiple limits. This can be well-defined. It can also be a right pain. The challenge-and-response game needs a little modifying to work. You still draw error bars. But I have to draw multiple strips. One for each of the variables. And every combination of values inside all those strips has to give an ‘f’ that’s inside your error bars. There’s room for great mischief. You can arrange combinations of variables that look likely to break ‘f’ outside the error bars.

So. Three independent variables, all taking a limit at ∞? That’s not guaranteed to be trouble, but I’d expect trouble. At least I’d expect something to keep the limit from existing.
That is, we could find there’s no number ‘L’ so that this drawing-neighborhoods thing works for all three variables at once. Let’s try. One of the ∞ will be a limit of a variable named ‘x’. One of them a variable named ‘y’. One of them a variable named ‘z’. Then:

$f(x, y, z) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{y}} - \left(1 - \imath\right)^{\frac{1}{z}} \right\}$

Without doing the work, my hunch is: this is utter madness. I expect it’s probably possible to make this function take on many wildly different values by the judicious choice of ‘x’, ‘y’, and ‘z’. Particularly ‘y’ and ‘z’. You maybe see it already. If you don’t, you maybe see it now that I’ve said you maybe see it. If you don’t, I’ll get there, but not in this essay.

But let’s suppose that it’s possible to make f(x, y, z) take on wildly different values like I’m getting at. This implies that there’s not any limit ‘L’, and therefore Wronski’s work is just wrong.

Thing is, Wronski wouldn’t have thought that. Deep down, I am certain, he thought the three appearances of ∞ were the same “value”. And that to translate him fairly we’d use the same name for all three appearances. So I am going to do that. I shall use ‘x’ as my variable name, and replace all three appearances of ∞ with the same variable and a common limit. So this gives me the single function:

$f(x) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{x}} - \left(1 - \imath\right)^{\frac{1}{x}} \right\}$

And then I need to take the limit of this at ∞. If Wronski is right, and if I’ve translated him fairly, it’s going to be π. Or something easy to get π from. I hope to get there next week.

## What Only One Person Ever Has Thought ‘Pi’ Means, And Who That Was

I’ve been reading Carl B Boyer’s The History of Calculus and its Conceptual Development. It’s been slow going, because reading about how calculus’s ideas developed is hard. The ideas underlying it are subtle to start with.
And the ideas have to be discussed using vague, unclear definitions. That’s not because dumb people were making arguments. It’s because these were smart people studying ideas at the limits of what we understood. When we got clear definitions we had the fundamentals of calculus understood. (By our modern standards. The future will likely see us as accepting strange ambiguities.) And I still think Boyer whiffs the discussion of Zeno’s Paradoxes in a way that mathematics and science-types usually do. (The trouble isn’t imagining that infinite series can converge. The trouble is that things are either infinitely divisible or they’re not. Either way implies things that seem false.)

Anyway. Boyer got to a part about the early 19th century. This was when mathematicians were discovering infinities and infinitesimals are amazing tools. Also that mathematicians should maybe learn if they follow any rules. Because you can just plug symbols in to formulas and grind out what looks like they might mean and get answers. Sometimes this works great. Grind through the formulas for solving cubic polynomials as though square roots of negative numbers make sense. You get good results. Later, we worked out a coherent scheme of “complex-valued numbers” that justified it all. We can get lucky with infinities and infinitesimals, sometimes.

And this brought Boyer to an argument made by Józef Maria Hoëne-Wronski. He was a Polish mathematician whose fantastic ambition in … everything … didn’t turn out many useful results. Algebra, the Longitude Problem, building a rival to the railroad, even the Kosciuszko Uprising, none quite panned out. (And that’s not quite his name. The ‘n’ in ‘Wronski’ should have an acute mark over it. But WordPress’s HTML engine doesn’t want to imagine such a thing exists. Nor do many typesetters writing calculus or differential equations books, Boyer’s included.) But anyone who studies differential equations knows his name, for a concept called the Wronskian.
It’s a matrix determinant that anyone who studies differential equations hopes they won’t ever have to do after learning it. And, says Boyer, Wronski had this notion for an “absolute meaning of the number π”. (By “absolute” Wronski means one not drawn from cultural factors like the weird human interest in circle perimeters and diameters. Compare it to the way we speak of “absolute temperature”, where the zero means something not particular to western European weather.)

$\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} - \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}$

Well. I will admit I’m not fond of “real” alternate definitions of π. They seem to me mostly to signal how clever the definition-originator is. The only one I like at all defines π as the smallest positive root of the simple-harmonic-motion differential equation. (With the right starting conditions and all that.) And I’m not sure that isn’t “circumference over diameter” in a hidden form.

And yes, that definition is a mess of early-19th-century wild, untamed casualness in the use of symbols. But I admire the crazypants beauty of it. If I ever get a couple free hours I should rework it into something grammatical. And then see if, turned into something tolerable, Wronski’s idea is something even true.

Boyer allows that “perhaps” because of the strange notation and “bizarre use of the symbol ∞” Wronski didn’t make much headway on this point. I can’t fault people for looking at that and refusing to go further. But isn’t it enchanting as it is?

## Reading the Comics, February 6, 2017: Another Pictureless Half-Week Edition

Got another little flood of mathematically-themed comic strips last week and so once again I’ll split them along something that looks kind of middle-ish. Also this is another bunch of GoComics.com-only posts.
Since those seem to be accessible to anyone, whether or not they’re subscribers, indefinitely far into the future, I don’t feel like I can put the comics directly up and will trust you all to click on the links that you find interesting. Which is fine; the new GoComics.com design makes it annoyingly hard to download a comic strip. I don’t think that was their intention. But that’s one of the two nagging problems I have with their new design. So you know.

Tony Cochran’s Agnes for the 5th sees a brand-new mathematics. Always dangerous stuff. But mathematicians do invent, or discover, new things in mathematics all the time. Part of the task is naming the things in it. That’s something which takes talent. Some people, such as Leonhard Euler, had the knack a great novelist has for putting names to things. The rest of us muddle along. Often if there’s any real-world inspiration, or resemblance to anything, we’ll rely on that. And we look for terminology that evokes similar ideas in other fields. … And, Agnes would like to know, there is mathematics that’s about approximate answers, being “right around” the desired answer. Unfortunately, that’s hard. (It’s all hard, if you’re going to take it seriously, much like everything else people do.)

Scott Hilburn’s The Argyle Sweater for the 5th is the anthropomorphic numerals joke for this essay.

Carol Lay’s Lay Lines for the 6th depicts the hazards of thinking deeply and hard about the infinitely large and the infinitesimally small. They’re hard. Our intuition seems well-suited to handling a modest bunch of household-sized things. Logic guides us when thinking about the infinitely large or small, but it takes a long time to get truly conversant and comfortable with it all.

Paul Gilligan’s Pooch Cafe for the 6th sees Poncho try to argue there’s thermodynamical reasons for not being kind. Reasoning about why one should be kind (or not) is the business of philosophers and I won’t overstep my expertise.
Poncho’s mathematics, that’s something I can write about. He argues “if you give something of yourself, you inherently have less”. That seems to be arguing for a global conservation of self-ness, that the thing can’t be created or lost, merely transferred around. That’s fair enough as a description of what the first law of thermodynamics tells us about energy. The equation he reads off reads, “the change in the internal energy (Δ U) equals the heat added to the system (Q) minus the work done by the system (W)”. Conservation laws aren’t unique to thermodynamics. But Poncho may be aware of just how universal and powerful thermodynamics is. I’m open to an argument that it’s the most important field of physics.

Jonathan Lemon’s Rabbits Against Magic for the 6th is another strip Intro to Calculus instructors can use for their presentation on instantaneous versus average velocities. There’s been a bunch of them recently. I wonder if someone at Comic Strip Master Command got a speeding ticket.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 6th is about numeric bases. They’re fun to learn about. There’s an arbitrariness in the way we represent concepts. I think we can understand better what kinds of problems seem easy and what kinds seem harder if we write them out different ways. But base eleven is only good for jokes.

## The End 2016 Mathematics A To Z: Monster Group

Today’s is one of my requested mathematics terms. This one comes to us from group theory, by way of Gaurish, and as ever I’m thankful for the prompt.

## Monster Group.

It’s hard to learn from an example. Examples are great, and I wouldn’t try teaching anything subtle without one. Might not even try teaching the obvious without one. But a single example is dangerous. The learner has trouble telling what parts of the example are the general lesson to learn and what parts are just things that happen to be true for that case. Having several examples, of different kinds of things, saves the student.
The thing in common to many different examples is the thing to retain.

The mathematics major learns group theory in Introduction To Not That Kind Of Algebra, MAT 351. A group extracts the barest essence of arithmetic: a bunch of things and the ability to add them together. So what’s an example? … Well, the integers do nicely. What’s another example? … Well, the integers modulo two, where the only things are 0 and 1 and we know 1 + 1 equals 0. What’s another example? … The integers modulo three, where the only things are 0 and 1 and 2 and we know 1 + 2 equals 0. How about another? … The integers modulo four? Modulo five?

All true. All, also, basically the same thing. The whole set of integers, or of real numbers, are different. But as finite groups, the integers modulo anything are nice, easy-to-understand groups. They’re known as Cyclic Groups for reasons I’ll explain if asked. But all the Cyclic Groups are kind of the same.

So how about another example? And here we get some good ones. There’s the Permutation Groups. These are fun. You start off with a set of things. You can label them anything you like, but you’re daft if you don’t label them the counting numbers. So, say, the set of things 1, 2, 3, 4, 5. Start with them in that order. A permutation is the swapping of any pair of those things. So swapping, say, the second and fifth things to get the list 1, 5, 3, 4, 2. The collection of all the swaps you can make is the Permutation Group on this set of things. The things in the group are not 1, 2, 3, 4, 5. The things in the permutation group are “swap the second and fifth thing” or “swap the third and first thing” or “swap the fourth and the third thing”.

You maybe feel uneasy about this. That’s all right. I suggest playing with this until you feel comfortable because it is a lot of fun to play with. Playing in this case means writing out all the ways you can swap stuff, which you can always do as a string of swaps of exactly two things.
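That sort of playing maps naturally onto a few lines of code. A sketch (mine, not from the essay): represent each swap as a little function acting on a list, and compose swaps to build up rearrangements.

```python
# Elements of the permutation group represented as swaps acting on a list.
def swap(i, j):
    """Return the permutation 'swap the i-th and j-th things' (1-indexed)."""
    def act(items):
        items = list(items)   # act on a copy; the original stays put
        items[i - 1], items[j - 1] = items[j - 1], items[i - 1]
        return items
    return act

start = [1, 2, 3, 4, 5]
print(swap(2, 5)(start))                  # the essay's example: [1, 5, 3, 4, 2]

# Composing swaps gives other rearrangements, e.g. a 3-cycle from two swaps:
print(swap(1, 2)(swap(2, 3)(start)))      # [3, 1, 2, 4, 5]
```

Every rearrangement of the list is reachable this way, which is the fact behind “a string of swaps of exactly two things”.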
(Some people may remember an episode of Futurama that involved a brain-swapping machine. Or a body-swapping machine, if you prefer. The gimmick of the episode is that two people could only swap bodies/brains exactly one time. The problem was how to get everybody back in their correct bodies. It turns out to be possible to do, and one of the show’s writers did write a proof of it. It’s shown on-screen for a moment. Many fans were awestruck by an episode of the show inspiring a Mathematical Theorem. They’re overestimating how rare theorems are. But it is fun when real mathematics gets done as a side effect of telling a good joke. Anyway, the theorem fits well in group theory and the study of these permutation groups.)

So the student wanting examples of groups can get the Permutation Group on three elements. Or the Permutation Group on four elements. The Permutation Group on five elements. … You kind of see, this is certainly different from those Cyclic Groups. But they’re all kind of like each other.

An “Alternating Group” is one where all the elements in it are an even number of swaps. So, “swap the second and fifth things” would not be in an alternating group. But “swap the second and fifth things, and swap the fourth and second things” would be. And so the student needing examples can look at the Alternating Group on two elements. Or the Alternating Group on three elements. The Alternating Group on four elements. And so on. It’s slightly different from the Permutation Group. It’s certainly different from the Cyclic Group. But still, if you’ve mastered the Alternating Group on five elements you aren’t going to see the Alternating Group on six elements as all that different.

Cyclic Groups and Alternating Groups have some stuff in common. Permutation Groups not so much and I’m going to leave them in the above paragraph, waving, since they got me to the Alternating Groups I wanted. One is that they’re finite. At least they can be. I like finite groups.
I imagine students like them too. It’s nice having a mathematical thing you can write out in full and know you aren’t missing anything. The second thing is that they are, or they can be, “simple groups”. That’s … a challenge to explain. This has to do with the structure of the group and the kinds of subgroup you can extract from it. It’s very, very loosely and figuratively and do-not-try-to-pass-this-off-at-your-thesis-defense kind of like being a prime number. In fact, Cyclic Groups for a prime number of elements are simple groups. So are Alternating Groups on five or more elements.

So we get to wondering: what are the finite simple groups? Turns out they come in four main families. One family is the Cyclic Groups for a prime number of things. One family is the Alternating Groups on five or more things. One family is this collection called the Chevalley Groups. Those are mostly things about projections: the ways to map one set of coordinates into another. We don’t talk about them much in Introduction To Not That Kind Of Algebra. They’re too deep into Geometry for people learning Algebra. The last family is this collection called the Twisted Chevalley Groups, or the Steinberg Groups. And they … uhm. Well, I never got far enough into Geometry I’m Guessing to understand what they’re for. I’m certain they’re quite useful to people working in the field of order-three automorphisms of the whatever exactly D4 is.

And that’s it. That’s all the families there are. If it’s a finite simple group then it’s one of these. … Unless it isn’t. Because there are a couple of stragglers. There are a few finite simple groups that don’t fit in any of the four big families. And it really is only a few. I would have expected an infinite number of weird little cases that don’t belong to any family of similar-looking groups. Instead, there are 26. (27 if you decide a particular one of the Steinberg Groups doesn’t really belong in that family.
I’m not familiar enough with the case to have an opinion.) Funny number to have turn up. It took ten thousand pages to prove there were just the 26 special cases. I haven’t read them all. (I haven’t read any of the pages. But my Algebra professors at Rutgers were proud to mention their department’s work in tracking down all these cases.)

Some of these cases have some resemblance to one another. But not enough to see them as a family the way the Cyclic Groups are. We bundle all these together in a wastebasket taxon called “the sporadic groups”. The first five of them were worked out in the 1860s and 1870s. The last of them was worked out in 1980, seven years after its existence was first suspected.

The sporadic groups all have weird sizes. The smallest one, known as M11 (for “Mathieu”, who found it and four of its siblings in the 1860s and 1870s) has 7,920 things in it. They get enormous soon after that.

The biggest of the sporadic groups, and the last one described, is the Monster Group. It’s known as M. It has a lot of things in it. In particular it’s got 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 things in it. So, you know, it’s not like we’ve written out everything that’s in it. We’ve just got descriptions of how you would write out everything in it, if you wanted to try. And you can get a good argument going about what it means for a mathematical object to “exist”, or to be “created”. There are something like $10^{54}$ things in it. That’s something like a trillion times a trillion times the number of stars in the observable universe. Not just the stars in our galaxy, but all the stars in all the galaxies we could in principle ever see. It’s one of the rare things for which “Brobdingnagian” is an understatement. Everything about it is mind-boggling, the sort of thing that staggers the imagination more than infinitely large things do. We don’t really think of infinitely large things; we just picture “something big”.
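For what it’s worth, that order has a well-known prime factorization, and a few lines of code (a side check of my own, not part of the original essay) can multiply it back out:

```python
from math import prod

# The Monster Group's order, from its standard prime factorization.
factorization = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
                 17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1,
                 47: 1, 59: 1, 71: 1}
order = prod(p ** e for p, e in factorization.items())

print(order)            # 808017424794512875886459904961710757005754368000000000
print(len(str(order)))  # 54 digits, so on the order of 10**54
```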
A number like that one above is definite, and awesomely big. Just read off the digits of that number; it sounds like what we imagine infinity ought to be.

We can make a chart, called the “character table”, which describes how subsets of the group interact with one another. The character table for the Monster Group is 194 rows tall and 194 columns wide. The Monster Group can be represented as this, I am solemnly assured, logical and beautiful algebraic structure. It’s something like a polyhedron in rather more than three dimensions of space. In particular it needs 196,884 dimensions to show off its particular beauty. I am taking experts’ word for it. I can’t quite imagine more than 196,883 dimensions for a thing.

And it’s a thing full of mystery. This creature of group theory makes us think of the number 196,884. The same 196,884 turns up in number theory, the study of how integers are put together. It’s the first non-boring coefficient in a thing called the j-function. It’s not a coincidence. This bit of number theory and this bit of group theory are bound together, but it took some years for anyone to quite understand why.

There are more mysteries. The character table has 194 rows and columns. Each column implies a function. Some of those functions are duplicated; there are 171 distinct ones. But some of the distinct ones it turns out you can find by adding together multiples of others. There are 163 distinct ones. 163 appears again in number theory, in the study of algebraic integers. These are, of course, not integers at all. They’re things that look like complex-valued numbers: some real number plus some (possibly other) real number times the square root of some specified negative number. They’ve got neat properties. Or weird ones. You know how with integers there’s just one way to factor them? Like, fifteen is equal to three times five and no other set of prime numbers? Algebraic integers don’t work like that. There are usually multiple ways to factor them.
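A standard illustration of this, mine rather than the essay’s: among the numbers $a + b\sqrt{-5}$ with whole-number a and b, six factors in two genuinely different ways.

```python
def mul(x, y):
    """Multiply pairs (a, b) standing for a + b*sqrt(-5)."""
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    """N(a + b*sqrt(-5)) = a^2 + 5*b^2.  Norms multiply."""
    a, b = x
    return a * a + 5 * b * b

# Six, two ways: 2 * 3 and (1 + sqrt(-5)) * (1 - sqrt(-5)).
print(mul((2, 0), (3, 0)))   # (6, 0)
print(mul((1, 1), (1, -1)))  # (6, 0)

# No element of this ring has norm 2 or 3 (a^2 + 5*b^2 skips both values),
# so 2, 3, and 1 +/- sqrt(-5) are all irreducible: the two factorizations
# really are different.
print(norm((2, 0)), norm((3, 0)), norm((1, 1)), norm((1, -1)))  # 4 9 6 6
```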
There are exceptions, algebraic integers that still have unique factorings. They happen only for a few square roots of negative numbers. The biggest of those negative numbers? Minus 163. I don’t know if this 163 appearance means something. As I understand the matter, neither does anybody else. There is some link to the mathematics of string theory. That’s an interesting but controversial and hard-to-experiment-upon model for how the physics of the universe may work. But I don’t know string theory well enough to say what it is or how surprising this should be.

The Monster Group creates a monster essay. I suppose it couldn’t do otherwise. I suppose I can’t adequately describe all its sublime mystery. Dr Mark Ronan has written a fine web page describing much of the Monster Group and the history of our understanding of it. He also has written a book, Symmetry and the Monster, to explain all this in greater depth. I’ve not read the book. But I do mean to, now.

## The End 2016 Mathematics A To Z: Local

Today’s is another of those words that means nearly what you would guess. There are still seven letters left, by the way, which haven’t had any requested terms. If you’d like something described please try asking.

## Local.

Stops at every station, rather than just the main ones.

OK, I’ll take it seriously. So a couple years ago I visited Niagara Falls, and I stepped into the river, just above the really big drop. I didn’t have any plans to go over the falls, and didn’t, but I liked the thrill of claiming I had. I’m not crazy, though; I picked a spot I knew was safe to step in. It’s only in the retelling that I “went into the Niagara River just above the falls”. Because yes, there is surely danger in certain spots of the Niagara River. But there are also spots that are perfectly safe. And not isolated spots either. I wouldn’t have been less safe if I’d stepped into the river a few feet closer to the edge. Nor if I’d stepped in a few feet farther away.
Where I stepped in was locally safe.

Over in mathematics we do a lot of work on stuff that’s true or false depending on what some parameters are. We can look at bunches of those parameters, and they often look something like normal everyday space. There are some values that are close to what we started from. There are others that are far from that.

So, a “neighborhood” of some point is that point and some set of points containing it. It needs to be an “open” set, which means it doesn’t contain its boundary. So, like, everything less than one minute’s walk away, but not the stuff that’s precisely one minute’s walk away. (If we include boundaries we break stuff that we don’t want broken is why.) And certainly not the stuff more than one minute’s walk away. A neighborhood could have any shape. It’s easy to think of it as a little disc around the point you want. That’s usually the easiest to describe in a proof, because it’s “everything a distance less than (something) away”. (That “something” is either ‘δ’ or ‘ε’. Both Greek letters are called in to mean “a tiny distance”. They have different connotations about what the tiny distance is in.) It’s easiest to draw as a little amoeba-like blob around a point, and contained inside a bigger amoeba-like blob.

Anyway, something is true “locally” to a point if it’s true in that neighborhood. That means true for everything in that neighborhood. Which is what you’d expect. “Local” means just that. It’s the stuff that’s close to where we started out. Often we would like to know something “globally”, which means … er … everywhere. Universally so. But it’s usually easier to prove a thing locally. I suppose having a point where we know something is so makes it easier to prove things about what’s nearby. Distant stuff, who knows?

“Local” serves as an adjective for many things. We think of a “local maximum”, for example, or “local minimum”. This is where whatever we’re studying has a value bigger (or smaller) than it has anywhere else nearby.
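A quick numeric sketch of that idea (my example, not the essay’s): $f(x) = x^3 - 3x$ tops out at $x = -1$ compared with everything nearby, while having no global maximum at all.

```python
def f(x):
    return x ** 3 - 3 * x

# A neighborhood of x = -1: everything within 0.5 of it, finely sampled.
neighborhood = [-1 + 0.001 * k for k in range(-500, 501)]

print(all(f(x) <= f(-1) for x in neighborhood))  # True: a local maximum
print(f(10) > f(-1))                             # True: globally, not even close
```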
Or we speak of a function being “locally continuous”, meaning that we know it’s continuous near this point and we make no promises away from it. It might be “locally differentiable”, meaning we can take derivatives of it close to some interesting point. We say nothing about what happens far from it.

Unless we do. We can talk about something being “local to infinity”. Your first reaction to that should probably be to slap the table and declare that’s it, we’re done. But we can make it sensible, at least to other mathematicians. We do it by starting with a neighborhood that contains the origin, zero, that point in the middle of everything. So, what’s the complement of that? It’s everything that’s far enough away from the origin. (Don’t include the boundary, we don’t need those headaches.) So why not call that the “neighborhood of infinity”? Other than that it’s a weird set of words to put together? And if something is true in that “neighborhood of infinity”, what is that thing other than true “local to infinity”? I don’t blame you for being skeptical.

## The End 2016 Mathematics A To Z: Cantor’s Middle Third

Today’s term is a request, the first of this series. It comes from HowardAt58, head of the Saving School Math blog. There are many letters not yet claimed; if you have a term you’d like to see me write about please head over to the “Any Requests?” page and pick a letter. Please not one I figure to get to in the next day or two.

## Cantor’s Middle Third.

I think one could make a defensible history of mathematics by describing it as a series of ridiculous things that get discovered. And then, by thinking about these ridiculous things long enough, mathematicians come to accept them. Even rely on them. Sometime later the public even comes to accept them. I don’t mean to say getting people to accept ridiculous things is the point of mathematics. But there is a pattern which happens. Consider.
People doing mathematics came to see how a number could be detached from a count or a measure of things. That we can do work on, say, “three” whether it’s three people, three kilograms, or three square meters. We’re so used to this it’s only when we try teaching mathematics to the young we realize it isn’t obvious. Or consider that we can have, rather than a whole number of things, a fraction. Some part of a thing, as if you could have one-half pieces of chalk or two-thirds a fruit. Counting is relatively obvious; fractions are something novel but important.

We have “zero”; somehow, the lack of something is still a number, the way two or five or one-half might be. For that matter, “one” is a number. How can something that isn’t numerous be a number? We’re used to it anyway. We can have not just fractions and one and zero but irrational numbers, ones that can’t be represented as a fraction. We have negative numbers, somehow a lack of whatever we were counting so great that we might add some of what we were counting to the pile and still have nothing. That takes us up to about eight hundred years ago or something like that. The public’s gotten to accept all this as recently as maybe three hundred years ago. They’ve still got doubts. I don’t blame folks. Complex numbers, mathematicians like; the public’s still getting used to the idea, but at least they’ve heard of them.

Cantor’s Middle Third is part of the current edge. It’s something mathematicians are aware of and that defies sense, at least at first. But we’ve come to accept it. The public, well, they don’t know about it. Maybe some do; it turns up in pop mathematics books that like sharing the strangeness of infinities. Few people read them. Sometimes it feels like all those who do go online to tell mathematicians they’re crazy.

It comes to us, as you might guess from the name, from Georg Cantor. Cantor established the modern mathematical concept of how to study infinitely large sets in the late 19th century.
And he was repeatedly hospitalized for depression. It’s cruel to write all that off as “and he was crazy”. His work’s withstood a hundred and thirty-five years of extremely smart people looking at it skeptically.

The Middle Third starts out easily enough. Take a line segment. Then chop it into three equal pieces and throw away the middle third. You see where the name comes from. What do you have left? Some of the original line. Two-thirds of the original line length. A big gap in the middle.

Now take the two line segments. Chop each of them into three equal pieces. Throw away the middle thirds of the two pieces. Now we’re left with four chunks of line and four-ninths of the original length. One big and two little gaps in the middle.

Now take the four little line segments. Chop each of them into three equal pieces. Throw away the middle thirds of the four pieces. We’re left with eight chunks of line, eight-twenty-sevenths of the original length. Lots of little gaps.

Keep doing this, chopping up line segments and throwing away middle pieces. Never stop. Well, pretend you never stop and imagine what’s left.

What’s left is deeply weird. What’s left has no length, no measure. That’s easy enough to prove. But we haven’t thrown everything away. There are bits of the original line segment left over. The left endpoint of the original line is left behind. So is the right endpoint of the original line. The endpoints of the line segments after the first time we chopped out a third? Those are left behind. The endpoints of the line segments after chopping out a third the second time, the third time? Those have to be in the set. We have a dust, isolated little spots of the original line, none of them combining together to cover any length. And there are infinitely many of these isolated dots.

We’ve seen that before. At least we have if we’ve read anything about the Cantor Diagonal Argument. You can find that among the first ten posts of every mathematics blog.
(Not this one. I was saving the subject until I had something good to say about it. Then I realized many bloggers have covered it better than I could.) Part of it is pondering how there can be a set of infinitely many things that don’t cover any length. The whole numbers are such a set and it seems reasonable they don’t cover any length. The rational numbers, though, are also an infinitely-large set that doesn’t cover any length. And there’s exactly as many rational numbers as there are whole numbers. This is unsettling but if you’re the sort of person who reads about infinities you come to accept it. Or you get into arguments with mathematicians online and never know you’ve lost.

Here’s where things get weird. How many bits of dust are there in this middle third set? It seems like it should be countable, the same size as the whole numbers. After all, we pick up some of these points every time we throw away a middle third. So we double the number of points left behind every time we throw away a middle third. That’s countable, right?

It’s not. We can prove it. The proof looks uncannily like that of the Cantor Diagonal Argument. That’s the one that proves there are more real numbers than there are whole numbers. There are points in this leftover set that were not endpoints of any of these middle-third excisions. This dust has more points in it than there are rational numbers, but it covers no length. (The dust does, in fact, have the same size as the real numbers: its points match up with the ternary expansions that use only the digits 0 and 2.)

It’s got other neat properties. It’s a fractal, which is why someone might have heard of it, back in the Great Fractal Land Rush of the 80s and 90s. Look closely at part of this set and it looks like the original set, with bits of dust edging gaps of bigger and smaller sizes.
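The chopping itself is easy to sketch in code. A small illustration of mine (exact fractions keep the lengths honest):

```python
from fractions import Fraction

def remove_middle_thirds(segments):
    """One round of chopping: replace each segment by its left and right thirds."""
    out = []
    for a, b in segments:
        third = (b - a) / 3
        out.append((a, a + third))
        out.append((b - third, b))
    return out

segments = [(Fraction(0), Fraction(1))]
for _ in range(3):
    segments = remove_middle_thirds(segments)

print(len(segments))                    # 8 chunks of line
print(sum(b - a for a, b in segments))  # 8/27 of the original length
```

Each round doubles the chunk count and multiplies the surviving length by two-thirds, which is how the length dwindles to nothing while the dust never runs out.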
It’s got a fractal dimension, or “Hausdorff dimension” in the lingo, that’s the logarithm of two divided by the logarithm of three. That’s a number actually known to be transcendental, which is reassuring. Nearly all numbers are transcendental, but we only know a few examples of them.

HowardAt58 asked me about the Middle Third set, and that’s how I’ve referred to it here. It’s more often called the “Cantor set” or “Cantor comb”. The “comb” makes sense because if you draw successive middle-thirds-thrown-away, one after the other, you get something that looks kind of like a hair comb, if you squint.

You can build sets like this that aren’t based around thirds. You can, for example, develop one by cutting lines into five chunks and throwing away the second and fourth. You get results that are similar, and similarly heady, but different. They’re all astounding. They’re all hard to believe in yet. They may get to be stuff we just accept as part of how mathematics works.

## Reading the Comics, October 19, 2016: An Extra Day Edition

I didn’t make noise about it, but last Sunday’s mathematics comic strip roundup was short one day. I was away from home and normal computer stuff Saturday. So I posted without that day’s strips under review. There was just the one, anyway. Also I want to remind folks I’m doing another Mathematics A To Z, and taking requests for words to explain. There are many appealing letters still unclaimed, including ‘A’, ‘T’, and ‘O’. Please put requests in over on that page, because it’s easier for me to keep track of what’s been claimed that way.

Matt Janz’s Out of the Gene Pool rerun for the 15th missed last week’s cut. It does mention the Law of Cosines, which is what the Pythagorean Theorem looks like if you don’t have a right triangle. You still have to have a triangle. Bobby-Sue recites the formula correctly, if you know the notation. The formula’s $c^2 = a^2 + b^2 - 2 a b \cos\left(C\right)$.
Here ‘a’ and ‘b’ and ‘c’ are the lengths of the sides of the triangle. ‘C’, the capital letter, is the size of the angle opposite the side with length ‘c’. That’s a common notation. ‘A’ would be the size of the angle opposite the side with length ‘a’. ‘B’ is the size of the angle opposite the side with length ‘b’.

The Law of Cosines is a generalization of the Pythagorean Theorem. It’s a result that tells us something like the original theorem but for cases the original theorem can’t cover. And if it happens to be a right triangle the Law of Cosines gives us back the original Pythagorean Theorem. In a right triangle C is the size of a right angle, and the cosine of that is 0, so the $- 2 a b \cos\left(C\right)$ term vanishes.

That said, Bobby-Sue is being fussy about the drawings. No geometrical drawing is ever perfectly right. The universe isn’t precise enough to let us draw a right triangle. Come to it we can’t even draw a triangle, not really. We’re meant to use these drawings to help us imagine the true, Platonic ideal, figure. We don’t always get there. Mock proofs, the kind of geometric puzzle showing something we know to be nonsense, rely on that. Give chalkboard art a break.

Samson’s Dark Side of the Horse for the 17th is the return of Horace-counting-sheep jokes. So we get a π joke. I’m amused, although I couldn’t sleep trying to remember digits of π out quite that far. I do better working out Collatz sequences.

Hilary Price’s Rhymes With Orange for the 19th at least shows the attempt to relieve mathematics anxiety. I’m sympathetic. It does seem like there should be ways to relieve this (or any other) anxiety, but finding which ones work, and which ones work best, is partly a mathematical problem. As often happens with Price’s comics I’m particularly tickled by the gag in the title panel.

Norm Feuti’s Gil rerun for the 19th builds on the idea calculators are inherently cheating on arithmetic homework. I’m sympathetic to both sides here.
If Gil just wants to know that his answers are right there’s not much reason not to use a calculator. But if Gil wants to know that he followed the right process then the calculator’s useless. By the right process I mean, well, the work to be done. Did he start out trying to calculate the right thing? Did he pick an appropriate process? Did he carry out all the steps in that process correctly? If he made mistakes on any of those he probably didn’t get to the right answer, but it’s not impossible that he would. Sometimes multiple errors conspire and cancel one another out. That may not hurt you with any one answer, but it does mean you aren’t doing the problem right and a future problem might not be so lucky.

Zach Weinersmith’s Saturday Morning Breakfast Cereal rerun for the 19th has God crashing a mathematics course to proclaim there’s a largest number. We can suppose there is such a thing. That’s how arithmetic modulo a number is done, for one. It can produce weird results in which stuff we just naturally rely on doesn’t work anymore. For example, in ordinary arithmetic we know that if one number times another equals zero, then either the first number or the second, or both, were zero. We use this in solving polynomials all the time. But in arithmetic modulo 8 (say), 4 times 2 is equal to 0.

And if we recklessly talk about “infinity” as a number then we get outright crazy results, some of them teased in Weinersmith’s comic. “Infinity plus one”, for example, is “infinity”. So is “infinity minus one”. And “infinity minus infinity”, if we set things up right, is “infinity”, or maybe zero, or really any number you want. We can avoid these logical disasters — so far, anyway — by being careful. We have to understand that “infinity” is not a number, though we can use numbers growing infinitely large.

Induction, meanwhile, is a great, powerful, yet baffling form of proof. When it solves a problem it solves it beautifully.
And easily, too, usually by doing something like testing two special cases. Maybe three. At least a couple special cases of whatever you want to know. But picking the cases, and setting them up so that the proof is valid, is not easy. There are logical pitfalls, and it is so hard to learn how to avoid them.

Jon Rosenberg’s Scenes from a Multiverse for the 19th plays on a wonderful paradox of randomness. Randomness is … well, unpredictable. If I tried to sell you a sequence of random numbers and they were ‘1, 2, 3, 4, 5, 6, 7’ you’d be suspicious at least. And yet, perfect randomness will sometimes produce patterns. If there were no little patches of order we’d have reason to suspect the randomness was faked. There is no reason that a message like “this monkey evolved naturally” couldn’t be encoded into a genome by chance. It may just be so unlikely we don’t buy it. The longer the patch of order the less likely it is. And yet, incredibly unlikely things do happen. The study of impossibly unlikely events is a good way to quickly break your brain, in case you need one.

## Reading the Comics, September 24, 2016: Infinities Happen Edition

I admit it’s a weak theme. But two of the comics this week give me reason to talk about infinitely large things and how the fact of being infinitely large affects the probability of something happening. That’s enough for a mid-September week of comics.

Kieran Meehan’s Pros and Cons for the 18th of September is a lottery problem. There’s a fun bit of mathematical philosophy behind it. Supposing that a lottery runs long enough without changing its rules, and that it does draw its numbers randomly, it does seem to follow that any valid set of numbers will come up eventually. At least, the probability is 1 that the pre-selected set of numbers will come up if the lottery runs long enough. But that doesn’t mean it’s assured. There’s not any law, physical or logical, compelling every set of numbers to come up.
But that is exactly akin to tossing a coin fairly infinitely many times and having it come up tails every single time. There’s no reason that can’t happen; it’s just that the probability of its happening is zero.

Leigh Rubin’s Rubes for the 19th name-drops chaos theory. It’s wordplay, as of course it is, since the mathematical chaos isn’t the confusion-and-panicky-disorder of the colloquial term. Mathematical chaos is about the bizarre idea that a system can follow exactly perfectly known rules, and yet still be impossible to predict. Henri Poincaré brought this disturbing possibility to mathematicians’ attention in the 1890s, in studying the question of whether the solar system is stable. But it lay mostly fallow until the 1960s when computers made it easy to work this out numerically and really see chaos unfold. The mathematician type in the drawing evokes Einstein without being too close to him, to my eye.

Allison Barrows’s PreTeena rerun of the 20th shows some motivated calculations. It’s always fun to see people getting excited over what a little multiplication can do. Multiplying a little change by a lot of chances is one of the ways to understand integral calculus, and there’s much that’s thrilling in that. But cutting four hours a night of sleep is not a little thing and I wouldn’t advise it for anyone.

Jason Poland’s Robbie and Bobby for the 20th riffs on Jorge Luis Borges’s Library of Babel. It’s a great image, the idea of the library containing every book possible. And it’s good mathematics also; it’s a good way to probe one’s understanding of infinity and of probability. Probably logic, also. After all, grant that the index to the Library of Babel is a book, and therefore in the library somehow. How do you know you’ve found the index that hasn’t got any errors in it?

Ernie Bushmiller’s Nancy Classics for the 21st originally ran the 21st of September, 1949. It’s another example of arithmetic as a proof of intelligence.
Routine example, although it’s crafted with the usual Bushmiller precision. Even the close-up, peering-into-your-soul image of Professor Stroodle in the second panel serves the joke; without it the stress on his wrinkled brow would be diffused. I can’t fault anyone not caring for the joke; it’s not much of one. But wow is the comic strip optimized to deliver it.

Thom Bluemel’s Birdbrains for the 23rd is also a mathematics-as-proof-of-intelligence strip, although this one name-drops calculus. It’s also a strip that probably would have played better had it come out before Blackfish got people asking unhappy questions about Sea World and other aquariums keeping large, deep-ocean animals. I would’ve thought Comic Strip Master Command would have sent an advisory out on the topic.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 23rd is, among other things, a guide for explaining the difference between speed and velocity. Speed’s a simple number, a scalar in the parlance. Velocity is (most often) a two- or three-dimensional vector, a speed in some particular direction. This has implications for understanding how things move, such as pedestrians.

## L’Hopital’s Rule Without End: Is That A Thing?

I was helping a friend learn L’Hôpital’s Rule. This is a Freshman Calculus thing. (A different one from last week, it happens. Folks are going back to school, I suppose.) The friend asked me a point I thought shouldn’t come up. I’m certain it won’t come up in the exam my friend was worried about, but I couldn’t swear it wouldn’t happen at all. So this is mostly a note to myself to think it over and figure out whether the trouble could come up. And also so this won’t be my most accessible post; I’m sorry for that, for folks who aren’t calculus-familiar.

L’Hôpital’s Rule is a way of evaluating the limit of one function divided by another, of f(x) divided by g(x).
If the limit of $\frac{f(x)}{g(x)}$ has either the form of $\frac{0}{0}$ or $\frac{\infty}{\infty}$ then you’re not stuck. You can take the first derivative of the numerator and the denominator separately. The limit of $\frac{f'(x)}{g'(x)}$, if it exists, will be the same value.

But it’s possible to have to do this several times over. I used the example of finding the limit, as x grows infinitely large, where $f(x) = x^2$ and $g(x) = e^x$. $\frac{x^2}{e^x}$ goes to $\frac{\infty}{\infty}$ as x grows infinitely large. The first derivatives, $\frac{2x}{e^x}$, also go to $\frac{\infty}{\infty}$. You have to repeat the process, taking the first derivatives of numerator and denominator again. $\frac{2}{e^x}$ finally goes to 0 as x gets infinitely large. You might have to do this a bunch of times. If f(x) were $x^7$ and g(x) again $e^x$ you’d properly need to do this seven times over. With experience you figure out you can skip some steps. Of course students don’t have the experience to know they can skip ahead to the punch line there, but that’s what the practice in homework is for.

Anyway, my friend asked whether it’s possible to get a pattern that always ends up with $\frac{0}{0}$ or $\frac{\infty}{\infty}$ and never breaks out of this. And that’s what’s got me stuck. I can think of a few patterns that would. Start out, for example, with $f(x) = e^{3x}$ and $g(x) = e^{2x}$. Properly speaking, that would never end. You’d get an infinity-over-infinity pattern every derivative you took. Similarly, if you started with $f(x) = \frac{1}{x}$ and $g(x) = e^{-x}$ you’d never come to an end. As x got infinitely large both f(x) and g(x) would go to zero and all their derivatives would go to zero over and over and over and over again.

But those are special cases. Anyone looking at what they were doing instead of just calculating would look at, say, $\frac{e^{3x}}{e^{2x}}$ and realize that’s the same as $e^x$ which falls out of the L’Hôpital’s Rule formulas.
Or $\frac{\frac{1}{x}}{e^{-x}}$ would be the same as $\frac{e^x}{x}$ which is an infinity-over-infinity form. But it takes only one derivative to break out of the infinity-over-infinity pattern.

So I can construct examples that never break out of a zero-over-zero or an infinity-over-infinity pattern if you calculate without thinking. And calculating without thinking is a common problem students have. Arguably it’s the biggest problem mathematics students have. But what I wonder is, are there ratios that end up in an endless zero-over-zero or infinity-over-infinity pattern even if you do think it out? And thus this note; I’d like to nag myself into thinking about that.

## Reading the Comics, July 6, 2016: Another Busy Week Edition

It’s supposed to be the summer vacation. I don’t know why Comic Strip Master Command is so eager to send me stuff. Maybe my standards are too loose. This doesn’t even cover all of last week’s mathematically-themed comics. I’ll need another that I’ve got set for Tuesday. I don’t mind.

Corey Pandolph and Phil Frank and Joe Troise’s The Elderberries rerun for the 3rd features one of my favorite examples of applied probability. The game show Deal or No Deal offered contestants the prize within a suitcase they picked, or a dealer’s offer. The offer would vary up or down as non-selected suitcases were opened, giving the chance for people to second-guess themselves. It also makes a good redemption game. The banker’s offer would typically be less than the expectation value, what you’d get on average from all the available suitcases. But now and then the dealer offered more than the expectation value and I got all ready to yell at the contestants. This particular strip focuses on a smaller question: can you pick which of the many suitcases held the grand prize? And with the right setup, yes, you can pick it reliably.

Mac King and Bill King’s Magic in a Minute for the 3rd uses a bit of arithmetic to support a mind-reading magic trick.
The instructions say to start with a number from 1 to 10 and do various bits of arithmetic which lead inevitably to 4. You can prove that for an arbitrary number, or you can just try it for all ten numbers. That’s tedious but not hard and it’ll prove the inevitability of 4 here. There aren’t many countries with names that start with ‘D’; Denmark’s surely the one any American (or European) reader is likeliest to name. But Dominica, the Dominican Republic, and Djibouti would also be answers. (List Of Countries Of The World.com also lists Dhekelia, which I never heard of either.) Anyway, with Denmark forced, ‘E’ almost begs for ‘elephant’. I suppose ‘emu’ would do too, or ‘echidna’. And ‘elephant’ almost forces ‘grey’ for a color, although ‘white’ would be plausible too. A magician has to know how things like this work.

Werner Wejp-Olsen’s feature Inspector Danger’s Crime Quiz for the 4th features a mathematician as victim of the day’s puzzle murder. I admit I’m skeptical of deathbed identifications of murderers like this, but it would spoil a lot of puzzle mysteries if we disallowed them. (Does anyone know how often a deathbed identification actually happens?) I can’t make the alleged answer make any sense to me. Danger of the trade in murder puzzles.

Kris Straub’s Starship for the 4th uses mathematics as a stand-in for anything that’s hard to study and solve. I’m amused.

John Hambrock’s The Brilliant Mind of Edison Lee for the 6th is about the existentialist dread mathematics can inspire. Suppose there is a chance, within any given volume of space, of Earth being made. Well, it happened at least once, didn’t it? If the universe is vast enough, it seems hard to argue that there wouldn’t be two or three or, really, infinitely many versions of Earth. It’s a chilling thought. But it requires some big suppositions, most importantly that the universe actually is infinite. The observable universe, the one we can ever get a signal from, certainly isn’t.
The entire universe including the stuff we can never get to? I don’t know that that’s infinite. I wouldn’t be surprised if it’s impossible to say, for good reason. Anyway, I’m not worried about it.

Jim Meddick’s Monty for the 6th is part of a storyline in which Monty is worshipped by tiny aliens who resemble him. They’re a bit nerdy, and calculate before they understand the relevant units. It’s a common mistake. Understand the problem before you start calculating.

## Reading the Comics, June 25, 2016: What The Heck, Why Not Edition

I had figured to do Reading the Comics posts weekly, and then last week went and gave me too big a flood of things to do. I have no idea what the rest of this week is going to look like. But given that I had four strips dated before last Sunday I’m going to err on the side of posting too much about comic strips.

Scott Metzger’s The Bent Pinky for the 24th uses mathematics as something that dogs can be adorable about not understanding. Thus all the heads tilted, as if it were me in a photograph. The graph here is from economics, which has long had a challenging relationship with mathematics. This particular graph is qualitative; it doesn’t exactly match anything in the real world. But it helps one visualize how we might expect changes in the price of something to affect its sales. A graph doesn’t need to be precise to be instructional.

Dave Whamond’s Reality Check for the 24th is this essay’s anthropomorphic-numerals joke. And it’s a reminder that something can be quite true without being reassuring. It plays on the difference between “real” numbers and things that really exist. It’s hard to think of a way that a number such as two could “really” exist that doesn’t also allow the square root of -1 to “really” exist. And to be a bit curmudgeonly, it’s a bit sloppy to speak of “the square root of negative one”, even though everyone does. It’s all right to expand the idea of square roots to cover stuff it didn’t before.
But there are at least two numbers that would, squared, equal -1. We usually call them i and -i. Square roots naturally have this problem. Both +2 and -2, squared, give us 4. We pick out “the” square root by selecting the positive one of the two. But neither i nor -i is “positive”. (Don’t let the - sign fool you. It doesn’t count.) You can’t say either i or -i is greater than zero. It’s not possible to define a “greater than” or “less than” for complex-valued numbers. And that’s even before we get into quaternions, in which we summon two more “square roots” of -1 into existence. Octonions can be even stranger. I don’t blame 1 for being worried.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 24th is a pleasant bit of pop-mathematics debunking. I’ve explained in the past how I’m a doubter of the golden ratio. The Fibonacci Sequence has a bit more legitimate interest to it. That’s a sequence of numbers in which the next term is the sum of the previous two terms. The famous one is 1, 1, 2, 3, 5, 8, 13, 21, et cetera. It may not surprise you to know that the Fibonacci Sequence has a link to the golden ratio. As it goes on, the ratio between one term and the next one gets close to the golden ratio.

The Harmonic Series is much more deeply weird. A series is the number we get from adding together everything in a sequence. The Harmonic Series grows out of the first sequence you’d imagine ever adding up. It’s 1 plus 1/2 plus 1/3 plus 1/4 plus 1/5 plus 1/6 plus … et cetera. The first time you hear of this you get the surprise: this sum doesn’t ever stop piling up. We say it ‘diverges’. It won’t on your computer; the floating-point arithmetic it does won’t let you add enormous numbers like ‘1’ to tiny numbers like ‘1/531,325,263,953,066,893,142,231,356,120’ and get the right answer. But if you actually added this all up, it would. The proof gets a little messy. But it amounts to this: 1/2 plus 1/3 plus 1/4? That’s more than 1.
1/5 + 1/6 + 1/7 + 1/8 + 1/9 + 1/10 + 1/11 + 1/12? That’s also more than 1. 1/13 + 1/14 + 1/15 + et cetera up through + 1/32 + 1/33 + 1/34 is also more than 1. You need to pile up more and more terms each time, but a finite string of these numbers will always add up to more than 1. So the whole series has to be more than 1 + 1 + 1 + 1 + 1 … and so more than any finite number.

That’s all amazing enough. And then the series goes on to defy all kinds of intuition. Obviously dropping a couple of terms from the series won’t change whether it converges or diverges. Multiplying alternating terms by -1, so you have (say) 1 - 1/2 + 1/3 - 1/4 + 1/5 et cetera, produces something that does converge, though only conditionally. It equals the natural logarithm of 2. But if you take those terms and rearrange them, you can produce any real number, positive or negative, that you want. And, as Weinersmith describes here, if you just skip the correct set of terms, you can make the sum converge. The ones with 9 in the denominator will be, then, 1/9, 1/19, 1/29, 1/90, 1/91, 1/92, 1/290, 1/999, those sorts of things. Amazing? Yes. Absurd? I suppose so. This is why mathematicians learn to be very careful when they do anything, even addition, infinitely many times.

John Deering’s Strange Brew for the 25th is a fear-of-mathematics joke. The sign the warrior’s carrying is legitimate algebra, at least so far as it goes. The right-hand side of the equation gets cut off. In time, it would get to the conclusion that x equals -19/2, or -9.5.

## Reading the Comics, June 3, 2016: Word Problems Without Pictures Edition

I haven’t got Sunday’s comics under review yet. But the past seven days were slow ones for mathematically-themed comics. Maybe Comic Strip Master Command is under the impression that it’s the (United States) summer break already. It’s not, although Funky Winkerbean did a goofy sequence graduating its non-player-character students.
And Zits has been doing a summer reading storyline that only makes sense if Jeremy Duncan is well into summer. Maybe Comic Strip Master Command thinks it’s a month later than it actually is?

Tony Cochrane’s Agnes for the 29th of May looks at first like a bit of nonsense wordplay. But whether a book with the subject “All About Books” would discuss itself, and how it would discuss itself, is a logic problem. And not just a logic problem. Start from pondering how the book All About Books would describe the content of itself. You can go from that to an argument that it’s impossible to compress every possible message. Imagine an All About Books which contained shorthand descriptions of every book, with enough detail in the descriptions to exactly reconstruct each original book. But then what would the book list for the description of All About Books?

And self-referential things can lead to logic paradoxes swiftly. You’d have some fine ones if Agnes were to describe a book All About Not-Described Books. Is the book described in itself? The question again sounds silly. But thinking seriously about it leads us to decidability problems. Any interesting-enough logical system will always have statements that are meaningful and true that no one can prove. Furthermore, the suggestion of an “All About All About Books’ Book” suggests to me power sets. That’s the set of all the ways you can collect the elements of a set. Power sets are always bigger than the original set. They lead to the staggering idea that there are many sizes of infinitely large sets, a never-ending stack of bigness.

Robb Armstrong’s Jump Start for the 31st of May is part of a sequence about getting a tutor for a struggling kid. That it’s mathematics tutoring is incidental to the storyline, it must be said. (It’s an interesting storyline, partly about Jojo’s father, a police officer, coming to trust Ray, an ex-convict. Jump Start tells many interesting and often deeply weird storylines.
And it never loses its camouflage of being an ordinary family comic strip.) It uses the familiar gimmick of motivating a word problem by making it about something tangible.

Ken Cursoe’s Tiny Sepuku for the 2nd of June uses the motif of non-Euclidean geometry as some supernatural magic. It’s a small reference; you might miss it. I suppose it is true that a high-dimensional analogue to conic sections would focus things from many dimensions. If those dimensions match time and space, maybe it would focus something from all humanity into the brain. I would try studying instead, though.

Russell Myers’s Broom Hilda for the 3rd is a resisting-the-word-problems joke. It’s funny to figure on missing big if you have to be wrong at all. But something you learn in numerical mathematics, particularly, is that it’s all right to start from a guess. Often you can take a wrong answer and improve it. If you can’t get the exact right answer, you can usually get a better answer. And often you can get as good as you need. So in practice, sorry to say, I can’t recommend going for the ridiculous answer. You can do better.

## A Leap Day 2016 Mathematics A To Z: Uncountable

I’m drawing closer to the end of the alphabet. While I have got choices for ‘V’ and ‘W’ set, I’ll admit that I’m still looking for something that inspires me in the last couple letters. Such inspiration might come from anywhere. HowardAt58, of that WordPress blog, gave me the notion for today’s entry.

## Uncountable.

What are we doing when we count things?

Maybe nothing. We might be counting just to be doing something. Or we might be counting because we want to do nothing. Counting can be a good way into a restful state. Fair enough. Just because we do something doesn’t mean we care about the result.

Suppose we do care about the result of our counting. Then what is it we do when we count? The mechanism is straightforward enough. We pick out things and say, or imagine saying, “one, two, three, four,” and so on.
Or we at least imagine the numbers along with the things being numbered. When we run out of things to count, we take whatever the last number was. That’s how many of the things there were. Why are there eight light bulbs in the chandelier fixture above the dining room table? Because there are not nine.

That’s how lay people count anyway. Mathematicians would naturally have a more sophisticated view of the business. A much more powerful counting scheme. Concepts in counting that go far beyond what you might work out in first grade. Yeah, so that’s what most of us would figure. Things don’t get much more sophisticated than that, though. This probably is because the idea of counting is tied to the theory of sets. And the theory of sets grew, in part, to come up with a logically solid base for arithmetic. So many of the key ideas of set theory are so straightforward they hardly seem to need explaining.

We build the idea of “countable” off of the nice, familiar numbers 1, 2, 3, and so on. That set’s called the counting numbers. They’re the numbers that everybody seems to recognize as numbers. Not just people. Even animals seem to understand at least the first couple of counting numbers. Sometimes these are called the natural numbers.

Take a set of things we want to study. We’re interested in whether we can match the things in that set one-to-one with the things in the counting numbers. We don’t have to use all the counting numbers. But we can’t use the same counting number twice. If we’ve matched one chandelier light bulb with the number ‘4’, we mustn’t match a different bulb with the same number. Similarly, if we’ve got the number ‘4’ matched to one bulb, we mustn’t match ‘4’ with another bulb at the same time. If we can do this, then our set’s countable. If we really wanted, we could pick the counting numbers in order, starting from 1, and match up all the things with counting numbers. If we run out of things, then we have a finitely large set.
The last number we used is the size, or in the jargon, the cardinality, of our set. We might not care about the cardinality, just whether the set is finite. Then we can pick counting numbers as we like in no particular order. Just use whatever’s convenient.

But what if we don’t run out of things? And it’s possible we won’t. Suppose our set is the negative whole numbers: -1, -2, -3, -4, -5, and so on. We can match each of those to a counting number many ways. We always can. But there’s an easy way. Match -1 to 1, match -2 to 2, match -3 to 3, and so on. Why work harder than that? We aren’t going to run out of negative whole numbers. And we aren’t going to find any we can’t match with some counting number. And we aren’t going to have to match two different negative numbers to the same counting number. So what we have here is an infinitely large, yet still countable, set.

So a set of things can be countable and finite. It can be countable and infinite. What else is there to be? There must be something. It’d be peculiar to have a classification that everything was in, after all. At least it would be peculiar except for people studying what it means to exist or to not exist. And most of those people are in the philosophy department, which we’re scared of visiting. So we must mean there’s some such thing as an uncountable set.

The idea means just what you’d guess if you didn’t know enough mathematics to be tricky. Something is uncountable if it can’t be counted. It can’t be counted if there’s no way to match it up, one thing to one thing, with the counting numbers. We have to somehow run out of counting numbers.

It’s not obvious that we can do that. Some promising approaches don’t work. For example, the set of all the integers — 1, 2, 3, 4, 5, and all that, and 0, and the negative numbers -1, -2, -3, -4, -5, and so on — is still countable. Match the counting number 1 to 0. Match the counting number 2 to 1.
Match the counting number 3 to -1. Match 4 to 2. Match 5 to -2. Match 6 to 3. Match 7 to -3. And so on.

Even ordered pairs of the counting numbers don’t do it. We can match the counting number 1 to the pair (1, 1). Match the counting number 2 to the pair (2, 1). Match the counting number 3 to (1, 2). Match 4 to (3, 1). Match 5 to (2, 2). Match 6 to (1, 3). Match 7 to (4, 1). Match 8 to (3, 2). And so on. We can achieve similar staggering results with ordered triplets, quadruplets, and more. Ordered pairs of integers, positive and negative? Longer to do, yes, but just as doable.

So are there any uncountable things? Sure. Wouldn’t be here if there weren’t. For example: think about the set that’s all the ways to pick things from a set. I sense your confusion. Let me give you an example. Suppose we have a set of three things. They’re the numbers 1, 2, and 3. We can make a bunch of sets out of things from this set. We can make the set that just has ‘1’ in it. We can make the set that just has ‘2’ in it. Or the set that just has ‘3’ in it. We can also make the set that has just ‘1’ and ‘2’ in it. Or the set that just has ‘2’ and ‘3’ in it. Or the set that just has ‘3’ and ‘1’ in it. Or the set that has all of ‘1’, ‘2’, and ‘3’ in it. And we can make the set that hasn’t got any of these in it. (Yes, that does too count as a set.)

So from a set of three things, we were able to make a collection of eight sets. If we had a set of four things, we’d be able to make a collection of sixteen sets. With five things to start from, we’d be able to make a collection of thirty-two sets. This collection of sets we call the “power set” of our original set, and if there’s one thing we can say about it, it’s that it’s bigger than the set we start from.

The power set for a finite set, well, that’ll be much bigger. But it’ll still be finite. Still be countable. What about the power set for an infinitely large set?
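The small-set bookkeeping above is easy to check by machine. Here’s a sketch of my own (Python, using itertools; the function name is mine): every way to pick things out of {1, 2, 3}, the empty pick included, gives eight sets, and each extra element doubles the count.

```python
from itertools import combinations

def power_set(items):
    """All the ways to pick things from `items`: every subset, including the empty one."""
    items = list(items)
    return [set(c) for r in range(len(items) + 1) for c in combinations(items, r)]

subsets = power_set({1, 2, 3})
print(subsets)  # the eight collections described above

assert len(subsets) == 8
# Four things give sixteen sets, five give thirty-two: 2^n in general.
assert len(power_set(range(4))) == 16
assert len(power_set(range(5))) == 32
```

Nothing deep happens here; it just confirms the eight-sixteen-thirty-two pattern the text describes.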
And the power set of the counting numbers, the collection of all the ways you can make a set of counting numbers, is really big. Is it uncountably big?

Let’s step back. Remember when I said mathematicians don’t get “much more” sophisticated than matching up things to the counting numbers? Here’s a little bit of that sophistication. We don’t have to match stuff up to counting numbers, if we don’t want to. We can match the things in one set to the things in another set. If it’s possible to match them up one-to-one, with nothing missing in either set, then the two sets have to be the same size. The same cardinality, in the jargon.

So. The set of the numbers 1, 2, 3 has to have a smaller cardinality than its power set. Want to prove it? Do this exactly the way you imagine. You run out of things in the original set before you run out of things in the power set, so there’s no making a one-to-one matchup between the two.

With the infinitely large yet countable set of the counting numbers … well, the same result holds. It’s harder to prove. You have to show that there’s no possible way to match the infinitely many things in the counting numbers to the infinitely many things in the power set of the counting numbers. (The easiest way to do this is by contradiction. Imagine that you have made such a matchup, pairing everything in your power set to everything in the counting numbers. Then you go through your matchup and put together a collection that isn’t accounted for. Whoops! So you must not have matched everything up in the first place. Why not? Because you can’t.)

But the result holds. The power set of the counting numbers is some other set. It’s infinitely large, yes. And it’s so infinitely large that it’s somehow bigger than the counting numbers. It is uncountable.

There’s more than one uncountably large set. Of course there are. We even know of some of them. For example, there’s the set of real numbers.
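That parenthetical contradiction can even be acted out on a toy example (my own illustration; the particular matchup below is an arbitrary pick): whatever matchup of counting numbers to sets you try, the collection of numbers that are not inside their own matched set is never itself anybody’s matched set.

```python
# An arbitrary attempted matchup of counting numbers to sets of counting numbers.
matchup = {
    1: {2, 3},
    2: {2},
    3: set(),
    4: {1, 2, 3, 4},
}

# Cantor's construction: gather every number that is NOT in the set it's matched to.
unaccounted = {n for n, s in matchup.items() if n not in s}
print(unaccounted)

# That collection can't be anyone's matched set: whoever was matched to it
# would have to be both in it and not in it at once.
assert unaccounted not in matchup.values()
```

Swap in any other matchup you like; the constructed collection always escapes it, which is the whole point of the proof.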
Three-quarters of my readers have been sitting anxiously for the past eight paragraphs wondering if I’d ever get to them. There’s good reason for that. Everybody feels like they know what the real numbers are. And the proof that the real numbers are a larger set than the counting numbers is easy to understand. An eight-year-old could master it. You can find that proof well-explained within the first ten posts of pretty much every mathematics blog other than this one. (I was saving the subject. Then I finally decided I couldn’t explain it any better than everyone else has done.)

Are the real numbers the same size, the same cardinality, as the power set of the counting numbers? They are; that much can actually be proved. The undecidable question is nearby: is there any set bigger than the counting numbers yet smaller than the real numbers? Yes, there is. No, there isn’t. Whichever you like. This is one of the many surprising mathematical results of the surprising 20th century. Starting from the common set of axioms about set theory, the question can’t be settled. You can assume there is no such in-between size. This is known as the Continuum Hypothesis. And you can do fine mathematical work with it. You can assume there is one. This is known as … uh … rejecting the Continuum Hypothesis. And you can do fine mathematical work with that. What’s right depends on what work you want to do. Either is consistent with the starting axioms. You are free to choose either or, if you like, neither. My understanding is that most set theorists find it more productive to reject the hypothesis. I don’t know why this is. I know enough set theory to lead you to this point, but not past it.

But that the question can exist tells you something fascinating. You can take the power set of the power set of the counting numbers. And this gives you another, even vaster, uncountably large set. As enormous as the collection of all the ways to pick things out of the counting numbers is, this power set of the power set is even vaster. We’re not done.
There’s the power set of the power set of the power set of the counting numbers. And the power set of that. Much as geology teaches us to see Deep Time, and astronomy Deep Space, so power sets teach us to see Deep … something. Deep Infinity, perhaps.

## A Leap Day 2016 Mathematics A To Z: Riemann Sphere

To my surprise nobody requested any terms beginning with ‘R’ for this A To Z. So I take this free day to pick on a concept I’d imagine nobody saw coming.

## Riemann Sphere.

We need to start with the complex plane. This is just, well, a plane. All the points on the plane correspond to a complex-valued number. That’s a real number plus a real number times i. And i is one of those numbers which, squared, equals -1. It’s like the real number line, only in two directions at once.

Take that plane. Now put a sphere on it. The sphere has radius one-half. And it sits on top of the plane. Its lowest point, the south pole, sits on the origin. That’s whatever point corresponds to the number 0 + 0i, or as humans know it, “zero”.

We’re going to do something amazing with this. We’re going to make a projection, something that matches every point on the sphere to a point on the plane, and vice-versa. In other words, we can match every complex-valued number to one point on the sphere. And every point on the sphere to one complex-valued number. Here’s how.

Imagine sitting at the north pole. And imagine that you can see through the sphere. Pick any point on the plane. Look directly at it. Shine a laser beam, if that helps you pick the point out. The laser beam is going to go into the sphere — you’re squatting down to better look through the sphere — and come out somewhere on the sphere, before going on to the point in the plane. The point where the laser beam emerges? That’s the mapping of the point on the plane to the sphere.

There’s one point with an obvious match. The south pole is going to match zero. They touch, after all. Other points … it’s less obvious.
But some are easy enough to work out. The equator of the sphere, for instance, is going to match all the points a distance of 1 from the origin. So it’ll have the point matching the number 1 on it. It’ll also have the point matching the number -1, and the point matching i, and the point matching -i. And some other numbers. All the numbers that are less than 1 from the origin, in fact, will have matches somewhere in the southern hemisphere. If you don’t see why that is, draw some sketches and think about it. You’ll convince yourself. If you write down what convinced you and sprinkle the word “continuity” in here and there, you’ll convince a mathematician. (WARNING! Don’t actually try getting through your Intro to Complex Analysis class doing this. But this is what you’ll be doing.)

What about the numbers more than 1 from the origin? … Well, they all match to points on the northern hemisphere. And tell me that doesn’t stagger you. It’s one thing to match the southern hemisphere to all the points in a circle of radius 1 away from the origin. But we can match everything outside that little circle to the northern hemisphere. And it all fits in!

Not amazed enough? How about this: draw a circle on the plane. Then look at the points on the Riemann sphere that match it. That set of points? It’s also a circle. A line on the plane? That maps to a circle on the sphere too, one that happens to pass through the north pole. How about this? Take a pair of intersecting lines or circles in the plane. Look at what they map to. That mapping, squashed as it might be to the northern hemisphere of the sphere? The projection of the lines or circles will intersect at the same angles as the original. As much as space gets stretched out (near the south pole) or squashed down (near the north pole), angles stay intact.

OK, but besides being stunning, what good is all this? Well, one is that it’s a good thing to learn on.
Geometry gets interested in things that look, at least in places, like planes, but aren’t necessarily. These spheres are such things, and the way a sphere matches a plane is easy to see here. We can learn the tools for geometry on the Möbius strip or the Klein bottle or other exotic creations from the tools we prove out on this. And then physics comes in, being all weird. Much of quantum mechanics makes sense if you imagine it as things on the sphere. (I admit I don’t know exactly how. I went to grad school in mathematics, not in physics, and I didn’t get to the physics side of mathematics much at that time.) The strange ways distance can get mushed up or stretched out have echoes in relativity. They’ll continue having these echoes in other efforts to explain physics as geometry, the way that string theory does.

Also important is that the sphere has a top, the north pole. That point matches … well, what? It’s got to be something infinitely far away from the origin. And this makes sense. We can use this projection to make a logically coherent, sensible description of things “approaching infinity”, the way we want to when we first learn about infinitely big things. Wrapping all the complex-valued numbers onto this ball makes the vast manageable.

It’s also good numerical practice. Computer simulations have problems with infinitely large things, for the obvious reason. We have a couple of tools to handle this. One is to model a really big but not infinitely large space and hope we aren’t breaking anything. One is to create a “tiling”, making the space we are able to simulate repeat itself in a perfect grid forever and ever. But recasting the problem from the infinitely large plane onto the sphere can also work. This requires some ingenuity, to be sure we do the recasting correctly, but that’s all right. If we need to run a simulation over all of space, we can often get away with doing a simulation on a sphere. And isn’t that also grand?
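For the record, the laser-beam projection above can be written out explicitly. This is my own sketch, with the formulas worked out for the radius-one-half sphere resting its south pole on the origin; the function names are mine. A point z on the plane lands at height $|z|^2 / (1 + |z|^2)$ on the sphere, which puts the unit circle exactly on the equator.

```python
def plane_to_sphere(z: complex):
    """Map a complex number to the radius-1/2 sphere resting its south pole on 0,
    projecting from the north pole (0, 0, 1)."""
    r2 = abs(z) ** 2
    return (z.real / (1 + r2), z.imag / (1 + r2), r2 / (1 + r2))

def sphere_to_plane(p):
    """Inverse map: look from the north pole through the sphere point to the plane."""
    x, y, height = p
    return complex(x, y) / (1 - height)

# Zero matches the south pole, where sphere and plane touch.
assert plane_to_sphere(0j) == (0.0, 0.0, 0.0)

# 1, -1, i, and -i, all a distance 1 from the origin, land on the equator
# (height one-half, the widest part of the sphere).
for z in (1 + 0j, -1 + 0j, 1j, -1j):
    assert abs(plane_to_sphere(z)[2] - 0.5) < 1e-12

# Inside the unit circle: southern hemisphere. Outside: northern hemisphere.
assert plane_to_sphere(0.3 + 0.4j)[2] < 0.5 < plane_to_sphere(3 - 4j)[2]

# And the round trip gives the original number back.
w = 2.5 - 1.25j
assert abs(sphere_to_plane(plane_to_sphere(w)) - w) < 1e-12
```

The north pole itself (height 1) has no finite preimage, which is exactly the “point at infinity” the essay describes.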
The Riemann named here is Bernhard Riemann, yet another of those absurdly prolific 19th century mathematicians, especially considering how young he was when he died. His name is all over the fundamentals of analysis and geometry. When you take Introduction to Calculus you get introduced pretty quickly to the Riemann Sum, which is how we first learn how to calculate integrals. It’s that guy. General relativity, and much of modern physics, is based on advanced geometries that again fall back on principles Riemann noticed or set out or described so well that we still credit them as his discoveries.

## Reading the Comics, April 4, 2016: Precursor To April 5 Edition

Comic Strip Master Command followed up its slow times with a rush of comic strips I can talk about. Or that I can sort-of talk about. There’s enough for a regular essay just about the comics from the 5th of April alone. So today’s Reading the Comics entry is just the strips up through the 4th of April. That makes for a slightly short collection, but what can I do besides schedule these for a consistent day of the week regardless of how many comics there are to talk about?

Dave Whamond’s Reality Check for the 3rd of April mentions the infinite-monkeys tale. And it even does so in iconic form, in talking about writing Shakespeare’s Hamlet. I don’t mean to disparage the comic, especially when it’s put five punch lines into the panel. (I admit I’m a little disappointed when a Sunday strip is the same one- or three-panel format as a regular daily comic, though.) But I’m pretty sure this same premise was done by Fred Allen on the radio sometime around 1940. I don’t think that mentioned the infinite monkeys, though.

Missy Meyer’s Holiday Doodles for the 4th of April mentioned that it was Square Root Day. I am curious whether the comic will mention anything for the 9th of April. I have noticed some people muttering about this Perfect Squares Day.
Also I’m surprised that “glasses with tape over the bridge” is still a signifier of square-ness.

Brandon Sheffield and Dami Lee’s Hot Comics for Cool People for the 4th titles its installment Perfect Geometry Comics. And it presents, as often will happen, some muddle of algebra and geometry as the way to work out a brilliantly perfect solution. Also, the comic features a dog in safety goggles, which is always good to see.

Graham Nolan’s Sunshine State for the 4th presents a word problem that might be a good introduction to asymptotes. The ratio of two people’s ages will approach 1 without ever quite equalling it. But it will, if the people last long enough, come as close as one might want. There’s probably also a good lesson to be made by comparing this age problem to the problem of Achilles and the tortoise.

## Things To Be Thankful For

A couple buildings around town have blackboard paint and a writing prompt on the walls. Here’s one my love and I wandered across the other day while going to Fabiano’s Chocolate for the obvious reason. (The reason was to see their novelty three-foot-tall, 75-pound solid chocolate bunny. Also to buy less huge piles of candy.) I recognized that mathematics majors had been past.

Well, anyone with an interest in popular mathematics might have written that they’re grateful for “G. Cantor”. His work’s escaped into the popular imagination, at least a bit. “C. Weirstrauβ”, though, that’s a mathematics major at work. Karl Weierstrass, the way his name’s rendered in the English-language mathematics books I know, was one of the people who made analysis what it is today. Analysis is, at heart, the study of why calculus works. He attacked the foundations of calculus, which by modern standards weren’t quite rigorous. And he did brilliantly, giving us the modern standards of rigor. He’s terrified generations of mathematics majors by defining what it is for a function to be continuous.
Roughly, it means we can draw the graph of a function without having to lift a pencil. He put it in a non-rough manner. He also developed the precise modern idea for what a limit is. Roughly, a limit is exactly what you might think it means; but to be precise takes genius.

Among Weierstrass’s students was Georg Cantor. His is a more familiar name. He proved that just because a set has infinitely many elements in it doesn’t mean that it can’t be quite small compared to other infinitely large sets. His Diagonal Argument shows there must be, in a sense, more real numbers than there are counting numbers. And a child can understand it. Cantor also pioneered the modern idea of set theory. For a while this looked like it might be the best way to understand why arithmetic works like it does. (My understanding is that category theory is now thought more fundamental. But I don’t know category theory well enough to have an informed opinion.)

The person grateful to Michigan State University basketball I assume wrote that before last Sunday, when the school wrecked so many NCAA tournament brackets.

## A Leap Day 2016 Mathematics A To Z: Fractions (Continued)

Another request! I was asked to write about continued fractions for the Leap Day 2016 A To Z. The request came from Keilah, of the Knot Theorist blog. But I’d already had a c-word request in (conjecture). So you see my elegant workaround to talk about continued fractions anyway.

## Fractions (continued).

There are fashions in mathematics. There are fashions in all human endeavors. But mathematics almost begs people to forget that it is a human endeavor. Sometimes a field of mathematics will be popular a while and then fade. Some fade almost to oblivion. Continued fractions are one of them.

A continued fraction comes from a simple enough starting point. Start with a whole number. Add a fraction to it. $1 + \frac{2}{3}$. Everyone knows what that is. But then look at the denominator. In this case, that’s the ‘3’.
Why couldn’t that be a sum, instead? No reason. Imagine then the number $1 + \frac{2}{3 + 4}$. Is there a reason that we couldn’t, instead of the ‘4’ there, have a fraction instead? No reason beyond our own timidity. Let’s be courageous. Does $1 + \frac{2}{3 + \frac{4}{5}}$ even mean anything?

Well, sure. It’s getting a little hard to read, but $3 + \frac{4}{5}$ is a fine enough number. It’s 3.8. $\frac{2}{3.8}$ is a less friendly number, but it’s a number anyway. It’s a little over 0.526. (Its decimal digits go on forever, repeating every eighteen digits, but trust me, it’s a perfectly good rational number.) And we can add 1 to that easily. So $1 + \frac{2}{3 + \frac{4}{5}}$ means a number a slight bit more than 1.526.

Dare we replace the “5” in that expression with a sum? Better, with the sum of a whole number and a fraction? If we don’t fear being audacious, yes. Could we replace the denominator of that with another sum? Yes. Can we keep doing this forever, creating this never-ending stack of whole numbers plus fractions? … If we want an irrational number, anyway. If we want a rational number, this stack will eventually end. But suppose we feel like creating an infinitely long stack of continued fractions. Can we do it? Why not? Who dares, wins!

OK. Wins what, exactly? Well … um. Continued fractions certainly had a fashionable time. John Wallis, the 17th century mathematician famous for introducing the ∞ symbol, and for an interminable quarrel with Thomas Hobbes over Hobbes’s attempts to reform mathematics, did much to establish continued fractions as a field of study. (He’s credited with inventing the field. But all claims to inventing something big are misleading. Real things are complicated and go back farther than people realize, and inventions are more ambiguous than people think.) The astronomer Christiaan Huygens showed how to use continued fractions to design better gear ratios. This may strike you as the dullest application of mathematics ever. Let it. It’s also important stuff.
People who need to scale one movement to another need this.

In the 18th and 19th century continued fractions became interesting for higher mathematics. Continued fractions were the approach Leonhard Euler used to prove that e had to be irrational. That’s one of the superstar numbers of mathematics. Johann Heinrich Lambert used this to show that if θ is a rational number (other than zero) then the tangent of θ must be irrational. This is one path to showing that π must be irrational. Many of the astounding theorems of Srinivasa Ramanujan were about continued fractions, or ideas which built on continued fractions.

But since the early 20th century the field’s evaporated. I don’t have a good answer why. The best speculation I’ve heard is that the field seems to fit poorly into any particular topic. Continued fractions get interesting when you have an infinitely long stack of nesting denominators. You don’t want to work with infinitely long strings of things before you’ve studied calculus. You have to be comfortable with these things. But that means students don’t encounter it until college, at least. And at that point fractions seem beneath the grade level. There’s a handful of proofs best done by them. But those proofs can be shown as odd, novel approaches to these particular problems. Studying the whole field is hardly needed.

So, perhaps because it seems like an odd fit, the subject’s dried up and blown away. Even enthusiasts seem to be resigned to its oblivion. Professor Adam Van Tyul, then at Queen’s University in Kingston, Ontario, composed a nice set of introductory pages about continued fractions. But the page is defunct. Dr Ron Knott has a more thorough page, though, and one with calculators that work well.

Will continued fractions make a comeback? Maybe. It might take the discovery of some interesting new results, or some better visualization tools, to reignite interest.
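In the meantime the numbers themselves are easy to play with. Here’s a little Python sketch of my own (the function name is made up, not from any library) that evaluates a finite stack from the bottom up, using exact fractions:

```python
from fractions import Fraction

def continued_value(whole_parts, numerators):
    """Evaluate a finite continued fraction
    a0 + n1/(a1 + n2/(a2 + ...)), folding from the innermost
    denominator outward."""
    value = Fraction(whole_parts[-1])
    for a, n in zip(reversed(whole_parts[:-1]), reversed(numerators)):
        value = a + Fraction(n) / value
    return value

# The example from the essay, 1 + 2/(3 + 4/5):
v = continued_value([1, 3, 5], [2, 4])
print(v, float(v))  # 29/19, a slight bit more than 1.526
```

Working bottom-up is the natural order here; the top-level value isn’t settled until the innermost denominator is.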
Chaos theory, the study of deterministic yet unpredictable systems, first grew (we now recognize) in the 1890s. But it fell into obscurity. When we got some new theoretical papers and the ability to do computer simulations, it flowered again. For a time it looked ready to take over all mathematics, although we’ve got things under better control now. Could continued fractions do the same? I’m skeptical, but won’t rule it out.

Postscript: something you notice quickly with continued fractions is they’re a pain to typeset. We’re all right with $1 + \frac{2}{3 + \frac{4}{5}}$. But after that the LaTeX engine that WordPress uses to render mathematical symbols is doomed. A real LaTeX engine gets another couple nested denominators in before the situation is hopeless. If you’re writing this out on paper, the way people did in the 19th century, that’s all right. But there’s no typing it out that way.

But notation is made for us, not us for notation. If we want to write a continued fraction in which the numerators are all 1, we have a brackets shorthand available. In this we would write $2 + \frac{1}{3 + \frac{1}{4 + \cdots }}$ as [2; 3, 4, … ]. The numbers are the whole numbers added to the next level of fractions. Another option, and one that lends itself to having numerators which aren’t 1, is to write out a string of fractions. In this we’d write $2 + \frac{1}{3 +} \frac{1}{4 +} \frac{1}{\cdots + }$. We have to trust people notice the + sign is in the denominator there. But if people know we’re doing continued fractions then they know to look for the peculiar notation.

## Reading the Comics, February 6, 2016: Lottery Edition

As mentioned, the lottery was a big thing a couple of weeks ago. So there were a couple of lottery-themed comics recently. Let me group them together. Comic strips tend to be anti-lottery. It’s as though people trying to make a living drawing comics for newspapers are skeptical of wild long-shot dreams.
T Lewis and Michael Fry’s Over The Hedge started a lottery storyline the 1st of February. Verne, the turtle, repeats the tired joke that the lottery is a tax on people bad at mathematics. Enormous jackpots, like the $1,500,000,000 payout of a couple weeks back, break one leg of the anti-lottery argument. If the expected payout is large enough then the expectation value of playing can become positive.

The expectation value is one of those statistics terms that almost tells you what it is just by the name. It’s what you would expect as the average result if you could repeat some experiment arbitrarily many times. If the payout is 1.5 billion, and the chance of winning one in 250 million, then the expected value of the payout is six dollars. If a ticket costs less than six dollars, then — if you could play over and over, hundreds of millions of times — you’d expect, on average, to come out ahead.
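That six-dollar figure is quick to check. A sketch in Python, using the essay’s numbers:

```python
# Rough expected-value check for the jackpot argument in the text.
jackpot = 1_500_000_000   # dollars
p_win = 1 / 250_000_000   # chance of winning per ticket
expected_payout = jackpot * p_win
print(expected_payout)    # 6.0 dollars per ticket, before the ticket price
```

The expectation value of *playing* is this payout minus whatever the ticket costs, which is why a cheap-enough ticket tips the sign positive.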

If you could. Of course, you can’t play the lottery hundreds of millions of times. You can play a couple of times at most. (Even if you join a pool at work and buy, oh, a thousand tickets. That’s still barely better than playing twice.) And the payout may be less than the full jackpot; multiple winners are common things in the most enormous jackpots. Still, if you’re pondering whether it’s sensible to spend two dollars on a billion-dollar lottery jackpot? You’re being fussy. You’ll spend at least that much on something more foolish and transitory — the lottery ticket can at least be used as a bookmark — I’ll bet.

Jef Mallett’s Frazz for the 4th of February picks up the anti-lottery crusade. Caulfield does pin down that lotteries work because people figure they have a better chance of winning than they truly do. Nobody buys a ticket because they figure it’s worth losing a dollar or two. It’s because they figure the chance is worth a little money.

Ken Cursoe’s Tiny Sepuku for the 4th of February consults the Chinese Zodiac Monkey for help on finding lucky numbers. There’s not really any finding them. Lotteries work hard to keep the winning numbers as unpredictable as possible. I have heard the lore that numbers up to 31 are picked by more people — they’re numbers that can be birthdays — so that multiple winners on the same drawing are more likely. I don’t know that this is true, though. I suspect that I could feel comfortable even with a four-way split of one and a half billion dollars. Five-way would be out of the question, of course. Better to tear up the ticket than take that undignified split.

Rick Detorie’s One Big Happy for the 3rd of February features Ruthie tossing off a confusing pile of numbers on the way to declaring herself bad at mathematics. It’s always the way.

Breaking up a whole number like 4 into different sums of whole numbers is a mathematics problem also. Splitting up 4 into, say, ‘2 plus 1 plus 1’, is a ‘partition’ of the number. I’m not sure of any important results that follow from this sort of integer partition directly. But splitting up sets of things different ways runs through a lot of mathematics. Integer partitions are the ones you can do in elementary school.
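For a number as small as 4 you can list the partitions outright. A little Python routine of my own devising (not any standard library function) shows the five of them:

```python
def partitions(n, largest=None):
    """List the partitions of n into whole-number parts, each part
    no bigger than `largest`, written in decreasing order."""
    if largest is None:
        largest = n
    if n == 0:
        return [[]]
    result = []
    for part in range(min(n, largest), 0, -1):
        for rest in partitions(n - part, part):
            result.append([part] + rest)
    return result

print(partitions(4))
# [[4], [3, 1], [2, 2], [2, 1, 1], [1, 1, 1, 1]] -- five partitions of 4
```

Capping each part by the one before it is what keeps ‘2 plus 1 plus 1’ and ‘1 plus 2 plus 1’ from being counted as different partitions.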

Percy Crosby’s Skippy for the 3rd of February — I believe it originally ran December 1928 — is a Roman numerals joke. The mathematical content may be low, but what the heck. It’s kind of timely. The Super Bowl, set for today, has been the most prominent use of Roman numerals we have anymore since the Star Trek movies stopped using them a quarter-century ago.

Bill Amend’s FoxTrot for the 7th of February seems to be in agreement. And yes, I’m disappointed the Super Bowl is giving up on Roman numerals, much the way I’m disappointed they’re using a standardized and quite boring logo for each year. Part of the glory of past Super Bowls is seeing old graphic design eras preserved like fossils.

Brian Gordon’s Fowl Language for the 5th of February shows a duck trying to explain incredibly huge numbers to his kid. It’s hard. You need to appreciate mathematics some to start appreciating real vastness. I’m not sure anyone can really have a feel for a number like 300 sextillion, the character’s estimate for the number of stars there are. You can make rationalizations for what numbers that big are like, but I suspect the mind shies back from staring directly at it.

Infinity, and the many different sizes of infinity, might be easier to work with. One doesn’t need to imagine infinitely many things to work out the properties of infinitely large sets. You could do as well with a neatly drawn rectangle and some other, bigger, rectangles. But if you want to talk about the number 300,000,000,000,000,000,000,000 then you do want to think of something true about that number which isn’t also true about eight or about nine hundred million. But geology teaches us to ponder Deep Time. Astronomy trains us to imagine incredibly vast distances. Why not spend some time pondering huge numbers?

And with all that said, I’d like to make one more call for any requests for my winter 2016 Mathematics A To Z glossary. There are quite a few attractive letters left unclaimed; a word or short term could be yours!

## Reading the Comics, December 23, 2015: Richard Thompson Christmas Trees Edition

Richard Thompson’s Cul de Sac for the 19th of December (a rerun, alas, from the 18th of December, 2010) gives me a name for this Reading the Comics installment. Just as in a Richard’s Poor Almanac mentioned last time he gives us a Christmas tree occupying a non-Euclidean space. Non-Euclidean spaces do open up the possibility of many wondrous and counterintuitive phenomena. Trees probably aren’t among them, but I don’t know a better shorthand way to describe their mysteries. And if you’re not sure why so many people say this was the greatest comic strip of our still-young century, look at little Pete in the last panel. Both his expression and the composition of the panel are magnificent.

Tom Toles’s Randolph Itch, 2 am for the 21st of December is a rerun. And it’s one that’s been mentioned around here as recently as August. I don’t care. It’s still a good funny slapstick joke. The kicker at the bottom is also a solid giggle.

Richard Thompson’s Richard’s Poor Almanac for the 21st of December justifies my theme with its Platonic Fir. The Platonic Ideals of objects are, properly speaking, philosophical constructs. If they are constructs, anyway, and not the things that truly exist, and yes, we must be careful what we mean by ‘exist’ in this context. But Thompson’s diagram shows this Platonic Fir drawn as a mathematical diagram. That’s another common motif. Mathematical constructs, ideas like “triangles” and “circles” and “rotations”, do suggest Platonic Ideals quite closely. We might be a bit pressed to say what the quintessence of chair-ness is, the thing all chairs must be aspects of. But we can be pretty sure we understand what a triangle is, apart from our messy and imperfect real-world approximations of a true triangle. When mathematics enthusiasts speak of the beauty of pure mathematics it does seem like they speak of the beauty of approaching Platonic Ideals.

John Graziano’s Ripley’s Believe It or Not for the 21st of December continues its Rubik’s Cube obsession. Graziano spells Rubik correctly this time.

Don Asmussen’s Bad Reporter panel for the 23rd of December does a joke that depends on the idea of getting to be “more than infinity”. Every kid has run into the problem of trying to understand “infinity plus one”. The way we speak of “infinity” we can’t really talk about getting “more than infinity”. But we are able to think meaningfully of ways to differentiate sizes of infinity. There are some infinitely large sets that, in a sensible way, are bigger than other infinitely large sets. That’s a fun field of mathematics. You can get to interesting questions in it without needing much background or experience. It’s almost ideal for pop-mathematics essays and if you don’t believe me, then look at how many results you get googling for “Cantor’s Diagonalization Argument”. It’s not an infinite number of results, but it’ll get you quite close.
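The diagonal construction itself fits in a few lines. Here is a toy Python sketch of Cantor’s trick, run on a finite list of digit sequences standing in for an infinite list of decimal expansions (picking 5-or-6 is the common textbook dodge for avoiding the numbers with two decimal expansions; the function name is mine):

```python
def diagonal_differs(decimal_digit_lists):
    """Given a list of decimal-digit sequences, build a digit sequence
    that differs from the k-th sequence in its k-th digit."""
    new_digits = []
    for k, digits in enumerate(decimal_digit_lists):
        # Choose a digit unequal to digits[k]; sticking to 5 and 6
        # steers clear of trailing-0 and trailing-9 expansions.
        new_digits.append(5 if digits[k] != 5 else 6)
    return new_digits

listed = [
    [1, 4, 1, 5],   # imagine each row continuing forever
    [7, 1, 8, 2],
    [3, 3, 3, 3],
    [5, 5, 5, 5],
]
d = diagonal_differs(listed)
print(d)  # [5, 5, 5, 6] -- differs from row k in digit k
```

Whatever number these new digits spell out cannot be any row of the list, which is the whole argument: no list of real numbers can be complete.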

Brian and Ron Boychuk’s Chuckle Brothers for the 23rd of December is the anthropomorphic-numerals joke for this time around.

Mark Litzler’s Joe Vanilla for the 23rd of December is built on the idea that it’s absurd to develop an algorithm that could predict earning potential, hairline at 50, and fidelity. It sounds silly at first glance. But if we’ve learned anything from sabermetrics it’s that all kinds of physical traits can be studied, and modeled, and predicted. With a large and reliable enough data set, and with a mindfully developed algorithm, these models can become quite good at predicting things. The underlying property is that on average, people are average. If we know what is typical, and we have reason to think that “typical” is not changing, then we can forecast the future pretty well based on what we already see. Or if we have reason to expect that “typical” is changing in ways we understand, we can still make good forecasts.

## Reading the Comics, October 1, 2015: Big Questions Edition

I’m cutting the collection of mathematically-themed comic strips at the transition between months. The set I have through the 1st of October is long enough already. That’s mostly because the first couple of strips raise some big topics that are at least somewhat mathematically based. Those are fun to reason about, but take time to introduce. So let’s jump into them.

Lincoln Peirce’s Big Nate: First Class for the 27th of September was originally published the 22nd of September, 1991. Nate and Francis trade off possession of the basketball, and a strikingly high number of successful shots in a row considering their age, in the infinitesimally sliced last second of the game. There’s a rather good Zeno’s-paradox-type-question to be made out of this. Suppose the game started with one second to go and Nate ahead by one point, since it is his strip. At one-half second to go, Francis makes a basket and takes a one point lead. At one-quarter second to go, Nate makes a basket and takes a one point lead. At one-eighth of a second to go, Francis repeats the basket; at one-sixteenth of a second, Nate does. And so on. Suppose they always make their shots, and suppose that they are able to make shots without requiring any more than half the remaining time available. Who wins, and why?
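If you’d like to watch the clock run down, here’s a small Python sketch of the scoring pattern as I’ve supposed it. It doesn’t answer the question; it just shows how little time the first twenty baskets use:

```python
# Tally the time consumed by alternating baskets, each taken in
# half the remaining time (my supposition above, not the strip's).
remaining = 1.0   # seconds left on the clock
elapsed = 0.0
leader = "Nate"   # Nate starts a point ahead
for shot in range(20):
    remaining /= 2
    elapsed = 1.0 - remaining
    leader = "Francis" if leader == "Nate" else "Nate"
print(elapsed, leader)
# After 20 baskets, under a millionth of a second remains,
# and the lead has already changed hands 20 times.
```

However many baskets go in, the elapsed time never reaches a full second, so there is no “last” basket to settle the matter before the clock, in the limit, expires.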

Tim Rickard’s Brewster Rockit for the 27th of September is built on the question of whether the universe might be just a computer simulation, and if so, how we might tell. Being a computer simulation is one of those things that would seem to explain why mathematics tells us so much about the universe. One can make a probabilistic argument about this. Suppose there is one universe, and there are some number of simulations of the universe. Call that number N. If we don’t know whether we’re in the real or the simulated universe, then it would seem we have an estimated probability of being in the real universe of $\frac{1}{N + 1}$. The chance of being in the real universe starts out none too great and gets dismally small pretty fast.

But this does put us in philosophical difficulties. If we are in something that is a complete, logically consistent universe that cannot be escaped, how is it not “the real” universe? And if “the real” universe is accessible from within “the simulation” then how can they be separate? The question is hard to answer and it’s far outside my realm of competence anyway.

Mark Leiknes’s Cow and Boy Classics for the 27th of September originally ran the 15th of September, 2008. And it talks about the ideas of zero-point energy and a false vacuum. This is about something that seems core to cosmology: how much energy is there in a vacuum? That is, if there’s nothing in a space, how much energy is in it? Quantum mechanics tells us it isn’t zero, in part because matter and antimatter flutter into and out of existence all the time. And there’s gravity, which is hard to explain quite perfectly. Mathematical models of quantum mechanics, and gravity, make various predictions about how much the energy of the vacuum should be. Right now, the models don’t give us really good answers.

Some suggest that there might be more energy in the vacuum than we could ever use, and that if there were some way to draw it off — well, there’d never be a limit to anything ever again. I think this an overly optimistic projection. The opposite side of this suggests that if it is possible to draw energy out of the vacuum, that means it must be possible to shift empty space from its current state to a lower-energy state, much the way you can get energy out of a pile of rocks by making the rocks fall. But the lower-energy vacuum might have different physics in ways that make it very hard for us to live, or for us to exist. I think this an overly pessimistic projection. But I am not an expert in the fields, which include cosmology, quantum mechanics, and certain rather difficult tinkerings with the infinitely many.

Mason Mastroianni, Mick Mastroianni, and Perri Hart’s B.C. for the 28th of September is a joke in the form of true, but useless, word problem answers. Well, putting down a lower bound on what the answer is can help. If you knew what three times twelve was, you could get to four times twelve reliably, and that’s a help. But if you’re lost for three times twelve then you’re just stalling for time and the teacher knows it.

Paul Gilligan’s Pooch Cafe for the 28th of September uses the monkeys-on-keyboards concept. It’s shifted here to cats on a keyboard, but the principle is the same. Give a random process enough time and you can expect it to produce anything you want. It’s a matter of how long you can wait, though. And all the complications of how to make something that’s random. Cats won’t do it.
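The “how long you can wait” part can be given a rough number. A Python sketch, assuming each keystroke is one of 26 letters struck uniformly at random (a simplification: real waiting times for patterns that can overlap themselves are subtler):

```python
# Expected number of independent 6-letter tries before "banana"
# turns up as one particular block of six keystrokes.
alphabet = 26
word = "banana"
chance = (1 / alphabet) ** len(word)   # chance one block matches
expected_tries = 1 / chance
print(f"{expected_tries:,.0f}")        # about 308,915,776 blocks of six
```

Six letters already demands hundreds of millions of tries on average; a sonnet’s worth of text pushes the waiting time past any physically meaningful span, which is the practical answer to the cats.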

Mel Henze’s Gentle Creatures for the 29th of September is a rerun. I’m not sure when it was first printed. But it does use “ability to do mathematics” as a shorthand for “is intelligent at all”. That’s flattering to put in front of a mathematician, but I don’t think that’s really fair.

Paul Trap’s Thatababy for the 30th of September is a protest about using mathematics in real life. I’m surprised Thatababy’s Dad had an algebra teacher proclaiming differential equations would be used. Usually teachers assert that whatever they’re teaching will be useful, which is how we provide motivation.

## Measure.

Before painting a room you should spackle the walls. This fills up small holes and cracks. My father is notorious for using enough spackle to appreciably diminish the room’s volume. (So says my mother. My father disagrees.) I put spackle on as if I were paying for it myself, using so little my father has sometimes asked when I’m going to put any on. I’ll get to mathematics in the next paragraph.

One of the natural things to wonder about a set — a collection of things — is how big it is. The “measure” of a set is how we describe how big a set is. If we’re looking at a set that’s a line segment within a longer line, the measure pretty much matches our idea of length. If we’re looking at a shape on the plane, the measure matches our idea of area. A solid in space we expect has a measure that’s like the volume.

We might say the cracks and holes in a wall are as big as the amount of spackle it takes to fill them. Specifically, we mean it’s the least bit of spackle needed to fill them. And similarly we describe the measure of a set in terms of how much it takes to cover it. We even call this “covering”.

We use the tool of “cover sets”. These are sets with a measure — a length, a volume, a hypervolume, whatever — that we know. If we look at regular old normal space, these cover sets are typically circles or spheres or similar nice, round sets. They’re familiar. They’re easy to work with. We don’t have to worry about how to orient them, the way we might if we had square or triangular covering sets. These covering sets can be as small or as large as you need. And we suppose that we have some standard reference. This is a covering set with measure 1, this with measure 1/2, this with measure 24, this with measure 1/72.04, and so on. (If you want to know what units these measures are in, they’re “units of measure”. What we’re interested in is unchanged whether we measure in “inches” or “square kilometers” or “cubic parsecs” or something else. It’s just longer to say.)

You can imagine this as a game. I give you a set; you try to cover it. You can cover it with circles (or spheres, or whatever fits the space we’re in) that are big, or small, or whatever size you like. You can use as many as you like. You can cover more than just the things in the set I gave you. The only absolute rule is you must not miss anything, even one point, in the set I give you. Find the smallest total area of the covering circles you use. That smallest total area that covers the whole set is the measure of that set.

Generally, measure matches pretty well the intuitive feel we might have for length or area or volume. And the idea extends to things that don’t really have areas. For example, we can study the probability of events by thinking of the space of all possible outcomes of an experiment, like all the ways twenty coins might come up. We find the measure of the set of outcomes we’re interested in, like all the sets that have ten tails. The probability of the outcome we’re interested in is the measure of the set we’re interested in divided by the measure of the set of all possible outcomes. (There’s more work to do to make this quite true. In an advanced probability course we do this work. Please trust me that we could do it if we had to. Also you see why we stride briskly past the discussion of units. What unit would make sense for measuring “the space of all possible outcomes of an experiment” anyway?)
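The coin-flip example can be worked out exactly, since for a finite space the “measure” of a set of outcomes is just a count. A quick Python check:

```python
from math import comb

# Measure of "exactly ten tails" divided by the measure of
# all possible outcomes of twenty coin flips.
favorable = comb(20, 10)   # number of flip-sequences with ten tails
total = 2 ** 20            # number of possible flip-sequences
print(favorable / total)   # about 0.176
```

Dividing one measure by the other is what makes the units question go away: whatever unit the two measures carried, it cancels in the ratio.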

But there are surprises. For example, there’s the Cantor set. The easiest way to make the Cantor set is to start with a line of length 1 — of measure 1 — and take out the middle third. This produces two line segments of length, measure, 1/3 each. Take out the middle third of each of those segments. This leaves four segments each of length 1/9. Take out the middle third of each of those four segments, producing eight segments, and so on. If you do this infinitely many times you’ll create a set that has measure zero; it fills no space, it has no length. And yet you can prove there are just as many points in this set as there are in the whole real number line. Somehow merely having a lot of points doesn’t mean they fill space.
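You can watch the length drain away step by step. A short Python sketch (the function name is my own):

```python
from fractions import Fraction

def remaining_length(n):
    """Total length left after n rounds of removing middle thirds:
    each round keeps 2/3 of what the previous round kept."""
    return Fraction(2, 3) ** n

for n in (1, 2, 3, 10, 50):
    print(n, float(remaining_length(n)))
# The remaining length (2/3)^n shrinks toward zero, yet every
# endpoint of every removed interval stays in the set forever.
```

The endpoints alone already make an infinite set, and the full Cantor set is vastly bigger than just the endpoints, all squeezed into zero length.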

Measure is useful not just because it can give us paradoxes like that. We often want to say how big sets, or subsets, of whatever we’re interested in are. And using measure lets us adapt things like calculus to become more powerful. We’re able to say what the integral is for functions that are much more discontinuous, more chopped up, than ones that high school or freshman calculus can treat, for example. The idea of measure takes length and area and such and makes it more abstract, giving it great power and applicability.

## Reading the Comics, April 6, 2015: Little Infinite Edition

As I warned, there were a lot of mathematically-themed comic strips the last week, and here I can at least get us through the start of April. This doesn’t include the strips that ran today, the 7th of April by my calendar, because I have to get some serious-looking men to look at my car and I just know they’re going to disapprove of what my CV joint covers look like, even though I’ve done nothing to them. But I won’t be reading most of today’s comic strips until after that’s done, and so commenting on them later.

Mark Anderson’s Andertoons (April 3) makes its traditional appearance in my roundup, in this case with a business-type guy declaring infinity to be “the loophole of all loopholes!” I think that’s overstating things a fair bit, but strange and very counter-intuitive things do happen when you try to work out a problem in which infinities turn up. For example: in ordinary arithmetic, the order in which you add together a bunch of real numbers makes no difference. If you want to add together infinitely many real numbers, though, it is possible to have them add to different numbers depending on what order you add them in. Most unsettlingly, it’s possible to have infinitely many real numbers add up to literally any real number you like, depending on the order in which you add them. And then things get really weird.
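If you’d like to see that rearrangement in action, here is a Python sketch of the usual greedy scheme applied to the alternating series 1 − 1/2 + 1/3 − 1/4 + …: take positive terms while the running total is below the target, negative terms while it is above (my own toy code, not any standard routine):

```python
def rearranged_partial_sum(target, terms=10000):
    """Greedily reorder 1 - 1/2 + 1/3 - ... so its partial
    sums are steered toward `target`."""
    pos = iter(1 / n for n in range(1, 10**7, 2))    # 1, 1/3, 1/5, ...
    neg = iter(-1 / n for n in range(2, 10**7, 2))   # -1/2, -1/4, ...
    total = 0.0
    for _ in range(terms):
        total += next(pos) if total <= target else next(neg)
    return total

print(rearranged_partial_sum(2.0))   # hovers near 2.0
print(rearranged_partial_sum(-1.0))  # hovers near -1.0
```

The trick works because the positive terms alone diverge and the negative terms alone diverge, while the individual terms shrink to zero; any target can be overshot and corrected by ever-smaller amounts. Series without that delicate balance — absolutely convergent ones — sum to the same value in every order.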

Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips (April 3) is the other strip in this roundup to at least name-drop infinity. I confess I don’t see how “being infinite” would help in bringing about world peace, but I suppose being finite hasn’t managed the trick just yet so we might want to think outside the box.

## Reading the Comics, March 26, 2015: Kind Of Hanging Around Edition

I’m sorry to have fallen silent the last few days; it’s been a bit busy and I’ve been working on follow-ups to a couple of threads. Fortunately Comic Strip Master Command is still around and working to make sure I don’t disappear altogether, and I have a selection of comic strips which at least include a Jumble word puzzle, which should be a fun little diversion.

Tony Rubino and Gary Markstein’s Daddy’s Home (March 23) asks what seems like a confused question to me, “if you believe in infinity, does that mean anything is possible?” As I say, I’m not sure I understand how belief in infinity comes into play, but that might just reflect my background: I’ve been thoroughly convinced that one can describe collections of things that have infinitely many elements — the counting numbers, rectangles, continuous functions — as well as that one can subdivide things — like segments of a number line — infinitely many times — as well as of quantities that are larger than any finite number and so must be infinitely large; so, what’s to not believe in? (I’m aware that there are philosophical and theological questions that get into things termed “potential” and “actual” infinities, but I don’t understand the questions those terms are meant to address.) The phrasing of “anything is possible” seems obviously flawed to me. But if we take it to mean instead “anything not logically inconsistent or physically prohibited is possible” then we seem to have a reasonable question, if that hasn’t just reduced to “anything not impossible is possible”. I guess ultimately I just wonder if the kid is actually trying to understand anything or if he’s just procrastinating.

## Reading the Comics, March 4, 2015: Driving Me Crazy Edition

I like it when there are themes to these collections of mathematical comics, but since I don’t decide what subjects cartoonists write about — Comic Strip Master Command does — it depends on luck and my ability to dig out loose connections to find any. Sometimes, a theme just drops into my lap, though, as with today’s collection: several cartoonists tossed off bits that had me double-checking their work and trying to figure out what it was I wasn’t understanding. Ultimately I came to the conclusion that they just made mistakes, and that’s unnerving since how could a mathematical error slip through the rigorous editing and checking of modern comic strips?

Mac and Bill King’s Magic in a Minute (March 1) tries to show off how to do a magic trick based on parity, using the spots on a die to tell whether it was turned in one direction or another. It’s a good gimmick, and parity — whether something is odd or even — can be a great way to encode information or to do simple checks against slight errors. That said, I believe the Kings made a mistake in describing the system: I can’t figure out how the parity of the three sides of a die facing you could not change, from odd to even or from even to odd, as the die is rotated one turn. I believe they mean that you should just count the dots on the vertical sides, so that for example in the “Howdy Do It?” panel in the lower right corner, add two and one to make three. But with that corrected it should be a good trick.

## How To Build Infinite Numbers

I had missed it, as mentioned in the above tweet. The link is to a page on the Form And Formalism blog, reprinting a translation of one of Georg Cantor’s papers in which he founded the modern understanding of sets, of infinite sets, and of infinitely large numbers. Although it gets into pretty heady topics, it doesn’t actually require a mathematical background, at least as I look at it; it just requires a willingness to follow long chains of reasoning, which I admit is much harder than algebra.

Cantor — whom I’d talked a bit about in a recent Reading The Comics post — was deeply concerned and intrigued by infinity. His paper enters into that curious space where mathematics, philosophy, and even theology blend together, since it’s difficult to talk about the infinite without people thinking of God. I admit the philosophical side of the discussion is difficult for me to follow, and the theological side harder yet, but a philosopher or theologian would probably have symmetric complaints.

The translation is provided as scans of a typewritten document, so you can see what it was like trying to include mathematical symbols in non-typeset text in the days before LaTeX (which is great at it, but requires annoying amounts of setup) or HTML (which is mediocre at it, but requires less setup) or Word (I don’t use Word) were available. Somehow, folks managed to live through times like that, but it wasn’t pretty.