## Reading the Comics, March 14, 2021: Pi Day Edition

I was embarrassed, on looking at old Pi Day Reading the Comics posts, to see how often I observed there were fewer Pi Day comics than I expected. There was not a shortage this year. This even though, if Pi Day has any value, it’s as an educational event, and there should be no in-person educational events while the pandemic is still on. Of course one can still do educational stuff remotely, mathematics especially. But after a year of watching teaching on screens and sometimes doing projects at home, it’s hard for me to imagine a bit more of that being all that fun.

But Pi Day being a Sunday did give cartoonists more space to explain what they’re talking about. This is valuable. It’s easy for the dreadfully online, like me, to forget that most people haven’t heard of Pi Day. Most people don’t have any idea why that should be a thing or what it should be about. This seems to have freed up many people to write about it. But — to write what? Let’s take a quick tour of my daily comics reading.

Tony Cochran’s Agnes starts with some talk about Daylight Saving Time. Agnes and Trout don’t quite understand how it works, and get from there to Pi Day. Or as Agnes says, Pie Day, missing the mathematics altogether in favor of the food.

Scott Hilburn’s The Argyle Sweater is an anthropomorphic-numerals joke. It’s a bit risqué compared to the sort of thing you expect to see around here. The reflection of the numerals is correct, but it bothered me too.

Georgia Dunn’s Breaking Cat News is a delightful cute comic strip. It doesn’t mention mathematics much. Here the cat reporters do a fine job explaining what Pi Day is and why everybody spent Sunday showing pictures of pies. This could almost be the standard reference for all the Pi Day strips.

Bill Amend’s FoxTrot is one of the handful that don’t mention pie at all. It focuses on representing the decimal digits of π. At least within the confines of something someone might write in the American dating system. The logic of it is a bit rough, but if we’ve accepted 3-14 to represent 3.14, we can accept 1:59 as standing in for the 0.00159 of the original number. And the remaining 0.0015926 (etc) of the original number can be represented however you like. If we accept that time is continuous, then there’s some moment on the 14th of March which matches it perfectly.
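The digit-matching can be made concrete. Here’s a quick sketch, in Python, of reading the digits of π after the 3.14 as a time of day on March 14; the year and the variable names are my own choices, not anything from the strip.

```python
from datetime import datetime, timedelta

PI_DIGITS = "14159265358979"  # digits of pi after the decimal point

# Read 3-14 as the date, then 1:59:26 (and change) as the time of day.
hour = int(PI_DIGITS[2])                # 1
minute = int(PI_DIGITS[3:5])            # 59
second = int(PI_DIGITS[5:7])            # 26
fraction = float("0." + PI_DIGITS[7:])  # 0.5358979... of a second

moment = datetime(2021, 3, 14, hour, minute, second) + timedelta(seconds=fraction)
print(moment)  # 2021-03-14 01:59:26.535898
```

Time being continuous, the real π-moment has infinitely many more digits than a `datetime` can hold; this stops at microseconds.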

Jef Mallett’s Frazz talks about the elision of π into pie for the 14th of March. The strip wonders a bit what kind of joke it is, exactly. It’s a nerd pun, or at least nerd wordplay. If I had to cast a vote I’d call it a language gag: if they celebrated Pi Day in Germany, there would not be any comic strips calling it Tortentag.

Steenz’s Heart of the City is another of the pi-pie comics. I do feel for Heart’s bewilderment at hearing π explained at length. Also for Kat, whose desire to explain mathematics overwhelms her audience. It’s a feeling I struggle with too. The thing is, it’s a lot of fun to explain things. It’s so much fun you can lose track of whether you’re still communicating. If you set off one of these knowledge-floods from a friend? Try to hold on, look interested, and remember any single piece of it. You are doing so much good for your friend. And if you realize you’re knowledge-flooding someone? Yeah, try not to overload them, but think about the things that are exciting about this. Your enthusiasm will communicate when your words do not.

Dave Whamond’s Reality Check is a pi-pie joke that doesn’t rely on actual pie. Well, there’s a small slice in the corner. It relies on the infinite length of the decimal representation of π. (Or its representation in any integer base.)

Michael Jantze’s Studio Jantze ran on Monday instead, although the caption suggests it was intended for Pi Day. So I’m including it here. And it’s the last of the strips sliding the day over to pie.

But there were a couple of comic strips with some mathematics mention that were not about Pi Day. It may have been coincidence.

Sandra Bell-Lundy’s Between Friends is of the “word problem in real life” kind. It’s a fair enough word problem, though, asking about how long something would take. From the premises, it takes a hair seven weeks to grow one-quarter inch, and it gets trimmed one quarter-inch every six weeks. It’s making progress, but it might be easier to pull out the entire grey hair. This won’t help things though.

Darby Conley’s Get Fuzzy is a rerun, as all Get Fuzzy strips are. It first (I think) ran the 13th of September, 2009. And it’s another Infinite Monkeys comic strip, built on how a random process should be able to create specific outcomes. As often happens when joking about monkeys writing Shakespeare, some piece of pop culture is treated as being easier. But for these problems the meaning of the content doesn’t count. Only the length counts. A monkey typing “let it be written in eight and eight” is as improbable as a monkey typing “yrg vg or jevggra va rvtug naq rvtug”. It’s on us that we find one of those more impressive than the other.
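The length argument is easy to sketch in code. Here the 27-key alphabet is a toy assumption of my own, not anything from the strip:

```python
import string

KEYS = string.ascii_lowercase + " "  # 27 equally likely keys, a toy assumption

def chance_of_typing(text: str) -> float:
    """Probability that len(text) uniform random keystrokes spell `text` exactly."""
    return (1 / len(KEYS)) ** len(text)

plain = "let it be written in eight and eight"
rot13 = "yrg vg or jevggra va rvtug naq rvtug"  # the same text, ROT13'd
print(chance_of_typing(plain) == chance_of_typing(rot13))  # True: same length
```

The meaning never enters the function; only `len(text)` does, which is the whole point.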

And this wraps up my Pi Day comic strips. I don’t promise that I’m back to reading the comics for their mathematics content regularly. But I have done a lot of it, and figure to do it again. All my Reading the Comics posts appear at this link. Thank you for reading and I hope you had some good pie.

I don’t know how Andertoons didn’t get an appearance here.

## My Things for Pi Day

I regret not having the time or energy to write something original about π for today. I hope you’ll accept this offering of past Reading the Comics posts covering the day, and some of my other π-related writings:

For the Pi Day Of The Century (3/14/15) I wrote Calculating Pi Terribly. It’s about a legitimate way to calculate the digits of π, using so very much work that nobody will ever do it. But the wonderful thing about it is it’s experimental. And it doesn’t involve something with an obvious circle. A couple months later I followed up with Calculating Pi Less Terribly, using one of the most famous numerical methods for calculating the digits of π. It’s still not that good, but it’s far better than the experimental approach.

In 2019, as part of that year’s A-to-Z, I wrote more extensively about Buffon’s Needle problem, the core of that experimental method for finding digits of π.

And then there are comic strips. I seem to complain every year that there are fewer Pi Day comic strips than I expected, which invites the question of just what I expect. Here’s, as best I can tell, the actual record:

I have not yet read today’s comics, so don’t know what they’ll offer. We shall see! Also, I apologize but some of the comics may have been removed from GoComics or Comics Kingdom, and so the links may be dead. I’m not happy about that. But if I wanted the essays discussing these strips to stay permanently sensible I’d have posted the comics on my own web site.

And one last thing, bringing up an essay I’ve shared before. The End 2016 Mathematics A To Z: Normal Numbers is … maybe … about π. Nobody knows whether π is a normal number. It most likely is, but we haven’t been able to prove it.

And the last thing. When I thought I would have time this March, I hoped to write something about how π can be defined starting from differential equations. Things changed my plans out from under me. But my 2020 A-to-Z essay on the Exponential gets at some of why π should turn up in the correct differential equation. That essay sets you up more to understand the famous equation $e^{\pi \imath} + 1 = 0$. But it’s not too far from there to getting π out of solving $y''(t) = -y(t)$ in the right circumstances. I may get to writing that one yet.
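As a small taste of that yet-unwritten essay, here’s a numerical sketch (my own construction, not something from the Exponential essay) of π falling out of $y''(t) = -y(t)$. Start at $y = 1$, $y' = 0$; the solution is the cosine, so $y$ first reaches zero at $t = \pi/2$.

```python
def first_zero(dt: float = 1e-5) -> float:
    """Step y'' = -y from y=1, y'=0 until y crosses zero; returns t near pi/2."""
    y, v, t = 1.0, 0.0, 0.0
    while y > 0:
        v += -y * (dt / 2)  # velocity-Verlet half-kick
        y += v * dt         # drift
        v += -y * (dt / 2)  # half-kick
        t += dt
    return t

print(2 * first_zero())  # 3.14159..., accurate to roughly the step size
```

No circle in sight, and yet there π is, as the first zero of a solution to the differential equation.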

## I forgot to ever post this magical short story plot

The Magic Realism Bot Twitter feed, which every four hours generates a fanciful plot, offered this bit of whimsy earlier this month.

A good number of people would like that crystal ball.

## My 2019 Mathematics A To Z: Wallis Products

Today’s A To Z term was suggested by Dina Yagodich, whose YouTube channel features many topics, including calculus and differential equations, statistics, discrete math, and Matlab. Matlab is especially valuable to know as a good quick calculation can answer many questions.

# Wallis Products.

The Wallis named here is John Wallis, an English clergyman, mathematician, and cryptographer. His most tweetable credit is that we follow his lead in using the symbol ∞ to represent infinity. But he did much in calculus, and it’s a piece of that which brings us to today. He particularly noticed this:

$\frac{1}{2}\pi = \frac{2}{1}\cdot \frac{2}{3}\cdot \frac{4}{3}\cdot \frac{4}{5}\cdot \frac{6}{5}\cdot \frac{6}{7}\cdot \frac{8}{7}\cdot \frac{8}{9}\cdot \frac{10}{9}\cdot \frac{10}{11}\cdots$

This is an infinite product. It’s multiplication’s answer to the infinite series. It always amazes me when an infinite product works. There are dangers when you do anything with an infinite number of terms. Even the basics of arithmetic, like that you can change the order in which you calculate but still get the same result, break down. Series, in which you add together infinitely many things, are risky, but I’m comfortable with the rules to know when the sum can be trusted. Infinite products seem more mysterious. Then you learn an infinite product converges if and only if the series made from the logarithms of the terms in it also converges. Then infinite products seem less exciting.

There are many infinite products that give us π. Some work quite efficiently, giving us lots of digits for a few terms’ work. Wallis’s formula does not. We need about a thousand terms for it to get us a π of about 3.141. This is a bit much to calculate even today, never mind in 1656, when he published it in Arithmetica Infinitorum (a book I have never read). Wallis was, granted, able to do mental arithmetic well. His biography at St Andrews says that once, when having trouble sleeping, he calculated the square root of a 53-digit number in his head, and in the morning remembered it, and was right. Still, this would be a lot of work. How could Wallis possibly do it? And what work could possibly convince anyone else that he was right?
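We can check the thousand-terms claim directly. A quick sketch, grouping the factors of the product in pairs:

```python
def wallis_pi(n_pairs: int) -> float:
    """Approximate pi from n_pairs paired factors of the Wallis product."""
    product = 1.0
    for k in range(1, n_pairs + 1):
        # each pair is (2k)/(2k-1) times (2k)/(2k+1)
        product *= (2 * k) / (2 * k - 1) * ((2 * k) / (2 * k + 1))
    return 2 * product  # the product itself converges to pi/2

print(wallis_pi(10))    # about 3.0677
print(wallis_pi(1000))  # about 3.1408
```

A thousand pairs of factors and we still only have π to a part in a few thousand, which is exactly the slowness complained about above.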

As is common to striking discoveries, it was a mixture of insight and luck and persistence and pattern recognition. He seems to have started with pondering the value of

$\int_0^1 \left(1 - x^2\right)^{\frac{1}{2}} dx$

Happily, he knew exactly what this was: $\frac{1}{4}\pi$. He knew this because of a bit of insight. We can interpret the integral here as asking for the area that’s enclosed, on a Cartesian coordinate system, by the positive x-axis, the positive y-axis, and the set of points which makes true the equation $y = \left(1 - x^2\right)^\frac{1}{2}$. This curve is the upper half of a circle with radius 1 and centered on the origin. The area enclosed by all this is one-fourth the area of a circle of radius 1. So that’s how he could know the value of the integral, without doing any symbol manipulation.

The question, in modern notation, would be whether he could do that integral. And, for this? He couldn’t. But, unable to do the problem he wanted, he tried the most similar problem he could do, to see what that might prove. $\left(1 - x^2\right)^{\frac{1}{2}}$ was beyond his power to integrate; but what if he swapped those exponents? Worked on $\left(1 - x^{\frac{1}{2}}\right)^2$ instead? This would not — could not — give him what he was interested in. But it would give him something he could calculate. So can we:

$\int_0^1 \left(1 - x^{\frac{1}{2}}\right)^2 dx = \int_0^1 1 - 2x^{\frac{1}{2}} + x dx = 1 - 2\cdot\frac{2}{3} + \frac{1}{2} = \frac{1}{6}$

And now here comes persistence. What if it’s not $x^{\frac{1}{2}}$ inside the parentheses there? If it’s x raised to some other unit fraction instead? What if the parentheses aren’t raised to the second power, but to some other whole number? Might that reveal something useful? Each of these integrals is calculable, and he calculated them. He worked out a table for many values of

$\int_0^1 \left(1 - x^{\frac{1}{p}}\right)^q dx$

for different sets of whole numbers p and q. He trusted that if he kept this up, he’d find some interesting pattern. And he did. The integral, for example, always turns out to be a unit fraction. And there’s a deeper pattern. Let me share results for different values of p and q; the integral is the reciprocal of the number inside the table. The topmost row is values of q; the leftmost column is values of p.

| p \ q | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 1 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| 2 | 1 | 3 | 6 | 10 | 15 | 21 | 28 | 36 |
| 3 | 1 | 4 | 10 | 20 | 35 | 56 | 84 | 120 |
| 4 | 1 | 5 | 15 | 35 | 70 | 126 | 210 | 330 |
| 5 | 1 | 6 | 21 | 56 | 126 | 252 | 462 | 792 |
| 6 | 1 | 7 | 28 | 84 | 210 | 462 | 924 | 1716 |
| 7 | 1 | 8 | 36 | 120 | 330 | 792 | 1716 | 3432 |

There is a deep pattern here, although I’m not sure Wallis noticed that one. Look along the diagonals, running from lower-left to upper-right. These are the coefficients of the binomial expansion. Yang Hui’s triangle, if you prefer. Pascal’s triangle, if you prefer that. Let me call the term in row p, column q of this table $a_{p, q}$. Then

$a_{p, q} = \frac{(p + q)!}{p!\, q!}$
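This is easy to spot-check numerically. Here’s a sketch using a crude midpoint-rule integration; the step count is an arbitrary choice of mine.

```python
from math import comb

def table_entry(p: int, q: int, steps: int = 200_000) -> float:
    """Reciprocal of the integral of (1 - x^(1/p))^q over [0, 1], midpoint rule."""
    h = 1 / steps
    integral = sum((1 - ((i + 0.5) * h) ** (1 / p)) ** q for i in range(steps)) * h
    return 1 / integral

# The entry in row p, column q should be the binomial coefficient C(p + q, q).
print(round(table_entry(3, 4)), comb(3 + 4, 4))  # 35 35
```

Wallis, of course, had no numerical integrator; he had to do each of these integrals exactly, by hand.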

Great material, anyway. The trouble is that it doesn’t help Wallis with the original problem, which — in this notation — would have $p = \frac12$ and $q = \frac12$. What he really wanted was the generalized Binomial Theorem, but Western mathematicians didn’t know it yet. Here a bit of luck comes in. He had noticed there’s a relationship between terms in one column and terms in another; particularly, that

$a_{p, q} = \frac{p + q}{q} a_{p, q - 1}$

So why shouldn’t that hold if p and q aren’t whole numbers? … We would today say: why should it hold? But Wallis was working with a different idea of mathematical rigor. He made assumptions that turned out, in this case, to be correct. Of course, had he been wrong, we wouldn’t have heard of any of this and I would have an essay on some other topic.

With luck in Wallis’s favor we can go back to making a table. What would the row for $p = \frac12$ look like? We’ll need both whole and half-integer values of q. $p = \frac12, q = 0$ is easy; the integral is 1, so its reciprocal is 1. $p = \frac12, q = 1$ is the integral of $1 - x^2$, which is $\frac{2}{3}$; its reciprocal is $\frac{3}{2}$. $p = \frac12, q = \frac12$ is also easy; that’s the insight Wallis had to start with. Its reciprocal is $\frac{4}{\pi}$. What about the rest? Use the equation just up above, relating $a_{p, q}$ to $a_{p, q - 1}$; then we can start to fill in:

| p \ q | 0 | 1/2 | 1 | 3/2 | 2 | 5/2 | 3 | 7/2 |
|---|---|---|---|---|---|---|---|---|
| 1/2 | 1 | $\frac{4}{\pi}$ | $\frac{3}{2}$ | $\frac{4}{3}\frac{4}{\pi}$ | $\frac{3\cdot 5}{2\cdot 4}$ | $\frac{2\cdot 4}{5}\frac{4}{\pi}$ | $\frac{3\cdot 5\cdot 7}{2\cdot 4\cdot 6}$ | $\frac{2\cdot 2\cdot 4\cdot 4}{5\cdot 7}\frac{4}{\pi}$ |

Anything we can learn from this? … Well, sure. For one, as we go left to right, all these entries are increasing. So, like, the second column is less than the third which is less than the fourth. Here’s a triple inequality for you:

$\frac{4}{\pi} < \frac{3}{2} < \frac{4}{3}\frac{4}{\pi}$

Multiply all that through by, oh, $\frac{\pi}{2}$. And then divide it all through by $\frac{3}{2}$. What have we got?

$\frac{2\cdot 2}{3} < \frac{\pi}{2} < \frac{2\cdot 2}{3}\cdot \frac{2\cdot 2}{3}$

I did some rearranging of terms, but, that’s the pattern. One-half π has to be between $\frac{2\cdot 2}{3}$ and four-thirds that.

Move over a little. Start from the entry where $q = \frac32$. This starts us out with

$\frac{4}{3}\frac{4}{\pi} < \frac{3\cdot 5}{2\cdot 4} < \frac{2\cdot 4}{5}\frac{4}{\pi}$

Multiply everything by $\frac{\pi}{2}$, and divide everything by $\frac{3\cdot 5}{2\cdot 4}$, and follow with some symbol manipulation. And here’s a tip which would have saved me some frustration working out my notes: $\frac{4}{\pi}\cdot\frac{\pi}{2} = 2$. Also, 6 equals 2 times 3. Later on, you may want to remember that 8 equals 2 times 4. All this gets us eventually to

$\frac{2\cdot 2\cdot 4\cdot 4}{3\cdot 3\cdot 5} < \frac{\pi}{2} < \frac{2\cdot 2\cdot 4\cdot 4}{3\cdot 3\cdot 5}\cdot \frac{6}{5}$

Move over to the next terms, starting from $q = \frac52$. This will get us eventually to

$\frac{2\cdot 2\cdot 4\cdot 4 \cdot 6 \cdot 6}{3\cdot 3\cdot 5\cdot 5\cdot 7} < \frac{\pi}{2} < \frac{2\cdot 2\cdot 4\cdot 4 \cdot 6 \cdot 6}{3\cdot 3\cdot 5\cdot 5\cdot 7}\cdot \frac{8}{7}$

You see the pattern here. Whatever the value of $\frac{\pi}{2}$, it’s squeezed between some number, on the left side of this triple inequality, and that same number times … uh … something like $\frac{10}{9}$ or $\frac{12}{11}$ or $\frac{14}{13}$ or $\frac{1,000,000,000,002}{1,000,000,000,001}$. That last one is a number very close to 1. So the conclusion is that $\frac{\pi}{2}$ has to equal whatever that pattern is making for the number on the left there.
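Here’s a small sketch of that squeeze, building the left and right sides of the triple inequality for a growing number of factors:

```python
def squeeze(n: int) -> tuple[float, float]:
    """Bounds with pi/2 between them, after n factors of the pattern."""
    left = 1.0
    for k in range(1, n + 1):
        # each step contributes (2k)(2k) upstairs and (2k-1)(2k+1) downstairs
        left *= (2 * k) * (2 * k) / ((2 * k - 1) * (2 * k + 1))
    right = left * (2 * n + 2) / (2 * n + 1)  # the extra 6/5-or-8/7-style factor
    return left, right

low, high = squeeze(1000)
print(low < 3.141592653589793 / 2 < high)  # True, and the gap is already tiny
```

For n = 2 this gives exactly the $\frac{2\cdot 2\cdot 4\cdot 4}{3\cdot 3\cdot 5}$ bounds worked out above; push n higher and the two sides pinch together on $\frac{\pi}{2}$.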

We can make this more rigorous. Like, we don’t have to just talk about squeezing the number we want between two nearly-equal values. We can rely on the use of the … Squeeze Theorem … to prove this is okay. And there’s much we have to straighten out. Particularly, we really don’t want to write out expressions like

$\frac{2\cdot 2 \cdot 4\cdot 4\cdot 6\cdot 6\cdot 8\cdot 8 \cdot 10\cdot 10 \cdots}{3\cdot 3\cdot 5\cdot 5 \cdot 7\cdot 7 \cdot 9\cdot 9 \cdot 11\cdot 11 \cdots}$

Put that way, it looks like, well, we can divide each 3 in the denominator into a 6 in the numerator to get a 2, each 5 in the denominator into a 10 in the numerator to get a 2, and so on. We get a product that’s infinitely large, instead of anything to do with π. This is that problem where arithmetic on infinitely long strings of things becomes dangerous. To be rigorous, we need to write this product as the limit of a sequence, with finite numerator and denominator, and be careful about how we compose the numerators and denominators.

But this is all right. Wallis found a lovely result and in a way that’s common to much work in mathematics. It used a combination of insight and persistence, with pattern recognition and luck making a great difference. Often when we first find something the proof of it is rough, and we need considerable work to make it rigorous. The path that got Wallis to these products is one we still walk.

There’s just three more essays to go this year! I hope to have the letter X published here, Thursday. All the other A-to-Z essays for this year are also at that link. And past A-to-Z essays are at this link. Thanks for reading.

## My 2019 Mathematics A To Z: Buffon’s Needle

Today’s A To Z term was suggested by Peter Mander. Mander authors CarnotCycle, which when I first joined WordPress was one of the few blogs discussing thermodynamics in any detail. When I last checked it still was, which is a shame. Thermodynamics is a fascinating field. It’s as deeply weird and counter-intuitive and important as quantum mechanics. Yet its principles are as familiar as a mug of warm tea on a chilly day. Mander writes at a more technical level than I usually do. But if you’re comfortable with calculus, or if you’re comfortable nodding at a line and agreeing that he wouldn’t fib to you about a thing like calculus, it’s worth reading.

# Buffon’s Needle.

I’ve written of my fondness for boredom. A bored mind is not one lacking stimulation. It is one stimulated by anything, however petty. And in petty things we can find great surprises.

I do not know what caused Georges-Louis Leclerc, Comte de Buffon, to discover the needle problem named for him. It seems like something born of a bored but active mind. Buffon had an active mind: he was one of Europe’s most important naturalists of the 1700s. He also worked in mathematics, and astronomy, and optics. It shows what one can do with an engaged mind and a large inheritance from one’s childless uncle who’s the tax farmer for all Sicily.

The problem, though. Imagine dropping a needle on a floor that has equally spaced parallel lines. What is the probability that the needle will land on any of the lines? It could occur to anyone with a wood floor who’s dropped a thing. (There is a similar problem which would occur to anyone with a tile floor.) They have only to be ready to ask the question. Buffon did this in 1733. He had it solved by 1777. We, with several centuries’ insight into probability and calculus, need less than 44 years to solve the question.

Let me use L as the length of the needle. And d as the spacing of the parallel lines. If the needle’s length is less than the spacing then this is an easy formula to write, and not too hard to calculate. The probability, P, of the needle crossing some line is:

$P = \frac{2}{\pi}\frac{L}{d}$

I won’t derive it rigorously. You don’t need me for that. The interesting question is whether this formula makes sense. That L and d are in it? Yes, that makes sense. The length of the needle and the gap between lines have to be in there. More, the probability has to have the ratio between the two. There are different ways to argue this. Dimensional analysis convinces me, at least. Probability is a pure number. L is a measurement of length; d is a measurement of length. To get a pure number starting with L and d means one of them has to divide into the other. That L is in the numerator and d the denominator makes sense. A tiny needle has a tiny chance of crossing a line. A large needle has a large chance. That $\frac{L}{d}$ is raised to the first power, rather than the second or third or such … well, that’s fair. A needle twice as long having twice the chance of crossing a line? That sounds more likely than a needle twice as long having four times the chance, or eight times the chance.

Does the 2 belong there? Hard to say. 2 seems like a harmless enough number. It appears in many respectable formulas. That π, though …

That π …

π comes to us from circles. We see it in calculations about circles and spheres all the time. We’re doing a problem with lines and line segments. What business does π have showing up?

We can find reasons. One way is to look at a similar problem. Imagine dropping a disc on these lines. What’s the chance the disc falls across some line? That’s the chance that the center of the disc is less than one radius from any of the lines. Suppose the disc has an equal chance of landing anywhere on the floor, and call its diameter L. Then it has a probability of $\frac{L}{d}$ of crossing a line. If the diameter is smaller than the distance between lines, anyway. If the diameter is larger than that, the probability is 1.

Now draw a diameter line on this disc. What’s the chance that this diameter line crosses this floor line? That depends on a couple things. Whether the center of the disc is near enough a floor line. And what angle the diameter line makes with respect to the floor lines. If the diameter line is parallel the floor line there’s almost no chance. If the diameter line is perpendicular to the floor line there’s the best possible chance. But that angle might be anything.

Let me call that angle θ. The diameter line crosses a floor line if half the diameter times the sine of θ is more than the distance from the disc’s center to the nearest floor line. … Oh. Sine. Sine and cosine and all the trigonometry functions we get from studying circles, and how to draw triangles within circles. And this diameter-line problem looks the same as the needle problem. So that’s where π comes from.

I’m being figurative. I don’t think one can make a rigorous declaration that the π in the probability formula “comes from” this sine, any more than you can declare that the square-ness of a shape comes from any one side. But it gives a reason to believe that π belongs in the probability.

If the needle’s longer than the gap between floor lines, if $L > d$, there’s still a probability that the needle crosses at least one line. It never becomes certain. No matter how long the needle is it could fall parallel to all the floor lines and miss them all. The probability is instead:

$P = \frac{2}{\pi}\left(\frac{L}{d} - \sqrt{\left(\frac{L}{d}\right)^2 - 1} + \sec^{-1}\left(\frac{L}{d}\right)\right)$

Here $\sec^{-1}$ is the world-famous arcsecant function. That is, it’s whatever angle has as its secant the number $\frac{L}{d}$. I don’t mean to insult you. I’m being kind to the person reading this first thing in the morning. I’m not going to try justifying this formula. You can play with numbers, though. You’ll see that if $\frac{L}{d}$ is a little bit bigger than 1, the probability is a little more than what you get if $\frac{L}{d}$ is a little smaller than 1. This is reassuring.
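Playing with numbers, as suggested: here’s a sketch evaluating both formulas near $\frac{L}{d} = 1$, writing the arcsecant as the arccosine of the reciprocal since Python’s `math` module has no arcsecant of its own.

```python
import math

def p_short(ratio: float) -> float:
    """Crossing probability for a short needle, L/d = ratio <= 1."""
    return (2 / math.pi) * ratio

def p_long(ratio: float) -> float:
    """Crossing probability for a long needle, L/d = ratio >= 1."""
    # sec^{-1}(x) is acos(1/x)
    return (2 / math.pi) * (ratio - math.sqrt(ratio**2 - 1) + math.acos(1 / ratio))

print(p_short(0.99))  # about 0.630
print(p_long(1.01))   # about 0.642, a little more, as promised
```

The two formulas also agree exactly at $\frac{L}{d} = 1$, where both give $\frac{2}{\pi}$; that continuity is another reassuring check.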

The exciting thing is arithmetic, though. Use the probability of a needle crossing a line, for short needles. You can re-write it as this:

$\pi = 2\frac{L}{d}\frac{1}{P}$

L and d you can find by measuring needles and the lines. P you can estimate. Drop a needle many times over. Count how many times you drop it, and how many times it crosses a line. P is roughly the number of crossings divided by the number of needle drops. Doing this gives you a way to estimate π. This gives you something to talk about on Pi Day.
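A simulated version of the experiment, so nobody has to sweep up needles afterward; the needle length, line spacing, and seed here are my own choices:

```python
import math
import random

def estimate_pi(drops: int, L: float = 1.0, d: float = 2.0) -> float:
    """Estimate pi by dropping `drops` short needles (L <= d) on ruled lines."""
    crossings = 0
    for _ in range(drops):
        x = random.uniform(0, d / 2)            # center's distance to nearest line
        theta = random.uniform(0, math.pi / 2)  # needle's angle to the lines
        if x <= (L / 2) * math.sin(theta):
            crossings += 1
    return 2 * (L / d) * drops / crossings      # pi = 2 (L/d) (1/P)

random.seed(314)
print(estimate_pi(1_000_000))  # near 3.14, though only near
```

Even a million simulated drops typically pins down little more than the first couple of digits, which is the point made next.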

It’s a rubbish way to find π. It’s a lot of work, plus you have to sweep needles off the floor. Well, you can do it in simulation and avoid the risk of stepping on an overlooked needle. But it takes a lot of needle-drops to get good results. To be certain you’ve calculated the first two decimal places correctly requires 3,380,000 needle-drops. Yes, yes. You could get lucky and happen to hit on an estimate of 3.14 for π with fewer needle-drops. But if you were sincerely trying to calculate the digits of π this way? If you did not know what they were? You would need the three and a third million tries to be confident you had the number correct.

So this result is, as a practical matter, useless. It’s a heady concept, though. We think casually of randomness as … randomness. Unpredictability. Sometimes we will speak of the Law of Large Numbers. This is several theorems in probability. They all point to the same result. That if some event has (say) a probability of one-third of happening, then given 30 million chances, it will happen quite close to 10 million times.

This π result is another casting of the Law of Large Numbers, and of the apparent paradox that true unpredictability is itself predictable. There is no way to predict whether any one dropped needle will cross any line. It doesn’t even matter whether any one needle crosses any line. An enormous number of needles, tossed without fear or favor, will fall in ways that embed π. The same π you get from comparing the circumference of a circle to its diameter. The same π you get from looking at the arc-cosine of a negative one.

I suppose we could use this also to calculate the value of 2, but that somehow seems to touch lesser majesties.

Thank you again for reading. All of the Fall 2019 A To Z posts should be at this link. This year’s and all past A To Z sequences should be at this link. I’ve made my picks for next week’s topics, and am fooling myself into thinking I have a rough outline for them already. But I’m still open for suggestions for the letters E through H and appreciate suggestions.

## Reading the Comics, October 14, 2018: Possessive Edition

The first two comics for this essay have titles of the form Name’s Thing, so, that’s why this edition title. That’s good enough, isn’t it? And besides this series there was a Perry Bible Fellowship which at least depicted mathematical symbols. It’s a rerun, though, even among those shown on GoComics.com. It was rerun recently enough that I featured it around here back in June. It’s a bit risqué. But the strip was rerun the 12th. Maybe I also need to drop Perry Bible Fellowship from the roster of comics I read for this.

On to the comics I haven’t dropped.

Tony Rubino and Gary Markstein’s Daddy’s Home for the 11th tries using specific examples to teach mathematics. There’s strangeness to arithmetic. It’s about these abstract things like “thirty” and “addition” and such. But these things match very well the behaviors of discrete objects, ones that don’t blend together or shatter by themselves. So we can use the intuition we have for specific things to get comfortable working with the abstract. This doesn’t stop, either. Mathematicians like to work on general, abstract questions; they let us answer big swaths of questions all at once. But working out a specific case is usually easier, both to prove and to understand. I don’t know what’s the most advanced mathematics that could be usefully practiced by thinking about cupcakes. Probably something in group theory, in studying the rotations of objects that are perfectly, or nearly, rotationally symmetric.

John Zakour and Scott Roberts’s Maria’s Day for the 11th is a follow-up to a strip featured last week. Maria’s been getting help on her mathematics from one of her closet monsters. And includes the usual joke about Common Core being such a horrible thing that it must come from monsters. I don’t know whether in the comic strip’s universe the monster is supposed to be imaginary. (Usually, in a comic strip, the question of whether a character is imaginary-or-real is pointless. I think Richard Thompson’s Cul de Sac is the only one to have done something good with it.) But if the closet monster is in Maria’s imagination, it’s quite in line for her to think that teaching comes from some malevolent and inscrutable force.

Olivia Jaimes’s Nancy for the 12th features one of the first interesting mathematics questions you do in physics. This is often done with calculus. Not much, but more than Nancy and Esther could realistically have. It could be worked out experimentally, and that’s likely what the teacher was hoping for. Calculus isn’t really necessary, although it does show skeptical students there’s some value in all this d-dx business they’ve been working through. You can find the same answers by dimensional analysis, which is less intimidating. But you’d still need to know some trigonometry functions. That’s beyond whatever Nancy’s grade level is too. In any case, Nancy is an expert at identifying unstated assumptions, and working out loopholes in them. I’m curious whether the teacher would respect Nancy’s skill here. (The way the writing’s been going, I think she would.)

Francesco Marciuliano and Jim Keefe’s Sally Forth for the 13th is about new-friend Jenny trying to work out her relationship with Hilary-Faye-and-Nona. It’s a good bit of character work, but that is outside my subject here. In the last panel Nona admits she’s been talking, or at least thinking about τ versus π. This references a minor nerd-squabble that’s been going on a couple years. π is an incredibly well-known, useful number. It’s the only transcendental number you can expect a normal person to have ever heard of. Humans noticed it, historically, because the length of the circumference of a circle is π times the length of its diameter. Going between “the distance across” and “the distance around” turns out to be useful.

The thing is, many mathematical and physics formulas find it more convenient to write things in terms of the radius of a circle or sphere. And this makes 2π show up in formulas. A lot. Even in things that don’t obviously have circles in them. For example, the Gaussian distribution, which describes how much a sample looks like the population it’s sampled from, has 2π in it. So, the τ argument goes, why write out 2π in all these places? Why not decide that that’s the useful number to think about, give it the catchy name τ, and use that instead? All the interesting questions about π have exact, obvious parallel questions about τ. Any answers about one give us answers about the other. So why not make this switch and then … pocket the savings in having shorter formulas?

You may sense in me a certain skepticism. I don’t see where changing over gets us anything worth the bother. But there are fashions in mathematics as with everything else. Perhaps τ has some ability to clarify things in ways we’ll come to better appreciate.

This and my other Reading the Comics posts are at this link. Essays inspired by Daddy’s Home are at this link. Other essays that mention Maria’s Day discussions should be at this link. Essays with a mention of Nancy, old and new, are at this link. And essays in which Sally Forth gets discussed will be at this link. It’s a new tag today, which does surprise me.

## Wronski’s Formula For Pi: My Boring Mistake

Previously:

So, I must confess failure. Not about deciphering Józef Maria Hoëne-Wronski’s attempted definition of π. He’d tried this crazy method throwing a lot of infinities and roots of infinities and imaginary numbers together. I believe I translated it into the language of modern mathematics fairly. And my failure is not that I found the formula actually described the number -½π.

Oh, I had an error in there, yes. And I’d found where it was. It was all the way back in the essay which first converted Wronski’s formula into something respectable. It was a small error, first appearing in the last formula of that essay and never corrected from there. This reinforces my suspicion that when normal people see formulas they mostly look at them to confirm there is a formula there. With luck they carry on and read the sentences around them.

My failure is I wanted to write a bit about boring mistakes. The kinds which you make all the time while doing mathematics work, but which you don’t worry about. Dropped signs. Constants which aren’t divided out, or which get multiplied in incorrectly. Stuff like this which you only detect because you know, deep down, that you should have gotten to an attractive simple formula and you haven’t. Mistakes which are tiresome to make, but never make you wonder if you’re in the wrong job.

The trouble is I can’t think of how to make an essay of that. We don’t tend to rate little mistakes like the wrong sign or the wrong multiple or a boring unnecessary added constant as important. This is because they’re not. The interesting stuff in a mathematical formula is usually the stuff representing variations. Change is interesting. The direction of the change? Eh, nice to know. A swapped plus or minus sign alters your understanding of the direction of the change, but that’s all. Multiplying or dividing by a constant wrongly changes your understanding of the size of the change. But that doesn’t alter what the change looks like. Just the scale of the change. Adding or subtracting the wrong constant alters what you think the change is varying from, but not what the shape of the change is. Once more, not a big deal.

But you also know that instinctively, or at least you get it from seeing how it’s worth one or two points on an exam to write -sin where you mean +sin. Or how if you ask the instructor in class about that 2 where a ½ should be, she’ll say, “Oh, yeah, you’re right” and do a hurried bit of erasing before going on.

Thus my failure: I don’t know what to say about boring mistakes that has any insight.

For the record here’s where I got things wrong. I was creating a function, named ‘f’ and using as a variable ‘x’, to represent Wronski’s formula. I’d gotten to this point:

$f(x) = -4 \imath x 2^{\frac{1}{2}\cdot \frac{1}{x}} \left\{ e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} - e^{- \imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}$

And then I observed how the stuff in curly braces there is “one of those magic tricks that mathematicians know because they see it all the time”. And I wanted to call in this formula, correctly:

$\sin\left(\phi\right) = \frac{e^{\imath \phi} - e^{-\imath \phi}}{2\imath }$

So here’s where I went wrong. I took the $-4\imath$ way off in the front of that first formula and combined it with the stuff in braces to make 2 times a sine of some stuff. I apologize for this. I must have been writing stuff out faster than I was thinking about it. If I had thought, I would have gone through this intermediate step:

$f(x) = -4 \imath x 2^{\frac{1}{2}\cdot \frac{1}{x}} \left\{ e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} - e^{- \imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\} \cdot \frac{2\imath}{2\imath}$

Because with that form in mind, it’s easy to combine the stuff in curly braces with the $2\imath$ in the denominator and recognize a sine. From that we get, correctly, $\sin\left(\frac{\pi}{4}\cdot\frac{1}{x}\right)$. And then the $-4\imath$ on the far left of that expression and the $2\imath$ left in the numerator multiply together to produce the number 8.

So the function ought to have been, all along:

$f(x) = 8 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$
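If you distrust the algebra — I’ve earned that — a numeric spot check is easy. This sketch (pure Python; the function names are mine) compares the exponential form from before the sine substitution with the corrected sine form:

```python
import cmath
import math

def f_exponential(x):
    # The form with the conjugate-pair exponentials still in place:
    # -4i * x * 2^(1/(2x)) * (e^(i*pi/(4x)) - e^(-i*pi/(4x)))
    return (-4j * x * 2 ** (0.5 / x)
            * (cmath.exp(1j * math.pi / (4 * x))
               - cmath.exp(-1j * math.pi / (4 * x))))

def f_corrected(x):
    # The corrected real form: 8 * x * 2^(1/(2x)) * sin(pi/(4x))
    return 8 * x * 2 ** (0.5 / x) * math.sin(math.pi / (4 * x))

for x in (1.0, 5.0, 100.0):
    value = f_exponential(x)
    assert abs(value.imag) < 1e-12          # the i's really do cancel
    assert math.isclose(value.real, f_corrected(x))
```

The imaginary parts cancel to rounding error, and the real parts agree, so at least this step of the repair holds up.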

Not very different, is it? Ah, but it makes a huge difference. Carry through with all the L’Hôpital’s Rule stuff described in previous essays. All the complicated formula work is the same. There’s a different number hanging off the front, waiting to multiply in. That’s all. And what you find, redoing all the work but using this corrected function, is that Wronski’s original mess —

$\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} - \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}$

— should indeed equal:

$2\pi$

All right, there’s an extra factor of 2 here. And I don’t think that is my mistake. Or if it is, other people come to the same mistake without my prompting.

Possibly the book I drew this from misquoted Wronski. It’s at least as good to have a formula for 2π as it is to have one for π. Or Wronski had a mistake in his original formula, and had a constant multiplied out front which he didn’t want. It happens to us all.
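The corrected limit is also easy to check by brute evaluation. A sketch, pure Python with my own naming, just marches x upward and watches the value settle at 2π:

```python
import math

def f(x):
    # The corrected function: 8 * x * 2^(1/(2x)) * sin(pi/(4x))
    return 8 * x * 2 ** (0.5 / x) * math.sin(math.pi / (4 * x))

for x in (10.0, 1_000.0, 1_000_000.0):
    print(x, f(x))

# By x = 1e6 the value agrees with 2*pi to better than 1e-4.
assert abs(f(1_000_000.0) - 2 * math.pi) < 1e-4
```

Brute evaluation proves nothing, of course, but it’s reassuring that the number creeping out is 6.28-something and not 3.14-something.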

Fin.

## Wronski’s Formula For Pi: How Close We Came

Previously:

Józef Maria Hoëne-Wronski had an idea for a new, universal, culturally-independent definition of π. It was this formula, which nobody went along with because they had looked at it:

$\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} - \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}$

I made some guesses about what he would want this to mean. And how we might put that in terms of modern, conventional mathematics. I describe those in the above links. In terms of limits of functions, I got this:

$\displaystyle \lim_{x \to \infty} f(x) = \lim_{x \to \infty} -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$

The trouble is that limit took more work than I wanted to do to evaluate. If you try evaluating that ‘f(x)’ at ∞, you get an expression that looks like zero times ∞. This begs for the use of L’Hôpital’s Rule, which tells you how to find the limit for something that looks like zero divided by zero, or like ∞ divided by ∞. Do a little rewriting — replacing that first ‘x’ with ‘$\frac{1}{1 / x}$’ — and this ‘f(x)’ behaves like L’Hôpital’s Rule needs.

The trouble is, that’s a pain to evaluate. L’Hôpital’s Rule works on functions that look like one function divided by another function. It does this by calculating the derivative of the numerator function divided by the derivative of the denominator function. And I decided that was more work than I wanted to do.

Where trouble comes up is all those parts where $\frac{1}{x}$ turns up. The derivatives of functions with a lot of $\frac{1}{x}$ terms in them get more complicated than the original functions were. Is there a way to get rid of some or all of those?

And there is. Do a change of variables. Let me summon the variable ‘y’, whose value is exactly $\frac{1}{x}$. And then I’ll define a new function, ‘g(y)’, whose value is whatever ‘f’ would be at $\frac{1}{y}$. That is, and this is just a little bit of algebra:

$g(y) = -2 \cdot \frac{1}{y} \cdot 2^{\frac{1}{2} y } \cdot \sin\left(\frac{\pi}{4} y\right)$

The limit of ‘f(x)’ for ‘x’ at ∞ should be the same number as the limit of ‘g(y)’ for ‘y’ at … you’d really like it to be zero. If ‘x’ is incredibly huge, then $\frac{1}{x}$ has to be incredibly small. But we can’t just swap the limit of ‘x’ at ∞ for the limit of ‘y’ at 0. The limit of a function at a point reflects the value of the function at a neighborhood around that point. If the point’s 0, this includes positive and negative numbers. But looking for the limit at ∞ gets at only positive numbers. You see the difference?
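To make the difference concrete, here’s the usual classroom example, sketched in pure Python: a function whose two one-sided limits at 0 exist but disagree, so the ordinary two-sided limit does not exist.

```python
def sign_like(y):
    # abs(y)/y is +1 for every positive y and -1 for every negative y.
    return abs(y) / y

for y in (0.1, 0.001, 1e-9):
    assert sign_like(y) == 1.0    # approaching from the right: +1
    assert sign_like(-y) == -1.0  # approaching from the left: -1
# Since the one-sided limits disagree, there's no two-sided limit at 0.
```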

… For this particular problem it doesn’t matter. But it might. Mathematicians handle this by taking a “one-sided limit”, or a “directional limit”. The normal limit at 0 of ‘g(y)’ is based on what ‘g(y)’ looks like in a neighborhood of 0, positive and negative numbers. In the one-sided limit, we just look at a neighborhood of 0 that’s all values greater than 0, or less than 0. In this case, I want the neighborhood that’s all values greater than 0. And we write that by adding a little + in superscript to the limit. For the other side, the neighborhood less than 0, we add a little – in superscript. So I want to evaluate:

$\displaystyle \lim_{y \to 0^+} g(y) = \lim_{y \to 0^+} -2\cdot\frac{2^{\frac{1}{2}y} \cdot \sin\left(\frac{\pi}{4} y\right)}{y}$

Limits and L’Hôpital’s Rule and stuff work for one-sided limits the way they do for regular limits. So there’s that mercy. The first attempt at this limit, seeing what ‘g(y)’ is if ‘y’ happens to be 0, gives $-2 \cdot \frac{1 \cdot 0}{0}$. A zero divided by a zero is promising. That’s not defined, no, but it’s exactly the format that L’Hôpital’s Rule likes. The numerator is:

$-2 \cdot 2^{\frac{1}{2}y} \sin\left(\frac{\pi}{4} y\right)$

And the denominator is:

$y$

The first derivative of the denominator is blessedly easy: the derivative of y, with respect to y, is 1. The derivative of the numerator is a little harder. It demands the use of the Product Rule and the Chain Rule, just as last time. But these chains are easier.

The first derivative of the numerator is going to be:

$-2 \cdot 2^{\frac{1}{2}y} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} y\right) + -2 \cdot 2^{\frac{1}{2}y} \cdot \cos\left(\frac{\pi}{4} y\right) \cdot \frac{\pi}{4}$
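Derivatives like that are easy to fumble, as this whole series demonstrates. So here’s an independent check I find reassuring: a central finite difference, sketched in pure Python with my own function names, agreeing with the formula at an arbitrary point.

```python
import math

def numerator(y):
    # -2 * 2^(y/2) * sin(pi*y/4)
    return -2 * 2 ** (0.5 * y) * math.sin(math.pi * y / 4)

def numerator_prime(y):
    # The product-rule-plus-chain-rule derivative written out above.
    return (-2 * 2 ** (0.5 * y) * math.log(2) * 0.5 * math.sin(math.pi * y / 4)
            + -2 * 2 ** (0.5 * y) * math.cos(math.pi * y / 4) * math.pi / 4)

# Central finite difference as an independent estimate of the slope.
y, h = 0.3, 1e-6
approx = (numerator(y + h) - numerator(y - h)) / (2 * h)
assert abs(approx - numerator_prime(y)) < 1e-8
```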

Yeah, this is the simpler version of the thing I was trying to figure out last time. Because this is what’s left if I write the derivative of the numerator over the derivative of the denominator:

$\displaystyle \lim_{y \to 0^+} \frac{ -2 \cdot 2^{\frac{1}{2}y} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} y\right) + -2 \cdot 2^{\frac{1}{2}y} \cdot \cos\left(\frac{\pi}{4} y\right) \cdot \frac{\pi}{4} }{1}$

And now this is easy. Promise. There’s no expressions of ‘y’ divided by other expressions of ‘y’ or anything else tricky like that. There’s just a bunch of ordinary functions, all of them defined for when ‘y’ is zero. If this limit exists, it’s got to be equal to:

$\displaystyle -2 \cdot 2^{\frac{1}{2} 0} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} \cdot 0\right) + -2 \cdot 2^{\frac{1}{2} 0 } \cdot \cos\left(\frac{\pi}{4} \cdot 0\right) \cdot \frac{\pi}{4}$

$\frac{\pi}{4} \cdot 0$ is 0. And the sine of 0 is 0. The cosine of 0 is 1. So all this gets to be a lot simpler, really fast.

$\displaystyle -2 \cdot 2^{0} \cdot \log(2) \cdot \frac{1}{2} \cdot 0 + -2 \cdot 2^{ 0 } \cdot 1 \cdot \frac{\pi}{4}$

And $2^0$ is equal to 1. So the part to the left of the + sign there is all zero. What remains is:

$\displaystyle 0 + -2 \cdot \frac{\pi}{4}$

And so, finally, we have it. Wronski’s formula, as best I make it out, is a function whose value is …

$-\frac{\pi}{2}$

… So, what Wronski had been looking for, originally, was π. This is … oh, so very close to right. I mean, there’s π right there, it’s just multiplied by an unwanted $-\frac{1}{2}$. The question is, where’s the mistake? Was Wronski wrong to start with? Did I parse him wrongly? Is it possible that the book I copied Wronski’s formula from made a mistake?

Could be any of them. I’d particularly suspect I parsed him wrongly. I returned the library book I had got the original claim from, and I can’t find it again before this is set to publish. But I should check whether Wronski was thinking to find π, the ratio of the circumference to the diameter of a circle. Or might he have looked to find the ratio of the circumference to the radius of a circle? Either is an interesting number worth finding. We’ve settled on the circumference-over-diameter as valuable, likely for practical reasons. It’s much easier to measure the diameter than the radius of a thing. (Yes, I have read the Tau Manifesto. No, I am not impressed by it.) But if you know 2π, then you know π, or vice-versa.

The next question: yeah, but I turned up -½π. What am I talking about 2π for? And the answer there is, I’m not the first person to try working out Wronski’s stuff. You can try putting the expression, as best you parse it, into a tool like Mathematica and see what makes sense. Or you can read, for example, Quora commenters giving answers with way less exposition than I do. And I’m convinced: somewhere along the line I messed up. Not in an important way, but, essentially, doing something equivalent to dividing by -2 when I should have multiplied by it.
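For whatever it’s worth, the -½π value itself is not in dispute. Evaluating my version of the function near 0 confirms it — a sketch in pure Python, names mine:

```python
import math

def g(y):
    # g(y) = -2 * (1/y) * 2^(y/2) * sin(pi*y/4)
    return -2 / y * 2 ** (0.5 * y) * math.sin(math.pi * y / 4)

# Sliding y toward 0 from the right, g(y) settles at -pi/2.
for y in (0.1, 0.001, 1e-6):
    print(y, g(y))

assert abs(g(1e-6) - (-math.pi / 2)) < 1e-5
```

So the error, wherever it is, sits in the translation from Wronski to ‘g’, not in the limit-taking.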

I’ve spotted my mistake. I figure to come back around to explaining where it is and how I made it.

## Wronski’s Formula For Pi: Two Weird Tricks For Limits That Mathematicians Keep Using

Previously:

So now a bit more on Józef Maria Hoëne-Wronski’s attempted definition of π. I had got it rewritten to this form:

$\displaystyle \lim_{x \to \infty} f(x) = \lim_{x \to \infty} -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$

And I’d tried the first thing mathematicians do when trying to evaluate the limit of a function at a point. That is, take the value of that point and put it in whatever the formula is. If that formula evaluates to something meaningful, then that value is the limit. That attempt gave this:

$-2 \cdot \infty \cdot 1 \cdot 0$

Because the limit of ‘x’, for ‘x’ at ∞, is infinitely large. The limit of ‘$2^{\frac{1}{2}\cdot\frac{1}{x}}$’ for ‘x’ at ∞ is 1. The limit of ‘$\sin(\frac{\pi}{4}\cdot\frac{1}{x})$’ for ‘x’ at ∞ is 0. We can take limits that are 0, or limits that are some finite number, or limits that are infinitely large. But multiplying a zero times an infinity is dangerous. Could be anything.
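“Could be anything” is not an exaggeration. Here’s a sketch, pure Python, of three products that all look like 0 times ∞ at large x and that settle at three different answers:

```python
x = 1e9  # standing in for "very large"

one      = x * (1 / x)        # looks like inf * 0; heads to 1
zero     = x * (1 / x ** 2)   # looks like inf * 0; heads to 0
infinity = x ** 2 * (1 / x)   # looks like inf * 0; grows without bound

assert abs(one - 1) < 1e-12
assert zero < 1e-8
assert infinity > 1e8
```

Which factor wins depends entirely on how fast each one approaches its limit, and that’s the question L’Hôpital’s Rule answers.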

Mathematicians have a tool. We know it as L’Hôpital’s Rule. It’s named for the French mathematician Guillaume de l’Hôpital, who discovered it in the works of his tutor, Johann Bernoulli. (They had a contract giving l’Hôpital publication rights. If Wikipedia’s right, the preface of the book credited Bernoulli, though not specifically for this rule. The full story is more complicated and ambiguous. The previous sentence may be said about most things.)

So here’s the first trick. Suppose you’re finding the limit of something that you can write as the quotient of one function divided by another. So, something that looks like this:

$\displaystyle \lim_{x \to a} \frac{h(x)}{g(x)}$

(Normally, this gets presented as ‘f(x)’ divided by ‘g(x)’. But I’m already using ‘f(x)’ for another function and I don’t want to muddle what that means.)

Suppose it turns out that at ‘a’, both ‘h(x)’ and ‘g(x)’ are zero, or both ‘h(x)’ and ‘g(x)’ are ∞. Zero divided by zero, or ∞ divided by ∞, looks like danger. It’s not necessarily so, though. If this limit exists, then we can find it by taking the first derivatives of ‘h’ and ‘g’, and evaluating:

$\displaystyle \lim_{x \to a} \frac{h'(x)}{g'(x)}$

That ‘ mark is a common shorthand for “the first derivative of this function, with respect to the only variable we have around here”.

This doesn’t look like it should help matters. Often it does, though. There’s an excellent chance that either ‘h'(x)’ or ‘g'(x)’ — or both — aren’t simultaneously zero, or ∞, at ‘a’. And once that’s so, we’ve got a meaningful limit. This doesn’t always work. Sometimes we have to use this l’Hôpital’s Rule trick a second time, or a third or so on. But it works so very often for the kinds of problems we like to do. Reaches the point that if it doesn’t work, we have to suspect we’re calculating the wrong thing.
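The textbook example, if you want to see the rule earn its keep: sin(x)/x at 0 is a zero-divided-by-zero case, and the derivative quotient cos(x)/1 is perfectly tame there. A quick numeric sketch:

```python
import math

def original(x):
    return math.sin(x) / x          # undefined at x = 0 itself

def after_lhopital(x):
    return math.cos(x) / 1.0        # derivative of top over derivative of bottom

# The derivative quotient is defined at 0 and equals 1 ...
assert after_lhopital(0.0) == 1.0
# ... and the original ratio creeps toward that same 1.
for x in (0.1, 0.001, 1e-6):
    assert abs(original(x) - 1.0) < x ** 2
```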

But wait, you protest, reasonably. This is fine for problems where the limit looks like 0 divided by 0, or ∞ divided by ∞. What Wronski’s formula got me was 0 times 1 times ∞. And I won’t lie: I’m a little unsettled by having that 1 there. I feel like multiplying by 1 shouldn’t be a problem, but I have doubts.

That zero times ∞ thing, though? That’s easy. Here’s the second trick. Let me put it this way: isn’t ‘x’ really the same thing as $\frac{1}{ 1 / x }$?

I expect your answer is to slam your hand down on the table and glare at my writing with contempt. So be it. I told you it was a trick.

And it’s a perfectly good one. And it’s perfectly legitimate, too. $\frac{1}{x}$ is a meaningful number if ‘x’ is any finite number other than zero. So is $\frac{1}{ 1 / x }$. Mathematicians accept a definition of limit that doesn’t really depend on the value of your expression at a point. So that $\frac{1}{x}$ wouldn’t be meaningful for ‘x’ at zero doesn’t mean we can’t evaluate its limit for ‘x’ at zero. And just because we might not be sure what $\frac{1}{x}$ would mean for infinitely large ‘x’ doesn’t mean we can’t evaluate its limit for ‘x’ at ∞.

I see you, person who figures you’ve caught me. The first thing I tried was putting in the value of ‘x’ at the ∞, all ready to declare that this was the limit of ‘f(x)’. I know my caveats, though. Plugging in the value you want the limit at into the function whose limit you’re evaluating is a shortcut. If you get something meaningful, then that’s the same answer you would get finding the limit properly. Which is done by looking at the neighborhood around but not at that point. So that’s why this reciprocal-of-the-reciprocal trick works.

So back to my function, which looks like this:

$\displaystyle f(x) = -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$

Do I want to replace ‘x’ with $\frac{1}{1 / x}$, or do I want to replace $\sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$ with $\frac{1}{1 / \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)}$? I was going to say something about how many times in my life I’ve been glad to take the reciprocal of the sine of an expression of x. But just writing the symbols out like that makes the case better than being witty would.

So here is a new, L’Hôpital’s Rule-friendly, version of my version of Wronski’s formula:

$\displaystyle f(x) = -2 \frac{2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)}{\frac{1}{x}}$

I put that -2 out in front because it’s not really important. The limit of a constant number times some function is the same as that constant number times the limit of that function. We can put that off to the side, work on other stuff, and hope that we remember to bring it back in later. I manage to remember it about four-fifths of the time.

So these are the numerator and denominator functions I was calling ‘h(x)’ and ‘g(x)’ before:

$h(x) = 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$

$g(x) = \frac{1}{x}$

The limit of both of these at ∞ is 0, just as we might hope. So we take the first derivatives. That for ‘g(x)’ is easy. Anyone who’s reached week three in Intro Calculus can do it. This may only be because she’s gotten bored and leafed through the formulas on the inside front cover of the textbook. But she can do it. It’s:

$g'(x) = -\frac{1}{x^2}$

The derivative for ‘h(x)’ is a little more involved. ‘h(x)’ we can write as the product of two expressions, that $2^{\frac{1}{2}\cdot \frac{1}{x}}$ and that $\sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$. And each of those expressions contains within themselves another expression, that $\frac{1}{x}$. So this is going to require the Product Rule, of two expressions that each require the Chain Rule.

This is as far as I got with that before slamming my hand down on the table and glaring at the problem with disgust:

$h'(x) = 2^{\frac{1}{2}\frac{1}{x}} \cdot \log(2) \cdot \frac{1}{2} \cdot (-1) \cdot \frac{1}{x^2} + 2^{\frac{1}{2}\frac{1}{x}} \cdot \cos( arg ) bleah$

Yeah I’m not finishing that. Too much work. I’m going to reluctantly try thinking instead.

(If you want to do that work — actually, it isn’t much more past there, and if you followed that first half you’re going to be fine. And you’ll see an echo of it in what I do next time.)

## Wronski’s Formula For Pi: A First Limit

Previously:

When I last looked at Józef Maria Hoëne-Wronski’s attempted definition of π I had gotten it to this. Take the function:

$f(x) = -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$

And find its limit when ‘x’ is ∞. Formally, you want to do this by proving there’s some number, let’s say ‘L’. And ‘L’ has the property that you can pick any margin-of-error number ε that’s bigger than zero. And whatever that ε is, there’s some number ‘N’ so that whenever ‘x’ is bigger than ‘N’, ‘f(x)’ is larger than ‘L – ε’ and also smaller than ‘L + ε’. This can be a lot of mucking about with expressions to prove.
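The mucking about is at least concrete. For the simple claim that $\frac{1}{x}$ has limit 0 at ∞ — a claim this series leans on — the ε-and-N bookkeeping can be sketched in a few lines of Python (names mine):

```python
# Claim: the limit of 1/x at infinity is 0.
# Proof sketch: given any eps > 0, choose N = 1/eps. Then for every
# x > N we have 1/x < 1/N = eps, which is what the definition demands.
def n_for(eps):
    return 1 / eps

for eps in (0.5, 1e-3, 1e-9):
    N = n_for(eps)
    for x in (2 * N, 10 * N, 1000 * N):
        assert 1 / x < eps
```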

Fortunately we have shortcuts. There’s work we can do that gets us ‘L’, and we can rely on other proofs that show that this must be the limit of ‘f(x)’ at some value ‘a’. I use ‘a’ because that doesn’t commit me to talking about ∞ or any other particular value. The first approach is to just evaluate ‘f(a)’. If you get something meaningful, great! We’re done. That’s the limit of ‘f(x)’ at ‘a’. This approach is called “substitution” — you’re substituting ‘a’ for ‘x’ in the expression of ‘f(x)’ — and it’s great. Except that if your problem’s interesting then substitution won’t work. Still, maybe Wronski’s formula turns out to be lucky. Fit in ∞ where ‘x’ appears and we get:

$f(\infty) = -2 \infty 2^{\frac{1}{2}\cdot \frac{1}{\infty}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{\infty}\right)$

So … all right. Not quite there yet. But we can get there. For example, $\frac{1}{\infty}$ has to be — well. It’s what you would expect if you were a kid and not worried about rigor: 0. We can make it rigorous if you like. (It goes like this: Pick any ε larger than 0. Then whenever ‘x’ is larger than $\frac{1}{\epsilon}$ then $\frac{1}{x}$ is less than ε. So the limit of $\frac{1}{x}$ at ∞ has to be 0.) So let’s run with this: replace all those $\frac{1}{\infty}$ expressions with 0. Then we’ve got:

$f(\infty) = -2 \infty 2^{0} \sin\left(0\right)$

The sine of 0 is 0. $2^0$ is 1. So substitution tells us the limit is -2 times ∞ times 1 times 0. That there’s an ∞ in there isn’t a problem. A limit can be infinitely large. Think of the limit of ‘$x^2$’ at ∞. An infinitely large thing times an infinitely large thing is fine. The limit of ‘$x e^x$’ at ∞ is infinitely large. A zero times a zero is fine; that’s zero again. But having an ∞ times a 0? That’s trouble. ∞ times something should be huge; anything times zero should be 0; which term wins?

So we have to fall back on alternate plans. Fortunately there’s a tool we have for limits when we’d otherwise have to face an infinitely large thing times a zero.

I hope to write about this next time. I apologize for not getting through it today but time wouldn’t let me.

## As I Try To Make Wronski’s Formula For Pi Into Something I Like

Previously:

I remain fascinated with Józef Maria Hoëne-Wronski’s attempted definition of π. It had started out like this:

$\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} - \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}$

And I’d translated that into something that modern mathematicians would accept without flinching. That is to evaluate the limit of a function that looks like this:

$\displaystyle \lim_{x \to \infty} f(x)$

where

$f(x) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{x}} - \left(1 - \imath\right)^{\frac{1}{x}} \right\}$

So. I don’t want to deal with that f(x) as it’s written. I can make it better. One thing that bothers me is seeing the complex number $1 + \imath$ raised to a power. I’d like to work with something simpler than that. And I can’t see that number without also noticing that I’m subtracting from it $1 - \imath$ raised to the same power. $1 + \imath$ and $1 - \imath$ are a “conjugate pair”. It’s usually nice to see those. It often hints at ways to make your expression simpler. That’s one of those patterns you pick up from doing a lot of problems as a mathematics major, and that then look like magic to the lay audience.

Here’s the first way I figure to make my life simpler. It’s in rewriting that $1 + \imath$ and $1 - \imath$ stuff so it’s simpler. It’ll be simpler by using exponentials. Shut up, it will too. I get there through Gauss, Descartes, and Euler.

At least I think it was Gauss who pointed out how you can match complex-valued numbers with points on the two-dimensional plane. On a sheet of graph paper, if you like. The number $1 + \imath$ matches to the point with x-coordinate 1, y-coordinate 1. The number $1 - \imath$ matches to the point with x-coordinate 1, y-coordinate -1. Yes, yes, this doesn’t sound like much of an insight Gauss had, but his work goes on. I’m leaving it off here because that’s all that I need for right now.

So these two numbers that offended me I can think of as points. They have Cartesian coordinates (1, 1) and (1, -1). But there’s never only one coordinate system for something. There may be only one that’s good for the problem you’re doing. I mean that makes the problem easier to study. But there are always infinitely many choices. For points on a flat surface like a piece of paper, and where the points don’t represent any particular physics problem, there’s two good choices. One is the Cartesian coordinates. In it you refer to points by an origin, an x-axis, and a y-axis. How far is the point from the origin in a direction parallel to the x-axis? (And in which direction? This gives us a positive or a negative number.) How far is the point from the origin in a direction parallel to the y-axis? (And in which direction? Same positive or negative thing.)

The other good choice is polar coordinates. For that we need an origin and a positive x-axis. We refer to points by how far they are from the origin, heedless of direction. And then to get direction, what angle the line segment connecting the point with the origin makes with the positive x-axis. The first of these numbers, the distance, we normally label ‘r’ unless there’s compelling reason otherwise. The other we label ‘θ’. ‘r’ is always going to be a positive number or, possibly, zero. ‘θ’ might be any number, positive or negative. By convention, we measure angles so that positive numbers are counterclockwise from the x-axis. I don’t know why. I guess it seemed less weird for, say, the point with Cartesian coordinates (0, 1) to have a positive angle rather than a negative angle. That angle would be $\frac{\pi}{2}$, because mathematicians like radians more than degrees. They make other work easier.

So. The point $1 + \imath$ corresponds to the polar coordinates $r = \sqrt{2}$ and $\theta = \frac{\pi}{4}$. The point $1 - \imath$ corresponds to the polar coordinates $r = \sqrt{2}$ and $\theta = -\frac{\pi}{4}$. Yes, the θ coordinates being negative one times each other is common in conjugate pairs. Also, if you have doubts about my use of the word “the” before “polar coordinates”, well-spotted. If you’re not sure about that thing where ‘r’ is not negative, again, well-spotted. I intend to come back to that.

With the polar coordinates ‘r’ and ‘θ’ to describe a point I can go back to complex numbers. I can match the point to the complex number with the value given by $r e^{\imath\theta}$, where ‘e’ is that old 2.71828something number. Superficially, this looks like a big dumb waste of time. I had some problem with imaginary numbers raised to powers, so now, I’m rewriting things with a number raised to imaginary powers. Here’s why it isn’t dumb.

It’s easy to raise a number written like this to a power. $r e^{\imath\theta}$ raised to the n-th power is going to be equal to $r^n e^{\imath\theta \cdot n}$. (Because $(a \cdot b)^n = a^n \cdot b^n$ and we’re going to go ahead and assume this stays true if ‘b’ is a complex-valued number. It does, but you’re right to ask how we know that.) And this turns into raising a real-valued number to a power, which we know how to do. And it involves multiplying a number by that power, which is also easy.
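Python’s `cmath` module happens to do this bookkeeping for us, which makes for a cheap check of both the coordinates and the power rule. A sketch; the choice n = 0.2 just stands in for the $\frac{1}{x}$ power with x = 5:

```python
import cmath
import math

# cmath.polar recovers (r, theta): for 1+i that's (sqrt(2), pi/4),
# and for 1-i it's (sqrt(2), -pi/4), matching the text.
r, theta = cmath.polar(1 + 1j)
assert math.isclose(r, math.sqrt(2))
assert math.isclose(theta, math.pi / 4)
assert math.isclose(cmath.polar(1 - 1j)[1], -math.pi / 4)

# Raising r*e^(i*theta) to the n-th power: r^n * e^(i*theta*n)
# matches Python's own complex exponentiation.
n = 0.2
direct = (1 + 1j) ** n
via_polar = r ** n * cmath.exp(1j * theta * n)
assert cmath.isclose(direct, via_polar)
```

(Python’s complex power uses the same preferred-θ convention I’m about to describe, which is why the two routes agree.)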

And we can get back to something that looks like $1 + \imath$ too. That is, something that’s a real number plus $\imath$ times some real number. This is through one of the many Euler’s Formulas. The one that’s relevant here is that $e^{\imath \phi} = \cos(\phi) + \imath \sin(\phi)$ for any real number ‘φ’. So, that’s true also for ‘θ’ times ‘n’. Or, looking to where everybody knows we’re going, also true for ‘θ’ divided by ‘x’.

OK, on to the people so anxious about all this. I talked about the angle made between the line segment that connects a point and the origin and the positive x-axis. “The” angle. “The”. If that wasn’t enough explanation of the problem, mention how your thinking’s done a 360 degree turn and you see it different now. In an empty room, if you happen to be in one. Your pedantic know-it-all friend is explaining it now. There’s an infinite number of angles that correspond to any given direction. They’re all separated by 360 degrees or, to a mathematician, 2π.

And more. What’s the difference between going out five units of distance in the direction of angle 0 and going out minus-five units of distance in the direction of angle -π? That is, between walking forward five paces while facing east and walking backward five paces while facing west? Yeah. So if we let ‘r’ be negative we’ve got twice as many infinitely many sets of coordinates for each point.

This complicates raising numbers to powers. θ times n might match with some point that’s very different from θ-plus-2-π times n. There might be a whole ring of powers. This seems … hard to work with, at least. But it’s, at heart, the same problem you get thinking about the square root of 4 and concluding it’s both plus 2 and minus 2. If you want “the” square root, you’d like it to be a single number. At least if you want to calculate anything from it. You have to pick out a preferred θ from the family of possible candidates.

For me, that’s whatever set of coordinates has ‘r’ that’s positive (or zero), and that has ‘θ’ between -π and π. Or between 0 and 2π. It could be any strip of numbers that’s 2π wide. Pick what makes sense for the problem you’re doing. In practice that’s going to be the strip from -π to π, or perhaps the strip from 0 to 2π.

What this all amounts to is that I can turn this:

$f(x) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{x}} - \left(1 - \imath\right)^{\frac{1}{x}} \right\}$

into this:

$f(x) = -4 \imath x \left\{ \left(\sqrt{2} e^{\imath \frac{\pi}{4}}\right)^{\frac{1}{x}} - \left(\sqrt{2} e^{-\imath \frac{\pi}{4}} \right)^{\frac{1}{x}} \right\}$

without changing its meaning any. Raising a number to the one-over-x power looks different from raising it to the n power. But the work isn’t different. The function I wrote out up there is the same as this function:

$f(x) = -4 \imath x \left\{ \sqrt{2}^{\frac{1}{x}} e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} - \sqrt{2}^{\frac{1}{x}} e^{-\imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}$

I can’t look at that number, $\sqrt{2}^{\frac{1}{x}}$, sitting there, multiplied by two things added together, and leave that. (OK, subtracted, but same thing.) I want to something something distributive law something and that gets us here:

$f(x) = -4 \imath x \sqrt{2}^{\frac{1}{x}} \left\{ e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} - e^{- \imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}$

Also, yeah, that square root of two raised to a power looks weird. I can turn that square root of two into “two to the one-half power”. That gets to this rewrite:

$f(x) = -4 \imath x 2^{\frac{1}{2}\cdot \frac{1}{x}} \left\{ e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} - e^{- \imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}$

And then. Those parentheses. e raised to an imaginary number minus e raised to minus-one-times that same imaginary number. This is another one of those magic tricks that mathematicians know because they see it all the time. Part of what we know from Euler’s Formula, the one I waved at back when I was talking about coordinates, is this:

$\sin\left(\phi\right) = \frac{e^{\imath \phi} - e^{-\imath \phi}}{2\imath }$

That’s good for any real-valued φ. For example, it’s good for the number $\frac{\pi}{4}\cdot\frac{1}{x}$. And that means we can rewrite that function into something that, finally, actually looks a little bit simpler. It looks like this:

$f(x) = 8 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$

And that’s the function whose limit I want to take at ∞. No, really.

## Deciphering Wronski, Non-Standardly

I ran out of time to do my next bit on Wronski’s attempted definition of π. Next week, if all goes well. But I have something to share anyway. The author of the Boxing Pythagoras blog was intrigued by the starting point. And as a fan of studying how people understand infinity and infinitesimals (and how they don’t), this two-century-old example of mixing the numerous and the tiny set his course.

So here’s his essay, trying to work out Wronski’s beautiful weird formula from a non-standard analysis perspective. Non-standard analysis is a field that’s grown in the last fifty years. It’s probably fairly close in spirit to what (I think) Wronski might have been getting at, too. Non-standard analysis works with ideas that seem to match many people’s intuitive feelings about infinitesimals and infinities.

For example, can we speak of a number that’s larger than zero, but smaller than the reciprocal of any positive integer? It’s hard to imagine such a thing. But what if we can show that if we suppose such a number exists, then we can do this logically sound work with it? If you want to say that isn’t enough to show a number exists, then I have to ask how you know imaginary numbers or negative numbers exist.

Standard analysis, you probably guessed, doesn’t do that. It developed over the 19th century when the logical problems of these kinds of numbers seemed unsolvable. Mostly that’s done by limits, showing that a thing must be true whenever some quantity is small enough, or large enough. It seems safe to trust that the infinitesimally small is small enough, and the infinitely large is large enough. And it’s not like mathematicians back then were bad at their job. Mathematicians learned a lot of things about how infinitesimals and infinities work over the late 19th and early 20th century. It makes modern work possible.

Anyway, Boxing Pythagoras goes over what a non-standard analysis treatment of the formula suggests. I think it’s accessible even if you haven’t had much non-standard analysis in your background. At least it worked for me and I haven’t had much of the stuff. I think it’s also accessible if you’re good at following logical argument and won’t be thrown by Greek letters as variables. Most of the hard work is really arithmetic with funny letters. I recommend going and seeing if he did get to π.

## As I Try To Figure Out What Wronski Thought ‘Pi’ Was

A couple weeks ago I shared a fascinating formula for π. I got it from Carl B Boyer’s The History of Calculus and its Conceptual Development. He got it from Józef Maria Hoëne-Wronski, an early 19th-century Polish mathematician. His idea was that an absolute, culturally-independent definition of π would come not from thinking about circles and diameters but rather this formula:

$\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} - \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}$

Now, this formula is beautiful, at least to my eyes. It’s also gibberish. At least it’s ungrammatical. Mathematicians don’t like to write stuff like “four times infinity”, at least not as more than a rough draft on the way to a real thought. What does it mean to multiply four by infinity? Is arithmetic even a thing that can be done on infinitely large quantities? Among Wronski’s problems is that his era didn’t have a clear answer to this. We’re a little more advanced in our mathematics now. We’ve had a century and a half of rather sound treatment of infinitely large and infinitely small things. Can we save Wronski’s work?

Start with the easiest thing. I’m offended by those $\sqrt{-1}$ bits. Well, no, I’m more unsettled by them. I would rather have $\imath$ in there. The difference? … More taste than anything sound. I prefer, if I can get away with it, using the square root symbol to mean the positive square root of the thing inside. There is no positive square root of -1, so, pfaugh, away with it. Mere style? All right, well, how do you know whether those $\sqrt{-1}$ terms are meant to be $\imath$ or its additive inverse, $-\imath$? How do you know they’re all meant to be the same one? See? … As with all style preferences, it’s impossible to be perfectly consistent. I’m sure there are times I accept a big square root symbol over a negative or a complex-valued quantity. But I’m not forced to have it here so I’d rather not. First step:

$\pi = \frac{4\infty}{\imath}\left\{ \left(1 + \imath\right)^{\frac{1}{\infty}} - \left(1 - \imath\right)^{\frac{1}{\infty}} \right\}$

Also dividing by $\imath$ is the same as multiplying by $-\imath$ so the second easy step gives me:

$\pi = -4 \imath \infty \left\{ \left(1 + \imath\right)^{\frac{1}{\infty}} - \left(1 - \imath\right)^{\frac{1}{\infty}} \right\}$

Now the hard part. All those infinities. I don’t like multiplying by infinity. I don’t like dividing by infinity. I really, really don’t like raising a quantity to the one-over-infinity power. Most mathematicians don’t. We have a tool for dealing with this sort of thing. It’s called a “limit”.

Mathematicians developed the idea of limits over … well, since they started doing mathematics. In the 19th century limits got sound enough that we still trust the idea. Here’s the rough way it works. Suppose we have a function which I’m going to name ‘f’ because I have better things to do than give functions good names. Its domain is the real numbers. Its range is the real numbers. (We can define functions for other domains and ranges, too. Those definitions look like what they do here.)

I’m going to use ‘x’ for the independent variable. It’s any number in the domain. I’m going to use ‘a’ for some point. We want to know the limit of the function “at a”. ‘a’ might be in the domain. But — and this is genius — it doesn’t have to be. We can talk sensibly about the limit of a function at some point where the function doesn’t exist. We can say “the limit of f at a is the number L”. I hadn’t introduced ‘L’ into evidence before, but … it’s a number. It has some specific set value. Can’t say which one without knowing what ‘f’ is and what its domain is and what ‘a’ is. But I know this about it.

Pick any error margin that you like. Call it ε because mathematicians do. However small this (positive) number is, there’s at least one neighborhood in the domain of ‘f’ that surrounds ‘a’. Check every point in that neighborhood other than ‘a’. The value of ‘f’ at all those points in that neighborhood other than ‘a’ will be larger than L – ε and smaller than L + ε.

Yeah, pause a bit there. It’s a tricky definition. It’s a nice common place to crash hard in freshman calculus. Also again in Intro to Real Analysis. It’s not just you. Perhaps it’ll help to think of it as a kind of mutual challenge game. Try this.

1. You draw whatever error bar, as big or as little as you like, around ‘L’.
2. But I always respond by drawing some strip around ‘a’.
3. You then pick absolutely any ‘x’ inside my strip, other than ‘a’.
4. Is f(x) always within the error bar you drew?

Suppose f(x) is. Suppose that you can pick any error bar however tiny, and I can answer with a strip however tiny, and every single ‘x’ inside my strip has an f(x) within your error bar … then, L is the limit of f at a.

Again, yes, tricky. But mathematicians haven’t found a better definition that doesn’t break something mathematicians need.
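The game can even be acted out in code. A toy sketch for the (made-up, illustrative) case f(x) = x² near a = 2, where the limit is L = 4: whatever error bar ε you draw, I answer with the strip of half-width δ = min(1, ε/5). The 5 comes from noticing that if x stays within 1 of 2, then |x + 2| is less than 5, so |x² − 4| = |x − 2|·|x + 2| is less than 5|x − 2|:

```python
import random

def f(x):
    return x * x

A, L = 2.0, 4.0

def respond_with_strip(epsilon):
    # If |x - 2| < 1 then |x + 2| < 5, so |x*x - 4| < 5 * |x - 2|.
    # A strip of half-width min(1, epsilon/5) therefore wins the round.
    return min(1.0, epsilon / 5.0)

random.seed(42)
for epsilon in (1.0, 0.1, 0.001):
    delta = respond_with_strip(epsilon)
    # The challenger samples points inside my strip, skipping a itself.
    for _ in range(1000):
        x = A + random.uniform(-delta, delta)
        if x != A:
            assert abs(f(x) - L) < epsilon
print("every challenge met")
```

The sampling is no proof, of course; the proof is the little |x − 2|·|x + 2| argument in the comment. But watching the assertion never fire is reassuring.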

To write “the limit of f at a is L” we use the notation:

$\displaystyle \lim_{x \to a} f(x) = L$

The ‘lim’ part probably makes perfect sense. And you can see where ‘f’ and ‘a’ have to enter into it. ‘x’ here is a “dummy variable”. It’s the falsework of the mathematical expression. We need some name for the independent variable. It’s clumsy to do without. But it doesn’t matter what the name is. It’ll never appear in the answer. If it does then the work went wrong somewhere.

What I want to do, then, is turn all those appearances of ‘∞’ in Wronski’s expression into limits of something at infinity. And having just said what a limit is I have to do a patch job. In that talk about the limit at ‘a’ I talked about a neighborhood containing ‘a’. What’s it mean to have a neighborhood “containing ∞”?

The answer is exactly what you’d think if you got this question and were eight years old. The “neighborhood of infinity” is “all the big enough numbers”. To make it rigorous, it’s “all the numbers bigger than some finite number that let’s just call N”. So you give me an error bar around ‘L’. I’ll give you back some number ‘N’. Every ‘x’ that’s bigger than ‘N’ has f(x) inside your error bars. And note that I don’t have to say what ‘f(∞)’ is or even commit to the idea that such a thing can be meaningful. I only ever have to think directly about values of ‘f(x)’ where ‘x’ is some real number.
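The same game at infinity, sketched for the simplest interesting case, g(x) = 1/x with L = 0: hand me any ε and I answer N = 1/ε, since every x bigger than that has 1/x below ε:

```python
def g(x):
    return 1.0 / x

def respond_with_N(epsilon):
    # Every x > 1/epsilon has 0 < 1/x < epsilon.
    return 1.0 / epsilon

for epsilon in (0.5, 0.01, 1e-6):
    N = respond_with_N(epsilon)
    # Spot-check some x bigger than N; no g(infinity) ever gets evaluated.
    for x in (N + 1, 2 * N, 1000 * N):
        assert abs(g(x) - 0.0) < epsilon
print("the neighborhood of infinity did its job")
```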

So! First, let me rewrite Wronski’s formula as a function, defined on the real numbers. Then I can replace each ∞ with the limit of something at infinity and … oh, wait a minute. There’s three ∞ symbols there. Do I need three limits?

Ugh. Yeah. Probably. This can be all right. We can do multiple limits. This can be well-defined. It can also be a right pain. The challenge-and-response game needs a little modifying to work. You still draw error bars. But I have to draw multiple strips. One for each of the variables. And every combination of values inside all those strips has to give an ‘f’ that’s inside your error bars. There’s room for great mischief. You can arrange combinations of variables that push ‘f’ outside the error bars.

So. Three independent variables, all taking a limit at ∞? That’s not guaranteed to be trouble, but I’d expect trouble. At least I’d expect something to keep the limit from existing. That is, we could find there’s no number ‘L’ so that this drawing-neighborhoods thing works for all three variables at once.

Let’s try. One of the ∞ will be a limit of a variable named ‘x’. One of them a variable named ‘y’. One of them a variable named ‘z’. Then:

$f(x, y, z) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{y}} - \left(1 - \imath\right)^{\frac{1}{z}} \right\}$

Without doing the work, my hunch is: this is utter madness. I expect it’s probably possible to make this function take on many wildly different values by the judicious choice of ‘x’, ‘y’, and ‘z’. Particularly ‘y’ and ‘z’. You maybe see it already. If you don’t, you maybe see it now that I’ve said you maybe see it. If you don’t, I’ll get there, but not in this essay. But let’s suppose that it’s possible to make f(x, y, z) take on wildly different values like I’m getting at. This implies that there’s not any limit ‘L’, and therefore Wronski’s work is just wrong.
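That hunch is easy to poke at numerically. Send all three variables off to infinity together and the values settle down; send z off twice as fast (a path chosen purely for mischief) and they settle somewhere else, and not even somewhere real. So no single L can serve all three variables at once:

```python
import math

def f(x, y, z):
    return -4j * x * ((1 + 1j) ** (1 / y) - (1 - 1j) ** (1 / z))

t = 10.0 ** 6
same_path = f(t, t, t)        # x = y = z = t, marching off together
skew_path = f(t, t, 2 * t)    # z runs off twice as fast

print(same_path)   # close to 2*pi, about 6.2832
print(skew_path)   # close to 3*pi/2 - i*ln(2), about 4.7124 - 0.6931j
```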

Thing is, Wronski wouldn’t have thought that. Deep down, I am certain, he thought the three appearances of ∞ were the same “value”. And that to translate him fairly we’d use the same name for all three appearances. So I am going to do that. I shall use ‘x’ as my variable name, and replace all three appearances of ∞ with the same variable and a common limit. So this gives me the single function:

$f(x) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{x}} - \left(1 - \imath\right)^{\frac{1}{x}} \right\}$

And then I need to take the limit of this at ∞. If Wronski is right, and if I’ve translated him fairly, it’s going to be π. Or something easy to get π from.

I hope to get there next week.

## What Only One Person Ever Has Thought ‘Pi’ Means, And Who That Was

I’ve been reading Carl B Boyer’s The History of Calculus and its Conceptual Development. It’s been slow going, because reading about how calculus’s ideas developed is hard. The ideas underlying it are subtle to start with. And the ideas have to be discussed using vague, unclear definitions. That’s not because dumb people were making arguments. It’s because these were smart people studying ideas at the limits of what we understood. When we got clear definitions we had the fundamentals of calculus understood. (By our modern standards. The future will likely see us as accepting strange ambiguities.) And I still think Boyer whiffs the discussion of Zeno’s Paradoxes in a way that mathematics and science-types usually do. (The trouble isn’t imagining that infinite series can converge. The trouble is that things are either infinitely divisible or they’re not. Either way implies things that seem false.)

Anyway. Boyer got to a part about the early 19th century. This was when mathematicians were discovering that infinities and infinitesimals are amazing tools. Also that mathematicians should maybe work out whether they follow any rules. Because you can just plug symbols into formulas, grind out what it looks like they might mean, and get answers. Sometimes this works great. Grind through the formulas for solving cubic polynomials as though square roots of negative numbers make sense. You get good results. Later, we worked out a coherent scheme of “complex-valued numbers” that justified it all. We can get lucky with infinities and infinitesimals, sometimes.

And this brought Boyer to an argument made by Józef Maria Hoëne-Wronski. He was a Polish mathematician whose fantastic ambition in … everything … didn’t turn out many useful results. Algebra, the Longitude Problem, building a rival to the railroad, even the Kosciuszko Uprising, none quite panned out. (And that’s not quite his name. The ‘n’ in ‘Wronski’ should have an acute mark over it. But WordPress’s HTML engine doesn’t want to imagine such a thing exists. Nor do many typesetters writing calculus or differential equations books, Boyer’s included.)

But anyone who studies differential equations knows his name, for a concept called the Wronskian. It’s a matrix determinant that anyone who studies differential equations hopes they won’t ever have to do after learning it. And, says Boyer, Wronski had this notion for an “absolute meaning of the number π”. (By “absolute” Wronski means one not drawn from cultural factors like the weird human interest in circle perimeters and diameters. Compare it to the way we speak of “absolute temperature”, where the zero means something not particular to western European weather.)

$\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} - \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}$

Well.

I will admit I’m not fond of “real” alternate definitions of π. They seem to me mostly to signal how clever the definition-originator is. The only one I like at all defines π as the smallest positive root of the simple-harmonic-motion differential equation. (With the right starting conditions and all that.) And I’m not sure that isn’t “circumference over diameter” in a hidden form.

And yes, that definition is a mess of early-19th-century wild, untamed casualness in the use of symbols. But I admire the crazypants beauty of it. If I ever get a couple free hours I should rework it into something grammatical. And then see if, turned into something tolerable, Wronski’s idea is something even true.

Boyer allows that “perhaps” because of the strange notation and “bizarre use of the symbol ∞” Wronski didn’t make much headway on this point. I can’t fault people for looking at that and refusing to go further. But isn’t it enchanting as it is?

## Reading the Comics, September 8, 2017: First Split Week Edition, Part 1

It was looking like another slow week for something so early in the (United States) school year. Then Comic Strip Master Command sent a flood of strips in for Friday and Saturday, so I’m splitting the load. It’s not a heavy one, as back-to-school jokes are on people’s minds. But here goes.

Marcus Hamilton and Scott Ketcham’s Dennis the Menace for the 3rd of September, 2017 is a fair strip for this early in the school year. It’s an old joke about making subtraction understandable.

Mark Anderson’s Andertoons for the 3rd is the Mark Anderson installment for this week, so I’m glad to have that. It’s a good old classic cranky-students setup and it reminds me that “unlike fractions” is a thing. I’m not quibbling with the term, especially not after the whole long-division mess a couple weeks back. I just hadn’t thought in a long while about how different denominators do make adding fractions harder.

Jeff Harris’s Shortcuts informational feature for the 3rd I couldn’t remember why I put on the list of mathematically-themed comic strips. The reason’s in there. There’s a Pi Joke. But my interest was more in learning that strawberries are a hybrid created in France from a North American and a Chilean breed. Isn’t that intriguing stuff?

Bill Abbott’s Specktickles for the 8th uses arithmetic — multiplication flash cards — as emblem of stuff to study. About all I can say for that.

## Reading the Comics, August 17, 2017: Professor Edition

To close out last week’s mathematically-themed comic strips … eh. There’s only a couple of them. One has a professor-y type and another has Albert Einstein. That’s enough for my subject line.

Joe Martin’s Mr Boffo for the 15th I’m not sure should be here. I think it’s a mathematics joke. That the professor’s shown with a pie chart suggests some kind of statistics, at least, and maybe the symbols are mathematical in focus. I don’t know. What the heck. I also don’t know how to link to these comics that gives attention to the comic strip artist. I like to link to the site from which I got the comic, but the Mr Boffo site is … let’s call it home-brewed. I can’t figure how to make it link to a particular archive page. But I feel bad enough losing Jumble. I don’t want to lose Joe Martin’s comics on top of that.

Charlie Podrebarac’s meat-and-Elvis-enthusiast comic Cow Town for the 15th is captioned “Elvis Disproves Relativity”. Of course it hasn’t anything to do with experimental results or even a good philosophical counterexample. It’s all about the famous equation. Have to expect that. Elvis Presley having an insight that challenges our understanding of why relativity should work is the stuff for sketch comedy, not single-panel daily comics.

Paul Trap’s Thatababy for the 15th has Thatadad win his fight with Alexa by using the old Star Trek Pi Gambit. To give a computer an unending task any number would work. Even the decimal digits of, say, five would do. They’d just be boring if written out in full, which is why we don’t. But irrational numbers at least give us a nice variety of digits. We don’t know that Pi is normal, but it probably is. So there should be a never-ending variety of what Alexa reels out here.

By the end of the strip Alexa has only got to the 55th digit of Pi after the decimal point. For this I use The Pi-Search Page, rather than working it out by myself. That’s what follows the digits in the second panel. So the comic isn’t skipping any time.
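If you’d rather not take The Pi-Search Page’s word for it, the digits are easy to grind out with plain integer arithmetic. A sketch using Machin’s formula, π/4 = 4 arctan(1/5) − arctan(1/239):

```python
def atan_inv(x, digits):
    # Integer arctan(1/x), scaled by 10**(digits + 10); the extra ten
    # guard digits absorb the truncation from all the floor divisions.
    term = 10 ** (digits + 10) // x
    total = term
    k = 1
    while term:
        term //= x * x
        if k % 2:
            total -= term // (2 * k + 1)
        else:
            total += term // (2 * k + 1)
        k += 1
    return total

def pi_digits(digits):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
    scaled = 16 * atan_inv(5, digits) - 4 * atan_inv(239, digits)
    return str(scaled // 10 ** 10)[1:]   # drop the leading "3"

print(pi_digits(55))      # 1415926535...1058209
print(pi_digits(55)[54])  # the 55th digit after the decimal point: 9
```

So Alexa’s 55th digit is a 9, and the strip’s second panel picks up right where the first left off.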

Gene Mora’s Graffiti for the 16th, if you count this as a comic strip, includes a pun, if you count this as a pun. Make of it what you like.

Mark Anderson’s Andertoons for the 17th is a student-misunderstanding-things problem. That’s a clumsy way to describe the joke. I should look for a punchier description, since there are a lot of mathematics comics that amount to the student getting a silly wrong idea of things. Well, I learned greater-than and less-than with alligators that eat the smaller number first. Though they turned into fish eating the smaller number first because who wants to ask a second-grade teacher to draw alligators all the time? Cartoon goldfish are so much easier.

## Reading the Comics, August 12, 2017: August 10 and 12 Edition

The other half of last week’s comic strips didn’t have any prominent pets in them. The six of them appeared on two days, though, so that’s as good as a particular theme. There’s also some π talk, but there’s enough of that I don’t want to overuse Pi Day as an edition name.

Mark Anderson’s Andertoons for the 10th is a classroom joke. It’s built on a common problem in teaching by examples. The student can make the wrong generalization. I like the joke. There’s probably no particular reason seven was used as the example number to have zero interact with. Maybe it just sounded funnier than the other numbers under ten that might be used.

Mike Baldwin’s Cornered for the 10th uses a chalkboard of symbols to imply deep thinking. The symbols on the board look to me like they’re drawn from some real mathematics or physics source. There’s force equations appropriate for gravity or electric interactions. I can’t explain the whole board, but that’s not essential to work out anyway.

Marty Links’s Emmy Lou for the 17th of March, 1976 was rerun the 10th of August. It name-drops the mathematics teacher as the scariest of the set. Fortunately, Emmy Lou went to her classes in a day before Rate My Professor was a thing, so her teacher doesn’t have to hear about this.

Scott Hilburn’s The Argyle Sweater for the 12th is a timely reminder that Scott Hilburn has way more Pi Day jokes than we have Pi Days to have. Also he has octopus jokes. It’s up to you to figure out whether the etymology of the caption makes sense.

John Zakour and Scott Roberts’s Working Daze for the 12th presents the “accountant can’t do arithmetic” joke. People who ought to be good at arithmetic being lousy at figuring tips is an ancient joke. I’m a touch surprised that Christopher Miller’s American Cornball: A Laffopedic Guide to the Formerly Funny doesn’t have an entry for tips (or mathematics). But that might reflect Miller’s mission to catalogue jokes that have fallen out of the popular lexicon, not merely that are old.

Michael Cavna’s Warped for the 12th is also a Pi Day joke that couldn’t wait. It’s cute and should fit on any mathematics teacher’s office door.

## Terrible and Less-Terrible Pi

As the 14th of March comes around it’s the time for mathematics bloggers to put up whatever they can about π. I will stir from my traditional crankiness about Pi Day (look, we don’t write days of the year as 3.14 unless we’re doing fake stardates) to bring back my two most π-relevant posts:

• Calculating Pi Terribly is about a probability-based way to calculate just what π’s digits are. It’s a lousy way to do it, but it works, technically.
• Calculating Pi Less Terribly is about an analysis-based way to calculate just what π’s digits are. It’s a less bad way to do it, although we actually use better-yet ways to work out the digits of a number like this.
• And what the heck, Normal Numbers, from an A To Z sequence. We do not actually know that π is a normal number. It’s the way I would bet, though, and here’s something about why I’d bet that way.

## Reading the Comics, March 4, 2017: Frazz, Christmas Trees, and Weddings Edition

It was another of those curious weeks when Comic Strip Master Command didn’t send quite enough comics my way. Among those they did send were a couple of strips in pairs. I can work with that.

Samson’s Dark Side Of The Horse for the 26th is the Roman Numerals joke for this essay. I apologize to Horace for being so late in writing about Roman Numerals but I did have to wait for Cecil Adams to publish first.

In Jef Mallett’s Frazz for the 26th Caulfield ponders what we know about Pythagoras. It’s hard to say much about the historical figure: he built a cult that sounds outright daft around himself. But it’s hard to say how much of their craziness was actually their craziness, how much was just that any ancient society had a lot of what seems nutty to us, and how much was jokes (or deliberate slander) directed against some weirdos. What does seem certain is that Pythagoras’s followers attributed many of their discoveries to him. And what’s certain is that the Pythagorean Theorem was known, at least a thing that could be used to measure things, long before Pythagoras was on the scene. I’m not sure if it was proved as a theorem or whether it was just known that making triangles with the right relative lengths meant you had a right triangle.

Greg Evans’s Luann Againn for the 28th of February — reprinting the strip from the same day in 1989 — uses a bit of arithmetic as generic homework. It’s an interesting change of pace that the mathematics homework is what keeps one from sleep. I don’t blame Luann or Puddles for not being very interested in this, though. Those sorts of complicated-fraction-manipulation problems, at least when I was in middle school, were always slogs of shuffling stuff around. They rarely got to anything we’d like to know.

Jef Mallett’s Frazz for the 1st of March is one of those little revelations that statistics can give one. Myself, I was always haunted by the line in Carl Sagan’s Cosmos about how, in the future, with the Sun ageing and (presumably) swelling in size and heat, the Earth would see one last perfect day. That there would most likely be quite fine days after that didn’t matter, and that different people might disagree on what made a day perfect didn’t matter. Setting out the idea of a “perfect day” and realizing there would someday be a last gave me chills. It still does.

Richard Thompson’s Poor Richard’s Almanac for the 1st and the 2nd of March have appeared here before. But I like the strip so I’ll reuse them too. They’re from the strip’s guide to types of Christmas trees. The Cubist Fur is described as “so asymmetrical it no longer inhabits Euclidean space”. Properly neither do we, but we can’t tell by eye the difference between our space and a Euclidean space. “Non-Euclidean” has picked up connotations of being so bizarre or even horrifying that we can’t hope to understand it. In practice, it means we have to go a little slower and think about, like, what would it look like if we drew a triangle on a ball instead of a sheet of paper. The Platonic Fir, in the 2nd of March strip, looks like a geometry diagram and I doubt that’s coincidental. It’s very hard to avoid thoughts of Platonic Ideals when one does any mathematics with a diagram. We know our drawings aren’t very good triangles or squares or circles especially. And three-dimensional shapes are worse, as see every ellipsoid ever done on a chalkboard. But we know what we mean by them. And then we can get into a good argument about what we mean by saying “this mathematical construct exists”.

Mark Litzler’s Joe Vanilla for the 3rd uses a chalkboard full of mathematics to represent the deep thinking behind a silly little thing. I can’t make any of the symbols out to mean anything specific, but I do like the way it looks. It’s quite well-done in looking like the shorthand that, especially, physicists would use while roughing out a problem. That there are subscripts with forms like “12” and “22” with a bar over them reinforces that. I would, knowing nothing else, expect this to represent some interaction between particles 1 and 2, and 2 with itself, and that the bar means some kind of complement. This doesn’t mean much to me, but with luck, it means enough to the scientist working it out that it could be turned into a coherent paper.

Bill Holbrook’s On The Fastrack is this week about the wedding of the accounting-minded Fi. And she’s having last-minute doubts, which is why the strip of the 3rd brings in irrational and anthropomorphized numerals. π gets called in to serve as emblematic of the irrational numbers. Can’t fault that. I think the only more famously irrational number is the square root of two, and π anthropomorphizes more easily. Well, you can draw an established character’s face onto π. The square root of 2 is, necessarily, at least two disconnected symbols and you don’t want to raise distracting questions about whether the root sign or the 2 gets the face.

That said, it’s a lot easier to prove that the square root of 2 is irrational. Even the Pythagoreans knew it, and a bright child can follow the proof. A really bright child could create a proof of it. To prove that π is irrational is not at all easy; it took mathematicians until the 19th century. And the best proof I know of the fact does it by a roundabout method. We prove that if a number other than zero is rational then the tangent of that number must be irrational. The tangent of π/4 is 1, which is rational, so π/4 can’t be rational, and therefore π must be irrational. I know you’ll all trust me on that argument, but I wouldn’t want to sell it to a bright child.
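For contrast, here’s the classic sketch of the proof that bright child can follow:

```latex
% Suppose \sqrt{2} were rational: \sqrt{2} = p/q with p, q whole numbers
% sharing no common factor. Then:
2 q^2 = p^2
  \;\Rightarrow\; p^2 \text{ is even}
  \;\Rightarrow\; p \text{ is even, say } p = 2r
  \;\Rightarrow\; 2 q^2 = 4 r^2
  \;\Rightarrow\; q^2 = 2 r^2
  \;\Rightarrow\; q \text{ is even too.}
% But then p and q share the factor 2, contradicting the assumption
% that the fraction was in lowest terms. So no fraction equals \sqrt{2}.
```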

Holbrook continues the thread on the 4th, extending the anthropomorphic-mathematics stuff to calling people variables. There’s ways that this is fair. We use a variable for a number whose value we don’t know or don’t care about. A “random variable” is one that could take on any of a set of values. We don’t know which one it does, in any particular case. But we do know — or we can find out — how likely each of the possible values is. We can use this to understand the behavior of systems even if we never actually know what any one of them does. You see how I’m going to defend this metaphor, then, especially if we allow that what people are likely or unlikely to do will depend on context and evolve in time.

## Reading the Comics, February 15, 2017: SMBC Cuts In Line Edition

It’s another busy enough week for mathematically-themed comic strips that I’m dividing the harvest in two. There’s a natural cutting point since there weren’t any comics I could call relevant for the 15th. But I’m moving a Saturday Morning Breakfast Cereal, of course, from the 16th into this pile. That’s because there’s another Saturday Morning Breakfast Cereal, of course, from after the 16th that I might include. I’m still deciding if it’s close enough to on topic. We’ll see.

John Graziano’s Ripley’s Believe It Or Not for the 12th mentions the “Futurama Theorem”. The trivia is true, in that writer Ken Keeler did create a theorem for a body-swap plot he had going. The premise was that any two bodies could swap minds at most one time. So, after a couple people had swapped bodies, was there any way to get everyone back to their correct original body? There is, if you bring two more people in to the body-swapping party. It’s clever.

From reading comment threads about the episode I conclude people are really awestruck by the idea of creating a theorem for a TV show episode. The thing is that “a theorem” isn’t necessarily a mind-boggling piece of work. It’s just the name mathematicians give when we have a clearly-defined logical problem and its solution. A theorem and its proof can be a mind-wrenching bit of work, like Fermat’s Last Theorem or the Four-Color Map Theorem are. Or it can be on the verge of obvious. Keeler’s proof isn’t on the obvious side of things. But it is the reasoning one would have to do to solve the body-swap problem the episode posited without cheating. Logic and good story-telling are, as often, good partners.
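Keeler’s setup begs to be run. Here’s a sketch of one way to realize the construction (a construction in the same spirit, not necessarily Keeler’s exact swap sequence, and the names `untangle`, `helper_x`, and `helper_y` are mine, not the show’s): undo each cycle with the two extra people, then swap the extras at the end if an odd number of cycles left them holding each other’s minds:

```python
def untangle(mind_in_body, helper_x, helper_y):
    # mind_in_body maps body -> mind currently in it. The two helpers
    # must be people outside the tangle, still in their own bodies.
    # Returns a list of body-swaps, no pair repeated, that puts every
    # mind back where it belongs.
    state = dict(mind_in_body)
    state[helper_x] = helper_x
    state[helper_y] = helper_y
    swaps = []

    def swap(a, b):
        state[a], state[b] = state[b], state[a]
        swaps.append((a, b))

    seen = set()
    odd_cycles = False
    for start in mind_in_body:
        if start in seen or mind_in_body[start] == start:
            continue
        # Walk the cycle: each body holds the mind belonging to the next.
        cycle = [start]
        nxt = mind_in_body[start]
        while nxt != start:
            cycle.append(nxt)
            nxt = mind_in_body[nxt]
        seen.update(cycle)
        # Undo the cycle using only fresh pairs with the two helpers.
        swap(helper_x, cycle[0])
        for body in cycle[1:]:
            swap(helper_y, body)
        swap(helper_x, cycle[1])
        swap(helper_y, cycle[0])
        odd_cycles = not odd_cycles
    if odd_cycles:
        # The helpers never swapped with each other before, so this is fresh.
        swap(helper_x, helper_y)
    assert all(state[body] == body for body in state)
    assert len(set(map(frozenset, swaps))) == len(swaps)
    return swaps

# A three-way tangle and a swapped pair, fixed by outsiders "x" and "y":
print(untangle({1: 2, 2: 3, 3: 1, 4: 5, 5: 4}, "x", "y"))
```

The two asserts check the theorem’s conditions: everyone ends in the right body, and no pair of bodies ever swaps twice.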

Teresa Burritt’s Frog Applause is a Dadaist nonsense strip. But for the 13th it hit across some legitimate words, about a 14 percent false-positive rate. This is something run across in hypothesis testing. The hypothesis is something like “is whatever we’re measuring so much above (or so far below) the average that it’s not plausibly just luck?” A false positive is what it sounds like: our analysis said yes, this can’t just be luck, and it turns out that it was. This turns up most notoriously in medical screenings, when we want to know if there’s reason to suspect a health risk, and in forensic analysis, when we want to know if a particular person can be shown to have been a particular place at a particular time. A 14 percent false positive rate doesn’t sound very good — except.

Suppose we are looking for a rare condition. Say, something one person out of 500 will have. A test that’s 99 percent accurate will turn up positives for the one person who has got it and for five of the people who haven’t. It’s not that the test is bad; it’s just there are so many negatives to work through. If you can screen out a good number of the negatives, though, the people who haven’t got the condition, then the good test will turn up fewer false positives. So suppose you have a cheap or easy or quick test that doesn’t miss any true positives but does have a 14 percent false positive rate. That would screen out about 430 of the people who haven’t got whatever we’re testing for, leaving only about 70 people who need the 99-percent-accurate test. This can make for a more effective use of resources.
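Here’s the arithmetic of that two-stage screening, as a Python sketch. It uses the hypothetical numbers from the paragraph above; the prose rounds a person or so differently.

```python
population = 500          # the group being screened
have_condition = 1        # the condition affects 1 in 500
healthy = population - have_condition

# Stage 1: a cheap screen that misses no true cases
# but flags 14 percent of healthy people anyway.
false_positive_rate = 0.14
flagged_healthy = round(healthy * false_positive_rate)   # about 70
screened_out = healthy - flagged_healthy                 # about 429
stage_two = flagged_healthy + have_condition             # about 71

# Stage 2: the 99-percent-accurate test now runs on ~71
# people instead of 500, so its expected false positives
# drop from about 5 to under 1.
false_pos_without_screen = healthy * 0.01          # ~5
false_pos_with_screen = flagged_healthy * 0.01     # ~0.7
```

The expensive test didn’t get any better. There are just far fewer negatives for it to trip over.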

Gary Wise and Lance Aldrich’s Real Life Adventures for the 13th is an algebra-in-real-life joke and I can’t make something deeper out of that.

Mike Shiell’s The Wandering Melon for the 13th is a spot of wordplay built around statisticians. Good for taping to the mathematics teacher’s walls.

Eric the Circle for the 14th, this one by “zapaway”, is another bit of wordplay. Tans and tangents.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 16th identifies, aptly, a difference between scientists and science fans. Weinersmith is right that loving trivia is a hallmark of a fan. Expertise — in any field, not just science — is more about recognizing patterns of problems and concepts, ways to bring approaches from one field into another, this sort of thing. And the digits of π are great examples of trivia. There’s no need for anyone to know the 1,681st digit of π. There are few calculations you could ever do that would need more than three dozen digits. But if memorizing digits seems like fun then π is a great set to learn. e is the only other number at all compelling.

The thing is, it’s very hard to become an expert in something without first being a fan of it. It’s possible, but if a field doesn’t delight you why would you put that much work into it? So even though the scientist might have long since gotten past caring how many digits of π they know, it’s awfully hard to get something memorized in the flush of fandom out of your head.

I know you’re curious. I can only remember π out to 3.14158926535787962. I might have gotten farther if I’d tried, but I actually got a digit wrong, inserting a ‘3’ before that last ’62’, and the effort to get that mistake out of my head obliterated any desire to waste more time memorizing digits. For e I can only give you 2.718281828. But there’s almost no hope I’d know that far if it weren’t for how e happens to repeat that 1828 stanza right away.

## The End 2016 Mathematics A To Z: Normal Numbers

Today’s A To Z term is another of gaurish’s requests. It’s also a fun one so I’m glad to have reason to write about it.

## Normal Numbers

A normal number is any real number you never heard of.

Yeah, that’s not what we say a normal number is. But that’s what a normal number is. If we could imagine the real numbers to be a stream, and that we could reach into it and pluck out a water-drop that was a single number, we know what we would likely pick. It would be an irrational number. It would be a transcendental number. And it would be a normal number.

We know normal numbers — or we would, anyway — by looking at their representation in digits. For example, π is a number that starts out 3.1415926535897932384626433832795028841971 and so on forever. Look at those digits. Some of them are 1’s. How many? How many are 2’s? How many are 3’s? Are there more than you would expect? Are there fewer? What would you expect?

Expect. That’s the key. What should we expect in the digits of any number? The numbers we work with don’t offer much help. A whole number, like 2? That has a decimal representation of a single ‘2’ and infinitely many zeroes past the decimal point. Two and a half? A single ‘2’, a single ‘5’, and then infinitely many zeroes past the decimal point. One-seventh? Well, we get infinitely many 1’s, 4’s, 2’s, 8’s, 5’s, and 7’s. Never any 3’s, nor any 0’s, nor 6’s or 9’s. This doesn’t tell us anything about how often we would expect ‘8’ to appear in the digits of π.

In a normal number we get all the decimal digits. And we get each of them about one-tenth of the time. If all we had was a chart of how often digits turn up we couldn’t tell the summary of one normal number from the summary of any other normal number. Nor could we tell either from the summary of a perfectly uniform randomly drawn number.

It goes beyond single digits, though. Look at pairs of digits. How often does ’14’ turn up in the digits of a normal number? … Well, something like once for every hundred pairs of digits you draw from the normal number. Look at triplets of digits. ‘141’ should turn up about once in every thousand sets of three digits. ‘1415’ should turn up about once in every ten thousand sets of four digits. Any finite string of digits should turn up, and exactly as often as any other finite string of the same length.

That’s in the full representation. If you look at all the infinitely many digits the normal number has to offer. If all you have is a slice then some digits are going to be more common and some less common. That’s similar to how if you fairly toss a coin (say) forty times, there’s a good chance you’ll get tails something other than exactly twenty times. Look at the first 30 or so digits of π and there’s not a zero to be found. But as you survey more digits you get closer and closer to the expected average frequency. It’s the same way coin flips get closer and closer to 50 percent tails. Zero is a rarity in the first 30 digits. It’s about one-tenth of the first 3500 digits.
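If you’d like to watch the frequencies settle down, here’s a little Python experiment. It uses a long stream of (pseudo)random digits as a stand-in for a normal number; the seed is arbitrary and mine.

```python
import random

random.seed(314)  # arbitrary, but it makes the run reproducible
digits = ''.join(random.choice('0123456789') for _ in range(100_000))

# Single digits: each should appear about one-tenth of the time.
freq_7 = digits.count('7') / len(digits)

# Pairs: '14' should start at about one in every hundred positions.
matches = sum(1 for i in range(len(digits) - 1)
              if digits[i:i + 2] == '14')
freq_14 = matches / (len(digits) - 1)
```

With a hundred thousand digits, `freq_7` lands within a fraction of a percent of 0.1, and `freq_14` within a fraction of a percent of 0.01. A short slice would wobble much more, just as the early digits of π do.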

The digits of a specific number are not random, not if we know what the number is. But we can be presented with a subset of its digits and have no good way of guessing what the next digit might be. That is getting into the same strange territory in which we can speak about the “chance” of a month having a Friday the 13th even though the appearances of Fridays the 13th have absolutely no randomness to them.

This has staggering implications. Some of them inspire an argument in science fiction Usenet newsgroup rec.arts.sf.written every two years or so. Probably it does so in other venues; Usenet is just my first home and love for this. As a minor point in Carl Sagan’s novel Contact possibly-imaginary aliens reveal there’s a pattern hidden in the digits of π. (It’s not in the movie version, which is a shame. But to include it would require people watching a computer. So that could not make for a good movie scene, we now know.) Look far enough into π, says the book, and there’s suddenly a string of digits that are nearly all zeroes, interrupted with a few ones. Arrange the zeroes and ones into a rectangle and it draws a pixel-art circle. And the aliens don’t know how something astounding like that could be.

Nonsense, respond the kind of science fiction reader that likes to identify what the nonsense in science fiction stories is. (Spoiler: it’s the science. In this case, the mathematics too.) In a normal number every finite string of digits appears. It would be truly astounding if there weren’t an encoded circle in the digits of π. Indeed, it would be impossible for there not to be infinitely many circles of every possible size encoded in every possible way in the digits of π. If the aliens are amazed by that, they would be amazed to find that every triangle has three corners.

I’m a more forgiving reader. And I’ll give Sagan this amazingness. I have two reasons. The first reason is on the grounds of discoverability. Yes, the digits of a normal number will have in them every possible finite “message” encoded every possible way. (I put the quotes around “message” because it feels like an abuse to call something a message if it has no sender. But it’s hard to not see as a “message” something that seems to mean something, since we live in an era that accepts the Death of the Author as a concept at least.) Pick your classic cypher ‘1 = A, 2 = B, 3 = C’ and so on, and take any normal number. If you look far enough into its digits you will find every message you might ever wish to send, every book you could read. Every normal number holds Jorge Luis Borges’s Library of Babel, and almost every real number is a normal number.

But. The key there is if you look far enough. Look above; the first 30 or so digits of π have no 0’s, when you would expect three or four of them. There’s no 22, even though that string has as much right to appear as any other two-digit string. And we will only ever know finitely many digits of π. It may be staggeringly many digits, sure. It already is. But it will never be enough to be confident that a circle, or any other long enough “message”, must appear. It is staggering that a detectable “message” that long should be in the tiny slice of digits that we might ever get to see.

And it’s harder than that. Sagan’s book says the circle appears in whatever base π gets represented in. So not only does the aliens’ circle pop up in base ten, but also in base two and base sixteen and all the other, even less important bases. The circle happening to appear in the accessible digits of π might be an imaginable coincidence in some base. There’s infinitely many bases, one of them has to be lucky, right? But to appear in the accessible digits of π in every one of them? That’s staggeringly impossible. I say the aliens are correct to be amazed.

Now to my second reason to side with the book. It’s true that any normal number will have any “message” contained in it. So who says that π is a normal number?

We think it is. It looks like a normal number. We have figured out many, many digits of π and they’re distributed the way we would expect from a normal number. And we know that nearly all real numbers are normal numbers. If I had to put money on it I would bet π is normal. It’s the clearly safe bet. But nobody has ever proved that it is, nor that it isn’t. Whether π is normal or not is a fit subject for conjecture. A writer of science fiction may suppose anything she likes about its normality without current knowledge saying she’s wrong.

It’s easy to imagine numbers that aren’t normal. Rational numbers aren’t, for example. If you followed my instructions and made your own transcendental number then you made a non-normal number. It’s possible that π should be non-normal. The first thirty million digits or so look good, though, if you think normal is good. But what’s thirty million against infinitely many possible counterexamples? For all we know, there comes a time when π runs out of interesting-looking digits and turns into an unpredictable little fluttering between 6 and 8.

It’s hard to prove that any numbers we’d like to know about are normal. We don’t know about π. We don’t know about e, the base of the natural logarithm. We don’t know about the natural logarithm of 2. There is a proof that the square root of two (and other non-square whole numbers, like 3 or 5) is normal in base two. But my understanding is it’s a nonstandard approach that isn’t quite satisfactory to experts in the field. I’m not expert so I can’t say why it isn’t quite satisfactory. If the proof’s authors or grad students wish to quarrel with my characterization I’m happy to give space for their rebuttal.

It’s much the way transcendental numbers were understood in the 19th century: we knew there to be this class of numbers that comprises nearly every number. We just didn’t have many examples. For that matter, we’re still short on examples of transcendental numbers. Maybe we’re not that badly off with normal numbers.

We can construct normal numbers. For example, there’s the Champernowne Constant. It’s the number you would make if you wanted to show you could make a normal number. It’s 0.12345678910111213141516171819202122232425 and I bet you can imagine how that develops from that point. (David Gawen Champernowne proved it was normal, which is the hard part.) There’s other ways to build normal numbers too, if you like. But those numbers aren’t of any interest except that we know them to be normal.
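Building the Champernowne Constant’s digits is an easy program to write, which is part of the constant’s charm. A sketch in Python, with my own function name:

```python
from itertools import count

def champernowne_digits(n):
    """First n decimal digits (after the point) of
    0.123456789101112131415..., made by concatenating
    the counting numbers."""
    out = []
    for k in count(1):
        out.extend(str(k))
        if len(out) >= n:
            return ''.join(out[:n])

digits = champernowne_digits(41)
# matches the prefix quoted above: 1234567891011...2425
```

Every two-digit string turns up almost immediately, since the two-digit numbers themselves get written out in order. Proving each string turns up at the right *frequency* is the part Champernowne had to work for.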

Mere normality is tied to a base. A number might be normal in base ten (the way normal people write numbers) but not in base two or base sixteen (which computers and people working on computers use). It might be normal in base twelve, used by nobody except mathematics popularizers of the 1960s explaining bases, but not normal in base ten. There can be numbers normal in every base. They’re called “absolutely normal”. Nearly all real numbers are absolutely normal. Wacław Sierpiński constructed the first known absolutely normal number in 1917. If you got in on the fractals boom of the 80s and 90s you know his name, although without the Polish spelling. He did stuff with gaskets and curves and carpets you wouldn’t believe. I’ve never seen Sierpiński’s construction of an absolutely normal number. From my references I’m not sure if we know how to construct any other absolutely normal numbers.

So that is the strange state of things. Nearly every real number is normal. Nearly every number is absolutely normal. We know a couple normal numbers. We know at least one absolutely normal number. But we haven’t (to my knowledge) proved any number that’s otherwise interesting is also a normal number. This is why I say: a normal number is any real number you never heard of.

## Reading the Comics, October 19, 2016: An Extra Day Edition

I didn’t make noise about it, but last Sunday’s mathematics comic strip roundup was short one day. I was away from home and normal computer stuff Saturday. So I posted without that day’s strips under review. There was just the one, anyway.

Also I want to remind folks I’m doing another Mathematics A To Z, and taking requests for words to explain. There are many appealing letters still unclaimed, including ‘A’, ‘T’, and ‘O’. Please put requests in over on that page, because it’s easier for me to keep track of what’s been claimed that way.

Matt Janz’s Out of the Gene Pool rerun for the 15th missed last week’s cut. It does mention the Law of Cosines, which is what the Pythagorean Theorem looks like if you don’t have a right triangle. You still have to have a triangle. Bobby-Sue recites the formula correctly, if you know the notation. The formula’s $c^2 = a^2 + b^2 - 2 a b \cos\left(C\right)$. Here ‘a’ and ‘b’ and ‘c’ are the lengths of the sides of the triangle. ‘C’, the capital letter, is the size of the angle opposite the side with length ‘c’. That’s a common notation. ‘A’ would be the size of the angle opposite the side with length ‘a’. ‘B’ is the size of the angle opposite the side with length ‘b’. The Law of Cosines is a generalization of the Pythagorean Theorem. It’s a result that tells us something like the original theorem but for cases the original theorem can’t cover. And if it happens to be a right triangle the Law of Cosines gives us back the original Pythagorean Theorem. In a right triangle C is the size of a right angle, and the cosine of that is 0.
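Here’s the formula as a snippet of Python, with the right-angle case checking against the Pythagorean Theorem. The function name is my own:

```python
import math

def third_side(a, b, C):
    """Length of the side opposite angle C (in radians),
    by the Law of Cosines."""
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(C))

# With C a right angle, cos(C) = 0 and we recover Pythagoras:
# a 3-4-? triangle gives back 5, up to rounding.
hypotenuse = third_side(3, 4, math.pi / 2)
```

Plug in an angle of 60 degrees ($\pi / 3$) between two unit sides and you get back 1, an equilateral triangle, which is another comforting sanity check.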

That said Bobby-Sue is being fussy about the drawings. No geometrical drawing is ever perfectly right. The universe isn’t precise enough to let us draw a right triangle. Come to it we can’t even draw a triangle, not really. We’re meant to use these drawings to help us imagine the true, Platonic ideal, figure. We don’t always get there. Mock proofs, the kind of geometric puzzle showing something we know to be nonsense, rely on that. Give chalkboard art a break.

Samson’s Dark Side of the Horse for the 17th is the return of Horace-counting-sheep jokes. So we get a π joke. I’m amused, although I couldn’t sleep trying to remember digits of π out quite that far. I do better working out Collatz sequences.

Hilary Price’s Rhymes With Orange for the 19th at least shows the attempt to relieve mathematics anxiety. I’m sympathetic. It does seem like there should be ways to relieve this (or any other) anxiety, but finding which ones work, and which ones work best, is partly a mathematical problem. As often happens with Price’s comics I’m particularly tickled by the gag in the title panel.

Norm Feuti’s Gil rerun for the 19th builds on the idea calculators are inherently cheating on arithmetic homework. I’m sympathetic to both sides here. If Gil just wants to know that his answers are right there’s not much reason not to use a calculator. But if Gil wants to know that he followed the right process then the calculator’s useless. By the right process I mean, well, the work to be done. Did he start out trying to calculate the right thing? Did he pick an appropriate process? Did he carry out all the steps in that process correctly? If he made mistakes on any of those he probably didn’t get to the right answer, but it’s not impossible that he would. Sometimes multiple errors conspire and cancel one another out. That may not hurt you with any one answer, but it does mean you aren’t doing the problem right and a future problem might not be so lucky.

Zach Weinersmith’s Saturday Morning Breakfast Cereal rerun for the 19th has God crashing a mathematics course to proclaim there’s a largest number. We can suppose there is such a thing. That’s how arithmetic modulo a number is done, for one. It can produce weird results in which stuff we just naturally rely on doesn’t work anymore. For example, in ordinary arithmetic we know that if one number times another equals zero, then either the first number or the second, or both, were zero. We use this in solving polynomials all the time. But in arithmetic modulo 8 (say), 4 times 2 is equal to 0.
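A couple lines of Python show how arithmetic modulo 8 breaks the no-zero-divisors rule we rely on:

```python
# Arithmetic modulo 8: a small world where 4 times 2 is 0
# even though neither factor is 0.
m = 8
product = (4 * 2) % m   # 0

# Every pair of nonzero "numbers" whose product is 0 mod 8:
zero_divisors = [(a, b)
                 for a in range(1, m)
                 for b in range(1, m)
                 if (a * b) % m == 0]
```

Work modulo a prime instead, and the list comes up empty; that is a big part of why primes are so beloved in modular arithmetic.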

And if we recklessly talk about “infinity” as a number then we get outright crazy results, some of them teased in Weinersmith’s comic. “Infinity plus one”, for example, is “infinity”. So is “infinity minus one”. And “infinity minus infinity” can be “infinity”, or maybe zero, or really any number you want, depending on how you sneak up on it. We can avoid these logical disasters — so far, anyway — by being careful. We have to understand that “infinity” is not a number, though we can use numbers growing infinitely large.

Induction, meanwhile, is a great, powerful, yet baffling form of proof. When it solves a problem it solves it beautifully. And easily, too, usually by doing something like testing two special cases. Maybe three. At least a couple special cases of whatever you want to know. But picking the cases, and setting them up so that the proof is valid, is not easy. There are logical pitfalls and it is so hard to learn how to avoid them.
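For a concrete taste of the technique, here’s the classic textbook example, proving $1 + 2 + \cdots + n = \frac{n(n+1)}{2}$ by induction. It’s my example, not anything from the comic:

```latex
\textbf{Base case:} for $n = 1$ the sum is $1 = \frac{1 \cdot 2}{2}$.

\textbf{Inductive step:} suppose $1 + 2 + \cdots + k = \frac{k(k+1)}{2}$
for some positive whole number $k$. Then
\[
1 + 2 + \cdots + k + (k+1) = \frac{k(k+1)}{2} + (k+1)
  = \frac{(k+1)(k+2)}{2},
\]
which is the formula for $n = k + 1$.
\textbf{Conclusion:} the formula holds for every positive whole number $n$.
```

Two “special cases”, and the formula holds forever. The hard part, as ever, is noticing what supposition makes the inductive step go through.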

Jon Rosenberg’s Scenes from a Multiverse for the 19th plays on a wonderful paradox of randomness. Randomness is … well, unpredictable. If I tried to sell you a sequence of random numbers and they were ‘1, 2, 3, 4, 5, 6, 7’ you’d be suspicious at least. And yet, perfect randomness will sometimes produce patterns. If there were no little patches of order we’d have reason to suspect the randomness was faked. There is no reason that a message like “this monkey evolved naturally” couldn’t be encoded into a genome by chance. It may just be so unlikely we don’t buy it. The longer the patch of order the less likely it is. And yet, incredibly unlikely things do happen. The study of impossibly unlikely events is a good way to quickly break your brain, in case you need one.
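We can even put a number to how unlikely a given patch of order is. Here’s a Python sketch of the expected count of appearances of a particular “message” in a run of random digits; the function and its name are mine:

```python
def expected_occurrences(message_len, n_digits, base=10):
    """Expected number of times one particular message of
    message_len digits appears in n_digits random digits.
    Each window of message_len digits matches with chance
    1/base**message_len, and there are n_digits - message_len + 1
    windows."""
    windows = n_digits - message_len + 1
    return windows / base ** message_len

# In a million random digits, a chosen six-digit message
# should appear about once.
roughly_one = expected_occurrences(6, 10**6)
```

So a short message will wander in by chance if you wait long enough. A message as long as “this monkey evolved naturally” showing up in the part of the sequence you actually get to read is another matter entirely.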

## Reading the Comics, September 6, 2016: Oh Thank Goodness We’re Back Edition

That’s a relief. After the previous week’s suspicious silence Comic Strip Master Command sent a healthy number of mathematically-themed comics my way. They cover a pretty normal spread of topics. So this makes for a nice normal sort of roundup.

Mac King and Bill King’s Magic In A Minute for the 4th is an arithmetic magic trick. Like most arithmetic magic it depends on some true but, to me, dull bit of mathematics. In this case, that 81,234,567 minus 12,345,678 is equal to something. As a kid this sort of trick never impressed me because, well, anyone can do subtraction. I didn’t appreciate that the fun of stage magic is in presenting the mundane well.

Jerry Scott and Jim Borgman’s Zits for the 5th is an ordinary mathematics-is-hard joke. But it’s elevated by the artwork, which shows off the expressive and slightly surreal style that makes the comic so reliable and popular. The formulas look fair enough, the sorts of things someone might’ve been cramming before class. If they’re a bit jumbled up, well, Pierce hasn’t been well.

Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 6th is an anthropomorphic-shapes joke and I feel like it’s been here before. Ah, yeah, there it is, from about this time last year. It’s a fair one to rerun.

Mustard and Boloney popped back in on the 8th with a strip I don’t have in my archive at least. It’s your standard Pi Pun, though. If they’re smart they’ll rerun it in March. I like the coloring; it’s at least a pleasant panel to look at.

Percy Crosby’s Skippy from the 9th of July, 1929 was rerun the 6th of September. It seems like a simple kid-saying-silly-stuff strip: what is the difference between the phone numbers Clinton 2651 and Clinton 2741 when they add to the same number? (And if Central knows what the number is why do they waste Skippy’s time correcting him? And why, 87 years later, does the phone yell at me for not guessing correctly whether I need the area code for a local number and whether I need to dial 1 before that?) But then who cares what the digits in a telephone number add to? What could that tell us about anything?

As phone numbers historically developed, the sum can’t tell us anything at all. But if we had designed telephone numbers correctly we could have made it … not impossible to dial a wrong number, but at least made it harder. This insight comes to us from information theory, which, to be fair, we have because telephone companies spent decades trying to work out solutions to problems like people dialing numbers wrong or signals getting garbled in the transmission. We can allow for error detection by schemes as simple as passing along, besides the numbers, the sum of the numbers. This can allow for the detection of a single error: had Skippy called for number 2641 instead of 2741 the problem would be known. But it’s helpless against two errors that cancel each other out, like calling for 2651 instead of 2741. But we could detect a second error by calculating some second term based on the number we wanted, and sending that along too.

By adding some more information, other modified sums of the digits we want, we can even start correcting errors. We understand the logic of this intuitively. When we repeat a message twice after sending it, we are trusting that even if one copy of the message is garbled the recipient will take the version received twice as more likely what’s meant. We can design subtler schemes, ones that don’t require we repeat the number three times over. But that should convince you that we can do it.

The tradeoff is obvious. We have to say more digits of the number we want. It isn’t hard to reach the point we’re sending more error-detecting and error-correcting numbers than we are numbers we want. And what if we make a mistake in the error-correcting numbers? (If we used a smart enough scheme, we can work out the error was in the error-correcting number, and relax.) If it’s important that we get the message through, we shrug and accept this. If there’s no real harm done in getting the message wrong — if we can shrug off the problem of accidentally getting the wrong phone number — then we don’t worry about making a mistake.
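Here’s the digit-sum scheme in a few lines of Python, using Skippy’s phone numbers. The function names are my own:

```python
def with_check(number):
    """Append a simple check digit: the digit sum, mod 10."""
    digit_sum = sum(int(d) for d in str(number))
    return str(number) + str(digit_sum % 10)

def looks_valid(dialed):
    """Does the last digit match the digit sum of the rest?"""
    body, check = dialed[:-1], dialed[-1]
    return sum(int(d) for d in body) % 10 == int(check)

sent = with_check(2741)        # '27414', since 2+7+4+1 = 14
one_error = '26414'            # 2641 for 2741: sum changes, caught
two_cancelling = '26514'       # 2651 for 2741: same sum, slips through
```

The one-digit slip gets caught; the two errors that cancel, the strip’s own 2651 for 2741, sail right past. Hence the appeal of smarter second terms, which real check-digit schemes like the ISBN’s weighted sums provide.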

And at this point we’re only a few days into the week. I have enough hundreds of words on the close of the week I’ll put off posting that a couple of days. It’s quite good having the comics back to normal.

## Reading the Comics, June 26, 2016: June 23, 2016 Plus Golden Lizards Edition

And now for the huge pile of comic strips that had some mathematics-related content on the 23rd of June. I admit some of them are just using mathematics as a stand-in for “something really smart people do”. But first, another moment with the Magic Realism Bot:

So, you know, watch the lizards and all.

Tom Batiuk’s Funky Winkerbean name-drops E = mc² as the sort of thing people respect. If the strip seems a little baffling then you should know that Mason’s last name is Jarr. He was originally introduced as a minor player in a storyline that wasn’t about him, so the name just had to exist. But since then Tom Batiuk’s decided he likes the fellow and promoted him to major-player status. And maybe Batiuk regrets having a major character with a self-consciously Funny Name, which is an odd thing considering he named his long-running comic strip for original lead character Funky Winkerbean.

Charlie Podrebarac’s CowTown depicts the harsh realities of Math Camp. I assume they’re the realities. I never went to one myself. And while I was on the Physics Team in high school I didn’t make it over to the competitive mathematics squad. Yes, I noticed that the not-a-numbers-person Jim Smith can’t come up with anything other than the null symbol, representing nothing, not even zero. I like that touch.

Ryan North’s Dinosaur Comics rerun is about Richard Feynman, the great physicist whose classic memoir What Do You Care What Other People Think? is hundreds of pages of stories about how awesome he was. Anyway, the story goes that Feynman noticed one of the sequences of digits in π and thought of the joke which T-Rex shares here.

π is believed but not proved to be a “normal” number. This means several things. One is that any finite sequence of digits you like should appear in its representation, somewhere. Feynman and T-Rex look for the sequence ‘999999’, which sure enough happens less than eight hundred digits past the decimal point. Lucky stroke there. There’s no reason to suppose the sequence should be anywhere near the decimal point. There’s no reason to suppose the sequence has to be anywhere in the finite number of digits of π that humanity will ever know. (This is why Carl Sagan’s novel Contact, which has as a plot point the discovery of a message apparently encoded in the digits of π, is not building on a stupid idea. That any finite message exists somewhere is kind-of certain. That it’s findable is not.)
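If you’d like to check without trusting anyone’s digit tables, π’s digits can be computed exactly with integer arithmetic. This Python sketch uses Machin’s formula, $\pi = 16 \arctan\frac{1}{5} - 4 \arctan\frac{1}{239}$, in fixed point; the six nines Feynman joked about start at the 762nd decimal place.

```python
def pi_digits(n):
    """First n decimal digits of pi, via Machin's formula in
    fixed-point integer arithmetic. Ten guard digits absorb
    the truncation error of all the floor divisions."""
    one = 10 ** (n + 10)

    def atan_inv(x):
        # arctan(1/x) * one, by the alternating Taylor series
        total = term = one // x
        k, sign = 3, -1
        while term:
            term //= x * x
            total += sign * (term // k)
            k, sign = k + 2, -sign
        return total

    pi_scaled = 16 * atan_inv(5) - 4 * atan_inv(239)
    return str(pi_scaled)[1:n + 1]   # drop the leading '3'

digits = pi_digits(1000)
spot = digits.find('999999')   # 0-based, so decimal place spot + 1
```

A few hundred series terms and it’s done; big integers do all the heavy lifting.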

e, mentioned in the last panel, is similarly thought to be a normal number. It’s also not proved to be. We are able to say that nearly all numbers are normal. It’s in much the way we can say nearly all numbers are irrational. But it is hard to prove that any numbers are. I believe that the only numbers humans have proved to be normal are a handful of freaks created to show normal numbers exist. I don’t know of any number that’s interesting in its own right that’s also been shown to be normal. We just know that almost all numbers are.

But it is imaginable that π or e aren’t. They look like they’re normal, based on how their digits are arranged. It’s an open question and someone might make a name for herself by answering the question. It’s not an easy question, though.

Missy Meyer’s Holiday Doodles breaks the news to me the 23rd was SAT Math Day. I had no idea and I’m not sure what that even means. The doodle does use the classic “two trains leave Chicago” introduction, the “it was a dark and stormy night” of Boring High School Algebra word problems.

Stephan Pastis’s Pearls Before Swine is about everyone who does science and mathematics popularization, and what we worry someone’s going to reveal about us. Um. Except me, of course. I don’t do this at all.

Ashleigh Brilliant’s Pot-Shots rerun is a nice little averages joke. It does highlight something which looks paradoxical, though. Typically if you look at the distributions of values of something that can be measured you get a bell curve, like Brilliant drew here. The value most likely to turn up — the mode, mathematicians say — is also the arithmetic mean. “The average” is what everybody except mathematicians says. And even they say that most of the time. But almost nobody is at the average.

Looking at a drawing, Brilliant’s included, explains why. The exact average is a tiny slice of all the data, the “population”. Look at the area in Brilliant’s drawing underneath the curve that’s just the blocks underneath the upside-down fellow. Most of the area underneath the curve is away from that.

There’s a lot of results that are close to but not exactly at the arithmetic mean. Most of the results are going to be close to the arithmetic mean. Look at how much area there is under the curve and within four vertical lines of the upside-down fellow. That’s nearly everything. So we have this apparent contradiction: the most likely result is the average. But almost nothing is average. And yet almost everything is nearly average. This is why statisticians have their own departments, or get to make the mathematics department brand itself the Department of Mathematics and Statistics.
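We can put numbers to “almost nothing is average, almost everything is nearly average” using the error function, which describes the area under the bell curve. A Python sketch, with my own choice of what counts as a thin slice:

```python
import math

def within(z):
    """Probability a standard bell-curve result lands within
    z standard deviations of the mean."""
    return math.erf(z / math.sqrt(2))

thin_slice = within(0.05)   # right at the average: about 4 percent
nearly_all = within(2.0)    # within two standard deviations: ~95 percent
```

About four percent of results sit in that sliver hugging the mean, while some 95 percent sit within two standard deviations. Exactly average is rare; nearly average is nearly everyone.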

## Reading the Comics, April 15, 2016: Remarkably, No Income Tax Comics Edition

I’m as startled as you are. While a couple comic strips mentioned United States Income Tax Day, they didn’t do so in a way that seemed on-point enough for this Reading The Comics post. Of course, United States Income Tax Day happens to be the 18th this year. I haven’t seen Sunday’s comics yet.

David L Hoyt and Jeff Knurek’s Jumble for the 11th of April once again uses arithmetic puns for its business. Also, if some science fiction writer doesn’t take hold of “Gribth” as a name for something they’re missing a fine syllable. “Tahew” is no slouch in the made-up word leagues either.

Ryan North’s Dinosaur Comics for the 12th of April obviously originally ran sometime in mid-March. I have similarly ambiguous feelings about the value of Pi Day. I suppose it’s nice for people to think of “fun” and “mathematics” close together. Utahraptor’s distinction between “Pi Day” of March 14 and “Approximate Pi Day” of the 22nd of July is a curious one, though. It’s not as though 3.14 is any more exactly π than 22/7 is. I suppose you can argue that at some moment on 3/14 between 1:59:26 and 1:59:27 there’s some moment, 1:59:26.5358979 et cetera going on forever. But that assumes that time is a continuous thing, and it’s not like you’ll ever know what that moment is. By the time you might recognize it, it’s passed. They are all Approximate Pi Days; we just have to decide what the approximation is.
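As it happens the two dates aren’t even equally good approximations, as a couple lines of Python will show:

```python
import math

pi_day = 3.14          # March 14, in the American style
approx_pi_day = 22 / 7 # the 22nd of July, day/month style

error_pi_day = abs(math.pi - pi_day)          # about 0.00159
error_approx = abs(math.pi - approx_pi_day)   # about 0.00126
```

The “Approximate” date is the better approximation: 22/7 misses π by about 0.00126, while 3.14 misses by about 0.00159. Utahraptor has the labels backwards, if anything.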

Bill Schorr’s The Grizzwells for the 12th is a silly-homework problem question. I know the point is to joke about how Fauna misunderstands a word. But if we pretend the assignment is for real, what might its point be? To show that students know the parts of a right triangle? I guess that’s all right, but it doesn’t seem like much of an assignment. I don’t blame her for getting snarky in the face of that.

Rick Kirkman and Jerry Scott’s Baby Blues for the 13th is a gag about picking random numbers for arithmetic homework. The approach is doomed, surely, although it’s probably not completely doomed. I’m not sure Hammie’s age, but if his homework is about adding and subtracting numbers he probably mostly gets problems that give results between zero and twenty, and almost always less than a hundred. He might hit some by luck.

I’ve mentioned some how people are awful at picking “random” numbers in their heads. Zoe shows off one of the ways people are bad at it. People asked to name numbers “randomly” pick odd numbers more than even numbers. Somehow they just feel random. I doubt Kirkman and Scott were thinking of that; among other things, five numbers is a very small sample. Four odds out of five isn’t peculiar, not yet. They were probably just trying to pick numbers that sounded funny while fitting the space available. I’m a bit surprised 37 didn’t make the list.

Mark Anderson’s Andertoons for the 13th is Mark Anderson’s Andertoons entry for this essay. I like the teacher’s answer, though.

Patrick Roberts’s Todd the Dinosaur for the 14th just uses arithmetic as the most economical way to fit several problems on-panel at once. They’ve got a compactness that sentence-diagramming just can’t match.

Greg Cravens’s The Buckets for the 15th amuses me with its use of coin-tossing as a way of making choices. I’m also amused the coin might be wrong only about half the time.

John Deering’s Strange Brew for the 15th is a visual puzzle. It’s intending to make use of a board full of mathematical symbols to represent deep thought. But the symbols aren’t quite mathematics. They look much more like LaTeX, a typesetting code used to express mathematics in print. Some of the symbols are obscured, so I can’t say exactly what’s meant. But it should be something like this:

$F = \{F_{x} \in F_{c}: (\text{is} ... (1) ) \cap (\text{minPixels} < \|s\| < \text{maxPixels} ) \\ \partial{P} \\ (\text{is}_{\text{connected}}| > |s| - \epsilon) \}$

At the risk of disappointing, this appears to me to be gibberish. The appearance of words like ‘minPixels’ and ‘maxPixels’ suggests a bit of computer code. So does having a subscript that’s the full word “connected”. I wonder where Deering drew this example from.

## Reading the Comics, March 14, 2016: Pi Day Comics Event

Comic Strip Master Command had the regular pace of mathematically-themed comic strips the last few days. But it remembered what the 14th would be. You’ll see that when we get there.

Ray Billingsley’s Curtis for the 11th of March is a student-resists-the-word-problem joke. But it’s a more interesting word problem than usual. It’s your classic problem of two trains meeting, but rather than ask when they’ll meet it asks where. It’s just an extra little step once the time of meeting is worked out, but that’s all right by me. Anything to freshen the scenario up.

Tony Carrillo’s F Minus for the 11th was apparently our Venn Diagram joke for the week. I’m amused.

Mason Mastroianni, Mick Mastroianni, and Perri Hart’s B.C. for the 12th of March name-drops statisticians. Statisticians are almost expected to produce interesting pictures of their results. It is the field that gave us bar charts, pie charts, scatter plots, and many more. Statistics is, in part, about understanding a complicated set of data with a few numbers. It’s also about turning those numbers into recognizable pictures, all in the hope of finding meaning in a confusing world (ours).

Brian Anderson’s Dog Eat Doug for the 13th of March uses walls full of mathematical scrawl as signifier for “stuff thought deeply about”. I don’t recognize any of the symbols specifically, although some of them look plausibly like calculus. I would not be surprised if Anderson had copied equations from a book on string theory. I’d do it to tell this joke.

And then came the 14th of March. That gave us a bounty of Pi Day comics. Among them:

John Hambrock’s The Brilliant Mind of Edison Lee trusts that the name of the day is wordplay enough.

Scott Hilburn’s The Argyle Sweater is also a wordplay joke, although it’s a bit more advanced.

Tim Rickard’s Brewster Rockit fuses the pun with one of its running, or at least rolling, gags.

Bill Whitehead’s Free Range makes an urban legend out of the obsessive calculation of digits of π.

And Missy Meyer’s informational panel cartoon Holiday Doodles mentions that besides “National” Pi Day it was also “National” Potato Chip Day, “National” Children’s Craft Day, and “International” Ask A Question Day. My question: for the first three days, which nation?

Edited To Add: And I forgot to mention, after noting to myself that I ought to mention it. The Price Is Right (the United States edition) hopped onto the Pi Day fuss. It used the day as a thematic link for its Showcase prize packages, noting how you could work out π from the circumference of your new bicycles, or how π was a letter from your vacation destination of Greece, and if you think there weren’t brand-new cars in both Showcases you don’t know the game show well. Did anyone learn anything mathematical from this? I am skeptical. Do people come away thinking mathematics is more fun after this? … Conceivably. At least it was a day fairly free of people declaring they Hate Math and Can Never Do It.

## Terrible And Less-Terrible Things with Pi

We are coming around “Pi Day”, the 14th of March, again. I don’t figure to have anything thematically appropriate for the day. I figure to continue the Leap Day 2016 Mathematics A To Z, and I don’t tend to do a whole two posts in a single day. Two just seems like so many, doesn’t it?

But I would like to point people who’re interested in some π-related stuff to what I posted last year. Those posts were:

• Calculating Pi Terribly, in which I show a way to work out the value of π that’s fun and would take forever. I mean, yes, properly speaking they all take forever, but this takes forever just to get a couple of digits right. It might be fun to play with but don’t use this to get your digits of π. Really.
• Calculating Pi Less Terribly, in which I show a way to do better. This doesn’t lend itself to any fun side projects. It’s just calculations. But it gets you accurate digits a lot faster.
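I won’t spoil those posts here. But for flavor, here is a sketch of one famously terrible scheme, throwing random darts at a square and counting how many land inside the quarter circle. (This is my own toy version of the general idea, not necessarily the method those posts work through.)

```python
import random

def pi_by_darts(n, seed=314159):
    # Throw n random points into the unit square. The fraction landing
    # inside the quarter circle of radius 1 approximates pi/4.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random()**2 + rng.random()**2 <= 1)
    return 4 * hits / n

print(pi_by_darts(1_000_000))
```

A million darts typically gets you only two or three digits right, which is the “terribly” part. Really, don’t use this to get your digits of π.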

## Reading the Comics, March 9, 2016: Mathematics Recreation Edition

I haven’t been skipping the comics, even with the effort of keeping up on the Leap Day 2016 A To Z Glossary. I just try to keep to the pace which Comic Strip Master Command sets.

The kids-information feature Short Cuts, by Jeff Harris, got ahead of “Pi Day” last Sunday. I imagine the feature gets run mid-week in some papers, so that it’s better to run a full week before March 14th. But here’s a bundle of trivia, some jokes, some activities, that sort of thing. I am curious about one of Harris’s bits of trivia, that Pi “plays an important role in some of the equations used in Einstein’s famous general theory of relativity”. That’s true, but it’s not as if general relativity is a rare appearance for pi in physics. Maybe Harris chose it on aesthetic grounds. General relativity has a familiar name and exotic concepts. And it allowed him to put in an equation that’s mysterious yet attractive-looking.

Samson’s Dark Side Of The Horse for the 7th of March made me wonder how many sudoku puzzles there are. The answer is — well, you have to start thinking carefully about what you mean by “how many”. For example: start with one puzzle. Swap out every appearance of a 1 with a 2, and a 2 with a 1. Is this new one actually a different puzzle? You can make a case for yes or for no. And that’s before we get into the question of how many clues to give to solve the puzzle. If I’m not misreading Wikipedia’s “Mathematics of Sudoku” page, the number of different nine-by-nine combinations of digits that can be legitimate sudoku puzzle solutions is 6,670,903,752,021,072,936,960. This was worked out in 2005 by Bertram Felgenhauer and Frazer Jarvis. They worked it out partly by logic, partly by brute force. Brute force is trying all the possibilities to see what works. It’s a method that rewards endurance. We like that we can turn it over to computers now. Or cartoon horses, whichever. They’re good at endurance.
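The relabeling question has a tidy arithmetic consequence, by the way. Since swapping digit labels turns valid grids into valid grids, the Felgenhauer-Jarvis total has to divide evenly by the 9! possible relabelings. A quick check (my arithmetic, not theirs):

```python
from math import factorial

# The Felgenhauer-Jarvis count of valid nine-by-nine sudoku solution grids.
total_grids = 6_670_903_752_021_072_936_960

relabelings = factorial(9)  # 362,880 ways to permute the digits 1 through 9
assert total_grids % relabelings == 0

# Grids counted up to digit relabeling, one answer to "how many puzzles?"
print(total_grids // relabelings)
```

Which is one way to make precise the “is it actually a different puzzle?” question: divide out the symmetry you’ve decided doesn’t matter.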

Jef Mallett’s Frazz started a sequence about problem-writing on the 7th of March. Caulfield’s setup, complaining about trains and apple bushels, suggests he was annoyed by mathematics problems. I understand. Much of real mathematics starts with curiosity about something (how many sudoku puzzles are there?). Then it’s working out what computation might answer that question. Then it’s doing that calculation. And then it’s verifying that the calculation is right. Mathematics educators have to teach ways to do a calculation, and test that. And to teach how to know what calculation to do, and test that. That’s challenging enough. Add to that working out something to be curious about and you understand the appeal of stock setups. Maybe mathematics should include some courses in creative writing and short-short fiction. (Verification is, in my experience, the part nobody cares about. This is a shame. The hardest part of doing numerical mathematics is making sure your computation makes any sense.)

Richard Thompson’s Richard’s Poor Almanac rerun the 7th of March features the Non-Euclidean Creeper. It’s a plant perhaps related to the Cubist Fir Christmas tree and to the Otterloops’ troublesome non-Euclidean tree. Non-Euclidean geometry will probably always sound more intimidating and exotic. Euclidean geometry describes the way objects on the human scale behave. Shapes that fit on the table, or in your garden, follow Euclidean rules. But non-Euclidean isn’t magic; it’s the way that shapes on the surface of a globe work, for example. And the idea of drawing a thing like a square on the surface of the Earth isn’t so bizarre.

Paul Trap’s Thatababy for the 7th makes sport of geometry.

My love and I were talking the other day about Jim Toomey’s Sherman’s Lagoon. It’s a bit odd as comic strips go. It’s been around forever, for one, but nobody talks about it. It’s stayed reliably funny. Comic strips that’ve been around forever tend to … you know … not be. The strip’s done as a work-and-home strip except the cast is all sea life. And the thing is, Toomey keeps paying attention to new discoveries in sea life, and other animal research. And this is a fantastic era for discoveries in sea life, aside from how humans have now eaten all of it and we don’t have any left. I am not joking when I say the comic strip is an effortless way to keep up with new discoveries about the oceans.

I missed it when in December the discovery was announced to the world. But the setup, about the common name being given by a group of kids, is apparently quite correct. That is what we should expect from Toomey. (The scientific name is Etmopterus benchleyi. The last name refers to Peter Benchley, repentant Jaws novelist.) LiveScience.com’s article says lead author Dr Vicky Vásquez had to “scale them back” from their starting point, the “super ninja”. This differs from Hawthorne’s claim that the kids started from the “math stinks” shark, but it’s a delight anyway.

## Reading the Comics, October 17, 2015: Rerun Edition

I hate to make it sound like I’m running out of things to say about mathematical comics. But the most recent bunch of strips have been reruns, as with Bill Amend’s FoxTrot or Tom Toles’s Randolph Itch, 2 am. And there’s some figurative reruns too, as a couple of things I’ve talked about before come around again. Also I’m not sure but I think I might have used this Edition Title before. It feels like one I might have. I hope you’ll enjoy anyway, please.

Bill Amend’s FoxTrot Classics for the 15th of October, originally run in 2004, is about binary numerals. It’s built on the fact the numeral ‘100’ represents a rather smaller number in base-two arithmetic than it does in base-ten. This is the sort of thing that’s funny to a mathematically-inclined nerd, such as Jason here. It’s the numerical equivalent of a pun, playing on how if you pretend something is in a different context, it would have a different meaning.

Dave Blazek’s Loose Parts for the 15th of October puts a shape other than a triangle into the orchestra pit. I’m amused, and it puts me in mind of the classic question, “Can One Hear The Shape Of A Drum?” The answer is tricky.

Bob Scott’s Molly and the Bear for the 15th of October is a Pi Day joke. I don’t believe it’s a rerun, but the engagingly-drawn strip is in reruns terribly often.

Tom Toles’s Randolph Itch, 2 am for the 15th of October is a rerun, not just from 1999 but from earlier this year. I don’t know if the strip is being run out of order or if the strip ran a shorter time than I thought. Anyway, it’s still a funny drawing and “r” doesn’t figure into it at all.

Rick Detorie’s One Big Happy for the 16th of October shows Ruthie teaching her stuffed dolls about the number 1. Ruthie is a bit confused about the difference between the number one and the numeral, the way we represent the number. That’s common enough.

She does kind of have a point, though. The number one gets represented as a vertical stroke in the Arabic numerals we commonly use; also in Roman numerals used in making dates harder to read; also in Ancient Egyptian numerals; also in Chinese numerals. One almost suspects everyone is copying each other, or just started off with a tally mark and kept with it. Things get more complicated around ‘three’ or ‘four’. But it isn’t really universal, of course. The Mayans used a single dot, which is admittedly pretty close as a scheme. The Babylonians used a vertical wedge, a little triangle atop a stem that was presumably easy to carve with the tools available.

Ruben Bolling’s Super-Fun-Pak Comix for the 16th of October reprints a Chaos Butterfly installment. And the reminder that a system can be deterministic yet unpredictable sets me up for …

The rerun of Tom Toles’s Randolph Itch, 2 am that appeared on the 17th. The page of horoscopes saying “what happens to you today will be random, based on laws of probability” is funny, although, “random”? There is, it appears, randomness deeply encoded in the universe. There seems to be no way that atoms and molecules could work if they could not be random. But randomness follows laws. Those laws are so fundamental, and imply averages so relentlessly, that they create a human-scale world which might as well be deterministic. (I am deliberately bundling up the question of whether beings have free will and putting it off to the corner, in a little box, where I will not bother it.) In principle, we should be able to predict the day; we just need enough information, and time to compute.

Of course in practice we can’t, and can’t even come close. We may be able to predict the broad strokes of the day, but it is filled with the unpredictable. We call that random, but that is really a confession of ignorance. It’s much the way we might say there is a “probability” of one in seven that you were born on a Tuesday. There’s no such thing. The probability is either 1, because you were born on a Tuesday, or 0, because you were not. What day any given date in the Julian or Gregorian calendar occurred is a determined thing. What we mean by “a probability of one in seven” is that we are ignorant of your birthday, or have not done the work of finding out what day of the week that was. Thus the day of the week appears random.

John Graziano’s Ripley’s Believe It or Not for the 17th of October claims that Les Stewart wrote out “every number from one to one million in words”, using seven typewriters, in a project that took sixteen years and seven months. Sixteen years and seven months is something close to half a billion seconds. So if we take this claim, he was averaging about five hundred seconds to write out each number. This sounds unimpressive, but after all, he had to take some time to sleep and probably had other projects to work on as well. Perhaps he was also working on putting the numbers in alphabetical order.

## Reading the Comics, October 10, 2015: Wordplay Edition

Some of the past several days’ mathematically-themed comic strips have bits of wordplay in them. That’ll do for the theme. We get some familiar topics along the way.

Rick Detorie’s One Big Happy for the 6th of October is one of the wordplay jokes you can do about probability. (This is the strip that ran in newspapers this year. One Big Happy strips on Gocomics.com are reruns from several years back.)

Niklas Eriksson’s Carpe Diem for the 6th of October is a badly-timed Pi Day strip.

Tom Thaves’s Frank and Ernest for the 8th of October is a kids-resisting-algebra problem. The kid asks why ‘x’ has to be equal to something, why it can’t just be ‘x’. He’s wiser than his teacher has taught. We use ‘x’ as the name for a number whose exact identity we don’t know right away. Often, especially in introductory algebra, we hope to work out what number it is. That’s the sort of problem that makes us find x, or solve for x. But we don’t always care what x is. Sometimes we just want to say that it’s an example of a number with some interesting properties. We often use it this way when we try drawing the plot of a function. The plot shows all the coordinate sets that make some equation true, and we need x to organize our thoughts about that, but we never really care what x is.

Or we might use x as a ‘dummy variable’, the mathematical equivalent of falsework. We use the variable to get some work done, but never see it once we’re finished, and don’t ever care what it was. If we take the definite integral of a function of x over x, for example, the one thing our answer should not have is an ‘x’ in it. (Well, if we’re integrating some nasty function that can’t be evaluated except in terms of another integral, maybe an ‘x’ will appear. But that’s a pathological case.)
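To see the dummy variable vanish in a toy numerical sketch of my own: the definite integral of x² from 0 to 1 is just the number 1/3, with no trace of x left in it.

```python
def integrate(f, a, b, n=10_000):
    # Midpoint-rule approximation of the definite integral of f over [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# The variable x does all the work inside, and none survives in the answer.
result = integrate(lambda x: x**2, 0.0, 1.0)
print(result)  # close to 1/3
```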

Alternatively, x might be a parameter, something which has to be a fixed number for the sake of doing other work, but whose value we don’t really care about. This would be an eccentric choice — usually parameters are from earlier in the alphabet, rarely later than ‘l’ and almost never past ‘t’ — but sometimes that’s the best alternative.

In Jef Mallett’s Frazz for the 8th of October, Caulfield answers his teacher’s demand to “show his work” by presenting a slide rule. It’s a cute joke although I’m not on Caulfield’s side here. If all anyone cared about was whether the calculation was right we’d need no mathematics. We have computers. What is worth teaching is “how do you know what to compute”, with a sideline of “can you do the computations correctly”. It’s important to know what you mean to do. It’s also important to know how to plausibly find an answer if you don’t know exactly what to do. None of that is shown by the answer alone.

Jim Benton’s Jim Benton Cartoons for the 8th of October is some more mathematics wordplay. I’m amused by its logic.

Samson’s Dark Side of the Horse for the 9th of October is the first anthropomorphized-numerals joke we’ve had in a while.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 9th of October is our Venn Diagram joke for this installment. And it’s not quite a proper Venn diagram, but it’s hard to draw a proper Venn diagram for four propositions. Wikipedia’s entry offers a couple of examples of four-set Venn diagrams. The one made of ellipses is not too bad, although it also evokes “logo for some maybe European cable TV channel” to my eye.

Disney’s Donald Duck for the 10th of October, a rerun from goodness knows when, depicts accurately the most terrifying moment a mathematician endures. I am delighted to see that the equations written out are correct and even consistent from one panel to the next. And yes, real mathematicians will sometimes write down what seem like altogether too-obvious propositions. That’s a good way of making sure you aren’t tripping over the easy stuff on the way to the bigger conclusions. I think it’s a bit implausible that the entire board would be this level of stuff — by the time you have your PhD, at least in mathematics or physics, you don’t need help remembering what the cosine of 120 degrees is — but it’s all valid stuff. Well, I could probably use the help remembering the tangent angle-addition formula, if I ever needed to work out the tangent of the sum of two angles.
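For the record (and my own memory), the tangent angle-addition formula is tan(a + b) = (tan a + tan b)/(1 - tan a tan b), and a few lines of code can spot-check it numerically. A sketch of my own, nothing from the strip:

```python
import math

def tan_sum(a, b):
    # Angle-addition formula for the tangent.
    return (math.tan(a) + math.tan(b)) / (1 - math.tan(a) * math.tan(b))

a, b = 0.3, 0.4
print(abs(tan_sum(a, b) - math.tan(a + b)))  # essentially zero

# And the chalkboard classic: the cosine of 120 degrees is -1/2.
print(math.cos(math.radians(120)))
```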

## Reading the Comics, September 14, 2015: Back To School Edition, Part II

Today’s mathematical-comics blog should get us up to the present. Or at least to Monday. Yes, I’m aware of which paradox of Zeno this evokes. Be nice.

Scott Adams’s Dilbert Classics for the 11th of September is a rerun from, it looks like, the 18th of July, 1992. Anyway, Dilbert has acquired a supercomputer and figures to calculate π to a lot of decimal places, to finally do something about the areas of circles. The calculation of the digits of pi is done, often on home-brewed supercomputers, to lots of digits. But the calculation of areas, or volumes, or circumferences or whatever, isn’t why. We just don’t need that many digits; forty digits of pi would be plenty for almost any calculation involving measuring things in the real world.

It’s actually a bit mysterious why the digits of pi should be worth bothering with. It’s not yet known if pi is a “normal” number, and that’s of some interest. In a normal number every finite string of digits appears, and as often as every other string of the same length. That is, if you break the digits of pi up into four-digit blocks, you should see “1701” and “2038” and “2468” and “9999” and all, each appearing roughly once per thousand blocks. It’s almost certain that pi is normal, because just about every number is. But it’s not proven. And if there were numerical evidence that pi wasn’t normal that would be mathematically interesting, though I wouldn’t blame everybody not in number theory for saying “huh” and “so what?” before moving on. As it is, calculating digits of pi is a good, challenging task that can be used to prove coding and computing abilities, and it might turn up something interesting. It may as well be that as anything.
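Forty digits, by the way, is well within reach without any supercomputer. A little sketch of my own using Machin’s formula of 1706, π = 16·arctan(1/5) − 4·arctan(1/239):

```python
from decimal import Decimal, getcontext

def arctan_recip(x, digits):
    # arctan(1/x) by its Taylor series, good to about `digits` decimal places.
    getcontext().prec = digits + 10
    total = Decimal(0)
    power = Decimal(1) / x           # holds 1/x^(2n+1)
    n = 0
    tiny = Decimal(10) ** -(digits + 5)
    while power > tiny:
        term = power / (2 * n + 1)
        total += term if n % 2 == 0 else -term
        power /= x * x
        n += 1
    return total

def pi_machin(digits=40):
    # Machin's formula: pi = 16 arctan(1/5) - 4 arctan(1/239).
    getcontext().prec = digits + 10
    return 16 * arctan_recip(Decimal(5), digits) - 4 * arctan_recip(Decimal(239), digits)

print(str(pi_machin(40))[:42])  # forty-ish decimal digits of pi
```

That runs in a blink, and forty digits is already more than any real-world measurement could ever use.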

Steve Breen and Mike Thompson’s Grand Avenue for the 11th of September is a “motivate the word problem” joke. So is Bill Amend’s FoxTrot Classics for the 14th (a rerun from the same day in 2004). I like Amend’s version better, partly because it gives more realistic problems. I also like that it mixes in a bit of French class. It’s not always mathematics that needs motivation.

J C Duffy’s Lug Nuts for the 11th of September is another pun strip badly timed for Pi Day.

Ruben Bolling’s Tom The Dancing Bug gave us a Super Fun-Pak Comics installment on the 11th. And that included a Chaos Butterfly installment pitting deterministic chaos against Schrödinger’s Cat. The Cat represents one (of many) interpretations of quantum mechanics, the “superposition” interpretation. It’s difficult to explain the idea philosophically, to say what is really going on. The mathematics is straightforward, though. In the most common model of quantum mechanics we describe what is going on by a probability distribution, a function that describes how likely each possible outcome is. Quantum mechanics describes how that distribution changes in time. In the superpositioning we have two, or more, probability distributions that describe very different states, but (in a way) averaged together. The changes of this combined distribution then become our idea of how the system changes in time. It’s just hard to say what it could mean when the superposition implies very different things, like a cat being both wet and dry, being equally true at once.

Julie Larson’s Dinette Set for the 12th of September is about double negatives. It’s also about the doomed attempt to bring logic to the constructions of English. At least in English a double negative — “not unwanted”, say — generally parses to a positive, even if the connotation is that the thing is only a bit wanted. This corresponds to what logicians would say. A logician might use “C” to stand in for some statement that can only be true or false. Then, saying “not not C” — an “is true” gets implicitly added to the end of that — is equivalent to saying “C [is true]”. My love, the philosopher, who knows much more Spanish than I do has pointed out that in Spanish the “not not” construction can intensify the strength of the negation, rather than annulling it. This causes us to wonder if Spanish-speaking logic students have a harder time understanding the “not not C” construction. I don’t know and would welcome insight. (Also I hope I have it right that a “not not” is an intensifier, rather than a softener. But I suppose it doesn’t matter, as long as the Spanish equivalent of “not not wanted” still connotes “unwanted”.)
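The logicians’ reading, at least, a computer can check in a heartbeat. A trivial sketch:

```python
# Double negation in two-valued logic: "not not C" is just C.
for C in (True, False):
    assert (not (not C)) == C
print("not not C agrees with C, for both truth values")
```

No natural language is obliged to work this way, which is rather the strip’s point.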

Dan Collins’s Looks Good On Paper for the 12th of September is a simple early-autumn panorama kind of strip. Mathematics — particularly, geometry — gets used as the type case for elementary school. I suppose as long as diagramming sentences is out of fashion there’s no better easy-to-draw choice.

David L Hoyt and Jeff Knurek’s Jumble for the 14th of September is an abacus joke. For folks who want to do the Jumble themselves, a hint: the second word is not “Dummy” however appealing an unscramble that looks.

Stephen Beals’s Adult Children for the 14th builds on the idea of what if the universe were made wrong. And that’s expressed as a mathematics error in the building of the universe. The idea of mathematics as a transcendent and even god-touching thing is an old one. I imagine this reflects how a logically deduced fact has some kind of universal truth, that a sound argument is sound regardless of who makes it, or considers it. It’s a heady idea. Mathematics also allows us to say some very specific, and remarkable, things about the infinite. This is another god-touching notion. But we don’t have sound reason to think that universe-making must be mathematical. Mathematics can describe many aspects of the universe eerily well, yes. But is it necessary that a universe be mathematically consistent? The question seems to defy any kind of empirical answer; we must listen to philosophers, who can at least give us a reasoned answer.

Tom Thaves’s Frank and Ernest for the 14th of September depicts cave-Frank and cave-Ernest at the dawn of numbers. It suggests the symbol 1 being a representation of a stick, and 0 as a stone. The 1 as a stick seems at least imaginable; counting off things by representing them as sticks or as stroke marks feels natural. Of course I say that coming from a long heritage of doing just that. 0, as I understand it, seems to derive from making with a dot a place where zero of whatever was to be studied should appear; the dot grew into a loop probably to make it harder to miss.

## Reading the Comics, August 29, 2015: Unthemed Edition

I can’t think of any particular thematic link through the past week’s mathematical comic strips. This happens sometimes. I’ll make do. They’re all Gocomics.com strips this time around, too, so I haven’t included the strips. The URLs ought to be reasonably stable.

J C Duffy’s Lug Nuts (August 23) is a cute illustration of the first, second, third, and fourth dimensions. The wall-of-text might be a bit off-putting, especially the last panel. It’s worth the reading. Indeed, you almost don’t need the cartoon if you read the text.

Tom Toles’s Randolph Itch, 2 am (August 24) is an explanation of pie charts. This might be the best silly joke of the week. I may just be an easy touch for a pie-in-the-face.

Charlie Podrebarac’s Cow Town (August 26) is about the first day of mathematics camp. It’s also every graduate student’s thesis-defense anxiety dream. The zero with a slash through it popping out of Jim Smith’s mouth is known as the null sign. That comes to us from set theory, where it describes “a set that has no elements”. Null sets have many interesting properties considering they haven’t got any things. And that’s important for set theory. The symbol was introduced to mathematics in 1939 by Nicholas Bourbaki, the renowned mathematician who never existed. He was important to the course of 20th century mathematics.

Eric the Circle (August 26), this one by ‘Arys’, is a Venn diagram joke. It makes me realize the Eric the Circle project does less with Venn diagrams than I expected.

John Graziano’s Ripley’s Believe It Or Not (August 26) talks of Akira Haraguchi. If we believe this, then in 2006 he recited 111,700 digits of pi from memory. It’s an impressive stunt and one that makes me wonder who did the checking that he got them all right. The fact-checkers never get their names in Graziano’s Ripley’s.

Mark Parisi’s Off The Mark (August 27, rerun from 1987) mentions Monty Hall. This is worth mentioning in these parts mostly as a matter of courtesy. The Monty Hall Problem is a fine and imagination-catching probability question. It represents a scenario that never happened on the game show Let’s Make A Deal, though.

Jeff Stahler’s Moderately Confused (August 28) is a word problem joke. I do wonder if the presence of battery percentage indicators on electronic devices has helped people get a better feeling for percentages. I suppose only vaguely. The devices can be too strangely nonlinear to relate percentages of charge to anything like device lifespan. I’m thinking here of my cell phone, which will sit in my messenger bag for three weeks dropping slowly from 100% to 50%, and then die for want of electrons after thirty minutes of talking with my father. I imagine you have similar experiences, not necessarily with my father.

Thom Bluemel’s Birdbrains (August 29) is a caveman-mathematics joke. This one’s based on calendars, which have always been mathematical puzzles.

## Reading the Comics, June 25, 2015: Not Making A Habit Of This Edition

I admit I did this recently, and am doing it again. But I don’t mean to make it a habit. I ran across a few comic strips that I can’t, even with a stretch, call mathematically-themed, but I liked them too much to ignore them either. So they’re at the end of this post. I really don’t intend to make this a regular thing in Reading the Comics posts.

Justin Boyd’s engagingly silly Invisible Bread (June 22) names the tuning “two steps below A”. He dubs this “negative C#”. This is probably an even funnier joke if you know music theory. The repetition of the notes in a musical scale could be used as an example of cyclic or modular arithmetic. Really, that the note above G is A of the next higher octave, and the note below A is G of the next lower octave, probably explains the idea already.

If we felt like it, we could match the notes of a scale to the counting numbers. Match A to 0, B to 1, C to 2 and so on. Work out sharps and flats as you like. Then we could think of transposing a note from one key to another as adding or subtracting numbers. (Warning: do not try to pass your music theory class using this information! Transposition of keys is a much more subtle process than I am describing.) If the number gets above some maximum, it wraps back around to 0; if the number would go below zero, it wraps back around to that maximum. Relabeling the things in a group might make them easier or harder to understand. But it doesn’t change the way the things relate to one another. And that’s why we might call something F or negative C#, as we like and as we hope to be understood.
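The wrapping-around can be sketched in a few lines. This is my toy model with A matched to 0 and the chromatic scale counted in half-steps; per the warning above, it is emphatically not real music theory.

```python
# Notes of the chromatic scale, A = 0, modular-arithmetic style.
NOTES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def transpose(note, steps):
    # Shift a note by some number of half-steps, wrapping around the octave.
    return NOTES[(NOTES.index(note) + steps) % 12]

print(transpose("A", 3))    # C
print(transpose("G#", 1))   # wraps back around to A
print(transpose("A", -4))   # F, one name for what the strip calls a negative note
```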

Hilary Price’s Rhymes With Orange (June 23) reminds us how important it is to pick the correct piece of chalk. The mathematical symbols on the board don’t mean anything. A couple of the odder bits of notation might be meant as shorthand. Often in the rush of working out a problem some of the details will get written as borderline nonsense. The mathematician is probably more interested in getting the insight down. She’ll leave the details for later reflection.

Jason Poland’s Robbie and Bobby (June 23) uses “calculating obscure digits of pi” as computer fun. Calculating digits of pi is hard, at least in decimals, which is all anyone cares about. If you wish to know the 5,673,299,925th decimal digit of pi, you need to work out all 5,673,299,924 digits that go before it. There are formulas to work out a binary (or hexadecimal) digit of pi without working out all the digits that go before. This saves quite some time if you need to explore the nether-realms of pi’s digits.
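The formulas alluded to are the Bailey-Borwein-Plouffe kind. Here is a compact sketch of the standard digit-extraction trick; it uses ordinary floating point, so it is trustworthy only for modestly-sized positions, and it is my own little implementation rather than anything from the strip.

```python
def pi_hex_digit(n):
    # Bailey-Borwein-Plouffe digit extraction: the hexadecimal digit of pi
    # at (zero-indexed) fractional position n, without computing the
    # digits that come before it.
    def partial(j):
        # Fractional part of the sum over k of 16^(n-k) / (8k + j).
        s = 0.0
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n + 1
        while 16.0 ** (n - k) > 1e-17:
            s += 16.0 ** (n - k) / (8 * k + j)
            k += 1
        return s % 1.0

    x = (4 * partial(1) - 2 * partial(4) - partial(5) - partial(6)) % 1.0
    return int(16 * x)

# pi in hexadecimal begins 3.243F6A88...
print("".join("%X" % pi_hex_digit(i) for i in range(8)))  # 243F6A88
```

The three-argument `pow` there does the modular exponentiation that keeps the early terms from blowing up, which is the whole trick.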

The comic strip also uses Stephen Hawking as the icon for most-incredibly-smart-person. It’s the role that Albert Einstein used to have, and still shares. I am curious whether Hawking is going to permanently displace Einstein as the go-to reference for incredible brilliance. His pop culture celebrity might be a transient thing. I suspect it’s going to last, though. Hawking’s life has a tortured-genius edge to it that gives it Romantic appeal, likely to stay popular.

Paul Trap’s Thatababy (June 23) presents confusing brand-new letters and numbers. Letters are obviously human inventions though. They’ve been added to and removed from alphabets for thousands of years. It’s only a few centuries since “i” and “j” became (in English) understood as separate letters. They had been seen as different ways of writing the same letter, or the vowel and consonant forms of the same letter. If enough people found a proposed letter useful it would work its way into the alphabet. Occasionally the ampersand & has come near being a letter. (The ampersand has a fascinating history. Honestly.) And conversely, if we collectively found cause to toss one aside we could remove it from the alphabet. English hasn’t lost any letters since yogh (the Old English letter that looks like a 3 written half a line off) was dropped in favor of “gh”, about five centuries ago, but there’s no reason that it couldn’t shed another.

Numbers are less obviously human inventions. But the numbers we use are, or at least work like they are. Arabic numerals are barely eight centuries old in Western European use. Their introduction was controversial. People feared shopkeepers and moneylenders could easily cheat people unfamiliar with these crazy new symbols. Decimals, instead of fractions, were similarly suspect. Negative numbers took centuries to understand and to accept as numbers. Irrational numbers too. Imaginary numbers also. Indeed, look at the connotations of those names: negative numbers. Irrational numbers. Imaginary numbers. We can add complex numbers to that roster. Each name at least sounds suspicious of the innovation.

There are more kinds of numbers. In the 19th century William Rowan Hamilton developed quaternions. These are 4-tuples of numbers that work kind of like complex numbers. They’re strange creatures, admittedly, not very popular these days. Their greatest strength is in representing rotations in three-dimensional space well. There are also octonions, 8-tuples of numbers. They’re more exotic than quaternions and have fewer good uses. We might find more, in time.
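That rotation trick can be sketched in a few lines. This is a bare-bones illustration of conjugating a point by a unit quaternion, with names of my own choosing rather than any library's:

```python
import math

def quat_mult(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z) tuples.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(point, axis, angle):
    """Rotate a 3-D point about a unit-length axis by angle radians,
    conjugating the point (as a pure quaternion) by a unit quaternion."""
    half = angle / 2.0
    s = math.sin(half)
    q = (math.cos(half), axis[0]*s, axis[1]*s, axis[2]*s)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    _, x, y, z = quat_mult(quat_mult(q, (0.0, *point)), q_conj)
    return (x, y, z)

# A quarter turn about the z-axis carries (1, 0, 0) to (0, 1, 0).
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```

Four numbers encode any rotation this way, with none of the gimbal trouble that besets angle-based schemes, which is why graphics and aerospace code still reach for quaternions.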

Rina Piccolo’s entry in Six Chix this week (June 24) draws a house with extra dimensions. An extra dimension is a great way to add volume, or hypervolume, to a place. A cube that’s 20 feet on a side has a volume of 20³ or 8,000 cubic feet, after all. A four-dimensional hypercube 20 feet on each side has a hypervolume of 160,000 hypercubic feet. This seems like it should be enough for people who don’t collect books.
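The arithmetic there is just exponentiation, easy enough to check; the function name here is mine, picked for illustration:

```python
def hypervolume(side, dimensions):
    # An n-dimensional cube of side s encloses s**n (hyper)cubic units.
    return side ** dimensions

print(hypervolume(20, 3))  # 8000 cubic feet
print(hypervolume(20, 4))  # 160000 hypercubic feet
```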

Morrie Turner’s Wee Pals (June 24, rerun) is just a bit of wordplay. It’s built on the idea kids might not understand the difference between the words “ratio” and “racial”.

Tom Toles’s Randolph Itch, 2 am (June 25, rerun) inspires me to wonder if anybody’s ever sold novelty 4-D glasses. Probably they have, sometime.

Now for the comics that I just can’t really make mathematics out of, but that I like anyway:

Phil Dunlap’s Ink Pen (June 23, rerun) is aimed at the folks still lingering in grad school. Please be advised that most doctoral theses do not, in fact, end in supervillainy.

Darby Conley’s Get Fuzzy (June 25, rerun) tickles me. But Albert Einstein did after all say many things in his life, and not everything was as punchy as that line about God and dice.

## Error

This is one of my A to Z words that everyone knows. An error is some mistake, evidence of our human failings, to be minimized at all costs. That’s … well, it’s an attitude that doesn’t let you use error as a tool.

An error is the difference between what we would like to know and what we do know. Usually, what we would like to know is something hard to work out. Sometimes it requires complicated work. Sometimes it requires an infinite amount of work to get exactly right. Who has the time for that?

This is how we use errors. We look for methods that approximate the thing we want, along with estimates of how much of an error that method makes. Usually, the method involves doing some basic step some large number of times. And usually, if we do the step more times, the estimate of the error we make will be smaller. My essay “Calculating Pi Less Terribly” shows an example of this. If we add together more terms from that Leibniz formula we get a running total that’s closer to the actual value of π.
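The Leibniz example can be made concrete. A small sketch (the function name is mine), showing the error shrink as more terms go in:

```python
import math

def leibniz_pi(terms):
    # Partial sum of the Leibniz formula: pi = 4*(1 - 1/3 + 1/5 - ...).
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4.0 * total

# The alternating-series bound says the error after n terms is
# smaller than the first omitted term, 4/(2n + 1).
for n in (10, 100, 1000):
    print(n, leibniz_pi(n), abs(math.pi - leibniz_pi(n)))
```

The error shrinks roughly like 1/n, which is why the essay calls this a terrible way to calculate π, however fine a way it is to understand error estimates.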

## Reading The Comics, May 22, 2015: Might Be Giving Up Mickey Mouse Edition

We’re drawing toward the end of the school year, on the United States calendar. Comic Strip Master Command may have ordered fewer mathematically-themed comic strips to be run. That’s all right. I have plans. I also may need to stop paying attention to the Disney comic strips, reruns of Mickey Mouse and Donald Duck. I explain why within.

Niklas Eriksson’s Carpe Diem made its first appearance in my pages here on the 16th of May. (It’s been newly introduced to United States comics. I’m sorry, I just can’t read all the syndicated newspaper comic strips in the world for mathematical content. If someone wants to franchise the Reading The Comics idea for a country they like, let’s talk. We can negotiate reasonable terms.) Anyway, it uses the usual string of mathematical symbols to express the idea of a lot of hard mathematical work. The big down-arrow just before the superstar equation E = mc² is authentic enough. Trying to show the chain of thought, or to point out the conclusions one hopes follow from the work done, is a common part of discovering or inventing mathematics.

Mike Peters’s Mother Goose and Grimm (May 17) riffs on that ancient and transparently stupid bit of folklore about an older woman having a better chance of dying in some improbable manner than of marrying successfully. It’s always been obvious nonsense and people passing along the claim uncritically should be ashamed of themselves. I’ll give Peters a pass since the point is to set up a joke, and joke-setup can get away with a lot. Still.