## Reading the Comics, December 28, 2018: More Christmas Break Edition

I apologize for running quite so late. Comic Strip Master Command tried to make it easy for me, by issuing few comic strips that had any mathematical content to speak of. I was just busier than all that, and even now, I can’t say quite how. Well, living, I suppose. But I’ve done plenty of things now and can settle back to the usual, if anyone knows just what that was.

Also I am drawing down on the number of cancelled, in-eternal-reruns comic strips on my daily feed. So that should reduce the number of times I feature a comic strip and realize I’ve described it four times already and haven’t got anything new to say. It’s hard for me, since most of these comics have some charms, or at least pleasant weirdness. But clearly just making a note to myself that I’ve said everything there is to say about Randolph Itch, 2 am, isn’t enough. I’m sorry, Randolph.

Bill Holbrook’s On The Fastrack for the 28th is an example of the cartoonist’s habit of drawing metaphors literally. Dethany does ask the auditor Fi about “accepting his numbers”. In this context the numbers aren’t interesting as numbers. They’re interesting as representations for a narrative. If the numbers are consistent with a believable story? If it’s more believable that they represent a truth than that they’re a hoax? We call that “accepting the numbers”, but what we’re accepting is the story they’re given as evidence for.

Auditing, and any critical thinking about numbers, involves some subtle uses of Bayesian probability. We’re working out the probability that this story is something we should believe. Each piece of evidence makes us think this probability is greater or lesser. With experience and skill one learns of patterns which suggest the story is false. Benford’s Law, for example, is often useful. Honestly-taken samples show tendencies, for example, in what leading digits appear. A discrepancy between what’s expected and what appears, if it can’t be explained, can be a sign of forgery.
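If you'd like to see what Benford's Law actually predicts, here's a little Python sketch. The powers of two are my own choice of sample, a classic one that follows the law closely; honest data sets vary in how well they match.

```python
import math
from collections import Counter

def benford_expected():
    """Benford's Law: P(leading digit = d) = log10(1 + 1/d)."""
    return {d: math.log10(1 + 1/d) for d in range(1, 10)}

def leading_digit_freq(values):
    """Observed frequency of each leading digit among positive integers."""
    digits = [int(str(v)[0]) for v in values]
    counts = Counter(digits)
    return {d: counts.get(d, 0) / len(digits) for d in range(1, 10)}

# Powers of two are a classic sample that obeys Benford's Law.
sample = [2**k for k in range(1, 200)]
expected = benford_expected()
observed = leading_digit_freq(sample)
# expected[1] is about 0.301; observed[1] should land close to that.
```

About thirty percent of honestly-generated figures in many data sets start with a 1, a fact forgers rarely appreciate.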

Johnny Hart’s Back To BC rerun for the 27th is built on estimating the grains of sand on a beach. This is, as fits the setting, a very old query. Archimedes wrote The Sand Reckoner which estimated how many grains of sand could fit in the universe. Estimating the number of grains of sand on a beach, or in a universe, is a fun mathematical problem. Perhaps not a practical one, not directly. The answer is after all “lots”, and there is no way to verify the number.

But it can still be indirectly practical. To work with enormous but finite numbers of things is hard. We do well working with small numbers like ‘six’ and ‘fourteen’ and some of us are even good at around ‘thirty’. We don’t have a good intuition for how a number like 480,000,000,000,000,000 should work. And that’s important; if we try adding six and fourteen and get thirty, we realize there’s something not quite right before we’ve done too much more work. With enormous numbers we can go on not noticing the mistake’s there. We need to find ways to understand these inconvenient numbers using the skills and intuitions we already have. Archimedes had to develop new terminology for numbers to get the Ancient Greek numeral system to handle the problem coherently. Peter’s invention of a gillion is — I’ll go ahead and say — a sly reenactment of that.
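Just as a for-instance, here's the kind of back-of-the-envelope estimate one might make, done in Python. Every figure in it is an assumption I made up for illustration, which is the nature of these estimates.

```python
# A back-of-the-envelope grain count; every number here is an assumption.
beach_length_m = 1000          # a kilometer of beach
beach_width_m = 50
sand_depth_m = 2
grain_diameter_m = 0.5e-3      # half a millimeter across

beach_volume = beach_length_m * beach_width_m * sand_depth_m
grain_volume = grain_diameter_m ** 3   # treat each grain as a tiny cube
packing = 0.6                          # grains don't fill space perfectly

grains = beach_volume * packing / grain_volume
print(f"{grains:.1e}")   # on the order of 10^14 grains
```

The answer is, as promised, “lots”; the value of the exercise is in seeing how an impossibly big number breaks down into small, graspable ones.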

Mark Pett’s Lucky Cow rerun for the 27th is, I intend, this enjoyable but cancelled strip’s last appearance here. It’s a Rubik’s Cube joke. It’s one about using a solution outside the rules of the problem. And as marginal as this one is, I couldn’t quite bring myself to write a paragraph about the Todd the Dinosaur strip of the 29th, which also features the Rubik’s Cube.

Ryan Pagelow’s Buni for the 28th I’ll list as the anthropomorphic-numerals joke for the week, since it did turn out to be that slow a week here. I’m a bit curious what the now-9 is figuring to do next year. I suppose that one’s easy; it’s going to be going from 3 to 4 in a couple years that’s a real problem.

The various Reading the Comics posts should all be at this link. I like to think I’ll be back to having a post this coming Sunday, and maybe a second one next week if there are enough comic strips near enough to on-topic. Thanks for reading.

## Yes, I Am Late With The Comics Posts Today

I apologize that, even though the past week was light on mathematically-themed comic strips, I didn’t have them written up by my usual Sunday posting time. It was just too busy a week, and I am still decompressing from the A to Z sequence. I’ll have them as soon as I’m able.

In the meanwhile may I share a couple of things I thought worth reading, and that have been waiting in my notes folder for the chance to highlight?

This Fermat’s Library tweet is one of those entertaining consequences of probability, multiplied by the large number of people in the world. If you flip twenty coins in a row there’s a one in 1,048,576 chance that all twenty will come up heads, and the same chance that all twenty will come up tails. So about two in every million times you flip twenty coins, they all come up the same way. If the seven billion people in the world have flipped at least twenty coins in their lives, then something like seven thousand of them had the coins turn up heads every single one of those twenty times. That all seven billion people have tossed a coin seems like the biggest point to attack this trivia on. A lot of people are too young to have flipped coins, or don’t have access to them. But there’s still going to be thousands who did start their coin-flipping lives with a remarkable streak.
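The arithmetic behind that trivia is short enough to write out, if you like seeing it done:

```python
# Chance of twenty heads in a row, and how many all-heads streaks to
# expect if seven billion people have each flipped twenty coins.
p_all_heads = (1 / 2) ** 20
people = 7_000_000_000

expected_streaks = people * p_all_heads
print(p_all_heads)        # about 9.5e-07, one in 1,048,576
print(expected_streaks)   # roughly 6,700 people
```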

Also back in October, so you see how long things have been circulating around here, John D Cook published an article about the World Series. Or any series contest. At least ones where the chance of each side winning doesn’t depend on the previous games in the series. If one side has a probability ‘p’ of winning any particular game, what’s the chance they’ll win a best-four-of-seven? What makes this a more challenging mathematics problem is that a best-of-seven series stops after one side’s won four games. So you can’t simply say it’s the chance of four wins. You need to account for four wins out of five games, out of six games, and out of seven games. Fortunately there’s a lot of old mathematics that explores just this.
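Cook's article works the problem out properly; here's a sketch of the calculation in Python, done my way. The trick is counting the ways to scatter the losses among the games played before the clinching fourth win.

```python
from math import comb

def series_win_probability(p, wins_needed=4):
    """Chance a team with per-game win probability p takes a best-of-seven.

    The series ends at the fourth win, so count the ways to scatter
    k = 0, 1, 2, or 3 losses among the games before the clincher.
    """
    q = 1 - p
    return sum(comb(wins_needed - 1 + k, k) * p**wins_needed * q**k
               for k in range(wins_needed))

print(series_win_probability(0.5))   # 0.5, as symmetry demands
print(series_win_probability(0.6))   # about 0.71
```

Notice how a modest edge in a single game grows in the series: a 60 percent team wins a best-of-seven roughly 71 percent of the time.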

The economist Bradford DeLong noticed the first write-up of the Prisoner’s Dilemma. This is one of the first bits of game theory that anyone learns, and it’s an important bit. It establishes that the logic of cooperative games — any project where people have to work together — can have a terrible outcome. What makes the most sense for the individuals makes the least sense for the group. That a good outcome for everyone depends on trust, whether established through history or through constraints everyone’s agreed to respect.
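Here's a sketch of the dilemma's logic in Python. The particular prison sentences are the usual textbook numbers, my choice and not anything from DeLong's write-up:

```python
# Payoffs (years of prison, negated so bigger is better) for the classic
# Prisoner's Dilemma; the numbers are standard textbook choices.
# Each entry: payoffs[(my_move, their_move)] = my payoff.
payoffs = {
    ("cooperate", "cooperate"): -1,   # both stay silent: light sentences
    ("cooperate", "defect"):    -10,  # I stay silent, they talk: I take the fall
    ("defect",    "cooperate"):  0,   # I talk, they stay silent: I go free
    ("defect",    "defect"):    -5,   # both talk: heavy sentences for both
}

def best_response(their_move):
    """Whichever of my moves pays better, given the other player's move."""
    return max(("cooperate", "defect"),
               key=lambda mine: payoffs[(mine, their_move)])

# Defecting is better for me no matter what the other player does ...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ... yet mutual defection (-5 each) is worse than mutual cooperation (-1 each).
```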

And finally here’s part of a series about quick little divisibility tests. This is that trick where you tell what a number’s divisible by through adding or subtracting its (base ten) digits. Everyone who’d be reading this post knows about testing for divisibility by three or nine. Here are some rules for also testing divisibility by eleven (which you might know), by seven (less likely), and by thirteen. With a bit of practice, and awareness of some exceptional numbers, you can tell by sight whether a number smaller than a thousand is prime. Add a bit of flourish to your doing this and you can establish a reputation as a magical mathematician.
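Here's one such trick written out in Python, the one I happen to know; the linked series may well present different rules. It rests on 1001 being 7 times 11 times 13, so an alternating sum of three-digit groups preserves the remainder for all three divisors at once.

```python
def alternating_group_sum(n):
    """Alternating sum of three-digit groups of n, taken from the right.

    Since 1001 = 7 * 11 * 13 and 1000 is -1 (mod 1001), this sum leaves
    the same remainder as n when divided by 7, 11, or 13.
    """
    s, sign = 0, 1
    n = abs(n)
    while n:
        s += sign * (n % 1000)
        n //= 1000
        sign = -sign
    return s

# 1,441,440 = 1,001 * 1,440, so all three divisibility tests should pass:
n = 1_441_440
for d in (7, 11, 13):
    assert alternating_group_sum(n) % d == 0 and n % d == 0
```

In practice you'd reduce a huge number to its alternating group sum by hand, then test that small number for 7, 11, or 13 directly.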

## Reading the Comics, December 22, 2018: Christmas Break Edition

There were just enough mathematically-themed comic strips last week for me to make two posts out of it. This current week? Is looking much slower, at least as of Wednesday night. But that’s a problem for me to worry about on Sunday.

Eric the Circle for the 20th, this one by Griffinetsabine, mentions a couple of shapes. That’s enough for me, at least on a slow comics week. There is a fictional tradition of X marking the spot. It can be particularly credited to Robert Louis Stevenson’s Treasure Island. Any symbol could be used to note a special place on maps, certainly. Many maps are loaded with a host of different symbols to convey different information. Circles and crosses have the advantage of being easy to draw and difficult to confuse for one another. Squares, triangles, and stars are good too.

Bill Whitehead’s Free Range for the 22nd spoofs Wheel of Fortune with “theoretical mathematics”. Making a game out of filling in parts of a mathematical expression isn’t ridiculous, although it is rather niche. I don’t see how the revealed string of mathematical expressions builds to a coherent piece, but perhaps a few further pieces would help.

The parts shown are all legitimate enough expressions. Well, like $a^2 + b^2 = (a + b)$ is only true for some specific numbers ‘a’ and ‘b’, but you can find solutions. $-b \pm \sqrt{b^2 - x^2y^2}$ is just an expression, not picking out any particular values of ‘b’ or ‘x’ or ‘y’ as interesting. But in conjunction with $a^2 + b^2 = (a + b)$ or other expressions there might be something useful. On the second row is a graph, highlighting a region underneath a curve (and above the x-axis) between two vertical lines. This is often the sort of thing looked at in calculus. It also turns up in probability, as the area under a curve like this can show the chance that an experiment will turn up something in a range of values. And $\frac{dy}{dx} = x^4 - \left(1 - x^2\right)^4$ is a straightforward differential equation. Its solution is a family of similar-looking polynomials.
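For what it's worth, here's a check of that last claim in Python, using one antiderivative I worked out by expanding the right-hand side and integrating term by term. The full family of solutions adds an arbitrary constant to this one.

```python
def f(x):
    # The differential equation's right-hand side: dy/dx = x^4 - (1 - x^2)^4.
    return x**4 - (1 - x**2)**4

def F(x):
    # One antiderivative, found by expanding (1 - x^2)^4, which turns the
    # right-hand side into -1 + 4x^2 - 5x^4 + 4x^6 - x^8.
    return -x + (4/3)*x**3 - x**5 + (4/7)*x**7 - x**9/9

# Central-difference check that F' matches f at a few sample points.
h = 1e-6
for x in (-0.7, 0.0, 0.3, 1.1):
    numeric = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric - f(x)) < 1e-6
```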

Mark Pett’s Lucky Cow for the 22nd has run before. I’ve even made it the title strip for a Reading the Comics post back in 2014. So it’s probably time to drop this from my regular Reading the Comics reporting. The physicist comes running in with the left half of the time-dependent Schrödinger Equation. This is all over quantum mechanics. In this form, quantum mechanics contains information about how a system behaves by putting it into a function named $\psi$. Its value depends on space (‘x’). It can also depend on time (‘t’). The physicist pretends to not be able to complete this. Neil arranges to give the answer.

Schrödinger’s Equation looks very much like a diffusion problem. Normal diffusion problems don’t have that $\imath$ which appears in the part of Neil’s answer. But this form of equation turns up a lot. If you have something that acts like a fluid — and heat counts — then a diffusion problem is likely important in understanding it.
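Here's a minimal sketch of that diffusion idea in Python: an explicit finite-difference step for the heat equation. The grid spacing, time step, and diffusion constant are arbitrary choices of mine, kept inside the scheme's stability limit.

```python
# A minimal explicit finite-difference scheme for the heat equation
# du/dt = k * d2u/dx2, the diffusion problem the Schrodinger Equation
# resembles. All the constants here are arbitrary illustrative choices.
k = 1.0
dx = 0.1
dt = 0.004          # kept below dx*dx/(2*k) so the scheme stays stable

def diffuse(u, steps):
    """Advance the profile u in time, holding both ends fixed."""
    u = list(u)
    for _ in range(steps):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = u[i] + k * dt / dx**2 * (u[i-1] - 2*u[i] + u[i+1])
        u = new
    return u

# A hot spike in the middle of a cold bar spreads out and flattens.
u0 = [0.0]*10 + [1.0] + [0.0]*10
u1 = diffuse(u0, 200)
```

The update rule says each point drifts toward the average of its neighbors, which is about as plain a statement of diffusion as one can make.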

And, yes, the setup reminds me of a mathematical joke that I only encounter in lists of mathematics jokes. That one I told the last time this strip came up in the rotation. You might chuckle, or at least be convinced that it is a correctly formed joke.

All of the Reading the Comics posts should be at this link. And I have finished the alphabet in my Fall 2018 Mathematics A To Z glossary. There should be a few postscript thoughts to come this week, though.

## What I Wrote About in My 2018 Mathematics A To Z

I have reached the end! Thirteen weeks at two essays per week to describe a neat sampling of mathematics. I hope to write a few words about what I learned by doing all this. In the meanwhile, though, I want to gather together the list of all the essays I did put into this project.

## Reading the Comics, December 19, 2018: Andertoons Is Back Edition

I had not wanted to mention, for fear of setting off a panic. But Mark Anderson’s Andertoons, which I think of as being in every Reading the Comics post, hasn’t been around lately. If I’m not missing something, it hasn’t made an appearance in three months now. I don’t know why, and I’ve been trying not to look too worried by it. Mostly I’ve been forgetting to mention the strange absence. This even though I would think any given Tuesday or Friday that I should talk about the strip not having anything for me to write about. Fretting about it would make a great running theme. But I have never spotted a running theme before it’s finished. In any event the good news is that the long drought has ended, and Andertoons reappears this week. Yes, I’m hoping that it won’t be going too long between appearances this time.

Jef Mallett’s Frazz for the 16th talks about probabilities. This in the context of assessing risks. People are really bad at estimating probabilities. We’re notoriously worse at assessing risks, especially when it’s a matter of balancing a present cost like “fifteen minutes waiting while the pharmacy figures out whether insurance will pay for the flu shot” versus a nebulous benefit like “lessened chance of getting influenza, or at least having a less severe influenza”. And it’s asymmetric, too. We view improbable but potentially enormous losses differently from the way we view improbable but potentially enormous gains. And it’s hard to make the rationally-correct choice reliably, not when there are so many choices of this kind every day.

Tak Bui’s PC and Pixel for the 16th features a wall full of mathematical symbols, used to represent deep thought about a topic. The symbols are gibberish, yes. I’m not sure that an actual “escape probability” could be done in a legible way, though. Or even what precisely Professor Phillip might be calculating. I imagine it would be an estimate of the various ways he might try to escape, and what things might affect that. This might be for the purpose of figuring out what he might do to maximize his chances of a successful escape. Although I wouldn’t put it past the professor to just be quite curious what the odds are. There’s a thrill in having a problem solved, even if you don’t use the answer for anything.

Ruben Bolling’s Super-Fun-Pak Comix for the 18th has a trivia-panel-spoof dubbed Amazing Yet Tautological. One could make an argument that most mathematics trivia fits into this category. At least anything about something that’s been proven. Anyway, whether this is a tautological strip depends on what the strip means by “average” in the phrase “average serving”. There’s about four jillion things dubbed “average” and each of them has a context in which they make sense. The thing intended here, and the thing meant if nobody says anything otherwise, is the “arithmetic mean”. That’s what you get from adding up everything in a sample (here, the amount of egg salad each person in America eats per year) and dividing it by the size of the sample (the number of people in America that year). Another “average” which would make sense, but would break this strip, would be the median. That would be the amount of egg salad that half of all Americans eat more than, and half eat less than. But whether every American could have that big a serving really depends on what that median is. The “mode”, the most common serving, would also be a reasonable “average” to expect someone to talk about.
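To see the three kinds of “average” disagree, here's a Python sketch with a made-up sample of my own devising, nothing from the strip: most people eating little egg salad, one enthusiast eating a great deal.

```python
from statistics import mean, median, mode

# A made-up sample of annual egg salad servings for ten people:
# most eat little, one enthusiast eats a great deal.
servings = [0, 0, 0, 1, 1, 2, 2, 3, 3, 38]

print(mean(servings))    # 5.0 -- the arithmetic mean, dragged up by the enthusiast
print(median(servings))  # 1.5 -- half the sample eats more, half eats less
print(mode(servings))    # 0   -- the single most common serving
```

Tell someone the “average” serving is five, one and a half, or zero, and you've said three true and wildly different things about the same ten people.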

Mark Anderson’s Andertoons for the 19th is that strip’s much-awaited return to my column here. It features solid geometry, which is both an important part of geometry and also a part that doesn’t get nearly as much attention as plane geometry. It’s reductive to suppose the problem is that it’s harder to draw solids than planar figures. I suspect that’s a fair part of the problem, though. Mathematicians don’t get much art training, not anymore. And while geometry is supposed to be able to rely on pure reasoning, a good picture still helps. And a bad picture will lead us into trouble.

All of the Reading the Comics posts should be at this link. And I have finished the alphabet in my Fall 2018 Mathematics A To Z glossary. There should be a few postscript thoughts to come this week, though.

## My 2018 Mathematics A To Z: Zugzwang

My final glossary term for this year’s A To Z sequence was suggested by aajohannas, who’d also suggested “randomness” and “tiling”. I don’t know of any blogs or other projects they’re behind, but if I do hear, I’ll pass them on.

# Zugzwang.

Some areas of mathematics struggle against the question, “So what is this useful for?” As though usefulness were a particular merit — or demerit — for a field of human study. Most mathematics fields discover some use, though, even if it takes centuries. Others are born useful. Probability, for example. Statistics. Know what the fields are and you know why they’re valuable.

Game theory is another of these. The subject, as often happens, we can trace back centuries. Usually as the study of some particular game. Occasionally in the study of some political science problem. But game theory developed a particular identity in the early 20th century. Some of this from set theory experts. Some from probability experts. Some from John von Neumann, because it was the 20th century and all that. Calling it “game theory” explains why anyone might like to study it. Who doesn’t like playing games? Who, studying a game, doesn’t want to play it better?

But why it might be interesting is different from why it might be important. Think of what a game is. It is a string of choices made by one or more parties. The point of the choices is to achieve some goal. Put that way you realize: this is everything. All life is making choices, all in the pursuit of some goal, even if that goal is just “not end up any worse off”. I don’t know that the earliest researchers in game theory as a field realized what a powerful subject they had touched on. But by the 1950s they were doing serious work in strategic planning, and by 1964 were even giving us Stanley Kubrick movies.

This is taking me away from my glossary term. The field of games is enormous. If we narrow the field some we can discuss specific kinds of games. And say more involved things about these games. So first we’ll limit things by thinking only of sequential games. These are ones where there are a set number of players, and they take turns making choices. I’m not sure whether the field expects the order of play to be the same every time. My understanding is that much of the focus is on two-player games. What’s important is that at any one step there’s only one party making a choice.

The other thing narrowing the field is to think of information. There are many things that can affect the state of the game. Some of them might be obvious, like where the pieces are on the game board. Or how much money a player has. We’re used to that. But there can be hidden information. A player might conceal some game money so as to make other players underestimate her resources. Many card games have one or more cards concealed from the other players. There can be information unknown to any party. No one can make a useful prediction what the next throw of the game dice will be. Or what the next event card will be.

But there are games where there’s none of this ambiguity. These are called games with “perfect information”. In them all the players know the past moves every player has made. Or at least should know them. Players are allowed to forget what they ought to know.

There’s a separate but similar-sounding idea called “complete information”. In a game with complete information, players know everything that affects the gameplay. At least, probably, apart from what their opponents intend to do. This might sound like an impossibly high standard, at first. All games with shuffled decks of cards and with dice to roll are out. There’s no concealing or lying about the state of affairs.

Set complete-information aside; we don’t need it here. Think only of perfect-information games. What are they? Some ancient games, certainly. Tic-tac-toe, for example. Some more modern versions, like Connect Four and its variations. Some that are actually deep, like checkers and chess and go. Some that are, arguably, more puzzles than games, as in sudoku. Some that hardly seem like games, like several people agreeing how to cut a cake fairly. Some that seem like tests to prove people are fundamentally stupid, like when you auction off a dollar. (The rules are set so players can easily end up paying more than a dollar.) But that’s enough for me, at least. You can see there are games of clear, tangible interest here.

The last restriction: think only of two-player games. Or at least two parties. Any of these two-party sequential games with perfect information are a part of “combinatorial game theory”. It doesn’t usually allow for incomplete-information games. But at least the MathWorld glossary doesn’t demand they be ruled out. So I will defer to this authority. I’m not sure how the name “combinatorial” got attached to this kind of game. My guess is that it seems like you should be able to list all the possible combinations of legal moves. That number may be enormous, as chess and go players are always going on about. But you could imagine a vast book which lists every possible game. If your friend ever challenged you to a game of chess the two of you could simply agree, oh, you’ll play game number 2,038,940,949,172 and then look up to see who won. Quite the time-saver.

Most games don’t have such a book, though. Players have to act on what they understand of the current state, and what they think the other player will do. This is where we get strategies from. Not just what we plan to do, but what we imagine the other party plans to do. When working out a strategy we often expect the other party to play perfectly. That is, to make no mistakes, to not do anything that worsens their position. Or that reduces their chance of winning.

… And yes, arguably, the word “chance” doesn’t belong there. These are games where the rules are known, every past move is known, every future move is in principle computable. And if we suppose everyone is making the best possible move then we can imagine forecasting the whole future of the game. One player has a “chance” of winning in the same way Christmas day of the year 2038 has a “chance” of being on a Tuesday. That is, the probability is just an expression of our ignorance, that we don’t happen to be able to look it up.
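And the look-up is easy enough, when we bother: Python's standard library will tell us the weekday of any date in its calendar. (It turns out to be a Saturday, for what that's worth.)

```python
from datetime import date

# The weekday of Christmas 2038 is not random at all; we can look it up.
christmas = date(2038, 12, 25)
print(christmas.strftime("%A"))   # Saturday
```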

But what choice do we have? I’ve never seen a reference that lists all the possible games of tic-tac-toe. And that’s about the simplest combinatorial-game-theory game anyone might actually play. What’s possible is to look at the current state of the game. And evaluate which player seems to be closer to her goal. And then look at all the possible moves.

There are three things a move can do. It can put the party closer to the goal. It can put the party farther from the goal. Or it can do neither. On her turn the other party might do something that moves you farther from your goal, moves you closer to your goal, or doesn’t affect your status at all. It seems like this makes strategy obvious. On every step take the available move that takes one closest to the goal. This is known as a “greedy” strategy. As the name suggests it isn’t automatically bad. If you expect the game to be a short one, greed might be the best approach. The catch is that moves that seem less good — even ones that seem to hurt you initially — might set up other, even better moves. So strategy requires some thinking beyond the current step. Properly, it requires thinking through to the end of the game. Or at least until the end of the game seems obvious.
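Thinking through to the end of the game is what the minimax procedure does, and tic-tac-toe is small enough to solve outright. Here's a sketch in Python; the board encoding, with dots for empty squares, is my own choice.

```python
from functools import lru_cache

# The eight winning lines of a tic-tac-toe board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value with `player` to move: +1 if X wins, -1 if O wins, 0 draw.

    Minimax: try every legal move and assume both sides play perfectly,
    X maximizing the value and O minimizing it.
    """
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if '.' not in board:
        return 0
    other = 'O' if player == 'X' else 'X'
    results = [value(board[:i] + player + board[i+1:], other)
               for i, cell in enumerate(board) if cell == '.']
    return max(results) if player == 'X' else min(results)

print(value('.' * 9, 'X'))   # 0: perfect play from both sides is a draw
```

The memoization is what makes this feasible; without it the same positions get re-evaluated endlessly. For chess or go the state space is far too large for this, which is why those games still need strategy.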

We should like a strategy that leaves us no choice but to win. Next-best would be one that leaves the game undecided, since something might happen like the other player needing to catch a bus and so resigning. This is how I got my solitary win in the two months I spent in the college chess club. Worst would be the games that leave us no choice but to lose.

It can be that there are no good moves. That is, that every move available makes it a little less likely that we win. Sometimes a game offers the chance to pass, preserving the state of the game but giving the other party the turn. Then maybe the other party will do something that creates a better opportunity for us. But if we are allowed to pass, there’s a good chance the game lets the other party pass, too, and we end up in the same fix. And it may be the rules of the game don’t allow passing anyway. One must move.

The phenomenon of having to make a move when it’s impossible to make a good move has prominence in chess. I don’t have the chess knowledge to say how common the situation is. But it seems to be a situation people who study chess problems love. I suppose it appeals to a love of lost causes and the hope that you can be brilliant enough to see what everyone else has overlooked. German chess literature gave it a name 160 years ago, “zugzwang”, “compulsion to move”. Somehow I never encountered the term when I was briefly a college chess player. Perhaps because I was never in zugzwang and was just too incompetent a player to find my good moves. I first encountered the term in Michael Chabon’s The Yiddish Policeman’s Union. The protagonist picked up on the term as he investigated the murder of a chess player and then felt himself in one.

Combinatorial game theorists have picked up the word, and sharpened its meaning. If I understand correctly chess players allow the term to be used for any case where a player hurts her position by moving at all. Game theorists make it more dire. This may reflect their knowledge that an optimal strategy might require taking some dismal steps along the way. The game theorist formally grants the term only to the situation where the compulsion to move changes what should be a win into a loss. This seems terrible, but then, we’ve all done this in play. We all feel terrible about it.

I’d like here to give examples. But in searching the web I can find only either courses in game theory, which are a bit too much for even me to summarize, or chess problems, which I’m not up to understanding. It seems hard to set out an example: I need to not just set out the game, but show that what had been a win is now, by any available move, turned into a loss. Chess is looser. It even allows, I discover, a double zugzwang, where both players are at a disadvantage if they have to move.

It’s a quite relatable problem. You see why game theory has this reputation as mathematics that touches all life.

And with that … I am done! All of the Fall 2018 Mathematics A To Z posts should be at this link. Next week I’ll post my big list of all the letters, though. And, as has become tradition, a post about what I learned by doing this project. And sometime before then I should have at least one more Reading the Comics post. Thanks kindly for reading and we’ll see when in 2019 I feel up to doing another of these.

## My 2018 Mathematics A To Z: Yamada Polynomial

I had another free choice. I thought I’d go back to one of the topics I knew and loved in grad school even though I didn’t have the time to properly study it then. It turned out I had forgotten some important points and spent a night crash-relearning knot theory. This isn’t a bad thing necessarily.

This is a thing which comes from graphs. Not the graphs you ever drew in algebra class. Graphs as in graph theory. These figures made of spots called vertices. Pairs of vertices are connected by edges. There’s many interesting things to study about these.

One path to take in understanding graphs is polynomials. Of course I would bring things back to polynomials. But there’s good reasons. These reasons come to graph theory by way of knot theory. That’s an interesting development since we usually learn graph theory before knot theory. But knot theory has the idea of representing these complicated shapes as polynomials.

There are a bunch of different polynomials for any given graph. The oldest kind, the Alexander Polynomial, J W Alexander developed in the 1920s. And that was about it until the 1980s when suddenly everybody was coming up with good new polynomials. The definitions are different. They give polynomials that look different. Some are able to distinguish between a knot and the knot that’s its reflection across a mirror. Some, like the Alexander, aren’t. But they’re common in some important ways. One is that they might not actually be, you know, polynomials. I mean, they’ll be the sum of numbers — whole numbers, even — times a variable raised to a power. The variable might be t, might be x. Might be something else, but it doesn’t matter. It’s a pure dummy variable. But the variable might be raised to a negative power, which isn’t really a polynomial. It might even be raised to, oh, one-half or three-halves, or minus nine-halves, or something like that. We can try saying this is “a polynomial in t-to-the-halves”. Mostly it’s because we don’t have a better name for it.

And going from a particular knot to a polynomial follows a pretty common procedure. At least it can, when you’re learning knot theory and feel a bit overwhelmed trying to prove stuff about “knot invariants” and “homologies” and all. Having a specific example can be such a comfort. You can work this out by an iterative process. Take a specific drawing of your knot. There’s places where the strands of the knot cross over one another. For each of those crossings you ponder some alternate cases where the strands cross over in a different way. And then you add together some coefficient times the polynomial of this new, different knot. The coefficient you get by the rules of whatever polynomial you’re making. The new, different knots are, usually, no more complicated than what you started with. They’re often simpler knots. This is what saves you from an eternity of work. You’re breaking the knot down into more but simpler knots. Just the fact of doing that can be satisfying enough. Eventually you get to something really simple, like a circle, and declare that’s some basic polynomial. Then there’s a last bit of adding up coefficients and powers and all that. Tedious but not hard.

Knots are made from a continuous loop of … we’ll just call it thread. It can fold over itself many times. It has to, really, or it hasn’t got a chance of being more interesting than a circle. A graph is different. That there are vertices seems to change things. Less than you’d think, though. The thread of a knot can cross over and under itself. Edges of a graph can cross over and under other edges. This isn’t too different. We can also imagine replacing a spot where two edges cross over and under each other with an intersection and a new vertex.

So we get to the Yamada polynomial by treating a graph an awful lot like we might treat a knot. Take the graph and split it up at each overlap. At each overlap we have something that looks, at least locally, kind of like an X. An upper left, upper right, lower left, and lower right intersection. The lower left connects to the upper right, and the upper left connects to the lower right. But these two edges don’t actually touch; one passes over the other. (By convention, the lower left going to the upper right is on top.)

There’s three alternate graphs. One has the upper left connected to the lower left, and the upper right connected to the lower right. This looks like replacing the X with a )( loop. The second alternate has the upper left connected to the upper right, and the lower left connected to the lower right. This looks like … well, that )( but rotated ninety degrees. I can’t do that without actually including a picture. The third alternate puts a vertex in the X. So now the upper left, upper right, lower left, and lower right all connect to the new vertex in the center.

Probably you’d agree that replacing the original X with a )( pattern, or its rotation, doesn’t make the graph any more complicated. And it might make the graph simpler. But adding that new vertex looks like trouble. It looks like it’s getting more complicated. We might get stuck in an infinite regression of more-complicated polynomials.

What saves us is the coefficient we’re multiplying the polynomials for these new graphs by. It’s called the “chromatic coefficient” and it reflects how many different colors you need to color in this graph. An edge needs to connect two different colors. And — what happens if an edge connects a vertex to itself? That is, the edge loops around back to where it started? That’s got a chromatic number of zero and the moment we get a single one of these loops anywhere in our graph we can stop calculating. We’re done with that branch of the calculations. This is what saves us.

There’s a catch. It’s a catch that knot polynomials have, too. This scheme writes a polynomial not just for a particular graph but for a particular way of rendering this graph. There are always other ways to draw it. If nothing else you can always twirl an edge over itself, into a loop like you get when Christmas tree lights start tangling themselves up. But you can move the vertices to different places. You can have an edge go outside the rest of the figure instead of inside, that sort of thing. Starting from a different rendition of the shape gets you to a different polynomial.

Superficially different, anyway. What you get from two different renditions of the same graph are polynomials different by your dummy variable raised to a whole number. Also maybe a plus-or-minus sign. You can see a difference between, say, $t^{-1} - 2 + 3t$ (to make up an example) and $t - 2t^2 + 3t^3$. But you can see that second polynomial is just $t^2\left(t^{-1} - 2 + 3t\right)$. It’s some confounding factor times something that is distinctive to the graph.
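That comparison is easy to automate. Here is a minimal sketch, with Laurent polynomials stored as dictionaries mapping integer exponents to coefficients; the helper names are my own invention, not standard knot-theory tooling.

```python
# A sketch of the comparison above. Laurent polynomials are stored as
# dictionaries mapping integer exponents to coefficients; the helper
# names here are my own invention, not standard knot-theory tooling.

def normalize(poly):
    """Shift exponents so the lowest is zero, with a positive coefficient there."""
    low = min(poly)                     # smallest exponent present
    sign = 1 if poly[low] > 0 else -1   # absorb an overall minus sign
    return {e - low: sign * c for e, c in poly.items()}

def same_up_to_factor(p, q):
    """True if p and q differ only by plus-or-minus a power of the variable."""
    return normalize(p) == normalize(q)

# t^-1 - 2 + 3t  versus  t - 2t^2 + 3t^3: the same up to a factor of t^2.
p = {-1: 1, 0: -2, 1: 3}
q = {1: 1, 2: -2, 3: 3}
print(same_up_to_factor(p, q))  # True
```

Dividing out the lowest-degree term and an overall sign is exactly the “confounding factor” business: what survives the normalization is the distinctive part.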

And that distinctive part, the thing that doesn’t change if you draw the graph differently? That’s the Yamada polynomial, at last. It’s a way to represent this collection of vertices and edges using only coefficients and exponents.

I would like to give an impressive roster of uses for these polynomials here. I’m afraid I have to let you down. There is the obvious use: if you suspect two graphs are really the same, despite how different they look, here’s a test. Calculate their Yamada polynomials and if they’re different, you know the graphs were different. It can be hard to tell. Get anything with more than, say, eight vertices and 24 edges in it and you’re not going to figure that out by sight.

I encountered the Yamada polynomial specifically as part of a textbook chapter about chemistry. It’s easy to imagine there should be great links between knots and graphs and the way that atoms bundle together into molecules. The shape of their structures describes what they will do. But I am not enough of a chemist to say how this description helps chemists understand molecules. It’s possible that it doesn’t: Yamada’s paper introducing the polynomial was published in 1989. My knot theory textbook might have brought it up because it looked exciting. There are trends and fashions in mathematical thought too. I don’t know what several more decades of work have done to the polynomial’s reputation. I’m glad to hear from people who know better.

There’s one more term in the Fall 2018 Mathematics A To Z to come. Will I get the article about it written before Friday? We’ll know on Saturday! At least I don’t have more Reading the Comics posts to write before Sunday.

## Reading the Comics, December 15, 2018: Early Holiday Edition

So then this happened: Comic Strip Master Command didn’t have much they wanted me to write about this week. I made out three strips as being relevant enough to discuss at all. And even they don’t have topics that I felt I could really dig into. Coincidence, surely, although I like to think they were trying to help me get ahead of deadline on my A To Z essays for this last week of the run. It’s a noble thought, but doomed. I haven’t been more than one essay ahead of deadline the last three months. I know in past years I’ve gotten three or even four essays ahead of time and I don’t know why it hasn’t worked this time. I am going ahead and blaming the fact that these essays have been way longer than previous years’. So anyway, I thank Comic Strip Master Command for trying to make my Monday and my Thursday this week be less packed. It won’t help.

Darrin Bell and Theron Heir’s Rudy Park for the 10th uses mathematics as shorthand for a deep, thought-out theory of something. In this case, Randy’s theory of how to interest women. (He has rather a large number of romantic events around him.) It’s easy to suppose that people can be modeled mathematically. Even a crude model, one supposing that people have things they like and dislike, can give us interesting results. This gets into psychology and sociology though. And probably requires computer modeling to get slightly useful results.

Randy’s blackboard has a good number of legitimate equations on it. They’re maybe not so useful to his problem of modeling people, though. The lower left corner, for example, holds three of Maxwell’s Equations, describing electromagnetism. I’m not sure about all of these, in part because I think some might be transcribed incorrectly. The second equation in the upper left, for example, looks like it’s getting at the curl of a conserved force field being zero, but it’s idiosyncratic to write that with a ‘d’ to start with. The symbols all over the right with both subscripts and superscripts look to me like tensor work. This turns up in electromagnetism, certainly. Tensors turn up anytime something, such as electrical conductivity, is different in different directions. But I’ve never worked deeply in those fields so all I can confidently say is that they look like they parse.

Lincoln Peirce’s Big Nate for the 14th is part of a bit where Nate’s trying to write a gruesome detective mystery for kids. I’m not sure that’s a ridiculous idea, at least if the gore could be done at a level that wouldn’t be too visceral. Anyway, Nate has here got the idea of merging some educational value into the whole affair. It’s not presented as a story problem, just as characters explaining stuff to one another. There probably would be some room for an actual problem where Barky and Winky wanted to know something and had to work out how to find it from what they knew, though.

Mel Henze’s Gentle Creatures for the 14th uses a story problem to stand in for science fictional calculations. The strip’s in reruns and I’ve included it here at least four times, I discover, so that’s probably enough for the comic until it gets out of reruns.

And since it was a low-volume week, let me mention strips I didn’t decide fit. Ray Kassinger asked about Tim Rickard’s Brewster Rockit for the 12th. Might it be a play on Schrödinger’s Cat, the famous thought-experiment about how to understand the mathematics of quantum mechanics? It’s possible, but I think it’s more likely just that cats like sitting in boxes. Thaves’s Frank and Ernest for the 13th looks like it should be an anthropomorphic numerals joke. But it’s playing on the idiom about three being a crowd, and the whole of the mathematical content is that three is a number. John Zakour and Scott Roberts’s Maria’s Day for the 15th mentions mathematics. Particularly, Maria wishing they weren’t studying it. It’s a cameo appearance; it could be any subject whose value a student doesn’t see. That’s all I can make of it.

This and my other Reading the Comics posts should all be available at this link. And please check back in Tuesday to see whether I make deadline for the letter ‘Y’ in my Fall 2018 Mathematics A To Z glossary.

## Reading the Comics, December 8, 2018: Sam and Son Edition

That there were twelve comic strips making my cut as mention-worthy this week should have let me do three essays of four comics each. But the desire to include all the comics from the same day in one essay leaves me one short here. So be it. Three of the four cartoonists featured here have a name of Sansom or Samson, so, that’s an edition title for you. No, Sam and Silo do not appear here.

Art Sansom and Chip Sansom’s Born Loser for the 6th uses arithmetic as a test of deference. Will someone deny a true thing in order to demonstrate loyalty? Arithmetic is full of things that are inarguably true. If we take the ordinary meanings of one, plus, equals, and three, it can’t be that one plus one equals three. Most fields of human endeavor are vulnerable to personal taste, or can get lost in definitions and technicalities. Or the advance of knowledge: my love and I were talking last night about how we remembered hearing, as kids, the trivia that panda bears were not really bears, but a kind of raccoon. (Genetic evidence has us now put giant pandas with the bears, and red pandas as part of the same superfamily as raccoons, but barely.) Or even be subject to sarcasm. Arithmetic has a harder time of that. Mathematical ideas do evolve in time, certainly. But basic arithmetic is pretty stable. Logic is also a reliable source of things we can be confident are true. But arithmetic is more familiar than most logical propositions.

Samson’s Dark Side of the Horse for the 8th is the Roman Numerals joke for the week. It’s also a bit of a wordplay joke, although the wordplay is musical rather than mathematical. Me, I still haven’t heard a clear reason why ‘MIC’ wouldn’t be a legitimate Roman numeral representation of 1099. I’m not sure whether ‘MIC’ would step on or augment the final joke, though.

Pab Sungenis’s New Adventures of Queen Victoria for the 8th has a commedia dell’arte-based structure for its joke. (The strip does that, now and then.) The comic uses a story problem, with the calculated answer rejected for the nonsense it would be. I suppose it must be possible for someone to eat eighty apples over a long enough time that it’s not distressing, and yet another twenty apples wouldn’t spoil. I wouldn’t try it, though.

This and my other Reading the Comics posts should all be available at this link.

## My 2018 Mathematics A To Z: Extreme Value Theorem

The letter ‘X’ is a problem. For all that the letter ‘x’ is important to mathematics there aren’t many mathematical terms starting with it. Mr Wu, mathematics tutor and author of the MathTuition88 blog, had a suggestion. Why not 90s it up a little and write about an Extreme theorem? I’m game.

The Extreme Value Theorem, which I chose to write about, is a fundamental bit of analysis. There is also a similarly-named but completely unrelated Extreme Value Theory. This exists in the world of statistics. That’s about outliers, and about how likely it is you’ll find an even more extreme outlier if you continue sampling. This is valuable in risk assessment: put another way, it’s the question of what neighborhoods you expect to flood based on how the river’s overflowed the last hundred years. Or be in a wildfire, or be hit by a major earthquake, or whatever. The more I think about it the more I realize that’s worth discussing too. Maybe in the new year, if I decide to do some A To Z extras.

# Extreme Value Theorem.

There are some mathematical theorems which defy intuition. You can encounter one and conclude that can’t be so. This can inspire one to study mathematics, to understand how it could be. Famously, the philosopher Thomas Hobbes encountered the Pythagorean Theorem and disbelieved it. He then fell into a controversial love with the subject. Some you can encounter, and study, and understand, and never come to believe. This would be the Banach-Tarski Paradox. It’s the realization that one can split a ball into as few as five pieces, and reassemble the pieces, and have two complete balls. They can even be wildly larger or smaller than the one you started with. It’s dazzling.

And then there are theorems that seem the opposite. Ones that seem so obvious, and so obviously true, that they hardly seem like mathematics. If they’re not axioms, they might as well be. The extreme value theorem is one of these.

It’s a theorem about functions. Here, functions that have a domain and a range that are both real numbers. Even more specifically, about continuous functions. “Continuous” is a tricky idea to make precise, but we don’t have to do it. A century of mathematicians worked out meanings that correspond pretty well to what you’d imagine it should mean. It means you can draw a graph representing the function without lifting the pen. (Do not attempt to use this definition at your thesis defense. I’m skipping over a century’s worth of hard thinking about the subject.)

And it’s a theorem about “extreme” values. “Extreme” is a convenient word. It means “maximum or minimum”. We’re often interested in the greatest or least value of a function. Having a scheme to find the maximum is as good as having one to find a minimum. So there’s little point talking about them as separate things. But that forces us to use a bunch of syllables. Or to adopt a convention that “by maximum we always mean maximum or minimum”. We could say we mean that, but I’ll bet a good number of mathematicians, and 95% of mathematics students, would forget the “or minimum” within ten minutes. “Extreme”, then. It’s short and punchy and doesn’t commit us to a maximum or a minimum. It’s simply the most outstanding value we can find.

The Extreme Value Theorem doesn’t help us find them. It only proves to us there is an extreme to find. Particularly, it says that if a continuous function has a domain that’s a closed interval, then it has to have a maximum and a minimum. And it has to attain the maximum and the minimum at least once each. That is, something in the domain matches to the maximum. And something in the domain matches to the minimum. Could be multiple times, yes.
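A sketch of that gap between the guarantee and the search: the theorem promises the extremes exist, but to locate them we still have to look. Here I just sample a closed interval finely; the function and interval are my own example.

```python
import math

# The Extreme Value Theorem promises that a continuous function on a
# closed interval attains a maximum and a minimum. It doesn't locate
# them; this sketch samples the interval finely to approximate both.

def approximate_extremes(f, a, b, samples=100_001):
    xs = [a + (b - a) * i / (samples - 1) for i in range(samples)]
    ys = [f(x) for x in xs]
    return min(ys), max(ys)

# sin on the closed interval [0, 2*pi] attains a minimum of -1 and a
# maximum of 1, and the sampling finds both.
lo, hi = approximate_extremes(math.sin, 0.0, 2.0 * math.pi)
print(round(lo, 4), round(hi, 4))  # about -1.0 and 1.0
```

On an open interval, or for a discontinuous function, no such promise holds, which is why the “closed interval” and “continuous” hypotheses both matter.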

This might not seem like much of a theorem. Existence proofs rarely do. It’s a bias, I suppose. We like to think we’re out looking for solutions. So we suppose there’s a solution to find. Checking that there is an answer before we start looking? That seems excessive. Before heading to the airport we might check the flight wasn’t delayed. But we almost never check that there is still a Newark to fly to. I’m not sure, in working out problems, that we check it explicitly. We decide early on that we’re working with continuous functions and so we can try out the usual approaches. That we use the theorem becomes invisible.

And that’s sort of the history of this theorem. The Extreme Value Theorem, for example, is part of how we now prove Rolle’s Theorem. Rolle’s theorem is about functions continuous and differentiable on the interval from a to b. And functions that have the same value for a and for b. The conclusion is the function has got a local maximum or minimum in-between these. It’s the theorem depicted in that xkcd comic you maybe didn’t check out a few paragraphs ago. Rolle’s Theorem is named for Michel Rolle, who proved the theorem (for polynomials) in 1691. The Indian mathematician Bhaskara II, in the 12th century, is credited with stating the theorem too. The Extreme Value Theorem was proven around 1860. (There was an earlier proof, by Bernard Bolzano, whose name you’ll find all over talk about limits and functions and continuity and all. But that was unpublished until 1930. The proofs known about at the time were done by Karl Weierstrass. His is the other name you’ll find all over talk about limits and functions and continuity and all. Go on, now, guess who it was that proved the Extreme Value Theorem. And guess what theorem, bearing the name of two important 19th-century mathematicians, is at the core of proving that. You need at most two chances!) That is, mathematicians were comfortable using the theorem before it had a clear identity.

Once you know that it’s there, though, the Extreme Value Theorem’s a great one. It’s useful. Rolle’s Theorem I just went through. There’s also the quite similar Mean Value Theorem. This one is about functions continuous and differentiable on an interval. It tells us there’s at least one point where the derivative is equal to the mean slope of the function on that interval. This is another theorem that’s a quick proof once you have the Extreme Value Theorem. Or we can get more esoteric. There’s a technique known as Lagrange Multipliers. It’s a way to find where on a constrained surface a function is at its maximum or minimum. It’s a clever technique, one that I needed time to accept as a thing that could possibly work. And why should it work? Go ahead, guess what the centerpiece of at least one method of proving it is.
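The Mean Value Theorem lends itself to a quick numeric illustration. This sketch scans an interval for a point where a central-difference estimate of the derivative matches the mean slope; the function is my own example, not anything from a particular source.

```python
# The Mean Value Theorem: a function continuous on [a, b] and
# differentiable on (a, b) has at least one point c where f'(c) equals
# the mean slope (f(b) - f(a)) / (b - a). A brute-force scan for such a
# point, estimating the derivative by a central difference.

def mvt_point(f, a, b, steps=100_000, h=1e-6):
    mean_slope = (f(b) - f(a)) / (b - a)
    best_c, best_gap = None, float("inf")
    for i in range(1, steps):
        c = a + (b - a) * i / steps
        gap = abs((f(c + h) - f(c - h)) / (2 * h) - mean_slope)
        if gap < best_gap:
            best_c, best_gap = c, gap
    return best_c

# For f(x) = x^2 on [0, 2] the mean slope is 2, and f'(c) = 2c, so the
# theorem's promised point is c = 1.
c = mvt_point(lambda x: x * x, 0.0, 2.0)
print(round(c, 4))  # about 1.0
```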

Step back from calculus and into real analysis. That’s the study of why calculus works, and how real numbers work. The Extreme Value Theorem turns up again and again. Like, one technique for defining the integral itself is to approximate a function with a “stepwise” function. This is one that looks like a pixellated, rectangular approximation of the function. The definition depends on having a stepwise rectangular approximation that’s as close as you can get to a function while always staying less than it. And another stepwise rectangular approximation that’s as close as you can get while always staying greater than it.
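Those stepwise approximations can be sketched directly. Here I probe each subinterval at several points for its least and greatest sampled values, a stand-in for the true infimum and supremum; the numbers of pieces and probes are arbitrary choices of mine.

```python
# One way to define the integral: squeeze the function between stepwise
# (piecewise-constant) approximations, one staying at or below it and
# one staying at or above it on each subinterval, and narrow the steps.

def darboux_sums(f, a, b, pieces=1000, probes=21):
    width = (b - a) / pieces
    lower = upper = 0.0
    for i in range(pieces):
        left = a + i * width
        ys = [f(left + width * j / (probes - 1)) for j in range(probes)]
        lower += min(ys) * width  # rectangle staying under the function
        upper += max(ys) * width  # rectangle staying over the function
    return lower, upper

# f(x) = x^2 on [0, 1] integrates to 1/3; the two sums should bracket it.
lo, hi = darboux_sums(lambda x: x * x, 0.0, 1.0)
print(round(lo, 4), round(hi, 4))  # both close to 0.3333
```

When the lower and upper sums converge to the same number as the pieces shrink, that shared number is the integral.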

And then other results. Often in real analysis we want to know about whether sets are closed and bounded. The Extreme Value Theorem has a neat corollary. Start with a continuous function with domain that’s a closed and bounded interval. Then, this theorem demonstrates, the range is also a closed and bounded interval. I know this sounds like a technical point. But it is the sort of technical point that makes life easier.

The Extreme Value Theorem even takes on meaning when we don’t look at real numbers. We can rewrite it in topological spaces. These are sets of points for which we have an idea of a “neighborhood” of points. We don’t demand that we know what distance is exactly, though. What had been a closed and bounded interval becomes a mathematical construct called a “compact set”. The idea of a continuous function changes into one about the preimage of an open set being another open set. And there is still something recognizably the Extreme Value Theorem. It tells us about things called the supremum and infimum, which are slightly different from the maximum and minimum. Just enough to confuse the student taking real analysis the first time through.

Topological spaces are an abstracted concept. Real numbers are topological spaces, yes. But many other things also are. Neighborhoods and compact sets and open sets are also abstracted concepts. And so this theorem has its same quiet utility in these many spaces. It’s just there quietly supporting more challenging work.

And now I get to really relax: I already have a Reading the Comics post ready for tomorrow, and Sunday’s is partly written. Now I just have to find a mathematical term starting with ‘Y’ that’s interesting enough to write about.

## Reading the Comics, December 5, 2018: December 5, 2018 Edition

And then I noticed there were a bunch of comic strips with some kind of mathematical theme on the same day. Always fun when that happens.

Bill Holbrook’s On The Fastrack uses one of Holbrook’s common motifs. That’s depicting some common metaphor as literal. In this case it’s “massaging the numbers”, which might seem not strictly mathematics. But while numbers are interesting, they’re also useful. To be useful they must connect to something we want to know. They need context. That context is always something of human judgement. If the context seems inappropriate to the listener, she thinks the presenter is massaging the numbers. If the context seems fine, we trust the numbers as showing something true.

Scott Hilburn’s The Argyle Sweater is a seasonal pun that couldn’t wait for a day closer to Christmas. I’m a little curious why not. It would be the same joke with any subject, certainly. The strip did make me wonder if Ebenezer Scrooge, in-universe, might have taken calculus. This led me to see that it’s a bit vague what, precisely, Scrooge, or Scrooge-and-Marley, did. The movies are glad to position him as having a warehouse, and importing and exporting things, and making and collecting on loans and whatnot. These are all trades that mathematicians would like to think benefit from knowing advanced mathematics. The logic of making loans implies attention be paid to compounding interest, risks, and expectation values, as well as projecting cash-flow a fair bit into the future. But in the original text he doesn’t make any stated loans, and the only warehouse anyone enters is Fezziwig’s. Well, the Scrooge and Marley sign stands “above the warehouse door”, but we only ever go in to the counting-house. And yes, what Scrooge does besides gather money and misery is irrelevant to the setting of the story.

Teresa Burritt’s Dadaist strip Frog Applause uses knowledge of mathematics as an emblem of intelligence. “Multivariate analysis” is a term of art from statistics. It’s about measuring how one variable changes depending on two or more other variables. The goal is obvious: we know there are many things that influence anything of interest. Can we find what things have the strongest effects? The weakest effects? There are several ways we might mean “strongest” effect, too. It might mean that a small change in the independent variable produces a big change in the dependent one. Or it might mean that there’s very little noise, that a change in the independent variable produces a reliable change in the dependent one. Or we might have several variables that are difficult to measure precisely on their own, but with a combination that’s noticeable. The basic calculations for this look a lot like those for single-variable analysis. But there’s much more calculation. It’s more tedious, at least. My reading suggests that multivariate analysis didn’t develop much until there were computers cheap enough to do the calculations. Might be coincidence, though. Many machine-learning techniques can be described as multivariate analysis problems.
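A minimal example of the calculation: ordinary least squares with two independent variables. The data here are synthetic, generated from a made-up linear relationship plus noise, purely to show the machinery.

```python
import numpy as np

# A minimal multivariate-analysis sketch: ordinary least squares with
# two independent variables. The data are synthetic, generated from the
# made-up relationship y = 2*x1 - 3*x2 + 5 plus a little noise.

rng = np.random.default_rng(0)
x1 = rng.uniform(0, 10, 200)
x2 = rng.uniform(0, 10, 200)
y = 2 * x1 - 3 * x2 + 5 + rng.normal(0, 0.1, 200)

# Design matrix: one column per variable plus a column of ones for the
# intercept. lstsq finds the coefficients minimizing the squared error.
A = np.column_stack([x1, x2, np.ones_like(x1)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 2))  # close to [2, -3, 5]
```

The sizes of the recovered coefficients, relative to their noise, are one crude way of asking which variable has the “strongest” effect.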

Greg Evans’s Luann Againn is a Pi Day joke from before the time when Pi Day was a thing. Brad’s magazine flipping like that is an unusual bit of throwaway background humor for the comic strip.

Doug Savage’s Savage Chickens is a bunch of shape jokes. Since I was talking about tiling the plane so recently the rhombus seemed on-point enough. I think the irregular heptagon shown here won’t tile the plane. But given how much it turns out I didn’t know, I wouldn’t want to commit to that.

I’m working hard on the letter ‘X’ essay for my Fall 2018 Mathematics A To Z glossary. That should appear on Friday. And there should be another Reading the Comics post later this week, at this link.

## My 2018 Mathematics A To Z: Witch of Agnesi

Nobody had a suggested topic starting with ‘W’ for me! So I’ll take that as a free choice, and get lightly autobiographical.

# Witch of Agnesi.

I know I encountered the Witch of Agnesi while in middle school. Eighth grade, if I’m not mistaken. It was a footnote in a textbook. I don’t remember much of the textbook. What I mostly remember of the course was how much I did not fit with the teacher. The only relief from boredom that year was the month we had a substitute and the occasional interesting footnote.

It was in a chapter about graphing equations. That is, finding curves whose points have coordinates that satisfy some equation. In a bit of relief from lines and parabolas the footnote offered this:

$y = \frac{8a^3}{x^2 + 4a^2}$

In a weird tantalizing moment the footnote didn’t offer a picture. Or say what an ‘a’ was doing in there. In retrospect I recognize ‘a’ as a parameter, and that different values of it give different but related shapes. No hint what the ‘8’ or the ‘4’ were doing there. Nor why ‘a’ gets raised to the third power in the numerator or the second in the denominator. I did my best with the tools I had at the time. Picked a nice easy boring ‘a’. Picked out values of ‘x’ and found the corresponding ‘y’ which made the equation true, and tried connecting the dots. The result didn’t look anything like a witch. Nor a witch’s hat.
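Here is that exercise done by machine rather than by tedious hand: the same formula, with my own easy choice of a = 1, tabulated at several x.

```python
# Doing what that middle-schooler did, but with less tedium: pick a nice
# easy 'a', work out y = 8a^3 / (x^2 + 4a^2) for several x, and look at
# the dots. (This is just my own tabulation, with a = 1.)

def witch(x, a=1.0):
    return 8 * a**3 / (x**2 + 4 * a**2)

for x in range(-4, 5):
    print(f"x = {x:2d}   y = {witch(x):.3f}")
# The curve peaks at x = 0, where y = 2a, and flattens out toward y = 0
# on both sides: a smooth symmetric hill, no witch's hat in sight.
```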

It was one of a handful of biographical notes in the book. These were a little attempt to add some historical context to mathematics. It wasn’t much. But it was an attempt to show that mathematics came from people. Including, here, from Maria Gaëtana Agnesi. She was, I’m certain, the only woman mentioned in the textbook I’ve otherwise completely forgotten.

We have few names of ancient mathematicians. Those we have are often compilers like Euclid whose fame obliterated the people whose work they explained. Or they’re like Pythagoras, credited with discoveries by people who obliterated their own identities. In later times we have the mathematics done by, mostly, people whose social positions gave them time to write mathematics results. So we see centuries where every mathematician is doing it as their side hustle to being a priest or lawyer or physician or combination of these. Women don’t get the chance to stand out here.

Today of course we can name many women who did, and do, mathematics. We can name Emmy Noether, Ada Lovelace, and Marie-Sophie Germain. Challenged to do a bit more, we can offer Florence Nightingale and Sofia Kovalevskaya. Well, and also Grace Hopper and Margaret Hamilton if we decide computer scientists count. Katherine Johnson looks likely to make that cut. But in any case none of these people are known for work understandable in a pre-algebra textbook. This must be why Agnesi earned a place in this book. She’s among the earliest women we can specifically credit with doing noteworthy mathematics. (Also physics, but that’s off point for me.) Her curve might be a little advanced for that textbook’s intended audience. But it’s not far off, and pondering questions like “why $8a^3$? Why not $a^3$?” is more pleasant, to a certain personality, than pondering what a directrix might be and why we might use one.

The equation might be a lousy way to visualize the curve described. The curve is one of that group of interesting shapes you get by constructions. That is, following some novel process. Constructions are fun. They’re almost a craft project.

For this we start with a circle. And two parallel tangent lines. Without loss of generality, suppose they’re horizontal, so there are lines at the top and the bottom of the circle.

Take one of the two tangent points. Again without loss of generality, let’s say the bottom one. Draw a line from that point over to the other line. Anywhere on the other line. There’s a point where the line you drew intersects the circle. There’s another point where it intersects the other parallel line. We’ll find a new point by combining pieces of these two points. The point is on the same horizontal as wherever your line intersects the circle. It’s on the same vertical as wherever your line intersects the other parallel line. This point is on the Witch of Agnesi curve.

Now draw another line. Again, starting from the lower tangent point and going up to the other parallel line. Again it intersects the circle somewhere. This gives another point on the Witch of Agnesi curve. Draw another line. Another intersection with the circle, another intersection with the opposite parallel line. Another point on the Witch of Agnesi curve. And so on. Keep doing this. When you’ve drawn all the lines that reach from the tangent point to the other line, you’ll have generated the full Witch of Agnesi curve. This takes more work than writing out $y = \frac{8a^3}{x^2 + 4a^2}$, yes. But it’s more fun. It makes for neat animations. And I think it prepares us to expect the shape of the curve.
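The construction translates into coordinates neatly. A sketch, under my own choice of coordinates (a circle of radius a tangent to the x-axis at the origin), checking that the constructed points satisfy the equation above:

```python
import math

# The construction in coordinates, under my own conventions: a circle of
# radius a tangent to the x-axis at the origin (center (0, a)), with the
# parallel tangent lines y = 0 and y = 2a. A line from the origin to the
# point (t, 2a) on the upper line crosses the circle again at a height
# of 8a^3 / (t^2 + 4a^2). The witch point takes its x from the
# upper-line intersection and its y from the circle intersection.

def witch_point(t, a=1.0):
    # Parametrize the line as (s*t, s*2a); substituting into the circle
    # equation x^2 + (y - a)^2 = a^2 gives s = 4a^2 / (t^2 + 4a^2).
    s = 4 * a**2 / (t**2 + 4 * a**2)
    return (t, 2 * a * s)

# Every constructed point satisfies y = 8a^3 / (x^2 + 4a^2).
a = 1.0
for t in (-3.0, -1.0, 0.5, 2.0):
    x, y = witch_point(t, a)
    assert math.isclose(y, 8 * a**3 / (x**2 + 4 * a**2))
print("construction matches the equation")
```

Sweeping t across all the real numbers traces out the full curve, which is how those neat animations get made.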

It’s a neat curve. Between it and the lower parallel line is an area four times that of the circle that generated it. The shape is one we would get from looking at the derivative of the arctangent. So there are some reasons someone working in calculus might find it interesting. And people did. Pierre de Fermat studied it, and found this area. Isaac Newton and Luigi Guido Grandi studied the shape, using this circle-and-parallel-lines construction. Maria Agnesi’s name attached to it after she published a calculus textbook which examined this curve. She showed, according to people who present themselves as having read her book, the curve and how to find it. And she showed its equation and found the vertex and asymptote line and the inflection points. The inflection points, here, are where the curve changes from being cupped upward to cupping downward, or vice-versa.
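That area claim is easy to check numerically. A rough sketch with the trapezoid rule, cutting the infinite tails off at a generous distance; the cutoff and step size are arbitrary choices of mine.

```python
import math

# Checking the area claim numerically: the region between the witch and
# the lower tangent line (the x-axis here) should be four times the area
# of the generating circle. The tail beyond x = X only holds about
# 8a^3 / X of area on each side, so a generous cutoff gets close.

def area_under_witch(a=1.0, cutoff=2000.0, step=0.02):
    f = lambda x: 8 * a**3 / (x**2 + 4 * a**2)
    n = int(2 * cutoff / step)
    total = 0.0
    for i in range(n):
        x = -cutoff + i * step
        total += 0.5 * (f(x) + f(x + step)) * step  # trapezoid slice
    return total

a = 1.0
circle_area = math.pi * a**2
print(round(area_under_witch(a) / circle_area, 2))  # close to 4
```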

It’s a neat function. It’s got some uses. It’s a natural smooth-hill shape, for example. So this makes a good generic landscape feature if you’re modeling the flow over a surface. I read that solitary waves can have this curve’s shape, too.

And the curve turns up as a probability distribution. Take a fixed point. Pick lines at random that pass through this point. See where those lines reach a separate, straight line. Some regions are more likely to be intersected than are others. Chart how often any particular point is the intersection. That chart will (given some assumptions I ask you to pretend you agree with) be a Witch of Agnesi curve. This might not surprise you. It seems inevitable from the circle-and-intersecting-line construction process. And that’s nice enough. As a distribution it looks like the usual Gaussian bell curve.

It’s different, though. And it’s different in strange ways. Like, for a probability distribution we can find an expected value. That’s … well, what it sounds like. But this is the strange probability distribution for which the law of large numbers does not work. Imagine an experiment that produces real numbers, with the frequency of each number given by this distribution. Run the experiment zillions of times. What’s the mean value of all the zillions of generated numbers? And it … doesn’t … have one. I mean, we know it ought to, it should be the center of that hill. But the calculations for that don’t work right. Taking a bigger sample makes the sample mean jump around more, not less, the way every other distribution should work. It’s a weird idea.
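This distribution is what statisticians call the Cauchy distribution; its density curve is the witch’s shape, suitably scaled. A sketch of that refusal to settle down, sampling by taking tangents of uniformly random angles, which is the random-lines construction in coordinates:

```python
import math
import random

# The witch, as a distribution, is the Cauchy distribution. Sampling it
# is easy: pick an angle uniformly from (-pi/2, pi/2) and take its
# tangent. Then watch the running mean wander instead of settling, the
# failure of the law of large numbers described above.

random.seed(4)
samples = [math.tan(math.pi * (random.random() - 0.5))
           for _ in range(100_000)]

total = 0.0
for i, x in enumerate(samples, start=1):
    total += x
    if i in (100, 1_000, 10_000, 100_000):
        print(f"mean of {i:6d} samples: {total / i:10.3f}")
```

Every so often the sampler draws an angle nearly perpendicular to the target line, producing an enormous value that yanks the mean around; that is the heavy tail doing its work.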

Imagine carving a block of wood in the shape of this curve, with a horizontal lower bound and the Witch of Agnesi curve as the upper bound. Where would it balance? … The normal mathematical tools don’t say, even though the shape has an obvious line of symmetry. And a finite area. You don’t get this kind of weirdness with parabolas.

(Yes, you’ll get a balancing point if you actually carve a real one. This is because you work with finitely-long blocks of wood. Imagine you had a block of wood infinite in length. Then you would see some strange behavior.)

It teaches us more strange things, though. Consider interpolations, that is, taking a couple data points and fitting a curve to them. We usually start out looking for polynomials when we interpolate data points. This is because everything is polynomials. Toss in more data points. We need a higher-order polynomial, but we can usually fit all the given points. But sometimes polynomials won’t work. A problem called Runge’s Phenomenon can happen, where the more data points you have the worse your polynomial interpolation is. The Witch of Agnesi curve is one of those. Carl Runge used points on this curve, and trying to fit polynomials to those points, to discover the problem. More data and higher-order polynomials make for worse interpolations. You get curves that look less and less like the original Witch. Runge is himself famous to mathematicians, known for “Runge-Kutta”. That’s a family of techniques to solve differential equations numerically. I don’t know whether Runge came to the weirdness of the Witch of Agnesi curve from considering how errors build in numerical integration. I can imagine it, though. The topics feel related to me.
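A sketch of Runge’s example: interpolate the witch-shaped function f(x) = 1/(1 + 25x²) with polynomials through equally spaced points on [-1, 1], and watch the worst-case error grow with the degree. The particular degrees and measuring grid are my own choices.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Runge's phenomenon, with the witch-shaped function Runge studied,
# f(x) = 1/(1 + 25x^2). Fit polynomials through equally spaced points on
# [-1, 1]: as the degree climbs, the worst-case error near the ends of
# the interval grows rather than shrinks.

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
dense = np.linspace(-1.0, 1.0, 2001)  # grid for measuring the error

errors = {}
for degree in (5, 10, 20):
    nodes = np.linspace(-1.0, 1.0, degree + 1)  # equally spaced data
    poly = Polynomial.fit(nodes, f(nodes), degree)
    errors[degree] = float(np.max(np.abs(poly(dense) - f(dense))))
    print(f"degree {degree:2d}: worst error {errors[degree]:.2f}")
```

Clustering the nodes toward the interval’s ends, as Chebyshev points do, tames the problem; it is the equal spacing, combined with this curve’s shape, that misbehaves.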

I understand how none of this could fit that textbook’s slender footnote. I’m not sure any of the really good parts of the Witch of Agnesi could even fit thematically in that textbook. At least beyond the fact of its interesting name, which any good blog about the curve will explain. That there was no picture, and that the equation was beyond what the textbook had been describing, made it a challenge. Maybe not seeing what the shape was teased the mathematician out of this bored student.

And next is ‘X’. Will I take Mr Wu’s suggestion and use that to describe something “extreme”? Or will I take another topic or suggestion? We’ll see on Friday, barring unpleasant surprises. Thanks for reading.

## Reading the Comics, December 4, 2018: Christmas Specials Edition

This installment took longer to write than you’d figure, because it’s the time of year we’re watching a lot of mostly Rankin/Bass Christmas specials around here. So I have to squeeze words out in-between baffling moments of animation and, like, arguing whether there’s any possibility that Jack Frost was not meant to be a Groundhog Day special that got rewritten to Christmas because the networks weren’t having it otherwise.

Graham Nolan’s Sunshine State for the 3rd is a misplaced Pi Day strip. I did check the copyright to see if it might be a rerun from when it was more seasonal.

Jeffrey Caulfield and Brian Ponshock’s Yaffle for the 3rd is the anthropomorphic numerals joke for the week. … You know, I’ve always wondered in this sort of setting, what are two-digit numbers like? I mean, what’s the difference between a twelve and a one-and-two just standing near one another? How do people recognize a solitary number? This is a darned silly thing to wonder so there’s probably a good web comic about it.

John Hambrock’s The Brilliant Mind of Edison Lee for the 4th has Edison forecast the outcome of a basketball game. I can’t imagine anyone really believing in forecasting the outcome, though. The elements of forecasting a sporting event are plausible enough. We can suppose a game to be a string of events. Each of them has possible outcomes. Some of them score points. Some block the other team’s score. Some cause control of the ball (or whatever makes scoring possible) to change teams. Some take a player out, for a while or for the rest of the game. So it’s possible to run through a simulated game. If you know well enough how the people playing do various things? How they’re likely to respond to different states of things? You could certainly simulate that.

But all sorts of crazy things will happen, one game or another. Run the same simulation again, with different random numbers. The final score will likely be different. The course of action certainly will. Run the same simulation many times over. Vary it a little; what happens if the best player is a little worse than average? A little better? What if the referees make a lot of mistakes? What if the weather affects the outcome? What if the weather is a little different? So each possible outcome of the sporting event has some chance. We have a distribution of the possible results. We can judge an expected value, and what the range of likely outcomes is. This demands a lot of data about the players, though. Edison Lee can have it, I suppose. The premise of the strip is that he’s a genius of unlimited competence. That much data would be more reasonable to expect for college and professional teams.
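A toy version of this, with everything about the “game” invented for illustration, shows how a win probability comes out of repeated simulation:

```python
# A toy Monte Carlo forecast.  The scoring probabilities here are
# invented for illustration; a real forecast would estimate them
# from data about the actual players.
import random

def simulate_game(p_a=0.48, p_b=0.46, possessions=100, rng=random):
    """One simulated game: on each possession a team scores 2 points
    with its own fixed probability.  Returns (score_a, score_b)."""
    score_a = score_b = 0
    for _ in range(possessions):
        if rng.random() < p_a:
            score_a += 2
        if rng.random() < p_b:
            score_b += 2
    return score_a, score_b

def forecast_win_chance(trials=10_000, seed=42):
    """Run many simulated games; estimate team A's chance of winning."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        a, b = simulate_game(rng=rng)
        if a > b:
            wins += 1
    return wins / trials

print(forecast_win_chance())   # a bit over one-half: A's small edge
```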

Brian Basset’s Red and Rover for the 4th uses arithmetic as the homework to get torn up. I’m not sure it’s just a cameo appearance. It makes a difference to the joke as told that there’s division and long division, after all. But it could really be any subject.

I’m figuring to get to the letter ‘W’ in my Fall 2018 Mathematics A To Z glossary for Tuesday. And I also figure there should be two more Reading the Comics posts this week. When posted, they’ll be at this link.

## My 2018 Mathematics A To Z: Volume

Ray Kassinger, of the popular web comic Housepets!, had a silly suggestion when I went looking for topics. In one episode of Mystery Science Theater 3000, Crow T Robot gets the idea that you could describe the size of a space by the number of turkeys which fill it. (It’s based on like two minor mentions of “turkeys” in the show they were watching.)

I liked that episode. I’ve got happy memories of the time when I first saw it. I thought the sketch in which Crow T Robot got so volume-obsessed was goofy and dumb in the fun-nerd way.

I accept Mr Kassinger’s challenge, only I’m going to take it seriously.

# Volume.

How big is a thing?

There is a legend about Thomas Edison. He was unimpressed with a new hire. So he hazed the college-trained engineer who deeply knew calculus. He demanded the engineer tell him the volume within a light bulb. The engineer went to work, making measurements of the shape of the bulb’s outside. And then started the calculations. This involves a calculus technique called “volumes of rotation”. This can tell the volume within a rotationally symmetric shape. It’s tedious, especially if the outer edge isn’t some special nice shape. Edison, fed up, took the bulb, filled it with water, poured that out into a graduated cylinder and said that was the answer.
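The calculation the engineer reached for can be sketched numerically. Here, as a stand-in for the light bulb, is the volume of a unit sphere found by summing thin disks (the disk method; the profile function is my choice):

```python
# A sketch of "volumes of rotation": rotate a profile r(x) about the
# x-axis and add up disk volumes pi * r(x)^2 * dx.  The profile here
# is a unit sphere, far nicer than any real light bulb's outline.
import math

def volume_of_rotation(r, a, b, n=100_000):
    """Approximate the volume swept by rotating r(x) on [a, b]."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx          # midpoint of each thin disk
        total += math.pi * r(x)**2 * dx
    return total

# A sphere of radius 1 has profile r(x) = sqrt(1 - x^2).
v = volume_of_rotation(lambda x: math.sqrt(1 - x*x), -1.0, 1.0)
print(v, 4 * math.pi / 3)   # the two should agree closely
```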

I’m skeptical of legends. I’m skeptical of stories about the foolish intellectual upstaged by the practical man-of-action. And I’m skeptical of Edison because, jeez, I’ve read biographies of the man. Even the fawning ones make him out to be yeesh.

But the legend’s Edison had a point. If the volume of a shape is not how much stuff fits inside the shape, what is it? And maybe some object has too complicated a shape to find its volume. Can we think of a way to produce something with the same volume, but that is easier? Sometimes we can. When we do this with straightedge and compass, the way the Ancient Greeks found so classy, we call this “quadrature”. It’s called quadrature from its application in two dimensions. It finds, for a shape, a square with the same area. For a three-dimensional object, we find a cube with the same volume. Cubes are easy to understand.

Straightedge and compass can’t do everything. Indeed, there’s so much they can’t do. Some of it is stuff you’d think it should be able to, like, find a cube with the same volume as a sphere. Integration gives us a mathematical tool for describing how much stuff is inside a shape. It’s even got a beautiful shorthand expression. Suppose that D is the shape. Then its volume V is:

$V = \int\int\int_D dV$

Here “dV” is the “volume form”, a description of how the coordinates we describe a space in relate to the volume. The $\int\int\int$ is jargon, meaning, “integrate over the whole volume”. The subscript “D” modifies that phrase by adding “of D” to it. Writing “D” is shorthand for “these are all the points inside this shape, in whatever coordinate system you use”. If we didn’t do that we’d have to say, on each $\int$ sign, what points are inside the shape, coordinate by coordinate. At this level the equation doesn’t offer much help. It says the volume is the sum of infinitely many, infinitely tiny pieces of volume. True, but that doesn’t give much guidance about whether it’s more or less than two cups of water. We need to get more specific formulas, usually. We need to pick coordinates, for example, and say what coordinates are inside the shape. A lot of the resulting formulas can’t be integrated exactly. Like, an ellipsoid? Maybe you can integrate that. Don’t try without getting hazard pay.

We can approximate this integral. Pick a tiny shape whose volume is easy to know. Fill your shape with duplicates of it. Count the duplicates. Multiply that count by the volume of this tiny shape. Done. This is numerical integration, sometimes called “numerical quadrature”. If we’re being generous, we can say the legendary Edison did this, using water molecules as the tiny shape. And working so that he didn’t need to know the exact count or the volume of individual molecules. Good computational technique.
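A minimal sketch of that counting scheme, with tiny cubes standing in for the turkeys or water molecules, and a unit sphere as the shape:

```python
# Numerical quadrature by counting: tile the bounding box of a unit
# sphere with tiny cubes, count the cubes whose centers land inside,
# and multiply by the tiny cube's volume.  True answer is 4*pi/3.
import math

def sphere_volume_by_cubes(n=60):
    """Tile [-1,1]^3 with n^3 tiny cubes; count those centered
    inside the sphere, times each cube's volume."""
    h = 2.0 / n                 # side of each tiny cube
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = -1 + (i + 0.5) * h
                y = -1 + (j + 0.5) * h
                z = -1 + (k + 0.5) * h
                if x*x + y*y + z*z <= 1.0:
                    count += 1
    return count * h**3         # count times the reference volume

v = sphere_volume_by_cubes()
print(v, 4 * math.pi / 3)
```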

It’s hard not to feel we’re begging the question, though. We want the volume of something. So we need the volume of something else. Where does that volume come from?

Well, where does an inch come from? Or a centimeter? Whatever unit you use? You pick something to use as reference. Any old thing will do. Which is why you get fascinating stories about choosing what to use. And bitter arguments about which of several alternatives to use. And we express the length of something as some multiple of this reference length.

Volume works the same way. Pick a reference volume, something that can be one unit-of-volume. Other volumes are some multiple of that unit-of-volume. Possibly a fraction of that unit-of-volume.

Usually we use a reference volume that’s based on the reference length. Typically, we imagine a cube that’s one unit of length on each side. The volume of this cube with sides of length 1 unit-of-length is then 1 unit-of-volume. This seems all nice and orderly and it’s surely not because mathematicians have been paid off by six-sided-dice manufacturers.

Does it have to be?

That we need some reference volume seems inevitable. We can’t very well say the volume of something is ten times nothing-in-particular. Does that reference volume have to be a cube? Or even a rectangle or something else? It seems obvious that we need some reference shape that tiles, that can fill up space by itself … right?

What if we don’t?

I’m going to drop out of three dimensions a moment. Not because it changes the fundamentals, but because it makes something easier. Specifically, it makes it easier if you decide you want to get some construction paper, cut out shapes, and try this on your own. What this will tell us about area is just as true for volume. Area, for a two-dimensional space, and volume, for a three-dimensional one, describe the same thing. If you’ll let me continue, then, I will.

So draw a figure on a clean sheet of paper. What’s its area? Now imagine you have a whole bunch of shapes with reference areas. A bunch that have an area of 1. That’s by definition. That’s our reference area. A bunch of smaller shapes with an area of one-half. By definition, too. A bunch of smaller shapes still with an area of one-third. Or one-fourth. Whatever. Shapes with areas you know because they’re marked on them.

Here’s one way to find the area. Drop your reference shapes, the ones with area 1, on your figure. How many do you need to completely cover the figure? It’s all right to cover more than the figure. It’s all right to have some of the reference shapes overlap. All you need is to cover the figure completely. … Well, you know how many pieces you needed for that. You can count them up. You can add up the areas of all these pieces needed to cover the figure. So the figure’s area can’t be any bigger than that sum.

Can’t be exact, though, right? Because you might get a different number if you covered the figure differently. If you used smaller pieces. If you arranged them better. This is true. But imagine all the possible reference shapes you had, and all the possible ways to arrange them. There’s some smallest area of those reference shapes that would cover your figure. Is there a more sensible idea for what the area of this figure would be?
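Here is that covering idea made concrete, a sketch with tiny squares and the unit disk (all the sizes are my choices):

```python
# Area-by-covering for the unit disk, with tiny squares as the
# reference shapes.  Squares that meet the disk give an upper bound
# on its area; squares wholly inside it give a lower bound.
import math

def disk_area_bounds(n=200):
    h = 2.0 / n                          # side of each tiny square
    covering = contained = 0
    for i in range(n):
        for j in range(n):
            x0, y0 = -1 + i * h, -1 + j * h
            # nearest point of this square to the disk's center
            nx = min(max(x0, 0.0), x0 + h)
            ny = min(max(y0, 0.0), y0 + h)
            # farthest corner of this square from the center
            fx = max(abs(x0), abs(x0 + h))
            fy = max(abs(y0), abs(y0 + h))
            if nx * nx + ny * ny <= 1.0:
                covering += 1            # square touches the disk
            if fx * fx + fy * fy <= 1.0:
                contained += 1           # square is wholly inside
    return contained * h * h, covering * h * h

low, high = disk_area_bounds()
print(low, math.pi, high)   # the bounds squeeze in on pi
```

Shrink the squares and the two bounds pinch together; their common limit is the only sensible area to assign.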

And put this into three dimensions. If we start from some reference shapes of volume 1 and maybe 1/2 and 1/3 and whatever other useful fractions there are? Doesn’t this covering make sense as a way to describe the volume? Cubes or rectangles are easy to imagine. Tetrahedrons too. But why not any old thing? Why not, as the Mystery Science Theater 3000 episode had it, turkeys?

This is a nice, flexible, convenient way to define area. So now let’s see where it goes all bizarre. We know this thanks to Giuseppe Peano. He’s among the late-19th/early-20th century mathematicians who shaped modern mathematics. They did this by showing how much of our mathematics broke intuition. Peano was (here) exploring what we now call fractals. And noted a family of shapes that curl back on themselves, over and over. They’re beautiful.

And they fill area. Fill volume, if done in three dimensions. It seems impossible. If we use this covering scheme, and try to find the volume of a straight line, we get zero. Well, we find that any positive number is too big, and from that conclude that it has to be zero. Since a straight line has length, but not volume, this seems fine. But a Peano curve won’t go along with this. A Peano curve winds back on itself so much that there is some minimum volume to cover it.

This unsettles. But this idea of volume (or area) by covering works so well. To throw it away seems to hobble us. So it seems worth the trade. We allow ourselves to imagine a line so long and so curled up that it has a volume. Amazing.

And now I get to relax and unwind and enjoy a long weekend before coming to the letter ‘W’. That’ll be about some topic I figure I can whip out a nice tight 500 words about, and instead, produce some 1541-word monstrosity while I wonder why I’ve had no free time at all since August. Tuesday, give or take, it’ll be available at this link, as are the rest of these glossary posts. Thanks for reading.

## Reading the Comics, November 29, 2018: Closing Out November Edition

Today, I get to wrap up November’s suggested discussion topics as prepared by Comic Strip Master Command.

Mark Tatulli’s Lio for the 28th features a cameo for mathematics. At least mathematics class. It’s painted as the most tedious part of the school day. I’m not sure this is quite right for Lio as a character. He’s clever in a way that I think harmonizes well with how mathematics brings out universal truths. But there is a difference between mathematics and mathematics class, of course.

Tom Toles’s Randolph Itch, 2am for the 28th shows how well my resolution to drop the strip from my rotation here has gone. I don’t seem to have found it worthy of mention before, though. It plays on the difference between a note of money, the number of units of currency that note represents, and between “zero” and “nothing”. Also I’m enchanted now by the idea that maybe some government might publish a zero-dollar bill. At least for the sake of movie and television productions that need realistic-looking cash.

In the footer joke Randolph mentions how you can never have enough zeroes. Yes, but I’d say that’s true of twenties, too. There is a neat sense in which this is true for working mathematicians, though. At least for those doing analysis. One of the reliable tricks that we learn to do in analysis is to “add zero” to a quantity. This is, literally, going from some expression that might be, say, “a – b” to “a + 0 – b”, which of course has the same value. The point of doing that is that we know other things equal to zero. For example, for any number L, “-L + L” is zero. So we get the original expression from “a + 0 – b” over to “a – L + L – b”. And that becomes useful if you picked L so that you know something about “a – L” and about “L – b”. Because then it tells you something about “a – b” that you didn’t know before. Picking that L, and showing something true about “a – L” and “L – b”, is the tricky part.
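A worked instance of the trick, my example rather than anything from the strip: to show the limit of a product is the product of the limits, compare $a_n b_n$ to $ab$ by adding zero in the form $-a_n b + a_n b$:

$a_n b_n - ab = a_n b_n - a_n b + a_n b - ab = a_n \left( b_n - b \right) + \left( a_n - a \right) b$

Here the “L” of the trick is $a_n b$. If $a_n$ stays bounded while $b_n - b$ and $a_n - a$ both shrink to zero, each piece on the right shrinks to zero, and so does the thing we wanted.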

Dan Collins’s Looks Good On Paper for the 29th is back with another Möbius Strip comic strip. Last time it was presented as the “Möbius Trip”, a looping journey. This time it’s a comic strip proper. If this particular Looks Good On Paper has run before I don’t seem to have mentioned it. Unlike the “Möbius Trip” comic, this one looks more clearly like it actually is a Möbius strip.

The Dumpties in the comic strip are presented as getting nauseated at the strange curling around. It’s good sense for the comic-in-the-comic, which just has to have something happen and doesn’t really need to make sense. But there is no real way to answer where a Möbius strip wraps around itself. I mean, we can declare it’s at the left and right ends of the strip as we hold it, sure. But this is an ad hoc placement. We can roll the belt along a little bit, not changing its shape, but changing the points where we think of the strip as turning over.

But suppose you were a flat creature, wandering a Möbius strip. Would you have any way to tell that you weren’t on the plane? You could, but it takes some subtle work. Like, you could try drawing shapes. These let you count a thing called the Euler Characteristic, which relates the number of vertices, edges, and faces of a polyhedron. The Euler Characteristic for a Möbius strip is the same as that for a Klein bottle, a cylinder, or a torus. You could try drawing regions, and coloring them in, calling on the four-color map theorem. (Here I want just to mention the five-color map theorem, which is as these things go easy to prove.) A map on the plane needs at most four colors to have no neighboring territories share a color along an edge. (Territories here are contiguous, and we don’t count territories meeting at only a point as sharing an edge.) Same for a sphere, which is good for we folks who have the job of coloring in both globes and atlases. It’s also the same for a cylinder. On a Möbius strip, this number is six. On a torus, it’s seven. So we could tell, if we were on a Möbius strip, that we were. It can be subtle to prove, is all.
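The counting itself is simple; it’s choosing the vertices, edges, and faces carefully that takes work. A tiny sketch, with cell counts I’ve chosen for illustration:

```python
# Euler characteristic V - E + F.  Any polyhedron you could inflate
# into a sphere gives 2.  Cell divisions of a cylinder, Mobius strip,
# torus, or Klein bottle all give 0, which is why this count alone
# can't tell those four surfaces apart.
def euler_characteristic(vertices, edges, faces):
    return vertices - edges + faces

print(euler_characteristic(8, 12, 6))   # cube: 2
print(euler_characteristic(4, 6, 4))    # tetrahedron: 2
print(euler_characteristic(1, 2, 1))    # a minimal cell division of a torus: 0
```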

All of my regular Reading the Comics posts should all be at this link. The next in my Fall 2018 Mathematics A To Z glossary should be posted Tuesday. I’m glad for it if you do come around and read again.

## My 2018 Mathematics A To Z: Unit Fractions

My subject for today is another from Iva Sallay, longtime friend of the blog and creator of the Find the Factors recreational mathematics game. I think you’ll likely find something enjoyable at her site, whether it’s the puzzle or the neat bits of trivia as she works through all the counting numbers.

# Unit Fractions.

We don’t notice how often unit fractions are around us. Likely there’s some in your pocket. Or there have been recently. Think of what you do when paying for a thing, when it’s not a whole number of dollars. (Pounds, euros, whatever the unit of currency is.) Suppose you have exact change. What do you give for the 38 cents?

Likely it’s something like a 25-cent piece and a 10-cent piece and three one-cent pieces. This is an American and Canadian solution. I know that 20-cent pieces are more common than 25-cent ones worldwide. It doesn’t make much difference; if you want it to be three 10-cent, one five-cent, and three one-cent pieces that’s as good. And granted, outside the United States it’s growing common to drop pennies altogether and round prices off to a five- or ten-cent value. Again, it doesn’t make much difference.

But look at the coins. The 25 cent piece is one-quarter of a dollar. It’s even called that, and stamped that on one side. I sometimes hear a dime called “a tenth of a dollar”, although mostly by carnival barkers in one-reel cartoons of the 1930s. A nickel is one-twentieth of a dollar. A penny is one-hundredth. A 20-cent piece is one-fifth of a dollar. And there are half-dollars out there, although not in the United States, not really anymore.

(Pre-decimalized currencies offered even more unit fractions. Using old British coins, for familiarity-to-me and great names, there were farthings, 1/960th of a pound; halfpennies, 1/480th; pennies, 1/240th; threepence, 1/80th of a pound; groats, 1/60th; sixpence, 1/40th; florins, 1/10th; half-crowns, 1/8th; crowns, 1/4th. And what seem to the modern wallet like impossibly tiny fractions like the half-, third-, and quarter-farthings used where 1/3840th of a pound might be a needed sum of money.)

Unit fractions get named and defined somewhere in elementary school arithmetic. They go on, becoming forgotten sometime after that. They might make a brief reappearance in calculus. There are some rational functions that get easier to integrate if you think of them as the sums of fractions, with constant numerators and polynomial denominators. These aren’t unit fractions. A unit fraction has a 1, the unit, in the numerator. But we see unit fractions along the way to integrating $\frac{1}{x^2 - x}$ as an example. And see it in the promise that there are still more amazing integrals to learn how to do.

They get more attention if you take a history of computation class. Or read the subject on your own. Unit fractions stand out in history. We learn the Ancient Egyptians worked with fractions as sums of unit fractions. That is, had they dollars, they would not look at the $\frac{38}{100}$ we do. They would look at $\frac{1}{4}$ plus $\frac{1}{10}$ plus $\frac{1}{100}$ plus $\frac{1}{100}$ plus $\frac{1}{100}$. When we count change we are using, without noticing it, a very old computing scheme.

This isn’t quite true. The Ancient Egyptians seemed to shun repeating a unit like that. To use $\frac{1}{100}$ once is fine; three times is suspicious. They would prefer something like $\frac{1}{3}$ plus $\frac{1}{24}$ plus $\frac{1}{200}$. Or maybe some other combination. I just wrote out the first one I found.

But there are many ways we can make 38 cents using ordinary coins of the realm. There are infinitely many ways to make up any fraction using unit fractions. There’s surely a most “efficient”. Most efficient might be the one which uses the fewest number of terms. Most efficient might be the one that uses the smallest denominators. Choose what you like; no one knows a scheme that always turns up the most efficient, either way. We can always find some representation, though. It may not be “good”, but it will exist, which may be good enough. Leonardo of Pisa, or as he got named in the 19th century, Fibonacci, proved that was true.
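Fibonacci’s existence proof uses what we now call a greedy algorithm: always subtract the largest unit fraction that still fits. A sketch of it in code (the function is my illustration, not anything historical):

```python
# The greedy method for Egyptian fractions: repeatedly take the
# largest unit fraction no bigger than what remains.  Fibonacci
# showed this always terminates, so a representation always exists.
from fractions import Fraction
from math import ceil

def egyptian(frac):
    """Write a fraction in (0, 1) as a sum of distinct unit fractions;
    returns the list of denominators."""
    denominators = []
    while frac > 0:
        d = ceil(Fraction(1) / frac)   # smallest d with 1/d <= frac
        denominators.append(d)
        frac -= Fraction(1, d)
    return denominators

# 38 cents: the greedy method gives 1/3 + 1/22 + 1/825.
print(egyptian(Fraction(38, 100)))   # → [3, 22, 825]
```

Note the greedy answer is not the coin-friendly $\frac{1}{4} + \frac{1}{10} + \frac{3}{100}$; “always exists” and “most efficient” really are different promises.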

We may ask why the Egyptians used unit fractions. They seem inefficient compared to the way we work with fractions. Or, better, decimals. I’m not sure the question can have a coherent answer. Why do we have a fashion for converting fractions to a “proper” form? Why do we use the number of decimal points we do for a given calculation? Sometimes a particular mode of expression is the fashion. It comes to seem natural because everyone uses it. We do it too.

And there is practicality to them. Even efficiency. If you need π, for example, you can write it as 3 plus $\frac{1}{8}$ plus $\frac{1}{61}$ and your answer is off by under one part in a thousand. Combine this with the Egyptian method of multiplication, where you would think of (say) “11 times π” as “1 times π plus 2 times π plus 8 times π”. And with tables they had worked up which tell you what $\frac{2}{8}$ and $\frac{2}{61}$ would be in a normal representation. You can get rather good calculations without having to do more than addition and looking up doublings. Represent π as 3 plus $\frac{1}{8}$ plus $\frac{1}{61}$ plus $\frac{1}{5020}$ and you’re correct to within one part in 130 million. That isn’t bad for having to remember four whole numbers.
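Those accuracy figures are easy to check; this sketch just restates the claims above as tolerances:

```python
# Checking the Egyptian-style approximations of pi from the essay.
import math

three_terms = 3 + 1/8 + 1/61
four_terms = three_terms + 1/5020

print(abs(math.pi - three_terms) / math.pi)   # well under 1/1000
print(abs(math.pi - four_terms) / math.pi)    # under 1/130,000,000
```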

(The Ancient Egyptians, like many of us, were not absolutely consistent in only using unit fractions. They had symbols to represent $\frac{2}{3}$ and $\frac{3}{4}$, probably due to these numbers coming up all the time. Human systems vary to make the commonest stuff we do easier.)

Enough practicality or efficiency, if this is that. Is there beauty? Is there wonder? Certainly. Much of it is in number theory. Number theory splits between astounding results and results that would be astounding if we had any idea how to prove them. Many of the astounding results are about unit fractions. Take, for example, the harmonic series $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \cdots$. Truncate that series whenever you decide you’ve had enough. Different numbers of terms in this series will add up to different sums; there are infinitely many possible sums. The sums will grow ever-higher. There’s no number so big that it won’t, eventually, be surpassed by some long-enough truncated harmonic series. And yet, past the number 1, it’ll never touch a whole number again. Infinitely many partial sums. Partial sums differing from one another by one-googol-plex and smaller. And yet, of the infinitely many whole numbers, this series manages to miss them all, after its starting point. Worse, any sum of consecutive terms, not even starting from 1, will never hit a whole number. I can understand a person who thinks mathematics is boring, but how can anyone not find it astonishing?
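With exact rational arithmetic you can watch the partial sums dodge every whole number; a hundred terms is an arbitrary stopping point:

```python
# Partial sums of the harmonic series, kept as exact fractions.
# Past the first term, none of them is ever a whole number.
from fractions import Fraction

partial = Fraction(0)
for n in range(1, 101):
    partial += Fraction(1, n)
    if n > 1:
        assert partial.denominator != 1   # i.e. not a whole number

print(float(partial))   # about 5.187 after a hundred terms
```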

There are more strange, beautiful things. Consider heptagonal numbers, which Iva Sallay knows well. These are numbers like 1 and 7 and 18 and 34 and 55 and 1288. Take a heptagonal number of, oh, beads or dots or whatever, and you can lay them out to form a regular seven-sided figure. Add together the reciprocals of the heptagonal numbers. What do you get? It’s a weird number. It’s irrational, which you maybe would have guessed as more likely than not. But it’s also transcendental. Most real numbers are transcendental. But it’s often hard to prove any specific number is.
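The n-th heptagonal number is $\frac{n\left( 5n - 3 \right)}{2}$, which makes the numbers, and a partial sum of their reciprocals, easy to generate (the thousand-term cutoff is arbitrary):

```python
# Heptagonal numbers and a partial sum of their reciprocals.
def heptagonal(n):
    """The n-th heptagonal number, n(5n - 3)/2."""
    return n * (5 * n - 3) // 2

print([heptagonal(n) for n in range(1, 7)])   # → [1, 7, 18, 34, 55, 81]
print(heptagonal(23))                         # → 1288, from the essay

partial = sum(1 / heptagonal(n) for n in range(1, 1001))
print(partial)   # creeping toward the transcendental limit
```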

Unit fractions creep back into actual use. For example, in modular arithmetic, they offer a way to turn division back into multiplication. Division, in modular arithmetic, tends to be hard. Indeed, if you need an algorithm to make random-enough numbers, you often will do something with division in modular arithmetic. Suppose you want to divide by a number x, modulo y, and x and y are relatively prime, though. Then unit fractions tell us how to turn this into finding a greatest common divisor.
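A sketch of that turn, using the extended Euclidean algorithm (the function names are mine):

```python
# "Unit fractions" mod y: dividing by x means multiplying by the
# inverse of x, which the extended Euclidean algorithm finds.
# This requires x and y to be relatively prime.

def mod_inverse(x, y):
    """Return z with (x * z) % y == 1, via extended Euclid."""
    old_r, r = x, y
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("x and y are not relatively prime")
    return old_s % y

# "1/3" mod 7 is 5, since 3 * 5 = 15, which is 1 more than 2 * 7.
print(mod_inverse(3, 7))   # → 5
```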

They teach us about our computers, too. Much of serious numerical mathematics involves matrix multiplication. Matrices are, for this purpose, tables of numbers. The Hilbert Matrix has elements that are entirely unit fractions. The Hilbert Matrix is really a family of square matrices. Pick any of the family you like. It can have two rows and two columns, or three rows and three columns, or ten rows and ten columns, or a million rows and a million columns. Your choice. The first row is made of the numbers $1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4},$ and so on. The second row is made of the numbers $\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \frac{1}{5},$ and so on. The third row is made of the numbers $\frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \frac{1}{6},$ and so on. You see how this is going.

Matrices can have inverses. It’s not guaranteed; matrices are like that. But the Hilbert Matrix does. It’s another matrix, of the same size. All the terms in it are integers. Multiply the Hilbert Matrix by its inverse and you get the Identity Matrix. This is a matrix, the same number of rows and columns as you started with. But nearly every element in the identity matrix is zero. The only exceptions are on the diagonal — first row, first column; second row, second column; third row, third column; and so on. There, the identity matrix has a 1. The identity matrix works, for matrix multiplication, much like the real number 1 works for normal multiplication.

Matrix multiplication is tedious. It’s not hard, but it involves a lot of multiplying and adding and it just takes forever. So set a computer to do this! And you get … uh …

For a small Hilbert Matrix and its inverse, you get an identity matrix. That’s good. For a large Hilbert Matrix and its inverse? You get garbage. “Large” maybe isn’t very large. A 12 by 12 matrix gives you trouble. A 14 by 14 matrix gives you a mess. Well, on my computer it does. Cute little laptop I got when my former computer suddenly died. On a better computer? One designed for computation? … You could do a little better. Less good than you might imagine.
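Here is roughly that experiment, sketched with NumPy; exactly where the trouble starts will vary with the machine and the library:

```python
# Build the Hilbert Matrix in floating point, multiply it by its
# computed inverse, and measure how far the product drifts from the
# identity matrix as the size grows.
import numpy as np

def hilbert(n):
    """The n-by-n Hilbert Matrix, H[i][j] = 1 / (i + j + 1)."""
    return np.array([[1.0 / (i + j + 1) for j in range(n)]
                     for i in range(n)])

for n in (4, 8, 12):
    h = hilbert(n)
    product = h @ np.linalg.inv(h)
    error = np.max(np.abs(product - np.eye(n)))
    print(n, error)   # the error explodes as n grows
```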

The trouble is that computers don’t really do mathematics. They do an approximation of it, numerical computing. Most use a scheme called floating point arithmetic. It mostly works well. There’s a bit of error in every calculation. For most calculations, though, the error stays small. At least relatively small. The Hilbert Matrix, built of unit fractions, doesn’t respect this. It and its inverse have a “numerical instability”. Some kinds of calculations make errors explode. They’ll overwhelm the meaningful calculation. It’s a bit of a mess.

Numerical instability is something anyone doing mathematics on the computer must learn. Must grow comfortable with. Must understand. The matrix multiplications, and inverses, that the Hilbert Matrix involves highlight those. A great and urgent example of a subtle danger of computerized mathematics waits for us in these unit fractions. And we’ve known and felt comfortable with them for thousands of years.

There’ll be some mathematical term with a name starting ‘V’ that, barring surprises, should be posted Friday. What’ll it be? I have an idea at least. It’ll be available at this link, as are the rest of these glossary posts.

## How November 2018 Treated My Mathematics Blog

I knew that November 2018 was going to be a less busy month around here than October would. I didn’t have the benefit of hosting the Playful Mathematics Education Blog Carnival for it. I’m hoping to host the carnival again, though. Not until after the new year. Not until after I’ve finished the Fall 2018 A To Z and have had some time to recuperate. It’s a weird thing but writing two 1500-to-2000-word essays each week hasn’t lightened my workload the way I figured. If you’re interested in the current Blog Carnival, by the way, here it is. Anyway, as reversions to the norm go, November was not bad. Here’s what it looked like.

So there were 1,611 pages viewed here in November. Down from the 2,010 of October, but noticeably higher than September’s 1,505. That’s still a third-highest month (March 2018 was busier still). But it’s weirdly gratifying. There were 847 unique visitors logged in November. That’s down from October’s 1,063, and even September’s 874. I make this out as my fifth-most-visitors month on record. All those months have been this year.

85 things got liked in November. That’s down from October’s 94, up from September’s 65, and overall part of a weird pattern. My likes are definitely declining over time. But there’s little local peaks. If there’s any pattern it’s kind of a sawtooth, with the height of the teeth dropping. I have no explanation for this phenomenon. There were 36 comments in November, well down from October’s 60, but equal to September’s. It’s above the running average of the last two months (28.5 comments per month) but it’s still well below, like, the average commentary you can expect on the Comics Curmudgeon. Granted, we serve different purposes.

Of the most popular essays this month the top two were perennials. Some A to Z stuff filled out the rest. I’m including the top six posts here because there was a tie for fourth place, and sixth place was barely behind that. If this reason seems ad hoc, you understand it correctly. Read a lot around here were:

And where were all these readers coming from? Here’s the roster of countries and their readership totals:

United States 1,038
United Kingdom 72
Philippines 66
India 46
Denmark 37
Singapore 32
Australia 26
Sweden 15
Slovenia 14
Italy 12
Netherlands 12
Spain 11
Hong Kong SAR China 9
Germany 8
Brazil 7
Croatia 7
United Arab Emirates 7
Romania 6
Thailand 6
France 5
Puerto Rico 5
South Africa 5
Venezuela 5
European Union 4
Indonesia 4
Mexico 4
Norway 4
Pakistan 4
Poland 4
Austria 3
Israel 3
Nepal 3
Russia 3
Switzerland 3
Turkey 3
Algeria 2
Argentina 2
Belgium 2
Bulgaria 2
China 2
Finland 2
Georgia 2
Ghana 2
Greece 2
Japan 2
Jordan 2
Malaysia 2
New Zealand 2
Nigeria 2
Panama 2
Peru 2
Portugal 2
South Korea 2
Sri Lanka 2
Taiwan 2
Belize 1
Bhutan 1
Colombia 1 (***)
Costa Rica 1
Czech Republic 1 (**)
Guernsey 1
Kenya 1
Lebanon 1
Namibia 1
Palestinian Territories 1
Qatar 1
Saudi Arabia 1

70 countries sent me readers in November 2018. That’s down from October’s 74 but up from September’s 58. 13 of them were single-reader countries, down from October’s 23 and September’s 14. Czech Republic has been a single-reader country for three months. Colombia for four months now.

According to the Insights panel, I start the month at 71,506 total page views for the 1,185 posts I’ve done altogether. It also records 35,384 unique visitors, but I again have to defensively insist WordPress didn’t count unique visitors for the first couple months I was around here. I swear.

I published 23 posts in November. A to Z months tend to be busy ones. These posts held something like 26,644 words in total. For the 165 things I had posted this year, through to the start of December, I averaged 1,108 words per post. That’s up from the start of November’s 996 words per post, but still. I’m averaging 5.3 likes per post, and 2.7 comments per post. At the start of last month I was averaging 5.5 likes and 2.8 comments per post. This is probably not any important kind of variation. There’ve been 450 total comments and 870 total likes this year, as of the start of December.

## Reading the Comics, November 27, 2018: Multiplication Edition

Last week Comic Strip Master Command sent out just enough on-theme comics for two essays, the way I do them these days. The first half has some multiplication in two of the strips. So that’s enough to count as a theme for me.

Aaron Neathery’s Endtown for the 26th depicts a dreary, boring school day by using arithmetic. A lot of times tables. There is some credible in-universe reason to be drilling on multiplication like this. The setting is one where the characters can’t expect to have computers available. That granted, I’m not sure there’s a point to going up to memorizing four times 27. Going up to twelve-times seems like enough for common uses. For multiplying numbers of two or more digits together we usually break the problem up into a string of single-digit multiplications.
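That breaking-up can be written out directly. Here’s a minimal Python sketch of the grade-school method, using nothing but single-digit products and place-value shifts; the function name is my own choice:

```python
def long_multiply(a, b):
    """Multiply non-negative integers a and b using only single-digit
    products, place-value shifts, and addition -- the grade-school way."""
    total = 0
    for i, da in enumerate(reversed(str(a))):
        for j, db in enumerate(reversed(str(b))):
            # one single-digit product, shifted by both place values
            total += int(da) * int(db) * 10 ** (i + j)
    return total

print(long_multiply(27, 4))     # 108
print(long_multiply(123, 456))  # 56088
```

Which is why twelve-times tables suffice: every product a person is likely to do by hand decomposes into these little pieces.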

There are a handful of bigger multiplications that can make your life easier to know, like how four times 25 is 100. Or three times 33 is pretty near 100. But otherwise? … Of course, the story needs the class to do something dull and seemingly pointless. Going deep into multiplication tables communicates that to the reader quickly.

Thaves’s Frank and Ernest for the 26th is a spot of wordplay. Also a shout-out to my friends who record mathematics videos for YouTube. It is built on the conflation between the ideas of something multiplying and the amount of something growing. It’s easy to see where the idea comes from; just keep hitting ‘x 2’ on a calculator and the numbers grow excitingly fast. You get even more exciting results with ‘x 3’ or ‘x π’. But multiplying by 1 is still multiplication. As is multiplying by a number smaller than 1. Including negative numbers. That doesn’t hurt the joke any. That multiplying two things together doesn’t necessarily give you something larger is a consideration when you’re thinking rigorously about what multiplication can do. It doesn’t have to be part of normal speech.
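The calculator game the strip builds on is easy to simulate. A small Python sketch, with a function name of my own choosing, showing that repeated multiplication can grow, stall, or shrink:

```python
def repeat_multiply(start, factor, times):
    """The sequence you'd see by repeatedly hitting 'x factor' on a calculator."""
    values = [start]
    for _ in range(times):
        values.append(values[-1] * factor)
    return values

print(repeat_multiply(1, 2, 5))    # [1, 2, 4, 8, 16, 32] -- exciting growth
print(repeat_multiply(1, 1, 5))    # [1, 1, 1, 1, 1, 1] -- still multiplication
print(repeat_multiply(1, 0.5, 5))  # [1, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```

All three are multiplication; only the first looks like the joke’s idea of “multiplying”.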

John Hambrock’s The Brilliant Mind of Edison Lee for the 27th uses the form of a word problem to show off Edison’s gluttony. Edison tries to present it as teaching. We all have rationalizations for giving in to our appetites.

Nate Frakes’s Break of Day for the 27th is the anthropomorphic numerals joke for the week. I don’t know that there’s anything in the other numerals being odds rather than evens, or a mixture of odds and evens. It might just be that they needed to be anything but 1.

All of my regular Reading the Comics posts should be at this link. The next in my Fall 2018 Mathematics A To Z glossary should be posted Tuesday. I’m glad if you do come around and read again.

## My 2018 Mathematics A To Z: Tiling

For today’s A to Z topic I again picked one nominated by aajohannas. This was after I realized I was falling into a never-ending research spiral on Mr Wu, of Mathtuition, and his suggested “torus”. I do have an older essay describing the torus, as a set. But that does leave out a lot of why a torus is interesting. Well, we’ll carry on.

# Tiling.

Here is a surprising thought for the next time you consider remodeling the kitchen. It’s common to tile the floor. Perhaps some of the walls behind the counter. What patterns could you use? And there are infinitely many possibilities. You might leap ahead of me and say, yes, but they’re all boring. A tile that’s eight inches square is different from one that’s twelve inches square and different from one that’s 12.01 inches square. Fine. Let’s allow that all square tiles are “really” the same pattern. The only difference between a square two feet on a side and a square half an inch on a side is how much grout you have to deal with. There are still infinitely many possibilities.

You might still suspect me of being boring. Sure, there’s a rectangular tile that’s, say, six inches by eight inches. And one that’s six inches by nine inches. Six inches by ten inches. Six inches by one millimeter. Yes, I’m technically right. But I’m not interested in that. Let’s allow that all rectangular tiles are “really” the same pattern. So we have “squares” and “rectangles”. There are still infinitely many tile possibilities.

Let me shorten the discussion here. Draw a quadrilateral. One that doesn’t intersect itself. That is, there’s four corners, four lines, and there’s no X crossings. If you have that, then you have a tiling. Get enough of these tiles and arrange them correctly and you can cover the plane. Or the kitchen floor, if you have a level floor. It might not be obvious how to do it. You might have to rotate alternating tiles, or set them in what seem like weird offsets. But you can do it. You’ll need someone to make the tiles for you, if you pick some weird pattern. I hope I live long enough to see it become part of the dubious kitchen package on junk home-renovation shows.

Let me broaden the discussion here. What do I mean by a tiling if I’m allowing any four-sided figure to be a tile? We start with a surface. Usually the plane, a flat surface stretching out infinitely far in two dimensions. The kitchen floor, or any other mere mortal surface, approximates this. But the floor stops at some point. That’s all right. The ideas we develop for the plane work all right for the kitchen. There’s some weird effects for the tiles that get too near the edges of the room. We don’t need to worry about them here. The tiles are some collection of open sets. No two tiles overlap. The tiles, plus their boundaries, cover the whole plane. That is, every point on the plane is either inside exactly one of the open sets, or it’s on the boundary between one (or more) sets.

There isn’t a requirement that all these sets have the same shape. We usually do, and will limit our tiles to one or two shapes endlessly repeated. It seems to appeal to our aesthetics and our installation budget. Using a single pattern allows us to cover the plane with triangles. Any triangle will do. Similarly any quadrilateral will do. For convex pentagonal tiles — here things get weird. There are fourteen known families of pentagons that tile the plane. Each member of the family looks about the same, but there’s some room for variation in the sides. Plus there’s one more special case that can tile the plane, but only that one shape, with no variation allowed. We don’t know if there’s a sixteenth pattern. But then until 2015 we didn’t know there was a fifteenth, and that was the first pattern found in thirty years. Might be an opening for someone with a good eye for doodling.

There are also exciting opportunities in convex hexagons. Anyone who plays strategy games knows a regular hexagon will tile the plane. (Regular hexagonal tilings fit a certain kind of strategy game well. Particularly they imply an equal distance between the centers of any adjacent tiles. Square and triangular tiles don’t guarantee that. This can imply better balance for territory-based games.) Irregular hexagons will, too. There are three known families of irregular hexagons that tile the plane. You can treat the regular hexagon as a special case of any of these three families. No one knows if there’s a fourth family. Ready your notepad at the next overlong, agenda-less meeting.

There aren’t tilings of the plane by identical convex heptagons, figures with seven sides. Nor by figures of eight sides, nor nine, nor any higher number. You can cover the plane if you allow non-convex figures. See any Tetris game where you keep getting the ‘s’ or ‘t’ shapes. And you can cover the plane if you use several different shapes.

There’s some guidance if you want to create your own periodic tilings. I see it called the Conway Criterion. I don’t know the field well enough to say whether that is a common term. It could be something one mathematics popularizer thought of and that other popularizers imitated. (I don’t find “Conway Criterion” on the Mathworld glossary, but that isn’t definitive.) Suppose your polygon satisfies a couple of rules about the shapes of the edges. The rules are given in that link earlier this paragraph. If your shape does, then it’ll be able to tile the plane. If you don’t satisfy the rules, don’t despair! It might yet. The Conway Criterion tells you when some shape will tile the plane. It won’t tell you that something won’t.

(The name “Conway” may nag at you as familiar from somewhere. This criterion is named for John H Conway, who’s famous for a bunch of work in knot theory, group theory, and coding theory. And in popular mathematics for the “Game of Life”. This is a set of rules on a grid of numbers. The rules say how to calculate a new grid, based on this first one. Iterating them, creating grid after grid, can make patterns that seem far too complicated to be implicit in the simple rules. Conway also developed an algorithm to calculate the day of the week, in the Gregorian calendar. It is difficult to explain to the non-calendar fan how great this sort of thing is.)
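Those Game of Life rules are compact enough to sketch in a few lines. This is a minimal Python version of my own devising, keeping the grid as a set of live cells rather than an array:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life, where `live` is the
    set of (row, col) cells currently alive."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a column become three in a row, and back.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(life_step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```

Iterating `life_step` on richer starting patterns gives the gliders and oscillators that made the Game famous, all implicit in those two short rules.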

This has all gotten to periodic tilings. That is, these patterns might be complicated. But if need be, we could get them printed on a nice square tile and cover the floor with that. Almost as beautiful and much easier to install. Are there tilings that aren’t periodic? Aperiodic tilings?

Well, sure. Easily. Take a bunch of tiles with a right angle, and two 45-degree angles. Put any two together and you have a square. So you’re “really” tiling squares that happen to be made up of a pair of triangles. For each pair, toss a coin to decide whether you put the diagonal as a forward or backward slash. Done. That’s not a periodic tiling. Not unless you had a weird run of luck on your coin tosses.
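That coin-toss construction is easy to try out. A small Python sketch, function name my own; each printed character is the diagonal chosen for one square tile:

```python
import random

def random_diagonal_tiling(rows, cols, seed=None):
    """Each unit square gets a randomly chosen diagonal, '/' or '\\',
    giving a tiling by right triangles that is almost surely aperiodic."""
    rng = random.Random(seed)
    return ["".join(rng.choice("/\\") for _ in range(cols))
            for _ in range(rows)]

for row in random_diagonal_tiling(4, 8, seed=1):
    print(row)
```

The seed is just for reproducibility; leave it off and every run gives a fresh, almost-certainly-aperiodic pattern.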

All right, but is that just a technicality? We could have easily installed this periodically and we just added some chaos to make it “not work”. Can we use a finite number of different kinds of tiles, and have it be aperiodic however much we try to make it periodic? And through about 1966 mathematicians would have mostly guessed that no, you couldn’t. If you had a set of tiles that would cover the plane aperiodically, there was also some way to do it periodically.

And then in 1966 came a surprising result. No, not Penrose tiles. I know you want me there. I’ll get there. Not there yet though. In 1966 Robert Berger — who also attended Rensselaer Polytechnic Institute, thank you — discovered such a tiling. It’s aperiodic, and it can’t be made periodic. Why do we know Penrose Tiles rather than Berger Tiles? A couple of reasons, including that Berger had to use 20,426 distinct tile shapes. In 1971 Raphael M Robinson simplified matters a bit and got that down to six shapes. Roger Penrose in 1974 squeezed the set down to two, though at the cost of adding some rules about what edges may and may not touch one another. (You can turn this into a pure edge-matching condition by putting notches into the shapes.) That really caught the public imagination. It’s got simplicity and accessibility to combine with beauty. Aperiodic tiles seem to relate to “quasicrystals”, which are what the name suggests and do turn up in some materials. And aperiodic tiling embraces our need to have not too much order in our order.

I’ve discussed, in all this, tiling the plane. It’s an easy surface to think about and a popular one. But we can form tiling questions about other shapes. Cylinders, spheres, and toruses seem like they should have good tiling questions available. And we can imagine “tiling” stuff in more dimensions too. If we can fill a volume with cubes, or rectangles, it’s natural to wonder what other shapes we can fill it with. My impression is that fewer definite answers are known about the tiling of three- and four- and higher-dimensional space. Possibly because it’s harder to sketch out ideas and test them. Possibly because the spaces are that much stranger. I would be glad to hear more.

I’m hoping now to have a nice relaxing weekend. I won’t. I need to think of what to say for the letter ‘U’. On Tuesday I hope that it will join the rest of my A to Z essays at this link.

## Reading the Comics, November 24, 2018: Origins Edition

I’m not sure there is a theme to the back half of last week’s mathematically-based comic strips. If there is, it’s about showing some origins of things. I’ll go with that title, then.

Bill Holbrook’s On The Fastrack for the 21st is another in the curious thread of strips about Fi talking about mathematics. She’s presented as doing a good job inspiring kids to appreciate mathematics as a fun, exciting, interesting thing to think about. It’s good work. And I hope this does not sound like I am envious of a more successful, if fictional, mathematics popularizer. But I don’t see much in the strip of her doing this side job well. That is, of making the case that mathematics is worth the time spent on it. That’s a lot to ask given the confines of a syndicated daily newspaper comic strip, yes. What we can expect is some hint of what the actual good argument would look like. But this particular day’s strip rings false to me, for example. I don’t see how “here’s some pizza — but first, here’s a pop quiz” makes mathematics look as something other than a chore.

Pizza area offers many ways into mathematical ideas. How the area depends on the size of the pizza, for example. How the area depends on the shape, even independently of the size. How to slice a pizza fairly, especially if it’s not to be between four or six or eight people. What is the strangest shape you could make that would give people equal areas? Just the way slices intersect at angles inspires neat little geometry problems. How you might arrange toppings opens up symmetries and tilings, which are surprisingly big areas of mathematics. Setting problems on a pizza gives them a tangibility that could help capture young minds, surely. But I can’t make myself believe that this is a conversation to have when the pizza is entering the room.
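Some of those pizza questions reduce to quick arithmetic. A Python sketch, with function names of my own choosing, of two of them, area versus size and fair wedge angles:

```python
import math

def pizza_area(diameter_inches):
    """Area of a circular pizza, in square inches."""
    return math.pi * (diameter_inches / 2) ** 2

# Area grows with the square of the diameter, so sizes compare
# less intuitively than they look:
one_16 = pizza_area(16)      # about 201 square inches
two_12 = 2 * pizza_area(12)  # about 226 square inches
print(one_16 < two_12)       # True: two 12-inch pies beat one 16-inch

def slice_angle(n_people):
    """Central angle, in degrees, for n equal wedge-shaped slices."""
    return 360 / n_people

print(slice_angle(5))  # 72.0 -- equal shares even for five people
```

That squared-diameter surprise alone is the sort of thing that could hook a young mind, once the pizza is safely served.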

Mike Peters’s Mother Goose and Grimm for the 22nd is a lottery joke. So if we suppose this was written about the last time the Powerball jackpot reached a half-billion dollars we can work out how far ahead of publication Mike Peters is working. One solid argument against ever buying a lottery ticket is, as Grimm notes, that you have zero chance of winning. (I’m open to an argument based on expectation value. And even more, I don’t object to people spending a reasonable bit of disposable income “foolishly”.) Mother Goose argues that her chances are vastly worse if she doesn’t buy a ticket. This is true. Are her chances “astronomically” worse? … That depends. A one in three hundred million chance (to use, roughly, the Powerball odds) is so small that it won’t happen to you. Is that any different than a zero in three hundred million chance [*]? Or than a six in three hundred million chance? In any case it won’t happen to you.

[*] Do you actually have zero chance of winning if you don’t have a ticket? I say no, you don’t. Someone might give you a winning ticket. Maybe you find one as a bookmark in a library book. Maybe you find it on the street and figure, what the heck, I’ll check. Unlikely? Sure. But impossible? Hardly.
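For the record, that one-in-three-hundred-million figure can be checked directly. A quick Python computation, assuming the Powerball format of the time, five white balls drawn from 69 plus one red ball from 26:

```python
from math import comb  # Python 3.8+

# Jackpot: match all 5 of 69 white balls and the 1 of 26 red balls.
combinations = comb(69, 5) * 26
print(combinations)  # 292201338

chance_with_ticket = 1 / combinations  # about 3.4e-9 per ticket
chance_without_ticket = 0.0            # barring a found or gifted ticket
print(chance_with_ticket)
```

Whether 3.4 billionths differs meaningfully from zero is exactly the question Mother Goose and Grimm are arguing about.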

Johnny Hart’s Back to BC for the 22nd has the form of the world’s oldest story problem. It could also be a joke about the discovery of the concept of zero and the struggle to understand it as a number. Given that clams are used as currency in the BC setting it also shows how finance has driven mathematical development. So the strip actually packs a fair bit of stuff into two panels. … And I’ll admit I’m not quite sure the joke parses, but if you read it quickly it looks like a good enough joke.

Johnny Hart’s Back to BC for the 24th is a more obvious joke. And it’s built on the learning abilities of animals, and the number sense of animals. A large animal stomping a foot evokes, to me at least, Clever Hans. This was a horse presented in the early 20th century as being able to actually do arithmetic. The horse would be given a question and would stomp his hoof enough times to get to the right answer. However good the horse’s number sense might be, he had quite good behavioral sense. It turned out — after brilliant and pioneering work in animal cognition — that Hans was observing his trainer’s body language. When Wilhelm von Osten was satisfied that there’d been the right number of stomps, the horse stopped. This is sometimes presented as Hans “merely” taking subconscious cues from his trainer. But consider how carefully the horse must have been observing an animal with a very different body, and how it must have understood cues of satisfaction. I can’t call that “mere”. And the work of tracking down a signal that von Osten himself did not know he was sending (and, apparently, never accepted that he did) is also amazing. It serves as a reminder how hard biologists and zoologists have to work.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 24th gives a bit of Dad History about perspective. And, particularly, why artists didn’t seem to use it much before the 16th century. It gets more blatantly tied to mathematics by pointing out how it took ten thousand years of civilization to get Cartesian coordinates. We can argue about how many years civilization has been around. But it does seem strange that we went along for certainly the majority of that time without Cartesian coordinates. They seem so obvious it’s almost hard to not think of them. Many good ideas have such a legacy.

It’s easy to say why older pictures didn’t use perspective, though. For the most part, artists didn’t think perspective gave them something they wanted to show. Ancient peoples knew of perspective. It’s not as if ancient peoples were any dumber than we are, or any less able to look at square tiles held at different angles and at different distances. But we can convey information about the importance of things, or the flow of action of things, using position and relative size. That can be more important than showing that yes, an artist is aware that a square building far away looks small.

I’m less sure what I know about the history of coordinate systems, though, and particularly why it took until René Descartes to describe them. We have a legend of Descartes lying in bed, watching a fly on the tiled ceiling, and realizing he could describe where the fly was by what row and column of tile it was on. (In the past I have written this as though it happened. In writing this essay I went looking for a primary source and found nobody seems to have one. I shall try not to pass it on again without being very clear that it is just a legend.) But there have been tiled floors and walls and ceilings for a very long time. There have been flies even longer. Why didn’t anyone notice this?

One answer may be that they did. We just haven’t heard about it, because it was found by someone who didn’t catch the interest of a mathematical community. There’s likely a lot of such lost mathematics out there. But still, why not? Wouldn’t anyone with a mathematical inclination see that this is plainly a great discovery? And maybe not. What made Cartesian coordinates great was the realization that arithmetic and geometry, previously seen as separate liberal arts, were duals. A problem in one had an expression as a problem in the other. If you don’t make that connection, then Cartesian coordinates don’t solve any problems you have. They’re just a new way to index things you didn’t need indexed. So that would slow down using them any.

All of my regular Reading the Comics posts should be at this link. Tomorrow should see the posting of my next Fall 2018 Mathematics A To Z essay. And there’s still time to put in requests for the last half-dozen letters of the alphabet.