## Reading the Comics, June 26, 2022: First Doldrums of Summer Edition

I have not kept secret that I’ve had little energy lately. I hope that’s changing but can do little more than hope. I find it strange that my lack of energy seems to be matched by Comic Strip Master Command. Last week saw pretty slim pickings for mathematically-themed comics. Here’s what seems worth the sharing from my reading.

Lincoln Peirce’s Big Nate for the 22nd is a Pi Day joke, displaced to the prank day at the end of Nate’s school year. It’s also got a surprising number of people in the comments complaining that 3.1416 is only an approximation to π. It is, certainly, but so is any representation besides π or a similar mathematical expression. And introducing it with 3.1416 gives the reader the hint that this is about a mathematics expression and not an arbitrary symbol. It’s important to the joke that this be communicated clearly, and it’s hard to think of better ways to do that.

Dave Whamond’s Reality Check for the 24th is another in the line of “why teach algebra instead of something useful” strips. There are several responses. One is that certainly one should learn how to do a household budget; this was, at least back in the day, called home economics, and it was a pretty clear use of mathematics. Another is that a good education is about becoming literate in all the great thinking of humanity: you should come out knowing at least something coherent about mathematics and literature and exercise and biology and music and visual arts and more. Schools often fail to do all of this — how could they not? — but that’s no reason to fault them on the parts of the education that they do deliver. And another is that algebra is about getting comfortable working with numbers before you know just what they are. That is, how to work out ways to describe a thing you want to know, and then to find what number (or range of numbers) that is. Still, these responses hardly matter. Mathematics has always lived in a twin space, of being both very practical and very abstract. People have always and will always complain that students don’t learn how to do the practical well enough. There’s not much changing that.

Charles Schulz’s Peanuts Begins for the 26th sees Violet challenge Charlie Brown to say what a non-perfect circle would be. I suppose this makes the comic more suitable for a philosophy of language blog, but I don’t know any. To be a circle requires meeting a particular definition. None of the things we ever point to and call circles meets that. We don’t generally have trouble connecting our imperfect representations of circles to the “perfect” ideal, though. And Charlie Brown said something meaningful in describing his drawing as being “a perfect circle”. It’s tricky pinning down exactly what it is, though.

And that is as much as last week moved me to write. This and my other Reading the Comics posts should be at this link. We’ll see whether the upcoming week picks up any.

## Reading the Comics, June 18, 2022: Pizza Edition

I’m back with my longest-running regular feature here. As I’ve warned I’m trying not to include every time one of the newspaper comics (that is, mostly, ones running on Comics Kingdom or GoComics) mentions the existence of arithmetic. So, for example, both Frank and Ernest and Rhymes with Orange did jokes about the names of the kinds of triangles. You can clip those at your leisure; I’m looking to discuss deeper subjects.

Scott Hilburn’s The Argyle Sweater is … well, it’s just an anthropomorphic-numerals joke. I have a weakness for The Wizard of Oz, that’s all. Also, I don’t know, but somewhere in the nine kerspillion authorized books written since Baum’s death there must be at least one with a “wizard of odds” plot.

Bill Amend’s FoxTrot reads almost like a word problem’s setup. There’s a difference in cost between pizzas of different sizes. Jason and Marcus make the supposition that they could buy the difference in sizes. They are asking for something physically unreasonable, but in a way that mathematics problems might do. The ring of pizza they’d be buying would be largely crust, after all. (Some people like crust, but I doubt any are ten-year-olds like Jason and Marcus.) The obvious word problem to spin out of this is extrapolating the costs of 20-inch or 8-inch pizzas, and maybe the base cost of making any pizza however tiny.

You can think of a 16-inch-wide circle as a 12-inch-wide circle with an extra ring around it. (An annulus, we’d say in the trades.) This is often a useful way to look at circles. If you get into calculus you’ll see the extra area you get from a slight increase in the diameter (or, more likely, the radius) all over the place. Also, in three dimensions, the difference in volume you get from an increase in diameter. There are also a good number of theorems with names like Green’s and Stokes’s. These are all about what you can know about the interior of a shape, like a pizza, from what you know about the ring around the edge.
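A quick way to see how much pizza is in that ring is to compute the areas directly. A minimal sketch in Python; the prices are invented for illustration, not taken from the strip:

```python
import math

def pizza_area(diameter_inches):
    # Area of a circular pizza from its advertised diameter.
    return math.pi * (diameter_inches / 2) ** 2

small, large = pizza_area(12), pizza_area(16)
ring = large - small   # the annulus: a 16-inch pizza minus a 12-inch one
print(round(small), round(large), round(ring))  # 113 201 88 (square inches)

# With hypothetical prices, the "price of the ring" Jason and Marcus want:
price_12, price_16 = 10.00, 14.00   # invented numbers
print((price_16 - price_12) / ring)  # dollars per square inch of crust-heavy ring
```

The ring comes out to 28π, about 88 square inches, which is not far off the area of the entire 12-inch pizza. That surprise is exactly the sort of thing that makes this a good word problem.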

Jim Meddick’s Monty sees Sedgwick, spoiled scion of New Jersey money, preparing for a mathematics test. He’s allowed the use of an abacus, one of the oldest and best-recognized computational aids. The abacus works by letting us turn the operations of basic arithmetic into physical operations. This has several benefits. We (generally) understand things in space pretty well. And the beads and wires serve as aids to memory, always a struggle. Sedgwick also brings out a “hyperbolic abacus”, a tool for more abstract operations like square roots and sines and cosines. I don’t know of anything by that name, but you can design mechanical tools to do particular computations. Slide rules, for example, generally have markings to let one calculate square roots and cube roots easily. Aircraft pilots might use a flight computer, a set of plastic discs to do quick estimates of flight time, fuel consumption, ground speed, and such. (There’s even an episode of the original Star Trek where Spock fiddles with one!)

I have heard, but not seen, that specialized curves were made to let people square circles with something approximating a compass-and-straightedge method. A contraption to calculate sines and cosines would not be hard to imagine. It would need to be a post on a hinge, mostly, with a set of lines to read off sine and cosine values over a range of angles. I don’t know of one that existed, as it’s easy enough to print out a table of trig functions, but it wouldn’t be hard to make.

And that’s enough for this week. This and all my other Reading the Comics posts should be at this link. I hope to get this back to a weekly column, but that does depend on Comic Strip Master Command doing what’s convenient for me. We’ll see how it turns out.

## I shouldn’t keep hiding the Playful Math Education Carnivals from you

I have not had the time or energy to host the Playful Math Education Carnival for a while now. I hope that changes but I don’t know when it will. Still, there is no good reason for me not to let you know when Denise Gaskins’ project, of gathering educational or recreational or just delightful mathematics links, has a new edition.

Nature Study Australia, which started as a nature-study and activity blog, hosts the most recent essay. It’s the 156th of the sequence, and so starts with the sorts of fun facts about the number 156. From there it spreads into calculation tricks and practices, and eventually into the games and activities that highlight these sequences.

If you should write about mathematics even just sometimes, you might consider hosting the carnival. It’s a worthwhile challenge, and you can sign up for future months at Denise Gaskins’s blog.

## Reading the Comics, June 3, 2022: Prime Choices Edition

I intended to be more casual about these comics when I resumed reading them for their mathematics content. I feel like Comic Strip Master Command is teasing me, though. There has been an absolute drought of comics with enough mathematics for me to really dig into. You can see that from this essay, which covers nearly a month of the strips I read and has two pieces that amount to “the cartoonist knows what a prime number is”. I must go with what I have, though.

Mark Anderson’s Andertoons for the 12th of May I would have sworn was a repeat. If it is, I don’t seem to have featured it before. It gives us Wavehead — I’ve learned his name is not consistent — learning about division. The first kind of division, at least, with a quotient and a remainder. The novel thing here, with integer division, is that the result is not a single number, but rather an ordered pair. I hadn’t thought about it that way before, I suppose since integer division and ordered pairs are introduced so far apart in one’s education.

We mostly put away this division-with-remainders as soon as we get comfortable with decimals. 19 ÷ 4 becoming “4 remainder 3” or “4.75” or “$4 \frac{3}{4}$” all impose a roughly equal cognitive load. But this division reappears in (high school) algebra, when we start dividing polynomials. (Almost anything you can do with integers has a similar thing you can do with polynomials. This is not just because you can rewrite the integer “4” as the polynomial “f(x) = 0x + 4”.) There may be something easier to understand in turning $\left(x^2 + 3x - 3\right) \div \left(x - 2\right)$ into $\left(x + 5\right)$ remainder $7$.
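The ordered-pair character of integer division is built right into many programming languages, and the same long-division loop carries over to polynomials. A sketch in Python; the little coefficient-list divider is my own illustration, not anything from the strip:

```python
# Integer division as an ordered pair: divmod returns (quotient, remainder).
q, r = divmod(19, 4)
print(q, r)              # 4 3
assert 19 == 4 * q + r   # the defining property of the pair

def poly_divmod(num, den):
    # Polynomial long division; coefficients listed highest power first.
    num, quot = list(num), []
    while len(num) >= len(den):
        coef = num[0] / den[0]
        quot.append(coef)
        padded = den + [0] * (len(num) - len(den))
        num = [a - coef * b for a, b in zip(num, padded)][1:]
    return quot, num

# (x^2 + 3x - 3) divided by (x - 2): quotient x + 5, remainder 7.
print(poly_divmod([1, 3, -3], [1, -2]))  # ([1.0, 5.0], [7.0])
```

The two calls are structurally the same operation, which is the point of the comparison: divide, record how much goes in evenly, keep what’s left over as the second half of the pair.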

A thing happening here is that integer arithmetic is a ring. We study a lot of rings, as it’s not hard to come up with things that look like addition and subtraction and multiplication. We don’t assume a ring has division that stays inside the set; division can turn results into pairs, like with integers or with polynomials. Having that division (along with a couple of other conditions) makes the ring into a field, so-called because we don’t have enough things called a “field” already.

Scott Hilburn’s The Argyle Sweater for the 16th of May is one of the prime number strips from this collection. About the only note worth mention is that the indivisibility of 3 depends on supposing we mean the integer 3. If we decided 3 was a real number, we would have every real number other than zero as a divisor. There are similar results for complex numbers or polynomials. I imagine there’s a good fight one could get going about whether 3-in-integer-arithmetic is the same number as 3-in-real-arithmetic. I’m not ready for that right now, though.

I like the blood bag Dracula’s drinking from. Nice touch.

Dave Coverly’s Speed Bump for the 16th of May names the ways to classify triangles based on common side lengths (or common angles). There is some non-absurdity in the joke’s premise. Not the existence of these particular pennants. But that someone who loves a subject enough to major in it will often be a bit fannish about it? Yes. It’s difficult to imagine going any other way. You need to get to a pretty high level of mathematics to go seriously into triangles, but the option is there.

Dave Whamond’s Reality Check for the 3rd of June is the other comic strip playing on the definition of “prime”. Here it’s applied to the hassle of package delivery, and the often comical way that items will get boxed in what seems to be no logical pattern. But there is a reason behind that lack of pattern. It is an extremely hard problem to get bunches of things together at once. It gets even harder when those things have to come from many different sources, and get warehoused in many disparate locations. Add to that the shipper’s understandable desire to keep stuff sitting around, waiting, for as little time as possible. So the waste in packaging and handling and delivery costs seems worth it: better to send an order in ten boxes than to work out how to send it all in one.

It feels like an obvious offense to reason to use four boxes to send five items. It can be hard to tell whether the cost of organizing things into fewer boxes outweighs the additional cost of transporting, mostly, air. This is not to say that I think the choice is necessarily made correctly. I don’t trust organizations to not decide “I dunno, we always did it this way”. I want instead to note that when you think hard about a question it often becomes harder to say what a good answer would be.

I can give you a good answer, though, if your question is how to read more comic strips alongside me. I try to put all my Reading the Comics posts at this link. You can see something like a decade’s worth of my finding things to write about students not answering word problems. Thank you for reading along with this.

## How May 2022 Treated My Mathematics Blog

The easy way to put this article is, if I don’t read my mathematics blog why should anyone else? There is truth to this. I have mentioned several times that this has been a difficult year for me, and I’ve had to ration where I put my energy. I’ve avoided going a whole week without a post, but it’s only by reposting old material that I’ve managed that. Even the old standby of writing about the mathematics in comic strips has fallen short, as Comic Strip Master Command isn’t sending so many worth my attention these days. These are strange times.

The result is a decline in my readership, although it’s less of one than I had expected. There were no comments at all around here in May, which, I have to say, seems fair. There wasn’t much to comment on, especially with just four essays posted. That’s my lowest posting volume in years. It’s also not the first time I had zero comments in a month, which takes some sting off.

So there were 2,057 page views here in May. That’s a bit below the twelve-month running mean of 2,212.3 views per month leading up to May. And below the running median of 2,114.5 views. Per posting, the number looks impressive, though, with 514.3 page views per posting. That beats the running mean of 309.1 and median of 302.8.

There were 1,358 unique visitors recorded in May. That’s again a slight decline from the 1,528.2 running mean and 1,461.5 running median. And, again, per posting the numbers seem impressive. 339.5 unique visitors with each posting, above the mean of 213.2 and median of 211.3. The implication, yes, is if I didn’t post at all I’d have infinitely many readers, a conclusion which hurts my feelings.

There were twenty likes given in May, up from April but still below the mean of 35.3 and median of 33. It’s a per-posting average of 5.0 likes per posting, above the mean of 4.6 and median of 4.2 but there’s no way there’s statistical significance to that. And, of course, no comments, compared to a running mean of 9.7 and median of 7.

With so few essays posted it’s easy to report the order of their popularity. I’m not sure whether their order depends on how interesting the text was or how early in the month they were posted. There’s no way the difference is statistically significant. But here are the May 2022 pieces ranked most popular to least:

WordPress figures I started the month with a grand total of 1,714 posts. These all together drew 3,319 comments and 161,316 page views from 97,265 recorded unique visitors. It also figures my average post for the month had 876 words in it, bringing my average post for the year 2022 down to 1,037 words per posting. I’ve managed to put together 40,451 words so far this year. This surprises me by being close to half what I’ve managed on my humor blog, where I post every day. There, I have several regular columns, such as story comic plot summaries, that are popular and relatively easy to write.

Having said all that, will this look at May’s figures affect my writing any? I do think I have enough comic strips for a post, that should be next Wednesday, at least. If Comic Strip Master Command works with me, there could be more. But this all will depend on my emotional and energy reserves.

Some of my faithful readers may wonder: am I preparing to say something sad about this year’s A-to-Z? I’m not prepared to say, not yet. What I am is thinking about whether I want to commit to such a big, hard project. I am aware how much it would tax me to do, and while I would like to have it done, there is so much doing to get there. It will depend on how June treats me.

## About Chances of Winning on The Price Is Right, Again

While I continue to wait for time and muse and energy and inspiration to write fresh material, let me share another old piece. This bit from a decade ago examines statistical quirks in The Price Is Right. Game shows offer a lot of material for probability questions. The specific numbers have changed since this was posted, but the substance hasn’t. I got a bunch of essays out of one odd incident mentioned once on the show, and let me do something useful with that now.

To the serious game show fans: Yes, I am aware that the “Item Up For Bid” is properly called the “One-Bid”. I am writing for a popular audience. (The name “One-Bid” comes from the original, 1950s, run of the show, when the game was entirely about bidding for prizes. A prize might have several rounds of bidding, or might have just the one, and that format is the one used for the Item Up For Bid for the current, 1972-present, show.)

Putting together links to all my essays about trapezoid areas made me realize I also had a string of articles examining that problem of The Price Is Right, with Drew Carey’s claim that only once in the show’s history had all six winners of the Item Up For Bid come from the same seat in Contestants’ Row. As with the trapezoid pieces they form a more or less coherent whole, so, let me make it easy for people searching the web for the likelihood of clean sweeps or of perfect games on The Price Is Right to find my thoughts.

## Something Neat About Triangles, Again

I apologize for not having anything fresh to share today. It’s been a difficult week, one of many. So I would like to share something from years ago, and something I still find delightful.

I was reading a biography of Donald Coxeter, one of the most important geometers of the 20th century, and it mentioned in passing something Coxeter referred to as Morley’s Miracle Theorem. The theorem was proved in 1899 by Frank Morley, who taught at Haverford College (if that sounds vaguely familiar that’s because you remember it’s where Dave Barry went) and then Johns Hopkins (which may be familiar on the strength of its lacrosse team), and published this in the first issue of the Transactions of the American Mathematical Society. And, yes, perhaps it isn’t actually important, but the result is so unexpected and surprising that I wanted to share it with you. The biography also includes a proof Coxeter wrote for the theorem, one that’s admirably straightforward, but let me show the result without the proof so you can wonder about it.

First, start by drawing a triangle. It doesn’t have to have any particular interesting properties other than existing. I’ve drawn an example one.

The next step is to cut into three equal pieces each of the interior angles of the triangle, and draw those lines. I’m doing that in separate diagrams for each of the triangle’s three original angles because I want to better suggest the process.

I should point out, this trisection of the angles can be done however you like, which is probably going to be by measuring the angles with a protractor and dividing the angle by three. I made these diagrams just by sketching them out, so they aren’t perfect in their measure, but if you were doing the diagram yourself on a sheet of scratch paper you wouldn’t bother getting the protractor out either. (And, famously, you can’t trisect an angle if you’re using just compass and straightedge to draw things, but you don’t have to restrict yourself to compass and straightedge for this.)

Now the next bit is to take the points where adjacent angle trisectors intersect — that is, for example, where the lower red line crosses the lower green line; where the upper red line crosses the left blue line; and where the right blue line crosses the upper green line. Draw lines connecting these points together and …

This new triangle, drawn in purple on my sketch, is an equilateral triangle!

(It may look a little off, but that’s because I didn’t measure the trisectors when I drew them in and just eyeballed it. If I had measured the angles and drawn the new ones in carefully, it would have been perfect.)
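If you’d rather check the result numerically than with a protractor, the construction is short to code. A sketch in Python, with the triangle’s vertices chosen arbitrarily; any non-degenerate triangle should do:

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def cross(a, b):
    return a[0] * b[1] - a[1] * b[0]

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def rotate(v, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def intersect(p, d, q, e):
    # Point where the line through p with direction d meets the line
    # through q with direction e.
    t = cross(sub(q, p), e) / cross(d, e)
    return (p[0] + t * d[0], p[1] + t * d[1])

def morley_point(p, q, r):
    # Intersection of the angle trisectors at p and q nearest side pq;
    # r is the triangle's third vertex.
    def first_trisector(a, b, c):
        u, v = sub(b, a), sub(c, a)
        angle = math.atan2(abs(cross(u, v)), dot(u, v))  # interior angle at a
        sign = 1.0 if cross(u, v) > 0 else -1.0          # rotate toward c
        return rotate(u, sign * angle / 3.0)
    return intersect(p, first_trisector(p, q, r), q, first_trisector(q, p, r))

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)   # an ordinary scalene triangle
m1 = morley_point(A, B, C)
m2 = morley_point(B, C, A)
m3 = morley_point(C, A, B)
sides = [math.dist(m1, m2), math.dist(m2, m3), math.dist(m3, m1)]
print(sides)  # all three lengths agree: the inner triangle is equilateral
```

Changing A, B, and C to any other triangle leaves the three printed lengths equal, which is the miracle part.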

I’ve been thinking back on this and grinning ever since reading it. I certainly didn’t see that punch line coming.

## Reading the Comics, May 7, 2022: Does Comic Strip Master Command Not Do Mathematics Anymore Edition?

I mentioned in my last Reading the Comics post that it seems there are fewer mathematics-themed comic strips than there used to be. I know part of this is I’m trying to be more stringent. You don’t need me to say every time there’s a Roman numerals joke or that blackboards get mathematics symbols put on them. Still, it does feel like there’s fewer candidate strips. Maybe the end of the 2010s was a boom time for comic strips aimed at high school teachers and I only now appreciate that? Only further installments of this feature will let us know.

Jim Benton’s Jim Benton Cartoons for the 18th of April, 2022 suggests an origin for those famous overlapping circle pictures. This did get me curious what’s known about how John Venn came to draw overlapping circles. There’s no reason he couldn’t have used triangles or rectangles or any shape, after all. It looks like the answer is nobody really knows.

Venn, himself, didn’t name the diagrams after himself. Wikipedia credits Charles Dodgson (Lewis Carroll) as describing “Venn’s Method of Diagrams” in 1896. Clarence Irving Lewis, in 1918, seems to be the first person to write “Venn Diagram”. Venn wrote of them as “Eulerian Circles”, referencing the Leonhard Euler who just did everything. Sir William Hamilton — the philosopher, not the quaternions guy — posthumously published the Lectures On Metaphysics and Logic which used circles in these diagrams. Hamilton asserted, correctly, that you could use these to represent logical syllogisms. He wrote that the 1712 logic text Nucleus Logicae Weisianae — predating Euler — used circles, and was right about that. He got the author wrong, crediting Christian Weise instead of the correct author, Johann Christian Lange.

With 1712 the trail seems to end to this lay person doing a short essay’s worth of research. I don’t know what inspired Lange to try circles instead of any other shape. My guess, unburdened by evidence, is that it’s easy to draw circles, especially back in the days when every mathematician had a compass. I assume they weren’t too hard to typeset, at least compared to the many other shapes available. And you don’t need to even think about setting them with a rotation, the way a triangle or a pentagon might demand. But I also would not rule out a notion that circles have some connotation of perfection, in having infinite axes of symmetry and all points on them being equal in distance from the center and such. Might be the reasons fit in the intersection of the ethereal and the mundane.

Daniel Beyer’s Long Story Short for the 29th of April, 2022 puts out a couple of concepts from mathematical physics. These are all about geometry, which we now see as key to understanding physics. Particularly cosmology. The no-boundary proposal is a model constructed by James Hartle and Stephen Hawking. It’s about the first $10^{-43}$ seconds of the universe after the Big Bang. This is an era that was so hot that all our well-tested models of physical law break down. The salient part of the Hartle-Hawking proposal is the idea that in this epoch time becomes indistinguishable from space. If I follow it — do not rely on my understanding for your thesis defense — it’s kind of the way that stepping away from the North Pole first creates the ideas of north and south and east and west. It’s very hard to think of a way to test this which would differentiate it from other hypotheses about the first instances of the universe.

The Weyl Curvature is a less hypothetical construct. It’s a tensor, one of many interesting to physicists. This one represents the tidal forces on a body that’s moving along a geodesic. So, for example, how the moon of a planet gets distorted over its orbit. The Weyl Curvature also offers a way to describe how gravitational waves pass through vacuum. I’m not aware of any serious question of the usefulness or relevance of the thing. But the joke doesn’t work without at least two real physics constructs as setup.

Liniers’ Macanudo for the 5th of May, 2022 has one of the imps who inhabit the comic asserting responsibility for making mathematics work. It’s difficult to imagine what a creature could do to make mathematics work, or to not work. If pressed, we would say mathematics is the set of things we’re confident we could prove according to a small, pretty solid-seeming set of logical laws. And a somewhat larger set of axioms and definitions. (Few of these are proved completely, but that’s because it would involve a lot of fiddly boring steps that nobody doubts we could do if we had to. If this sounds sketchy, consider: do you believe my claim that I could alphabetize the books on the shelf to my right, even though I’ve never done that specific task? Why?) It would be like making a word-search puzzle not work.

The punch line, the blue imp counting seventeen of the orange imp, suggests what this might mean. Mathematics, as a set of statements following some rules, is a niche interest. What we like is how so many mathematical things seem to correspond to real-world things. We can imagine mathematics breaking that connection to the real world. The high temperature rising one degree each day this week may tell us something about this weekend, but it’s useless for telling us about November. So I can imagine a magical creature deciding what mathematical models still correspond to the things they model. Be careful in trying to change their mind.

And that’s as many comic strips from the last several weeks that I think merit discussion. All of my Reading the Comics posts should be at this link, though. And I hope to have a new one again sometime soon. I’ll ask my contacts with the cartoonists. I have about half of a contact.

## How April 2022 Treated My Mathematics Blog

This past month I moved towards the sort of thing that’s normal for my blog here. Mostly, Reading the Comics posts, with another piece that was about a mathematical curiosity. That is a typical selection of posts when I’m not doing something special, such as an A-to-Z sequence. So, with a new month begun, I like to see how it was received. As usual, I check WordPress’s statistics for the past month, and compare it to the running average for the twelve months leading up to that.

WordPress figures there were 2,121 page views here in April. That’s a little below the running mean of 2,286.8 page views. It’s almost exactly at the running median, though, of 2,122 page views in a month. So this suggests April turned out quite average. There were 1,404 recorded unique visitors. This is below the running mean of 1,602.7 unique visitors, and noticeably below the running median of 1,479. This suggests a month a bit below average.

Per posting, though? That suggests an increasing readership. There were 424.2 page views recorded per posting in April, above the running mean of 301.7 and running median of 302.8. There were 280.8 unique visitors per posting, also well above the 211.1 mean and 211.3 median. That’s not to say every post got 281 visitors, since many of the visitors looked at stuff from before April. This is what keeps me from re-blogging even more repeats.

That it was a slow month seems supported by the record of likes and comments, though. There were 19 likes given in April, well below the mean of 39.5 and median of 39. That’s a little less bad considered per posting, but still. That’s 3.8 likes per posting, below the running mean of 5.0 and running median of 4.5. There were an anemic two comments, way below the mean of 11.3 and median of 9.5. That’s just 0.4 comments per posting, compared to an already not-great mean of 1.4 and median of 1.2.

I had thought I posted more in April than a mere five pieces. Not so. Here’s the order of popularity of my posts, which are not quite in chronological order. I too quirk an eye at what the most popular thing of April was:

WordPress figures I posted 3,089 words in April, my fewest since September. And that comes to an average of 617.8 words per posting, again my lowest since September. For the year I’ve published 36,947 words, and have averaged 1,056 words per posting.

I started May with a total of 159,259 recorded page views from a recorded 95,907 unique visitors. But WordPress didn’t start telling us unique visitor counts until my blog here was a couple years old, so don’t take that too literally.

## How to Add Up Powers of Numbers

Do you need to know the formula for the sum of the first N counting numbers, each raised to some power? No, you do not. Not really. It can save a bit of time to know the sum of the numbers raised to the first power. Most mathematicians would know it, or be able to recreate it fast enough:

$\sum_{n = 1}^{N} n = 1 + 2 + 3 + \cdots + N = \frac{1}{2}N\left(N + 1\right)$

But there are similar formulas to add up, say, the counting numbers squared, or cubed, or so. And a toot on Mathstodon, the mathematics-themed instance of social network Mastodon, makes me aware of a cute paper about this. In it Dr Alessandro Mariani describes A simple mnemonic to compute sums of powers.

It’s a neat one. Mariani describes a way to use knowledge of the sum of numbers to the first power to generate a formula for the sum of squares. And then to use the sum of squares formula to generate the sum of cubes. The sum of cubes then lets you get the sum of fourth-powers. And so on. This takes a while to do if you’re interested in the sum of twentieth powers. But do you know how many times you’ll ever need to generate that formula? Anyway, as Mariani notes, this sort of thing is useful if you find yourself at a mathematics competition. Or some other event where you can’t just have the computer calculate this stuff.
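The bootstrapping idea, each formula built on the formulas below it, can also be done with the standard telescoping recurrence. A sketch in Python using that recurrence, not Mariani’s integrate-and-differentiate mnemonic, for which you should read the paper:

```python
from fractions import Fraction
from math import comb

def sum_of_powers(k):
    # Coefficients (constant term first) of S_k(N) = 1^k + 2^k + ... + N^k,
    # from the telescoping identity
    #   (N+1)^{m+1} - 1 = sum over j of C(m+1, j) * S_j(N),
    # solved for S_m using the already-built S_0 ... S_{m-1}.
    formulas = []
    for m in range(k + 1):
        poly = [Fraction(comb(m + 1, i)) for i in range(m + 2)]  # (N+1)^{m+1}
        poly[0] -= 1
        for j in range(m):
            for i, c in enumerate(formulas[j]):
                poly[i] -= comb(m + 1, j) * c
        formulas.append([c / (m + 1) for c in poly])
    return formulas[k]

def evaluate(coeffs, n):
    return sum(c * n ** i for i, c in enumerate(coeffs))

print(sum_of_powers(1))  # coefficients 0, 1/2, 1/2: that is, N(N+1)/2
print(evaluate(sum_of_powers(3), 10))  # 3025, the sum of the first ten cubes
```

Exact rational arithmetic matters here; floating point would quietly smudge the coefficients of the higher-power formulas.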

Mariani’s process is a great one. Like many mnemonics it doesn’t make literal sense. It expects one to integrate and differentiate polynomials. Anyone likely to be interested in a formula for the sums of twelfth powers knows how to do those in their sleep. But they’re integrating and differentiating polynomials for which, in context, the integrals and derivatives don’t exist. Or at least don’t mean anything. That’s all right. If all you want is the right answer, it’s okay to get there by a wrong method. At least if you verify the answer is right, which the last section of Mariani’s paper does. So, give it a read if you’d like to see a neat mathematical trick to a maybe useful result.

## Reading the Comics, April 17, 2022: Did I Catch Comic Strip Master Command By Surprise Edition

Part of the thrill of Reading the Comics posts is that the underlying material is wholly outside my control. The subjects discussed, yes, although there are some quite common themes. (Students challenging the word problem; lottery jokes; monkeys at typewriters.) But also quantity. Part of what burned me out on Reading the Comics posts back in 2020 was feeling the need to say something about lots of comic strips. Now?

I mentioned last week seeing only three interesting strips, and one of them, Andertoons, was a repeat I’d already discussed. This week there were only two strips that drew a first note and again, Andertoons was a repeat I’d already discussed. Mark Anderson’s comic for the 17th I covered in enough detail back in August of 2019. I don’t know how many new Andertoons are put into the rotation at GoComics. But the implication is Comic Strip Master Command ordered mathematics-comics production cut down, and they haven’t yet responded to my doing these again. I guess we’ll know for sure if things pick up in a couple weeks, as the lead time allows.

So Rick McKee and Kent Sligh’s Mount Pleasant for the 15th of April is all I have to discuss. It’s part of the long series of students resisting the teacher’s question. The teacher is asking a fair enough question, that of how to do a problem that has several parts. She does ask how we “should” solve the problem of finding what 4 + 4 – 2 equals. The catch is there are several ways to do this, all of them equally good. We know this if we’ve accepted subtraction as a kind of addition, and if we’ve accepted addition as commutative.

So the order is our choice. We can add 4 and 4 and then subtract 2. Or subtract 2 from the second 4, and then add that to the first 4. If you want, and can tell the difference, you could subtract 2 from the first 4, and then add the second 4 to that.

For this problem it doesn’t make any difference. But one can imagine similar ones where the order you tackle things in can make calculations easier, or harder. 5 + 7 – 2, for example, I find easier if I work it out as 5 + ( 7 – 2), that is, 5 + 5. So it’s worth taking a moment to consider whether rearranging it can make the calculation more reliable. I don’t know whether the teacher meant to challenge the students to see that there are alternatives, and no uniquely “right” answer. It’s possible McKee and Sligh did not have the teaching plan worked out.
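The orders described above are literally different expressions that happen to agree, which is the whole point. A quick sketch, reading “subtract 2” as “add negative 2”:

```python
a, b, c = 4, 4, -2  # the two 4s, with "subtract 2" recast as "add -2"

first = (a + b) + c   # add the fours, then subtract 2
second = a + (b + c)  # subtract 2 from the second 4, then add
third = (a + c) + b   # subtract 2 from the first 4, then add the other 4

print(first, second, third)  # 6 6 6
```

Commutativity and associativity of addition are what guarantee all three agree; that’s the algebra hiding under the teacher’s question.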

That makes for another week’s worth of comic strips to discuss. All of my Reading the Comics posts should be at this link. Thanks for reading this and I will let you know if Comic Strip Master Command increases production of comics with mathematics themes.

## Reading the Comics, April 10, 2022: Quantum Entanglement Edition

I remember part of why I stopped doing Reading the Comics posts regularly was their volume. I read a lot of comics and it felt like everyone wanted to do a word problem joke. Since I started easing back into these posts it’s seemed like they’ve disappeared. When I put together this week’s collection, I only had three interesting ones. And one was Andertoons for the 10th of April. Andertoons is a stalwart here, but this particular strip was one I already talked about, back in 2019.

Another was the Archie repeat for the 10th of April. And that only lists mathematics as a school subject. It would be the same joke if it were English lit. Saying “differential calculus” gives it the advantage of specificity. It also suggests Archie is at least a good enough student to be taking calculus in high school, which isn’t bad. Differential calculus is where calculus usually starts, with the study of instantaneous changes. A person can, and should, ask how a change can be instantaneous. Part of what makes differential calculus is learning how to define an instantaneous change in a way that matches our intuition about what it should be. And that never requires us to do something appalling like divide zero by zero. Our current definition took a couple centuries of wrangling to find a scheme that makes sense. It’s a bit much to expect high school students to pick it up in two months.

Ripley’s Believe It Or Not for the 10th of April, 2022 was the most interesting piece. This referenced a problem I didn’t remember having heard about, the “36 Officers puzzle” of Leonhard Euler. Euler’s name you know as he did foundational work in every field of mathematics ever. This particular puzzle dates to 1779, according to an article in Quanta Magazine which one of the Ripley’s commenters offered. Six army regiments each have six officers of six different ranks. How can you arrange them in a six-by-six square so that no row or column repeats a rank or regiment?

The problem sounds like it shouldn’t be hard. The two-by-two version of this is easy. So is three-by-three and four-by-four and even five-by-five. Oddly, seven-by-seven is, too. It looks like some form of magic square, and seems not far off being a sudoku problem either. So it seems weird that six-by-six should be particularly hard, but sometimes it happens like that. In fact, this happens to be impossible; a paper by Gaston Tarry in 1901 proved there were none.
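You can get a feel for why the small odd cases are easy from a classical construction (nothing to do with the 1901 proof): for odd n, putting the pair ((i + j) mod n, (i − j) mod n) in row i, column j gives exactly the officer arrangement the puzzle asks for. A sketch:

```python
# For odd n, cell (i, j) = ((i + j) mod n, (i - j) mod n) makes a
# Graeco-Latin square: read the first coordinate as rank, the second
# as regiment. Six is famously where constructions like this give out.
def graeco_latin(n):
    return [[((i + j) % n, (i - j) % n) for j in range(n)] for i in range(n)]

n = 5
square = graeco_latin(n)

# Every (rank, regiment) pair shows up exactly once...
pairs = {cell for row in square for cell in row}
print(len(pairs) == n * n)  # True

# ...and no row or column repeats a rank or a regiment.
columns = [list(col) for col in zip(*square)]
for line in square + columns:
    assert len({rank for rank, _ in line}) == n
    assert len({regiment for _, regiment in line}) == n
```

The construction depends on 2 having a multiplicative inverse mod n, which is why odd orders work so smoothly and even orders need other ideas.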

The solution discussed by Ripley’s is of a slightly different problem. So I’m not saying to not believe it, just, that you need to believe it with reservations. The modified problem casts this as a quantum-entanglement problem, in which the rank and regiment of an officer in one position is connected to that of their neighbors. I admit I’m not sure I understand this well enough to explain; I’m not confident I can give a clear answer why a solution of the entangled problem can’t be used for the classical problem.

The problem, at this point, isn’t about organizing officers anymore. It never was, since that started as an idle pastime. Legend has it that it started as a challenge about organizing cards; if you look at the paper you’ll see it presenting states as card suits and values. But the problem emerged from idle curiosity into practicality. These turn out to be applicable to quantum error detection codes. I’m not certain I can explain how myself. You might be able to convince yourself of this by thinking how you know that someone who tells you the sum of six odd numbers is itself an odd number made a mistake somewhere, and you can then look for what went wrong.

And that’s as many comics from last week as I feel like discussing. All my Reading the Comics posts should be gathered at this link. Thanks for reading this and I hope to do this again soon.

## Reading the Comics, April 2, 2022: Pi Day Extra Edition

I’m not sure that I will make a habit of this. It’s been a while since I did a regular Reading the Comics post, looking for mathematics topics in syndicated newspaper comic strips. I thought I might dip my toes in those waters again. Since my Pi Day essay there’ve been only a few with anything much to say. One of them was a rerun I’ve discussed before, too, a Bloom County Sunday strip that did an elaborate calculation to conceal the number 1. I’ve written about that strip twice before, in May 2016 and then in October 2016, so that’s too well-explained to need revisiting.

As it happens two of the three strips remaining were repeats, though ones I don’t think I’ve addressed before here.

Bill Amend’s FoxTrot Classics for the 18th of March looks like a Pi Day strip. It’s not, though: it originally ran the 16th of March, 2001. We didn’t have Pi Day back then.

What Peter Fox is doing is drawing a unit circle — a circle of radius 1 — and dividing it into a couple common angles. Trigonometry students are expected to know the sines and cosines and tangents of a handful of angles. If they don’t know them, they can work these out from first principles. Draw a line from the center of the unit circle at an angle measured counterclockwise from the positive x-axis. Find where that line you’ve just drawn intersects the unit circle. The x-coordinate of that point has the same value as the cosine of that angle. The y-coordinate of that point has the same value as the sine of that angle. And for a handful of angles — the ones Peter marks off in the second panel — you can work them out by reason alone.

These angles we know as, like, 45 degrees or 120 degrees or 135 degrees. Peter writes them as $\frac{\pi}{4}$ or $\frac{2}{3}\pi$ or $\frac{3}{4}\pi$, because these are radian measure rather than degree measure. It’s a different scale, one that’s more convenient for calculus. And for some ordinary uses too: an angle of (say) $\frac{3}{4}\pi$ radians sweeps out an arc of length $\frac{3}{4}\pi$ on the unit circle. You can see where that’s easier to keep straight than how long an arc of 135 degrees might be.
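A quick numerical check of both claims, the coordinates and the arc length, using nothing beyond Python’s math module:

```python
import math

# The ray at angle theta meets the unit circle at (cos theta, sin theta).
theta = math.pi / 4  # the 45-degree angle, in radians
x, y = math.cos(theta), math.sin(theta)
print(abs(x - math.sqrt(2) / 2) < 1e-12)  # True: cos(pi/4) is sqrt(2)/2
print(abs(y - math.sqrt(2) / 2) < 1e-12)  # True: so is sin(pi/4)

# Radian measure doubles as arc length on the unit circle:
# 135 degrees is (3/4) pi radians, so the arc is (3/4) pi units long.
arc = math.radians(135)
print(abs(arc - 3 * math.pi / 4) < 1e-12)  # True
```

That the same number serves as both angle and arc length is the convenience being described; degrees always need the extra factor of $\frac{\pi}{180}$.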

Drawing this circle is a good way to work out or remember sines and cosines for the angles you’re expected to know, which is why you’d get them on a trig test.

Scott Hilburn’s The Argyle Sweater for the 27th of March summons every humorist’s favorite piece of topology, the Möbius strip. Unfortunately the line work makes it look to me like Hilburn’s drawn a simple loop of a steak. Follow the white strip along the upper edge. Could be the restaurant does the best it can with a challenging presentation.

August Ferdinand Möbius by the way was an astronomer, working most of his career at the Observatory at Leipzig. (His work as a professor was not particularly successful; he was too poor a lecturer to keep students.) His father was a dancing teacher, and his mother was a descendant of Martin Luther, although I imagine she did other things too.

Rina Piccolo’s Tina’s Groove for the 2nd of April makes its first appearance in a Reading the Comics post in almost a decade. The strip ended in 2017 and only recently has Comics Kingdom started showing reprints. The strip is about the numerical coincidence between 3.14 of a thing and the digits of π. It originally ran at the end of March, 2007, which like the vintage FoxTrot reminds us how recent a thing Pi Day is to observe.

3.14 hours is three hours, 8.4 minutes, which implies that she clocked in at about 9:56.
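That conversion, for anyone checking along: the fractional part of 3.14 hours, times 60, gives the minutes.

```python
hours = 3.14
whole = int(hours)              # 3 whole hours
minutes = (hours - whole) * 60  # 0.14 hours is 8.4 minutes
print(whole, round(minutes, 1))
```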

And that’s this installment. All my Reading the Comics posts should be at this link. I don’t know when I’ll publish a next one, but it should be there, too. Thanks for reading.

## How March 2022 Treated My Mathematics Blog

I expected readers to be happy I was finishing the Little 2021 Mathematics A-to-Z. My doubt was how happy they would be. Turns out they were a middling amount of happy. So this is my regular review of the readership statistics for the past month, as provided by WordPress.

I published eight things in March, which is average for me the past twelve months. It was a long, long time ago that I went whole months posting something every day. But my twelve-month running mean has been 8.5 posts per month, and the median 8, so that’s just in line. There were 2,272 page views recorded in March, which is below the running mean of 2,336.4 and above the running median of 2,122. So, average, like I said. There were 1,545 unique visitors, below the running mean of 1,640.0 and above the running median of 1,479.

Prorated by posting, the showing is a little worse. There were 284.0 views and 193.1 unique visitors per posting in March. The running mean is 301.9 views and 211.6 visitors per posting. The median, 302.8 views and 211.3 visitors. I have no explanation for this phenomenon.

I have a hypothesis. There were 32 likes given in the month, below the mean of 39.3 and median of 35. But several of the posts were pointers to other essays and those are naturally less well-liked. That came to 4.0 likes per posting, below the mean of 4.9 likes per posting and median of 4.5 likes per posting. Comments were anemic again, with only four given in the month. The mean is an impossible-seeming 11.8 and median 10. Per posting, there were 0.5 comments here in March, compared to a mean of 1.4 and median of 1.2. So it goes.

What was popular in March? Pi Day comic strips, of course, and my making something out of the NCAA March Madness basketball tournament. Here’s the March postings in descending order of popularity.

Stuff from before this past month was popular too, including several of the individual Pi Day pages. And my post about the most and least likely dates for Easter, which is sure to be a seasonal favorite.

WordPress figures that I posted 6,655 words in March, for an average post length of 831.9 words. This brought my average words per post for the year down to 1,128, close to half what my average was at the end of February. If that number seems familiar it does to me too. I had 1,128 words per posting, on average, in January too, an event that caused me to go check that I hadn’t recorded something wrong. But that was also a month with many more posts (many repeats).

WordPress figures that I started April 2022 with a total of 1,705 posts here. They’d drawn 3,317 comments, with a total 157,138 views from 94,502 recorded unique visitors.

If you’d like to be a regular reader around here, please read. There’s a button at the upper right of the page, “Follow Nebusresearch”. That adds this blog to your WordPress reader. There’s a field below that to get posts e-mailed as they’re published. I do nothing with the e-mail except send those posts. WordPress probably has some incomprehensible page where they say what they do with your e-mails. And if you have an RSS reader, you can put the essays feed into that.

## What I Learned Writing the Little 2021 Mathematics A-to-Z

I try, at the end of each of these A-to-Z sessions, to think about what I’ve learned from the experience. The challenge is reliably interesting, thanks to the kind readers who suggest topics. While I reserve the right to choose my own subject for any letter, I usually go for what of the suggestions sounds most interesting. That nudges me out of my comfortable, familiar thoughts and into topics I know less well. I would never have written about cohomologies if I waited to think I had something to say about them.

I didn’t have any deep experiences like that this time, although I did get a better handle on tangent spaces and why we like them. Most of what I did learn was about process, and about how to approach writing here.

For example, I started appealing for topics more letters ahead than I had previous projects. The goal was to let myself build a reserve, so that I would have a week or more to let an essay sit while I re-thought what I’d said. Early on, this worked well and I liked the results. It also made it easier to tie essays together; multiplication and addition could complement one another. This is something I could expand on.

And varying from the strict alphabetical order seems to have worked too. The advantage of doing every letter in order is that I’m pushed into some unpromising letters, like ‘Q’ or ‘Y’. It’s fantastic when I get a good essay out of that. But that’s harder work. This time around I did three topics starting with A, and three with T, and there’s so many more I could write.

The biggest and hardest thing I learned was related to how my plans went awry. How I lost the several-weeks lead time I started with, and how I had to put the project on hold for nearly three months.

2021 was a hard year, after another hard year, after a succession of hard years. Mostly, these were hard years because the world had been hard. Wearying, which is why I started out doing a mere 15 essays instead of the full 26. But not things that too directly hit my personal comfort. During the Little 2021 A-to-Z, though, the hard got intimate. Personal disasters hit starting in mid-August, and kept progressing — or dragging out — through to the new year. Just in time for the world-hardness of the first Omicron wave of the pandemic.

I have always thought of myself as a Sabbath-is-made-for-Man person. That is, schedules are ways to help you get done what you want or need; they’re not of value in themselves. Yet I do value them. I like their hold, and I thrive within them. Part of my surviving the pandemic, when all normal activities stopped, was the schedule of things I write here and on my humor blog. They offered a reason to do something particular. If I were not living up to this commitment, then what was I doing?

The answer is I would be not stressing myself past what I can do. I like these A-to-Z essays, and all the writing I do, or I wouldn’t do it. It’s nourishing and often exciting. But it is labor, and it is stress. Exercising a bit longer or a bit harder than one feels able to helps one build endurance and strength. But there are times one’s muscles are exhausted, or one’s joints are worked too much, and you must rest. Not just stick to the routine exercise, but take a break so that you can recover. I had not taken a serious break since starting this blog, and hadn’t realized I would need to. Over the course of this A-to-Z I learned I sometimes need to, and I should.

I need also to think of what I will do next. I’m not sure when I will feel confident that I can do a full A-to-Z, or even a truncated version. My hunch is I need to do more mathematical projects here that are fun and playful. This implies thinking of fun and playful projects, and thinking is the hard part again. But I understand, in a way I had not before, that I can let go.

The whole of the Little 2021 Mathematics A-to-Z sequence should be at this link. And then at this link should be all of the A-to-Z essays from all past years. Thank you.

## What I Wrote About In My Little 2021 Mathematics A to Z

It’s good to have an index of the topics I wrote about for each of my A-to-Z sequences. It’s good for me, at least. It makes my future work much easier. And it might help people find past essays. I hope to have my essay about what I learned from a project that was supposed to be nearly one-third shorter, and ended up sprawling past its designated year, next week.

All of the Little 2021 Mathematics A-to-Z essays should be at this link. And gathered at this link should be all of the A-to-Z essays from all past years. Thank you for your reading.

## Reading the Comics, March 14, 2022: Pi Day Edition

As promised I have the Pi Day comic strips from my reading here. I read nearly all the comics run on Comics Kingdom and on GoComics, no matter how hard their web sites try to avoid showing comics. (They have some server optimization thing that makes the comics sometimes just not load.) (By server optimization I mean “tracking for advertising purposes”.)

Pi Day in the comics this year saw the event almost wholly given over to the phonetic coincidence that π sounds, in English, like pie. So this is not the deepest bench of mathematical topics to discuss. My love, who is not as fond of wordplay as I am, notes that the ancient Greeks likely pronounced the name of π about the same way we pronounce the letter “p”. This may be etymologically sound, but that’s not how we do it in English, and even if we switched over, that would not make things better.

Scott Hilburn’s The Argyle Sweater is one of the few strips not to be about food. It is set in the world of anthropomorphized numerals, the other common theme to the day.

John Hambrook’s The Brilliant Mind of Edison Lee leads off with the food jokes, in this case cookies rather than pie. The change adds a bit of Abbott-and-Costello energy to the action.

Mick Mastroianni and Mason Mastroianni’s Dogs of C Kennel gets our first pie proper, this time tossed in the face. One of the commenters observes that the middle of a pecan pie can really hold heat, “Ouch”. Will’s holding it in his bare paw, though, so it can’t be that bad.

Jules Rivera’s Mark Trail makes the most casual Pi Day reference. If the narrator hadn’t interrupted in the final panel no one would have reason to think this referenced anything.

Mark Parisi’s Off The Mark is the other anthropomorphic numerals joke for the day. It’s built on the familiar fact that the digits of π go on forever. This is true for any integer base. In base π, of course, the representation of π is just “10”. But who uses that? And in base π, the number six would be something with infinitely many digits. There’s no fitting that in a one-panel comic, though.
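If you’re curious what numbers look like in base π, a greedy expansion makes a workable sketch: peel off the biggest multiple of each power of π in turn. (This is one convention for digits in a non-integer base; there are others.)

```python
import math

def base_pi_digits(x, n_digits=6):
    # Greedy expansion: take the biggest multiple of each power of pi.
    top = int(math.log(x, math.pi))  # highest power of pi that fits in x
    digits = []
    for i in range(top, top - n_digits, -1):
        d = int(x / math.pi ** i)
        digits.append(d)
        x -= d * math.pi ** i
    return digits

print(base_pi_digits(math.pi))  # [1, 0, 0, 0, 0, 0] -- pi is "10" in base pi
print(base_pi_digits(6))        # starts [1, 2, ...] and never terminates
```

Notice the digits only ever run 0 through 3, since four of any power of π would spill over into the next power up.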

Doug Savage’s Savage Chickens is the one strip that wasn’t about food or anthropomorphized numerals. There is no practical reason to memorize digits of π, other than that you’re calculating something by hand and don’t want to waste time looking them up. In that case there’s not much call to go past 3.14. If you need more than about 3.14159, get a calculator to do it. But memorizing digits can be fun, and I will not underestimate the value of fun in getting someone interested in mathematics.

For my part, I memorized π out to 3.14159265358979323, so that’s seventeen digits past the decimal. Always felt I could do more and I don’t know why I didn’t. The next couple digits are 8462, which has a nice descending-fifths cadence to it. The 643 following is a neat coda. My describing it this way may give you some idea of how I visualize the digits of π. They might help you, if you figure for some reason you need to do this. You do not, but if you enjoy it, enjoy it.

Bianca Xunise’s Six Chix for the 15th ran a day late; Xunise only gets the comic on Tuesdays and the occasional Sunday. It returns to the food theme.

And this brings me to the end of this year’s Pi Day comic strips. All of my Reading the Comics posts, past and someday future, should be at this link. And my various Pi Day essays should be here. Thank you for reading.

## Let Me Remind You How Interesting a Basketball Tournament Is

Several years ago I stumbled into a nice sequence. All my nice sequences have been things I stumbled upon. This one looked at the most basic elements of information theory by what they tell us about the NCAA College Basketball tournament. This is (in the main) a 64-team single-elimination playoff. It’s been a few years since I ran through the sequence. But it’s been a couple years since the tournament could be run with a reasonably clear conscience too. So here are my essays:

And this spins off to questions about other sports events.

And I still figure to get to this year’s Pi Day comic strips. Soon. It’s been a while since I felt I had so much to write up.

## Here Are Past Years’ Pi Day Comic Strips

I haven’t yet read today’s comics; it takes a while to get through them. But I hope to summarize what Comic Strip Master Command has sent out for the syndicated comics for today. In the meanwhile, here’s Pi Day strips of past years.

And I have to offer a warning. GoComics.Com has discontinued a lot of comics in the past couple years. They’ve been brutal about removing the archives of strips they’ve discontinued. Comics Kingdom is similarly ruthless in removing strips not in production. And a recent and, to the user, bad code update broke a lot of what had been non-expiring links. But my discussions of the themes in the comic are still there. And, as I got more into the Reading the Comics project I got more likely to include the original comic. So that’s some compensation.

Here’s the past several years in comics from on or around the 14th of March:

• 2015, featuring The Argyle Sweater, Baldo, The Chuckle Brothers, Dog Eat Doug, FoxTrot Classics, Herb and Jamaal, Long Story Short, The New Adventures of Queen Victoria, Off The Mark, and Working Daze.
• 2016, featuring The Argyle Sweater, B.C., Brewster Rockit, The Brilliant Mind of Edison Lee, Curtis, Dog Eat Doug, F Minus, Free Range, and Holiday Doodles.
• 2017, featuring 2 Cows and a Chicken, Archie, The Argyle Sweater, Arlo and Janis, Lard’s World Peace Tips, Loose Parts, Off The Mark, Saturday Morning Breakfast Cereal, TruthFacts, and Working Daze.
• 2018, featuring The Argyle Sweater, Bear With Me, Funky Winkerbean Classic, Mutt and Jeff, Off The Mark, Savage Chickens, Warped, and Working Daze.
• 2019, featuring The Brilliant Mind of Edison Lee, Liz Climo’s Cartoons, The Grizzwells, Off The Mark, and Working Daze.
• 2020, featuring Baldo, Calvin and Hobbes, Off The Mark, Real Life Adventures, Reality Check, and Warped.
• 2021, featuring Agnes, The Argyle Sweater, Between Friends, Breaking Cat News, FoxTrot, Frazz, Get Fuzzy, Heart of the City, Reality Check, and Studio Jantze.

As mentioned, I have yet to read today’s comics. I’m looking forward to it, at least to learn what Funky Winkerbean character I’m going to be most annoyed with this week. It will be Les Moore. I was also going to look forward to seeing if there would ever be a Pi Day strips roundup without The Argyle Sweater or Reality Check. It turns out there was one in 2019. Weird how you can get the impression something is always there even when it’s not.

## My Little 2021 Mathematics A-to-Z: Zorn’s Lemma

The joke to which I alluded last week was a quick pun. The setup is, “What is yellow and equivalent to the Axiom of Choice?” It’s the topic for this week, and the conclusion of the Little 2021 Mathematics A-to-Z. I again thank Mr Wu, of Singapore Maths Tuition, for a delightful topic.

# Zorn’s Lemma

Max Zorn did not name it Zorn’s Lemma. You expected that. He thought of it just as a Maximal Principle when introducing it in a 1934 presentation and 1935 paper. The word “lemma” connotes that some theorem is a small thing. It usually means it’s used to prove some larger and more interesting theorem. Zorn’s Lemma is one of those small things. With the right background, a rigorous proof is a couple not-too-dense paragraphs. Without the right background? It’s one of those proofs you read the statement of and nod, agreeing, that sounds reasonable.

The lemma is about partially ordered sets. A set’s partially ordered if it has a relationship between pairs of items in it. You will sometimes see a partially ordered set called a “poset”, a term of mathematical art which makes me smile too. If we don’t know anything about the ordering relationship we’ll use the ≤ symbol, just as if these were ordinary numbers. To be partially ordered, whenever x ≤ y and y ≤ x, we know that x and y must be equal. And the converse: if x = y then x ≤ y and y ≤ x. We also need the relationship to be transitive, so that x ≤ y and y ≤ z together give us x ≤ z. What makes this partial is that we’re not guaranteed that every x and y relate in some way. It’s a totally ordered set if we’re guaranteed that at least one of x ≤ y and y ≤ x is always true. And then there is such a thing as a well-ordered set. This is a totally ordered set for which every subset (unless it’s empty) has a minimal element.

If we have a couple elements, each of which we can put in some order, then we can create a chain. If x ≤ y and y ≤ z, then we can write x ≤ y ≤ z and we have at least three things all relating to one another. This seems like stuff too basic to notice, if we think too literally about the relationship being “is less than or equal to”. If the relationship is, say, “divides wholly into”, then we get some interesting different chains. Like, 2 divides into 4, which divides into 8, which divides into 24. And 3 divides into 6 which divides into 24. But 2 doesn’t divide into 3, nor 3 into 2. 4 doesn’t divide into 6, nor 6 into either 8 or 4.

So what Zorn’s Lemma says is, if all the chains in a partially ordered set each have an upper bound, then, the partially ordered set has a maximal element. “Maximal element” here means an element that doesn’t have a bigger comparable element. (That is, m is maximal if there’s no other element b for which m ≤ b. It’s possible that m and b can’t be compared, though, the way 6 doesn’t divide 8 and 8 doesn’t divide 6.) This is a little different from a “maximum”. It’s possible for there to be several maximal elements. But if you parse this as “if you can always find a maximum in a string of elements, there’s some maximum element”? And remember there could be many maximums? Then you’re getting the point.
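The divisibility example makes maximal elements concrete. A short sketch, with “x ≤ y” read as “x divides y”:

```python
# With "x <= y" meaning "x divides y", the set {2, 3, 4, 6, 8} has two
# maximal elements: 6 and 8 each divide nothing else in the set, and
# neither divides the other, so neither counts as "bigger".
elements = [2, 3, 4, 6, 8]

def divides(a, b):
    return b % a == 0

maximal = [m for m in elements
           if not any(m != b and divides(m, b) for b in elements)]
print(maximal)  # [6, 8]
```

Two maximal elements and no maximum: that’s the distinction the parenthetical above is drawing.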

You may also ask how this could be interesting. Zorn’s Lemma is an existence proof. Most existence proofs assure us a thing we thought existed does, but don’t tell us how to find it. This is all right. We tend to rely on an existence proof when we want to talk about some mathematical item but don’t care about fussy things like what it is. It is much the way we might talk about “an odd perfect number N”. We can describe interesting things that follow from having such a number even before we know what value N has.

A classic example, the one you find in any discussion of using Zorn’s Lemma, is about the basis for a vector space. This is like deciding how to give directions to a point in space. But vector spaces include some quite abstract things. One vector space is “the set of all functions you can integrate”. Another is “matrices whose elements are all four-dimensional rotations”. There might be literally infinitely many “directions” to go. How do we know we can find a set of directions that work as well as, for guiding us around a city, the north-south-east-west compass rose does? Zorn’s Lemma gives the answer: every vector space has a basis. There are other things done all the time, too. A nontrivial ring-with-identity, for example, has to have a maximal ideal. (An ideal is a subset of the ring that’s closed under addition and that absorbs multiplication: multiply an element of the ideal by anything in the ring and you stay inside the ideal.) This is handy to know if you’re working with rings a lot.

The joke in my prologue was built on the claim Zorn’s Lemma is equivalent to the Axiom of Choice. The Axiom of Choice is a piece of set theory that surprised everyone by being independent of the Zermelo-Fraenkel axioms. The Axiom says that, if you have a collection of disjoint nonempty sets, then there must exist at least one set with exactly one element from each of those sets. That is, you can pick one thing out of each of a set of bins. It’s easy to see what this has in common with Zorn’s Lemma: both seem too obvious to imagine proving. That’s the sort of thing that makes a good axiom. Thing about a lemma, though, is we do prove it. That’s how we know it’s a lemma. How can a lemma be equivalent to an axiom?

I’ll argue by analogy. In Euclidean geometry one of the axioms is this annoying statement about on which side of a line two other lines that intersect it will meet. If you have this axiom, you can prove some nice results, like, the interior angles of a triangle add up to two right angles. If you decide you’d rather make your axiom that bit about the interior angles adding up? You can go from that to prove the thing about two lines crossing a third line.

So it is here. If you suppose the Axiom of Choice is true, you can get Zorn’s Lemma: you can pick an element in your set, find a chain for which that’s the minimum, and find your maximal element from that. If you make Zorn’s Lemma your axiom? You can use x ≤ y to mean “x is a less desirable element to pick out of this set than is y”. And then you can choose a maximal element out of your set. (It’s a bit more work than that, but it’s that kind of work.)

There’s another theorem, or principle, that’s (with reservations) equivalent to both Zorn’s Lemma and the Axiom of Choice. It’s another piece that seems so obvious it should defy proof. This is the well-ordering theorem, which says that every set can be well-ordered. That is, so that every non-empty subset has some minimum element. Finally, a mathematical excuse for why we have alphabetical order, even if there’s no clear reason that “j” should come after “i”.

(I said “with reservations” above. This is because whether these are equivalent depends on what, precisely, kind of deductive logic you’re using. If you are not using ordinary first-order logic, and are using a “second-order logic” instead, they differ.)

Ernst Zermelo introduced the Axiom of Choice to set theory so that he could prove this in a way that felt reasonable. I bet you can imagine how you’d go from “every non-empty set has a minimum element” right back to “you can always pick one element of every set”, though. And, maybe if you squint, can see how to get from “there’s always a minimum” to “there has to be a maximum”. I’m speaking casually here because proving it precisely is more work than we need to do.

I mentioned how Zorn did not name his lemma after himself. Mathematicians typically don’t name things for themselves. Nor did he even think of it as a lemma. His name seems to have adhered to the principle in the late 30s. Credit the nonexistent mathematician Bourbaki writing about “le théorème de Zorn”. By 1940 John Tukey, celebrated for the Fast Fourier Transform, wrote of “Zorn’s Lemma”. Tukey’s impression was that this is how people in Princeton spoke of it at the time. He seems to have been the first to put the words “Zorn’s Lemma” in print, though. Zorn wasn’t the first to have stated this. Kazimierz Kuratowski, in 1922, described what is clearly Zorn’s Lemma in a different form. Zorn remembered being aware of Kuratowski’s publication but did not remember noticing the property. The Hausdorff Maximal Principle, of Felix Hausdorff, has much the same content. Zorn said he did not know about Hausdorff’s 1927 paper until decades later.

Zorn’s lemma, the Axiom of Choice, the well-ordering theorem, and Hausdorff’s Maximal Principle all date to the early 20th century. So do a handful of other ideas that turn out to be equivalent. This was an era when set theory saw an explosive development of new and powerful ideas. The point of describing this chain is to emphasize that great concepts often don’t have a unique presentation. Part of the development of mathematics is picking through several quite similar expressions of a concept. Which one do we enshrine as an axiom, or at least the canonical presentation of the idea?

We have to choose.

And with this I at last declare the hard work of the Little 2021 Mathematics A-to-Z at an end. I plan to follow up, as traditional, with a little essay about what I learned while doing this project. All of the Little 2021 Mathematics A-to-Z essays should be at this link. And then all of the A-to-Z essays from all eight projects should be at this link. Thank you so much for your support in these difficult times.

## How February 2022 Treated My Mathematics Blog

This past month I finished my hiatus, the one where I reran old A-to-Z pieces instead of finishing off what I thought would be a simple, small project for 2021. And, after a mishap, got back to finishing things. As a result I published fewer pieces in February than I had in any month since October. I had an inflated posting record in December and January, from reposting old material. I expected the end of that to shrink my readership again. And, yes, that’s what happened.

In February, according to WordPress, I attracted 1,875 page views. That’s below the twelve-month running mean of 2,360.8 page views leading up to February 2022. It’s also below the running median of 2,151.5 page views. In fact, it’s the lowest number of page views in a month going back to July 2020, around here.

Ah, but what about unique visitors? There were 1,313 of those, figures WordPress. That’s below the twelve-month running mean of 1,661.9 and the running median of 1,534.5. It happens that’s also the lowest monthly figure going back to July 2020. (Although only by a whisker: July 2021 had a couple more views, and unique visitors, than February 2022 did. I don’t know what’s wrong with Julys around here.)

The number of likes dropped to 28, way below the mean of 40.9 and median of 39.5. And that was the lowest count since November of 2021. And there were only two comments, way below the mean of 14.9 and median of 10. I haven’t been below that figure since December of 2019. At least these are non-July dates to deal with.

This would all be too sad to bear except that if you look at these figures per posting? Then they snap right back into line. Like, February saw an average of 312.5 page views every time I posted something. The twelve months leading up to that saw a mean of 301.6 page views per posting and a median of 302.8 page views per posting. February saw 218.8 unique visitors per posting. The running mean was 212.2 and running median 211.3. Even the likes become not so bad: 4.7 per posting. The mean was 5.1 and the median 4.9. In this figuring, the only dire number was comments, a scant 0.3 per posting, compared to a mean of 1.9 and median of 1.4. So in that light, you know, things aren’t so bad.
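Those per-posting figures are just each monthly total divided by the number of posts. A quick check in Python; the count of six February posts is my inference from 1,875 views at 312.5 views per posting, not a figure WordPress reports directly:

```python
# Per-posting figures: each monthly total divided by the post count.
# Six posts is inferred from 1,875 views / 312.5 views per posting.
posts = 6
totals = {"views": 1875, "visitors": 1313, "likes": 28, "comments": 2}
for name, total in totals.items():
    print(f"{name}: {total / posts:.1f} per posting")
```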

What are the popular things of February? It’s worth running the whole list down. In decreasing order of popularity we have:

Other stuff, from before February, was even more popular, though. It’s getting to be the time of year people look to learn what the most and least likely dates of Easter are, for example. (Easter 2022 is set for the 17th of April. This is on the less-likely side of the band from the 28th of March through 21st of April when Easter is most likely. However, it is one of the most likely dates for Easter in the lifetime of anyone reading this blog, that is, for the span from 1925 to 2100.)

WordPress credits me with publishing 9,163 words in February, for an average post length of 1,527.2 words. This brings my average post length for the year up to 1,237. This is impressive considering I’ve been trying to write my A-to-Zs short for 2021.

WordPress figures that I started March 2022 having posted 1,697 things here. They’ve altogether drawn 3,313 comments from a total 154,866 page views and 92,956 logged unique visitors.

If you’d like to be a regular reader around here, please keep reading. There’s a button at the upper right of the page, “Follow Nebusresearch”, to add this blog to your WordPress reader. There’s a field below that to get posts sent to you in e-mail as they’re published. I do nothing with the e-mail except send those posts; I can’t say what WordPress Master Command does with them. And if you have an RSS reader, you can put the essays feed into that.

## My Little 2021 Mathematics A-to-Z: Ordinary Differential Equations

Mr Wu, my Singapore Maths Tuition friend, has offered many fine ideas for A-to-Z topics. This week’s is another of them, and I’m grateful for it.

# Ordinary Differential Equations

As a rule, if you can do something with a number, you can do the same thing with a function. Not always, of course, but the exceptions are fewer than you might imagine. I’ll start with one of those things you can do to both.

A powerful thing we learn in (high school) algebra is that we can use a number without knowing what it is. We give it a name like ‘x’ or ‘y’ and describe what we find interesting about it. If we want to know what it is, we (usually) find some equation or set of equations and find what value of x could make that true. If we study enough (college) mathematics we learn its equivalent in functions. We give something a name like f or g or Ψ and describe what we know about it. And then try to find functions which make that true.

There are a couple common types of equation for these not-yet-known functions. The kind you expect to learn as a mathematics major involves differential equations. These are ones where your equation (or equations) involve derivatives of the not-yet-known f. A derivative describes the rate at which something changes. If we imagine the original f is a position, the derivative is velocity. Derivatives can have derivatives also; this second derivative would be the acceleration. And then second derivatives can have derivatives also, and so on, into infinity. When an equation involves a function and its derivatives we have a differential equation.

(The second common type is the integral equation, using a function and its integrals. And a third involves both derivatives and integrals. That’s known as an integro-differential equation, and isn’t life complicated enough?)

Differential equations themselves naturally divide into two kinds, ordinary and partial. They serve different roles. Usually, with an ordinary differential equation, we can describe the change from knowing only the current situation. (This may include velocities and accelerations and stuff. We could ask what the velocity at an instant means. But never mind that here.) Usually a partial differential equation bases the change where you are on the neighborhood of your location. If you see holes you can pick in that, you’re right. The precise difference is about the independent variables. If the function f has more than one independent variable, it’s possible to take a partial derivative. This describes how f changes if one variable changes while the others stay fixed. If the function f has only the one independent variable, you can only take ordinary derivatives. So you get an ordinary differential equation.

But let’s speak casually here. If what you’re studying can be fully represented with a dashboard readout? Like, an ordered list of positions and velocities and stuff? You probably have an ordinary differential equation. If you need a picture with a three-dimensional surface or a color map to understand it? You probably have a partial differential equation.

One more metaphor. If you can imagine the thing you’re modeling as a marble rolling around on a hilly table? Odds are that’s an ordinary differential equation. And that representation covers a lot of interesting problems. Marbles on hills, obviously. But also rigid pendulums: we can treat the angle a pendulum makes and the rate at which those change as dimensions of space. The pendulum’s swinging then matches exactly a marble rolling around the right hilly table. Planets in space, too. We need more dimensions — three space dimensions and three velocity dimensions — for each planet. So, like, the Sun-Earth-and-Moon would be rolling around a hilly table with 18 dimensions. That’s all right. We don’t have to draw it. The mathematics works about the same. Just longer.

[ To be precise we need three momentum dimensions for each orbiting body. If they’re not changing mass appreciably, and not moving too near the speed of light, velocity is just momentum times a constant number, so we can use whichever is easier to visualize. ]

We mostly work with ordinary differential equations of either the first or the second order. First order means we have first derivatives in the equation, but never have to deal with more than the original function and its first derivative. Second order means we have second derivatives in the equation, but never have to deal with more than the original function or its first or second derivatives. You’ll never guess what a “third order” differential equation is unless you have experience in reading words. There are some reasons we stick to these low orders like first and second, though. One is that we know of good techniques for solving most first- and second-order ordinary differential equations. For higher-order differential equations we often use techniques that find a related normal old polynomial. Its solution helps with the thing we want. Or we break a high-order differential equation into a set of low-order ones. So yes, again, we search for answers where the light is good. But the good light covers many things we like to look at.
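That business of breaking a high-order equation into a set of low-order ones can be sketched in a few lines. This is a toy demonstration rather than any particular solver: the second-order equation $y'' = -y$, simple harmonic motion, becomes the first-order pair $y' = v$ and $v' = -y$, and Euler’s method steps the pair forward together.

```python
import math

# Rewrite y'' = -y as the first-order system y' = v, v' = -y,
# then step both equations forward together with Euler's method.
def solve_shm(y0, v0, dt, steps):
    y, v = y0, v0
    for _ in range(steps):
        # tuple assignment, so both updates use the old y and v
        y, v = y + dt * v, v - dt * y
    return y, v

dt = 0.0005
steps = round(2 * math.pi / dt)  # integrate over one full period
y, v = solve_shm(1.0, 0.0, dt, steps)
print(y)  # close to 1, since the exact solution is y(t) = cos(t)
```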

There’s simple harmonic motion, for example. It covers pendulums and springs and perturbations around stable equilibriums and all. This turns out to cover so many problems that, as a physics major, you get a little sick of simple harmonic motion. There’s the Airy function, which started out to describe the rainbow. It turns out to describe particles trapped in a triangular quantum well. The van der Pol equation, about systems where a small oscillation gets energy fed into it while a large oscillation gets energy drained. All kinds of exponential growth and decay problems. Very many functions where pairs of particles interact.

This doesn’t cover everything we would like to do. That’s all right. Ordinary differential equations lend themselves to numerical solutions. It requires considerable study and thought to do these numerical solutions well. But this doesn’t make the subject unapproachable. Few of us could animate the “Pink Elephants on Parade” scene from Dumbo. But could you draw a flip book of two stick figures tossing a ball back and forth? If you’ve had a good rest, a hearty breakfast, and have not listened to the news yet today, so you’re in a good mood?

The flip book ball is a decent example here, too. The animation will look good if the ball moves about the “right” amount between pages. A little faster when it’s first thrown, a bit slower as it reaches the top of its arc, a little faster as it falls back to the catcher. The ordinary differential equation tells us how fast our marble is rolling on this hilly table, and in what direction. So we can calculate how far the marble needs to move, and in what direction, to make the next page in the flip book.

Almost. The rate at which the marble should move will change, in the interval between one flip-book page and the next. The difference, the error, may not be much. But there is a difference between the exact and the numerical solution. Well, there is a difference between a circle and a regular polygon. We have many ways of minimizing and estimating and controlling the error. Doing that is what makes numerical mathematics the high-paid professional industry it is. Our game of catch we can verify by flipping through the book. The motion of four dozen planets and moons attracting one another is harder to be sure we’ve calculated right.
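The flip-book error is easy to watch shrink. Here’s a minimal sketch, a thrown ball with $y' = v$ and $v' = -g$ stepped by Euler’s method: halving the interval between pages roughly halves the gap from the exact parabola.

```python
# Euler steps for a thrown ball, y' = v and v' = -g, compared with
# the exact answer y(t) = v0*t - g*t**2/2. The error is roughly
# proportional to the step size dt.
def euler_height(v0, g, dt, steps):
    y, v = 0.0, v0
    for _ in range(steps):
        y += dt * v
        v -= dt * g
    return y

v0, g, t_end = 10.0, 9.8, 1.0
exact = v0 * t_end - 0.5 * g * t_end ** 2
for steps in (10, 20, 40):
    dt = t_end / steps
    print(dt, abs(euler_height(v0, g, dt, steps) - exact))
```

For this particular equation the error works out to exactly $g \, t \, dt / 2$, so each halving of $dt$ halves the error; fancier schemes shrink it much faster.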

I said at the top that most anything one can do with numbers one can do with functions also. I would like to close the essay with some great parallel. Like, the way that trying to solve cubic equations made people realize complex numbers were good things to have. I don’t have a good example like that for ordinary differential equations, where the study expanded our ideas of what functions could be. Part of that is that complex numbers are more accessible than the stranger functions. Part of that is that complex numbers have a story behind them. The story features titanic figures like Gerolamo Cardano, Niccolò Tartaglia and Ludovico Ferrari. We see some awesome and weird personalities in 19th century mathematics. But their fights are generally harder to watch from the sidelines and cheer on. And part is that it’s easier to find pop historical treatments of the kinds of numbers. The historiography of what a “function” is is a specialist occupation.

But I can think of a possible case. A tool that’s sometimes used in solving ordinary differential equations is the “Dirac delta function”. Yes, that Paul Dirac. It’s a weird function, written as $\delta(x)$. It’s equal to zero everywhere, except where $x$ is zero. When $x$ is zero? It’s … we don’t talk about what it is. Instead we talk about what it can do. The integral of that Dirac delta function times some other function can equal that other function at a single point. It strains credibility to call this a function the way we speak of, like, $\sin(x)$ or $\sqrt{x^2 + 4}$ being functions. Many will classify it as a distribution instead. But it is so useful, for a particular kind of problem, that it’s impossible to throw away.
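The delta function’s signature trick, the “sifting” property, can at least be watched numerically. In this sketch I stand in for $\delta(x)$ with a very narrow Gaussian, which is an approximation of convenience and not a definition, and do the integral as a plain Riemann sum:

```python
import math

# A narrow normalized Gaussian standing in for the Dirac delta.
def delta_approx(x, eps):
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

# Riemann sum of delta(x - a) * f(x) over [-span, span]: the integral
# picks out, approximately, the single value f(a).
def sift(f, a, eps=1e-3, dx=1e-4, span=1.0):
    total = 0.0
    n = int(2 * span / dx)
    for i in range(n):
        x = -span + i * dx
        total += delta_approx(x - a, eps) * f(x) * dx
    return total

print(sift(math.cos, 0.0))  # close to cos(0) = 1
print(sift(math.sin, 0.5))  # close to sin(0.5), about 0.479
```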

So perhaps the parallels between numbers and functions extend that far. Ordinary differential equations can make us notice kinds of functions we would not have seen otherwise.

And with this — I can see the much-postponed end of the Little 2021 Mathematics A-to-Z! You can read all my entries for 2021 at this link, and if you’d like can find all my A-to-Z essays here. How will I finish off the shortest yet most challenging sequence I’ve done? Will it be yellow and equivalent to the Axiom of Choice? Answers should come, in a week, if all starts going well.

## My Little 2021 Mathematics A-to-Z: Tangent Space

And now, finally, I resume and hopefully finish what was meant to be a simpler and less stressful A-to-Z for last year. I’m feeling much better about my stress loads now and hope that I can soon enjoy the feeling of having a thing accomplished.

This topic is one of many suggestions that Elkement, one of my longest blog-friendships here, offered. It’s a creation that sent me back to my grad school textbooks, some of those slender paperback volumes with tiny, close-set type that turn out to be far more expensive than you imagine. Though not in this case: my most useful reference here was V I Arnold’s Ordinary Differential Equations, stamped inside as costing \$18.75. The field is full of surprises. Another wonderful reference was this excellent set of notes prepared by Jodin Morey. They would have done much to help me through that class.

# Tangent Space

Stand in midtown Manhattan, holding a map of midtown Manhattan. You have — not a tangent space, not yet. A tangent plane, representing the curved surface of the Earth with the flat surface of your map, though. But the tangent space is near: see how many blocks you must go, along the streets and the avenues, to get somewhere. Four blocks north, three west. Two blocks south, ten east. And so on. Those directions, of where you need to go, are the tangent space around you.

There is the first trick in tangent spaces. We get accustomed, early in learning calculus, to think of tangent lines and then of tangent planes. These are nice, flat approximations to some original curve. But while we’re introduced to the tangent space, and first learn examples of it, as tangent planes, we don’t stay there. There are several ways to define tangent spaces. One recasts tangent spaces in group theory terms, describing them as a ring based on functions that are equal to zero at the tangent point. (To be exact, it’s an ideal, based on a quotient group, based on two sets of such functions.)

That’s a description mathematicians are inclined to like, not only because it’s far harder to imagine than a map of the city is. But this ring definition describes the tangent space in terms of what we can do with it, rather than how to calculate finding it. That tends to appeal to mathematicians. And it offers surprising insights. Cleverer mathematicians than I am notice how this makes tangent spaces very close to Lagrange multipliers. Lagrange multipliers are a technique to find the maximum of a function subject to a constraint from another function. They seem to work by magic, and tangent spaces will echo that.

I’ll step back from the abstraction. There are relevant observations to make from this map of midtown. The directions “four blocks north, three west” do not represent any part of Manhattan. They describe a way you might move in Manhattan, yes. But you could move in that direction from many places in the city. And you could go four blocks north and three west if you were in any part of any city with a grid of streets. It is a vector space, with elements that are velocities at a tangent point.

The tangent space is less a map showing where things are and more one of how to get to other places, closer to a subway map than a literal one. Still, the topic is steeped in the language of maps. I’ll find it a useful metaphor too. We do not make a map unless we want to know how to find something. So the interesting question is what do we try to find in these tangent spaces?

There are several routes to tangent spaces. The one I’m most familiar with is through dynamical systems. These are typically physics-driven, sometimes biology-driven, problems. They describe things that change in time according to ordinary differential equations. Physics problems particularly are often about things moving in space. Space, in dynamical systems, becomes “phase space”, an abstract universe spanned by all of the possible values of the variables. The variables are, usually, the positions and momentums of the particles (for a physics problem). Sometimes time and energy appear as variables. In biology variables are often things that represent populations. The role the Earth served in my first paragraph is now played by a manifold. The manifold represents whatever constraints are relevant to the problem. That’s likely to be conservation laws or limits on how often arctic hares can breed or such.

The evolution in time of this system, though, is now the tracing out of a path in phase space. An understandable and much-used system is the rigid pendulum. A stick, free to swing around a point. There are two useful coordinates here. There’s the angle the stick makes, relative to the vertical axis, $\theta$. And there’s how fast that angle is changing, $\dot{\theta}$. You can draw these axes; I recommend $\theta$ as the horizontal and $\dot{\theta}$ as the vertical axis but, you know, you do you.

If you give the pendulum a little tap, it’ll swing back and forth. It rises and moves to the right, then falls while moving to the left, then rises and moves to the left, then falls and moves to the right. In phase space, this traces out an ellipse. It’s your choice whether it’s going clockwise or anticlockwise. If you give the pendulum a huge tap, it’ll keep spinning around and around. It’ll spin a little slower as it gets nearly upright, but it speeds back up again. So in phase space that’s a wobbly line, moving either to the right or the left, depending what direction you hit it.

You can even imagine giving the pendulum just the right tap, exactly hard enough that it rises to vertical and balances there, perfectly aligned so it doesn’t fall back down. This is a special path, the dividing line between those ellipses and that wavy line. Or setting it vertically there to start with and trusting no truck driving down the street will rattle it loose. That’s a very precise dot, where $\dot{\theta}$ is exactly zero. These paths, the trajectories, match whatever walking you did in the first paragraph to get to some spot in midtown Manhattan. And now let’s look again at the map, and the tangent space.
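Those three behaviors, the swinging, the spinning, and the knife’s-edge balance between them, can be traced numerically. A sketch in units where the pendulum equation is $\ddot{\theta} = -\sin\theta$, stepped with the semi-implicit Euler method:

```python
import math

# Trace the pendulum theta'' = -sin(theta) from theta = 0 with a given
# initial speed, and report how far the angle ever gets from straight-down.
def max_angle(theta_dot0, dt=0.001, steps=20000):
    theta, v = 0.0, theta_dot0
    biggest = 0.0
    for _ in range(steps):
        v -= dt * math.sin(theta)  # semi-implicit Euler: update speed first
        theta += dt * v
        biggest = max(biggest, abs(theta))
    return biggest

print(max_angle(1.0))  # a little tap: theta stays below pi, the ellipse
print(max_angle(3.0))  # a huge tap: theta keeps growing, the spinning
```

The dividing line sits at an initial speed of exactly 2 in these units, the tap that just barely carries the pendulum to upright.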

Within the tangent space we see what changes would change the system’s behavior. How much of a tap we would need, say, to launch our swinging pendulum into never-ending spinning. Or how much of a tap to stop a spinning pendulum. Every point on a trajectory of a dynamical system has a tangent space. And, for many interesting systems, the tangent space will be separable into two pieces. One of them will be perturbations that don’t go far from the original trajectory. One of them will be perturbations that do wander far from the original.

These regions may have a complicated border, with enclaves and enclaves within enclaves, and so on. This can be where we get (deterministic) chaos from. But what we usually find interesting is whether the perturbation keeps the old behavior intact or destroys it altogether. That is, how we can change where we are going.

That said, in practice, mathematicians don’t use tangent spaces to send pendulums swinging. They tend to come up when one is past studying such petty things as specific problems. They’re more often used in studying the ways that dynamical systems can behave. Tangent spaces themselves often get wrapped up into structures with names like tangent bundles. You’ll see them proving the existence of some properties, describing limit points and limit cycles and invariants and quite a bit of set theory. These can take us surprising places. It’s possible to use a tangent-space approach to prove the fundamental theorem of algebra, that every non-constant polynomial has at least one (complex) root. This seems to me the long way around to get there. But it is amazing to learn that this is a place one can go.

I am so happy to be finally finishing Little 2021 Mathematics A-to-Z. All of this project’s essays should be at this link. And all my glossary essays from every year should be at this link. Thank you for reading.

## The Plan, and How It Will Go Wrong

I spent a while repeating old A-to-Z materials for the letters T, O, and Z, the unfinished business from my Little 2021 Mathematics A-to-Z. This was to give me the time to recover and to prepare new essays to finish out the already-reduced project from last year. And then I figured once that was done, I could do three posts on successive Wednesdays, and a wrap-up post where I said what I learned. And then I’d be free to do whatever I felt like.

You notice this is not an A-to-Z essay. It’s been a struggle and this morning as I was preparing to finish off a fresh essay I realized: I had picked a topic I did already. No, it was not torus again. But there’s no doing a whole new essay in under three hours, especially not since I need to get groceries before a really nasty-looking bunch of weather we’re getting delivered this evening and tomorrow.

So it’s all pushed back another week again. All I can say is I hope I’ll be happy this hour next week.

## How January 2022 Treated My Mathematics Blog

It’s a reasonable time for me to check on my readership statistics for the past month. The current month is maybe fourteen minutes from ending, after all. January was my most prolific month since October 2020, with 16 posts published. Nearly all were repostings of old A-to-Z essays. But if you weren’t checking in here in 2015, how would you know the difference, except by my pointing it out?

I have long suspected the thing that most affects my readership is how many times I post. So how did this block of repeat posts affect my readership? Says WordPress, it was like this:

The number of pages viewed in January rose to 2,108, its highest figure since October 2021. That’s below the running averages for the twelve months ending in December 2021, though. The running mean was 2,402.7 views per month, and the median 2,337 views per month. Ah, but what if we rate that per posting? Then there were 131.8 views per posting. The running mean was 321.8 views per posting and the running median 307.4. (And none of this is to say that any posting got 132 views. Most of what’s read any month is older material. The things that have had the chance to get some traction as the answer to search engine queries.)

The number of unique visitors rose from December, to 1,458 unique visitors in January. That’s still below the running mean of 1,694.5 visitors and the running median of 1,654.5. Per posting, the figure is even more dire: 91.1 visitors per posting, compared to a mean of 226.6 and median of 219.2. These per-posting unique visitor numbers are in line with the sort of thing I did back in 2019 or so, when I had lots of postings in both the A-to-Z and in the Reading the Comics line, though.

There were 51 things liked here in January, a slight rise and even above the mean of 40.1 and median of 38.5. Per posting, that’s 3.2 likes, compared to a mean of 5.3 and median of 5.6. All of these are below the likability count of distant years like 2018, which were themselves much less liked than, say, 2015.

Comments fell again, with only four given or received around here in January. The mean is 15.7 and median 11.5. That’s a dire 0.3 comments per posting, although I grant there wasn’t a lot for people to respond to. The mean is 2.0 comments per posting, and median 1.6, and, you know, I’ve had worse months. (February is looking like one!)

I had a lot of posts get at least some views in January. The five most popular posts from the month were:

And for once I have enough posts that it feels silly to list all of them in order of decreasing popularity. I’m a touch surprised none of the A-to-Z reposts were among the most popular. What the record suggests is people like amusing little trifles or me talking about myself. Ah, if only it weren’t painful to talk about myself.

WordPress credits me with 18,040 words published in January, for an average of 1,128 words per posting. That’s more than any month of 2020 or 2021, to my surprise.

WordPress figures that as of the start of February I’d posted 1,691 things here, drawing 152,987 views from 91,642 logged unique visitors. And that there were a total of 3,311 comments altogether.

And that should be enough looking back for now. I hope to resume, and complete, the Little 2021 A-to-Z next week, and after that, let’s just see what I do.

## From my Seventh A-to-Z: Zero Divisor

Here I stand at the end of the pause I took in 2021’s Little Mathematics A-to-Z, in the hopes of building the time and buffer space to write its last three essays. Have I succeeded? We’ll see next week, but I will say that I feel myself in a much better place than I was in December.

The Zero Divisor closed out my big project for the first plague year. It let me get back to talking about abstract algebra, one of the cores of a mathematics major’s education. And it let me get into graph theory, the unrequited love of my grad school life. The subject also let me tie back to Michael Atiyah, the start of that year’s A-to-Z. Often a sequence will pick up a theme and 2020’s gave a great illusion of being tightly constructed.

Jacob Siehler had several suggestions for this last of the A-to-Z essays for 2020. Zorn’s Lemma was an obvious choice. It’s got an important place in set theory, and it’s got some neat and weird implications. It’s got a great name. The zero divisor is one of those technical things mathematics majors have to deal with. It never gets any pop-mathematics attention. I picked the less-travelled road and found a delightful scenic spot.

# Zero Divisor.

3 times 4 is 12. That’s a clear, unambiguous, and easily-agreed-upon arithmetic statement. The thing to wonder is what kind of mathematics it takes to mess that up. The answer is algebra. Not the high school kind, with x’s and quadratic formulas and all. The college kind, with group theory and rings.

A ring is a mathematical construct that lets you do a bit of arithmetic. Something that looks like arithmetic, anyway. It has a set of elements. (An element is just a thing in a set. We say “element” because it feels weird to call it “thing” all the time.) The ring has an addition operation. The ring has a multiplication operation. Addition has an identity element, something you can add to any element without changing the original element. We can call that ‘0’. The integers, or to use the lingo $Z$, are a ring (among other things).

Among the rings you learn, after the integers, is the integers modulo … something. This can be modulo any counting number. The integers modulo 10, for example, we write as $Z_{10}$ for short. There are different ways to think of what this means. The one convenient for this essay is that it’s the integers 0, 1, 2, up through 9. And that the result of any calculation is “how much more than a whole multiple of 10 this calculation would otherwise be”. So then 3 times 4 is now 2. 3 times 5 is 5; 3 times 6 is 8. 3 times 7 is 1, and doesn’t that seem peculiar? That’s part of how modulo arithmetic warns us that groups and rings can be quite strange things.

We can do modulo arithmetic with any of the counting numbers. Look, for example, at $Z_{5}$ instead. In the integers modulo 5, 3 times 4 is … 2. This doesn’t seem to get us anything new. How about $Z_{8}$? In this, 3 times 4 is 4. That’s interesting. It doesn’t make 3 the multiplicative identity for this ring. 3 times 3 is 1, for example. But you’d never see something like that for regular arithmetic.
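These products are quick to check, since “the integers modulo n” is just Python’s % operator applied after the multiplication:

```python
# 3 times 4 in the integers modulo 10, 5, 8, and 12,
# plus the odd-looking 3 times 3 in the integers modulo 8.
for n in (10, 5, 8, 12):
    print(f"3 * 4 in Z_{n} is {(3 * 4) % n}")
print(f"3 * 3 in Z_8 is {(3 * 3) % 8}")
```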

How about $Z_{12}$? Now we have 3 times 4 equalling 0. And that’s a dramatic break from how regular numbers work. One thing we know about regular numbers is that if a times b is 0, then either a is 0, or b is zero, or they’re both 0. We rely on this so much in high school algebra. It’s what lets us pick out roots of polynomials. Now? Now we can’t count on that.

When this does happen, when one thing times another equals zero, we have “zero divisors”. These are anything in your ring that can multiply by something else to give 0. Is zero, the additive identity, always a zero divisor? … That depends on what the textbook you first learned algebra from said. To avoid ambiguity, you can write a “nonzero zero divisor”. This clarifies your intentions and slows down your copy editing every time you read “nonzero zero”. Or call it a “nontrivial zero divisor” or “proper zero divisor” instead. My preference is to accept 0 as always being a zero divisor. We can disagree on this. What of zero divisors other than zero?

Your ring might or might not have them. It depends on the ring. The ring of integers $Z$, for example, doesn’t have any zero divisors except for 0. The ring of integers modulo 12 $Z_{12}$, though? Anything that isn’t relatively prime to 12 is a zero divisor. So, 2, 3, 4, 6, 8, 9, and 10 are zero divisors here. The ring of integers modulo 13 $Z_{13}$? That doesn’t have any zero divisors, other than zero itself. In fact any ring of integers modulo a prime number, $Z_{p}$, lacks zero divisors besides 0.
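Brute force finds these zero divisors too: a nonzero a in $Z_n$ is a zero divisor when some nonzero b makes the product come out to 0.

```python
# All the nonzero zero divisors of Z_n, found by trying every product.
def zero_divisors(n):
    return [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]

print(zero_divisors(12))  # [2, 3, 4, 6, 8, 9, 10]
print(zero_divisors(13))  # [] -- no zero divisors modulo a prime
```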

Focusing too much on integers modulo something makes zero divisors sound like some curious shadow of prime numbers. There are some similarities. Whether a number is prime depends on your multiplication rule and what set of things it’s in. Being a zero divisor in one ring doesn’t directly relate to whether something’s a zero divisor in any other. Knowing what the zero divisors are tells you something about the structure of the ring.

It’s hard to resist focusing on integers-modulo-something when learning rings. They work very much like regular arithmetic does. Even the strange thing about them, that every result is from a finite set of digits, isn’t too alien. We do something quite like it when we observe that three hours after 10:00 is 1:00. But many sets of elements can create rings. Square matrices are the obvious extension. Matrices are grids of elements, each of which … well, they’re most often going to be numbers. Maybe integers, or real numbers, or complex numbers. They can be more abstract things, like rotations or whatnot, but they’re hard to typeset. It’s easy to find zero divisors in matrices of numbers. Imagine, like, a matrix that’s all zeroes except for one element, somewhere. There are a lot of matrices which, multiplied by that, will be a zero matrix, one with nothing but zeroes in it. Another common kind of ring is the polynomials. For these you need some constraint like the polynomial coefficients being integers-modulo-something. You can make that work.
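The matrix case shows up already in the smallest size. Here’s a sketch, with the 2-by-2 multiplication written out by hand: a matrix that’s all zeroes except one element, multiplied by itself, gives the zero matrix.

```python
# Multiply two 2-by-2 matrices, stored as nested lists.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

N = [[0, 1],
     [0, 0]]
print(matmul(N, N))  # [[0, 0], [0, 0]] -- N is a zero divisor of itself
```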

In 1988 Istvan Beck tried to establish a link between graph theory and ring theory. We now have a usable standard definition of one. If $R$ is any ring, then $\Gamma(R)$ is the zero-divisor graph of $R$. (I know some of you think $R$ is the real numbers. No; that’s a bold-faced $\mathbb{R}$ instead. Unless that’s too much bother to typeset.) You make the graph by putting in a vertex for each element in $R$. You connect two vertices a and b if the product of the corresponding elements is zero. That is, if they’re zero divisors for one another. (In Beck’s original form, this included all the elements. In modern use, we don’t bother including the elements that are not zero divisors.)
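Building $\Gamma(Z_{12})$ takes only a few lines. A sketch of my own, using the modern convention of keeping only the zero divisors as vertices:

```python
from itertools import combinations

n = 12
# vertices: the zero divisors of Z_12
vertices = [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]
# edges: pairs whose product is zero mod 12
edges = [(a, b) for a, b in combinations(vertices, 2) if (a * b) % n == 0]

print(vertices)  # [2, 3, 4, 6, 8, 9, 10]
print(edges)     # [(2, 6), (3, 4), (3, 8), (4, 6), (4, 9), (6, 8), (6, 10), (8, 9)]
```

So, for instance, 2 and 6 share an edge because $2 \times 6 = 12 \equiv 0$, while 2 and 3 do not, since $2 \times 3 = 6$ isn’t zero in $Z_{12}$.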

Drawing this graph $\Gamma(R)$ makes tools from graph theory available to study rings. We can measure things like the distance between elements, or what paths from one vertex to another exist. What cycles — paths that start and end at the same vertex — exist, and how large they are. Whether the graphs are bipartite. A bipartite graph is one where you can divide the vertices into two sets, and every edge connects one thing in the first set with one thing in the second. What the chromatic number — the minimum number of colors it takes to make sure no two adjacent vertices have the same color — is. What shape does the graph have?
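Those measurements are easy to carry out for a small ring. A self-contained sketch of mine: build $\Gamma(Z_{12})$ and find its diameter by breadth-first search. The answer, 3, agrees with the known theorem (due to Anderson and Livingston) that every zero-divisor graph is connected with diameter at most three:

```python
from collections import deque

n = 12
vertices = [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]
adjacency = {a: [b for b in vertices if b != a and (a * b) % n == 0]
             for a in vertices}

def distances(src):
    # breadth-first search: shortest path lengths from src to every vertex
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# diameter: the largest shortest-path distance between any two vertices
diameter = max(max(distances(v).values()) for v in vertices)
print(diameter)  # 3
```

The vertices 2 and 9, for example, are three steps apart: 2–6–8–9.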

It’s easy to think that zero divisors are just a thing which emerges from a ring. The graph theory connection tells us otherwise. You can make a potential zero divisor graph and ask whether any ring could fit that. And, from that, what we can know about a ring from its zero divisors. Mathematicians are drawn as if by an occult hand to things that let you answer questions about a thing from its “shape”.

And this lets me complete a cycle in this year’s A-to-Z, to my delight. There is an important question in topology which group theory could answer. It’s a generalization of the zero-divisors conjecture, a hypothesis about what fits in a ring based on certain types of groups. This hypothesis — actually, these hypotheses. There are a bunch of similar questions about what invariants called the L2-Betti numbers can be. These we call the Atiyah Conjecture. This is because of work Michael Atiyah did in the cohomology of manifolds starting in the 1970s. It’s work, I admit, I don’t understand well enough to summarize, and hope you’ll forgive me for that. I’m still amazed that one can get to cutting-edge mathematics research like this. It seems, at its introduction, to be only a subversion of how we find x for which $(x - 2)(x + 1) = 0$.

And this, I am amazed to say, completes the All 2020 A-to-Z project. All of this year’s essays should be gathered at this link. In the next couple days I plan to check that they actually are. All the essays from every A-to-Z series, going back to 2015, should be at this link. I plan to soon have an essay about what I learned in doing the A-to-Z this year. And then we can look to 2021 and hope that works out all right. Thank you for reading.

## From my Sixth A-to-Z: Zeno’s Paradoxes

I suspect it is impossible to say enough about Zeno’s Paradoxes. To close out my 2019 A-to-Z, though, I tried saying something. There are four particularly famous paradoxes and I discuss what are maybe the second and third-most-popular ones here. (The paradox of the Dichotomy is surely most famous.) The problems presented are about motion and may seem to be about physics, or at least about perception. But calculus is built on differentials, on the idea that we can describe how fast a thing is changing at an instant. Mathematicians have worked out a way to define this that we’re satisfied with and that doesn’t require (obvious) nonsense. But to claim we’ve solved Zeno’s Paradoxes — as unwary STEM majors sometimes do — is unwarranted.

Also I was able to work in a picture from an amusement park trip I took, the closing weekend of Kings Island park in 2019 and the last day that The Vortex roller coaster would run.

Today’s A To Z term was nominated by Dina Yagodich, who runs a YouTube channel with a host of mathematics topics. Zeno’s Paradoxes exist in the intersection of mathematics and philosophy. Mathematics majors like to declare that they’re all easy. The Ancient Greeks didn’t understand infinite series or infinitesimals like we do. Now they’re no challenge at all. This reflects a belief that philosophers must be silly people who haven’t noticed that one can, say, exit a room.

This is your classic STEM-attitude of missing the point. We may suppose that Zeno of Elea occasionally exited rooms himself. That is a supposition, though. Zeno, like most philosophers who lived before Socrates, we know from other philosophers making fun of him a century after he died. Or at least trying to explain what they thought he was on about. Modern philosophers are expected to present others’ arguments as well and as strongly as possible. This even — especially — when describing an argument they want to say is the stupidest thing they ever heard. Or, to use the lingo, when they wish to refute it. Ancient philosophers had no such compulsion. They did not mind presenting someone else’s argument sketchily, if they supposed everyone already knew it. Or even badly, if they wanted to make the other philosopher sound ridiculous. Between that and the sparse nature of the record, we have to guess a bit about what Zeno precisely said and what he meant. This is all right. We have some idea of things that might reasonably have bothered Zeno.

And they have bothered philosophers for thousands of years. They are about change. The ones I mean to discuss here are particularly about motion. And there are things we do not understand about change. This essay will not answer what we don’t understand. But it will, I hope, show something about why that’s still an interesting thing to ponder.

When we capture a moment by photographing it we add lies to what we see. We impose a frame on its contents, discarding what is off-frame. We rip an instant out of its context. And that before considering how we stage photographs, making people smile and stop tilting their heads. We forgive many of these lies. The things excluded from or the moments around the one photographed might not alter what the photograph represents. Making everyone smile can convey the emotional average of the event in a way that no individual moment represents. Arranging people to stand in frame can convey the participation in the way a candid photograph would not.

But there remains the lie that a photograph is “a moment”. It is no such thing. We notice this when the photograph is blurred. It records all the light passing through the lens while the shutter is open. A photograph records an eighth of a second. A thirtieth of a second. A thousandth of a second. But still, some time. There is always the ghost of motion in a picture. If we do not see it, it is because our photograph’s resolution is too coarse. If we could photograph something with infinite fidelity we would see, even in still life, the wobbling of the molecules that make up a thing.

Which implies something fascinating to me. Think of a reel of film. Here I mean old-school pre-digital film, the thing that’s a great strip of pictures, a new one shown 24 times per second. Each frame of film is a photograph, recording some split-second of time. How much time is actually in a film, then? How long, cumulatively, was a camera shutter open during a two-hour film? I use pre-digital, strip-of-film movies for convenience. Digital films offer the same questions, but with different technical points. And I do not want the writing burden of describing both analog and digital film technologies. So I will stick to the long sequence of analog photographs model.

Let me imagine a movie. One of an ordinary everyday event; an actuality, to use the terminology of 1898. A person overtaking a walking tortoise. Look at the strip of film. There are many frames which show the person behind the tortoise. There are many frames showing the person ahead of the tortoise. When are the person and the tortoise at the same spot?

We have to put in some definitions. Fine; do that. Say we mean when the leading edge of the person’s nose overtakes the leading edge of the tortoise’s, as viewed from our camera. Or, since there must be blur, when the center of the blur of the person’s nose overtakes the center of the blur of the tortoise’s nose.

Do we have the frame when that moment happened? I’m sure we have frames from the moments before, and frames from the moments after. But the exact moment? Are you positive? If we zoomed in, would it actually show the person is a millimeter behind the tortoise? That the person is a hundredth of a millimeter ahead? A thousandth of a hair’s width behind? Suppose that our camera is very good. It can take frames representing as small a time as we need. Does it ever capture that precise moment? To the point that we know, no, it’s not the case that the tortoise is one-trillionth the width of a hydrogen atom ahead of the person?

If we can’t show the frame where this overtaking happened, then how do we know it happened? To put it in terms a STEM major will respect, how can we credit a thing we have not observed with happening? … Yes, we can suppose it happened if we suppose continuity in space and time. Then it follows from the intermediate value theorem. But then we are begging the question. We impose the assumption that there is a moment of overtaking. This does not prove that the moment exists.
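The intermediate value theorem even hands us a procedure, bisection, for pinning the moment down. Here is a sketch of my own, with made-up speeds; note that it only ever narrows an interval, so every number it produces is an endpoint of some bracketing span, never an observed “moment” — which is exactly the question-begging described above:

```python
def overtake_time(gap, lo, hi, tol=1e-9):
    # bisection: assumes gap is continuous with gap(lo) < 0 < gap(hi),
    # which is precisely the continuity assumption under discussion
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if gap(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# person at 1.5 m/s; tortoise at 0.1 m/s with a 1-meter head start (invented numbers)
gap = lambda t: 1.5 * t - (0.1 * t + 1.0)
print(overtake_time(gap, 0.0, 10.0))  # about 0.714 seconds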

Fine, then. What if time is not continuous? If there is a smallest moment of time? … If there is, then, we can imagine a frame of film that photographs only that one moment. So let’s look at its footage.

One thing stands out. There’s finally no blur in the picture. There can’t be; there’s no time during which to move. We might not catch the moment that the person overtakes the tortoise. It could “happen” in-between moments. But at least we have a moment to observe at leisure.

So … what is the difference between a picture of the person overtaking the tortoise, and a picture of the person and the tortoise standing still? A movie of the two walking should be different from a movie of the two pretending to be department store mannequins. What, in this frame, is the difference? If there is no observable difference, how does the universe tell whether, next instant, these two should have moved or not?

A mathematical physicist may toss in an answer. Our photograph is only of positions. We should also track momentum. Momentum carries within it the information of how position changes over time. We can’t photograph momentum, not without getting blurs. But analytically? If we interpret a photograph as “really” tracking the positions of a bunch of particles? To the mathematical physicist, momentum is as good a variable as position, and it’s as measurable. We can imagine a hyperspace photograph that gives us an image of positions and momentums. So, STEM types show up the philosophers finally, right?

Hold on. Let’s allow that somehow we get changes in position from the momentum of something. Hold off worrying about how momentum gets into position. Where does a change in momentum come from? In the mathematical physics problems we can do, the change in momentum has a value that depends on position. In the mathematical physics problems we have to deal with, the change in momentum has a value that depends on position and momentum. But that value? Put it in words. That value is the change in momentum. It has the same relationship to acceleration that momentum has to velocity. For want of a real term, I’ll call it acceleration. We need more variables. An even more hyperspatial film camera.

… And does acceleration change? Where does that change come from? That is going to demand another variable, the change-in-acceleration. (The “jerk”, according to people who want to tell you that “jerk” is a commonly used term for the change-in-acceleration, and no one else.) And the change-in-change-in-acceleration. Change-in-change-in-change-in-acceleration. We have to invoke an infinite regression of new variables. We got here because we wanted to suppose it wasn’t possible to divide a span of time infinitely many times. This seems like a lot to build into the universe to distinguish a person walking past a tortoise from a person standing near a tortoise. And then we still must admit not knowing how one variable propagates into another. That a person is wide is not usually enough explanation of how they are growing taller.

Numerical integration can model this kind of system with time divided into discrete chunks. It teaches us some ways that this can make logical sense. It also shows us that our projections will (generally) be wrong. At least unless we do things like have an infinite number of steps of time factor into each projection of the next timestep. Or use the forecast of future timesteps to correct the current one. Maybe use both. These are … not impossible. But being “ … not impossible” is not to say satisfying. (We allow numerical integration to be wrong by quantifying just how wrong it is. We call this an “error”, and have techniques that we can use to keep the error within some tolerated margin.)
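That quantified error is easy to watch shrink. A minimal sketch of my own: forward Euler, the crudest divide-time-into-chunks scheme, on a system whose exact answer we know, $dy/dt = y$ with $y(0) = 1$, so that $y(1) = e$:

```python
import math

def euler(f, y0, t_end, steps):
    # forward-Euler integration of dy/dt = f(y), time cut into discrete chunks
    y, h = y0, t_end / steps
    for _ in range(steps):
        y += h * f(y)
    return y

for steps in (10, 100, 1000):
    error = abs(euler(lambda y: y, 1.0, 1.0, steps) - math.e)
    print(steps, error)  # the error shrinks roughly tenfold each time
```

The projection is always a little wrong; making the chunks of time smaller makes it less wrong, predictably so, which is what lets us tolerate it.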

So where has the movement happened? The original scene had movement to it. The movie seems to represent that movement. But that movement doesn’t seem to be in any frame of the movie. Where did it come from?

We can have properties that appear in a mass which don’t appear in any component piece. No molecule of a substance has a color, but a big enough mass does. No atom of iron is ferromagnetic, but a chunk might be. No grain of sand is a heap, but enough of them are. The Ancient Greeks knew this; we call it the Sorites paradox, after Eubulides of Miletus. (“Sorites” means “heap”, as in heap of sand. But if you had to bluff through a conversation about ancient Greek philosophers you could probably get away with making up a quote you credit to Sorites.) Could movement be, in the term mathematical physicists use, an intensive property? But intensive properties are obvious to the outside observer of a thing. We are not outside observers to the universe. It’s not clear what it would mean for there to be an outside observer to the universe. Even if there were, what space and time are they observing in? And aren’t their space and their time and their observations vulnerable to the same questions? We’re in danger of insisting on an infinite regression of “universes” just so a person can walk past a tortoise in ours.

We can say where movement comes from when we watch a movie. It is a trick of perception. Our eyes take some time to understand a new image. Our brains insist on forming a continuous whole story even out of disjoint ideas. Our memory fools us into remembering a continuous line of action. That a movie moves is entirely an illusion.

You see the implication here. Surely Zeno was not trying to lead us to understand all motion, in the real world, as an illusion? … Zeno seems to have been trying to support the work of Parmenides of Elea. Parmenides is another pre-Socratic philosopher. So we have about four words that we’re fairly sure he authored, and we’re not positive what order to put them in. Parmenides was arguing about the nature of reality, and what it means for a thing to come into or pass out of existence. He seems to have been arguing something like that there was a true reality that’s necessary and timeless and changeless. And there’s an apparent reality, the thing our senses observe. And in our sensing, we add lies which make things like change seem to happen. (Do not use this to get through your PhD defense in philosophy. I’m not sure I’d use it to get through your Intro to Ancient Greek Philosophy quiz.) That what we perceive as movement is not what is “really” going on is, at least, imaginable. So it is worth asking questions about what we mean for something to move. What difference there is between our intuitive understanding of movement and what logic says should happen.

(I know someone wishes to throw down the word Quantum. Quantum mechanics is a powerful tool for describing how many things behave. It implies limits on what we can simultaneously know about the position and the time of a thing. But there is a difference between “what time is” and “what we can know about a thing’s coordinates in time”. Quantum mechanics speaks more to the latter. There are also people who would like to say Relativity. Relativity, special and general, implies we should look at space and time as a unified set. But this does not change our questions about continuity of time or space, or where to find movement in both.)

And this is why we are likely never to finish pondering Zeno’s Paradoxes. In this essay I’ve only discussed two of them: Achilles and the Tortoise, and The Arrow. There are two other particularly famous ones: the Dichotomy, and the Stadium. The Dichotomy is the one about how to get somewhere, you have to get halfway there. But to get halfway there, you have to get a quarter of the way there. And an eighth of the way there, and so on. The Stadium is the hardest of the four great paradoxes to explain. This is in part because the earliest writings we have about it don’t make clear what Zeno was trying to get at. I can think of something which seems consistent with what’s described, and contrary-to-intuition enough to be interesting. I’m satisfied to ponder that one. But other people may have different ideas of what the paradox should be.
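The modern-calculus gloss on the Dichotomy, that the infinitely many pieces sum to a finite whole, is easy to compute — even if, as I say above, computing it is not the same as dissolving the paradox. A quick sketch:

```python
# partial sums of 1/2 + 1/4 + 1/8 + ... close in on 1
total = 0.0
for k in range(1, 21):
    total += 0.5 ** k
    if k % 5 == 0:
        print(k, total)
```

After twenty terms the sum sits within one-millionth of 1; the geometric series $\sum_{k=1}^{\infty} 2^{-k}$ converges to exactly 1.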

There are a handful of other paradoxes which don’t get so much love, although one of them is another version of the Sorites Paradox. Some of them the Stanford Encyclopedia of Philosophy dubs “paradoxes of plurality”. These ask how many things there could be. It’s hard to judge just what Zeno was getting at with this. We know that one argument had three parts, and only two of them survive. Trying to fill in that gap is a challenge. We want to fill in the argument we would make, projecting from our modern idea of this plurality. It’s not Zeno’s idea, though, and we can’t know how close our projection is.

I don’t have the space to make a thematically coherent essay describing these all, though. The set of paradoxes have demanded thought, even just to come up with a reason to think they don’t demand thought, for thousands of years. We will, perhaps, have to keep trying again to fully understand what it is we don’t understand.

And with that — I find it hard to believe — I am done with the alphabet! All of the Fall 2019 A-to-Z essays should appear at this link. Additionally, the A-to-Z sequences of this and past years should be at this link. Tomorrow and Saturday I hope to bring up some mentions of specific past A-to-Z essays. Next week I hope to share my typical thoughts about what this experience has taught me, and some other writing about this writing.

Thank you, all who’ve been reading, and who’ve offered topics, comments on the material, or questions about things I was hoping readers wouldn’t notice I was shorting. I’ll probably do this again next year, after I’ve had some chance to rest.

## From my Fifth A-to-Z: Zugzwang

The Fall 2018 A-to-Z gave me the chance to talk a bit more about game theory. It and knot theory are two of the fields of mathematics I most long to know better. Well, that and differential geometry. It also gave me the chance to show off how I read The Yiddish Policeman’s Union. I enjoyed the book.

My final glossary term for this year’s A To Z sequence was suggested by aajohannas, who’d also suggested “randomness” and “tiling”. I don’t know of any blogs or other projects they’re behind, but if I do hear, I’ll pass them on.

# Zugzwang.

Some areas of mathematics struggle against the question, “So what is this useful for?” As though usefulness were a particular merit — or demerit — for a field of human study. Most mathematics fields discover some use, though, even if it takes centuries. Others are born useful. Probability, for example. Statistics. Know what the fields are and you know why they’re valuable.

Game theory is another of these. The subject, as often happens, we can trace back centuries. Usually as the study of some particular game. Occasionally in the study of some political science problem. But game theory developed a particular identity in the early 20th century. Some of this from set theory experts. Some from probability experts. Some from John von Neumann, because it was the 20th century and all that. Calling it “game theory” explains why anyone might like to study it. Who doesn’t like playing games? Who, studying a game, doesn’t want to play it better?

But why it might be interesting is different from why it might be important. Think of what a game is. It is a string of choices made by one or more parties. The point of the choices is to achieve some goal. Put that way you realize: this is everything. All life is making choices, all in the pursuit of some goal, even if that goal is just “not end up any worse off”. I don’t know that the earliest researchers in game theory as a field realized what a powerful subject they had touched on. But by the 1950s they were doing serious work in strategic planning, and by 1964 were even giving us Stanley Kubrick movies.

This is taking me away from my glossary term. The field of games is enormous. If we narrow the field some we can discuss specific kinds of games. And say more involved things about these games. So first we’ll limit things by thinking only of sequential games. These are ones where there are a set number of players, and they take turns making choices. I’m not sure whether the field expects the order of play to be the same every time. My understanding is that much of the focus is on two-player games. What’s important is that at any one step there’s only one party making a choice.

The other thing narrowing the field is to think of information. There are many things that can affect the state of the game. Some of them might be obvious, like where the pieces are on the game board. Or how much money a player has. We’re used to that. But there can be hidden information. A player might conceal some game money so as to make other players underestimate her resources. Many card games have one or more cards concealed from the other players. There can be information unknown to any party. No one can make a useful prediction what the next throw of the game dice will be. Or what the next event card will be.

But there are games where there’s none of this ambiguity. These are called games with “perfect information”. In them all the players know the past moves every player has made. Or at least should know them. Players are allowed to forget what they ought to know.

There’s a separate but similar-sounding idea called “complete information”. In a game with complete information, players know everything that affects the gameplay. At least, probably, apart from what their opponents intend to do. This might sound like an impossibly high standard, at first. All games with shuffled decks of cards and with dice to roll are out. There’s no concealing or lying about the state of affairs.

Set complete-information aside; we don’t need it here. Think only of perfect-information games. What are they? Some ancient games, certainly. Tic-tac-toe, for example. Some more modern versions, like Connect Four and its variations. Some that are actually deep, like checkers and chess and go. Some that are, arguably, more puzzles than games, as in sudoku. Some that hardly seem like games, like several people agreeing how to cut a cake fairly. Some that seem like tests to prove people are fundamentally stupid, like when you auction off a dollar. (The rules are set so players can easily end up paying more than a dollar.) But that’s enough for me, at least. You can see there are games of clear, tangible interest here.

The last restriction: think only of two-player games. Or at least two parties. Any of these two-party sequential games with perfect information are a part of “combinatorial game theory”. It doesn’t usually allow for incomplete-information games. But at least the MathWorld glossary doesn’t demand they be ruled out. So I will defer to this authority. I’m not sure how the name “combinatorial” got attached to this kind of game. My guess is that it seems like you should be able to list all the possible combinations of legal moves. That number may be enormous, as chess and go players are always going on about. But you could imagine a vast book which lists every possible game. If your friend ever challenged you to a game of chess the two of you could simply agree, oh, you’ll play game number 2,038,940,949,172 and then look up to see who won. Quite the time-saver.

Most games don’t have such a book, though. Players have to act on what they understand of the current state, and what they think the other player will do. This is where we get strategies from. Not just what we plan to do, but what we imagine the other party plans to do. When working out a strategy we often expect the other party to play perfectly. That is, to make no mistakes, to not do anything that worsens their position. Or that reduces their chance of winning.

… And yes, arguably, the word “chance” doesn’t belong there. These are games where the rules are known, every past move is known, every future move is in principle computable. And if we suppose everyone is making the best possible move then we can imagine forecasting the whole future of the game. One player has a “chance” of winning in the same way Christmas day of the year 2038 has a “chance” of being on a Tuesday. That is, the probability is just an expression of our ignorance, that we don’t happen to be able to look it up.
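That “look it up”, by the way, is literal for the calendar, an aside of my own: the weekday is deterministic and takes two lines to settle. (It turns out to be a Saturday.)

```python
from datetime import date

print(date(2038, 12, 25).strftime("%A"))  # Saturday
```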

But what choice do we have? I’ve never seen a reference that lists all the possible games of tic-tac-toe. And that’s about the simplest combinatorial-game-theory game anyone might actually play. What’s possible is to look at the current state of the game. And evaluate which player seems to be closer to her goal. And then look at all the possible moves.

There are three things a move can do. It can put the party closer to the goal. It can put the party farther from the goal. Or it can do neither. On her turn the other party might do something that moves you farther from your goal, moves you closer to your goal, or doesn’t affect your status at all. It seems like this makes strategy obvious. On every step take the available move that takes one closest to the goal. This is known as a “greedy” strategy. As the name suggests it isn’t automatically bad. If you expect the game to be a short one, greed might be the best approach. The catch is that moves that seem less good — even ones that seem to hurt you initially — might set up other, even better moves. So strategy requires some thinking beyond the current step. Properly, it requires thinking through to the end of the game. Or at least until the end of the game seems obvious.
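That thinking-through-to-the-end can be written down directly for a small enough game. A sketch of my own, using the subtraction game — take 1, 2, or 3 counters from a pile; whoever takes the last counter wins — rather than anything as big as chess. A position is a win for the player to move exactly when some move leaves the opponent in a losing position:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(pile):
    # True if the player to move can force a win: some legal move leaves
    # the opponent in a position from which *they* cannot force a win
    return any(not can_win(pile - take) for take in (1, 2, 3) if take <= pile)

losing = [n for n in range(1, 13) if not can_win(n)]
print(losing)  # [4, 8, 12] -- multiples of four are lost for the player to move
```

The greedy instinct, grab as much as you can, is exactly wrong here; the winning strategy is to always leave the opponent a multiple of four.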

We should like a strategy that leaves us no choice but to win. Next-best would be one that leaves the game undecided, since something might happen like the other player needing to catch a bus and so resigning. This is how I got my solitary win in the two months I spent in the college chess club. Worst would be the games that leave us no choice but to lose.

It can be that there are no good moves. That is, that every move available makes it a little less likely that we win. Sometimes a game offers the chance to pass, preserving the state of the game but giving the other party the turn. Then maybe the other party will do something that creates a better opportunity for us. But if we are allowed to pass, there’s a good chance the game lets the other party pass, too, and we end up in the same fix. And it may be the rules of the game don’t allow passing anyway. One must move.

The phenomenon of having to make a move when it’s impossible to make a good move has prominence in chess. I don’t have the chess knowledge to say how common the situation is. But it seems to be a situation people who study chess problems love. I suppose it appeals to a love of lost causes and the hope that you can be brilliant enough to see what everyone else has overlooked. German chess literature gave it a name 160 years ago, “zugzwang”, “compulsion to move”. Somehow I never encountered the term when I was briefly a college chess player. Perhaps because I was never in zugzwang and was just too incompetent a player to find my good moves. I first encountered the term in Michael Chabon’s The Yiddish Policeman’s Union. The protagonist picked up on the term as he investigated the murder of a chess player and then felt himself in one.

Combinatorial game theorists have picked up the word, and sharpened its meaning. If I understand correctly chess players allow the term to be used for any case where a player hurts her position by moving at all. Game theorists make it more dire. This may reflect their knowledge that an optimal strategy might require taking some dismal steps along the way. The game theorist formally grants the term only to the situation where the compulsion to move changes what should be a win into a loss. This seems terrible, but then, we’ve all done this in play. We all feel terrible about it.

I’d like here to give examples. But in searching the web I can find only courses in game theory, which are a bit too much for even me to summarize, or chess problems, which I’m not up to understanding. It seems hard to set out an example: I need to not just set out the game, but show that what had been a win is now, by any available move, turned into a loss. Chess is looser. It even allows, I discover, a double zugzwang, where both players are at a disadvantage if they have to move.

It’s a quite relatable problem. You see why game theory has this reputation as mathematics that touches all life.

And with that … I am done! All of the Fall 2018 Mathematics A To Z posts should be at this link. Next week I’ll post my big list of all the letters, though. And, as has become tradition, a post about what I learned by doing this project. And sometime before then I should have at least one more Reading the Comics post. Thanks kindly for reading and we’ll see when in 2019 I feel up to doing another of these.

## From my Fourth A-to-Z: Zeta Functions

I did not remember how long a buildup there was to my Summer 2017 writings about the Zeta function. But it’s something that takes a lot of setup. I don’t go into why the Riemann Hypothesis is interesting. I might have been saving that for a later A-to-Z. Or I might have trusted that since every pop mathematics blog has a good essay about the Riemann Hypothesis already there wasn’t much I could add.

I realize on re-reading that one might take me to have said that the final exam for my Intro to Complex Analysis course was always in the back of my textbook. I’d meant that after the final, I tucked it into my book and left it there. Probably nobody was confused by this.

Today Gaurish, of For the love of Mathematics, gives me the last subject for my Summer 2017 A To Z sequence. And also my greatest challenge: the Zeta function. The subject comes to all pop mathematics blogs. It comes to all mathematics blogs. It’s not difficult to say something about a particular zeta function. But to say something at all original? Let’s watch.

# Zeta Function.

The spring semester of my sophomore year I had Intro to Complex Analysis. Monday Wednesday 7:30; a rare evening class, one of the few times I’d eat dinner and then go to a lecture hall. There I discovered something strange and wonderful. Complex Analysis is a far easier topic than Real Analysis. Both are courses about why calculus works. But why calculus for complex-valued numbers works is a much easier problem than why calculus for real-valued numbers works. It’s dazzling. Part of this is that Complex Analysis, yes, builds on Real Analysis. So Complex can take for granted some things that Real has to prove. I didn’t mind. Given the way I crashed through Intro to Real Analysis I was glad for a subject that was, relatively, a breeze.

As we worked through Complex Variables and Applications so many things, so very many things, got to be easy. The basic unit of complex analysis, at least as we young majors learned it, was in contour integrals. These are integrals whose value depends on the values of a function on a closed loop. The loop is in the complex plane. The complex plane is, well, your ordinary plane. But we say the x-coordinate and the y-coordinate are parts of the same complex-valued number. The x-coordinate is the real-valued part. The y-coordinate is the imaginary-valued part. And we call that summation ‘z’. In complex-valued functions ‘z’ serves the role that ‘x’ does in normal mathematics.

So a closed loop is exactly what you think. Take a rubber band and twist it up and drop it on the table. That’s a closed loop. Suppose you want to integrate a function, ‘f(z)’. If you can always take its derivative on this loop and on the interior of that loop, then its contour integral is … zero. No matter what the function is. As long as it’s “analytic”, as the terminology has it. Yeah, we were all stunned into silence too. (Granted, mathematics classes are usually quiet, since it’s hard to get a good discussion going. Plus many of us were in post-dinner digestive lulls.)
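
That claim invites a numerical check. Here is a minimal sketch, using nothing past the standard library: chop the unit circle into small arcs, sum f(midpoint) times the change in z along each arc, and watch an everywhere-analytic function integrate to essentially zero. The particular polynomial and the step count are arbitrary illustrative choices.

```python
import cmath

def contour_integral(f, n=10_000):
    """Approximate the integral of f around the unit circle |z| = 1
    by summing f(midpoint) * (change in z) over n small arcs."""
    total = 0j
    for k in range(n):
        t0 = 2 * cmath.pi * k / n
        t1 = 2 * cmath.pi * (k + 1) / n
        mid = cmath.exp(1j * (t0 + t1) / 2)
        total += f(mid) * (cmath.exp(1j * t1) - cmath.exp(1j * t0))
    return total

# z^3 - 2z + 5 is analytic everywhere, so the loop integral is (nearly) zero
print(abs(contour_integral(lambda z: z**3 - 2*z + 5)))
```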

Integrating regular old functions of real-valued numbers is this tedious process. There’s sooooo many rules and possibilities and special cases to consider. There’s sooooo many tricks that get you the integrals of some functions. And then here, with complex-valued integrals for analytic functions, you know the answer before you even look at the function.

As you might imagine, since this is only page 113 of a 341-page book there’s more to it. Most functions that anyone cares about aren’t analytic. At least they’re not analytic everywhere inside regions that might be interesting. There’s usually some points where an interesting function ‘f(z)’ is undefined. We call these “singularities”. Yes, like starships are always running into. Only we rarely get propelled into other universes or other times or turned into ghosts or stuff like that.

So much of the rest of the course turns into ways to avoid singularities. Sometimes you can spackle them over. This is when the function happens not to be defined somewhere, but you can see what it ought to be. Sometimes you have to do something more. This turns into a search for “removable” singularities. And this does something so brilliant it looks illicit. You modify your closed loop, so that it comes up very close, as close as possible, to the singularity, but studiously avoids it. Follow this game of I’m-not-touching-you right and you can turn your integral into two parts. One is the part that’s equal to zero. The other is the part that’s a constant times whatever the function is at the singularity you’re removing. And that ought to be easy to find the value for. (Being able to find a function’s value doesn’t mean you can find its derivative.)
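
That constant, for a simple singularity like the ones here, turns out to be $2 \pi \imath$ times the function's value; the standard name for this is the Cauchy integral formula. A rough numerical check, summing f(midpoint) times the change in z around the unit circle. The integrand $e^z / z$ is chosen purely for illustration; its only singularity is at z = 0, inside the loop.

```python
import cmath

def contour_integral(f, n=10_000):
    """Approximate the integral of f around the unit circle |z| = 1."""
    total = 0j
    for k in range(n):
        t0 = 2 * cmath.pi * k / n
        t1 = 2 * cmath.pi * (k + 1) / n
        mid = cmath.exp(1j * (t0 + t1) / 2)
        total += f(mid) * (cmath.exp(1j * t1) - cmath.exp(1j * t0))
    return total

# e^z / z is analytic except at z = 0; the loop integral picks up
# 2*pi*i times the function's value there, e^0 = 1
result = contour_integral(lambda z: cmath.exp(z) / z)
print(result)  # close to 2*pi*i, about 6.2832j
```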

Those tricks were hard to master. Not because they were hard. Because they were easy, in a context where we expected hard. But after that we got into how to move singularities. That is, how to do a change of variables that moved the singularities to where they’re more convenient for some reason. How could this be more convenient? Because of chapter five, “Series”. In regular old calculus we learn how to approximate well-behaved functions with polynomials. In complex-variable calculus, we learn the same thing all over again. They’re polynomials of complex-valued variables, but it’s the same sort of thing. And not just polynomials, but things that look like polynomials except they’re powers of $\frac{1}{z}$ instead. These open up new ways to approximate functions, and to remove singularities from functions.

And then we get into transformations. These are about turning a problem that’s hard into one that’s easy. Or at least different. They’re a change of variable, yes. But they also change what exactly the function is. This reshuffles the problem. Makes for a change in singularities. Could make ones that are easier to work with.

One of the useful, and so common, transforms is called the Laplace-Stieltjes Transform. (“Laplace” is said like you might guess. “Stieltjes” is said, or at least we were taught to say it, like “Stilton cheese” without the “ton”.) And it tends to create functions that look like a series, the sum of a bunch of terms. Infinitely many terms. Each of those terms looks like a number times another number raised to some constant times ‘z’. As the course came to its conclusion, we were all prepared to think about these infinite series. Where singularities might be. Which of them might be removable.

These functions, these results of the Laplace-Stieltjes Transform, we collectively call ‘zeta functions’. There are infinitely many of them. Some of them are relatively tame. Some of them are exotic. One of them is world-famous. Professor Walsh — I don’t mean to name-drop, but I discovered the syllabus for the course tucked in the back of my textbook and I’m delighted to rediscover it — talked about it.

That world-famous one is, of course, the Riemann Zeta function. Yes, that same Riemann who keeps turning up, over and over again. It looks simple enough. Almost tame. Take the counting numbers, 1, 2, 3, and so on. Take your ‘z’. Raise each of the counting numbers to that ‘z’. Take the reciprocals of all those numbers. Add them up. What do you get?
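
That recipe is short enough to sketch in code, at least for real-valued z where the sum converges. At z = 2 the sum heads toward the famous value $\frac{\pi^2}{6}$; the convergence is slow, so this is only a rough check.

```python
import math

def zeta_partial(z, terms=100_000):
    """Partial sum of the Riemann zeta function: 1/1^z + 1/2^z + ..."""
    return sum(1 / n**z for n in range(1, terms + 1))

print(zeta_partial(2))   # about 1.64492...
print(math.pi**2 / 6)    # 1.6449340668...
```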

A mass of fascinating results, for one. Functions you wouldn’t expect are concealed in there. There’s strips where the real part is zero. There’s strips where the imaginary part is zero. There’s points where both the real and imaginary parts are zero. We know infinitely many of them. If ‘z’ is -2, for example, the sum is zero. Also if ‘z’ is -4. -6. -8. And so on. These are easy to show, and so are dubbed ‘trivial’ zeroes. To say some are ‘trivial’ is to say that there are others that are not trivial. Where are they?

Professor Walsh explained. We know of many of them. The nontrivial zeroes we know of all share something in common. They have a real part that’s equal to 1/2. There’s a zero that’s at about the number $\frac{1}{2} - \imath 14.13$. Also at $\frac{1}{2} + \imath 14.13$. There’s one at about $\frac{1}{2} - \imath 21.02$. Also about $\frac{1}{2} + \imath 21.02$. (There’s a symmetry, you maybe guessed.) Every nontrivial zero we’ve found has that same real part, 1/2. But we don’t know that they all do. Nobody does. It is the Riemann Hypothesis, the great unsolved problem of mathematics. Much more important than that Fermat’s Last Theorem, which back then was still merely a conjecture.

What a prospect! What a promise! What a way to set us up for the final exam in a couple of weeks.

I had an inspiration, a kind of scheme of showing that a nontrivial zero couldn’t be within a given circular contour. Make the size of this circle grow. Move its center farther away from the z-coordinate $\frac{1}{2} + \imath 0$ to match. Show there’s still no nontrivial zeroes inside. And therefore, logically, since I would have shown nontrivial zeroes couldn’t be anywhere but on this special line, and we know nontrivial zeroes exist … I leapt enthusiastically into this project. A little less enthusiastically the next day. Less so the day after. And on. After maybe a week I went a day without working on it. But came back, now and then, prodding at my brilliant would-be proof.

The Riemann Zeta function was not on the final exam, which I’ve discovered was also tucked into the back of my textbook. It asked more things like finding all the singular points and classifying what kinds of singularities they were for functions like $e^{-\frac{1}{z}}$ instead. If the syllabus is accurate, we got as far as page 218. And I’m surprised to see the professor put his e-mail address on the syllabus. It was merely “bwalsh@math”, but understand, the Internet was a smaller place back then.

I finished the course with an A-, but without answering any of the great unsolved problems of mathematics.

## From my Third A-to-Z: Zermelo-Fraenkel Axioms

The close of my End 2016 A-to-Z let me show off one of my favorite modes, that of amateur historian of mathematics who doesn’t check his primary references enough. So far as I know I don’t have any serious errors here, but then, how would I know? … But keep in mind that the full story is more complicated and more ambiguous than presented. (This is true of all histories.) That I could fit some personal history in was also a delight.

I don’t know why Thoralf Skolem’s name does not attach to the Zermelo-Fraenkel Axioms. Mathematical things are named with a shocking degree of arbitrariness. Skolem did well enough for himself.

gaurish gave me a choice for the Z-term to finish off the End 2016 A To Z. I appreciate it. I’m picking the more abstract thing because I’m not sure that I can explain zero briefly. The foundations of mathematics are a lot easier.

## Zermelo-Fraenkel Axioms

I remember the look on my father’s face when I asked if he’d tell me what he knew about sets. He misheard what I was asking about. When we had that straightened out my father admitted that he didn’t know anything particular. I thanked him and went off disappointed. In hindsight, I kind of understand why everyone treated me like that in middle school.

My father’s always quick to dismiss how much mathematics he knows, or could understand. It’s a common habit. But in this case he was probably right. I knew a bit about set theory as a kid because I came to mathematics late in the “New Math” wave. Sets were seen as fundamental to why mathematics worked without being so exotic that kids couldn’t understand them. Perhaps so; both my love and I delighted in what we got of set theory as kids. But if you grew up before that stuff was popular you probably had a vague, intuitive, and imprecise idea of what sets were. Mathematicians had only a vague, intuitive, and imprecise idea of what sets were through to the late 19th century.

And then came what mathematics majors hear of as the Crisis of Foundations. (Or a similar name, like Foundational Crisis. I suspect there are dialect differences here.) It reflected mathematics taking seriously one of its ideals: that everything in it could be deduced from clearly stated axioms and definitions using logically rigorous arguments. As often happens, taking one’s ideals seriously produces great turmoil and strife.

Before about 1900 we could get away with saying that a set was a bunch of things which all satisfied some description. That’s how I would describe it to a new acquaintance if I didn’t want to be treated like I was in middle school. The definition is fine if we don’t look at it too hard. “The set of all roots of this polynomial”. “The set of all rectangles with area 2”. “The set of all animals with four-fingered front paws”. “The set of all houses in Central New Jersey that are yellow”. That’s all fine.

And then if we try to be logically rigorous we get problems. We always did, though. They’re embodied by ancient jokes like the person from Crete who declared that all Cretans always lie; is the statement true? Or the slightly less ancient joke about the barber who shaves only the men who do not shave themselves; does he shave himself? If not jokes these should at least be puzzles faced in fairy-tale quests. Logicians dressed this up some. Bertrand Russell gave us the quite respectable “The set consisting of all sets which are not members of themselves”, and asked us to stare hard into that set. To this we have only one logical response, which is to shout, “Look at that big, distracting thing!” and run away. This satisfies the problem only for a while.

The while ended in — well, that took a while too. But between 1908 and the early 1920s Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem paused from arguing whose name would also be the best indie rock band name long enough to put set theory right. Their structure is known as Zermelo-Fraenkel Set Theory, or ZF. It gives us a reliable base for set theory that avoids any contradictions or catastrophic pitfalls. Or does so far as we have found in a century of work.

It’s built on a set of axioms, of course. Most of them are uncontroversial, things like declaring two sets are equivalent if they have the same elements. Declaring that the union of sets is itself a set. Obvious, sure, but it’s the obvious things that we have to make axioms. Maybe you could start an argument about whether we should just assume there exists some infinitely large set. But if we’re aware sets probably have something to teach us about numbers, and that numbers can get infinitely large, then it seems fair to suppose that there must be some infinitely large set. The axioms that aren’t simple obvious things like that are too useful to do without. They assume stuff like that no set is an element of itself. Or that every set has a “power set”, a new set comprising all the subsets of the original set. Good stuff to know.
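
The power set, at least, is concrete enough to build for small finite sets. A sketch in Python, with frozensets standing in for the subsets (a set of n elements has $2^n$ subsets, the empty set and the whole set included):

```python
from itertools import chain, combinations

def power_set(s):
    """Return the set of all subsets of s, as frozensets."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

p = power_set({1, 2, 3})
print(len(p))  # 2^3 = 8
```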

There is one axiom that’s controversial. Not controversial the way Euclid’s Parallel Postulate was. That’s the ugly one about lines crossing another line meeting on the same side they make angles smaller than something something or other. That axiom was controversial because it read so weird, so needlessly complicated. (It isn’t; it’s exactly as complicated as it must be. Or for a more instructive view, it’s as simple as it could be and still be useful.) The controversial axiom of Zermelo-Fraenkel Set Theory is known as the Axiom of Choice. It says if we have a collection of mutually disjoint sets, each with at least one thing in them, then it’s possible to pick exactly one item from each of the sets.

It’s impossible to dispute this is what we have axioms for. It’s about something that feels like it should be obvious: we can always pick something from a set. How could this not be true?

If it is true, though, we get some unsavory conclusions. For example, it becomes possible to take a ball the size of an orange and slice it up. We slice using mathematical blades. They’re not halted by something as petty as the desire not to slice atoms down the middle. We can reassemble the pieces. Into two balls. And worse, it doesn’t require we do something like cut the orange into infinitely many pieces. We expect crazy things to happen when we let infinities get involved. No, though, we can do this cut-and-duplicate thing by cutting the orange into five pieces. When you hear that it’s hard to know whether to point to the big, distracting thing and run away. If we dump the Axiom of Choice we don’t have that problem. But can we do anything useful without the ability to make a choice like that?

And we’ve learned that we can. If we want to use the Zermelo-Fraenkel Set Theory with the Axiom of Choice we say we’re working in “ZFC”, Zermelo-Fraenkel-with-Choice. We don’t have to. If we don’t want to make any assumption about choices we say we’re working in “ZF”. Which to use depends on what one wants to do.

Either way Zermelo and Fraenkel and Skolem established set theory on the foundation we use to this day. We’re not required to use them, no; there’s a construction called von Neumann-Bernays-Gödel Set Theory that’s supposed to be more elegant. They didn’t mention it in my logic classes that I remember, though.

And still there’s important stuff we would like to know which even ZFC can’t answer. The most famous of these is the continuum hypothesis. Everyone knows — excuse me. That’s wrong. Everyone who would be reading a pop mathematics blog knows there are different-sized infinitely-large sets. And knows that the set of integers is smaller than the set of real numbers. The question is: is there a set bigger than the integers yet smaller than the real numbers? The Continuum Hypothesis says there is not.

Zermelo-Fraenkel Set Theory, even though it’s all about the properties of sets, can’t tell us if the Continuum Hypothesis is true. But that’s all right; it can’t tell us if it’s false, either. Whether the Continuum Hypothesis is true or false stands independent of the rest of the theory. We can assume whichever state is more useful for our work.

Back to the ideals of mathematics. One question that produced the Crisis of Foundations was consistency. How do we know our axioms don’t contain a contradiction? It’s hard to say. Typically a set of axioms we can prove consistent are also a set too boring to do anything useful in. Zermelo-Fraenkel Set Theory, with or without the Axiom of Choice, has a lot of interesting results. Do we know the axioms are consistent?

No, not yet. We know some of the axioms are mutually consistent, at least. And we have some results which, if true, would prove the axioms to be consistent. We don’t know if they’re true. Mathematicians are generally confident that these axioms are consistent. Mostly on the grounds that if there were a problem something would have turned up by now. It’s withstood all the obvious faults. But the universe is vaster than we imagine. We could be wrong.

It’s hard to live up to our ideals. After a generation of valiant struggling we settle into hoping we’re doing good enough. And waiting for some brilliant mind that can get us a bit closer to what we ought to be.

## From my Second A-to-Z: Z-score

When I first published this I mentioned not knowing why ‘z’ got picked as a variable name. Any letter besides ‘x’ would make sense. As happens when I toss this sort of question out, I haven’t learned anything about why ‘z’ and not, oh, ‘y’ or ‘t’ or even ‘d’. My best guess is that we don’t want to confuse references to the original data with references to the transformed. And while you can write a ‘z’ so badly it looks like an ‘x’, it’s much easier to write a ‘y’ that looks like an ‘x’. I don’t know whether the Preliminary SAT is still a thing.

And we come to the last of the Leap Day 2016 Mathematics A To Z series! Z is a richer letter than x or y, but it’s still not so rich as you might expect. This is why I’m using a term that everybody figured I’d use the last time around, when I went with z-transforms instead.

## Z-Score

You get an exam back. You get an 83. Did you do well?

Hard to say. It depends on so much. If you expected to barely pass and maybe get as high as a 70, then you’ve done well. If you took the Preliminary SAT, with a composite score that ranges from 60 to 240, an 83 is catastrophic. If the instructor gave an easy test, you maybe scored right in the middle of the pack. If the instructor sees tests as a way to weed out the undeserving, you maybe had the best score in the class. It’s impossible to say whether you did well without context.

The z-score is a way to provide that context. It draws that context by comparing a single score to all the other values. And underlying that comparison is the assumption that whatever it is we’re measuring fits a pattern. Usually it does. The pattern we suppose stuff we measure will fit is the Normal Distribution. Sometimes it’s called the Standard Distribution. Sometimes it’s called the Standard Normal Distribution, so that you know we mean business. Sometimes it’s called the Gaussian Distribution. I wouldn’t rule out someone writing the Gaussian Normal Distribution. It’s also called the bell curve distribution. As the names suggest by throwing around “normal” and “standard” so much, it shows up everywhere.

A normal distribution means that whatever it is we’re measuring follows some rules. One is that there’s a well-defined arithmetic mean of all the possible results. And that arithmetic mean is the most common value to turn up. That’s called the mode. Also, this arithmetic mean, and mode, is also the median value. There’s as many data points less than it as there are greater than it. Most of the data values are pretty close to the mean/mode/median value. There are still some as you get farther from this mean. But the number of data values far away from it is pretty tiny. You can, in principle, get a value that’s way far away from the mean, but it’s unlikely.

We call this standard because it might as well be. Measure anything that varies at all. Draw a chart with the horizontal axis all the values you could measure. The vertical axis is how many times each of those values comes up. It’ll be a standard distribution uncannily often. The standard distribution appears when the thing we measure satisfies some quite common conditions. Almost everything satisfies them, or nearly satisfies them. So we see bell curves so often when we plot how frequently data points come up. It’s easy to forget that not everything is a bell curve.

The normal distribution has a mean, and median, and mode, of 0. It’s tidy that way. And it has a standard deviation of exactly 1. The standard deviation is a way of measuring how spread out the bell curve is. About 95 percent of all observed results are less than two standard deviations away from the mean. About 99 percent of all observed results are less than three standard deviations away. 99.9997 percent of all observed results are less than six standard deviations away. That last might sound familiar to those who’ve worked in manufacturing. At least it does once you know that the Greek letter sigma is the common shorthand for a standard deviation. “Six Sigma” is a quality-control approach. It’s meant to make sure one understands all the factors that influence a product and controls them. This is so the product falls outside the design specifications only 0.0003 percent of the time.
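
Those coverage figures come from the error function, which Python's standard library carries: the fraction of a normal distribution within k standard deviations of the mean is $\mathrm{erf}(k / \sqrt{2})$. A quick check of the one-, two-, and three-sigma figures:

```python
import math

# P(|Z| < k) = erf(k / sqrt(2)) for a standard normal variable Z
for k in (1, 2, 3):
    print(k, math.erf(k / math.sqrt(2)))
# k = 2 gives about 0.9545 and k = 3 about 0.9973
```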

This is the normal distribution. It has a standard deviation of 1 and a mean of 0, by definition. And then people using statistics go and muddle the definition. It is always so, with the stuff people actually use. Forgive them. It doesn’t really change the shape of the curve if we scale it, so that the standard deviation is, say, two, or ten, or π, or any positive number. It just changes where the tick marks are on the x-axis of our plot. And it doesn’t really change the shape of the curve if we translate it, adding (or subtracting) some number to it. That makes the mean, oh, 80. Or -15. Or eπ. Or some other number. That just changes what value we write underneath the tick marks on the plot’s x-axis. We can find a scaling and translation of the normal distribution that fits whatever data we’re observing.

When we find the z-score for a particular data point we’re undoing this translation and scaling. We figure out what number on the standard distribution maps onto the original data set’s value. About two-thirds of all data points are going to have z-scores between -1 and 1. About nineteen out of twenty will have z-scores between -2 and 2. About 99 out of 100 will have z-scores between -3 and 3. If we don’t see this, and we have a lot of data points, then that suggests our data isn’t normally distributed.

I don’t know why the letter ‘z’ is used for this instead of, say, ‘y’ or ‘w’ or something else. ‘x’ is out, I imagine, because we use that for the original data. And ‘y’ is a natural pick for a second measured variable. ‘z’, I expect, is just far enough from ‘x’ that it isn’t needed for some more urgent duty, while being close enough to ‘x’ to suggest it’s some measured thing.

The z-score gives us a way to compare how interesting or unusual scores are. If the exam on which we got an 83 has a mean of, say, 74, and a standard deviation of 5, then we can say this 83 is a pretty solid score. If it has a mean of 78 and a standard deviation of 10, then the score is better-than-average but not exceptional. If the exam has a mean of 70 and a standard deviation of 4, then the score is fantastic. We get to meaningfully compare scores from the measurements of different things. And so it’s one of the tools with which statisticians build their work.
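
The arithmetic behind those comparisons is as plain as it sounds: subtract the mean, divide by the standard deviation. A sketch, using the three hypothetical exams from the paragraph above:

```python
def z_score(x, mean, std_dev):
    """How many standard deviations x sits above (or below) the mean."""
    return (x - mean) / std_dev

print(z_score(83, 74, 5))   # 1.8, a pretty solid score
print(z_score(83, 78, 10))  # 0.5, better than average but not exceptional
print(z_score(83, 70, 4))   # 3.25, fantastic
```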

## From my First A-to-Z: Z-transform

Back in the day I taught in a Computational Science department, which threw me out to exciting and new-to-me subjects more than once. One quite fun semester I was learning, and teaching, signal processing. This set me up for the triumphant conclusion of my first A-to-Z.

One of the things you can see in my style is mentioning the connotations implied by whether one uses x or z as a variable. Any letter will do, for the use it’s put to. But to use the name ‘z’ suggests an openness to something that ‘x’ doesn’t.

There’s a mention here about stability in algorithms, and the note that we can process data in ways that are stable or are unstable. I don’t mention why one would want or not want stability. Wanting stability hardly seems to need explaining; isn’t that the good option? And, often, yes, we want stable systems because they correct and wipe away error. But there are reasons we might want instability, or at least less stability. Too stable a system will obscure weak trends, or the starts of trends. Your weight flutters day by day in ways that don’t mean much, which is why it’s better to consider a seven-day average. If you took instead a 700-day running average, these meaningless fluctuations would be invisible. But you also would take a year or more to notice whether you were losing or gaining weight. That’s one of the things stability costs.

## z-transform.

The z-transform comes to us from signal processing. The signal we take to be a sequence of numbers, all representing something sampled at uniformly spaced times. The temperature at noon. The power being used, second-by-second. The number of customers in the store, once a month. Anything. The sequence of numbers we take to stretch back into the infinitely great past, and to stretch forward into the infinitely distant future. If it doesn’t, then we pad the sequence with zeroes, or some other safe number that we know means “nothing”. (That’s another classic mathematician’s trick.)

It’s convenient to have a name for this sequence. “a” is a good one. The different sampled values are denoted by an index. $a_0$ represents whatever value we have at the “start” of the sample. That might represent the present. That might represent where sampling began. That might represent just some convenient reference point. It’s the equivalent of mileage marker zero; we have to have something be the start.

$a_1$, $a_2$, $a_3$, and so on are the first, second, third, and so on samples after the reference start. $a_{-1}$, $a_{-2}$, $a_{-3}$, and so on are the first, second, third, and so on samples from before the reference start. That might be the last couple of values before the present.

So for example, suppose the temperatures the last several days were 77, 81, 84, 82, 78. Then we would probably represent this as $a_{-4} = 77$, $a_{-3} = 81$, $a_{-2} = 84$, $a_{-1} = 82$, $a_0 = 78$. We’ll hope this is Fahrenheit or that we are remotely sensing a temperature.

The z-transform of a sequence of numbers is something that looks a lot like a polynomial, based on these numbers. For this five-day temperature sequence the z-transform would be the polynomial $77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0$. ($z^1$ is the same as $z$. $z^0$ is the same as the number “1”. I wrote it this way to make the pattern more clear.)

I would not be surprised if you protested that this doesn’t merely look like a polynomial but actually is one. You’re right, of course, for this set, where all our samples are from negative (and zero) indices. If we had positive indices then we’d lose the right to call the transform a polynomial. Suppose we trust our weather forecaster completely, and add in $a_1 = 83$ and $a_2 = 76$. Then the z-transform for this set of data would be $77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0 + 83 \left(\frac{1}{z}\right)^1 + 76 \left(\frac{1}{z}\right)^2$. You’d probably agree that’s not a polynomial, although it looks a lot like one.
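
One way to sketch the whole construction as code: keep the samples in a dictionary keyed by index, and remember that sample $a_n$ contributes $a_n$ times z to the minus n, so negative indices come out as positive powers of z. The evaluation point z = 2 is an arbitrary choice for illustration.

```python
def z_transform(samples, z):
    """Evaluate the z-transform of a finite sample sequence.
    samples maps index n to value a_n; a_n contributes a_n * z**(-n)."""
    return sum(a_n * z ** (-n) for n, a_n in samples.items())

# The temperature samples from the text, indices -4 through 2
temps = {-4: 77, -3: 81, -2: 84, -1: 82, 0: 78, 1: 83, 2: 76}
print(z_transform(temps, 2.0))  # 2518.5
```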

The use of z for these polynomials is basically arbitrary. The main reason to use z instead of x is that we can learn interesting things if we imagine letting z be a complex-valued number. And z carries connotations of “a possibly complex-valued number”, especially if it’s used in ways that suggest we aren’t looking at coordinates in space. It’s not that there’s anything in the symbol x that refuses the possibility of it being complex-valued. It’s just that z appears so often in the study of complex-valued numbers that it reminds a mathematician to think of them.

A sound question you might have is: why do this? And there’s not much advantage in going from a list of temperatures “77, 81, 84, 82, 78, 83, 76” over to a polynomial-like expression $77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0 + 83 \left(\frac{1}{z}\right)^1 + 76 \left(\frac{1}{z}\right)^2$.

Where this starts to get useful is when we have an infinitely long sequence of numbers to work with. Yes, it does too. It will often turn out that an interesting sequence transforms into a polynomial that itself is equivalent to some easy-to-work-with function. My little temperature example there won’t do it, no. But consider the sequence that’s zero for all negative indices, and 1 for the zero index and all positive indices. This gives us the polynomial-like structure $\cdots + 0z^2 + 0z^1 + 1 + 1\left(\frac{1}{z}\right)^1 + 1\left(\frac{1}{z}\right)^2 + 1\left(\frac{1}{z}\right)^3 + 1\left(\frac{1}{z}\right)^4 + \cdots$. And that turns out to be the same as $1 \div \left(1 - \left(\frac{1}{z}\right)\right)$. That’s much shorter to write down, at least.
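
That closed form is easy to sanity-check. For any z with magnitude greater than 1 the series converges, so a couple hundred terms of the partial sum should agree with $1 \div \left(1 - \left(\frac{1}{z}\right)\right)$ to machine precision. The choice z = 3 here is arbitrary.

```python
def step_transform_partial(z, terms=200):
    """Partial sum of 1 + 1/z + (1/z)^2 + ... for the unit-step sequence."""
    return sum((1 / z) ** n for n in range(terms))

z = 3.0
print(step_transform_partial(z))  # about 1.5
print(1 / (1 - 1 / z))            # exactly 1.5
```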

Probably you’ll grant that, but still wonder what the point of doing that is. Remember that we started by thinking of signal processing. A processed signal is a matter of transforming your initial signal. By this we mean multiplying your original signal by something, or adding something to it. For example, suppose we want a five-day running average temperature. This we can find by taking one-fifth of today’s temperature, $a_0$, and adding to that one-fifth of yesterday’s temperature, $a_{-1}$, and one-fifth of the day before’s temperature, $a_{-2}$, and one-fifth of $a_{-3}$, and one-fifth of $a_{-4}$.
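
That recipe, one-fifth of each of the most recent five samples, can be sketched as a small filter. The temperatures reuse the example sequence from earlier in this essay.

```python
def running_average(samples, window=5):
    """Average each sample with the (window - 1) samples before it."""
    return [sum(samples[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(samples))]

temps = [77, 81, 84, 82, 78, 83, 76]
print(running_average(temps))  # [80.4, 81.6, 80.6]
```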

The effect of processing a signal is equivalent to manipulating its z-transform. By studying properties of the z-transform, such as where its values are zero or where they are imaginary or where they are undefined, we learn things about what the processing is like. We can tell whether the processing is stable — does it keep a small error in the original signal small, or does it magnify it? Does it serve to amplify parts of the signal and not others? Does it dampen unwanted parts of the signal while keeping the main signal intact?

We can understand how data will be changed by understanding the z-transform of the way we manipulate it. That z-transform turns a signal-processing idea into a complex-valued function. And we have a lot of tools for studying complex-valued functions. So we become able to say a lot about the processing. And that is what the z-transform gets us.

## A Moment Which Turns Out to Be Universal

I was reading a bit farther in Charles Coulson Gillispie’s Pierre-Simon Laplace, 1749 – 1827, A Life In Exact Science and reached this paragraph, too good not to share:

Wishing to study [ Méchanique céleste ] in advance, [ Jean-Baptiste ] Biot offered to read proof. When he returned the sheets, he would often ask Laplace to explain some of the many steps that had been skipped over with the famous phrase, “it is easy to see”. Sometimes, Biot said, Laplace himself would not remember how he had worked something out and would have difficulty reconstructing it.

So, it’s not just you and your instructors.

(Gillispie wrote the book along with Robert Fox and Ivor Grattan-Guinness.)

## How All Of 2021 Treated My Mathematics Blog

Oh, you know, how did 2021 treat anybody? I always do one of these surveys for the end of each month. It’s only fair to do one for the end of the year also.

2021 was my tenth full year blogging around here. I might have made more of that if the actual anniversary in late September hadn’t coincided with a lot of personal hardships. 2021 was a quiet year around these parts with only 94 things posted. That’s the fewest of any full year. (I posted only 41 things in 2011, but I only started posting at all in late September of that year.) That seems not to have done my readership any harm. There were 28,832 pages viewed in 2021, up from 24,474 in 2020 and a fair bit above the 24,662 given in my previously best-viewed year of 2019. Eleven data points (the partial year 2011, and the full years 2012 through 2021) aren’t many, so there’s not much pattern-drawing to do here. But it does seem like I have a year of sharp increases and then a year of slight declines in page views. I suppose we’ll check in in 2023 and see if that pattern holds.

One thing not declining? The number of unique visitors. WordPress recorded 20,339 unique visitors in 2021, a comfortable bit above 2020’s 16,870 and 2019’s 16,718. So far I haven’t seen a year-over-year decline in unique visitors. That’s gratifying.

Less gratifying: the number of likes continues its decline. It hasn’t increased, around here, since 2015, when a seemingly impossible 3,273 likes were given by readers. In 2021 there were only 481 likes, the fewest since 2013. The dropping-off of likes has so resembled a Poisson distribution that I’m tempted to see whether it actually fits one.
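If I did sit down to check that hunch, a rough sketch might go like this. The counts below are made-up stand-ins, not my actual like totals; the idea is to scale a Poisson probability mass function by the total number of likes and grid-search for the rate λ with the least squared error against the yearly counts.

```python
import math

def poisson_pmf(k, lam):
    # Probability that a Poisson(lam) random variable equals k.
    return lam ** k * math.exp(-lam) / math.factorial(k)

# Made-up yearly like counts for illustration; k = years since some start year.
likes = [150, 900, 2500, 3300, 2100, 1100, 600, 480]
total = sum(likes)

# Grid-search the rate lam whose scaled pmf best matches the yearly counts.
best_lam, best_err = None, float("inf")
for i in range(1, 801):
    lam = i / 100
    err = sum((likes[k] - total * poisson_pmf(k, lam)) ** 2
              for k in range(len(likes)))
    if err < best_err:
        best_lam, best_err = lam, err

print(best_lam)  # best-fit rate for these made-up counts
```

Since these illustrative counts peak in the fourth year, the fitted rate lands in the neighborhood of three, which is the sort of shape I have in mind.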

The number of comments dropped a slight bit. There were 188 given around here in 2021, but that’s only ten fewer than were given in 2020. It’s seven more than were given in 2019, so if there’s any pattern there I don’t know it.

WordPress lists 483 posts around here as having gotten four or more page views in the year. It won’t tell me everything that got even a single view, though. I’m not willing to do the work of stitching together the monthly page view data to learn everything that was of interest, however passing. I’ll settle with knowing what was most popular. And what were my most popular posts of the year mercifully ended? These posts from 2021 got more views than all the others:

There were 143 countries, or country-like entities, sending me any page views in 2021. I don’t know how that compares to earlier years. But here’s the roster of where page views came from:

United States 13,723
Philippines 3,994
India 2,507
United Kingdom 865
Australia 659
Germany 442
Brazil 347
South Africa 296
European Union 273
Sweden 230
Singapore 210
Italy 204
Austria 178
France 143
Finland 141
Malaysia 135
South Korea 135
Hong Kong SAR China 132
Ireland 131
Netherlands 117
Turkey 117
Spain 107
Pakistan 105
Thailand 102
Mexico 101
United Arab Emirates 100
Indonesia 97
Switzerland 95
Norway 87
New Zealand 86
Belgium 76
Nigeria 76
Russia 74
Japan 64
Taiwan 62
Poland 55
Greece 54
Denmark 52
Colombia 51
Israel 49
Ghana 46
Portugal 44
Czech Republic 40
Vietnam 38
Saudi Arabia 33
Argentina 30
Lebanon 30
Nepal 28
Egypt 25
Kuwait 23
Serbia 22
Chile 21
Croatia 21
Jamaica 20
Peru 20
Tanzania 20
Costa Rica 19
Romania 17
Sri Lanka 16
Ukraine 15
Hungary 13
Jordan 13
Bulgaria 12
China 12
Albania 11
Bahrain 11
Morocco 11
Estonia 10
Qatar 10
Slovakia 10
Cyprus 9
Kenya 9
Zimbabwe 9
Algeria 8
Oman 8
Belarus 7
Georgia 7
Honduras 7
Lithuania 7
Puerto Rico 7
Venezuela 7
Bosnia & Herzegovina 6
Ethiopia 6
Iraq 6
Belize 5
Bhutan 5
Moldova 5
Uruguay 5
Dominican Republic 4
Guam 4
Kazakhstan 4
Macedonia 4
Mauritius 4
Zambia 4
Åland Islands 3
Antigua & Barbuda 3
Bahamas 3
Cambodia 3
Gambia 3
Guatemala 3
Slovenia 3
Suriname 3
American Samoa 2
Azerbaijan 2
Bolivia 2
Cameroon 2
Guernsey 2
Malta 2
Papua New Guinea 2
Réunion 2
Rwanda 2
Sudan 2
Uganda 2
Afghanistan 1
Andorra 1
Armenia 1
Fiji 1
Iceland 1
Isle of Man 1
Latvia 1
Liberia 1
Liechtenstein 1
Luxembourg 1
Maldives 1
Marshall Islands 1
Mongolia 1
Myanmar (Burma) 1
Namibia 1
Palestinian Territories 1
Panama 1
Paraguay 1
Senegal 1
St. Lucia 1
Togo 1
Tunisia 1
Vatican City 1

I don’t know that I’ve gotten a reader from Vatican City before. I hope it’s not about the essay figuring what dates are most and least likely for Easter. I’d expect them to know that already.

My plan is to spend a bit more time republishing posts from old A-to-Z’s. And then I hope to finish off the Little 2021 Mathematics A-to-Z, late and battered but still carrying on. I intend to post something at least once a week after that, although I don’t have a clear idea what that will be. Perhaps I’ll finally work out the algorithm for Compute!’s New Automatic Proofreader. Perhaps I’ll fill in with A-to-Z style essays for topics I had skipped before. Or I might get back to reading the comics for their mathematics topics. I’m open to suggestions.

## Some Progress on the Infinitude of Monkeys

I have been reading Pierre-Simon Laplace, 1749 – 1827, A Life In Exact Science, by Charles Coulson Gillispie with Robert Fox and Ivor Grattan-Guinness. It’s less of a biography than I expected and more a discussion of Laplace’s considerable body of work. Part of Laplace’s work was in giving probability a logically coherent, rigorous meaning. Laplace discusses the gambler’s fallacy and the tendency to assign causes to random events. For example, if we came across letters from a printer’s font reading out ‘INFINITESIMAL’ we would think that deliberate. We wouldn’t think that for a string of letters in no recognized language. And that brings up this neat quote from Gillispie:

The example may in all probability be adapted from the chapter in the Port-Royal La Logique (1662) on judgement of future events, where Arnauld points out that it would be stupid to bet twenty sous against ten thousand livres that a child playing with printer’s type would arrange the letters to compose the first twenty lines of Virgil’s Aeneid.

The reference here is to a book by Antoine Arnauld and Pierre Nicole that I haven’t read or heard of before. But it makes a neat forerunner to the Infinite Monkey Theorem. That’s the study of what probability means when put to infinitely great or long processes. Émile Borel’s use of monkeys at a typewriter echoes this idea of children playing beyond their understanding. I don’t know whether Borel knew of Arnauld and Nicole’s example. But I did not want my readers to miss a neat bit of infinite-monkey trivia. Or to miss today’s Bizarro, offering yet another comic on the subject.