Reading the Comics, May 7, 2022: Does Comic Strip Master Command Not Do Mathematics Anymore Edition?


I mentioned in my last Reading the Comics post that it seems there are fewer mathematics-themed comic strips than there used to be. I know part of this is I’m trying to be more stringent. You don’t need me to say every time there’s a Roman numerals joke or that blackboards get mathematics symbols put on them. Still, it does feel like there are fewer candidate strips. Maybe the end of the 2010s was a boom time for comic strips aimed at high school teachers and I only now appreciate that? Only further installments of this feature will let us know.

Jim Benton’s Jim Benton Cartoons for the 18th of April, 2022 suggests an origin for those famous overlapping circle pictures. This did get me curious what’s known about how John Venn came to draw overlapping circles. There’s no reason he couldn’t have used triangles or rectangles or any shape, after all. It looks like the answer is nobody really knows.

Venn himself didn’t name the diagrams after himself. Wikipedia credits Charles Dodgson (Lewis Carroll) as describing “Venn’s Method of Diagrams” in 1896. Clarence Irving Lewis, in 1918, seems to be the first person to write “Venn Diagram”. Venn wrote of them as “Eulerian Circles”, referencing the Leonhard Euler who just did everything. Sir William Hamilton — the philosopher, not the quaternions guy — had used circles in these diagrams in his Lectures On Metaphysics and Logic, published posthumously. Hamilton asserted, correctly, that you could use these to represent logical syllogisms. He wrote that the 1712 logic text Nucleus Logicae Weisianae — predating Euler — used circles, and was right about that. He got the author wrong, crediting Christian Weise instead of the correct author, Johann Christian Lange.

John Venn, as a father, complaining: 'Why can't you brats pick up your HULA HOOPS when you're done playing with ... hang on. Wait a sec ... ' He's looking at three circles of about the same size, overlapping as a three-set Venn diagram. Caption: 'One day at the Venn House.'
Jim Benton’s Jim Benton Cartoons for the 18th of April, 2022. Although I didn’t have a tag for Jim Benton cartoons before, I have discussed them a couple times. Future essays mentioning Jim Benton Cartoons should be at this link.

With 1712 the trail seems to end, at least for this layperson doing a short essay’s worth of research. I don’t know what inspired Lange to try circles instead of any other shape. My guess, unburdened by evidence, is that it’s easy to draw circles, especially back in the days when every mathematician had a compass. I assume they weren’t too hard to typeset, at least compared to the many other shapes available. And you don’t need to even think about setting them with a rotation, the way a triangle or a pentagon might demand. But I also would not rule out a notion that circles have some connotation of perfection, in having infinite axes of symmetry and all points on them being equal in distance from the center and such. Might be the reasons fit in the intersection of the ethereal and the mundane.

Title: 'Physics hypotheses that are still on the table.' One is the No-Boundary Proposal, represented with a wireframe geodesic of an open cup. Another is The Weyl Curvature, represented with a wireframe model of a pointed ellipsoid. The punch line is The Victoria Principle, a small pile of beauty-care products.
Daniel Beyer’s Long Story Short for the 29th of April, 2022. This and other essays mentioning Long Story Short should be at this link.

Daniel Beyer’s Long Story Short for the 29th of April, 2022 puts out a couple of concepts from mathematical physics. These are all about geometry, which we now see as key to understanding physics. Particularly cosmology. The no-boundary proposal is a model constructed by James Hartle and Stephen Hawking. It’s about the first 10^{-43} seconds of the universe after the Big Bang. This is an era that was so hot that all our well-tested models of physical law break down. The salient part of the Hartle-Hawking proposal is the idea that in this epoch time becomes indistinguishable from space. If I follow it — do not rely on my understanding for your thesis defense — it’s kind of the way that stepping away from the North Pole first creates the ideas of north and south and east and west. It’s very hard to think of a way to test this which would differentiate it from other hypotheses about the first instances of the universe.

The Weyl Curvature is a less hypothetical construct. It’s a tensor, one of many interesting to physicists. This one represents the tidal forces on a body that’s moving along a geodesic. So, for example, how the moon of a planet gets distorted over its orbit. The Weyl Curvature also offers a way to describe how gravitational waves pass through vacuum. I’m not aware of any serious question of the usefulness or relevance of the thing. But the joke doesn’t work without at least two real physics constructs as setup.

Orange imp, speaking to a blue imp: 'What are you doing? Blue imp, who's sitting in the air, floating: 'I'm using my powers to make math work.' Orange: 'What?' Blue: 'If I lose my concentration, math stops working.' Blue falls over, crying, 'Oops!' Blue picks self up off the ground and says, 'There! Are all nineteen of you happy now?'
Liniers’ Macanudo for the 5th of May, 2022. Essays about some topic mentioned in Macanudo should be at this link.

Liniers’ Macanudo for the 5th of May, 2022 has one of the imps who inhabit the comic asserting responsibility for making mathematics work. It’s difficult to imagine what a creature could do to make mathematics work, or to not work. If pressed, we would say mathematics is the set of things we’re confident we could prove according to a small, pretty solid-seeming set of logical laws. And a somewhat larger set of axioms and definitions. (Few of these are proved completely, but that’s because it would involve a lot of fiddly boring steps that nobody doubts we could do if we had to. If this sounds sketchy, consider: do you believe my claim that I could alphabetize the books on the shelf to my right, even though I’ve never done that specific task? Why?) It would be like making a word-search puzzle not work.

The punch line, the blue imp counting nineteen of the orange imp, suggests what this might mean. Mathematics, as a set of statements following some rule, is a niche interest. What we like is how so many mathematical things seem to correspond to real-world things. We can imagine mathematics breaking that connection to the real world. The high temperature rising one degree each day this week may tell us something about this weekend, but it’s useless for telling us about November. So I can imagine a magical creature deciding what mathematical models still correspond to the thing they model. Be careful in trying to change their mind.


And that’s as many comic strips from the last several weeks that I think merit discussion. All of my Reading the Comics posts should be at this link, though. And I hope to have a new one again sometime soon. I’ll ask my contacts with the cartoonists. I have about half of a contact.

How April 2022 Treated My Mathematics Blog


This past month I moved towards the sort of thing that’s normal for my blog here. Mostly, Reading the Comics posts, with another piece that was about a mathematical curiosity. That is a typical selection of posts when I’m not doing something special, such as an A-to-Z sequence. So, with a new month begun, I like to see how it was received. As usual, I check WordPress’s statistics for the past month, and compare it to the running average for the twelve months leading up to that.

WordPress figures there were 2,121 page views here in April. That’s a little below the running mean of 2,286.8 page views. It’s almost exactly at the running median, though, of 2,122 page views in a month. So this suggests April turned out quite average. There were 1,404 recorded unique visitors. This is below the running mean of 1,602.7 unique visitors, and noticeably below the running median of 1,479. This suggests a month a bit below average.

Per posting, though? That suggests an increasing readership. There were 424.2 page views recorded per posting in April, above the running mean of 301.7 and running median of 302.8. There were 280.8 unique visitors per posting, also well above the 211.1 mean and 211.3 median. That’s not to say every post got 281 visitors, since many of the visitors looked at stuff from before April. This is what keeps me from re-blogging even more repeats.

Bar chart of two and a half years' worth of monthly readership figures. There was a huge peak around October 2019, and a much lower but fairly steady wave of readership after that. It's slightly increased for January 2022, dropped for February, rose slightly for March, and dropped a small bit in April again.
I know that, on average, my readership has been growing with convincing steadiness for the last five years or so. But part of me still feels like there must be something I could do to get to, like, the reliable 2500-views-a-month level, or higher.

That it was a slow month seems supported by the record of likes and comments, though. There were 19 likes given in April, well below the mean of 39.5 and median of 39. That’s a little less bad considered per posting, but still. That’s 3.8 likes per posting, below the running mean of 5.0 and running median of 4.5. There were an anemic two comments, way below the mean of 11.3 and median of 9.5. That’s just 0.4 comments per posting, compared to an already not-great mean of 1.4 and median of 1.2.

I had thought I posted more in April than a mere five pieces. Not so. Here’s the order of popularity of my posts, which are not quite in chronological order. I too quirk an eye at what the most popular thing of April was:

WordPress figures I posted 3,089 words in April, my fewest since September. And that comes to an average of 617.8 words per posting, again my lowest since September. For the year I’ve published 36,947 words, and have averaged 1,056 words per posting.

I started May with a total of 159,259 recorded page views from a recorded 95,907 unique visitors. But WordPress didn’t start telling us unique visitor counts until my blog here was a couple years old, so don’t take that too literally.

I’d be glad if you chose to be a regular reader. There’s a button at the upper right of the page, “Follow Nebusresearch” which adds this blog to your WordPress reader. There’s a field below that to get posts e-mailed as they’re published. I do nothing with the e-mail except send those posts, but who knows what WordPress Master Command does with them? And if you have an RSS reader, you can put the essays feed into that. If you don’t have an RSS reader, you can sign up for a free account at Dreamwidth. You can use the ‘Reader’ page over there for this and any other RSS feeds you might want to follow.

Thank you all for your reading.

How to Add Up Powers of Numbers


Do you need to know the formula for the sum of the first N counting numbers, each raised to a power? No, you do not. Not really. It can save a bit of time to know the sum of the numbers raised to the first power. Most mathematicians would know it, or be able to recreate it fast enough:

\sum_{n = 1}^{N} n = 1 + 2 + 3 + \cdots + N = \frac{1}{2}N\left(N + 1\right)

But there are similar formulas to add up, say, the counting numbers squared, or cubed, or so. And a toot on Mathstodon, the mathematics-themed instance of social network Mastodon, makes me aware of a cute paper about this. In it Dr Alessandro Mariani describes A simple mnemonic to compute sums of powers.

It’s a neat one. Mariani describes a way to use knowledge of the sum of numbers to the first power to generate a formula for the sum of squares. And then to use the sum of squares formula to generate the sum of cubes. The sum of cubes then lets you get the sum of fourth-powers. And so on. This takes a while to do if you’re interested in the sum of twentieth powers. But do you know how many times you’ll ever need to generate that formula? Anyway, as Mariani notes, this sort of thing is useful if you find yourself at a mathematics competition. Or some other event where you can’t just have the computer calculate this stuff.

Mariani’s process is a great one. Like many mnemonics it doesn’t make literal sense. It expects one to integrate and differentiate polynomials. Anyone likely to be interested in a formula for the sums of twelfth powers knows how to do those in their sleep. But they’re integrating and differentiating polynomials for which, in context, the integrals and derivatives don’t exist. Or at least don’t mean anything. That’s all right. If all you want is the right answer, it’s okay to get there by a wrong method. At least if you verify the answer is right, which the last section of Mariani’s paper does. So, give it a read if you’d like to see a neat mathematical trick to a maybe useful result.
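For a sense of how each power’s sum builds on the ones before, here’s a little sketch in Python. This is not Mariani’s mnemonic (the paper has the real thing); it’s the classical telescoping recurrence that grinds out the same formulas, and the name sum_of_powers is just my label for the toy:

from math import comb

def sum_of_powers(N, k):
    """Sum of n**k for n = 1, ..., N, via the telescoping identity
    (N+1)**(k+1) - 1 = sum over j of comb(k+1, j) * S_j(N)."""
    S = [N]                              # S_0 = N: a sum of N ones
    for m in range(1, k + 1):            # build S_1, S_2, ..., S_k in turn
        total = (N + 1) ** (m + 1) - 1
        total -= sum(comb(m + 1, j) * S[j] for j in range(m))
        S.append(total // (m + 1))       # the division always comes out even
    return S[k]

assert sum_of_powers(10, 1) == 55        # (1/2) * 10 * 11, as above
assert sum_of_powers(10, 3) == sum(n ** 3 for n in range(1, 11))

Each power’s sum gets computed from all the lower powers’ sums, which is the same build-from-below spirit as the mnemonic.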

Reading the Comics, April 17, 2022: Did I Catch Comic Strip Master Command By Surprise Edition


Part of the thrill of Reading the Comics posts is that the underlying material is wholly outside my control. The subjects discussed, yes, although there are some quite common themes. (Students challenging the word problem; lottery jokes; monkeys at typewriters.) But also quantity. Part of what burned me out on Reading the Comics posts back in 2020 was feeling the need to say something about lots of comic strips. Now?

I mentioned last week seeing only three interesting strips, and one of them, Andertoons, was a repeat I’d already discussed. This week there were only two strips that drew a first note and again, Andertoons was a repeat I’d already discussed. Mark Anderson’s comic for the 17th I covered in enough detail back in August of 2019. I don’t know how many new Andertoons are put into the rotation at GoComics. But the implication is Comic Strip Master Command ordered mathematics-comics production cut down, and they haven’t yet responded to my doing these again. I guess we’ll know for sure if things pick up in a couple weeks, as the lead time allows.

Teacher, pointing to the blackboard with 4 + 4 - 2 = written on it: 'Ella, how should we solve this problem?' Ella: 'Rock, paper, scissors?'
Rick McKee and Kent Sligh’s Mount Pleasant for the 15th of April, 2022. This is a relatively new comic strip (it only started last year), so I haven’t ever discussed it here before. Still, this essay and any future ones to mention Mount Pleasant should be at this link.

So Rick McKee and Kent Sligh’s Mount Pleasant for the 15th of April is all I have to discuss. It’s part of the long series of students resisting the teacher’s question. The teacher is asking a fair enough question, that of how to do a problem that has several parts. She does ask how we “should” solve the problem of finding what 4 + 4 – 2 equals. The catch is there are several ways to do this, all of them equally good. We know this if we’ve accepted subtraction as a kind of addition, and if we’ve accepted addition as commutative.

So the order is our choice. We can add 4 and 4 and then subtract 2. Or subtract 2 from the second 4, and then add that to the first 4. If you want, and can tell the difference, you could subtract 2 from the first 4, and then add the second 4 to that.
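Worked out, just to see all three paths meet:

\left(4 + 4\right) - 2 = 8 - 2 = 6 \qquad 4 + \left(4 - 2\right) = 4 + 2 = 6 \qquad \left(4 - 2\right) + 4 = 2 + 4 = 6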

For this problem it doesn’t make any difference. But one can imagine similar ones where the order you tackle things in can make calculations easier, or harder. 5 + 7 – 2, for example, I find easier if I work it out as 5 + ( 7 – 2), that is, 5 + 5. So it’s worth taking a moment to consider whether rearranging it can make the calculation more reliable. I don’t know whether the teacher meant to challenge the students to see that there are alternatives, and no uniquely “right” answer. It’s possible McKee and Sligh did not have the teaching plan worked out.


That makes for another week’s worth of comic strips to discuss. All of my Reading the Comics posts should be at this link. Thanks for reading this and I will let you know if Comic Strip Master Command increases production of comics with mathematics themes.

Reading the Comics, April 10, 2022: Quantum Entanglement Edition


I remember part of why I stopped doing Reading the Comics posts regularly was their volume. I read a lot of comics and it felt like everyone wanted to do a word problem joke. Since I started easing back into these posts it’s seemed like they’ve disappeared. When I put together this week’s collection, I only had three interesting ones. And one was Andertoons for the 10th of April. Andertoons is a stalwart here, but this particular strip was one I already talked about, back in 2019.

Another was the Archie repeat for the 10th of April. And that only lists mathematics as a school subject. It would be the same joke if it were English lit. Saying “differential calculus” gives it the advantage of specificity. It also suggests Archie is at least a good enough student to be taking calculus in high school, which isn’t bad. Differential calculus is where calculus usually starts, with the study of instantaneous changes. A person can, and should, ask how a change can be instantaneous. Part of building differential calculus is finding a definition of instantaneous change that matches our intuition about what it should be. And that never requires us to do something appalling like divide zero by zero. Our current definition took a couple centuries of wrangling to find a scheme that makes sense. It’s a bit much to expect high school students to pick it up in two months.

Archie holds a textbook. His eyes are closed at first, and then they pop open. He chuckles some, and sighs wistfully. The teacher says, 'Archie, kindly keep your mind off your latest date and back on your differential calculus!' Archie whispers to Veronica, 'How did he know my mind wasn't on my differential calculus?'
Henry Scarpelli and Craig Boldman’s Archie for the 10th of April, 2022. This and other essays mentioning Archie are at this link.

Ripley’s Believe It Or Not for the 10th of April, 2022 was the most interesting piece. This referenced a problem I didn’t remember having heard about, the “36 Officers puzzle” of Leonhard Euler. Euler’s name you know as he did foundational work in every field of mathematics ever. This particular puzzle dates to 1779, according to an article in Quanta Magazine which one of the Ripley’s commenters offered. Six army regiments each have six officers of six different ranks. How can you arrange them in a six-by-six square so that no row or column repeats a rank or regiment?

Ripley's selection of panels. The relevant one: 'An 'impossible' math problem proposed by mathematician Leonhard Euler more than 240 years ago was solved in 2021 by using quantum entanglement.' There's also a panel about the baudet de Poitou donkey, in France, which grows hair long enough to reach the ground; it looks like a donkey covered in tinsel.
Ripley’s Believe It Or Not for the 10th of April, 2022 This and other essays mentioning Ripley’s Believe It Or Not are at this link. Previous essays have mentioned John Graziano, who wrote and illustrated the art for a long while. Ripley’s has recently picked up (at least one) new artist and been cagey about crediting them. I apologize for this but can’t fix it alone.
I like the look of that donkey. It’s festive.

The problem sounds like it shouldn’t be hard. The three-by-three version of this is easy. So is four-by-four and even five-by-five. Oddly, seven-by-seven is, too. (Two-by-two is a different story: with only four cells there’s no way to make it work, but it’s too small to feel like much of a loss.) It looks like some form of magic square, and seems not far off being a sudoku problem either. So it seems weird that six-by-six should be particularly hard, but sometimes it happens like that. In fact, it happens to be impossible; a paper by Gaston Tarry in 1901 proved there were none.
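If you’d like to see how easy the easy cases are, here’s a minimal sketch in Python. It uses the standard shift construction, which handles any odd-sized square, and, true to Tarry, does nothing to rescue six-by-six. The function name graeco_latin is mine, for illustration:

def graeco_latin(n):
    """Graeco-Latin square of odd order n: the officer at row i,
    column j gets rank (i + j) % n and regiment (i + 2*j) % n."""
    assert n % 2 == 1, "this simple shift construction needs an odd order"
    return [[((i + j) % n, (i + 2 * j) % n) for j in range(n)]
            for i in range(n)]

square = graeco_latin(5)
pairs = {cell for row in square for cell in row}
assert len(pairs) == 25   # all 25 rank/regiment pairs appear exactly once

Four-by-four needs a different trick, but one exists; two-by-two and six-by-six are the only sizes with no solution at all.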

The solution discussed by Ripley’s is of a slightly different problem. So I’m not saying to not believe it, just, that you need to believe it with reservations. The modified problem casts this as one of quantum entanglement, in which the rank and regiment of an officer in one position is connected to that of their neighbors. I admit I’m not sure I understand this well enough to explain; I’m not confident I can give a clear answer why a solution of the entangled problem can’t be used for the classical problem.

The problem, at this point, isn’t about organizing officers anymore. It never was, since that started as an idle pastime. Legend has it that it started as a challenge about organizing cards; if you look at the paper you’ll see it presenting states as card suits and values. But the problem emerged from idle curiosity into practicality. These turn out to be applicable to quantum error detection codes. I’m not certain I can explain how myself. You might be able to convince yourself of this by thinking how you know that someone who tells you the sum of six odd numbers is itself an odd number made a mistake somewhere, and you can then look for what went wrong.


And that’s as many comics from last week as I feel like discussing. All my Reading the Comics posts should be gathered at this link. Thanks for reading this and I hope to do this again soon.

Reading the Comics, April 2, 2022: Pi Day Extra Edition


I’m not sure that I will make a habit of this. It’s been a while since I did a regular Reading the Comics post, looking for mathematics topics in syndicated newspaper comic strips. I thought I might dip my toes in those waters again. Since my Pi Day essay there’ve been only a few with anything much to say. One of them was a rerun I’ve discussed before, too, a Bloom County Sunday strip that did an elaborate calculation to conceal the number 1. I’ve written about that strip twice before, in May 2016 and then in October 2016, so that’s too well-explained to need revisiting.

As it happens two of the three strips remaining were repeats, though ones I don’t think I’ve addressed before here.

Bill Amend’s FoxTrot Classics for the 18th of March looks like a Pi Day strip. It’s not, though: it originally ran the 16th of March, 2001. We didn’t have Pi Day back then.

Mathematics teacher: 'You have 10 minutes left, people.' Peter: 'You can do this, Peter, think!' He draws a unit circle, and starts cutting slices to represent angles of pi/4, (2/3)*pi, (9/8)*pi, -pi/4, and so on. He thinks; his diagram of the circle with angles cut into it turns into a pizza pie, covered with pepperoni and mushrooms. Peter buries his head in his desk. Teacher: 'Let's try not to drool on our test, Mr Fox.' Peter: 'Trig class should NOT be the period before lunch.'
Bill Amend’s FoxTrot Classics for the 18th of March, 2022. The strip originally ran the 16th of March, 2001. This and other essays with FoxTrot can be found at this link.

What Peter Fox is doing is drawing a unit circle — a circle of radius 1 — and dividing it into a couple common angles. Trigonometry students are expected to know the sines and cosines and tangents of a handful of angles. If they don’t know them, they can work these out from first principles. Draw a line from the center of the unit circle at an angle measured counterclockwise from the positive x-axis. Find where that line you’ve just drawn intersects the unit circle. The x-coordinate of that point has the same value as the cosine of that angle. The y-coordinate of that point has the same value as the sine of that angle. And for a handful of angles — the ones Peter marks off in the second panel — you can work them out by reason alone.
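As a sketch of that reasoning, take the 45-degree line. Its point on the unit circle has to have equal x- and y-coordinates, and being on the circle means x^2 + y^2 = 1 . So 2x^2 = 1 , and x = \frac{1}{\sqrt{2}} . That’s why the sine and cosine of 45 degrees are both that famous \frac{\sqrt{2}}{2} , about 0.7071.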

These angles we know as, like, 45 degrees or 120 degrees or 135 degrees. Peter writes them as \frac{\pi}{4} or \frac{2}{3}\pi or \frac{9}{8}\pi , because these are radian measure rather than degree measure. It’s a different scale, one that’s more convenient for calculus. And for some ordinary uses too: an angle of (say) \frac{3}{4}\pi radians sweeps out an arc of length \frac{3}{4}\pi on the unit circle. You can see where that’s easier to keep straight than how long an arc of 135 degrees might be.

Drawing this circle is a good way to work out or remember sines and cosines for the angles you’re expected to know, which is why you’d get them on a trig test.

Customer of the Moebius steakhouse, pointing to his plate: 'Excuse me, I ordered the strip steak.' Waiter: 'Correct, sir. The Moebius strip steak.' The steak is a Moebius strip, curling back onto itself, resting atop the bare plate.
Scott Hilburn’s The Argyle Sweater for the 27th of March, 2022. This and other essays with The Argyle Sweater are at this link.

Scott Hilburn’s The Argyle Sweater for the 27th of March summons every humorist’s favorite piece of topology, the Möbius strip. Unfortunately the line work makes it look to me like Hilburn’s drawn a simple loop of a steak. Follow the white strip along the upper edge. Could be the restaurant does the best it can with a challenging presentation.

August Ferdinand Möbius by the way was an astronomer, working most of his career at the Observatory at Leipzig. (His work as a professor was not particularly successful; he was too poor a lecturer to keep students.) His father was a dancing teacher, and his mother was a descendant of Martin Luther, although I imagine she did other things too.

Waitress, punching her time clock (at 1:04 pm): 'Geez, is that all I worked today? 3.14 hours?' Tina: '3.14? Ha - that's pi.' Waitress: 'What's Pi?' Tina: 'In math it's an infinite number that just keeps going.' Waitress: 'Man, no wonder it felt like the day would never end.'
Rina Piccolo’s Tina’s Groove for the 2nd of April, 2022. The strip originally ran the 31st of March, 2007. Essays with Tina’s Groove are at this link.

Rina Piccolo’s Tina’s Groove for the 2nd of April makes its first appearance in a Reading the Comics post in almost a decade. The strip ended in 2017 and only recently has Comics Kingdom started showing reprints. The strip is about the numerical coincidence between 3.14 of a thing and the digits of π. It originally ran at the end of March, 2007, which, like the vintage FoxTrot, reminds us how recent a thing Pi Day is to observe.

3.14 hours is three hours, 8.4 minutes, which implies that she clocked in at about 9:56.


And that’s this installment. All my Reading the Comics posts should be at this link. I don’t know when I’ll publish a next one, but it should be there, too. Thanks for reading.

How March 2022 Treated My Mathematics Blog


I expected readers to be happy I was finishing the Little 2021 Mathematics A-to-Z. My doubt was how happy they would be. Turns out they were a middling amount of happy. So this is my regular review of the readership statistics for the past month, as provided by WordPress.

I published eight things in March, which is average for me the past twelve months. It was a long, long time ago that I went whole months posting something every day. But my twelve-month running mean has been 8.5 posts per month, and the median 8, so that’s just in line. There were 2,272 page views recorded in March, which is below the running mean of 2,336.4 and above the running median of 2,122. So, average, like I said. There were 1,545 unique visitors, below the running mean of 1,640.0 and above the running median of 1,479.

Bar chart of two and a half years' worth of monthly readership figures. There was a huge peak around October 2019, and a much lower but fairly steady wave of readership after that. It's slightly increased for January 2022, dropped for February, and rose slightly for March.
It’s uncanny how well this chart matches my mood.

Prorated by posting, the showing is a little worse. There were 284.0 views and 193.1 unique visitors per posting in March. The running mean is 301.9 views and 211.6 visitors per posting. The median, 302.8 views and 211.3 visitors. I have no explanation for this phenomenon.

I have a hypothesis. There were 32 likes given in the month, below the mean of 39.3 and median of 35. But several of the posts were pointers to other essays and those are naturally less well-liked. That came to 4.0 likes per posting, below the mean of 4.9 likes per posting and median of 4.5 likes per posting. Comments were anemic again, with only four given in the month. The mean is an impossible-seeming 11.8 and median 10. Per posting, there were 0.5 comments here in March, compared to a mean of 1.4 and median of 1.2. So it goes.

What was popular in March? Pi Day comic strips, of course, and my making something out of the NCAA March Madness basketball tournament. Here are the March postings in descending order of popularity.

Stuff from before this past month was popular too, including several of the individual Pi Day pages. And my post about the most and least likely dates for Easter, which is sure to be a seasonal favorite.

WordPress figures that I posted 6,655 words in March, for an average post length of 831.9 words, close to half my average for February. This brought my average words per post for the year down to 1,128. If that number seems familiar it does to me too. I had 1,128 words per posting, on average, in January too, an event that caused me to go check that I hadn’t recorded something wrong. But that was also a month with many more posts (many repeats).

WordPress figures that I started April 2022 with a total of 1,705 posts here. They’d drawn 3,317 comments, with a total 157,138 views from 94,502 recorded unique visitors.

If you’d like to be a regular reader around here, please read. There’s a button at the upper right of the page, “Follow Nebusresearch”. That adds this blog to your WordPress reader. There’s a field below that to get posts e-mailed as they’re published. I do nothing with the e-mail except send those posts. WordPress probably has some incomprehensible page where they say what they do with your e-mails. And if you have an RSS reader, you can put the essays feed into that.

Thank you all for your reading.

What I Learned Writing the Little 2021 Mathematics A-to-Z


I try, at the end of each of these A-to-Z sessions, to think about what I’ve learned from the experience. The challenge is reliably interesting, thanks to the kind readers who suggest topics. While I reserve the right to choose my own subject for any letter, I usually go for whichever of the suggestions sounds most interesting. That nudges me out of my comfortable, familiar thoughts and into topics I know less well. I would never have written about cohomologies if I had waited until I thought I had something to say about them.

I didn’t have any deep experiences like that this time, although I did get a better handle on tangent spaces and why we like them. Most of what I did learn was about process, and about how to approach writing here.

For example, I started appealing for topics more letters ahead than I had in previous projects. The goal was to let myself build a reserve, so that I would have a week or more to let an essay sit while I re-thought what I’d said. Early on, this worked well and I liked the results. It also made it easier to tie essays together; multiplication and addition could complement one another. This is something I could expand on.

And varying from the strict alphabetical order seems to have worked too. The advantage of doing every letter in order is that I’m pushed into some unpromising letters, like ‘Q’ or ‘Y’. It’s fantastic when I get a good essay out of that. But that’s harder work. This time around I did three topics starting with A, and three with T, and there’s so many more I could write.

The biggest and hardest thing I learned was related to how my plans went awry. How I lost the several-weeks lead time I started with, and how I had to put the project on hold for nearly three months.

2021 was a hard year, after another hard year, after a succession of hard years. Mostly, these were hard years because the world had been hard. Wearying, which is why I started out doing a mere 15 essays instead of the full 26. But not things that too directly hit my personal comfort. During the Little 2021 A-to-Z, though, the hard got intimate. Personal disasters hit starting in mid-August, and kept progressing — or dragging out — through to the new year. Just in time for the world-hardness of the first Omicron wave of the pandemic.

I have always thought of myself as a Sabbath-is-made-for-Man person. That is, schedules are ways to help you get done what you want or need; they’re not of value in themselves. Yet I do value them. I like their hold, and I thrive within them. Part of my surviving the pandemic, when all normal activities stopped, was the schedule of things I write here and on my humor blog. They offered a reason to do something particular. If I were not living up to this commitment, then what was I doing?

The answer is I would be not stressing myself past what I can do. I like these A-to-Z essays, and all the writing I do, or I wouldn’t do it. It’s nourishing and often exciting. But it is labor, and it is stress. Exercising a bit longer or a bit harder than one feels able to helps one build endurance and strength. But there are times one’s muscles are exhausted, or one’s joints are worked too much, and you must rest. Not just stick to the routine exercise, but take a break so that you can recover. I had not taken a serious break since starting this blog, and hadn’t realized I would need to. Over the course of this A-to-Z I learned I sometimes need to, and I should.

I need also to think of what I will do next. I’m not sure when I will feel confident that I can do a full A-to-Z, or even a truncated version. My hunch is I need to do more mathematical projects here that are fun and playful. This implies thinking of fun and playful projects, and thinking is the hard part again. But I understand, in a way I had not before, that I can let go.


The whole of the Little 2021 Mathematics A-to-Z sequence should be at this link. And then at this link should be all of the A-to-Z essays from all past years. Thank you.

What I Wrote About In My Little 2021 Mathematics A to Z


It’s good to have an index of the topics I wrote about for each of my A-to-Z sequences. It’s good for me, at least. It makes my future work much easier. And it might help people find past essays. I hope to have my essay about what I learned from a project that was supposed to be nearly one-third shorter, and ended up sprawling past its designated year, next week.

All of the Little 2021 Mathematics A-to-Z essays should be at this link. And gathered at this link should be all of the A-to-Z essays from all past years. Thank you for your reading.

Reading the Comics, March 14, 2022: Pi Day Edition


As promised I have the Pi Day comic strips from my reading here. I read nearly all the comics run on Comics Kingdom and on GoComics, no matter how hard their web sites try to avoid showing comics. (They have some server optimization thing that makes the comics sometimes just not load.) (By server optimization I mean “tracking for advertising purposes”.)

Pi Day in the comics this year saw the event almost wholly given over to the phonetic coincidence that π sounds, in English, like pie. So this is not the deepest bench of mathematical topics to discuss. My love, who is not as fond of wordplay as I am, notes that the ancient Greeks likely pronounced the name of π about the same way we pronounce the letter “p”. This may be etymologically sound, but that’s not how we do it in English, and even if we switched over, that would not make things better.

Scott Hilburn’s The Argyle Sweater is one of the few strips not to be about food. It is set in the world of anthropomorphized numerals, the other common theme to the day.

A numeral 3 reads the Personals, and circles one which reads: '.1415 looking for friendship, maybe more.' The caption: 'Pi-Curious'
Scott Hilburn’s The Argyle Sweater for the 14th of March, 2022. Essays with some mention of The Argyle Sweater are at this link. They’re also in near every Reading the Comics post. Hilburn has figured out his audience and it’s me.

John Hambrook’s The Brilliant Mind of Edison Lee leads off with the food jokes, in this case cookies rather than pie. The change adds a bit of Abbott-and-Costello energy to the action.

Grandpa, watching Edison bake a tray of pi-shaped cookies: 'What are those?' Edison: 'Pi cookies.' Grandpa: 'What are you going to fill them with?' Edison: 'Nothing.' Grandpa: 'So ... they're *not* pies, then.' Edison: 'Yeah they are. Look.' (He holds one out.) Grandpa, to Dad: 'That kid of yours doesn't know a thing about baking.'
John Hambrook’s The Brilliant Mind of Edison Lee for the 14th of March, 2022. This and other essays featuring The Brilliant Mind of Edison Lee should be at this link.

Mick Mastroianni and Mason Mastroianni’s Dogs of C Kennel gets our first pie proper, this time tossed in the face. One of the commenters observes that the middle of a pecan pie can really hold heat, “Ouch”. Will’s holding it in his bare paw, though, so it can’t be that bad.

Will, a dog, addressing the audience, while holding a pie in his hand: 'It's Pi day, which means you go find your nerdiest friend ... ' The pie splorts into Wheeler's face ' ... And hit them with a pie.' Wheeler, munching: 'At least it's pecan this year.'
Mick Mastroianni and Mason Mastroianni’s Dogs of C Kennel for the 14th of March, 2022. It’s been over five years since I had reason to mention Dogs of C Kennel, but you can find that reference here.

Jules Rivera’s Mark Trail makes the most casual Pi Day reference. If the narrator hadn’t interrupted in the final panel no one would have reason to think this referenced anything.

[ On the flight to Oregon, Mark Trail is already on a mission ... to learn everything he can about his father's new business partner, Jadsen Sterline! ] Mark Trail: 'Who is this guy and why's he trying to pull the wool over my dad's eyes?' Cherry Trail: 'Mark? I snuck you a piece of pie from the airport cafe.' Mark Trail: 'Aw, thanks!' [ Today is a good day for pie! ]
Jules Rivera’s Mark Trail for the 14th of March, 2022. I’m startled to learn this is not the only time I’ve mentioned Mark Trail. This and the other appearance are at this link, and if something comes up, it should be added there.

Mark Parisi’s Off The Mark is the other anthropomorphic numerals joke for the day. It’s built on the familiar fact that the digits of π go on forever. This is true for any integer base. In base π, of course, the representation of π is just “10”. But who uses that? And in base π, the number six would be something with infinitely many digits. There’s no fitting that in a one-panel comic, though.
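If you’d like to watch six refuse to settle down in base π, here’s a quick sketch in Python of the greedy way to peel off digits. (Digits here run 0 through 3, the base being a bit more than 3; base_pi_digits is my name for the toy.)

from math import floor, pi

def base_pi_digits(x, places):
    """Greedy digits of x in base pi, from the pi^1 place downward."""
    digits = []
    for k in range(1, -places, -1):      # powers pi^1, pi^0, pi^-1, ...
        d = floor(x / pi ** k)
        digits.append(d)
        x -= d * pi ** k
    return digits

print(base_pi_digits(6, 5))   # [1, 2, 2, 2, 0, 1] and on it goes forever

The expansion can never terminate: a finite string of these digits equal to exactly 6 would make π a root of a polynomial with integer coefficients, and π, being transcendental, is no such thing.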

At an intersection, the numeral 6 says, 'After you ... ' to the leading 3 of a decimal representation of pi. Caption: 'A decision Sharon came to regret.'
Mark Parisi’s Off The Mark for the 14th of March, 2022. You know what’s another comic that gets mentioned all the time in Reading the Comics posts? Off The Mark, as Mark Parisi has also decided I’m his target audience. Enjoy this and other essays mentioning the strip.

Doug Savage’s Savage Chickens is the one strip that wasn’t about food or anthropomorphized numerals. There is no practical reason to memorize digits of π, other than that you’re calculating something by hand and don’t want to waste time looking them up. In that case there’s not much call to go past 3.14. If you need more than about 3.14159, get a calculator to do it. But memorizing digits can be fun, and I will not underestimate the value of fun in getting someone interested in mathematics.

One chicken, sitting at a table with another; there's a clock on the table: 'How many digits of pi can you recite from memory?' Other chicken: 'Um ... you do know that speed dating isn't a contest to see how quickly you can scare away the other person, right?'
Doug Savage’s Savage Chickens for the 14th of March, 2022. This and other essays discussing something mentioned in Savage Chickens are at this link.

For my part, I memorized π out to 3.1415926535897932, so that’s sixteen digits past the decimal. Always felt I could do more and I don’t know why I didn’t. The next couple digits are 8462, which has a nice descending-fifths cadence to it. The 626 following is a neat coda. My describing it this way may give you some idea of how I visualize the digits of π. They might help you, if you figure for some reason you need to do this. You do not, but if you enjoy it, enjoy it.

Two women at a table, eating a pie. First woman: 'I thought Pi Day was yesterday.' Second woman: 'Why question the pie? Just enjoy.'
Bianca Xunise’s Six Chix for the 15th of March, 2022. Essays featuring topics mentioned in Six Chix are at this link.

Bianca Xunise’s Six Chix for the 15th ran a day late; Xunise only gets the comic on Tuesdays and the occasional Sunday. It returns to the food theme.

And this brings me to the end of this year’s Pi Day comic strips. All of my Reading the Comics posts, past and someday future, should be at this link. And my various Pi Day essays should be here. Thank you for reading.

Let Me Remind You How Interesting a Basketball Tournament Is


Several years ago I stumbled into a nice sequence. All my nice sequences have been things I stumbled upon. This one looked at the most basic elements of information theory by way of what they tell us about the NCAA College Basketball tournament. This is (in the main) a 64-team single-elimination playoff. It’s been a few years since I ran through the sequence. But it’s been a couple years since the tournament could be run with a reasonably clear conscience too. So here are my essays:

And this spins off to questions about other sports events.

And I still figure to get to this year’s Pi Day comic strips. Soon. It’s been a while since I felt I had so much to write up.

Here Are Past Years’ Pi Day Comic Strips


I haven’t yet read today’s comics; it takes a while to get through them. But I hope to summarize what Comic Strip Master Command has sent out for the syndicated comics for today. In the meanwhile, here’s Pi Day strips of past years.

And I have to offer a warning. GoComics.Com has discontinued a lot of comics in the past couple years. They’ve been brutal about removing the archives of strips they’ve discontinued. Comics Kingdom is similarly ruthless in removing strips not in production. And a recent and, to the user, bad code update broke a lot of what had been non-expiring links. But my discussions of the themes in the comic are still there. And, as I got more into the Reading the Comics project I got more likely to include the original comic. So that’s some compensation.

Here’s the past several years in comics from on or around the 14th of March:

  • 2015, featuring The Argyle Sweater, Baldo, The Chuckle Brothers, Dog Eat Doug, FoxTrot Classics, Herb and Jamaal, Long Story Short, The New Adventures of Queen Victoria, Off The Mark, and Working Daze.
  • 2016, featuring The Argyle Sweater, B.C., Brewster Rockit, The Brilliant Mind of Edison Lee, Curtis, Dog Eat Doug, F Minus, Free Range, and Holiday Doodles.
  • 2017, featuring 2 Cows and a Chicken, Archie, The Argyle Sweater, Arlo and Janis, Lard’s World Peace Tips, Loose Parts, Off The Mark, Saturday Morning Breakfast Cereal, TruthFacts, and Working Daze.
  • 2018, featuring The Argyle Sweater, Bear With Me, Funky Winkerbean Classic, Mutt and Jeff, Off The Mark, Savage Chickens, Warped, and Working Daze.
  • 2019, featuring The Brilliant Mind of Edison Lee, Liz Climo’s Cartoons, The Grizzwells, Off The Mark, and Working Daze.
  • 2020, featuring Baldo, Calvin and Hobbes, Off The Mark, Real Life Adventures, Reality Check, and Warped.
  • 2021, featuring Agnes, The Argyle Sweater, Between Friends, Breaking Cat News, FoxTrot, Frazz, Get Fuzzy, Heart of the City, Reality Check, and Studio Jantze.

As mentioned, I have yet to read today’s comics. I’m looking forward to it, at least to learn what Funky Winkerbean character I’m going to be most annoyed with this week. It will be Les Moore. I was also going to look forward to seeing if there would ever be a Pi Day strips roundup without The Argyle Sweater or Reality Check. It turns out there was one in 2019. Weird how you can get the impression something is always there even when it’s not.

My Little 2021 Mathematics A-to-Z: Zorn’s Lemma


The joke to which I alluded last week was a quick pun. The setup is, “What is yellow and equivalent to the Axiom of Choice?” It’s the topic for this week, and the conclusion of the Little 2021 Mathematics A-to-Z. I again thank Mr Wu, of Singapore Maths Tuition, for a delightful topic.

Zorn’s Lemma

Max Zorn did not name it Zorn’s Lemma. You expected that. He thought of it just as a Maximal Principle when introducing it in a 1934 presentation and 1935 paper. The word “lemma” connotes that some theorem is a small thing. It usually means it’s used to prove some larger and more interesting theorem. Zorn’s Lemma is one of those small things. With the right background, a rigorous proof is a couple not-too-dense paragraphs. Without the right background? It’s one of those proofs you read the statement of and nod, agreeing, that sounds reasonable.

The lemma is about partially ordered sets. A set’s partially ordered if it has a relationship between pairs of items in it. You will sometimes see a partially ordered set called a “poset”, a term of mathematical art which makes me smile too. If we don’t know anything about the ordering relationship we’ll use the ≤ symbol, just as if this were ordinary numbers. To be partially ordered, whenever x ≤ y and y ≤ x, we know that x and y must be equal. And the converse: if x = y then x ≤ y and y ≤ x. What makes this partial is that we’re not guaranteed that every x and y relate in some way. It’s a totally ordered set if we’re guaranteed that at least one of x ≤ y and y ≤ x is always true. And then there is such a thing as a well-ordered set. This is a totally ordered set for which every subset (unless it’s empty) has a minimal element.

If we have a couple elements, each of which we can put in some order, then we can create a chain. If x ≤ y and y ≤ z, then we can write x ≤ y ≤ z and we have at least three things all relating to one another. This seems like stuff too basic to notice, if we think too literally about the relationship being “is less than or equal to”. If the relationship is, say, “divides wholly into”, then we get some interesting different chains. Like, 2 divides into 4, which divides into 8, which divides into 24. And 3 divides into 6 which divides into 24. But 2 doesn’t divide into 3, nor 3 into 2. 4 doesn’t divide into 6, nor 6 into either 8 or 4.

So what Zorn’s Lemma says is, if all the chains in a partially ordered set each have an upper bound, then, the partially ordered set has a maximal element. “Maximal element” here means an element that doesn’t have a bigger comparable element. (That is, m is maximal if there’s no other element b for which m ≤ b. It’s possible that m and b can’t be compared, though, the way 6 doesn’t divide 8 and 8 doesn’t divide 6.) This is a little different from a “maximum”. It’s possible for there to be several maximal elements. But if you parse this as “if you can always find a maximum in a string of elements, there’s some maximum element”? And remember there could be many maximums? Then you’re getting the point.
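If a concrete case helps, here’s a tiny sketch in Python of “several maximal elements, no maximum”, using the divides-wholly-into ordering from above:

def maximal_elements(elements, leq):
    """Members with no other comparable member above them."""
    return [m for m in elements
            if not any(m != b and leq(m, b) for b in elements)]

divides = lambda x, y: y % x == 0
# 6 and 8 are each maximal: nothing here is a proper multiple of either.
# Neither is a maximum, since 6 and 8 don't divide one another.
print(maximal_elements([2, 3, 4, 6, 8], divides))   # prints [6, 8]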

You may also ask how this could be interesting. Zorn’s Lemma is an existence proof. Most existence proofs assure us a thing we thought existed does, but don’t tell us how to find it. This is all right. We tend to rely on an existence proof when we want to talk about some mathematical item but don’t care about fussy things like what it is. It is much the way we might talk about “an odd perfect number N”. We can describe interesting things that follow from having such a number even before we know what value N has.

A classic example, the one you find in any discussion of using Zorn’s Lemma, is about the basis for a vector space. This is like deciding how to give directions to a point in space. But vector spaces include some quite abstract things. One vector space is “the set of all functions you can integrate”. Another is “matrices whose elements are all four-dimensional rotations”. There might be literally infinitely many “directions” to go. How do we know we can find a set of directions that work as well, for guiding us around a city, as the north-south-east-west compass rose does? So there’s the answer. There are other things done all the time, too. A nontrivial ring-with-identity, for example, has to have a maximal ideal. (An ideal is a subset of the ring that’s a ring itself, and that swallows multiplication: the product of anything in the ideal with anything in the ring stays in the ideal.) This is handy to know if you’re working with rings a lot.

The joke in my prologue was built on the claim Zorn’s Lemma is equivalent to the Axiom of Choice. The Axiom of Choice is a piece of set theory that surprised everyone by being independent of the Zermelo-Fraenkel axioms. The Axiom says that, if you have a collection of disjoint nonempty sets, then there must exist at least one set with exactly one element from each of those sets. That is, you can pick one thing out of each of a set of bins. It’s easy to see what this has in common with Zorn’s Lemma: both seem too obvious to imagine proving. That’s the sort of thing that makes a good axiom. Thing about a lemma, though, is we do prove it. That’s how we know it’s a lemma. How can a lemma be equivalent to an axiom?

I’ll argue by analogy. In Euclidean geometry one of the axioms is this annoying statement about on which side of a line two other lines that intersect it will meet. If you have this axiom, you can prove some nice results, like, the interior angles of a triangle add up to two right angles. If you decide you’d rather make your axiom that bit about the interior angles adding up? You can go from that to prove the thing about two lines crossing a third line.

So it is here. If you suppose the Axiom of Choice is true, you can get Zorn’s Lemma: you can pick an element in your set, find a chain for which that’s the minimum, and find your maximal element from that. If you make Zorn’s Lemma your axiom? You can use x ≤ y to mean “x is a less desirable element to pick out of this set than is y”. And then you can choose a maximal element out of your set. (It’s a bit more work than that, but it’s that kind of work.)

There’s another theorem, or principle, that’s (with reservations) equivalent to both Zorn’s Lemma and the Axiom of Choice. It’s another piece that seems so obvious it should defy proof. This is the well-ordering theorem, which says that every set can be well-ordered. That is, so that every non-empty subset has some minimum element. Finally, a mathematical excuse for why we have alphabetical order, even if there’s no clear reason that “j” should come after “i”.

(I said “with reservations” above. This is because whether these are equivalent depends on what, precisely, kind of deductive logic you’re using. If you are not using ordinary first-order logic, and are using a “second-order logic” instead, they differ.)

Ernst Zermelo introduced the Axiom of Choice to set theory so that he could prove this in a way that felt reasonable. I bet you can imagine how you’d go from “every non-empty set has a minimum element” right back to “you can always pick one element of every set”, though. And, maybe if you squint, can see how to get from “there’s always a minimum” to “there has to be a maximum”. I’m speaking casually here because proving it precisely is more work than we need to do.

I mentioned how Zorn did not name his lemma after himself. Mathematicians typically don’t name things for themselves. Nor did he even think of it as a lemma. His name seems to have adhered to the principle in the late 30s. Credit the nonexistent mathematician Bourbaki writing about “le théorème de Zorn”. By 1940 John Tukey, celebrated for the Fast Fourier Transform, wrote of “Zorn’s Lemma”. Tukey’s impression was that this is how people in Princeton spoke of it at the time. He seems to have been the first to put the words “Zorn’s Lemma” in print, though. Zorn wasn’t the first to have stated this. Kazimierz Kuratowski, in 1922, described what is clearly Zorn’s Lemma in a different form. Zorn remembered being aware of Kuratowski’s publication but did not remember noticing the property. The Hausdorff Maximal Principle, of Felix Hausdorff, has much the same content. Zorn said he did not know about Hausdorff’s 1927 paper until decades later.

Zorn’s lemma, the Axiom of Choice, the well-ordering theorem, and Hausdorff’s Maximal Principle all date to the early 20th century. So do a handful of other ideas that turn out to be equivalent. This was an era when set theory saw an explosive development of new and powerful ideas. The point of describing this chain is to emphasize that great concepts often don’t have a unique presentation. Part of the development of mathematics is picking through several quite similar expressions of a concept. Which one do we enshrine as an axiom, or at least the canonical presentation of the idea?

We have to choose.


And with this I at last declare the hard work of the Little 2021 Mathematics A-to-Z at an end. I plan to follow up, as traditional, with a little essay about what I learned while doing this project. All of the Little 2021 Mathematics A-to-Z essays should be at this link. And then all of the A-to-Z essays from all eight projects should be at this link. Thank you so much for your support in these difficult times.

How February 2022 Treated My Mathematics Blog


This past month I finished my hiatus, the one where I reran old A-to-Z pieces instead of finishing off what I thought would be a simple, small project for 2021. And, after a mishap, got back to finishing things. As a result I published fewer pieces in February than I had since October. I had an inflated posting record in December and January, from reposting old material. I expected that end to shrink my readership again. And, yes, that’s what happened.

In February, according to WordPress, I attracted 1,875 page views. That’s below the twelve-month running mean of 2,360.8 page views leading up to February 2022. It’s also below the running median of 2,151.5 page views. In fact, it’s the lowest number of page views in a month going back to July 2020, around here.

Bar chart of two and a half years' worth of monthly readership figures. There was a huge peak around October 2019, and a much lower but fairly steady wave of readership after that. It's slightly increased for January 2022 and then dropped back again for February.
That huge peak in October 2019, about to fall off this view? That’s from one Reddit-like aggregator having one thread that noticed me, and people coming in to look at one piece. Really shows how big the Internet is and how slight my place in it is.

Ah, but what about unique visitors? There were 1,313 of those, figures WordPress. That’s below the twelve-month running mean of 1,661.9 and the running median of 1,534.5. It happens that’s also the lowest monthly figure going back to July 2020. (Although that by a whisker: July 2021 had a couple more views, and unique visitors, than did February 2022. I don’t know what’s wrong with Julys around here.)

The number of likes dropped to 28, way below the mean of 40.9 and median of 39.5. And that was the lowest count since November of 2021. And there were only two comments, way below the mean of 14.9 and median of 10. I haven’t been below that figure since December of 2019. At least these are non-July dates to deal with.

This would all be too sad to bear except that if you look at these figures per posting? Then they snap right back into line. Like, this was in February an average of 312.5 page views every time I posted something. The twelve months leading up to that saw a mean of 301.6 page views per posting and a median of 302.8 page views per posting. February saw 218.8 unique visitors per posting. The running mean was 212.2 and running median 211.3. Even the likes become not so bad: 4.7 per posting. The mean was 5.1 and the median 4.9. In this figuring, the only dire number was comments, a scant 0.3 per posting, compared to mean of 1.9 and median of 1.4. So in that light, you know, things aren’t so bad.

What are the popular things of February? It’s worth running the whole list down. In decreasing order of popularity we have:

Other stuff, from before February, was even more popular, though. It’s getting to be the time of year people look to learn what the most and least likely dates of Easter are, for example. (Easter 2022 is set for the 17th of April. This is on the less-likely side of the band from the 28th of March through 21st of April when Easter is most likely. However, it is one of the most likely dates for Easter in the lifetime of anyone reading this blog, that is, for the span from 1925 to 2100.)

WordPress credits me with publishing 9,163 words in February, for an average post length of 1,527.2 words. This brings my average post length for the year up to 1,237. This is impressive considering I’ve been trying to write my A-to-Zs short for 2021.

WordPress figures that I started March 2022 having posted 1,697 things here. They’ve altogether drawn 3,313 comments from a total 154,866 page views and 92,956 logged unique visitors.

If you’d like to be a regular reader around here, please keep reading. There’s a button at the upper right of the page, “Follow Nebusresearch”, to add this blog to your WordPress reader. There’s a field below that to get posts sent to you in e-mail as they’re published. I do nothing with the e-mail except send those posts; I can’t say what WordPress Master Command does with them. And if you have an RSS reader, you can put the essays feed into that.

Thank you all for your reading, whatever way you do.

My Little 2021 Mathematics A-to-Z: Ordinary Differential Equations


Mr Wu, my Singapore Maths Tuition friend, has offered many fine ideas for A-to-Z topics. This week’s is another of them, and I’m grateful for it.

Ordinary Differential Equations

As a rule, if you can do something with a number, you can do the same thing with a function. Not always, of course, but the exceptions are fewer than you might imagine. I’ll start with one of those things you can do to both.

A powerful thing we learn in (high school) algebra is that we can use a number without knowing what it is. We give it a name like ‘x’ or ‘y’ and describe what we find interesting about it. If we want to know what it is, we (usually) find some equation or set of equations and find what value of x could make that true. If we study enough (college) mathematics we learn its equivalent in functions. We give something a name like f or g or Ψ and describe what we know about it. And then try to find functions which make that true.

There are a couple common types of equation for these not-yet-known functions. The kind you expect to learn as a mathematics major involves differential equations. These are ones where your equation (or equations) involve derivatives of the not-yet-known f. A derivative describes the rate at which something changes. If we imagine the original f is a position, the derivative is velocity. Derivatives can have derivatives also; this second derivative would be the acceleration. And then second derivatives can have derivatives also, and so on, into infinity. When an equation involves a function and its derivatives we have a differential equation.
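
A concrete example, of my own choosing: let f(t) be the height of a tossed ball at time t. The claim that gravity accelerates the ball downward at the constant rate g is the differential equation

\frac{d^2 f}{dt^2} = -g

and the functions making it true are f(t) = -\frac{1}{2} g t^2 + v_0 t + f_0 , one for every choice of starting height f_0 and starting velocity v_0 .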

(The second common type is the integral equation, using a function and its integrals. And a third involves both derivatives and integrals. That’s known as an integro-differential equation, and isn’t life complicated enough?)

Differential equations themselves naturally divide into two kinds, ordinary and partial. They serve different roles. Usually, with an ordinary differential equation, we can describe the change from knowing only the current situation. (This may include velocities and accelerations and stuff. We could ask what the velocity at an instant means. But never mind that here.) Usually a partial differential equation bases the change where you are on the neighborhood of your location. If you see holes you can pick in that, you’re right. The precise difference is about the independent variables. If the function f has more than one independent variable, it’s possible to take a partial derivative. This describes how f changes if one variable changes while the others stay fixed. If the function f has only the one independent variable, you can only take ordinary derivatives. So you get an ordinary differential equation.
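
To see the difference in symbols: a simple ordinary differential equation is exponential decay,

\frac{dx}{dt} = -k \, x(t)

with time t the only independent variable. A simple partial differential equation is the heat equation,

\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}

where the unknown u(x, t) depends on position and time both, so its derivatives have to be partial ones.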

But let’s speak casually here. If what you’re studying can be fully represented with a dashboard readout? Like, an ordered list of positions and velocities and stuff? You probably have an ordinary differential equation. If you need a picture with a three-dimensional surface or a color map to understand it? You probably have a partial differential equation.

One more metaphor. If you can imagine the thing you’re modeling as a marble rolling around on a hilly table? Odds are that’s an ordinary differential equation. And that representation covers a lot of interesting problems. Marbles on hills, obviously. But also rigid pendulums: we can treat the angle a pendulum makes, and the rate at which that angle changes, as dimensions of space. The pendulum’s swinging then matches exactly a marble rolling around the right hilly table. Planets in space, too. We need more dimensions — three space dimensions and three velocity dimensions — for each planet. So, like, the Sun-Earth-and-Moon would be rolling around a hilly table with 18 dimensions. That’s all right. We don’t have to draw it. The mathematics works about the same. Just longer.

[ To be precise we need three momentum dimensions for each orbiting body. If they’re not changing mass appreciably, and not moving too near the speed of light, velocity is just momentum times a constant number, so we can use whichever is easier to visualize. ]

We mostly work with ordinary differential equations of either the first or the second order. First order means we have first derivatives in the equation, but never have to deal with more than the original function and its first derivative. Second order means we have second derivatives in the equation, but never have to deal with more than the original function or its first or second derivatives. You’ll never guess what a “third order” differential equation is unless you have experience in reading words. There are some reasons we stick to these low orders like first and second, though. One is that we know of good techniques for solving most first- and second-order ordinary differential equations. For higher-order differential equations we often use techniques that find a related normal old polynomial. Its solution helps with the thing we want. Or we break a high-order differential equation into a set of low-order ones. So yes, again, we search for answers where the light is good. But the good light covers many things we like to look at.
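
Both of those tricks fit in a line or two of example. Take the second-order equation \frac{d^2 y}{dt^2} + y = 0 . Guessing a solution of the form y = e^{rt} gives the normal old polynomial r^2 + 1 = 0 , whose roots \pm i point us to the solutions \cos(t) and \sin(t) . Or, to break the equation into first-order pieces, give the derivative its own name: set v = \frac{dy}{dt} , and the one second-order equation becomes the pair \frac{dy}{dt} = v and \frac{dv}{dt} = -y .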

There’s simple harmonic motion, for example. It covers pendulums and springs and perturbations around stable equilibriums and all. This turns out to cover so many problems that, as a physics major, you get a little sick of simple harmonic motion. There’s the Airy function, which started out to describe the rainbow. It turns out to describe particles trapped in a triangular quantum well. The van der Pol equation, about systems where a small oscillation gets energy fed into it while a large oscillation gets energy drained. All kinds of exponential growth and decay problems. Very many functions where pairs of particles interact.

This doesn’t cover everything we would like to do. That’s all right. Ordinary differential equations lend themselves to numerical solutions. It requires considerable study and thought to do these numerical solutions well. But this doesn’t make the subject unapproachable. Few of us could animate the “Pink Elephants on Parade” scene from Dumbo. But could you draw a flip book of two stick figures tossing a ball back and forth? If you’ve had a good rest, a hearty breakfast, and have not listened to the news yet today, so you’re in a good mood?

The flip book ball is a decent example here, too. The animation will look good if the ball moves about the “right” amount between pages. A little faster when it’s first thrown, a bit slower as it reaches the top of its arc, a little faster as it falls back to the catcher. The ordinary differential equation tells us how fast our marble is rolling on this hilly table, and in what direction. So we can calculate how far the marble needs to move, and in what direction, to make the next page in the flip book.
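
If you’ll tolerate a sketch in Python: here is that flip book done with the simplest stepping scheme there is, Euler’s method. The particular numbers are invented for illustration.

```python
# A flip book of a tossed ball, by Euler's method. The "dashboard readout"
# is the height y and the velocity v; dt is the time between pages.
g = 9.81          # gravitational acceleration, m/s^2
dt = 1.0 / 24.0   # one page per 24th of a second
y, v = 1.5, 4.0   # starting height (m) and upward speed (m/s)

pages = []
while y >= 0.0:           # draw pages until the ball comes back down
    pages.append(y)
    v = v - g * dt        # the equation says velocity changes at the rate -g
    y = y + v * dt        # move the ball the amount the equation dictates

print(len(pages), "pages in the flip book")
```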

Almost. The rate at which the marble should move will change, in the interval between one flip-book page and the next. The difference, the error, may not be much. But there is a difference between the exact and the numerical solution. Well, there is a difference between a circle and a regular polygon. We have many ways of minimizing and estimating and controlling the error. Doing that is what makes numerical mathematics the high-paid professional industry it is. Our game of catch we can verify by flipping through the book. The motion of four dozen planets and moons attracting one another is harder to be sure we’ve calculated right.

I said at the top that most anything one can do with numbers one can do with functions also. I would like to close the essay with some great parallel. Like, the way that trying to solve cubic equations made people realize complex numbers were good things to have. I don’t have a good example like that for ordinary differential equations, where the study expanded our ideas of what functions could be. Part of that is that complex numbers are more accessible than the stranger functions. Part of that is that complex numbers have a story behind them. The story features titanic figures like Gerolamo Cardano, Niccolò Tartaglia and Ludovico Ferrari. We see some awesome and weird personalities in 19th century mathematics. But their fights are generally harder to watch from the sidelines and cheer on. And part is that it’s easier to find pop historical treatments of the kinds of numbers there are. The historiography of what a “function” is, though, is a specialist occupation.

But I can think of a possible case. A tool that’s sometimes used in solving ordinary differential equations is the “Dirac delta function”. Yes, that Paul Dirac. It’s a weird function, written as \delta(x) . It’s equal to zero everywhere, except where x is zero. When x is zero? It’s … we don’t talk about what it is. Instead we talk about what it can do. The integral of that Dirac delta function times some other function can equal that other function at a single point. It strains credibility to call this a function the way we speak of, like, sin(x) or \sqrt{x^2 + 4} being functions. Many will classify it as a distribution instead. But it is so useful, for a particular kind of problem, that it’s impossible to throw away.
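
That property, written out: for a (suitably nice) function f and any point a,

\int_{-\infty}^{\infty} f(x) \, \delta(x - a) \, dx = f(a)

so that integrating against the delta function plucks out the value of f at the single point a.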

So perhaps the parallels between numbers and functions extend that far. Ordinary differential equations can make us notice kinds of functions we would not have seen otherwise.


And with this — I can see the much-postponed end of the Little 2021 Mathematics A-to-Z! You can read all my entries for 2021 at this link, and if you’d like you can find all my A-to-Z essays here. How will I finish off the shortest yet most challenging sequence I’ve done yet? Will it be yellow and equivalent to the Axiom of Choice? Answers should come, in a week, if all starts going well.

My Little 2021 Mathematics A-to-Z: Tangent Space


And now, finally, I resume and hopefully finish what was meant to be a simpler and less stressful A-to-Z for last year. I’m feeling much better about my stress loads now and hope that I can soon enjoy the feeling of having a thing accomplished.

This topic is one of many suggestions that Elkement, one of my longest blog-friendships here, offered. It’s a creation that sent me back to my grad school textbooks, some of those slender paperback volumes with tiny, close-set type that turn out to be far more expensive than you imagine. Though not in this case: my most useful reference here was V I Arnold’s Ordinary Differential Equations, stamped inside as costing $18.75. The field is full of surprises. Another wonderful reference was this excellent set of notes prepared by Jodin Morey. They would have done much to help me through that class.

Tangent Space

Stand in midtown Manhattan, holding a map of midtown Manhattan. You have — not a tangent space, not yet. A tangent plane, representing the curved surface of the Earth with the flat surface of your map, though. But the tangent space is near: see how many blocks you must go, along the streets and the avenues, to get somewhere. Four blocks north, three west. Two blocks south, ten east. And so on. Those directions, of where you need to go, are the tangent space around you.

There is the first trick in tangent spaces. We get accustomed, early in learning calculus, to think of tangent lines and then of tangent planes. These are nice, flat approximations to some original curve. But while we’re introduced to the tangent space, and first learn examples of it, as tangent planes, we don’t stay there. There are several ways to define tangent spaces. One recasts tangent spaces in abstract algebra terms, describing them as a ring based on functions that are equal to zero at the tangent point. (To be exact, it’s an ideal, based on a quotient group, based on two sets of such functions.)
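
For the curious, one common way to write that construction: let \mathfrak{m}_p be the set of smooth functions equal to zero at the point p. Then the quotient \mathfrak{m}_p / \mathfrak{m}_p^2 is the cotangent space at p, and the tangent space is its dual,

T_p M \cong \left( \mathfrak{m}_p / \mathfrak{m}_p^2 \right)^*

You don’t need to follow that to follow the rest of this essay.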

That’s a description mathematicians are inclined to like, not only because it’s far harder to imagine than a map of the city is. But this ring definition describes the tangent space in terms of what we can do with it, rather than how to calculate finding it. That tends to appeal to mathematicians. And it offers surprising insights. Cleverer mathematicians than I am notice how this makes tangent spaces very close to Lagrange multipliers. Lagrange multipliers are a technique to find the maximum of a function subject to a constraint from another function. They seem to work by magic, and tangent spaces will echo that.

I’ll step back from the abstraction. There are relevant observations to make from this map of midtown. The directions “four blocks north, three west” do not represent any part of Manhattan. They describe a way you might move in Manhattan, yes. But you could move in that direction from many places in the city. And you could go four blocks north and three west if you were in any part of any city with a grid of streets. It is a vector space, with elements that are velocities at a tangent point.

The tangent space is less a map showing where things are and more one of how to get to other places, closer to a subway map than a literal one. Still, the topic is steeped in the language of maps. I’ll find it a useful metaphor too. We do not make a map unless we want to know how to find something. So the interesting question is what do we try to find in these tangent spaces?

There are several routes to tangent spaces. The one I’m most familiar with is through dynamical systems. These are typically physics-driven, sometimes biology-driven, problems. They describe things that change in time according to ordinary differential equations. Physics problems particularly are often about things moving in space. Space, in dynamical systems, becomes “phase space”, an abstract universe spanned by all of the possible values of the variables. The variables are, usually, the positions and momentums of the particles (for a physics problem). Sometimes time and energy appear as variables. In biology variables are often things that represent populations. The role the Earth served in my first paragraph is now played by a manifold. The manifold represents whatever constraints are relevant to the problem. That’s likely to be conservation laws or limits on how often arctic hares can breed or such.

The evolution in time of this system, though, is now the tracing out of a path in phase space. An understandable and much-used system is the rigid pendulum. A stick, free to swing around a point. There are two useful coordinates here. There’s the angle the stick makes, relative to the vertical axis, \theta . And there’s how fast that angle is changing, \dot{\theta} . You can draw these axes; I recommend \theta as the horizontal and \dot{\theta} as the vertical axis but, you know, you do you.

If you give the pendulum a little tap, it’ll swing back and forth. It rises and moves to the right, then falls while moving to the left, then rises and moves to the left, then falls and moves to the right. In phase space, this traces out an ellipse. It’s your choice whether it’s going clockwise or anticlockwise. If you give the pendulum a huge tap, it’ll keep spinning around and around. It’ll spin a little slower as it gets nearly upright, but it speeds back up again. So in phase space that’s a wobbly line, moving either to the right or the left, depending what direction you hit it.
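
For the record, the equations behind those pictures: with g the gravitational acceleration, L the length of the stick, and \omega written for \dot{\theta} , the pendulum’s state changes according to

\frac{d\theta}{dt} = \omega, \qquad \frac{d\omega}{dt} = -\frac{g}{L} \sin(\theta)

and the energy-like quantity \frac{1}{2}\omega^2 - \frac{g}{L}\cos(\theta) stays constant along any one trajectory. Small values of it give the ellipses; large values give the wobbly lines.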

You can even imagine giving the pendulum just the right tap, exactly hard enough that it rises to vertical and balances there, perfectly aligned so it doesn’t fall back down. This is a special path, the dividing line between those ellipses and that wavy line. Or setting it vertically there to start with and trusting no truck driving down the street will rattle it loose. That’s a very precise dot, where \dot{\theta} is exactly zero. These paths, the trajectories, match whatever walking you did in the first paragraph to get to some spot in midtown Manhattan. And now let’s look again at the map, and the tangent space.

Within the tangent space we see what changes would change the system’s behavior. How much of a tap we would need, say, to launch our swinging pendulum into never-ending spinning. Or how much of a tap to stop a spinning pendulum. Every point on a trajectory of a dynamical system has a tangent space. And, for many interesting systems, the tangent space will be separable into two pieces. One of them will be perturbations that don’t go far from the original trajectory. One of them will be perturbations that do wander far from the original.

These regions may have a complicated border, with enclaves and enclaves within enclaves, and so on. This can be where we get (deterministic) chaos from. But what we usually find interesting is whether the perturbation keeps the old behavior intact or destroys it altogether. That is, how we can change where we are going.

That said, in practice, mathematicians don’t use tangent spaces to send pendulums swinging. They tend to come up when one is past studying such petty things as specific problems. They’re more often used in studying the ways that dynamical systems can behave. Tangent spaces themselves often get wrapped up into structures with names like tangent bundles. You’ll see them proving the existence of some properties, describing limit points and limit cycles and invariants and quite a bit of set theory. These can take us surprising places. It’s possible to use a tangent-space approach to prove the fundamental theorem of algebra, that every nonconstant polynomial has at least one (complex) root. This seems to me the long way around to get there. But it is amazing to learn that is a place one can go.


I am so happy to be finally finishing Little 2021 Mathematics A-to-Z. All of this project’s essays should be at this link. And all my glossary essays from every year should be at this link. Thank you for reading.

The Plan, and How It Will Go Wrong


I spent a while repeating old A-to-Z materials for the letters T, O, and Z, the unfinished business from my Little 2021 Mathematics A-to-Z. This was to give me the time to recover and to prepare new essays to finish out the already-reduced project from last year. And then I figured once that was done, I could do three posts on successive Wednesdays, and a wrap-up post where I said what I learned. And then I’d be free to do whatever I felt like.

You notice this is not an A-to-Z essay. It’s been a struggle and this morning as I was preparing to finish off a fresh essay I realized: I had picked a topic I did already. No, it was not torus again. But there’s no doing a whole new essay in under three hours, especially not since I need to get groceries before a really nasty-looking bunch of weather we’re getting delivered this evening and tomorrow.

So it’s all pushed back another week again. All I can say is I hope I’ll be happy this hour next week.

How January 2022 Treated My Mathematics Blog


It’s a reasonable time for me to check on my readership statistics for the past month. The current month is maybe fourteen minutes from ending, after all. January was my most prolific month since October 2020, with 16 posts published. Nearly all were repostings of old A-to-Z essays. But if you weren’t checking in here in 2015, how would you know the difference, except by my pointing it out?

I have long suspected the thing that most affects my readership is how many times I post. So how did this block of repeat posts affect my readership? Says WordPress, it was like this:

Bar chart of two and a half years' worth of monthly readership figures. There was a huge peak around October 2019, and a much lower but fairly steady wave of readership after that. It's slightly increased for January 2022 compared to December 2021.
I like the views-per-visitor statistic. I don’t know why it’s only shown in the window that pops up when you hover over one of these monthly bars, though. The posts published per month seem like something it would be interesting to see presented as a bar chart too.

The number of pages viewed in January rose to 2,108, its highest figure since October 2021. That’s below the running averages for the twelve months ending in December 2021, though. The running mean was 2,402.7 views per month, and the median 2,337 views per month. Ah, but what if we rate that per posting? Then there were 131.8 views per posting. The running mean was 321.8 views per posting and the running median 307.4. (And none of this is to say that any posting got 132 views. Most of what’s read any month is older material. The things that have had the chance to get some traction as the answer to search engine queries.)

The number of unique visitors rose from December, to 1,458 unique visitors in January. That’s still below the running mean of 1,694.5 visitors and the running median of 1,654.5. Per posting, the figure is even more dire: 91.1 visitors per posting, compared to a mean of 226.6 and median of 219.2. These per-posting unique visitor numbers are in line with the sort of thing I did back in 2019 or so, when I had lots of postings in both the A-to-Z and in the Reading the Comics line, though.

There were 51 things liked here in January, a slight rise and even above the mean of 40.1 and median of 38.5. Per posting, that’s 3.2 likes, compared to a mean of 5.3 and median of 5.6. All of these are below the likability count of distant years like 2018, which were themselves much less liked than, say, 2015.

Comments fell again, with only four given or received around here in January. The mean is 15.7 and median 11.5. That’s a dire 0.3 comments per posting, although I grant there wasn’t a lot for people to respond to. The mean is 2.0 comments per posting, and median 1.6, and, you know, I’ve had worse months. (February is looking like one!)

I had a lot of posts get at least some views in January. The five most popular posts from the month were:

And for once I have enough posts that it feels silly to list all of them in order of decreasing popularity. I’m a touch surprised none of the A-to-Z reposts were among the most popular. What the record suggests is people like amusing little trifles or me talking about myself. Ah, if only it weren’t painful to talk about myself.

WordPress credits me with 18,040 words published in January, for an average of 1,128 words per posting. That’s more than any month of 2020 or 2021, to my surprise.

WordPress figures that as of the start of February I’d posted 1,691 things here, drawing 152,987 views from 91,642 logged unique visitors. And that there were a total of 3,311 comments altogether.

And that should be enough looking back for now. I hope to resume, and complete, the Little 2021 A-to-Z next week, and after that, let’s just see what I do.

If you would like to see, the easiest way is to keep reading around here. There’s a button at the upper right of the page, “Follow Nebusresearch”, which should add this page to your WordPress reader. Or you can get posts e-mailed to you, using the “Follow Nebusresearch Via E-mail” button beneath that. I do nothing with that e-mail address except send posts. I don’t know what WordPress does with it. Or you can put the RSS feed into your reader, and read what you like without my ability to ever know it, except by your correcting me. However you choose to do it, thank you.

From my Seventh A-to-Z: Zero Divisor


Here I stand at the end of the pause I took in 2021’s Little Mathematics A-to-Z, in the hopes of building the time and buffer space to write its last three essays. Have I succeeded? We’ll see next week, but I will say that I feel myself in a much better place than I was in December.

The Zero Divisor closed out my big project for the first plague year. It let me get back to talking about abstract algebra, one of the cores of a mathematics major’s education. And it let me get into graph theory, the unrequited love of my grad school life. The subject also let me tie back to Michael Atiyah, the start of that year’s A-to-Z. Often a sequence will pick up a theme and 2020’s gave a great illusion of being tightly constructed.


Jacob Siehler had several suggestions for this last of the A-to-Z essays for 2020. Zorn’s Lemma was an obvious choice. It’s got an important place in set theory, it’s got some neat and weird implications. It’s got a great name. The zero divisor is one of those technical things mathematics majors have to deal with. It never gets any pop-mathematics attention. I picked the less-travelled road and found a delightful scenic spot.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x (division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Zero Divisor.

3 times 4 is 12. That’s a clear, unambiguous, and easily-agreed-upon arithmetic statement. The thing to wonder is what kind of mathematics it takes to mess that up. The answer is algebra. Not the high school kind, with x’s and quadratic formulas and all. The college kind, with group theory and rings.

A ring is a mathematical construct that lets you do a bit of arithmetic. Something that looks like arithmetic, anyway. It has a set of elements.  (An element is just a thing in a set.  We say “element” because it feels weird to call it “thing” all the time.) The ring has an addition operation. The ring has a multiplication operation. Addition has an identity element, something you can add to any element without changing the original element. We can call that ‘0’. The integers, or to use the lingo Z , are a ring (among other things).

Among the rings you learn, after the integers, is the integers modulo … something. This can be modulo any counting number. The integers modulo 10, for example, we write as Z_{10} for short. There are different ways to think of what this means. The one convenient for this essay is that it’s the integers 0, 1, 2, up through 9. And that the result of any calculation is “how much more than a whole multiple of 10 this calculation would otherwise be”. So then 3 times 4 is now 2. 3 times 5 is 5; 3 times 6 is 8. 3 times 7 is 1, and doesn’t that seem peculiar? That’s part of how modulo arithmetic warns us that groups and rings can be quite strange things.
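
If you like the machine to confirm such things, the rule is a one-line matter in, say, Python:

```python
# Multiplication in Z_10: keep what is left over after whole multiples of 10.
for b in (4, 5, 6, 7):
    print('3 times', b, 'is', (3 * b) % 10)
# prints: 3 times 4 is 2 / 3 times 5 is 5 / 3 times 6 is 8 / 3 times 7 is 1
```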

We can do modulo arithmetic with any of the counting numbers. Look, for example, at Z_{5} instead. In the integers modulo 5, 3 times 4 is … 2. This doesn’t seem to get us anything new. How about Z_{8} ? In this, 3 times 4 is 4. That’s interesting. It doesn’t make 3 the multiplicative identity for this ring. 3 times 3 is 1, for example. But you’d never see something like that for regular arithmetic.

How about Z_{12} ? Now we have 3 times 4 equalling 0. And that’s a dramatic break from how regular numbers work. One thing we know about regular numbers is that if a times b is 0, then either a is 0, or b is zero, or they’re both 0. We rely on this so much in high school algebra. It’s what lets us pick out roots of polynomials. Now? Now we can’t count on that.

When this does happen, when one thing times another equals zero, we have “zero divisors”. These are anything in your ring that can multiply by something else to give 0. Is zero, the additive identity, always a zero divisor? … That depends on what the textbook you first learned algebra from said. To avoid ambiguity, you can write a “nonzero zero divisor”. This clarifies your intentions and slows down your copy editing every time you read “nonzero zero”. Or call it a “nontrivial zero divisor” or “proper zero divisor” instead. My preference is to accept 0 as always being a zero divisor. We can disagree on this. What of zero divisors other than zero?

Your ring might or might not have them. It depends on the ring. The ring of integers Z , for example, doesn’t have any zero divisors except for 0. The ring of integers modulo 12 Z_{12} , though? Anything that isn’t relatively prime to 12 is a zero divisor. So, 2, 3, 4, 6, 8, 9, and 10 are zero divisors here. The ring of integers modulo 13 Z_{13} ? That doesn’t have any zero divisors, other than zero itself. In fact any ring of integers modulo a prime number, Z_{p} , lacks zero divisors besides 0.
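
Those claims are easy to check by brute force. A sketch, nothing optimized:

```python
# The nonzero zero divisors of Z_n: elements a with some nonzero b
# where a times b is 0 modulo n. (By my preference above, 0 counts too;
# this lists only the nonzero ones.)
def zero_divisors(n):
    return [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]

print(zero_divisors(12))   # [2, 3, 4, 6, 8, 9, 10]
print(zero_divisors(13))   # [], as for any prime modulus
```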

Focusing too much on integers modulo something makes zero divisors sound like some curious shadow of prime numbers. There are some similarities. Whether a number is prime depends on your multiplication rule and what set of things it’s in. Being a zero divisor in one ring doesn’t directly relate to whether something’s a zero divisor in any other. Knowing what the zero divisors are tells you something about the structure of the ring.

It’s hard to resist focusing on integers-modulo-something when learning rings. They work very much like regular arithmetic does. Even the strange thing about them, that every result is from a finite set of digits, isn’t too alien. We do something quite like it when we observe that three hours after 10:00 is 1:00. But many sets of elements can create rings. Square matrixes are the obvious extension. Matrixes are grids of elements, each of which … well, they’re most often going to be numbers. Maybe integers, or real numbers, or complex numbers. They can be more abstract things, like rotations or whatnot, but they’re hard to typeset. It’s easy to find zero divisors in matrixes of numbers. Imagine, like, a matrix that’s all zeroes except for one element, somewhere. There are a lot of matrixes which, multiplied by that, will be a zero matrix, one with nothing but zeroes in it. Another common kind of ring is the polynomials. For these you need some constraint like the polynomial coefficients being integers-modulo-something. You can make that work.

In 1988 Istvan Beck tried to establish a link between graph theory and ring theory. We now have a usable standard definition of one. If R is any ring, then \Gamma(R) is the zero-divisor graph of R . (I know some of you think R is the real numbers. No; that’s a bold-faced \mathbb{R} instead. Unless that’s too much bother to typeset.) You make the graph by putting in a vertex for the elements in R . You connect two vertices a and b if the product of the corresponding elements is zero. That is, if they’re zero divisors for one another. (In Beck’s original form, this included all the elements. In modern use, we don’t bother including the elements that are not zero divisors.)

Drawing this graph \Gamma(R) makes tools from graph theory available to study rings. We can measure things like the distance between elements, or what paths from one vertex to another exist. What cycles — paths that start and end at the same vertex — exist, and how large they are. Whether the graphs are bipartite. A bipartite graph is one where you can divide the vertices into two sets, and every edge connects one thing in the first set with one thing in the second. What the chromatic number — the minimum number of colors it takes to make sure no two adjacent vertices have the same color — is. What shape does the graph have?
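
Here’s a sketch of the modern form of that graph for Z_{12} , leaning on the networkx library (an assumption on my part, that you have it installed) to answer a couple of those questions:

```python
# The zero-divisor graph of Z_12: vertices are the nonzero zero divisors,
# edges join pairs whose product is 0 modulo 12.
import networkx as nx

n = 12
vertices = [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]
G = nx.Graph()
G.add_nodes_from(vertices)
for a in vertices:
    for b in vertices:
        if a < b and (a * b) % n == 0:
            G.add_edge(a, b)

print(sorted(G.edges()))    # the shape of the graph, as a list of edges
print(nx.is_bipartite(G))   # True, as it happens, for this ring
print(nx.diameter(G))       # 3; a zero-divisor graph's diameter never exceeds 3
```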

It’s easy to think that zero divisors are just a thing which emerges from a ring. The graph theory connection tells us otherwise. You can make a potential zero divisor graph and ask whether any ring could fit that. And, from that, what we can know about a ring from its zero divisors. Mathematicians are drawn as if by an occult hand to things that let you answer questions about a thing from its “shape”.

And this lets me complete a cycle in this year’s A-to-Z, to my delight. There is an important question in topology which group theory could answer. It’s a generalization of the zero-divisors conjecture, a hypothesis about what fits in a ring based on certain types of groups. This hypothesis — actually, these hypotheses. There are a bunch of similar questions about what invariants called the L2-Betti numbers can be. These we call the Atiyah Conjecture. This because of work Michael Atiyah did in the cohomology of manifolds starting in the 1970s. It’s work, I admit, I don’t understand well enough to summarize, and hope you’ll forgive me for that. I’m still amazed that one can get to cutting-edge mathematics research like this. It seems, at its introduction, to be only a subversion of how we find x for which (x - 2)(x + 1) = 0 .


And this, I am amazed to say, completes the All 2020 A-to-Z project. All of this year’s essays should be gathered at this link. In the next couple days I plan to check that they actually are. All the essays from every A-to-Z series, going back to 2015, should be at this link. I plan to soon have an essay about what I learned in doing the A-to-Z this year. And then we can look to 2021 and hope that works out all right. Thank you for reading.

From my Sixth A-to-Z: Zeno’s Paradoxes


I suspect it is impossible to say enough about Zeno’s Paradoxes. To close out my 2019 A-to-Z, though, I tried saying something. There are four particularly famous paradoxes and I discuss what are maybe the second and third-most-popular ones here. (The paradox of the Dichotomy is surely most famous.) The problems presented are about motion and may seem to be about physics, or at least about perception. But calculus is built on differentials, on the idea that we can describe how fast a thing is changing at an instant. Mathematicians have worked out a way to define this that we’re satisfied with and that doesn’t require (obvious) nonsense. But to claim we’ve solved Zeno’s Paradoxes — as unwary STEM majors sometimes do — is unwarranted.

Also I was able to work in a picture from an amusement park trip I took, the closing weekend of Kings Island park in 2019 and the last day that The Vortex roller coaster would run.


Today’s A To Z term was nominated by Dina Yagodich, who runs a YouTube channel with a host of mathematics topics. Zeno’s Paradoxes exist in the intersection of mathematics and philosophy. Mathematics majors like to declare that they’re all easy. The Ancient Greeks didn’t understand infinite series or infinitesimals like we do. Now they’re no challenge at all. This reflects a belief that philosophers must be silly people who haven’t noticed that one can, say, exit a room.

This is your classic STEM-attitude of missing the point. We may suppose that Zeno of Elea occasionally exited rooms himself. That is a supposition, though. Zeno, like most philosophers who lived before Socrates, we know from other philosophers making fun of him a century after he died. Or at least trying to explain what they thought he was on about. Modern philosophers are expected to present others’ arguments as well and as strongly as possible. This even — especially — when describing an argument they want to say is the stupidest thing they ever heard. Or, to use the lingo, when they wish to refute it. Ancient philosophers had no such compulsion. They did not mind presenting someone else’s argument sketchily, if they supposed everyone already knew it. Or even badly, if they wanted to make the other philosopher sound ridiculous. Between that and the sparse nature of the record, we have to guess a bit about what Zeno precisely said and what he meant. This is all right. We have some idea of things that might reasonably have bothered Zeno.

And they have bothered philosophers for thousands of years. They are about change. The ones I mean to discuss here are particularly about motion. And there are things we do not understand about change. This essay will not answer what we don’t understand. But it will, I hope, show something about why that’s still an interesting thing to ponder.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Zeno’s Paradoxes.

When we capture a moment by photographing it we add lies to what we see. We impose a frame on its contents, discarding what is off-frame. We rip an instant out of its context. And that’s before considering how we stage photographs, making people smile and stop tilting their heads. We forgive many of these lies. The things excluded from or the moments around the one photographed might not alter what the photograph represents. Making everyone smile can convey the emotional average of the event in a way that no individual moment represents. Arranging people to stand in frame can convey the participation in the way a candid photograph would not.

But there remains the lie that a photograph is “a moment”. It is no such thing. We notice this when the photograph is blurred. It records all the light passing through the lens while the shutter is open. A photograph records an eighth of a second. A thirtieth of a second. A thousandth of a second. But still, some time. There is always the ghost of motion in a picture. If we do not see it, it is because our photograph’s resolution is too coarse. If we could photograph something with infinite fidelity we would see, even in still life, the wobbling of the molecules that make up a thing.

A photograph of a blurry roller coaster passing through a vertical loop.
One of the many loops of Vortex, a roller coaster at Kings Island amusement park from 1987 to 2019. Taken by me the last day of the ride’s operation; this was one of the roller coaster’s runs after 7 pm, the close of the park the last day of the season.

Which implies something fascinating to me. Think of a reel of film. Here I mean old-school pre-digital film, the thing that’s a great strip of pictures, a new one shown 24 times per second. Each frame of film is a photograph, recording some split-second of time. How much time is actually in a film, then? How long, cumulatively, was a camera shutter open during a two-hour film? I use pre-digital, strip-of-film movies for convenience. Digital films offer the same questions, but with different technical points. And I do not want the writing burden of describing both analog and digital film technologies. So I will stick to the long sequence of analog photographs model.

Let me imagine a movie. One of an ordinary everyday event; an actuality, to use the terminology of 1898. A person overtaking a walking tortoise. Look at the strip of film. There are many frames which show the person behind the tortoise. There are many frames showing the person ahead of the tortoise. When are the person and the tortoise at the same spot?

We have to put in some definitions. Fine; do that. Say we mean when the leading edge of the person’s nose overtakes the leading edge of the tortoise’s, as viewed from our camera. Or, since there must be blur, when the center of the blur of the person’s nose overtakes the center of the blur of the tortoise’s nose.

Do we have the frame when that moment happened? I’m sure we have frames from the moments before, and frames from the moments after. But the exact moment? Are you positive? If we zoomed in, would it actually show the person is a millimeter behind the tortoise? That the person is a hundredth of a millimeter ahead? A thousandth of a hair’s width behind? Suppose that our camera is very good. It can take frames representing as small a time as we need. Does it ever capture that precise moment? To the point that we know, no, it’s not the case that the tortoise is one-trillionth the width of a hydrogen atom ahead of the person?

If we can’t show the frame where this overtaking happened, then how do we know it happened? To put it in terms a STEM major will respect, how can we credit a thing we have not observed with happening? … Yes, we can suppose it happened if we suppose continuity in space and time. Then it follows from the intermediate value theorem. But then we are begging the question. We impose the assumption that there is a moment of overtaking. This does not prove that the moment exists.
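
The intermediate value theorem, for the record: if a function f is continuous on the interval [a, b] , then f takes every value between f(a) and f(b) somewhere on that interval. Here f would be something like the person’s position minus the tortoise’s. It is negative in the early frames and positive in the late ones, so, if it is continuous, it must be exactly zero at some moment in between. That “if” is the part under dispute.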

Fine, then. What if time is not continuous? If there is a smallest moment of time? … If there is, then, we can imagine a frame of film that photographs only that one moment. So let’s look at its footage.

One thing stands out. There’s finally no blur in the picture. There can’t be; there’s no time during which to move. We might not catch the moment that the person overtakes the tortoise. It could “happen” in-between moments. But at least we have a moment to observe at leisure.

So … what is the difference between a picture of the person overtaking the tortoise, and a picture of the person and the tortoise standing still? A movie of the two walking should be different from a movie of the two pretending to be department store mannequins. What, in this frame, is the difference? If there is no observable difference, how does the universe tell whether, next instant, these two should have moved or not?

A mathematical physicist may toss in an answer. Our photograph is only of positions. We should also track momentum. Momentum carries within it the information of how position changes over time. We can’t photograph momentum, not without getting blurs. But analytically? If we interpret a photograph as “really” tracking the positions of a bunch of particles? To the mathematical physicist, momentum is as good a variable as position, and it’s as measurable. We can imagine a hyperspace photograph that gives us an image of positions and momentums. So, STEM types show up the philosophers finally, right?

Hold on. Let’s allow that somehow we get changes in position from the momentum of something. Hold off worrying about how momentum gets into position. Where does a change in momentum come from? In the mathematical physics problems we can do, the change in momentum has a value that depends on position. In the mathematical physics problems we have to deal with, the change in momentum has a value that depends on position and momentum. But that value? Put it in words. That value is the change in momentum. It has the same relationship to acceleration that momentum has to velocity. For want of a real term, I’ll call it acceleration. We need more variables. An even more hyperspatial film camera.

… And does acceleration change? Where does that change come from? That is going to demand another variable, the change-in-acceleration. (The “jerk”, according to people who want to tell you that “jerk” is a commonly used term for the change-in-acceleration, and no one else.) And the change-in-change-in-acceleration. Change-in-change-in-change-in-acceleration. We have to invoke an infinite regression of new variables. We got here because we wanted to suppose it wasn’t possible to divide a span of time infinitely many times. This seems like a lot to build into the universe to distinguish a person walking past a tortoise from a person standing near a tortoise. And then we still must admit not knowing how one variable propagates into another. That a person is wide is not usually enough explanation of how they are growing taller.

Numerical integration can model this kind of system with time divided into discrete chunks. It teaches us some ways that this can make logical sense. It also shows us that our projections will (generally) be wrong. At least unless we do things like have an infinite number of steps of time factor into each projection of the next timestep. Or use the forecast of future timesteps to correct the current one. Maybe use both. These are … not impossible. But being “ … not impossible” is not to say satisfying. (We allow numerical integration to be wrong by quantifying just how wrong it is. We call this an “error”, and have techniques that we can use to keep the error within some tolerated margin.)
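
As a tiny demonstration of that error-quantifying, here is Euler’s method on the equation \frac{dx}{dt} = x , whose exact solution at t = 1 is e. A sketch in Python, with made-up step counts; watch the error shrink in proportion to the step size:

```python
# Global error of Euler's method on dx/dt = x, x(0) = 1, run up to t = 1.
# Halving the time step roughly halves the error: a first-order method.
import math

def euler_error(steps):
    dt = 1.0 / steps
    x = 1.0
    for _ in range(steps):
        x += x * dt      # step forward by the current rate of change
    return abs(x - math.e)

for steps in (10, 20, 40, 80):
    print(steps, euler_error(steps))
```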

So where has the movement happened? The original scene had movement to it. The movie seems to represent that movement. But that movement doesn’t seem to be in any frame of the movie. Where did it come from?

We can have properties that appear in a mass which don’t appear in any component piece. No molecule of a substance has a color, but a big enough mass does. No atom of iron is ferromagnetic, but a chunk might be. No grain of sand is a heap, but enough of them are. The Ancient Greeks knew this; we call it the Sorites paradox, after Eubulides of Miletus. (“Sorites” means “heap”, as in heap of sand. But if you had to bluff through a conversation about ancient Greek philosophers you could probably get away with making up a quote you credit to Sorites.) Could movement be, in the term mathematical physicists use, an intensive property? But intensive properties are obvious to the outside observer of a thing. We are not outside observers to the universe. It’s not clear what it would mean for there to be an outside observer to the universe. Even if there were, what space and time are they observing in? And aren’t their space and their time and their observations vulnerable to the same questions? We’re in danger of insisting on an infinite regression of “universes” just so a person can walk past a tortoise in ours.

We can say where movement comes from when we watch a movie. It is a trick of perception. Our eyes take some time to understand a new image. Our brains insist on forming a continuous whole story even out of disjoint ideas. Our memory fools us into remembering a continuous line of action. That a movie moves is entirely an illusion.

You see the implication here. Surely Zeno was not trying to lead us to understand all motion, in the real world, as an illusion? … Zeno seems to have been trying to support the work of Parmenides of Elea. Parmenides is another pre-Socratic philosopher. So we have about four words that we’re fairly sure he authored, and we’re not positive what order to put them in. Parmenides was arguing about the nature of reality, and what it means for a thing to come into or pass out of existence. He seems to have been arguing something like that there was a true reality that’s necessary and timeless and changeless. And there’s an apparent reality, the thing our senses observe. And in our sensing, we add lies which make things like change seem to happen. (Do not use this to get through your PhD defense in philosophy. I’m not sure I’d use it to get through your Intro to Ancient Greek Philosophy quiz.) That what we perceive as movement is not what is “really” going on is, at least, imaginable. So it is worth asking questions about what we mean for something to move. What difference there is between our intuitive understanding of movement and what logic says should happen.

(I know someone wishes to throw down the word Quantum. Quantum mechanics is a powerful tool for describing how many things behave. It implies limits on what we can simultaneously know about the position and the time of a thing. But there is a difference between “what time is” and “what we can know about a thing’s coordinates in time”. Quantum mechanics speaks more to the latter. There are also people who would like to say Relativity. Relativity, special and general, implies we should look at space and time as a unified set. But this does not change our questions about continuity of time or space, or where to find movement in both.)

And this is why we are likely never to finish pondering Zeno’s Paradoxes. In this essay I’ve only discussed two of them: Achilles and the Tortoise, and The Arrow. There are two other particularly famous ones: the Dichotomy, and the Stadium. The Dichotomy is the one about how to get somewhere, you have to get halfway there. But to get halfway there, you have to get a quarter of the way there. And an eighth of the way there, and so on. The Stadium is the hardest of the four great paradoxes to explain. This is in part because the earliest writings we have about it don’t make clear what Zeno was trying to get at. I can think of something which seems consistent with what’s described, and contrary-to-intuition enough to be interesting. I’m satisfied to ponder that one. But other people may have different ideas of what the paradox should be.

There are a handful of other paradoxes which don’t get so much love, although one of them is another version of the Sorites Paradox. Some of them the Stanford Encyclopedia of Philosophy dubs “paradoxes of plurality”. These ask how many things there could be. It’s hard to judge just what he was getting at with this. We know that one argument had three parts, and only two of them survive. Trying to fill in that gap is a challenge. We want to fill in the argument we would make, projecting from our modern idea of this plurality. It’s not Zeno’s idea, though, and we can’t know how close our projection is.

I don’t have the space to make a thematically coherent essay describing these all, though. The set of paradoxes has demanded thought, even just to come up with a reason to think they don’t demand thought, for thousands of years. We will, perhaps, have to keep trying again to fully understand what it is we don’t understand.


And with that — I find it hard to believe — I am done with the alphabet! All of the Fall 2019 A-to-Z essays should appear at this link. Additionally, the A-to-Z sequences of this and past years should be at this link. Tomorrow and Saturday I hope to bring up some mentions of specific past A-to-Z essays. Next week I hope to share my typical thoughts about what this experience has taught me, and some other writing about this writing.

Thank you, all who’ve been reading, and who’ve offered topics, comments on the material, or questions about things I was hoping readers wouldn’t notice I was shorting. I’ll probably do this again next year, after I’ve had some chance to rest.

From my Fifth A-to-Z: Zugzwang


The Fall 2018 A-to-Z gave me the chance to talk a bit more about game theory. It and knot theory are two of the fields of mathematics I most long to know better. Well, that and differential geometry. It also gave me the chance to show off how I read The Yiddish Policeman’s Union. I enjoyed the book.


My final glossary term for this year’s A To Z sequence was suggested by aajohannas, who’d also suggested “randomness” and “tiling”. I don’t know of any blogs or other projects they’re behind, but if I do hear, I’ll pass them on.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble tiles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Zugzwang.

Some areas of mathematics struggle against the question, “So what is this useful for?” As though usefulness were a particular merit — or demerit — for a field of human study. Most mathematics fields discover some use, though, even if it takes centuries. Others are born useful. Probability, for example. Statistics. Know what the fields are and you know why they’re valuable.

Game theory is another of these. The subject, as often happens, we can trace back centuries. Usually as the study of some particular game. Occasionally in the study of some political science problem. But game theory developed a particular identity in the early 20th century. Some of this from set theory experts. Some from probability experts. Some from John von Neumann, because it was the 20th century and all that. Calling it “game theory” explains why anyone might like to study it. Who doesn’t like playing games? Who, studying a game, doesn’t want to play it better?

But why it might be interesting is different from why it might be important. Think of what a game is. It is a string of choices made by one or more parties. The point of the choices is to achieve some goal. Put that way you realize: this is everything. All life is making choices, all in the pursuit of some goal, even if that goal is just “not end up any worse off”. I don’t know that the earliest researchers in game theory as a field realized what a powerful subject they had touched on. But by the 1950s they were doing serious work in strategic planning, and by 1964 were even giving us Stanley Kubrick movies.

This is taking me away from my glossary term. The field of games is enormous. If we narrow the field some we can discuss specific kinds of games. And say more involved things about these games. So first we’ll limit things by thinking only of sequential games. These are ones where there are a set number of players, and they take turns making choices. I’m not sure whether the field expects the order of play to be the same every time. My understanding is that much of the focus is on two-player games. What’s important is that at any one step there’s only one party making a choice.

The other thing narrowing the field is to think of information. There are many things that can affect the state of the game. Some of them might be obvious, like where the pieces are on the game board. Or how much money a player has. We’re used to that. But there can be hidden information. A player might conceal some game money so as to make other players underestimate her resources. Many card games have one or more cards concealed from the other players. There can be information unknown to any party. No one can make a useful prediction what the next throw of the game dice will be. Or what the next event card will be.

But there are games where there’s none of this ambiguity. These are called games with “perfect information”. In them all the players know the past moves every player has made. Or at least should know them. Players are allowed to forget what they ought to know.

There’s a separate but similar-sounding idea called “complete information”. In a game with complete information, players know everything that affects the gameplay. At least, probably, apart from what their opponents intend to do. This might sound like an impossibly high standard, at first. All games with shuffled decks of cards and with dice to roll are out. There’s no concealing or lying about the state of affairs.

Set complete-information aside; we don’t need it here. Think only of perfect-information games. What are they? Some ancient games, certainly. Tic-tac-toe, for example. Some more modern versions, like Connect Four and its variations. Some that are actually deep, like checkers and chess and go. Some that are, arguably, more puzzles than games, as in sudoku. Some that hardly seem like games, like several people agreeing how to cut a cake fairly. Some that seem like tests to prove people are fundamentally stupid, like when you auction off a dollar. (The rules are set so players can easily end up paying more than a dollar.) But that’s enough for me, at least. You can see there are games of clear, tangible interest here.

The last restriction: think only of two-player games. Or at least two parties. Any of these two-party sequential games with perfect information are a part of “combinatorial game theory”. It doesn’t usually allow for incomplete-information games. But at least the MathWorld glossary doesn’t demand they be ruled out. So I will defer to this authority. I’m not sure how the name “combinatorial” got attached to this kind of game. My guess is that it seems like you should be able to list all the possible combinations of legal moves. That number may be enormous, as chess and go players are always going on about. But you could imagine a vast book which lists every possible game. If your friend ever challenged you to a game of chess the two of you could simply agree, oh, you’ll play game number 2,038,940,949,172 and then look up to see who won. Quite the time-saver.

Most games don’t have such a book, though. Players have to act on what they understand of the current state, and what they think the other player will do. This is where we get strategies from. Not just what we plan to do, but what we imagine the other party plans to do. When working out a strategy we often expect the other party to play perfectly. That is, to make no mistakes, to not do anything that worsens their position. Or that reduces their chance of winning.

… And yes, arguably, the word “chance” doesn’t belong there. These are games where the rules are known, every past move is known, every future move is in principle computable. And if we suppose everyone is making the best possible move then we can imagine forecasting the whole future of the game. One player has a “chance” of winning in the same way Christmas day of the year 2038 has a “chance” of being on a Tuesday. That is, the probability is just an expression of our ignorance, that we don’t happen to be able to look it up.

But what choice do we have? I’ve never seen a reference that lists all the possible games of tic-tac-toe. And that’s about the simplest combinatorial-game-theory game anyone might actually play. What’s possible is to look at the current state of the game. And evaluate which player seems to be closer to her goal. And then look at all the possible moves.
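Nobody has compiled that reference, but a computer can at least count what it would contain. A brute-force sketch in Python — my own illustration, nothing standard — walks the whole tic-tac-toe game tree, ending each game at a win or a filled board:

```python
# Count every distinct tic-tac-toe game by brute force.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board, player):
    if winner(board) is not None or all(cell is not None for cell in board):
        return 1                            # the game has ended; that's one game
    other = 'O' if player == 'X' else 'X'
    total = 0
    for i in range(9):
        if board[i] is None:
            board[i] = player
            total += count_games(board, other)
            board[i] = None                 # undo the move and try the next
    return total

print(count_games([None] * 9, 'X'))         # 255,168 games
```

It reports 255,168 games, counting two games as different whenever their move sequences differ, even when they pass through the same boards. You can see why nobody has printed the chess volume.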

There are three things a move can do. It can put the party closer to the goal. It can put the party farther from the goal. Or it can do neither. On her turn the other party might do something that moves you farther from your goal, moves you closer to your goal, or doesn’t affect your status at all. It seems like this makes strategy obvious. On every step take the available move that takes one closest to the goal. This is known as a “greedy” strategy. As the name suggests it isn’t automatically bad. If you expect the game to be a short one, greed might be the best approach. The catch is that moves that seem less good — even ones that seem to hurt you initially — might set up other, even better moves. So strategy requires some thinking beyond the current step. Properly, it requires thinking through to the end of the game. Or at least until the end of the game seems obvious.
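Here’s that contrast in miniature, on a toy game tree of my own invention; the moves and payoffs are made up purely for illustration. The minimax rule is the “thinking through to the end”: take the move that’s best for you, assuming your opponent then takes the move that’s worst for you, all the way down.

```python
# Nodes are either terminal payoffs (from my point of view: +1 win,
# 0 draw, -1 loss) or dicts mapping moves to successor positions.
def minimax(node, my_turn):
    if not isinstance(node, dict):          # terminal position
        return node
    values = [minimax(child, not my_turn) for child in node.values()]
    return max(values) if my_turn else min(values)

# The flashy move looks winning one step ahead, but perfect play refutes it:
tree = {'flashy': {'refutation': -1},
        'quiet':  {'blunder': +1, 'best-defense': 0}}
best = max(tree, key=lambda move: minimax(tree[move], my_turn=False))
print(best)   # 'quiet': worth 0 against perfect play, versus -1 for 'flashy'
```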

We should like a strategy that leaves us no choice but to win. Next-best would be one that leaves the game undecided, since something might happen like the other player needing to catch a bus and so resigning. This is how I got my solitary win in the two months I spent in the college chess club. Worst would be the games that leave us no choice but to lose.

It can be that there are no good moves. That is, that every move available makes it a little less likely that we win. Sometimes a game offers the chance to pass, preserving the state of the game but giving the other party the turn. Then maybe the other party will do something that creates a better opportunity for us. But if we are allowed to pass, there’s a good chance the game lets the other party pass, too, and we end up in the same fix. And it may be the rules of the game don’t allow passing anyway. One must move.

The phenomenon of having to make a move when it’s impossible to make a good move has prominence in chess. I don’t have the chess knowledge to say how common the situation is. But it seems to be a situation people who study chess problems love. I suppose it appeals to a love of lost causes and the hope that you can be brilliant enough to see what everyone else has overlooked. German chess writers gave it a name 160 years ago, “zugzwang”, “compulsion to move”. Somehow I never encountered the term when I was briefly a college chess player. Perhaps because I was never in zugzwang and was just too incompetent a player to find my good moves. I first encountered the term in Michael Chabon’s The Yiddish Policemen’s Union. The protagonist picked up the term as he investigated the murder of a chess player, and came to feel he was in zugzwang himself.

Combinatorial game theorists have picked up the word, and sharpened its meaning. If I understand correctly chess players allow the term to be used for any case where a player hurts her position by moving at all. Game theorists make it more dire. This may reflect their knowledge that an optimal strategy might require taking some dismal steps along the way. The game theorist formally grants the term only to the situation where the compulsion to move changes what should be a win into a loss. This seems terrible, but then, we’ve all done this in play. We all feel terrible about it.

I’d like here to give examples. But in searching the web I can find only courses in game theory, which are a bit too much for even me to summarize, or chess problems, which I’m not up to understanding. It seems hard to set out an example: I need to not just set out the game, but show that what had been a win is now, by any available move, turned into a loss. Chess is looser. It even allows, I discover, a double zugzwang, where both players are at a disadvantage if they have to move.

It’s a quite relatable problem. You see why game theory has this reputation as mathematics that touches all life.


And with that … I am done! All of the Fall 2018 Mathematics A To Z posts should be at this link. Next week I’ll post my big list of all the letters, though. And, as has become tradition, a post about what I learned by doing this project. And sometime before then I should have at least one more Reading the Comics post. Thanks kindly for reading and we’ll see when in 2019 I feel up to doing another of these.

From my Fourth A-to-Z: Zeta Functions


I did not remember how long a buildup there was to my Summer 2017 writings about the Zeta function. But it’s something that takes a lot of setup. I don’t go into why the Riemann Hypothesis is interesting. I might have been saving that for a later A-to-Z. Or I might have trusted that since every pop mathematics blog has a good essay about the Riemann Hypothesis already there wasn’t much I could add.

I realize on re-reading that one might take me to have said that the final exam for my Intro to Complex Analysis course was always in the back of my textbook. I’d meant that after the final, I tucked it into my book and left it there. Probably nobody was confused by this.


Today Gaurish, of For the love of Mathematics, gives me the last subject for my Summer 2017 A To Z sequence. And also my greatest challenge: the Zeta function. The subject comes to all pop mathematics blogs. It comes to all mathematics blogs. It’s not difficult to say something about a particular zeta function. But to say something at all original? Let’s watch.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Zeta Function.

The spring semester of my sophomore year I had Intro to Complex Analysis. Monday Wednesday 7:30; a rare evening class, one of the few times I’d eat dinner and then go to a lecture hall. There I discovered something strange and wonderful. Complex Analysis is a far easier topic than Real Analysis. Both are courses about why calculus works. But why calculus for complex-valued numbers works is a much easier problem than why calculus for real-valued numbers works. It’s dazzling. Part of this is that Complex Analysis, yes, builds on Real Analysis. So Complex can take for granted some things that Real has to prove. I didn’t mind. Given the way I crashed through Intro to Real Analysis I was glad for a subject that was, relatively, a breeze.

As we worked through Complex Variables and Applications so many things, so very many things, got to be easy. The basic unit of complex analysis, at least as we young majors learned it, was in contour integrals. These are integrals whose value depends on the values of a function on a closed loop. The loop is in the complex plane. The complex plane is, well, your ordinary plane. But we say the x-coordinate and the y-coordinate are parts of the same complex-valued number. The x-coordinate is the real-valued part. The y-coordinate is the imaginary-valued part. And we call that sum ‘z’. In complex-valued functions ‘z’ serves the role that ‘x’ does in normal mathematics.

So a closed loop is exactly what you think. Take a rubber band and twist it up and drop it on the table. That’s a closed loop. Suppose you want to integrate a function, ‘f(z)’. If you can always take its derivative on this loop and on the interior of that loop, then its contour integral is … zero. No matter what the function is. As long as it’s “analytic”, as the terminology has it. Yeah, we were all stunned into silence too. (Granted, mathematics classes are usually quiet, since it’s hard to get a good discussion going. Plus many of us were in post-dinner digestive lulls.)

Integrating regular old functions of real-valued numbers is this tedious process. There’s sooooo many rules and possibilities and special cases to consider. There’s sooooo many tricks that get you the integrals of some functions. And then here, with complex-valued integrals for analytic functions, you know the answer before you even look at the function.

As you might imagine, since this is only page 113 of a 341-page book there’s more to it. Most functions that anyone cares about aren’t analytic. At least they’re not analytic everywhere inside regions that might be interesting. There’s usually some points where an interesting function ‘f(z)’ is undefined. We call these “singularities”. Yes, like starships are always running into. Only we rarely get propelled into other universes or other times or turned into ghosts or stuff like that.

So much of the rest of the course turns into ways to avoid singularities. Sometimes you can spackle them over. This is when the function happens not to be defined somewhere, but you can see what it ought to be. Sometimes you have to do something more. This turns into a search for “removable” singularities. And this does something so brilliant it looks illicit. You modify your closed loop, so that it comes up very close, as close as possible, to the singularity, but studiously avoids it. Follow this game of I’m-not-touching-you right and you can turn your integral into two parts. One is the part that’s equal to zero. The other is the part that’s a constant times whatever the function is at the singularity you’re removing. And that ought to be easy to find the value for. (Being able to find a function’s value doesn’t mean you can find its derivative.)

Those tricks were hard to master. Not because they were hard. Because they were easy, in a context where we expected hard. But after that we got into how to move singularities. That is, how to do a change of variables that moved the singularities to where they’re more convenient for some reason. How could this be more convenient? Because of chapter five, “Series”. In regular old calculus we learn how to approximate well-behaved functions with polynomials. In complex-variable calculus, we learn the same thing all over again. They’re polynomials of complex-valued variables, but it’s the same sort of thing. And not just polynomials, but things that look like polynomials except they’re powers of \frac{1}{z} instead. These open up new ways to approximate functions, and to remove singularities from functions.

And then we get into transformations. These are about turning a problem that’s hard into one that’s easy. Or at least different. They’re a change of variable, yes. But they also change what exactly the function is. This reshuffles the problem. Makes for a change in singularities. Could make ones that are easier to work with.

One of the useful, and so common, transforms is called the Laplace-Stieltjes Transform. (“Laplace” is said like you might guess. “Stieltjes” is said, or at least we were taught to say it, like “Stilton cheese” without the “ton”.) And it tends to create functions that look like a series, the sum of a bunch of terms. Infinitely many terms. Each of those terms looks like a number times another number raised to some constant times ‘z’. As the course came to its conclusion, we were all prepared to think about these infinite series. Where singularities might be. Which of them might be removable.

These functions, these results of the Laplace-Stieltjes Transform, we collectively call ‘zeta functions’. There are infinitely many of them. Some of them are relatively tame. Some of them are exotic. One of them is world-famous. Professor Walsh — I don’t mean to name-drop, but I discovered the syllabus for the course tucked in the back of my textbook and I’m delighted to rediscover it — talked about it.

That world-famous one is, of course, the Riemann Zeta function. Yes, that same Riemann who keeps turning up, over and over again. It looks simple enough. Almost tame. Take the counting numbers, 1, 2, 3, and so on. Take your ‘z’. Raise each of the counting numbers to that ‘z’. Take the reciprocals of all those numbers. Add them up. What do you get?
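In symbols, and granting that the sum only settles down when the real part of ‘z’ is bigger than 1:

\zeta(z) = \sum_{n = 1}^{\infty} \frac{1}{n^z} = 1 + \frac{1}{2^z} + \frac{1}{3^z} + \frac{1}{4^z} + \cdots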

A mass of fascinating results, for one. Functions you wouldn’t expect are concealed in there. There’s strips where the real part is zero. There’s strips where the imaginary part is zero. There’s points where both the real and imaginary parts are zero. We know infinitely many of them. If ‘z’ is -2, for example, the function is zero. (Or rather, the analytic continuation of the sum is; the series itself doesn’t converge there.) Also if ‘z’ is -4. -6. -8. And so on. These are easy to show, and so are dubbed ‘trivial’ zeroes. To say some are ‘trivial’ is to say that there are others that are not trivial. Where are they?

Professor Walsh explained. We know of many of them. The nontrivial zeroes we know of all share something in common. They have a real part that’s equal to 1/2. There’s a zero that’s at about the number \frac{1}{2} - \imath 14.13 . Also at \frac{1}{2} + \imath 14.13 . There’s one at about \frac{1}{2} - \imath 21.02 . Also about \frac{1}{2} + \imath 21.02 . (There’s a symmetry, you maybe guessed.) Every nontrivial zero we’ve found has the same real part, one-half. But we don’t know that they all do. Nobody does. It is the Riemann Hypothesis, the great unsolved problem of mathematics. Much more important than that Fermat’s Last Theorem, which back then was still merely a conjecture.
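If you’d like to see these numbers for yourself, the mpmath Python library computes the analytically continued function. A minimal check, using the zero locations quoted above:

```python
import mpmath

print(mpmath.zeta(-2))                           # a trivial zero: 0.0
print(mpmath.zeta(mpmath.mpc(0.5, 14.134725)))   # nearly zero, as promised
print(mpmath.zetazero(1))    # locates the first nontrivial zero precisely
```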

What a prospect! What a promise! What a way to set us up for the final exam in a couple of weeks.

I had an inspiration, a kind of scheme of showing that a nontrivial zero couldn’t be within a given circular contour. Make the size of this circle grow. Move its center farther away from the z-coordinate \frac{1}{2} + \imath 0 to match. Show there’s still no nontrivial zeroes inside. And therefore, logically, since I would have shown nontrivial zeroes couldn’t be anywhere but on this special line, and we know nontrivial zeroes exist … I leapt enthusiastically into this project. A little less enthusiastically the next day. Less so the day after. And on. After maybe a week I went a day without working on it. But came back, now and then, prodding at my brilliant would-be proof.

The Riemann Zeta function was not on the final exam, which I’ve discovered was also tucked into the back of my textbook. It asked more things like finding all the singular points and classifying what kinds of singularities they were for functions like e^{-\frac{1}{z}} instead. If the syllabus is accurate, we got as far as page 218. And I’m surprised to see the professor put his e-mail address on the syllabus. It was merely “bwalsh@math”, but understand, the Internet was a smaller place back then.

I finished the course with an A-, but without answering any of the great unsolved problems of mathematics.

From my Third A-to-Z: Zermelo-Fraenkel Axioms


The close of my End 2016 A-to-Z let me show off one of my favorite modes, that of amateur historian of mathematics who doesn’t check his primary references enough. So far as I know I don’t have any serious errors here, but then, how would I know? … But keep in mind that the full story is more complicated and more ambiguous than presented. (This is true of all histories.) That I could fit some personal history in was also a delight.

I don’t know why Thoralf Skolem’s name does not attach to the Zermelo-Fraenkel Axioms. Mathematical things are named with a shocking degree of arbitrariness. Skolem did well enough for himself.


gaurish gave me a choice for the Z-term to finish off the End 2016 A To Z. I appreciate it. I’m picking the more abstract thing because I’m not sure that I can explain zero briefly. The foundations of mathematics are a lot easier.

Zermelo-Fraenkel Axioms

I remember the look on my father’s face when I asked if he’d tell me what he knew about sets. He misheard what I was asking about. When we had that straightened out my father admitted that he didn’t know anything particular. I thanked him and went off disappointed. In hindsight, I kind of understand why everyone treated me like that in middle school.

My father’s always quick to dismiss how much mathematics he knows, or could understand. It’s a common habit. But in this case he was probably right. I knew a bit about set theory as a kid because I came to mathematics late in the “New Math” wave. Sets were seen as fundamental to why mathematics worked without being so exotic that kids couldn’t understand them. Perhaps so; both my love and I delighted in what we got of set theory as kids. But if you grew up before that stuff was popular you probably had a vague, intuitive, and imprecise idea of what sets were. Mathematicians had only a vague, intuitive, and imprecise idea of what sets were through to the late 19th century.

And then came what mathematics majors hear of as the Crisis of Foundations. (Or a similar name, like Foundational Crisis. I suspect there are dialect differences here.) It reflected mathematics taking seriously one of its ideals: that everything in it could be deduced from clearly stated axioms and definitions using logically rigorous arguments. As often happens, taking one’s ideals seriously produces great turmoil and strife.

Before about 1900 we could get away with saying that a set was a bunch of things which all satisfied some description. That’s how I would describe it to a new acquaintance if I didn’t want to be treated like I was in middle school. The definition is fine if we don’t look at it too hard. “The set of all roots of this polynomial”. “The set of all rectangles with area 2”. “The set of all animals with four-fingered front paws”. “The set of all houses in Central New Jersey that are yellow”. That’s all fine.

And then if we try to be logically rigorous we get problems. We always did, though. They’re embodied by ancient jokes like the person from Crete who declared that all Cretans always lie; is the statement true? Or the slightly less ancient joke about the barber who shaves only the men who do not shave themselves; does he shave himself? If not jokes these should at least be puzzles faced in fairy-tale quests. Logicians dressed this up some. Bertrand Russell gave us the quite respectable “The set consisting of all sets which are not members of themselves”, and asked us to stare hard into that set. To this we have only one logical response, which is to shout, “Look at that big, distracting thing!” and run away. This satisfies the problem only for a while.

The while ended in — well, that took a while too. But between 1908 and the early 1920s Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem paused from arguing whose name would also be the best indie rock band name long enough to put set theory right. Their structure is known as Zermelo-Fraenkel Set Theory, or ZF. It gives us a reliable base for set theory that avoids any contradictions or catastrophic pitfalls. Or avoids them, so far as we have found in a century of work.

It’s built on a set of axioms, of course. Most of them are uncontroversial, things like declaring two sets are equivalent if they have the same elements. Declaring that the union of sets is itself a set. Obvious, sure, but it’s the obvious things that we have to make into axioms. Maybe you could start an argument about whether we should just assume there exists some infinitely large set. But if we’re aware sets probably have something to teach us about numbers, and that numbers can get infinitely large, then it seems fair to suppose that there must be some infinitely large set. The axioms that aren’t simple obvious things like that are too useful to do without. They assert things like: no set is an element of itself. Or that every set has a “power set”, a new set comprising all the subsets of the original set. Good stuff to know.

There is one axiom that’s controversial. Not controversial the way Euclid’s Parallel Postulate was. That’s the ugly one about lines crossing another line meeting on the same side they make angles smaller than something something or other. That axiom was controversial because it read so weird, so needlessly complicated. (It isn’t; it’s exactly as complicated as it must be. Or for a more instructive view, it’s as simple as it could be and still be useful.) The controversial axiom of Zermelo-Fraenkel Set Theory is known as the Axiom of Choice. It says if we have a collection of mutually disjoint sets, each with at least one thing in them, then it’s possible to pick exactly one item from each of the sets.

It’s impossible to dispute this is what we have axioms for. It’s about something that feels like it should be obvious: we can always pick something from a set. How could this not be true?

If it is true, though, we get some unsavory conclusions. For example, it becomes possible to take a ball the size of an orange and slice it up. We slice using mathematical blades. They’re not halted by something as petty as the desire not to slice atoms down the middle. We can reassemble the pieces. Into two balls. And worse, it doesn’t require we do something like cut the orange into infinitely many pieces. We expect crazy things to happen when we let infinities get involved. No, though, we can do this cut-and-duplicate thing by cutting the orange into five pieces. When you hear that it’s hard to know whether to point to the big, distracting thing and run away. If we dump the Axiom of Choice we don’t have that problem. But can we do anything useful without the ability to make a choice like that?

And we’ve learned that we can. If we want to use the Zermelo-Fraenkel Set Theory with the Axiom of Choice we say we’re working in “ZFC”, Zermelo-Fraenkel-with-Choice. We don’t have to. If we don’t want to make any assumption about choices we say we’re working in “ZF”. Which to use depends on what one wants to do.

Either way Zermelo and Fraenkel and Skolem established set theory on the foundation we use to this day. We’re not required to use them, no; there’s a construction called von Neumann-Bernays-Gödel Set Theory that’s supposed to be more elegant. They didn’t mention it in my logic classes that I remember, though.

And still there’s important stuff we would like to know which even ZFC can’t answer. The most famous of these is the continuum hypothesis. Everyone knows — excuse me. That’s wrong. Everyone who would be reading a pop mathematics blog knows there are different-sized infinitely-large sets. And knows that the set of integers is smaller than the set of real numbers. The question is: is there a set bigger than the integers yet smaller than the real numbers? The Continuum Hypothesis says there is not.

Zermelo-Fraenkel Set Theory, even though it’s all about the properties of sets, can’t tell us if the Continuum Hypothesis is true. But that’s all right; it can’t tell us if it’s false, either. Whether the Continuum Hypothesis is true or false stands independent of the rest of the theory. We can assume whichever state is more useful for our work.

Back to the ideals of mathematics. One question that produced the Crisis of Foundations was consistency. How do we know our axioms don’t contain a contradiction? It’s hard to say. Typically a set of axioms we can prove consistent is also a set too boring to do anything useful in. Zermelo-Fraenkel Set Theory, with or without the Axiom of Choice, has a lot of interesting results. Do we know the axioms are consistent?

No, not yet. We know some of the axioms are mutually consistent, at least. And we have some results which, if true, would prove the axioms to be consistent. We don’t know if they’re true. Mathematicians are generally confident that these axioms are consistent. Mostly on the grounds that if there were a problem something would have turned up by now. It’s withstood all the obvious faults. But the universe is vaster than we imagine. We could be wrong.

It’s hard to live up to our ideals. After a generation of valiant struggling we settle into hoping we’re doing good enough. And waiting for some brilliant mind that can get us a bit closer to what we ought to be.

From my Second A-to-Z: Z-score


When I first published this I mentioned not knowing why ‘z’ got picked as a variable name. Any letter besides ‘x’ would make sense. As happens when I toss this sort of question out, I haven’t learned anything about why ‘z’ and not, oh, ‘y’ or ‘t’ or even ‘d’. My best guess is that we don’t want to confuse references to the original data with references to the transformed. And while you can write a ‘z’ so badly it looks like an ‘x’, it’s much easier to write a ‘y’ that looks like an ‘x’. I don’t know whether the Preliminary SAT is still a thing.


And we come to the last of the Leap Day 2016 Mathematics A To Z series! Z is a richer letter than x or y, but it’s still not so rich as you might expect. This is why I’m using a term that everybody figured I’d use the last time around, when I went with z-transforms instead.

Z-Score

You get an exam back. You get an 83. Did you do well?

Hard to say. It depends on so much. If you expected to barely pass and maybe get as high as a 70, then you’ve done well. If you took the Preliminary SAT, with a composite score that ranges from 60 to 240, an 83 is catastrophic. If the instructor gave an easy test, you maybe scored right in the middle of the pack. If the instructor sees tests as a way to weed out the undeserving, you maybe had the best score in the class. It’s impossible to say whether you did well without context.

The z-score is a way to provide that context. It draws that context by comparing a single score to all the other values. And underlying that comparison is the assumption that whatever it is we’re measuring fits a pattern. Usually it does. The pattern we suppose stuff we measure will fit is the Normal Distribution. Sometimes it’s called the Standard Distribution. Sometimes it’s called the Standard Normal Distribution, so that you know we mean business. Sometimes it’s called the Gaussian Distribution. I wouldn’t rule out someone writing the Gaussian Normal Distribution. It’s also called the bell curve distribution. As the names suggest by throwing around “normal” and “standard” so much, it shows up everywhere.

A normal distribution means that whatever it is we’re measuring follows some rules. One is that there’s a well-defined arithmetic mean of all the possible results. And that arithmetic mean is the most common value to turn up. That’s called the mode. This arithmetic mean, and mode, is also the median value. There’s as many data points less than it as there are greater than it. Most of the data values are pretty close to the mean/mode/median value. There are still some as you get farther from this mean. But the number of data values far away from it is pretty tiny. You can, in principle, get a value that’s way far away from the mean, but it’s unlikely.

We call this standard because it might as well be. Measure anything that varies at all. Draw a chart with the horizontal axis all the values you could measure. The vertical axis is how many times each of those values comes up. It’ll be a standard distribution uncannily often. The standard distribution appears when the thing we measure satisfies some quite common conditions. Almost everything satisfies them, or nearly satisfies them. So we see bell curves so often when we plot how frequently data points come up. It’s easy to forget that not everything is a bell curve.

The normal distribution has a mean, and median, and mode, of 0. It’s tidy that way. And it has a standard deviation of exactly 1. The standard deviation is a way of measuring how spread out the bell curve is. About 95 percent of all observed results are less than two standard deviations away from the mean. About 99.7 percent of all observed results are less than three standard deviations away. And all but about two results in a billion are less than six standard deviations away. That last might sound familiar to those who’ve worked in manufacturing. At least it does once you know that the Greek letter sigma is the common shorthand for a standard deviation. “Six Sigma” is a quality-control approach. It’s meant to make sure one understands all the factors that influence a product and controls them. This is so the product falls outside the design specifications only a few times in a million. (The famous Six Sigma figure of 3.4 defects per million is gentler than the two-in-a-billion arithmetic because the methodology builds in an allowance for the process mean drifting by as much as a sigma and a half over time.)
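Python’s standard library can confirm those percentages; a quick sketch:

```python
from statistics import NormalDist

standard = NormalDist(mu=0, sigma=1)
for k in (1, 2, 3, 6):
    inside = standard.cdf(k) - standard.cdf(-k)   # share within k sigma
    print(k, f"{inside:.9%}")
# 1: about 68%. 2: about 95%. 3: about 99.7%.
# 6: about 99.9999998%, for a distribution that never drifts.
```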

This is the normal distribution. It has a standard deviation of 1 and a mean of 0, by definition. And then people using statistics go and muddle the definition. It is always so, with the stuff people actually use. Forgive them. It doesn’t really change the shape of the curve if we scale it, so that the standard deviation is, say, two, or ten, or π, or any positive number. It just changes where the tick marks are on the x-axis of our plot. And it doesn’t really change the shape of the curve if we translate it, adding (or subtracting) some number to it. That makes the mean, oh, 80. Or -15. Or eπ. Or some other number. That just changes what value we write underneath the tick marks on the plot’s x-axis. We can find a scaling and translation of the normal distribution that fits whatever data we’re observing.

When we find the z-score for a particular data point we’re undoing this translation and scaling. We figure out what number on the standard distribution maps onto the original data set’s value. About two-thirds of all data points are going to have z-scores between -1 and 1. About nineteen out of twenty will have z-scores between -2 and 2. About 99 out of 100 will have z-scores between -3 and 3. If we don’t see this, and we have a lot of data points, then that suggests our data isn’t normally distributed.

I don’t know why the letter ‘z’ is used for this instead of, say, ‘y’ or ‘w’ or something else. ‘x’ is out, I imagine, because we use that for the original data. And ‘y’ is a natural pick for a second measured variable. ‘z’, I expect, is just far enough from ‘x’ it isn’t needed for some more urgent duty, while being close enough to ‘x’ to suggest it’s some measured thing.

The z-score gives us a way to compare how interesting or unusual scores are. If the exam on which we got an 83 has a mean of, say, 74, and a standard deviation of 5, then we can say this 83 is a pretty solid score. If it has a mean of 78 and a standard deviation of 10, then the score is better-than-average but not exceptional. If the exam has a mean of 70 and a standard deviation of 4, then the score is fantastic. We get to meaningfully compare scores from the measurements of different things. And so it’s one of the tools with which statisticians build their work.
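As a sketch in code, using those exam numbers:

```python
def z_score(value, mean, std_dev):
    # Undo the translation (subtract the mean), then undo the
    # scaling (divide by the standard deviation).
    return (value - mean) / std_dev

print(z_score(83, mean=74, std_dev=5))    # 1.8: a pretty solid score
print(z_score(83, mean=78, std_dev=10))   # 0.5: better than average
print(z_score(83, mean=70, std_dev=4))    # 3.25: fantastic
```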

From my First A-to-Z: Z-transform


Back in the day I taught in a Computational Science department, which threw me out to exciting and new-to-me subjects more than once. One quite fun semester I was learning, and teaching, signal processing. This set me up for the triumphant conclusion of my first A-to-Z.

One of the things you can see in my style is mentioning the connotations implied by whether one uses x or z as a variable. Any letter will do, for the use it’s put to. But to use the name ‘z’ suggests an openness to something that ‘x’ doesn’t.

There’s a mention here about stability in algorithms, and the note that we can process data in ways that are stable or are unstable. I don’t mention why one would want or not want stability. Wanting stability hardly seems to need explaining; isn’t that the good option? And, often, yes, we want stable systems because they correct and wipe away error. But there are reasons we might want instability, or at least less stability. Too stable a system will obscure weak trends, or the starts of trends. Your weight flutters day by day in ways that don’t mean much, which is why it’s better to consider a seven-day average. If you took instead a 700-day running average, these meaningless fluctuations would be invisible. But you also would take a year or more to notice whether you were losing or gaining weight. That’s one of the things stability costs.


z-transform.

The z-transform comes to us from signal processing. The signal we take to be a sequence of numbers, all representing something sampled at uniformly spaced times. The temperature at noon. The power being used, second-by-second. The number of customers in the store, once a month. Anything. The sequence of numbers we take to stretch back into the infinitely great past, and to stretch forward into the infinitely distant future. If it doesn’t, then we pad the sequence with zeroes, or some other safe number that we know means “nothing”. (That’s another classic mathematician’s trick.)

It’s convenient to have a name for this sequence. “a” is a good one. The different sampled values are denoted by an index. a0 represents whatever value we have at the “start” of the sample. That might represent the present. That might represent where sampling began. That might represent just some convenient reference point. It’s the equivalent of mileage marker zero; we have to have something be the start.

a1, a2, a3, and so on are the first, second, third, and so on samples after the reference start. a-1, a-2, a-3, and so on are the first, second, third, and so on samples from before the reference start. That might be the last couple of values before the present.

So for example, suppose the temperatures the last several days were 77, 81, 84, 82, 78. Then we would probably represent this as a-4 = 77, a-3 = 81, a-2 = 84, a-1 = 82, a0 = 78. We’ll hope this is Fahrenheit or that we are remotely sensing a temperature.

The z-transform of a sequence of numbers is something that looks a lot like a polynomial, based on these numbers. For this five-day temperature sequence the z-transform would be the polynomial 77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0 . (z1 is the same as z. z0 is the same as the number “1”. I wrote it this way to make the pattern more clear.)

I would not be surprised if you protested that this doesn’t merely look like a polynomial but actually is one. You’re right, of course, for this set, where all our samples are from negative (and zero) indices. If we had positive indices then we’d lose the right to call the transform a polynomial. Suppose we trust our weather forecaster completely, and add in a1 = 83 and a2 = 76. Then the z-transform for this set of data would be 77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0 + 83 \left(\frac{1}{z}\right)^1 + 76 \left(\frac{1}{z}\right)^2 . You’d probably agree that’s not a polynomial, although it looks a lot like one.

The use of z for these polynomials is basically arbitrary. The main reason to use z instead of x is that we can learn interesting things if we imagine letting z be a complex-valued number. And z carries connotations of “a possibly complex-valued number”, especially if it’s used in ways that suggest we aren’t looking at coordinates in space. It’s not that there’s anything in the symbol x that refuses the possibility of it being complex-valued. It’s just that z appears so often in the study of complex-valued numbers that it reminds a mathematician to think of them.

A sound question you might have is: why do this? And there’s not much advantage in going from a list of temperatures “77, 81, 84, 82, 78, 83, 76” over to a polynomial-like expression 77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0 + 83 \left(\frac{1}{z}\right)^1 + 76 \left(\frac{1}{z}\right)^2 .
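The construction itself is nothing deeper than bookkeeping. A sketch — my own illustration, not any signal-processing library’s interface:

```python
def z_transform(samples):
    """samples: dict mapping sample index n to value a_n. Index n
    contributes a_n * z^(-n), so past (negative) indices give positive
    powers of z. Returns the transform as a function of z."""
    return lambda z: sum(a * z**(-n) for n, a in samples.items())

temps = {-4: 77, -3: 81, -2: 84, -1: 82, 0: 78, 1: 83, 2: 76}
X = z_transform(temps)
print(X(2.0))   # the polynomial-like expression above, evaluated at z = 2
```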

Where this starts to get useful is when we have an infinitely long sequence of numbers to work with. Yes, it does too. It will often turn out that an interesting sequence transforms into a polynomial that itself is equivalent to some easy-to-work-with function. My little temperature example there won’t do it, no. But consider the sequence that’s zero for all negative indices, and 1 for the zero index and all positive indices. This gives us the polynomial-like structure \cdots + 0z^2 + 0z^1 + 1 + 1\left(\frac{1}{z}\right)^1 + 1\left(\frac{1}{z}\right)^2 + 1\left(\frac{1}{z}\right)^3 + 1\left(\frac{1}{z}\right)^4 + \cdots . And that turns out to be the same as 1 \div \left(1 - \left(\frac{1}{z}\right)\right) . That’s much shorter to write down, at least.
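You can check the truncated sum against the closed form numerically, at least where the sum settles down (which needs the size of z to be bigger than 1):

```python
z = 3.0
partial_sum = sum((1 / z)**n for n in range(200))   # truncate the series
closed_form = 1 / (1 - 1 / z)
print(partial_sum, closed_form)   # both 1.5
```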

Probably you’ll grant that, but still wonder what the point of doing that is. Remember that we started by thinking of signal processing. A processed signal is a matter of transforming your initial signal. By this we mean multiplying your original signal by something, or adding something to it. For example, suppose we want a five-day running average temperature. This we can find by taking one-fifth today’s temperature, a0, and adding to that one-fifth of yesterday’s temperature, a-1, and one-fifth of the day before’s temperature a-2, and one-fifth a-3, and one-fifth a-4.
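That smoothing filter, written out for the little temperature record above:

```python
def five_day_average(samples):
    # Each output is the mean of a day and the four days before it.
    return [sum(samples[i - 4:i + 1]) / 5 for i in range(4, len(samples))]

temps = [77, 81, 84, 82, 78, 83, 76]
print(five_day_average(temps))   # [80.4, 81.6, 80.6]
```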

The effect of processing a signal is equivalent to manipulating its z-transform. By studying properties of the z-transform, such as where its values are zero or where they are imaginary or where they are undefined, we learn things about what the processing is like. We can tell whether the processing is stable — does it keep a small error in the original signal small, or does it magnify it? Does it serve to amplify parts of the signal and not others? Does it dampen unwanted parts of the signal while keeping the main intact?

We can understand how data will be changed by understanding the z-transform of the way we manipulate it. That z-transform turns a signal-processing idea into a complex-valued function. And we have a lot of tools for studying complex-valued functions. So we become able to say a lot about the processing. And that is what the z-transform gets us.

A Moment Which Turns Out to Be Universal


I was reading a bit farther in Charles Coulson Gillispie’s Pierre-Simon Laplace, 1749 – 1827, A Life In Exact Science and reached this paragraph, too good not to share:

Wishing to study [ Mécanique céleste ] in advance, [ Jean-Baptiste ] Biot offered to read proof. When he returned the sheets, he would often ask Laplace to explain some of the many steps that had been skipped over with the famous phrase, “it is easy to see”. Sometimes, Biot said, Laplace himself would not remember how he had worked something out and would have difficulty reconstructing it.

So, it’s not just you and your instructors.

(Gillispie wrote the book along with Robert Fox and Ivor Grattan-Guinness.)

How All Of 2021 Treated My Mathematics Blog


Oh, you know, how did 2021 treat anybody? I always do one of these surveys for the end of each month. It’s only fair to do one for the end of the year also.

2021 was my tenth full year blogging around here. I might have made more of that if the actual anniversary in late September hadn’t coincided with a lot of personal hardships. 2021 was a quiet year around these parts with only 94 things posted. That’s the fewest of any full year. (I posted only 41 things in 2011, but I only started posting at all in late September of that year.) That seems not to have done my readership any harm. There were 28,832 pages viewed in 2021, up from 24,474 in 2020 and a fair bit above the 24,662 given in my previously best-viewed year of 2019. Eleven data points (the partial year 2011, and the full years 2012 through 2021) aren’t many, so there’s no real pattern to draw here. But it does seem like I have a year of sharp increases and then a year of slight declines in page views. I suppose we’ll check in in 2023 and see if that pattern holds.

Bar chart of annual views and unique visitors from 2012 to the present. After nearly level view counts in 2019 and 2020 there was a good-size rise for 2021.
The number of unique visitors for 2012 is so tiny because they started recording that (so far as they let us know) in, like, late December so that figure is meaningless. The rest seem all right, though.

One thing not declining? The number of unique visitors. WordPress recorded 20,339 unique visitors in 2021, a comfortable bit above 2020’s 16,870 and 2019’s 16,718. So far I haven’t seen a year-over-year decline in unique visitors. That’s gratifying.

Less gratifying: the number of likes continues its decline. It hasn’t increased, around here, since 2015 when a seemingly impossible 3,273 likes were given by readers. In 2021 there were only 481 likes, the fewest since 2013. The dropping-off of likes has so resembled a Poisson distribution that I’m tempted to see whether it actually fits one.

Bar chart of the annual likes from 2013 to the present. It rose sharply from 2013 to 2015 and has declined in a not-quite-exponential pattern since then.
I know, my first thought was that it looked like an overdamped system receiving a shock, but I don’t think the decline is consistent enough to support that.

The number of comments dropped a slight bit. There were 188 given around here in 2021, but that’s only ten fewer than were given in 2020. It’s seven more than were given in 2019, so if there’s any pattern there I don’t know it.

WordPress lists 483 posts around here as having gotten four or more page views in the year. It won’t tell me everything that got even a single view, though. I’m not willing to do the work of stitching together the monthly page view data to learn everything that was of interest, however passing. I’ll settle for knowing what was most popular. And what were my most popular posts of the year mercifully ended? These posts from 2021 got more views than all the others:

Mercator-style map of the world, with the United States in dark red and most of the New World, western Europe, South and Pacific Rim Asia, Australia, and New Zealand in a more uniform pink. The Philippines and India are in an intermediately dark red.
Hey look, it’s a naturally occurring International Telecommunication Union zonal map! And at this point may I point out that besides being a lower-tier pop-mathematics writer I am also a lower-tier humor blogger?

There were 143 countries, or country-like entities, sending me any page views in 2021. I don’t know how that compares to earlier years. But here’s the roster of where page views came from:

Country Readers
United States 13,723
Philippines 3,994
India 2,507
Canada 1,393
United Kingdom 865
Australia 659
Germany 442
Brazil 347
South Africa 296
European Union 273
Sweden 230
Singapore 210
Italy 204
Austria 178
France 143
Finland 141
Malaysia 135
South Korea 135
Hong Kong SAR China 132
Ireland 131
Netherlands 117
Turkey 117
Spain 107
Pakistan 105
Thailand 102
Mexico 101
United Arab Emirates 100
Indonesia 97
Switzerland 95
Norway 87
New Zealand 86
Belgium 76
Nigeria 76
Russia 74
Japan 64
Taiwan 62
Bangladesh 58
Poland 55
Greece 54
Denmark 52
Colombia 51
Israel 49
Ghana 46
Portugal 44
Czech Republic 40
Vietnam 38
Saudi Arabia 33
Argentina 30
Lebanon 30
Ecuador 28
Nepal 28
Egypt 25
Kuwait 23
Serbia 22
Chile 21
Croatia 21
Jamaica 20
Peru 20
Tanzania 20
Costa Rica 19
Romania 17
Trinidad & Tobago 17
Sri Lanka 16
Ukraine 15
Hungary 13
Jordan 13
Bulgaria 12
China 12
Albania 11
Bahrain 11
Morocco 11
Estonia 10
Qatar 10
Slovakia 10
Cyprus 9
Kenya 9
Zimbabwe 9
Algeria 8
Oman 8
Belarus 7
Georgia 7
Honduras 7
Lithuania 7
Puerto Rico 7
Venezuela 7
Bosnia & Herzegovina 6
Ethiopia 6
Iraq 6
Belize 5
Bhutan 5
Moldova 5
Uruguay 5
Dominican Republic 4
Guam 4
Kazakhstan 4
Macedonia 4
Mauritius 4
Zambia 4
Åland Islands 3
Antigua & Barbuda 3
Bahamas 3
Cambodia 3
El Salvador 3
Gambia 3
Guatemala 3
Slovenia 3
Suriname 3
American Samoa 2
Azerbaijan 2
Bolivia 2
Cameroon 2
Guernsey 2
Malta 2
Papua New Guinea 2
Réunion 2
Rwanda 2
Sudan 2
Uganda 2
Afghanistan 1
Andorra 1
Armenia 1
Fiji 1
Grenada 1
Iceland 1
Isle of Man 1
Latvia 1
Liberia 1
Liechtenstein 1
Luxembourg 1
Maldives 1
Marshall Islands 1
Mongolia 1
Myanmar (Burma) 1
Namibia 1
Palestinian Territories 1
Panama 1
Paraguay 1
Senegal 1
St. Lucia 1
St. Vincent & Grenadines 1
Togo 1
Tunisia 1
Vatican City 1

I don’t know that I’ve gotten a reader from Vatican City before. I hope it’s not about the essay figuring what dates are most and least likely for Easter. I’d expect them to know that already.

My plan is to spend a bit more time republishing posts from old A-to-Z’s. And then I hope to finish off the Little 2021 Mathematics A-to-Z, late and battered but still carrying on. I intend to post something at least once a week after that, although I don’t have a clear idea what that will be. Perhaps I’ll finally work out the algorithm for Compute!’s New Automatic Proofreader. Perhaps I’ll fill in with A-to-Z style essays for topics I had skipped before. Or I might get back to reading the comics for their mathematics topics. I’m open to suggestions.

Some Progress on the Infinitude of Monkeys


I have been reading Pierre-Simon Laplace, 1749 – 1827, A Life In Exact Science, by Charles Coulson Gillispie with Robert Fox and Ivor Grattan-Guinness. It’s less of a biography than I expected and more a discussion of Laplace’s considerable body of work. Part of Laplace’s work was in giving probability a logically coherent, rigorous meaning. Laplace discusses the gambler’s fallacy and the tendency to assign causes to random events. That, for example, if we came across letters from a printer’s font reading out ‘INFINITESIMAL’ we would think that deliberate. We wouldn’t think that for a string of letters in no recognized language. And that brings up this neat quote from Gillispie:

The example may in all probability be adapted from the chapter in the Port-Royal La Logique (1662) on judgement of future events, where Arnauld points out that it would be stupid to bet twenty sous against ten thousand livres that a child playing with printer’s type would arrange the letters to compose the first twenty lines of Virgil’s Aeneid.

The reference here is to a book by Antoine Arnauld and Pierre Nicole that I haven’t read or heard of before. But it makes a neat forerunner to the Infinite Monkey Theorem. That’s the study of what probability means when put to infinitely great or long processes. Émile Borel’s use of monkeys at a typewriter echoes this idea of children playing beyond their understanding. I don’t know whether Borel knew of Arnauld and Nicole’s example. But I did not want my readers to miss a neat bit of infinite-monkey trivia. Or to miss today’s Bizarro, offering yet another comic on the subject.

A printer reports to William Shakespeare: 'There's no way I can deliver 37 plays and 150 sonnets. I've got no monkeys, and typewriters haven't been invented yet.'
Piraro and Wayno’s Bizarro for the 18th of January, 2022. I’m not promising a return to regular Reading the Comics posts. But essays that feature Bizarro, past and future, are at this link.

From my Seventh A-to-Z: Big-O and Little-O Notation


I toss off a mention in this essay, about its book publication. By the time it appeared I was thinking whether I could assemble these A-to-Z’s, or a set of them, into a book. I haven’t had the energy to put that together but it still seems viable.


Mr Wu, author of the Singapore Maths Tuition blog, asked me to explain a technical term today. I thought that would be a fun, quick essay. I don’t learn very fast, do I?

A note on style. I make reference here to “Big-O” and “Little-O”, capitalizing and hyphenating them. This is to give them visual presence as a name. In casual discussion they’re just read, or said, as the two words or word-and-a-letter. Often the Big- or Little- gets dropped and we just talk about O. An O, without further context, in my experience means Big-O.

The part of me that wants smooth consistency in prose urges me to write “Little-o”, as the thing described is represented with a lowercase ‘o’. But Little-o sounds like a midway game or an Eyerly Aircraft Company amusement park ride. And I never achieve consistency in my prose anyway. Maybe for the book publication. Until I’m convinced another is better, though, “Little-O” it is.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x (division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Big-O and Little-O Notation.

When I first went to college I had a campus post office box. I knew my box number. I also knew the length of the sluggish line for the combination lock code. The lock was a dial, lettered A through J. Being a young STEM-class idiot I thought, boy, would it actually be quicker to pick the lock than wait for the line? A three-letter combination, of ten options? That’s 1,000 possibilities. If I could try five a minute that’s, at worst, three hours 20 minutes. Combination might be anywhere in that set; I might get lucky. I could expect to spend about 100 minutes picking my lock.

I decided to wait in line instead, and good that I did. I was unaware a lock setting might not be a letter, like ‘A’. It could be the midway point between adjacent letters, like ‘AB’. That meant there were eight times as many combinations as I estimated, and I could expect to spend over ten hours. Even the slow line was faster than that. It transpired that my combination had two of these midway letters.

But that’s a little demonstration of algorithmic complexity. It’s the same arithmetic as cracking passwords by trial and error. Doubling the number of settings on each dial octuples the time it takes to break into the set. Making the combination longer would also work; each extra letter would multiply the cracking time by twenty. So you understand why your password should include “special characters” like punctuation, but most of all should be long.
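The arithmetic, spelled out; the five-guesses-a-minute pace is the same assumption as above:

```python
positions = 20            # ten letters plus the ten midway settings
dials = 3
guesses_per_minute = 5

combinations = positions ** dials                    # 8,000 possibilities
worst_minutes = combinations / guesses_per_minute    # 1,600 minutes
print(worst_minutes / 60)     # about 26.7 hours at worst
print(worst_minutes / 120)    # about 13.3 hours on average: over ten hours
```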

We’re often interested in how long to expect a task to take. Sometimes we’re interested in the typical time it takes. Often we’re interested in the longest it could ever take. If we have a deterministic algorithm, we can say. We can count how many steps it takes. Sometimes this is easy. If we want to add two two-digit numbers together we know: it will be, at most, three single-digit additions plus, maybe, writing down a carry. (To add 98 and 37 is adding 8 + 7 to get 15, to add 9 + 3 to get 12, and to take the carry from the 15, so, 1 + 12 to get 13, so we have 135.) We can get a good quarrel going about what “a single step” is. We can argue whether that carry into the hundreds column is really one more addition. But we can agree that there is some smallest bit of arithmetic work, and proceed from that.

For any algorithm we have something that describes how big a thing we’re working on. It’s often ‘n’. If we need more than one variable to describe how big it is, ‘m’ gets called up next. If we’re estimating how long it takes to work on a number, ‘n’ is the number of digits in the number. If we’re thinking about a square matrix, ‘n’ is the number of rows and columns. If it’s a not-square matrix, then ‘n’ is the number of rows and ‘m’ the number of columns. Or vice-versa; it’s your matrix. If we’re looking for an item in a list, ‘n’ is the number of items in the list. If we’re looking to evaluate a polynomial, ‘n’ is the order of the polynomial.

In normal circumstances we don’t work out how many steps some operation does take. It’s more useful to know that multiplying these two long numbers would take about 900 steps than to know it needs exactly 816. And so this gives us an asymptotic estimate. We get an estimate of how much longer cracking the combination lock will take if there’s more letters to pick from. This allows that some poor soul will get the combination A-B-C.

There are a couple ways to describe how long this will take. The more common is the Big-O. This is just the letter, like you find between N and P. Since that’s easy, many have taken to using a fancy, vaguely cursive O, one that looks like \mathcal{O} . I agree it looks nice. Particularly, though, we write \mathcal{O}(f(n)) , where f is some function. In practice, we’ll see functions like \mathcal{O}(n) or \mathcal{O}(n^2 \log(n)) or \mathcal{O}(n^3) . Usually something simple like that. It can be tricky. There’s a scheme for multiplying large numbers together that’s \mathcal{O}(n \cdot 2^{\sqrt{2 \log(n)}} \cdot \log(n)) . What you will not see is something like \mathcal{O}(\sin (n)) , or \mathcal{O}(n^3 - n^4) or such. This comes to what we mean by the Big-O.

It’ll be convenient for me to have a name for the actual number of steps the algorithm takes. Let me call the function describing that g(n). Then g(n) is \mathcal{O}(f(n)) if once n gets big enough, g(n) is always less than C times f(n). Here C is some constant number. Could be 1. Could be 1,000,000. Could be 0.00001. Doesn’t matter; it’s some positive number.

There’s some neat tricks to play here. For example, the function ‘n’ is \mathcal{O}(n) . It’s also \mathcal{O}(n^2) and \mathcal{O}(n^9) and \mathcal{O}(e^{n}) . The function ‘n^2’ is also \mathcal{O}(n^2) and those later terms, but it is not \mathcal{O}(n) . And you can see why \mathcal{O}(\sin(n)) is right out.

There is also a Little-O notation. It, too, is an upper bound on the function. But it is a stricter bound, setting tighter restrictions on what g(n) is like. You ask how it is the stricter bound gets the minuscule letter. That is a fine question. I think it’s a quirk of history. Both symbols come to us through number theory. Big-O was developed first, published in 1894 by Paul Bachmann. Little-O was published in 1909 by Edmund Landau. Yes, the one with the short Hilbert-like list of number theory problems. In 1914 G H Hardy and John Edensor Littlewood would work on another measure and they used Ω to express it. (If you see the letter used for Big-O and Little-O as the Greek omicron, then you see why a related concept got called omega.)

What makes the Little-O measure different is its sternness. g(n) is o(f(n)) if, for every positive number C, whenever n is large enough g(n) is less than or equal to C times f(n). I know that sounds almost the same. Here’s why it’s not.

If g(n) is \mathcal{O}(f(n)) , then you can go ahead and pick a C and find that, eventually, g(n) \le C f(n) . If g(n) is o(f(n)) , then I, trying to sabotage you, can go ahead and pick a C, trying my best to spoil your bounds. But I will fail. Even if I pick, like a C of one millionth of a billionth of a trillionth, eventually f(n) will be so big that g(n) \le C f(n) . I can’t find a C small enough that f(n) doesn’t eventually outgrow it, and outgrow g(n).

This implies some odd-looking stuff. Like, that the function n is not o(n) . But the function n is at least o(n^2) , and o(n^9) and those other fun variations. Being Little-O compels you to be Big-O. Big-O is not compelled to be Little-O, although it can happen.
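A numerical rule of thumb you can poke at: watch the ratio g(n)/f(n) as n grows. If it stays below some fixed C, then g is Big-O of f; if it sinks toward zero, defeating every C you challenge it with, then g is Little-O of f. A sketch:

```python
for n in (10, 10**3, 10**6):
    print(n, n / n, n / n**2)
# n/n is 1 forever: bounded (so n is O(n)) but never sinking
# toward zero (so n is not o(n)).
# n/n^2 heads to zero, so n is both O(n^2) and o(n^2).
```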

These definitions, for Big-O and Little-O, I’ve laid out from algorithmic complexity. It’s implicitly about functions defined on the counting numbers. But there’s no reason I have to limit the ideas to that. I could define similar ideas for a function g(x), with domain the real numbers, and come up with an idea of being on the order of f(x).

We make some adjustments to this. The important one is that, with algorithmic complexity, we assumed g(n) had to be a positive number. What would it even mean for something to take minus four steps to complete? But a regular old function might be zero or negative or change between negative and positive. So we look at the absolute value of g(x). Is there some value of C so that, when x is big enough, the absolute value of g(x) stays less than C times f(x)? If it does, then g(x) is \mathcal{O}(f(x)) . Is it the case that for every positive number C the absolute value of g(x) is less than C times f(x), once x is big enough? Then g(x) is o(f(x)) .

Fine, but why bother defining this?

A compelling answer is that it gives us a way to describe how different a function is from an approximation to that function. We are always looking for approximations to functions because most functions are hard. We have a small set of functions we like to work with. Polynomials are great numerically. Exponentials and trig functions are great analytically. That’s about all the functions that are easy to work with. Big-O notation particularly lets us estimate how bad an error we make using the approximation.

For example, the Runge-Kutta method numerically approximates solutions to ordinary differential equations. It does this by taking the information we have about the function at some point x to approximate its value at a point x + h. ‘h’ is some number. The difference between the actual answer and the Runge-Kutta approximation is \mathcal{O}(h^4) . (That’s the error accumulated by the end of the whole march described next; any single step does even better, \mathcal{O}(h^5) .) We use this knowledge to make sure our error is tolerable. Also, we don’t usually care what the function is at x + h. It’s just what we can calculate. What we want is the function at some point a fair bit away from x, call it x + L. So we use our approximate knowledge of conditions at x + h to approximate the function at x + 2h. And use x + 2h to tell us about x + 3h, and from that x + 4h and so on, until we get to x + L. We’d like to have as few of these uninteresting intermediate points as we can, so look for as big an h as is safe.
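
Here’s a minimal sketch of that marching business, using the classic fourth-order Runge-Kutta formulas. The test equation y' = y is an arbitrary pick of mine, chosen because we know its exact solution is e^x:

import math

def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h/2, y + h/2 * k1)
    k3 = f(x + h/2, y + h/2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + (h/6) * (k1 + 2*k2 + 2*k3 + k4)

f = lambda x, y: y            # y' = y, whose true solution is e^x
x, y, h, L = 0.0, 1.0, 0.1, 1.0
for _ in range(int(round(L / h))):   # the uninteresting intermediate points
    y = rk4_step(f, x, y, h)
    x += h
print(y, math.e)   # 2.7182797... versus 2.7182818...: an error near 2e-6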

That context may be the more common one. We see it, particularly, in Taylor Series and other polynomial approximations. For example, the sine of a number is approximately:

\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!} + \mathcal{O}(x^{11})

This has consequences. It tells us, for example, that if x is about 0.1, this approximation is probably pretty good. So it is: the sine of 0.1 (radians) is about 0.0998334166468282 and that’s exactly what five terms here gives us. But it also warns that if x is about 10, this approximation may be gibberish. And so it is: the sine of 10.0 is about -0.5440 and the polynomial is about 1448.27.
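
You can watch that happen in a couple lines of Python, trying the five-term polynomial against the real sine at both values:

import math

def sin_taylor(x, terms=5):
    """The first several terms of the sine's Taylor series."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

print(sin_taylor(0.1), math.sin(0.1))     # both about 0.0998334166468282
print(sin_taylor(10.0), math.sin(10.0))   # about 1448.27 versus -0.5440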

The connotation in using Big-O notation here is that we look for small h’s, and for that \mathcal{O}(h^4) or \mathcal{O}(x^{11}) term to stand for a tiny number. It seems odd to use the same notation when the independent variable grows huge, as n does for algorithms, and when it shrinks toward zero, as h does here. The concept carries over, though, and helps us talk efficiently about this different problem.


Today’s and all the other 2020 A-to-Z essays should appear at this link. Both the 2020 and all past A-to-Z essays ought to be at this link.

Thank you for reading.

From my Sixth A-to-Z: Operator


One of the many small benefits of these essays is getting myself clearly grounded on terms that I had accepted without thinking much about. Operator, like functional (mentioned in here), is one of them. I’m sure that when these were first introduced my instructors gave them clear definitions. But when they’re first introduced it’s not clear why these are important, or that we are going to spend the rest of grad school talking about them. So this piece from 2019’s A-to-Z sequence secured my footing on a term I had a fair understanding of. You get some idea of what has to be intended from the context in which the term is used. Also from knowing how terms like this tend to be defined. But having it down to where I could certainly pass a true-false test about “is this an operator”? That was new.


Today’s A To Z term is one I’ve mentioned previously, including in this A to Z sequence. But it was specifically nominated by Goldenoj, whom I know I follow on Twitter. I’m sorry not to be able to give you an account; I haven’t been able to use my @nebusj account for several months now. Well, if I do learn of a Twitter, Mathstodon, or blog account for them I’ll refer you there.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Operator.

An operator is a function. An operator has a domain that’s a space. Its range is also a space. It can be the same space but doesn’t have to be. It is very common for these spaces to be “function spaces”. So common that if you want to talk about an operator that isn’t dealing with function spaces it’s good form to warn your audience. In a typical function space, everything is a real-valued and continuous function. Also everything shares the same domain as everything else in that particular function space.

So here’s what I first wonder: why call this an operator instead of a function? I have hypotheses and an unwillingness to read the literature. One is that maybe mathematicians started saying “operator” a long time ago. Taking the derivative, for example, is an operator. So is taking an indefinite integral. Mathematicians have been doing those for a very long time. Longer than we’ve had the modern idea of a function, which is this rule connecting a domain and a range. So the term might be a fossil.

My other hypothesis is the one I’d bet on, though. This hypothesis is that there is a limit to how many different things we can call “the function” in one sentence before the reader rebels. I felt bad enough with that first paragraph. Imagine parsing something like “the function which the Laplacian function took the function to”. We are less likely to make dumb mistakes if we have different names for things which serve different roles. This is probably why there is another word for a function with domain of a function space and range of real or complex-valued numbers. That is a “functional”. It covers things like the norm for measuring a function’s size. It also covers things like finding the total energy in a physics problem.

I’ve mentioned two operators that anyone who’d read a pop mathematics blog has heard of, the differential and the integral. There are more. There are so many more.

Many of them we can build from the differential and the integral. Many operators that we care to deal with are linear, which is how mathematicians say “good”. And both the differential and the integral operators are linear, which lurks behind many of our favorite rules. Like, allow me to call from the vasty deep functions ‘f’ and ‘g’, and scalars ‘a’ and ‘b’. You know how the derivative of the function af + bg is a times the derivative of f plus b times the derivative of g? That’s the differential operator being all linear on us. Similarly, how the integral of af + bg is a times the integral of f plus b times the integral of g? Something mathematical with the adjective “linear” is giving us at least some solid footing.

I’ve mentioned before that a wonder of functions is that most things you can do with numbers, you can also do with functions. One of those things is the premise that if numbers can be the domain and range of functions, then functions can be the domain and range of functions. We can do more, though.

One of the conceptual leaps in high school algebra is that we start analyzing the things we do with numbers. Like, we don’t just take the number three, square it, multiply that by two and add to that the number three times four and add to that the number 1. We think about what if we take any number, call it x, and think of 2x^2 + 4x + 1 . And what if we make equations based on doing this 2x^2 + 4x + 1 ; what values of x make those equations true? Or tell us something interesting?

Operators represent a similar leap. We can think of functions as things we manipulate, and think of those manipulations as a particular thing to do. For example, let me come up with a differential expression. For some function u(x) work out the value of this:

2\frac{d^2 u(x)}{dx^2} + 4 \frac{d u(x)}{dx} + u(x)

Let me join in the convention of using ‘D’ for the differential operator. Then we can rewrite this expression like so:

2D^2 u + 4D u + u

Suddenly the differential equation looks a lot like a polynomial. Of course it does. Remember that everything in mathematics is polynomials. We get new tools to solve differential equations by rewriting them as operators. That’s nice. It also scratches that itch that I think everyone in Intro to Calculus gets, of wanting to somehow see \frac{d^2}{dx^2} as if it were a square of \frac{d}{dx} . It’s not, and D^2 is not the square of D . It’s composing D with itself. But it looks close enough to squaring to feel comfortable.

Nobody needs to do 2D^2 u + 4D u + u except to learn some stuff about operators. But you might imagine a world where we did this process all the time. If we did, then we’d develop shorthand for it. Maybe a new operator, call it T, and define it that T = 2D^2 + 4D + 1 . You see the grammar of treating functions as if they were real numbers becoming familiar. You maybe even noticed the ‘1’ sitting there, serving as the “identity operator”. You know how you’d write out Tv(x) = 3 if you needed to write it in full.
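
If you want to play with this grammar yourself, here’s a sketch in Python using the sympy library. The test functions are arbitrary picks of mine; the point is that T acts on whatever function you hand it, and does so linearly:

import sympy as sp

x = sp.symbols('x')
D = lambda f: sp.diff(f, x)             # the differential operator D
T = lambda f: 2*D(D(f)) + 4*D(f) + f    # T = 2D^2 + 4D + 1

u = sp.exp(-x) * sp.sin(x)              # an arbitrary test function
print(sp.simplify(T(u)))                # -3*exp(-x)*sin(x)

f, g = sp.sin(x), sp.exp(x)             # and a check that T is linear
print(sp.simplify(T(3*f + 5*g) - (3*T(f) + 5*T(g))))   # 0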

But there are operators that we use all the time. These do get special names, and often shorthand. For example, there’s the gradient operator. This applies to any function with several independent variables. The gradient has a great physical interpretation if the variables represent coordinates of space. If they do, the gradient of a function at a point gives us a vector that describes the direction in which the function increases fastest. And the size of that gradient — a functional on this operator — describes how fast that increase is.
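
As a sketch of the gradient at work, sympy again; the bowl-shaped test surface is my own arbitrary choice:

import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + 3*y**2                         # a bowl-shaped test surface
grad = [sp.diff(f, v) for v in (x, y)]    # the gradient operator applied to f
print(grad)                               # [2*x, 6*y]

# At the point (1, 1) the fastest increase points along (2, 6), and
# the size of that vector says how steep the climb is there.
print(sp.sqrt(sum(g**2 for g in grad)).subs({x: 1, y: 1}))   # 2*sqrt(10)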

The gradient itself defines more operators. These have names you get very familiar with in Vector Calculus, with names like divergence and curl. These have compelling physical interpretations if we think of the function we operate on as describing a moving fluid. A positive divergence means fluid is coming into the system; a negative divergence, that it is leaving. The curl, in fluids, describes how nearby streams of fluid move at different rates.

Physical interpretations are common in operators. This probably reflects how much influence physics has on mathematics and vice-versa. Anyone studying quantum mechanics gets familiar with a host of operators. These have comfortable names like “position operator” or “momentum operator” or “spin operator”. These are operators that apply to the wave function for a problem. They transform the wave function into a probability distribution. That distribution describes what positions or momentums or spins are likely, how likely they are. Or how unlikely they are.

They’re not all physical, though. Or not purely physical. Many operators are useful because they are powerful mathematical tools. There is a variation of the Fourier series called the Fourier transform. We can interpret this as an operator. Suppose the original function started out with time or space as its independent variable. This often happens. The Fourier transform operator gives us a new function, one with frequencies as independent variable. This can make the function easier to work with. The Fourier transform is an integral operator, by the way, so don’t go thinking everything is a complicated set of derivatives.
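
The discrete cousin of this, the fast Fourier transform, takes a few lines of numpy. A made-up signal with a 5 Hz tone and a quieter 12 Hz tone goes in; out come the frequencies:

import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)    # one second of samples
signal = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*12*t)

spectrum = np.fft.rfft(signal)                 # the transform, as an operator
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
loudest = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(loudest))                         # [5.0, 12.0]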

Another integral-based operator that’s important is the Laplace transform. This is a great operator because it turns differential equations into algebraic equations. Often, into polynomials. You saw that one coming.

This is all a lot of good press for operators. Well, they’re powerful tools. They help us to see that we can manipulate functions in the ways that functions let us manipulate numbers. It should sound good to realize there is much new that you can do, and you already know most of what’s needed to do it.


This and all the other Fall 2019 A To Z posts should be gathered here. And once I have the time to fiddle with tags I’ll have all past A to Z essays gathered at this link.

From my Fifth A-to-Z: Oriented Graph


My grad-student career took me into Monte Carlo methods and viscosity-free fluid flow. It’s a respectable path. But I could have ended up in graph theory; I got a couple courses in it in grad school and loved it. I just could not find a problem I could work on that was both solvable and interesting. But hints of that alternative path for me turn up now and then, such as in this piece from 2018.


I am surprised to have had no suggestions for an ‘O’ letter. I’m glad to take a free choice, certainly. It let me get at one of those fields I didn’t specialize in, but could easily have. And let me mention that while I’m still taking suggestions for the letters P through T, each other letter has gotten at least one nomination. I can be swayed by a neat term, though, so if you’ve thought of something hard to resist, try me. And later this month I’ll open up the letters U through Z. Might want to start thinking right away about what X, Y, and Z could be.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble titles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Oriented Graph.

This is another term from graph theory, one of the great mathematical subjects for doodlers. A graph, here, is made of two sets of things. One is a bunch of fixed points, called ‘vertices’. The other is a bunch of curves, called ‘edges’. Every edge starts at one vertex and ends at one vertex. We don’t require that every vertex have an edge grow from it.

Already you can see why this is a fun subject. It models some stuff really well. Like, anything where you have a bunch of sources of stuff, that come together and spread out again? Chances are there’s a graph that describes this. There’s a compelling all-purpose interpretation. Have vertices represent the spots where something accumulates, or rests, or changes, or whatever. Have edges represent the paths along which something can move. This covers so much.

The next step is a “directed graph”. This comes from making the edges different. If we don’t say otherwise we suppose that stuff can move along an edge in either direction. But suppose otherwise. Suppose there are some edges that can be used in only one direction. This makes a “directed edge”. It’s easy to see graph theory in networks of stuff like city streets. Once you ponder that, one-way streets follow close behind. If every edge in a graph is directed, then you have a directed graph. Moving from a regular old undirected graph to a directed graph changes everything you’d learned about graph theory. Mostly it makes things harder. But you get some good things in trade. We become able to model sources, for example. This is where whatever might move comes from. Also sinks, which is where whatever might move disappears from our consideration.

You might fear that by switching to a directed graph there’s no way to have a two-way connection between a pair of vertices. Or that if there is you have to go through some third vertex. I understand your fear, and wish to reassure you. We can get a two-way connection even in a directed graph: just have the same two vertices be connected by two edges. One goes one way, one goes the other. I hope you feel some comfort.

What if we don’t have that, though? What if the directed graph doesn’t have any vertices with a pair of opposite-directed edges? And that, then, is an oriented graph. We get the orientation from looking at pairs of vertices. Each pair either has no edge connecting them, or has a single directed edge between them.

There’s a lot of potential oriented graphs. If you have three vertices, for example, there’s seven oriented graphs to make of that. You’re allowed to have a vertex not connected to any others. You’re also allowed to have the vertices grouped into a couple of subsets, and connect only to other vertices in their own subset. This is part of why four vertices can give you 42 different oriented graphs. Five vertices can give you 582 different oriented graphs. You can insist on a connected oriented graph.

A connected graph is what you guess. It’s a graph where there’s no vertices off on their own, unconnected to anything. There’s no subsets of vertices connected only to each other. This doesn’t mean you can always get from any one vertex to any other vertex. The directions might not allow you to do that. But if you’re willing to break the laws, and ignore the directions of these edges, you could then get from any vertex to any other vertex. Limiting yourself to connected graphs reduces the number of oriented graphs you can get. But not by as much as you might guess, at least not to start. There’s only one connected oriented graph for two vertices, instead of two. Three vertices have five connected oriented graphs, rather than seven. Four vertices have 34, rather than 42. Five vertices, 535 rather than 582. The total number of lost graphs grows, of course. The percentage of lost graphs dwindles, though.
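
Those counts are small enough to check by brute force, if you don’t mind waiting. Here’s a sketch in Python with the networkx library: give each pair of vertices no edge, or an edge in one of the two directions, and weed out duplicates by isomorphism. It crawls past four or five vertices, but it does confirm the numbers above:

import itertools
import networkx as nx

def count_oriented_graphs(n, connected_only=False):
    """Count oriented graphs on n vertices, up to isomorphism. Each
    vertex pair gets no edge or an edge in one direction, never both,
    which is what 'oriented' means."""
    pairs = list(itertools.combinations(range(n), 2))
    reps = []   # one representative per isomorphism class
    for choice in itertools.product((0, 1, 2), repeat=len(pairs)):
        g = nx.DiGraph()
        g.add_nodes_from(range(n))
        for (u, v), c in zip(pairs, choice):
            if c == 1:
                g.add_edge(u, v)
            elif c == 2:
                g.add_edge(v, u)
        if connected_only and not nx.is_weakly_connected(g):
            continue
        if not any(nx.is_isomorphic(g, h) for h in reps):
            reps.append(g)
    return len(reps)

print(count_oriented_graphs(3))                        # 7
print(count_oriented_graphs(3, connected_only=True))   # 5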

There’s something more. What if there are no unconnected vertices? That is, every pair of vertices has an edge? If every pair of vertices in a graph has a direct connection we call that a “complete” graph. This is true whether the graph is directed or not. If you do have a complete oriented graph — every pair of vertices has a direct connection, and only the one direction — then that’s a “tournament”. If that seems like a whimsical name, consider one interpretation of it. Imagine a sports tournament in which every team played every other team once. And that there’s no ties. Each vertex represents one team. Each edge is the match played by the two teams. The direction is, let’s say, from the losing team to the winning team. (It’s as good if the direction is from the winning team to the losing team.) Then you have a complete, oriented, directed graph. And it represents your tournament.

And that delights me. A mathematician like me might talk a good game about building models. How one can represent things with mathematical constructs. Here, it’s done. You can make little dots, for vertices, and curved lines with arrows, for edges. And draw a picture that shows how a round-robin tournament works. It can be that direct.


From my Fourth A-to-Z: Open Set


It’s quite funny to notice the first paragraph’s shame at missing my self-imposed schedule. I still have not found confirmation of my hunch that “open” and “closed”, as set properties, were named independently. I haven’t found evidence I’m wrong, though, either.


Today’s glossary entry is another request from Elke Stangl, author of the Elkemental Force blog. I’m hoping this also turns out to be a well-received entry. Half of that is up to you, the kind reader. At least I hope you’re a reader. It’s already gone wrong, as it was supposed to be Friday’s entry. I discovered I hadn’t actually scheduled it while I was too far from my laptop to do anything about that mistake. This spoils the nice Monday-Wednesday-Friday routine of these glossary entries that dates back to the first one I ever posted and just means I have to quit forever and not show my face ever again. Sorry, Ulam Spiral. Someone else will have to think of you.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Open Set.

Mathematics likes to present itself as being universal truths. And it is. At least if we allow that the rules of logic by which mathematics works are universal. Suppose them to be true and the rest follows. But we start out with intuition, with things we observe in the real world. We’re happy when we can remove the stuff that’s clearly based on idiosyncratic experience. We find something that’s got to be universal.

Sets are pretty abstract things, as mathematicians use the term. They get to be hard to talk about; we run out of simpler words that we can use. A set is … a bunch of things. The things are … stuff that could be in a set, or else that we’d rule out of a set. We can end up better understanding things by drawing a picture. We draw the universe, which is a rectangular block, sometimes with dashed lines as the edges. The set is some blotch drawn on the inside of it. Some shade it in to emphasize which stuff we want in the set. If we need to pick out a couple things in the universe we drop in dots or numerals. If we’re rigorous about the drawing we could create a Venn Diagram.

When we do this, we’re giving up on the pure mathematical abstraction of the set. We’re replacing it with a territory on a map. Several territories, if we have several sets. The territories can overlap or be completely separate. We’re subtly letting our sense of geography, our sense of the spaces in which we move, infiltrate our understanding of sets. That’s all right. It can give us useful ideas. Later on, we’ll try to separate out the ideas that are too bound to geography.

A set is open if whenever you’re in it, you can’t be on its boundary. We never quite have this in the real world, with territories. The border between, say, New Jersey and New York becomes this infinitesimally slender thing, as wide in space as midnight is in time. But we can, with some effort, imagine the state. Imagine being as tiny in every direction as the border between two states. Then we can imagine the difference between being on the border and being away from it.

And not being on the border matters. If we are not on the border we can imagine the problem of getting to the border. Pick any direction; we can move some distance while staying inside the set. It might be a lot of distance, it might be a tiny bit. But we stay inside however we might move. If we are on the border, then there’s some direction in which any movement, however small, drops us out of the set. That’s a difference in kind between a set that’s open and a set that isn’t.

I say “a set that’s open and a set that isn’t”. There are such things as closed sets. A set doesn’t have to be either open or closed. It can be neither, a set that includes some of its borders but not other parts of it. It can even be both open and closed simultaneously. The whole universe, for example, is both an open and a closed set. The empty set, with nothing in it, is both open and closed. (This looks like a semantic trick. OK, if you’re in the empty set you’re not on its boundary. But you can’t be in the empty set. So what’s going on? … The usual. It makes other work easier if we call the empty set ‘open’. And the extra work we’d have to do to rule out the empty set doesn’t seem to get us anything interesting. So we accept what might be a trick.) The definitions of ‘open’ and ‘closed’ don’t exclude one another.

I’m not sure how this confusing state of affairs developed. My hunch is that the words ‘open’ and ‘closed’ evolved independent of each other. Why do I think this? An open set has its openness from, well, not containing its boundaries; from the inside there’s always a little more to it. A closed set has its closedness from sequences. That is, you can consider a string of points inside a set. Are these points leading somewhere? Is that point inside your set? If a string of points always leads to somewhere, and that somewhere is inside the set, then you have closure. You have a closed set. I’m not sure that the terms were derived with that much thought. But it does explain, at least in terms a mathematician might respect, why a set that isn’t open isn’t necessarily closed.

Back to open sets. What does it mean to not be on the boundary of the set? How do we know if we’re on it? We can define sets by all sorts of complicated rules: complex-valued numbers of size less than five, say. Rational numbers whose denominator (in lowest form) is no more than ten. Points in space from which a satellite dropped would crash into the moon rather than into the Earth or Sun. If we have an idea of distance we could measure how far it is from a point to the nearest part of the boundary. Do we need distance, though?

No, it turns out. We can get the idea of open sets without using distance. Introduce a neighborhood of a point. A neighborhood of a point is an open set that contains that point. It doesn’t have to be small, but that’s the connotation. And we get to thinking of little N-balls, circle or sphere-like constructs centered on the target point. It doesn’t have to be N-balls. But we think of them so much that we might as well say it’s necessary. If every point in a set has a neighborhood around it that’s also inside the set, then the set’s open.

You’re going to accuse me of begging the question. Fair enough. I was using open sets to define open sets. This use is all right for an intuitive idea of what makes a set open, but it’s not rigorous. We can give in and say we have to have distance. Then we have N-balls and we can build open sets out of balls that don’t contain the edges. Or we can try to drive distance out of our idea of open sets.
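
If we do give in, the distance-based version writes out neatly. With d for the distance and B_{\epsilon}(x) for the ball of all points closer than \epsilon to x, a set S is open when:

\forall x \in S \;\; \exists \epsilon > 0 : B_{\epsilon}(x) = \left\{ y : d(x, y) < \epsilon \right\} \subseteq S

Every point gets a little cushion of fellow members around it; that’s all the definition says. Driving distance out instead takes a different tack.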

We can do it this way. Start off by saying the whole universe is an open set. Also that the union of any number of open sets is also an open set. And that the intersection of any finite number of open sets is also an open set. Does this sound weak? So it sounds weak. It’s enough. We get the open sets we were thinking of all along from this.

This works for the sets that look like territories on a map. It also works for sets for which we have some idea of distance, however strange it is to our everyday distances. It even works if we don’t have any idea of distance. This lets us talk about topological spaces, and study what geometry looks like if we can’t tell how far apart two points are. We can, for example, at least tell that two points are different. Can we find a neighborhood of one that doesn’t contain the other? Then we know they’re some distance apart, even without knowing what distance is.

That we reached so abstract an idea of what an open set is without losing the idea’s usefulness suggests we’re doing well. So we are. It also shows why Nicholas Bourbaki, the famous nonexistent mathematician, thought set theory and its related ideas were the core of mathematics. Today category theory is a more popular candidate for the core of mathematics. But set theory is still close to the core, and much of analysis is about what we can know from the fact of sets being open. Open sets let us explain a lot.

How December 2021, The Month I Crashed, Treated My Mathematics Blog


On my humor blog I joked I was holding off on my monthly statistics recaps waiting for December 2021 to get better. What held me back here is more attention- and energy-draining nonsense going on last week. It’s passed without lasting harm, that I know about, though. So I can get back to looking at how things looked here in December.

December was, technically, my most prolific month in the sorry year of 2021. I had twelve articles posted, in a year that mostly saw around five to seven posts a month. But more than half of them were repeats, copying the text of old A-to-Z’s, with a small introduction added. I’ve observed how much my readership seems to depend on the number of posts made, more than anything else. How did this sudden surge affect my statistics? … Here’s how.

Bar chart showing two and a half year's worth of monthly readership totals. The last several months have shown a slow but steady decline.
I can’t wait for the number of followers to roll over to 1,000, so that it’s easy to consider how many people hit ‘follow’ and then never read a word of my writing ever again.

This was another declining month, with the fewest page views — 1,946 — and unique visitors — 1,351 — since July 2021. As you’d expect, this was also below the twelve-month running means, of 2,437.7 views from 1,727.8 unique visitors. It’s also below the twelve-month running medians, of 2,436.5 views from 1,742 unique visitors.

I notice, looking at the years going back to 2018, that I’ve seen a readership drop in December each of the last several years. In 2019 my December readership was barely three-fifths the November readership, for example. In 2018 and 2020 readership fell by one-tenth to one-fifth. But those are also years where my A-to-Z was going regularly, and filling whole weeks with publication, in November, with only a few pieces in December. Having December be busier than November is novel.

So I’m curious whether other blogs see a similar November-to-December dropoff. I’m also curious if they have a publishing schedule that makes it easier to find actual patterns through the chaos.

There were 46 things liked in December, which is above the running mean of 40.5 and median of 38.5. There were nine comments given, below the mean of 15.3 and median of 11.5. On the other hand, how much was there to say? (And I appreciate each comment, particularly those of moral support.)

The per-posting numbers, of views and visitors and such, collapsed. I had expected that, since the laconic publishing schedule I settled on drove the per-posting averages way up. The twelve-month running mean of views per posting was 323.4, and median 307.4, for example. December saw 162.2 views per posting. There were a running mean of 228.4 visitors per posting, and median of 219.2 per posting, for the twelve months ending with November 2021. December 2021 saw 112.6 visitors per posting. So those numbers are way down. But they aren’t far off the figures I had in, say, the end of 2020, when I was doing 18 or 19 posts per month.


Might as well list all twelve posts of December, in their descending order of popularity. I’m not surprised the original A-to-Z stuff was most popular. Besides being least familiar, it also came first in the month, so had time to attract page views. Here’s the roster of how the month’s postings ranked.


WordPress credits me with publishing 16,789 words in December, an average of 1,399.1 words per post. That’s not only my most talkative month for 2021; that’s two of my most talkative months. There’s a whole third of the year I didn’t publish that much. This is all inflated by my reposting old articles in their entirety, of course. In past years I would include a pointer to an old A-to-Z essay, but not the whole thing.

This all brings my blog to a total 67,218 words posted for the year. It’s not the second-least-talkative year after all, although I’ll keep its comparisons to other years for a separate post.

At the closing of the year, WordPress figures I’ve posted 1,675 things here. They drew a total 150,883 page views from 90,187 visitors. This isn’t much compared to the first-tier pop-mathematics blogs. But it’s still more people than I could expect to meet in my life. So that’s nice to know about.

And now let’s look ahead to what 2022 is going to bring on all of this. I still intend to finish the Little 2021 Mathematics A-to-Z. Those essays should be at this link when I post them. I may get back to my Reading the Comics posts, as well. We’ll see.

From my Third A-to-Z: Osculating Circle


With the third A-to-Z choice for the letter O, I finally set ortho-ness down. I had thought the letter might become a reference for everything described as ortho-. It has to be acknowledged that two or three examples gets you the general idea of what’s meant when something is named ortho-, though.

Must admit, I haven’t, that I remember, ever solved a differential equation using osculating circles instead of, you know, polynomials or sine functions (Fourier series). But references I trust say that would be a way to go.


I’m happy to say it’s another request today. This one’s from HowardAt58, author of the Saving School Math blog. He’s given me some great inspiration in the past.

Osculating Circle.

It’s right there in the name. Osculating. You know what that is from that one Daffy Duck cartoon where he cries out “Greetings, Gate, let’s osculate” while wearing a moustache. Daffy’s imitating somebody there, but goodness knows who. Someday the mystery drives the young you to a dictionary web site. Osculate means kiss. This doesn’t seem to explain the scene. Daffy was imitating Jerry Colonna. That meant something in 1943. You can find him on old-time radio recordings. I think he’s funny, in that 40s style.

Make the substitution. A kissing circle. Suppose it’s not some playground antic one level up from the Kissing Bandit that plagues recess yet one or two levels down from what we imagine we’d do in high school. It suggests a circle that comes really close to something, that touches it a moment, and then goes off its own way.

But then touching. We know another word for that. It’s the root behind “tangent”. Tangent is a trigonometry term. But it appears in calculus too. The tangent line is a line that touches a curve at one specific point and is going in the same direction as the original curve is at that point. We like this because … well, we do. The tangent line is a good approximation of the original curve, at least at the tangent point and for some region local to that. The tangent touches the original curve, and maybe it does something else later on. What could kissing be?

The osculating circle is about approximating an interesting thing with a well-behaved thing. So are similar things with names like “osculating curve” or “osculating sphere”. We need that a lot. Interesting things are complicated. Well-behaved things are understood. We move from what we understand to what we would like to know, often, by an approximation. This is why we have tangent lines. This is why we build polynomials that approximate an interesting function. They share the original function’s value, and its derivative’s value. A polynomial approximation can share many derivatives. If the function is nice enough, and the polynomial big enough, it can be impossible to tell the difference between the polynomial and the original function.

The osculating circle, or sphere, isn’t so concerned with matching derivatives. I know, I’m as shocked as you are. Well, it matches the first and the second derivatives of the original curve. Anything past that, though, it matches only by luck. The osculating circle is instead about matching the curvature of the original curve. The curvature is what you think it would be: it’s how much a function curves. If you imagine looking closely at the original curve and an osculating circle they appear to be two arcs that come together. They must touch at one point. They might touch at others, but that’s incidental.
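
If the curve happens to be the graph of a function y(x), the curvature has a tidy formula, and the radius of the osculating circle is its reciprocal:

\kappa = \frac{\left|y''(x)\right|}{\left(1 + \left(y'(x)\right)^2\right)^{3/2}} \qquad r = \frac{1}{\kappa}

You can read off why flat stretches, where y'' is near zero, get enormous osculating circles. Hardly any curving takes a huge circle to match.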

Osculating circles, and osculating spheres, sneak out of mathematics and into practical work. This is because we often want to work with things that are almost circles. The surface of the Earth, for example, is not a sphere. But it’s only a tiny bit off. It’s off in ways that you only notice if you are doing high-precision mapping. Or taking close measurements of things in the sky. Sometimes we do this. So we map the Earth locally as if it were a perfect sphere, with curvature exactly what its curvature is at our observation post.

Or we might be observing something moving in orbit. If the universe had only two things in it, and they were the correct two things, all orbits would be simple: they would be ellipses. They would have to be “point masses”, things that have mass without any volume. They never are. They’re always shapes. Spheres would be fine, but they’re never perfect spheres even. The slight difference between a perfect sphere and whatever the things really are affects the orbit. Or the other things in the universe tug on the orbiting things. Or the thing orbiting makes a course correction. All these things make little changes in the orbiting thing’s orbit. The actual orbit of the thing is a complicated curve. The orbit we could calculate is an osculating — well, an osculating ellipse, rather than an osculating circle. Similar idea, though. Call it an osculating orbit if you’d rather.

That osculating circles have practical uses doesn’t mean they aren’t respectable mathematics. I’ll concede they’re not used as much as polynomials or sine curves are. I suppose that’s because polynomials and sine curves have nicer derivatives than circles do. But osculating circles do turn up as ways to try solving nonlinear differential equations. We need the help. Linear differential equations anyone can solve. Nonlinear differential equations are pretty much impossible. They also turn up in signal processing, as ways to find the frequencies of a signal from a sampling of data. This, too, we would like to know.

We get the name “osculating circle” from Gottfried Wilhelm Leibniz. This might not surprise. Finding easy-to-understand shapes that approximate interesting shapes is why we have calculus. Isaac Newton described a way of making them in the Principia Mathematica. This also might not surprise. Of course they would on this subject come so close together without kissing.

From my Second A-to-Z: Orthonormal


For early 2016 — dubbed “Leap Day 2016” as that’s when it started — I got a request to explain orthogonal. I went in a different direction, although not completely different. This essay does get a bit more into specifics of how mathematicians use the idea, like, showing some calculations and such. I put in a casual description of vectors here. For book publication I’d want to rewrite that to be clearer that, like, ordered sets of numbers are just one (very common) way to represent vectors.


Jacob Kanev had requested “orthogonal” for this glossary. I’d be happy to oblige. But I used the word in last summer’s Mathematics A To Z. And I admit I’m tempted to just reprint that essay, since it would save some needed time. But I can do something more.

Orthonormal.

“Orthogonal” is another word for “perpendicular”. Mathematicians use it for reasons I’m not precisely sure of. My belief is that it’s because “perpendicular” sounds like we’re talking about directions. And we want to extend the idea to things that aren’t necessarily directions. As majors, mathematicians learn orthogonality for vectors, things pointing in different directions. Then we extend it to other ideas. To functions, particularly, but we can also define it for spaces and for other stuff.

I was vague, last summer, about how we do that. We do it by creating a function called the “inner product”. That takes in two of whatever things we’re measuring and gives us a real number. If the inner product of two things is zero, then the two things are orthogonal.

The first example mathematics majors learn of this, before they even hear the words “inner product”, are dot products. These are for vectors, ordered sets of numbers. The dot product we find by matching up numbers in the corresponding slots for the two vectors, multiplying them together, and then adding up the products. For example. Give me the vector with values (1, 2, 3), and the other vector with values (-6, 5, -4). The inner product will be 1 times -6 (which is -6) plus 2 times 5 (which is 10) plus 3 times -4 (which is -12). So that’s -6 + 10 - 12 or -8.

So those vectors aren’t orthogonal. But how about the vectors (1, -1, 0) and (0, 0, 1)? Their dot product is 1 times 0 (which is 0) plus -1 times 0 (which is 0) plus 0 times 1 (which is 0). The vectors are perpendicular. And if you tried drawing this you’d see, yeah, they are. The first vector we’d draw as being inside a flat plane, and the second vector as pointing up, through that plane, like a thumbtack.

So that’s orthogonal. What about this orthonormal stuff?

Well … the inner product can tell us something besides orthogonality. What happens if we take the inner product of a vector with itself? Say, (1, 2, 3) with itself? That’s going to be 1 times 1 (which is 1) plus 2 times 2 (4, according to rumor) plus 3 times 3 (which is 9). That’s 14, a tidy sum, although, so what?

The inner product of (-6, 5, -4) with itself? Oh, that’s some ugly numbers. Let’s skip it. How about the inner product of (1, -1, 0) with itself? That’ll be 1 times 1 (which is 1) plus -1 times -1 (which is positive 1) plus 0 times 0 (which is 0). That adds up to 2. And now, wait a minute. This might be something.

Start from somewhere. Move 1 unit to the east. (Don’t care what the unit is. Inches, kilometers, astronomical units, anything.) Then move -1 units to the north, or like normal people would say, 1 unit to the south. How far are you from the starting point? … Well, you’re the square root of 2 units away.

Now imagine starting from somewhere and moving 1 unit east, and then 2 units north, and then 3 units straight up, because you found a convenient elevator. How far are you from the starting point? This may take a moment of fiddling around with the Pythagorean theorem. But you’re the square root of 14 units away.

And what the heck, (0, 0, 1). The inner product of that with itself is 0 times 0 (which is zero) plus 0 times 0 (still zero) plus 1 times 1 (which is 1). That adds up to 1. And, yeah, if we go one unit straight up, we’re one unit away from where we started.

The inner product of a vector with itself gives us the square of the vector’s length. At least if we aren’t using some freak definition of inner products and lengths and vectors. And this is great! It means we can talk about the length — maybe better to say the size — of things that maybe don’t have obvious sizes.

Some stuff will have convenient sizes. For example, they’ll have size 1. The vector (0, 0, 1) was one such. So is (1, 0, 0). And you can think of another example easily. Yes, it’s \left(\frac{1}{\sqrt{2}}, -\frac{1}{2}, \frac{1}{2}\right) . (Go ahead, check!)
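
Taking up that invitation, here’s the check in Python with numpy. The companion vector w is a pick of my own, there to set up the next paragraph:

import numpy as np

v = np.array([1/np.sqrt(2), -1/2, 1/2])
w = np.array([1/np.sqrt(2), 1/2, -1/2])   # a companion vector of my choosing

print(np.dot(v, v))   # 1.0, up to rounding: v has size 1
print(np.dot(w, w))   # 1.0, up to rounding: so does w
print(np.dot(v, w))   # 0.0, up to rounding: and they're orthogonal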

So by “orthonormal” we mean a collection of things that are orthogonal to each other, and that themselves are all of size 1. It’s a description of both what things are by themselves and how they relate to one another. A thing can’t be orthonormal by itself, for the same reason a line can’t be perpendicular to nothing in particular. But a pair of things might be orthogonal, and they might be the right length to be orthonormal too.

Why do this? Well, the same reasons we always do this. We can impose something like direction onto a problem. We might be able to break up a problem into simpler problems, one in each direction. We might at least be able to simplify the ways different directions are entangled. We might be able to write a problem’s solution as the sum of solutions to a standard set of representative simple problems. This one turns up all the time. And an orthogonal set of something is often a really good choice of a standard set of representative problems.

This sort of thing turns up a lot when solving differential equations. And those often turn up when we want to describe things that happen in the real world. So a good number of mathematicians develop a habit of looking for orthonormal sets.
