My Little 2021 Mathematics A-to-Z: Hyperbola


John Golden, author of the Math Hombre blog, had several great ideas for the letter H in this little A-to-Z for the year. Here’s one of them.

Hyperbola.

The hyperbola is where advanced mathematics begins. It’s a family of shapes, some of the pieces you get by slicing a cone. You can make an approximate one by shining a flashlight on a wall. The other conic sections are familiar, everyday things. Circles we see everywhere. Ellipses we see everywhere we look at a circle in perspective. Parabolas we learn, in approximation, watching something tossed, or squirting water into the air. The hyperbola should be as accessible. Hold your flashlight parallel to the wall and look at the outline of light it casts. But the difference between this and a parabola isn’t obvious. And it’s harder to see hyperbolas in nature. It’s the path a space probe swinging past a planet makes? Great guide for all of us who’ve launched space probes past Jupiter.

When we learn of hyperbolas, somewhere in high school algebra or in precalculus, they seem designed to break the rules we had inferred. We’ve learned functions like lines and quadratics (parabolas) and cubics. They’re nice, simple, connected shapes. The hyperbola comes in two pieces. We’ve learned that the graph of a function crosses any given vertical line at most once. Now we can expect to see the curve cross it twice. We’ve learned to sketch functions by finding a few interesting points — roots, y-intercepts, things like that. Hyperbolas, we’re taught to draw by setting down a little central box and then two asymptotes. And now we have asymptotes, simpler curves that the actual curve approaches without ever equaling.

We’re trained to see functions as having a couple of odd points where they’re not defined. Nobody expects y = 1 \div x to mean anything when x is zero. But we learn these as weird, isolated points. Now there’s a whole interval of x-values that don’t fit anything on the graph. Half the time, anyway, because we see two classes of hyperbolas. There are ones that open like cups, pointing up and down. Those have definitions for every value of x. There are ones that open like ears, pointing left and right. Those have a box in the center, a stretch of x’s for which no y satisfies the equation. They seem like they’re taught just to be mean.

They’re not, of course. The only mathematical thing we teach just to be mean is integration by trigonometric substitution. The things which seem weird or new in hyperbolas are, largely, things we didn’t notice before. A vertical line put across a circle or ellipse crosses the curve twice, at most points. There are two huge intervals, to the left and to the right of the circle, where no value of y makes the equation true. Circles are familiar, though. Ellipses don’t seem intimidating. We know we can’t turn x^2 + y^2 = 4 (a typical circle) into a function without some work. We have to write either f(x) = \sqrt{4 - x^2} or f(x) = -\sqrt{4 - x^2} , breaking the circle into two halves. The same happens for hyperbolas, with x^2 - y^2 = 4 (a typical hyperbola) turning into f(x) = \sqrt{x^2 - 4} or f(x) = -\sqrt{x^2 - 4} .
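
If you like seeing that numerically, here is a minimal sketch, with Python as my assumed language, of those two half-functions:

```python
import math

def circle_top(x):
    # Upper half of the circle x^2 + y^2 = 4; defined only for -2 <= x <= 2.
    return math.sqrt(4 - x*x)

def hyperbola_top(x):
    # Upper half of the hyperbola x^2 - y^2 = 4; defined only for |x| >= 2.
    return math.sqrt(x*x - 4)

print(circle_top(1.0))      # about 1.732
print(hyperbola_top(3.0))   # about 2.236
```

Each needs its matching bottom half, the negative square root, to complete the shape.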

Even the definitions seem weird. The ellipse we can draw by taking a set distance and two focus points. If the distance from the first focus to a point plus the distance from the point to the second focus is that set distance, the point’s on the ellipse. We can use two thumbtacks and a piece of string to draw the ellipse. The hyperbola has a similar rule, but weirder. You have your two focus points, yes. And a set distance. But the locus of points of the hyperbola is everything where the distance from the point to one focus minus the distance from the point to the other focus is that set distance, in absolute value. Good luck doing that with thumbtacks and string.

Yet hyperbolas are ready for us. Consider playing with a decent calculator, hitting the reciprocal button for different numbers. 1 turns to 1, yes. 2 turns into 0.5. -0.125 turns into -8. It’s the simplest iterative game to do on the calculator. If you sketch this, though, all the points (x, y) where one coordinate is the reciprocal of the other? It’s two curves. They approach without ever touching the x- and y-axes. Get far enough from the origin and there’s no telling this curve from the axes. It’s a hyperbola, one that obeys that vertical-line rule again. It has only the one value of x that can’t be allowed. We write it as y = \frac{1}{x} or even xy = 1 . But it’s the shape we see when we draw x^2 - y^2 = 2 , rotated. Or a rotation of one we see when we draw y^2 - x^2 = 2 . The equations of rotated shapes are annoying. We do enough of them for ellipses and parabolas and hyperbolas to meet the course requirement. But they point out how the hyperbola is a more normal construct than we fear.
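
If you distrust the claim about rotation, a few lines of Python (my choice; the check is the same in any language) can test it. Take points on xy = 1 , rotate them 45 degrees, and see whether x^2 - y^2 = 2 holds:

```python
import math

theta = -math.pi / 4   # rotate the curve xy = 1 by 45 degrees (clockwise here)
for x in [0.5, 1.0, 2.0, -3.0]:
    y = 1.0 / x
    u = x * math.cos(theta) - y * math.sin(theta)
    v = x * math.sin(theta) + y * math.cos(theta)
    print(u*u - v*v)    # 2.0 every time, up to rounding error
```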

And let me look at that construct again. An equation describing a hyperbola that opens horizontally or vertically looks like ax^2 - by^2 = c for some constant numbers a, b, and c. (If a, b, and c are all positive, this is a hyperbola opening horizontally. If a and b are positive and c negative, this is a hyperbola opening vertically.) An equation describing an ellipse with its axes horizontal or vertical, similarly, looks like ax^2 + by^2 = c . (These are shapes centered on the origin. They can have other centers, which make the equations harder but not more enlightening.) The equations have very similar shapes. Mathematics trains us to suspect things with similar shapes have similar properties. That change from a plus to a minus seems too important to ignore, and yet …

I bet you assumed x and y are real numbers. This is convention, the safe bet. If someone wants complex-valued numbers they usually say so. If they don’t want to be explicit, they use z and w as variables instead of x and y. But what if y is an imaginary number? Suppose y = \imath t , for some real number t, where \imath^2 = -1 . You haven’t missed a step; I’m summoning this from nowhere. (Let’s not think about how to draw a point with an imaginary coordinate.) Then ax^2 - by^2 = c is ax^2 - b(\imath t)^2 = c which is ax^2 + bt^2 = c . And despite the weird letters, that’s an ellipse. By the same supposition we could go from ax^2 + by^2 = c , which we’d taken to be an ellipse, and get ax^2 - bt^2 = c , a hyperbola.
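
If you’d like a machine to check the substitution, here’s a sketch using the sympy library (my assumption that you have it installed):

```python
import sympy as sp

a, b, x, t = sp.symbols('a b x t', positive=True)
y = sp.I * t                       # suppose y is imaginary: y = i*t
hyperbola_side = a * x**2 - b * y**2
print(sp.expand(hyperbola_side))   # a*x**2 + b*t**2, the ellipse's left side
```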

Fine stuff inspiring the question “so?” I made up a case and showed how that made two dissimilar things look alike. All right. But consider trigonometry, built on the cosine and sine functions. One good way to see the cosine and sine of an angle is as the x- and y-coordinates of a point on the unit circle, where x^2 + y^2 = 1 . (The angle \theta is the one from the point (\cos(\theta), \sin(\theta)) to the origin to the point (1, 0).)

There exists, in parallel to the familiar trig functions, the “hyperbolic trigonometric functions”. These have imaginative names like the hyperbolic sine and hyperbolic cosine. (And onward. We can speak of the “inverse hyperbolic cosecant”, if we wish no one to speak to us again.) Usually these get introduced in calculus, to give the instructor a tiny break. Their derivatives, and integrals, look much like those of the normal trigonometric functions, but aren’t the exact same problems over and over. And these functions, too, have a compelling meaning. The hyperbolic cosine of an angle and hyperbolic sine of an angle have something to do with points on a unit hyperbola, x^2 - y^2 = 1 .
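
That “something to do with” has a concrete core: the identity \cosh^2(t) - \sinh^2(t) = 1 means the point (\cosh(t), \sinh(t)) always lands on that hyperbola. A couple of lines of Python (my choice of language) make the check:

```python
import math

# (cosh t, sinh t) lands on the unit hyperbola x^2 - y^2 = 1, the same
# way (cos t, sin t) lands on the unit circle x^2 + y^2 = 1.
for t in [0.0, 0.5, 1.0, 2.0, -3.0]:
    x, y = math.cosh(t), math.sinh(t)
    print(x*x - y*y)   # 1.0 each time, up to rounding error
```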

Think back to the flashlight. We get a circle by holding the light perpendicular to the wall. We get a hyperbola holding the light parallel. We get a circle by drawing x^2 + y^2 = 1 with x and y real numbers. We get a hyperbola by (somehow) drawing x^2 + y^2 = 1 with x real and y imaginary. We remember something about representing complex-valued numbers with a real axis and an orthogonal imaginary axis.

One almost feels the connection. I can’t promise that pondering this will make hyperbolas as familiar as circles, or at least ellipses. But often a problem that brings us to hyperbolas has an alternate phrasing that’s ellipses, and vice-versa. And the common traits of these conic slices can guide you into a new understanding of mathematics.


Thank you for reading. I hope to have another piece next week at this time. This and all of this year’s Little Mathematics A to Z essays should be at this link. And the A-to-Z essays for every year should be at this link.

My Little 2021 Mathematics A-to-Z: Torus


Mr Wu, a mathematics tutor in Singapore and author of the blog about that, offered this week’s topic. It’s about one of the iconic mathematics shapes.

Torus

When one designs a board game, one has to decide what the edge of the board means. Some games make getting to the edge the goal, such as Candy Land or backgammon. Some games set their play so the edge is unreachable, such as Clue or Monopoly. Some make the edge an impassable limit, such as Go or Scrabble or Checkers. And sometimes the edge becomes something different.

Consider a strategy game like Risk or Civilization or their video game descendants like Europa Universalis. One has to be able to go east, or west, without limit. But there’s no making a cylindrical board. Or making a board infinite in extent, side to side. Instead, the game demands we connect borders. Moving east one space from just-at-the-Eastern-edge means we put the piece at just-at-the-Western-edge. As a video game this is seamless. As a tabletop game we just learn to remember those units in Alberta are not so far from Kamchatka as they look. We have the awkward point that the board doesn’t let us go over the poles. It doesn’t hurt game play: no one wants to invade Russia from the north. We can represent a boundless space on our table.

Sometimes we need more. Consider the arcade game Asteroids. The player’s spaceship hopes to survive by blasting to dust the asteroids cluttered around it. The game ‘board’ is the arcade screen, a manageable slice of space. Asteroids move in any direction, often drifting off-screen. If they were then out of the game, victory would be so easy as to be unsatisfying. So the game takes a tip from the strategy games, and connects the right edge of the screen to the left. If we ask why an asteroid last seen moving to the right now appears on the left, well, there are answers. One is to say we’re in a very average segment of a huge asteroid field. There are about as many asteroids approaching from off-screen as there are receding from us. Why our local work destroying asteroids eliminates the off-screen asteroids is a mystery for the ages. Perhaps the rest of the fleet is also asteroid-clearing at about our pace. What matters is we still have to do something with the asteroids.

Almost. We’ve still got asteroids leaking away through the top and bottom. But we can use the same trick the right and left edges do. And now we have some wonderful things. One is a balanced game. Another is the space in which ship and asteroids move. It is no rectangle now, but a torus.
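
In code the whole trick is the modulo operation. A minimal sketch, assuming Python and a made-up 320-by-240 screen:

```python
WIDTH, HEIGHT = 320, 240   # made-up screen dimensions

def move(x, y, dx, dy):
    # Taking remainders glues the right edge to the left and the
    # top edge to the bottom. The screen becomes a torus.
    return (x + dx) % WIDTH, (y + dy) % HEIGHT

print(move(318, 5, 10, 0))    # (8, 5): off the right edge, back on the left
print(move(100, 2, 0, -10))   # (100, 232): off one edge, back on the other
```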

This is a neat space to explore. It’s unbounded, for example, just as the surface of the Earth is. Or (it appears) the actual universe is. Set your course right and your spaceship can go quite a long way without getting back to exactly where it started from, again much like the surface of the Earth or the universe. We can impersonate an unbounded space using a manageably small set of coordinates, a decent-size game board.

That’s a nice trick to have. Many mathematics problems are about how great blocks of things behave. And it’s usually easiest to model these things if there aren’t boundaries. We can model boundaries, sure, but they’re hard to handle, most of the time. So we analyze great, infinitely-extending stretches of things.

Analysis does great things. But we sometimes need to do simulations, too. Computers are, as ever, a great temptation here. Look at a spreadsheet with hundreds of rows and columns of cells. Each can represent a point in space, interacting with whatever’s nearby by whatever our rule is. And this can do very well … except these cells have to represent a finite territory. A million rows can’t span more than one million times the greatest distance between rows. We have to handle that.

There are tricks. One is to model the cells as being at ever-expanding distances, trusting that there are regions too dull to need much attention. Another is to give the boundary some values that, we figure, look as generic as possible, a declaration that past here it carries on like that. The trick that makes rhetorical sense to mention here is creating a torus, matching left edge to right, top edge to bottom. Front edge to back if it’s a three-dimensional model.
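
Here’s a sketch of what that looks like in practice, with Python assumed and a toy smoothing rule standing in for whatever the real simulation does:

```python
def smooth_step(grid):
    # Average each cell with its four neighbors. The % makes the
    # indexing wrap around, so the grid behaves as a torus.
    n, m = len(grid), len(grid[0])
    return [[(grid[i][j]
              + grid[(i - 1) % n][j] + grid[(i + 1) % n][j]
              + grid[i][(j - 1) % m] + grid[i][(j + 1) % m]) / 5.0
             for j in range(m)]
            for i in range(n)]

grid = [[0.0] * 4 for _ in range(4)]
grid[0][0] = 1.0
print(smooth_step(grid)[3][0])   # 0.2: the corner's influence wrapped around
```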

Making a torus works if a particular spot is mostly affected by its local neighborhood. This describes a lot of problems we find interesting. Many of them are in statistical mechanics, where we do a lot of problems about particles in grids that can do one of two things, depending on their neighborhood. But many mechanics problems work like this too. If we’re interested in how a satellite orbits the Earth, we can ignore that Saturn exists, except maybe as something it might photograph.

And just making a grid into a torus doesn’t solve every problem. This is obvious if you imagine making a torus that’s two rows and two columns linked together. There won’t be much interesting behavior there. Even a reasonably large grid offers problems. There might be structures larger than the torus is across, for example, worth study, and those will be missed. That we have a grid means that a shape is easier to represent if it’s horizontal or vertical. In a real continuous space there are no directions to be partial to.

There are topology differences too. A famous result shows that four colors are enough to color any map on the plane. On the torus some maps need as many as seven, and seven always suffice. Putting colors on things may seem like a trivial worry. But map colorings represent information about how stuff can be connected. And here’s a huge difference in these connections.

This all is about one aspect of a torus. Likely you came in wondering when I would get to talking about doughnut shapes, and the line about topology may have readied you to hear about coffee cups. The torus, like most any mathematical concept familiar enough that ordinary people know the word, connects to many ideas. Some of them have more than one hole. Some have surfaces that intersect themselves. Some extend into four or more dimensions. Some are even constructs that appear in phase space, describing ways that complicated physical systems can behave. These are all reflections of this shape idea that we can learn from thinking about game boards.


This and all of this year’s Little Mathematics A to Z essays should be at this link. And the A-to-Z essays for every year should be at this link.

My Little 2021 Mathematics A-to-Z: Addition


John Golden, who so far as I know doesn’t have an active blog, suggested this week’s topic. It pairs nicely with last week’s. I link to that in text, but if you would like to read all of this year’s Little Mathematics A to Z it should be at this link. And if you’d like to see all of my A-to-Z projects, please try this link. Thank you.

Addition

When I wrote about multiplication I came to the peculiar conclusion that it was the same as addition. This is true only in certain lights. When we study [abstract] algebra we look at things that look like arithmetic. The simplest useful thing that looks like arithmetic is a group. It has a set of elements, and a pairwise “group operation”. That group operation we call multiplication, if we don’t have a better name. We give it two elements and it gives us one. Under certain circumstances, this multiplication looks just like addition does.

But we have reason to think addition and multiplication aren’t the same. Where do we get addition?

We can make a meaningful addition by giving it something to interact with. By adding another operation. This turns the group into a ring. As it has two operations, it’s hard to resist calling one of them addition and the other multiplication. The new multiplication follows many of the rules the addition did. Adding two elements together gives you an element in the ring. So does multiplying. Addition is associative: a + (b + c) is the same thing as (a + b) + c . So is multiplication: a \times (b \times c) is the same thing as (a \times b) \times c .

And then the addition and the multiplication have to interact. If they didn’t, we’d just have a group with two operations. I don’t know anyone who’s found a good use for that. The way addition and multiplication interact we call distribution. This is represented by two rules, both of them depending on elements a, b, and c:

a\times(b + c) = a\times b + a\times c

(a + b)\times c = a\times c + b\times c

This is where we get something we have to call addition. It’s in having the two interacting group operations.
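
A small concrete case may help. The integers modulo 6, with addition and multiplication carried out and only the remainder after dividing by 6 kept, form a ring. A brute-force check in Python (my example; the argument above doesn’t depend on it):

```python
from itertools import product

N = 6
add = lambda a, b: (a + b) % N
mul = lambda a, b: (a * b) % N

for a, b, c in product(range(N), repeat=3):
    assert add(a, add(b, c)) == add(add(a, b), c)          # + associates
    assert mul(a, mul(b, c)) == mul(mul(a, b), c)          # x associates
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # distribution
    assert mul(add(a, b), c) == add(mul(a, c), mul(b, c))  # distribution
print("all checks passed")
```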

A problem which would have worried me at age eight: do we know we’re calling the correct operation “addition”? Yes, yes, names are arbitrary. But are we matching the thing we think we’re doing when we calculate 2 + 2 to addition and the thing for 2 x 2 to multiplication? How do we tell these two apart?

For all that they start the same, and resemble one another, there are differences. Addition has an identity, something that works like zero. a + 0 is always a , whatever a is. Multiplication … the multiplication we use every day has an identity, that is, 1. Are we required to have a multiplicative identity, something so that a \times 1 is always a ? That depends on what it said in the Introduction to Algebra textbook you learned on. If you want to be clear your ring does have a multiplicative identity you call it a “unit ring”. If you want to be clear you don’t care, I don’t know what to say. I’m told some people write that as “rng”, to hint that this identity is missing.

Addition always has an inverse. Whatever element a you pick, there is some -a so that -a + a is the additive identity. Multiplication? Even if we have a unit ring, there’s not always a reciprocal. The integers are a unit ring. But there are only two integers, 1 and -1, that have an integer multiplicative inverse, something you can multiply them by to get 1. If every nonzero element of your unit ring has a multiplicative inverse, what you have is called a division algebra. Rational numbers, for example, are a division algebra.
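
A sketch of that difference, in Python, brute-forcing which integers have integer reciprocals and checking one rational:

```python
from fractions import Fraction

# Among the integers, only 1 and -1 have integer multiplicative inverses.
units = [a for a in range(-10, 11)
         if any(a * b == 1 for b in range(-10, 11))]
print(units)         # [-1, 1]

# Among the rationals, every nonzero element has one.
a = Fraction(3, 7)
print(a * (1 / a))   # 1
```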

So for some rings, like the integers, there’s an obvious difference between addition and multiplication. But for the rational numbers? Can we tell the operations apart?

We can, through the additive identity, which please let me call 0. And the multiplicative identity, which please let me call 1. Is there a multiplicative inverse of 0? Suppose there is one; let me call it c , because I need some name. Then of all the things in the world, we know this:

0 \times c = 1

I can replace anything I like with something equal to it. So, for example, I can replace 0 with the sum of an element and its additive inverse. Like, (-a + a) for some element a . So then:

(-a + a) \times c = 1

And distribute this away!

-a\times c + a\times c = 1

I don’t know what number a \times c is, nor what its additive inverse -a\times c is. But I know their sum is zero. And so

0 = 1

This looks like trouble. But, all right, why not have the additive and the multiplicative identities be the same number? Mathematicians like to play with all kinds of weird things; why not this weirdness?

The why not is that you work out pretty fast that every element has to be equal to every other element. If you’re not sure how, consider the starting line of that little proof, but with an element b :

0 \times c \times b = 1 \times b

The left side has to be zero: zero times anything is zero, which is another thing distribution forces on us. The right side is b. So every element b would have to equal zero. There, finally, is a crack between addition and multiplication. Addition’s identity element, its zero, can’t have a multiplicative inverse. Multiplication’s identity element, its one, must have an additive inverse. We get addition from the thing we can’t un-multiply.

It may have struck you that if all we want is a ring with the lone element of 0 (or 1), then we can have addition and multiplication be indistinguishable again. And have the additive and multiplicative identities be the same thing. There’s nothing else for them to be. This is true, and we can. Unfortunately this ring doesn’t do much that’s interesting, except maybe prove some theorem we were working on isn’t always true. So we usually draw a box around it, acknowledge it once, and then exclude it from division algebras and fields and other things of interest. It’s much the same way we normally rule out 1 as a prime number. It’s an example that is too much bother to include given how unenlightening it is.

You can have groups and attach to them a multiplication and an addition and another binary operation. Those aren’t of such general interest that you study them much as an undergraduate.

And this is what we know of addition. It looks almost like a second multiplication. But it interacts just enough with multiplication to force the two to be distinguishable. From that we can create mathematics structures as interesting as arithmetic is.

How September 2021 Treated My Mathematics Blog


Better than it treated me! Which is a joke I used last month too. It’s been a rough while, but that’s all right; it’ll all turn around as soon as I buy one winning PowerBall lottery ticket. And since my custom, when I do play, is to buy two tickets at once, I look to be in very good shape as of Monday’s drawing. Thank you for your concern.

I posted seven things in September, including the much-delayed start of the Little Mathematics A-to-Z. Those postings drew 1,973 views altogether from 1,414 unique visitors. These numbers are far below the running averages for the twelve months running up to September. The mean was 2,580.6 views from 1,830.4 unique visitors per month. The median was 2,559 views from 1,801 unique visitors. So this implies a readership decline.

Per-posting, though, the numbers look better. I recorded 281.9 views per posting in September, from 202.0 unique visitors. (Again, this is total views, of everything, not just of September-dated essays.) The running mean was 273.7 views per posting from 194.0 unique visitors. The running median was 295.9 views per posting from 204.3 unique visitors. That’s all quite in line with things and suggests if I posted more, I would be read more. A fine theory, but how could it be implemented?

Bar chart showing two and a half years' worth of readership figures. After a fairly steep three-month decline both page views and unique readers rose slightly in August before drooping again in September.
I keep looking at that Insights tab and I never get any better at positioning myself.

31 likes were given to things in September, below the running mean of 51.6 and the running median of 47.5. It’s not much better per posting, though: 4.4 likes per posting in September, below the running mean of 5.2 per posting and median of 4.9 per posting. Comments are down a little, too, 10 given in the month compared to a mean of 18.0 and median of 15.5. That translates to 1.4 comments per posting, below the running mean of 1.9 per posting and running median of 1.6 per posting. So, yeah, if Mathematics WordPress isn’t dying it is successfully ejecting me from its body.


The things I posted in September ranked like this, in order of popularity:

Most popular altogether was How To Find A Logarithm Without Much Computing Power. That’s an essay which links to a string of essays that tell you just what it says on the tin.


WordPress estimates that I published 2,973 words in September, a modest but increasing 424.7 words per posting. My average essay so far this year has grown to 565 words. So far for 2021 I’ve posted 38,988 words. This is terse, for me. There have been years I did that in two months.

As of the start of October I’ve had 144,287 page views from 85,603 logged unique visitors, over the course of 1,651 posts. If you’d like to be a regular reader, please use the “Follow Nebusresearch” button at the upper right corner of this page. If you’d rather have essays sent to you by e-mail, use the button a little below that.

If you have an RSS reader you can use this feed for my essays. If you don’t have an RSS reader, I recommend it. They’re good things. You can get one from This Old Reader, for example, or set up one using NewsBlur. Or you can sign up for a free account at Dreamwidth or Livejournal. Use https://www.dreamwidth.org/feeds/ or https://www.livejournal.com/syn to add RSS feeds to your Reading or Friends page.

My Twitter account has gone feral and only posts announcements of essays. But you can interact with me as @nebusj@mathstodon.xyz, on the Mastodon network. Thanks for reading, in whatever way you’re doing it, and here’s hoping for a good October.

My Little 2021 Mathematics A-to-Z: Multiplication


I wanted to start the Little 2021 Mathematics A-to-Z with more ceremony. These glossary projects are fun and work in about equal measure. But an already hard year got much harder about a month and a half back, and it hasn’t been getting much better. I’m even considering cutting down the reduced A-to-Z project I am doing. But I also feel I need to get some structured work under way. And sometimes only ambition will overcome a diminished world. So I begin, and with luck, will keep posting weekly essays about mathematical terms.

Today’s was a term suggested by Iva Sallay, longtime blog friend and creator of the Find The Factors recreational mathematics puzzle. Also a frequent host of the Playful Math Education Blog Carnival, a project quite worth reading and a great hosting challenge too. And as often makes for a delightful A-to-Z topic, it’s about something so commonplace one forgets it can hold surprises.

Multiplication

A friend pondering mathematics said they know you learn addition first, but that multiplication somehow felt more fundamental. I supported their insight. We learn two plus two first. It’s two times two where we start seeing strange things.

Suppose for the moment we’re interested only in the integers. Zero multiplied by anything is zero. There’s nothing like that in addition. Consider even numbers. An even number times anything gives you an even number again. There’s no duplicating that in addition. But this trait isn’t even unique to even numbers. Multiples of three, or four, or 237 assimilate the integers by multiplication the same way. You can find an integer to add to 2 to get 5; you can’t find an integer to multiply by 2 to get 5. Or consider prime numbers. A prime can be made by essentially only one product, while there’s no integer you can make by only one, or only finitely many, different sums. New possibilities, and restrictions, happen in multiplication.

Whether this makes multiplication the foundation of mathematics, or at least arithmetic, is a judgement. It depends how basic your concepts must be, and what you decide is important. Mathematicians do have a field which studies “things that look like arithmetic”, though. We call this algebra. Or call it abstract algebra to clarify it’s not that stuff with the quadratic formula. And that starts with group theory. A group is made of two things. One is a collection of elements. The other is a thing to do with pairs of elements. Generically, we call that multiplication.

A possible multiplication has to follow a couple rules. It has to be a binary operation on your group’s set. That is, it matches two things in the set to something in the set. There has to be an identity, something that works like 1 does for multiplying numbers. It has to be associative. If you want to multiply three things together, you can start with whatever pair looks easier. Every element has to have an inverse, something you can multiply it by to get 1 as the product.

That’s all, and that’s not much. This description covers a lot of things. For example, there’s regular old multiplication, for the set of rational numbers (other than zero; I intend to talk about that later). For another, there’s rotations of a ball. Each axis you could turn the ball around on, and angle you could rotate it, is an element of the set of three-dimensional rotations. Multiplication we interpret as doing those rotations one after the other. There’s the multiplication of invertible square matrices, ones that have the same number of rows and columns.

If you’re reading a pop mathematics blog, you know of \imath , the “imaginary unit”. You know it because \imath^2 = -1 . A bit more multiplying of these and you find a nice tight cycle. This forms a group, with four discernible elements: 1, \imath, -1, \mbox{ and } -\imath and regular multiplication. It’s a nice example of a “cyclic group”. We can represent the whole thing as multiplying a single element together: \imath^0, \imath, \imath^2, \imath^3 . We can think of \imath^4 but that’s got the same value as \imath^0 . Or \imath^5 , which has the same value as \imath^1 . With a little ingenuity we can even think of what we might mean by, say, \imath^{-1} and realize it has to be the same quantity as \imath^3 . Or \imath^{-2} which has to equal \imath^2 . You see the cycle.
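
Python’s built-in complex numbers let you watch the cycle, if you like; a quick sketch:

```python
i = complex(0, 1)
for k in range(8):
    print(k, i ** k)      # cycles through 1, i, -1, -i and then repeats

print(i ** -1 == i ** 3)  # True: the inverse of i is the same element as i^3
```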

A cyclic group doesn’t have to have four elements. It needs to be generated by doing the multiplication over and over on one element, that’s all. It can have a single element, or two, or two hundred. Or infinitely many elements. Suppose we have a set built on the powers of an element that we’ll call e . This is a common name for “an element and we don’t care what it is”. It has nothing to do with the number called e, or any number. At least it doesn’t have to.

Please let me use the shorthand of e^2 to mean e times e , and e^3 to mean e^2 times e , and so on. Then we have a set that looks like, in part, \cdots e^{-3}, e^{-2}, e^{-1}, e^0, e^1, e^2, e^3, \cdots . They multiply together the way we might multiply x raised to powers. e^2 \times e^3 is e^5 , and e^4 \times e^{-4} is e^0 , and e^{-3} \times e^2 is e^{-1} and so on.

Those exponents suggest something familiar. In this infinite cyclic group e^j \times e^k is e^{j + k} , where j and k are integers. Do we even need to write the e? Why not just write the j and k in a normal-size typeface? Is there a difference between cyclic-group multiplication and regular old addition of integers?

Not an important one. There’s differences in how we write the symbols, and what we think they mean. There’s not a difference in the way they interact. Regular old addition, in this light, we can see as a multiplication.

Calling addition “multiplication” can be confusing. So we deal with that a few ways. One is to say that rather than multiplication what a group has is a group operation. This lets us avoid fooling people into thinking we mean to take this times that. It lacks a good shorthand word, the way we might say “a times b” or “a plus b”. But we can call it “the group operation”, and say “times” or “plus” as fits our sentence and our sentiment.

I’ve left unanswered that mention of multiplication on the rational-numbers-except-zero making a group. If you include zero in the set, though, you don’t have multiplication as a group operation. There’s no inverse to zero. It seems an oversight that ordinary multiplication isn’t, itself, a group multiplication. I hope to address that in the next A-to-Z essay, on Addition.


This, and my other essays for the Little 2021 Mathematics A-to-Z, should be at this link. And all my A-to-Z essays from every year should be at this link. Thanks for reading.

Do You Know a Friend Who Needs a Mathematician?


Recent events let me know I should make something explicit. I am interested in and looking for mathematical work. My particular skills are in numerical computing but anyone familiar with my writing knows my interest in education and communication. So I am not looking only for major projects. If you need someone to tutor you through the lesson on the directrix or the separatrix, I am game.

I am open also to computer programming work. My day job for the last decade and a half has got me terribly familiar with Asp.Net C#, SQL, Javascript, jQuery, and the OpenLayers GIS tools. Also I keep thinking to take a weekend and pick up Cobol, to put on the shelf beside my Fortran background.

Thank you for thinking of me.

The 148th Playful Math Education Carnival is posted


I apologize for missing its actual publication date, but better late than not at all. Math Book Magic, host of the Playful Math Education Blog Carnival, posted the 148th in the series, and it’s a good read. A healthy number of recreational mathematics puzzles, including some geometry puzzles I’ve been enjoying. As these essays are meant to do, this one gathers some recreational and some educational and some just fun mathematics.

Math In Nature is scheduled to host the next carnival. If you have any mathematics writing or videos or podcasts or such to share, or are aware of any that people might like, please let them know. And if you’d like to host a Playful Math Education Blog Carnival Denise Gaskins has several slots available over the next few months, including the chance to host the 150th of this series. It’s exhausting work, but it is satisfying work. Consider giving it a try.

Some fun with Latin Squares


I found a cute, playful little paper on arXiv.org. Fun with Latin Squares, by Michael Han, Tanya Khovanova, Ella Kim, Evin Liang, Miriam (Mira) Lubashev, Oleg Polin, Vaibhav Rastogi, Benjamin Taycher, Ada Tsui, and Cindy Wei, appears to be the result of a school research project. A Latin Square is an arrangement of numbers. Like, if you have the whole numbers from 1 to 5, you can make a five-row, five-column Latin square, with each number appearing once in each row and column. If you have the numbers from 1 to 10 you can make a ten-row, ten-column Latin square, again with each number appearing once per row and column. And so on.

So what the arXiv.org paper does is look at different types of Latin Squares, and whip up some new ones by imposing new rules. Latin Squares are one of those corners of mathematics I haven’t thought about much. But they do connect to other problems, such as sudoku, or the knight’s tour and similar problems of chess piece movement. So we get enlightenment in those from considering these. And from thinking how we might vary the rules about how to arrange numbers. It’s pleasant, fun exercise.
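
If you want to play along at home, the most basic construction, the cyclic one, takes only a couple of lines of Python (my example, not the paper’s):

```python
def cyclic_latin_square(n):
    # Row i is the sequence 1..n shifted left by i places.
    return [[(i + j) % n + 1 for j in range(n)] for i in range(n)]

def is_latin(square):
    n = len(square)
    target = set(range(1, n + 1))
    return (all(set(row) == target for row in square)
            and all(set(col) == target for col in zip(*square)))

square = cyclic_latin_square(5)
for row in square:
    print(row)
print(is_latin(square))   # True
```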

How to Impress Someone by Multiplying Certain Big Numbers in Your Head


Mental arithmetic is fun. It has some use, yes. It’s always nice when you’re doing work to have some idea what a reasonable answer looks like. But mostly it’s fun to be able to spot, oh, 24 times 16, that’s got to be a little under 400.

I ran across this post, by Math1089, with a neat trick for certain multiplications. It’s limited in scope. Most mental-arithmetic tricks are; they have certain problems they do well and you need to remember a grab bag that covers enough to be useful. Here, the case is multiplying two numbers that start the same way, and whose ends are complements. That is, the ends add together to 10. (Or to 100, or 1000, or some other power of ten.) So, for example, you could use this trick to multiply together 41 and 49, or 64 and 66. (Or, if you needed, to multiply 2038 by 2062.)

It won’t directly solve 41 times 39, though, nor 64 times 65. But you can hack it together. 64 times 65 is 64 times 66 — you have a trick for that — minus 64. 41 times 39 is tougher, but it’s 41 times 49 minus 41 times 10. 41 times 10 is easy to do. This is what I mean by learning a grab bag of tricks. You won’t outpace someone who has their calculator out and ready to go. But you might outpace someone who has to get their calculator out, and you’ll certainly impress them.
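
For the two-digit case the trick has a tidy algebraic core: the numbers are 10t + u and 10t + v with u + v = 10 , and the product works out to 100 t (t + 1) + uv . A sketch in Python:

```python
def trick(t, u, v):
    # (10t + u)(10t + v) = 100*t*(t+1) + u*v, whenever u + v = 10.
    assert u + v == 10
    return 100 * t * (t + 1) + u * v

print(trick(4, 1, 9), 41 * 49)   # 2009 2009
print(trick(6, 4, 6), 64 * 66)   # 4224 4224
```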

So it’s clever, and not hard to learn. If you feel like testing your high-school algebra prowess you can even work out why this trick works, and why it has the limits it does.

How August 2021 Treated My Mathematics Blog


Better than August 2021 treated me! I don’t wish to impose my woes on you, but the last month was one of the worst I’ve had. Besides various physical problems I also felt dreadfully burned out, which postponed my Little Mathematics A-to-Z yet again. I hope yet to get the sequence started, not to mention finished, although I want to get one more essay banked before I start publishing. If things go well, then, that’ll be this Wednesday; if it doesn’t, maybe next Wednesday.

Still, and despite everything, I was able to post seven things in August, a slow return to form. I am still trying to rebuild my energies. But my hope is to get up to about two posts a week, so for most months, eight to ten posts.

The postings I did do were received with this kind of readership:

Bar chart showing two and a half years' worth of readership figures. After a fairly steep three-month decline both page views and unique readers rose slightly in August.
I’m going to have such mixed feelings when that great big spike in October 2019 times out of these monthly recaps. I need to figure some way to get someone on Reddit or whoever to just casually mention me, once a week, until I get a book publishing deal. If you know how to arrange the details please leave a comment that I’ll answer by October at the latest.

So that’s a total of 2,136 page views for August. That’s up from July, though still below the twelve-month running mean of 2,572.6 views per month. It’s also below the median of 2,559 views per month. There were 1,465 unique visitors recorded. This is again below the running mean of 1,823.7 unique visitors, and the running median of 1,801 unique visitors.

There were 43 things liked in August, below the running mean of 53.4 and running median of 49.5. And there were a meager 10 comments received, below the mean of 18.7 and median of 18. I expect this will correct itself whenever I do get the Little Mathematics A-to-Z started; those always attract steady interest, and people writing back, even if it’s just to thank me for taking one of their topics as an essay.

Rated per-post, everything gets strikingly close to average. August came in at a mean 305.1 views per posting, compared to a twelve-month running mean of 257.2 and running median of 282.6. There were 209.3 unique visitors per posting, compared to a running mean of 182.7 and median of 197.0. There were 6.1 likes per posting, compared to a mean of 5.0 and median of 4.4. The only figure not above some per-post average was comments, which were 1.4 per posting. The mean comments per posting, from August 2020 through July 2021, was 1.9, and the median 1.4.


Here’s how August’s seven posts ranked in popularity, as in, number of page views for each post:

My most popular piece of all was a six-year-old pointer to Robert Austin’s diagram of the real number system and how the types of numbers relate to each other. I’m not sure what to make of how many of my most durable pieces just point to someone else’s work. The most popular thing that I had a hand in writing was a Reading the Comics post from December 2019 featuring The Far Side.


WordPress estimates that I published 2,440 words in August, a meager 348.6 words per post. I told you I was burned out. It estimates that for 2021 I’ve published a total of 36,015 words as of the start of September, an average of 581 words per posting.

As of the start of September I’ve had 142,313 page views, from 84,192 logged unique visitors. That over the course of 1,644 published posts. If you’d like to be a regular reader, please do. You can add the feed of my essays to whatever your RSS reader is. There are several ways you can get an RSS reader. You can use This Old Reader, for example, or set one up on NewsBlur. Or you can sign up for a free account at Dreamwidth or Livejournal. Use https://www.dreamwidth.org/feeds/ or https://www.livejournal.com/syn to add RSS feeds to your Reading or Friends page.

You also can get essays e-mailed right to you, at publication. Please use this option if you want me to be self-conscious about the typos and grammatical errors that I never find before publication however hard I try. You can do that by using the “Follow NebusResearch via Email” box to the right-center of the page. If you have a WordPress account, you can use “Follow NebusResearch” on the top right to add my essays to your Reader. And I am @nebusj@mathstodon.xyz, the mathematics-themed instance of the Mastodon network. Thanks for being here, and here’s hoping for a happy September.

The 197th Carnival of Mathematics is published


And some happy news for those who like miscellaneous collections of mathematics stuff. Jeremy Kun has published the 197th edition of the Carnival of Mathematics. This differs from the Playful Math Education Blog Carnival in not having a specific focus on educational or recreational mathematics. That’s not to say there isn’t fun little stuff mentioned here. For example, Kun leads with a bit of trivia about 197 as a number. But there’s a stronger focus on more serious mathematics work, such as studying space-filling curves, or a neat puzzle about how to fold (roughly) equilateral triangles without measuring them.

How to Make Circles Into Circles on a Different Shape


Elkement, who’s been a longtime supporter of my blogging here, has been thinking about stereographic projection recently. This comes from playing with complex-valued numbers. It’s hard to start thinking about something like “what is 1 \div \left(2 + 3\imath \right) ?” and not get into the projection. The projection itself Elkement describes a bit in this post, from early in August. It’s one of the ways to try to match the points on a sphere to the points on the entire, infinite plane. One common way to imagine it, and to draw it, is to imagine setting the sphere on the plane. Imagine sitting on the top of the sphere. Draw the line connecting the top of the sphere with whatever point you find interesting on the sphere, and then extend that line until it intersects the plane. Match your point on the sphere with that point on the plane. You can use this to trace out shapes on the sphere and find their matching shapes on the plane.
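
As a sketch of that picture in coordinates (my parametrization: a unit sphere resting on the plane, centered at (0, 0, 1), so its top point is (0, 0, 2)):

```python
import math

def project(x, y, z):
    # Follow the line from the sphere's top (0, 0, 2) through (x, y, z)
    # until it meets the plane z = 0.
    s = 2.0 / (2.0 - z)
    return (s * x, s * y)

# Points on the sphere, by the polar angle t measured down from the top.
for t in [math.pi, math.pi / 2, 0.1]:
    x, y, z = math.sin(t), 0.0, 1.0 + math.cos(t)
    print(project(x, y, z))
# The bottom point maps to the origin (up to rounding); points near the
# top land far away, heading off toward infinity.
```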

This distorts the shapes, as you’d expect. Well, the sphere has a finite area, the plane an infinite one. We can’t possibly preserve the areas of shapes in this transformation. But this transformation does something amazing that offends students when they first encounter it. It preserves circles: a circle on the original sphere becomes a circle on the plane, and vice-versa. I know, you want it to turn something into ellipses, at least. Elkement takes a turn at thinking out reasons why this should be reasonable. There are abundant proofs of this, but it helps the intuition to see different ways to make the argument. And to have rough proofs that outline the argument you mean to make. We need rigorous proofs, yes, but a good picture that makes the case convincing helps a good deal.

Why Can’t You Just Integrate e^{-x^2}, Anyway?


John Quintanilla, at the Mean Green Math blog, started a set of essays about numerical integration. And the good questions, too, like, why do numerical integration? In a post from last week he looks at one integral that you encounter in freshman calculus, and learn you can’t do except numerically. (With a special exception.) That one is the integral behind the error function; the curve being integrated is the famous bell curve. The problem is finding the area underneath the curve described by

y = e^{-x^2}

What we mean by that is the area between some left boundary, x = a , and some right boundary, x = b , that’s above the x-axis, and below that curve. And there’s just no finding a, you know, answer. Something that looks like (to make up an answer) the area is (b - a)^2 e^{-(b - a)^2} or something normal like that. The one interesting exception is that you can find the area if the left bound is -\infty and the right bound +\infty . That’s done by some clever reasoning and changes of variables, which is why we see that and only that in freshman calculus. (Oh, and as a side effect we can get the integral from 0 to infinity: the curve is symmetric around zero, so that area has to be half the total.)
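
What we can always do is approximate the area numerically. A sketch with Simpson’s rule, in Python (my implementation, not Quintanilla’s): the tails die off so fast that integrating from -10 to 10 captures essentially everything, and the total comes out to \sqrt{\pi} .

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule with n subintervals; n must be even.
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    total += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return total * h / 3

f = lambda x: math.exp(-x * x)
print(simpson(f, -10, 10))   # about 1.7724538509...
print(math.sqrt(math.pi))    # the exact answer for the full real line
```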

Anyway, Quintanilla includes a nice bit along the way, that I don’t remember from my freshman calculus, pointing out why we can’t come up with a nice simple formula like that. It’s a loose argument, showing what would happen if we suppose there is a way to integrate this using normal functions and showing we get a contradiction. A proper proof is much harder and fussier, but this is likely enough to convince someone who understands a bit of calculus and a bit of Taylor series.

How to Make a Straight Line in Different Circumstances


I no longer remember how I came to be aware of this paper. No matter. Here is Raúl Rojas’s The straight line, the catenary, the brachistochrone, the circle, and Fermat. It is about a set of optimization problems, in this case, attempts to find the shortest path something can follow.

The talk of the catenary and the brachistochrone gives away that this is a calculus paper. The catenary and the brachistochrone are some of the oldest problems in calculus as we know it. The catenary is the problem of what shape a weighted chain takes under gravity. The brachistochrone is the problem of what path lets something slide from one point to another, under gravity, in the least time. Famously, it can be solved by thinking of the path a beam of light traces out moving through regions with different indexes of refraction. (As in, through films of glass or water or such.) Straight lines and circles we’ve heard of from other places.

The paper relies on calculus so if you’re not comfortable with that, well, skim over the lines with \int symbols. Rojas discusses the ways that we can treat all these different shapes as solutions of related, very similar problems. And there’s some talk about calculating approximate solutions. There is special delight in this as these are problems that can be done by an analog computer. You can build a tool to do some of these calculations. And I do mean “you”; the approach is to build a box, like, the sort of thing you can do by cutting up plastic sheets and gluing them together and setting toothpicks or wires on them. Then dip the model into a soap solution. Lift it out slowly and take a good picture of the soapy surface.

This is not as quick, or as precise, as fiddling with a Matlab or Octave or Mathematica simulation. But it can be much more fun.

Turns out that Giuseppe Peano tried to create an international language out of Latin


This is a slight piece, but I just learned that Giuseppe Peano spearheaded the creation of Latino sine flexione, an attempted auxiliary language. The name gives away the plan: “Latin without inflections”. That is, without the nouns and verbs changing form to reflect the role they play in a sentence. I know very little about languages, so I admit I don’t understand quite how this is supposed to work. I had the impression that what an Indo-European language skips in inflections it makes up for with prepositions, and Peano was trying to do without either. But he (and his associates) had something, apparently; he was able to publish the fifth edition of his Formulario Mathematico in the Latino sine flexione.

Giuseppe Peano is a name any mathematician would know and respect highly. He’s one of the logicians and set theorists of the late 19th and early 20th century who straightened out so much of the logical foundations of arithmetic. His “Peano axioms” are still the standard axiomatization of the natural numbers, that is, the logic that underlies what we think of as “four”. They also build in the logic of mathematical induction, a slick way of proving something true for infinitely many possible cases at once. You can see why the logic of this requires delicate treatment. And he was an inveterate thinker about notation. Wikipedia credits his 1889 treatise The Principles Of Arithmetic, Presented By A New Method as making pervasive the basic set theory symbols, including the notations for “is an element of”, “is a subset of”, “intersection of sets”, and “union of sets”. Florian Cajori’s History of Mathematical Notations also reveals to me that the step in analysis, when we stop writing “function f evaluated on element x” as “f(x)”, and move instead to “fx”, shows his influence. (He apparently felt the parentheses served no purpose. I … see his point, for f(x) or even f(g(x)) but feel that’s unsympathetic to someone dealing with f(a + sin(t)). I imagine he would agree those parentheses have a point.)

This is all a tiny thing, and anyone reading it should remember that the reality is far more complicated, and ambiguous, and confusing than I present. But it’s a reminder that mathematicians have always held outside fascinations. And that great mathematicians were also part of the intellectual currents of the pre-Great-War time, that sought utopia through things like universal languages and calendar reform and similar kinds of work.

Math Book Magic is hosting the next Playful Math Education Blog Carnival


Another mere little piece today. I’d wanted folks to know that Kelly Darke’s Math Book Magic is the next host for the Playful Math Education Blog Carnival. And would likely be able to use any nominations you had for blog posts, YouTube videos, books, games, or other activities that share what’s delightful about mathematics. The Playful Math Education Blog Carnival is a fun roundup to read, and to write — I’ve been able to host it a few times myself — and I hope anyone reading this will consider supporting it too.

How July 2021 Treated My Mathematics Blog


I didn’t quite abandon my mathematics blog in July, but it would be hard to prove otherwise. I published only five pieces, which I think is my lowest monthly production on record. One of them was the monthly statistics recap. One pointed to a neat thing I found. Three were pointers to earlier essays I’ve written here. It’s economical stuff, but it draws in fewer readers, a thing I’m conditioned to think of as bad. How bad?

I received 1,891 page views in July, way below the running mean of 2,545.0 for the twelve months ending with June 2021. This is also well below the running median of 2,559. There were 1,324 unique visitors in July, way below the running mean of 1,797.1 and median of 1,801. The number of likes barely dropped from June’s totals, with 34 things given a like here. That’s well down from the mean of 56.8 per month and the 55.5 per month median. And comments were dire, only four received compared to a mean of 20.5 and median of 19.

Bar chart of two and a half year's worth of monthly readership and unique-visitor counts. There was a great peak in October 2019, but more recently two months of decline after several months of steadily high reader counts.
Now I’m a bit curious if there is a WordPress Statistic that tells you how many posts you had per month. It’d be nice, I guess, to see just how strong a correlation there is between “posting stuff” and “getting read”.

That’s the kind of collapse which makes it look like the blog’s just dried up and floated away. But these readership figures are still a good bit above most of 2020, for example, or all but one month of 2018. I’m feeling the effects of the hedonic treadmill here.

And, now — if we consider that per posting? Suddenly my laconic nature starts to seem like genius. There were an average 378.2 views per posting in July. Not all July posts, but the number of views divided by the number of posts given. That’s crushing the twelve-month mean of 232.9 views per posting, and twelve-month median of 235.0 views per posting. There were 264.8 unique visitors per posting. The twelve-month running mean was 165.2 unique visitors per posting, and the median 166.3.

Even the likes and comments look better this way. There were 6.8 likes for each time I posted, above the mean of 4.7 and median of 4.3. There were still only 0.8 comments per posting, below the mean of 1.9 and median of 1.6, but at least the numbers look closer together.


The order of popularity of July’s essays, most to least, was:

The most popular essay of all was No, You Can’t Say What 6/2(1+2) Equals. From this I infer some segment of Twitter got worked up about an ambiguous arithmetic expression again.


WordPress estimates that I published 3,103 words in July. This is an average of merely 517.2 words per posting, a figure that will increase as soon as I get this year’s A-to-Z under way. My average words per posting for 2021 declined to 611 thanks to all this. I am at 33,575 words for the year so far.

As of the start of August I’ve had 140,178 page views from 82,728 logged unique visitors. If you’d like to be a regular reader, I’d like to be regularly read. Heck, I’d like to be read any old way people manage. You can get all my essays by adding the RSS feed to your reader. If you lack an RSS reader? There are several good options. You can use This Old Reader, for example, or set one up on NewsBlur. Or you can sign up for a free account at Dreamwidth or Livejournal. Use https://www.dreamwidth.org/feeds/ or https://www.livejournal.com/syn to add RSS feeds to your Reading or Friends page.

If you’d like to get new posts without typos corrected, you can sign up for e-mail delivery. Use the “Follow NebusResearch via Email” box to the right-center of the page here. Or if you have a WordPress account, you can use “Follow NebusResearch” on the top right to add this page to your Reader. And I am @nebusj@mathstodon.xyz, the mathematics-themed instance of the Mastodon network. Thanks for reading, however you find most comfortable.

I’m already looking for topics for the Little 2021 Mathematics A-to-Z


I hope to begin publishing this year’s Little Mathematics A-to-Z next week, with a rousing start in the letter “M”. I’m also hoping to work several weeks ahead of deadline for a change. To that end, I already need more letters! While I have a couple topics picked out for M-A-T-H, I’ll need topics for the next quartet. If you have a mathematics (or mathematics-adjacent) term starting with E, M, A, or T that I might write a roughly thousand-word essay about? Please, leave a comment and I’ll think about it.

If you do, please leave a mention of any project (mathematics or otherwise) you’d like people to know more about. And several folks were kind enough to make suggestions for M-A-T-H, several weeks ago. I’m still keeping those as possibilities for M, A, and T’s later appearances.

I’m open to re-examining a topic I’ve written about in the past, if I think I have something fresh to say about it. Past A-to-Z’s have been about these subjects:

E.


M.


A.


T.


The Little 2021 Mathematics A-to-Z should appear here, when I do start publishing. This and all past A-to-Z essays should be at this link. Thank you for reading.

How to Tell if a Point Is Inside a Shape


As I continue to approach readiness for the Little Mathematics A-to-Z, let me share another piece you might have missed. Back in 2016 somehow two A-to-Z’s wasn’t enough for me. I also did a string of “Theorem Thursdays”, trying to explain some interesting piece of mathematics. The Jordan Curve Theorem is one of them.

The theorem, at heart, seems too simple to even be mathematics. It says that a simple closed curve on the plane divides the plane into an inside and an outside. There are similar versions for surfaces in three-dimensional spaces. Or volumes in four-dimensional spaces and so on. Proving the theorem turns out to be more complicated than I could fit into an essay. But proving a simplified version, where the curve is a polygon? That’s doable. Easy, even.

And as a sideline you get an easy way to test whether a point is inside a shape. It’s obvious, yeah, if a point is inside a square. But inside a complicated shape, some labyrinthine shape? Then it’s not obvious, and it’s nice to have an easy test.
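
The test the theorem licenses is the even-odd rule: cast a ray from your point off to the right and count how many polygon edges it crosses. Odd means inside, even means outside. A minimal sketch, assuming Python and a polygon given as a list of vertices:

```python
def inside(point, polygon):
    x, y = point
    crossings = 0
    n = len(polygon)
    for k in range(n):
        (x1, y1), (x2, y2) = polygon[k], polygon[(k + 1) % n]
        if (y1 > y) != (y2 > y):    # this edge spans the ray's height...
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:         # ...and crosses right of the point
                crossings += 1
    return crossings % 2 == 1       # odd crossings: inside

# An L-shaped polygon, since rectangles are too easy.
ell = [(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)]
print(inside((0.5, 3.0), ell))   # True: in the L's vertical arm
print(inside((3.0, 3.0), ell))   # False: in the L's notch
```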

This is even mathematics with practical application. A few months ago in my day job I needed an automated way to place a label inside a potentially complicated polygon. The midpoint of the polygon’s vertices wouldn’t do. The shapes could be L- or U- shaped, so that the midpoint wasn’t inside, or was too close to the edge of another shape. Starting from the midpoint, though, and finding the largest part of the polygon near to it? That’s doable, and that’s the Jordan Curve Theorem coming to help me.

How to Make a Transcendental Number


I am, believe it or not, working ahead of deadline on the Little Mathematics A-to-Z for this year. I feel so happy about that. But that’s eating up time to write fresh stuff here. So please let me share some older material, this from my prolific year 2016.

Transcendental numbers, which I describe at this link, are nearly all the real numbers. We’re able to prove that even though we don’t actually know very many of them. We know some numbers that we’re interested in, like π and e , are. And that this has surprising consequences. π being a transcendental number means, for example, the Ancient Greek geometric challenge to square the circle using straightedge and compass is impossible.

However, it’s not hard to create a number that you know is transcendental. Here’s how to do it, with an easy step-by-step guide. If you want to create this and declare it’s named after you, enjoy! Nobody but you will ever care about this number, I’m afraid. Its only interesting traits will be that it’s transcendental and that you crafted it. Still, isn’t that nice anyway? I think it’s nice anyway.
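
If you’d rather not click through: the classic recipe — and, take this as my assumption, I believe the one that essay follows — is Liouville’s. Put a 1 in the decimal places numbered 1!, 2!, 3!, 4!, and so on, and a 0 everywhere else. The digits thin out so fast that, by Liouville’s approximation theorem, the result can’t be algebraic. A little Octave to print the start of it:

# Build the first 24 decimal digits of Liouville's constant:
# a 1 in decimal place k! for each k, and a 0 everywhere else.
digits = repmat('0', 1, 24);
for k = 1:4            # 4! = 24 is the last factorial that fits in 24 digits
  digits(factorial(k)) = '1';
endfor
disp(['0.' digits])    # prints 0.110001000000000000000001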

How To Find A Logarithm Without Much Computing Power


I don’t yet have actual words committed to text editor for this year’s little A-to-Z. Soon, though. Rather than leave things completely silent around here, I’d like to re-share an old sequence about something which delighted me. A long while ago I read Edmund Callis Berkeley’s Giant Brains: Or Machines That Think. It’s a book from 1949 about numerical computing. And it explained just how to really calculate logarithms.

Anyone who knows calculus knows, in principle, how to calculate a logarithm. I mean as in how to get a numerical approximation to whatever the log of 25 is. If you didn’t have a calculator that did logarithms, but you could reliably multiply and add numbers? There’s a polynomial, one of a class known as Taylor Series, that — if you add together infinitely many terms — gives the exact value of a logarithm. If you only add a finite number of terms together, you get an approximation.
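
For the record, the usual series here is the one for the natural logarithm, expanded around 1: log(1+x) = x - x^2/2 + x^3/3 - ..., good only for -1 < x ≤ 1. Here’s a sketch of the truncated sum in Octave, with names of my own choosing:

function approx = log_by_series (y, nterms)
  # Partial sum of log(1+x) = x - x^2/2 + x^3/3 - ..., with x = y - 1.
  # Converges only for 0 < y <= 2, and slowly near the ends of that range.
  x = y - 1;
  n = 1:nterms;
  approx = sum((-1).^(n+1) .* x.^n ./ n);
endfunction

# log_by_series(1.25, 10) gives 0.22314...; Octave's log(1.25) agrees.

To get the log of 25 you’d first have to shrink 25 into that narrow window, which is just the sort of trick the series on its own can’t do.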

That suffices, in principle. In practice, you might have to calculate so many terms and add so many things together you forget why you cared what the log of 25 was. What you want is how to calculate them swiftly. Ideally, with as few calculations as possible. So here’s a set of articles I wrote, based on Berkeley’s book, about how to do that.

Machines That Think About Logarithms sets out the question. It includes some talk about the kinds of logarithms and why we use each of them.

Machines That Do Something About Logarithms sets out principles. These are all things that are generically true about logarithms, including about calculating logarithms.

Machines That Give You Logarithms explains how to use those tools. And lays out how to get the base-ten logarithm for most numbers that you would like with a tiny bit of computing work. I showed off an example of getting the logarithm of 47.2286 using only three divisions, four additions, and a little bit of looking up stuff.

Without Machines That Think About Logarithms closes it out. One catch with the algorithm described is that you need to work out some logarithms ahead of time and have them on hand, ready to look up. They’re not ones that you care about particularly for any problem, but they make it easier to find the logarithm you do want. This essay talks about which logarithms to calculate, in order to get the most accurate results for the logarithm you want, using the least custom work possible.

And that’s the series! With that, in principle, you have a good foundation in case you need to reinvent numerical computing.

Is this mathematics thing ambiguous or confusing?


There is an excellent chance it is! Mathematicians sometimes assert the object of their study is a universal truth, independent of all human culture. It may be. But the expression of that interest depends on the humans expressing it. And as with all human activities it picks up quirks. Patterns that don’t seem to make sense. Or that seem to conflict with other patterns. It’s not two days ago I most recently saw someone cross that 0 times anything is 0, but 0! is 1.

Mathematicians are not all of one mind. They notice different things that seem important and want to focus on that. They use ways that make sense to their culture. When they create new notation, or new definitions, they use the old ones to guide them. When a topic’s interesting enough for many people to notice, they bring many trails of notation to describe it. Usually a consensus emerges, that there are some notations that work well to describe these concepts, and the others fall away. But it’s difficult to get complete consistency. Particularly when there are several major fields that don’t need to interact much, but do have some overlap.

Christian Lawson-Perfect has started something that might be helpful for understanding this. WhyStartAt.xyz is to be a collection of “ambiguous, inconsistent, or just plain unpleasant conventions in mathematical notation”. There’s four major categories already: inconsistencies, ambiguities, unpleasantness, and conflicting definitions. And there’s a set of references useful for anyone curious why something is a convention. (Nobody knows why we use ‘m’ for the slope in the slope-intercept or point-slope equations describing a line. Sometimes a convention is arbitrary.) It’s already great reading, though, not just for this line from our friend Thomas Hobbes.

How June 2021 Treated My Mathematics Blog


It’s the time of month when I like to look at what my popularity is like. How many readers I had, what they were reading, that sort of thing. And I’m even getting to it earlier than usual in the month of July. Credit a hot Sunday when I can’t think of other things to do instead.

According to WordPress there were 2,507 page views here in June 2021. That’s down from the last couple months. But it is above the twelve-month running mean, leading up to June, which was 2,445.9 views per month. The twelve-month running median was 2,516.5. This all implies that June was quite in line with my average month from June 2020 through May 2021. It just looks like a decline is all.

There were 1,753 unique visitors recorded by WordPress in June. That again fits between the running averages. There were a mean 1,728.4 unique visitors per month between June 2020 and May 2021. There was a median of 1,800 unique visitors each month over that same range.

Bar chart showing two and a half years' worth of readership figures. There's an enormous spike in October 2018. After several increasing months of readership recently, June 2021 saw a modest drop in views and unique visitors.
Hey, remember when I tracked views per visitor? I don’t remember why I stopped doing that. The figures were volatile. But either way had a happy interpretation. A low number of views per visitor implied a lot of people found something interesting. A high number of views per visitor implied people were doing archive-binges and reading everything. I suppose I could start seriously tracking it now but then I’d have to add a column to my spreadsheet.

The number of likes collapsed, a mere 36 clicks of the like button in June compared to a mean of 57.3 and a median of 55.5. Given how many of my posts were some variation of “I’m struggling to find the energy to write”? I can’t blame folks for not finding the energy to like. Comments were up, though, surely in response to my appeal for Mathematics A-to-Z topics. If you’ve thought of any, please let me know; I’m eager to hear them.

I had nine essays posted in June, including my readership review post. These were, in the order most-to-least popular (as measured by page views):

In June I posted 7,852 words, my most verbose month since October 2020. That comes to an average of 981.5 words per posting in June. But the majority of them were in a single post, the exploration of MLX, which shows how the mean can be a misleading measure. This does bring my words-per-posting mean for the year up to 622, an increase of 70 words per posting. I need to not do that again.

As of the start of July I’ve had 1,631 posts here, which gathered 138,286 total views from 81,404 logged unique visitors.

If you’d like to be a regular reader, this is a great time for it, as I’ve almost worked my way through my obsession with checksum routines of 1980s computer magazines! And there’s the A-to-Z starting soon. Each year I do a glossary project, writing essays about mathematics terms from across the dictionary, many based on reader suggestions. All 168 essays from past years are at this link. This year’s should join that set, too.

If you’d like to be a regular reader, thank you! You can get all these essays by their RSS feed, and never appear in my statistics. It’s easy to get an RSS reader if you need one. The Old Reader is an option, for example, as is NewsBlur. Or you can sign up for a free account at Dreamwidth or Livejournal. Use https://www.dreamwidth.org/feeds/ or https://www.livejournal.com/syn to add RSS feeds to your Reading or Friends page.

If you’d like to get new posts without typos corrected, you can sign up for e-mail delivery. Or if you have a WordPress account, you can use “Follow NebusResearch” to add this page to your Reader. And I am @nebusj@mathstodon.xyz, the mathematics-themed instance of the Mastodon network. Thanks for reading, however you find most comfortable.

How did Compute!’s Automatic Proofreader Work?


After that work on MLX, the programs that Compute! and Compute!’s Gazette used to enter machine language programs, I figured I was done. There was the Automatic Proofreader, used to catch errors in typing in BASIC programs. But that program was written in the machine language of the 6502 line of microchips. I’ve never been much on machine language and assumed I couldn’t figure out how it worked. And then on a lark I tried and saw. And it turned out to be easy.

With qualifiers, of course. Compute! and Compute!’s Gazette had two generations of Automatic Proofreader for Commodore computers. The magazines also had Automatic Proofreaders for the other eight-bit computers that they covered. I trust that those worked the same way, but — with one exception — don’t know. I haven’t deciphered most of those other proofreaders.

Cover of the October 1983 Compute!'s Gazette, offering as cover features the games Oil Tycoon and Aardvark Attack, and promising articles on speeding up the Vic-20 and understanding sound on the 64.
The October 1983 Compute!’s Gazette, with the debut of the Automatic Proofreader. It was an era for wordy magazine covers. Also I have no idea who did the art for that Oil Tycoon article but I love how emblematic it was of 1980s video game cover art.

Let me introduce how it was used, though. Compute! and Compute!’s Gazette offered computer programs to type in. Many of them were in BASIC, which uses many familiar words of English as instructions. But you can still make typos entering commands, and this causes bugs or crashes in programs. The Automatic Proofreader, for the Commodore (and the Atari), put in a little extra step after you typed in a line of code. It calculated a checksum. It showed that on-screen after every line you entered. And you could check whether that matched the checksum the magazine printed. So the listing in the magazine would be something like:

100 POKE 56,50:CLR:DIM IN$,I,J,A,B,A$,B$,A(7),N$ :rem 34
110 C4=48:C6=16:C7=7:Z2=2:Z4=254:Z5=255:Z6=256:Z7=127 :rem 238
120 FA=PEEK(45)+Z6*PEEK(46): BS=PEEK(55)+Z6*PEEK(56): H$="0123456789ABCDEF" :rem 118

Sample text entry, in this case for The New MLX. It shows about eight lines of BASIC instructions, each line ending in a colon, the command 'rem' and a number between 0 and 255.
The start of The New MLX, introduced in the December 1985 Compute!’s Gazette, and using the original Automatic Proofreader checksum. That program received lavish attention two weeks ago.

You would type in all those lines up to the :rem part. ‘rem’ here stands for ‘Remark’ and means the rest of the line is a comment to the programmer, not the computer. So they’d do no harm if you did enter them. But why type text you didn’t need?

So after typing, say, 100 POKE 56,50:CLR:DIM IN$,I,J,A,B,A$,B$,A(7),N$ you’d hit return and with luck get the number 34 up on screen. The Automatic Proofreader did not force you to re-type the line. You were on your honor to do that. (Nor were you forced to type lines in order. If you wished to type line 100, then 200, then 300, then 190, then 250, then 330, you could. The checksum would calculate the same.) And it didn’t only work for entering programs, these commands starting with line numbers. It would return a result for any command you entered. But since you wouldn’t know what the checksum should be for a freeform command, that didn’t tell you much.

Magazine printout of a Commodore 64 screen showing the Automatic Proofreader in use. There are several lines of BASIC program instructions and in the upper-left corner of the screen the number ':247' printed in cover-reversed format.
I’m delighted there’s a picture of what the Automatic Proofreader looked like in practice, because this saves me having to type in the Proofreader into an emulator and taking a screen shot of that. Also, props to Compute!’s Gazette for putting a curved cut around this screen image.

The first-generation Automatic Proofreader, which is what I’m talking about here, returned a number between 0 and 255. And it was a simple checksum. It could not detect transposed characters: the checksum for PIRNT was the same as PRINT and PRITN. And, it turns out, errors could offset: the checksum for PEEK(46) would be the same as that for PEEK(55).

And there was one bit of deliberate insensitivity built in. Spaces would not be counted. The checksum for FA=PEEK(45)+Z6*PEEK(46) would be the same as FA = PEEK( 45 ) + Z6 * PEEK( 46 ). So you could organize text in whatever way was most convenient.

Given this, and given the example of the first MLX, you may have a suspicion how the Automatic Proofreader calculated things. So did I, and it turned out to be right. The checksum for the first-generation Automatic Proofreader, at least for the Commodore 64 and the Vic-20, was a simple sum. Take the line that’s been entered. Ignore spaces. But otherwise, take the ASCII code value for each character, and add that up, modulo 256. That is, if the sum is (say) 300, subtract 256 from it, leaving 44.

I’m fibbing a little when I say it’s the ASCII code values. The Commodore computers used a variation on ASCII, called PETSCII (Commodore’s first line of computers was the PET). For ordinary text the differences between ASCII and PETSCII don’t matter. The differences come into play for various characters Commodores had. These would be symbols like the suits of cards, or little circles, or checkerboard patterns. Symbols that, these days, we’d see as emojis, or at least part of an extended character set.

But translating all those symbols is … tedious, but not hard. If you want to do a simulated Automatic Proofreader in Octave, it’s almost no code at all. It turns out Octave and Matlab need no special command to get the ASCII code equivalent of text. So here’s a working simulation:

function retval = automatic_proofreader (oneLine)
  # The Proofreader ignored spaces, so strip them out first.
  trimmedLine = strrep(oneLine, " ", "");
  # (strrep works in Matlab too.)
  # Arithmetic on a character string works on its character codes,
  # so this adds up the line's ASCII values, modulo 256.
  retval = mod(sum(trimmedLine), 256);
endfunction

To call it type in a line of text:

automatic_proofreader("100 POKE 56,50:CLR:DIM IN$,I,J,A,B,A$,B$,A(7),N$")
The first page of the article introducing the Automatic Proofreader. The headline reads 'The Automatic Poofreader', with a robotic arm writing in a ^r. The subheading is 'Banish Typos Forever!'
Very optimistic subhead here considering the limits they acknowledged in the article about what the Automatic Proofreader could detect.

Capitalization matters! The ASCII code for capital-P is different from that for lowercase-p. Spaces won’t matter, though. More exotic characters, such as the color-setting commands, are trouble; let’s not deal with those right now. Also, you can enclose your line in single quotes, in case for example you want the checksum of a line that has double quotes in it. Let’s agree that lines with both single and double quotes don’t exist.

I understand the way Commodore 64’s work well enough that I can explain the Automatic Proofreader’s code. I plan to do that soon. I don’t know how the Atari version of the Automatic Proofreader worked, but since it had the same weaknesses I assume it used the same algorithm.

There is a first-generation Automatic Proofreader with a difference, though, and I’ll come to that.

History of Philosophy podcast has another episode on Nicholas of Cusa


A couple weeks ago I mentioned that Peter Adamson’s The History of Philosophy Without Any Gaps had an episode about Nicholas of Cusa. Last week the podcast had another one, a half-hour interview with Paul Richard Blum about him and his work.

As with the previous podcast, there’s almost no mention of Nicholas of Cusa’s mathematics work. On the other hand, if you learn the tiniest possible bit about Nicholas of Cusa, you learn everything there is to know about Nicholas of Cusa. (I believe this joke would absolutely kill with the right audience, and will hear nothing otherwise.) The St Andrews Maths History site has a biography focusing particularly on his mathematical work.

I’m sorry not to be able to offer more about his mathematical work. If someone knows of a mathematics-history podcast with a similar goal, please leave a comment. I’d love to know and to share with other people.

I’m looking for topics for the Little 2021 Mathematics A-to-Z


I’d like to say I’m ready to start this year’s Mathematics A-to-Z. I’m not sure I am. But if I wait until I’m sure, I’ve learned, I wait too long. As mentioned, this year I’m doing an abbreviated version of my glossary project. Rather than every letter in the alphabet, I intend to write one essay each for the letters in “Mathematics A-to-Z”. The dashes won’t be included.

While I have some thoughts in mind for topics, I’d love to know what my kind readers would like to see me discuss. I’m hoping to write about one essay, of around a thousand words, per week. One for each letter. The topic should be anything mathematics-related, although I tend to take a broad view of mathematics-related. (I’m also open to biographical sketches.) To suggest something, please, say so in a comment. If you do, please also let me know about any projects you have — blogs, YouTube channels, real-world projects — that I should mention at the top of that essay.

To keep things manageable, I’m looking for the first couple letters — MATH — first. But if you have thoughts for later in the alphabet please share them. I can keep track of that. I am happy to revisit a subject I think I have more to write about, too. Past essays for these letters that I’ve written include:

M.


A.


T.


H.


The reason I wrote a second Tiling essay is that I forgot I’d already written one in 2018. I hope not to make that same mistake again. But I am open to repeating a topic, or a variation of a topic, on purpose.

Here’s some Matlab/Octave code for your MLX simulator


I am embarrassed that after writing 72,650 words about MLX 2.0 for last week, I left something out. Specifically, I didn’t include code for your own simulation of the checksum routine on a more modern platform. Here’s a function that carries out the calculations of the Commodore 64/128 or Apple II versions of MLX 2.0. It’s written in Octave, the open-source Matlab-like numerical-computation package. If you can read this, though, you can translate it to whatever language you find convenient.

function [retval] = mlxII (oneline)
   z2 = 2;
   z4 = 254;
   z5 = 255;
   z6 = 256; 
   z7 = 127;
 
   address = oneline(1);
   entries = oneline(2:9);
   checksum = oneline(10);
   
   ck = 0;
   ck = floor(address/z6);
   ck = address-z4*ck + z5*(ck>z7)*(-1);
   ck = ck + z5*(ck>z5)*(-1);
#
#	This looks like but is not the sum mod 255.  
#	The 8-bit computers did not have a mod function and 
#	used this subtraction instead.
#	
   for i=1:length(entries),
     ck = ck*z2 + z5*(ck>z7)*(-1) + entries(i);
     ck = ck + z5*(ck>z5)*(-1);
   endfor
#
#	The checksum *can* be 255 (0xFF), but not 0 (0x00)!  
#	Using the mod function could make zeroes appear
#       where 255's should.
#
   retval = (ck == checksum);
endfunction

This reproduces the code as it was actually coded. Here’s a version that relies on Octave or Matlab’s ability to use modulo operations:

function [retval] = mlxIIslick (oneline)
   factors = 2.^(7:-1:0);

   address = oneline(1);
   entries = oneline(2:9);
   checksum = oneline(10);
   
   ck = 0;
   ck = mod(address - 254*floor(address/256), 255);
   ck = ck + sum(entries.*factors);
   ck = mod(ck, 255);
   ck = ck + 255*(ck == 0);

   retval = (ck == checksum);
endfunction
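
To try either function, give it the ten numbers of a line — the address, the eight data values, and the printed checksum — converted from hexadecimal to decimal. For example, a line published as 0801:0B 08 00 00 9E 32 30 36 EC (the first line of the game Q Bird, which gets discussed below) becomes:

mlxII([2049 11 8 0 0 158 50 48 54 236])        # returns 1: the checksum matches
mlxIIslick([2049 11 8 0 0 158 50 48 54 236])   # returns 1 by the slick route too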

Enjoy! Please don’t ask when I’ll have the Automatic Proofreader solved.

How did Compute!’s and Compute!’s Gazette’s New MLX Work?


A couple months ago I worked out a bit of personal curiosity. This was about how MLX worked. MLX was a program used in Compute! and Compute!’s Gazette magazine in the 1980s, so that people entering machine-language programs could avoid errors. There were a lot of fine programs, some of them quite powerful, free for the typing-in. The catch is this involved typing in a long string of numbers, and if any were wrong, the program wouldn’t work.

So MLX, introduced in late 1983, was a program to make typing in programs better. You would enter in a string of six numbers — six computer instructions or data — and a seventh, checksum, number. Back in January I worked out finally what the checksum was. It turned out to be simple. Take the memory location of the first of your set of six instructions, modulo 256. Add to it each of the six instructions, modulo 256. That’s the checksum. If it doesn’t match the typed-in checksum, there’s an error.
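
That original checksum is almost nothing to simulate. Here’s a sketch in Octave, patterned on the MLX 2.0 simulator I shared elsewhere on this page; the function name and the line-as-vector layout are my own choices:

function retval = mlxI (oneline)
  # Original-MLX checksum: the line's starting address plus each of
  # its six data values, all summed modulo 256.
  address  = oneline(1);
  entries  = oneline(2:7);
  checksum = oneline(8);
  retval = (mod(address + sum(entries), 256) == checksum);
endfunction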

There’s weaknesses to this, though. It’s vulnerable to transposition errors: if you were supposed to type in 169 002 and put in 002 169 instead, it wouldn’t be caught. It’s also vulnerable to casual typos: 141 178 gives the same checksum as 142 177.

Which is all why the original MLX lasted only two years.

What Was The New MLX?

The New MLX, also called MLX 2.0, appeared first in the June 1985 Compute!. This in a version for the Apple II. Six months later a version for the Commodore 64 got published, again in Compute!, though it ran in Compute!’s Gazette too. Compute! was for all the home computers of the era; Compute!’s Gazette specialized in the Commodore computers. I would have sworn that MLX got adapted for the Atari eight-bit home computers too, but can’t find evidence it ever was. By 1986 Compute! was phasing out its type-in programs and didn’t run much for Atari anymore.

Cover of the December 1986 Compute!'s Gazette, which includes small pictures to represent several features. One is a neat watercolor picture for 'Q Bird', showing a cheerful little blue bird resting on the head of a nervous-looking snake.
Programming challenge: a video game with the aesthetics of 1980s video-game-art, such as Q Bird’s look there.

The new MLX made a bunch of changes. Some were internal, about how to store a program being entered. One was dramatic in appearance. In the original MLX people typed in decimal numbers, like 32 or 169. In the new, they would enter hexadecimal digits, like 20 or A9. And a string of eight numbers on a line, rather than six. This promised to save our poor fingers. Where before we needed to type in 21 digits to enter six instructions, now we needed 18 digits to enter eight instructions. So the same program would take about two-thirds the number of keystrokes. A plausible line of code would look something like:

0801:0B 08 00 00 9E 32 30 36 EC
0809:31 00 00 00 A9 00 8D 20 3A
0811:D0 20 CF 14 20 1B 08 4C 96
0819:C7 0B A9 93 20 D2 FF A9 34

(This from the first lines for “Q-Bird”, a game published in the December 1986 Compute!’s Gazette.)

And, most important, there was a new checksum.

What was the checksum formula?

I had a Commodore 64, so I always knew MLX from its Commodore version. The key parts of the checksum code appear in it in lines 350 through 390. Let me copy out the key code, spaced a bit out for easier reading:

360 A = INT(AD/Z6):
    GOSUB 350:
    A = AD - A*Z6:
    GOSUB 350:
    PRINT":";
370 CK = INT(AD/Z6):
    CK = AD - Z4*CK + Z5*(CK>Z7):
    GOTO 390
380 CK = CK*Z2 + Z5*(CK>Z7) + A
390 CK = CK + Z5*(CK>Z5):
    RETURN

Z2, Z4, Z5, Z6, and Z7 are constants, defined at the start of the program. Z4 equals 254, Z5 equals 255, Z6 equals 256, and Z7, as you’d expect, is 127. Z2, meanwhile, was a simple 2.

About a dozen lines of Commodore 64 BASIC, including the lines that represent the checksum calculations for MLX 2.0.
The bits at the end of each line, :rem 240 and the like, are not part of the working code. They’re instead the Automatic Proofreader checksum. Automatic Proofreader was a different program, one written in machine language that you used to make sure you typed in BASIC programs correctly. After entering a line of BASIC, the computed checksum appeared in the corner of the window, and if it was the :rem number, you had typed the line in correctly. Now you might wonder how you knew you typed in the machine language code for the Automatic Proofreader correctly, if you need the Automatic Proofreader to enter MLX correctly. To this I offer LOOK A BIG DISTRACTING THING! (Runs away.)

A bit of Commodore BASIC here. INT means to take the largest whole number not larger than whatever’s inside. AD is the address of the start of the line being entered. CK is the checksum. A is one number, one machine language instruction, being put in. GOSUB, “go to subroutine”, means to jump to another line and execute commands from there until reaching RETURN (that’s the command). The program then continues from the next instruction after the GOSUB. In this code, line 350 converts a number from decimal to hexadecimal and prints out the hexadecimal version. This bit about adding Z5 * (CK>Z7) looks peculiar.

Commodore BASIC evaluates logical expressions like CK > Z7 into a bit pattern. That pattern looks like a number. We can use it like an integer. Many programming languages do something like that and it can allow for clever but cryptic programming tricks. An expression that’s false evaluates as 0; an expression that’s true evaluates as -1. So, CK + Z5*(CK>Z5) is an efficient little filter. If CK is smaller than Z5, it’s left untouched. If CK is larger than Z5, then subtract Z5 from CK. This keeps CK from being more than 255, exactly as we’d wanted.

But you also notice: this code makes no sense.

Like, starting the checksum with something derived from the address makes sense. Adding to that numbers based on the instructions makes sense. But the last instruction of line 370 is a jump straight to line 390. Line 380, where any of the actual instructions are put into the checksum, never gets called. Also, there are eight instructions per line. Why does this code seem to handle only one?

And this was a bear to work out. One friend insisted I consider the possibility that MLX was buggy and nobody had found the defect. I could not accept that, not for a program that was so central to so much programming for so long. Nor could I, considering that it plainly worked: make almost any entry error and the checksum would not match.

Where’s the rest of the checksum formula?

This is what took time! I had to go through the code and find what other lines call lines 360 through 390. There’s a hundred lines of code in the Commodore version of MLX, which isn’t that much. They jump around a lot, though. By my tally 68 of these 100 lines jump to, or can jump to, something besides the next line of code. I don’t know how that compares to modern programming languages, but it’s still dizzying. For a while I thought it might be a net saving in time to write something that would draw a directed graph of the program’s execution flow. It might still be worth doing that.

The checksum formula gets called by two pieces of code. One of them is the code when the program gets entered. MLX calculates a checksum and verifies whether it matches the ninth number entered. The other role is in printing out already-entered data. There, the checksum doesn’t have a role, apart from making the on-screen report look like the magazine listing.

Here’s the code that calls the checksum when you’re entering code:

440 POKE 198,0:
    GOSUB 360:
    IF F THEN PRINT IN$: PRINT" ";
    [ many lines about entering your data here ]
560 FOR I=1 TO 25 STEP 3:
    B$ = MID$(IN$, I):
    GOSUB 320:
    IF I<25 THEN GOSUB 380: A(I/3)=A
570 NEXT:
    IF A<>CK THEN GOSUB 1060:
    PRINT "ERROR: REENTER LINE ":
    F = 1:
    GOTO 440
580 GOSUB 1080:
    [ several more lines setting up a new line of data to enter ]

Line 320 started the routine that turned a hexadecimal number, such as 7F, into decimal, such as 127. It returns this number as the variable named A. IN$ was the input text, part of the program you enter. This should be 27 characters long. A(I/3) was an element in an array, the string of eight instructions for that entry. Yes, you could use the same name for an array and for a single, unrelated, number. Yes, this was confusing.

But here’s the logic. Line 440 starts work on your entry. It calculates the part of the checksum that comes from the location in memory that data’s entered in. Line 560 does several bits of work. It takes the entered instructions and converts the strings into numbers. Then it takes each of those instruction numbers and adds its contribution to the checksum. Line 570 compares whether the entered checksum matches the computed checksum. If it does match, good. If it doesn’t match, then go back and re-do the entry.

The code for displaying a line of your machine language program is shorter:

630 GOSUB 360:
    B = BS + AD - SA:
    FOR I = B TO B+7:
       A = PEEK(I):
       GOSUB 350:
       GOSUB 380:
       PRINT S$;
640 NEXT:
    PRINT "";       
    A = CK:
    GOSUB 350:
    PRINT

The bit about PEEK is looking into the buffer, which holds the entered instructions, and reading what’s there. The GOSUB 350 takes the number ‘A’ and prints out its hexadecimal representation. GOSUB 360 calculates the part of the checksum that’s based on the memory location. The GOSUB 380 contributes the part based on every instruction. S$ is a space. It’s used to keep all the numbers from running up against each other.

So what is the checksum formula?

The checksum comes in two parts. The first part is based on the address at the start of the line. Let me call that the number AD . The second is based on the entry, the eight instructions following the address. Let me call them d_1 through d_8 . So this is easiest described in two parts.

The base of the checksum, which I’ll call ck_{0} , is:

ck_{0} = AD - 254 \cdot \left(floor(AD \div 256)\right) \\  \mbox { [ subtract 255 if this is 256 or greater ] }

For example, suppose the address is 49152 (in hexadecimal, C000), which was popular for Commodore 64 programming. Then ck_{0} would be 129. If the address is 2049 (in hexadecimal, 0801), another popular location, ck_{0} would be 17.

Generally, the initial ck_{0} increases by 1 as the memory address for the start of a line increases. If you entered a line that started at memory address 49153 (hexadecimal C001) for some reason, that ck_{0} would be 130. A line which started at address 49154 (hexadecimal C002) would have ck_{0} start at 131. This progression continues until ck_{0} would reach 256. Then that greater-than filter at the end of the expression intrudes. A line starting at memory address 49278 (C07E) has ck_{0} of 255, and one starting at memory address 49279 (C07F) has ck_{0} of 1. I see reason behind this choice.
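
If you want to play with that progression, a one-line Octave function reproduces it. The mod-and-shift-by-one dance is my way of implementing the subtract-255-if-this-is-256-or-greater rule, so the result lands in the range 1 through 255 and never hits 0:

ck0 = @(ad) mod(ad - 254*floor(ad/256) - 1, 255) + 1;
ck0(49152)   # 129
ck0(2049)    # 17
ck0(49279)   # 1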

That’s the starting point. Now to use the actual data, the eight pieces d_1 through d_8 that are the actual instructions. The easiest way for me to describe this is to do it as a loop, using ck_{0} to calculate ck_{1} , and ck_{1} to define ck_{2} , and so on.

ck_{j} = 2 \cdot ck_{j - 1} \\  \mbox{ [ subtract 255 if this is 256 or greater ] } \\  \quad + d_{j} \\  \mbox{ [ subtract 255 if this is 256 or greater ] } \\  \mbox{ for } j = 1 \ldots 8

That is, for each piece of data in turn, double the existing checksum and add the next data to it. If this sum is 256 or larger, subtract 255 from it. The working sum never gets larger than 512, thanks to that subtract-255-rule after the doubling. And then again that subtract-255-rule after adding d_j. Repeat through the eighth piece of data. That last calculated checksum, ck_{8} , is the checksum for the entry. If ck_{8} does match the entered checksum, go on to the next entry. If ck_{8} does not match the entered checksum, give a warning and go back and re-do the entry.

Why was MLX written like that?

There are mysterious bits to this checksum formula. First is where it came from. It’s not, as far as I can tell, a standard error-checking routine, or if it is it’s presented in a form I don’t recognize. But I know only small pieces of information theory, and it might be that this is equivalent to a trick everybody knows.

The formula is, at heart, “double your working sum and add the next instruction, and repeat”. At the end, take the sum modulo 255 so that the checksum is no more than two hexadecimal digits. Almost. In studying the program I spent a lot of time on a nearly-functionally-equivalent code that used modulo operations. I’m confident that if Apple II and Commodore BASIC had modulo functions, then MLX would have used them.

But those eight-bit BASICs did not. Instead the programs tested whether the working checksum had gotten larger than 255, and if it had, then subtracted 255 from it. This is a little bit different. It is possible for a checksum to be 255 (hexadecimal FF). This even happened. In the June 1985 Compute!, introducing the new MLX for the Apple II, we have this entry as part of the word processor Speedscript 3.0 that anyone could type in:

0848: 20 A9 00 8D 53 1E A0 00 FF

What we cannot have is a checksum of 0. (Unless a program began at memory location 0, and had instructions of nothing but 0. This would not happen. The Commodore 64, and the Apple II, used those low-address memory locations for system work. No program could use them.) Were the formulas written with modulo operations, we’d see 00 where we should see FF.

The start of the code for Apple SpeedScript 3.0, showing a couple dozen lines of machine language code.
So this program, which was a legitimate and useful and working word processor, was about 5,699 bytes long. This article is about 31,000 characters (and the characters are longer than a byte back then was), so, that’s the kind of compact writing they were capable of back then.

Doubling the working sum and then setting it to be in a valid range — from 1 to 255 — is easy enough. I don’t know how the designer settled on doubling, but have hypotheses. It’s a good scheme for catching transposition errors, entering 20 FF D2 where one means to enter 20 D2 FF.

The initial ck_{0} seems strange. The equivalent step for the original MLX was the address on which the entry started, modulo 256. Why the change?

My hypothesis is this change was to make it harder to start typing in the wrong entry. The code someone typed in would be long columns of numbers, for many pages. The text wasn’t backed by alternating bands of color, or periodic breaks, or anything else that made it harder for the eye to skip one or more lines of machine language code.

In the original MLX, skipping one line, or even a couple lines, can’t go undetected. The original MLX entered six pieces of data at a time. If your eye skips a line, the wrong data will mismatch the checksum by 6, or by 12, or by 18 — by 6 times the number of lines you miss. To have the checksum not catch this error, you have to skip 128 lines, and that’s not going to happen. That’s about one and a quarter columns of text and the eye just doesn’t make that mistake. Skimming down a couple lines, yes. Moving to the next column, yes. Next column plus 37 lines? No.

An entire page of lines of hexadecimal code, three columns of 83 lines each with nine sets of two-hexadecimal-digit numbers to enter. Plus the four-digit hexadecimal representation of the memory address for the line. It's a lot of data to enter.
So anyway this is why every kid who was really into their Commodore 64 has a repetitive strain injury today. Page of machine language instructions for SpeedCalc, a spreadsheet program, just like every 13-year-old kid needed.

In the new MLX, one enters eight instructions of code at a time. So skipping a line increases the checksum by 8 times the number of lines skipped. If the initial checksum were the line’s starting address modulo 256, then we’d only need to skip 32 lines to get the same initial checksum. Thirty-two lines is a bit much to skip, but it’s well under half a column. That’s not too far. And the eye could see 0968 where it means to read 0868. That’s a plausible enough error, and one that an address-modulo-256 checksum would be helpless against.

So the more complicated, and outright weird, formula that MLX 2.0 uses betters this. Skipping 32 lines — entering the line for 0968 instead of 0868 — increases the base checksum by 2. Combined with the subtract-255 rule, you won’t get a duplicate of the checksum for, in most cases, 127 lines. Nobody is going to make that error.

So this explains the components. Why is the Commodore 64 version of MLX such a tangle of spaghetti code?

Here I have fewer answers. Part must be that Commodore BASIC was prone to creating messes. For example, it did not really have functions, smaller blocks of code with their own, independent, sets of variables. These would let, say, numbers convert from hexadecimal to decimal without interrupting the main flow of the program. Instead you had to jump, either by GOTO or GOSUB, to another part of the program. The Commodore or Apple II BASIC subroutine has to use the same variable names as the main part of the program, so, pick your variables wisely! Or do a bunch of reassigning values before and after the subroutine’s called.

Excerpt from two columns of the BASIC code for the Commodore 128 version of MLX. The first column includes several user-defined functions. The second column uses them as part of calculating the checksum.
And for completeness here’s excerpts from the Commodore 128 version of MLX. The checksum is calculated from lines 310 through 330. The reference to FNHB(AD) calls back to the rare user-defined function. On line 130 the DEF FN commands declare functions named HB, LB, and AD. The two-character codes before the line numbers, such as the SQ before the line 300, were for the new Automatic Proofreader, which did a better job catching common typing errors than the one using :rem (numbers) seen earlier.

To be precise, Commodore BASIC did let one define some functions. This by using the DEF FN command. It could take one number as the input, and return one number as output. The whole definition of the function couldn’t be more than 80 characters long. It couldn’t have a loop. Given these constraints, you can see why user-defined functions went all but unused.

The Commodore version jumps around a lot. Of its 100 lines of code, 68 jump or can jump to somewhere else. The Apple II version has 52 lines of code, 28 of which jump or can jump to another line. That’s just over 50 percent of the lines. I’m not sure how much of this reflects Apple II’s BASIC being better than Commodore’s. Commodore 64 BASIC we can charitably describe as underdeveloped. The Commodore 128 version of MLX is a bit shorter than the 64’s (90 lines of code). I haven’t analyzed it to see how much it jumps around. (But it does have some user-defined functions.)

Not quite a dozen lines of Apple II BASIC, including the lines that represent the checksum calculations for MLX 2.0.
The Apple II version of MLX just trusted you to type everything in right and good luck there. The checksum calculation — lines 560 and 570 here — is placed near the end of the program listing (it ends on line 610), rather than in the early-center.

The most mysterious element, to me, is the defining of some constants like Z2, which is 2, or Z5, which is 255. The Apple version of this doesn’t use these constants. It uses 2 or 255 or such in the checksum calculation. I can rationalize replacing 254 with Z4, or 255 with Z5, or 127 with Z7. The Commodore 64 allowed only 80 characters in a command line. So these values might save only a couple characters, but if they’re needed characters, good. Z2, though, only makes the line longer.

I would have guessed that this reflected experiments. That is, trying out whether one should double the existing sum and add a new number, or triple, or quadruple, or even some more complicated rule. But the Apple II version appeared first, and has the number 2 hard-coded in. This might reflect that Tim Victor, author of the Apple II version, preferred to clean up such details while Ottis R Cowper, writing the Commodore version, did not. Lacking better evidence, I have to credit that to style.

Is this checksum any good?

Whether something is “good” depends on what it is supposed to do. The New MLX, or MLX 2.0, was supposed to make it possible to type in long strings of machine-language code while avoiding errors. So it’s good if it protects against those errors without being burdensome.

It’s a light burden. The person using this types in 18 keystrokes per line. This carries eight machine-language instructions plus one checksum number. So only one-ninth of the keystrokes are overhead, things to check that other work is right. That’s not bad. And it’s better than the original version of MLX, where up to 21 keystrokes gave six instructions. And one-seventh of the keystrokes were the checksum overhead.

The checksum quite effectively guards against entering instructions on a wrong line. To get the same checksum that (say) line 0811 would have you need to jump to line 0C09. In print, that’s another column over and a third of the way down the page. It’s a hard mistake to make.

Entering a wrong number in the instructions — say, typing in 22 where one means 20 — gets caught. The difference gets multiplied by some whole power of two in the checksum. Which power depends on what number’s entered wrong. If the eighth instruction is entered wrong, the checksum is off by that error. If the seventh instruction is wrong, the checksum is off by two times that error. If the sixth instruction is wrong, the checksum is off by four times that error. And so on, so that if the first instruction is wrong, the checksum is off by 128 times that error. And these errors are taken not-quite-modulo 255.

The only way to enter a single number wrong without the checksum catching it is to type something 255 higher or lower than the correct number. And MLX confines you to entering a two-hexadecimal-digit number, that is, a number from 0 to 255. The only mistake it’s possible to make is to enter 00 where you mean FF, or FF where you mean 00.
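
You can check that with the Octave simulator from earlier on this page. Mis-key any single entry by anything other than 255 and the verdict flips:

mlxII([2049 11 8 0 0 158 50 48 54 236])   # the correct line: returns 1
mlxII([2049 11 8 0 0 158 50 50 54 236])   # hex 30 mis-keyed as 32: returns 0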

What about transpositions? Here, the new MLX checksum shines. Doubling the sum so far and adding a new term to it makes transpositions very likely to be caught. Not all of them, though. A transposition of the data at position number j and at position number k will go unnoticed only when d_j and d_k happen to make true

\left(2^j - 2^k\right)\cdot\left(d_j - d_k\right) = 0 \mbox{ mod } 255

This doesn’t happen much. It needs d_j and d_k to be 255 apart. Or for \left(2^j - 2^k\right) to be a divisor of 255 and d_j - d_k to be another divisor. I’ll discuss when that happens in the next section.

In practice, this is a great simple checksum formula. It isn’t hard to calculate, it catches most of the likely data-entry mistakes, and it doesn’t require much extra data entry to work.

What flaws did the checksum have?

The biggest flaw the MLX 2.0 checksum scheme has is that it’s helpless to distinguish FF, the number 255, from 00, the number 0. It’s so vulnerable to this that a warning got attached to the MLX listing in every issue of the magazines:

Because of the checksum formula used, MLX won’t notice if you accidentally type FF in place of 00, and vice versa. And there’s a very slim chance that you could garble a line and still end up with a combination of characters that adds up to the proper checksum. However, these mistakes should not occur if you take reasonable care while entering data.

So when can a transposition go wrong? Well, any time you swap a 00 and an FF on a line, however far apart they are. But also if you swap the elements in position j and k, if 2^j - 2^k is a divisor of 255 and d_j - d_k works with you, modulo 255.

For a transposition of adjacent instructions to go wrong — say, the third and the fourth numbers in a line — you need the third and fourth numbers to be 255 apart. That is, entering 00 FF where you mean FF 00 will go undetected. But that’s the only possible case for adjacent instructions.

A transposition past one space — say, swapping the third and the fifth numbers in a line — needs the two to be 85, 170, or 255 away. So, if you were supposed to enter (in hexadecimal) EE A9 44 and you instead entered 44 A9 EE, it would go undetected. That’s the only way a one-space transposition can happen. MLX will catch entering EE A9 45 as 45 A9 EE.
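
The Octave simulator shows this off, too. Here’s a made-up line of my own starting at address 0801 whose first three entries are EE A9 44, padded out with zeroes; its checksum works out to 123 (hex 7B):

mlxII([2049 238 169 68 0 0 0 0 0 123])   # correct order: returns 1
mlxII([2049 68 169 238 0 0 0 0 0 123])   # EE and 44 transposed: still returns 1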

A transposition past two spaces — say, swapping the first and the fifth numbers — will always be caught unless the numbers are 255 apart, that is, a 00 and an FF. A transposition past three spaces — like, swapping the first and the sixth numbers — is vulnerable again. Then if the first and sixth numbers are off by 17 (or a multiple of 17) the swap will go unnoticed. A transposition across four spaces will always be caught unless it’s 00 for FF. A transposition across five spaces — like, swapping the second and eighth numbers — has to also have the two numbers be 85 or 170 or 255 apart to sneak through. And a transposition across six spaces — this has to be swapping the first and last elements in the line — again will be caught unless it’s 00 for FF.

Front cover of the June 1985 issue of Compute!, with the feature article being Apple Speedscript, a 'powerful word processor' inside. The art is a watercolor picture of a man in Apple T-shirt riding a bicycle. Behind him is a Commodore 128 floating in midair, and in front of him is a hand holding a flip-book animation.
So if you weren’t there in the 80s? This is pretty much what it was like. Well-toned men with regrettable moustaches pedaling their bikes while eight-bit computers exploded out of the void behind them and giants played with flip books in front of them.

Listing all the possible exceptions like this makes it sound dire. It’s not. The most likely transposition someone is going to make is swapping the order of two elements. That’s caught unless one of the numbers is FF and the other 00. If the transposition swaps non-neighboring numbers there’s a handful of new cases that might slip through. But you can estimate how often two numbers separated by one or three or five spaces are also different by 85 or 34 or another dangerous combination. (That estimate would suppose that every number from 0 to 255 is equally likely. They’re not, though, because popular machine language instruction codes such as A9 or 20 will be over-represented. So will references to important parts of computer memory such as, on the Commodore, FFD2.)

You will forgive me for not listing all the possible cases where competing typos in entering numbers will cancel out. I don’t want to figure them out either. I will go along with the magazines’ own assessment that there’s a “very slim chance” one could garble the line and get something that passes, though. After all, there are 18,446,744,073,709,551,615 conceivable lines of code one might type in, and only 255 possible checksums. Some garbled lines must match the correct checksum.

Could the checksum have been better?

The checksum could have been different. This is a trivial conclusion. “Better”? That demands thought. A good error-detection scheme needs to catch errors that are common or that are particularly dangerous. It should add as little overhead as possible.

The MLX checksum as it is catches many of the most common errors. A single entry mis-keyed, for example, except for the case of swapping 00 and FF. Or transposing one number for the one next to it. It even catches most transpositions with spaces between the transposed numbers. It catches almost all cases where one enters the entirely wrong line. And it does this for only two more keystrokes per eight pieces of data entered. That’s doing well.

The obvious gap is the inability to distinguish 00 from FF. There’s a cure for that, of course. Count the number of 00’s — or the number of FF’s — in a line, and include that as part of the checksum. It wouldn’t be particularly hard to enter (going back to the Q-Bird example)

0801:0B 08 00 00 9E 32 30 36 EC 2
0809:31 00 00 00 A9 00 8D 20 3A 4
0811:D0 20 CF 14 20 1B 08 4C 96 0
0819:C7 0B A9 93 20 D2 FF A9 34 0

(Or if you prefer, to have the extra checksums be 0 0 0 1.)

This adds to the overhead, yes, one more keystroke in what is already a good bit of typing. And one may ask whether you’re likely to ever touch 00 when you mean FF. The keys aren’t near one another. Then you learn that MLX soon got a patch which made keying much easier. They did this by making the keys in the rows under 7, 8, 9, and 0 type in digits. And the mapping used (on the Commodore 64) put the key to enter F right next to the key to enter 0.

The page of boilerplate text explaining MLX after it became a part of nearly every issue. In the rightmost column a chart explains how the program translates keys so that, for example, U, I, and O are read as the numbers 4, 5, and 6, to make a hexadecimal keypad for faster entry.
The last important revision of MLX made a data-entry keypad out of, for the Commodore 64, some of the letters on the keyboard. For the Commodore 128, it made a data-entry keypad out of … the keypad, but fitting in the hexadecimal numbers A, B, C, D, E, and F took some thought. But the 64 version still managed to put F and 0 next to each other, making it possible to enter FF where you meant 00 or vice-versa.

If you get ambitious, you might attempt even cleverer schemes. Suppose you want to catch those off-by-85 or off-by-17 differences that let transpositions slip through. Why not, say, copy the last bit of each of your eight data, and use those to assemble a new checksum number? So, for example, in line 0801 up there the last bit of each number was 1-0-0-0-0-0-0-0, which is boring, but gives us 128, hexadecimal 80, as a second checksum. Line 0809 has last bits 1-0-0-0-1-0-1-0, or 138 (hex 8A). And so on; so we could have:

0801:0B 08 00 00 9E 32 30 36 EC 2 80
0809:31 00 00 00 A9 00 8D 20 3A 4 8A
0811:D0 20 CF 14 20 1B 08 4C 96 0 24
0819:C7 0B A9 93 20 D2 FF A9 34 0 F3
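
Both of these proposed extra digits are cheap to compute. A sketch in Octave, using line 0801’s data:

entries = [11 8 0 0 158 50 48 54];              # line 0801's data, in decimal
zero_count = sum(entries == 0)                  # gives 2, the count of 00's
sampled = sum(mod(entries, 2) .* 2.^(7:-1:0))   # gives 128, hex 80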

Now, though? We’ve got five keystrokes of overhead to sixteen keystrokes of data. Getting a bit bloated. It could be cleaned up a little; the single-digit count of 00’s (or FF’s) is redundant to the two-digit number formed from the cross-section I did there.

And if we were working in a modern programming language we could reduce the MLX checksum and this sampled-digit checksum to a single number. Use the bitwise exclusive-or of the two numbers as the new, ‘mixed’ checksum. Exclusive-or the sampled-digit with the mixed checksum and you get back the classic MLX checksum. You get two checksums in the space of one. In the program you’d build the sampled-digit checksum, and exclusive-or it with the mixed checksum, and get back what should be the MLX checksum. Or take the mixed checksum and exclusive-or it with the MLX checksum, and you get the sampled-digit checksum.
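
Octave spares us the 1980s problem here: its bitxor function does the mixing and unmixing. Using line 0811’s numbers from the table above:

mlx_ck    = 150;                    # hex 96, line 0811's MLX checksum
sample_ck =  36;                    # hex 24, its sampled-digit checksum
mixed = bitxor(mlx_ck, sample_ck)   # 178, hex B2: the one number to print
bitxor(mixed, sample_ck)            # recovers 150, the MLX checksum
bitxor(mixed, mlx_ck)               # recovers 36, the sampled digits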

This almost magic move has two problems. This sampled digit checksum could catch transpositions that are off by 85 or 17. It won’t catch transpositions off by 170 or by 34, though, just as deadly. It will catch transpositions off by odd multiples of 17, at least. You would catch transpositions off by 85 or by 34 if you sampled the seventh digit, at least. Or if you build a sample based on the fifth or the third digit. But then you won’t catch transpositions off by 85 or by 17. You can add new sampled checksums. This threatens us again with putting in too many check digits for actual data entry.

The other problem is worse: Commodore 64 BASIC did not have a bitwise exclusive-or command. I was shocked, and I was more shocked to learn that Applesoft BASIC also lacked an exclusive-or. The Commodore 128 had exclusive-or, at least. But given that lack, and the inability to add an exclusive-or function that wouldn’t be infuriating? I can’t blame anyone for not trying.

So there is my verdict. There are some obvious enough ways that MLX’s checksum might have been able to catch more errors. But, given the constraints of the computers it was running on? A more sensitive error check likely would not have been available. Not without demanding much more typing. And, as another practical matter, demanding the program listings in the magazine be smaller and harder to read. The New MLX did, overall, a quite good job catching errors without requiring too much extra typing. We’ll probably never see its like again.

In Which I Feel A Little Picked On


This is not a proper Reading the Comics post, since there’s nothing mathematical about this. But it does reflect a project I’ve been letting linger for months and that I intend to finish before starting the abbreviated Mathematics A-to-Z for this year.

Panel labelled Monday-Friday. A man sitting in an easy chair says, 'I'll get to it this weekend.' Panel labelled Weekend. The man sitting in the easy chair says, 'I need to relax. I'll do it next week.'
Jeff Stahler’s Moderately Confused for the 12th of June, 2021. Essays in which I discuss Moderately Confused, usually for its mathematical content, are at this link.

In the meanwhile. I have a person dear to me who’s learning college algebra. For no reason clear to me this put me in mind of last year’s essay about Extraneous Solutions. These are fun and infuriating friends. They’re created when you follow the rules about how you can rewrite a mathematical expression without changing its value. And yet sometimes you do these rewritings correctly and get a would-be solution that isn’t actually one. So I’d shared some thoughts about why they appear, and what tedious work keeps them from showing up.

Iva Sallay teaches you how to host the Playful Math Education Blog Carnival


Iva Sallay, creator of the Find The Factors recreational mathematics puzzle and a kind friend to my blog, posted Yes, YOU Can Host a Playful Math Education Blog Carnival. It explains in quite good form how to join in Denise Gaskins’s roaming blog event. The carnival tries to gather educational or recreational or fun or just delightful mathematics links.

Hosting the blog carnival is a great experience I recommend for mathematics bloggers at least once. I seem to be up to hosting it about once a year, most recently in September 2020. Most important in putting one together is looking at your mathematics reading with different eyes. Sallay, though, goes into specifics about what to look for, and how to find that.

If you’d like to host a carnival you can sign up now for the June slot, blog #147, or for most of the rest of the year.

History of Philosophy podcast has episode on Nicholas of Cusa


I continue to share things I’ve heard, rather than created. Peter Adamson’s podcast The History Of Philosophy Without Any Gaps this week had an episode about Nicholas of Cusa. There’s another episode on him scheduled for two weeks from now.

Nicholas is one of those many polymaths of the not-quite-modern era. Someone who worked in philosophy, theology, astronomy, mathematics, with a side in calendar reform. He’s noteworthy in mathematics and theology and philosophy for trying to understand the infinite and the infinitesimal. Adamson’s podcast — about a half-hour — focuses on the philosophical and theological sides of things. But the mathematics can’t help creeping in, with questions like, how can you tell the difference between a straight line and the edge of a circle with infinitely large diameter? Or between a circle and a regular polygon with infinitely many sides?

The St Andrews Maths History site has an article on Nicholas that focuses more on the kinds of work he did.

How May 2021 Treated My Mathematics Blog


I’ll take this chance now to look over my readership from the past month. It’s either that or actually edit this massive article I’ve had sitting for two months. I keep figuring I’ll edit it this next weekend, and then the week ends before I do. This weekend, though, I’m sure to edit it into coherence. Just you watch.

According to WordPress I had 3,068 page views in May of 2021. That’s an impressive number: my 12-month running mean, leading up to May, was 2,366.0 views per month. The 12-month running median is a similar 2,394 views per month. That startles me, especially as I don’t have any pieces that obviously drew special interest. Sometimes there’s a flood of people to a particular page, or from a particular site. That didn’t happen this month, at least as far as I can tell. There was a steady flow of readers to all kinds of things.

There were 2,085 unique visitors, according to WordPress. That’s down from April, but still well above the running mean of 1,671.9 visitors. And above the median of 1,697 unique visitors.

When we rate things per post, the past month’s dominance gets even more amazing. That’s an average 340.9 views per posting this month, compared to a mean of 202.5 or a median of 175.5. (Granted, yes, the majority of those views were to things from earlier months; there’s almost ten years of backlog and people notice those too.) And it’s 231.7 unique visitors per posting, versus a mean of 144.7 and a median of 127.4.

Bar chart of two and a half years’ worth of monthly readership figures. The last several months have seen a steady roughly 3,000 page views and 2,000 unique visitors a month, an increase over the preceding years.
The most important thing in tracking all this is I hope to someday catch WordPress giving me the same readership statistics two months in a row.

There were 48 likes given in May. That’s below the running mean of 56.3 and median of 55.5. Per-posting, though, these numbers look better. That’s 5.3 likes per posting over the course of May. The mean per posting was 4.5 and the median 4.1 over the previous twelve months. There were 20 comments, barely above the running mean of 19.4 and running median of 18. But that’s 2.2 comments per posting, versus a mean per posting of 1.7 and a median per posting of 1.4. I make my biggest impact with readers by shutting up more.

I got around to publishing nine things in May. A startling number of them were references to other people’s work or, in one case, me talking about using an earlier bit I wrote. Here are the posts in descending order of popularity. I’m surprised how much this differs from simple chronological order. It suggests there are things people are eager to see, and one of them is Reading the Comics posts. Which I don’t do on a schedule anymore.

As that last and least popular post says, I plan to do an A-to-Z this year. A shorter one than usual, though, one of only fifteen weeks’ duration, and covering only ten different letters. It’s been a hard year and I need to conserve my energies. I’ll begin appealing for subjects soon.

In May 2021 I posted 4,719 words here, figures WordPress, bringing me to a total of 22,620 words this year. This averages out at 524.3 words per posting in May, and 552 words per post for the year.

As of the start of June I’ve had 1,623 posts to here, which gathered a total 135,779 views from a logged 79,646 unique visitors.

I’d be glad to have you as a regular reader. To be one that never appears in my statistics you can use the RSS feed for my essays. If you don’t have an RSS reader you can sign up for a free account at Dreamwidth or Livejournal. You can add any RSS feed at https://www.dreamwidth.org/feeds/ or https://www.livejournal.com/syn and have it appear on your Friends page.

If you have a WordPress account, you can add my posts to your Reader. Use the “Follow NebusResearch” button to do that. Or you can use “Follow NebusResearch by E-mail” to get posts sent to your mailbox. That’s the way to get essays before I notice their most humiliating typos.

I’m @nebusj on Twitter, but don’t read or interact with it. It posts announcements of essays is all. I do read @nebusj@mathstodon.xyz, on the mathematics-themed Mastodon instance.

Thank you for reading, however it is you’re doing, and I hope you’ll do more of that. If you’re not reading, I suppose I don’t have anything more to say.

Announcing my 2021 Mathematics A-to-Z


I enjoy the tradition of writing an A-to-Z, a string of essays about topics from across the alphabet and mostly chosen by readers and commenters. I’ve done at least one each year since 2015 and it’s a thrilling, exhausting performance. I didn’t want to miss this year, either.

But note the “exhausting” there. It’s been a heck of a year and while I’ve been more fortunate than many, I also know my limits. I don’t believe I have the energy to do the whole alphabet. I tell myself these essays don’t have to be big productions, and then they turn into 2,500 words a week for 26 weeks. It’s nice work but it’s also a (slender) pop mathematics book a year, on top of everything else I write in the corners around my actual work.

So how to do less, and without losing the Mathematics A-to-Z theme? And Iva Sallay, creator of Find the Factors and always a kind and generous reader, had the solution. This year I’ll plan on a subset of the alphabet, corresponding to a simple phrase. That phrase? I’m embarrassed to say how long it took me to think of, but it must be the right one.

I plan to do, in this order, the letters of “MATHEMATICS A-TO-Z”.

That is still a 15-week course of essays, but I wanted something that would be a worthwhile project. I intend to keep the essays shorter this year, aiming at a 1,000-word cap, so look forward to me breaking 4,000 words explaining “saddle points”. This also implies that I’ll be doubling and even tripling letters, for the first time in one of these sequences. There are to be three A’s, three T’s, and two M’s. Also one each of C, E, H, I, O, S, and Z. I figure I have one Z essay left before I exhaust the letter. I may deal with that problem in 2022.
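If you’d like to audit my letter accounting, it’s the sort of bookkeeping a couple lines of Python will do:

from collections import Counter

phrase = "MATHEMATICS A-TO-Z"
letters = Counter(c for c in phrase if c.isalpha())
print(letters)                # 3 A's, 3 T's, 2 M's, and one each of the rest
print(sum(letters.values()))  # 15 letters, hence the fifteen weeks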

I plan to set out my call for topics soon; I’d like to get the sequence publishing in July, so I can’t put that off long. But to give some idea of the range of things I’ve discussed before, here’s the roster of past, full-alphabet, A-to-Z topics:

I, too, am fascinated by the small changes in how I titled these posts and even chose whether to capitalize subject names in the roster. By “am fascinated by the small changes” I mean “am annoyed beyond reason by the inconsistencies”. I hope you too have an appropriate reaction to them.

Reading the Comics, May 25, 2021: Hilbert’s Hotel Edition


I have only a couple strips this time, and from this week. I’m not sure when I’ll return to full-time comics reading, but I do want to share strips that inspire something.

Carol Lay’s Lay Lines for the 24th of May riffs on Hilbert’s Hotel. This is a metaphor often used in pop mathematics treatments of infinity. So often, in fact, a friend snarked that he wished for any YouTube mathematics channel that didn’t do the same three math theorems. Hilbert’s Hotel was among them. I think I’ve never written a piece specifically about Hilbert’s Hotel. In part because every pop mathematics blog has one, so there are better expositions available. I have a similar restraint against a detailed exploration of the different sizes of infinity, or of the Monty Hall Problem.

Narration, with illustrations to match: ‘Hilbert’s Hotel: The infinite hotel was always filled to capacity. Yet if a new guest arrived, she was always given a room. After all, there were an infinite number of rooms. This paradox assumed that management could always add one or more to infinity. The brain-bruising hotel attracted a lot of mathematicians and philosophers. They liked to argue into the wee hours about the nature of infinity. Unfortunately, they were a bunch of slobs. Management had to hire a new maid to keep up with the mess. Daunted by the number of rooms to clean... the maid set fire to the joint. The philosophers escaped ... but the hotel burned forever.’
Carol Lay’s Lay Lines for the 24th of May, 2021. This and a couple other essays inspired by something in Lay Lines are at this link. This comic is, per the copyright notice, from 2002. I don’t know anything of its publication history past that.

Hilbert’s Hotel is named for David Hilbert, of Hilbert problems fame. It’s a thought experiment to explore weird consequences of our modern understanding of infinite sets. It presents various cases about matching the elements of a set to the whole numbers, by making it about guests in hotel rooms. And then it translates things we accept in set theory, like combining two infinitely large sets, into material terms. In material terms, the operations seem ridiculous. So the set of thought experiments gets labelled “paradoxes”. This is not in the logician’s sense of being things both true and false, but in the ordinary sense that we are asked to reconcile our logic with our intuition.
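If seeing the bookkeeping helps: the two most famous moves amount to one-to-one functions on the room numbers. A minimal sketch of my own, not anything from Lay’s strip:

# Two standard Hilbert's Hotel moves, written as explicit
# room-reassignment rules on the positive integers 1, 2, 3, ...

def make_room_for_one(room):
    # Everybody moves up one room; room 1 is freed for the newcomer.
    return room + 1

def make_room_for_infinitely_many(room):
    # Everybody doubles their room number; every odd-numbered room
    # is freed, enough for a countable infinity of newcomers.
    return 2 * room

# Both rules are one-to-one, so no two guests ever collide,
# and every current guest still gets a room.
print([make_room_for_one(r) for r in range(1, 6)])              # [2, 3, 4, 5, 6]
print([make_room_for_infinitely_many(r) for r in range(1, 6)])  # [2, 4, 6, 8, 10]

That both rules send different guests to different rooms is all that fitting more people into a full hotel requires.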

So the Hotel serves a curious role. It doesn’t make a complex idea understandable, the way many demonstrations do. It instead draws attention to the weirdness in something a mathematics student might otherwise nod through. It does serve some role, or it wouldn’t be so popular now.

It hasn’t always been popular, though. Hilbert introduced the idea in 1924, though, per a paper by Helge Kragh, he used it only to address one question; a modern pop mathematician would wring a half-dozen problems out of it. George Gamow’s 1947 book One Two Three … Infinity brought it up again, but it didn’t stay in the public eye. It wasn’t until the 1980s that it got a secure place in pop mathematics culture, and that by way of philosophers and theologians. If you aren’t up to reading the whole of Kragh’s paper, I did summarize it a bit more completely in this 2018 Reading the Comics essay.

Anyway, Carol Lay does a great job making a story of it.

Two people stand in front of a chalkboard which contains a gibberish equation: 'sqrt(PB+J(ax pi)^2) * Z/y { = D/8 + H} - 17^4 x G + z x 2 / 129 \div +/o + exp(null set mickey-mouse-ears), et cetera. One person says: 'Oh, it definitely proves something, all right ... when it comes to actual equations, at least one cartoonist doesn't know squat.'
Leigh Rubin’s Rubes for the 25th of May, 2021. This and other essays mentioning Rubes are at this link. I’m not sure whether that symbol at the end of the second line is meant to be Mickey Mouse ears, or a Venn diagram, or a symbol that I’m not recognizing.

Leigh Rubin’s Rubes for the 25th of May I’ll toss in here too. It’s a riff on the art convention of a blackboard equation being meaningless. Normally, of course, the content of the equation doesn’t matter. So it gets simplified and abstracted, for the same reason one draws a brick wall as four separate patches of two or three bricks together. It sometimes happens that a cartoonist makes the equation meaningful. That’s because they’re a recovering physics major like Bill Amend of FoxTrot. Or it’s because the content of the blackboard supports the joke. Which, in this case, it does.

The essays I write about comic strips I tag so they appear at this link. You may enjoy some more pieces there.

In Which I Get To Use My Own Work


We have goldfish, normally kept in an outdoor pond. It’s not a deep enough pond that it would be safe to leave them out for a very harsh winter. So we keep as many as we can catch in a couple 150-gallon tanks in the basement.

Recently, and irritatingly close to when we’d set them outside, the nitrate level in the tanks grew too high. Fish excrete ammonia. Microorganisms then turn the ammonia into nitrites, and the nitrites into nitrates. In the wild, the nitrates then get used by … I dunno, plants? Which don’t thrive enough in our basement to clean them out. To get the nitrate out of the water, all there is to do is replace the water.

We have six buckets, each holding five gallons, of water that we can use for replacement. So there’s up to 30 gallons of water that we could change out in a day. Can’t change more because tap water contains chloramines, which kill bacteria (good news for humans) but hurt fish (bad news for goldfish). We can treat the tap water to neutralize the chloramines, but want to give that time to finish. I have never found a good reference for how long this takes. I’ve adopted “about a day” because we don’t have a water tap in the basement and I don’t want to haul more than 30 gallons of water downstairs any given day.

So I got thinking, what’s the fastest way to get the nitrate level down for both tanks? Change 15 gallons in each of them once a day, or change 30 gallons in one tank one day and the other tank the next?

Several dozen goldfish, most of them babies, within a 150-gallon rubber stock tank, their wintering home.
Not a current picture, but the fish look about like this still.

And, happy to say, I realized this was the tea-making problem I’d done a couple months ago. The tea-making problem had a different goal, that of keeping as much milk in the tea as possible. But the thing being studied was how partial replacements of a solution with one component affect the amount of the other component. The major difference is that the fish produce (ultimately) more nitrates in time. There’s no tea that spontaneously produces milk. But if nitrate generation is low enough, the same conclusions follow. So, a couple days of 30-gallon changes, in alternating tanks, and we had the nitrates back to a decent level.
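If you’d like to see that play out, here’s a little simulation sketch. The 150-gallon tanks are real; the daily nitrate production and the starting concentration are numbers I made up for illustration, since I don’t have measured figures.

TANK = 150.0        # gallons in each stock tank (from the post)
DAILY_RISE = 2.0    # nitrate added per tank per day, in ppm (a made-up figure)
START = 80.0        # starting nitrate concentration, in ppm (also made up)

def simulate(days, schedule):
    # schedule(day) gives (gallons changed in tank A, in tank B) that day.
    c = [START, START]
    for day in range(days):
        change = schedule(day)
        for i in (0, 1):
            c[i] += DAILY_RISE              # the fish keep at it
            c[i] *= 1.0 - change[i] / TANK  # dilution from the water change
    return [round(x, 1) for x in c]

print("15 gallons in each tank, daily:", simulate(14, lambda d: (15.0, 15.0)))
print("30 gallons, alternating tanks: ",
      simulate(14, lambda d: (30.0, 0.0) if d % 2 == 0 else (0.0, 30.0)))

The alternating schedule comes out a hair ahead, for the reason the tea problem teaches: one 30-gallon change keeps 0.8 of the nitrate, where two 15-gallon changes keep 0.9 \times 0.9 = 0.81 of it.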

We’d have put the fish outside this past week if I hadn’t broken, again, the tool used for cleaning the outside pond.