How to Tell if a Point Is Inside a Shape


As I continue to approach readiness for the Little Mathematics A-to-Z, let me share another piece you might have missed. Back in 2016 somehow two A-to-Z’s weren’t enough for me. I also did a string of “Theorem Thursdays”, trying to explain some interesting piece of mathematics. The Jordan Curve Theorem is one of them.

The theorem, at heart, seems too simple to even be mathematics. It says that a simple closed curve on the plane divides the plane into an inside and an outside. There are similar versions for surfaces in three-dimensional spaces. Or volumes in four-dimensional spaces and so on. Proving the theorem turns out to be more complicated than I could fit into an essay. But proving a simplified version, where the curve is a polygon? That’s doable. Easy, even.

And as a sideline you get an easy way to test whether a point is inside a shape. It’s obvious, yeah, if a point is inside a square. But inside a complicated shape, some labyrinthine shape? Then it’s not obvious, and it’s nice to have an easy test.

This is even mathematics with practical application. A few months ago in my day job I needed an automated way to place a label inside a potentially complicated polygon. The midpoint of the polygon’s vertices wouldn’t do. The shapes could be L- or U- shaped, so that the midpoint wasn’t inside, or was too close to the edge of another shape. Starting from the midpoint, though, and finding the largest part of the polygon near to it? That’s doable, and that’s the Jordan Curve Theorem coming to help me.
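The test that falls out of the theorem is the even-odd, or ray-casting, rule: draw a ray from your point off to infinity and count how many polygon edges it crosses. An odd count means the point is inside. Here's a minimal sketch in Python; the function and the L-shaped polygon are my own illustration, not the code from my day job:

```python
def point_in_polygon(x, y, vertices):
    """Even-odd (ray casting) test: cast a ray to the right from
    (x, y) and count how many polygon edges it crosses.
    An odd number of crossings means the point is inside."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # The x-coordinate where the edge crosses that line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# An L-shaped region: the unit-mean of these vertices, (1, 1),
# sits right on the inner corner, which is why a smarter
# label-placing scheme was needed.
l_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
```

Points exactly on an edge, or rays that pass through a vertex, need extra care; for label placement you can dodge the issue by nudging the test point.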

How to Make a Transcendental Number


I am, believe it or not, working ahead of deadline on the Little Mathematics A-to-Z for this year. I feel so happy about that. But that’s eating up time to write fresh stuff here. So please let me share some older material, this from my prolific year 2016.

Transcendental numbers, which I describe at this link, are nearly all the real numbers. We’re able to prove that even though we don’t actually know very many of them. We know that some numbers we’re interested in, like π and e, are transcendental. And this has surprising consequences. π being a transcendental number means, for example, that the Ancient Greek geometric challenge to square the circle using straightedge and compass is impossible.

However, it’s not hard to create a number that you know is transcendental. Here’s how to do it, with an easy step-by-step guide. If you want to create this and declare it’s named after you, enjoy! Nobody but you will ever care about this number, I’m afraid. Its only interesting traits will be that it’s transcendental and that you crafted it. Still, isn’t that nice anyway? I think it’s nice anyway.
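The classic recipe, which may or may not be the one the linked essay walks through, is Liouville's: put a 1 in every factorial-numbered decimal place and a 0 everywhere else, giving 10^(-1!) + 10^(-2!) + 10^(-3!) + ⋯ = 0.110001000… . Rational numbers approximate it too well for it to be the root of any polynomial with integer coefficients. A sketch of the partial sums, in Python:

```python
from fractions import Fraction

def liouville_partial(n):
    """Partial sum of Liouville's constant: the sum of 10^(-k!)
    for k = 1 through n, kept as an exact fraction."""
    total = Fraction(0)
    factorial = 1
    for k in range(1, n + 1):
        factorial *= k
        total += Fraction(1, 10 ** factorial)
    return total
```

Vary where the 1s land (any gap sequence that grows fast enough works) and you have a transcendental number of your very own.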

How To Find A Logarithm Without Much Computing Power


I don’t yet have actual words committed to text editor for this year’s little A-to-Z. Soon, though. Rather than leave things completely silent around here, I’d like to re-share an old sequence about something which delighted me. A long while ago I read Edmund Callis Berkeley’s Giant Brains: Or Machines That Think. It’s a book from 1949 about numerical computing. And it explained just how to really calculate logarithms.

Anyone who knows calculus knows, in principle, how to calculate a logarithm. I mean as in how to get a numerical approximation to whatever the log of 25 is. If you didn’t have a calculator that did logarithms, but you could reliably multiply and add numbers? There’s a polynomial, one of a class known as Taylor Series, that — if you add together infinitely many terms — gives the exact value of a logarithm. If you only add a finite number of terms together, you get an approximation.

That suffices, in principle. In practice, you might have to calculate so many terms and add so many things together you forget why you cared what the log of 25 was. What you want is how to calculate them swiftly. Ideally, with as few calculations as possible. So here’s a set of articles I wrote, based on Berkeley’s book, about how to do that.

Machines That Think About Logarithms sets out the question. It includes some talk about the kinds of logarithms and why we use each of them.

Machines That Do Something About Logarithms sets out principles. These are all things that are generically true about logarithms, including about calculating logarithms.

Machines That Give You Logarithms explains how to use those tools. And lays out how to get the base-ten logarithm for most numbers that you would like with a tiny bit of computing work. I showed off an example of getting the logarithm of 47.2286 using only three divisions, four additions, and a little bit of looking up stuff.

Without Machines That Think About Logarithms closes it out. One catch with the algorithm described is that you need to work out some logarithms ahead of time and have them on hand, ready to look up. They’re not ones that you care about particularly for any problem, but they make it easier to find the logarithm you do want. This essay talks about which logarithms to calculate, in order to get the most accurate results for the logarithm you want, using the least custom work possible.

And that’s the series! With that, in principle, you have a good foundation in case you need to reinvent numerical computing.
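To give the flavor of the approach without spoiling the essays: keep a small table of logarithms worked out ahead of time, divide those factors out of your number until what's left is nearly 1, and add up the known logarithms as you go. This is the general idea, my own sketch rather than Berkeley's exact recipe, and the particular table factors here are my choice:

```python
import math

# A small table of logarithms computed ahead of time; these
# particular factors are my own choice for illustration.
LOG_TABLE = {10.0: 1.0,
             2.0: math.log10(2.0),
             1.1: math.log10(1.1),
             1.01: math.log10(1.01),
             1.001: math.log10(1.001)}

def log10_by_division(x):
    """Divide out table factors, largest first, summing their
    known logarithms; finish with a linear estimate of what's left."""
    result = 0.0
    for factor in sorted(LOG_TABLE, reverse=True):
        while x >= factor:
            x /= factor
            result += LOG_TABLE[factor]
    # x is now barely above 1, so log10(x) is about (x-1)*log10(e)
    return result + (x - 1.0) * math.log10(math.e)
```

All the real work is division, addition, and looking things up, which is the point.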

Is this mathematics thing ambiguous or confusing?


There is an excellent chance it is! Mathematicians sometimes assert the object of their study is a universal truth, independent of all human culture. It may be. But the expression of that interest depends on the humans expressing it. And as with all human activities it picks up quirks. Patterns that don’t seem to make sense. Or that seem to conflict with other patterns. It was not two days ago that I most recently saw someone cross about how 0 times anything is 0, but 0! is 1.

Mathematicians are not all of one mind. They notice different things that seem important and want to focus on that. They use ways that make sense to their culture. When they create new notation, or new definitions, they use the old ones to guide them. When a topic’s interesting enough for many people to notice, they bring many trails of notation to describe it. Usually a consensus emerges, that there are some notations that work well to describe these concepts, and the others fall away. But it’s difficult to get complete consistency. Particularly when there are several major fields that don’t need to interact much, but do have some overlap.

Christian Lawson-Perfect has started something that might be helpful for understanding this. WhyStartAt.xyz is to be a collection of “ambiguous, inconsistent, or just plain unpleasant conventions in mathematical notation”. There are four major categories already: inconsistencies, ambiguities, unpleasantness, and conflicting definitions. And there’s a set of references useful for anyone curious why something is a convention. (Nobody knows why we use ‘m’ for the slope in the slope-intercept or point-slope equations describing a line. Sometimes a convention is arbitrary.) It’s already great reading, though, not just for this line from our friend Thomas Hobbes.

How June 2021 Treated My Mathematics Blog


It’s the time of month when I like to look at what my popularity is like. How many readers I had, what they were reading, that sort of thing. And I’m even getting to it earlier than usual in the month of July. Credit a hot Sunday when I can’t think of other things to do instead.

According to WordPress there were 2,507 page views here in June 2021. That’s down from the last couple months. But it is above the twelve-month running mean leading up to June, which was 2,445.9 views per month. The twelve-month running median was 2,516.5. This all implies that June was quite in line with my average month from June 2020 through May 2021. It just looks like a decline is all.

There were 1,753 unique visitors recorded by WordPress in June. That again fits between the running averages. There were a mean 1,728.4 unique visitors per month between June 2020 and May 2021. There was a median of 1,800 unique visitors each month over that same range.

Bar chart showing two and a half years' worth of readership figures. There's an enormous spike in October 2018. After several increasing months of readership recently, June 2021 saw a modest drop in views and unique visitors.
Hey, remember when I tracked views per visitor? I don’t remember why I stopped doing that. The figures were volatile. But either way had a happy interpretation. A low number of views per visitor implied a lot of people found something interesting. A high number of views per visitor implied people were doing archive-binges and reading everything. I suppose I could start seriously tracking it now but then I’d have to add a column to my spreadsheet.

The number of likes given collapsed, a mere 36 clicks of the like button given in June compared to a mean of 57.3 and median of 55.5. Given how many of my posts were some variation of “I’m struggling to find the energy to write”? I can’t blame folks not finding the energy to like. Comments were up, though, surely in response to my appeal for Mathematics A-to-Z topics. If you’ve thought of any, please, let me know; I’m eager to know.

I had nine essays posted in June, including my readership review post. These were, in the order most-to-least popular (as measured by page views):

In June I posted 7,852 words, my most verbose month since October 2020. That comes to an average of 981.5 words per posting in June. But the majority of them were in a single post, the exploration of MLX, which shows how the mean can be a misleading measure. This does bring my words-per-posting mean for the year up to 622, an increase of 70 words per posting. I need to not do that again.

As of the start of July I’ve had 1,631 posts here, which gathered 138,286 total views from 81,404 logged unique visitors.

If you’d like to be a regular reader, this is a great time for it, as I’ve almost worked my way through my obsession with checksum routines of 1980s computer magazines! And there’s the A-to-Z starting soon. Each year I do a glossary project, writing essays about mathematics terms from across the dictionary, many based on reader suggestions. All 168 essays from past years are at this link. This year’s should join that set, too.

If you’d like to be a regular reader, thank you! You can get all these essays by their RSS feed, and never appear in my statistics. It’s easy to get an RSS reader if you need. The Old Reader is an option, for example, as is NewsBlur. Or you can sign up for a free account at Dreamwidth or Livejournal. Use https://www.dreamwidth.org/feeds/ or https://www.livejournal.com/syn to add RSS feeds to your Reading or Friends page.

If you’d like to get new posts without typos corrected, you can sign up for e-mail delivery. Or if you have a WordPress account, you can use “Follow NebusResearch” to add this page to your Reader. And I am @nebusj@mathstodon.xyz on Mastodon, at the mathematics-themed instance of the network. Thanks for reading, however you find most comfortable.

How did Compute!’s Automatic Proofreader Work?


After that work on MLX, the programs that Compute! and Compute!’s Gazette used to enter machine language programs, I figured I was done. There was the Automatic Proofreader, used to catch errors in typing in BASIC programs. But that program was written in the machine language of the 6502 line of microchips. I’ve never been much for machine language and supposed I couldn’t figure out how it worked. And then on a lark I tried and saw. And it turned out to be easy.

With qualifiers, of course. Compute! and Compute!’s Gazette had two generations of Automatic Proofreader for Commodore computers. The magazines also had Automatic Proofreaders for the other eight-bit computers that they covered. I trust that those worked the same way, but — with one exception — don’t know. I haven’t deciphered most of those other proofreaders.

Cover of the October 1983 Compute!'s Gazette, offering as cover features the games Oil Tycoon and Aardvark Attack, and promising articles on speeding up the Vic-20 and understanding sound on the 64.
The October 1983 Compute!’s Gazette, with the debut of the Automatic Proofreader. It was an era for wordy magazine covers. Also I have no idea who did the art for that Oil Tycoon article but I love how emblematic it was of 1980s video game cover art.

Let me introduce how it was used, though. Compute! and Compute!’s Gazette offered computer programs to type in. Many of them were in BASIC, which uses many familiar words of English as instructions. But you can still make typos entering commands, and this causes bugs or crashes in programs. The Automatic Proofreader, for the Commodore (and the Atari), put in a little extra step after you typed in a line of code. It calculated a checksum. It showed that on-screen after every line you entered. And you could check whether that matched the checksum the magazine printed. So the listing in the magazine would be something like:

100 POKE 56,50:CLR:DIM IN$,I,J,A,B,A$,B$,A(7),N$ :rem 34
110 C4=48:C6=16:C7=7:Z2=2:Z4=254:Z5=255:Z6=256:Z7=127 :rem 238
120 FA=PEEK(45)+Z6*PEEK(46): BS=PEEK(55)+Z6*PEEK(56): H$="0123456789ABCDEF" :rem 118

Sample text entry, in this case for The New MLX. It shows about eight lines of BASIC instructions, each line ending in a colon, the command 'rem' and a number between 0 and 255.
The start of The New MLX, introduced in the December 1985 Compute!’s Gazette, and using the original Automatic Proofreader checksum. That program received lavish attention two weeks ago.

You would type in all those lines up to the :rem part. ‘rem’ here stands for ‘Remark’ and means the rest of the line is a comment to the programmer, not the computer. So they’d do no harm if you did enter them. But why type text you didn’t need?

So after typing, say, 100 POKE 56,50:CLR:DIM IN$,I,J,A,B,A$,B$,A(7),N$ you’d hit return and with luck get the number 34 up on screen. The Automatic Proofreader did not force you to re-type the line. You were on your honor to do that. (Nor were you forced to type lines in order. If you wished to type line 100, then 200, then 300, then 190, then 250, then 330, you could. The checksum would calculate the same.) And it didn’t only work for entering programs, these commands starting with line numbers. It would return a result for any command you entered. But since you wouldn’t know what the checksum should be for a freeform command, that didn’t tell you much.

Magazine printout of a Commodore 64 screen showing the Automatic Proofreader in use. There are several lines of BASIC program instructions and in the upper-left corner of the screen the number ':247' printed in cover-reversed format.
I’m delighted there’s a picture of what the Automatic Proofreader looked like in practice, because this saves me having to type in the Proofreader into an emulator and taking a screen shot of that. Also, props to Compute!’s Gazette for putting a curved cut around this screen image.

The first-generation Automatic Proofreader, which is what I’m talking about here, returned a number between 0 and 255. And it was a simple checksum. It could not detect transposed characters: the checksum for PIRNT was the same as PRINT and PRITN. And, it turns out, errors could offset: the checksum for PEEK(46) would be the same as that for PEEK(55).

And there was one bit of deliberate insensitivity built in. Spaces would not be counted. The checksum for FA=PEEK(45)+Z6*PEEK(46) would be the same as FA = PEEK( 45 ) + Z6 * PEEK( 46 ). So you could organize text in whatever way was most convenient.

Given this, and given the example of the first MLX, you may have a suspicion how the Automatic Proofreader calculated things. So did I and it turned out to be right. The checksum for the first-generation Automatic Proofreader, at least for the Commodore 64 and the Vic-20, was a simple sum. Take the line that’s been entered. Ignore spaces. But otherwise, take the ASCII code value for each character, and add that up, modulo 256. That is, if the sum is (say) 300, subtract 256 from that, that is, 44.

I’m fibbing a little when I say it’s the ASCII code values. The Commodore computers used a variation on ASCII, called PETSCII (Commodore’s first line of computers was the PET). For ordinary text the differences between ASCII and PETSCII don’t matter. The differences come into play for various characters Commodores had. These would be symbols like the suits of cards, or little circles, or checkerboard patterns. Symbols that, these days, we’d see as emojis, or at least part of an extended character set.

But translating all those symbols is … tedious, but not hard. If you want to do a simulated Automatic Proofreader in Octave, it’s almost no code at all. It turns out Octave and Matlab need no special command to get the ASCII code equivalent of text. So here’s a working simulation:

function retval = automatic_proofreader (oneLine)
  trimmedLine = strrep(oneLine, " ", "");
  #	In Matlab this should be replace(oneLine, " ", "");
  retval = mod(sum(trimmedLine), 256);
endfunction

To call it type in a line of text:

automatic_proofreader("100 POKE 56,50:CLR:DIM IN$,I,J,A,B,A$,B$,A(7),N$")
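For anyone without Octave handy, the same checksum in Python, my own translation, using plain ASCII just as the simulation above does:

```python
def automatic_proofreader(line):
    """First-generation Automatic Proofreader checksum: ignore
    spaces, sum the character codes, keep the result modulo 256.
    (A real Commodore used PETSCII, which agrees with ASCII for
    the ordinary characters appearing in these listings.)"""
    return sum(ord(c) for c in line if c != " ") % 256
```

Running it on that sample line gives 34, matching the :rem 34 in the listing. It also demonstrates the offsetting-errors weakness: PEEK(46) and PEEK(55) come out the same.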
The first page of the article introducing the Automatic Proofreader. The headline reads 'The Automatic Poofreader', with a robotic arm writing in a ^r. The subheading is 'Banish Typos Forever!'
Very optimistic subhead here considering the limits they acknowledged in the article about what the Automatic Proofreader could detect.

Capitalization matters! The ASCII code for capital-P is different from that for lowercase-p. Spaces won’t matter, though. More exotic characters, such as the color-setting commands, are trouble; let’s not deal with them right now. Also you can enclose your line in single-quotes, in case for example you want the checksum of a line that had double-quotes. Let’s agree that lines with both single- and double-quotes don’t exist.

I understand the way Commodore 64s work well enough that I can explain the Automatic Proofreader’s code. I plan to do that soon. I don’t know how the Atari version of the Automatic Proofreader worked, but since it had the same weaknesses I assume it used the same algorithm.

There is a first-generation Automatic Proofreader with a difference, though, and I’ll come to that.

History of Philosophy podcast has another episode on Nicholas of Cusa


A couple weeks ago I mentioned that Peter Adamson’s The History of Philosophy Without Any Gaps had an episode about Nicholas of Cusa. Last week the podcast had another one, a half-hour interview with Paul Richard Blum about him and his work.

As with the previous podcast, there’s almost no mention of Nicholas of Cusa’s mathematics work. On the other hand, if you learn the tiniest possible bit about Nicholas of Cusa, you learn everything there is to know about Nicholas of Cusa. (I believe this joke would absolutely kill with the right audience, and will hear nothing otherwise.) The St Andrews Maths History site has a biography focusing particularly on his mathematical work.

I’m sorry not to be able to offer more about his mathematical work. If someone knows of a mathematics-history podcast with a similar goal, please leave a comment. I’d love to know and to share with other people.

I’m looking for topics for the Little 2021 Mathematics A-to-Z


I’d like to say I’m ready to start this year’s Mathematics A-to-Z. I’m not sure I am. But if I wait until I’m sure, I’ve learned, I wait too long. As mentioned, this year I’m doing an abbreviated version of my glossary project. Rather than every letter in the alphabet, I intend to write one essay each for the letters in “Mathematics A-to-Z”. The dashes won’t be included.

While I have some thoughts in minds for topics, I’d love to know what my kind readers would like to see me discuss. I’m hoping to write about one essay, of around a thousand words, per week. One for each letter. The topic should be anything mathematics-related, although I tend to take a broad view of mathematics-related. (I’m also open to biographical sketches.) To suggest something, please, say so in a comment. If you do, please also let me know about any projects you have — blogs, YouTube channels, real-world projects — that I should mention at the top of that essay.

To keep things manageable, I’m looking for the first couple letters — MATH — first. But if you have thoughts for later in the alphabet please share them. I can keep track of that. I am happy to revisit a subject I think I have more to write about, too. Past essays for these letters that I’ve written include:

M.


A.


T.


H.


The reason I wrote a second Tiling essay is that I forgot I’d already written one in 2018. I hope not to make that same mistake again. But I am open to repeating a topic, or a variation of a topic, on purpose.

Here’s some Matlab/Octave code for your MLX simulator


I am embarrassed that after writing 72,650 words about MLX 2.0 last week, I left something out. Specifically, I didn’t include code for your own simulation of the checksum routine on a more modern platform. Here’s a function that carries out the calculations of the Commodore 64/128 or Apple II versions of MLX 2.0. It’s written in Octave, the open-source Matlab-like numerical-computation package. If you can read this, though, you can translate it to whatever language you find convenient.

function [retval] = mlxII (oneline)
   z2 = 2;
   z4 = 254;
   z5 = 255;
   z6 = 256; 
   z7 = 127;
 
   address = oneline(1);
   entries = oneline(2:9);
   checksum = oneline(10);
   
   ck = 0;
   ck = floor(address/z6);
   ck = address-z4*ck + z5*(ck>z7)*(-1);
   ck = ck + z5*(ck>z5)*(-1);
#
#	This looks like but is not the sum mod 255.  
#	The 8-bit computers did not have a mod function and 
#	used this subtraction instead.
#	
   for i=1:length(entries),
     ck = ck*z2 + z5*(ck>z7)*(-1) + entries(i);
     ck = ck + z5*(ck>z5)*(-1);
   endfor
#
#	The checksum *can* be 255 (0xFF), but not 0 (0x00)!  
#	Using the mod function could make zeroes appear
#       where 255's should.
#
   retval = (ck == checksum);
endfunction

This reproduces the code as it was actually coded. Here’s a version that relies on Octave or Matlab’s ability to use modulo operations:

function [retval] = mlxIIslick (oneline)
   factors = 2.^(7:-1:0);

   address = oneline(1);
   entries = oneline(2:9);
   checksum = oneline(10);
   
   ck = 0;
   ck = mod(address - 254*floor(address/256), 255);
   ck = ck + sum(entries.*factors);
   ck = mod(ck, 255);
   ck = ck + 255*(ck == 0);

   retval = (ck == checksum);
endfunction

Enjoy! Please don’t ask when I’ll have the Automatic Proofreader solved.

How did Compute!’s and Compute!’s Gazette’s New MLX Work?


A couple months ago I worked out a bit of personal curiosity. This was about how MLX worked. MLX was a program used in Compute! and Compute!’s Gazette magazine in the 1980s, so that people entering machine-language programs could avoid errors. There were a lot of fine programs, some of them quite powerful, free for the typing-in. The catch is this involved typing in a long string of numbers, and if any were wrong, the program wouldn’t work.

So MLX, introduced in late 1983, was a program to make typing in programs better. You would enter in a string of six numbers — six computer instructions or data — and a seventh, checksum, number. Back in January I worked out finally what the checksum was. It turned out to be simple. Take the memory location of the first of your set of six instructions, modulo 256. Add to it each of the six instructions, modulo 256. That’s the checksum. If it doesn’t match the typed-in checksum, there’s an error.

There’s weaknesses to this, though. It’s vulnerable to transposition errors: if you were supposed to type in 169 002 and put in 002 169 instead, it wouldn’t be caught. It’s also vulnerable to casual typos: 141 178 gives the same checksum as 142 177.
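The formula and both weaknesses fit in a few lines of Python. This is a sketch from the description above; the address and data values are made up for illustration:

```python
def mlx_checksum(address, data):
    """Original (1983) MLX checksum: the starting address plus
    each of the six data values, all modulo 256."""
    return (address + sum(data)) % 256

# Hypothetical line: six instructions starting at address 828.
line = [169, 2, 141, 32, 208, 96]
```

Because it is a plain sum, swapping 169 and 2 leaves the checksum unchanged, and so does trading 141 178 for 142 177.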

Which is all why the original MLX lasted only two years.

What Was The New MLX?

The New MLX, also called MLX 2.0, appeared first in the June 1985 Compute!. This in a version for the Apple II. Six months later a version for the Commodore 64 got published, again in Compute!, though it ran in Compute!’s Gazette too. Compute! was for all the home computers of the era; Compute!’s Gazette specialized in the Commodore computers. I would have sworn that MLX got adapted for the Atari eight-bit home computers too, but can’t find evidence it ever was. By 1986 Compute! was phasing out its type-in programs and didn’t run much for Atari anymore.

Cover of the December 1986 Compute!'s Gazette, which includes small pictures to represent several features. One is a neat watercolor picture for 'Q Bird', showing a cheerful little blue bird resting on the head of a nervous-looking snake.
Programming challenge: a video game with the aesthetics of 1980s video-game-art, such as Q Bird’s look there.

The new MLX made a bunch of changes. Some were internal, about how to store a program being entered. One was dramatic in appearance. In the original MLX people typed in decimal numbers, like 32 or 169. In the new, they would enter hexadecimal digits, like 20 or A9. And a string of eight numbers on a line, rather than six. This promised to save our poor fingers. Where before we needed to type in 21 digits to enter six instructions, now we needed 18 digits to enter eight instructions. So the same program would take about two-thirds the number of keystrokes. A plausible line of code would look something like:

0801:0B 08 00 00 9E 32 30 36 EC
0809:31 00 00 00 A9 00 8D 20 3A
0811:D0 20 CF 14 20 1B 08 4C 96
0819:C7 0B A9 93 20 D2 FF A9 34

(This from the first lines for “Q-Bird”, a game published in the December 1986 Compute!’s Gazette.)

And, most important, there was a new checksum.

What was the checksum formula?

I had a Commodore 64, so I always knew MLX from its Commodore version. The key parts of the checksum code appear in it in lines 350 through 390. Let me copy out the key code, spaced a bit out for easier reading:

360 A = INT(AD/Z6):
    GOSUB 350:
    A = AD - A*Z6:
    GOSUB 350:
    PRINT":";
370 CK = INT(AD/Z6):
    CK = AD - Z4*CK + Z5*(CK>Z7):
    GOTO 390
380 CK = CK*Z2 + Z5*(CK>Z7) + A
390 CK = CK + Z5*(CK>Z5):
    RETURN

Z2, Z4, Z5, Z6, and Z7 are constants, defined at the start of the program. Z4 equals 254, Z5 equals 255, Z6 equals 256, and Z7, as you’d expect, is 127. Z2, meanwhile, was a simple 2.

About a dozen lines of Commodore 64 BASIC, including the lines that represent the checksum calculations for MLX 2.0.
The bits at the end of each line, :rem 240 and the like, are not part of the working code. They’re instead the Automatic Proofreader checksum. Automatic Proofreader was a different program, one written in machine language that you used to make sure you typed in BASIC programs correctly. After entering a line of BASIC, the computed checksum appeared in the corner of the window, and if it was the :rem number, you had typed the line in correctly. Now you might wonder how you knew you typed in the machine language code for the Automatic Proofreader correctly, if you need the Automatic Proofreader to enter MLX correctly. To this I offer LOOK A BIG DISTRACTING THING! (Runs away.)

A bit of Commodore BASIC here. INT means to take the largest whole number not larger than whatever’s inside. AD is the address of the start of the line being entered. CK is the checksum. A is one number, one machine language instruction, being put in. GOSUB, “go to subroutine”, means to jump to another line and execute commands from there until reaching a RETURN command. The program then continues from the next instruction after the GOSUB. In this code, line 350 converts a number from decimal to hexadecimal and prints out the hexadecimal version. This bit about adding Z5 * (CK>Z7) looks peculiar.

Commodore BASIC evaluates logical expressions like CK > Z7 into a bit pattern. That pattern looks like a number. We can use it like an integer. Many programming languages do something like that and it can allow for clever but cryptic programming tricks. An expression that’s false evaluates as 0; an expression that’s true evaluates as -1. So, CK + Z5*(CK>Z5) is an efficient little filter. If CK is smaller than Z5, it’s left untouched. If CK is larger than Z5, then subtract Z5 from CK. This keeps CK from being more than 255, exactly as we’d wanted.
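Python has the same sort of quirk, except that True behaves as 1 where BASIC's true is -1, so the sign moves from an addition to a subtraction. A sketch of the filter:

```python
def clamp255(ck):
    """Mimic BASIC's CK + Z5*(CK>Z5) filter. Python's True acts
    as the integer 1 (BASIC's true is -1), so we subtract."""
    return ck - 255 * (ck > 255)
```

Values up to 255 pass through untouched; anything larger loses 255, which keeps the running checksum in range.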

But you also notice: this code makes no sense.

Like, starting the checksum with something derived from the address makes sense. Adding to that numbers based on the instructions makes sense. But the last instruction of line 370 is a jump straight to line 390. Line 380, where any of the actual instructions are put into the checksum, never gets called. Also, there’s eight instructions per line. Why is only one ever called?

And this was a bear to work out. One friend insisted I consider the possibility that MLX was buggy and nobody had found the defect. I could not accept that, not for a program that was so central to so much programming for so long. Nor did it square with the fact that the program worked: make almost any entry error and the checksum would not match.

Where’s the rest of the checksum formula?

This is what took time! I had to go through the code and find what other lines call lines 360 through 390. There’s a hundred lines of code in the Commodore version of MLX, which isn’t that much. They jump around a lot, though. By my tally 68 of these 100 lines jump to, or can jump to, something besides the next line of code. I don’t know how that compares to modern programming languages, but it’s still dizzying. For a while I thought it might be a net saving in time to write something that would draw a directed graph of the program’s execution flow. It might still be worth doing that.

The checksum formula gets called by two pieces of code. One of them is the code when the program gets entered. MLX calculates a checksum and verifies whether it matches the ninth number entered. The other role is in printing out already-entered data. There, the checksum doesn’t have a role, apart from making the on-screen report look like the magazine listing.

Here’s the code that calls the checksum when you’re entering code:

440 POKE 198,0:
    GOSUB 360:
    IF F THEN PRINT IN$: PRINT " ";
    [ many lines about entering your data here ]
560 FOR I=1 TO 25 STEP 3:
    B$ = MID$(IN$, I):
    GOSUB 320:
    IF I<25 THEN GOSUB 380: A(I/3)=A
570 NEXT:
    IF A<>CK THEN GOSUB 1060:
    PRINT "ERROR: REENTER LINE ":
    F = 1:
    GOTO 440
580 GOSUB 1080:
    [ several more lines setting up a new line of data to enter ]

Line 320 started the routine that turned a hexadecimal number, such as 7F, into decimal, such as 127. It returns this number as the variable named A. IN$ was the input text, the part of the program you enter. This should be 27 characters long. A(I/3) was an element in an array, the string of eight instructions for that entry. Yes, you could use the same name for an array and for a single, unrelated, number. Yes, this was confusing.

But here’s the logic. Line 440 starts work on your entry. It calculates the part of the checksum that comes from the location in memory that data’s entered in. Line 560 does several bits of work. It takes the entered instructions and converts the strings into numbers. Then it takes each of those instruction numbers and adds its contribution to the checksum. Line 570 compares whether the entered checksum matches the computed checksum. If it does match, good. If it doesn’t match, then go back and re-do the entry.

The code for displaying a line of your machine language program is shorter:

630 GOSUB 360:
    B = BS + AD - SA:
    FOR I = B TO B+7:
       A = PEEK(I):
       GOSUB 350:
       GOSUB 380:
       PRINT S$;
640 NEXT:
    PRINT "";       
    A = CK:
    GOSUB 350:
    PRINT

The bit about PEEK is looking into the buffer, which holds the entered instructions, and reading what’s there. The GOSUB 350 takes the number ‘A’ and prints out its hexadecimal representation. GOSUB 360 calculates the part of the checksum that’s based on the memory location. The GOSUB 380 contributes the part based on every instruction. S$ is a space. It’s used to keep all the numbers from running up against each other.

So what is the checksum formula?

The checksum comes in two parts. The first part is based on the address at the start of the line. Let me call that the number AD . The second part is based on the entry, the eight instructions following the address. Let me call them d_1 through d_8 . So this is easiest described in two parts.

The base of the checksum, which I’ll call ck_{0} , is:

ck_{0} = AD - 254 \cdot \lfloor AD \div 256 \rfloor \mbox{ [ subtract 255 if this is 256 or greater ] }

For example, suppose the address is 49152 (in hexadecimal, C000), which was popular for Commodore 64 programming. Then ck_{0} would be 129. If the address is 2049 (in hexadecimal, 0801), another popular location, ck_{0} would be 17.

Generally, the initial ck_{0} increases by 1 as the memory address for the start of a line increases. If you entered a line that started at memory address 49153 (hexadecimal C001) for some reason, that ck_{0} would be 130. A line which started at address 49154 (hexadecimal C002) would have ck_{0} start at 131. This progression continues until ck_{0} would reach 256. Then that greater-than filter at the end of the expression intrudes. A line starting at memory address 49278 (C07E) has ck_{0} of 255, and one starting at memory address 49279 (C07F) has ck_{0} of 1. I see no reason behind this choice.
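
If you'd like to check those numbers, here's my reading of the base-checksum rule as a Python sketch; the repeated subtraction is my interpretation of the greater-than filter:

```python
def base_checksum(address):
    """ck_0 for a line starting at the given address, wrapped into 1-255."""
    ck = address - 254 * (address // 256)
    while ck >= 256:
        ck -= 255
    return ck

for address in (49152, 2049, 49153, 49278, 49279):
    print(hex(address), base_checksum(address))
# 0xc000 129, 0x801 17, 0xc001 130, 0xc07e 255, 0xc07f 1
```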

That’s the starting point. Now to use the actual data, the eight pieces d_1 through d_8 that are the actual instructions. The easiest way for me to describe this is to do it as a loop, using ck_{0} to calculate ck_{1} , and ck_{1} to define ck_{2} and so on.

ck_{j} = 2 \cdot ck_{j - 1} \mbox{ [ subtract 255 if this is 256 or greater ] } + d_{j} \mbox{ [ subtract 255 if this is 256 or greater ] }, \mbox{ for } j = 1 \ldots 8

That is, for each piece of data in turn, double the existing checksum and add the next data to it. If this sum is 256 or larger, subtract 255 from it. The working sum never gets larger than 512, thanks to that subtract-255-rule after the doubling. And then again that subtract-255-rule after adding d_j. Repeat through the eighth piece of data. That last calculated checksum, ck_{8} , is the checksum for the entry. If ck_{8} does match the entered checksum, go on to the next entry. If ck_{8} does not match the entered checksum, give a warning and go back and re-do the entry.
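
Here's the whole formula as a Python sketch — my translation, not anything printed in the magazine. I test it against the first line of the Q-Bird listing quoted later in this essay, which carries the printed checksum EC:

```python
def mlx_checksum(address, data):
    """MLX 2.0 checksum: start from the address-based ck_0, then for each
    byte double the working sum and add the byte, subtracting 255
    whenever the sum reaches 256 or greater."""
    ck = address - 254 * (address // 256)
    while ck >= 256:
        ck -= 255
    for d in data:
        ck = 2 * ck
        if ck >= 256:
            ck -= 255
        ck = ck + d
        if ck >= 256:
            ck -= 255
    return ck

line_0801 = [0x0B, 0x08, 0x00, 0x00, 0x9E, 0x32, 0x30, 0x36]
print(format(mlx_checksum(0x0801, line_0801), "02X"))  # EC
```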

Why was MLX written like that?

There are mysterious bits to this checksum formula. First is where it came from. It’s not, as far as I can tell, a standard error-checking routine, or if it is it’s presented in a form I don’t recognize. But I know only small pieces of information theory, and it might be that this is equivalent to a trick everybody knows.

The formula is, at heart, “double your working sum and add the next instruction, and repeat”. At the end, take the sum modulo 255 so that the checksum is no more than two hexadecimal digits. Almost. In studying the program I spent a lot of time on a nearly-functionally-equivalent rewrite that used modulo operations. I’m confident that if Apple II and Commodore BASIC had modulo functions, then MLX would have used them.

But those eight-bit BASICs did not. Instead the programs tested whether the working checksum had gotten larger than 255, and if it had, then subtracted 255 from it. This is a little bit different. It is possible for a checksum to be 255 (hexadecimal FF). This even happened. In the June 1985 Compute!, introducing the new MLX for the Apple II, we have this entry as part of the word processor Speedscript 3.0 that anyone could type in:

0848: 20 A9 00 8D 53 1E A0 00 FF

What we cannot have is a checksum of 0. (Unless a program began at memory location 0, and had instructions of nothing but 0. This would not happen. The Commodore 64, and the Apple II, used those low-address memory locations for system work. No program could use them.) Were the formulas written with modulo operations, we’d see 00 where we should see FF.
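
You can watch the difference play out in Python. Run the Speedscript entry above through the subtract-255 rule and through a true modulo-255 rule (a sketch; both wrap functions are my own phrasing):

```python
def checksum(address, data, wrap):
    # The MLX 2.0 recurrence, parameterized by the wrapping rule.
    ck = wrap(address - 254 * (address // 256))
    for d in data:
        ck = wrap(wrap(2 * ck) + d)
    return ck

def subtract_255(x):
    # MLX's actual rule: knock the sum down by 255 while it is 256 or more.
    while x >= 256:
        x -= 255
    return x

speedscript = [0x20, 0xA9, 0x00, 0x8D, 0x53, 0x1E, 0xA0, 0x00]
print(format(checksum(0x0848, speedscript, subtract_255), "02X"))       # FF
print(format(checksum(0x0848, speedscript, lambda x: x % 255), "02X"))  # 00
```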

The start of the code for Apple SpeedScript 3.0, showing a couple dozen lines of machine language code.
So this program, which was a legitimate and useful and working word processor, was about 5,699 bytes long. This article is about 31,000 characters (and the characters are longer than a byte back then was), so, that’s the kind of compact writing they were capable of back then.

Doubling the working sum and then setting it to be in a valid range — from 1 to 255 — is easy enough. I don’t know how the designer settled on doubling, but have hypotheses. It’s a good scheme for catching transposition errors, entering 20 FF D2 where one means to enter 20 D2 FF.

The initial ck_{0} seems strange. The equivalent step for the original MLX was the address on which the entry started, modulo 256. Why the change?

My hypothesis is this change was to make it harder to start typing in the wrong entry. The code someone typed in would be long columns of numbers, for many pages. The text wasn’t backed by alternating bands of color, or periodic breaks, or anything else that made it harder for the eye to skip one or more lines of machine language code.

In the original MLX, skipping one line, or even a couple lines, can’t go undetected. The original MLX entered six pieces of data at a time. If your eye skips a line, the wrong data will mismatch the checksum by 6, or by 12, or by 18 — by 6 times the number of lines you miss. To have the checksum not catch this error, you have to skip 128 lines, and that’s not going to happen. That’s about one and a quarter columns of text and the eye just doesn’t make that mistake. Skimming down a couple lines, yes. Moving to the next column, yes. Next column plus 37 lines? No.

An entire page of lines of hexadecimal code, three columns of 83 lines each with nine sets of two-hexadecimal-digit numbers to enter. Plus the four-digit hexadecimal representation of the memory address for the line. It's a lot of data to enter.
So anyway this is why every kid who was really into their Commodore 64 has a repetitive strain injury today. Page of machine language instructions for SpeedCalc, a spreadsheet program, just like every 13-year-old kid needed.

In the new MLX, one enters eight instructions of code at a time. So skipping a line increases the checksum by 8 times the number of lines skipped. If the initial checksum were the line’s starting address modulo 256, then we’d only need to skip 32 lines to get the same initial checksum. Thirty-two lines is a bit much to skip, but it’s less than half a column. That’s not too far. And the eye could see 0968 where it means to read 0868. That’s a plausible enough error and one the new checksum would be helpless against.

So the more complicated, and outright weird, formula that MLX 2.0 uses betters this. Skipping those 32 lines — entering the line for 0968 instead of 0868 — increases the base checksum by 2. Combined with the subtract-255 rule, you won’t get a duplicate of the checksum for, in most cases, 127 lines. Nobody is going to make that error.
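
A Python sketch makes the comparison concrete; base_checksum here is my translation of the MLX 2.0 rule:

```python
def base_checksum(address):
    # My translation of the MLX 2.0 base rule, wrapped into 1-255.
    ck = address - 254 * (address // 256)
    while ck >= 256:
        ck -= 255
    return ck

# The simple address-modulo-256 base cannot tell 0868 from 0968:
print(0x0868 % 256, 0x0968 % 256)                    # 104 104
# MLX 2.0's base does tell them apart:
print(base_checksum(0x0868), base_checksum(0x0968))  # 120 122
```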

So this explains the components. Why is the Commodore 64 version of MLX such a tangle of spaghetti code?

Here I have fewer answers. Part must be that Commodore BASIC was prone to creating messes. For example, it did not really have functions, smaller blocks of code with their own, independent, sets of variables. These would let, say, numbers convert from hexadecimal to decimal without interrupting the main flow of the program. Instead you had to jump, either by GOTO or GOSUB, to another part of the program. The Commodore or Apple II BASIC subroutine has to use the same variable names as the main part of the program, so, pick your variables wisely! Or do a bunch of reassigning values before and after the subroutine’s called.

Excerpt from two columns of the BASIC code for the Commodore 128 version of MLX. The first column includes several user-defined functions. The second column uses them as part of calculating the checksum.
And for completeness here’s excerpts from the Commodore 128 version of MLX. The checksum is calculated from lines 310 through 330. The reference to FNHB(AD) calls back to the rare user-defined function. On line 130 the DEF FN commands declare functions named HB, LB, and AD. The two-character codes before the line numbers, such as the SQ before the line 300, were for the new Automatic Proofreader, which did a better job catching common typing errors than the one using :rem (numbers) seen earlier.

To be precise, Commodore BASIC did let one define some functions. This by using the DEF FN command. It could take one number as the input, and return one number as output. The whole definition of the function couldn’t be more than 80 characters long. It couldn’t have a loop. Given these constraints, you can see why user-defined functions went all but unused.

The Commodore version jumps around a lot. Of its 100 lines of code, 68 jump or can jump to somewhere else. The Apple II version has 52 lines of code, 28 of which jump or can jump to another line. That’s just over 50 percent of the lines. I’m not sure how much of this reflects Apple II’s BASIC being better than Commodore’s. Commodore 64 BASIC we can charitably describe as underdeveloped. The Commodore 128 version of MLX is a bit shorter than the 64’s (90 lines of code). I haven’t analyzed it to see how much it jumps around. (But it does have some user-defined functions.)

Not quite a dozen lines of Apple II BASIC, including the lines that represent the checksum calculations for MLX 2.0.
The Apple II version of MLX just trusted you to type everything in right and good luck there. The checksum calculation — lines 560 and 570 here — is placed near the end of the program listing (it ends on line 610), rather than in the early-center.

The most mysterious element, to me, is the defining of some constants like Z2, which is 2, or Z5, which is 255. The Apple version of this doesn’t use these constants. It uses 2 or 255 or such in the checksum calculation. I can rationalize replacing 254 with Z4, or 255 with Z5, or 127 with Z7. The Commodore 64 allowed only 80 tokens in a command line. So these values might save only a couple characters, but if they’re needed characters, good. Z2, though, only makes the line longer.

I would have guessed that this reflected experiments. That is, trying out whether one should double the existing sum and add a new number, or triple, or quadruple, or even some more complicated rule. But the Apple II version appeared first, and has the number 2 hard-coded in. This might reflect that Tim Victor, author of the Apple II version, preferred to clean up such details while Ottis R Cowper, writing the Commodore version, did not. Lacking better evidence, I have to credit that to style.

Is this checksum any good?

Whether something is “good” depends on what it is supposed to do. The New MLX, or MLX 2.0, was supposed to make it possible to type in long strings of machine-language code while avoiding errors. So it’s good if it protects against those errors without being burdensome.

It’s a light burden. The person using this types in 18 keystrokes per line. This carries eight machine-language instructions plus one checksum number. So only one-ninth of the keystrokes are overhead, things to check that other work is right. That’s not bad. And it’s better than the original version of MLX, where up to 21 keystrokes gave six instructions. And one-seventh of the keystrokes were the checksum overhead.

The checksum quite effectively guards against entering instructions on a wrong line. To get the same checksum that (say) line 0811 would have you need to jump to line 0C09. In print, that’s another column over and a third of the way down the page. It’s a hard mistake to make.

Entering a wrong number in the instructions — say, typing in 22 where one means 20 — gets caught. The difference gets multiplied by some whole power of two in the checksum. Which power depends on what number’s entered wrong. If the eighth instruction is entered wrong, the checksum is off by that error. If the seventh instruction is wrong, the checksum is off by two times that error. If the sixth instruction is wrong, the checksum is off by four times that error. And so on, so that if the first instruction is wrong, the checksum is off by 128 times that error. And these errors are taken not-quite-modulo 255.

The only way to enter a single number wrong without the checksum catching it is to type something 255 higher or lower than the correct number. And MLX confines you to entering a two-hexadecimal-digit number, that is, a number from 0 to 255. The only mistake it’s possible to make is to enter 00 where you mean FF, or FF where you mean 00.
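
A brute-force check in Python bears this out. Using my translation of the checksum and the first Q-Bird line, try every wrong value in every position and collect the substitutions that slip through:

```python
def mlx_checksum(address, data):
    # My translation of the MLX 2.0 checksum formula.
    ck = address - 254 * (address // 256)
    while ck >= 256:
        ck -= 255
    for d in data:
        ck = 2 * ck
        if ck >= 256:
            ck -= 255
        ck += d
        if ck >= 256:
            ck -= 255
    return ck

line = [0x0B, 0x08, 0x00, 0x00, 0x9E, 0x32, 0x30, 0x36]
good = mlx_checksum(0x0801, line)

misses = []
for pos in range(8):
    for wrong in range(256):
        if wrong == line[pos]:
            continue
        bad = line[:pos] + [wrong] + line[pos + 1:]
        if mlx_checksum(0x0801, bad) == good:
            misses.append((pos, wrong))

print(misses)  # only the line's two 00 bytes, each misread as FF
```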

What about transpositions? Here, the new MLX checksum shines. Doubling the sum so far and adding a new term to it makes transpositions very likely to be caught. Few slip through. A transposition of the data at position number j and at position number k will go unnoticed only when d_j and d_k happen to make true

\left(2^j - 2^k\right)\cdot\left(d_j - d_k\right) = 0 \mbox{ mod } 255

This doesn’t happen much. It needs d_j and d_k to be 255 apart. Or for \left(2^j - 2^k\right) to be a divisor of 255 and d_j - d_k to be another divisor. I’ll discuss when that happens in the next section.

In practice, this is a great simple checksum formula. It isn’t hard to calculate, it catches most of the likely data-entry mistakes, and it doesn’t require much extra data entry to work.

What flaws did the checksum have?

The biggest flaw the MLX 2.0 checksum scheme has is that it’s helpless to distinguish FF, the number 255, from 00, the number 0. It’s so vulnerable to this that a warning got attached to the MLX listing in every issue of the magazines:

Because of the checksum formula used, MLX won’t notice if you accidentally type FF in place of 00, and vice versa. And there’s a very slim chance that you could garble a line and still end up with a combination of characters that adds up to the proper checksum. However, these mistakes should not occur if you take reasonable care while entering data.

So when can a transposition go wrong? Well, any time you swap a 00 and an FF on a line, however far apart they are. But also if you swap the elements in position j and k, if 2^j - 2^k is a divisor of 255 and d_j - d_k works with you, modulo 255.

For a transposition of adjacent instructions to go wrong — say, the third and the fourth numbers in a line — you need the third and fourth numbers to be 255 apart. That is, entering 00 FF where you mean FF 00 will go undetected. But that’s the only possible case for adjacent instructions.

A transposition past one space — say, swapping the third and the fifth numbers in a line — needs the two to be 85, 170, or 255 away. So, if you were supposed to enter (in hexadecimal) EE A9 44 and you instead entered 44 A9 EE, it would go undetected. That’s the only way a one-space transposition can happen. MLX will catch entering EE A9 45 as 45 A9 EE.

A transposition past two spaces — say, swapping the first and the fourth numbers — will always be caught unless the numbers are 255 apart, that is, a 00 and an FF. A transposition past three spaces — like, swapping the first and the fifth numbers — is vulnerable again. Then if the first and fifth numbers are off by 17 (or a multiple of 17) the swap will go unnoticed. A transposition across four spaces will always be caught unless it’s 00 for FF. A transposition across five spaces — like, swapping the second and eighth numbers — has to also have the two numbers be 85 or 170 or 255 apart to sneak through. And a transposition across six spaces — this has to be swapping the first and last elements in the line — again will be caught unless it’s 00 for FF.
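
If you'd rather not trust my factoring of 255, brute force settles it. This Python sketch (my checksum translation again, with an arbitrary line address) tries every pair of values in two positions and collects the differences that escape:

```python
def mlx_checksum(address, data):
    # My translation of the MLX 2.0 checksum formula.
    ck = address - 254 * (address // 256)
    while ck >= 256:
        ck -= 255
    for d in data:
        ck = 2 * ck
        if ck >= 256:
            ck -= 255
        ck += d
        if ck >= 256:
            ck -= 255
    return ck

def escaped_differences(j, k, address=0x0801):
    """Differences between two data values for which swapping positions
    j and k (counted from zero) leaves the checksum unchanged."""
    escaped = set()
    for low in range(256):
        for high in range(low + 1, 256):
            line = [0] * 8
            line[j], line[k] = low, high
            swapped = [0] * 8
            swapped[j], swapped[k] = high, low
            if mlx_checksum(address, line) == mlx_checksum(address, swapped):
                escaped.add(high - low)
    return sorted(escaped)

print(escaped_differences(2, 3))  # adjacent positions: [255]
print(escaped_differences(2, 4))  # past one space: [85, 170, 255]
print(escaped_differences(0, 4))  # past three spaces: the multiples of 17
```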

Front cover of the June 1985 issue of Compute!, with the feature article being Apple Speedscript, a 'powerful word processor' inside. The art is a watercolor picture of a man in Apple T-shirt riding a bicycle. Behind him is a Commodore 128 floating in midair, and in front of him is a hand holding a flip-book animation.
So if you weren’t there in the 80s? This is pretty much what it was like. Well-toned men with regrettable moustaches pedaling their bikes while eight-bit computers exploded out of the void behind them and giants played with flip books in front of them.

Listing all the possible exceptions like this makes it sound dire. It’s not. The most likely transposition someone is going to make is swapping the order of two elements. That’s caught unless one of the numbers is FF and the other 00. If the transposition swaps non-neighboring numbers there’s a handful of new cases that might slip through. But you can estimate how often two numbers separated by one or three or five spaces are also different by 85 or 34 or another dangerous combination. (That estimate would suppose that every number from 0 to 255 is equally likely. They’re not, though, because popular machine language instruction codes such as A9 or 20 will be over-represented. So will references to important parts of computer memory such as, on the Commodore, FFD2.)

You will forgive me for not listing all the possible cases where competing typos in entering numbers will cancel out. I don’t want to figure them out either. I will go along with the magazines’ own assessment that there’s a “very slim chance” one could garble the line and get something that passes, though. After all, there are 18,446,744,073,709,551,616 conceivable lines of code one might type in, and only 255 possible checksums. Some garbled lines must match the correct checksum.

Could the checksum have been better?

The checksum could have been different. This is a trivial conclusion. “Better”? That demands thought. A good error-detection scheme needs to catch errors that are common or that are particularly dangerous. It should add as little overhead as possible.

The MLX checksum as it is catches many of the most common errors. A single entry mis-keyed, for example, except for the case of typing 00 in place of FF or vice-versa. Or transposing one number for the one next to it. It even catches most transpositions with spaces between the transposed numbers. It catches almost all cases where one enters the entirely wrong line. And it does this for only two more keystrokes per eight pieces of data entered. That’s doing well.

The obvious gap is the inability to distinguish 00 from FF. There’s a cure for that, of course. Count the number of 00’s — or the number of FF’s — in a line, and include that as part of the checksum. It wouldn’t be particularly hard to enter (going back to the Q-Bird example)

0801:0B 08 00 00 9E 32 30 36 EC 2
0809:31 00 00 00 A9 00 8D 20 3A 4
0811:D0 20 CF 14 20 1B 08 4C 96 0
0819:C7 0B A9 93 20 D2 FF A9 34 0

(Or if you prefer, to have the extra checksums be 0 0 0 1.)
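
The extra digit is just a count. In Python, checking the four lines above:

```python
lines = [
    [0x0B, 0x08, 0x00, 0x00, 0x9E, 0x32, 0x30, 0x36],  # 0801
    [0x31, 0x00, 0x00, 0x00, 0xA9, 0x00, 0x8D, 0x20],  # 0809
    [0xD0, 0x20, 0xCF, 0x14, 0x20, 0x1B, 0x08, 0x4C],  # 0811
    [0xC7, 0x0B, 0xA9, 0x93, 0x20, 0xD2, 0xFF, 0xA9],  # 0819
]
print([line.count(0x00) for line in lines])  # [2, 4, 0, 0]
print([line.count(0xFF) for line in lines])  # [0, 0, 0, 1]
```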

This adds to the overhead, yes, one more keystroke in what is already a good bit of typing. And one may ask whether you’re likely to ever touch 00 when you mean FF. The keys aren’t near one another. Then you learn that MLX soon got a patch which made keying much easier. It did this by making the letter keys in the rows under 7 8 9 0 enter digits. And the mapping used (on the Commodore 64) put the key to enter F right next to the key to enter 0.

The page of boilerplate text explaining MLX after it became a part of nearly every issue. In the rightmost column a chart explains how the program translates keys so that, for example, U, I, and O are read as the numbers 4, 5, and 6, to make a hexadecimal keypad for faster entry.
The last important revision of MLX made a data-entry keypad out of, for the Commodore 64, some of the letters on the keyboard. For the Commodore 128, it made a data-entry keypad out of … the keypad, but fitting in the hexadecimal numbers A, B, C, D, E, and F took some thought. But the 64 version still managed to put F and 0 next to each other, making it possible to enter FF where you meant 00 or vice-versa.

If you get ambitious, you might attempt even cleverer schemes. Suppose you want to catch those off-by-85 or off-by-17 differences that let transpositions go undetected. Why not, say, copy the last bit of each of your eight data, and use that to assemble a new checksum number? So, for example, in line 0801 up there the last bit of each number was 1-0-0-0-0-0-0-0 which is boring, but gives us 128, hexadecimal 80, as a second checksum. Line 0809 has last bits 1-0-0-0-1-0-1-0, or 138 (hex 8A). And so on; so we could have:

0801:0B 08 00 00 9E 32 30 36 EC 2 80
0809:31 00 00 00 A9 00 8D 20 3A 4 8A
0811:D0 20 CF 14 20 1B 08 4C 96 0 24
0819:C7 0B A9 93 20 D2 FF A9 34 0 F3
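
The sampling is easy to state in Python — take the last bit of each byte, with the first byte supplying the most significant bit of the new number. A sketch:

```python
def sampled_bit_checksum(data):
    """Assemble a byte from the last bit of each of the eight data bytes,
    the first byte supplying the most significant bit."""
    value = 0
    for d in data:
        value = (value << 1) | (d & 1)
    return value

print(format(sampled_bit_checksum([0x0B, 0x08, 0x00, 0x00, 0x9E, 0x32, 0x30, 0x36]), "02X"))  # 80
print(format(sampled_bit_checksum([0x31, 0x00, 0x00, 0x00, 0xA9, 0x00, 0x8D, 0x20]), "02X"))  # 8A
```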

Now, though? We’ve got five keystrokes of overhead to sixteen keystrokes of data. Getting a bit bloated. It could be cleaned up a little; the single-digit count of 00’s (or FF’s) is redundant to the two-digit number formed from the cross-section I did there.

And if we were working in a modern programming language we could reduce the MLX checksum and this sampled-digit checksum to a single number. Use the bitwise exclusive-or of the two numbers as the new, ‘mixed’ checksum. You get two checksums in the space of one. In the program you’d build the sampled-digit checksum, exclusive-or it with the entered mixed checksum, and get back what should be the MLX checksum. Or take the mixed checksum and exclusive-or it with the MLX checksum, and you get the sampled-digit checksum.
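
In Python the mixing is a single operator. Using line 0801's two checksums from the listing above:

```python
mlx = 0xEC       # the classic MLX checksum for line 0801
sampled = 0x80   # the sampled-digit checksum for the same line
mixed = mlx ^ sampled

# Either checksum, exclusive-or'd with the mixed one, recovers the other:
print(format(mixed ^ sampled, "02X"))  # EC
print(format(mixed ^ mlx, "02X"))      # 80
```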

This almost magic move has two problems. The sampled-digit checksum could catch transpositions that are off by 85 or 17. It won’t catch transpositions off by 170 or by 34, though, just as deadly. It will catch transpositions off by odd multiples of 17, at least. You would catch transpositions off by 170 or by 34 if you sampled the seventh digit instead. Or if you built a sample based on the fifth or the third digit. But then you won’t catch transpositions off by 85 or by 17. You can add new sampled checksums. This threatens us again with putting in too many check digits for actual data entry.

The other problem is worse: Commodore 64 BASIC did not have a bitwise exclusive-or command. I was shocked, and I was more shocked to learn that Applesoft BASIC also lacked an exclusive-or. The Commodore 128 had exclusive-or, at least. But given that lack, and the inability to add an exclusive-or function that wouldn’t be infuriating? I can’t blame anyone for not trying.

So there is my verdict. There are some obvious enough ways that MLX’s checksum might have been able to catch more errors. But, given the constraints of the computers it was running on? A more sensitive error check likely would not have been available. Not without demanding much more typing. And, as another practical matter, demanding the program listings in the magazine be smaller and harder to read. The New MLX did, overall, a quite good job catching errors without requiring too much extra typing. We’ll probably never see its like again.

In Which I Feel A Little Picked On


This is not a proper Reading the Comics post, since there’s nothing mathematical about this. But it does reflect a project I’ve been letting linger for months and that I intend to finish before starting the abbreviated Mathematics A-to-Z for this year.

Panel labelled Monday-Friday. A man sitting in an easy chair says, 'I'll get to it this weekend.' Panel labelled Weekend. The man sitting in the easy chair says, 'I need to relax. I'll do it next week.'
Jeff Stahler’s Moderately Confused for the 12th of June, 2021. Essays in which I discuss Moderately Confused, usually for its mathematical content, are at this link.

In the meanwhile. I have a person dear to me who’s learning college algebra. For no reason clear to me this put me in mind of last year’s essay about Extraneous Solutions. These are fun and infuriating friends. They’re created when you follow the rules about how you can rewrite a mathematical expression without changing its value. And yet sometimes you do these rewritings correctly and get a would-be solution that isn’t actually one. So I’d shared some thoughts about why they appear, and what tedious work keeps them from showing up.

Iva Sallay teaches you how to host the Playful Math Education Blog Carnival


Iva Sallay, creator of the Find The Factors recreational mathematics puzzle and a kind friend to my blog, posted Yes, YOU Can Host a Playful Math Education Blog Carnival. It explains in quite good form how to join in Denise Gaskins’s roaming blog event. It tries to gather educational or recreational or fun or just delightful mathematics links.

Hosting the blog carnival is a great experience I recommend for mathematics bloggers at least once. I seem to be up to hosting it about once a year, most recently in September 2020. Most important in putting one together is looking at your mathematics reading with different eyes. Sallay, though, goes into specifics about what to look for, and how to find that.

If you’d like to host a carnival you can sign up now for the June slot, blog #147, or for most of the rest of the year.

History of Philosophy podcast has episode on Nicholas of Cusa


I continue to share things I’ve heard, rather than created. Peter Adamson’s podcast The History Of Philosophy Without Any Gaps this week had an episode about Nicholas of Cusa. There’s another episode on him scheduled for two weeks from now.

Nicholas is one of those many polymaths of the not-quite-modern era. Someone who worked in philosophy, theology, astronomy, mathematics, with a side in calendar reform. He’s noteworthy in mathematics and theology and philosophy for trying to understand the infinite and the infinitesimal. Adamson’s podcast — about a half-hour — focuses on the philosophical and theological sides of things. But the mathematics can’t help creeping in, with questions like, how can you tell the difference between a straight line and the edge of a circle with infinitely large diameter? Or between a circle and a regular polygon with infinitely many sides?

The St Andrews Maths History site has an article on Nicholas that focuses more on the kinds of work he did.

How May 2021 Treated My Mathematics Blog


I’ll take this chance now to look over my readership from the past month. It’s either that or actually edit this massive article I’ve had sitting for two months. I keep figuring I’ll edit it this next weekend, and then the week ends before I do. This weekend, though, I’m sure to edit it into coherence. Just you watch.

According to WordPress I had 3,068 page views in May of 2021. That’s an impressive number: my 12-month running mean, leading up to May, was 2,366.0 views per month. The 12-month running median is a similar 2,394 views per month. That startles me, especially as I don’t have any pieces that obviously drew special interest. Sometimes there’s a flood of people to a particular page, or from a particular site. That didn’t happen this month, at least as far as I can tell. There was a steady flow of readers to all kinds of things.

There were 2,085 unique visitors, according to WordPress. That’s down from April, but still well above the running mean of 1,671.9 visitors. And above the median of 1,697 unique visitors.

When we rate things per post the dominance of the past month gets even more amazing. That’s an average 340.9 views per posting this month, compared to a mean of 202.5 or a median of 175.5. (Granted, yes, the majority of those were to things from earlier months; there’s almost ten years of backlog and people notice those too.) And it’s 231.7 unique visitors per posting, versus a mean of 144.7 and a median of 127.4.

Bar chart of two and a half years's worth of monthly readership figures. The last several months have seen a steady roughly 3,000 page views and 2,000 unique visitors a month, an increase over the preceding years.
The most important thing in tracking all this is I hope to someday catch WordPress giving me the same readership statistics two months in a row.

There were 48 likes given in May. That’s below the running mean of 56.3 and median of 55.5. Per-posting, though, these numbers look better. That’s 5.3 likes per posting over the course of May. The mean per posting was 4.5 and the median 4.1 over the previous twelve months. There were 20 comments, barely above the running mean of 19.4 and running median of 18. But that’s 2.2 comments per posting, versus a mean per posting of 1.7 and a median per posting of 1.4. I make my biggest impact with readers by shutting up more.

I got around to publishing nine things in May. A startling number of them were references to other people’s work or, in one case, me talking about using an earlier bit I wrote. Here’s the posts in descending order of popularity. I’m surprised how much this differs from simple chronological order. It suggests there are things people are eager to see, and one of them is Reading the Comics posts. Which I don’t do on a schedule anymore.

As that last and least popular post says, I plan to do an A-to-Z this year. A shorter one than usual, though, one of only fifteen weeks’ duration, and covering only ten different letters. It’s been a hard year and I need to conserve my energies. I’ll begin appealing for subjects soon.

In May 2021 I posted 4,719 words here, figures WordPress, bringing me to a total of 22,620 words this year. This averages out at 524.3 words per posting in May, and 552 words per post for the year.

As of the start of June I’ve had 1,623 posts to here, which gathered a total 135,779 views from a logged 79,646 unique visitors.

I’d be glad to have you as a regular reader. To be one that never appears in my statistics you can use the RSS feed for my essays. If you don’t have an RSS reader you can sign up for a free account at Dreamwidth or Livejournal. You can add any RSS feed by https://www.dreamwidth.org/feeds/ or https://www.livejournal.com/syn and have it appear on your Friends page.

If you have a WordPress account, you can add my posts to your Reader. Use the “Follow NebusResearch” button to do that. Or you can use “Follow NebusResearch by E-mail” to get posts sent to your mailbox. That’s the way to get essays before I notice their most humiliating typos.

I’m @nebusj on Twitter, but don’t read or interact with it. It posts announcements of essays is all. I do read @nebusj@mathstodon.xyz, on the mathematics-themed Mastodon instance.

Thank you for reading, however it is you’re doing, and I hope you’ll do more of that. If you’re not reading, I suppose I don’t have anything more to say.

Announcing my 2021 Mathematics A-to-Z


I enjoy the tradition of writing an A-to-Z, a string of essays about topics from across the alphabet and mostly chosen by readers and commenters. I’ve done at least one each year since 2015 and it’s a thrilling, exhausting performance. I didn’t want to miss this year, too.

But note the “exhausting” there. It’s been a heck of a year and while I’ve been more fortunate than many, I also know my limits. I don’t believe I have the energy to do the whole alphabet. I tell myself these essays don’t have to be big productions, and then they turn into 2,500 words a week for 26 weeks. It’s nice work but it’s also a (slender) pop mathematics book a year, on top of everything else I write in the corners around my actual work.

So how to do less, and without losing the Mathematics A-to-Z theme? And Iva Sallay, creator of Find the Factors and always a kind and generous reader, had the solution. This year I’ll plan on a subset of the alphabet, corresponding to a simple phrase. That phrase? I’m embarrassed to say how long it took me to think of, but it must be the right one.

I plan to do, in this order, the letters of “MATHEMATICS A-TO-Z”.

That is still a 15-week course of essays, but I did want something that would still be a worthwhile project. I intend to keep the essays shorter this year, aiming at a 1,000-word cap, so look forward to me breaking 4,000 words explaining “saddle points”. This also implies that I’ll be doubling and even tripling letters, for the first time in one of these sequences. There’s to be three A’s, three T’s, and two M’s. Also one each of C, E, H, I, O, S, and Z. I figure I have one Z essay left before I exhaust the letter. I may deal with that problem in 2022.

I plan to set my call for topics soon. I’d like to get the sequence started publishing in July, so I have to do that soon. But to give some idea the range of things I’ve discussed before, here’s the roster of past, full-alphabet, A-to-Z topics:

I, too, am fascinated by the small changes in how I titled these posts and even chose whether to capitalize subject names in the roster. By “am fascinated by the small changes” I mean “am annoyed beyond reason by the inconsistencies”. I hope you too have an appropriate reaction to them.

Reading the Comics, May 25, 2021: Hilbert’s Hotel Edition


I have only a couple strips this time, and from this week. I’m not sure when I’ll return to full-time comics reading, but I do want to share strips that inspire something.

Carol Lay’s Lay Lines for the 24th of May riffs on Hilbert’s Hotel. This is a metaphor often used in pop mathematics treatments of infinity. So often, in fact, that a friend snarked that he wished for any YouTube mathematics channel that didn’t do the same three math theorems. Hilbert’s Hotel was among them. I think I’ve never written a piece specifically about Hilbert’s Hotel. In part because every pop mathematics blog has one, so there are better expositions available. I have a similar restraint against a detailed exploration of the different sizes of infinity, or of the Monty Hall Problem.

Narration, with illustrations to match: 'Hilbert's Hotel: The infinite hotel was always filled to capacity. Yet if a new guest arrived, she was always given a room. After all, there were an infinite number of rooms. This paradox assumed that management could always add one or more to infinity. The brain-bruising hotel attracted a lot of mathematicians and philosophers. They liked to argue into the wee hours about the nature of infinity. Unfortunately, they were a bunch of slobs. Management had to hire a new maid to keep up with the mess. Daunted by the number of rooms to clean... the maid set fire to the joint. The philosophers escaped ... but the hotel burned forever.'
Carol Lay’s Lay Lines for the 24th of May, 2021. This and a couple other essays inspired by something in Lay Lines are at this link. This comic is, per the copyright notice, from 2002. I don’t know anything of its publication history past that.

Hilbert’s Hotel is named for David Hilbert, of Hilbert problems fame. It’s a thought experiment to explore weird consequences of our modern understanding of infinite sets. It presents various cases about matching elements of a set to the whole numbers, by making it about guests in hotel rooms. And then translates things we accept in set theory, like combining two infinitely large sets, into material terms. In material terms, the operations seem ridiculous. So the set of thought experiments gets labelled “paradoxes”. This is not in the logician sense of being things both true and false, but in the ordinary sense that we are asked to reconcile our logic with our intuition.
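The new-guest trick the Hotel dramatizes is, underneath, just a function on the whole numbers. A minimal sketch (in Python; the names are my own):

```python
def room_after_new_guest(n):
    """Where the guest now in room n goes when a new guest arrives:
    everyone shifts up one room, which frees room 1 without evicting anyone."""
    return n + 1

# Check the first few rooms: every old guest still has a room,
# no two guests share one, and room 1 is now free.
moved = [room_after_new_guest(n) for n in range(1, 6)]
assert moved == [2, 3, 4, 5, 6]
assert 1 not in moved
```

The same idea, with a different reassignment function, handles an infinite busload of new guests.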

So the Hotel serves a curious role. It doesn’t make a complex idea understandable, the way many demonstrations do. It instead draws attention to the weirdness in something a mathematics student might otherwise nod through. It does serve some role, or it wouldn’t be so popular now.

It hasn’t always been popular, though. Hilbert introduced the idea in 1924, though, per a paper by Helge Kragh, only to address one question. A modern pop mathematician would have a half-dozen problems. George Gamow’s 1947 book One Two Three … Infinity brought it up again, but it didn’t stay in the public eye. It wasn’t until the 1980s that it got a secure place in pop mathematics culture, and that by way of philosophers and theologians. If you aren’t up to reading the whole of Kragh’s paper, I did summarize it a bit more completely in this 2018 Reading the Comics essay.

Anyway, Carol Lay does a great job making a story of it.

Two people stand in front of a chalkboard which contains a gibberish equation: 'sqrt(PB+J(ax pi)^2) * Z/y { = D/8 + H} - 17^4 x G + z x 2 / 129 \div +/o + exp(null set mickey-mouse-ears), et cetera. One person says: 'Oh, it definitely proves something, all right ... when it comes to actual equations, at least one cartoonist doesn't know squat.'
Leigh Rubin’s Rubes for the 25th of May, 2021. This and other essays mentioning Rubes are at this link. I’m not sure whether that symbol at the end of the second line is meant to be Mickey Mouse ears, or a Venn diagram, or a symbol that I’m not recognizing.

Leigh Rubin’s Rubes for the 25th of May I’ll toss in here too. It’s a riff on the art convention of a blackboard equation being meaningless. Normally, of course, the content of the equation doesn’t matter. So it gets simplified and abstracted, for the same reason one draws a brick wall as four separate patches of two or three bricks together. It sometimes happens that a cartoonist makes the equation meaningful. That’s because they’re a recovering physics major like Bill Amend of FoxTrot. Or it’s because the content of the blackboard supports the joke. Which, in this case, it does.

The essays I write about comic strips I tag so they appear at this link. You may enjoy some more pieces there.

In Which I Get To Use My Own Work


We have goldfish, normally kept in an outdoor pond. It’s not a deep enough pond that it would be safe to leave them out for a very harsh winter. So we keep as many as we can catch in a couple 150-gallon tanks in the basement.

Recently, and irritatingly close to when we’d set them outside, the nitrate level in the tanks grew too high. Fish excrete ammonia. Microorganisms then turn the ammonia into nitrites and then nitrates. In the wild, the nitrates then get used by … I dunno, plants? Which don’t thrive enough in our basement to clean them out. To get the nitrate out of the water all there is to do is replace the water.

We have six buckets, each holding five gallons, of water that we can use for replacement. So there’s up to 30 gallons of water that we could change out in a day. Can’t change more because tap water contains chloramines, which kill bacteria (good news for humans) but hurt fish (bad news for goldfish). We can treat the tap water to neutralize the chloramines, but want to give that time to finish. I have never found a good reference for how long this takes. I’ve adopted “about a day” because we don’t have a water tap in the basement and I don’t want to haul more than 30 gallons of water downstairs any given day.

So I got thinking, what’s the fastest way to get the nitrate level down for both tanks? Change 15 gallons in each of them once a day, or change 30 gallons in one tank one day and the other tank the next?

Several dozen goldfish, most of them babies, within a 150-gallon rubber stock tank, their wintering home.
Not a current picture, but the fish look about like this still.

And, happy to say, I realized this was the tea-making problem I’d done a couple months ago. The tea-making problem had a different goal, that of keeping as much milk in the tea as possible. But the thing being studied was how partial replacements of a solution with one component affects the amount of the other component. The major difference is that the fish produce (ultimately) more nitrates in time. There’s no tea that spontaneously produces milk. But if nitrate-generation is low enough, the same conclusions follow. So, a couple days of 30-gallon changes, in alternating tanks, and we had the nitrates back to a decent level.
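The arithmetic behind that conclusion is small enough to sketch. Each partial change replacing a fraction f of the water multiplies the nitrate concentration by (1 − f), so the question is whether (1 − 0.1)² or (1 − 0.2) leaves less. A minimal sketch in Python (it ignores the nitrate the fish keep producing, which is the caveat mentioned above):

```python
TANK = 150.0  # gallons in each tank

def after_changes(concentration, fractions):
    """Relative nitrate concentration left after a sequence of partial
    water changes. Replacing a fraction f of the tank with clean water
    multiplies the concentration by (1 - f)."""
    for f in fractions:
        concentration *= 1.0 - f
    return concentration

# Strategy one: change 15 gallons in this tank each day, two days running.
split = after_changes(1.0, [15 / TANK, 15 / TANK])  # (0.9)^2 = 0.81

# Strategy two: change 30 gallons in this tank once, skip it the next day.
lump = after_changes(1.0, [30 / TANK])              # 0.80

assert lump < split  # the single bigger change wins, if only barely
```

The gap between 81 percent and 80 percent is small, but it compounds over more days of changes.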

We’d have put the fish outside this past week if I hadn’t broken, again, the tool used for cleaning the outside pond.

Homologies and Cohomologies explained quickly


I’d hoped to have a pretty substantial post today. I fell short of having time to edit the beast into shape. I apologize but hope to have that soon.

I also hope to soon have an announcement about a Mathematics A-to-Z for this year. But until then, here’s this.

Several years ago in an A-to-Z I tried to explain cohomologies. I wasn’t satisfied with it, as, in part, I couldn’t think of a good example. You know, something you could imagine demonstrating with specific physical objects. I can reel off definitions, once I look up the definitions, but there’s only so many people who can understand something from that.

Quanta Magazine recently ran an article about homologies. It’s a great piece, if we get past the introduction of topology with that doughnut-and-coffee-cup joke. (Not that it’s wrong, just that it’s tired.) It’s got pictures, too, which is great.

This I came to notice because Refurio Anachro on Mathstodon wrote a bit about it. This in a thread of toots talking about homologies and cohomologies. The thread at this link is more for mathematicians than the lay audience, unlike the Quanta Magazine article. If you’re comfortable reading about simplexes and linear operators and multifunctions you’re good. Otherwise … well, I imagine you trust that cohomologies can take care of themselves. But I feel better-informed for reading the thread. And it includes a link to a downloadable textbook in algebraic topology, useful for people who want to give that a try on their own.

In Our Time podcast has an episode on Longitude


The BBC’s In Our Time program, and podcast, did a 50-minute chat about the longitude problem. That’s the question of how to find one’s position, east or west of some reference point. It’s an iconic story of pop science and, I’ll admit, I’d think anyone likely to read my blog already knows the rough outline of the story. But you never know what people don’t know. And even if you do know, it’s often enjoyable to hear the story told a different way.

The mathematics content of the longitude problem is real, although it’s not discussed more than in passing during the chat. The core insight Western mapmakers used is that the difference between local (sun) time and a reference point’s time tells you how far east or west you are of that reference point. So then the question becomes how you know what your reference point’s time is.
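That relationship is simple enough to write down: the Earth turns 360 degrees in 24 hours, so each hour of difference between local solar time and the reference clock is 15 degrees of longitude. A minimal sketch (in Python; the function name and sign convention are my own):

```python
def longitude_degrees(local_solar_hours, reference_clock_hours):
    """Longitude relative to the reference meridian, in degrees.
    Positive means east of the reference, negative means west:
    15 degrees of longitude for each hour of time difference."""
    return (local_solar_hours - reference_clock_hours) * 15.0

# At local solar noon the chronometer, still keeping London time,
# reads 16:00: the ship is four hours behind London, so 60 degrees west.
assert longitude_degrees(12.0, 16.0) == -60.0
```

The hard part, historically, was not this formula but knowing the reference time at all.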

This story, as it’s often told in pop science treatments, tends to focus on the brilliant clockmaker John Harrison, and the podcast does a fair bit of this. Harrison spent his life building a series of ever-more-precise clocks. These could keep London time on ships sailing around the world. (Or at least to the Caribbean, where the most profitable, slavery-driven, British interests were.) But he also spent decades fighting with the authorities he expected to reward him for his work. It makes for an almost classic narrative of lone genius versus the establishment.

But, and I’m glad the podcast discussion comes around to this, the reality is more ambiguous than this. (Actual history is always more ambiguous than whatever you think.) Part of the goal of the British (and other powers) was finding a practical way for any ship to find longitude. Granted Harrison could build an advanced, ingenious clock more accurate than anyone else could. Could he build the hundreds, or thousands, of those clocks that British shipping needed? Could anyone?

And the competing methods for finding longitude were based on astronomy and calculation. The moment when, say, the Moon passes in front of Jupiter is the same for everyone on Earth. (At least for the accuracy needed here.) It can, in principle, be forecast years, even decades ahead of time. So why not print up books listing astronomical events for the next five years and the formulas to turn observations into longitudes? Books are easy to print. You already train your navigators in astronomy so that they can find latitude. (This by how far above the horizon the pole star, or the sun, or another identifiable feature is.) And, incidentally, you gain a way of computing longitude that you don’t lose if your clock breaks. I appreciated having some of that perspective shown.

(The problem of longitude on land gets briefly addressed. The same principles that work at sea work on land. And land offers some secondary checks. For an unmentioned example there’s triangulation. It’s a great process, and a compelling use of trigonometry. I may do a piece about that myself sometime.)

Also a thing I somehow did not realize: British English pronounces “longitude” with a hard G sound. Huh.

Reading the Comics update: Wavehead does not have a name


So this is not a mathematics-themed comic update, not really. It’s just a bit of startling news about frequent Reading the Comics subject Andertoons. A comic strip back in December revealed that Wavehead had a specific name. According to the strip from the 3rd of December, the student most often challenging the word problem or the definition on the blackboard is named Tommy.

And then last week we got this bombshell:

Wavehead on the telephone at the school office: 'Mom, it's Charlie. Boy, you'd think with everything in the news that Mrs Philips would have more to worry about than me talking in class, but here we are.'
Mark Anderson’s Andertoons for the 4th of May, 2021. This strip previously ran the 23rd of July, 2018, and don’t think I’m not surprised to discover Andertoons has been in reruns.

So, also, it turns out I should have already known this since the strip ran in 2018 also. All I can say is I have a hard enough time reading nearly every comic strip in the world. I can’t be expected to understand them too.

So as not to leave things too despairing let me share a mathematics-mentioning Andertoons from yesterday and also from July 2018.

On the board, the fraction 3/4 with the numerator and denominator labelled. Wavehead: 'You know, for something that sounds like two killer robots, this is really disappointing.'
Mark Anderson’s Andertoons for the 10th of May, 2021. This strip previously ran the 29th of July, 2018, and I discussed it then.

I don’t know if it’s run before that.

At this link are my essays discussing Andertoons. And my Reading the Comics essays are at this link.

How April 2021 Treated My Mathematics Blog, and a question about my A-to-Z’s


I grant that I’m later even than usual in doing my readership recap. That news about how to get rid of the awful awful awful Block Editor was too important to not give last Wednesday’s publication slot. But let me get back to the self-preening and self-examination that people always seem to like and that I never take any lessons from.

In April 2021 there were 3,016 page views recorded here, according to WordPress. These came from 2,298 unique visitors. These are some impressive-looking numbers, especially given that in April I only published nine pieces. And one of those was the readership report for March.

The 3,016 page views is appreciably above the running mean of 2,267.9 views per month for the twelve months leading up to April. It’s also above the running median of 2,266.5 for the twelve months before. And, per posting, the apparent growth is the more impressive. This averages at 335.1 views per posting. The twelve-month running mean was 185.5 views per posting, and twelve-month running median 161.0.

Similarly, unique visitors are well above the averages. 2,298 unique visitors in April is well above the running mean of 1,589.9, and the running median of 1,609.5. The total comes out to 255.3 unique visitors per posting. The running mean, per posting, for the twelve months prior to April was 130.7 unique visitors per posting. The median was a mere 114.1 unique visitors per posting.

There were even nice results in the things that show engagement. There were 70 things liked in April, compared to the mean of 54.1 and median of 49. That’s 7.8 likes per posting, well above the mean of 4.1 and median of 4.0. There were, for a wonder, even more comments than average, 22 given in April compared to a mean of 18.3 and median of 18. Per-posting, that’s 2.4 comments per posting, comfortably above the 1.5 comments per posting mean and 1.2 comments per posting median. It all suggests that I’m finally finding readers who appreciate my genius, or at least style.

Bar chart showing two and a half years' worth of of monthly readership figures. There's a huge spike in October 2019. Beyond that, the past several months of 2021 have shown a fair rise to around 3,000 page views and 2,000 visitors per month.
I would have sworn I’d managed ten posts in April. No way to tell, really, except by counting.

I have doubts, of course, because I don’t have the self-confidence to be a successful writer. But I also notice, for example, that quite a few of these views, and visitors, came in a rush from about the 12th through 16th of April. That’s significant because my humor blog logged an incredible number of visits that week. Someone on the Fandom Drama reddit, explaining James Allen’s departure from Mark Trail, linked to a comic strip I’d saved for my own plot recaps. I’m not sure that this resulted in anyone on the Fandom Drama reddit reading a word I wrote. I also don’t know how this would have brought even a few people to my mathematics blog. The most I can find is several hundred people coming to the mathematics blog from Facebook. As far as I know Facebook had nothing to do with the Fandom Drama reddit. But the coincidence is hard to ignore.


As said, I posted nine things in April. Here they are in decreasing order of popularity. This isn’t quite chronological order, even though pieces from earlier in the month have more time to gather views. It likely means something that one of the more popular pieces is a Reading the Comics post for a comic strip which has run in no newspapers since the 1960s.

My writing plans? I do keep reading the comics. I’m trying to read more for comic strips that offer interesting mathematics points or puzzles to discuss. There’ve been few of those, it seems. But I’m burned out on pointing out how a student got a story problem. And it does seem there’ve been fewer of those, too. But since I don’t want to gather the data needed to do statistics I’ll go with my impression. If I am wrong, what harm will it do?

For each of the past several years I’ve done an A-to-Z, writing an essay for each letter in the alphabet. I am almost resolved to do one for this year. My reservation is that I have felt close to burnout for a long while. This is part of why I have posted just two or even one thing per week since the 2020 A-to-Z finished. I think that if I do a 2021 A-to-Z it will have to be under some constraints. First is space. A 2,500-word essay lets me put in a lot of nice discoveries and thoughts about topics. It also takes forever to write. Planning to write an 800-word essay trains me to look at smaller scopes, and makes it easier to find the energy and time to write.

Then, too, I may forego making a complete tour of the alphabet. Some letters are so near tapped out that they stop being fun. Some letters end up getting more subject nominations than I can fulfil. It feels a bit off to start an A-to-Z that won’t ever hit Z, but we do live in difficult times. If I end up doing only thirteen essays? That is probably better than none at all.

If you have thoughts about how I could do a different A-to-Z, or better, please let me know. I’m open to outside thoughts about what’s good in these series and what’s bad in them.


In April 2021 I posted 5,057 words here, by WordPress’s estimate. Over nine posts that averages 561.9 words per post. This brings me to a total of 17,901 words for the year and an average 559 words per post for 2021.

As of the start of May I’ve posted 1,614 things here. They had gathered 131,712 views from 77,564 logged unique visitors.

If you’d like to be a regular reader here, you have options. One is, if you have an RSS reader, to follow essays from the RSS feed. If you don’t have an RSS reader but want one, good news! Sign up for a free account at Dreamwidth or Livejournal. You can use their Reading/Friends page as an RSS reader. Add any RSS feed using https://www.dreamwidth.org/feeds/ or https://www.livejournal.com/syn.

If you have a WordPress account, you can use the “Follow NebusResearch” button, and posts will appear in your Reader here. If you’d rather get posts in e-mail, typos and all, you can click the “Follow NebusResearch by E-mail” button.

On Twitter my @nebusj account still exists, and posts announcements of things. But Safari doesn’t want to reliably let me read Twitter and I don’t care enough to get that sorted out, so you can’t use it to communicate with me. If you’re on Mastodon, you can find me as @nebusj@mathstodon.xyz, the mathematics-themed server there. Safari does mostly like and let me read that. (It has an annoying tendency to jump back to the top of the timeline. But since Mathstodon is a quiet neighborhood this jumping around is not a major nuisance.)

Thank you for reading. I hope you’re enjoying it. And if you do have thoughts for a 2021 A-to-Z, I hope you’ll share them.

Here’s how to get rid of WordPress’s Block Editor and get the good editor back


So I have to skip my planned post for right now, in favor of good news for WordPress bloggers. I apologize for the insular nature of this, but, it’s news worth sharing.

This is how to dump the Block Editor and get the classic, or ‘good’, editor back. WordPress’s ‘Classic Editor Guide’ explains that you go to your — not your blog’s — account settings. That would be https://wordpress.com/me/account. Under ‘Account Settings’ look for the ‘Interface Settings’ section. There’s a toggle for ‘Dashboard appearance’. Click it to ‘Show wp-admin pages if available’, and save that setting. There! Now you have the usable editor again.

Here’s what it looks like:

Screenshot of https://wordpress.com/me/account showing the Account Settings / Interface Settings section. A red ellipse outlines the 'Show wp-admin pages if available' toggle.
There it is! ‘Show wp-admin pages if available’ and if they ever stop being available, I’m out of here.

Now for how I came to this knowledge.

About two months ago WordPress pushed this update where I had no choice but to use their modern ‘Block’ editor. Its main characteristics are that everything takes longer and behaves worse. And more unpredictably. This is part of a site-wide reorganization where everything is worse. Like, it dumped the old system where you could upload several pictures, put in captions and alt-text for them, and have the captions be saved. And somehow the Block Editor kept getting worse. It has two modes, a ‘Visual Editor’ where it shows roughly what your post would look like, and a ‘Code Editor’ where it shows the HTML code you’re typing in. And this past week it decided anything put in as Code Editor should preview as ‘This block has encountered an error and cannot be previewed’.

It’s sloppy, but everything about the Block Editor is sloppy. There is no guessing, at any point, what clicking the mouse will do, much less why it would do that. The Block Editor is a master class in teaching helplessness. I would pay ten dollars toward an article that studied the complex system of failures and bad decisions that created such a bad editor.

This is not me being a cranky old man at a web site changing. I gave it around two months, plenty of time to get used to the scheme and to understand what it does well. It does nothing well.

For example, if I have an article and wish to insert a picture between two paragraphs? And I click at the space between the two paragraphs where I want the picture? There are at least four different things that the mouse click might cause to happen, one of them being “the editor jumps to the very start of the post”. Which of those four will happen? Why? I don’t know, and you know what? I should not have to know.

In the Classic Editor, if I want to insert a picture, I click in my post where I want the picture to go. I click the ‘Insert Media’ button. I select the picture I want, and that’s it. Any replacement system should be no less hard for me, the writer, to use. Last week, I had to forego putting a picture in one of my Popeye cartoon reviews because nothing would allow me to insert a picture. This is WordPress’s failure, not mine.

With the latest change, and thinking seriously whether WordPress blogging is worth the aggravation, I went to WordPress’s help pages looking for how to get the old editor back. And, because their help pages are also a user-interface clusterfluff, ended up posting this question to a forum that exists somewhere. And, wonderfully, musicdoc1 saw my frustrated pleas and gave me the answer. I am grateful to them and I cannot exaggerate how much difference this makes. Were I forced to choose between the Block Editor and not blogging at all, not blogging would win.

I am so very grateful to musicdoc1 for this information and I am glad to be able to carry on here.

If you are one of the WordPress programmers behind the Block Editor, first, shame on you, and second, I am willing to offer advice on how to make an editor. First bit of advice: it should be less hard than using a scrap of metal to carve a message into Commander Data’s severed head for recovery 500 years in the future. There’s more that’s necessary, but get back to me when you’ve managed that at least.

Iva Sallay’s published the 146th Playful Math Education Blog Carnival


Iva Sallay, who kindly runs the FindTheFactors mathematics puzzle, is the host of the Playful Math Education Blog Carnival this month. She’s just published the current, 146th, edition.

These carnivals often feature recreational mathematics. Sallay’s collection this month has even more than usual, and (to my tastes) more delightful ones than usual. Even if you aren’t an educator or parent it’s worth reading, as there’s surely something you haven’t thought about before.

And if you have a blog, and would like to host the carnival some month? Denise Gaskins, who organizes the project, is taking volunteers. The 147th carnival needs a host yet, and there’s all of fall and winter available too. Hosting is an exciting and challenging thing to do, and I do recommend anyone with pop-mathematics inclinations trying it at least once.

How do I do a matrix in WordPress LaTeX?


I wanted to walk through the calculation that Atlas the Mental Giant did in that installment of Barnaby which I shared on Monday. I’ve been stymied, though. To have anything comprehensible, I need a matrix. (To be precise, I need the determinant of a matrix.) It needs to be typeset in a particular way.

In normal mathematics communications this is easy. We can use the LaTeX typesetting standard, and I would write something like this:

\left(U - TS\right)\left|\begin{tabular}{cc} -dT^2 & S \\ e^{\imath \pi} & \zeta(0) L \end{tabular} \right|

I haven’t checked that I have the syntax precisely right, but it’s something like that.
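For what it’s worth, in standard LaTeX `tabular` is a text-mode environment; in math mode the usual tools are the `array` environment or, with the amsmath package, `vmatrix`, which draws the determinant bars itself. Whether WordPress’s subset accepts either is exactly the open question here, but the standard syntax would be something like:

```latex
% With amsmath's vmatrix, which supplies the determinant bars:
\left(U - TS\right)
\begin{vmatrix}
  -dT^2         & S           \\
  e^{\imath\pi} & \zeta(0) L
\end{vmatrix}

% Or in plain LaTeX, the math-mode array environment:
\left(U - TS\right)
\left| \begin{array}{cc}
  -dT^2         & S           \\
  e^{\imath\pi} & \zeta(0) L
\end{array} \right|
```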

WordPress includes a bit of support for LaTeX expressions. Here I mean the standard free account that I have; I can write in some line like

\int_0^M \sum_{j=1}^{N} a_j x^j dx

and it will get displayed neat and clean as

\int_0^M \sum_{j=1}^{N} a_j x^j dx


Thing is, the standard installation only has a subset of LaTeX’s commands. This is fair enough. It’s ridiculous to bring the entire workshop out when all you need is one hammer. What I can’t find, though, is a description of what LaTeX tools are available to the standard default WordPress free-account user. My experiments in my own comments suggest that the tabular, and the table, structures aren’t supported. But I can’t find a reference that says what’s allowed and what isn’t. I might, after all, be making a silly error in syntax, over and over. When you make an error in WordPress LaTeX you get a sulky note that the formula does not parse. There’s no hint given to what went wrong, or where. You have to remove symbols until the error disappears, and then reverse-engineer what should have been there.

(And the new WordPress editor does not help either. There is not a single point in the new editor where I am fully sure what clicking the mouse will do, or why. Whether it’ll pop up a toolbar I don’t need, or open a new section I don’t want, or pop up a menu where items have moved around from the last time, or whether it’ll jump back to the start of my post and challenge me to remember what I was doing. I realize it is always popular to complain about a web site change, but usually the changes make at least one thing better than it used to be. I can’t find the thing this has made at all better.)

So I’m hoping to attract information. Does anyone have a list of what LaTeX commands WordPress can use? And how the set of what’s available differs between the original post and the comments on the post? And what, for a basic subscription, you can use to represent a matrix?

Incidentally, here’s how to make WordPress print a line of LaTeX larger. Put a &s=N just before the closing $ of your symbol. That N can be 1, 2, 3, or 4. The bigger the N, the bigger the print. You can also put in 0 or negative numbers, if you want the expression to be smaller. I can’t imagine wanting that, but it’s out there.
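So, for example, the integral from earlier in this post, typed with WordPress’s $latex … $ shortcode and bumped up two sizes, would look like this (my own example, using the &s setting just described):

```
$latex \int_0^M \sum_{j=1}^{N} a_j x^j dx&s=2$
```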

Reading the Comics, December 20, 1948: What is Barnaby’s friend’s name Edition?


Have a special one today. I’ve been reading a compilation of Crockett Johnson’s 1940s comic Barnaby. The title character, an almost too gentle child, follows his fairy godfather Mr O’Malley into various shenanigans. Many (the best ones, I’d say) involve the magical world. The steady complication is that Mr O’Malley boasts abilities beyond his demonstrated competence. (Although most of the magic characters are shown to be not all that good at their business.) It’s a gentle strip and everything works out all right, if farcically.

This particular strip comes from a late 1948 storyline. Mr O’Malley’s gone missing, coincidentally just as a fairy cop has come to arrest the pixie, who is a con artist at heart. So this sees the entry of Atlas, the Mental Giant, who’s got some pleasant gimmicks. One of them is his requiring mnemonics built on mathematical formulas to work out names. And this is a charming one, with a great little puzzle: how do you get A-T-L-A-S out of the formula Atlas has remembered?

While Barnaby and Jane look on a Fairy Cop asks: 'Sergeant Ausdauer is the name. With a baffling problem. Baffling for the police, that is. But I'm sure that if a Mental Giant like you, sir, will apply his direct scientific mind to --- ' Atlas: 'Eh? How do you do. My name is --- er --- my name is --- er --- Where's my slide rule?' While he works on this Jane says to Barnaby, 'He forgot his name.' Atlas mutters: '(U - TS) * det( -dT^2 S \ e^{i*pi} zeta(0) L) = ... ' Walking off panel, Atlas says, 'A-T-L-A-S --- my name is Atlas. I never forget a name. With my memory system --- good day. Sorry to have bothered you --- ' Barnaby, calling him back: 'Hey! Wait!'
Crockett Johnson and Jack Morley’s Barnaby for the 20th of December, 1948. (Morley drew the strip at this point.) I haven’t had cause to discuss other Barnaby strips but if I do, I’ll put them in an essay here. Sergeant Ausdauer reasons that “one of those upper-class amateur detectives with scientific minds who solve all the problems for Scotland Yard” could get him through this puzzle. If they were in London they could just ring any doorbell … which gives you a further sense of the comic strip’s sensibility.

I’m sorry the solution requires a bit of abusing notation, so please forgive it. But it’s a fun puzzle, especially as the joke would not be funnier if the formula didn’t work. I’m always impressed when a comic strip goes to that extra effort.

Johnson, who also wrote the Harold and the Purple Crayon books, painted over a hundred canvasses with theorem-based pictures. There’s a selection of them at the Smithsonian Institution’s web site, here.

There’s a new tiny sci.math archive out there


My friend Porsupah Rhee — you might know her work from a sometimes-viral photo of rabbits fighting, available on some fun merchandise — tipped me off to this. It’s a new attempt at archiving Usenet, and also Fidonet and other bulletin boards. These are the things we used for communicating before web forums and then Facebonk took over everything everywhere. They were sprawling and often messy things, moderated only by the willingness of people to not violate social norms. Sometimes this worked; sometimes it didn’t.

Usenet was a most important piece of my Internet history; for many years it was very nearly the thing to use the Internet for. For several years it had a great archive, in the form of Deja News, which kept its many conversations researchable. Google bought this up, and as is their way, made it worse. Part of this was trying to confuse people about the difference between Usenet and their own Google Groups, a discussion-board system that I assume they have shut down. If it’s possible to search Usenet through Google anymore, I can’t find how to do it.

So I’m eager to see this archive at I Ping Therefore I Am. I don’t know where it’s getting its records from, or how new ones are coming in. What it has got is a bunch of messages from 1986. This makes for a great, weird peek at a time when the Internet was much smaller, and free of advertising, but still recognizable.

The archives do extend already to sci.math, a group for the discussion of mathematics topics. Also for discovering how people write out mathematics expressions when they don’t have LaTeX, or at least Word’s Equation Editor, to format things. This also covers two subordinate groups, sci.math.stat (for statistics) and sci.math.symbolic (for symbolic algebra discussions).

It would be bad form to join any of these conversations, even if you could figure a way how. But there may be some revealing pieces there now. And I hope the archive will grow, especially to cover the heights of 1990s Usenet. You do not have permission to look up anything I wrote longer than, oh, six weeks ago.

No, You Can’t Say What 6/2(1+2) Equals


I am made aware that a section of Twitter argues about how to evaluate an expression. There may be more than one of these going around, but the expression I’ve seen is:

6 \div 2\left(1 + 2\right) =

Many people feel that the challenge is knowing the order of operations, and that’s reasonable. To evaluate arithmetic, you evaluate terms inside parentheses first. Then terms within exponentials. Then multiplication and division. Then addition and subtraction. This is often abbreviated as PEMDAS, and made into a mnemonic like “Please Excuse My Dear Aunt Sally”.

That is fine as far as it goes. Many people likely start by adding the 1 and 2 within the parentheses, and that’s fair. Then they get:

6 \div 2(3) =

Putting two quantities next to one another, as the 2 and the (3) are, means to multiply them. And then comes the disagreement: does this mean take 6\div 2 and multiply that by 3, in which case the answer is 9? Or does it mean take 6 divided by 2\cdot 3, in which case the answer is 1?

And there is the trick. Depending on which way you choose to parse these instructions you get different answers. But you don’t get to do that, not and have arithmetic. So the answer is that this expression has no answer. The phrasing is ambiguous and can’t be resolved.

I’m aware there are people who reject this answer. They picked up along the line somewhere a rule like “do multiplication and division from left to right”. And a similar rule for addition and subtraction. This is wrong, but understandable. The left-to-right “rule” is a decent heuristic, a guide to how to attack a problem too big to do at once. The rule works because multiplication-and-division associates. The quantity a-times-b, multiplied by c, has to be the same number as the quantity a multiplied by the quantity b-times-c. The rule also works for addition-and-subtraction because addition associates too. The quantity a-plus-b, plus the quantity c, has to be the same as the quantity a plus the quantity b-plus-c.

This left-to-right “rule”, though, just helps you evaluate a meaningful expression. It would be just as valid to do all the multiplications-and-divisions from right-to-left. If you get different values working left-to-right from right-to-left, you have a meaningless expression.

But you also start to see why mathematicians tend to avoid the \div symbol. We understand, for example, a \div b to mean a \cdot \frac{1}{b} . Carry that out and then there’s no ambiguity about

6 \cdot \frac{1}{2} \cdot 3 =
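A quick check, if you like seeing a computer agree (a small Python sketch of mine; the variable names are my own): once the division becomes multiplication by a reciprocal, the grouping stops mattering.

```python
# 6 ÷ 2 rewritten as 6 * (1/2) turns the whole expression into a product.
# Multiplication associates, so either grouping gives the same value.
left_first = (6 * (1 / 2)) * 3   # group the leftmost product first
right_first = 6 * ((1 / 2) * 3)  # group the rightmost product first
assert left_first == right_first == 9.0
```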

I understand the desire to fix an ambiguity. Believe me. I’m a know-it-all; I only like ambiguities that enable logic-based jokes. (“Would you like ice cream or cake?” “Yes.”) But the rules that could remove the ambiguity in 6\div 2(1 + 2) also remove associativity from multiplication. Once you do that, you’re not doing arithmetic anymore. Resist the urge.

(And the mnemonic is a bit dangerous. We can say division has the same priority as multiplication, but we also say “multiplication” first. I bet you can construct an ambiguous expression which would mislead someone who learned Please Excuse Dear Miss Sally Andrews.)

And now a qualifier: computer languages will often impose doing a calculation in some order. Usually left-to-right. The microchips doing the work need to have some instructions. Spotting all possible ambiguous phrasings ahead of time is a challenge. But we accept our computers doing not-quite-actual-arithmetic. They’re able to do not-quite-actual-arithmetic much faster and more reliably than we can. This makes the compromise worthwhile. We need to remember the difference between what the computer does and the calculation we intend.
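Python, to pick one language whose behavior I can check, commits to the left-to-right reading:

```python
# Python's grammar gives / and * the same precedence and groups them
# left to right, so this parses as (6 / 2) * (1 + 2), not 6 / (2 * (1 + 2)).
print(6 / 2 * (1 + 2))  # prints 9.0
```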

And another qualifier: it is possible to do interesting mathematics with operations that aren’t associative. But if you are, it’s in your research as a person with a postgraduate degree in mathematics. It’s possible it might fit in social media, but I would be surprised. It won’t draw great public attention, anyway.

Reading the Comics Follow-up: Where Else Is A Tetrahedron’s Centroid Edition


A Reading the Comics post a couple weeks back inspired me to find the centroid of a regular tetrahedron. A regular tetrahedron, also known as “a tetrahedron”, is the four-sided die shape. A pyramid with triangular base. Or a cone with a triangle base, if you prefer. If one asks a person to draw a tetrahedron, and they comply, they’ll likely draw this shape. The centroid, the center of mass of the tetrahedron, is at a point easy enough to find. It’s on the perpendicular between any of the four faces — the equilateral triangles — and the vertex not on that face. Particularly, it’s one-quarter the distance from the face towards the other vertex. We can reason that out purely geometrically, without calculating, and I did in that earlier post.

But most tetrahedrons are not regular. They have centroids too; where are they?

In a boxing ring. Facing off and wearing boxing gloves are a tetrahedron and a cube. The umpire, a sphere, says into the microphone, 'And remember: nothing below the centroid.'
Ben Zaehringer’s In The Bleachers for the 16th of March, 2021. This and other essays featuring In The Bleachers are gathered at this link.

Thing is I know the correct answer going in. It’s at the “average” of the vertices of the tetrahedron. Start with the Cartesian coordinates of the four vertices. The x-coordinate of the centroid is the arithmetic mean of the x-coordinates of the four vertices. The y-coordinate of the centroid is the mean of the y-coordinates of the vertices. The z-coordinate of the centroid is the mean of the z-coordinates of the vertices. Easy to calculate; but, is there a way to see that this is right?
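Here’s one numerical spot-check, a rough Python sketch of my own (the vertex coordinates are arbitrary picks): sample points uniformly inside an irregular tetrahedron and average them, and the average should land on the mean of the vertices.

```python
import random

# An arbitrary irregular tetrahedron: any four non-coplanar points work.
A, B, C, D = (0, 0, 0), (3, 0, 0), (0, 2, 0), (1, 1, 4)
vertex_mean = [sum(v[i] for v in (A, B, C, D)) / 4 for i in range(3)]

random.seed(2021)
n = 200_000
totals = [0.0, 0.0, 0.0]
for _ in range(n):
    # Normalized exponentials are uniform barycentric weights, so the
    # weighted sum of the vertices is a uniform random point inside ABCD.
    w = [random.expovariate(1.0) for _ in range(4)]
    s = sum(w)
    for i in range(3):
        totals[i] += (w[0] * A[i] + w[1] * B[i] + w[2] * C[i] + w[3] * D[i]) / s
estimate = [t / n for t in totals]
# estimate comes out very close to vertex_mean, which is [1.0, 0.75, 1.0]
```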

What’s got me is I can think of an argument that convinces me. So in this sense, I have an easy proof of it. But I also see where this argument leaves a lot unaddressed. So it may not prove things to anyone else. Let me lay it out, though.

So start with a tetrahedron of your own design. This will be less confusing if I have labels for the four vertices. I’m going to call them A, B, C, and D. I don’t like those labels, not just for being trite, but because I so want ‘C’ to be the name for the centroid. I can’t find a way to do that, though, and not have the four tetrahedron vertices be some weird set of letters. So let me use ‘P’ as the name for the centroid.

Where is P, relative to the points A, B, C, and D?

And here’s where I give a part of an answer. Start out by putting the tetrahedron somewhere convenient. That would be the floor. Set the tetrahedron so that the face with triangle ABC is in the xy plane. That is, points A, B, and C all have the z-coordinate of 0. The point D has a z-coordinate that is not zero. Let me call that coordinate h. I don’t care what the x- and y-coordinates for any of these points are. What I care about is what the z-coordinate for the centroid P is.

The property of the centroid that was useful last time around was that it split the regular tetrahedron into four smaller, irregular, tetrahedrons, each with the same volume. Each with one-quarter the volume of the original. The centroid P does that for the tetrahedron too. So, how far does the point P have to be from the triangle ABC to make a tetrahedron with one-quarter the volume of the original?

The answer comes from the same trick used last time. The volume of a cone is one-third the area of the base times its altitude. The volume of the tetrahedron ABCD, for example, is one-third times the area of triangle ABC times how far point D is from the triangle. That number I’d labelled h. The volume of the tetrahedron ABCP, meanwhile, is one-third times the area of triangle ABC times how far point P is from the triangle. So the point P has to be one-quarter as far from triangle ABC as the point D is. It’s got a z-coordinate of one-quarter h.

Notice, by the way, that while I don’t know anything about the x- and y- coordinates of any of these points, I do know the z-coordinates. A, B, and C all have z-coordinate of 0. D has a z-coordinate of h. And P has a z-coordinate of one-quarter h. One-quarter h sure looks like the arithmetic mean of 0, 0, 0, and h.

At this point, I’m convinced. The coordinates of the centroid have to be the mean of the coordinates of the vertices. But you also see how much is not addressed. You’d probably grant that I have the z-coordinate worked out when three vertices have the same z-coordinate. Or where three vertices have the same y-coordinate or the same x-coordinate. You might allow that if I can rotate a tetrahedron, I can get three points to the same z-coordinate (or y- or x- if you like). But this still only gets one coordinate of the centroid P.

I’m sure a bit of algebra would wrap this up. But I would like to avoid that, if I can. I suspect the way to argue this geometrically depends on knowing the line from vertex D to tetrahedron centroid P, if extended, passes through the centroid of triangle ABC. And something similar applies for vertices A, B, and C. I also suspect there’s a link between the vector which points the direction from D to P and the sum of the three vectors that point the directions from D to A, B, and C. I haven’t quite got there, though.

I will let you know if I get closer.

In Our Time podcast has episode on Pierre-Simon Laplace


I have another mathematics-themed podcast to share. It’s again from the BBC’s In Our Time, a 50-minute program in which three experts discuss a topic. Here they came back around to mathematics and physics. And along the way chemistry and mensuration. The topic here was Pierre-Simon Laplace, who’s one of those people whose name you learn well as a mathematics or physics major. He doesn’t quite reach the levels of Euler — who does? — but he’s up there.

Laplace might be best known for his work in celestial mechanics. He (independently of Immanuel Kant) developed the nebular hypothesis, that the solar system formed from the contraction of a great cloud of dust. We today accept a modified version of this. And for studying the question of whether the solar system is stable. That is, whether the perturbations every planet has on one another average out to nothing, or to something catastrophic. And studying probability, which has more to do with these questions than one might imagine. And then there’s general mechanics, and differential equations, and if that weren’t enough, his role in establishing the Metric system. All this and more gets discussed.

How March 2020 Treated My Mathematics Blog


March was the first time in three-quarters of a year that I did any Reading the Comics posts. One was traditional, a round-up of comics on a particular theme. The other was new for me, a close look at a question inspired by one comic. Both turned out to be popular. Now see if I learn anything from that.

I’d left the Reading the Comics posts on hiatus when I started last year’s A-to-Z. Given the stress of the pandemic I did not feel up to that great a workload. For this year I am considering whether I feel up to an A-to-Z again. An A-to-Z is enjoyable work, yes, and I like the work. But I am still thinking over whether this is work I want to commit to just now.

That’s for the future. What of the recent past? WordPress’s statistics page suggests that the comics were very well-received. It tells me there were 2,867 page views in March. That’s the greatest number since November, the last full month of the 2020 A-to-Z. This is well above the twelve-month running average of 2,199.8 views per month. And as far above the twelve-month running median of 2,108 views per month. Per posting — there were ten postings in March — the figures are even greater. There were 286.7 views per posting in March. The running mean is 172.9 views per posting, and the running median 144.8.

Bar chart showing monthly readership for the preceding two and a half years. After a loose decline in January and February both readership and visitor counts were up sharply in March, though still far below the October 2019 peak.
While I am curious what kind of insights Cloudflare Analytics could hope to offer me, I suspect what it really amounts to is “Cloudflare will allow me to give them some money in exchange for confirmation that the most popular stuff is the things that lots of people like”.

There were 1,993 unique visitors in March. This is well above the running averages. The twelve-month running mean was 1,529.4 unique visitors, and the running median 1,491.5. This is 199.3 unique visitors per March posting, not a difficult calculation to make. The twelve-month running mean was 121.1 viewers per posting, though, and the running median a mere 99.8 viewers per posting. So that’s popular.

Not popular? Talking to me. We all feel like that sometimes but I have data. After a chatty February things fell below average for March. There were 30 likes given in March, below the running mean of 56.7 and median of 55.5. There were 3.0 likes per posting. The running mean for the twelve months leading in to this was 4.2 likes per posting. The running median was 4.0.

And actual comments? There were 10 of them in March, below the mean of 14.3 and median of 10. This averaged 1.0 comments per posting, which is at least something. The running per-post mean is 1.6 comments, though, and median is 1.4. It could be the centroids of regular tetrahedrons are not the hot, debatable topic I had assumed.

Pi Day was, as I’d expected, a good day for reading Pi Day comics. And miscellaneous other articles about Pi Day. I need to write some more up for next year, to enjoy those search engine queries. There are some things in differential equations that would be a nice different take.

As mentioned, I posted ten things in March. Here they are in decreasing order of popularity. I would expect this to be roughly a chronological list of when things were posted. It doesn’t seem to be, but I haven’t checked whether the difference is statistically significant.

In March I posted 5,173 words here, for an average 517.3 words per post. That’s shorter than my average January and February posts were. My average words-per-posting for the year has dropped to 558. And despite my posts being on average shorter, this was still my most verbose month of 2021. I’ve had 12,844 words posted this year, through the start of April, and more than two-fifths of them came in March.

As of the start of April I’ve posted 1,605 things to the blog here. They’ve gathered 129,696 page views from an acknowledged 75,266 visitors.

If you’d like to be a regular reader, there’s a couple approaches. One is to read regularly. The best way for you to do that is using the RSS feed in whatever reader you prefer. I won’t see you show up in my statistics, and that’s fine. If you don’t have an RSS reader, you can open a free account at Dreamwidth or Livejournal and add any RSS feed you like. That’s from https://www.dreamwidth.org/feeds/ or https://www.livejournal.com/syn , depending on which you sign up for. If that’s too much, you can use the “Follow NebusResearch By E-mail” button, which will send you essays after they’ve appeared and before I’ve fixed typos.

If you have a WordPress account you can use the “Follow NebusResearch” button to add me to your Reader. If you have Twitter, congratulations; I don’t exactly. My account at @nebusj is still there, but it only has an automated post announcement. I don’t know when that will break. If you’re on Mastodon, you can find me as @nebusj@mathstodon.xyz.

One last thing. WordPress imposed their awful, awful, awful ‘Block’ editor on my blog. I used to be able to use the classic, or ‘good’, editor, where I could post stuff without it needing twelve extra mouse clicks. If anyone knows hacks to get the good editor back please leave a comment.

Thank you all for reading.

A quick reading recommendation


I’ve been reading The Disordered Cosmos: A Journey Into Dark Matter, Spacetime, and Dreams Deferred, by Chanda Prescod-Weinstein. It’s the best science book I’ve read in a long while.

Part of it is a pop-science discussion of particle physics and cosmology, as they’re now understood. It may seem strange that the tiniest things and the biggest thing are such natural companion subjects. That is what seems to make sense, though. I’ve fallen out of touch with a lot of particle physics since my undergraduate days and it’s wonderful to have it discussed well. This sort of pop physics is for me a pleasant comfort read.

The other part of the book is more memoir, and discussion of the culture of science. This is all discomfort reading. It’s an important discomfort.

I discuss sometimes how mathematics is, pretensions aside, a culturally-determined thing. Usually this is in the context of how, for example, our having questions about “perfect numbers” at all is plausibly an idiosyncrasy. I don’t talk much about the culture of working mathematicians. In large part this is because I’m not a working mathematician, and don’t have close contact with working mathematicians. And then even if I did — well, I’m a tall, skinny white guy. I could step into most any college’s mathematics or physics department, sit down in a seminar, and be accepted as belonging there. People will assume that if I say anything, it’s worth listening to.

Chanda Prescod-Weinstein, a Black Jewish agender woman, does not get similar consideration. This despite her much greater merit. And, like, I was aware that women have it harder than men. And Black people have it harder than white people. And that being open about any but heterosexual cisgender inclinations is making one’s own life harder. What I hadn’t paid attention to was how much harder, and how relentlessly harder. Most every chapter, including the comfortable easy ones talking about families of quarks and all, is several needed slaps to my complacent face.

Her focus is on science, particularly physics. It’s not as though mathematics is innocent of driving women out or ignoring them when it can’t. Or of treating Black people with similar hostility. Much of what’s wrong is passively accepting patterns of not thinking about whether mathematics is open to everyone who wants in. Prescod-Weinstein offers many thoughts and many difficult thoughts. They are worth listening to.

Reading the Comics, April 1, 2021: Why Is Gunther Taking Algebraic Topology Edition


I’m not yet looking to discuss every comic strip with any mathematics mention. But something gnawed at me in this installment of Greg Evans and Karen Evans’s Luann. It’s about the classes Gunther says he’s taking.

The main characters in Luann are in that vaguely-defined early-adult era. They’re almost all attending a local university. They’re at least sophomores, since they haven’t been doing stories about the trauma and liberation of going off to school. How far they’ve gotten has been completely undefined. So here’s what gets me.

Gunther, looking at sewing patterns: 'You want me to sew pirate outfits?' Bets: 'I'm thinking satin brocade doublet and velvet pantaloons.' Les, not in the conversation: 'Nerd.' Gunther: 'I'm thinking algebraic topology and vector calculus homework.' (He shows his textbooks.) Les: 'And nerdier. (Les pets a cat.)
Greg Evans and Karen Evans’s Luann for the 1st of April, 2021. This and other essays discussing topics raised by Luann are at this link. The overall story here is that Bets wants to have this pirate-themed dinner and trusts Gunther, who’s rather good at clothes-making, to do the decorating.

Gunther taking vector calculus? That makes sense. Vector calculus is a standard course if you’re taking any mathematics-dependent major. It might be listed as Multivariable Calculus or Advanced Calculus or Calculus III. It’s where you learn partial derivatives, integrals along a path, integrals over a surface or volume. I don’t know Gunther’s major, but if it’s any kind of science, yeah, he’s taking vector calculus.

Algebraic topology, though. That I don’t get. Topology at all is usually an upper-level course. It’s for mathematics majors, maybe physics majors. Not every mathematics major takes topology. Algebraic topology is a deeper specialization of the subject. I’ve only seen courses listed as algebraic topology as graduate courses. It’s possible for an undergraduate to take a graduate-level course, yes. And it may be that Gunther is taking a regular topology course, and the instructor prefers to focus on algebraic topology.

But even a regular topology course relies on abstract algebra. Which, again, is something you’ll get as an undergraduate. If you’re a mathematics major you’ll get at least two years of algebra. And, if my experience is typical, still feel not too sure about the subject. Thing is that Intro to Abstract Algebra is something you’d plausibly take at the same time as Vector Calculus. Then you’d get Abstract Algebra and then, if you wished, Topology.

So you see the trouble. I don’t remember anything in algebra-to-topology that would demand knowing vector calculus. So it wouldn’t mean Gunther took courses without taking the prerequisites. But it’s odd to take an advanced mathematics course at the same time as a basic mathematics course. Unless Gunther’s taking an advanced vector calculus course, which might be. Although since he wants to emphasize that he’s taking difficult courses, it’s odd to not say “advanced”. Especially if he is tossing in “algebraic” before topology.

And, yes, I’m aware of the Doylist explanation for this. The Evanses wanted courses that sound impressive and hard. And that’s all the scene demands. The joke would not be more successful if they picked two classes from my actual Junior year schedule. None of the characters have a course of study that could be taken literally. They’ve been university students full-time since 2013 and aren’t in their senior year yet. It would be fun, is all, to find a way this makes sense.


This and my other essays discussing something from the comic strips are at this link.

When is Easter likely to happen?


It’s a natural question to wonder this time of year. The date when Easter falls is calculated by some tricky numerical rules. These come from the desire to make Easter an early-spring (in the Northern hemisphere) holiday, while tying it to the date of Passover, as worked out by people who did not know the exact rules by which the Jewish calendar worked. The result is that some dates are more likely than others to be Easter.

A few years ago I wrote a piece finding how often Easter would be on different dates, in the possible range from the 22nd of March through the 25th of April. And discussed some challenges in the problem. Calendars are full of surprising subtle problems. Easter creates a host of new challenges.
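If you want to play with the rules yourself: the Gregorian computation condenses to the anonymous “computus” algorithm, often credited as Meeus/Jones/Butcher. Here’s a Python rendering (the function name is my own choice):

```python
def easter(year):
    """Return (month, day) of Gregorian Easter, by the anonymous
    Meeus/Jones/Butcher "computus" algorithm."""
    a = year % 19                        # year's place in the 19-year lunar cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # roughly, the ecclesiastical moon's age
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(easter(2021))  # (4, 4): the 4th of April
```

Run it over a long span of years and you can tally how often each date in the 22nd-of-March-through-25th-of-April window turns up.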

The 145th Playful Math Education Blog Carnival is posted


John Golden, MathHombre, was host this month for the Playful Math Education Blog Carnival and its collection of puzzles, essays, and creative mathematics projects. Among them are some quilts and pattern-block tiles, which manifest all that talk about the structure of mathematical objects and their symmetries in easy-to-see form. There’s likely to be something of interest there.

Among the wonderful things I discovered there is Math Zine Fest 2021. It’s as the name suggests, a bunch of zines — short printable magazines on a niche topic — put together for the end of February. I had missed this organizing, but hope to get to see later installments. I don’t know what zine I might make, but I must have something I could do.

Denise Gaskins, who organizes the carnival, has hosting slots available for later this year. Hosting is an exciting challenge I encourage people to try at least the once.

Reading the Comics, March 16, 2021: Where Is A Tetrahedron’s Centroid Edition


Comic Strip Master Command has not, to appearances, been distressed by my Reading the Comics hiatus. There are still mathematically-themed comic strips. Many of them are about story problems and kids not doing them. Some get into a mathematical concept. One that ran last week caught my imagination so I’ll give it some time here. This and other Reading the Comics essays I have at this link, and I figure to resume posting them, at least sometimes.

Ben Zaehringer’s In The Bleachers for the 16th of March, 2021 is an anthropomorphized-geometry joke. Here the centroid stands in for “the waist”, the height below which boxers may not punch.

In a boxing ring. Facing off and wearing boxing gloves are a tetrahedron and a cube. The umpire, a sphere, says into the microphone, 'And remember: nothing below the centroid.'
Ben Zaehringer’s In The Bleachers for the 16th of March, 2021. This and other essays featuring In The Bleachers are gathered at this link. There haven’t been many, so far. One of the few appearances was another boxing joke, though.

The centroid is good geometry, something which turns up in plane and solid shapes. It’s a center of the shape: the arithmetic mean of all the points in the shape. (There are other things that can, with reason, be called a center too. MathWorld mentions the existence of 2,001 things that can be called the “center” of a triangle. It must be only a lack of interest that’s kept people from identifying even more centers for solid shapes.) It’s the center of mass, if the shape is a homogeneous block. Balance the shape from below this centroid and it stays balanced.
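That “arithmetic mean of all the points” description is easy to see numerically. A tiny Python sketch of my own: average uniformly random points in a unit cube and you recover the center.

```python
import random

# Estimate the unit cube's centroid by averaging random points inside it.
random.seed(16)
n = 100_000
points = [(random.random(), random.random(), random.random()) for _ in range(n)]
centroid = [sum(p[i] for p in points) / n for i in range(3)]
# each coordinate comes out very near 0.5, the center of the unit cube
```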

For a complicated shape, finding the centroid is a challenge worthy of calculus. For these shapes, though? The sphere, the cube, the regular tetrahedron? We can work those out by reason. And, along the way, work out whether this rule gives an advantage to either boxer.

The sphere first. That’s the easiest. The centroid has to be the center of the sphere. Like, the point that the surface of the sphere is a fixed radius from. This is so obvious it takes a moment to think why it’s obvious. “Why” is a treacherous question for mathematics facts; why should 4 divide 8? But sometimes we can find answers that give us insight into other questions.

Here, the “why” I like is symmetry. Look at a sphere. Suppose it lacks markings. There’s none of the referee’s face or bow tie here. Imagine then rotating the sphere some amount. Can you see any difference? You shouldn’t be able to. So, in doing that rotation, the centroid can’t have moved. If it had moved, you’d be able to tell the difference. The rotated sphere would be off-balance. The only place inside the sphere that doesn’t move when the sphere is rotated is the center.

This symmetry consideration helps answer where the cube’s centroid is. That also has to be the center of the cube. That is, halfway between the top and bottom, halfway between the front and back, halfway between the left and right. Symmetry again. Take the cube and stand it upside-down; does it look any different? No, so, the centroid can’t be any closer to the top than it can the bottom. Similarly, rotate it 180 degrees without taking it off the mat. The rotation leaves the cube looking the same. So this rules out the centroid being closer to the front than to the back. It also rules out the centroid being closer to the left end than to the right. It has to be dead center in the cube.

Now to the regular tetrahedron. Obviously the centroid is … all right, now we have issues. Dead center is … where? We can tell when the regular tetrahedron’s turned upside-down. Also when it’s turned 90 or 180 degrees.

Symmetry will guide us. We can say some things about it. Each face of the regular tetrahedron is an equilateral triangle. The centroid has to be along the altitude. That is, the vertical line connecting the point on top of the pyramid with the equilateral triangle base, down on the mat. Imagine looking down on the shape from above, and rotating the shape 120 or 240 degrees if you’re still not convinced.

And! We can tip the regular tetrahedron over, and put another of its faces down on the mat. The shape looks the same once we’ve done that. So the centroid has to be along the altitude between the new highest point and the equilateral triangle that’s now the base, down on the mat. We can do that for each of the four sides. That tells us the centroid has to be at the intersection of these four altitudes. More, that the centroid has to be exactly the same distance to each of the four vertices of the regular tetrahedron. Or, if you feel a little fancier, that it’s exactly the same distance to the centers of each of the four faces.

It would be nice to know where along this altitude this intersection is, though. We can work it out by algebra. It’s no challenge to figure out the Cartesian coordinates for a good regular tetrahedron. Then finding the point that’s got the right distance is easy. (Set the base triangle in the xy plane. Center it, so the coordinates of the highest point are (0, 0, h) for some number h. Set one of the other vertices so it’s on the y-axis, that is, at coordinates (0, b, 0) for some b. Then find the c so that (0, 0, c) is exactly as far from (0, 0, h) as it is from (0, b, 0).) But algebra is such a mass of calculation. Can we do it by reason instead?

That I ask the question answers it. That I preceded the question with talk about symmetry answers how to reason it. The trick is that we can divide the regular tetrahedron into four smaller tetrahedrons. These smaller tetrahedrons aren’t regular; they’re not the Platonic solid. But they are still tetrahedrons. The little tetrahedron has as its base one of the equilateral triangles that’s the bigger shape’s face. The little tetrahedron has as its fourth vertex the centroid of the bigger shape. Draw in the edges, and the faces, like you’d imagine. Three edges, each connecting one of the base triangle’s vertices to the centroid. The faces have two of these new edges plus one of the base triangle’s edges.

The four little tetrahedrons have to all be congruent. Symmetry again; tip the big tetrahedron onto a different face and you can’t see a difference. So we’ll know, for example, all four little tetrahedrons have the same volume. The same altitude, too. The centroid is the same distance to each of the regular tetrahedron’s faces. And the four little tetrahedrons, together, have the same volume as the original regular tetrahedron.

What is the volume of a tetrahedron?

If we remember dimensional analysis we may expect the volume to be a constant times the area of the base of the shape times the altitude of the shape. We might also dimly remember there is some formula for the volume of any conical shape. A conical shape here is something that’s got a simple, closed shape in a plane as its base. And some point P, above the base, that connects by straight lines to every point on the base shape. This sounds like we’re talking about circular cones, but it can be any shape at the base, including polygons.

So we double-check that formula. The volume of a conical shape is one-third times the area of the base shape times the altitude. That’s the perpendicular distance between P and the plane that the base shape is in. And, hey, one-third times the area of the face times the altitude is exactly what we’d expect.
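If that one-third looks suspicious it’s cheap to check numerically. This sketch slices a square pyramid, a conical shape with a polygon base, into thin slabs; the base area and the altitude here are arbitrary choices:

```python
from math import isclose

A, h = 4.0, 3.0  # base area and altitude, arbitrary choices
n = 100_000      # number of thin slabs
dz = h / n

# A cross-section at height z is the base scaled by (1 - z/h) in each
# direction, so its area is A * (1 - z/h)**2.  Add up the slab volumes.
volume = sum(A * (1 - (i + 0.5) * dz / h) ** 2 * dz for i in range(n))

print(volume)  # very close to (1/3) * A * h = 4.0
```

Not a proof, of course, but a comfort that the dimly-remembered formula is the right dim memory.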

So. The original regular tetrahedron has a base — has all its faces — with area A. It has an altitude h. That h must relate in some way to the area; I don’t care how. The volume of the regular tetrahedron has to be \frac{1}{3} A h .

The volume of the little tetrahedrons is — well, each little tetrahedron has as its base one of the original regular tetrahedron’s faces. So a little tetrahedron’s base area is A. The altitude of the little tetrahedron is the height of the original tetrahedron’s centroid above the base. Call that h_c . How can the volume of the little tetrahedron, \frac{1}{3} A h_c , be one-quarter the volume of the original tetrahedron, \frac{1}{3} A h ? Only if h_c is one-quarter h .
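Written out, the balancing act is just:

4 \cdot \frac{1}{3} A h_c = \frac{1}{3} A h \quad \Longrightarrow \quad h_c = \frac{1}{4} h

Everything except the heights cancels, which is the whole charm of the argument.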

This pins down where the centroid of the regular tetrahedron has to be. It’s on the altitude underneath the top point of the tetrahedron. It’s one-quarter of the way up from the equilateral-triangle face.
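There’s an independent cross-check, too, using a fact the essay’s argument never needed: the centroid of any tetrahedron is the average of its four vertices. A sketch, with the same coordinate setup as before:

```python
from math import sqrt, isclose

a = 1.0              # edge length
b = a / sqrt(3)      # circumradius of the base triangle
h = a * sqrt(2 / 3)  # altitude

# Base vertices 120 degrees apart on a circle of radius b; apex on the z axis.
vertices = [
    (0.0, b, 0.0),
    (b * sqrt(3) / 2, -b / 2, 0.0),
    (-b * sqrt(3) / 2, -b / 2, 0.0),
    (0.0, 0.0, h),
]

centroid_z = sum(v[2] for v in vertices) / 4
print(centroid_z / h)  # 0.25: one-quarter of the way up the altitude
```

Only the apex contributes any height, and averaging its h over four vertices gives h over 4 directly.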

(And I’m glad, checking this out, that I got to the right answer after all.)

So, if the cube and the tetrahedron have the same height, then the cube has an advantage. The cube’s centroid is higher up, so the tetrahedron has a narrower range to punch. Problem solved.


I do figure to talk about comic strips, and mathematics problems they bring up, more. I’m not sure how writing about one single strip turned into 1300 words. But that’s what happens every time I try to do something simpler. You know how it goes.