I apologize for missing its actual publication date, but better late than not at all. Math Book Magic, host of the Playful Math Education Blog Carnival, posted the 148th in the series, and it’s a good read. A healthy number of recreational mathematics puzzles, including some geometry puzzles I’ve been enjoying. As these essays are meant to do, this one gathers some recreational and some educational and some just fun mathematics.

So what the arXiv.org paper does is look at different types of Latin Squares, and whip up some new ones by imposing new rules. Latin Squares are one of those corners of mathematics I haven’t thought about much. But they do connect to other problems, such as sudoku, or knights-tour and similar problems of chess-piece movement. So considering these squares sheds light on those problems, and on what happens when we vary the rules about how to arrange numbers. It’s a pleasant, fun exercise.

Mental arithmetic is fun. It has some use, yes. It’s always nice when you’re doing work to have some idea what a reasonable answer looks like. But mostly it’s fun to be able to spot, oh, 24 times 16, that’s got to be a little under 400.

I ran across this post, by Math1089, with a neat trick for certain multiplications. It’s limited in scope. Most mental-arithmetic tricks are; each handles certain problems well, and you need to remember a grab bag of them that covers enough to be useful. Here, the case is multiplying two numbers that start the same way, and whose ends are complements. That is, the ends add together to 10. (Or to 100, or 1,000, or some other power of ten.) So, for example, you could use this trick to multiply together 41 and 49, or 64 and 66. (Or, if you needed, to multiply 2038 by 2062.)

It won’t directly solve 41 times 39, though, nor 64 times 65. But you can hack it together. 64 times 65 is 64 times 66 — you have a trick for that — minus 64. 41 times 39 is tougher, but, it’s 41 times 49 minus 41 times 10. 41 times 10 is easy to do. This is what I mean by learning a grab bag of tricks. You won’t outpace someone who has their calculator out and ready to go. But you might outpace someone who has to get their calculator out, and you’ll certainly impress them.

So it’s clever, and not hard to learn. If you feel like testing your high-school algebra prowess you can even work out why this trick works, and why it has the limits it does.
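If you’d rather check the algebra with a computer than by hand, here’s a little sketch of my own (not Math1089’s notation): write the two numbers as a·10^m + b and a·10^m + c, where b + c = 10^m. Their product works out to a·(a+1)·10^(2m) + b·c, which is the trick: multiply the shared start by one more than itself, then append the product of the ends.

```python
def trick(a, b, c, m=1):
    """Multiply (a*10**m + b) by (a*10**m + c), where b + c == 10**m.
    The shortcut: a*(a+1), followed by b*c written out in 2*m digits."""
    assert b + c == 10 ** m
    return a * (a + 1) * 10 ** (2 * m) + b * c

print(trick(4, 1, 9))          # 41 * 49 = 2009
print(trick(6, 4, 6))          # 64 * 66 = 4224
print(trick(20, 38, 62, m=2))  # 2038 * 2062 = 4202356
```

The assertion is the trick’s limit: drop the requirement that the ends are complements and the shortcut stops working.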

Better than August 2021 treated me! I don’t wish to impose my woes on you, but the last month was one of the worst I’ve had. Besides various physical problems I also felt dreadfully burned out, which postponed my Little Mathematics A-to-Z yet again. I hope yet to get the sequence started, not to mention finished, although I want to get one more essay banked before I start publishing. If things go well, then, that’ll be this Wednesday; if not, maybe next Wednesday.

Still, and despite everything, I was able to post seven things in August, a slow return to form. I am still trying to rebuild my energies. But my hope is to get up to about two posts a week, so for most months, eight to ten posts.

The postings I did do were received with this kind of readership:

So that’s a total of 2,136 page views for August. That’s up from July, though still below the twelve-month running mean of 2,572.6 views per month. It’s also below the median of 2,559 views per month. There were 1,465 unique visitors recorded. This is again below the running mean of 1,823.7 unique visitors, and the running median of 1,801 unique visitors.

There were 43 things liked in August, below the running mean of 53.4 and running median of 49.5. And there were a meager 10 comments received, below the mean of 18.7 and median of 18. I expect this will correct itself whenever I do get the Little Mathematics A-to-Z started; those always attract steady interest, and people writing back, even if it’s just to thank me for taking one of their topics as an essay.

Rated per-post, everything gets strikingly close to average. August came in at a mean 305.1 views per posting, compared to a twelve-month running mean of 257.2 and running median of 282.6. There were 209.3 unique visitors per posting, compared to a running mean of 182.7 and median of 197.0. There were 6.1 likes per posting, compared to a mean of 5.0 and median of 4.4. The only figure not above some per-post average was comments, which were 1.4 per posting. The mean comments per posting, from August 2020 through July 2021, was 1.9, and the median 1.4.

Here’s how August’s seven posts ranked in popularity, as in, number of page views for each post:

WordPress estimates that I published 2,440 words in August, a meager 348.6 words per post. I told you I was burned out. It estimates that for 2021 I’ve published a total of 36,015 words as of the start of September, an average of 581 words per posting.

You also can get essays e-mailed right to you, at publication. Please use this option if you want me to be self-conscious about the typos and grammatical errors that I never find before publication however hard I try. You can do that by using the “Follow NebusResearch via Email” box to the right-center of the page. If you have a WordPress account, you can use “Follow NebusResearch” on the top right to add my essays to your Reader. And I am @nebusj@mathstodon.xyz, the mathematics-themed instance of the Mastodon network. Thanks for being here, and here’s hoping for a happy September.

And some happy news for those who like miscellaneous collections of mathematics stuff. Jeremy Kun has published the 197th edition of the Carnival of Mathematics. This differs from the Playful Math Education Blog Carnival in not having a specific focus on educational or recreational mathematics. That’s not to say there isn’t fun little stuff mentioned here. For example, Kun leads with a bit of trivia about 197 as a number. But there’s a stronger focus on more serious mathematics work, such as studying space-filling curves, or a neat puzzle about how to fold (roughly) equilateral triangles without measuring them.

Elkement, who’s been a longtime supporter of my blogging here, has been thinking about stereographic projection recently. This comes from playing with complex-valued numbers; it’s hard to think about those for long and not get into the projection. The projection itself Elkement describes a bit in this post, from early in August. It’s one of the ways to try to match the points on a sphere to the points on the entire, infinite plane. One common way to imagine it, and to draw it, is to set the sphere on the plane. Draw the line connecting the top of the sphere with whatever point you find interesting on the sphere, and then extend that line until it intersects the plane. Match your point on the sphere with that point on the plane. You can use this to trace out shapes on the sphere and find their matching shapes on the plane.

This distorts the shapes, as you’d expect. Well, the sphere has a finite area, the plane an infinite one. We can’t possibly preserve the areas of shapes in this transformation. But this transformation does something amazing that offends students when they first encounter it. It preserves circles: a circle on the original sphere becomes a circle on the plane, and vice-versa. I know, you want it to turn something into ellipses, at least. She takes a turn at thinking out reasons why this should be reasonable. There are abundant proofs of this, but it helps the intuition to see different ways to make the argument. And to have rough proofs, that outline the argument you mean to make. We need rigorous proofs, yes, but a good picture that makes the case convincing helps a good deal.
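The circle-preservation claim is easy to check numerically, too. Here’s a sketch of my own (not from Elkement’s post): put a tilted circle on the unit sphere resting on the plane, project every point from the sphere’s top, and confirm the image points all sit the same distance from one center.

```python
import math

def stereo_project(p):
    """Project a point p on the unit sphere resting on z = 0 (center (0, 0, 1),
    top at (0, 0, 2)) from the top point onto the plane z = 0."""
    x, y, z = p
    t = 2.0 / (2.0 - z)   # where the line from (0, 0, 2) through p meets z = 0
    return (t * x, t * y)

# A tilted circle on the sphere: offset h along axis w, radius r, h^2 + r^2 = 1.
alpha = 0.5
w = (math.sin(alpha), 0.0, math.cos(alpha))
u = (math.cos(alpha), 0.0, -math.sin(alpha))
v = (0.0, 1.0, 0.0)
h, r = 0.3, math.sqrt(1.0 - 0.3 ** 2)
image = []
for k in range(12):
    t = 2.0 * math.pi * k / 12.0
    p = [h * w[i] + r * (math.cos(t) * u[i] + math.sin(t) * v[i]) for i in range(3)]
    p[2] += 1.0           # shift the sphere's center from the origin up to (0, 0, 1)
    image.append(stereo_project(p))

# Circumcenter of the first three image points; if the image really is a circle,
# every image point is the same distance from this center.
(ax, ay), (bx, by), (cx, cy) = image[0], image[1], image[2]
d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
      + (cx**2 + cy**2) * (ay - by)) / d
uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
      + (cx**2 + cy**2) * (bx - ax)) / d
radii = [math.hypot(x - ux, y - uy) for x, y in image]
print(max(radii) - min(radii))   # effectively zero: the image is again a circle
```

(One caveat: a circle passing through the projection point itself maps to a straight line, which you can think of as a circle of infinite radius.)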

What we mean by that is the area between some left boundary, a, and some right boundary, b, that’s above the x-axis, and below that curve. And there’s just no finding a, you know, answer. Something that looks like (to make up an answer) “the area is b^2 - a^2” or something normal like that. The one interesting exception is that you can find the area if the left bound is negative infinity and the right bound is positive infinity. That’s done by some clever reasoning and changes of variables, which is why we see that and only that in freshman calculus. (Oh, and as a side effect we can get the integral between 0 and infinity, because, the curve being symmetric, that has to be half of the whole thing.)

Anyway, Quintanilla includes a nice bit along the way, that I don’t remember from my freshman calculus, pointing out why we can’t come up with a nice simple formula like that. It’s a loose argument, showing what would happen if we suppose there is a way to integrate this using normal functions and showing we get a contradiction. A proper proof is much harder and fussier, but this is likely enough to convince someone who understands a bit of calculus and a bit of Taylor series.
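If I’m reading the context right, the curve in question is the bell curve exp(-x^2). You can’t write its antiderivative in normal functions, but you can always check the one findable area numerically. Here’s a crude sketch of my own (not from Quintanilla’s post): a midpoint-rule sum over a wide interval, compared against the square root of π, the value the clever change-of-variables argument produces for the whole real line.

```python
import math

# Midpoint-rule integration of exp(-x^2). The tails decay so fast that
# integrating over [-10, 10] is effectively the whole real line.
n = 100000
a, b = -10.0, 10.0
h = (b - a) / n
total = 0.0
for i in range(n):
    x = a + (i + 0.5) * h    # midpoint of the i-th subinterval
    total += math.exp(-x * x)
total *= h

print(total)                 # ~1.7724538509...
print(math.sqrt(math.pi))    # the exact value for the full real line
```

Halving that gives the integral from 0 to infinity, just as the symmetry argument says.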

The talk of the catenary and the brachistochrone gives away that this is a calculus paper. The catenary and the brachistochrone are some of the oldest problems in calculus as we know it. The catenary is the problem of what shape a weighted chain takes under gravity. The brachistochrone is the problem of what path carries a sliding bead between two points fastest under gravity. (Johann Bernoulli famously solved it by treating the path as the one a beam of light would trace moving through regions with different indexes of refraction. As in, through films of glass or water or such.) Straight lines and circles we’ve heard of from other places.

The paper relies on calculus so if you’re not comfortable with that, well, skim over the lines with symbols. Rojas discusses the ways that we can treat all these different shapes as solutions of related, very similar problems. And there’s some talk about calculating approximate solutions. There is special delight in this as these are problems that can be done by an analog computer. You can build a tool to do some of these calculations. And I do mean “you”; the approach is to build a box, like, the sort of thing you can do by cutting up plastic sheets and gluing them together and setting toothpicks or wires on them. Then dip the model into a soap solution. Lift it out slowly and take a good picture of the soapy surface.

This is not as quick, or as precise, as fiddling with a Matlab or Octave or Mathematica simulation. But it can be much more fun.

This is a slight piece, but I just learned that Giuseppe Peano spearheaded the creation of Latino sine flexione, an attempted auxiliary language. The name gives away the plan: “Latin without inflections”. That is, without the nouns and verbs changing form to reflect the role they play in a sentence. I know very little about languages, so I admit I don’t understand quite how this is supposed to work. I had the impression that what an Indo-European language skips in inflections it makes up for with prepositions, and Peano was trying to do without either. But he (and his associates) had something, apparently; he was able to publish the fifth edition of his Formulario Mathematico in the Latino sine flexione.

Giuseppe Peano is a name any mathematician would know and respect highly. He’s one of the logicians and set theorists of the late 19th and early 20th century who straightened out so much of the logical foundations of arithmetic. His “Peano axioms” are still the standard axiomatization of the natural numbers, that is, the logic that underlies what we think of as “four”. The axioms also build in mathematical induction, a slick way of proving something true in infinitely many possible cases at once. You can see why the logic of this requires delicate treatment. And he was an inveterate thinker about notation. Wikipedia credits his 1889 treatise The Principles Of Arithmetic, Presented By A New Method as making pervasive the basic set theory symbols, including the notations for “is an element of”, “is a subset of”, “intersection of sets”, and “union of sets”. Florian Cajori’s History of Mathematical Notations also reveals to me that the step in analysis, when we stop writing “function f evaluated on element x” as “f(x)” and move instead to “fx”, shows his influence. (He apparently felt the parentheses served no purpose. I … see his point for f(x), or even f(g(x)), but feel that’s unsympathetic to someone dealing with f(a + sin(t)). I imagine he would agree those parentheses have a point.)

This is all a tiny thing, and anyone reading it should remember that the reality is far more complicated, and ambiguous, and confusing than I present. But it’s a reminder that mathematicians have always held outside fascinations. And that great mathematicians were also part of the intellectual currents of the pre-Great-War time, that sought utopia through things like universal languages and calendar reform and similar kinds of work.

Another mere little piece today. I’d wanted folks to know that Kelly Darke’s Math Book Magic is the next host for the Playful Math Education Blog Carnival. And would likely be able to use any nominations you had for blog posts, YouTube videos, books, games, or other activities that share what’s delightful about mathematics. The Playful Math Education Blog Carnival is a fun roundup to read, and to write — I’ve been able to host it a few times myself — and I hope anyone reading this will consider supporting it too.

I didn’t quite abandon my mathematics blog in July, but it would be hard to prove otherwise. I published only five pieces, which I think is my lowest monthly production on record. One of them was the monthly statistics recap. One pointed to a neat thing I found. Three were pointers to earlier essays I’ve written here. It’s economical stuff, but it draws in fewer readers, a thing I’m conditioned to think of as bad. How bad?

I received 1,891 page views in July, way below the running mean of 2,545.0 for the twelve months ending with June 2021. This is also well below the running median of 2,559. There were 1,324 unique visitors in July, way below the running mean of 1,797.1 and median of 1,801. The number of likes barely dropped from June’s totals, with 34 things given a like here. That’s well down from the mean of 56.8 per month and the 55.5 per month median. And comments were dire, only four received compared to a mean of 20.5 and median of 19.

That’s the kind of collapse which makes it look like the blog’s just dried up and floated away. But these readership figures are still a good bit above most of 2020, for example, or all but one month of 2018. I’m feeling the effects of the hedonic treadmill here.

And, now, if we consider that per posting? Suddenly my laconic nature starts to seem like genius. There were an average of 378.2 views per posting in July. That’s not views just of July’s posts; it’s the total number of views divided by the number of posts published. That’s crushing the twelve-month mean of 232.9 views per posting, and the twelve-month median of 235.0 views per posting. There were 264.8 unique visitors per posting. The twelve-month running mean was 165.2 unique visitors per posting, and the median 166.3.

Even the likes and comments look better this way. There were 6.8 likes for each time I posted, above the mean of 4.7 and median of 4.3. There were still only 0.8 comments per posting, below the mean of 1.9 and median of 1.6, but at least the numbers look closer together.

The order of popularity of July’s essays, most to least, was:

The most popular essay of all was No, You Can’t Say What 6/2(1+2) Equals. From this I infer some segment of Twitter got worked up about an ambiguous arithmetic expression again.

WordPress estimates that I published 3,103 words in July. This is an average of merely 517.2 words per posting, a figure that will increase as soon as I get this year’s A-to-Z under way. My average words per posting for 2021 declined to 611 thanks to all this. I am at 33,575 words for the year so far.

If you’d like to get new posts without typos corrected, you can sign up for e-mail delivery. Use the “Follow NebusResearch via Email” box to the right-center of the page here. Or if you have a WordPress account, you can use “Follow NebusResearch” on the top right to add this page to your Reader. And I am @nebusj@mathstodon.xyz, the mathematics-themed instance of the Mastodon network. Thanks for reading, however you find most comfortable.

I hope to begin publishing this year’s Little Mathematics A-to-Z next week, with a rousing start in the letter “M”. I’m also hoping to work several weeks ahead of deadline for a change. To that end, I already need more letters! While I have a couple topics picked out for M-A-T-H, I’ll need topics for the next quartet. If you have a mathematics (or mathematics-adjacent) term starting with E, M, A, or T that I might write a roughly thousand-word essay about? Please, leave a comment and I’ll think about it.

If you do, please leave a mention of any project (mathematics or otherwise) you’d like people to know more about. And several folks were kind enough to make suggestions for M-A-T-H, several weeks ago. I’m still keeping those as possibilities for M, A, and T’s later appearances.

I’m open to re-examining a topic I’ve written about in the past, if I think I have something fresh to say about it. Past A-to-Z’s have been about these subjects:

As I continue to approach readiness for the Little Mathematics A-to-Z, let me share another piece you might have missed. Back in 2016 somehow two A-to-Z’s wasn’t enough for me. I also did a string of “Theorem Thursdays”, trying to explain some interesting piece of mathematics. The Jordan Curve Theorem is one of them.

The theorem, at heart, seems too simple to even be mathematics. It says that a simple closed curve on the plane divides the plane into an inside and an outside. There are similar versions for surfaces in three-dimensional spaces. Or volumes in four-dimensional spaces and so on. Proving the theorem turns out to be more complicated than I could fit into an essay. But proving a simplified version, where the curve is a polygon? That’s doable. Easy, even.

And as a sideline you get an easy way to test whether a point is inside a shape. It’s obvious, yeah, if a point is inside a square. But inside a complicated shape, some labyrinthine shape? Then it’s not obvious, and it’s nice to have an easy test.

This is even mathematics with practical application. A few months ago in my day job I needed an automated way to place a label inside a potentially complicated polygon. The midpoint of the polygon’s vertices wouldn’t do. The shapes could be L- or U- shaped, so that the midpoint wasn’t inside, or was too close to the edge of another shape. Starting from the midpoint, though, and finding the largest part of the polygon near to it? That’s doable, and that’s the Jordan Curve Theorem coming to help me.
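The inside/outside test the theorem licenses is simple enough to sketch. Here’s a hypothetical Python illustration of my own (not my day job’s code) using the ray-casting, or even-odd, rule: shoot a ray from the point and count how many polygon edges it crosses; odd means inside.

```python
def point_in_polygon(px, py, poly):
    """Even-odd rule: cast a ray rightward from (px, py) and count how many
    polygon edges it crosses. Odd means inside; the Jordan Curve Theorem is
    what guarantees 'inside' makes sense for a simple closed polygon."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            # x-coordinate where this edge crosses the horizontal line y = py
            xcross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if xcross > px:
                inside = not inside
    return inside

# An L-shaped polygon. The average of its vertices, (5/3, 5/3), lands outside
# the shape, which is exactly the label-placement trouble described above.
L_shape = [(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)]
print(point_in_polygon(0.5, 0.5, L_shape))      # True
print(point_in_polygon(3, 3, L_shape))          # False: in the notch
print(point_in_polygon(5 / 3, 5 / 3, L_shape))  # False: vertex average misses
```

However labyrinthine the polygon, the crossing count settles the question.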

I am, believe it or not, working ahead of deadline on the Little Mathematics A-to-Z for this year. I feel so happy about that. But that’s eating up time to write fresh stuff here. So please let me share some older material, this from my prolific year 2016.

Transcendental numbers, which I describe at this link, are nearly all the real numbers. We’re able to prove that even though we don’t actually know very many of them. We know some numbers that we’re interested in, like π and e, are transcendental. And that this has surprising consequences. π being a transcendental number means, for example, the Ancient Greek geometric challenge to square the circle using straightedge and compass is impossible.

However, it’s not hard to create a number that you know is transcendental. Here’s how to do it, with an easy step-by-step guide. If you want to create this and declare it’s named after you, enjoy! Nobody but you will ever care about this number, I’m afraid. Its only interesting traits will be that it’s transcendental and that you crafted it. Still, isn’t that nice anyway? I think it’s nice anyway.

I don’t yet have actual words committed to the text editor for this year’s little A-to-Z. Soon, though. Rather than leave things completely silent around here, I’d like to re-share an old sequence about something which delighted me. A long while ago I read Edmund Callis Berkeley’s Giant Brains: Or Machines That Think. It’s a book from 1949 about numerical computing. And it explained just how to really calculate logarithms.

Anyone who knows calculus knows, in principle, how to calculate a logarithm. I mean as in how to get a numerical approximation to whatever the log of 25 is. If you didn’t have a calculator that did logarithms, but you could reliably multiply and add numbers? There’s a polynomial, one of a class known as Taylor Series, that — if you add together infinitely many terms — gives the exact value of a logarithm. If you only add a finite number of terms together, you get an approximation.

That suffices, in principle. In practice, you might have to calculate so many terms and add so many things together you forget why you cared what the log of 25 was. What you want is how to calculate them swiftly. Ideally, with as few calculations as possible. So here’s a set of articles I wrote, based on Berkeley’s book, about how to do that.

Machines That Give You Logarithms explains how to use those tools. And lays out how to get the base-ten logarithm for most numbers that you would like with a tiny bit of computing work. I showed off an example of getting the logarithm of 47.2286 using only three divisions, four additions, and a little bit of looking up stuff.

Without Machines That Think About Logarithms closes it out. One catch with the algorithm described is that you need to work out some logarithms ahead of time and have them on hand, ready to look up. They’re not ones that you care about particularly for any problem, but they make it easier to find the logarithm you do want. This essay talks about which logarithms to calculate, in order to get the most accurate results for the logarithm you want, using the least custom work possible.

And that’s the series! With that, in principle, you have a good foundation in case you need to reinvent numerical computing.

There is an excellent chance it is! Mathematicians sometimes assert the object of their study is a universal truth, independent of all human culture. It may be. But the expression of that interest depends on the humans expressing it. And as with all human activities it picks up quirks. Patterns that don’t seem to make sense. Or that seem to conflict with other patterns. It’s not two days since I most recently saw someone getting cross that 0 times anything is 0, and yet 0! is 1.

Mathematicians are not all of one mind. They notice different things that seem important and want to focus on that. They use ways that make sense to their culture. When they create new notation, or new definitions, they use the old ones to guide them. When a topic’s interesting enough for many people to notice, they bring many trails of notation to describe it. Usually a consensus emerges, that there are some notations that work well to describe these concepts, and the others fall away. But it’s difficult to get complete consistency. Particularly when there are several major fields that don’t need to interact much, but do have some overlap.

It’s the time of month when I like to look at what my popularity is like. How many readers I had, what they were reading, that sort of thing. And I’m even getting to it earlier than usual in the month of July. Credit a hot Sunday when I can’t think of other things to do instead.

According to WordPress there were 2,507 page views here in June 2021. That’s down from the last couple months. But it is above the twelve-month running mean leading up to June, which was 2,445.9 views per month. The twelve-month running median was 2,516.5. This all implies that June was quite in line with my average month from June 2020 through May 2021. It just looks like a decline is all.

There were 1,753 unique visitors recorded by WordPress in June. That again fits between the running averages. There were a mean 1,728.4 unique visitors per month between June 2020 and May 2021. There was a median of 1,800 unique visitors each month over that same range.

The number of likes given collapsed, a mere 36 clicks of the like button given in June compared to a mean of 57.3 and median of 55.5. Given how many of my posts were some variation of “I’m struggling to find the energy to write”? I can’t blame folks not finding the energy to like. Comments were up, though, surely in response to my appeal for Mathematics A-to-Z topics. If you’ve thought of any, please, let me know; I’m eager to know.

I had nine essays posted in June, including my readership review post. These were, in the order most-to-least popular (as measured by page views):

In June I posted 7,852 words, my most verbose month since October 2020. That comes to an average of 981.5 words per posting in June. But the majority of them were in a single post, the exploration of MLX, which shows how the mean can be a misleading measure. This does bring my words-per-posting mean for the year up to 622, an increase of 70 words per posting. I need to not do that again.

As of the start of July I’ve had 1,631 posts here, which gathered 138,286 total views from 81,404 logged unique visitors.

If you’d like to be a regular reader, this is a great time for it, as I’ve almost worked my way through my obsession with checksum routines of 1980s computer magazines! And there’s the A-to-Z starting soon. Each year I do a glossary project, writing essays about mathematics terms from across the dictionary, many based on reader suggestions. All 168 essays from past years are at this link. This year’s should join that set, too.

If you’d like to get new posts without typos corrected, you can sign up for e-mail delivery. Or if you have a WordPress account, you can use “Follow NebusResearch” to add this page to your Reader. And I am @nebusj@mathstodon.xyz, the mathematics-themed instance of the Mastodon network. Thanks for reading, however you find most comfortable.

With qualifiers, of course. Compute! and Compute!’s Gazette had two generations of Automatic Proofreader for Commodore computers. The magazines also had Automatic Proofreaders for the other eight-bit computers that they covered. I trust that those worked the same way, but — with one exception — don’t know. I haven’t deciphered most of those other proofreaders.

Let me introduce how it was used, though. Compute! and Compute!’s Gazette offered computer programs to type in. Many of them were in BASIC, which uses many familiar words of English as instructions. But you can still make typos entering commands, and this causes bugs or crashes in programs. The Automatic Proofreader, for the Commodore (and the Atari), put in a little extra step after you typed in a line of code. It calculated a checksum. It showed that on-screen after every line you entered. And you could check whether that matched the checksum the magazine printed. So the listing in the magazine would be something like:

You would type in all those lines up to the :rem part. ‘rem’ here stands for ‘Remark’ and means the rest of the line is a comment to the programmer, not the computer. So they’d do no harm if you did enter them. But why type text you didn’t need?

So after typing, say, 100 POKE 56,50:CLR:DIM IN$,I,J,A,B,A$,B$,A(7),N$ you’d hit return and with luck get the number 34 up on screen. The Automatic Proofreader did not force you to re-type the line. You were on your honor to do that. (Nor were you forced to type lines in order. If you wished to type line 100, then 200, then 300, then 190, then 250, then 330, you could. The checksum would calculate the same.) And it didn’t only work for entering programs, these commands starting with line numbers. It would return a result for any command you entered. But since you wouldn’t know what the checksum should be for a freeform command, that didn’t tell you much.

The first-generation Automatic Proofreader, which is what I’m talking about here, returned a number between 0 and 255. And it was a simple checksum. It could not detect transposed characters: the checksum for PIRNT was the same as PRINT and PRITN. And, it turns out, errors could offset: the checksum for PEEK(46) would be the same as that for PEEK(55).

And there was one bit of deliberate insensitivity built in. Spaces would not be counted. The checksum for FA=PEEK(45)+Z6*PEEK(46) would be the same as FA = PEEK( 45 ) + Z6 * PEEK( 46 ). So you could organize text in whatever way was most convenient.

Given this, and given the example of the first MLX, you may have a suspicion how the Automatic Proofreader calculated things. So did I, and it turned out to be right. The checksum for the first-generation Automatic Proofreader, at least for the Commodore 64 and the Vic-20, was a simple sum. Take the line that’s been entered. Ignore spaces. But otherwise, take the ASCII code value for each character, and add that up, modulo 256. That is, if the sum is (say) 300, subtract 256 from it, leaving 44.

I’m fibbing a little when I say it’s the ASCII code values. The Commodore computers used a variation on ASCII, called PETSCII (Commodore’s first line of computers was the PET). For ordinary text the differences between ASCII and PETSCII don’t matter. The differences come into play for various characters Commodores had. These would be symbols like the suits of cards, or little circles, or checkerboard patterns. Symbols that, these days, we’d see as emojis, or at least part of an extended character set.

But translating all those symbols is … tedious, but not hard. If you want to do a simulated Automatic Proofreader in Octave, it’s almost no code at all. It turns out Octave and Matlab need no special command to get the ASCII code equivalent of text. So here’s a working simulation:

function retval = automatic_proofreader (oneLine)
  % Drop the spaces, then sum the character codes of the line, modulo 256.
  trimmedLine = strrep (oneLine, " ", "");
  % (strrep works in Matlab too, though Matlab prefers single-quoted strings.)
  retval = mod (sum (trimmedLine), 256);
endfunction

Capitalization matters! The ASCII code for capital-P is different from that for lowercase-p. Spaces won’t matter, though. More exotic characters, such as the color-setting commands, are trouble, and let’s not deal with them right now. Also, you can enclose your line in single quotes, in case for example you want the checksum of a line that has double quotes. Let’s agree that lines containing both single and double quotes don’t exist.
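And since the author invites translation, here’s the same simulation in Python (my own port; it uses plain ASCII, which agrees with PETSCII for ordinary text), demonstrating the transposed-character and offsetting-error collisions described earlier:

```python
def proofreader(line):
    """First-generation Automatic Proofreader checksum: ignore spaces,
    sum the character codes, and keep the result modulo 256."""
    return sum(ord(ch) for ch in line if ch != " ") % 256

print(proofreader("PRINT"), proofreader("PIRNT"))        # 141 141: transposition missed
print(proofreader("PEEK(46)"), proofreader("PEEK(55)"))  # 224 224: offsetting errors collide
print(proofreader("FA=PEEK(45)"), proofreader("FA = PEEK( 45 )"))  # spaces ignored
```

The collisions fall straight out of the algorithm: a plain sum doesn’t care about the order of characters, and any two substitutions whose code changes cancel go unnoticed.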

I understand the way Commodore 64’s work well enough that I can explain the Automatic Proofreader’s code. I plan to do that soon. I don’t know how the Atari version of the Automatic Proofreader worked, but since it had the same weaknesses I assume it used the same algorithm.

There is a first-generation Automatic Proofreader with a difference, though, and I’ll come to that.

As with the previous podcast, there’s almost no mention of Nicholas of Cusa’s mathematics work. On the other hand, if you learn the tiniest possible bit about Nicholas of Cusa, you learn everything there is to know about Nicholas of Cusa. (I believe this joke would absolutely kill with the right audience, and will hear nothing otherwise.) The St Andrews Maths History site has a biography focusing particularly on his mathematical work.

I’m sorry not to be able to offer more about his mathematical work. If someone knows of a mathematics-history podcast with a similar goal, please leave a comment. I’d love to know and to share with other people.

I’d like to say I’m ready to start this year’s Mathematics A-to-Z. I’m not sure I am. But if I wait until I’m sure, I’ve learned, I wait too long. As mentioned, this year I’m doing an abbreviated version of my glossary project. Rather than every letter in the alphabet, I intend to write one essay each for the letters in “Mathematics A-to-Z”. The dashes won’t be included.

While I have some thoughts in mind for topics, I’d love to know what my kind readers would like to see me discuss. I’m hoping to write about one essay, of around a thousand words, per week. One for each letter. The topic should be anything mathematics-related, although I tend to take a broad view of mathematics-related. (I’m also open to biographical sketches.) To suggest something, please, say so in a comment. If you do, please also let me know about any projects you have — blogs, YouTube channels, real-world projects — that I should mention at the top of that essay.

To keep things manageable, I’m looking for suggestions for the first few letters — M, A, T, H — to start. But if you have thoughts for later in the alphabet please share them. I can keep track of that. I am happy to revisit a subject I think I have more to write about, too. Past essays for these letters that I’ve written include:

I wrote a second Tiling essay because I forgot I’d already written one in 2018. I hope not to make that same mistake again. But I am open to repeating a topic, or a variation of a topic, on purpose.

I am embarrassed that after writing 72,650 words about MLX 2.0 last week, I left something out. Specifically, I didn’t include code for your own simulation of the checksum routine on a more modern platform. Here’s a function that carries out the calculations of the Commodore 64/128 or Apple II versions of MLX 2.0. It’s written in Octave, the open-source Matlab-like numerical computation environment. If you can read this, though, you can translate it to whatever language you find convenient.

function [retval] = mlxII (oneline)
  z2 = 2;
  z4 = 254;
  z5 = 255;
  z6 = 256;
  z7 = 127;
  address  = oneline(1);
  entries  = oneline(2:9);
  checksum = oneline(10);
  ck = 0;
  ck = floor(address/z6);
  ck = address - z4*ck + z5*(ck>z7)*(-1);
  ck = ck + z5*(ck>z5)*(-1);
  #
  # This looks like, but is not, the sum mod 255.
  # The 8-bit computers did not have a mod function and
  # used this subtraction instead.
  #
  for i = 1:length(entries),
    ck = ck*z2 + z5*(ck>z7)*(-1) + entries(i);
    ck = ck + z5*(ck>z5)*(-1);
  endfor
  #
  # The checksum *can* be 255 (0xFF), but not 0 (0x00)!
  # Using the mod function could make zeroes appear
  # where 255's should.
  #
  retval = (ck == checksum);
endfunction

This reproduces the code as it was actually coded. Here’s a version that relies on Octave or Matlab’s ability to use modulo operations:

function [retval] = mlxIIslick (oneline)
  factors  = 2.^(7:-1:0);
  address  = oneline(1);
  entries  = oneline(2:9);
  checksum = oneline(10);
  ck = 0;
  ck = mod(address - 254*floor(address/256), 255);
  ck = ck + sum(entries.*factors);
  ck = mod(ck, 255);
  ck = ck + 255*(ck == 0);
  retval = (ck == checksum);
endfunction

Enjoy! Please don’t ask when I’ll have the Automatic Proofreader solved.

A couple months ago I worked out a bit of personal curiosity. This was about how MLX worked. MLX was a program used in Compute! and Compute!’s Gazette magazine in the 1980s, so that people entering machine-language programs could avoid errors. There were a lot of fine programs, some of them quite powerful, free for the typing-in. The catch is this involved typing in a long string of numbers, and if any were wrong, the program wouldn’t work.

So MLX, introduced in late 1983, was a program to make typing in programs better. You would enter a string of six numbers — six computer instructions or data — and a seventh, checksum, number. Back in January I finally worked out what the checksum was. It turned out to be simple. Take the memory location of the first of your set of six instructions, modulo 256. Add to it each of the six instructions, modulo 256. That’s the checksum. If it doesn’t match the typed-in checksum, there’s an error.

There are weaknesses to this, though. It’s vulnerable to transposition errors: if you were supposed to type in 169 002 and put in 002 169 instead, it wouldn’t be caught. It’s also vulnerable to casual typos: 141 178 gives the same checksum as 142 177.
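That checksum, and both weaknesses, are easy to verify with a short sketch in Python (the function name is my own invention):

```python
def mlx1_checksum(address, instructions):
    """Original 1983 MLX checksum: the line's starting address,
    plus each of the six instructions, all modulo 256."""
    ck = address % 256
    for value in instructions:
        ck = (ck + value) % 256
    return ck

# A transposition goes unnoticed: 169 002 versus 002 169.
print(mlx1_checksum(49152, [169, 2, 0, 0, 0, 0]))   # 171
print(mlx1_checksum(49152, [2, 169, 0, 0, 0, 0]))   # 171

# So does the casual typo 141 178 versus 142 177.
print(mlx1_checksum(49152, [141, 178, 0, 0, 0, 0])) # 63
print(mlx1_checksum(49152, [142, 177, 0, 0, 0, 0])) # 63
```
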

Which is all why the original MLX lasted only two years.

What Was The New MLX?

The New MLX, also called MLX 2.0, appeared first in the June 1985 Compute!. This in a version for the Apple II. Six months later a version for the Commodore 64 got published, again in Compute!, though it ran in Compute!’s Gazette too. Compute! was for all the home computers of the era; Compute!’s Gazette specialized in the Commodore computers. I would have sworn that MLX got adapted for the Atari eight-bit home computers too, but can’t find evidence it ever was. By 1986 Compute! was phasing out its type-in programs and didn’t run much for Atari anymore.

The new MLX made a bunch of changes. Some were internal, about how to store a program being entered. One was dramatic in appearance. In the original MLX people typed in decimal numbers, like 32 or 169. In the new, they would enter hexadecimal digits, like 20 or A9. And a string of eight numbers on a line, rather than six. This promised to save our poor fingers. Where before we needed to type in 21 digits to enter six instructions, now we needed 18 digits to enter eight instructions. So the same program would take about two-thirds the number of keystrokes. A plausible line of code would look something like the Speedscript line quoted later in this essay: a four-digit address, then eight two-digit instructions, then a two-digit checksum.

I had a Commodore 64, so I always knew MLX from its Commodore version. The key parts of the checksum code appear in it in lines 350 through 390. Let me copy out the key code, spaced a bit out for easier reading:

360 A = INT(AD/Z6):
GOSUB 350:
A = AD - A*Z6:
GOSUB 350:
PRINT":";
370 CK = INT(AD/Z6):
CK = AD - Z4*CK + Z5*(CK>Z7):
GOTO 390
380 CK = CK*Z2 + Z5*(CK>Z7) + A
390 CK = CK + Z5*(CK>Z5):
RETURN

Z2, Z4, Z5, Z6, and Z7 are constants, defined at the start of the program. Z4 equals 254, Z5 equals 255, Z6 equals 256, and Z7, as you’d expect, is 127. Z2, meanwhile, was a simple 2.

A bit of Commodore BASIC here. INT means to take the largest whole number not larger than whatever’s inside. AD is the address of the start of the line being entered. CK is the checksum. A is one number, one machine language instruction, being put in. GOSUB, “go to subroutine”, means to jump to another line and execute commands from there until reaching a RETURN command. The program then continues from the next instruction after the GOSUB. In this code, line 350 converts a number from decimal to hexadecimal and prints out the hexadecimal version. This bit about adding Z5 * (CK>Z7) looks peculiar.

Commodore BASIC evaluates logical expressions like CK > Z7 into a bit pattern. That pattern looks like a number. We can use it like an integer. Many programming languages do something like that and it can allow for clever but cryptic programming tricks. An expression that’s false evaluates as 0; an expression that’s true evaluates as -1. So, CK + Z5*(CK>Z5) is an efficient little filter. If CK is not larger than Z5, it’s left untouched. If CK is larger than Z5, then Z5 gets subtracted from CK. This keeps CK from being more than 255, exactly as we’d wanted.
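If you want to play with this in a modern language, note that most of them evaluate a true expression as 1 rather than -1, so the sign flips. A quick Python sketch of that filter, with names of my own:

```python
Z5 = 255

def wrap(ck):
    """The BASIC filter CK + Z5*(CK>Z5), translated for Python's
    True-as-1 convention: subtract 255 once the sum passes 255."""
    return ck - Z5 * (ck > Z5)

print(wrap(200))   # 200, left untouched
print(wrap(300))   # 45, which is 300 - 255
```
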

But you also notice: this code makes no sense.

Like, starting the checksum with something derived from the address makes sense. Adding to that numbers based on the instructions makes sense. But the last instruction of line 370 is a jump straight to line 390. Line 380, where any of the actual instructions are put into the checksum, never gets called. Also, there are eight instructions per line, and line 380 only ever handles one of them. What calls it eight times?

And this was a bear to work out. One friend insisted I consider the possibility that MLX was buggy and nobody had found the defect. I could not accept that, not for a program that was so central to so much programming for so long. Besides, there was no denying that it worked: make almost any entry error and the checksum would not match.

Where’s the rest of the checksum formula?

This is what took time! I had to go through the code and find what other lines call lines 360 through 390. There’s a hundred lines of code in the Commodore version of MLX, which isn’t that much. They jump around a lot, though. By my tally 68 of these 100 lines jump to, or can jump to, something besides the next line of code. I don’t know how that compares to modern programming languages, but it’s still dizzying. For a while I thought it might be a net saving in time to write something that would draw a directed graph of the program’s execution flow. It might still be worth doing that.

The checksum formula gets called by two pieces of code. One of them is the code when the program gets entered. MLX calculates a checksum and verifies whether it matches the ninth number entered. The other role is in printing out already-entered data. There, the checksum doesn’t have a role, apart from making the on-screen report look like the magazine listing.

Here’s the code that calls the checksum when you’re entering code:

440 POKE 198,0:
GOSUB 360:
IF F THEN PRINT IN$: PRINT" ";
[ many lines about entering your data here ]
560 FOR I=1 TO 25 STEP 3:
B$ = MID$(IN$, I):
GOSUB 320:
IF I<25 THEN GOSUB 380: A(I/3)=A
570 NEXT:
IF A<>CK THEN GOSUB 1060:
PRINT "ERROR: REENTER LINE ":
F = 1:
GOTO 440
580 GOSUB 1080:
[ several more lines setting up a new line of data to enter ]

Line 320 started the routine that turned a hexadecimal number, such as 7F, into decimal, such as 127. It returns this number as the variable named A. IN$ was the input text, part of the program you enter. This should be 27 characters long. A(I/3) was an element in an array, the string of eight instructions for that entry. Yes, you could use the same name for an array and for a single, unrelated, number. Yes, this was confusing.

But here’s the logic. Line 440 starts work on your entry. It calculates the part of the checksum that comes from the location in memory that data’s entered in. Line 560 does several bits of work. It takes the entered instructions and converts the strings into numbers. Then it takes each of those instruction numbers and adds its contribution to the checksum. Line 570 compares whether the entered checksum matches the computed checksum. If it does match, good. If it doesn’t match, then go back and re-do the entry.

The code for displaying a line of your machine language program is shorter:

630 GOSUB 360:
B = BS + AD - SA:
FOR I = B TO B+7:
A = PEEK(I):
GOSUB 350:
GOSUB 380:
PRINT S$;
640 NEXT:
PRINT "";
A = CK:
GOSUB 350:
PRINT

The bit about PEEK is looking into the buffer, which holds the entered instructions, and reading what’s there. The GOSUB 350 takes the number ‘A’ and prints out its hexadecimal representation. GOSUB 360 calculates the part of the checksum that’s based on the memory location. The GOSUB 380 contributes the part based on every instruction. S$ is a space. It’s used to keep all the numbers from running up against each other.

So what is the checksum formula?

The checksum takes in two parts. The first part is based on the address at the start of the line. Let me call that the number $a$. The second part is based on the entry, the eight instructions following the address. Let me call them $d_1$ through $d_8$. So this is easiest described in two parts.

The base of the checksum, which I’ll call $ck_0$, is:

$ck_0 = a - 254 \left\lfloor \frac{a}{256} \right\rfloor - 255 \left[ \left\lfloor \frac{a}{256} \right\rfloor > 127 \right]$

where the square bracket is 1 if the statement inside is true and 0 if it is false, and where 255 gets subtracted once more if the result is still larger than 255.

For example, suppose the address is 49152 (in hexadecimal, C000), which was popular for Commodore 64 programming. Then $ck_0$ would be 129. If the address is 2049 (in hexadecimal, 0801), another popular location, $ck_0$ would be 17.

Generally, the initial $ck_0$ increases by 1 as the memory address for the start of a line increases. If you entered a line that started at memory address 49153 (hexadecimal C001) for some reason, that $ck_0$ would be 130. A line which started at address 49154 (hexadecimal C002) would have a $ck_0$ of 131. This progression continues until $ck_0$ would reach 256. Then that greater-than filter at the end of the expression intrudes. A line starting at memory address 49278 (C07E) has $ck_0$ of 255, and one starting at memory address 49279 (C07F) has $ck_0$ of 1. I see reason behind this choice.

That’s the starting point. Now to use the actual data, the eight pieces $d_1$ through $d_8$ that are the actual instructions. The easiest way for me to describe this is to do it as a loop, using $ck_0$ to calculate $ck_1$, and $ck_1$ to define $ck_2$, and so on:

$ck_i = 2 \, ck_{i-1} - 255 \left[ ck_{i-1} > 127 \right] + d_i$

again with 255 subtracted if the result is larger than 255.

That is, for each piece of data in turn, double the existing checksum and add the next data to it. If this sum is 256 or larger, subtract 255 from it. The working sum never gets larger than 512, thanks to that subtract-255 rule after the doubling. And then again that subtract-255 rule after adding $d_i$. Repeat through the eighth piece of data. That last calculated checksum, $ck_8$, is the checksum for the entry. If $ck_8$ does match the entered checksum, go on to the next entry. If $ck_8$ does not match the entered checksum, give a warning and go back and re-do the entry.
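The loop just described, written out as a Python sketch (the function name and translation are mine):

```python
def mlx2_checksum(address, data):
    """MLX 2.0 checksum: a base drawn from the line's address, then
    double-and-add for each of the eight data bytes, with 255
    subtracted whenever the working sum would pass 255."""
    high = address // 256
    ck = address - 254 * high - 255 * (high > 127)   # the base ck_0 ...
    ck -= 255 * (ck > 255)                           # ... kept in 1..255
    for d in data:
        ck = 2 * ck - 255 * (ck > 127) + d           # double, then add d_i
        ck -= 255 * (ck > 255)
    return ck

# The Speedscript 3.0 line quoted later in this essay,
# 0848: 20 A9 00 8D 53 1E A0 00 FF, checks out:
print(hex(mlx2_checksum(0x0848, [0x20, 0xA9, 0x00, 0x8D, 0x53, 0x1E, 0xA0, 0x00])))  # 0xff
```
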

Why was MLX written like that?

There are mysterious bits to this checksum formula. First is where it came from. It’s not, as far as I can tell, a standard error-checking routine, or if it is it’s presented in a form I don’t recognize. But I know only small pieces of information theory, and it might be that this is equivalent to a trick everybody knows.

The formula is, at heart, “double your working sum and add the next instruction, and repeat”. At the end, take the sum modulo 255 so that the checksum is no more than two hexadecimal digits. Almost. In studying the program I spent a lot of time on a nearly-functionally-equivalent code that used modulo operations. I’m confident that if Apple II and Commodore BASIC had modulo functions, then MLX would have used them.

But those eight-bit BASICs did not. Instead the programs tested whether the working checksum had gotten larger than 255, and if it had, then subtracted 255 from it. This is a little bit different. It is possible for a checksum to be 255 (hexadecimal FF). This even happened. In the June 1985 Compute!, introducing the new MLX for the Apple II, we have this entry as part of the word processor Speedscript 3.0 that anyone could type in:

0848: 20 A9 00 8D 53 1E A0 00 FF

What we cannot have is a checksum of 0. (Unless a program began at memory location 0, and had instructions of nothing but 0. This would not happen. The Commodore 64, and the Apple II, used those low-address memory locations for system work. No program could use them.) Were the formulas written with modulo operations, we’d see 00 where we should see FF.
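We can check that with the line just quoted. Modulo 255, the checksum equals a weighted sum of the address base and the data bytes, and for this line that sum comes out as an exact multiple of 255. A Python sketch:

```python
# The Speedscript 3.0 line above: address 0848, eight data bytes,
# and a published checksum of FF.
address = 0x0848
data = [0x20, 0xA9, 0x00, 0x8D, 0x53, 0x1E, 0xA0, 0x00]

# Modulo 255, the checksum equals this weighted sum: the base from
# the address, plus each byte scaled by how many doublings it sees.
base = address - 254 * (address // 256)
weighted = base + sum(d << (7 - i) for i, d in enumerate(data))

print(weighted % 255)         # 0: a plain mod would print 00
print(weighted % 255 or 255)  # 255 (0xFF): the subtract-255 scheme
```
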

Doubling the working sum and then setting it to be in a valid range — from 1 to 255 — is easy enough. I don’t know how the designer settled on doubling, but have hypotheses. It’s a good scheme for catching transposition errors, entering 20 FF D2 where one means to enter 20 D2 FF.

The initial $ck_0$ seems strange. The equivalent step for the original MLX was the address on which the entry started, modulo 256. Why the change?

My hypothesis is this change was to make it harder to start typing in the wrong entry. The code someone typed in would be long columns of numbers, for many pages. The text wasn’t backed by alternating bands of color, or periodic breaks, or anything else that made it harder for the eye to skip one or more lines of machine language code.

In the original MLX, skipping one line, or even a couple lines, can’t go undetected. The original MLX entered six pieces of data at a time. If your eye skips a line, the wrong data will mismatch the checksum by 6, or by 12, or by 18 — by 6 times the number of lines you miss. To have the checksum not catch this error, you have to skip 128 lines, and that’s not going to happen. That’s about one and a quarter columns of text and the eye just doesn’t make that mistake. Skimming down a couple lines, yes. Moving to the next column, yes. Next column plus 37 lines? No.

In the new MLX, one enters eight instructions of code at a time. So skipping a line increases the checksum by 8 times the number of lines skipped. If the initial checksum were the line’s starting address modulo 256, then we’d only need to skip 32 lines to get the same initial checksum. Thirty-two lines is a bit much to skip, but it’s less than one-third of a column. That’s not too far. And the eye could see 0968 where it means to read 0868. That’s a plausible enough error and one that such a checksum would be helpless against.

So the more complicated, and outright weird, formula that MLX 2.0 uses does better than this. Skipping 32 lines — entering the line for 0968 instead of 0868 — increases the base checksum by 2. Combined with the subtract-255 rule, you won’t get a duplicate of the checksum for, in most cases, 127 lines. Nobody is going to make that error.
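A quick check of those two claims in Python: the addresses 0868 and 0968 share a modulo-256 value, while the MLX 2.0 base checksums differ by 2. (The function name is mine.)

```python
def base_checksum(address):
    """The ck_0 part of the MLX 2.0 checksum, from the address alone."""
    high = address // 256
    ck = address - 254 * high - 255 * (high > 127)
    return ck - 255 * (ck > 255)

# A simple address-mod-256 base cannot tell the two lines apart ...
print(0x0868 % 256, 0x0968 % 256)                   # 104 104
# ... but the MLX 2.0 base moves by 2, so the error gets caught.
print(base_checksum(0x0868), base_checksum(0x0968))  # 120 122
```
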

So this explains the components. Why is the Commodore 64 version of MLX such a tangle of spaghetti code?

Here I have fewer answers. Part must be that Commodore BASIC was prone to creating messes. For example, it did not really have functions, smaller blocks of code with their own, independent, sets of variables. These would let, say, numbers convert from hexadecimal to decimal without interrupting the main flow of the program. Instead you had to jump, either by GOTO or GOSUB, to another part of the program. The Commodore or Apple II BASIC subroutine has to use the same variable names as the main part of the program, so, pick your variables wisely! Or do a bunch of reassigning values before and after the subroutine’s called.

To be precise, Commodore BASIC did let one define some functions. This by using the DEF FN command. It could take one number as the input, and return one number as output. The whole definition of the function couldn’t be more than 80 characters long. It couldn’t have a loop. Given these constraints, you can see why user-defined functions went all but unused.

The Commodore version jumps around a lot. Of its 100 lines of code, 68 jump or can jump to somewhere else. The Apple II version has 52 lines of code, 28 of which jump or can jump to another line. That’s just over 50 percent of the lines. I’m not sure how much of this reflects Apple II’s BASIC being better than Commodore’s. Commodore 64 BASIC we can charitably describe as underdeveloped. The Commodore 128 version of MLX is a bit shorter than the 64’s (90 lines of code). I haven’t analyzed it to see how much it jumps around. (But it does have some user-defined functions.)

The most mysterious element, to me, is the defining of some constants like Z2, which is 2, or Z5, which is 255. The Apple version of this doesn’t use these constants. It uses 2 or 255 or such in the checksum calculation. I can rationalize replacing 254 with Z4, or 255 with Z5, or 127 with Z7. The Commodore 64 allowed only 80 characters in a program line. So these constants might save only a couple characters, but if they’re needed characters, good. Z2, though, only makes the line longer.

I would have guessed that this reflected experiments. That is, trying out whether one should double the existing sum and add a new number, or triple, or quadruple, or even some more complicated rule. But the Apple II version appeared first, and has the number 2 hard-coded in. This might reflect that Tim Victor, author of the Apple II version, preferred to clean up such details while Ottis R Cowper, writing the Commodore version, did not. Lacking better evidence, I have to credit that to style.

Is this checksum any good?

Whether something is “good” depends on what it is supposed to do. The New MLX, or MLX 2.0, was supposed to make it possible to type in long strings of machine-language code while avoiding errors. So it’s good if it protects against those errors without being burdensome.

It’s a light burden. The person using this types in 18 keystrokes per line. This carries eight machine-language instructions plus one checksum number. So only one-ninth of the keystrokes are overhead, things to check that other work is right. That’s not bad. And it’s better than the original version of MLX, where up to 21 keystrokes gave six instructions. And one-seventh of the keystrokes were the checksum overhead.

The checksum quite effectively guards against entering instructions on a wrong line. To get the same checksum that (say) line 0811 would have you need to jump to line 0C09. In print, that’s another column over and a third of the way down the page. It’s a hard mistake to make.

Entering a wrong number in the instructions — say, typing in 22 where one means 20 — gets caught. The difference gets multiplied by some whole power of two in the checksum. Which power depends on what number’s entered wrong. If the eighth instruction is entered wrong, the checksum is off by that error. If the seventh instruction is wrong, the checksum is off by two times that error. If the sixth instruction is wrong, the checksum is off by four times that error. And so on, so that if the first instruction is wrong, the checksum is off by 128 times that error. And these errors are taken not-quite-modulo 255.

The only way to enter a single number wrong without the checksum catching it is to type something 255 higher or lower than the correct number. And MLX confines you to entering a two-hexadecimal-digit number, that is, a number from 0 to 255. The only mistake it’s possible to make is to enter 00 where you mean FF, or FF where you mean 00.

What about transpositions? Here, the new MLX checksum shines. Doubling the sum so far and adding a new term to it makes transpositions very likely to be caught. Not all of them, though. A transposition of the data at position number $j$ and at position number $k$ will go unnoticed only when $d_j$ and $d_k$ happen to make this true:

$\left( d_j - d_k \right) \left( 2^{8 - j} - 2^{8 - k} \right) \equiv 0 \mod 255$

This doesn’t happen much. It needs $d_j$ and $d_k$ to be 255 apart. Or it needs $d_j - d_k$ to be a multiple of one divisor of 255 and $2^{8-j} - 2^{8-k}$ to be a multiple of a complementary divisor. I’ll discuss when that happens in the next section.

In practice, this is a great simple checksum formula. It isn’t hard to calculate, it catches most of the likely data-entry mistakes, and it doesn’t require much extra data entry to work.

What flaws did the checksum have?

The biggest flaw the MLX 2.0 checksum scheme has is that it’s helpless to distinguish FF, the number 255, from 00, the number 0. It’s so vulnerable to this that a warning got attached to the MLX listing in every issue of the magazines:

Because of the checksum formula used, MLX won’t notice if you accidentally type FF in place of 00, and vice versa. And there’s a very slim chance that you could garble a line and still end up with a combination of characters that adds up to the proper checksum. However, these mistakes should not occur if you take reasonable care while entering data.

So when can a transposition go wrong? Well, any time you swap a 00 and an FF on a line, however far apart they are. But also if you swap the elements in position $j$ and position $k$ whenever $\left( d_j - d_k \right) \left( 2^{8-j} - 2^{8-k} \right)$ works out to a whole multiple of 255.

For a transposition of adjacent instructions to go wrong — say, the third and the fourth numbers in a line — you need the third and fourth numbers to be 255 apart. That is, entering 00 FF where you mean FF 00 will go undetected. But that’s the only possible case for adjacent instructions.

A transposition past one space — say, swapping the third and the fifth numbers in a line — needs the two to be 85, 170, or 255 away. So, if you were supposed to enter (in hexadecimal) EE A9 44 and you instead entered 44 A9 EE, it would go undetected. That’s the only way a one-space transposition can happen. MLX will catch entering EE A9 45 as 45 A9 EE.

A transposition past two spaces — say, swapping the first and the fourth numbers — will always be caught unless the numbers are 255 apart, that is, a 00 and an FF. A transposition past three spaces — like, swapping the first and the fifth numbers — is vulnerable again. Then if the first and fifth numbers are off by 17 (or a multiple of 17) the swap will go unnoticed. A transposition across four spaces will always be caught unless it’s 00 for FF. A transposition across five spaces — like, swapping the second and eighth numbers — has to also have the two numbers be 85 or 170 or 255 apart to sneak through. And a transposition across six spaces — this has to be swapping the first and last elements in the line — again will be caught unless it’s 00 for FF.
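Those cases can all be checked by brute force. This Python sketch, with names of my own (and the checksum function repeated so it stands alone), recomputes the checksum for every pair of byte values in two chosen positions and collects the differences that sneak through:

```python
def mlx2_checksum(address, data):
    """MLX 2.0 checksum: double-and-add with the subtract-255 rule."""
    high = address // 256
    ck = address - 254 * high - 255 * (high > 127)
    ck -= 255 * (ck > 255)
    for d in data:
        ck = 2 * ck - 255 * (ck > 127) + d
        ck -= 255 * (ck > 255)
    return ck

def undetected_gaps(j, k, address=0x0801):
    """Differences d_k - d_j whose swap between positions j and k
    (zero-indexed here) leaves the checksum unchanged."""
    gaps = set()
    for a in range(256):
        for b in range(a + 1, 256):
            line, swapped = [0] * 8, [0] * 8
            line[j], line[k] = a, b
            swapped[j], swapped[k] = b, a
            if mlx2_checksum(address, line) == mlx2_checksum(address, swapped):
                gaps.add(b - a)
    return gaps

print(sorted(undetected_gaps(0, 1)))   # [255]: adjacent, only FF-for-00
print(sorted(undetected_gaps(0, 2)))   # [85, 170, 255]: one space apart
print(sorted(undetected_gaps(0, 4)) == [17 * n for n in range(1, 16)])  # True: three spaces apart
```
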

Listing all the possible exceptions like this makes it sound dire. It’s not. The most likely transposition someone is going to make is swapping the order of two elements. That’s caught unless one of the numbers is FF and the other 00. If the transposition swaps non-neighboring numbers there’s a handful of new cases that might slip through. But you can estimate how often two numbers separated by one or three or five spaces are also different by 85 or 34 or another dangerous combination. (That estimate would suppose that every number from 0 to 255 is equally likely. They’re not, though, because popular machine language instruction codes such as A9 or 20 will be over-represented. So will references to important parts of computer memory such as, on the Commodore, FFD2.)

You will forgive me for not listing all the possible cases where competing typos in entering numbers will cancel out. I don’t want to figure them out either. I will go along with the magazines’ own assessment that there’s a “very slim chance” one could garble the line and get something that passes, though. After all, there are 18,446,744,073,709,551,615 conceivable lines of code one might type in, and only 255 possible checksums. Some garbled lines must match the correct checksum.

Could the checksum have been better?

The checksum could have been different. This is a trivial conclusion. “Better”? That demands thought. A good error-detection scheme needs to catch errors that are common or that are particularly dangerous. It should add as little overhead as possible.

The MLX checksum as it is catches many of the most common errors. A single entry mis-keyed, for example, except for the case of swapping 00 and FF. Or transposing one number for the one next to it. It even catches most transpositions with spaces between the transposed numbers. It catches almost all cases where one enters the entirely wrong line. And it does this for only two more keystrokes per eight pieces of data entered. That’s doing well.

The obvious gap is the inability to distinguish 00 from FF. There’s a cure for that, of course. Count the number of 00’s — or the number of FF’s — in a line, and include that as part of the checksum. It wouldn’t be particularly hard to enter (going back to the Q-Bird example)

(Or if you prefer, to have the extra checksums be 0 0 0 1.)

This adds to the overhead, yes, one more keystroke in what is already a good bit of typing. And one may ask whether you’re likely to ever touch 00 when you mean FF. The keys aren’t near one another. Then you learn that MLX soon got a patch which made keying much easier. It did this by making the characters in the rows under 7 8 9 0 type in digits. And the mapping used (on the Commodore 64) put the key to enter F right next to the key to enter 0.

If you get ambitious, you might attempt even cleverer schemes. Suppose you want to catch those off-by-85 or off-by-17 differences that would slip transpositions past the checksum. Why not, say, copy the last bits of each of your eight data, and use that to assemble a new checksum number? So, for example, in line 0801 up there the last bit of each number was 1-0-0-0-0-0-0-0 which is boring, but gives us 128, hexadecimal 80, as a second checksum. Line 0809 has eighth bits 1-0-0-0-1-0-1-0, or 138 (hex 8A). And so on; so we could have:

Now, though? We’ve got five keystrokes of overhead to sixteen keystrokes of data. Getting a bit bloated. It could be cleaned up a little; the single-digit count of 00’s (or FF’s) is redundant to the two-digit number formed from the cross-section I did there.

And if we were working in a modern programming language we could reduce the MLX checksum and this sampled-digit checksum to a single number. Use the bitwise exclusive-or of the two numbers as the new, ‘mixed’ checksum. You get two checksums in the space of one. In the program you’d build the sampled-digit checksum, exclusive-or it with the mixed checksum, and get back what should be the MLX checksum. Or take the mixed checksum and exclusive-or it with the MLX checksum, and get the sampled-digit checksum.
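The round trip is easy to see with example values. Here the sampled checksum is the hex 80 from the example above, and the classic checksum is a made-up illustration:

```python
classic = 0x8A   # a made-up classic MLX checksum, for illustration
sampled = 0x80   # the sampled-digit checksum from the example above
mixed = classic ^ sampled

# Exclusive-or is its own inverse, so either checksum recovers the other:
print(hex(mixed ^ sampled))   # 0x8a: the classic checksum comes back
print(hex(mixed ^ classic))   # 0x80: so does the sampled one
```
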

This almost magic move has two problems. This sampled digit checksum could catch transpositions that are off by 85 or 17. It won’t catch transpositions off by 170 or by 34, though, just as deadly. It will catch transpositions off by odd multiples of 17, at least. You would catch transpositions off by 170 or by 34 if you sampled the seventh digit, at least. Or if you build a sample based on the fifth or the third digit. But then you won’t catch transpositions off by 85 or by 17. You can add new sampled checksums. This threatens us again with putting in too many check digits for actual data entry.

The other problem is worse: Commodore 64 BASIC did not have a bitwise exclusive-or command. I was shocked, and I was more shocked to learn that Applesoft BASIC also lacked an exclusive-or. The Commodore 128 had exclusive-or, at least. But given that lack, and the inability to add an exclusive-or function that wouldn’t be infuriating? I can’t blame anyone for not trying.

So there is my verdict. There are some obvious enough ways that MLX’s checksum might have been able to catch more errors. But, given the constraints of the computers it was running on? A more sensitive error check likely would not have been available. Not without demanding much more typing. And, as another practical matter, demanding the program listings in the magazine be smaller and harder to read. The New MLX did, overall, a quite good job catching errors without requiring too much extra typing. We’ll probably never see its like again.

This is not a proper Reading the Comics post, since there’s nothing mathematical about this. But it does reflect a project I’ve been letting linger for months and that I intend to finish before starting the abbreviated Mathematics A-to-Z for this year.

In the meanwhile. I have a person dear to me who’s learning college algebra. For no reason clear to me this put me in mind of last year’s essay about Extraneous Solutions. These are fun and infuriating friends. They’re created when you follow the rules about how you can rewrite a mathematical expression without changing its value. And yet sometimes you do these rewritings correctly and get a would-be solution that isn’t actually one. So I’d shared some thoughts about why they appear, and what tedious work keeps them from showing up.

Iva Sallay, creator of the Find The Factors recreational mathematics puzzle and a kind friend to my blog, posted Yes, YOU Can Host a Playful Math Education Blog Carnival. It explains in quite good form how to join in Denise Gaskins’s roaming blog event. It tries to gather educational or recreational or fun or just delightful mathematics links.

Hosting the blog carnival is a great experience I recommend for mathematics bloggers at least once. I seem to be up to hosting it about once a year, most recently in September 2020. Most important in putting one together is looking at your mathematics reading with different eyes. Sallay, though, goes into specifics about what to look for, and how to find that.

I continue to share things I’ve heard, rather than created. Peter Adamson’s podcast The History Of Philosophy Without Any Gaps this week had an episode about Nicholas of Cusa. There’s another episode on him scheduled for two weeks from now.

Nicholas is one of those many polymaths of the not-quite-modern era. Someone who worked in philosophy, theology, astronomy, mathematics, with a side in calendar reform. He’s noteworthy in mathematics and theology and philosophy for trying to understand the infinite and the infinitesimal. Adamson’s podcast — about a half-hour — focuses on the philosophical and theological sides of things. But the mathematics can’t help creeping in, with questions like, how can you tell the difference between a straight line and the edge of a circle with infinitely large diameter? Or between a circle and a regular polygon with infinitely many sides?

I’ll take this chance now to look over my readership from the past month. It’s either that or actually edit this massive article I’ve had sitting for two months. I keep figuring I’ll edit it this next weekend, and then the week ends before I do. This weekend, though, I’m sure to edit it into coherence. Just you watch.

According to WordPress I had 3,068 page views in May of 2021. That’s an impressive number: my 12-month running mean, leading up to May, was 2,366.0 views per month. The 12-month running median is a similar 2,394 views per month. That startles me, especially as I don’t have any pieces that obviously drew special interest. Sometimes there’s a flood of people to a particular page, or from a particular site. That didn’t happen this month, at least as far as I can tell. There was a steady flow of readers to all kinds of things.

There were 2,085 unique visitors, according to WordPress. That’s down from April, but still well above the running mean of 1,671.9 visitors. And above the median of 1,697 unique visitors.

When we rate things per post the dominance of the past month gets even more amazing. That’s an average 340.9 views per posting this month, compared to a mean of 202.5 or a median of 175.5. (Granted, yes, the majority of those were to things from earlier months; there’s almost ten years of backlog and people notice those too.) And it’s 231.7 unique visitors per posting, versus a mean of 144.7 and a median of 127.4.

There were 48 likes given in May. That’s below the running mean of 56.3 and median of 55.5. Per-posting, though, these numbers look better. That’s 5.3 likes per posting over the course of May. The mean per posting was 4.5 and the median 4.1 over the previous twelve months. There were 20 comments, barely above the running mean of 19.4 and running median of 18. But that’s 2.2 comments per posting, versus a mean per posting of 1.7 and a median per posting of 1.4. I make my biggest impact with readers by shutting up more.

I got around to publishing nine things in May. A startling number of them were references to other people’s work or, in one case, me talking about using an earlier bit I wrote. Here are the posts in descending order of popularity. I’m surprised how much this differs from simple chronological order. It suggests there are things people are eager to see, and one of them is Reading the Comics posts. Which I don’t do on a schedule anymore.

As that last and least popular post says, I plan to do an A-to-Z this year. A shorter one than usual, though, one of only fifteen weeks’ duration, and covering only ten different letters. It’s been a hard year and I need to conserve my energies. I’ll begin appealing for subjects soon.

In May 2021 I posted 4,719 words here, figures WordPress, bringing me to a total of 22,620 words this year. This averages out at 524.3 words per posting in May, and 552 words per post for the year.

As of the start of June I’ve had 1,623 posts to here, which gathered a total 135,779 views from a logged 79,646 unique visitors.

If you have a WordPress account, you can add my posts to your Reader. Use the “Follow NebusResearch” button to do that. Or you can use “Follow NebusResearch by E-mail” to get posts sent to your mailbox. That’s the way to get essays before I notice their most humiliating typos.

Thank you for reading, however it is you’re doing, and I hope you’ll do more of that. If you’re not reading, I suppose I don’t have anything more to say.

I enjoy the tradition of writing an A-to-Z, a string of essays about topics from across the alphabet and mostly chosen by readers and commenters. I’ve done at least one each year since 2015 and it’s a thrilling, exhausting performance. I didn’t want to miss this year, either.

But note the “exhausting” there. It’s been a heck of a year and while I’ve been more fortunate than many, I also know my limits. I don’t believe I have the energy to do the whole alphabet. I tell myself these essays don’t have to be big productions, and then they turn into 2,500 words a week for 26 weeks. It’s nice work but it’s also a (slender) pop mathematics book a year, on top of everything else I write in the corners around my actual work.

So how to do less, and without losing the Mathematics A-to-Z theme? And Iva Sallay, creator of Find the Factors and always a kind and generous reader, had the solution. This year I’ll plan on a subset of the alphabet, corresponding to a simple phrase. That phrase? I’m embarrassed to say how long it took me to think of, but it must be the right one.

I plan to do, in this order, the letters of “MATHEMATICS A-TO-Z”.

That is still a 15-week course of essays, but it makes for a worthwhile project. I intend to keep the essays shorter this year, aiming at a 1,000-word cap, so look forward to me breaking 4,000 words explaining “saddle points”. This also implies that I’ll be doubling and even tripling letters, for the first time in one of these sequences. There are to be three A’s, three T’s, and two M’s. Also one each of C, E, H, I, O, S, and Z. I figure I have one Z essay left before I exhaust the letter. I may deal with that problem in 2022.
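If you want to check that letter-count yourself, a couple lines of Python will do it. This is just a toy sketch of the tally, nothing from the essays themselves:

```python
from collections import Counter

# Count the letters in the phrase, ignoring the space and hyphens
phrase = "MATHEMATICS A-TO-Z"
letters = [ch for ch in phrase if ch.isalpha()]
counts = Counter(letters)

print(len(letters))                           # 15 essays in total
print(counts["A"], counts["T"], counts["M"])  # 3 3 2
```

And that confirms the fifteen-essay schedule: three A’s, three T’s, two M’s, and one each of everything else.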

I plan to set my call for topics soon. I’d like to get the sequence started publishing in July, so I have to do that soon. But to give some idea the range of things I’ve discussed before, here’s the roster of past, full-alphabet, A-to-Z topics:

I, too, am fascinated by the small changes in how I titled these posts and even chose whether to capitalize subject names in the roster. By “am fascinated by the small changes” I mean “am annoyed beyond reason by the inconsistencies”. I hope you too have an appropriate reaction to them.

I have only a couple strips this time, and from this week. I’m not sure when I’ll return to full-time comics reading, but I do want to share strips that inspire something.

Carol Lay’s Lay Lines for the 24th of May riffs on Hilbert’s Hotel. This is a metaphor often used in pop mathematics treatments of infinity. So often, in fact, a friend snarked that he wished for any YouTube mathematics channel that didn’t do the same three math theorems. Hilbert’s Hotel was among them. I think I’ve never written a piece specifically about Hilbert’s Hotel. In part because every pop mathematics blog has one, so there are better expositions available. I have a similar restraint against a detailed exploration of the different sizes of infinity, or of the Monty Hall Problem.

Hilbert’s Hotel is named for David Hilbert, of Hilbert problems fame. It’s a thought experiment to explore weird consequences of our modern understanding of infinite sets. It presents various cases about matching elements of a set to the whole numbers, by making it about guests in hotel rooms. And then translates things we accept in set theory, like combining two infinitely large sets, into material terms. In material terms, the operations seem ridiculous. So these thought experiments get labelled “paradoxes”. This is not in the logician sense of being things both true and false, but in the ordinary sense that we are asked to reconcile our logic with our intuition.
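The room-matching tricks are simple enough to write down. Here’s a small sketch, my own illustration rather than anything from Lay’s strip, showing the two standard moves on a finite glimpse of the infinite hotel:

```python
# Hilbert's Hotel: every room n (1, 2, 3, ...) is occupied, yet there's
# always space, because rooms and guests are both countably infinite.
# These are the two standard re-matchings, applied to a finite prefix.

def room_for_one_new_guest(n):
    """Every current guest in room n moves to room n + 1, freeing room 1."""
    return n + 1

def room_for_infinitely_many(n):
    """Every current guest moves to room 2n, freeing every odd room."""
    return 2 * n

rooms = range(1, 11)  # a finite glimpse of the infinitely many rooms
print([room_for_one_new_guest(n) for n in rooms])    # [2, 3, ..., 11]
print([room_for_infinitely_many(n) for n in rooms])  # [2, 4, ..., 20]
```

Written out like that it looks too easy to deserve the name “paradox”. Which is rather the point: the moves are trivial, and it’s our intuition about full hotels that has to bend.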

So the Hotel serves a curious role. It doesn’t make a complex idea understandable, the way many demonstrations do. It instead draws attention to the weirdness in something a mathematics student might otherwise nod through. It does serve some role, or it wouldn’t be so popular now.

It hasn’t always been popular, though. Hilbert introduced the idea in 1924, though per a paper by Helge Kragh, only to address one question. A modern pop mathematician would have a half-dozen problems. George Gamow’s 1947 book One Two Three … Infinity brought it up again, but it didn’t stay in the public eye. It wasn’t until the 1980s that it got a secure place in pop mathematics culture, and that by way of philosophers and theologians. If you aren’t up to reading the whole of Kragh’s paper, I did summarize it a bit more completely in this 2018 Reading the Comics essay.

Anyway, Carol Lay does a great job making a story of it.

Leigh Rubin’s Rubes for the 25th of May I’ll toss in here too. It’s a riff on the art convention of a blackboard equation being meaningless. Normally, of course, the content of the equation doesn’t matter. So it gets simplified and abstracted, for the same reason one draws a brick wall as four separate patches of two or three bricks together. It sometimes happens that a cartoonist makes the equation meaningful. That’s because they’re a recovering physics major like Bill Amend of FoxTrot. Or it’s because the content of the blackboard supports the joke. Which, in this case, it does.

We have goldfish, normally kept in an outdoor pond. It’s not a deep enough pond that it would be safe to leave them out for a very harsh winter. So we keep as many as we can catch in a couple 150-gallon tanks in the basement.

Recently, and irritatingly close to when we’d set them outside, the nitrate level in the tanks grew too high. Fish excrete ammonia. Microorganisms then turn the ammonia into nitrites and then nitrates. In the wild, the nitrates then get used by … I dunno, plants? Which don’t thrive enough in our basement to clean them out. To get the nitrate out of the water all there is to do is replace the water.

We have six buckets, each holding five gallons, of water that we can use for replacement. So there’s up to 30 gallons of water that we could change out in a day. Can’t change more because tap water contains chloramines, which kill bacteria (good news for humans) but hurt fish (bad news for goldfish). We can treat the tap water to neutralize the chloramines, but want to give that time to finish. I have never found a good reference for how long this takes. I’ve adopted “about a day” because we don’t have a water tap in the basement and I don’t want to haul more than 30 gallons of water downstairs any given day.

So I got thinking, what’s the fastest way to get the nitrate level down for both tanks? Change 15 gallons in each of them once a day, or change 30 gallons in one tank one day and the other tank the next?

And, happy to say, I realized this was the tea-making problem I’d done a couple months ago. The tea-making problem had a different goal, that of keeping as much milk in the tea as possible. But the thing being studied was how partial replacements of a solution with one component affects the amount of the other component. The major difference is that the fish produce (ultimately) more nitrates in time. There’s no tea that spontaneously produces milk. But if nitrate-generation is low enough, the same conclusions follow. So, a couple days of 30-gallon changes, in alternating tanks, and we had the nitrates back to a decent level.

We’d have put the fish outside this past week if I hadn’t broken, again, the tool used for cleaning the outside pond.

Several years ago in an A-to-Z I tried to explain cohomologies. I wasn’t satisfied with it, as, in part, I couldn’t think of a good example. You know, something you could imagine demonstrating with specific physical objects. I can reel off definitions, once I look up the definitions, but there’s only so many people who can understand something from that.

Quanta Magazine recently ran an article about homologies. It’s a great piece, if we get past the introduction of topology with that doughnut-and-coffee-cup joke. (Not that it’s wrong, just that it’s tired.) It’s got pictures, too, which is great.

This I came to notice because Refurio Anachro on Mathstodon wrote a bit about it. This in a thread of toots talking about homologies and cohomologies. The thread at this link is more for mathematicians than the lay audience, unlike the Quanta Magazine article. If you’re comfortable reading about simplexes and linear operators and multifunctions you’re good. Otherwise … well, I imagine you trust that cohomologies can take care of themselves. But I feel better-informed for reading the thread. And it includes a link to a downloadable textbook in algebraic topology, useful for people who want to give that a try on their own.

The BBC’s In Our Time program, and podcast, did a 50-minute chat about the longitude problem. That’s the question of how to find one’s position, east or west of some reference point. It’s an iconic story of pop science and, I’ll admit, I’d think anyone likely to read my blog already knows the rough outline of the story. But you never know what people don’t know. And even if you do know, it’s often enjoyable to hear the story told a different way.

The mathematics content of the longitude problem is real, although it’s not discussed more than in passing during the chat. The core insight Western mapmakers used is that the difference between local (sun) time and a reference point’s time tells you how far east or west you are of that reference point. So then the question becomes how you know what your reference point’s time is.
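The conversion itself is pleasantly simple: the Earth turns 360 degrees in 24 hours, so each hour of time difference is 15 degrees of longitude. A sketch of the rule, with my own example numbers rather than anything from the podcast:

```python
# Local solar time minus reference time gives longitude: the Earth
# turns 360 degrees in 24 hours, so one hour of difference is 15
# degrees. Positive means east of the reference; negative means west.

DEGREES_PER_HOUR = 360.0 / 24.0  # 15 degrees

def longitude_from_times(local_solar_hour, reference_hour):
    """Both times in hours on a 24-hour clock; returns degrees east or west."""
    return (local_solar_hour - reference_hour) * DEGREES_PER_HOUR

# A ship whose local noon arrives when the chronometer reads 16:00
# reference time is four hours behind, so 60 degrees west.
print(longitude_from_times(12.0, 16.0))  # -60.0
```

Which makes plain why the whole problem reduces to knowing the reference point’s time accurately, whether by an ingenious clock or an almanac of predictable astronomical events.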

This story, as it’s often told in pop science treatments, tends to focus on the brilliant clockmaker John Harrison, and the podcast does a fair bit of this. Harrison spent his life building a series of ever-more-precise clocks. These could keep London time on ships sailing around the world. (Or at least to the Caribbean, where the most profitable, slavery-driven, British interests were.) But he also spent decades fighting with the authorities he expected to reward him for his work. It makes for an almost classic narrative of lone genius versus the establishment.

But, and I’m glad the podcast discussion comes around to this, the reality is more ambiguous than this. (Actual history is always more ambiguous than whatever you think.) Part of the goal of the British (and other powers) was finding a practical way for any ship to find longitude. Granted Harrison could build an advanced, ingenious clock more accurate than anyone else could. Could he build the hundreds, or thousands, of those clocks that British shipping needed? Could anyone?

And the competing methods for finding longitude were based on astronomy and calculation. The moment when, say, the Moon passes in front of Jupiter is the same for everyone on Earth. (At least for the accuracy needed here.) It can, in principle, be forecast years, even decades ahead of time. So why not print up books listing astronomical events for the next five years and the formulas to turn observations into longitudes? Books are easy to print. You already train your navigators in astronomy so that they can find latitude. (This by how far above the horizon the pole star, or the sun, or another identifiable feature is.) And, incidentally, you gain a way of computing longitude that you don’t lose if your clock breaks. I appreciated having some of that perspective shown.

(The problem of longitude on land gets briefly addressed. The same principles that work at sea work on land. And land offers some secondary checks. For an unmentioned example there’s triangulation. It’s a great process, and a compelling use of trigonometry. I may do a piece about that myself sometime.)

Also a thing I somehow did not realize: British English pronounces “longitude” with a hard G sound. Huh.

So this is not a mathematics-themed comic update, not really. It’s just a bit of startling news about frequent Reading the Comics subject Andertoons. A comic strip back in December revealed that Wavehead had a specific name. According to the strip from the 3rd of December, the student most often challenging the word problem or the definition on the blackboard is named Tommy.

And then last week we got this bombshell:

So, also, it turns out I should have already known this since the strip ran in 2018 also. All I can say is I have a hard enough time reading nearly every comic strip in the world. I can’t be expected to understand them too.

So as not to leave things too despairing let me share a mathematics-mentioning Andertoons from yesterday and also from July 2018.

I grant that I’m later even than usual in doing my readership recap. That news about how to get rid of the awful awful awful Block Editor was too important to not give last Wednesday’s publication slot. But let me get back to the self-preening and self-examination that people always seem to like and that I never take any lessons from.

In April 2021 there were 3,016 page views recorded here, according to WordPress. These came from 2,298 unique visitors. These are some impressive-looking numbers, especially given that in April I only published nine pieces. And one of those was the readership report for March.

The 3,016 page views is appreciably above the running mean of 2,267.9 views per month for the twelve months leading up to April. It’s also above the running median of 2,266.5 for the twelve months before. And, per posting, the apparent growth is the more impressive. This averages at 335.1 views per posting. The twelve-month running mean was 185.5 views per posting, and twelve-month running median 161.0.

Similarly, unique visitors are well above the averages. 2,298 unique visitors in April is well above the running mean of 1,589.9, and the running median of 1,609.5. The total comes out to 255.3 unique visitors per posting. The running mean, per posting, for the twelve months prior to April was 130.7 unique visitors per posting. The median was a mere 114.1 unique visitors per posting.

There were even nice results in the things that show engagement. There were 70 things liked in April, compared to the mean of 54.1 and median of 49. That’s 7.8 likes per posting, well above the mean of 4.1 and median of 4.0. There were for a wonder even more comments than average, 22 given in April compared to a mean of 18.3 and median of 18. Per-posting, that’s 2.4 comments per posting, comfortably above the 1.5 comments per posting mean and 1.2 comments per posting median. It all suggests that I’m finally finding readers who appreciate my genius, or at least style.

I have doubts, of course, because I don’t have the self-confidence to be a successful writer. But I also notice, for example, that quite a few of these views, and visitors, came in a rush from about the 12th through 16th of April. That’s significant because my humor blog logged an incredible number of visits that week. Someone on the Fandom Drama reddit, explaining James Allen’s departure from Mark Trail, linked to a comic strip I’d saved for my own plot recaps. I’m not sure that this resulted in anyone on the Fandom Drama reddit reading a word I wrote. I also don’t know how this would have brought even a few people to my mathematics blog. The most I can find is several hundred people coming to the mathematics blog from Facebook. As far as I know Facebook had nothing to do with the Fandom Drama reddit. But the coincidence is hard to ignore.

As said, I posted nine things in April. Here they are in decreasing order of popularity. This isn’t quite chronological order, even though pieces from earlier in the month have more time to gather views. It likely means something that one of the more popular pieces is a Reading the Comics post for a comic strip which has run in no newspapers since the 1960s.

My writing plans? I do keep reading the comics. I’m trying to read more for comic strips that offer interesting mathematics points or puzzles to discuss. There’ve been few of those, it seems. But I’m burned out on pointing out how a student got a story problem wrong. And it does seem there’ve been fewer of those, too. But since I don’t want to gather the data needed to do statistics I’ll go with my impression. If I am wrong, what harm will it do?

For each of the past several years I’ve done an A-to-Z, writing an essay for each letter in the alphabet. I am almost resolved to do one for this year. My reservation is that I have felt close to burnout for a long while. This is part of why I am posting two or even just one thing per week, and have since the 2020 A-to-Z finished. I think that if I do a 2021 A-to-Z it will have to be under some constraints. First is space. A 2,500-word essay lets me put in a lot of nice discoveries and thoughts about topics. It also takes forever to write. Planning to write an 800-word essay trains me to look at smaller scopes, and makes it easier to find the energy and time to write.

Then, too, I may forego making a complete tour of the alphabet. Some letters are so near tapped out that they stop being fun. Some letters end up getting more subject nominations than I can fulfil. It feels a bit off to start an A-to-Z that won’t ever hit Z, but we do live in difficult times. If I end up doing only thirteen essays? That is probably better than none at all.

If you have thoughts about how I could do a different A-to-Z, or better, please let me know. I’m open to outside thoughts about what’s good in these series and what’s bad in them.

In April 2021 I posted 5,057 words here, by WordPress’s estimate. Over nine posts that averages 561.9 words per post. This brings me to a total of 17,901 words for the year and an average 559 words per post for 2021.

As of the start of May I’ve posted 1,614 things here. They had gathered 131,712 views from 77,564 logged unique visitors.

If you have a WordPress account, you can use the “Follow NebusResearch” button, and posts will appear in your Reader here. If you’d rather get posts in e-mail, typos and all, you can click the “Follow NebusResearch by E-mail” button.

On Twitter my @nebusj account still exists, and posts announcements of things. But Safari doesn’t want to reliably let me read Twitter and I don’t care enough to get that sorted out, so you can’t use it to communicate with me. If you’re on Mastodon, you can find me as @nebusj@mathstodon.xyz, the mathematics-themed server there. Safari does mostly like and let me read that. (It has an annoying tendency to jump back to the top of the timeline. But since Mathstodon is a quiet neighborhood this jumping around is not a major nuisance.)

Thank you for reading. I hope you’re enjoying it. And if you do have thoughts for a 2021 A-to-Z, I hope you’ll share them.

So I have to skip my planned post for right now, in favor of good news for WordPress bloggers. I apologize for the insular nature of this, but, it’s news worth sharing.

About two months ago WordPress pushed this update where I had no choice but to use their modern ‘Block’ editor. Its main characteristics are that everything takes longer and behaves worse. And more unpredictably. This is part of a site-wide reorganization where everything is worse. Like, it dumped the old system where you could upload several pictures, put in captions and alt-text for them, and have the captions be saved. And somehow the Block Editor kept getting worse. It has two modes, a ‘Visual Editor’ where it shows roughly what your post would look like, and a ‘Code Editor’ where it shows the HTML code you’re typing in. And this past week it decided anything put in as Code Editor should preview as ‘This block has encountered an error and cannot be previewed’.

It’s sloppy, but everything about the Block Editor is sloppy. There is no guessing, at any point, what clicking the mouse will do, much less why it would do that. The Block Editor is a master class in teaching helplessness. I would pay ten dollars toward an article that studied the complex system of failures and bad decisions that created such a bad editor.

This is not me being a cranky old man at a web site changing. I gave it around two months, plenty of time to get used to the scheme and to understand what it does well. It does nothing well.

For example, if I have an article and wish to insert a picture between two paragraphs? And I click at the space between the two paragraphs where I want the picture? There are at least four different things that the mouse click might cause to happen, one of them being “the editor jumps to the very start of the post”. Which of those four will happen? Why? I don’t know, and you know what? I should not have to know.

In the Classic Editor, if I want to insert a picture, I click in my post where I want the picture to go. I click the ‘Insert Media’ button. I select the picture I want, and that’s it. Any replacement system should be no less hard for me, the writer, to use. Last week, I had to forego putting a picture in one of my Popeye cartoon reviews because nothing would allow me to insert a picture. This is WordPress’s failure, not mine.

With the latest change, and thinking seriously whether WordPress blogging is worth the aggravation, I went to WordPress’s help pages looking for how to get the old editor back. And, because their help pages are also a user-interface clusterfluff, ended up posting this question to a forum that exists somewhere. And, wonderfully, musicdoc1 saw my frustrated pleas and gave me the answer. I am grateful to them and I cannot exaggerate how much difference this makes. Were I forced to choose between the Block Editor and not blogging at all, not blogging would win.

I am so very grateful to musicdoc1 for this information and I am glad to be able to carry on here.

These carnivals often feature recreational mathematics. Sallay’s collection this month has even more than usual, and (to my tastes) more delightful ones than usual. Even if you aren’t an educator or parent it’s worth reading, as there’s surely something you haven’t thought about before.

And if you have a blog, and would like to host the carnival some month? Denise Gaskins, who organizes the project, is taking volunteers. The 147th carnival needs a host yet, and there’s all of fall and winter available too. Hosting is an exciting and challenging thing to do, and I do recommend anyone with pop-mathematics inclinations trying it at least once.