A Reading the Comics post a couple weeks back inspired me to find the centroid of a regular tetrahedron. A regular tetrahedron, also known as “a tetrahedron”, is the four-sided die shape. A pyramid with triangular base. Or a cone with a triangle base, if you prefer. If one asks a person to draw a tetrahedron, and they comply, they’ll likely draw this shape. The centroid, the center of mass of the tetrahedron, is at a point easy enough to find. It’s on the perpendicular between any of the four faces — the equilateral triangles — and the vertex not on that face. Particularly, it’s one-quarter the distance from the face towards the opposite vertex. We can reason that out purely geometrically, without calculating, and I did in that earlier post.
But most tetrahedrons are not regular. They have centroids too; where are they?
Thing is I know the correct answer going in. It’s at the “average” of the vertices of the tetrahedron. Start with the Cartesian coordinates of the four vertices. The x-coordinate of the centroid is the arithmetic mean of the x-coordinates of the four vertices. The y-coordinate of the centroid is the mean of the y-coordinates of the vertices. The z-coordinate of the centroid is the mean of the z-coordinates of the vertices. Easy to calculate; but, is there a way to see that this is right?
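A minimal sketch of that calculation, with vertex coordinates invented purely for illustration:

```python
# The centroid of a tetrahedron as the arithmetic mean of its vertices.
# These vertex coordinates are made up for illustration.
def centroid(vertices):
    """Coordinate-wise arithmetic mean of a list of (x, y, z) points."""
    n = len(vertices)
    return tuple(sum(v[i] for v in vertices) / n for i in range(3))

A, B, C, D = (0, 0, 0), (4, 0, 0), (0, 4, 0), (0, 0, 4)
print(centroid([A, B, C, D]))   # (1.0, 1.0, 1.0)
```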
What’s got me is I can think of an argument that convinces me. So in this sense, I have an easy proof of it. But I also see where this argument leaves a lot unaddressed. So it may not prove things to anyone else. Let me lay it out, though.
So start with a tetrahedron of your own design. This will be less confusing if I have labels for the four vertices. I’m going to call them A, B, C, and D. I don’t like those labels, not just for being trite, but because I so want ‘C’ to be the name for the centroid. I can’t find a way to do that, though, and not have the four tetrahedron vertices be some weird set of letters. So let me use ‘P’ as the name for the centroid.
Where is P, relative to the points A, B, C, and D?
And here’s where I give a part of an answer. Start out by putting the tetrahedron somewhere convenient. That would be the floor. Set the tetrahedron so that the face with triangle ABC is in the xy plane. That is, points A, B, and C all have the z-coordinate of 0. The point D has a z-coordinate that is not zero. Let me call that coordinate h. I don’t care what the x- and y-coordinates for any of these points are. What I care about is what the z-coordinate for the centroid P is.
The property of the centroid that was useful last time around was that it split the regular tetrahedron into four smaller, irregular, tetrahedrons, each with the same volume. Each with one-quarter the volume of the original. The centroid P does that for the tetrahedron too. So, how far does the point P have to be from the triangle ABC to make a tetrahedron with one-quarter the volume of the original?
The answer comes from the same trick used last time. The volume of a cone is one-third the area of the base times its altitude. The volume of the tetrahedron ABCD, for example, is one-third times the area of triangle ABC times how far point D is from the triangle. That number I’d labelled h. The volume of the tetrahedron ABCP, meanwhile, is one-third times the area of triangle ABC times how far point P is from the triangle. So the point P has to be one-quarter as far from triangle ABC as the point D is. It’s got a z-coordinate of one-quarter h.
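Both volume claims can be checked with numbers. In this sketch the coordinates are arbitrary, and the volume comes from the scalar triple product, a standard formula:

```python
# Triangle ABC sits in the z = 0 plane; apex D is at height h.
A, B, C = (0.0, 0.0, 0.0), (5.0, 1.0, 0.0), (2.0, 4.0, 0.0)
h = 3.0
D = (1.0, 1.0, h)

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def det3(u, v, w):
    # determinant of the 3x3 matrix whose rows are u, v, w
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
          - u[1] * (v[0] * w[2] - v[2] * w[0])
          + u[2] * (v[0] * w[1] - v[1] * w[0]))

def tet_volume(p, q, r, s):
    # one-sixth the absolute scalar triple product of three edge vectors
    return abs(det3(sub(q, p), sub(r, p), sub(s, p))) / 6.0

# Area of ABC: half the cross product's z-component, since ABC lies in z = 0.
base_area = abs(det3(sub(B, A), sub(C, A), (0.0, 0.0, 1.0))) / 2.0

vol = tet_volume(A, B, C, D)
print(abs(vol - base_area * h / 3.0) < 1e-12)   # True: one-third base times altitude

P = (1.0, 1.0, h / 4.0)   # any point at height one-quarter h
print(abs(tet_volume(A, B, C, P) - vol / 4.0) < 1e-12)   # True: one-quarter the volume
```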
Notice, by the way, that while I don’t know anything about the x- and y- coordinates of any of these points, I do know the z-coordinates. A, B, and C all have z-coordinate of 0. D has a z-coordinate of h. And P has a z-coordinate of one-quarter h. One-quarter h sure looks like the arithmetic mean of 0, 0, 0, and h.
At this point, I’m convinced. The coordinates of the centroid have to be the mean of the coordinates of the vertices. But you also see how much is not addressed. You’d probably grant that I have the z-coordinate worked out when three vertices have the same z-coordinate. Or where three vertices have the same y-coordinate or the same x-coordinate. You might allow that if I can rotate a tetrahedron, I can get three points to the same z-coordinate (or y- or x- if you like). But this still only gets one coordinate of the centroid P.
I’m sure a bit of algebra would wrap this up. But I would like to avoid that, if I can. I suspect the way to argue this geometrically depends on knowing the line from vertex D to tetrahedron centroid P, if extended, passes through the centroid of triangle ABC. And something similar applies for vertices A, B, and C. I also suspect there’s a link between the vector which points the direction from D to P and the sum of the three vectors that point the directions from D to A, B, and C. I haven’t quite got there, though.
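Both suspicions do hold, and a numeric check makes that easy to believe even without a geometric argument. This sketch takes the average-of-the-vertices formula as given and tests the two conjectures on an arbitrary irregular tetrahedron:

```python
# An arbitrary irregular tetrahedron, for checking the two conjectures.
A = (0.0, 0.0, 0.0)
B = (6.0, 1.0, 0.5)
C = (1.0, 5.0, 2.0)
D = (2.0, 2.0, 7.0)

def mean(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

P = mean([A, B, C, D])   # the tetrahedron's centroid (average of vertices)
Q = mean([A, B, C])      # the centroid of the face ABC

# Conjecture 1: the vector D -> P is one-quarter the sum of D -> A, D -> B, D -> C.
DP = tuple(P[i] - D[i] for i in range(3))
S = tuple((A[i] - D[i]) + (B[i] - D[i]) + (C[i] - D[i]) for i in range(3))
print(all(abs(DP[i] - S[i] / 4.0) < 1e-12 for i in range(3)))   # True

# Conjecture 2: the line from D through P, extended, hits the face centroid Q.
t = 4.0 / 3.0   # one-third again past P
R = tuple(D[i] + t * DP[i] for i in range(3))
print(all(abs(R[i] - Q[i]) < 1e-12 for i in range(3)))          # True
```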
I have another mathematics-themed podcast to share. It’s again from the BBC’s In Our Time, a 50-minute program in which three experts discuss a topic. Here they came back around to mathematics and physics. And along the way chemistry and mensuration. The topic here was Pierre-Simon Laplace, who’s one of those people whose name you learn well as a mathematics or physics major. He doesn’t quite reach the levels of Euler — who does? — but he’s up there.
Laplace might be best known for his work in celestial mechanics. He (independently of Immanuel Kant) developed the nebular hypothesis, that the solar system formed from the contraction of a great cloud of dust. We today accept a modified version of this. And for studying the question of whether the solar system is stable. That is, whether the perturbations every planet has on one another average out to nothing, or to something catastrophic. And studying probability, which has more to do with these questions than one might imagine. And then there’s general mechanics, and differential equations, and if that weren’t enough, his role in establishing the Metric system. All this and more gets discussed.
March was the first time in three-quarters of a year that I did any Reading the Comics posts. One was traditional, a round-up of comics on a particular theme. The other was new for me, a close look at a question inspired by one comic. Both turned out to be popular. Now see if I learn anything from that.
I’d left the Reading the Comics posts on hiatus when I started last year’s A-to-Z. Given the stress of the pandemic I did not feel up to that great a workload. For this year I am considering whether I feel up to an A-to-Z again. An A-to-Z is enjoyable work, yes, and I like the work. But I am still thinking over whether this is work I want to commit to just now.
That’s for the future. What of the recent past? WordPress’s statistics page suggests that the comics were very well-received. It tells me there were 2,867 page views in March. That’s the greatest number since November, the last full month of the 2020 A-to-Z. This is well above the twelve-month running average of 2,199.8 views per month. And even further above the twelve-month running median of 2,108 views per month. Per posting — there were ten postings in March — the figures are even greater. There were 286.7 views per posting in March. The running mean is 172.9 views per posting, and the running median 144.8.
There were 1,993 unique visitors in March. This is well above the running averages. The twelve-month running mean was 1,529.4 unique visitors, and the running median 1,491.5. This is 199.3 unique visitors per March posting, not a difficult calculation to make. The twelve-month running mean was 121.1 viewers per posting, though, and the running median a mere 99.8 viewers per posting. So that’s popular.
Not popular? Talking to me. We all feel like that sometimes but I have data. After a chatty February things fell below average for March. There were 30 likes given in March, below the running mean of 56.7 and median of 55.5. There were 3.0 likes per posting. The running mean for the twelve months leading in to this was 4.2 likes per posting. The running median was 4.0.
And actual comments? There were 10 of them in March, below the mean of 14.3 and median of 10. This averaged 1.0 comments per posting, which is at least something. The running per-post mean is 1.6 comments, though, and median is 1.4. It could be the centroids of regular tetrahedrons are not the hot, debatable topic I had assumed.
Pi Day was, as I’d expected, a good day for reading Pi Day comics. And miscellaneous other articles about Pi Day. I need to write some more up for next year, to enjoy those search engine queries. There are some things in differential equations that would be a nice different take.
As mentioned, I posted ten things in March. Here they are in decreasing order of popularity. I would expect this to be roughly a chronological list of when things were posted. It doesn’t seem to be, but I haven’t checked whether the difference is statistically significant.
In March I posted 5,173 words here, for an average 517.3 words per post. That’s shorter than my average January and February posts were. My average words-per-posting for the year has dropped to 558. And despite my posts being on average shorter, this was still my most verbose month of 2021. I’ve had 12,844 words posted this year, through the start of April, and more than two-fifths of them were posted in March.
As of the start of April I’ve posted 1,605 things to the blog here. They’ve gathered 129,696 page views from an acknowledged 75,266 visitors.
If you’d like to be a regular reader, there are a couple of approaches. One is to read regularly. The best way for you to do that is using the RSS feed in whatever reader you prefer. I won’t see you show up in my statistics, and that’s fine. If you don’t have an RSS reader, you can open a free account at Dreamwidth or Livejournal and add any RSS feed you like. You can do that from https://www.dreamwidth.org/feeds/ or https://www.livejournal.com/syn, depending on which you sign up for. If that’s too much, you can use the “Follow NebusResearch By E-mail” button, which will send you essays after they’ve appeared and before I’ve fixed typos.
If you have a WordPress account you can use the “Follow NebusResearch” button to add me to your Reader. If you have Twitter, congratulations; I don’t exactly. My account at @nebusj is still there, but it only has an automated post announcement. I don’t know when that will break. If you’re on Mastodon, you can find me as @firstname.lastname@example.org.
One last thing. WordPress imposed their awful, awful, awful ‘Block’ editor on my blog. I used to be able to use the classic, or ‘good’, editor, where I could post stuff without it needing twelve extra mouse clicks. If anyone knows hacks to get the good editor back please leave a comment.
I’ve been reading The Disordered Cosmos: A Journey Into Dark Matter, Spacetime, and Dreams Deferred, by Chanda Prescod-Weinstein. It’s the best science book I’ve read in a long while.
Part of it is a pop-science discussion of particle physics and cosmology, as they’re now understood. It may seem strange that the tiniest things and the biggest thing are such natural companion subjects. That is what seems to make sense, though. I’ve fallen out of touch with a lot of particle physics since my undergraduate days and it’s wonderful to have it discussed well. This sort of pop physics is for me a pleasant comfort read.
The other part of the book is more memoir, and discussion of the culture of science. This is all discomfort reading. It’s an important discomfort.
I discuss sometimes how mathematics is, pretensions aside, a culturally-determined thing. Usually this is in the context of how, for example, the fact that we have questions about “perfect numbers” at all is plausibly an idiosyncrasy. I don’t talk much about the culture of working mathematicians. In large part this is because I’m not a working mathematician, and don’t have close contact with working mathematicians. And then even if I did — well, I’m a tall, skinny white guy. I could step into most any college’s mathematics or physics department, sit down in a seminar, and be accepted as belonging there. People will assume that if I say anything, it’s worth listening to.
Chanda Prescod-Weinstein, a Black Jewish agender woman, does not get similar consideration. This despite her much greater merit. And, like, I was aware that women have it harder than men. And Black people have it harder than white people. And that being open about any but heterosexual cisgender inclinations is making one’s own life harder. What I hadn’t paid attention to was how much harder, and how relentlessly harder. Most every chapter, including the comfortable easy ones talking about families of quarks and all, is several needed slaps to my complacent face.
Her focus is on science, particularly physics. It’s not as though mathematics is innocent of driving women out or ignoring them when it can’t. Or of treating Black people with similar hostility. Much of what’s wrong is passively accepting patterns of not thinking about whether mathematics is open to everyone who wants in. Prescod-Weinstein offers many thoughts and many difficult thoughts. They are worth listening to.
I’m not yet looking to discuss every comic strip with any mathematics mention. But something gnawed at me in this installment of Greg Evans and Karen Evans’s Luann. It’s about the classes Gunther says he’s taking.
The main characters in Luann are in that vaguely-defined early-adult era. They’re almost all attending a local university. They’re at least sophomores, since they haven’t been doing stories about the trauma and liberation of going off to school. How far they’ve gotten has been completely undefined. So here’s what gets me.
Gunther taking vector calculus? That makes sense. Vector calculus is a standard course if you’re taking any mathematics-dependent major. It might be listed as Multivariable Calculus or Advanced Calculus or Calculus III. It’s where you learn partial derivatives, integrals along a path, integrals over a surface or volume. I don’t know Gunther’s major, but if it’s any kind of science, yeah, he’s taking vector calculus.
Algebraic topology, though. That I don’t get. Topology at all is usually an upper-level course. It’s for mathematics majors, maybe physics majors. Not every mathematics major takes topology. Algebraic topology is a deeper specialization of the subject. I’ve only seen courses listed as algebraic topology as graduate courses. It’s possible for an undergraduate to take a graduate-level course, yes. And it may be that Gunther is taking a regular topology course, and the instructor prefers to focus on algebraic topology.
But even a regular topology course relies on abstract algebra. Which, again, is something you’ll get as an undergraduate. If you’re a mathematics major you’ll get at least two years of algebra. And, if my experience is typical, still feel not too sure about the subject. Thing is that Intro to Abstract Algebra is something you’d plausibly take at the same time as Vector Calculus. Then you’d get Abstract Algebra and then, if you wished, Topology.
So you see the trouble. I don’t remember anything in algebra-to-topology that would demand knowing vector calculus. So it wouldn’t mean Gunther took courses without taking the prerequisites. But it’s odd to take an advanced mathematics course at the same time as a basic mathematics course. Unless Gunther’s taking an advanced vector calculus course, which he might be. Although since he wants to emphasize that he’s taking difficult courses, it’s odd to not say “advanced”. Especially if he is tossing in “algebraic” before topology.
And, yes, I’m aware of the Doylist explanation for this. The Evanses wanted courses that sound impressive and hard. And that’s all the scene demands. The joke would not be more successful if they picked two classes from my actual Junior year schedule. None of the characters have a course of study that could be taken literally. They’ve been university students full-time since 2013 and aren’t in their senior year yet. It would be fun, is all, to find a way this makes sense.
It’s a natural question to wonder this time of year. The date when Easter falls is calculated by some tricky numerical rules. These come from the desire to make Easter an early-spring (in the Northern hemisphere) holiday, while tying it to the date of Passover, as worked out by people who did not know the exact rules by which the Jewish calendar worked. The result is that some dates are more likely than others to be Easter.
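Those tricky numerical rules can be written down compactly. What follows is the anonymous Gregorian computus, often credited to Meeus, Jones, and Butcher; it gives the month and day of Western Easter for a Gregorian-calendar year:

```python
def easter(year):
    """Month and day of Western Easter in a Gregorian-calendar year.

    The anonymous Gregorian algorithm (Meeus/Jones/Butcher form).
    """
    a = year % 19                         # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30    # roughly, the Moon's age on a key date
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # days to the following Sunday
    m = (a + 11 * h + 22 * l) // 451
    month = (h + l - 7 * m + 114) // 31
    day = (h + l - 7 * m + 114) % 31 + 1
    return month, day

print(easter(2021))   # (4, 4): Easter 2021 fell on the 4th of April
```

Tallying this function’s output over a long run of years shows the uneven distribution of dates the paragraph above describes.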
John Golden, MathHombre, was host this month for the Playful Math Education Blog Carnival. He’s gathered this month’s collection of puzzles, essays, and creative mathematics projects. Among them are some quilts and pattern-block tiles, which manifest all that talk about the structure of mathematical objects and their symmetries in easy-to-see form. There’s likely to be something of interest there.
Among the wonderful things I discovered there is Math Zine Fest 2021. It’s as the name suggests, a bunch of zines — short printable magazines on a niche topic — put together for the end of February. I had missed this organizing, but hope to get to see later installments. I don’t know what zine I might make, but I must have something I could do.
Comic Strip Master Command has not, to appearances, been distressed by my Reading the Comics hiatus. There are still mathematically-themed comic strips. Many of them are about story problems and kids not doing them. Some get into a mathematical concept. One that ran last week caught my imagination so I’ll give it some time here. This and other Reading the Comics essays I have at this link, and I figure to resume posting them, at least sometimes.
The centroid is good geometry, something which turns up in plane and solid shapes. It’s a center of the shape: the arithmetic mean of all the points in the shape. (There are other things that can, with reason, be called a center too. MathWorld mentions the existence of 2,001 things that can be called the “center” of a triangle. It must be only a lack of interest that’s kept people from identifying even more centers for solid shapes.) It’s the center of mass, if the shape is a homogeneous block. Balance the shape from below this centroid and it stays balanced.
For a complicated shape, finding the centroid is a challenge worthy of calculus. For these shapes, though? The sphere, the cube, the regular tetrahedron? We can work those out by reason. And, along the way, work out whether this rule gives an advantage to either boxer.
The sphere first. That’s the easiest. The centroid has to be the center of the sphere. Like, the point that the surface of the sphere is a fixed radius from. This is so obvious it takes a moment to think why it’s obvious. “Why” is a treacherous question for mathematics facts; why should 4 divide 8? But sometimes we can find answers that give us insight into other questions.
Here, the “why” I like is symmetry. Look at a sphere. Suppose it lacks markings. There’s none of the referee’s face or bow tie here. Imagine then rotating the sphere some amount. Can you see any difference? You shouldn’t be able to. So, in doing that rotation, the centroid can’t have moved. If it had moved, you’d be able to tell the difference. The rotated sphere would be off-balance. The only place inside the sphere that doesn’t move when the sphere is rotated is the center.
This symmetry consideration helps answer where the cube’s centroid is. That also has to be the center of the cube. That is, halfway between the top and bottom, halfway between the front and back, halfway between the left and right. Symmetry again. Take the cube and stand it upside-down; does it look any different? No, so, the centroid can’t be any closer to the top than it can the bottom. Similarly, rotate it 180 degrees without taking it off the mat. The rotation leaves the cube looking the same. So this rules out the centroid being closer to the front than to the back. It also rules out the centroid being closer to the left end than to the right. It has to be dead center in the cube.
Now to the regular tetrahedron. Obviously the centroid is … all right, now we have issues. Dead center is … where? We can tell when the regular tetrahedron’s turned upside-down. Also when it’s turned 90 or 180 degrees.
Symmetry will guide us. We can say some things about it. Each face of the regular tetrahedron is an equilateral triangle. The centroid has to be along the altitude. That is, the vertical line connecting the point on top of the pyramid with the equilateral triangle base, down on the mat. Imagine looking down on the shape from above, and rotating the shape 120 or 240 degrees if you’re still not convinced.
And! We can tip the regular tetrahedron over, and put another of its faces down on the mat. The shape looks the same once we’ve done that. So the centroid has to be along the altitude between the new highest point and the equilateral triangle that’s now the base, down on the mat. We can do that for each of the four sides. That tells us the centroid has to be at the intersection of these four altitudes. More, that the centroid has to be exactly the same distance to each of the four vertices of the regular tetrahedron. Or, if you feel a little fancier, that it’s exactly the same distance to the centers of each of the four faces.
It would be nice to know where along this altitude this intersection is, though. We can work it out by algebra. It’s no challenge to figure out the Cartesian coordinates for a good regular tetrahedron. Then finding the point that’s got the right distance is easy. (Set the base triangle in the xy plane. Center it, so the coordinates of the highest point are (0, 0, h) for some number h. Set one of the base vertices on the y-axis, that is, at coordinates (0, b, 0) for some b. Then find the c so that (0, 0, c) is exactly as far from (0, 0, h) as it is from (0, b, 0).) But algebra is such a mass of calculation. Can we do it by reason instead?
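For the skeptical, here is that parenthetical done numerically. The base circumradius s/√3 and height s·√(2/3) are the standard measurements for a regular tetrahedron of edge length s; the edge length itself is arbitrary:

```python
import math

s = 1.0                         # edge length; any value gives the same ratio
b = s / math.sqrt(3)            # distance from the base's center to a base vertex
h = s * math.sqrt(2.0 / 3.0)    # height of the apex (0, 0, h) above the base

# Want c so that (0, 0, c) is as far from the apex (0, 0, h)
# as from the base vertex (0, b, 0):  (h - c)^2 = b^2 + c^2.
c = (h * h - b * b) / (2 * h)
print(abs(c - h / 4) < 1e-12)   # True: c comes out to one-quarter of h
```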
That I ask the question answers it. That I preceded the question with talk about symmetry answers how to reason it. The trick is that we can divide the regular tetrahedron into four smaller tetrahedrons. These smaller tetrahedrons aren’t regular; they’re not the Platonic solid. But they are still tetrahedrons. The little tetrahedron has as its base one of the equilateral triangles that’s the bigger shape’s face. The little tetrahedron has as its fourth vertex the centroid of the bigger shape. Draw in the edges, and the faces, like you’d imagine. Three edges, each connecting one of the base triangle’s vertices to the centroid. The faces have two of these new edges plus one of the base triangle’s edges.
The four little tetrahedrons have to all be congruent. Symmetry again; tip the big tetrahedron onto a different face and you can’t see a difference. So we’ll know, for example, all four little tetrahedrons have the same volume. The same altitude, too. The centroid is the same distance to each of the regular tetrahedron’s faces. And the four little tetrahedrons, together, have the same volume as the original regular tetrahedron.
What is the volume of a tetrahedron?
If we remember dimensional analysis we may expect the volume should be a constant times the area of the base of the shape times the altitude of the shape. We might also dimly remember there is some formula for the volume of any conical shape. A conical shape here is something that’s got a simple, closed shape in a plane as its base. And some point P, above the base, that connects by straight lines to every point on the base shape. This sounds like we’re talking about circular cones, but it can be any shape at the base, including polygons.
So we double-check that formula. The volume of a conical shape is one-third times the area of the base shape times the altitude. That’s the perpendicular distance between P and the plane that the base shape is in. And, hey, one-third times the area of the face times the altitude is exactly what we’d expect.
So. The original regular tetrahedron has a base — has all its faces — with area A. It has an altitude h. That h must relate in some way to the area; I don’t care how. The volume of the regular tetrahedron has to be one-third times A times h.
The volume of the little tetrahedrons is — well, they have the same base as the original regular tetrahedron. So a little tetrahedron’s base is A. The altitude of the little tetrahedron is the height of the original tetrahedron’s centroid above the base. Call that c. How can the volume of the little tetrahedron, one-third times A times c, be one-quarter the volume of the original tetrahedron, one-third times A times h? Only if c is one-quarter h.
This pins down where the centroid of the regular tetrahedron has to be. It’s on the altitude underneath the top point of the tetrahedron. It’s one-quarter of the way up from the equilateral-triangle face.
(And I’m glad, checking this out, that I got to the right answer after all.)
So, if the cube and the tetrahedron have the same height, then the cube has an advantage. The cube’s centroid is higher up, so the tetrahedron has a narrower range to punch. Problem solved.
I do figure to talk about comic strips, and mathematics problems they bring up, more. I’m not sure how writing about one single strip turned into 1300 words. But that’s what happens every time I try to do something simpler. You know how it goes.
The skeptical reader might say this is obvious. They’re invited to write a simulation that takes a set of fold lines and predicts which sides of the paper are angled out and which are angled in. The skeptical reader may also ask who cares about paper. It’s paper because many mathematics problems start from the kinds of things one sets one’s hands on. Anyone who’s seen a crack growing across their sidewalk, though — or across their countertop, or their grandfather’s desk — realizes there are things we don’t understand about how things break. And why they break that way. And, more generally, there’s a lot we don’t understand about how complicated “natural” shapes form. The big interest in this is how long molecules crumple up. The shapes of these govern how they behave, and it’d be nice to understand that more.
I was embarrassed, on looking at old Pi Day Reading the Comics posts, to see how often I observed there were fewer Pi Day comics than I expected. There was not a shortage this year. This even though if Pi Day has any value it’s as an educational event, and there should be no in-person educational events while the pandemic is still on. Of course one can still do educational stuff remotely, mathematics especially. But after a year of watching teaching on screens and sometimes doing projects at home, it’s hard for me to imagine a bit more of that being all that fun.
But Pi Day being a Sunday did give cartoonists more space to explain what they’re talking about. This is valuable. It’s easy for the dreadfully online, like me, to forget that most people haven’t heard of Pi Day. Most people don’t have any idea why that should be a thing or what it should be about. This seems to have freed up many people to write about it. But — to write what? Let’s take a quick tour of my daily comics reading.
Tony Cochran’s Agnes starts with some talk about Daylight Saving Time. Agnes and Trout don’t quite understand how it works, and get from there to Pi Day. Or as Agnes says, Pie Day, missing the mathematics altogether in favor of the food.
Scott Hilburn’s The Argyle Sweater is an anthropomorphic-numerals joke. It’s a bit risqué compared to the sort of thing you expect to see around here. The reflection of the numerals is correct, but it bothered me too.
Georgia Dunn’s Breaking Cat News is a delightful cute comic strip. It doesn’t mention mathematics much. Here the cat reporters do a fine job explaining what Pi Day is and why everybody spent Sunday showing pictures of pies. This could almost be the standard reference for all the Pi Day strips.
Bill Amend’s FoxTrot is one of the handful that don’t mention pie at all. It focuses on representing the decimal digits of π. At least within the confines of something someone might write in the American dating system. The logic of it is a bit rough but if we’ve accepted 3-14 to represent 3.14, we can accept 1:59 as standing in for the 0.00159 of the original number. But represent 0.0015926 (etc) of a day however you like. If we accept that time is continuous, then there’s some moment on the 14th of March which matches that perfectly.
Jef Mallett’s Frazz talks about the eliding between π and pie for the 14th of March. The strip wonders a bit what kind of joke it is exactly. It’s a nerd pun, or at least nerd wordplay. If I had to cast a vote I’d call it a language gag. If they celebrated Pi Day in Germany, there would not be any comic strips calling it Tortentag.
Steenz’s Heart of the City is another of the pi-pie comics. I do feel for Heart’s bewilderment at hearing π explained at length. Also Kat’s desire to explain mathematics overwhelming her audience. It’s a feeling I struggle with too. The thing is it’s a lot of fun to explain things. It’s so much fun you can lose track whether you’re still communicating. If you set off one of these knowledge-floods from a friend? Try to hold on and look interested and remember any single piece anywhere of it. You are doing so much good for your friend. And if you realize you’re knowledge-flooding someone? Yeah, try not to overload them, but think about the things that are exciting about this. Your enthusiasm will communicate when your words do not.
Michael Jantze’s Studio Jantze ran on Monday instead, although the caption suggests it was intended for Pi Day. So I’m including it here. And it’s the last of the strips sliding the day over to pie.
But there were a couple of comic strips with some mathematics mention that were not about Pi Day. It may have been coincidence.
Sandra Bell-Lundy’s Between Friends is of the “word problem in real life” kind. It’s a fair enough word problem, though, asking about how long something would take. From the premises, it takes a hair seven weeks to grow one-quarter inch, and it gets trimmed one quarter-inch every six weeks. It’s making progress, but it might be easier to pull out the entire grey hair. This won’t help things though.
Darby Conley’s Get Fuzzy is a rerun, as all Get Fuzzy strips are. It first (I think) ran the 13th of September, 2009. And it’s another Infinite Monkeys comic strip, built on how a random process should be able to create specific outcomes. As often happens when joking about monkeys writing Shakespeare, some piece of pop culture is treated as being easier. But for these problems the meaning of the content doesn’t count. Only the length counts. A monkey typing “let it be written in eight and eight” is as improbable as a monkey typing “yrg vg or jevggra va rvtug naq rvtug”. It’s on us that we find one of those more impressive than the other.
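If you’d like to verify that the scrambled string is an honest transformation, it’s just ROT13 of the plain one. The two strings have the same length, so on an equally-likely-keystrokes model a monkey types either with the same probability:

```python
import codecs

plain = "let it be written in eight and eight"
scrambled = codecs.encode(plain, "rot13")   # rotate each letter 13 places

print(scrambled)                      # yrg vg or jevggra va rvtug naq rvtug
print(len(scrambled) == len(plain))   # True: equally improbable to type at random
```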
And this wraps up my Pi Day comic strips. I don’t promise that I’m back to reading the comics for their mathematics content regularly. But I have done a lot of it, and figure to do it again. All my Reading the Comics posts appear at this link. Thank you for reading and I hope you had some good pie.
I don’t know how Andertoons didn’t get an appearance here.
I regret not having the time or energy to write something original about π for today. I hope you’ll accept this offering of past Reading the Comics posts covering the day, and some of my other π-related writings:
And then there’s comic strips. I seem to complain every year that there are fewer Pi Day comic strips than I expected, which invites the question of just what I expect. Here’s, as best I can tell, the actual record:
I have not yet read today’s comics, so don’t know what they’ll offer. We shall see! Also, I apologize but some of the comics may have been removed from GoComics or Comics Kingdom, and so the links may be dead. I’m not happy about that. But if I wanted the essays discussing these strips to stay permanently sensible I’d have posted the comics on my own web site.
The post is a collection of titles and brief descriptions. Some of them are general-interest books, such as one about the Inca system of knotted strings for recording numbers, or about how non-Euclidean geometries work. Others are textbooks or histories or biographies. And some are research monographs or other highly specialized work.
The Playful Math Education Blog Carnival is a collection of posts about mathematics that are educational or recreational or delightful or just fun. This isn’t an exclusive or. There’s a good chance at least some posts will interest you. Some may be useful if you ever need to teach or communicate mathematics to an audience.
A toot on Mathstodon made me aware of this. It’s a listing, and brief description, of 243 theorems, as compiled by Oliver Knill. As the title implies they’re all intended to be fundamental theorems of some area of mathematics.
Many areas of mathematics have something called their Fundamental Theorem. The one that comes first to my mind is always the Fundamental Theorem of Calculus. That one connects derivatives and indefinite integrals in a way that saves a lot of work. But also commonly in my mind are the Fundamental Theorem of Algebra, which assures one of how many roots a polynomial should have, and the Fundamental Theorem of Arithmetic, about factoring counting numbers into primes.
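Stated compactly — my notation here, not Knill’s — those three run like so:

```latex
% Fundamental Theorem of Calculus, in one common form:
\int_a^b f'(x)\,\mathrm{d}x = f(b) - f(a)

% Fundamental Theorem of Algebra: a polynomial of degree n >= 1
% with complex coefficients has exactly n complex roots, counted
% with multiplicity, so it factors completely:
p(z) = c\,(z - r_1)(z - r_2)\cdots(z - r_n)

% Fundamental Theorem of Arithmetic: every integer n > 1 has a
% prime factorization that is unique up to the order of factors:
n = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}
```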
The list does not stop there. And it gets into areas where “Fundamental Theorem Of ___ ” is not the common phrasing. They are, where I know something about the area, certainly core, fundamental theorems as promised, though. Or important mathematical principles, such as the pigeon-hole principle. It’s worth skimming around; even if you don’t know anything about the area, Knill provides some context, so you can understand why this might be of interest.
And then after the many theorems Knill provides some thoughts about why these theorems. What makes a theorem “fundamental”. This is something which shows off how culturally dependent and human the construction of mathematics is. And then, from page 147, a set of short lecture notes about the history of mathematics. Even if your eyes glaze over at torsion groups, it’s worth going into those notes at the end.
I hadn’t quite intended it, but February was another low-power month here. No big A-to-Z project and no resumption of Reading the Comics. The high points were sharing things that I’d seen elsewhere, and a mathematics problem that occurred to me while making tea. Very low-scale stuff. Still, I like to check on how that’s received.
I did put together seven posts for February — the same as January — and here’s a list of them in descending order of popularity:
I assume the essay setting out the tea question was more popular than the answer because it had a week more to pick up readers. That or people reading the answer checked back on what the question was. It couldn’t be that people are that uninterested in my actually explaining a mathematics thing.
That’s it for relative popularity. How about for total readership?
I had expected readership to continue declining, since I’m publishing fewer things and having my name out there seems to matter. But the decline’s less drastic than I expected. There were 2,167 page views here in February. But in the twelve months from February 2020 through January 2021? I had a mean of 2,137.4 page views, and a median of 2,044.5. That is, I’m still on the high side of my popularity.
There were 1,576 logged unique visitors in February. In the twelve months leading up to that the mean was 1,480.7 unique visitors, and the median 1,395.5.
The figures look more impressive if you rate them by number of postings. In that case in February I gathered 309.6 views per posting, way above the mean of 157.9 and median of 135.6. There were also 225.1 unique visitors per posting, again way above the running mean of 109.9 and median of 90.7.
I’ll dig unpopularity out of any set of numbers, though. There were only 47 likes granted here in February, down from the running mean of 55.8 and median of 55.5. That is still 6.7 likes per posting, above the mean of 3.9 and median of 4.0, but it’s still sparse likings. There were a hearty 39 comments given — my highest number since October 2018 — and that’s well above the mean of 17.0 and median of 18. Per posting, that’s 5.6 comments per posting, the highest I have since I started calculating this figure back in July of 2018. The mean and median comments per posting, for the twelve months leading up to this, were both 1.2.
WordPress’s insights panel tells me I published seven things in February, which matches my experience. I still can’t explain the discrepancy back in January. It says also that I published 3,440 words over February, my quietest month since I started tracking those numbers. It put my average post at 590 words for February, and 573.3 words for the whole year to date.
I start March, if WordPress is reliable, having gathered 126,829 views from 73,273 logged unique visitors. This after 1,595 posts in total.
If you have a WordPress account you can add me to your Reader by clicking the “Follow Nebusresearch” button on this page. I’ve also re-enabled the “Follow NebusResearch By E-mail” option, for people who want to see posts before I’ve fixed the typos. The typos will never be fixed. Every time an author looks at an old blog post there are three more typos, even if they’ve corrected the typos before.
The problem I’d set out last week: I have a teapot good for about three cups of tea. I want to put milk in just the once, before the first cup. How much should I drink before topping up the cup, to have the most milk at the end?
I have expectations. Some of this I know from experience, doing other problems where things get replaced at random. Here, tea or milk particles get swallowed at random, and replaced with tea particles. Yes, ‘particle’ is a strange word to apply to “a small bit of tea”. But it’s not like I can call them tea molecules. “Particle” will do and stop seeming weird someday.
Random replacement problems tend to be exponential decays. That I know from experience doing problems like this. So if I get an answer that doesn’t look like an exponential decay I’ll doubt it. I might be right, but I’ll need more convincing.
I also get some insight from extreme cases. We can call them reductios. Here “reductio” as in the word we usually follow with “ad absurdum”. Make the case ridiculous and see if that offers insight. The first reductio is to suppose I drink the entire first cup down to the last particle, then pour new tea in. By the second cup, there’s no milk left. The second reductio is to suppose I drink not a bit of the first cup of milk-with-tea. Then I have the most milk preserved. It’s not a satisfying break. But it leads me to suppose the most milk makes it through to the end if I have a lot of small sips and replacements of tea. And to look skeptically if my work suggests otherwise.
So that’s what I expect. What actually happens? Here, I do a bit of reasoning. Suppose that I have a mug. It can hold up to 1 unit of tea-and-milk. And the teapot, which holds up to 2 more units of tea-and-milk. What units? For the mathematics, I don’t care.
I’m going to suppose that I start with some amount — call it $a$ — of milk. $a$ is some number between 0 and 1. I fill the cup up to full, that is, 1 unit of tea-and-milk. And I drink some amount of the mixture. Call the amount I drink $x$. It, too, is between 0 and 1. After this, I refill the mug up to full, so, putting in $x$ units of tea. And I repeat this until I empty the teapot. So I can do this $\frac{2}{x}$ times.
I know you noticed that I’m short on tea here. The teapot should hold 3 units of tea. I’m only pouring out $2 + (1 - a) = 3 - a$ of them. I could be more precise by refilling the mug $\frac{2 + a}{x}$ times. I’m also going to suppose that I refill the mug with amount $x$ of tea a whole number of times. This sounds necessarily true. But consider: what if I drank and re-filled three-quarters of a cup of tea each time? How much tea is poured that third time?
I make these simplifications for good reasons. They reduce the complexity of the calculations I do without, I trust, making the result misleading. I can justify it too. I don’t drink tea from a graduated cylinder. It’s a false precision to pretend I do. I drink (say) about half my cup and refill it. How much tea I get in the teapot is variable too. Also, I don’t want to do that much work for this problem.
In fact, I’m going to do most of the work of this problem with a single drawing of a square. Here it is.
So! I start out with $1 - a$ units of tea in the mixture. After drinking $x$ units of milk-and-tea — which carries away $ax$ units of milk — what’s left is $a(1 - x)$ units of milk in the mixture.
How about the second refill? The process is the same as the first refill. But where, before, there had been $a$ units of milk in the tea, now there are only $a(1 - x)$ units in. So that horizontal strip is a little narrower is all. The same reasoning applies and so, after the second refill, there’s $a(1 - x)^2$ milk in the mixture.
If you nodded to that, you’d agree that after the third refill there’s $a(1 - x)^3$. And are pretty sure what happens at the fourth and fifth and so on. If you didn’t nod to that, it’s all right. If you’re willing to take me on faith we can continue. If you’re not, that’s good too. Try doing a couple drawings yourself and you may convince yourself. If not, I don’t know. Maybe try, like, getting six white and 24 brown beads, stir them up, take out four at random. Replace all four with brown beads and count, and do that several times over. If you’re short on beads, cut up some paper into squares and write ‘B’ and ‘W’ on each square.
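That bead experiment is also easy to hand to a computer. Here’s a sketch in Python, using the 6-white/24-brown setup described above. The comparison figure treats each round as keeping, on average, 26/30 of the white beads — the same multiply-by-a-fixed-fraction reasoning — which is exactly right in expectation:

```python
import random

random.seed(20210301)  # any fixed seed, just for repeatability

WHITE, BROWN = "W", "B"

def bead_trial(rounds=5):
    """Start with 6 white (milk) and 24 brown (tea) beads.
    Each round: draw 4 beads at random, replace them with brown.
    Return how many white beads are left at the end."""
    beads = [WHITE] * 6 + [BROWN] * 24
    for _ in range(rounds):
        # Shuffling and overwriting the first four is the same as
        # drawing four at random without replacement.
        random.shuffle(beads)
        beads[:4] = [BROWN] * 4
    return beads.count(WHITE)

TRIALS = 2000
average_white = sum(bead_trial() for _ in range(TRIALS)) / TRIALS

# Each round, any given white bead avoids the draw with chance 26/30,
# so the expected white count after five rounds is:
expected_white = 6 * (26 / 30) ** 5
```

Run enough trials and the average settles near the expected value, which is the whole point of the drawing-a-square argument.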
But anyone comfortable with algebra can see how to reduce this. The amount of milk remaining after $j$ refills is going to be

$$a \left(1 - x\right)^j$$
How many refills does it take to run out of tea? That we knew from above: it’s $\frac{2}{x}$ refills. So my last full mug of tea will have left in it

$$a \left(1 - x\right)^{2/x}$$
units of milk.
Anyone who does differential equations recognizes this. It’s the discrete approximation of the exponential decay curve. Discrete, here, because we take out some finite but nonzero amount of milk-and-tea, $x$, and replace it with the same amount of pure tea.
Now, again, I’ve seen this before so I know its conclusions. The most milk will make it to the end if $x$ is as small as possible. The best possible case would be if I drink and replace an infinitesimal bit of milk-and-tea each time. Then the last mug would end with $a \cdot e^{-2}$ of milk. That’s $e$ as in the base of the natural logarithm. Every mathematics problem has an $e$ somewhere in it and I’m not exaggerating much. All told this would be about 13 and a half percent of the original milk.
Drinking more realistic amounts, like, half the mug before refilling, makes the milk situation more dire. Replacing half the mug at a time means the last full mug has only one-sixteenth what I started with. Drinking a quarter of the mug and replacing it lets about one-tenth the original milk survive.
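Those numbers are quick to check. If a fraction $x$ of the mug is drunk and replaced each time, with tea enough for $\frac{2}{x}$ refills, the last mug keeps a fraction $(1 - x)^{2/x}$ of the original milk. A few lines of Python — the function name is mine:

```python
import math

def milk_fraction(x):
    """Fraction of the original milk left in the last full mug,
    if a fraction x of the mug is drunk and replaced each time."""
    return (1 - x) ** (2 / x)

# Half the mug at a time: four refills, (1/2)^4 = 1/16.
half = milk_fraction(0.5)

# A quarter at a time: eight refills, (3/4)^8, about one-tenth.
quarter = milk_fraction(0.25)

# Tiny sips approach the limit e^(-2), about 13.5 percent.
tiny = milk_fraction(0.0001)
limit = math.exp(-2)
```

The smaller the sip, the closer the survival fraction creeps up toward that $e^{-2}$ ceiling.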
But all told the lesson is clear. If I want milk in the last mug, I should put some in each refill. Putting all the milk in at the start and letting it dissolve doesn’t work.
A post on Mathstodon made me aware there’s a bit of talk about iceberg shapes. Particularly that one of the iconic photographs of an iceberg above-and-below water is an imaginative work. A real iceberg wouldn’t be stable in that orientation. Which, I’ll admit, isn’t something I had thought about. I also hadn’t thought about the photography challenge of getting a clear picture of something in sunlight and in water at once. There was a lot I hadn’t thought about. In my defense, I spend a lot of time noticing when comic strips have a character complain about the New Math.
I’ve been taking milk in my tea lately. I have a teapot good for about three cups of tea. So that’s got me thinking about how to keep the most milk in the last of my tea. You may ask why I don’t just get some more milk when I refill the cup. I answer that if I were willing to work that hard I wouldn’t be a mathematician.
It’s easy to spot the lowest amount of milk I could have. If I drank the whole of the first cup, there’d be only whatever milk was stuck by surface tension to the cup for the second. And so even less than that for the third. But if I drank half a cup, poured more tea in, drank half again, poured more in … without doing the calculation, that’s surely more milk for the last full cup.
So what’s the strategy for the most milk I could get in the final cup? And how much is in there?
Rosenbluth held a PhD in physics (and was an Olympics-qualified fencer). Her postdoctoral work was with the Atomic Energy Commission, bringing her to a position at Los Alamos National Laboratory in the early 1950s. And to a moment in computer science that touches very many people’s work, my own included. This is in what we call Metropolis-Hastings Markov Chain Monte Carlo.
Monte Carlo methods are numerical techniques that rely on randomness. The name references the casinos. Markov Chain refers to techniques that create a sequence of things. Each thing exists in some set of possibilities. If we’re talking about Markov Chain Monte Carlo this is usually an enormous set of possibilities, too many to deal with by hand, except for little tutorial problems. The trick is that what the next item in the sequence is depends on what the current item is, and nothing more. This may sound implausible — when does anything in the real world not depend on its history? — but the technique works well regardless. Metropolis-Hastings is a way of finding states that meet some condition well. Usually this is a maximum, or minimum, of some interesting property. The Metropolis-Hastings rule makes the chance of going to an improved state — one with more of whatever property we like — equal to 1, a certainty. The chance of going to a worsened state, with less of the property, is not zero. The worse the new state is, the less likely it is, but it’s never zero. The result is a sequence of states which, most of the time, improve whatever it is you’re looking for. It sometimes tries out some worse fits, in the hopes that this leads us to a better fit, for the same reason you sometimes have to go downhill to reach a larger hill. The technique works quite well at finding approximately-optimum states when it’s hard to find the best state, but easy to judge which of two states is better. Also when you can have a computer do a lot of calculations, because it needs a lot of calculations.
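The acceptance rule is short enough to sketch. This is a toy illustration in Python — emphatically not Rosenbluth’s MANIAC code, and the peak-at-3 function is an arbitrary choice of mine. It climbs toward the maximum of a one-variable function, always accepting improvements and sometimes, with shrinking probability, accepting steps downhill:

```python
import math
import random

random.seed(1953)  # fixed seed so the run is repeatable

def quality(x):
    """The property we want more of; a toy function peaking at x = 3."""
    return -(x - 3.0) ** 2

def metropolis(start=0.0, steps=5000, step_size=0.5):
    x = start
    best_x, best_q = x, quality(x)
    for _ in range(steps):
        proposal = x + random.uniform(-step_size, step_size)
        delta = quality(proposal) - quality(x)
        # Improvements (delta >= 0) are accepted with certainty;
        # worsenings are accepted with probability exp(delta) < 1,
        # smaller the worse the proposal -- but never zero.
        if delta >= 0 or random.random() < math.exp(delta):
            x = proposal
        if quality(x) > best_q:
            best_x, best_q = x, quality(x)
    return best_x

found = metropolis()
```

The occasional accepted downhill step is what keeps the walk from getting stuck on a small local hill.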
So here we come to Rosenbluth. She and her then-husband, according to an interview he gave in 2003, were the primary workers behind the 1953 paper that set out the technique. And, particularly, she wrote the MANIAC computer program which ran the algorithm. It’s important work, and an uncounted number of mathematicians, physicists, chemists, biologists, economists, and other planners have built on it. She would go on to study statistical mechanics problems, in particular simulations of molecules. It’s still a rich field of study.
Both the Klein bottle and the Möbius strip have many possible appearances, for about the same reason there are many kinds of trapezoids or octagons or whatnot. Möbius strips are easy enough to make in real life. Klein bottles, not so; the shape needs four dimensions of space and we just don’t have them. We’ll represent it with a shape that loops back through itself, but a real Klein bottle wouldn’t do that, for the same reason a wireframe cube’s edges don’t intersect the way the lines of its photograph do.
It makes a good wireframe shape, though. I’m surprised not to see more playground equipment using it.
This is not the whole of her work, though my understanding is she’d be worth noticing even if it were. Part of the greatness of the translation was putting Newton’s mathematics — which he had done as geometric demonstrations — into the calculus of the day. The experts on In Our Time’s podcast argue that she did a good bit of work advancing the state of calculus in doing this. She’d also done a good bit of work on the problem of colliding bodies.
A major controversy was, in modern terms, whether momentum and kinetic energy are different things and, if they are different, which one collisions preserve. Châtelet worked on experiments — inspired by ideas of Gottfried Wilhelm Leibniz — to show kinetic energy was its own thing and was the important part of collisions. We today understand both momentum and energy are conserved, but we have the advantage of her work and the people influenced by her work to draw on.
She’s also renowned for a paper about the nature and propagation of fire, submitted anonymously for the Académie des Sciences’s 1737 Grand Prix. It didn’t win — Leonhard Euler’s did — but her paper and her lover Voltaire’s papers were published.
Châtelet was also surprisingly connected to the nascent mathematics and physics scene of the time. She had ongoing mathematical discussions with Pierre-Louis Maupertuis, of the principle of least action; Alexis Clairaut, who calculated the return of Halley’s Comet; Samuel König, author of a theorem relating systems of particles to their center of mass; and Bernard de Fontenelle, perpetual secretary of the Académie des Sciences.
So for those interested in the history of mathematics and physics, and of women who are able to break through social restrictions to do good work, the podcast is worth a listen.
I spent much of the time waiting for a mention of Le Chatelier’s principle, which never came. This because Le Chatelier’s principle — about the tendency of a system in equilibrium to resist changes — is named for Henry Louis Le Chatelier, a late 19th/early 20th century chemist with, so far as I know, no relation to Émilie du Châtelet. I hope this spares you the confusion I felt.
I did not abandon my mathematics blog in January. I felt like I did, yes. But I posted seven essays, by my count. Six, by the WordPress statistics “Insight” panel. I have no idea what post it thinks doesn’t count, but this does shake my faith in whatever Insights it’s supposed to give me. On my humor blog, which had a post a day, it correctly logs 31. I haven’t noticed other discrepancies either. And it’s not like any of my seven January posts was a reblog which might count differently. One quoted a tweet, but that’s nothing unusual.
I’ve observed that my views-per-post tend to be pretty uniform. The implication then is that the more I write, the more I’m read, which seems reasonable. So what would I expect from the most short-winded month I’ve had in at least two and a half years?
So, this might encourage some bad habits in me. There were 2,611 page views here in January 2021. That’s above December’s total, and comfortably above the twelve-month running mean of 2,039.5. It’s also above the twelve-month running median of 2,014.5. This came from 1,849 unique visitors. That’s also above the twelve-month running mean of 1,405.8 unique visitors, and the running median of 1,349 unique visitors.
Where things fell off a bit are in likes and comments. There were 41 likes given in January 2021, below the running mean of 55.2 and running median of 55.5. There were 13 comments received, below the running mean of 16.5 and running median of 18.
Looked at per-post, though, these are fantastic numbers. 373.0 views per posting, crushing the running mean of 138.8 and running median of 135.6 visitors per posting. (And I know these were not all views of January 2021-dated posts.) There were 264.1 unique visitors per posting, similarly crushing the running mean of 95.8 and running median of 90.7 unique visitors per posting.
Even the likes and comments look good, rated that way. There were 5.9 likes per posting in January, above the running mean and median of 3.7 likes per posting. There were 1.9 comments per posting, above the running mean of 1.1 and median of 1.0 per posting. The implication is clear: people like it when I write less.
It seems absurd to list the five most popular posts from January when there were seven total, and two of them were statistics reviews. So I’ll list them all, in descending order of popularity.
WordPress claims that I published 4,231 words in January. Since the Insights panel thinks I published six things, that’s an average of 705 words per post. Since I know I published seven things, that’s an average of 604.4 words per post. I don’t know how to reconcile all this. WordPress put my 2020 average at 672 words per posting, for what that’s worth.
If I can trust anything WordPress tells me, I started February 2021 with 1,588 posts written since I started this in 2011. They’d drawn a total of 124,662 views from 71,697 logged unique visitors.
My love read a thread about the < and > signs, and mnemonics people had learned to tell which was which. And my love wondered, is a mnemonic needed? The symbol is wider on the side with the larger quantity; that’s what it means, right? Why imagine an alligator that’s already swallowed the smaller and is ready to eat the larger? In my elementary school it was goldfish, not alligators. Much easier to draw them in.
All right, but just because an interpretation seems obvious doesn’t mean it is. The questions are, who introduced the < and > symbols to mathematics, and what were they thinking?
And here we get complications. The symbols first appear, meaning what they do today, in Artis Analyticae Praxis ad Aequationes Algebraicas Resolvendas (“The Analytical Art by which Algebraic Equations can be Resolved”). This is a book, by Thomas Harriot, published in 1631. Thomas Harriot was one of the great English mathematicians of the late 16th and early 17th centuries. He worked on the longitude problem, on optics, on astronomy. Harriot’s observations are our first record of sunspots. He also observed what we now call Halley’s Comet, leaving records used to work out its orbit. And he worked on how to solve equations, in ways that look at least recognizably close to what we do today.
There is a tradition that holds Harriot drew these symbols from the arm markings on a Native American. Harriot did sail to the New World at least once. He was on Walter Raleigh’s 1585-86 expedition to Virginia and observed the solar eclipse of April 1585. This was a rare chance to calculate the longitude of a ship at sea. So that’s possible. But there is also an argument that Harriot (or editor) drew from the example of the equals sign.
The = sign we first see in the mid-16th century, written by Robert Recorde, another of the great English mathematicians. Recorde did write, in The Whetstone of Witte (1557) that he used parallel lines of a common length because no two things could be more equal. Good mnemonic there. It seems Harriot (or editor) interpreted the common distance between the lines in the equals sign as the thing kept equal. So, on the side of the symbol with the greater number, make the distance between lines greater. On the lower-number’s side, make the distance between lines smaller. Which is another useful mnemonic for the symbol, if you need one.
It’s not an inevitable scheme. William Oughtred also had symbols for less-than and greater-than. Oughtred’s another vaguely familiar name in mathematics symbols. He gave us the symbol for multiplication, and and for the trig functions. He also pioneered slide rules. Oughtred’s symbols look like a block-letter U set on its side, with the upper leg longer than the lower. The vertical stroke and the shorter horizontal stroke would be on the left, to represent the left being greater than the right. The vertical stroke and shorter horizontal stroke would be on the right, for the left being less than the right. That is, the “open” side would face the smaller of the numbers, opposite to what we do with < and >.
And that seems to be as much as can be definitely said. If I’m reading right, we don’t have Harriot’s (or editor’s) statement of what inspired these symbols. We have guesses that seem reasonable, but that might only seem reasonable because we’ve brought our own interpretations to it. I’d love to know if there’s better information available.
My friend ChefMongoose pointed out this probability question. As with many probability questions, it comes from a dice game. Here, Yahtzee, based on rolling five dice to make combinations. I’m not sure whether my Twitter problems will get in the way of this embedding working; we’ll see.
Probability help please! You are playing Yahtzee against your insanely competitive spouse. You have two rolls left. You’re trying to get three of a kind. Is it better to commit and roll three dice here? Or split it and roll one die? — Christopher Yost.
Of the five dice, two are showing 1’s; two are showing 2’s; and there’s one last die that’s a 3.
As with many dice questions you can in principle work this out by listing all the possible combinations of every possible outcome. A bit of reasoning takes much less work, but you have to think through the reasons.
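Here’s that bit of reasoning, done as a little arithmetic in Python. This is my own back-of-envelope accounting, not ChefMongoose’s or Yost’s, and it slightly under-counts the three-dice option by ignoring ways the re-rolled dice could form a triple all on their own:

```python
# You hold 1 1 2 2 3, with two rolls left, wanting three of a kind.

# Option A: keep both pairs, re-roll only the single die, twice.
# It completes a triple if it ever shows a 1 or a 2: 2 faces of 6.
p_match_once = 2 / 6
p_one_die = 1 - (1 - p_match_once) ** 2   # at least one hit in two tries

# Option B: keep one pair (say the 2s) and re-roll three dice,
# keeping any 2 that appears between the rolls.  Each of the three
# dice gets two chances to show a 2.
p_die_hits = 1 - (5 / 6) ** 2             # one die shows a 2 in two tries
p_three_dice = 1 - (1 - p_die_hits) ** 3  # at least one of the three does

# p_three_dice is an under-count: the three re-rolled dice might
# also match one another for a triple of some other number.
```

Even under-counted, the three-dice option comes out ahead: about a 67 percent chance against the one-die option’s 5/9, or about 56 percent.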
I like starting the year with a look at the past year’s readership. Really what I like is sitting around waiting to see if WordPress is going to provide any automatically generated reports on this. The first few years I was here it did, this nice animated video with fireworks corresponding to posts and how they were received. That’s been gone for years and I suppose isn’t ever coming back. WordPress is run by a bunch of cowards.
But I can still do a look back the old-fashioned way, like I do with the monthly recaps. There’s just fewer years to look back on, and less reliable trends to examine.
2020 was my ninth full year of mathematics blogging. (I reach my tenth anniversary in September and no, I haven’t any idea what I’ll do for that. Most likely forget.) It was an unusual one in that I set aside what’s been my largest gimmick, the Reading the Comics essays, in favor of my second-largest gimmick, the A-to-Z. It’s the first year I’ve done an A-to-Z that didn’t have a month or two with a posting every day. Also along the way I slid from having a post every Sunday come what may to having a post every Wednesday, although usually a Monday and a Friday also. Everyone claims it helps a blog to have a regular schedule, although I don’t know whether the particular day of the week counts for much. But how did all that work out for me?
So, I had a year that nearly duplicated 2019. There were 24,474 page views in 2020, down insignificantly from 2019’s 24,662. There were 16,870 unique visitors in 2020, up but also insignificantly from the 16,718 visiting in 2019. The number of likes continued to drift downward, from 798 in 2019 to 662 in 2020. My likes peaked in 2015 (over 3200!) and have fallen off ever since in what sure looks like a Poisson distribution to my eye. But the number of comments — which also peaked in 2015 (at 822) — actually rose, from 181 in 2019 to 198 in 2020.
There’s two big factors in my own control. One is when I post and, as noted, I moved away from Sunday posts midway through the year. The other is how much I post. And that dropped: in 2019 I had 201 posts published. In 2020 I posted only 178.
I thought of 2020 as a particularly longwinded year for me. WordPress says I published only 118,941 words, though, for an average of 672 words per posting. That’s my fewest words since 2014, and my shortest words-per-posting for a year since 2013. Apparently what throws my impression off is all those posts that just point to earlier posts.
And what was popular among posts this year? Rather than give even more attention to how many kinds of trapezoid I can think of, I’ll focus just on what were the most popular things posted in 2020. Those were:
I am, first, surprised that so many Reading the Comics posts were among the most-read pieces. I like them, sure, but how many of them say anything that’s relevant once you’ve forgotten whether you read today’s Scary Gary? And yes, I am going to be bothered until the end of time that I was inconsistent about including the # symbol in the Playful Math Education Blog Carnival posts.
I fell off checking what countries sent me readers, month by month. I got bored writing an image alt-text of “Mercator-style map of the world, with the United States in dark red and most of the New World, western Europe, South and Pacific Rim Asia, Australia, and New Zealand in a more uniform pink” over and over and over again. But it’s a new year, it’s worth putting some fuss into things. And then, hey, what’s this?
Yeah! I finally got a reader from Greenland! Two page views, it looks like. Here’s the whole list, for the whole world.
United Arab Emirates
Hong Kong SAR China
Macau SAR China
Trinidad & Tobago
U.S. Virgin Islands
Bosnia & Herzegovina
Northern Mariana Islands
This is 141 countries, or country-like constructs, all together. I don’t know how that compares to previous years but I’m sure it’s the first time I’ve had five different countries send me a thousand page views each. That’s all gratifying to see.
So what plans have I got for 2021? And when am I going to get back to Reading the Comics posts? Good questions and I don’t know. I suppose I will pick up that series again, although since I took no notes last week, it isn’t going to be this week. At some time this year I want to do another A-to-Z, but I am still recovering from the workload of the last. Anything else? We’ll see. I am open to suggestions of things people think I should try, though.
This is, at least, a retrocomputing-adjacent piece. I’m looking back at the logic of a common and useful tool from the early-to-mid-80s and why it’s built that way. I hope you enjoy. It has to deal with some of the fussier points about how Commodore 64 computers worked. If you find a paragraph is too much technical fussing for you, I ask you to not give up, just zip on to the next paragraph. It’s interesting to know why something was written that way, but it’s all right to accept that it was and move to the next point.
How Did You Get Computer Programs In The 80s?
When the world and I were young, in the 1980s, we still had computers. There were two ways to get software, though. One was trading cassette tapes or floppy disks with cracked programs on them. (The cracking was taking off the copy-protection.) The other was typing. You could type in your own programs, certainly, just like you can make your own web page just by typing. Or you could type in a program. We had many magazines and books that had programs ready for entry. Some were serious programs, spreadsheets and word processors and such. Some were fun, like games or fractal-generators or such. Some were in-between, programs to draw or compose music or the such. Some added graphics or sound commands that the built-in BASIC programming language lacked. All this was available for the $2.95 cover price, or ten cents a page at the library photocopier. I had a Commodore 64 for most of this era, moving to a Commodore 128 (which also ran Commodore 64 programs) in 1989 or so. So my impressions, and this article, default to the Commodore 64 experience.
These programs all had the same weakness. You had to type them in. You can expect to make errors. If the program was written in BASIC you had a hope of spotting errors. The BASIC programming language uses common English words for its commands. Their grammar is not English, but it’s also very formulaic, and not hard to pick up. One has a chance of spotting mistakes if it’s 250 PIRNT "SUM; " S one typed.
But many programs were distributed as machine language. That is, the actual specific numbers that correspond to microchip instructions. For the Commodore 64, and most of the eight-bit home computers of the era, this was the 6502 microchip. (The 64 used a variation, the 6510. The differences between the 6502 and 6510 don’t matter for this essay.) Machine language had advantages, making the programs run faster, and usually able to do more things than BASIC could. But a string of numbers is only barely human-readable. Oh, you might in time learn to recognize the valid microchip instructions. But it is much harder to spot the mistakes on entering 32 255 120. That was meant to be 32 210 255, a valid command on any eight-bit Commodore computer, one that would have the computer print something. The transposition errors break it.
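For the curious, those three numbers decode like so. This demonstration is mine, but the facts are standard 6502 lore: opcode 32 is JSR (jump to subroutine), and the two operand bytes are an address stored low byte first.

```python
# The intended bytes, 32 210 255: opcode 32 is the 6502's JSR
# ("jump to subroutine"); the next two bytes are the target
# address, stored low byte first.
opcode, low, high = 32, 210, 255
address = low + 256 * high    # = 65490, which is $FFD2 in hex
hex_address = hex(address)

# $FFD2 is the Commodore Kernal's character-output routine, the
# standard way a machine language program prints a character.
```

Mistype or swap any of those numbers and the computer jumps somewhere else entirely, which is why the string is so unforgiving.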
What Was MLX and How Did You Use It?
The magazines came up with tools to handle this. In the 398-page(!) December 1983 issue of Compute!, my favorite line of magazines introduced MLX. This was a program, written in BASIC, which let you enter machine language programs. Charles Brannon has the credit for writing the article which introduced it. I assume he also wrote the program, but could be mistaken. I’m open to better information. Other magazines had other programs to do the same work; I knew them less well. MLX formatted machine language programs to look like this:
What did all this mean, though? These were lines you would enter in while running MLX. Before the colon was a location in memory. The numbers after the colon — the entries, I’ll call them — are six numbers of machine language, one number to go into each memory cell. So, the number 169 was destined to go into memory location 49152. The number 002 would go into memory location 49153. The number 141 would go into memory location 49154. And so on; 000 would go into memory location 49158, 141 into 49159, 179 into 49160. 002 would go into memory location 49164; 141 would go into memory location 49170. And so on.
MLX would prompt you with the line number, the 49152 or 49158 or 49164 or so on. Machine language programs could go into almost any memory location. You had to tell it where to start. 49152 was a popular location for Commodore 64 programs. It was the start of a nice block of memory not easily accessed except by machine language programs. Then you would type in the entries, the numbers that follow. This was a reasonably efficient way to key this stuff in. MLX automatically advanced the location in memory and would handle things like saving the program to tape or disk when you were done.
The alert reader notices, though, that there are seven entries after the colon in each line. That seventh number is the checksum, the guard that Compute! and Compute!’s Gazette put against typos. MLX did a calculation based on the memory location and the first six numbers of the line. If the result was not the seventh number on the line, then there was an error somewhere. You had to re-enter the line to get it right.
The thing I’d wondered, and finally got curious enough to explore, was how it calculated this.
What Was The Checksum And How Did It Work?
Happily, Compute! and Compute!’s Gazette published MLX in almost every issue, so it’s easy to find. You can see it, for example, on page 123 of the October 1985 issue of Compute!’s Gazette. And MLX was itself a BASIC program. There are quirks of the language, and its representation in magazine print, that take time to get used to. But one can parse it without needing much expertise. One important thing is that most Commodore BASIC commands didn’t need spaces after them. For an often-used program like this they’d skip the spaces. And the : symbol denoted the end of one command and start of another. So, for example, one learns that PRINTCHR$(20):IFN=CKSUMTHEN530 means PRINT CHR$(20) : IF N = CKSUM THEN 530.
So how does it work? MLX is, as a program, convoluted. It’s well-described by the old term “spaghetti code”. But the actual calculation of the checksum is done in a single line of the program, albeit one with several instructions. I’ll print it, but with some spaces added in to make it easier to read.
500 CKSUM = AD - INT(AD/256)*256:
FOR I = 1 TO 6:
CKSUM = (CKSUM + A(I)) AND 255:
NEXT
Most of this you have a chance of understanding even if you don’t program. CKSUM is the checksum number. AD is the memory address for the start of the line. A is an array of six numbers, the six numbers of that line of machine language. I is an index, a number that ranges from 1 to 6 here. Each A(I) happens to be a number between 0 and 255 inclusive, because that’s the range of integers you can represent with eight bits.
What Did This Code Mean?
So to decipher all this. Starting off. CKSUM = AD - INT(AD/256)*256. INT means “calculate the largest integer not greater than whatever’s inside”. So, like, INT(50/256) would be 0; INT(300/256) would be 1; INT(600/256) would be 2. What we start with, then, is the checksum is “the remainder after dividing the line’s starting address by 256”. We’re familiar with this, mathematically, as “address modulo 256”.
In any modern programming language, we’d write this as CKSUM = MOD(AD, 256) or CKSUM = AD % 256. But Commodore 64 BASIC didn’t have a modulo command. This structure was the familiar and comfortable enough workaround. But, read on.
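That workaround translates directly into modern code. Here’s a quick Python check that the INT construction and the modulo operation agree; the function name is mine, just for illustration:

```python
# Commodore BASIC has no MOD, so MLX computes AD - INT(AD/256)*256.
# For non-negative addresses, int() truncation matches BASIC's INT.
def basic_mod_256(ad):
    return ad - int(ad / 256) * 256

# A few sample addresses, including the popular 49152 starting spot.
for ad in (50, 300, 600, 49152, 49158):
    assert basic_mod_256(ad) == ad % 256

print(basic_mod_256(49158))  # 6, since 49158 = 192*256 + 6
```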
The next bit was a for/next loop. This would do the steps inside for every integer value of I, starting at 1 and increasing to 6. CKSUM + A(I) has an obvious enough intention. What is the AND 255 part doing, though?
AND, here, is a logic operator. For the Commodore 64, it works on numbers represented as two-byte integers. These have a memory representation of 11111111 11111111 for ‘true’, and 00000000 00000000 for ‘false’. The very leftmost bit, for integers, is a plus-or-minus-sign. If that leftmost bit is a 1, the number is negative; if that leftmost bit is a 0, the number is positive. Did you notice me palming that card, there? We’ll come back to that.
Ordinary whole numbers can be represented in binary too. Like, the number 26 has a binary representation of 00000000 00011010. The number, say, 14 has a binary representation of 00000000 00001110. 26 AND 14 is the number 00000000 00001010, the binary digit being a 1 only when both the first and second numbers have a 1 in that column. This bitwise and operation is also sometimes referred to as masking, as in masking tape. The zeroes in the binary digits of one number mask out the binary digits of the other. (Which does the masking is a matter of taste; 26 AND 14 is the same number as 14 AND 26.)
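Python can show the same masking, writing the numbers out in binary:

```python
# Bitwise AND keeps a 1 only in columns where both numbers have a 1.
a = 0b00011010   # 26
b = 0b00001110   # 14
print(a & b)     # 10, which is binary 00001010
assert (a & b) == (b & a)   # which number does the masking is taste
```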
The binary 00000000 00001010 is the decimal number 10. So you can see that generally these bitwise and operations give you weird results. Taking the bitwise and with 255 is more predictable, though. The number 255 has a bit representation of 00000000 11111111. So what (CKSUM + A(I)) AND 255 does is … give the remainder after dividing (CKSUM + A(I)) by 256. That is, it’s (CKSUM + A(I)) modulo 256.
The formula’s not complicated. To write it in mathematical terms, the calculation is:

\mathrm{cksum} = \left( AD + \sum_{i = 1}^{6} A_i \right) \bmod 256
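The whole thing is a couple of lines in a modern language, too. Here’s a Python sketch of line 500; the function name and the sample entries are made up for illustration:

```python
def mlx_checksum(address, entries):
    """The checksum of MLX's line 500: the address modulo 256, with
    each of the six entries folded in, masked to eight bits each time."""
    cksum = address % 256            # CKSUM = AD - INT(AD/256)*256
    for a in entries:                # FOR I = 1 TO 6
        cksum = (cksum + a) & 255    # CKSUM = (CKSUM + A(I)) AND 255
    return cksum                     # NEXT

# 49152 is exactly 192*256, so the address contributes nothing here.
print(mlx_checksum(49152, [169, 2, 141, 0, 0, 0]))  # 56 = 169+2+141-256
```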
Why Write It Like That?
So we have a question. Why are we calculating a number modulo 256 by two different processes? And in the same line of the program?
We get an answer by looking at the binary representation of 49152, which is 11000000 00000000. Remember that card I just palmed? I had warned that if the leftmost digit there were a 1, the number was understood to be negative. 49152 is many things, none of them negative.
So now we know the reason behind the odd programming choice to do the same thing two different ways. As with many odd programming choices it amounts to technical details of how Commodore hardware worked. The Commodore 64’s logical operators — AND, OR, and NOT — work on variables stored as two-byte integers. Two-byte integers can represent numbers from -32,768 up to +32,767. But memory addresses on the Commodore 64 are indexed from 0 up to 65,535. We can’t use bit masking to do the modulo operation, not on memory locations.
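You can mimic the two-byte signed reading to see the trouble with 49152 directly. A Python sketch, with a made-up helper name:

```python
def as_signed_16(n):
    """Read an unsigned 16-bit value as a two's-complement integer,
    the way the Commodore 64's logical operators would have to."""
    return n - 65536 if n >= 32768 else n

# 49152 is binary 11000000 00000000: the leftmost bit is set, so as
# a two-byte signed integer it cannot be read as a positive number.
print(as_signed_16(49152))   # -16384
assert as_signed_16(32767) == 32767   # the largest value that fits
```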
I have a second question, though. Look at the work inside the FOR loop. It takes the current value of the checksum, adds one of the entries to it, and takes the bitwise AND of that with 255. Why? The value would be the same if we waited until the loop was done to take the bitwise AND. At least, it would be unless the checksum grew to larger than 32,767. The checksum will be the sum of at most seven numbers, none of them larger than 255, though, so that can’t be the constraint. It’s usually faster to do as little inside a loop as possible, so, why this extravagance?
My first observation is that this FOR loop does the commands inside it six times. And logical operations like AND are very fast. The speed difference could not possibly be perceived. There is a point where optimizing your code is just making life harder for yourself.
My second observation goes back to the quirks of the Commodore 64. You entered commands, like the lines of a BASIC program, on a “logical line” that allowed up to eighty tokens. For typing in commands this is the same as the number of characters. Can this line be rewritten so there’s no redundant code inside the for loop, and so it’s all under 80 characters long?
Yes. This line would have the same effect and it’s only 78 characters:

500 CKSUM=AD-INT(AD/256)*256:FORI=1TO6:CKSUM=CKSUM+A(I):NEXT:CKSUM=CKSUMAND255
I don’t have a clear answer. I suspect it’s for the benefit of people typing in the MLX program. In typing that in I’d have trouble not putting in a space between FOR and I, or between CKSUM and AND. Also before and after the TO and before and after AND. This would make the line run over 80 characters and make it crash. The original line is 68 characters, short enough that anyone could add a space here and there and not mess up anything. In looking through MLX, and other programs, I find there are relatively few lines more than 70 characters long. I have found them as long as 76 characters, though. I can’t rule out there being 78- or 79-character lines. They would have to suppose anyone typing them in understands when the line is too long.
There’s an interesting bit of support for this. Compute! also published machine language programs for the Atari 400 and 800. A version of MLX came out for the Atari at the same time the Commodore 64’s came out. Atari BASIC allowed for 120 characters total. And the equivalent line in Atari MLX was:
500 CKSUM=ADDR-INT(ADDR/256)*256:FOR I=1 TO 6:CKSUM=CKSUM+A(I):CKSUM=CKSUM-256*(CKSUM>255):NEXT I
This has a longer name for the address variable. It uses a different way to ensure that CKSUM stays a number between 0 and 255. But the whole line is only 98 characters.
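In Atari BASIC a true comparison evaluates to 1, which is what makes CKSUM-256*(CKSUM>255) work as a clamp. Since each step adds a number no bigger than 255 to a checksum no bigger than 255, the running sum never reaches 512, and one subtraction agrees with the AND 255 masking. A quick Python check of that claim:

```python
# Atari BASIC evaluates a true comparison as 1, so this subtracts
# 256 exactly when the running checksum has overflowed eight bits.
def atari_clamp(cksum):
    return cksum - 256 * (cksum > 255)

# Each step adds at most 255 to a value of at most 255, so the sum
# stays under 512, where one subtraction agrees with AND 255.
for s in range(512):
    assert atari_clamp(s) == (s & 255)
print("subtracting 256 and masking with 255 agree below 512")
```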
We could save more spaces on the Commodore 64 version, though. Commodore BASIC “really” used only the first two characters of a variable name. To write CKSUM is for the convenience of the programmer; to the computer it would be the same if we wrote CK, even for just this one line of code. The only penalty would be confusing the reader who doesn’t remember that CK and CKSUM are the same variable.
And there’s no reason that this couldn’t have been two lines. One line could add up the checksum and a second could do the bitwise AND. Maybe this is all a matter of the programmer’s tastes.
In a modern language this is all quite zippy to code. To write it in Octave or Matlab, with a line given as the eight numbers — address, six entries, checksum — is something like:

function [checksOut] = mlx(oneline)
  cksum = mod(oneline(1), 256);
  for i = 2:7
    cksum = mod(cksum + oneline(i), 256);
  end
  checksOut = (cksum == oneline(8));
end
This is a bit verbose. I want it to be easier to see what work is being done. We could make it this compact:
function [checksOut] = oldmlx(oneline)
checksOut = !(mod(sum(oneline(1:7))-oneline(8), 256));
end
I don’t like compressing my thinking quite that much, though.
But that’s the checksum. Now the question: did it work?
Was This Checksum Any Good?
Since Compute! and Compute!’s Gazette used it for years, the presumptive answer is that it did. The real question, then, is did it work well? “Well” means does it prevent the kinds of mistakes you’re likely to make without demanding too much extra work. We could, for example, eliminate nearly all errors by demanding every line be entered three times and accept only a number that’s entered the same at least two of three times. That’s an incredible typing load. Here? We have to enter one extra number for every six. Much lower load, but it allows more errors through. But the calculation is — effectively — simply “add together all the numbers we typed in, and see if that adds to the expected total”. If it stops the most likely errors, though, then it’s good. So let’s consider them.
The first and simplest error? Entering the wrong line. MLX advanced the memory location on its own. So if you intend to write the line for memory location 50268, and your eye slips and you start entering that for 50274 instead? Or even, reading left to right, going to line 50814 in the next column? Very easy to do. This checksum will detect that nicely, though. Entering one line too soon, or too late, will give a checksum that’s off by 6. If your eye skips two lines, the checksum will be off by 12. The only way the checksum can miss a slip is if you enter a line that’s some multiple of 256 memory locations away. And since each line is six memory locations, that means you have to jump 768 memory locations away. That is 128 lines away. You are not going to make that mistake. (Going from one column in the magazine to the next is a jump of 91 lines. The pages were 8½-by-11 pages, so were a bit easier to read than the image makes them look.)
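The line-slip arithmetic is easy to verify against the checksum rule. A Python sketch; the entries here are made up, since only the change of address matters:

```python
def mlx_checksum(address, entries):
    # MLX's rule: address modulo 256, fold in entries, keep 8 bits.
    cksum = address % 256
    for a in entries:
        cksum = (cksum + a) & 255
    return cksum

entries = [169, 2, 141, 32, 210, 255]   # made-up entries for a line
base = mlx_checksum(49152, entries)

# One line early or late (6 locations) shifts the checksum by 6.
assert mlx_checksum(49158, entries) == (base + 6) % 256
# The first slip it misses: 768 locations, that is, 128 lines, away.
assert mlx_checksum(49152 + 768, entries) == base
```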
How about other errors? You could mis-key, say, 169. But think of the plausible errors. Typing it in as 159 or 196 or 269 would be detected by the checksum. The only one that wouldn’t would be to enter a number that’s equal to 169, modulo 256. So, 425, say, or 681. There is nobody so careless as to read 169 and accidentally type 425, though. In any case, other code in MLX rejects any data that’s not between 0 and 255, so that’s caught before the checksum comes into play.
So it’s safe against the most obvious mistake. And against mis-keying a single entry. Yes, it’s possible that you typed in the whole line right but mis-keyed the checksum. If you did that you felt dumb but re-entered the line. If you even noticed and didn’t just accept the error report and start re-entering the line.
What about mis-keying double entries? And here we have trouble. Suppose that you’re supposed to enter 169, 062 and instead enter 159, 072. They’ll add to the same quantity, and the same checksum. All that’s protecting you is that it takes a bit of luck to make two errors that exactly balance each other. But, then, slipping and hitting an adjacent number on the keyboard is an easy mistake to make.
Worse is entry transposition. If you enter 062, 169 instead you have made no checksum errors. And you won’t even be typing any number “wrong”. At least with the mis-keying you might notice that 169 is a common number and 159 a rare one in machine language. (169 was the command “Load Accumulator”. That is, copy a number into the Central Processing Unit’s accumulator. This was one of three on-chip memory slots. 159 was no meaningful command. It would only appear as data.) Swapping two numbers is another easy error to make.
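Both weaknesses come down to the checksum being, at heart, a plain sum. A Python check of the two examples above:

```python
def mlx_checksum(address, entries):
    # MLX's rule: address modulo 256, fold in entries, keep 8 bits.
    cksum = address % 256
    for a in entries:
        cksum = (cksum + a) & 255
    return cksum

line = [169, 62, 0, 0, 0, 0]
# Compensating mis-keys: 169+62 and 159+72 both come to 231.
assert mlx_checksum(49152, line) == mlx_checksum(49152, [159, 72, 0, 0, 0, 0])
# Transposed entries: addition commutes, so the sum cannot notice.
assert mlx_checksum(49152, line) == mlx_checksum(49152, [62, 169, 0, 0, 0, 0])
print("both mistakes slip past the checksum")
```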
And they would happen. I can attest from experience. I’d had at least one program which, after typing, had one of these glitches. After all the time spent entering it, I ended up with a program that didn’t work. And I never had the heart to go back and track down the glitch or, more efficiently, retype the whole thing from scratch.
The irony is that the program with the critical typing errors was a machine language compiler. It’s something that would have let me write this sort of machine language code. Since I never reentered it, I never created anything but the most trivial of machine language programs for the 64.
So this MLX checksum was fair. It devoted one-seventh of the typing to error detection. It could catch line-swap errors, single-entry mis-keyings, and transpositions within one entry. It couldn’t catch transposing two entries. So that could have been better. I hope to address that soon.
And now, finally, the close of the All 2020 Mathematics A-to-Z. You may see this as coming in after the close of 2020. I say, well, I’ve done that before. Things that come close to the end of the year are prone to that.
The first important lesson was that I need to read exactly what topics I’ve written about before going ahead with a new week’s topic. I am not sorry, really, to have written about Tiling a second time. I’d rather it have been more than two years after the previous time. But I can make a little something out of that, too. I enjoy the second essay more. I don’t think that’s only because I like my most recent writing more. In the second version I looked at one of those fussy little specific questions. Particularly, what were the 20,426 tiles which Robert Berger found could create an aperiodic tiling? Tracking that down brought me to some fun new connections. And it let me write in a less foggy way. It’s always tempting to write the most generally true thing possible. But details and example cases are easier to understand. It’s surprising that no one in the history of knowledge has observed this difference before I did.
The second lesson was about work during a crisis. 2020 was the most stressful year of my life, a fact I hope remains true. I am aware that ritual, doing regular routine things, helps with stress. So a regular schedule of composing an essay on a mathematical topic was probably a good thing for me. Committing to the essay meant I had specific, attainable goals on clear, predictable deadlines. The catch is that I never got on top of the A-to-Z the way I hoped. My ideal for these is to have the essay written a week ahead of publication. Enough that I can sleep on it many times and amend it as needed. I never got close to this. I was running up close to deadline every week. If I were better managing all this I’d have gotten all November’s essays written before the election, and I didn’t, and that’s why I had to slip a week. I have always been a Sabbath-is-made-for-man sort, so I don’t feel too bad about slipping the week. But I would have liked never to have had a week when I was copy-editing a half-hour before publication.
It does all imply that I need to do what I resolve every year. Select topics sooner. Start research and drafts sooner. Let myself slip a deadline when that’s needed. But there is also the observation that apparently I can’t cut down the time I spend writing. The first several years of this, believe it or not, I wrote three essays a week for eight intense weeks. These would be six to eight hundred words each. Then I slacked off, doing two a week; these of course grew to a thousand, maybe 1200 words each. For 2020? One essay a week and more than one topped 2500 words. Yes, the traditional joke is that you write a lot because you don’t have the time to write briefly. But writing a lot takes time too.
The biographical essays are challenging. In the pandemic particularly, as I can’t rely on the university library for a quick biography to read. Or to check journals of mathematical history, although I haven’t resorted to such actual information yet. But I’m also aware that I am not a historian or a real biographer. I have to balance drawing conclusions I can feel confident are not wrong with making declarations that are interesting to read. Still, I enjoy a focus on the culture of mathematics, and how mathematics interacts with the broader culture. It’s a piece mathematicians tend not to acknowledge; our field’s reputation for objective truth is a compelling romantic legend.
I do plan to write an A-to-Z for 2021. I suspect I’ll do it as I did this year, one essay per week. I don’t know when I’ll start, although it should be earlier than June. I’ll want to give myself more possible slip dates without running off the year. I will not be writing about tiling again. I do realize that, since I have seven A-to-Z sequences of 26 essays each, I could in principle fill half a year with writing by reblogging each, one a day. I’m not sure of the point of such an exercise, but it would at least fill the content hole.
There is a side of me that would like to have a blogging gimmick that doesn’t commit me to 26 essays. I’ve tried a couple; they haven’t ever caught like this has. Maybe I could do something small and focused, like, ten terms from complex analysis. I’m open to suggestions.
When will I resume covering mathematical themes in comic strips? I don’t know; it’s the obvious thing to do while I wait for the A-to-Z cycle to start anew. It’s got some of the A-to-Z thrill, of writing about topics someone else chose. But I need some time to relax and play and I don’t know when I’ll be back to regular work.
I’m very slightly sorry to bump other things. But folks who like the history of mathematics, and how it links to other things, and who also like listening to stuff, might want to know. Peter Adamson, host of the History Of Philosophy Without Any Gaps podcast, this week talked for about twenty minutes about Girolamo Cardano.
Cardano is famous in mathematics circles for early work in probability. And, more, for pioneering the use of imaginary numbers. This along the way to a fantastic controversy about credit, and discovery, and secrets, and self-promotion.
Cardano was, as Adamson notes, a polymath; his day job was as a physician and he poked around in the philosophy of mind. That’s what makes him a fit subject for Adamson’s project. So if you’d like a different perspective on a person known, if vaguely, to many mathematics folks, and have a spot of time, you might enjoy.
And a happy new year, at last, to all. I’ll take this chance first to look at my readership figures from December. Later I’ll look at the whole year, and what things I would learn from that if I were capable of learning from this self-examination.
I had 13 posts here in December, which is my lowest count since June. For the twelve months from December 2019 through November 2020, I’d posted a mean of 15.3 and a median of 15 posts. So that’s relatively quiet. My blog overall got 2,366 page views from 1,751 unique visitors. That’s a decline from October and November. But it’s still above the running averages, which had a mean of 1,957.8 and median of 1,974 page views. And a mean of 1,335.7 and median of 1,290.5 unique visitors.
There were 51 likes given to posts in December. That’s barely below the twelve-month running averages, which had a mean of 54.6 and a median of 52 likes. The number of comments collapsed to a mere 4 and while it’s been worse, it’s still dire. There were a mean of 15.3 and median of 15 comments through the twelve months before that.
If it’s disappointing to see numbers drop, and it is, there’s some evidence that it’s all my own fault. Even beyond that this is my blog and I’m the only one writing for it. That is in the per-posting statistics. There were 182.0 views per posting, which is well above the averages (132.0 mean, 132.6 median). It’s also near the averages in November (191.5) and October (169.1). Likes per posting were even better: 3.9, compared to a running average mean of 3.5 and running average median of 3.4. The per-posting likes had been 4.0 and 4.4 the previous months. Comments per posting — 0.3 — is still a dire number, though. The running-average mean was 1.1 per posting and median of 1.0 per posting.
It suggests that the best thing I can do for my statistics is post more. Most of December’s posts were little but links to even earlier posts. This feels like cheating to me, to do too often. On the other hand, I’ve had 1,580 posts over the past decade; why have that if I’m not going to reuse them? And, yes, it’s a bit staggering to imagine that I could repost one entry a day for four and a third years before I ran out. (Granting that a lot of those would be references to earlier posts. Or things like monthly statistics recaps that make not a lick of sense to repeat.)
What were popular posts from November or December 2020? It turns out the five most popular posts from that stretch were all December ones:
It feels weird that How Many Of This Weird Prime Are There? was so popular since that was posted the 30th of December. (And late, at that, as I didn’t schedule it right.) So in 30 hours it attracted more readers than posts that had all of November and December to collect readers. I guess there’s something about weird primes that people want to read about. Although not to comment on with their answers to the third prime of the form … well, maybe they’re leaving it for other people to find, unspoiled. I also always find it weird that these How-A-Month-Treated-My-Blog posts are so popular. I think other insecure bloggers like to see someone else suffering.
According to WordPress I published 7,758 words in December. This is only my fourth-most-laconic month in 2020. This put me also at an average of 596.8 words per posting in December. My average for all 2020 was 672 words per posting, so all those recaps were in theory saving me time.
Also according to WordPress, I started January 2021 with a total of 1,581 posts ever. (There’s one secret post, created to test some things out; there’s no sense revealing or deleting it.) These have drawn a total 122,051 views from 69,848 logged unique visitors. It’s not a bad record for a blog entering its tenth year of publication without ever getting a clear identity.
My Twitter account has gone feral. While it’s still posting announcements, I don’t read it, because I don’t have the energy to figure out why it sometimes won’t load. If you want to social-media thing with me try me on the Mastodon account @email@example.com. Mathstodon is a mathematics-themed instance of that microblogging network you remember hearing something about somewhere but not what anybody said about it.
And, yeah, I hope to have my closing thoughts about the 2020 A-To-Z later this week. Thank you all for reading.
A friend made me aware of a neat little unsolved problem in number theory. I know it seems like number theory is nothing but unsolved problems, but this is an unfair reputation. There are as many as four solved problems in number theory. It’s a tough field.
The question started with the observation that 11 is a prime number. And so is 101. But 1,001 is not; nor is 10,001. How many prime numbers are there that have the form $10^n + 1$, for whole-number values of n? Are there infinitely many? Finitely many? If there’s finitely many, how many are there?
It turns out this is an open question. We know of three prime numbers that you can write as $10^n + 1$. I’ll leave the third for you to find.
One neat bit is that if there are more prime numbers, they have to be ones where n is itself a whole power of 2. That is, where the number is $10^{2^k} + 1$ for some whole number k. They’ve been tested a good way up, so this subset of the Generalized Fermat Numbers seems to be rare. But wouldn’t it be just our luck if from some point onward they were nothing but primes?
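A quick scan shows how scarce these are among small exponents. A Python sketch using plain trial division, and starting from n = 1 so as not to spoil the puzzle above:

```python
def is_prime(m):
    """Plain trial division; plenty for these small exponents."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# Which 10**n + 1 are prime for n from 1 through 8?
hits = [n for n in range(1, 9) if is_prime(10**n + 1)]
print(hits)   # [1, 2] -- just 11 and 101 in this range
```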
Folks who’ve been with me a long while know one of my happy Christmastime traditions is watching the Aardman Animation film Arthur Christmas. The film also gave me a great mathematical-physics question. You might consider some questions it raises.
First: Could “Arthur Christmas” Happen In Real Life? There’s a spot in the movie when Arthur and Grand-Santa are stranded on a Caribbean island while the reindeer and sleigh, without them, go flying off in a straight line. What does a straight line on the surface of the Earth mean?
Second: Returning To Arthur Christmas. From here spoilers creep in and I have to discuss, among other things, what kind of straight line the reindeer might move in. There is no one “right” answer.
Third: Arthur Christmas And The Least Common Multiple. If we suppose the reindeer move in a straight line the way satellites move in a straight line, we can calculate how long Arthur and Grand-Santa would need to wait before the reindeer and sled are back if they’re lucky enough to be waiting on the equator.
Fourth: Six Minutes Off. Waiting for the reindeer to get back becomes much harder if Arthur and Grand-Santa are not on the equator. This has potential dangers for saving the day.
Fifth and last: Arthur Christmas and the End of Time. We get to the thing that every mathematical physics blogger really really wants to get into. This is the paradox that conservation of energy and the fact of entropy seem to force us into some weird conclusions, if the universe can get old enough. Maybe; there’s some extra considerations, though, that can change the conclusion.
I am happy, as ever, to complete an A-to-Z. Also to take some time to recover after the project. I had thought that spreading things out to 26 weeks would make them less stressful, and instead, I just wrote even longer pieces, in compensation. I’ll try to have other good observations in an essay next week.
For now, though, a piece that I will find useful for years to come: a roster of what essays I wrote this year. In future years, I may even check them before writing a third piece about tiling.
Jacob Siehler had several suggestions for this last of the A-to-Z essays for 2020. Zorn’s Lemma was an obvious choice. It’s got an important place in set theory, it’s got some neat and weird implications. It’s got a great name. The zero divisor is one of those technical things mathematics majors have to deal with. It never gets any pop-mathematics attention. I picked the less-travelled road and found a delightful scenic spot.
3 times 4 is 12. That’s a clear, unambiguous, and easily-agreed-upon arithmetic statement. The thing to wonder is what kind of mathematics it takes to mess that up. The answer is algebra. Not the high school kind, with x’s and quadratic formulas and all. The college kind, with group theory and rings.
A ring is a mathematical construct that lets you do a bit of arithmetic. Something that looks like arithmetic, anyway. It has a set of elements. (An element is just a thing in a set. We say “element” because it feels weird to call it “thing” all the time.) The ring has an addition operation. The ring has a multiplication operation. Addition has an identity element, something you can add to any element without changing the original element. We can call that ‘0’. The integers, or to use the lingo $\mathbb{Z}$, are a ring (among other things).
Among the rings you learn, after the integers, is the integers modulo … something. This can be modulo any counting number. The integers modulo 10, for example, we write as $\mathbb{Z}_{10}$ for short. There are different ways to think of what this means. The one convenient for this essay is that it’s the integers 0, 1, 2, up through 9. And that the result of any calculation is “how much more than a whole multiple of 10 this calculation would otherwise be”. So then 3 times 4 is now 2. 3 times 5 is 5; 3 times 6 is 8. 3 times 7 is 1, and doesn’t that seem peculiar? That’s part of how modulo arithmetic warns us that groups and rings can be quite strange things.
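Python’s % operator makes those products easy to verify; the function name here is mine:

```python
# Multiplication in the integers modulo 10: take the ordinary
# product, then keep the remainder after dividing by 10.
def times_mod_10(a, b):
    return (a * b) % 10

assert times_mod_10(3, 4) == 2
assert times_mod_10(3, 5) == 5
assert times_mod_10(3, 6) == 8
assert times_mod_10(3, 7) == 1   # peculiar, as promised
print("all the peculiar products check out")
```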
We can do modulo arithmetic with any of the counting numbers. Look, for example, at $\mathbb{Z}_5$ instead. In the integers modulo 5, 3 times 4 is … 2. This doesn’t seem to get us anything new. How about $\mathbb{Z}_8$? In this, 3 times 4 is 4. That’s interesting. It doesn’t make 3 the multiplicative identity for this ring. 3 times 3 is 1, for example. But you’d never see something like that for regular arithmetic.
How about $\mathbb{Z}_{12}$? Now we have 3 times 4 equalling 0. And that’s a dramatic break from how regular numbers work. One thing we know about regular numbers is that if a times b is 0, then either a is 0, or b is 0, or they’re both 0. We rely on this so much in high school algebra. It’s what lets us pick out roots of polynomials. Now? Now we can’t count on that.
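That dramatic break is a one-line check in Python, working modulo 12:

```python
# In the integers modulo 12, two nonzero elements multiply to zero.
a, b, n = 3, 4, 12
assert a % n != 0 and b % n != 0   # neither factor is zero here
assert (a * b) % n == 0            # yet their product is
print(f"{a} times {b} is 0, modulo {n}")
```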
When this does happen, when one thing times another equals zero, we have “zero divisors”. These are anything in your ring that can multiply by something else to give 0. Is zero, the additive identity, always a zero divisor? … That depends on what the textbook you first learned algebra from said. To avoid ambiguity, you can write a “nonzero zero divisor”. This clarifies your intentions and slows down your copy editing every time you read “nonzero zero”. Or call it a “nontrivial zero divisor” or “proper zero divisor” instead. My preference is to accept 0 as always being a zero divisor. We can disagree on this. What of zero divisors other than zero?
Your ring might or might not have them. It depends on the ring. The ring of integers $\mathbb{Z}$, for example, doesn’t have any zero divisors except for 0. The ring of integers modulo 12, $\mathbb{Z}_{12}$, though? Anything that isn’t relatively prime to 12 is a zero divisor. So, 2, 3, 4, 6, 8, 9, and 10 are zero divisors here. The ring of integers modulo 13, $\mathbb{Z}_{13}$? That doesn’t have any zero divisors, other than zero itself. In fact any ring of integers modulo a prime number, $\mathbb{Z}_p$, lacks zero divisors besides 0.
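You can enumerate the zero divisors of these rings by brute force, and check the relatively-prime rule along the way. A Python sketch, with a made-up function name:

```python
from math import gcd

def zero_divisors(n):
    """The nonzero zero divisors of the integers modulo n."""
    return [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]

assert zero_divisors(12) == [2, 3, 4, 6, 8, 9, 10]
assert zero_divisors(13) == []   # modulo a prime: none besides 0
# And they are exactly the elements not relatively prime to n.
assert zero_divisors(12) == [a for a in range(1, 12) if gcd(a, 12) > 1]
```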
Focusing too much on integers modulo something makes zero divisors sound like some curious shadow of prime numbers. There are some similarities. Whether a number is prime depends on your multiplication rule and what set of things it’s in. Being a zero divisor in one ring doesn’t directly relate to whether something’s a zero divisor in any other. Knowing what the zero divisors are tells you something about the structure of the ring.
It’s hard to resist focusing on integers-modulo-something when learning rings. They work very much like regular arithmetic does. Even the strange thing about them, that every result is from a finite set of digits, isn’t too alien. We do something quite like it when we observe that three hours after 10:00 is 1:00. But many sets of elements can create rings. Square matrixes are the obvious extension. Matrixes are grids of elements, each of which … well, they’re most often going to be numbers. Maybe integers, or real numbers, or complex numbers. They can be more abstract things, like rotations or whatnot, but they’re hard to typeset. It’s easy to find zero divisors in matrixes of numbers. Imagine, like, a matrix that’s all zeroes except for one element, somewhere. There are a lot of matrixes which, multiplied by that, will be a zero matrix, one with nothing but zeroes in it. Another common kind of ring is the polynomials. For these to have zero divisors you need some constraint, like the polynomial coefficients being integers-modulo-something. You can make that work.
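The matrix case is concrete enough to work by hand. A sketch, with a little 2-by-2 multiplier of my own devising rather than any library routine:

```python
# Two nonzero 2x2 matrixes whose product is the zero matrix.
def matmul2(A, B):
    """Multiply two 2x2 matrixes given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [0, 0]]  # all zeroes except for one element
B = [[0, 0], [0, 1]]  # likewise, in the opposite corner
print(matmul2(A, B))  # [[0, 0], [0, 0]]
```

Neither matrix is the zero matrix, yet their product is, so both are zero divisors in the ring of 2-by-2 matrixes.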
In 1988 Istvan Beck tried to establish a link between graph theory and ring theory. We now have a usable standard definition of one. If $R$ is any ring, then $\Gamma(R)$ is the zero-divisor graph of $R$. (I know some of you think $R$ is the real numbers. No; that’s a bold-faced $\textbf{R}$ instead. Unless that’s too much bother to typeset.) You make the graph by putting in a vertex for the elements in $R$. You connect two vertices a and b if the product of the corresponding elements is zero. That is, if they’re zero divisors for one another. (In Beck’s original form, this included all the elements. In modern use, we don’t bother including the elements that are not zero divisors.)
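For a finite ring like $\mathbb{Z}_{12}$ you can build this graph outright. A sketch of the modern form, nonzero zero divisors only (function names are mine):

```python
from itertools import combinations

def zero_divisor_graph(n):
    """Vertices and edges of the zero-divisor graph of Z_n
    (modern form: only the nonzero zero divisors appear)."""
    vertices = [a for a in range(1, n)
                if any((a * b) % n == 0 for b in range(1, n))]
    edges = [(a, b) for a, b in combinations(vertices, 2)
             if (a * b) % n == 0]
    return vertices, edges

V, E = zero_divisor_graph(12)
print(V)  # [2, 3, 4, 6, 8, 9, 10]
print(E)  # [(2, 6), (3, 4), (3, 8), (4, 6), (4, 9), (6, 8), (6, 10), (8, 9)]
```

So in $\Gamma(\mathbb{Z}_{12})$ the vertex 6 is the busiest one, joined to 2, 4, 8, and 10.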
Drawing this graph makes tools from graph theory available to study rings. We can measure things like the distance between elements, or what paths from one vertex to another exist. What cycles — paths that start and end at the same vertex — exist, and how large they are. Whether the graphs are bipartite. A bipartite graph is one where you can divide the vertices into two sets, and every edge connects one thing in the first set with one thing in the second. What the chromatic number — the minimum number of colors it takes to make sure no two adjacent vertices have the same color — is. What shape does the graph have?
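Those measurements are all computable. Here’s a sketch that finds the diameter, the longest shortest path between any two vertices, of the zero-divisor graph of $\mathbb{Z}_{12}$ by breadth-first search (the setup and names are my own):

```python
from collections import deque

# Build the zero-divisor graph of Z_12 as an adjacency list.
n = 12
vertices = [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]
adj = {a: [b for b in vertices if b != a and (a * b) % n == 0]
       for a in vertices}

def distances(start):
    """Shortest path lengths from start, by breadth-first search."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

diameter = max(max(distances(v).values()) for v in vertices)
print(diameter)  # 3
```

The diameter comes out to 3: for instance, getting from 2 to 9 takes the path 2, 6, 8, 9. If I have it right, a theorem of Anderson and Livingston says the diameter of a zero-divisor graph is never more than 3, whatever the ring.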
And this lets me complete a cycle in this year’s A-to-Z, to my delight. There is an important question in topology which group theory could answer. It’s a generalization of the zero-divisors conjecture, a hypothesis about what fits in a ring based on certain types of groups. This hypothesis — actually, these hypotheses. There are a bunch of similar questions about what the invariants called the L2-Betti numbers can be. These we call the Atiyah Conjecture. This is because of work Michael Atiyah did in the cohomology of manifolds starting in the 1970s. It’s work, I admit, I don’t understand well enough to summarize, and hope you’ll forgive me for that. I’m still amazed that one can get to cutting-edge mathematics research like this. It seems, at its introduction, to be only a subversion of how we find x for which $f(x) = 0$.
And for the last of this year’s (planned) exhumations from my archives? It’s a piece from summer 2017: Zeta Function. As will happen in mathematics, there are many zeta functions. But there’s also one special one that people find endlessly interesting, and that’s what we mean if we say “the zeta function”. It, of course, goes back to Bernhard Riemann.
Also a cute note I saw going around. If you cut off the century years, then the date today — the 16th day of the 12th month of the 20th year of the century — gives a rare Pythagorean triplet. $12^2 + 16^2 = 144 + 256 = 400 = 20^2$, and after a moment we notice that’s the famous 3-4-5 Pythagorean triplet all over again. If you miss it, well, that’s all right. There’ll be another along in July of 2025, and one after that in October of 2026.
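If you like, you can verify the dates with a couple lines of arithmetic (the helper is my own, purely for checking):

```python
# Check whether day, month, and year-of-century form a Pythagorean triple.
def is_pythagorean(a, b, c):
    """Whether a^2 + b^2 = c^2."""
    return a * a + b * b == c * c

print(is_pythagorean(12, 16, 20))  # True: 144 + 256 = 400
print(is_pythagorean(24, 7, 25))   # True: the 24th of July, 2025
print(is_pythagorean(24, 10, 26))  # True: the 24th of October, 2026
```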
To dig something out of my archives today, I offer the Zermelo-Fraenkel Axioms. This wrapped up the End 2016 A-to-Z. On the last day of 2016, I see; I didn’t realize I was cutting things that close that year. These are fundamentals of set theory, which is the study of what you can include and what you exclude from a set of things. For a while in the 20th century this looked likely to be the foundation of mathematics, from which everything else could be derived. We’ve moved on now to thinking that category theory is more likely the core. But set theory remains a really good foundation. You can understand a lot of what’s interesting about it without needing more than a child’s ability to make marks on paper and draw circles around some of them. Or, like my essays insist on doing, without even doing the drawings that would make it all easier to follow.