As I continue to approach readiness for the Little Mathematics A-to-Z, let me share another piece you might have missed. Back in 2016 somehow two A-to-Z’s wasn’t enough for me. I also did a string of “Theorem Thursdays”, trying to explain some interesting piece of mathematics. The Jordan Curve Theorem is one of them.

The theorem, at heart, seems too simple to even be mathematics. It says that a simple closed curve on the plane divides the plane into an inside and an outside. There are similar versions for surfaces in three-dimensional spaces. Or volumes in four-dimensional spaces and so on. Proving the theorem turns out to be more complicated than I could fit into an essay. But proving a simplified version, where the curve is a polygon? That’s doable. Easy, even.

And as a sideline you get an easy way to test whether a point is inside a shape. It’s obvious, yeah, if a point is inside a square. But inside a complicated shape, some labyrinthine shape? Then it’s not obvious, and it’s nice to have an easy test.

This is even mathematics with practical application. A few months ago in my day job I needed an automated way to place a label inside a potentially complicated polygon. The midpoint of the polygon’s vertices wouldn’t do. The shapes could be L- or U- shaped, so that the midpoint wasn’t inside, or was too close to the edge of another shape. Starting from the midpoint, though, and finding the largest part of the polygon near to it? That’s doable, and that’s the Jordan Curve Theorem coming to help me.
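The test that falls out of the polygon version of the theorem is easy to sketch in code. Cast a ray from the point in question and count how many polygon edges it crosses: an odd count means inside. (This is my own minimal Python sketch, not the code from the day job; the U-shaped vertex list and the test points are made up for illustration.)

```python
def point_in_polygon(x, y, vertices):
    """Even-odd (ray casting) test: cast a ray from (x, y) toward
    positive x and count how many polygon edges it crosses.
    An odd count means the point is inside.  `vertices` is a list
    of (x, y) pairs in order around the polygon."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

# A U-shaped (non-convex) polygon: the center of its bounding box is outside.
u_shape = [(0, 0), (3, 0), (3, 2), (2, 2), (2, 1), (1, 1), (1, 2), (0, 2)]
print(point_in_polygon(0.5, 1.5, u_shape))  # True: inside one arm of the U
print(point_in_polygon(1.5, 1.5, u_shape))  # False: in the notch, outside
```

The Jordan Curve Theorem is what guarantees the crossing count means anything at all: only because the curve has a well-defined inside and outside does the parity of crossings settle the question.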

Several years ago in an A-to-Z I tried to explain cohomologies. I wasn’t satisfied with it, as, in part, I couldn’t think of a good example. You know, something you could imagine demonstrating with specific physical objects. I can reel off definitions, once I look up the definitions, but there’s only so many people who can understand something from that.

Quanta Magazine recently ran an article about homologies. It’s a great piece, if we get past the introduction of topology with that doughnut-and-coffee-cup joke. (Not that it’s wrong, just that it’s tired.) It’s got pictures, too, which is great.

This I came to notice because Refurio Anachro on Mathstodon wrote a bit about it. This in a thread of toots talking about homologies and cohomologies. The thread at this link is more for mathematicians than the lay audience, unlike the Quanta Magazine article. If you’re comfortable reading about simplexes and linear operators and multifunctions you’re good. Otherwise … well, I imagine you trust that cohomologies can take care of themselves. But I feel better-informed for reading the thread. And it includes a link to a downloadable textbook in algebraic topology, useful for people who want to give that a try on their own.

The BBC’s In Our Time program, and podcast, did a 50-minute chat about the longitude problem. That’s the question of how to find one’s position, east or west of some reference point. It’s an iconic story of pop science and, I’ll admit, I’d think anyone likely to read my blog already knows the rough outline of the story. But you never know what people don’t know. And even if you do know, it’s often enjoyable to hear the story told a different way.

The mathematics content of the longitude problem is real, although it’s not discussed more than in passing during the chat. The core insight Western mapmakers used is that the difference between local (sun) time and a reference point’s time tells you how far east or west you are of that reference point. So then the question becomes how you know what your reference point’s time is.
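The arithmetic behind that insight is almost embarrassingly small. The Earth turns 360 degrees in 24 hours, so each hour of difference between local solar time and the reference clock is 15 degrees of longitude. A sketch, with a function name of my own invention:

```python
def longitude_from_time(local_solar_hours, reference_hours):
    """Degrees east (+) or west (-) of the reference meridian.
    The Earth turns 360 degrees in 24 hours, so each hour of
    difference between local solar time and the reference clock
    is 15 degrees of longitude."""
    return 15.0 * (local_solar_hours - reference_hours)

# Local noon while the chronometer (keeping London time) reads 16:00:
# the sun reached us four hours late, so we are 60 degrees west.
print(longitude_from_time(12.0, 16.0))  # -60.0
```

The whole difficulty of the longitude problem was never in this formula. It was in knowing the reference time at all, weeks out of port.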

This story, as it’s often told in pop science treatments, tends to focus on the brilliant clockmaker John Harrison, and the podcast does a fair bit of this. Harrison spent his life building a series of ever-more-precise clocks. These could keep London time on ships sailing around the world. (Or at least to the Caribbean, where the most profitable, slavery-driven, British interests were.) But he also spent decades fighting with the authorities he expected to reward him for his work. It makes for an almost classic narrative of lone genius versus the establishment.

But, and I’m glad the podcast discussion comes around to this, the reality is more ambiguous than that. (Actual history is always more ambiguous than whatever you think.) Part of the goal of the British (and other powers) was finding a practical way for any ship to find longitude. Granted, Harrison could build an advanced, ingenious clock more accurate than anyone else could. Could he build the hundreds, or thousands, of those clocks that British shipping needed? Could anyone?

And the competing methods for finding longitude were based on astronomy and calculation. The moment when, say, the Moon passes in front of Jupiter is the same for everyone on Earth. (At least for the accuracy needed here.) It can, in principle, be forecast years, even decades ahead of time. So why not print up books listing astronomical events for the next five years and the formulas to turn observations into longitudes? Books are easy to print. You already train your navigators in astronomy so that they can find latitude. (This by how far above the horizon the pole star, or the sun, or another identifiable feature is.) And, incidentally, you gain a way of computing longitude that you don’t lose if your clock breaks. I appreciated having some of that perspective shown.

(The problem of longitude on land gets briefly addressed. The same principles that work at sea work on land. And land offers some secondary checks. For an unmentioned example there’s triangulation. It’s a great process, and a compelling use of trigonometry. I may do a piece about that myself sometime.)

Also a thing I somehow did not realize: British English pronounces “longitude” with a hard G sound. Huh.

A Reading the Comics post a couple weeks back inspired me to find the centroid of a regular tetrahedron. A regular tetrahedron, also known as “a tetrahedron”, is the four-sided die shape. A pyramid with triangular base. Or a cone with a triangle base, if you prefer. If one asks a person to draw a tetrahedron, and they comply, they’ll likely draw this shape. The centroid, the center of mass of the tetrahedron, is at a point easy enough to find. It’s on the perpendicular between any of the four faces — the equilateral triangles — and the vertex not on that face. Particularly, it’s one-quarter the distance from the face towards the other vertex. We can reason that out purely geometrically, without calculating, and I did in that earlier post.

But most tetrahedrons are not regular. They have centroids too; where are they?

Thing is I know the correct answer going in. It’s at the “average” of the vertices of the tetrahedron. Start with the Cartesian coordinates of the four vertices. The x-coordinate of the centroid is the arithmetic mean of the x-coordinates of the four vertices. The y-coordinate of the centroid is the mean of the y-coordinates of the vertices. The z-coordinate of the centroid is the mean of the z-coordinates of the vertices. Easy to calculate; but, is there a way to see that this is right?
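The calculation, at least, is trivial to write out. Here's a quick sketch; the sample tetrahedron is one I made up, with three vertices on the floor and the fourth lifted to height 8:

```python
def centroid(vertices):
    """Centroid of a simplex: the coordinate-wise arithmetic
    mean of its vertices.  Works in any dimension."""
    n = len(vertices)
    dim = len(vertices[0])
    return tuple(sum(v[d] for v in vertices) / n for d in range(dim))

# An irregular tetrahedron: three vertices on the floor, one lifted.
tetra = [(0, 0, 0), (4, 0, 0), (1, 3, 0), (2, 1, 8)]
print(centroid(tetra))  # (1.75, 1.0, 2.0)
```

Notice the z-coordinate comes out to 2, one-quarter of the lifted vertex's height of 8. That's a preview of the argument that follows.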

What’s got me is I can think of an argument that convinces me. So in this sense, I have an easy proof of it. But I also see where this argument leaves a lot unaddressed. So it may not prove things to anyone else. Let me lay it out, though.

So start with a tetrahedron of your own design. This will be less confusing if I have labels for the four vertices. I’m going to call them A, B, C, and D. I don’t like those labels, not just for being trite, but because I so want ‘C’ to be the name for the centroid. I can’t find a way to do that, though, and not have the four tetrahedron vertices be some weird set of letters. So let me use ‘P’ as the name for the centroid.

Where is P, relative to the points A, B, C, and D?

And here’s where I give a part of an answer. Start out by putting the tetrahedron somewhere convenient. That would be the floor. Set the tetrahedron so that the face with triangle ABC is in the xy plane. That is, points A, B, and C all have the z-coordinate of 0. The point D has a z-coordinate that is not zero. Let me call that coordinate h. I don’t care what the x- and y-coordinates for any of these points are. What I care about is what the z-coordinate for the centroid P is.

The property of the centroid that was useful last time around was that it split the regular tetrahedron into four smaller, irregular, tetrahedrons, each with the same volume. Each with one-quarter the volume of the original. The centroid P does that for the tetrahedron too. So, how far does the point P have to be from the triangle ABC to make a tetrahedron with one-quarter the volume of the original?

The answer comes from the same trick used last time. The volume of a cone is one-third the area of the base times its altitude. The volume of the tetrahedron ABCD, for example, is one-third times the area of triangle ABC times how far point D is from the triangle. That number I’d labelled h. The volume of the tetrahedron ABCP, meanwhile, is one-third times the area of triangle ABC times how far point P is from the triangle. So the point P has to be one-quarter as far from triangle ABC as the point D is. It’s got a z-coordinate of one-quarter h.

Notice, by the way, that while I don’t know anything about the x- and y- coordinates of any of these points, I do know the z-coordinates. A, B, and C all have z-coordinate of 0. D has a z-coordinate of h. And P has a z-coordinate of one-quarter h. One-quarter h sure looks like the arithmetic mean of 0, 0, 0, and h.
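For anyone who'd rather have a numeric check to go with the geometric one: the scalar triple product gives a tetrahedron's volume directly, and putting P at the mean of the four vertices really does make a tetrahedron ABCP with one-quarter the volume. (My own sketch; the coordinates are arbitrary.)

```python
def tetra_volume(a, b, c, d):
    """Volume of tetrahedron ABCD via the scalar triple product:
    V = |det[b - a, c - a, d - a]| / 6."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
         - u[1] * (v[0] * w[2] - v[2] * w[0])
         + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0

A, B, C, D = (0, 0, 0), (4, 0, 0), (1, 3, 0), (2, 1, 8)
P = tuple((A[i] + B[i] + C[i] + D[i]) / 4 for i in range(3))
print(tetra_volume(A, B, C, D))  # 16.0
print(tetra_volume(A, B, C, P))  # 4.0, one quarter of the original
```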

At this point, I’m convinced. The coordinates of the centroid have to be the mean of the coordinates of the vertices. But you also see how much is not addressed. You’d probably grant that I have the z-coordinate worked out when three vertices have the same z-coordinate. Or where three vertices have the same y-coordinate or the same x-coordinate. You might allow that if I can rotate a tetrahedron, I can get three points to the same z-coordinate (or y- or x- if you like). But this still only gets one coordinate of the centroid P.

I’m sure a bit of algebra would wrap this up. But I would like to avoid that, if I can. I suspect the way to argue this geometrically depends on knowing that the line from vertex D to tetrahedron centroid P, if extended, passes through the centroid of triangle ABC. And something similar applies for vertices A, B, and C. I also suspect there’s a link between the vector which points the direction from D to P and the sum of the three vectors that point the directions from D to A, B, and C. I haven’t quite got there, though.

Comic Strip Master Command has not, to appearances, been distressed by my Reading the Comics hiatus. There are still mathematically-themed comic strips. Many of them are about story problems and kids not doing them. Some get into a mathematical concept. One that ran last week caught my imagination so I’ll give it some time here. This and other Reading the Comics essays I have at this link, and I figure to resume posting them, at least sometimes.

The centroid is good geometry, something which turns up in plane and solid shapes. It’s a center of the shape: the arithmetic mean of all the points in the shape. (There are other things that can, with reason, be called a center too. MathWorld mentions the existence of 2,001 things that can be called the “center” of a triangle. It must be only a lack of interest that’s kept people from identifying even more centers for solid shapes.) It’s the center of mass, if the shape is a homogeneous block. Balance the shape from below this centroid and it stays balanced.

For a complicated shape, finding the centroid is a challenge worthy of calculus. For these shapes, though? The sphere, the cube, the regular tetrahedron? We can work those out by reason. And, along the way, work out whether this rule gives an advantage to either boxer.

The sphere first. That’s the easiest. The centroid has to be the center of the sphere. Like, the point that the surface of the sphere is a fixed radius from. This is so obvious it takes a moment to think why it’s obvious. “Why” is a treacherous question for mathematics facts; why should 4 divide 8? But sometimes we can find answers that give us insight into other questions.

Here, the “why” I like is symmetry. Look at a sphere. Suppose it lacks markings. There’s none of the referee’s face or bow tie here. Imagine then rotating the sphere some amount. Can you see any difference? You shouldn’t be able to. So, in doing that rotation, the centroid can’t have moved. If it had moved, you’d be able to tell the difference. The rotated sphere would be off-balance. The only place inside the sphere that doesn’t move when the sphere is rotated is the center.

This symmetry consideration helps answer where the cube’s centroid is. That also has to be the center of the cube. That is, halfway between the top and bottom, halfway between the front and back, halfway between the left and right. Symmetry again. Take the cube and stand it upside-down; does it look any different? No, so, the centroid can’t be any closer to the top than it can the bottom. Similarly, rotate it 180 degrees without taking it off the mat. The rotation leaves the cube looking the same. So this rules out the centroid being closer to the front than to the back. It also rules out the centroid being closer to the left end than to the right. It has to be dead center in the cube.

Now to the regular tetrahedron. Obviously the centroid is … all right, now we have issues. Dead center is … where? We can tell when the regular tetrahedron’s turned upside-down. Also when it’s turned 90 or 180 degrees.

Symmetry will guide us. We can say some things about it. Each face of the regular tetrahedron is an equilateral triangle. The centroid has to be along the altitude. That is, the vertical line connecting the point on top of the pyramid with the equilateral triangle base, down on the mat. Imagine looking down on the shape from above, and rotating the shape 120 or 240 degrees if you’re still not convinced.

And! We can tip the regular tetrahedron over, and put another of its faces down on the mat. The shape looks the same once we’ve done that. So the centroid has to be along the altitude between the new highest point and the equilateral triangle that’s now the base, down on the mat. We can do that for each of the four sides. That tells us the centroid has to be at the intersection of these four altitudes. More, that the centroid has to be exactly the same distance to each of the four vertices of the regular tetrahedron. Or, if you feel a little fancier, that it’s exactly the same distance to the centers of each of the four faces.

It would be nice to know where along this altitude this intersection is, though. We can work it out by algebra. It’s no challenge to figure out the Cartesian coordinates for a good regular tetrahedron. Then finding the point that’s got the right distance is easy. (Set the base triangle in the xy plane. Center it, so the coordinates of the highest point are (0, 0, h) for some number h. Set one of the other vertices so it’s in the xz plane, that is, at coordinates (0, b, 0) for some b. Then find the c so that (0, 0, c) is exactly as far from (0, 0, h) as it is from (0, b, 0).) But algebra is such a mass of calculation. Can we do it by reason instead?
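For the record, that parenthetical recipe takes only a few lines to run. With edge length 1, the base vertices sit a distance 1/√3 from the center and the apex sits at height √(2/3); those values come straight from the geometry of the equilateral base. My own sketch:

```python
from math import sqrt

# Regular tetrahedron with edge length 1, base centered on the origin
# of the xy plane.  The base vertices sit a distance b = 1/sqrt(3)
# from the center, and the apex sits at height h = sqrt(2/3).
b = 1 / sqrt(3)
h = sqrt(2 / 3)

# Find the c with (0, 0, c) equidistant from the apex (0, 0, h) and a
# base vertex (0, b, 0):  (h - c)^2 = b^2 + c^2  =>  c = (h^2 - b^2) / (2h).
c = (h**2 - b**2) / (2 * h)

print(c)      # 0.2041...
print(h / 4)  # the same: the equidistant point is one-quarter of the way up
```

Which agrees with the reasoning that follows: the intersection sits one-quarter of the way up the altitude.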

That I ask the question answers it. That I preceded the question with talk about symmetry answers how to reason it. The trick is that we can divide the regular tetrahedron into four smaller tetrahedrons. These smaller tetrahedrons aren’t regular; they’re not the Platonic solid. But they are still tetrahedrons. The little tetrahedron has as its base one of the equilateral triangles that’s the bigger shape’s face. The little tetrahedron has as its fourth vertex the centroid of the bigger shape. Draw in the edges, and the faces, like you’d imagine. Three edges, each connecting one of the base triangle’s vertices to the centroid. The faces have two of these new edges plus one of the base triangle’s edges.

The four little tetrahedrons have to all be congruent. Symmetry again; tip the big tetrahedron onto a different face and you can’t see a difference. So we’ll know, for example, all four little tetrahedrons have the same volume. The same altitude, too. The centroid is the same distance to each of the regular tetrahedron’s faces. And the four little tetrahedrons, together, have the same volume as the original regular tetrahedron.

What is the volume of a tetrahedron?

If we remember dimensional analysis we may expect the volume should be a constant times the area of the base of the shape times the altitude of the shape. We might also dimly remember there is some formula for the volume of any conical shape. A conical shape here is something that’s got a simple, closed shape in a plane as its base. And some point P, above the base, that connects by straight lines to every point on the base shape. This sounds like we’re talking about circular cones, but it can be any shape at the base, including polygons.

So we double-check that formula. The volume of a conical shape is one-third times the area of the base shape times the altitude. That’s the perpendicular distance between P and the plane that the base shape is in. And, hey, one-third times the area of the face times the altitude is exactly what we’d expect.

So. The original regular tetrahedron has a base — has all its faces — with area A. It has an altitude h. That h must relate in some way to the area; I don’t care how. The volume of the regular tetrahedron has to be $\frac{1}{3} A h$.

The volume of the little tetrahedrons is — well, they have the same base as the original regular tetrahedron. So a little tetrahedron’s base is A. The altitude of the little tetrahedron is the height of the original tetrahedron’s centroid above the base. Call that $h_c$. How can the volume of the little tetrahedron, $\frac{1}{3} A h_c$, be one-quarter the volume of the original tetrahedron, $\frac{1}{3} A h$? Only if $h_c$ is one-quarter $h$.

This pins down where the centroid of the regular tetrahedron has to be. It’s on the altitude underneath the top point of the tetrahedron. It’s one-quarter of the way up from the equilateral-triangle face.

(And I’m glad, checking this out, that I got to the right answer after all.)

So, if the cube and the tetrahedron have the same height, then the cube has an advantage. The cube’s centroid is higher up, so the tetrahedron has a narrower range to punch. Problem solved.

I do figure to talk about comic strips, and mathematics problems they bring up, more. I’m not sure how writing about one single strip turned into 1300 words. But that’s what happens every time I try to do something simpler. You know how it goes.

Both the Klein bottle and the Möbius strip have many possible appearances, for about the same reason there are many kinds of trapezoids or octagons or whatnot. Möbius strips are easy enough to make in real life. Klein bottles, not so; the shape needs four dimensions of space and we just don’t have them. We’ll represent it with a shape that loops back through itself, but a real Klein bottle wouldn’t do that, for the same reason a wireframe cube’s edges don’t intersect the way the lines of its photograph do.

It makes a good wireframe shape, though. I’m surprised not to see more playground equipment using it.

Jacob Siehler had several suggestions for this last of the A-to-Z essays for 2020. Zorn’s Lemma was an obvious choice. It’s got an important place in set theory, and it’s got some neat and weird implications. It’s got a great name. The zero divisor is one of those technical things mathematics majors have to deal with. It never gets any pop-mathematics attention. I picked the less-travelled road and found a delightful scenic spot.

Zero Divisor.

3 times 4 is 12. That’s a clear, unambiguous, and easily-agreed-upon arithmetic statement. The thing to wonder is what kind of mathematics it takes to mess that up. The answer is algebra. Not the high school kind, with x’s and quadratic formulas and all. The college kind, with group theory and rings.

A ring is a mathematical construct that lets you do a bit of arithmetic. Something that looks like arithmetic, anyway. It has a set of elements. (An element is just a thing in a set. We say “element” because it feels weird to call it “thing” all the time.) The ring has an addition operation. The ring has a multiplication operation. Addition has an identity element, something you can add to any element without changing the original element. We can call that ‘0’. The integers, or to use the lingo $\mathbb{Z}$, are a ring (among other things).

Among the rings you learn, after the integers, is the integers modulo … something. This can be modulo any counting number. The integers modulo 10, for example, we write as $\mathbb{Z}_{10}$ for short. There are different ways to think of what this means. The one convenient for this essay is that it’s the integers 0, 1, 2, up through 9. And that the result of any calculation is “how much more than a whole multiple of 10 this calculation would otherwise be”. So then 3 times 4 is now 2. 3 times 5 is 5; 3 times 6 is 8. 3 times 7 is 1, and doesn’t that seem peculiar? That’s part of how modulo arithmetic warns us that groups and rings can be quite strange things.

We can do modulo arithmetic with any of the counting numbers. Look, for example, at $\mathbb{Z}_5$ instead. In the integers modulo 5, 3 times 4 is … 2. This doesn’t seem to get us anything new. How about $\mathbb{Z}_8$? In this, 3 times 4 is 4. That’s interesting. It doesn’t make 3 the multiplicative identity for this ring. 3 times 3 is 1, for example. But you’d never see something like that for regular arithmetic.

How about $\mathbb{Z}_{12}$? Now we have 3 times 4 equalling 0. And that’s a dramatic break from how regular numbers work. One thing we know about regular numbers is that if a times b is 0, then either a is 0, or b is 0, or they’re both 0. We rely on this so much in high school algebra. It’s what lets us pick out roots of polynomials. Now? Now we can’t count on that.

When this does happen, when one thing times another equals zero, we have “zero divisors”. These are anything in your ring that can multiply by something else to give 0. Is zero, the additive identity, always a zero divisor? … That depends on what the textbook you first learned algebra from said. To avoid ambiguity, you can write a “nonzero zero divisor”. This clarifies your intentions and slows down your copy editing every time you read “nonzero zero”. Or call it a “nontrivial zero divisor” or “proper zero divisor” instead. My preference is to accept 0 as always being a zero divisor. We can disagree on this. What of zero divisors other than zero?

Your ring might or might not have them. It depends on the ring. The ring of integers $\mathbb{Z}$, for example, doesn’t have any zero divisors except for 0. The ring of integers modulo 12, $\mathbb{Z}_{12}$, though? Anything that isn’t relatively prime to 12 is a zero divisor. So 2, 3, 4, 6, 8, 9, and 10 are zero divisors here. The ring of integers modulo 13, $\mathbb{Z}_{13}$? That doesn’t have any zero divisors, other than zero itself. In fact any ring of integers modulo a prime number, $\mathbb{Z}_p$, lacks zero divisors besides 0.
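A brute-force search makes the pattern concrete. This is my own sketch; it simply tries every product in the ring:

```python
def zero_divisors(n):
    """Nonzero zero divisors of the ring of integers modulo n:
    the nonzero a for which some nonzero b gives a*b = 0 (mod n)."""
    return [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]

print(zero_divisors(12))  # [2, 3, 4, 6, 8, 9, 10]
print(zero_divisors(13))  # []: no zero divisors modulo a prime
```

Modulo 12 the search turns up exactly the elements sharing a factor with 12; modulo the prime 13 it turns up nothing.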

Focusing too much on integers modulo something makes zero divisors sound like some curious shadow of prime numbers. There are some similarities. Whether a number is prime depends on your multiplication rule and what set of things it’s in. But being a zero divisor in one ring doesn’t directly relate to whether something’s a zero divisor in any other. Knowing what the zero divisors are tells you something about the structure of the ring.

It’s hard to resist focusing on integers-modulo-something when learning rings. They work very much like regular arithmetic does. Even the strange thing about them, that every result is from a finite set of digits, isn’t too alien. We do something quite like it when we observe that three hours after 10:00 is 1:00. But many sets of elements can create rings. Square matrices are the obvious extension. Matrices are grids of elements, each of which … well, they’re most often going to be numbers. Maybe integers, or real numbers, or complex numbers. They can be more abstract things, like rotations or whatnot, but they’re hard to typeset. It’s easy to find zero divisors in matrices of numbers. Imagine, like, a matrix that’s all zeroes except for one element, somewhere. There are a lot of matrices which, multiplied by that, will be a zero matrix, one with nothing but zeroes in it. Another common kind of ring is the polynomials. For these you need some constraint like the polynomial coefficients being integers-modulo-something. You can make that work.
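For a concrete pair of matrix zero divisors, here's a sketch of my own with two-by-two grids, no libraries needed:

```python
def matmul2(p, q):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Two nonzero matrices whose product is the zero matrix:
p = [[1, 0],
     [0, 0]]
q = [[0, 0],
     [0, 1]]
print(matmul2(p, q))  # [[0, 0], [0, 0]]
```

Neither p nor q is the zero matrix, yet their product is. That's all it takes to be zero divisors in the ring of two-by-two matrices.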

In 1988 Istvan Beck tried to establish a link between graph theory and ring theory. We now have a usable standard definition of one. If $R$ is any ring, then $\Gamma(R)$ is the zero-divisor graph of $R$. (I know some of you think $R$ is the real numbers. No; that’s a bold-faced $\mathbb{R}$ instead. Unless that’s too much bother to typeset.) You make the graph by putting in a vertex for each element in $R$. You connect two vertices a and b if the product of the corresponding elements is zero. That is, if they’re zero divisors for one another. (In Beck’s original form, this included all the elements. In modern use, we don’t bother including the elements that are not zero divisors.)
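Building the zero-divisor graph for a small ring like the integers modulo 12 takes only a few lines. This sketch (my own, representing the graph as a dictionary of adjacency sets) includes only the zero divisors, following the modern convention:

```python
def zero_divisor_graph(n):
    """Adjacency sets for the zero-divisor graph of the integers
    modulo n: vertices are the nonzero zero divisors, with an edge
    between a and b whenever a*b = 0 (mod n)."""
    divisors = [a for a in range(1, n)
                if any((a * b) % n == 0 for b in range(1, n))]
    return {a: {b for b in divisors if b != a and (a * b) % n == 0}
            for a in divisors}

graph = zero_divisor_graph(12)
for vertex in sorted(graph):
    print(vertex, sorted(graph[vertex]))
```

Run it and you can read off, for example, that 2 connects only to 6 (since 2 times 6 is 0 modulo 12), while 6 connects to 2, 4, 8, and 10. The dictionary is ready for whatever graph-theoretic measurements you like.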

Drawing this graph makes tools from graph theory available to study rings. We can measure things like the distance between elements, or what paths from one vertex to another exist. What cycles — paths that start and end at the same vertex — exist, and how large they are. Whether the graphs are bipartite. A bipartite graph is one where you can divide the vertices into two sets, and every edge connects one thing in the first set with one thing in the second. What the chromatic number — the minimum number of colors it takes to make sure no two adjacent vertices have the same color — is. What shape does the graph have?

It’s easy to think that zero divisors are just a thing which emerges from a ring. The graph theory connection tells us otherwise. You can make a potential zero divisor graph and ask whether any ring could fit that. And, from that, what we can know about a ring from its zero divisors. Mathematicians are drawn as if by an occult hand to things that let you answer questions about a thing from its “shape”.

And this lets me complete a cycle in this year’s A-to-Z, to my delight. There is an important question in topology which group theory could answer. It’s a generalization of the zero-divisor conjecture, a hypothesis about what fits in a ring based on certain types of groups. This hypothesis — actually, these hypotheses. There are a bunch of similar questions about what values invariants called the L^{2}-Betti numbers can take. These we call the Atiyah Conjecture. This because of work Michael Atiyah did in the cohomology of manifolds starting in the 1970s. It’s work, I admit, I don’t understand well enough to summarize, and I hope you’ll forgive me for that. I’m still amazed that one can get to cutting-edge mathematics research from this. It seems, at its introduction, to be only a subversion of how we find the x’s for which a polynomial equals zero.

Nobody had particular suggestions for the letter ‘Y’ this time around. It’s a tough letter to find mathematical terms for. It doesn’t even lend itself to typography or wordplay the way ‘X’ does. So I chose to do one more biographical piece before the series concludes. There were twists along the way in writing.

Before I get there, I have a word for a longtime friend, Porsupah Ree. Among her hobbies is watching, and photographing, the wild rabbits. A couple years back she got a great photograph. It’s one that you may have seen going around social media with a caption about how “everybody was bun fu fighting”. She’s put it up on Redbubble, so you can get the photograph as a print or a coffee mug or a pillow, or many other things. And you can support her hobbies of rabbit photography and eating.

Yang Hui.

Several problems beset me in writing about this significant 13th-century Chinese mathematician. One is my ignorance of the Chinese mathematical tradition. I have little to guide me in choosing what tertiary sources to trust. Another is that the tertiary sources know little about him. The Complete Dictionary of Scientific Biography gives a dire verdict. “Nothing is known about the life of Yang Hui, except that he produced mathematical writings”. MacTutor’s biography gives his lifespan as from circa 1238 to circa 1298, on what basis I do not know. He seems to have been born in what’s now Hangzhou, near Shanghai. He seems to have worked as a civil servant. This is what I would have imagined; most scholars then were. It’s the sort of job that gives one time to write mathematics. Also he seems not to have been a prominent civil servant; he’s apparently not listed in any dynastic records. After that, we need to speculate.

E F Robertson, writing the MacTutor biography, speculates that Yang Hui was a teacher. That he was writing to explain mathematics in interesting and helpful ways. I’m not qualified to judge Robertson’s conclusions. And Robertson notes that’s not inconsistent with Yang being a civil servant. Robertson’s argument is based on Yang’s surviving writings, and what they say about the problems they demonstrate. There is, for example, 1274’s Cheng Chu Tong Bian Ben Mo. Robertson translates that title as Alpha and omega of variations on multiplication and division. I try to work out my unease at having something translated from Chinese as “Alpha and Omega”. That is my issue. Relevant here is that a syllabus prefaces the first chapter. It provides a schedule and series of topics, as well as a rationale for the plan.

Was Yang Hui a discoverer of significant new mathematics? Or did he “merely” present what was already known in a useful way? This is not to dismiss him; we have the same questions about Euclid. He is held up as among the great Chinese mathematicians of the 13th century, a particularly fruitful time and place for mathematics. How much greatness to assign to original work and how much to good exposition is unanswerable with what we know now.

Consider for example the thing I’ve featured before, Yang Hui’s Triangle. It’s the arrangement of numbers known in the west as Pascal’s Triangle. Yang provides the earliest extant description of the triangle and how to form it and use it. This in the 1261 Xiangjie jiuzhang suanfa (Detailed analysis of the mathematical rules in the Nine Chapters and their reclassifications). But in it, Yang Hui says he learned the triangle from a treatise by Jia Xian, Huangdi Jiuzhang Suanjing Xicao (The Yellow Emperor’s detailed solutions to the Nine Chapters on the Mathematical Art). Jia Xian lived in the 11th century; he’s known to have written two books, both lost. Yang Hui’s commentary gives us a fair idea what Jia Xian wrote about. But we’re limited in judging what was Jia Xian’s idea and what was Yang Hui’s inference or what.

The Nine Chapters referred to is Jiuzhang suanshu. An English title is Nine Chapters on the Mathematical Art. The book is a 246-problem handbook of mathematics that dates back to antiquity. It’s impossible to say when the Nine Chapters was first written. Liu Hui, who wrote a commentary on the Nine Chapters in 263 CE, thought it predated the Qin ruler Shih Huang Ti’s 213 BCE destruction of all books. But the book — and the many commentaries on the book — served as a centerpiece for Chinese mathematics for a long while. Jia Xian’s and Yang Hui’s work was part of this tradition.

Yang Hui’s Detailed Analysis covers the Nine Chapters. It goes on for three more chapters, about geometry and the fundamentals of mathematics. Even how to classify the problems. He had further works. In 1275 Yang published Practical mathematical rules for surveying and Continuation of ancient mathematical methods for elucidating strange properties of numbers. (I’m not confident in my ability to give the Chinese titles for these.) The first title particularly echoes how in the Western tradition geometry was born of practical concerns.

The breadth of topics covers, it seems to me, much of a decent modern (American) high school mathematics education. The triangle, and the binomial expansions it gives us, fit that. Yang writes about more efficient ways to multiply on the abacus. He writes about finding simultaneous solutions to sets of equations, through a technique that amounts to finding the matrix of coefficients for the equations, and its determinant. He writes about finding the roots for cubic and quartic equations. The technique is commonly known in the west as Horner’s Method, a technique of calculating divided differences. And we see the calculating of areas and volumes for regular shapes.

And sequences. He found the sum of the squares of the natural numbers followed a rule:

$1^2 + 2^2 + 3^2 + \cdots + n^2 = \frac{1}{6} n (n + 1) (2n + 1)$

This by a method of “piling up squares”, described some here by the Mathematical Association of America. (Me, I spent 40 minutes that could have gone into this essay convincing myself the formula was right. I couldn’t make myself believe one part of it and had to work it out a couple different ways.)
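If you’d rather let a computer do the convincing, a few lines of Python (my addition, obviously nothing from Yang Hui’s era) can check the sum-of-squares rule against brute-force addition:

```python
# Verify the rule: 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6.
def sum_of_squares_formula(n):
    """Closed-form expression for the sum of the first n squares."""
    return n * (n + 1) * (2 * n + 1) // 6

def sum_of_squares_brute(n):
    """Add the squares up directly, the piling-up way."""
    return sum(k * k for k in range(1, n + 1))

# Check the two agree over a healthy range of n.
for n in range(1, 101):
    assert sum_of_squares_formula(n) == sum_of_squares_brute(n)

print(sum_of_squares_formula(10))  # the first ten squares sum to 385
```

Not a proof, of course, but it beats 40 minutes of scratch paper for building confidence.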

And then there’s magic squares, and magic circles. He seems to have found them, as professional mathematicians today would, good ways to interest people in calculation. Not magic; he called them something like number diagrams. But he gives magic squares from three-by-three all the way to ten-by-ten. We don’t know of earlier examples of Chinese mathematicians writing about the larger magic squares. But Yang Hui doesn’t claim to be presenting new work. He also gives magic circles. The simplest is a web of seven intersecting circles, each with four numbers along the circle and one at its center. The sum of the center and the circumference numbers is 65 for all seven circles. Is this significant? No; merely fun.
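The “not magic, just arithmetic” point is easy to make concrete. Here’s a check, in Python, of the classic Chinese three-by-three square (the Lo Shu arrangement — I don’t have Yang Hui’s own squares in front of me, but the smallest case is traditional):

```python
# Check a three-by-three magic square: every row, column,
# and diagonal should sum to the same "magic constant", 15.
square = [
    [4, 9, 2],
    [3, 5, 7],
    [8, 1, 6],
]

n = len(square)
sums = []
sums += [sum(row) for row in square]                              # rows
sums += [sum(square[r][c] for r in range(n)) for c in range(n)]   # columns
sums.append(sum(square[i][i] for i in range(n)))                  # main diagonal
sums.append(sum(square[i][n - 1 - i] for i in range(n)))          # anti-diagonal

print(set(sums))  # a single value, {15}, if the square is magic
```

The same loop, with the constant changed, checks any of the larger squares too.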

Grant this breadth of work. Is he significant? I learned this year that familiar names might have been obscure until quite recently. The record is once again ambiguous. Other mathematicians wrote about Yang Hui’s work in the early 1300s. Yang Hui’s works were printed in China in 1378, says the Complete Dictionary of Scientific Biography, and reprinted in Korea in 1433. They’re listed in a 1441 catalogue of the Ming Imperial Library. Seki Takakazu, a towering figure in 17th century Japanese mathematics, copied the Korean text by hand. Yet Yang Hui’s work seems to have been lost by the 18th century. Reconstructions, from commentaries and encyclopedias, started in the 19th century. But we don’t have everything we know he wrote. We don’t even have a complete text of Detailed Analysis. This is not to say he wasn’t influential. All I can say is there seems to have been a time his influence was indirect.

Mr Wu, author of the Singapore Maths Tuition blog, had an interesting suggestion for the letter T: Talent. As in mathematical talent. It’s a fine topic but, in the end, too far beyond my skills. I could share some of the legends about mathematical talent I’ve received. But what that says about the culture of mathematicians is a deeper and more important question.

So I picked my own topic for the week. I do have topics for next week — U — and the week after — V — chosen. But the letters W and X? I’m still open to suggestions. I’m open to creative or wild-card interpretations of the letters. Especially for X and (soon) Z. Thanks for sharing any thoughts you care to.

Tiling.

Think of a floor. Imagine you are bored. What do you notice?

What I hope you notice is that it is covered. Perhaps by carpet, or concrete, or something homogeneous like that. Let’s ignore that. My floor is covered in small pieces, repeated. My dining room floor is slats of wood, about three and a half feet long and two inches wide. The slats are offset from their neighbors so there’s a pleasant strong line in one direction and stippled lines in the other. The kitchen is squares, one foot on each side. This is a grid we could plot high school algebra functions on. The bathroom is more elaborate. It has white rectangles about two inches long, tan rectangles about two inches long, and black squares. Each rectangle is perpendicular to the ones of the other color, and arranged to bisect those. The black squares fill the gaps where no rectangle would fit.

Move from my house to pure mathematics. It’s easy to turn the floor of a room into abstract mathematics. We start with something to tile. Usually this is the infinite, two-dimensional plane. The thing you get if you have a house and forget the walls. Sometimes we look to tile the hyperbolic plane, a different geometry that we of course represent with a finite circle. (Setting particular rules about how to measure distance makes this equivalent to a funny-shaped plane.) Or the surface of a sphere, or of a torus, or something like that. But if we don’t say otherwise, it’s the plane.

What to cover it with? … Smaller shapes. We have a mathematical tiling if we have a collection of not-overlapping open sets. And if those open sets, plus their boundaries, cover the whole plane. “Cover” here means what “cover” means in English, only using more technical words. These sets — these tiles — can be any shape. We can have as many or as few of them as we like. We can even add markings to the tiles, give them colors or patterns or such, to add variety to the puzzles.

(And if we want, we can do this in other dimensions. There are good “tiling” questions to ask about how to fill a three-dimensional space, or a four-dimensional one, or more.)

Having an unlimited collection of tiles is nice. But mathematicians learn to look for how little we need to do something. Here, we look for the smallest number of distinct shapes. As with tiling an actual floor, we can get all the tiles we need. We can rotate them, too, to any angle. We can flip them over and put the “top” side “down”, something kitchen tiles won’t let us do. Can we reflect them? Use the shape we’d get looking at the mirror image of one? That’s up to whoever’s writing this paper.

What shapes will work? Well, squares, for one. We can prove that by looking at a sheet of graph paper. Rectangles would work too. We can see that by drawing boxes around the squares on our graph paper. Two-by-one blocks, three-by-two blocks, 40-by-1 blocks, these all still cover the paper and we can imagine covering the plane. If we like, we can draw two-by-two squares. Squares made up of smaller squares. Or repeat this: draw two-by-one rectangles, and then group two of these rectangles together to make two-by-two squares.

We can take it on faith that, oh, rectangles π long by e wide would cover the plane too. These can all line up in rows and columns, the way our squares would. Or we can stagger them, like bricks or my dining room’s wood slats are.

How about parallelograms? Those, it turns out, tile exactly as well as rectangles or squares do. Grids or staggered, too. Ah, but how about trapezoids? Surely they won’t tile anything. Not generally, anyway. The slanted sides will, most of the time, only fit in weird winding circle-like paths.

Unless … take two of these trapezoid tiles. We’ll set them down so the parallel sides run horizontally in front of you. Rotate one of them, though, 180 degrees. And try setting them — let’s say so the longer slanted line of both trapezoids meet, edge to edge. These two trapezoids come together. They make a parallelogram, although one with a slash through it. And we can tile parallelograms, whether or not they have a slash.
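We can check this pairing-up with coordinates. A sketch in Python (the trapezoid’s vertices are my own arbitrary choice): rotate a trapezoid 180 degrees about the midpoint of one slanted side, and confirm the combined outline is a parallelogram:

```python
# A trapezoid with parallel horizontal sides: A-B (long) and D-C (short).
A, B, C, D = (0, 0), (5, 0), (3, 2), (1, 2)

def rotate_180(p, m):
    """Rotate point p by 180 degrees about point m."""
    return (2 * m[0] - p[0], 2 * m[1] - p[1])

# Midpoint of the slanted side B-C, where the two copies meet edge to edge.
mid = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)

# The rotated copy shares edge B-C; only A and D land somewhere new.
A2 = rotate_180(A, mid)   # lands at (8.0, 2.0)
D2 = rotate_180(D, mid)   # lands at (7.0, 0.0)

def vec(p, q):
    return (q[0] - p[0], q[1] - p[1])

# Outline of the union: bottom runs A -> B -> D2, top runs D -> C -> A2.
# Opposite sides are parallel and equal, so it's a parallelogram.
assert vec(A, D2) == vec(D, A2)   # the two long sides match
assert vec(A, D) == vec(D2, A2)   # the two slanted sides match
print("parallelogram sides:", vec(A, D2), vec(A, D))
```

Nothing here depends on this particular trapezoid; any one works, which is the point of the construction.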

OK, but if you draw some weird quadrilateral shape, and it’s not anything that has a more specific name than “quadrilateral”? That won’t tile the plane, will it?

It will! In one of those turns that surprises and impresses me every time I run across it again, any quadrilateral can tile the plane. It opens up so many home decorating options, if you get in good with a tile maker.

That’s some good news for quadrilateral tiles. How about other shapes? Triangles, for example? Well, that’s good news too. Take two of any identical triangle you like. Turn one of them around and match sides of the same length. The two triangles, bundled together like that, are a quadrilateral. And we can use any quadrilateral to tile the plane, so we’re done.

How about pentagons? … With pentagons, the easy times stop. It turns out not every pentagon will tile the plane. The pentagon has to be of the right kind to make it fit. If the pentagon is one of these kinds, it can tile the plane. If not, not. There are fifteen families of convex pentagons known to tile the plane. The most recent family was discovered in 2015. It’s thought that there are no other convex pentagon tilings. I don’t know whether the proof of that is generally accepted in tiling circles. And we can do more tilings if the pentagon doesn’t need to be convex. For example, we can cut any parallelogram into two identical pentagons. So we can make as many pentagons as we want to cover the plane. But we can’t assume any pentagon we like will do it.

And there the good times end. There are no convex heptagons or octagons or any other shape with more sides that tile the plane.

Not by themselves, anyway. If we have more than one tile shape we can start doing fine things again. Octagons assisted by squares, for example, will tile the plane. I’ve lived places with that tiling. Or something that looks like it. It’s easier to install if you have square tiles with an octagon pattern making up the center, and triangle corners a different color. These squares come together to look like octagons and squares.

And this leads to a fun avenue of tiling. Hao Wang, in the early 60s, proposed a sort of domino-like tiling. You may have seen these in mathematics puzzles, or in toys. Each of these Wang Tiles, or Wang Dominoes, is a square. But the square is cut along the diagonals, into four quadrants. Each quadrant is a right triangle. Each quadrant, each triangle, is one of a finite set of colors. Adjacent triangles can have the same color. You can place down tiles, subject only to the rule that the tile edge has to have the same color on both sides. So a tile with a blue right-quadrant has to have on its right a tile with a blue left-quadrant. A tile with a white upper-quadrant on its top has, above it, a tile with a white lower-quadrant.
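The matching rule is simple enough to state in code. Here’s a small sketch (the tiles and color names are mine, just for illustration) that checks whether a grid of Wang tiles is legally placed:

```python
from collections import namedtuple

# A Wang tile: one color per edge quadrant.
Tile = namedtuple("Tile", ["top", "right", "bottom", "left"])

def valid_placement(grid):
    """True if every pair of adjacent tiles matches colors along their
    shared edge -- the only rule Wang tiles impose."""
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols and grid[r][c].right != grid[r][c + 1].left:
                return False
            if r + 1 < rows and grid[r][c].bottom != grid[r + 1][c].top:
                return False
    return True

# Two made-up tiles that happen to fit in alternating columns.
a = Tile(top="white", right="blue", bottom="white", left="red")
b = Tile(top="white", right="red", bottom="white", left="blue")

print(valid_placement([[a, b, a], [a, b, a]]))  # True: every seam matches
print(valid_placement([[a, a]]))                # False: blue meets red
```

These two invented tiles tile the plane periodically, of course; the interesting sets are the ones where no periodic arrangement passes this check.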

In 1961 Wang conjectured that if a finite set of these tiles will tile the plane, then there must be a periodic tiling. That is, if you picked up the plane and slid it a set horizontal and vertical distance, it would all look the same again. This sort of translation is common. All my floors do that. If we ignore things like the bounds of their rooms, or the flaws in their manufacture or installation or where a tile broke in some mishap.

This is not to say you couldn’t arrange them aperiodically. You don’t even need Wang Tiles for that. Get two colors of square tile, a white and a black, and lay them down based on whether the next decimal digit of π is odd or even. No; Wang’s conjecture was that if you had tiles that you could lay down aperiodically, then you could also arrange them to set down periodically. With the black and white squares, lay down alternate colors. That’s easy.

In 1964, Robert Berger proved Wang’s conjecture was false. He found a collection of Wang Tiles that could only tile the plane aperiodically. In 1966 he published this in the Memoirs of the American Mathematical Society. The 1964 proof was for his thesis. 1966 was its general publication. I mention this because while doing research I got irritated at how different sources dated this to 1964, 1966, or sometimes 1961. I want to have this straightened out. It appears Berger had the proof in 1964 and the publication in 1966.

I would like to share details of Berger’s proof, but haven’t got access to the paper. What fascinates me about this is that Berger’s proof used a set of 20,426 different tiles. I assume he did not work this all out with shards of construction paper, but then, how to get 20,426 of anything? With computer time as expensive as it was in 1964? The mystery of how he got all these tiles is worth an essay of its own, and I regret I can’t write it.

Berger conjectured that a smaller set might do. Quite so. He himself reduced the set to 104 tiles. Donald Knuth in 1968 modified the set down to 92 tiles. In 2015 Emmanuel Jeandel and Michael Rao published a set of 11 tiles, using four colors. And showed by computer search that a smaller set of tiles, or fewer colors, would not force some aperiodic tiling to exist. I do not know whether there might be other sets of 11, four-colored, tiles that work. You can see the set at the top of Wikipedia’s page on Wang Tiles.

These Wang Tiles, all squares, inspired variant questions. Could there be other shapes that only aperiodically tile the plane? What if they don’t have to be squares? Raphael Robinson, in 1971, came up with a tiling using six shapes. The shapes have patterns on them too, usually represented as colored lines. Tiles can be put down only in ways that fit and that make the lines match up.

Among my readers are people who have been waiting, for 1800 words now, for Roger Penrose. It’s now that time. In 1974 Penrose published an aperiodic tiling, one based on pentagons and using a set of six tiles. You’ve never heard of that either, because soon after he found a different set, based on a quadrilateral cut into two shapes. The shapes, as with Wang Tiles or Robinson’s tiling, have rules about what edges may be put against each other. Penrose — and independently Robert Ammann — also developed another set, this based on a pair of rhombuses. These have rules about what edges may touch one another, and have patterns on them which must line up.

To show that the rhombus-based Penrose tiling is aperiodic takes some arguing. But it uses tools already used in this essay. Remember drawing rectangles around several squares? And then drawing squares around several of these rectangles? We can do that with these Penrose-Ammann rhombuses. From the rhombus tiling we can draw bigger rhombuses. Ones which, it turns out, follow the same edge rules that the originals do. So that we can go again, grouping these bigger rhombuses into even-bigger rhombuses. And into even-even-bigger rhombuses. And so on.

What this gets us is this: suppose the rhombus tiling is periodic. Then there’s some finite-distance horizontal-and-vertical move that leaves the pattern unchanged. So, the same finite-distance move has to leave the bigger-rhombus pattern unchanged. And this same finite-distance move has to leave the even-bigger-rhombus pattern unchanged. Also the even-even-bigger pattern unchanged.

Keep bundling rhombuses together. You get eventually-big-enough-rhombuses. Now, think of how far you have to move the tiles to get a repeat pattern. Especially, think how many eventually-big-enough-rhombuses it is. This distance, the move you have to make, is less than one eventually-big-enough rhombus. (If it’s not, you aren’t eventually-big-enough yet. Bundle them together again.) And that doesn’t work. Moving one tile over without changing the pattern makes sense. Moving one-half a tile over? That doesn’t. So the eventually-big-enough pattern can’t be periodic, and so, the original pattern can’t be either. This is explained in graphic detail in a nice PowerPoint slide set from Professor Alexander F Ritter, A Tour Of Tilings In Thirty Minutes.

It’s possible to do better. In 2010 Joshua E S Socolar and Joan M Taylor published a single tile that can force an aperiodic tiling. As with the Wang Tiles, and Robinson shapes, and the Penrose-Ammann rhombuses, markings are part of it. They have to line up so that the markings — in two colors, in the renditions I’ve seen — make sense. With the Penrose tilings, you can get away from the pattern rules for the edges by replacing them with little notches. The Socolar-Taylor shape can make a similar trade. Here the rules are complex enough that it would need to be a three-dimensional shape, one that looks like the dilithium housing of the warp core. You can see the tile — in colored, marked form, and also in three-dimensional tile shape — at the PDF here. It’s likely not coming to the flooring store soon.

It’s all wonderful, but is it useful? I could go on a few hundred words about, particularly, crystals and quasicrystals. These are important for materials science. Especially these days as we have harnessed slightly-imperfect crystals to be our computers. I don’t care. These are lovely to look at. If you see nothing appealing in a great heap of colors and polygons spread over the floor there are things we cannot communicate about. Tiling is a delight; what more do you need?

Part of why I write these essays is to save future time. If I have an essay explaining some complex idea, then in the future, I can use a link and a short recap of the central idea. There are some essays that have been perennials. I think I’ve linked to polynomials more than anything else on this site. And then some disappear, even though they seem to be about good useful subjects. Riemann sphere, from the Leap Day 2016 sequence, is one of those disappeared topics. This is one of the ways to convert between “shapes on the plane” and “shapes on the sphere”. There’s no way to perfectly move something from the plane to the sphere, or vice-versa. The Riemann Sphere is an approach which preserves the interior angles. If two lines on the plane intersect at a 25 degree angle, their representation on the sphere will intersect at a 25 degree angle. But everything else may get strange.

I’m happy to have a subject from Elke Stangl, author of elkemental Force. That’s a fun and wide-ranging blog which, among other things, just published a poem about proofs. You might enjoy.

Quadratic Form.

One delight, and sometimes deadline frustration, of these essays is discovering things I had not thought about. Researching quadratic forms invited the obvious question of what is a form? And that goes undefined on, for example, Mathworld. Also in the textbooks I’ve kept. Even ones you’d think would mention it, like R W R Darling’s Differential Forms and Connections, or Frigyes Riesz and Béla Sz-Nagy’s Functional Analysis. Reluctantly I started thinking about what we talk about when discussing forms.

Quadratic forms offer some hints. These take a vector in some n-dimensional space, and return a scalar. Linear forms, and cubic forms, do the same. The pattern suggests a form is a mapping from a space like $\mathbb{R}^n$ to $\mathbb{R}$, or maybe from $\mathbb{C}^n$ to $\mathbb{C}$. That looks good, but then we have to ask: isn’t that just an operator? Also: then what about differential forms? Or volume forms? These are about how to fill space. There’s nothing scalar in that. But maybe these are both called forms because they fill similar roles. They might have as little to do with one another as red pandas and giant pandas do.

Enlightenment comes after much consideration, or happening on Wikipedia’s page about homogeneous polynomials. That offers “an algebraic form, or simply form, is a function defined by a homogeneous polynomial”. That satisfies. First, because it gets us back to polynomials. Second, because all the forms I could think of do have rules based in homogeneous polynomials. They might be peculiar polynomials. Volume forms, for example, have a polynomial in wedge products of differentials. But it counts.

A function’s homogeneous if it scales a particular way. Evaluate it at some set of coordinates x, y, z (more variables if you need). That’s some number; call it f(x, y, z). Take all those coordinates and multiply them by the same constant; let me call that α. Evaluate the function at αx, αy, αz (α times however many variables you need). Then that value is α^{k} times the original value of f(x, y, z). Here k is some constant. It depends on the function, but not on what x, y, z (more) are.

For a quadratic form, this constant k equals 2. This is because in the quadratic form, all the terms in the polynomial are of the second degree. So, for example, $x^2 + y^2$ is a quadratic form. So is $xy$; the x times the y brings this to a second degree. Also a quadratic form is $x^2 + 4xy + y^2$. So is $2x^2 - 3xy + 5y^2$.
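That scaling rule is easy to test numerically. A sketch in Python, using a representative quadratic form, $Q(x, y) = x^2 + 4xy + y^2$ (my own choice of coefficients; any second-degree terms behave the same):

```python
# A quadratic form is homogeneous of degree 2:
# Q(a*x, a*y) should equal a^2 * Q(x, y) for any scalar a.
def Q(x, y):
    return x**2 + 4 * x * y + y**2  # sample quadratic form

x, y = 3.0, -2.0
for a in (0.5, 2.0, 10.0):
    scaled = Q(a * x, a * y)
    predicted = a**2 * Q(x, y)
    assert abs(scaled - predicted) < 1e-9
    print(a, scaled, predicted)  # the two columns agree
```

Swap in a cubic form and the exponent 2 becomes 3; that exponent is the k from the scaling rule.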

This can have many variables. If we have a lot, we have a couple choices. One is to start using subscripts, and to write the form something like:

$Q = \sum_{i = 1}^{n} \sum_{j = 1}^{n} a_{i, j} x_i x_j$

This is respectable enough. People who do a lot of differential geometry get used to a shortcut, the Einstein Summation Convention. In that, we take as implicit the summation instructions. So they’d write the more compact $a_{ij} x^i x^j$. Those of us who don’t do a lot of differential geometry think that looks funny. And we have more familiar ways to write things down. Like, we can put the collection of variables into an ordered n-tuple. Call it the vector $\vec{x}$. If we then think to put the numbers $a_{i, j}$ into a square matrix A we have a great way of writing things. We have to manipulate the $a_{i, j}$ a little to make the matrix, but it’s nothing complicated. Once that’s done we can write the quadratic form as:

$Q = \vec{x}^T A \vec{x}$

This uses matrix multiplication. The vector $\vec{x}$ we assume is a column vector, a bunch of rows one column across. Then we have to take its transposition, one row a bunch of columns across, to make the matrix multiplication work out. If we don’t like that notation with its annoying superscripts? We can declare the bare ‘x’ to mean the vector, and use inner products:

$Q = \langle x, Ax \rangle$
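Here’s what that bookkeeping looks like in code: a sketch, in plain Python, of evaluating the matrix expression and checking it against the polynomial written out longhand (the particular form is my own example):

```python
def quad_form(A, x):
    """Evaluate x^T A x for a square matrix A and vector x."""
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# The polynomial x^2 + 4xy + y^2 as a symmetric matrix:
# split the cross term's coefficient evenly between the two
# off-diagonal entries.
A = [[1, 2],
     [2, 1]]

for x, y in [(1, 0), (0, 1), (3, -2), (1.5, 2.5)]:
    longhand = x**2 + 4 * x * y + y**2
    assert abs(quad_form(A, (x, y)) - longhand) < 1e-9

print(quad_form(A, (3, -2)))  # -11, same as 9 - 24 + 4
```

That even splitting of the cross term is the little bit of manipulation needed to make the matrix symmetric.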

This is easier to type at least. But what does it get us?

Looking at some quadratic forms may give us an idea. $x^2 + y^2$ practically begs to be matched to an $= r^2$, and the name “the equation of a circle”. $x^2 - y^2$ is less familiar, but to the crowd reading this, not much less familiar. Fill that out to $x^2 - y^2 = 4$ and we have a hyperbola. If we have $x^2 + 2y^2$ and let that equal some positive number then we have an ellipse, something a bit wider than it is tall. Similarly $x^2 - 2y^2$ is a hyperbola still, just anamorphic.

If we expand into three variables we start to see spheres: $x^2 + y^2 + z^2$ just begs to equal $r^2$. Or ellipsoids: $x^2 + 2y^2 + 2z^2$, set equal to some (positive) number, is something we might get from rolling out clay. Or hyperboloids: $x^2 + y^2 - z^2$ or $x^2 - y^2 - z^2$, set equal to some number, give us nice shapes. (We can also get cylinders: $x^2 + y^2$ equalling some positive number describes a tube.)

How about $x^2 + xy + y^2$? This also wants to be an ellipse. $x^2 + xy + y^2 = 6$, to pick an easy number, is a rotated ellipse. The long axis is along the line described by $y = -x$. The short axis is along the line described by $y = x$. How about — let me make this easy. $xy$? The equation $xy = 4$ describes a hyperbola, but a rotated one, with the x- and y-axes as its asymptotes.

Do you want to take any guesses about three-dimensional shapes? Like, what $x^2 + xy + y^2 + 4z^2 = 1$ might represent? If you’re thinking “ellipsoid, only it’s at an angle” you’re doing well. It runs really long in one direction, along the plane described by $y = -x$. It runs medium-size along the plane described by $y = x$. It runs pretty short along the z-axis. We could run some more complicated shapes. Ellipses pointing in weird directions. Hyperboloids of different shapes. They’ll have things in common.

One is that they have obviously important axes. Axes of symmetry, particularly. There’ll be one for each dimension of space. An ellipse has a long axis and a short axis. An ellipsoid has a long, a middle, and a short. (It might be that two of these have the same length. If all three have the same length, you have a sphere, my friend.) A hyperbola, similarly, has two axes of symmetry. One of them runs between the two branches of the hyperbola. The other slices through the two branches, through the points where the two branches come closest together. Hyperboloids, in three dimensions, have three axes of symmetry. One of them connects the points where the two branches of the hyperboloid come closest together. The other two run perpendicular to that.

We can go on imagining more dimensions of space. We don’t need them. The important things are already there. There are, for these shapes, some preferred directions. The ones around which these quadratic-form shapes have symmetries. These directions are perpendicular to each other. These preferred directions are important. We call them “eigenvectors”, a partly-German name.
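We can see those preferred directions numerically. A sketch, assuming the rotated-ellipse form $x^2 + xy + y^2$ from before (my pick; any form with a cross term behaves similarly): rotating coordinates 45 degrees, onto the $y = x$ and $y = -x$ directions, makes the cross term vanish and puts the axes in plain sight:

```python
import math

def Q(x, y):
    return x**2 + x * y + y**2  # a rotated ellipse's quadratic form

def from_uv(u, v):
    """Rotate coordinates 45 degrees: u runs along y = x, v along y = -x."""
    s = 1 / math.sqrt(2)
    return (s * (u - v), s * (u + v))

# In the rotated coordinates the form diagonalizes:
# Q = (3/2) u^2 + (1/2) v^2, with no cross term left.
for u, v in [(1, 0), (0, 1), (2, -3), (0.5, 4)]:
    x, y = from_uv(u, v)
    assert abs(Q(x, y) - (1.5 * u**2 + 0.5 * v**2)) < 1e-9

# The smaller coefficient, 1/2, sits on the v direction (along y = -x):
# that's the long axis of the ellipse x^2 + xy + y^2 = constant.
print("diagonal coefficients:", 1.5, 0.5)
```

Those two directions are exactly the eigenvectors of the form’s matrix, and the coefficients 3/2 and 1/2 are its eigenvalues.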

Eigenvectors are great for a bunch of purposes. One is that if the matrix A represents a problem you’re interested in? The eigenvectors are probably a great basis to solve problems in it. This is a change of basis vectors, which is the same work as doing a rotation. And I’m happy to report this change of coordinates doesn’t mess up the problem any. We can rewrite the problem to be easier.

And, roughly, any time we look at reflections in a Euclidean space, there’s a quadratic form lurking around. This leads us into interesting places. Looking at reflections encourages us to see abstract algebra, to see groups. That space can be rotated in infinitesimally small pieces gets us a structure named a Lie (pronounced ‘lee’) Algebra. Quadratic forms give us a way of classifying those.

Quadratic forms work in number theory also. There’s a neat theorem, the 15 Theorem. If a quadratic form, with integer coefficients, can produce all the integers from 1 through 15, then it can produce all the positive integers. For example, $x^2 + y^2 + z^2 + w^2$ can, for sets of integers x, y, z, and w, add up to any positive integer you like. (It’s not guaranteed this will happen. $x^2 + 2y^2 + 5z^2 + 5w^2$ can’t produce 15.) We know of at least 54 combinations which generate all the positive integers, like $x^2 + y^2 + z^2 + 2w^2$ and $x^2 + 2y^2 + 3z^2 + 5w^2$ and such.
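Brute force makes that counterexample easy to check. A sketch in Python: search for ways to write each number from 1 through 15 in each of two forms, the four-squares form and the one that misses 15:

```python
from itertools import product

def representable(target, coeffs):
    """Can a*x^2 + b*y^2 + c*z^2 + d*w^2 equal target, for integers
    x, y, z, w? Squares are nonnegative, so small values suffice."""
    bound = int(target**0.5) + 1
    return any(
        sum(c * v * v for c, v in zip(coeffs, vals)) == target
        for vals in product(range(bound + 1), repeat=4)
    )

four_squares = (1, 1, 1, 1)   # x^2 + y^2 + z^2 + w^2
misses_15 = (1, 2, 5, 5)      # x^2 + 2y^2 + 5z^2 + 5w^2

hits = [n for n in range(1, 16) if representable(n, four_squares)]
print(hits == list(range(1, 16)))    # True: all of 1 through 15
print(representable(15, misses_15))  # False: 15 is out of reach
```

The four-squares form clears the 1-through-15 hurdle, so the theorem promises it reaches everything; the second form fails at exactly 15 and so escapes the theorem’s guarantee.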

There’s more, of course. There always is. I spent time skimming Quadratic Forms and their Applications, Proceedings of the Conference on Quadratic Forms and their Applications. It was held at University College Dublin in July of 1999. It’s some impressive work. I can think of very little that I can describe. Even Winfried Scharlau’s On the History of the Algebraic Theory of Quadratic Forms, from page 229, is tough going. Ina Kersten’s Biography of Ernst Witt, one of the major influences on quadratic forms, is accessible. I’m not sure how much of the particular work communicates.

It’s easy at least to know what things this field is about, though. The things that we calculate. That they connect to novel and abstract places shows how close together arithmetic and dynamical systems and topology and group theory and number theory are, despite appearances.

And in last year’s A-to-Z I published one of those essays already becoming a favorite. I haven’t had much chance to link back to it. So let me fix that. My 2019 Mathematics A To Z: Platonic focuses on the Platonic Solids, and questions like why we might find them interesting. Also, what Platonic solids look like in spaces of other than three dimensions. Three-dimensional space has five Platonic solids. There are six Platonic Solids in four dimensions. How many would you expect in a five-dimensional space? Or a ten-dimensional one? The answer may surprise you!

Jacob Siehler suggested this topic. I had to check several times that I hadn’t written an essay about the Möbius strip already. While I have talked about it some, mostly in comic strip essays, this is a chance to specialize on the shape in a way I haven’t before.

Möbius Strip.

I have ridden at least 252 different roller coasters. These represent nearly every type of roller coaster made today, and most of the types that were ever made. One type, common in the 1920s and again since the 70s, is the racing coaster. This is two roller coasters, dispatched at the same time, following tracks that are as symmetric as the terrain allows. Want to win the race? Be in the train with the heavier passenger load. The difference in the time each train takes amounts to losses from friction, and the lighter train will lose a bit more of its speed.

There are three special wooden racing coasters. These are Racer at Kennywood Amusement Park (Pittsburgh), Grand National at Blackpool Pleasure Beach (Blackpool, England), and Montaña Rusa at La Feria Chapultepec Magico (Mexico City). I’ve been able to ride them all. When you get into the train going up, say, the left lift hill, you return to the station in the train that will go up the right lift hill. These racing roller coasters have only one track. The track twists around itself and becomes a Möbius strip.

This is a fun use of the Möbius strip. The shape is one of the few bits of advanced mathematics to escape into pop culture. Maybe dominates it, in a way nothing but the blackboard full of calculus equations does. In 1958 the public intellectual and game show host Clifton Fadiman published the anthology Fantasia Mathematica. It’s all essays and stories and poems with some mathematical element. I no longer remember how many of the pieces were about the Möbius strip one way or another. The collection does include A J Deutsch’s classic A Subway Named Möbius. In this story the Boston subway system achieves hyperdimensional complexity. It does not become a Möbius strip, though, in that story. It might be one in reality anyway.

The Möbius strip we name for August Ferdinand Möbius, who in 1858 was the second person known to have noticed the shape’s curious properties. The first — to notice, in 1858, and to publish, in 1862 — was Johann Benedict Listing. Listing seems to have coined the term “topology” for the field that the Möbius strip would be emblem for. He wrote one of the first texts on the field. He also seems to have coined terms like “entrophic phenomena” and “nodal points” and “geoid” and “micron”, for a millionth of a meter. It’s hard to say why we don’t talk about Listing strips instead. Mathematical fame is a strange, unpredictable creature. There is a topological invariant, the Listing Number, named for him. And he’s known to ophthalmologists for Listing’s Law, which describes how human eyes orient themselves.

The Möbius strip is an easy thing to construct. Loop a ribbon back to itself, with an odd number of half-twists before you fasten the ends together. Anyone could do it. So it seems curious that for all recorded history nobody thought to try. Not until 1858, when Listing and then Möbius hit on the same idea.

An irresistible thing, while riding these roller coasters, is to try to find the spot where you “switch”, where you go from being on the left track to the right. You can’t. The track is — well, the track is a series of metal straps bolted to a base of wood. (The base the straps are bolted to is what makes it a wooden roller coaster. The great lattice holding the tracks above ground has nothing to do with it.) But the path of the tracks is a continuous whole. To split it requires the same arbitrariness with which mapmakers pick a prime meridian. It’s obvious that the “longitude” of a cylinder or a rubber ball is arbitrary. It’s not obvious that roller coaster tracks should have the same property. Until you draw the shape in that ∞-loop figure we always see. Then you can get lost imagining a walk along the surface.

And it’s not true that nobody thought to try this shape before 1858. Julyan H E Cartwright and Diego L González wrote a paper searching for pre-Möbius strips. They find some examples. To my eye not enough examples to support their abstract’s claim of “lots of them”, but I trust they did not list every example. One example is a Roman mosaic showing Aion, the God of Time, Eternity, and the Zodiac. He holds a zodiac ring that is either a Möbius strip or cylinder with artistic errors. Cartwright and González are convinced. I’m reminded of a Looks Good On Paper comic strip that forgot to include the needed half-twist.

Islamic science gives us a more compelling example. We have a book by Ismail al-Jazari dated 1206, The Book of Knowledge of Ingenious Mechanical Devices. Some manuscripts of it illustrate a chain pump, with the chain arranged as a Möbius strip. Cartwright and González also note discussions in Scientific American, and other engineering publications in the United States, about drive and conveyor belts with the Möbius strip topology. None of those predate Listing or Möbius, or apparently credit either. And they do come quite soon after. It’s surprising something might leap from abstract mathematics to Yankee ingenuity that fast.

If it did. It’s not hard to explain why mechanical belts didn’t consider Möbius strip shapes before the late 19th century. Their advantage is that the wear of the belt distributes over twice the surface area, the “inside” and “outside”. A leather belt has a smooth and a rough side. Many other things you might make a belt from have a similar asymmetry. By the late 19th century you could make a belt of rubber. Its grip and flexibility and smoothness are uniform on all sides. “Balancing” the use suddenly could have a point.

I still find it curious almost no one drew or speculated about or played with these shapes until, practically, yesterday. The shape doesn’t seem far away from a trefoil knot. The recycling symbol, three folded-over arrows, suggests a Möbius strip. The strip evokes the ∞ symbol, although that symbol was not attached to the concept of “infinity” until John Wallis put it forth in 1655.

Even with the shape now familiar, and loved, there are curious gaps. Consider game design. If you play on a board that represents space you need to do something with the boundaries. The easiest is to make the boundaries the edges of playable space. The game designer has choices, though. If a piece moves off the board to the right, why not have it reappear on the left? (And, going off to the left, reappear on the right.) This is fine. It gives the game board, a finite rectangle, the topology of a cylinder. If this isn’t enough? Have pieces that go off the top edge reappear at the bottom, and vice-versa. Doing this, along with matching the left to the right boundaries, makes the game board a torus, a doughnut shape.

A Möbius strip is easy enough to code. Make the top and bottom impenetrable borders. And match the left to the right edges this way: a piece going off the board at the upper half of the right edge reappears at the lower half of the left edge. Going off the lower half of the right edge brings the piece to the upper half of the left edge. And so on. It isn’t hard, but I’m not aware of any game — board or computer — that uses this space. Maybe there’s a backgammon variant which does.
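If you wanted to try it, the wrap-around rule takes only a few lines. Here’s a minimal sketch in Python; the function name and board conventions are my own invention, not from any existing game:

```python
def wrap_mobius(x, y, width, height):
    """Map a coordinate that may have stepped off the board back onto it.

    The top and bottom edges are impenetrable walls.  The left and right
    edges are glued together with a vertical flip, which gives the board
    the topology of a Mobius strip.  Returns None if the move runs into
    the top or bottom wall.
    """
    if y < 0 or y >= height:
        return None                      # blocked by the top or bottom edge
    if x < 0:                            # off the left: reappear on the right,
        return (x + width, height - 1 - y)   # flipped top-to-bottom
    if x >= width:                       # off the right: reappear on the left,
        return (x - width, height - 1 - y)   # flipped top-to-bottom
    return (x, y)                        # still on the board; nothing to do
```

On an 8-by-8 board, a piece stepping off the right edge from near the top reappears on the left edge near the bottom, just as described above: `wrap_mobius(8, 1, 8, 8)` gives `(0, 6)`.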

Still, the strip defies our intuition. It has one face and one edge. To reflect a shape across the width of the strip is the same as sliding a shape along its length. Cutting the strip down the center unfurls it into a cylinder. Cutting the strip down, one-third of the way from the edge, divides it into two pieces, a skinnier Möbius strip plus a cylinder. If we could extract the edge we could tug and stretch it until it was a circle.

And it primes our intuition. Once we understand there can be shapes lacking sides we can look for more. Anyone likely to read a pop mathematics blog about the Möbius strip has heard of the Klein bottle. This is a surface that folds back on itself, something it can only do by passing through a fourth dimension of space. The shape is a jug with no inside, or with nothing but inside. Three-dimensional renditions of this get suggested as gifts to mathematicians. This for your mathematician friend who’s already got a Möbius scarf.

Though a Möbius strip looks — at any one spot — like a plane, the four-color map theorem doesn’t hold for it. Even the five-color theorem won’t do. You need six colors to cover maps on such a strip. A checkerboard drawn on a Möbius strip can be completely covered by T-shape pentominoes or Tetris pieces. You can’t do this for a checkerboard on the plane. In the mathematics of music theory the organization of dyads — two-tone “chords” — has the structure of a Möbius strip. I do not know music theory or the history of music theory. I’m curious whether Möbius strips might have been recognized by musicians before the mathematicians caught on.

And they inspire some practical inventions. Mechanical belts are obvious, although I don’t know how often they’re used. More clever are designs for resistors that have no self-inductance. They can resist electric flow without causing magnetic interference. I can look up the patents; I can’t swear to how often these are actually used. There exist — there are made — Möbius aromatic compounds. These are organic compounds with rings of carbon and hydrogen. I do not know a use for these. That they’ve only been synthesized this century, rather than found in nature, suggests they are more neat than practical.

Perhaps this shape is most useful as a path into a particular type of topology, and for its considerable artistry. And, with its “late” discovery, a reminder that we do not yet know all that is obvious. That is enough for anything.

There are three steel roller coasters with a Möbius strip track. That is, the metal rail on which the coaster runs is itself braced directly by metal. One of these is in France, one in Italy, and one in Iran. One in Liaoning, China has been under construction for five years. I can’t say when it might open. I have yet to ride any of them.

This is a slight thing that crossed my reading yesterday. You might enjoy it. The question is a silly one: what’s the “optimal” way to slice banana onto a peanut-butter-and-banana sandwich?

Here’s Ethan Rosenthal’s answer. The specific problem this is put to is silly. The optimal peanut butter and banana sandwich is the one that satisfies your desire for a peanut butter and banana sandwich. However, the approach to the problem demonstrates good mathematics, and numerical mathematics, practices. Particularly it demonstrates defining just what your problem is, and what you mean by “optimal”, and how you can test that. And then developing a numerical model which can optimize it.

And the specific question, how much of the sandwich can you cover with banana slices, is one of actual interest. A good number of ideas in analysis involve thinking of cover sets: what is the smallest collection of these things which will completely cover this other thing? Concepts like this give us an idea of how to define area, also, as the smallest number of standard reference shapes which will cover the thing we’re interested in. The basic problem is practical too: if we wish to provide something, and have units like this which can cover some area, how can we arrange them so as to miss as little as possible? Or use as few of the units as possible?
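To give a taste of the “define your objective, then measure it” practice, here’s a toy sketch, mine and not Rosenthal’s, that estimates how much of a rectangular slice of bread a given arrangement of circular banana slices covers, by random sampling. Once you can score an arrangement, you can compare arrangements:

```python
import random

def covered_fraction(width, height, circles, samples=50_000, seed=1):
    """Estimate the fraction of a width-by-height slice of bread covered
    by circular banana slices, by Monte Carlo sampling.

    circles: list of (cx, cy, r) tuples giving each slice's center and radius.
    """
    rng = random.Random(seed)      # fixed seed so estimates are repeatable
    hits = 0
    for _ in range(samples):
        x = rng.uniform(0, width)
        y = rng.uniform(0, height)
        # A point is covered if it lies inside any banana slice.
        if any((x - cx) ** 2 + (y - cy) ** 2 <= r * r for cx, cy, r in circles):
            hits += 1
    return hits / samples
```

One banana slice of radius 1 centered on a 2-by-2 slice of bread covers π/4, about 0.785, and the estimate lands close to that.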

I should have gone with Vayuputrii’s proposal that I talk about the Kronecker Delta. But both Jacob Siehler and Mr Wu proposed K-Theory as a topic. It’s a big and an important one. That was compelling. It’s also a challenging one. This essay will not teach you K-Theory, or even get you very far in an introduction. It may at least give some idea of what the field is about.

K-Theory.

This is a difficult topic to discuss. It’s an important theory. It’s an abstract one. The concrete examples are either too common to look interesting or are already deep into things like “tangent bundles of S^{n-1}”. There are people who find tangent bundles quite familiar concepts. My blog will not be read by a thousand of them this month. Those who are familiar with the legends grown around Alexander Grothendieck will nod on hearing he was a key person in the field. Grothendieck was of great genius, and also spectacular indifference to practical mathematics. Allegedly he once, pressed to apply something to a particular prime number for an example, proposed 57, which is not prime. (One does not need to be a genius to make a mistake like that. If I proposed 447 or 449 as prime numbers, how long would you need to notice I was wrong?)

K-Theory predates Grothendieck. Now that we know it’s a coherent mathematical idea we can find elements leading to it going back to the 19th century. One important theorem has Bernhard Riemann’s name attached. Henri Poincaré contributed early work too. Grothendieck did much to give the field a particular identity. Also a name, the K coming from the German Klasse. Grothendieck pioneered what we now call Algebraic K-Theory, working on the topic as a field of abstract algebra. There is also a Topological K-Theory, early work on which we thank Michael Atiyah and Friedrich Hirzebruch for. Topology is, popularly, thought of as the mathematics of flexible shapes. It is, but we get there from thinking about relationships between sets, and these are the topologies of K-Theory. We understand these now as different ways of understanding structures.

Still, one text I found described (topological) K-Theory as “the first generalized cohomology theory to be studied thoroughly”. I remember how much handwaving I had to do to explain what a cohomology is. The subject looks intimidating because of the depth of technical terms. Every field is deep in technical terms, though. These look more rarefied because we haven’t talked much, or deeply, into the right kinds of algebra and topology.

You find at the center of K-Theory either “coherent sheaves” or “vector bundles”. Which alternative depends on whether you prefer Algebraic or Topological K-Theory. Both alternatives are ways to encode information about the space around a shape. Let me talk about vector bundles because I find that easier to describe. Take a shape, anything you like. A closed ribbon. A torus. A Möbius strip. Draw a curve on it. Every point on that curve has a tangent plane, the plane that just touches your original shape, and that’s guaranteed to touch your curve at one point. What are the directions you can go in that plane? That collection of directions is a fiber bundle — a tangent bundle — at that point. (As ever, do not use this at your thesis defense for algebraic topology.)

Now: what are all the tangent bundles for all the points along that curve? Does their relationship tell you anything about the original curve? The question is leading. If their relationship told us nothing, this would not be a subject anyone studies. If you pick a point on the curve and look at its tangent bundle, and you move that point some, how does the tangent bundle change?

Why create such a thing? The usual reasons. Often it turns out calculating something is easier on the associated ring than it is on the original space. What are we looking to calculate? Typically, we’re looking for invariants. Things that are true about the original shape whatever ways it might be rotated or stretched or twisted around. Invariants can be things as basic as “the number of holes through the solid object”. Or they can be as ethereal as “the total energy in a physics problem”. Unfortunately if we’re looking at invariants that familiar, K-Theory is probably too much overhead for the problem. I confess to feeling overwhelmed by trying to learn enough to say what it is for.

There are some big things which it seems well-suited to do. K-Theory describes, in its way, how the structure of a set of items affects the functions it can have. This links it to modern physics. The great attention-drawing topics of 20th century physics were quantum mechanics and relativity. They still are. The great discovery of 20th century physics has been learning how much of it is geometry. How the shape of space affects what physics can be. (Relativity is the accessible reflection of this.)

And so K-Theory comes to our help in string theory. String theory exists in that grand unification where mathematics and physics and philosophy merge into one. I don’t toss philosophy into this as an insult to philosophers or to string theoreticians. Right now it is very hard to think of ways to test whether a particular string theory model is true. We instead ponder what kinds of string theory could be true, and how we might someday tell whether they are. When we ask what things could possibly be true, and how to tell, we are working for the philosophy department.

My reading tells me that K-Theory has been useful in condensed matter physics. That is, when you have a lot of particles and they interact strongly. When they act like liquids or solids. I can’t speak from experience, either on the mathematics or the physics side.

I can talk about an interesting mathematical application. It’s described in detail in section 2.3 of Allen Hatcher’s text Vector Bundles and K-Theory, here. It comes about from consideration of the Hopf invariant, named for Heinz Hopf for what I trust are good reasons. It also comes from consideration of homomorphisms. A homomorphism is a matching between two sets of things that preserves their structure. This has a precise definition, but I can make it casual. Have you noticed that every (American, hourlong) late-night chat show is basically the same? The host at his desk, the jovial band leader, the monologue, the show rundown? Two guests and a band? (At least in normal times.) Then you have noticed the homomorphism between these shows. A mathematical homomorphism is more about preserving the products of multiplication. Or it preserves the existence of a thing called the kernel. That is, you can match up elements and how the elements interact.
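For a toy example of the multiplication-preserving kind (my own, not from Hatcher): the map sending an integer to its last digit is a homomorphism, because the last digit of a product depends only on the last digits of the factors.

```python
def last_digit(n):
    """Map an integer to its last digit: a homomorphism from the integers
    under multiplication to the integers mod 10 under multiplication."""
    return n % 10

# The defining property: the image of a product equals the product of
# the images, with that product taken in the target, i.e. mod 10.
for a in range(-30, 30):
    for b in range(-30, 30):
        assert last_digit(a * b) == (last_digit(a) * last_digit(b)) % 10
```

The matching loses information, of course; many integers share a last digit. What it preserves is how the elements interact under multiplication, which is the point.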

What’s important is Adams’ Theorem of the Hopf Invariant. I’ll write this out (quoting Hatcher) to give some taste of K-Theory:

The following statements are true only for n = 1, 2, 4, and 8:
a. R^n is a division algebra.
b. S^{n-1} is parallelizable, ie, there exist n – 1 tangent vector fields to S^{n-1} which are linearly independent at each point, or in other words, the tangent bundle to S^{n-1} is trivial.

This is, I promise, low on jargon. “Division algebra” is familiar to anyone who did well in abstract algebra. It means a ring where every element, except for zero, has a multiplicative inverse. That is, division exists. “Linearly independent” is also a familiar term, to the mathematician. Almost every subject in mathematics has a concept of “linearly independent”. The exact definition varies but it amounts to the set of things having neither redundant nor missing elements.

The proof from there sprawls out over a bunch of ideas. Many of them I don’t know. Some of them are simple. The conditions on the Hopf invariant, all that stuff, eventually turn into finding the values of n for which 2^n divides 3^n – 1. There are only three values of ‘n’ that do that. For example, n = 3 fails: 2^3 is 8, and 3^3 – 1 is 26, which 8 does not divide.
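That arithmetic condition, which n have 2^n dividing 3^n – 1, is easy to check by brute force. A quick sketch, mine and not Hatcher’s, of only the arithmetic fact the argument reduces to:

```python
# For which n does 2^n divide 3^n - 1?  Brute force over a generous range;
# Python's integers are arbitrary-precision, so 3**499 is no trouble.
matches = [n for n in range(1, 500) if (3**n - 1) % 2**n == 0]
print(matches)  # [1, 2, 4]
```

Only three values turn up, matching the three interesting dimensions of Adams’ theorem (n = 1 corresponding to R^2, n = 2 to R^4, and n = 4 to R^8, with R^1 as the trivial case).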

What all that tells us is that if you want to do something like division on ordered sets of real numbers you have only a few choices. You can have a single real number, R^1. Or you can have an ordered pair, R^2. Or an ordered quadruple, R^4. Or you can have an ordered octuple, R^8. And that’s it. Not that other ordered sets can’t be interesting. They will all diverge far enough from the way real numbers work that you can’t do something that looks like division.

And now we come back to the running theme of this year’s A-to-Z. Real numbers are real numbers, fine. Complex numbers? We have some ways to understand them. One of them is to match each complex number with an ordered pair of real numbers. We have to define a more complicated multiplication rule than “first times first, second times second”. This rule is the rule implied if we come to R^2 through this avenue of K-Theory. We get this matching between real numbers and the first great expansion on real numbers.
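Written out for ordered pairs, that multiplication rule looks like this. This is only a sketch of the standard rule for complex numbers, nothing deeper:

```python
def cmul(p, q):
    """Multiply complex numbers stored as ordered pairs (a, b) ~ a + bi.
    The rule is (a, b)(c, d) = (ac - bd, ad + bc), which is more
    complicated than "first times first, second times second"."""
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

# The pair (0, 1) plays the role of i, and i times i comes out to -1:
assert cmul((0, 1), (0, 1)) == (-1, 0)
```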

The next great expansion of complex numbers is the quaternions. We can understand them as ordered quartets of real numbers. That is, as R^4. We need to make our multiplication rule a bit fussier yet to do this coherently. Guess what fuss we’d expect coming through K-Theory?

R^8 seems the odd one out; who does anything with that? There is a set of numbers that neatly matches this ordered set of octuples. It’s called the octonions, sometimes called the Cayley Numbers. We don’t work with them much. We barely work with quaternions, as they’re a lot of fuss. Multiplication on them doesn’t even commute. (They’re very good for understanding rotations in three-dimensional space. You can also use them as vectors. You’ll do that if your programming language supports quaternions already.) Octonions are more challenging. Not only does their multiplication not commute, it’s not even associative. That is, if you have three octonions — call them p, q, and r — you can expect that p times the product of q-and-r would be different from the product of p-and-q times r. Real numbers don’t work like that. Complex numbers or quaternions don’t either.
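The non-commuting shows up immediately once you write out Hamilton’s multiplication rule for the ordered quadruples. A sketch of that standard rule:

```python
def qmul(p, q):
    """Hamilton's rule for multiplying quaternions stored as ordered
    quadruples (w, x, y, z) ~ w + xi + yj + zk."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
# ij comes out to k, but ji comes out to -k: the order matters.
assert qmul(i, j) == (0, 0, 0, 1)
assert qmul(j, i) == (0, 0, 0, -1)
```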

Octonions let us have a meaningful division, so we could write out p ÷ q and know what it meant. We won’t see that for any bigger ordered set of real numbers. And K-Theory is one of the tools which tells us we may stop looking.

This is hardly the last word in the field. It’s barely the first. It is at least an understandable one. The abstractness of the field works against me here. It does offer some compensations. Broad applicability, for example; a theorem tied to few specific properties will work in many places. And pure aesthetics too. Much work, in statements of theorems and their proofs, involves lovely diagrams. You’ll see great lattices of sets relating to one another. They’re linked by chains of homomorphisms. And, in further aesthetics, beautiful words strung into lovely sentences. You may not know what it means to say “Pontryagin classes also detect the nontorsion in π_k(SO(n)) outside the stable range”. I know I don’t. I do know when I hear a beautiful string of syllables and that is a joy of mathematics never appreciated enough.

To start this year’s great glossary project Mr Wu, author of the MathTuition88.com blog, had a great suggestion: The Atiyah-Singer Index Theorem. It’s an important and spectacular piece of work. I’ll explain why I’m not doing that in a few sentences.

Mr Wu pointed out that a biography of Michael Atiyah, one of the authors of this theorem, might be worth doing. GoldenOj endorsed the biography idea, and the more I thought it over the more I liked it. I’m not able to do a true biography, something that goes to primary sources and finds a convincing story of a life. But I can sketch out a bit, exploring his work and why it’s of note.

Michael Atiyah.

Theodore Frankel’s The Geometry of Physics: An Introduction is a wonderful book. It’s 686 pages, including the index. It all explores how our modern understanding of physics is our modern understanding of geometry. On page 465 it offers this:

The Atiyah-Singer index theorem must be considered a high point of geometrical analysis of the twentieth century, but is far too complicated to be considered in this book.

I know when I’m licked. Let me attempt to look at one of the people behind this theorem instead.

The Riemann Hypothesis is about where to find the roots of a particular infinite series. It’s been out there, waiting for a solution, for a century and a half. There are many interesting results which we would know to be true if the Riemann Hypothesis is true. In 2018, Michael Atiyah declared that he had a proof. And, more, an amazing proof, a short proof. Albeit one that depended on a great deal of background work and careful definitions. The mathematical community was skeptical. It still is. But it did not dismiss outright the idea that he had a solution. It was plausible that Atiyah might solve one of the greatest problems of mathematics in something that fits on a few PowerPoint slides.

So think of a person who commands such respect.

His proof of the Riemann Hypothesis, as best I understand, is not generally accepted. For example, it includes the fine structure constant. This comes from physics. It describes how strongly electrons and photons interact. The most compelling (to us) consequence of the Riemann Hypothesis is in how prime numbers are distributed among the integers. It’s hard to think how photons and prime numbers could relate. But, then, if humans had done all of mathematics without noticing geometry, we would know there is something interesting about π. Differential equations, if nothing else, would turn up this number. We happened to discover π in the real world first too. If it were not familiar for so long, would we think there should be any commonality between differential equations and circles?

I do not mean to say Atiyah is right and his critics wrong. I’m no judge of the matter at all. What is interesting is that one could imagine a link between a pure number-theory matter like the Riemann hypothesis and a physical matter like the fine structure constant. It’s not surprising that mathematicians should be interested in physics, or vice-versa. Atiyah’s work was particularly important. Much of his work, from the late 70s through the 80s, was in gauge theory. This subject lies under much of modern quantum mechanics. It’s born of the recognition of symmetries, group operations that you can do on a field, such as the electromagnetic field.

In a sequence of papers Atiyah, with other authors, sorted out particular cases of how magnetic monopoles and instantons behave. Magnetic monopoles may sound familiar, even though no one has ever seen one. These are magnetic points, an isolated north or a south pole without its opposite partner. We can understand well how they would act without worrying about whether they exist. Instantons are more esoteric; I don’t remember encountering the term before starting my reading for this essay. I believe I did, encountering the technique as a way to describe the transitions between one quantum state and another. Perhaps the name failed to stick. I can see where there are few examples you could give an undergraduate physics major. And it turns out that monopoles appear as solutions to some problems involving instantons.

This was, for Atiyah, later work. It arose, in part, from bringing the tools of index theory to nonlinear partial differential equations. This index theory is the thing that got us the Atiyah-Singer Index Theorem too complicated to explain in 686 pages. Index theory, here, studies questions like “what can we know about a differential equation without solving it?” Solving a differential equation would tell us almost everything we’d like to know, yes. But it’s also quite hard. Index theory can tell us useful things like: is there a solution? Is there more than one? How many? And it does this through topological invariants. A topological invariant is a trait like, for example, the number of holes that go through a solid object. These things are indifferent to operations like moving the object, or rotating it, or reflecting it. In the language of group theory, they are invariant under a symmetry.

It’s startling to think a question like “is there a solution to this differential equation” has connections to what we know about shapes. This shows some of the power of recasting problems as geometry questions. From the late 50s through the mid-70s, Atiyah was a key person working in a topic that is about shapes. We know it as K-theory. The “K” from the German Klasse, here. It’s about groups, in the abstract-algebra sense; the things in the groups are themselves classes of isomorphisms. Michael Atiyah and Friedrich Hirzebruch defined this sort of group for a topological space in 1959. And this gave definition to topological K-theory. This is again abstract stuff. Frankel’s book doesn’t even mention it. It explores what we can know about shapes from the tangents to the shapes.

And it leads into cobordism, also called bordism. This is about what you can know about shapes which could be represented as cross-sections of a higher-dimension shape. The iconic, and delightfully named, shape here is the pair of pants. In three dimensions this shape is a simple cartoon of what it’s named. On the one end, it’s a circle. On the other end, it’s two circles. In between, it’s a continuous surface. Imagine the cross-sections, how on separate layers the two circles are closer together. How their shapes distort from a real circle. In one cross-section they come together. They appear as two circles joined at a point. In another, they’re a two-looped figure. In another, a smoother circle. Knowing that Atiyah came from these questions may make his future work seem more motivated.

But how does one come to think of the mathematics of imaginary pants? Many ways. Atiyah’s path came from his first research specialty, which was algebraic geometry. This was his work through much of the 1950s. Algebraic geometry is about the kinds of geometric problems you get from studying algebra problems. Algebra here means the abstract stuff, although it does touch on the algebra from high school. You might, for example, do work on the roots of a polynomial, or a comfortable enough equation like x^2 + y^2 = 1. Atiyah had started — as an undergraduate — working on projective geometries. This is what one curve looks like projected onto a different surface. This moved into elliptic curves and into particular kinds of transformations on surfaces. And algebraic geometry has proved important in number theory. You might remember that the Wiles-Taylor proof of Fermat’s Last Theorem depended on elliptic curves. Some work on the Riemann hypothesis is built on algebraic topology.

(I would like to trace things farther back. But the public record of Atiyah’s work doesn’t offer hints. I can find amusing notes like his father asserting he knew he’d be a mathematician. He was quite good at changing local currency into foreign currency, making a profit on the deal.)

It’s possible to imagine this clear line in Atiyah’s career, and why his last works might have been on the Riemann hypothesis. That’s too pat an assertion. The more interesting thing is that Atiyah had several recognizable phases and did iconic work in each of them. There is a cliche that mathematicians do their best work before they are 40 years old. And, it happens, Atiyah did earn a Fields Medal, given to mathematicians for the work done before they are 40 years old. But I believe this cliche represents a misreading of biographies. I suspect that first-rate work is done when a well-prepared mind looks fresh at a new problem. A mathematician is likely to have these traits line up early in the career. Grad school demands the deep focus on a particular problem. Getting out of grad school lets one bring this deep knowledge to fresh questions.

It is easy, in a career, to keep studying problems one has already had great success in, for good reason and with good results. It tends not to keep producing revolutionary results. Atiyah was able — by chance or design I can’t tell — to several times venture into a new field. The new field was one that his earlier work prepared him for, yes. But it posed new questions about novel topics. And this creative, well-trained mind focusing on new questions produced great work. And this is one way to be credible when one announces a proof of the Riemann hypothesis.

Here is something I could not find a clear way to fit into this essay. Atiyah recorded some comments about his life for the Web of Stories site. These are biographical and do not get into his mathematics at all. Much of it is about his life as child of British and Lebanese parents and how that affected his schooling. One that stood out to me was about his peers at Manchester Grammar School, several of whom he rated as better students than he was. Being a good student is not tightly related to being a successful academic. Particularly as so much of a career depends on chance, on opportunities happening to be open when one is ready to take them. It would be remarkable if there were three people of greater talent than Atiyah who happened to be in the same school at the same time. It’s not unthinkable, though, and we may wonder what we can do to give people the chance to do what they are good in. (I admit this assumes that one finds doing what one is good in particularly satisfying or fulfilling.) In looking at any remarkable talent it’s fair to ask how much of their exceptional nature is that they had a chance to excel.

This is the slightly belated close of last week’s topics suggested by Comic Strip Master Command. For the week we’ve had, I am doing very well.

Werner Wejp-Olsen’s Inspector Danger’s Crime Quiz for the 25th of May sees another mathematician killed, and “identifying” his killer in a dying utterance. Inspector Danger has followed killer mathematicians several times before: the 9th of July, 2012, for instance. Or the 4th of July, 2016, for a case so similar that it’s almost a Slylock Fox six-differences puzzle. Apparently realtors and marine biologists are out for mathematicians’ blood. I’m not surprised by the realtors, but hey, marine biology, what’s the deal? The same gimmick got used the 15th of May, 2017, too. (And in fairness to the late Wejp-Olsen, who could possibly care that similar names are being used in small puzzles used years apart? It only stands out because I’m picking out things that no reasonable person would notice.)

Jim Meddick’s Monty for the 25th has the title character inspired by the legend of genius work done during plague years. A great disruption in life is a great time to build new habits, and if Covid-19 has given you the excuse to break bad old habits, or develop good new ones, great! Congratulations! If it has not, though? That’s great too. You’re surviving the most stressful months of the 21st century, I hope, not taking a holiday.

Anyway, the legend mentioned here includes Newton inventing Calculus while in hiding from the plague. The actual history is more complicated, and ambiguous. (You will not go wrong supposing that the actual history of a thing is more complicated and ambiguous than you imagine.) The Renaissance Mathematicus describes, with greater authority and specificity than I could, what Newton’s work was more like. And some of how we have this legend. This is not to say that the 1660s were not astounding times for Newton, nor to deny that he worked with a rare genius. It’s more that we are lying to imagine that Newton looked around, saw London was even more a deathtrap than usual, and decided to go off to the country and toss out a new and unique understanding of the infinitesimal and the continuum.

Mark Anderson’s Andertoons for the 27th is the Mark Anderson’s Andertoons for the week. One of the students — not Wavehead — worries that a geometric ray, going on forever, could endanger people. There’s some neat business going on here. Geometry, like much mathematics, works on abstractions that we take to be universally true. But it also seems to have a great correspondence to ordinary real-world stuff. We wouldn’t study it if it didn’t. So how does that idealization interact with the reality? If the ray represented by those marks on the board goes on to do something, do we have to take care in how it’s used?

There were a couple more comic strips in the block of time I want to write about. Only one’s got some deeper content and, I admit, I had to work to find it.

Olivia Jaimes’s Nancy for the 8th has Nancy and Sluggo avoiding mathematics homework. Or, “practice”, anyway. There’s more, though; Nancy and Sluggo are doing some analysis of viewing angles. That’s actual mathematics, certainly. Computer-generated imagery depends on it, just like you’d imagine. There are even fun abstract questions that can give surprising insights into numbers. For example: imagine that space were studded, at a regular square grid spacing, with perfectly reflective marbles of uniform size. Is there, then, a line of sight between any two points outside any marbles? Even if it requires tens of millions of reflections; we’re interested in what perfect reflections would give us.

Using playing cards as a makeshift protractor is a creative bit of making do with what you have. The cards spread in a fanfold easily enough and there’s marks on the cards that you can use to keep your measurements reasonably uniform. Creating ad hoc measurement tools like this isn’t mathematics per se. But making a rough tool is a first step to making a precise tool. And you can use reason to improve your estimates.

It’s not on-point, but I did want to share the most wondrous ad hoc tool I know of: You can use an analog clock hand, and the sun, as a compass. You don’t even need a real clock; you can draw the time on a sheet of paper and use that. It’s not a precise measure, of course. But if you need some help, here you go. You’ve got it.

The past week was a light one for mathematically-themed comic strips. So let’s see if I can’t review what’s interesting about them before the end of this genially dumb movie (1940’s Hullabaloo, starring Frank Morgan and featuring Billie Burke in a small part). It’ll be tough; they’re reaching a point where the characters start acting like they care about the plot, which is usually the sign they’re in the last reel.

Jenny Campbell’s Flo and Friends for the 26th is a joke about fumbling a bit of practical mathematics, in this case, cutting a recipe down. When I look into arguments about the metric system, I will sometimes see the claim that English traditional units are advantageous for cutting down a recipe: it’s quite easy to say that half of “one cup” is a half cup, for example. I doubt that this is much more difficult than working out what half of 500 ml is, and my casual inquiries suggest that nobody has the faintest idea what half of a pint would be. And anyway none of this would help Ruthie’s problem, which is taking two-fifths of a recipe meant for 15 people. … Honestly, I would have just cut it in half and wondered who’s publishing recipes that serve 15.

Ed Bickford and Aaron Walther’s American Chop Suey for the 28th uses a panel of (gibberish) equations to represent deep thinking. It’s in part of a story about an origami competition. This interests me because there is serious mathematics to be done in origami. Most of these are geometry problems, as you might expect. The kinds of things you can understand about distance and angles from folding a square may surprise. For example, it’s easy to trisect an arbitrary angle using folded squares. The problem is, famously, impossible for compass-and-straightedge geometry.

Origami offers useful mathematical problems too, though. (In practice, if we need to trisect an angle, we use a protractor.) It’s good to know how to take a flat, or nearly flat, thing and unfold it into a more interesting shape. It’s useful whenever you have something that needs to be transported in as few pieces as possible, but that on site needs to not be flat. And this connects to questions with pleasant and ordinary-seeming names like the map-folding problem: can you fold a large sheet into a small package that’s still easy to open? Often you can. So, the mathematics of origami is a growing field, and one that’s about an accessible subject.

Nate Fakes’s Break of Day for the 29th is the anthropomorphic-symbols joke for the week, with an x talking about its day job in equations and its free time in games like tic-tac-toe.

Bill Holbrook’s On The Fastrack for the 2nd of May also talks about the use of x as a symbol. Curt takes eagerly to the notion that a symbol can represent any number, whether we know what it is or not. And, also, that the choice of symbol is arbitrary; we could use whatever symbol communicates. I remember getting problems to work in which, say, 3 plus a box equals 8 and working out what number in the box would make the equation true. This is exactly the same work as solving 3 + x = 8. Using an empty box made the problem less intimidating, somehow.

Dave Whamond’s Reality Check for the 2nd is, really, a bit baffling. It has a student asking Siri for the cosine of 174 degrees. But it’s not like anyone knows the cosine of 174 degrees off the top of their heads. If the cosine of 174 degrees wasn’t provided in a table for the students, then they’d have to look it up. Well, more likely they’d be provided the cosine of 6 degrees; the cosine of an angle is equal to minus one times the cosine of 180 degrees minus that same angle. This allows table-makers to reduce how much stuff they have to print. Still, it’s not really a joke that a student would look up something that students would be expected to look up.

… That said …

If you know anything about trigonometry, you know the sine and cosine of a 30-degree angle. If you know a bit about trigonometry, and are willing to put in a bit of work, you can start from a regular pentagon and work out the sine and cosine of a 36-degree angle. And, again if you know anything about trigonometry, you know that there are angle-addition and angle-subtraction formulas. That is, if you know the cosine of two angles, you can work out the cosine of the difference between them.

So, in principle, you could start from scratch and work out the cosine of 6 degrees without using a calculator. And the cosine of 174 degrees is minus one times the cosine of 6 degrees. So it could be a legitimate question to work out the cosine of 174 degrees without using a calculator. I can believe in a mathematics class which has that as a problem. But that requires such an ornate setup that I can’t believe Whamond intended that. Who in the readership would think the cosine of 174 degrees something to work out by hand? If I hadn’t read a book about spherical trigonometry last month I wouldn’t have thought the cosine of 6 degrees a thing someone could reasonably work out by hand.
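
If you want to see the whole chain worked through, here’s a sketch. It uses the exact pentagon values (cosine of 36 degrees is $(1 + \sqrt{5})/4$), the angle-subtraction formula, and the supplementary-angle identity from above; the variable names are just mine.

```python
import math

# Exact values from the regular pentagon and the equilateral triangle:
cos36 = (1 + math.sqrt(5)) / 4
sin36 = math.sqrt(10 - 2 * math.sqrt(5)) / 4
cos30 = math.sqrt(3) / 2
sin30 = 0.5

# Angle-subtraction formula: cos(36 - 30) = cos 36 cos 30 + sin 36 sin 30
cos6 = cos36 * cos30 + sin36 * sin30

# Supplementary-angle identity: cos(174) = -cos(180 - 174) = -cos(6)
cos174 = -cos6
```

Running this gives the same numbers a calculator would, which is reassuring if you’d rather not trust the pentagon.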

I didn’t finish writing before the end of the movie, even though it took about eighteen hours to wrap up ten minutes of story. My love came home from a walk and we were talking. Anyway, this is plenty of comic strips for the week. When there are more to write about, I’ll try to have them in an essay at this link. Thanks for reading.

As much as everything is still happening, and so much, there’s still comic strips. I’m fortunately able here to focus just on the comics that discuss some mathematical theme, so let’s get started in exploring last week’s reading. Worth deeper discussion are the comics that turn up here all the time.

Lincoln Peirce’s Big Nate for the 5th is a casual mention. Nate wants to get out of having to do his mathematics homework. This really could be any subject as long as it fit the word balloon.

John Hambrock’s The Brilliant Mind of Edison Lee for the 6th is a funny-answers-to-story-problems joke. Edison Lee’s answer disregards the actual wording of the question, which supposes the group is travelling at an average 70 miles per hour. The number of stops doesn’t matter in this case.

Mark Anderson’s Andertoons for the 6th is the Mark Anderson’s Andertoons for the week. In it Wavehead gives the “just use a calculator” answer for geometry problems.

Not much to talk about there. But there is a fascinating thing about perimeters that you learn if you go far enough in Calculus. You have to get into multivariable calculus, something where you integrate a function that has at least two independent variables. When you do this, you can find the integral evaluated over a curve. If it’s a closed curve, something that loops around back to itself, then you can do something magic. Integrating the correct function on the curve around a shape will tell you the enclosed area.

And this is an example of one of the amazing things in multivariable calculus. It tells us that integrals over a boundary can tell us something about the integral within a volume, and vice-versa. It can be worth figuring out whether your integral is better solved by looking at the boundaries or at the interiors.
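
For a polygon, that boundary integral collapses into ordinary arithmetic, the “shoelace formula”. This is a small sketch of the idea, not anything from the comic; the function name is mine.

```python
def polygon_area(vertices):
    # Green's theorem, specialized: A = (1/2) |sum of (x_i * y_{i+1} - x_{i+1} * y_i)|,
    # an integral taken around the closed polygonal curve.
    total = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        total += x0 * y1 - x1 * y0
    return abs(total) / 2.0
```

Feed it the corners of a unit square and it reports 1; feed it an L-shaped hexagon and it still gets the area right, all without ever cutting the shape into rectangles.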

Heron’s Formula, for the area of a triangle based on the lengths of its sides, is an expression of this calculation. I don’t know of a formula exactly like that for the perimeter of a quadrilateral, but there are similar formulas if you know the lengths of the sides and of the diagonals.
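
Heron’s Formula itself is short enough to write out, if you’ll forgive my illustrative function name: with semiperimeter $s = \frac{a+b+c}{2}$, the area is $\sqrt{s(s-a)(s-b)(s-c)}$.

```python
import math

def heron_area(a, b, c):
    # Heron's formula: triangle area from the three side lengths alone.
    s = (a + b + c) / 2.0
    return math.sqrt(s * (s - a) * (s - b) * (s - c))
```

The 3-4-5 right triangle makes a handy check: its area had better come out to 6.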

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 5th depicts, fairly, the sorts of things that excite mathematicians. The number discussed here is about algorithmic complexity. This is the study of how long it takes to do an algorithm. How long always depends on how big a problem you are working on; to sort four items takes less time than sorting four million items. Of interest here is how much the time to do work grows with the size of whatever you’re working on.

The mathematician’s particular example, and I thank dtpimentel in the comments for finding this, is about the Coppersmith–Winograd algorithm. This is a scheme for doing matrix multiplication, a particular kind of multiplication and addition of square grids of numbers. The grids have some number N of rows and N columns. It’s thought that there exists some way to do matrix multiplication in the order of N^{2} time, that is, if it takes 10 time units to multiply matrices of three rows and three columns together, we should expect it takes 40 time units to multiply matrices of six rows and six columns together. The matrix multiplication you learn in linear algebra takes on the order of N^{3} time, so, it would take like 80 time units.
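
The N^{3} schoolbook algorithm is the one everybody writes first, and it’s worth seeing why the exponent is 3: every one of the N-by-N output entries needs a sum over N products. A minimal sketch (my own function name):

```python
def mat_mul(A, B):
    # Schoolbook matrix multiplication: N*N output entries,
    # each a sum of N products, so about N^3 operations total.
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C
```

The fast algorithms like Coppersmith–Winograd get their smaller exponents by reorganizing this work, not by skipping any of the output entries.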

We don’t know the way to do that. The Coppersmith–Winograd algorithm was thought, after Virginia Vassilevska Williams’s work in 2011, to take something like N^{2.3728642} steps. So that six-rows-six-columns multiplication would take slightly over 51.796 844 time units. In 2014, François Le Gall found it was no worse than N^{2.3728639} steps, so this would take slightly over 51.796 833 time units. The improvement doesn’t seem like much, but on tiny problems it never does. On big problems, the improvement’s worth it. And, sometimes, you make a good chunk of progress at once.

This little essay should let me wrap up the rest of the comic strips from the past week. Most of them were casual mentions. At least I thought they were when I gathered them. But let’s see what happens when I actually write my paragraphs about them.

Thaves’s Frank and Ernest for the 2nd is a bit of wordplay, having Euclid and Galileo talking about parallel universes. I’m not sure that Galileo is the best fit for this, but I’m also not sure there’s another person connected who could be named. It’d have to be a name familiar to an average reader as having something to do with geometry. Pythagoras would seem obvious, but the joke is stronger if it’s two people who definitely did not live at the same time. Did Euclid and Pythagoras live at the same time? I am a mathematics Ph.D. and have been doing pop mathematics blogging for nearly a decade now, and I have not once considered the question until right now. Let me look it up.

It doesn’t make any difference. The comic strip has to read quickly. It might be better grounded to pose Euclid meeting Gauss or Lobachevsky or Euler (although the similarity in names would be confusing) but being understood is better than being precise.

Stephan Pastis’s Pearls Before Swine for the 2nd is a strip about the foolhardiness of playing the lottery. And it is foolish to think that even a $100 purchase of lottery tickets will get one a win. But it is possible to buy enough lottery tickets to assure a win, even if it is maybe shared with someone else. It’s neat that an action can be foolish if done in a small quantity, but sensible if done in enough bulk.

Mark Anderson’s Andertoons for the 3rd is the Mark Anderson’s Andertoons for the week. Wavehead has made a bunch of failed attempts at subtracting seven from ten, but claims it’s at least progress that some things have been ruled out. I’ll go along with him that there is some good in ruling out wrong answers. The tricky part is in how you rule them out. For example, obvious to my eye is that the correct answer can’t be more than ten; the problem is 10 minus a positive number. And it can’t be less than zero; it’s ten minus a number less than ten. It’s got to be a whole number. If I’m feeling confident about five and five making ten, then I’d rule out any answer that isn’t between 1 and 4 right away. I’ve got the answer down to four guesses and all I’ve really needed to know is that 7 is greater than five but less than ten. That it’s an even number minus an odd means the result has to be odd; so, it’s either one or three. Knowing that the next whole number higher than 7 is an 8 says that we can rule out 1 as the answer. So there’s the answer, done wholly by thinking of what we can rule out. Of course, knowing what to rule out takes some experience.
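
The chain of rule-outs above can be written as a sequence of filters, which makes plain that each observation genuinely shrinks the candidate pool. This is just my own encoding of the reasoning:

```python
candidates = list(range(0, 11))  # whole numbers 0 through 10

candidates = [n for n in candidates if n <= 4]      # 7 > 5, so 10 - 7 < 5
candidates = [n for n in candidates if n >= 1]      # 7 < 10, so 10 - 7 > 0
candidates = [n for n in candidates if n % 2 == 1]  # even minus odd is odd
candidates = [n for n in candidates if n > 2]       # 7 < 8, and 10 - 8 = 2
```

After the last filter only one candidate survives, which is the answer found without ever subtracting.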

Mark Parisi’s Off The Mark for the 4th is roughly the anthropomorphic numerals joke for the week. It’s a dumb one, but, that’s what sketchbooks are for.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 4th is the Zach Weinersmith’s Saturday Morning Breakfast Cereal for the week. It shows in joking but not wrong fashion a mathematical physicist’s encounters with orbital mechanics. Orbital mechanics are a great first physics problem. It’s obvious what they’re about, and why they might be interesting. And the mathematics of it is challenging in ways that masses on springs or balls shot from cannons aren’t.

A few problems are very easy, like, one thing in circular orbit of another. A few problems are not bad, like, one thing in an elliptical or hyperbolic orbit of another. All our good luck runs out once we suppose the universe has three things in it. You’re left with problems that are doable if you suppose that one of the things moving is so tiny that it barely exists. This is near enough true for, for example, a satellite orbiting a planet. Or by supposing that we have a series of two-thing problems. Which is again near enough true for, for example, a satellite travelling from one planet to another. But this is all work that finds approximate solutions, often after considerable hard work. It feels like much more labor to smaller reward than we get for masses on springs or balls shot from cannons. Walking off to a presumably easier field is understandable. Unfortunately, none of the other fields is actually easier.
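
Even the “very easy” case gets done numerically in practice. Here is a minimal sketch, under assumptions of my choosing (units where the gravitational parameter is 1, a test particle so tiny it barely exists, and the standard velocity-Verlet integrator):

```python
import math

def orbit(steps, dt, mu=1.0):
    # A test particle around a fixed central mass, integrated with
    # velocity Verlet. Start one unit out, moving at circular-orbit speed.
    x, y = 1.0, 0.0
    vx, vy = 0.0, 1.0

    def accel(x, y):
        # Inverse-square gravity pointing at the origin.
        r3 = (x * x + y * y) ** 1.5
        return -mu * x / r3, -mu * y / r3

    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        x += dt * vx;        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    return x, y

# One full period of this circular orbit is 2*pi time units:
steps = 6283
x, y = orbit(steps, 2 * math.pi / steps)
```

After one period the particle comes back very near its starting point, which is the kind of sanity check that makes the easy case easy; add a third body and no check this simple exists.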

Pythagoras died somewhere around 495 BC. Euclid was born sometime around 325 BC. That’s 170 years apart. So Pythagoras was as far in Euclid’s past as, oh, Maria Gaetana Agnesi is to mine.

Greetings, friends, and thank you for visiting the 136th installment of Denise Gaskins’s Playful Math Education Blog Carnival. I apologize ahead of time that this will not be the merriest of carnivals. It has not been the merriest of months, even with Pi Day at its center.

In consideration of that, let me lead with Art in the Time of Transformation by Paula Beardell Krieg. This is from the blog Playful Bookbinding and Paper Works. The post particularly reflects on the importance of creating a thing in a time of trouble. There is great beauty to find, and make, in symmetries, and rotations, and translations. Simple polygons patterned by simple rules can be accessible to anyone. Studying just how these symmetries and other traits work leads to important mathematics. Thus how Krieg’s page has recent posts with names like “Frieze Symmetry Group F7” but also posts about how symmetry is for five-year-olds. I am grateful to Goldenoj for the reference.

That link was brought to my attention by Iva Sallay, another longtime friend of my little writings here. She writes fun pieces about every counting number, along with recreational puzzles. She asked to share 1458 Tangrams Can Be A Pot of Gold, as an example of what fascinating things can be found in any number. This includes a tangram. Tangrams turn up in recreational-mathematics puzzles based on the ways you can recombine shapes. It’s always exciting to be able to shift between arithmetic and shapes. And that leads to a video and related thread again pointed to me by goldenoj …

This video, by Mathologer on YouTube, explains a bit of number theory. Number theory is the field of asking easy questions about whole numbers, and then learning that the answers are almost impossible to find. I exaggerate, but it does often involve questions that just suppose you understand what a prime number should be. And then, as the title asks, take centuries to prove.

Neat visual proof of Fermat's two square theorem from @Mathologer – had to watch bits of this a few times to grasp it https://t.co/JS4FBCTPXQ

Fermat’s Two-Squares Theorem, discussed here, is not the famous one, Fermat’s Last Theorem, about $x^n + y^n = z^n$. Pierre de Fermat had a lot of theorems, some of which he proved. This one is about prime numbers, though, and particularly prime numbers that are one more than a multiple of four. This means it’s sometimes called Fermat’s 4k+1 Theorem, which is the name I remember learning it under. (k is so often a shorthand for “some counting number” that people don’t bother specifying it, the way we don’t bother to say “x is an unknown number”.) The normal proofs of this we do in the courses that convince people they’re actually not mathematics majors.

What the video offers is a wonderful alternate approach. It turns key parts of the proof into geometry, into visual statements. Into sliding tiles around and noticing patterns. It’s also a great demonstration of one standard problem-solving tool. This is to look at a related, different problem that’s easier to say things about. This leads to what seems like a long path from the original question. But it’s worth it because the path involves thinking out things like “is the count of this thing odd or even”? And that’s mathematics that you can do as soon as you can understand the question.
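
You can check the theorem’s pattern on small primes by brute force, which is a nice way to convince yourself it’s worth proving. A sketch, with a function name of my own invention:

```python
def two_square(p):
    # Search for a, b with a^2 + b^2 = p. Returns (a, b) or None.
    a = 0
    while a * a <= p:
        b2 = p - a * a
        b = int(b2 ** 0.5)
        if b * b == b2:
            return (a, b)
        a += 1
    return None
```

Primes like 5, 13, 29, which are one more than a multiple of four, all turn up representations (5 = 1 + 4, 13 = 4 + 9, 29 = 4 + 25); primes like 7, 11, 19 never do, just as the theorem promises.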

I again thank Iva Sallay for that link, as well as this essay. Dan Meyer’s But Artichokes Aren’t Pinecones: What Do You Do With Wrong Answers? looks at the problem of students giving wrong answers. There is no avoiding giving wrong answers. A parent’s or teacher’s response to wrong answers will vary, though, and Meyer asks why that is. Meyer has some hypotheses. His example notes that he doesn’t mind a child misidentifying an artichoke as a pinecone. Not in the same way he’d mind the child identifying the sum of 1 and 9 as 30. What is different about those mistakes?

Jessannwa’s Soft Start In The Intermediate Classroom looks to the teaching of older students. No muffins and cookies here. That the students might be more advanced doesn’t change the need to think of what they have energy for, and interest in. She discusses a class setup that’s meant to provide structure in ways that don’t feel so authority-driven. And ways to turn practicing mathematics problems into optimizing game play. I will admit this is a translation of the problem which would have worked well for me. But I also know that not everybody sees a game as, in part, something to play at maximum efficiency. It depends on the game, though. They’re on Twitter as @jesannwa.

These are thoughts about how anyone can start learning mathematics. What does it look like to have learned a great deal, though, to the point of becoming renowned for it? Life Through A Mathematician’s Eyes posted Australian Mathematicians in late January. It’s a dozen biographical sketches of Australian mathematicians. It also matches each to charities or other public-works organizations. They were trying to help the continent through the troubles it had even before the pandemic struck. They’re in no less need for all that we’re exhausted. The page’s author is on Twitter as @lthmath.

I have since the start of this post avoided mentioning the big mathematical holiday of March. Pi Day had the bad luck to fall on a weekend this year, and then was further hit by the Covid-19 pandemic forcing the shutdown of many schools. Iva Sallay again helped me by noting YummyMath’s activities page It’s Time To Gear Up For Pi Day. This hosts several worksheets, about the history of π and ways to calculate it, and several formulas for π. This even gets into interesting techniques like how to use continued fractions in finding a numerical value.
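
Continued fractions are a lovely way to get good fractions for π. The continued fraction for π starts [3; 7, 15, 1, 292, …], and cutting it off early gives the famous approximations 22/7 and 355/113. A small sketch of the standard convergent recurrence (my own function name):

```python
from fractions import Fraction

def convergents(terms):
    # Standard recurrence for continued-fraction convergents:
    # h_n = a_n * h_{n-1} + h_{n-2}, and likewise for the denominators k_n.
    h0, k0, h1, k1 = 1, 0, terms[0], 1
    out = [Fraction(h1, k1)]
    for a in terms[1:]:
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        out.append(Fraction(h1, k1))
    return out
```

The big term 292 is why 355/113 is such an unreasonably good approximation: cutting off just before a large term leaves only a tiny error.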

Rolands Rag Bag shared A Pi-Ku for Pi-Day featuring a poem written in a form I wasn’t aware anyone did. The “Pi-Ku” as named here has 3 syllables for the first line, 1 syllable in the second line, 4 syllables in the third line, 1 syllable the next line, 5 syllables after that … you see the pattern. (One of Avery’s older poems also keeps this form.) The form could, I suppose, go on to as many lines as one likes. Or at least to the 33rd line, when we would need a line of zero syllables. Probably one would make up a rule to cover that.
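
The syllable pattern is just the digits of π read off in order, so it’s easy to generate the scheme, and to find where the troublesome zero first appears. A toy sketch:

```python
# Digits of pi, as a string: "3" then the decimal digits in order.
PI_DIGITS = "314159265358979323846264338327950288"

def piku_syllables(lines):
    # Line n of the Pi-Ku gets as many syllables as the n-th digit of pi.
    return [int(d) for d in PI_DIGITS[:lines]]

# The first digit 0 -- the line that would need zero syllables:
first_zero_line = PI_DIGITS.index("0") + 1
```

The first five counts come out 3, 1, 4, 1, 5, as the poem has it, and the first zero digit of π turns up soon enough that a long Pi-Ku needs a rule for it.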

As a last note, I have joined Mathstodon, the Mastodon instance with a mathematics theme. You can follow my shy writings there as @nebusj@mathstodon.xyz, or follow a modest number of people talking, largely, about mathematics. On WordPress, I do figure to keep reading the comics for their mathematics topics. And sometime this year, when I feel I have the energy, I hope to do another A to Z, my little glossary project.

And this is what I have to offer. I hope the carnival has brought you some things of interest, and some things of delight. And, if I may, please consider this Grant Snider cartoon, Hope.

Life Through A Mathematician’s Eyes is scheduled to host the 137th installment of the Playful Math Education Blog Carnival, at the end of April. I look forward to seeing it. Good luck to us all.

There’s some comic strips that get mentioned here all the time. Then there’s comic strips that I have been reading basically my whole life, and that never give me a thread to talk about. Although I’ve been reading comic strips for their mathematics content for a long while now, somehow, I am still surprised when these kinds of comic strip are not the same thing. So here’s the end of last week’s comics, almost in time for next week to start:

Kevin Fagan’s Drabble for the 28th has Penny doing “math” on colors. Traditionally I use an opening like this to mention group theory. In that we study things that can be added together, in ways like addition works on the integers. Colors won’t quite work like this, unfortunately. A group needs an element that’s an additive identity. This works like zero: it can be added to anything without changing its value. There isn’t a color that you can mix with other colors that leaves the other color unchanged, though. Even white or clear will dilute the original color.

If you’ve thought of the clever workaround, that each color can be the additive identity to itself, you get credit for ingenuity. Unfortunately, to be a group there has to be a lone additive identity. Having more than one makes a structure that’s so unlike the integers that mathematicians won’t stand for it. I also don’t know of any interesting structures that have more than one additive identity. This suggests that nobody has found a problem that they represent well. But the strip suggests maybe it could tell us something useful for colors. I don’t know.
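
The reason a group can’t have two identities is worth writing out, since the whole argument fits in one line. Suppose $e$ and $e'$ were both identities for everything in the set. Then:

```latex
e \;=\; e + e' \;=\; e'
```

The first equality holds because $e'$ is an identity; the second because $e$ is. So any two identities have to be the same element, which is why the “every color is its own identity” workaround can’t be patched into a group.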

Tom Armstrong’s Marvin for the 28th is a strip which follows from the discovery that “fake news” is a thing that people say. Here the strip uses a bit of arithmetic as the sort of incontrovertibly true thing that Marvin is dumb to question. Well, that 1 + 1 equals 2 is incontrovertibly true, unless we are looking at some funny definitions of ‘1’ or ‘plus’ or something. I remember, as a kid, being quite angry with a book that mentioned “one cup of popcorn plus one cup of water does not give us two cups of soggy popcorn”, although I didn’t know how to argue the point.

Hilary Price and Rina Piccolo’s Rhymes with Orange for the 30th is … well, I’m in this picture and I don’t like it. I come from a long line of people who cover every surface with stuff. But as for what surface area is? … Well, there’s a couple of possible definitions. One that I feel is compelling is to think of covering sets. Take a shape defined to have an area of 1 unit. What is the smallest number of those unit shapes which will cover the original shape? Cover is a technical term here. But also, here, the ordinary English word describes what we need it for. How many copies of the unit shape do you need to exactly cover up the whole original shape? That’s your area. And this fits to the mother’s use of surfaces in the comic strip neatly enough.

Bud Fisher’s Mutt and Jeff for the 31st is a rerun of vintage unknown to me. I’m not sure whether it’s among the digitally relettered strips. The lettering’s suspiciously neat, but, for example, there’s at least three different G’s in there. Anyway, it’s an old joke about adding together enough gas-saving contraptions that the car uses less than zero gas. So far as it’s tenable at all, it comes from treating percentage savings from different schemes as additive, instead of multiplicative. Also, I suppose, from assuming the savings are independent, that (in this case) Jeff’s new gadget’s ten-percent saving still applies even with the special spark plugs or the new carburettor [sic]. The premise is also probably good for a word problem, testing out understanding of percentages and multiplication, which is just a side observation here.
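
The additive-versus-multiplicative difference is easy to see in a couple of lines. A sketch, with my own names; each gadget saves its percentage of whatever gas is still being burned:

```python
def additive_saving(fractions_saved):
    # Mutt and Jeff's accounting: just add the percentages up.
    return sum(fractions_saved)

def multiplicative_saving(fractions_saved):
    # Each gadget saves its fraction of the gas that's still being used.
    remaining = 1.0
    for f in fractions_saved:
        remaining *= 1.0 - f
    return 1.0 - remaining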

This wraps up last week’s mathematically-themed comic strips. This week I can tell you already was a bonanza week. When I start getting to its comics I should have an essay at this link. Thanks for reading.

Today’s essay is just to mention the comic strips which, last week, said mathematics but in some incidental way. Or some way that I can’t write a reasonable blog entry for.

Jim Meddick’s Monty for the 29th has the time-travelling Professor Xemit (get it?) show a Times Square Ball Drop of the future. The ball gets replaced with a “demihypercube”, the idea being that the future will have some more complicated geometry than a mere “ball”. There is no such thing as “a” demihypercube, in the same way there is not “a” pentagon. There is a family of shapes, all called demihypercubes. There’s a variety of ways to represent them. A reasonable one, though, is a roughly spherical shape made of pointy triangles all over. It wouldn’t look absurd. There are probably time ball drops that use something like a demihypercube already.

Also this coming Sunday I should look at more mathematically-themed comic strips. That should appear at this link, unless something urgent commands my attention first. Thank you.

The last full week of the year had, again, comic strips that mostly mention mathematics without getting into detail. That’s all right. I have a bit of a cold so I’m happy not to have to compose thoughts about too many of them.

John Zakour and Scott Roberts’s Maria’s Day for the 22nd has Maria finishing, and losing, her mathematics homework. I suppose the implication’s that she couldn’t hope to reconstruct it before class. It’s not like she could re-write a short essay for history, though.

Percy Crosby’s Skippy for the 23rd has Skippy and Sookie doing the sort of story problem arithmetic of working out a total bill. The strip originally ran the 11th of August, 1932.

Cy Olson’s Office Hours for the 24th, which originally ran the 14th of October, 1971, comes the nearest to having enough to talk about here. The secretary describes having found five different answers in calculating the profits and so used the highest one. The joke is on incompetent secretaries, yes. But it is respectable, if trying to understand something very complicated, to use several different models for what one wants to know. These will likely have different values, although how different they are, and how changes in one model track changes in another, can be valuable information. We’re accustomed to this, at least in the United States, by weather forecasts: any local weather report will describe expected storms by different models. These use different ideas about how much moisture moves into the air, how fast raindrops will form (a very difficult problem), how winds will shift, that sort of thing. It’s defensible to make similar different models for reporting the health of a business, particularly if the company owns things with a price that can’t be precisely stated.

Marguerite Dabaie and Tom Hart’s Ali’s House for the 24th continues a story from the week before in which a character imagines something tossing us out of three-dimensional space. A seven-dimensional space is interesting mathematically. We can define a cross product between vectors in three-dimensional space and in seven-dimensional space. Most other spaces don’t allow something like a cross product to be coherently defined. Seven-dimensional space also allows for something called the “exotic sphere”, which I hadn’t heard of before either. It’s a structure that’s topologically a sphere, but whose smooth structure is different from the standard sphere’s. This isn’t unique to seven-dimensional space. It’s not known whether four-dimensional space has exotic spheres, although many spaces of more than seven dimensions have them.
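
The seven-dimensional cross product can actually be computed, which surprised me. Here is a sketch using one common convention for the multiplication table (the cyclic rule $e_i \times e_{i+1} = e_{i+3}$, indices mod 7); the code and its names are mine:

```python
# Basis triples for one common convention of the 7D cross product:
# for each (a, b, c), e_a x e_b = e_c, cyclically, and antisymmetrically.
TRIPLES = [(1, 2, 4), (2, 3, 5), (3, 4, 6), (4, 5, 7), (5, 6, 1), (6, 7, 2), (7, 1, 3)]

SIGNS = {}
for a, b, c in TRIPLES:
    for x, y, z in ((a, b, c), (b, c, a), (c, a, b)):
        SIGNS[(x, y)] = (z, 1)
        SIGNS[(y, x)] = (z, -1)

def cross7(u, v):
    # u, v are length-7 sequences; returns their seven-dimensional cross product.
    w = [0.0] * 7
    for i in range(1, 8):
        for j in range(1, 8):
            if (i, j) in SIGNS:
                k, s = SIGNS[(i, j)]
                w[k - 1] += s * u[i - 1] * v[j - 1]
    return w
```

It keeps the properties that make a cross product a cross product: the result is perpendicular to both inputs, and its length satisfies $|u \times v|^2 = |u|^2 |v|^2 - (u \cdot v)^2$, just as in three dimensions.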

Gordon Bess’s Redeye for the 25th of December has Pokey asking his horse Loco to do arithmetic. There’s a long history of animals doing, or seeming to do, arithmetic. The strip originally ran the 23rd of August, 1973.

I don’t know. I say this for anyone this has unintentionally clickbaited, or who’s looking at a search engine’s preview of the page.

I come to this question from a friend, though, and it’s got me wondering. I don’t have a good answer, either. But I’m putting the question out there in case someone reading this, sometime, does know. Even if it’s in the remote future, it’d be nice to know.

And before getting to the question I should admit that “why” questions are, to some extent, a mug’s game. Especially in mathematics. I can ask why the sum of two consecutive triangular numbers is a square number. But the answer is … well, that’s what we chose to mean by ‘triangular number’, ‘square number’, ‘sum’, and ‘consecutive’. We can show why the arithmetic of the combination makes sense. But that doesn’t seem to answer “why” the way, like, why Neil Armstrong was the first person to walk on the moon. It’s more a “why” like, “why are there Seven Sisters [in the Pleiades]?”

But looking for “why” can, at least, give us hints to why a surprising result is reasonable. Draw dots representing a square number, slice it along the space right below a diagonal. You see dots representing two successive triangular numbers. That’s the sort of question I’m asking here.
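
The dot-slicing picture has an algebraic shadow, too, if you want it. With the $n$-th triangular number being $T_n = \frac{n(n+1)}{2}$:

```latex
T_{n-1} + T_n \;=\; \frac{(n-1)n}{2} + \frac{n(n+1)}{2}
\;=\; \frac{n\bigl((n-1) + (n+1)\bigr)}{2} \;=\; n^2
```

Which is the same diagonal slice, written in symbols instead of dots.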

From here, we get to some technical stuff and I apologize to readers who don’t know or care much about this kind of mathematics. It’s about the wave-mechanics formulation of quantum mechanics. In this, everything that’s observable about a system is contained within a function named $\Psi$. You find $\Psi$ by solving a differential equation. The differential equation represents problems. Like, a particle experiencing some force that depends on position. This is written as a potential energy, because that’s easier to work with. But it’s the kind of problem that gets done.

Grant that you’ve solved for $\Psi$, since that’s hard and I don’t want to deal with it. You still don’t know, like, where the particle is. You never know that, in quantum mechanics. What you do know is its distribution: where the particle is more likely to be, where it’s less likely to be. You get from $\Psi$ to this distribution for, like, where the particle is by applying an operator to $\Psi$. An operator is a function with a domain and a range that are spaces. Almost always these are spaces of functions.

Each thing that you can possibly observe, in a quantum-mechanics context, matches an operator. For example, there’s the x-coordinate operator, which tells you where along the x-axis your particle’s likely to be found. This operator is, conveniently, just x. So evaluate $x \Psi$ and that’s your x-coordinate distribution. (This is assuming that we know $\Psi$ in Cartesian coordinates, ones with an x-axis. Please let me do that.) This looks just like multiplying your old function by x, which is nice and easy.

Or you might want to know momentum. The momentum in the x-direction has an operator, $\hat{p}_x$, which equals $-i\hbar\frac{\partial}{\partial x}$. The $\frac{\partial}{\partial x}$ is a partial derivative. The $\hbar$ is Planck’s constant, a number which in normal systems of measurement is amazingly tiny. And you know how $i$ is the imaginary unit, with $i^2 = -1$. That $-$ symbol is just the minus or the subtraction symbol. So to find the momentum distribution, evaluate $-i\hbar\frac{\partial \Psi}{\partial x}$. This means taking a derivative of the $\Psi$ you already had. And multiplying it by some numbers.

But. Why is there a $\frac{\partial}{\partial x}$ in the momentum operator rather than the position operator? Why isn’t one $p_x$ and the other something with $i\hbar\frac{\partial}{\partial p_x}$ in it? From a mathematical physics perspective, position and momentum are equally good variables. We tend to think of position as fundamental, but that’s surely a result of our happening to be very good at seeing where things are. If we were primarily good at spotting the momentum of things around us, we’d surely see that as the more important variable. When we get into Hamiltonian mechanics we start treating position and momentum as equally fundamental. Even the notation emphasizes how equal they are in importance, and treatment. We stop using ‘x’ or ‘r’ as the variable representing position. We use ‘q’ instead, a mirror to the ‘p’ that’s the standard for momentum. (‘p’ we’ve always used for momentum because … … … uhm. I guess ‘m’ was already committed, for ‘mass’. What I have seen is that it was taken as the first letter in ‘impetus’ with no other work to do. I don’t know that this is true. I’m passing on what I was told explains what looks like an arbitrary choice.)

So I’m supposing that this reflects how we normally set up $\Psi$ as a function of position. That this is maybe why the position operator is so simple and bare. And then why the momentum operator has a minus, an imaginary number, and this partial derivative stuff. That if we started out with the wave function as a function of momentum, the momentum operator would be just the momentum variable. The position operator might be some mess with $i$ and $\hbar$ and derivatives or worse.

I don’t have a clear guess why one and not the other operator gets full possession of the $-i\hbar$, though. I suppose that has to reflect convenience. If position and momentum are dual quantities then I’d expect we could put a mere constant like $-i\hbar$ wherever we want. But this is, mostly, me writing out notes and scattered thoughts. I could be trying to explain something that might be as explainable as why the four interior angles of a rectangle are all right angles.

So I would appreciate someone pointing out the obvious reason these operators look like that. I may grumble privately at not having seen the obvious myself. But I’d like to know it anyway.

Today’s A To Z term is … well, my second choice. Goldenoj suggested Yang-Mills and I was so interested. Yang-Mills describes a class of mathematical structures. They particularly offer insight into how to do quantum mechanics. Especially particle physics. It’s of great importance. But, on thinking out what I would have to explain I realized I couldn’t write a coherent essay about it. Getting to what the theory is made of would take explaining a bunch of complicated mathematical structures. If I’d scheduled the A-to-Z differently, setting up matters like Lie algebras, maybe I could do it, but this time around? No such help. And I don’t feel comfortable enough in my knowledge of Yang-Mills to describe it without describing its technical points.

That said, I hope that Jacob Siehler, who suggested the Game of ‘Y’, does not feel slighted. I hadn’t known anything of the game going in to the essay-writing. When I started research I was delighted. I have yet to actually play a for-real game of this. But I like what I see, and what I think I can write about it.

Game of ‘Y’.

This is, as the name implies, a game. It has two players. They have the same objective: to create a ‘y’. Here, they do it by laying down tokens representing their side. They take turns, each laying down one token in a turn. They do this on a shape with three edges. The ‘y’ is created when there’s a continuous path of their tokens that reaches all three edges. Yes, it counts to have just a single line running along one edge of the board. This makes a pretty sorry ‘y’ but it suggests your opponent isn’t trying.

There are details of implementation. The board is a mesh of, mostly, hexagons. I take this to be for the same reason that so many conquest-type strategy games use hexagons. They tile space well, they give every space a good number of neighbors, and the distance from the center of one space to the center of any neighbor is constant. In a square grid, the centers of diagonal neighbors are farther apart than the centers of left-right or up-down neighbors. Hexagons do well for this kind of game, where the goal is to fill space, or at least fill paths in space. There’s even a game named Hex, slightly older than Y, with similar rules. In that, the goal is to draw a continuous path from one end of the rectangular grid to the other. The commercial boards that I see are around nine hexagons on a side. This probably reflects a desire to have a big enough board that games go on a while, but not so big that they go on forever.

Mathematicians have things to say about this game. It fits nicely in game theory. It’s well-designed to show some things about game theory. It’s a game of perfect information, for example. Each player knows, at all times, the moves all the players have made. Just look at the board and see where they’ve placed their tokens. A player might have forgotten the order the tokens were placed in, but that’s the player’s problem, not the game’s. Anyway in Y, the order of token-placing doesn’t much matter.

It’s also a game of complete information. Every player knows, at every step, what the other player could do. And what objective they’re working towards. One party, thinking enough, could forecast the other’s entire game. This comes close to the joke about the prisoners telling each other jokes by shouting numbers out to one another.

It is also a game in which a draw is impossible. Play long enough and someone must win. This even if both parties are for some reason trying to lose. There are ingenious proofs of this, but we can show it by considering a really simple game. Imagine playing Y on a tiny board, one that’s just one hex on each side. Definitely want to be the first player there.

So now imagine playing a slightly bigger board. Augment this one-by-one-by-one board by one row. That is, here, add two hexes along one of the sides of the original board. So there’s two pieces here; one is the original territory, and one is this one-row augmented territory. Look first at the original territory. Suppose that one of the players has gotten a ‘Y’ for the original territory. Will that player win the full-size board? … Well, sure. The other player can put a token down on either hex in the augmented territory. But there’s two hexes, either of which would make a path that connects the three edges of the board. The first player can put a token down on the other hex in the augmented territory, and now connects all three of the new sides again. First player wins.

All right, but how about a slightly bigger board? So take that two-by-two-by-two board and augment it, adding three hexes along one of the sides. Imagine a player’s won the original territory board. Do they have to win the full-size board? … Sure. The second player can put something in the augmented territory. But there’s again two hexes that would make the path connecting all three sides of the full board. The second player can put a token in one of those hexes. But the first player can put a token in the other of those. First player wins again.

How about a slightly bigger board yet? … Same logic holds. Really the only reason that the first player doesn’t always win is that, at some point, the first player screws up. And this is an existence proof, showing that the first player can always win. It doesn’t give any guidance into how to play, though. If the first player plays perfectly, she’s compelled to win. This is something which happens in many two-player, symmetric games. A symmetric game is one where either player has the same set of available moves, and can make the same moves with the same results. This proof needs to be tightened up to really hold. But it should convince you, at least, that the first player has an advantage.
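You can watch both claims, no draws and a first-player win, play out by brute force. Here’s a sketch in Python on a made-up miniature board of my own: three mutually adjacent cells, each touching two of the three board edges. This is a toy model for illustration, not any commercial board, but a minimax search over every possible game confirms the first player forces the win:

```python
# Tiny hypothetical Y board: three mutually adjacent cells, each
# touching two of the three board edges A, B, C.
EDGES = {0: {"A", "B"}, 1: {"B", "C"}, 2: {"A", "C"}}
ADJ = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}

def wins(cells):
    """True if some connected group of these cells touches all three edges."""
    cells = set(cells)
    seen = set()
    for start in cells:
        if start in seen:
            continue
        comp, stack = set(), [start]   # flood-fill one connected component
        while stack:
            c = stack.pop()
            if c in comp:
                continue
            comp.add(c)
            stack.extend(ADJ[c] & cells)
        seen |= comp
        if set.union(*(EDGES[c] for c in comp)) == {"A", "B", "C"}:
            return True
    return False

def minimax(p1, p2, turn):
    """+1 if player 1 forces a win, -1 if player 2 does, 0 for a draw."""
    if wins(p1):
        return 1
    if wins(p2):
        return -1
    free = {0, 1, 2} - p1 - p2
    if not free:
        return 0  # a full board with no winner would be a draw
    results = [minimax(p1 | {c} if turn == 1 else p1,
                       p2 | {c} if turn == 2 else p2,
                       3 - turn) for c in free]
    return max(results) if turn == 1 else min(results)

print(minimax(set(), set(), 1))  # 1: the first player forces a win
```

On this toy board any two cells of one color are adjacent and between them touch all three edges, so the first player’s second move always finishes a ‘Y’.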

So given that, the question becomes why play this game after you’ve decided who’ll go first? The reason you might play anyway is, what, you have something else to do? And maybe you think you’ll make fewer mistakes than your opponent. One approach often used in symmetric games like this is the “pie rule”. The name comes from the story about how to slice a pie so you and your sibling don’t fight over the results. One cuts the pie, the other gets first pick of the slice, and then you fight anyway. In this game, though, one player makes a tentative first move. The other decides whether they will be Player One, with that first move made, or whether they’ll be Player Two, responding.

There are some neat quirks in the commercial Y games. One is that they don’t actually show hexes, and you don’t put tokens in the middle of hexes. Instead you put tokens on the spots that would be the centers of the hexes, with lines on the board pointing to the neighbors. This makes the board actually a mesh of triangles. This is the dual to the hex grid. It shows a set of vertices, and their connections, instead of hexes and their neighbors. Whether you think the hex grid or this dual makes it easier to tell when you’ve connected all three edges is a matter of taste. It does make the edges less jagged all around.

Another is that there will be three vertices that don’t connect to six others. They connect to five others, instead. Their spaces would be pentagons. As I understand the literature on this, this is a concession to game balance. It makes it easier for one side to fend off a path coming from the center.

It has geometric significance, though. A pure hexagonal grid is a structure that tiles the plane. A mostly hexagonal grid, with a few pentagons mixed in? That can tile the sphere. To cover the whole sphere with hexagons and pentagons you need exactly twelve pentagons. But this, with just the three pentagons? That gives you a space that’s topologically equivalent to a hemisphere, or at least a slice of the sphere. If we imagine the board as covering part of a sphere, then the effect of the handful of pentagon spaces is to bring the “pole” closer to the equator.

So as I say the game seems fun enough to play. And it shows off some of the ways that game theorists classify games. And the questions they ask about games. Is the game always won by someone? Does one party have an advantage? Can one party always force a win? It also shows the kinds of approach game theorists can use to answer these questions. This before they consider whether they’d enjoy playing it.

I came across a little geometry thing that left me unsettled, even as I have to admit it’s correct. Start with a two-dimensional space, or as the hew-mons call it, a plane. Draw a square with sides of length two and centered on the origin. So it has corners at the points with Cartesian coordinates (+1, +1), (+1, -1), (-1, +1), and (-1, -1). Around each of these corners draw a circle of radius 1.

There is some largest circle that you can draw, centered on the origin, the dead center of the square, with Cartesian coordinates (0, 0), and that just touches all of the corners’ circles. It has a radius of exactly $\sqrt{2} - 1$, a little under 0.414.

Now think of the three-dimensional analog. Three-dimensional space. Draw a box with sides all of length two and centered on the origin. So it has corners at the points with Cartesian coordinates (+1, +1, +1), (+1, +1, -1), (+1, -1, +1), (+1, -1, -1), (-1, +1, +1), (-1, +1, -1), (-1, -1, +1), and (-1, -1, -1). Around each of these eight corners draw a sphere of radius 1.

There is some largest sphere that you can draw, centered on the origin, the point with Cartesian coordinates (0, 0, 0), that just touches all of the corners’ spheres. It has a radius of exactly $\sqrt{3} - 1$, a little under 0.732.

Think of the four-dimensional analog. This is harder to sketch. But a four-dimensional hypercube, with each side of length 2 and centered on the origin. So it has corners at the points with Cartesian coordinates (+1, +1, +1, +1), (+1, +1, +1, -1), (+1, +1, -1, +1), (+1, +1, -1, -1), and you know what? Will you let me pretend we listed all sixteen corners? Thanks. Around each of these corners draw a hypersphere of radius 1.

There is some largest hypersphere you can draw, centered on the origin, the point with Cartesian coordinates (0, 0, 0, 0), and that just touches all of these corners’ hyperspheres. It has a radius of exactly 1.

Keep going. Five-dimensional space, with corners like (+1, +1, +1, +1, +1). Six-dimensional space, with corners like (+1, +1, +1, +1, +1, +1). Seven-dimensional space. And so on.

Eventually, once there are enough dimensions, the radius of this largest-touching hypersphere is bigger than 2. That is, the central hypersphere reaches out more than twice as far as the original box goes, even though the corner hyperspheres line the edges of the box and touch one another along those edges.
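All these numbers come from one little formula. In $n$ dimensions the corner of the box sits at distance $\sqrt{n}$ from the origin, and the corner sphere has radius 1, so the central sphere can reach out $\sqrt{n} - 1$. A few lines of Python show the march past 2:

```python
import math

# In n dimensions the corner (1, 1, ..., 1) sits sqrt(n) from the origin.
# The corner spheres have radius 1, so the central sphere's radius is
# sqrt(n) - 1.
for n in range(2, 12):
    r = math.sqrt(n) - 1
    note = "  <- pokes out past the box" if r > 2 else ""
    print(f"{n:2d} dimensions: radius {r:.3f}{note}")
```

Two dimensions gives 0.414, three gives 0.732, four gives exactly 1, and from ten dimensions on the radius exceeds 2.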

Non-Euclidean geometry has the reputation of holding deep, inscrutable mysteries. To say something is a non-Euclidean space, outside of a mathematical context, is to designate it as a place immune to reason and beyond human comprehension. This is not such a case. This is a completely Euclidean space; it’s just got a lot of dimensions to it. Strange things will happen.

Another weird, but to me not so unsettling matter, concerns the surface (or hypersurface) area and the volume of these spheres. The circumference of a unit circle is, famously, 2π. The area of a unit sphere is 4π. For a four-dimensional hypersphere the surface area is a bit bigger yet. And bigger again for five and six and seven dimensions. But at eight dimensions the surface area starts shrinking again, and it never grows again. Have a great enough number of dimensions and the unit hypersphere has almost zero surface area. The volume of a unit circle is π. Of a unit sphere, $\frac{4\pi}{3}$. For a four-dimensional hypersphere, $\frac{\pi^2}{2}$. For a five-dimensional hypersphere, $\frac{8\pi^2}{15}$. It is never so large again; for six or more dimensions the volume starts to shrink again. As the number of dimensions of space grows, the volume of the unit hypersphere dwindles to zero.

You know, that’s unsettling me more now that I’m paying attention to it.
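If you want to see the rise and fall for yourself, the standard formulas, volume $\pi^{n/2} / \Gamma(\frac{n}{2} + 1)$ and surface area $2\pi^{n/2} / \Gamma(\frac{n}{2})$ for the unit hypersphere in $n$ dimensions, are easy to tabulate:

```python
import math

def volume(n):
    """Volume of the unit ball in n dimensions: pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def surface(n):
    """Surface area of the unit sphere in n dimensions: 2 pi^(n/2) / Gamma(n/2)."""
    return 2 * math.pi ** (n / 2) / math.gamma(n / 2)

for n in range(1, 12):
    print(f"{n:2d} dimensions: volume {volume(n):7.3f}, surface {surface(n):7.3f}")
```

The volume peaks at five dimensions, at about 5.264, and the surface area peaks at seven dimensions, at about 33.073; after that, both dwindle forever.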

Now let me discuss the comic strips from last week with some real meat to their subject matter. There weren’t many: after Wednesday of last week there were only casual mentions of any mathematics topic. But one of the strips got me quite excited. You’ll know which soon enough.

Mac King and Bill King’s Magic in a Minute for the 10th uses everyone’s favorite topological construct to do a magic trick. This one uses a neat quirk of the Möbius strip: that if sliced along the center of its continuous loop you get not two separate shapes but a single loop of twice the length, now with two full twists in it. There are more astounding feats possible. If the strip were cut one-third of the way from an edge it would slice the strip into two shapes, one a thinner Möbius strip and one a longer two-sided loop, linked together.

Or consider not starting with a Möbius strip. Make the strip of paper by taking one end and twisting it twice around, for a full loop, before taping it to the other end. Slice this down the center and what results are two interlinked rings. Or place three twists in the original strip of paper before taping the ends together. Then, the shape, cut down the center, unfolds into a trefoil knot. But this would take some expert hand work to conceal the loops from the audience while cutting. It’d be a neat stunt if you could stage it, though.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 10th uses mathematics as obfuscation. We value mathematics for being able to make precise and definitely true statements. And for being able to describe the world with precision and clarity. But this has got the danger that people hear mathematical terms and tune out, trusting that the point will be along soon after some complicated talk.

The formulas on the blackboard are nearly all legitimate, and correct, formulas for the value of π. The upper-left and the lower-right formulas are integrals, and ones that correspond to particular trigonometric formulas. The middle-left and the upper-right formulas are series, the sums of infinitely many terms. The one in the upper right, $\frac{\pi^2}{6} = \sum_{n=1}^{\infty} \frac{1}{n^2}$, was roughly proven by Leonhard Euler. Euler developed a proof that’s convincing, but that assumed that infinitely-long polynomials behave just like finitely-long polynomials. In this context, he was correct, but this can’t be generally trusted to happen. We’ve got proofs that, to our eyes, seem rigorous enough now.
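Euler’s series converges, though slowly. A quick check in Python, summing the first hundred thousand terms, which trails the limit by roughly one part in a hundred thousand:

```python
import math

# Partial sum of 1/n^2 over the first 100,000 terms, versus pi^2 / 6.
total = sum(1.0 / (n * n) for n in range(1, 100_001))
print(total)             # agrees with the limit to about five decimal places
print(math.pi ** 2 / 6)  # the value Euler found
```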

The center-left formula doesn’t look correct to me. To my eye, it looks like a mistaken representation of some familiar formula, but it’s obscured by Haskins’s head. It may be that this formula’s written in a format that, in full, would be correct. There are many, many formulas for π (here’s Mathworld’s page of them and here’s Wikipedia’s page of π formulas); it’s impossible to list them all.

The center-right formula is interesting because, in part, it looks weird.

That looks at first glance like something’s gone wrong with one of those infinite-product series for π. Not so; this is a notation used for continued fractions. A continued fraction has a string of denominators that are typically some whole number plus another fraction. Often the denominator of that fraction will itself be a whole number plus another fraction. This gets to be typographically challenging. So we have this notation instead. Its syntax is that

$$ \underset{i=1}{\overset{\infty}{\mathrm{K}}} \frac{a_i}{b_i} = \cfrac{a_1}{b_1 + \cfrac{a_2}{b_2 + \cfrac{a_3}{b_3 + \cdots}}} $$
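The typographic pain is real, but evaluating a continued fraction is pleasantly mechanical: work from the bottom up. A sketch in Python, using the start of the simple continued fraction for π, whose terms begin 3; 7, 15, 1, 292, …, as the example:

```python
from fractions import Fraction

def continued_fraction(terms):
    """Evaluate a simple continued fraction [a0; a1, a2, ...] from the bottom up."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

# Truncating the continued fraction for pi gives famous approximations:
print(continued_fraction([3, 7]))         # 22/7
print(continued_fraction([3, 7, 15, 1]))  # 355/113, good to six decimal places
```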

There are many attractive formulas for π. It’s tempting to say this is because π is such a lovely number it naturally has beautiful formulas. But more likely humans are so interested in π that we go looking for formulas with some appealing sequence to them. There are some awful-looking formulas out there too. I don’t know your tastes, but for me, my heart cools when I see π written as four divided by some sprawling, unlovely expression, however much I might admire the ingenuity which found that relationship, and however efficiently it may calculate digits of π.

Glenn McCoy and Gary McCoy’s The Duplex for the 13th uses skill at arithmetic as shorthand for proving someone’s a teacher. There’s clearly some implicit idea that this is a school teacher, probably an elementary school one without a particular specialty. But it is only three panels; they have to get the joke done, after all.

I knew by Thursday this would be a brief week. The number of mathematically-themed comic strips has been tiny. I’m not upset, as the days turned surprisingly full on me once again. At some point I would have to stop being surprised that every week is busier than I expect, right?

Anyway, the week gives me plenty of chances to look back to 1936, which is great fun for people who didn’t have to live through 1936.

Elzie Segar’s Thimble Theatre rerun for the 28th of October is part of the story introducing Eugene the Jeep. The Jeep has astounding powers which, here, are finally explained as being due to it being a fourth-dimensional creature. Or at least able to move into the fourth dimension. This is amazing for how it shows off the fourth dimension being something you could hang a comic strip plot on, back in the day. (Also back in the day, humor strips with ongoing plots that might run for months were very common. The only syndicated strips like it today are Gasoline Alley, Alley Oop, the current storyline in Safe Havens where they’ve just gone and terraformed Mars, and Popeye, rerunning old daily stories.) The Jeep has many astounding powers, including that he can’t be kept inside — or outside — anywhere against his will, and he’s able to forecast the future.

Could there be a fourth-dimensional animal? I dunno, I’m not a dimensional biologist. It seems like we need a rich chemistry for life to exist. Lots of compounds, many of them long and complicated ones. Can those exist in four dimensions? I don’t know the quantum mechanics of chemical formation well enough to say. I think there are obvious problems. Electrical attraction and repulsion would fall off much more rapidly with distance than they do in three-dimensional space, with the cube of the distance rather than the square. This seems like it argues chemical bonds would be weaker things, which generically makes for weaker chemical compounds. So probably a simpler chemistry. On the other hand, what’s interesting in organic chemistry is the shapes of molecules, and four dimensions of space offer plenty of room for neat shapes to form. So maybe that compensates for the weaker chemical bonds. I don’t know.

But if we take the premise as given, that there is a four-dimensional animal? With some minor extra assumptions then yeah, the Jeep’s powers fit well enough. Not being able to be enclosed follows almost naturally. You, a three-dimensional being, can’t be held against your will by someone tracing a line on the floor around you. The Jeep — if the fourth dimension is as easy to move through as the third — has the same ability.

Forecasting the future, though? We have a long history of treating time as “the” fourth dimension. There’s ways that this makes good organizational sense. But we do have to treat time as somehow different from space, even to make, for example, general relativity work out. If the Jeep can see and move through time? Well, yeah, then if he wants he can check on something for you, at least if it’s something whose outcome he can witness. If it’s not, though? Well, maybe the flow of events from the fourth dimension is more obvious than it is from a mere three, in the way that maybe you can spot something coming down the creek easily, from above, in a way that people on the water can’t tell.

Olive Oyl and Popeye use the Jeep to tease one another, asking for definite answers about whether the other is cute or not. This seems outside the realm of things that the fourth dimension could explain. In the 1960s cartoons he even picks up the power to electrically shock offenders; I don’t remember if this was in the comic strips at all.

Elzie Segar’s Thimble Theatre rerun for the 29th of October has Wimpy doing his best to explain the fourth dimension. I think there’s a warning here for mathematics popularizers. He gets off to a fair start and then it all turns into a muddle. Explaining the fourth dimension in terms of the three dimensions we’re familiar with seems like a good start. Appealing to our intuition to understand something we have to reason about has a long and usually successful history. But then Wimpy goes into a lot of talk about the mystery of things, and it feels like it’s all an appeal to the strangeness of the fourth dimension. I don’t blame Popeye for not feeling it’s cleared anything up. Segar would come back, in this storyline, to several other attempted explanations of the Jeep’s powers, although they do come back around to, y’know, it’s a magical animal. They’re all over the place in the Popeye comic universe.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 28th of October is a riff on predictability and encryption. Good encryption schemes rely on randomness. Concealing the content of a message means matching it to an alternate message. Each of the alternate messages should be equally likely to be transmitted. This way, someone who hasn’t got the key would not be able to tell what’s being sent. The catch is that computers do not truly do randomness. They mostly rely on quasirandom schemes that could, in principle, be detected and spoiled. There are ways to get randomness, mostly involving putting in something from the real world. Sensors that detect tiny fluctuations in temperature, for example, or radio detectors. I recall one company going for style and using a wall of lava lamps, so that the rise and fall of lumps were in some way encoded into unpredictable numbers.
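The pseudorandom-versus-real distinction is easy to see from a program. A seeded generator replays the exact same stream every time, which is just what an attacker hopes for; an entropy-backed source does not. A small Python sketch:

```python
import random
import secrets

# Two generators with the same seed produce the same "random" stream:
a = random.Random(42)
b = random.Random(42)
print([a.randrange(256) for _ in range(5)])
print([b.randrange(256) for _ in range(5)])  # the identical list

# An entropy-backed source draws on the operating system's pool of
# real-world noise, and differs on every run:
print(secrets.token_hex(8))
```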

Robb Armstrong’s JumpStart for the 2nd of November is a riff on the Birthday “Paradox”, the thing where you’re surprised to find someone shares a birthday with you. (I have one small circle of friends featuring two people who share my birthday, neatly enough.) Paradox is in quotes because it defies only intuition, not logic. The logic is clear that you need only a couple dozen people before some pair will probably share a birthday. Marcie goes overboard in trying to guess how many people at her workplace would share their birthday on top of that. Birthdays are nearly uniformly spread across all days of the year. There are slight variations; September birthdays are a little more likely than, say, April ones; the 13th of any month is a less likely birthday than the 12th or the 24th are. But this is a minor correction, aptly ignored when you’re doing a rough calculation. With 615 birthdays spread out over the year you’d expect the average day to be the birthday of about 1.7 people. (To be not silly about this, a ten-day span should see about 17 birthdays.) However, there are going to be “clumps”, days where three or even four people have birthdays. There will be gaps too, single days nobody has a birthday, even streaks of days without one. If there weren’t a fair number of days with a lot of birthdays, and days with none, we’d have to suspect birthdays weren’t random here.
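Those clumps and gaps are easy to simulate. Here’s a Python sketch throwing 615 made-up birthdays uniformly at 365 days, ignoring leap days and the calendar’s slight seasonal lean:

```python
import random

random.seed(615)  # a fixed seed, so this illustration is reproducible

DAYS, PEOPLE = 365, 615
counts = [0] * DAYS
for _ in range(PEOPLE):
    counts[random.randrange(DAYS)] += 1  # assign one birthday at random

print("average birthdays per day:", round(PEOPLE / DAYS, 2))  # about 1.69
print("days with no birthday:", counts.count(0))
print("days with 4 or more:", sum(1 for c in counts if c >= 4))
```

A typical run sees dozens of empty days and dozens of days with four-or-more birthdays, even though the average day gets fewer than two.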

There were also a handful of comic strips just mentioning mathematics, which I can’t write about in any depth. Here are two.

I hope to have proper comment about it in the usual Sunday Reading the Comics post. But the “current” storyline in Elzie Segar’s Thimble Theatre comic strip — Popeye to normal people — is the 1936 introduction of Eugene the Jeep. If you’ve looked at my user icon here you know I like Eugene.

Anyway, Eugene the Jeep has wondrous powers. These include the power of prophecy and the power to disappear from even enclosed spaces. Segar’s explanation for this was that the Jeep can turn into the fourth dimension and so do things we can’t hope to do. Which is a fun premise, yes. More, though, it’s got to be a pretty early use of the fourth or other high dimensions in pop culture. Yes, there were some things normal people might know that talk about higher dimensions. H G Wells’s The Time Machine starts with talk about time as a dimension like space. Edwin Abbott’s Flatland is explicitly about two- and three-dimensional spaces, although its narrator, A Square, wonders whether there could be four- or more-dimensional spaces.

Wikipedia helps me find a few pieces of literature mentioning the fourth dimension before Eugene the Jeep, and a few pieces of visual art as well. Neither list mentions earlier comic strips, though, nor Eugene the Jeep. So, all I can say is this is an early pop cultural appearance of the fourth dimension. I can’t say it’s the first, even among major comic strips.

Do not try to use this to pass your geometry quals.

I got a good nomination for a Q topic, thanks again to goldenoj. It was for Qualitative/Quantitative. Either would be a good topic, but they make a natural pairing. They describe the things mathematicians look for when modeling things. But ultimately I couldn’t find an angle that I liked. So rather than carry on with an essay that wasn’t working I went for a topic of my own. Might come back around to it, though, especially if nothing good presents itself for the letter X, which will probably need to be a wild card topic anyway.

Quadrature.

We like comparing sizes. I talked about that some with norms. We do the same with shapes, though. We’d like to know which one is bigger than another, and by how much. We rely on squares to do this for us. It could be any shape, but we in the western tradition chose squares. I don’t know why.

My guess, unburdened by knowledge, is the ancient Greek tradition of looking at the shapes one can make with straightedge and compass. The easiest shape these tools make is, of course, the circle. But it’s hard to find a circle with the same area as, say, any old triangle. Squares are probably a next-best thing. I don’t know why not equilateral triangles or hexagons. Again I would guess that the ancient Greeks had more rectangular or square rooms than they did triangular or hexagonal ones, and went with what they knew.

So that’s what lurks behind that word “quadrature”. It may be hard for us to judge whether this pentagon is bigger than that octagon. But if we find squares that are the same size as the pentagon and the octagon, great. We can spot which of the squares is bigger, and by how much.

Straightedge-and-compass lets you find the quadrature for many shapes. Like, take a rectangle. Let me call that ABCD. Let’s say that AB is one of the long sides and BC one of the short sides. OK. Extend AB, outwards, to another point that I’ll call E. Pick E so that the length of BE is the same as the length of BC.

Next, bisect the line segment AE. Call that point F. F is going to be the center of a new semicircle, one with radius FE. Draw that in, on the side of AE that’s opposite the point C. Because we are almost there.

Extend the line segment CB upwards, until it touches this semicircle. Call the point where it touches G. The line segment BG is the side of a square with the same area as the original rectangle ABCD. If you know enough straightedge-and-compass geometry to do that bisection, you know enough to turn BG into a square. If you’re not sure why that’s the correct length, you can get there quickly. Use a little algebra and the Pythagorean theorem.
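If the algebra doesn’t leap out at you, a numeric check is quick. Take a rectangle with made-up sides $a = 5$ and $b = 2$ and trace the construction:

```python
import math

# Numeric check of the construction, for a rectangle with sides a = AB
# and b = BC (arbitrary example values):
a, b = 5.0, 2.0

# E extends AB so that BE = b; F is the midpoint of AE, the semicircle's center.
AE = a + b
FE = AE / 2       # the semicircle's radius
FB = AE / 2 - b   # distance from the center F to the point B

# G is where the perpendicular at B meets the semicircle, so by the
# Pythagorean theorem:
BG = math.sqrt(FE**2 - FB**2)

print(BG**2, a * b)  # both equal: the square on BG matches the rectangle's area
```

The algebra behind it: $\left(\frac{a+b}{2}\right)^2 - \left(\frac{a-b}{2}\right)^2 = ab$, so the square with side BG has exactly the rectangle’s area.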

Neat, yeah, I agree. Also neat is that you can use the same trick to find the area of a parallelogram. A parallelogram has the same area as a rectangle with the same base and height between them, you remember. So take your parallelogram, draw in some perpendiculars to shear that off into a rectangle, and find the quadrature of that rectangle. Then you’ve got the quadrature of your parallelogram.

Having the quadrature of a parallelogram lets you find the quadrature of any triangle. Pick one of the sides of the triangle as the base. You have a third point not on that base. Draw in the parallel to that base that goes through that third point. Then choose one of the other two sides. Draw the parallel to that side which goes through the other point. Look at that: you’ve got a parallelogram with twice the area of your original triangle. Bisect either the base or the height of this parallelogram, as you like. Then follow the rules for the quadrature of a parallelogram, and you have the quadrature of your triangle. Yes, you’re doing a lot of steps in-between the triangle you started with and the square you ended with. Those steps don’t count, not by this measure. Getting the results right matters.

And here’s some more beauty. You can find the quadrature for any polygon. Remember how you can divide any polygon into triangles? Go ahead and do that. Find the quadrature for every one of those triangles then. And you can create a square that has an area as large as all those squares put together. I’ll refrain from saying quite how, because realizing how is such a delight, one of those moments that at least made me laugh at how of course that’s how. It’s through one of those things that even people who don’t know mathematics know about.

With that background you understand why people thought the quadrature of the circle ought to be possible. Moreso when you know that the lune, a particular crescent-moon-like shape, can be squared. It looks so close to a half-circle that it’s obvious the rest should be possible. It’s not, and it took two thousand years and a completely different idea of geometry to prove it. But it sure looks like it should be possible.

Along the way to modernity quadrature picked up a new role. This is as part of calculus. One of the legs of calculus is integration. There is an interpretation of what the (definite) integral of a function means so common that we sometimes forget it doesn’t have to be that. This is to say that the integral of a function is the area “underneath” the curve. That is, it’s the area bounded by the limits of integration, by the horizontal axis, and by the curve represented by the function. If the function is sometimes less than zero, within the limits of integration, we’ll say that the integral represents the “net area”. Then we allow that the net area might be less than zero. Then we ignore the scolding looks of the ancient Greek mathematicians.

No matter. We love being able to find “the” integral of a function. This is a new function, and evaluating it tells us what this net area bounded by the limits of integration is. Finding this is “integration by quadrature”. At least in books published back when they wrote words like “to-day” or “coördinate”. My experience is that the term’s passed out of the vernacular, at least in North American Mathematician’s English.

Anyway the real flaw is that there are, like, six functions we can find the integral for. For the rest, we have to make do with approximations. This gives us “numerical quadrature”, a phrase which still has some currency.

And with my prologue about compass-and-straightedge quadrature you can see why it’s called that. Numerical integration schemes often rely on finding a polygon with an edge that looks like the graph of the function you’re interested in. The other edges look like the limits of the integration. Then the area of that polygon should be close to the area “underneath” this function. So it should be close to the integral of the function you want. And we’re old hands at the quadrature of polygons, since we talked that out like five hundred words ago.

Now, no person ever has or ever will do numerical quadrature by compass-and-straightedge on some function. So why call it “numerical quadrature” instead of just “numerical integration”? Style, for one. “Quadrature” as a word has a nice tone, clearly jargon but not threateningly alien. Also “numerical integration” often connotes solving differential equations numerically. So it can clarify whether you’re evaluating integrals or solving differential equations. If you think that’s a distinction worth making. Evaluating integrals and solving differential equations are closely related activities anyway.

And there is another adjective that often attaches to quadrature. This is Gaussian Quadrature. Gaussian Quadrature is, in principle, a fantastic way to do numerical integration perfectly. For some problems. For some cases. The insight which justifies it to me is one of those boring little theorems you run across in the chapter introducing How To Integrate. It runs something like this. Suppose ‘f’ is a continuous function, with domain the real numbers and range the real numbers. Suppose a and b are the limits of integration. Then there’s at least one point c, between a and b, for which:

\int_{a}^{b} f(x) \, dx = f(c) \cdot (b - a)

So if you could pick the right c, any integration would be so easy. Evaluate the function for one point and multiply it by whatever b minus a is. The catch is, you don’t know what c is.

Except there’s some cases where you kinda do. Like, if f is a line, rising or falling with a constant slope from a to b? Then have c be the midpoint of a and b.

That won’t always work. Like, if f is a parabola on the region from a to b, then c is not going to be the midpoint. If f is a cubic, then the midpoint is probably not c. And so on. And if you don’t know what kind of function f is? There’s no guessing where c will be.
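Here’s a toy illustration of that one-point trick, my own sketch rather than anything standard: for a straight line the midpoint guess gives the integral exactly, while for a parabola it misses.

```python
def midpoint_rule(f, a, b):
    """One-point quadrature: guess that the 'right' point c is the midpoint."""
    c = (a + b) / 2
    return f(c) * (b - a)

# For a line the guess is exact: the integral of 2x + 1 on [0, 4] is 20.
line = midpoint_rule(lambda x: 2 * x + 1, 0.0, 4.0)
# For a parabola it is not: the integral of x^2 on [0, 2] is 8/3,
# but the midpoint guess gives f(1) * 2 = 2.
parab = midpoint_rule(lambda x: x * x, 0.0, 2.0)
```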

But. If you decide you’re only trying to integrate certain kinds of functions? Then you can do all right. If you decide you only want to integrate polynomials, for example, then … well, you’re not going to find a single point c for this. But what you can find is a set of points between a and b. Evaluate the function for those points. And then find a weighted average by rules I’m not getting into here. And that weighted average will be exactly that integral.

Of course there’s limits. The Gaussian Quadrature of a function is only possible if you can evaluate the function at arbitrary points. If you’re trying to integrate, like, a set of sample data, it’s inapplicable. The points you pick, and the weighting to use, depend on what kind of function you want to integrate. The results will be worse the less your function is like what you supposed. It’s tedious to find what these points are for a particular assumption of function. But you only have to do that once, or look it up, if you know (say) you’re going to use polynomials of degree up to six or something like that.
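A minimal sketch of the scheme, using the standard two-point Gauss-Legendre rule on the interval from -1 to 1 (the nodes sit at plus and minus one over the square root of three, and both weights are 1). Those two evaluations recover the integral of any polynomial up to degree three, up to floating-point error.

```python
import math

def gauss_legendre_2(f):
    """Two-point Gauss-Legendre quadrature on [-1, 1].
    Nodes at +-1/sqrt(3), both weights 1: exact for polynomials of degree <= 3."""
    c = 1.0 / math.sqrt(3.0)
    return f(-c) + f(c)

# The integral of x^3 + x^2 + 1 over [-1, 1] is 0 + 2/3 + 2 = 8/3,
# and two function evaluations give exactly that.
value = gauss_legendre_2(lambda x: x**3 + x**2 + 1)
```

The odd-degree terms cancel between the two symmetric nodes, and the node placement is chosen so the even-degree terms come out right; that’s the “weighted average by rules I’m not getting into here”.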

And there are variations on this. They have names like the Chebyshev-Gauss Quadrature, or the Hermite-Gauss Quadrature, or the Jacobi-Gauss Quadrature. There are even some that don’t have Gauss’s name in them at all.

Despite that, you can get through a lot of mathematics not talking about quadrature. The idea implicit in the name, that we’re looking to compare areas of different things by looking at squares, is obsolete. It made sense when we worked with numbers that depended on units. One would write about a shape’s area being four times another shape’s, or the length of its side some multiple of a reference length.

We’ve grown comfortable thinking of raw numbers. It makes implicit the step where we divide the polygon’s area by the area of some standard reference unit square. This has advantages. We don’t need different vocabulary to think about integrating functions of one or two or ten independent variables. We don’t need wordy descriptions like “the area of this square is to the area of that as the second power of this square’s side is to the second power of that square’s side”. But it does mean we don’t see squares as intermediaries to understanding different shapes anymore.

Today’s A To Z term is another from goldenoj. It was just the proposal “Platonic”. Most people, prompted, would follow that adjective with one of three words. There’s relationship, ideal, and solid. Relationship is a little too far off of mathematics for me to go into here. Platonic ideals run very close to mathematics. Probably the default philosophy of western mathematics is Platonic. At least a folk Platonism, where the rest of us follow what the people who’ve taken the study of mathematical philosophy seriously seem to be doing. The idea that mathematical constructs are “real things” and have some “existence” that we can understand even if we will never see a true circle or an unadulterated four. Platonic solids, though, those are nice and familiar things. Many of them we can find around the house. That’s one direction to go.

Platonic.

Before I get to the Platonic Solids, though, I’d like to think a little more about Platonic Ideals. What do they look like? I gather our friends in the philosophy department have debated this question a while. So I won’t pretend to speak as if I had actual knowledge. I just have an impression. That impression is … well, something simple. My reasoning is that the Platonic ideal of, say, a chair has to have all the traits that every chair ever has. And there’s not a lot that every chair has. Whatever’s in the Platonic Ideal chair has to be just the things that every chair has, and to omit the things that not every chair has.

That’s comfortable to me, thinking like a mathematician, though. I think mathematicians train to look for stuff that’s very generally true. This will tend to be things that have few properties to satisfy. Things that look, in some way, simple.

So what is simple in a shape? There’s no avoiding aesthetic judgement here. We can maybe use two-dimensional shapes as a guide, though. Polygons seem nice. They’re made of line segments which join at vertices. Regular polygons even nicer. Each vertex in a regular polygon connects to two edges. Each edge connects to exactly two vertices. Each edge has the same length. The interior angles are all congruent. And if you get many many sides, the regular polygon looks like a circle.

So there’s some things we might look for in solids. Shapes where every edge is the same length. Shapes where every edge connects exactly two vertices. Shapes where every vertex connects to the same number of edges. Shapes where the interior angles are all constant. Shapes where each face is the same polygon as every other face. Look for that and, in three-dimensional space, we find nine shapes.

Yeah, you want that to be five also. The four extra ones are “star polyhedrons”. They look like spikey versions of normal shapes. What keeps these from being Platonic solids isn’t a lack of imagination on Plato’s part. It’s that they’re not convex shapes. There’s no pair of points in a convex shape for which the line segment connecting them goes outside the shape. For the star polyhedrons, well, look at the ends of any two spikes. If we decide that part of this beautiful simplicity is convexity, then we’re down to five shapes. They’re famous. Tetrahedron, cube, octahedron, icosahedron, and dodecahedron.

I’m not sure why they’re named the Platonic Solids, though. Before you explain to me that they were named by Plato in the dialogue Timaeus, let me say something. They were named by Plato in the dialogue Timaeus. That isn’t the same thing as why they have the name Platonic Solids. I trust Plato didn’t name them “the me solids”, since if I know anything about Plato he would have called them “the Socratic solids”. It’s not that Plato was the first to group them either. At least some of the solids were known long before Plato. I don’t know of anyone who thinks Plato particularly advanced human understanding of the solids.

But he did write about them, and in things that many people remembered. It’s natural for a name to attach to the most famous person writing them. Still, someone had the thought which we follow to group these solids together under Plato’s name. I’m curious who, and when. Naming is often a more arbitrary thing than you’d think. The Fibonacci sequence has been known at latest since Fibonacci wrote about it in 1202. But it could not have that name before 1838, when historian Guillaume Libri gave Leonardo of Pisa the name Fibonacci. I’m not saying that the name “Platonic Solid” was invented in, like, 2002. But traditions that seem age-old can be surprisingly recent.

What is an age-old tradition is looking for physical significance in the solids. Plato himself cleverly matched the solids to the ancient concept of four elements plus a quintessence. Johannes Kepler, whom we thank for noticing the star polyhedrons, tried to match them to the orbits of the planets around the sun. Wikipedia tells me of a 1980s attempt to understand the atomic nucleus using Platonic solids. The attempt even touches me. Along the way to my thesis I looked at uniform charges free to move on the surface of a sphere. It was obvious that if there were four charges they’d move to the vertices of a tetrahedron on the sphere. Similarly, eight charges would go to the vertices of the cube. 20 charges to the vertices of the icosahedron. And so on. The Platonic Solids seem not just attractive but also of some deep physical significance.

They are not the four (or five) elements of ancient Greek atomism. Attractive as it is to think that fire is a bunch of four-sided dice. The orbits of the planets have nothing to do with the Platonic solids. I know too little about the physics of the atomic nucleus to say whether that panned out. However, that it doesn’t even get its own Wikipedia entry suggests something to me. And, in fact, eight charges on the sphere will not settle at the vertices of a cube. They’ll settle on a staggered pattern, two squares turned 45 degrees relative to each other. The shape is called a “square antiprism”. I was as surprised as you to learn that. It’s possible that the Platonic Solids are, ultimately, pleasant to us but not a key to the universe.

The example of the Platonic Solids does give us the cue to look for other families of solids. There are many such. The Archimedean Solids, for example, are again convex polyhedrons. They have faces of two or more regular polygons, rather than the lone one of Platonic Solids. There are 13 of these, with names of great beauty like the snub cube or the small rhombicuboctahedron. The Archimedean Solids have duals. The dual of a polyhedron represents a face of the original shape with a vertex. Faces that meet in the original polyhedron have an edge between their dual’s vertices. The duals to the Archimedean Solids get the name Catalan Solids. This for the Belgian mathematician Eugène Catalan, who described them in 1865. These attract names like “deltoidal icositetrahedron”. (The Platonic Solids have duals too, but those are all Platonic solids too. The tetrahedron is even its own dual.) The star polyhedrons hint us to look at stellations. These are shapes we get by stretching out the edges or faces of a polyhedron until we get a new polyhedron. It becomes a dizzying taxonomy of shapes, many of them with pointed edges.

There are things that look like Platonic Solids in more than three dimensions of space. In four dimensions of space there are six of these, five of which look like versions of the Platonic Solids we all know. The sixth is this novel shape called the 24-cell, or hyperdiamond, or icositetrachoron, or some other wild names. In five dimensions of space? … it turns out there are only three things that look like Platonic Solids. There’s versions of the tetrahedron, the cube, and the octahedron. In six dimensions? … Three shapes, again versions of the tetrahedron, cube, and octahedron. And it carries on like this for seven, eight, nine, any number of dimensions of space. Which is an interesting development. If I hadn’t looked up the answer I’d have expected more dimensions of space to allow for more Platonic Solid-like shapes. Well, our experience with two and three dimensions guides us to thinking about more dimensions of space. It doesn’t mean that they’re just regular space with a note in the corner that “N = 8”. Shapes hold surprises.

Today’s A To Z term is another free choice. So I’m picking a term from the world of … mathematics. There are a lot of norms out there. Many are specialized to particular roles, such as looking at complex-valued numbers, or vectors, or matrices, or polynomials.

Still they share things in common, and that’s what this essay is for. And I’ve brushed up against the topic before.

The norm, also, has nothing particular to do with “normal”. “Normal” is an adjective which attaches to every noun in mathematics. This is security for me as while these A-To-Z sequences may run out of X and Y and W letters, I will never be short of N’s.

Norm.

A “norm” is the size of whatever kind of thing you’re working with. You can see where this is something we look for. It’s easy to look at two things and wonder which is the smaller.

There are many norms, even for one set of things. Some seem compelling. For the real numbers, we usually let the absolute value do this work. By “usually” I mean “I don’t remember ever seeing a different one except from someone introducing the idea of other norms”. For a complex-valued number, it’s usually the square root of the sum of the square of the real part and the square of the imaginary coefficient. For a vector, it’s usually the square root of the vector dot-product with itself. (Dot product is this binary operation that is like multiplication, if you squint, for vectors.) Again, these, the “usually” means “always except when someone’s trying to make a point”.

Which is why we have the convention that there is a “the norm” for a kind of thing. The norm dignified as “the” is usually the one that looks as much as possible like the way we find distances between two points on a plane. I assume this is because we bring our intuition about everyday geometry to mathematical structures. You know how it is. Given an infinity of possible choices we take the one that seems least difficult.

Every sort of thing which can have a norm, that I can think of, is a vector space. This might be my failing imagination. It may also be that it’s quite easy to have a vector space. A vector space is a collection of things with some rules. Those rules are about adding the things inside the vector space, and multiplying the things in the vector space by scalars. These rules are not difficult requirements to meet. So a lot of mathematical structures are vector spaces, and the things inside them are vectors.

A norm is a function that has these vectors as its domain, and the non-negative real numbers as its range. And there are three rules that it has to meet. So. Give me a vector ‘u’ and a vector ‘v’. I’ll also need a scalar, ‘a’. Then the function f is a norm when:

f(u + v) ≤ f(u) + f(v). This is a famous rule, called the triangle inequality. You know how in a triangle, the sum of the lengths of any two legs is greater than the length of the third leg? That’s the rule at work here.

f(a·u) = |a|·f(u). This doesn’t have so snappy a name. Sorry. It’s something about being homogeneous, at least.

If f(u) = 0 then u has to be the additive identity, the vector that works like zero does.

Norms take on many shapes. They depend on the kind of thing we measure, and what we find interesting about those things. Some are familiar. Look at a Euclidean space, with Cartesian coordinates, so that we might write something like (3, 4) to describe a point. The “the norm” for this, called the Euclidean norm or the L^{2} norm, is the square root of the sum of the squares of the coordinates. So, 5. But there are other norms. The L^{1} norm is the sum of the absolute values of all the coefficients; here, 7. The L^{∞} norm is the largest single absolute value of any coefficient; here, 4.

A polynomial, meanwhile? Write it out as a_0 + a_1x + a_2x^{2} + … + a_nx^{n}. Take the absolute value of each of these coefficients. Then … you have choices. You could take those absolute values and add them up. That’s the L^{1} polynomial norm. Take those absolute values and square them, then add those squares, and take the square root of that sum. That’s the L^{2} norm. Take the largest absolute value of any of these coefficients. That’s the L^{∞} norm.
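The same three formulas work whether we feed them a point’s coordinates or a polynomial’s coefficients, which is part of the point. A small sketch in Python (my own function names, nothing standard):

```python
import math

def l1(coeffs):
    """L^1 norm: sum of the absolute values."""
    return sum(abs(c) for c in coeffs)

def l2(coeffs):
    """L^2 norm: square root of the sum of the squares."""
    return math.sqrt(sum(c * c for c in coeffs))

def linf(coeffs):
    """L^infinity norm: the largest single absolute value."""
    return max(abs(c) for c in coeffs)

# The point (3, 4) from the essay: L^2 norm 5, L^1 norm 7, L^infinity norm 4.
point = (3, 4)
# A polynomial, say 1 - 2x + 3x^2, seen as its coefficient list:
poly = (1, -2, 3)   # L^1 norm 6, L^infinity norm 3
```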

These don’t look so different, even though points in space and polynomials seem to be different things. We designed the tool. We want it not to be weirder than it has to be. When we try to put a norm on a new kind of thing, we look for a norm that resembles the old kind of thing. For example, when we want to define the norm of a matrix, we’ll typically rely on a norm we’ve already found for a vector. At least to set up the matrix norm; in practice, we might do a calculation that doesn’t explicitly use a vector’s norm, but gives us the same answer.

If we have a norm for some vector space, then we have an idea of distance. We can say how far apart two vectors are. It’s the norm of the difference between the vectors. This is called defining a metric on the vector space. A metric is that sense of how far apart two things are. What keeps a norm and a metric from being the same thing is that it’s possible to come up with a metric that doesn’t match any sensible norm.

It’s always possible to use a norm to define a metric, though. Doing that promotes our normed vector space to the dignified status of a “metric space”. Many of the spaces we find interesting enough to work in are such metric spaces. It’s hard to think of doing without some idea of size.
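As a sketch of that promotion (again my own hypothetical function names): take the Euclidean norm, define the distance between two vectors as the norm of their difference, and the triangle inequality for the norm becomes the triangle inequality for the metric.

```python
import math

def euclidean_norm(v):
    """The 'the norm' for points in the plane."""
    return math.sqrt(sum(x * x for x in v))

def metric(u, v):
    """The metric a norm gives us: the norm of the difference of two vectors."""
    return euclidean_norm([a - b for a, b in zip(u, v)])

u, v, w = (0, 0), (3, 4), (6, 0)
d_uv = metric(u, v)   # the 3-4-5 right triangle again: distance 5
# Triangle inequality, inherited from the norm:
# going u -> v -> w is never shorter than going u -> w directly.
detour_ok = metric(u, w) <= metric(u, v) + metric(v, w)
```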

Comic Strip Master Command hoped to give me an easy week, one that would let me finally get ahead on my A-to-Z essays and avoid the last-minute rush to complete tasks. I showed them, though. I can procrastinate more than they can give me breaks. This essay alone I’m writing about ten minutes after you read it.

Eric the Circle for the 7th, by Shoy, is one of the jokes where Eric’s drawn as something besides a circle. I can work with this, though, because the cube is less far from a circle than you think. It gets to what we mean by “a circle”. If it’s all the points that are exactly a particular distance from a given center? Or maybe all the points up to that particular distance from a given center? This seems too reasonable to argue with, so you know where the trick is.

The trick is asking what we mean by distance. The ordinary distance that normal people use has a couple names. The Euclidean distance, often. Or Euclidean metric. Euclidean norm. It has some fancier names that can wait. Given two points, you can find this distance easily if you have their coordinates in a Cartesian system. (There’s infinitely many Cartesian systems you could use. You can pick whichever one you like; the distance will be the same whatever you pick.) That’s that thing about finding the difference between corresponding coordinates, squaring those differences, adding the squares up, and taking the square root. And that’s good.

That’s not our only choice, though. We can make a perfectly good distance using other rules. For example, take the difference between corresponding coordinates, take the absolute value of each, and add all those absolute values up. This distance even has real-world application. It’s how far it is to go from one place to another on a grid of city squares, where it’s considered poor form to walk directly through buildings. There’s another. Instead of adding those absolute values up? Just pick the biggest of the absolute values. This is another distance. In it, circles look like squares. Or, in three dimensions, spheres look like cubes.
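All three distances are easy to compute; here’s a small Python sketch (the function names are mine). The city-grid one is often called the taxicab distance and the biggest-single-gap one the Chebyshev distance, which is where the cube-shaped “spheres” come from.

```python
def euclidean(p, q):
    """Ordinary distance: square the coordinate gaps, add, take the root."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def taxicab(p, q):
    """City-grid distance: add up the absolute coordinate gaps."""
    return sum(abs(a - b) for a, b in zip(p, q))

def chebyshev(p, q):
    """Just the biggest single coordinate gap. Its 'circles' are squares."""
    return max(abs(a - b) for a, b in zip(p, q))

p, q = (1, 2), (4, 6)   # coordinate gaps of 3 and 4
# Euclidean: 5. Taxicab: 7. Chebyshev: 4.
distances = (euclidean(p, q), taxicab(p, q), chebyshev(p, q))
```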

Ryan North’s Dinosaur Comics for the 9th builds on a common science fictional premise, that contact with an alien intelligence is done through mathematics first. It’s a common supposition in science fiction circles, and among many scientists, that mathematics is a truly universal language. It’s hard to imagine a species capable of communication with us that wouldn’t understand two and two adding up to four. Or about the ratio of a circle’s circumference to its diameter being independent of that diameter. Or about how an alternating knot for which the minimum number of crossing points is odd can’t ever be amphicheiral.

All right, I guess I can imagine a species that never ran across that point. Which is one of the things we suppose in using mathematics as a universal language. Its truths are indisputable, if we allow the rules of logic and axioms and definitions that we use. And I agree I don’t know that it’s possible not to notice basic arithmetic and basic geometry, not if one lives in a sensory world much like humans’. But it does seem to me at least some of mathematics is probably idiosyncratic. In representation at least; certainly in organization. I suspect there may be trouble in using universal and generically true things to express something local and specific. I don’t know how to go from deductive logic to telling someone when my birthday is. Well, I’m sure our friends in the philosophy department have considered that problem and have some good thoughts we can use, if there were only some way to communicate with them.

Bill Whitehead’s Free Range for the 12th is your classic blackboard-full-of-symbols. I like the beauty of the symbols used. I mean, the whole expression doesn’t parse, but many of the symbols do and are used in reasonable ways. Long trailing strings of arrows to extend one line to another are common and reasonable too. In the middle of the second line is an expression that doesn’t make sense, but which doesn’t make sense in a way that seems authentic to working out an idea. It’s something that could be cleaned up if the reasoning needed to be made presentable.