## Reading the Comics, June 18, 2022: Pizza Edition

I’m back with my longest-running regular feature here. As I’ve warned I’m trying not to include every time one of the newspaper comics (that is, mostly, ones running on Comics Kingdom or GoComics) mentions the existence of arithmetic. So, for example, both Frank and Ernest and Rhymes with Orange did jokes about the names of the kinds of triangles. You can clip those at your leisure; I’m looking to discuss deeper subjects.

Scott Hilburn’s The Argyle Sweater is … well, it’s just an anthropomorphic-numerals joke. I have a weakness for The Wizard of Oz, that’s all. Also, I don’t know, but somewhere in the nine kerspillion authorized books written since Baum’s death there must be at least one with a “wizard of odds” plot.

Bill Amend’s FoxTrot reads almost like a word problem’s setup. There’s a difference in cost between pizzas of different sizes. Jason and Marcus make the supposition that they could buy the difference in sizes. They are asking for something physically unreasonable, but in a way that mathematics problems might do. The ring of pizza they’d be buying would be largely crust, after all. (Some people like crust, but I doubt any are ten-year-olds like Jason and Marcus.) The obvious word problem to spin out of this is extrapolating the costs of 20-inch or 8-inch pizzas, and maybe the base cost of making any pizza however tiny.

You can think of a 16-inch-wide circle as a 12-inch-wide circle with an extra ring around it. (An annulus, we’d say in the trades.) This is often a useful way to look at circles. If you get into calculus you’ll see, all over the place, the extra area you get from a slight increase in the diameter (or, more likely, the radius). And, in three dimensions, the extra volume you get from an increase in diameter. There are also a good number of theorems with names like Green’s and Stokes’s. These are all about what you can know about the interior of a shape, like a pizza, from what you know about the ring around the edge.
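The arithmetic is quick to check, if you like. Here’s a little sketch of my own (the function name is mine, not anything from the strip) comparing the 12-inch pizza with the ring that would upgrade it to 16 inches:

```python
import math

def pizza_area(diameter):
    """Area of a circular pizza with the given diameter, in square inches."""
    return math.pi * (diameter / 2) ** 2

# Think of the 16-inch pizza as the 12-inch pizza plus an annulus around it.
ring = pizza_area(16) - pizza_area(12)
print(round(pizza_area(12), 1))  # 113.1 square inches in the 12-inch pizza
print(round(ring, 1))            # 88.0 square inches in the ring alone
```

The ring is nearly as much pizza as the whole 12-inch pie, which is why Jason and Marcus’s proposal isn’t as silly as it first sounds.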

Jim Meddick’s Monty sees Sedgwick, spoiled scion of New Jersey money, preparing for a mathematics test. He’s allowed the use of an abacus, one of the oldest and best-recognized computational aids. The abacus works by letting us turn the operations of basic arithmetic into physical operations. This has several benefits. We (generally) understand things in space pretty well. And the beads and wires serve as aids to memory, always a struggle. Sedgwick also brings out a “hyperbolic abacus”, a tool for more abstract operations like square roots and sines and cosines. I don’t know of anything by that name, but you can design mechanical tools to do particular computations. Slide rules, for example, generally have markings to let one calculate square roots and cube roots easily. Aircraft pilots might use a flight computer, a set of plastic discs to do quick estimates of flight time, fuel consumption, ground speed, and such. (There’s even an episode of the original Star Trek where Spock fiddles with one!)

I have heard, but not seen, that specialized curves were made to let people square circles with something approximating a compass-and-straightedge method. A contraption to calculate sines and cosines would not be hard to imagine. It would need to be a post on a hinge, mostly, with a set of lines to read off sine and cosine values over a range of angles. I don’t know of one that existed, as it’s easy enough to print out a table of trig functions, but it wouldn’t be hard to make.

And that’s enough for this week. This and all my other Reading the Comics posts should be at this link. I hope to get this back to a weekly column, but that does depend on Comic Strip Master Command doing what’s convenient for me. We’ll see how it turns out.

## Something Neat About Triangles, Again

I apologize for not having anything fresh to share today. It’s been a difficult week, one of many. So I would like to share something from years ago, and something I still find delightful.

I was reading a biography of Donald Coxeter, one of the most important geometers of the 20th century, and it mentioned in passing something Coxeter referred to as Morley’s Miracle Theorem. The theorem was proved in 1899 by Frank Morley, who taught at Haverford College (if that sounds vaguely familiar that’s because you remember it’s where Dave Barry went) and then Johns Hopkins (which may be familiar on the strength of its lacrosse team), and published this in the first issue of the Transactions of the American Mathematical Society. And, yes, perhaps it isn’t actually important, but the result is so unexpected and surprising that I wanted to share it with you. The biography also includes a proof Coxeter wrote for the theorem, one that’s admirably straightforward, but let me show the result without the proof so you can wonder about it.

First, start by drawing a triangle. It doesn’t have to have any particular interesting properties other than existing. I’ve drawn an example one.

The next step is to cut into three equal pieces each of the interior angles of the triangle, and draw those lines. I’m doing that in separate diagrams for each of the triangle’s three original angles because I want to better suggest the process.

I should point out, this trisection of the angles can be done however you like, which is probably going to be by measuring the angles with a protractor and dividing the angle by three. I made these diagrams just by sketching them out, so they aren’t perfect in their measure, but if you were doing the diagram yourself on a sheet of scratch paper you wouldn’t bother getting the protractor out either. (And, famously, you can’t trisect an angle if you’re using just compass and straightedge to draw things, but you don’t have to restrict yourself to compass and straightedge for this.)

Now the next bit is to take the points where adjacent angle trisectors intersect — that is, for example, where the lower red line crosses the lower green line; where the upper red line crosses the left blue line; and where the right blue line crosses the upper green line. Draw lines connecting these points together and …

This new triangle, drawn in purple on my sketch, is an equilateral triangle!

(It may look a little off, but that’s because I didn’t measure the trisectors when I drew them in and just eyeballed it. If I had measured the angles and drawn the new ones in carefully, it would have been perfect.)
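If you’d rather check the miracle with arithmetic than with a protractor, a short computation does it. This is a sketch of my own (every name in it is invented for the occasion): trisect each angle, intersect the trisectors adjacent to each side, and measure the triangle that results.

```python
import math

def sub(u, v): return (u[0] - v[0], u[1] - v[1])

def angle_at(p, q, r):
    """Interior angle at vertex p of triangle pqr, by the law of cosines."""
    a = math.hypot(*sub(q, r))   # side opposite p
    b = math.hypot(*sub(p, r))
    c = math.hypot(*sub(p, q))
    return math.acos((b * b + c * c - a * a) / (2 * b * c))

def rotate(d, ang):
    c, s = math.cos(ang), math.sin(ang)
    return (c * d[0] - s * d[1], s * d[0] + c * d[1])

def intersect(p, d1, q, d2):
    """Intersection of the rays p + t*d1 and q + s*d2."""
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    t = ((q[0] - p[0]) * (-d2[1]) + d2[0] * (q[1] - p[1])) / det
    return (p[0] + t * d1[0], p[1] + t * d1[1])

def morley_point(p, q, ap, aq):
    """Where the trisector from p nearest side pq meets the trisector
    from q nearest pq.  Assumes p, q are consecutive vertices of a
    counterclockwise triangle, so the interior is to the left of p->q."""
    d1 = rotate(sub(q, p), ap / 3)    # rotate a third of the angle inward
    d2 = rotate(sub(p, q), -aq / 3)
    return intersect(p, d1, q, d2)

# Any old scalene triangle, vertices listed counterclockwise.
A, B, C = (0.0, 0.0), (7.0, 0.0), (2.0, 5.0)
a, b, c = angle_at(A, B, C), angle_at(B, C, A), angle_at(C, A, B)

P = morley_point(A, B, a, b)   # near side AB
Q = morley_point(B, C, b, c)   # near side BC
R = morley_point(C, A, c, a)   # near side CA

sides = [math.hypot(*sub(P, Q)), math.hypot(*sub(Q, R)), math.hypot(*sub(R, P))]
print(sides)  # all three lengths agree: the Morley triangle is equilateral
```

Swap in any triangle you like for A, B, and C (keeping the vertices counterclockwise) and the three printed lengths stay equal, which is the whole miracle.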

I’ve been thinking back on this and grinning ever since reading it. I certainly didn’t see that punch line coming.

## My Little 2021 Mathematics A-to-Z: Tangent Space

And now, finally, I resume and hopefully finish what was meant to be a simpler and less stressful A-to-Z for last year. I’m feeling much better about my stress loads now and hope that I can soon enjoy the feeling of having a thing accomplished.

This topic is one of many suggestions that Elkement, one of my longest blog-friendships here, offered. It’s a creation that sent me back to my grad school textbooks, some of those slender paperback volumes with tiny, close-set type that turn out to be far more expensive than you imagine. Though not in this case: my most useful reference here was V I Arnold’s Ordinary Differential Equations, stamped inside as costing \$18.75. The field is full of surprises. Another wonderful reference was this excellent set of notes prepared by Jodin Morey. They would have done much to help me through that class.

# Tangent Space

Stand in midtown Manhattan, holding a map of midtown Manhattan. You have — not a tangent space, not yet. A tangent plane, representing the curved surface of the Earth with the flat surface of your map, though. But the tangent space is near: see how many blocks you must go, along the streets and the avenues, to get somewhere. Four blocks north, three west. Two blocks south, ten east. And so on. Those directions, of where you need to go, are the tangent space around you.

There is the first trick in tangent spaces. We get accustomed, early in learning calculus, to think of tangent lines and then of tangent planes. These are nice, flat approximations to some original curve. But while we’re introduced to the tangent space, and first learn examples of it, as tangent planes, we don’t stay there. There are several ways to define tangent spaces. One recasts tangent spaces in group theory terms, describing them as a ring based on functions that are equal to zero at the tangent point. (To be exact, it’s an ideal, based on a quotient group, based on two sets of such functions.)

That’s a description mathematicians are inclined to like, not only because it’s far harder to imagine than a map of the city is. But this ring definition describes the tangent space in terms of what we can do with it, rather than how to calculate finding it. That tends to appeal to mathematicians. And it offers surprising insights. Cleverer mathematicians than I am notice how this makes tangent spaces very close to Lagrange multipliers. Lagrange multipliers are a technique to find the maximum of a function subject to a constraint from another function. They seem to work by magic, and tangent spaces will echo that.

I’ll step back from the abstraction. There are relevant observations to make from this map of midtown. The directions “four blocks north, three west” do not represent any part of Manhattan. They describe a way you might move in Manhattan, yes. But you could move in that direction from many places in the city. And you could go four blocks north and three west if you were in any part of any city with a grid of streets. It is a vector space, with elements that are velocities at a tangent point.

The tangent space is less a map showing where things are and more one of how to get to other places, closer to a subway map than a literal one. Still, the topic is steeped in the language of maps. I’ll find it a useful metaphor too. We do not make a map unless we want to know how to find something. So the interesting question is what do we try to find in these tangent spaces?

There are several routes to tangent spaces. The one I’m most familiar with is through dynamical systems. These are typically physics-driven, sometimes biology-driven, problems. They describe things that change in time according to ordinary differential equations. Physics problems particularly are often about things moving in space. Space, in dynamical systems, becomes “phase space”, an abstract universe spanned by all of the possible values of the variables. The variables are, usually, the positions and momentums of the particles (for a physics problem). Sometimes time and energy appear as variables. In biology variables are often things that represent populations. The role the Earth served in my first paragraph is now played by a manifold. The manifold represents whatever constraints are relevant to the problem. That’s likely to be conservation laws or limits on how often arctic hares can breed or such.

The evolution in time of this system, though, is now the tracing out of a path in phase space. An understandable and much-used system is the rigid pendulum. A stick, free to swing around a point. There are two useful coordinates here. There’s the angle the stick makes, relative to the vertical axis, $\theta$. And there’s how fast that angle is changing, $\dot{\theta}$. You can draw these axes; I recommend $\theta$ as the horizontal and $\dot{\theta}$ as the vertical axis but, you know, you do you.

If you give the pendulum a little tap, it’ll swing back and forth. It rises and moves to the right, then falls while moving to the left, then rises and moves to the left, then falls and moves to the right. In phase space, this traces out an ellipse. It’s your choice whether it’s going clockwise or anticlockwise. If you give the pendulum a huge tap, it’ll keep spinning around and around. It’ll spin a little slower as it gets nearly upright, but it speeds back up again. So in phase space that’s a wobbly line, moving either to the right or the left, depending what direction you hit it.

You can even imagine giving the pendulum just the right tap, exactly hard enough that it rises to vertical and balances there, perfectly aligned so it doesn’t fall back down. This is a special path, the dividing line between those ellipses and that wavy line. Or setting it vertically there to start with and trusting no truck driving down the street will rattle it loose. That’s a very precise dot, where $\dot{\theta}$ is exactly zero. These paths, the trajectories, match whatever walking you did in the first paragraph to get to some spot in midtown Manhattan. And now let’s look again at the map, and the tangent space.
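You can trace these phase-space paths numerically, if you like. Here’s a sketch of my own (the step size and the tap strengths are arbitrary picks), using the pendulum equation in dimensionless form, $\ddot{\theta} = -\sin(\theta)$:

```python
import math

def pendulum_trajectory(theta0, omega0, dt=0.001, steps=20000):
    """Trace a path in phase space for the rigid pendulum
    theta'' = -sin(theta), using semi-implicit Euler steps."""
    theta, omega = theta0, omega0
    path = [(theta, omega)]
    for _ in range(steps):
        omega -= math.sin(theta) * dt   # the tap of gravity
        theta += omega * dt
        path.append((theta, omega))
    return path

little_tap = pendulum_trajectory(0.0, 0.5)  # swings back and forth: an ellipse
huge_tap = pendulum_trajectory(0.0, 3.0)    # spins forever: a wobbly line
print(max(th for th, om in little_tap))     # theta stays bounded
print(max(th for th, om in huge_tap))       # theta grows without end
```

Plot the pairs from `little_tap` and you get the ellipse; plot `huge_tap` and you get the wobbly line marching off to the right.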

Within the tangent space we see what changes would change the system’s behavior. How much of a tap we would need, say, to launch our swinging pendulum into never-ending spinning. Or how much of a tap to stop a spinning pendulum. Every point on a trajectory of a dynamical system has a tangent space. And, for many interesting systems, the tangent space will be separable into two pieces. One of them will be perturbations that don’t go far from the original trajectory. One of them will be perturbations that do wander far from the original.

These regions may have a complicated border, with enclaves and enclaves within enclaves, and so on. This can be where we get (deterministic) chaos from. But what we usually find interesting is whether the perturbation keeps the old behavior intact or destroys it altogether. That is, how we can change where we are going.

That said, in practice, mathematicians don’t use tangent spaces to send pendulums swinging. They tend to come up when one is past studying such petty things as specific problems. They’re more often used in studying the ways that dynamical systems can behave. Tangent spaces themselves often get wrapped up into structures with names like tangent bundles. You’ll see them proving the existence of some properties, describing limit points and limit cycles and invariants and quite a bit of set theory. These can take us surprising places. It’s possible to use a tangent-space approach to prove the fundamental theorem of algebra, that every polynomial has at least one root. This seems to me the long way around to get there. But it is amazing to learn that is a place one can go.

I am so happy to be finally finishing Little 2021 Mathematics A-to-Z. All of this project’s essays should be at this link. And all my glossary essays from every year should be at this link. Thank you for reading.

## From my Seventh A-to-Z: Zero Divisor

Here I stand at the end of the pause I took in 2021’s Little Mathematics A-to-Z, in the hopes of building the time and buffer space to write its last three essays. Have I succeeded? We’ll see next week, but I will say that I feel myself in a much better place than I was in December.

The Zero Divisor closed out my big project for the first plague year. It let me get back to talking about abstract algebra, one of the cores of a mathematics major’s education. And it let me get into graph theory, the unrequited love of my grad school life. The subject also let me tie back to Michael Atiyah, the start of that year’s A-to-Z. Often a sequence will pick up a theme and 2020’s gave a great illusion of being tightly constructed.

Jacob Siehler had several suggestions for this last of the A-to-Z essays for 2020. Zorn’s Lemma was an obvious choice. It’s got an important place in set theory, it’s got some neat and weird implications. It’s got a great name. The zero divisor is one of those technical things mathematics majors have to deal with. It never gets any pop-mathematics attention. I picked the less-travelled road and found a delightful scenic spot.

# Zero Divisor.

3 times 4 is 12. That’s a clear, unambiguous, and easily-agreed-upon arithmetic statement. The thing to wonder is what kind of mathematics it takes to mess that up. The answer is algebra. Not the high school kind, with x’s and quadratic formulas and all. The college kind, with group theory and rings.

A ring is a mathematical construct that lets you do a bit of arithmetic. Something that looks like arithmetic, anyway. It has a set of elements. (An element is just a thing in a set. We say “element” because it feels weird to call it “thing” all the time.) The ring has an addition operation. The ring has a multiplication operation. Addition has an identity element, something you can add to any element without changing the original element. We can call that ‘0’. The integers, or to use the lingo $Z$, are a ring (among other things).

Among the rings you learn, after the integers, is the integers modulo … something. This can be modulo any counting number. The integers modulo 10, for example, we write as $Z_{10}$ for short. There are different ways to think of what this means. The one convenient for this essay is that it’s the integers 0, 1, 2, up through 9. And that the result of any calculation is “how much more than a whole multiple of 10 this calculation would otherwise be”. So then 3 times 4 is now 2. 3 times 5 is 5; 3 times 6 is 8. 3 times 7 is 1, and doesn’t that seem peculiar? That’s part of how modulo arithmetic warns us that groups and rings can be quite strange things.
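These products are easy to check by computation; most programming languages build the modulo operation right in. A minimal sketch:

```python
def mod_mul(a, b, n):
    """Multiply in the ring of integers modulo n."""
    return (a * b) % n

print(mod_mul(3, 4, 10))  # 2
print(mod_mul(3, 5, 10))  # 5
print(mod_mul(3, 6, 10))  # 8
print(mod_mul(3, 7, 10))  # 1, the peculiar one
```
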

We can do modulo arithmetic with any of the counting numbers. Look, for example, at $Z_{5}$ instead. In the integers modulo 5, 3 times 4 is … 2. This doesn’t seem to get us anything new. How about $Z_{8}$? In this, 3 times 4 is 4. That’s interesting. It doesn’t make 3 the multiplicative identity for this ring. 3 times 3 is 1, for example. But you’d never see something like that for regular arithmetic.

How about $Z_{12}$? Now we have 3 times 4 equalling 0. And that’s a dramatic break from how regular numbers work. One thing we know about regular numbers is that if a times b is 0, then either a is 0, or b is zero, or they’re both 0. We rely on this so much in high school algebra. It’s what lets us pick out roots of polynomials. Now? Now we can’t count on that.

When this does happen, when one thing times another equals zero, we have “zero divisors”. These are anything in your ring that can multiply by something else to give 0. Is zero, the additive identity, always a zero divisor? … That depends on what the textbook you first learned algebra from said. To avoid ambiguity, you can write a “nonzero zero divisor”. This clarifies your intentions and slows down your copy editing every time you read “nonzero zero”. Or call it a “nontrivial zero divisor” or “proper zero divisor” instead. My preference is to accept 0 as always being a zero divisor. We can disagree on this. What of zero divisors other than zero?

Your ring might or might not have them. It depends on the ring. The ring of integers $Z$, for example, doesn’t have any zero divisors except for 0. The ring of integers modulo 12 $Z_{12}$, though? Anything that isn’t relatively prime to 12 is a zero divisor. So, 2, 3, 4, 6, 8, 9, and 10 are zero divisors here. The ring of integers modulo 13 $Z_{13}$? That doesn’t have any zero divisors, other than zero itself. In fact any ring of integers modulo a prime number, $Z_{p}$, lacks zero divisors besides 0.
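For rings this small we can just hunt for the zero divisors by brute force. A sketch of my own, deliberately unclever so it matches the definition:

```python
def zero_divisors(n):
    """The nonzero zero divisors in the ring of integers modulo n."""
    return [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]

print(zero_divisors(12))  # [2, 3, 4, 6, 8, 9, 10]
print(zero_divisors(13))  # []: no zero divisors modulo a prime
```
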

Focusing too much on integers modulo something makes zero divisors sound like some curious shadow of prime numbers. There are some similarities. Whether a number is prime depends on your multiplication rule and what set of things it’s in. Being a zero divisor in one ring doesn’t directly relate to whether something’s a zero divisor in any other. Knowing what the zero divisors are tells you something about the structure of the ring.

It’s hard to resist focusing on integers-modulo-something when learning rings. They work very much like regular arithmetic does. Even the strange thing about them, that every result is from a finite set of digits, isn’t too alien. We do something quite like it when we observe that three hours after 10:00 is 1:00. But many sets of elements can create rings. Square matrices are the obvious extension. Matrices are grids of elements, each of which … well, they’re most often going to be numbers. Maybe integers, or real numbers, or complex numbers. They can be more abstract things, like rotations or whatnot, but they’re hard to typeset. It’s easy to find zero divisors in matrices of numbers. Imagine, like, a matrix that’s all zeroes except for one element, somewhere. There are a lot of matrices which, multiplied by that, will be a zero matrix, one with nothing but zeroes in it. Another common kind of ring is the polynomials. For these you need some constraint like the polynomial coefficients being integers-modulo-something. You can make that work.

In 1988 Istvan Beck tried to establish a link between graph theory and ring theory. We now have a usable standard definition of one. If $R$ is any ring, then $\Gamma(R)$ is the zero-divisor graph of $R$. (I know some of you think $R$ is the real numbers. No; that’s a bold-faced $\mathbb{R}$ instead. Unless that’s too much bother to typeset.) You make the graph by putting in a vertex for the elements in $R$. You connect two vertices a and b if the product of the corresponding elements is zero. That is, if they’re zero divisors for one another. (In Beck’s original form, this included all the elements. In modern use, we don’t bother including the elements that are not zero divisors.)

Drawing this graph $\Gamma(R)$ makes tools from graph theory available to study rings. We can measure things like the distance between elements, or what paths from one vertex to another exist. What cycles — paths that start and end at the same vertex — exist, and how large they are. Whether the graphs are bipartite. A bipartite graph is one where you can divide the vertices into two sets, and every edge connects one thing in the first set with one thing in the second. What the chromatic number — the minimum number of colors it takes to make sure no two adjacent vertices have the same color — is. What shape does the graph have?
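Building $\Gamma(R)$ by brute force is quick for small rings. This is a sketch of my own, using the modern convention that only the nonzero zero divisors get vertices:

```python
from itertools import combinations

def zero_divisor_graph(n):
    """Edges of the zero-divisor graph of the integers modulo n."""
    divisors = [a for a in range(1, n)
                if any((a * b) % n == 0 for b in range(1, n))]
    return [(a, b) for a, b in combinations(divisors, 2)
            if (a * b) % n == 0]

print(zero_divisor_graph(12))
# [(2, 6), (3, 4), (3, 8), (4, 6), (4, 9), (6, 8), (6, 10), (8, 9)]
```

Hand that edge list to any graph library and the distances, cycles, and chromatic numbers come out of standard routines.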

It’s easy to think that zero divisors are just a thing which emerges from a ring. The graph theory connection tells us otherwise. You can make a potential zero divisor graph and ask whether any ring could fit that. And, from that, what we can know about a ring from its zero divisors. Mathematicians are drawn as if by an occult hand to things that let you answer questions about a thing from its “shape”.

And this lets me complete a cycle in this year’s A-to-Z, to my delight. There is an important question in topology which group theory could answer. It’s a generalization of the zero-divisors conjecture, a hypothesis about what fits in a ring based on certain types of groups. This hypothesis — actually, these hypotheses; there are a bunch of similar questions about what invariants called the L2-Betti numbers can be. These we call the Atiyah Conjecture, after work Michael Atiyah did in the cohomology of manifolds starting in the 1970s. It’s work, I admit, I don’t understand well enough to summarize, and I hope you’ll forgive me for that. I’m still amazed that one can get to cutting-edge mathematics research like this. It seems, at its introduction, to be only a subversion of how we find x for which $(x - 2)(x + 1) = 0$.

And this, I am amazed to say, completes the All 2020 A-to-Z project. All of this year’s essays should be gathered at this link. In the next couple days I plan to check that they actually are. All the essays from every A-to-Z series, going back to 2015, should be at this link. I plan to soon have an essay about what I learned in doing the A-to-Z this year. And then we can look to 2021 and hope that works out all right. Thank you for reading.

## From my Second A-to-Z: Orthonormal

For early 2016 — dubbed “Leap Day 2016” as that’s when it started — I got a request to explain orthogonal. I went in a different direction, although not completely different. This essay does get a bit more into specifics of how mathematicians use the idea, like, showing some calculations and such. I put in a casual description of vectors here. For book publication I’d want to rewrite that to be clearer that, like, ordered sets of numbers are just one (very common) way to represent vectors.

Jacob Kanev had requested “orthogonal” for this glossary. I’d be happy to oblige. But I used the word in last summer’s Mathematics A To Z. And I admit I’m tempted to just reprint that essay, since it would save some needed time. But I can do something more.

# Orthonormal.

“Orthogonal” is another word for “perpendicular”. Mathematicians use it for reasons I’m not precisely sure of. My belief is that it’s because “perpendicular” sounds like we’re talking about directions. And we want to extend the idea to things that aren’t necessarily directions. As majors, mathematicians learn orthogonality for vectors, things pointing in different directions. Then we extend it to other ideas. To functions, particularly, but we can also define it for spaces and for other stuff.

I was vague, last summer, about how we do that. We do it by creating a function called the “inner product”. That takes in two of whatever things we’re measuring and gives us a real number. If the inner product of two things is zero, then the two things are orthogonal.

The first example mathematics majors learn of this, before they even hear the words “inner product”, are dot products. These are for vectors, ordered sets of numbers. The dot product we find by matching up numbers in the corresponding slots for the two vectors, multiplying them together, and then adding up the products. For example. Give me the vector with values (1, 2, 3), and the other vector with values (-6, 5, -4). The inner product will be 1 times -6 (which is -6) plus 2 times 5 (which is 10) plus 3 times -4 (which is -12). So that’s -6 + 10 - 12 or -8.

So those vectors aren’t orthogonal. But how about the vectors (1, -1, 0) and (0, 0, 1)? Their dot product is 1 times 0 (which is 0) plus -1 times 0 (which is 0) plus 0 times 1 (which is 0). The vectors are perpendicular. And if you tried drawing this you’d see, yeah, they are. The first vector we’d draw as being inside a flat plane, and the second vector as pointing up, through that plane, like a thumbtack.

So that’s orthogonal. What about this orthonormal stuff?

Well … the inner product can tell us something besides orthogonality. What happens if we take the inner product of a vector with itself? Say, (1, 2, 3) with itself? That’s going to be 1 times 1 (which is 1) plus 2 times 2 (4, according to rumor) plus 3 times 3 (which is 9). That’s 14, a tidy sum, although, so what?

The inner product of (-6, 5, -4) with itself? Oh, that’s some ugly numbers. Let’s skip it. How about the inner product of (1, -1, 0) with itself? That’ll be 1 times 1 (which is 1) plus -1 times -1 (which is positive 1) plus 0 times 0 (which is 0). That adds up to 2. And now, wait a minute. This might be something.

Start from somewhere. Move 1 unit to the east. (Don’t care what the unit is. Inches, kilometers, astronomical units, anything.) Then move -1 units to the north, or like normal people would say, 1 unit to the south. How far are you from the starting point? … Well, you’re the square root of 2 units away.

Now imagine starting from somewhere and moving 1 unit east, and then 2 units north, and then 3 units straight up, because you found a convenient elevator. How far are you from the starting point? This may take a moment of fiddling around with the Pythagorean theorem. But you’re the square root of 14 units away.

And what the heck, (0, 0, 1). The inner product of that with itself is 0 times 0 (which is zero) plus 0 times 0 (still zero) plus 1 times 1 (which is 1). That adds up to 1. And, yeah, if we go one unit straight up, we’re one unit away from where we started.

The inner product of a vector with itself gives us the square of the vector’s length. At least if we aren’t using some freak definition of inner products and lengths and vectors. And this is great! It means we can talk about the length — maybe better to say the size — of things that maybe don’t have obvious sizes.

Some stuff will have convenient sizes. For example, they’ll have size 1. The vector (0, 0, 1) was one such. So is (1, 0, 0). And you can think of another example easily. Yes, it’s $\left(\frac{1}{\sqrt{2}}, -\frac{1}{2}, \frac{1}{2}\right)$. (Go ahead, check!)
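All the slot-matching and adding above fits in a couple of lines, if you’d rather let a computer do the multiplying. A sketch:

```python
import math

def dot(u, v):
    """Dot product: multiply matching slots, add up the products."""
    return sum(a * b for a, b in zip(u, v))

def length(u):
    """The inner product of a vector with itself is its length squared."""
    return math.sqrt(dot(u, u))

print(dot((1, 2, 3), (-6, 5, -4)))  # -8, so not orthogonal
print(dot((1, -1, 0), (0, 0, 1)))   # 0, so orthogonal
print(length((1, 2, 3)))            # the square root of 14
print(length((1 / math.sqrt(2), -1 / 2, 1 / 2)))  # 1, or near enough
```

(The last one comes out a floating-point hair away from exactly 1, which is the computer’s problem, not the mathematics’.)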

So by “orthonormal” we mean a collection of things that are orthogonal to each other, and that themselves are all of size 1. It’s a description of both what things are by themselves and how they relate to one another. A thing can’t be orthonormal by itself, for the same reason a line can’t be perpendicular to nothing in particular. But a pair of things might be orthogonal, and they might be the right length to be orthonormal too.

Why do this? Well, the same reasons we always do this. We can impose something like direction onto a problem. We might be able to break up a problem into simpler problems, one in each direction. We might at least be able to simplify the ways different directions are entangled. We might be able to write a problem’s solution as the sum of solutions to a standard set of representative simple problems. This one turns up all the time. And an orthogonal set of something is often a really good choice of a standard set of representative problems.

This sort of thing turns up a lot when solving differential equations. And those often turn up when we want to describe things that happen in the real world. So a good number of mathematicians develop a habit of looking for orthonormal sets.

## From my Seventh A-to-Z: Tiling (the accidental remake)

For the 2020 A-to-Z I took the suggestion to write about tiling. It’s a fun field with many interesting wrinkles. And I realized after publishing that I had already written about Tiling, just two years before. There was no scrambling together a replacement essay, so I had to let it stand as is.

The accidental remake allows for some interesting studies, though. The two essays have very similar structures, which probably reflects that I came to both essays with similar rough ideas what to write, and went to similar sources to fill in details. The second essay turned out longer. Also, I think, better. I did a bit more tracking down specifics, such as trying to find Hao Wang’s paper and see just what it says. And rewriting is often key to good writing. This offers lessons in preparing these essays for book publication.

Mr Wu, author of the Singapore Maths Tuition blog, had an interesting suggestion for the letter T: Talent. As in mathematical talent. It’s a fine topic but, in the end, too far beyond my skills. I could share some of the legends about mathematical talent I’ve received. But what that says about the culture of mathematicians is a deeper and more important question.

So I picked my own topic for the week. I do have topics for next week — U — and the week after — V — chosen. But the letters W and X? I’m still open to suggestions. I’m open to creative or wild-card interpretations of the letters. Especially for X and (soon) Z. Thanks for sharing any thoughts you care to.

# Tiling.

Think of a floor. Imagine you are bored. What do you notice?

What I hope you notice is that it is covered. Perhaps by carpet, or concrete, or something homogeneous like that. Let’s ignore that. My floor is covered in small pieces, repeated. My dining room floor is slats of wood, about three and a half feet long and two inches wide. The slats are offset from the neighbors so there’s a pleasant strong line in one direction and stippled lines in the other. The kitchen is squares, one foot on each side. This is a grid we could plot high school algebra functions on. The bathroom is more elaborate. It has white rectangles about two inches long, tan rectangles about two inches long, and black squares. Each rectangle is perpendicular to ones of the other color, and arranged to bisect those. The black squares fill the gaps where no rectangle would fit.

Move from my house to pure mathematics. It’s easy to turn the floor of a room into abstract mathematics. We start with something to tile. Usually this is the infinite, two-dimensional plane. The thing you get if you have a house and forget the walls. Sometimes we look to tile the hyperbolic plane, a different geometry that we of course represent with a finite circle. (Setting particular rules about how to measure distance makes this equivalent to a funny-shaped plane.) Or the surface of a sphere, or of a torus, or something like that. But if we don’t say otherwise, it’s the plane.

What to cover it with? … Smaller shapes. We have a mathematical tiling if we have a collection of not-overlapping open sets. And if those open sets, plus their boundaries, cover the whole plane. “Cover” here means what “cover” means in English, only using more technical words. These sets — these tiles — can be any shape. We can have as many or as few of them as we like. We can even add markings to the tiles, give them colors or patterns or such, to add variety to the puzzles.

(And if we want, we can do this in other dimensions. There are good “tiling” questions to ask about how to fill a three-dimensional space, or a four-dimensional one, or more.)

Having an unlimited collection of tiles is nice. But mathematicians learn to look for how little we need to do something. Here, we look for the smallest number of distinct shapes. As with tiling an actual floor, we can get all the tiles we need. We can rotate them, too, to any angle. We can flip them over and put the “top” side “down”, something kitchen tiles won’t let us do. Can we reflect them? Use the shape we’d get looking at the mirror image of one? That’s up to whoever’s writing this paper.

What shapes will work? Well, squares, for one. We can prove that by looking at a sheet of graph paper. Rectangles would work too. We can see that by drawing boxes around the squares on our graph paper. Two-by-one blocks, three-by-two blocks, 40-by-1 blocks, these all still cover the paper and we can imagine covering the plane. If we like, we can draw two-by-two squares. Squares made up of smaller squares. Or repeat this: draw two-by-one rectangles, and then group two of these rectangles together to make two-by-two squares.

We can take it on faith that, oh, rectangles π long by e wide would cover the plane too. These can all line up in rows and columns, the way our squares would. Or we can stagger them, like bricks or my dining room’s wood slats are.

How about parallelograms? Those, it turns out, tile exactly as well as rectangles or squares do. Grids or staggered, too. Ah, but how about trapezoids? Surely they won’t tile anything. Not generally, anyway. The slanted sides will, most of the time, only fit in weird winding circle-like paths.

Unless … take two of these trapezoid tiles. We’ll set them down so the parallel sides run horizontally in front of you. Rotate one of them, though, 180 degrees. And try setting them down — let’s say so the longer slanted sides of the two trapezoids meet, edge to edge. These two trapezoids come together. They make a parallelogram, although one with a slash through it. And we can tile parallelograms, whether or not they have a slash.
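If you want to check that pairing with coordinates, here is a minimal sketch in Python. The trapezoid’s vertices are my own made-up example; any trapezoid would do the same trick.

```python
def rotate_180(p, m):
    """Rotate point p by 180 degrees about point m."""
    return (2 * m[0] - p[0], 2 * m[1] - p[1])

# A trapezoid with parallel horizontal sides AB (long) and DC (short).
A, B, C, D = (0, 0), (5, 0), (3, 2), (1, 2)

# Rotate a second copy 180 degrees about the midpoint of the slanted
# side BC.  The rotation swaps B and C, so the copies share that edge.
m = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)
A2, B2, C2, D2 = (rotate_180(p, m) for p in (A, B, C, D))
print(B2 == C, C2 == B)   # True True

# Walking the outside of the combined shape gives corners A, D2, A2, D.
# Opposite sides of a parallelogram are equal as vectors.
def side(p, q):
    return (q[0] - p[0], q[1] - p[1])

print(side(A, D2) == side(D, A2))   # True: the bottom matches the top
print(side(A, D) == side(D2, A2))   # True: the left side matches the right
```

The same vertex arithmetic, with the rotation about the midpoint of a shared side, shows why two copies of any triangle bundle into a parallelogram as well.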

OK, but if you draw some weird quadrilateral shape, and it’s not anything that has a more specific name than “quadrilateral”? That won’t tile the plane, will it?

It will! In one of those turns that surprises and impresses me every time I run across it again, any quadrilateral can tile the plane. It opens up so many home decorating options, if you get in good with a tile maker.

That’s some good news for quadrilateral tiles. How about other shapes? Triangles, for example? Well, that’s good news too. Take two of any identical triangle you like. Turn one of them around and match sides of the same length. The two triangles, bundled together like that, are a quadrilateral. And we can use any quadrilateral to tile the plane, so we’re done.

How about pentagons? … With pentagons, the easy times stop. It turns out not every pentagon will tile the plane. The pentagon has to be of the right kind to fit. There are fifteen families of convex pentagons known to tile the plane; if a pentagon belongs to one of them, it can, and if not, not. The most recent family was discovered in 2015. It’s thought that there are no other convex pentagon tilings, though I don’t know whether the proof of that is generally accepted in tiling circles. And we can do more tilings if the pentagon doesn’t need to be convex. For example, we can cut any parallelogram into two identical pentagons. So we can make as many pentagons as we want to cover the plane. But we can’t assume any pentagon we like will do it.

Hexagons look promising. First, a regular hexagon tiles the plane, as strategy games know. There are also at least three families of irregular hexagons that we know can tile the plane.

And there the good times end. There are no convex heptagons or octagons or any other shape with more sides that tile the plane.

Not by themselves, anyway. If we have more than one tile shape we can start doing fine things again. Octagons assisted by squares, for example, will tile the plane. I’ve lived places with that tiling. Or something that looks like it. It’s easier to install if you have square tiles with an octagon pattern making up the center, and triangle corners a different color. These squares come together to look like octagons and squares.

And this leads to a fun avenue of tiling. Hao Wang, in the early 60s, proposed a sort of domino-like tiling. You may have seen these in mathematics puzzles, or in toys. Each of these Wang Tiles, or Wang Dominoes, is a square. But the square is cut along the diagonals, into four quadrants. Each quadrant is a right triangle. Each quadrant, each triangle, is one of a finite set of colors. Adjacent triangles can have the same color. You can place down tiles, subject only to the rule that the tile edge has to have the same color on both sides. So a tile with a blue right-quadrant has to have on its right a tile with a blue left-quadrant. A tile with a white upper-quadrant on its top has, above it, a tile with a white lower-quadrant.
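The placement rule is simple enough to sketch in a few lines of Python. These tiles and their colors are a toy example of my own, not anyone’s published set:

```python
# A Wang tile, represented by its four quadrant colors.
from collections import namedtuple

Tile = namedtuple("Tile", ["top", "right", "bottom", "left"])

def fits_right_of(left_tile, right_tile):
    """The edge between two tiles must show one color on both sides."""
    return left_tile.right == right_tile.left

def fits_above(upper_tile, lower_tile):
    return upper_tile.bottom == lower_tile.top

a = Tile(top="white", right="blue", bottom="red", left="red")
b = Tile(top="white", right="red", bottom="blue", left="blue")

print(fits_right_of(a, b))  # a's blue right edge meets b's blue left: True
print(fits_right_of(b, a))  # b's red right edge meets a's red left: True
print(fits_above(a, b))     # a's red bottom against b's white top: False
```

The whole game of Wang tilings comes down to which infinite arrangements these little yes-or-no checks permit.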

In 1961 Wang conjectured that if a finite set of these tiles will tile the plane, then there must be a periodic tiling. That is, if you picked up the plane and slid it a set horizontal and vertical distance, it would all look the same again. This sort of translation is common. All my floors do that. If we ignore things like the bounds of their rooms, or the flaws in their manufacture or installation or where a tile broke in some mishap.

This is not to say you couldn’t arrange them aperiodically. You don’t even need Wang Tiles for that. Get two colors of square tile, a white and a black, and lay them down based on whether the next decimal digit of π is odd or even. No; Wang’s conjecture was that if you had tiles that you could lay down aperiodically, then you could also set them down periodically. With the black and white squares, lay down alternating colors. That’s easy.
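You can see the difference between the two rows in a short sketch. The digits of π here are hard-coded rather than computed, and a row of fifty tiles stands in for the infinite plane:

```python
# Black and white square tiles laid down two ways: by the parity of
# pi's digits (aperiodic) and by simple alternation (periodic).
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

aperiodic_row = ["black" if int(d) % 2 else "white" for d in PI_DIGITS]
periodic_row = ["black" if i % 2 else "white" for i in range(len(PI_DIGITS))]

def repeats_with_shift(row, shift):
    """Does sliding the row over by `shift` tiles leave it looking the same?"""
    return all(row[i] == row[i + shift] for i in range(len(row) - shift))

# The alternating row repeats after a shift of two tiles ...
print(repeats_with_shift(periodic_row, 2))   # True
# ... but no small shift leaves the pi-digit row unchanged.
print(any(repeats_with_shift(aperiodic_row, s) for s in range(1, 20)))  # False
```

Wang’s conjecture was that whenever a tile set allows the first kind of row, it also allows the second.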

In 1964, Robert Berger proved Wang’s conjecture was false. He found a collection of Wang Tiles that could only tile the plane aperiodically. In 1966 he published this in the Memoirs of the American Mathematical Society. The 1964 proof was for his thesis. 1966 was its general publication. I mention this because while doing research I got irritated at how different sources dated this to 1964, 1966, or sometimes 1961. I want to have this straightened out. It appears Berger had the proof in 1964 and the publication in 1966.

I would like to share details of Berger’s proof, but haven’t got access to the paper. What fascinates me about this is that Berger’s proof used a set of 20,426 different tiles. I assume he did not work this all out with shards of construction paper, but then, how to get 20,426 of anything? With computer time as expensive as it was in 1964? The mystery of how he got all these tiles is worth an essay of its own, and I regret I can’t write it.

Berger conjectured that a smaller set might do. Quite so. He himself reduced the set to 104 tiles. Donald Knuth in 1968 modified the set down to 92 tiles. In 2015 Emmanuel Jeandel and Michael Rao published a set of 11 tiles, using four colors. And showed by computer search that a smaller set of tiles, or fewer colors, would not force some aperiodic tiling to exist. I do not know whether there might be other sets of 11, four-colored, tiles that work. You can see the set at the top of Wikipedia’s page on Wang Tiles.

These Wang Tiles, all squares, inspired variant questions. Could there be other shapes that only aperiodically tile the plane? What if they don’t have to be squares? Raphael Robinson, in 1971, came up with a tiling using six shapes. The shapes have patterns on them too, usually represented as colored lines. Tiles can be put down only in ways that fit and that make the lines match up.

Among my readers are people who have been waiting, for 1800 words now, for Roger Penrose. It’s now that time. In 1974 Penrose published an aperiodic tiling, one based on pentagons and using a set of six tiles. You’ve never heard of that either, because soon after he found a different set, based on a quadrilateral cut into two shapes. The shapes, as with Wang Tiles or Robinson’s tiling, have rules about what edges may be put against each other. Penrose — and independently Robert Ammann — also developed another set, this based on a pair of rhombuses. These have rules about what edges may touch one another, and have patterns on them which must line up.

The Penrose tiling became, and stayed, famous. (Ammann, an amateur, never had much to do with the mathematics community. He died in 1994.) Martin Gardner publicized it, and it leapt out of mathematicians’ hands into the popular culture. At least a bit. That it could give you nice-looking floors must have helped.

To show that the rhombus-based Penrose tiling is aperiodic takes some arguing. But it uses tools already used in this essay. Remember drawing rectangles around several squares? And then drawing squares around several of these rectangles? We can do that with these Penrose-Ammann rhombuses. From the rhombus tiling we can draw bigger rhombuses. Ones which, it turns out, follow the same edge rules that the originals do. So that we can go again, grouping these bigger rhombuses into even-bigger rhombuses. And into even-even-bigger rhombuses. And so on.

What this gets us is this: suppose the rhombus tiling is periodic. Then there’s some finite-distance horizontal-and-vertical move that leaves the pattern unchanged. So, the same finite-distance move has to leave the bigger-rhombus pattern unchanged. And this same finite-distance move has to leave the even-bigger-rhombus pattern unchanged. Also the even-even-bigger pattern unchanged.

Keep bundling rhombuses together. You get eventually-big-enough-rhombuses. Now, think of how far you have to move the tiles to get a repeat pattern. Especially, think how many eventually-big-enough-rhombuses it is. This distance, the move you have to make, is less than one eventually-big-enough rhombus. (If it’s not, you aren’t eventually-big-enough yet. Bundle them together again.) And that doesn’t work. Moving one tile over without changing the pattern makes sense. Moving one-half a tile over? That doesn’t. So the eventually-big-enough pattern can’t be periodic, and so, the original pattern can’t be either. This is explained in graphic detail in a nice PowerPoint slide set from Professor Alexander F Ritter, A Tour Of Tilings In Thirty Minutes.

It’s possible to do better. In 2010 Joshua E S Socolar and Joan M Taylor published a single tile that can force an aperiodic tiling. As with the Wang Tiles, and Robinson shapes, and the Penrose-Ammann rhombuses, markings are part of it. They have to line up so that the markings — in two colors, in the renditions I’ve seen — make sense. With the Penrose tilings, you can get away from the pattern rules for the edges by replacing them with little notches. The Socolar-Taylor shape can make a similar trade. Here the rules are complex enough that it would need to be a three-dimensional shape, one that looks like the dilithium housing of the warp core. You can see the tile — in colored, marked form, and also in three-dimensional tile shape — at the PDF here. It’s likely not coming to the flooring store soon.

It’s all wonderful, but is it useful? I could go on a few hundred words about, particularly, crystals and quasicrystals. These are important for materials science. Especially these days as we have harnessed slightly-imperfect crystals to be our computers. I don’t care. These are lovely to look at. If you see nothing appealing in a great heap of colors and polygons spread over the floor there are things we cannot communicate about. Tiling is a delight; what more do you need?

Thanks for your attention. This and all of my 2020 A-to-Z essays should be at this link. All the essays from every A-to-Z series should be at this link. See you next week, I hope.

## From my First A-to-Z: Tensor

Of course I can’t just take a break for the sake of having a break. I feel like I have to do something of interest. So why not make better use of my several thousand past entries and repost one? I’d just reblog it except WordPress’s system for that is kind of rubbish. So here’s what I wrote, when I was first doing A-to-Z’s, back in summer of 2015. Somehow I was able to post three of these a week. I don’t know how.

I had remembered this essay as mostly describing the boring part of tensors, that we usually represent them as grids of numbers and then symbols with subscripts and superscripts. I’m glad to rediscover that I got at why we do such things with grids of numbers and subscripts and superscripts.

## Tensor.

The true but unenlightening answer first: a tensor is a regular, rectangular grid of numbers. The most common kind is a two-dimensional grid, so that it looks like a matrix, or like the times tables. It might be square, with as many rows as columns, or it might be rectangular.

It can also be one-dimensional, looking like a row or a column of numbers. Or it could be three-dimensional, rows and columns and whole levels of numbers. We don’t try to visualize that. It can be what we call zero-dimensional, in which case it just looks like a solitary number. It might be four- or more-dimensional, although I confess I’ve never heard of anyone who actually writes out such a thing. It’s just so hard to visualize.

You can add and subtract tensors if they’re of compatible sizes. You can also do something like multiplication. And this does mean that tensors of compatible sizes — square matrices of a fixed size, say — will form a ring. Of course, that doesn’t say why they’re interesting.
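The grid-of-numbers view is easy to sketch with NumPy arrays. The particular numbers here are my own examples, nothing more:

```python
# Tensors as regular, rectangular grids of numbers, in NumPy.
import numpy as np

scalar = np.array(5.0)             # zero-dimensional: a solitary number
vector = np.array([1.0, 2.0])      # one-dimensional: a row of numbers
matrix = np.array([[1.0, 2.0],     # two-dimensional: rows and columns
                   [3.0, 4.0]])
block = np.zeros((2, 2, 2))        # three-dimensional: levels of rows/columns

print(scalar.ndim, vector.ndim, matrix.ndim, block.ndim)  # 0 1 2 3

# Compatible grids add entry by entry, and square ones multiply,
# which is what lets square matrices of a fixed size form a ring.
other = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
print(matrix + other)   # [[1. 3.] [4. 4.]]
print(matrix @ other)   # [[2. 1.] [4. 3.]]
```

Physics applications dress these grids up with subscripts, superscripts, and transformation rules, but the raw object really is this plain.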

Tensors are useful because they can describe spatial relationships efficiently. The word comes from the same Latin root as “tension”, a hint about how we can imagine it. A common use of tensors is in describing the stress in an object. Applying stress in different directions to an object often produces different effects. The classic example there is a newspaper. Rip it in one direction and you get a smooth, clean tear. Rip it perpendicularly and you get a raggedy mess. The stress tensor represents this: it gives some idea of how a force put on the paper will create a tear.

Tensors show up a lot in physics, and so in mathematical physics. Technically they show up everywhere, since vectors and even plain old numbers (scalars, in the lingo) are kinds of tensors, but that’s not what I mean. Tensors can describe efficiently things whose magnitude and direction changes based on where something is and where it’s looking. So they are a great tool to use if one wants to represent stress, or how well magnetic fields pass through objects, or how electrical fields are distorted by the objects they move in. And they describe space, as well: general relativity is built on tensors. The mathematics of a tensor allow one to describe how space is shaped, based on how to measure the distance between two points in space.

My own mathematical education happened to be pretty tensor-light. I never happened to have courses that forced me to get good with them, and I confess to feeling intimidated when a mathematical argument gets deep into tensor mathematics. Joseph C Kolecki, with NASA’s Glenn (Lewis) Research Center, published in 2002 a nice little booklet “An Introduction to Tensors for Students of Physics and Engineering”. This I think nicely bridges some of the gap between mathematical structures like vectors and matrices, that mathematics and physics majors know well, and the kinds of tensors that get called tensors and that can be intimidating.

## My Little 2021 Mathematics A-to-Z: Atlas

I owe Elkement thanks again for a topic. They’re author of the Theory and Practice of Trying to Combine Just Anything blog. And the subject lets me circle back around topology.

# Atlas.

Mathematics is like every field in having jargon. Some jargon is unique to the field; there is no lay meaning of a “homeomorphism”. Some jargon is words plucked from the common language, such as “smooth”. The common meaning may guide you to what mathematicians want in it. A smooth function has a graph with no gaps, no discontinuities, no sharp corners; you can see smoothness in it. Sometimes the common meaning is an ambiguous help. A “series” is the sum of a sequence of numbers, that is, it is one number. Mathematicians study the series, but by looking at properties of the sequence.

So what sort of jargon is “atlas”? In common English, an atlas is a book of maps. Each map represents something different. Perhaps a different region of space. Perhaps a different scale, or a different projection altogether. The maps may show different features, or show them at different times. The maps must be about the same sort of thing. No slipping a map of Narnia in with the map of an amusement park, unless you warn of that in the title. The maps must not contradict one another. (So far as human-made things can be consistent, anyway.) And that’s the important stuff.

Atlas is the first kind of common-word jargon. Mathematicians use it to mean a collection of things. Those collected things aren’t mathematical maps. “Map” is the second type of jargon. The collected things are coordinate charts. “Coordinate chart” is a pairing of words not likely to appear in common English. But if you did encounter them? The meaning you might guess from their common use is not far off their mathematical use.

A coordinate chart is a matching of the points in an open set to normal coordinates. Euclidean coordinates, to be precise. But, you know, latitude and longitude, if it’s two dimensional. Add in the altitude if it’s three dimensions. Your x-y-z coordinates. It still counts if this is one dimension, or four dimensions, or sixteen dimensions. You’re less likely to draw a sketch of those. (In practice, you draw a sketch of a three-dimensional blob, and put N = 16 off in the corner, maybe in a box.)

These coordinate charts are on a manifold. That’s the second type of common-language jargon. Manifold, to pick the least bad of its manifold common definitions, is a “complicated object or subject”. The mathematical manifold is a surface. The things on that surface are connected by relationships that could be complicated. But the shape can be as simple as a plane or a sphere or a torus.

Every point on a coordinate chart needs some unique set of coordinates. And if a point appears on two coordinate charts, they have to be consistent. Consistent here is the matching between charts being a homeomorphism. A homeomorphism is a map, in the jargon sense. So it’s a function matching open sets on one chart to open sets in the other chart. There’s more to it (there always is). But the important thing is that, away from the edges of the chart, we don’t create any new gaps or punctures or missing sections.

Some manifolds are easy to spot. The surface of the Earth, for example. Many are easy to come up with charts for. Think of any map of the Earth. Each point on the surface of the Earth matches some point on the sheet of paper. The coordinate chart is … let’s say how far your point is from the upper left corner of the page. (Pretend that you can measure those points precisely enough to match them to, like, the town you’re in.) Could be how far you are from the center, or the lower right corner, or whatever. These are all as good, and even count as other coordinate charts.

It’s easy to imagine that as latitude and longitude. We see maps of the world arranged by latitude and longitude so often. And that’s fine; latitude and longitude makes a good chart. But we have a problem in giving coordinates to the north and south pole. The latitude is easy but the longitude? So we have two points that can’t be covered on the map. We can save our atlas by having a couple charts. For the Earth this can be a map of most of the world arranged by latitude and longitude, and then two insets showing a disc around the north and the south poles. Thus we have an atlas of three charts.

We can make this a little tighter, reducing this to two charts. Have one that’s your normal sort of wall map, centered on the equator. Have the other be a transverse Mercator map. Make its center the great circle going through the prime meridian and the 180-degree antimeridian. Then every point on the planet, including the poles, has a neat unambiguous coordinate in at least one chart. A good chunk of the world will be on both charts. We can throw in more charts if we like, but two is enough.

The requirements to be an atlas aren’t hard to meet. So a lot of geometric structures end up being atlases. Theodore Frankel’s wonderful The Geometry of Physics introduces them on page 15. But that’s also the last appearance of “atlas”, at least in the index. The idea gets upstaged. The manifolds that the atlas charts end up being more interesting. Many problems about things in motion are easy to describe as paths traced out on manifolds. A large chunk of mathematical physics is then looking at this problem and figuring out what the space of possible behaviors looks like. What its topology is.

In a sense, the mathematical physicist might survey a problem, like a scout exploring new territory, more than solve it. This exploration brings us to directional derivatives. To tangent bundles. To other terms, jargon only partially informed by the common meanings.

And we draw to the final weeks of 2021, and of the Little 2021 Mathematics A-to-Z. All this year’s essays should be at this link. And all my glossary essays from every year should be at this link. Thank you for reading!

## My Little 2021 Mathematics A-to-Z: Convex

Jacob Siehler, a friend from Mathstodon, and Assistant Professor at Gustavus Adolphus College, offered several good topics for the letter ‘C’. I picked the one that seemed to connect to the greatest number of other topics I’ve covered recently.

# Convex

It’s easy to say what convex is, if we’re talking about shapes in ordinary space. A convex shape is one where the line connecting any two points inside the shape always stays inside the shape. Circles are convex. Triangles and rectangles too. Star shapes are not. Is a torus? That depends. If it’s a doughnut shape sitting in some bigger space, then it’s not convex. If the doughnut shape is all the space there is to consider, then it is. There’s a parallel here to prime numbers. Whether 5 is a prime depends on whether you think 5 is an integer, a real number, or a complex number.
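Here’s a minimal way to test that definition for a polygon, in Python. Rather than checking every connecting segment, it uses an equivalent test for simple polygons: walk the boundary and see whether every turn bends the same way. The square and star are my own small examples.

```python
def is_convex(corners):
    """A simple polygon is convex if its boundary always turns the same
    way: the cross products of consecutive edges never change sign."""
    n = len(corners)
    signs = set()
    for i in range(n):
        (x1, y1) = corners[i]
        (x2, y2) = corners[(i + 1) % n]
        (x3, y3) = corners[(i + 2) % n]
        cross = (x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) <= 1

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
star = [(0, 3), (1, 1), (3, 1), (1, 0), (2, -2),
        (0, -1), (-2, -2), (-1, 0), (-3, 1), (-1, 1)]

print(is_convex(square))  # True
print(is_convex(star))    # False: the inner corners turn the other way
```

The divots in the star are exactly where the turning direction flips, which is where a connecting segment can escape the shape.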

Still, this seems easy to the point of boring. So how does Wolfram MathWorld match 337 items for ‘convex’? For a sense of scale, it has only 112 matches for ‘quadrilateral’. That makes ‘convex’ almost as used a word as ‘quadratic’, with its 370 items. Why?

Why? Because it’s one of those terms that sneaks in everywhere. Some of it is obvious. There’s a concept called “star-convex”, where it’s enough that some central point connects to every other point by a straight line inside the shape. That’s a familiar mathematical trick, coming up with a less-demanding version of a property. There’s the “convex hull”, which is the smallest convex set that contains a given set of points. We even come up with “convex functions”, functions of real numbers. A function’s convex if the space above its graph is convex. This seems like stretching the idea of convexity rather a bit.

Still, we wouldn’t coin such a term if we couldn’t use it. Well, if someone couldn’t use it. The saving thing here is the idea of “space”. We get our idea of what space is from looking around rooms and walking around hills and stuff. But what makes something a space, when we look at what’s essential? What we need is traits like: there are things. We can measure how far apart things are. We have some idea of paths between things. That’s not asking a lot.

So many things become spaces. And so convexity sneaks in everywhere. A convex function has nice properties if you’re looking for minimums. Or maximums; that’s as easy to do. And we look for minimums a lot. A large, practical set of mathematics is the search for optimum values, the set of values that maximize, or minimize, something. You may protest that not everything we’re interested in is a convex function. This is true. But a lot of what we are interested in is, or is approximately.

This gets into some surprising corners. Economics, for example. The mathematics of economics is often interested in how much of a thing you can make. But you have to put things in to make it. You expect, at least once the system is set up, that if you halve the components you put in you get half the thing out. Or double the components in and get double the thing out. But you can run out of the components. Or related stuff, like, floor space to store partly-complete product. Or transport available to send this stuff to the customer. Or time to get things finished. For our needs these are all “things you can run out of”.

And so we have a problem of linear programming. We have something or other we want to optimize. Call it $y$. It depends on a whole range of variables, which we describe as a vector $\vec{x}$. And we have constraints. Each of these is an inequality; we can represent that as demanding some functions of these variables be at most some numbers. We can bundle those functions together as a matrix called $A$. We can bundle those maximum numbers together as a vector called $\vec{b}$. So the constraints become the demand that $A\vec{x} \le \vec{b}$. Also, we demand that none of these values be smaller than some minimum we might as well call 0. The range of all the possible values of these variables is a space. These constraints chop up that space, into a shape. Into a convex shape, of course, or this paragraph wouldn’t belong in this essay. If you need to be convinced of this, imagine taking a wedge of cheese and hacking away slices all the way through it. How do you cut a cave or a tunnel in it?

So take this convex shape, called a polytope. That’s what we call a polygon or polyhedron if we don’t want to commit to any particular number of dimensions of space. (If we’re being careful. My suspicion is ‘polyhedron’ is more often said.) This makes a shape. Some point in that shape has the best possible value of $y$. (Also the worst, if that’s your thing.) Where is it? There is an answer, and it gives a pretext to share a fun story. The answer is that it’s on the outside, on one of the faces of the polytope. And you can find it by following along the edges of that polytope. This we know as the simplex method, or Dantzig’s Simplex Method if we must be more particular, for George Dantzig. Its success relies on looking at convex functions in convex spaces and how much this simplifies finding things.
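A tiny such problem can be checked by brute force rather than the simplex method proper: find every corner of the polytope and compare. The numbers here are made up for illustration — maximize $y = 3a + 2b$ subject to $a + b \le 4$, $a + 3b \le 6$, and $a, b \ge 0$.

```python
from itertools import combinations

# Constraints, each as (coefficients, bound): coeffs . x <= bound.
constraints = [
    ((1.0, 1.0), 4.0),
    ((1.0, 3.0), 6.0),
    ((-1.0, 0.0), 0.0),   # a >= 0, written as -a <= 0
    ((0.0, -1.0), 0.0),   # b >= 0, likewise
]

def intersect(c1, c2):
    """Where two constraint lines cross, if they cross at all."""
    (a1, b1), r1 = c1
    (a2, b2), r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= r + 1e-9 for (a, b), r in constraints)

# The corners of the convex polytope are where constraint lines meet.
corners = []
for c1, c2 in combinations(constraints, 2):
    p = intersect(c1, c2)
    if p is not None and feasible(p):
        corners.append(p)

# The optimum sits at one of those corners.
best = max(corners, key=lambda p: 3 * p[0] + 2 * p[1])
print(best)                         # the corner (4, 0)
print(3 * best[0] + 2 * best[1])    # 12.0
```

Checking every corner works here because the polytope has only a handful; the simplex method earns its keep by walking edge to edge instead of enumerating everything.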

Usually. The simplex method is one of polynomial-order complexity for normal, typical problems. That’s a measure of how much longer it takes to find an answer as you get more variables, more constraints, more work. Polynomial is okay, growing about the way it takes longer to multiply when you have more digits in the numbers. But there’s a worst case, in which the complexity grows exponentially. We shy away from exponential complexity because … you know, exponentials grow fast, given a chance. What saves us is that that’s a worst case, not a typical case. The convexity lets us set up our problem and, rather often, solve it well enough.

Now the story, a mutation of which it’s likely you encountered. George Dantzig, as a student in Jerzy Neyman’s statistics class, arrived late one day to find a couple problems on the board. He took these to be homework, and struggled with the harder-than-usual set. But he turned them in, apologizing for their being late. Neyman accepted the work, and eventually got around to looking at it. This wasn’t the homework. These were some unsolved problems in statistics. Six weeks later Neyman had prepared them for publication. A year later, Neyman explained to Dantzig that all he needed to earn his PhD was to put these two papers together in a nice binder.

This cute story somehow escaped into the wild. It became an inspirational tale for more than mathematics grad students. That part’s easy to see; it has most everything inspiration needs. It mutated further, into the movie Good Will Hunting. I do not know whether the unsolved problems, work done in the late 1930s, relate to Dantzig’s simplex method, developed after World War II. It may be that they are connected simply in their originator. But perhaps it is more than I realize now.

I hope to finish off the word ‘Mathematics’ with the letter S next week. This week’s essay, and all the essays for the Little Mathematics A-to-Z, should be at this link. And all of this year’s essays, and all the A-to-Z essays from past years, should be at this link. Thank you for reading.

## My Little 2021 Mathematics A-to-Z: Triangle

And I have another topic suggested by John Golden, author of Math Hombre. It’s one of the basic bits of mathematics, and so is hard to think about.

# Triangle.

Edward Brisse assembled a list of 2,001 things to call a “center” of a triangle. I’d have run out around three. We don’t need most of them. I mention them because the list speaks of how interesting we find triangles. Nobody’s got two thousand thoughts about enneadecagons (19-sided figures).
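For a taste of those centers, here are three of the most famous ones, computed in Python for a 3-4-5 right triangle. The formulas are the standard ones; the triangle is my own pick.

```python
import math

# A 3-4-5 right triangle with the right angle at A.
A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Side lengths, each named for the vertex it sits opposite.
a, b, c = dist(B, C), dist(C, A), dist(A, B)

centroid = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)

# Incenter: weight each vertex by the length of the opposite side.
s = a + b + c
incenter = ((a * A[0] + b * B[0] + c * C[0]) / s,
            (a * A[1] + b * B[1] + c * C[1]) / s)

# For a right triangle the circumcenter is the hypotenuse's midpoint.
circumcenter = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)

print(centroid)       # (1.333..., 1.0)
print(incenter)       # (1.0, 1.0) -- the 3-4-5 triangle has inradius 1
print(circumcenter)   # (2.0, 1.5)
```

Three down, one thousand nine hundred ninety-eight to go.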

As always with mathematics it’s hard to say whether triangles are all that interesting or whether we humans are obsessed. They’ve got great publicity. The Pythagorean Theorem may be the only bit of interesting mathematics an average person can be assumed to recognize. The kinds of triangles — acute, obtuse, right, equilateral, isosceles, scalene — are fit questions for trivia games. An ordinary mathematics education can end in trigonometry. This ends up being about circles, but we learn it through triangles. The art and science of determining where a thing is we call “triangulation”.

But triangles do seem to stand out. They’re the simplest polygon, only three vertices and three edges. So we can slice any other polygon into triangles. Any triangle can tile the plane. Even quadrilaterals may need reflections of themselves. One of the first geometry facts we learn is the interior angles of a triangle add up to two right angles. And one of the first geometry facts we learn, discovering there are non-Euclidean geometries, is that they don’t have to.

Triangles have to be convex, that is, they don’t have any divots. This property sounds boring. But it’s a good boring; it makes other work easier. It tells us that the length of any two sides of a triangle add together to something longer than the third side. And that’s a powerful idea.

There are many ways to define “distance”. Mathematicians have tried to find the most abstract version of the concept. This inequality is one of the few pieces that every definition of “distance” must respect. This idea of distance leaps out of shapes drawn on paper. Last week I mentioned a triangle inequality, in discussing functions $f$ and $g$. We can define operators that describe a distance between functions. And the distances between trios of functions behave like the distances between points on the triangle. Thus does geometry sneak in to abstract concepts like “piecewise continuous functions”.
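As a sketch of that idea, here is a distance between functions, approximated by sampling points in an interval, along with a check of the triangle inequality. The three functions are arbitrary picks of mine.

```python
import math

def distance(f, g, points):
    """An approximation of the L2 distance between functions f and g."""
    return math.sqrt(sum((f(x) - g(x)) ** 2 for x in points) / len(points))

points = [i / 100 for i in range(101)]   # sample points in [0, 1]

def f(x): return x * x
def g(x): return math.sin(x)
def h(x): return math.exp(x)

# The distance from f to h is no more than going by way of g,
# just as with three points at the corners of a triangle.
print(distance(f, h, points)
      <= distance(f, g, points) + distance(g, h, points))   # True
```

The geometry of triangles survives the trip: any trio of functions obeys the same inequality any trio of points does.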

And they serve in curious blends of the abstract and the concrete. For example, numerical solutions to partial differential equations. A partial differential equation is one where we want to know a function of two or more variables, and only have information about how the function changes as those variables change. These turn up all the time in any study of things in bulk. Heat flowing through space. Waves passing through fluids. Fluids running through channels. So any classical physics problem that isn’t, like, balls bouncing against each other or planets orbiting stars. We can solve these if they’re linear. Linear here is a term of art meaning “easy”. I kid; “linear” means more like “manageable”. All the good problems are nonlinear and we can exactly solve about two of them.

So, numerical solutions. We make approximations by putting down a mesh on the differential equation’s domain. And then, using several graduate-level courses’ worth of tricks, we approximate the equation we want with one that we can solve. That mesh, though? … It can be many things. One powerful technique is “finite elements”. An element is a small piece of space. Guess what the default shape for these elements is. There are times, and reasons, to use other shapes as elements. You learn those once you have the hang of triangles. (Dividing the space of your variables up into elements lets you look for an approximate solution using tools easier to manage than you’d have without. This is a bit like looking for one’s keys over where the light is better. But we can find something that’s as close as we need to our keys.)

If we need finite elements for, oh, three dimensions of space, or four, then triangles fail us. We can’t fill a volume with two-dimensional shapes like triangles. But the triangle has its analog. The tetrahedron, in some sense four triangles joined together, has all the virtues of the triangle for three dimensions. We can look for a similar shape in four and five and more dimensions. If we’re looking for the thing most like an equilateral triangle, we’re looking for a “simplex”.
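One reason simplexes are so pleasant to compute with: the volume of a simplex falls straight out of a determinant, divided by $n!$ in $n$ dimensions. A sketch of the three-dimensional case; the sample tetrahedron is a made-up example.

```python
def det3(m):
    # Determinant of a 3x3 matrix, expanded along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def tetra_volume(a, b, c, d):
    # Volume of the tetrahedron with vertices a, b, c, d:
    # |det of the three edge vectors from a| divided by 3! = 6.
    edges = [[b[i] - a[i] for i in range(3)],
             [c[i] - a[i] for i in range(3)],
             [d[i] - a[i] for i in range(3)]]
    return abs(det3(edges)) / 6

# The "corner" tetrahedron with unit legs has volume 1/6.
print(tetra_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))
```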

These simplexes, or these elements, sprawl out across the domain we want to solve problems for. They look uncannily like the triangles surveyors draw across the chart of a territory, as they show us where things are.

Next week I hope to cover the letter ‘I’ as I near the end of ‘Mathematics’ and consider what to do about ‘A To Z’. This week’s essay, and all the essays for the Little Mathematics A-to-Z, should be at this link. And all of this year’s essays, and all the A-to-Z essays from past years, should be at this link. Thank you once more for reading.

## My Little 2021 Mathematics A-to-Z: Embedding

Elkement, who’s one of my longest blog-friends here, put forth this suggestion for an ‘E’ topic. It’s a good one. They’re author of the Theory and Practice of Trying to Combine Just Anything blog. Their blog has recently been exploring complex-valued numbers and how to represent rotations.

# Embedding.

Consider a book. It’s a collection. It’s easy to see the ordered setting of words, maybe pictures, possibly numbers or even equations. The important thing is the ideas those all represent.

Set the book in a library. How can this change the book?

Perhaps the comparison to other books shows us something the original book neglected. Perhaps something in the original book we now realize was a brilliantly-presented insight. The way we appreciate the book may change.

What can’t change is the content of the original book. The words stay the same, in the same order. If it’s a physical book, the number of pages stays the same, as does the size of the page. The ideas expressed remain the same.

So now you understand embedding. It’s a broad concept, something that can have meaning for any mathematical structure. A structure here is a bunch of items and some things you can do with them. A group, for example, is a good structure to use with this sort of thing. So, for example, the integers and regular addition. This original structure’s embedded in another when everything in the original structure is in the new, and everything you can do with the original structure you can do in the new and get the same results. So, for example, the group you get by taking the integers and regular addition? That’s embedded in the group you get by taking the rational numbers and regular addition. 4 + 8 is 12 whether or not you consider 6.5 a topic fit for discussion. It’s an embedding that expands the set of elements, and that modifies the things you can do to match.

The group you get from the integers and addition is embedded in other things. For example, it’s embedded in the ring you get from the integers and regular addition and regular multiplication. 4 + 8 remains 12 whether or not you can multiply 4 by 8. This embedding doesn’t add any new elements, just new things you can do with them.
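Both embeddings can be spot-checked in code. A minimal sketch, using Python’s `fractions` module to play the part of the rationals:

```python
from fractions import Fraction

# Integers-with-addition embedded in rationals-with-addition:
# send each integer n to Fraction(n, 1).
def embed(n):
    return Fraction(n, 1)

# The embedding respects the operation: adding then embedding
# gives the same result as embedding then adding.
assert embed(4 + 8) == embed(4) + embed(8)

# 4 + 8 is 12 whether or not 6.5 is a topic fit for discussion.
assert embed(4) + embed(8) == Fraction(12, 1)
```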

Once you have the name, you see embedding everywhere. When we first learn arithmetic we — I, anyway — learn it as adding whole numbers together. Then we embed that into whole numbers with addition and multiplication. And then the (nonnegative) rational numbers with addition and multiplication. At some point (I forget when) the negative numbers came in. So did the whole set of real numbers. Eventually the real numbers got embedded into the complex numbers. And the complex numbers got embedded into the quaternions, although we found real and complex numbers enough for most of our work. I imagine something similar goes on these days.

There’s never only one embedding possible. Consider, for example, two-dimensional geometry, the shapes of figures on a sheet of paper. It’s easy to put that in three dimensions, by setting the paper on the floor, and expand it by drawing in chalk on the wall. Or you can set the paper on the wall, and extend its figures by drawing in chalk on the floor. Or set the paper at an angle to the floor. What you use depends on what’s most convenient. And that can be driven by laziness. It’s easy to match, say, the point in two dimensions at coordinates (3, 4) with the point in three dimensions at coordinates (3, 4, 0), even though (0, 3, 4) or (4, 0, 3) are as valid.
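The laziest of these embeddings, appending a zero coordinate, is easy to check: it leaves the distance between plane points alone. A small sketch, using the point (3, 4) from the paragraph and the origin as the second point:

```python
import math

# One of many ways to embed the plane in space: append a zero.
def embed(p):
    return (p[0], p[1], 0.0)

def dist(p, q):
    # Euclidean distance, in however many coordinates p and q have.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

p, q = (3.0, 4.0), (0.0, 0.0)
assert dist(embed(p), embed(q)) == dist(p, q)  # both are 5.0
```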

Why embed something in another thing? For the same reasons we do any transformation in mathematics. One is that we figure to embed the thing we’re working on into something easier to deal with. A famous example of this is the Nash embedding theorem. It describes when certain manifolds can be embedded into something that looks like normal space. And that’s useful because it can turn nonlinear partial differential equations — the most insufferable equations — into something solvable.

Another good reason, though, is the one implicit in that early arithmetic education. We started with whole-numbers-with-addition. And then we added the new operation of multiplication. And then new elements, like fractions and negative numbers. If we follow this trail we get to some abstract, tricky structures like octonions. But we get there by small steps, with our experience in the old territory guiding us into the new.

I hope to return in a week with a fresh A-to-Z essay. This week’s essay, and all the essays for the Little Mathematics A-to-Z, should be at this link. And all of this year’s essays, and all A-to-Z essays from past years, should be at this link. Thank you once more for reading.

## My Little 2021 Mathematics A-to-Z: Hyperbola

John Golden, author of the Math Hombre blog, had several great ideas for the letter H in this little A-to-Z for the year. Here’s one of them.

# Hyperbola.

The hyperbola is where advanced mathematics begins. It’s a family of shapes, some of the pieces you get by slicing a cone. You can make an approximate one shining a flashlight on a wall. Other conic sections are familiar, everyday things, though. Circles we see everywhere. Ellipses we see everywhere we look at a circle in perspective. Parabolas we learn, in approximation, watching something tossed, or squirting water into the air. The hyperbola should be as accessible. Hold your flashlight parallel to the wall and look at the outline of light it casts. But the difference between this and a parabola isn’t obvious. And it’s harder to see hyperbolas in nature. It’s the path a space probe swinging past a planet makes? Great guide for all of us who’ve launched space probes past Jupiter.

When we learn of hyperbolas, somewhere in high school algebra or in precalculus, they seem designed to break the rules we had inferred. We’ve learned functions like lines and quadratics (parabolas) and cubics. They’re nice, simple, connected shapes. The hyperbola comes in two pieces. We’ve learned that the graph of a function crosses any given vertical line at most once. Now, we can expect to see it twice. We learn to sketch functions by finding a few interesting points — roots, y-intercepts, things like that. For hyperbolas, we’re taught to draw a little central box and then two asymptotes. And asymptotes are new too: simpler curves that the actual curve approaches without ever equaling.

We’re trained to see functions having the couple odd points where they’re not defined. Nobody expects $y = 1 \div x$ to mean anything when $x$ is zero. But we learn these as weird, isolated points. Now there’s this interval of x-values that don’t fit anything on the graph. Half the time, anyway, because we see two classes of hyperbolas. There are ones that open like cups, pointing up and down. Those have definitions for every value of x. There are ones that open like ears, pointing left and right. Those have an interval in the center where no y satisfies the x’s. They seem like they’re taught just to be mean.

They’re not, of course. The only mathematical thing we teach just to be mean is integration by trigonometric substitution. The things which seem weird or new in hyperbolas are, largely, things we didn’t notice before. A vertical line put across a circle or ellipse crosses the curve twice, at most points. There are two huge intervals, to the left and to the right of the circle, where no value of y makes the equation true. Circles are familiar, though. Ellipses don’t seem intimidating. We know we can’t turn $x^2 + y^2 = 4$ (a typical circle) into a function without some work. We have to write either $f(x) = \sqrt{4 - x^2}$ or $f(x) = -\sqrt{4 - x^2}$, breaking the circle into two halves. The same happens for hyperbolas, though, with $x^2 - y^2 = 4$ (a typical hyperbola) turning into $f(x) = \sqrt{x^2 - 4}$ or $f(x) = -\sqrt{x^2 - 4}$.
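The splitting-into-halves works the same way for both curves, which a quick numerical check shows. A sketch, with arbitrary sample x-values:

```python
import math

# The circle x^2 + y^2 = 4 and hyperbola x^2 - y^2 = 4, each split
# into an upper-half function (the lower half is its negative).
def circle_top(x):
    return math.sqrt(4 - x * x)    # needs -2 <= x <= 2

def hyper_top(x):
    return math.sqrt(x * x - 4)    # needs |x| >= 2

x = 1.2
y = circle_top(x)
assert abs(x * x + y * y - 4) < 1e-12    # the point is on the circle

x = 3.5
y = hyper_top(x)
assert abs(x * x - y * y - 4) < 1e-12    # the point is on the hyperbola
```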

Even the definitions seem weird. The ellipse we can draw by taking a set distance and two focus points. If the distance from the first focus to a point plus the distance from the point to the second focus is that set distance, the point’s on the ellipse. We can use two thumbtacks and a piece of string to draw the ellipse. The hyperbola has a similar rule, but weirder. You have your two focus points, yes. And a set distance. But the locus of points of the hyperbola is everything where the distance from the point to one focus minus the distance from the point to the other focus is that set distance (in size; never mind which focus is the nearer one). Good luck doing that with thumbtacks and string.

Yet hyperbolas are ready for us. Consider playing with a decent calculator, hitting the reciprocal button for different numbers. 1 turns to 1, yes. 2 turns into 0.5. -0.125 turns into -8. It’s the simplest iterative game to do on the calculator. If you sketch this, though, all the points (x, y) where one coordinate is the reciprocal of the other? It’s two curves. They approach without ever touching the x- and y-axes. Get far enough from the origin and there’s no telling this curve from the axes. It’s a hyperbola, one that obeys that vertical-line rule again. It has only the one value of x that can’t be allowed. We write it as $y = \frac{1}{x}$ or even $xy = 1$. But it’s the shape we see when we draw $x^2 - y^2 = 2$, rotated. Or a rotation of one we see when we draw $y^2 - x^2 = 2$. The equations of rotated shapes are annoying. We do enough of them for ellipses and parabolas and hyperbolas to meet the course requirement. But they point out how the hyperbola is a more normal construct than we fear.
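That the reciprocal curve is a rotated hyperbola is easy to check numerically: rotate points on $y = 1/x$ by 45 degrees and see that the results satisfy $x^2 - y^2 = 2$. A sketch, with arbitrary sample values of t:

```python
import math

# Take points (t, 1/t) on the reciprocal curve and rotate them
# clockwise by 45 degrees. If the claim is right, every rotated
# point satisfies x^2 - y^2 = 2.
theta = -math.pi / 4
for t in (0.1, 0.5, 1.0, 2.0, 10.0):
    x, y = t, 1.0 / t
    xr = x * math.cos(theta) - y * math.sin(theta)
    yr = x * math.sin(theta) + y * math.cos(theta)
    assert abs(xr * xr - yr * yr - 2.0) < 1e-9
```

Algebraically the check is no mystery: the rotation sends $(x, y)$ to $\left(\frac{x+y}{\sqrt{2}}, \frac{y-x}{\sqrt{2}}\right)$, and the difference of those squares is $2xy$, which is 2 whenever $xy = 1$.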

And let me look at that construct again. An equation describing a hyperbola that opens horizontally or vertically looks like $ax^2 - by^2 = c$ for some constant numbers a, b, and c. (If a, b, and c are all positive, this is a hyperbola opening horizontally. If a and b are positive and c negative, this is a hyperbola opening vertically.) An equation describing an ellipse, similarly with its axes horizontal or vertical looks like $ax^2 + by^2 = c$. (These are shapes centered on the origin. They can have other centers, which make the equations harder but not more enlightening.) The equations have very similar shapes. Mathematics trains us to suspect things with similar shapes have similar properties. That change from a plus to a minus seems too important to ignore, and yet …

I bet you assumed x and y are real numbers. This is convention, the safe bet. If someone wants complex-valued numbers they usually say so. If they don’t want to be explicit, they use z and w as variables instead of x and y. But what if y is an imaginary number? Suppose $y = \imath t$, for some real number t, where $\imath^2 = -1$. You haven’t missed a step; I’m summoning this from nowhere. (Let’s not think about how to draw a point with an imaginary coordinate.) Then $ax^2 - by^2 = c$ is $ax^2 - b(\imath t)^2 = c$ which is $ax^2 + bt^2 = c$. And despite the weird letters, that’s a circle. By the same supposition we could go from $ax^2 + by^2 = c$, which we’d taken to be a circle, and get $ax^2 - bt^2 = c$, a hyperbola.

Fine stuff inspiring the question “so?” I made up a case and showed how that made two dissimilar things look alike. All right. But consider trigonometry, built on the cosine and sine functions. One good way to see the cosine and sine of an angle is as the x- and y-coordinates of a point on the unit circle, where $x^2 + y^2 = 1$. (The angle $\theta$ is the one from the point $(\cos(\theta), \sin(\theta))$ to the origin to the point (1, 0).)

There exists, in parallel to the familiar trig functions, the “hyperbolic trigonometric functions”. These have imaginative names like the hyperbolic sine and hyperbolic cosine. (And onward. We can speak of the “inverse hyperbolic cosecant”, if we wish no one to speak to us again.) Usually these get introduced in calculus, to give the instructor a tiny break. Their derivatives, and integrals, look much like those of the normal trigonometric functions, but aren’t the exact same problems over and over. And these functions, too, have a compelling meaning. The hyperbolic cosine of an angle and hyperbolic sine of an angle have something to do with points on a unit hyperbola, $x^2 - y^2 = 1$.
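The “something to do with” is exactly parallel to the circle case, and easy to verify: for any t, the point $(\cosh t, \sinh t)$ lands on the unit hyperbola. A sketch, over a few arbitrary values of t:

```python
import math

# Points (cosh t, sinh t) sit on the unit hyperbola x^2 - y^2 = 1,
# the same way points (cos t, sin t) sit on the unit circle.
for t in (-2.0, -0.5, 0.0, 1.0, 3.0):
    x, y = math.cosh(t), math.sinh(t)
    assert abs(x * x - y * y - 1.0) < 1e-9
```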

Think back to the flashlight. We get a circle by holding the light perpendicular to the wall. We get a hyperbola holding the light parallel. We get a circle by drawing $x^2 + y^2 = 1$ with x and y real numbers. We get a hyperbola by (somehow) drawing $x^2 + y^2 = 1$ with x real and y imaginary. We remember something about representing complex-valued numbers with a real axis and an orthogonal imaginary axis.

One almost feels the connection. I can’t promise that pondering this will make hyperbolas as familiar as circles or at least ellipses. But often a problem that brings us to hyperbolas has an alternate phrasing that’s ellipses, and vice-versa. And the common traits of these conic slices can guide you into a new understanding of mathematics.

Thank you for reading. I hope to have another piece next week at this time. This and all of this year’s Little Mathematics A to Z essays should be at this link. And the A-to-Z essays for every year should be at this link.

## My Little 2021 Mathematics A-to-Z: Torus

Mr Wu, a mathematics tutor in Singapore and author of the blog about that, offered this week’s topic. It’s about one of the iconic mathematics shapes.

# Torus

When one designs a board game, one has to decide what the edge of the board means. Some games make getting to the edge the goal, such as Candy Land or backgammon. Some games set their play so the edge is unreachable, such as Clue or Monopoly. Some make the edge an impassable limit, such as Go or Scrabble or Checkers. And sometimes the edge becomes something different.

Consider a strategy game like Risk or Civilization or their video game descendants like Europa Universalis. One has to be able to go east, or west, without limit. But there’s no making a cylindrical board. Or making a board infinite in extent, side to side. Instead, the game demands we connect borders. Moving east one space from just-at-the-Eastern-edge means we put the piece at just-at-the-Western-edge. As a video game this is seamless. As a tabletop game we just learn to remember those units in Alberta are not so far from Kamchatka as they look. We have the awkward point that the board doesn’t let us go over the poles. It doesn’t hurt game play: no one wants to invade Russia from the north. We can represent a boundless space on our table.

Sometimes we need more. Consider the arcade game Asteroids. The player’s spaceship hopes to survive by blasting into dust the asteroids cluttered around it. The game ‘board’ is the arcade screen, a manageable slice of space. Asteroids move in any direction, often drifting off-screen. If they were then out of the game, this would make victory so easy as to be unsatisfying. So the game takes a tip from the strategy games, and connects the right edge of the screen to the left. If we ask why an asteroid last seen moving to the right now appears on the left, well, there are answers. One is to say we’re in a very average segment of a huge asteroid field. There are about as many asteroids approaching from off-screen as recede from us. Why our local work destroying asteroids eliminates the off-screen asteroids is a mystery for the ages. Perhaps the rest of the fleet is also asteroid-clearing at about our pace. What matters is we still have to do something with the asteroids.

Almost. We’ve still got asteroids leaking away through the top and bottom. But we can use the same trick the right and left edges do. And now we have some wonderful things. One is a balanced game. Another is the space in which ship and asteroids move. It is no rectangle now, but a torus.
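In code the torus needs nothing fancier than the modulo operation: it glues the left edge to the right and the top to the bottom. A sketch, with made-up screen dimensions:

```python
# Wrapping a moving object's position on a torus-shaped board,
# as Asteroids does. The dimensions are made-up examples.
WIDTH, HEIGHT = 320, 240

def wrap(x, y):
    # Python's % always returns a value in [0, modulus), so this
    # handles drifting off either side.
    return x % WIDTH, y % HEIGHT

# Drift one step past the right edge: reappear on the left.
assert wrap(320, 100) == (0, 100)
# Drift past the left and the top at once.
assert wrap(-5, 245) == (315, 5)
```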

This is a neat space to explore. It’s unbounded, for example, just as the surface of the Earth is. Or (it appears) the actual universe is. Set your course right and your spaceship can go quite a long way without getting back to exactly where it started from, again much like the surface of the Earth or the universe. We can impersonate an unbounded space using a manageably small set of coordinates, a decent-size game board.

That’s a nice trick to have. Many mathematics problems are about how great blocks of things behave. And it’s usually easiest to model these things if there aren’t boundaries. We can handle boundaries, sure, but they’re hard, most of the time. So we analyze great, infinitely-extending stretches of things.

Analysis does great things. But sometimes we need to do simulations, too. Computers are, as ever, great, tempting setups for this. Look at a spreadsheet with hundreds of rows and columns of cells. Each can represent a point in space, interacting with whatever’s nearby by whatever our rule is. And this can do very well … except these cells have to represent a finite territory. A million rows can’t span more than one million times the greatest distance between rows. We have to handle that.

There are tricks. One is to model the cells as being at ever-expanding distances, trusting that there are regions too dull to need much attention. Another is to give the boundary some values that, we figure, look as generic as possible. That “past here it carries on like that”. The trick that makes rhetorical sense to mention here is creating a torus, matching left edge to right, top edge to bottom. Front edge to back if it’s a three-dimensional model.

Making a torus works if a particular spot is mostly affected by its local neighborhood. This describes a lot of problems we find interesting. Many of them are in statistical mechanics, where we do a lot of problems about particles in grids that can do one of two things, depending on the locale. But many mechanics problems work like this too. If we’re interested in how a satellite orbits the Earth, we can ignore that Saturn exists, except maybe as something it might photograph.

And just making a grid into a torus doesn’t solve every problem. This is obvious if you imagine making a torus that’s two rows and two columns linked together. There won’t be much interesting behavior there. Even a reasonably large grid offers problems. There might be structures larger than the torus is across, for example, worth study, and those will be missed. That we have a grid means that a shape is easier to represent if it’s horizontal or vertical. In a real continuous space there are no directions to be partial to.

There are topology differences too. A famous result shows that four colors are enough to color any map on the plane. On the torus some maps need seven, and seven are always enough. Putting colors on things may seem like a trivial worry. But map colorings represent information about how stuff can be connected. And here’s a huge difference in these connections.

This all is about one aspect of a torus. Likely you came in wondering when I would get to talking about doughnut shapes, and the line about topology may have readied you to hear about coffee cups. The torus, like most any mathematical concept familiar enough that ordinary people know the word, connects to many ideas. Some of them have more than one hole. Some have surfaces that intersect themselves. Some extend into four or more dimensions. Some are even constructs that appear in phase space, describing ways that complicated physical systems can behave. These are all reflections of this shape idea that we can learn from thinking about game boards.

## The 148th Playful Math Education Carnival is posted

I apologize for missing its actual publication date, but better late than not at all. Math Book Magic, host of the Playful Math Education Blog Carnival, posted the 148th in the series, and it’s a good read. A healthy number of recreational mathematics puzzles, including some geometry puzzles I’ve been enjoying. As these essays are meant to do, this one gathers some recreational and some educational and some just fun mathematics.

Math In Nature is scheduled to host the next carnival. If you have any mathematics writing or videos or podcasts or such to share, or are aware of any that people might like, please let them know. And if you’d like to host a Playful Math Education Blog Carnival Denise Gaskins has several slots available over the next few months, including the chance to host the 150th of this series. It’s exhausting work, but it is satisfying work. Consider giving it a try.

## How to Make Circles Into Circles on a Different Shape

Elkement, who’s been a longtime supporter of my blogging here, has been thinking about stereographic projection recently. This comes from playing with complex-valued numbers. It’s hard to start thinking about something like “what is $1 \div \left(2 + 3\imath \right)$?” and not get into the projection. The projection itself Elkement describes a bit in this post, from early in August. It’s one of the ways to try to match the points on a sphere to the points on the entire, infinite plane. One common way to imagine it, and to draw it, is to imagine setting the sphere on the plane. Imagine sitting on the top of the sphere. Draw the line connecting the top of the sphere with whatever point you find interesting on the sphere, and then extend that line until it intersects the plane. Match your point on the sphere with that point on the plane. You can use this to trace out shapes on the sphere and find their matching shapes on the plane.

This distorts the shapes, as you’d expect. Well, the sphere has a finite area, the plane an infinite one. We can’t possibly preserve the areas of shapes in this transformation. But this transformation does something amazing that offends students when they first encounter it. It preserves circles: a circle on the original sphere becomes a circle on the plane, and vice-versa. I know, you want it to turn something into ellipses, at least. She takes a turn at thinking out reasons why this should be reasonable. There are abundant proofs of this, but it helps the intuition to see different ways to make the argument. And to have rough proofs that outline the argument you mean to make. We need rigorous proofs, yes, but a good picture that makes the case convincing helps a good deal.
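The circle-preserving property can at least be checked numerically, which is the kind of rough, convincing-if-not-rigorous evidence described above. Here’s a sketch: rest a unit sphere on the plane, project a tilted circle (its axis and angular size are made-up choices) from the sphere’s top point, and check that all the projected points sit at one distance from one center.

```python
import math

# Stereographic projection of a unit sphere resting on the plane
# z = 0: sphere center (0, 0, 1), projection point N = (0, 0, 2).
def project(p):
    s = 2.0 / (2.0 - p[2])      # where the line from N through p meets z = 0
    return (s * p[0], s * p[1])

alpha = 0.5   # angular radius of the circle, a made-up choice

def circle_point(t):
    # A circle on the sphere about the (made-up) axis (1, 0, 0):
    # each point is at distance 1 from the sphere's center (0, 0, 1).
    return (math.cos(alpha),
            math.sin(alpha) * math.sin(t),
            1.0 + math.sin(alpha) * math.cos(t))

def circumcenter(p, q, r):
    # Center of the circle through three plane points, from the
    # perpendicular-bisector equations.
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

pts = [project(circle_point(2 * math.pi * k / 12)) for k in range(12)]
cx, cy = circumcenter(pts[0], pts[4], pts[8])
radii = [math.hypot(x - cx, y - cy) for x, y in pts]

# Every projected point is the same distance from one center:
# the image really is a circle, not an ellipse or anything else.
assert max(radii) - min(radii) < 1e-9
```

Twelve sample points prove nothing, of course, but the same check passes for any circle you pick that misses the projection point.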

## How to Tell if a Point Is Inside a Shape

As I continue to approach readiness for the Little Mathematics A-to-Z, let me share another piece you might have missed. Back in 2016 somehow two A-to-Z’s wasn’t enough for me. I also did a string of “Theorem Thursdays”, trying to explain some interesting piece of mathematics. The Jordan Curve Theorem is one of them.

The theorem, at heart, seems too simple to even be mathematics. It says that a simple closed curve on the plane divides the plane into an inside and an outside. There are similar versions for surfaces in three-dimensional spaces. Or volumes in four-dimensional spaces and so on. Proving the theorem turns out to be more complicated than I could fit into an essay. But proving a simplified version, where the curve is a polygon? That’s doable. Easy, even.

And as a sideline you get an easy way to test whether a point is inside a shape. It’s obvious, yeah, if a point is inside a square. But inside a complicated shape, some labyrinthine shape? Then it’s not obvious, and it’s nice to have an easy test.
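The test the polygon version of the theorem licenses is the even-odd rule: shoot a ray from your point and count how many edges it crosses. Odd count, inside; even, outside. A sketch, with a made-up L-shaped polygon like the labeling problem below:

```python
# Even-odd (ray casting) point-in-polygon test: cast a horizontal
# ray to the right and count edge crossings.
def inside(point, polygon):
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Does this edge straddle the ray's height?
        if (y1 > y) != (y2 > y):
            # Where the edge crosses that height; count it if it's
            # to the right of our point.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                crossings += 1
    return crossings % 2 == 1

# An L-shaped polygon, listed counterclockwise.
L_shape = [(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)]
assert inside((0.5, 0.5), L_shape)        # in the corner of the L
assert not inside((3, 3), L_shape)        # in the L's notch
```

This sketch ignores the fussy cases, like a point sitting exactly on an edge or a ray passing exactly through a vertex; a production version has to decide what those should mean.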

This is even mathematics with practical application. A few months ago in my day job I needed an automated way to place a label inside a potentially complicated polygon. The midpoint of the polygon’s vertices wouldn’t do. The shapes could be L- or U-shaped, so that the midpoint wasn’t inside, or was too close to the edge of another shape. Starting from the midpoint, though, and finding the largest part of the polygon near to it? That’s doable, and that’s the Jordan Curve Theorem coming to help me.

## Homologies and Cohomologies explained quickly

I’d hoped to have a pretty substantial post today. I fell short of having time to edit the beast into shape. I apologize but hope to have that soon.

I also hope to soon have an announcement about a Mathematics A-to-Z for this year. But until then, here’s this.

Several years ago in an A-to-Z I tried to explain cohomologies. I wasn’t satisfied with it, as, in part, I couldn’t think of a good example. You know, something you could imagine demonstrating with specific physical objects. I can reel off definitions, once I look up the definitions, but there’s only so many people who can understand something from that.

Quanta Magazine recently ran an article about homologies. It’s a great piece, if we get past the introduction of topology with that doughnut-and-coffee-cup joke. (Not that it’s wrong, just that it’s tired.) It’s got pictures, too, which is great.

This I came to notice because Refurio Anachro on Mathstodon wrote a bit about it. This in a thread of toots talking about homologies and cohomologies. The thread at this link is more for mathematicians than the lay audience, unlike the Quanta Magazine article. If you’re comfortable reading about simplexes and linear operators and multifunctions you’re good. Otherwise … well, I imagine you trust that cohomologies can take care of themselves. But I feel better-informed for reading the thread. And it includes a link to a downloadable textbook in algebraic topology, useful for people who want to give that a try on their own.

## In Our Time podcast has an episode on Longitude

The BBC’s In Our Time program, and podcast, did a 50-minute chat about the longitude problem. That’s the question of how to find one’s position, east or west of some reference point. It’s an iconic story of pop science and, I’ll admit, I’d think anyone likely to read my blog already knows the rough outline of the story. But you never know what people don’t know. And even if you do know, it’s often enjoyable to hear the story told a different way.

The mathematics content of the longitude problem is real, although it’s not discussed more than in passing during the chat. The core insight Western mapmakers used is that the difference between local (sun) time and a reference point’s time tells you how far east or west you are of that reference point. So then the question becomes how you know what your reference point’s time is.
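The arithmetic in that insight is pleasantly small: the Earth turns 360 degrees in 24 hours, so each hour of difference between local sun time and the reference point’s time is 15 degrees of longitude. A sketch (the sign convention and the 16:00 reading are made-up illustration):

```python
# Each hour of difference between local sun time and the reference
# port's time is 360/24 = 15 degrees of longitude.
def degrees_west(reference_clock_at_local_noon):
    # Hours the reference clock is ahead of local sun time, times 15.
    # Positive: west of the reference. Negative: east of it.
    return 15.0 * (reference_clock_at_local_noon - 12.0)

# Your chronometer, still keeping the reference port's time, reads
# 16:00 just as the sun reaches its highest point (local noon):
# you are 60 degrees west of the reference.
assert degrees_west(16.0) == 60.0
```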

This story, as it’s often told in pop science treatments, tends to focus on the brilliant clockmaker John Harrison, and the podcast does a fair bit of this. Harrison spent his life building a series of ever-more-precise clocks. These could keep London time on ships sailing around the world. (Or at least to the Caribbean, where the most profitable, slavery-driven, British interests were.) But he also spent decades fighting with the authorities he expected to reward him for his work. It makes for an almost classic narrative of lone genius versus the establishment.

But, and I’m glad the podcast discussion comes around to this, the reality is more ambiguous than this. (Actual history is always more ambiguous than whatever you think.) Part of the goal of the British (and other powers) was finding a practical way for any ship to find longitude. Granted Harrison could build an advanced, ingenious clock more accurate than anyone else could. Could he build the hundreds, or thousands, of those clocks that British shipping needed? Could anyone?

And the competing methods for finding longitude were based on astronomy and calculation. The moment when, say, the Moon passes in front of Jupiter is the same for everyone on Earth. (At least for the accuracy needed here.) It can, in principle, be forecast years, even decades ahead of time. So why not print up books listing astronomical events for the next five years and the formulas to turn observations into longitudes? Books are easy to print. You already train your navigators in astronomy so that they can find latitude. (This by how far above the horizon the pole star, or the sun, or another identifiable feature is.) And, incidentally, you gain a way of computing longitude that you don’t lose if your clock breaks. I appreciated having some of that perspective shown.

(The problem of longitude on land gets briefly addressed. The same principles that work at sea work on land. And land offers some secondary checks. For an unmentioned example there’s triangulation. It’s a great process, and a compelling use of trigonometry. I may do a piece about that myself sometime.)

Also a thing I somehow did not realize: British English pronounces “longitude” with a hard G sound. Huh.

## Reading the Comics Follow-up: Where Else Is A Tetrahedron’s Centroid Edition

A Reading the Comics post a couple weeks back inspired me to find the centroid of a regular tetrahedron. A regular tetrahedron, also known as “a tetrahedron”, is the four-sided die shape. A pyramid with triangular base. Or a cone with a triangle base, if you prefer. If one asks a person to draw a tetrahedron, and they comply, they’ll likely draw this shape. The centroid, the center of mass of the tetrahedron, is at a point easy enough to find. It’s on the perpendicular between any of the four faces — the equilateral triangles — and the vertex not on that face. Particularly, it’s one-quarter the distance from the face towards the other vertex. We can reason that out purely geometrically, without calculating, and I did in that earlier post.

But most tetrahedrons are not regular. They have centroids too; where are they?

Thing is I know the correct answer going in. It’s at the “average” of the vertices of the tetrahedron. Start with the Cartesian coordinates of the four vertices. The x-coordinate of the centroid is the arithmetic mean of the x-coordinates of the four vertices. The y-coordinate of the centroid is the mean of the y-coordinates of the vertices. The z-coordinate of the centroid is the mean of the z-coordinates of the vertices. Easy to calculate; but, is there a way to see that this is right?
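That claim is easy to check by machine, at least for particular examples. Here's a little Python sketch of my own; the vertex coordinates are ones I made up for the illustration.

```python
# The centroid's coordinates are the arithmetic means of the
# vertices' coordinates, taken coordinate by coordinate.
def centroid(vertices):
    """Arithmetic mean, coordinate by coordinate, of (x, y, z) points."""
    n = len(vertices)
    return tuple(sum(v[i] for v in vertices) / n for i in range(3))

# Four vertices of some non-regular tetrahedron (my own invented example).
A, B, C, D = (0, 0, 0), (4, 0, 0), (1, 5, 0), (2, 1, 6)
P = centroid([A, B, C, D])   # (1.75, 1.5, 1.5)
```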

What’s got me is I can think of an argument that convinces me. So in this sense, I have an easy proof of it. But I also see where this argument leaves a lot unaddressed. So it may not prove things to anyone else. Let me lay it out, though.

So start with a tetrahedron of your own design. This will be less confusing if I have labels for the four vertices. I’m going to call them A, B, C, and D. I don’t like those labels, not just for being trite, but because I so want ‘C’ to be the name for the centroid. I can’t find a way to do that, though, and not have the four tetrahedron vertices be some weird set of letters. So let me use ‘P’ as the name for the centroid.

Where is P, relative to the points A, B, C, and D?

And here’s where I give a part of an answer. Start out by putting the tetrahedron somewhere convenient. That would be the floor. Set the tetrahedron so that the face with triangle ABC is in the xy plane. That is, points A, B, and C all have the z-coordinate of 0. The point D has a z-coordinate that is not zero. Let me call that coordinate h. I don’t care what the x- and y-coordinates for any of these points are. What I care about is what the z-coordinate for the centroid P is.

The property of the centroid that was useful last time around was that it split the regular tetrahedron into four smaller, irregular, tetrahedrons, each with the same volume. Each with one-quarter the volume of the original. The centroid P does that for the tetrahedron too. So, how far does the point P have to be from the triangle ABC to make a tetrahedron with one-quarter the volume of the original?

The answer comes from the same trick used last time. The volume of a cone is one-third the area of the base times its altitude. The volume of the tetrahedron ABCD, for example, is one-third times the area of triangle ABC times how far point D is from the triangle. That number I’d labelled h. The volume of the tetrahedron ABCP, meanwhile, is one-third times the area of triangle ABC times how far point P is from the triangle. So the point P has to be one-quarter as far from triangle ABC as the point D is. It’s got a z-coordinate of one-quarter h.

Notice, by the way, that while I don’t know anything about the x- and y- coordinates of any of these points, I do know the z-coordinates. A, B, and C all have z-coordinate of 0. D has a z-coordinate of h. And P has a z-coordinate of one-quarter h. One-quarter h sure looks like the arithmetic mean of 0, 0, 0, and h.
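If you don't trust the geometric reasoning, a numerical check is quick. This sketch of my own uses the determinant formula for a tetrahedron's volume, which is equivalent to one-third base area times altitude; the particular tetrahedron is one I made up with triangle ABC in the z = 0 plane.

```python
# Volume of the tetrahedron with vertices a, b, c, d is
# |det[b - a, c - a, d - a]| / 6.
def sub(u, v):
    return tuple(x - y for x, y in zip(u, v))

def det3(r0, r1, r2):
    return (r0[0] * (r1[1] * r2[2] - r1[2] * r2[1])
          - r0[1] * (r1[0] * r2[2] - r1[2] * r2[0])
          + r0[2] * (r1[0] * r2[1] - r1[1] * r2[0]))

def volume(a, b, c, d):
    return abs(det3(sub(b, a), sub(c, a), sub(d, a))) / 6

# Triangle ABC in the z = 0 plane; D is h = 6 above it.
A, B, C, D = (0, 0, 0), (4, 0, 0), (1, 5, 0), (2, 1, 6)
P = tuple(sum(q) / 4 for q in zip(A, B, C, D))   # the centroid

# P's z-coordinate is one-quarter of D's, and the tetrahedron ABCP
# has one-quarter the volume of ABCD.
```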

At this point, I’m convinced. The coordinates of the centroid have to be the mean of the coordinates of the vertices. But you also see how much is not addressed. You’d probably grant that I have the z-coordinate worked out when three vertices have the same z-coordinate. Or where three vertices have the same y-coordinate or the same x-coordinate. You might allow that if I can rotate a tetrahedron, I can get three points to the same z-coordinate (or y- or x- if you like). But this still only gets one coordinate of the centroid P.

I’m sure a bit of algebra would wrap this up. But I would like to avoid that, if I can. I suspect the way to argue this geometrically depends on knowing the line from vertex D to tetrahedron centroid P, if extended, passes through the centroid of triangle ABC. And something similar applies for vertices A, B, and C. I also suspect there’s a link between the vector which points the direction from D to P and the sum of the three vectors that point the directions from D to A, B, and C. I haven’t quite got there, though.

I will let you know if I get closer.

## Reading the Comics, March 16, 2021: Where Is A Tetrahedron’s Centroid Edition

Comic Strip Master Command has not, to appearances, been distressed by my Reading the Comics hiatus. There are still mathematically-themed comic strips. Many of them are about story problems and kids not doing them. Some get into a mathematical concept. One that ran last week caught my imagination so I’ll give it some time here. This and other Reading the Comics essays I have at this link, and I figure to resume posting them, at least sometimes.

Ben Zaehringer’s In The Bleachers for the 16th of March, 2021 is an anthropomorphized-geometry joke. Here the centroid stands in for “the waist”, the height below which boxers may not punch.

The centroid is good geometry, something which turns up in plane and solid shapes. It’s a center of the shape: the arithmetic mean of all the points in the shape. (There are other things that can, with reason, be called a center too. Mathworld mentions the existence of 2,001 things that can be called the “center” of a triangle. It must be only a lack of interest that’s kept people from identifying even more centers for solid shapes.) It’s the center of mass, if the shape is a homogeneous block. Balance the shape from below this centroid and it stays balanced.

For a complicated shape, finding the centroid is a challenge worthy of calculus. For these shapes, though? The sphere, the cube, the regular tetrahedron? We can work those out by reason. And, along the way, work out whether this rule gives an advantage to either boxer.

The sphere first. That’s the easiest. The centroid has to be the center of the sphere. Like, the point that the surface of the sphere is a fixed radius from. This is so obvious it takes a moment to think why it’s obvious. “Why” is a treacherous question for mathematics facts; why should 4 divide 8? But sometimes we can find answers that give us insight into other questions.

Here, the “why” I like is symmetry. Look at a sphere. Suppose it lacks markings. There’s none of the referee’s face or bow tie here. Imagine then rotating the sphere some amount. Can you see any difference? You shouldn’t be able to. So, in doing that rotation, the centroid can’t have moved. If it had moved, you’d be able to tell the difference. The rotated sphere would be off-balance. The only place inside the sphere that doesn’t move when the sphere is rotated is the center.

This symmetry consideration helps answer where the cube’s centroid is. That also has to be the center of the cube. That is, halfway between the top and bottom, halfway between the front and back, halfway between the left and right. Symmetry again. Take the cube and stand it upside-down; does it look any different? No, so, the centroid can’t be any closer to the top than to the bottom. Similarly, rotate it 180 degrees without taking it off the mat. The rotation leaves the cube looking the same. So this rules out the centroid being closer to the front than to the back. It also rules out the centroid being closer to the left end than to the right. It has to be dead center in the cube.

Now to the regular tetrahedron. Obviously the centroid is … all right, now we have issues. Dead center is … where? We can tell when the regular tetrahedron’s turned upside-down. Also when it’s turned 90 or 180 degrees.

Symmetry will guide us. We can say some things about it. Each face of the regular tetrahedron is an equilateral triangle. The centroid has to be along the altitude. That is, the vertical line connecting the point on top of the pyramid with the equilateral triangle base, down on the mat. Imagine looking down on the shape from above, and rotating the shape 120 or 240 degrees if you’re still not convinced.

And! We can tip the regular tetrahedron over, and put another of its faces down on the mat. The shape looks the same once we’ve done that. So the centroid has to be along the altitude between the new highest point and the equilateral triangle that’s now the base, down on the mat. We can do that for each of the four sides. That tells us the centroid has to be at the intersection of these four altitudes. More, that the centroid has to be exactly the same distance to each of the four vertices of the regular tetrahedron. Or, if you feel a little fancier, that it’s exactly the same distance to the centers of each of the four faces.

It would be nice to know where along this altitude this intersection is, though. We can work it out by algebra. It’s no challenge to figure out the Cartesian coordinates for a good regular tetrahedron. Then finding the point that’s got the right distance is easy. (Set the base triangle in the xy plane. Center it, so the coordinates of the highest point are (0, 0, h) for some number h. Set one of the other vertices so it’s in the xz plane, that is, at coordinates (0, b, 0) for some b. Then find the c so that (0, 0, c) is exactly as far from (0, 0, h) as it is from (0, b, 0).) But algebra is such a mass of calculation. Can we do it by reason instead?
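For the record, the parenthetical's algebra is only a few lines of Python. This is my own sketch, for an edge-length-1 regular tetrahedron, with the coordinate placement the parenthetical describes; the values of b and h come from the standard regular-tetrahedron measurements.

```python
import math

# Base triangle centered at the origin in the xy plane; apex at (0, 0, h).
edge = 1.0
b = edge / math.sqrt(3)          # distance from base center to a base vertex
h = edge * math.sqrt(2.0 / 3.0)  # height of the apex above the base

# Want the c making (0, 0, c) as far from (0, 0, h) as from (0, b, 0):
# (h - c)**2 == b**2 + c**2, so c = (h**2 - b**2) / (2 * h).
c = (h**2 - b**2) / (2 * h)
# c comes out to exactly one-quarter of h.
```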

That I ask the question answers it. That I preceded the question with talk about symmetry answers how to reason it. The trick is that we can divide the regular tetrahedron into four smaller tetrahedrons. These smaller tetrahedrons aren’t regular; they’re not the Platonic solid. But they are still tetrahedrons. The little tetrahedron has as its base one of the equilateral triangles that’s the bigger shape’s face. The little tetrahedron has as its fourth vertex the centroid of the bigger shape. Draw in the edges, and the faces, like you’d imagine. Three edges, each connecting one of the base triangle’s vertices to the centroid. The faces have two of these new edges plus one of the base triangle’s edges.

The four little tetrahedrons have to all be congruent. Symmetry again; tip the big tetrahedron onto a different face and you can’t see a difference. So we’ll know, for example, all four little tetrahedrons have the same volume. The same altitude, too. The centroid is the same distance to each of the regular tetrahedron’s faces. And the four little tetrahedrons, together, have the same volume as the original regular tetrahedron.

What is the volume of a tetrahedron?

If we remember dimensional analysis we may expect the volume should be a constant times the area of the base of the shape times the altitude of the shape. We might also dimly remember there is some formula for the volume of any conical shape. A conical shape here is something that’s got a simple, closed shape in a plane as its base. And some point P, above the base, that connects by straight lines to every point on the base shape. This sounds like we’re talking about circular cones, but it can be any shape at the base, including polygons.

So we double-check that formula. The volume of a conical shape is one-third times the area of the base shape times the altitude. That’s the perpendicular distance between P and the plane that the base shape is in. And, hey, one-third times the area of the face times the altitude is exactly what we’d expect.

So. The original regular tetrahedron has a base — has all its faces — with area A. It has an altitude h. That h must relate in some way to the area; I don’t care how. The volume of the regular tetrahedron has to be $\frac{1}{3} A h$.

The volume of the little tetrahedrons is — well, they have the same base as the original regular tetrahedron. So a little tetrahedron’s base is A. The altitude of the little tetrahedron is the height of the original tetrahedron’s centroid above the base. Call that $h_c$. How can the volume of the little tetrahedron, $\frac{1}{3} A h_c$, be one-quarter the volume of the original tetrahedron, $\frac{1}{3} A h$? Only if $h_c$ is one-quarter $h$.

This pins down where the centroid of the regular tetrahedron has to be. It’s on the altitude underneath the top point of the tetrahedron. It’s one-quarter of the way up from the equilateral-triangle face.

(And I’m glad, checking this out, that I got to the right answer after all.)

So, if the cube and the tetrahedron have the same height, then the cube has an advantage. The cube’s centroid is higher up, so the tetrahedron has a narrower range to punch. Problem solved.

I do figure to talk about comic strips, and mathematics problems they bring up, more. I’m not sure how writing about one single strip turned into 1300 words. But that’s what happens every time I try to do something simpler. You know how it goes.

## Some topological fun

A friend sent me this tweet, start of a thread of some mathematically neat parks.

This jungle gym has the shape of one of the classic three-dimensional representations of the Klein bottle. It’s one of pop mathematics’s favorite shapes, up there with the Möbius strip, another all-time favorite.

Both the Klein bottle and the Möbius strip have many possible appearances, for about the same reason there are many kinds of trapezoids or octagons or whatnot. Möbius strips are easy enough to make in real life. Klein bottles, not so; the shape needs four dimensions of space and we just don’t have them. We’ll represent it with a shape that loops back through itself, but a real Klein bottle wouldn’t do that, for the same reason a wireframe cube’s edges don’t intersect the way the lines of its photograph do.

It makes a good wireframe shape, though. I’m surprised not to see more playground equipment using it.

## My All 2020 Mathematics A to Z: Zero Divisor

Jacob Siehler had several suggestions for this last of the A-to-Z essays for 2020. Zorn’s Lemma was an obvious choice. It’s got an important place in set theory, it’s got some neat and weird implications. It’s got a great name. The zero divisor is one of those technical things mathematics majors have to deal with. It never gets any pop-mathematics attention. I picked the less-travelled road and found a delightful scenic spot.

# Zero Divisor.

3 times 4 is 12. That’s a clear, unambiguous, and easily-agreed-upon arithmetic statement. The thing to wonder is what kind of mathematics it takes to mess that up. The answer is algebra. Not the high school kind, with x’s and quadratic formulas and all. The college kind, with group theory and rings.

A ring is a mathematical construct that lets you do a bit of arithmetic. Something that looks like arithmetic, anyway. It has a set of elements.  (An element is just a thing in a set.  We say “element” because it feels weird to call it “thing” all the time.) The ring has an addition operation. The ring has a multiplication operation. Addition has an identity element, something you can add to any element without changing the original element. We can call that ‘0’. The integers, or to use the lingo $Z$, are a ring (among other things).

Among the rings you learn, after the integers, is the integers modulo … something. This can be modulo any counting number. The integers modulo 10, for example, we write as $Z_{10}$ for short. There are different ways to think of what this means. The one convenient for this essay is that it’s the integers 0, 1, 2, up through 9. And that the result of any calculation is “how much more than a whole multiple of 10 this calculation would otherwise be”. So then 3 times 4 is now 2. 3 times 5 is 5; 3 times 6 is 8. 3 times 7 is 1, and doesn’t that seem peculiar? That’s part of how modulo arithmetic warns us that groups and rings can be quite strange things.

We can do modulo arithmetic with any of the counting numbers. Look, for example, at $Z_{5}$ instead. In the integers modulo 5, 3 times 4 is … 2. This doesn’t seem to get us anything new. How about $Z_{8}$? In this, 3 times 4 is 4. That’s interesting. It doesn’t make 3 the multiplicative identity for this ring. 3 times 3 is 1, for example. But you’d never see something like that for regular arithmetic.

How about $Z_{12}$? Now we have 3 times 4 equalling 0. And that’s a dramatic break from how regular numbers work. One thing we know about regular numbers is that if a times b is 0, then either a is 0, or b is zero, or they’re both 0. We rely on this so much in high school algebra. It’s what lets us pick out roots of polynomials. Now? Now we can’t count on that.
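All these products are easy to check by machine. A one-liner of my own, nothing more:

```python
# Multiplication in the integers modulo n.
def times_mod(a, b, n):
    return (a * b) % n

[times_mod(3, k, 10) for k in (4, 5, 6, 7)]   # [2, 5, 8, 1]
times_mod(3, 4, 8)                            # 4
times_mod(3, 3, 8)                            # 1
times_mod(3, 4, 12)                           # 0
```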

When this does happen, when one thing times another equals zero, we have “zero divisors”. These are anything in your ring that can multiply by something else to give 0. Is zero, the additive identity, always a zero divisor? … That depends on what the textbook you first learned algebra from said. To avoid ambiguity, you can write a “nonzero zero divisor”. This clarifies your intentions and slows down your copy editing every time you read “nonzero zero”. Or call it a “nontrivial zero divisor” or “proper zero divisor” instead. My preference is to accept 0 as always being a zero divisor. We can disagree on this. What of zero divisors other than zero?

Your ring might or might not have them. It depends on the ring. The ring of integers $Z$, for example, doesn’t have any zero divisors except for 0. The ring of integers modulo 12 $Z_{12}$, though? Anything that isn’t relatively prime to 12 is a zero divisor. So, 2, 3, 4, 6, 8, 9, and 10 are zero divisors here. The ring of integers modulo 13 $Z_{13}$? That doesn’t have any zero divisors, other than zero itself. In fact any ring of integers modulo a prime number, $Z_{p}$, lacks zero divisors besides 0.
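A brute-force enumeration bears this out. This is my own sketch, and it follows my stated preference of counting 0 as a zero divisor:

```python
# An element a of Z_n is a zero divisor if a * b = 0 (mod n)
# for some nonzero b in Z_n.
def zero_divisors(n):
    return [a for a in range(n)
            if any((a * b) % n == 0 for b in range(1, n))]

zero_divisors(12)   # [0, 2, 3, 4, 6, 8, 9, 10]
zero_divisors(13)   # [0]
```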

Focusing too much on integers modulo something makes zero divisors sound like some curious shadow of prime numbers. There are some similarities. Whether a number is prime depends on your multiplication rule and what set of things it’s in. Being a zero divisor in one ring doesn’t directly relate to whether something’s a zero divisor in any other. Knowing what the zero divisors are tells you something about the structure of the ring.

It’s hard to resist focusing on integers-modulo-something when learning rings. They work very much like regular arithmetic does. Even the strange thing about them, that every result is from a finite set of digits, isn’t too alien. We do something quite like it when we observe that three hours after 10:00 is 1:00. But many sets of elements can create rings. Square matrixes are the obvious extension. Matrixes are grids of elements, each of which … well, they’re most often going to be numbers. Maybe integers, or real numbers, or complex numbers. They can be more abstract things, like rotations or whatnot, but they’re hard to typeset. It’s easy to find zero divisors in matrixes of numbers. Imagine, like, a matrix that’s all zeroes except for one element, somewhere. There are a lot of matrixes which, multiplied by that, will be a zero matrix, one with nothing but zeroes in it. Another common kind of ring is the polynomials. For these you need some constraint like the polynomial coefficients being integers-modulo-something. You can make that work.
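For a concrete pair, here's a matrix example of my own invention, worked out with a bare-hands two-by-two multiply:

```python
# Two nonzero 2-by-2 matrixes whose product is the zero matrix.
def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[1, 0], [0, 0]]   # all zeroes except for one element
N = [[0, 0], [0, 1]]
matmul2(M, N)          # [[0, 0], [0, 0]], yet neither factor is zero
```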

In 1988 Istvan Beck tried to establish a link between graph theory and ring theory. We now have a usable standard definition of one. If $R$ is any ring, then $\Gamma(R)$ is the zero-divisor graph of $R$. (I know some of you think $R$ is the real numbers. No; that’s a bold-faced $\mathbb{R}$ instead. Unless that’s too much bother to typeset.) You make the graph by putting in a vertex for the elements in $R$. You connect two vertices a and b if the product of the corresponding elements is zero. That is, if they’re zero divisors for one another. (In Beck’s original form, this included all the elements. In modern use, we don’t bother including the elements that are not zero divisors.)
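As an illustration, here's a sketch of my own that builds $\Gamma(Z_{12})$ in the modern form, with vertices only for the nonzero zero divisors:

```python
# Zero-divisor graph of Z_n: vertices are the nonzero zero divisors;
# an edge joins a and b when a * b = 0 (mod n). Each edge is listed
# once, with a < b.
def zero_divisor_graph(n):
    verts = [a for a in range(1, n)
             if any((a * b) % n == 0 for b in range(1, n))]
    edges = [(a, b) for a in verts for b in verts
             if a < b and (a * b) % n == 0]
    return verts, edges

verts, edges = zero_divisor_graph(12)
# verts: [2, 3, 4, 6, 8, 9, 10]
# edges: (2,6), (3,4), (3,8), (4,6), (4,9), (6,8), (6,10), (8,9)
```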

Drawing this graph $\Gamma(R)$ makes tools from graph theory available to study rings. We can measure things like the distance between elements, or what paths from one vertex to another exist. What cycles — paths that start and end at the same vertex — exist, and how large they are. Whether the graphs are bipartite. A bipartite graph is one where you can divide the vertices into two sets, and every edge connects one thing in the first set with one thing in the second. What the chromatic number — the minimum number of colors it takes to make sure no two adjacent vertices have the same color — is. What shape does the graph have?

It’s easy to think that zero divisors are just a thing which emerges from a ring. The graph theory connection tells us otherwise. You can make a potential zero divisor graph and ask whether any ring could fit that. And, from that, what we can know about a ring from its zero divisors. Mathematicians are drawn as if by an occult hand to things that let you answer questions about a thing from its “shape”.

And this lets me complete a cycle in this year’s A-to-Z, to my delight. There is an important question in topology which group theory could answer. It’s a generalization of the zero-divisors conjecture, a hypothesis about what fits in a ring based on certain types of groups. This hypothesis — actually, these hypotheses. There are a bunch of similar questions about what invariants called the L2-Betti numbers can be. These we call the Atiyah Conjecture. This because of work Michael Atiyah did in the cohomology of manifolds starting in the 1970s. It’s work, I admit, I don’t understand well enough to summarize, and hope you’ll forgive me for that. I’m still amazed that one can get to cutting-edge mathematics research from this. It seems, at its introduction, to be only a subversion of how we find x for which $(x - 2)(x + 1) = 0$.

And this, I am amazed to say, completes the All 2020 A-to-Z project. All of this year’s essays should be gathered at this link. In the next couple days I plan to check that they actually are. All the essays from every A-to-Z series, going back to 2015, should be at this link. I plan to soon have an essay about what I learned in doing the A-to-Z this year. And then we can look to 2021 and hope that works out all right. Thank you for reading.

## My All 2020 Mathematics A to Z: Yang Hui

Nobody had particular suggestions for the letter ‘Y’ this time around. It’s a tough letter to find mathematical terms for. It doesn’t even lend itself to typography or wordplay the way ‘X’ does. So I chose to do one more biographical piece before the series concludes. There were twists along the way in writing.

Before I get there, I have a word for a longtime friend, Porsupah Ree. Among her hobbies is watching, and photographing, the wild rabbits. A couple years back she got a great photograph. It’s one that you may have seen going around social media with a caption about how “everybody was bun fu fighting”. She’s put it up on Redbubble, so you can get the photograph as a print or a coffee mug or a pillow, or many other things. And you can support her hobbies of rabbit photography and eating.

# Yang Hui.

Several problems beset me in writing about this significant 13th-century Chinese mathematician. One is my ignorance of the Chinese mathematical tradition. I have little to guide me in choosing what tertiary sources to trust. Another is that the tertiary sources know little about him. The Complete Dictionary of Scientific Biography gives a dire verdict. “Nothing is known about the life of Yang Hui, except that he produced mathematical writings”. MacTutor’s biography gives his lifespan as from circa 1238 to circa 1298, on what basis I do not know. He seems to have been born in what’s now Hangzhou, near Shanghai. He seems to have worked as a civil servant. This is what I would have imagined; most scholars then were. It’s the sort of job that gives one time to write mathematics. Also he seems not to have been a prominent civil servant; he’s apparently not listed in any dynastic records. After that, we need to speculate.

E F Robertson, writing the MacTutor biography, speculates that Yang Hui was a teacher. That he was writing to explain mathematics in interesting and helpful ways. I’m not qualified to judge Robertson’s conclusions. And Robertson notes that’s not inconsistent with Yang being a civil servant. Robertson’s argument is based on Yang’s surviving writings, and what they say about the demonstrated problems. There is, for example, 1274’s Cheng Chu Tong Bian Ben Mo. Robertson translates that title as Alpha and omega of variations on multiplication and division. I try to work out my unease at having something translated from Chinese as “Alpha and Omega”. That is my issue. Relevant here is that a syllabus prefaces the first chapter. It provides a schedule and series of topics, as well as a rationale for why this plan.

Was Yang Hui a discoverer of significant new mathematics? Or did he “merely” present what was already known in a useful way? This is not to dismiss him; we have the same questions about Euclid. He is held up as among the great Chinese mathematicians of the 13th century, a particularly fruitful time and place for mathematics. How much greatness to assign to original work and how much to good exposition is unanswerable with what we know now.

Consider for example the thing I’ve featured before, Yang Hui’s Triangle. It’s the arrangement of numbers known in the west as Pascal’s Triangle. Yang provides the earliest extant description of the triangle and how to form it and use it. This in the 1261 Xiangjie jiuzhang suanfa (Detailed analysis of the mathematical rules in the Nine Chapters and their reclassifications). But in it, Yang Hui says he learned the triangle from a treatise by Jia Xian, Huangdi Jiuzhang Suanjing Xicao (The Yellow Emperor’s detailed solutions to the Nine Chapters on the Mathematical Art). Jia Xian lived in the 11th century; he’s known to have written two books, both lost. Yang Hui’s commentary gives us a fair idea what Jia Xian wrote about. But we’re limited in judging what was Jia Xian’s idea and what was Yang Hui’s inference or what.

The Nine Chapters referred to is Jiuzhang suanshu. An English title is Nine Chapters on the Mathematical Art. The book is a 246-problem handbook of mathematics that dates back to antiquity. It’s impossible to say when the Nine Chapters was first written. Liu Hui, who wrote a commentary on the Nine Chapters in 263 CE, thought it predated the Qin ruler Shih Huang Ti’s 213 BCE destruction of all books. But the book — and the many commentaries on the book — served as a centerpiece for Chinese mathematics for a long while. Jia Xian’s and Yang Hui’s work was part of this tradition.

Yang Hui’s Detailed Analysis covers the Nine Chapters. It goes on for three chapters, more about geometry and fundamentals of mathematics. Even how to classify the problems. He had further works. In 1275 Yang published Practical mathematical rules for surveying and Continuation of ancient mathematical methods for elucidating strange properties of numbers. (I’m not confident in my ability to give the Chinese titles for these.) The first title particularly echoes how in the Western tradition geometry was born of practical concerns.

The breadth of topics covers, it seems to me, a decent modern (American) high school mathematics education. The triangle, and the binomial expansions it gives us, fit that. Yang writes about more efficient ways to multiply on the abacus. He writes about finding simultaneous solutions to sets of equations. And through a technique that amounts to finding the matrix of coefficients for the equations, and its determinant. He writes about finding the roots for cubic and quartic equations. The technique is commonly known in the west as Horner’s Method, a technique of calculating divided differences. We see the calculating of areas and volumes for regular shapes.
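Horner's method is most often shown today as evaluating a polynomial by nested multiplication, which is the core step of the root-finding scheme. A minimal sketch of my own, certainly not in Yang Hui's notation:

```python
def horner(coeffs, x):
    """Evaluate a polynomial, coefficients given highest degree first,
    by nested multiplication: (((c0)x + c1)x + c2)x + ..."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# x^3 - 6x^2 + 11x - 6 has roots 1, 2, and 3:
horner([1, -6, 11, -6], 2)   # 0
```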

And sequences. He found the sum of the squares of natural numbers followed a rule:

$1^2 + 2^2 + 3^2 + \cdots + n^2 = \frac{1}{3}\cdot n\cdot (n + 1)\cdot (n + \frac{1}{2})$

This by a method of “piling up squares”, described some here by the Mathematical Association of America. (Me, I spent 40 minutes that could have gone into this essay convincing myself the formula was right. I couldn’t make myself believe the $(n + \frac{1}{2})$ part and had to work it out a couple different ways.)
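A machine does the convincing faster than my 40 minutes did. A quick check of my own of the rule, for the first hundred values of n:

```python
# Yang Hui's sum-of-squares rule, checked against the direct sum.
def sum_of_squares(n):
    return sum(k * k for k in range(1, n + 1))

def yang_hui(n):
    return n * (n + 1) * (n + 0.5) / 3

all(sum_of_squares(n) == yang_hui(n) for n in range(1, 101))   # True
```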

And then there’s magic squares, and magic circles. He seems to have found them, as professional mathematicians today would, good ways to interest people in calculation. Not magic; he called them something like number diagrams. But he gives magic squares from three-by-three all the way to ten-by-ten. We don’t know of earlier examples of Chinese mathematicians writing about the larger magic squares. But Yang Hui doesn’t claim to be presenting new work. He also gives magic circles. The simplest is a web of seven intersecting circles, each with four numbers along the circle and one at its center. The sum of the center and the circumference numbers is 65 for all seven circles. Is this significant? No; merely fun.

Grant this breadth of work. Is he significant? I learned this year that familiar names might have been obscure until quite recently. The record is once again ambiguous. Other mathematicians wrote about Yang Hui’s work in the early 1300s. Yang Hui’s works were printed in China in 1378, says the Complete Dictionary of Scientific Biography, and reprinted in Korea in 1433. They’re listed in a 1441 catalogue of the Ming Imperial Library. Seki Takakazu, a towering figure in 17th century Japanese mathematics, copied the Korean text by hand. Yet Yang Hui’s work seems to have been lost by the 18th century. Reconstructions, from commentaries and encyclopedias, started in the 19th century. But we don’t have everything we know he wrote. We don’t even have a complete text of Detailed Analysis. This is not to say he wasn’t influential. All I could say is there seems to have been a time his influence was indirect.

I am sorry to offer so much uncertainty about Yang Hui. I had hoped to provide a fuller account. But we always only know thin slivers of life, and try to use those to know anything.

Next week I hope to finish this year’s A-to-Z project. The whole All 2020 A-to-Z should be gathered at this link. And all the essays from every A-to-Z series should be at this link. I haven’t decided whether I’ll publish on Wednesday or Friday. It’ll depend what I can get done over the weekend; we’ll see. Thank you for reading.

## My All 2020 Mathematics A to Z: Tiling

Mr Wu, author of the Singapore Maths Tuition blog, had an interesting suggestion for the letter T: Talent. As in mathematical talent. It’s a fine topic but, in the end, too far beyond my skills. I could share some of the legends about mathematical talent I’ve received. But what that says about the culture of mathematicians is a deeper and more important question.

So I picked my own topic for the week. I do have topics for next week — U — and the week after — V — chosen. But the letters W and X? I’m still open to suggestions. I’m open to creative or wild-card interpretations of the letters. Especially for X and (soon) Z. Thanks for sharing any thoughts you care to.

# Tiling.

Think of a floor. Imagine you are bored. What do you notice?

What I hope you notice is that it is covered. Perhaps by carpet, or concrete, or something homogeneous like that. Let’s ignore that. My floor is covered in small pieces, repeated. My dining room floor is slats of wood, about three and a half feet long and two inches wide. The slats are offset from the neighbors so there’s a pleasant strong line in one direction and stippled lines in the other. The kitchen is squares, one foot on each side. This is a grid we could plot high school algebra functions on. The bathroom is more elaborate. It has white rectangles about two inches long, tan rectangles about two inches long, and black squares. Each rectangle is perpendicular to the ones of the other color, and arranged to bisect those. The black squares fill the gaps where no rectangle would fit.

Move from my house to pure mathematics. It’s easy to turn the floor of a room into abstract mathematics. We start with something to tile. Usually this is the infinite, two-dimensional plane. The thing you get if you have a house and forget the walls. Sometimes we look to tile the hyperbolic plane, a different geometry that we of course represent with a finite circle. (Setting particular rules about how to measure distance makes this equivalent to a funny-shaped plane.) Or the surface of a sphere, or of a torus, or something like that. But if we don’t say otherwise, it’s the plane.

What to cover it with? … Smaller shapes. We have a mathematical tiling if we have a collection of not-overlapping open sets. And if those open sets, plus their boundaries, cover the whole plane. “Cover” here means what “cover” means in English, only using more technical words. These sets — these tiles — can be any shape. We can have as many or as few of them as we like. We can even add markings to the tiles, give them colors or patterns or such, to add variety to the puzzles.

(And if we want, we can do this in other dimensions. There are good “tiling” questions to ask about how to fill a three-dimensional space, or a four-dimensional one, or more.)

Having an unlimited collection of tiles is nice. But mathematicians learn to look for how little we need to do something. Here, we look for the smallest number of distinct shapes. As with tiling an actual floor, we can get all the tiles we need. We can rotate them, too, to any angle. We can flip them over and put the “top” side “down”, something kitchen tiles won’t let us do. Can we reflect them? Use the shape we’d get looking at the mirror image of one? That’s up to whoever’s writing this paper.

What shapes will work? Well, squares, for one. We can prove that by looking at a sheet of graph paper. Rectangles would work too. We can see that by drawing boxes around the squares on our graph paper. Two-by-one blocks, three-by-two blocks, 40-by-1 blocks, these all still cover the paper and we can imagine covering the plane. If we like, we can draw two-by-two squares. Squares made up of smaller squares. Or repeat this: draw two-by-one rectangles, and then group two of these rectangles together to make two-by-two squares.

We can take it on faith that, oh, rectangles π long by e wide would cover the plane too. These can all line up in rows and columns, the way our squares would. Or we can stagger them, like bricks or my dining room’s wood slats are.

How about parallelograms? Those, it turns out, tile exactly as well as rectangles or squares do. Grids or staggered, too. Ah, but how about trapezoids? Surely they won’t tile anything. Not generally, anyway. The slanted sides will, most of the time, only fit in weird winding circle-like paths.

Unless … take two of these trapezoid tiles. We’ll set them down so the parallel sides run horizontally in front of you. Rotate one of them, though, 180 degrees. And try setting them — let’s say so the longer slanted line of both trapezoids meet, edge to edge. These two trapezoids come together. They make a parallelogram, although one with a slash through it. And we can tile parallelograms, whether or not they have a slash.
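This pairing can be checked with coordinates. Here is a minimal Python sketch, using an arbitrary made-up trapezoid: rotate a copy 180 degrees about the midpoint of a slanted side and confirm the union is a parallelogram.

```python
# Verify, numerically, that a trapezoid plus its 180-degree rotation
# about the midpoint of a slanted side forms a parallelogram.
# The trapezoid's coordinates are an arbitrary example.

def rotate_180(point, center):
    """Rotate a point 180 degrees about a center: p -> 2c - p."""
    return (2 * center[0] - point[0], 2 * center[1] - point[1])

# Trapezoid with parallel horizontal sides: A-B on the bottom, D-C on top.
A, B, C, D = (0, 0), (4, 0), (3, 2), (1, 2)

# Midpoint of the slanted side B-C, the edge the two copies will share.
mid = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)

# Rotate every vertex; B and C swap places, A and D land outside.
A2, D2 = rotate_180(A, mid), rotate_180(D, mid)

# The union's outline is A, D2, A2, D (the shared edge is interior).
quad = [A, D2, A2, D]

def side(i):
    p, q = quad[i], quad[(i + 1) % 4]
    return (q[0] - p[0], q[1] - p[1])

# A parallelogram has opposite side vectors equal and opposite.
assert side(0) == tuple(-c for c in side(2))
assert side(1) == tuple(-c for c in side(3))
print("the two trapezoids make a parallelogram")
```

The same check passes for any trapezoid you substitute in, which is the point: the slash through the middle never spoils the parallelogram.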

OK, but if you draw some weird quadrilateral shape, and it’s not anything that has a more specific name than “quadrilateral”? That won’t tile the plane, will it?

It will! In one of those turns that surprises and impresses me every time I run across it again, any quadrilateral can tile the plane. It opens up so many home decorating options, if you get in good with a tile maker.
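The reason it works: a quadrilateral’s four interior angles always sum to 360 degrees, so rotated copies can meet around a vertex with no gap and no overlap. A quick numeric check, with a lopsided quadrilateral picked for illustration:

```python
import math

# The interior angles of any simple quadrilateral sum to 360 degrees,
# which is why 180-degree-rotated copies fit together around a vertex.

def interior_angle_sum(vertices):
    """Sum of interior angles of a counterclockwise simple polygon, in degrees."""
    total = 0.0
    n = len(vertices)
    for i in range(n):
        prev, here, nxt = vertices[i - 1], vertices[i], vertices[(i + 1) % n]
        a = math.atan2(prev[1] - here[1], prev[0] - here[0])
        b = math.atan2(nxt[1] - here[1], nxt[0] - here[0])
        total += math.degrees(a - b) % 360
    return total

# A lopsided quadrilateral with no more specific name.
quad = [(0, 0), (5, -1), (6, 3), (1, 2)]
print(round(interior_angle_sum(quad), 6))  # 360.0
```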

That’s some good news for quadrilateral tiles. How about other shapes? Triangles, for example? Well, that’s good news too. Take two of any identical triangle you like. Turn one of them around and match sides of the same length. The two triangles, bundled together like that, are a quadrilateral. And we can use any quadrilateral to tile the plane, so we’re done.

How about pentagons? … With pentagons, the easy times stop. It turns out not every pentagon will tile the plane. The pentagon has to be of the right kind to make it fit. If the pentagon is in one of these kinds, it can tile the plane. If not, not. There are fifteen families of convex pentagon known to tile the plane. The most recent family was discovered in 2015. It’s thought that there are no other convex pentagon tilings. I don’t know whether the proof of that is generally accepted in tiling circles. And we can do more tilings if the pentagon doesn’t need to be convex. For example, we can cut any parallelogram into two identical pentagons. So we can make as many pentagons as we want to cover the plane. But we can’t assume any pentagon we like will do it.

Hexagons look promising. First, a regular hexagon tiles the plane, as strategy games know. There are also at least three families of irregular hexagons that we know can tile the plane.

And there the good times end. There are no convex heptagons or octagons or any other shape with more sides that tile the plane.

Not by themselves, anyway. If we have more than one tile shape we can start doing fine things again. Octagons assisted by squares, for example, will tile the plane. I’ve lived places with that tiling. Or something that looks like it. It’s easier to install if you have square tiles with an octagon pattern making up the center, and triangle corners a different color. These squares come together to look like octagons and squares.

And this leads to a fun avenue of tiling. Hao Wang, in the early 60s, proposed a sort of domino-like tiling. You may have seen these in mathematics puzzles, or in toys. Each of these Wang Tiles, or Wang Dominoes, is a square. But the square is cut along the diagonals, into four quadrants. Each quadrant is a right triangle. Each quadrant, each triangle, is one of a finite set of colors. Adjacent triangles can have the same color. You can place down tiles, subject only to the rule that the tile edge has to have the same color on both sides. So a tile with a blue right-quadrant has to have on its right a tile with a blue left-quadrant. A tile with a white upper-quadrant on its top has, above it, a tile with a white lower-quadrant.
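The matching rule is simple to state in code. This sketch, with a made-up two-tile set, treats each tile as a (top, right, bottom, left) tuple of edge colors and checks whether a grid of placements is legal:

```python
# A sketch of the Wang tile matching rule. Each tile is a tuple of four
# edge colors (top, right, bottom, left); tiles may not be rotated.
# A placement is legal when every shared edge has the same color on
# both sides. The tile set and grids here are made up for illustration.

def is_valid(grid):
    """grid is a list of rows of (top, right, bottom, left) tiles."""
    for r, row in enumerate(grid):
        for c, tile in enumerate(row):
            # Right neighbor's left edge must match this tile's right edge.
            if c + 1 < len(row) and tile[1] != row[c + 1][3]:
                return False
            # The tile below: its top edge must match this tile's bottom edge.
            if r + 1 < len(grid) and tile[2] != grid[r + 1][c][0]:
                return False
    return True

blue_white = ("white", "blue", "white", "blue")
blue_red = ("white", "red", "white", "blue")  # hypothetical tiles

good = [[blue_white, blue_white],
        [blue_white, blue_white]]   # blue meets blue, white meets white
bad = [[blue_red, blue_white],
       [blue_white, blue_white]]    # a red edge abuts a blue one

print(is_valid(good), is_valid(bad))  # True False
```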

In 1961 Wang conjectured that if a finite set of these tiles will tile the plane, then there must be a periodic tiling. That is, if you picked up the plane and slid it a set horizontal and vertical distance, it would all look the same again. This sort of translation is common. All my floors do that. If we ignore things like the bounds of their rooms, or the flaws in their manufacture or installation or where a tile broke in some mishap.

This is not to say you couldn’t arrange them aperiodically. You don’t even need Wang Tiles for that. Get two colors of square tile, a white and a black, and lay them down based on whether the next decimal digit of π is odd or even. No; Wang’s conjecture was that if you had tiles that you could lay down aperiodically, then you could also arrange them to set down periodically. With the black and white squares, lay down alternate colors. That’s easy.

In 1964, Robert Berger proved Wang’s conjecture was false. He found a collection of Wang Tiles that could only tile the plane aperiodically. In 1966 he published this in the Memoirs of the American Mathematical Society. The 1964 proof was for his thesis. 1966 was its general publication. I mention this because while doing research I got irritated at how different sources dated this to 1964, 1966, or sometimes 1961. I want to have this straightened out. It appears Berger had the proof in 1964 and the publication in 1966.

I would like to share details of Berger’s proof, but haven’t got access to the paper. What fascinates me about this is that Berger’s proof used a set of 20,426 different tiles. I assume he did not work this all out with shards of construction paper, but then, how did he manage 20,426 of anything? With computer time as expensive as it was in 1964? The mystery of how he got all these tiles is worth an essay of its own, and I regret I can’t write it.

Berger conjectured that a smaller set might do. Quite so. He himself reduced the set to 104 tiles. Donald Knuth in 1968 modified the set down to 92 tiles. In 2015 Emmanuel Jeandel and Michael Rao published a set of 11 tiles, using four colors. And showed by computer search that a smaller set of tiles, or fewer colors, would not force some aperiodic tiling to exist. I do not know whether there might be other sets of 11, four-colored, tiles that work. You can see the set at the top of Wikipedia’s page on Wang Tiles.

These Wang Tiles, all squares, inspired variant questions. Could there be other shapes that only aperiodically tile the plane? What if they don’t have to be squares? Raphael Robinson, in 1971, came up with a tiling using six shapes. The shapes have patterns on them too, usually represented as colored lines. Tiles can be put down only in ways that fit and that make the lines match up.

Among my readers are people who have been waiting, for 1800 words now, for Roger Penrose. It’s now that time. In 1974 Penrose published an aperiodic tiling, one based on pentagons and using a set of six tiles. You’ve never heard of that either, because soon after he found a different set, based on a quadrilateral cut into two shapes. The shapes, as with Wang Tiles or Robinson’s tiling, have rules about what edges may be put against each other. Penrose — and independently Robert Ammann — also developed another set, this based on a pair of rhombuses. These have rules about what edges may touch one another, and have patterns on them which must line up.

The Penrose tiling became, and stayed, famous. (Ammann, an amateur, never had much to do with the mathematics community. He died in 1994.) Martin Gardner publicized it, and it leapt out of mathematicians’ hands into the popular culture. At least a bit. That it could give you nice-looking floors must have helped.

To show that the rhombus-based Penrose tiling is aperiodic takes some arguing. But it uses tools already used in this essay. Remember drawing rectangles around several squares? And then drawing squares around several of these rectangles? We can do that with these Penrose-Ammann rhombuses. From the rhombus tiling we can draw bigger rhombuses. Ones which, it turns out, follow the same edge rules that the originals do. So that we can go again, grouping these bigger rhombuses into even-bigger rhombuses. And into even-even-bigger rhombuses. And so on.

What this gets us is this: suppose the rhombus tiling is periodic. Then there’s some finite-distance horizontal-and-vertical move that leaves the pattern unchanged. So, the same finite-distance move has to leave the bigger-rhombus pattern unchanged. And this same finite-distance move has to leave the even-bigger-rhombus pattern unchanged. Also the even-even-bigger pattern unchanged.

Keep bundling rhombuses together. You get eventually-big-enough-rhombuses. Now, think of how far you have to move the tiles to get a repeat pattern. Especially, think how many eventually-big-enough-rhombuses it is. This distance, the move you have to make, is less than one eventually-big-enough rhombus. (If it’s not, you aren’t eventually-big-enough yet. Bundle them together again.) And that doesn’t work. Moving one tile over without changing the pattern makes sense. Moving one-half a tile over? That doesn’t. So the eventually-big-enough pattern can’t be periodic, and so, the original pattern can’t be either. This is explained in graphic detail in a nice PowerPoint slide set from Professor Alexander F Ritter, A Tour Of Tilings In Thirty Minutes.

It’s possible to do better. In 2010 Joshua E S Socolar and Joan M Taylor published a single tile that can force an aperiodic tiling. As with the Wang Tiles, and Robinson shapes, and the Penrose-Ammann rhombuses, markings are part of it. They have to line up so that the markings — in two colors, in the renditions I’ve seen — make sense. With the Penrose tilings, you can get away from the pattern rules for the edges by replacing them with little notches. The Socolar-Taylor shape can make a similar trade. Here the rules are complex enough that it would need to be a three-dimensional shape, one that looks like the dilithium housing of the warp core. You can see the tile — in colored, marked form, and also in three-dimensional tile shape — at the PDF here. It’s likely not coming to the flooring store soon.

It’s all wonderful, but is it useful? I could go on a few hundred words about, particularly, crystals and quasicrystals. These are important for materials science. Especially these days as we have harnessed slightly-imperfect crystals to be our computers. I don’t care. These are lovely to look at. If you see nothing appealing in a great heap of colors and polygons spread over the floor there are things we cannot communicate about. Tiling is a delight; what more do you need?

Thanks for your attention. This and all of my 2020 A-to-Z essays should be at this link. All the essays from every A-to-Z series should be at this link. See you next week, I hope.

## Using my A to Z Archives: Riemann Sphere

Part of why I write these essays is to save future time. If I have an essay explaining some complex idea, then in the future, I can use a link and a short recap of the central idea. There are some essays that have been perennials. I think I’ve linked to polynomials more than anything else on this site. And then some disappear, even though they seem to be about good useful subjects. Riemann sphere, from the Leap Day 2016 sequence, is one of those disappeared topics. This is one of the ways to convert between “shapes on the plane” and “shapes on the sphere”. There’s no way to perfectly move something from the plane to the sphere, or vice-versa. The Riemann Sphere is an approach which preserves the interior angles. If two lines on the plane intersect at a 25 degree angle, their representations on the sphere will intersect at a 25 degree angle. But everything else may get strange.

## My All 2020 Mathematics A to Z: Quadratic Form

I’m happy to have a subject from Elke Stangl, author of elkemental Force. That’s a fun and wide-ranging blog which, among other things, just published a poem about proofs. You might enjoy.

# Quadratic Form.

One delight, and sometimes deadline frustration, of these essays is discovering things I had not thought about. Researching quadratic forms invited the obvious question of what is a form? And that goes undefined on, for example, Mathworld. Also in the textbooks I’ve kept. Even ones you’d think would mention it, like R W R Darling’s Differential Forms and Connections, or Frigyes Riesz and Béla Sz-Nagy’s Functional Analysis. Reluctantly I started thinking about what we talk about when discussing forms.

Quadratic forms offer some hints. These take a vector in some n-dimensional space, and return a scalar. Linear forms, and cubic forms, do the same. The pattern suggests a form is a mapping from a space like $R^n$ to $R$ or maybe $C^n$ to $C$. That looks good, but then we have to ask: isn’t that just an operator? Also: then what about differential forms? Or volume forms? These are about how to fill space. There’s nothing scalar in that. But maybe these are both called forms because they fill similar roles. They might have as little to do with one another as red pandas and giant pandas do.

Enlightenment comes after much consideration, or after happening on Wikipedia’s page about homogeneous polynomials. That offers “an algebraic form, or simply form, is a function defined by a homogeneous polynomial”. That satisfies. First, because it gets us back to polynomials. Second, because all the forms I could think of do have rules based in homogeneous polynomials. They might be peculiar polynomials. Volume forms, for example, have a polynomial in wedge products of differentials. But it counts.

A function’s homogeneous if it scales a particular way. Evaluate it at some set of coordinates x, y, z, (more variables if you need). That’s some number; call it the value of f. Take all those coordinates and multiply them by the same constant; let me call that α. Evaluate the function at αx, αy, αz, (α times more variables if you need). Then that value is α^k times the original value of f. Here k is some constant. It depends on the function, but not on what x, y, z, (more) are.

For a quadratic form, this constant k equals 2. This is because in the quadratic form, all the terms in the polynomial are of the second degree. So, for example, $x^2 + y^2$ is a quadratic form. So is $x^2 + 2xy + y^2$; the x times the y keeps that term at the second degree. Also a quadratic form is $xy + yz + zx$. So is $x^2 + y^2 + zw + wx + wy$.
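That scaling behavior is easy to confirm numerically. A tiny sketch, using the form $xy + yz + zx$ from above with arbitrary test values:

```python
# Check the scaling rule for a quadratic form: multiplying every input
# by a constant alpha multiplies the output by alpha squared.

def q(x, y, z):
    # One of the quadratic forms from the text.
    return x * y + y * z + z * x

x, y, z = 3.0, -2.0, 5.0
alpha = 7.0

scaled = q(alpha * x, alpha * y, alpha * z)
print(scaled == alpha ** 2 * q(x, y, z))  # True
```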

This can have many variables. If we have a lot, we have a couple choices. One is to start using subscripts, and to write the form something like:

$q = \sum_{i = 1}^n \sum_{j = 1}^n a_{i, j} x_i x_j$

This is respectable enough. People who do a lot of differential geometry get used to a shortcut, the Einstein Summation Convention. In that, we take as implicit the summation instructions. So they’d write the more compact $q = a_{i, j} x_i x_j$. Those of us who don’t do a lot of differential geometry think that looks funny. And we have more familiar ways to write things down. Like, we can put the collection of variables $x_1, x_2, x_3, \cdots x_n$ into an ordered n-tuple. Call it the vector $\vec{x}$. If we then think to put the numbers $a_{i, j}$ into a square matrix we have a great way of writing things. We have to manipulate the $a_{i, j}$ a little to make the matrix, but it’s nothing complicated. Once that’s done we can write the quadratic form as:

$q_A = \vec{x}^T A \vec{x}$

This uses matrix multiplication. The vector $\vec{x}$ we assume is a column vector, a bunch of rows one column across. Then we have to take its transposition, one row a bunch of columns across, to make the matrix multiplication work out. If we don’t like that notation with its annoying superscripts? We can declare the bare ‘x’ to mean the vector, and use inner products:

$q_A = (x, Ax)$
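To see the matrix and polynomial versions agree, here is a hand-rolled check for $x^2 + 2xy + y^2$. The symmetric matrix puts the squared terms on the diagonal and splits the cross term 2xy evenly between the two off-diagonal entries:

```python
# A small check that the matrix expression q = x^T A x reproduces the
# polynomial x^2 + 2xy + y^2, whose symmetric matrix is [[1, 1], [1, 1]].

A = [[1, 1],
     [1, 1]]

def q_matrix(vec):
    """Compute x^T A x by hand, without any linear-algebra library."""
    n = len(vec)
    return sum(vec[i] * A[i][j] * vec[j] for i in range(n) for j in range(n))

def q_poly(vec):
    x, y = vec
    return x ** 2 + 2 * x * y + y ** 2

for vec in [(1, 0), (0, 1), (2, 3), (-1, 4)]:
    assert q_matrix(vec) == q_poly(vec)
print("matrix form and polynomial agree")
```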

This is easier to type at least. But what does it get us?

Looking at some quadratic forms may give us an idea. $x^2 + y^2$ practically begs to be matched to an $= r^2$, and the name “the equation of a circle”. $x^2 - y^2$ is less familiar, but to the crowd reading this, not much less familiar. Fill that out to $x^2 - y^2 = C$ and we have a hyperbola. If we have $x^2 + 2y^2$ and let that $= C$ then we have an ellipse, something a bit wider than it is tall. Similarly $\frac{1}{4}x^2 - 2y^2 = C$ is a hyperbola still, just anamorphic.

If we expand into three variables we start to see spheres: $x^2 + y^2 + z^2$ just begs to equal $r^2$. Or ellipsoids: $x^2 + 2y^2 + 10z^2$, set equal to some (positive) $C$, is something we might get from rolling out clay. Or hyperboloids: $x^2 + y^2 - z^2$ or $x^2 - y^2 - z^2$, set equal to $C$, give us nice shapes. (We can also get cylinders: $x^2 + z^2$ equalling some positive number describes a tube.)

How about $x^2 - xy + y^2$? This also wants to be an ellipse. $x^2 - xy + y^2 = 3$, to pick an easy number, is a rotated ellipse. The long axis is along the line described by $y = x$. The short axis is along the line described by $y = -x$. How about — let me make this easy. $xy$? The equation $xy = C$ describes a hyperbola, but a rotated one, with the x- and y-axes as its asymptotes.

Do you want to take any guesses about three-dimensional shapes? Like, what $x^2 - xy + y^2 + 6z^2$ might represent? If you’re thinking “ellipsoid, only it’s at an angle” you’re doing well. It runs really long in one direction, along the plane described by $y = x$. It runs medium-size along the plane described by $y = -x$. It runs pretty short along the z-axis. We could run some more complicated shapes. Ellipses pointing in weird directions. Hyperboloids of different shapes. They’ll have things in common.

One is that they have obviously important axes. Axes of symmetry, particularly. There’ll be one for each dimension of space. An ellipse has a long axis and a short axis. An ellipsoid has a long, a middle, and a short. (It might be that two of these have the same length. If all three have the same length, you have a sphere, my friend.) A hyperbola, similarly, has two axes of symmetry. One of them runs midway between the two branches of the hyperbola. One of them slices through the two branches, through the points where the two legs come closest together. Hyperboloids, in three dimensions, have three axes of symmetry. One of them connects the points where the two branches of the hyperboloid come closest together. The other two run perpendicular to that.

We can go on imagining more dimensions of space. We don’t need them. The important things are already there. There are, for these shapes, some preferred directions. The ones around which these quadratic-form shapes have symmetries. These directions are perpendicular to each other. These preferred directions are important. We call them “eigenvectors”, a partly-German name.
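For a two-variable form the eigenvectors can be found by hand. A sketch for $x^2 - xy + y^2$, using the closed-form eigenvalues of a 2-by-2 symmetric matrix; the smaller eigenvalue belongs to the longer axis:

```python
import math

# The symmetry axes of x^2 - xy + y^2 = C, found as eigenvectors of its
# matrix A = [[1, -1/2], [-1/2, 1]]. For a 2x2 symmetric matrix the
# eigenvalues have a closed form; no linear-algebra library needed.

a, b, c = 1.0, -0.5, 1.0   # A = [[a, b], [b, c]]

mean = (a + c) / 2
spread = math.hypot((a - c) / 2, b)
lam_small = mean - spread   # 0.5, the long axis of the ellipse
lam_big = mean + spread     # 1.5, the short axis

# An eigenvector for eigenvalue lam of [[a, b], [b, c]] is (b, lam - a).
v_small = (b, lam_small - a)   # proportional to (1, 1): the line y = x
v_big = (b, lam_big - a)       # proportional to (1, -1): the line y = -x

print(lam_small, lam_big)   # 0.5 1.5
```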

Eigenvectors are great for a bunch of purposes. One is that if the matrix A represents a problem you’re interested in? The eigenvectors are probably a great basis for solving problems in it. This change of basis vectors is the same work as doing a rotation. And I’m happy to report this change of coordinates doesn’t mess up the problem any. We can rewrite the problem to be easier.

And, roughly, any time we look at reflections in a Euclidean space, there’s a quadratic form lurking around. This leads us into interesting places. Looking at reflections encourages us to see abstract algebra, to see groups. That space can be rotated in infinitesimally small pieces gets us a structure named a Lie (pronounced ‘lee’) algebra. Quadratic forms give us a way of classifying those.

Quadratic forms work in number theory also. There’s a neat theorem, the 15 Theorem. If a quadratic form, with integer coefficients, can produce all the integers from 1 through 15, then it can produce all positive integers. For example, $x^2 + y^2 + z^2 + w^2$ can, for sets of integers x, y, z, and w, add up to any positive integer you like. (It’s not guaranteed this will happen. $x^2 + 2y^2 + 5z^2 + 5w^2$ can’t produce 15.) We know of at least 54 combinations which generate all the positive integers, like $x^2 + y^2 + 2z^2 + 14w^2$ and $x^2 + 2y^2 + 3z^2 + 5w^2$ and such.
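Both claims in that parenthetical are small enough to brute-force. A sketch:

```python
# Brute-force check of the two claims: x^2 + y^2 + z^2 + w^2 hits every
# integer from 1 through 15, while x^2 + 2y^2 + 5z^2 + 5w^2 misses 15.

def representable(coeffs, target, bound=8):
    """Can coeffs[0]x^2 + coeffs[1]y^2 + coeffs[2]z^2 + coeffs[3]w^2
    equal target, for nonnegative integers up to bound?"""
    r = range(bound + 1)
    return any(
        coeffs[0] * x * x + coeffs[1] * y * y
        + coeffs[2] * z * z + coeffs[3] * w * w == target
        for x in r for y in r for z in r for w in r
    )

assert all(representable((1, 1, 1, 1), n) for n in range(1, 16))
assert not representable((1, 2, 5, 5), 15)
print("four squares hit 1 through 15; the other form misses 15")
```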

There’s more, of course. There always is. I spent time skimming Quadratic Forms and their Applications, Proceedings of the Conference on Quadratic Forms and their Applications. It was held at University College Dublin in July of 1999. It’s some impressive work. I can think of very little that I can describe. Even Winfried Scharlau’s On the History of the Algebraic Theory of Quadratic Forms, from page 229, is tough going. Ina Kersten’s Biography of Ernst Witt, one of the major influences on quadratic forms, is accessible. I’m not sure how much of the particular work communicates.

It’s easy at least to know what things this field is about, though. The things that we calculate. That they connect to novel and abstract places shows how close together arithmetic and dynamical systems and topology and group theory and number theory are, despite appearances.

Thanks for reading this. Today’s and all the other 2020 A-to-Z essays should be at this link. Both the All-2020 and past A-to-Z essays should be at this link. And I am looking for letter S, T, and U topics for the coming weeks. I’m grateful for your thoughts.

## Using my A to Z Archives: Platonic Solid

And in last year’s A-to-Z I published one of those essays already becoming a favorite. I haven’t had much chance to link back to it. So let me fix that. My 2019 Mathematics A To Z: Platonic focuses on the Platonic Solids, and questions like why we might find them interesting. Also, what Platonic solids look like in spaces of other than three dimensions. Three-dimensional space has five Platonic solids. There are six Platonic Solids in four dimensions. How many would you expect in a five-dimensional space? Or a ten-dimensional one? The answer may surprise you!

## My All 2020 Mathematics A to Z: Möbius Strip

Jacob Siehler suggested this topic. I had to check several times that I hadn’t written an essay about the Möbius strip already. While I have talked about it some, mostly in comic strip essays, this is a chance to specialize on the shape in a way I haven’t before.

# Möbius Strip.

I have ridden at least 252 different roller coasters. These represent nearly every type of roller coaster made today, and most of the types that were ever made. One type, common in the 1920s and again since the 70s, is the racing coaster. This is two roller coasters, dispatched at the same time, following tracks that are as symmetric as the terrain allows. Want to win the race? Be in the train with the heavier passenger load. The difference in the time each train takes amounts to losses from friction, and the lighter train will lose a bit more of its speed.

There are three special wooden racing coasters. These are Racer at Kennywood Amusement Park (Pittsburgh), Grand National at Blackpool Pleasure Beach (Blackpool, England), and Montaña Rusa at La Feria Chapultepec Magico (Mexico City). I’ve been able to ride them all. When you get into the train going up, say, the left lift hill, you return to the station in the train that will go up the right lift hill. These racing roller coasters have only one track. The track twists around itself and becomes a Möbius strip.

This is a fun use of the Möbius strip. The shape is one of the few bits of advanced mathematics to escape into pop culture. Maybe dominates it, in a way nothing but the blackboard full of calculus equations does. In 1958 the public intellectual and game show host Clifton Fadiman published the anthology Fantasia Mathematica. It’s all essays and stories and poems with some mathematical element. I no longer remember how many of the pieces were about the Möbius strip one way or another. The collection does include A J Deutsch’s classic A Subway Named Möbius. In this story the Boston subway system achieves hyperdimensional complexity. It does not become a Möbius strip, though, in that story. It might be one in reality anyway.

The Möbius strip we name for August Ferdinand Möbius, who in 1858 was the second person known to have noticed the shape’s curious properties. The first — to notice, in 1858, and to publish, in 1862 — was Johann Benedict Listing. Listing seems to have coined the term “topology” for the field that the Möbius strip would be emblem for. He wrote one of the first texts on the field. He also seems to have coined terms like “entrophic phenomena” and “nodal points” and “geoid” and “micron”, for a millionth of a meter. It’s hard to say why we don’t talk about Listing strips instead. Mathematical fame is a strange, unpredictable creature. There is a topological invariant, the Listing Number, named for him. And he’s known to ophthalmologists for Listing’s Law, which describes how human eyes orient themselves.

The Möbius strip is an easy thing to construct. Loop a ribbon back to itself, with an odd number of half-twists before you fasten the ends together. Anyone could do it. So it seems curious that for all recorded history nobody thought to try. Not until 1858, when Listing and then Möbius hit on the same idea.

An irresistible thing, while riding these roller coasters, is to try to find the spot where you “switch”, where you go from being on the left track to the right. You can’t. The track is — well, the track is a series of metal straps bolted to a base of wood. (The base the straps are bolted to is what makes it a wooden roller coaster. The great lattice holding the tracks above ground has nothing to do with it.) But the path of the tracks is a continuous whole. To split it requires the same arbitrariness with which mapmakers pick a prime meridian. It’s obvious that the “longitude” of a cylinder or a rubber ball is arbitrary. It’s not obvious that roller coaster tracks should have the same property. Until you draw the shape in that ∞-loop figure we always see. Then you can get lost imagining a walk along the surface.

And it’s not true that nobody thought to try this shape before 1858. Julyan H E Cartwright and Diego L González wrote a paper searching for pre-Möbius strips. They find some examples. To my eye not enough examples to support their abstract’s claim of “lots of them”, but I trust they did not list every example. One example is a Roman mosaic showing Aion, the God of Time, Eternity, and the Zodiac. He holds a zodiac ring that is either a Möbius strip or cylinder with artistic errors. Cartwright and González are convinced. I’m reminded of a Looks Good On Paper comic strip that forgot to include the needed half-twist.

Islamic science gives us a more compelling example. We have a book by Ismail al-Jazari dated 1206, The Book of Knowledge of Ingenious Mechanical Devices. Some manuscripts of it illustrate a chain pump, with the chain arranged as a Möbius strip. Cartwright and González also note discussions in Scientific American, and other engineering publications in the United States, about drive and conveyor belts with the Möbius strip topology. None of those predate Listing or Möbius, or apparently credit either. And they do come quite soon after. It’s surprising something might leap from abstract mathematics to Yankee ingenuity that fast.

If it did. It’s not hard to explain why mechanical belts didn’t consider Möbius strip shapes before the late 19th century. Their advantage is that the wear of the belt distributes over twice the surface area, the “inside” and “outside”. A leather belt has a smooth and a rough side. Many other things you might make a belt from have a similar asymmetry. By the late 19th century you could make a belt of rubber. Its grip and flexibility and smoothness is uniform on all sides. “Balancing” the use suddenly could have a point.

I still find it curious almost no one drew or speculated about or played with these shapes until, practically, yesterday. The shape doesn’t seem far away from a trefoil knot. The recycling symbol, three folded-over arrows, suggests a Möbius strip. The strip evokes the ∞ symbol, although that symbol was not attached to the concept of “infinity” until John Wallis put it forth in 1655.

Even with the shape now familiar, and loved, there are curious gaps. Consider game design. If you play on a board that represents space you need to do something with the boundaries. The easiest is to make the boundaries the edges of playable space. The game designer has choices, though. If a piece moves off the board to the right, why not have it reappear on the left? (And, going off to the left, reappear on the right.) This is fine. It gives the game board, a finite rectangle, the topology of a cylinder. If this isn’t enough? Have pieces that go off the top edge reappear at the bottom, and vice-versa. Doing this, along with matching the left to the right boundaries, makes the game board a torus, a doughnut shape.

A Möbius strip is easy enough to code. Make the top and bottom impenetrable borders. And match the left to the right edges this way: a piece going off the board at the upper half of the right edge reappears at the lower half of the left edge. Going off the lower half of the right edge brings the piece to the upper half of the left edge. And so on. It isn’t hard, but I’m not aware of any game — board or computer — that uses this space. Maybe there’s a backgammon variant which does.
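That wrap rule is only a few lines of code. A sketch, with made-up board dimensions and 0-based coordinates:

```python
# A sketch of the Mobius board rule described above, for a board W
# columns wide and H rows tall. Top and bottom are walls; a piece
# leaving the right edge at row r re-enters the left edge at row
# H - 1 - r, which flips the upper half to the lower half.

def wrap(col, row, W, H):
    """Return the piece's new square, or None if it hit a wall."""
    if row < 0 or row >= H:
        return None                     # impenetrable top and bottom
    if col >= W:                        # off the right edge...
        return (col - W, H - 1 - row)   # ...onto the left, row flipped
    if col < 0:                         # off the left edge, mirrored rule
        return (col + W, H - 1 - row)
    return (col, row)

W, H = 8, 8
print(wrap(8, 1, W, H))   # (0, 6): upper right re-enters at lower left
print(wrap(-1, 6, W, H))  # (7, 1)
print(wrap(3, 8, W, H))   # None: blocked at the top wall
```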

Still, the strip defies our intuition. It has one face and one edge. To reflect a shape across the width of the strip is the same as sliding a shape along its length. Cutting the strip down the center unfurls it into a cylinder. Cutting the strip down, one-third of the way from the edge, divides it into two pieces, a skinnier Möbius strip plus a cylinder. If we could extract the edge we could tug and stretch it until it was a circle.

And it primes our intuition. Once we understand there can be shapes lacking sides we can look for more. Anyone likely to read a pop mathematics blog about the Möbius strip has heard of the Klein bottle. This is a surface that folds back on itself, and needs a fourth dimension of space to do so without passing through itself. The shape is a jug with no inside, or with nothing but inside. Three-dimensional renditions of this get suggested as gifts to mathematicians. This for your mathematician friend who’s already got a Möbius scarf.

Though a Möbius strip looks — at any one spot — like a plane, the four-color map theorem doesn’t hold for it. Even the five-color theorem won’t do. You need six colors to cover maps on such a strip. A checkerboard drawn on a Möbius strip can be completely covered by T-shape pentominoes or Tetris pieces. You can’t do this for a checkerboard on the plane. In the mathematics of music theory the organization of dyads — two-tone “chords” — has the structure of a Möbius strip. I do not know music theory or the history of music theory. I’m curious whether Möbius strips might have been recognized by musicians before the mathematicians caught on.

And they inspire some practical inventions. Mechanical belts are obvious, although I don’t know how often they’re used. More clever are designs for resistors that have no self-inductance. They can resist electric flow without causing magnetic interference. I can look up the patents; I can’t swear to how often these are actually used. There exist — there are made — Möbius aromatic compounds. These are organic compounds with rings of carbon and hydrogen. I do not know a use for these. That they’ve only been synthesized this century, rather than found in nature, suggests they are more neat than practical.

Perhaps this shape is most useful as a path into a particular type of topology, and for its considerable artistry. And, with its “late” discovery, a reminder that we do not yet know all that is obvious. That is enough for anything.

There are three steel roller coasters with a Möbius strip track. That is, coasters whose metal rails are braced directly by metal, and whose track forms a Möbius strip. One of these is in France, one in Italy, and one in Iran. One in Liaoning, China, has been under construction for five years. I can’t say when it might open. I have yet to ride any of them.

This and all the other 2020 A-to-Z essays should be at this link. Both the 2020 and all past A-to-Z essays should be at this link. I am hosting the Playful Math Education Blog Carnival at the end of September, so appreciate any educational or recreational or simply fun mathematics material you know about. And, goodness, I’m actually overdue to ask for topics for the letters P through R; I’ll have a post for that tomorrow, I hope. Thank you for your reading and your help.

## Meanwhile, in sandwich news

This is a slight thing that crossed my reading yesterday. You might enjoy. The question is a silly one: what’s the “optimal” way to slice banana onto a peanut-butter-and-banana sandwich?

Here’s Ethan Rosenthal’s answer. The specific problem this is put to is silly. The optimal peanut butter and banana sandwich is the one that satisfies your desire for a peanut butter and banana sandwich. However, the approach to the problem demonstrates good mathematics, and numerical mathematics, practices. Particularly it demonstrates defining just what your problem is, and what you mean by “optimal”, and how you can test that. And then developing a numerical model which can optimize it.

And the specific question, how much of the sandwich can you cover with banana slices, is one of actual interest. A good number of ideas in analysis involve thinking of cover sets: what is the smallest collection of these things which will completely cover this other thing? Concepts like this give us an idea of how to define area, also, as the smallest number of standard reference shapes which will cover the thing we’re interested in. The basic problem is practical too: if we wish to provide something, and have units like this which can cover some area, how can we arrange them so as to miss as little as possible? Or use as few of the units as possible?
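The covering question can be made concrete. A standard greedy heuristic, sketched here on made-up sets, repeatedly takes whichever unit covers the most still-uncovered elements; it is only an approximation, since finding the true minimum cover is computationally hard:

```python
def greedy_cover(universe, subsets):
    """Pick subsets until the universe is covered, each time taking
    the one that covers the most still-uncovered elements.
    Returns the indices of the chosen subsets."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        i = max(range(len(subsets)),
                key=lambda j: len(uncovered & set(subsets[j])))
        if not uncovered & set(subsets[i]):
            raise ValueError("universe cannot be covered")
        chosen.append(i)
        uncovered -= set(subsets[i])
    return chosen
```

For instance, covering `{1, 2, 3, 4, 5}` with the pieces `{1,2,3}`, `{2,4}`, `{3,4}`, `{4,5}` picks the first and last pieces, two units instead of three.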

## My All 2020 Mathematics A to Z: K-Theory

I should have gone with Vayuputrii’s proposal that I talk about the Kronecker Delta. But both Jacob Siehler and Mr Wu proposed K-Theory as a topic. It’s a big and an important one. That was compelling. It’s also a challenging one. This essay will not teach you K-Theory, or even get you very far in an introduction. It may at least give some idea of what the field is about.

# K-Theory.

This is a difficult topic to discuss. It’s an important theory. It’s an abstract one. The concrete examples are either too common to look interesting or are already deep into things like “tangent bundles of $S^{n - 1}$”. There are people who find tangent bundles quite familiar concepts. My blog will not be read by a thousand of them this month. Those who are familiar with the legends grown around Alexander Grothendieck will nod on hearing he was a key person in the field. Grothendieck was of great genius, and also spectacular indifference to practical mathematics. Allegedly he once, pressed to apply something to a particular prime number for an example, proposed 57, which is not prime. (One does not need to be a genius to make a mistake like that. If I proposed 447 or 449 as prime numbers, how long would you need to notice I was wrong?)

K-Theory predates Grothendieck. Now that we know it’s a coherent mathematical idea we can find elements leading to it going back to the 19th century. One important theorem has Bernhard Riemann’s name attached. Henri Poincaré contributed early work too. Grothendieck did much to give the field a particular identity. Also a name, the K coming from the German Klasse. Grothendieck pioneered what we now call Algebraic K-Theory, working on the topic as a field of abstract algebra. There is also a Topological K-Theory, early work on which we thank Michael Atiyah and Friedrich Hirzebruch for. Topology is, popularly, thought of as the mathematics of flexible shapes. It is, but we get there from thinking about relationships between sets, and these are the topologies of K-Theory. We understand these now as different ways of understanding structures.

Still, one text I found described (topological) K-Theory as “the first generalized cohomology theory to be studied thoroughly”. I remember how much handwaving I had to do to explain what a cohomology is. The subject looks intimidating because of the depth of technical terms. Every field is deep in technical terms, though. These look more rarefied because we haven’t talked much, or deeply, about the right kinds of algebra and topology.

You find at the center of K-Theory either “coherent sheaves” or “vector bundles”. Which alternative depends on whether you prefer Algebraic or Topological K-Theory. Both alternatives are ways to encode information about the space around a shape. Let me talk about vector bundles because I find that easier to describe. Take a shape, anything you like. A closed ribbon. A torus. A Möbius strip. Draw a curve on it. Every point on that curve has a tangent plane, the plane that just touches your original shape, and that’s guaranteed to touch your curve at one point. What are the directions you can go in that plane? That collection of directions is a fiber bundle — a tangent bundle — at that point. (As ever, do not use this at your thesis defense for algebraic topology.)

Now: what are all the tangent bundles for all the points along that curve? Does their relationship tell you anything about the original curve? The question is leading. If their relationship told us nothing, this would not be a subject anyone studies. If you pick a point on the curve and look at its tangent bundle, and you move that point some, how does the tangent bundle change?

If we start with the right sorts of topological spaces, then we can get some interesting sets of bundles. What makes them interesting is that we can form them into a ring. A ring means that we have a set of things, and an operation like addition, and an operation like multiplication. That is, the collection of things works somewhat like the integers do. This is a comfortable familiar behavior after pondering too much abstraction.

Why create such a thing? The usual reasons. Often it turns out calculating something is easier on the associated ring than it is on the original space. What are we looking to calculate? Typically, we’re looking for invariants. Things that are true about the original shape whatever ways it might be rotated or stretched or twisted around. Invariants can be things as basic as “the number of holes through the solid object”. Or they can be as ethereal as “the total energy in a physics problem”. Unfortunately if we’re looking at invariants that familiar, K-Theory is probably too much overhead for the problem. I confess to feeling overwhelmed by trying to learn enough to say what it is for.

There are some big things which it seems well-suited to do. K-Theory describes, in its way, how the structure of a set of items affects the functions it can have. This links it to modern physics. The great attention-drawing topics of 20th century physics were quantum mechanics and relativity. They still are. The great discovery of 20th century physics has been learning how much of it is geometry. How the shape of space affects what physics can be. (Relativity is the accessible reflection of this.)

And so K-Theory comes to our help in string theory. String theory exists in that grand unification where mathematics and physics and philosophy merge into one. I don’t toss philosophy into this as an insult to philosophers or to string theoreticians. Right now it is very hard to think of ways to test whether a particular string theory model is true. We instead ponder what kinds of string theory could be true, and how we might someday tell whether they are. When we ask what things could possibly be true, and how to tell, we are working for the philosophy department.

My reading tells me that K-Theory has been useful in condensed matter physics. That is, when you have a lot of particles and they interact strongly. When they act like liquids or solids. I can’t speak from experience, either on the mathematics or the physics side.

I can talk about an interesting mathematical application. It’s described in detail in section 2.3 of Allen Hatcher’s text Vector Bundles and K-Theory, here. It comes about from consideration of the Hopf invariant, named for Heinz Hopf for what I trust are good reasons. It also comes from consideration of homomorphisms. A homomorphism is a matching between two sets of things that preserves their structure. This has a precise definition, but I can make it casual. Have you noticed that every (American, hourlong) late-night chat show is basically the same? The host at his desk, the jovial band leader, the monologue, the show rundown? Two guests and a band? (At least in normal times.) Then you have noticed the homomorphism between these shows. A mathematical homomorphism is more about preserving the products of multiplication. Or it preserves the existence of a thing called the kernel. That is, you can match up elements and how the elements interact.
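For a concrete instance of structure-preserving, clock arithmetic gives a homomorphism from the integers under addition to the hours on a twelve-hour clock face; a quick sketch:

```python
# The map f(x) = x % 12 matches ordinary integer addition with
# clock-face addition: adding first and then reducing mod 12 gives
# the same answer as reducing each number and then adding (mod 12).
def f(x):
    return x % 12

# Spot-check the structure-preserving property on a grid of values.
for a in range(-30, 30):
    for b in range(-30, 30):
        assert f(a + b) == (f(a) + f(b)) % 12
```

The elements match up (every integer lands on some hour) and the interaction, addition, carries across the matching. That is the whole idea, stripped of generality.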

What’s important is Adams’ Theorem of the Hopf Invariant. I’ll write this out (quoting Hatcher) to give some taste of K-Theory:

The following statements are true only for n = 1, 2, 4, and 8:
a. $R^n$ is a division algebra.
b. $S^{n - 1}$ is parallelizable, i.e., there exist n – 1 tangent vector fields to $S^{n - 1}$ which are linearly independent at each point, or in other words, the tangent bundle to $S^{n - 1}$ is trivial.

This is, I promise, low on jargon. “Division algebra” is familiar to anyone who did well in abstract algebra. It means a ring where every element, except for zero, has a multiplicative inverse. That is, division exists. “Linearly independent” is also a familiar term, to the mathematician. Almost every subject in mathematics has a concept of “linearly independent”. The exact definition varies but it amounts to the set of things having neither redundant nor missing elements.

The proof from there sprawls out over a bunch of ideas. Many of them I don’t know. Some of them are simple. The conditions on the Hopf invariant, all that $S^{n - 1}$ stuff, eventually turn into finding the values of n for which $2^n$ divides $3^n - 1$. There are only three values of ‘n’ that do that: 1, 2, and 4. For example.
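That divisibility condition, at least, is easy to check by brute force; a few lines (the bound of 1000 is arbitrary) turn up only those three values:

```python
# For which n does 2^n divide 3^n - 1?  Check every n up to 1000.
hits = [n for n in range(1, 1001) if (3**n - 1) % 2**n == 0]
print(hits)  # prints [1, 2, 4]
```

This is not a proof that no larger n works, of course, but the pattern is suggestive: for large n, the power of 2 demanded grows far faster than the power of 2 actually dividing $3^n - 1$.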

What all that tells us is that if you want to do something like division on ordered sets of real numbers you have only a few choices. You can have a single real number, $R^1$. Or you can have an ordered pair, $R^2$. Or an ordered quadruple, $R^4$. Or you can have an ordered octuple, $R^8$. And that’s it. Not that other ordered sets can’t be interesting. They will all diverge far enough from the way real numbers work that you can’t do something that looks like division.

And now we come back to the running theme of this year’s A-to-Z. Real numbers are real numbers, fine. Complex numbers? We have some ways to understand them. One of them is to match each complex number with an ordered pair of real numbers. We have to define a more complicated multiplication rule than “first times first, second times second”. This rule is the rule implied if we come to $R^2$ through this avenue of K-Theory. We get this matching between real numbers and the first great expansion on real numbers.

The next great expansion of complex numbers is the quaternions. We can understand them as ordered quartets of real numbers. That is, as $R^4$. We need to make our multiplication rule a bit fussier yet to do this coherently. Guess what fuss we’d expect coming through K-Theory?

$R^8$ seems the odd one out; who does anything with that? There is a set of numbers that neatly matches this ordered set of octuples. It’s called the octonions, sometimes called the Cayley Numbers. We don’t work with them much. We barely work with quaternions, as they’re a lot of fuss. Multiplication on them doesn’t even commute. (They’re very good for understanding rotations in three-dimensional space. You can also use them as vectors. You’ll do that if your programming language supports quaternions already.) Octonions are more challenging. Not only does their multiplication not commute, it’s not even associative. That is, if you have three octonions — call them p, q, and r — you can expect that p times the product of q-and-r would be different from the product of p-and-q times r. Real numbers don’t work like that. Complex numbers or quaternions don’t either.
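The escalating fussiness of these multiplication rules can be watched directly. The Cayley-Dickson construction builds each number system out of ordered pairs of the previous one, and a small sketch of it (one of several sign conventions; nested tuples as a deliberately naive representation) shows commutativity failing at the quaternions and associativity failing at the octonions:

```python
# Numbers are nested pairs; plain ints sit at the bottom.  Doubling
# this way gives complexes from reals, quaternions from complexes,
# and octonions from quaternions.

def conj(x):
    if isinstance(x, tuple):
        a, b = x
        return (conj(a), neg(b))
    return x

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    if isinstance(x, tuple):
        return (add(x[0], y[0]), add(x[1], y[1]))
    return x + y

def mul(x, y):
    if isinstance(x, tuple):
        a, b = x
        c, d = y
        # One standard doubling rule:
        # (a, b)(c, d) = (ac - conj(d) b, d a + b conj(c))
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(d, a), mul(b, conj(c))))
    return x * y

# Build units level by level.
c0, c1, ci = (0, 0), (1, 0), (0, 1)   # complex 0, 1, and i
q0, q1 = (c0, c0), (c1, c0)           # quaternion 0 and 1
qi, qj = (ci, c0), (c0, c1)
qk = mul(qi, qj)

# Quaternions: multiplication no longer commutes (ij = -ji).
assert mul(qi, qj) == neg(mul(qj, qi))

# Octonions: pairs of quaternions; multiplication stops associating.
e1, e2, e4 = (qi, q0), (qj, q0), (q0, q1)
assert mul(mul(e1, e2), e4) != mul(e1, mul(e2, e4))
```

The same doubling that buys each new system costs it a property: ordering at the complexes, commutativity at the quaternions, associativity at the octonions.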

Octonions let us have a meaningful division, so we could write out $p \div q$ and know what it meant. We won’t see that for any bigger ordered set of $R^n$. And K-Theory is one of the tools which tells us we may stop looking.

This is hardly the last word in the field. It’s barely the first. It is at least an understandable one. The abstractness of the field works against me here. It does offer some compensations. Broad applicability, for example; a theorem tied to few specific properties will work in many places. And pure aesthetics too. Much work, in statements of theorems and their proofs, involves lovely diagrams. You’ll see great lattices of sets relating to one another. They’re linked by chains of homomorphisms. And, in further aesthetics, beautiful words strung into lovely sentences. You may not know what it means to say “Pontryagin classes also detect the nontorsion in $\pi_k(SO(n))$ outside the stable range”. I know I don’t. I do know when I hear a beautiful string of syllables and that is a joy of mathematics never appreciated enough.

Thank you for reading. The All 2020 A-to-Z essays should be available at this link. The essays from every A-to-Z sequence, 2015 to present, should be at this link. And I am still open for M, N, and O essay topics. Thanks for your attention.

## My All 2020 Mathematics A to Z: Michael Atiyah

To start this year’s great glossary project Mr Wu, author of the MathTuition88.com blog, had a great suggestion: The Atiyah-Singer Index Theorem. It’s an important and spectacular piece of work. I’ll explain why I’m not doing that in a few sentences.

Mr Wu pointed out that a biography of Michael Atiyah, one of the authors of this theorem, might be worth doing. GoldenOj endorsed the biography idea, and the more I thought it over the more I liked it. I’m not able to do a true biography, something that goes to primary sources and finds a convincing story of a life. But I can sketch out a bit, exploring his work and why it’s of note.

# Michael Atiyah.

Theodore Frankel’s The Geometry of Physics: An Introduction is a wonderful book. It’s 686 pages, including the index. It all explores how our modern understanding of physics is our modern understanding of geometry. On page 465 it offers this:

The Atiyah-Singer index theorem must be considered a high point of geometrical analysis of the twentieth century, but is far too complicated to be considered in this book.

I know when I’m licked. Let me attempt to look at one of the people behind this theorem instead.

The Riemann Hypothesis is about where to find the roots of a particular infinite series. It’s been out there, waiting for a solution, for a century and a half. There are many interesting results which we would know to be true if the Riemann Hypothesis is true. In 2018, Michael Atiyah declared that he had a proof. And, more, an amazing proof, a short proof. Albeit one that depended on a great deal of background work and careful definitions. The mathematical community was skeptical. It still is. But it did not dismiss outright the idea that he had a solution. It was plausible that Atiyah might solve one of the greatest problems of mathematics in something that fits on a few PowerPoint slides.

So think of a person who commands such respect.

His proof of the Riemann Hypothesis, as best I understand, is not generally accepted. For example, it includes the fine structure constant. This comes from physics. It describes how strongly electrons and photons interact. The most compelling (to us) consequence of the Riemann Hypothesis is in how prime numbers are distributed among the integers. It’s hard to think how photons and prime numbers could relate. But, then, if humans had done all of mathematics without noticing geometry, we would know there is something interesting about π. Differential equations, if nothing else, would turn up this number. We happened to discover π in the real world first too. If it were not familiar for so long, would we think there should be any commonality between differential equations and circles?

I do not mean to say Atiyah is right and his critics wrong. I’m no judge of the matter at all. What is interesting is that one could imagine a link between a pure number-theory matter like the Riemann hypothesis and a physical matter like the fine structure constant. It’s not surprising that mathematicians should be interested in physics, or vice-versa. Atiyah’s work was particularly important. Much of his work, from the late 70s through the 80s, was in gauge theory. This subject lies under much of modern quantum mechanics. It’s born of the recognition of symmetries, group operations that you can do on a field, such as the electromagnetic field.

In a sequence of papers Atiyah, with other authors, sorted out particular cases of how magnetic monopoles and instantons behave. Magnetic monopoles may sound familiar, even though no one has ever seen one. These are magnetic points, an isolated north or a south pole without its opposite partner. We can understand well how they would act without worrying about whether they exist. Instantons are more esoteric; I don’t remember encountering the term before starting my reading for this essay. I believe I did, encountering the technique as a way to describe the transitions between one quantum state and another. Perhaps the name failed to stick. I can see where there are few examples you could give an undergraduate physics major. And it turns out that monopoles appear as solutions to some problems involving instantons.

This was, for Atiyah, later work. It arose, in part, from bringing the tools of index theory to nonlinear partial differential equations. This index theory is the thing that got us the Atiyah-Singer Index Theorem too complicated to explain in 686 pages. Index theory, here, studies questions like “what can we know about a differential equation without solving it?” Solving a differential equation would tell us almost everything we’d like to know, yes. But it’s also quite hard. Index theory can tell us useful things like: is there a solution? Is there more than one? How many? And it does this through topological invariants. A topological invariant is a trait like, for example, the number of holes that go through a solid object. These things are indifferent to operations like moving the object, or rotating it, or reflecting it. In the language of group theory, they are invariant under a symmetry.

It’s startling to think a question like “is there a solution to this differential equation” has connections to what we know about shapes. This shows some of the power of recasting problems as geometry questions. From the late 50s through the mid-70s, Atiyah was a key person working in a topic that is about shapes. We know it as K-theory. The “K” from the German Klasse, here. It’s about groups, in the abstract-algebra sense; the things in the groups are themselves classes of isomorphisms. Michael Atiyah and Friedrich Hirzebruch defined this sort of group for a topological space in 1959. And this gave definition to topological K-theory. This is again abstract stuff. Frankel’s book doesn’t even mention it. It explores what we can know about shapes from the tangents to the shapes.

And it leads into cobordism, also called bordism. This is about what you can know about shapes which could be represented as cross-sections of a higher-dimension shape. The iconic, and delightfully named, shape here is the pair of pants. In three dimensions this shape is a simple cartoon of what it’s named. On the one end, it’s a circle. On the other end, it’s two circles. In between, it’s a continuous surface. Imagine the cross-sections, how on separate layers the two circles are closer together. How their shapes distort from a real circle. In one cross-section they come together. They appear as two circles joined at a point. In another, they’re a two-looped figure. In another, a smoother circle. Knowing that Atiyah came from these questions may make his future work seem more motivated.

But how does one come to think of the mathematics of imaginary pants? Many ways. Atiyah’s path came from his first research specialty, which was algebraic geometry. This was his work through much of the 1950s. Algebraic geometry is about the kinds of geometric problems you get from studying algebra problems. Algebra here means the abstract stuff, although it does touch on the algebra from high school. You might, for example, do work on the roots of a polynomial, or a comfortable enough equation like $x^2 + y^2 = 1$. Atiyah had started — as an undergraduate — working on projective geometries. This is what one curve looks like projected onto a different surface. This moved into elliptic curves and into particular kinds of transformations on surfaces. And algebraic geometry has proved important in number theory. You might remember that the Wiles-Taylor proof of Fermat’s Last Theorem depended on elliptic curves. Some work on the Riemann hypothesis is built on algebraic topology.

(I would like to trace things farther back. But the public record of Atiyah’s work doesn’t offer hints. I can find amusing notes like his father asserting he knew he’d be a mathematician. He was quite good at changing local currency into foreign currency, making a profit on the deal.)

It’s possible to imagine this clear line in Atiyah’s career, and why his last works might have been on the Riemann hypothesis. That’s too pat an assertion. The more interesting thing is that Atiyah had several recognizable phases and did iconic work in each of them. There is a cliche that mathematicians do their best work before they are 40 years old. And, it happens, Atiyah did earn a Fields Medal, given to mathematicians for the work done before they are 40 years old. But I believe this cliche represents a misreading of biographies. I suspect that first-rate work is done when a well-prepared mind looks fresh at a new problem. A mathematician is likely to have these traits line up early in the career. Grad school demands the deep focus on a particular problem. Getting out of grad school lets one bring this deep knowledge to fresh questions.

It is easy, in a career, to keep studying problems one has already had great success in, for good reason and with good results. It tends not to keep producing revolutionary results. Atiyah was able — by chance or design I can’t tell — to several times venture into a new field. The new field was one that his earlier work prepared him for, yes. But it posed new questions about novel topics. And this creative, well-trained mind focusing on new questions produced great work. And this is one way to be credible when one announces a proof of the Riemann hypothesis.

Here is something I could not find a clear way to fit into this essay. Atiyah recorded some comments about his life for the Web of Stories site. These are biographical and do not get into his mathematics at all. Much of it is about his life as child of British and Lebanese parents and how that affected his schooling. One that stood out to me was about his peers at Manchester Grammar School, several of whom he rated as better students than he was. Being a good student is not tightly related to being a successful academic. Particularly as so much of a career depends on chance, on opportunities happening to be open when one is ready to take them. It would be remarkable if there were three people of greater talent than Atiyah who happened to be in the same school at the same time. It’s not unthinkable, though, and we may wonder what we can do to give people the chance to do what they are good in. (I admit this assumes that one finds doing what one is good in particularly satisfying or fulfilling.) In looking at any remarkable talent it’s fair to ask how much of their exceptional nature is that they had a chance to excel.

## Reading the Comics, May 29, 2020: Slipping Into Summer More Edition

This is the slightly belated close of last week’s topics suggested by Comic Strip Master Command. For the week we’ve had, I am doing very well.

Werner Wejp-Olsen’s Inspector Danger’s Crime Quiz for the 25th of May sees another mathematician killed, and “identifying” his killer in a dying utterance. Inspector Danger has followed killer mathematicians several times before: the 9th of July, 2012, for instance. Or the 4th of July, 2016, for a case so similar that it’s almost a Slylock Fox six-differences puzzle. Apparently realtors and marine biologists are out for mathematicians’ blood. I’m not surprised by the realtors, but hey, marine biology, what’s the deal? The same gimmick got used the 15th of May, 2017, too. (And in fairness to the late Wejp-Olsen, who could possibly care that similar names are being used in small puzzles used years apart? It only stands out because I’m picking out things that no reasonable person would notice.)

Jim Meddick’s Monty for the 25th has the title character inspired by the legend of genius work done during plague years. A great disruption in life is a great time to build new habits, and if Covid-19 has given you the excuse to break bad old habits, or develop good new ones, great! Congratulations! If it has not, though? That’s great too. You’re surviving the most stressful months of the 21st century, I hope, not taking a holiday.

Anyway, the legend mentioned here includes Newton inventing Calculus while in hiding from the plague. The actual history is more complicated, and ambiguous. (You will not go wrong supposing that the actual history of a thing is more complicated and ambiguous than you imagine.) The Renaissance Mathematicus describes, with greater authority and specificity than I could, what Newton’s work was more like. And some of how we have this legend. This is not to say that the 1660s were not astounding times for Newton, nor to deny that he worked with a rare genius. It’s more that we fool ourselves to imagine that Newton looked around, saw London was even more a deathtrap than usual, and decided to go off to the country and toss out a new and unique understanding of the infinitesimal and the continuum.

Mark Anderson’s Andertoons for the 27th is the Mark Anderson’s Andertoons for the week. One of the students — not Wavehead — worries that a geometric ray, going on forever, could endanger people. There’s some neat business going on here. Geometry, like much mathematics, works on abstractions that we take to be universally true. But it also seems to have a great correspondence to ordinary real-world stuff. We wouldn’t study it if it didn’t. So how does that idealization interact with the reality? If the ray represented by those marks on the board goes on to do something, do we have to take care in how it’s used?

Olivia Jaimes’s Nancy for the 29th is set in a (virtual) arithmetic class. It builds on the conflation between “nothing” and “zero”.

And that wraps up my week in comic strips. I keep all my Reading the Comics posts at this link. I am also hoping to start my All 2020 Mathematics A-to-Z shortly, and am open for nominations for topics for the first couple letters. Thank you for reading.

## Reading the Comics, May 9, 2020: Knowing the Angles Edition

There were a couple more comic strips in the block of time I want to write about. Only one’s got some deeper content and, I admit, I had to work to find it.

Bob Scott’s Bear with Me for the 7th has Bear offering the answer from mathematics class, late.

Jerry Bittle’s Shirley and Sons Classic rerun for the 7th has Louis struggling on an arithmetic test.

Olivia Jaimes’s Nancy for the 8th has Nancy and Sluggo avoiding mathematics homework. Or, “practice”, anyway. There’s more, though; Nancy and Sluggo are doing some analysis of viewing angles. That’s actual mathematics, certainly. Computer-generated imagery depends on it, just like you’d imagine. There are even fun abstract questions that can give surprising insights into numbers. For example: imagine that space were studded, at a regular square grid spacing, with perfectly reflective marbles of uniform size. Is there, then, a line of sight between any two points outside any marbles? Even if it requires tens of millions of reflections; we’re interested in what perfect reflections would give us.
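The basic viewing-angle computation, at least, is a one-liner; here is a sketch for a flat screen viewed face-on from a point on its center line:

```python
import math

def viewing_angle(distance, width):
    """Angle (in radians) that a flat object of the given width
    subtends when viewed face-on from the given distance along its
    center line: each half-width spans atan(width/2 / distance)."""
    return 2 * math.atan(width / (2 * distance))
```

As a sanity check, a screen as wide as twice your distance from it fills a right angle of your view, and moving farther back always shrinks the angle.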

Using playing cards as a makeshift protractor is a creative bit of making do with what you have. The cards spread in a fanfold easily enough and there’s marks on the cards that you can use to keep your measurements reasonably uniform. Creating ad hoc measurement tools like this isn’t mathematics per se. But making a rough tool is a first step to making a precise tool. And you can use reason to improve your estimates.

It’s not on-point, but I did want to share the most wondrous ad hoc tool I know of: you can use an analog clock, and the sun, as a compass. Point the hour hand at the sun; in the northern hemisphere, the direction halfway between the hour hand and 12 is roughly south. You don’t even need a real clock; you can draw the current time on a sheet of paper and use that. It’s not a precise measure, of course. But if you need some help, here you go. You’ve got it.

Tony Rubino and Gary Markstein’s Daddy’s Home for the 9th has Elliot avoiding doing his mathematics homework.

And that’s got the last week covered. Some more comic strips should follow at a link here, soon. And I hope to have some other stuff to announce here, soon.

## Reading the Comics, May 2, 2020: What Is The Cosine Of Six Edition

The past week was a light one for mathematically-themed comic strips. So let’s see if I can’t review what’s interesting about them before the end of this genially dumb movie (1940’s Hullabaloo, starring Frank Morgan and featuring Billie Burke in a small part). It’ll be tough; they’re reaching a point where the characters start acting like they don’t care about the plot either, which is usually the sign they’re in the last reel.

Patrick Roberts’s Todd the Dinosaur for the 26th of April presents mathematics homework as the most dreadful kind of homework.

Jenny Campbell’s Flo and Friends for the 26th is a joke about fumbling a bit of practical mathematics, in this case, cutting a recipe down. When I look into arguments about the metric system, I will sometimes see the claim that English traditional units are advantageous for cutting down a recipe: it’s quite easy to say that half of “one cup” is a half cup, for example. I doubt that this is much easier than working out what half of 500 ml is, and my casual inquiries suggest that nobody has the faintest idea what half of a pint would be. And anyway none of this would help Ruthie’s problem, which is taking two-fifths of a recipe meant for 15 people. … Honestly, I would have just cut it in half and wondered who’s publishing recipes that serve 15.
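Exact fraction arithmetic makes Ruthie’s two-fifths scaling painless, in any system of units. A quick Python sketch, with recipe amounts that are my own inventions for illustration:

```python
from fractions import Fraction

def scale_recipe(quantities, factor):
    """Scale each ingredient amount by an exact fraction."""
    return {name: amount * factor for name, amount in quantities.items()}

# A hypothetical recipe for 15 people; amounts in cups.
recipe = {"flour": Fraction(3), "sugar": Fraction(3, 4), "milk": Fraction(5, 2)}

# Two-fifths of the recipe serves 6 people.
scaled = scale_recipe(recipe, Fraction(2, 5))
print(scaled)  # flour becomes 6/5 cups, sugar 3/10, milk exactly 1
```

No rounding errors, no guessing at what two-fifths of three-quarters of a cup is; the fractions stay exact until you have to measure them out.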

Ed Bickford and Aaron Walther’s American Chop Suey for the 28th uses a panel of (gibberish) equations to represent deep thinking. It’s in part of a story about an origami competition. This interests me because there is serious mathematics to be done in origami. Most of these are geometry problems, as you might expect. The kinds of things you can understand about distance and angles from folding a square may surprise. For example, it’s easy to trisect an arbitrary angle using folded squares. The problem is, famously, impossible for compass-and-straightedge geometry.

Origami offers useful mathematical problems too, though. (In practice, if we need to trisect an angle, we use a protractor.) It’s good to know how to take a flat, or nearly flat, thing and unfold it into a more interesting shape. It’s useful whenever you have something that needs to be transported in as few pieces as possible, but that on site needs to not be flat. And this connects to questions with pleasant and ordinary-seeming names like the map-folding problem: can you fold a large sheet into a small package that’s still easy to open? Often you can. So, the mathematics of origami is a growing field, and one that’s about an accessible subject.

Nate Fakes’s Break of Day for the 29th is the anthropomorphic-symbols joke for the week, with an x talking about its day job in equations and its free time in games like tic-tac-toe.

Bill Holbrook’s On The Fastrack for the 2nd of May also talks about the use of x as a symbol. Curt takes eagerly to the notion that a symbol can represent any number, whether we know what it is or not. And, also, that the choice of symbol is arbitrary; we could use whatever symbol communicates. I remember getting problems to work in which, say, 3 plus a box equals 8 and working out what number in the box would make the equation true. This is exactly the same work as solving 3 + x = 8. Using an empty box made the problem less intimidating, somehow.

Dave Whamond’s Reality Check for the 2nd is, really, a bit baffling. It has a student asking Siri for the cosine of 174 degrees. But it’s not like anyone knows the cosine of 174 degrees off the top of their heads. If the cosine of 174 degrees wasn’t provided in a table for the students, then they’d have to look it up. Well, more likely they’d be provided the cosine of 6 degrees; the cosine of an angle is equal to minus one times the cosine of 180 degrees minus that same angle. This allows table-makers to reduce how much stuff they have to print. Still, it’s not really a joke that a student would look up something that students would be expected to look up.

… That said …

If you know anything about trigonometry, you know the sine and cosine of a 30-degree angle. If you know a bit about trigonometry, and are willing to put in a bit of work, you can start from a regular pentagon and work out the sine and cosine of a 36-degree angle. And, again, if you know anything about trigonometry, you know that there are angle-addition and angle-subtraction formulas. That is, if you know the sines and cosines of two angles, you can work out the cosine of the difference between them.

So, in principle, you could start from scratch and work out the cosine of 6 degrees without using a calculator. And the cosine of 174 degrees is minus one times the cosine of 6 degrees. So it could be a legitimate question to work out the cosine of 174 degrees without using a calculator. I can believe in a mathematics class which has that as a problem. But that requires such an ornate setup that I can’t believe Whamond intended that. Who in the readership would think the cosine of 174 degrees is something to work out by hand? If I hadn’t read a book about spherical trigonometry last month I wouldn’t have thought the cosine of 6 degrees a thing someone could reasonably work out by hand.
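For the curious, the whole derivation fits in a few lines. This sketch (mine, not anything from the strip) uses the exact pentagon values for 36 degrees, the familiar values for 30 degrees, and the angle-subtraction formula:

```python
import math

# Exact values, from the regular pentagon and the 30-60-90 triangle:
cos36 = (1 + math.sqrt(5)) / 4
sin36 = math.sqrt(10 - 2 * math.sqrt(5)) / 4
cos30 = math.sqrt(3) / 2
sin30 = 0.5

# Angle-subtraction formula: cos(36 - 30) = cos36*cos30 + sin36*sin30
cos6 = cos36 * cos30 + sin36 * sin30

# And cos(180 - x) = -cos(x), so:
cos174 = -cos6

print(cos174)                        # about -0.9945
print(math.cos(math.radians(174)))   # agrees, to floating-point precision
```

So the answer Siri hands back really is reachable by hand, if you are the sort of person who keeps the pentagon’s trigonometry in your back pocket.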

I didn’t finish writing before the end of the movie, even though it took about eighteen hours to wrap up ten minutes of story. My love came home from a walk and we were talking. Anyway, this is plenty of comic strips for the week. When there are more to write about, I’ll try to have them in an essay at this link. Thanks for reading.

## Reading the Comics, April 6, 2020: My Perennials Edition

As much as everything is still happening, and so much of it, there are still comic strips. I’m fortunately able here to focus just on the comics that discuss some mathematical theme, so let’s get started in exploring last week’s reading. Worth deeper discussion are the comics that turn up here all the time.

Lincoln Peirce’s Big Nate for the 5th is a casual mention. Nate wants to get out of having to do his mathematics homework. This really could be any subject as long as it fit the word balloon.

John Hambrock’s The Brilliant Mind of Edison Lee for the 6th is a funny-answers-to-story-problems joke. Edison Lee’s answer disregards the actual wording of the question, which supposes the group is travelling at an average 70 miles per hour. The number of stops doesn’t matter in this case.

Mark Anderson’s Andertoons for the 6th is the Mark Anderson’s Andertoons for the week. In it Wavehead gives the “just use a calculator” answer for geometry problems.

Not much to talk about there. But there is a fascinating thing about perimeters that you learn if you go far enough in calculus. You have to get into multivariable calculus, something where you integrate a function that has at least two independent variables. When you do this, you can find the integral evaluated over a curve. If it’s a closed curve, something that loops around back to itself, then you can do something magic. Integrating the correct function on the curve around a shape will tell you the enclosed area.
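In two dimensions this magic is Green’s theorem, and one common choice of function gives the area as one-half the integral of x dy − y dx around the boundary. The shoelace formula is the discrete version of that line integral; here’s a rough Python sketch:

```python
import math

def area_from_boundary(points):
    """Shoelace formula: the discrete version of (1/2) * integral of
    (x dy - y dx) around a closed curve, giving the enclosed area."""
    total = 0.0
    for i in range(len(points)):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % len(points)]  # wrap around to close the curve
        total += x0 * y1 - x1 * y0
    return abs(total) / 2

# Approximate the unit circle by a 10,000-sided polygon.
n = 10_000
circle = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
          for k in range(n)]
print(area_from_boundary(circle))  # very close to pi
```

Only the boundary points go in; the interior never gets touched, which is the whole magic.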

And this is an example of one of the amazing things in multivariable calculus. It tells us that integrals over a boundary can tell us something about the integral within a volume, and vice-versa. It can be worth figuring out whether your integral is better solved by looking at the boundaries or at the interiors.

Heron’s Formula, for the area of a triangle based on the lengths of its sides, is an expression of this calculation. I don’t know of a formula exactly like that for the area of a quadrilateral, but there are similar formulas if you know the lengths of the sides and of the diagonals.
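As a quick illustration (mine, not from the strip), Heron’s formula in Python:

```python
import math

def heron_area(a, b, c):
    """Heron's formula: the area of a triangle from its three side lengths."""
    s = (a + b + c) / 2  # the semiperimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron_area(3, 4, 5))  # 6.0, the classic 3-4-5 right triangle
```

Everything about the interior, the area, recovered from measurements taken along the boundary.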

Richard Thompson’s Cul de Sac rerun for the 6th sees Petey working on his mathematics homework. As with the Big Nate strip, it could be any subject.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 5th depicts, fairly, the sorts of things that excite mathematicians. The number discussed here is about algorithmic complexity. This is the study of how long it takes to do an algorithm. How long always depends on how big a problem you are working on; to sort four items takes less time than sorting four million items. Of interest here is how much the time to do work grows with the size of whatever you’re working on.

The mathematician’s particular example, and I thank dtpimentel in the comments for finding this, is about the Coppersmith–Winograd algorithm. This is a scheme for doing matrix multiplication, a particular kind of multiplication and addition of square arrays of numbers. The arrays have some number N of rows and N columns. It’s thought that there exists some way to do matrix multiplication in the order of N^2 time, that is, if it takes 10 time units to multiply matrices of three rows and three columns together, we should expect it takes 40 time units to multiply matrices of six rows and six columns together. The matrix multiplication you learn in linear algebra takes on the order of N^3 time, so, it would take like 80 time units.

We don’t know the way to do that. The Coppersmith–Winograd algorithm was thought, after Virginia Vassilevska Williams’s work in 2011, to take something like N^2.3728642 steps. So that six-rows-six-columns multiplication would take slightly over 51.796 844 time units. In 2014, François le Gall found it was no worse than N^2.3728639 steps, so this would take slightly over 51.796 833 time units. The improvement doesn’t seem like much, but on tiny problems it never does. On big problems, the improvement’s worth it. And, sometimes, you make a good chunk of progress at once.
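The time-unit arithmetic here is just the base time scaled by the size ratio raised to the algorithm’s exponent. A small Python sketch of that arithmetic, with the numbers from above:

```python
def scaled_time(base_time, size_ratio, exponent):
    """Estimate how an O(N**exponent) algorithm's running time grows
    when the problem size N is multiplied by size_ratio."""
    return base_time * size_ratio ** exponent

# Doubling from 3x3 to 6x6 matrices, starting from 10 time units:
print(scaled_time(10, 2, 3))          # 80.0, schoolbook multiplication
print(scaled_time(10, 2, 2.3728642))  # about 51.7968
print(scaled_time(10, 2, 2.3728639))  # very slightly less
print(scaled_time(10, 2, 2))          # 40.0, the conjectured ideal
```

The gap between the two Coppersmith–Winograd-style exponents only shows up several decimal places in at this size; it’s on matrices with thousands of rows that it starts to pay.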

I’ll have some more comic strips to discuss in an essay at this link, sometime later this week. Thanks for reading.