How Pop-Up Cards and Books Work


Through the Interesting Esoterica postings on Mathstodon I learned of this neat post. Joseph O’Rourke this year published Pop-Up Geometry: The Mathematics Behind Pop-Up Cards. I haven’t got the book (yet), but O’Rourke has a page with animated GIFs showing how basic shapes work. The animations, even without narration, are eye-opening, revealing how to make complicated and curved motions with a single rotating plane and fixed-length attachments. It isn’t properly origami, but the subject is related.

Interesting Esoterica has an abstract for this entry here. An advantage to searching there is the archive of interesting topics, searchable by tags. These sprawl considerably in difficulty: under the tag ‘things to make and do’ is this piece on Pop-Up Geometry, but also a piece on group theory as it applies to laying model train tracks, and a 1959 essay describing how to build a computer out of paper. Or, if you’re looking for a more advanced project, Fibbinary Zippers in a Monoid of Toroidal Hamiltonian Cycles that Generate Hilbert-Style Square-Filling Curves. (This one is closer to the train tracks paper than you might imagine, and you can follow its point just from looking at the pictures.) You’re likely to find something delightful there.

My Little 2021 Mathematics A-to-Z: Addition


John Golden, who so far as I know doesn’t have an active blog, suggested this week’s topic. It pairs nicely with last week’s. I link to that in the text, but if you would like to read all of this year’s Little Mathematics A to Z it should be at this link. And if you’d like to see all of my A-to-Z projects, please try this link. Thank you.

Addition

When I wrote about multiplication I came to the peculiar conclusion that it was the same as addition. This is true only in certain lights. When we study [abstract] algebra we look at things that look like arithmetic. The simplest useful thing that looks like arithmetic is a group. It has a set of elements, and a pairwise “group operation”. That group operation we call multiplication, if we don’t have a better name. We give it two elements and it gives us one. Under certain circumstances, this multiplication looks just like addition does.

But we have reason to think addition and multiplication aren’t the same. Where do we get addition?

We can make a meaningful addition by giving it something to interact with. By adding another operation. This turns the group into a ring. As it has two operations, it’s hard to resist calling one of them addition and the other multiplication. The new multiplication follows many of the rules the addition did. Adding two elements together gives you an element in the ring. So does multiplying. Addition is associative: a + (b + c) is the same thing as (a + b) + c . So is multiplication: a \times (b \times c) is the same thing as (a \times b) \times c .

And then the addition and the multiplication have to interact. If they didn’t, we’d just have a group with two operations. I don’t know anyone who’s found a good use for that. The way addition and multiplication interact we call distribution. This is represented by two rules, both of them depending on elements a, b, and c:

a\times(b + c) = a\times b + a\times c

(a + b)\times c = a\times c + b\times c

This is where we get something we have to call addition. It’s in having the two interacting group operations.
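If you enjoy seeing this checked by brute force, here’s a little Python sketch. The example is mine, not anything from the essay: the integers modulo 6, a ring small enough to test every combination of elements.

```python
from itertools import product

# A toy ring: the integers modulo 6.  (My example; any modulus works.)
N = 6
elements = range(N)

def add(a, b):
    return (a + b) % N

def mul(a, b):
    return (a * b) % N

for a, b, c in product(elements, repeat=3):
    # Both operations are associative ...
    assert add(a, add(b, c)) == add(add(a, b), c)
    assert mul(a, mul(b, c)) == mul(mul(a, b), c)
    # ... and they interact through the two distributive laws.
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
    assert mul(add(a, b), c) == add(mul(a, c), mul(b, c))

print("Z_6 is a ring: two associative operations, tied together by distribution")
```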

A problem which would have worried me at age eight: do we know we’re calling the correct operation “addition”? Yes, yes, names are arbitrary. But are we matching the thing we think we’re doing when we calculate 2 + 2 to addition and the thing for 2 x 2 to multiplication? How do we tell these two apart?

For all that they start the same, and resemble one another, there are differences. Addition has an identity, something that works like zero. a + 0 is always a , whatever a is. Multiplication … the multiplication we use every day has an identity, that is, 1. Are we required to have a multiplicative identity, something so that a \times 1 is always a ? That depends on what it said in the Introduction to Algebra textbook you learned on. If you want to be clear your rings do have a multiplicative identity you call it a “unit ring”. If you want to be clear you don’t care, I don’t know what to say. I’m told some people write that as “rng”, to hint that this identity is missing.

Addition always has an inverse. Whatever element a you pick, there is some -a so that -a + a is the additive identity. Multiplication? Even if we have a unit ring, there’s not always a reciprocal. The integers are a unit ring. But there are only two integers that have an integer multiplicative inverse, something you can multiply them by to get 1. If every element of your unit ring, except the additive identity, has a multiplicative inverse, you have a division algebra. Rational numbers, for example, are a division algebra.

So for some rings, like the integers, there’s an obvious difference between addition and multiplication. But for the rational numbers? Can we tell the operations apart?

We can, through the additive identity, which please let me call 0. And the multiplicative identity, which please let me call 1. Is there a multiplicative inverse of 0? Suppose there is one; let me call it c , because I need some name. Then of all the things in the world, we know this:

0 \times c = 1

I can replace anything I like with something equal to it. So, for example, I can replace 0 with the sum of an element and its additive inverse. Like, (-a + a) for some element a . So then:

(-a + a) \times c = 1

And distribute this away!

-a\times c + a\times c = 1

I don’t know what number a\times c is, nor what its additive inverse -a\times c is. But I know their sum is zero. And so

0 = 1

This looks like trouble. But, all right, why not have the additive and the multiplicative identities be the same number? Mathematicians like to play with all kinds of weird things; why not this weirdness?

The why not is that you work out pretty fast that every element has to be equal to every other element. If you’re not sure how, consider the starting line of that little proof, but with an element b :

0 \times c \times b = 1 \times b

The right side is just b . The left side, using distribution again, has to be zero, since zero times anything is zero. So every element b equals 0, and the whole ring collapses to a single point.

So there, finally, is a crack between addition and multiplication. Addition’s identity element, its zero, can’t have a multiplicative inverse. Multiplication’s identity element, its one, must have an additive inverse. We get addition from the thing we can’t un-multiply.

It may have struck you that if all we want is a ring with the lone element of 0 (or 1), then we can have addition and multiplication be indistinguishable again. And have the additive and multiplicative identities be the same thing. There’s nothing else for them to be. This is true, and we can. Unfortunately this ring doesn’t do much that’s interesting, except maybe prove some theorem we were working on isn’t always true. So we usually draw a box around it, acknowledge it once, and then exclude it from division algebras and fields and other things of interest. It’s much the same way we normally rule out 1 as a prime number. It’s an example that is too much bother to include given how unenlightening it is.

You can have groups and attach to them a multiplication and an addition and another binary operation. Those aren’t of such general interest that you study them much as an undergraduate.

And this is what we know of addition. It looks almost like a second multiplication. But it interacts just enough with multiplication to force the two to be distinguishable. From that we can create mathematical structures as interesting as arithmetic is.

My Little 2021 Mathematics A-to-Z: Multiplication


I wanted to start the Little 2021 Mathematics A-to-Z with more ceremony. These glossary projects are fun and work in about equal measure. But an already hard year got much harder about a month and a half back, and it hasn’t been getting much better. I’m even considering cutting down the reduced A-to-Z project I am doing. But I also feel I need to get some structured work under way. And sometimes only ambition will overcome a diminished world. So I begin, and with luck, will keep posting weekly essays about mathematical terms.

Today’s was a term suggested by Iva Sallay, longtime blog friend and creator of the Find The Factors recreational mathematics puzzle. Also a frequent host of the Playful Math Education Blog Carnival, a project quite worth reading and a great hosting challenge too. And as often makes for a delightful A-to-Z topic, it’s about something so commonplace one forgets it can hold surprises.

Multiplication

A friend pondering mathematics said they know you learn addition first, but that multiplication somehow felt more fundamental. I supported their insight. We learn two plus two first. It’s two times two where we start seeing strange things.

Suppose for the moment we’re interested only in the integers. Zero multiplied by anything is zero. There’s nothing like that in addition. Consider even numbers. An even number times anything gives you an even number again. There’s no duplicating that in addition. But this trait isn’t even unique to even numbers. Multiples of three, or four, or 237 assimilate the integers by multiplication the same way. You can find an integer to add to 2 to get 5; you can’t find an integer to multiply by 2 to get 5. Or consider prime numbers, integers you can build with essentially only one multiplication. There’s no integer you can make by only one, or only finitely many, different sums. New possibilities, and restrictions, happen in multiplication.

Whether this makes multiplication the foundation of mathematics, or at least arithmetic, is a judgement. It depends how basic your concepts must be, and what you decide is important. Mathematicians do have a field which studies “things that look like arithmetic”, though. We call this algebra. Or call it abstract algebra to clarify it’s not that stuff with the quadratic formula. And that starts with group theory. A group is made of two things. One is a collection of elements. The other is a thing to do with pairs of elements. Generically, we call that multiplication.

A possible multiplication has to follow a couple rules. It has to be a binary operation on your group’s set. That is, it matches two things in the set to something in the set. There has to be an identity, something that works like 1 does for multiplying numbers. It has to be associative. If you want to multiply three things together, you can start with whatever pair looks easier. Every element has to have an inverse, something you can multiply it by to get 1 as the product.

That’s all, and that’s not much. This description covers a lot of things. For example, there’s regular old multiplication, for the set of rational numbers (other than zero; I intend to talk about that later). For another, there’s rotations of a ball. Each axis you could turn the ball around on, paired with an angle you could rotate it by, is an element of the set of three-dimensional rotations. Multiplication we interpret as doing those rotations one after the other. There’s the multiplication of square matrices, ones that have the same number of rows and columns.
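Checking those rules for a finite candidate is easy to automate. Here’s a minimal Python sketch, with my own example rather than any from this essay: the set {1, 2, 3, 4} under multiplication modulo 5.

```python
from itertools import product

elements = [1, 2, 3, 4]

def op(a, b):
    # The candidate group operation: multiplication modulo 5.
    return (a * b) % 5

# Closure: every product lands back in the set.
assert all(op(a, b) in elements for a, b in product(elements, repeat=2))

# Associativity: grouping doesn't matter.
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a, b, c in product(elements, repeat=3))

# Identity: some element that leaves everything alone.
identity = next(e for e in elements
                if all(op(e, a) == a == op(a, e) for a in elements))

# Inverses: everything can be multiplied back to the identity.
inverses = {a: next(b for b in elements if op(a, b) == identity)
            for a in elements}

print(identity)   # 1
print(inverses)   # {1: 1, 2: 3, 3: 2, 4: 4}
```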

If you’re reading a pop mathematics blog, you know of \imath , the “imaginary unit”. You know it because \imath^2 = -1 . A bit more multiplying of these and you find a nice tight cycle. This forms a group, with four discernible elements (1, \imath, -1, \mbox{ and } -\imath ) and regular multiplication. It’s a nice example of a “cyclic group”. We can represent the whole thing as multiplying a single element together: \imath^0, \imath, \imath^2, \imath^3 . We can think of \imath^4 but that’s got the same value as \imath^0 . Or \imath^5 , which has the same value as \imath^1 . With a little ingenuity we can even think of what we might mean by, say, \imath^{-1} and realize it has to be the same quantity as \imath^3 . Or \imath^{-2} which has to equal \imath^2 . You see the cycle.
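You can watch the cycle in any language with complex numbers built in. A quick Python sketch; Python spells the imaginary unit 1j:

```python
# The cyclic group {1, i, -1, -i} under ordinary multiplication.
# (Signed zeros like (-0-1j) may appear in the output; that's still -i.)
i = 1j

for power in range(8):
    print(power, i ** power)   # cycles through 1, i, -1, -i, then repeats

print(-1, i ** -1)             # the same value as i ** 3
print(-2, i ** -2)             # the same value as i ** 2
```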

A cyclic group doesn’t have to have four elements. It needs to be generated by doing the multiplication over and over on one element, that’s all. It can have a single element, or two, or two hundred. Or infinitely many elements. Suppose we have a set built on the powers of an element that we’ll call e . This is a common name for “an element and we don’t care what it is”. It has nothing to do with the number called e, or any number. At least it doesn’t have to.

Please let me use the shorthand of e^2 to mean e times e , and e^3 to mean e^2 times e , and so on. Then we have a set that looks like, in part, \cdots e^{-3}, e^{-2}, e^{-1}, e^0, e^1, e^2, e^3, \cdots . They multiply together the way we might multiply x raised to powers. e^2 \times e^3 is e^5 , and e^4 \times e^{-4} is e^0 , and e^{-3} \times e^2 is e^{-1} and so on.

Those exponents suggest something familiar. In this infinite cyclic group e^j \times e^k is e^{j + k} , where j and k are integers. Do we even need to write the e? Why not just write the j and k in a normal-size typeface? Is there a difference between cyclic-group multiplication and regular old addition of integers?

Not an important one. There’s differences in how we write the symbols, and what we think they mean. There’s not a difference in the way they interact. Regular old addition, in this light, we can see as a multiplication.

Calling addition “multiplication” can be confusing. So we deal with that a few ways. One is to say that rather than multiplication what a group has is a group operation. This lets us avoid fooling people into thinking we mean to take this times that. It lacks a good shorthand word, the way we might say “a times b” or “a plus b”. But we can call it “the group operation”, and say “times” or “plus” as fits our sentence and our sentiment.

I’ve left unanswered that mention of multiplication on the rational-numbers-except-zero making a group. If you include zero in the set, though, you don’t have multiplication as a group operation. There’s no inverse to zero. It seems like an oversight that ordinary multiplication isn’t, by this reckoning, a multiplication. I hope to address that in the next A-to-Z essay, on Addition.


This, and my other essays for the Little 2021 Mathematics A-to-Z, should be at this link. And all my A-to-Z essays from every year should be at this link. Thanks for reading.

Reading the Comics, February 1, 2020: I Never Talk About Marvin Edition


There’s some comic strips that get mentioned here all the time. Then there’s comic strips that I have been reading basically my whole life, and that never give me a thread to talk about. Although I’ve been reading comic strips for their mathematics content for a long while now, somehow, I am still surprised when these kinds of comic strip are not the same thing. So here’s the end of last week’s comics, almost in time for next week to start:

Kevin Fagan’s Drabble for the 28th has Penny doing “math” on colors. Traditionally I use an opening like this to mention group theory. In that we study things that can be added together, in ways like addition works on the integers. Colors won’t quite work like this, unfortunately. A group needs an element that’s an additive identity. This works like zero: it can be added to anything without changing its value. There isn’t a color that you can mix with other colors that leaves the other color unchanged, though. Even white or clear will dilute the original color.

Mom: 'How was school today, Penny?' Penny: 'Great, Mommy! I learned how to do math! Want me to show you? Blue plus red equals purple!'
Kevin Fagan’s Drabble for the 28th of January, 2020. It doesn’t come up often, but when it does, Drabble appears in essays at this link.

If you’ve thought of the clever workaround, that each color can be the additive identity to itself, you get credit for ingenuity. Unfortunately, to be a group there has to be a lone additive identity. Having more than one makes a structure that’s so unlike the integers that mathematicians won’t stand for it. I also don’t know of any interesting structures that have more than one additive identity. This suggests that nobody has found a problem that they represent well. But the strip suggests maybe it could tell us something useful for colors. I don’t know.

Marvin: 'After all the talk about 'fake news' I'm starting to question EVERYTHING big people tell me.' He's looking at a teacher holding up the flashcard 1 + 1 = 2.
Tom Armstrong’s Marvin for the 28th of January, 2020. I don’t think it has ever come up before, but what the heck. Any essays which mention Marvin should be at this link.

Tom Armstrong’s Marvin for the 28th is a strip which follows from the discovery that “fake news” is a thing that people say. Here the strip uses a bit of arithmetic as the sort of incontrovertibly true thing that Marvin is dumb to question. Well, that 1 + 1 equals 2 is incontrovertibly true, unless we are looking at some funny definitions of ‘1’ or ‘plus’ or something. I remember, as a kid, being quite angry with a book that mentioned “one cup of popcorn plus one cup of water does not give us two cups of soggy popcorn”, although I didn’t know how to argue the point.

Title: 'The Math Homework.' Dad, in the kitchen, to kid: 'What's surface area? Ask your mother.' The mother is in the kitchen, working, and has every bit of surface area that isn't being used for homework with cooking tools. Footer joke: Mom asks, 'Can you please move? I need this space.'
Hilary Price and Rina Piccolo’s Rhymes with Orange for the 30th of January, 2020. Essays with some mention of Rhymes With Orange should be at this link.

Hilary Price and Rina Piccolo’s Rhymes with Orange for the 30th is … well, I’m in this picture and I don’t like it. I come from a long line of people who cover every surface with stuff. But as for what surface area is? … Well, there’s a couple of possible definitions. One that I feel is compelling is to think of covering sets. Take a shape that’s set, by definition, to have an area of 1 unit of area. What is the smallest number of those unit shapes which will cover the original shape? Cover is a technical term here. But also, here, the ordinary English word describes what we need it for. How many copies of the unit shape do you need to exactly cover up the whole original shape? That’s your area. And this fits the mother’s use of surfaces in the comic strip neatly enough.

Mutt: 'What's the matter, you stuck?' Jeff, looking at his car: 'Yes and no! I tried the car products they advertise on TV. They claimed this car would use 50% less gas. Then I bought a carburettor which saves 30%, special spark plugs which save 20% and a new brand of gas which saved 10%! Now when I drive the gas tank overflows!' Jeff shows gas pouring out of the tank.
Bud Fisher’s Mutt and Jeff for the 31st of January, 2020. And the essays which have mentioned Mutt and Jeff comics appear at this link.

Bud Fisher’s Mutt and Jeff for the 31st is a rerun of vintage unknown to me. I’m not sure whether it’s among the digitally relettered strips. The lettering’s suspiciously neat, but, for example, there’s at least three different G’s in there. Anyway, it’s an old joke about adding together enough gas-saving contraptions that it uses less than zero gas. So far as it’s tenable at all, it comes from treating percentage savings from different schemes as additive, instead of multiplying together. Also, I suppose, that the savings are independent, that (in this case) Jeff’s new gas saving ten percent still applies even with the special spark plugs or the new carburettor [sic]. The premise is also probably good for a word problem, testing out understanding of percentages and multiplication, which is just a side observation here.
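If you want the arithmetic spelled out, here’s a quick Python sketch of both readings of the strip’s numbers: the savings treated as additive, which is the joke, and treated as independent multiplicative factors, which is the defensible version.

```python
# Claimed savings: the car (50%), carburettor (30%), plugs (20%), gas (10%).
savings = [0.50, 0.30, 0.20, 0.10]

# The joke's arithmetic: add the percentages and save more gas than you use.
print(f"additive: {sum(savings):.0%} saved")    # 110%, so the tank overflows

# The sensible arithmetic: each gadget scales down what the last one left.
remaining = 1.0
for s in savings:
    remaining *= 1.0 - s

print(f"multiplicative: {1.0 - remaining:.0%} saved")   # about 75%
```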


This wraps up last week’s mathematically-themed comic strips. This week I can tell you already was a bonanza week. When I start getting to its comics I should have an essay at this link. Thanks for reading.

Reading the Comics, May 20, 2019: I Guess I Took A Week Off Edition


I’d meant to get back into discussing continuous functions this week, and then didn’t have the time. I hope nobody was too worried.

Bill Amend’s FoxTrot for the 19th is set up as geometry or trigonometry homework. There are a couple of angles that we use all the time, and they do correspond to some common unit fractions of a circle: a quarter, a sixth, an eighth, a twelfth. These map nicely to common cuts of circular pies, at least. Well, it’s a bit of a freak move to cut a pie into twelve pieces, but it’s not totally out there. If someone cuts a pie into 24 pieces, flee.

Offscreen voice: 'So a pizza sliced into fourths has ... ' Paige: '90 degrees per slice.' Voice: 'Correct! And a pizza sliced into sixths has ... ' Page: '60 degrees per slice.' Voice: 'Good! And a pizza sliced into eighths has ... ' Paige: '45 degrees per slice.' Voice: 'Yep! I'd say you're ready for your geometry final, Paige.' Paige: 'Woo-hoo!' Voice, revealed to be Peter: 'Now help me clean up these [ seven pizza ] boxes.' Page: 'I still don't understand why teaching me this required *actual* pizzas.'
Bill Amend’s FoxTrot for the 19th of May, 2019. Essays featuring FoxTrot, either the current (Sunday-only) strips or the 1990s-vintage reruns, should be at this link.

Tom Batiuk’s vintage Funky Winkerbean for the 19th of May is a real vintage piece, showing off the days when pocket electronic calculators were new. The sales clerk describes the calculator as having “a floating decimal”. And here I must admit: I’m poorly read on early-70s consumer electronics. So I can’t say that this wasn’t a thing. But I suspect that Batiuk either misunderstood “floating-point decimal”, which would be a selling point, or shortened the phrase in order to make the dialogue less needlessly long. Which is fine, and his right as an author. The technical detail does its work, for the setup, by existing. It does not have to be an actual sales brochure. Reducing “floating point decimal” to “floating decimal” is a useful artistic shorthand. It’s the dialogue equivalent to the implausibly few, but easy to understand, buttons on the calculator in the title panel.

Calculator salesman: 'This little pocket calculator is a real beauty. It's nice and light so you can take it anywhere. It has an eight-digit readout with automatic roundoff. Not only that, but it has a floating decimal which enables you to solve ANY type of problem with it!' Les Moore: 'Amazing! May I try it out?' (To the calculator) 'Hello, pocket calculator? Why do I have so much trouble getting girls to like me?'
Tom Batiuk’s vintage Funky Winkerbean for the 19th of May, 2019. The strip originally ran the 17th of June, 1973. Comics Kingdom is printing both the current Funky Winkerbean strips and early-70s reprints. Essays that mention Funky Winkerbean, old or new, should appear at this link.

Floating point is one of the ways to represent numbers electronically. The storage scheme is much like scientific notation. That is, rather than think of 2,038, think of 2.038 times 10^3 . In the computer’s memory are stored the 2.038 and the 3, with the “times ten to the” part implicit in the storage scheme. The advantage of this is the range of numbers one can use now. There are different ways to implement this scheme; a common one will let one represent numbers as tiny as 10^{-308} or as large as 10^{308} , which is enough for most people’s needs.

The disadvantage is that floating point numbers aren’t perfect. They have only around (commonly) sixteen digits of significance. That is, only the first sixteen or so digits of the number you represent mean anything; everything after that is garbage. Most of the time, that trailing garbage doesn’t hurt. But most is not always. Trying to add, for example, a tiny number, like 10^{-20} , to a huge number, like 10^{20} , won’t get the right answer. And there are numbers that can’t be represented correctly anyway, including such exotic and novel numbers as \frac{1}{3} . A lot of numerical mathematics is about finding ways to compute that avoid these problems.
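Here’s a quick Python demonstration of both the range and the garbage, assuming the usual 64-bit floats Python uses:

```python
import sys

print(sys.float_info.max)     # about 1.8e308, the huge end of the range
print(sys.float_info.min)     # about 2.2e-308, the tiny (normalized) end
print(sys.float_info.dig)     # 15: decimal digits you can actually trust

# Adding a tiny number to a huge one loses the tiny part entirely.
print(1e20 + 1e-20 == 1e20)   # True; the 1e-20 simply vanishes

# And some perfectly ordinary numbers can't be stored exactly at all.
print(f"{1/3:.20f}")          # the trailing digits are representation garbage
print(0.1 + 0.2 == 0.3)       # False, famously
```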

Back when I was a grad student I did have one casual friend who proclaimed that no real mathematician ever worked with floating point numbers, because of the limitations they impose. I could not get him to accept that no, in fact, mathematicians are fine with these limitations. Every scheme for representing numbers on a computer has limitations, and floating point numbers work quite well. At some point, you have to suspect some people would rather fight for a mistaken idea they already have than accept something new.

Matrix-O-Magic: Draw a nine-square grid on a notepad, filling in the numbers 1-9 like this: 2, 9, 4 // 7, 5, 3 // 6, 1, 8 Hand the pad and marker to a friend and tell him to pick any row of three numbers, upward, downward, or diagonal. Tell him to black out any numbers not in his row. Instruct your friend to add up his three randomly chosen numbers. Ask your friend to flip through the rest of the notepad to make sure the pages are blank. All the pages are blank except one. That one bears the number that his numbers added up to: 15. (All the rows/columns/diagonals add to 15; because the other numbers are blacked out your friend won't notice. If asked to do the trick more than once the grid can be made to look different by rotating the order of the numbers left or right, e.g., 6, 7, 2 // 1, 5, 9 // 8, 3, 4.)
Mac King and Bill King’s Magic in a Minute for the 19th of May, 2019. So far as I know all these panels are new ones, although they do reuse gimmicks now and then. But the arithmetic and logic tricks featured in Magic In A Minute get discussed at this link, when they get mention from me at all.

Mac King and Bill King’s Magic in a Minute for the 19th does a bit of stage magic supported by arithmetic: forecasting the sum of three numbers. The trick is that all eight possible choices someone would make have the same sum. There’s a nice bit of group theory hidden in the “Howdydoit?” panel, about how to do the trick a second time. Rotating the square of numbers makes what looks, casually, like a different square. It’s hard for a human to memorize a string of digits that doesn’t have any obvious meaning, and the longer the string the worse people are at it. If you’ve had a person — as directed — black out the rows or columns they didn’t pick, then it’s harder to notice the reused pattern.

The different directions that you could write the digits down in represent symmetries of the square. That is, geometric operations that would replace a square with something that looks like the original. This includes rotations, by 90 or 180 or 270 degrees clockwise. Mac King and Bill King don’t mention it, but reflections would also work: if the top row were 4, 9, 2, for example, and the middle 3, 5, 7, and the bottom 8, 1, 6. Combining rotations and reflections also works.

If you do the trick a second time, your mark might notice it’s odd that the sum came up 15 again. Do it a third time, even with a different rotation or reflection, and they’ll know something’s up. There are things you could do to disguise that further. Just double each number in the square, for example: a square of 4/18/8, 14/10/6, 12/2/16 will have each row or column or diagonal add up to 30. But this loses the beauty of doing this with the digits 1 through 9, and your mark might grow suspicious anyway. The same happens if, say, you add one to each number in the square, and forecast a sum of 18. Even mathematical magic tricks are best not repeated too often, not unless you have good stage patter.
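For the skeptical, here’s a small Python sketch checking the square from the strip and producing the rotated grid from the Howdydoit panel:

```python
# The square from the strip.
grid = [[2, 9, 4],
        [7, 5, 3],
        [6, 1, 8]]

rows = grid
columns = [[grid[r][c] for r in range(3)] for c in range(3)]
diagonals = [[grid[i][i] for i in range(3)],
             [grid[i][2 - i] for i in range(3)]]

# Every one of the eight lines your mark could pick adds to 15.
print(all(sum(line) == 15 for line in rows + columns + diagonals))   # True

# Rotating a quarter-turn clockwise gives the panel's alternate grid.
rotated = [list(reversed(column)) for column in columns]
print(rotated)   # [[6, 7, 2], [1, 5, 9], [8, 3, 4]]
```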

Wavehead, to classmate, over lunch: 'Did you know that every square is a rhombus, but not every rhombus is a square? I mean, you can't make this stuff up!'
Mark Anderson’s Andertoons for the 20th of May, 2019. Always glad to discuss Andertoons, as you can see from these essays.

Mark Anderson’s Andertoons for the 20th is the Mark Anderson’s Andertoons for the week. Wavehead’s marveling at what seems at first like an asymmetry, about squares all being rhombuses yet rhombuses not all being squares. There are similar results with squares and rectangles. Still, it makes me notice something. Nobody would write a strip where the kid marvelled that all squares were polygons but not all polygons were squares. It seems that the rhombus connotes something different. This might just be familiarity. Polygons are … well, if not a common term, at least something anyone might feel familiar. Rhombus is a more technical term. It maybe never quite gets familiar, not in the ways polygons do. And the defining feature of a rhombus — all four sides the same length — seems like the same thing that makes a square a square.


There should be another Reading the Comics post this coming week, and it should appear at this link. I’d like to publish it Tuesday but, really, Wednesday is more probable.

Reading the Comics, January 13, 2019: January 13, 2019 Edition


I admit I’m including a fairly marginal strip in this, just so I can have the fun of another single-day edition. What can I say? I can be easily swayed by silly things. Also, somehow, all four strips today have circumstances where one might mistake them for reruns. Let’s watch.

Bill Amend’s FoxTrot for the 13th is wordplay, mashing up ‘cell division’ with ‘long division’. As you might expect from Bill Amend — who loves sneaking legitimate mathematics and physics in where it’s not needed — Paige’s long cell division is a legitimate one. If you’d like a bit of recreational mathematics fun, you can figure out which microscopic organisms correspond to which numerals. The answer is also the Featured Comment on the page, at least as I write this. So if you need an answer, or you want to avoid having the answer spoiled, know what’s there.

A long division problem, with microbes representing the digits. Science teacher: 'Paige, about your diagram of cell division ... ' Paige: 'Did I get the math wrong?'
Bill Amend’s FoxTrot for the 13th of January, 2019. Essays discussing topics raised by FoxTrot, whether new (Sunday strips) or rerun (the weekdays), should be at this link.

Greg Evans’s Luann Againn for the 13th is the strip of most marginal relevance here. Part of Luann’s awful day is a mathematics test. The given problems are nothing particularly meaningful. There is the sequence ‘mc2’ in the problem, although written as m^c 2 . There’s also a mention of ‘googleplex’, which when the strip was first published in 1991 was nothing more than a misspelling of the quite large number. (‘Googol’ is the number; ‘Google’ a curious misspelling. Or perhaps a reversion. The name was coined in 1938 by Milton Sirotta. Sirotta was seven years old at the time. I accept that it is at least possible Sirotta was thinking of the then-very-popular serial-comic strip Barney Google, and that his uncle Edward Kasner, who brought the name to mathematics, wrote it down wrong.) And that carries with it the connotation that big numbers are harder than small numbers. This is … kind of true. At least, long numbers are more tedious than short numbers. But you don’t really do different work, dividing 1428 by 7, than you do dividing 147 by 7. It’s just longer. “Hard” is a flexible idea.

Panels showing a day in Luann's life: she gets dressed and made up. Then misses the bus and has to run to school, steps in gum, slides into base at gym class, sweats a mathematics test, gets food spilled on her at lunch, and walks in the rain back home. Brad looks over the mess: 'Jeez, Luann, no wonder you don't have any boyfriends. Lookit how you go to school!'
Greg Evans’s Luann Againn for the 13th of January, 2019. It originally ran the 13th of January, 1991. Essays discussing topics raised by Luann, whether new (current day) or rerun (1991 vintage), should be at this link.

Mac King and Bill King’s Magic in a Minute for the 13th felt like a rerun to me. It took a bit of work to find, but yeah, it was. The strip itself, as presented, is new. But the same neat little modular-arithmetic coincidence was used the 31st of July, 2016.

Hickory-Trickery-Clock. From a picture of a standard analog watch, here's what you do: think of any number, one through twelve. Place your fingertip on the number 12 of the clock. Spell the number you thought of, moving one number clockwise for each letter; eg, if you thought 'one', move three spaces, stopping at the 3. Now spell out the number you're touching, advancing the numbers by the same rule. And now do this one more time. You will have reached ... 1:00.
Mac King and Bill King’s Magic in a Minute for the 13th of January, 2019. Essays discussing topics raised by Magic In A Minute, whether new or re-drawn magic, should be at this link.

Mathematics on clock faces is often used as a way to introduce modular arithmetic, a variation on arithmetic with only finitely many integers. This can help, if you’re familiar with clock faces. Like regular arithmetic, modular arithmetic can form a group and a ring. Clock faces won’t give you a group or ring, not unless you replace the number before ‘1’ with a ‘0’. To be a group, you need a collection of items, and a binary operation on the items. This operation we often think of as either addition or multiplication, depending on what makes sense for the problem. To be a ring, you need two binary operations, which interact by a distributive law. So the operations are often matched to addition and multiplication. Modular arithmetic is fun, yes. It’s also useful, not just as a way to do something like arithmetic that’s different. Many schemes for setting up checksums, quick and easy tests against data entry errors, rely on modular arithmetic on the data. And many schemes for generating ‘random’ numbers are built on finding multiplicative inverses in modular arithmetic. This isn’t truly random, of course. But you can look at a string of digits and not see any clear patterns. This is often as close to random as you need.
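The Hickory-Trickery-Clock trick, meanwhile, is easy to exhaust by computer: there are only twelve possible secret numbers. A quick Python simulation of the instructions as transcribed above:

```python
# Spell a number, move one hour clockwise per letter, three times over.
NAMES = {1: 'one', 2: 'two', 3: 'three', 4: 'four', 5: 'five',
         6: 'six', 7: 'seven', 8: 'eight', 9: 'nine', 10: 'ten',
         11: 'eleven', 12: 'twelve'}

def advance(position, number):
    steps = len(NAMES[number])
    return (position + steps - 1) % 12 + 1   # keep the hours in 1..12

for secret in range(1, 13):
    position = advance(12, secret)           # fingertip starts on the 12
    position = advance(position, position)   # spell the number you're touching
    position = advance(position, position)   # and one more time
    assert position == 1

print("every secret number ends on 1 o'clock")
```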

Avis: 'My niece Jasmine is one of those Millennials.' Nick: 'Ah yes, Generation Y.' Avis: 'Y? Why? I'd like to know! Why can't they read cursive? Why can't they do simple multiplication? Why can't they parallel park? Why can't they talk to each other? Why are they always complaining?' Nick: 'Avis, complaining is hardly limited to millennials.' (Avis's questions are illustrated with young adults trying to read cursive or to multiply 3 x 6 or such.)
Rick DeTorie’s One Big Happy for the 13th of January, 2019. Essays discussing topics raised by One Big Happy, whether new (on Creators.com) or rerun (on GoComics.com), should be at this link.

Rick DeTorie’s One Big Happy for the 13th is mostly a bunch of complaints the old always have against the young. Well, the complaint about parallel parking I haven’t seen before. But the rest are common enough. Featured in it is a complaint that the young can’t do arithmetic. I’m not sure there was ever a time that the older generation thought the young were well-trained in arithmetic. Nor that there was ever a time that the current educational vogue wasn’t blamed for destroying a generation’s ability to calculate. I’m sure there are better and worse ways to teach calculation. But I suspect any teaching method will fall short of addressing a couple issues. One is that people over-rate their own competence and under-rate others’ competence. So the older generation will see itself as having got the best possible arithmetic education and anything that’s different is a falling away. And another is that people get worse at stuff they don’t think is enjoyable or don’t have to do a lot. If you haven’t got a use for the fact, or an appreciation for the beauty in it, three times six is a bit of trivia, and not one that inspires much conversation when shared.


There’s more comics with something of a mathematical theme that got published last week. When I get to them the essays should be at this link.

My 2018 Mathematics A To Z: Group Action


I got several great suggestions for topics for ‘g’. The one that most caught my imagination was mathtuition88’s, the group action. Mathtuition88 is run by Mr Wu, a mathematics tutor in Singapore. His mathematics blog recounts his own explorations of interesting topics.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble titles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Group Action.

This starts from groups. A group, here, means a pair of things. The first thing is a set of elements. The second is some operation. It takes a pair of things in the set and matches it to something in the set. For example, try the integers as the set, with addition as the operation. There are many kinds of groups you can make. There can be finite groups, ones with as few as one element or as many as you like. (The one-element groups are so boring. We usually need at least two to have much to say about them.) There can be infinite groups, like the integers. There can be discrete groups, where there’s always some minimum distance between elements. There can be continuous groups, like the real numbers, where there’s no smallest distance between distinct elements.

Groups came about from looking at how numbers work. So the first examples anyone gets are based on numbers. The integers, especially, and then the integers modulo something. For example, there’s Z_2 , which has two numbers, 0 and 1. Addition works by the rule that 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 0. There are similar rules for Z_3 , which has three numbers, 0, 1, and 2.

But after a few comfortable minutes on this, group theory moves on to more abstract things. Things with names like the “permutation group”. This starts with some set of things and we don’t even care what the things are. They can be numbers. They can be letters. They can be places. They can be anything. We don’t care. The group is all of the ways to swap elements around. All the relabellings we can do without losing or gaining an item. Or another, the “symmetry group”. This is, for some given thing — plates, blocks, and wallpaper patterns are great examples — all the ways you can rotate or move or reflect the thing without changing the way it looks.

And now we’re creeping up on what a “group action” is. Let me just talk about permutations here. These are where you swap around items. Like, start out with a list of items “1 2 3 4”. And pick out a permutation, say, swap the second with the fourth item. We write that, in shorthand, as (2 4). Maybe another permutation too. Say, swap the first item with the third. Write that out as (1 3). We can multiply these permutations together. Doing these permutations, in this order, has a particular effect: it swaps the second and fourth items, and swaps the first and third items. This is another permutation on these four items.

These permutations, these “swap this item with that” rules, are a group. The set for the group is instructions like “swap this with that”, or “swap this with that, and that with this other thing, and this other thing with the first thing”. Or even “leave this thing alone”. The operation between two things in the set is, do one and then the other. For example, (2 3) and then (3 4) has the effect of moving the second thing to the fourth spot, the (original) fourth thing to the third spot, and the original third thing to the second spot. That is, it’s the three-cycle we would write (2 4 3): 2 goes to 4, 4 goes to 3, and 3 goes back to 2. If you ever need something to doodle during a slow meeting, try working out all the ways you can shuffle around, say, six things. And what happens as you do all the possible combinations of these things. Hey, you’re only permuting six items. How many ways could that be?
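Here’s a small Python sketch of those two compositions, my own illustration, applying each swap to a concrete list so you can watch the items move:

```python
def apply_swap(items, i, j):
    # Swap the contents of positions i and j, numbered from 1
    # to match the cycle shorthand above.
    items = list(items)
    items[i - 1], items[j - 1] = items[j - 1], items[i - 1]
    return items

start = [1, 2, 3, 4]

# (2 4) and then (1 3): two independent swaps.
print(apply_swap(apply_swap(start, 2, 4), 1, 3))   # [3, 4, 1, 2]

# (2 3) and then (3 4): the original second item ends in the fourth
# spot, the fourth in the third, and the third in the second.
print(apply_swap(apply_swap(start, 2, 3), 3, 4))   # [1, 3, 4, 2]

# And the doodling question: six items can be shuffled 720 ways.
import math
print(math.factorial(6))   # 720
```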

So here’s what sounds like a fussy point. The group here is made up of the ways you can permute these items. The items aren’t part of the group. They just gave us something to talk about. This is where I got so confused, as an undergraduate, working out groups and group actions.

When we move back to talking about the original items, then we get a group action. You get a group action by putting together a group with some set of things. Let me call the group ‘G’ and the set ‘X’. If I need something particular in the group I’ll call that ‘g’. If I need something particular from the set ‘X’ I’ll call that ‘x’. This is fairly standard mathematics notation. You see how subtly clever this notation is. The group action comes from taking things in G and applying them to things in X, to get things in X. Usually other things, but not always. In the lingo, we say the group action maps the pair of things G and X to the set X.

There are rules these actions have to follow. They’re what you would expect, if you’ve done any fiddling with groups. Don’t worry about them. What’s interesting is what we get from group actions.

First is group orbits. Take some ‘g’ out of the group G. Take some ‘x’ out of the set ‘X’. And build this new set. First, x. Then, whatever g does to x, which we write as ‘gx’. But ‘gx’ is still something in ‘X’, so … what does g do to that? So toss in ‘ggx’. Which is still something in ‘X’, so, toss in ‘gggx’. And ‘ggggx’. And keep going, until you stop getting new things. If ‘X’ is finite, this sequence has to be finite. It might be the whole set of X. It might be some subset of X. But if ‘X’ is finite, it’ll get back, eventually, to where you started, which is why we call this the “group orbit”. We use the same term even if X isn’t finite and we can’t guarantee that all these iterations of g on x eventually get back to the original x. Strictly, iterating the one element g like this traces the orbit of x under the powers of g; let every element of G act on x and you get the full group orbit. Either way, what you build is a subset of X, not a group. X doesn’t even need an operation of its own.

There can be other special groups. Like, are there elements ‘g’ that map ‘x’ to ‘x’? Sure. There has to be at least one, since the group G has an identity element. There might be others. So, for any given ‘x’, what are all the elements ‘g’ that don’t change it? The set of all the values of g for which gx is x is the “isotropy group” G_x . Or the “stabilizer subgroup”. This is a subgroup of G, based on x.
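A concrete example may make the orbit and the stabilizer feel less abstract. Here’s a Python sketch, my own example and not one from the essay: the eight symmetries of a square acting on its four corners.

```python
# Corners of a square, labelled 0..3 going around.  The group G is the
# eight symmetries of the square; the set X is the four corners.
def rotate(corner):            # a quarter-turn
    return (corner + 1) % 4

def reflect(corner):           # the flip across the 0-2 diagonal
    return (4 - corner) % 4    # swaps 1 and 3, fixes 0 and 2

group = []
for quarter_turns in range(4):
    for flip_first in (False, True):
        def g(corner, n=quarter_turns, f=flip_first):
            if f:
                corner = reflect(corner)
            for _ in range(n):
                corner = rotate(corner)
            return corner
        group.append(g)

x = 0
orbit = {g(x) for g in group}
stabilizer = [g for g in group if g(x) == x]

print(len(group))        # 8 symmetries in G
print(orbit)             # {0, 1, 2, 3}: the corner can go anywhere
print(len(stabilizer))   # 2: the identity and the diagonal flip
# Orbit-stabilizer bookkeeping: 8 = 4 * 2.
```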

Yes, but the point?

Well, the biggest thing we get from group actions is the chance to put group theory principles to work on specific things. A group might describe the ways you can rotate or reflect a square plate without leaving an obvious change in the plate. The group action lets you make this about the plate. Much of modern physics is about learning how the geometry of a thing affects its behavior. This can be the obvious sorts of geometry, like, whether it’s rotationally symmetric. But it can be subtler things, like, whether the forces in the system are different at different times. Group actions let us put what we know from geometry and topology to work in specifics.

A particular favorite of mine is that they let us express the wallpaper groups. These are the ways we can use rotations and reflections and translations (linear displacements) to create different patterns. There are fewer different patterns than you might have guessed. (Different, here, overlooks such petty things as whether the repeated pattern is a diamond, a flower, or a hexagon. Or whether the pattern repeats every two inches versus every three inches.)

And they stay useful for abstract mathematical problems. All this talk about orbits and stabilizers lets us find something called the Orbit Stabilization Theorem. This connects the size of the group G to the sizes of the orbit of x and the stabilizer subgroup of x. This has the exciting advantage of letting us turn many proofs into counting arguments. A counting argument is just what you think: showing there’s as many of one thing as there are another. Here’s a nice page about the Orbit Stabilization Theorem, and how to use it. This includes some nice, easy-to-understand problems like “how many different necklaces could you make with three red, two green, and one blue bead?” Or if that seems too mundane a problem, an equivalent one from organic chemistry: how many isomers of naphthol could there be? You see where these group actions give us useful information about specific problems.
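That necklace question is small enough to brute-force, which makes a pleasant sanity check on the counting argument. A Python sketch, using the bead counts from the problem:

```python
from itertools import permutations

# Six beads: three red, two green, one blue.
arrangements = set(permutations('RRRGGB'))
print(len(arrangements))   # 60 distinct strings of beads

def images(necklace, with_flips):
    # Every way the necklace can be turned (and, optionally, flipped).
    n = len(necklace)
    for k in range(n):
        rotated = necklace[k:] + necklace[:k]
        yield rotated
        if with_flips:
            yield rotated[::-1]

def count_necklaces(with_flips):
    seen, count = set(), 0
    for arrangement in arrangements:
        if arrangement not in seen:
            count += 1                                     # a new orbit
            seen.update(images(arrangement, with_flips))   # mark its images
    return count

print(count_necklaces(with_flips=False))   # 10 necklaces, rotations only
print(count_necklaces(with_flips=True))    # 6, if flipping over counts too
```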


If you should like a more detailed introduction, although one that supposes you’re more conversant with group theory than I assume here, this is a good sequence: Group Actions I, which actually defines the things. Group actions II: the orbit-stabilizer theorem, which is about just what it says. Group actions III — what’s the point of them?, which has the sort of snappy title I like, but which gives points that make sense when you’re comfortable talking about quotient groups and isomorphisms and the like. And what I think is the last in the sequence, Group actions IV: intrinsic actions, which is about using group actions to prove stuff. And includes a mention of one of my favorite topics, the points the essay-writer just didn’t get the first time through. (And more; there’s a point where the essay goes wrong, and needs correction. I am not the Joseph who found the problem.)

My 2018 Mathematics A To Z: Fermat’s Last Theorem


Today’s topic is another request, this one from a Dina. I’m not sure if this is Dina Yagodich, who’d also suggested using the letter ‘e’ for the number ‘e’. Trusting that it is, Dina Yagodich has a YouTube channel of mathematics videos. They cover topics like how to convert degrees and radians to one another, what the chance of a false positive (or false negative) on a medical test is, ways to solve differential equations, and how to use computer tools like MathXL, TI-83/84 calculators, or Matlab. If I’m mistaken, original-commenter Dina, please let me know and let me know if you have any creative projects that should be mentioned here.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble titles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Fermat’s Last Theorem.

It comes to us from number theory. Like many great problems in number theory, it’s easy to understand. If you’ve heard of the Pythagorean Theorem you know, at least, there are triplets of whole numbers so that the first number squared plus the second number squared equals the third number squared. It’s easy to wonder about generalizing. Are there quartets of numbers, so the squares of the first three add up to the square of the fourth? Quintuplets? Sextuplets? … Oh, yes. That’s easy. What about triplets of whole numbers, including negative numbers? Yeah, and that turns out to be boring. Triplets of rational numbers? Turns out to be the same as triplets of whole numbers. Triplets of real-valued numbers? Turns out to be very boring. Triplets of complex-valued numbers? Also none too interesting.

Ah, but, what about a triplet of numbers, only raised to some other power? All three numbers raised to the first power is easy; we call that addition. To the third power, though? … The fourth? Any other whole number power? That’s hard. It’s hard finding, for any given power, a trio of numbers that work, although some come close. I’m informed there was an episode of The Simpsons which included, as a joke, the equation 1782^{12} + 1841^{12} = 1922^{12} . If it were true, this would be enough to show Fermat’s Last Theorem was false. … Which happens. Sometimes, mathematicians believe they have found something which turns out to be wrong. Often this comes from noticing a pattern, and finding a proof for a specific case, and supposing the pattern holds up. This equation isn’t true, but it is correct for the first nine digits. The episode “The Wizard of Evergreen Terrace” puts forth 3987^{12} + 4365^{12} = 4472^{12} , which apparently matches ten digits. This includes the final digit, also known as “the only one anybody could check”. (The last digit of 3987^{12} is 1. The last digit of 4365^{12} is 5. The last digit of 4472^{12} is 6, and there you go.) Really makes you think there’s something weird going on with 12th powers.
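Python’s exact integers make these near-misses a few lines to check, if you’d like to see just where they fail:

```python
for a, b, c in [(1782, 1841, 1922), (3987, 4365, 4472)]:
    lhs, rhs = a**12 + b**12, c**12
    s, t = str(lhs), str(rhs)
    agree = next((k for k, (x, y) in enumerate(zip(s, t)) if x != y),
                 min(len(s), len(t)))
    print(lhs == rhs, len(s), agree, s[-1], t[-1])
# Both rows print False.  The first pair of 40-digit numbers agrees in
# its leading digits but not its last; the second pair even ends alike.
```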

For a Fermat-like example, Leonhard Euler conjectured a thing about “Sums of Like Powers”. That for a whole number ‘n’, you need at least n whole numbers-raised-to-an-nth-power to equal something else raised to an n-th power. That is, you need at least three whole numbers raised to the third power to equal some other whole number raised to the third power. At least four whole numbers raised to the fourth power to equal something raised to the fourth power. At least five whole numbers raised to the fifth power to equal some number raised to the fifth power. Euler was wrong, in this case. L J Lander and T R Parkin published, in 1966, the one-paragraph paper Counterexample to Euler’s Conjecture on Sums of Like Powers. 27^5 + 84^5 + 110^5 + 133^5 = 144^5 and there we go. Thanks, CDC 6600 computer!
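And Lander and Parkin’s counterexample is quicker to verify today than it was to find:

```python
# No CDC 6600 required.
print(27**5 + 84**5 + 110**5 + 133**5 == 144**5)   # True
print(144**5)                                      # 61917364224
```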

But Fermat’s hypothesis. Let me put it in symbols. It’s easier than giving everything long, descriptive names. Suppose that the power ‘n’ is a whole number greater than 2. Then there are no three counting numbers ‘a’, ‘b’, and ‘c’ which make true the equation a^n + b^n = c^n . It looks doable. It looks like once you’ve mastered high school algebra you could do it. Heck, it looks like if you know the proof about how the square root of two is irrational you could approach it. Pierre de Fermat himself said he had a wonderful little proof of it.

He was wrong. No shame in that. He was right about a lot of mathematics, including a lot of stuff that leads into the basics of calculus. And he was right in his feeling that this a^n + b^n = c^n stuff was impossible. He was wrong that he had a proof. At least not one that worked for every possible whole number ‘n’ larger than 2.

For specific values of ‘n’, though? Oh yes, that’s doable. Fermat did it himself for an ‘n’ of 4. Euler, a century later, filled in ‘n’ of 3. Peter Dirichlet, a great name in number theory and analysis, and Adrien-Marie Legendre, who worked on nearly everything, proved the case of ‘n’ of 5. Dirichlet, in 1832, proved the case for ‘n’ of 14. And there were more partial solutions. You could show that if Fermat’s Last Theorem were ever false, it would have to be false for some prime-number value of ‘n’. That’s great work, answering as it does infinitely many possible cases. It just leaves … infinitely many to go.

And that’s how things went for centuries. I don’t know that every mathematician made some attempt on Fermat’s Last Theorem. But it seems hard to imagine a person could love mathematics enough to spend their lives doing it and not at least take an attempt at it. Nobody ever found a proof, though. In a 1989 episode of Star Trek: The Next Generation, Captain Picard muses on how eight centuries after Fermat nobody’s proven his theorem. This struck me at the time as too pessimistic. Granted humans were stumped for 400 years. But for 800 years? And stumping everyone in a whole Federation of a thousand worlds? And more than a thousand mathematical traditions? And, for some of these species, tens of thousands of years of recorded history? … Still, there wasn’t much sign of the problem being solved. In 1992 Analog Science Fiction Magazine published a funny short-short story by Ian Randal Strock, “Fermat’s Legacy”. In it, Fermat — jealous of figures like René Descartes and Blaise Pascal who upstaged his mathematical accomplishments — jots down the note. He figures an unsupported claim like that will earn true lasting fame.

So that takes us to 1993, when most of the world heard about elliptic curves for the first time. Elliptic curves are neat things. They’re curves described by simple polynomials. They have some nice mathematical properties. People first noticed them in studying how long arcs of ellipses are. (This is why they’re called elliptic curves, even though most of them have nothing to do with any ellipse you’d ever tolerate in your presence.) They look ready to use for encryption. And in 1985, Gerhard Frey noticed something. Suppose you did have, for some ‘n’ bigger than 2, a solution a^n + b^n = c^n . Then you could use that a, b, and n to make a new elliptic curve. That curve is the one that satisfies y^2 = x\cdot\left(x - a^n\right)\cdot\left(x + b^n\right) . And then that elliptic curve would not be “modular”.

I would like to tell you what it means for an elliptic curve to be modular. But getting to that point would take at least four subsidiary essays. MathWorld has a description of what it means to be modular, and even links to explanations of terms like “meromorphic”. It gets into exotic stuff.

Frey didn’t show whether elliptic curves of this type had to be modular or not. This is normal enough, for mathematicians. You want to find things which are true and interesting. This includes conjectures like this, that if elliptic curves are all modular then Fermat’s Last Theorem has to be true. Frey was working on consequences of the Taniyama-Shimura Conjecture, itself three decades old at that point. Yutaka Taniyama and Goro Shimura had found there seemed to be a link between elliptic curves and these “modular forms”, functions with an enormous wealth of symmetry. That is, a group-theory thing.

So in fall of 1993 I was taking an advanced, though still undergraduate, course in (not-high-school) algebra at Rutgers. It’s where we learn group theory, after Intro to Algebra introduced us to group theory. Some exciting news came out. This fellow named Andrew Wiles at Princeton had shown an impressive bunch of things. Most important, that the Taniyama-Shimura Conjecture was true for semistable elliptic curves. This includes the kind of elliptic curve Frey made out of solutions to Fermat’s Last Theorem. So the curves based on solutions to Fermat’s Last Theorem would have to be modular. But Frey had shown any curves based on solutions to Fermat’s Last Theorem couldn’t be modular. The conclusion: there can’t be any solutions to Fermat’s Last Theorem. Our professor did his best to explain the proof to us. Abstract Algebra was the undergraduate course closest to the stuff Wiles was working on. It wasn’t very close. When you’re still trying to work out what it means for something to be an ideal it’s hard to even follow the setup of the problem. The proof itself was inaccessible.

Which is all right. Wiles’s original proof had some flaws. At least this mathematics major shrugged when that news came down and wondered, well, maybe it’ll be fixed someday. Maybe not. I remembered how exciting cold fusion was for about six weeks, too. But this someday didn’t take long. Wiles, with Richard Taylor, revised the proof and published about a year later. So far as I’m aware, nobody has any serious qualms about the proof.

So does knowing Fermat’s Last Theorem get us anything interesting? … And here is a sad anticlimax. It’s neat to know that a^n + b^n = c^n can’t be true unless ‘n’ is 1 or 2, at least for positive whole numbers. But I’m not aware of any neat results that follow from that, or that would follow if it were untrue. There are results that follow from the Taniyama-Shimura Conjecture that are interesting, according to people who know them and don’t seem to be fibbing me. But Fermat’s Last Theorem turns out to be a cute little aside.

Which is not to say studying it was foolish. This easy-to-understand, hard-to-solve problem certainly attracted talented minds to think about mathematics. Mathematicians found interesting stuff in trying to solve it. Some of it might be slight. I learned, studying Pythagorean triplets — ‘a’, ‘b’, and ‘c’ with a^2 + b^2 = c^2 — that I was not the infinitely brilliant mathematician at age fifteen I hoped I might be. Also that if ‘a’, ‘b’, and ‘c’ are relatively prime, you can’t have ‘a’ and ‘b’ both odd and ‘c’ even. You have to have ‘c’ and either ‘a’ or ‘b’ odd, with the other number even. Other mathematicians of more nearly infinite ability found stuff of greater import. Ernst Eduard Kummer in the 19th century developed ideals. These are an important piece of ring theory, the stuff that grows out of group theory. He was busy proving special cases of Fermat’s Last Theorem.
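That parity fact about primitive triplets is pleasant to verify by brute force, if you don’t trust your fifteen-year-old self either. A little Python sketch over small triples:

```python
from math import gcd

# In a primitive Pythagorean triple (no common factor), c is odd
# and exactly one of a and b is even.
for c in range(2, 200):
    for b in range(1, c):
        for a in range(1, b + 1):
            if a*a + b*b == c*c and gcd(gcd(a, b), c) == 1:
                assert c % 2 == 1
                assert (a % 2) + (b % 2) == 1   # one odd, one even

print("no primitive triple with c below 200 breaks the pattern")
```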

Kind viewers have tried to retcon Picard’s statement about Fermat’s Last Theorem. They say Picard was really searching for the proof Fermat had, or believed he had. Something using the mathematical techniques available to the early 17th century. Or that follow closely enough from that. The Taniyama-Shimura Conjecture definitely isn’t it. I don’t buy the retcon, but I’m willing to play along for the sake of not causing trouble. I suspect there’s not a proof of the general case that uses anything Fermat could have recognized, or thought he had. That’s all right. The search for a thing can be useful even if the thing doesn’t exist.

Reading the Comics, December 30, 2017: Looking To 2018 Edition


The last full week of 2017 was also a slow one for mathematically-themed comic strips. You can tell by how many bits of marginally relevant stuff I include. In this case, it also includes a couple that just mention the current or the upcoming year. So you’ve been warned.

Mac King and Bill King’s Magic in a Minute activity for the 24th is a logic puzzle. I’m not sure there’s deep mathematics to it, but it’s some fun to reason out.

John Graziano’s Ripley’s Believe It Or Not for the 24th mentions the bit of recreational group theory that normal people know, the Rubik’s Cube. The group theory comes in from rotations: you can take rows or columns on the cube and turn them, a quarter or a half or a three-quarters turn. Which rows you turn, and which ways you turn them, form a group. So it’s a toy that inspires deep questions. Who wouldn’t like to know in how few moves a cube could be solved? We know there are at least some puzzles that take 18 moves to solve. (You can calculate the number of different cube arrangements there are, and how many arrangements you could reach by shuffling a cube around with 17 moves. There are more possible arrangements than there are ones you can get to in 17 moves; therefore, there must be at least one arrangement that takes 18 moves to solve.) A 2010 computer-assisted proof by Tomas Rokicki, Herbert Kociemba, Morley Davidson, and John Dethridge showed that at most 20 face turns are needed for every possible cube to be solved. And that bound is exact: some positions, most famously the “superflip”, were already known to need a full 20 face turns.
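
If you want to see that counting argument carried out, here’s a sketch of it in Python. One caution: counting sequences naively, with 18 choices every move, overcounts too much to prove anything. So this counts only canonical sequences, ones that never turn the same face twice in a row and that take consecutive turns of opposite faces in one fixed order, since every position reachable at all is reachable by such a sequence:

from math import factorial

total_positions = factorial(8) * 3**7 * factorial(12) * 2**10   # about 4.3e19

faces = range(6)
opposite = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4}

# ending[f] = canonical sequences of the current length whose last
# turned face is f (each face allows 3 different turns).
ending = {f: 3 for f in faces}
reachable = 1 + sum(ending.values())   # lengths 0 and 1
for _ in range(16):                    # lengths 2 through 17
    ending = {f: 3 * sum(ending[g] for g in faces
                         if g != f and not (g == opposite[f] and g > f))
              for f in faces}
    reachable += sum(ending.values())

print(f"{reachable:.3e} canonical sequences of at most 17 moves")   # about 2.2e19
print(f"{total_positions:.3e} cube positions")
assert reachable < total_positions    # so some position needs 18 or more moves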

Griffith: 'Here we are, Zippy, back in the land of our childhood.' Zippy: 'Are we still in the ninth grade?' Griffith: 'Kind of ... although I still can't remember a thing about algebra.' Zippy: 'So many spitballs and paper airplanes ago!!' Griffith: 'Why did I act up so much in school, Zippy? Was it a Freudian thing?' Zippy: 'It was a cry for kelp.' Griffith: 'Don't you mean a cry for help? I don't think kelp was even a word I knew back in the 50s.' Zippy: 'Seaweed is the fifth dimension!'
Bill Griffith’s Zippy the Pinhead for the 26th of December, 2017. This is not as strongly a memoir or autobiographical strip as Griffith will sometimes do, which is a shame. Those are always captivating. I have fun reading Zippy the Pinhead and understand why people wouldn’t. But the memoir strips I recommend even to people who don’t care for the usual fare.

Bill Griffith’s Zippy the Pinhead for the 26th just mentions algebra as a thing that Griffith can’t really remember, even in one of his frequent nostalgic fugues. I don’t know that Zippy’s line about the fifth dimension is meant to refer to geometry. It might refer to the band, but that would be a bit odd. Yes, I know, Zippy the Pinhead always speaks oddly, but in these nostalgic fugue strips he usually provides some narrative counterpoint.

Larry Wright’s Motley Classics for the 26th originally ran in 1986. I mention this because it makes the odd dialogue of getting “a new math program” a touch less odd. I confess I’m not sure what the kid even got. An educational game? Something for numerical computing? The coal-fired, gear-driven version of Mathematica that existed in the 1980s? It’s a mystery, it is.

Ryan Pagelow’s Buni for the 27th is really a calendar joke. It seems to qualify as an anthropomorphic numerals joke, though. It’s not a rare sentiment either.

Jef Mallett’s Frazz for the 29th is similarly a calendar joke. It does play on 2017 being a prime number, a fact that doesn’t really mean much besides reassuring us that it’s not a leap year. I’m not sure just what’s meant by saying it won’t repeat for another 2017 years, at least in any sense that wouldn’t be just as true for (say) 2015 or 2019. But as Frazz points out, we do cling to anything that floats in times like these.

The Summer 2017 Mathematics A To Z: Volume Forms


I’ve been reading Elke Stangl’s Elkemental Force blog for years now. Sometimes I even feel social-media-caught-up enough to comment, or at least to like posts. This is relevant today as I discuss one of Stangl’s suggestions for my letter-V topic.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Volume Forms.

So sometime in pre-algebra, or early in (high school) algebra, you start drawing equations. It’s a simple trick. Lay down a coordinate system, some set of axes for ‘x’ and ‘y’ and maybe ‘z’ or whatever letters are important. Look to the equation, made up of x’s and y’s and maybe z’s and so on. Highlight all the points with coordinates whose values make the equation true. This is the logical basis for saying (eg) that the straight line “is” y = 2x + 1 .

A short while later, you learn about polar coordinates. Instead of using ‘x’ and ‘y’, you have ‘r’ and ‘θ’. ‘r’ is the distance from the center of the universe. ‘θ’ is the angle made with respect to some reference axis. It’s as legitimate a way of describing points in space. Some classrooms even have a part of the blackboard (whiteboard, whatever) with a polar-coordinates “grid” on it. This looks like the lines of a dartboard. And you learn that some shapes are easy to describe in polar coordinates. A circle, centered on the origin, is ‘r = 2’ or something like that. A line through the origin is ‘θ = 1’ or whatever. The line that we’d called y = 2x + 1 before? … That’s … some mess. And now r = 2\theta + 1 … that’s not even a line. That’s some kind of spiral. Two spirals, really. Kind of wild.

And something to bother you a while. y = 2x + 1 is an equation that looks the same as r = 2\theta + 1 . You’ve changed the names of the variables, but not how they relate to each other. But one is a straight line and the other a spiral thing. How can that be?

The answer, ultimately, is that the letters in the equations aren’t these content-neutral labels. They carry meaning. ‘x’ and ‘y’ imply looking at space a particular way. ‘r’ and ‘θ’ imply looking at space a different way. A shape has different representations in different coordinate systems. Fair enough. That seems to settle the question.

But if you get to calculus the question comes back. You can integrate over a region of space that’s defined by Cartesian coordinates, x’s and y’s. Or you can integrate over a region that’s defined by polar coordinates, r’s and θ’s. The first time you try this, you find … well, that any region easy to describe in Cartesian coordinates is painful in polar coordinates. And vice-versa. Way too hard. But if you struggle through all that symbol manipulation, you get … different answers. Eventually the calculus teacher has mercy and explains. If you’re integrating in Cartesian coordinates you need to use “dx dy”. If you’re integrating in polar coordinates you need to use “r dr dθ”. If you’ve never taken calculus, never mind what this means. What is important is that “r dr dθ” looks like three things multiplied together, while “dx dy” is two.
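
You can watch the mismatch, and the fix, numerically. A little Python sketch, estimating the area of the unit disk three ways: Cartesian “dx dy”, polar with the full “r dr dθ”, and polar with the factor of r wrongly left out:

from math import pi

# Cartesian: chop [-1, 1] x [-1, 1] into little squares, sum dx dy over
# the squares whose centers land inside the disk.
n = 1000
h = 2.0 / n
cartesian = sum(h * h
                for i in range(n) for j in range(n)
                if (-1 + (i + 0.5) * h)**2 + (-1 + (j + 0.5) * h)**2 <= 1)

# Polar: chop r in [0, 1] and theta in [0, 2*pi] into a grid.
m = 500
dr, dt = 1.0 / m, 2 * pi / m
with_r    = sum((i + 0.5) * dr * dr * dt for i in range(m) for j in range(m))
without_r = sum(dr * dt for i in range(m) for j in range(m))

print(cartesian, with_r, without_r)
# about 3.14, 3.14, and 6.28: only "r dr dtheta" agrees with "dx dy"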

We get this explained as a “change of variables”. If we want to go from one set of coordinates to a different one, we have to do something fiddly. The extra ‘r’ in “r dr dθ” is what we get going from Cartesian to polar coordinates. And we get formulas to describe what we should do if we need other kinds of coordinates. It’s some work that introduces us to the Jacobian, which looks like the most tedious possible calculation ever at that time. (In Intro to Differential Equations we learn we were wrong, and the Wronskian is the most tedious possible calculation ever. This is also wrong, but it might as well be true.) We typically move on after this and count ourselves lucky it got no worse than that.
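
That extra ‘r’ is exactly what the Jacobian hands you, and if you have sympy available you can make it do the tedious part. A sketch:

import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = r * sp.cos(theta)   # the change of variables from polar to Cartesian
y = r * sp.sin(theta)

J = sp.Matrix([x, y]).jacobian(sp.Matrix([r, theta]))
print(sp.simplify(J.det()))   # prints: r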

None of this is wrong, even from the perspective of more advanced mathematics. It’s not even misleading, which is a refreshing change. But we can look a little deeper, and get something good from doing so.

The deeper perspective looks at “differential forms”. These are about how to encode information about how your coordinate system represents space. They’re tensors. I don’t blame you for wondering if they would be. A differential form uses interactions between some of the directions in a space. A volume form is a differential form that uses all the directions in a space. And satisfies some other rules too. I’m skipping those because some of the symbols involved I don’t even know how to look up, much less make WordPress present.

What’s important is the volume form carries information compactly. As symbols it tells us that this represents a chunk of space that’s constant no matter what the coordinates look like. This makes it possible to do analysis on how functions work. It also tells us what we would need to do to calculate specific kinds of problem. This makes it possible to describe, for example, how something moving in space would change.

The volume form, and the tools to do anything useful with it, demand a lot of supporting work. You can dodge having to explicitly work with tensors. But you’ll need a lot of tensor-related materials, like wedge products and exterior derivatives and stuff like that. If you’ve never taken freshman calculus don’t worry: the people who have taken freshman calculus never heard of those things either. So what makes this worthwhile?

Yes, person who called out “polynomials”. Good instinct. Polynomials are usually a reason for any mathematics thing. This is one of maybe four exceptions. I have to appeal to my other standard answer: “group theory”. These volume forms match up naturally with groups. There’s not only information about how coordinates describe a space to consider. There’s ways to set up coordinates that tell us things.

That isn’t all. These volume forms can give us new invariants. Invariants are what mathematicians say instead of “conservation laws”. They’re properties whose value for a given problem is constant. This can make it easier to work out how one variable depends on another, or to work out specific values of variables.

For example, classical physics problems like how a bunch of planets orbit a sun often have a “symplectic manifold” that matches the problem. This is a description of how the positions and momentums of all the things in the problem relate. The symplectic manifold has a volume form. That volume is going to be constant as time progresses. That is, there’s this way of representing the positions and speeds of all the planets that does not change, no matter what. It’s much like the conservation of energy or the conservation of angular momentum. And this has practical value. It’s the subject that brought my and Elke Stangl’s blogs into contact, years ago. It also has broader applicability.

There’s no way to provide an exact answer for the movement of, like, the sun and nine-ish planets and a couple major moons and all that. So there’s no known way to answer the question of whether the Earth’s orbit is stable. All the planets are always tugging one another, changing their orbits a little. Could this converge in a weird way suddenly, on geologic timescales? Might the planet go flying off out of the solar system? It doesn’t seem like the solar system could be all that unstable, or it would have come apart already. But we can’t rule out that some freaky alignment of Jupiter, Saturn, and Halley’s Comet might tweak the Earth’s orbit just far enough for catastrophe to unfold. Granted there’s nothing we could do about the Earth flying out of the solar system, but it would be nice to know whether we face it, we tell ourselves.

But we can answer this numerically. We can set a computer to simulate the movement of the solar system. But there will always be numerical errors. For example, we can’t use the exact value of π in a numerical computation. 3.141592 (and more digits) might be good enough for projecting stuff out a day, a week, a thousand years. But if we’re looking at millions of years? The difference can add up. We can imagine compensating for not having the value of π exactly right. But what about compensating for something we don’t know precisely, like, where Jupiter will be in 16 million years and two months?

Symplectic forms can help us. The volume form represented by this space has to be conserved. So we can rewrite our simulation so that these forms are conserved, by design. This does not mean we avoid making errors. But it means we avoid making certain kinds of errors. We’re more likely to make what we call “phase” errors. We predict Jupiter’s location in 16 million years and two months. Our simulation puts it thirty degrees farther in its circular orbit than it actually would be. This is a less serious mistake to make than putting Jupiter, say, eight-tenths as far from the Sun as it would really be.
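
The simplest place to watch that trade-off is a harmonic oscillator rather than a solar system, though the moral is the same. A Python sketch: ordinary Euler steps let the supposedly conserved energy grow without bound, while the symplectic variant keeps the energy bounded and pushes its error into the phase:

from math import cos

dt, steps = 0.01, 100_000   # integrate out to time t = 1000

def energy(q, p):
    return 0.5 * (p * p + q * q)   # conserved exactly by the true motion

# Ordinary (explicit) Euler: q and p both stepped from the old values.
q, p = 1.0, 0.0
for _ in range(steps):
    q, p = q + dt * p, p - dt * q
print(energy(q, p))   # grows to about 11,000; it should still be 0.5

# Symplectic Euler: step p first, then step q using the new p.
q, p = 1.0, 0.0
for _ in range(steps):
    p = p - dt * q
    q = q + dt * p
print(energy(q, p), cos(dt * steps), q)
# the energy stays near 0.5, and q is close to the true cos(1000),
# off mostly by a small shift in phase rather than a blow-up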

Volume forms seem, at first, a lot of mechanism for a small problem. And, unfortunately for students, they are. They’re more trouble than they’re worth for changing Cartesian to polar coordinates, or similar problems. You know, ones that the student already has some feel for. They pay off on more abstract problems. Tracking the movement of a dozen interacting things, say, or describing a space that’s very strangely shaped. Those make the effort to learn about forms worthwhile.

The Summer 2017 Mathematics A To Z: Prime Number


Gaurish, host of For the love of Mathematics, gives me another topic for today’s A To Z entry. I think the subject got away from me. But I also like where it got.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Prime Number.

There’s something about ‘5’ that you only notice when you’re a kid first learning about numbers. You know that it’s a prime number because it’s equal to 1 times 5 and nothing else. You also know that once you introduce fractions, it’s equal to all kinds of things. It’s 10 times one-half and it’s 15 times one-third and it’s 2.5 times 2 and many other things. Why, you might ask the teacher, is it a prime number if it’s got a million billion trillion different factors? And when every other whole number has as many factors? If you get to the real numbers it’s even worse yet, although when you’re a kid you probably don’t realize that. If you ask, the teacher probably answers that it’s only the whole numbers that count for saying whether something is prime or not. And, like, 2.5 can’t be considered anything, prime or composite. This satisfies the immediate question. It doesn’t quite get at the underlying one, though. Why do integers have prime numbers while real numbers don’t?

To maybe have a prime number we need a ring. This is a creature of group theory, or what we call “algebra” once we get to college. A ring consists of a set of elements, and a rule for adding them together, and a rule for multiplying them together. And I want this ring to have a multiplicative identity. That’s some number which works like ‘1’: take something, multiply it by that, and you get that something back again. Also, I want this multiplication rule to commute. That is, the order of multiplication doesn’t affect what the result is. (If the order matters then everything gets too complicated to deal with.) Let me say the things in the set are numbers. It turns out (spoiler!) they don’t have to be. But that’s how we start out.

Whether the numbers in a ring are prime or not depends on the multiplication rule. Let’s take a candidate number that I’ll call ‘a’ to make my writing easier. If the only numbers whose product is ‘a’ are the pair of ‘a’ and the multiplicative identity, then ‘a’ is prime. If there’s some other pair of numbers that give you ‘a’, then ‘a’ is not prime.

The integers — the positive and negative whole numbers, including zero — are a ring. And they have prime numbers just like you’d expect, if we figure out some rule about how to deal with the number ‘-1’. There are many other rings. There’s a whole family of rings, in fact, so commonly used that they have shorthand. Mathematicians write them as “Zn”, where ‘n’ is some whole number. They’re the integers, modulo ‘n’. That is, they’re the whole numbers from ‘0’ up to the number ‘n-1’, whatever that is. Addition and multiplication work as they do with normal arithmetic, except that if the result is less than ‘0’ we add ‘n’ to it. If the result is more than ‘n-1’ we subtract ‘n’ from it. We repeat that until the result is something from ‘0’ to ‘n-1’, inclusive.

(We use the letter ‘Z’ because it’s from the German word for numbers, and a lot of foundational work was done by German-speaking mathematicians. Alternatively, we might write this set as “In”, where “I” stands for integers. If that doesn’t satisfy, we might write this set as “Jn”, where “J” stands for integers. This is because it’s only very recently that we’ve come to see “I” and “J” as different letters rather than different ways to write the same letter.)

These modulo arithmetics are legitimate ones, good reliable rings. They make us realize how strange prime numbers are, though. Consider the set Z4, where the only numbers are 0, 1, 2, and 3. 0 times anything is 0. 1 times anything is whatever you started with. 2 times 1 is 2. Obvious. 2 times 2 is … 0. All right. 2 times 3 is 2 again. 3 times 1 is 3. 3 times 2 is 2. 3 times 3 is 1. … So that’s a little weird. The only product that gives us 3 is 3 times 1. So 3’s a prime number here. 2 isn’t a prime number: 2 times 3 is 2. For that matter even 1 is a composite number, an unsettling consequence.

Or then Z5, where the only numbers are 0, 1, 2, 3, and 4. Here, there are no prime numbers. Each number is the product of at least one pair of other numbers. In Z6 we start to have prime numbers again. But Z7? Z8? I recommend these questions to a night when your mind is too busy to let you fall asleep.
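
If you’d like to explore Z7 and Z8 without losing the sleep, a short Python sketch will list, for each element of Zn, every pair whose product it is:

def factor_pairs(n):
    # pairs[a] lists every (x, y) with x times y equal to a, mod n
    pairs = {a: [] for a in range(n)}
    for x in range(n):
        for y in range(x, n):
            pairs[(x * y) % n].append((x, y))
    return pairs

for n in (4, 5, 7, 8):
    print(f"Z{n}:")
    for a, ps in factor_pairs(n).items():
        print(f"  {a}: {ps}")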

Prime numbers depend on context. In the crowded universe of all the rational numbers, or all the real numbers, nothing is prime. In the more austere world of the Gaussian Integers, familiar friends like ‘3’ are prime again, although ‘5’ no longer is. We recognize that as the product of 2 + \imath and 2 - \imath , themselves now prime numbers.

So, given that these things do depend on context, should we care? Or let me put it another way. Suppose we contact a wholly separate culture, one that we can’t have influenced and one not influenced by us. It’s plausible that they should have a mathematics. Would they notice prime numbers as something worth studying? Or would they notice them the way we notice, say, pentagonal numbers, a thing that allows for some pretty patterns and that’s about it?

Well, anything could happen, of course. I’m inclined to think that prime numbers would be noticed, though. They seem to follow naturally from pondering arithmetic. And if one has thought of rings, then prime numbers seem to stand out. The way that Zn behaves changes in important ways if ‘n’ is a prime number. Most notably, if ‘n’ is prime (among the whole numbers), then we can define something that works like division on Zn. If ‘n’ isn’t prime (again), we can’t. This stands out. There are a host of other intriguing results that all seem to depend on whether ‘n’ is a prime number among the whole numbers. It seems hard to believe someone could think of the whole numbers and not notice the prime numbers among them.
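
That division-like thing is easy to see in action. When ‘n’ is prime every nonzero element of Zn has a multiplicative inverse, and Python’s built-in pow will find it (this use of pow needs Python 3.8 or later):

n = 7   # prime, so every nonzero element has an inverse
print([pow(a, -1, n) for a in range(1, n)])   # [1, 4, 5, 2, 3, 6]

n = 6   # not prime: 2, 3, and 4 have no inverse mod 6
for a in range(1, n):
    try:
        print(a, '->', pow(a, -1, n))
    except ValueError:
        print(a, '-> no inverse')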

And they do stand out, as these reliably peculiar things. Many things about them (in the whole numbers) are easy to prove. That there are infinitely many, for example, you can prove to a child. And there are many things we have no idea how to prove. That there are infinitely many primes which are exactly two more than another prime, for example. Any child can understand the question. The one who can prove it will win what fame mathematicians enjoy. If it can be proved.

They turn up in strange, surprising places. Just in the whole numbers we find some patches where there are many prime numbers in a row (Forty percent of the numbers 1 through 10!). We can find deserts; we know of a stretch of 1,113,106 numbers in a row without a single prime among them. We know it’s possible to find prime deserts as vast as we want. Say you want a gap between primes of at least size N. Then look at the numbers (N+1)! + 2, (N+1)! + 3, (N+1)! + 4, and so on, up to (N+1)! + N+1. None of those can be prime numbers. You must have a gap at least the size N. It may be larger; how do we know that (N+1)! + 1 is a prime number?

No telling. Well, we can check. See if any prime number divides into (N+1)! + 1. This takes a long time to do if N is at all big. There’s no formulas we know that will make this easy or quick.
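
The desert recipe itself is easy to watch working, though. A sketch, assuming sympy is around for the primality testing:

from math import factorial
from sympy import isprime

N = 12
base = factorial(N + 1)
# (N+1)! + k is divisible by k for each k from 2 through N+1,
# so none of these can be prime: a guaranteed gap of at least N.
desert = [base + k for k in range(2, N + 2)]
assert not any(isprime(v) for v in desert)

# Whether the gap starts even earlier we can only learn by checking.
print(isprime(base + 1))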

We don’t call it a “prime number” if it’s in a ring that isn’t enough like the numbers. Fair enough. We shift the name to “prime element”. “Element” is a good generic name for a thing whose identity we don’t mean to pin down too closely. I’ve talked about the Gaussian Primes already, in an earlier essay and earlier in this essay. We can make a ring out of the polynomials whose coefficients are all integers. In that, x^2 + 1 is a prime. So is x^2 - 2 . If this hasn’t given you some ideas what other polynomials might be primes, then you have something else to ponder while trying to sleep. Thinking of all the prime polynomials is likely harder than you can do, though.

Prime numbers seem to stand out, obvious and important. Humans have known about prime numbers for as long as we’ve known about multiplication. And yet there is something obscure about them. If there are cultures completely independent of our own, do they have insights which make prime numbers not such occult figures? How different would the world be if we knew all the things we now wonder about primes?

The Summer 2017 Mathematics A To Z: L-function


I’m brought back to elliptic curves today thanks to another request from Gaurish, of the For The Love Of Mathematics blog. Interested in how that’s going to work out? Me too.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

So stop me if you’ve heard this one before. We’re going to make something interesting. You bring to it a complex-valued number. Anything you like. Let me call it ‘s’ for the sake of convenience. I know, it’s weird not to call it ‘z’, but that’s how this field of mathematics developed. I’m going to make a series built on this. A series is the sum of all the terms in a sequence. I know, it seems weird for a ‘series’ to be a single number, but that’s how that field of mathematics developed. The underlying sequence? I’ll make it in three steps. First, I start with all the counting numbers: 1, 2, 3, 4, 5, and so on. Second, I take each one of those terms and raise them to the power of your ‘s’. Third, I take the reciprocal of each of them. That’s the sequence. And when we add —

Yes, that’s right, it’s the Riemann-Zeta Function. The one behind the Riemann Hypothesis. That’s the mathematical conjecture that everybody loves to cite as the biggest unsolved problem in mathematics now that we know someone did something about Fermat’s Last Theorem. The conjecture is about what the zeroes of this function are. What values of ‘s’ make this sum equal to zero? Some boring ones. Zero, negative two, negative four, negative six, and so on. It has a lot of non-boring zeroes. All the ones we know of have an ‘s’ with a real part of ½. So far we know of at least 36 billion values of ‘s’ that make this add up to zero. They’re all ½ plus some imaginary number. We conjecture that this isn’t coincidence and all the non-boring zeroes are like that. We might be wrong. But it’s the way I would bet.

Anyone who’d be reading this far into a pop mathematics blog knows something of why the Riemann Hypothesis is interesting. It carries implications about prime numbers. It tells us things about a host of other theorems that are nice to have. Also they know it’s hard to prove. Really, really hard.

Ancient mathematical lore tells us there are a couple ways to solve a really, really hard problem. One is to narrow its focus. Try to find as simple a case of it as you can solve. Maybe a second simple case you can solve. Maybe a third. This could show you how, roughly, to solve the general problem. Not always. Individual cases of Fermat’s Last Theorem are easy enough to solve. You can show that a^3 + b^3 = c^3 doesn’t have any non-boring answers where a, b, and c are all positive whole numbers. Same with a^5 + b^5 = c^5 , though it takes longer. That doesn’t help you with the general a^n + b^n = c^n .

There’s another approach. It sounds like the sort of crazy thing Captain Kirk would get away with. It’s to generalize, to make a bigger, even more abstract problem. Sometimes that makes it easier.

For the Riemann-Zeta Function there’s one compelling generalization. It fits into that sequence I described making. After taking the reciprocals of integers-raised-to-the-s-power, multiply each by some number. Which number? Well, that depends on what you like. It could be the same number every time, if you like. That’s boring, though. That’s just the Riemann-Zeta Function times your number. It’s more interesting if what number you multiply by depends on which integer you started with. (Do not let it depend on ‘s’; that’s more complicated than you want.) When you do that? Then you’ve created an L-Function.

Specifically, you’ve created a Dirichlet L-Function. Dirichlet here is Peter Gustav Lejeune Dirichlet, a 19th century German mathematician who got his name on like everything. He did major work on partial differential equations, on Fourier series, on topology, in algebra, and on number theory, which is what we’d call these L-functions. There are other L-Functions, with identifying names such as Artin and Hecke and Euler, which get more directly into group theory. They look much like the Dirichlet L-Function. In building the sequence I described in the top paragraph, they do something else for the second step.

The L-Function is going to look like this:

L(s) = \sum_{n \ge 1}^{\infty} a_n \cdot \frac{1}{n^s}

The sigma there means to evaluate the thing that comes after it for each value of ‘n’ starting at 1 and increasing, by 1, up to … well, something infinitely large. The a_n are the numbers you’ve picked. They’re values that depend on the index ‘n’, but don’t depend on the power ‘s’. This may look funny but it’s a standard way of writing the terms in a sequence.
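
Partial sums of these are easy to play with. A Python sketch: with every a_n equal to 1 this is the Riemann-Zeta function, which at s = 2 creeps toward π²/6. Pick coefficients that depend on ‘n’ and you have a genuine Dirichlet L-function, like the one whose value at s = 1 is the familiar Leibniz series for π/4:

from math import pi

def L_partial(a, s, terms):
    # a(n) gives the coefficient for index n; s is the power
    return sum(a(n) / n**s for n in range(1, terms + 1))

print(L_partial(lambda n: 1, 2, 100_000), pi**2 / 6)   # both about 1.6449

# Coefficients depending on n mod 4: the Dirichlet L-function for chi_4.
chi = lambda n: 0 if n % 2 == 0 else (1 if n % 4 == 1 else -1)
print(L_partial(chi, 1, 100_000), pi / 4)   # 1 - 1/3 + 1/5 - 1/7 + ...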

An L-Function has to meet some particular criteria that I’m not going to worry about here. Look them up before you get too far into your research. These criteria give us ways to classify different L-Functions, though. We can describe them by degree, much as we describe polynomials. We can describe them by signature, part of those criteria I’m not getting into. We can describe them by properties of the extra numbers, the ones in that fourth step that you multiply the reciprocals by. And so on. LMFDB, an encyclopedia of L-Functions, lists eight or nine properties usable for a taxonomy of these things. (The ambiguity is in what things you consider to depend on what other things.)

What makes this interesting? For one, everything that makes the Riemann Hypothesis interesting. The Riemann-Zeta Function is a slice of the L-Functions. But there’s more. They merge into elliptic curves. Every elliptic curve corresponds to some L-Function. We can use the elliptic curve or the L-Function to prove what we wish to show. Elliptic curves are subject to group theory; so, we can bring group theory into these series.

And then it gets deeper. It always does. Go back to that formula for the L-Function like I put in mathematical symbols. I’m going to define a new function. It’s going to look a lot like a polynomial. Well, that L(s) already looked a lot like a polynomial, but this is going to look even more like one.

Pick a number τ. It’s complex-valued. Any number. All that I care is that its imaginary part be positive. In the trade we say that’s “in the upper half-plane”, because we often draw complex-valued numbers as points on a plane. The real part serves as the horizontal and the imaginary part serves as the vertical axis.

Now go back to your L-Function. Remember those a_n numbers you picked? Good. I’m going to define a new function based on them. It looks like this:

f(\tau) = \sum_{n \ge 1}^{\infty} a_n \left(  e^{2 \pi \imath \tau}\right)^n

You see what I mean about looking like a polynomial? If τ is a complex-valued number, then e^{2 \pi \imath \tau} is just another complex-valued number. If we gave that a new name like ‘z’, this function would look like the sum of constants times z raised to positive powers. We’d never know it was any kind of weird polynomial.
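
Truncating the sum makes this concrete enough to compute. A sketch, with placeholder coefficients (every a_n set to 1, which no actual eigenform would be, but it shows the mechanics):

import cmath

def f(tau, a, terms=200):
    # With Im(tau) > 0 we get |q| < 1, so the truncated sum converges fast.
    q = cmath.exp(2j * cmath.pi * tau)
    return sum(a(n) * q**n for n in range(1, terms + 1))

# Placeholder coefficients: every a_n is 1. A real eigenform has its own list.
print(f(0.25 + 1.0j, lambda n: 1))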

Anyway. This new function ‘f(τ)’ has some properties. It might be something called a weight-2 Hecke eigenform, a thing I am not going to explain without charging someone by the hour. But see the logic here: every elliptic curve matches with some kind of L-Function. Each L-Function matches with some ‘f(τ)’ kind of function. Those functions might or might not be these weight-2 Hecke eigenforms.

So here’s the thing. There was a big hypothesis formed in the 1950s that every rational elliptic curve matches to one of these ‘f(τ)’ functions that’s one of these eigenforms. It’s true. It took decades to prove. You may have heard of it, as the Taniyama-Shimura Conjecture. In the 1990s Wiles and Taylor proved this was true for a lot of elliptic curves, which is what proved Fermat’s Last Theorem after all that time. The rest of it was proved around 2000.

As I said, sometimes you have to make your problem bigger and harder to get something interesting out of it.

I mentioned this above. LMFDB is a fascinating site worth looking at. It’s got a lot of L-Function and Riemann-Zeta function-related materials.

The Summer 2017 Mathematics A To Z: Functor


Gaurish gives me another topic for today. I’m now no longer sure whether Gaurish hopes me to become a topology blogger or a category theory blogger. I have the last laugh, though. I’ve wanted to get better-versed in both fields and there’s nothing like explaining something to learn about it.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Functor.

So, category theory. It’s a foundational field. It talks about stuff that’s terribly abstract. This means it’s powerful, but it can be hard to think of interesting examples. I’ll try, though.

It starts with categories. These have three parts. The first part is a set of things. (There always is.) The second part is a collection of matches between pairs of things in the set. They’re called morphisms. The third part is a rule that lets us combine two morphisms into a new, third one. That is. Suppose ‘a’, ‘b’, and ‘c’ are things in the set. Then there’s a morphism that matches a \rightarrow b , and a morphism that matches b \rightarrow c . And we can combine them into another morphism that matches a \rightarrow c . So we have a set of things, and a set of things we can do with those things. And the set of things we can do is itself a group.

This describes a lot of stuff. Group theory fits seamlessly into this description. Most of what we do with numbers is a kind of group theory. Vector spaces do too. Most of what we do with analysis has vector spaces underneath it. Topology does too. Most of what we do with geometry is an expression of topology. So you see why category theory is so foundational.

Functors enter our picture when we have two categories. Or more. They’re about the ways we can match up categories. But let’s start with two categories. One of them I’ll name ‘C’, and the other, ‘D’. A functor has to match everything that’s in the set of ‘C’ to something that’s in the set of ‘D’.

And it does more. It has to match every morphism between things in ‘C’ to some other morphism, between corresponding things in ‘D’. It’s got to do it in a way that satisfies that combining, too. That is, suppose that ‘f’ and ‘g’ are morphisms for ‘C’. And that ‘f’ and ‘g’ combine to make ‘h’. Then, the functor has to match ‘f’ and ‘g’ and ‘h’ to some morphisms for ‘D’. The combination of whatever ‘f’ matches to and whatever ‘g’ matches to has to be whatever ‘h’ matches to.

This might sound to you like a homomorphism. If it does, I admire your memory or mathematical prowess. Functors are about matching one thing to another in a way that preserves structure. Structure is the way that sets of things can interact. We naturally look for stuff made up of different things that have the same structure. Yes, functors are themselves a category. That is, you can make a brand-new category whose set of things are the functors between two other categories. This is a good spot to pause while the dizziness passes.

There are two kingdoms of functor. You tell them apart by what they do with the morphisms. Here again I’m going to need my categories ‘C’ and ‘D’. I need a morphism for ‘C’. I’ll call that ‘f’. ‘f’ has to match something in the set of ‘C’ to something in the set of ‘C’. Let me call the first something ‘a’, and the second something ‘b’. That’s all right so far? Thank you.

Let me call my functor ‘F’. ‘F’ matches all the elements in ‘C’ to elements in ‘D’. And it matches all the morphisms on the elements in ‘C’ to morphisms on the elements in ‘D’. So if I write ‘F(a)’, what I mean is look at the element ‘a’ in the set for ‘C’. Then look at what element in the set for ‘D’ the functor matches with ‘a’. If I write ‘F(b)’, what I mean is look at the element ‘b’ in the set for ‘C’. Then pick out whatever element in the set for ‘D’ gets matched to ‘b’. If I write ‘F(f)’, what I mean is to look at the morphism ‘f’ between elements in ‘C’. Then pick out whatever morphism between elements in ‘D’ that that gets matched with.

Here’s where I’m going with this. Suppose my morphism ‘f’ matches ‘a’ to ‘b’. Does the functor of that morphism, ‘F(f)’, match ‘F(a)’ to ‘F(b)’? Of course, you say, what else could it do? And the answer is: why couldn’t it match ‘F(b)’ to ‘F(a)’?

No, it doesn’t break everything. Not if you’re consistent about swapping the order of the matchings. The normal everyday order, the one you’d thought couldn’t have an alternative, is a “covariant functor”. The crosswise order, this second thought, is a “contravariant functor”. Covariant and contravariant are distinctions that weave through much of mathematics. They particularly appear through tensors and the geometry they imply. In that introduction they tend to be difficult, even mean, creations, since in regular old Euclidean space they don’t mean anything different. They’re different for non-Euclidean spaces, and that’s important and valuable. The covariant versus contravariant difference is easier to grasp here.
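
A rough sketch of the two kingdoms in Python, using functions as the morphisms. The covariant one is the familiar map-over-a-list; the contravariant one is precomposition, the functions-into-something construction, which is what flips the direction:

def covariant(f):
    # the list functor: f from a to b becomes list-of-a to list-of-b
    return lambda xs: [f(x) for x in xs]

def contravariant(f):
    # functions-into-str: f from a to b becomes (b -> str) to (a -> str)
    return lambda g: lambda x: g(f(x))

double = lambda n: 2 * n                    # a morphism int -> int
print(covariant(double)([1, 2, 3]))         # [2, 4, 6]: same direction as f

describe = lambda n: f"value {n}"           # a morphism int -> str
print(contravariant(double)(describe)(5))   # 'value 10': direction flipped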

Functors work their way into computer science. The avenue here is in functional programming. That’s a method of programming in which instead of the normal long list of commands, you write a single line of code that holds like fourteen “->” symbols that makes the computer stop and catch fire when it encounters a bug. The advantage is that when you have the code debugged it’s quite speedy and memory-efficient. The disadvantage is if you have to alter the function later, it’s easiest to throw everything out and start from scratch, beginning from vacuum-tube-based computing machines. But it works well while it does. You just have to get the hang of it.

The Summer 2017 Mathematics A To Z: Elliptic Curves


Gaurish, of the For The Love Of Mathematics blog, gives me another subject today. It’s one that isn’t about ellipses. Sad to say it’s also not about elliptic integrals. This is sad to me because I have a cute little anecdote about a time I accidentally gave my class an impossible problem. I did apologize. No, nobody solved it anyway.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Elliptic Curves.

Elliptic Curves start, of course, with polynomials. Particularly, they’re polynomials with two variables. We call them ‘x’ and ‘y’ because we have no reason to be difficult. They’re of at most third degree. That is, we can have terms like ‘x’ and ‘y^2’ and ‘x^2 y’ and ‘y^3’. Something with higher powers, like ‘x^4’ or ‘x^2 y^2’ — a fourth power, all together — is right out. Doesn’t matter. Start from this and we can do some slick changes of variables so that we can rewrite it to look like this:

y^2 = x^3 + Ax + B

Here, ‘A’ and ‘B’ are some numbers that don’t change for this particular curve. Also, we need it to be true that 4A^3 + 27B^2 doesn’t equal zero. It avoids problems. What we’ll be looking at are coordinates, values of ‘x’ and ‘y’ together which make this equation true. That is, it’s points on the curve. If you pick some real numbers ‘A’ and ‘B’ and draw all the values of ‘x’ and ‘y’ that make the equation true you get … well, there’s different shapes. They all look like those microscope photos of a water drop emerging and falling from a tap, only rotated clockwise ninety degrees.

So. Pick any of these curves that you like. Pick a point. I’m going to name your point ‘P’. Now pick a point once more. I’m going to name that point ‘Q’. Now draw a line from P through Q. Keep drawing it. It’ll cross the original elliptic curve again. And that point is … not actually special. What is special is the reflection of that point. That is, the same x-coordinate, but flip the plus or minus sign for the y-coordinate. (WARNING! Do not call it “the reflection” at your thesis defense! Call it the “conjugate” point. It means “reflection”.) Your elliptic curve will be symmetric around the x-axis. If, say, the point with x-coordinate 4 and y-coordinate 3 is on the curve, so is the point with x-coordinate 4 and y-coordinate -3. So that reflected point is … something special.

Kind of a curved-out less-than-sign shape.
y^2 = x^3 - 1 . The water drop bulges out from the surface.

This lets us do something wonderful. We can think of this reflected point as the sum of your ‘P’ and ‘Q’. You can ‘add’ any two points on the curve and get a third point. This means we can do something that looks like addition for points on the elliptic curve. And this means the points on this curve are a group, and we can bring all our group-theory knowledge to studying them. It’s a commutative group, too; ‘P’ added to ‘Q’ leads to the same point as ‘Q’ added to ‘P’.

Let me head off some clever thoughts that make fair objections. What if ‘P’ and ‘Q’ are already reflections, so the line between them is vertical? That never touches the original elliptic curve again, right? Yeah, fair complaint. We patch this by saying that there’s one more point, ‘O’, that’s off “at infinity”. Where is infinity? It’s wherever your vertical lines end. Shut up, this can too be made rigorous. In any case it’s a common hack for this sort of problem. When we add that, everything’s nice. The ‘O’ serves the role in this group that zero serves in arithmetic: the sum of point ‘O’ and any point ‘P’ is going to be ‘P’ again.

Second clever thought to head off: what if ‘P’ and ‘Q’ are the same point? There’s infinitely many lines that go through a single point, so how do we pick one to find an intersection with the elliptic curve? Fair question. In that case we pick the tangent line to the elliptic curve that touches ‘P’, and carry on as before.
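
All this addition comes to surprisingly little code. A Python sketch over the rational numbers, with None standing in for the point ‘O’ at infinity; the particular curve and starting point here are my own picks, just for illustration:

from fractions import Fraction

A, B = Fraction(-1), Fraction(1)   # the curve y^2 = x^3 - x + 1
# B only pins down which curve we're on; the slopes below need just A.

def add(P, Q):
    if P is None: return Q          # O is the identity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 == -y2:
        return None                 # vertical line: the sum is O
    if P == Q:
        s = (3 * x1 * x1 + A) / (2 * y1)   # slope of the tangent line
    else:
        s = (y2 - y1) / (x2 - x1)          # slope of the line through P, Q
    x3 = s * s - x1 - x2
    return (x3, -(y1 + s * (x3 - x1)))     # third intersection, reflected

P = (Fraction(1), Fraction(1))   # on the curve: 1^2 = 1^3 - 1 + 1
print(add(P, P))                 # (-1, 1), as Fractions
print(add(add(P, P), P))         # (0, -1): still rational, still on the curve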

The curved-out less-than-sign shape has a noticeable c-shaped bulge on the end.
y^2 = x^3 + 1 . The water drop is close to breaking off, but surface tension has not yet pinched off the falling form.

There’s more. What kind of number is ‘x’? Or ‘y’? I’ll bet that you figured they were real numbers. You know, ordinary stuff. I didn’t say what they were, so left it to our instinct, and that usually runs toward real numbers. Those are what I meant, yes. But we didn’t have to. ‘x’ and ‘y’ could be in other sets of numbers too. They could be complex-valued numbers. They could be just the rational numbers. They could even be part of a finite collection of possible numbers. As long as the equation y^2 = x^3 + Ax + B means something (and some technical points are met) we can carry on. The elliptic curves, and the points we “add” on them, might not look like the curves we started with anymore. They might not look like anything recognizable anymore. But the logic continues to hold. We still create these groups out of the points on these lines intersecting a curve.

By now you probably admit this is neat stuff. You may also think: so what? We can take this thing you never thought about, draw points and lines on it, and make it look very loosely kind of like just adding numbers together. Why is this interesting? No appreciation just for the beauty of the structure involved? Well, we live in a fallen world.

It comes back to number theory. The modern study of Diophantine equations grows out of studying elliptic curves on the rational numbers. It turns out the group of points you get for that looks like a finite collection of points with some collection of integers hanging on. How long that collection of numbers is is called the ‘rank’, and there are deep mysteries at work. We know there are elliptic equations that have a rank as big as 28. Nobody knows if the rank can be arbitrarily high, though. And I believe we don’t even know if there are any curves with rank of, like, 27, or 25.

Yeah, I’m still sensing skepticism out there. Fine. We’ll go back to the only part of number theory everybody agrees is useful. Encryption. We have roughly the same goals for every encryption scheme. We want it to be easy to encode a message. We want it to be easy to decode the message if you have the key. We want it to be hard to decode the message if you don’t have the key.

The curved-out sign has a bulge with convex loops to it, so that it resembles the cut of a jigsaw puzzle piece.
y^2 = 3x^3 - 3x + 3 . The water drop is almost large enough that its weight overcomes the surface tension holding it to the main body of water.

Take something inside one of these elliptic curve groups. Especially one that’s got a finite field. Let me call your thing ‘g’. It’s really easy for you, knowing what ‘g’ is and what your field is, to raise it to a power. You can pretty well impress me by sharing the value of ‘g’ raised to some whole number ‘m’. Call that ‘h’.

Why am I impressed? Because if all I know are ‘g’ and ‘h’, I have a heck of a time figuring out what ‘m’ is. Especially on these finite field groups there’s no obvious connection between how big ‘h’ is and how big ‘g’ is and how big ‘m’ is. Start with a big enough finite field and you can encode messages in ways that are crazy hard to crack.
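
Here’s a toy version. It uses the multiplicative group modulo a prime rather than an elliptic curve (the same trapdoor idea, with less machinery), and the prime, generator, and secret exponent are just sample values:

p = 2_147_483_647   # a prime; the group is the nonzero numbers mod p
g = 16807           # a generator of that group
m = 1_234_567       # the secret exponent
h = pow(g, m, p)    # computing h from g and m: one fast call

# Recovering m from g and h: nothing obviously better than trying
# every exponent in turn.
acc, found = 1, None
for k in range(1, p):
    acc = acc * g % p
    if acc == h:
        found = k
        break
print(found == m)   # True, but only after about a million slow steps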

We trust. At least, if there are any ways to break the code quickly, nobody’s shared them. And there’s one of those enormous-money-prize awards waiting for someone who does know how to break such a code quickly. (I don’t know which. I’m going by what I expect from people.)

And then there’s fame. These were used to prove Fermat’s Last Theorem. Suppose there are some non-boring numbers ‘a’, ‘b’, and ‘c’, so that for some prime number ‘p’ that’s five or larger, it’s true that a^p + b^p = c^p . (We can separately prove Fermat’s Last Theorem for a power that isn’t a prime number, or a power that’s 3 or 4.) Then this implies properties about the elliptic curve:

y^2 = x(x - a^p)(x + b^p)

This is a convenient way of writing things since it showcases the a^p and b^p. It’s equal to:

y^2 = x^3 + \left(b^p - a^p\right)x^2 - a^p b^p x

(I was so tempted to leave an arithmetic error in there so I could make sure someone commented.)
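
If you don’t trust the algebra, and the parenthetical above invites suspicion, sympy will check the expansion. A quick sketch, assuming sympy:

import sympy as sp

x, ap, bp = sp.symbols('x a_p b_p')
lhs = x * (x - ap) * (x + bp)
rhs = x**3 + (bp - ap) * x**2 - ap * bp * x
print(sp.expand(lhs - rhs) == 0)   # True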

A little ball off to the side of a curved-out less-than-sign shape.
y^2 = 3x^3 - 4x . The water drop has broken off, and the remaining surface rebounds to its normal meniscus.

If there’s a solution to Fermat’s Last Theorem, then this elliptic equation can’t be modular. I don’t have enough words to explain what ‘modular’ means here. Andrew Wiles and Richard Taylor showed that the equation was modular. So there is no solution to Fermat’s Last Theorem except the boring ones. (Like, where ‘b’ is zero and ‘a’ and ‘c’ equal each other.) And it all comes from looking close at these neat curves, none of which looks like an ellipse.

They’re named elliptic curves because we first noticed them when Carl Jacobi — yes, that Carl Jacobi — was studying the length of arcs of an ellipse. That’s interesting enough on its own. But it is hard. Maybe I could have fit in that anecdote about giving my class an impossible problem after all.

The Summer 2017 Mathematics A To Z: Cohomology


Today’s A To Z topic is another request from Gaurish, of the For The Love Of Mathematics blog. Also part of what looks like a quest to make me become a topology blogger, at least for a little while. It’s going to be exciting and I hope not to faceplant as I try this.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Also, a note about Thomas K Dye, who’s drawn the banner art for this and for the Why Stuff Can Orbit series: the publisher for collections of his comic strip is having a sale this weekend.

Cohomology.

The word looks intimidating, and faintly of technobabble. It’s less cryptic than it appears. We see parts of it in non-mathematical contexts. In biology class we would see “homology”, the sharing of structure in body parts that look superficially very different. We also see it in art class. The instructor points out that a dog’s leg looks like that because they stand on their toes. What looks like a backward-facing knee is just the ankle, and if we stand on our toes we see that in ourselves. We might see it in chemistry, as many interesting organic compounds differ only in how long or how numerous the boring parts are. The stuff that does work is the same, or close to the same. And this is a hint to what a mathematician means by cohomology. It’s something in shapes. It’s particularly something in how different things might have similar shapes. Yes, I am using a homology in language here.

I often talk casually about the “shape” of mathematical things. Or their “structures”. This sounds weird and abstract to start and never really gets better. We can get some footing if we think about drawing the thing we’re talking about. Could we represent the thing we’re working on as a figure? Often we can. Maybe we can draw a polygon, with the vertices of the shape matching the pieces of our mathematical thing. We get the structure of our thing from thinking about what we can do to that polygon without changing the way it looks. Or without changing the way we can do whatever our original mathematical thing does.

This leads us to homologies. We get them by looking for stuff that’s true even if we moosh up the original thing. The classic homology comes from polyhedrons, three-dimensional shapes. There’s a relationship between the number of vertices, the number of edges, and the number of faces of a polyhedron. It doesn’t change even if you stretch the shape out longer, or squish it down, for that matter slice off a corner. It only changes if you punch a new hole through the middle of it. Or if you plug one up. That would be unsporting. A homology describes something about the structure of a mathematical thing. It might even be literal. Topology, the study of what we know about shapes without bringing distance into it, has the number of holes that go through a thing as a homology. This gets feeling like a comfortable, familiar idea now.

But that isn’t a cohomology. That ‘co’ prefix looks dangerous. At least it looks significant. When the ‘co’ prefix has turned up before it’s meant something is shaped by how it refers to something else. Coordinates aren’t just number lines; they’re collections of number lines that we can use to say where things are. If ‘a’ is a factor of the number ‘x’, its cofactor is the number you multiply ‘a’ by in order to get ‘x’. (For real numbers that’s just x divided by a. For other stuff it might be weirder.) A codomain is a set that a function maps a domain into (and must contain the range, at least). Cosets aren’t just sets; they’re ways we can divide (for example) the counting numbers into odds and evens.

So what’s the ‘co’ part for a homology? I’m sad to say we start losing that comfortable feeling now. We have to look at something we’re used to thinking of as a process as though it were a thing. These things are morphisms: what are the ways we can match one mathematical structure to another? Sometimes the morphisms are easy. We can match the even numbers up with all the integers: match 0 with 0, match 2 with 1, match -6 with -3, and so on. Addition on the even numbers matches with addition on the integers: 4 plus 6 is 10; 2 plus 3 is 5. For that matter, we can match the integers with the multiples of three: match 1 with 3, match -1 with -3, match 5 with 15. 1 plus -2 is -1; 3 plus -6 is -9.

What happens if we look at the sets of matchings that we can do as if that were a set of things? That is, not some human concept like ‘2’ but rather ‘match a number with one-half its value’? And ‘match a number with three times its value’? These can be the population of a new set of things.

And these things can interact. Suppose we “match a number with one-half its value” and then immediately “match a number with three times its value”. Can we do that? … Sure, easily. 4 matches to 2 which goes on to 6. 8 matches to 4 which goes on to 12. Can we write that as a single matching? Again, sure. 4 matches to 6. 8 matches to 12. -2 matches to -3. We can write this as “match a number with three-halves its value”. We’ve taken “match a number with one-half its value” and combined it with “match a number with three times its value”. And it’s given us the new “match a number with three-halves its value”. These things we can do to the integers are themselves things that can interact.
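
This composing is almost nothing as code, which might make the idea feel less airy. A quick sketch:

halve  = lambda x: x / 2     # 'match a number with one-half its value'
triple = lambda x: 3 * x     # 'match a number with three times its value'

def combine(first, second):
    # do one matching, then immediately do the other
    return lambda x: second(first(x))

three_halves = combine(halve, triple)
print([three_halves(n) for n in (4, 8, -2)])   # [6.0, 12.0, -3.0]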

This is a good moment to pause and let the dizziness pass.

It isn’t just you. There is something weird thinking of “doing stuff to a set” as a thing. And we have to get a touch more abstract than even this. We should be all right, but please do not try to use this to defend your thesis in category theory. Just use it to not look forlorn when talking to your friend who’s defending her thesis in category theory.

Now, we can take this collection of all the ways we can relate one set of things to another. And we can combine this with an operation that works kind of like addition. Some way to “add” one way-to-match-things to another and get a way-to-match-things. There’s also something that works kind of like multiplication. It’s a different way to combine these ways-to-match-things. This forms a ring, which is a kind of structure that mathematicians learn about in Introduction to Not That Kind Of Algebra. There are many constructs that are rings. The integers, for example, are also a ring, with addition and multiplication the same old processes we’ve always used.

And just as we can sort the integers into odds and evens — or into other groupings, like “multiples of three” and “one plus a multiple of three” and “two plus a multiple of three” — so we can sort the ways-to-match-things into new collections. And this is our cohomology. It’s the ways we can sort and classify the different ways to manipulate whatever we started on.

I apologize that this sounds so abstract as to barely exist. I admit we’re far from a nice solid example such as “six”. But the abstractness is what gives cohomologies explanatory power. We depend very little on the specifics of what we might talk about. And therefore what we can prove is true for very many things. It takes a while to get there, is all.

The End 2016 Mathematics A To Z Roundup


As is my tradition for the end of these roundups (see Summer 2015 and then Leap Day 2016) I want to just put up a page listing the whole set of articles. It’s a chance for people who missed a piece to easily see what they missed. And it lets me recover that little bit extra from the experience. Run over the past two months were:

The End 2016 Mathematics A To Z: Quotient Groups


I’ve got another request today, from the ever-interested and group-theory-minded gaurish. It’s another inspirational one.

Quotient Groups.

We all know about even and odd numbers. We don’t have to think about them. That’s why it’s worth discussing them some.

We do know what they are, though. The integers — whole numbers, positive and negative — we can split into two sets. One of them is the even numbers, two and four and eight and twelve. Zero, negative two, negative six, negative 2,038. The other is the odd numbers, one and three and nine. Negative five, negative nine, negative one.

What do we know about numbers, if all we look at is whether numbers are even or odd? Well, we know every integer is either an odd or an even number. It’s not both; it’s not neither.

We know that if we start with an even number, its negative is also an even number. If we start with an odd number, its negative is also an odd number.

We know that if we start with a number, even or odd, and add to it its negative then we get an even number. A specific number, too: zero. And that zero is interesting because any number plus zero is that same original number.

We know we can add odds or evens together. An even number plus an even number will be an even number. An odd number plus an odd number is an even number. An odd number plus an even number is an odd number. And subtraction is the same as addition, by these lights. One number minus another number is just one number plus negative the other number. So even minus even is even. Odd minus odd is even. Odd minus even is odd.

We can pluck out some of the even and odd numbers as representative of these sets. We don’t want to deal with big numbers, nor do we want to deal with negative numbers if we don’t have to. So take ‘0’ as representative of the even numbers. ‘1’ as representative of the odd numbers. 0 + 0 is 0. 0 + 1 is 1. 1 + 0 is 1. The addition is the same thing we would do with the original set of integers. 1 + 1 would be 2, which is one of the even numbers, which we represent with 0. So 1 + 1 is 0. If we’ve picked out just these two numbers each is the minus of itself: 0 – 0 is 0 + 0. 1 – 1 is 1 + 1. All that gives us 0, like we should expect.

Two paragraphs back I said something that’s obvious, but deserves attention anyway. An even plus an even is an even number. You can’t get an odd number out of it. An odd plus an odd is an even number. You can’t get an odd number out of it. There’s something fundamentally different between the even and the odd numbers.

And now, kindly reader, you’ve learned quotient groups.

OK, I’ll do some backfilling. It starts with groups. A group is the most skeletal cartoon of arithmetic. It’s a set of things and some operation that works like addition. The thing-like-addition has to work on pairs of things in your set, and it has to give something else in the set. There has to be a zero, something you can add to anything without changing it. We call that the identity, or the additive identity, because it doesn’t change something else’s identity. It makes sense if you don’t stare at it too hard. Everything has an additive inverse. That is everything has a “minus”, that you can add to it to get zero.

With odd and even numbers the set of things is the integers. The thing-like-addition is, well, addition. I said groups were based on how normal arithmetic works, right?

And then you need a subgroup. A subgroup is … well, it’s a subset of the original group that’s itself a group. It has to use the same addition the original group does. The even numbers are such a subgroup of the integers. Formally they make something called a “normal subgroup”, which is a little too much for me to explain right now. If your addition works like it does for normal numbers, that is, if “a + b” is the same thing as “b + a”, then all your subgroups are normal subgroups. Yes, it can happen that they’re not. If the addition is something like rotations in three-dimensional space, or swapping the order of things, then the order you “add” things in matters.

We make a quotient group by … OK, this isn’t going to sound like anything. It’s a group, though, like the name says. It uses the same addition that the original group does. Its set, though, that’s itself made up of sets. One of the sets is the normal subgroup. That’s the easy part.

Then there’s something called cosets. You make a coset by picking something from the original group and adding it to everything in the subgroup. If the thing you pick was from the original subgroup that’s just going to be the subgroup again. If you pick something outside the original subgroup then you’ll get some other set.

Starting from the subgroup of even numbers there’s not a lot to do. You can get the even numbers and you get the odd numbers. Doesn’t seem like much. We can do otherwise though. Suppose we start from the subgroup of numbers divisible by 4. That’s 0, 4, 8, 12, -4, -8, -12, and so on. Now there are four cosets we can make from that. One is the original set of numbers itself. Or we have 1 plus that set: 1, 5, 9, 13, -3, -7, -11, and so on. Or we have 2 plus that set: 2, 6, 10, 14, -2, -6, -10, and so on. Or we have 3 plus that set: 3, 7, 11, 15, -1, -5, -9, and so on. None of these others are subgroups, which is why we don’t call them subgroups. We call them cosets.

These collections of cosets, though, they’re the pieces of a new group. The quotient group. One of them, the normal subgroup you started with, is the identity, the thing that’s as good as zero. And you can “add” the cosets together, in just the same way you can add “odd plus odd” or “odd plus even” or “even plus even”.

For example. Let me start with the numbers divisible by 4. I’ll have a much better time if I give this a name. I’ll pick ‘Q’. This is because, you know, quarters, quartet, quadrilateral, this all sounds like four-y stuff. The integers — the integers have a couple of names. ‘I’, ‘J’, and ‘Z’ are the most common ones. We get ‘Z’ from German; a lot of important group theory was done by German-speaking mathematicians. I’m used to it so I’ll stick with that. The quotient group ‘Z / Q’, read “Z modulo Q”, has (it happens) four cosets. One of them is Q. One of them is “1 + Q”, that set 1, 5, 9, and so on. Another of them is “2 + Q”, that set 2, 6, 10, and so on. And the last is “3 + Q”, that set 3, 7, 11, and so on.

And you can add them together. 1 + Q plus 1 + Q turns out to be 2 + Q. Try it out, you’ll see. 1 + Q plus 2 + Q turns out to be 3 + Q. 2 + Q plus 2 + Q is Q again.
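If you’d rather have a machine do the trying-out, here’s a short Python sketch. It isn’t anything official, just the observation that “which coset of Q a number belongs to” is its remainder on dividing by 4:

    # Which coset of Q (the multiples of 4) a number belongs to is n mod 4.
    def coset_of(n):
        return n % 4   # 0 means Q itself; 1, 2, 3 mean 1 + Q, 2 + Q, 3 + Q

    # Adding cosets: pick any element of each, add them, see where the sum lands.
    print(coset_of(1 + 1))   # 2: so (1 + Q) plus (1 + Q) is 2 + Q
    print(coset_of(5 + 2))   # 3: 5 is in 1 + Q, so (1 + Q) plus (2 + Q) is 3 + Q
    print(coset_of(2 + 2))   # 0: so (2 + Q) plus (2 + Q) is Q again

Pick different representatives — 5 instead of 1, say, or 10 instead of 2 — and the sums still land in the same cosets. That’s what makes adding cosets a sensible thing to do.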

The quotient group uses the same addition as the original group. But it doesn’t add together elements of the original group, or even of the normal subgroup. It adds together sets made from the normal subgroup. We’ll denote them using some form that looks like “a + N”, or maybe “a N”, if ‘N’ is the normal subgroup and ‘a’ something that isn’t in it. (Sometimes it’s more convenient to write the group operation like it was multiplication, because we do that by not writing anything at all, which saves us from writing stuff.)

If we’re comfortable with the idea that “odd plus odd is even” and “even plus odd is odd” then we should be comfortable with adding together quotient groups. We’re not, not without practice, but that’s all right. In the Introduction To Not That Kind Of Algebra course mathematics majors take they get a lot of practice, just in time to be thrown into rings.

Quotient groups land on the mathematics major as a baffling thing. Their elements aren’t even things from the original group. And they lead into important theorems. But to an undergraduate those all look like text huddling up to ladders of quotient groups. We’re told these are important theorems and they are. They also go along with beautiful diagrams of how these quotient groups relate to each other. But they’re hard going. It’s tough finding good examples and almost impossible to explain what a question is. It comes as a relief to be thrown into rings. By the time we come back around to quotient groups we’ve usually had enough time to get used to the idea, and they don’t seem so hard.

Really, looking at odds and evens, they shouldn’t be so hard.

The End 2016 Mathematics A To Z: Monster Group


Today’s is one of my requested mathematics terms. This one comes to us from group theory, by way of Gaurish, and as ever I’m thankful for the prompt.

Monster Group.

It’s hard to learn from an example. Examples are great, and I wouldn’t try teaching anything subtle without one. Might not even try teaching the obvious without one. But a single example is dangerous. The learner has trouble telling what parts of the example are the general lesson to learn and what parts are just things that happen to be true for that case. Having several examples, of different kinds of things, saves the student. The thing in common to many different examples is the thing to retain.

The mathematics major learns group theory in Introduction To Not That Kind Of Algebra, MAT 351. A group extracts the barest essence of arithmetic: a bunch of things and the ability to add them together. So what’s an example? … Well, the integers do nicely. What’s another example? … Well, the integers modulo two, where the only things are 0 and 1 and we know 1 + 1 equals 0. What’s another example? … The integers modulo three, where the only things are 0 and 1 and 2 and we know 1 + 2 equals 0. How about another? … The integers modulo four? Modulo five?

All true. All, also, basically the same thing. The whole set of integers, or of real numbers, is different. But as finite groups, the integers modulo anything are nice, easy-to-understand groups. They’re known as Cyclic Groups for reasons I’ll explain if asked. But all the Cyclic Groups are kind of the same.

So how about another example? And here we get some good ones. There’s the Permutation Groups. These are fun. You start off with a set of things. You can label them anything you like, but you’re daft if you don’t label them the counting numbers. So, say, the set of things 1, 2, 3, 4, 5. Start with them in that order. A permutation is the swapping of any pair of those things. So swapping, say, the second and fifth things gives the list 1, 5, 3, 4, 2. The collection of all the swaps you can make is the Permutation Group on this set of things. The things in the group are not 1, 2, 3, 4, 5. The things in the permutation group are “swap the second and fifth thing” or “swap the third and first thing” or “swap the fourth and the third thing”. You maybe feel uneasy about this. That’s all right. I suggest playing with this until you feel comfortable, because it is a lot of fun to play with. Playing in this case means writing out all the ways you can swap stuff, which you can always do as a string of swaps of exactly two things.
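If writing out swaps on paper isn’t your style, here’s a little Python sketch of the same play. Nothing about it is canonical; it just treats a swap as a function acting on tuples:

    # A swap exchanges two positions (1-indexed, to match the prose) in a sequence.
    def swap(seq, i, j):
        s = list(seq)
        s[i - 1], s[j - 1] = s[j - 1], s[i - 1]
        return tuple(s)

    start = (1, 2, 3, 4, 5)
    print(swap(start, 2, 5))               # (1, 5, 3, 4, 2), as in the text
    print(swap(swap(start, 2, 5), 4, 2))   # two swaps in a row: another permutation

The things in the permutation group are the swap operations, not the numbered items, and the code keeps that straight: the swap is the thing, and the tuple is just what it acts on.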

(Some people may remember an episode of Futurama that involved a brain-swapping machine. Or a body-swapping machine, if you prefer. The gimmick of the episode is that two people could only swap bodies/brains exactly one time. The problem was how to get everybody back in their correct bodies. It turns out to be possible to do, and one of the show’s writers did write a proof of it. It’s shown on-screen for a moment. Many fans were awestruck by an episode of the show inspiring a Mathematical Theorem. They’re overestimating how rare theorems are. But it is fun when real mathematics gets done as a side effect of telling a good joke. Anyway, the theorem fits well in group theory and the study of these permutation groups.)

So the student wanting examples of groups can get the Permutation Group on three elements. Or the Permutation Group on four elements. The Permutation Group on five elements. … You kind of see, this is certainly different from those Cyclic Groups. But they’re all kind of like each other.

An “Alternating Group” is one where all the elements in it are built from an even number of swaps. So, “swap the second and fifth things” would not be in an alternating group. But “swap the second and fifth things, and swap the fourth and second things” would be. And so the student needing examples can look at the Alternating Group on two elements. Or the Alternating Group on three elements. The Alternating Group on four elements. And so on. It’s slightly different from the Permutation Group. It’s certainly different from the Cyclic Group. But still, if you’ve mastered the Alternating Group on five elements you aren’t going to see the Alternating Group on six elements as all that different.
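Whether a permutation is even or odd — its “parity” — is something a computer can count. Here’s a rough Python sketch, assuming for convenience that the items are numbered 0 through n − 1:

    # Parity of a permutation of 0..n-1: count the swaps needed to sort it out.
    def parity(perm):
        perm, swaps = list(perm), 0
        for i in range(len(perm)):
            while perm[i] != i:
                j = perm[i]
                perm[i], perm[j] = perm[j], perm[i]   # one swap
                swaps += 1
        return swaps % 2   # 0 for even permutations, 1 for odd

    print(parity([0, 1, 2, 3, 4]))   # 0: the identity is (vacuously) even
    print(parity([0, 4, 2, 3, 1]))   # 1: a single swap is an odd permutation
    print(parity([0, 4, 2, 1, 3]))   # 0: a product of two swaps, so even

The even ones, the permutations with parity 0, are exactly the ones an Alternating Group keeps.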

Cyclic Groups and Alternating Groups have some stuff in common. Permutation Groups not so much, and I’m going to leave them up above, waving, since they got me to the Alternating Groups I wanted.

One is that they’re finite. At least they can be. I like finite groups. I imagine students like them too. It’s nice having a mathematical thing you can write out in full and know you aren’t missing anything.

The second thing is that they are, or they can be, “simple groups”. That’s … a challenge to explain. This has to do with the structure of the group and the kinds of subgroup you can extract from it. It’s very-very-loosely-and-figuratively-and-do-not-try-to-pass-this-off-at-your-thesis-defense kind of like being a prime number. In fact, Cyclic Groups for a prime number of elements are simple groups. So are Alternating Groups on five or more elements.

So we get to wondering: what are the finite simple groups? Turns out they come in four main families. One family is the Cyclic Groups for a prime number of things. One family is the Alternating Groups on five or more things. One family is this collection called the Chevalley Groups. Those are mostly things about projections: the ways to map one set of coordinates into another. We don’t talk about them much in Introduction To Not That Kind Of Algebra. They’re too deep into Geometry for people learning Algebra. The last family is this collection called the Twisted Chevalley Groups, or the Steinberg Groups. And they … um. Well, I never got far enough into Geometry I’m Guessing to understand what they’re for. I’m certain they’re quite useful to people working in the field of order-three automorphisms of the whatever exactly D4 is.

And that’s it. That’s all the families there are. If it’s a finite simple group then it’s one of these. … Unless it isn’t.

Because there are a couple of stragglers. There are a few finite simple groups that don’t fit in any of the four big families. And it really is only a few. I would have expected an infinite number of weird little cases that don’t belong to any family of similar-looking things. Instead, there are 26. (27 if you decide a particular one of the Steinberg Groups doesn’t really belong in that family. I’m not familiar enough with the case to have an opinion.) Funny number to have turn up. It took ten thousand pages to prove there were just the 26 special cases. I haven’t read them all. (I haven’t read any of the pages. But my Algebra professors at Rutgers were proud to mention their department’s work in tracking down all these cases.)

Some of these cases have some resemblance to one another. But not enough to see them as a family the way the Cyclic Groups are. We bundle all these together in a wastebasket taxon called “the sporadic groups”. The first five of them were worked out in the 1860s. The last of them was worked out in 1980, seven years after its existence was first suspected.

The sporadic groups all have weird sizes. The smallest one, known as M11 (for “Mathieu”, who found it and four of its siblings in the 1860s) has 7,920 things in it. They get enormous soon after that.

The biggest of the sporadic groups, and the last one described, is the Monster Group. It’s known as M. It has a lot of things in it. In particular it’s got 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 things in it. So, you know, it’s not like we’ve written out everything that’s in it. We’ve just got descriptions of how you would write out everything in it, if you wanted to try. And you can get a good argument going about what it means for a mathematical object to “exist”, or to be “created”. There are something like 10^54 things in it. That’s something like a trillion times a trillion times the number of stars in the observable universe. Not just the stars in our galaxy, but all the stars in all the galaxies we could in principle ever see.
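You don’t have to take my word for the order of magnitude. Python handles integers this size without complaint, so a two-line sanity check works:

    # The order of the Monster Group, and how many digits it has.
    monster_order = 808017424794512875886459904961710757005754368000000000
    print(len(str(monster_order)))   # 54: the order is a bit over 8 x 10^53

Fifty-four digits, which is why I say something like 10^54 things.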

It’s one of the rare things for which “Brobdingnagian” is an understatement. Everything about it is mind-boggling, the sort of thing that staggers the imagination more than infinitely large things do. We don’t really think of infinitely large things; we just picture “something big”. A number like that one above is definite, and awesomely big. Just read off the digits of that number; it sounds like what we imagine infinity ought to be.

We can make a chart, called the “character table”, which describes how subsets of the group interact with one another. The character table for the Monster Group is 194 rows tall and 194 columns wide. The Monster Group can be represented as this, I am solemnly assured, logical and beautiful algebraic structure. It’s something like a polyhedron in rather more than three dimensions of space. In particular it needs 196,884 dimensions to show off its particular beauty. I am taking experts’ word for it. I can’t quite imagine more than 196,883 dimensions for a thing.

And it’s a thing full of mystery. This creature of group theory makes us think of the number 196,884. The same 196,884 turns up in number theory, the study of how integers are put together. It’s the first non-boring coefficient in a thing called the j-function. It’s not coincidence. This bit of number theory and this bit of group theory are bound together, but it took some years for anyone to quite understand why.

There are more mysteries. The character table has 194 rows and columns. Each column implies a function. Some of those functions are duplicated; there are 171 distinct ones. But some of the distinct ones it turns out you can find by adding together multiples of others. There are 163 distinct ones. 163 appears again in number theory, in the study of algebraic integers. These are, of course, not integers at all. They’re things that look like complex-valued numbers: some real number plus some (possibly other) real number times the square root of some specified negative number. They’ve got neat properties. Or weird ones.

You know how with integers there’s just one way to factor them? Like, fifteen is equal to three times five and no other set of prime numbers? Algebraic integers don’t work like that. There are usually multiple ways to factor them. There are exceptions, algebraic integers that still have unique factorings. They happen only for a few square roots of negative numbers. The biggest, in size, of those negative numbers? Minus 163.

I don’t know if this 163 appearance means something. As I understand the matter, neither does anybody else.

There is some link to the mathematics of string theory. That’s an interesting but controversial and hard-to-experiment-upon model for how the physics of the universe may work. But I don’t know string theory well enough to say what it is or how surprising this should be.

The Monster Group creates a monster essay. I suppose it couldn’t do otherwise. I suppose I can’t adequately describe all its sublime mystery. Dr Mark Ronan has written a fine web page describing much of the Monster Group and the history of our understanding of it. He also has written a book, Symmetry and the Monster, to explain all this in greater depth. I’ve not read the book. But I do mean to, now.

The End 2016 Mathematics A To Z: Kernel


I told you that Image thing would reappear. Meanwhile I learned something about myself in writing this.

Kernel.

I want to talk about functions again. I’ve been keeping, like a proper mathematician, to a nice general idea of what a function is. The sort where a function’s this rule matching stuff in a set called the domain with stuff in a set called the range. And I’ve tried not to commit myself to saying anything about what that domain and range are. They could be numbers. They could be other functions. They could be the set of DVDs you own but haven’t watched in more than two years. They could be collections of socks. Haven’t said.

But we know what functions anyone cares about. They’re stuff that have domains and ranges that are numbers. Preferably real numbers. Complex-valued numbers if we must. If we look at more exotic sets they’re ones that stick close to being numbers: vectors made up of an ordered set of numbers. Matrices of numbers. Functions that are themselves about numbers. Maybe we’ll get to something exotic like a rotation, but then what is a rotation but spinning something a certain number of degrees? There are a bunch of unavoidably common domains and ranges.

Fine, then. I’ll stick to functions with ranges that look enough like regular old numbers. By “enough” I mean they have a zero. That is, something that works like zero does. You know, add it to something else and that something else isn’t changed. That’s all I need.

A natural thing to wonder about a function — hold on. “Natural” is the wrong word. Something we learn to wonder about in functions, in pre-algebra class where they’re all polynomials, is where the zeroes are. They’re generally not at zero. Why would we say “zeroes” to mean “zero”? That could let non-mathematicians think they knew what we were on about. By the “zeroes” we mean the things in the domain that get matched to the zero in the range. It might be zero; no reason it couldn’t, until we know what the function’s rule is. Just we can’t count on that.

A polynomial, we know, might have zero zeroes — none at all. It might have one, or two, or so on. If it’s an n-th degree polynomial it can have up to n zeroes. And if it’s not a polynomial? Well, then it could have any conceivable number of zeroes and nobody is going to give you a nice little formula to say where they all are. It’s not that we’re being mean. It’s just that there isn’t a nice little formula that works for all possibilities. There aren’t even nice little formulas that work for all polynomials. You have to find zeroes by thinking about the problem. Sorry.
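For polynomials, at least, the numerical side is well-trodden. Here’s a Python sketch using numpy’s standard root-finder; the polynomial is my own made-up example:

    # Numerically find the zeroes of x^2 - 3x + 2, coefficients highest power first.
    import numpy as np

    print(np.roots([1, -3, 2]))   # [2. 1.]: the polynomial's two zeroes

It gives approximations, not exact answers, and for high-degree or badly-behaved polynomials the approximations can wobble. Thinking about the problem is still required. Sorry.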

But! Suppose you have a collection of all the zeroes for your function. That’s all the points in the domain that match with zero in the range. Then we have a new name for the thing you have. And that’s the kernel of your function. It’s the biggest subset in the domain with an image that’s just the zero in the range.

So we have a name for the zeroes that isn’t just “the zeroes”. What does this get us?

If we don’t know anything about the kind of function we have, not much. If the function belongs to some common kinds of functions, though, it tells us stuff.

For example. Suppose the function has domain and range that are vectors. And that the function is linear, which is to say, easy to deal with. Let me call the function ‘f’. And let me pick out two things in the domain. I’ll call them ‘x’ and ‘y’ because I’m writing this after Thanksgiving dinner and can’t work up a cleverer name for anything. If f is linear then f(x + y) is the same thing as f(x) + f(y). And now something magic happens. If x and y are both in the kernel, then x + y has to be in the kernel too. Think about it. Meanwhile, if x is in the kernel but y isn’t, then f(x + y) is f(y). Again think about it.

What we can see is that the domain fractures into two directions. One of them, the direction of the kernel, is invisible to the function. You can move however much you like in that direction and f can’t see it. The other direction, perpendicular (“orthogonal”, we say in the trade) to the kernel, is visible. Everything that might change changes in that direction.

This idea threads through vector spaces, and we study a lot of things that turn out to look like vector spaces. It keeps surprising us by letting us solve problems, or find the best-possible approximate solutions. This kernel gives us room to match some fiddly conditions without breaking the real solution. The size of the kernel alone can tell us whether some problems are solvable, or whether they’ll have infinitely large sets of solutions.

In this vector-space construct the kernel often takes on another name, the “null space”. This means the same thing. But it reminds us that superhero comics writers miss out on many excellent pieces of terminology by not taking advanced courses in mathematics.
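If you’d like to see a kernel computed, the vector-space case is easy to play with. A minimal numeric sketch, using scipy’s null_space routine — a real routine, though the matrix here is my own made-up example:

    # Compute the kernel (null space) of a linear function given by a matrix.
    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])    # the second row is twice the first

    kern = null_space(A)               # columns form a basis for the kernel
    print(kern.shape[1])               # 2: the kernel here is two-dimensional
    print(np.allclose(A @ kern, 0))    # True: the function can't see these directions

A three-dimensional domain with a two-dimensional kernel leaves only one direction the function can see, which matches the matrix having only one genuinely independent row.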

Kernels also appear in group theory, and again when we get into rings. We’re always working with rings. They’re nearly as unavoidable as vector spaces.

You know how you can divide the whole numbers into odd and even? And you can do some neat tricks with that for some problems? You can do that with every ring, using the kernel as a dividing point. This gives us information about how the ring is shaped, and what other structures might look like the ring. This often lets us turn proofs that might be hard into a collection of proofs on individual cases that are, at least, doable. Tricks about odd and even numbers become, in trained hands, subtle proofs of surprising results.

We see vector spaces and rings all over the place in mathematics. Some of that’s selection bias. Vector spaces capture a lot of what’s important about geometry. Rings capture a lot of what’s important about arithmetic. We have understandings of geometry and arithmetic that transcend even our species. Raccoons understand space. Crows understand number. When we look to do mathematics we look for patterns we understand, and these are major patterns we understand. And there are kernels that matter to each of them.

Some mathematical ideas inspire metaphors to me. Kernels are one. Kernels feel to me like the process of holding a polarized lens up to a crystal. This lets one see how the crystal is put together. I realize writing this down that my metaphor is unclear: is the kernel the lens or the structure seen in the crystal? I suppose the function has to be the lens, with the kernel the crystallization planes made clear under it. It’s curious I had enjoyed this feeling about kernels and functions for so long without making it precise. Feelings about mathematical structures can be like that.

The End 2016 Mathematics A To Z: Algebra


So let me start the End 2016 Mathematics A To Z with a word everybody figures they know. As will happen, everybody’s right and everybody’s wrong about that.

Algebra.

Everybody knows what algebra is. It’s the point where suddenly mathematics involves spelling. Instead of long division we’re on a never-ending search for ‘x’. Years later we pass along gifs of either someone saying “stop asking us to find your ex” or someone who’s circled the letter ‘x’ and written “there it is”. And make jokes about how we got through life without using algebra. And we know it’s the thing mathematicians are always doing.

Mathematicians aren’t always doing that. I expect the average mathematician would say she almost never does that. That’s a bit of a fib. We have a lot of work where we do stuff that would be recognizable as high school algebra. It’s just that we don’t really care about it. We’re doing it because it’s how we get the problem we are interested in done. The most recent few pieces in my “Why Stuff can Orbit” series include a bunch of high school algebra-style work. But that was just because it was the easiest way to answer some calculus-inspired questions.

Still, “algebra” is a much-used word. It comes back around the second or third year of a mathematics major’s career. It comes in two forms in undergraduate life. One form is “linear algebra”, which is a great subject. That field’s about how stuff moves. You get to imagine space as this stretchy material. You can stretch it out. You can squash it down. You can stretch it in some directions and squash it in others. You can rotate it. These are simple things to build on. You can spend a whole career building on that. It becomes practical in surprising ways. For example, it’s the field of study behind finding equations that best match some complicated, messy real data.
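That best-match business is worth a concrete look. Here’s a small Python sketch — made-up numbers, standard numpy routine — fitting a straight line to slightly noisy data by least squares:

    # Fit y = slope * x + intercept to messy data, in the least-squares sense.
    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([0.1, 0.9, 2.2, 2.8])      # roughly y = x, plus some noise
    A = np.vstack([x, np.ones_like(x)]).T   # one column per unknown
    slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
    print(slope, intercept)                 # close to 1 and 0

Underneath, this is exactly the stretchy-space picture: the matrix describes how the unknowns stretch into predictions, and least squares finds the point in that stretched space closest to the data.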

The second form is “abstract algebra”, which comes in about the same time. This one is alien and baffling for a long while. It doesn’t help that the books all call it Introduction to Algebra or just Algebra and all your friends think you’re slumming. The mathematics major stumbles through confusing definitions and theorems that ought to sound comforting. (“Fermat’s Little Theorem”? That’s a good thing, right?) But the confusion passes, in time. There’s a beautiful subject here, one of my favorites. I’ve talked about it a lot.

We start with something that looks like the loosest cartoon of arithmetic. We get a bunch of things we can add together, and an ‘addition’ operation. This lets us do a lot of stuff that looks like addition modulo some number. Then we go on to stuff that looks like picking up floor tiles and rotating them. Add in something that we call ‘multiplication’ and we get rings. This is a bit more like normal arithmetic. Add in some other stuff and we get ‘fields’ and other structures. We can keep falling back on arithmetic and on rotating tiles to build our intuition about what we’re doing. This trains mathematicians to look for particular patterns in new, abstract constructs.

Linear algebra is not an abstract-algebra sort of algebra. Sorry about that.

And there’s another kind of algebra that mathematicians talk about. At least once they get into grad school they do. There’s a huge family of these kinds of algebras. The family trait for them is that they share a particular rule about how you can multiply their elements together. I won’t get into that here. There are many kinds of these algebras. One that I keep trying to study on my own and crash hard against is Lie Algebra. That’s named for the Norwegian mathematician Sophus Lie. Pronounce it “lee”, as in “leaning”. You can understand quantum mechanics much better if you’re comfortable with Lie Algebras and so now you know one of my weaknesses. Another kind is the Clifford Algebra. This lets us create something called a “hypercomplex number”. It isn’t much like a complex number. Sorry. Clifford Algebra does lend itself to a construct called spinors. These help physicists understand the behavior of bosons and fermions. Every bit of matter seems to be either a boson or a fermion. So you see why this is something people might like to understand.

Boolean Algebra is the algebra of this type that a normal person is likely to have heard of. It’s about what we can build using two values and a few operations. Those values by tradition we call True and False, or 1 and 0. The operations we call things like ‘and’ and ‘or’ and ‘not’. It doesn’t sound like much. It gives us computational logic. Isn’t that amazing stuff?
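Here’s how little machinery Boolean Algebra needs, in a Python sketch: two values, three operations, and the whole truth table falls out of a loop:

    # The full truth table for 'and', 'or', and 'not' over two Boolean inputs.
    from itertools import product

    for p, q in product((False, True), repeat=2):
        print(p, q, p and q, p or q, not p)

Chain enough of these operations together and you have computational logic, which is to say the logical skeleton of every computer.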

So if someone says “algebra” she might mean any of these. A normal person in a non-academic context probably means high school algebra. A mathematician speaking without further context probably means abstract algebra. If you hear something about “matrices” it’s more likely that she’s speaking of linear algebra. But abstract algebra can’t be ruled out yet. If you hear a word like “eigenvector” or “eigenvalue” or anything else starting “eigen” (or “characteristic”) she’s more probably speaking of linear algebra. And if there’s someone’s name before the word “algebra” then she’s probably speaking of the last of these. This is not a perfect guide. But it is the sort of context mathematicians expect other mathematicians to notice.

Reading the Comics, June 25, 2016: Busy Week Edition


I had meant to cut the Reading The Comics posts back to a reasonable one a week. Then came the 23rd, which had something like six hundred mathematically-themed comic strips. So I could post another impossibly long article on Sunday or split what I have. And splitting works better for my posting count, so, here we are.

Charles Brubaker’s Ask A Cat for the 19th is a soap-bubbles strip. As ever happens with comic strips, the cat blows bubbles that can’t happen without wireframes and skillful bubble-blowing artistry. It happens that a few days ago I linked to a couple essays showing off some magnificent surfaces that the right wireframe boundary might inspire. The mathematics describing what shape a soap bubble should take isn’t hard; I’m confident I could’ve understood the equations as an undergraduate. Finding exact solutions … I’m not sure I could have done that. (I’d still want someone looking over my work if I did them today.) But numerical solutions, those I’d be confident in doing. And the real thing is available when you’re ready to get your hands … dirty … with soapy water.

Rick Stromoski’s Soup To Nutz for the 19th shows RoyBoy on the brink of understanding symmetry. To lose at rock-paper-scissors is indeed just as hard as winning is. Suppose we replaced the names of the things thrown with letters. Suppose we replaced ‘beats’ and ‘loses to’ with nonsense words. Then we could describe the game: A flobs B. B flobs C. C flobs A. A dostks C. C dostks B. B dostks A. There’s no way to tell, from this, whether A is rock or paper or scissors, or whether ‘flob’ or ‘dostk’ is a win.
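You can make the relabeling argument concrete. A Python sketch — the letters and the dictionary are just my own stand-ins — showing that rotating the labels leaves the ‘flobs’ relation looking exactly the same:

    # 'flobs' as a dictionary: each thing flobs exactly one other thing, in a cycle.
    flobs = {'A': 'B', 'B': 'C', 'C': 'A'}

    # Relabel everything by rotating the letters: A -> B, B -> C, C -> A.
    rotate = {'A': 'B', 'B': 'C', 'C': 'A'}
    relabeled = {rotate[x]: rotate[y] for x, y in flobs.items()}

    print(relabeled == flobs)   # True: the relabeled game is indistinguishable

Since the structure survives relabeling, nothing in the rules singles out any one thrown item, and losing is exactly as symmetric a business as winning.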

Bill Whitehead’s Free Range for the 20th is the old joke about tipping being the hardest kind of mathematics to do. Proof? There’s an enormous blackboard full of symbols and the three guys in lab coats are still having trouble with it. I have long wondered why tips are used as the model of impossibly difficult things to compute that aren’t taxes. I suppose the idea of taking “fifteen percent” (or twenty, or whatever) of something suggests a need for precision. And it’ll be fifteen percent of a number chosen without any interest in making the calculation neat. So it looks like the worst possible kind of arithmetic problem. But the secret, of course, is that you don’t have to have “the” right answer. You just have to land anywhere in an acceptable range. You can work out a fraction — a sixth, a fifth, or so — of a number that’s close to the tab and you’ll be right. So, as ever, it’s important to know how to tell whether you have a correct answer before worrying about calculating it.

Allison Barrows’s Preeteena rerun for the 20th is your cheerleading geometry joke for this week.

'I refuse to change my personality just for a stupid speech.' 'Fi, you wouldn't have to! In fact, make it an asset! Brand yourself as The Math Curmudgeon! ... The Grouchy Grapher ... The Sour Cosine ... The Number Grump ... The Count of Carping ... The Kvetching Quotient' 'I GET IT!'
Bill Holbrook’s On The Fastrack for the 22nd of June, 2016. There are so many bloggers wondering if Holbrook is talking about them.

I am sure Bill Holbrook’s On The Fastrack for the 22nd is not aimed at me. He hangs around Usenet group rec.arts.comics.strips some, as I do, and we’ve communicated a bit that way. But I can’t imagine he thinks of me much or even at all once he’s done with racs for the day. Anyway, Dethany does point out how a clear identity helps one communicate mathematics well. (Fi is to talk with elementary school girls about mathematics careers.) And bitterness is always a well-received pose. Me, I’m aware that my pop-mathematics brand identity is best described as “I guess he writes a couple things a week, doesn’t he?” and I could probably use some stronger hook, somewhere. I just don’t feel curmudgeonly most of the time.

Darby Conley’s Get Fuzzy rerun for the 22nd is about arithmetic as a way to be obscure. We’ve all been there. I had, at first, read Bucky’s rating as “out of 178 1/3 π” and thought, well, that’s not too bad since one-third of π is pretty close to 1. But then, Conley being a normal person, probably meant “one-hundred seventy-eight and a third”, and π times that is a mess. Well, it’s somewhere around 550 or so. Octave tells me it’s more like 560.251 and so on.

Reading the Comics, April 10, 2016: Four-Digit Prime Number Edition


In today’s installment of Reading The Comics, mathematics gets name-dropped a bunch in strips that aren’t really about my favorite subject other than my love. Also, I reveal the big lie we’ve been fed about who drew the Henry comic strip attributed to Carl Anderson. Finally, I get a question from Queen Victoria. I feel like this should be the start of a podcast.

Todd responds to arithmetic flash cards: 'Tater tots! Sloppy Joes! Mac and Cheese!' 'Todd, what are you doing? These are all math!' 'Sorry ... every day at school we have math right before lunch and you told me to say the first thing that pops into my mind!'
Patrick Roberts’ Todd the Dinosaur for the 6th of April, 2016.

Patrick Roberts’ Todd the Dinosaur for the 6th of April just name-drops mathematics. The flash cards suggest it. They’re almost iconic for learning arithmetic. I’ve seen flash cards for other subjects. But apart from learning the words of other languages I’ve never been able to make myself believe they’d work. On the other hand, I haven’t used flash cards to learn (or teach) things myself.

Mom, taking the mathematics book away from Bad Dad: 'I'll take over now ... fractions and long division aren't `scientifically accepted as unknowable`.'
Joe Martin’s Boffo for the 7th of April, 2016. I bet the link expires in early May.

Joe Martin’s Boffo for the 7th of April is a solid giggle. (I have a pretty watery giggle myself.) There are unknowable, or at least unprovable, things in mathematics. Any logic system with enough rules to be interesting has ideas which would make sense, and which might be true, but which can’t be proven. Arithmetic is such a system. But just fractions and long division by itself? No, I think we need something more abstract for that.

Henry is sent to bed. He can't sleep until he reads from his New Math text.
Carl Anderson’s Henry for the 7th of April, 2016.

Carl Anderson’s Henry for the 7th of April is, of course, a rerun. It’s also a rerun that gives away that the “Carl Anderson” credit is a lie. Anderson turned over drawing the comic strip in 1942 to John Liney, for weekday strips, and Don Trachte for Sundays. There is no possible way the phrase “New Math” appeared on the cover of a textbook Carl Anderson drew. Liney retired in 1979, and Jack Tippit took over until 1983. Then Dick Hodgins, Jr, drew the strip until 1990. So depending on how quickly word of the New Math penetrated Comic Strip Master Command, this was drawn by either Liney, Tippit, or possibly Hodgins. (Peanuts made New Math jokes in the 60s, but it does seem the older the comic strip the longer it takes to mention new stuff.) I don’t know when these reruns date from. I also don’t know why Comics Kingdom is fibbing about the artist. But then they went and cancelled The Katzenjammer Kids without telling anyone either.

Eric the Circle for the 8th, this one by “lolz”, declares that Eric doesn’t like being graphed. This is your traditional sort of graph, one in which points with coordinates x and y are on the plot if their values make some equation true. For a circle, that equation’s something like (x – a)^2 + (y – b)^2 = r^2. Here (a, b) are the coordinates for the point that’s the center of the circle, and r is the radius of the circle. This looks a lot like Eric is centered on the origin, the point with coordinates (0, 0). It’s a popular choice. Any center is as good. Another would just have equations that take longer to work with.

Richard Thompson’s Cul de Sac rerun for the 10th is so much fun to look at that I’m including it even though it just name-drops mathematics. The joke would be the same if it were something besides fractions. Although see Boffo.

Norm Feuti’s Gil rerun for the 10th takes on mathematics’ favorite group theory application, the Rubik’s Cube. It’s the way I solved them best. This approach falls outside the bounds of normal group theory, though.

Mac King and Bill King’s Magic in a Minute for the 10th shows off a magic trick. It’s also a non-Rubik’s-cube problem in group theory. One of the groups that a mathematics major learns, after integers-mod-four and the like, is the permutation group. In this, the act of swapping two (or more) things is a thing. This puzzle restricts the allowed permutations down to swapping one item with the thing next to it. And thanks to that, an astounding result emerges. It’s worth figuring out why the trick would work. If you can figure out the reason the first set of switches have to leave a penny on the far right then you’ve got the gimmick solved.

Pab Sungenis’s New Adventures of Queen Victoria for the 10th made me wonder just how many four-digit prime numbers there are. If I haven’t worked this out wrong, there are 1,061 of them.
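If you’d like to check my count rather than trust it, a quick sieve will do. A Python sketch:

    # Count the primes with exactly four digits, using a sieve of Eratosthenes.
    def primes_up_to(n):
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i::i] = [False] * len(sieve[i * i::i])
        return [p for p, is_prime in enumerate(sieve) if is_prime]

    print(sum(1 for p in primes_up_to(9999) if p >= 1000))   # 1061

It agrees: 1,061 four-digit primes.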

A Leap Day 2016 Mathematics A To Z: Dedekind Domain


When I tossed this season’s A To Z open to requests I figured I’d get some surprising ones. So I did. This one’s particularly challenging. It comes from Gaurish Korpal, author of the Gaurish4Math blog.

Dedekind Domain

A major field of mathematics is Algebra. By this mathematicians don’t mean high school algebra. They mean studying collections of things on which you can do stuff that looks like arithmetic. There are good reasons why this field has that confusing name. Nobody knows what they are.

We’ve seen before the creation of things that look a bit like arithmetic. Rings are a collection of things for which we can do something that works like addition and something that works like multiplication. There are a lot of different kinds of rings. When a mathematics popularizer tries to talk about rings, she’ll talk a lot about the whole numbers. We can usually count on the audience to know what they are. If that won’t do for the particular topic, she’ll try the whole numbers modulo something. If she needs another example then she talks about the ways you can rotate or reflect a triangle, or square, or hexagon and get the original shape back. Maybe she calls on the sets of polynomials you can describe. Then she has to give up on words and make do with pictures of beautifully complicated things. And after that she has to give up because the structures get too abstract to describe without losing the audience.

Dedekind Domains are a kind of ring that meets a bunch of extra criteria. There’s no point my listing them all. It would take several hundred words and you would lose motivation to continue before I was done. If you need them anyway Eric W Weisstein’s MathWorld dictionary gives the exact criteria. It also has explanations for all the words in those criteria.

Dedekind Domains, also called Dedekind Rings, are aptly named for Richard Dedekind. He was a 19th century mathematician, the last doctoral student of Gauss, and one of the people who defined what we think of as algebra. He also gave us a rigorous foundation for what irrational numbers are.

Among the problems that fascinated Dedekind was Fermat’s Last Theorem. This can’t surprise you. Every person who would be a mathematician is fascinated by it. We take our innings fiddling with cases and ways to show a^n + b^n can’t equal c^n for interesting whole numbers a, b, c, and n. We usually go about this by saying, “Suppose we have the smallest a, b, and c for which this is true and for which n is bigger than 2”. Then we do a lot of scribbling that shows this implies something contradictory, like an even number equalling an odd, or that there’s some set of smaller numbers making this true. This proves the original supposition was false. Mathematicians first learn that trick as a way to show the square root of two can’t be a rational number. We stick with it because it’s nice and familiar and looks relevant. Most of us get maybe as far as proving there aren’t any solutions for n = 3 or maybe n = 4 and go on to other work. Dedekind didn’t prove the theorem. But he did find new ways to look at numbers.

One problem with proving Fermat’s Last Theorem is that it’s all about integers. Integers are hard to prove things about. Real numbers are easier. Complex-valued numbers are easier still. This is weird but it’s so. So we have this promising approach: if we could prove something like Fermat’s Last Theorem for complex-valued numbers, we’d get it for integers. Or at least we’d be a lot of the way there. The one flaw is that Fermat’s Last Theorem isn’t true for complex-valued numbers. It would be ridiculous if it were true.

But we can patch things up. We can construct something called Gaussian Integers. These are complex-valued numbers which we can match up to integers in a compelling way. We could use the tools that work on complex-valued numbers to squeeze out a result about integers.

You know that this didn’t work. If it had, we wouldn’t have had to wait until the 1990s for the proof of Fermat’s Last Theorem. And that proof doesn’t have anything to do with this stuff. One of the problems keeping this kind of proof from working is factoring. Whole numbers are either prime numbers or the product of prime numbers. Or they’re 1, ruled out of the universe of prime numbers for reasons I get to after the next paragraph. Prime numbers are those like 2, 5, 13, 37 and many others. They haven’t got any factors besides themselves and 1. The other whole numbers are the products of prime numbers. 12 is equal to 2 times 2 times 3. 35 is equal to 5 times 7. 165 is equal to 3 times 5 times 11.

If we stick to whole numbers, then, these all have unique prime factorizations. 24 is equal to 2 times 2 times 2 times 3. And there are no other combinations of prime numbers that multiply together to give us 24. We could rearrange the numbers — 2 times 3 times 2 times 2 works. But it will always be a combination of three 2’s and a single 3 that we multiply together to get 24.

(This is a reason we don’t consider 1 a prime number. If we did consider 1 a prime number, then “three 2’s and a single 3” would be a prime factorization of 24, but so would “three 2’s, a single 3, and two 1’s”. Also “three 2’s, a single 3, and fifteen 1’s”. Also “three 2’s, a single 3, and one 1”. We have a lot of theorems that depend on whole numbers having a unique prime factorization. We could add the phrase “except for the count of 1’s in the factorization” to every occurrence of the phrase “prime factorization”. Or we could say that 1 isn’t a prime number. It’s a lot less work to say 1 isn’t a prime number.)
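Unique factorization is easy to poke at with code, too. Here’s a Python sketch using plain trial division; nothing clever, but honest:

    # Factor a whole number into primes by trial division, smallest factors first.
    def prime_factors(n):
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)   # whatever's left over is itself prime
        return factors

    print(prime_factors(24))    # [2, 2, 2, 3]
    print(prime_factors(165))   # [3, 5, 11]

However you rearrange them, those are the only primes you’ll ever get for those numbers.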

The trouble is that in rings like these we can’t always count on unique prime factorization. The Gaussian integers themselves happen to keep it, as it turns out. But close cousins of theirs don’t. Work with numbers of the form a + b√-5, for whole numbers a and b, and you find 6 is equal to 2 times 3 and also equal to (1 + √-5) times (1 – √-5). Two genuinely different factorizations into pieces that can’t be split any further. And this point breaks a lot of otherwise promising attempts to prove Fermat’s Last Theorem. And there’s no getting around that, not for Fermat’s Last Theorem.
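You can at least verify the arithmetic of that example with ordinary complex numbers, since √-5 is just √5 times the imaginary unit. A Python sketch:

    # Check that (1 + sqrt(-5)) * (1 - sqrt(-5)) really is 6, the same 6 as 2 * 3.
    import math

    root = math.sqrt(5) * 1j          # sqrt(-5), as a Python complex number
    print((1 + root) * (1 - root))    # (6+0j), up to floating-point fuzz
    print(2 * 3)                      # 6

The code can’t show that the two factorizations are genuinely different as factorizations — that takes actual number theory — but it does show both products land on 6.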

Dedekind saw a good concept lurking under this, though. The concept is called an ideal. It’s a subset of a ring that itself satisfies the rules for being a ring. And if you take something from the original ring and multiply it by something in the ideal, you get something that’s still in the ideal. You might already have one in mind. Start with the ring of integers. The even numbers are an ideal of that. Add any two even numbers together and you get an even number. Multiply any two even numbers together and you get an even number. Take any integer, even or not, and multiply it by an even number. You get an even number.

(If you were wondering: I mean the ideal would be a “ring without identity”. It’s not required to have something that acts like 1 for the purpose of multiplication. If we insisted on looking at the even numbers and the number 1, then we couldn’t be sure that adding two things from the ideal would stay in the ideal. After all, 2 is in the ideal, and if 1 also is, then 2 + 1 is a peculiar thing to consider an even number.)

It’s not just even numbers that do this. The multiples of 3 make an ideal in the integers too. Add two multiples of 3 together and you get a multiple of 3. Multiply two multiples of 3 together and you get another multiple of 3. Multiply any integer by a multiple of 3 and you get a multiple of 3.

The multiples of 4 also make an ideal, as do the multiples of 5, or the multiples of 82, or of any whole number you like.

Odd numbers don’t make an ideal, though. Add two odd numbers together and you don’t get an odd number. Multiply an integer by an odd number and you might get an odd number, you might not.
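Checking the closure rules is mechanical enough that code can spot-check it. A Python sketch, testing only a small window of integers since we can’t test them all:

    # Spot-check the two ideal closure rules on a small sample of integers.
    def looks_like_ideal(belongs, sample=range(-20, 21)):
        members = [a for a in sample if belongs(a)]
        closed_add = all(belongs(a + b) for a in members for b in members)
        absorbs = all(belongs(r * a) for r in sample for a in members)
        return closed_add and absorbs

    print(looks_like_ideal(lambda n: n % 2 == 0))   # True: evens pass both tests
    print(looks_like_ideal(lambda n: n % 2 != 0))   # False: odd plus odd isn't odd
    print(looks_like_ideal(lambda n: n % 3 == 0))   # True: multiples of 3 pass

A passing spot-check isn’t a proof, of course. But a failing one, like the odd numbers’, settles the matter.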

And not every ring has interesting ideals lurking within it. For example, take the integers modulo 3. In this case there are only three numbers: 0, 1, and 2. 1 + 1 is 2, uncontroversially. But 1 + 2 is 0. 2 + 2 is 1. 2 times 1 is 2, but 2 times 2 is 1 again. This is self-consistent. But it hasn’t got any ideal within it besides the zero alone and the whole thing. There isn’t a smaller set where the addition still works and stays closed.

The multiples of 4 make an interesting ideal in the integers. They’re not just an ideal of the integers. They’re also an ideal of the even numbers. Well, the even numbers make a ring. They couldn’t be an ideal of the integers if they couldn’t be a ring in their own right. And the multiples of 4 — well, multiply any even number by a multiple of 4. You get a multiple of 4 again. This keeps on going. The multiples of 8 are an ideal for the multiples of 4, the multiples of 2, and the integers. Multiples of 16 and 32 make for even deeper nestings of ideals.

The multiples of 6, now … that’s an ideal of the integers, for all the reasons the multiples of 2 and 3 and 4 were. But it’s also an ideal of the multiples of 2. And of the multiples of 3. We can see the collection of “things that are multiples of 6” as a product of “things that are multiples of 2” and “things that are multiples of 3”. Dedekind saw this before us.

You might want to pause a moment while considering the idea of multiplying whole sets of numbers together. It’s a heady concept. Trying to do proofs with the concept feels at first like being tasked with alphabetizing a cloud. But we’re not planning to prove anything so you can move on if you like with an unalphabetized cloud.

A Dedekind Domain is a ring that has ideals like this. And the ideals come in two categories. Some are “prime ideals”, which act like prime numbers do. The non-prime ideals are the products of prime ideals. And while we might not have unique prime factorizations of numbers, we do have unique prime factorizations of ideals. That is, if an ideal is a product of some set of prime ideals, then it can’t also be the product of some other set of prime ideals. We get back something like unique factors.

This may sound abstract. But you know a Dedekind Domain. The integers are one. That wasn’t a given. Yes, we start algebra by looking for things that work like regular arithmetic do. But that doesn’t promise that regular old numbers will still satisfy us. We can, for instance, study things where the order matters in multiplication. Then multiplying one thing by a second gives us a different answer to multiplying the second thing by the first. Still, regular old integers are Dedekind domains and it’s hard to think of being more familiar than that.

Another example is the set of polynomials. You might want to pause for a moment here. Mathematics majors need a pause to start thinking of polynomials as being something kind of like regular old numbers. But you can certainly add one polynomial to another, and you get a polynomial out of it. You can multiply one polynomial by another, and you get a polynomial out of that. Try it. After that the only surprise would be that there are prime polynomials. But if you try to think of two polynomials that multiply together to give you “x + 1” you realize there have to be.
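If you have sympy around you can watch polynomials behave like numbers, including the prime ones. A sketch, with polynomials of my own choosing:

    # Polynomials add and multiply into polynomials, and some refuse to factor.
    from sympy import symbols, expand, factor

    x = symbols('x')
    p, q = x**2 + 1, 3*x - 2
    print(expand(p + q))          # x**2 + 3*x - 1: still a polynomial
    print(expand(p * q))          # 3*x**3 - 2*x**2 + 3*x - 2: still a polynomial
    print(factor(x**2 - 1))       # (x - 1)*(x + 1): this one splits
    print(factor(x**2 + x + 1))   # x**2 + x + 1: won't split, a 'prime' polynomial

(That last one won’t factor over the rationals, anyway. Over the complex numbers everything splits, which is its own story.)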

Other examples start getting more exotic. They’re things like the Gaussian integers I mentioned before. Gaussian integers are themselves an example of a structure called algebraic integers. Algebraic integers are — well, think of all the polynomials you can make out of integer coefficients, with a leading coefficient of 1. So, polynomials that look like x^3 – 4x^2 + 15x + 6 or the like. All of the roots of those, the values of x which make that expression equal to zero, are algebraic integers. Yes, almost none of them are integers. We know. But the algebraic integers are also a Dedekind Domain.

I’d like to describe some more Dedekind Domains. I am foiled. I can find some more, but explaining them outside the dialect of mathematics is hard. It would take me more words than I am confident readers will give me.

I hope you are satisfied to know a bit of what a Dedekind Domain is. It is a kind of thing which works much like integers do. But a Dedekind Domain can be just different enough that we can’t count on factoring working like we are used to. We don’t lose factoring altogether, though. We are able to keep an attenuated version. It does take quite a few words to explain exactly how to set this up, however.

Radial Tessellation Featuring Decagons, Pentagons, and Golden Hexagons


I apologize for being so quiet the past few days. I haven’t had the chance to write what I mean to. To make it up to you please let me reblog this charming tessellation from RobertLovesPi. Tessellations are wonderful sections of mathematics because they lend themselves to stunning pictures and thoughts of impractical ways to redo the kitchen floor. They also depend on symmetries and rotations to work, which is a hallmark of group theory, which starts out by looking at things which look like addition and multiplication and which ends up in things like predicting how many different kinds of subatomic particles there ought to be. (I haven’t gone that far in studying group theory so I’d have to trust other people to fill in some of the gaps here.)

RobertLovesPi.net

Radial Tessellation Featuring Decagons, Pentagon, and Golden Hexagons

As you can see, this can be continued indefinitely from the center.

