The Summer 2017 Mathematics A To Z: Young Tableau


I never heard of today’s entry topic three months ago. Indeed, three weeks ago I was still making guesses about just what Gaurish, author of For the love of Mathematics, was asking about. It turns out to be maybe the grand union of everything that’s ever been in one of my A To Z sequences. I overstate, but barely.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Young Tableau.

The specific thing that a Young Tableau is, is beautiful in its simplicity. It could almost be a recreational mathematics puzzle, except that it isn’t challenging enough.

Start with a couple of boxes laid in a row. As many or as few as you like.

Now set another row of boxes. You can have as many as the first row did, or fewer. You just can’t have more. Set the second row of boxes — well, your choice. Either below the first row, or else above. I’m going to assume you’re going below the first row, and will write my directions accordingly. If you do things the other way you’re following a common enough convention. I’m leaving it on you to figure out what the directions should be, though.

Now add in a third row of boxes, if you like. Again, as many or as few boxes as you like. There can’t be more than there are in the second row. Set it below the second row.

And a fourth row, if you want four rows. Again, no more boxes in it than the third row had. Keep this up until you’ve got tired of adding rows of boxes.

How many boxes do you have? I don’t know. But take the numbers 1, 2, 3, 4, 5, and so on, up to whatever the count of your boxes is. Can you fill in one number for each box? So that the numbers are always increasing as you go left to right in a single row? And as you go top to bottom in a single column? Yes, of course. Go in order: ‘1’ for the first box you laid down, then ‘2’, then ‘3’, and so on, increasing up to the last box in the last row.

Can you do it in another way? Any other order?

Except for the simplest of arrangements, like a single row of four boxes or three rows of one box atop another, the answer is yes. There can be many of them, turns out. Seven boxes, arranged three in the first row, two in the second, one in the third, and one in the fourth, have 35 possible arrangements. It doesn’t take a very big diagram to get an enormous number of possibilities. Could be fun drawing an arbitrary stack of boxes and working out how many arrangements there are, if you have some time in a dull meeting to pass.
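If you’d rather not draw out all 35 by hand, there is a classical counting rule, the hook length formula, that does the tallying. Here’s a minimal Python sketch of it; the function name and the list-of-row-lengths input are just my choices for illustration.

```python
from math import factorial

def count_standard_tableaux(shape):
    """Count the valid fillings of a stack of boxes (standard Young tableaux)
    with the hook length formula: n! divided by the product of every box's
    hook length (the box itself, plus the boxes to its right and below it)."""
    n = sum(shape)
    # conjugate[j] counts the rows long enough to reach column j
    conjugate = [sum(1 for row in shape if row > j) for j in range(shape[0])]
    hook_product = 1
    for i, row_len in enumerate(shape):
        for j in range(row_len):
            hook_product *= (row_len - j - 1) + (conjugate[j] - i - 1) + 1
    return factorial(n) // hook_product

# The essay's example: rows of three, two, one, and one boxes.
print(count_standard_tableaux([3, 2, 1, 1]))   # 35
```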

Let me step away from filling boxes. In one of its later, disappointing, seasons Futurama finally did a body-swap episode. The gimmick: two bodies could only swap the brains within them one time. So would it be possible to put Bender’s brain back in his original body, if he and Amy (or whoever) had already swapped once? The episode drew minor amusement in mathematics circles, and a lot of amazement in pop-culture circles. The writer, a mathematics major, found a proof that showed it was indeed always possible, even after many pairs of people had swapped bodies. The idea that a theorem was created for a TV show impressed many people who think theorems are rarer and harder to create than they necessarily are.

It was a legitimate theorem, and in a well-developed field of mathematics. It’s about permutation groups: the study of the ways you can swap pairs of things. I grant this doesn’t sound like much of a field. There’s a surprising lot of interesting things to learn just from studying how stuff can be swapped, though. It’s even of real-world relevance. Most subatomic particles of a kind — electrons, top quarks, gluons, whatever — are identical to every other particle of the same kind. Physics wouldn’t work if they weren’t. What would happen if we swap the electron on the left for the electron on the right, and vice-versa? How would that change our physics?

A chunk of quantum mechanics studies what kinds of swaps of particles would produce an observable change, and what kind of swaps wouldn’t. When the swap doesn’t make a change we can describe this as a symmetric operation. When the swap does make a change, that’s an antisymmetric operation. And — the Young Tableau that’s a single row of two boxes? That matches up well with this symmetric operation. The Young Tableau that’s two rows of a single box each? That matches up with the antisymmetric operation.

How many ways could you set up three boxes, according to the rules of the game? A single row of three boxes, sure. One row of two boxes and a row of one box. Three rows of one box each. How many ways are there to assign the numbers 1, 2, and 3 to those boxes, and satisfy the rules? One way to do the single row of three boxes. Also one way to do the three rows of a single box. There’s two ways to do the one-row-of-two-boxes, one-row-of-one-box case.
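If you want to see those counts come out of brute force rather than take my word, a short sketch like this (with my own made-up representation of a filling as a tuple of rows) checks every possible assignment.

```python
from itertools import permutations

def valid_fillings(shape):
    """List every way to put 1..n into the given rows so that numbers
    increase left to right along rows and top to bottom down columns."""
    n = sum(shape)
    fillings = []
    for perm in permutations(range(1, n + 1)):
        rows, start = [], 0
        for length in shape:
            rows.append(perm[start:start + length])
            start += length
        rows_ok = all(row[j] < row[j + 1]
                      for row in rows for j in range(len(row) - 1))
        cols_ok = all(rows[i][j] < rows[i + 1][j]
                      for i in range(len(rows) - 1)
                      for j in range(len(rows[i + 1])))
        if rows_ok and cols_ok:
            fillings.append(rows)
    return fillings

for shape in ([3], [2, 1], [1, 1, 1]):
    found = valid_fillings(shape)
    print(shape, len(found), found)   # one, two, and one filling respectively
```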

What if we have three particles? How could they interact? Well, all three could be symmetric with each other. This matches the first case, the single row of three boxes. All three could be antisymmetric with each other. This matches the three rows of one box. Or you could have two particles that are symmetric with each other and antisymmetric with the third particle. Or two particles that are antisymmetric with each other but symmetric with the third particle. Two ways to do that. Two ways to fill in the one-row-of-two-boxes, one-row-of-one-box case.

This isn’t merely a neat, aesthetically interesting coincidence. I wouldn’t spend so much time on it if it were. There’s a matching here that’s built on something meaningful. The different ways to arrange numbers in a set of boxes like this pair up with a select, interesting set of matrices whose elements are complex-valued numbers. You might wonder who introduced complex-valued numbers, let alone matrices of them, into evidence. Well, who cares? We’ve got them. They do a lot of work for us. So much work they have a common name: the representations of the “symmetric group over the complex numbers”. As my leading example suggests, they’re all over the place in quantum mechanics. They’re good to have around in regular physics too, at least in the right neighborhoods.

These Young Tableaus turn up over and over in group theory. They match up with polynomials, because yeah, everything is polynomials. But they turn out to describe polynomial representations of some of the superstar groups out there. Groups with names like the General Linear Group (square matrices), or the Special Linear Group (square matrices with determinant equal to 1), or the Special Unitary Group (that thing where quantum mechanics says there have to be particles whose names are obscure Greek letters with superscripts of up to five + marks). If you’d care for more, here’s a chapter by Dr Frank Porter describing, in part, how you get from Young Tableaus to the obscure baryons.

Porter’s chapter also lets me tie this back to tensors. Tensors have varied ranks, the number of different indices you can have on the things. What happens when you swap pairs of indices in a tensor? How many ways can you swap them, and what does that do to what the tensor describes? Please tell me you already suspect this is going to match something in Young Tableaus. It does, by way of the symmetries and permutations mentioned above. The tableaus are in there.
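For a rank-two tensor the whole story fits in a few lines. Splitting the tensor into a swap-symmetric piece and a swap-antisymmetric piece is the single-row and single-column two-box tableaux all over again. A minimal numpy sketch, with an arbitrary made-up tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3))      # an arbitrary rank-two tensor, made up for the demo

T_sym  = (T + T.T) / 2           # the single-row, two-box tableau: unchanged by an index swap
T_anti = (T - T.T) / 2           # the two-row, single-box tableau: flips sign under the swap

assert np.allclose(T, T_sym + T_anti)     # the two pieces rebuild the original
assert np.allclose(T_sym, T_sym.T)        # symmetric under swapping indices
assert np.allclose(T_anti, -T_anti.T)     # antisymmetric under swapping indices
```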

As I say, three months ago I had no idea these things existed. If I ever ran across them it was from seeing the name at MathWorld’s list of terms that start with ‘Y’. The article shows some nice examples (with each row set atop the previous one) but doesn’t make clear how much stuff this subject runs through. I can’t fit everything into a reasonable essay. (For example: the number of ways to arrange, say, 20 boxes into rows meeting these rules is itself a partition problem. Partition problems are probability and statistical mechanics. Statistical mechanics is the flow of heat, and the movement of the stars in a galaxy, and the chemistry of life.) I am delighted by what does fit.

One Way To Get Your Own Theorem


While doing some research to better grouse about Ken Keeler’s Futurama theorem I ran across an amusing site I hadn’t known about. It is Theory Mine, a site that allows you to hire — and name — a genuine, mathematically sound theorem. The spirit of the thing is akin to that scam in which you “name” a star. But this is more legitimate in that, you know, it’s got any legitimacy. For this, you’re buying naming rights from someone who has any rights to sell. By convention the discoverer of a theorem can name it whatever she wishes, and there’s one chance in ten that anyone else will use the name.

I haven’t used it. I’ve made my own theorems, thanks, and could put them on a coffee mug or t-shirt if I wished to make a particularly boring t-shirt. But I’m delighted by the scheme. They don’t have a team of freelance mathematicians whipping up stuff and hoping it isn’t already known. Not for the kinds of prices they charge. This should inspire the question: well, where do the theorems come from?

The scheme uses an automated reasoning system. I don’t know the details of how it works, but I can think of a system by which this might work. It goes back to the Crisis of Foundations, the time in the late 19th/early 20th century when logicians got very worried that we were still letting physical intuitions and unstated assumptions stay in our mathematics. One solution: turn everything into symbols, icons with no connotations. The axioms of mathematics become a couple basic symbols. The laws of logical deduction become things we can do with the symbols, converting one line of symbols into a related other line. Every line we get is a theorem. And we know it’s correct. To write out the theorem in this scheme is to write out its proof, and to feel like you’re touching some deep magic. And there’s no human frailties in the system, besides the thrill of reeling off True Names like that.

You may not be sure how this works. It may help to compare it to a slightly-fun number coding scheme. I mean the one where you start with a number, like, ‘1’. Then you write down how many times and which digit appears. There’s a single ‘1’ in that string, so you would write down ’11’. And repeat: in ’11’ there’s a sequence of two ‘1’s, so you would write down ’21’. And repeat: there’s a single ‘2’ and a single ‘1’, so you then write down ‘1211’. And again: there’s a single ‘1’, a single ‘2’, and then a double ‘1’, so you next write ‘111221’. And so on until you get bored or die.
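That “look and say” scheme is easy enough to hand to a computer. A few lines of Python, just for illustration:

```python
from itertools import groupby

def look_and_say(digits):
    """Read off each run of identical digits: '1211' has one '1', one '2',
    then two '1's, so it reads out as '111221'."""
    return ''.join(str(len(list(run))) + digit for digit, run in groupby(digits))

line = '1'
for _ in range(5):
    print(line)
    line = look_and_say(line)
# prints 1, 11, 21, 1211, 111221
```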

When we do this for mathematics we start with a couple different basic units. And we also start with several things we may do to most strings of symbols. So there’s rarely a single line that follows from the previous. There’s an ever-expanding tree of known truths. This may stave off boredom but I make no promises about death.
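I don’t know what machinery Theory Mine actually runs. But as a toy picture of “a few axioms, a few rewrite rules, and every line you can reach is a theorem”, here’s Douglas Hofstadter’s old MIU puzzle ground out by a breadth-first crawl. Everything about it, from the rule set to the cutoff of thirty strings, is my stand-in for illustration, not anyone’s real prover.

```python
from collections import deque

def successors(s):
    """Apply each rewrite rule of Hofstadter's toy MIU system wherever it fits."""
    results = set()
    if s.endswith('I'):
        results.add(s + 'U')                      # rule 1: xI  -> xIU
    if s.startswith('M'):
        results.add('M' + s[1:] * 2)              # rule 2: Mx  -> Mxx
    for i in range(len(s)):
        if s[i:i + 3] == 'III':
            results.add(s[:i] + 'U' + s[i + 3:])  # rule 3: III -> U
        if s[i:i + 2] == 'UU':
            results.add(s[:i] + s[i + 2:])        # rule 4: UU  -> (nothing)
    return results

# Breadth-first crawl of the ever-expanding tree of "theorems" from the axiom MI.
theorems, frontier = {'MI'}, deque(['MI'])
while frontier and len(theorems) < 30:            # arbitrary cutoff so it stops
    for t in successors(frontier.popleft()):
        if t not in theorems:
            theorems.add(t)
            frontier.append(t)

print(sorted(theorems, key=len)[:10])
```

Swap in a different rule set and you have a different little mathematics; the crawl itself doesn’t change.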

The result of this is pages and pages that look like Ancient High Martian. I don’t feel the thrill of doing this. Some people do, though. And as recreational mathematics goes I suppose it’s at least as good as sudoku. This kind of project, rewarding indefatigability and thoroughness, is perfect for automation anyway. Let the computer work out all the things we can prove are true.

If I’m reading Theory Mine’s description correctly they seem to be doing something roughly like this. If they’re not, well, you go ahead and make your own rival service using my paragraphs as your system. All I ask is one penny for every use of L’Hôpital’s Rule, a theorem named for Guillaume de l’Hôpital and discovered by Johann Bernoulli. (I have heard that Bernoulli was paid for his work, but I do not know that’s true. I have now explained why, if we suppose that to be true, my prior sentence is a very funny joke and you should at minimum chuckle.)

This should inspire the question: what do we need mathematicians for, then? It’s for the same reason we need writers, when it would be possible to automate the composing of sentences that satisfy the rules of English grammar. I mean if there were rules to English grammar. That we can identify a theorem that’s true does not mean it has even the slightest interest to anyone, ever. There’s much more that could be known than that we could ever care about.

You can see this in Theory Mine’s example of Quentin’s Theorem. Quentin’s Theorem is about an operation you can do on a set whose elements consist of the non-negative whole numbers with a separate value, which they call color, attached. You can add these colored-numbers together according to some particular rules about how the values and the colors add. The order of this addition normally matters: blue two plus green three isn’t the same as green three plus blue two. Quentin’s Theorem finds cases where, if you add enough colored-numbers together, the order doesn’t matter. I know. I am also staggered by how useful this fact promises to be.

Yeah, maybe there is some use. I don’t know what it is. If anyone’s going to find the use it’ll be a mathematician. Or a physicist who’s found some bizarre quark properties she wants to codify. Anyway, if what you’re interested in is “what can you do to make a vertical column stable?” then the automatic proof generator isn’t helping you at all. Not without a lot of work put in to guiding it. So we can skip the hard work of finding and proving theorems, if we can do the hard work of figuring out where to look for these theorems instead. Always the way.

You also may wonder how we know the computer is doing its work right. It’s possible to write software that is logically proven to be correct. That is, the software can’t produce anything but the designed behavior. We don’t usually write software this way. It’s harder to write, because you have to actually design your software’s behavior. And we can get away without doing it. Usually there’s some human overseeing the results who can say what to do if the software seems to be going wrong. Advocates of logically-proven software point out that we’re running ever more software, often passing its results straight on to other programs. This can turn a bug in one program into a bug in the whole world faster than a responsible human can say, “I dunno. Did you try turning it off and on again?” I’d like to think we could get more logically-proven software. But I also fear I couldn’t write software that sound and, you know, mathematics blogging isn’t earning me enough to eat on.

Also, yes, even proven software will malfunction if the hardware the computer’s on malfunctions. That’s rare, but does happen. Fortunately, it’s possible to automate the checking of a proof, and that’s easier to do than creating a proof in the first place. We just have to prove we have the proof-checker working. Certainty would be a nice thing if we ever got it, I suppose.

Reading the Comics, February 15, 2017: SMBC Cuts In Line Edition


It’s another busy enough week for mathematically-themed comic strips that I’m dividing the harvest in two. There’s a natural cutting point since there weren’t any comics I could call relevant for the 15th. But I’m moving a Saturday Morning Breakfast Cereal, of course, from the 16th into this pile. That’s because there’s another Saturday Morning Breakfast Cereal, of course, from after the 16th that I might include. I’m still deciding if it’s close enough to on topic. We’ll see.

John Graziano’s Ripley’s Believe It Or Not for the 12th mentions the “Futurama Theorem”. The trivia is true, in that writer Ken Keeler did create a theorem for a body-swap plot he had going. The premise was that any two bodies could swap minds at most one time. So, after a couple people had swapped bodies, was there any way to get everyone back to their correct original body? There is, if you bring two more people into the body-swapping party. It’s clever.

From reading comment threads about the episode I conclude people are really awestruck by the idea of creating a theorem for a TV show episode. The thing is that “a theorem” isn’t necessarily a mind-boggling piece of work. It’s just the name mathematicians give when we have a clearly-defined logical problem and its solution. A theorem and its proof can be a mind-wrenching bit of work, like Fermat’s Last Theorem or the Four-Color Map Theorem are. Or it can be on the verge of obvious. Keeler’s proof isn’t on the obvious side of things. But it is the reasoning one would have to do to solve the body-swap problem the episode posited without cheating. Logic and good story-telling are, as often, good partners.

Teresa Burritt’s Frog Applause is a Dadaist nonsense strip. But for the 13th it hit across some legitimate words, about a 14 percent false-positive rate. This is something run across in hypothesis testing. The hypothesis is something like “is whatever we’re measuring so much above (or so far below) the average that it’s not plausibly just luck?” A false positive is what it sounds like: our analysis said yes, this can’t just be luck, and it turns out that it was. This turns up most notoriously in medical screenings, when we want to know if there’s reason to suspect a health risk, and in forensic analysis, when we want to know if a particular person can be shown to have been a particular place at a particular time. A 14 percent false positive rate doesn’t sound very good — except.

Suppose we are looking for a rare condition. Say, something one person out of 500 will have. A test that’s 99 percent accurate will turn up positives for the one person who has got it and for five of the people who haven’t. It’s not that the test is bad; it’s just there are so many negatives to work through. If you can screen out a good number of the negatives, though, the people who haven’t got the condition, then the good test will turn up fewer false positives. So suppose you have a cheap or easy or quick test that doesn’t miss any true positives but does have a 14 percent false positive rate. That would screen out 430 of the people who haven’t got whatever we’re testing for, leaving only 71 people who need the 99-percent-accurate test. This can make for a more effective use of resources.
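The arithmetic in that paragraph is quick enough to lay out, give or take a person lost to rounding. A small sketch:

```python
population = 500
havers     = 1                      # one person in 500 actually has the condition
non_havers = population - havers    # 499

cheap_false_positive_rate = 0.14    # the cheap screen misses nobody but flags 14 percent wrongly
flagged_wrongly = round(non_havers * cheap_false_positive_rate)   # about 70 people

screened_out   = non_havers - flagged_wrongly    # about 429 people cleared by the cheap test
need_good_test = havers + flagged_wrongly        # about 71 people left for the 99-percent test

print(screened_out, need_good_test)              # roughly the 430 and 71 in the paragraph
```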

Gary Wise and Lance Aldrich’s Real Life Adventures for the 13th is an algebra-in-real-life joke and I can’t make something deeper out of that.

Mike Shiell’s The Wandering Melon for the 13th is a spot of wordplay built around statisticians. Good for taping to the mathematics teacher’s walls.

Eric the Circle for the 14th, this one by “zapaway”, is another bit of wordplay. Tans and tangents.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 16th identifies, aptly, a difference between scientists and science fans. Weinersmith is right that loving trivia is a hallmark of a fan. Expertise — in any field, not just science — is more about recognizing patterns of problems and concepts, ways to bring approaches from one field into another, this sort of thing. And the digits of π are great examples of trivia. There’s no need for anyone to know the 1,681st digit of π. There’s few calculations you could ever do when you needed more than three dozen digits. But if memorizing digits seems like fun then π is a great set to learn. e is the only other number at all compelling.

The thing is, it’s very hard to become an expert in something without first being a fan of it. It’s possible, but if a field doesn’t delight you why would you put that much work into it? So even though the scientist might have long since gotten past caring how many digits of π they know, it’s awfully hard to get something memorized in the flush of fandom out of your head.

I know you’re curious. I can only remember π out to 3.14158926535787962. I might have gotten farther if I’d tried, but I actually got a digit wrong, inserting a ‘3’ before that last ’62’, and the effort to get that mistake out of my head obliterated any desire to waste more time memorizing digits. For e I can only give you 2.718281828. But there’s almost no hope I’d know that far if it weren’t for how e happens to repeat that 1828 stanza right away.

The End 2016 Mathematics A To Z: Monster Group


Today’s is one of my requested mathematics terms. This one comes to us from group theory, by way of Gaurish, and as ever I’m thankful for the prompt.

Monster Group.

It’s hard to learn from an example. Examples are great, and I wouldn’t try teaching anything subtle without one. Might not even try teaching the obvious without one. But a single example is dangerous. The learner has trouble telling what parts of the example are the general lesson to learn and what parts are just things that happen to be true for that case. Having several examples, of different kinds of things, saves the student. The thing in common to many different examples is the thing to retain.

The mathematics major learns group theory in Introduction To Not That Kind Of Algebra, MAT 351. A group extracts the barest essence of arithmetic: a bunch of things and the ability to add them together. So what’s an example? … Well, the integers do nicely. What’s another example? … Well, the integers modulo two, where the only things are 0 and 1 and we know 1 + 1 equals 0. What’s another example? … The integers modulo three, where the only things are 0 and 1 and 2 and we know 1 + 2 equals 0. How about another? … The integers modulo four? Modulo five?

All true. All, also, basically the same thing. The whole set of integers, or of real numbers, is different. But as finite groups, the integers modulo anything are nice, easy-to-understand groups. They’re known as Cyclic Groups for reasons I’ll explain if asked. But all the Cyclic Groups are kind of the same.

So how about another example? And here we get some good ones. There’s the Permutation Groups. These are fun. You start off with a set of things. You can label them anything you like, but you’re daft if you don’t label them the counting numbers. So, say, the set of things 1, 2, 3, 4, 5. Start with them in that order. A permutation is the swapping of any pair of those things. So swapping, say, the second and fifth things to get the list 1, 5, 3, 4, 2. The collection of all the swaps you can make is the Permutation Group on this set of things. The things in the group are not 1, 2, 3, 4, 5. The things in the permutation group are “swap the second and fifth thing” or “swap the third and first thing” or “swap the fourth and the third thing”. You maybe feel uneasy about this. That’s all right. I suggest playing with this until you feel comfortable because it is a lot of fun to play with. Playing in this case means writing out all the ways you can swap stuff, which you can always do as a string of swaps of exactly two things.
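If you’d like to play along on a computer, here’s a tiny sketch of applying a string of swaps to the list 1 through 5. The pairs-of-positions representation is just my own convenience.

```python
def apply_swaps(items, swaps):
    """Apply a string of swaps, each a pair of 1-based positions, left to right."""
    items = list(items)
    for a, b in swaps:
        items[a - 1], items[b - 1] = items[b - 1], items[a - 1]
    return items

start = [1, 2, 3, 4, 5]
print(apply_swaps(start, [(2, 5)]))           # [1, 5, 3, 4, 2], the swap from the essay
print(apply_swaps(start, [(2, 5), (4, 2)]))   # two swaps strung together: [1, 4, 3, 5, 2]
```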

(Some people may remember an episode of Futurama that involved a brain-swapping machine. Or a body-swapping machine, if you prefer. The gimmick of the episode is that two people could only swap bodies/brains exactly one time. The problem was how to get everybody back in their correct bodies. It turns out to be possible to do, and one of the show’s writers did write a proof of it. It’s shown on-screen for a moment. Many fans were awestruck by an episode of the show inspiring a Mathematical Theorem. They’re overestimating how rare theorems are. But it is fun when real mathematics gets done as a side effect of telling a good joke. Anyway, the theorem fits well in group theory and the study of these permutation groups.)

So the student wanting examples of groups can get the Permutation Group on three elements. Or the Permutation Group on four elements. The Permutation Group on five elements. … You kind of see, this is certainly different from those Cyclic Groups. But they’re all kind of like each other.

An “Alternating Group” is one where all the elements in it are an even number of swaps strung together. So, “swap the second and fifth things” would not be in an alternating group. But “swap the second and fifth things, and swap the fourth and second things” would be. And so the student needing examples can look at the Alternating Group on two elements. Or the Alternating Group on three elements. The Alternating Group on four elements. And so on. It’s slightly different from the Permutation Group. It’s certainly different from the Cyclic Group. But still, if you’ve mastered the Alternating Group on five elements you aren’t going to see the Alternating Group on six elements as all that different.
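There’s a handy way to check whether a rearrangement belongs to the Alternating Group without keeping track of which swaps built it: count the pairs of entries that ended up out of order. An even count means an even number of swaps. A small sketch, with my own example lists:

```python
def is_even_rearrangement(perm):
    """Count inversions, pairs of entries out of order; an even count means
    the rearrangement can be built from an even number of swaps."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return inversions % 2 == 0

print(is_even_rearrangement([1, 5, 3, 4, 2]))   # False: one swap away from 1..5, so odd
print(is_even_rearrangement([1, 4, 3, 5, 2]))   # True: two swaps away, so even
```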

Cyclic Groups and Alternating Groups have some stuff in common. Permutation Groups not so much and I’m going to leave them in the above paragraph, waving, since they got me to the Alternating Groups I wanted.

One is that they’re finite. At least they can be. I like finite groups. I imagine students like them too. It’s nice having a mathematical thing you can write out in full and know you aren’t missing anything.

The second thing is that they are, or they can be, “simple groups”. That’s … a challenge to explain. This has to do with the structure of the group and the kinds of subgroup you can extract from it. It’s very very loosely and figuratively and do not try to pass this off at your thesis defense kind of like being a prime number. In fact, Cyclic Groups for a prime number of elements are simple groups. So are Alternating Groups on five or more elements.

So we get to wondering: what are the finite simple groups? Turns out they come in four main families. One family is the Cyclic Groups for a prime number of things. One family is the Alternating Groups on five or more things. One family is this collection called the Chevalley Groups. Those are mostly things about projections: the ways to map one set of coordinates into another. We don’t talk about them much in Introduction To Not That Kind Of Algebra. They’re too deep into Geometry for people learning Algebra. The last family is this collection called the Twisted Chevalley Groups, or the Steinberg Groups. And they … uhm. Well, I never got far enough into Geometry I’m Guessing to understand what they’re for. I’m certain they’re quite useful to people working in the field of order-three automorphisms of the whatever exactly D4 is.

And that’s it. That’s all the families there are. If it’s a finite simple group then it’s one of these. … Unless it isn’t.

Because there are a couple of stragglers. There are a few finite simple groups that don’t fit in any of the four big families. And it really is only a few. I would have expected an infinite number of weird little cases that don’t belong to a family that looks similar. Instead, there are 26. (27 if you decide a particular one of the Steinberg Groups doesn’t really belong in that family. I’m not familiar enough with the case to have an opinion.) Funny number to have turn up. It took ten thousand pages to prove there were just the 26 special cases. I haven’t read them all. (I haven’t read any of the pages. But my Algebra professors at Rutgers were proud to mention their department’s work in tracking down all these cases.)

Some of these cases have some resemblance to one another. But not enough to see them as a family the way the Cyclic Groups are. We bundle all these together in a wastebasket taxon called “the sporadic groups”. The first five of them were worked out in the 1860s. The last of them was worked out in 1980, seven years after its existence was first suspected.

The sporadic groups all have weird sizes. The smallest one, known as M11 (for “Mathieu”, who found it and four of its siblings in the 1860s) has 7,920 things in it. They get enormous soon after that.

The biggest of the sporadic groups, and the last one described, is the Monster Group. It’s known as M. It has a lot of things in it. In particular it’s got 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 things in it. So, you know, it’s not like we’ve written out everything that’s in it. We’ve just got descriptions of how you would write out everything in it, if you wanted to try. And you can get a good argument going about what it means for a mathematical object to “exist”, or to be “created”. There are something like 10^54 things in it. That’s more than a trillion times a trillion times the number of stars in the observable universe. Not just the stars in our galaxy, but all the stars in all the galaxies we could in principle ever see.
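For the record, that enormous number has a tidy prime factorization, 2^46 · 3^20 · 5^9 · 7^6 · 11^2 · 13^3 · 17 · 19 · 23 · 29 · 31 · 41 · 47 · 59 · 71, a fact you can look up and then check in a couple lines of Python:

```python
prime_powers = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
                17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1,
                47: 1, 59: 1, 71: 1}

order = 1
for prime, power in prime_powers.items():
    order *= prime ** power

print(order)            # the 54-digit number quoted above
print(len(str(order)))  # 54, hence "something like 10^54"
```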

It’s one of the rare things for which “Brobdingnagian” is an understatement. Everything about it is mind-boggling, the sort of thing that staggers the imagination more than infinitely large things do. We don’t really think of infinitely large things; we just picture “something big”. A number like that one above is definite, and awesomely big. Just read off the digits of that number; it sounds like what we imagine infinity ought to be.

We can make a chart, called the “character table”, which describes how subsets of the group interact with one another. The character table for the Monster Group is 194 rows tall and 194 columns wide. The Monster Group can be represented as this, I am solemnly assured, logical and beautiful algebraic structure. It’s something like a polyhedron in rather more than three dimensions of space. In particular it needs 196,884 dimensions to show off its particular beauty. I am taking experts’ word for it. I can’t quite imagine more than 196,883 dimensions for a thing.

And it’s a thing full of mystery. This creature of group theory makes us think of the number 196,884. The same 196,884 turns up in number theory, the study of how integers are put together. It’s the first non-boring coefficient in a thing called the j-function. It’s not coincidence. This bit of number theory and this bit of group theory are bound together, but it took some years for anyone to quite understand why.

There are more mysteries. The character table has 194 rows and columns. Each column implies a function. Some of those functions are duplicated; there are 171 distinct ones. But some of the distinct ones it turns out you can find by adding together multiples of others. Take those out and 163 genuinely independent ones remain. 163 appears again in number theory, in the study of algebraic integers. These are, of course, not integers at all. They’re things that look like complex-valued numbers: some real number plus some (possibly other) real number times the square root of some specified negative number. They’ve got neat properties. Or weird ones.

You know how with integers there’s just one way to factor them? Like, fifteen is equal to three times five and no other set of prime numbers? Algebraic integers don’t work like that. There’s usually multiple ways to do that. There are exceptions, algebraic integers that still have unique factorings. They happen only for a few square roots of negative numbers. The biggest of those negative numbers? Minus 163.
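The classic demonstration of the failure uses the square root of minus five, which is not one of the lucky exceptions. There the number six factors two genuinely different ways, as 2 times 3 and as (1 + √-5)(1 − √-5). A little sketch of the arithmetic, with pairs of ordinary numbers standing in for these algebraic integers:

```python
# Numbers of the form a + b*sqrt(-5), stored as the pair (a, b).
def multiply(x, y):
    a, b = x
    c, d = y
    # (a + b sqrt(-5)) * (c + d sqrt(-5)) = (ac - 5bd) + (ad + bc) sqrt(-5)
    return (a * c - 5 * b * d, a * d + b * c)

print(multiply((2, 0), (3, 0)))    # (6, 0): six as two times three
print(multiply((1, 1), (1, -1)))   # (6, 0): six as (1 + sqrt(-5))(1 - sqrt(-5))
```

The few lucky negative numbers are the ones where this kind of double life never happens.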

I don’t know if this 163 appearance means something. As I understand the matter, neither does anybody else.

There is some link to the mathematics of string theory. That’s an interesting but controversial and hard-to-experiment-upon model for how the physics of the universe may work. But I don’t know string theory well enough to say what it is or how surprising this should be.

The Monster Group creates a monster essay. I suppose it couldn’t do otherwise. I suppose I can’t adequately describe all its sublime mystery. Dr Mark Ronan has written a fine web page describing much of the Monster Group and the history of our understanding of it. He also has written a book, Symmetry and the Monster, to explain all this in greater depth. I’ve not read the book. But I do mean to, now.

Arthur Christmas and the End of Time


In working out my little Arthur Christmas-inspired problem, I argued that if the reindeer take some nice rational number of hours to complete one orbit of the Earth, eventually they’ll meet back up with Arthur and Grand-Santa stranded on the ground. And if the reindeer take an irrational number of hours to make one orbit, they’ll never meet again, although if they wait long enough, they’ll get pretty close together, eventually.

So far this doesn’t sound like a really thrilling result: the two parties, moving on their own paths, either meet again, or they don’t. Doesn’t sound quite like I earned the four-figure income I got from mathematics work last year. But here’s where I get to be worth it: if the reindeer and Arthur don’t meet up again, but I can accept their being very near one another, then they will get as close as I like. I only figured how long it would take for the two to get about 23 centimeters apart, but I could wait for them to be two centimeters apart, or two millimeters, or two angstroms if I wanted. I’d pay for this nearer miss with a longer wait. And this gives me my opening to a really stunning bit of mathematics.
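You can watch that trade-off happen in a toy model. Suppose the reindeer take an irrational number of hours per orbit — the square root of two, say, purely for illustration and not anything from the movie — and track how close they come back to where they started:

```python
import math

period = math.sqrt(2)    # hours per orbit: irrational, and made up for the demo

closest = 1.0
for orbit in range(1, 200_000):
    offset = (orbit * period) % 1.0        # fraction of a circle away from the start
    offset = min(offset, 1.0 - offset)     # measure the gap the short way around
    if offset < closest:
        closest = offset
        print(f"after {orbit:7d} orbits: within {closest:.2e} of the starting point")
```

Each printed line is a new record-near miss, and each one takes longer to arrive than the one before.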

