## The Summer 2017 Mathematics A To Z: Young Tableau

I never heard of today’s entry topic three months ago. Indeed, three weeks ago I was still making guesses about just what Gaurish, author of For the love of Mathematics, was asking about. It turns out to be maybe the grand union of everything that’s ever been in one of my A To Z sequences. I overstate, but barely.

# Young Tableau.

The specific thing that a Young Tableau is is beautiful in its simplicity. It could almost be a recreational mathematics puzzle, except that it isn’t challenging enough.

Start with a couple of boxes laid in a row. As many or as few as you like.

Now set another row of boxes. You can have as many as the first row did, or fewer. You just can’t have more. Set the second row of boxes — well, your choice. Either below the first row, or else above. I’m going to assume you’re going below the first row, and will write my directions accordingly. If you do things the other way you’re following a common enough convention. I’m leaving it on you to figure out what the directions should be, though.

Now add in a third row of boxes, if you like. Again, as many or as few boxes as you like. There can’t be more than there are in the second row. Set it below the second row.

And a fourth row, if you want four rows. Again, no more boxes in it than the third row had. Keep this up until you’ve got tired of adding rows of boxes.

How many boxes do you have? I don’t know. But take the numbers 1, 2, 3, 4, 5, and so on, up to whatever the count of your boxes is. Can you fill in one number for each box? So that the numbers are always increasing as you go left to right in a single row? And as you go top to bottom in a single column? Yes, of course. Go in order: ‘1’ for the first box you laid down, then ‘2’, then ‘3’, and so on, increasing up to the last box in the last row.

Can you do it in another way? Any other order?

Except for the simplest of arrangements, like a single row of four boxes or three rows of one box atop another, the answer is yes. There can be many of them, turns out. Seven boxes, arranged three in the first row, two in the second, one in the third, and one in the fourth, have 35 possible arrangements. It doesn’t take a very big diagram to get an enormous number of possibilities. Could be fun drawing an arbitrary stack of boxes and working out how many arrangements there are, if you have some time in a dull meeting to pass.
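There is, it turns out, a closed form for these counts: the hook length formula, which the essay doesn’t name but which is standard in the field. A sketch of it in Python — the number of fillings is the factorial of the box count, divided by the product of each box’s “hook” (the boxes to its right, the boxes below it, and itself):

```python
from math import factorial

def hook_length_count(shape):
    """Number of valid fillings (standard Young tableaux) of a shape,
    by the hook length formula: n! over the product of all hook lengths.
    shape is a non-increasing list of row lengths."""
    n = sum(shape)
    # column lengths: how many rows reach at least column j
    col = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    product = 1
    for i, row_len in enumerate(shape):
        for j in range(row_len):
            # hook: boxes to the right, boxes below, plus the box itself
            product *= (row_len - j - 1) + (col[j] - i - 1) + 1
    return factorial(n) // product

print(hook_length_count([3, 2, 1, 1]))  # the seven-box example above: 35
```

Handy for that dull meeting: you can check an arbitrary stack of boxes in a line or two.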

Let me step away from filling boxes. In one of its later, disappointing, seasons Futurama finally did a body-swap episode. The gimmick: two bodies could only swap the brains within them one time. So would it be possible to put Bender’s brain back in his original body, if he and Amy (or whoever) had already swapped once? The episode drew minor amusement in mathematics circles, and a lot of amazement in pop-culture circles. The writer, who has a doctorate in applied mathematics, found a proof that showed it was indeed always possible, even after many pairs of people had swapped bodies, so long as two fresh people who hadn’t swapped yet could join in. The idea that a theorem was created for a TV show impressed many people who think theorems are rarer and harder to create than they necessarily are.

It was a legitimate theorem, and in a well-developed field of mathematics. It’s about permutation groups. These are the study of the ways you can swap pairs of things. I grant this doesn’t sound like much of a field. There is a surprising lot of interesting things to learn just from studying how stuff can be swapped, though. It’s even of real-world relevance. Most subatomic particles of a kind — electrons, top quarks, gluons, whatever — are identical to every other particle of the same kind. Physics wouldn’t work if they weren’t. What would happen if we swap the electron on the left for the electron on the right, and vice-versa? How would that change our physics?

A chunk of quantum mechanics studies what kinds of swaps of particles would produce an observable change, and what kind of swaps wouldn’t. When the swap doesn’t make a change we can describe this as a symmetric operation. When the swap does make a change, that’s an antisymmetric operation. And — the Young Tableau that’s a single row of two boxes? That matches up well with this symmetric operation. The Young Tableau that’s two rows of a single box each? That matches up with the antisymmetric operation.
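The effect of a swap on symmetric and antisymmetric combinations takes almost no machinery to see. A sketch — the function f here is an arbitrary stand-in for a two-particle state, not anything from actual physics:

```python
def f(x1, x2):
    # an arbitrary, deliberately asymmetric two-particle "state"
    return x1 + 2 * x2 * x2

def symmetric(x1, x2):
    # the symmetric combination: average of the state and its swap
    return (f(x1, x2) + f(x2, x1)) / 2

def antisymmetric(x1, x2):
    # the antisymmetric combination: half the difference
    return (f(x1, x2) - f(x2, x1)) / 2

# swapping the particles leaves the symmetric part unchanged ...
assert symmetric(3, 5) == symmetric(5, 3)
# ... and flips the sign of the antisymmetric part
assert antisymmetric(3, 5) == -antisymmetric(5, 3)
```

The one-row-of-two-boxes tableau goes with the first behavior, the two-rows-of-one-box tableau with the second.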

How many ways could you set up three boxes, according to the rules of the game? A single row of three boxes, sure. One row of two boxes and a row of one box. Three rows of one box each. How many ways are there to assign the numbers 1, 2, and 3 to those boxes, and satisfy the rules? One way to do the single row of three boxes. Also one way to do the three rows of a single box. There’s two ways to do the one-row-of-two-boxes, one-row-of-one-box case.
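These counts are small enough to check by brute force: try every way of filling in the numbers and test the rules of the game directly. A sketch:

```python
from itertools import permutations

def tableau_count(shape):
    """Count the fillings of 1..n that increase left-to-right in rows
    and top-to-bottom in columns, by trying every permutation."""
    n = sum(shape)
    count = 0
    for perm in permutations(range(1, n + 1)):
        # cut the permutation into rows of the given lengths
        rows, k = [], 0
        for length in shape:
            rows.append(perm[k:k + length])
            k += length
        # rule 1: each row increases left to right
        ok = all(r[j] < r[j + 1] for r in rows for j in range(len(r) - 1))
        # rule 2: each column increases top to bottom
        ok = ok and all(rows[i][j] < rows[i + 1][j]
                        for i in range(len(rows) - 1)
                        for j in range(len(rows[i + 1])))
        count += ok
    return count

print(tableau_count([3]), tableau_count([2, 1]), tableau_count([1, 1, 1]))
# 1 2 1, matching the counts worked out above
```

It grinds through n-factorial cases, so it's only practical for small diagrams, but for three or seven boxes it's instant.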

What if we have three particles? How could they interact? Well, all three could be symmetric with each other. This matches the first case, the single row of three boxes. All three could be antisymmetric with each other. This matches the three rows of one box. Or you could have two particles that are symmetric with each other and antisymmetric with the third particle. Or two particles that are antisymmetric with each other but symmetric with the third particle. Two ways to do that. Two ways to fill in the one-row-of-two-boxes, one-row-of-one-box case.

This isn’t merely a neat, aesthetically interesting coincidence. I wouldn’t spend so much time on it if it were. There’s a matching here that’s built on something meaningful. The different ways to arrange numbers in a set of boxes like this pair up with a select, interesting set of matrices whose elements are complex-valued numbers. You might wonder who introduced complex-valued numbers, let alone matrices of them, into evidence. Well, who cares? We’ve got them. They do a lot of work for us. So much work they have a common name: they’re the representations of the symmetric group over the complex numbers. As my leading example suggests, they’re all over the place in quantum mechanics. They’re good to have around in regular physics too, at least in the right neighborhoods.

These Young Tableaus turn up over and over in group theory. They match up with polynomials, because yeah, everything is polynomials. But they turn out to describe polynomial representations of some of the superstar groups out there. Groups with names like the General Linear Group (square matrices), or the Special Linear Group (square matrices with determinant equal to 1), or the Special Unitary Group (that thing where quantum mechanics says there have to be particles whose names are obscure Greek letters with superscripts of up to five + marks). If you’d care for more, here’s a chapter by Dr Frank Porter describing, in part, how you get from Young Tableaus to the obscure baryons.

Porter’s chapter also lets me tie this back to tensors. Tensors have varied ranks, the number of different indices you can have on the things. What happens when you swap pairs of indices in a tensor? How many ways can you swap them, and what does that do to what the tensor describes? Please tell me you already suspect this is going to match something in Young Tableaus. It does, by way of the symmetries and permutations mentioned above. But the match is there.

As I say, three months ago I had no idea these things existed. If I ever ran across them it was from seeing the name at MathWorld’s list of terms that start with ‘Y’. The article shows some nice examples (with each row atop the previous one) but doesn’t make clear how much stuff this subject runs through. I can’t fit everything in to a reasonable essay. (For example: the number of ways to arrange, say, 20 boxes into rows meeting these rules is itself a partition problem. Partition problems are probability and statistical mechanics. Statistical mechanics is the flow of heat, and the movement of the stars in a galaxy, and the chemistry of life.) I am delighted by what does fit.

## Reading the Comics, September 19, 2017: Visualization Edition

Comic Strip Master Command apparently doesn’t want me talking about the chances of Friday’s Showcase Showdown. They sent me enough of a flood of mathematically-themed strips that I don’t know when I’ll have the time to talk about the probability of that episode. (The three contestants spinning the wheel all tied, each spinning $1.00. And then in the spin-off, two of the three contestants also spun $1.00. And this after what was already a perfect show, in which the contestants won all six of the pricing games.) Well, I’ll do what comic strips I can this time, and carry on the last week of the Summer 2017 A To Z project, and we’ll see if I can say anything timely for Thursday or Saturday or so.

Jim Scancarelli’s Gasoline Alley for the 17th is a joke about the student embarrassing the teacher. It uses mathematics vocabulary for the specifics. And it does depict one of those moments that never stops, as you learn mathematics. There’s always more vocabulary. There are good reasons to have so much vocabulary. Having names for things seems to make them easier to work with. We can bundle together ideas about what a thing is like, and what it may do, under a name. I suppose the trouble is that we’ve accepted a convention that we should define terms before we use them. It’s nice, like having the dramatis personae listed at the start of the play. But having that list isn’t the same as saying why anyone should care. I don’t know how to balance the need to make clear up front what one means and the need to not bury someone under a heap of similar-sounding names.

Mac King and Bill King’s Magic in a Minute for the 17th is another puzzle drawn from arithmetic. Look at it now if you want to have the fun of working it out, as I can’t think of anything to say about it that doesn’t spoil how the trick is done. The top commenter does have a suggestion about how to do the problem by breaking one of the unstated assumptions in the problem. This is the kind of puzzle created for people who want to motivate talking about parity or equivalence classes. It’s neat when you can say something of substance about a problem using simple information, though.

Terri Libenson’s Pajama Diaries for the 18th of September, 2017. When I first read this I assumed that of course the base of the triangle had length 4 and the second leg, at a 45-degree angle to that, had length 2, and I wondered if those numbers could be consistent for a triangle to exist. Of course they could, though. There is a bit of fun to be had working out whether a particular triangle could exist from knowing its side lengths, though.

Terri Libenson’s Pajama Diaries for the 18th uses trigonometry as the marker for deep thinking. It comes complete with a coherent equation, too. It gives the area of a triangle with two legs that meet at a 45 degree angle. I admit I am uncomfortable with promoting the idea that people who are autistic have some super-reasoning powers. (Also with the pop-culture idea that someone who spots things others don’t is probably at least a bit autistic.) I understand wanting to think someone’s troubles have some compensation. But people are who they are; it’s not like they need to observe some “balance”.

Lee Falk and Wilson McCoy’s The Phantom for the 10th of August, 1950 was rerun Monday. It’s a bit of side joking between stories. And it uses knowledge of mathematics — and an interest in relativity — as signifier of civilization. I can only hope King Hano does better learning tensors on his own than I do.

Lee Falk and Wilson McCoy’s The Phantom for the 10th of August, 1950 and rerun the 18th of September, 2017. For my money, just reading a mathematics book doesn’t take. I need to take notes, as if it were in class. I don’t quite copy the book, but it comes close.

Mike Thompson’s Grand Avenue for the 18th goes back to classrooms and stuff for clever answers that subvert the teacher. And I notice, per the title given this edition, that the teacher’s trying to make the abstractness of three minus two tangible, by giving it an example. Which pairs it with …

Will Henry’s Wallace the Brave for the 18th, wherein Wallace asserts that arithmetic is easier if you visualize real things. I agree it seems to help with stuff like basic arithmetic. I wouldn’t want to try taking the cosine of an apple, though. Separating the quantity of a thing from the kind of thing measured is one of those subtle breakthroughs. It’s one of the ways that, for example, modern calculations differ from those of the Ancient Greeks. But it does mean thinking of numbers in, we’d say, a more abstract way than they did, and in a way that seems to tax us more.

Wallace the Brave recently had a book collection published, by the way. I mention it because this is one of a handful of comics with a character who likes pinball, and more, who really really loves the Williams game FunHouse. This is an utterly correct choice for favorite pinball game. It’s one of the games that made me a pinball enthusiast.

Ryan North’s Dinosaur Comics rerun for the 19th I mention on loose grounds. In it T-Rex suggests trying out an alternate model for how gravity works. The idea, of what seems to be gravity “really” being the shade cast by massive objects in a particle storm, was explored in the late 17th and early 18th century. It avoids the problem of not being able to quite say what propagates gravitational attraction. But it also doesn’t work, analytically. We would see the planets orbit differently if this were how gravity worked. And there’s the problem about mass and energy absorption, as pointed out in the comic. But it can often be interesting or productive to play with models that don’t work. You might learn something about models that do, or that could.

## The Summer 2017 Mathematics A To Z: X

We come now almost to the end of the Summer 2017 A To Z. Possibly also the end of all these A To Z sequences. Gaurish, of For the love of Mathematics, proposed that I talk about the obvious logical choice. The last promising thing I hadn’t talked about. I have no idea what to do for future A To Z’s, if they’re even possible anymore. But that’s a problem for some later time.

# X.

Some good advice that I don’t always take. When starting a new problem, make a list of all the things that seem likely to be relevant. Problems that are worth doing are usually about things. They’ll be quantities like the radius or volume of some interesting surface. The amount of a quantity under consideration. The speed at which something is moving. The rate at which that speed is changing. The length something has to travel. The number of nodes something must go across. Whatever. This all sounds like stuff from story problems. But most interesting mathematics is from a story problem; we want to know what this property is like. Even if we stick to a purely mathematical problem, there’s usually a couple of things that we’re interested in and that we describe. If we’re attacking the four-color map theorem, we have the number of territories to color. We have, for each territory, the number of territories that touch it.

Next, select a name for each of these quantities. Write it down, in the table, next to the term. The volume of the tank is ‘V’. The radius of the tank is ‘r’. The height of the tank is ‘h’. The fluid is flowing in at a rate, let’s say ‘q’. The fluid is flowing out at a rate, oh, let’s say ‘s’. And so on. You might take a moment to go through and think out which of these variables are connected to which other ones, and how. Volume, for example, is surely something to do with the radius times something to do with the height. It’s nice to have that stuff written down. You may not know the thing you set out to solve, but you at least know you’ve got this under control.
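Written out as code, the same bookkeeping might look like this. I’m assuming a cylindrical tank, and I’ve named the flow rates q_in and q_out so they can’t collide with the radius:

```python
from math import pi

r = 2.0      # radius of the tank, feet
h = 4.0      # height of the tank, feet
q_in = 1.5   # rate fluid flows in, cubic feet per minute
q_out = 0.5  # rate fluid flows out, cubic feet per minute

# volume really is something-to-do-with-the-radius times
# something-to-do-with-the-height; for a cylinder it's pi r^2 h
V = pi * r * r * h

# and the net rate the volume changes is inflow minus outflow
dV_dt = q_in - q_out

print(V, dV_dt)
```

The point isn’t the arithmetic; it’s that once the relations are written down, you can sanity-check any answer against them.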

I recommend this. It’s a good way to organize your thoughts. It establishes what things you expect you could know, or could want to know, about the problem. It gives you some hint how these things relate to each other. It sets you up to think about what kinds of relationships you figure to study when you solve the problem. It gives you a lifeline, when you’re lost in a sea of calculation. It’s reassurance that these symbols do mean something. Better, it shows what those things are.

I don’t always do it. I have my excuses. If I’m doing a problem that’s very like one I’ve already recently done, the things affecting it are probably the same. The names to give these variables are probably going to be about the same. Maybe I’ll make a quick sketch to show how the parts of the problem relate. If it seems like less work to recreate my thoughts than to write them down, I skip writing them down. Not always good practice. I tell myself I can always go back and do things the fully right way if I do get lost. So far that’s been true.

So, the names. Suppose I am interested in, say, the length of the longest rod that will fit around this hallway corridor. Then I am in a freshman calculus book, yes. Fine. Suppose I am interested in whether this pinball machine can be angled up the flight of stairs that has a turn in it. Then I will measure things like the width of the pinball machine. And the width of the stairs, and of the landing. I will measure this carefully. Pinball machines are heavy and there are many hilarious sad stories of people wedging them into hallways and stairwells four and a half stories up from the street. But: once I have identified, say, ‘width of pinball machine’ as a quantity of interest, why would I ever refer to it as anything but?

This is no dumb question. It is always dangerous to lose the link between the thing we calculate and the thing we are interested in. Without that link we are less able to notice mistakes in either our calculations or the thing we mean to calculate. Without that link we can’t do a sanity check, that reassurance that it’s not plausible we just might fit something 96 feet long around the corner. Or that we estimated that we could fit something of six square feet around the corner. It is common advice in programming computers to always give variables meaningful names. Don’t write ‘T’ when ‘Total’ or, better, ‘Total_Value_Of_Purchase’ is available. Why do we disregard this in mathematics, and switch to ‘T’ instead?

First reason is, well, try writing this stuff out. Your hand (h) will fall off (foff) in about fifteen minutes, twenty seconds. (15′ 20”). If you’re writing a program, the programming environment you have will auto-complete the variable after one or two letters in. Or you can copy and paste the whole name. It’s still good practice to leave a comment about what the variable should represent, if the name leaves any reasonable ambiguity.

Another reason is that sure, we do specific problems for specific cases. But a mathematician is naturally drawn to thinking of general problems, in abstract cases. We see something in common between the problem “a length and a quarter of the length is fifteen feet; what is the length?” and the problem “a volume plus a quarter of the volume is fifteen gallons; what is the volume?”. That one is about lengths and the other about volumes doesn’t concern us. We see a saving in effort by separating the quantity of a thing from the kind of the thing. This brings the danger back. We must think, after we are done calculating, about whether the answer could make sense. But we can minimize that, we hope. At the least we can check once we’re done to see if our answer makes sense. Maybe even whether it’s right.

For centuries, as the things we now recognize as algebra developed, we would use words. We would talk about the “thing” or the “quantity” or “it”. Some impersonal name, or convenient pronoun. This would often get shortened because anything you write often you write shorter. “Re”, perhaps. In the late 16th century we start to see the “New Algebra”. Here mathematics starts looking like … you know … mathematics. We start to see stuff like “addition” represented with the + symbol instead of an abbreviation for “addition” or a p with a squiggle over it or some other shorthand. We get equals signs. You start to see decimals and exponents. And we start to see letters used in place of numbers whose value we don’t know.

There are a couple kinds of “numbers whose value we don’t know”. One is the number whose value we don’t know, but hope to learn. This is the classic variable we want to solve for. Another kind is the number whose value we don’t know because we don’t care. I mean, it has some value, and presumably it doesn’t change over the course of our problem. But it’s not like our work will be so different if, say, the tank is two feet high rather than four.

Is there a problem? If we pick our letters to fit a specific problem, no. Presumably all the things we want to describe have some clear name, and some letter that best represents the name. It’s annoying when we have to consider, say, the pinball machine width and the corridor width. But we can work something out.

Is $m b \cos(e) + b^2 \log(y) = \sqrt{e}$ an easy problem to solve?

If we want to figure what ‘m’ is, yes. Similarly ‘y’. If we want to know what ‘b’ is, it’s tedious, but we can do that. If we want to know what ‘e’ is? Run and hide, that stuff is crazy. If you have to, do it numerically and accept an estimate. Don’t try figuring what that is.
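“Do it numerically and accept an estimate” can be as plain as a bisection search. Here’s a sketch with made-up values m = 1, b = 1, y = 2 — any values that give the function a sign change on some interval would do:

```python
from math import cos, log, sqrt

m, b, y = 1.0, 1.0, 2.0  # made-up values for the known quantities

def f(e):
    # the equation, moved over to one side:
    # m b cos(e) + b^2 log(y) - sqrt(e) = 0 at the solution
    return m * b * cos(e) + b * b * log(y) - sqrt(e)

lo, hi = 0.0, 2.0      # f is positive at 0 and negative at 2
for _ in range(60):    # each pass halves the interval
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(lo)  # a numerical estimate of e; no closed form needed
```

Sixty halvings of an interval of width 2 is far more precision than floating point can even hold, so the answer is as good as the arithmetic allows.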

And so we’ve developed conventions. There are some letters that, except in weird circumstances, are coefficients. They’re numbers whose value we don’t know, but either don’t care about or could look up. And there are some that, by default, are variables. They’re the ones whose value we want to know.

These conventions started forming, as mentioned, in the late 16th century. François Viète here made a name that lasts to mathematics historians at least. His texts described how to do algebra problems in the sort of procedural methods that we would recognize as algebra today. And he had a great idea for these letters. Use the whole alphabet, if needed. Use the consonants to represent the coefficients, the numbers we know but don’t care what they are. Use the vowels to represent the variables, whose values we want to learn. So he would look at that equation and see right away: it’s a terrible mess. (I exaggerate. He doesn’t seem to have known the = sign, and I don’t know offhand when ‘log’ and ‘cos’ became common. But suppose the rest of the equation were translated into his terminology.)

It’s not a bad approach. Besides the mnemonic value of consonant-coefficient, vowel-variable, it’s true that we usually have fewer variables than anything else. The more variables in a problem the harder it is. If someone expects you to solve an equation with ten variables in it, you’re excused for refusing. So five or maybe six or possibly seven choices for variables is plenty.

But it’s not what we settled on. René Descartes had a better idea. He had a lot of them, but here’s one. Use the letters at the end of the alphabet for the unknowns. Use the letters at the start of the alphabet for coefficients. And that is, roughly, what we’ve settled on. In my example nightmare equation, we’d suppose ‘y’ to probably be the variable we want to solve for.

And so, and finally, x. It is almost the variable. It says “mathematics” in only two strokes. Even π takes more writing. Descartes used it. We follow him. It’s way off at the end of the alphabet. It starts few words, very few things, almost nothing we would want to measure. (Xylem … mass? Flow? What thing is the xylem anyway?) Even mathematical dictionaries don’t have much to say about it. The letter transports almost no connotations, no messy specific problems to it. If it suggests anything, it suggests the horizontal coordinate in a Cartesian system. It almost is mathematics. It signifies nothing in itself, but long use has given it an identity as the thing we hope to learn by study.

And pirate treasure maps. I don’t know when ‘X’ became the symbol of where to look for buried treasure. My casual reading suggests “never”. Treasure maps don’t really exist. Maps in general don’t work that way. Or at least didn’t before cartoons. X marking the spot seems to be the work of Robert Louis Stevenson, renowned for creating a fanciful map and then putting together a book to justify publishing it. (I jest. But according to Simon Garfield’s On The Map: A Mind-Expanding Exploration of the Way The World Looks, his map did get lost on the way to the publisher, and he had to re-create it from studying the text of Treasure Island. This delights me to no end.) It makes me wonder if Stevenson was thinking of x’s service in mathematics. But the advantages of x as a symbol are hard to ignore. It highlights a point clearly. It’s fast to write. Its use might be coincidence.

But it is a letter that does a needed job really well.

• #### gaurish 1:34 am on Saturday, 23 September, 2017

Nice post! I also liked the Joe Vanilla comic. I find it very weird that the English language is biased towards certain letters (like S, E) and has very few words starting with X, Y and Z: https://en.oxforddictionaries.com/explore/which-letters-are-used-most Why would someone create a sound which he/she can’t pronounce to start a word? (I ask this question because English is not my native language.)


## The Summer 2017 Mathematics A To Z: Well-Ordering Principle

It’s the last full week of the Summer 2017 A To Z! Four more essays and I’ll have completed this project and curl up into a word coma. But I’m not there yet. Today’s request is another from Gaurish, who’s given me another delightful topic to write about. Gaurish hosts a fine blog, For the love of Mathematics, which I hope you’ve given a try.

# Well-Ordering Principle.

An old mathematics joke. Or paradox, if you prefer. What is the smallest whole number with no interesting properties?

Not one. That’s for sure. We could talk about one forever. It’s the first number we ever know. It’s the multiplicative identity. It divides into everything. It exists outside the realm of prime or composite numbers. It’s — all right, we don’t need to talk about one forever. Two? The smallest prime number. The smallest even number. The only even prime. The only — yeah, let’s move on. Three: the smallest odd prime number. A triangular number. One of only two prime numbers that isn’t one more or one less than a multiple of six. Let’s move on. Four. A square number. The smallest whole number that isn’t 1 or a prime. Five. Prime number. The first sum of two distinct prime numbers. Part of the first twin prime pair. Six. Smallest perfect number. Smallest product of two different prime numbers. Let’s move on.

And so on. Somewhere around 22 or so, the imagination fails and we can’t think of anything not-boring about this number. So we’ve found the first number that hasn’t got any interesting properties! … Except that being the smallest boring number must be interesting. So we have to note that this is otherwise the smallest boring number except for that bit where it’s interesting. On to 23, which used to be the default funny number. 24. … Oh, carry on. Maybe around 31 things settle down again. Our first boring number! Except that, again, being the smallest boring number is interesting. We move on to 32, 33, 34. When we find one that couldn’t be interesting, we find that’s interesting. We’re left to conclude there is no such thing as a boring number.

This would be a nice thing to say for numbers that otherwise get no attention, if we pretend they can have hurt feelings. But we do have to admit, 1729 is actually only interesting because it’s a part of the legend of Srinivasa Ramanujan. Enjoy the silliness for a few paragraphs more.

(This is, if I’m not mistaken, a form of the heap paradox. Don’t remember that? Start with a heap of sand. Remove one grain; you’ve still got a heap of sand. Remove one grain again. Still a heap of sand. Remove another grain. Still a heap of sand. And yet if you did this enough you’d leave one or two grains, not a heap of sand. Where does that change?)

Another problem, something you might consider right after learning about fractions. What’s the smallest positive number? Not one-half, since one-third is smaller and still positive. Not one-third, since one-fourth is smaller and still positive. Not one-fourth, since one-fifth is smaller and still positive. Pick any number you like and there’s something smaller and still positive. This is a difference between the positive integers and the positive real numbers. (Or the positive rational numbers, if you prefer.) The thing positive integers have is obvious, but it is not a given.

The difference is that the positive integers are well-ordered, while the positive real numbers aren’t. Well-ordering builds on ordering. Ordering is exactly what you imagine it to be: suppose you can say, for any two things in a set, which one is less than the other. The set is well-ordered if every non-empty subset has a smallest element. Smallest means exactly what you think, too.

The positive integers are well-ordered. And more. The way they’re set up, they have a property called the “well-ordering principle”. This means any non-empty set of positive integers has a smallest number in it.

This is one of those principles that seems so obvious and so basic that it can’t teach anything interesting. That it serves a role in some proofs, sure, that’s easy to imagine. But something important?

Look back to the joke/paradox I started with. It proves that every positive integer has to be interesting. Every number, including the ones we use every day. Including the ones that no one has ever used in any mathematics or physics or economics paper, and never will. We can avoid that paradox by attacking the vagueness of “interesting” as a word. Are you interested to know the 137th number you can write as the sum of cubes in two different ways? Before you say ‘yes’, consider whether you could name it ten days after you’ve heard the number.

(Granted, yes, it would be nice to know the 137th such number. But would you ever remember it? Would you trust that it’ll be on some Wikipedia page that somehow is never threatened with deletion for not being noteworthy? Be honest.)
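Whether or not anyone would remember the 137th, such numbers are easy to hunt for. A sketch that finds the first few numbers writable as a sum of two positive cubes in two different ways — 1729, of the Ramanujan legend, comes first:

```python
from collections import defaultdict

LIMIT = 100_000
ways = defaultdict(list)  # number -> list of (j, k) with j^3 + k^3 = number

k = 1
while k ** 3 < LIMIT:
    for j in range(1, k + 1):
        s = j ** 3 + k ** 3
        if s < LIMIT:
            ways[s].append((j, k))
    k += 1

# keep the numbers with at least two distinct representations
taxicabs = sorted(n for n, pairs in ways.items() if len(pairs) >= 2)
print(taxicabs[:3])  # [1729, 4104, 13832]
```

Raise LIMIT enough and the 137th such number is yours; whether you’d still know it ten days later is the essay’s question, not the computer’s.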

But suppose we have some property that isn’t so mushy. Suppose it’s something we can describe in a way that’s indexed by the positive integers. Furthermore, suppose we can show the property can’t fail for the smallest member of any set of counterexamples: that is, whenever it holds for every number smaller than ‘n’, it has to hold for ‘n’ too. What do we know?

We know that it must be true for all the positive integers. Consider the set of positive integers where the property fails. If that set weren’t empty then, by the well-ordering principle, it would have a smallest member. But we’ve just shown the property can’t fail for that smallest member. So the set of counterexamples has to be empty, and there you go.

This technique is called, when it’s introduced, induction. It’s usually a baffling subject because it’s usually taught like this: suppose the thing you want to show is indexed to the positive integers. Show that it’s true when the index is ‘1’. Show that if the thing is true for an arbitrary index ‘n’, then it’s true for ‘n + 1’. It’s baffling because that second part is hard to visualize. The student makes a lot of mistakes while learning, on examples like what the sum of the first ‘N’ whole numbers or their squares or cubes is. I don’t think induction is ever taught by way of the well-ordering principle. But the principle does get used in proofs, once you get to the part of analysis where you don’t have to interact with specific numbers much anymore.
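The classic classroom example, for reference, runs like this. Claim: $1 + 2 + \cdots + N = \frac{N(N+1)}{2}$ for every positive integer $N$. It’s true when $N$ is 1, since $1 = \frac{1 \cdot 2}{2}$. And if it’s true for an index $n$, then $1 + 2 + \cdots + n + (n+1) = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2}$, which is the claim again with $n + 1$ in place of $n$. So it’s true for 1, therefore for 2, therefore for 3, and on without end.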

The well-ordering principle also gives us the method of infinite descent. You encountered this in learning proofs about, like, how the square root of two must be an irrational number. In this, you show that if something is true for some positive integer, then it must also be true for some other, smaller positive integer. And therefore some other, smaller positive integer again. And again, until you get into numbers small enough you can check by hand.
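For the record, the descent in the square-root-of-two proof goes something like this. Suppose $\sqrt{2} = \frac{p}{q}$ for some positive integers $p$ and $q$. Since $1 < \sqrt{2} < 2$, we know $q < p < 2q$. A little algebra shows that $\frac{2q - p}{p - q}$ also equals $\sqrt{2}$, and its denominator $p - q$ is a positive integer strictly smaller than $q$. Repeat, and you get a chain of denominators shrinking forever, which positive integers can’t do. So there were no such $p$ and $q$ to begin with.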

It keeps creeping in. The Fundamental Theorem of Arithmetic says that every positive whole number larger than one is a product of a unique string of prime numbers. (Well, the order of the primes doesn’t matter. 2 times 3 times 5 is the same number as 3 times 2 times 5, and so on.) The well-ordering principle guarantees you can factor numbers into a product of primes. Watch this slick argument.

Suppose you have a set of whole numbers, each larger than one, that aren’t products of prime numbers. There must, by the well-ordering principle, be some smallest number in that set. Call that number ‘n’. We know that ‘n’ can’t be prime, because if it were, then that would be its prime factorization. So it must be the product of at least two other numbers. Let’s suppose it’s two numbers. Call them ‘a’ and ‘b’. So, ‘n’ is equal to ‘a’ times ‘b’.

Well, ‘a’ and ‘b’ have to be less than ‘n’. So they’re smaller than the smallest number that isn’t a product of primes. So, ‘a’ is the product of some set of primes. And ‘b’ is the product of some set of primes. And so, ‘n’ has to equal the primes that factor ‘a’ times the primes that factor ‘b’. … Which is the prime factorization of ‘n’. So, ‘n’ can’t be in the set of numbers that don’t have prime factorizations. And so there can’t be any numbers that don’t have prime factorizations. It’s for the same reason we worked out there aren’t any numbers with nothing interesting to say about them.
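The argument translates almost line-for-line into a recursive program. A sketch, using plain trial division (nothing clever): if ‘n’ has no divisor it’s prime and is its own factorization; otherwise split it into two strictly smaller pieces and recurse, which is exactly the descent the well-ordering principle licenses:

```python
def factor(n):
    """Prime factorization by the well-ordering argument: a number
    with no divisor up to its square root is prime; otherwise it
    splits as d * (n // d), two strictly smaller numbers, and we
    recurse on those."""
    if n < 2:
        return []
    d = 2
    while d * d <= n:
        if n % d == 0:
            return factor(d) + factor(n // d)
        d += 1
    return [n]  # no divisor found: n is prime

print(factor(60))  # [2, 2, 3, 5]
print(factor(97))  # [97]
```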

And isn’t it delightful to find so simple a principle can prove such specific things?

• #### gaurish 1:23 pm on Thursday, 21 September, 2017 Permalink | Reply

My favourite application of Fermat’s method of infinite descent: x^4+y^4=z^4 has no non-zero integer solutions. We can apply this method not only to solve many other Diophantine equations, but also the famous divisibility question from IMO: https://math.stackexchange.com/q/1897942

Like

• #### Joseph Nebus 8:27 pm on Friday, 22 September, 2017 Permalink | Reply

Oh good heavens, I remember the 1988 International Mathematical Olympiad question. Not from solving it myself, but from seeing it passed around as the sort of thing to practice on if I wanted to try my last year in high school. It felt back then like the sort of problem and argument just transmitted from space. I’m still not sure I’m comfortable with it.

Like

## The Summer 2017 Mathematics A To Z: Volume Forms

I’ve been reading Elke Stangl’s Elkemental Force blog for years now. Sometimes I even feel social-media-caught-up enough to comment, or at least to like posts. This is relevant today as I discuss one of Stangl’s suggestions for my letter-V topic.

# Volume Forms.

So sometime in pre-algebra, or early in (high school) algebra, you start drawing equations. It’s a simple trick. Lay down a coordinate system, some set of axes for ‘x’ and ‘y’ and maybe ‘z’ or whatever letters are important. Look to the equation, made up of x’s and y’s and maybe z’s and so on. Highlight all the points with coordinates whose values make the equation true. This is the logical basis for saying (e.g.) that the straight line “is” $y = 2x + 1$.

A short while later, you learn about polar coordinates. Instead of using ‘x’ and ‘y’, you have ‘r’ and ‘θ’. ‘r’ is the distance from the center of the universe. ‘θ’ is the angle made with respect to some reference axis. It’s as legitimate a way of describing points in space. Some classrooms even have a part of the blackboard (whiteboard, whatever) with a polar-coordinates “grid” on it. This looks like the lines of a dartboard. And you learn that some shapes are easy to describe in polar coordinates. A circle, centered on the origin, is ‘r = 2’ or something like that. A line through the origin is ‘θ = 1’ or whatever. The line that we’d called $y = 2x + 1$ before? … That’s … some mess. And now $r = 2\theta + 1$ … that’s not even a line. That’s some kind of spiral. Two spirals, really. Kind of wild.

And something to bother you a while. $y = 2x + 1$ is an equation that looks the same as $r = 2\theta + 1$. You’ve changed the names of the variables, but not how they relate to each other. But one is a straight line and the other a spiral thing. How can that be?

The answer, ultimately, is that the letters in the equations aren’t these content-neutral labels. They carry meaning. ‘x’ and ‘y’ imply looking at space a particular way. ‘r’ and ‘θ’ imply looking at space a different way. A shape has different representations in different coordinate systems. Fair enough. That seems to settle the question.

But if you get to calculus the question comes back. You can integrate over a region of space that’s defined by Cartesian coordinates, x’s and y’s. Or you can integrate over a region that’s defined by polar coordinates, r’s and θ’s. The first time you try this, you find … well, that any region easy to describe in Cartesian coordinates is painful in polar coordinates. And vice-versa. Way too hard. But if you struggle through all that symbol manipulation, you get … different answers. Eventually the calculus teacher has mercy and explains. If you’re integrating in Cartesian coordinates you need to use “dx dy”. If you’re integrating in polar coordinates you need to use “r dr dθ”. If you’ve never taken calculus, never mind what this means. What is important is that “r dr dθ” looks like three things multiplied together, while “dx dy” is two.

We get this explained as a “change of variables”. If we want to go from one set of coordinates to a different one, we have to do something fiddly. The extra ‘r’ in “r dr dθ” is what we get going from Cartesian to polar coordinates. And we get formulas to describe what we should do if we need other kinds of coordinates. It’s some work that introduces us to the Jacobian, which looks like the most tedious possible calculation ever at that time. (In Intro to Differential Equations we learn we were wrong, and the Wronskian is the most tedious possible calculation ever. This is also wrong, but it might as well be true.) We typically move on after this and count ourselves lucky it got no worse than that.
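You can watch that extra ‘r’ earn its keep numerically. Here’s a small sketch (mine, not anything from a textbook’s exercise set) that integrates the constant function 1 over the unit disk in polar coordinates with a midpoint rule; with the r dr dθ element it lands on π, the disk’s true area, and with a naive dr dθ it lands on 2π:

```python
import math

def disk_area_polar(n=100_000, include_jacobian=True):
    """Integrate the constant function 1 over the unit disk in
    polar coordinates. The integrand doesn't depend on theta, so
    the theta integral just contributes a factor of 2*pi."""
    dr = 1.0 / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr  # midpoint of the i-th r-cell
        weight = r if include_jacobian else 1.0
        total += weight * dr
    return 2 * math.pi * total

print(disk_area_polar())                        # close to pi
print(disk_area_polar(include_jacobian=False))  # close to 2*pi
```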

None of this is wrong, even from the perspective of more advanced mathematics. It’s not even misleading, which is a refreshing change. But we can look a little deeper, and get something good from doing so.

The deeper perspective looks at “differential forms”. These are about how to encode information about how your coordinate system represents space. They’re tensors. I don’t blame you for wondering if they would be. A differential form uses interactions between some of the directions in a space. A volume form is a differential form that uses all the directions in a space. And satisfies some other rules too. I’m skipping those because some of the symbols involved I don’t even know how to look up, much less make WordPress present.

What’s important is the volume form carries information compactly. As symbols it tells us that this represents a chunk of space that’s constant no matter what the coordinates look like. This makes it possible to do analysis on how functions work. It also tells us what we would need to do to calculate specific kinds of problem. This makes it possible to describe, for example, how something moving in space would change.

The volume form, and the tools to do anything useful with it, demand a lot of supporting work. You can dodge having to explicitly work with tensors. But you’ll need a lot of tensor-related materials, like wedge products and exterior derivatives and stuff like that. If you’ve never taken freshman calculus don’t worry: the people who have taken freshman calculus never heard of those things either. So what makes this worthwhile?

Yes, person who called out “polynomials”. Good instinct. Polynomials are usually a reason for any mathematics thing. This is one of maybe four exceptions. I have to appeal to my other standard answer: “group theory”. These volume forms match up naturally with groups. There’s not only information about how coordinates describe a space to consider. There’s ways to set up coordinates that tell us things.

That isn’t all. These volume forms can give us new invariants. Invariants are what mathematicians say instead of “conservation laws”. They’re properties whose value for a given problem is constant. This can make it easier to work out how one variable depends on another, or to work out specific values of variables.

For example, classical physics problems like how a bunch of planets orbit a sun often have a “symplectic manifold” that matches the problem. This is a description of how the positions and momentums of all the things in the problem relate. The symplectic manifold has a volume form. That volume is going to be constant as time progresses. That is, there’s this way of representing the positions and speeds of all the planets that does not change, no matter what. It’s much like the conservation of energy or the conservation of angular momentum. And this has practical value. It’s the subject that brought my and Elke Stangl’s blogs into contact, years ago. It also has broader applicability.

There’s no way to provide an exact answer for the movement of, like, the sun and nine-ish planets and a couple major moons and all that. So there’s no known way to answer the question of whether the Earth’s orbit is stable. All the planets are always tugging one another, changing their orbits a little. Could this converge in a weird way suddenly, on geologic timescales? Might the planet go flying off out of the solar system? It doesn’t seem like the solar system could be all that unstable, or it would have come apart already. But we can’t rule out that some freaky alignment of Jupiter, Saturn, and Halley’s Comet might tweak the Earth’s orbit just far enough for catastrophe to unfold. Granted there’s nothing we could do about the Earth flying out of the solar system, but it would be nice to know if we face it, we tell ourselves.

But we can answer this numerically. We can set a computer to simulate the movement of the solar system. But there will always be numerical errors. For example, we can’t use the exact value of π in a numerical computation. 3.141592 (and more digits) might be good enough for projecting stuff out a day, a week, a thousand years. But if we’re looking at millions of years? The difference can add up. We can imagine compensating for not having the value of π exactly right. But what about compensating for something we don’t know precisely, like, where Jupiter will be in 16 million years and two months?

Symplectic forms can help us. The volume form on this space has to be conserved. So we can rewrite our simulation so that these forms are conserved, by design. This does not mean we avoid making errors. But it means we avoid making certain kinds of errors. We’re more likely to make what we call “phase” errors. We predict Jupiter’s location in 16 million years and two months. Our simulation puts it thirty degrees farther along its orbit than it actually would be. This is a less serious mistake to make than putting Jupiter, say, eight-tenths as far from the Sun as it would really be.
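The effect shows up even in the simplest toy system, a harmonic oscillator. The sketch below (my illustration; real solar-system integrators are far more elaborate) compares plain Euler stepping with symplectic Euler. Only the update order differs, but one lets the energy grow without bound while the other keeps it near the true value forever:

```python
def explicit_euler(q, p, dt, steps):
    """Plain forward Euler for H = (p^2 + q^2)/2. Both updates use
    the old values; each step inflates the energy by (1 + dt^2)."""
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

def symplectic_euler(q, p, dt, steps):
    """Semi-implicit (symplectic) Euler: update p first, then move
    q with the NEW p. This map preserves phase-space area, so the
    energy error stays bounded instead of accumulating."""
    for _ in range(steps):
        p = p - dt * q
        q = q + dt * p
    return q, p

def energy(q, p):
    return 0.5 * (q * q + p * p)

q, p = explicit_euler(1.0, 0.0, 0.01, 10_000)
print(energy(q, p))  # has drifted well above the true value 0.5

q, p = symplectic_euler(1.0, 0.0, 0.01, 10_000)
print(energy(q, p))  # still close to 0.5
```

A phase error is still possible here: the symplectic orbit can lag or lead the true one, but it stays on an orbit of very nearly the right size.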

Volume forms seem, at first, a lot of mechanism for a small problem. And, unfortunately for students, they are. They’re more trouble than they’re worth for changing Cartesian to polar coordinates, or similar problems. You know, ones that the student already has some feel for. They pay off on more abstract problems. Tracking the movement of a dozen interacting things, say, or describing a space that’s very strangely shaped. Those make the effort to learn about forms worthwhile.

• #### elkement (Elke Stangl) 7:56 am on Tuesday, 19 September, 2017 Permalink | Reply

That was again very intriguing! I only came across volume forms as a technical term when learning General Relativity. It seems in theoretical statistical mechanics it was not necessary to use that term – despite these fancy integrals in phase spaces with trillions of dimensions that you need when proving things about the canonical ensemble, then move on to the grand canonical one etc. I wonder why. Because the geometry of the spaces or volumes considered are fairly simple after all? “Only highly symmetrical N-balls”?

And – continuing from my comment on your post on Topology – again, Landau and Lifshitz had pulled it off in GR without “Volume Forms” if I recall correctly. But they explain the integration of tensors of different ranks in spaces with different dimensions separately, which is still doable when having 3D space or 4D spacetime in mind (they also put much emphasis on working out the 3D space-only tensor part of the 4D tensors) – but perhaps exactly these insights and generalizations that you allude to are lost without introducing Volume Forms.

Like

• #### Joseph Nebus 8:23 pm on Friday, 22 September, 2017 Permalink | Reply

I suspect that, for most problems, the geometry of the phase spaces in statistical mechanics is pretty simple. The problems I’ve worked on have been easy enough in that regard, although there is a lot in the field (especially non-equilibrium statistical mechanics) that I just don’t know.

Probably it does all come back to the perception of how hard these things are to pick up versus how much one wants to do with them. Or an estimate of the audience, and how likely they are to be familiar with something, and how much book space they’re willing to spend bringing readers up to speed.

Liked by 1 person

## Reading the Comics, September 16, 2017: Wait, Are Elviney and Miss Prunelly The Same Character Week

It was an ordinary enough week when I realized I wasn’t sure about the name of the schoolmarm in Barney Google and Snuffy Smith. So I looked it up on Comics Kingdom’s official cast page for John Rose’s comic strip. And then I realized something about the Smiths’ next-door neighbor Elviney and Jughaid’s teacher Miss Prunelly:

Excerpt from the cast page of Barney Google and Snuffy Smith. Among the many mysteries besides that apparently they’re the same character and I never noticed this before? Why does Spark Plug, the horse Google owns that’s appeared like three times this millennium and been the source of no punch lines since Truman was President, get listed ahead of Elviney and Miss Prunelly who, whatever else you can say about them, appear pretty much every week?

Are … are they the same character, just wearing different glasses? I’ve been reading this comic strip for like forty years and I’ve never noticed this before. I’ve also never heard any of you all joking about this, by the way, so I stand by my argument that if they’re prominent enough then, yes, glasses could be an adequate disguise for Superman. Anyway, I’m startled. (Are they sisters? Cousins? But wouldn’t that merit a mention on the cast page? There are missing pieces here.)

Mac King and Bill King’s Magic In A Minute feature for the 10th sneaks in here yet again with a magic trick based in arithmetic. Here, they use what’s got to be some Magic Square-based technology for a card trick. This probably could be put to use with other arrangements of numbers, but cards have the advantage of being stuff a magician is likely to have around, and stuff that’s expected to do something weird.

Susan Camilleri Konair’s Six Chix for the 13th of September, 2017. It’s a small artistic touch, but I do appreciate that the kid is shown with a cell phone and it’s not any part of the joke that having computing devices is somehow wrong or that being on the Internet is somehow weird or awry.

Susan Camilleri Konair’s Six Chix for the 13th name-drops mathematics as the homework likely to be impossible to do. I think this is the first time Konair’s turned up in a Reading The Comics survey.

Thom Bluemel’s Birdbrains for the 13th is an Albert Einstein Needing Help panel. It’s got your blackboard full of symbols, not one of which is the famous $E = mc^2$ equation. But given the setup it couldn’t feature that equation, not and be a correct joke.

John Rose’s Barney Google for the 14th of September, 2017. I admire Miss Prunelly’s commitment to ongoing professional development that she hasn’t run out of shocked or disapproving faces after all these years in a gag-a-day strip.

John Rose’s Barney Google for the 14th does a little more work than necessary for its subtraction-explained-with-candy joke. I non-sarcastically appreciate Rose’s dodging the obvious joke in favor of a guy-is-stupid joke.

Niklas Eriksson’s Carpe Diem for the 14th is a kind of lying-with-statistics joke. That’s as much as it needs to be. Still, thought always should go into exactly how one presents data, especially visually. There are connotations to things. Just inverting an axis is dangerous stuff, though. The convention of matching an increase in number to moving up on the graph is so ingrained that it should be avoided only for enormous cause.

Niklas Eriksson’s Carpe Diem for the 14th of September, 2017. It’s important the patient not panic thinking about how he’s completely flat under the blanket there.

This joke also seems conceptually close, to me, to the jokes about the strangeness of how a “negative” medical test is so often the good news.

Olivia Walch’s Imogen Quest for the 15th is not about solitaire. But “solving” a game by simulating many gameplays and drawing strategic advice from that is a classic numerical mathematics trick. Whether a game is still fun once it’s been solved that way is up to you. And often in actual play, for a game with many options at each step, it’s impossible without a computer to know the best possible move. You could use simulations like this to develop general guidelines, and a couple rules that often pan out.

Thaves’s Frank and Ernest for the 16th qualifies as the anthropomorphic-numerals joke for this week. I’m glad to have got one in.

## The Summer 2017 Mathematics A To Z: Ulam’s Spiral

Gaurish, of For the love of Mathematics, asked me about one of those modestly famous (among mathematicians) mathematical figures. Yeah, I don’t have a picture of it. Too much effort. It’s easier to write instead.

# Ulam’s Spiral.

Boredom is unfairly maligned in our society. I’ve said this before, but that was years ago, and I have some different readers today. We treat boredom as a terrible thing, something to eliminate. We treat it as a state in which nothing seems interesting. It’s not. Boredom is a state in which anything, however trivial, engages the mind. We would not count the tiles on the floor, or time the rocking of a chandelier, or wonder what fraction of solitaire games can be won if we were never bored. A bored mind is a mind ready to discover things. We should welcome the state.

Several times in the 20th century Stanislaw Ulam was bored. I mention solitaire games because, according to Ulam, he spent some time in 1946 bored, convalescent and playing a lot of solitaire. He got to wondering: what’s the probability that a particular solitaire game is winnable? (He was specifically playing Canfield solitaire. The game’s also called Demon, Chameleon, or Storehouse, if Wikipedia is right.) What’s the chance the cards can’t be played right, no matter how skilled the player is? It’s a problem impossible to do exactly. Ulam was one of the mathematicians designing and programming the computers of the day.

He, with John von Neumann, worked out how to get a computer to simulate many, many rounds of cards. They would get an answer that I have never seen given in any history of the field. The field is Monte Carlo simulations. It’s built on using random numbers to conduct experiments that approximate an answer. (They’re also what my specialty is in. I mention this for those who’ve wondered what, if any, mathematics field I do consider myself competent in. This is not it.) The chance of a winnable deal is about 71 to 72 percent, although actual humans can’t hope to do more than about 35 percent. My evening’s experience with this Canfield Solitaire game suggests the chance of winning is about zero.
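The method fits in a few lines now. This toy sketch (my own, nothing to do with Ulam’s actual program) uses random shuffles to estimate a probability we can verify exactly, the chance that a shuffled deck’s top card is an ace, which is 4/52:

```python
import random

def top_card_is_ace_estimate(trials=200_000, seed=1):
    """Monte Carlo: shuffle a 52-card deck many times and count
    how often the top card is one of the four aces."""
    rng = random.Random(seed)  # fixed seed, so the run is reproducible
    deck = list(range(52))     # cards 0 through 3 stand for the aces
    hits = 0
    for _ in range(trials):
        rng.shuffle(deck)
        if deck[0] < 4:
            hits += 1
    return hits / trials

print(top_card_is_ace_estimate())  # hovers near 4/52, about 0.077
```

Ulam and von Neumann’s solitaire experiment was the same idea, with a much more complicated “did I win?” test inside the loop.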

In 1963, Ulam told Martin Gardner, he was bored again during a paper’s presentation. Ulam doodled, and doodled something interesting enough to have a computer doodle more than mere pen and paper could. It was interesting enough to feature in Gardner’s Mathematical Games column for March 1964. It started with what the name suggested, a spiral.

Write down ‘1’ in the center. Write a ‘2’ next to it. This is usually done to the right of the ‘1’. If you want the ‘2’ to be on the left, or above, or below, fine, it’s your spiral. Write a ‘3’ above the ‘2’. (Or below if you want, or left or right if you’re doing your spiral that way. You’re tracing out a right angle from the “path” of numbers before that.) A ‘4’ to the left of that, a ‘5’ to the left of that, a ‘6’ under that, a ‘7’ under that, an ‘8’ to the right of that, and so on. A spiral, for as long as your paper or your patience lasts. Now draw a circle around the ‘2’. Or a box. Whatever. Highlight it. Also do this for the ‘3’, and the ‘5’, and the ‘7’, and all the other prime numbers on your spiral. And look at what’s highlighted.

It looks like …

It’s …

Well, it’s something.

It’s hard to say what exactly. There’s a lot of diagonal lines to it. Not uninterrupted lines. Every diagonal line has some spottiness to it. There are blank regions too. There are some long stretches of numbers not highlighted, many of them horizontal or vertical lines with no prime numbers in them. Those stop too. The eye can’t help seeing clumps, especially. Imperfect diagonal stitching across the fabric of the counting numbers.

Maybe seeing this is some fluke. Start with another number in the center. 2, if you like. 41, if you feel ambitious. Repeat the process. The details vary. But the pattern looks much the same. Regions of dense-packed broken diagonals, all over the plane.
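The doodle automates easily. This sketch (my own; the coordinate conventions are a choice) lays the counting numbers along a square spiral and prints a ‘#’ wherever a prime lands; even at this tiny scale the broken diagonals start to show:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def spiral_positions(count):
    """Map 1, 2, 3, ... to (x, y) grid positions along a square
    spiral: segments of length 1, 1, 2, 2, 3, 3, ... turning
    counterclockwise."""
    pos = {1: (0, 0)}
    x = y = 0
    dx, dy = 1, 0
    n, step = 2, 1
    while n <= count:
        for _ in range(2):       # each segment length occurs twice
            for _ in range(step):
                x, y = x + dx, y + dy
                pos[n] = (x, y)
                n += 1
                if n > count:
                    return pos
            dx, dy = -dy, dx     # turn 90 degrees counterclockwise
        step += 1
    return pos

def draw(side=21):
    """Render a side-by-side Ulam spiral as text, primes as '#'."""
    pos = spiral_positions(side * side)
    half = side // 2
    grid = [['.'] * side for _ in range(side)]
    for n, (x, y) in pos.items():
        if is_prime(n):
            grid[half - y][half + x] = '#'
    return '\n'.join(''.join(row) for row in grid)

print(draw())
```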

It begs us to believe there’s some knowable pattern here. That we could get an artist to draw a figure, with each spot in the figure corresponding to a prime number. This would be great. We know many things about prime numbers, but we don’t really have any system to generate a lot of prime numbers. Not much better than “here’s a thing, try dividing it”. Back in the 80s and 90s we had the big Fractal Boom. Everybody got computers that could draw what passed for pictures. And we could write programs that drew them. The Ulam Spiral was a minor but exciting prospect there. Was it a fractal? I don’t know. I’m not sure if anyone knows. (The spiral like you’d draw on paper wouldn’t be. The spiral that went out to infinitely large numbers might conceivably be.) It seemed plausible enough for computing magazines to be interested in. Maybe we could describe the pattern by something as simple as the Koch curve (that wriggly triangular snowflake shape). Or as easy to program as the Mandelbrot set.

We haven’t found one. As keeps happening with prime numbers, the answers evade us. We can understand why diagonals should appear. Write a polynomial of the form $4n^2 + b n + c$. Evaluate it for n of 1, 2, 3, 4, and so on. Highlight those numbers. This will tend to highlight numbers that, in this spiral, lie along diagonal or horizontal or vertical lines. A lot of polynomials like this give a string of some prime numbers. But the polynomials all peter out. The lines all have interruptions.
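A classic example of such a petering-out polynomial is Euler’s $n^2 + n + 41$: prime for every n from 0 through 39, composite at n = 40. A quick check:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Euler's polynomial: prime for n = 0 through 39 ...
assert all(is_prime(n * n + n + 41) for n in range(40))

# ... and then the streak breaks: 40^2 + 40 + 41 = 1681 = 41^2.
assert not is_prime(40 * 40 + 40 + 41)

print("prime streak holds for n = 0..39, breaks at n = 40")
```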

There are other patterns. One, predating Ulam’s boring paper by thirty years, was made by Laurence Klauber. Klauber was a herpetologist of some renown, if Wikipedia isn’t misleading me. It claims his Rattlesnakes: Their Habits, Life Histories, and Influence on Mankind is still an authoritative text. I don’t know and will defer to people versed in the field. It also credits him with several patents in electrical power transmission.

Anyway, Klauber’s Triangle sets a ‘1’ at the top of the triangle. The numbers ‘2 3 4’ under that, with the ‘3’ directly beneath the ‘1’. The numbers ‘5 6 7 8 9’ beneath that, the ‘7’ directly beneath the ‘3’. ‘10 11 12 13 14 15 16’ beneath that, the ‘13’ underneath the ‘7’. And so on. Again highlight the prime numbers. You get again these patterns of dots and lines. Many vertical lines. Some lines in isometric view. It looks like strands of Morse Code.
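Laying out those rows is mechanical: row k, counting from zero, holds the 2k + 1 numbers from k² + 1 through (k + 1)², so the centered entries come out 1, 3, 7, 13, and so on. A sketch:

```python
def klauber_rows(height):
    """Rows of Klauber's triangle. Row k holds the numbers
    k*k + 1 through (k + 1)**2; its middle entry is k*k + k + 1."""
    rows = []
    start = 1
    for k in range(height):
        width = 2 * k + 1
        rows.append(list(range(start, start + width)))
        start += width
    return rows

for row in klauber_rows(4):
    print(row)
# [1]
# [2, 3, 4]
# [5, 6, 7, 8, 9]
# [10, 11, 12, 13, 14, 15, 16]
```

Highlighting the primes in each (centered) row gives the vertical-line patterns described above.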

In 1994 Robert Sacks created another variant. This one places the counting numbers on an Archimedean spiral. Space the numbers correctly and highlight the primes. The primes will trace out broken curves. Some are radial. Some spiral in (or out, if you rather). Some open up islands. The pattern looks like a Saul Bass logo for a “Nifty Fifty”-era telecommunications firm or maybe an airline.

You can do more. Draw a hexagonal spiral. Triangular ones. Other patterns of laying down numbers. You get patterns. The eye can’t help seeing order there. We can’t quite pin down what it is. Prime numbers keep evading our full understanding. Perhaps it would help to doodle a little during a tiresome conference call.

Stanislaw Ulam did enough fascinating numerical mathematics that I could probably do a sequence just on his work. I do want to mention one thing. It’s part of information theory. You know the game Twenty Questions. Play that, but allow for some lying. The game is still playable. Ulam did not invent this game; Alfréd Rényi did. (I do not know anything else about Rényi.) But Ulam ran across Rényi’s game, and pointed out how interesting it was, and mathematicians paid attention to him.

• #### gaurish 9:04 am on Saturday, 16 September, 2017 Permalink | Reply

“Yeah, I don’t have a picture of it. Too much effort. It’s easier to write instead.” :-)
My interest in ulam spiral was due to its relation with an open problem in number theory to find a non-linear, non-constant polynomial which can take prime values infinitely many times. I am glad that you mentioned it.(https://mathoverflow.net/q/98431/90056)

Like

• #### Joseph Nebus 11:30 pm on Sunday, 17 September, 2017 Permalink | Reply

I’m glad to give satisfaction. Also, regarding your link: gosh, I haven’t thought about Bunyakovski (as I learned the spelling) in years. Wow.

Like

## The Summer 2017 Mathematics A To Z: Topology

Today’s glossary entry comes from Elke Stangl, author of the Elkemental Force blog. I’ll do my best, although it would have made my essay a bit easier if I’d had the chance to do another topic first. We’ll get there.

# Topology.

Start with a universe. Nice thing to have around. Call it ‘M’. I’ll get to why that name.

I’ve talked a fair bit about weird mathematical objects that need some bundle of traits to be interesting. So this will change the pace some. Here, I request only that the universe have a concept of “sets”. OK, that carries a little baggage along with it. We have to have intersections and unions. Those come about from having pairs of sets. The intersection of two sets is all the things that are in both sets simultaneously. The union of two sets is all the things that are in one set, or the other, or both simultaneously. But it’s hard to think of something that could have sets that couldn’t have intersections and unions.

So from your universe ‘M’ create a new collection of things. Call it ‘T’. I’ll get to why that name. But if you’ve formed a guess about why, then you know. So I suppose I don’t need to say why, now. ‘T’ is a collection of subsets of ‘M’. Now let’s suppose these four things are true.

First. ‘M’ is one of the sets in ‘T’.

Second. The empty set ∅ (which has nothing at all in it) is one of the sets in ‘T’.

Third. Whenever two sets are in ‘T’, their intersection is also in ‘T’.

Fourth. Whenever two (or more) sets are in ‘T’, their union is also in ‘T’.

Got all that? I imagine a lot of shrugging and head-nodding out there. So let’s take that. Your universe ‘M’ and your collection of sets ‘T’ are a topology. And that’s that.

Yeah, that’s never that. Let me put in some more text. Suppose we have a universe that consists of two symbols, say, ‘a’ and ‘b’. There’s four distinct topologies you can make of that. Take the universe plus the collection of sets ∅, {a}, {b}, and {a, b}. That’s a topology. Try it out. That’s the first collection you would probably think of.

Here’s another collection. Take this two-thing universe and the collection of sets ∅, {a}, and {a, b}. That’s another topology and you might want to double-check that. Or there’s this one: the universe and the collection of sets ∅, {b}, and {a, b}. Last one: the universe and the collection of sets ∅ and {a, b} and nothing else. That one barely looks legitimate, but it is. Not a topology: the universe and the collection of sets ∅, {a}, and {b}. (The union {a} ∪ {b} is {a, b}, which isn’t in the collection.)

The number of topologies grows surprisingly with the number of things in the universe. Like, if we had three symbols, ‘a’, ‘b’, and ‘c’, there would be 29 possible topologies. The universe of the three symbols and the collection of sets ∅, {a}, {b, c}, and {a, b, c}, for example, would be a topology. But the universe and the collection of sets ∅, {a}, {b}, {c}, and {a, b, c} would not. (It’s missing unions like {a} ∪ {b}.) It’s a good thing to ponder if you need something to occupy your mind while awake in bed.

With four symbols, there’s 355 possibilities. Good luck working those all out before you fall asleep. Five symbols have 6,942 possibilities. You realize this doesn’t look like any expected sequence. After ‘4’ the count of topologies isn’t anything obvious like “two to the number of symbols” or “the number of symbols factorial” or something.
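The axioms are concrete enough to brute-force. This sketch (my own; it only works for tiny universes) checks a candidate collection against the four rules, confirms a couple of the two-symbol examples, and counts every topology on a small universe, reproducing the 4 and the 29 (and, given a minute, the 355):

```python
from itertools import combinations

def is_topology(universe, collection):
    """The four axioms: contains the universe and the empty set,
    and is closed under pairwise intersection and union."""
    fam = set(collection)
    if frozenset() not in fam or universe not in fam:
        return False
    for a in fam:
        for b in fam:
            if (a & b) not in fam or (a | b) not in fam:
                return False
    return True

# The two-symbol universe: one topology and one non-topology.
M = frozenset({'a', 'b'})
assert is_topology(M, {frozenset(), frozenset({'a'}), M})
assert not is_topology(M, {frozenset(), frozenset({'a'}), frozenset({'b'})})

def count_topologies(n):
    """Count topologies on an n-element universe by trying every
    collection of subsets that includes the empty set and the
    whole universe."""
    universe = frozenset(range(n))
    middles = [frozenset(c)
               for r in range(1, n)
               for c in combinations(range(n), r)]
    count = 0
    for r in range(len(middles) + 1):
        for chosen in combinations(middles, r):
            if is_topology(universe, (frozenset(), universe) + chosen):
                count += 1
    return count

print(count_topologies(2))  # 4
print(count_topologies(3))  # 29
```

The brute force blows up fast, which is part of why nobody has a tidy formula for the sequence.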

Are you getting ready to call me on being inconsistent? In the past I’ve talked about topology as studying what we can know about geometry without involving the idea of distance. How’s that got anything to do with this fiddling about with sets and intersections and stuff?

So now we come to that name ‘M’, and what it’s finally mnemonic for. I have to touch on something Elke Stangl hoped I’d write about, but a letter someone else bid on first. That would be a manifold. I come from an applied-mathematics background so I’m not sure I ever got a proper introduction to manifolds. They appeared one day in the background of some talk about physics problems. I think they were introduced as “it’s a space that works like normal space”, and that was it. We were supposed to pretend we had always known about them. (I’m translating. What we were actually told would be that it “works like $\mathbb{R}^3$”. That’s how mathematicians say “like normal space”.) That was all we needed.

Properly, a manifold is … eh. It’s something that works kind of like normal space. That is, it’s a set, something that can be a universe. And it has to be something we can define “open sets” on. The open sets for the manifold follow the rules I gave for a topology above. You can make a collection of these open sets. And the empty set has to be in that collection. So does the whole universe. The intersection of two open sets in that collection is itself in that collection. The union of open sets in that collection is in that collection. If all that’s true, then we have a topological space. A manifold asks for one thing more: around every point there has to be some open set that looks, up close, like plain old $\mathbb{R}^n$. That’s the “works like normal space” part.

And now the piece that makes every pop mathematics article about topology talk about doughnuts and coffee cups. It’s possible that two topologies might be homeomorphic to each other. “Homeomorphic” is a term of art. But you understand it if you remember that “morph” means shape, and suspect that “homeo” is probably close to “homogenous”. Two things being homeomorphic means you can match their parts up. In the matching there’s nothing left over in the first thing or the second. And the relations between the parts of the first thing are the same as the relations between the parts of the second thing.

So. Imagine the snippet of the number line for the numbers larger than -π and smaller than π. Think of all the open sets you can use to cover that. It will have a set like “the numbers bigger than 0 and less than 1”. A set like “the numbers bigger than -π and smaller than 2.1”. A set like “the numbers bigger than 0.01 and smaller than 0.011”. And so on.

Now imagine the points that exist on a circle, if you’ve omitted one point. Let’s say it’s the unit circle, centered on the origin, and that what we’re leaving out is the point that’s exactly to the left of the origin. The open sets for this are the arcs that cover some part of this punctured circle. There’s the arc that corresponds to the angles from 0 to 1 radian measure. There’s the arc that corresponds to the angles from -π to 2.1 radians. There’s the arc that corresponds to the angles from 0.01 to 0.011 radians. You see where this is going. You see why I say we can match those sets on the number line to the arcs of this punctured circle. There’s some details to fill in here. But you probably believe me this could be done if I had to.

There are two (or three) great branches of topology. One is called “algebraic topology”. It’s the one that makes for fun pop mathematics articles about imaginary rubber sheets. It’s called “algebraic” because this field makes it natural to study the holes in a sheet. And those holes tend to form groups and rings, basic pieces of Not That Algebra. The field (I’m told) can be interpreted as looking at functors from topological spaces to groups and rings. This makes for some neat tying-together of subjects this A To Z round.

The other branch is called “differential topology”, which is a great field to study because it sounds like what Mister Spock is thinking about. It inspires awestruck looks where saying you study, like, Bayesian probability gets blank stares. Differential topology is about differentiable functions on manifolds. This gets deep into mathematical physics.

As you study mathematical physics, you stop worrying about ever solving specific physics problems. Specific problems are petty stuff. What you like is solving whole classes of problems. A steady trick for this is to try to find some properties that are true about the problem regardless of what exactly it’s doing at the time. This amounts to finding a manifold that relates to the problem. Consider a central-force problem, for example, with planets orbiting a sun. A planet can’t move just anywhere. It can only be in places and moving in directions that give the system the same total energy that it had to start. And the same linear momentum. And the same angular momentum. We can match these constraints to manifolds. Whatever the planet does, it does it without ever leaving these manifolds. To know the shapes of these manifolds — how they are connected — and what kinds of functions are defined on them tells us something of how the planets move.

The maybe-third branch is “low-dimensional topology”. This is what differential topology is for two- or three- or four-dimensional spaces. You know, shapes we can imagine with ease in the real world. Maybe imagine with some effort, for four dimensions. This kind of branches out of differential topology because having so few dimensions to work in makes a lot of problems harder. We need specialized theoretical tools that only work for these cases. Is that enough to count as a separate branch? It depends what topologists you want to pick a fight with. (I don’t want a fight with any of them. I’m over here in numerical mathematics when I’m not merely blogging. I’m happy to provide space for anyone wishing to defend her branch of topology.)

But each grows out of this quite general, quite abstract idea, also known as “point-set topology”, that’s all about sets and collections of sets. There is much that we can learn from thinking about how to collect the things that are possible.

• #### gaurish 5:31 pm on Thursday, 14 September, 2017

I am really happy that you didn’t start with “Topology is also known as rubber sheet geometry”.

• #### Joseph Nebus 1:46 am on Friday, 15 September, 2017

Although I never know precisely what I’m going to write before I put in the first paragraph, I did resolve that I was going to put off rubber sheets, as well as coffee cups, as long as I possibly could.

• #### elkement (Elke Stangl) 7:33 am on Tuesday, 19 September, 2017

Great post! I was interested in your take as there are different ways to introduce manifolds in theoretical physics – I worked through different General Relativity textbooks / courses in parallel: One lecturer insisted that you need to treat that stuff “with the rigor of a mathematician”, and he went to great lengths to point out why a manifold is different from “normal space”. Others use the typical physicist’s approach of avoiding all specialized terms like fiber bundles and pullbacks, calling everything a “vector field” and “space”, only alluding to comprehensible familiar structures that sort of work in the same way – and somehow still managed to get across the messages and theorems in the end. But the rigorous lecturer said that it was exactly confusing the actual space (or spacetime) and a manifold that had stalled and confused Einstein for many years – so I suppose one should really learn the mathematics thoroughly here …
On the other hand from what you say it seems to me that manifolds have sort of emerged as a tool in physics, and so Einstein had to create or inspire new mathematics as he went along … while today we can build on this and after we learned the rigorous stuff it is probably OK to fall back into the typical physicist’s mode. (Landau / Lifshitz are my favorite resource in the latter class – they treat GR very concisely in the volume on the Classical Theory of Fields, part of their 10-volume Course of Theoretical Physics – and they use hardly any specialized term related to topologies).

• #### Joseph Nebus 8:10 pm on Friday, 22 September, 2017

Thank you so. Well, I’ve shared just how I got introduced to manifolds myself. I come from a more mathematical physics background and it’s a little surprising how often things would be introduced casually, trusting that the precise details would be filled in later. Sometimes they even were. I don’t think that’s idiosyncratic to my school, although it was a heavily applied-mathematics department. (The joke was that we had two tracks, Applied Mathematics and More Applied Mathematics.)

I’m not very well-studied in the history of modern physics, at least not in how the mathematical models develop. But I think that you have a good read on it, that we started to get manifolds because they solved some very specific niche problems well. And then treated rigorously they promised more, and then people started looking for problems they could solve. I think that’s probably more common a history for mathematical structures than people realize. But, as you point out, that doesn’t mean everyone’s going to see the tool as worth learning how to use.

## Reading the Comics, September 9, 2017: First Split Week Edition, Part 2

I don’t actually like it when a split week has so many more comics one day than the next, but I also don’t like splitting across a day if I can avoid it. This week, I had to do a little of both since there were so many comic strips that were relevant enough on the 8th. But they were dominated by the idea of going back to school, yet.

Randy Glasbergen’s Glasbergen Cartoons rerun for the 8th is another back-to-school gag. And it uses arithmetic as the mathematics at its most basic. Arithmetic might not be the most fundamental mathematics, but it does seem to be one of the parts we understand first. It’s probably last to be forgotten even on a long summer break.

Mark Pett’s Mr Lowe rerun for the 8th is built on the familiar old question of why learn arithmetic when there’s computers. Quentin is unconvinced of this as motive for learning long division. I’ll grant the case could be made better. I admit I’m not sure how, though. I think long division is good as a way to teach, especially, the process of estimating and improving estimates of a calculation. There’s a lot of real mathematics in doing that.

Guy Gilchrist’s Nancy for the 8th is another back-to-school strip. Nancy’s faced with “this much math” so close to summer. Her given problem’s a bit of a mess to me. But it’s mostly teaching whether the student’s got the hang of the order of operations. And the instructor clearly hasn’t got the sense right. People can ask whether we should parse “12 divided by 3 times 4” as “(12 divided by 3) times 4” or as “12 divided by (3 times 4)”, and that does make a major difference. Multiplication commutes; you can do it in any order. Division doesn’t. Leaving ambiguous phrasing is the sort of thing you learn, instinctively, to avoid. Nancy would be justified in refusing to do the problem on the grounds that there is no unambiguous way to evaluate it, and that the instructor surely did not mean for her to evaluate it all four different plausible ways.
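For the record, the two readings really do disagree, and by a lot. A quick check:

```python
# The two readings of "12 divided by 3 times 4", made explicit:
left_to_right = (12 / 3) * 4   # "(12 divided by 3) times 4"
grouped_right = 12 / (3 * 4)   # "12 divided by (3 times 4)"

assert left_to_right == 16.0
assert grouped_right == 1.0
```

Sixteen versus one: exactly the kind of gap that makes the ambiguity worth refusing to paper over.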

By the way, I’ve seen going around Normal Person Twitter this week a comment about how they just discovered the division symbol ÷, the obelus, is “just” the fraction bar with dots above and below where the unknown numbers go. I agree this is a great mnemonic for understanding what is being asked for with the symbol. But I see no evidence that this is where the symbol, historically, comes from. We first see ÷ used for division in the writings of Johann Heinrich Rahn, in 1659, and the symbol gained popularity particularly when John Pell picked it up nine years later. But it’s not like Rahn invented the symbol out of nowhere; it had been used for subtraction for over 125 years at that point. There were also a good number of writers using : or / or \ for division. There were some people using a center dot before and after a / mark for this, like the % sign fell on its side. That ÷ gained popularity in English and American writing seems to be a quirk of fate, possibly augmented by it being relatively easy to produce on a standard typewriter. (Florian Cajori notes that the National Committee on Mathematical Requirements recommended dropping ÷ altogether in favor of a symbol that actually has use in non-mathematical life, the / mark. The Committee recommended this in 1923, so you see how well the reform agenda is doing.)

Dave Whamond’s Reality Check for the 8th is the anthropomorphic-numerals joke for this week. A week without one is always a bit … peculiar.

Mark Leiknes’s Cow and Boy rerun for the 9th only mentions mathematics, and that as a course that Billy would rather be skipping. But I like the comic strip and want to promote its memory as much as possible. It’s a deeply weird thing, because it has something like 400 running jokes, and it’s hard to get into because the first couple times you see a pastoral conversation interrupted by an orca firing a bazooka at a cat-helicopter while a panda brags of blowing up the moon it seems like pure gibberish. If you can get through that, you realize why this is funny.

Dave Blazek’s Loose Parts for the 9th uses chalkboards full of stuff as the sign of a professor doing serious thinking. Mathematics is well-suited for chalkboards, at least in comic strips. It conveys a lot of thought and doesn’t need much preplanning. Although a joke about the difficulties in planning out blackboard use does take that planning. Yes, there is a particular pain that comes from having more stuff to write down in the quick yet easily collaborative medium of the chalkboard than there is board space to write.

Brian Basset’s Red and Rover for the 9th also really only casually mentions mathematics. But it’s another comic strip I like a good deal so would like to talk up. Anyway, it does show Red discovering he doesn’t mind doing mathematics when he sees the use.

• #### Roy Kassinger 10:35 pm on Friday, 15 September, 2017

Admittedly what I know of Newspaper Popeye could easily fit inside a thimble Theatre, but I notice this past week features Swee’pea loudly complaining about his allowance. My question: is this unusual? None of the other characters seem to notice him talking, let alone talking back.

• #### Joseph Nebus 11:39 pm on Sunday, 17 September, 2017

Swee’pea complaining about stuff is a common enough feature of the Sagendorf-era Popeye strips. At least the comics that get drawn on for the eternal reruns, which seems to be an era running from about 1969 to 1985. Two stories before Spincoal, for example, was ‘Who Am I’ (originally run December 1969 to May 1970), and it starts out with Swee’pea decrying the generation gap and wondering who the heck he is, exactly. It gets from there into weird territories, by which I mean Olive Oyl renting a gorilla from the zoo to make Popeye jealous, because Bud Sagendorf. But Swee’pea doesn’t seem to have much trouble talking to, or being understood by, other adults in that. Though the only example I ran across on a quick search of him speaking to anybody was to Popeye’s grandmom, so maybe that just runs in the family.

I haven’t been reviewing the in-eternal-rerun story strips on my humor blog, although I wonder if it wouldn’t be worth providing some space for recaps or for people who do want to figure out what the heck the storyline was in a Sagendorf-era Popeye or something.

## The Summer 2017 Mathematics A To Z: Sárközy’s Theorem

Gaurish, of For the love of Mathematics, gives me another chance to talk number theory today. Let’s see how that turns out.

# Sárközy’s Theorem.

I have two pieces to assemble for this. One is in factors. We can take any counting number, a positive whole number, and write it as the product of prime numbers. 2038 is equal to the prime 2 times the prime 1019. 4312 is equal to 2 raised to the third power times 7 raised to the second times 11. 1040 is 2 to the fourth power times 5 times 13. 455 is 5 times 7 times 13.

There are many ways to divide up numbers like this. Here’s one. Is there a square number among its factors? 2038 and 455 don’t have any. They’re each a product of prime numbers that are never repeated. 1040 has a square among its factors. 2 times 2 divides into 1040. 4312, similarly, has a square: we can write it as 2 squared times 2 times 7 squared times 11. So that is my first piece. We can divide counting numbers into squarefree and not-squarefree.
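If you’d rather not factor by hand, trial division settles it quickly for numbers this small. A Python sketch (the helper names are my own) reproducing the examples above:

```python
def prime_factors(n):
    """Factor n into primes by trial division; fine for numbers this small."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_squarefree(n):
    """Squarefree means no prime shows up twice in the factorization."""
    factors = prime_factors(n)
    return len(factors) == len(set(factors))

assert prime_factors(2038) == [2, 1019]
assert is_squarefree(2038) and is_squarefree(455)
assert not is_squarefree(1040) and not is_squarefree(4312)
```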

The other piece is in binomial coefficients. These are numbers, often quite big numbers, that get dumped on the high school algebra student as she tries to work with some expression like $(a + b)^n$. They’re also dumped on the poor student in calculus, as something about Newton’s binomial coefficient theorem. Which we hear is something really important. In my experience it wasn’t explained why this should rank up there with, like, the differential calculus. (Spoiler: it’s because of polynomials.) But it’s got some great stuff to it.

Binomial coefficients are among those utility players in mathematics. They turn up in weird places. In dealing with polynomials, of course. They also turn up in combinatorics, and through that, probability. If you run, for example, 10 experiments each of which could succeed or fail, the chance you’ll get exactly five successes is going to be proportional to one of these binomial coefficients. That they touch on polynomials and probability is a sign we’re looking at a thing woven into the whole universe of mathematics. We saw them some in talking, last A-To-Z around, about Yang Hui’s Triangle. That’s also known as Pascal’s Triangle. It has more names too, since it’s been found many times over.

The theorem under discussion is about central binomial coefficients. These are one specific coefficient in a row. The ones that appear, in the triangle, along the line of symmetry. They’re easy to describe in formulas. For a whole number ‘n’ that’s greater than or equal to zero, evaluate what we call 2n choose n:

${{2n} \choose{n}} = \frac{(2n)!}{(n!)^2}$

If ‘n’ is zero, this number is $\frac{0!}{(0!)^2}$ or 1. If ‘n’ is 1, this number is $\frac{2!}{(1!)^2}$ or 2. If ‘n’ is 2, this number is $\frac{4!}{(2!)^2}$ or 6. If ‘n’ is 3, this number is (sparing the formula) 20. The numbers keep growing. 70, 252, 924, 3432, 12870, and so on.

So. 1 and 2 and 6 are squarefree numbers. Not much arguing that. But 20? That’s 2 squared times 5. 70? 2 times 5 times 7. 252? 2 squared times 3 squared times 7. 924? That’s 2 squared times 3 times 7 times 11. 3432? 2 cubed times 3 times 11 times 13; there’s a 2 squared in there. 12870? 2 times 3 squared times it doesn’t matter anymore. It’s not a squarefree number.

There’s a bunch of not-squarefree numbers in there. The question: do we ever stop seeing squarefree numbers here?
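This is a question a computer can probe directly. A Python sketch: trial division is safe for the squarefree test here because every prime factor of 2n choose n is at most 2n, so the loop never has to grind through the coefficient’s enormous size.

```python
from math import comb

def is_squarefree(n):
    """Trial-division squarefree test; quick here since n's prime factors are all small."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        while n % d == 0:
            n //= d
        d += 1
    return True

# Scan the central binomial coefficients (2n choose n) for squarefree ones.
squarefree_ns = [n for n in range(200) if is_squarefree(comb(2 * n, n))]
assert squarefree_ns == [0, 1, 2, 4]  # the coefficients 1, 2, 6, and 70
```

Which is a numerical preview of the answer the essay builds toward: past 70, the scan finds nothing.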

So here’s Sárközy’s Theorem. It says that this central binomial coefficient ${{2n} \choose{n}}$ is never squarefree as long as ‘n’ is big enough. András Sárközy showed in 1985 that this was true. How big is big enough? … We have a bound, at least, for this theorem. If ‘n’ is larger than the number $2^{8000}$ then the corresponding coefficient can’t be squarefree. It might not surprise you that the formulas involved here feature the Riemann Zeta function. That always seems to turn up for questions about large prime numbers.

That’s a common state of affairs for number theory problems. Very often we can show that something is true for big enough numbers. I’m not sure there’s a clear reason why. When numbers get large enough it can be more convenient to deal with their logarithms, I suppose. And those look more like the real numbers than the integers. And real numbers are typically easier to prove stuff about. Maybe that’s it. This is vague, yes. But to ask ‘why’ some things are easy and some are hard to prove is a hard question. What is a satisfying ’cause’ here?

It’s tempting to say that since we know this is true for all ‘n’ above a bound, we’re done. We can just test all the numbers below that bound, and the rest is done. You can do a satisfying proof this way: show that eventually the statement is true, and show all the special little cases before it is. This particular result is kind of useless, though. $2^{8000}$ is a number that’s something like 2,400 digits long. For comparison, the total number of things in the universe is something like a number about 80 digits long. Certainly not more than 90. It’d take too long to test all those cases.

That’s all right. Since Sárközy’s proof in 1985 there’ve been other breakthroughs. In 1988 P Goetgheluck proved it was true for a big range of numbers: every ‘n’ that’s larger than 4 and less than $2^{42,205,184}$. That’s a number something more than 12 million digits long. In 1991 I Vardi proved we had no squarefree central binomial coefficients for ‘n’ greater than 4 and less than $2^{774,840,978}$, which is a number about 233 million digits long. And then in 1996 Andrew Granville and Olivier Ramaré showed directly that this was so for all ‘n’ larger than 4.

So that 70 that turned up just a few lines in is the last squarefree one of these coefficients.

Is this surprising? Maybe, maybe not. I’ll bet most of you didn’t have an opinion on this topic twenty minutes ago. Let me share something that did surprise me, and continues to surprise me. In 1974 David Singmaster proved that any integer divides almost all the binomial coefficients out there. “Almost all” is here a term of art, but it means just about what you’d expect. Imagine the giant list of all the numbers that can be binomial coefficients. Then pick any positive integer you like. The number you picked will divide into so many of the giant list that the exceptions won’t be noticeable. So that square numbers like 4 and 9 and 16 and 25 should divide into most binomial coefficients? … That’s to be expected, suddenly. Into the central binomial coefficients? That’s not so obvious to me. But then so much of number theory is strange and surprising and not so obvious.
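Singmaster’s “almost all” can at least be sampled. A Python sketch, taking 2 as an illustrative divisor (any other positive integer would do, with patterns of its own):

```python
from math import comb

# Sample Singmaster's claim with the divisor 2: count how many binomial
# coefficients C(n, k), for n up to 511, are even.
d = 2
total = 0
divisible = 0
for n in range(512):
    for k in range(n + 1):
        total += 1
        if comb(n, k) % d == 0:
            divisible += 1

# The odd entries of Pascal's triangle are sparse (19,683 of these 131,328
# entries), so well over 80 percent of the coefficients sampled are even.
assert divisible / total > 0.8
```

Push the range higher and the divisible fraction creeps toward one, which is the flavor of the theorem.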

• #### gaurish 2:56 pm on Tuesday, 12 September, 2017

Nice exposition, like always :-) Another place where this central binomial coefficient appears is in Paul Erdős’s proof of Bertrand’s postulate: https://en.wikipedia.org/wiki/Proof_of_Bertrand%27s_postulate

• #### Joseph Nebus 1:39 am on Friday, 15 September, 2017

Thank you. And I’m not sure how I overlooked that, since Bertrand’s Postulate is such a nice, easy-to-understand result. (The Postulate, which can be proven, is that there is always at least one prime between a whole number ‘n’ and its double, ‘2n’. With Sarkozy’s Theorem you can show this has to be true for numbers larger than 468. For the numbers from 1 up to 468, you can just check each case. It’s time-consuming but not hard.)

## Reading the Comics, September 8, 2017: First Split Week Edition, Part 1

It was looking like another slow week for something so early in the (United States) school year. Then Comic Strip Master Command sent a flood of strips in for Friday and Saturday, so I’m splitting the load. It’s not a heavy one, as back-to-school jokes are on people’s minds. But here goes.

Marcus Hamilton and Scott Ketcham’s Dennis the Menace for the 3rd of September, 2017 is a fair strip for this early in the school year. It’s an old joke about making subtraction understandable.

Marcus Hamilton and Scott Ketcham’s Dennis the Menace for the 3rd of September, 2017. The joke pretty well explains itself, but I would like to point out the great use of color for highlighting here. The different shades are done in a way very consistent with the mid-century stylings of the characters, but are subtler than could have been done when Hank Ketcham started the comic in the 1950s. For that matter, it’s subtler than could have been printed until quite recently in the newspaper industry. It’s worth noticing.

Mark Anderson’s Andertoons for the 3rd is the Mark Anderson installment for this week, so I’m glad to have that. It’s a good old classic cranky-students setup and it reminds me that “unlike fractions” is a thing. I’m not quibbling with the term, especially not after the whole long-division mess a couple weeks back. I just hadn’t thought in a long while about how different denominators do make adding fractions harder.

Jeff Harris’s Shortcuts informational feature for the 3rd I couldn’t remember why I put on the list of mathematically-themed comic strips. The reason’s in there. There’s a Pi Joke. But my interest was more in learning that strawberries are a hybrid created in France from a North American and a Chilean breed. Isn’t that intriguing stuff?

Bill Abbott’s Specktickles for the 8th of September, 2017. I confess that I don’t know whether this comic is running in any newspapers. But I could find it easily enough so that’s why I read it and look for panels that touch on mathematics topics.

Bill Abbott’s Specktickles for the 8th uses arithmetic — multiplication flash cards — as emblem of stuff to study. About all I can say for that.

## The Summer 2017 Mathematics A To Z: Ricci Tensor

Today’s is technically a request from Elke Stangl, author of the Elkemental Force blog. I think it’s also me setting out my own petard for self-hoisting, as my recollection is that I tossed off a mention of “defining the Ricci Tensor” as the sort of thing that’s got a deep beauty that’s hard to share with people. And that set off the search for where I had written about the Ricci Tensor. I hadn’t, and now look what trouble I’m in. Well, here goes.

# Ricci Tensor.

Imagine if nothing existed.

You’re not doing that right, by the way. I expect what you’re thinking of is a universe that’s a big block of space that doesn’t happen to have any things clogging it up. Maybe you have a natural sense of volume in it, so that you know something is there. Maybe you even imagine something with grid lines or reticules or some reference points. What I imagine after a command like that is a sort of great rectangular expanse, dark and faintly purple-tinged, with small dots to mark its expanse. That’s fine. This is what I really want. But it’s not really imagining nothing existing. There’s space. There’s some sense of where things would be, if they happened to be in there. We’d have to get rid of the space to have “nothing” exist. And even then we have logical problems that sound like word games. (How can nothing have a property like “existing”? Or a property like “not existing”?) This is dangerous territory. Let’s not step there.

So take the empty space that’s what mathematics and physics people mean by “nothing”. What do we know about it? Unless we’re being difficult, it’s got some extent. There are points in it. There’s some idea of distance between these points. There’s probably more than one dimension of space. There’s probably some sense of time, too. At least we’re used to the expectation that things would change if we watched. It’s a tricky sense to have, though. It’s hard to say exactly what time is. We usually fall back on the idea that we know time has passed if we see something change. But if there isn’t anything to see change? How do we know there’s still time passing?

You maybe already answered. We know time is passing because we can see space changing. One of the legs of Modern Physics is geometry, how space is shaped and how its shape changes. This tells us how gravity works, and how electricity and magnetism propagate. If there were no matter, no energy, no things in the universe there would still be some kind of physics. And interesting physics, since the mathematics describing this stuff is even subtler and more challenging to the intuition than even normal Euclidean space. If you’re going to read a pop mathematics blog like this, you’re very used to this idea.

Probably haven’t looked very hard at the idea, though. How do you tell whether space is changing if there’s nothing in it? It’s all right to imagine a coordinate system put on empty space. Coordinates are our concept. They don’t affect the space any more than the names we give the squirrels in the yard affect their behavior. But how to make the coordinates move with the space? It seems question-begging at least.

We have a mathematical gimmick to resolve this. Of course we do. We call it a name like a “test mass” or a “test charge” or maybe just “test particle”. Imagine that we drop into space a thing. But it’s only barely a thing. It’s tiny in extent. It’s tiny in mass. It’s tiny in charge. It’s tiny in energy. It’s so slight in every possible trait that it can’t sully our nothingness. All it does is let us detect it. It’s a good question how. We have good eyes. But now, we could see the particle moving as the space it’s in moves.

But again we can ask how. Just one point doesn’t seem to tell us much. We need a bunch of test particles, a whole cloud of them. They don’t interact. They don’t carry energy or mass or anything. They just carry the sense of place. This is how we would perceive space changing in time. We can ask questions meaningfully.

Here’s an obvious question: how much volume does our cloud take up? If we’re going to be difficult about this, none at all, since it’s a finite number of particles that all have no extent. But you know what we mean. Draw a ball, or at least an ellipsoid, around the test particles. How big is that? Wait a while. Draw another ball around the now-moved test particles. How big is that now?

Here’s another question: has the cloud rotated any? The test particles, by definition, don’t have mass or anything. So they don’t have angular momentum. They aren’t pulling one another to the side any. If they rotate it’s because space has rotated, and that’s interesting to consider. And another question: might they swap positions? Could a pair of particles that go left-to-right swap so they go right-to-left? That I ask admits that I want to allow the possibility.

These are questions about coordinates. They’re about how one direction shifts to other directions. How it stretches or shrinks. That is to say, these are questions of tensors. Tensors are tools for many things, most of them about how things transmit through different directions. In this context, time is another direction.

All our questions about how space moves we can describe as curvature. How do directions fall away from being perpendicular to one another? From being parallel to themselves? How do their directions change in time? If we have three dimensions in space and one in time — a four-dimensional “manifold” — then there are 20 different “directions”, each with maybe its own curvature to consider. This may seem like a lot. Every point on this manifold has this set of twenty numbers describing the curvature of space around it. There’s not much to do but accept that, though. If we could do with fewer numbers we would, but trying cheats us out of physics.

Ten of the numbers in that set are themselves a tensor. It’s known as the Weyl Tensor. It describes gravity’s equivalent to light waves. It’s about how the shape of our cloud will change as it moves. The other ten numbers form another tensor. That is, a thousand words into the essay, the Ricci Tensor. The Ricci Tensor describes how the volume of our cloud will change as the test particles move along. It may seem odd to need ten numbers for this, but that’s what we need. For three-dimensional space and one-dimensional time, anyway. We need fewer for two-dimensional space; more, for more dimensions of space.
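Those counts come from standard bookkeeping formulas of differential geometry, not derived in this essay: the Riemann curvature tensor in n dimensions has n²(n² − 1)/12 independent components, and a symmetric tensor like Ricci has n(n + 1)/2. A check in Python:

```python
def riemann_components(n):
    """Independent components of the Riemann curvature tensor in n dimensions."""
    return n * n * (n * n - 1) // 12

def ricci_components(n):
    """The Ricci tensor is symmetric, so it has n(n+1)/2 independent components."""
    return n * (n + 1) // 2

# Three dimensions of space plus one of time: twenty curvature numbers,
# ten of them in the Ricci tensor and the remaining ten in the Weyl tensor.
assert riemann_components(4) == 20
assert ricci_components(4) == 10
assert riemann_components(4) - ricci_components(4) == 10

# A two-dimensional manifold gets by with a single curvature number.
assert riemann_components(2) == 1
```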

The Ricci Tensor is a geometric construct. Most of us come to it, if we do, by way of physics. It’s a useful piece of general relativity. It has uses outside this, though. It appears in the study of Ricci Flows. Here space moves in ways akin to how heat flows. And the Ricci Tensor appears in projective geometry, in the study of what properties of shapes don’t depend on how we present them.

It’s still tricky stuff to get a feeling for. I’m not sure I have a good feel for it myself. There’s a long trail of mathematical symbols leading up to these tensors. The geometry of them becomes more compelling in four or more dimensions, which taxes the imagination. Yann Ollivier here has a paper that attempts to provide visual explanations for many of the curvatures and tensors that are part of the field. It might help.

• #### gaurish 11:41 pm on Friday, 8 September, 2017

The concept of tensors is really hard to get a feeling for, even for the physics undergrads at my college. So, if somebody would ask me, I just say: “The Ricci Tensor is a geometric construct. Most of us come to it, if we do, by way of physics. It’s a useful piece of general relativity.” And go away. I feel really motivated by your attempt to give a feeling of such a complicated concept.

• #### Joseph Nebus 3:48 pm on Sunday, 10 September, 2017

Thank you. I’ve never gotten a truly good understanding of tensors, surely because I followed a path through mathematics that avoided needing to deal with them directly. I have tried some self-study, but it’s quite hard to work up the motivation for ploughing through a long series of discussions about how the contravariant and the covariant differ when, at least on the equation level, it just looks like moving stuff from subscript to superscript and back again when you find the former inconvenient. But these glossaries have helped me at least understand the principles better. Maybe sometime I’ll commit to a project around here that makes me work through the subject. (Or maybe I’ll finally take advantage of the work benefit and take a course that gets me familiar with this sort of geometry.)

• #### elkement (Elke Stangl) 7:46 pm on Wednesday, 13 September, 2017

Thanks!! Wow – again I admire how you can explain such concepts without images or (tons of) equations. I figured you had explained the Riemann tensor before, but obviously I was wrong! So I am sorry for making your life as a math blogger difficult ;-)

• #### Joseph Nebus 1:44 am on Friday, 15 September, 2017

Aw, thank you, and there’s no need to apologize. This has been the most fun set of A To Z terms I’ve done, and I don’t think it’s coincidental that so many of them have been challenges.

## The Summer 2017 Mathematics A To Z: Quasirandom numbers

Gaurish, host of For the love of Mathematics, gives me the excuse to talk about amusement parks. You may want to brace yourself. Yes, this essay includes a picture. It would have included a video if I had enough WordPress privileges for that.

# Quasirandom numbers.

Think of a merry-go-round. Or carousel, if you prefer. I will venture a guess. You might like merry-go-rounds. They’re beautiful. They can evoke happy thoughts of childhood when they were a big ride it was safe to go on. But they don’t often make one think of thrills. They’re generally sedate things. They don’t need to be. There’s no great secret to making a carousel a thrill ride. They knew it a century ago, when all the great American carousels were carved. It’s simple. Make the thing spin fast enough, at the five or six rotations per minute the ride was made for. There are places that do this yet. There’s the Cedar Downs ride at Cedar Point, Sandusky, Ohio. There’s the antique carousel at Crossroads Village, a historical village/park just outside Flint, Michigan. There’s the Derby Racer at Playland in Rye, New York. There’s the carousel in the Merry-Go-Round Museum in Sandusky, Ohio. Any of them are great rides. Two of them have a special edge. I’ll come back to them.

Playland Amusement Park’s carousel, in Rye, New York, is the fastest I’m aware of running. Riders are warned ahead of time to sit so they’re leaning to the left, and the ride will not get up to full speed until the ride operator checks everyone during the ride. To get some idea of its speed, notice the ride operator on the left and how far she leans. She’s not being dramatic; that’s the natural stance. Also the tilt in the carousel’s floor is not camera trickery; it does lean like that. If you have a spare day in the New York City area and any interest in classic amusement parks, this is worth the trip.

Randomness is a valuable resource. We know it’s key to many things. We have major fields of mathematics built on it. We can understand the behavior of variables without ever knowing what value they have. All we need is to know the chance they might be in some particular range. This makes possible all kinds of problems too complicated to do otherwise. We know it’s critical. Quantum mechanics would not work without randomness. Without quantum mechanics, matter doesn’t work. And that’s true randomness, the kind where something is unpredictable. It’s not the kind of randomness we talk about when we ask, say, what’s the chance someone was born on a Tuesday. That’s mere hidden information: if we knew the month and date and year of a person’s birth we would know whether they were born Tuesday or not. We need more.

So the trouble is actually getting a random number. Well, a sequence of randomly drawn numbers. We rarely need this if we’re doing analysis. We can understand how some process changes the shape of a distribution without ever using the distribution. We can take derivatives of a function without ever evaluating the original function, after all.

But we do need randomly drawn numbers. We do too much numerical work with them. For example, it’s impossible to exactly integrate most functions. Numerical methods can take a ferociously long time to evaluate. A family of methods called Monte Carlo rely on randomly-drawn values to estimate the integral. The results are strikingly good for the work required. But they must have random numbers. The name “Monte Carlo” is not some cryptic code. It is an expression of how randomly drawn numbers make the tool work.
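The Monte Carlo idea can be sketched in a few lines of Python. This is a toy illustration, not any production library: draw points uniformly in the interval, average the function’s values there, and scale by the interval’s width.

```python
import random

def monte_carlo_integrate(f, a, b, n, seed=0):
    """Estimate the integral of f over [a, b] by averaging f at n
    uniformly drawn sample points, then scaling by the interval width."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Estimate the integral of x^2 over [0, 1]; the exact answer is 1/3.
estimate = monte_carlo_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
```

With a hundred thousand samples the estimate lands within about a hundredth of the true value, which is strikingly good for how little thought the method demands of us.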

It’s hard to get random numbers. Consider: we can’t write an algorithm to do it. If we were to write one, then we’d be able to predict what the sequence of numbers would be. We have some recourse. We could set up instruments to rely on the randomness that seems to be in the world. Thermal fluctuations, for example, created by processes outside any computer’s control, can give us a pleasant dose of randomness. If we need higher-quality random numbers than that we can go to exotic equipment. Geiger counters watching the decay of a not-alarmingly-radioactive sample. Cosmic ray detectors watching the sky.

Or we can write something that produces numbers that look random enough. They won’t really be random, and if we wait long enough we’ll notice the sequence repeats itself. But if we only need, say, ten numbers, who cares if the sequence will repeat after ten million numbers? (We’ll surely need more than ten numbers. But we can postpone the repetition until we’ve drawn far more than ten million numbers.)
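One classic way to produce numbers that look random enough is a linear congruential generator. The constants below are deliberately tiny so the repetition shows up after only sixteen draws; real generators use enormous moduli to push the repeat far beyond anything we’d ever draw.

```python
def lcg(seed, a=5, c=3, m=16):
    """A linear congruential generator: each value fully determines
    the next, so the sequence must eventually cycle. These tiny
    constants give a full cycle through all 16 possible values."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=0)
values = [next(gen) for _ in range(32)]
```

Draw thirty-two numbers and the second sixteen exactly repeat the first sixteen. The principle is the same for serious generators; only the scale differs.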

Two of the carousels I’ve mentioned have an astounding property. The horses in a file move. I mean, relative to each other. Some horse will start the race in front of its neighbors; some will start behind. The four move forward and back thanks to a mechanism of, I am assured, staggering complexity. There are only three carousels in the world that have it. There’s Cedar Downs at Cedar Point in Sandusky, Ohio; the Racing Downs at Playland in Rye, New York; and the Derby Racer at Blackpool Pleasure Beach in Blackpool, England. The mechanism in Blackpool’s hasn’t operated in years. The one at Playland’s had not run in years, but was restored for the 2017 season. My love and I made a trip specifically to ride that. (You may have heard of a fire at the carousel in Playland this summer. This was part of the building for their other, non-racing, antique carousel. My last information was that the carousel itself was all right.)

These racing derbies have the horses in a file move forward and back in a “random” way. It’s not truly random. If you knew exactly which gears were underneath each horse, and where in their rotations they were, you could say which horse was about to gain on its partners and which was about to fall back. But all that is concealed from the rider. The horse patterns will eventually, someday, repeat. If the gear cycles aren’t interrupted by maintenance or malfunctions. But nobody’s going to ride any horse long enough to notice. We have in these rides a randomness as good as what your computer makes, at least for the purpose it serves.

The racing nature of Playland’s and Cedar Point’s derby racers means that every ride includes exciting extra moments of overtaking or falling behind your partners to the side. It also means quarreling with your siblings about who really won the race because your horse started like four feet behind your sister’s and it ended only two feet behind so hers didn’t beat yours and, long story short, there was some punching, there was some spitting, and now nobody is gonna be allowed to get ice cream at the Carvel’s (for Playland) or cheese on a stick (for Cedar Point). The picture here is of the Cedar Downs ride at Cedar Point, and focuses on the poles that move the horses.

What does it mean to look random? Some things seem obvious. All the possible numbers ought to come up, sooner or later. Any particular possible number shouldn’t repeat too often. Any particular possible number shouldn’t go too long without repeating. There shouldn’t be clumps of numbers; if, say, ‘4’ turns up, we shouldn’t see ‘5’ turn up right away all the time.

We can make the idea of “looking” random quite literal. Suppose we’re selecting numbers from 0 through 9. We can draw the random numbers we’ve picked. Use the numbers as coordinates. Say we pick four digits: 1, 3, 9, and 0. Then draw the point that’s at x-coordinate 13, y-coordinate 90. Then the next four digits. Let’s say they’re 4, 2, 3, and 8. Then draw the point that’s at x-coordinate 42, y-coordinate 38. And repeat. What will this look like?
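That scheme of grouping digits into coordinates is simple enough to write out directly. This is a sketch of just the pair-building step; feeding the points to a plotting tool is left aside.

```python
def digits_to_points(digits):
    """Group a stream of digits four at a time into (x, y) points:
    the first two digits give the x-coordinate, the next two the y."""
    points = []
    for i in range(0, len(digits) - 3, 4):
        x = 10 * digits[i] + digits[i + 1]
        y = 10 * digits[i + 2] + digits[i + 3]
        points.append((x, y))
    return points
```

Running it on the digits from the text, 1, 3, 9, 0, 4, 2, 3, 8, gives back the points (13, 90) and (42, 38).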

If it clumps up, we probably don’t have good random numbers. If we see lines that points collect along, or avoid, there’s a good chance our numbers aren’t very random. If there are whole blocks of space that they occupy, and others they avoid, we may have a defective source of random numbers. We should expect the points to cover a space pretty uniformly. (There are more rigorous, logically sound, methods. The eye can be fooled easily enough. But it’s the same principle. We have some test that notices clumps and gaps.) But …

The thing is, there’s always going to be some clumps. There’ll always be some gaps. Part of randomness is that it forms patterns, or at least things that look like patterns to us. We can describe how big a clump (or gap; it’s the same thing, really) is for any particular quantity of randomly drawn numbers. If we see clumps bigger than that we can throw out the numbers as suspect. But … still …

Toss a coin fairly twenty times, and there’s no reason it can’t turn up tails sixteen times. This doesn’t happen often, but it will happen sometimes. Just luck. This surplus of tails should evaporate as we take more tosses. That is, we most likely won’t see 160 tails out of 200 tosses. We certainly will not see 1,600 tails out of 2,000 tosses. We know this as the Law of Large Numbers. Wait long enough and weird fluctuations will average out.
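A quick simulation bears this out. With Python’s random module (and a fixed seed so the run is repeatable), the fraction of tails in a hundred thousand fair tosses lands very near one-half, even though short stretches wander well away from it.

```python
import random

def tail_fraction(tosses, seed=42):
    """Simulate fair coin tosses and return the fraction of tails."""
    rng = random.Random(seed)
    tails = sum(rng.random() < 0.5 for _ in range(tosses))
    return tails / tosses

fraction = tail_fraction(100_000)
```

Twenty tosses can easily give a lopsided count; a hundred thousand almost never will. That is the Law of Large Numbers doing its quiet work.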

What if we don’t have time, though? For coin-tossing that’s silly; of course we have time. But for Monte Carlo integration? It could take too long to be confident we haven’t got too-large gaps or too-tight clusters.

This is why we take quasi-random numbers. We begin with what randomness we’re able to manage. But we massage it. Consider our coin-tossing example. Suppose after ten fair tosses we noticed there had been eight tails turn up. Then we would start tossing less fairly, trying to make heads more common. We would be happier if there were 12 rather than 16 tails after twenty tosses.

Draw the results. We get now a pattern that looks still like randomness. But it’s a finer sorting; it looks like static tidied up some. The quasi-random numbers are not properly random. Knowing that, say, the last several numbers were odd means the next one is more likely to be even, the Gambler’s Fallacy put to work. But in aggregate, we trust, we’ll be able to enjoy the speed and power of randomly-drawn numbers. It shows its strengths when we don’t know just how finely we must sample a range of numbers to get good, reliable results.
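One standard construction of quasirandom (low-discrepancy) numbers, though by no means the only one, is the van der Corput sequence: write n in some base, mirror its digits about the radix point, and read off a number between 0 and 1. Successive terms fill the interval evenly rather than clumping, which is exactly the tidied-up static described above.

```python
def van_der_corput(n, base=2):
    """n-th term of the van der Corput sequence: reverse the base-b
    digits of n across the radix point to get a value in [0, 1)."""
    value, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        value += digit / denom
    return value
```

The first few base-2 terms are 1/2, 1/4, 3/4, 1/8: each new point lands in the largest gap left by the points before it.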

To carousels. I don’t know whether the derby racers have quasirandom outcomes. I would find believable someone telling me that all the possible orderings of the four horses in any file are equally likely. To know would demand detailed knowledge of how the gearing works, though. Also probably simulations of how the system would work if it ran long enough. It might be easier to watch the ride for a couple of days and keep track of the outcomes. If someone wants to sponsor me doing a month-long research expedition to Cedar Point, drop me a note. Or just pay for my season pass. You folks would do that for me, wouldn’t you? Thanks.

• #### gaurish 6:55 pm on Wednesday, 6 September, 2017 Permalink | Reply

Liked by 1 person

• #### Joseph Nebus 1:22 am on Friday, 8 September, 2017 Permalink | Reply

This was actually an analogy I had waiting to be unleashed. I’d been thinking about using the racing derbies as an exciting case for pseudorandom numbers for ages, and this gave me the excuse to actually do it.

If I figure out how to upload videos I might do another essay about making pseudorandom sequences of numbers. I’ve got the movie footage of the Cedar Point and the Playland derbies. (Blackpool’s I visited with a barely-functional camera; it had gotten soaked in heavy rains a few days earlier. So I have precious few pictures of Blackpool Pleasure Beach and d’Efteling in the Netherlands. But that just gives me a pretext to go back and revisit both places.)

Liked by 2 people

## The Summer 2017 Mathematics A To Z: Prime Number

Gaurish, host of For the love of Mathematics, gives me another topic for today’s A To Z entry. I think the subject got away from me. But I also like where it got.

# Prime Number.

There’s something about ‘5’ that you only notice when you’re a kid first learning about numbers. You know that it’s a prime number because it’s equal to 1 times 5 and nothing else. You also know that once you introduce fractions, it’s equal to all kinds of things. It’s 10 times one-half and it’s 15 times one-third and it’s 2.5 times 2 and many other things. Why, you might ask the teacher, is it a prime number if it’s got a million billion trillion different factors? And when every other whole number has as many factors? If you get to the real numbers it’s even worse yet, although when you’re a kid you probably don’t realize that. If you ask, the teacher probably answers that it’s only the whole numbers that count for saying whether something is prime or not. And, like, 2.5 can’t be considered anything, prime or composite. This satisfies the immediate question. It doesn’t quite get at the underlying one, though. Why do integers have prime numbers while real numbers don’t?

To maybe have a prime number we need a ring. This is a creature of group theory, or what we call “algebra” once we get to college. A ring consists of a set of elements, and a rule for adding them together, and a rule for multiplying them together. And I want this ring to have a multiplicative identity. That’s some number which works like ‘1’: take something, multiply it by that, and you get that something back again. Also, I want this multiplication rule to commute. That is, the order of multiplication doesn’t affect what the result is. (If the order matters then everything gets too complicated to deal with.) Let me say the things in the set are numbers. It turns out (spoiler!) they don’t have to be. But that’s how we start out.

Whether the numbers in a ring are prime or not depends on the multiplication rule. Let’s take a candidate number that I’ll call ‘a’ to make my writing easier. If the only numbers whose product is ‘a’ are the pair of ‘a’ and the multiplicative identity, then ‘a’ is prime. If there’s some other pair of numbers that give you ‘a’, then ‘a’ is not prime.

The integers — the positive and negative whole numbers, including zero — are a ring. And they have prime numbers just like you’d expect, if we figure out some rule about how to deal with the number ‘-1’. There are many other rings. There’s a whole family of rings, in fact, so commonly used that they have shorthand. Mathematicians write them as “Zn”, where ‘n’ is some whole number. They’re the integers, modulo ‘n’. That is, they’re the whole numbers from ‘0’ up to the number ‘n-1’, whatever that is. Addition and multiplication work as they do with normal arithmetic, except that if the result is less than ‘0’ we add ‘n’ to it. If the result is more than ‘n-1’ we subtract ‘n’ from it. We repeat that until the result is something from ‘0’ to ‘n-1’, inclusive.
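That repeated adding-or-subtracting procedure is easy to write down directly. In practice one would just use Python’s % operator, but this sketch follows the description above step by step.

```python
def normalize(x, n):
    """Bring x into the range 0 through n - 1 by repeatedly adding or
    subtracting n, exactly as the text describes."""
    while x < 0:
        x += n
    while x > n - 1:
        x -= n
    return x

# Arithmetic in Z_4: 3 + 2 wraps around to 1, and 2 * 3 wraps to 2.
```

So in Z4 the sum of 3 and 2 is 1, and the product of 2 and 3 is 2, agreeing with the multiplication facts worked out below.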

(We use the letter ‘Z’ because it’s from the German word for numbers, and a lot of foundational work was done by German-speaking mathematicians. Alternatively, we might write this set as “In”, where “I” stands for integers. If that doesn’t satisfy, we might write this set as “Jn”, where “J” stands for integers. This is because it’s only very recently that we’ve come to see “I” and “J” as different letters rather than different ways to write the same letter.)

These modulo arithmetics are legitimate ones, good reliable rings. They make us realize how strange prime numbers are, though. Consider the set Z4, where the only numbers are 0, 1, 2, and 3. 0 times anything is 0. 1 times anything is whatever you started with. 2 times 1 is 2. Obvious. 2 times 2 is … 0. All right. 2 times 3 is 2 again. 3 times 1 is 3. 3 times 2 is 2. 3 times 3 is 1. … So that’s a little weird. The only product that gives us 3 is 3 times 1. So 3’s a prime number here. 2 isn’t a prime number: 2 times 3 is 2. For that matter even 1 is a composite number, an unsettling consequence.

Or consider Z5, where the only numbers are 0, 1, 2, 3, and 4. Here, there are no prime numbers. Each number is the product of at least one pair of other numbers. In Z6 we start to have prime numbers again. But Z7? Z8? I recommend these questions to a night when your mind is too busy to let you fall asleep.
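Those bedtime questions can also be settled by brute force. This sketch applies the essay’s working definition — ‘a’ is prime when the only pair of numbers multiplying to ‘a’ is ‘a’ and the multiplicative identity — to all of Z_n at once:

```python
def primes_in_zn(n):
    """Elements 'a' of Z_n that are prime under the essay's definition:
    the only unordered pair whose product is 'a' is {a, 1}."""
    primes = []
    for a in range(n):
        pairs = {frozenset((x, y))
                 for x in range(n) for y in range(n)
                 if (x * y) % n == a}
        if pairs == {frozenset((a, 1))}:
            primes.append(a)
    return primes
```

It confirms the worked examples: in Z4 only 3 is prime, Z5 has no primes at all, and in Z6 they reappear. Z7 and Z8 I leave to the sleepless reader.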

Prime numbers depend on context. In the crowded universe of all the rational numbers, or all the real numbers, nothing is prime. In the more austere world of the Gaussian Integers, familiar friends like ‘3’ are prime again, although ‘5’ no longer is. We recognize that as the product of $2 + \imath$ and $2 - \imath$, themselves now prime numbers.
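That factorization of 5 is quick to check with Python’s built-in complex numbers, treating the Gaussian integers as complex numbers with whole-number parts:

```python
# Check the claimed factorization of 5 in the Gaussian integers:
# (2 + i)(2 - i) = 4 - i^2 = 4 + 1 = 5.
product = complex(2, 1) * complex(2, -1)
```

The product comes out to exactly 5, with no imaginary part left over.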

So, given that these things do depend on context, should we care? Or let me put it another way. Suppose we contact a wholly separate culture, one that we can’t have influenced and one not influenced by us. It’s plausible that they should have a mathematics. Would they notice prime numbers as something worth studying? Or would they notice them the way we notice, say, pentagonal numbers, a thing that allows for some pretty patterns and that’s about it?

Well, anything could happen, of course. I’m inclined to think that prime numbers would be noticed, though. They seem to follow naturally from pondering arithmetic. And if one has thought of rings, then prime numbers seem to stand out. The way that Zn behaves changes in important ways if ‘n’ is a prime number. Most notably, if ‘n’ is prime (among the whole numbers), then we can define something that works like division on Zn. If ‘n’ isn’t prime (again), we can’t. This stands out. There are a host of other intriguing results that all seem to depend on whether ‘n’ is a prime number among the whole numbers. It seems hard to believe someone could think of the whole numbers and not notice the prime numbers among them.
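That division-like structure is concrete: for prime n, every nonzero element of Z_n has a multiplicative inverse; for composite n, some elements don’t. Python’s three-argument pow (available since Python 3.8) finds modular inverses directly:

```python
def zn_inverse(x, n):
    """Multiplicative inverse of x in Z_n, if it exists; None otherwise.
    pow with a negative exponent and a modulus raises ValueError when
    x is not invertible mod n."""
    try:
        return pow(x, -1, n)
    except ValueError:
        return None
```

In Z7 the inverse of 3 is 5, since 3 times 5 is 15, which wraps to 1. In Z4 the number 2 has no inverse at all, which is part of why Z4 behaves so oddly.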

And they do stand out, as these reliably peculiar things. Many things about them (in the whole numbers) are easy to prove. That there are infinitely many, for example, you can prove to a child. And there are many things we have no idea how to prove. That there are infinitely many primes which are exactly two more than another prime, for example. Any child can understand the question. The one who can prove it will win what fame mathematicians enjoy. If it can be proved.

They turn up in strange, surprising places. Just in the whole numbers we find some patches where there are many prime numbers in a row (forty percent of the numbers from 1 through 10 are prime). We can find deserts; we know of a stretch of 1,113,106 numbers in a row without a single prime among them. We know it’s possible to find prime deserts as vast as we want. Say you want a gap between primes of at least size N. Then look at the numbers (N+1)! + 2, (N+1)! + 3, (N+1)! + 4, and so on, up to (N+1)! + N+1. None of those can be prime numbers. So you must have a gap of at least size N. It may be larger; how do we know whether (N+1)! + 1 is a prime number?
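The desert construction is easy to verify mechanically: each of the numbers (N+1)! + k, for k from 2 through N+1, is divisible by its k, so none of them can be prime.

```python
from math import factorial

def prime_desert(n):
    """The n consecutive numbers (n+1)! + 2 through (n+1)! + (n+1).
    (n+1)! is divisible by every k up to n+1, so (n+1)! + k is
    divisible by k, and none of these numbers is prime."""
    start = factorial(n + 1)
    return [start + k for k in range(2, n + 2)]
```

For N = 6, the function lists six consecutive numbers starting at 7! + 2 = 5042, and each is divisible by the k it was built from.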

No telling. Well, we can check. See if any prime number divides into (N+1)! + 1. This takes a long time to do if N is all that big. There are no formulas we know of that will make this easy or quick.

We don’t call it a “prime number” if it’s in a ring that isn’t enough like the numbers. Fair enough. We shift the name to “prime element”. “Element” is a good generic name for a thing whose identity we don’t mean to pin down too closely. I’ve talked about the Gaussian Primes already, in an earlier essay and earlier in this essay. We can make a ring out of the polynomials whose coefficients are all integers. In that, $x^2 + 1$ is a prime. So is $x^2 - 2$. If this hasn’t given you some ideas what other polynomials might be primes, then you have something else to ponder while trying to sleep. Listing all the prime polynomials is likely more than you can manage, though.

Prime numbers seem to stand out, obvious and important. Humans have known about prime numbers for as long as we’ve known about multiplication. And yet there is something obscure about them. If there are cultures completely independent of our own, do they have insights which make prime numbers not such occult figures? How different would the world be if we knew all the things we now wonder about primes?

• #### gaurish 1:51 am on Tuesday, 5 September, 2017 Permalink | Reply

When I submitted this topic I didn’t expect algebraic number theory since for most people, prime numbers= analytic number theory. I really enjoyed this discussion about rings and modulo. My favourite statement: “To maybe have a prime number we need a ring. “

Liked by 1 person

• #### Joseph Nebus 1:19 am on Friday, 8 September, 2017 Permalink | Reply

Thank you. Most of the time I spent preparing this was in thinking about what there was to say about primes that wasn’t sieves and cryptography. Once I thought about how ‘5’ isn’t always prime that’s when I knew I had it.

Liked by 2 people

## Reading the Comics, September 1, 2017: Getting Ready For School Edition

In the United States at least it’s the start of the school year. With that, Comic Strip Master Command sent orders to do back-to-school jokes. They may be shallow ones, but they’re enough to fill my need for content. For example:

Bill Amend’s FoxTrot for the 27th of August, a new strip, has Jason fitting his writing tools to the class’s theme. So mathematics gets to write “2” in a complicated way. The mention of a clay tablet and cuneiform is oddly timely, given the current (excessive) hype about that Babylonian tablet of trigonometric values, which just shows how even a nearly-retired cartoonist will get lucky sometimes.

Dan Collins’s Looks Good On Paper for the 27th does a collage of school stuff, with mathematics the leading representative of the teacher-giving-a-lecture sort of class.

Olivia Walch’s Imogen Quest for the 28th uses calculus as the emblem of stuff that would be put on the blackboard and be essential for knowing. It’s legitimate formulas, so far as we get to see, the stuff that would in fact be in class. It’s also got an amusing, to me at least, idea for getting students’ attention onto the blackboard.

Tony Carrillo’s F Minus for the 29th is here to amuse me. I could go on to some excuse about how the sextant would be used for the calculations that tell someone where he is. But really I’m including it because I was amused and I like how detailed a sketch of a sextant Carrillo included here.

Jim Meddick’s Monty for the 29th features the rich obscenity Sedgwick Nuttingham III, also getting ready for school. In this case the summer mathematics tutoring includes some not-really-obvious game dubbed Integer Ball. I confess a lot of attempts to make games out of arithmetic strike me this way: fun to play, but do they actually build skills? But I don’t know what the rules are or what kind of game might be made of the integers here. I should at least hear it out.

Michael Cavna’s Warped for the 30th lists a top ten greatest numbers, spoofing on mindless clickbait. Cavna also, I imagine unintentionally, duplicates an ancient David Letterman Top Ten List. But it’s not like you can expect people to resist the idea of making numbered lists of numbers. Some of us have a hard time stopping.

Patrick Roberts’s Todd the Dinosaur for the 1st of September, 2017. So Paul Dirac introduced to quantum mechanics a mathematical construct known as the ‘bra-ket’. It’s written as a pair of terms, like, < A | B > . These can be separated into pieces, with < A | called the ‘bra’ and | B > the ‘ket’. We’re told in quantum mechanics class that this was a moment of possibly “innocent” overlap between what’s a convenient mathematical name and, as a piece of women’s clothing, unending amusement to male physics students. I do not know whether that’s so. I don’t see the thrill myself except in the suggestion that great physicists might be aware of women’s clothing.

Patrick Roberts’s Todd the Dinosaur for the 1st of September mentions a bunch of mathematics as serious studies. Also, to an extent, non-serious studies. I don’t remember my childhood well enough to say whether we found that vaguely-defined thrill in the word “algebra”. It seems plausible enough.

## The Summer 2017 Mathematics A To Z: Open Set

Today’s glossary entry is another request from Elke Stangl, author of the Elkemental Force blog. I’m hoping this also turns out to be a well-received entry. Half of that is up to you, the kind reader. At least I hope you’re a reader. It’s already gone wrong, as it was supposed to be Friday’s entry. I discovered I hadn’t actually scheduled it while I was too far from my laptop to do anything about that mistake. This spoils the nice Monday-Wednesday-Friday routine of these glossary entries that dates back to the first one I ever posted and just means I have to quit forever and not show my face ever again. Sorry, Ulam Spiral. Someone else will have to think of you.

# Open Set.

Mathematics likes to present itself as being universal truths. And it is. At least if we allow that the rules of logic by which mathematics works are universal. Suppose them to be true and the rest follows. But we start out with intuition, with things we observe in the real world. We’re happy when we can remove the stuff that’s clearly based on idiosyncratic experience. We find something that’s got to be universal.

Sets are pretty abstract things, as mathematicians use the term. They get to be hard to talk about; we run out of simpler words that we can use. A set is … a bunch of things. The things are … stuff that could be in a set, or else that we’d rule out of a set. We can end up better understanding things by drawing a picture. We draw the universe, which is a rectangular block, sometimes with dashed lines as the edges. The set is some blotch drawn on the inside of it. Some shade it in to emphasize which stuff we want in the set. If we need to pick out a couple things in the universe we drop in dots or numerals. If we’re rigorous about the drawing we could create a Venn Diagram.

When we do this, we’re giving up on the pure mathematical abstraction of the set. We’re replacing it with a territory on a map. Several territories, if we have several sets. The territories can overlap or be completely separate. We’re subtly letting our sense of geography, our sense of the spaces in which we move, infiltrate our understanding of sets. That’s all right. It can give us useful ideas. Later on, we’ll try to separate out the ideas that are too bound to geography.

A set is open if whenever you’re in it, you can’t be on its boundary. We never quite have this in the real world, with territories. The border between, say, New Jersey and New York becomes this infinitesimally slender thing, as wide in space as midnight is in time. But we can, with some effort, imagine the state. Imagine being as tiny in every direction as the border between two states. Then we can imagine the difference between being on the border and being away from it.

And not being on the border matters. If we are not on the border we can imagine the problem of getting to the border. Pick any direction; we can move some distance while staying inside the set. It might be a lot of distance, it might be a tiny bit. But we stay inside however we might move. If we are on the border, then there’s some direction in which any movement, however small, drops us out of the set. That’s a difference in kind between a set that’s open and a set that isn’t.

I say “a set that’s open and a set that isn’t”. There are such things as closed sets. A set doesn’t have to be either open or closed. It can be neither, a set that includes some of its borders but not other parts of it. It can even be both open and closed simultaneously. The whole universe, for example, is both an open and a closed set. The empty set, with nothing in it, is both open and closed. (This looks like a semantic trick. OK, if you’re in the empty set you’re not on its boundary. But you can’t be in the empty set. So what’s going on? … The usual. It makes other work easier if we call the empty set ‘open’. And the extra work we’d have to do to rule out the empty set doesn’t seem to get us anything interesting. So we accept what might be a trick.) The definitions of ‘open’ and ‘closed’ don’t exclude one another.

I’m not sure how this confusing state of affairs developed. My hunch is that the words ‘open’ and ‘closed’ evolved independent of each other. Why do I think this? An open set has its openness from, well, not containing its boundaries; from the inside there’s always a little more to it. A closed set has its closedness from sequences. That is, you can consider a string of points inside a set. Are these points leading somewhere? Is that point inside your set? If a string of points always leads to somewhere, and that somewhere is inside the set, then you have closure. You have a closed set. I’m not sure that the terms were derived with that much thought. But it does explain, at least in terms a mathematician might respect, why a set that isn’t open isn’t necessarily closed.

Back to open sets. What does it mean to not be on the boundary of the set? How do we know if we’re on it? We can define sets by all sorts of complicated rules: complex-valued numbers of size less than five, say. Rational numbers whose denominator (in lowest form) is no more than ten. Points in space from which a satellite dropped would crash into the moon rather than into the Earth or Sun. If we have an idea of distance we could measure how far it is from a point to the nearest part of the boundary. Do we need distance, though?

No, it turns out. We can get the idea of open sets without using distance. Introduce a neighborhood of a point. A neighborhood of a point is an open set that contains that point. It doesn’t have to be small, but that’s the connotation. And we get to thinking of little N-balls, circle or sphere-like constructs centered on the target point. It doesn’t have to be N-balls. But we think of them so much that we might as well say it’s necessary. If every point in a set has a neighborhood around it that’s also inside the set, then the set’s open.

You’re going to accuse me of begging the question. Fair enough. I was using open sets to define open sets. This use is all right for an intuitive idea of what makes a set open, but it’s not rigorous. We can give in and say we have to have distance. Then we have N-balls and we can build open sets out of balls that don’t contain the edges. Or we can try to drive distance out of our idea of open sets.

We can do it this way. Start off by saying the whole universe is an open set. Also that the union of any number of open sets is also an open set. And that the intersection of any finite number of open sets is also an open set. Does this sound weak? It does sound weak. It’s enough. We get the open sets we were thinking of all along from this.
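On a finite universe those rules can be checked mechanically. This sketch tests a candidate collection of open sets pairwise (on a finite collection, closure under pairwise unions and intersections gives the finite and arbitrary versions), and it requires the empty set explicitly, since as the union of no sets at all it must count as open too:

```python
from itertools import combinations

def is_topology(universe, opens):
    """Check the open-set axioms on a finite universe: the whole space
    and the empty set are open, and unions and intersections of open
    sets stay open."""
    opens = {frozenset(s) for s in opens}
    if frozenset(universe) not in opens or frozenset() not in opens:
        return False
    return all(a | b in opens and a & b in opens
               for a, b in combinations(opens, 2))
```

So on the universe {1, 2, 3}, the nested collection {}, {1}, {1, 2}, {1, 2, 3} qualifies, while swapping {1, 2} for {2} fails: the union of {1} and {2} would be {1, 2}, which that collection doesn’t contain.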

This works for the sets that look like territories on a map. It also works for sets for which we have some idea of distance, however strange it is to our everyday distances. It even works if we don’t have any idea of distance. This lets us talk about topological spaces, and study what geometry looks like if we can’t tell how far apart two points are. We can, for example, at least tell that two points are different. Can we find a neighborhood of one that doesn’t contain the other? Then we know they’re some distance apart, even without knowing what distance is.

That we reached so abstract an idea of what an open set is without losing the idea’s usefulness suggests we’re doing well. So we are. It also shows why Nicholas Bourbaki, the famous nonexistent mathematician, thought set theory and its related ideas were the core of mathematics. Today category theory is a more popular candidate for the core of mathematics. But set theory is still close to the core, and much of analysis is about what we can know from the fact of sets being open. Open sets let us explain a lot.

• #### elkement (Elke Stangl) 9:52 am on Sunday, 3 September, 2017 Permalink | Reply

Thanks – beautifully written and very interesting :-)

Like

• #### Joseph Nebus 1:15 am on Friday, 8 September, 2017 Permalink | Reply

Quite kind of you. Thank you.

Liked by 1 person

• #### gaurish 10:17 am on Sunday, 3 September, 2017 Permalink | Reply

Whenever I study analysis/topology, I can’t stop myself from appreciating this simple yet powerful idea.

Liked by 1 person

• #### Joseph Nebus 1:17 am on Friday, 8 September, 2017 Permalink | Reply

It’s a great concept, and one more powerful than it looks. It’s hard to explain how open-ness creeps in to everything, and why it offers something useful that closed-ness doesn’t.

Like

## How August 2017 Treated My Mathematics Blog

Well, August 2017 was wholly soaked up by the Summer 2017 A-To-Z project. I should have expected that, based on past experience. But I’d hoped to squeeze out one or two Why Stuff Can Orbit posts, since I have that fine Thomas K Dye art to go with it. But I’ve also had more challenging topics to describe than I’d figured on. That’s all right. I’ve really liked the first month of it.

These things usually see my readership rise, and so it did this time. After June’s 878 page views and July’s 911, August saw me creep back above a thousand views at last: 1,030. The number of unique readers rose too, from June’s 542 to July’s 568 up to August’s 680. That is as the number of posts I did rose from my normal 13 (I’d had 12 or 13 posts each month all year) to 21, so maybe it’s not the most efficient reader-per-word tradeoff. Hm.

It’s made me more likable, though. The number of likes has risen from 99 in June to 118 in July and to 147 in August. Still nothing like June 2015 when I did the first of these glossaries, though. Ah well. The number of comments held steady, 45 just as in July. There’d have been more but I wasn’t able to answer a couple comments before the end of the day Thursday. It’ll go into September’s statistics. Anyway, June had a poor 13 comments. … And I admit I’m flattered how many of August’s comments were people happy with the A To Z essays I wrote. I’ve been happy with them myself.

What posts were popular here? Mostly A To Z pieces, with one perennial beating them all:

And how many readers did I get from the various nations of the world? Something like this:

United States 460
Philippines 94
India 64
United Kingdom 42
Singapore 36
Austria 25
Hong Kong SAR China 22
Italy 22
Brazil 18
Spain 17
Australia 16
Turkey 13
France 10
Slovenia 10
Argentina 7
Germany 7
Malaysia 7
Thailand 6
Ireland 5
New Zealand 5
Romania 5
Greece 4
Mexico 4
Sweden 4
U.S. Virgin Islands 4
Bulgaria 3
Croatia 3
Finland 3
Indonesia 3
Japan 3
Poland 3
Russia 3
South Africa 3
Switzerland 3
China 2
Colombia 2
Norway 2
Paraguay 2
South Korea 2
Ukraine 2
Vietnam 2
Armenia 1
Bhutan 1
Cambodia 1 (*)
Chile 1
Czech Republic 1
European Union 1 (*)
Georgia 1
Hungary 1 (**)
Iceland 1
Israel 1 (*)
Jordan 1
Kuwait 1
Netherlands 1
Nigeria 1
Oman 1 (*)
Pakistan 1
Puerto Rico 1
Saudi Arabia 1 (*)
United Arab Emirates 1
Venezuela 1

There were 62 countries sending me readers in August, trusting that you count the European Union and for that matter the US Virgin Islands as countries. There were 60 in July and 52 in June. This time around there were 20 single-reader countries, just as in July, and up from June’s 16. Hungary’s in its third month of being a single-reader country. Cambodia, the European Union, Israel, Oman, and Saudi Arabia are on two-month streaks.

According to Insights, my most popular day of the week was Wednesday, when 18 percent of page views came in. This shows what happens when I have major content posted on days that aren’t Sunday. Of course, in July it was Monday (19 percent) and in June it was Sunday (18 percent), so I guess the only thing to do is project that in September my busiest day will be Saturday with 19 percent of the page views again. The most popular hour was 4 pm, with 11 percent of page views, which is intriguing because I shifted from setting most stuff to post at 4 pm to posting at 6 pm. That’s only 11 percent of page views this past month, though. In July it had been 19 percent of page views; in June, 14 percent. This seems like a crazily wide fluctuation in viewing for that hour and I wonder what’s going on.

WordPress says September begins with my blog having 689 WordPress.com followers, who’ve got me on their Reader pages. That’s up from 676 at the start of August and 666 at the start of July. Would you like to be among them? I’d like you among them. You can join this bunch by clicking on the ‘Follow Nebusresearch’ button at the upper-right corner of the page. If you’d like to follow by e-mail, there’s a ‘Follow Blog Via E-Mail’ button up there too.

Those on Twitter know me as @Nebusj. Those not on Twitter don’t need to worry about it. The problem will take care of itself.

## The Summer 2017 Mathematics A To Z: N-Sphere/N-Ball

Today’s glossary entry is a request from Elke Stangl, author of the Elkemental Force blog, which among other things has made me realize how much there is interesting to say about heat pumps. Well, you never know what’s interesting before you give it serious thought.

# N-Sphere/N-Ball.

I’ll start with space. Mathematics uses a lot of spaces. They’re inspired by geometry, by the thing that fills up our room. Sometimes we make them different by simplifying them, by thinking of the surface of a table, or what geometry looks like along a thread. Sometimes we make them bigger, imagining a space with more directions than we have. Sometimes we make them very abstract. We realize that we can think of polynomials, or functions, or shapes as if they were points in space. We can describe things that work like distance and direction and angle for these more abstract things.

What are useful things we know about space? Many things. Whole books full of things. Let me pick one of them. Start with a point. Suppose we have a sense of distance, of how far one thing is from another. Then we can have an idea of the neighborhood. We can talk about some chunk of space that’s near our starting point.

So let’s agree on a space, and on some point in that space. You give me a distance. I give back to you — well, two obvious choices. One of them is all the points in that space that are exactly that distance from our agreed-on point. We know what this is, at least in the two kinds of space we grow up comfortable with. In three-dimensional space, this is a sphere. A shell, at least, centered around whatever that first point was. In two-dimensional space, on our desktop, it’s a circle. We know it can look a little weird: if we started out in a one-dimensional space, there’d be only two points, one on either side of the original center point. But it won’t look too weird. Imagine a four-dimensional space. Then we can speak of a hypersphere. And we can imagine that as being somehow a ball that’s extremely spherical. Maybe it pokes out of the rendering we try making of it, like a cartoon character falling out of the movie screen. We can imagine a five-dimensional space, or a ten-dimensional one, or something with even more dimensions. And we can conclude there’s a sphere for even that much space. Well, let it.

What are spheres good for? Well, they’re nice familiar shapes. Even if they’re in a weird number of dimensions. They’re useful, too. A lot of what we do in calculus, and in analysis, is about dealing with difficult points. Points where a function is discontinuous. Points where the function doesn’t have a value. One of calculus’s reliable tricks, though, is that we can swap information about the edge of things for information about the interior. We can replace a point with a sphere and find our work is easier.

The other thing I could give you. It’s a ball. That’s all the points that aren’t more than your distance away from our point. It’s the inside, the whole planet rather than just the surface of the Earth.

And here’s an ambiguity. Is the surface a part of the ball? Should we include the edge, or do we just want the inside? And that depends on what we want to do. Either might be right. If we don’t need the edge, then we have an open set (stick around for Friday). This gives us the open ball. If we do need the edge, then we have a closed set, and so, the closed ball.
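The whole difference between the open and the closed ball is a strict versus a non-strict inequality. A minimal sketch of that, in Python, with the function names my own invention for illustration:

```python
import math

def in_open_ball(point, center, radius):
    """True when the point lies strictly inside the ball, edge excluded."""
    return math.dist(point, center) < radius

def in_closed_ball(point, center, radius):
    """True when the point lies inside the ball or on its surface."""
    return math.dist(point, center) <= radius

# A point exactly on the sphere belongs to the closed ball but not the open one.
print(in_open_ball((1.0, 0.0), (0.0, 0.0), 1.0))    # False
print(in_closed_ball((1.0, 0.0), (0.0, 0.0), 1.0))  # True
```

The `math.dist` call is the ordinary Euclidean distance; swapping it for another distance function gives the open and closed balls of that other geometry.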

Balls are so useful. Take a chunk of space that you find interesting for whatever reason. We can represent that space as the joining together (the “union”) of a bunch of balls. Probably not all the same size, but that’s all right. We might need infinitely many of these balls to get the chunk precisely right, or as close to right as can be. But that’s all right. We can still do it. Most anything we want to analyze is easier to prove on any one of these balls. And since we can describe the complicated shape as this combination of balls, then we can know things about the whole complicated shape. It’s much the way we can know things about polygons by breaking them into triangles, and showing things are true about triangles.

Sphere or ball, whatever you like. We can describe how many dimensions of space the thing occupies with the prefix. The 3-ball is everything close enough to a point that’s in a three-dimensional space. The 2-ball is everything close enough in a two-dimensional space. The 10-ball is everything close enough to a point in a ten-dimensional space. The 3-sphere is … oh, all right. Here we have a little squabble. People doing geometry prefer this to be the sphere in three dimensions. People doing topology prefer this to be the sphere whose surface has three dimensions, that is, the sphere in four dimensions. Usually which you mean will be clear from context: are you reading a geometry or a topology paper? If you’re not sure, oh, look for anything hinting at the number of spatial dimensions. If nothing gives you a hint maybe it doesn’t matter.

Either way, we do want to talk about the family of shapes without committing ourselves to any particular number of dimensions. And so that’s why we fall back on ‘N’. ‘N’ is a good name for “the number of dimensions we’re working in”, and so we use it. Then we have the N-sphere and the N-ball, a sphere-like shape, or a ball-like shape, that’s in however much space we need for the problem.

I mentioned something early on that I bet you paid no attention to. That was that we need a space, and a point inside the space, and some idea of distance. One of the surprising things mathematics teaches us about distance is … there’s a lot of ideas of distance out there. We have what I’ll call an instinctive idea of distance. It’s the one that matches what holding a ruler up to stuff tells us. But we don’t have to have that.

I sense the grumbling already. Yes, sure, we can define distance by some screwball idea, but do we ever need it? To which the mathematician answers, well, what if you’re trying to figure out how far away something in midtown Manhattan is? Where you can only walk along streets or avenues and we pretend Broadway doesn’t exist? Huh? How about that? Oh, fine, the skeptic might answer. Grant that there can be weird cases where the straight-line ruler distance is less enlightening than some other scheme is.

Well, there are. There exists a whole universe of different ideas of distance. There’s a handful of useful ones. The ordinary straight-line ruler one, the Euclidean distance, you get in a method so familiar it’s worth saying what you do. You find the coordinates of your two given points. Take the pairs of corresponding coordinates: the x-coordinates of the two points, the y-coordinates of the two points, the z-coordinates, and so on. Find the differences between corresponding coordinates. Take the absolute value of those differences. Square all those absolute-value differences. Add up all those squares. Take the square root of that. Fine enough.

There’s a lot of novelty acts. For example, do that same thing, only instead of raising the differences to the second power, raise them to the 26th power. When you get the sum, instead of the square root, take the 26th root. There. That’s a legitimate distance. No, you will never need this, but your analysis professor might give you it as a homework problem sometime.

Some are useful, though. Raising to the first power, and then eventually taking the first root, gives us something useful. Yes, raising to a first power and taking a first root isn’t doing anything. We just say we’re doing that for the sake of consistency. Raising to an infinitely large power, and then taking an infinitely great root, inspires angry glares. But we can make that idea rigorous. When we do it gives us something useful.

And here’s a new, amazing thing. We can still make “spheres” for these other distances. On a two-dimensional space, the “sphere” with this first-power-based distance will look like a diamond. The “sphere” with this infinite-power-based distance will look like a square. On a three-dimensional space the “sphere” with the first-power-based distance looks like a … well, more complicated, three-dimensional diamond. The “sphere” with the infinite-power-based distance looks like a box. The “balls” in all these cases look like what you expect from knowing the spheres.
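The diamond and the square are easy to spot numerically. Here is a sketch of the power-based distance described above, with helper names of my own invention, checking a couple of points against the two-dimensional unit “spheres”:

```python
def p_distance(p_coords, q_coords, power):
    """Sum of |coordinate differences| raised to a power, then the power-th root."""
    total = sum(abs(a - b) ** power for a, b in zip(p_coords, q_coords))
    return total ** (1.0 / power)

def max_distance(p_coords, q_coords):
    """The limiting 'infinite power' distance: the largest coordinate difference."""
    return max(abs(a - b) for a, b in zip(p_coords, q_coords))

origin = (0, 0)
# On the first-power unit 'sphere' (the diamond), coordinates' sizes sum to 1:
print(p_distance(origin, (0.5, 0.5), 1))   # 1.0
# The corner (1, 1) sits on the infinite-power unit 'sphere' (the square) ...
print(max_distance(origin, (1, 1)))        # 1
# ... but not on the ordinary unit circle, where it is sqrt(2) away:
print(p_distance(origin, (1, 1), 2))
```

Plot every point at distance exactly 1 from the origin under each of these and the diamond, circle, and square shapes appear.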

As with the ordinary ideas of spheres and balls these shapes let us understand space. Spheres offer a natural path to understanding difficult points. Balls offer a natural path to understanding complicated shapes. The different ideas of distance change how we represent these, and how complicated they are, but not the fact that we can do it. And it allows us to start thinking of what spheres and balls for more abstract spaces, universes made of polynomials or formed of trig functions, might be. They’re difficult to visualize. But we have the grammar that lets us speak about them now.

And for a postscript: I also wrote about spheres and balls as part of my Set Tour a couple years ago. Here’s the essay about the N-sphere, although I didn’t exactly call it that. And here’s the essay about the N-ball, again not quite called that.

• #### elkement (Elke Stangl) 6:25 pm on Wednesday, 30 August, 2017

Thanks a lot for picking my suggestion! Great essay – I need to let the idea of this general distance sink in …

As you mentioned heat pumps :-) … my fondness of N-balls or spheres is related to that, well, sort of – related to thermodynamics / statistical mechanics. What fascinated me a long time ago was that, for extremely high N, ‘all the volume’ of the N-ball is concentrated in just a thin shell beneath the surface – which I tried to describe here https://elkement.blog/2017/06/17/spheres-in-a-space-with-trillions-of-dimensions/

Something, that’s maybe rather trivial or only weird because it is hard to visualize a sphere in a space with 10^25 dimensions… And after your post now I wonder what would happen if we use the 10^25th root to define the distance?


• #### Joseph Nebus 1:06 am on Friday, 8 September, 2017

Thank you, and I’m quite glad you like. … And yes, where the ‘volume’ is in an N-ball is a weird and wondrous thing. It maybe doesn’t break intuition, but it does challenge it at least.

Using the 10^25th power-and-root to define distance would look, practically, like using the infinite power. In practice, that would select, between two points, whatever the longest distance along one of the axes is and pick that out. The rest of the axes would make for a tiny modification, but at that extreme a power it wouldn’t be noticeable. I’m told that when someone does need to simulate the infinite-power distance for numerical purposes they’ll just toss in a very large power. The error made by doing that should be smaller than the usual acceptable floating-point errors. My impression is that the power used would be closer to, like, a hundred or a thousand. But I don’t have experience directly in the field about this and don’t know why they wouldn’t just use the greatest-coordinate-difference if that’s what they wanted in the first place.


## The Summer 2017 Mathematics A To Z: Morse Theory

Today’s A To Z entry is a change of pace. It dives deeper into analysis than this round has been. The term comes from Mr Wu, of the Singapore Maths Tuition blog, whom I thank for the request.

# Morse Theory.

An old joke, as most of my academia-related ones are. The young scholar says to his teacher how amazing it was in the old days, when people were foolish, and thought the Sun and the Stars moved around the Earth. How fortunate we are to know better. The elder says, ah yes, but what would it look like if it were the other way around?

There are many things to ponder packed into that joke. For one, the elder scholar’s awareness that our ancestors were no less smart or perceptive or clever than we are. For another, the awareness that there is a problem. We want to know about the universe. But we can only know what we perceive now, where we are at this moment. Even a note we’ve written in the past, or a message from a trusted friend, we can’t take uncritically. What we know is that we perceive this information in this way, now. When we pay attention to our friends in the philosophy department we learn that knowledge is even harder than we imagine. But I’ll stop there. The problem is hard enough already.

We can put it in a mathematical form, one that seems immune to many of the worst problems of knowledge. In this form it looks something like this: what can we know about the universe, if all we really know is what things in that universe are doing near us? The things that we look at are functions. The universe we’re hoping to understand is the domain of the functions. One filter we use to see the universe is Morse Theory.

We don’t look at every possible function. Functions are too varied and weird for that. We look at functions whose range is the real numbers. And they must be smooth. This is a term of art. It means the function has derivatives. It has to be continuous. It can’t have sharp corners. And it has to have lots of derivatives. The first derivative of a smooth function has to also be continuous, and has to also lack corners. And the derivative of that first derivative has to be continuous, and to lack corners. And the derivative of that derivative has to be the same. A smooth function can be differentiated over and over again, infinitely many times. None of those derivatives can have corners or jumps or missing patches or anything. This is what makes it smooth.
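One way to see a corner numerically: estimate the derivative of the absolute-value function on either side of zero and watch it jump. A sketch, with the helper invented for illustration:

```python
def numerical_derivative(f, x, h=1e-6):
    # Central-difference estimate of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# abs() is continuous everywhere but has a corner at zero: its derivative
# jumps from -1 to +1, so the derivative is discontinuous and abs is not smooth.
print(round(numerical_derivative(abs, -0.5), 6))  # -1.0
print(round(numerical_derivative(abs, 0.5), 6))   # 1.0

# A cubic is smooth; its derivative 3x^2 varies gradually, with no jump.
print(round(numerical_derivative(lambda x: x ** 3, 0.5), 4))  # 0.75
```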

Most functions are not smooth, in much the same way most shapes are not circles. That’s all right. There are many smooth functions anyway, and they describe things we find interesting. Or we think they’re interesting, anyway. Smooth functions are easy for us to work with, and to know things about. There’s plenty of smooth functions. If you’re interested in something else there’s probably a smooth function that’s close enough for practical use.

Morse Theory builds on the “critical points” of these smooth functions. A critical point, in this context, is one where the derivative is zero. Derivatives being zero usually signal something interesting going on. Often they show where the function changes behavior. In freshman calculus they signal where a function changes from increasing to decreasing, so the critical point is a maximum. In physics they show where a moving body no longer has an acceleration, so the critical point is an equilibrium. Or where a system changes from one kind of behavior to another. And here — well, many things can happen.

So take a smooth function. And take a critical point that it’s got. (And, erg. Technical point. The derivative of your smooth function, at that critical point, shouldn’t be having its own critical point going on at the same spot. That makes stuff more complicated.) It’s possible to approximate your smooth function near that critical point with, of course, a polynomial. It’s always polynomials. The shape of these polynomials gives you an index for these points. And that can tell you something about the shape of the domain you’re on.

At least, it tells you something about what the shape is where you are. The universal model for this — based on skimming texts and papers and popularizations of this — is of a torus standing vertically. Like a doughnut that hasn’t tipped over, or like a tire on a car that’s working as normal. I suspect this is the best shape to use for teaching, as anyone can understand it while it still shows the different behaviors. I won’t resist.

Imagine slicing this tire horizontally. Slice it close to the bottom, below the central hole, and the part that drops down is a disc. At least, it could be flattened out tolerably well to a disc.

Slice it somewhere that intersects the hole, though, and you have a different shape. You can’t squash that down to a disc. You have a noodle shape. A cylinder at least. That’s different from what you got the first slice.

Slice the tire somewhere higher. Somewhere above the central hole, and you have … well, it’s still a tire. It’s got a hole in it, but you could imagine patching it and driving on. There’s another different shape that we’ve gotten from this.

Imagine we were confined to the surface of the tire, but did not know what surface it was. That we start at the lowest point on the tire and ascend it. From the way the smooth functions around us change we can tell how the surface we’re on has changed. We can see its change from “basically a disc” to “basically a noodle” to “basically a doughnut”. We could work out what the surface we’re on has to be, thanks to how these smooth functions around us change behavior.

Occasionally we mathematical-physics types want to act as though we’re not afraid of our friends in the philosophy department. So we deploy the second thing we know about Immanuel Kant. He observed that knowing the force of gravity falls off as the square of the distance between two things implies that the things should exist in a three-dimensional space. (Source: I dunno, I never read his paper or book or whatever and dunno I ever heard anyone say they did.) It’s a good observation. Geometry tells us what physics can happen, but what physics does happen tells us what geometry they happen in. And it tells the philosophy department that we’ve heard of Immanuel Kant. This impresses them greatly, we tell ourselves.

Morse Theory is a manifestation of how observable physics teaches us the geometry they happen on. And in an urgent way, too. Some of Edward Witten’s pioneering work in superstring theory was in bringing Morse Theory to quantum field theory. He showed a set of problems called the Morse Inequalities gave us insight into supersymmetric quantum mechanics. The link between physics and doughnut-shapes may seem vague. This is because you’re not remembering that mathematical physics sees “stuff happening” as curves drawn on shapes which represent the kind of problem you’re interested in. Learning what the shapes representing the problem look like is solving the problem.

If you’re interested in the substance of this, the universally-agreed reference is J Milnor’s 1963 text Morse Theory. I confess it’s hard going to read, because it’s a symbols-heavy textbook written before the existence of LaTeX. Each page reminds one why typesetters used to get hazard pay, and not enough of it.

• #### gaurish 5:40 am on Tuesday, 29 August, 2017

My favourite functions: “Most functions are not smooth, in much the same way most shapes are not circles.”


• #### Joseph Nebus 12:59 am on Friday, 8 September, 2017

Thank you. Sometimes I think while writing that I’ve really hit something good, and that sentence was the one that gave me that feeling that week.


• #### gaurish 5:41 am on Tuesday, 29 August, 2017

Typo: my favourite statement (not functions)


• #### mathtuition88 12:01 pm on Tuesday, 29 August, 2017

Reblogged this on Singapore Maths Tuition.


• #### elkement (Elke Stangl) 6:39 pm on Wednesday, 30 August, 2017

I totally like: “Occasionally we mathematical-physics types want to act as though we’re not afraid of our friends in the philosophy department.” :-)

It’s an amazing A-Z – I am always late and in catch-up mode, but I am enjoying every post!


• #### Joseph Nebus 1:09 am on Friday, 8 September, 2017

Thank you kindly. And never worry about being late; you can see how scrambled my schedule has been lately. I’m hoping to get down to Inbox 100 sometime this weekend, if all goes well.


## Reading the Comics, August 26, 2017: Dragon Edition

It’s another week where everything I have to talk about comes from GoComics.com. So, no pictures. The Comics Kingdom and the Creators.com strips are harder for non-subscribers to read so I feel better including those pictures. There’s not an overarching theme that I can fit to this week’s strips either, so I’m going to name it for the one that was most visually interesting to me.

Charlie Pondrebarac’s CowTown for the 22nd I just knew was a rerun. It turned up the 26th of August, 2015. Back then I described it as also “every graduate students’ thesis defense anxiety dream”. Now I wonder if I have the possessive apostrophe in the right place there. On reflection, if I have “every” there, then “graduate student” has to be singular. If I dropped the “every” then I could talk about “graduate students” in the plural and be sensible. I guess that’s all for a different blog to answer.

Mike Thompson’s Grand Avenue for the 22nd threatened to get me all cranky again, as Grandmom decided the kids needed to do arithmetic worksheets over the summer. The strip earned bad attention from me a few years ago when a week, maybe more, of the strip was focused on making sure the kids drudged their way through times tables. I grant it’s a true attitude that some people figure what kids need is to do a lot of arithmetic problems so they get better at arithmetic problems. But it’s hard enough to convince someone that arithmetic problems are worth doing, and to make them chores isn’t helping.

John Zakour and Scott Roberts’s Maria’s Day for the 22nd name-drops fractions as a worse challenge than dragon-slaying. I’m including it here for the cool partial picture of the fire-breathing dragon. Also I take a skeptical view of the value of slaying the dragons anyway. Have they given enough time for sanctions to work?

Maria’s Day pops back in the 24th. Needs more dragon-slaying.

Eric the Circle for the 24th, this one by Dennill, gets in here by throwing some casual talk about arcs around. That and π. The given formula looks like nonsense to me. $\frac{\pi}{180}\cdot 94 - \sin 94^\circ$ has parts that make sense. The first part will tell you what radian measure corresponds to 94 degrees, and that’s fine. Mathematicians will tend to look for radian measures rather than degrees for serious work. The sine of 94 degrees they might want to know. Subtracting the two? I don’t see the point. I dare to say this might be a bunch of silliness.

Cathy Law’s Claw for the 25th writes off another Powerball lottery loss as being bad at math and how it’s like algebra. Seeing algebra in lottery tickets is a kind of badness at mathematics, yes. It’s probability, after all. Merely playing can be defended mathematically, though, at least for the extremely large jackpots such as the Powerball had last week. If the payout is around 750 million dollars (as it was) and the chance of winning is about one in 250 million (close enough to true), then the expectation value of playing a ticket is about three dollars. If the ticket costs less than three dollars (and it does; I forget if it’s one or two dollars, but it’s certainly not three), then, on average you could expect to come out slightly ahead. Therefore it makes sense to play.

Except that, of course, it doesn’t make sense to play. On average you’ll lose the cost of the ticket. The on-average long-run you need to expect to come out ahead is millions of tickets deep. The chance of any ticket winning is about one in 250 million. You need to play a couple hundred million times to get a good enough chance of the jackpot for it to really be worth it. Therefore it makes no sense to play.
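Both halves of the argument check out with the rough figures above. A sketch, taking the 750-million-dollar payout and one-in-250-million odds from the text, and assuming a two-dollar ticket:

```python
# Rough figures: jackpot and odds from the discussion; ticket cost assumed.
jackpot = 750e6
p_win = 1 / 250e6
ticket_cost = 2.0

expected_winnings = p_win * jackpot            # average payout per ticket
net_expectation = expected_winnings - ticket_cost

# The catch: the chance of winning even once across many plays stays tiny.
plays = 100
p_any_win = 1 - (1 - p_win) ** plays

print(expected_winnings)  # 3.0
print(net_expectation)    # 1.0 -- 'ahead' on average
print(p_any_win)          # still only about 4 in 10 million
```

The expectation value says play; the win probability says you will essentially never see that average realized.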

Mathematical logic therefore fails us: we can justify both playing and not playing. We must study lottery tickets as a different thing. They are (for the purposes of this) entertainment, something for a bit of disposable income. Are they worth the dollar or two per ticket? Did you have other plans for the money that would be more enjoyable? That’s not my ruling to make.

Samson’s Dark Side Of The Horse for the 25th just hurts my feelings. Why the harsh word, Samson? Anyway, it’s playing on the typographic similarity between 0 and O, and how we bunch digits together.

Grouping together three decimal digits as a block is as old, in the Western tradition, as decimal digits are. Leonardo of Pisa, in Liber Abbaci, groups the thousands and millions and thousands of millions and such together. By 1228 he had the idea to note this grouping with an arc above the set of digits, like a tie between notes on a sheet of music. This got cut down, part of the struggle in notation to write as little as possible. Johannes de Sacrobosco in 1256 proposed just putting a dot every third digit. In 1636 Thomas Blundeville put a | mark after every third digit. (I take all this, as ever, from Florian Cajori’s A History Of Mathematical Notations, because it’s got like everything in it.) We eventually settled on separating these stanzas of digits with a , or . mark. But that it should be three digits goes as far back as it could.
