This is mostly a post for myself, so that I remember the existence of something I mean to read. I’ve tried downloading things I mean to read into scattered files. I’ve also tried stuffing links to things I mean to read into Yojimbo. Maybe putting it here will at least let someone read the things.

Anyway, this is a short essay by Joel Abraham that’s on arxiv.org. It’s Introduction to the p-adic Space. p-adics are a way of thinking about what the real numbers are. The need for ways to think about what the real numbers are turns up when you think carefully about where our idea of them comes from.

It’s easy to see where the counting numbers like ‘1’ and ‘2’ and ‘3’ come from. They’re part of our evolutionary heritage, the part of mathematics that we know is understood also by apes and crows and raccoons. We understand some of it before we even have language.

With some thinking, and many people helping, we can go from these counting numbers to the idea of ‘0’. And even into the negative counting numbers like ‘-4’. And by thinking about multiplication, and how to reverse multiplication, we get fractions. Rational numbers. Positive and negative, given the chance. But then what are the irrational numbers? We can easily work out that there have to be irrational numbers. We can name some of them. But how do we give a clear definition of the whole mass of them? It should be more than just “also the other numbers”.

The p-adic numbers are one of the ways to go about this. They start with thinking about what we mean for two numbers to be “close to” one another, and with thinking hard about how to write numbers. And this gets to interesting insights I don’t know as well as I’d like.

For this deficiency I blame Usenet. I first noticed p-adics in the voluminous and not particularly wise rantings of a crank poster to sci.math, back in the day. I forget what point, if any, he was trying to prove. But to first meet a subject as someone’s apparently idiosyncratic scheme for rewriting numbers so that everything we were already used to was useless, and in the service of some clearly nonsense goal (I think he was maybe trying to show how the number meant by 0.99999… was somehow different from the number meant by 1), is to have your view of it badly hobbled. And I followed a strongly mathematical-physics track of classes as an undergraduate and a graduate student, where it’s easy to just miss problems of number representation. (This even though p-adics could offer some advantages in numerical computing; they could make for more numerically stable representations of irrational numbers.)
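To give a taste of that closeness idea, here’s a little Python sketch of my own, nothing from Abraham’s essay: the p-adic absolute value of a number is p raised to the negative of the number of times p divides it. So two numbers count as close when their difference is divisible by a high power of p, which is quite unlike the usual notion of closeness.

```python
def p_adic_valuation(n, p):
    """Number of times the prime p divides the nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def p_adic_abs(n, p):
    """The p-adic absolute value |n|_p = p**(-v_p(n)), with |0|_p = 0."""
    if n == 0:
        return 0.0
    return p ** (-p_adic_valuation(n, p))

def p_adic_distance(a, b, p):
    """Two numbers are p-adically close when their difference has small |.|_p."""
    return p_adic_abs(a - b, p)

# 5-adically, 2 and 127 are close: their difference, 125, is 5**3.
print(p_adic_distance(2, 127, 5))   # 0.008
# But 2 and 3 are at distance 1, since 5 doesn't divide 1 at all.
print(p_adic_distance(2, 3, 5))     # 1.0
```

So under this metric 127 is a better approximation to 2 than 3 is, which is exactly the sort of insight-upending rewriting of numbers that made the subject look like crankery at first glance.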

As I say, I want to fix that, and a friend linked to this arxiv post. And now that I’ve said stuff about it in public maybe it’ll coax me into going back and reading and understanding it all. We’ll see.

## The Summer 2017 Mathematics A To Z: Well-Ordering Principle

It’s the last full week of the Summer 2017 A To Z! Four more essays and I’ll have completed this project and curl up into a word coma. But I’m not there yet. Today’s request is another from Gaurish, who’s given me another delightful topic to write about. Gaurish hosts a fine blog, For the love of Mathematics, which I hope you’ve given a try.

## Well-Ordering Principle.

An old mathematics joke. Or paradox, if you prefer. What is the smallest whole number with no interesting properties?

Not one. That’s for sure. We could talk about one forever. It’s the first number we ever know. It’s the multiplicative identity. It divides into everything. It exists outside the realm of prime or composite numbers. It’s — all right, we don’t need to talk about one forever. Two? The smallest prime number. The smallest even number. The only even prime. The only — yeah, let’s move on. Three; the smallest odd prime number. Triangular number. One of only two prime numbers that isn’t one more or one less than a multiple of six. Let’s move on. Four. A square number. The smallest whole number that isn’t 1 or a prime. Five. Prime number. First sum of two different prime numbers. Part of the first prime pair. Six. Smallest perfect number. Smallest product of two different prime numbers. Let’s move on.

And so on. Somewhere around 22 or so, the imagination fails and we can’t think of anything not-boring about this number. So we’ve found the first number that hasn’t got any interesting properties! … Except that being the smallest boring number must be interesting. So we have to note that this is otherwise the smallest boring number except for that bit where it’s interesting. On to 23, which used to be the default funny number. 24. … Oh, carry on. Maybe around 31 things settle down again. Our first boring number! Except that, again, being the smallest boring number is interesting. We move on to 32, 33, 34. When we find one that couldn’t be interesting, we find that’s interesting. We’re left to conclude there is no such thing as a boring number.

This would be a nice thing to say for numbers that otherwise get no attention, if we pretend they can have hurt feelings. But we do have to admit, 1729 is actually only interesting because it’s a part of the legend of Srinivasa Ramanujan. Enjoy the silliness for a few paragraphs more.

(This is, if I’m not mistaken, a form of the heap paradox. Don’t remember that? Start with a heap of sand. Remove one grain; you’ve still got a heap of sand. Remove one grain again. Still a heap of sand. Remove another grain. Still a heap of sand. And yet if you did this enough you’d leave one or two grains, not a heap of sand. Where does that change?)

Another problem, something you might consider right after learning about fractions. What’s the smallest positive number? Not one-half, since one-third is smaller and still positive. Not one-third, since one-fourth is smaller and still positive. Not one-fourth, since one-fifth is smaller and still positive. Pick any number you like and there’s something smaller and still positive. This is a difference between the positive integers and the positive real numbers. (Or the positive rational numbers, if you prefer.) The positive integers do have a smallest element, and so does any collection of them. That seems obvious, but it is not a given.

The difference is that the positive integers are well-ordered, while the positive real numbers aren’t. Well-ordering we build on ordering. Ordering is exactly what you imagine it to be. Suppose you can say, for any two things in a set, which one is less than another. A set is well-ordered if whenever you have a non-empty subset you can pick out the smallest element. Smallest means exactly what you think, too.

The positive integers are well-ordered. And more. The way they’re set up, they have a property called the “well-ordering principle”. This means any non-empty set of positive integers has a smallest number in it.
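The principle has a computational shadow, which I’ll sketch in Python (my own illustration): if any positive integer at all has some property, then counting upward from 1 is guaranteed to find the smallest one that does.

```python
def smallest_satisfying(property_holds, limit=10**6):
    """Find the least positive integer with a given property.

    The well-ordering principle guarantees a least one exists whenever
    any positive integer has the property at all; the limit just keeps
    this sketch from looping forever on a property nothing satisfies.
    """
    for n in range(1, limit + 1):
        if property_holds(n):
            return n
    return None

# The smallest positive integer divisible by both 6 and 10:
print(smallest_satisfying(lambda n: n % 6 == 0 and n % 10 == 0))  # 30
```

Note there is no such routine for the positive rationals: with no smallest positive rational to start from, there is nowhere to begin the search.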

This is one of those principles that seems so obvious and so basic that it can’t teach anything interesting. That it serves a role in some proofs, sure, that’s easy to imagine. But something important?

Look back to the joke/paradox I started with. It proves that every positive integer has to be interesting. Every number, including the ones we use every day. Including the ones that no one has ever used in any mathematics or physics or economics paper, and never will. We can avoid that paradox by attacking the vagueness of “interesting” as a word. Are you interested to know the 137th number you can write as the sum of cubes in two different ways? Before you say ‘yes’, consider whether you could name it ten days after you’ve heard the number.

(Granted, yes, it would be nice to know the 137th such number. But would you ever remember it? Would you trust that it’ll be on some Wikipedia page that somehow is never threatened with deletion for not being noteworthy? Be honest.)
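If you want to hunt for such numbers yourself, a brute-force sketch (mine, with an arbitrary search bound) turns up the famous first case, Ramanujan’s 1729:

```python
from collections import defaultdict

def two_way_cube_sums(limit):
    """Numbers up to limit expressible as a**3 + b**3 with 0 < a <= b
    in at least two different ways, listed in increasing order."""
    ways = defaultdict(list)
    a = 1
    while 2 * a ** 3 <= limit:          # smallest sum for this a is a^3 + a^3
        b = a
        while a ** 3 + b ** 3 <= limit:
            ways[a ** 3 + b ** 3].append((a, b))
            b += 1
        a += 1
    return sorted(n for n, reps in ways.items() if len(reps) >= 2)

print(two_way_cube_sums(20000))  # [1729, 4104, 13832]
```

1729 is 1³ + 12³ and also 9³ + 10³. Finding the 137th such number the same way would just take a bigger bound and more patience than anyone would remember spending.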

But suppose we have some property that isn’t so mushy. Suppose we can describe it in some way that’s indexed by the positive integers. Now consider the set of positive integers for which the property fails. Suppose we can show that, whatever that set might be, the property can’t actually fail for its smallest member. What do we know?

We know that the property must be true for all the positive integers. The positive integers have the well-ordering principle, so if the set of failures weren’t empty, it would have some smallest member. And if we’ve shown that no number can be the smallest failure, then there can’t be any failures at all. There you go.

This technique we call, when it’s introduced, induction. It’s usually a baffling subject because it’s usually taught like this: suppose the thing you want to show is indexed to the positive integers. Show that it’s true when the index is ‘1’. Show that if the thing is true for an arbitrary index ‘n’, then it must be true for ‘n + 1’. It’s baffling because that second part is hard to visualize. Students make a lot of mistakes learning it, usually on examples like finding the sum of the first ‘N’ whole numbers, or of their squares or cubes. I don’t think induction is ever taught by way of the well-ordering principle. But the principle does get used in proofs, once you get to the part of analysis where you don’t have to interact with actual specific numbers much anymore.
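To show the usual flavor of the thing, here’s a numerical spot-check (my own sketch, not a proof) of the base case and the inductive step for the classic first example, the formula 1 + 2 + … + n = n(n + 1)/2:

```python
def closed_form(n):
    """The claimed closed form for the sum of the first n positive integers."""
    return n * (n + 1) // 2

# Base case: the formula is right when n = 1.
assert closed_form(1) == 1

# Inductive step, spot-checked: if the formula gives the sum of the
# first n numbers, then adding n + 1 must give its value at n + 1.
for n in range(1, 1000):
    assert closed_form(n) + (n + 1) == closed_form(n + 1)

print("base case and inductive step check out")
```

The spot check isn’t the proof, of course; the proof is noticing that the inductive step is a bit of algebra that holds for every ‘n’, not just the first thousand.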

The well-ordering principle also gives us the method of infinite descent. You encountered this in learning proofs about, like, how the square root of two must be an irrational number. In this, you show that if something is true for some positive integer, then it must also be true for some other, smaller positive integer. And therefore some other, smaller positive integer again. And again, until you get into numbers small enough you can check by hand.
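Here’s a sketch (mine) of the descent step in that square-root-of-two proof. If positive integers with a^2 = 2 b^2 existed, the little identity below would manufacture a strictly smaller pair with the same property, which the well-ordering principle forbids:

```python
def descend(a, b):
    """The descent step: maps (a, b) to a pair whose value of
    a**2 - 2*b**2 is the negation of the old one.  So a**2 == 2*b**2
    would be preserved, with smaller positive entries when b < a < 2b,
    as must hold for any supposed integer square root of two."""
    return 2 * b - a, a - b

# The identity behind the step, spot-checked over many pairs:
for a in range(1, 100):
    for b in range(1, 100):
        a2, b2 = descend(a, b)
        assert a2 ** 2 - 2 * b2 ** 2 == -(a ** 2 - 2 * b ** 2)

print("descent identity verified")
```

So a smallest pair satisfying a^2 = 2 b^2 would yield a smaller one, and there can’t be any such pairs at all.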

It keeps creeping in. The Fundamental Theorem of Arithmetic says that every positive whole number larger than one is a product of a unique string of prime numbers. (Well, the order of the primes doesn’t matter. 2 times 3 times 5 is the same number as 3 times 2 times 5, and so on.) The well-ordering principle guarantees you can factor numbers into a product of primes. Watch this slick argument.

Suppose there were whole numbers, larger than one, that aren’t the product of prime numbers, and gather them into a set. There must, by the well-ordering principle, be some smallest number in that set. Call that number ‘n’. We know that ‘n’ can’t be prime, because if it were, then that would be its prime factorization. So it must be the product of at least two other numbers. Let’s suppose it’s two numbers. Call them ‘a’ and ‘b’. So, ‘n’ is equal to ‘a’ times ‘b’.

Well, ‘a’ and ‘b’ have to be less than ‘n’. So they’re smaller than the smallest number that isn’t a product of primes. So, ‘a’ is the product of some set of primes. And ‘b’ is the product of some set of primes. And so, ‘n’ has to equal the primes that factor ‘a’ times the primes that factor ‘b’. … Which is the prime factorization of ‘n’. So, ‘n’ can’t be in the set of numbers that don’t have prime factorizations. And so there can’t be any numbers that don’t have prime factorizations. It’s for the same reason we worked out there aren’t any numbers with nothing interesting to say about them.
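The argument translates almost directly into code. Here’s my sketch: either ‘n’ is prime, or it splits into smaller factors we recurse on, and it’s the well-ordering of the positive integers that guarantees the recursion bottoms out.

```python
def smallest_divisor(n):
    """The least divisor of n that's greater than 1 (n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def prime_factors(n):
    """Prime factorization of n >= 2, mirroring the well-ordering argument."""
    d = smallest_divisor(n)
    if d == n:                 # n is prime: that's its own factorization
        return [n]
    # Otherwise n = d * (n // d), both strictly smaller; recurse on each.
    return prime_factors(d) + prime_factors(n // d)

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
```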

And isn’t it delightful to find so simple a principle can prove such specific things?

## Reading the Comics, December 30, 2016: New Year’s Eve Week Edition

So last week, for schedule reasons, I skipped the Christmas Eve strips and promised to get to them this week. There weren’t any Christmas Eve mathematically-themed comic strips. Figures. This week, I need to skip New Year’s Eve comic strips for similar schedule reasons. If there are any, I’ll talk about them next week.

Lorie Ransom’s The Daily Drawing for the 28th is a geometry wordplay joke for this installment. Two of them, when you read the caption.

John Graziano’s Ripley’s Believe It or Not for the 28th presents the quite believable claim that Professor Dwight Barkley created a formula to estimate how long it takes a child to ask “are we there yet?” I am skeptical the equation given means all that much. But it’s normal mathematician-type behavior to try modelling stuff. That will usually start with thinking of what one wants to represent, and what things about it could be measured, and how one expects these things might affect one another. There are usually several plausible-sounding models, and one has to select the one or ones that seem likely to be interesting. They have to be simple enough to calculate, but still interesting. They need to have consequences that aren’t obvious. And then there’s the challenge of validating the model. Does its description match the thing we’re interested in well enough to be useful? Or at least instructive?

Len Borozinski’s Speechless for the 28th name-drops Albert Einstein and the theory of relativity. Marginal mathematical content, but it’s a slow week.

John Allison’s Bad Machinery for the 29th mentions higher dimensions. More dimensions. In particular it names ‘ana’ and ‘kata’ as “the weird extra dimensions”. Ana and kata are a pair of directions coined by the mathematician Charles Howard Hinton to give us a way of talking about directions in hyperspace. They echo the up/down, left/right, in/out pairs. I don’t know that any mathematicians besides Rudy Rucker actually use these words, though, and that in his science fiction. I may not read enough four-dimensional geometry to know the working lingo. Hinton also coined the word “tesseract”, which has escaped from being a mathematician’s specialist term into something normal people might recognize. Mostly because of Madeleine L’Engle, I suppose, but that counts.

Samson’s Dark Side of the Horse for the 29th is Dark Side of the Horse’s entry for this essay. It’s a fun bit of play on counting, especially as a way to get to sleep.

John Graziano’s Ripley’s Believe It or Not for the 29th mentions a little numbers and numerals project. Or at least representations of numbers. Finding other orders for numbers can be fun, and it’s a nice little pastime. I don’t know that there’s an important point to this sort of project. But it can be fun to accomplish. Beautiful, even.

Mark Anderson’s Andertoons for the 30th relieves us by having a Mark Anderson strip for this essay. And makes for a good Roman numerals gag.

Ryan Pagelow’s Buni for the 30th can be counted as an anthropomorphic-numerals joke. I know it’s more of a “ugh 2016 was the worst year” joke, but it parses either way.

John Atkinson’s Wrong Hands for the 30th is an Albert Einstein joke. It’s cute as it is, though.

## Reading the Comics, November 5, 2016: Surprisingly Few Halloween Costumes Edition

Comic Strip Master Command gave me a light load this week, which suits me fine. I’ve been trying to get the End 2016 Mathematics A To Z comfortably under way instead. It does strike me that there were fewer Halloween-themed jokes than I’d have expected. For all the jokes there are to make about Halloween, I’d have imagined some with mathematical relevance would come up. But they didn’t and, huh. So it goes. The one big exception is the one I’d have guessed would be the exception.

Bill Amend’s FoxTrot for the 30th — a new strip — plays with the scariness of mathematics. Trigonometry specifically. Trig is probably second only to algebra for the scariest mathematics normal people encounter. And that’s probably more because people get to algebra before they might get to trigonometry. Which is madness, in its way. Trigonometry is about how we can relate angles, arcs, and linear distances. It’s about stuff anyone would like to know, like how to go from an easy-to-make observation of the angle spanned by a thing to how big the thing must be. But the field does require a bunch of exotic new functions like sine and tangent and novelty acts like “arc-cosecant”. And the numbers involved can be terrible things. The sine of an angle, for example, is almost always going to be some irrational number. For common angles we use a lot it’ll be an irrational number with an easy-to-understand form. For example the sine of 45 degrees, mentioned here, is “one-half the square root of two”. Anyone not trying to be intimidating will use that instead. But the sine of, say, 50 degrees? I don’t know what that is either except that it’s some never-ending sequence of digits. People love to have digits, but when they’re asked to do something with them, they get afraid and I don’t blame them.
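You can check the claims with Python’s math module (a quick illustration of mine): the sine of 45 degrees matches one-half the square root of two as closely as floating point can tell, while the sine of 50 degrees is just an unfamiliar run of digits.

```python
import math

sin45 = math.sin(math.radians(45))
print(sin45)                        # about 0.7071067811865475
print(math.sqrt(2) / 2)             # about 0.7071067811865476
assert math.isclose(sin45, math.sqrt(2) / 2)

# The sine of 50 degrees has no such tidy closed form.
print(math.sin(math.radians(50)))   # about 0.7660444431...
```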

Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 30th uses sudoku as shorthand for “genius thinking”. I am aware some complain sudoku isn’t mathematics. It’s certainly logic, though, and if we’re going to rule out logic puzzles from mathematics we’re going to lose a lot of fun fields. One of the commenters provided what I suppose the solution to be. (I haven’t checked.) If you wish to do the puzzle, be careful about scrolling.

In Jef Mallet’s Frazz for the 2nd Caulfield notices something cute about 100. A perfect square is a familiar enough idea; it’s a whole number that’s the square of another whole number. The “roundest of round numbers” is a value judgement I’m not sure I can get behind. It’s a good round number, anyway, at least for stuff that’s sensibly between about 50 and 150. Or maybe between 50 and 500 if you’re just interested in about how big something might be. An irrational number, well, you know where that joke’s going.

Mrs Olsen doesn’t seem impressed by Caulfield’s discovery, although in fairness we don’t see the actual aftermath. Sometimes you notice stuff like that and it is only good for a “huh”. But sometimes you get into some good recreational mathematics. It’s the sort of thinking that leads to discovering magic squares and amicable numbers and palindromic prime numbers and the like. Do they lead to important mathematics? Some of them do. Or at least into interesting mathematics. Sometimes they’re just passingly amusing.

Greg Curfman’s Meg rerun for the 12th quotes Einstein’s famous equation as the sort of thing you could just expect would be asked in school. I’m not sure I ever had a class where knowing E = mc² was the right answer to a question, though. Maybe once I got into physics, since we did spend a bit of time on special relativity and E = mc² turns up naturally there. Maybe I’ve been out of elementary school too long to remember.

Mark Tatulli’s Heart of the City for the 4th has Heart and Dean talking about postapocalyptic society. Heart doubts that postapocalyptic society would need people like him, “with long-division experience”. Ah, but, grant the loss of computing devices. People will still need to compute. Before the days of electrical, and practical mechanical, computing, people who could compute accurately were in demand. The example mathematicians learn to remember is Zacharias Dase, a German mental calculator. He was able to do astounding work, all in his head. But he didn’t earn so much money as pro-mental-arithmetic propaganda would like us to believe. And why work entirely in your head if you don’t need to?

Larry Wright’s Motley Classics rerun for the 5th is a word problem joke. And it’s mixed with labor relations humor for the sake of … I’m not quite sure, actually. Anyway I would have sworn I’d featured this strip in a long-ago Reading The Comics post, but I don’t see it on a casual search. So, go figure.

## Reading the Comics, September 10, 2016: Finishing The First Week Of School Edition

I understand in places in the United States last week wasn’t the first week of school. It was the second or third or even worse. These places are crazy, in that they do things differently from the way my elementary school did it. So, now, here’s the other half of last week’s comics.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 8th is a little freak-out about existence. Mathematicians rely on the word “exists”. We suppose things to exist. We draw conclusions about other things that do exist or do not exist. And yet the things that “exist” are not things that exist the way rocks and raccoons do. It’s a bit heady to realize nobody can point to, or trap in a box, or even draw a line around “3”. We can at best talk about stuff that expresses some property of three-ness. We talk about things like “triangles” and we even draw and use representations of them. But those drawings we make aren’t Triangles, the thing mathematicians mean by the concept. They’re at best cartoons, little training wheels to help us get the idea down. Here I regret that as an undergraduate I didn’t take philosophy courses that challenged me. It seems certain to me mathematicians are using some notion of the Platonic Ideal when we speak of things “existing”. But what does that mean, to a mathematician, to a philosopher, and to the person who needs an attractive tile pattern on the floor?

Cathy Thorne’s Everyday People Cartoons for the 9th is about another bit of the philosophy of mathematics. What are the chances of something that did happen? What does it mean to talk about the chance of something happening? When introducing probability mathematicians like to set it up as “imagine this experiment, which has a bunch of possible outcomes. One of them will happen and the other possibilities will not” and we go on to define a probability from that. That seems reasonable, perhaps because we’re accepting ignorance. We may know (say) that a coin toss is, in principle, perfectly deterministic. If we knew exactly how the coin is made. If we knew exactly how it is tossed. If we knew exactly how the air currents would move during its fall. If we knew exactly what the surfaces it might bounce off before coming to rest were like. Instead we pretend all this knowable stuff is unknowable, and call the result unpredictability.

But what about events in the past? We can imagine them coming out differently. But the imagination crashes hard when we try to say why they would. If we gave the exact same coin the exact same toss in the exact same circumstances how could it land on anything but the exact same face? In which case how can there have been any outcome other than what did happen? Yes, I know, someone wants to rush in and say “Quantum!” Say back to that person, “waveform collapse” and wait for a clear explanation of what exactly that is. There are things we understand poorly about the transition between the future and the past. The language of probability is a reminder of this.

Hilary Price’s Rhymes With Orange for the 10th uses the classic story-problem setup of a train leaving the station. It does make me wonder how far back this story setup goes, and what they did before trains were common. Horse-drawn carriages leaving stations, I suppose, or maybe ships at sea. I quite like the teaser joke in the first panel more.

Tom Toles’s Randolph Itch, 2 am rerun for the 10th is an Einstein The Genius comic. It felt familiar to me, but I don’t seem to have included it in previous Reading The Comics posts. Perhaps I noticed it some week that I figured a mere appearance of Einstein didn’t rate inclusion. Randolph certainly fell asleep while reading about mathematics, though.

It’s popular to tell tales of Einstein not being a very good student, and of not being that good in mathematics. It’s easy to see why. We’d all like to feel a little more like a superlative mind such as that. And Einstein worked hard to develop an image of being accessible and personable. It fits with the charming absent-minded professor image everybody but forgetful professors loves. It feels dramatically right that Einstein should struggle with arithmetic like so many of us do. It’s nonsense, though. When Einstein struggled with mathematics, it was on the edge of known mathematics. He needed advice and consultations for the non-Euclidean geometries core to general relativity? Who doesn’t? I can barely make my way through the basic notation.

Anyway, it’s pleasant to see Toles holding up Einstein for his amazing mathematical prowess. It was a true thing.

## Reading the Comics, August 27, 2016: Calm Before The Term Edition

Here in the United States schools are just lurching back into the mode where they have students come in and do stuff all day. Perhaps this is why it was a routine week. Comic Strip Master Command wants to save up a bunch of story problems for us. But here’s what the last seven days sent into my attention.

Jeff Harris’s Shortcuts educational feature for the 21st is about algebra. It’s got a fair enough blend of historical trivia and definitions and examples and jokes. I don’t remember running across the “number cruncher” joke before.

Mark Anderson’s Andertoons for the 23rd is your typical student-in-lecture joke. But I do sympathize with students not understanding when a symbol gets used for different meanings. It throws everyone. But sometimes the things important to note clearly in one section are different from the needs in another section. No amount of warning will clear things up for everybody, but we try anyway.

Tom Thaves’s Frank and Ernest for the 23rd tells a joke about collapsing wave functions, which is why you never see this comic in a newspaper but always see it on a physics teacher’s door. This is properly physics, specifically quantum mechanics. But it has mathematical import. The most practical model of quantum mechanics describes what state a system is in by something called a wave function. And we can turn this wave function into a probability distribution, which describes how likely the system is to be in each of its possible states. “Collapsing” the wave function is a somewhat mysterious and controversial practice. It comes about because if we know nothing about a system then it may have one of many possible values. If we observe, say, the position of something though, then we have one possible value. The wave functions before and after the observation are different. We call it collapsing, reflecting how a universe of possibilities collapsed into a mere fact. But it’s hard to find an explanation for what that is that’s philosophically and physically satisfying. This problem leads us to Schrödinger’s Cat, and to other challenges to our sense of how the world could make sense. So, if you want to make your mark here’s a good problem for you. It’s not going to be easy.
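The mathematical step from wave function to probability distribution is just squaring the magnitude of each amplitude and normalizing. A toy sketch with made-up amplitudes of my own, not any physical system:

```python
# Toy two-state system with invented complex amplitudes.
amplitudes = [3 + 4j, 1 - 2j]

# The Born rule: probabilities are squared magnitudes, normalized.
norm = sum(abs(a) ** 2 for a in amplitudes)
probabilities = [abs(a) ** 2 / norm for a in amplitudes]

print(probabilities)        # roughly [0.833, 0.167]
print(sum(probabilities))   # 1.0, as a probability distribution must

# "Collapse": observing the system in state 0 replaces the whole
# distribution of possibilities with certainty about one fact.
collapsed = [1.0, 0.0]
```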

John Allison’s Bad Machinery for the 24th tosses off a panel full of mathematics symbols as proof of hard thinking. In other routine references John Deering’s Strange Brew for the 26th is just some talk about how hard fractions are.

While it’s outside the proper bounds of mathematics talk, Tom Toles’s Randolph Itch, 2 am for the 23rd is a delight. My favorite strip of this bunch. Should go on the syllabus.

## Bourbaki and How To Write Numbers, A Trifle

So my attempt at keeping the Reading the Comics posts to Sunday has crashed and burned again. This time for a good reason. As you might have read between the lines on my humor blog, I spent the past week on holiday and just didn’t have time to write stuff. I barely had time to read my comics. I’ll get around to it this week.

In the meanwhile then I’d like to point people to the MathsByAGirl blog. The blog recently had an essay on Nicolas Bourbaki, who’s among the most famous mathematicians of the 20th century. Bourbaki is also someone with a tremendous and controversial legacy, one that I expect to touch on as I catch up on last week’s comics. If you don’t know the secret of Bourbaki then do go over and learn it. If you do, well, go over and read anyway. The author’s wondering whether to write more about Bourbaki’s mathematics, and as I’m all in favor of that, more people should say so.

And as I promised a trifle, let me point to something from my own humor blog. How To Write Out Numbers is an older trifle based on everyone’s love for copy-editing standards. I had forgotten I wrote it before digging it up for a week of self-glorifying posts last week. I hope folks around here like it too.

Oh, one more thing: it’s the anniversary of the publishing of an admirable but incorrect proof of the four-color map theorem. It would take another century to get right. As I said Thursday, the five-color map theorem is easy. It’s that last color that’s hard.

Vacations are grand but there is always that comfortable day or two once you’re back home.

## Reading the Comics, July 2, 2016: Ripley’s Edition

As I said Sunday, there were more mathematics-mentioning comic strips than I expected last week. So do please read this little one and consider it an extra. The best stuff to talk about is from Ripley’s Believe It Or Not, which may or may not count as a comic strip. Depends how you view these things.

Randy Glasbergen’s Glasbergen Cartoons for the 29th just uses arithmetic as the sort of problem it’s easiest to hide in bed from. We’ve all been there. And the problem doesn’t really enter into the joke at all. It’s just easy to draw.

John Graziano’s Ripley’s Believe It Or Not on the 29th shows off a bit of real trivia: that 599 is the smallest number whose digits add up to 23. And yet it doesn’t say what the largest number is. That’s actually fair enough. There isn’t one. If you had a largest number whose digits add up to 23, you could get a bigger one by multiplying it by ten: 5990, for example. Or otherwise add a zero somewhere in the digits: 5099; or 50,909; or 50,909,000. If we ignore zeroes, though, there are finitely many different ways to write a number with digits that add up to 23. This is almost an example of a partition problem. Partitions are about how to break up a set of things into groups of one or more. But in a partition proper we don’t really care about the order: 5-9-9 is as good as 9-9-5. But we can see some minor differences between 599 and 995 as numbers. I imagine there must be a name for the sort of partition problem in which order matters, but I don’t know what it is. I’ll take nominations if someone’s heard of one.
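Both bits of trivia are easy to check by brute force, and the ordered count is a small recursion. A sketch of my own (and, I believe, partitions where order matters are usually called compositions):

```python
from functools import lru_cache

def smallest_with_digit_sum(s):
    """Brute-force search for the least positive integer whose digits sum to s."""
    n = 1
    while sum(int(d) for d in str(n)) != s:
        n += 1
    return n

@lru_cache(maxsize=None)
def count_digit_strings(s):
    """Count the ways to write s as an ordered sum of the digits 1 through 9:
    compositions of s with parts of size at most nine."""
    if s == 0:
        return 1
    return sum(count_digit_strings(s - d) for d in range(1, 10) if d <= s)

print(smallest_with_digit_sum(23))   # 599
print(count_digit_strings(23))       # finite, unlike the zero-allowing count
```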

Graziano’s Ripley’s sneaks back in here the next day, too, with a bit of trivia almost as baffling as the proper credit for the strip. I don’t know what Graziano is getting at with the claim that the Ancient Greeks didn’t consider “one” to be a number. None of the commenters have an idea either and my exhaustive minutes of researching haven’t worked it out.

But I wouldn’t blame the Ancient Greeks for finding something strange about 1. We find something strange about it too. Most notably, of all the counting numbers 1 falls outside the classifications of “prime” and “composite”. It fits into its own special category, “unity”. It divides evenly into every whole number; among the counting numbers it’s the only one that does. It’s the multiplicative identity, and it’s the numerator in the set of unit fractions — one-half and one-third and one-tenth and all that — the first fractions that people understand. There are good reasons to find something exceptional about 1.

dro-mo for the 30th somehow missed both Pi Day and Tau Day. I imagine it’s a rerun that the artist wasn’t watching too closely.

Aaron McGruder’s The Boondocks rerun for the 2nd concludes that storyline I mentioned on Sunday about Riley not seeing the point of learning subtraction. It’s always the motivation problem.

## A Leap Day 2016 Mathematics A To Z: Uncountable

I’m drawing closer to the end of the alphabet. While I have got choices for ‘V’ and ‘W’ set, I’ll admit that I’m still looking for something that inspires me in the last couple letters. Such inspiration might come from anywhere. HowardAt58, of that WordPress blog, gave me the notion for today’s entry.

## Uncountable.

What are we doing when we count things?

Maybe nothing. We might be counting just to be doing something. Or we might be counting because we want to do nothing. Counting can be a good way into a restful state. Fair enough. Just because we do something doesn’t mean we care about the result.

Suppose we do care about the result of our counting. Then what is it we do when we count? The mechanism is straightforward enough. We pick out things and say, or imagine saying, “one, two, three, four,” and so on. Or we at least imagine the numbers along with the things being numbered. When we run out of things to count, we take whatever the last number was. That’s how many of the things there were. Why are there eight light bulbs in the chandelier fixture above the dining room table? Because there are not nine.

That’s how lay people count anyway. Mathematicians would naturally have a more sophisticated view of the business. A much more powerful counting scheme. Concepts in counting that go far beyond what you might work out in first grade.

Yeah, so that’s what most of us would figure. Things don’t get much more sophisticated than that, though. This probably is because the idea of counting is tied to the theory of sets. And the theory of sets grew, in part, to come up with a logically solid base for arithmetic. So many of the key ideas of set theory are so straightforward they hardly seem to need explaining.

We build the idea of “countable” off of the nice, familiar numbers 1, 2, 3, and so on. That set’s called the counting numbers. They’re the numbers that everybody seems to recognize as numbers. Not just people. Even animals seem to understand at least the first couple of counting numbers. Sometimes these are called the natural numbers.

Take a set of things we want to study. We’re interested in whether we can match the things in that set one-to-one with the things in the counting numbers. We don’t have to use all the counting numbers. But we can’t use the same counting number twice. If we’ve matched one chandelier light bulb with the number ‘4’, we mustn’t match a different bulb with the same number. And if we’ve matched one bulb with ‘4’, we mustn’t match that same bulb with a second number at the same time.

If we can do this, then our set’s countable. If we really wanted, we could pick the counting numbers in order, starting from 1, and match up all the things with counting numbers. If we run out of things, then we have a finitely large set. The last number we used to match anything up with anything is the size, or in the jargon, the cardinality of our set. We might not care about the cardinality, just whether the set is finite. Then we can pick counting numbers as we like in no particular order. Just use whatever’s convenient.

But what if we don’t run out of things? And it’s possible we won’t. Suppose our set is the negative whole numbers: -1, -2, -3, -4, -5, and so on. We can match each of those to a counting number many ways. We always can. But there’s an easy way. Match -1 to 1, match -2 to 2, match -3 to 3, and so on. Why work harder than that? We aren’t going to run out of negative whole numbers. And we aren’t going to find any we can’t match with some counting number. And we aren’t going to have to match two different negative numbers to the same counting number. So what we have here is an infinitely large, yet still countable, set.

So a set of things can be countable and finite. It can be countable and infinite. What else is there to be?

There must be something. It’d be peculiar to have a classification that everything was in, after all. At least it would be peculiar except for people studying what it means to exist or to not exist. And most of those people are in the philosophy department, where we’re scared of visiting. So we must mean there’s some such thing as an uncountable set.

The idea means just what you’d guess if you didn’t know enough mathematics to be tricky. Something is uncountable if it can’t be counted. It can’t be counted if there’s no way to match it up, one thing-to-one thing, with the counting numbers. We have to somehow run out of counting numbers.

It’s not obvious that we can do that. Some promising approaches don’t work. For example, the set of all the integers — 1, 2, 3, 4, 5, and all that, and 0, and the negative numbers -1, -2, -3, -4, -5, and so on — is still countable. Match the counting number 1 to 0. Match the counting number 2 to 1. Match the counting number 3 to -1. Match 4 to 2. Match 5 to -2. Match 6 to 3. Match 7 to -3. And so on.
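That zigzag matching is simple enough to act out in a few lines of code. Here’s a sketch in Python; the function name is my own invention:

```python
def integer_matched_to(n):
    """Match counting number n (1, 2, 3, ...) to an integer,
    following the zigzag scheme described above."""
    if n == 1:
        return 0
    if n % 2 == 0:          # even counting numbers go to the positives
        return n // 2
    return -(n // 2)        # odd counting numbers past 1 go to the negatives

# The first few matches: 1 to 0, 2 to 1, 3 to -1, 4 to 2, and so on.
print([integer_matched_to(n) for n in range(1, 8)])
```

Every integer turns up exactly once, and no counting number gets used twice, which is all that “countable” asks for.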

Even ordered pairs of the counting numbers don’t do it. We can match the counting number 1 to the pair (1, 1). Match the counting number 2 to the pair (2, 1). Match the counting number 3 to (1, 2). Match 4 to (3, 1). Match 5 to (2, 2). Match 6 to (1, 3). Match 7 to (4, 1). Match 8 to (3, 2). And so on. We can achieve similar staggering results with ordered triplets, quadruplets, and more. Ordered pairs of integers, positive and negative? Longer to do, yes, but just as doable.
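That walk through the pairs goes diagonal by diagonal: first the pair whose coordinates sum to 2, then the pairs summing to 3, then 4, and so on. A Python sketch of it, with names of my own choosing:

```python
from itertools import islice

def pairs_in_counting_order():
    """Walk the ordered pairs of counting numbers diagonal by diagonal,
    matching the scheme described above."""
    total = 2                               # a + b is constant along a diagonal
    while True:
        for a in range(total - 1, 0, -1):   # first coordinate counts down
            yield (a, total - a)
        total += 1

# First eight pairs: (1,1), (2,1), (1,2), (3,1), (2,2), (1,3), (4,1), (3,2)
print(list(islice(pairs_in_counting_order(), 8)))
```

Each diagonal is finite, so every pair gets reached after finitely many steps. That’s the whole trick.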

So are there any uncountable things?

Sure. Wouldn’t be here if there weren’t. For example: think about the set that’s all the ways to pick things from a set. I sense your confusion. Let me give you an example. Suppose we have the set of three things. They’re the numbers 1, 2, and 3. We can make a bunch of sets out of things from this set. We can make the set that just has ‘1’ in it. We can make the set that just has ‘2’ in it. Or the set that just has ‘3’ in it. We can also make the set that has just ‘1’ and ‘2’ in it. Or the set that just has ‘2’ and ‘3’ in it. Or the set that just has ‘3’ and ‘1’ in it. Or the set that has all of ‘1’, ‘2’, and ‘3’ in it. And we can make the set that hasn’t got any of these in it. (Yes, that does too count as a set.)

So from a set of three things, we were able to make a collection of eight sets. If we had a set of four things, we’d be able to make a collection of sixteen sets. With five things to start from, we’d be able to make a collection of thirty-two sets. This collection of sets we call the “power set” of our original set, and if there’s one thing we can say about it, it’s that it’s bigger than the set we start from.
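If you want to see such a collection for yourself, a few lines of Python will build it. The helper name here is my own:

```python
from itertools import combinations

def power_set(things):
    """All the sets we can make from `things`, including the empty set."""
    items = list(things)
    subsets = []
    for size in range(len(items) + 1):
        subsets.extend(set(c) for c in combinations(items, size))
    return subsets

collection = power_set({1, 2, 3})
print(len(collection))   # 8, which is 2 raised to the 3rd power
```

Try it with four or five starting things and you get sixteen and thirty-two sets, doubling every time, which is why the power set always outgrows the set it came from.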

The power set for a finite set, well, that’ll be much bigger. But it’ll still be finite. Still be countable. What about the power set for an infinitely large set?

And the power set of the counting numbers, the collection of all the ways you can make a set of counting numbers, is really big. Is it uncountably big?

Let’s step back. Remember when I said mathematicians don’t get “much more” sophisticated than matching up things to the counting numbers? Here’s a little bit of that sophistication. We don’t have to match stuff up to counting numbers if we like. We can match the things in one set to the things in another set. If it’s possible to match them up one-to-one, with nothing missing in either set, then the two sets have to be the same size. The same cardinality, in the jargon.

So. The set of the numbers 1, 2, 3, has to have a smaller cardinality than its power set. Want to prove it? Do this exactly the way you imagine. You run out of things in the original set before you run out of things in the power set, so there’s no making a one-to-one matchup between the two.

With the infinitely large yet countable set of the counting numbers … well, the same result holds. It’s harder to prove. You have to show that there’s no possible way to match the infinitely many things in the counting numbers to the infinitely many things in the power set of the counting numbers. (The easiest way to do this is by contradiction. Imagine that you have made such a matchup, pairing everything in your power set to everything in the counting numbers. Then you go through your matchup and put together a collection that isn’t accounted for. Whoops! So you must not have matched everything up in the first place. Why not? Because you can’t.)
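The diagonal construction in that parenthetical can be acted out, at least against a finite attempt at a matchup. A Python sketch, with a made-up example matching:

```python
def unmatched_collection(matching):
    """Given an attempted matching of counting numbers to sets of counting
    numbers, build the 'diagonal' set the matching misses: the numbers
    that are not members of their own matched set."""
    return {n for n, subset in matching.items() if n not in subset}

# A hypothetical attempt at matching counting numbers to sets of them:
attempt = {1: {2, 3}, 2: {2}, 3: set(), 4: {1, 2, 3, 4}}

diagonal = unmatched_collection(attempt)
print(diagonal)                          # {1, 3}
print(diagonal in attempt.values())      # False: nothing was matched to it
```

The diagonal set disagrees with each matched set about at least one member, namely the number matched to it, so it can’t have been matched to anything. That’s the contradiction.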

But the result holds. The power set of the counting numbers is some other set. It’s infinitely large, yes. And it’s so infinitely large that it’s somehow bigger than the counting numbers. It is uncountable.

There’s more than one uncountably large set. Of course there are. We even know of some of them. For example, there’s the set of real numbers. Three-quarters of my readers have been sitting anxiously for the past eight paragraphs wondering if I’d ever get to them. There’s good reason for that. Everybody feels like they know what the real numbers are. And the proof that the real numbers are a larger set than the counting numbers is easy to understand. An eight-year-old could master it. You can find that proof well-explained within the first ten posts of pretty much every mathematics blog other than this one. (I was saving the subject. Then I finally decided I couldn’t explain it any better than everyone else has done.)

Are the real numbers the same size, the same cardinality, as the power set of the counting numbers?

Sure, they are.

No, they’re not.

Whichever you like. This is one of the many surprising mathematical results of the surprising 20th century. Starting from the common set of axioms about set theory, it’s undecidable whether the set of real numbers is as big as the power set of the counting numbers. You can assume that it is. This is known as the Continuum Hypothesis. And you can do fine mathematical work with it. You can assume that it is not. This is known as the … uh … Rejecting the Continuum Hypothesis. And you can do fine mathematical work with that. What’s right depends on what work you want to do. Either is consistent with the starting axioms. You are free to choose either, or if you like, neither.

My understanding is that most set theory finds it more productive to suppose that they’re not the same size. I don’t know why this is. I know enough set theory to lead you to this point, but not past it.

But that the question can exist tells you something fascinating. You can take the power set of the power set of the counting numbers. And this gives you another, even vaster, uncountably large set. As enormous as the collection of all the ways to pick things out of the counting numbers is, this power set of the power set is even vaster.

We’re not done. There’s the power set of the power set of the power set of the counting numbers. And the power set of that. Much as geology teaches us to see Deep Time, and astronomy Deep Space, so power sets teach us to see Deep … something. Deep Infinity, perhaps.

## Reading the Comics, January 8, 2016: Rerun-Heavy Edition

I couldn’t think of what connective theme there might be to the mathematically-themed comic strips of the last couple days. It finally struck me: there’s a lot of reruns in this. That’ll do. Most of them are reruns from before I started writing about comics so much in these parts.

Bill Watterson’s Calvin and Hobbes for the 5th of January (a rerun, of course, from the 7th of January, 1986) is a kid-resisting-the-test joke. The particular form is trying to claim a religious exemption from mathematics tests. I sometimes see attempts to claim that mathematics is a kind of religion since, after all, you have to believe it’s true. I’ll grant that you do have to assume some things without proof. Those are the rules of logical inference, and the axioms of the field, particularly. But I can’t make myself buy a definition of “religion” that’s just “something you believe”.

But there are religious overtones to a lot of mathematics. The field promises knowable universal truths, things that are true regardless of who and in what context might know them. And the study of mathematical infinity seems to inspire thoughts of God. Amir D Aczel’s The Mystery Of The Aleph: Mathematics, The Kabbala, and the Search for Infinity is a good read on the topic. Addition is still not a kind of religion, though.

Bud Grace’s The Piranha Club for the 6th of January uses the ability to do arithmetic as proof of intelligence. It’s a kind of intelligence, sure. There’s fun to be had in working out a square root in your head, or on paper. But there’s really no need for it now that we’ve got calculator technology, except for what it teaches you about how to compute.

Ruben Bolling’s Super-Fun-Pak Comix for the 6th of January is an installment of A Voice From Another Dimension. It’s just what the title suggests, and of course it would have to be a three-panel comic. The idea that creatures could live in more, or fewer, dimensions of space is a captivating one. It’s challenging to figure how it could work, though. Spaces of one or two dimensions don’t seem like they would allow biochemistry to work. And, as I understand it, chemistry itself seems unlikely to work right in four or more dimensions of space too. But it’s still fun to think about.

David L Hoyt and Jeff Knurek’s Jumble for the 7th of January is a counting-number joke. It does encourage asking whether numbers are created or discovered, which is a tough question. Counting numbers like “four” are so familiar and so apparently universal that they don’t seem to be constructs. (Even if they are, animals have an understanding of at least small counting numbers like these.) But if “four” is somehow not a human construct, then what about “4,000,000,000,000,000,000,000,000,000,000,000”, a number so large it’s hard to think of anything we have that many of to visualize? And even if that’s not a construct either, “one fourth” seems a bit different from it, and “four i” — the number which, squared, gives us negative 16 — seems qualitatively different again. But if they’re constructs, then why do they correspond so well to things we can see in the real world?

Greg Curfman’s Meg Classics for the 7th of January originally ran the 19th of September, 1997. It’s about a kid distractingly interested in multiplication. You get these sometimes. My natural instinct is to put the bigger number first and the smaller number second in a multiplication. “2 times 27” makes me feel nervous in a way “27 times 2” never will.

Hector D Cantu and Carlos Castellanos’s Baldo for the 8th of January is a rerun from 2011. It’s an old arithmetic joke. I wouldn’t be surprised if George Burns and Gracie Allen did it. (Well, a little surprised. Gracie Allen didn’t tend to play quite that kind of dumb. But everybody tells some jokes that are a little out of character.)

## Reading the Comics, December 30, 2015: Seeing Out The Year Edition

There’s just enough comic strips with mathematical themes that I feel comfortable doing a last Reading the Comics post for 2015. And as maybe fits that slow week between Christmas and New Year’s, there’s not a lot of deep stuff to write about. But there is a Jumble puzzle.

Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips gives us someone so wrapped up in measuring data as to not notice the obvious. The obvious, though, isn’t always right. This is why statistics is a deep and useful field. It’s why measurement is a powerful tool. Careful measurement and statistical tools give us ways to not fool ourselves. But it takes a lot of sampling, a lot of study, to give those tools power. It can be easy to get lost in the problems of gathering data. Plus numbers have this hypnotic power over human minds. I understand Lard’s problem.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 27th of December messes with a kid’s head about the way we know 1 + 1 equals 2. The classic Principia Mathematica construction builds it out of pure logic. We come up with an idea that we call “one”, and another that we call “plus one”, and an idea we call “two”. If we don’t do anything weird with “equals”, then it follows that “one plus one equals two” must be true. But does the logic mean anything to the real world? Or might we be setting up a game with no relation to anything observable? The punchy way I learned this question was “one cup of popcorn added to one cup of water doesn’t give you two cups of soggy popcorn”. So why should the logical rules that say “one plus one equals two” tell us anything we might want to know about how many apples one has?

David L Hoyt and Jeff Knurek’s Jumble for the 28th of December features a mathematics teacher. That’s enough to include here. (You might have an easier time getting the third and fourth words if you reason what the surprise-answer word must be. You can use that to reverse-engineer what letters have to be in the circles.)

Richard Thompson’s Richard’s Poor Almanac for the 28th of December repeats the Platonic Fir Christmas Tree joke. It’s in color this time. Does the color add to the perfection of the tree, or take away from it? I don’t know how to judge.

Hilary Price’s Rhymes With Orange for the 29th of December gives its panel over to Rina Piccolo. Price often has guest-cartoonist weeks, which is a generous use of her space. Piccolo already has one and a sixth strips — she’s one of the Six Chix cartoonists, and also draws the charming Tina’s Groove — but what the heck. Anyway, this is a comic strip about the butterfly effect. That’s the strangeness by which a deterministic system can still be unpredictable. This counter-intuitive conclusion dates back to the 1890s, when Henri Poincaré was trying to solve the big planetary mechanics question. That question is: is the solar system stable? Is the Earth going to remain in about its present orbit indefinitely far into the future? Or might the accumulated perturbations from Jupiter and the lesser planets someday pitch it out of the solar system? Or, less likely, into the Sun? And the sad truth is, the best we can say is we can’t tell.

In Brian Anderson’s Dog Eat Doug for the 30th of December, Sophie ponders some deep questions. Most of them are purely philosophical questions and outside my competence. “What are numbers?” is also a philosophical question, but it feels like something a mathematician ought to have a position on. I’m not sure I can offer a good one, though. Numbers seem to me to be these things which we imagine. They have some properties, and they obey certain rules when we combine them with other numbers. The most familiar of these numbers and properties correspond with some intuition many animals have about discrete objects. Many times over we’ve expanded the idea of what kinds of things might be numbers without losing the sense of how numbers can interact, somehow. And those expansions have generally been useful. They strangely match things we would like to know about the real world. And we can discover truths about these numbers and these relations that don’t seem to be obviously built into the definitions. It’s almost as if the numbers were real objects with the capacity to surprise and to hold secrets.

Why should that be? The lazy answer is that if we came up with a construct that didn’t tell us anything interesting about the real world, we wouldn’t bother studying it. A truly irrelevant concept would be a couple forgotten papers tucked away in an unread journal. But that is missing the point. It’s like answering “why is there something rather than nothing” with “because if there were nothing we wouldn’t be here to ask the question”. That doesn’t satisfy. Why should it be possible to take some ideas about quantity that ravens, raccoons, and chimpanzees have, then abstract some concepts like “counting” and “addition” and “multiplication” from that, and then modify those concepts, and finally have the modification be anything we can see reflected in the real world? There is a mystery here. I can’t fault Sophie for not having an answer.

## The Set Tour, Part 6: One Big One Plus Some Rubble

I have a couple of sets for this installment of the Set Tour. It’s still an unusual installment because only one of the sets is that important for my purposes here. The rest I mention because they appear a lot, even if they aren’t much used in these contexts.

## I, or J, or maybe Z

The important set here is the integers. You know the integers: they’re the numbers everyone knows. They’re the numbers we count with. They’re 1 and 2 and 3 and a hundred million billion. As we get older we come to accept 0 as an integer, and even the negative integers like “negative 12” and “minus 40” and all that. The integers might be the easiest mathematical construct to know. The positive integers, anyway. The negative ones are still a little suspicious.

The set of integers has several shorthand names. I is a popular and common one. As with the real-valued numbers R and the complex-valued numbers C it gets written by hand, and typically typeset, with a double vertical stroke. And we’ll put horizontal serifs on the top and bottom of the symbol. That’s a concession to readability. You see the same effect in comic strip lettering. A capital “I” in the middle of a word will often be written without serifs, while the word by itself needs the extra visual bulk.

The next popular symbol is J, again with a double vertical stroke. This gets used if we want to reserve “I”, or the word “I”, for some other purpose. J probably gets used because it’s so very close to I, and it’s only quite recently (in historic terms) that they’ve even been seen as different letters.

The symbol that seems to come out of nowhere is Z. It comes less from nowhere than it does from German. The symbol derives from “Zahl”, meaning “number”. It seems to have got into mathematics by way of Nicolas Bourbaki, the renowned imaginary French mathematician. The Z gets written with a double diagonal stroke.

Personally, I like Z most of this set, but on trivial grounds. It’s a more fun letter to write, especially since I write it with the middle horizontal stroke. I’ve got no good cultural or historical reason for this. I just picked it up as a kid and never set it back down.

In these Set Tour essays I’m trying to write about sets that get used often as domains and ranges for functions. The integers get used a fair bit, although not nearly as often as real numbers do. The integers are a natural way to organize sequences of numbers. If the record of a week’s temperatures (in Fahrenheit) are “58, 45, 49, 54, 58, 60, 64”, there’s an almost compelling temperature function here. f(1) = 58, f(2) = 45, f(3) = 49, f(4) = 54, f(5) = 58, f(6) = 60, f(7) = 64. This is a function that has as its domain the integers. It happens that the range here is also integers, although you might be able to imagine a day when the temperature reading was 54.5.
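If you like seeing that domain made explicit, the temperature function can be set down in a few lines of code. A Python sketch, where the day-numbering is my own labeling for the example:

```python
# The week's temperature readings (Fahrenheit), as a function whose
# domain is the integers 1 through 7.
readings = {1: 58, 2: 45, 3: 49, 4: 54, 5: 58, 6: 60, 7: 64}

def f(n):
    """Temperature on day n of the week."""
    return readings[n]

print(f(4))   # 54
```

The dictionary keys are the domain laid out in plain sight, which is exactly the thing that tends to turn invisible when we write the same data as f1, f2, f3, and so on.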

Sequences turn up a lot. We are almost required to measure things we are interested in in discrete samples. So mathematical work with sequences uses integers as the domain almost by default. The use of integers as a domain gets done so often that it often becomes invisible, though. Someone studying my temperature data above might write the data as f1, f2, f3, and so on. One might reasonably never even notice there’s a function there, or a domain.

And that’s fine. A tool can be so useful it disappears. Attend a play; the stage is in light and the audience in darkness. The roles the light and darkness play disappear unless the director chooses to draw attention to this choice.

And to be honest, integers are a lousy domain for functions. It’s achingly hard to prove things for functions defined just on the integers. The easiest way to do anything useful is typically to find an equivalent problem for a related function that’s got the real numbers as a domain. Then show the answer for that gives you your best-possible answer for the original question.

If all we want are the positive integers, we put a little superscript + to our symbol: I+ or J+ or Z+. That’s a popular choice if we’re using the integers as an index. If we just want the negative integers that’s a little weird, but, change the plus sign to a minus: I-, or J-, or Z-.

Now for some trouble.

Sometimes we want the positive numbers and zero, or in the lingo, the “nonnegative numbers”. Good luck with that. Mathematicians haven’t quite settled on what this should be called, or abbreviated. The “Natural numbers” is a common name for the numbers 0, 1, 2, 3, 4, and so on, and this makes perfect sense and gets abbreviated N. You can double up the left vertical stroke, or the diagonal stroke, as you like, and that will be understood by everybody.

That is, everybody except the people who figure “natural numbers” should be 1, 2, 3, 4, and so on, and that zero has no place in this set. After all, every human culture counts with 1 and 2 and 3, and for that matter crows and raccoons understand the concept of “four”. Yet it took thousands of years for anyone to think of “zero”, so how natural could that be?

So we might resort to speaking of the “whole numbers” instead. More good luck with that. Besides leaving open the question of whether zero should be considered “whole” there’s the linguistic problem. “Whole number” carries, for many, the implication of a number that is an integer with no fractional part. We already have the word “integer” for that, yes. But the fact people will talk about rounding off to a whole number suggests the phrase “whole number” serves some role that the word “integer” doesn’t. Still, W is sitting around not doing anything useful.

Then there’s “counting numbers”. I would be willing to endorse this as a term for the integers 0, 1, 2, 3, 4, and so on, except. Have you ever met anybody who starts counting from zero? Yes, programmers for some — not all! — computer languages. You know which computer languages. They’re the languages which baffle new students because why on earth would we start counting things from zero all of a sudden? And the obvious single-letter abbreviation C is no good because we need that for complex numbers, a set that people actually use for domains a lot.

There is a good side to this, if you aren’t willing to sit out the 150 years or so mathematicians are going to need to sort this all out. You can set out a symbol that makes sense to you, early on in your writing, and stick with it. If you find you don’t like it, you can switch to something else in your next paper and nobody will protest. If you figure out a good one, people may imitate you. If you figure out a really good one, people will change it just a tiny bit so that their usage drives you crazy. Life is like that.

Eric Weisstein’s Mathworld recommends using Z* for the nonnegative integers. I don’t happen to care for that. I usually associate superscript * symbols with some operations involving complex-valued numbers and with the duals of sets, neither of which is in play here. But it’s not like he’s wrong and I’m right. If I were forced to pick a symbol right now I’d probably give Z0+. And for the nonpositive integers — the negative integers and zero — Z0- presents itself. I fully understand there are people who would be driven stark raving mad by this. Maybe you have a better one. I’d believe that.

Let me close with something non-controversial.

These are some sets that are too important to go unmentioned. But they don’t get used much in the domain-and-range role I’ve been using as basis for these essays. They are, in the terrain of these essays, some rubble.

You know the rational numbers? They’re the things you can write as fractions: 1/2, 5/13, 32/7, -6/7, 0 (think about it). This is a quite useful set, although it doesn’t get used much for the domain or range of functions, at least not in the fields of mathematics I see. It gets abbreviated as Q, though. There’s an extra vertical stroke on the left side of the loop, just as a vertical stroke gets added to the C for complex-valued numbers. Why Q? Well, “R” is already spoken for, as we need it for the real numbers. The key here is that every rational number can be written as the quotient of one integer divided by another. So, this is the set of Quotients. This abbreviation we get thanks to Bourbaki, the same folks who gave us Z for integers. If it strikes you that the imaginary French mathematician Bourbaki used a lot of German words, all I can say is I think that might have been part of the fun of the Bourbaki project. (Well, and German mathematicians gave us many breakthroughs in the understanding of sets in the late 19th and early 20th centuries. We speak with their language because they spoke so well.)

If you’re comfortable with real numbers and with rational numbers, you know of irrational numbers. These are (most) square roots, and pi and e, and the golden ratio and a lot of cosines of angles. Strangely, there really isn’t any common shorthand name or common notation for the irrational numbers. If we need to talk about them, we have the shorthand “R \ Q”. This means “the real numbers except for the rational numbers”. Or we have the shorthand “Qc”. This means “everything except the rational numbers”. That “everything” carries the implication “everything in the real numbers”. The “c” in the superscript stands for “complement”, everything outside the set we’re talking about. These are ungainly, yes. And it’s a bit odd considering that most real numbers are irrational numbers. The rational numbers are a most ineffable cloud of dust in the atmosphere of the real numbers.

But, mostly, we don’t need to talk about functions that have an irrational-number domain. We can do our work with a real-number domain instead. So we leave that set with a clumsy symbol. If there’s ever a gold rush of fruitful mathematics to be done with functions on irrational domains then we’ll put in some better notation. Until then, there are better jobs for our letters to do.

## The Set Tour, Part 3: R^n

After talking about the real numbers last time, I had two obvious sets to use as follow up. Of course I’d overthink the choice of which to make my next common domain-and-range set.

## Rn

Rn is pronounced “are enn”, just as you might do if you didn’t know enough mathematics to think the superscript meant something important. It does mean something important; it’s just that there’s not a graceful way to say what offhand. This is the set of n-tuples of real numbers. That is, anything you pick out of Rn is an ordered set of things all of which are themselves real numbers. The “n” here is the name for some whole number whose value isn’t going to change during the length of this problem.

So when we speak of Rn we are really speaking of a family of sets, all of them similar in some important ways. The things in R2 look like pairs of real numbers: (3, 4), or (4π, -2e), or (2038, 0.010010001), pairs like that. The things in R3 are triplets of real numbers: (3, 4, 5), or (4π, -2e, 1 + 1/π). The things in R4 are quartets of real numbers: (3, 4, 5, 12) or (4π, -2e, 1 + 1/π, -6) or so. The things in R10 are probably clear enough to not need listing.

It’s possible to add together two things in Rn. At least if they come from the same Rn; you can’t add a pair of numbers to a quartet of numbers, not if you’re being honest. The addition rule is just what you’d come up with if you didn’t know enough mathematics to be devious, though: add the first number of the first thing to the first number of the second thing, and that’s the first number of the sum. Add the second number of the first thing to the second number of the second thing, and that’s the second number of the sum. Add the third number of the first thing to the third number of the second thing, and that’s the third number of the sum. Keep on like this until you run out of numbers in each thing. It’s possible you already have.

You can’t multiply together two things in Rn, though, unless your n is 1. (There may be some conceptual difference between R1 and plain old R. But I don’t recall seeing a mathematician being interested in the difference except when she’s studying the philosophy of mathematics.) The obvious multiplication scheme — multiply matching numbers, like you do with addition — produces something that doesn’t work enough like multiplication to be interesting. It’s possible for some n’s to work out schemes that act like multiplication enough to be interesting, but for the most part we don’t need them.

What we will do, though, is multiply something in Rn by a single real number. That real number is called a “scalar”. You do the multiplication, again, like you’d do if you were too new to mathematics to be clever. Multiply the first number in your thing by the scalar, and that’s the first number in your product. Multiply the second number in your thing by the scalar, and that’s the second number in your product. Multiply the third number in your thing by the scalar, and that’s the third number in your product. Carry on like this until you run out of numbers, and then stop. Usually good advice.
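Both operations are simple enough to write out just as described. A Python sketch, with function names of my own invention and tuples standing in for things from Rn:

```python
def vector_add(u, v):
    """Add two things from the same R^n, component by component."""
    if len(u) != len(v):
        raise ValueError("can't add a pair of numbers to a quartet of numbers")
    return tuple(a + b for a, b in zip(u, v))

def scalar_multiply(c, u):
    """Multiply a thing from R^n by a single real number, the scalar."""
    return tuple(c * a for a in u)

print(vector_add((3, 4, 5), (1, 0, -2)))   # (4, 4, 3)
print(scalar_multiply(2.0, (3, 4, 5)))     # (6.0, 8.0, 10.0)
```

The length check is the honesty clause from above: things from different Rn’s don’t add.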

That you can add together two things from Rn, and you can multiply anything in Rn by a scalar, makes this a “vector space”. (There are some more requirements, but they amount to addition and multiplication working like you’d expect.) The term means about what you think; a “space” is a … well … something that acts mathematically like ordinary everyday space works. A “vector space” is a space where the things inside it are vectors. Vectors are a combination of a direction and a distance in that direction. They’re very well-represented as n-tuples. They get represented as n-tuples so often it’s easy to forget that’s just a convenient way to write them down.

This vector space property of Rn makes it a really useful set. R2 corresponds naturally to “the points on a flat surface”. R3 corresponds naturally to an idea of “all the points in normal everyday space where something could be”. Or, if you like, it can represent “the speed and direction something is travelling in”. Or the direction and amount of its acceleration, for that matter.

Because of these properties, mathematicians will often call Rn the “n-dimensional Euclidean space”. The n is about how many components there are in an element of the set. The “space” tells us it’s a space. “Euclidean” tells us that it looks and works like, well, Euclidean geometry. We can talk about the distance between points and use the ideas we had from plane or solid geometry. We can talk about angles and areas and volumes similarly. We can do this so much we might say “n-dimensional space” as if there weren’t anything but Euclidean spaces out there.

And this is useful for more than describing where something happens to be. A great number of physics problems find it convenient to study the position and the velocity of a number of particles which interact. If we have N particles, then, and we’re in a three-dimensional space, and we’re keeping track of positions and velocities for each of them, then we can describe where everything is and how everything is moving as one element in the space R6N. We can describe movement in time as a function that has a domain of R6N and a range of R6N, and see the progression of time as tracing out a path in that space.
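The bookkeeping behind that R6N can be made concrete. A sketch, with the names and layout my own invention, of packing N particles’ positions and velocities into one point of the big space:

```python
def pack_state(positions, velocities):
    """Flatten N positions and N velocities (each a 3-tuple) into one
    element of R^(6N): the whole system's point in phase space."""
    state = []
    for p in positions:
        state.extend(p)
    for v in velocities:
        state.extend(v)
    return tuple(state)

# Two particles in three dimensions: the combined state lives in R^12.
state = pack_state([(0, 0, 0), (1, 1, 1)], [(0.5, 0, 0), (0, -0.5, 0)])
print(len(state))  # 12
```

Time evolution would then be a function taking one such tuple to the next, tracing the trajectory mentioned below.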

We can’t draw that, obviously, and I’d look skeptically at people who say they can visualize it. What we usually draw is a little enclosed space that’s either a rectangle or a blob, and draw out lines — “trajectories” — inside that. The different spots along the trajectory correspond to all the positions and velocities of all the particles in the system at different times.

Though that’s a fantastic use, it’s not the only one. It’s not required, for example, that a function have the same Rn as both domain and range. It can have different sets. If we want to be clear that the domain and range can be of different sizes, it’s common to call one Rn and the other Rm if we aren’t interested in pinning down just which spaces they are.

But, for example, a perfectly legitimate function would have a domain of R3 and a range of R1, the reals. There’s even an obvious, common one: return the size, the magnitude, of whatever the vector in the domain is. Or we might take as domain R4, and the range R2, following the rule “match an element in the domain to an element in the range that has the same first and third components”. That kind of function is called a “projection”, as it gives what might look like the shadow of the original thing in a smaller space.
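Both of those examples fit in a few lines; a sketch, for illustration only:

```python
import math

def magnitude(v):
    """A function with domain R^3 (or any R^n) and range R: the vector's size."""
    return math.sqrt(sum(x * x for x in v))

def project(v):
    """A projection R^4 -> R^2: keep the first and third components."""
    return (v[0], v[2])

print(magnitude((3.0, 4.0, 0.0)))  # 5.0
print(project((1, 2, 3, 4)))       # (1, 3)
```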

If we wanted to go the other way, from R2 to R4 as an example, we could. Here set the rule “match an element in the domain to an element in the range which has the same first and second components, and has ‘3’ and ‘4’ as the third and fourth components”. That’s an “embedding”, giving us the idea that we can put a Euclidean space with fewer dimensions into a space with more. The idea comes naturally to anyone who’s seen a cartoon where a character leaps off the screen and interacts with the real world.
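That embedding rule, written out as a sketch:

```python
def embed(v):
    """An embedding R^2 -> R^4: keep both components, and pad with
    3 and 4 as the third and fourth components, per the rule above."""
    return (v[0], v[1], 3, 4)

print(embed((5, 6)))  # (5, 6, 3, 4)
```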

## The Set Tour, Stage 2: The Real Star

For the second of my little tour of sets that get commonly used as domains and ranges I want to name the most common of them all.

## R

This is the real numbers. In text that’s written with a bold R. Written by hand, and often in text, that’s written with a capital R that has a double stroke for the main vertical line. That’s an easy-to-write way to distinguish it from a plain old civilian R. The double-vertical-stroke convention is used for many of the most common sets of numbers. It will get used for letters like I and J (the integers), or N (the counting numbers). A vertical stroke will even get added to symbols that technically don’t have any vertical strokes, like Q (the rational numbers). There it’s just put inside the loop, on the left side, far enough from the edge that the reader can notice the vertical stroke is there.

R is a big one. It’s not just a big set. It’s also a popular one. It may as well be the default domain and range. If someone fails to tell you what either set is, you can suppose she meant R and be only rarely wrong. The real numbers are familiar and popular and it feels like we know what they are. It’s a bit tricky to define them exactly, though, and you’ll notice that I’m not doing that. You know what I mean, though. It’s whole numbers, and rational numbers, and irrational numbers like the square root of pi, and for that matter pi, and a whole bunch of other boring numbers nobody looks at. Let’s leave it at that.

All the intervals I talked about last time are subsets of R. If we really wanted to, we could turn a function with domain an interval like [0, 1] into a function with a domain of R. That’s a kind of “extension” of the function. Let me call the function with domain [0, 1] by the name “f”. I’ll then define g, on the domain R, by the rule “whatever f(x) is, if x is from 0 to 1; and some other, harmless value, if x isn’t”. Probably the harmless value is zero. Sometimes we need to change the domain a function’s defined on, and this is a way to do it.
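A sketch of that trick, with a made-up f chosen just for the example:

```python
import math

def f(x):
    """A function defined only on [0, 1] -- an invented example."""
    return math.sqrt(x * (1 - x))

def g(x):
    """f extended to all of R, with the harmless value 0 everywhere else."""
    if 0 <= x <= 1:
        return f(x)
    return 0.0

print(g(0.5))  # 0.5
print(g(-3))   # 0.0
```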

If we only want to talk about the positive real numbers we can denote that by putting a plus sign in superscript: R+. If we only want the negative numbers we put in a minus sign: R-. Does either of these include zero? My heart tells me neither should, but I wouldn’t be surprised if in practice either did, because zero is often useful to have around. To be careful we might explicitly include zero, using the notations of set theory. Then we might write $\textbf{R}^+ \cup \left\{0\right\}$.

Sometimes the rule for a function doesn’t make sense for some values. For example, if a function has the rule $f: x \mapsto 1 / (x - 1)$ then you can’t work out a value for f(1). That would require dividing by zero and we dare not do that. A careful mathematician would say the domain of that function f is all the real numbers R except for the number 1. This exclusion gets written as “R \ {1}”. The backslash means “except the numbers in the following set”. It might be a single number, such as in this example. It might be a lot of numbers. The function $g: x \mapsto \log\left(1 - x\right)$ is meaningless for any x that’s equal to or greater than 1. We could write its domain then as “R \ { x: x ≥ 1 }”.
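In code the careful mathematician’s exclusions become explicit guards. A sketch of both functions from this paragraph:

```python
import math

def f(x):
    """The rule 1/(x - 1); domain is all the reals except 1."""
    if x == 1:
        raise ValueError("1 is excluded from the domain")
    return 1 / (x - 1)

def g(x):
    """The rule log(1 - x); domain excludes every x >= 1."""
    if x >= 1:
        raise ValueError("values greater than or equal to 1 are excluded")
    return math.log(1 - x)

print(f(3))  # 0.5
print(g(0))  # 0.0
```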

That’s if we’re being careful. If we get a little careless, or if we’re writing casually, or if the set of non-permitted points is complicated we might omit that. Mathematical writing includes an assumption of good faith. The author is supposed to be trying to say something interesting and true. The reader is expected to be skeptical but not quarrelsome. Spotting a flaw in the argument because the domain doesn’t explicitly rule out some points it shouldn’t have is tedious. Finding that the interesting thing only holds true for values that are implicitly outside the domain is serious.

The set of real numbers is a group; it has an operation that works like addition. We call it addition. For that matter, it’s a ring. It has an operation that works like multiplication. We call it multiplication. And it’s even more than a ring. Everything in R except for the additive identity — 0, the number you can add to anything without changing what the thing is — has a multiplicative inverse. That is, any number except zero has some number you can multiply it by to get 1. This property makes it a “field”, to people who study (abstract) algebra. This “field” hasn’t got anything to do with gravitational or electrical or baseball or magnetic fields. But the overlap in names does serve to sometimes confuse people.

But having this multiplicative inverse means that we can do something that operates like division. Divide one thing by a second by taking the first thing and multiplying it by the second thing’s multiplicative inverse. We call this division-like operation “division”.
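That definition of division translates directly. A sketch using Python’s exact rationals so the arithmetic stays honest; the function names are mine:

```python
from fractions import Fraction

def multiplicative_inverse(b):
    """The number you multiply b by to get 1; zero has none."""
    if b == 0:
        raise ZeroDivisionError("0 has no multiplicative inverse")
    return 1 / Fraction(b)

def divide(a, b):
    """Division defined as multiplying by the second thing's inverse."""
    return Fraction(a) * multiplicative_inverse(b)

print(divide(6, 3))  # 2
```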

It’s not coincidence that the algebraic “addition” and “multiplication” and “division” operations are the ones we call addition and multiplication and division. What makes abstract algebra abstract is that it’s the study of things that work kind of like the real numbers do. The operations we can do on the real numbers inspire us to look for other sets that can let us do similar things.

## Reading the Comics, July 24, 2015: All The Popular Topics Are Here Edition

This week all the mathematically-themed comic strips seem to have come from Gocomics.com. Since that gives pretty stable URLs I don’t feel like I can include images of those comics. So I’m afraid it’s a bunch of text this time. I like to think you enjoy reading the text, though.

Mark Anderson’s Andertoons seemed to make its required appearance here with the July 20th strip. And the kid’s right about parentheses being very important in mathematics and “just” extra information in ordinary language. Parentheses as a way of grouping together terms appear as early as the 16th century, according to Florian Cajori. But the symbols wouldn’t become common for a couple of centuries. Cajori speculates that the use of parentheses in normal rhetoric may have slowed mathematicians’ acceptance of them. Vinculums — lines placed over a group of terms — and colons before and after the group seem to have been more popular. Leonhard Euler would use parentheses a good bit, and that settled things. Besides all his other brilliances, Euler was brilliant at setting notation. There are still other common ways of aggregating terms. But most of them are straight brackets or curled braces, which are almost the smallest possible changes from parentheses you can make.

Though his place was secure, Mark Anderson got in another strip the next day. This one’s based on the dangers of extrapolating mindlessly. One trouble with extrapolation is that if we just want to match the data we have then there are literally infinitely many possible extrapolations, each equally valid. But most of them are obvious garbage. If the high temperature the last few days was 78, 79, 80, and 81 degrees Fahrenheit, it may be literally true that we could extrapolate that to a high of 120,618 degrees tomorrow, but we’d be daft to believe it. If we understand the factors likely to affect our data we can judge what extrapolations are plausible and what ones aren’t. As ever, sanity checking, verifying that our results could be correct, is critical.
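The infinitude of equally valid fits can be made concrete. A sketch (the construction is mine, picked to match the strip’s numbers): every polynomial of the form 78 + x + c·x(x−1)(x−2)(x−3) passes through the four recorded highs on days 0 through 3, whatever c is, and one choice of c “predicts” the absurd 120,618 degrees for day 4:

```python
from fractions import Fraction

def extrapolation(c):
    """One of infinitely many polynomials through the highs 78, 79, 80, 81
    on days 0..3; every choice of c fits that data exactly."""
    def p(x):
        return 78 + x + c * x * (x - 1) * (x - 2) * (x - 3)
    return p

sane = extrapolation(0)                     # the straight line: 82 tomorrow
daft = extrapolation(Fraction(120536, 24))  # 24*c = 120618 - 82, so...

print([sane(day) for day in range(4)])  # [78, 79, 80, 81]
print(sane(4))  # 82
print(daft(4))  # 120618
```

Nothing in the data alone rules the daft curve out; only our understanding of what affects temperature does.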

Bill Amend’s FoxTrot Classics (July 20) continues Jason’s attempts at baking without knowing the unstated assumptions of baking. See above comments about sanity checking. At least he’s ruling out the obviously silly rotation angle. (The strip originally ran the 22nd of July, 2004. You can see it in color, there, if you want to see things like that.) Some commenters have gotten quite worked up about Jason saying “degrees Kelvin” when he need only say “Kelvin”. I can’t join them. Besides the phenomenal harmlessness of saying “degrees Kelvin”, it wouldn’t quite flow for Jason to think “350 degrees” short for “350 Kelvin” instead of “350 degrees Kelvin”.

Nate Frakes’s Break of Day (July 21) is the pure number wordplay strip for this roundup. This might be my favorite of this bunch, mostly because I can imagine the way it would be staged as a bit on The Muppet Show or a similar energetic and silly show. John Atkinson’s Wrong Hands for July 23 is this roundup’s general mathematics wordplay strip. And Mark Parisi’s Off The Mark for July 22nd is the mathematics-literalist strip for this roundup.

Ruben Bolling’s Tom The Dancing Bug (July 23, rerun) is nominally an economics strip. Its premise is that since rational people do what maximizes their reward for the risk involved, then pointing out clearly how the risks and possible losses have changed will change their behavior. Underlying this are assumptions from probability and statistics. The core is the expectation value. That’s an average of what you might gain, or lose, from the different outcomes of something. That average is weighted by the probability of each outcome. A strictly rational person who hadn’t ruled anything in or out would be expected to do the thing with the highest expected gain, or the smallest expected loss. That people do not do things this way vexes folks who have not known many people.
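The expectation value is just a weighted sum; a minimal sketch:

```python
def expected_value(outcomes):
    """Probability-weighted average of payoffs.
    `outcomes` is a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# A fair coin flip: win $10 on heads, lose $10 on tails.
print(expected_value([(0.5, 10), (0.5, -10)]))  # 0.0
```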

## Reading the Comics, June 25, 2015: Not Making A Habit Of This Edition

I admit I did this recently, and am doing it again. But I don’t mean to make it a habit. I ran across a few comic strips that I can’t, even with a stretch, call mathematically-themed, but I liked them too much to ignore them either. So they’re at the end of this post. I really don’t intend to make this a regular thing in Reading the Comics posts.

Justin Boyd’s engagingly silly Invisible Bread (June 22) names the tuning “two steps below A”. He dubs this “negative C#”. This is probably an even funnier joke if you know music theory. The repetition of the notes in a musical scale could be used as an example of cyclic or modular arithmetic. Really, that the note above G is A of the next higher octave, and the note below A is G of the next lower octave, probably explains the idea already.

If we felt like it, we could match the notes of a scale to the counting numbers. Match A to 0, B to 1, C to 2 and so on. Work out sharps and flats as you like. Then we could think of transposing a note from one key to another as adding or subtracting numbers. (Warning: do not try to pass your music theory class using this information! Transposition of keys is a much more subtle process than I am describing.) If the number gets above some maximum, it wraps back around to 0; if the number would go below zero, it wraps back around to that maximum. Relabeling the things in a group might make them easier or harder to understand. But it doesn’t change the way the things relate to one another. And that’s why we might call something F or negative C#, as we like and as we hope to be understood.
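The wrap-around arithmetic can be sketched directly. This toy version uses only the seven natural notes and, per the warning above, ignores sharps, flats, and everything else real transposition involves:

```python
# The seven natural notes, matched to 0 through 6.
NOTES = ["A", "B", "C", "D", "E", "F", "G"]

def shift_note(note, steps):
    """Move a note up or down the scale, wrapping around at the ends."""
    return NOTES[(NOTES.index(note) + steps) % len(NOTES)]

print(shift_note("G", 1))   # A
print(shift_note("A", -1))  # G
```

The `%` operator does the wrapping: above G comes A again, below A comes G.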

Hilary Price’s Rhymes With Orange (June 23) reminds us how important it is to pick the correct piece of chalk. The mathematical symbols on the board don’t mean anything. A couple of the odder bits of notation might be meant as shorthand. Often in the rush of working out a problem some of the details will get written as borderline nonsense. The mathematician is probably more interested in getting the insight down. She’ll leave the details for later reflection.

Jason Poland’s Robbie and Bobby (June 23) uses “calculating obscure digits of pi” as computer fun. Calculating digits of pi is hard, at least in decimals, which is all anyone cares about. If you wish to know the 5,673,299,925th decimal digit of pi, you need to work out all 5,673,299,924 digits that go before it. There are formulas to work out a binary (or hexadecimal) digit of pi without working out all the digits that go before. This saves quite some time if you need to explore the nether-realms of pi’s digits.
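The formula behind that hexadecimal-digit trick is the Bailey-Borwein-Plouffe series. The sketch below just sums the series to approximate pi; it does not perform the digit-extraction step itself, which takes more machinery:

```python
import math

def bbp_pi(terms=12):
    """Approximate pi by summing the first few terms of the BBP series."""
    total = 0.0
    for k in range(terms):
        total += (1 / 16**k) * (4 / (8*k + 1) - 2 / (8*k + 4)
                                - 1 / (8*k + 5) - 1 / (8*k + 6))
    return total

print(abs(bbp_pi() - math.pi) < 1e-10)  # True
```

Each term is a sixteenth the size of the one before, which is why so few terms get so close, and why base-16 digits fall out of it so conveniently.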

The comic strip also uses Stephen Hawking as the icon for most-incredibly-smart-person. It’s the role that Albert Einstein used to have, and still shares. I am curious whether Hawking is going to permanently displace Einstein as the go-to reference for incredible brilliance. His pop culture celebrity might be a transient thing. I suspect it’s going to last, though. Hawking’s life has a tortured-genius edge to it that gives it Romantic appeal, likely to stay popular.

Paul Trap’s Thatababy (June 23) presents confusing brand-new letters and numbers. Letters are obviously human inventions though. They’ve been added to and removed from alphabets for thousands of years. It’s only a few centuries since “i” and “j” became (in English) understood as separate letters. They had been seen as different ways of writing the same letter, or the vowel and consonant forms of the same letter. If enough people found a proposed letter useful it would work its way into the alphabet. Occasionally the ampersand & has come near being a letter. (The ampersand has a fascinating history. Honestly.) And conversely, if we collectively found cause to toss one aside we could remove it from the alphabet. English hasn’t lost any letters since yogh (the Old English letter that looks like a 3 written half a line off) was dropped in favor of “gh”, about five centuries ago, but there’s no reason that it couldn’t shed another.

Numbers are less obviously human inventions. But the numbers we use are, or at least work like they are. Arabic numerals are barely eight centuries old in Western European use. Their introduction was controversial. People feared shopkeepers and moneylenders could easily cheat people unfamiliar with these crazy new symbols. Decimals, instead of fractions, were similarly suspect. Negative numbers took centuries to understand and to accept as numbers. Irrational numbers too. Imaginary numbers also. Indeed, look at the connotations of those names: negative numbers. Irrational numbers. Imaginary numbers. We can add complex numbers to that roster. Each name at least sounds suspicious of the innovation.

There are more kinds of numbers. In the 19th century William Rowan Hamilton developed quaternions. These are 4-tuples of numbers that work kind of like complex numbers. They’re strange creatures, admittedly, not very popular these days. Their greatest strength is in representing rotations in three-dimensional space well. There are also octonions, 8-tuples of numbers. They’re more exotic than quaternions and have fewer good uses. We might find more, in time.

Rina Piccolo’s entry in Six Chix this week (June 24) draws a house with extra dimensions. An extra dimension is a great way to add volume, or hypervolume, to a place. A cube that’s 20 feet on a side has a volume of 20³, or 8,000 cubic feet, after all. A four-dimensional hypercube 20 feet on each side has a hypervolume of 20⁴, or 160,000 hypercubic feet. This seems like it should be enough for people who don’t collect books.
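The arithmetic generalizes to any number of dimensions; a one-line sketch:

```python
def hypervolume(side, dimensions):
    """Volume of an n-dimensional cube: side length raised to the dimension."""
    return side ** dimensions

print(hypervolume(20, 3))  # 8000
print(hypervolume(20, 4))  # 160000
```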

Morrie Turner’s Wee Pals (June 24, rerun) is just a bit of wordplay. It’s built on the idea kids might not understand the difference between the words “ratio” and “racial”.

Tom Toles’s Randolph Itch, 2 am (June 25, rerun) inspires me to wonder if anybody’s ever sold novelty 4-D glasses. Probably they have, sometime.

Now for the comics that I just can’t really make mathematics but that I like anyway:

Phil Dunlap’s Ink Pen (June 23, rerun) is aimed at the folks still lingering in grad school. Please be advised that most doctoral theses do not, in fact, end in supervillainy.

Darby Conley’s Get Fuzzy (June 25, rerun) tickles me. But Albert Einstein did after all say many things in his life, and not everything was as punchy as that line about God and dice.

## N-tuple.

We use numbers to represent things we want to think about. Sometimes the numbers represent real-world things: the area of our backyard, the number of pets we have, the time until we have to go back to work. Sometimes the numbers mean something more abstract: an index of all the stuff we’re tracking, or how its importance compares to other things we worry about.

Often we’ll want to group together several numbers. Each of these numbers may measure a different kind of thing, but we want to keep straight what kind of thing it is. For example, we might want to keep track of how many people are in each house on the block. The houses have an obvious index number — the street number — and the number of people in each house is just what it says. So instead of just keeping track of, say, “32” and “34” and “36”, and “3” and “2” and “3”, we would keep track of pairs: “32, 3”, and “34, 2”, and “36, 3”. These are called ordered pairs.

They’re not called ordered because the numbers are in order. They’re called ordered because the order in which the numbers are recorded contains information about what the numbers mean. In this case, the first number is the street address, and the second number is the count of people in the house, and woe to our data set if we get that mixed up.

And there’s no reason the ordering has to stop at pairs of numbers. You can have ordered triplets of numbers — (32, 3, 2), say, giving the house number, the number of people in the house, and the number of bathrooms. Or you can have ordered quadruplets — (32, 3, 2, 6), say, house number, number of people, bathroom count, room count. And so on.
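Programming languages take the same idea seriously. A sketch using Python’s named tuples, with field names invented for this example, which makes explicit what the ordering of an n-tuple encodes:

```python
from collections import namedtuple

# Naming the slots records what each position of the triplet means.
House = namedtuple("House", ["street_number", "occupants", "bathrooms"])

block = [House(32, 3, 2), House(34, 2, 1), House(36, 3, 2)]
print(block[0].occupants)  # 3
```

Underneath, a `House` is still just an ordered triplet; the names only guard against the woe of mixing the slots up.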

An n-tuple is an ordered collection of numbers. How many? We don’t care, or we don’t care to say right now. There are two popular ways to pronounce it. One is to say it the way you say “multiple” only with the first syllable changed to “enn”. Others say it about the same, but with a long u vowel, so, “enn-too-pull”. I believe everyone worries that everyone else says it the other way and that they sound like they’re the weird ones.

You might care to specify what your n is for your n-tuple. In that case you can plug in a value for that n right in the symbol: a 3-tuple is an ordered triplet. A 4-tuple is that ordered quadruplet. A 26-tuple seems like rather a lot but I’ll trust that you know what you’re trying to study. A 1-tuple is just a number. We might use that if we’re trying to make our notation consistent with something else in the discussion.

If you’re familiar with vectors you might ask: so, an n-tuple is just a vector? It’s not quite. A vector is an n-tuple, but in the same way a square is a rectangle. It has to meet some extra requirements. To be a vector we have to be able to add corresponding numbers together and get something meaningful out of it. The ordered pair (32, 3) representing “32 blocks north and 3 blocks east” can be a vector. (32, 3) plus (34, 2) can give us (66, 5). This makes sense because we can say, “32 blocks north, 3 blocks east, 34 more blocks north, 2 more blocks east gives us 66 blocks north, 5 blocks east.” At least it makes sense if we don’t run out of city. But to add together (32, 3) plus (34, 2) meaning “house number 32 with 3 people plus house number 34 with 2 people gives us house number 66 with 5 people”? That’s not good, whatever town you’re in.

I think the commonest use of n-tuples is to talk about vectors, though. Vectors are such useful things.

## A Venn Diagram of the Real Number System

I’m aware that it isn’t properly exactly a Venn diagram, now, but the mathematics-artist Robert Austin has a nice picture of the real numbers, and the most popular subsets of the real numbers, and how they relate. The bubbles aren’t to scale — there are just as many counting numbers (1, 2, 3, 4, et cetera) as there are rational numbers, and there are far more irrational numbers than there are rational numbers — but if you don’t mind that, then, this is at least a nice little illustration.

## Reading the Comics, January 11, 2015: Standard Genres And Bloom County Edition

I’m still getting back to normal after the Christmas and New Year’s disruption of, well, everything, which is why I’m taking it easy and just doing another comics review. I have to suppose Comic Strip Master Command was also taking it easy over the holidays since most of the subjects are routine genres — word answer problems, mathematics-connected puns, and the like — with the Bloom County reruns the cartoons that give me most to write about. It’s all part of the wondrous cycle of nature; I’m sure there’ll be a really meaty collection of topics along soon.

Gordon Bess’s Redeye (January 8, originally run August 21, 1968) is an example of the student giving a mischievous answer to a word problem. I feel like I should have a catchy name for this genre, given how much it turns up, but I haven’t got anything good that comes to mind. (I don’t tend to talk about the drawing much in these strips — most of the time it isn’t that important, and comic strips have been growing surprisingly indifferent to drawing — but I did notice while uploading this that Pokey’s stance and expression in the first panel is really quite good. You should be able to open the image in a new tab and see it at its fullest-available 1440-by-431 pixel size and that shows off well the crafting that went into the figure.)


## Reading the Comics, December 27, 2014: Last of the Year Edition?

I’m curious whether this is going to be the final bunch of mathematics-themed comics for the year 2014. Given the feast-or-famine nature of the strips it’s plausible we might not have anything good through to mid-January, but, who knows? Of the comics in this set I think the first Peanuts the most interesting to me, since it’s funny and gets at something big and important, although the Ollie and Quentin is a better laugh.

Mark Leiknes’s Cow and Boy (December 23, rerun) talks about chaos theory, the notion that incredibly small differences in a state can produce enormous differences in a system’s behavior. Chaos theory became a pop-cultural thing in the 1980s, when Edward Lorenz’s work (of twenty years earlier) broke out into public consciousness. In chaos theory the chaos isn’t that the system is unpredictable — if you have perfect knowledge of the system, and the rules by which it interacts, you could make perfect predictions of its future. What matters is that, in non-chaotic systems, a small error will grow only slightly: if you predict the path of a thrown ball, and you have the ball’s mass slightly wrong, you’ll make a proportionately small error on what the path is like. If you predict the orbit of a satellite around a planet, and have the satellite’s starting speed a little wrong, your prediction is proportionately wrong. But in a chaotic system there are at least some starting points where tiny errors in your understanding of the system produce huge differences between your prediction and the actual outcome. Weather looks like it’s such a system, and that’s why it’s plausible that all of us change the weather just by existing, although of course we don’t know whether we’ve made it better or worse, or for whom.
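A standard toy demonstration of this sensitivity (my choice of example, not the strip’s) is the logistic map with its classic chaotic parameter r = 4. Two starting points a millionth apart stay close for a few steps, then end up wildly separated:

```python
def separation(x0, y0, steps, r=4.0):
    """Widest gap between two logistic-map orbits x -> r*x*(1-x)
    over the given number of steps. r=4 is the classic chaotic case."""
    x, y = x0, y0
    widest = abs(x - y)
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        widest = max(widest, abs(x - y))
    return widest

# Start a millionth apart: close at first, then order-one differences.
print(separation(0.3, 0.300001, 5))   # still tiny
print(separation(0.3, 0.300001, 50))  # huge by comparison
```

The error roughly doubles each step, which is the "tiny errors produce huge differences" of the paragraph above in miniature.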

Charles Schulz’s Peanuts (December 23, rerun from December 26, 1967) features Sally trying to divide 25 by 50 and Charlie Brown insisting she can’t do it. Sally’s practical response: “You can if you push it!” I am a bit curious why Sally, who’s normally around six years old, is doing division in school (and over Christmas break), but then the kids are always being assigned Thomas Hardy’s Tess of the d’Urbervilles for a book report and that is hilariously wrong for kids their age to read, so, let’s give that a pass.