My Little 2021 Mathematics A-to-Z: Zorn’s Lemma


The joke to which I alluded last week was a quick pun. The setup is, “What is yellow and equivalent to the Axiom of Choice?” It’s the topic for this week, and the conclusion of the Little 2021 Mathematics A-to-Z. I again thank Mr Wu, of Singapore Maths Tuition, for a delightful topic.

Zorn’s Lemma

Max Zorn did not name it Zorn’s Lemma. You expected that. He thought of it just as a Maximal Principle when introducing it in a 1934 presentation and 1935 paper. The word “lemma” connotes that some theorem is a small thing. It usually means it’s used to prove some larger and more interesting theorem. Zorn’s Lemma is one of those small things. With the right background, a rigorous proof is a couple not-too-dense paragraphs. Without the right background? It’s one of those proofs you read the statement of and nod, agreeing, that sounds reasonable.

The lemma is about partially ordered sets. A set’s partially ordered if it has a relationship between pairs of items in it. You will sometimes see a partially ordered set called a “poset”, a term of mathematical art which makes me smile too. If we don’t know anything about the ordering relationship we’ll use the ≤ symbol, just as if these were ordinary numbers. To be partially ordered, whenever x ≤ y and y ≤ x, we know that x and y must be equal. And the converse: if x = y then x ≤ y and y ≤ x. The ordering also has to chain together: whenever x ≤ y and y ≤ z, then x ≤ z. What makes this partial is that we’re not guaranteed that every x and y relate in some way. It’s a totally ordered set if we’re guaranteed that at least one of x ≤ y and y ≤ x is always true. And then there is such a thing as a well-ordered set. This is a totally ordered set for which every subset (unless it’s empty) has a minimal element.

If we have a couple elements, each of which we can put in some order, then we can create a chain. If x ≤ y and y ≤ z, then we can write x ≤ y ≤ z and we have at least three things all relating to one another. This seems like stuff too basic to notice, if we think too literally about the relationship being “is less than or equal to”. If the relationship is, say, “divides wholly into”, then we get some interesting different chains. Like, 2 divides into 4, which divides into 8, which divides into 24. And 3 divides into 6 which divides into 24. But 2 doesn’t divide into 3, nor 3 into 2. 4 doesn’t divide into 6, nor 6 into either 8 or 4.

So what Zorn’s Lemma says is, if all the chains in a partially ordered set each have an upper bound, then the partially ordered set has a maximal element. “Maximal element” here means an element that doesn’t have a bigger comparable element. (That is, m is maximal if there’s no other element b for which m ≤ b. It’s possible that m and b can’t be compared, though, the way 6 doesn’t divide 8 and 8 doesn’t divide 6.) This is a little different from a “maximum”. It’s possible for there to be several maximal elements. But if you parse this as “if you can always find a bound for any chain of elements, there’s some maximal element”? And remember there could be many maximal elements? Then you’re getting the point.
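If you would like to poke at this with something concrete, here is a little sketch in Python. It’s my own illustration, nothing from Zorn: the sets and the helper names are made up for the example. It treats “divides wholly into” as the ordering and picks out the maximal elements of a couple of small finite posets.

```python
# A minimal sketch: maximal elements under "divides wholly into".
# The sets and function names here are invented for illustration.

def divides(x, y):
    """The partial order: x <= y when x divides wholly into y."""
    return y % x == 0

def maximal_elements(poset):
    """Elements with no bigger comparable element in the poset."""
    return sorted(m for m in poset
                  if not any(divides(m, b) for b in poset if b != m))

print(maximal_elements({2, 3, 4, 6, 8, 24}))  # [24]: one maximal element
print(maximal_elements({2, 3, 4, 6, 8}))      # [6, 8]: two maximal elements
```

Notice how dropping 24 from the set leaves two maximal elements, neither of them a maximum.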

You may also ask how this could be interesting. Zorn’s Lemma is an existence proof. Most existence proofs assure us a thing we thought existed does, but don’t tell us how to find it. This is all right. We tend to rely on an existence proof when we want to talk about some mathematical item but don’t care about fussy things like what it is. It is much the way we might talk about “an odd perfect number N”. We can describe interesting things that follow from having such a number even before we know what value N has.

A classic example, the one you find in any discussion of using Zorn’s Lemma, is about the basis for a vector space. This is like deciding how to give directions to a point in space. But vector spaces include some quite abstract things. One vector space is “the set of all functions you can integrate”. Another is “matrices whose elements are all four-dimensional rotations”. There might be literally infinitely many “directions” to go. How do we know we can find a set of directions that work as well as, for guiding us around a city, the north-south-east-west compass rose does? Zorn’s Lemma answers that we always can: every vector space has a basis. There are other things done all the time, too. A nontrivial ring-with-identity, for example, has to have a maximal ideal. (An ideal is a subset of the ring closed under addition, and closed under multiplying by anything in the ring.) This is handy to know if you’re working with rings a lot.

The joke in my prologue was built on the claim Zorn’s Lemma is equivalent to the Axiom of Choice. The Axiom of Choice is a piece of set theory that surprised everyone by being independent of the Zermelo-Fraenkel axioms. The Axiom says that, if you have a collection of disjoint nonempty sets, then there must exist at least one set with exactly one element from each of those sets. That is, you can pick one thing out of each of a set of bins. It’s easy to see what this has in common with Zorn’s Lemma: both seem too obvious to imagine needing proof. That’s the sort of thing that makes a good axiom. Thing about a lemma, though, is we do prove it. That’s how we know it’s a lemma. How can a lemma be equivalent to an axiom?

I’ll argue by analogy. In Euclidean geometry one of the axioms is this annoying statement about on which side of a line two other lines that intersect it will meet. If you have this axiom, you can prove some nice results, like, the interior angles of a triangle add up to two right angles. If you decide you’d rather make your axiom that bit about the interior angles adding up? You can go from that to prove the thing about two lines crossing a third line.

So it is here. If you suppose the Axiom of Choice is true, you can get Zorn’s Lemma: you can pick an element in your set, find a chain for which that’s the minimum, and find your maximal element from that. If you make Zorn’s Lemma your axiom? You can use x ≤ y to mean “x is a less desirable element to pick out of this set than is y”. And then you can choose a maximal element out of your set. (It’s a bit more work than that, but it’s that kind of work.)

There’s another theorem, or principle, that’s (with reservations) equivalent to both Zorn’s Lemma and the Axiom of Choice. It’s another piece that seems so obvious it should defy proof. This is the well-ordering theorem, which says that every set can be well-ordered. That is, so that every non-empty subset has some minimum element. Finally, a mathematical excuse for why we have alphabetical order, even if there’s no clear reason that “j” should come after “i”.

(I said “with reservations” above. This is because whether these are equivalent depends on what, precisely, kind of deductive logic you’re using. If you are not using ordinary first-order logic, and are using a “second-order logic” instead, they differ.)

Ernst Zermelo introduced the Axiom of Choice to set theory so that he could prove this in a way that felt reasonable. I bet you can imagine how you’d go from “every non-empty set has a minimum element” right back to “you can always pick one element of every set”, though. And, maybe if you squint, can see how to get from “there’s always a minimum” to “there has to be a maximum”. I’m speaking casually here because proving it precisely is more work than we need to do.

I mentioned how Zorn did not name his lemma after himself. Mathematicians typically don’t name things for themselves. Nor did he even think of it as a lemma. His name seems to have adhered to the principle in the late 30s. Credit the nonexistent mathematician Bourbaki writing about “le théorème de Zorn”. By 1940 John Tukey, celebrated for the Fast Fourier Transform, wrote of “Zorn’s Lemma”. Tukey’s impression was that this is how people in Princeton spoke of it at the time. He seems to have been the first to put the words “Zorn’s Lemma” in print, though. Zorn isn’t the first to have stated this. Kazimierz Kuratowski, in 1922, described what is clearly Zorn’s Lemma in a different form. Zorn remembered being aware of Kuratowski’s publication but did not remember noticing the property. The Hausdorff Maximal Principle, of Felix Hausdorff, has much the same content. Zorn said he did not know about Hausdorff’s 1927 paper until decades later.

Zorn’s lemma, the Axiom of Choice, the well-ordering theorem, and Hausdorff’s Maximal Principle all date to the early 20th century. So do a handful of other ideas that turn out to be equivalent. This was an era when set theory saw an explosive development of new and powerful ideas. The point of describing this chain is to emphasize that great concepts often don’t have a unique presentation. Part of the development of mathematics is picking through several quite similar expressions of a concept. Which one do we enshrine as an axiom, or at least the canonical presentation of the idea?

We have to choose.


And with this I at last declare the hard work of the Little 2021 Mathematics A-to-Z at an end. I plan to follow up, as traditional, with a little essay about what I learned while doing this project. All of the Little 2021 Mathematics A-to-Z essays should be at this link. And then all of the A-to-Z essays from all eight projects should be at this link. Thank you so much for your support in these difficult times.

From my Third A-to-Z: Zermelo-Fraenkel Axioms


The close of my End 2016 A-to-Z let me show off one of my favorite modes, that of amateur historian of mathematics who doesn’t check his primary references enough. So far as I know I don’t have any serious errors here, but then, how would I know? … But keep in mind that the full story is more complicated and more ambiguous than presented. (This is true of all histories.) That I could fit some personal history in was also a delight.

I don’t know why Thoralf Skolem’s name does not attach to the Zermelo-Fraenkel Axioms. Mathematical things are named with a shocking degree of arbitrariness. Skolem did well enough for himself.


gaurish gave me a choice for the Z-term to finish off the End 2016 A To Z. I appreciate it. I’m picking the more abstract thing because I’m not sure that I can explain zero briefly. The foundations of mathematics are a lot easier.

Zermelo-Fraenkel Axioms

I remember the look on my father’s face when I asked if he’d tell me what he knew about sets. He misheard what I was asking about. When we had that straightened out my father admitted that he didn’t know anything particular. I thanked him and went off disappointed. In hindsight, I kind of understand why everyone treated me like that in middle school.

My father’s always quick to dismiss how much mathematics he knows, or could understand. It’s a common habit. But in this case he was probably right. I knew a bit about set theory as a kid because I came to mathematics late in the “New Math” wave. Sets were seen as fundamental to why mathematics worked without being so exotic that kids couldn’t understand them. Perhaps so; both my love and I delighted in what we got of set theory as kids. But if you grew up before that stuff was popular you probably had a vague, intuitive, and imprecise idea of what sets were. Mathematicians had only a vague, intuitive, and imprecise idea of what sets were through to the late 19th century.

And then came what mathematics majors hear of as the Crisis of Foundations. (Or a similar name, like Foundational Crisis. I suspect there are dialect differences here.) It reflected mathematics taking seriously one of its ideals: that everything in it could be deduced from clearly stated axioms and definitions using logically rigorous arguments. As often happens, taking one’s ideals seriously produces great turmoil and strife.

Before about 1900 we could get away with saying that a set was a bunch of things which all satisfied some description. That’s how I would describe it to a new acquaintance if I didn’t want to be treated like I was in middle school. The definition is fine if we don’t look at it too hard. “The set of all roots of this polynomial”. “The set of all rectangles with area 2”. “The set of all animals with four-fingered front paws”. “The set of all houses in Central New Jersey that are yellow”. That’s all fine.

And then if we try to be logically rigorous we get problems. We always did, though. They’re embodied by ancient jokes like the person from Crete who declared that all Cretans always lie; is the statement true? Or the slightly less ancient joke about the barber who shaves only the men who do not shave themselves; does he shave himself? If not jokes these should at least be puzzles faced in fairy-tale quests. Logicians dressed this up some. Bertrand Russell gave us the quite respectable “The set consisting of all sets which are not members of themselves”, and asked us to stare hard into that set. To this we have only one logical response, which is to shout, “Look at that big, distracting thing!” and run away. This satisfies the problem only for a while.

The while ended in — well, that took a while too. But between 1908 and the early 1920s Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem paused from arguing whose name would also be the best indie rock band name long enough to put set theory right. Their structure is known as Zermelo-Fraenkel Set Theory, or ZF. It gives us a reliable base for set theory that avoids any contradictions or catastrophic pitfalls. Or does so far as we have found in a century of work.

It’s built on a set of axioms, of course. Most of them are uncontroversial, things like declaring two sets are equivalent if they have the same elements. Declaring that the union of sets is itself a set. Obvious, sure, but it’s the obvious things that we have to make axioms. Maybe you could start an argument about whether we should just assume there exists some infinitely large set. But if we’re aware sets probably have something to teach us about numbers, and that numbers can get infinitely large, then it seems fair to suppose that there must be some infinitely large set. The axioms that aren’t simple obvious things like that are too useful to do without. They assume stuff like that no set is an element of itself. Or that every set has a “power set”, a new set comprising all the subsets of the original set. Good stuff to know.
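If you want a taste of what these look like written out formally, here are two of the uncontroversial ones, in my own transcription; any standard logic text words them with small variations.

```latex
% Extensionality: sets with the same elements are the same set.
\forall A \,\forall B \,\bigl(\forall x\,(x \in A \leftrightarrow x \in B) \rightarrow A = B\bigr)

% Power set: every set A has a set P of all its subsets.
\forall A \,\exists P \,\forall B \,\bigl(B \in P \leftrightarrow \forall x\,(x \in B \rightarrow x \in A)\bigr)
```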

There is one axiom that’s controversial. Not controversial the way Euclid’s Parallel Postulate was. That’s the ugly one about lines crossing another line meeting on the same side they make angles smaller than something something or other. That axiom was controversial because it read so weird, so needlessly complicated. (It isn’t; it’s exactly as complicated as it must be. Or for a more instructive view, it’s as simple as it could be and still be useful.) The controversial axiom of Zermelo-Fraenkel Set Theory is known as the Axiom of Choice. It says if we have a collection of mutually disjoint sets, each with at least one thing in them, then it’s possible to pick exactly one item from each of the sets.

It’s impossible to dispute this is what we have axioms for. It’s about something that feels like it should be obvious: we can always pick something from a set. How could this not be true?

If it is true, though, we get some unsavory conclusions. For example, it becomes possible to take a ball the size of an orange and slice it up. We slice using mathematical blades. They’re not halted by something as petty as the desire not to slice atoms down the middle. We can reassemble the pieces. Into two balls, each the size of the original. And worse, it doesn’t require we do something like cut the orange into infinitely many pieces. We expect crazy things to happen when we let infinities get involved. No, though, we can do this cut-and-duplicate thing by cutting the orange into five pieces. When you hear that it’s hard to know whether to point to the big, distracting thing and run away. If we dump the Axiom of Choice we don’t have that problem. But can we do anything useful without the ability to make a choice like that?

And we’ve learned that we can. If we want to use the Zermelo-Fraenkel Set Theory with the Axiom of Choice we say we were working in “ZFC”, Zermelo-Fraenkel-with-Choice. We don’t have to. If we don’t want to make any assumption about choices we say we’re working in “ZF”. Which to use depends on what one wants to use.

Either way Zermelo and Fraenkel and Skolem established set theory on the foundation we use to this day. We’re not required to use them, no; there’s a construction called von Neumann-Bernays-Gödel Set Theory that’s supposed to be more elegant. They didn’t mention it in my logic classes that I remember, though.

And still there’s important stuff we would like to know which even ZFC can’t answer. The most famous of these is the continuum hypothesis. Everyone knows — excuse me. That’s wrong. Everyone who would be reading a pop mathematics blog knows there are different-sized infinitely-large sets. And knows that the set of integers is smaller than the set of real numbers. The question is: is there a set bigger than the integers yet smaller than the real numbers? The Continuum Hypothesis says there is not.

Zermelo-Fraenkel Set Theory, even though it’s all about the properties of sets, can’t tell us if the Continuum Hypothesis is true. But that’s all right; it can’t tell us if it’s false, either. Whether the Continuum Hypothesis is true or false stands independent of the rest of the theory. We can assume whichever state is more useful for our work.

Back to the ideals of mathematics. One question that produced the Crisis of Foundations was consistency. How do we know our axioms don’t contain a contradiction? It’s hard to say. Typically a set of axioms we can prove consistent is also a set too boring to do anything useful in. Zermelo-Fraenkel Set Theory, with or without the Axiom of Choice, has a lot of interesting results. Do we know the axioms are consistent?

No, not yet. We know some of the axioms are mutually consistent, at least. And we have some results which, if true, would prove the axioms to be consistent. We don’t know if they’re true. Mathematicians are generally confident that these axioms are consistent. Mostly on the grounds that if there were a problem something would have turned up by now. It’s withstood all the obvious faults. But the universe is vaster than we imagine. We could be wrong.

It’s hard to live up to our ideals. After a generation of valiant struggling we settle into hoping we’re doing good enough. And waiting for some brilliant mind that can get us a bit closer to what we ought to be.

Reading the Comics, March 21, 2020: Pragmatic Calculations Edition


There were a handful of other comic strips last week. If they have a common theme (and I’ll try to drag one out) it’s that they circle around pragmatism. Not just using mathematics in the real world but the fussy stuff of what you can calculate and what you can use a calculation for.

And, again, I am hosting the Playful Math Education Blog Carnival this month. If you’ve run across any online tool that teaches mathematics, or highlights some delightful feature of mathematics? Please, let me know about it here, and let me know what of your own projects I should feature with it. The goal is to share things about mathematics that helped you understand more of it. Even if you think it’s a slight thing (“who cares if you can tell whether a number’s divisible by 11 by counting the digits right?”) don’t worry. Slight things count. Speaking of which …

Jef Mallett’s Frazz for the 20th has a kid ask about one of those add-the-digits divisibility tests. What happens if the number is too big to add up all the digits? In some sense, the question is meaningless. We can imagine finding the sum of digits no matter how many digits there are. At least if there are finitely many digits.
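For the record, here’s the test the kid describes, sketched in Python. This is my own illustration, not anything from the strip; the trick works because a number and its digit sum leave the same remainder on division by 3.

```python
# Sketch of the add-the-digits test for divisibility by 3.

def digit_sum(n):
    """Sum of the decimal digits of a nonnegative integer."""
    return sum(int(d) for d in str(n))

def divisible_by_3(n):
    """Shrink n by repeated digit sums, then check what's left."""
    while n >= 10:
        n = digit_sum(n)
    return n % 3 == 0

n = 987654321987654321
print(divisible_by_3(n), n % 3 == 0)  # True True: the test agrees
```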

But there is a serious mathematical question here. We accept the existence of numbers so big no human being could ever know their precise value. At least, we accept they exist in the same way that “4” exists. If a computation can’t actually be finished, then, does it actually mean anything? And if we can’t figure a way to shorten the calculation, the way we can usually turn the infinitely-long sum of a series into a neat little formula?

Kid: 'A number is divisible by 3 if the sum of its digits is divisible by 3. But what if the number is so big there's too many digits to add up easily?' Frazz: 'If it's that big, the 1 or 2 left over isn't going to matter much.' Kid: 'Why don't they teach THAT kind of math more in school?' Frazz: 'I guess there's only jobs for so many songwriters, cartoonists, and janitors.'
Jef Mallett’s Frazz for the 20th of March, 2020. Essays featuring some topic raised by Frazz should be gathered at this link.

This gets into some cutting-edge mathematics. For calculations, some. But also, importantly, for proofs. A proof is, really, a convincing argument that something is true. The ideal of this is a completely filled-out string of logical deductions. These will take a long while. But, as long as it takes finitely many steps to complete, we normally accept the proof as done. We can imagine proofs that take more steps to complete than could possibly be thought out, or checked, or confirmed. We, living in the days after Gödel, are aware of the idea that there are statements which are true but unprovable. This is not that. Gödel’s Incompleteness Theorems tell us about statements that a deductive system can’t address. This is different. This is things that could be proven true (or false), if only the universe were more vast than it is.

There are logicians who work on the problem of what too-long-for-the-universe proofs can mean. Or even what infinitely long proofs can mean, if we allow those. And how they challenge our ideas of what “proof” and “knowledge” and “truth” are. I am not among these people, though, and can’t tell you what interesting results they have concluded. I just want to let you know the kid in Frazz is asking a question you can get a spot in a mathematics or philosophy department pondering. I mean so far as it’s possible to get a spot in a mathematics or philosophy department.

Speaker at a podium: 'If one person kills someone, 50% of the people involved are victims. If 99 people kill someone, 1% of the people involved are victims. The percent of victims is given by V = the limit of K/x as x approaches infinity, where K is people killed and x is the number of people doing the killing. Thus, for sufficiently large x, murder is a victimless crime. So, the bigger we make a war, the more ethical it becomes!'
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 20th of March, 2020. I have many essays that mention something raised by this comic strip. The many things Saturday Morning Breakfast Cereal has given me to write about are at this link.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 20th is a less heady topic. Its speaker is doing an ethical calculation. These sorts of things are easy to spin into awful conclusions. They treat things like suffering with the same tools that we use to address the rates of fluids mixing, or of video game statistics. This often seems to trivialize suffering, which we feel like we shouldn’t do.

This kind of calculation is often done, though. It’s rather a hallmark of utilitarianism to try writing an equation for an ethical question. It blends often more into economics, where the questions can seem less cruel even if they are still about questions of life and death. But as with any model, what you build into the model directs your results. The lecturer here supposes that guilt is diminished by involving more people. (This seems rather true to human psychology, though it’s likely more that the sense of individual responsibility dissolves in a large enough group. There are many other things at work, though, all complicated and interacting in nonlinear ways.) If we supposed that the important measure was responsibility for the killing, we would get that the more people involved in killing, the worse it is, and that a larger war only gets less and less ethical. (This also seems true to human psychology.)

Mamet: 'I figure I have about 14,000 remaining days of life. So what's the big deal if I want to spend ONE of those days goofing off? That still leaves me with 13,000 days!' Cobb: 'Maybe you could spend a couple of those days learning math.' Mamet: 'Wait, make that 12,000. I'll need one day to PLAN the goof-off day.'
Jeff Corriveau’s Deflocked for the 20th of March, 2020. I’m surprised to learn this is a new tag for me. I’ve discussed the strip, it appears, only twice before, in 2012 and in 2015, before I tagged strips by name. All right. Well, this and future appearances by Deflocked will be at this link.

Jeff Corriveau’s Deflocked for the 20th sees Mamet calculating how many days of life he expects to have left. There are roughly 1,100 days in three years, so, Mamet’s figuring on about 40 years of life. These kinds of calculation are often grim to consider. But we all have long-term plans that we would like to do (retirement, and its needed savings, are an important one) and there’s no making a meaningful plan without an idea of what the goals are.


This finally closes out the last week’s comic strips. Please stop in next week as I get to some more mathematics comics and the Playful Math Education Blog Carnival. Thanks for reading.

Reading the Comics, April 10, 2019: Grand Avenue and Luann Want My Attention Edition


So this past week has been a curious blend for the mathematically-themed comics. There were many comics mentioning some mathematical topic. But that’s because Grand Avenue and Luann Againn — reprints of early 90s Luann comics — have been doing a lot of schoolwork. There’s a certain repetitiveness to saying, “and here we get a silly answer to a story problem” four times over. But we’ll see what I do with the work.

Mark Anderson’s Andertoons for the 7th is Mark Anderson’s Andertoons for the week. Very comforting to see. It’s a geometry-vocabulary joke, with Wavehead noticing the similar ends of some terms. I’m disappointed that I can’t offer much etymological insight. “Vertex”, for example, derives from the Latin for “highest point”, and traces back to the Proto-Indo-European root “wer-”, meaning “to turn, to bend”. “Apex” derives from the Latin for “summit” or “extreme”. And that traces back to the Proto-Indo-European “ap”, meaning “to take, to reach”. Which is all fine, but doesn’t offer much about how both words ended up ending in “ex”. This is where my failure to master Latin by reading a teach-yourself book on the bus during my morning commute for three months back in 2002 comes back to haunt me. There’s probably something that might have helped me in there.

On the blackboard is a square-based pyramid with 'apex' labelled; also a circular cone with 'vertex' labelled. Wavehead: 'And if you put them together they're a duplex.'
Mark Anderson’s Andertoons for the 7th of March, 2019. I write about this strip a lot. Essays mentioning Andertoons are at this link.

Mac King and Bill King’s Magic in a Minute for the 7th is an activity puzzle this time. It’s also a legitimate problem of graph theory. Not a complicated one, but still, one. Graph theory is about sets of points, called vertices, and connections between points, called edges. It gives interesting results for anything that’s networked. That shows up in computers, in roadways, in blood vessels, in the spreads of disease, in maps, in shapes.

Here's a tough little puzzle to get your brain firing on all four cylinders. See if you can connect the matching numbered boxes with three lines. The catch is that the lines cannot cross over each other. From left to right are disjoint boxes labelled 1, 2, 1, and 2. Above and below the center of the row are two boxes labelled 3.
Mac King and Bill King’s Magic in a Minute for the 7th of March, 2019. I should have the essays mentioning Magic In A Minute at this link.

One common problem, found early in studying graph theory, is about whether a graph is planar. That is, can you draw the whole graph, all its vertices and edges, without any edges crossing each other? This graph, with six vertices and three edges, is planar. There are graphs that are not. If the challenge were to connect each number to a 1, a 2, and a 3, then it would be nonplanar. That’s a famous non-planar graph, given the obvious name K3,3. A fun part of learning graph theory — at least fun for me — is looking through pictures of graphs. The goal is finding K3,3, or another one called K5, inside a big messy graph.
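If you would like to see this checked mechanically, here is a sketch using the networkx library. The library choice and the vertex labels are mine, not the puzzle’s.

```python
# Sketch: the puzzle's graph is planar; K3,3 is not.
import networkx as nx

# The puzzle: three pairs of boxes, each pair joined by one line.
puzzle = nx.Graph([("1a", "1b"), ("2a", "2b"), ("3a", "3b")])
print(nx.check_planarity(puzzle)[0])  # True: drawable with no crossings

# Demand every box connect to a 1, a 2, and a 3 instead: that's K3,3.
k33 = nx.complete_bipartite_graph(3, 3)
print(nx.check_planarity(k33)[0])     # False: some edges must cross
```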

Exam Question: Jack bought seven marbles and lost six. How many additional marbles must Jack buy to equal seven? Kid's answer: 'Jack wouldn't know. He's lost his marbles.'
Mike Thompson’s Grand Avenue for the 8th of March, 2019. I’m not always cranky about this comic strip. Examples of when I’m not are at this link, as are the times I’m annoyed with Grand Avenue.

Mike Thompson’s Grand Avenue for the 8th has had a week of story problems featuring both of the kid characters. Here’s the start of them. Making an addition or subtraction problem about counting things is probably a good way of making the problem less abstract. I don’t have children, so I don’t know whether they play marbles or care about them. The most recent time I saw any of my niblings I told them about the subtleties of industrial design in the old-fashioned Western Electric Model 2500 touch-tone telephone. They love me. Also I’m not sure that this question actually tests subtraction more than it tests reading comprehension. But there are teachers who like to throw in the occasional surprisingly easy one. Keeps students on their toes.

Gunther: 'You put a question mark next to the part about using the slope-intercept form of a linear equation. What don't you understand?' Luann: 'Lemme see. Oh ... yeah. I don't understand why on earth I need to know this.'
Greg Evans’s Luann Againn for the 10th of March, 2019. This strip originally ran the 10th of March, 1991. Essays which include some mention of Luann, either current or 1990s reprints, are at this link.

Greg Evans’s Luann Againn for the 10th is part of a sequence showing Gunther helping Luann with her mathematics homework. The story started the day before, but this was the first time a specific mathematical topic was named. The slope-intercept form is a conventional way of writing an equation which corresponds to a particular line. There are many ways to write equations for lines. This is one that’s convenient to use if you know the slope of the line and where it crosses the y-axis. Any coordinates which make the equation true are then the coordinates for some point on the line.
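For reference, here’s the form in symbols, with a made-up example of mine rather than anything from the strip:

```latex
y = mx + b
% m is the slope and b is the y-intercept. For example, the line
% with slope 2 crossing the y-axis at -3:
y = 2x - 3
% The point (4, 5) satisfies 5 = 2 \cdot 4 - 3, so it lies on the line.
```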

How To Survive a Shark Attack (illustrated with a chicken surviving a shark.) Keep your eye on the shark and move slowly toward safety. Don't make any sudden movements such as splashing or jazz hands. If the shark comes at you, punch it in the gills, snout, or eyes. You won't hurt the shark, but it will be surprised by your audacity. If all else fails, try to confuse it with logical paradoxes. Chicken: 'This statement is false.' Shark, wide-eyed and confused: '?'
Doug Savage’s Savage Chickens for the 10th of March, 2019. And when I think of something to write about Savage Chickens the results are at this link.

Doug Savage’s Savage Chickens for the 10th tosses in a line about logical paradoxes. In this case, using a classic problem, the self-referential statement. Working out whether a statement is true or false — its “truth value” — is one of those things we expect logic to be able to do. Some self-referential statements, logical claims about themselves, are troublesome. “This statement is false” was a good one for baffling kids and would-be world-dominating computers in science fiction television up to about 1978. Some self-referential statements seem harmless, though. Nobody expects even the most timid world-dominating computer to be bothered by “this statement is true”. It takes more than just a statement being about itself to create a paradox.


And a last note. The blog hardly needs my push to help it out, but, sometimes people will miss a good thing. Ben Orlin’s Math With Bad Drawings just ran an essay about some of the many mathematics-themed comics that Hilary Price and Rina Piccolo’s Rhymes With Orange has run. The comic is one of my favorites too. Orlin looks through some of the comic’s twenty-plus year history and discusses the different types of mathematical jokes Price (with, in recent years, Piccolo) makes.

Myself, I keep all my Reading the Comics essays at this link, and those mentioning some aspect of Rhymes With Orange at this link.

My 2018 Mathematics A To Z: Sorites Paradox


Today’s topic is the lone (so far) request by bunnydoe, so I’m under pressure to make it decent. If she or anyone else would like to nominate subjects for the letters U through Z, please drop me a note at this post. I keep fooling myself into thinking I’ll get one done in under 1200 words.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble titles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Sorites Paradox.

This is a story which makes a capitalist look kind of good. I say nothing about its truth, or even, at this remove, where I got it. The story as I heard it was about Ray Kroc, who made McDonald’s into a thing people of every land can complain about. The story has him demonstrate skepticism about the use of business consultants. A consultant might find, for example, that each sesame-seed hamburger bun has (say) 43 seeds. And that if they just cut it down to 41 seeds then each franchise would save (say) $50,000 annually. And no customer would notice the difference. Fine; trim the seeds a little. The next round of consultant would point out, cutting from 41 seeds to 38 would save a further $65,000 per store per year. And again no customer would notice the difference. Cut to 36 seeds? No customer would notice. This process would end when each bun had three sesame seeds, and the customers notice.

I mention this not for my love of sesame-seed buns. It’s just a less-common version of the Sorites Paradox. It’s a very old logical problem. We draw it, and its name, from the Ancient Greek philosophers. In the oldest form, it’s about a heap of sand, and which grain of sand’s removal destroys the heap. This form we attribute to Eubulides of Miletus. Eubulides is credited with a fair number of logical paradoxes. One of them we all know, the Liar Paradox, “What I am saying now is a lie”. Another, the Horns Paradox, I hadn’t encountered before researching this essay. But it bids fair to bring me some delight every day of the rest of my life. “What you have not lost, you have. But you have not lost horns. Therefore you have horns.” Eubulides has a bunch of other paradoxes. Some read, to my uninformed eye, like restatements of other paradoxes. Some look ready to be recast as arguments about Lois Lane’s relationship with Superman. Miletus we know because for a good stretch there every interesting philosopher was hanging around Miletus.

Part of the paradox’s intractability must be that it’s so nearly induction. Induction is a fantastic tool for mathematical problems. We couldn’t do without it. But consider the argument. If a bun is unsatisfying, one more seed won’t make it satisfying. A bun with one seed is unsatisfying. Therefore all buns have an unsatisfying number of sesame seeds on them. It suggests there must be some point at which “adding one more seed won’t help” stops being true. Fine; where is that point, and why isn’t it one fewer or one more seed?
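To see just how nearly it is induction, set the two schemes side by side. This is my paraphrase, with U(n) standing for “a bun with n seeds is unsatisfying”:

```latex
% Mathematical induction:
P(1) \wedge \forall n\,\bigl(P(n) \rightarrow P(n+1)\bigr) \;\Rightarrow\; \forall n\,P(n)

% The sorites argument has the same shape:
U(1) \wedge \forall n\,\bigl(U(n) \rightarrow U(n+1)\bigr) \;\Rightarrow\; \forall n\,U(n)
```

The shape is identical; the trouble is that here the premises look true while the conclusion looks false.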

A certain kind of nerd has a snappy answer for the Sorites Paradox. Test a broad population on a variety of sesame-seed buns. There’ll be some so sparse that nearly everyone will say they’re unsatisfying. There’ll be some so abundant most everyone agrees they’re great. So there’s the buns most everyone says are fine. There’s the buns most everyone says are not. The dividing line is at any point between the sparsest that satisfy most people and the most abundant that don’t. The nerds then declare the problem solved and go off. Let them go. We were lucky to get as much of their time as we did. They’re quite busy solving what “really” happened for Rashomon. The approach of “set a line somewhere” is fine if all we want is guidance on where to draw a line. It doesn’t help say why we can anoint some border over any other. At least when we use a river as border between states we can agree going into the water disrupts what we were doing with the land. And even then we have to ask what happens during droughts and floods, and if the river is an estuary, how tides affect matters.

We might see an answer by thinking more seriously about these sesame-seed buns. We force a problem by declaring that every bun is either satisfying or it is not. We can imagine buns with enough seeds that we don’t feel cheated by them, but that we also don’t feel satisfied by. This reflects one of the common assumptions of logic. Mathematicians know it as the Law of the Excluded Middle. A thing is true or it is not true. There is no middle case. This is fine for logic. But for everyday words?

It doesn’t work when considering sesame-seed buns. I can imagine a bun that is not satisfying, but also is not unsatisfying. Surely we can make some logical provision for the concept of “meh”. Now we need not draw some arbitrary line between “satisfying” and “unsatisfying”. We must draw two lines, one of them between “unsatisfying” and “meh”. There is a potential here for regression. Also for the thought of a bun that’s “satisfying-meh-satisfying by unsatisfying”. I shall step away from this concept.

But there are more subtle ways to not exclude the middle. For example, we might decide a statement’s truth exists on a spectrum. We can match how true a statement is to a number. Suppose an obvious falsehood is zero; an unimpeachable truth is one, and normal mortal statements somewhere in the middle. “This bun with a single sesame seed is satisfying” might have a truth of 0.01. This perhaps reflects the tastes of people who say they want sesame seeds but don’t actually care. “This bun with fifteen sesame seeds is satisfying” might have a truth of 0.25, say. “This bun with forty sesame seeds is satisfying” might have a truth of 0.97. (It’s true for everyone except those who remember the flush times of the 43-seed bun.) This seems to capture the idea that nothing is always wholly anything. But we can still step into absurdity. Suppose “this bun with 23 sesame seeds is satisfying” has a truth of 0.50. Then “this bun with 23 sesame seeds is not satisfying” should also have a truth of 0.50. What do we make of the statement “this bun with 23 sesame seeds is simultaneously satisfying and not satisfying”? Do we make something different to “this bun with 23 sesame seeds is simultaneously satisfying and satisfying”?
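Here’s that spectrum-of-truth idea as a little Python sketch, using the numbers made up above. The scoring rules are the standard fuzzy-logic ones, minimum for “and” and one-minus for “not”, and they are exactly where the absurdity at the end comes from.

```python
# Toy fuzzy truth values for "this bun with n seeds is satisfying".
# The particular numbers are the ones invented in the essay.

satisfying = {1: 0.01, 15: 0.25, 23: 0.50, 40: 0.97}

def fuzzy_not(t):
    return 1.0 - t

def fuzzy_and(t1, t2):
    return min(t1, t2)

t = satisfying[23]
print(fuzzy_and(t, fuzzy_not(t)))  # 0.5: "satisfying and not satisfying"
print(fuzzy_and(t, t))             # 0.5: "satisfying and satisfying" -- same!
```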

I see you getting tired in the back there. This may seem like word games. And we all know that human words are imprecise concepts. What has this to do with logic, or mathematics, or anything but the philosophy of language? And the first answer is that we understand logic and mathematics through language. When learning mathematics we get presented with definitions that seem absolute and indisputable. We start to see the human influence in mathematics when we ask why 1 is not a prime number. Later we see things like arguments about whether a ring needs to have a multiplicative identity. And then there are more esoteric debates about the bounds of mathematical concepts.

Perhaps we can think of a concept we can’t describe in words. If we don’t express it to other people, the concept dies with us. We need words. No, putting it in symbols does not help. Mathematical symbols may look like slightly alien scrawl. But they are shorthand for words, and can be read as sentences, and there is this fuzziness in all of them.

And we find mathematical properties that share this problem. Consider: what is the color of the chemical element flerovium? Before you say I just made that up, flerovium was first synthesized in 1998, and officially named in 2012. We’d guess that it’s a silvery-white or maybe grey metallic thing. Humanity has only ever observed about ninety atoms of the stuff. It’s, for atoms this big, amazingly stable. We know an isotope of it that has a half-life of two and a half seconds. But it’s hard to believe we’ll ever have enough of the stuff to look at it and say what color it is.

That’s … all right, though? Maybe? Because we know the quantum mechanics that seem to describe how atoms form. And how they should pack together. And how light should be absorbed, and how light should be emitted, and how light should be scattered by it. At least in principle. The exact answers might be beyond us. But we can imagine having a solution, at least in principle. We can imagine the computer that after great diligent work gives us a picture of what a ten-ton lump of flerovium would look like.

So where does its color come from? Or any of the other properties that these atoms have as a group? No one atom has a color. No one atom has a density, either, or a viscosity. No one atom has a temperature, or a surface tension, or a boiling point. In combination, though, they have.

These are known to statistical mechanics, and through that thermodynamics, as intensive properties. If we have a partition function, which describes all the ways a system can be organized, we can extract information about these properties. They turn up as derivatives with respect to the right parameters of the system.
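For one standard example, not special to this essay: the pressure of a gas, an intensive property, comes out of the partition function Z by differentiating with respect to volume,

```latex
P = k_B T \left( \frac{\partial \ln Z}{\partial V} \right)_{T, N}
```

where k_B is Boltzmann’s constant and the subscripts say the temperature and particle number stay fixed while we differentiate.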

But the same problem exists. Take a homogeneous gas. It has some temperature. Divide it into two equal portions. Both sides have the same temperature. Divide each half into two equal portions again. All four pieces have the same temperature. Divide again, and again, and a few more times. You eventually get containers with so little gas in them they don’t have a temperature. Where did it go? When did it disappear?

The counterpart to an intensive property is an extensive one. This is stuff like the mass or the volume or the energy of a thing. Cut the gas’s container in two, and each has half the volume. Cut it in half again, and each of the four containers has one-quarter the volume. Keep this up and you stay in uncontroversial territory, because I am not discussing Zeno’s Paradoxes here.

And like Zeno’s Paradoxes, the Sorites Paradox can seem at first trivial. We can distinguish a heap from a non-heap; who cares where the dividing line is? Or whether the division is a gradual change? It seems easy. To show why it is easy is hard. Each potential answer is interesting, and plausible, and when you think hard enough of it, not quite satisfying. Good material to think about.


I hope to find some material to think about for the letter ‘T’ and have it published Friday. It’ll be available at this link, as are the rest of these glossary posts.

Someone Else’s Homework: Was It Hard? An Umbrella Search


I wanted to follow up, at last, on this homework problem a friend had.

The question: suppose you have a function f. Its domain is the integers Z. Its range is also the integers Z. You know two things about the function. First, for any two integers ‘a’ and ‘b’, you know that f(a + b) equals f(a) + f(b). Second, you know there is some odd number ‘c’ for which f(c) is even. The challenge: prove that f is even for all the integers.
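As an aside, you can watch the conclusion happen in a little Python sketch. This is my own illustration, not part of the homework; it leans on the fact that any function with the f(a + b) = f(a) + f(b) property on the integers has to be f(n) = n times f(1).

```python
# Sketch: sample additive functions f(n) = m*n and check the conclusion.
# The names and ranges here are my own choices for illustration.

def make_additive(m):
    """Every additive f: Z -> Z is f(n) = m * n, where m = f(1)."""
    return lambda n: m * n

for m in range(-6, 7):
    f = make_additive(m)
    # Hypothesis: f(c) is even for some odd c.
    if any(f(c) % 2 == 0 for c in range(-9, 10, 2)):
        # Conclusion: f(n) is even for every integer n.
        assert all(f(n) % 2 == 0 for n in range(-50, 51))
print("the conclusion held for every sampled f")
```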

My friend asked, as we were working out the question, “Is this hard?” And I wasn’t sure what to say. I didn’t think it was hard, but I understand why someone would. If you’re used to mathematics problems like showing that all the roots of this polynomial are positive, then this stuff about f being even is weird. It’s a different way of thinking about problems. I’ve got experience in that thinking that my friend hasn’t.

All right, but then, what thinking? What did I see that my friend didn’t? And I’m not sure I can answer that perfectly. Part of gaining mastery of a subject is pattern recognition. Spotting how some things fit a form, while other stuff doesn’t, and some other bits yet are irrelevant. But also part of gaining that mastery is that it becomes hard to notice that’s what you’re doing.

But I can try to look with fresh eyes. There is a custom in writing this sort of problem, and that drove much of my thinking. The custom is that a mathematics problem, at this level, works by the rules of a Minute Mystery Puzzle. You are given in the setup everything that you need to solve the problem, yes. But you’re also not given stuff that you don’t need. If the detective mentions to the butler how dreary the rain is on arriving, you’re getting the tip to suspect the houseguest whose umbrella is unaccounted for.

(This format is almost unavoidable for teaching mathematics. At least it seems unavoidable given the number of problems that don’t avoid it. This can be treacherous. One of the hardest parts in stepping out to research on one’s own is that there’s nobody to tell you what the essential pieces are. Telling apart the necessary, the convenient, and the irrelevant requires expertise and I’m not sure that I know how to teach it.)

The first unaccounted-for umbrella in this problem is the function’s domain and range. They’re integers. Why wouldn’t the range, particularly, be all the real numbers? What things are true about the integers that aren’t true about the real numbers? There’s a bunch of things. The highest-level things are rooted in topology. There’s gaps between one integer and its nearest neighbor. Oh, and an integer has a nearest neighbor. A real number doesn’t. That matters for approximations and for sequences and series. Not likely to matter here. Look to more basic, obvious stuff: there’s even and odd numbers. And the problem talks about knowing something for an odd number in the domain. This is a signal to look at odds and evens for the answer.

The second unaccounted-for umbrella is the most specific thing we learn about the function. There is some odd number ‘c’, and the function matches that integer ‘c’ in the domain to some even number f(c) in the range. This makes me think: what do I know about ‘c’? Most basic thing about any odd number is it’s some even number plus one. And that made me think: can I conclude anything about f(1)? Can I conclude anything about f at the sum of two numbers?

Third unaccounted-for umbrella. The less-specific thing we learn about the function. That is that for any integers ‘a’ and ‘b’, f(a + b) is f(a) + f(b). So see how this interacts with the second umbrella. f(c) is f(some-even-number) + f(1). Do I know anything about f(some-even-number)?

Sure. If I know anything about even numbers, it’s that any even number equals two times some integer. Let me call that some-integer ‘k’. Since some-even-number equals 2*k, then, f(some-even-number) is f(2*k), which is f(k + k). And by the third umbrella, that’s f(k) + f(k). By the first umbrella, f(k) has to be an integer. So f(k) + f(k) has to be even.

So, f(c) is an even number. And it has to equal f(2*k) + f(1). f(2*k) is even; so, f(1) has to be even. These are the things that leapt out to me about the problem. This is why the problem looked, to me, easy.

Because I knew that f(1) was even, I knew that f(1 + 1), or f(2), was even. And so would be f(2 + 1), that is, f(3). And so on, for at least all the positive integers.

Now, after that, in my first version of this proof, I got hung up on what seems like a very fussy technical point. And that was, what about f(0)? What about the negative integers? f(0) is easy enough to show. It follows from one of those tricks mathematics majors are told about early. Somewhere in grad school they start to believe it. And that is: adding zero doesn’t change a number’s value, but it can give you a more useful way to express that number. Here’s how adding zero helps: we know c = c + 0. So f(c) = f(c) + f(0) and whether f(c) is even or odd, f(0) has to be even. Evens and odds don’t work any other way.

After that my proof got hung up on what may seem like a pretty fussy technical point. That amounted to whether f(-1) was even or odd. I discussed this with a couple people who could not see what my issue with this was. I admit I wasn’t sure myself. I think I’ve narrowed it down to this: my questioning whether it’s known that the number “negative one” is the same thing as what we get from the operation “zero minus one”. I mean, in general, this isn’t much questioned. Not for the last couple centuries.

You might be having trouble even figuring out why I might worry there could be a difference. In “0 – 1” the – sign there is a binary operation, meaning, “subtract the number on the right from the number on the left”. In “-1” the – sign there is a unary operation, meaning, “take the additive inverse of the number on the right”. These are two different – signs that look alike. One of them interacts with two numbers. One of them interacts with a single number. How can they mean the same thing?

With some ordinary assumptions about what we mean by “addition” and “subtraction” and “equals” and “zero” and “numbers” and stuff, the difference doesn’t matter much. We can swap between “-1” and “0 – 1” effortlessly. If we couldn’t, we probably wouldn’t use the same symbol for the two ideas. But in the context of this particular question, could we count on that?

My friend wasn’t confident in understanding what the heck I was getting on about. Fair enough. But some part of me felt like that needed to be shown. If it hadn’t been recently shown, or used, in class, then it had to go into this proof. And that’s why I went, in the first essay, into the bit about additive inverses.

This was me over-thinking the problem. I got to looking at umbrellas that likely were accounted for.

My second proof, the one thought up in the shower, uses the same unaccounted-for umbrellas. In the first proof, the second unaccounted-for umbrella seemed particularly important. Knowing that f(c) was even, what else could I learn? In the second proof, it’s the third unaccounted-for umbrella that seemed key. Knowing that f(a + b) is f(a) + f(b), what could I learn? That right away tells me that for any even number ‘d’, f(d) must be even.

Call this the fourth unaccounted-for umbrella. Every integer is either even or odd. So right away I could prove this for what I really want to say is half of the integers. Don’t call it that. There’s not a coherent way to say the even integers are any fraction of all the integers. There’s exactly as many even integers as there are integers. But you know what I mean. (What I mean is, in any finite interval of consecutive integers, half are going to be even. Well, there’ll be at most one more odd integer than there are even integers. That’ll be close enough to half if the interval is long enough. And if we pretend we can make bigger and bigger intervals until all the integers are covered … yeah. Don’t poke at that and do not use it at your thesis defense because it doesn’t work. That’s what it feels like ought to work.)

But that I could cover the even integers in the domain with one quick sentence was a hint. The hint was, look for some thing similar that would cover the odd integers in the domain. And hey, that second unaccounted-for umbrella said something about one odd integer in the domain. Add to that one of those boring little things that a mathematician knows about odd numbers: the difference between any two odd numbers is an even number. ‘c’ is an odd number. So any odd number in the domain, let’s call it ‘d’, is equal to ‘c’ plus some even number. And f(some-even-number) has to be even and there we go.

So all this is what I see when I look at the question. And why I see those things, and why I say this is not a hard problem. It’s all in spotting these umbrellas.

Someone Else’s Homework: Some More Thoughts


I wanted to get back to my friend’s homework problem. And a question my friend had about the question. It’s a question I figure is good for another essay.

But I also had second thoughts about the answer I gave. Not that it’s wrong, but that it could be better. Also that I’m not doing as well in spelling “range” as I had always assumed I would. This is what happens when I don’t run an essay through Hemingway App to check whether my sentences are too convoluted. I also catch smaller word glitches.

Let me re-state the problem: Suppose you have a function f, with domain of the integers Z and range of the integers Z. And also you know that f has the property that for any two integers ‘a’ and ‘b’, f(a + b) equals f(a) + f(b). And finally, suppose that for some odd number ‘c’, you know that f(c) is even. The challenge: prove that f is even for all the integers.

Like I say, the answer I gave on Tuesday is right. That’s fine. I just thought of a better answer. This often happens. There are very few interesting mathematical truths that only have a single proof. The ones that have only a single proof are on the cutting edge, new mathematics in a context we don’t understand well enough yet. (Yes, I am overlooking the obvious exception of ______ .) But a question so well-chewed-over that it’s fit for undergraduate homework? There’s probably dozens of ways to attack that problem.

And yes, you might only see one proof of something. Sometimes there’s an approach that works so well it’s silly to consider alternatives. Or the problem isn’t big enough to need several different proofs. There’s something to regret in that. Re-thinking an argument can make it better. As instructors we might recommend rewriting an assignment before turning it in. But I’m not sure that encourages re-thinking the assignment. It’s too easy to just copy-edit and catch obvious mistakes. Which is valuable, yes. But it’s good for communication, not for the mathematics itself.

So here’s my revised argument. It’s much cleaner, as I realized it while showering Wednesday morning.

Give me an integer. Let’s call it m. Well, m has to be either an even or an odd number. I’m supposing nothing about whether it’s positive or negative, by the way. This means what I show will work whether m is greater than, less than, or equal to zero.

Suppose that m is an even number. Then m has to equal 2*k for some integer k. (And yeah, k might be positive, might be negative, might be zero. Don’t know. Don’t care.) That is, m has to equal k + k. So f(m) = f(k) + f(k). That’s one of the two things we know about the function f. And f(k) + f(k) is 2 * f(k). And f(k) is an integer (the integers are the function’s range). So 2 * f(k) is an even integer. So if m is an even number then f(m) has to be even.

All right. Suppose that m isn’t an even integer. Then it’s got to be an odd integer. So this means m has to be equal to c plus some even number, which I’m going ahead and calling 2*k. Remember c? We were given information about f for that element c in the domain. And again, k might be positive. Might be negative. Might be zero. Don’t know, and don’t need to know. So since m = c + 2*k, we know that f(m) = f(c) + f(2*k). And the other thing we know about f is that f(c) is even. f(2*k) is also even. f(c), which is even, plus f(2*k), which is even, has to be even. So if m is an odd number, then f(m) has to be even.

And so, as long as m is an integer, f(m) is even.
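In symbols, my own condensation of those two cases:

```latex
% Even case, m = 2k:
f(m) = f(k + k) = f(k) + f(k) = 2\,f(k), \text{ which is even.}

% Odd case, m = c + 2k:
f(m) = f(c) + f(2k) = f(c) + 2\,f(k), \text{ even plus even, so even.}
```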

You see why I like that argument better. It’s shorter. It breaks things up into fewer cases. None of those cases have to worry about whether m is positive or negative or zero. Each of the cases is short, and moves straight to its goal. This is the proof I’d be happy submitting. Today, anyway. No telling what tomorrow will make me think.

Someone Else’s Homework: A Solution


I have a friend who’s been taking mathematical logic. While talking over the past week’s work they mentioned a problem that had stumped them. But they’d figured it out — at least the critical part — about a half-hour after turning it in. And I had fun going over it. Since the assignment’s already turned in and I don’t even know which class it was, I’d like to share it with you.

So here’s the problem. Suppose you have a function f, with domain of the integers Z and range of the integers Z. And also you know that f has the property that for any two integers ‘a’ and ‘b’, f(a + b) equals f(a) + f(b). And finally, suppose that for some odd number ‘c’, you know that f(c) is even. The challenge: prove that f is even for all the integers.

If you want to take a moment to think about that, please do.

A Californian rabbit (white body, grey ears and nose and paws) eating a pile of vegetables. In the background is the sunlit outside in the window, with a small rabbit statue silhouetted behind the rabbit's back.
So you can ponder without spoilers here’s a picture of the rabbit we’re fostering for the month, who’s having lunch. The silhouette behind her back is of a little statue decoration and not some outsider trying to lure our foster rabbit to freedom outside, so far as we know. (Don’t set domesticated rabbits outside. It won’t go well for them. And domesticated rabbits aren’t native to North America, I mention for the majority of my readers who are.)

So here’s my thinking about this.

First thing I want to do is show that f(1) is an even number. How? Well, if ‘c’ is an odd number, then ‘c’ has to equal ‘2*k + 1’ for some integer ‘k’. So f(c) = f(2*k + 1). And therefore f(c) = f(2*k) + f(1). And, since 2*k is equal to k + k, then f(2*k) has to equal f(k) + f(k). Therefore f(c) = 2*f(k) + f(1). Whatever f(k) is, 2*f(k) has to be an even number. And we’re given f(c) is even. Therefore f(1) has to be even.

Now I can prove that if ‘k’ is any positive integer, then f(k) has to be even. Why? Because ‘k’ is equal to 1 + 1 + 1 + … + 1. And so f(k) has to equal f(1) + f(1) + f(1) + … + f(1). That is, it’s k * f(1). And if f(1) is even then so is k * f(1). So that covers the positive integers.
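That ‘f(k) is k * f(1)’ pattern can be sanity-checked by brute force, too. A sketch, over a tiny domain and codomain of my own choosing: among all functions from {-2, …, 2} into {-4, …, 4}, the only ones respecting f(a + b) = f(a) + f(b) are the linear ones.

```python
# Brute force: enumerate every function on a tiny integer domain and
# keep those satisfying f(a + b) = f(a) + f(b) whenever a, b, and
# a + b all lie in the domain. All the survivors turn out linear.
from itertools import product

domain = list(range(-2, 3))
additive = []
for values in product(range(-4, 5), repeat=len(domain)):
    f = dict(zip(domain, values))
    if all(f[a + b] == f[a] + f[b]
           for a in domain for b in domain if a + b in f):
        additive.append(f)

assert all(f[n] == n * f[1] for f in additive for n in domain)
print(f"found {len(additive)} additive functions; all are f(n) = n * f(1)")
```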

How about zero? Can I show that f(0) is even? Oh, sure, easy. Start with ‘c’. ‘c’ equals ‘c + 0’. So f(c) = f(c) + f(0). The only way that’s going to be true is if f(0) is equal to zero, which is an even number.

By the way, here’s an alternate way of arguing this: 0 = 0 + 0. So f(0) = f(0) + f(0). And therefore f(0) = 2 * f(0) and that’s an even number. Incidentally also zero. Submit the proof you like.

What’s not covered yet? Negative integers. It’s hard not to figure, well, we know f(1) is even, we know f(a + b) is f(a) + f(b). Shouldn’t, like, f(-2) just be -2 * f(1)? Oh, it so should. I don’t feel like we have that already proven, though. So let me nail that down. I’m going to use what we know about f(k) for positive ‘k’, and the fact that f(0) is 0.

So give me any negative integer; I’m going to call it ‘-k’. Its additive inverse is ‘k’, which is a positive number. -k + k = 0. And so f(-k + k) = f(-k) + f(k) = f(0). So, f(-k) + f(k) = 0, and f(-k) = -f(k). If f(k) is even — and it is — then f(-k) is also even.

So there we go: whether ‘k’ is a positive, zero, or negative integer, f(k) is even. All the integers are either positive, zero, or negative. So f(k) is even for every integer k.

I’ve got some more thoughts about this problem.

There’s Technically Still Time To Buy A Theorem For Valentine’s Day


While I have not used the service myself, it does appear that Theory Mine is still going. It’s automated theorem-creating software. For a price, they’ll create a theorem and name it whatever you choose, within reason. It will almost certainly not be an interesting theorem, nor one that anyone will ever care about. But it’ll be far more legitimate than, like, naming a star after someone, which has always been an outright scam.

I discovered this last year, and wrote a bit about how this sort of thing can work. (I’m not certain this is precisely how Theory Mine works, but I am confident that it’s something along these lines.) Also a bit about the history of this sort of system and how it’s come about. And as I say, I haven’t used the service myself. It may sound like bragging, but I’ve created my own theorems. They’re not the monumental triumph of intellect and explanatory power that attaches to theories in science. They can be much more petty things, like “when can we expect the sum of the roots of a quadratic polynomial to be greater than zero”. A theorem you make for your own project will be a little more interesting than a completely auto-generated one like this. After all, your own theorem will at least answer something you wanted to know. A computer-generated one doesn’t even promise that. But it does take less effort to send a bit of money off and get a proof mailed back to you.

Reading the Comics, June 17, 2017: Icons Of Mathematics Edition


Comic Strip Master Command just barely missed being busy enough for me to split the week’s edition. Fine for them, I suppose, although it means I’m going to have to scramble together something for the Tuesday or the Thursday posting slot. Ah well. As befits the comics, there’s a fair bit of mathematics as an icon in the past week’s selections. So let’s discuss.

Mark Anderson’s Andertoons for the 11th is our Mark Anderson’s Andertoons for this essay. Kind of a relief to have that in right away. And while the cartoon shows a real disaster of a student at the chalkboard, there is some truth to the caption. Ruling out plausible-looking wrong answers is progress, usually. So is coming up with plausible-looking answers to work out whether they’re right or wrong. The troubling part here, I’d say, is that the kid came up with pretty poor guesses about what the answer might be. He ought to be able to guess that it’s got to be an odd number, and has to be less than 10, and really ought to be less than 7. If you spot that then you can’t make more than two wrong guesses.

Patrick J Marrin’s Francis for the 12th starts with what sounds like a logical paradox, about whether the Pope could make an infallibly true statement that he was not infallible. Really it sounds like a bit of nonsense. But the limits of what we can know about a logical system will often involve questions of this form. We ask whether something can prove whether it is provable, for example, and come up with a rigorous answer. So that’s the mathematical content which justifies my including this strip here.

Border Collies are, as we know, highly intelligent. The dogs are gathered around a chalkboard full of mathematics. 'I've checked my calculations three times. Even if master's firm and calm and behaves like an alpha male, we *should* be able to whip him.'
Niklas Eriksson’s Carpe Diem for the 13th of June, 2017. Yes, yes, it’s easy to get people excited for the Revolution, but it’ll come to a halt when someone asks about how they get the groceries afterwards.

Niklas Eriksson’s Carpe Diem for the 13th is a traditional use of the blackboard full of mathematics as symbolic of intelligence. Of course ‘E = mc^2’ gets in there. I’m surprised that both π and 3.14 do, too, for as little as we see on the board.

Mark Anderson’s Andertoons for the 14th is a nice bit of reassurance. Maybe the cartoonist was worried this would be a split-week edition. The kid seems to be the same one as the 11th, but the teacher looks different. Anyway there’s a lot you can tell about shapes from their perimeter alone. The one which most startles me comes up in calculus: by doing the right calculation about the lengths and directions of the edge of a shape you can tell how much area is inside the shape. There’s a lot of stuff in this field — multivariable calculus — that’s about swapping between “stuff you know about the boundary of a shape” and “stuff you know about the interior of the shape”. And finding area from tracing the boundary is one of them. It’s still glorious.
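That area-from-the-boundary business has a concrete baby case: the shoelace formula for polygons, a discrete cousin of Green’s theorem. A sketch, with a function name of my own invention:

```python
# Shoelace formula: the area of a simple polygon from nothing but its
# boundary, i.e. its vertices taken in traversal order.
def shoelace_area(vertices):
    n = len(vertices)
    twice_area = sum(vertices[i][0] * vertices[(i + 1) % n][1]
                     - vertices[(i + 1) % n][0] * vertices[i][1]
                     for i in range(n))
    return abs(twice_area) / 2

# A 3-by-2 rectangle, described only by walking its corners: area 6.
print(shoelace_area([(0, 0), (3, 0), (3, 2), (0, 2)]))  # 6.0
```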

Samson’s Dark Side Of The Horse for the 14th is a counting-sheep joke and a Pi Day joke. I suspect the digits of π would be horrible for lulling one to sleep, though. They lack the just-enough-order that something needs for a semiconscious mind to drift off. Horace would probably be better off working out Collatz sequences.

Dana Simpson’s Phoebe and her Unicorn for the 14th mentions mathematics as iconic of what you do at school. Book reports also make the cut.

Dr Zarkov: 'Flash, this is Professor Quita, the inventor of the ... ' Prof Quita: 'Caramba! NO! I am a mere mathematician! With numbers, equations, paper, pencil, I work ... it is my good amigo, Dr Zarkov, who takes my theories and builds ... THAT!!' He points to a bigger TV screen.
Dan Barry’s Flash Gordon for the 31st of July, 1962, rerun the 16th of June, 2017. I am impressed that Dr Zarkov can make a TV set capable of viewing alternate universes. I still literally do not know how it is possible that we have sound for our new TV set, and I labelled and connected every single wire in the thing. Oh, wouldn’t it be a kick if Dr Zarkov has the picture from one alternate universe but the sound from a slightly different other one?

Dan Barry’s Flash Gordon for the 31st of July, 1962, and rerun the 16th, I’m including just because I love the old-fashioned image of a mathematician in Professor Quita here. At this point in the comic strip’s run it was set in the far-distant future year of 1972, and the action here is on one of the busy multinational giant space stations. Flash himself is just back from Venus, where he’d set up some dolphins as assistants to a fish-farming operation helping to feed that world and ours. And for all that early-60s futurism, look at that gorgeous old adding machine he’s still got. (Professor Quita’s discovery is a way to peer into alternate universes, according to the next day’s strip. I’m kind of hoping this means they’re going to spend a week reading Buck Rogers.)

Reading the Comics, June 3, 2017: Feast Week Conclusion Edition


And now finally I can close out last week’s many mathematically-themed comic strips. I had hoped to post this Thursday, but the Why Stuff Can Orbit supplemental took up my writing energies and, eventually, its timeslot. This also ends up being the first time I’ve had one of Joe Martin’s comic strips since the Houston Chronicle ended its comics pages and I admit I’m not sure how I’m going to work this. I’m also not perfectly sure what the comic strip means.

So Joe Martin’s Mister Boffo for the 1st of June seems to be about a disastrous mathematics exam, with a kid doing badly enough that he hasn’t even got numbers to express the score exactly. Also I’m not sure there is a way to link to the strip I mean exactly; the archives for Martin’s strips are not … organized the way I would have done. Well, they’re his business.

A Time To Worry: '[Our son] says he got a one-de-two-three-z on the math test.'
So Joe Martin’s Mister Boffo for the 1st of June, 2017. The link is probably worthless, since I can’t figure out how to work its archives. Good luck yourselves with it.

Greg Evans’s Luann Againn for the 1st reruns the strip from the 1st of June, 1989. It’s your standard resisting-the-word-problem joke. On first reading the strip I didn’t get what the problem was asking for, and supposed that the text had garbled the problem, if there were an original problem. That was my sloppiness, is all; it’s a perfectly solvable question once you actually read it.

J C Duffy’s Lug Nuts for the 1st — another day that threatened to be a Reading the Comics post all on its own — is a straggler Pi Day joke. It’s just some Dadaist clowning about.

Doug Bratton’s Pop Culture Shock Therapy for the 1st is a wordplay joke that uses word problems as emblematic of mathematics. I’m okay with that; much of the mathematics that people actually want to do amounts to extracting from a situation the things that are relevant and forming an equation based on that. This is what a word problem is supposed to teach us to do.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 1st — maybe I should have done a Reading the Comics for that day alone — riffs on the idle speculation that God would be a mathematician. It does this by showing a God uninterested in two logical problems. The first is the question of whether there’s an odd perfect number. Perfect numbers are these things that haunt number theory. (Everything haunts number theory.) It starts with idly noticing what happens if you pick a number, find the numbers that divide into it, and add those up. For example, 4 can be divided by 1 and 2; those add to 3. 5 can only be divided by 1; that adds to 1. 6 can be divided by 1, 2, and 3; those add to 6. For a perfect number the divisors add up to the original number. Perfect numbers look rare; for a thousand years or so only four of them (6, 28, 496, and 8128) were known to exist.

All the perfect numbers we know of are even. More, they’re all numbers that can be written as the product 2^{p - 1} \cdot \left(2^p - 1\right) for certain prime numbers ‘p’. (They’re the ones for which 2^p - 1 is itself a prime number.) What we don’t know, and haven’t got a hint about proving, is whether there are any odd perfect numbers. We know some things about odd perfect numbers, if they exist, the most notable of them being that they’ve got to be incredibly huge numbers, much larger than a googol, the standard idea of an incredibly huge number. Presumably an omniscient God would be able to tell whether there were an odd perfect number, or at least would be able to care whether there were. (It’s also not known if there are infinitely many perfect numbers, by the way. This reminds us that number theory is pretty much nothing but a bunch of easy-to-state problems that we can’t solve.)

Some miscellaneous other things we know about an odd perfect number, other than whether any exist: if there are odd perfect numbers, they’re not divisible by 105. They’re equal to one more than a whole multiple of 12. They’re also 117 more than a whole multiple of 468, and they’re 81 more than a whole multiple of 324. They’ve got to have at least 101 prime factors, and there have to be at least ten distinct prime factors. There have to be at least twelve distinct prime factors if 3 isn’t a factor of the odd perfect number. If this seems like a screwy list of things to know about a thing we don’t even know exists, then welcome to number theory.
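Back to the even perfect numbers, the ones we can actually exhibit. Here’s a quick brute-force sketch that recovers the four known to antiquity and checks the 2^{p - 1} \cdot \left(2^p - 1\right) form against them:

```python
# Find perfect numbers below 10,000 by summing proper divisors, then
# confirm each matches 2^(p-1) * (2^p - 1) with 2^p - 1 prime.
def divisor_sum(n):
    """Sum of the proper divisors of n, found in pairs up to sqrt(n)."""
    total = 1  # 1 divides everything; n itself doesn't count
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d + (n // d if n // d != d else 0)
        d += 1
    return total

perfect = [n for n in range(2, 10000) if divisor_sum(n) == n]
print(perfect)  # [6, 28, 496, 8128]
for p in (2, 3, 5, 7):  # primes p for which 2^p - 1 is also prime
    assert 2 ** (p - 1) * (2 ** p - 1) in perfect
```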

The beard question I believe is a reference to the logician’s paradox. This is the one postulating a village in which the village barber shaves all, but only, the people who do not shave themselves. Given that, who shaves the barber? It’s an old joke, but if you take it seriously you learn something about the limits of what a system of logic can tell you about itself.

Tiger: 'I've got two plus four hours of homework. I won't be finished until ten minus three o'clock, or maybe even six plus one and a half o'clock.' Punkin: 'What subject?' Tiger: 'Arithmetic, stupid!'
Bud Blake’s Tiger rerun for the 2nd of June, 2017. Bonus arithmetic problem: what’s the latest time that this could be? Also, don’t you like how the dog’s tail spills over the panel borders twice? I do.

Bud Blake’s Tiger rerun for the 2nd has Tiger’s arithmetic homework spill out into real life. This happens sometimes.

Officer Pupp: 'That Mouse is most sure an oaf of awful dumbness, Mrs Kwakk Wakk - y'know that?' Mrs Kwakk Wakk: 'By what means do you find proof of this, Officer Pupp?' 'His sense of speed is insipid - he doesn't seem to know that if I ran 60 miles an hour, and he only 40, that I would eventually catch up to him.' 'No-' 'Yes- I tell you- yes.' 'He seemed to know that a brick going 60 would catch up to a kat going 40.' 'Oh, he did, did he?' 'Why, yes.'
George Herriman’s Krazy Kat for the 10th of July, 1939 and rerun the 2nd of June, 2017. I realize that by contemporary standards this is a very talky comic strip. But read Officer Pupp’s dialogue, particularly in the second panel. It just flows with a wonderful archness.

George Herriman’s Krazy Kat for the 10th of July, 1939 was rerun the 2nd of June. I’m not sure that it properly fits here, but the talk about Officer Pupp running at 60 miles per hour and Ignatz Mouse running forty and whether Pupp will catch Mouse sure reads like a word problem. Later strips in the sequence, including the ways that a tossed brick could hit someone who’d be running faster than it, did not change my mind about this. Plus I like Krazy Kat so I’ll take a flimsy excuse to feature it.

Reading the Comics, April 22, 2017: Thought There’d Be Some More Last Week Edition


Allison Barrows’s PreTeena rerun for the 18th is a classic syllogism put into the comic strip’s terms. The thing about these sorts of deductive-logic syllogisms is that whether the argument is valid depends only on the shape of the argument. It has nothing to do with whether the thing being discussed makes any sense. This can be disorienting. It’s hard to ignore the everyday meaning of words when you hear a string of sentences. But it’s also hard to parse a string of sentences if the words don’t make sense in them. This is probably part of why on the mathematics side of things logic courses will skimp on syllogisms, using them to give an antique flavor and sense of style to the introduction of courses. It’s easier to use symbolic representations for logic instead.

Randy Glasbergen’s Glasbergen Cartoons rerun for the 20th is the old joke about arithmetic being different between school, government, and corporate work. I haven’t looked at the comments — the GoComics redesign, whatever else it does, makes it very easy to skip the comments — but I’m guessing by the second one someone’s said the Common Core method means getting the most wrong answer.

Dolly, coming home: 'Rithmetic would be a lot easier if it didn't have all those different numbers.'
Bil Keane and Jeff Keane’s Family Circus for the 21st of April, 2017. In fairness, there aren’t a lot of things we need all of 6, 7, and 8 for and you can just use whatever one of those you’re good at for any calculations with the others. Promise.

Bil Keane and Jeff Keane’s Family Circus for the 21st may be a rerun; I don’t know, but a lot of them are these days. Anyway, it looks like a silly joke about how nice mathematics would be without numbers; Dolly has no idea. I can sympathize with being intimidated by numerals. At the risk of being all New Math-y, I wonder if she wouldn’t like arithmetic more if it were presented as a game. Like, here’s a couple symbols — let’s say * and | for a start, and then some rules. * and * makes *, but * and | makes |. Also | and * makes |. But | and | makes |*. And so on. This is binary arithmetic, disguised, but I wonder if making it look like something inconsequential would make it more pleasant to learn, and if that would transfer over to arithmetic with 1’s and 0’s. Normal, useful arithmetic would be harder to play like this. You’d need ten symbols that are easy to write that aren’t already numbers, letters, or common symbols. But I wonder if it’d be worth it.
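Here’s that symbol game made concrete, with ‘*’ standing in for 0 and ‘|’ for 1. The encodings are arbitrary, as the paragraph above supposes:

```python
# The game's rules are binary addition in disguise: '|*' is binary 10,
# which is two. Adding symbol strings is just adding binary numbers.
TO_BIT = {'*': '0', '|': '1'}
TO_SYM = {'0': '*', '1': '|'}

def add_symbols(a, b):
    """Add two symbol strings by the game's rules."""
    total = int(''.join(TO_BIT[ch] for ch in a), 2) \
          + int(''.join(TO_BIT[ch] for ch in b), 2)
    return ''.join(TO_SYM[ch] for ch in format(total, 'b'))

print(add_symbols('|', '|'))    # '|*'  -- the rule "| and | makes |*"
print(add_symbols('|*', '||'))  # '|*|' -- two plus three is five
```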

Tom Thaves’s Frank and Ernest for the 22nd is provided for mathematics teachers who need something to tape to their door. You’re welcome.

Words About A Wordless Induction Proof


This pair of tweets came across my feed. And who doesn’t like a good visual proof of a mathematical fact? I hope you enjoy.

So here’s the proposition.

This is the sort of identity we normally try proving by induction. Induction is a great scheme for proving identities like this. It works by finding some index on the formula. Then show that if the formula is true for one value of the index, then it’s true for the next-higher value of the index. Finally, find some value of the index for which it’s easy to check that the formula’s true. And that proves it’s true for all the values of that index above that base.

In this case the index is ‘n’. It’s really easy to prove the base case, since 1^3 is equal to 1^2, what with ‘1’ being the number everybody likes to raise to powers. Going from proving that if it’s true in one case — 1^3 + 2^3 + 3^3 + \cdots + n^3 — then it’s true for the next — 1^3 + 2^3 + 3^3 + \cdots + n^3 + (n + 1)^3 — is work. But you can get it done.
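The proposition itself is in the picture, so let me hedge: going by that base case, the identity is presumably the classic 1^3 + 2^3 + \cdots + n^3 = \left(1 + 2 + \cdots + n\right)^2. If so, the identity and the induction step are both easy to spot-check:

```python
# Check the sum-of-cubes identity, and the induction step, for small n.
for n in range(1, 200):
    total = sum(range(1, n + 1))            # 1 + 2 + ... + n
    assert sum(k ** 3 for k in range(1, n + 1)) == total ** 2
    # Induction step: moving from n to n + 1 adds (n + 1)^3 on the
    # left, and the square on the right grows by exactly that much.
    assert (total + n + 1) ** 2 - total ** 2 == (n + 1) ** 3
print("identity and induction step hold for n = 1 through 199")
```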

And then there’s this, done visually:

It took me a bit of reading before I was confident in what it was showing. But it is all there.

As often happens with these wordless proofs you can ask whether it is properly speaking a proof. A proof is an argument, and to be complete it has to contain every step needed to deduce the conclusion from the premises, each step following one of the rules of inference. Thing is, basically no proof is complete that way, because it takes forever. We elide stuff that seems obvious, confident that if we had to we could fill in the intermediate steps. A wordless proof like this trusts that if we try to describe what is in the picture then we are constructing the argument.

That’s surely enough of my words.

One Way To Get Your Own Theorem


While doing some research to better grouse about Ken Keeler’s Futurama theorem I ran across an amusing site I hadn’t known about. It is Theory Mine, a site that allows you to hire — and name — a genuine, mathematically sound theorem. The spirit of the thing is akin to that scam in which you “name” a star. But this is more legitimate in that, you know, it’s got any legitimacy. For this, you’re buying naming rights from someone who has any rights to sell. By convention the discoverer of a theorem can name it whatever she wishes, and there’s one chance in ten that anyone else will use the name.

I haven’t used it. I’ve made my own theorems, thanks, and could put them on a coffee mug or t-shirt if I wished to make a particularly boring t-shirt. But I’m delighted by the scheme. They don’t have a team of freelance mathematicians whipping up stuff and hoping it isn’t already known. Not for the kinds of prices they charge. This should inspire the question: well, where do the theorems come from?

The scheme uses an automated reasoning system. I don’t know the details of how it works, but I can think of a system by which this might work. It goes back to the Crisis of Foundations, the time in the late 19th/early 20th century when logicians got very worried that we were still letting physical intuitions and unstated assumptions stay in our mathematics. One solution: turn everything into symbols, icons with no connotations. The axioms of mathematics become a couple basic symbols. The laws of logical deduction become things we can do with the symbols, converting one line of symbols into a related other line. Every line we get is a theorem. And we know it’s correct. To write out the theorem in this scheme is to write out its proof, and to feel like you’re touching some deep magic. And there are no human frailties in the system, besides the thrill of reeling off True Names like that.

You may not be sure what this works like. It may help to compare it to a slightly-fun number coding scheme. I mean the one where you start with a number, like, ‘1’. Then you write down how many times and which digit appears. There’s a single ‘1’ in that string, so you would write down ’11’. And repeat: In ’11’ there’s a sequence of two ‘1’s, so you would write down ’21’. And repeat: there’s a single ‘2’ and a single ‘1’, so you then write down ‘1211’. And again: there’s a single ‘1’, a single ‘2’, and then a double ‘1’, so you next write ‘111221’. And so on until you get bored or die.
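If you’d like to play with this scheme (it’s better known as the look-and-say sequence), it’s a couple lines of code. A sketch:

```python
# Look-and-say: each term reads off the previous one, run by run.
from itertools import groupby

def look_and_say(term):
    """'1211' -> '111221': one '1', one '2', two '1's."""
    return ''.join(str(len(list(run))) + digit
                   for digit, run in groupby(term))

term = '1'
for _ in range(6):
    print(term)           # 1, 11, 21, 1211, 111221, 312211
    term = look_and_say(term)
```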

When we do this for mathematics we start with a couple different basic units. And we also start with several things we may do to most symbols. So there’s rarely a single line that follows from the previous. There’s an ever-expanding tree of known truths. This may stave off boredom but I make no promises about death.
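To see what an ever-expanding tree of derived lines looks like in miniature, here’s a toy stand-in: Douglas Hofstadter’s MIU system. This is my choice of illustration, surely not Theory Mine’s actual machinery. One axiom, four rewriting rules, and a breadth-first walk through everything derivable:

```python
# MIU: axiom 'MI', and rules (1) xI -> xIU, (2) Mx -> Mxx,
# (3) any 'III' -> 'U', (4) any 'UU' -> ''. Every reachable string
# is a "theorem"; the tree of them branches without end.
from collections import deque

def successors(s):
    if s.endswith('I'):
        yield s + 'U'                      # rule 1
    yield s + s[1:]                        # rule 2 (strings start 'M')
    for i in range(len(s) - 2):
        if s[i:i + 3] == 'III':
            yield s[:i] + 'U' + s[i + 3:]  # rule 3
    for i in range(len(s) - 1):
        if s[i:i + 2] == 'UU':
            yield s[:i] + s[i + 2:]        # rule 4

theorems, frontier = {'MI'}, deque(['MI'])
while frontier and len(theorems) < 30:     # stop before boredom or death
    for t in successors(frontier.popleft()):
        if t not in theorems:
            theorems.add(t)
            frontier.append(t)
print(sorted(theorems, key=len)[:10])
```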

The result of this is pages and pages that look like Ancient High Martian. I don’t feel the thrill of doing this. Some people do, though. And as recreational mathematics goes I suppose it’s at least as good as sudoku. Anyway, this kind of project, rewarding indefatigability and thoroughness, is perfect for automation. Let the computer work out all the things we can prove are true.

If I’m reading Theory Mine’s description correctly they seem to be doing something roughly like this. If they’re not, well, you go ahead and make your own rival service using my paragraphs as your system. All I ask is one penny for every use of L’Hôpital’s Rule, a theorem named for Guillaume de l’Hôpital and discovered by Johann Bernoulli. (I have heard that Bernoulli was paid for his work, but I do not know that’s true. I have now explained why, if we suppose that to be true, my prior sentence is a very funny joke and you should at minimum chuckle.)

This should inspire the question: what do we need mathematicians for, then? It’s for the same reason we need writers, when it would be possible to automate the composing of sentences that satisfy the rules of English grammar. I mean if there were rules to English grammar. That we can identify a theorem that’s true does not mean it has even the slightest interest to anyone, ever. There’s much more that could be known than that we could ever care about.

You can see this in Theory Mine’s example of Quentin’s Theorem. Quentin’s Theorem is about an operation you can do on a set whose elements consist of the non-negative whole numbers with a separate value, which they call color, attached. You can add these colored-numbers together according to some particular rules about how the values and the colors add. The order of this addition normally matters: blue two plus green three isn’t the same as green three plus blue two. Quentin’s Theorem finds cases where, if you add enough colored-numbers together, the order doesn’t matter. I know. I am also staggered by how useful this fact promises to be.

Yeah, maybe there is some use. I don’t know what it is. If anyone’s going to find the use it’ll be a mathematician. Or a physicist who’s found some bizarre quark properties she wants to codify. Anyway, if what you’re interested in is “what can you do to make a vertical column stable?” then the automatic proof generator isn’t helping you at all. Not without a lot of work put in to guiding it. So we can skip the hard work of finding and proving theorems, if we can do the hard work of figuring out where to look for these theorems instead. Always the way.

You also may wonder how we know the computer is doing its work right. It’s possible to write software that is logically proven to be correct. That is, the software can’t produce anything but the designed behavior. We don’t usually write software this way. It’s harder to write, because you have to actually design your software’s behavior. And we can get away without doing it. Usually there’s some human overseeing the results who can say what to do if the software seems to be going wrong. Advocates of logically-proven software point out that we’re getting more software, often passing results on to other programs. This can turn a bug in one program into a bug in the whole world faster than a responsible human can say, “I dunno. Did you try turning it off and on again?” I’d like to think we could get more logically-proven software. But I also fear I couldn’t write software that sound and, you know, mathematics blogging isn’t earning me enough to eat on.

Also, yes, even proven software will malfunction if the hardware the computer’s on malfunctions. That’s rare, but does happen. Fortunately, it’s possible to automate the checking of a proof, and that’s easier to do than creating a proof in the first place. We just have to prove we have the proof-checker working. Certainty would be a nice thing if we ever got it, I suppose.

Reading the Comics, January 7, 2017: Just Before GoComics Breaks Everything Edition


Most of the comics I review here are printed on GoComics.com. Well, most of the comics I read online are from there. But even so I think they have more comic strips that mention mathematical themes. Anyway, they’re unleashing a complete web site redesign on Monday. I don’t know just what the final version will look like. I know that the beta versions included the incredibly useful, that is to say dumb, feature where if a particular comic you do read doesn’t have an update for the day — and many of them don’t, as they’re weekly or three-times-a-week or so — then it’ll show some other comic in its place. I mean, the idea of encouraging people to find new comics is a good one. To some extent that’s what I do here. But the beta made no distinction between “comic you don’t read because you never heard of Microcosm” and “comic you don’t read because glancing at it makes your eyes bleed”. And on an idiosyncratic note, I read a lot of comics. I don’t need to see Dude and Dude reruns in fourteen spots on my daily comics page, even if I didn’t mind it to start.

Anyway. I am hoping, desperately hoping, that with the new site all my old links to comics are going to keep working. If they don’t then I suppose I’m just ruined. We’ll see. My suggestion is if you’re at all curious about the comics you read them today (Sunday) just to be safe.

Ashleigh Brilliant’s Pot-Shots is a curious little strip I never knew of until GoComics picked it up a few years ago. Its format is compellingly simple: a little illustration alongside a wry, often despairing, caption. I love it, but I also understand why it was the subject of endless queries to the Detroit Free Press (Or Whatever) about why this thing was taking up newspaper space. The strip rerun the 31st of December is a typical example of the strip and amuses me at least. And it uses arithmetic as the way to communicate reasoning, both good and bad. Brilliant’s joke does address something that logicians have to face, too. Whether an argument is logically valid depends entirely on its structure. If the form is correct the reasoning may be excellent. But to be sound an argument has to be valid and must also have its assumptions be true. We can separate whether an argument is right from whether it could ever possibly be right. If you don’t see the value in that, you have never participated in an online debate about where James T Kirk was born and whether Spock was the first Vulcan in Star Fleet.

Thom Bluemel’s Birdbrains for the 2nd of January, 2017, is a loaded-dice joke. Is this truly mathematics? Statistics, at least? Close enough for the start of the year, I suppose. Working out whether a die is loaded is one of the things any gambler would like to know, and that mathematicians might be called upon to identify or exploit. (I had a grandmother unshakably convinced that I would have some natural ability to beat the Atlantic City casinos if she could only sneak the underaged me in. I doubt I could do anything of value there besides see the stage magic show.)

Jack Pullan’s Boomerangs rerun for the 2nd is built on the one bit of statistical mechanics that everybody knows, that something or other about entropy always increasing. It’s not a quantum mechanics rule, but it’s a natural confusion. Quantum mechanics has the reputation as the source of all the most solid, irrefutable laws of the universe’s working. Statistical mechanics and thermodynamics have this musty odor of 19th-century steam engines, no matter how much there is to learn from there. Anyway, the collapse of systems into disorder is not an irrevocable thing. It takes only energy or luck to overcome disorderliness. And in many cases we can substitute time for luck.

Scott Hilburn’s The Argyle Sweater for the 3rd is the anthropomorphic-geometry-figure joke that I’ve been waiting for. I had thought Hilburn did this all the time, although a quick review of Reading the Comics posts suggests he’s been more about anthropomorphic numerals the past year. This is why I log even the boring strips: you never know when I’ll need to check the last time Scott Hilburn used “acute” to mean “cute” in reference to triangles.

Mike Thompson’s Grand Avenue uses some arithmetic as the visual cue for “any old kind of schoolwork, really”. Steve Breen’s name seems to have gone entirely from the comic strip. On Usenet group rec.arts.comics.strips Brian Henke found that Breen’s name hasn’t actually been on the comic strip since May, and D D Degg found a July 2014 interview indicating Thompson had mostly taken the strip over from originator Breen.

Mark Anderson’s Andertoons for the 5th is another name-drop that doesn’t have any real mathematics content. But come on, we’re talking Andertoons here. If I skipped it the world might end or something untoward like that.

'Now for my math homework. I've got a comfortable chair, a good light, plenty of paper, a sharp pencil, a new eraser, and a terrific urge to go out and play some ball.'
Ted Shearer’s Quincy for the 14th of November, 1977, and reprinted the 7th of January, 2017. I kind of remember having a lamp like that. I don’t remember ever sitting down to do my mathematics homework with a paintbrush.

Ted Shearer’s Quincy for the 14th of November, 1977, doesn’t have any mathematical content really. Just a mention. But I need some kind of visual appeal for this essay and Shearer is usually good for that.

Corey Pandolph, Phil Frank, and Joe Troise’s The Elderberries rerun for the 7th is also a very marginal mention. But, what the heck, it’s got some of your standard wordplay about angles and it’ll get this week’s essay that much closer to 800 words.

The End 2016 Mathematics A To Z: Zermelo-Fraenkel Axioms


gaurish gave me a choice for the Z-term to finish off the End 2016 A To Z. I appreciate it. I’m picking the more abstract thing because I’m not sure that I can explain zero briefly. The foundations of mathematics are a lot easier.

Zermelo-Fraenkel Axioms

I remember the look on my father’s face when I asked if he’d tell me what he knew about sets. He misheard what I was asking about. When we had that straightened out my father admitted that he didn’t know anything particular. I thanked him and went off disappointed. In hindsight, I kind of understand why everyone treated me like that in middle school.

My father’s always quick to dismiss how much mathematics he knows, or could understand. It’s a common habit. But in this case he was probably right. I knew a bit about set theory as a kid because I came to mathematics late in the “New Math” wave. Sets were seen as fundamental to why mathematics worked without being so exotic that kids couldn’t understand them. Perhaps so; both my love and I delighted in what we got of set theory as kids. But if you grew up before that stuff was popular you probably had a vague, intuitive, and imprecise idea of what sets were. Mathematicians had only a vague, intuitive, and imprecise idea of what sets were through to the late 19th century.

And then came what mathematics majors hear of as the Crisis of Foundations. (Or a similar name, like Foundational Crisis. I suspect there are dialect differences here.) It reflected mathematics taking seriously one of its ideals: that everything in it could be deduced from clearly stated axioms and definitions using logically rigorous arguments. As often happens, taking one’s ideals seriously produces great turmoil and strife.

Before about 1900 we could get away with saying that a set was a bunch of things which all satisfied some description. That’s how I would describe it to a new acquaintance if I didn’t want to be treated like I was in middle school. The definition is fine if we don’t look at it too hard. “The set of all roots of this polynomial”. “The set of all rectangles with area 2”. “The set of all animals with four-fingered front paws”. “The set of all houses in Central New Jersey that are yellow”. That’s all fine.

And then if we try to be logically rigorous we get problems. We always did, though. They’re embodied by ancient jokes like the person from Crete who declared that all Cretans always lie; is the statement true? Or the slightly less ancient joke about the barber who shaves only the men who do not shave themselves; does he shave himself? If not jokes these should at least be puzzles faced in fairy-tale quests. Logicians dressed this up some. Bertrand Russell gave us the quite respectable “The set consisting of all sets which are not members of themselves”, and asked us to stare hard into that set. To this we have only one logical response, which is to shout, “Look at that big, distracting thing!” and run away. This satisfies the problem only for a while.

The while ended in — well, that took a while too. But between 1908 and the early 1920s Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem paused from arguing whose name would also be the best indie rock band name long enough to put set theory right. Their structure is known as Zermelo-Fraenkel Set Theory, or ZF. It gives us a reliable base for set theory that avoids any contradictions or catastrophic pitfalls. Or does so far as we have found in a century of work.

It’s built on a set of axioms, of course. Most of them are uncontroversial, things like declaring two sets are equivalent if they have the same elements. Declaring that the union of sets is itself a set. Obvious, sure, but it’s the obvious things that we have to make axioms. Maybe you could start an argument about whether we should just assume there exists some infinitely large set. But if we’re aware sets probably have something to teach us about numbers, and that numbers can get infinitely large, then it seems fair to suppose that there must be some infinitely large set. The axioms that aren’t simple obvious things like that are too useful to do without. They assume stuff like that no set is an element of itself. Or that every set has a “power set”, a new set comprising all the subsets of the original set. Good stuff to know.

There is one axiom that’s controversial. Not controversial the way Euclid’s Parallel Postulate was. That’s the ugly one about lines crossing another line meeting on the same side they make angles smaller than something something or other. That axiom was controversial because it read so weird, so needlessly complicated. (It isn’t; it’s exactly as complicated as it must be. Or for a more instructive view, it’s as simple as it could be and still be useful.) The controversial axiom of Zermelo-Fraenkel Set Theory is known as the Axiom of Choice. It says if we have a collection of mutually disjoint sets, each with at least one thing in them, then it’s possible to pick exactly one item from each of the sets.
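For those who like their axioms in symbols, here’s one common way of writing that disjoint-collection version. This is my transcription, not anything official from Zermelo:

```latex
% For any collection C of pairwise disjoint nonempty sets, some set S
% contains exactly one element from each member of C.
\forall \mathcal{C}\, \Bigl[
  \bigl( \emptyset \notin \mathcal{C} \;\wedge\;
    \forall A \in \mathcal{C}\; \forall B \in \mathcal{C}\,
      ( A \neq B \rightarrow A \cap B = \emptyset ) \bigr)
  \rightarrow
  \exists S\; \forall A \in \mathcal{C}\;
    \exists!\, x\, ( x \in A \cap S )
\Bigr]
```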

It’s impossible to dispute this is what we have axioms for. It’s about something that feels like it should be obvious: we can always pick something from a set. How could this not be true?

If it is true, though, we get some unsavory conclusions. For example, it becomes possible to take a ball the size of an orange and slice it up. We slice using mathematical blades. They’re not halted by something as petty as the desire not to slice atoms down the middle. We can reassemble the pieces. Into two balls. And worse, it doesn’t require we do something like cut the orange into infinitely many pieces. We expect crazy things to happen when we let infinities get involved. No, though, we can do this cut-and-duplicate thing by cutting the orange into five pieces. When you hear that it’s hard to know whether to point to the big, distracting thing and run away. If we dump the Axiom of Choice we don’t have that problem. But can we do anything useful without the ability to make a choice like that?

And we’ve learned that we can. If we want to use the Zermelo-Fraenkel Set Theory with the Axiom of Choice we say we’re working in “ZFC”, Zermelo-Fraenkel-with-Choice. We don’t have to. If we don’t want to make any assumption about choices we say we’re working in “ZF”. Which to use depends on what one wants to do.

Either way Zermelo and Fraenkel and Skolem established set theory on the foundation we use to this day. We’re not required to use them, no; there’s a construction called von Neumann-Bernays-Gödel Set Theory that’s supposed to be more elegant. They didn’t mention it in my logic classes that I remember, though.

And still there’s important stuff we would like to know which even ZFC can’t answer. The most famous of these is the continuum hypothesis. Everyone knows — excuse me. That’s wrong. Everyone who would be reading a pop mathematics blog knows there are different-sized infinitely-large sets. And knows that the set of integers is smaller than the set of real numbers. The question is: is there a set bigger than the integers yet smaller than the real numbers? The Continuum Hypothesis says there is not.
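In symbols (my transcription again), with \aleph_0 the size of the integers and 2^{\aleph_0} the size of the real numbers:

```latex
% The Continuum Hypothesis: no set S sits strictly between them.
\neg \exists S \, \bigl( \aleph_0 < |S| < 2^{\aleph_0} \bigr)
% Given the Axiom of Choice, this is equivalent to saying
% 2^{\aleph_0} = \aleph_1.
```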

Zermelo-Fraenkel Set Theory, even though it’s all about the properties of sets, can’t tell us if the Continuum Hypothesis is true. But that’s all right; it can’t tell us if it’s false, either. Whether the Continuum Hypothesis is true or false stands independent of the rest of the theory. We can assume whichever state is more useful for our work.

Back to the ideals of mathematics. One question that produced the Crisis of Foundations was consistency. How do we know our axioms don’t contain a contradiction? It’s hard to say. Typically a set of axioms we can prove consistent is also a set too boring to do anything useful in. Zermelo-Fraenkel Set Theory, with or without the Axiom of Choice, has a lot of interesting results. Do we know the axioms are consistent?

No, not yet. We know some of the axioms are mutually consistent, at least. And we have some results which, if true, would prove the axioms to be consistent. We don’t know if they’re true. Mathematicians are generally confident that these axioms are consistent. Mostly on the grounds that if there were a problem something would have turned up by now. It’s withstood all the obvious faults. But the universe is vaster than we imagine. We could be wrong.

It’s hard to live up to our ideals. After a generation of valiant struggling we settle into hoping we’re doing good enough. And waiting for some brilliant mind that can get us a bit closer to what we ought to be.

The End 2016 Mathematics A To Z: Cantor’s Middle Third


Today’s term is a request, the first of this series. It comes from HowardAt58, head of the Saving School Math blog. There are many letters not yet claimed; if you have a term you’d like to see me write about please head over to the “Any Requests?” page and pick a letter. Please not one I figure to get to in the next day or two.

Cantor’s Middle Third.

I think one could make a defensible history of mathematics by describing it as a series of ridiculous things that get discovered. And then, by thinking about these ridiculous things long enough, mathematicians come to accept them. Even rely on them. Sometime later the public even comes to accept them. I don’t mean to say getting people to accept ridiculous things is the point of mathematics. But there is a pattern which happens.

Consider. People doing mathematics came to see how a number could be detached from a count or a measure of things. That we can do work on, say, “three” whether it’s three people, three kilograms, or three square meters. We’re so used to this it’s only when we try teaching mathematics to the young we realize it isn’t obvious.

Or consider that we can have, rather than a whole number of things, a fraction. Some part of a thing, as if you could have one-half pieces of chalk or two-thirds a fruit. Counting is relatively obvious; fractions are something novel but important.

We have “zero”; somehow, the lack of something is still a number, the way two or five or one-half might be. For that matter, “one” is a number. How can something that isn’t numerous be a number? We’re used to it anyway. We can have not just fractions and one and zero but irrational numbers, ones that can’t be represented as a fraction. We have negative numbers, somehow a lack of whatever we were counting so great that we might add some of what we were counting to the pile and still have nothing.

That takes us up to about eight hundred years ago or something like that. The public’s gotten to accept all this as recently as maybe three hundred years ago. They’ve still got doubts. I don’t blame folks. Complex numbers mathematicians like; the public’s still getting used to the idea, but at least they’ve heard of them.

Cantor’s Middle Third is part of the current edge. It’s something mathematicians are aware of and that defies sense at least. But we’ve come to accept it. The public, well, they don’t know about it. Maybe some do; it turns up in pop mathematics books that like sharing the strangeness of infinities. Few people read them. Sometimes it feels like all those who do go online to tell mathematicians they’re crazy. It comes to us, as you might guess from the name, from Georg Cantor. Cantor established the modern mathematical concept of how to study infinitely large sets in the late 19th century. And he was repeatedly hospitalized for depression. It’s cruel to write all that off as “and he was crazy”. His work’s withstood a hundred and thirty-five years of extremely smart people looking at it skeptically.

The Middle Third starts out easily enough. Take a line segment. Then chop it into three equal pieces and throw away the middle third. You see where the name comes from. What do you have left? Some of the original line. Two-thirds of the original line length. A big gap in the middle.

Now take the two line segments. Chop each of them into three equal pieces. Throw away the middle thirds of the two pieces. Now we’re left with four chunks of line and four-ninths of the original length. One big and two little gaps in the middle.

Now take the four little line segments. Chop each of them into three equal pieces. Throw away the middle thirds of the four pieces. We’re left with eight chunks of line, eight-twenty-sevenths of the original length. Lots of little gaps. Keep doing this, chopping up line segments and throwing away middle pieces. Never stop. Well, pretend you never stop and imagine what’s left.
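You can watch the first several rounds of this go by, anyway. A sketch, using exact fractions so rounding doesn’t blur anything:

```python
# Replace each interval [a, b] with its two outer thirds, repeatedly.
# Pieces double each round; the total length shrinks by 2/3 each time.
from fractions import Fraction

intervals = [(Fraction(0), Fraction(1))]
for step in range(6):
    length = sum(b - a for a, b in intervals)
    print(f"step {step}: {len(intervals)} intervals, total length {length}")
    intervals = [piece
                 for a, b in intervals
                 for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
# step 0: 1 interval, length 1; step 1: 2 intervals, 2/3; then 4/9 ...
```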

What’s left is deeply weird. What’s left has no length, no measure. That’s easy enough to prove. But we haven’t thrown everything away. There are bits of the original line segment left over. The left endpoint of the original line is left behind. So is the right endpoint of the original line. The endpoints of the line segments after the first time we chopped out a third? Those are left behind. The endpoints of the line segments after chopping out a third the second time, the third time? Those have to be in the set. We have a dust, isolated little spots of the original line, none of them combining together to cover any length. And there are infinitely many of these isolated dots.

We’ve seen that before. At least we have if we’ve read anything about the Cantor Diagonal Argument. You can find that among the first ten posts of every mathematics blog. (Not this one. I was saving the subject until I had something good to say about it. Then I realized many bloggers have covered it better than I could.) Part of it is pondering how there can be a set of infinitely many things that don’t cover any length. The whole numbers are such a set and it seems reasonable they don’t cover any length. The rational numbers, though, are also an infinitely-large set that doesn’t cover any length. And there’s exactly as many rational numbers as there are whole numbers. This is unsettling but if you’re the sort of person who reads about infinities you come to accept it. Or you get into arguments with mathematicians online and never know you’ve lost.

Here’s where things get weird. How many bits of dust are there in this middle third set? It seems like it should be countable, the same size as the whole numbers. After all, we pick up some of these points every time we throw away a middle third. So we double the number of points left behind every time we throw away a middle third. That’s countable, right?

It’s not. We can prove it. The proof looks uncannily like that of the Cantor Diagonal Argument. That’s the one that proves there are more real numbers than there are whole numbers. There are points in this leftover set that were not endpoints of any of these middle-third excerpts. This dust has more points in it than there are rational numbers, but it covers no length.

(The dust does, it turns out, have the same size as the real numbers. Every point in the set corresponds to a base-three expansion that uses only the digits 0 and 2, and swapping the 2s for 1s matches the dust, point for point, with the binary expansions of the whole unit interval.)

It’s got other neat properties. It’s a fractal, which is why someone might have heard of it, back in the Great Fractal Land Rush of the 80s and 90s. Look closely at part of this set and it looks like the original set, with bits of dust edging gaps of bigger and smaller sizes. It’s got a fractal dimension, or “Hausdorff dimension” in the lingo, that’s the logarithm of two divided by the logarithm of three. That’s a number actually known to be transcendental, which is reassuring. Nearly all numbers are transcendental, but we only know a few examples of them.

HowardAt58 asked me about the Middle Third set, and that’s how I’ve referred to it here. It’s more often called the “Cantor set” or “Cantor comb”. The “comb” makes sense because if you draw successive middle-thirds-thrown-away, one after the other, you get something that looks kind of like a hair comb, if you squint.

You can build sets like this that aren’t based around thirds. You can, for example, develop one by cutting lines into five chunks and throwing away the second and fourth. You get results that are similar, and similarly heady, but different. They’re all astounding. They’re all hard to believe in yet. They may get to be stuff we just accept as part of how mathematics works.

Reading the Comics, October 8, 2016: Split Week Edition Part 2


And now I can finish off last week’s comics. It was a busy week. The first few days of this week have been pretty busy too. Meanwhile, Dave Kingsbury has recently read a biography of Lewis Carroll, and been inspired to form a haiku/tanka project. You might enjoy.

Susan Camilleri Konar is a new cartoonist for the Six Chix collective. Her first strip to get mentioned around these parts is from the 5th. It’s a casual mention of the Fibonacci sequence, which is one of the few sequences that a normal audience would recognize as something going on forever. And yes, I noticed the spiral in the background. That’s one of the common visual representations of the Fibonacci sequence: it starts from the center. The rectangles inside have dimensions 1 by 2, then 2 by 3, then 3 by 5, then 5 by 8, and so on; the spiral connects vertices of these rectangles. It’s an attractive spiral and you can derive the overrated Golden Ratio from the dimensions of larger rectangles. This doesn’t make the Golden Ratio important or anything, but it is there.

'It seems like Fibonacci's been entering his password for days now.'
Susan Camilleri Konar’s Six Chix for the 5th of October, 2016. And yet what distracts me is both how much food Fibonacci has on his desk and how much of it is hidden behind his computer where he can’t get at it. He’s going to end up spilling his coffee on something important fiddling around like that. And that’s not even getting at his computer being at this weird angle relative to the walls.
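If you’d like to watch the rectangle proportions settle down, here’s a couple lines of code to run:

```python
# Consecutive Fibonacci numbers as rectangle sides; the aspect ratio
# drifts toward the golden ratio, (1 + sqrt(5)) / 2 = 1.6180339...
a, b = 1, 1
for _ in range(12):
    print(f"{b} by {a + b} rectangle, ratio {(a + b) / b:.7f}")
    a, b = b, a + b
```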

Ryan North’s Dinosaur Comics for the 6th is part of a story about T-Rex looking for certain truth. Mathematics could hardly avoid coming up. And it does offer what look like universal truths: given the way deductive logic works, and some starting axioms, various things must follow. “1 + 1 = 2” is among them. But there are limits to how much that tells us. If we accept the rules of Monopoly, then owning four railroads means the rent for landing on one is a game-useful $200. But if nobody around you cares about Monopoly, so what? And so it is with mathematics. Utahraptor and Dromiceiomimus point out that the mathematics we know is built on premises we have selected because we find them interesting or useful. We can’t know that the mathematics we’ve deduced has any particular relevance to reality. Indeed, it’s worse than North points out: How do we know whether an argument is valid? Because we believe that its conclusions follow from its premises according to our rules of deduction. We rely on our possibly deceptive senses to tell us what the argument even was. We rely on a mind possibly upset by an undigested bit of beef, a crumb of cheese, or a fragment of an underdone potato to tell us the rules are satisfied. Mathematics seems to offer us absolute truths, but it’s hard to see how we can get there.

Rick Stromoski’s Soup to Nutz for the 6th has a mathematics cameo in a student-resisting-class-questions problem. But the teacher’s question is related to the figure that made my first fame around these parts.

Mark Anderson’s Andertoons for the 7th is the long-awaited Andertoon for last week. It is hard getting education in through all the overhead.

Bill Watterson’s Calvin and Hobbes rerun for the 7th is a basic joke about Calvin’s lousy student work. Fun enough. Calvin does show off one of those important skills mathematicians learn, though. He does do a sanity check. He may not know what 12 + 7 and 3 + 4 are, but he does notice that 12 + 7 has to be something larger than 3 + 4. That’s a starting point. It’s often helpful before starting work on a problem to have some idea of what you think the answer should be.

Theorem Thursday: The Jordan Curve Theorem


There are many theorems that you have to get fairly far into mathematics to even hear of. Often they involve things that are so abstract and abstruse that it’s hard to parse just what we’re studying. This week’s entry is not one of them.

The Jordan Curve Theorem.

There are a couple of ways to write this. I’m going to fall back on the version that Richard Courant and Herbert Robbins put in the great book What Is Mathematics?. It’s a theorem in the field of topology, the study of how shapes interact. In particular it’s about simple, closed curves on a plane. A curve is just what you figure it should be. It’s closed if it … uh … closes, makes a complete loop. It’s simple if it doesn’t cross itself or have any disconnected bits. So, something you could draw without lifting pencil from paper and without crossing back over yourself. Have all that? Good. Here’s the theorem:

A simple closed curve in the plane divides that plane into exactly two domains, an inside and an outside.

It’s named for Camille Jordan, a French mathematician who lived from 1838 to 1922, and who’s renowned for work in group theory and topology. It’s a different Jordan from the one named in Gauss-Jordan Elimination, which is a matrix thing that’s important but tedious. It’s also a different Jordan from Jordan Algebras, which I remember hearing about somewhere.

The Jordan Curve Theorem is proved by reading its proposition and then saying, “Duh”. This is compelling, although it lacks rigor. It’s obvious if your curve is a circle, or a slightly squished circle, or a rectangle or something like that. It’s less obvious if your curve is a complicated labyrinth-type shape.

A labyrinth drawn in straight and slightly looped lines.
A generic complicated maze shape. Can you pick out which part is the inside and which the outside? Pretend you don’t notice that little peninsula thing in the upper right corner. I didn’t mean the line to overlap itself but I was using too thick a brush in ArtRage and didn’t notice before I’d exported the image.

It gets downright hard if the curve has a lot of corners. This is why a completely satisfying rigorous proof took decades to find. There are curves that are nowhere differentiable, that are nothing but corners, and those are hard to deal with. If you think there’s no such thing, then remember the Koch Snowflake. That’s that triangle sticking up from the middle of a straight line, that itself has triangles sticking up in the middle of its straight lines, and littler triangles still sticking up from the straight lines. Carry that on forever and you have a shape that’s continuous but always changing direction, and this is hard to deal with.

Still, you can have a good bit of fun drawing a complicated figure, then picking a point and trying to work out whether it’s inside or outside the curve. The challenging way to do that is to view your figure as a maze and look for a path leading outside. The easy way is to draw a new line. I recommend doing that in a different color.

In particular, draw a line from your target point to the outside. Some definitely outside point. You need the line to not be parallel to any of the curve’s line segments. And it’s easier if you don’t happen to intersect any vertices, but if you must, we’ll deal with that two paragraphs down.

A dot with a testing line that crosses the labyrinth curve six times, and therefore is outside the curve.
A red dot that turns out to be outside the labyrinth, based on the number of times the testing line, in blue, crosses the curve. I learned doing this that I should have drawn the dot and blue line first and then fit a curve around it so I wouldn’t have to work so hard to find one lousy point and line segment that didn’t have some problems.

So draw your testing line here from the point to something definitely outside. And count how many times your testing line crosses the original curve. If the testing line crosses the original curve an even number of times then the original point was outside the curve. If the testing line crosses the original an odd number of times then the original point was inside of the curve. Done.

If your testing line touches a vertex, well, then it gets fussy. It depends whether the two edges of the curve that go into that vertex stay on the same side as your testing line. If the original curve’s edges stay on the same side of your testing line, then don’t count that as a crossing. If the edges go on opposite sides of the testing line, then that does count as one crossing. With that in mind, carry on like you did before. An even number of crossings means your point was outside. An odd number of crossings means your point was inside.

The testing line touches a corner of the curve. The curve comes up to and goes away from the same side as the testing line.
This? Doesn’t count as the blue testing line crossing the black curve.

The testing line touches a corner of the curve. The curve crosses over, with legs on either side of the testing line at that point.
This? This counts as the blue testing line crossing the black curve.
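
If you’d like the counting done for you, the whole even-odd test fits in a few lines. Here’s a minimal sketch in Python, assuming the curve is a polygon given as a list of (x, y) vertices; the strict-versus-nonstrict comparison on the y coordinates quietly handles the vertex fussiness described above.

```python
def is_inside(point, polygon):
    """Even-odd rule: cast a ray rightward from the point and count
    how many polygon edges it crosses. An odd count means inside."""
    x, y = point
    crossings = 0
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % len(polygon)]
        # Count the edge only if it straddles the ray's height; this
        # half-open test keeps a vertex touch from counting twice.
        if (y1 > y) != (y2 > y):
            # Where the edge crosses the horizontal line through the point
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                crossings += 1
    return crossings % 2 == 1

print(is_inside((0.5, 0.5), [(0, 0), (1, 0), (1, 1), (0, 1)]))  # True
```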

So go ahead and do this a couple times with a few labyrinths and sample points. It’s fun and elevates your doodling to the heights of 19th-century mathematics. Also once you’ve done that a couple times you’ve proved the Jordan curve theorem.

Well, no, not quite. But you are most of the way to proving it for a special case. If the curve is a polygon, a shape made up of a finite number of line segments, then you’ve got almost all the proof done. You have to finish it off by choosing a ray, a direction, that isn’t parallel to any of the polygon’s line segments. (This is one reason this method only works for polygons, and fails for stuff like the Koch Snowflake. It also doesn’t work well with space-filling curves, which are things that exist. Yes, those are what they sound like: lines that squiggle around so much they fill up area. Some can fill volume. I swear. It’s fractal stuff.) Imagine all the lines that are parallel to that ray. Along each of those lines there’s definitely some point that’s outside the curve. You’ll need that for reference. Classify all the points on each line by whether there’s an even or an odd number of crossings between the point and your reference definitely-outside point. Keep doing that for all these many parallel lines.

And that’s it. The mess of points that have an odd number of intersections are the inside. The mess of points that have an even number of intersections are the outside.

You won’t be surprised to know there are versions of the Jordan curve theorem for solid objects in three-dimensional space. And for hyperdimensional spaces too. You can always work out an inside and an outside, as long as space isn’t being all weird. But it might sound like it’s not much of a theorem. So you can work out an inside and an outside; so what?

But it’s one of those great utility theorems. It pops into places, the perfect tool for a problem you were just starting to notice existed. If I can get my rhetoric organized I hope to show that off next week, when I figure to do the Five-Color Map Theorem.

Theorem Thursday: Liouville’s Approximation Theorem And How To Make Your Own Transcendental Number


As I get into the second month of Theorem Thursdays I have, I think, the whole roster of weeks sketched out. Today, I want to dive into some real analysis, and the study of numbers. It’s the sort of thing you normally get only if you’re willing to be a mathematics major. I’ll try to be readable by people who aren’t. If you carry through to the end and follow directions you’ll have your very own mathematical construct, too, so enjoy.

Liouville’s Approximation Theorem

It all comes back to polynomials. Of course it does. Polynomials aren’t literally everything in mathematics. They just come close. Among the things we can do with polynomials is divide up the real numbers into different sets. The tool we use is polynomials with integer coefficients. Integers are the positive and the negative whole numbers, stuff like ‘4’ and ‘5’ and ‘-12’ and ‘0’.

A polynomial is the sum of a bunch of products of coefficients multiplied by a variable raised to a power. We can use anything for the variable’s name. So we use ‘x’. Sometimes ‘t’. If we want complex-valued polynomials we use ‘z’. Some people trying to make a point will use ‘y’ or ‘s’ but they’re just showing off. Coefficients are just numbers. If we know the numbers, great. If we don’t know the numbers, or we want to write something that doesn’t commit us to any particular numbers, we use letters from the start of the alphabet. So we use ‘a’, maybe ‘b’ if we must. If we need a lot of numbers, we use subscripts: a_0, a_1, a_2, and so on, up to some a_n for some big whole number n. To talk about one of these without committing ourselves to a specific example we use a subscript of i or j or k: a_j, a_k. It’s possible that a_j and a_k equal each other, but they don’t have to, unless j and k are the same whole number. They might also be zero, but they don’t have to be. They can be any numbers. Or, for this essay, they can be any integers. So we’d write a generic polynomial f(x) as:

f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_{n - 1}x^{n - 1} + a_n x^n

(Some people put the coefficients in the other order, that is, a_n + a_{n - 1}x + a_{n - 2}x^2 and so on. That’s not wrong. The name we give a number doesn’t matter. But it makes it harder to remember what coefficient matches up with, say, x^{14}.)

A zero, or root, is a value for the variable (‘x’, or ‘t’, or what have you) which makes the polynomial equal to zero. It’s possible that ‘0’ is a zero, but don’t count on it. A polynomial of degree n — meaning the highest power to which x is raised is n — can have up to n different real-valued roots. All we’re going to care about is one.

Rational numbers are what we get by dividing one whole number by another. They’re numbers like 1/2 and 5/3 and 6. They’re numbers like -2.5 and 1.0625 and negative a billion. Almost none of the real numbers are rational numbers; they’re exceptional freaks. But they are all the numbers we actually compute with, once we start working out digits. Thus we remember that to live is to live paradoxically.

And every rational number is a root of a first-degree polynomial. That is, there’s some polynomial f(x) = a_0 + a_1 x that your rational number makes equal to zero. It’s easy to tell you what it is, too. Pick your rational number. You can write that as the integer p divided by the integer q. Now look at the polynomial f(x) = p – q x. Astounded yet?

That trick will work for any rational number. It won’t work for any irrational number. There’s no first-degree polynomial with integer coefficients that has the square root of two as a root. There are polynomials that do, though. There’s f(x) = 2 – x^2. You can find the square root of two as the zero of a second-degree polynomial. You can’t find it as the zero of any lower-degree polynomials. So we say that this is an algebraic number of the second degree.

This goes on higher. Look at the cube root of 2. That’s another irrational number, so no first-degree polynomials have it as a root. And there’s no second-degree polynomial that has it as a root, not if we stick to integer coefficients. Ah, but f(x) = 2 – x^3? That’s got it. So the cube root of two is an algebraic number of degree three.

We can go on like this, although I admit examples for higher-order algebraic numbers start getting hard to justify. Most of the numbers people have heard of are either rational or are order-two algebraic numbers. I can tell you truly that the eighth root of two is an eighth-degree algebraic number. But I bet you don’t feel enlightened. At best you feel like I’m setting up for something. The number r(5), the smallest radius a disc can have so that five of them will completely cover a disc of radius 1, is eighth-degree and that’s interesting. But you never imagined the number before and don’t have any idea how big that is, other than “I guess that has to be smaller than 1”. (It’s just a touch less than 0.61.) I sound like I’m wasting your time, although you might start doing little puzzles trying to make smaller coins cover larger ones. Do have fun.

Liouville’s Approximation Theorem is about approximating algebraic numbers with rational ones. Almost everything we ever do is with rational numbers. That’s all right because we can make the difference between the number we want, even if it’s r(5), and the numbers we can compute with, rational numbers, as tiny as we need. We trust that the errors we make from this approximation will stay small. And then we discover chaos science. Nothing is perfect.

For example, suppose we need to estimate π. Everyone knows we can approximate this with the rational number 22/7. That’s about 3.142857, which is all right but nothing great. Some people know we can approximate it as 333/106. (I didn’t until I started writing this paragraph and did some research.) That’s about 3.141509, which is better. Then there’s 355/113, which is not as famous as 22/7 but is a celebrity compared to 333/106. That’s about 3.141593. Then we get into some numbers only mathematics hipsters know: 103993/33102 and 104348/33215 and so on. Fine.
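
If you want to find these fractions yourself, Python’s standard library can oblige. Here’s a quick sketch; the denominator caps are hand-picked to land on each successive best approximation.

```python
from fractions import Fraction

pi = Fraction(3.141592653589793)   # pi, to the precision a float carries

# Best rational approximations of pi with the denominator capped
for cap in (10, 110, 200, 33200):
    print(pi.limit_denominator(cap))
# 22/7, 333/106, 355/113, 103993/33102
```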

The Liouville Approximation Theorem is about sequences that converge on an irrational number. So we have our first approximation x_1, that’s the integer p_1 divided by the integer q_1. So, 22 and 7. Then there’s the next approximation x_2, that’s the integer p_2 divided by the integer q_2. So, 333 and 106. Then there’s the next approximation yet, x_3, that’s the integer p_3 divided by the integer q_3. As we look at more and more approximations, x_j’s, we get closer and closer to the actual irrational number we want, in this case π. Also, the denominators, the q_j’s, keep getting bigger.

The theorem speaks of having an algebraic number, call it x, of some degree n greater than 1. Then we have this limit on how good an approximation can be. The difference between the number x that we want, and our best approximation p / q, has to be larger than the number (1/q)^{n + 1}. The approximation might be higher than x. It might be lower than x. But it will be off by at least the n-plus-first power of 1/q.

Polynomials let us separate the real numbers into infinitely many tiers of numbers. They also let us say how well the most accessible tier of numbers, rational numbers, can approximate these more exotic things.

One of the things we learn by looking at numbers through this polynomial screen is that there are transcendental numbers. These are numbers that can’t be the root of any polynomial with integer coefficients. π is one of them. e is another. Nearly all numbers are transcendental. But the proof that any particular number is one is hard. Joseph Liouville showed that transcendental numbers must exist by using continued fractions. But this approximation theorem tells us how to make our own transcendental numbers. This won’t be any number you or anyone else has ever heard of, unless you pick a special case. But it will be yours.

You will need:

  1. a_1, an integer from 1 to 9, such as ‘1’, ‘9’, or ‘5’.
  2. a_2, another integer from 1 to 9. It may be the same as a_1 if you like, but it doesn’t have to be.
  3. a_3, yet another integer from 1 to 9. It may be the same as a_1 or a_2 or, if it so happens, both.
  4. a_4, one more integer from 1 to 9 and you know what? Let’s summarize things a bit.
  5. A whopping great big gob of integers a_j, every one of them from 1 to 9, for every possible integer ‘j’, so technically this is infinitely many of them.
  6. Comfort with the notation n!, which is the factorial of n. For whole numbers that’s the product of every whole number from 1 to n, so, 2! is 1 times 2, or 2. 3! is 1 times 2 times 3, or 6. 4! is 1 times 2 times 3 times 4, or 24. And so on.
  7. Not to be thrown by me writing -n!. By that I mean work out n! and then multiply that by -1. So -2! is -2. -3! is -6. -4! is -24. And so on.

Now, assemble them into your very own transcendental number z, by this formula:

z = a_1 \cdot 10^{-1} + a_2 \cdot 10^{-2!} + a_3 \cdot 10^{-3!} + a_4 \cdot 10^{-4!} + a_5 \cdot 10^{-5!} + a_6 \cdot 10^{-6!} \cdots

If you’ve done it right, this will look something like:

z = 0.a_{1}a_{2}000a_{3}00000000000000000a_{4}0000000 \cdots
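
If you’d like to watch the digits appear, here’s a quick Python sketch. The digit choices here are mine, and I stop after six terms, since the seventh would sit out at the 5040th decimal place anyway.

```python
from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 800   # room for digits out to the 6! = 720th place

a = [1, 1, 1, 1, 1, 1]    # your digits a_1 through a_6; all 1's gives the Liouville Constant

# z = a_1 * 10^(-1!) + a_2 * 10^(-2!) + ...; scaleb shifts the decimal point exactly
z = sum(Decimal(d).scaleb(-factorial(j)) for j, d in enumerate(a, start=1))
print(z)   # 0.110001000000000000000001000...
```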

Ah, but, how do you know this is transcendental? We can prove it is. The proof is by contradiction, which is how a lot of great proofs are done. We show nonsense follows if the thing isn’t true, so the thing must be true. (There are mathematicians who don’t care for proof-by-contradiction. They insist on proof by charging straight ahead and showing a thing is true directly. That’s a matter of taste. I think every mathematician feels that way sometimes, to some extent or on some issues. The proof-by-contradiction is easier, at least in this case.)

Suppose that your z here is not transcendental. Then it’s got to be an algebraic number of degree n, for some finite number n. That’s what it means not to be transcendental. I don’t know what n is; I don’t care. There is some n and that’s enough.

Now, let’s let z_m be a rational number approximating z. We find this approximation by taking the first m! digits after the decimal point. So, z_1 would be just the number 0.a_1. z_2 is the number 0.a_1a_2. z_3 is the number 0.a_1a_2000a_3. I don’t know what m you like, but that’s all right. We’ll pick a nice big m.

So what’s the difference between z and z_m? Well, it can’t be larger than 10 times 10^{-(m + 1)!}. This is for the same reason that π minus 3.14 can’t be any bigger than 0.01.

Now suppose we have the best possible rational approximation, p/q, of your number z. Its first m! digits are going to be p / 10^{m!}. This will be z_m. And by the Liouville Approximation Theorem, then, the difference between z and z_m has to be at least as big as (1/10^{m!})^{n + 1}.

So we know the difference between z and zm has to be larger than one number. And it has to be smaller than another. Let me write those out.

\frac{1}{10^{m! (n + 1)}} < |z - z_m | < \frac{10}{10^{(m + 1)!}}

We don’t need the z – z_m anymore. That thing on the rightmost side we can rewrite in a form that I’ll swear is a little easier to use. What we have left is:

\frac{1}{10^{m! (n + 1)}} < \frac{1}{10^{(m + 1)! - 1}}

And this can only be true if the number m! (n + 1) is greater than (m + 1)! – 1.

But here’s the thing. That isn’t true whenever m is greater than n. So the difference between your supposedly algebraic number and its best-possible rational approximation would have to be simultaneously bigger than a number and smaller than that same number. Supposing your number is anything but transcendental produces nonsense. Therefore, congratulations! You have a transcendental number.
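
You can watch the two bounds collide numerically. Here’s a sketch with a made-up degree n = 3; nothing in the argument depends on that choice.

```python
from math import factorial

n = 3   # hypothetical degree of z, supposing it weren't transcendental
for m in range(2, 8):
    liouville_exponent = factorial(m) * (n + 1)   # from the lower bound
    digits_exponent = factorial(m + 1) - 1        # from the upper bound
    print(m, liouville_exponent > digits_exponent)
# True for small m, then False forever once m exceeds n: the contradiction.
```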

If you chose all 1’s for your a_j’s, then you have what is sometimes called the Liouville Constant. If you didn’t, you may have a transcendental number nobody’s ever noticed before. You can name it after someone if you like. That’s as meaningful as naming a star for someone and cheaper. But you can style it as weaving someone’s name into the universal truth of mathematics. Enjoy!

I’m glad to finally give you a mathematics essay that lets you make something you can keep.

What’s The Shortest Proof I’ve Done?


I didn’t figure to have a bookend for last week’s “What’s The Longest Proof I’ve Done?” question. I don’t keep track of these things, after all. And the length of a proof must be a fluid concept. If I show something is a direct consequence of a previous theorem, is the proof’s length the two lines of new material? Or is it all the proof of the previous theorem plus two new lines?

I would think the shortest proof I’d done was showing that the logarithm of 1 is zero. This would be starting from the definition of the natural logarithm of a number x as the definite integral of 1/t on the interval from 1 to x. But that requires a bunch of analysis to support the proof. And the Intermediate Value Theorem. Does that stuff count? Why or why not?
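
For the record, that candidate proof is one line, resting on the fact that an integral over an interval of zero width is zero:

\ln 1 = \int_1^1 \frac{1}{t} \, dt = 0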

But this happened to cross my desk: The Shortest-Known Paper Published in a Serious Math Journal: Two Succinct Sentences, an essay by Dan Colman. It reprints a paper by L J Lander and T R Parkin which appeared in the Bulletin of the American Mathematical Society in 1966.

It’s about Euler’s Sums of Powers Conjecture. This is a spinoff of Fermat’s Last Theorem. Leonhard Euler observed that you need at least two whole numbers so that their squares add up to a square. And you need three cubes of whole numbers to add up to the cube of a whole number. Euler speculated you needed four whole numbers so that their fourth powers add up to a fourth power, five whole numbers so that their fifth powers add up to a fifth power, and so on.

And it’s not so. Lander and Parkin found that this conjecture is false. They did it the new old-fashioned way: they set a computer to test cases. And they found four whole numbers whose fifth powers add up to a fifth power. So the quite short paper answers a long-standing question, and would be hard to beat for accessibility.
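
Their counterexample is compact enough to quote, and these days it verifies in a line of Python:

```python
# Lander and Parkin's counterexample: four fifth powers summing to a fifth power
print(27**5 + 84**5 + 110**5 + 133**5 == 144**5)   # True
```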

There is another famous short proof sometimes credited as the most wordless mathematical presentation. Frank Nelson Cole gave it on the 31st of October, 1903. It was about the Mersenne number 2^67 - 1, or in human notation, 147,573,952,589,676,412,927. It was already known the number wasn’t prime. (People wondered because numbers of the form 2^n - 1 often lead us to perfect numbers. And those are interesting.) But nobody knew what its factors were. Cole gave his talk by going up to the board, working out 2^67 - 1, and then moving to the other side of the board. There he wrote out 193,707,721 × 761,838,257,287, and showed what that was. Then, per legend, he sat down without ever saying a word, and took in the standing ovation.
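
Re-giving Cole’s talk is quick in any language with arbitrary-precision integers. A sketch in Python:

```python
# Cole's 1903 factorization of the Mersenne number 2^67 - 1
m67 = 2**67 - 1
print(m67)                                  # 147573952589676412927
print(193707721 * 761838257287 == m67)      # True
```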

I don’t want to cast aspersions on a great story like that. But mathematics is full of great stories that aren’t quite so. And I notice that one of Cole’s doctoral students was Eric Temple Bell. Bell gave us a great many grand tales of mathematics history that just weren’t so. So I want it noted that I don’t know where we get this story from, or how it may have changed in the retellings. But Cole’s proof is correct, at least according to Octave.

So not every proof is too long to fit in the universe. But then I notice that Mathworld’s page regarding the Euler Sum of Powers Conjecture doesn’t cite the 1966 paper. It cites instead Lander and Parkin’s “A Counterexample to Euler’s Sum of Powers Conjecture” from Mathematics of Computation volume 21, number 97, of 1967. There the paper has grown to three pages, although it’s only a couple paragraphs on the first page and three lines of citations on the third. It’s not so easy to read, but it does explain how they set about searching for counterexamples, and it may give you some better idea of how numerical mathematicians find things.

What’s The Longest Proof I’ve Done?


You know what’s a question I’m surprised I don’t get asked? I mean in the context of being a person with an advanced mathematics degree. I don’t get asked what’s the longest proof I’ve ever done. Either just reading to understand, or proving for myself. Maybe people are too intimidated by the idea of advanced mathematics to try asking such things. Maybe they’re afraid I’d bury them under a mountain of technical details. But I’d imagine musicians get asked what the hardest or the longest piece they’ve memorized is. I’m sure artists get asked what painting (or sculpture, or whatnot) they’ve worked on the longest.

It’s just as well nobody’s asked. I’m not sure what the longest proof I’ve done, or gone through, would even be. Some of it is because there’s an inherent arbitrariness to the concept of “a proof”. Proofs are arguments, and they’re almost always made up of many smaller pieces. The advantage of making these small pieces is that small proofs are usually easier to understand. We can then assemble the conclusions of many small proofs to make one large proof. But then how long was the large proof? Does it contain all the little proofs that go into it?

And, truth be told, I didn’t think to pay attention to how long any given proof was. If I had to guess I would think the longest proof I’d done, just learned, would be from a grad school course in ordinary differential equations. This is the way we study systems in which how things are changing depends on what things are now. These often match physical, dynamical systems very well. I remember in the class spending several two-hour sessions trying to get through a major statement in a field called Kolmogorov-Arnold-Moser Theory. This is a major statement about dynamical systems being perturbed, given a little shove. And it describes what conditions make the little shove really change the way the whole system behaves.

What I’m getting to is that there appears to be a new world’s record-holder for the Longest Actually Completed Proof. It’s about a problem I never heard of before but that’s apparently been open since the 1980s. It’s known as the Boolean Pythagorean Triples problem. The MathsByAGirl blog has an essay about it, and gives some idea of its awesome size. It’s about 200 terabytes of text. As you might imagine, it’s a proof by exhaustion. That is, it divides up a problem into many separate cases, and tries out all the cases. That’s a legitimate approach. It tends to produce proofs that are long and easy to verify, at least at each particular case. They might not be insightful, that is, they might not suggest new stuff to do, but they work. (And I don’t know that this proof doesn’t suggest new stuff to do. I haven’t read it, for good reason. It’s well outside my specialty.)

But proofs can be even bigger. John Carlos Baez published a while back an essay, “Insanely Long Proofs”. And that’s awe-inspiring. Baez is able to provide theorems which we know to be true. You’ll be able to understand what they conclude, too. And in the logic system applicable to them, their proofs would be so long that the entire universe isn’t big enough just to write down the number of symbols needed to complete the proof. Let me say that again. It’s not that writing out the proof would take more than all the space in the universe. It’s that writing out how long the proof would be would already take more than all the space in the universe.

So you should ask, then how do we know it’s true? Baez explains.

Reading the Comics, June 3, 2016: Word Problems Without Pictures Edition


I haven’t got Sunday’s comics under review yet. But the past seven days were slow ones for mathematically-themed comics. Maybe Comic Strip Master Command is under the impression that it’s the (United States) summer break already. It’s not, although Funky Winkerbean did a goofy sequence graduating its non-player-character students. And Zits has been doing a summer reading storyline that only makes sense if Jeremy Duncan is well into summer. Maybe Comic Strip Master Command thinks it’s a month later than it actually is?

Tony Cochrane’s Agnes for the 29th of May looks at first like a bit of nonsense wordplay. But whether a book with the subject “All About Books” would discuss itself, and how it would discuss itself, is a logic problem. And not just a logic problem. Start from pondering how the book All About Books would describe the content of itself. You can go from that to an argument that it’s impossible to compress every possible message: there are more possible messages of any given length than there are shorter descriptions to hold them all. Imagine an All About Books which contained shorthand descriptions of every book. And the descriptions have enough detail to exactly reconstruct each original book. But then what would the book list for the description of All About Books?

And self-referential things can lead to logic paradoxes swiftly. You’d have some fine ones if Agnes were to describe a book All About Not-Described Books. Is the book described in itself? The question again sounds silly. But thinking seriously about it leads us to Gödel’s incompleteness theorems: any interesting-enough logical system will always have statements that are meaningful and true that no one can prove.

Furthermore, the suggestion of an “All About `All About Books’ Book” suggests to me power sets. That’s the set of all the ways you can collect the elements of a set. Power sets are always bigger than the original set. They lead to the staggering idea that there are many sizes of infinitely large sets, a never-ending stack of bigness.

Robb Armstrong’s Jump Start for the 31st of May is part of a sequence about getting a tutor for a struggling kid. That it’s mathematics is incidental to the storyline, it must be said. (It’s an interesting storyline, partly about Jojo’s father, a police officer, coming to trust Ray, an ex-convict. Jump Start tells many interesting and often deeply weird storylines. And it never loses its camouflage of being an ordinary family comic strip.) It uses the familiar gimmick of motivating a word problem by making it about something tangible.

Ken Cursoe’s Tiny Sepuku for the 2nd of June uses the motif of non-Euclidean geometry as some supernatural magic. It’s a small reference, you might miss it. I suppose it is true that a high-dimensional analogue to conic sections would focus things from many dimensions. If those dimensions match time and space, maybe it would focus something from all humanity into the brain. I would try studying instead, though.

Russell Myers’s Broom Hilda for the 3rd is a resisting-the-word-problems joke. It’s funny to figure on missing big if you have to be wrong at all. But something you learn in numerical mathematics, particularly, is that it’s all right to start from a guess. Often you can take a wrong answer and improve it. If you can’t get the exact right answer, you can usually get a better answer. And often you can get as good as you need. So in practice, sorry to say, I can’t recommend going for the ridiculous answer. You can do better.

A Leap Day 2016 Mathematics A To Z: Wlog


Wait for it.

Wlog.

I’d like to say a good word for boredom. It needs the good words. The emotional state has an appalling reputation. We think it’s the sad state someone’s in when they can’t find anything interesting. It’s not. It’s the state in which we are so desperate for engagement that anything is interesting enough.

And that isn’t a bad thing! Finding something interesting enough is a precursor to noticing something curious. And curiosity is a precursor to discovery. And discovery is a precursor to seeing a fuller richness of the world.

Think of being stuck in a waiting room, deprived of reading materials or a phone to play with or much of anything to do. But there is a clock. Your classic analog-face clock. Its long minute hand sweeps out the full 360 degrees of the circle once every hour, 24 times a day. Its short hour hand sweeps out that same arc every twelve hours, only twice a day. Why is the big unit of time marked with the short hand? Good question, I don’t know. Probably, ultimately, because it changes so much less than the minute hand that it doesn’t need the attention a long hand would draw to it.

But let our waiting mathematician get a little more bored, and think more about the clock. The hour and minute hand must sometimes point in the same direction. They do at 12:00 by the clock, for example. And they will at … a little bit past 1:00, and a little more past 2:00, and a good while after 9:00, and so on. How many times during the day will they point the same direction?

Well, one easy way to do this is to work out how long it takes the hands, once they’ve met, to meet up again. Presumably we don’t want to wait the whole hour-and-some-more-time for it. But how long is that? Well, we know the hands start out pointing the same direction at 12:00. The first time after that will be after 1:00. At exactly 1:00 the hour hand is 30 degrees clockwise of the minute hand. The minute hand will need five minutes to catch up to that. In those five minutes the hour hand will have moved another 2.5 degrees clockwise. The minute hand needs about four-tenths of a minute to catch up to that. In that time the hour hand moves — OK, we’re starting to see why Zeno was not an idiot. He never was.

But we have this roughly worked out. It’s about one hour, five and a half minutes between one time the hands meet and the next. In the course of twelve hours there’ll be time for them to meet up … oh, of course, eleven times. Over the course of the day they’ll meet up 22 times and we can get into a fight over whether midnight counts as part of today, tomorrow, or both days, or neither. (The answer: pretend the day starts at 12:01.)
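
If you don’t trust the about-65-and-a-half-minutes argument, a few lines of Python will list all eleven meeting times. The only inputs are the hands’ speeds: 6 degrees per minute for the minute hand and 0.5 for the hour hand.

```python
# The gap between the hands closes at 6 - 0.5 = 5.5 degrees per minute,
# so the hands meet every 360 / 5.5 minutes, about 65.4545 of them.
interval = 360 / 5.5
for k in range(11):
    hours, minutes = divmod(k * interval, 60)
    print(f"{int(hours) or 12}:{minutes:07.4f}")
# 12:00.0000, 1:05.4545, 2:10.9091, ..., 10:54.5455
```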

Hold on, though. How do we know that the time between the hands meeting up at 12:00 and the one at about 1:05 is the same as the time between the hands meeting up near 1:05 and the next one, sometime a little after 2:10? Or between that one and the one at a little past 3:15? What grounds do we have for saying this one interval is a fair representation of them all?

We can argue that it is, fairly easily. Imagine that all the markings were washed off the clock. It’s just two hands sweeping around in circles, one relatively fast, one relatively slow, forever. Give the clockface a spin. When the hands come together again rotate the clock so those two hands are vertical, the “12:00” position. Is this actually 12:00? … Well, we’ve got a one-in-eleven chance it is. It might be a little past 1:05; it might be that meeting sometime past 6:30. The movement of the clock hands gives no hint what time it really is.

And that is why we’re justified taking this one interval as representative of them all. The rate at which the hands move, relative to each other, doesn’t depend on what the clock face behind it says. The rate is, if the clock isn’t broken, always the same. So we can use information about one special case that happens to be easy to work out to handle all the cases.

That’s the mathematics term for this essay. We can study the one specific case without loss of generality, or as it’s inevitably abbreviated, wlog. This is the trick of studying something possibly complicated, possibly abstract, by looking for a representative case. That representative case may tell us everything we need to know, at least about this particular problem. Generality means what you might figure from the ordinary English meaning of it: it means this answer holds in general, as opposed to in this specific instance.

Some thought has to go into choosing the representative case. We have to pick something that doesn’t, somehow, miss out on a class of problems we would want to solve. We mustn’t lose the generality. And it’s an easy mistake to make, especially as a mathematics student first venturing into more abstract waters. I remember coming up against that often when trying to prove properties of infinitely long series. It’s so hard to reason about a bunch of numbers whose identities I have no idea about; why can’t I just use the sequence, oh, 1/1, 1/2, 1/3, 1/4, et cetera and let that be good enough? Maybe 1/1, 1/4, 1/9, 1/16, et cetera for a second test, just in case? It’s because it takes time to learn how to safely handle infinities.

It’s still worth doing. Few of us are good at manipulating things in the abstract. We have to spend more mental energy imagining the thing rather than asking the questions we want of it. Reducing that abstraction — even if it’s just a little bit, changing, say, from “an infinitely-differentiable function” to “a polynomial of high enough degree” — can rescue us. We can try out things we’re confident we understand, and derive from it things we don’t know.

I can’t say that a bored person observing a clock would deduce all this. Parts of it, certainly. Maybe all, if she thought long enough. I believe it’s worth noticing and thinking of these kinds of things. And it’s why I believe it’s fine to be bored sometimes.

Reading the Comics, April 10, 2016: Four-Digit Prime Number Edition


In today’s installment of Reading The Comics, mathematics gets name-dropped a bunch in strips that aren’t really about my favorite subject other than my love. Also, I reveal the big lie we’ve been fed about who drew the Henry comic strip attributed to Carl Anderson. Finally, I get a question from Queen Victoria. I feel like this should be the start of a podcast.

Todd responds to arithmetic flash cards: 'Tater tots! Sloppy Joes! Mac and Cheese!' 'Todd, what are you doing? These are all math!' 'Sorry ... every day at school we have math right before lunch and you told me to say the first thing that pops into my mind!'
Patrick Roberts’ Todd the Dinosaur for the 6th of April, 2016.

Patrick Roberts’ Todd the Dinosaur for the 6th of April just name-drops mathematics. The flash cards suggest it. They’re almost iconic for learning arithmetic. I’ve seen flash cards for other subjects. But apart from learning the words of other languages I’ve never been able to make myself believe they’d work. On the other hand, I haven’t used flash cards to learn (or teach) things myself.

Mom, taking the mathematics book away from Bad Dad: 'I'll take over now ... fractions and long division aren't `scientifically accepted as unknowable`.'
Joe Martin’s Boffo for the 7th of April, 2016. I bet the link expires in early May.

Joe Martin’s Boffo for the 7th of April is a solid giggle. (I have a pretty watery giggle myself.) There are unknowable, or at least unprovable, things in mathematics. Any logic system with enough rules to be interesting has ideas which would make sense, and which might be true, but which can’t be proven. Arithmetic is such a system. But just fractions and long division by itself? No, I think we need something more abstract for that.

Henry is sent to bed. He can't sleep until he reads from his New Math text.
Carl Anderson’s Henry for the 7th of April, 2016.

Carl Anderson’s Henry for the 7th of April is, of course, a rerun. It’s also a rerun that gives away that the “Carl Anderson” credit is a lie. Anderson turned over drawing the comic strip in 1942 to John Liney, for weekday strips, and Don Trachte for Sundays. There is no possible way the phrase “New Math” appeared on the cover of a textbook Carl Anderson drew. Liney retired in 1979, and Jack Tippit took over until 1983. Then Dick Hodgins, Jr, drew the strip until 1990. So depending on how quickly word of the New Math penetrated Comic Strip Master Command, this was drawn by either Liney, Tippit, or possibly Hodgins. (Peanuts made New Math jokes in the 60s, but it does seem the older the comic strip the longer it takes to mention new stuff.) I don’t know when these reruns date from. I also don’t know why Comics Kingdom is fibbing about the artist. But then they went and cancelled The Katzenjammer Kids without telling anyone either.

Eric the Circle for the 8th, this one by “lolz”, declares that Eric doesn’t like being graphed. This is your traditional sort of graph, one in which points with coordinates x and y are on the plot if their values make some equation true. For a circle, that equation’s something like (x – a)^2 + (y – b)^2 = r^2. Here (a, b) are the coordinates for the point that’s the center of the circle, and r is the radius of the circle. This looks a lot like Eric is centered on the origin, the point with coordinates (0, 0). It’s a popular choice. Any center is as good. Another would just have equations that take longer to work with.

Richard Thompson’s Cul de Sac rerun for the 10th is so much fun to look at that I’m including it even though it just name-drops mathematics. The joke would be the same if it were something besides fractions. Although see Boffo.

Norm Feuti’s Gil rerun for the 10th takes on mathematics’ favorite group theory application, the Rubik’s Cube. It’s the way I solved them best. This approach falls outside the bounds of normal group theory, though.

Mac King and Bill King’s Magic in a Minute for the 10th shows off a magic trick. It’s also a non-Rubik’s-cube problem in group theory. One of the groups that a mathematics major learns, after integers-mod-four and the like, is the permutation group. In this, the act of swapping two (or more) things is a thing. This puzzle restricts the allowed permutations down to swapping one item with the thing next to it. And thanks to that, an astounding result emerges. It’s worth figuring out why the trick would work. If you can figure out the reason the first set of switches have to leave a penny on the far right then you’ve got the gimmick solved.

Pab Sungenis’s New Adventures of Queen Victoria for the 10th made me wonder just how many four-digit prime numbers there are. If I haven’t worked this out wrong, there’s 1,061 of them.
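
That’s easy to double-check with a crude trial-division count. A sketch:

```python
def is_prime(n):
    return n > 1 and all(n % f for f in range(2, int(n ** 0.5) + 1))

print(sum(is_prime(n) for n in range(1000, 10000)))   # 1061
```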

A Leap Day 2016 Mathematics A To Z: Grammar


My next entry for this A To Z was another request, this one from Jacob Kanev, who doesn’t seem to have a WordPress or other blog. (If I’m mistaken, please, let me know.) Kanev’s given me several requests, some of them quite challenging. Some too challenging: I have to step back from describing “both context sensitive and not” kinds of grammar just now. I hope all will forgive me if I just introduce the base idea.

Grammar.

One of the ideals humans hold when writing a mathematical proof is to crush all humanity from the proof. It’s nothing personal. It reflects a desire to be certain we have proved things without letting any unstated assumptions or unnoticed biases interfere. The 19th century was a lousy century for mathematicians and their intuitions. Many ideas that seemed clear enough turned out to be paradoxical. It’s natural to want to not make those mistakes again. We can succeed.

We can do this by stripping out everything but the essentials. We can even do away with words. After all, if I say something is a “square”, that suggests I mean what we mean by “square” in English. Our mathematics might not have proved all the square-ness of the thing. And so we reduce the universe to symbols. Letters will do as symbols, if we want to be kind to our typesetters. We do want to be kind now that, thanks to LaTeX, we do our own typesetting.

This is called building a “formal language”. The “formal” here means “relating to the form” rather than “the way you address people when you can’t just say `heya, gang’.” A formal language has two important components. One is the symbols that can be operated on. The other is the operations you can do on the symbols.

If we’ve set it all up correctly then we get something wonderful. We have “statements”. They’re strings of the various symbols. Some of the statements are axioms; they’re assumed to be true without proof. We can turn a statement into another one by using a statement we have and one of the operations. If the operation requires, we can add in something else we already know to be true. Something we’ve already proven.

Any statement we build this way — starting from an axiom and building with the valid operations — is a new and true statement. It’s a theorem. The proof of the theorem? It’s the full sequence of symbols and operations that we’ve built. The line between advanced mathematics and magic is blurred. To give a theorem its full name is to give its proof. (And now you understand why the biographies of many of the pioneering logicians of the late 19th and early 20th centuries include a period of fascination with the Kabbalah and other forms of occult or gnostic mysticism.)

A grammar is what’s required to describe a language like this. It’s defined to be a quartet of properties. The first property is the collection of symbols that can’t be the end of a statement. These are called nonterminal symbols. The second property is the collection of symbols that can end a statement. These are called terminal symbols. (You see why we want to have those as separate lists.) The third property is the collection of rules that let you build new statements from old. The fourth property is the collection of things we take to be true to start. We only have finitely many options for each of these, at least for your typical grammar. I imagine someone has experimented with infinite grammars. But that hasn’t become a big enough research field that people have to pay attention to it. Not yet, anyway.

Now it’s reasonable to ask if we need mathematicians at all. If building up theorems is just a matter of applying the finitely many rules of inference on finitely many collections of symbols, finitely many times over, then what about this can’t be done by computer? And done better by a computer, since a computer doesn’t need coffee, or bathroom breaks an hour later, or the hope of moving to a tenure-track position?

Well, we do need mathematicians. I don’t say that just because I hope someone will give me money in exchange for doing mathematics. It’s because setting up a computer to just grind out every possible theorem will never turn up what you want to know now. There are several reasons for this.

Here’s a way to see why. It’s drawn from Douglas Hofstadter’s Gödel, Escher, Bach, a copy of which you can find in any college dorm room or student organization office. At least you could back when I was an undergraduate. I don’t know what the kids today use.

Anyway, this scheme has three nonterminal symbols: I, M, and U. As a terminal symbol … oh, let’s just use the space at the end of a string. That way everything looks like words. We will include a couple variables, lowercase letters like x and y and z. They stand for any string of nonterminal symbols. They’re falsework. They help us get work done, but must not appear in our final result.

There’s four rules of inference. The first: if xI is valid, then so is xIM. The second: if Mx is valid, then so is Mxx. The third: if MxIIIy is valid, then so is MxUy. The fourth: if MxUUy is valid, then so is Mxy.

We have one axiom, assumed without proof to be true: MI.

So let’s putter around some. MI is true. So by the second rule, so is MII. That’s a theorem. And since MII is true, by the second rule again, so is MIIII. That’s another theorem. Since MIIII is true, by the first rule, so is MIIIIM. We’ve got another theorem already. Since MIIIIM is true, by the third rule, so is MIUM. We’ve got another theorem. For that matter, since MIIIIM is true, again by the third rule, so is MUIM. Would you like MIUMIUM? That’s waiting there to be proved too.
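
If the puttering gets tedious, a computer putters happily. Here’s a sketch of a breadth-first search that grinds out theorems of this scheme, with the four rules encoded exactly as this essay states them.

```python
from collections import deque

def successors(s):
    """Apply each of the four rules everywhere they fit."""
    out = set()
    if s.endswith('I'):
        out.add(s + 'M')                       # rule 1: xI -> xIM
    if s.startswith('M'):
        out.add(s + s[1:])                     # rule 2: Mx -> Mxx
    for i in range(1, len(s) - 2):
        if s[i:i + 3] == 'III':
            out.add(s[:i] + 'U' + s[i + 3:])   # rule 3: MxIIIy -> MxUy
    for i in range(1, len(s) - 1):
        if s[i:i + 2] == 'UU':
            out.add(s[:i] + s[i + 2:])         # rule 4: MxUUy -> Mxy
    return out

theorems = {'MI'}                              # the lone axiom
frontier = deque(theorems)
while frontier and len(theorems) < 50:         # stop somewhere; it never would
    for t in successors(frontier.popleft()):
        if t not in theorems:
            theorems.add(t)
            frontier.append(t)
print(sorted(theorems, key=len)[:10])          # 'MI' first, then 'MII', 'MIM', 'MIU', 'MUI', ...
```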

And that will do. First question: what does any of this even mean? Nobody cares about whether MIUMIUM is a theorem in this system. Nobody cares about figuring out whether MUIUMUIUI might be a theorem. We care about questions like “what’s the smallest odd perfect number?” or “how many equally-strong vortices can be placed in a ring without the system becoming unstable?” With everything reduced to symbol-shuffling like this we’re safe from accidentally assuming something which isn’t justified. But we’re pretty far from understanding what these theorems even mean.

In this case, these strings don’t mean anything. They’re a toy so we can get comfortable with the idea of building theorems this way. We don’t expect them to do any more work than we expect Lincoln Logs to build usable housing. But you can see how we’re starting pretty far from most interesting mathematics questions.

Still, if we started from a system that meant something, we would get there in time, right? … Surely? …

Well, maybe. The thing is, even with this I, M, U scheme and its four rules there are a lot of things to try out. From the first axiom, MI, we can produce either MII or MIM. From MII we can produce MIIM or MIIII. From MIIII we could produce MIIIIM, or MUI, or MIU, or MIIIIIIII. From each of those we can produce … quite a bit of stuff.

All of those are theorems in this scheme and that’s nice. But it’s a lot. Suppose we have set up symbols and axioms and rules that have clear interpretations that relate to something we care about. If we set the computer to produce every possible legitimate result we are going to produce an enormous number of results that we don’t care about. They’re not wrong, they’re just off-point. And there are a lot more true things that are off-point than there are true things on-point. We need something with judgement to pick out results that have anything to do with what we want to know. And trying out combinations to see if we can produce the pattern we want is hard. Really hard.

And there’s worse. If we set up a formal language that matches real mathematics, then we need a lot of work to prove anything. Even simple statements can take forever. I seem to remember my logic professor needing 27 steps to work out the uncontroversial theorem “if x = y and y = z, then x = z”. (Granting he may have been taking the long way around for demonstration purposes.) We would have to look in theorems of unspeakably many symbols to find the good stuff.

Now it’s reasonable to ask what the point of all this is. Why create a scheme that lets us find everything that can be proved, only to have all we’re interested in buried in garbage?

There are some uses. To make us swear we’ve read Jorge Luis Borges, for one. Another is to study the theory of what we can prove. That is, what are we able to learn by logical deduction? And another is to design systems meant to let us solve particular kinds of problems. That approach makes the subject merge into computer science. Code for a computer is, in a sense, about how to change a string of data into another string of data. What are the legitimate data to start with? What are the rules by which to change the data? And these are the sorts of things grammars, and the study of grammars, are about.

A Leap Day 2016 Mathematics A To Z: Conjecture


For today’s entry in the Leap Day 2016 Mathematics A To Z I have an actual request from Elke Stangl. I’d had another ‘c’ request, for ‘continued fractions’. I’ve decided to address that by putting ‘Fractions, continued’ on the roster. If you have other requests, for letters not already committed, please let me know. I’ve got some letters I can use yet.

Conjecture.

An old joke says a mathematician’s job is to turn coffee into theorems. I prefer tea, which may be why I’m not employed as a mathematician. A theorem is a logical argument that starts from something known to be true. Or we might start from something assumed to be true, if we think the setup interesting and plausible. And it uses laws of logical inference to draw a conclusion that’s also true and, hopefully, interesting. If it isn’t interesting, maybe it’s useful. If it isn’t either, maybe at least the argument is clever.

How does a mathematician know what theorems to try proving? We could assemble any combination of premises as the setup to a possible theorem. And we could imagine all sorts of possible conclusions. Most of them will be syntactically gibberish, the equivalent of our friends the monkeys banging away on keyboards. Of those that aren’t, most will be untrue, or at least impossible to argue. Of the rest, potential theorems that could be argued, many will be too long or too unfocused to follow. Only a tiny few potential combinations of premises and conclusions could form theorems of any value. How does a mathematician get a good idea where to spend her time?

She gets it from experience. In learning what theorems, what arguments, have been true in the past she develops a feeling for things that would plausibly be true. In playing with mathematical constructs she notices patterns that seem to be true. As she gains expertise she gets a sense for things that feel right. And she gets a feel for what would be a reasonable set of premises to bundle together. And what kinds of conclusions probably follow from an argument that people can follow.

This potential theorem, this thing that feels like it should be true, is a conjecture.

Properly, we don’t know whether a conjecture is true or false. The most we can say is that we don’t have evidence that it’s false. New information might show that we’re wrong and we would have to give up the conjecture. Finding new examples that it’s true might reinforce our idea that it’s true, but that doesn’t prove it’s true.

For example, we have the Goldbach Conjecture. According to it every even number greater than two can be written as the sum of exactly two prime numbers. The evidence for it is very good: every even number we’ve tried has worked out, up through at least 4,000,000,000,000,000,000. But it isn’t proven. And it’s possible that it can’t be proved from the standard rules of arithmetic.
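
The checking itself is simple, which is why the evidence has been pushed so far. Here’s a sketch that finds a prime pair for the first few even numbers:

```python
def is_prime(n):
    return n > 1 and all(n % f for f in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return primes p and q with p + q = n, for even n greater than 2."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p

for n in range(4, 25, 2):
    print(n, goldbach_pair(n))   # 4 (2, 2), 6 (3, 3), 8 (3, 5), ...
```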

That’s a famous conjecture. It’s frustrated mathematicians for centuries. It’s easy to understand and nobody’s found a proof. Famous conjectures, the ones that get names, tend to do that. They looked nice and simple and had hidden depths.

Most conjectures aren’t so storied. They instead appear as notes at the end of a section in a journal article or a book chapter. Or they’re put on slides meant to refresh the audience’s interest where it’s needed. They are needed at the fifteen-minute mark of a presentation, just after four slides full of dense equations. They are also needed at the 35-minute mark, in the middle of a field of plots with too many symbols and not enough labels. And one’s needed just before the summary of the talk, so that the audience can try to remember what the presentation was about and why they thought they could understand it. If the deadline were not so tight, if the conference were a month or so later, perhaps the mathematician would find a proof for these conjectures.

Perhaps. As above, some conjectures turn out to be hard. Fermat’s Last Theorem stood for three and a half centuries as a conjecture. Its first proof turned out to be nothing like anything Fermat could have had in mind. Mathematics popularizers lost an easy hook when that was proven. We used to be able to start an essay on Fermat’s Last Theorem by huffing about how it was properly a conjecture but the wrong term stuck to it because English is a perverse language. Now we have to start by saying how it used to be a conjecture instead.

But few are like that. Most conjectures are ideas that feel like they ought to be true. They appear because a curious mind will look for new ideas that resemble old ones, or will notice patterns that seem to resemble old patterns.

And sometimes conjectures turn out to be false. Something can look like it ought to be true, or maybe would be true, and yet be false. Often we can prove something isn’t true by finding an example, just as you might expect. But that doesn’t mean it’s easy. Here’s a false conjecture, one that was put forth by Goldbach. All odd numbers are either prime, or can be written as the sum of a prime and twice a square number. (He considered 1 to be a prime number.) It’s not true, but it took over a century to show that. If you want to find a counterexample go ahead and have fun trying.
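
If you’d rather spoil the fun, a short search turns up the failures. (The is_prime here doesn’t count 1 as a prime, the way Goldbach did, but the two failures it finds don’t depend on that convention.)

```python
def is_prime(n):
    return n > 1 and all(n % f for f in range(2, int(n ** 0.5) + 1))

def prime_plus_twice_square(n):
    """Is odd n a prime, or a prime plus twice a square?"""
    return any(is_prime(n - 2 * k * k) for k in range(int((n / 2) ** 0.5) + 1))

print([n for n in range(3, 6001, 2) if not prime_plus_twice_square(n)])
# [5777, 5993]
```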

Still, if a mathematician turns coffee into theorems, it is through the step of finding conjectures, promising little paths in the forest of what is not yet known.

A Leap Day 2016 Mathematics A To Z: Axiom


I had a great deal of fun last summer with an A To Z glossary of mathematics terms. To repeat a trick with some variation, I called for requests a couple weeks back. I think the requests have settled down so let me start. (However, if you’ve got a request for one of the latter alphabet letters, please let me know. There’s ten letters not yet committed.) I’m going to call this a Leap Day 2016 Mathematics A To Z to mark when it sets off. This way I’m not committed to wrapping things up before a particular season ends. On, now, to the start and the first request, this one from Elke Stangl:

Axiom.

Mathematics is built of arguments. Ideally, these are all grounded in deductive logic. These would be arguments that start from things we know to be true, and use the laws of logical inference to conclude other things that are true. We want valid arguments, ones in which every implication is based on true premises and correct inferences. In practice we accept some looseness about this, because it would just take forever to justify every single little step. But the structure is there. From some things we know to be true, deduce something we hadn’t before proven was true.

But where do we get things we know to be true? Well, we could ask the philosophy department. The question’s one of their specialties. But we might be scared of them, and they of us. After all, the mathematics department and the philosophy department are usually, but not always, put together in the College of Arts and Sciences. Sometimes philosophy is put in the College of Humanities instead. Let’s stay where we are.

We know to be true stuff we’ve already proved to be true. So we can use the results of arguments we’ve already finished. That’s comforting. Whatever work we, or our forerunners, have done was not in vain. But how did we know those results were true? Maybe they were the consequences of earlier stuff we knew to be true. Maybe they came from earlier valid arguments.

You see the regression problem. We don’t have anything we know to be true except the results of arguments, and the arguments depended on having something true to build from. We need to start somewhere.

The real world turns out to be a poor starting point, by the way. Oh, it’s got some good sides. Reality is useful in many ways, but it has a lot of problems to be resolved. Most things we could say about the real world are transitory: they were once untrue, became true, and will someday be false again. It’s hard to see how you can build a universal truth on a transitory foundation. And that’s even if we know what’s true in the real world. We have senses that seem to tell us things about the real world. But the philosophy department, if we eavesdrop on them, would remind us of some dreadful implications. The concept of “the real world” is hard to make precise. Even if we suppose we’ve done that, we don’t know that what we could perceive has anything to do with the real world. The folks in the psychology department and the people who study physiology reinforce the direness of the situation. Even if perceptions can tell us something relevant, and even if our senses aren’t deliberately deceived, they’re still bad at perceiving stuff. We need to start somewhere else if we want certainty.

That somewhere is the axiom. We declare some things to be a kind of basic law. Here are some things we need not prove true; they simply are.

(Sometimes mathematicians say “postulate” instead of “axiom”. This is because some things sound better called “postulates”. Meanwhile other things sound better called “axioms”. There is no functional difference.)

Most axioms tend to be straightforward things. We tend to like having uncontroversial foundations for our arguments. It may hardly seem necessary to say “all right angles are congruent”, but how would you prove that? It may seem obvious that, given a collection of sets of things, it’s possible to select exactly one thing from each of those sets. How do you know you can?

Well, they might follow from some other axioms, by some clever enough argument. This is possible. Mathematicians consider it elegant to have as few axioms as necessary for their work. (They’re not alone, or rare, in that preference.) I think that reflects a cultural desire to say as much as possible with as little work as possible. The more things we have to assume to show a thing is true, the more likely that in a new application one of those assumptions won’t hold. And that would spoil our knowledge of that conclusion. Sometimes we can show the interesting point of one axiom could be derived from some other axiom or axioms. We might replace an axiom with these alternates if that gives us more enlightening arguments.

Sometimes people seize on this whole axiom business to argue that mathematics (and science, dragged along behind) is a kind of religion. After all, you need to have faith that some things are true. This strikes me as bad theology and poor mathematics. The most obvious difference between an article of faith and an axiom must be that axioms are voluntary. They are things you assume to be true because you expect them to enlighten something you wish to study. If they don’t, you’re free to try other axioms.

The axiom I mentioned three paragraphs back, about selecting exactly one thing from each of a collection of sets? That’s known as the Axiom of Choice. It’s used in the theory of sets. But you don’t have to assume it’s true. Much of set theory stands independent of it. Many set theorists go about their work committing neither to the idea that it’s true nor to the idea that it’s false.

What makes a good set of axioms is rather like what makes a good set of rules for a sport. You do want to have a set that’s reasonably clear. You want them to provide for many interesting consequences. You want them to not have any contradictions. (You settle for them having no contradictions anyone’s found or suspects.) You want them to have as few ambiguities as possible. What makes up that set may evolve as the field, or as the sport, evolves. People do things that weren’t originally thought about. People get more experience and more perspective on the way the rules are laid out. People notice they had been assuming something without stating it. We revise and, we hope, improve the foundations with time.

There’s no guarantee that every set of axioms will produce something interesting. Well, you wouldn’t expect to necessarily get a playable game by throwing together some random collection of rules from several different sports, either. Most mathematicians stick to familiar groups of axioms, for the same reason most athletes stick to sports they didn’t make up. We know from long experience that this set will give us an interesting geometry, or calculus, or topology, or so on.

There’ll never be a standard universal set of axioms covering all mathematics. There are different sets of axioms that directly contradict each other but that are, to the best of our knowledge, internally self-consistent. The axioms that describe geometry on a flat surface, like a map, are inconsistent with those that describe geometry on a curved surface, like a globe. We need both maps and globes. So we have both flat and curved geometries, and we decide what kind fits the work we want to do.

And there’ll never be a complete list of axioms for any interesting field, either. One of the unsettling discoveries of 20th Century logic was of incompleteness. Any set of axioms interesting enough to cover the ability to do arithmetic will have statements that would be meaningful, but that can’t be proven true or false. We might add some of these undecidable things to the set of axioms, if they seem useful. But we’ll always have other things not provably true or provably false.

Reading the Comics, January 8, 2016: Rerun-Heavy Edition


I couldn’t think of what connective theme there might be to the mathematically-themed comic strips of the last couple days. It finally struck me: there’s a lot of reruns in this. That’ll do. Most of them are reruns from before I started writing about comics so much in these parts.

Bill Watterson’s Calvin and Hobbes for the 5th of January (a rerun, of course, from the 7th of January, 1986) is a kid-resisting-the-test joke. The particular form is trying to claim a religious exemption from mathematics tests. I sometimes see attempts to claim that mathematics is a kind of religion since, after all, you have to believe it’s true. I’ll grant that you do have to assume some things without proof. Those are the rules of logical inference, and the axioms of the field, particularly. But I can’t make myself buy a definition of “religion” that’s just “something you believe”.

But there are religious overtones to a lot of mathematics. The field promises knowable universal truths, things that are true regardless of who and in what context might know them. And the study of mathematical infinity seems to inspire thoughts of God. Amir D Aczel’s The Mystery Of The Aleph: Mathematics, the Kabbalah, and the Search for Infinity is a good read on the topic. Addition is still not a kind of religion, though.

'My second boyfriend has a brain as big as a large seedless watermelon.' 'Robert, what is the square root of 2,647,129?' '1627 and how do you get ink stains out of your shirt pocket?'
Bud Grace’s The Piranha Club for the 6th of January, 2016.

Bud Grace’s The Piranha Club for the 6th of January uses the ability to do arithmetic as proof of intelligence. It’s a kind of intelligence, sure. There’s fun to be had in working out a square root in your head, or on paper. But there’s really no need for it now that we’ve got calculator technology, except for what it teaches you about how to compute.

Ruben Bolling’s Super-Fun-Pak Comix for the 6th of January is an installment of A Voice From Another Dimension. It’s just what the title suggests, and of course it would have to be a three-panel comic. The idea that creatures could live in more, or fewer, dimensions of space is a captivating one. It’s challenging to figure how it could work, though. Spaces of one or two dimensions don’t seem like they would allow biochemistry to work. And, as I understand it, chemistry itself seems unlikely to work right in four or more dimensions of space too. But it’s still fun to think about.

David L Hoyt and Jeff Knurek’s Jumble for the 7th of January is a counting-number joke. It does encourage asking whether numbers are created or discovered, which is a tough question. Counting numbers like “four” are so familiar and so apparently universal that they don’t seem to be constructs. (Even if they are, animals have an understanding of at least small counting numbers like these.) But if “four” is somehow not a human construct, then what about “4,000,000,000,000,000,000,000,000,000,000,000”, a number so large it’s hard to think of anything we have that many of to visualize? And even if that number is somehow universal too, “one fourth” seems a bit different from it, and “four i” — the number which, squared, gives us negative 16 — seems qualitatively different still. But if they’re constructs, then why do they correspond so well to things we can see in the real world?

LIHYL (O O - - -), RUCYR (O - - O -), AMDTEN (O - - O O -), GAULEE (- O - O - O). The number that equals four plus four didn't exist until it was `(- - -) (- - - - -) (- -)'. There are dashes between the parentheses in that last answer because there's some wordplay there.

David L Hoyt and Jeff Knurek’s Jumble for the 7th of January, 2016. The link will likely expire around mid-February.

Greg Curfman’s Meg Classics for the 7th of January originally ran the 19th of September, 1997. It’s about a kid distractingly interested in multiplication. You get these sometimes. My natural instinct is to put the bigger number first and the smaller number second in a multiplication. “2 times 27” makes me feel nervous in a way “27 times 2” never will.

Hector D Cantu and Carlos Castellanos’s Baldo for the 8th of January is a rerun from 2011. It’s an old arithmetic joke. I wouldn’t be surprised if George Burns and Gracie Allen did it. (Well, a little surprised. Gracie Allen didn’t tend to play quite that kind of dumb. But everybody tells some jokes that are a little out of character.)

Reading the Comics, December 30, 2015: Seeing Out The Year Edition


There are just enough comic strips with mathematical themes that I feel comfortable doing a last Reading the Comics post for 2015. And as maybe fits that slow week between Christmas and New Year’s, there’s not a lot of deep stuff to write about. But there is a Jumble puzzle.

Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips gives us someone so wrapped up in measuring data as to not notice the obvious. The obvious, though, isn’t always right. This is why statistics is a deep and useful field. It’s why measurement is a powerful tool. Careful measurement and statistical tools give us ways to not fool ourselves. But it takes a lot of sampling, a lot of study, to give those tools power. It can be easy to get lost in the problems of gathering data. Plus numbers have this hypnotic power over human minds. I understand Lard’s problem.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 27th of December messes with a kid’s head about the way we know 1 + 1 equals 2. The classic Principia Mathematica construction builds it out of pure logic. We come up with an idea that we call “one”, and another that we call “plus one”, and an idea we call “two”. If we don’t do anything weird with “equals”, then it follows that “one plus one equals two” must be true. But does the logic mean anything to the real world? Or might we be setting up a game with no relation to anything observable? The punchy way I learned this question was “one cup of popcorn added to one cup of water doesn’t give you two cups of soggy popcorn”. So why should the logical rules that say “one plus one equals two” tell us anything we might want to know about how many apples one has?

Words: LIHWE, CAQUK, COYKJE, TALAFO. Unscramble to - - - O O, O O - - -, - - - - - O, - - O - O -, and solve the puzzle: 'The math teacher liked teaching addition and subtraction - - - - - - -'.
David L Hoyt and Jeff Knurek’s Jumble for the 28th of December, 2015. The link will probably expire in late January 2016.

David L Hoyt and Jeff Knurek’s Jumble for the 28th of December features a mathematics teacher. That’s enough to include here. (You might have an easier time getting the third and fourth words if you reason what the surprise-answer word must be. You can use that to reverse-engineer what letters have to be in the circles.)

Richard Thompson’s Richard’s Poor Almanac for the 28th of December repeats the Platonic Fir Christmas Tree joke. It’s in color this time. Does the color add to the perfection of the tree, or take away from it? I don’t know how to judge.

A butterfly tells another 'you *should* feel guilty --- the flutter of your wings ended up causing a hurricane that claimed thousands of lives!'
Rina Piccolo filling in for Hilary Price on Rhymes With Orange for the 29th of December, 2015. It’s a small thing but I always like the dog looking up in the title panel for Cartoonist Showcase weeks.

Hilary Price’s Rhymes With Orange for the 29th of December gives its panel over to Rina Piccolo. Price often has guest-cartoonist weeks, which is a generous use of her space. Piccolo already has one and a sixth strips — she’s one of the Six Chix cartoonists, and also draws the charming Tina’s Groove — but what the heck. Anyway, this is a comic strip about the butterfly effect. That’s the strangeness by which a deterministic system can still be unpredictable. This counter-intuitive conclusion dates back to the 1890s, when Henri Poincaré was trying to solve the big planetary mechanics question. That question is: is the solar system stable? Is the Earth going to remain in about its present orbit indefinitely far into the future? Or might the accumulated perturbations from Jupiter and the lesser planets someday pitch it out of the solar system? Or, less likely, into the Sun? And the sad truth is, the best we can say is we can’t tell.
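The butterfly effect is easy to see in a toy system. Here's a minimal Python sketch using the logistic map, a standard example of deterministic chaos (my choice of illustration; Poincaré's actual calculations were about celestial mechanics, not this map). Two starting values that agree to nine decimal places end up bearing no resemblance to each other.

```python
# The logistic map x -> 4x(1 - x) is deterministic: the same start
# always produces the same sequence. But nearby starts don't stay nearby.
x = 0.400000000
y = 0.400000001   # differs from x by about one part in 400 million

for step in range(1, 51):
    x = 4 * x * (1 - x)
    y = 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}")
# By around step 30 the two trajectories have nothing to do with each
# other, even though nothing random ever happened.
```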

In Brian Anderson’s Dog Eat Doug for the 30th of December, Sophie ponders some deep questions. Most of them are purely philosophical questions and outside my competence. “What are numbers?” is also a philosophical question, but it feels like something a mathematician ought to have a position on. I’m not sure I can offer a good one, though. Numbers seem to me to be these things which we imagine. They have some properties and obey certain rules when we combine them with other numbers. The most familiar of these numbers and properties correspond with some intuition many animals have about discrete objects. Many times over we’ve expanded the idea of what kinds of things might be numbers without losing the sense of how numbers can interact, somehow. And those expansions have generally been useful. They strangely match things we would like to know about the real world. And we can discover truths about these numbers and these relations that don’t seem to be obviously built into the definitions. It’s almost as if the numbers were real objects with the capacity to surprise and to hold secrets.

Why should that be? The lazy answer is that if we came up with a construct that didn’t tell us anything interesting about the real world, we wouldn’t bother studying it. A truly irrelevant concept would be a couple forgotten papers tucked away in an unread journal. But that is missing the point. It’s like answering “why is there something rather than nothing” with “because if there were nothing we wouldn’t be here to ask the question”. That doesn’t satisfy. Why should it be possible to take some ideas about quantity that ravens, raccoons, and chimpanzees have, then abstract some concepts like “counting” and “addition” and “multiplication” from that, and then modify those concepts, and finally have the modification be anything we can see reflected in the real world? There is a mystery here. I can’t fault Sophie for not having an answer.

Reading the Comics, October 29, 2015: Spherical Squirrel Edition


John Zakour and Scott Roberts’s Maria’s Day is going to Sunday-only publication. A shame, but I understand Zakour and Roberts choosing to focus their energies on better-paying venues. That those venues are “writing science fiction novels” says terrifying things about the economic logic of web comics.

This installment, from the 23rd, is a variation on the joke about the lawyer, or accountant, or consultant, or economist, who carefully asks “what do you want the answer to be?” before giving it. Sports are a rich mine of numbers, though. Mostly they’re statistics, and we might wonder: why does anyone care about sports statistics? Once the score of a game is counted, what else matters? A sociologist and a sports historian are probably needed to give true, credible answers. My suspicion is that it amounts to money, as it ever does. If one wants to gamble on the outcomes of sporting events, one has to have a good understanding of what is likely to happen, and how likely it is to happen. And I suppose if one wants to manage a sporting event, one wants to spend money and time and other resources to best effect. That requires data, and that we see in numbers. And there are so many things that can be counted in any athletic event, aren’t there? All those numbers carry with them a hypnotic pull.

In Darrin Bell’s Candorville for the 24th of October, Lemont mourns how he’s forgotten how to do long division. It’s an easy thing to forget. For one, we have calculators, as Clyde points out. For another, long division ultimately requires we guess at and then try to improve an answer. It can’t be reduced to an operation that will never require back-tracking and trying some part of it again. That back-tracking — say, trying to put 28 into the number seven times, and finding it actually goes at least eight times — feels like a mistake. It feels like the sort of thing a real mathematician would never do.

And that’s completely wrong. Trying an answer, and finding it’s not quite right, and improving on it is perfectly sound mathematics. Arguably it’s the whole field of numerical mathematics. Perhaps students would find long division less haunting if they were assured that it is fine to get a wrong-but-close answer as long as you make it better.
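To make that concrete, here's a minimal Python sketch of the guess-and-improve spirit (my own example, not anything from the strip): Newton's method carries out a division by repeatedly refining a deliberately crude first guess at a reciprocal.

```python
# Division by guess-and-improve: estimate 1/a, check it, refine it.
# The update x -> x * (2 - a*x) roughly squares the error each pass,
# provided the first guess lies between 0 and 2/a.
def reciprocal(a, guess, passes=6):
    x = guess
    for _ in range(passes):
        x = x * (2 - a * x)   # a wrong-but-closer answer every time
    return x

# 517 divided by 28, starting from the crude guess that 1/28 is 0.03.
approx = 517 * reciprocal(28, guess=0.03)
print(approx, 517 / 28)   # both about 18.464285714
```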

John Graziano’s Ripley’s Believe It or Not for the 25th of October talks about the Rubik’s Cube, and all the ways it can be configured. I grant it sounds like 43,252,003,274,489,856,000 is a bit high a count of possible combinations. But it is about what I hear from proper mathematics texts, the ones that talk about group theory, so let’s let it pass.
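If you'd rather check the figure than take Ripley's word for it, the standard counting argument multiplies out in a line of Python. The factor breakdown below is the well-known one from the group-theory texts:

```python
# Rubik's Cube positions: 8 corner pieces arranged 8! ways with 3^7
# free twists (the eighth twist is forced), and 12 edge pieces in
# 12!/2 reachable arrangements with 2^11 free flips.
from math import factorial

positions = factorial(8) * 3**7 * (factorial(12) // 2) * 2**11
print(f"{positions:,}")   # 43,252,003,274,489,856,000 -- Ripley's figure
```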

The Rubik’s Cube gets talked about in group theory, the study of things that work kind of like arithmetic. In this case, turning one of the faces — well, one of the thirds of a face — clockwise or counterclockwise by 90 degrees, so the whole thing stays a cube, works like adding or subtracting one, modulo 4. That is, we pretend the only numbers are 0, 1, 2, and 3, and the numbers wrap around. 3 plus 1 is 0; 3 plus 2 is 1. 1 minus 2 is 3; 1 minus 3 is 2. There are several separate rotations that can be done, each turning a third of each face of the cube. That each face of the cube starts a different color means it’s easy to see how these different rotations interact and create different color patterns. And rotations look easy to understand. We can at least imagine rotating most anything. In the Rubik’s Cube we can look at a lot of abstract mathematics in a handheld and friendly-looking package. It’s a neat thing.
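The modulo-4 arithmetic in that paragraph is easy to verify mechanically; a couple of lines of Python re-check the sums:

```python
# Quarter-turns compose like addition modulo 4: four of them
# return a face to where it started.
print((3 + 1) % 4, (3 + 2) % 4)   # 0 1
print((1 - 2) % 4, (1 - 3) % 4)   # 3 2
```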

Scott Hilburn’s The Argyle Sweater for the 26th of October is really a physics joke. But it uses (gibberish) mathematics as the signifier of “a fully thought-out theory” and that’s good enough for me. Also the talk of a “big boing” made me giggle and I hope it does you too.

Izzy Ehnes’s The Best Medicine Cartoon makes, I believe, its debut for Reading the Comics posts with its entry for the 26th. It’s also the anthropomorphic-numerals joke for the week.

Frank Page’s Bob the Squirrel is struggling under his winter fur this week. On the 27th Bob tries to work out the Newtonian forces involved in rolling about in his condition. And this gives me the chance to share a traditional mathematicians joke and a cliche punchline.

The story goes that a dairy farmer knew he could be milking his cows better. He could surely get more milk, and faster, if only the operations of his farm were arranged better. So he hired a mathematician, to find the optimal way to configure everything. The mathematician toured every part of the pastures, the milking barn, the cows, everything relevant. And then the mathematician set to work devising a plan for the most efficient possible cow-milking operation. The mathematician declared, “First, assume a spherical cow.”

The punch line has become a traditional joke in the mathematics and science fields. As a joke it comments on the folkloric disconnection between mathematicians and practicality. It also comments on the absurd assumptions that mathematicians and scientists will make for the sake of producing a model, and for getting an answer.

The joke within the joke is that it’s actually fine to make absurd assumptions. We do it all the time. All models are simplifications of the real world, tossing away things that may be important to the people involved but that just complicate the work we mean to do. We may assume cows are spherical because that reflects, in a not too complicated way, that while they might choose to get near one another they will also, given the chance, leave one another some space. We may pretend a fluid has no viscosity, because we are interested in cases where the viscosity does not affect the behavior much. We may pretend people are fully aware of the costs, risks, and benefits of any action they wish to take, at least when they are trying to decide which route to take to work today.

That an assumption is ridiculous does not mean the work built on it is ridiculous. We must defend why we expect those assumptions to make our work practical without introducing too much error. We must test whether the conclusions drawn from the assumption reflect what we wanted to model reasonably well. We can still learn something from a spherical cow. Or a spherical squirrel, if that’s the case.

Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 28th of October is a binary numbers joke. It’s the other way to tell the joke about there being 10 kinds of people in the world. (I notice that joke made in the comments on Gocomics.com. That was inevitable.)

Eric the Circle for the 29th of October, this one by “Gilly” again, jokes about mathematics being treated as if quite subject to law. The truth of mathematical facts isn’t subject to law, of course. But the use of mathematics is. It’s obvious, for example, in the setting of educational standards. What things a member of society must know to be a functioning part of it are, western civilization has decided, a subject governments may speak about. Thus what mathematics everyone should know is a subject of legislation, or at least legislation in the attenuated form of regulated standards.

But mathematics is subject to parliament (or congress, or the diet, or what have you) in subtler ways. Mathematics is how we measure debt, that great force holding society together. And measurement again has been (at least in western civilization) a matter for governments. We accept the principle that a government may establish a fundamental unit of weight or fundamental unit of distance. So too may it decide what is a unit of currency, and into how many pieces the unit may be divided. And from this it can decide how to calculate with that currency: if the “proper” price of a thing would be, say, five-ninths of the smallest available bit of currency, then what should the buyer give the seller?

Who cares, you might ask, and fairly enough. I can’t get worked up about the risk that I might overpay four-ninths of a penny for something, nor feel bad that I might cheat a merchant out of five-ninths of a penny. But consider: when Arabic numerals first made their way to the west they were viewed with suspicion. Everyone at the market or the moneylenders’ knew how Roman numerals worked, and could follow addition and subtraction with ease. Multiplication was harder, but it could be followed. Division was a disaster and I wouldn’t swear that anyone has ever successfully divided using Roman numerals, but at least everything else was nice and familiar.

But then suddenly there was this influx of new symbols, only one of them something that had ever been a number before. One of them at least looked like the letter O, but it was supposed to represent a missing quantity. And every calculation on this was some strange gibberish where one unfamiliar symbol plus another unfamiliar symbol turned into yet another unfamiliar symbol or maybe even two symbols. Sure, the merchant or the moneylender said it was easier, once you learned the system. But they were also the only ones who understood the system, and the ones who would profit by making “errors” that could not be detected.

Thus we see governments, even in worldly, trade-friendly city-states like Venice, prohibiting the use of Arabic numerals. Roman numerals may be inferior by every measure, but they were familiar. They stood at least until enough generations passed that the average person could feel “1 + 1 = 2” contained no trickery.

If one sees in this parallels to the problem of reforming mathematics education, all I can offer is that people are absurd, and we must love the absurdness of them.

One last note, so I can get this essay above two thousand words somehow. In the 1910s Alfred North Whitehead and Bertrand Russell published the awesome and menacing Principia Mathematica. This was a project to build arithmetic, and all mathematics, on sound logical grounds utterly divorced from the great but fallible resource of human intuition. They did probably as well as human beings possibly could. They used a bewildering array of symbols and such a high level of abstraction that a needy science fiction movie could put up any random page of the text and pass it off as Ancient High Martian.

But they were mathematicians and philosophers, and so could not avoid a few wry jokes, and one of them comes in Volume II, around page 86 (it’ll depend on the edition you use). There, in Proposition *110.643, Whitehead and Russell establish “1 + 1 = 2” and remark, “the above proposition is occasionally useful”. They note at least three uses in their text alone. (Of course this took so long because they were building a lot of machinery before getting to mere work like this.)

Back in my days as a graduate student I thought it would be funny to put up a mock political flyer, demanding people say “NO ON PROP *110.643”. I was wrong. But the joke is strong enough if you don’t go to the trouble of making up the sign. I didn’t make up the sign anyway.

And to murder my own weak joke: arguably “1 + 1 = 2” is established much earlier, around page 380 of the first volume, in proposition *54.43. The thing is, that proposition warns that “it will follow, when mathematical addition has been defined”, which it hasn’t been at that point. But if you want to say it’s Proposition *54.43 instead go ahead; it will not get you any better laugh.

If you’d like to see either proof rendered as non-head-crushingly as possible, the Metamath Proof Explorer shows the reasoning for Proposition *54.43 as well as that for *110.643. And it contains hyperlinks so that you can try to understand the exact chain of reasoning which comes to that point. Good luck. I come from a mathematical heritage that looks at the Principia Mathematica and steps backward, quickly, before it has the chance to notice us and attack.

Reading the Comics, October 22, 2015: Foundations Edition


I am, yes, saddened to hear that Apartment 3-G is apparently shuffling off to a farm upstate. There it will be visited by a horrifying kangaroo-deer-fox-demon. And an endless series of shots of two talking heads saying they should go outside, when they’re already outside. But there are still many comic strips running, on Gocomics.com and on Comics Kingdom. They’ll continue to get into mathematically themed subjects. And best of all I can use a Popeye strip to talk about the logical foundations of mathematics and what computers can do for them.

Jef Mallett’s Frazz for the 18th of October carries on the strange vendetta against “showing your work”. If you do read through the blackboard-of-text you’ll get some fun little jokes. I like the explanation of how “obscure calculus symbols” could be used, “And a Venn diagram!” Physics majors might notice the graph on the center-right, to the right of the DNA strand. That could show many things, but the one most plausible to me is a plot of the velocity and the position of an object undergoing simple harmonic motion.

Still, I do wonder what work Caulfield would show if the problem were to say what fraction were green apples, if there were 57 green and 912 red apples. There are levels where “well, duh” will not cut it. If “well, duh” does cut it, a mathematician might say the answer is “obvious”. But she may want to avoid the word “obvious”, which has a history of being dangerously flexible. She might then say “by inspection”. That means, basically, look at it and yeah, of course that’s right.

Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 18th of October uses mathematics as the quick way to establish “really smart”. It doesn’t take many symbols this time around, curiously. Superstar Equation E = mc² appears in a misquoted form. At first that seems obvious, since if there were an equals sign in the denominator the whole expression would not parse. Then, though, you notice: if E and m and c mean what they usually do in the Superstar Equation, then, “E – mc²” is equal to zero. It shouldn’t be in the denominator anyway. So, the big guy has to be the egghead.

Peter Maresca’s Origins of the Sunday Comics for the 18th of October reprints one of Winsor McCay’s Dream of the Rarebit Fiend strips. As normal for the Rarebit Fiend and so much of McCay’s best work, it’s a dream-to-nightmare strip. And this one gives a wonderful abundance of numerals, and the odd letter, to play with. Mathematical? Maybe not. But it is so merrily playful it’d be a shame not to include it.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 20th of October is a joke grounded soundly in set theory. It also feels like it’s playing with a set-theory-paradox problem, but I can’t pin down which one exactly. It feels most like the paradox of “find the smallest uninteresting counting number”. But being the smallest uninteresting counting number would be an interesting property to have. So any candidate number has to count as interesting. It also feels like it’s circling the heap paradox. Take a heap of sand and remove one grain, and you still have a heap of sand. But if you keep doing that, at some point you have just one piece of sand from the original pile, and that is no heap. When did it stop being a heap?

Daniel Shelton’s Ben for the 21st of October is a teaching-arithmetic problem, using jellybeans. And fractions. Well, real objects can do wonders in connecting a mathematical abstraction to something one has an intuition for. One just has to avoid unwanted connotations and punching.

Doug Savage’s Savage Chickens for the 21st of October uses “mathematics homework” as the emblem of the hardest kind of homework there might ever be. I saw the punch line coming a long while off, but still laughed.

The Sea Hag is dismissive of scientists, who try to take credit for magic even though 'they can't even THINK! They have to use machines to tell them what two plus two is!! And another machine to prove the first one was right!'
Bud Sagendorf’s Popeye for the 22nd of October, 2015. The strip originally ran sometime in 1981. This would be only a few years after the Four-Color Theorem was solved by computer. The computer did this by trying out all the possibilities and reporting everything was OK.

Bud Sagendorf’s Popeye began what it billed as a new story, “Science Vs Sorcery”, on Monday the 19th. I believe it’s properly a continuation of the previous story, though, “Back-Room Pest!” which began the 13th of July. “Back-Room Pest!”, according to my records, originally ran from the 27th of July, 1981, through to the 23rd of January, 1982. So there’s obviously time missing. And this story, like “Back-Room Pest”, features nutty inventor Professor O G Wotasnozzle. I know, I know, you’re all deeply interested in working out correct story guides for this.

Anyway, the Sea Hag in arguing against scientists claims “they can’t even think! They have to use machines to tell them what two plus two is!! And another machine to prove the first one was right!” It’s a funny line and remarkably pointed for an early-80s Popeye comic. The complaint that computers leave one unable to do even simple reasoning is an old one, of course. The complaint has been brought against every device or technique that promises to lighten a required mental effort. It seems to me similar to the way new kinds of weapons are accused of making war too monstrous and too unchivalrous, too easily done by cowards. I suppose it’s also the way a fable like the story of John Henry holds up human muscle against the indignity of mechanical work.

The crack about needing another machine to prove the first was right is less usual, though. Sagendorf may have meant to be whimsically funny, but he hit on something true. One of the great projects of late 19th and early 20th century mathematics was the attempt to place its foundations on strict logic, independent of all human intuition. (Intuition can be a great guide, but it can lead one astray.) Out of this came a study of proofs as objects, as mathematical constructs which must themselves follow certain rules.

And here we reach a spooky borderland between mathematics and sorcery. We can create a proof system that is, in a way, a language with a grammar. A string of symbols that satisfies all the grammatical rules is itself a proof, a valid argument following from the axioms of the system. (The axioms are some basic set of statements which we declare to be true by assumption.) And it does not matter how the symbols are assembled: by mathematician, by undergrad student worker, by monkey at a specialized typewriter, by a computer stringing things together. Once a grammatically valid string of symbols is done, that string of symbols is a theorem, with its proof written out. The proof is the string of symbols that is the theorem written out. If it were not for the modesty of what is claimed to be done — proofs about arithmetic or geometry or the like — one might think we had left behind mathematics and were now summoning demons by declaring their True Names. Or risk the stars overhead going out, one by one.

So it is possible to create a machine that simply grinds out proofs. Or, since this is the 21st century, a computer that does that. If the computer is given no guidance it may spit out all sorts of theorems that are true but boring. But we can set up a system by which the computer, by itself, works out whether a given theorem does follow from the axioms of mathematics. More, this has been done. It’s a bit of a pain, because any proofs that are complicated enough to really need checking involve an incredible number of steps. But for a challenging enough proof it is worth doing, and automated proof checking is one of the tools mathematicians can now draw on.
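For a taste of what this looks like today, here's a tiny sketch in Lean 4, one modern proof assistant (my illustration; it leans on Lean's built-in natural numbers rather than anything like the Principia's construction). The checker verifies, with no human intuition consulted, that each proof term really follows from the rules:

```lean
-- A statement and a machine-checked proof. `rfl` works because both
-- sides compute to the same value under the definitions of + and 2.
theorem one_plus_one : 1 + 1 = 2 := rfl

-- A statement needing genuine inference: here the proof appeals to a
-- library theorem, itself checked all the way down to the axioms.
theorem add_flips (m n : Nat) : m + n = n + m := Nat.add_comm m n
```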

Of course, then we have the problem of knowing that the computer is carrying out its automatic-proof programming correctly. I’m not stepping into that kind of trouble.

The attempt to divorce mathematics from all human intuition was a fruitful one. The most awe-inspiring discovery to come from it is surely that of incompleteness. Any consistent mathematical system interesting enough to describe arithmetic will contain within it statements that are true, but can’t be proven true from the axioms.

Georgia Dunn’s Breaking Cat News for the 22nd of October features a Venn Diagram. It’s part of how cats attempt to understand toddlers. My understanding is that their work is correct.

The Kind Of Book That Makes Me Want To Refocus On Logic


For my birthday my love gave me John Stillwell’s Roads to Infinity: The Mathematics of Truth and Proof. It was a wonderful read. More, it’s the sort of read that gets me excited about a subject.

The subject in this case is mathematical logic, and specifically the sections of it which describe infinitely large sets, and the provability of theorems. That these are entwined subjects may seem superficially odd. Stillwell explains well how the insights developed in talking about infinitely large sets develop the tools to study whether logical systems are complete and decidable.

At least it explains it well to me. I know I’m not a typical reader. I’m not certain if I would have understood the book as well as I did if I hadn’t had a senior-level course in mathematical logic. And that was a long time ago, but it was also the only mathematics course which described approaches to killing the Hydra. Stillwell’s book talks about it too and I admit I appreciate the refresher. (Yeah, this is not a literal magical all-but-immortal multi-headed beast mathematicians deal with. It’s also not the little sea creature. What mathematicians mean by a ‘hydra’ is a branching graph which looks kind of like a grape vine, and by ‘slaying’ it we mean removing branches according to particular rules that make it not obvious that we’ll ever get to finish.)

I appreciate also — maybe as much as I liked the logic — the historical context. The development of how mathematicians understand infinity and decidability is the sort of human tale that people don’t realize even exists. One of my favorite sections mentioned a sequence in which great minds, Gödel among them, took turns not understanding the reasoning behind some new important and now-generally-accepted breakthroughs.

So I’m left feeling I want to recommend the book, although I’m not sure who to. It’s obviously a book that scouts out mathematical logic in ways that make sense if you aren’t a logician. But it uses — as it must — the notation and conventions and common concepts of mathematical logic. My love, a philosopher by trade, would probably have no trouble understanding any particular argument, and would probably pick up symbols as they’re introduced. But there’d have to be a lot of double-checking notes about definitions. And the easy familiarity with non-commutative multiplication is a mathematics-major thing, and to a lesser extent a physics-major thing. Someone without that background would fairly worry something weird was going on other than the weirdness that was going on.

Anyway, the book spoke to a particular kind of mathematics I’d loved and never had the chance to do much with. If this is a field you feel some love for, and have some training in, then it may be right for you.

Reading the Comics, September 10, 2015: Back To School Edition


I assume that Comic Strip Master Command ordered many mathematically-themed comic strips to coincide with the United States school system getting back up to full speed. That or they knew I’d have a busy week. This is only the first batch of the comic strips that have appeared since Tuesday.

Mel Henze’s Gentle Creatures for the 7th and the 8th of September use mathematical talk to fill out the technobabble. It’s a cute enough notion. These particular strips ran last year, and I talked about them then. The talk of a “Lagrangian model” interests me. It name-checks a real and important and interesting scientist who’s not Einstein or Stephen Hawking. But I’m still not aware of any “Lagrangian model” that would be relevant to starship operations.

Jon Rosenberg’s Scenes from a Multiverse for the 7th of September speaks of a society of “powerful thaumaturgic diagrammers” who used Venn diagrams not wisely but too well. The diagrammers got into trouble when one made “a Venn diagram that showed the intersection of all the Venns and all the diagrams”. I imagine this not to be a rigorous description of what happened. But Venn diagrams match up well with many logic problems. And self-referential logic, logic statements that describe their own truth or falsity, is often problematic. So I would accept a story in which Venn diagrams about Venn diagrams lead to trouble. The motif of tying logic and mathematics into magic is an old one. I understand it. A clever mathematical argument often feels like magic, especially the surprising ones. To me, the magical theorems are those whose proofs first establish a set of seemingly irrelevant lemmas. Then, with that stock in hand, the proof goes on to the main point in a few wondrous lines. If you can do that, why not transmute lead, or accidentally retcon a society out of existence?

Mark Anderson’s Andertoons for the 8th of September just delights me. Occasionally I feel a bit like Mark Anderson’s volunteer publicity department. A panel like this, though, makes me feel that he deserves it.

Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 8th of September is the first anthropomorphic-geometric-figures joke we’ve had here in a while.

Mike Baldwin’s Cornered for the 9th of September is a drug testing joke, and a gambling joke. Both are subjects driven by probabilities. Any truly interesting system is always changing. If we want to know whether something affects the system we have to know whether we can make a change that’s bigger than the system does on its own. And this gives us drug-testing and other statistical inference tests. If we apply a drug, or some treatment, or whatever, how does the system change? Does it change enough, consistently, that it’s not plausible that the change just happened by chance? Or by some other influence?

You might have noticed a controversy going around psychology journals. A fair number of experiments were re-run, by new experimenters following the original protocols as closely as possible. Quite a few of the reported results didn’t happen again, or happened in a weaker way. That’s produced some handwringing. No one thinks deliberate experimental fraud is that widespread in the field. There may be accidental fraud, people choosing data or analyses that heighten the effect they want to prove, or that pick out any effect. However, it may also simply be chance again. Psychology experiments tend to have a lower threshold of “this is sufficiently improbable that it indicates something is happening” than, say, physics has. Psychology has a harder time getting the raw data. A supercollider has enormous startup costs, but you can run the thing for as long as you like. And every electron is the same thing. A test of how sleep deprivation affects driving skills? That’s hard. No two sleepers or drivers are quite alike, even at different times of the day. There’s not an obvious cure. Independent replication of previously done experiments helps. That’s work that isn’t exciting — necessary as it is, it’s also repeating what others did — and it’s harder to get people to do it, or pay for it. But in the meantime it’s harder to be sure what interesting results to trust.
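The "simply chance again" point can be made concrete with a toy simulation (entirely my own sketch in Python, not a model of any particular psychology experiment): run a thousand studies in which the treatment truly does nothing, and the usual five-percent threshold flags something like fifty of them anyway.

```python
import random
import statistics

# 1000 "experiments" comparing two groups drawn from the SAME
# distribution: any "significant" difference is pure chance.
random.seed(2015)
flagged = 0
for _ in range(1000):
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    se = (statistics.variance(a) / 30 + statistics.variance(b) / 30) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    if abs(z) > 1.96:   # the traditional five-percent cutoff
        flagged += 1

print(flagged)   # usually lands somewhere near 50 of the 1000
```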

Ruben Bolling’s Super-Fun-Pak Comix for the 9th of September is another Chaos Butterfly installment. I don’t want to get folks too excited for posts I technically haven’t written yet, but there is more Chaos Butterfly soon.

Rick Stromoski’s Soup To Nutz for the 10th of September has Royboy guess the odds of winning a lottery are 50-50. Silly, yes, but only because we know that anyone is much more likely to lose a lottery than to win it. But then how do we know that?

Since the rules of a lottery are laid out clearly we can reason about the probability of winning. We can calculate the number of possible outcomes of the game, and how many of them count as winning. Suppose each of those possible outcomes is equally likely. Then the probability of winning is the number of winning outcomes divided by the number of possible outcomes. Quite easy.
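Here's that recipe carried out in Python for a hypothetical pick-six-of-49 drawing (an assumed format on my part, not whatever lottery Royboy plays):

```python
from math import comb

# Every equally likely way six numbers can be drawn from 49,
# and the single one of them that matches a given ticket.
possible = comb(49, 6)
print(f"{possible:,}")    # 13,983,816
print(1 / possible)       # about 7.2e-08, nowhere near 50-50
```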

— Of course, that’s exactly what Royboy did. There are two possible outcomes, winning or losing. Lacking reason to think they aren’t equally likely he concluded a win and a loss were just as probable.

We have to be careful what we mean by “an outcome”. What we probably mean for a drawn-numbers lottery is the number of ways the lottery numbers can be drawn. For a scratch-off card we mean the number of tickets that can be printed. But we’re still stuck with this idea of “equally likely” outcomes. I suspect we know what we mean by this, but trying to say what that is clearly, and without question-begging, is hard. And even this works only because we know the rules by which the lottery operates. Or we can look them up. If we didn’t know the details of the lottery’s workings, past the assumption that it has consistently followed rules, what could we do?

Well, that’s what we have probability classes for, and particularly the field of Bayesian probability. This field tries to estimate the probabilities of things based on what actually happens. Suppose Royboy played the lottery fifty times and lost every time. That would smash the idea that his chances were 50-50, although that would not yet tell him what the chances really are.
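A minimal sketch of that updating, using the standard Beta-prior bookkeeping (my toy numbers, matched to the fifty hypothetical losses above):

```python
from fractions import Fraction

# Start agnostic: a flat Beta(1, 1) prior over the win probability.
# Each loss bumps the second parameter; each win would bump the first.
wins, losses = 0, 50
alpha, beta = 1 + wins, 1 + losses

# The posterior mean is alpha / (alpha + beta).
print(Fraction(alpha, alpha + beta))   # 1/52: far below even odds
```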

Reading the Comics, September 6, 2015: September 6, 2015 Edition


Well, we had another of those days where Comic Strip Master Command ordered everybody to do mathematics jokes. I’ll survive.

Henry is frustrated with his arithmetic, until he goes to the pool hall and counts off numbers on those score chips.
Don Trachte’s Henry for the 6th of September, 2015.

Don Trachte’s Henry is a reminder that arithmetic, like so many things, is easier to learn when you’re comfortable with the context. Personally I’ve never understood why some of the discs on pool scoring racks are different colors but imagine it relates to scoring values, somehow. I’ve encountered multiple people who assume I must be good at pool, since it’s all geometry, and what isn’t just geometry is physics. I’ve disappointed them all so far.

Tony Rubino and Gary Markstein’s Daddy’s Home uses arithmetic as an example of joy-crushing school drudgery. It could’ve as easily been asking the capital of Montana.

Scott Adams’s Dilbert Classics, a rerun from the 29th of June, 1992, has Dilbert make a breakthrough in knot theory. The fundamental principle is correct: there are many knots that one could use for tying shoelaces, just as there are many knots that could be used for tying ties. Discovering new ones is a good way for knot theorists to get a bit of harmless publicity. Nobody needs them. From a knot-theory perspective it also doesn’t matter if you lace the shoe’s holes crosswise or ladder-style. There are surely other ways to lace the holes, too, but nobody needs them either.

Maria Scrivan’s Half Full uses a blackboard full of mathematical symbols and name-drops Common Core. Fifty years ago this same joke was published, somewhere, with “Now solve it using the New Math” as punchline. Thirty years from now it will run again, with “Now solve it using the (insert name here)” as punchline. Some things are eternal truths.

T Lewis and Michael Fry’s Over The Hedge presents one of those Cretan paradox-style logic problems. Anyway, I choose to read it as such. I’m tickled by it.

And to close things out, both Leigh Rubin’s Rubes and Mikael Wulff and Anders Morgenthaler’s WuMo did riffs on the story of Newton and the falling apple. Is this truly mathematically-themed? Well, it’s tied to the legend of calculus’s origin, so that’s near enough for me.
