I’m slow about sharing them is all. It’s a simple dynamic: I want to write enough about each tweet that it’s interesting to share, and then once a little time has passed, I need to do something more impressive to be worth the wait. Eventually, nothing is ever shared. Let me try to fix that.
Just as it says: a link to Leonhard Euler’s Elements of Algebra, as rendered by Google Books. Euler you’ll remember from every field of mathematics ever. This 1770 textbook is one of the earliest that presents algebra that looks like, you know, algebra, the way we study it today. Much of that is because this book presented algebra so well that everyone wanted to imitate it.
This Theorem of the Day from back in November already is one about elliptic functions. Those came up several times in the Summer 2017 Mathematics A To Z. This day’s, about the Goins-Maddox-Rusin Theorem on Heron Triangles, is dense reading even by the standards of the Theorem of the Day tweet (which fits each day’s theorem into a single slide). Still, it’s worth lounging about in the mathematics.
Elke Stangl, writing about one of those endlessly-to-me interesting subjects: phase space. This is a particular way of representing complicated physical systems. Set it up right and all sorts of physics problems become, if not easy, at least things there’s a standard set of tools for. Thermodynamics really encourages learning about such phase spaces, and about entropy, and here she writes about some of this.
Non-limit calculating e by hand. https://t.co/Kv80RotboJ Fun activity & easily reproducible. Anyone know the author?
So ‘e’ is an interesting number. At least, it’s a number that’s got a lot of interesting things built around it. Here, John Golden points out a neat, fun, and inefficient way to find the value of ‘e’. It’s kin to that scheme for calculating π inefficiently that I was being all curmudgeonly about a couple of Pi Days ago.
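I can’t reproduce the tweet’s exact activity here, but one classic, cheerfully inefficient scheme works like this: draw random numbers between 0 and 1 until their sum passes 1, and the average number of draws you needed turns out to be ‘e’. A sketch, with a fixed seed so it’s repeatable:

```python
import math
import random

def estimate_e(trials: int, rng: random.Random) -> float:
    """Estimate e as the average number of uniform(0,1) draws
    needed before their running sum exceeds 1."""
    total_draws = 0
    for _ in range(trials):
        running_sum = 0.0
        draws = 0
        while running_sum <= 1.0:
            running_sum += rng.random()
            draws += 1
        total_draws += draws
    return total_draws / trials

estimate = estimate_e(200_000, random.Random(17))
print(estimate, math.e)  # the estimate hovers near 2.71828
```

Two hundred thousand trials gets you maybe three correct decimal places, which is the point: it’s a terrible way to compute ‘e’ and a fine way to spend an afternoon.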
Jo Morgan comes to the rescue of everyone who tries to read old-time mathematics. There were a lot of great and surprisingly readable minds publishing in the 19th century, but then you get partway through a paragraph and it might as well be Old High Martian with talk about diminishings and consequents and so on. So here’s some help.
For college students that will be taking partial differential equations next semester, here is a very good online book. https://t.co/txtfbMaRKc
As it says on the tin: a textbook on partial differential equations. If you find yourself adrift in the subject, maybe seeing how another author addresses the same subject will help, if nothing else for finding something familiar written in a different fashion.
Here's a cool way to paper-fold an ellipse:
1) Cut a circle and fold it so that the circumference falls on a fixed point inside 2) Repeat this procedure using random folds pic.twitter.com/TAU50pvgll
And this is just fun: creating an ellipse as the locus of points that are never on the fold line when a circle’s folded by a particular rule.
Finally, something whose tweet origin I lost. It was from one of the surprisingly many economists I follow considering I don’t do financial mathematics. But it links to a bit of economic history: Origins of the Sicilian Mafia: The Market for Lemons. It’s 31 pages plus references. And more charts about wheat production in 19th century Sicily than I would have previously expected to see.
By the way, if you’re interested in me on Twitter, that would be @Nebusj. Thanks for stopping in, should you choose to.
I remain fascinated with Józef Maria Hoëne-Wronski’s attempted definition of π. It had started out like this:

$\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} - \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}$
And I’d translated that into something that modern mathematicians would accept without flinching. That is to evaluate the limit of a function that looks like this:

$f(x) = -4 \imath x \left( \left(1 + \imath\right)^{\frac{1}{x}} - \left(1 - \imath\right)^{\frac{1}{x}} \right)$
So. I don’t want to deal with that f(x) as it’s written. I can make it better. One thing that bothers me is seeing the complex number $1 + \imath$ raised to a power. I’d like to work with something simpler than that. And I can’t see that number without also noticing that I’m subtracting from it $1 - \imath$ raised to the same power. $1 + \imath$ and $1 - \imath$ are a “conjugate pair”. It’s usually nice to see those. It often hints at ways to make your expression simpler. That’s one of those patterns you pick up from doing a lot of problems as a mathematics major, and that then look like magic to the lay audience.
Here’s the first way I figure to make my life simpler. It’s in rewriting that $1 + \imath$ and $1 - \imath$ stuff so it’s simpler. It’ll be simpler by using exponentials. Shut up, it will too. I get there through Gauss, Descartes, and Euler.
At least I think it was Gauss who pointed out how you can match complex-valued numbers with points on the two-dimensional plane. On a sheet of graph paper, if you like. The number $1 + \imath$ matches to the point with x-coordinate 1, y-coordinate 1. The number $1 - \imath$ matches to the point with x-coordinate 1, y-coordinate -1. Yes, yes, this doesn’t sound like much of an insight Gauss had, but his work goes on. I’m leaving it off here because that’s all that I need for right now.
So these two numbers that offended me I can think of as points. They have Cartesian coordinates (1, 1) and (1, -1). But there’s never only one coordinate system for something. There may be only one that’s good for the problem you’re doing. I mean that makes the problem easier to study. But there are always infinitely many choices. For points on a flat surface like a piece of paper, and where the points don’t represent any particular physics problem, there’s two good choices. One is the Cartesian coordinates. In it you refer to points by an origin, an x-axis, and a y-axis. How far is the point from the origin in a direction parallel to the x-axis? (And in which direction? This gives us a positive or a negative number) How far is the point from the origin in a direction parallel to the y-axis? (And in which direction? Same positive or negative thing.)
The other good choice is polar coordinates. For that we need an origin and a positive x-axis. We refer to points by how far they are from the origin, heedless of direction. And then to get direction, what angle the line segment connecting the point with the origin makes with the positive x-axis. The first of these numbers, the distance, we normally label ‘r’ unless there’s compelling reason otherwise. The other we label ‘θ’. ‘r’ is always going to be a positive number or, possibly, zero. ‘θ’ might be any number, positive or negative. By convention, we measure angles so that positive numbers are counterclockwise from the x-axis. I don’t know why. I guess it seemed less weird for, say, the point with Cartesian coordinates (0, 1) to have a positive angle rather than a negative angle. That angle would be $\frac{\pi}{2}$, because mathematicians like radians more than degrees. They make other work easier.
So. The point $1 + \imath$ corresponds to the polar coordinates $r = \sqrt{2}$ and $\theta = \frac{\pi}{4}$. The point $1 - \imath$ corresponds to the polar coordinates $r = \sqrt{2}$ and $\theta = -\frac{\pi}{4}$. Yes, the θ coordinates being negative one times each other is common in conjugate pairs. Also, if you have doubts about my use of the word “the” before “polar coordinates”, well-spotted. If you’re not sure about that thing where ‘r’ is not negative, again, well-spotted. I intend to come back to that.
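Those polar coordinates are easy to spot-check with Python’s standard cmath module, which converts a complex number to its (r, θ) pair directly:

```python
import cmath
import math

# cmath.polar returns the (r, θ) pair for a complex number,
# with θ chosen in the interval (-π, π].
r1, theta1 = cmath.polar(1 + 1j)
r2, theta2 = cmath.polar(1 - 1j)

print(r1, theta1)  # ≈ 1.41421, 0.78540: that is, √2 and π/4
print(r2, theta2)  # ≈ 1.41421, -0.78540: that is, √2 and -π/4
```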
With the polar coordinates ‘r’ and ‘θ’ to describe a point I can go back to complex numbers. I can match the point to the complex number with the value given by $r e^{\imath\theta}$, where ‘e’ is that old 2.71828something number. Superficially, this looks like a big dumb waste of time. I had some problem with imaginary numbers raised to powers, so now, I’m rewriting things with a number raised to imaginary powers. Here’s why it isn’t dumb.
It’s easy to raise a number written like this to a power. $r e^{\imath\theta}$ raised to the n-th power is going to be equal to $r^n e^{\imath\theta\cdot n}$. (Because $\left(a^b\right)^c = a^{b\cdot c}$ and we’re going to go ahead and assume this stays true if ‘b’ is a complex-valued number. It does, but you’re right to ask how we know that.) And this turns into raising a real-valued number to a power, which we know how to do. And it involves multiplying a number by that power, which is also easy.
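A numeric spot-check of that power rule, raising $1 + \imath$ to the fifth power both directly and through its polar form:

```python
import cmath

z = 1 + 1j
r, theta = cmath.polar(z)   # r = √2, θ = π/4
n = 5

direct = z ** n                                    # Cartesian form, raised directly
via_polar = (r ** n) * cmath.exp(1j * theta * n)   # r^n · e^(iθn)

print(direct, via_polar)  # both come out to -4-4j, up to rounding
```

Both routes land on the same number, which is the whole sales pitch for the exponential form: the hard-looking power became one real power and one multiplication.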
And we can get back to something that looks like $a + b\imath$ too. That is, something that’s a real number plus $\imath$ times some real number. This is through one of the many Euler’s Formulas. The one that’s relevant here is that $e^{\imath\phi} = \cos\left(\phi\right) + \imath \sin\left(\phi\right)$ for any real number ‘φ’. So, that’s true also for ‘θ’ times ‘n’. Or, looking to where everybody knows we’re going, also true for ‘θ’ divided by ‘x’.
OK, on to the people so anxious about all this. I talked about the angle made between the line segment that connects a point and the origin and the positive x-axis. “The” angle. “The”. If that wasn’t enough explanation of the problem, mention how your thinking’s done a 360 degree turn and you see it different now. In an empty room, if you happen to be in one. Your pedantic know-it-all friend is explaining it now. There’s an infinite number of angles that correspond to any given direction. They’re all separated by 360 degrees or, to a mathematician, 2π.
And more. What’s the difference between going out five units of distance in the direction of angle 0 and going out minus-five units of distance in the direction of angle -π? That is, between walking forward five paces while facing east and walking backward five paces while facing west? Yeah. So if we let ‘r’ be negative we’ve got twice as many infinitely many sets of coordinates for each point.
This complicates raising numbers to powers. θ times n might match with some point that’s very different from θ-plus-2-π times n. There might be a whole ring of powers. This seems … hard to work with, at least. But it’s, at heart, the same problem you get thinking about the square root of 4 and concluding it’s both plus 2 and minus 2. If you want “the” square root, you’d like it to be a single number. At least if you want to calculate anything from it. You have to pick out a preferred θ from the family of possible candidates.
For me, that’s whatever set of coordinates has ‘r’ that’s positive (or zero), and that has ‘θ’ between -π and π. Or between 0 and 2π. It could be any strip of numbers that’s 2π wide. Pick what makes sense for the problem you’re doing. It’s going to be the strip from -π to π. Perhaps the strip from 0 to 2π.
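Python’s cmath has already made this choice for you, as a sketch shows: its phase function always reports the representative angle from the strip between -π and π, no matter how many extra turns you fed in.

```python
import cmath
import math

# r = 5, θ = π/3 plus a full extra turn: same point on the plane.
same_point = cmath.rect(5, math.pi / 3 + 2 * math.pi)

# cmath.phase hands back the preferred angle, in (-π, π].
print(cmath.phase(same_point))  # ≈ 1.0472, which is π/3, not π/3 + 2π
```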
What this all amounts to is that I can turn this:

$f(x) = -4 \imath x \left( \left(1 + \imath\right)^{\frac{1}{x}} - \left(1 - \imath\right)^{\frac{1}{x}} \right)$

into this:

$f(x) = -4 \imath x \left( \left(\sqrt{2} e^{\imath \frac{\pi}{4}}\right)^{\frac{1}{x}} - \left(\sqrt{2} e^{-\imath \frac{\pi}{4}}\right)^{\frac{1}{x}} \right)$

without changing its meaning any. Raising a number to the one-over-x power looks different from raising it to the n power. But the work isn’t different. The function I wrote out up there is the same as this function:

$f(x) = -4 \imath x \left( \sqrt{2}^{\frac{1}{x}} e^{\imath \frac{\pi}{4x}} - \sqrt{2}^{\frac{1}{x}} e^{-\imath \frac{\pi}{4x}} \right)$
I can’t look at that number, $\sqrt{2}^{\frac{1}{x}}$, sitting there, multiplied by two things added together, and leave that. (OK, subtracted, but same thing.) I want to something something distributive law something and that gets us here:

$f(x) = -4 \imath x \sqrt{2}^{\frac{1}{x}} \left( e^{\imath \frac{\pi}{4x}} - e^{-\imath \frac{\pi}{4x}} \right)$
Also, yeah, that square root of two raised to a power looks weird. I can turn that square root of two into “two to the one-half power”. That gets to this rewrite:

$f(x) = -4 \imath x 2^{\frac{1}{2x}} \left( e^{\imath \frac{\pi}{4x}} - e^{-\imath \frac{\pi}{4x}} \right)$
And then. Those parentheses. e raised to an imaginary number minus e raised to minus-one-times that same imaginary number. This is another one of those magic tricks that mathematicians know because they see it all the time. Part of what we know from Euler’s Formula, the one I waved at back when I was talking about coordinates, is this:

$e^{\imath\phi} - e^{-\imath\phi} = 2 \imath \sin\left(\phi\right)$
That’s good for any real-valued φ. For example, it’s good for the number $\frac{\pi}{4x}$. And that means we can rewrite that function into something that, finally, actually looks a little bit simpler. It looks like this:

$f(x) = 8 \cdot 2^{\frac{1}{2x}} \cdot x \sin\left(\frac{\pi}{4x}\right)$
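If I’ve carried the arithmetic right, the simplified form $8 \cdot 2^{\frac{1}{2x}} \cdot x \sin\left(\frac{\pi}{4x}\right)$ is real-valued, which makes it easy to poke at numerically. A quick sketch of where it’s headed as x grows:

```python
import math

def f(x: float) -> float:
    # 8 · 2^(1/(2x)) · x · sin(π/(4x))
    return 8 * 2 ** (1 / (2 * x)) * x * math.sin(math.pi / (4 * x))

for x in (10, 100, 1_000, 1_000_000):
    print(x, f(x))   # the values creep toward about 6.2832
```

Make of that number what you will; the limit-taking comes next.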
And that’s the function whose limit I want to take at ∞. No, really.
A Diophantine equation is a polynomial. Well, of course it is. It’s an equation, or a set of equations, setting one polynomial equal to another. Possibly equal to a constant. What makes this different from “any old equation” is the coefficients. These are the constant numbers that you multiply the variables, your $x$ and $y$ and $x^2$ and $z^8$ and so on, by. To make a Diophantine equation all these coefficients have to be integers. You know one well: $x^n + y^n = z^n$, because it’s that thing that Fermat’s Last Theorem is all about. And you’ve probably seen $ax + by = c$. It turns up a lot because that’s a line, and we do a lot of stuff with lines.
Diophantine equations are interesting. There are a couple of cases that are easy to solve. I mean, at least that we can find solutions for. $x^2 + y^2 = z^2$, for example, that’s easy to solve. $x^n + y^n = z^n$ it turns out we can’t solve. Well, we can if n is equal to 1 or 2. Or if x or y or z are zero. These are obvious, that is, they’re quite boring. That one took about four hundred years to solve, and the solution was “there aren’t any solutions”. This may convince you of how interesting these problems are. What, from looking at it, tells you that $x^2 + y^2 = z^2$ is simple while $x^n + y^n = z^n$ is (most of the time) impossible?
I don’t know. Nobody really does. There are many kinds of Diophantine equation, all different-looking polynomials. Some of them are special one-off cases. For example, there’s $x^4 + y^4 + z^4 = w^4$ for some integers x, y, z, and w. Leonhard Euler conjectured this equation had only boring solutions. You’ll remember Euler. He wrote the foundational work for every field of mathematics. It turns out he was wrong. It has infinitely many interesting solutions. But the smallest one is $95800^4 + 217519^4 + 414560^4 = 422481^4$ and that one took a computer search to find. We can forgive Euler not noticing it.
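That smallest interesting solution, Roger Frye’s, found by computer search, is at least easy to verify: Python’s integers are exact, so the check involves no rounding at all.

```python
# Frye's counterexample to Euler's conjecture for fourth powers.
# Python uses arbitrary-precision integers, so this is an exact check.
lhs = 95800 ** 4 + 217519 ** 4 + 414560 ** 4
rhs = 422481 ** 4
print(lhs == rhs)  # True
```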
Some are groups of equations that have similar shapes. There’s the Fermat’s Last Theorem formula, for example, which is a different equation for every different integer n. Then there’s what we call Pell’s Equation. This one is $x^2 - D y^2 = 1$ (or equals -1), for some counting number D. It’s named for the English mathematician John Pell, who did not discover the equation (even in the Western European tradition; Indian mathematicians were familiar with it for a millennium), did not solve the equation, and did not do anything particularly noteworthy in advancing human understanding of the solution. Pell owes his fame in this regard to Leonhard Euler, who misunderstood Pell’s revising a translation of a book discussing a solution for Pell’s authoring a solution. I confess Euler isn’t looking very good on Diophantine equations.
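To get a feel for how stubborn Pell’s Equation is, here’s a sketch of the naive hunt for the smallest solution (the proper tool is continued fractions; this just tries every y in turn, and the function name is my own):

```python
import math

def smallest_pell_solution(D: int) -> tuple[int, int]:
    """Naive search for the fundamental solution of x² - D·y² = 1,
    with D a nonsquare counting number."""
    y = 1
    while True:
        x_squared = D * y * y + 1
        x = math.isqrt(x_squared)      # exact integer square root
        if x * x == x_squared:         # was x_squared a perfect square?
            return x, y
        y += 1

print(smallest_pell_solution(2))   # (3, 2):  3² - 2·2² = 1
print(smallest_pell_solution(13))  # (649, 180), already surprisingly large
# D = 61 is famous: its smallest solution is x = 1766319049, y = 226153980,
# far beyond what this naive search should be asked to find.
```

The wild swings in the size of the smallest solution as D creeps along are part of the equation’s charm.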
But nobody looks very good on Diophantine equations. Make up a Diophantine equation of your own. Use whatever whole numbers, positive or negative, that you like for your equation. Use whatever powers of however many variables you like for your equation. So you get something that looks maybe like this:

$x^5 - 6 x^2 y^3 + 4 y^2 z^8 = 120$
Does it have any solutions? I don’t know. Nobody does. There isn’t a general all-around solution. You know how with a quadratic equation we have this formula where you recite some incantation about “b squared minus four a c” and get any roots that exist? Nothing like that exists for Diophantine equations in general. Specific ones, yes. But they’re all specialties, crafted to fit the equation that has just that shape.
So for each equation we have to ask: is there a solution? Is there any solution that isn’t obvious? Are there finitely many solutions? Are there infinitely many? Either way, can we find all the solutions? And we have to answer them anew. What answers these have? Whether answers are known to exist? Whether answers can exist? We have to discover anew for each kind of equation. Knowing answers for one kind doesn’t help us for any others, except as inspiration. If some trick worked before, maybe it will work this time.
There are a couple usually reliable tricks. Can the equation be rewritten in some way that it becomes the equation for a line? If it can we probably have a good handle on any solutions. Can we apply modulo arithmetic to the equation? If we can, we might be able to reduce the number of possible solutions that the equation has. In particular we might be able to reduce the number of possible solutions until we can just check every case. Can we use induction? That is, can we show there’s some parameter for the equations, and that knowing the solutions for one value of that parameter implies knowing solutions for larger values? And then find some small enough value we can test it out by hand? Or can we show that if there is a solution, then there must be a smaller solution, and smaller yet, until we can either find an answer or show there aren’t any? Sometimes. Not always. The field blends seamlessly into number theory. And number theory is all sorts of problems easy to pose and hard or impossible to solve.
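The modulo trick in miniature: every square is 0 or 1 modulo 4, so an equation like $x^2 + y^2 = 4k + 3$ is dead on arrival, and no search is needed. A sketch of the residue bookkeeping:

```python
# Every square is congruent to 0 or 1 modulo 4 ...
square_residues = {(n * n) % 4 for n in range(4)}
print(square_residues)  # {0, 1}

# ... so a sum of two squares is 0, 1, or 2 modulo 4, never 3.
sum_residues = {(a + b) % 4 for a in square_residues for b in square_residues}
print(3 in sum_residues)  # False: x² + y² = 4k + 3 has no integer solutions
```

Checking four residues settles infinitely many candidate solutions at once, which is the whole appeal of the trick.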
We name these equations after Diophantus of Alexandria, a 3rd century Greek mathematician. His writings, what we have of them, discuss how to solve equations. Not general solutions, the way we might want to solve $ax + by = c$ for whatever a, b, and c might be, but specific ones, like how to split 16 into a sum of two squares. His books are among those whose rediscovery shaped the rebirth of mathematics. Pierre de Fermat scribbled his famous note in the too-small margins of Diophantus’s Arithmetica. (Well, a popular translation.)
But the field predates Diophantus, at least if we look at specific problems. Of course it does. In mathematics, as in life, any search for a source ends in a vast, marshy ambiguity. The field stays vital. If we loosen ourselves to looking at inequalities — $x^2 + 2y^2 \le 24$, let's say — then we start seeing optimization problems. What values of x and y will make this equation most nearly true? What values will come closest to satisfying this bunch of equations? The questions are about how to find the best possible fit to whatever our complicated sets of needs are. We can't always answer. We keep searching.
Graph theory arises whenever we have a bunch of things that can be connected. We call the things “vertices”, because that’s a good corner-type word. The connections we call “edges”, because that’s a good connection-type word. It’s easy to create graphs that look like the edges of a crystal, especially if you draw edges as straight as much as possible. You don’t have to. You can draw them curved. Then they look like the scary tangle of wires around your wireless router.
Graph theory really got organized in the 19th century, and went crazy in the 20th. It turns out there’s lots of things that connect to other things. Networks, whether computers or social or thematically linked concepts. Anything that has to be delivered from one place to another. All the interesting chemicals. Anything that could be put in a pipe or taken on a road has some graph theory thing applicable to it.
A lot of graph theory ponders loops. The original problem was about how to use every bridge, every edge, exactly one time. Look at a tangled mass of a graph and it’s hard not to start looking for loops. They’re often interesting. It’s not easy to tell if there’s a loop that lets you get to every vertex exactly once.
What if there aren’t loops? What if there aren’t any vertices you can step away from and get back to by another route? Well, then you have a tree.
A tree’s a graph where all the vertices are connected so that there aren’t any closed loops. We normally draw them with straight lines, the better to look like actual trees. We then stop trying to make them look like actual trees by doing stuff like drawing them as a long horizontal spine with a couple branches sticking off above and below, or as * type stars, or H shapes. They still correspond to real-world things. If you’re not sure how, consider the layout of one of those long, single-corridor hallways as in a hotel or dormitory. The rooms connect to one another as a tree once again, as long as no room opens to anything but its own closet or bathroom or the central hallway.
We can talk about the radius of a graph. That’s how many edges away any point can be from the center of the tree. And every tree has a center. Or two centers. If it has two centers they share an edge between the two. And that’s one of the quietly amazing things about trees to me. However complicated and messy the tree might be, we can find its center. How many things allow us that?
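Finding that center is a pleasantly mechanical process: strip away all the leaves, then strip away the new leaves, and keep going until one or two vertices remain. A sketch, on a little made-up tree of my own:

```python
def tree_centers(adjacency: dict) -> set:
    """Repeatedly prune leaves; the last one or two vertices
    standing are the center(s) of the tree."""
    # Work on a copy so the caller's adjacency survives.
    adj = {v: set(ns) for v, ns in adjacency.items()}
    while len(adj) > 2:
        leaves = [v for v, ns in adj.items() if len(ns) <= 1]
        for leaf in leaves:
            for neighbor in adj[leaf]:
                adj[neighbor].discard(leaf)
            del adj[leaf]
    return set(adj)

# A little spine 1-2-3-4-5, with vertex 6 branching off of 3.
tree = {1: {2}, 2: {1, 3}, 3: {2, 4, 6}, 4: {3, 5}, 5: {4}, 6: {3}}
print(tree_centers(tree))  # {3}
```

Run it on a path with an even number of vertices and you get the two-center case, the pair sharing an edge.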
A tree might have some special vertex. That’s called the ‘root’. It’s what the vertices and the connections represent that make a root; it’s not something inherent in the way trees look. We pick one for some special reason and then we highlight it. Maybe put it at the bottom of the drawing, making ‘root’ for once a sensible name for a mathematics thing. Often we put it at the top of the drawing, because I guess we’re just being difficult. Well, we do that because we were modelling stuff where a thing’s properties depend on what it comes from. And that puts us into thoughts of inheritance and of family trees. And weird as it is to put the root of a tree at the top, it’s also weird to put the eldest ancestors at the bottom of a family tree. People do it, but in those illuminated drawings that make a literal tree out of things. You don’t see it in family trees used for actual work, like filling up a couple pages at the start of a king or a queen’s biography.
Trees give us neat new questions to ponder, like, how many are there? I mean, if you have a certain number of vertices then how many ways are there to arrange them? One or two or three vertices all have just the one way to arrange them. Four vertices can be hooked up a whole two ways. Five vertices offer a whole three different ways to connect them. Six vertices offer six ways to connect and now we’re finally getting something interesting. There’s eleven ways to connect seven vertices, and 23 ways to connect eight vertices. The number keeps on rising, but it doesn’t follow the obvious patterns for growth of this sort of thing.
And if that’s not enough to idly ponder then think of destroying trees. Draw a tree, any shape you like. Pick one of the vertices. Imagine you obliterate that. How many separate pieces has the tree been broken into? It might be as few as two. It might be as many as the number of remaining vertices. If graph theory took away the pastime of wandering around Königsberg’s bridges, it has given us this pastime we can create anytime we have pen, paper, and a long meeting.
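For a tree the count of pieces is exactly the degree of the obliterated vertex, since each neighbor heads up its own now-disconnected branch. A count-the-components sketch, on a made-up example tree:

```python
def pieces_after_removal(adjacency: dict, removed) -> int:
    """Count the connected components left when `removed` is deleted."""
    survivors = set(adjacency) - {removed}
    seen = set()
    components = 0
    for start in survivors:
        if start in seen:
            continue
        components += 1
        stack = [start]          # flood-fill one component
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(n for n in adjacency[v] if n in survivors)
    return components

# Spine 1-2-3-4-5, with vertex 6 branching off of 3.
example_tree = {1: {2}, 2: {1, 3}, 3: {2, 4, 6}, 4: {3, 5}, 5: {4}, 6: {3}}
print(pieces_after_removal(example_tree, 3))  # 3 pieces: {1,2}, {4,5}, {6}
print(pieces_after_removal(example_tree, 5))  # 1 piece: losing a leaf leaves the rest whole
```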
I know it seems like when I write these essays I spend the most time on the first comic in the bunch and give the last ones a sentence, maybe two at most. I admit when there’s a lot of comics I have to write up at once my energy will droop. But Comic Strip Master Command apparently wants the juiciest topics sent out earlier in the week. I have to follow their lead.
Stephen Beals’s Adult Children for the 14th uses mathematics to signify deep thinking. In this case Claremont, the dog, is thinking of the Riemann Zeta function. It’s something important in number theory, so longtime readers should know this means it leads right to an unsolved problem. In this case it’s the Riemann Hypothesis. That’s the most popular candidate for “what is the most important unsolved problem in mathematics right now?” So you know Claremont is a deep-thinking dog.
The big Σ ordinary people might recognize as representing “sum”. The notation means to evaluate, for each legitimate value of the thing underneath — here it’s ‘n’ — the value of the expression to the right of the Sigma. Here that’s $\frac{1}{n^s}$. Then add up all those terms. It’s not explicit here, but context would make clear, n is positive whole numbers: 1, 2, 3, and so on. s would be a positive number, possibly a whole number.
The big capital Pi is more mysterious. It’s Sigma’s less popular brother. It means “product”. For each legitimate value of the thing underneath it — here it’s “p” — evaluate the expression on the right. Here that’s $\frac{1}{1 - p^{-s}}$. Then multiply all that together. In the context of the Riemann Zeta function, “p” here isn’t just any old number, or even any old whole number. It’s only the prime numbers. Hence the “p”. Good notation, right? Yeah.
This particular equation, once shored up with the context the symbols live in, was proved by Leonhard Euler, who proved so much you sometimes wonder if later mathematicians were needed at all. It ties in to how often whole numbers are going to be prime, and what the chances are that some set of numbers are going to have no factors in common. (Other than 1, which is too boring a number to call a factor.) But even if Claremont did know that Euler got there first, it’s almost impossible to do good new work without understanding the old.
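For the skeptical, the equation is easy to check numerically at, say, s = 2, where both sides should edge toward π²/6 ≈ 1.6449. A truncated sketch, with a homemade prime sieve:

```python
import math

def zeta_by_sum(s: float, terms: int) -> float:
    """Truncated Dirichlet series: sum of 1/n^s."""
    return sum(1 / n ** s for n in range(1, terms + 1))

def zeta_by_product(s: float, limit: int) -> float:
    """Truncated Euler product: 1 / (1 - p^(-s)) over primes up to `limit`."""
    sieve = [True] * (limit + 1)
    product = 1.0
    for p in range(2, limit + 1):
        if sieve[p]:                       # p survived the sieve: it's prime
            product *= 1 / (1 - p ** (-s))
            for multiple in range(p * p, limit + 1, p):
                sieve[multiple] = False
    return product

print(zeta_by_sum(2, 100_000))      # ≈ 1.64492
print(zeta_by_product(2, 100_000))  # ≈ 1.64493
print(math.pi ** 2 / 6)             # 1.6449340668...
```

That the two truncations agree so closely is Euler’s equation showing through the arithmetic.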
Charlos Gary’s Working It Out for the 14th is this essay’s riff on pie charts. Or bar charts. Somewhere around here the past week I read that a French idiom for the pie chart is the “cheese chart”. That’s a good enough bit I don’t want to look more closely and find out whether it’s true. If it turned out to be false I’d be heartbroken.
Ryan North’s Dinosaur Comics for the 15th talks about everyone’s favorite physics term, entropy. Everyone knows that it tends to increase. Few advanced physics concepts feel so important to everyday life. I almost made one expression of this — Boltzmann’s H-Theorem — a Theorem Thursday post. I might do a proper essay on it yet. Utahraptor describes this as one of “the few statistical laws of physics”, which I think is a bit unfair. There’s a lot about physics that is statistical; it’s often easier to deal with averages and distributions than the mass of real messy data.
Utahraptor’s right to point out that it isn’t impossible for entropy to decrease. It can be expected not to, in time. Indeed decent scientists thinking as philosophers have proposed that “increasing entropy” might be the only way to meaningfully define the flow of time. (I do not know how decent the philosophy of this is. This is far outside my expertise.) However: we would expect at least one tails to come up if we simultaneously flipped infinitely many coins fairly. But there is no reason that it couldn’t happen, that infinitely many fairly-tossed coins might all come up heads. The probability of this ever happening is zero. It is still not impossible. Such is the intuition-destroying nature of probability and of infinitely large things.
Tony Cochran’s Agnes on the 16th proposes to decode the Voynich Manuscript. Mathematics comes in as something with answers that one can check for comparison. It’s a familiar role. As I seem to write three times a month, this is fair enough to say to an extent. Coming up with an answer to a mathematical question is hard. Checking the answer is typically easier. Well, there are many things we can try in looking for an answer. To see whether a proposed answer works usually we just need to go through it and see if the logic holds. This might be tedious to do, especially in those enormous brute-force problems where the proof amounts to showing there are a hundred zillion special cases and here’s an answer for each one of them. But it’s usually a much less hard thing to do.
Johnny Hart and Brant Parker’s Wizard of Id Classics for the 17th uses what seems like should be an old joke about bad accountants and nepotism. Well, you all know how important bookkeeping is to the history of mathematics, even if I’m never that specific about it because it never gets mentioned in the histories of mathematics I read. And apparently sometime between the strip’s original appearance (the 20th of August, 1966) and my childhood the Royal Accountant character got forgotten. That seems odd given the comic potential I’d imagine him to have. Sometimes a character’s only good for a short while is all.
Mark Anderson’s Andertoons for the 18th is the Andertoons representative for this essay. Fair enough. The kid speaks of exponents as a kind of repeating oneself. This is how exponents are inevitably introduced: as multiplying a number by itself many times over. That’s a solid way to introduce raising a number to a whole number. It gets a little strained to describe raising a number to a rational number. It’s a confusing mess to describe raising a number to an irrational number. But you can make that logical enough, with effort. And that’s how we do make the idea rigorous. A number raised to (say) the square root of two is something greater than the number raised to 1.4, but less than the number raised to 1.5. More than the number raised to 1.41, less than the number raised to 1.42. More than the number raised to 1.414, less than the number raised to 1.415. This takes work, but it all hangs together. And then we ask about raising numbers to an imaginary or complex-valued number and we wave that off to a higher-level mathematics class.
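That sandwiching is easy to watch numerically. A sketch, with the bounds being successive decimal truncations of √2:

```python
# Rational exponents closing in on 2^√2 from both sides.
bounds = [(1.4, 1.5), (1.41, 1.42), (1.414, 1.415), (1.4142, 1.4143)]
target = 2 ** (2 ** 0.5)   # 2^√2 ≈ 2.6651

for low, high in bounds:
    print(2 ** low, "<", target, "<", 2 ** high)
```

Each pair of bounds squeezes tighter, which is the “this takes work, but it all hangs together” part made visible.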
Lachowski’s Get A Life for the 18th is the sudoku joke for this essay. It’s also a representative of the idea that any mathematical thing is some deep, complicated puzzle at least as challenging as calculating one’s taxes. I feel like this is a rerun, but I don’t see any copyright dates. Sudoku jokes like this feel old, but comic strips have been known to make dated references before.
Samson’s Dark Side Of The Horse for the 19th is this essay’s Dark Side Of The Horse gag. I thought initially this was a counting-sheep in a lab coat. I’m going to stick to that mistaken interpretation because it’s more adorable that way.
People think mathematics is mostly counting and arithmetic. It’s what we get at when we say “do the math[s]”. It’s why the mathematician in the group is the one called on to work out what the tip should be. Heck, I attribute part of my love for mathematics to a Berenstain Bears book which implied being a mathematician was mostly about adding up sums in a base on the Moon, which is an irresistible prospect. In fact, usually counting and arithmetic are, at most, minor influences on real mathematics. There are legends of how catastrophically bad at figuring mathematical genius can be. But usually isn’t always, and this week I’d like to show off a case where counting things and adding things up lets us prove something interesting.
The Five-Color Map Theorem.
No, not four. I imagine anyone interested enough to read a mathematics blog knows the four-color map theorem. It says that you only need four colors to color a map. That’s true, given some qualifiers. No discontiguous chunks that need the same color. Two regions with the same color can touch at a point; they just can’t share a line or curve. The map is on a plane or the surface of a sphere. Probably some other requirements. I’m not going to prove that. Nobody has time for that. The best proofs we’ve figured out for it amount to working out how every map fits into one of a huge number of cases, and trying out each case. It’s possible to color each of those cases with only four colors, so, we’re done. Nice but unenlightening and way too long to deal with.
The five-color map theorem is a lot like the four-color map theorem, with this difference: it says that you only need five colors to color a map. Same qualifiers as before. Yes, it’s true because the four-color map theorem is true and because five is more than four. We can do better than that. We can prove five colors are enough even without knowing whether four colors will do. And it’s easy. The ease of the five-color map theorem gave people reason to think four colors would be maybe harder but still manageable.
The proof I want to show uses one of mathematicians’ common tricks. It employs the same principle which Hercules used to slay the Hydra, although it has less cauterizing lake-monster flesh with flaming torches, as that’s considered beneath the dignity of the Academy anymore except when grading finals for general-requirements classes. The part of the idea we do use is to take a problem which we might not be able to do and cut it down to one we can do. Properly speaking this is a kind of induction proof. In those we start from problems we can do and show that if we can do those, we can do all the complicated problems. But we come at it by cutting down complicated problems and making them simple ones.
Please enjoy my little map of the place. It gives all the states a single color because I don’t really know how to use QGIS and it would probably make my day job easier if I did. (Well, QGIS is open-source software, so its interface is a disaster and its tutorials gibberish. The only way to do something with it is to take flaming torches to it.)
There’s eight regions here, eight states, so it’s not like we’re at the point we can’t figure how to color this with five different colors. That’s all right. I’m using this for a demonstration. Pretend the Dominion of New England is so complicated we can’t tell whether five colors are enough. Oh, and a spot of lingo: if five colors are enough to color the map we say the map is “colorable”. We say it’s “5-colorable” if we want to emphasize five is enough colors.
So imagine that we erase the border between Maine and New Hampshire. Combine them into a single state over the loud protests of the many proud, scary Mainers. But if this simplified New England is colorable, so is the real thing. There’s at least one color not used for Greater New Hampshire, Vermont, or Massachusetts. We give that color to a restored Maine. If the simplified map can be 5-colored, so can the original.
Maybe we can’t tell. Suppose the simplified map is still too complicated to make it obvious. OK, then. Cut out another border. How about we offend Roger Williams partisans and merge Rhode Island into Massachusetts? Massachusetts started out touching five other states, which makes it a good candidate for a state that needed a sixth color. With Rhode Island reduced to being a couple counties of the Bay State, Greater Massachusetts only touches four other states. It can’t need a sixth color. There’s at least one of our original five that’s free.
OK, but, how does that help us find a color for Rhode Island? With Maine it was easy to see why there’d be a free color. But Rhode Island?
Well, it’ll have to be the same color as either Greater New Hampshire or Vermont or New York. At least one of them has to be available. Rhode Island doesn’t touch them. Connecticut’s color is out because Rhode Island shares a border with it. Same with Greater Massachusetts’s color. But we’ve got three colors for the taking.
But is our reduced map 5-colorable? Even with Maine part of New Hampshire and Rhode Island part of Massachusetts it might still be too hard to tell. There’s six territories in it, after all. We can simplify things a little. Let’s reverse the treason of 1777 and put Vermont back into New York, dismissing New Hampshire’s claim on the territory as obvious absurdity. I am never going to be allowed back into New England. This Greater New York needs one color for itself, yes. And it touches four other states. But these neighboring states don’t touch each other. A restored Vermont could use the same color as New Jersey or Connecticut. Greater Massachusetts and Greater New Hampshire are unavailable, but there’s still two choices left.
And now look at the map we have remaining. There’s five states in it: Greater New Hampshire, Greater Massachusetts, Greater New York, Regular Old Connecticut, and Regular old New Jersey. We have five colors. Obviously we can give the five territories different colors.
This is one case, one example map. That’s all we need. A proper proof makes things more abstract, but uses the same pattern. Any map of a bunch of territories is going to have at least one territory that’s got at most five neighbors. Maybe it will have several. Look for one of them. If you find a territory with just one neighbor, such as Maine had, remove that border. You’ve got a simpler map and there must be a color free for the restored territory.
If you find a territory with just two neighbors, such as Rhode Island, take your pick. Merge it with either neighbor. You’ll still have at least one color free for the restored territory. With three neighbors, such as Vermont or Connecticut, again you have your choice. Merge it with any of the three neighbors. You’ll have a simpler map and there’ll be at least one free color.
If you have four neighbors, the way New York has, again pick a border you like and eliminate that. There is a catch. You can imagine one of the neighboring territories reaching out and wrapping around to touch the original state on more than one side. Imagine if Massachusetts ran far out to sea, looped back through Canada, and came back to touch New Jersey, Vermont from the north, and New York from the west. That’s more of a Connecticut stunt to pull, I admit. But that’s still all right. Most of the colonies tried this sort of stunt. And even if Massachusetts did that, we would have colors available. It would be impossible for Vermont and New Jersey to touch. The Jordan Curve Theorem proves it.
If you have five neighbors, the way Massachusetts has, well, maybe you’re lucky. We are here. None of its neighboring states touches more than two others. We can cut out a border easily and have colors to spare. But we could be in trouble. We could have a map in which all the bordering states touch three or four neighbors and that seems like it would run out of colors. Let me show a picture of that.
So this map looks dire even when you ignore that line that looks like it isn’t connected where C and D come together. Flood fill didn’t run past it, so it must be connected. It just doesn’t look right. Everybody has four neighbors except the province of B, which has three. The province of A has got five. What can we do?
Call on the Jordan Curve Theorem again. At least one of the provinces has to be landlocked, relative to the others. In this case, the borders of provinces A, D, and C come together to make a curve that keeps B in the inside and E on the outside. So we’re free to give B and E the same color. We treat this in the proof by doing a double merger. Erase the boundary between provinces A and B, and also that between provinces A and E. (Or you might merge B, A, and F together. It doesn’t matter. The Jordan Curve Theorem promises us there’ll be at least one choice and that’s all we need.)
So there we have it. As long as we have a map that has some provinces with up to five neighbors, we can reduce the map. And reduce it again, if need be, and again and again. Eventually we’ll get to a map with only five provinces and that has to be 5-colorable.
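If you like seeing such a routine mechanized, here’s a sketch of my own of the whole hydra procedure, not anything out of a textbook. It keeps plucking off a territory with the fewest neighbors; if the least-neighbored territory somehow has five neighbors, it merges two of that territory’s non-adjacent neighbors, trusting that the input really is a map (planar), since the Jordan Curve Theorem only promises such a pair exists then. The state abbreviations below are just my rendering of the Dominion of New England map.

```python
# A sketch of the hydra-style proof as an algorithm.  It assumes the
# input graph really is a map, that is, planar; the degree-five merge
# step relies on that and may fail on a non-planar graph.
def five_color(adjacency):
    work = {v: set(ns) for v, ns in adjacency.items()}
    merged = {}   # territory -> the territory it was absorbed into
    removed = []  # (territory, its neighbors at removal time), in order

    def resolve(x):
        while x in merged:
            x = merged[x]
        return x

    while work:
        v = min(work, key=lambda t: len(work[t]))  # fewest neighbors
        if len(work[v]) >= 5:
            # Two of the five neighbors must be non-adjacent, by the
            # Jordan Curve Theorem.  Merge one into the other.
            ns = sorted(work[v])
            u, w = next((a, b) for a in ns for b in ns
                        if a < b and b not in work[a])
            for x in work.pop(w):
                work[x].discard(w)
                if x != u:
                    work[x].add(u)
                    work[u].add(x)
            merged[w] = u
            continue
        removed.append((v, work.pop(v)))
        for x in removed[-1][1]:
            work[x].discard(v)

    colors = {}
    for v, ns in reversed(removed):  # restore territories one at a time
        used = {colors[resolve(n)] for n in ns}
        colors[v] = next(c for c in range(5) if c not in used)
    return {v: colors[resolve(v)] for v in adjacency}

# The Dominion of New England, as an adjacency dictionary.
borders = [("ME", "NH"), ("NH", "VT"), ("NH", "MA"), ("VT", "MA"),
           ("VT", "NY"), ("MA", "NY"), ("MA", "CT"), ("MA", "RI"),
           ("RI", "CT"), ("CT", "NY"), ("NY", "NJ")]
new_england = {}
for a, b in borders:
    new_england.setdefault(a, set()).add(b)
    new_england.setdefault(b, set()).add(a)
coloring = five_color(new_england)
```

Running it on the New England map never even triggers the degree-five merge; there’s always some territory with four or fewer neighbors to remove, just as in the walkthrough above.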
Just … now … one little nagging thing. We’re relying on there always being some province with at most five neighbors. Why can’t there be some horrible map where every province has six or more neighbors?
Counting will tell us. Arithmetic will finish the job. But we have to get there by way of polygons.
That is, the easiest way to prove this depends on a map with boundaries that are all polygons. That’s all right. Polygons are almost the polynomials of geometry. You can make a polygon that looks so much like the original shape the eye can’t tell the difference. Look at my Dominion of New England map. That’s computer-rendered, so it’s all polygons, and yet all those shore and river boundaries look natural.
But what makes up a polygon? Well, it’s a bunch of straight lines. We call those ‘edges’. Each edge starts and ends at a corner. We call those ‘vertices’. These edges come around and close together to make a ‘face’, a territory like we’ve been talking about. We’re going to count all the regions that have a certain number of neighboring other regions.
Specifically, F2 will represent however many faces there are that have two sides. F3 will represent however many faces there are that have three sides. F4 will represent however many faces there are that have four sides. F10 … yeah, you got this.
One thing you didn’t get. The outside counts as a face. We need this to make the count come out right, so we can use some solid-geometry results. In my map that’s the vast white space that represents the Atlantic Ocean, the other United States, the other parts of Canada, the Great Lakes, all the rest of the world. So Maine, for example, belongs to F2 because it touches New Hampshire and the great unknown void of the rest of the universe. Rhode Island belongs to F3 similarly. New Hampshire’s in F4.
Any map has to have at least one thing that’s in F2, F3, F4, or F5. They touch at most two, three, four or five neighbors. (If they touched more, they’d represent a face that was a polygon of even more sides.)
How do we know? It comes from Euler’s Formula, which starts out describing the ways corners and edges and faces of a polyhedron fit together. Our map, with its polygons on the surface of the sphere, turns out to be just as good as a polyhedron. It looks a little less blocky, but that doesn’t change anything.
By Euler’s Formula, there’s this neat relationship between the number of vertices, the number of edges, and the number of faces in a polyhedron. (This is the same Leonhard Euler famous for … well, everything in mathematics, really. But in this case it’s for his work with shapes.) It holds for our map too. Call the number of vertices V. Call the number of edges E. Call the number of faces F. Then:

V – E + F = 2
Always true. Try drawing some maps yourself, using simple straight lines, and see if it works. For that matter, look at my Really Really Simplified map and see if it doesn’t hold true still.
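If you’d rather let the computer do the drawing and counting, here’s a little sketch of mine. A planar map can be recorded as a “rotation system”: each vertex lists its neighbors in clockwise order. Tracing the rule “arrive at v from u, leave along the neighbor after u” walks around each face exactly once, so counting the traces counts the faces. The particular map below, a triangle with a vertex in the middle, is just my example; none of this is from the original post.

```python
# Count faces of a planar map from a rotation system: each vertex maps
# to its neighbors listed in clockwise order.  The trace "arrive at v
# from u, leave along the neighbor after u" circles each face once.
def count_faces(rotation):
    darts = {(u, v) for u in rotation for v in rotation[u]}
    faces = 0
    while darts:
        start = u, v = next(iter(darts))
        while True:
            darts.remove((u, v))
            ring = rotation[v]
            u, v = v, ring[(ring.index(u) + 1) % len(ring)]
            if (u, v) == start:
                break
        faces += 1
    return faces

# A triangle 1-2-3 with vertex 0 in the middle, neighbors clockwise.
rotation = {0: [3, 2, 1], 1: [3, 0, 2], 2: [1, 0, 3], 3: [2, 0, 1]}
V = len(rotation)
E = sum(len(ns) for ns in rotation.values()) // 2
F = count_faces(rotation)   # three inner triangles plus the outside
```

For this little map that comes out to four vertices, six edges, and four faces (counting the outside), and 4 – 6 + 4 = 2, just as promised.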
Here’s one of those insights that’s so obvious it’s hard to believe. Every edge ends in two vertices. Three edges meet at every vertex. (We don’t have more than three territories come together at a point. If that were to happen, we’d change the map a little to find our coloring and then put it back afterwards. Pick one of the territories and give it a little disc of area covering the spot where the four or five or more territories meet. The troublesome corner is gone. Once we’re done with our proof, shrink the disc back down to nothing. Coloring done!) And therefore 2E = 3V.
A polygon has the same number of edges as vertices, and if you don’t believe that then draw some and count. Every edge touches exactly two regions. Every vertex touches exactly three edges. So we can rework Euler’s formula. Multiply it by six and we get 6V – 6E + 6F = 12. And from doubling the equation about edges and vertices in the last paragraph, 4E = 6V. So if we break up that 6E into 4E and 2E we can rewrite Euler’s formula again. It becomes 6V – 4E – 2E + 6F = 12. 6V – 4E is zero, so, 6F – 2E = 12.
Do we know anything about F itself?
Well, yeah. F = F2 + F3 + F4 + F5 + F6 + … The number of faces has to equal the sum of the number of faces of two edges, and of three edges, and of four edges, and of five edges, and of six edges, and on and on. Counting!
Do we know anything about how E and F relate?
Well, yeah. A polygon in F2 has two edges. A polygon in F3 has three edges. A polygon in F4 has four edges. And each edge runs up against two faces. So therefore 2E = 2F2 + 3F3 + 4F4 + 5F5 + 6F6 + … This goes on forever but that’s all right. We don’t need all these terms.
Because here’s what we do have. We know that 6F – 2E = 12. And we know how to write both E and F in terms of F2, F3, F4, and so on. We’re going to show at least one of these low-subscript Fsomethings has to be positive, that is, there has to be at least one of them.
Start by just shoving our long sum expressions into the modified Euler’s Formula we had. That gives us this:

6(F2 + F3 + F4 + F5 + F6 + …) – (2F2 + 3F3 + 4F4 + 5F5 + 6F6 + …) = 12
Doesn’t look like we’ve got anywhere, does it? That’s all right. Multiply that -1 and that 6 into their parentheses. And then move the terms around, so that we group all the terms with F2 together, and all the terms with F3 together, and all the terms with F4 together, and so on. This gets us to:

(6F2 – 2F2) + (6F3 – 3F3) + (6F4 – 4F4) + (6F5 – 5F5) + (6F6 – 6F6) + (6F7 – 7F7) + … = 12
I know, that’s a lot of parentheses. And it adds negative numbers to positive which I guess we’re allowed to do but who wants to do that? Simplify things a little more:

4F2 + 3F3 + 2F4 + 1F5 + 0F6 – 1F7 – 2F8 – … = 12
And now look at that. Each F-subscript has to be zero or a positive number. You can’t have a negative number of shapes. If you can I don’t want to hear about it. Every F-subscript from F7 onward gets multiplied by a negative number before they’re all added up. But the sum has to be a positive number.
There’s only one way that this sum can be a positive number. At least one of F2, F3, F4, or F5 has to be a positive number. So there must be at least one region with at most five neighbors. And that’s true without knowing anything about our map. So it’s true about the original map, and it’s true about a simplified map, and about a simplified-more map, and on and on.
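You can watch that arithmetic work on some familiar polyhedra. This check is my own addition, feeding in the face lists for a cube, a dodecahedron, and a soccer ball (a truncated icosahedron):

```python
# The counting argument in numbers: summing (6 - sides) over every face
# of a polyhedron-like map comes out to 12, so some face with five or
# fewer sides must exist.
def weighted_sum(face_sides):
    return sum(6 - k for k in face_sides)

cube = [4] * 6                       # six squares
dodecahedron = [5] * 12              # twelve pentagons
soccer_ball = [5] * 12 + [6] * 20    # truncated icosahedron
```

The cube gives 2 × 6 = 12, the dodecahedron 1 × 12 = 12, and the soccer ball 1 × 12 + 0 × 20 = 12. The hexagons contribute nothing, so the pentagons, the faces with five or fewer sides, have to carry the whole 12.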
And that is why this hydra-style attack method always works. We can always simplify a map until it obviously can be colored with five colors. And we can go from that simplified map back to the original map, and color it in just fine. Formally, this is an existence proof: it shows there must be a way to color a map with five colors. But it does so the devious way, by showing a way to color the map. We don’t get enough existence proofs like that. And, at its critical point, we know the proof is true because we can count the number of regions and the number of edges and the number of corners they have. And we can add and subtract those numbers in the right way. Just like people imagine mathematicians do all day.
Properly this works only on the surface of a sphere. Euler’s Formula, which we use for the proof, depends on that. We get away with it on a piece of paper because we can pretend this is just a part of the globe so small we don’t see how flat it is. The vast white edge we suppose wraps around the whole world. And that’s fine since we mostly care about maps on flat surfaces or on globes. If we had a map that needed three dimensions, like one that looked at mining and water and overflight and land-use rights, things wouldn’t be so easy. Nor would they work at all if the map turned out to be on an exotic shape like a torus, a doughnut shape.
But this does have a staggering thought. Suppose we drew boundary lines. And suppose we found an arrangement of them so that we needed more than five colors. This would tell us that we have to be living on a surface such as a torus, the doughnut shape. We could learn something about the way space is curved by way of an experiment that never looks at more than where two regions come together. That we can find information about the whole of space, global information, by looking only at local stuff amazes me. I hope it at least surprises you.
From fiddling with this you probably figure the four-color map theorem should follow right away. Maybe involve a little more arithmetic but nothing too crazy. I agree, it so should. It doesn’t. Sorry.
When I started this A to Z I figured it would be a nice string of quick, two-to-four paragraph compositions. So far each one has been a monster post instead. I’m hoping to get to some easier ones. For today I mean to talk about a graph, as in graph theory. That’s not the kind of graph that’s a plot of some function or a pie chart or a labelled map or something like that.
This kind of graph we do study as pictures, though. Specifically, they’re pictures with two essential pieces: a bunch of points or dots and a bunch of curves connecting dots. The dots we call vertices. The curves we call edges. My mental model tends to be of a bunch of points in a pegboard connected by wire or string. That might not work for you, but the idea of connecting things is what graphs, and graph theory, are good for studying.
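In code, the pegboard picture can be as simple as a dictionary: each vertex is a key, mapped to the set of vertices its edges run to. The lettered vertices here are just placeholder names of mine.

```python
# A graph as an adjacency dictionary: vertices are keys, each mapped to
# the set of vertices an edge connects it to.
graph = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C"},
}
# Each edge shows up once from each end, so halve the total.
edge_count = sum(len(ns) for ns in graph.values()) // 2
degrees = {v: len(ns) for v, ns in graph.items()}
```

Everything graph theory studies, degrees, paths, colorings, and the like, can be read off a structure like this without ever drawing the picture.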
When René Descartes first described the system we’ve turned into Cartesian coordinates he didn’t put it forth in quite the way we build them these days. This shouldn’t be too surprising; he lived about four centuries ago, and we have experience with the idea of matching every point on the plane to some ordered pair of numbers that he couldn’t have. The idea has been expanded on, and improved, and logical rigor I only pretend to understand laid underneath the concept. But the core remains: we put somewhere on our surface an origin point — usually this gets labelled O, mnemonic for “origin” and also suggesting the zeroes which fill its coordinates — and we pick some direction to be the x-coordinate and some direction to be the y-coordinate, and the ordered pair for a point are how far in the x-direction and how far in the y-direction one must go from the origin to get there.
The most obvious difference between Cartesian coordinates as Descartes set them up and Cartesian coordinates as we use them is that Descartes would divide the plane into four parts, one quadrant each. The first quadrant is the points to the right of and above the origin. The second quadrant is to the left of and still above the origin. The third quadrant is to the left of and below the origin, and the fourth is to the right of the origin but below it. This division of the plane into quadrants, and even their identification as quadrants I, II, III, and IV respectively, still exists, one of those minor points on which prealgebra and algebra students briefly trip on their way to tripping over the trigonometric identities.
Descartes had, from his perspective, excellent reason to divide the plane up this way. It’s a reason difficult to imagine today. By separating the plane like this he avoided dealing with something mathematicians of the day were still uncomfortable with. It’s easy enough to describe a point in the first quadrant as being so far to the right and so far above the origin. But a point in the second quadrant is … not any distance to the right. It’s to the left. How far to the right is something that’s to the left?
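In modern terms, with negative numbers accepted, the quadrants are nothing but the four sign patterns of an ordered pair. A toy function of my own devising:

```python
# Which quadrant holds the point (x, y)?  Points on an axis belong to
# no quadrant, so they come back as None.
def quadrant(x, y):
    if x > 0 and y > 0:
        return "I"
    if x < 0 and y > 0:
        return "II"
    if x < 0 and y < 0:
        return "III"
    if x > 0 and y < 0:
        return "IV"
    return None
```

A point like (-3, 2) lands in quadrant II; the “how far to the right” that so bothered Descartes’s contemporaries is simply a negative number.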
I like game shows. Liking game shows is not one of the more respectable hobbies, compared to, say, Crimean War pedantry, or laughing at goats. Game shows have a long history of being sneered at by people who can’t be bothered to learn enough about game shows to sneer at them for correct reasons. Lost somewhere within my archives is even an anthology of science fiction short stories about game shows, which if you take out the punch lines of “and the loser DIES!” or “and the host [ typically Chuck Woolery ] is SATAN!”, would leave nearly nothing, and considering that science fiction as a genre has spent most of its existence feeling picked-on as the “smelly, unemployed cousin of the entertainment industry” (Mike Nelson’s Movie Megacheese) that’s quite some sneering. Sneering at game shows even earned an episode of The Mary Tyler Moore show which managed to be not just bad but offensively illogical.
Nevertheless, I like them, and was a child at a great age for game shows on broadcast television: the late 1970s and early 1980s had an apparently endless menu of programs, testing people’s abilities to think of words, to spell words, to price household goods, and guess how other people answered surveys. We haven’t anything like that anymore; on network TV about the only game shows that survive are Jeopardy! (which nearly alone of the genre gets any respect), Wheel of Fortune, The Price Is Right, and, returned after decades away, Let’s Make A Deal. (I don’t regard reality shows as game shows, despite a common programming heritage. I can’t say what it is precisely other than location and sometimes scale that, say, Survivor or The Amazing Race do that Beat The Clock or Truth Or Consequences do not, but there’s something.) Now and then something new flutters into being, but it vanishes without leaving much of a trace, besides retreading jokes about the people who’d watch it.
All that is longwinded preliminary to one of those things that amuses mostly me. On the Thursday (27 October) episode of Let’s Make A Deal, they briefly looked like they might be playing The Monty Hall Problem.
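Since the Monty Hall Problem comes up: it’s easy to simulate and see why switching doors is the better play. This is my own quick sketch of the standard setup, in which the host always opens a losing door the contestant didn’t pick:

```python
import random

# One round of the Monty Hall game.  The host opens a door that is
# neither the contestant's pick nor the prize; the contestant may then
# switch to the one remaining closed door.
def play(switch, rng=random):
    prize = rng.randrange(3)
    pick = rng.randrange(3)
    opened = next(d for d in range(3) if d != pick and d != prize)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == prize

trials = 100_000
stay = sum(play(False) for _ in range(trials)) / trials  # near 1/3
swap = sum(play(True) for _ in range(trials)) / trials   # near 2/3
```

Staying wins only when the first pick was right, a one-in-three shot; switching wins in the other two cases, and the simulation’s tallies come out accordingly.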
Back to the theme of divisibility of numbers. Since we have the idea of writing numbers with a small set of digits, and with the place of those digits carrying information about how big the number is, we can think about what’s implied by that information.
In the number 222, the first two is matched to blocks (hundreds) that are ten times as large as those for the second two (tens), and the second two is matched to blocks (tens) which are ten times as large as those for the third two (units). It is now extremely rare to have the size of those blocks differ from one place to the next; that is, a digit written before the initial two here would be taken, without its needing to be made explicit, to represent blocks ten times that hundreds unit, and a digit after the final two (and therefore after the decimal point) would represent blocks one-tenth the final two’s size.
It has also become extremely rare for the relationship between blocks to be anything but a factor of ten, with two exceptions which I’ll mention next paragraph. The only block size in common use other than ten which comes to my mind is the sixty-to-one division of hours or degrees into minutes, and then of minutes into seconds. Even there the division of degrees of arc into minutes and seconds might be obsolete, as it’s so much easier on the computer to enter a latitude and longitude with decimals instead. So blocks of ten, decimals, it is, or in the way actual people speak of such things, a number written in base ten.
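The block idea is easy to play with in code. A sketch of mine: peel off remainders to recover the blocks, in whatever base you please.

```python
# Split a whole number into its place-value blocks, most significant
# first.  Base ten gives the ordinary digits; base sixty gives the
# hours-minutes-seconds style of block.
def blocks(n, base=10):
    digits = []
    while n:
        n, d = divmod(n, base)
        digits.append(d)
    return list(reversed(digits)) or [0]

# blocks(222) pulls 222 apart into hundreds, tens, units: [2, 2, 2].
# blocks(3661, base=60) recovers 1 hour, 1 minute, 1 second: [1, 1, 1].
```

The routine doesn’t care what the base is; that the blocks of ten feel natural is habit, not arithmetic.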
One of the personality traits which my Dearly Beloved most often tolerates in me is my tendency toward hyperbole, a rhetorical device employed successfully on the Internet by almost four people and recognized as such as recently as 1998. I’m not satisfied saying there was an enormous, slow-moving line for a roller coaster we rode last August; I have to say that fourteen months later we’re still on that line.
I mention this because I need to discuss one of those rare people who can be discussed accurately only in hyperbole: Leonhard Euler, 1707 – 1783. He wrote about essentially every field of mathematics it was possible to write about: calculus and geometry and physics and algebra and number theory and graph theory and logic, on music and the motions of the moon, on optics and the finding of longitude, on fluid dynamics and the frequency of prime numbers. After his death the Saint Petersburg Academy needed nearly fifty years to finish publishing his remaining work. If you ever need to fake being a mathematician, let someone else introduce the topic and then speak of how Euler’s Theorem is fundamental to it. There are several thousand Euler’s Theorems, although some of them share billing with another worthy, and most of them are fundamental to at least sixteen fields of mathematics each. I exaggerate; I must, but I note that a search for “Euler” on Wolfram Mathworld turns up 681 matches, as of this moment, out of 13,081 entries. It’s difficult to imagine other names taking up more than five percent of known mathematics. Even Carl Friedrich Gauss only matches 272 entries, and Isaac Newton a paltry 138.