My 2019 Mathematics A To Z: Koenigsberg Bridge Problem


Today’s A To Z term was nominated by Bunny Hugger. I’m glad to write about it. The problem is foundational to both graph theory and topology.

I’m more fluent in graph theory, and my writing will reflect that. But its critical insight involves looking at spaces and ignoring things like distance and area and angle. It is amazing that one can discard so much of geometry and still have anything to consider. What we do learn then applies to very many problems.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it, completes the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Königsberg Bridge Problem.

Once upon a time there was a city named Königsberg. It no longer is. It is Kaliningrad now. It’s no longer in that odd non-contiguous chunk of Prussia facing the Baltic Sea. It’s now in that odd non-contiguous chunk of Russia facing the Baltic Sea.

I put it this way because what the city evokes, to mathematicians, is a story. I do not have specific reason to think the story untrue. But it is a good story, and as I think more about history I grow more skeptical of good stories. A good story teaches, though not always the thing it means to convey.

The story is this. The city is on two sides of the Pregel river, now the Pregolya River. Two large islands are in the river. For several centuries these four land masses were connected by a total of seven bridges. And we are told that people in the city would enjoy free time with an idle puzzle. Was there a way to walk all seven bridges one and only one time? If no one did something foul like taking a boat to cross the river, or not going the whole way across a bridge, anyway? There were enough bridges, though, and enough possible ways to cross them, that trying out every option was hopeless.

Then came Leonhard Euler. Who is himself a preposterous number of stories. Pick any major field of mathematics; there is an Euler’s Theorem at its center. Or an Euler’s Formula. Euler’s Method. Euler’s Function. Likely he brought great new light to it.

And in 1736 he solved the Königsberg Bridge Problem. The answer was to look at what would have to be true for a solution to exist. He noticed something so obvious it required genius not to dismiss it. It seems too simple to be useful. In a successful walk you enter each land mass (river bank or island) the same number of times you leave it. So if you cross each bridge exactly once, you use an even number of bridges per land mass. The exceptions are that you must start at one land mass, and end at a land mass. Maybe a different one. How you get there doesn’t count for the problem. How you leave doesn’t either. So the land mass you start from may have an odd number of bridges. So may the one you end on. So there are up to two land masses that may have an odd number of bridges.

Once this is observed, it’s easy to tell that Königsberg’s bridges did not match that. All four land masses in Königsberg had an odd number of bridges. And so we could stop looking. It’s impossible to walk the seven bridges exactly once each in a tour, not without cheating.
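Euler’s criterion is short enough to check directly. A quick sketch in Python, with the seven bridges as a list of land-mass pairs (the labels are mine):

```python
from collections import Counter

# The seven bridges of Königsberg as a multigraph. The labels are mine:
# N and S are the river banks, A the large central island (the Kneiphof),
# B the other island.
bridges = [
    ("N", "A"), ("N", "A"),  # two bridges between the north bank and A
    ("S", "A"), ("S", "A"),  # two between the south bank and A
    ("N", "B"),              # one between the north bank and B
    ("S", "B"),              # one between the south bank and B
    ("A", "B"),              # one between the islands
]

# Count how many bridges touch each land mass.
degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

# Euler's criterion: a walk using every bridge exactly once needs
# at most two land masses touching an odd number of bridges.
odd = [land for land, d in degree.items() if d % 2 == 1]
print(dict(degree))   # A touches 5 bridges; N, S, and B touch 3 each
print(len(odd) <= 2)  # False: all four are odd, so no such walk exists
```

Run it and the hopeless trying-every-option search never has to happen; counting settles the question.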

Graph theoreticians, like the topologists of my prologue, now consider this foundational to their field. To look at a geographic problem and not concern oneself with areas and surfaces and shapes? To worry only about how sets connect? This guides graph theory in how to think about networks.

The city exists, as do the islands, and the bridges existed as described. So does Euler’s solution. And his reasoning is sound. The reasoning is ingenious, too. Everything hard about the problem evaporates. So what do I doubt about this fine story?

Well, I don’t know that this bridge problem was something the people of Königsberg thought about. At least not in the way it’s presented, this idle problem everyone who visited the river wondered about without trying very hard to solve. The only people I ever hear discussing this are mathematicians. And mathematicians are as fond of good stories as anyone else, and accept that when the reality is messy and ambiguous and confused. I’m not alone in having doubts. The Mathematical Association of America’s web page about the problem concedes it is “according to lore” that the people of the city had this problem.

Teo Paoletti, author of that web page, says Danzig mayor Carl Leonhard Gottlieb Ehler wrote Euler, asking for a solution. This falls short of proving that the bridges were a common subject of speculation. It does show at least that Ehler thought it worth pondering. Euler apparently did not think it was even mathematics. Not that he thought it was hard; he simply thought it didn’t depend on mathematical principles. It took only reason. But he did find something interesting: why was it not mathematics? Paoletti quotes Euler as writing:

This question is so banal, but seemed to me worthy of attention in that [neither] geometry, nor algebra, nor even the art of counting was sufficient to solve it.

I am reminded of a mathematical joke. It’s about the professor who always went on at great length about any topic, however slight. I have no idea why this should stick with me. Finally one day the professor admitted of something, “This problem is not interesting.” The students barely had time to feel relief. The professor went on: “But the reasons why it is not interesting are very interesting. So let us explore that.”

The Königsberg Bridge Problem is in the first chapter of every graph theory book ever. And it is a good graph theory problem. It may not be fair to say it created graph theory, though. Euler seems to have treated this as a little side bit of business, unrelated to his real mathematics. Graph theory as we know it — as a genre — formed in the 19th century. So did topology. In hindsight we can see how studying these bridges brought us good questions to ask, and ways to solve them. But for something like a century after Euler published this, it was just the clever solution to a recreational mathematics puzzle. It was as important as finding knight’s tours of chessboards.

That we take it as the introduction to graph theory, and maybe topology, tells us something. It is an easy problem to pose. Its solution is clever, but not obscure. It takes no long chains of complex reasoning. Many people approach mathematics problems with fear. By telling this story, we promise mathematics that feels as secure as a stroll along the riverfront. This promise is good through about chapter three, section four, where there are four definitions on one page and the notation summons obscure demons of LaTeX.

Still. Look at what the story of the bridges tells us. We notice something curious about our environment. The problem seems mathematical, or at least geographic. The problem is of no consequence. But it lingers in the mind. The obvious approaches to solving it won’t work. But think of the problem differently. The problem becomes simple. And better than simple. It guides one to new insights. In a century it gives birth to two fields of mathematics. In two centuries these are significant fields. They’re things even non-mathematicians have heard of. It’s almost a mathematician’s fantasy of insight and accomplishment.

But this does happen. The world suggests no end of little mathematics problems. Sometimes they are wonderful. Richard Feynman’s memoirs tell of his imagination being captured by a plate spinning in the air. Solving that helped him resolve a problem in developing Quantum Electrodynamics. There are more mundane problems. One of my professors in grad school remembered tossing and catching a tennis racket and realizing he didn’t know why sometimes it flipped over and sometimes didn’t. His specialty was in dynamical systems, and he could work out the mechanics of what a tennis racket should do, and when. And I know that within me is the ability to work out when a pile of books becomes too tall to stand on its own. I just need to work up to it.

The story of the Königsberg Bridge Problem is about this. Even if nobody but the mayor of Danzig pondered how to cross the bridges, and he only got an answer because he infected Euler with the need to know? It is a story of an important piece of mathematics. Good stories will tell us things that are true, which are not necessarily the things that happen in them.


Thanks for reading this. All of the Fall 2019 A To Z posts ought to be at this link. On Thursday I should publish my ‘L’ post. All of my past A To Z essays should be available at this link, And tomorrow I hope to finish off the comic strips worth just quick mentions from last week. See you then.

Reading the Comics, April 10, 2019: Grand Avenue and Luann Want My Attention Edition


So this past week has been a curious blend for the mathematically-themed comics. There were many comics mentioning some mathematical topic. But that’s because Grand Avenue and Luann Againn — reprints of early 90s Luann comics — have been doing a lot of schoolwork. There’s a certain repetitiveness to saying, “and here we get a silly answer to a story problem” four times over. But we’ll see what I do with the work.

Mark Anderson’s Andertoons for the 7th is Mark Anderson’s Andertoons for the week. Very comforting to see. It’s a geometry-vocabulary joke, with Wavehead noticing the similar ends of some terms. I’m disappointed that I can’t offer much etymological insight. “Vertex”, for example, derives from the Latin for “highest point”, and traces back to the Proto-Indo-European root “wer-”, meaning “to turn, to bend”. “Apex” derives from the Latin for “summit” or “extreme”. And that traces back to the Proto-Indo-European “ap”, meaning “to take, to reach”. Which is all fine, but doesn’t offer much about how both words ended up ending in “ex”. This is where my failure to master Latin by reading a teach-yourself book on the bus during my morning commute for three months back in 2002 comes back to haunt me. There’s probably something that might have helped me in there.

On the blackboard is a square-based pyramid with 'apex' labelled; also a circular cone with 'vertex' labelled. Wavehead: 'And if you put them together they're a duplex.'
Mark Anderson’s Andertoons for the 7th of March, 2019. I write about this strip a lot. Essays mentioning Andertoons are at this link.

Mac King and Bill King’s Magic in a Minute for the 7th is an activity puzzle this time. It’s also a legitimate problem of graph theory. Not a complicated one, but still, one. Graph theory is about sets of points, called vertices, and connections between points, called edges. It gives interesting results for anything that’s networked. That shows up in computers, in roadways, in blood vessels, in the spreads of disease, in maps, in shapes.

Here's a tough little puzzle to get your brain firing on all four cylinders. See if you can connect the matching numbered boxes with three lines. The catch is that the lines cannot cross over each other. From left to right are disjoint boxes labelled 1, 2, 1, and 2. Above and below the center of the row are two boxes labelled 3.
Mac King and Bill King’s Magic in a Minute for the 7th of March, 2019. I should have the essays mentioning Magic In A Minute at this link.

One common problem, found early in studying graph theory, is about whether a graph is planar. That is, can you draw the whole graph, all its vertices and edges, without any edges crossing each other? This graph, with six vertices and three edges, is planar. There are graphs that are not. If the challenge were to connect each number to a 1, a 2, and a 3, then it would be nonplanar. That’s a famous non-planar graph, given the obvious name K3,3. A fun part of learning graph theory — at least fun for me — is looking through pictures of graphs. The goal is finding K3,3, or another one called K5, inside a big messy graph.
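One quick way to see that K3,3 and K5 can’t be planar comes from Euler’s formula: a simple connected planar graph with at least three vertices has at most 3v - 6 edges, and at most 2v - 4 if it’s bipartite. A sketch in Python:

```python
def planar_edge_bound(v, bipartite=False):
    """The most edges a simple connected planar graph on v >= 3 vertices
    can have, by Euler's formula. Bipartite graphs get a tighter bound,
    since they have no triangles."""
    return 2 * v - 4 if bipartite else 3 * v - 6

# K5: five vertices, every pair joined, so 10 edges.
print(10 <= planar_edge_bound(5))                 # False: K5 cannot be planar
# K3,3: six vertices and 9 edges, bipartite (no odd loops).
print(9 <= planar_edge_bound(6, bipartite=True))  # False: K3,3 cannot be planar
```

The puzzle’s actual graph, six vertices and three edges, is comfortably under either bound, which is consistent with its being solvable.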

Exam Question: Jack bought seven marbles and lost six. How many additional marbles must Jack buy to equal seven? Kid's answer: 'Jack wouldn't know. He's lost his marbles.'
Mike Thompson’s Grand Avenue for the 8th of March, 2019. I’m not always cranky about this comic strip. Examples of when I’m not are at this link, as are the times I’m annoyed with Grand Avenue.

Mike Thompson’s Grand Avenue for the 8th has had a week of story problems featuring both of the kid characters. Here’s the start of them. Making an addition or subtraction problem about counting things is probably a good way of making the problem less abstract. I don’t have children, so I don’t know whether they play marbles or care about them. The most recent time I saw any of my niblings I told them about the subtleties of industrial design in the old-fashioned Western Electric Model 2500 touch-tone telephone. They love me. Also I’m not sure that this question actually tests subtraction more than it tests reading comprehension. But there are teachers who like to throw in the occasional surprisingly easy one. Keeps students on their toes.

Gunther: 'You put a question mark next to the part about using the slope-intercept form of a linear equation. What don't you understand?' Luann: 'Lemme see. Oh ... yeah. I don't understand why on earth I need to know this.'
Greg Evans’s Luann Againn for the 10th of March, 2019. This strip originally ran the 10th of March, 1991. Essays which include some mention of Luann, either current or 1990s reprints, are at this link.

Greg Evans’s Luann Againn for the 10th is part of a sequence showing Gunther helping Luann with her mathematics homework. The story started the day before, but this was the first time a specific mathematical topic was named. The point-slope form is a conventional way of writing an equation which corresponds to a particular line. There are many ways to write equations for lines. This is one that’s convenient to use if you know coordinates for one point on the line and the slope of the line. Any coordinates which make the equation true are then the coordinates for some point on the line.
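Both forms name the same kind of thing, a line. Given one known point and the slope m, the point-slope form reads:

```latex
y - y_1 = m \left( x - x_1 \right)
```

The slope-intercept form that Gunther names, y = mx + b, is the special case built on the point (0, b), where the line crosses the y-axis. Any pair of x and y making either equation true is a point on the line.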

How To Survive a Shark Attack (illustrated with a chicken surviving a shark.) Keep your eye on the shark and move slowly toward safety. Don't make any sudden movements such as splashing or jazz hands. If the shark comes at you, punch it in the gills, snout, or eyes. You won't hurt the shark, but it will be surprised by your audacity. If all else fails, try to confuse it with logical paradoxes. Chicken: 'This statement is false.' Shark, wide-eyed and confused: '?'
Doug Savage’s Savage Chickens for the 10th of March, 2019. And when I think of something to write about Savage Chickens the results are at this link.

Doug Savage’s Savage Chickens for the 10th tosses in a line about logical paradoxes. In this case, using a classic problem, the self-referential statement. Working out whether a statement is true or false — its “truth value” — is one of those things we expect logic to be able to do. Some self-referential statements, logical claims about themselves, are troublesome. “This statement is false” was a good one for baffling kids and would-be world-dominating computers in science fiction television up to about 1978. Some self-referential statements seem harmless, though. Nobody expects even the most timid world-dominating computer to be bothered by “this statement is true”. It takes more than just a statement being about itself to create a paradox.


And a last note. The blog hardly needs my push to help it out, but, sometimes people will miss a good thing. Ben Orlin’s Math With Bad Drawings just ran an essay about some of the many mathematics-themed comics that Hilary Price and Rina Piccolo’s Rhymes With Orange has run. The comic is one of my favorites too. Orlin looks through some of the comic’s twenty-plus year history and discusses the different types of mathematical jokes Price (with, in recent years, Piccolo) makes.

Myself, I keep all my Reading the Comics essays at this link, and those mentioning some aspect of Rhymes With Orange at this link.

My 2018 Mathematics A To Z: Yamada Polynomial


I had another free choice. I thought I’d go back to one of the topics I knew and loved in grad school even though I didn’t have the time to properly study it then. It turned out I had forgotten some important points and spent a night crash-relearning knot theory. This isn’t a bad thing necessarily.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble titles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Yamada Polynomial.

This is a thing which comes from graphs. Not the graphs you ever drew in algebra class. Graphs as in graph theory. These are figures made of spots called vertices. Pairs of vertices are connected by edges. There are many interesting things to study about these.

One path to take in understanding graphs is polynomials. Of course I would bring things back to polynomials. But there’s good reasons. These reasons come to graph theory by way of knot theory. That’s an interesting development since we usually learn graph theory before knot theory. But knot theory has the idea of representing these complicated shapes as polynomials.

There are a bunch of different polynomials for any given graph. The oldest kind, the Alexander Polynomial, J W Alexander developed in the 1920s. And that was about it until the 1980s, when suddenly everybody was coming up with good new polynomials. The definitions are different. They give polynomials that look different. Some are able to distinguish between a knot and the knot that’s its reflection across a mirror. Some, like the Alexander, aren’t. But they have some important things in common. One is that they might not actually be, you know, polynomials. I mean, they’ll be the sum of numbers — whole numbers, even — times a variable raised to a power. The variable might be t, might be x. Might be something else, but it doesn’t matter. It’s a pure dummy variable. But the variable might be raised to a negative power, which isn’t really a polynomial. It might even be raised to, oh, one-half or three-halves, or minus nine-halves, or something like that. We can try saying this is “a polynomial in t-to-the-halves”. Mostly it’s because we don’t have a better name for it.

And going from a particular knot to a polynomial follows a pretty common procedure. At least it can, when you’re learning knot theory and feel a bit overwhelmed trying to prove stuff about “knot invariants” and “homologies” and all. Having a specific example can be such a comfort. You can work this out by an iterative process. Take a specific drawing of your knot. There’s places where the strands of the knot cross over one another. For each of those crossings you ponder some alternate cases where the strands cross over in a different way. And then you add together some coefficient times the polynomial of this new, different knot. The coefficient you get by the rules of whatever polynomial you’re making. The new, different knots are, usually, no more complicated than what you started with. They’re often simpler knots. This is what saves you from an eternity of work. You’re breaking the knot down into more but simpler knots. Just the fact of doing that can be satisfying enough. Eventually you get to something really simple, like a circle, and declare that’s some basic polynomial. Then there’s a fair bit of adding up coefficients and powers and all that. Tedious but not hard.

Knots are made from a continuous loop of … we’ll just call it thread. It can fold over itself many times. It has to, really, or it hasn’t got a chance of being more interesting than a circle. A graph is different. That there are vertices seems to change things. Less than you’d think, though. The thread of a knot can cross over and under itself. Edges of a graph can cross over and under other edges. This isn’t too different. We can also imagine replacing a spot where two edges cross over and under the other with an intersection and new vertex.

So we get to the Yamada polynomial by treating a graph an awful lot like we might treat a knot. Take the graph and split it up at each overlap. At each overlap we have something that looks, at least locally, kind of like an X. An upper left, upper right, lower left, and lower right intersection. The lower left connects to the upper right, and the upper left connects to the lower right. But these two edges don’t actually touch; one passes over the other. (By convention, the lower left going to the upper right is on top.)

There’s three alternate graphs. One has the upper left connected to the lower left, and the upper right connected to the lower right. This looks like replacing the X with a )( loop. The second alternate has the upper left connected to the upper right, and the lower left connected to the lower right. This looks like … well, that )( but rotated ninety degrees. I can’t do that without actually including a picture. The third alternate puts a vertex in the X. So now the upper left, upper right, lower left, and lower right all connect to the new vertex in the center.

Probably you’d agree that replacing the original X with a )( pattern, or its rotation, probably doesn’t make the graph any more complicated. And it might make the graph simpler. But adding that new vertex looks like trouble. It looks like it’s getting more complicated. We might get stuck in an infinite regression of more-complicated polynomials.

What saves us is the coefficient we’re multiplying the polynomials for these new graphs by. It’s called the “chromatic coefficient” and it reflects how many different colors you need to color in this graph. An edge needs to connect two different colors. And — what happens if an edge connects a vertex to itself? That is, the edge loops around back to where it started? That’s got a chromatic number of zero and the moment we get a single one of these loops anywhere in our graph we can stop calculating. We’re done with that branch of the calculations. This is what saves us.

There’s a catch. It’s a catch that knot polynomials have, too. This scheme writes a polynomial not just for a particular graph but a particular way of rendering this graph. There’s always other ways to draw it. If nothing else you can always twirl an edge over itself, into a loop like you get when Christmas tree lights start tangling themselves up. But you can move the vertices to different places. You can have an edge go outside the rest of the figure instead of inside, that sort of thing. Starting from a different rendition of the shape gets you to a different polynomial.

Superficially different, anyway. What you get from two different renditions of the same graph are polynomials different by your dummy variable raised to a whole number. Also maybe a plus-or-minus sign. You can see a difference between, say, t^{-1} - 2 + 3t (to make up an example) and t - 2t^2 + 3t^3 . But you can see that second polynomial is just t^2\left(t^{-1} - 2 + 3t\right) . It’s some confounding factor times something that is distinctive to the graph.
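That confounding factor is easy to strip off mechanically. A sketch in Python, representing a polynomial as a map from exponents to coefficients (the function name is my own):

```python
def normalize(poly):
    """Normalize a Laurent-style polynomial, given as {exponent: coefficient},
    by dividing out the lowest power of the dummy variable and fixing the sign
    of the resulting constant term. Two renditions of the same graph give
    polynomials that agree after this step."""
    low = min(e for e, c in poly.items() if c != 0)
    shifted = {e - low: c for e, c in poly.items() if c != 0}
    sign = 1 if shifted[0] > 0 else -1
    return {e: sign * c for e, c in shifted.items()}

# The example from the text: t^-1 - 2 + 3t versus t - 2t^2 + 3t^3.
a = {-1: 1, 0: -2, 1: 3}
b = {1: 1, 2: -2, 3: 3}
print(normalize(a) == normalize(b))  # True: they differ only by a factor of t^2
```

What survives the normalization is the distinctive part, the thing that belongs to the graph rather than the drawing.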

And that distinctive part, the thing that doesn’t change if you draw the graph differently? That’s the Yamada polynomial, at last. It’s a way to represent this collection of vertices and edges using only coefficients and exponents.

I would like to give an impressive roster of uses for these polynomials here. I’m afraid I have to let you down. There is the obvious use: if you suspect two graphs are really the same, despite how different they look, here’s a test. Calculate their Yamada polynomials and if they’re different, you know the graphs were different. It can be hard to tell. Get anything with more than, say, eight vertices and 24 edges in it and you’re not going to figure that out by sight.

I encountered the Yamada polynomial specifically as part of a textbook chapter about chemistry. It’s easy to imagine there should be great links between knots and graphs and the way that atoms bundle together into molecules. The shape of their structures describes what they will do. But I am not enough of a chemist to say how this description helps chemists understand molecules. It’s possible that it doesn’t: Yamada’s paper introducing the polynomial was published in 1989. My knot theory textbook might have brought it up because it looked exciting. There are trends and fashions in mathematical thought too. I don’t know what several more decades of work have done to the polynomial’s reputation. I’m glad to hear from people who know better.


There’s one more term in the Fall 2018 Mathematics A To Z to come. Will I get the article about it written before Friday? We’ll know on Saturday! At least I don’t have more Reading the Comics posts to write before Sunday.

My 2018 Mathematics A To Z: Oriented Graph


I am surprised to have had no suggestions for an ‘O’ letter. I’m glad to take a free choice, certainly. It let me get at one of those fields I didn’t specialize in, but could easily have. And let me mention that while I’m still taking suggestions for the letters P through T, each other letter has gotten at least one nomination. I can be swayed by a neat term, though, so if you’ve thought of something hard to resist, try me. And later this month I’ll open up the letters U through Z. Might want to start thinking right away about what X, Y, and Z could be.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble titles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Oriented Graph.

This is another term from graph theory, one of the great mathematical subjects for doodlers. A graph, here, is made of two sets of things. One is a bunch of fixed points, called ‘vertices’. The other is a bunch of curves, called ‘edges’. Every edge starts at one vertex and ends at one vertex. We don’t require that every vertex have an edge grow from it.

Already you can see why this is a fun subject. It models some stuff really well. Like, anything where you have a bunch of sources of stuff, that come together and spread out again? Chances are there’s a graph that describes this. There’s a compelling all-purpose interpretation. Have vertices represent the spots where something accumulates, or rests, or changes, or whatever. Have edges represent the paths along which something can move. This covers so much.

The next step is a “directed graph”. This comes from making the edges different. If we don’t say otherwise we suppose that stuff can move along an edge in either direction. But suppose otherwise. Suppose there are some edges that can be used in only one direction. This makes a “directed edge”. It’s easy to see networks of stuff, like city streets, in graph theory. Once you ponder that, one-way streets follow close behind. If every edge in a graph is directed, then you have a directed graph. Moving from a regular old undirected graph to a directed graph changes everything you’d learned about graph theory. Mostly it makes things harder. But you get some good things in trade. We become able to model sources, for example. This is where whatever might move comes from. Also sinks, which is where whatever might move disappears from our consideration.
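Sources and sinks are easy to pick out from an edge list: a source has no incoming edges, a sink no outgoing ones. A sketch in Python, with made-up vertex labels:

```python
def sources_and_sinks(edges):
    """In a directed graph, a source is a vertex with no incoming edges
    and a sink is a vertex with no outgoing edges."""
    outs = {u for u, _ in edges}   # vertices that send something out
    ins = {v for _, v in edges}    # vertices that receive something
    vertices = outs | ins
    return sorted(vertices - ins), sorted(vertices - outs)

# A few one-way streets: everything flows out of A and into D.
print(sources_and_sinks([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]))
# → (['A'], ['D'])
```

A vertex that neither sends nor receives anything would not appear in the edge list at all, which is one of the small bookkeeping headaches directed graphs bring.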

You might fear that by switching to a directed graph there’s no way to have a two-way connection between a pair of vertices. Or that if there is you have to go through some third vertex. I understand your fear, and wish to reassure you. We can get a two-way connection even in a directed graph: just have the same two vertices be connected by two edges. One goes one way, one goes the other. I hope you feel some comfort.

What if we don’t have that, though? What if the directed graph doesn’t have any vertices with a pair of opposite-directed edges? And that, then, is an oriented graph. We get the orientation from looking at pairs of vertices. Each pair either has no edge connecting them, or has a single directed edge between them.
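That definition is easy to check mechanically: a directed graph, stored as a list of edges, is oriented exactly when no edge appears in both directions. A sketch in Python:

```python
def is_oriented(edges):
    """A directed graph is oriented when no pair of vertices is joined
    in both directions, that is, when it has no two-vertex loops."""
    edge_set = set(edges)
    return all((v, u) not in edge_set for (u, v) in edge_set)

print(is_oriented([(1, 2), (2, 3), (3, 1)]))  # True: a directed triangle
print(is_oriented([(1, 2), (2, 1), (2, 3)]))  # False: 1 and 2 point both ways
```

The directed triangle is a fine oriented graph even though it loops back on itself; only the direct two-way connection is forbidden.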

There’s a lot of potential oriented graphs. If you have three vertices, for example, there’s seven oriented graphs to make of that. You’re allowed to have a vertex not connected to any others. You’re also allowed to have the vertices grouped into a couple of subsets, and connect only to other vertices in their own subset. This is part of why four vertices can give you 42 different oriented graphs. Five vertices can give you 582 different oriented graphs. You can insist on a connected oriented graph.
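Those counts can be verified by brute force for small numbers of vertices: enumerate every way to leave each pair of vertices unconnected or connect it in one direction or the other, then collapse relabelings. A sketch in Python:

```python
from itertools import combinations, permutations, product

def count_oriented_graphs(n):
    """Count the oriented graphs on n vertices, up to relabeling,
    by brute-force enumeration. Practical only for small n."""
    pairs = list(combinations(range(n), 2))
    seen = set()
    # Each unordered pair is unconnected, directed one way, or the other.
    for states in product((0, 1, 2), repeat=len(pairs)):
        edges = set()
        for (u, v), s in zip(pairs, states):
            if s == 1:
                edges.add((u, v))
            elif s == 2:
                edges.add((v, u))
        # Canonical form: the lexicographically smallest relabeling,
        # so isomorphic graphs collapse to one representative.
        canon = min(
            tuple(sorted((p[u], p[v]) for (u, v) in edges))
            for p in permutations(range(n))
        )
        seen.add(canon)
    return len(seen)

print(count_oriented_graphs(3))  # 7
print(count_oriented_graphs(4))  # 42
```

The growth is quick: three choices per pair of vertices, before accounting for relabelings, is why five vertices already gives hundreds of distinct oriented graphs.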

A connected graph is what you guess. It’s a graph where there’s no vertices off on their own, unconnected to anything. There’s no subsets of vertices connected only to each other. This doesn’t mean you can always get from any one vertex to any other vertex. The directions might not allow you to do that. But if you’re willing to break the laws, and ignore the directions of these edges, you could then get from any vertex to any other vertex. Limiting yourself to connected graphs reduces the number of oriented graphs you can get. But not by as much as you might guess, at least not to start. There’s only one connected oriented graph for two vertices, instead of two. Three vertices have five connected oriented graphs, rather than seven. Four vertices have 34, rather than 42. Five vertices, 535 rather than 582. The total number of lost graphs grows, of course. The percentage of lost graphs dwindles, though.

There’s something more. What if there are no unconnected vertices? That is, every pair of vertices has an edge? If every pair of vertices in a graph has a direct connection we call that a “complete” graph. This is true whether the graph is directed or not. If you do have a complete oriented graph — every pair of vertices has a direct connection, and only the one direction — then that’s a “tournament”. If that seems like a whimsical name, consider one interpretation of it. Imagine a sports tournament in which every team played every other team once. And that there’s no ties. Each vertex represents one team. Each edge is the match played by the two teams. The direction is, let’s say, from the losing team to the winning team. (It’s as good if the direction is from the winning team to the losing team.) Then you have a complete, oriented, directed graph. And it represents your tournament.

And that delights me. A mathematician like me might talk a good game about building models. How one can represent things with mathematical constructs. Here, it’s done. You can make little dots, for vertices, and curved lines with arrows, for edges. And draw a picture that shows how a round-robin tournament works. It can be that direct.
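As a sketch of that directness, here is a hypothetical four-team round robin (the team names are made up), stored as directed edges from the losing team to the winning team, with a check that it really is a tournament:

```python
from itertools import combinations

# Hypothetical results of a four-team round robin, as directed edges
# from the losing team to the winning team.
edges = [
    ("Brighton", "Ardmore"),  # Ardmore beat Brighton
    ("Camden", "Ardmore"),
    ("Ardmore", "Dover"),
    ("Camden", "Brighton"),
    ("Dover", "Brighton"),
    ("Dover", "Camden"),
]

teams = {t for e in edges for t in e}
pairs = set(map(frozenset, combinations(teams, 2)))
# A tournament: every pair of teams joined by exactly one directed edge.
print(pairs == set(map(frozenset, edges)))  # True

# Reading the standings straight off the graph: count incoming edges.
wins = {t: sum(1 for _, w in edges if w == t) for t in teams}
print(sorted(wins.items(), key=lambda kv: -kv[1]))
```

The little dots and arrows of the drawing become a list of pairs, and the standings fall out by counting arrowheads.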


My next Fall 2018 Mathematics A-To-Z post should be Friday. It’ll be available at this link, as are the rest of these glossary posts. And I’ve got requests for the next letter. I just have to live up to at least one of them.

The End 2016 Mathematics A To Z: Tree


Graph theory begins with a beautiful legend. I have no reason to suppose it’s false, except my natural suspicion of beautiful legends as origin stories. Its organization as a field is traced to 18th century Königsberg, where seven bridges connected the banks of a river and a small island in the center. Whether it was possible to cross each bridge exactly once and get back where one started was, they say, a pleasant idle thought to ponder and a path to try walking. Then Leonhard Euler solved the problem. It’s impossible.

Tree.

Graph theory arises whenever we have a bunch of things that can be connected. We call the things “vertices”, because that’s a good corner-type word. The connections we call “edges”, because that’s a good connection-type word. It’s easy to create graphs that look like the edges of a crystal, especially if you draw edges as straight as much as possible. You don’t have to. You can draw them curved. Then they look like the scary tangles of wire around your wireless router complex.

Graph theory really got organized in the 19th century, and went crazy in the 20th. It turns out there’s lots of things that connect to other things. Networks, whether computers or social or thematically linked concepts. Anything that has to be delivered from one place to another. All the interesting chemicals. Anything that could be put in a pipe or taken on a road has some graph theory thing applicable to it.

A lot of graph theory ponders loops. The original problem was about how to use every bridge, every edge, exactly one time. Look at a tangled mass of a graph and it’s hard not to start looking for loops. They’re often interesting. It’s not easy to tell if there’s a loop that lets you get to every vertex exactly once.

What if there aren’t loops? What if there aren’t any vertices you can step away from and get back to by another route? Well, then you have a tree.

A tree’s a graph where all the vertices are connected so that there aren’t any closed loops. We normally draw them with straight lines, the better to look like actual trees. We then stop trying to make them look like actual trees by doing stuff like drawing them as a long horizontal spine with a couple branches sticking off above and below, or as * type stars, or H shapes. They still correspond to real-world things. If you’re not sure how, consider the layout of one of those long, single-corridor hallways as in a hotel or dormitory. The rooms connect to one another as a tree, as long as no room opens to anything but its own closet or bathroom or the central hallway.

We can talk about the radius of a tree. That’s the greatest number of edges any vertex can be from the center of the tree. And every tree has a center. Or two centers. If it has two centers, they share an edge. And that’s one of the quietly amazing things about trees to me. However complicated and messy the tree might be, we can find its center. How many things allow us that?
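Finding the center is even something a computer can do by the obvious method: pluck off all the leaves, then the new leaves, and keep going until only one or two vertices remain. A sketch in Python, with function names of my own choosing:

```python
def tree_center(n, edges):
    # strip away leaves, layer by layer; what's left at the end
    # is the center: one vertex, or two vertices sharing an edge
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    remaining = set(range(n))
    while len(remaining) > 2:
        leaves = [v for v in remaining if len(adj[v]) <= 1]
        for v in leaves:
            remaining.discard(v)
            for nb in adj[v]:
                adj[nb].discard(v)
            adj[v] = set()
    return remaining

# a path 0-1-2-3-4 has the single center vertex 2
print(tree_center(5, [(0, 1), (1, 2), (2, 3), (3, 4)]))  # {2}
# a path with six vertices has two centers, sharing an edge
print(tree_center(6, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]))  # {2, 3}
```

Try it on a star, too: however many points the star has, the hub comes out as the lone center.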

A tree might have some special vertex. That’s called the ‘root’. It’s what the vertices and the connections represent that make a root; it’s not something inherent in the way trees look. We pick one for some special reason and then we highlight it. Maybe put it at the bottom of the drawing, making ‘root’ for once a sensible name for a mathematics thing. Often we put it at the top of the drawing, because I guess we’re just being difficult. Well, we do that because we were modelling stuff where a thing’s properties depend on what it comes from. And that puts us into thoughts of inheritance and of family trees. And weird as it is to put the root of a tree at the top, it’s also weird to put the eldest ancestors at the bottom of a family tree. People do it, but in those illuminated drawings that make a literal tree out of things. You don’t see it in family trees used for actual work, like filling up a couple pages at the start of a king or a queen’s biography.

Trees give us neat new questions to ponder, like, how many are there? I mean, if you have a certain number of vertices then how many ways are there to arrange them? One or two or three vertices all have just the one way to arrange them. Four vertices can be hooked up a whole two ways. Five vertices offer a whole three different ways to connect them. Six vertices offer six ways to connect and now we’re finally getting something interesting. There’s eleven ways to connect seven vertices, and 23 ways to connect eight vertices. The number keeps on rising, but it doesn’t follow the obvious patterns for growth of this sort of thing.
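Those counts can be checked by brute force, at least for small numbers of vertices. The sketch below is my own construction: it uses Prüfer sequences to run through every labeled tree on n vertices, and a nested-parentheses encoding rooted at the center to recognize when two trees are really the same shape:

```python
import heapq
from itertools import product

def prufer_to_edges(seq, n):
    # decode a Prüfer sequence into the edges of a labeled tree
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    heap = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(heap)
    edges = []
    for v in seq:
        u = heapq.heappop(heap)
        edges.append((u, v))
        degree[u] -= 1
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(heap, v)
    u, w = [v for v in range(n) if degree[v] == 1]
    edges.append((u, w))
    return edges

def shape(n, edges):
    # a canonical name for the tree's shape: root it at its center
    # and write each subtree as a sorted nested-parentheses string
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    remaining = set(range(n))
    deg = {v: len(adj[v]) for v in range(n)}
    while len(remaining) > 2:
        leaves = [v for v in remaining if deg[v] <= 1]
        for v in leaves:
            remaining.discard(v)
            for nb in adj[v]:
                deg[nb] -= 1
    def encode(node, parent):
        return "(" + "".join(sorted(encode(c, node)
                                    for c in adj[node] if c != parent)) + ")"
    return min(encode(c, None) for c in remaining)

def count_tree_shapes(n):
    # count the distinct shapes among all labeled trees on n vertices
    if n == 1:
        return 1
    return len({shape(n, prufer_to_edges(seq, n))
                for seq in product(range(n), repeat=n - 2)})
```

Running `count_tree_shapes` for one through seven vertices gives back 1, 1, 1, 2, 3, 6, and 11, matching the counts above.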

And if that’s not enough to idly ponder then think of destroying trees. Draw a tree, any shape you like. Pick one of the vertices. Imagine you obliterate that. How many separate pieces has the tree been broken into? It’s the number of edges the obliterated vertex had. It might be as few as one, if you picked a leaf. It might be as many as the number of remaining vertices, if you picked the center of a star. If graph theory took away the pastime of wandering around Königsberg’s bridges, it has given us this pastime we can create anytime we have pen, paper, and a long meeting.

Theorem Thursday: Tutte’s Theorem, Magic, And Happy Endings


To wrap up my two-month Theorem Thursday project I have another request to fill. And it’s another from Gaurish, whose requests for the Leap Day A To Z inspired so many interesting posts.

Tutte’s Theorem

I admit I had not heard of Tutte’s Theorem before Gaurish’s request and I had to spend time working up to knowing what it was and why it was useful. I also admit I’m not completely sure I have it. But I’m happy to try to accept with grace corrections from the people in graph theory who know better.

It comes back to graphs. These are a bunch of points, “vertices”, which have curves, “edges” connecting pairs of them. This describes a lot of systems. Bunches of things that have to connect are naturally graphs. Connecting utilities to houses gave the first example I called up last week. The Internet’s made people vaguely familiar with the idea of bunches of computers linked to bunches of other computers. Social networks can be seen as graphs; each person is a vertex, her friendships edges connecting to other vertices.

In a graph not every vertex has to be connected to every other vertex. Not everyone needs to be friends with everyone else, which is a relief. I don’t want to point fingers but some of your friends are insufferable. I don’t know how you put up with them. You, you’re great. But the friends of a friend … yeeeeeeesh.

So we — mathematicians, anyway — get to wondering. Give me a graph. It’s got some vertices and some edges. Let’s look at only some of these edges. Each edge links to two vertices. Is there a subset of the edges that touch every one of the vertices exactly once? Sometimes there are; sometimes there aren’t. If there are, we have a “perfect matching”.

We look at this sort of thing because mathematicians learn to look at coverings. Coverings are what they sound like. What’s the smallest set of some standard item you need to … cover … all the stuff you’re interested in this problem? I think we’re bred to look for coverings in Real Analysis, because the idea of covering space with discs gives us measure. This gives us a rigorous idea of what length is, and what dimensions are, and lets us see there have to be more irrational than rational numbers and other interesting results like that. Mathematicians get used to looking for this sort of thing in other fields.

Tutte’s Theorem is about perfect matchings. It says what conditions a graph has to have to have a perfect matching. It’s based on striking subsets of the vertices from the original graph. The accompanying edges go too. What’s left might be connected, which means just what you think. You can get from any vertex in the decimated graph to any other vertex by following its edges. Or it might be disconnected, which means again just what you think. Mathematics is not always about complicated lingo.

Take the survivors. Count how many of the remaining connected components have an odd number of vertices. Is that less than or equal to the number of vertices you struck out? If it is, and if it is no matter how many vertices you struck out, and no matter how you arranged those vertices, then there’s a perfect matching.

This isn’t one for testing. I mean, consider what’s easier to do: start from your original graph and test out coloring in some edges to see if you can touch every vertex the one time? Or take every possible sub-graph of the original graph and count the number of connected sub-pieces with an odd number of vertices? … Well, maybe that isn’t so bad, if you set a computer to do the boring work. It’s going to take forever for your graph with 102 vertices, though, unless it’s a boring network. You have to test every possible removal all the way up to striking 102 vertices. (OK, it’s easy to show it’s true if 102 of 102 vertices are removed from the graph. Also if 101 of 102 vertices are removed. And 100 of 102 is also not hard. But that’s only a couple easy cases left.)
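Here’s what setting a computer to the boring work might look like, for graphs small enough not to take forever. The function names are mine:

```python
from itertools import combinations

def odd_components(n, edges, removed):
    # count the connected components with an odd number of vertices
    # in what's left after striking out the vertices in `removed`
    alive = [v for v in range(n) if v not in removed]
    adj = {v: [] for v in alive}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].append(b)
            adj[b].append(a)
    seen, odd = set(), 0
    for v in alive:
        if v in seen:
            continue
        stack, size = [v], 0
        seen.add(v)
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if size % 2 == 1:
            odd += 1
    return odd

def has_perfect_matching(n, edges):
    # Tutte's condition: for every subset U of the vertices, the
    # number of odd components of the decimated graph is at most |U|
    return all(odd_components(n, edges, set(U)) <= len(U)
               for k in range(n + 1)
               for U in combinations(range(n), k))

K4 = list(combinations(range(4), 2))
K5 = list(combinations(range(5), 2))
print(has_perfect_matching(4, K4))  # True
print(has_perfect_matching(5, K5))  # False
```

For 102 vertices this loops over 2 to the 102nd power subsets, which is the taking-forever part.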

I don’t know the real work the theorem does. It has some neat implications good for pop mathematics, though. Take a standard deck of 52 well-shuffled cards. Deal them out into thirteen piles of four cards each. Is it possible to select exactly one card from each pile so that, when done, there’s exactly one of each face value — Ace, Two, Three, Four, up through Queen and King — in the selected set? Indeed it is. I leave it to the Magic In A Minute cartoonists to turn this into a way to annoy a compliant monkey.
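You can watch the card trick work with a computer, too. This sketch is my own code, not the cartoonists’: it deals a shuffled deck into thirteen piles and finds the one-of-each-face-value selection with the standard augmenting-path method for matchings:

```python
import random

def pick_one_of_each(piles):
    # assign each face value a pile to take it from, by the
    # standard augmenting-path method for bipartite matching
    match = {}  # pile index -> face value assigned to it

    def try_assign(value, visited):
        for p, pile in enumerate(piles):
            if value in pile and p not in visited:
                visited.add(p)
                if p not in match or try_assign(match[p], visited):
                    match[p] = value
                    return True
        return False

    for value in range(13):
        if not try_assign(value, set()):
            return None
    return match

# deal a shuffled 52-card deck into thirteen piles of four
random.seed(1952)
deck = [value for value in range(13) for _suit in range(4)]
random.shuffle(deck)
piles = [deck[4 * i:4 * i + 4] for i in range(13)]
choice = pick_one_of_each(piles)  # never None: the theorem promises
```

Whatever shuffle you use, `choice` ends up taking one card from each pile with all thirteen face values represented.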

Another appealing use for it is in marriage problems. Well, marriage looked to mathematicians like a good source of bipartite graph problems. Society’s since noticed that nobody really hit on a compelling reason for “why does a woman have to marry a man, exactly”. I imagine the change will filter into mathematics textbooks sometime in the next seventy years. But please accept that this problem was formed and solved in the 1930s.

Suppose you have a group of women and a group of men, of equal size. Each woman would be happy married to some of the men in that group. Each man would be happy married to some of the women in that group. Is it possible to match women and men up in a way that everybody is married to at least someone they’d be happy with? We’re not guaranteeing that anyone gets their best possible pairing. We promise just that everyone marries someone they’ll be glad to.

It depends. Maybe this is best done by testing. Work through every possible subset of women. That is, look at every group of one woman, every group of two women, every group of three women, and so on. By group I mean what normal people mean by group, not what mathematicians mean. So look at your group of women. Count how many of the men at least one woman would be content marrying. Is that number equal to or larger than the number of women? Is that number equal to or larger than the number of women, however many women you picked and whichever women you did pick? If it is, great: everybody can marry someone they’re happy with.
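That test, exhausting as it is, is short to write down. A Python sketch, names mine, with each woman’s preferences given as a set of acceptable men:

```python
from itertools import combinations

def hall_condition(preferences):
    # preferences[i] is the set of men woman i would be happy
    # marrying; check every group of women against the men at
    # least one of them would accept
    n = len(preferences)
    for size in range(1, n + 1):
        for group in combinations(range(n), size):
            acceptable = set().union(*(preferences[i] for i in group))
            if len(acceptable) < size:
                return False
    return True

# three women, three men: every group of women likes enough men
print(hall_condition([{0, 1}, {1, 2}, {0, 2}]))  # True
# here the first two women both accept only man 0: no good
print(hall_condition([{0}, {0}, {0, 1, 2}]))     # False
```

This is Hall’s condition; the happy news of the marriage theorem is that passing the test isn’t just necessary but enough.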

Parlor tricks, I admit, but pleasant ones. What are its real uses? At this point I am really synthesizing my readings on the subject rather than speaking from things I’m confident about. If I understand right, though, Tutte’s Theorem is a foundation for the study of graph toughness. That is what you’d guess from the name. It’s about how easy it is to break up a graph into disconnected pieces. It’s easy to imagine real networks with strength or weakness. Imagine the person that holds a complicated group of friends together, for example, and whose removal devastates it. Consider the electrical network with a few vulnerable points that allow small problems to become nationwide blackouts. I believe it’s relevant to the study of proteins. Those long strands of blocks of molecules that then fold back on themselves. (I haven’t seen a source that says this, but I can’t imagine why it shouldn’t be. I am open to correction from sneering protein researchers.) I would be surprised if the theorem has nothing to say about when a strand of Christmas tree lights will get too tangled to fix.

Let me close with a puzzle, a delightful little one. It regards one of the graphs we met last week. K5 is a complete graph with five vertices. Each vertex is connected to all four of its siblings. It can’t have a perfect matching. Only a graph with an even number of vertices can. Each edge connects to two vertices, after all. So — what is the subset that breaks Tutte’s theorem? It must be possible to strike some set of vertices from K5 so that the number of stricken vertices is smaller than the number of remaining connected components with an odd number of vertices. What’s that set?

Go ahead and ponder it a while. If you give up I don’t blame you. The answer is at the other end of this link. If you’d like a hint, let me offer this, which you might be able to mouse over to reveal.

It is obvious once you’ve heard what the subset of vertices is, and it is not K5. The rest of this paragraph is padding so that the size of the spoiler doesn’t give matters away. And by the way I’d like to thank the world at large for doing such a great job not spoiling Star Wars: The Force Awakens. So why could you not give some similar consideration for Star Trek Beyond? I stopped reading TrekBBS for a month partly to avoid spoilers and then I started getting them everywhere I looked. Not cool, world.

Good luck!

Theorem Thursday: Kuratowski’s Reduction Theorem and Playing With Gas Pipelines


I’m doing that thing again. Sometimes for my A To Z posts I’ve used one essay to explain something, but also to introduce ideas and jargon that will come up later. Here, I want to do a theorem in graph theory. But I don’t want to overload one essay. So I’m going to do a theorem that’s interesting and neat in itself. But my real interest is next week’s piece. This is just so recurring readers are ready for it.

The Kuratowski Reduction Theorem.

It starts with a children’s activity book-type puzzle. A lot of real mathematics does. In the traditional form, which I have a faint memory of ever actually seeing, it’s posed as a problem of hooking up utilities to houses. There are three utilities, usually gas, water, and electricity. There are three houses. Is it possible to connect pipes from each of the three utility starting points to each of the three houses?

Of course. We do it all the time. We do this with many more utilities and many more than three buildings. The underground of Manhattan island is a network of pipes and tunnels and subways and roads and basements and buildings so complicated I doubt humans can understand it all. But the problem isn’t about that. The problem is about connecting these pipes all in the same plane, by drawing lines on a sheet of paper, without ever going into the third dimension. Nor making that little semicircular hop that denotes one line going over the other.

That’s a little harder. By that I mean it’s impossible. You can try and it’s fun to try a while. Draw three dots that are the houses and three dots that are the utilities. Try drawing nine lines, one from each utility to each of the houses. Or one leading into each house that comes from each of the utilities. The lines don’t have to be straight. They can have extra jogs, too. Soon you’ll hit on the possibilities of lines that go way out, away from the dots, in the quest to avoid crossing over one another. It doesn’t matter. The attempt’s doomed to failure.

You’ll be sure of this by at latest the twelfth attempt at finding an arrangement. But that leaves open the possibility you weren’t clever enough to find an arrangement. To close that possibility guess what theorem is sitting there ready to answer your question, just like I told you it would be?

This is a problem in graph theory. I’ve talked about graph theory before. It’s the field of mathematics most comfortable to people who like doodling. A graph is a bunch of points, which we call vertices, connected by arcs or lines, which we call edges. For this utilities graph problem, the houses and the utilities are the vertices. The pipes are the edges. An edge has to start at one vertex and end at a vertex. These may be the same vertex. We’re not judging. A vertex can have one edge connecting it to something else, or two edges, or three edges. It can have no edges. It can have any number of edges. We’re even allowed to have two or more edges connecting a vertex to the same vertex. My experience is we think of that last, forgetting that it is a possibility, but it’s there.

This is a “nonplanar” graph. This means you can’t draw it in a plane, like a sheet of paper, without having at least two edges crossing each other. We draw this on paper by making one of the lines wiggle in a little half-circle to show it’s going over, or to fade out and back in again to show it’s going under. There are planar graphs. Try the same problem with two houses and two utilities, for example. Or three houses and two utilities. Or three houses and three utilities, but one of the houses doesn’t get one of the utilities. Your choice which. It can be a little surprise to the homeowners.

This utilities graph is an example of a “bipartite” graph. The “bi” maybe suggests where things are going. You can always divide the vertices in a graph into two groups for the same reason you can always divide a pile of change into two piles. As long as you have at least two vertices or pieces of change. But a graph is bipartite if, once you’ve divided the vertices up, each edge has one vertex in the first set and the other vertex in the second. For the utilities graph these sets are easy to find. Each edge, each pipe, connects one utility to one house. There’s our division: vertices representing houses and vertices representing utilities.

This graph turns up a lot. Graph theorists have a shorthand way of writing it. It’s written as K3,3. This means it’s a bipartite graph. It has three vertices in the first set. There’s three vertices in the second set. There’s an edge connecting everything in the first set to everything in the second. Go ahead now and guess what K2,2 is. Or K3,5. The K — I’ve never heard what the K stands for, although while writing this essay I started to wonder if it’s for “Kuratowski”. That seems too organized, somehow.

Not every graph is bipartite. You might say “of course; why else would we have a name `bipartite’ if there weren’t such a thing as `non-bipartite’?” Well, we have the name “graph” for everything that’s a graph in graph theory. But there are non-bipartite graphs. They just don’t look like the utility-graph problem. Imagine three vertices, each of them connected to the other two. If you aren’t imagining a triangle you’re overthinking this. But this is a perfectly good non-bipartite graph. There’s no way to split the vertices into two sets with every edge connecting something in one set to something in the other. No, that isn’t inevitable once you have an odd number of vertices. Look above at the utilities problem where there’s three houses and two utilities. That’s nice and bipartite.

Non-bipartite graphs can be planar. The one with three vertices, each connected to each other, is. The one with four vertices, each vertex connected to each other, is also planar. But if you have five vertices, each connected to each other — well, that’s a lovely star-in-pentagon shape. It’s also not planar. There’s no connecting each vertex to each other one without some line crossing another or jumping out of the plane.

This shape, five vertices each connected to one another, shows up a lot too. And it has a shorthand notation. It’s K5. That is, it’s five points, all connected to each other. This makes it a “complete” graph: every set of two vertices has an edge connecting them. If you’ve leapt to the supposition that K3 is that circle and K4 is that square with diagonals drawn in you’re right. K6 is six vertices, each one connected to the five other vertices.

It may seem intolerably confusing that we have two kinds of graphs and describe them both with K and a subscript. But they’re completely different. The bipartite graphs have a subscript that’s two numbers separated by a comma: p,q. The p is the number of vertices in one of the subsets. The q is the number of vertices in the other subset. There’s an edge connecting every point in the p subset to every point in the q subset, and vice-versa. The points in the p subset aren’t connected to one another, though. And the points in the q subset aren’t connected to one another. That they aren’t means this isn’t a complete graph.

The others, K with a single number r in the subscript, are complete graphs, ones that aren’t bipartite. They have r vertices, and each vertex is connected to the (r – 1) other vertices. So there’s (1/2) times r times (r – 1) edges all told.
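As an aside, Euler’s formula for planar graphs gives a quick way to check that both of our troublemakers really are nonplanar: a planar graph on n ≥ 3 vertices can have at most 3n − 6 edges, and a planar bipartite graph at most 2n − 4. A sketch, with function names of my own devising:

```python
from itertools import combinations

def complete_edges(r):
    # K_r: every pair of the r vertices gets an edge
    return list(combinations(range(r), 2))

def bipartite_edges(p, q):
    # K_{p,q}: everything in the first set connects to everything
    # in the second set, and nothing else
    return [(i, p + j) for i in range(p) for j in range(q)]

def might_be_planar(n, e, bipartite=False):
    # a necessary condition from Euler's formula: a planar graph on
    # n >= 3 vertices has at most 3n - 6 edges; a planar bipartite
    # graph has at most 2n - 4
    return e <= (2 * n - 4 if bipartite else 3 * n - 6)

# K5: ten edges, but planarity allows at most 3*5 - 6 = 9
print(might_be_planar(5, len(complete_edges(5))))            # False
# K3,3: nine edges, but bipartite planarity allows at most 2*6 - 4 = 8
print(might_be_planar(6, len(bipartite_edges(3, 3)), True))  # False
# K4 squeaks by with 6 = 3*4 - 6 edges, and really is planar
print(might_be_planar(4, len(complete_edges(4))))            # True
```

Passing the edge-count test doesn’t prove a graph is planar; failing it proves the graph isn’t.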

Not every graph is either Kp,q or Kr. There’s a lot of kinds of graphs out there. Some are planar, some are not. But here’s an amazing thing, and it’s Kuratowski’s Reduction Theorem. If a graph is not planar, then it has to have, somewhere inside it, K3,3 or K5 or both. Maybe several of them.

A graph that’s hidden within another is called a “subgraph”. This follows the same etymological reasoning that gives us “subsets” and “subgroups” and many other mathematics words beginning with “sub”. And these subgraphs turn up whenever you have a nonplanar graph. A subgraph uses some set of the vertices and edges of the original graph; it doesn’t need all of them. A nonplanar graph has a subgraph that’s K3,3 or K5 or both.

Sometimes it’s easy to find one of these. K4,4 obviously has K3,3 inside it. Pick three of the four vertices on one side and three of the four vertices on the other, and look at the edges connecting them up. There’s your K3,3. Or on the other side, K6 obviously has K5 inside it. Pick any five of the vertices inside K6 and the edges connecting those vertices. There’s your K5.

Sometimes it’s hard to find one of these. We can make a graph look more complicated without changing whether it’s planar or not. Take your K3,3 again. Go to each edge and draw another dot, another vertex, inside it. Well, now it’s a graph that’s got twelve vertices in it. It’s not obvious whether this is bipartite still. (Play with it a while.) But it hasn’t become planar, not because of this. It won’t be.

This is because we can make graphs more complicated, or more simple, without changing whether they’re planar. The process is a lot like what we did last week with the five-color map theorem, making a map simpler until it was easy enough to color. Suppose there’s a little section of the graph that’s a vertex connected by one edge to a middle vertex connected by one edge to a third vertex. Do we actually need that middle vertex for anything? Besides padding our vertex count? Nah. We can drop that whole edge-middle vertex-edge sequence and replace it all with a single edge. And there’s other rules that let us turn a vertex-edge-vertex set into a single vertex. That sort of thing. It won’t change a planar graph to a nonplanar one or vice-versa.

So it can be hard to find the K3,3 or the K5 hiding inside a nonplanar graph. A big enough graph can have so much going on it’s hard to find the pattern. But they’ll be there, in some group of five or six vertices with the right paths between them.

It would make a good activity puzzle, if you could explain what to look for to kids.

Reading the Comics, May 12, 2016: No Pictures Again Edition


I’ve hardly stopped reading the comics. I doubt I could even if I wanted at this point. But all the comics this bunch are from GoComics, which as far as I’m aware doesn’t turn off access to comic strips after a couple of weeks. So I don’t quite feel justified including the images of the comics when you can just click links to them instead.

It feels a bit barren, I admit. I wonder if I shouldn’t commission some pictures so I have something for visual appeal. There’s people I know who do comics online. They might be able to think of something to go alongside every “Student has snarky answer for a word problem” strip.

Brian and Ron Boychuk’s The Chuckle Brothers for the 8th of May drops in an absolute zero joke. Absolute zero’s a neat concept. People became aware of it partly by simple extrapolation. Given that the volume of a gas drops as the temperature drops, is there a temperature at which the volume drops to zero? (It’s complicated. But that’s the thread I use to justify pointing out this strip here.) And people also expected there should be an absolute temperature scale because it seemed like we should be able to describe temperature without tying it to a particular method of measuring it. That is, it would be a temperature “absolute” in that it’s not explicitly tied to what’s convenient for Western Europeans in the 19th century to measure. That zero and that instrument-independent temperature idea get conflated, and reasonably so. Hasok Chang’s Inventing Temperature: Measurement and Scientific Progress is well worth the read for people who want to understand absolute temperature better.
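That extrapolation can be done with a ruler, or with a few lines of code. The numbers below are hypothetical measurements, made up to be consistent with the ideal gas law rather than taken from anybody’s real lab notebook:

```python
# hypothetical measurements of one mole of gas at constant pressure:
# temperature in degrees Celsius, volume in liters
temps = [0.0, 20.0, 50.0, 100.0]
vols = [22.41, 24.05, 26.52, 30.62]

# fit a straight line V = a*t + b by least squares
n = len(temps)
mt = sum(temps) / n
mv = sum(vols) / n
a = (sum((t - mt) * (v - mv) for t, v in zip(temps, vols))
     / sum((t - mt) ** 2 for t in temps))
b = mv - a * mt

# extrapolate: the temperature where the volume would drop to zero
absolute_zero = -b / a
print(round(absolute_zero))  # -273
```

Real gases liquefy long before getting there, which is part of what makes the extrapolation complicated. But the intercept keeps landing around −273 degrees Celsius all the same.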

Gene Weingarten, Dan Weingarten & David Clark’s Barney and Clyde for the 9th is another strip that seems like it might not belong here. While it’s true that accidents sometimes lead to great scientific discoveries, what has that to do with mathematics? The first thread is that there are mathematical accidents and empirical discoveries. Many of them are computer-assisted. There is something that feels experimental about doing a simulation. Modern chaos theory, the study of deterministic yet unpredictable systems, has at its founding myth Edward Lorenz discovering that tiny changes in a crude weather simulation program mattered almost right away. (By founding myth I don’t mean that it didn’t happen. I just mean it’s become the stuff of mathematics legend.)

But there are other ways that “accidents” can be useful. Monte Carlo methods are often used to find extreme — maximum or minimum — solutions to complicated systems. These are good if it’s hard to find a best possible answer, but it’s easy to compare whether one solution is better or worse than another. We can get close to the best possible answer by picking an answer at random, and fiddling with it at random. If we improve things, good: keep the change. You can see why this should get us pretty close to a best-possible-answer soon enough. And if we make things worse then … usually, but not always, we reject the change. Sometimes we take this “accident”. And that’s because if we only take improvements we might get caught at a local extreme. An even better extreme might be available but only by going down an initially unpromising direction. So it’s worth allowing for some “mistakes”.
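This accept-some-mistakes scheme is the heart of simulated annealing. Here’s a sketch, with a toy function and parameters of my own choosing, minimizing a bumpy curve that would trap a never-make-mistakes method in a merely local minimum:

```python
import math
import random

def anneal(f, x, steps=20000):
    # minimize f by fiddling with a candidate answer at random:
    # always keep improvements, and sometimes keep "mistakes" too,
    # more readily while the "temperature" is high, so the search
    # can climb out of merely local minimums
    fx = f(x)
    best, fbest = x, fx
    for k in range(steps):
        temp = max(1.0 * (1 - k / steps), 1e-9)  # cool down over time
        y = x + random.gauss(0, 0.5)             # fiddle at random
        fy = f(y)
        if fy < fx or random.random() < math.exp((fx - fy) / temp):
            x, fx = y, fy  # keep the change, improvement or not
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# a bumpy curve: lots of local minimums, global minimum of 0 at x = 0
def bumpy(x):
    return x * x + 3 * math.sin(5 * x) ** 2

random.seed(12)
x, val = anneal(bumpy, 4.0)  # starts far away, where bumpy is about 18.5
```

A pure keep-only-improvements search started at 4.0 tends to settle into whichever dip is nearest; the occasional accepted “accident” is what lets this one wander into a much deeper one.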

Mark Anderson’s Andertoons for the 10th is some wordplay on volume. The volume of boxes is an easy formula to remember and maybe it’s a boring one. It’s enough, though. You can work out the volume of any shape using just the volume of boxes. But you do need integral calculus to tell how to do it. So maybe it’s easier to memorize the formula for volumes of a pyramid and a sphere.

Berkeley Breathed’s Bloom County for the 10th of May is a rerun from 1981. And it uses a legitimate bit of mathematics for Milo to insult Freida. He calls her a “log 10 times 10 to the derivative of 10,000”. The “log 10” is going to be 1. A reference to logarithm, without a base attached, means either base ten or base e. “log” by itself used to invariably mean base ten, back when logarithms were needed to do ordinary multiplication and division and exponentiation. Now that we have calculators for this mathematicians have started reclaiming “log” to mean the natural logarithm, base e, which is normally written “ln”, but that’s still an eccentric use. Anyway, the logarithm base ten of ten is 1: 10 is equal to 10 to the first power.

10 to the derivative of 10,000 … well, that’s 10 raised to whatever number “the derivative of 10,000” is. Derivatives take us into calculus. They describe how much a quantity changes as one or more variables change. 10,000 is just a number; it doesn’t change. It’s called a “constant”, in another bit of mathematics lingo that reminds us not all mathematics lingo is hard to understand. Since it doesn’t change, its derivative is zero. As anything else changes, the constant 10,000 does not. So the derivative of 10,000 is zero. 10 to the zeroth power is 1.

So, one times one is … one. And it’s rather neat that kids Milo’s age understand derivatives well enough to calculate that.

Ruben Bolling’s Super-Fun-Pak Comix rerun for the 10th happens to have a bit of graph theory in it. One of Uncle Cap’n’s Puzzle Pontoons is a challenge to trace out a figure without retracing a line or lifting your pencil. You can’t, not this figure. One of the first things you learn in graph theory teaches how to tell, and why. And thanks to a Twitter request I’m figuring to describe some of that for the upcoming Theorem Thursdays project. Watch this space!
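The test, since you ask: a connected figure can be traced without retracing or lifting the pencil exactly when it has zero or two dots where an odd number of lines meet. A sketch of my own, tried out on the Königsberg bridges themselves:

```python
from collections import Counter

def traceable(edges):
    # a connected figure can be traced without lifting the pencil,
    # or retracing, exactly when zero or two of its dots have an
    # odd number of lines meeting there (Euler's observation)
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

# the Königsberg bridges: banks A and B, islands C and D, seven bridges
bridges = [("A", "C"), ("A", "C"), ("B", "C"), ("B", "C"),
           ("C", "D"), ("A", "D"), ("B", "D")]
print(traceable(bridges))  # False: all four land masses have odd degree
# a square with one diagonal has just two odd corners, so it's traceable
print(traceable([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))  # True
```

The function assumes the figure is all one connected piece; a figure in separate pieces can’t be traced no matter what the degrees say.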

Charles Schulz’s Peanuts Begins for the 11th, a rerun from the 6th of February, 1952, is cute enough. It’s one of those jokes about how a problem seems intractable until you’ve found the right way to describe it. I can’t fault Charlie Brown’s thinking here. Figuring out a way the problems are familiar and easy is great.

Shaenon K Garrity and Jeffrey C Wells’s Skin Horse for the 12th is a “see, we use mathematics in the real world” joke. In this case it’s triangles and triangulation. That’s probably the part of geometry it’s easiest to demonstrate a real-world use for, and that makes me realize I don’t remember mathematics class making use of that. I remember it coming up some, particularly in what must have been science class when we built and launched model rockets. We used a measure of how high an angle the rocket reached, and knowledge of how far the observing station was from the launchpad. But that wasn’t mathematics class for some reason, which is peculiar.

A Leap Day 2016 Mathematics A To Z: Matrix


I get to start this week with another request. Today’s Leap Day Mathematics A To Z term is a famous one, and one that I remember terrifying me in the earliest days of high school. The request comes from Gaurish, chief author of the Gaurish4Math blog.

Matrix.

Lewis Carroll didn’t like the matrix. Well, Charles Dodgson, anyway. And it isn’t that he disliked matrices particularly. He believed it was a bad use of a word. “Surely,” he wrote, “[ matrix ] means rather the mould, or form, into which algebraical quantities may be introduced, than an actual assemblage of such quantities”. He might have had etymology on his side. The word meant the place where something was developed, the source of something else. History has outvoted him, and his preferred “block”. The first mathematicians to use the word “matrix” were interested in things derived from the matrix. So for them, the matrix was the source of something else.

What we mean by a matrix is a collection of some number of rows and columns. Inside each individual row and column is some mathematical entity. We call this an element. Elements are almost always real numbers. When they’re not real numbers they’re complex-valued numbers. (I’m sure somebody, somewhere has created matrices with something else as elements. You’ll never see these freaks.)

Matrices work a lot like vectors do. We can add them together. We can multiply them by real- or complex-valued numbers, called scalars. But we can do other things with them. We can define multiplication, at least sometimes. The definition looks like a lot of work, but it represents something useful that way. And for square matrices, ones with equal numbers of rows and columns, we can find other useful stuff. We give that stuff wonderful names like traces and determinants and eigenvalues and eigenvectors and such.

One of the big uses of matrices is to represent a mapping. A matrix can describe how points in a domain map to points in a range. Properly, a matrix made up of real numbers can only describe what are called linear mappings. These are ones that turn the domain into the range by stretching or squeezing down or rotating the whole domain the same amount. A mapping might follow different rules in different regions, but that’s all right. We can write a matrix that approximates the original mapping, at least in some areas. We do this in the same way, and for pretty much the same reason, we can approximate a real and complicated curve with a bunch of straight lines. Or the way we can approximate a complicated surface with a bunch of triangular plates.

We can compound mappings. That is, we can start with a domain and a mapping, and find the image of that domain. We can then use a mapping again and find the image of the image of that domain. The matrix that describes this mapping-of-a-mapping is the one you get by multiplying the matrix of the first mapping and the matrix of the second mapping together. This is why we define matrix multiplication the odd way we do. Mappings are that useful, and matrices are that tied to them.
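A quick demonstration of the compounding, again in Python with NumPy and with the two mappings made up for the occasion:

```python
import numpy as np

# Two linear mappings of the plane, written as matrices.
rotate = np.array([[0.0, -1.0],    # rotate 90 degrees counterclockwise
                   [1.0,  0.0]])
stretch = np.array([[2.0, 0.0],    # stretch horizontally by a factor of 2
                    [0.0, 1.0]])

point = np.array([1.0, 3.0])

# Apply one mapping, then apply the other to the result.
one_then_other = stretch @ (rotate @ point)

# The single matrix for the mapping-of-a-mapping is the product.
composed = stretch @ rotate
```

Applying `composed` to the point gives the same answer in one step, which is the whole reason the multiplication rule looks the way it does.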

I wrote about some of the uses of matrices in a Set Tour essay. That was based on a use of matrices in physics. We can describe the changing of a physical system with a mapping. And we can understand equilibriums, states where a system doesn’t change, by looking at the matrix that approximates what the mapping does near but not exactly on the equilibrium.

But there are other uses of matrices. Many of them have nothing to do with mappings or physical systems or anything. For example, we have graph theory. A graph, here, means a bunch of points, “vertices”, connected by curves, “edges”. Many interesting properties of graphs depend on how many other vertices each vertex is connected to. And this is well-represented by a matrix. Index your vertices. Then create a matrix. If vertex number 1 connects to vertex number 2, put a ‘1’ in the first row, second column. If vertex number 1 connects to vertex number 3, put a ‘1’ in the first row, third column. If vertex number 2 isn’t connected to vertex number 3, put a ‘0’ in the second row, third column. And so on.
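The recipe above takes only a few lines of Python to carry out. The graph here is invented for illustration:

```python
# Four vertices, numbered 1 through 4, and the pairs that are connected.
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]

# Build the adjacency matrix: a 1 where two vertices connect, 0 elsewhere.
n = 4
adjacency = [[0] * n for _ in range(n)]
for a, b in edges:
    adjacency[a - 1][b - 1] = 1    # row a, column b
    adjacency[b - 1][a - 1] = 1    # the connection runs both ways

# How many other vertices each vertex connects to: add up its row.
degrees = [sum(row) for row in adjacency]
```

Summing a row recovers exactly the count the paragraph mentions, how many other vertices each vertex touches.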

We don’t have to use ones and zeroes. A “network” is a kind of graph where there’s some cost associated with each edge. We can put that cost, that number, into the matrix. Studying the matrix of a graph or network can tell us things that aren’t obvious from looking at the drawing.
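Here's one example of a non-obvious thing the matrix of a network can tell us: the cheapest route between every pair of vertices, found with the Floyd-Warshall algorithm. The costs below are made up for illustration:

```python
# Cost matrix for a four-vertex network. INF marks "no direct connection";
# 0 on the diagonal means it costs nothing to stay put.
INF = float('inf')
cost = [
    [0,   3,   1,   INF],
    [3,   0,   INF, 5],
    [1,   INF, 0,   2],
    [INF, 5,   2,   0],
]

# Floyd-Warshall: repeatedly ask whether going through vertex k
# beats the best route found so far from i to j.
n = len(cost)
best = [row[:] for row in cost]
for k in range(n):
    for i in range(n):
        for j in range(n):
            if best[i][k] + best[k][j] < best[i][j]:
                best[i][j] = best[i][k] + best[k][j]
```

Vertex 1 has no direct edge to vertex 4, but the matrix `best` reveals a route through vertex 3 costing only 3, something you might not spot from a tangled drawing.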

How November 2015 Treated My Mathematics Blog


So after a couple dismal months my ratings appear to be up. The number of page views and of visitors, in fact, seem to be at all-time highs. At least they’re at highs for the past twelve months. I would like to think that the depressed readings of September and October — 708 page views and 381 visitors; 733 page views and 405 visitors, respectively — are behind me. November saw 1,215 page views and 519 visitors.

Some of this is an accident. My humor blog got a tidal wave of readers courtesy The Onion AV Club. The AV Club wrote up a piece about the sad end of the comic strip Apartment 3-G, and I’ve written a shocking amount about the soap strip. They mentioned me. And as I’ve used my comic strip posts there to mention my Reading the Comics series here, some curious people followed along.

That said, I'm not sure how many of those readers were AV Club curiosity-seekers. A crude estimate suggests somewhere a little over two hundred were. So even discounting that, something near a thousand regular-style readers came in and looked around, and that's nice to see. It's back up to about where the readership was before the mysterious dropoff, in July, that many suspect results from mobile devices being incorrectly read.

For the roster of countries, well, the top was the United States as always, with some 837 page views. The United Kingdom came in with 62. Canada appears third at 50 views, and the Philippines next at 20. Singapore and Australia tie at 19.

Single-reader countries this past month were Algeria, Argentina, Belgium, Egypt, Finland, Israel, Jamaica, Malaysia, Mexico, Nigeria, Puerto Rico, Romania, Thailand, Turkey, and Vietnam. Belgium, Nigeria, and Thailand are repeats from October. No country’s on a three-month streak.

The Reading the Comics posts are as ever the most popular group and I’ve bundled them under the one category tag. But my Ramsey Theory question turned out to be slightly more popular than any of them in November. After grouping together all the comics posts, the most popular articles look like:

  1. Why Was Someone Upset With Ramsey Theory In 1979? a question about a dimly remembered Dear Abby-class question.
  2. Reading the Comics, an ongoing series.
  3. How October Treated My Mathematics Blog, and yes, I risk an endless loop by mentioning this here.
  4. How Many Trapezoids I Can Draw and goodness it’s nice to see the trapezoids turning up again.
  5. How Antifreeze Works, one of my little pointers to someone else’s interesting writing.

Nothing really dominated my search term queries this month. Some of the things that turned up were:

  • illustration of electromagnetic wave theory scientist comics strip
  • james clerk maxwell comics (I’m not sure I have any of these; this suggests I ought to be finding some.)
  • origin is the gateway to your entire gaming universe. (I’ve had this explained to me, but I forget what it means.)
  • places 1975 miles from charlotte nc (I know of none specifically 1,975 miles away.)
  • if i got 70 percent in all exams what grade do i need on final to pass course? (This I can help with.)

December starts with my blog here at 30,298 page views, and with 543 WordPress followers. I expect it’ll be overtaken in page views by my humor blog sometime soon.

A Summer 2015 Mathematics A To Z: vertex (graph theory)


Vertex.

I mentioned graph theory several weeks back, when this Mathematics A To Z project was barely begun. It’s a fun field. It’s a great one for doodlers, and it’s one that has surprising links to other problems.

Graph theory divides the conceptual universe into “things that could be connected” and “ways they are connected”. The “things that could be connected” we call vertices. The “ways they are connected” are the edges. Vertices might have an obvious physical interpretation. They might represent the corners of a cube or a pyramid or some other common shape. That, I imagine, is why these things were ever called vertices. A diagram of a graph can look a lot like a drawing of a solid object. It doesn’t have to, though. Many graphs will have vertices and edges connected in ways that no solid object could have. They will usually be ones that you could build in wireframe. Use gumdrops for the vertices and strands of wire or plastic or pencils for the edges.

Vertices might stand in for the houses that need to be connected to sources of water and electricity and Internet. They might be the way we represent devices connected on the Internet. They might represent all the area within a state’s boundaries. The Königsberg bridge problem, held up as the ancestor of graph theory, has its vertices represent the islands and river banks one gets to by bridges. Vertices are, as I say, the things that might be connected.

“Things that might be connected” is a broader category than you might imagine. For example, an important practical use of mathematics is making error-detecting and error-correcting codes. This is how you might send a message that gets garbled — in sending, in transmitting, or in reception — and still understand what was meant. You can model error-detecting or correcting codes as a graph. In this case every possible message is a vertex. Edges connect together the messages that could plausibly be misinterpreted as one another. How many edges you draw — how much misunderstanding you allow for — depends on how many errors you want to be able to detect, or to correct.
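A sketch of that modeling idea in Python, using tiny three-bit messages invented for illustration. Edges connect messages that differ in a single bit, the pairs a single transmission error could confuse:

```python
from itertools import combinations

# Every possible 3-bit message is a vertex.
messages = [format(i, '03b') for i in range(8)]   # '000', '001', ..., '111'

def hamming(a, b):
    """Count the positions where two messages disagree."""
    return sum(x != y for x, y in zip(a, b))

# Connect the messages that one bit-flip could turn into each other.
edges = [(a, b) for a, b in combinations(messages, 2)
         if hamming(a, b) == 1]
```

Allowing for more errors means connecting messages at greater Hamming distances, which draws more edges, just as the paragraph describes.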

When we draw a vertex on paper or a chalkboard or the like we usually draw it as a + or an x or maybe a *. How big we draw it depends on how afraid we are of losing sight of it as we keep working. In publication it’s often drawn as a simple dot. This is because printers are able to draw dots that don’t get muddied up by edges being drawn in or eraser marks removing edges.

A Summer 2015 Mathematics A To Z: graph


Graph. (As in Graph Theory)

When I started this A to Z I figured it would be a nice string of quick, two-to-four paragraph compositions. So far each one has been a monster post instead. I’m hoping to get to some easier ones. For today I mean to talk about a graph, as in graph theory. That’s not the kind of graph that’s a plot of some function or a pie chart or a labelled map or something like that.

This kind of graph we do study as pictures, though. Specifically, they’re pictures with two essential pieces: a bunch of points or dots and a bunch of curves connecting dots. The dots we call vertices. The curves we call edges. My mental model tends to be of a bunch of points in a pegboard connected by wire or string. That might not work for you, but the idea of connecting things is what graphs, and graph theory, are good for studying.

Continue reading “A Summer 2015 Mathematics A To Z: graph”

Kenneth Appel and Colored Maps


Word’s come through mathematics circles about the death of Kenneth Ira Appel, who along with Wolfgang Haken did one of those things every mathematically-inclined person really wishes to do: solve one of the long-running unsolved problems of mathematics. Even better, he solved one of those accessible problems. There are a lot of great unsolved problems that take a couple paragraphs just to set up for the lay audience (who then will wonder what use the problem is, as if that were the measure of interesting); Appel and Haken’s was the Four Color Theorem, which people can understand once they’ve used crayons and coloring books (even if they wonder whether it’s useful for anyone besides Hammond).

It was, by everything I’ve read, a controversial proof at the time, although by the time I was an undergraduate the controversy had faded the way controversial stuff doesn’t seem that exciting decades on. The proximate controversy was that much of the proof was worked out by computer, which is the sort of thing that naturally alarms people whose jobs are to hand-carve proofs using coffee and scraps of lumber. The worry about that seems to have faded as more people get to use computers and find they’re not putting the proof-carvers out of work to any great extent, and as proof-checking software gets up to the task of doing what we would hope.

Still, the proof, right as it probably is, probably offers philosophers of mathematics a great example for figuring out just what is meant by a “proof”. The word implies that a proof is an argument which convinces a person of some proposition. But the Four Color Theorem proof is … well, according to Appel and Haken, 50 pages of text and diagrams, with 85 pages containing an additional 2,500 diagrams, and 400 microfiche pages with additional diagrams of verifications of claims made in the main text. I’ll never read all that, much less understand all that; it’s probably fair to say very few people ever will.

So I couldn’t, honestly, say it was proved to me. But that’s hardly the standard for saying whether something is proved. If it were, then every calculus class would produce the discovery that just about none of calculus has been proved, and that this whole “infinite series” thing sounds like it’s all doubletalk made up on the spot. And yet, we could imagine — at least, I could imagine — a day when none of the people who wrote the proof, or verified it for publication, or have verified it since then, are still alive. At that point, would the theorem still be proved?

(Well, yes: the original proof has been improved a bit, although it’s still a monstrously large one. And Neil Robertson, Daniel P Sanders, Paul Seymour, and Robin Thomas published a proof, similar in spirit but rather smaller, and have been distributing the tools needed to check their work; I can’t imagine a time when nobody alive has done, or at least has the ability to do, the checking work.)

I’m treading into the philosophy of mathematics, and I realize my naivete about questions like what constitutes a proof is painful to anyone who really studies the field. I apologize for inflicting that pain.

Before Drawing a Graph


I want to talk about drawing graphs, specifically, drawing curves on graphs. We know roughly what’s meant by that: it’s about wiggly shapes with a faint rectangular grid, usually in grey or maybe drawn in dotted lines, behind them. Sometimes the wiggly shapes will be in bright colors, to clarify a complicated figure or to justify printing the textbook in color. Those graphs.

I clarify because there is a type of math called graph theory in which, yes, you might draw graphs, but there what’s meant by a graph is just any sort of group of points, called vertices, connected by lines or curves. It makes great sense as a name, but it’s not what someone who talks about drawing a graph means, up until graph theory gets into consideration. Those graphs are fun, particularly because they’re insensitive to exactly where the vertices are, so you get to exercise some artistic talent instead of figuring out whatever you were trying to prove in the problem.

The ordinary kind of graphs offer some wonderful advantages. The obvious one is that they’re pictures. People can very often understand a picture of something much faster than they can understand other sorts of descriptions. This probably doesn’t need any demonstration; if it does, try looking at a map of the boundaries of South Carolina versus reading a description of its boundaries. Some problems are much easier to work out if we can approach them as geometric problems. (And I admit feeling a particular delight when I can prove a problem geometrically; it feels cleverer.)

Continue reading “Before Drawing a Graph”