The End 2016 Mathematics A To Z: Tree


Graph theory begins with a beautiful legend. I have no reason to suppose it’s false, except my natural suspicion of beautiful legends as origin stories. Its organization as a field is traced to 18th century Königsberg, where seven bridges connected the banks of a river and the islands in its middle. Whether it was possible to cross each bridge exactly once and get back where one started was, they say, a pleasant idle thought to ponder and a path to try walking. Then Leonhard Euler solved the problem. It’s impossible.

Tree.

Graph theory arises whenever we have a bunch of things that can be connected. We call the things “vertices”, because that’s a good corner-type word. The connections we call “edges”, because that’s a good connection-type word. It’s easy to create graphs that look like the edges of a crystal, especially if you draw the edges as straight lines as much as possible. You don’t have to. You can draw them curved. Then they look like the scary tangles of wire around your wireless router complex.

Graph theory really got organized in the 19th century, and went crazy in the 20th. It turns out there’s lots of things that connect to other things. Networks, whether computers or social or thematically linked concepts. Anything that has to be delivered from one place to another. All the interesting chemicals. Anything that could be put in a pipe or taken on a road has some graph theory thing applicable to it.

A lot of graph theory ponders loops. The original problem was about how to use every bridge, every edge, exactly one time. Look at a tangled mass of a graph and it’s hard not to start looking for loops. They’re often interesting. It’s not easy to tell if there’s a loop that lets you get to every vertex exactly once.

What if there aren’t loops? What if there aren’t any vertices you can step away from and get back to by another route? Well, then you have a tree.

A tree’s a graph where all the vertices are connected so that there aren’t any closed loops. We normally draw them with straight lines, the better to look like actual trees. We then stop trying to make them look like actual trees by doing stuff like drawing them as a long horizontal spine with a couple branches sticking off above and below, or as * type stars, or H shapes. They still correspond to real-world things. If you’re not sure how, consider the layout of one of those long, single-corridor hallways as in a hotel or dormitory. The rooms connect to one another as a tree once again, as long as no room opens to anything but its own closet or bathroom or the central hallway.

We can talk about the radius of a graph. That’s how many edges away any point can be from the center of the tree. And every tree has a center. Or two centers. If it has two centers they share an edge between the two. And that’s one of the quietly amazing things about trees to me. However complicated and messy the tree might be, we can find its center. How many things allow us that?
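
For the curious, here’s a minimal sketch of how that center can be found: keep plucking off the leaves, a full layer at a time, until only one or two vertices survive. The tree_center function and the example tree are my own made-up illustrations.

```python
from collections import defaultdict

def tree_center(edges):
    """Find the center of a tree (one vertex, or two sharing an edge) by
    stripping away the leaves, one full layer at a time."""
    neighbors = defaultdict(set)
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    remaining = set(neighbors)
    while len(remaining) > 2:
        leaves = {v for v in remaining if len(neighbors[v]) <= 1}
        for leaf in leaves:
            for other in neighbors[leaf]:
                neighbors[other].discard(leaf)
            neighbors[leaf].clear()
        remaining -= leaves
    return remaining

# A long spine with a couple of branches sticking off it; the center is vertex 3.
print(tree_center([(1, 2), (2, 3), (3, 4), (4, 5), (3, 6), (4, 7)]))  # {3}
```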

A tree might have some special vertex. That’s called the ‘root’. It’s what the vertices and the connections represent that make a root; it’s not something inherent in the way trees look. We pick one for some special reason and then we highlight it. Maybe put it at the bottom of the drawing, making ‘root’ for once a sensible name for a mathematics thing. Often we put it at the top of the drawing, because I guess we’re just being difficult. Well, we do that because we were modelling stuff where a thing’s properties depend on what it comes from. And that puts us into thoughts of inheritance and of family trees. And weird as it is to put the root of a tree at the top, it’s also weird to put the eldest ancestors at the bottom of a family tree. People do it, but in those illuminated drawings that make a literal tree out of things. You don’t see it in family trees used for actual work, like filling up a couple pages at the start of a king or a queen’s biography.

Trees give us neat new questions to ponder, like, how many are there? I mean, if you have a certain number of vertices then how many ways are there to arrange them? One or two or three vertices all have just the one way to arrange them. Four vertices can be hooked up a whole two ways. Five vertices offer a whole three different ways to connect them. Six vertices offer six ways to connect and now we’re finally getting something interesting. There’s eleven ways to connect seven vertices, and 23 ways to connect eight vertices. The number keeps on rising, but it doesn’t follow the obvious patterns for growth of this sort of thing.
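
If you’d rather check those counts than draw them all, the networkx library (assuming you have it installed) can count the genuinely different, unlabeled trees of a given size; this little loop is just a sanity check of the numbers above.

```python
import networkx as nx

# Count the non-isomorphic (genuinely different) trees on n vertices.
for n in range(2, 9):
    print(n, nx.number_of_nonisomorphic_trees(n))
# Expect 1, 1, 2, 3, 6, 11, and 23 for two through eight vertices.
```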

And if that’s not enough to idly ponder then think of destroying trees. Draw a tree, any shape you like. Pick one of the vertices. Imagine you obliterate that. How many separate pieces has the tree been broken into? It might be as few as one, if you picked one of the outermost vertices. It might be as many as the number of remaining vertices. If graph theory took away the pastime of wandering around Königsberg’s bridges, it has given us this pastime we can create anytime we have pen, paper, and a long meeting.
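
Here’s a small sketch of that pastime in code, a made-up pieces_after_removal function: delete the chosen vertex and its edges, then count the connected pieces that survive.

```python
def pieces_after_removal(edges, doomed):
    """Obliterate one vertex and count the separate pieces left behind."""
    vertices = {v for edge in edges for v in edge} - {doomed}
    neighbors = {v: set() for v in vertices}
    for u, v in edges:
        if u != doomed and v != doomed:
            neighbors[u].add(v)
            neighbors[v].add(u)
    seen, pieces = set(), 0
    for start in vertices:
        if start in seen:
            continue
        pieces += 1
        stack = [start]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(neighbors[v] - seen)
    return pieces

# Obliterate the middle of a small tree: three pieces remain.
print(pieces_after_removal([(1, 2), (2, 3), (3, 4), (4, 5), (3, 6), (4, 7)], doomed=3))  # 3
```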

Theorem Thursday: Tutte’s Theorem, Magic, And Happy Endings


To wrap up my two-month Theorem Thursday project I have another request to fill. And it’s another from Gaurish, whose requests for the Leap Day A To Z inspired so many interesting posts.

Tutte’s Theorem

I admit I had not heard of Tutte’s Theorem before Gaurish’s request and I had to spend time working up to knowing what it was and why it was useful. I also admit I’m not completely sure I have it. But I’m happy to try to accept with grace corrections from the people in graph theory who know better.

It comes back to graphs. These are a bunch of points, “vertices”, which have curves, “edges” connecting pairs of them. This describes a lot of systems. Bunches of things that have to connect are naturally graphs. Connecting utilities to houses gave the first example I called up last week. The Internet’s made people vaguely familiar with the idea of bunches of computers linked to bunches of other computers. Social networks can be seen as graphs; each person is a vertex, her friendships edges connecting to other vertices.

In a graph not every vertex has to be connected to every other vertex. Not everyone needs to be friends with everyone else, which is a relief. I don’t want to point fingers but some of your friends are insufferable. I don’t know how you put up with them. You, you’re great. But the friends of a friend … yeeeeeeesh.

So we — mathematicians, anyway — get to wondering. Give me a graph. It’s got some vertices and some edges. Let’s look at only some of these edges. Each edge links to two vertices. Is there a subset of the edges that touch every one of the vertices exactly once? Sometimes there are; sometimes there aren’t. If there are, we have a “perfect matching”.

We look at this sort of thing because mathematicians learn to look at coverings. Coverings are what they sound like. What’s the smallest set of some standard item you need to … cover … all the stuff you’re interested in this problem? I think we’re bred to look for coverings in Real Analysis, because the idea of covering space with discs gives us measure. This gives us a rigorous idea of what length is, and what dimensions are, and lets us see there have to be more irrational than rational numbers and other interesting results like that. Mathematicians get used to looking for this sort of thing in other fields.

Tutte’s Theorem is about perfect matchings. It says what conditions a graph has to have to have a perfect matching. It’s based on striking subsets of the vertices from the original graph. The accompanying edges go too. What’s left might be connected, which means just what you think. You can get from any vertex in the decimated graph to any other vertex by following its edges. Or it might be disconnected, which means again just what you think. Mathematics is not always about complicated lingo.

Take the survivors. Count how many of the remaining connected components have an odd number of vertices. Is that less than or equal to the number of vertices you struck out? If it is, and if it is no matter how many vertices you struck out, and no matter how you arranged those vertices, then there’s a perfect matching.

This isn’t one for testing. I mean, consider what’s easier to do: start from your original graph and test out coloring in some edges to see if you can touch every vertex the one time? Or take every possible sub-graph of the original graph and count the number of connected sub-pieces with an odd number of vertices? … Well, maybe that isn’t so bad, if you set a computer to do the boring work. It’s going to take forever for your graph with 102 vertices, though, unless it’s a boring network. You have to test every possible removal all the way up to striking 102 vertices. (OK, it’s easy to show it’s true if 102 of 102 vertices are removed from the graph. Also if 101 of 102 vertices are removed. And 100 of 102 is also not hard. But those are only a couple of easy cases.)
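
If you do set a computer to the boring work, a brute-force check might look like the following sketch. The function names are my own invention, and the approach only suits toy-sized graphs, since it tries every possible removal.

```python
from itertools import combinations

def odd_components(vertices, edges, removed):
    """Count the connected pieces of odd size once `removed` is struck out."""
    alive = set(vertices) - set(removed)
    neighbors = {v: set() for v in alive}
    for u, v in edges:
        if u in alive and v in alive:
            neighbors[u].add(v)
            neighbors[v].add(u)
    seen, odd = set(), 0
    for start in alive:
        if start in seen:
            continue
        stack, size = [start], 0
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            size += 1
            stack.extend(neighbors[v] - seen)
        odd += size % 2
    return odd

def tutte_condition_holds(vertices, edges):
    """Check Tutte's condition by trying every possible set of struck-out
    vertices.  Exponential in the number of vertices; toy graphs only."""
    return all(
        odd_components(vertices, edges, removed) <= k
        for k in range(len(vertices) + 1)
        for removed in combinations(vertices, k)
    )

# A path on four vertices has a perfect matching.  A three-pointed star does
# not: strike out its center and three odd pieces remain against one removal.
print(tutte_condition_holds([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))  # True
print(tutte_condition_holds([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)]))  # False
```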

I don’t know the real work the theorem does. It has some neat implications, though, good for pop mathematics. Take a standard deck of 52 well-shuffled cards. Deal them out into thirteen piles of four cards each. Is it possible to select exactly one card from each pile so that, when done, there’s exactly one of each face value — Ace, Two, Three, Four, up through Queen and King — in the selected set? Indeed it is. I leave it to the Magic In A Minute cartoonists to turn this into a way to annoy a compliant monkey.

Another appealing use for it is in marriage problems. Well, marriage looked to mathematicians like a good source of bipartite graph problems. Society’s since noticed that nobody really hit on a compelling reason for “why does a woman have to marry a man, exactly”. I imagine the change to filter into mathematics textbooks sometime in the next seventy years. But please accept that this problem was formed and solved in the 1930s.

Suppose you have a group of women and a group of men, of equal size. Each woman would be happy married to some of the men in that group. Each man would be happy married to some of the women in that group. Is it possible to match women and men up in a way that everybody is married to at least someone they’d be happy with? We’re not guaranteeing that anyone gets their best possible pairing. We promise just that everyone marries someone they’ll be glad to.

It depends. Maybe this is best done by testing. Work through every possible subset of women. That is, look at every group of one woman, every group of two women, every group of three women, and so on. By group I mean what normal people mean by group, not what mathematicians mean. So look at your group of women. Count how many of the men at least one woman would be content marrying. Is that number equal to or larger than the number of women in the group? And is that true for every group, however many women you picked and whichever women you did pick? If it is, great: everybody can marry someone they’re happy with.
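
That testing procedure is easy to sketch in code, at least for small groups. Everything here, names included, is a made-up illustration of the condition, not anybody’s matchmaking service.

```python
from itertools import combinations

def everyone_can_marry(acceptable):
    """Check Hall's condition by brute force.  `acceptable` maps each woman to
    the set of men she would be content to marry."""
    women = list(acceptable)
    for k in range(1, len(women) + 1):
        for group in combinations(women, k):
            men_somebody_likes = set().union(*(acceptable[w] for w in group))
            if len(men_somebody_likes) < k:
                return False   # this group of women has too few acceptable men
    return True

# Hypothetical preferences.  In the first case Ann and Beth both insist on Dan,
# so the two of them can't both be happily married; in the second, everyone can.
print(everyone_can_marry({"Ann": {"Dan"}, "Beth": {"Dan"}, "Cat": {"Ed", "Fred"}}))  # False
print(everyone_can_marry({"Ann": {"Dan"}, "Beth": {"Ed"}, "Cat": {"Dan", "Fred"}}))  # True
```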

Parlor tricks, I admit, but pleasant ones. What are its real uses? At this point I am really synthesizing my readings on the subject rather than speaking from things I’m confident about. If I understand right, though, Tutte’s Theorem is a foundation for the study of graph toughness. That is what you’d guess from the name. It’s about how easy it is to break up a graph into disconnected pieces. It’s easy to imagine real networks with strength or weakness. Imagine the person who holds a complicated group of friends together, for example, and whose removal devastates it. Consider the electrical network with a few vulnerable points that allow small problems to become nationwide blackouts. I believe it’s relevant to the study of proteins, those long strands of blocks of molecules that then fold back on themselves. (I haven’t seen a source that says this, but I can’t imagine why it wouldn’t be. I am open to correction from sneering protein researchers.) I would be surprised if the theorem has nothing to say about when a strand of Christmas tree lights will get too tangled to fix.

Let me close with a puzzle, a delightful little one. It regards one of the graphs we met last week. K5 is a complete graph with five vertices. Each vertex is connected to all four of its siblings. It can’t have a perfect matching. Only a graph with an even number of vertices can. Each edge connects to two vertices, after all. So — what is the subset that breaks Tutte’s theorem? It must be possible to strike some set of vertices from K5 so that the number of stricken vertices is smaller than the number of remaining connected components with an odd number of vertices. What’s that set?

Go ahead and ponder it a while. If you give up I don’t blame you. The answer is at the other end of this link. If you’d like a hint, let me offer this, which you might be able to mouse over to reveal.

It is obvious once you’ve heard what the subset of vertices is, and it is not K5. The rest of this paragraph is padding so that the size of the spoiler doesn’t give matters away. And by the way I’d like to thank the world at large for doing such a great job not spoiling Star Wars: The Force Awakens. So why could you not give some similar consideration for Star Trek Beyond? I stopped reading TrekBBS for a month partly to avoid spoilers and then I started getting them everywhere I looked. Not cool, world.

Good luck!

Theorem Thursday: Kuratowski’s Reduction Theorem and Playing With Gas Pipelines


I’m doing that thing again. Sometimes for my A To Z posts I’ve used one essay to explain something, but also to introduce ideas and jargon that will come up later. Here, I want to do a theorem in graph theory. But I don’t want to overload one essay. So I’m going to do a theorem that’s interesting and neat in itself. But my real interest is in next week’s piece. This is just so recurring readers are ready for it.

The Kuratowski Reduction Theorem.

It starts with a children’s activity book-type puzzle. A lot of real mathematics does. In the traditional form, which I have only a faint memory of ever actually seeing, it’s posed as a problem of hooking up utilities to houses. There are three utilities, usually gas, water, and electricity. There are three houses. Is it possible to connect pipes from each of the three utility starting points to each of the three houses?

Of course. We do it all the time. We do this with many more utilities and many more than three buildings. The underground of Manhattan island is a network of pipes and tunnels and subways and roads and basements and buildings so complicated I doubt humans can understand it all. But the problem isn’t about that. The problem is about connecting these pipes all in the same plane, by drawing lines on a sheet of paper, without ever going into the third dimension. Nor making that little semicircular hop that denotes one line going over the other.

That’s a little harder. By that I mean it’s impossible. You can try, and it’s fun to try for a while. Draw three dots that are the houses and three dots that are the utilities. Try drawing the lines: from each utility, one line to each of the three houses, nine lines in all. The lines don’t have to be straight. They can have extra jogs, too. Soon you’ll hit on the possibility of lines that go way out, away from the dots, in the quest to avoid crossing over one another. It doesn’t matter. The attempt’s doomed to failure.

You’ll be sure of this by the twelfth attempt at finding an arrangement, at the latest. But that leaves open the possibility you weren’t clever enough to find an arrangement. To close that possibility, guess what theorem is sitting there ready to answer your question, just like I told you it would be?

This is a problem in graph theory. I’ve talked about graph theory before. It’s the field of mathematics most comfortable to people who like doodling. A graph is a bunch of points, which we call vertices, connected by arcs or lines, which we call edges. For this utilities problem, the houses and the utilities are the vertices. The pipes are the edges. An edge has to start at one vertex and end at a vertex. These may be the same vertex. We’re not judging. A vertex can have one edge connecting it to something else, or two edges, or three edges. It can have no edges. It can have any number of edges. We’re even allowed to have two or more edges connecting a vertex to the same vertex. My experience is we think of that last, forgetting that it is a possibility, but it’s there.

This is a “nonplanar” graph. This means you can’t draw it in a plane, like a sheet of paper, without having at least two edges crossing each other. We draw this on paper by making one of the lines wiggle in a little half-circle to show it’s going over, or to fade out and back in again to show it’s going under. There are planar graphs. Try the same problem with two houses and two utilities, for example. Or three houses and two utilities. Or three houses and three utilities, but one of the houses doesn’t get one of the utilities. Your choice which. It can be a little surprise to the homeowners.

This utilities graph is an example of a “bipartite” graph. The “bi” maybe suggests where things are going. You can always divide the vertices in a graph into two groups for the same reason you can always divide a pile of change into two piles. As long as you have at least two vertices or pieces of change. But a graph is bipartite if, once you’ve divided the vertices up, each edge has one vertex in the first set and the other vertex in the second. For the utilities graph these sets are easy to find. Each edge, each pipe, connects one utility to one house. There’s our division: vertices representing houses and vertices representing utilities.

This graph turns up a lot. Graph theorists have a shorthand way of writing it. It’s written as K3,3. This means it’s a bipartite graph. It has three vertices in the first set. There’s three vertices in the second set. There’s an edge connecting everything in the first set to everything in the second. Go ahead now and guess what K2,2 is. Or K3,5. The K — I’ve never heard what the K stands for, although while writing this essay I started to wonder if it’s for “Kuratowski”. That seems too organized, somehow.

Not every graph is bipartite. You might say “of course; why else would we have a name `bipartite’ if there weren’t such a thing as `non-bipartite’?” Well, we have the name “graph” for everything that’s a graph in graph theory. But there are non-bipartite graphs. They just don’t look like the utility-graph problem. Imagine three vertices, each of them connected to the other two. If you aren’t imagining a triangle you’re overthinking this. But this is a perfectly good non-bipartite graph. There’s no way to split the vertices into two sets with every edge connecting something in one set to something in the other. No, that isn’t inevitable once you have an odd number of vertices. Look above at the utilities problem where there’s three houses and two utilities. That’s nice and bipartite.
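
Deciding whether a graph is bipartite is a pleasant little algorithm in its own right: try to two-color the vertices and see whether you ever get stuck. Here’s a minimal sketch, with a made-up is_bipartite function.

```python
from collections import deque

def is_bipartite(edges):
    """Try to split the vertices into two sets so that every edge crosses
    between them, by two-coloring with a breadth-first search."""
    neighbors = {}
    for u, v in edges:
        neighbors.setdefault(u, set()).add(v)
        neighbors.setdefault(v, set()).add(u)
    color = {}
    for start in neighbors:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in neighbors[v]:
                if w not in color:
                    color[w] = 1 - color[v]
                    queue.append(w)
                elif color[w] == color[v]:
                    return False   # two connected vertices forced into one set
    return True

# The utilities graph is bipartite; the triangle is not.
houses, utilities = ("house1", "house2", "house3"), ("gas", "water", "electricity")
print(is_bipartite([(h, u) for h in houses for u in utilities]))  # True
print(is_bipartite([(1, 2), (2, 3), (3, 1)]))                     # False
```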

Non-bipartite graphs can be planar. The one with three vertices, each connected to each other, is. The one with four vertices, each vertex connected to each other, is also planar. But if you have five vertices, each connected to each other — well, that’s a lovely star-in-pentagon shape. It’s also not planar. There’s no connecting each vertex to each other one without some line crossing another or jumping out of the plane.

This shape, five vertices each connected to one another, shows up a lot too. And it has a shorthand notation. It’s K5. That is, it’s five points, all connected to each other. This makes it a “complete” graph: every set of two vertices has an edge connecting them. If you’ve leapt to the supposition that K3 is that triangle from before and K4 is that square with diagonals drawn in, you’re right. K6 is six vertices, each one connected to the five other vertices.

It may seem intolerably confusing that we have two kinds of graphs and describe them both with K and a subscript. But they’re completely different. The bipartite graphs have a subscript that’s two numbers separated by a comma: p, q. The p is the number of vertices in one of the subsets. The q is the number of vertices in the other subset. There’s an edge connecting every point in the p subset to every point in the q subset, and vice-versa. The points in the p subset aren’t connected to one another, though. And the points in the q subset aren’t connected to one another. That they aren’t is what keeps this from being a complete graph.

The others, K with a single number r in the subscript, are complete graphs, ones that aren’t bipartite. They have r vertices, and each vertex is connected to the (r – 1) other vertices. So there’s (1/2) times r times (r – 1) edges all told.
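
That edge-count formula is easy to spot-check by listing every pair of vertices; a throwaway loop along these lines does it.

```python
from itertools import combinations

# Spot-check the edge count of the complete graph on r vertices.
for r in range(2, 7):
    edges = list(combinations(range(r), 2))   # one edge for every pair of vertices
    print(r, len(edges), r * (r - 1) // 2)    # the two counts always agree
```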

Not every graph is either Kp, q or Kr. There’s a lot of kinds of graphs out there. Some are planar, some are not. But here’s an amazing thing, and it’s Kuratowski’s Reduction Theorem. If a graph is not planar, then it has to have, somewhere inside it, K3, 3 or K5 or both. Maybe several of them.
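
If you have the networkx library handy, you can watch the two culprits fail a planarity test; check_planarity reports whether a graph can be drawn without crossings. This is just a quick illustration, not any part of the theorem’s proof.

```python
import networkx as nx

k5 = nx.complete_graph(5)                  # five vertices, all connected
k33 = nx.complete_bipartite_graph(3, 3)    # the utilities graph
k4 = nx.complete_graph(4)                  # the square with its diagonals

print(nx.check_planarity(k5)[0])    # False
print(nx.check_planarity(k33)[0])   # False
print(nx.check_planarity(k4)[0])    # True: K4 can be drawn without crossings
```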

A graph that’s hidden within another is called a “subgraph”. This follows the same etymological reasoning that gives us “subsets” and “subgroups” and many other mathematics words beginning with “sub”. And these subgraphs turn up whenever you have a nonplanar graph. A subgraph uses some set of the vertices and edges of the original graph; it doesn’t need all of them. A nonplanar graph has, hiding inside it, a subgraph that boils down to K3, 3 or K5 or both.

Sometimes it’s easy to find one of these. K4, 4 obviously has K3, 3 inside it. Pick three of the four vertices on one side and three of the four vertices on the other, and look at the edges connecting them up. There’s your K3, 3. Or on the other side, K6 obviously has K5 inside it. Pick any five of the vertices inside K6 and the edges connecting those vertices. There’s your K5.

Sometimes it’s hard to find one of these. We can make a graph look more complicated without changing whether it’s planar or not. Take your K3, 3 again. Go to each edge and draw another dot, another vertex, inside it. Well, now it’s a graph that’s got fifteen vertices in it: the original six plus one for each of the nine edges. It’s not obvious whether this is bipartite still. (Play with it a while.) But it hasn’t become planar because of this. It won’t ever be.

This is because we can make graphs more complicated, or more simple, without changing whether they’re planar. The process is a lot like what we did last week with the five-color map theorem, making a map simpler until it was easy enough to color. Suppose there’s a little section of the graph that’s a vertex connected by one edge to a middle vertex connected by one edge to a third vertex. Do we actually need that middle vertex for anything? Besides padding our vertex count? Nah. We can drop that whole edge-middle vertex-edge sequence and replace it all with a single edge. And there’s other rules that let us turn a vertex-edge-vertex set into a single vertex. That sort of thing. It won’t change a planar graph to a nonplanar one or vice-versa.
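
Here’s a rough sketch of that simplifying move, suppressing the middle vertex of any edge-vertex-edge run. The smooth function is my own toy; it only handles the simple cases, but it’s enough to shrink a subdivided K3, 3 back down.

```python
def smooth(edges):
    """Repeatedly replace any edge, middle vertex, edge run with one edge.
    Suppressing a degree-two vertex this way never changes planarity."""
    edges = {frozenset(edge) for edge in edges}
    changed = True
    while changed:
        changed = False
        degree = {}
        for edge in edges:
            for v in edge:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d != 2:
                continue
            a, b = set().union(*(e for e in edges if v in e)) - {v}
            if frozenset((a, b)) not in edges:   # don't collapse onto an existing edge
                edges = {e for e in edges if v not in e} | {frozenset((a, b))}
                changed = True
                break
    return edges

# K3,3 with one edge subdivided (a new vertex "mid" drawn in the middle of the
# edge between 2 and 5) smooths back down to the nine edges of K3,3 itself.
subdivided = [(0, 3), (0, 4), (0, 5), (1, 3), (1, 4), (1, 5),
              (2, 3), (2, 4), (2, "mid"), ("mid", 5)]
print(len(smooth(subdivided)))   # 9
```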

So it can be hard to find the K3, 3 or the K5 hiding inside a nonplanar graph. A big enough graph can have so much going on it’s hard to find the pattern. But they’ll be there, in some group of five or six vertices with the right paths between them.

It would make a good activity puzzle, if you could explain what to look for to kids.

Reading the Comics, May 12, 2016: No Pictures Again Edition


I’ve hardly stopped reading the comics. I doubt I could even if I wanted to at this point. But all the comics this bunch are from GoComics, which as far as I’m aware doesn’t turn off access to comic strips after a couple of weeks. So I don’t quite feel justified including the images of the comics when you can just click links to them instead.

It feels a bit barren, I admit. I wonder if I shouldn’t commission some pictures so I have something for visual appeal. There’s people I know who do comics online. They might be able to think of something to go alongside every “Student has snarky answer for a word problem” strip.

Brian and Ron Boychuk’s The Chuckle Brothers for the 8th of May drops in an absolute zero joke. Absolute zero’s a neat concept. People became aware of it partly by simple extrapolation. Given that the volume of a gas drops as the temperature drops, is there a temperature at which the volume drops to zero? (It’s complicated. But that’s the thread I use to justify pointing out this strip here.) And people also expected there should be an absolute temperature scale because it seemed like we should be able to describe temperature without tying it to a particular method of measuring it. That is, it would be a temperature “absolute” in that it’s not explicitly tied to what’s convenient for Western Europeans in the 19th century to measure. That zero and that instrument-independent temperature idea get conflated, and reasonably so. Hasok Chang’s Inventing Temperature: Measurement and Scientific Progress is well worth the read for people who want to understand absolute temperature better.

Gene Weingarten, Dan Weingarten & David Clark’s Barney and Clyde for the 9th is another strip that seems like it might not belong here. While it’s true that accidents sometimes lead to great scientific discoveries, what has that to do with mathematics? Well, the first thread is that there are mathematical accidents and empirical discoveries. Many of them are computer-assisted. There is something that feels experimental about doing a simulation. Modern chaos theory, the study of deterministic yet unpredictable systems, has at its founding myth Edward Lorenz discovering that tiny changes in a crude weather simulation program mattered almost right away. (By founding myth I don’t mean that it didn’t happen. I just mean it’s become the stuff of mathematics legend.)

But there are other ways that “accidents” can be useful. Monte Carlo methods are often used to find extreme — maximum or minimum — solutions to complicated systems. These are good when it’s hard to find the best possible answer, but easy to compare whether one solution is better or worse than another. We can get close to the best possible answer by picking an answer at random, and fiddling with it at random. If we improve things, good: keep the change. You can see why this should get us pretty close to a best-possible answer soon enough. And if we make things worse then … usually, but not always, we reject the change. Sometimes we keep this “accident”. And that’s because if we only take improvements we might get caught at a local extreme. An even better extreme might be available, but only by going down an initially unpromising direction. So it’s worth allowing for some “mistakes”.
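
Here’s a minimal sketch of that idea, a made-up noisy_minimize function that jiggles an answer at random, always keeps improvements, and sometimes keeps a worse answer so it can escape a merely local extreme. It’s the germ of simulated annealing, not anybody’s production optimizer.

```python
import math
import random

def noisy_minimize(f, start, steps=20000, temperature=1.0):
    """Jiggle an answer at random; keep improvements, and keep the occasional
    worse answer so the search can climb out of a local minimum."""
    x = best = start
    for step in range(1, steps + 1):
        candidate = x + random.gauss(0.0, 0.5)
        worse_by = f(candidate) - f(x)
        # The tolerance for worse answers shrinks as the run goes on.
        temp = temperature * (1.0 - step / steps) + 1e-9
        if worse_by <= 0 or random.random() < math.exp(-worse_by / temp):
            x = candidate
        if f(x) < f(best):
            best = x
    return best

# A wiggly function with several local minima.  Starting from x = 4, an
# improvements-only search tends to get stuck; this version usually finds
# the global minimum, a little below x = 0.
wiggly = lambda x: x * x + 3.0 * math.sin(5.0 * x)
print(noisy_minimize(wiggly, start=4.0))
```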

Mark Anderson’s Andertoons for the 10th of May is some wordplay on volume. The volume of boxes is an easy formula to remember, and maybe it’s a boring one. It’s enough, though. You can work out the volume of any shape using just the volume of boxes. But you do need integral calculus to tell how to do it. So maybe it’s easier to memorize the formula for the volumes of a pyramid and a sphere.

Berkeley Breathed’s Bloom County for the 10th of May is a rerun from 1981. And it uses a legitimate bit of mathematics for Milo to insult Freida. He calls her a “log 10 times 10 to the derivative of 10,000”. The “log 10” is going to be 1. A reference to logarithm, without a base attached, means either base ten or base e. “log” by itself used to invariably mean base ten, back when logarithms were needed to do ordinary multiplication and division and exponentiation. Now that we have calculators for this mathematicians have started reclaiming “log” to mean the natural logarithm, base e, which is normally written “ln”, but that’s still an eccentric use. Anyway, the logarithm base ten of ten is 1: 10 is equal to 10 to the first power.

10 to the derivative of 10,000 … well, that’s 10 raised to whatever number “the derivative of 10,000” is. Derivatives take us into calculus. They describe how much a quantity changes as one or more variables change. 10,000 is just a number; it doesn’t change. It’s called a “constant”, in another bit of mathematics lingo that reminds us not all mathematics lingo is hard to understand. Since it doesn’t change, its derivative is zero. As anything else changes, the constant 10,000 does not. So the derivative of 10,000 is zero. 10 to the zeroth power is 1.

So, one times one is … one. And it’s rather neat that kids Milo’s age understand derivatives well enough to calculate that.
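
For anyone who wants the computer’s opinion, here’s a throwaway check of Milo’s insult, assuming the sympy library is available to take the derivative.

```python
import math
import sympy

x = sympy.Symbol("x")
log_part = math.log10(10)                     # log base ten of 10 is 1
derivative_part = sympy.diff(10000, x)        # the derivative of a constant is 0
print(log_part * 10 ** int(derivative_part))  # 1.0 * 10**0 = 1.0
```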

Ruben Bolling’s Super-Fun-Pak Comix rerun for the 10th happens to have a bit of graph theory in it. One of Uncle Cap’n’s Puzzle Pontoons is a challenge to trace out a figure without retracing a line or lifting your pencil. You can’t, not this figure. One of the first things you learn in graph theory is how to tell, and why. And thanks to a Twitter request I’m figuring to describe some of that for the upcoming Theorem Thursdays project. Watch this space!

Charles Schulz’s Peanuts Begins for the 11th, a rerun from the 6th of February, 1952, is cute enough. It’s one of those jokes about how a problem seems intractable until you’ve found the right way to describe it. I can’t fault Charlie Brown’s thinking here. Figuring out a way to make the problems familiar and easy is great.

Shaenon K Garrity and Jeffrey C Wells’s Skin Horse for the 12th is a “see, we use mathematics in the real world” joke. In this case it’s triangles and triangulation. That’s probably the part of geometry it’s easiest to demonstrate a real-world use for, and that makes me realize I don’t remember mathematics class making use of that. I remember it coming up some, particularly in what must have been science class when we built and launched model rockets. We used a measure of how high an angle the rocket reached, and knowledge of how far the observing station was from the launchpad. But that wasn’t mathematics class for some reason, which is peculiar.

A Leap Day 2016 Mathematics A To Z: Matrix


I get to start this week with another request. Today’s Leap Day Mathematics A To Z term is a famous one, and one that I remember terrifying me in the earliest days of high school. The request comes from Gaurish, chief author of the Gaurish4Math blog.

Matrix.

Lewis Carroll didn’t like the matrix. Well, Charles Dodgson, anyway. And it isn’t that he disliked matrices particularly. He believed it was a bad use of a word. “Surely,” he wrote, “[ matrix ] means rather the mould, or form, into which algebraical quantities may be introduced, than an actual assemblage of such quantities”. He might have had etymology on his side. The word meant the place where something was developed, the source of something else. History has outvoted him, and his preferred “block”. The first mathematicians to use the word “matrix” were interested in things derived from the matrix. So for them, the matrix was the source of something else.

What we mean by a matrix is a collection of some number of rows and columns. At each spot where a row and a column cross there is some mathematical entity. We call this an element. Elements are almost always real numbers. When they’re not real numbers they’re complex-valued numbers. (I’m sure somebody, somewhere has created matrices with something else as elements. You’ll never see these freaks.)

Matrices work a lot like vectors do. We can add them together. We can multiply them by real- or complex-valued numbers, called scalars. But we can do other things with them. We can define multiplication, at least sometimes. The definition looks like a lot of work, but it represents something useful that way. And for square matrices, ones with equal numbers of rows and columns, we can find other useful stuff. We give that stuff wonderful names like traces and determinants and eigenvalues and eigenvectors and such.

One of the big uses of matrices is to represent a mapping. A matrix can describe how points in a domain map to points in a range. Properly, a matrix made up of real numbers can only describe what are called linear mappings. These are ones that turn the domain into the range by stretching or squeezing down or rotating the whole domain the same amount. A mapping might follow different rules in different regions, but that’s all right. We can write a matrix that approximates the original mapping, at least in some areas. We do this in the same way, and for pretty much the same reason, we can approximate a real and complicated curve with a bunch of straight lines. Or the way we can approximate a complicated surface with a bunch of triangular plates.

We can compound mappings. That is, we can start with a domain and a mapping, and find the image of that domain. We can then use a mapping again and find the image of the image of that domain. The matrix that describes this mapping-of-a-mapping is the one you get by multiplying the matrix of the first mapping and the matrix of the second mapping together. This is why we define matrix multiplication the odd way we do. Mappings are that useful, and matrices are that tied to them.
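
A small numpy sketch of that point: applying a rotation and then a stretch to a point gives the same answer as multiplying the two matrices first and applying the product once. The particular matrices are just examples.

```python
import numpy as np

theta = np.pi / 2
rotate = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])   # quarter-turn of the plane
stretch = np.array([[2.0, 0.0],
                    [0.0, 1.0]])                       # double everything in x

point = np.array([1.0, 0.0])

one_then_the_other = stretch @ (rotate @ point)    # rotate first, then stretch
product_applied_once = (stretch @ rotate) @ point  # multiply matrices, apply once

print(np.allclose(one_then_the_other, product_applied_once))  # True
```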

I wrote about some of the uses of matrices in a Set Tour essay. That was based on a use of matrices in physics. We can describe the changing of a physical system with a mapping. And we can understand equilibriums, states where a system doesn’t change, by looking at the matrix that approximates what the mapping does near but not exactly on the equilibrium.

But there are other uses of matrices. Many of them have nothing to do with mappings or physical systems or anything. For example, we have graph theory. A graph, here, means a bunch of points, “vertices”, connected by curves, “edges”. Many interesting properties of graphs depend on how many other vertices each vertex is connected to. And this is well-represented by a matrix. Index your vertices. Then create a matrix. If vertex number 1 connects to vertex number 2, put a ‘1’ in the first row, second column. If vertex number 1 connects to vertex number 3, put a ‘1’ in the first row, third column. If vertex number 2 isn’t connected to vertex number 3, put a ‘0’ in the second row, third column. And so on.

We don’t have to use ones and zeroes. A “network” is a kind of graph where there’s some cost associated with each edge. We can put that cost, that number, into the matrix. Studying the matrix of a graph or network can tell us things that aren’t obvious from looking at the drawing.
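
Building that matrix is only a few lines of code. This adjacency_matrix function is a made-up illustration; the edges here describe a little triangle with one extra vertex hanging off it.

```python
def adjacency_matrix(vertex_count, edges, cost=None):
    """Build the matrix of a graph: a 1 (or a cost, for a network) wherever two
    indexed vertices are connected, and a 0 everywhere else."""
    matrix = [[0] * vertex_count for _ in range(vertex_count)]
    for i, j in edges:
        value = 1 if cost is None else cost[(i, j)]
        matrix[i][j] = value
        matrix[j][i] = value   # the connection runs both ways
    return matrix

# A triangle on vertices 0, 1, 2 with vertex 3 hanging off vertex 2.
for row in adjacency_matrix(4, [(0, 1), (0, 2), (1, 2), (2, 3)]):
    print(row)
# [0, 1, 1, 0]
# [1, 0, 1, 0]
# [1, 1, 0, 1]
# [0, 0, 1, 0]
```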

How November 2015 Treated My Mathematics Blog


So after a couple dismal months my ratings appear to be up. The number of page views and of visitors, in fact, seem to be at all-time highs. At least they’re at highs for the past twelve months. I would like to think that the depressed readings of September and October — 708 page views and 381 visitors; 733 page views and 405 visitors, respectively — are behind me. November saw 1,215 page views and 519 visitors.

Some of this is an accident. My humor blog got a tidal wave of readers courtesy The Onion AV Club. The AV Club wrote up a piece about the sad end of the comic strip Apartment 3-G, and I’ve written a shocking amount about the soap strip. They mentioned me. And as I’ve used my comic strip posts there to mention my Reading the Comics series here, some curious people followed along.

That said, I’m not sure how many of those readers were AV Club curiosity-seekers. A crude estimate suggests somewhere a little over two hundred were. So even discounting that, something near a thousand regular-style readers came in and looked around, and that’s nice to see. It’s back up to about where the readership was before the mysterious dropoff, in July, that many suspect results from mobile devices being incorrectly read.

For the roster of countries, well, the top was the United States as always, with some 837 page views. The United Kingdom came in with 62. Canada appears third at 50 views, and the Philippines next at 20. Singapore and Australia tie at 19.

Single-reader countries this past month were Algeria, Argentina, Belgium, Egypt, Finland, Israel, Jamaica, Malaysia, Mexico, Nigeria, Puerto Rico, Romania, Thailand, Turkey, and Vietnam. Belgium, Nigeria, and Thailand are repeats from October. No country’s on a three-month streak.

The Reading the Comics posts are as ever the most popular group and I’ve bundled them under the one category tag. But my Ramsey Theory question turned out to be slightly more popular than any of them in November. After grouping together all the comics posts, the most popular articles look like:

  1. Why Was Someone Upset With Ramsey Theory In 1979? a question about a dimly remembered Dear Abby-class question.
  2. Reading the Comics, an ongoing series.
  3. How October Treated My Mathematics Blog, and yes, I risk an endless loop by mentioning this here.
  4. How Many Trapezoids I Can Draw and goodness it’s nice to see the trapezoids turning up again.
  5. How Antifreeze Works, one of my little pointers to someone else’s interesting writing.

Nothing really dominated my search term queries this month. Some of the things that turned up were:

  • illustration of electromagnetic wave theory scientist comics strip
  • james clerk maxwell comics (I’m not sure I have any of these; this suggests I ought to be finding some.)
  • origin is the gateway to your entire gaming universe. (I’ve had this explained to me, but I forget what it means.)
  • places 1975 miles from charlotte nc (I know of none specifically 1,975 miles away.)
  • if i got 70 percent in all exams what grade do i need on final to pass course? (This I can help with.)

December starts with my blog here at 30,298 page views, and with 543 WordPress followers. I expect it’ll be overtaken in page views by my humor blog sometime soon.

A Summer 2015 Mathematics A To Z: vertex (graph theory)


Vertex.

I mentioned graph theory several weeks back, when this Mathematics A To Z project was barely begun. It’s a fun field. It’s a great one for doodlers, and it’s one that has surprising links to other problems.

Graph theory divides the conceptual universe into “things that could be connected” and “ways they are connected”. The “things that could be connected” we call vertices. The “ways they are connected” are the edges. Vertices might have an obvious physical interpretation. They might represent the corners of a cube or a pyramid or some other common shape. That, I imagine, is why these things were ever called vertices. A diagram of a graph can look a lot like a drawing of a solid object. It doesn’t have to, though. Many graphs will have vertices and edges connected in ways that no solid object could have. They will usually be ones that you could build in wireframe. Use gumdrops for the vertices and strands of wire or plastic or pencils for the edges.

Vertices might stand in for the houses that need to be connected to sources of water and electricity and Internet. They might be the way we represent devices connected on the Internet. They might represent all the area within a state’s boundaries. The Königsberg bridge problem, held up as the ancestor of graph theory, has its vertices represent the islands and river banks one gets to by bridges. Vertices are, as I say, the things that might be connected.

“Things that might be connected” is a broader category than you might imagine. For example, an important practical use of mathematics is making error-detecting and error-correcting codes. This is how you might send a message that gets garbled — in sending, in transmitting, or in reception — and still understand what was meant. You can model error-detecting or correcting codes as a graph. In this case every possible message is a vertex. Edges connect together the messages that could plausibly be misinterpreted as one another. How many edges you draw — how much misunderstanding you allow for — depends on how many errors you want to be able to detect, or to correct.
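
As a small sketch of that modelling, here’s a made-up confusion_graph function: every message is a vertex, and an edge joins any two messages whose Hamming distance is within the number of garbled bits we’re worried about.

```python
def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def confusion_graph(messages, threshold):
    """Each message is a vertex; an edge joins any two messages close enough
    (in Hamming distance) to be mistaken for one another."""
    messages = list(messages)
    return [(a, b)
            for i, a in enumerate(messages)
            for b in messages[i + 1:]
            if hamming(a, b) <= threshold]

# The two words of the repetition code are three flips apart, so they share no
# edge at this threshold.  Raw two-bit messages, by contrast, are badly tangled.
print(confusion_graph(["000", "111"], threshold=1))            # []
print(confusion_graph(["00", "01", "10", "11"], threshold=1))  # four edges
```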

When we draw a vertex on paper or a chalkboard or the like we usually draw it as a + or an x or maybe a *. How much we draw depends on how afraid we are of losing sight of it as we keep working. In publication it’s often drawn as a simple dot. This is because printers are able to draw dots that don’t get muddied up by edges being drawn in or eraser marks removing edges.

A Summer 2015 Mathematics A To Z: graph


Graph. (As in Graph Theory)

When I started this A to Z I figured it would be a nice string of quick, two-to-four paragraph compositions. So far each one has been a monster post instead. I’m hoping to get to some easier ones. For today I mean to talk about a graph, as in graph theory. That’s not the kind of graph that’s a plot of some function or a pie chart or a labelled map or something like that.

This kind of graph we do study as pictures, though. Specifically, they’re pictures with two essential pieces: a bunch of points or dots and a bunch of curves connecting dots. The dots we call vertices. The curves we call edges. My mental model tends to be of a bunch of points in a pegboard connected by wire or string. That might not work for you, but the idea of connecting things is what graphs, and graph theory, are good for studying.

Kenneth Appel and Colored Maps


Word’s come through mathematics circles about the death of Kenneth Ira Appel, who along with Wolfgang Haken did one of those things every mathematically-inclined person really wishes to do: solve one of the long-running unsolved problems of mathematics. Even better, he solved one of those accessible problems. There are a lot of great unsolved problems that take a couple paragraphs just to set up for the lay audience (who then will wonder what use the problem is, as if that were the measure of interesting); Appel and Haken’s was the Four Color Theorem, which people can understand once they’ve used crayons and coloring books (even if they wonder whether it’s useful for anyone besides Hammond).

It was, by everything I’ve read, a controversial proof at the time, although by the time I was an undergraduate the controversy had faded the way controversial stuff doesn’t seem that exciting decades on. The proximate controversy was that much of the proof was worked out by computer, which is the sort of thing that naturally alarms people whose jobs are to hand-carve proofs using coffee and scraps of lumber. The worry about that seems to have faded as more people get to use computers and find they’re not putting the proof-carvers out of work to any great extent, and as proof-checking software gets up to the task of doing what we would hope.

Still, the proof, right as it probably is, probably offers philosophers of mathematics a great example for figuring out just what is meant by a “proof”. The word implies that a proof is an argument which convinces a person of some proposition. But the Four Color Theorem proof is … well, according to Appel and Haken, 50 pages of text and diagrams, with 85 pages containing an additional 2,500 diagrams, and 400 microfiche pages with additional diagrams of verifications of claims made in the main text. I’ll never read all that, much less understand all that; it’s probably fair to say very few people ever will.

So I couldn’t, honestly, say it was proved to me. But that’s hardly the standard for saying whether something is proved. If it were, then every calculus class would produce the discovery that just about none of calculus has been proved, and that this whole “infinite series” thing sounds like it’s all doubletalk made up on the spot. And yet, we could imagine — at least, I could imagine — a day when none of the people who wrote the proof, or verified it for publication, or have verified it since then, are still alive. At that point, would the theorem still be proved?

(Well, yes: the original proof has been improved a bit, although it’s still a monstrously large one. And Neil Robertson, Daniel P Sanders, Paul Seymour, and Robin Thomas published a proof, similar in spirit but rather smaller, and have been distributing the tools needed to check their work; I can’t imagine a time when nobody alive has done, or at least could do, the checking work.)

I’m treading into the philosophy of mathematics, and I realize my naivete about questions like what constitutes a proof are painful to anyone who really studies the field. I apologize for inflicting that pain.

Before Drawing a Graph


I want to talk about drawing graphs, specifically, drawing curves on graphs. We know roughly what’s meant by that: it’s about wiggly shapes with a faint rectangular grid, usually in grey or maybe drawn in dotted lines, behind them. Sometimes the wiggly shapes will be in bright colors, to clarify a complicated figure or to justify printing the textbook in color. Those graphs.

I clarify because there is a type of math called graph theory in which, yes, you might draw graphs, but there what’s meant by a graph is just any sort of group of points, called vertices, connected by lines or curves. It makes great sense as a name, but it’s not what someone who talks about drawing a graph means, up until graph theory gets into consideration. Those graphs are fun, particularly because they’re insensitive to exactly where the vertices are, so you get to exercise some artistic talent instead of figuring out whatever you were trying to prove in the problem.

The ordinary kind of graphs offer some wonderful advantages. The obvious one is that they’re pictures. People can very often understand a picture of something much faster than they can understand other sorts of descriptions. This probably doesn’t need any demonstration; if it does, try looking at a map of the boundaries of South Carolina versus reading a description of its boundaries. Some problems are much easier to work out if we can approach them as geometric problems. (And I admit feeling a particular delight when I can prove a problem geometrically; it feels cleverer.)
