Some News on the Travelling Salesman Problem

I need to get back to just talking about mathematics problems some. Happily, some news came across my desk. It’s about the Travelling Salesman Problem. This is one of those classic problems, simple to state and obviously interesting, yet terribly hard to solve. How do you draw a path connecting a large number of points with the shortest total length? You can recast what you’re doing: make the path that takes the shortest time, or that has the lowest cost, the least energy, the greatest profit, whatever. Optimization problems are very alike, and useful.

There aren’t many good answers to the problem, though. Basically: test out every possible combination and pick the best one of the lot. For large enough problems this is hopeless. For small enough problems, though? You can work something out. So there’s this report of researchers, led by Professor Takayuki Kawahara at the Tokyo University of Science. They’ve developed an integrated circuit with really good performance for, they believe, up to 22 ‘cities’. That 22 is a breakthrough tells you something of how hard these problems are.
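The test-every-combination approach is easy to sketch. Here it is in a few lines of Python, with made-up city coordinates. Five cities is trivial; but fixing the starting point still leaves (n-1)! tours to check, which is why 22 cities on a chip is news.

```python
# Brute-force Travelling Salesman: try every ordering of the cities and
# keep the shortest round trip. The coordinates are made up.
from itertools import permutations
from math import dist, inf

cities = [(0, 0), (1, 5), (4, 1), (6, 4), (2, 3)]

def tour_length(order):
    # Total distance of the round trip visiting cities in this order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

best_order, best_length = None, inf
for rest in permutations(range(1, len(cities))):
    order = (0,) + rest        # fix the starting city to avoid rotations
    if tour_length(order) < best_length:
        best_order, best_length = order, tour_length(order)

print(best_order, round(best_length, 3))
```

Swap in 22 cities and the loop runs 21! times, which is roughly 5 × 10^19 tours. That is the wall the researchers’ circuit is trying to get around.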

I’m unclear, from the press release, just how the system works. (I’m also unsure, from reading the press releases, whether they have actually used this for 22 cities or whether they have good reason to think it will work as hoped for. I may be over-parsing.) There’s a description of using “spin cells” and it happens I know about something named spin cells. And that they are used in optimization problems. What I do not know is whether my spin cells are the spin cells being used here. If I find out, I’ll share.

Reading the Comics, November 30, 2019: Big Embarrassing Mistake Edition

See if you can spot where I discover my having made a big embarrassing mistake. It’s fun! For people who aren’t me!

Lincoln Peirce’s Big Nate for the 24th has boy-genius Peter drawing “electromagnetic vortex flow patterns”. Nate, reasonably, sees this sort of thing as completely abstract art. I’m not precisely sure what Peirce means by “electromagnetic vortex flow”. These are all terms that mathematicians, and mathematical physicists, would be interested in. That specific combination, though, I can find only a few references for. It seems to serve as a sensing tool, though.

Nate: 'Ah, now that's what I'm talking about! A boy, paper, and crayons, the simple pleasures. I know you're a genius, Peter, but it's great to see you just being a kid for a change! And you're really letting it rip! You're not trying to make something that looks real! It's just colors and shapes and --- ' Peter: 'This is a diagram of electromagnetic vortex flow patterns.' Nate: 'I knew that.' Peter: 'Hand me the turquoise.'
Lincoln Peirce’s Big Nate for the 24th of November, 2019. So, did you know I’ve been spelling Lincoln Peirce’s name wrong all this time? Yeah, I didn’t realize either. But look at past essays with Big Nate discussed in them and you’ll see. I’m sorry for this and embarrassed to have done such a lousy job looking at the words in front of me for so long.

No matter. Electromagnetic fields are interesting to a mathematical physicist, and so to mathematicians. Often a field like this can be represented as a system of vortices, too, points around which something swirls and which combine into the field that we observe. This can be a way to turn a continuous field into a set of discrete particles, which we might have better tools to study. And to draw what electromagnetic fields look like — even in a very rough form — can be a great help to understanding what they will do, and why. They also can be beautiful in ways that communicate even to those who don’t understand the thing modelled.

Megan Dong’s Sketchshark Comics for the 25th is a joke based on the reputation of the Golden Ratio. This is the idea that the ratio, 1:\frac{1}{2}\left(1 + \sqrt{5}\right) (roughly 1:1.6), is somehow a uniquely beautiful composition. You may sometimes see memes with some nice-looking animal and various boxes superimposed over it, possibly along with a spiral. The rectangles have the Golden Ratio ratio of width to height. And the ratio is kind of attractive since \frac{1}{2}\left(1 + \sqrt{5}\right) is about 1.618, and 1 \div \frac{1}{2}\left(1 + \sqrt{5}\right) is about 0.618. It’s a cute pattern, and there are other similar cute patterns. There is a school of thought that this is somehow transcendently beautiful, though.
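That matching-digits pattern is easy to check for yourself. A couple lines of Python, with the assertions restating the algebraic fact behind it:

```python
# The Golden Ratio phi = (1 + sqrt(5))/2 is about 1.618; its reciprocal
# is about 0.618, the same decimal digits. That happens because phi
# solves x*x = x + 1, which rearranges to 1/phi = phi - 1.
from math import sqrt, isclose

phi = (1 + sqrt(5)) / 2
print(round(phi, 3), round(1 / phi, 3))   # 1.618 0.618

assert isclose(1 / phi, phi - 1)
assert isclose(phi * phi, phi + 1)
```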

Man, shooing off a woman holding a cat: 'I don't like cute animals. I like BEAUTIFUL animals.' In front of portraits of an eagle, lion, and whale: 'Animals with golden-ratio proportions and nice bone-structure.'
Megan Dong’s Sketchshark Comics for the 25th of November, 2019. So far I’m aware I have never discussed this comic before, making this another new-tag day. This and future essays with Sketchshark Comics in them should be at this link.

It’s all bunk. People may find stuff that’s about one-and-a-half times as tall as it is wide, or as wide as it is tall, attractive. But experiments show that they aren’t more likely to find something with Golden Ratio proportions more attractive than, say, something with 1:1.5 proportions, or 1:1.8 , or even to be particularly consistent about what they like. You might be able to find (say) that the ratio of an eagle’s body length to the wing span is something close to 1:1.6 . But any real-world thing has a lot of things you can measure. It would be surprising if you couldn’t find something near enough a ratio you liked. The guy is being ridiculous.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 26th builds on the idea that everyone could be matched to a suitable partner, given a proper sorting algorithm. I am skeptical of any “simple algorithm” being any good for handling complex human interactions such as marriage. But let’s suppose such an algorithm could exist.

Mathematician: 'Thanks to computer science we no longer need dating. We can produce perfect marriages with simple algorithms.' Assistant: 'ooh!' [ AND SO ] Date-o-Tron, to the mathematician and her assistant: 'There are many women you'd be happier with, but they're already with people whom they prefer to you. Thus, you will be paired with your 4,291th favorite choice. We have a stable equilibrium.' Mathematician: 'Hooray!'
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 26th of November, 2019. Someday I’ll go a week without an essay mentioning Saturday Morning Breakfast Cereal, but this is not that day. Or week. The phrasing gets a little necessarily awkward here.

This turns matchmaking into a problem of linear programming. Arguably it always was. But the best possible matches for society might not be — likely will not be — the matches everyone figures to be their first choices. Or even top several choices. For one, our desired choices are not necessarily the ones that would fit us best. And as the punch line of the comic implies, what might be the globally best solution, the one that has the greatest number of people matched with their best-fit partners, would require some unlucky souls to be in lousy fits.

Although, while I believe that’s the intention of the comic strip, it’s not quite what’s on panel. The assistant is told he’ll be matched with his 4,291th favorite choice, and I admit having to go that far down the favorites list is demoralizing. But there are about 7.7 billion people in the world. This is someone who’ll be a happier match with him than 7,699,995,709 people would be. That’s a pretty good record, really. You can fairly ask how much worse that is than the person who “merely” makes him happier than 7,699,997,328 people would.
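The Date-o-Tron’s “stable equilibrium” is the stable matching problem, and the classic way to get one is the Gale-Shapley algorithm. A minimal sketch, with three-person preference lists I invented for illustration (nothing here comes from the strip):

```python
# Gale-Shapley stable matching: proposers work down their preference
# lists; each reviewer holds onto the best offer seen so far. The result
# is stable: no proposer and reviewer both prefer each other to their
# assigned partners. Preference lists are invented for illustration.
proposers = {'a': ['x', 'y', 'z'], 'b': ['y', 'x', 'z'], 'c': ['y', 'z', 'x']}
reviewers = {'x': ['b', 'a', 'c'], 'y': ['c', 'b', 'a'], 'z': ['a', 'b', 'c']}

def rank(reviewer, proposer):
    # Lower rank means the reviewer likes this proposer more.
    return reviewers[reviewer].index(proposer)

match = {}                                # reviewer -> proposer
next_choice = {p: 0 for p in proposers}   # how far down each list we are
free = list(proposers)
while free:
    p = free.pop()
    r = proposers[p][next_choice[p]]      # best reviewer p hasn't tried yet
    next_choice[p] += 1
    if r not in match:
        match[r] = p
    elif rank(r, p) < rank(r, match[r]):
        free.append(match[r])             # the current partner gets bumped
        match[r] = p
    else:
        free.append(p)                    # rejected; p tries the next choice

print(match)   # {'y': 'c', 'x': 'b', 'z': 'a'}
```

Notice that proposer ‘a’ ends up with his last choice, exactly the comic’s joke: everyone he’d prefer is with someone they prefer to him.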

And that’s all I have for last week. Sunday I hope to publish another Reading the Comics post, one way or another. And later this week I’ll have closing thoughts on the Fall 2019 A-to-Z sequence. And I do sincerely apologize to Lincoln Peirce for getting his name wrong, and this on a comic strip I’ve been reading since about 1991.

My 2019 Mathematics A To Z: Linear Programming

Today’s A To Z term is another proposed by @aajohannas.

I couldn’t find a place to fit this in the essay proper. But it’s too good to leave out. The simplex method, discussed within, traces to George Dantzig. He’d been developing planning methods for the US Army Air Force during the Second World War. Dantzig is a person you have heard about, if you’ve heard any mathematical urban legends. In 1939 he was late to Jerzy Neyman’s class. He took two statistics problems on the board to be homework. He found them “harder than usual”, but solved them in a couple days and turned in the late homework hoping Neyman would be understanding. They weren’t homework. They were examples of famously unsolved problems. Within weeks Neyman had written one of the solutions up for publication. When he needed a thesis topic Neyman advised him to just put what he already had in a binder. It’s the stuff every grad student dreams of. The story mutated. It picked up some glurge to become a narrative about positive thinking. And mutated further, into the movie Good Will Hunting.

The story gets better, for my purposes. The simplex method can be understood as one of those homework problems. Dantzig describes some of that in this 1987 essay about the origins of the method. The essay is worth reading to understand some of how people come to think of good mathematics.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Linear Programming.

Every three days one of the comic strips I read has the elderly main character talk about how they never used algebra. This is my hyperbole. But mathematics has got the reputation for being difficult and inapplicable to everyday life. We’ll concede using arithmetic, when we get angry at the fast food cashier who hands back our two pennies before giving change for our $6.77 hummus wrap. But otherwise, who knows what an elliptic integral is, and whether it’s working properly?

Linear programming does not have this problem. In part, this is because it lacks a reputation. But those who have heard of it acknowledge it as immensely practical mathematics. It is about something a particular kind of human always finds compelling. That is how to do a thing best.

There are several kinds of “best”. There is doing a thing in as little time as possible. Or for as little effort as possible. For the greatest profit. For the highest capacity. For the best score. For the least risk. The goals have a thousand names, none of which we need to know. They all mean the same thing. They mean “the thing we wish to optimize”. To optimize has two directions, which are one. The optimum is either the maximum or the minimum. To be good at finding a maximum is to be good at finding a minimum.

It’s obvious why we call this “programming”: obviously, we leave the work of finding answers to a computer. It’s a spurious reason, though. The “programming” here comes from an independent sense of the word. It means more about finding a plan. Think of “programming” a night’s entertainment, so that every performer gets their turn, all scene changes have time to be done, you don’t put two comedians right after each other, and you accommodate the performer who has to leave early and the performer who’ll get in an hour late. Linear programming problems are often about finding how to do as well as possible given various priorities.

All right. At least the “linear” part is obvious. A mathematics problem is “linear” when it’s something we can reasonably expect to solve. This is not the technical meaning. Technically what it means is we’re looking at a function something like:

ax + by + cz

Here, x, y, and z are the independent variables. We don’t know their values but wish to. a, b, and c are coefficients. These values are set to some constant for the problem, but they might be something else for other problems. They’re allowed to be positive or negative or even zero. If a coefficient is zero, then the matching variable doesn’t affect matters at all. The corresponding value can be anything at all, within the constraints.

I’ve written this for three variables, as an example and because ‘x’ and ‘y’ and ‘z’ are comfortable, familiar variables. There can be fewer. There can be more. There almost always are. Two- and three-variable problems will teach you how to do this kind of problem. They’re too simple to be interesting, usually. To avoid committing to a particular number of variables we can use indices: x_j for values of j from 1 up to N. Or we can bundle all these values together into a vector, and write everything as \vec{x} . This has a particular advantage, since we can write the coefficients as a vector too. Then we use the notation of linear algebra, and write that we hope to maximize the value of:

\vec{c}^T \vec{x}

(The superscript T means “transpose”. As a linear algebra problem we’d usually think of writing a vector as a tall column of things. By transposing that we write a long row of things. By transposing we can use the notation of matrix multiplication.)

This is the objective function. Objective here in the sense of goal; it’s the thing we want to find the best possible value of.

We have constraints. These represent limits on the variables. The variables are always things that come in limited supply. There’s no allocating more money than the budget allows, nor putting more people on staff than work for the company. Often these constraints interact. Perhaps not only is there only so much staff, but no one person can work more than a set number of days in a row. Something like that. That’s all right. We can write all these constraints as a matrix equation. An inequality, properly. We can bundle all the constraints into a big matrix named A, and demand:

A\vec{x} \le \vec{b}

Also, traditionally, we suppose that every component of \vec{x} is non-negative. That is, positive, or at lowest, zero. This reflects the field’s core problems of figuring how to allocate resources. There’s no allocating less than zero of something.

But we need some bounds. This is easiest to see with a two-dimensional problem. Try it yourself: draw a pair of axes on a sheet of paper. Now put in a constraint. Doesn’t matter what. The constraint’s edge is a straight line, which you can draw at any position and any angle you like. This includes horizontal and vertical. Shade in one side of the constraint. Whatever you shade in is the “feasible region”, the set of values allowed under the constraint. Now draw in another line, another constraint. Shade in one side or the other of that. Draw in yet another line, another constraint. Shade in one side or another of that. The “feasible region” is whatever points have taken on all these shades. If you were lucky, this is a bounded region, a triangle. If you weren’t lucky, it’s not bounded. It’s maybe got some corners but goes off to the edge of the page where you stopped shading things in.

So adding that every component of \vec{x} is at least as big as zero is a backstop. It means we’ll usually get a feasible region with a finite volume. What was the last project you worked on that had no upper limits for anything, just minimums you had to satisfy? Anyway, if you know you need something to be allowed less than zero, go ahead. We’ll work it out. The important thing is there’s finite bounds on all the variables.

I didn’t see the bounds you drew. It’s possible you have a triangle with all three shades inside. But it’s also possible you picked the other sides to shade, and you have an annulus, with no region having more than two shades in it. This can happen. It means it’s impossible to satisfy all the constraints at once. At least one of them has to give. You may be reminded of the sign taped to the wall of your mechanic’s about picking two of good-fast-cheap.

But impossibility is at least easy. What if there is a feasible region?

Well, we have reason to hope. The optimum has to be somewhere inside the region, that’s clear enough. And it even has to be on the edge of the region. If you’re not seeing why, think of a simple example, like, finding the maximum of 2x + y , inside the square where x is between 0 and 2 and y is between 0 and 3. Suppose you had a putative maximum on the inside, like, where x was 1 and y was 2. What happens if you increase x a tiny bit? If you increase y by twice that? No, it’s only on the edges you can get a maximum that can’t be locally bettered. And only on the corners of the edges, at that.

(This doesn’t prove the case. But it is what the proof gets at.)

So the problem sounds simple then! We just have to try out all the vertices and pick the maximum (or minimum) from them all.

OK, and here’s where we start getting into trouble. With two variables and, like, three constraints? That’s easy enough. That’s like five points to evaluate? We can do that.
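To make that concrete, here’s the little maximize-2x + y example from a few paragraphs back, done the exhaustive way in Python: write each constraint as ax + by ≤ c, intersect the constraint boundaries two at a time to get candidate corners, throw out the infeasible ones, and evaluate the objective at what’s left.

```python
# Evaluate the objective at every corner of the feasible region.
# Constraints are written as a*x + b*y <= c; candidate corners are the
# pairwise intersections of the constraint boundary lines.
from itertools import combinations

constraints = [(1, 0, 2),     # x <= 2
               (0, 1, 3),     # y <= 3
               (-1, 0, 0),    # -x <= 0, that is, x >= 0
               (0, -1, 0)]    # -y <= 0, that is, y >= 0

def objective(x, y):
    return 2 * x + y

corners = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue                       # parallel boundaries never cross
    x = (c1 * b2 - c2 * b1) / det      # Cramer's rule for the 2x2 system
    y = (a1 * c2 - a2 * c1) / det
    if all(a * x + b * y <= c + 1e-9 for a, b, c in constraints):
        corners.append((x, y))

best = max(corners, key=lambda p: objective(*p))
print(best, objective(*best))          # (2.0, 3.0) 7.0
```

Four constraints means six intersections to try, four of them feasible. The catch is how fast that count grows once the variables and constraints multiply.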

We never need to do that. If someone’s hiring you to test five combinations I admire your hustle and need you to start getting me consulting work. A real problem will have many variables and many constraints. The feasible region will most often look like a multifaceted gemstone. It’ll extend into more than three dimensions, usually. It’s all right if you just imagine the three, as long as the gemstone is complicated enough.

Because now we’ve got lots of vertices. Maybe more than we really want to deal with. So what’s there to do?

The basic approach, the one that’s foundational to the field, is the simplex method. A “simplex” is a triangle. In two dimensions, anyway. In three dimensions it’s a tetrahedron. In one dimension it’s a line segment. Generally, however many dimensions of space you have? The simplex is the simplest thing that fills up volume in your space.

You know how you can turn any polygon into a bunch of triangles? Just by connecting enough vertices together? You can turn a polyhedron into a bunch of tetrahedrons, by adding faces that connect trios of vertices. And for polyhedron-like shapes in more dimensions? We call those polytopes. Polytopes we can turn into a bunch of simplexes. So this is why it’s the “simplex method”. Any one simplex it’s easy to check the vertices on. And we can turn the polytope into a bunch of simplexes. And we can ignore all the interior vertices of the simplexes.

So here’s the simplex method. First, break your polytope up into simplexes. Next, pick any simplex; doesn’t matter which. Pick any outside vertex of that simplex. This is the first viable possible solution. It’s most likely wrong. That’s okay. We’ll make it better.

Because there are other vertices on this simplex. And there are other simplexes, adjacent to that first, which share this vertex. Test the vertices that share an edge with this one. Is there one that improves the objective function? Probably. Is there a best one of those in this simplex? Sure. So now that’s our second viable possible solution. If we had to give an answer right now, that would be our best guess.

But this new vertex, this new tentative solution? It shares edges with other vertices, across several simplexes. So look at these new neighbors. Are any of them an improvement? Which one of them is the best improvement? Move over there. That’s our next tentative solution.

You see where this is going. Keep at this. Eventually it’ll wind to a conclusion. Usually this works great. If you have, like, 8 constraints, you can usually expect to get your answer in from 16 to 24 iterations. If you have 20 constraints, expect an answer in from 40 to 60 iterations. This is doing pretty well.

But it might take a while. It’s possible for the method to “stall” a while, often because one or more of the variables is at its constraint boundary. Or the division of polytope into simplexes got unlucky, and it’s hard to get to better solutions. Or there might be a string of vertices that are all at, or near, the same value, so the simplex method can’t resolve where to “go” next. In the worst possible case, the simplex method takes a number of iterations that grows exponentially with the number of constraints. This, yes, is very bad. It doesn’t typically happen. It’s a numerical algorithm. There’s some problem to spoil any numerical algorithm.

You may have complaints. Like, the world is complicated. Why are we only looking at linear objective functions? Or, why only look at linear constraints? Well, if you really need to do that? Go ahead, but that’s not linear programming anymore. Think hard about whether you really need that, though. Linear anything is usually simpler than nonlinear anything. I mean, if your optimization function absolutely has to have y^2 in it? Could we just say you have a new variable w that just happens to be equal to the square of y? Will that work? If you have to have the sine of z? Are you sure that z isn’t going to get outside the region where the sine of z is pretty close to just being z? Can you check?

Maybe you have, and there’s just nothing for it. That’s all right. This is why optimization is a living field of study. It demands judgement and thought and all that hard work.

Thank you for reading. This and all the other Fall 2019 A To Z posts should be at this link. They should be joined next week by letters ‘M’ and ‘N’. Also next week I hope to open for nominations for the next set of letters. All of my past A To Z essays should be available at this link.

Getting Into Shapes

This is, in part, a post for myself. They all are, but this is moreso. My day job includes some Geographic Information Systems stuff, which is how we say “maps” when we want to be taken seriously as Information Technology professionals. When we make maps, what we really do is have a computer draw polygons, and then put dots on them. A common need is to put a dot in the middle of a polygon. Yes, this sounds silly, but describe your job this abstractly and see how it comes out.

The trouble is polygons can be complicated stuff. Can be, not are. If the polygon is, like, the border of your building’s property it’s probably not too crazy. It’s probably a rectangle, or at least a trapezoid. Maybe there’s a curved boundary. If you need a dot, such as to place the street address or a description of the property, you can make a good guess about where to put it so it’s inside the property and not too close to an edge.

But you can’t always. The polygons can be complicated. Especially if you’re representing stuff that reflects government or scientific or commercial interest. There’s good reasons to be interested in the boundaries between the low-low tide and the high-high tide lines of a beach, but that’s not going to look like anything simple for any realistic property. Finding a representative spot to fix labels or other business gets tricky.

So this crossed my Twitter feed and I’ll probably want to refer back to it at some point. It’s an algorithm, published last August by Vladimir Agafonkin at Mapbox, which uses some computation tricks to find a reasonable center.

The approach is, broadly, of a kind with many numerical methods. It tries to find an answer by taking a guess and then seeing if any obvious variations will make it a little better. If you can, then, repeat these variations. Eventually, usually, you’ll get to a pretty good answer. It may not be the exact best possible answer, but that’s all right. We accept that we’ll have a merely approximate answer, but we’ll get it more quickly than we otherwise would have. Often this is fine. Nobody will be upset that the label on a map would be “better” moved one pixel to the right if they get the map ten seconds faster. Optimization is often like that.
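Here’s a toy version of that guess-and-improve loop, for the related job of putting a dot far from every edge. To be clear, this is my sketch of the general technique, not Agafonkin’s actual algorithm (his uses a quadtree and clever error bounds), and it assumes a convex polygon with counterclockwise vertices so that a signed distance works:

```python
# Guess-and-improve placement of a label point: start somewhere, probe
# neighboring points, keep any that sit farther from the polygon's edges,
# and shrink the probe distance when nothing nearby is better.
from itertools import product
from math import hypot

def clearance(p, poly):
    # Signed distance from p to the nearest edge line of a convex polygon
    # with counterclockwise vertices; negative means p is outside.
    px, py = p
    dists = []
    for i in range(len(poly)):
        (ax, ay), (bx, by) = poly[i], poly[(i + 1) % len(poly)]
        dx, dy = bx - ax, by - ay
        dists.append((dx * (py - ay) - dy * (px - ax)) / hypot(dx, dy))
    return min(dists)

poly = [(0, 0), (4, 0), (4, 4), (0, 4)]   # a square, so we know the answer
x, y = 0.5, 0.5                            # deliberately poor first guess
step = 1.0
while step > 1e-6:
    improved = False
    for dx, dy in product((-step, 0, step), repeat=2):
        if clearance((x + dx, y + dy), poly) > clearance((x, y), poly):
            x, y = x + dx, y + dy
            improved = True
    if not improved:
        step /= 2       # nothing nearby is better at this scale; look closer
print(round(x, 3), round(y, 3))            # the square's center: 2.0 2.0
```

Starting from a bad guess it walks to the square’s center, halving the step whenever no neighboring probe improves the clearance. Nobody minds that the final answer is only accurate to a millionth of a unit, which is the trade the paragraph above describes.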

I have not tried putting this code into mine yet; I’ve just now read it and I have some higher-priority tasks at work. But I’m hoping to remember that this exists and to see whether I can use it.

Reading the Comics, November 1, 2015: Uncertainty and TV Schedules Edition

Brian Fies’s Mom’s Cancer is a heartbreaking story. It’s compelling reading, but people who are emotionally raw from lost loved ones, or who know they’re particularly sensitive to such stories, should consider before reading that the comic is about exactly what the title says.

But it belongs here because the October 29th and the November 2nd installments are about a curiosity of area, and volume, and hypervolume, and more. That is that our perception of how big a thing is tends to be governed by one dimension, the length or the diameter of the thing. But its area is the square of that, its volume the cube of that, its hypervolume some higher power yet of that. So very slight changes in the diameter produce great changes in the volume. Conversely, though, great changes in volume will look like only slight changes. This can hurt.

Tom Toles’s Randolph Itch, 2 am from the 29th of October is a Roman numerals joke. I include it as comic relief. The clock face in the strip does depict 4 as IV. That’s eccentric but not unknown for clock faces; IIII seems to be more common. There’s not a clear reason why this should be. The explanation I find most nearly convincing is an aesthetic one. Roman numerals are flexible things, and can be arranged for artistic virtue in ways that Arabic numerals make impossible.

The aesthetic argument is that the four-character symbol IIII takes up nearly as much horizontal space as the VIII opposite it. The two-character IV would look distractingly skinny. Now, none of the symbols takes up exactly the same space as their counterpart. X is shorter than II, VII longer than V. But IV-versus-VIII does seem like the biggest discrepancy. Still, Toles’s art shows it wouldn’t look all that weird. And he had to conserve line strokes, so that the clock would read cleanly in newsprint. I imagine he also wanted to avoid using different representations of “4” so close together.

Jon Rosenberg’s Scenes From A Multiverse for the 29th of October is a riff on both quantum mechanics — Schrödinger’s Cat in a box — and the uncertainty principle. The uncertainty principle can be expressed as a fascinating mathematical construct. It starts with Ψ, a probability function that has spacetime as its domain, and the complex-valued numbers as its range. By applying a function to this function we can derive yet another function. This function-of-a-function we call an operator, because we’re saying “function” so much it’s starting to sound funny. But this new function, the one we get by applying an operator to Ψ, tells us the probability that the thing described is in this place versus that place. Or that it has this speed rather than that speed. Or this angular momentum — the tendency to keep spinning — versus that angular momentum. And so on.

If we apply an operator — let me call it A — to the function Ψ, we get a new function. What happens if we apply another operator — let me call it B — to this new function? Well, we get a second new function. It’s much the way if we take a number, and multiply it by another number, and then multiply it again by yet another number. Of course we get a new number out of it. What would you expect? This operators-on-functions thing looks and acts in many ways like multiplication. We even use symbols that look like multiplication: AΨ is operator A applied to function Ψ, and BAΨ is operator B applied to the function AΨ.

Now here is the thing we don’t expect. What if we applied operator B to Ψ first, and then operator A to the product? That is, what if we worked out ABΨ? If this was ordinary multiplication, then, nothing all that interesting. Changing the order of the real numbers we multiply together doesn’t change what the product is.

Operators are stranger creatures than real numbers are. It can be that BAΨ is not the same function as ABΨ. We say this means the operators A and B do not commute. But it can be that BAΨ is exactly the same function as ABΨ. When this happens we say that A and B do commute.
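Matrices are operators too, acting on vectors instead of functions, and they make non-commuting easy to see. A quick check in Python with a hand-rolled two-by-two multiply; the two example matrices are my own picks, chosen because they visibly disagree:

```python
# Order matters for operators. These two matrices give different
# products depending on which is applied first.
def matmul(A, B):
    # Two-by-two matrix product: row of A times column of B.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
print(matmul(A, B))   # [[1, 0], [0, 0]]
print(matmul(B, A))   # [[0, 0], [0, 1]]
```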

Whether they do or they don’t commute depends on the operators. When we know what the operators are we can say whether they commute. We don’t have to try them out on some functions and see what happens, although that sometimes is the easiest way to double-check your work. And here is where we get the uncertainty principle from.

The operator that lets us learn the probability of particles’ positions does not commute with the operator that lets us learn the probability of particles’ momentums. We get different answers if we measure a particle’s position and then its velocity than we do if we measure its velocity and then its position. (Velocity is not the same thing as momentum. But they are related. There’s nothing you can say about momentum in this context that you can’t say about velocity.)

The uncertainty principle is a great source for humor, and for science fiction. It seems to allow for all kinds of magic. Its reality is no less amazing, though. For example, it implies that it is impossible for an electron to spiral down into the nucleus of an atom, collapsing atoms the way satellites eventually fall to Earth. Matter can exist, in ways that let us have solid objects and chemistry and biology. This is at least as good as a cat being perhaps boxed.

Jan Eliot’s Stone Soup Classics for the 29th of October is a rerun from 1995. (The strip itself has gone to Sunday-only publication.) It’s a joke about how arithmetic is easy when you have the proper motivation. In 1995 that would include catching TV shows at a particular time. You see, in 1995 it was possible to record and watch TV shows when you wanted, but it required coordinating multiple pieces of electronics. It would often be easier to just watch when the show actually aired. Today we have it much better. You can watch anything you want anytime you want, using any piece of consumer electronics you have within reach, including several current models of microwave ovens and programmable thermostats. This does, sadly, remove one motivation for doing arithmetic. Also, I’m not certain the kids’ TV schedule is actually consistent with what was on TV in 1995.

Oh, heck, why not. Obviously we’re 14 minutes before the hour. Let me move onto the hour for convenience. It’s 744 minutes to the morning cartoons; that’s 12.4 hours. Taking the morning cartoons to start at 8 am, that means it’s currently 14 minutes before 24 minutes before 8 pm. I suspect a rounding error. Let me say they’re coming up on 8 pm. 194 minutes to Jeopardy implies the game show is on at 11 pm. 254 minutes to The Simpsons puts that on at midnight, which is probably true today, though I don’t think it was so in 1995 just yet. 284 minutes to Grace puts that on at 12:30 am.
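The count-back is mechanical enough to hand to Python’s datetime module. The calendar date here is arbitrary; only the clock arithmetic matters:

```python
# Counting back 744 minutes from the 8:00 am morning cartoons. The
# specific date is arbitrary; only the time-of-day arithmetic matters.
from datetime import datetime, timedelta

cartoons = datetime(1995, 11, 1, 8, 0)
now = cartoons - timedelta(minutes=744)
print(now.strftime('%I:%M %p'))   # 07:36 PM, i.e. 24 minutes before 8 pm
```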

I suspect that Eliot wanted it to be 978 minutes to the morning cartoons, which would bump Oprah to 4:00, Jeopardy to 7:00, Simpsons and Grace to 8:00 and 8:30, and still let the cartoons begin at 8 am. Or perhaps the kids aren’t that great at arithmetic yet.

Stephen Beals’s Adult Children for the 30th of October tries to build a “math error” out of repeated use of the phrase “I couldn’t care less”. The argument is that the thing one cares least about is unique. But why can’t there be two equally least-cared-about things?

We can consider caring about things as an optimization problem. Optimization problems are about finding the most of something given some constraints. If you want the least of something, multiply the thing you have by minus one and look for the most of that. You may giggle at this. But it’s the sensible thing to do. And many things can be equally high, or low. Take a bundt cake pan, and drizzle a little water in it. The water separates into many small, elliptic puddles. If the cake pan were perfectly formed, and set on a perfectly level counter, then the bottom of each puddle would be at the same minimum height. I grant a real cake pan is not perfect; neither is any counter. But you can imagine such.

Just because you can imagine it, though, must it exist? Think of the “smallest positive number”. The idea is simple. The positive numbers are a set of numbers, so surely there’s some smallest among them. Yet there isn’t: name any positive number and we can name a smaller one. Divide it by two, for example. Zero is smaller than any positive number, but it’s not itself a positive number. A minimum might not exist, at least not within the confines where we are to look. It could be there is no single thing one couldn’t care less about.
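The halving argument can even be sketched in code, with the caveat that floating-point numbers, unlike the real numbers, do eventually run out; we stop well before that happens.

```python
# Whatever positive number you name, half of it is a smaller positive
# number. (Doubles bottom out near 5e-324, so 20 halvings of 1e-300
# stay comfortably above zero.)
x = 1e-300
for _ in range(20):
    smaller = x / 2
    assert 0 < smaller < x  # still positive, still smaller
    x = smaller
print(x)
```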

So a minimum might or might not exist, and it might or might not be unique. This is why optimization problems are exciting, challenging things.

A bedbug declares that 'according to our quantum mechanical computations, our entire observable universe is almost certainly Fred Wardle's bed.'
Niklas Eriksson’s Carpe Diem for the 1st of November, 2015. I’m not sure how accurately the art depicts bedbugs, although I’m also not sure how accurately Eriksson should.

Niklas Eriksson’s Carpe Diem for the 1st of November is about understanding the universe by way of observation and calculation. We do rely on mathematics to tell us things about the universe. For example: if all we knew about space was that gravitation falls off as the square of the distance between two things, we could infer that space must have three dimensions. Otherwise that relationship would not make geometric sense. Immanuel Kant has a bit of a reputation in mathematical physics circles for this observation. (I admit I’ve never seen the original text where Kant made it, so I may be passing on an urban legend. My love has several thousand pages of Kant’s writing, but I do not know if any of them touch on natural philosophy.)
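A sketch of that geometric argument, with the usual flux reasoning assumed: the field from a point source spreads over a sphere whose surface area grows like \(r^{d-1}\) in \(d\) spatial dimensions, so

```latex
F(r) \propto \frac{1}{r^{d-1}},
\qquad\text{and requiring } F(r) \propto \frac{1}{r^{2}}
\text{ forces } d = 3 .
```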

Jeff Harris’s kids-information feature Shortcuts for the 1st of November was about the Harvard Computers. By this we mean the people who did the hard work of numerical computation, back in the days before this could be done by electrical and then electronic computer. Mathematicians relied on people who could do arithmetic in those days. There is the folkloric belief that mathematicians are inherently terrible at arithmetic. (I suspect the truth is people assume mathematicians must be better at arithmetic than they really are.) But here, there’s the mathematics of thinking what needs to be calculated, and there’s the mathematics of doing the calculations.

Their existence tends to be mentioned as a rare bit of human interest in numerical mathematics books, usually in the preface, where the author speaks with amazement of how the people who did computing were once called computers. I wonder if books about font and graphic design mention in their prefaces how people who typed used to be called typewriters.

Reading the Comics, July 19, 2015: Rerun Comics Edition

I’m stepping my blog back away from the daily posting schedule. It’s fun, but it’s also exhausting. Sometimes, Comic Strip Master Command helps out. It slowed the rate of mathematically-themed comics just enough.

By this post’s title I don’t mean that my post is a rerun. But several of the comics mentioned happen to be. One of the good, maybe best, things about the appearance of comics on GoComics and Comics Kingdom is that comic strips that have ended, such as Randolph Itch, 2 am or (alas) Cul de Sac, can still appear without taking up space. And long-running comic strips such as Luann can have their earlier strips seen by a new audience, again without doing any harm to the newest generation of cartoonists. So, there’s that.

Greg Evans’s Luann Againn (July 13, originally run July 13, 1987) makes a joke of Tiffany not understanding the odds of a contest. That’s amusing enough. Estimating the probability of something happening does require estimating how many things are possible, though, and how likely they are relative to one another. Supposing that every entry in a sweepstakes is equally likely to win seems fair enough. Estimating the number of sweepstakes entries is another problem.

Tom Toles’s Randolph Itch, 2 am (July 13, rerun from July 29, 2002) tells a silly little pirates-and-algebra joke. I like this one for the silliness and the artwork. The only sad thing is there wasn’t a natural way to work equations for a circle into it, so there’d be a use for “r”.


Convex hulls, the TSP, and long drives

The ‘God Plays Dice’ blog here makes me aware of a really fascinating little project: in 1998, apparently, Barry Stiefel decided to use a week off work to see the United States. All of them. And by dint of driving with a ruthlessness I can hardly believe, he managed it, albeit with some states just barely peeked at from the road, with Kentucky surprisingly hard to get at. Stiefel also managed, in a 2003 road expedition, to visit 21 states in a single carefully chosen day. I’m impressed.

This all seems connected to the Travelling Salesman Problem. More generally, it’s the problem of how to find the best way to do something when there are so many ways to do it that there’s no hope of testing all the possible combinations, but not an infinite continuum of ways either, so calculus techniques can’t find a solution. Problems like this tend to be interesting ones: you can usually see exactly why the problem might be interesting, you can work out a couple of easy little toy cases, and if you fiddle around with toys that are a little more realistic you see just how hard they are.
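For a sense of why testing every combination gets hopeless, here’s a brute-force sketch with made-up city coordinates; it checks every ordering, which is exactly what stops scaling as the city count grows.

```python
from itertools import permutations
from math import dist

# Five invented 'cities': a rectangle plus one interior point.
cities = [(0, 0), (3, 0), (3, 4), (0, 4), (1, 2)]

def tour_length(order):
    """Total length of the closed tour visiting cities in this order."""
    return sum(dist(cities[a], cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

# Fix the first city; (n-1)! orderings remain, which is why even
# 22 cities is a serious achievement.
best = min((list(p) for p in permutations(range(1, len(cities)))),
           key=lambda rest: tour_length([0] + rest))
print(tour_length([0] + best))
```

Five cities means only 24 orderings to check; 22 cities means about 51 quintillion.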

I’ve never driven in more than six states of the United States in a day, near as I can tell. I have driven across Pennsylvania, even though that means crossing the Appalachians, and as anyone from the East Coast knows deep down, the Appalachians are a fundamentally uncrossable geographic barrier and anyone claiming to have crossed them is surely mad.

God plays dice

I’ve been reading the book In Pursuit of the Traveling Salesman by William Cook lately. In Chapter 10, on “The Human Touch”, Cook mentions a paper, by James MacGregor and Thomas Ormerod, “Human performance on the traveling salesman problem”. One thing that this paper mentions is that humans reach solutions to the TSP by looking at global properties such as the convex hull — there’s a theorem that a tour must visit the vertices of the convex hull in order.
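Here’s a sketch of computing a convex hull, using Andrew’s monotone-chain algorithm on invented points; the vertices it returns are the ones any optimal tour must visit in this cyclic order.

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b; sign gives turn direction."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counterclockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

points = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2)]
print(convex_hull(points))  # the interior points drop out
```

The two interior points vanish from the hull, and a shortest tour of all six points has to pass the four corners in this counterclockwise order.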

This reminds me of one of my favorite travelogues, Barry Stiefel’s fifty states in a week’s vacation, in which Stiefel visited all fifty states on a week’s vacation (doing the lower 48 by driving, and flying to Alaska and Hawaii). Stiefel writes, in regard to getting to Kentucky: “Now things were going to get complicated. Until then, my route had been primarily one of following the inside perimeter…”


Reading the Comics, November 9, 2014: Finally, A Picture Edition

I knew if I kept going long enough some cartoonist not on GoComics would have to mention mathematics. That finally happened with one from Comics Kingdom, and then one from the slightly freak case of Rick Detorie’s One Big Happy. Detorie’s strip is on GoComics, but as a rerun from several years ago. He has a different strip that runs on the normal daily pages. This is for sound economic reasons: actual newspapers pay much better than the online groupings of them (considering how cheap Comics Kingdom and GoComics are for subscribers, I’m not surprised), so he doesn’t want his current strips running on GoComics. As for why his current strips do appear on some fairly good newspapers’ online comics pages, that’s a good question, and one that deserves a full answer.

The psychiatric patient is looking for something in the middle of curved space-time.
Vic Lee’s Pardon My Planet for the 6th of November, 2014.

Vic Lee’s Pardon My Planet (November 9), which broke the streak of Comics Kingdom not making it into these pages, builds around a quote from Einstein I’d never heard of before, but which sounds like the sort of vaguely inspirational message that naturally attaches to famous names. The patient talks about the difficulty of finding something in “the middle of four-dimensional curved space-time”, although properly speaking it could be tricky finding anything within a bounded space, whether it’s curved or not. The generic mathematics problem you’d build from this would be to have some function whose maximum in a region you want to find. (If you want the minimum, just multiply your function by minus one and then find the maximum of that.) There are multiple ways to do this. One obvious way is the mathematical equivalent of getting to the top of a hill by starting from wherever you are and walking the steepest way uphill. Another way is to just amble around, picking your next direction at random, always taking directions that get you higher and usually, but not always, refusing directions that bring you lower. You can probably see some of the obvious problems with either approach, and this is why finding the spot you want can be harder than it sounds, even if it’s easy to get started looking.
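Here’s a sketch of that amble-around strategy, on a made-up function with a single peak; nothing about it comes from the strip.

```python
import random

# A single hill peaking at (1, -2); the search knows nothing about that.
def f(x, y):
    return -(x - 1) ** 2 - (y + 2) ** 2

random.seed(42)
x, y = 0.0, 0.0
best_xy, best_val = (x, y), f(x, y)
for _ in range(20000):
    # Propose a small random step...
    nx = x + random.uniform(-0.1, 0.1)
    ny = y + random.uniform(-0.1, 0.1)
    # ...always take it uphill, and occasionally accept a downhill one.
    if f(nx, ny) > f(x, y) or random.random() < 0.01:
        x, y = nx, ny
        if f(x, y) > best_val:
            best_xy, best_val = (x, y), f(x, y)

print(best_xy)  # wanders close to the peak at (1, -2)
```

The occasional downhill step is what keeps this sort of search from getting stuck on a small local bump, at the cost of sometimes wasting steps.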

Ruben Bolling’s Super Fun-Pak Comix (November 6), which is technically a rerun since the Super Fun-Pak Comix have been a long-running feature in his Tom The Dancing Bug pages, is primarily a joke about the Heisenberg Uncertainty Principle: there is a limit to what information one can know about the universe. This limit can be understood mathematically, though. The wave formulation of quantum mechanics describes everything there is to know about a system in terms of a function, called the state function and normally designated Ψ, the value of which can vary with location and time. Determining the location or the momentum or anything else about the system is done by a process called “applying an operator to the state function”. An operator is a function that turns one function into another, which sounds like pretty sophisticated stuff until you learn that, like, “multiply this function by minus one” counts.

In quantum mechanics anything that can be observed has its own operator, normally a bit trickier than just “multiply this function by minus one” (although some are not very much harder!), and applying that operator to the state function is the mathematical representation of making that observation. If you want to observe two distinct things, such as location and momentum, that’s a matter of applying the operator for the first thing to your state function, and then taking the result of that and applying the operator for the second thing to it. And here’s where it gets really interesting: it doesn’t have to, but it can depend on what order you do this in, so that applying the first operator and then the second gives different results from applying the second operator and then the first. The operators for location and momentum are such a pair, and the result is that we can’t know both to arbitrary precision at once. But there are pairs of operators for which the order makes no difference. You could, for example, know both the momentum and the electrical charge of Scott Baio simultaneously to as great a precision as your Scott-Baio-momentum-and-electrical-charge-determination needs are, and the mathematics will back you up on that.
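In symbols, and as a sketch rather than anything the strip shows, the order-dependence for position and momentum comes down to the two operators failing to commute:

```latex
% Position-then-momentum differs from momentum-then-position by a
% fixed amount, for every state function \Psi:
(\hat{x}\hat{p} - \hat{p}\hat{x})\,\Psi = i\hbar\,\Psi
% and from this non-commuting pair follows the familiar bound:
\Delta x \, \Delta p \ge \frac{\hbar}{2}
```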

Ruben Bolling’s Tom The Dancing Bug (November 6), meanwhile, was a rerun from a few years back, when it looked like the Large Hadron Collider might never get working and the glitches started seeming absurd, as if an enormous project involving thousands of people and millions of parts could ever suffer annoying setbacks because not everything was perfectly right the first time around. There was an amusing notion going around, illustrated by Bolling nicely enough, that perhaps the results of the Large Hadron Collider would be so disastrous somehow that the universe would, in a fit of teleological outrage, prevent its successful completion. It’s still a funny idea, and a good one for science fiction stories. Isaac Asimov used it in a short story titled “Thiotimoline and the Space Age”, published 1959, in which attempts to manipulate a compound that dissolves before the water is added might have accidentally steered hurricanes Carol, Edna, and Diane into New England in 1954 and 1955.

Chip Sansom’s The Born Loser (November 7) gives me a bit of a writing break by just being a pun strip that you can save for next March 14.

Dan Thompson’s Brevity (November 7), out of reruns, is another pun strip, though with giant monsters.

Francesco Marciuliano’s Medium Large (November 7) is about two of the fads of the early 80s: that of turning everything into a breakfast cereal somehow, and that of playing with Rubik’s Cubes. Rubik’s Cubes have long been loved by a certain streak of mathematicians because they are a nice tangible representation of group theory — the study of things that can do things that look like addition without necessarily being numbers — that’s more interesting than just picking up a square and rotating it one, two, three, or four quarter-turns. I still think it’s easier to just peel the stickers off (and yet, the die-hard Rubik’s Cube Popularizer can point out there’s good questions about parity you can represent by working out the rules of how to peel off only some stickers and put them back on without being detected).
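That looks-like-addition-without-being-numbers idea can be sketched with permutations, the raw material of cube group theory. The two ‘moves’ below are invented, far simpler than actual face turns, but they show the key point: the order of composition matters.

```python
def compose(p, q):
    """Apply permutation q first, then p, to positions 0..n-1."""
    return tuple(p[q[i]] for i in range(len(p)))

r = (1, 2, 3, 0)  # cycle four corners one step around
s = (1, 0, 2, 3)  # swap the first two corners

print(compose(r, s))  # (2, 1, 3, 0)
print(compose(s, r))  # (0, 2, 3, 1): a different result
```

Composing permutations behaves like addition in many ways (there’s an identity, every move can be undone), but unlike addition it isn’t commutative, and that is where a cube’s richness comes from.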

Ruthie questions whether she'd be friends with people taking carrot sticks from her plate, or whether anyone would take them in the first place. Word problems can be tricky things.
Rick Detorie’s One Big Happy for the 9th of November, 2014.

Rick Detorie’s One Big Happy (November 9), and I’m sorry, readers about a month in the future from now, because that link’s almost certainly expired, is another entry in the subject of word problems resisted because the thing used to make the problem seem less abstract has connotations that the student doesn’t like.

Fred Wagner’s Animal Crackers (November 9) is your rare comic that could be used to teach positional notation, although when you pay attention you realize it doesn’t actually require that.

Mac and Bill King’s Magic In A Minute (November 9) shows off a mathematically-based sleight-of-hand trick, describing a way to make it look like you’re reading your partner-monkey’s mind. This is probably a nice prealgebra problem to work out just why it works. You could also consider this a toe-step into the problem of encoding messages, finding a way to send information about something in a way that the original information can be recovered, although obviously this particular method isn’t terribly secure for more than a quick bit of stage magic.