My 2019 Mathematics A To Z: Linear Programming


Today’s A To Z term is another proposed by @aajohannas.

I couldn’t find a place to fit this in the essay proper. But it’s too good to leave out. The simplex method, discussed within, traces to George Dantzig. He’d been developing planning methods for the US Army Air Forces during the Second World War. Dantzig is a person you have heard about, if you’ve heard any mathematical urban legends. In 1939 he was late to Jerzy Neyman’s class. He took two statistics problems on the board to be homework. He found them “harder than usual”, but solved them in a couple days and turned in the late homework hoping Neyman would be understanding. They weren’t homework. They were examples of famously unsolved problems. Within weeks Neyman had written one of the solutions up for publication. When Dantzig needed a thesis topic Neyman advised him to just put what he already had in a binder. It’s the stuff every grad student dreams of. The story mutated. It picked up some glurge to become a narrative about positive thinking. And mutated further, into the movie Good Will Hunting.

The story gets better, for my purposes. The simplex method can be understood as one of those homework problems. Dantzig describes some of that in this 1987 essay about the origins of the method. The essay is worth reading to understand some of how people come to think of good mathematics.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Linear Programming.

Every three days one of the comic strips I read has the elderly main character talk about how they never used algebra. This is my hyperbole. But mathematics has got the reputation for being difficult and inapplicable to everyday life. We’ll concede using arithmetic, when we get angry at the fast food cashier who hands back our two pennies before giving change for our $6.77 hummus wrap. But otherwise, who knows what an elliptic integral is, and whether it’s working properly?

Linear programming does not have this problem. In part, this is because it lacks a reputation. But those who have heard of it, acknowledge it as immensely practical mathematics. It is about something a particular kind of human always finds compelling. That is how to do a thing best.

There are several kinds of “best”. There is doing a thing in as little time as possible. Or for as little effort as possible. For the greatest profit. For the highest capacity. For the best score. For the least risk. The goals have a thousand names, none of which we need to know. They all mean the same thing. They mean “the thing we wish to optimize”. To optimize has two directions, which are one. The optimum is either the maximum or the minimum. To be good at finding a maximum is to be good at finding a minimum.

It seems obvious why we call this “programming”: surely we leave the work of finding answers to a computer. But that’s a spurious reason. The “programming” here comes from an independent sense of the word. It means more about finding a plan. Think of “programming” a night’s entertainment, so that every performer gets their turn, all scene changes have time to be done, you don’t put two comedians on right after each other, and you accommodate the performer who has to leave early and the performer who’ll get in an hour late. Linear programming problems are often about finding how to do as well as possible given various priorities. All right. At least the “linear” part is obvious. A mathematics problem is “linear” when it’s something we can reasonably expect to solve. This is not the technical meaning. Technically what it means is we’re looking at a function something like:

ax + by + cz

Here, x, y, and z are the independent variables. We don’t know their values but wish to. a, b, and c are coefficients. These values are set to some constant for the problem, but they might be something else for other problems. They’re allowed to be positive or negative or even zero. If a coefficient is zero, then the matching variable doesn’t affect matters at all. The corresponding value can be anything at all, within the constraints.

I’ve written this for three variables, as an example and because ‘x’ and ‘y’ and ‘z’ are comfortable, familiar variables. There can be fewer. There can be more. There almost always are. Two- and three-variable problems will teach you how to do this kind of problem. They’re too simple to be interesting, usually. To avoid committing to a particular number of variables we can use indices. x_j for values of j from 1 up to N. Or we can bundle all these values together into a vector, and write everything as \vec{x} . This has a particular advantage, since we can write the coefficients as a vector too. Then we use the notation of linear algebra, and write that we hope to maximize the value of:

\vec{c}^T\vec{x}

(The superscript T means “transpose”. As a linear algebra problem we’d usually think of writing a vector as a tall column of things. By transposing that we write a long row of things. By transposing we can use the notation of matrix multiplication.)

This is the objective function. Objective here in the sense of goal; it’s the thing we want to find the best possible value of.

We have constraints. These represent limits on the variables. The variables are always things that come in limited supply. There’s no allocating more money than the budget allows, nor putting more people on staff than work for the company. Often these constraints interact. Perhaps not only is there only so much staff, but no one person can work more than a set number of days in a row. Something like that. That’s all right. We can write all these constraints as a matrix equation. An inequality, properly. We can bundle all the constraints into a big matrix named A, and demand:

A\vec{x} \le \vec{b}

Also, traditionally, we suppose that every component of \vec{x} is non-negative. That is, positive, or at lowest, zero. This reflects the field’s core problems of figuring how to allocate resources. There’s no allocating less than zero of something.
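To make the pieces concrete, here’s a sketch in Python with numpy. The numbers are a toy problem of my own devising, not from any real application: the objective coefficients \vec{c}, the constraint matrix A, and the bounds \vec{b} are just arrays, and checking whether a point is allowed is one matrix multiplication.

```python
import numpy as np

# A made-up toy problem: maximize 3x + 2y subject to
#   x +  y <= 4
#   x + 3y <= 6
#   x, y >= 0
c = np.array([3.0, 2.0])            # objective coefficients
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])          # constraint matrix
b = np.array([4.0, 6.0])            # right-hand sides

def objective(x):
    """The value c^T x we hope to maximize."""
    return c @ x

def feasible(x):
    """True when x satisfies A x <= b and every component is non-negative."""
    return bool(np.all(A @ x <= b) and np.all(x >= 0))

print(feasible(np.array([2.0, 1.0])), objective(np.array([2.0, 1.0])))  # True 8.0
print(feasible(np.array([5.0, 0.0])))                                    # False
```

The point (2, 1) sits inside the feasible region; (5, 0) violates the first constraint, so it’s out.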

But we need some bounds. This is easiest to see with a two-dimensional problem. Try it yourself: draw a pair of axes on a sheet of paper. Now put in a constraint. Doesn’t matter what. The constraint’s edge is a straight line, which you can draw at any position and any angle you like. This includes horizontal and vertical. Shade in one side of the constraint. Whatever you shade in is the “feasible region”, the sets of values allowed under the constraint. Now draw in another line, another constraint. Shade in one side or the other of that. Draw in yet another line, another constraint. Shade in one side or another of that. The “feasible region” is whatever points have taken on all these shades. If you were lucky, this is a bounded region, a triangle. If you weren’t lucky, it’s not bounded. It’s maybe got some corners but goes off to the edge of the page where you stopped shading things in.

So adding that every component of \vec{x} is at least as big as zero is a backstop. It means we’ll usually get a feasible region with a finite volume. What was the last project you worked on that had no upper limits for anything, just minimums you had to satisfy? Anyway if you know you need something to be allowed less than zero go ahead. We’ll work it out. The important thing is there’s finite bounds on all the variables.

I didn’t see the bounds you drew. It’s possible you have a triangle with all three shades inside. But it’s also possible you picked the other sides to shade, and you have an annulus, with no region having more than two shades in it. This can happen. It means it’s impossible to satisfy all the constraints at once. At least one of them has to give. You may be reminded of the sign taped to your mechanic’s wall about picking two of good-fast-cheap.

But impossibility is at least easy. What if there is a feasible region?

Well, we have reason to hope. The optimum has to be somewhere within the region, that’s clear enough. Better, it has to be on the edge of the region. If you’re not seeing why, think of a simple example, like, finding the maximum of 2x + y , inside the square where x is between 0 and 2 and y is between 0 and 3. Suppose you had a putative maximum on the inside, like, where x was 1 and y was 2. What happens if you increase x a tiny bit? If you increase y by twice that? No, it’s only on the edges you can get a maximum that can’t be locally bettered. And only on the corners of the edges, at that.

(This doesn’t prove the case. But it is what the proof gets at.)
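For that little square you can check the corner claim directly. This sketch just evaluates 2x + y at the four corners of the box 0 ≤ x ≤ 2, 0 ≤ y ≤ 3:

```python
# Evaluate 2x + y at every corner of the box 0 <= x <= 2, 0 <= y <= 3.
corners = [(x, y) for x in (0, 2) for y in (0, 3)]
values = {corner: 2 * corner[0] + corner[1] for corner in corners}
best = max(values, key=values.get)
print(best, values[best])   # (2, 3) 7
```

The interior point (1, 2) only manages a value of 4; the best corner, (2, 3), reaches 7.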

So the problem sounds simple then! We just have to try out all the vertices and pick the maximum (or minimum) from them all.

OK, and here’s where we start getting into trouble. With two variables and, like, three constraints? That’s easy enough. That’s like five points to evaluate? We can do that.

We never need to do that. If someone’s hiring you to test five combinations I admire your hustle and need you to start getting me consulting work. A real problem will have many variables and many constraints. The feasible region will most often look like a multifaceted gemstone. It’ll extend into more than three dimensions, usually. It’s all right if you just imagine the three, as long as the gemstone is complicated enough.

Because now we’ve got lots of vertices. Maybe more than we really want to deal with. So what’s there to do?

The basic approach, the one that’s foundational to the field, is the simplex method. A “simplex” is a triangle, in two dimensions anyway. In three dimensions it’s a tetrahedron. In one dimension it’s a line segment. Generally, however many dimensions of space you have? The simplex is the simplest thing that fills up volume in your space.

You know how you can turn any polygon into a bunch of triangles? Just by connecting enough vertices together? You can turn a polyhedron into a bunch of tetrahedrons, by adding faces that connect trios of vertices. And for polyhedron-like shapes in more dimensions? We call those polytopes. Polytopes we can turn into a bunch of simplexes. So this is why it’s the “simplex method”. Any one simplex is easy to check the vertices of. And we can turn the polytope into a bunch of simplexes. And we can ignore all the interior vertices of the simplexes.
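The polygon-into-triangles step can be sketched in a few lines. This “fan” triangulation (a standard construction; it assumes the polygon is convex and its vertices are listed in order, which keeps the sketch honest) connects every other vertex to the first one:

```python
def fan_triangulation(vertices):
    """Split a convex polygon, given as vertices in order, into triangles
    that all share the first vertex."""
    v0 = vertices[0]
    return [(v0, vertices[i], vertices[i + 1])
            for i in range(1, len(vertices) - 1)]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(fan_triangulation(square))   # two triangles sharing the corner (0, 0)
```

An n-sided polygon yields n - 2 triangles this way; the analogous decomposition in higher dimensions is what breaks a polytope into simplexes.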

So here’s the simplex method. First, break your polytope up into simplexes. Next, pick any simplex; doesn’t matter which. Pick any outside vertex of that simplex. This is the first viable possible solution. It’s most likely wrong. That’s okay. We’ll make it better.

Because there are other vertices on this simplex. And there are other simplexes, adjacent to that first, which share this vertex. Test the vertices that share an edge with this one. Is there one that improves the objective function? Probably. Is there a best one of those in this simplex? Sure. So now that’s our second viable possible solution. If we had to give an answer right now, that would be our best guess.

But this new vertex, this new tentative solution? It shares edges with other vertices, across several simplexes. So look at these new neighbors. Are any of them an improvement? Which one of them is the best improvement? Move over there. That’s our next tentative solution.

You see where this is going. Keep at this. Eventually it’ll wind to a conclusion. Usually this works great. If you have, like, 8 constraints, you can usually expect to get your answer in from 16 to 24 iterations. If you have 20 constraints, expect an answer in from 40 to 60 iterations. This is doing pretty well.

But it might take a while. It’s possible for the method to “stall” a while, often because one or more of the variables is at its constraint boundary. Or the division of polytope into simplexes got unlucky, and it’s hard to get to better solutions. Or there might be a string of vertices that are all at, or near, the same value, so the simplex method can’t resolve where to “go” next. In the worst possible case, the simplex method takes a number of iterations that grows exponentially with the number of constraints. This, yes, is very bad. It doesn’t typically happen. It’s a numerical algorithm. There’s some problem to spoil any numerical algorithm.
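In practice almost nobody codes the simplex method from scratch; libraries handle it. As a sketch, assuming scipy is installed, here’s a made-up toy problem (maximize 3x + 2y with x + y ≤ 4, x + 3y ≤ 6, and x, y ≥ 0) handed to scipy.optimize.linprog. linprog minimizes, so the objective coefficients get negated going in and the answer negated coming out:

```python
from scipy.optimize import linprog

# Hypothetical toy problem: maximize 3x + 2y
#   subject to  x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0.
# linprog minimizes, so negate the objective, then negate the result back.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)   # optimum lands on a vertex: x = 4, y = 0, objective 12
```

And the optimum does land on a corner of the feasible region, just as the argument above says it must.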

You may have complaints. Like, the world is complicated. Why are we only looking at linear objective functions? Or, why only look at linear constraints? Well, if you really need to do that? Go ahead, but that’s not linear programming anymore. Think hard about whether you really need that, though. Linear anything is usually simpler than nonlinear anything. I mean, if your optimization function absolutely has to have y^2 in it? Could we just say you have a new variable w that just happens to be equal to the square of y? Will that work? If you have to have the sine of z? Are you sure that z isn’t going to get outside the region where the sine of z is pretty close to just being z? Can you check?

Maybe you have, and there’s just nothing for it. That’s all right. This is why optimization is a living field of study. It demands judgement and thought and all that hard work.


Thank you for reading. This and all the other Fall 2019 A To Z posts should be at this link. They should be joined next week by letters ‘M’ and ‘N’. Also next week I hope to open for nominations for the next set of letters. All of my past A To Z essays should be available at this link.

Reading the Comics, February 17, 2016: Using Mathematics Edition


Is there a unifying theme between many of the syndicated comic strips with mathematical themes the last few days? Of course there is. It’s students giving snarky answers to their teachers’ questions. That’s the theme every week. But other stuff comes up.

Joe Martin’s Boffo for the 12th of February depicts “the early days before all the bugs were worked out” of mathematics. And the early mathematician works through a whole string of operations which don’t actually respect the equals sign before finally getting to the end. Were I to do this, I would use an arrow, =>, and I suspect many mathematicians would too. It’s a way of indicating the flow of one’s thoughts without trying to assert that 2+2 is actually the same number as 1 + 1 + 1 + 1 + 6.

Mathematics, the early days before all the bugs were worked out: 2 + 2 = 1 + 1 + 1 + 1 + 6 = 10 + 5 + 5 = 20 x 5 = 100 / 4 = 25 x 7 + 5 = 180 x 2 = 360 / 9 = 40 - 15 = 25 + 1 + 10 = 36 / 9 = 4
Joe Martin’s Boffo for the 12th of February, 2016. The link will likely expire in mid-March.

And this comic is funny, in part, because it’s true. New mathematical discoveries tend to be somewhat complicated, sloppy messes to start. Over time, if the thing is of any use, the mathematical construct gets better. By better I mean the logic behind it gets better explained. You’d expect that, of course, just because time to reflect gives time to improve exposition. But the logic also tends to get better. We tend to find arguments that are, if not shorter, then better-constructed. We get to see how something gets used, and how to relate it to other things we’d like to do, and how to generalize the pieces of argument that go into it. If we think of a mathematical argument as a narrative, then, we learn how to write the better narrative.

Then, too, we get better at notation, at isolating what concepts we want to describe and how to describe them. For example, to write the fourth power of a number such as ‘x’, mathematicians used to write ‘xxxx’ — fair enough, but cumbersome. Or then xqq — the ‘q’ standing for quadratic, that is, square, of the thing before. That’s better. At least it’s less stuff to write. How about “xiiii” (as in the Roman numeral IV)? Getting to “x^4” took time, and thought, and practice with what we wanted to raise numbers to powers to do. In short, we had to get the bugs worked out.

John Rose’s Barney Google and Snuffy Smith for the 12th of February is your normal student-resisting-word-problems joke. And hey, at least they have train service still in Smith’s hometown.

'If it's a hunnert miles to th' city an' a train is travelin' thurty miles an hour is due t'arrive at 5:00 pm --- what time does th' train leave Hootin' Holler, Jughaid?' 'I dunno, Miz Prunelly, but you better go now jest t'be on th' safe side!!'
John Rose’s Barney Google and Snuffy Smith for the 12th of February, 2016.

Randy Glasbergen’s Glasbergen Cartoons for the 12th (a rerun; Glasbergen died last year) is a similar student-resisting-problems joke. Arithmetic gets an appearance no doubt because it’s the easiest kind of problem to put on the board and not distract from the actual joke.

Mark Pett’s Lucky Cow for the 14th (a rerun from the early 2000s) mentions the chaos butterfly. I am considering retiring chaos butterfly mentions from these roundups because I seem to say the same thing each time. But I haven’t yet, so I’ll say it. Part of what makes a system chaotic is that it’s deterministic yet unpredictable. Wildly different outcomes result from starting points so similar they can’t be told apart. There’s no guessing whether any action makes things better or worse, and whether that’s in the short or the long term.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 14th is surely not a response to that Pearls Before Swine from last time. I believe all the Saturday Morning Breakfast Cereal strips to appear on Gocomics are reruns from its earlier days as a web comic. But it serves as a riposte to the “nobody uses mathematics anyway” charge. And it’s a fine bit of revenge fantasy.

Historically, being the sole party that understands the financial calculations has not brought money lenders appreciation.

Tony Cochran’s Agnes for the 17th also can’t be a response to that Pearls Before Swine. The lead times just don’t work that way. But it gives another great reason to learn mathematics. I encourage anyone who wants to be Lord and Queen of Mathdom; it’s worth a try.

Tom Thaves’s Frank and Ernest for the 17th tells one of the obvious jokes about infinite sets. Fortunately mathematicians aren’t expected to list everything that goes into an infinitely large set. It would put a terrible strain on our wrists. Usually it’s enough to describe the things that go in it. Some descriptions are easy, especially if there’s a way to match the set with something already familiar, like counting numbers or real numbers. And sometimes a description has to be complicated.

There are urban legends among grad students. Many of them are thesis nightmares. One is about such sets. The story goes of the student who had worked for years on a set whose elements all had some interesting collection of properties. At the defense her advisor — the person who’s supposed to have guided her through finding and addressing an interesting problem — actually looks at the student’s work for the first time in ages, or ever. And starts drawing conclusions from it. And proves that the only set whose elements all have these properties is the null set, which hasn’t got anything in it. The whole thesis is a bust. Thaves probably didn’t have that legend in mind. But you could read the comic that way.

Percy Crosby’s Skippy for the 17th gives a hint how long kids in comic strips have been giving smart answers to teachers. This installment’s from 1928 sometime. Skippy’s pretty confident in himself, it must be said.

Reading the Comics, June 30, 2015: Fumigating The Theater Edition


One of my favorite ever episodes of The Muppet Show when I was a kid had the premise the Muppet Theater was being fumigated and so they had to put on a show from the train station instead. (It was the Loretta Lynn episode, third season, number eight.) I loved seeing them try to carry on as normal when not a single thing was as it should be. Since then — probably before, too, but I don’t remember that — I’ve loved seeing stuff trying to carry on in adverse circumstances.

Why this is mentioned here is that Sunday night my computer had a nasty freeze and some video card mishaps. I discovered that my early-2011 MacBook Pro might be among those recalled earlier this year for a service glitch. My computer is in for what I hope is a simple, free, and quick repair. But obviously I’m not at my best right now. I might be even longer than usual answering people and goodness knows how the statistics survey of June will go.

Anyway. Rick Kirkman and Jerry Scott’s Baby Blues (June 26) is a joke about motivating kids to do mathematics. And about how you can’t do mathematics over summer vacation.

Mom offers to buy three candy bars at 45 cents each if Hammie can say how much that'll be. 'It's summer, Mom! You can't mix candy and math!'
Rick Kirkman and Jerry Scott’s Baby Blues for the 26th of June, 2015.

Ruben Bolling’s Tom The Dancing Bug (June 26) features a return appearance of Chaos Butterfly. Chaos Butterfly does what Chaos Butterfly does best.

Charles Schulz’s Peanuts Begins (June 26; actually just the Peanuts of March 23, 1951) uses arithmetic as a test of smartness. And as an example of something impractical.

Alex Hallatt’s Arctic Circle (June 28) is a riff on the Good Will Hunting premise. That movie’s particular premise — the janitor solves an impossible problem left on the board — is, so far as I know, something that hasn’t happened. But it’s not impossible. Training will help one develop reasoning ability. Training will provide context and definitions and models to work from. But that’s not essential. All that’s essential is the ability to reason. Everyone has that ability; everyone can do mathematics. Someone coming from outside the academy could do first-rate work. However, I’d bet on the person with the advanced degree in mathematics. There is value in training.

The penguin-janitor offers a solution to the unsolved mathematics problem on the blackboard. It's a smiley face. It wasn't what they were looking for.
Alex Hallatt’s Arctic Circle for the 28th of June, 2015.

But as many note, the Good Will Hunting premise has got a kernel of truth in it. In 1939, George Dantzig, a grad student in mathematics at University of California/Berkeley, came in late to class. He didn’t know that two problems on the board were examples of unproven theorems, and assumed them to be homework. So he did them, though he apologized for taking so long to do them. Before you draw too much inspiration from this, though, remember that Dantzig was a graduate student almost ready to start work on a PhD thesis. And the problems were not thought unsolvable, just conjectures not yet proven. Snopes, as ever, provides some explanation of the legend and some of the variant ways the story is told.

Mac King and Bill King’s Magic In A Minute (June 28) shows off a magic trick that you could recast as a permutations problem. If you’ve been studying group theory, and many of my Mathematics A To Z terms have readied you for group theory, you can prove why this trick works.

Guy Gilchrist’s Nancy (June 28) carries on Baby Blues‘s theme of mathematics during summer vacation being simply undoable.

As only fifty percent of the population is happy, and one person is in a great mood, what must the other one be in?
Piers Baker’s Ollie and Quentin for December 28, 2014, and repeated on June 28, 2015.

Piers Baker’s Ollie and Quentin (June 28) is a gambler’s fallacy-themed joke. It was run — on ComicsKingdom, back then — back in December, and I talked some more about it then.

Mike Twohy’s That’s Life (June 28) is about the perils of putting too much attention into mental arithmetic. It’s also about how perilously hypnotic decimals are: if the pitcher had realized “fourteen million over three years” must be “four and two-thirds million per year” he’d surely have been less distracted.

Reading The Comics, May 22, 2015: Might Be Giving Up Mickey Mouse Edition


We’re drawing upon the end of the school year, on the United States calendar. Comic Strip Master Command may have ordered fewer mathematically-themed comic strips to be run. That’s all right. I have plans. I also may need to stop paying attention to the Disney comic strips, reruns of Mickey Mouse and Donald Duck. I explain why within.

The line of equations droops to the floor, then rises again. 'Fortunately, I had a Red Bull handy.'
Niklas Eriksson’s Carpe Diem for the 16th of May, 2015.

Niklas Eriksson’s Carpe Diem made its first appearance in my pages here on the 16th of May. (It’s been newly introduced to United States comics. I’m sorry, I just can’t read all the syndicated newspaper comic strips in the world for mathematical content. If someone wants to franchise the Reading The Comics idea for a country they like, let’s talk. We can negotiate reasonable terms.) Anyway, it uses the usual string of mathematical symbols to express the idea of a lot of hard mathematical work. The big down-arrow just before superstar equation E = mc^2 is authentic enough. Trying to show the chain of thought, or to point out the conclusions one hopes follow from the work done, is a common part of discovering or inventing mathematics.

Mother Goose reads how a woman over 50 supposedly has more chance of being struck by lightning than of marrying the right man. Grimm says she'll meet the right man someday, and scoots away.
Mike Peters’s Mother Goose and Grimm for the 17th of May, 2015. Note: the claim in the second and third panels is transparent nonsense. But it’s a comic strip, after all.

Mike Peters’s Mother Goose and Grimm (May 17) riffs on that ancient and transparently stupid bit of folklore about the chance of an older woman having a better chance of dying in some improbable manner than of marrying successfully. It’s always been obvious nonsense and people passing along the claim uncritically should be ashamed of themselves. I’ll give Peters a pass since the point is to set up a joke, and joke-setup can get away with a lot. Still.

Continue reading “Reading The Comics, May 22, 2015: Might Be Giving Up Mickey Mouse Edition”

Reading the Comics, April 6, 2015: Little Infinite Edition


As I warned, there were a lot of mathematically-themed comic strips the last week, and here I can at least get us through the start of April. This doesn’t include the strips that ran today, the 7th of April by my calendar, because I have to get some serious-looking men to look at my car and I just know they’re going to disapprove of what my CV joint covers look like, even though I’ve done nothing to them. But I won’t be reading most of today’s comic strips until after that’s done, and so commenting on them later.

Mark Anderson’s Andertoons (April 3) makes its traditional appearance in my roundup, in this case with a business-type guy declaring infinity to be “the loophole of all loopholes!” I think that’s overstating things a fair bit, but strange and very counter-intuitive things do happen when you try to work out a problem in which infinities turn up. For example: in ordinary arithmetic, the order in which you add together a bunch of real numbers makes no difference. If you want to add together infinitely many real numbers, though, it is possible to have them add to different numbers depending on what order you add them in. Most unsettlingly, it’s possible to have infinitely many real numbers add up to literally any real number you like, depending on the order in which you add them. And then things get really weird.
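That rearrangement fact applies to conditionally convergent series, the classic example being the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + …, which in its usual order sums to the natural logarithm of 2. A greedy sketch demonstrates the trick: add positive terms until the running total passes your target, then negative terms until it drops back below, and repeat.

```python
def rearranged_partial_sum(target, n_terms=100_000):
    """Partial sum of a rearrangement of 1 - 1/2 + 1/3 - 1/4 + ...
    chosen greedily so the running total chases the given target."""
    total = 0.0
    next_pos = 1   # next unused positive term is 1/next_pos (odd denominators)
    next_neg = 2   # next unused negative term is -1/next_neg (even denominators)
    for _ in range(n_terms):
        if total <= target:
            total += 1.0 / next_pos
            next_pos += 2
        else:
            total -= 1.0 / next_neg
            next_neg += 2
    return total

print(rearranged_partial_sum(1.5))    # hovers near 1.5
print(rearranged_partial_sum(-2.0))   # the same terms, reordered, hover near -2
```

Same terms every time; only the order changes, and the sum goes wherever you steer it.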

Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips (April 3) is the other strip in this roundup to at least name-drop infinity. I confess I don’t see how “being infinite” would help in bringing about world peace, but I suppose being finite hasn’t managed the trick just yet so we might want to think outside the box.

Continue reading “Reading the Comics, April 6, 2015: Little Infinite Edition”