I apologize that, even though the past week was light on mathematically-themed comic strips, I didn’t have them written up by my usual Sunday posting time. It was just too busy a week, and I am still decompressing from the A to Z sequence. I’ll have them as soon as I’m able.
In the meanwhile may I share a couple of things I thought worth reading, and that have been waiting in my notes folder for the chance to highlight?
There are around 7000 people currently living in this planet who got 20 tails in a row the first time they tried flipping a coin in their life
This Fermat’s Library tweet is one of those entertaining consequences of probability, multiplied by the large number of people in the world. If you flip a coin twenty times in a row there’s a one in 1,048,576 chance that all twenty flips will come up heads, and the same chance that all twenty will come up tails. So about one in every million people who flip a coin twenty times will see it come up tails every time. If the seven billion people in the world have each flipped at least twenty coins in their lives, then something like seven thousand of them had the coins turn up tails every single one of those first twenty times. That all seven billion people have tossed a coin at least twenty times seems like the biggest point on which to attack this trivia. A lot of people are too young to have, or don’t have access to, coins. But there are still going to be thousands who did start their coin-flipping lives with a remarkable streak.
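The arithmetic is easy to check. A couple of lines of Python (my own quick check, not anything from the tweet) reproduce the estimate:

```python
# Chance that twenty fair-coin flips all come up tails.
sequences = 2 ** 20       # 1,048,576 equally likely sequences of 20 flips
p_all_tails = 1 / sequences  # exactly one of those sequences is all tails

# Rough head count: how many of seven billion coin-flippers would
# start their flipping lives with a twenty-tails streak?
population = 7_000_000_000
expected = population * p_all_tails
print(sequences)         # 1048576
print(round(expected))   # 6676, in the neighborhood of the tweet's 7000
```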
Also back in October, so you see how long things have been circulating around here, John D Cook published an article about the World Series. Or any series contest. At least ones where the chance of each side winning doesn’t depend on the previous games in the series. If one side has a probability ‘p’ of winning any particular game, what’s the chance they’ll win a best-four-of-seven? What makes this a more challenging mathematics problem is that a best-of-seven series stops after one side’s won four games. So you can’t simply say it’s the chance of four wins. You need to account for four wins out of four games, out of five games, out of six games, and out of seven games. Fortunately there’s a lot of old mathematics that explores just this.
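The answer has a tidy closed form: the series ends with the clinching fourth win, so you add up the ways to clinch in game four, five, six, or seven. A sketch in Python (my own restatement of the standard formula, not Cook’s code):

```python
from math import comb

def series_win_prob(p, wins_needed=4):
    """Chance a team with per-game win probability p takes a
    best-of-(2 * wins_needed - 1) series.

    The series ends on the clinching win, so the team must win game k
    and exactly wins_needed - 1 of the k - 1 games before it."""
    q = 1 - p
    return sum(
        comb(k - 1, wins_needed - 1) * p ** wins_needed * q ** (k - wins_needed)
        for k in range(wins_needed, 2 * wins_needed)
    )

print(series_win_prob(0.5))  # 0.5, as symmetry demands
print(series_win_prob(0.6))  # about 0.71: a long series amplifies a per-game edge
```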
The economist Bradford DeLong noticed the first write-up of the Prisoner’s Dilemma. This is one of the first bits of game theory that anyone learns, and it’s an important bit. It establishes that the logic of cooperative games — any project where people have to work together — can have a terrible outcome. What makes the most sense for the individuals makes the least sense for the group. A good outcome for everyone depends on trust, whether established through history or through constraints everyone’s agreed to respect.
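The terrible logic is easy to tabulate. Here’s a sketch using the conventional textbook payoffs (the numbers 3, 0, 5, 1 are the standard illustrative choices, not anything from DeLong’s post):

```python
# Payoffs (row player's points, column player's points) for the
# classic Prisoner's Dilemma: C = cooperate, D = defect.
payoff = {
    ("C", "C"): (3, 3),  # mutual cooperation: both do well
    ("C", "D"): (0, 5),  # the lone cooperator is exploited
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: both do poorly
}

def best_response(opponent_move):
    """The row player's best reply, given what the opponent does."""
    return max(("C", "D"), key=lambda my: payoff[(my, opponent_move)][0])

# Defecting is better no matter what the other player chooses ...
print(best_response("C"), best_response("D"))  # D D
# ... and yet both players would prefer (C, C) to the (D, D) they're driven to.
print(payoff[("C", "C")][0] > payoff[("D", "D")][0])  # True
```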
And finally here’s part of a series about quick little divisibility tests. This is that trick where you tell what a number’s divisible by through adding or subtracting its (base ten) digits. Everyone who’d be reading this post knows about testing for divisibility by three or nine. Here are some rules for also testing divisibility by eleven (which you might know), by seven (less likely), and by thirteen. With a bit of practice, and awareness of some exceptional numbers, you can tell by sight whether a number smaller than a thousand is prime. Add a bit of flourish to your doing this and you can establish a reputation as a magical mathematician.
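The tests for 7, 11, and 13 all hang on the happy fact that 1001 = 7 × 11 × 13, so 1000 is one less than a multiple of each. Here’s a sketch of the alternating-sum versions (my own phrasing of the standard rules, not necessarily the linked series’ presentation):

```python
def divisible_by_11(n):
    """Alternately add and subtract decimal digits, rightmost first;
    n is divisible by 11 exactly when the alternating sum is."""
    total, sign = 0, 1
    while n:
        total += sign * (n % 10)
        sign = -sign
        n //= 10
    return total % 11 == 0

def alt_block_sum(n):
    """Alternating sum of three-digit blocks, rightmost first.
    Since 1000 is congruent to -1 modulo 7, 11, and 13, this sum
    has the same remainder as n for each of those divisors."""
    total, sign = 0, 1
    while n:
        total += sign * (n % 1000)
        sign = -sign
        n //= 1000
    return total

print(divisible_by_11(143))        # True: 143 = 11 * 13
print(alt_block_sum(1001))         # 0, so 1001 is divisible by 7, 11, and 13
print(alt_block_sum(123_459) % 7)  # 459 - 123 = 336 = 7 * 48, so divisible by 7
```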
Some areas of mathematics struggle against the question, “So what is this useful for?” As though usefulness were a particular merit — or demerit — for a field of human study. Most mathematics fields discover some use, though, even if it takes centuries. Others are born useful. Probability, for example. Statistics. Know what the fields are and you know why they’re valuable.
Game theory is another of these. The subject, as often happens, we can trace back centuries. Usually as the study of some particular game. Occasionally in the study of some political science problem. But game theory developed a particular identity in the early 20th century. Some of this from set theory experts. Some from probability experts. Some from John von Neumann, because it was the 20th century and all that. Calling it “game theory” explains why anyone might like to study it. Who doesn’t like playing games? Who, studying a game, doesn’t want to play it better?
But why it might be interesting is different from why it might be important. Think of what a game is. It is a string of choices made by one or more parties. The point of the choices is to achieve some goal. Put that way you realize: this is everything. All life is making choices, all in the pursuit of some goal, even if that goal is just “not end up any worse off”. I don’t know that the earliest researchers in game theory as a field realized what a powerful subject they had touched on. But by the 1950s they were doing serious work in strategic planning, and by 1964 were even giving us Stanley Kubrick movies.
This is taking me away from my glossary term. The field of games is enormous. If we narrow the field some we can discuss specific kinds of games. And say more involved things about these games. So first we’ll limit things by thinking only of sequential games. These are ones where there are a set number of players, and they take turns making choices. I’m not sure whether the field expects the order of play to be the same every time. My understanding is that much of the focus is on two-player games. What’s important is that at any one step there’s only one party making a choice.
The other thing narrowing the field is to think of information. There are many things that can affect the state of the game. Some of them might be obvious, like where the pieces are on the game board. Or how much money a player has. We’re used to that. But there can be hidden information. A player might conceal some game money so as to make other players underestimate her resources. Many card games have one or more cards concealed from the other players. There can be information unknown to any party. No one can make a useful prediction of what the next throw of the game dice will be. Or what the next event card will be.
But there are games where there’s none of this ambiguity. These are called games with “perfect information”. In them all the players know the past moves every player has made. Or at least should know them. Players are allowed to forget what they ought to know.
There’s a separate but similar-sounding idea called “complete information”. In a game with complete information, players know everything that affects the gameplay. At least, probably, apart from what their opponents intend to do. This might sound like an impossibly high standard, at first. All games with shuffled decks of cards and with dice to roll are out. There’s no concealing or lying about the state of affairs.
Set complete-information aside; we don’t need it here. Think only of perfect-information games. What are they? Some ancient games, certainly. Tic-tac-toe, for example. Some more modern versions, like Connect Four and its variations. Some that are actually deep, like checkers and chess and go. Some that are, arguably, more puzzles than games, as in sudoku. Some that hardly seem like games, like several people agreeing how to cut a cake fairly. Some that seem like tests to prove people are fundamentally stupid, like when you auction off a dollar. (The rules are set so players can easily end up paying more than a dollar.) But that’s enough for me, at least. You can see there are games of clear, tangible interest here.
The last restriction: think only of two-player games. Or at least two parties. Any of these two-party sequential games with perfect information are a part of “combinatorial game theory”. It doesn’t usually allow for incomplete-information games. But at least the MathWorld glossary doesn’t demand they be ruled out. So I will defer to this authority. I’m not sure how the name “combinatorial” got attached to this kind of game. My guess is that it seems like you should be able to list all the possible combinations of legal moves. That number may be enormous, as chess and go players are always going on about. But you could imagine a vast book which lists every possible game. If your friend ever challenged you to a game of chess the two of you could simply agree, oh, you’ll play game number 2,038,940,949,172 and then look up to see who won. Quite the time-saver.
Most games don’t have such a book, though. Players have to act on what they understand of the current state, and what they think the other player will do. This is where we get strategies from. Not just what we plan to do, but what we imagine the other party plans to do. When working out a strategy we often expect the other party to play perfectly. That is, to make no mistakes, to not do anything that worsens their position. Or that reduces their chance of winning.
… And yes, arguably, the word “chance” doesn’t belong there. These are games where the rules are known, every past move is known, every future move is in principle computable. And if we suppose everyone is making the best possible move then we can imagine forecasting the whole future of the game. One player has a “chance” of winning in the same way Christmas day of the year 2038 has a “chance” of being on a Tuesday. That is, the probability is just an expression of our ignorance, that we don’t happen to be able to look it up.
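And the looking-up really is trivial, which is the point: the “chance” evaporates the moment anyone consults a calendar. In Python, for instance:

```python
from datetime import date

# No probability involved: the Gregorian calendar already fixes the answer.
names = ["Monday", "Tuesday", "Wednesday", "Thursday",
         "Friday", "Saturday", "Sunday"]
print(names[date(2038, 12, 25).weekday()])  # Saturday
```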
But what choice do we have? I’ve never seen a reference that lists all the possible games of tic-tac-toe. And that’s about the simplest combinatorial-game-theory game anyone might actually play. What’s possible is to look at the current state of the game. And evaluate which player seems to be closer to her goal. And then look at all the possible moves.
There are three things a move can do. It can put the party closer to the goal. It can put the party farther from the goal. Or it can do neither. On her turn the other party might do something that moves you farther from your goal, moves you closer to your goal, or doesn’t affect your status at all. It seems like this makes strategy obvious. On every step take the available move that takes one closest to the goal. This is known as a “greedy” strategy. As the name suggests it isn’t automatically bad. If you expect the game to be a short one, greed might be the best approach. The catch is that moves that seem less good — even ones that seem to hurt you initially — might set up other, even better moves. So strategy requires some thinking beyond the current step. Properly, it requires thinking through to the end of the game. Or at least until the end of the game seems obvious.
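For a game small enough, “thinking through to the end” can be done literally. Here’s a sketch for a Nim-style subtraction game (take one, two, or three counters per turn; whoever takes the last counter wins), a toy example of mine rather than anything from the glossary:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def current_player_wins(counters):
    """True if the player about to move can force a win, assuming both
    sides play perfectly. We try every legal move; if any of them leaves
    the opponent in a losing position, this position is a win."""
    if counters == 0:
        return False  # no counters left: the previous player took the last one and won
    return any(
        not current_player_wins(counters - take)
        for take in (1, 2, 3)
        if take <= counters
    )

# Perfect play reveals the pattern: multiples of four are lost positions.
print([n for n in range(1, 13) if not current_player_wins(n)])  # [4, 8, 12]
```

A greedy player sees only the next counter; the recursion sees to the end of the game, which is exactly the difference the paragraph above describes.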
We should like a strategy that leaves us no choice but to win. Next-best would be one that leaves the game undecided, since something might happen like the other player needing to catch a bus and so resigning. This is how I got my solitary win in the two months I spent in the college chess club. Worst would be the games that leave us no choice but to lose.
It can be that there are no good moves. That is, that every move available makes it a little less likely that we win. Sometimes a game offers the chance to pass, preserving the state of the game but giving the other party the turn. Then maybe the other party will do something that creates a better opportunity for us. But if we are allowed to pass, there’s a good chance the game lets the other party pass, too, and we end up in the same fix. And it may be the rules of the game don’t allow passing anyway. One must move.
The phenomenon of having to make a move when it’s impossible to make a good move has prominence in chess. I don’t have the chess knowledge to say how common the situation is. But it seems to be a situation people who study chess problems love. I suppose it appeals to a love of lost causes and the hope that you can be brilliant enough to see what everyone else has overlooked. German chess literature gave it a name 160 years ago: “zugzwang”, “compulsion to move”. Somehow I never encountered the term when I was briefly a college chess player. Perhaps because I was never in zugzwang and was just too incompetent a player to find my good moves. I first encountered the term in Michael Chabon’s The Yiddish Policeman’s Union. The protagonist picked up on the term as he investigated the murder of a chess player and then felt himself in one.
Combinatorial game theorists have picked up the word, and sharpened its meaning. If I understand correctly chess players allow the term to be used for any case where a player hurts her position by moving at all. Game theorists make it more dire. This may reflect their knowledge that an optimal strategy might require taking some dismal steps along the way. The game theorist formally grants the term only to the situation where the compulsion to move changes what should be a win into a loss. This seems terrible, but then, we’ve all done this in play. We all feel terrible about it.
I’d like here to give examples. But in searching the web I can find only either courses in game theory, which are a bit too much for even me to summarize, or chess problems, which I’m not up to understanding. It seems hard to set out an example: I need to not just set out the game, but show that what had been a win is, by any available move, turned into a loss. Chess is looser. It even allows, I discover, a double zugzwang, where both players are at a disadvantage if they have to move.
It’s a quite relatable problem. You see why game theory has this reputation as mathematics that touches all life.
Chris Browne’s Hagar the Horrible for the 31st of March happens to be funny-because-it’s-true. It’s supposed to be transgressive to see a gambler as the best mathematician available. But quite a few of the great pioneering minds of mathematics were also gamblers looking for an edge. It may shock you to learn that mathematicians in past centuries didn’t have enough money, and would look for ways to get more. And, as ever, knowing something secret about the way cards or dice or any unpredictable event might happen gives one an edge. The question of whether a 9 or a 10 is more likely to be thrown on three dice was debated for centuries, by people as familiar to us as Galileo. And by people as familiar to mathematicians as Gerolamo Cardano.
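The three-dice question yields instantly to brute force, a luxury Galileo lacked. Counting the 216 equally likely outcomes settles it:

```python
from itertools import product

# All 6 * 6 * 6 = 216 equally likely ordered rolls of three fair dice.
rolls = list(product(range(1, 7), repeat=3))
nines = sum(1 for roll in rolls if sum(roll) == 9)
tens = sum(1 for roll in rolls if sum(roll) == 10)

# Both 9 and 10 can be written as a sum of three dice in six ways
# (9 = 1+2+6 = 1+3+5 = ...), but the ordered counts differ,
# which is Galileo's resolution of the puzzle.
print(nines, tens)  # 25 27: a 10 is slightly more likely than a 9
```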
Gambling blends imperceptibly into everything people want to do. The question of how to fairly divide the pot of an interrupted game may seem sordid. But recast it as the problem of how to divide the assets of a partnership which had to halt — say, because one of the partners had to stop participating — and we have something that looks respectable. And gambling blends imperceptibly into security. The result of any one project may be unpredictable. The result of many similar ones, on average, often is. Card games or joint-stock insurance companies; the mathematics is the same. A good card-counter might be the best mathematician available.
Tony Cochran’s Agnes for the 31st name-drops Diophantine equations. It’s in the service of a joke about a student resisting class. Diophantine equations are equations for which we only allow integer, whole-number, answers. The name refers to Diophantus of Alexandria, who lived in the third century AD. His Arithmetica describes many methods for solving equations, a prototype of the algebra we know from high school today. Generally, a Diophantine equation is a hard problem. It’s impossible, for example, to say whether an arbitrary Diophantine equation even has a solution. And if it does, finding what that solution might be is another bit of work. Fermat’s Last Theorem is about a Diophantine equation, and it took centuries to work out that there isn’t generally an answer.
Mind, we can say for specific cases whether a Diophantine equation has a solution. And those specific cases can be pretty general. If we know integers a and b which share no common divisor larger than 1, then we can find integers x and y that make “ax + by = 1” true, for example.
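The workhorse for that specific case is the extended Euclidean algorithm, which produces the x and y directly. More generally it finds x and y with ax + by equal to the greatest common divisor of a and b, which is why the “= 1” case needs a and b to be coprime. A sketch:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a * x + b * y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # gcd(a, b) == gcd(b, a % b); back-substituting the quotient
    # rewrites the combination in terms of a and b.
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(7, 5)
print(g, 7 * x + 5 * y)      # 1 1: coprime a and b, so ax + by = 1 is solvable

g, x, y = extended_gcd(240, 46)
print(g, 240 * x + 46 * y)   # 2 2: in general you can only reach the gcd
```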
Graham Harrop’s Ten Cats for the 31st hurts mathematicians’ feelings on the way to trying to help a shy cat. I’m amused anyway.
And Jonathan Lemon’s Rabbits Against Magic for the 1st of April mentions Fermat’s Last Theorem. The structure of the joke is fine. If we must ask an irrelevant question of the Information Desk mathematics has got plenty of good questions. The choice makes me suspect Lemon’s showing his age, though. The imagination-capturing power of Fermat’s Last Theorem as a great unknown has to have been diminished since the first proof was found over two decades ago. It’d be someone who grew up knowing there was this mystery about x^n plus y^n equalling z^n who’d jump to this reference.
Tom Toles’s Randolph Itch, 2 am for the 2nd of April mentions “zero-sum games”. The term comes from the mathematical theory of games. The field might sound frivolous, but that’s because you don’t know how much stuff the field considers to be “games”. Mathematicians who study them consider “games” to be sets of decisions. One or more people make choices, and gain or lose as a result of those choices. That is a pretty vague description. It covers playing solitaire and multiplayer Civilization V. It also covers career planning and imperial brinksmanship. And, for that matter, business dealings.
“Zero-sum” games refer to how we score the game’s objectives. If it’s zero-sum, then anything gained by one player must be balanced by equal losses by the other player or players. For example, in a sports league’s season standings, one team’s win must balance another team’s loss. The total number of won games, across all the teams, has to equal the total number of lost games. But a game doesn’t have to be zero-sum. It’s possible to create games in which all participants gain something, or all lose something. Or where the total gained doesn’t equal the total lost. These are, imaginatively, called non-zero-sum games. They turn up often in real-world applications. Political or military strategy often is about problems in which both parties can lose. Business opportunities are often intended to see the directly involved parties benefit. This is surely why Randolph is shown reading the business pages.
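A payoff table makes the bookkeeping concrete. Matching pennies is the textbook zero-sum game; alongside it, a made-up “joint venture” table where both sides can gain (the numbers are my invention, just to show the sums differ):

```python
# Matching pennies: one player wins a penny exactly when the other loses one.
matching_pennies = {
    ("H", "H"): (1, -1), ("H", "T"): (-1, 1),
    ("T", "H"): (-1, 1), ("T", "T"): (1, -1),
}

# A toy joint venture: if both invest, both profit, so totals vary by outcome.
joint_venture = {
    ("invest", "invest"): (2, 2), ("invest", "pass"): (-1, 0),
    ("pass", "invest"): (0, -1), ("pass", "pass"): (0, 0),
}

def is_zero_sum(game):
    """A game is zero-sum when every outcome's payoffs cancel exactly."""
    return all(sum(payoffs) == 0 for payoffs in game.values())

print(is_zero_sum(matching_pennies))  # True
print(is_zero_sum(joint_venture))     # False: both players can come out ahead
```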
Ad Nihil here presents an interesting-looking game demonstrating something I hadn’t heard of before, Parrondo’s Paradox, which apparently is a phenomenon in which a combination of losing strategies becomes a winning strategy. I do want to think about this more, so I offer the link to that blog’s report in the hope that I’ll go back and consider it when I’m able.
My inspiration from my daughter’s 8th grade probability problems continues. In a previous post I worked on a hypothetical story of monitoring all communications for security with a Bayesian analysis approach. This time, when I saw the spinning-wheel problems in her textbook, I was yet again inspired to create a game system to demonstrate Parrondo’s Paradox.
“Parrondo’s paradox, a paradox in game theory, has been described as: A combination of losing strategies becomes a winning strategy. It is named after its creator, Juan Parrondo, who discovered the paradox in 1996.” – Wikipedia.org
Simply put, with certain (not all) combinations, you may create an overall winning strategy by alternating between different losing scenarios in the long run. Here’s the game system I came up with (simpler than the original, I believe):
Let’s imagine a spinning wheel like below, divided into eight equal parts with 6 parts red…
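The spinning-wheel version above is this blog’s own construction. For reference, here is a sketch of the canonical coin-flip formulation of Parrondo’s two games, with the probabilities from the standard presentation (this is my illustration, not the wheel game described above). Each game is slightly losing on its own, yet mixing them wins, which we can verify exactly by finding the long-run distribution of the player’s capital modulo 3:

```python
EPS = 0.005  # the small handicap that makes each game losing on its own

def game_a(capital_mod_3):
    """Game A: a nearly fair coin, tilted slightly against the player."""
    return 0.5 - EPS

def game_b(capital_mod_3):
    """Game B: a bad coin when capital is a multiple of 3, a good coin otherwise."""
    return (0.1 - EPS) if capital_mod_3 == 0 else (0.75 - EPS)

def mixed(capital_mod_3):
    """Flip a fair coin each turn to choose between games A and B."""
    return 0.5 * game_a(capital_mod_3) + 0.5 * game_b(capital_mod_3)

def long_run_win_rate(win_prob):
    """Evolve the distribution of capital mod 3 to its steady state, then
    average the per-state win probabilities over that distribution."""
    dist = [1 / 3, 1 / 3, 1 / 3]
    for _ in range(200):  # plenty of iterations for a 3-state chain to settle
        new = [0.0, 0.0, 0.0]
        for state in range(3):
            p = win_prob(state)
            new[(state + 1) % 3] += dist[state] * p        # win: capital + 1
            new[(state - 1) % 3] += dist[state] * (1 - p)  # lose: capital - 1
        dist = new
    return sum(dist[s] * win_prob(s) for s in range(3))

print(long_run_win_rate(game_a))  # just under 0.5: A loses
print(long_run_win_rate(game_b))  # just under 0.5: B loses too
print(long_run_win_rate(mixed))   # above 0.5: the combination wins
```

The trick is that mixing in game A changes how often the player’s capital sits at a multiple of 3, so game B’s good coin gets used more often than its bad one.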