My 2019 Mathematics A To Z: Martingales


Today’s A To Z term was nominated again by @aajohannas. The other compelling nomination was from Vayuputrii, for the Mittag-Leffler function. I was tempted. But I realized I could not think of a clear way to describe why the function was interesting, or even where it comes from, without piling up a heap of technical terms. There’s no avoiding technical terms in writing about mathematics, but there’s only so much I want to put in at once either. It also makes me realize I don’t understand the Mittag-Leffler function, but it is, after all, something I haven’t worked much with.

The Mittag-Leffler function looks like it’s one of those things named for several contributors, like Runge-Kutta Integration or Cauchy-Kovalevskaya Theorem or something. Not so here; this was one person, Gösta Mittag-Leffler. His name’s all over the theory of functions. And he was one of the people helping Sofia Kovalevskaya, whom you know from every list of pioneering women in mathematics, secure her professorship.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it, completes the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Martingales.

A martingale is how mathematicians prove you can’t get rich gambling.

Well, that exaggerates. Some people will be lucky, of course. But there’s no strategy that works. The only strategy that works is to rig the game. You can do this openly, by setting rules that give you a slight edge. You usually have to be the house to do this. Or you can do it covertly, using tricks like card-counting (in blackjack) or weighted dice. But a fair game? Meaning one not biased towards or against any player? There’s no strategy to guarantee winning that.

We can make this more technical. Martingales arise from the world of stochastic processes. A stochastic process is an indexed set of random variables. A random variable is some variable with a value that depends on the result of some phenomenon. A tossed coin. Rolled dice. Number of people crossing a particular walkway over a day. Engine temperature. Value of a stock being traded. Whatever. We can’t forecast what the next value will be. But we know the distribution: which values are more likely, which ones are unlikely, and which ones impossible.

The field grew out of studying real-world phenomena. Things we could sample and do statistics on. So it’s hard to think of an index that isn’t time, or some proxy for time like “rolls of the dice”. Stochastic processes turn up all over the place. A lot of what we want to know is impossible, or at least impractical, to exactly forecast. Think of the work needed to forecast how many people will cross this particular walk four days from now. But it’s practical to describe what are more and less likely outcomes. What the average number of walk-crossers will be. What the most likely number will be. Whether to expect tomorrow to be a busier or a slower day.

And this is what the martingale is for. Start with a sequence of your random variables. How many people have crossed that street each day since you started studying. What is the expectation value, the best guess, for the next result? Your best guess for how many will cross tomorrow? Keeping in mind your knowledge of all these past values. That’s an important piece. It’s not a martingale if the history of results isn’t a factor.

Every probability question has to deal with knowledge. Sometimes it’s easy. The probability of a coin coming up tails next toss? That’s one-half. The probability of a coin coming up tails next toss, given that it came up tails last time? That’s still one-half. The probability of a coin coming up tails next toss, given that it came up tails the last 40 tosses? That’s … starting to make you wonder if this is a fair coin. I’d bet tails, but I’d also ask to examine both sides, for a start.

So a martingale is a stochastic process where we can make forecasts about the future. Particularly, the expectation value. The expectation value is the sum of the products of every possible value and how probable they are. In a martingale, the expected value for all time to come is just the current value. So if whatever it was you’re measuring was, say, 40 this time? That’s your expectation for the whole future. Specific values might be above 40, or below 40, but on average, 40 is it.
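
To put that in symbols (my notation, not anything from the essay): call the value at index $n$ something like $X_n$. The expectation value of a random variable is the probability-weighted sum of its possible values, and the martingale condition says the best forecast of any later value, given the whole history so far, is just the current value.

```latex
\mathbb{E}[X] = \sum_i x_i\, p_i,
\qquad
\mathbb{E}\left[ X_m \mid X_1, X_2, \ldots, X_n \right] = X_n
\quad \text{for every } m \ge n .
```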

Put it that way and you’d think, well, how often does that ever happen? Maybe some freak process will give you that, but most stuff?

Well, here’s one. The random walk. Set a value. At each step, it can increase or decrease by some fixed value. It’s as likely to increase as to decrease. This is a martingale. And it turns out a lot of stuff is random walks. Or can be processed into random walks. Even if the original walk is unbalanced — say it’s more likely to increase than decrease. Then we can do a transformation, and find a new random variable based on the original. Then that one is as likely to increase as decrease. That one is a martingale.
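
If you’d like to see that claim in action, here is a minimal sketch, with all the particular numbers (a start of 40, two hundred steps, twenty thousand runs) being arbitrary choices of mine rather than anything from the essay.

```python
import random

def random_walk(start, steps, step_size=1):
    """A symmetric random walk: each step is equally likely up or down."""
    value = start
    for _ in range(steps):
        value += random.choice((-step_size, step_size))
    return value

start, steps, runs = 40, 200, 20_000
average = sum(random_walk(start, steps) for _ in range(runs)) / runs
print(average)   # hovers near 40, the starting value, however many steps you take
```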

It’s not just random walks. Poisson processes are things where the chance of something happening is tiny, but it has lots of chances to happen. So this measures things like how many car accidents happen on this stretch of road each week. Or where a couple plants will grow together into a forest, as opposed to lone trees. How often a store will have too many customers for the cashiers on hand. These processes by themselves aren’t often martingales. But we can use them to make a new stochastic process, and that one is a martingale.
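
One standard example of that construction is the compensated Poisson process. Sketched here with a made-up rate of three events per unit time: a Poisson count by itself drifts upward, but subtracting off the expected count gives something whose expected value stays at zero.

```python
import random

def poisson_count(rate, horizon):
    """Count events of a Poisson process up to time `horizon`,
    by adding up exponential waiting times between events."""
    elapsed, count = 0.0, 0
    while True:
        elapsed += random.expovariate(rate)
        if elapsed > horizon:
            return count
        count += 1

lam, t, trials = 3.0, 5.0, 50_000
compensated = [poisson_count(lam, t) - lam * t for _ in range(trials)]
print(sum(compensated) / trials)   # hovers near 0
```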

Where this all comes to gambling is in stopping times. A stopping time is a random variable based on the stochastic process you started with. Its value is an index, a time to stop, and the decision to stop at any particular index can depend only on what the process has done up through that index, such as whether it has reached some particular value yet. The language evokes a gambler’s decision: when do you stop? There are two obvious stopping times for any game. One is to stop when you’ve won enough money. The other is to stop when you’ve lost your whole stake.

So there is something interesting about a martingale that has bounds. It will almost certainly hit at least one of those bounds, in a finite time. (“Almost certainly” has a technical meaning. It’s the same thing I mean when I say if you flip a fair coin infinitely many times then “almost certainly” it’ll come up tails at least once. Like, it’s not impossible that it doesn’t. It just won’t happen.) And for the gambler? The boundary of “runs out of money” is a lot closer than “makes the house run out of money”.
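
A rough sketch of how lopsided that is, with a hypothetical gambler whose stake is 5 and who hopes to reach 100 before going broke (numbers mine, chosen only for illustration). The optional-stopping argument for a fair game says the chance of reaching the goal is the stake divided by the goal, and the simulation agrees.

```python
import random

def reaches_goal(stake, goal):
    """Play a fair one-unit-per-bet game until the bankroll hits 0 or `goal`."""
    bankroll = stake
    while 0 < bankroll < goal:
        bankroll += random.choice((-1, 1))
    return bankroll == goal

stake, goal, trials = 5, 100, 10_000
wins = sum(reaches_goal(stake, goal) for _ in range(trials))
print(wins / trials)   # hovers near stake / goal = 0.05
```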

Oh, if you just want a little payoff, that’s fine. If you’re happy to walk away from the table with a one percent profit? You can probably do that. You’re closer to that boundary than to the runs-out-of-money one. A ten percent profit? Maybe so. Making an unlimited amount of money, like you’d want to live on your gambling winnings? No, that just doesn’t happen.

This gets controversial when we turn from gambling to the stock market. Or a lot of financial mathematics. Look at the value of a stock over time. I write “stock” for my convenience. It can be anything with a price that’s constantly open for renegotiation. Stocks, bonds, exchange funds, used cars, fish at the market, anything. The price over time looks like it’s random, at least hour-by-hour. So how can you reliably make money if the fluctuations of the price of a stock are random?

Well, if I knew, I’d have smaller student loans outstanding. But martingales seem like they should offer some guidance. Much of modern finance builds on not dealing with a stock price varying. Instead, buy the right to buy the stock at a set price. Or buy the right to sell the stock at a set price. This lets you pay to secure a certain profit, or a worst-possible loss, in case the price reaches some level. And now you see the martingale. Is it likely that the stock will reach a certain price within this set time? How likely? This can, in principle, guide you to a fair price for this right-to-buy.
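
To make that concrete in the crudest possible way, here is a sketch that treats the price as a fair one-tick-at-a-time random walk and just estimates the chance it touches a target level within the allowed time. Every number here is invented for illustration, and real pricing models are far more elaborate, but this hitting probability is the quantity the paragraph above is gesturing at.

```python
import random

def hits_level(start, target, steps, tick=1.0):
    """Does a fair random walk starting at `start` reach `target` within `steps` moves?"""
    price = start
    for _ in range(steps):
        price += random.choice((-tick, tick))
        if price >= target:
            return True
    return False

start, target, steps, trials = 100.0, 110.0, 250, 20_000
estimate = sum(hits_level(start, target, steps) for _ in range(trials)) / trials
print(estimate)   # estimated chance the price touches 110 in time
```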

The mathematical reasoning behind that is fine, so far as I understand it. Trouble arises because pricing correctly means having a good understanding of how likely it is prices will reach different levels. Fortunately, there are few things humans are better at than estimating probabilities. Especially the probabilities of complicated situations, with abstract and remote dangers.

So martingales are an interesting corner of mathematics. They apply to purely abstract problems like random walks. Or to good mathematical physics problems like Brownian motion and the diffusion of particles. And they’re lurking behind the scenes of the finance news. Exciting stuff.


Thanks for reading. This and all the other Fall 2019 A To Z posts should be at this link. Yes, I too am amazed to be halfway done; it feels like I’m barely one-fifth of the way done. For Thursday I hope to publish ‘N’. And I am taking nominations for subjects for the letters O through T, at this link.


Checking Back in On That 117-Year-Old Roller Coaster


I apologize to people who want to know the most they can about the comic strips of the past week. I’ve not had time to write about them. Part of what has kept me busy is a visit to Lakemont Park, in Altoona, Pennsylvania. The park has had several bad years, including two years in which it did not open at all. But still standing at the park is the oldest-known roller coaster, Leap The Dips.

My first visit to this park, in 2013, among other things gave me a mathematical question to ask. That is, could any of the many pieces of wood in it be original? How many pieces would you expect?

Two parts of the white-painted-wood roller coaster track. In front is the diagonal lift hill. Behind is a basically horizontal track which has a small dip in the middle.
One of the dips of Leap The Dips. These hills are not large ones. The biggest drop is about nine feet; the coaster is a total of 41 feet high at its greatest. The track goes back and forth in a figure-eight layout several times, and in the middle of each ‘straightaway’ leg is a dip like this.

Problems of this form happen all the time. They turn up whenever there’s something which has a small chance of happening, but many chances to happen. In this case, there’s a small chance that any particular piece of wood will need replacing. But there are a lot of pieces of wood, and they might need replacement at any ride inspection. So there’s an obvious answer to how likely it is any piece of wood would survive a century-plus. And, from that, how much of that wood should be original.
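
The obvious answer runs something like this sketch, where both numbers (the chance any one board gets replaced at an inspection, and how often inspections happen) are pure guesses of mine for illustration.

```python
# Hypothetical figures, not anything reported by the park:
p_replace = 0.01           # chance a given board is swapped out at any one inspection
inspections_per_year = 2   # how often the ride gets inspected
years = 117

survival = (1 - p_replace) ** (inspections_per_year * years)
print(survival)            # expected fraction of boards that are still original
print(survival * 10_000)   # and, out of a hypothetical 10,000 boards, how many
```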

And, since this is a probability question, I found reasons not to believe in this answer. These reasons amount to my doubting that the reality is much like the mathematical abstraction. I even found evidence that my doubts were correct.

Covered station for the roller coaster, with 'LEAP THE DIPS' written in what looks like a hand-painted sign hanging from above. Two roller coaster chairs sit by the station.
The station for the Leap The Dips roller coaster, Lakemont Park, Altoona, Pennsylvania. There are two separate cars visible on the tracks by the station. When I last visited there was only one car on the tracks. The cars have a front and a back seat, and while there is a bar to grab hold of, there are no other restraints, which makes the low-speed ride more exciting.

The sad thing to say about revisiting Lakemont Park — well, one is that the park has lost almost all its amusement park rides. It’s got athletic facilities, and a couple miniature golf courses, but besides two wooden and one kiddie roller coaster, and an antique-cars ride, there’s not much left of its long history as an amusement park. But the other thing is that Leap The Dips was closed when I was able to visit. The ride’s under repairs, and seems to be getting painted too. This is sad, but I hope it implies better things soon.

Reading the Comics, July 26, 2019: Children With Mathematics Edition


Three of the strips I have for this installment feature kids talking about, or around, mathematics. That’s enough for a theme name.

Gary Delainey and Gerry Rasmussen’s Betty for the 23rd is a strip about luck. It’s easy to form the superstitious view that you have a finite amount of luck, or that you have good and bad luck which offset each other. It feels like it. If you haven’t felt like it, then consider that time you got an unexpected $200, hours before your car’s alternator died.

If events are independent, though, that’s just not so. Whether you win $600 in the lottery this week has no effect on whether you win any next week. Similarly whether you’re struck by lightning should have no effect on whether you’re struck again.

Betty: 'We didn't use up our luck winning $600 in the lottery!' Bub: 'You don't think so? Shorty's brother got hit by lightning and lived. The second time, he also lived, but it ruined his truck.' Betty: 'I don't know how to respond to that.' Bub: 'And the third time ... '
Gary Delainey and Gerry Rasmussen’s Betty for the 23rd of July, 2019. I thought this might be a new tag, but, no. Other essays mentioning Betty are at this link.

Except that this assumes independence. Even defines independence. This is obvious when you consider that, having won $600, it’s easier to buy an extra twenty dollars in lottery tickets and that does increase your (tiny) chance of winning again. If you’re struck by lightning, perhaps it’s because you tend to be someplace that’s often struck by lightning. Probability is a subtler topic than everyone acknowledges, even when they remember that it is such a subtle topic.

It sure seems like this strip wants to talk about lottery winners struck by lightning, doesn’t it?

Susan: 'What are you so happy about?' Lemont: 'This morning Lionel and I were had breakfast at Pancake-ville. When it came time to calculate a tip I asked 'What's 20% of $22.22' and it told me. It occurred to me, we're living in the future! We have electric cars, drones, instant knowledge at our fingertips ... it's the future I've dreamt of my entire life!' Susan: 'Sigh ... you always did hate math.' Lemont: 'Only in the FUTURE can a man track down his old math teacher on Facebook and gloat.'
Darrin Bell’s Candorville for the 23rd of July, 2019. Essays inspired by Candorville in some way are here.

Darrin Bell’s Candorville for the 23rd jokes about the uselessness of arithmetic in modern society. I’m a bit surprised at Lemont’s glee in not having to work out tips by hand. The character’s usually a bit of a science nerd. But liking science is different from enjoying doing arithmetic. And bad experiences learning mathematics can sour someone on the subject for life. (Which is true of every subject. Compare the number of people who come out of gym class enjoying physical fitness.)

If you need some Internet Old, read the comments at GoComics, which include people offering dire warnings about what you need in case your machine gives the wrong answer. Which is technically true, but for this application? Getting the wrong answer is not an immediately awful affair. Also a lot of cranky complaining about tipping having risen to 20% just because the United States continues its economic punishment of working peoples.

Woman: 'Oh my gosh, you have twins!' Mathematician: 'Yeah. Please meet my sons.' 'Did you give them rhyming names?' 'No.' 'Alliterative names? Are they named for twins from any books?' 'Lady, I'm a mathematician. I think in clear logical terms. None of this froufrou nonsense for my kids.' 'Okay, okay. So their names are?' 'Benjamin and Benjamax.'
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 25th of July, 2019. Haven’t seen this comic mentioned since two days ago. Essays mentioning some aspect of Saturday Morning Breakfast Cereal should be gathered at this link.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 25th is some wordplay. Mathematicians often need to find minimums of things. Or maximums of things. Being able to do one lets you do the other, as you’d expect. If you didn’t expect, think about it a moment, and then you expect it. So min and max are often grouped together.

Thatababy drawing on a Scalene Triangle, scales and eyes added to one. An Octagon: octopus legs added to an octagon. Rhombus: rhombus with wheels, windows, and a driver added to it, and a passenger hailing it down.
Paul Trap’s Thatababy for the 26th of July, 2019. Essays exploring some topic mentioned by Thatababy are here.

Paul Trap’s Thatababy for the 26th is circling around wordplay, turning some common shape names into pictures. This strip might be aimed at mathematics teachers’ doors. I’d certainly accept these as jokes that help someone learn their shapes.


And you know what? I hope to have another Reading the Comics post around Thursday at this link. And that’s not even thinking what I might do for this coming Sunday.

On The Goldfish Situation


If you’ve been following me on Twitter you’ve seen reports of the Great Migration. This is the pompous name I give to the process of bringing the goldfish who were in tanks in the basement for the winter back outside again. This to let them enjoy the benefits of the summer, like, not having me poking around testing their water every day. (We had a winter with a lot of water quality problems. I’m probably over-testing.)

My reports about moving them back — by setting in a net that could trap some fish and moving them out — included reports of how many remained in each tank. And many people told me how such updates as “Twelve goldfish are in the left tank, three in the right, and fifteen have been brought outside” sound like the start of a story problem. Maybe it does. I don’t have a particular story problem built on this. I’m happy to take nominations for such.

But I did have some mathematics essays based on the problem of moving goldfish to the pond outdoors and to the warm water tank indoors:

  • How To Count Fish, about how one could estimate a population by sampling it twice. (A small sketch of that idea follows this list.)
  • How To Re-Count Fish, about one of the practical problems in using this to count as few goldfish as we have at our household.
  • How Not To Count Fish, about how this population estimate wouldn’t work because of the peculiarities of goldfish psychology. Honest.
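
Here is the two-sample idea from that first essay in miniature, with all the counts invented for illustration: mark the fish caught the first time, release them, and see what fraction of a second catch carries a mark.

```python
# Hypothetical counts, not our pond's actual numbers:
marked_first = 15       # fish netted, marked, and released on the first pass
caught_second = 12      # fish netted on the second pass
marked_in_second = 4    # how many of the second catch carried a mark

# Lincoln-Petersen estimate: the marked fraction of the second catch should
# roughly match the marked fraction of the whole population.
estimated_population = marked_first * caught_second / marked_in_second
print(estimated_population)   # about 45 fish
```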

That I spend one essay describing how to do a thing, and then two more essays describing why it won’t work, may seem characteristically me. Well, yeah. Mathematics is a great tool. To use a tool safely requires understanding its powers and its limitations. I like thinking about what mathematics can and can’t do.

Reading the Comics, April 18, 2019: Slow But Not Stopped Week Edition


The first, important, thing is that I have not disappeared or done something worse. I just had one of those weeks where enough was happening that something had to give. I could either write up stuff for my mathematics blog, or I could feel guilty about not writing stuff up for my mathematics blog. Since I didn’t have time to do both, I went with feeling guilty about not writing, instead. I’m hoping this week will give me more writing time, but I am fooling only myself.

Second is that Comics Kingdom has, for all my complaining, gotten less bad in the redesign. Mostly in that the whole comics page loads at once, now, instead of needing me to click to “load more comics” every six strips. Good. The strips still appear in weird random orders, especially strips like Prince Valiant that only run on Sundays, but still. I can take seeing a vintage Boner’s Ark Sunday strip six unnecessary times. The strips are still smaller than they used to be, and they’re not using the decent, three-row format that they used to. And the archives don’t let you look at a week’s worth in one page. But it’s less bad, and isn’t that all we can ever hope for out of the Internet anymore?

And finally, Comic Strip Master Command wanted to make this an easy week for me by not having a lot to write about. It got so light I’ve maybe overcompensated. I’m not sure I have enough to write about here, but, I don’t want to completely vanish either.

Man walking past a street sign for 52 Ludlow Avenue; the 5 falls down and hits him on the head. Woman with him: 'Numbers are hard.'
Dave Whamond’s Reality Check for the 15th of April, 2019. Appearances in these pages of Reality Check should be gathered at this link.

Dave Whamond’s Reality Check for the 15th is … hm. Well, it’s not an anthropomorphic-numerals joke. It is some kind of wordplay, making concrete a common phrase about, and attitude toward, numbers. I could make the fussy difference between numbers and numerals here but I’m not sure anyone has the patience for that.

Man in a cloudscape: 'I made it to heaven!' Angel: 'You sure did! Now you get to do the best stuff! You can design new systems of mathematics! You can attempt to create self-consistent physics systems. Beset of all, try to create a maximally complex reality using the simplest possible constructions!' Man: 'But that sounds terrible.' Angel: 'QUIET! He hears EVERYTHING.'
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 17th of April, 2019. I am surprised that this is the first time this strip has drawn a mention this month. Well, this and other Saturday Morning Breakfast Cereal posts are at this link.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 17th touches around mathematics without, I admit, necessarily saying anything specific. The angel(?) welcoming the man to heaven mentions creating new systems of mathematics as some fit job for the heavenly host. The discussion of creating self-consistent physics systems seems mathematical in nature too. I’m not sure whether saying one could “attempt” to create self-consistent physics is meant to imply that our universe’s physics are not self-consistent. To create a “maximally complex reality using the simplest possible constructions” seems like a mathematical challenge as well. There are important fields of mathematics built on optimizing, trying to create the most extreme of one thing subject to some constraints or other.

I think the strip’s premise is the old, partially a joke, concept that God is a mathematician. This would explain why the angel(?) seems to rate doing mathematics or mathematics-related projects as so important. But even then … well, consider. There’s nothing about designing new systems of mathematics that ordinary mortals can’t do. Creating new physics or new realities is beyond us, certainly, but designing the rules for such seems possible. I think I understood this comic better when I had thought about it less. Maybe including it in this column has only made trouble for me.

First chicken: 'What do you want for your birthday?' Second chicken: 'I want everybody to ignore my birthday!' First: 'But if I ignore your birthday I'll be giving the perfect birthday gift, which means I'll be celebrating your birthday, which means I won't be ignoring it!!! AAAAUGH! BIRTHDAY PARADOX!!'
Doug Savage’s Savage Chickens for the 17th of April, 2019. Essays inspired by something from Savage Chickens should be at this link.

Doug Savage’s Savage Chickens for the 17th amuses me by making a strip out of a logic paradox. It’s not quite your “this statement is a lie” paradox, but it feels close to that, to me. To have the first chicken call it “Birthday Paradox” also teases a familiar probability problem. It’s not a true paradox. It merely surprises people who haven’t encountered the problem before. This would be the question of how many people you need to have in a group before there’s a 50 percent (75 percent, 99 percent, whatever you like) chance of at least one pair sharing a birthday.

And I notice on Wikipedia a neat variation of this birthday problem. This generalization considers splitting people into two distinct groups, and how many people you need in each group to have a set chance of a pair, one person from each group, sharing a birthday. Apparently a 32-person group of 16 women and 16 men, and likewise a 49-person group of 43 women and six men, each have a 50% chance of some woman-man pair sharing a birthday. Neat.

Man speaking to a teacher: 'There are two angry parents outside. One's upset that you're teaching multiplication ... the other is upset you're teaching division.' Outside the door are an angry bunny and an angry amoeba.
Mark Parisi’s Off The Mark for the 18th of April, 2019. And essays inspired by Off The Mark should appear at this link.

Mark Parisi’s Off The Mark for the 18th sports a bit of wordplay. It’s built on how multiplication and division also have meanings in biology. … If I’m not mis-reading my dictionary, “multiply” meant any increase in number first, and the arithmetic operation we now call multiplication afterwards. Division, similarly, meant to separate into parts before it meant the mathematical operation as well. So it might be fairer to say that multiplication and division are words that picked up mathematical meaning.


And if you thought this week’s pickings had slender mathematical content? Jef Mallett’s Frazz, for the 19th, just mentioned mathematics homework. Well, there were a couple of quite slight jokes the previous week too, that I never mentioned. Jenny Campbell’s Flo and Friends for the 8th did a Roman numerals joke. The rerun of Richard Thompson’s Richard’s Poor Almanac for the 11th had the Platonic Fir Christmas tree, rendered as a geometric figure. I’ve discussed the connotations of that before.

And there we are. I hope to have some further writing this coming week. But if all else fails my next Reading the Comics essay, like all of them, should be at this link.

What Dates Are Most Likely For Easter?


I had a slight nagging feeling about this. A couple years back I calculated the most and least probable dates for Easter, on the Gregorian calendar, using the current computus. That essay’s here, with results about how often we can expect Easter and when. It also holds some thoughts about whether the probable dates of Easter are even a thing that can be meaningfully calculated. And it turns out, uncharacteristically, that I forgot to do a follow-up calculating the dates of Easter on the Julian calendar. Maybe I’ll get to it yet.

Reading the Comics, March 26, 2019: March 26, 2019 Edition


And we had another of those peculiar days where a lot of strips are on-topic enough for me to talk about.

Eric the Circle, this one by Kyle, for the 26th has a bit of mathematical physics in it. This is the kind of diagram you’ll see all the time, at least if you do the mathematics that tells you where things will be and when. The particular example is an easy problem, a thing rolling down an inclined plane. But the work done for it applies to more complicated problems. The question it’s for is, “what happens when this thing slides down the plane?” And that depends on the forces at work. There’s gravity, certainly. If there were something else it’d be labelled. Gravity’s represented with that arrow pointing straight down. That gives us the direction. The label (Eric)(g) gives us how strong this force is.

Caption: Eric on an inclined plane. It shows a circle on a right triangle, with the incline of the angle labelled 'x'. The force of gravity is pointing vertically down, labelled (Eric)(g). The force parallel to the incline is labelled (Eric)(g)sin(x); the force perpendicular to the incline is labelled (Eric)(g)cos(x).
Eric the Circle, by Kyle, for the 26th of March, 2019. Essays inspired at all by Eric the Circle are at this link.

Where the diagram gets interesting, and useful, are those dashed lines ending in arrows. One of those lines is, or at least means to be, parallel to the incline. The other is perpendicular to it. These both reflect gravity. We can represent the force of gravity as a vector. That means, we can represent the force of gravity as the sum of vectors. This is like how we can write “8” or we can write “3 + 5”, depending on what’s more useful for what we’re doing. (For example, if you wanted to work out “67 + 8”, you might be better off doing “67 + 3 + 5”.) The vector parallel to the plane and the one perpendicular to the plane add up to the original gravity vector.

The force that’s parallel to the plane is the only force that’ll actually accelerate Eric. The force perpendicular to the plane just … keeps it snug against the plane. (Well, it can produce friction. We try not to deal with that in introductory physics because it is so hard. At most we might look at whether there’s enough friction to keep Eric from starting to slide downhill.) The magnitude of the force parallel to the plane, and perpendicular to the plane, are easy enough to work out. These two forces and the original gravity can be put together into a little right triangle. It’s the same shape but different size to the right triangle made by the inclined plane plus a horizontal and a vertical axis. So that’s how the diagram knows the parallel force is the original gravity times the sine of x. And that the perpendicular force is the original gravity times the cosine of x.
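
A quick numerical check of that decomposition, with Eric’s mass and the incline angle being arbitrary numbers of mine rather than anything from the strip:

```python
import math

g = 9.81                 # m/s^2
eric_mass = 2.0          # kg, a made-up figure
x = math.radians(30)     # incline angle, also made up

weight = eric_mass * g
parallel = weight * math.sin(x)        # the component that accelerates Eric downhill
perpendicular = weight * math.cos(x)   # the component pressing Eric against the plane

# The two components recombine into the original force, as the right triangle promises.
print(math.hypot(parallel, perpendicular), weight)
```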

The perpendicular force is often called the “normal” force. This because mathematical physicists noticed we had only 2,038 other, unrelated, things called “normal”.

Rick Detorie’s One Big Happy for the 26th sees Ruthie demand to know who this Venn person was. Fair question. Mathematics often gets presented as these things that just are. That someone first thought about these things gets forgotten.

Ruthie, on the phone: 'Homework hot line? On the Same/Different page of our workbook there are two circles like this. They're called Venn diagrams and I wanna know who this Venn person is. And if I put two squares together, can we call it the Ruthie diagram, and how much money do I get for that? ... Huh? Well, I'll wait here 'til you find somebody who DOES know!'
Rick Detorie’s One Big Happy for the 26th of March, 2019. This is a rerun from … 2007, I want to say? There are two separate feeds, one of current and one of several-years-old, strips on the web. Essays including One Big Happy, current or years-old reruns, should be at this link.

John Venn, who lived from 1834 to 1923 — he died the 4th of April, it happens — was an English mathematician and philosopher and logician and (Anglican) priest. This is not a rare combination of professions. From 1862 he was a lecturer in Moral Science at Cambridge. This included work in logic, yes. But he also worked on probability questions. Wikipedia credits his 1866 Logic Of Chance with advancing the frequentist interpretation of probability. This is one of the major schools of thought about what the “probability of an event” is. It’s the one where you list all the things that could possibly happen, and consider how many of those are the thing you’re interested in. So, when you do a problem like “what’s the probability of rolling two six-sided dice and getting a total of four”? You’re doing a frequentist probability problem.

Venn Diagrams he presented to the world around 1880. These show the relationships between different sets. And the relationships of mathematical logic problems they represent. Venn, if my sources aren’t fibbing, didn’t take these diagrams to be a new invention of his own. He wrote of them as “Euler diagrams”. Venn diagrams, properly, need to show all the possible intersections of all the sets in play. You just mark in some way the intersections that happen to have nothing in them. Euler diagrams don’t require this overlapping. The name “Venn diagram” got attached to these pictures in the early 20th century. Euler here is Leonhard Euler, who created every symbol and notation mathematicians use for everything, and who has a different “Euler’s Theorem” that’s foundational to every field of mathematics, including the ones we don’t yet know exist. I exaggerate by 0.04 percent here.

Although we always start Venn diagrams off with circles, they don’t have to be. Circles are good shapes if you have two or three sets. You can’t represent all the possible intersections of four sets with four circles, though. This is when you start seeing weirder shapes. Wikipedia offers some pictures of Venn diagrams for four, five, and six sets. Meanwhile Mathworld has illustrations for seven- and eleven-set Venn diagrams. At this point, the diagrams are more for aesthetic value than to clarify anything, though. You could draw them with squares. Some people already do. Euler diagrams, particularly, are often squares, sometimes with rounded corners.

Venn had his other projects, too. His biography at St Andrews writes of his composing The Biographical History of Gonville and Caius College (Cambridge). And then he had another history of the whole Cambridge University. It also mentions his skills in building machines, though only cites one, a device for bowling cricket balls. The St Andrews biography says that in 1909 “Venn’s machine clean bowled one of [the Australian Cricket Team’s] top stars four times”. I do not know precisely what it means but I infer it to be a pretty good showing for the machine. His Wikipedia biography calls him a “passionate gardener”. Apparently the Cambridgeshire Horticultural Society awarded him prizes for his roses in July 1885 and for white carrots in September that year. And that he was a supporter of votes for women.

An illustration of an abacus. Caption: 'No matter what the category, you'll usually find me in the upper 99%.'
Ashleigh Brilliant’s Pot-Shots for the 26th of March, 2019. The strip originally appeared sometime in 1979. Essays discussing anything from Pot-Shots should appear at this link.

Ashleigh Brilliant’s Pot-Shots for the 26th makes a cute and true claim about percentiles. That a person will usually be in the upper 99% of whatever’s being measured? Hard to dispute. But, measure enough things and eventually you’ll fall out of at least one of them. How many things? This is easy to calculate if we look at different things that are independent of each other. In that case we could look at 69 things before we’d expect a 50% chance of at least one not being in the upper 99%.
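
That 69 comes from assuming each independent thing leaves you in its upper 99% with probability 0.99, then asking when the chance of a clean sweep drops below one-half; a two-line check:

```python
p, n = 0.99, 1
while p ** n > 0.5:
    n += 1
print(n, p ** n)   # 69 measurements, with the clean-sweep chance just under 0.5
```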

It’s getting that independence that’s hard. There’s often links between things. For example, a person’s height does not tell us much about their weight. But it does tell us something. A person six foot, ten inches tall is almost certainly not also 35 pounds, even though a person could be that size or could be that weight. A person’s scores on a reading comprehension test and their income? Those sound like they should be separate things, but test-taking results and wealth are certainly tied together. Age and income? Most of us have a bigger income at 46 than at 6. This is part of what makes studying populations so hard.

Snow, cat, to a kitten: '1 + 1 = 2 ... unless it's spring.' (Looking at a bird's nest with five eggs.) 'Then 1 + 1 = 5.'
T Shepherd’s Snow Sez for the 26th of March, 2019. Essays inspired at all by Snow Sez should be gathered at this link. They will be, anyway; this is a new tag.

T Shepherd’s Snow Sez for the 26th is finally a strip I can talk about briefly, for a change. Snow does a bit of arithmetic wordplay, toying with what an expression like “1 + 1” might represent.


There were a lot of mathematically-themed comic strips last week. There’ll be another essay soon, and it should appear at this link. And then there’s always Sunday, as long as I stay ahead of deadline. I am never ahead of deadline.

Reading the Comics, March 23, 2019: March 23, 2019 Edition


I didn’t cover quite all of last week’s mathematics comics with Sunday’s essay. There were a handful that all ran on Saturday. And, as has become tradition, I’ll also list a couple that didn’t rate a couple paragraphs.

Rick Kirkman and Jerry Scott’s Baby Blues for the 23rd has a neat variation on story problems. Zoe’s given the assignment to make her own. I don’t remember getting this as homework, in elementary school, but it’s hard to see why I wouldn’t. It’s a great exercise: not just setting up an arithmetic problem to solve, but giving a reason one would want to solve it.

Composing problems is a challenge. It’s a skill, and you might be surprised that when I was in grad school we didn’t get much training in it. We were just taken to be naturally aware of how to identify a skill one wanted to test, and to design a question that would mostly test that skill, and to write it out in a question that challenged students to identify what they were to do and how to do it, and why they might want to do it. But as a grad student I wasn’t being prepared to teach elementary school students, just undergraduates.

Dad: 'Homework?' Zoe: 'Yeah, math. Our teacher is having us write our own story problem.' Dad: 'What have you got?' Zoe: 'If Hammie picks his nose at the rate of five boogers an hour ... ' Hammie: 'Ooh! Put me on a jet ski!'
Rick Kirkman and Jerry Scott’s Baby Blues for the 23rd of March, 2019. Essays inspired by some Baby Blues strip appear at this link.

Mastroianni and Hart’s B.C. for the 23rd is a joke in the funny-definition category, this for “chaos theory”. Chaos theory formed as a mathematical field in the 60s and 70s, and it got popular alongside the fractal boom in the 80s. The field can be traced back to the 1890s, though, which is astounding. There was no way in the 1890s to do the millions of calculations needed to visualize any good chaos-theory problem. They had to develop results entirely by thinking.

Wiley’s definition is fine enough about certain systems being unpredictable. Wiley calls them “advanced”, although they don’t need to be that advanced. A compound pendulum — a solid rod that swings on the end of another swinging rod — can be chaotic. You can call that “advanced” if you want but then people are going to ask if you’ve had your mind blown by this post-singularity invention, the “screw”.

Cute Chick, reading Wiley's Dictionary: 'Chaos Theory. Mathematical principle that advanced systems are wholly unpredictable due to the introduction of random tweets.'
Mastroianni and Hart’s B.C. for the 23rd of March, 2019. Appearances here inspired by B.C., current syndication or 1960s reprints on GoComics, are at this link. Yeah, the character here is named ‘Cute Chick’ because that was funny when the comic started in 1958 and it can’t be updated for some reason?

What makes for chaos is not randomness. Anyone knows the random is unpredictable in detail. That’s no insight. What’s exciting is when something’s unpredictable but deterministic. Here it’s useful to think of continental divides. These are the imaginary curves which mark the difference in where water runs. Pour a cup of water on one side of the line, and if it doesn’t evaporate, it eventually flows to the Pacific Ocean. Pour the cup of water on the other side, it eventually flows to the Atlantic Ocean. These divides are often wriggly things. Water may mostly flow downhill, but it has to go around a lot of hills.

So pour the water on that line. Where does it go? There’s no unpredictability in it. The water on one side of the line goes to one ocean, the water on the other side, to the other ocean. But where is the boundary? And that can be so wriggly, so crumpled up on itself, so twisted, that there’s no meaningfully saying. There’s just this zone where the Pacific Basin and the Atlantic Basin merge into one another. Any drop of water, however tiny, dropped in this zone lands on both sides. And that is chaos.

Neatly for my purposes there’s even a mountain that’s a great example of this boundary. Triple Divide Peak, in Montana, rests on the divides between the Atlantic and the Pacific basins, and also on the divide between the Atlantic and the Arctic oceans. (If one interprets the Hudson Bay as connecting to the Arctic rather than the Atlantic Ocean, anyway. If one takes Hudson Bay to be on the Atlantic Ocean, then Snow Dome, Alberta/British Columbia, is the triple point.) There’s a spot on this mountain (or the other one) where a spilled cup of water could go to any of three oceans.

There's at least a 99.9 percent chance that in a group of 70 people at least two will share a birthday. The Pentagon had to ban staff from playing Pokemon Go in the building. Picasso created more than 13,500 paintings and designs, 10,000 prints and engravings, 34,000 book illustrations, and 300 sculptures and ceramics --- making him one of the world's most prolific artists.
John Graziano’s Ripley’s Believe It Or Not for the 23rd of March, 2019. The various pieces of mathematics trivia featured in Ripley’s Believe It Or Not get shown off at this link. I still think it’s weird to write Graziano’s Ripley’s. Anyway, with 57,800 listed pieces of art here Picasso is only credited as “one of” the world’s most prolific artists? Who’s out there with 57,802 pieces?

John Graziano’s Ripley’s Believe It Or Not for the 23rd mentions one of those beloved bits of mathematics trivia, the birthday problem. That’s finding the probability that no two people in a group of some particular size will share a birthday. Or, equivalently, the probability that at least two people share some birthday. That’s not a specific day, mind you, just that some two people share a birthday. The version that usually draws attention is the relatively low number of people needed to get a 50% chance there’s some birthday pair. I haven’t seen the probability of 70 people having at least one birthday pair before. 99.9 percent seems plausible enough.

The birthday problem usually gets calculated something like this: Grant that one person has a birthday. That’s one day out of either 365 or 366, depending on whether we consider leap days. Consider a second person. There are 364 out of 365 chances that this person’s birthday is not the same as the first person’s. (Or 365 out of 366 chances. Doesn’t make a real difference.) Consider a third person. There are 363 out of 365 chances that this person’s birthday is going to be neither the first nor the second person’s. So the chance that all three have different birthdays is $\frac{364}{365} \cdot \frac{363}{365}$. Consider the fourth person. That person has 362 out of 365 chances to have a birthday none of the first three have claimed. So the chance that all four have different birthdays is $\frac{364}{365} \cdot \frac{363}{365} \cdot \frac{362}{365}$. And so on. The chance that at least two people share a birthday is 1 minus the chance that no two people share a birthday.
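
That product is easy to run out by machine, which also lets us check the strip’s 70-person figure:

```python
def chance_of_shared_birthday(people, days=365):
    """Follow the counting argument above: multiply the chances that each
    new person misses every birthday already taken, then subtract from 1."""
    all_different = 1.0
    for k in range(people):
        all_different *= (days - k) / days
    return 1 - all_different

print(chance_of_shared_birthday(23))   # a little over one-half
print(chance_of_shared_birthday(70))   # about 0.9992, the strip's 99.9 percent
```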

As always happens there are some things being assumed here. Whether these probability calculations are right depends on those assumptions. The first assumption being made is independence: that no one person’s birthday affects when another person’s is likely to be. Obvious, you say? What if we have twins in the room? What if we’re talking about the birthday problem at a convention of twins and triplets? Or people who enjoyed the minor renown of being their city’s First Babies of the Year? (If you ever don’t like the result of a probability question, ask about the independence of events. Mathematicians like to assume independence, because it makes a lot of work easier. But assuming isn’t the same thing as having it.)

The second assumption is that birthdates are uniformly distributed. That is, that a person picked from a room is no more likely to be born the 13th of February than they are the 24th of September. And that is not quite so. September births are (in the United States) slightly more likely than other months, for example, which suggests certain activities going on around New Year’s. Across all months (again in the United States) birthdates of the 13th are slightly less likely than other days of the month. I imagine this has to be accounted for by people who are able to select a due date by inducing delivery. (Again if you need to attack a probability question you don’t like, ask about the uniformity of whatever random thing is in place. Mathematicians like to assume uniform randomness, because it makes a lot of work easier. But assuming it isn’t the same as proving it.)

Do these differences mess up the birthday problem results? Probably not that much. We are talking about slight variations from uniform distribution. But I’ll be watching Ripley’s to see if it says anything about births being more common in September, or less common on 13ths.


And now the comics I didn’t find worth discussing. They’re all reruns, it happens. Morrie Turner’s Wee Pals rerun for the 20th just mentions mathematics class. That could be any class that has tests coming up, though. Percy Crosby’s Skippy for the 21st is not quite the anthropomorphic numerals joke for the week. It’s getting around that territory, though, as Skippy claims to have the manifestation of a zero. Bill Rechin’s Crock for the 22nd is a “pick any number” joke. I discussed as much as I could think of about this when it last appeared, in May of 2018. Also I’m surprised that Crock is rerunning strips that quickly now. It has, in principle, decades of strips to draw from.


And that finishes my mathematical comics review for last week. I’ll start posting essays about next week’s comics here, most likely on Sunday, when I’m ready.

Reading the Comics, March 13, 2019: Ziggy Rerun Scandal Edition


I do not know that the Ziggy printed here is a rerun. I don’t seem to have mentioned it in previous Reading the Comics posts, but that isn’t definite. How much mathematical content a comic strip needs to rate a mention depends on many things, and a strip that seems too slight one week might inspire me another. I’ll explain why I’ve started to get suspicious of the quite humanoid figure.

Tom II Wilson’s Ziggy for the 12th is framed around weather forecasts. It’s the probability question people encounter most often, unless they’re trying to outsmart the contestants on Let’s Make A Deal. (And many games on The Price Is Right, too.) Many people have complained about not knowing the meaning of a “50% chance of rain” for a day. If I understand it rightly, it means, when conditions have been like this in the recorded past, it’s rained about 50% of the time. I’m open to correction from meteorologists and it just occurred to me I know one. Mm.

Few people ask about the probability a forecast is correct. In some ways it’s an unanswerable question. To say there is a one-in-six chance a fairly thrown die will turn up a ‘1’ is not wrong just because it’s rolled a ‘1’ eight times out of the last ten. But it does seem like a forecast such as this should include a sense of confidence, how sure the forecaster is that the current weather is all that much like earlier times.

Weather forecaster on the TV Ziggy watches: 'Tomorrow's weather, there's a 50% chance of rain, and a 50% chance I'm even right about the 50%!!'
Tom II Wilson’s Ziggy for the 12th of March, 2019. When I do find a mathematical context to discuss Ziggy the results should appear at this link. Speculating about the comic’s rerun schedule isn’t really my business.

I’m not sure how much of the joke is meant to be the repetition of “50% chance”. The joke might be meant to say that if he’s got a 50% chance of being wrong, then, isn’t the 50% chance of rain “correctly” a 50% chance of not-rain … which is the same chance of rain? The logic doesn’t hold up, if you pay attention, but it sounds like it should make sense, and having the “wrong” version of something be the same as the original is a valid comic construction.

So now for the promised Ziggy rerun scandal. To the best of my knowledge Ziggy is presented as being in new run. It’s done by the son of the comic strip’s creator, but that’s common enough for long-running comic strips. This Monday, though, ran a Ziggy-at-the-psychiatrist joke that was, apart from coloring, exactly the comic run the 2nd of March, barely two weeks before. (Compare the scribbles in the psychiatrist’s diploma.) It wouldn’t be that weird if a comic were accidentally repeated; production mistakes happen, after all. It’s slightly weird that the daily, black-and-white, original got colored in two different ways, but I can imagine this happening by accident.

Still, that got me primed to look for Ziggy repeats. I couldn’t find this one having an earlier appearance. But I did find that the 9th of January this year was a reprint of the Ziggy from the 11th of January, 2017. I wrote about both appearances, without noticing they were reruns. Here’s the 2017 essay, and over here is the 2019 essay, from before I was very good at remembering what the year was. Mercifully I didn’t say anything contradictory on the two appearances. I’m more interested in how I said things differently in the two appearances. Anyway this earlier year seems to have been part of a week’s worth of reruns, noticeable by the copyright date. I can’t begrudge a cartoonist their vacation. The psychiatrist strip doesn’t seem to be part of that, though, and its repetition is some as-yet-unexplained event.

Pete: 'Have you seen my ... ' Peggy: 'Top drawer, dresser.' Pete: 'What day is the ... ' Peggy: 'Monday.' Pete: 'Do we have any ... ' Peggy: 'Middle cabinet, kitchen.' Pete: 'What's the square root of 532?' Peggy: '23.06512518.' (In the last panel Peggy looks smugly at the reader.)
Tony Rubino and Gary Markstein’s Daddy’s Home for the 13th of March, 2019. The steadily growing number of essays with a mention of Daddy’s Home are at this link.

Tony Rubino and Gary Markstein’s Daddy’s Home for the 13th has a much more casual and non-controversial bit of mathematics. Pete tosses out a calculate-the-square-root problem as a test of Peggy’s omniscience. One of the commenters points out that the square root of 532 is closer to 23.06512519 than it is to Peggy’s 23.06512518. It suggests the writers found the square root by something that gave plenty of digits. For example, the macOS Calculator program offers me “23.065 125 189 341 592”. But then they chopped off, rather than rounding off, digits when the panel space ran out.
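
If you want to see the chop-versus-round difference yourself:

```python
import math

root = math.sqrt(532)
print(f"{root:.15f}")                              # 23.065125189341592
print(f"{root:.8f}")                               # rounded off: 23.06512519
print(f"{math.floor(root * 10**8) / 10**8:.8f}")   # chopped off: 23.06512518
```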

Teacher: 'Nancy, Esther, I'm making you partners for classwork today.' Nancy, thinking: 'How are we supposed to work together? We're fighting!' Nancy, tearing a page of mathematics problems down the center: 'Here, you take the right side of the equals sign and I'll take the left.'
Olivia Jaimes’s Nancy for the 13th of March, 2019. Essays mentioning Nancy, either current-run or the “classic” vintage reprints, should appear here.

Olivia Jaimes’s Nancy for the 13th has Nancy dividing up mathematics problems along the equals sign. That’s cute and fanciful enough. One could imagine working out expressions on either side of the equals sign in the hopes of getting them to match. That wouldn’t work for these algebra problems, but, that’s something.

This isn’t what Nancy might do, unless she flashed forward to college and became a mathematics or physics major. But one great trick in differential equations is called the separation of variables. Differential equations describe how quantities change. They’re great. They’re hard. A lot of solving differential equations amounts to rewriting them as simpler differential equations.

Separation is a trick usable when there’s two quantities whose variation affect each other. If you can rewrite the differential equation so that one variable only appears on the left side, and the other variable only appears on the right? Then you can split this equation into two simpler equations. Both sides of the equation have to be some fixed number. So you can separate the differential equations of two variables into two differential equations, each with one variable. One with the first variable, one with the other. And, usually, a differential equation of one variable is easier than a differential equation with two variables. So Nancy and Esther could work each half by themselves. But the work would have to be put together at the end, too.
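
A minimal sketch of how that splitting goes, in the setting where most people first meet it (the one-dimensional heat equation; the constant $\lambda$ is the “fixed number” both sides have to equal):

```latex
u(x,t) = X(x)\,T(t), \quad u_t = u_{xx}
\;\Longrightarrow\;
\frac{T'(t)}{T(t)} = \frac{X''(x)}{X(x)} = -\lambda
\;\Longrightarrow\;
T' = -\lambda\, T, \qquad X'' = -\lambda\, X .
```

The left side depends only on $t$ and the right side only on $x$, so the only way they can agree for every $x$ and $t$ is for both to be constant. That is what lets one equation in two variables become two equations in one variable each.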


And for a truly marginal mathematics topic: Lincoln Pierce’s Big Nate: First Class for the 13th, reprinting the 2nd of March, 1994, mentions a mathematics test for Nate’s imminent doom.


And this wraps up the comic strips for the previous week. Come Sunday there should be a fresh new comic post. Yes, Andertoons is scheduled to be there.

Six Or Arguably Four Things For Pi Day


I hope you’ll pardon me for being busy. I haven’t had the chance to read all the Pi Day comic strips yet today. But I’d be a fool to let the day pass without something around here. I confess I’m still not sure that Pi Day does anything lasting to encourage people to think more warmly of mathematics. But there is probably some benefit if people temporarily think more fondly of the subject. Certainly I’ll do more foolish things than to point at things and say, “pi, cool, huh?” this week alone.

I’ve got a couple of essays that discuss π some. The first noteworthy one is Calculating Pi Terribly, discussing a way to calculate the value of π using nothing but a needle, a tile floor, and a hilariously excessive amount of time. Or you can use an HTML5-and-JavaScript applet and slightly less time, and maybe even experimentally calculate the digits of π to two decimal places, if you get lucky.
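
The needle experiment in that first essay (Buffon’s needle) is easy to fake on a computer, trading the tile floor for a random number generator. This sketch uses a needle exactly as long as the boards are wide, my own choice for simplicity.

```python
import math, random

def crosses_seam():
    """Drop a unit-length needle on unit-width boards; does it cross a seam?"""
    y = random.uniform(0.0, 0.5)           # distance from needle's center to nearest seam
    theta = random.uniform(0.0, math.pi)   # needle's angle against the seams
    return y <= 0.5 * math.sin(theta)

drops = 1_000_000
crossings = sum(crosses_seam() for _ in range(drops))
print(2 * drops / crossings)   # the crossing chance is 2/pi, so this hovers near pi, slowly
```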

Randolph dreaming about his presentation; it shows a Pie Chart: Landed On Stage, 28%. Back wall, 13%. Glancing blow off torso, 22%. Hit podium, 12%. Direct hit in face, 25%. Several pies have been thrown, hitting the stage, back wall, his torso, the podium, his face. Corner illustration: 'I turn now to the bar graph.'
Tom Toles’s Randolph Itch, 2am for the 11th of June, 2018. I’m not sure when it did first run, past that it was in 2000, but I’ve featured it at least two times before, both of those in 2015, peculiarly. So in short I have no idea how GoComics picks its reruns for this strip.

In Calculating Pi Less Terribly I showed a way to calculate π that’s … well, you see where that sentence was going. This is a method that uses an alternating series. To get π exactly correct you have to do an infinite amount of work. But if you just want π to a certain precision, all right. This will even tell you how much work you have to do. There are other formulas that will get you digits of π with less work, though, and maybe I’ll write up one of those sometime.
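
One standard alternating series of this kind, which I believe is the one that essay leans on, is the Leibniz series for one quarter of π. The nice feature of alternating series is that the error is never bigger than the first term you leave out, which is the part that tells you how much work a given precision costs.

```python
def pi_leibniz(terms):
    """Partial sum of the Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

terms = 100_000
print(pi_leibniz(terms))     # 3.14158..., agonizingly slow convergence
print(4 / (2 * terms + 1))   # the guaranteed bound on the error, about 2e-5
```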

Jack-o-lantern standing on a scale: 'Hey! I weigh exactly 3.14 pounds!' Caption: 'Pumpkin Pi'.
Dave Whamond’s Reality Check for the 27th of October, 2018. Does the weight count if the jack-o-lantern is wearing sneakers?

And the last of the relevant essays I’ve already written is an A To Z essay about normal numbers. I don’t know whether π is a normal number. No human, to the best of my knowledge, does. Well, anyone with an opinion on the matter would likely say, of course it’s normal. There’s fantastic reasons to think it is. But none of those amount to a proof it is.

[Pisces] Guy at bar talking to Pi: 'Wow, so you were born on March 14th at 1:59, 26 seconds? What're the odds?'
Scott Hilburn’s The Argyle Sweater for the 14th of March, 2018. Also a free probability question, if you’re going to assume that every second of the year is equally likely to be the time of birth.

That’s my three items. After that I’d like to share … I don’t know whether to classify this as one or three pieces. They’re YouTube videos which a couple months ago everybody in the world was asking me if I’d seen. Now it’s your turn. I apologize if you too got this, a couple months ago, but don’t worry. You can tell people you watched and not actually do it. I’ll alibi you.

Pi figure, wearing glasses, reading The Neverending Story.
Mark Parisi’s Off The Mark for the 14th of March, 2018. Really the book seems a little short for that.

It’s a string of videos posted on YouTube by 3Blue1Brown. The first lays out the matter with a neat physics problem. Imagine you have an impenetrable wall, a frictionless floor, and two blocks. One starts at rest. The other is sliding towards the first block and the wall. How many times will one thing collide with another? That is, will one block collide with another block, or will one block collide with a wall?

[ How ancient mathematicians amused themselves, AKA how to celebrate Pi Day today; third annual Pi-Eating Contest. ] Emcee: 'And HERE he is, our defending champ, that father of conic sections --- ARCHIMEDES!' They're all eating cakes shaped like pi.
Michael Cavna’s Warped for the 14th of March, 2018. Yes, but have you seen Pythagoras and his golden thigh?

The answer seems like it should depend on many things. What it actually depends on is the ratio of the masses of the two blocks. If they’re the same mass, then there are three collisions. You can probably work that sequence out in your head and convince yourself it’s right. If the outer block has 100 times the mass of the inner block? There’ll be 31 collisions before all the hits are done. You might work that out by hand. I did not. You will not work out what happens if the outer block has 10,000 times the mass of the inner block. That’ll be 314 collisions. If the outer block has 1,000,000 times the mass of the inner block? 3,141 collisions. You see where this is going.
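
Here is a sketch of that collision count under the usual idealizations (perfectly elastic collisions, frictionless floor, immovable wall). The setup is mine, not taken from the videos, but it reproduces the counts above.

```python
def count_collisions(mass_ratio):
    """Small block (mass 1) starts at rest near the wall; big block
    (mass `mass_ratio`) slides in toward it. Count every collision,
    block-with-block and block-with-wall, until nothing can collide again."""
    m1, m2 = 1.0, float(mass_ratio)
    v1, v2 = 0.0, -1.0      # negative velocity means moving toward the wall
    count = 0
    while True:
        if v2 < v1:         # blocks approaching each other: elastic block-block collision
            v1, v2 = (((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2),
                      ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2))
        elif v1 < 0:        # small block heading for the wall: it bounces straight back
            v1 = -v1
        else:               # everything drifting apart: no more collisions
            return count
        count += 1

for ratio in (1, 100, 10_000, 1_000_000):
    print(ratio, count_collisions(ratio))   # 3, 31, 314, 3141
```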

[ To Stephen Hawking, Thanks for making the Universe a little easier for the rest of us to understand ] Jay: 'I suppose it's only appropriate that he'd go on Pi Day.' Roy: 'Not to mention, Einstein's birthday.' Katherine: 'I'll bet they're off in some far reach of the universe right now playing backgammon.'
John Zakour and Scott Roberts’s Working Daze for the 15th of March, 2018. No, you should never read the comments, but here, really, don’t read the comments.

The second video in the sequence explains why the digits of π turn up in this. And shows how to calculate this. You could, in principle, do this all using Newtonian mechanics. You will not live long enough to finish that, though.

Pie chart. Most of the chart: 'likes pie'. Small wedge of the chart: 'likes charts'.
Daniel Beyer’s Long Story Short for the 14th of March, 2015.

The video shows a way that saves an incredible load of work. But you save on that tedious labor by having to think harder. Part of it is making use of conservation laws, that energy and linear momentum are conserved in collisions. But part is by recasting the problem. Recast it into “phase space”. This uses points in an abstract space to represent different configurations of a system. Like, how fast blocks are moving, and in what direction. The recasting of the problem turns something that’s impossibly tedious into something that’s merely … well, it’s still a bit tedious. But it’s much less hard work. And it’s a good chance to show off you remember the Inscribed Angle Theorem. You do remember the Inscribed Angle Theorem, don’t you? The video will catch you up. It’s a good show of how phase spaces can make physics problems so much more manageable.
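The payoff of the phase-space recasting, if I’m reading the argument right, is that the whole count collapses to one line of geometry: the number of collisions is the largest whole number N with N·arctan(√(m/M)) less than π, where m is the small mass and M the large one. A hedged sketch:

```python
import math

def collisions_by_angle(mass_ratio):
    """Collision count from the phase-space picture: the largest N with
    N * arctan(sqrt(m_small / m_big)) < pi.  This is my reading of the
    geometry, with a nudge for the case where pi/theta is a whole number."""
    theta = math.atan(math.sqrt(1.0 / mass_ratio))
    n = math.floor(math.pi / theta)
    if n * theta >= math.pi:           # pi/theta landed exactly on an integer
        n -= 1
    return n

# collisions_by_angle(1), (100), (10_000), (1_000_000) give 3, 31, 314, and
# 3141, matching the brute-force count with far less arithmetic.
```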

'Happy Pi Day.' 'Mmm. I love apple pie.' 'Pi day, not Pie Day. Pi ... you know ... 3.14 ... March 14th. Get it?' 'Today is a pie-eating holiday?' 'Sort of. They do celebrate it with pie, but it's mostly about pi.' 'I don't understand what that kid says half the time.'
John Hambrock’s The Brilliant Mind of Edison Lee for the 14th of March, 2016. The strip is like this a lot.

The third video recasts the problem yet again. In this form, it’s about rays of light reflecting between mirrors. And this is a great recasting. That blocks bouncing off each other and walls should have anything to do with light hitting mirrors seems ridiculous. But set out your phase space, and look hard at what collisions and reflections are like, and you see the resemblance. The sort of trick used to make counting reflections easy turns up often in phase spaces. It also turns up in physics problems on toruses, doughnut shapes. You might ask when do we ever do anything on a doughnut shape. Well, real physical doughnuts, not so much. But problems where there are two independent quantities, and both quantities are periodic? There’s a torus lurking in there. There might be a phase space using that shape, and making your life easier by doing so.

Anthropomorphic numerals at a cocktail party. 2: 'You're greater than me. I could listen to you forever.' Pi: 'Aw, shucks. I'm blushing.' (It is.) Caption: 'Humble Pi.'
Scott Hilburn’s The Argyle Sweater for the 14th of March, 2017. And while the strip is true, arguably, 2 goes on forever also; it’s just not very interesting how it does.

That’s my promised four or maybe six items. Pardon, please, now, as I do need to get back to reading the comics.

Reading the Comics, January 16, 2019: Young People’s Mathematics Edition


Today’s quartet of mathematically-themed comic strips doesn’t have an overwhelming theme. There’s some bits about the mathematics that young people do, so, that’s enough to separate this from any other given day’s comics essay.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 14th is built on a bit of mathematical folklore. As Weinersmith’s mathematician (I don’t remember that we’ve been given her name) mentions, there is a belief that “revolutionary” mathematics is done by young people. That isn’t to say that older mathematicians don’t do great work. But the stereotype is that an older mathematician will produce masterpieces in already-established fields. It’s the young that establish new fields. Indeed, one of mathematics’s most prestigious awards, the Fields Medal, is only awarded to mathematicians under the age of forty. I was cheated of mine. Long story.

Mathematician: 'Only young people do revolutionary mathematics. 20 is ancient. 15 is old. 10 is middle-aged.' Kid, holding up two fingers: 'Three is THIS MANY.' Mathematician: 'It's counter-intuitive, but we must accept it.'
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 14th of January, 2019. I have many essays inspired by something said in Saturday Morning Breakfast Cereal. You can find them at this link.

There’s intuitive appeal in the idea that revolutions in thinking are for the young. We think that people get set in their ways as they develop their careers. We have a couple dramatic examples, most notably Évariste Galois, who developed what we now see as foundations of group theory and died at twenty. While the idea is commonly held, I don’t know that it’s actually true. That is, that it holds up to scrutiny. It seems hard to create a definition for “revolutionary mathematics” that could be agreed upon by two people. So it would be difficult to test at what age people do their most breathtaking work, and whether it is what they do when young or when experienced.

Is there harm to believing an unprovable thing? If it makes you give up on trying, yes. My suspicion is that true revolutionary work happens when a well-informed, deep thinker comes to a field that hasn’t been studied in that way before. And when it turns out to be a field well-suited to study that way. That doesn’t require youth. It requires skill in one field, and an understanding that there’s another field ready to be studied that way.

Spud: 'Can you help me with this math problem?' Wallace: '10 + 12? It helps if you visualize real things. Say you have ten cans of E-Z Cheez and someone gives you twelve more ... how many cans of E-Z Cheez do you have?' Spud: 'I'm sweating.'
Will Henry’s Wallace the Brave for the 14th of January, 2019. I have only had a few chances to talk about Wallace the Brave so far, but the chances I’ve taken are at this link. (It and Breaking Cat News are the two recently-launched comics I’m most excited by.)

Will Henry’s Wallace the Brave for the 14th is a mathematics anxiety joke. Wallace tries to help by turning an abstract problem into a concrete one. This is often a good way to approach a problem. Even in more advanced mathematics, one can often learn the way to solve a general problem by trying a couple of specific examples. It’s almost as though there’s only a certain amount of abstraction people can deal with, and you need to re-cast problems so they stay within your limits.

Yes, the comments turn to complaining about Common Core. I’m not sure what would help Spud work through this problem (or problems in general). But thinking of alternate problems that estimated or approached what he really wanted might help. If he noticed, for example, that 10 + 12 has to be a little more than 10 + 10, and he found 10 + 10 easy, then he’d be close to a right answer. If he noticed that 10 + 12 had to be 10 + 10 + 2, and he found 10 + 10 easy, then he might find 20 + 2 easy as well. Maybe Spud would be better off thinking of ways to rewrite a problem without changing the result.

Widow, to the party gathered at the gravesite: 'Needless to say, calculus wasn't his best subject.' The epitaph: 'It's a calculated risk, but you only live once!'
Wiley Miller’s Non Sequitur for the 15th of January, 2019. Essays mentioning Non Sequitur should appear at this link.

Wiley Miller’s Non Sequitur for the 15th mentions calculus. It’s more of a probability joke. To speak of a calculated risk is to speak of doing something that’s not certain, but that has enough of a payoff to be worth the cost of failure. But one problem with this attitude is that people are very, very bad at estimating probabilities. We have terrible ideas of how likely losses are and how uncertain rewards can be. But even if we allow that the risks and rewards are calculated right, there’s a problem with things you only do once. Or only can do once. You can get into a good debate about whether there’s even a meaningful idea of probability for things that happen only the one time. Life’s among them.

Kid: 'Dad! Let's tackle my homework!' Moose: 'Later, son. I'm busy.' Kid goes to Westpork Savings and Loan. The bank clerk's sitting under a sign, 'Let us help you with your money problems.' Kid reads: 'If Farmer Smith sells wheat at $1.25 a bushel and Farmer Brown sells it at $1.30, how many bushels must each sell ... '
Bob Weber Sr’s Moose and Molly for the 16th of January, 2019. I haven’t had the chance to talk about Moose and Molly before. But now I have the tag, and will be putting essays mentioning it at this link.

Bob Weber Sr’s Moose and Molly for the 16th is a homework joke. It does actually depend on being mathematics homework, though, or there’d be no grounds for Moose’s kid to go to the savings and loan clerk who’ll help with “money problems”.


I think there’s one more batch of comic strips to discuss this week. When I’ve published it, you should find the essay at this link. And then there’ll be Sunday again.

Reading the Comics, January 5, 2019: Start of the Year Edition


With me wrapping up the mathematically-themed comic strips that ran the first of the year, you can see how far behind I’m falling keeping everything current. In my defense, Monday was busier than I hoped it would be, so everything ran late. Next week is looking quite slow for comics, so maybe I can catch up then. I will never catch up on anything the rest of my life, ever.

Scott Hilburn’s The Argyle Sweater for the 2nd is a bit of wordplay about regular and irregular polygons. Many mathematical constructs, in geometry and elsewhere, come in “regular” and “irregular” forms. The regular form usually has symmetries that make it stand out. For polygons, this is each side having the same length, and each interior angle being congruent. Irregular is everything else. The symmetries which constrain the regular version of anything often mean we can prove things we otherwise can’t. But most of anything is the irregular. We might know fewer interesting things about them, or have a harder time proving them.

Teacher: 'Well, class, who'd like to show Mr Hoffmeyer how to correctly make an irregular polygon regular?' On the blackboard is an irregular pentagon and, drawn by Mr Hoffmeyer, a box of Ex-Lax.
Scott Hilburn’s The Argyle Sweater for the 2nd of January, 2019. The many appearances of Argyle Sweater in these pages are at this link.

I’m not sure what the teacher would be asking for in how to “make an irregular polygon regular”. I mean if we pretend that it’s not setting up the laxative joke. I can think of two alternatives that would make sense. One is to draw a polygon with the same number of sides and the same perimeter as the original. The other is to draw a polygon with the same number of sides and the same area as the original. I’m not sure of the point of either. I suppose polygons of the same area have some connection to quadrature, that is, integration. But that seems like it’s higher-level stuff than this class should be doing. I hate to question the reality of a comic strip but that’s what I’m forced to do.
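For what it’s worth, either reading is a quick calculation once you have the formulas for a regular n-gon. A sketch of both, assuming that really is all the teacher wanted:

```python
import math

def regular_side_from_perimeter(n, perimeter):
    """Side length of the regular n-gon with the given perimeter."""
    return perimeter / n

def regular_side_from_area(n, area):
    """Side length of the regular n-gon with the given area, from
    area = n * s**2 / (4 * tan(pi / n))."""
    return math.sqrt(4.0 * area * math.tan(math.pi / n) / n)

# A regular pentagon matching an irregular pentagon of perimeter 20 has sides
# of length 4; matching one of area 30, sides of about 4.18.
```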

Mutt, to Jeff in the hospital bed: 'Don't be afraid! Surgery on the tonsils is very simple!' Doctor: 'Don't you worry about the results!' Jeff: 'How do you know I'll be all right?' Doctor: 'Well, I lost my last eleven patients! So if the law of probabilities doesn't lie, you'll be all right! May I do something for you before I begin?' Jeff: 'Oh, yes, Doc! Help me put on my trousers and my jacket!'
Bud Fisher’s Mutt and Jeff rerun for the 4th of January, 2019. The several appearances of Mutt and Jeff in these pages are at this link.

Bud Fisher’s Mutt and Jeff rerun for the 4th is a gambler’s fallacy joke. Superficially the gambler’s fallacy seems to make perfect sense: the chance of twelve bad things in a row has to be less than the chance of eleven bad things in a row. So after eleven bad things, the twelfth has to come up good, right? But there’s two ways this can go wrong.

Suppose each attempted thing is independent. In this case each patient’s chance of living doesn’t depend on what’s come before. And then the eleven deaths don’t make it any more likely that the next patient will live.

Suppose each attempted thing is not independent, though. This is easy to imagine. Each surgery, for example, is a chance for the surgeon to learn what to do, or not do. He could be getting better, that is, more likely to succeed, each operation. Or the failures could reflect the surgeon’s skills declining, perhaps from overwork or age or a loss of confidence. Impossible to say without more data. Eleven deaths on what context suggests are low-risk operations suggest a poor chance of surviving any given surgery, though. I’m on Jeff’s side here.
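A quick simulation makes the independent case vivid. Under the made-up assumption that every operation independently succeeds with the same probability, the eleven prior deaths tell you nothing about the twelfth patient:

```python
import random

def chance_after_failures(p=0.5, prior_failures=11, trials=500_000):
    """Estimate P(next operation succeeds | the previous `prior_failures` all
    failed) when every operation independently succeeds with probability p."""
    matched = 0
    successes = 0
    for _ in range(trials):
        history = [random.random() < p for _ in range(prior_failures + 1)]
        if not any(history[:prior_failures]):   # the first eleven all failed
            matched += 1
            successes += history[prior_failures]
    return successes / matched if matched else float('nan')

# The estimate hovers around p, eleven deaths or not -- though you need many
# trials, since eleven straight failures is itself a rare event.
```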

On the blackboard: 'Ratios: Apples 9, Oranges 6'. Wavehead, to teacher: 'Technically the ratio is 3:2, but as a practical matter we shouldn't even really be considering this.'
Mark Anderson’s Andertoons for the 5th of January, 2019. The amazingly many appearances of Andertoons in these pages are at this link.

Mark Anderson’s Andertoons for the 5th is a welcome return of Wavehead. It’s about ratios. My impression is that ratios don’t get much attention in themselves anymore, except to dunk on stupid Twitter comments. It’s too easy to jump right into fractions, and division. Ratios underlie this, at least historically. It’s even in the name, ‘rational numbers’.

Wavehead’s got a point in literally comparing apples and oranges. It’s at least weird to compare directly different kinds of things. This is one of those conceptual gaps between ancient mathematics and modern mathematics. We’re comfortable stripping the units off of numbers, and working with them as abstract entities. But that does mean we can calculate things that don’t make sense. This produces the occasional bit of fun on social media where we see something like Google trying to estimate a movie’s box office per square inch of land in Australia. Just because numbers can be combined doesn’t mean they should be.

Kid: 'Dad, I need help with a math problem. If striking NFL players who get $35,000 a game are replaced by scab players who get $1,000 a game ... what will be the point spread in a game between the Lions and the Packers?'
Larry Wright’s Motley rerun for the 5th of January, 2019. The occasional appearances of Motley in these pages are at this link.

Larry Wright’s Motley rerun for the 5th has the form of a story problem. And one timely to the strip’s original appearance in 1987, during the National Football League players strike. The setup, talking about the difference in weekly pay between the real players and the scabs, seems like it’s about the payroll difference. The punchline jumps to another bit of mathematics, the point spread. Which is an estimate of the expected difference in scoring between teams. I don’t know for a fact, but would imagine the scab teams had nearly meaningless point spreads. The teams were thrown together extremely quickly, without much training time. The tools to forecast what a team might do wouldn’t have the data to rely on.


The at-least-weekly appearances of Reading the Comics in these pages are at this link.

Yes, I Am Late With The Comics Posts Today


I apologize that, even though the past week was light on mathematically-themed comic strips, I didn’t have them written up by my usual Sunday posting time. It was just too busy a week, and I am still decompressing from the A to Z sequence. I’ll have them as soon as I’m able.

In the meanwhile may I share a couple of things I thought worth reading, and that have been waiting in my notes folder for the chance to highlight?

This Fermat’s Library tweet is one of those entertaining consequences of probability, multiplied by the large number of people in the world. If you flip twenty coins in a row there’s a one in 1,048,576 chance that all twenty will come up heads, and the same chance that all twenty will come up tails. So about one in every half-million times you flip twenty coins, they all come up the same way. If the seven billion people in the world have flipped at least twenty coins in their lives, then something like seven thousand of them had the coins turn up heads every single one of those twenty times. That all seven billion people have tossed a coin seems like the biggest point to attack this trivia on. A lot of people are too young for coins, or don’t have access to them. But there’s still going to be thousands who did start their coin-flipping lives with a remarkable streak.
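The arithmetic behind the trivia, for anyone who wants to poke at it:

```python
p_all_heads = 0.5 ** 20           # one particular run of twenty: 1 in 1,048,576
p_all_same = 2 * p_all_heads      # all heads or all tails: 1 in 524,288
people = 7_000_000_000
print(people * p_all_heads)       # roughly 6,700 expected all-heads streaks
```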

Also back in October, so you see how long things have been circulating around here, John D Cook published an article about the World Series. Or any series contest. At least ones where the chance of each side winning doesn’t depend on the previous games in the series. If one side has a probability ‘p’ of winning any particular game, what’s the chance they’ll win a best-four-of-seven? What makes this a more challenging mathematics problem is that a best-of-seven series stops after one side’s won four games. So you can’t simply say it’s the chance of four wins. You need to account for four wins out of five games, out of six games, and out of seven games. Fortunately there’s a lot of old mathematics that explores just this.
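Here’s a compact version of the accounting Cook describes, summing over series that end after four, five, six, or seven games. The per-game probability is assumed constant and the games independent, as in the article:

```python
from math import comb

def best_of_seven_win_probability(p):
    """Chance of winning a best-of-seven series when each game is an
    independent win with probability p: win exactly three of the first
    n - 1 games and then win game n, for n = 4, 5, 6, 7."""
    q = 1.0 - p
    return sum(comb(n - 1, 3) * p**4 * q**(n - 4) for n in range(4, 8))
```

With p = 0.55 the better team takes the series only about 61 percent of the time, which is part of what keeps short series interesting.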

The economist Bradford DeLong noticed the first write-up of the Prisoner’s Dilemma. This is one of the first bits of game theory that anyone learns, and it’s an important bit. It establishes that the logic of cooperative games — any project where people have to work together — can have a terrible outcome. What makes the most sense for the individuals makes the least sense for the group. A good outcome for everyone depends on trust, whether established through history or through constraints everyone’s agreed to respect.
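For a concrete version, with the textbook-standard payoffs (my numbers, not anything from DeLong’s post): defecting is each player’s best reply no matter what the other does, yet mutual defection leaves both worse off than mutual cooperation would have.

```python
# Payoff to (row player, column player); bigger is better.  These particular
# numbers are the conventional classroom choice, not anything from the source.
PAYOFFS = {
    ('cooperate', 'cooperate'): (3, 3),
    ('cooperate', 'defect'):    (0, 5),
    ('defect',    'cooperate'): (5, 0),
    ('defect',    'defect'):    (1, 1),
}

def best_response(opponent_move):
    """The row player's best reply to a fixed opponent move."""
    return max(('cooperate', 'defect'),
               key=lambda move: PAYOFFS[(move, opponent_move)][0])

# best_response('cooperate') and best_response('defect') both return 'defect',
# even though mutual defection pays (1, 1) against mutual cooperation's (3, 3).
```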

And finally here’s part of a series about quick little divisibility tests. This is that trick where you tell what a number’s divisible by through adding or subtracting its (base ten) digits. Everyone who’d be reading this post knows about testing for divisibility by three or nine. Here’s some rules for also testing divisibility by eleven (which you might know), by seven (less likely), and thirteen. With a bit of practice, and awareness of some exceptional numbers, you can tell by sight whether a number smaller than a thousand is prime. Add a bit of flourish to your doing this and you can establish a reputation as a magical mathematician.
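I don’t know which particular rules the linked series uses, but here are two common ones, sketched so you can test them against a list of multiples: the chop-and-double rule for seven and the alternating digit sum for eleven.

```python
def divisible_by_7(n):
    """A common rule for 7: chop off the last digit, double it, subtract it
    from what's left, and repeat until the number is small."""
    n = abs(n)
    while n >= 10:
        n = abs(n // 10 - 2 * (n % 10))
    return n % 7 == 0

def divisible_by_11(n):
    """Alternating sum of the base-ten digits: 11 divides n exactly when it
    divides that alternating sum."""
    digits = [int(d) for d in str(abs(n))]
    return sum(d if i % 2 == 0 else -d for i, d in enumerate(digits)) % 11 == 0

# divisible_by_7(1001) and divisible_by_11(1001) are both True, as they should
# be, since 1001 = 7 * 11 * 13.
```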

Reading the Comics, December 19, 2018: Andertoons Is Back Edition


I had not wanted to mention, for fear of setting off a panic. But Mark Anderson’s Andertoons, which I think of as being in every Reading the Comics post, hasn’t been around lately. If I’m not missing something, it hasn’t made an appearance in three months now. I don’t know why, and I’ve been trying not to look too worried by it. Mostly I’ve been forgetting to mention the strange absence. This even though I would think any given Tuesday or Friday that I should talk about the strip not having anything for me to write about. Fretting about it would make a great running theme. But I have never spotted a running theme before it’s finished. In any event the good news is that the long drought has ended, and Andertoons reappears this week. Yes, I’m hoping that it won’t be going too long between appearances this time.

Mrs Olsen: 'How do you know I haven't got my flu shot?' Caulfield: 'Just playing the odds.' Mrs Olsen: 'Maybe I was playing some odds myself. Maybe I got to the pharmacy and remembered that this year's vaccine is 30-40% effective.' Caulfield: 'I'd take those odds.' Mrs Olsen: 'They're not my kind of odds.' Caulfield: 'And what are the odds you bought a lottery ticket on your way out?' (Pause.) Mrs Olsen: 'You are getting under my skin.' Caulfield: 'That's good news. Now there's a 30-40% chance you'll develop a resistance.'
Jef Mallett’s Frazz for the 16th of December, 2018. Other essays discussing topics raised by Frazz are at this link.

Jef Mallett’s Frazz for the 16th talks about probabilities. This in the context of assessing risks. People are really bad at estimating probabilities. We’re notoriously worse at assessing risks, especially when it’s a matter of balancing a present cost like “fifteen minutes waiting while the pharmacy figures out whether insurance will pay for the flu shot” versus a nebulous benefit like “lessened chance of getting influenza, or at least having a less severe influenza”. And it’s asymmetric, too. We view improbable but potentially enormous losses differently from the way we view improbable but potentially enormous gains. And it’s hard to make the rationally-correct choice reliably, not when there are so many choices of this kind every day.

Guard, to new prisoner: 'Never mind Professor Phillip. He's always preoccupied with some theory of escape probability.' The cell walls are covered with mathematical scrawls.
Tak Bui’s PC and Pixel for the 16th of December, 2018. This and other essays, when they’re written, inspired by PC and Pixel should be at this link. It’s a new tag, which surprises me.

Tak Bui’s PC and Pixel for the 16th features a wall full of mathematical symbols, used to represent deep thought about a topic. The symbols are gibberish, yes. I’m not sure that an actual “escape probability” could be done in a legible way, though. Or even what precisely Professor Phillip might be calculating. I imagine it would be an estimate of the various ways he might try to escape, and what things might affect that. This might be for the purpose of figuring out what he might do to maximize his chances of a successful escape. Although I wouldn’t put it past the professor to just be quite curious what the odds are. There’s a thrill in having a problem solved, even if you don’t use the answer for anything.

Amazing Yet Tautological strip: 'Each year America consumes enough EGG SALAD ... ' (Picture of a woman holding up a lumpy pile that context indicates is egg salad.) ' ... to give EACH AMERICAN an annualized national-average serving of the tasty concoction!'
Ruben Bolling’s Super-Fun-Pak Comix for the 18th of December, 2018. Essays based on Super-Fun-Pak Comix are at this link. (Amazing Yet Tautological is one of the features that turns up in Super-Fun-Pak Comix, which is why it doesn’t rate a tag on its own)

Ruben Bolling’s Super-Fun-Pak Comix for the 18th has a trivia-panel-spoof dubbed Amazing Yet Tautological. One could make an argument that most mathematics trivia fits into this category. At least anything about something that’s been proven. Anyway, whether this is a tautological strip depends on what the strip means by “average” in the phrase “average serving”. There’s about four jillion things dubbed “average” and each of them has a context in which they make sense. The thing intended here, and the thing meant if nobody says anything otherwise, is the “arithmetic mean”. That’s what you get from adding up everything in a sample (here, the amount of egg salad each person in America eats per year) and dividing it by the size of the sample (the number of people in America that year). Another “average” which would make sense, but would break this strip, would be the median. That would be the amount of egg salad that half of all Americans eat more than, and half eat less than. But whether every American could have that big a serving really depends on what that median is. The “mode”, the most common serving, would also be a reasonable “average” to expect someone to talk about.
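A tiny, entirely invented sample shows how far apart the three candidates for “average” can sit, which is why the strip’s tautology depends on meaning the arithmetic mean:

```python
from statistics import mean, median, mode

# Hypothetical annual egg-salad servings for seven people; the numbers are
# made up purely to pull the three "averages" apart.
servings = [0, 0, 0, 1, 2, 3, 15]

print(mean(servings))    # 3.0 -- the arithmetic mean, dragged up by one enthusiast
print(median(servings))  # 1   -- half the people eat more than this, half less
print(mode(servings))    # 0   -- the single most common amount
```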

Teacher showing solid geometry to the class. Wavehead: 'I saw a movie where the robot monster came right at me. If you want me to get excited about 3D shapes, you're going to have to do better than that.'
Mark Anderson’s Andertoons for the 19th of December, 2018. The many essays which discuss Andertoons are at this link.

Mark Anderson’s Andertoons for the 19th is that strip’s much-awaited return to my column here. It features solid geometry, which is both an important part of geometry and also a part that doesn’t get nearly as much attention as plane geometry. It’s reductive to suppose the problem is that it’s harder to draw solids than planar figures. I suspect that’s a fair part of the problem, though. Mathematicians don’t get much art training, not anymore. And while geometry is supposed to be able to rely on pure reasoning, a good picture still helps. And a bad picture will lead us into trouble.


All of the Reading the Comics posts should be at this link. And I have finished the alphabet in my Fall 2018 Mathematics A To Z glossary. There should be a few postscript thoughts to come this week, though.

My 2018 Mathematics A To Z: Zugzwang


My final glossary term for this year’s A To Z sequence was suggested by aajohannas, who’d also suggested “randomness” and “tiling”. I don’t know of any blogs or other projects they’re behind, but if I do hear, I’ll pass them on.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble tiles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Zugzwang.

Some areas of mathematics struggle against the question, “So what is this useful for?” As though usefulness were a particular merit — or demerit — for a field of human study. Most mathematics fields discover some use, though, even if it takes centuries. Others are born useful. Probability, for example. Statistics. Know what the fields are and you know why they’re valuable.

Game theory is another of these. The subject, as often happens, we can trace back centuries. Usually as the study of some particular game. Occasionally in the study of some political science problem. But game theory developed a particular identity in the early 20th century. Some of this from set theory experts. Some from probability experts. Some from John von Neumann, because it was the 20th century and all that. Calling it “game theory” explains why anyone might like to study it. Who doesn’t like playing games? Who, studying a game, doesn’t want to play it better?

But why it might be interesting is different from why it might be important. Think of what a game is. It is a string of choices made by one or more parties. The point of the choices is to achieve some goal. Put that way you realize: this is everything. All life is making choices, all in the pursuit of some goal, even if that goal is just “not end up any worse off”. I don’t know that the earliest researchers in game theory as a field realized what a powerful subject they had touched on. But by the 1950s they were doing serious work in strategic planning, and by 1964 were even giving us Stanley Kubrick movies.

This is taking me away from my glossary term. The field of games is enormous. If we narrow the field some we can discuss specific kinds of games. And say more involved things about these games. So first we’ll limit things by thinking only of sequential games. These are ones where there are a set number of players, and they take turns making choices. I’m not sure whether the field expects the order of play to be the same every time. My understanding is that much of the focus is on two-player games. What’s important is that at any one step there’s only one party making a choice.

The other thing narrowing the field is to think of information. There are many things that can affect the state of the game. Some of them might be obvious, like where the pieces are on the game board. Or how much money a player has. We’re used to that. But there can be hidden information. A player might conceal some game money so as to make other players underestimate her resources. Many card games have one or more cards concealed from the other players. There can be information unknown to any party. No one can make a useful prediction what the next throw of the game dice will be. Or what the next event card will be.

But there are games where there’s none of this ambiguity. These are called games with “perfect information”. In them all the players know the past moves every player has made. Or at least should know them. Players are allowed to forget what they ought to know.

There’s a separate but similar-sounding idea called “complete information”. In a game with complete information, players know everything that affects the gameplay. At least, probably, apart from what their opponents intend to do. This might sound like an impossibly high standard, at first. All games with shuffled decks of cards and with dice to roll are out. There’s no concealing or lying about the state of affairs.

Set complete-information aside; we don’t need it here. Think only of perfect-information games. What are they? Some ancient games, certainly. Tic-tac-toe, for example. Some more modern versions, like Connect Four and its variations. Some that are actually deep, like checkers and chess and go. Some that are, arguably, more puzzles than games, as in sudoku. Some that hardly seem like games, like several people agreeing how to cut a cake fairly. Some that seem like tests to prove people are fundamentally stupid, like when you auction off a dollar. (The rules are set so players can easily end up paying more than a dollar.) But that’s enough for me, at least. You can see there are games of clear, tangible interest here.

The last restriction: think only of two-player games. Or at least two parties. Any of these two-party sequential games with perfect information are a part of “combinatorial game theory”. It doesn’t usually allow for incomplete-information games. But at least the MathWorld glossary doesn’t demand they be ruled out. So I will defer to this authority. I’m not sure how the name “combinatorial” got attached to this kind of game. My guess is that it seems like you should be able to list all the possible combinations of legal moves. That number may be enormous, as chess and go players are always going on about. But you could imagine a vast book which lists every possible game. If your friend ever challenged you to a game of chess the two of you could simply agree, oh, you’ll play game number 2,038,940,949,172 and then look up to see who won. Quite the time-saver.

Most games don’t have such a book, though. Players have to act on what they understand of the current state, and what they think the other player will do. This is where we get strategies from. Not just what we plan to do, but what we imagine the other party plans to do. When working out a strategy we often expect the other party to play perfectly. That is, to make no mistakes, to not do anything that worsens their position. Or that reduces their chance of winning.

… And yes, arguably, the word “chance” doesn’t belong there. These are games where the rules are known, every past move is known, every future move is in principle computable. And if we suppose everyone is making the best possible move then we can imagine forecasting the whole future of the game. One player has a “chance” of winning in the same way Christmas day of the year 2038 has a “chance” of being on a Tuesday. That is, the probability is just an expression of our ignorance, that we don’t happen to be able to look it up.

But what choice do we have? I’ve never seen a reference that lists all the possible games of tic-tac-toe. And that’s about the simplest combinatorial-game-theory game anyone might actually play. What’s possible is to look at the current state of the game. And evaluate which player seems to be closer to her goal. And then look at all the possible moves.

There are three things a move can do. It can put the party closer to the goal. It can put the party farther from the goal. Or it can do neither. On her turn the other party might do something that moves you farther from your goal, moves you closer to your goal, or doesn’t affect your status at all. It seems like this makes strategy obvious. On every step take the available move that takes one closest to the goal. This is known as a “greedy” strategy. As the name suggests it isn’t automatically bad. If you expect the game to be a short one, greed might be the best approach. The catch is that moves that seem less good — even ones that seem to hurt you initially — might set up other, even better moves. So strategy requires some thinking beyond the current step. Properly, it requires thinking through to the end of the game. Or at least until the end of the game seems obvious.
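To make “thinking through to the end of the game” concrete, here’s a toy sequential game of perfect information, my example rather than one from any text: a single pile of counters, players alternately remove one, two, or three, and whoever takes the last counter wins. A memoized solver simply works backward from the end:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def current_player_wins(pile):
    """True if the player about to move can force a win with `pile` counters
    left, when a move removes 1, 2, or 3 and taking the last counter wins."""
    if pile == 0:
        return False    # no counters left: the previous player just won
    # A move is good exactly when it leaves the opponent in a losing position.
    return any(not current_player_wins(pile - take)
               for take in (1, 2, 3) if take <= pile)

# Piles of 4, 8, 12, ... are losses for the player to move, however greedily
# they play; every other pile size is a forced win.
```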

We should like a strategy that leaves us no choice but to win. Next-best would be one that leaves the game undecided, since something might happen like the other player needing to catch a bus and so resigning. This is how I got my solitary win in the two months I spent in the college chess club. Worst would be the games that leave us no choice but to lose.

It can be that there are no good moves. That is, that every move available makes it a little less likely that we win. Sometimes a game offers the chance to pass, preserving the state of the game but giving the other party the turn. Then maybe the other party will do something that creates a better opportunity for us. But if we are allowed to pass, there’s a good chance the game lets the other party pass, too, and we end up in the same fix. And it may be the rules of the game don’t allow passing anyway. One must move.

The phenomenon of having to make a move when it’s impossible to make a good move has prominence in chess. I don’t have the chess knowledge to say how common the situation is. But it seems to be a situation people who study chess problems love. I suppose it appeals to a love of lost causes and the hope that you can be brilliant enough to see what everyone else has overlooked. German chess literature gave it a name 160 years ago, “zugzwang”, “compulsion to move”. Somehow I never encountered the term when I was briefly a college chess player. Perhaps because I was never in zugzwang and was just too incompetent a player to find my good moves. I first encountered the term in Michael Chabon’s The Yiddish Policeman’s Union. The protagonist picked up on the term as he investigated the murder of a chess player and then felt himself in one.

Combinatorial game theorists have picked up the word, and sharpened its meaning. If I understand correctly chess players allow the term to be used for any case where a player hurts her position by moving at all. Game theorists make it more dire. This may reflect their knowledge that an optimal strategy might require taking some dismal steps along the way. The game theorist formally grants the term only to the situation where the compulsion to move changes what should be a win into a loss. This seems terrible, but then, we’ve all done this in play. We all feel terrible about it.

I’d like here to give examples. But in searching the web I can find only either courses in game theory, which are a bit too much for even me to summarize, or chess problems, which I’m not up to understanding. It seems hard to set out an example: I need to not just set out the game, but show that what had been a win is now, by any available move, turned into a loss. Chess is looser. It even allows, I discover, a double zugzwang, where both players are at a disadvantage if they have to move.

It’s a quite relatable problem. You see why game theory has this reputation as mathematics that touches all life.


And with that … I am done! All of the Fall 2018 Mathematics A To Z posts should be at this link. Next week I’ll post my big list of all the letters, though. And, as has become tradition, a post about what I learned by doing this project. And sometime before then I should have at least one more Reading the Comics post. Thanks kindly for reading and we’ll see when in 2019 I feel up to doing another of these.

My 2018 Mathematics A To Z: Witch of Agnesi


Nobody had a suggested topic starting with ‘W’ for me! So I’ll take that as a free choice, and get lightly autobiographical.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble tiles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Witch of Agnesi.

I know I encountered the Witch of Agnesi while in middle school. Eighth grade, if I’m not mistaken. It was a footnote in a textbook. I don’t remember much of the textbook. What I mostly remember of the course was how much I did not fit with the teacher. The only relief from boredom that year was the month we had a substitute and the occasional interesting footnote.

It was in a chapter about graphing equations. That is, finding curves whose points have coordinates that satisfy some equation. In a bit of relief from lines and parabolas the footnote offered this:

y = \frac{8a^3}{x^2 + 4a^2}

In a weird tantalizing moment the footnote didn’t offer a picture. Or say what an ‘a’ was doing in there. In retrospect I recognize ‘a’ as a parameter, and that different values of it give different but related shapes. No hint what the ‘8’ or the ‘4’ were doing there. Nor why ‘a’ gets raised to the third power in the numerator or the second in the denominator. I did my best with the tools I had at the time. Picked a nice easy boring ‘a’. Picked out values of ‘x’ and found the corresponding ‘y’ which made the equation true, and tried connecting the dots. The result didn’t look anything like a witch. Nor a witch’s hat.

It was one of a handful of biographical notes in the book. These were a little attempt to add some historical context to mathematics. It wasn’t much. But it was an attempt to show that mathematics came from people. Including, here, from Maria Gaëtana Agnesi. She was, I’m certain, the only woman mentioned in the textbook I’ve otherwise completely forgotten.

We have few names of ancient mathematicians. Those we have are often compilers like Euclid whose fame obliterated the people whose work they explained. Or they’re like Pythagoras, credited with discoveries by people who obliterated their own identities. In later times we have the mathematics done by, mostly, people whose social positions gave them time to write mathematics results. So we see centuries where every mathematician is doing it as their side hustle to being a priest or lawyer or physician or combination of these. Women don’t get the chance to stand out here.

Today of course we can name many women who did, and do, mathematics. We can name Emmy Noether, Ada Lovelace, and Marie-Sophie Germain. Challenged to do a bit more, we can offer Florence Nightingale and Sofia Kovalevskaya. Well, and also Grace Hopper and Margaret Hamilton if we decide computer scientists count. Katherine Johnson looks likely to make that cut. But in any case none of these people are known for work understandable in a pre-algebra textbook. This must be why Agnesi earned a place in this book. She’s among the earliest women we can specifically credit with doing noteworthy mathematics. (Also physics, but that’s off point for me.) Her curve might be a little advanced for that textbook’s intended audience. But it’s not far off, and pondering questions like “why 8a^3 ? Why not a^3 ?” is more pleasant, to a certain personality, than pondering what a directrix might be and why we might use one.

The equation might be a lousy way to visualize the curve described. The curve is one of that group of interesting shapes you get by constructions. That is, following some novel process. Constructions are fun. They’re almost a craft project.

For this we start with a circle. And two parallel tangent lines. Without loss of generality, suppose they’re horizontal, so there’s one line at the top of the circle and one at the bottom.

Take one of the two tangent points. Again without loss of generality, let’s say the bottom one. Draw a line from that point over to the other line. Anywhere on the other line. There’s a point where the line you drew intersects the circle. There’s another point where it intersects the other parallel line. We’ll find a new point by combining pieces of these two points. The point is on the same horizontal as wherever your line intersects the circle. It’s on the same vertical as wherever your line intersects the other parallel line. This point is on the Witch of Agnesi curve.

Now draw another line. Again, starting from the lower tangent point and going up to the other parallel line. Again it intersects the circle somewhere. This gives another point on the Witch of Agnesi curve. Draw another line. Another intersection with the circle, another intersection with the opposite parallel line. Another point on the Witch of Agnesi curve. And so on. Keep doing this. When you’ve drawn all the lines that reach from the tangent point to the other line, you’ll have generated the full Witch of Agnesi curve. This takes more work than writing out y = \frac{8a^3}{x^2 + 4a^2} , yes. But it’s more fun. It makes for neat animations. And I think it prepares us to expect the shape of the curve.
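The construction translates directly into coordinates, if you’d like to see the textbook equation fall out of it. Put the circle of radius a atop the origin, so the tangent lines are y = 0 and y = 2a; a sketch:

```python
def witch_point(t, a=1.0):
    """One point of the Witch of Agnesi by the construction: draw the line from
    the lower tangent point (the origin) to the point (t, 2a) on the upper
    tangent line, and combine that upper intersection's x-coordinate with the
    y-coordinate of where the line crosses the circle of radius a at (0, a)."""
    # Points on the drawn line are (s*t, s*2a).  Substituting into the circle
    # x^2 + (y - a)^2 = a^2 gives s = 4a^2 / (t^2 + 4a^2) for the non-origin hit.
    s = 4.0 * a * a / (t * t + 4.0 * a * a)
    circle_y = 2.0 * a * s             # height where the line meets the circle
    return (t, circle_y)               # x from the upper line, y from the circle

# Sweep t across a range and connect the dots to draw the hill; every point
# satisfies y = 8a^3 / (x^2 + 4a^2), since 2a * 4a^2/(t^2 + 4a^2) is exactly that.
```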

It’s a neat curve. Between it and the lower parallel line is an area four times that of the circle that generated it. The shape is one we would get from looking at the derivative of the arctangent. So there’s some reasons someone working in calculus might find it interesting. And people did. Pierre de Fermat studied it, and found this area. Isaac Newton and Luigi Guido Grandi studied the shape, using this circle-and-parallel-lines construction. Maria Agnesi’s name attached to it after she published a calculus textbook which examined this curve. She showed, according to people who present themselves as having read her book, the curve and how to find it. And she showed its equation and found the vertex and asymptote line and the inflection points. The inflection points, here, are where the curve changes from being cupped upward to cupping downward, or vice-versa.

It’s a neat function. It’s got some uses. It’s a natural smooth-hill shape, for example. So this makes a good generic landscape feature if you’re modeling the flow over a surface. I read that solitary waves can have this curve’s shape, too.

And the curve turns up as a probability distribution. Take a fixed point. Pick lines at random that pass through this point. See where those lines reach a separate, straight line. Some regions are more likely to be intersected than are others. Chart how often each spot along that straight line turns out to be the intersection point. That chart will (given some assumptions I ask you to pretend you agree with) be a Witch of Agnesi curve. This might not surprise you. It seems inevitable from the circle-and-intersecting-line construction process. And that’s nice enough. As a distribution it looks like the usual Gaussian bell curve.

It’s different, though. And it’s different in strange ways. Like, for a probability distribution we can find an expected value. That’s … well, what it sounds like. But this is the strange probability distribution for which the law of large numbers does not work. Imagine an experiment that produces real numbers, with the frequency of each number given by this distribution. Run the experiment zillions of times. What’s the mean value of all the zillions of generated numbers? And it … doesn’t … have one. I mean, we know it ought to, it should be the center of that hill. But the calculations for that don’t work right. Taking a bigger sample makes the sample mean jump around more, not less, the way every other distribution should work. It’s a weird idea.
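You can watch the failure happen. The construction above gives a way to sample the distribution (it’s the one usually called the Cauchy distribution): pick a random angle through the fixed point and take its tangent. The running mean of those samples never settles down; a sketch:

```python
import math
import random

def cauchy_sample():
    """Sample by the construction in the text: a random direction through a
    fixed point, intersected with a straight line, is tan(uniform angle)."""
    return math.tan(random.uniform(-math.pi / 2.0, math.pi / 2.0))

def sample_mean(samples=1_000_000):
    return sum(cauchy_sample() for _ in range(samples)) / samples

# Call sample_mean() a few times: the answers scatter wildly instead of
# clustering near zero, no matter how large `samples` gets.  Dice rolls or
# coin flips would have settled down long before a million samples.
```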

Imagine carving a block of wood in the shape of this curve, with a horizontal lower bound and the Witch of Agnesi curve as the upper bound. Where would it balance? … The normal mathematical tools don’t say, even though the shape has an obvious line of symmetry. And a finite area. You don’t get this kind of weirdness with parabolas.

(Yes, you’ll get a balancing point if you actually carve a real one. This is because you work with finitely-long blocks of wood. Imagine you had a block of wood infinite in length. Then you would see some strange behavior.)

It teaches us more strange things, though. Consider interpolations, that is, taking a couple data points and fitting a curve to them. We usually start out looking for polynomials when we interpolate data points. This is because everything is polynomials. Toss in more data points. We need a higher-order polynomial, but we can usually fit all the given points. But sometimes polynomials won’t work. A problem called Runge’s Phenomenon can happen, where the more data points you have the worse your polynomial interpolation is. The Witch of Agnesi curve is one of those. Carl Runge used points on this curve, and trying to fit polynomials to those points, to discover the problem. More data and higher-order polynomials make for worse interpolations. You get curves that look less and less like the original Witch. Runge is himself famous to mathematicians, known for “Runge-Kutta”. That’s a family of techniques to solve differential equations numerically. I don’t know whether Runge came to the weirdness of the Witch of Agnesi curve from considering how errors build in numerical integration. I can imagine it, though. The topics feel related to me.
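If you’d like to watch Runge’s Phenomenon happen, the standard demonstration needs only Runge’s own example function, a rescaled witch, a polynomial fit through equally spaced points, and a fine grid to measure the error on. A sketch using NumPy:

```python
import numpy as np

def worst_interpolation_error(n_points):
    """Fit a degree-(n_points - 1) polynomial through n_points equally spaced
    samples of Runge's function 1 / (1 + 25 x^2) on [-1, 1], then report the
    largest error on a fine grid.  (NumPy may warn about conditioning for the
    higher degrees; the blow-up near the endpoints is the point.)"""
    x = np.linspace(-1.0, 1.0, n_points)
    y = 1.0 / (1.0 + 25.0 * x**2)
    coefficients = np.polyfit(x, y, n_points - 1)
    grid = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(np.polyval(coefficients, grid) - 1.0 / (1.0 + 25.0 * grid**2)))

# Try n_points of 5, 10, 15, 20: the worst error tends to climb instead of
# shrinking, with the trouble concentrated near the ends of the interval.
```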

I understand how none of this could fit that textbook’s slender footnote. I’m not sure any of the really good parts of the Witch of Agnesi could even fit thematically in that textbook. At least beyond the fact of its interesting name, which any good blog about the curve will explain. That there was no picture, and that the equation was beyond what the textbook had been describing, made it a challenge. Maybe not seeing what the shape was teased the mathematician out of this bored student.


And next is ‘X’. Will I take Mr Wu’s suggestion and use that to describe something “extreme”? Or will I take another topic or suggestion? We’ll see on Friday, barring unpleasant surprises. Thanks for reading.

Reading the Comics, December 4, 2018: Christmas Specials Edition


This installment took longer to write than you’d figure, because it’s the time of year we’re watching a lot of mostly Rankin/Bass Christmas specials around here. So I have to squeeze words out in-between baffling moments of animation and, like, arguing whether there’s any possibility that Jack Frost was not meant to be a Groundhog Day special that got rewritten to Christmas because the networks weren’t having it otherwise.

Graham Nolan’s Sunshine State for the 3rd is a misplaced Pi Day strip. I did check the copyright to see if it might be a rerun from when it was more seasonal.

Liz: 'I'm going to bake pies. What's your favorite?' 'Cherry!' 'Apple!' Liz 'Here comes Paul! Let's ask him, too.' Dink: 'He hates pie!' Paul: 'What are you talking about?' Dink: 'Nothing that would interest you.' Mel: 'We're talking about pie!' Paul: 'So you don't think I'm smart enough to discuss pi? Pi is the ratio of a circle's circumference to its diameter! It's a mathematical constant used in mathematics and physics! Its value is approximately 3.14159!' Mel: 'You forgot the most important thing about pie!' Paul: 'What's that?' Mel: 'It tastes delicious!' Dink: 'I hate pie!' Mel, Dink, and Liz: 'We know!'
Graham Nolan’s Sunshine State for the 3rd of December, 2018. This and other essays mentioning Sunshine State should be at this link. Or will be someday; it’s a new tag. Yeah, Paul’s so smart he almost knows the difference between it’s and its.

Jeffrey Caulfield and Brian Ponshock’s Yaffle for the 3rd is the anthropomorphic numerals joke for the week. … You know, I’ve always wondered in this sort of setting, what are two-digit numbers like? I mean, what’s the difference between a twelve and a one-and-two just standing near one another? How do people recognize a solitary number? This is a darned silly thing to wonder so there’s probably a good web comic about it.

An Old West town. an anthropomorphic 2 says to a 4, 'You know, Slim, I don't like the odds.' Standing opposite them, guns at the ready, are a hostile 5, 1, 3, and 7.
Jeffrey Caulfield and Brian Ponshock’s Yaffle for the 3rd of December, 2018. Essays inspired by Yaffle should appear at this link. It’s also a new tag, so don’t go worrying that there’s only this one essay there yet.

John Hambrock’s The Brilliant Mind of Edison Lee for the 4th has Edison forecast the outcome of a basketball game. I can’t imagine anyone really believing in forecasting the outcome, though. The elements of forecasting a sporting event are plausible enough. We can suppose a game to be a string of events. Each of them has possible outcomes. Some of them score points. Some block the other team’s score. Some cause control of the ball (or whatever makes scoring possible) to change teams. Some take a player out, for a while or for the rest of the game. So it’s possible to run through a simulated game. If you know well enough how the people playing do various things? How they’re likely to respond to different states of things? You could certainly simulate that.

Harley: 'C'mon, Edison, let's play basketball.' Edison: 'If I take into account the size and weight of the ball, the diameter of the hoop and your height in relation to it, and the number of hours someone your age would've had time to practice ... I can conclude that I'd win by 22 points. Nice game. Better luck next time.' Harley: 'But ... '
John Hambrock’s The Brilliant Mind of Edison Lee for the 4th of December, 2018. More ideas raised by Edison Lee I discuss at this link. Also it turns out Edison’s friend here is named Harley, which I mention so I have an easier time finding his name next time I need to refer to this strip. This will not work.

But all sorts of crazy things will happen, one game or another. Run the same simulation again, with different random numbers. The final score will likely be different. The course of action certainly will. Run the same simulation many times over. Vary it a little; what happens if the best player is a little worse than average? A little better? What if the referees make a lot of mistakes? What if the weather affects the outcome? What if the weather is a little different? So each possible outcome of the sporting event has some chance. We have a distribution of the possible results. We can judge an expected value, and what the range of likely outcomes is. This demands a lot of data about the players, though. Edison Lee can have it, I suppose. The premise of the strip is that he’s a genius of unlimited competence. That kind of data would be more plausible for college and professional teams.
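To be clear about how crude a version of this anyone could do, here’s a toy possession-by-possession simulation, nothing like whatever model Edison has in mind, with every probability invented for the example:

```python
import random

def simulate_game(p_score_a=0.50, p_score_b=0.48, possessions=100, points=2):
    """Each team gets `possessions` tries and scores `points` on each try with
    its own (made-up) probability; return the final score."""
    score_a = sum(points for _ in range(possessions) if random.random() < p_score_a)
    score_b = sum(points for _ in range(possessions) if random.random() < p_score_b)
    return score_a, score_b

def estimated_spread(trials=10_000):
    """Average margin of victory over many simulated games -- a crude point spread."""
    total = 0
    for _ in range(trials):
        a, b = simulate_game()
        total += a - b
    return total / trials

# With these invented numbers the spread comes out near four points; changing
# the per-possession probabilities or the number of possessions moves it.
```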

Rover, dog: 'Can I help with your homework?' Red, kid: 'How are you at long division?' Rover: 'OK, I guess. Lemme see the problem first.' (Red holds the notes out to Rover, who tears the page off and chews it up.) Red: 'That was actually short division, but it'll do nicely for now.'
Brian Basset’s Red and Rover for the 4th of December, 2018. And more Red and Rover discussions are at this link.

Brian Basset’s Red and Rover for the 4th uses arithmetic as the homework to get torn up. I’m not sure it’s just a cameo appearance. It makes a difference to the joke as told that there’s division and long division, after all. But it could really be any subject.


I’m figuring to get to the letter ‘W’ in my Fall 2018 Mathematics A To Z glossary for Tuesday. And I also figure there should be two more Reading the Comics posts this week. When posted, they’ll be at this link.

My 2018 Mathematics A To Z: Randomness


Today’s topic is an always rich one. It was suggested by aajohannas, who so far as I know hasn’t got an active blog or other project. If I’m mistaken please let me know. I’m glad to mention the creative works of people hanging around my blog.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble tiles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Randomness.

An old Sydney Harris cartoon I probably won’t be able to find a copy of before this publishes. A couple people gather around an old fanfold-paper printer. On the printout is the sequence “1 … 2 … 3 … 4 … 5 … ” The caption: ‘Bizarre sequence of computer-generated random numbers’.

Randomness feels familiar. It feels knowable. It means surprise, unpredictability. The upending of patterns. The obliteration of structure. I imagine there are sociologists who’d say it’s what defines Modernity. It’s hard to avoid noticing that the first great scientific theories that embrace unpredictability — evolution and thermodynamics — came to public awareness at the same time impressionism came to arts, and the subconscious mind came to psychology. It’s grown since then. Quantum mechanics is built on unpredictable specifics. Chaos theory tells us even if we could predict statistics it would do us no good. Randomness feels familiar, even necessary. Even desirable. A certain type of nerd thinks eagerly of the Singularity, the point past which no social interactions are predictable anymore. We live in randomness.

And yet … it is hard to find randomness. At least to be sure we have found it. We might choose between options we feel ambivalent about by tossing a coin. This seems random. But anyone who was six years old and trying to cheat a sibling knows ways around that. Drop the coin without spinning it, from a half-inch above the table, and you know the outcome, all the way through to the sibling’s punching you. When we’re older and can be made to be better sports we’re fairer about it. We toss the coin and give it a spin. There’s no way we could predict the outcome. Unless we knew just how strong a toss we gave it, and how fast it spun, and how the mass of the coin was distributed. … Really, if we knew enough, our tossed coin would be as predictable as the coin we dropped as a six-year-old. At least unless we tossed in some chaotic way, where each throw would be deterministic, but we couldn’t usefully make a prediction.

At a craps table, Commander Data looks with robo-concern at the dice in his hand. Riker, Worf, and some characters from the casino hotel watch, puzzled.
Dice are also predictable, if you are able to precisely measure how the weight inside them is distributed, and can be precise enough about how you’ll throw them, and know enough about the surface they’ll roll on. Screen capture from TrekCore’s archive of Star Trek: The Next Generation images.

Our instinctive idea of what randomness must be is flawed. That shouldn’t surprise. Our instinctive idea of anything is flawed. But randomness gives us trouble. It’s obvious, for example, that randomly selected things should have no pattern. But then how is that reasonable? If we draw letters from the alphabet at random, we should expect sometimes to get some cute pattern like ‘aaaaa’ or ‘qwertyuiop’ or the works of Shakespeare. Perhaps we mean we shouldn’t get patterns any more often than we would expect. All right; how often is that?

We can make tests. Some of them are obvious. Take something that generates possibly-random results. Look up how probable each of those outcomes is. Then run off a bunch of outcomes. Do we get about as many of each result as we should expect? Probability tells us we should get as close as we like to the expected frequency if we let the random process run long enough. If this doesn’t happen, great! We can conclude we don’t really have something random.
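The obvious test is a few lines of code. Tally a lot of simulated die rolls and compare each face’s share of the total to the expected one-sixth; a real test suite would wrap this in a chi-squared statistic, but the idea is the same:

```python
import random
from collections import Counter

def frequency_check(trials=600_000, faces=6):
    """Ratio of each face's observed count to the expected count.  Values far
    from 1.0 that don't shrink as `trials` grows are evidence against randomness."""
    counts = Counter(random.randint(1, faces) for _ in range(trials))
    expected = trials / faces
    return {face: counts[face] / expected for face in range(1, faces + 1)}
```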

We can do more tests. Some of them are brilliantly clever. Suppose there’s a way to order the results. Since mathematicians usually want numbers, putting them in order is easy to do. If they’re not, there’s usually a way to match results to numbers. You’ll see me slide here into talking about random numbers as though that were the same as random results. But if I can distinguish different outcomes, then I can label them. If I can label them, I can use numbers as labels. If the order of the numbers doesn’t matter — should “red” be a 1 or a 2? Should “green” be a 3 or an 8? — then, fine; any order is good.

There are 120 ways to order five distinct things. So generate lots of sets of, say, five numbers. What order are they in? There’s 120 possibilities. Does each of the possibilities turn up as often as expected? If they don’t, great! We can conclude we don’t really have something random.
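Here’s a sketch of that ordering test, again in Python and again aimed at Python’s own generator. I group the draws into sets of five, record which of the 120 possible orderings each set falls in, and compare the tallies with the roughly equal counts a truly random source should produce. The names are mine.

```python
import random
from collections import Counter
from math import factorial

def ordering_test(draw, group_size=5, groups=120_000):
    """Group draws into sets and tally which of the possible orderings each set shows."""
    patterns = Counter()
    for _ in range(groups):
        values = [draw() for _ in range(group_size)]
        # The "ordering": which position holds the smallest value, the next smallest, and so on.
        pattern = tuple(sorted(range(group_size), key=values.__getitem__))
        patterns[pattern] += 1
    expected = groups / factorial(group_size)  # 120 orderings for five distinct things
    return patterns, expected

patterns, expected = ordering_test(random.random)
print("expected count per ordering:", expected)
print("most common:", patterns.most_common(1))
print("least common:", patterns.most_common()[-1])
```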

I can go on. There are many tests which will let us say something isn’t a truly random sequence. They’ll allow for something like Sydney Harris’s peculiar sequence of random numbers. Mostly by supposing that if we let the generator run long enough the peculiar stretch would end. But all these tests can only rule random number generators out. Do we have any that rule them in? That say yes, this generates randomness?

I don’t know of any. I suspect there can’t be any, on the grounds that a test of a thousand or a thousand million or a thousand million quadrillion numbers can’t assure us the generator won’t break down next time we use it. If we knew the algorithm by which the random numbers were generated — oh, but there we’re foiled before we can start. An algorithm is the instructions of how to do a thing. How can an instruction tell us how to do a thing that can’t be predicted?

Algorithms seem, briefly, to offer a way to tell whether we do have a good random sequence, though. We can describe patterns. A strong pattern is easy to describe, the way a familiar story is easy to reference. A weak pattern, a random one, is hard to describe. It’s like a dream, where all you can do is list the events. So we can call random something which can’t be described any more efficiently than just giving a list of all the results. But how do we know that can’t be done? 7, 7, 2, 4, 5, 3, 8, 5, 0, 9 looks like a pretty good set of digits, whole numbers from 0 through 9. I’ll bet not more than one in ten of you guesses correctly what the next digit in the sequence is. Unless you’ve noticed that these are the digits after the decimal point in the square root of π, so that the next couple digits have to be 0, 5, 5, and 1.
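If you’d like to check my arithmetic there, a couple of lines of Python will do it, provided you have the third-party mpmath library for the extra precision (the standard math module only carries about sixteen significant digits).

```python
# Verify that 7, 7, 2, 4, 5, 3, 8, 5, 0, 9 are the digits of the square root of pi
# after the decimal point, and that 0, 5, 5, 1 follow.
from mpmath import mp

mp.dps = 20               # work with 20 significant digits
print(mp.sqrt(mp.pi))     # 1.77245385090551602...
```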

We know, on theoretical grounds, that we have randomness all around us. Quantum mechanics depends on it. If we need truly random numbers we can set up a sensor. It will turn the arrival of cosmic rays, or the decay of radioactive atoms, or the sighing of a material flexing in the heat, into numbers. We trust we gather these and process them in a way that doesn’t spoil their unpredictability. To what end?

That is, why do we care about randomness? Especially why should mathematicians care? The image of mathematics is that it is a series of logical deductions. That is, things known to be true because they follow from premises known to be true. Where can randomness fit?

One answer, one close to my heart, is called Monte Carlo methods. These are techniques that find approximate answers to questions. They do well when exact answers are too hard for us to find. They use random numbers to approximate answers and, often, to make approximate answers better. This demands computations. The field didn’t really exist before computers, although there are some neat forebears. I mean the Buffon needle problem, which lets you calculate the digits of π about as slowly as you could hope to do.
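Here’s a minimal sketch of the Buffon needle estimate, assuming a needle exactly as long as the spacing between the lines. It is, admittedly, a cheat twice over: it leans on Python’s pseudorandom generator, and it calls math.sin, which already has π baked in. Still, it shows the shape of a Monte Carlo method: simulate lots of random drops, count the fraction that do the thing of interest, and read an answer off that fraction.

```python
import math
import random

def buffon_pi_estimate(drops=1_000_000):
    """Estimate pi by dropping a needle of length 1 onto lines spaced 1 apart.

    The needle crosses a line when the distance from its center to the nearest
    line is at most (1/2)*sin(theta); that happens with probability 2/pi.
    """
    crossings = 0
    for _ in range(drops):
        distance = random.uniform(0, 0.5)        # center-to-nearest-line distance
        theta = random.uniform(0, math.pi / 2)   # needle's angle against the lines
        if distance <= 0.5 * math.sin(theta):
            crossings += 1
    return 2 * drops / crossings

print(buffon_pi_estimate())  # wanders toward pi, slowly; the error shrinks roughly like 1/sqrt(drops)
```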

Another, linked to Monte Carlo methods, is stochastic geometry. “Stochastic” is the word mathematicians attach to things when they feel they’ve said “random” too often, or in an undignified manner. Stochastic geometry is what we can know about shapes when there’s randomness about how the shapes are formed. This sounds like it’d be too weak a subject to study. That it’s built on relatively weak assumptions means it describes things in many fields, though. It can be seen in understanding how forests grow. How to find structures inside images. How to place cell phone towers. Why materials should act like they do instead of some other way. Why galaxies cluster.

There’s also a stochastic calculus, a bit of calculus with randomness added. This is useful for understanding systems where some persistent, unpredictable jostling is present. It comes, if I understand the history of this right, from studying the ways molecules will move around in weird zig-zagging twists. They do this even when there is no overall flow, just a fluid at a fixed temperature. It too has surprising applications. Without the assumption that the prices of things are regularly jostled by arbitrary and unpredictable forces, and the treatment of that by stochastic calculus methods, we wouldn’t have nearly the ability to hedge investments against weird chaotic events. This would be a bad thing, I am told by people with more sophisticated investments than I have. I personally own like ten shares of the Tootsie Roll corporation and am working my way to a $2.00 rebate check from Boyer.
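To give a feel for the zig-zagging that stochastic calculus is built to handle, here is a small Python sketch of the discrete skeleton of that motion: a particle nudged by an independent Gaussian kick at every time step. This is only the simulated path, not the calculus itself, and the step sizes are illustrative.

```python
import random

def zigzag_path(steps=1_000, dt=0.01):
    """Simulate one zig-zagging path: each time step adds an independent Gaussian nudge.

    The variance of the displacement grows linearly with elapsed time, which is
    the signature property of this kind of motion.
    """
    position = 0.0
    path = [position]
    for _ in range(steps):
        position += random.gauss(0.0, dt ** 0.5)
        path.append(position)
    return path

path = zigzag_path()
print("position after", len(path) - 1, "steps:", path[-1])
```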

Playland's Derby Racer in motion, at night, featuring a ride operator leaning maybe twenty degrees inward.
Rye Playland’s Derby Racer is the fastest carousel I’m aware of running. Riders are warned ahead of time to sit so they’re leaning to the left, and the ride will not get up to full speed until the ride operator checks everyone during the ride. To get some idea of its speed, notice the ride operator on the left and how far he leans. He’s not being dramatic; that’s the natural stance. Also the tilt in the carousel’s floor is not camera trickery; it does lean like that.

Given that we need randomness, but don’t know how to get it — or at least don’t know how to be sure we have it — what is there to do? We accept our failings and make do with “quasirandom numbers”. We find some process that generates numbers which look about like random numbers should. These have failings. Most important is that, in principle, we could predict them. They’re random like “the date Easter will fall on” is random. The date Easter will fall is not at all random; it’s defined by a specific and humanly knowable formula. But if the only information you have is that this year Easter fell on the 1st of April (Gregorian computus), you don’t have much guidance to whether next year it’ll fall on the 7th, 14th, or 21st of April. Most notably, quasirandom number generators will tend to repeat after enough numbers are drawn. If we know we won’t need enough numbers to see a repetition, though? Another stereotype of the mathematician is that of a person who demands exactness. It is often more true to say she is looking for an answer good enough. We are usually all right with a merely good enough quasirandomness.
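Here’s a deliberately tiny example of that repetition, using a linear congruential generator, one of the classic recipes for this kind of good-enough randomness (“pseudorandom” is the more common name for it). The multiplier, increment, and modulus below are chosen only to keep the cycle short enough to see; real generators use enormous ones, so you run out of patience long before the numbers come back around.

```python
def lcg(seed, a=21, c=5, m=64):
    """A toy linear congruential generator: x -> (a*x + c) mod m, forever."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=1)
first_batch = [next(gen) for _ in range(64)]   # one full cycle of this tiny generator
second_batch = [next(gen) for _ in range(64)]  # the next 64 draws
print(first_batch[:8])
print(second_batch[:8])
print("repeats exactly:", first_batch == second_batch)  # True: the sequence has cycled
```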

Boyer candies — Mallo Cups, most famously, although I like the peanut butter Smoothies more — come with a cardboard card backing. Each card has two play money “coins”, of values from 5 cents to 50 cents. These can be gathered up for a rebate check or for various prizes. Whether your coin is 5 cents, 10, 25, or 50 cents … well, there’s no way to tell, before you open the package. It’s, so far as you can tell, randomness.


My next A To Z post should be available at this link. It’s coming Tuesday and should be the letter ‘S’.

Reading the Comics, November 16, 2018: The Rest Of The Week Edition


After that busy start last Sunday, Comic Strip Master Command left only a few things for the rest of the week. Here’s everything that seemed worthy of some comment to me:

Alex Hallatt’s Arctic Circle for the 12th is an arithmetic cameo. It’s used as the sort of thing that can be tested, with the straightforward joke about animal testing to follow. It’s not a surprise that machines should be able to do arithmetic. We’ve built machines for centuries to do arithmetic. Literally; Gottfried Wilhelm Leibniz designed and built a calculating machine able to add, subtract, multiply, and divide. This accomplishment from one of the founders of integral calculus is a potent reminder of how much we can accomplish if we’re supposed to be writing instead. (That link is to Robert Benchley’s classic essay “How To Get Things Done”. It is well worth reading, both because it is funny and because it’s actually good, useful advice.)

Rabbit, reading the paper: 'Artificial intelligence could make animal testing obsolete.' Polar Bear: 'Thank goodness.' Penguin imagines the Polar Bear in school, being asked by the teacher the square root of 121, with a robot beside him whispering '11'.
Alex Hallatt’s Arctic Circle for the 12th of November, 2018. Other essays based on Arctic Circle should be at this link.

But it’s also true that animals do know arithmetic. At least a bit. Not — so far as we know — to the point they ponder square roots and such. But certainly to count, to understand addition and subtraction roughly, to have some instinct for calculations. Stanislas Dehaene’s The Number Sense: How the Mind Creates Mathematics is a fascinating book about this. I’m only wary about going deeper into the topic since I don’t know a second (and, better, third) pop book touching on how animals understand mathematics. I feel more comfortable with anything if I’ve encountered it from several different authors. Anyway it does imply the possibility of testing a polar bear’s abilities at arithmetic, only in the real world.

In school. Binkley: 'Don't say anything, Ms Harlow, but a giant spotted snorkewacker from my closet full of anxieties has followed me to school and since experience has proven that he plans to grab me, I'd like permission to go home and hide.' Ms Harlow: 'Mr Binkley, that's the stinkiest excuse I've ever heard for getting out of a geometry exam. Go sit down.' Binkley's face-down at his desk; the Giant Spotted Snorklewacker asks, 'Pssst! What's the Pythagorean theorem?'
Berkeley Breathed’s Bloom County rerun for the 13th of November, 2018. It originally ran the 17th of February, 1983. Never mind the copyright notice; those would often show the previous year the first couple weeks of the year. Essays based on topics raised by Bloom County — original or modern continuation — should be at this link.

Berkeley Breathed’s Bloom County rerun for the 13th has another mathematics cameo. Geometry’s a subject worthy of stoking Binkley’s anxieties, though. It has a lot of definitions that have to be carefully observed. And while geometry reflects the understanding we have of things from moving around in space, it demands a precision that we don’t really have an instinct for. It’s a lot to worry about.

Written into two ring stains on a napkin: 'People who drink coffee'. 'People who drink tea'. Pointing to the intersection: 'People who share napkins.'
Terry Border’s Bent Objects for the 15th of November, 2018. Other essays based on Bent Objects will be at this link. It’s a new tag, so for now, there’s just that.

Terry Border’s Bent Objects for the 15th is our Venn Diagram joke for the week. I like this better than I think the joke deserves, probably because it is done in real materials. (Which is the Bent Objects schtick; it’s always photographs of objects arranged to make the joke.)

Teacher: 'I need to buy some graph paper for my students. Is there a convenience store near here?' Guy: 'Yeah, just two miles away from campus.' Later: Teacher, driving, realizes: 'Wait, he didn't specify a coordinate system. NOOOOOOO!' as her car leaps into the air.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 15th of November, 2018. In case there’s ever another essay which mentions Saturday Morning Breakfast Cereal it’ll be at this link.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 15th is a joke on knowing how far to travel but not what direction. Normal human conversations carry contextually reasonable suppositions. Told something is two miles away, it’s probably along the major road you’re on, or immediately nearby. I’d still ask for clarification if told something was “two miles away”. Two blocks, I’d let slide, on the grounds that it’s no big deal to correct a mistake.

Still, mathematicians carry defaults with them too. They might be open to a weird, general case, certainly. But we have expectations. There’s usually some obvious preferred coordinate system, or directions. If it’s important that we be ready for alternatives we highlight that. We specify the coordinate system we want. Perhaps we specify we’re taking that choice “without loss of generality”, that is, promising the argument would work just as well with any other choice.

I noticed the mathematician’s customized plate too. “EIPI1” is surely a reference to the expression e^{\imath \pi} + 1 . That sum, it turns out, equals zero. It reflects this curious connection between exponentiation, complex-valued numbers, and the trigonometric functions. It’s a weird thing to know is true, and it’s highly regarded in certain nerd circles for that weirdness.
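You can watch that happen, up to floating-point fuzz, with two lines of Python’s cmath module:

```python
import cmath

# Euler's identity, numerically: e^(i*pi) + 1 comes out as zero plus a speck of rounding error.
print(cmath.exp(1j * cmath.pi) + 1)   # roughly 1.2e-16j
```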

The Odds. Guy checking his phone after his friend's been knocked down: 'There's tons of stuff about being struck by a bolt of lightning --- nothing about bolts of fabric.' [Title panel extra gag: 'Lucky for you it's soft and silky.']
Hilary Price’s Rhymes With Orange for the 16th of November, 2018. And times I’ve discussed something from Rhymes With Orange should be at this link.

Hilary Price’s Rhymes With Orange for the 16th features a what-are-the-odds sort of joke, this one about being struck by a bolt from the sky. Lightning’s the iconic bolt to strike someone, and be surprising about it. Fabric would be no less surprising, though. And there’s no end of stories of weird things falling from the skies. It’s easier to get stuff into the sky than you might think, and there are only a few options once that’s happened.


And as ever, all my Reading the Comics posts should be at this link.

Through the end of December my Fall 2018 Mathematics A To Z continues. I’m still open for topics to discuss from the last half-dozen letters of the alphabet. Even if someone’s already given a word for some letter, suggest something anyway. You might inspire me in good ways.

My 2018 Mathematics A To Z: Infinite Monkey Theorem


Dina Yagodich gave me the topic for today. She keeps up a YouTube channel with a variety of interesting videos. And she did me a favor. I’ve been thinking for a long while about writing a major post about this theorem. Its subject turns up so often. I’d wanted to have a good essay about it. I hope this might be one.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble titles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Infinite Monkey Theorem.

Some mathematics escapes mathematicians and joins culture. This is one such. The monkeys are part of why. They’re funny and intelligent and sad and stupid and deft and clumsy, and they can sit at a keyboard and look almost in place. They’re so like humans, except that we empathize with them. To imagine lots of monkeys, put to some silly task, is compelling.

Monkey Typewriter Theory: An immortal monkey pounding on a typewriter will eventually reproduce the text of 'Hamlet'. Baby Keyboard Theory: Left alone, a baby pounding on a computer keyboard will eventually order 32 cases of bathroom caulk from an online retailer.
Paul Trapp’s Thatababy for the 13th of February, 2014.

The metaphor traces back to a 1913 article by the mathematical physicist Émile Borel which I have not read. Searching the web I find much more comment about it than I find links to a translation of the text. And only one copy of the original, in French. And that page wants €10 for it. So I can tell you what everybody says was in Borel’s original text, but can’t verify it. The paper’s title is “Statistical Mechanics and Irreversibility”. From this I surmise that Borel discussed one of the great paradoxes of statistical mechanics. If we open a bottle of one gas in an airtight room, it disperses through the room. Why doesn’t every molecule of gas just happen, by chance, to end up back where it started? It does seem that if we waited long enough, it should. It’s unlikely it would happen on any one day, but give it enough days …

But let me turn to many web sites that are surely not all copying Wikipedia on this. Borel asked us to imagine a million monkeys typing ten hours a day. He posited it was possible but extremely unlikely that they would exactly replicate all the books of the richest libraries of the world. But that would be more likely than the atmosphere in a room un-mixing like that. Fair enough, but we’re not listening anymore. We’re thinking of monkeys. Borel’s is a fantastic image. It would see some adaptation over the years. Physicist Arthur Eddington, in 1928, made it an army of monkeys, with their goal being to write all the books in the British Museum. By 1960 Bob Newhart had an infinite number of monkeys and typewriters, and a goal of all the great books. Stating the premise got a laugh then; I doubt the setup alone would today. I’m curious whether Newhart brought the idea to the mass audience. (Google NGrams for “monkeys at typewriters” suggest that phrase was unwritten, in books, before about 1965.) We may owe Bob Newhart thanks for a lot of monkeys-at-typewriters jokes.

Kid: 'Mom, Dad, I want to go bungee jumping this summer!' Dad: 'A thousand monkeys working a thousand typewriters would have a better chance of randomly typing the complete works of William Shakespeare over the summer than you have of bungee jumping.' (Awkward pause.) Kid: 'What's a typewriter?' Dad: 'A thousand monkeys randomly TEXTING!'
Bill Hinds’s Cleats rerun for the 1st of July, 2018.

Newhart has a monkey hit on a line from Hamlet. I don’t know if it was Newhart that set the monkeys after Shakespeare particularly, rather than some other great work of writing. Shakespeare does seem to be the most common goal now. Sometimes the number of monkeys diminishes, to a thousand or even to one. Some people move the monkeys off of typewriters and onto computers. Some take the cowardly measure of putting the monkeys at “keyboards”. The word is ambiguous enough to allow for typewriters, computers, and maybe a Mergenthaler Linotype. The monkeys now work 24 hours a day. This will be a comment someday about how bad we allowed pre-revolutionary capitalism to get.

The cultural legacy of monkeys-at-keyboards might well itself be infinite. It turns up in comic strips every few weeks at least. Television shows, usually writing for a comic beat, mention it. Computer nerds doing humor can’t resist the idea. Here’s a video of a 1979 Apple ][ program titled THE INFINITE NO. OF MONKEYS, which used this idea to show programming tricks. And it’s a great philosophical test case. If a random process puts together a play we find interesting, has it created art? No deliberate process creates a sunset, but we can find in it beauty and meaning. Why not words? There’s likely a book to write about the infinite monkeys in pop culture. Though the quotations of original materials would start to blend together.

But the big question. Have the monkeys got a chance? In a break from every probability question ever, the answer is: it depends on what the question precisely is. Occasional real-world experiments-cum-art-projects suggest that actual monkeys are worse typists than you’d think. They do more of bashing the keys with a stone before urinating on it, a reminder of how slight is the difference between humans and our fellow primates. So we turn to abstract monkeys who behave more predictably, and run experiments that need no ethical oversight.

Toby: 'So this English writer is like a genius, right? And he's the greatest playwright ever. And I want to be just like him! Cause what he does, see, is he gets infinite monkeys on typewriters and just lets 'em go nuts, so eventually they write ALL of Shakespeare's plays!' Brother: 'Cool! And what kind of monkey is an 'infinite'?' Toby: 'Beats me, but I hope I don't have to buy many of them.' Dad: 'Toby, are you *sure* you completely pay attention when your teachers are talking?' Toby: 'What? Yes! Why?'

Greg Cravens’ The Buckets for the 30th of March, 2014.

So we must think what we mean by Shakespeare’s Plays. Arguably the play is a specific performance of actors in a set venue doing things. This is a bit much to expect of even a skilled abstract monkey. So let us switch to the book of a play. This has a clearer representation. It’s a string of characters. Mostly letters, some punctuation. Good chance there’s numerals in there. It’s probably a lot of characters. So the text to match is some specific, long string of characters in a particular order.

And what do we mean by a monkey at the keyboard? Well, we mean some process that picks characters randomly from the allowed set. When I see something is picked “randomly” I want to know what the distribution rule is. Like, are Q’s exactly as probable as E’s? As &’s? As %’s? How likely it is a particular string will get typed is easiest to answer if we suppose a “uniform” distribution. This means that every character is equally likely. We can quibble about capital and lowercase letters. My sense is most people frame the problem supposing case-insensitivity. That the monkey is doing fine to type “whaT beArD weRe i BEsT tO pLAy It iN?”. Or we could set the monkey at an old typesetter’s station, with separate keys for capital and lowercase letters. Some will even forgive the monkeys punctuating terribly. Make your choices. It affects the numbers, but not the point.
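If you’d like to watch an abstract monkey work, here’s a small Python sketch of one. It picks characters uniformly at random, which is the distribution choice above, but I’ve shrunk the job to a 27-character alphabet and a three-letter target so the run finishes in a moment; the expected wait is about 27^3, just under 20,000 keystrokes. Swap in 91 characters and an 800,000-character play and you see the problem.

```python
import random
import string

def keystrokes_until(target, alphabet=string.ascii_lowercase + " "):
    """Type uniformly random characters until the target string has just been typed.

    Returns the number of keystrokes that took. Small alphabet, small target:
    the point is the method, not the scale.
    """
    recent = ""
    keystrokes = 0
    while not recent.endswith(target):
        recent = (recent + random.choice(alphabet))[-len(target):]
        keystrokes += 1
    return keystrokes

print(keystrokes_until("cat"))   # on average about 27**3 = 19,683 keystrokes
```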

Literary Calendar. Several jokes, including: Saturday 7pm: an infinite number of chimpanzees discuss their multi-volume 'Treasury of Western Literature with no Typos' at the Museum of Natural History. Nit picking to follow.
Richard Thompson’s Richard’s Poor Almanac rerun for the 7th of November, 2016.

I’ll suppose there are 91 characters to pick from, as a Linotype keyboard had. So the monkey has capitals and lowercase and common punctuation to get right. Let your monkey pick one character. What is the chance it hit the first character of one of Shakespeare’s plays? Well, the chance is 1 in 91 that you’ve hit the first character of one specific play. There’s several dozen plays your monkey might be typing, though. I bet some of them even start with the same character, so giving an exact answer is tedious. If all we want is monkey-typed Shakespeare plays, we’re being fussy if we want The Tempest typed up first and Cymbeline last. If we want a more tractable problem, it’s easier to insist on a set order.

So suppose we do have a set order. Then there’s a one-in-91 chance the first character matches the first character of the desired text. A one-in-91 chance the second character typed matches the second character of the desired text. A one-in-91 chance the third character typed matches the third character of the desired text. And so on, for the whole length of the play’s text. Getting one character right doesn’t make it more or less likely the next one is right. So the chance of getting a whole play correct is \frac{1}{91} raised to the power of however many characters are in the first script. Call it 800,000 for argument’s sake. More characters, if you put two spaces between sentences. The prospect of getting this all correct is … dismal.
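Dismal how? The number is far too small for ordinary floating point to even hold, but its base-10 logarithm is easy to get at. A quick Python check, using the 91 characters and 800,000-character length assumed above:

```python
import math

alphabet_size = 91
characters_in_play = 800_000

# log10 of (1/91)^800,000: the probability of one attempt typing the whole play correctly.
log10_chance = characters_in_play * math.log10(1 / alphabet_size)
print(log10_chance)   # roughly -1.57 million: a decimal point followed by about
                      # 1.57 million zeros before the first nonzero digit
```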

I mean, there’s some cause for hope. Spelling was much less fixed in Shakespeare’s time. There are acceptable variations for many of his words. It’d be silly to rule out a possible script that (say) wrote “look’d” or “look’t”, rather than “looked”. Still, that’s a slender thread.

Proverb Busters: testing the validity of old sayings. Doctor: 'A hundred monkeys at a hundred typewriters. Over time, will one of them eventually write a Shakespeare play?' Winky: 'Nope. Just the script for Grown-Ups 3'. Doctor: 'Another proverb busted.'
Tim Rickard’s Brewster Rockit for the 1st of April, 2014.

But there is more reason to hope. Chances are the first monkey will botch the first character. But what if they get the first character of the text right on the second character struck? Or on the third character struck? It’s all right if there’s some garbage before the text comes up. Many writers have trouble starting and build from a first paragraph meant to be thrown away. After every wrong letter is a new chance to type the perfect thing, reassurance for us all.

Since the monkey does type, hypothetically, forever … each keystroke has a probability of only \left(\frac{1}{91}\right)^{800,000} (or whatever) of starting the lucky sequence. But the monkey will have 91^{800,000} chances to start. More chances than that.

And we don’t have only one monkey. We have a thousand monkeys. At least. A million monkeys. Maybe infinitely many monkeys. Each one, we trust, is working independently, owing to the monkeys’ strong sense of academic integrity. There are 91^{800,000} monkeys working on the project. And more than that. Each one takes their chance.

Melvin: 'Hold on now --- replacement? Who could you find to do all the tasks only Melvin can perform?' Rita: 'A macaque, in fact. Listen, if an infinite number of monkeys can write all the great works, I'm confident that one will more than cover for you.'
John Zakour and Scott Roberts’s Working Daze for the 29th of May, 2018.

There are dizzying possibilities here. There’s the chance some monkey will get it all exactly right first time out. More. Think of a row of monkeys. What’s the chance the first thing the first monkey in the row types is the first character of the play? What’s the chance the first thing the second monkey in the row types is the second character of the play? The chance the first thing the third monkey in the row types is the third character in the play? What’s the chance a long enough row of monkeys happen to hit the right buttons so the whole play appears in one massive simultaneous stroke of the keys? Not any worse than the chance your one monkey will type this all out. Monkeys at keyboards are ergodic. It’s as good to have a few monkeys working a long while as to have many monkeys working a short while. The Mythical Man-Month is, for this project, mistaken.

That solves it then, doesn’t it? A monkey, or a team of monkeys, has a nonzero probability of typing out all Shakespeare’s plays. Or the works of Dickens. Or of Jorge Luis Borges. Whatever you like. Given infinitely many chances at it, they will, someday, succeed.

Except.

A thousand monkeys at a thousand typewriters ... will eventually write 'Hamlet'. A thousand cats at a thousand typewriters ... will tell you to go write your own danged 'Hamlet'.
Doug Savage’s Savage Chickens for the 14th of August, 2018.

What is the chance that the monkeys screw up? They get the works of Shakespeare just right, but for a flaw. The monkeys’ Midsummer Night’s Dream insists on having the fearsome lion played by “Smaug the joiner” instead. This would send the play-within-the-play in novel directions. The result, though interesting, would not be Shakespeare. There’s a nonzero chance they’ll write the play that way. And so, given infinitely many chances, they will.

What’s the chance that they always will? That they just miss every single chance to write “Snug”. It comes out “Smaug” every time?

Eddie: 'You know the old saying about putting an infinite number of monkeys at an infinite number of typewriters, and eventually they'll accidentally write Shakespeare's plays?' Toby: 'I guess.' Eddie: 'My English teacher says that nothing about our class should worry those monkeys ONE BIT!'
Greg Cravens’s The Buckets for the 6th of October, 2018.

We can say. Call the probability that they make this Snug-to-Smaug typo any given time p . That’s a number from 0 to 1. 0 corresponds to never making this mistake; 1 to certainly making it. The chance they get it right is 1 - p . The chance they make this mistake twice in a row is p times p, smaller than p . The chance that they get it right at least once in two tries is closer to 1 than 1 - p is. The chance that, given three tries, they make the mistake every time is even smaller still. The chance that they get it right at least once is even closer to 1.

You see where this is going. Every extra try makes the chance they got it wrong every time smaller. Every extra try makes the chance they get it right at least once bigger. And now we can let some analysis come into play.

So give me a positive number. I don’t know your number, so I’ll call it ε. It’s how unlikely you want something to be before you say it won’t happen. Whatever your ε was, I can give you a number M . If the monkeys have taken more than M tries, the chance they get it wrong every single time is smaller than your ε. The chance they get it right at least once is bigger than 1 – ε. Let the monkeys have infinitely many tries. The chance the monkey gets it wrong every single time is smaller than any positive number. So the chance the monkey gets it wrong every single time is zero. It … can’t happen, right? The chance they get it right at least once is closer to 1 than to any other number. So it must be 1. So it must be certain. Right?
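If you want to see how that M depends on your ε, the arithmetic is a one-liner: we need p^M < ε, which happens once M exceeds log(ε)/log(p). A sketch, with made-up numbers for p and ε:

```python
import math

def tries_needed(p, epsilon):
    """Smallest M so that the chance of the typo happening every single time, p**M, is below epsilon."""
    # p**M < epsilon  <=>  M > log(epsilon) / log(p)   (both logarithms are negative)
    return math.ceil(math.log(epsilon) / math.log(p))

# Hypothetical numbers, just to see the shape of the argument:
print(tries_needed(p=0.999, epsilon=1e-12))   # about 27,600 tries
```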

Poncho, the dog, looking over his owner's laptop: 'They say if you let an infinite number of cats walk on an infinite number of keyboards, they'll eventually type all the great works of Shakespeare.' The cat walks across the laptop, connecting to their owner's bank site and entering the correct password. Poncho: 'I'll take it.'
Paul Gilligan’s Pooch Cafe for the 17th of September, 2018.

But let me give you this. Detach a monkey from typewriter duty. This one has a coin to toss. It tosses fairly, with the coin having a 50% chance of coming up tails and 50% chance of coming up heads each time. The monkey tosses the coin infinitely many times. What is the chance the coin comes up tails every single one of these infinitely many times? The chance is zero, obviously. At least you can show the chance is smaller than any positive number. So, zero.

Yet … what power enforces that? What forces the monkey to eventually have a coin come up heads? It’s … nothing. Each toss is a fair toss. Each toss is independent of its predecessors. But there is no force that causes the monkey, after a hundred million billion trillion tosses of “tails”, to then toss “heads”. It’s the gambler’s fallacy to think there is one. The hundred million billion trillionth-plus-one toss is as likely to come up tails as the first toss is. It’s impossible that the monkey should toss tails infinitely many times. But there’s no reason it can’t happen. It’s also impossible that the monkeys still on the typewriters should get Shakespeare wrong every single time. But there’s no reason that can’t happen.
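A couple of lines of Python make the coin-tossing half of this concrete. The chance of n tails in a row shrinks below any positive number you care to name; and yet any particular string of n tosses has exactly that same probability, which is why no force ever steps in to demand a heads.

```python
# The chance of n tails in a row shrinks below any positive number you care to name ...
for n in (10, 50, 100):
    print(n, "tails in a row has probability", 0.5 ** n)

# ... and yet every specific sequence of n tosses has that same probability,
# so nothing about a long run of tails makes the next heads any more likely.
```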

It’s unsettling. Well, probability is unsettling. If you don’t find it disturbing you haven’t thought long enough about it. Infinities are unsettling too.

Researcher overseeing a room of monkeys: 'Shakespeare would be OK, but I'd prefer they come up with a good research grant proposal.'
John Deering’s Strange Brew for the 20th of February, 2014.

Formally, mathematicians interpret this — if not explain it — by saying the set of things that can happen is a “probability space”. The likelihood of something happening is what fraction of the probability space matches something happening. (I’m skipping a lot of background to say something that simple. Do not use this at your thesis defense without that background.) This sort of “impossible” event has “measure zero”. So its probability of happening is zero. Measure turns up in analysis, in understanding how calculus works. It complicates a bunch of otherwise-obvious ideas about continuity and stuff. It turns out to apply to probability questions too. Imagine the space of all the things that could possibly happen as being the real number line. Pick one number from that number line. What is the chance you have picked exactly the number -24.11390550338228506633488? I’ll go ahead and say you didn’t. It’s not that you couldn’t. It’s not impossible. It’s just that the chance that this happened, out of the infinity of possible outcomes, is zero.

The infinite monkeys give us this strange set of affairs. Some things have a probability of zero of happening, which does not rule out that they can. Some things have a probability of one of happening, which does not mean they must. I do not know what conclusion Borel ultimately drew about the reversibility problem. I expect his opinion to be that we have a clear answer, and unsettlingly great room for that answer to be incomplete.


This and other Fall 2018 Mathematics A-To-Z posts can be read at this link. The next essay should come Friday and will, I hope, be shorter.