I apologize to people who want to know the most they can about the comic strips of the past week. I’ve not had time to write about them. Part of what has kept me busy is a visit to Lakemont Park, in Altoona, Pennsylvania. The park has had several bad years, including two years in which it did not open at all. But still standing at the park is the oldest-known roller coaster, Leap The Dips.

My first visit to this park, in 2013, gave me, among other things, a mathematical question to ask. That is: could any of the many pieces of wood in it be original? How many original pieces would you expect?

Problems of this form happen all the time. They turn up whenever there’s something which has a small chance of happening, but many chances to happen. In this case, there’s a small chance that any particular piece of wood will need replacing. But there are a lot of pieces of wood, and they might need replacement at any ride inspection. So there’s an obvious answer to how likely it is any piece of wood would survive a century-plus. And, from that, how much of that wood should be original.
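If you would like to see the shape of the calculation, here is a sketch in Python. The replacement rate and the count of wood pieces are numbers I am making up for illustration; I have no idea what the park's actual figures are.

```python
# A sketch of the survival estimate, with invented numbers: suppose each
# piece of wood independently has a 3 percent chance of needing
# replacement in any given year.
p_replace_per_year = 0.03   # assumed rate, not a real inspection figure
years = 111                 # Leap The Dips opened in 1902; my visit was in 2013

p_survive = (1 - p_replace_per_year) ** years
print(f"Chance any one piece is original: {p_survive:.4f}")

# With, say, 10,000 pieces of wood (again invented), the expected number
# of originals is just the count times that probability.
pieces = 10_000
print(f"Expected original pieces: {pieces * p_survive:.1f}")
```

With these numbers only a few hundred pieces would be expected to survive, which is why the question is interesting at all.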

The sad thing to say about revisiting Lakemont Park — well, one is that the park has lost almost all its amusement park rides. It’s got athletic facilities, and a couple miniature golf courses, but besides two wooden and one kiddie roller coaster, and an antique-cars ride, there’s not much left of its long history as an amusement park. But the other thing is that Leap The Dips was closed when I was able to visit. The ride’s under repairs, and seems to be getting painted too. This is sad, but I hope it implies better things soon.

Three of the strips I have for this installment feature kids talking about mathematics. That’s enough for a theme name.

Gary Delainey and Gerry Rasmussen’s Betty for the 23rd is a strip about luck. It’s easy to form the superstitious view that you have a finite amount of luck, or that you have good and bad lucks which offset each other. It feels like it. If you haven’t felt like it, then consider that time you got an unexpected $200, hours before your car’s alternator died.

If events are independent, though, that’s just not so. Whether you win $600 in the lottery this week has no effect on whether you win any next week. Similarly whether you’re struck by lightning should have no effect on whether you’re struck again.

Except that this assumes independence. Indeed, that is what independence means. This is obvious when you consider that, having won $600, it’s easier to buy an extra twenty dollars in lottery tickets, and that does increase your (tiny) chance of winning again. If you’re struck by lightning, perhaps it’s because you tend to be someplace that’s often struck by lightning. Probability is a subtler topic than everyone acknowledges, even when they remember that it is such a subtle topic.

Darrin Bell’s Candorville for the 23rd jokes about the uselessness of arithmetic in modern society. I’m a bit surprised at Lemont’s glee in not having to work out tips by hand. The character’s usually a bit of a science nerd. But liking science is different from enjoying doing arithmetic. And bad experiences learning mathematics can sour someone on the subject for life. (Which is true of every subject. Compare the number of people who come out of gym class enjoying physical fitness.)

If you need some Internet Old, read the comments at GoComics, which include people offering dire warnings about what you need in case your machine gives the wrong answer. Which is technically true, but for this application? Getting the wrong answer is not an immediately awful affair. There’s also a lot of cranky complaining about tipping having risen to 20% just because the United States continues its economic punishment of working people.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 25th is some wordplay. Mathematicians often need to find minimums of things. Or maximums of things. Being able to do one lets you do the other, as you’d expect. If you didn’t expect that, think about it a moment, and then you will. So min and max are often grouped together.
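That pairing is more than habit: minimizing a thing is the same problem as maximizing its negative, so one algorithm serves both. A tiny Python check of the idea, with a made-up list of numbers:

```python
values = [3.2, -1.5, 0.0, 7.8, -4.1]

# min f equals -max(-f) ...
assert min(values) == -max(-v for v in values)

# ... and the arg-min of f is the arg-max of -f.
assert min(values) == max(values, key=lambda v: -v)

print(min(values))   # -4.1, either way you compute it
```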

Paul Trap’s Thatababy for the 26th is circling around wordplay, turning some common shape names into pictures. This strip might be aimed at mathematics teachers’ doors. I’d certainly accept these as jokes that help someone learn their shapes.

If you’ve been following me on Twitter you’ve seen reports of the Great Migration. This is the pompous name I give to the process of bringing the goldfish who were in tanks in the basement for the winter back outside again. This to let them enjoy the benefits of the summer, like, not having me poking around testing their water every day. (We had a winter with a lot of water quality problems. I’m probably over-testing.)

The Great Migration finally: four goldfish brought outside today. 12 remain in the left tank, 14 in the right, I think.

My reports about moving them back — by setting a net in that could trap some fish and moving them out — included reports of how many remained in each tank. And many people told me how such updates as “Twelve goldfish are in the left tank, three in the right, and fifteen have been brought outside” sound like the start of a story problem. Maybe it does. I don’t have a particular story problem built on this. I’m happy to take nominations for such.

But I did have some mathematics essays based on the problem of moving goldfish to the pond outdoors and to the warm water tank indoors:

How To Count Fish, about how one could estimate a population by sampling it twice.

How To Re-Count Fish, about one of the practical problems in using this to count as few goldfish as we have at our household.

How Not To Count Fish, about how this population estimate wouldn’t work because of the peculiarities of goldfish psychology. Honest.
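The two-sample estimate the first of those essays describes is, if I am summarizing myself fairly, the Lincoln-Petersen calculation, which is short enough to sketch in Python. The fish counts below are invented for the example:

```python
def lincoln_petersen(first_catch, second_catch, recaptured):
    """Estimate a population: mark first_catch individuals and release
    them; later catch second_catch individuals and count how many of
    those (recaptured) carry marks."""
    if recaptured == 0:
        raise ValueError("no recaptures; the estimate is unbounded")
    return first_catch * second_catch / recaptured

# Invented goldfish-scale numbers: net and mark 8 fish; later net 10
# fish, of which 3 turn out to be marked.
print(lincoln_petersen(8, 10, 3))   # estimates about 27 fish
```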

That I spend one essay describing how to do a thing, and then two more essays describing why it won’t work, may seem characteristically me. Well, yeah. Mathematics is a great tool. To use a tool safely requires understanding its powers and its limitations. I like thinking about what mathematics can and can’t do.

The first, important, thing is that I have not disappeared or done something worse. I just had one of those weeks where enough was happening that something had to give. I could either write up stuff for my mathematics blog, or I could feel guilty about not writing stuff up for my mathematics blog. Since I didn’t have time to do both, I went with feeling guilty about not writing, instead. I’m hoping this week will give me more writing time, but I am fooling only myself.

Second is that Comics Kingdom has, for all my complaining, gotten less bad in the redesign. Mostly in that the whole comics page loads at once, now, instead of needing me to click to “load more comics” every six strips. Good. The strips still appear in weird random orders, especially strips like Prince Valiant that only run on Sundays, but still. I can take seeing a vintage Boner’s Ark Sunday strip six unnecessary times. The strips are still smaller than they used to be, and they’re not using the decent, three-row format that they used to. And the archives don’t let you look at a week’s worth in one page. But it’s less bad, and isn’t that all we can ever hope for out of the Internet anymore?

And finally, Comic Strip Master Command wanted to make this an easy week for me by not having a lot to write about. It got so light I’ve maybe overcompensated. I’m not sure I have enough to write about here, but, I don’t want to completely vanish either.

Dave Whamond’s Reality Check for the 15th is … hm. Well, it’s not an anthropomorphic-numerals joke. It is some kind of wordplay, making concrete a common phrase about, and attitude toward, numbers. I could make the fussy difference between numbers and numerals here but I’m not sure anyone has the patience for that.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 17th touches around mathematics without, I admit, necessarily saying anything specific. The angel(?) welcoming the man to heaven mentions creating new systems of mathematics as some fit job for the heavenly host. The discussion of creating self-consistent physics systems seems mathematical in nature too. I’m not sure whether saying one could “attempt” to create self-consistent physics is meant to imply that our universe’s physics are not self-consistent. To create a “maximally complex reality using the simplest possible constructions” seems like a mathematical challenge as well. There are important fields of mathematics built on optimizing, trying to create the most extreme of one thing subject to some constraints or other.

I think the strip’s premise is the old, partially a joke, concept that God is a mathematician. This would explain why the angel(?) seems to rate doing mathematics or mathematics-related projects as so important. But even then … well, consider. There’s nothing about designing new systems of mathematics that ordinary mortals can’t do. Creating new physics or new realities is beyond us, certainly, but designing the rules for such seems possible. I think I understood this comic better when I had thought about it less. Maybe including it in this column has only made trouble for me.

Doug Savage’s Savage Chickens for the 17th amuses me by making a strip out of a logic paradox. It’s not quite your “this statement is a lie” paradox, but it feels close to that, to me. To have the first chicken call it “Birthday Paradox” also teases a familiar probability problem. It’s not a true paradox. It merely surprises people who haven’t encountered the problem before. This would be the question of how many people you need to have in a group before there’s a 50 percent (75 percent, 99 percent, whatever you like) chance of at least one pair sharing a birthday.

And I notice on Wikipedia a neat variation of this birthday problem. This generalization considers splitting people into two distinct groups, and how many people you need in each group to have a set chance of a pair, one person from each group, sharing a birthday. Apparently either a 32-person group of 16 women and 16 men, or a 49-person group of 43 women and six men, has a 50% chance of some woman-man pair sharing a birthday. Neat.
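Those figures are easy to check by simulation, at least roughly. Here is a Python sketch, assuming birthdays uniform and independent, which are the assumptions the problem always makes:

```python
import random

def cross_pair_share(n_women, n_men, trials=100_000, days=365):
    """Estimate the chance at least one woman-man pair shares a
    birthday, with birthdays uniform and independent."""
    hits = 0
    for _ in range(trials):
        women = {random.randrange(days) for _ in range(n_women)}
        men = {random.randrange(days) for _ in range(n_men)}
        if women & men:     # any birthday appearing in both groups?
            hits += 1
    return hits / trials

# Both of Wikipedia's examples should come out near one-half:
print(cross_pair_share(16, 16))
print(cross_pair_share(43, 6))
```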

Mark Parisi’s Off The Mark for the 18th sports a bit of wordplay. It’s built on how multiplication and division also have meanings in biology. … If I’m not mis-reading my dictionary, “multiply” meant any increase in number first, and the arithmetic operation we now call multiplication afterwards. Division, similarly, meant to separate into parts before it meant the mathematical operation as well. So it might be fairer to say that multiplication and division are words that picked up mathematical meaning.

I had a slight nagging feeling about this. A couple years back I calculated the most and least probable dates for Easter, on the Gregorian calendar, using the current computus. That essay’s here, with results about how often we can expect Easter and when. It also holds some thoughts about whether the probable dates of Easter are even a thing that can be meaningfully calculated. And it turns out, uncharacteristically, that I forgot to do a follow-up calculating the dates of Easter on the Julian calendar. Maybe I’ll get to it yet.

And we had another of those peculiar days where a lot of strips are on-topic enough for me to talk about.

Eric the Circle, this one by Kyle, for the 26th has a bit of mathematical physics in it. This is the kind of diagram you’ll see all the time, at least if you do the mathematics that tells you where things will be and when. The particular example is an easy problem, a thing rolling down an inclined plane. But the work done for it applies to more complicated problems. The question it’s for is, “what happens when this thing slides down the plane?” And that depends on the forces at work. There’s gravity, certainly. If there were something else it’d be labelled. Gravity’s represented with that arrow pointing straight down. That gives us the direction. The label (Eric)(g) gives us how strong this force is.

Where the diagram gets interesting, and useful, are those dashed lines ending in arrows. One of those lines is, or at least means to be, parallel to the incline. The other is perpendicular to it. These both reflect gravity. We can represent the force of gravity as a vector. That means, we can represent the force of gravity as the sum of vectors. This is like how we can write “8” or we can write “3 + 5”, depending on what’s more useful for what we’re doing. (For example, if you wanted to work out “67 + 8”, you might be better off doing “67 + 3 + 5”.) The vector parallel to the plane and the one perpendicular to the plane add up to the original gravity vector.

The force that’s parallel to the plane is the only force that’ll actually accelerate Eric. The force perpendicular to the plane just … keeps it snug against the plane. (Well, it can produce friction. We try not to deal with that in introductory physics because it is so hard. At most we might look at whether there’s enough friction to keep Eric from starting to slide downhill.) The magnitudes of the forces parallel and perpendicular to the plane are easy enough to work out. These two forces and the original gravity can be put together into a little right triangle. It’s the same shape as, but a different size from, the right triangle made by the inclined plane plus a horizontal and a vertical axis. So that’s how the diagram knows the parallel force is the original gravity times the sine of x. And that the perpendicular force is the original gravity times the cosine of x.
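To put numbers to the diagram, here is a small Python sketch of the decomposition. The weight and the angle are arbitrary choices of mine, nothing Eric-specific:

```python
import math

def incline_forces(weight, angle_deg):
    """Split a weight (Eric times g, in the strip's terms) into
    components parallel and perpendicular to an incline."""
    x = math.radians(angle_deg)
    parallel = weight * math.sin(x)        # accelerates the block downhill
    perpendicular = weight * math.cos(x)   # presses the block against the plane
    return parallel, perpendicular

par, perp = incline_forces(weight=100.0, angle_deg=30.0)
print(par, perp)   # about 50.0 and 86.6

# The components recombine into the original force, which shows up in
# the magnitudes as the Pythagorean theorem:
assert math.isclose(math.hypot(par, perp), 100.0)
```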

The perpendicular force is often called the “normal” force. This because mathematical physicists noticed we had only 2,038 other, unrelated, things called “normal”.

Rick Detorie’s One Big Happy for the 26th sees Ruthie demand to know who this Venn person was. Fair question. Mathematics often gets presented as these things that just are. That someone first thought about these things gets forgotten.

John Venn, who lived from 1834 to 1923 — he died the 4th of April, it happens — was an English mathematician and philosopher and logician and (Anglican) priest. This is not a rare combination of professions. From 1862 he was a lecturer in Moral Science at Cambridge. This included work in logic, yes. But he also worked on probability questions. Wikipedia credits his 1866 Logic Of Chance with advancing the frequentist interpretation of probability. This is one of the major schools of thought about what the “probability of an event” is. It’s the one where you list all the things that could possibly happen, and consider how many of those are the thing you’re interested in. So when you do a problem like “what’s the probability of rolling two six-sided dice and getting a total of four?”, you’re doing a frequentist probability problem.
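The frequentist recipe for that dice question can be followed quite literally in a few lines of Python: list every equally likely outcome, and count the ones you care about.

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # all 36 rolls of two dice
favorable = [roll for roll in outcomes if sum(roll) == 4]

# Three of the 36 equally likely outcomes total four: (1,3), (2,2), (3,1).
print(len(favorable), "out of", len(outcomes))
```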

Venn Diagrams he presented to the world around 1880. These show the relationships between different sets. And the relationships of mathematical logic problems they represent. Venn, if my sources aren’t fibbing, didn’t take these diagrams to be a new invention of his own. He wrote of them as “Euler diagrams”. Venn diagrams, properly, need to show all the possible intersections of all the sets in play. You just mark in some way the intersections that happen to have nothing in them. Euler diagrams don’t require this overlapping. The name “Venn diagram” got attached to these pictures in the early 20th century. Euler here is Leonhard Euler, who created every symbol and notation mathematicians use for everything, and who has a different “Euler’s Theorem” that’s foundational to every field of mathematics, including the ones we don’t yet know exist. I exaggerate by 0.04 percent here.

Although we always start Venn diagrams off with circles, they don’t have to be. Circles are good shapes if you have two or three sets. It gets hard to represent all the possible intersections with four circles, though. This is when you start seeing weirder shapes. Wikipedia offers some pictures of Venn diagrams for four, five, and six sets. Meanwhile Mathworld has illustrations for seven- and eleven-set Venn diagrams. At this point, the diagrams are more for aesthetic value than to clarify anything, though. You could draw them with squares. Some people already do. Euler diagrams, particularly, are often squares, sometimes with rounded corners.

Venn had his other projects, too. His biography at St Andrews writes of his composing The Biographical History of Gonville and Caius College (Cambridge). And then he had another history of the whole Cambridge University. It also mentions his skills in building machines, though only cites one, a device for bowling cricket balls. The St Andrews biography says that in 1909 “Venn’s machine clean bowled one of [the Australian Cricket Team’s] top stars four times”. I do not know precisely what it means but I infer it to be a pretty good showing for the machine. His Wikipedia biography calls him a “passionate gardener”. Apparently the Cambridgeshire Horticultural Society awarded him prizes for his roses in July 1885 and for white carrots in September that year. And that he was a supporter of votes for women.

Ashleigh Brilliant’s Pot-Shots for the 26th makes a cute and true claim about percentiles. That a person will usually be in the upper 99% of whatever’s being measured? Hard to dispute. But, measure enough things and eventually you’ll fall out of at least one of them. How many things? This is easy to calculate if we look at different things that are independent of each other. In that case we could look at 69 things before we’d expect a 50% chance of at least one not being in the upper 99%.
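The 69 comes from asking for the smallest number of independent measurements that gives at least an even chance of missing the top 99 percent somewhere. A quick Python check:

```python
import math

# Smallest n with 0.99 ** n <= 0.5, that is, at least an even chance of
# falling out of the upper 99% in at least one of n measurements.
n = 1
while 0.99 ** n > 0.5:
    n += 1
print(n)   # 69

# The logarithm gives the same answer directly:
assert n == math.ceil(math.log(0.5) / math.log(0.99))
```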

It’s getting that independence that’s hard. There’s often links between things. For example, a person’s height does not tell us much about their weight. But it does tell us something. A person six foot, ten inches tall is almost certainly not also 35 pounds, even though a person could be that size or could be that weight. A person’s scores on a reading comprehension test and their income? Those might seem unrelated, but test-taking results and wealth are certainly tied together. Age and income? Most of us have a bigger income at 46 than at 6. This is part of what makes studying populations so hard.

T Shepherd’s Snow Sez for the 26th is finally a strip I can talk about briefly, for a change. Snow does a bit of arithmetic wordplay, toying with what an expression like “1 + 1” might represent.

I didn’t cover quite all of last week’s mathematics comics with Sunday’s essay. There were a handful that all ran on Saturday. And, as has become tradition, I’ll also list a couple that didn’t rate a couple paragraphs.

Rick Kirkman and Jerry Scott’s Baby Blues for the 23rd has a neat variation on story problems. Zoe’s given the assignment to make her own. I don’t remember getting this as homework, in elementary school, but it’s hard to see why I wouldn’t. It’s a great exercise: not just setting up an arithmetic problem to solve, but supplying a reason one would want to solve it.

Composing problems is a challenge. It’s a skill, and you might be surprised that when I was in grad school we didn’t get much training in it. We were just taken to be naturally aware of how to identify a skill one wanted to test, and to design a question that would mostly test that skill, and to write it out in a question that challenged students to identify what they were to do and how to do it, and why they might want to do it. But as a grad student I wasn’t being prepared to teach elementary school students, just undergraduates.

Mastroianni and Hart’s B.C. for the 23rd is a joke in the funny-definition category, this for “chaos theory”. Chaos theory formed as a mathematical field in the 60s and 70s, and it got popular alongside the fractal boom in the 80s. The field can be traced back to the 1890s, though, which is astounding. There was no way in the 1890s to do the millions of calculations needed to visualize any good chaos-theory problem. They had to develop results entirely by thinking.

The strip’s definition is fine enough about certain systems being unpredictable. It calls them “advanced”, although they don’t need to be that advanced. A compound pendulum — a solid rod that swings on the end of another swinging rod — can be chaotic. You can call that “advanced” if you want but then people are going to ask if you’ve had your mind blown by this post-singularity invention, the “screw”.

What makes for chaos is not randomness. Anyone knows the random is unpredictable in detail. That’s no insight. What’s exciting is when something’s unpredictable but deterministic. Here it’s useful to think of continental divides. These are the imaginary curves which mark the difference in where water runs. Pour a cup of water on one side of the line, and if it doesn’t evaporate, it eventually flows to the Pacific Ocean. Pour the cup of water on the other side, it eventually flows to the Atlantic Ocean. These divides are often wriggly things. Water may mostly flow downhill, but it has to go around a lot of hills.

So pour the water on that line. Where does it go? There’s no unpredictability in it. The water on one side of the line goes to one ocean, the water on the other side, to the other ocean. But where is the boundary? And that can be so wriggly, so crumpled up on itself, so twisted, that there’s no meaningfully saying. There’s just this zone where the Pacific Basin and the Atlantic Basin merge into one another. Any drop of water, however tiny, dropped in this zone lands on both sides. And that is chaos.

Neatly for my purposes there’s even a mountain that makes a great example of this boundary. Triple Divide Peak, in Montana, rests on the divides between the Atlantic and the Pacific basins, and also on the divide between the Atlantic and the Arctic oceans. (If one interprets the Hudson Bay as connecting to the Arctic rather than the Atlantic Ocean, anyway. If one takes Hudson Bay to be on the Atlantic Ocean, then Snow Dome, Alberta/British Columbia, is the triple point.) There’s a spot on this mountain (or the other one) where a spilled cup of water could go to any of three oceans.

John Graziano’s Ripley’s Believe It Or Not for the 23rd mentions one of those beloved bits of mathematics trivia, the birthday problem. That’s finding the probability that no two people in a group of some particular size will share a birthday. Or, taking the complement, the probability that at least two people do share some birthday. That’s not a specific day, mind you, just that some two people share a birthday. The version that usually draws attention is the relatively low number of people needed to get a 50% chance there’s some birthday pair. I haven’t seen the probability of 70 people having at least one birthday pair before. 99.9 percent seems plausible enough.

The birthday problem usually gets calculated something like this: Grant that one person has a birthday. That’s one day out of either 365 or 366, depending on whether we consider leap days. Consider a second person. There are 364 out of 365 chances that this person’s birthday is not the same as the first person’s. (Or 365 out of 366 chances. Doesn’t make a real difference.) Consider a third person. There are 363 out of 365 chances that this person’s birthday is going to be neither the first nor the second person’s. So the chance that all three have different birthdays is (364/365) × (363/365). Consider the fourth person. That person has 362 out of 365 chances to have a birthday none of the first three have claimed. So the chance that all four have different birthdays is (364/365) × (363/365) × (362/365). And so on. The chance that at least two people share a birthday is 1 minus the chance that no two people share a birthday.
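That product is easy to carry out to any group size. A Python sketch, ignoring leap days and assuming uniform, independent birthdays:

```python
def all_distinct(n, days=365):
    """Chance that n people all have different birthdays."""
    p = 1.0
    for k in range(n):
        p *= (days - k) / days
    return p

# The classic result: 23 is the smallest group size where a shared
# birthday is more likely than not.
assert all_distinct(22) > 0.5 > all_distinct(23)

# And the strip's figure for 70 people:
print(1 - all_distinct(70))   # about 0.9992
```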

As always happens there are some things being assumed here. Whether these probability calculations are right depends on those assumptions. The first assumption being made is independence: that no one person’s birthday affects when another person’s is likely to be. Obvious, you say? What if we have twins in the room? What if we’re talking about the birthday problem at a convention of twins and triplets? Or people who enjoyed the minor renown of being their city’s First Babies of the Year? (If you ever don’t like the result of a probability question, ask about the independence of events. Mathematicians like to assume independence, because it makes a lot of work easier. But assuming isn’t the same thing as having it.)

The second assumption is that birthdates are uniformly distributed. That is, that a person picked from a room is no more likely to be born the 13th of February than they are the 24th of September. And that is not quite so. September births are (in the United States) slightly more likely than other months, for example, which suggests certain activities going on around New Year’s. Across all months (again in the United States) birthdates of the 13th are slightly less likely than other days of the month. I imagine this has to be accounted for by people who are able to select a due date by inducing delivery. (Again if you need to attack a probability question you don’t like, ask about the uniformity of whatever random thing is in place. Mathematicians like to assume uniform randomness, because it makes a lot of work easier. But assuming it isn’t the same as proving it.)

Do these differences mess up the birthday problem results? Probably not that much. We are talking about slight variations from uniform distribution. But I’ll be watching Ripley’s to see if it says anything about births being more common in September, or less common on 13ths.

And now the comics I didn’t find worth discussing. They’re all reruns, it happens. Morrie Turner’s Wee Pals rerun for the 20th just mentions mathematics class. That could be any class that has tests coming up, though. Percy Crosby’s Skippy for the 21st is not quite the anthropomorphic-numerals joke for the week. It’s getting around that territory, though, as Skippy claims to have the manifestation of a zero. Bill Rechin’s Crock for the 22nd is a “pick any number” joke. I discussed as much as I could think of about this when it last appeared, in May of 2018. Also I’m surprised that Crock is rerunning strips that quickly now. It has, in principle, decades of strips to draw from.

I do not know that the Ziggy printed here is a rerun. I don’t seem to have mentioned it in previous Reading the Comics posts, but that isn’t definite. How much mathematical content a comic strip needs to rate a mention depends on many things, and a strip that seems too slight one week might inspire me another. I’ll explain why I’ve started to get suspicious of the quite humanoid figure.

Tom II Wilson’s Ziggy for the 12th is framed around weather forecasts. It’s the probability question people encounter most often, unless they’re trying to outsmart the contestants on Let’s Make A Deal. (And many games on The Price Is Right, too.) Many people have complained about not knowing the meaning of a “50% chance of rain” for a day. If I understand it rightly, it means, when conditions have been like this in the recorded past, it’s rained about 50% of the time. I’m open to correction from meteorologists and it just occurred to me I know one. Mm.

Few people ask about the probability a forecast is correct. In some ways it’s an unanswerable question. To say there is a one-in-six chance a fairly thrown die will turn up a ‘1’ is not wrong just because it’s rolled a ‘1’ eight times out of the last ten. But it does seem like a forecast such as this should include a sense of confidence, how sure the forecaster is that the current weather is all that much like earlier times.

I’m not sure how much of the joke is meant to be the repetition of “50% chance”. The joke might be meant to say that if he’s got a 50% chance of being wrong, then, isn’t the 50% chance of rain “correctly” a 50% chance of not-rain … which is the same chance of rain? The logic doesn’t hold up, if you pay attention, but it sounds like it should make sense, and having the “wrong” version of something be the same as the original is a valid comic construction.

So now for the promised Ziggy rerun scandal. To the best of my knowledge Ziggy is presented as being in new run. It’s done by the son of the comic strip’s creator, but that’s common enough for long-running comic strips. This Monday, though, ran a Ziggy-at-the-psychiatrist joke that was, apart from coloring, exactly the comic that ran the 2nd of March, barely two weeks before. (Compare the scribbles in the psychiatrist’s diploma.) It wouldn’t be that weird if a comic were accidentally repeated; production mistakes happen, after all. It’s slightly weird that the daily, black-and-white, original got colored in two different ways, but I can imagine this happening by accident.

Still, that got me primed to look for Ziggy repeats. I couldn’t find this one having an earlier appearance. But I did find that the 9th of January this year was a reprint of the Ziggy from the 11th of January, 2017. I wrote about both appearances, without noticing they were reruns. Here’s the 2017 essay, and over here is the 2019 essay, from before I was very good at remembering what the year was. Mercifully I didn’t say anything contradictory on the two appearances. I’m more interested in how I said things differently in the two appearances. Anyway this earlier year seems to have been part of a week’s worth of reruns, noticeable by the copyright date. I can’t begrudge a cartoonist their vacation. The psychiatrist strip doesn’t seem to be part of that, though, and its repetition is some as-yet-unexplained event.

Tony Rubino and Gary Markstein’s Daddy’s Home for the 13th has a much more casual and non-controversial bit of mathematics. Pete tosses out a calculate-the-square-root problem as a test of Peggy’s omniscience. One of the commenters points out that the square root of 532 is closer to 23.06512519 than it is to Peggy’s 23.06512818. It suggests the writers found the square root by something that gave plenty of digits. For example, the macOS Calculator program offers me “23.065 125 189 341 592”. But then they chopped off, rather than rounding off, digits when the panel space ran out.
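The chopping-versus-rounding difference is easy to see in a couple lines of Python:

```python
import math

root = math.sqrt(532)
print(root)   # about 23.065125189, matching the Calculator's digits

rounded = round(root, 8)                      # rounds up: 23.06512519
chopped = math.floor(root * 10**8) / 10**8    # chops off: 23.06512518
print(rounded, chopped)
```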

Olivia Jaimes’s Nancy for the 13th has Nancy dividing up mathematics problems along the equals sign. That’s cute and fanciful enough. One could imagine working out expressions on either side of the equals sign in the hopes of getting them to match. That wouldn’t work for these algebra problems, but, that’s something.

This isn’t what Nancy might do, unless she flashed forward to college and became a mathematics or physics major. But one great trick in differential equations is called the separation of variables. Differential equations describe how quantities change. They’re great. They’re hard. A lot of solving differential equations amounts to rewriting them as simpler differential equations.

Separation is a trick usable when there are two quantities whose variations affect each other. If you can rewrite the differential equation so that one variable only appears on the left side, and the other variable only appears on the right? Then you can split this equation into two simpler equations. Both sides of the equation have to be some fixed number. So you can separate the differential equations of two variables into two differential equations, each with one variable. One with the first variable, one with the other. And, usually, a differential equation of one variable is easier than a differential equation with two variables. So Nancy and Esther could work each half by themselves. But the work would have to be put together at the end, too.
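For a concrete version of the trick, here is how the standard first example, the one-dimensional heat equation, separates. This is my illustration, not anything from the strip:

```latex
% Guess a product solution u(x,t) = X(x)\,T(t) for u_t = k\,u_{xx}:
X(x)\,T'(t) = k\,X''(x)\,T(t)
\quad\Longrightarrow\quad
\frac{T'(t)}{k\,T(t)} = \frac{X''(x)}{X(x)} = -\lambda .
% The left side depends only on t and the middle only on x, so both
% must equal the same fixed number, here written -\lambda.  That splits
% one partial differential equation into two ordinary ones:
T'(t) = -\lambda\,k\,T(t), \qquad X''(x) = -\lambda\,X(x).
```

Each of those two is far easier than the original. And, as with Nancy and Esther, the halves have to be put back together at the end.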

I hope you’ll pardon me for being busy. I haven’t had the chance to read all the Pi Day comic strips yet today. But I’d be a fool to let the day pass without something around here. I confess I’m still not sure that Pi Day does anything lasting to encourage people to think more warmly of mathematics. But there is probably some benefit if people temporarily think more fondly of the subject. Certainly I’ll do more foolish things than to point at things and say, “pi, cool, huh?” this week alone.

I’ve got a couple of essays that discuss π some. The first noteworthy one is Calculating Pi Terribly, discussing a way to calculate the value of π using nothing but a needle, a tile floor, and a hilariously excessive amount of time. Or you can use an HTML5-and-JavaScript applet and slightly less time, and maybe even experimentally calculate the digits of π to two decimal places, if you get lucky.
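The needle-and-tile-floor business is Buffon’s needle: a needle of length L dropped on a floor with lines spaced d apart crosses a line with probability 2L/(πd), so counting crossings estimates π. A minimal simulation, with my own choice of names and a 200,000-drop budget (and, cheekily, the simulation needs π to pick a uniform angle, a problem the physical experimenter doesn’t have):

```python
import math
import random

def estimate_pi(drops, needle=1.0, spacing=1.0, seed=0):
    """Estimate pi by Buffon's needle: P(cross) = 2*needle / (pi*spacing)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(drops):
        y = rng.uniform(0, spacing / 2)       # needle center to nearest line
        theta = rng.uniform(0, math.pi / 2)   # needle's angle to the lines
        if (needle / 2) * math.sin(theta) >= y:
            hits += 1
    # hits/drops approximates 2*needle/(pi*spacing); solve for pi:
    return 2 * needle * drops / (spacing * hits)

print(estimate_pi(200_000))   # lands in the neighborhood of 3.14
```

Two decimal places is about the best this hilariously slow method will give you without a truly excessive number of drops.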

In Calculating Pi Less Terribly I showed a way to calculate π that’s … well, you see where that sentence was going. This is a method that uses an alternating series. To get π exactly correct you have to do an infinite amount of work. But if you just want π to a certain precision, all right. This will even tell you how much work you have to do. There are other formulas that will get you digits of π with less work, though, and maybe I’ll write up one of those sometime.
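The alternating series meant here is, I take it, the Leibniz series, π/4 = 1 − 1/3 + 1/5 − 1/7 + …. The alternating series theorem is what does the “tell you how much work you have to do” part: the error is never bigger than the first term you drop. A sketch:

```python
import math

def leibniz_pi(terms):
    """Approximate pi with 4*(1 - 1/3 + 1/5 - ...), using `terms` terms."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

n = 100_000
approx = leibniz_pi(n)
error_bound = 4 / (2 * n + 1)   # first omitted term, scaled by the 4
assert abs(math.pi - approx) < error_bound
```

One hundred thousand terms for about five correct digits: you see why “less terribly” is still not “well”, and why formulas with faster convergence are worth having.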

And the last of the relevant essays I’ve already written is an A To Z essay about normal numbers. I don’t know whether π is a normal number. No human, to the best of my knowledge, does. Well, anyone with an opinion on the matter would likely say, of course it’s normal. There are fantastic reasons to think it is. But none of them amounts to a proof that it is.

That’s my three items. After that I’d like to share … I don’t know whether to classify this as one or three pieces. They’re YouTube videos which a couple months ago everybody in the world was asking me if I’d seen. Now it’s your turn. I apologize if you too got this, a couple months ago, but don’t worry. You can tell people you watched and not actually do it. I’ll alibi you.

It’s a string of videos posted on YouTube by 3Blue1Brown. The first lays out the matter with a neat physics problem. Imagine you have an impenetrable wall, a frictionless floor, and two blocks. One starts at rest. The other is sliding towards the first block and the wall. How many times will one thing collide with another? That is, how many times will one block collide with the other block, or a block collide with the wall?

The answer seems like it should depend on many things. What it actually depends on is the ratio of the masses of the two blocks. If they’re the same mass, then there are three collisions. You can probably work that sequence out in your head and convince yourself it’s right. If the outer block has 100 times the mass of the inner block? There’ll be 31 collisions before all the hits are done. You might work that out by hand. I did not. You will not work out what happens if the outer block has 10,000 times the mass of the inner block. That’ll be 314 collisions. If the outer block has 1,000,000 times the mass of the inner block? 3,141 collisions. You see where this is going: every extra factor of 100 in the mass ratio buys another digit of π.
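Those counts can be checked by direct simulation (my own sketch, not the video’s). Positions turn out not to matter for counting: after a block-block collision the blocks are separating, so the only possible next event is the wall, and after a wall bounce the only possible next event is the other block.

```python
def count_collisions(mass_ratio):
    """Small block (mass 1) between a wall and a big block (mass `mass_ratio`)
    sliding in from the right at speed 1.  Count every collision."""
    m1, m2 = 1.0, float(mass_ratio)
    v1, v2 = 0.0, -1.0      # leftward is negative; the wall is at the left
    count = 0
    while True:
        if v2 < v1:          # big block catches the small one: elastic collision
            v1, v2 = (((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2),
                      ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2))
        elif v1 < 0:         # small block bounces off the wall
            v1 = -v1
        else:                # both drifting right, small one slower: all done
            break
        count += 1
    return count

print([count_collisions(100 ** n) for n in range(4)])
# [3, 31, 314, 3141]
```

The brute-force version is honest work; the video’s phase-space argument gets the same digits with no loop at all.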

The video shows a way that saves an incredible load of work. But you save on that tedious labor by having to think harder. Part of it is making use of conservation laws, that energy and linear momentum are conserved in collisions. But part is by recasting the problem. Recast it into “phase space”. This uses points in an abstract space to represent different configurations of a system. Like, how fast blocks are moving, and in what direction. The recasting of the problem turns something that’s impossibly tedious into something that’s merely … well, it’s still a bit tedious. But it’s much less hard work. And it’s a good chance to show off that you remember the Inscribed Angle Theorem. You do remember the Inscribed Angle Theorem, don’t you? The video will catch you up. It’s a good demonstration of how phase spaces can make physics problems so much more manageable.

The third video recasts the problem yet again. In this form, it’s about rays of light reflecting between mirrors. And this is a great recasting. That blocks bouncing off each other and walls should have anything to do with light hitting mirrors seems ridiculous. But set out your phase space, and look hard at what collisions and reflections are like, and you see the resemblance. The sort of trick used to make counting reflections easy turns up often in phase spaces. It also turns up in physics problems on toruses, doughnut shapes. You might ask when do we ever do anything on a doughnut shape. Well, real physical doughnuts, not so much. But problems where there are two independent quantities, and both quantities are periodic? There’s a torus lurking in there. There might be a phase space using that shape, and making your life easier by doing so.

That’s my promised four or maybe six items. Pardon, please, now, as I do need to get back to reading the comics.

Today’s quartet of mathematically-themed comic strips doesn’t have an overwhelming theme. There’s some bits about the mathematics that young people do, so, that’s enough to separate this from any other given day’s comics essay.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 14th is built on a bit of mathematical folklore. As Weinersmith’s mathematician (I don’t remember that we’ve been given her name) mentions, there is a belief that “revolutionary” mathematics is done by young people. That isn’t to say that older mathematicians don’t do great work. But the stereotype is that an older mathematician will produce masterpieces in already-established fields. It’s the young that establish new fields. Indeed, one of mathematics’s most prestigious awards, the Fields Medal, is only awarded to mathematicians under the age of forty. I was cheated of mine. Long story.

There’s intuitive appeal in the idea that revolutions in thinking are for the young. We think that people get set in their ways as they develop their careers. We have a couple dramatic examples, most notably Évariste Galois, who developed what we now see as foundations of group theory and died at twenty. While the idea is commonly held, I don’t know that it’s actually true. That is, that it holds up to scrutiny. It seems hard to create a definition for “revolutionary mathematics” that could be agreed upon by two people. So it would be difficult to test at what age people do their most breathtaking work, and whether it is what they do when young or when experienced.

Is there harm to believing an unprovable thing? If it makes you give up on trying, yes. My suspicion is that true revolutionary work happens when a well-informed, deep thinker comes to a field that hasn’t been studied in that way before. And when it turns out to be a field well-suited to study that way. That doesn’t require youth. It requires skill in one field, and an understanding that there’s another field ready to be studied that way.

Will Henry’s Wallace the Brave for the 14th is a mathematics anxiety joke. Wallace tries to help by turning an abstract problem into a concrete one. This is often a good way to approach a problem. Even in more advanced mathematics, one can often learn the way to solve a general problem by trying a couple of specific examples. It’s almost as though there’s only a certain amount of abstraction people can deal with, and you need to re-cast problems so they stay within your limits.

Yes, the comments turn to complaining about Common Core. I’m not sure what would help Spud work through this problem (or problems in general). But thinking of alternate problems that estimated or approached what he really wanted might help. If he noticed, for example, that 10 + 12 has to be a little more than 10 + 10, and he found 10 + 10 easy, then he’d be close to a right answer. If he noticed that 10 + 12 had to be 10 + 10 + 2, and he found 10 + 10 easy, then he might find 20 + 2 easy as well. Maybe Spud would be better off thinking of ways to rewrite a problem without changing the result.

Wiley Miller’s Non Sequitur for the 15th mentions calculus. It’s more of a probability joke. To speak of a calculated risk is to speak of doing something that’s not certain, but that has enough of a payoff to be worth the cost of failure. But one problem with this attitude is that people are very, very bad at estimating probabilities. We have terrible ideas of how likely losses are and how uncertain rewards can be. But even if we allow that the risks and rewards are calculated right, there’s a problem with things you only do once. Or only can do once. You can get into a good debate about whether there’s even a meaningful idea of probability for things that happen only the one time. Life’s among them.

Bob Weber Sr’s Moose and Molly for the 16th is a homework joke. It does actually depend on being mathematics homework, though, or there’d be no grounds for Moose’s kid to go to the savings and loan clerk who’ll help with “money problems”.

With me wrapping up the mathematically-themed comic strips that ran the first of the year, you can see how far behind I’m falling keeping everything current. In my defense, Monday was busier than I hoped it would be, so everything ran late. Next week is looking quite slow for comics, so maybe I can catch up then. I will never catch up on anything the rest of my life, ever.

Scott Hilburn’s The Argyle Sweater for the 2nd is a bit of wordplay about regular and irregular polygons. Many mathematical constructs, in geometry and elsewhere, come in “regular” and “irregular” forms. The regular form usually has symmetries that make it stand out. For polygons, this is each side having the same length, and each interior angle being congruent. Irregular is everything else. The symmetries which constrain the regular version of anything often mean we can prove things we otherwise can’t. But most of anything is the irregular. We might know fewer interesting things about them, or have a harder time proving them.
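As an example of the sort of thing the symmetry lets us prove: once every interior angle of a regular polygon is congruent, a single formula pins each angle down for every number of sides at once, something no one formula does for the irregular ones. A throwaway illustration of my own:

```python
def interior_angle(n):
    """Interior angle, in degrees, of a regular polygon with n sides."""
    if n < 3:
        raise ValueError("a polygon needs at least three sides")
    # Interior angles of any n-gon sum to (n - 2)*180 degrees;
    # regularity splits that total into n equal pieces.
    return (n - 2) * 180 / n

assert interior_angle(3) == 60.0    # equilateral triangle
assert interior_angle(4) == 90.0    # square
assert interior_angle(6) == 120.0   # regular hexagon
```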

I’m not sure what the teacher would be asking for in how to “make an irregular polygon regular”. I mean if we pretend that it’s not setting up the laxative joke. I can think of two alternatives that would make sense. One is to draw a polygon with the same number of sides and the same perimeter as the original. The other is to draw a polygon with the same number of sides and the same area as the original. I’m not sure of the point of either. I suppose polygons of the same area have some connection to quadrature, that is, integration. But that seems like it’s higher-level stuff than this class should be doing. I hate to question the reality of a comic strip but that’s what I’m forced to do.

Bud Fisher’s Mutt and Jeff rerun for the 4th is a gambler’s fallacy joke. Superficially the gambler’s fallacy seems to make perfect sense: the chance of twelve bad things in a row has to be less than the chance of eleven bad things in a row. So after eleven bad things, the twelfth has to come up good, right? But there’s two ways this can go wrong.

Suppose each attempted thing is independent. In this case, each patient is equally likely to live or die, regardless of what’s come before. And in that case, the eleven deaths don’t make it any more likely that the twelfth patient will live.
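The independence claim is easy to make concrete. For independent trials, the conditional chance of success after any run of failures is just the original chance; in this hypothetical two-line calculation the run of failures cancels right out:

```python
def next_success_given_failures(p, k):
    """P(success on trial k+1 | first k trials all failed),
    assuming independent trials with success chance p each."""
    p_failures_then_success = (1 - p) ** k * p   # joint probability
    p_failures = (1 - p) ** k                    # conditioning event
    return p_failures_then_success / p_failures  # the (1-p)**k cancels: just p

# Eleven straight deaths change nothing about patient twelve:
assert abs(next_success_given_failures(0.5, 11) - 0.5) < 1e-12
```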

Suppose each attempted thing is not independent, though. This is easy to imagine. Each surgery, for example, is a chance for the surgeon to learn what to do, or not do. He could be getting better, that is, more likely to succeed, with each operation. Or the failures could reflect the surgeon’s skills declining, perhaps from overwork or age or a loss of confidence. Impossible to say without more data. Eleven deaths on what context suggests are low-risk operations suggest poor chances of surviving any given surgery, though. I’m on Jeff’s side here.

Mark Anderson’s Andertoons for the 5th is a welcome return of Wavehead. It’s about ratios. My impression is that ratios don’t get much attention in themselves anymore, except to dunk on stupid Twitter comments. It’s too easy to jump right into fractions, and division. Ratios underlie this, at least historically. It’s even in the name, ‘rational numbers’.

Wavehead’s got a point in literally comparing apples and oranges. It’s at least weird to compare directly different kinds of things. This is one of those conceptual gaps between ancient mathematics and modern mathematics. We’re comfortable stripping the units off of numbers, and working with them as abstract entities. But that does mean we can calculate things that don’t make sense. This produces the occasional bit of fun on social media where we see something like Google trying to estimate a movie’s box office per square inch of land in Australia. Just because numbers can be combined doesn’t mean they should be.

Larry Wright’s Motley rerun for the 5th has the form of a story problem. And one timely to the strip’s original appearance in 1987, during the National Football League players strike. The setup, talking about the difference in weekly pay between the real players and the scabs, seems like it’s about the payroll difference. The punchline jumps to another bit of mathematics, the point spread. Which is an estimate of the expected difference in scoring between teams. I don’t know for a fact, but would imagine the scab teams had nearly meaningless point spreads. The teams were thrown together extremely quickly, without much training time. The tools to forecast what a team might do wouldn’t have the data to rely on.

I apologize that, even though the past week was light on mathematically-themed comic strips, I didn’t have them written up by my usual Sunday posting time. It was just too busy a week, and I am still decompressing from the A to Z sequence. I’ll have them as soon as I’m able.

In the meanwhile may I share a couple of things I thought worth reading, and that have been waiting in my notes folder for the chance to highlight?

There are around 7000 people currently living in this planet who got 20 tails in a row the first time they tried flipping a coin in their life pic.twitter.com/LvUWs4jnLA

This Fermat’s Library tweet is one of those entertaining consequences of probability, multiplied by the large number of people in the world. If you flip twenty coins in a row there’s a one in 1,048,576 chance that all twenty will come up tails. (The same chance that all twenty will come up heads.) So about one in every million times someone flips twenty coins, they all come up tails. If the seven billion people in the world have each flipped at least twenty coins in their lives, then something like seven thousand of them had the coins turn up tails every single one of those first twenty times. That all seven billion people have tossed twenty coins seems like the biggest point on which to attack this trivia. A lot of people are too young for, or don’t have access to, coins. But there are still going to be thousands who did start their coin-flipping lives with a remarkable streak.
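The arithmetic behind the tweet fits in a few lines, with the 7,000 hedged down to its honest “about”:

```python
streak_chance = 1 / 2 ** 20     # twenty tails in a row: 1 in 1,048,576
people = 7_000_000_000          # everyone, generously assumed to have flipped coins
expected = people * streak_chance
print(round(expected))          # about 6676 such people; "around 7000" is fair
```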

Also back in October, so you see how long things have been circulating around here, John D Cook published an article about the World Series. Or any series contest. At least ones where the chance of each side winning doesn’t depend on the previous games in the series. If one side has a probability ‘p’ of winning any particular game, what’s the chance they’ll win a best-four-of-seven? What makes this a more challenging mathematics problem is that a best-of-seven series stops after one side’s won four games. So you can’t simply say it’s the chance of four wins. You need to account for four wins out of four games, out of five games, out of six games, and out of seven games. Fortunately there’s a lot of old mathematics that explores just this.
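One way to do that accounting: the series ends in game 4, 5, 6, or 7, and to win in game k a team must take game k itself plus exactly three of the first k − 1. A sketch of my own, with the sum written out:

```python
from math import comb

def series_win_probability(p, wins_needed=4):
    """Chance of winning a best-of-(2*wins_needed - 1) series, each game
    won independently with probability p."""
    q = 1 - p
    total = 0.0
    for k in range(wins_needed, 2 * wins_needed):   # series ends in game k
        # choose which (wins_needed - 1) of the first k-1 games were wins
        total += (comb(k - 1, wins_needed - 1)
                  * p ** wins_needed * q ** (k - wins_needed))
    return total

print(series_win_probability(0.6))
# roughly 0.71: the better team usually, but by no means always, wins
```

A sanity check on the formula: an evenly matched series should be a coin flip, and series_win_probability(0.5) does come out to exactly one half.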

The economist Bradford DeLong noticed the first write-up of the Prisoner’s Dilemma. This is one of the first bits of game theory that anyone learns, and it’s an important bit. It establishes that the logic of cooperative games — any project where people have to work together — can have a terrible outcome. What makes the most sense for each individual makes the least sense for the group. A good outcome for everyone depends on trust, whether established through history or through constraints everyone’s agreed to respect.

And finally here’s part of a series about quick little divisibility tests. This is that trick where you tell what a number’s divisible by through adding or subtracting its (base ten) digits. Everyone who’d be reading this post knows about testing for divisibility by three or nine. Here are some rules for also testing divisibility by eleven (which you might know), by seven (less likely), and by thirteen. With a bit of practice, and awareness of some exceptional numbers, you can tell by sight whether a number smaller than a thousand is prime. Add a bit of flourish to your doing this and you can establish a reputation as a magical mathematician.
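The rules for 7, 11, and 13 share one engine: 1001 = 7 × 11 × 13, so 1000 is one less than a multiple of each, and a number is divisible by any of the three exactly when the alternating sum of its three-digit groups is. A sketch of my own (the linked series may organize its rules differently):

```python
def alternating_group_sum(n, group=1000):
    """Alternating sum of base-`group` chunks of n, least significant first."""
    total, sign = 0, 1
    while n:
        total += sign * (n % group)
        n //= group
        sign = -sign
    return total

def divisible_by(n, k):
    """Digit-based tests for k in (3, 9, 7, 11, 13)."""
    if k in (3, 9):                 # plain digit sum, since 10 = 9 + 1
        return sum(int(d) for d in str(n)) % k == 0
    if k == 11:                     # 10 is one less than 11: alternate digits
        return alternating_group_sum(n, group=10) % 11 == 0
    if k in (7, 13):                # 1000 is one less than a multiple of both
        return alternating_group_sum(n, group=1000) % k == 0
    raise ValueError("no digit rule implemented for this k")

# e.g. 123456: groups 456 - 123 = 333, and 333 is not a multiple of 7,
# so neither is 123456.
print(divisible_by(123456, 7))   # False
```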

I had not wanted to mention, for fear of setting off a panic. But Mark Anderson’s Andertoons, which I think of as being in every Reading the Comics post, hasn’t been around lately. If I’m not missing something, it hasn’t made an appearance in three months now. I don’t know why, and I’ve been trying not to look too worried by it. Mostly I’ve been forgetting to mention the strange absence. This even though I would think any given Tuesday or Friday that I should talk about the strip not having anything for me to write about. Fretting about it would make a great running theme. But I have never spotted a running theme before it’s finished. In any event the good news is that the long drought has ended, and Andertoons reappears this week. Yes, I’m hoping that it won’t go too long between appearances this time.

Jef Mallett’s Frazz for the 16th talks about probabilities. This in the context of assessing risks. People are really bad at estimating probabilities. We’re notoriously worse at assessing risks, especially when it’s a matter of balancing a present cost like “fifteen minutes waiting while the pharmacy figures out whether insurance will pay for the flu shot” versus a nebulous benefit like “lessened chance of getting influenza, or at least having a less severe influenza”. And it’s asymmetric, too. We view improbable but potentially enormous losses differently from the way we view improbable but potentially enormous gains. And it’s hard to make the rationally-correct choice reliably, not when there are so many choices of this kind every day.

Tak Bui’s PC and Pixel for the 16th features a wall full of mathematical symbols, used to represent deep thought about a topic. The symbols are gibberish, yes. I’m not sure that an actual “escape probability” could be done in a legible way, though. Or even what precisely Professor Phillip might be calculating. I imagine it would be an estimate of the various ways he might try to escape, and what things might affect that. This might be for the purpose of figuring out what he might do to maximize his chances of a successful escape. Although I wouldn’t put it past the professor to just be quite curious what the odds are. There’s a thrill in having a problem solved, even if you don’t use the answer for anything.

Ruben Bolling’s Super-Fun-Pak Comix for the 18th has a trivia-panel-spoof dubbed Amazing Yet Tautological. One could make an argument that most mathematics trivia fits into this category. At least anything about something that’s been proven. Anyway, whether this is a tautological strip depends on what the strip means by “average” in the phrase “average serving”. There’s about four jillion things dubbed “average” and each of them has a context in which they make sense. The thing intended here, and the thing meant if nobody says anything otherwise, is the “arithmetic mean”. That’s what you get from adding up everything in a sample (here, the amount of egg salad each person in America eats per year) and dividing it by the size of the sample (the number of people in America that year). Another “average” which would make sense, but would break this strip, would be the median. That would be the amount of egg salad that half of all Americans eat more than, and half eat less than. But whether every American could have that big a serving really depends on what that median is. The “mode”, the most common serving, would also be a reasonable “average” to expect someone to talk about.

Mark Anderson’s Andertoons for the 19th is that strip’s much-awaited return to my column here. It features solid geometry, which is both an important part of geometry and also a part that doesn’t get nearly as much attention as plane geometry. It’s reductive to suppose the problem is that it’s harder to draw solids than planar figures. I suspect that’s a fair part of the problem, though. Mathematicians don’t get much art training, not anymore. And while geometry is supposed to be able to rely on pure reasoning, a good picture still helps. And a bad picture will lead us into trouble.

My final glossary term for this year’s A To Z sequence was suggested by aajohannas, who’d also suggested “randomness” and “tiling”. I don’t know of any blogs or other projects they’re behind, but if I do hear, I’ll pass them on.

Zugzwang.

Some areas of mathematics struggle against the question, “So what is this useful for?” As though usefulness were a particular merit — or demerit — for a field of human study. Most mathematics fields discover some use, though, even if it takes centuries. Others are born useful. Probability, for example. Statistics. Know what the fields are and you know why they’re valuable.

Game theory is another of these. The subject, as often happens, we can trace back centuries. Usually as the study of some particular game. Occasionally in the study of some political science problem. But game theory developed a particular identity in the early 20th century. Some of this from set theory experts. Some from probability experts. Some from John von Neumann, because it was the 20th century and all that. Calling it “game theory” explains why anyone might like to study it. Who doesn’t like playing games? Who, studying a game, doesn’t want to play it better?

But why it might be interesting is different from why it might be important. Think of what a game is. It is a string of choices made by one or more parties. The point of the choices is to achieve some goal. Put that way you realize: this is everything. All life is making choices, all in the pursuit of some goal, even if that goal is just “not end up any worse off”. I don’t know that the earliest researchers in game theory as a field realized what a powerful subject they had touched on. But by the 1950s they were doing serious work in strategic planning, and by 1964 were even giving us Stanley Kubrick movies.

This is taking me away from my glossary term. The field of games is enormous. If we narrow the field some we can discuss specific kinds of games. And say more involved things about these games. So first we’ll limit things by thinking only of sequential games. These are ones where there are a set number of players, and they take turns making choices. I’m not sure whether the field expects the order of play to be the same every time. My understanding is that much of the focus is on two-player games. What’s important is that at any one step there’s only one party making a choice.

The other thing narrowing the field is to think of information. There are many things that can affect the state of the game. Some of them might be obvious, like where the pieces are on the game board. Or how much money a player has. We’re used to that. But there can be hidden information. A player might conceal some game money so as to make other players underestimate her resources. Many card games have one or more cards concealed from the other players. There can be information unknown to any party. No one can make a useful prediction what the next throw of the game dice will be. Or what the next event card will be.

But there are games where there’s none of this ambiguity. These are called games with “perfect information”. In them all the players know the past moves every player has made. Or at least should know them. Players are allowed to forget what they ought to know.

There’s a separate but similar-sounding idea called “complete information”. In a game with complete information, players know everything that affects the gameplay. At least, probably, apart from what their opponents intend to do. This might sound like an impossibly high standard, at first. All games with shuffled decks of cards and with dice to roll are out. There’s no concealing or lying about the state of affairs.

Set complete-information aside; we don’t need it here. Think only of perfect-information games. What are they? Some ancient games, certainly. Tic-tac-toe, for example. Some more modern versions, like Connect Four and its variations. Some that are actually deep, like checkers and chess and go. Some that are, arguably, more puzzles than games, as in sudoku. Some that hardly seem like games, like several people agreeing how to cut a cake fairly. Some that seem like tests to prove people are fundamentally stupid, like when you auction off a dollar. (The rules are set so players can easily end up paying more than a dollar.) But that’s enough for me, at least. You can see there are games of clear, tangible interest here.

The last restriction: think only of two-player games. Or at least two parties. Any of these two-party sequential games with perfect information are a part of “combinatorial game theory”. It doesn’t usually allow for incomplete-information games. But at least the MathWorld glossary doesn’t demand they be ruled out. So I will defer to this authority. I’m not sure how the name “combinatorial” got attached to this kind of game. My guess is that it seems like you should be able to list all the possible combinations of legal moves. That number may be enormous, as chess and go players are always going on about. But you could imagine a vast book which lists every possible game. If your friend ever challenged you to a game of chess the two of you could simply agree, oh, you’ll play game number 2,038,940,949,172 and then look up to see who won. Quite the time-saver.

Most games don’t have such a book, though. Players have to act on what they understand of the current state, and what they think the other player will do. This is where we get strategies from. Not just what we plan to do, but what we imagine the other party plans to do. When working out a strategy we often expect the other party to play perfectly. That is, to make no mistakes, to not do anything that worsens their position. Or that reduces their chance of winning.

… And yes, arguably, the word “chance” doesn’t belong there. These are games where the rules are known, every past move is known, every future move is in principle computable. And if we suppose everyone is making the best possible move then we can imagine forecasting the whole future of the game. One player has a “chance” of winning in the same way Christmas day of the year 2038 has a “chance” of being on a Tuesday. That is, the probability is just an expression of our ignorance, that we don’t happen to be able to look it up.

But what choice do we have? I’ve never seen a reference that lists all the possible games of tic-tac-toe. And that’s about the simplest combinatorial-game-theory game anyone might actually play. What’s possible is to look at the current state of the game. And evaluate which player seems to be closer to her goal. And then look at all the possible moves.
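The book for tic-tac-toe is at least imaginable by computer. Counting every legal game (every sequence of moves, stopping the moment someone gets three in a row or the board fills) is a short recursive search. By this count, a common figure though conventions about what counts as “a game” vary, there are 255,168 of them:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    for i, j, k in LINES:
        if board[i] is not None and board[i] == board[j] == board[k]:
            return board[i]
    return None

def count_games(board=None, player='X'):
    """Count distinct move sequences from this position to a finished game."""
    if board is None:
        board = [None] * 9
    if winner(board) is not None or all(c is not None for c in board):
        return 1                               # game over: one finished game
    other = 'O' if player == 'X' else 'X'
    total = 0
    for i in range(9):
        if board[i] is None:
            board[i] = player                  # try the move...
            total += count_games(board, other)
            board[i] = None                    # ...then take it back
    return total

print(count_games())   # 255168
```

So the vast book exists for tic-tac-toe, if nobody would ever want to read it; for chess and go the analogous count is beyond any book.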

There are three things a move can do. It can put the party closer to the goal. It can put the party farther from the goal. Or it can do neither. On her turn the other party might do something that moves you farther from your goal, moves you closer to your goal, or doesn’t affect your status at all. It seems like this makes strategy obvious. On every step take the available move that takes one closest to the goal. This is known as a “greedy” strategy. As the name suggests it isn’t automatically bad. If you expect the game to be a short one, greed might be the best approach. The catch is that moves that seem less good — even ones that seem to hurt you initially — might set up other, even better moves. So strategy requires some thinking beyond the current step. Properly, it requires thinking through to the end of the game. Or at least until the end of the game seems obvious.
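Thinking “through to the end of the game” has a standard name, minimax: recursively assume both sides play perfectly, and score each position by the outcome it forces. Tic-tac-toe is small enough to search whole, and the search confirms the folk knowledge that perfect play from the empty board is a draw (a value of 0 below). A sketch, with my own scoring convention of +1 for an X win and −1 for an O win:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] is not None and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    """Value under perfect play: +1 if X forces a win, -1 if O does, 0 if drawn."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if all(c is not None for c in board):
        return 0                                  # full board, no winner: draw
    other = 'O' if player == 'X' else 'X'
    values = []
    for i in range(9):
        if board[i] is None:
            board[i] = player
            values.append(minimax(board, other))  # opponent replies perfectly
            board[i] = None
    # X picks the move with the highest value, O the lowest:
    return max(values) if player == 'X' else min(values)

print(minimax([None] * 9, 'X'))   # 0: best play on both sides is a draw
```

Greed looks only one move ahead; minimax is the “properly, thinking through to the end of the game” version, affordable here only because the game is tiny.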

We should like a strategy that leaves us no choice but to win. Next-best would be one that leaves the game undecided, since something might happen like the other player needing to catch a bus and so resigning. This is how I got my solitary win in the two months I spent in the college chess club. Worst would be the games that leave us no choice but to lose.

It can be that there are no good moves. That is, that every move available makes it a little less likely that we win. Sometimes a game offers the chance to pass, preserving the state of the game but giving the other party the turn. Then maybe the other party will do something that creates a better opportunity for us. But if we are allowed to pass, there’s a good chance the game lets the other party pass, too, and we end up in the same fix. And it may be the rules of the game don’t allow passing anyway. One must move.

The phenomenon of having to make a move when it’s impossible to make a good move has prominence in chess. I don’t have the chess knowledge to say how common the situation is. But it seems to be a situation people who study chess problems love. I suppose it appeals to a love of lost causes and the hope that you can be brilliant enough to see what everyone else has overlooked. German chess literature gave it a name about 160 years ago: “zugzwang”, “compulsion to move”. Somehow I never encountered the term when I was briefly a college chess player. Perhaps because I was never in zugzwang and was just too incompetent a player to find my good moves. I first encountered the term in Michael Chabon’s The Yiddish Policeman’s Union. The protagonist picked up on the term as he investigated the murder of a chess player and then found himself in one.

Combinatorial game theorists have picked up the word, and sharpened its meaning. If I understand correctly chess players allow the term to be used for any case where a player hurts her position by moving at all. Game theorists make it more dire. This may reflect their knowledge that an optimal strategy might require taking some dismal steps along the way. The game theorist formally grants the term only to the situation where the compulsion to move changes what should be a win into a loss. This seems terrible, but then, we’ve all done this in play. We all feel terrible about it.

I’d like here to give examples. But in searching the web I can find only either courses in game theory, which are a bit too much for even me to summarize, or chess problems, which I’m not up to understanding. It seems hard to set out an example: I need to not just set out the game, but show that what had been a win is now, by any available move, turned into a loss. Chess is looser. It even allows, I discover, a double zugzwang, where both players are at a disadvantage if they have to move.

It’s a quite relatable problem. You see why game theory has this reputation as mathematics that touches all life.

Nobody had a suggested topic starting with ‘W’ for me! So I’ll take that as a free choice, and get lightly autobiographical.

Witch of Agnesi.

I know I encountered the Witch of Agnesi while in middle school. Eighth grade, if I’m not mistaken. It was a footnote in a textbook. I don’t remember much of the textbook. What I mostly remember of the course was how much I did not fit with the teacher. The only relief from boredom that year was the month we had a substitute and the occasional interesting footnote.

It was in a chapter about graphing equations. That is, finding curves whose points have coordinates that satisfy some equation. In a bit of relief from lines and parabolas the footnote offered this:

y = 8a^3 / (x^2 + 4a^2)

In a weird tantalizing moment the footnote didn’t offer a picture. Or say what an ‘a’ was doing in there. In retrospect I recognize ‘a’ as a parameter, and that different values of it give different but related shapes. No hint what the ‘8’ or the ‘4’ were doing there. Nor why ‘a’ gets raised to the third power in the numerator or the second in the denominator. I did my best with the tools I had at the time. Picked a nice easy boring ‘a’. Picked out values of ‘x’ and found the corresponding ‘y’ which made the equation true, and tried connecting the dots. The result didn’t look anything like a witch. Nor a witch’s hat.
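For the record, the dot-connecting can be redone in a couple of lines. With the conventional equation y = 8a³/(x² + 4a²) and the nice easy boring choice a = 1, the curve peaks at y = 2a over x = 0 and flattens out symmetrically on both sides: a gentle bell, nothing hat-like about it.

```python
def witch(x, a=1.0):
    """The Witch of Agnesi: y = 8*a**3 / (x**2 + 4*a**2)."""
    return 8 * a ** 3 / (x ** 2 + 4 * a ** 2)

# Connect these dots and you get the gentle, witch-free bell:
for x in range(-4, 5):
    print(f"x = {x:3d}   y = {witch(x):6.3f}")
```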

It was one of a handful of biographical notes in the book. These were a little attempt to add some historical context to mathematics. It wasn’t much. But it was an attempt to show that mathematics came from people. Including, here, from Maria Gaëtana Agnesi. She was, I’m certain, the only woman mentioned in the textbook I’ve otherwise completely forgotten.

We have few names of ancient mathematicians. Those we have are often compilers like Euclid whose fame obliterated the people whose work they explained. Or they’re like Pythagoras, credited with discoveries by people who obliterated their own identities. In later times we have the mathematics done by, mostly, people whose social positions gave them time to write mathematics results. So we see centuries where every mathematician is doing it as their side hustle to being a priest or lawyer or physician or combination of these. Women don’t get the chance to stand out here.

Today of course we can name many women who did, and do, mathematics. We can name Emmy Noether, Ada Lovelace, and Marie-Sophie Germain. Challenged to do a bit more, we can offer Florence Nightingale and Sofia Kovalevskaya. Well, and also Grace Hopper and Margaret Hamilton if we decide computer scientists count. Katherine Johnson looks likely to make that cut. But in any case none of these people are known for work understandable in a pre-algebra textbook. This must be why Agnesi earned a place in this book. She’s among the earliest women we can specifically credit with doing noteworthy mathematics. (Also physics, but that’s off point for me.) Her curve might be a little advanced for that textbook’s intended audience. But it’s not far off, and pondering questions like “why the 8? Why the 4?” is more pleasant, to a certain personality, than pondering what a directrix might be and why we might use one.

The equation might be a lousy way to visualize the curve described. The curve is one of that group of interesting shapes you get by constructions. That is, following some novel process. Constructions are fun. They’re almost a craft project.

For this we start with a circle. And two parallel tangent lines. Without loss of generality, suppose they’re horizontal, so, there’s one line at the top and one at the bottom of the circle.

Take one of the two tangent points. Again without loss of generality, let’s say the bottom one. Draw a line from that point over to the other line. Anywhere on the other line. There’s a point where the line you drew intersects the circle. There’s another point where it intersects the other parallel line. We’ll find a new point by combining pieces of these two points. The point is on the same horizontal as wherever your line intersects the circle. It’s on the same vertical as wherever your line intersects the other parallel line. This point is on the Witch of Agnesi curve.

Now draw another line. Again, starting from the lower tangent point and going up to the other parallel line. Again it intersects the circle somewhere. This gives another point on the Witch of Agnesi curve. Draw another line. Another intersection with the circle, another intersection with the opposite parallel line. Another point on the Witch of Agnesi curve. And so on. Keep doing this. When you’ve drawn all the lines that reach from the tangent point to the other line, you’ll have generated the full Witch of Agnesi curve. This takes more work than writing out $y = \frac{8a^3}{x^2 + 4a^2}$, yes. But it’s more fun. It makes for neat animations. And I think it prepares us to expect the shape of the curve.
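If you’d like to see that the construction really does give the familiar equation, here’s a little Python sketch. The setup (a circle of radius $a$ tangent to the x-axis at the origin, so the two tangent lines are $y = 0$ and $y = 2a$) and all the variable names are my own choices for illustration:

```python
import math

def witch_point(a, t):
    """Draw the line from the lower tangent point (the origin) to the
    point (t, 2a) on the upper tangent line; build the Witch point from
    its two intersections."""
    # Parametrize the drawn line as (x, y) = s * (t, 2a).  Intersecting
    # with the circle x^2 + (y - a)^2 = a^2 gives the quadratic
    #   s^2 (t^2 + 4 a^2) - 4 a^2 s = 0,
    # whose nonzero root is:
    s = 4 * a**2 / (t**2 + 4 * a**2)
    y_circle = 2 * a * s   # height where the drawn line meets the circle
    x_line = t             # x where the drawn line meets the upper tangent
    # The Witch point takes its x from the tangent line, its y from the circle.
    return (x_line, y_circle)

# Check the construction against the closed-form equation.
a = 1.5
for t in (-4.0, -1.0, 0.5, 2.0, 7.0):
    x, y = witch_point(a, t)
    assert math.isclose(y, 8 * a**3 / (x**2 + 4 * a**2))
print("construction matches y = 8a^3 / (x^2 + 4a^2)")
```

Any choice of where the drawn line meets the upper tangent line gives a point satisfying the equation, which is reassuring.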

It’s a neat curve. Between it and the lower parallel line is an area four times that of the circle that generated it. The shape is one we would get from looking at the derivative of the arctangent. So there are some reasons someone working in calculus might find it interesting. And people did. Pierre de Fermat studied it, and found this area. Isaac Newton and Luigi Guido Grandi studied the shape, using this circle-and-parallel-lines construction. Maria Agnesi’s name attached to it after she published a calculus textbook which examined this curve. She showed, according to people who present themselves as having read her book, the curve and how to find it. And she showed its equation and found the vertex and asymptote line and the inflection points. The inflection points, here, are where the curve changes from being cupped upward to cupped downward, or vice-versa.

It’s a neat function. It’s got some uses. It’s a natural smooth-hill shape, for example. So this makes a good generic landscape feature if you’re modeling the flow over a surface. I read that solitary waves can have this curve’s shape, too.

And the curve turns up as a probability distribution. Take a fixed point. Pick lines at random that pass through this point. See where those lines reach a separate, straight line. Some regions are more likely to be intersected than are others. Chart how often each point on that straight line is the intersection point. That chart will (given some assumptions I ask you to pretend you agree with) be a Witch of Agnesi curve. Mathematicians know this distribution as the Cauchy distribution. This might not surprise you. It seems inevitable from the circle-and-intersecting-line construction process. And that’s nice enough. As a distribution it looks like the usual Gaussian bell curve.

It’s different, though. And it’s different in strange ways. Like, for a probability distribution we can find an expected value. That’s … well, what it sounds like. But this is the strange probability distribution for which the law of large numbers does not work. Imagine an experiment that produces real numbers, with the frequency of each number given by this distribution. Run the experiment zillions of times. What’s the mean value of all the zillions of generated numbers? And it … doesn’t … have one. I mean, we know it ought to, it should be the center of that hill. But the calculations for that don’t work right. Taking a bigger sample doesn’t make the sample mean settle down, the way it would for every other distribution. It keeps jumping around just as wildly. It’s a weird idea.
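You can watch this misbehavior numerically. Here’s a quick Python sketch; the seed and the sample sizes are arbitrary choices of mine, and the tangent trick is a standard way to generate this distribution:

```python
import math
import random

random.seed(20181205)  # an arbitrary fixed seed, so the run is repeatable

def cauchy_sample(n):
    # Standard Cauchy deviates by the inverse-CDF trick: tan(pi*(U - 1/2)).
    return [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

# Sample means at growing sample sizes.  For a distribution that has a
# mean these would settle down toward it; for this one they keep lurching.
sizes = [10, 1_000, 100_000]
means = [sum(cauchy_sample(n)) / n for n in sizes]
print(means)
```

Run it a few times with different seeds and the means refuse to converge on anything, no matter how big the sample.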

Imagine carving a block of wood in the shape of this curve, with a horizontal lower bound and the Witch of Agnesi curve as the upper bound. Where would it balance? … The normal mathematical tools don’t say, even though the shape has an obvious line of symmetry. And a finite area. You don’t get this kind of weirdness with parabolas.

(Yes, you’ll get a balancing point if you actually carve a real one. This is because you work with finitely-long blocks of wood. Imagine you had a block of wood infinite in length. Then you would see some strange behavior.)

It teaches us more strange things, though. Consider interpolations, that is, taking a couple data points and fitting a curve to them. We usually start out looking for polynomials when we interpolate data points. This is because everything is polynomials. Toss in more data points. We need a higher-order polynomial, but we can usually fit all the given points. But sometimes polynomials won’t work. A problem called Runge’s Phenomenon can happen, where the more data points you have the worse your polynomial interpolation is. The Witch of Agnesi curve is one of those. Carl Runge used points on this curve, and tried to fit polynomials to those points, and so discovered the problem. More data and higher-order polynomials make for worse interpolations. You get curves that look less and less like the original Witch. Runge is himself famous to mathematicians, known for “Runge-Kutta”. That’s a family of techniques to solve differential equations numerically. I don’t know whether Runge came to the weirdness of the Witch of Agnesi curve from considering how errors build in numerical integration. I can imagine it, though. The topics feel related to me.
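Here’s a small numerical sketch of Runge’s Phenomenon, using the curve $\frac{1}{1 + x^2}$ (which is the Witch of Agnesi with $a = \frac{1}{2}$) on the interval $[-5, 5]$ Runge considered. The particular degrees and grid sizes are my own arbitrary picks:

```python
import numpy as np

def runge(x):
    # Runge's example function: the Witch of Agnesi with a = 1/2.
    return 1.0 / (1.0 + x**2)

def max_interp_error(degree):
    # Interpolate through degree+1 equally spaced points on [-5, 5],
    # then measure the worst error on a fine grid.
    nodes = np.linspace(-5.0, 5.0, degree + 1)
    poly = np.polynomial.Polynomial.fit(nodes, runge(nodes), degree)
    grid = np.linspace(-5.0, 5.0, 2001)
    return float(np.max(np.abs(poly(grid) - runge(grid))))

# More points and a higher degree make the worst-case error bigger,
# not smaller.  The trouble concentrates near the ends of the interval.
print(max_interp_error(5), max_interp_error(15))
```

The degree-15 interpolation misses by more than the degree-5 one does, which is exactly the backwards behavior Runge noticed.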

I understand how none of this could fit that textbook’s slender footnote. I’m not sure any of the really good parts of the Witch of Agnesi could even fit thematically in that textbook. At least beyond the fact of its interesting name, which any good blog about the curve will explain. That there was no picture, and that the equation was beyond what the textbook had been describing, made it a challenge. Maybe not seeing what the shape was teased the mathematician out of this bored student.

And next is ‘X’. Will I take Mr Wu’s suggestion and use that to describe something “extreme”? Or will I take another topic or suggestion? We’ll see on Friday, barring unpleasant surprises. Thanks for reading.

This installment took longer to write than you’d figure, because it’s the time of year we’re watching a lot of mostly Rankin/Bass Christmas specials around here. So I have to squeeze words out in-between baffling moments of animation and, like, arguing whether there’s any possibility that Jack Frost was not meant to be a Groundhog Day special that got rewritten to Christmas because the networks weren’t having it otherwise.

Jeffrey Caulfield and Brian Ponshock’s Yaffle for the 3rd is the anthropomorphic numerals joke for the week. … You know, I’ve always wondered in this sort of setting, what are two-digit numbers like? I mean, what’s the difference between a twelve and a one-and-two just standing near one another? How do people recognize a solitary number? This is a darned silly thing to wonder so there’s probably a good web comic about it.

John Hambrock’s The Brilliant Mind of Edison Lee for the 4th has Edison forecast the outcome of a basketball game. I can’t imagine anyone really believing in forecasting the outcome, though. The elements of forecasting a sporting event are plausible enough. We can suppose a game to be a string of events. Each of them has possible outcomes. Some of them score points. Some block the other team’s score. Some cause control of the ball (or whatever makes scoring possible) to change teams. Some take a player out, for a while or for the rest of the game. So it’s possible to run through a simulated game. If you know well enough how the people playing do various things? How they’re likely to respond to different states of things? You could certainly simulate that.

But all sorts of crazy things will happen, one game or another. Run the same simulation again, with different random numbers. The final score will likely be different. The course of action certainly will. Run the same simulation many times over. Vary it a little; what happens if the best player is a little worse than average? A little better? What if the referees make a lot of mistakes? What if the weather affects the outcome? What if the weather is a little different? So each possible outcome of the sporting event has some chance. We have a distribution of the possible results. We can judge an expected value, and what the range of likely outcomes is. This demands a lot of data about the players, though. Edison Lee can have it, I suppose. The premise of the strip is that he’s a genius of unlimited competence. It would be more plausible to expect that kind of data for college and professional teams.
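To give a flavor of the simulation idea, here’s a deliberately crude Python sketch. Every number in it (scoring chances, possession counts, trial counts) is made up for illustration, not drawn from any real basketball data:

```python
import random

def simulate_game(p_score_a=0.5, p_score_b=0.5, possessions=100):
    """One crude simulated game: each team gets the same number of
    possessions, and each possession scores 2 points with the given
    probability.  All parameters are invented for illustration."""
    a = sum(2 for _ in range(possessions) if random.random() < p_score_a)
    b = sum(2 for _ in range(possessions) if random.random() < p_score_b)
    return a, b

def win_probability(trials=10_000, **kwargs):
    # Run the simulated game many times over and tally team A's wins.
    wins = ties = 0
    for _ in range(trials):
        a, b = simulate_game(**kwargs)
        if a > b:
            wins += 1
        elif a == b:
            ties += 1
    # Split ties evenly, for lack of an overtime model.
    return (wins + ties / 2) / trials

random.seed(4)
print(win_probability())                 # evenly matched teams
print(win_probability(p_score_a=0.55))   # team A slightly better
```

Re-running with varied parameters is exactly the “what if the best player is a little worse?” game: nudge `p_score_a` and watch the distribution of outcomes shift.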

Brian Basset’s Red and Rover for the 4th uses arithmetic as the homework to get torn up. I’m not sure it’s just a cameo appearance. It makes a difference to the joke as told that there’s division and long division, after all. But it could really be any subject.

Today’s topic is an always rich one. It was suggested by aajohannas, who so far as I know hasn’t got an active blog or other project. If I’m mistaken please let me know. I’m glad to mention the creative works of people hanging around my blog.

Randomness.

There’s an old Sydney Harris cartoon that I probably won’t be able to find a copy of before this publishes. A couple of people gather around an old fanfold-paper printer. On the printout is the sequence “1 … 2 … 3 … 4 … 5 … ” The caption: ‘Bizarre sequence of computer-generated random numbers’.

Randomness feels familiar. It feels knowable. It means surprise, unpredictability. The upending of patterns. The obliteration of structure. I imagine there are sociologists who’d say it’s what defines Modernity. It’s hard to avoid noticing that the first great scientific theories that embrace unpredictability — evolution and thermodynamics — came to public awareness at the same time impressionism came to arts, and the subconscious mind came to psychology. It’s grown since then. Quantum mechanics is built on unpredictable specifics. Chaos theory tells us even if we could predict statistics it would do us no good. Randomness feels familiar, even necessary. Even desirable. A certain type of nerd thinks eagerly of the Singularity, the point past which no social interactions are predictable anymore. We live in randomness.

And yet … it is hard to find randomness. At least to be sure we have found it. We might choose between options we find ambivalent by tossing a coin. This seems random. But anyone who was six years old and trying to cheat a sibling knows ways around that. Drop the coin without spinning it, from a half-inch above the table, and you know the outcome, all the way through to the sibling’s punching you. When we’re older and can be made to be better sports we’re fairer about it. We toss the coin and give it a spin. There’s no way we could predict the outcome. Unless we knew just how strong a toss we gave it, and how fast it spun, and how the mass of the coin was distributed. … Really, if we knew enough, our tossed coin would be as predictable as the coin we dropped as a six-year-old. At least unless we tossed it in some chaotic way, where each throw would be deterministic, but we couldn’t usefully make a prediction.

Our instinctive idea of what randomness must be is flawed. That shouldn’t surprise. Our instinctive idea of anything is flawed. But randomness gives us trouble. It’s obvious, for example, that randomly selected things should have no pattern. But then how is that reasonable? If we draw letters from the alphabet at random, we should expect sometimes to get some cute pattern like ‘aaaaa’ or ‘qwertyuiop’ or the works of Shakespeare. Perhaps we mean we shouldn’t get patterns any more often than we would expect. All right; how often is that?

We can make tests. Some of them are obvious. Take something that generates possibly-random results. Look up how probable each of those outcomes is. Then run off a bunch of outcomes. Do we get about as many of each result as we should expect? Probability tells us we should get as close as we like to the expected frequency if we let the random process run long enough. If this doesn’t happen, great! We can conclude we don’t really have something random.
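Here’s what that obvious test looks like in a few lines of Python, using die rolls as the possibly-random results. The roll count is an arbitrary choice of mine:

```python
import random
from collections import Counter

random.seed(2018)
N = 60_000
rolls = [random.randint(1, 6) for _ in range(N)]
counts = Counter(rolls)

# Each face should come up close to its expected frequency of N/6.
expected = N / 6
for face in range(1, 7):
    off_by = 100 * abs(counts[face] - expected) / expected
    print(f"face {face}: {counts[face]} rolls, {off_by:.2f}% off expected")
```

If some face came up wildly more or less often than one time in six, we’d conclude the “die” isn’t behaving randomly.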

We can do more tests. Some of them are brilliantly clever. Suppose there’s a way to order the results. Since mathematicians usually want numbers, putting them in order is easy to do. If they’re not, there’s usually a way to match results to numbers. You’ll see me slide here into talking about random numbers as though that were the same as random results. But if I can distinguish different outcomes, then I can label them. If I can label them, I can use numbers as labels. If the order of the numbers doesn’t matter — should “red” be a 1 or a 2? Should “green” be a 3 or an 8? — then, fine; any order is good.

There are 120 ways to order five distinct things. So generate lots of sets of, say, five numbers. What order are they in? There are 120 possibilities. Does each possibility turn up as often as expected? If they don’t, great! We can conclude we don’t really have something random.
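And here’s a sketch of that ordering test in Python. Again the sample count and seed are arbitrary choices of mine:

```python
import random
from collections import Counter
from math import factorial

random.seed(42)
N = 60_000
patterns = Counter()
for _ in range(N):
    draw = [random.random() for _ in range(5)]
    # Record which of the 120 orderings the five numbers landed in,
    # as a tuple like (2, 0, 4, 1, 3): the positions in sorted order.
    ordering = tuple(sorted(range(5), key=lambda i: draw[i]))
    patterns[ordering] += 1

# All 120 orderings should appear, each about N/120 = 500 times.
print(len(patterns), "orderings seen")
worst = max(abs(c - N / factorial(5)) for c in patterns.values())
print("largest deviation from expected count:", worst)
```

A generator that, say, never produced five numbers in strictly increasing order would fail this test even if it passed the simple frequency test.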

I can go on. There are many tests which will let us say something isn’t a truly random sequence. They’ll allow for something like Sydney Harris’s peculiar sequence of random numbers. Mostly by supposing that if we let it run long enough the sequence would stop. But these all rule out random number generators. Do we have any that rule them in? That say yes, this generates randomness?

I don’t know of any. I suspect there can’t be any, on the grounds that a test of a thousand or a thousand million or a thousand million quadrillion numbers can’t assure us the generator won’t break down next time we use it. If we knew the algorithm by which the random numbers were generated — oh, but there we’re foiled before we can start. An algorithm is the instructions of how to do a thing. How can an instruction tell us how to do a thing that can’t be predicted?

Algorithms seem, briefly, to offer a way to tell whether we do have a good random sequence, though. We can describe patterns. A strong pattern is easy to describe, the way a familiar story is easy to reference. A weak pattern, a random one, is hard to describe. It’s like a dream, in which you can just list events. So we can call random something which can’t be described any more efficiently than just giving a list of all the results. But how do we know that can’t be done? 7, 7, 2, 4, 5, 3, 8, 5, 0, 9 looks like a pretty good set of digits, whole numbers from 0 through 9. I’ll bet not more than one in ten of you guesses correctly what the next digit in the sequence is. Unless you’ve noticed that these are the digits after the decimal point in the square root of π, so that the next couple digits have to be 0, 5, 5, and 1.
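That’s the point: the digits look patternless, but the rule producing them is tiny. A Python sketch, starting from the digits of π (which I’ve typed in by hand):

```python
from decimal import Decimal, getcontext

# A "random-looking" digit sequence with a very short description:
# the square root of pi, computed to 30 significant digits.
getcontext().prec = 30
PI = Decimal("3.141592653589793238462643383279")
root = PI.sqrt()
print(root)   # starts 1.7724538509 0551 ..., as in the text
```

The whole infinite digit sequence compresses down to the phrase “the square root of π”, so by the description test it isn’t random at all.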

We know, on theoretical grounds, that we have randomness all around us. Quantum mechanics depends on it. If we need truly random numbers we can set a sensor. It will turn the arrival of cosmic rays, or the decay of radioactive atoms, or the sighing of a material flexing in the heat into numbers. We trust we gather these and process them in a way that doesn’t spoil their unpredictability. To what end?

That is, why do we care about randomness? Especially why should mathematicians care? The image of mathematics is that it is a series of logical deductions. That is, things known to be true because they follow from premises known to be true. Where can randomness fit?

One answer, one close to my heart, is called Monte Carlo methods. These are techniques that find approximate answers to questions. They do well when exact answers are too hard for us to find. They use random numbers to approximate answers and, often, to make approximate answers better. This demands computations. The field didn’t really exist before computers, although there are some neat forebears. I mean the Buffon needle problem, which lets you calculate the digits of π about as slowly as you could hope to do.

Another, linked to Monte Carlo methods, is stochastic geometry. “Stochastic” is the word mathematicians attach to things when they feel they’ve said “random” too often, or in an undignified manner. Stochastic geometry is what we can know about shapes when there’s randomness about how the shapes are formed. This sounds like it’d be too weak a subject to study. That it’s built on relatively weak assumptions means it describes things in many fields, though. It can be seen in understanding how forests grow. How to find structures inside images. How to place cell phone towers. Why materials should act like they do instead of some other way. Why galaxies cluster.

There’s also a stochastic calculus, a bit of calculus with randomness added. This is useful for understanding systems where some persistent unpredictable behavior is present. It comes, if I understand the histories of this right, from studying the ways molecules will move around in weird zig-zagging twists. They do this even when there is no overall flow, just a fluid at a fixed temperature. It too has surprising applications. Without the assumption that some prices of things are regularly jostled by arbitrary and unpredictable forces, and the treatment of that by stochastic calculus methods, we wouldn’t have nearly the ability to hedge investments against weird chaotic events. This would be a bad thing, I am told by people with more sophisticated investments than I have. I personally own like ten shares of the Tootsie Roll corporation and am working my way to a $2.00 rebate check from Boyer.

Given that we need randomness, but don’t know how to get it — or at least don’t know how to be sure we have it — what is there to do? We accept our failings and make do with “quasirandom numbers”. We find some process that generates numbers which look about like random numbers should. These have failings. Most important is that they can, in principle, be predicted. They’re random like “the date Easter will fall on” is random. The date Easter falls on is not at all random; it’s defined by a specific and humanly knowable formula. But if the only information you have is that this year, Easter fell on the 1st of April (Gregorian computus), you don’t have much guidance to whether next year it’ll be on the 7th, 14th, or 21st of April. Most notably, quasirandom number generators will tend to repeat after enough numbers are drawn. If we know we won’t need enough numbers to see a repetition, though? Another stereotype of the mathematician is that of a person who demands exactness. It is often more true to say she is looking for an answer good enough. We are usually all right with a merely good enough quasirandomness.
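A tiny example of a quasirandom generator is the classic linear congruential scheme. This toy version, with a modulus of just 16 so the repetition is easy to see, is a sketch for illustration and nothing anyone should use for real work:

```python
def lcg(seed, a=5, c=3, m=16):
    # A toy linear congruential generator.  These constants satisfy the
    # Hull-Dobell conditions, so the generator visits every residue
    # mod m exactly once before repeating.
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=0)
first_cycle = [next(gen) for _ in range(16)]
print(first_cycle)
# Every value 0..15 appears exactly once, and then the cycle repeats,
# exactly -- perfectly predictable, yet shuffled-looking along the way.
assert sorted(first_cycle) == list(range(16))
assert [next(gen) for _ in range(16)] == first_cycle
```

Real quasirandom generators use enormous moduli so the repetition happens far beyond any numbers you’d plausibly draw, but the character is the same: a knowable formula wearing a convincing random-number costume.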

Boyer candies — Mallo Cups, most famously, although I more like the peanut butter Smoothies — come with a cardboard card backing. Each card has two play money “coins”, of values from 5 cents to 50 cents. These can be gathered up for a rebate check or for various prizes. Whether your coin is 5 cents, 10, 25, or 50 cents … well, there’s no way to tell, before you open the package. It’s, so far as you can tell, randomness.

After that busy start last Sunday, Comic Strip Master Command left only a few things for the rest of the week. Here’s everything that seemed worthy of some comment to me:

Alex Hallatt’s Arctic Circle for the 12th is an arithmetic cameo. It’s used as the sort of thing that can be tested, with the straightforward joke about animal testing to follow. It’s not a surprise that machines should be able to do arithmetic. We’ve built machines for centuries to do arithmetic. Literally; Gottfried Wilhelm Leibniz designed and built a calculating machine able to add, subtract, multiply, and divide. This accomplishment from one of the founders of integral calculus is a potent reminder of how much we can accomplish if we’re supposed to be writing instead. (That link is to Robert Benchley’s classic essay “How To Get Things Done”. It is well worth reading, both because it is funny and because it’s actually good, useful advice.)

But it’s also true that animals do know arithmetic. At least a bit. Not — so far as we know — to the point they ponder square roots and such. But certainly to count, to understand addition and subtraction roughly, to have some instinct for calculations. Stanislas Dehaene’s The Number Sense: How the Mind Creates Mathematics is a fascinating book about this. I’m only wary about going deeper into the topic since I don’t know a second (and, better, third) pop book touching on how animals understand mathematics. I feel more comfortable with anything if I’ve encountered it from several different authors. Anyway it does imply the possibility of testing a polar bear’s abilities at arithmetic, only in the real world.

Berkeley Breathed’s Bloom County rerun for the 13th has another mathematics cameo. Geometry’s a subject worthy of stoking Binkley’s anxieties, though. It has a lot of definitions that have to be carefully observed. And while geometry reflects the understanding we have of things from moving around in space, it demands a precision that we don’t really have an instinct for. It’s a lot to worry about.

Terry Border’s Bent Objects for the 15th is our Venn Diagram joke for the week. I like this better than I think the joke deserves, probably because it is done in real materials. (Which is the Bent Objects schtick; it’s always photographs of objects arranged to make the joke.)

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 15th is a joke on knowing how far to travel but not what direction. Normal human conversations carry contextually reasonable suppositions. Told something is two miles away, it’s probably along the major road you’re on, or immediately nearby. I’d still ask for clarification if told something was “two miles away”. Two blocks, I’d let slide, on the grounds that it’s no big deal to correct a mistake.

Still, mathematicians carry defaults with them too. They might be open to a weird, general case, certainly. But we have expectations. There’s usually some obvious preferred coordinate system, or directions. If it’s important that we be ready for alternatives we highlight that. We specify the coordinate system we want. Perhaps we specify we’re taking that choice “without loss of generality”, that is, without supposing some other choice would be wrong.

I noticed the mathematician’s customized plate too. “EIPI1” is surely a reference to the expression $e^{i\pi} + 1$. That sum, it turns out, equals zero. It reflects this curious connection between exponentiation, complex-valued numbers, and the trigonometric functions. It’s a weird thing to know is true, and it’s highly regarded in certain nerd circles for that weirdness.
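The expression on the plate, $e^{i\pi} + 1$, is easy to check numerically:

```python
import cmath

# Euler's identity: e^(i*pi) + 1 should be zero, up to float rounding.
value = cmath.exp(1j * cmath.pi) + 1
print(value, abs(value))   # a residue around 1e-16, floating point's zero
```

The tiny leftover imaginary part is just the rounding error in the floating-point value of π, not a flaw in the identity.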

Hilary Price’s Rhymes With Orange for the 16th features a what-are-the-odds sort of joke, this one about being struck by a bolt from the sky. Lightning’s the iconic bolt to strike someone, and be surprising about it. Fabric would be no less surprising, though. And there’s no end of stories of weird things falling from the skies. It’s easier to get stuff into the sky than you might think, and there are only a few options once that’s happened.

Through the end of December my Fall 2018 Mathematics A To Z continues. I’m still open for topics to discuss from the last half-dozen letters of the alphabet. Even if someone’s already given a word for some letter, suggest something anyway. You might inspire me in good ways.

Dina Yagodich gave me the topic for today. She keeps up a YouTube channel with a variety of interesting videos. And she did me a favor. I’ve been thinking a long while about writing a major post on this theorem. Its subject turns up so often. I’d wanted to have a good essay about it. I hope this might be one.

Infinite Monkey Theorem.

Some mathematics escapes mathematicians and joins culture. This is one such. The monkeys are part of why. They’re funny and intelligent and sad and stupid and deft and clumsy, and they can sit at a keyboard and almost look in place. They’re so like humans, except that we empathize with them. To imagine lots of monkeys, and putting them to some silly task, is compelling.

The metaphor traces back to a 1913 article by the mathematical physicist Émile Borel which I have not read. Searching the web I find much more comment about it than I find links to a translation of the text. And only one copy of the original, in French. And that page wants €10 for it. So I can tell you what everybody says was in Borel’s original text, but can’t verify it. The paper’s title is “Statistical Mechanics and Irreversibility”. From this I surmise that Borel discussed one of the great paradoxes of statistical mechanics. If we open a bottle of one gas in an airtight room, it disperses through the room. Why doesn’t every molecule of gas just happen, by chance, to end up back where it started? It does seem that if we waited long enough, it should. It’s unlikely it would happen on any one day, but give it enough days …

But let me turn to the many web sites that are surely not all copying Wikipedia on this. Borel asked us to imagine a million monkeys typing ten hours a day. He posited it was possible but extremely unlikely that they would exactly replicate all the books of the richest libraries of the world. But that would be more likely than the atmosphere in a room un-mixing like that. Fair enough, but we’re not listening anymore. We’re thinking of monkeys. Borel’s is a fantastic image. It would see some adaptation over the years. Physicist Arthur Eddington, in 1928, made it an army of monkeys, with their goal being the writing of all the books in the British Museum. By 1960 Bob Newhart had an infinite number of monkeys and typewriters, and a goal of all the great books. Stating the premise gets a laugh, one I doubt the setup alone would get today. I’m curious whether Newhart brought the idea to the mass audience. (Google NGrams for “monkeys at typewriters” suggest that phrase was unwritten, in books, before about 1965.) We may owe Bob Newhart thanks for a lot of monkeys-at-typewriters jokes.

Newhart has a monkey hit on a line from Hamlet. I don’t know if it was Newhart that set the monkeys after Shakespeare particularly, rather than some other great work of writing. Shakespeare does seem to be the most common goal now. Sometimes the number of monkeys diminishes, to a thousand or even to one. Some people move the monkeys off of typewriters and onto computers. Some take the cowardly measure of putting the monkeys at “keyboards”. The word is ambiguous enough to allow for typewriters, computers, and maybe a Mergenthaler Linotype. The monkeys now work 24 hours a day. This will be a comment someday about how bad we allowed pre-revolutionary capitalism to get.

The cultural legacy of monkeys-at-keyboards might well itself be infinite. It turns up in comic strips every few weeks at least. Television shows, usually writing for a comic beat, mention it. Computer nerds doing humor can’t resist the idea. Here’s a video of a 1979 Apple ][ program titled THE INFINITE NO. OF MONKEYS, which used this idea to show programming tricks. And it’s a great philosophical test case. If a random process puts together a play we find interesting, has it created art? No deliberate process creates a sunset, but we can find in it beauty and meaning. Why not words? There’s likely a book to write about the infinite monkeys in pop culture. Though the quotations of original materials would start to blend together.

But the big question. Have the monkeys got a chance? In a break from every probability question ever, the answer is: it depends on what the question precisely is. Occasional real-world experiments-cum-art-projects suggest that actual monkeys are worse typists than you’d think. They do more of bashing the keys with a stone before urinating on it, a reminder of how slight is the difference between humans and our fellow primates. So we turn to abstract monkeys who behave more predictably, and run experiments that need no ethical oversight.

So we must think what we mean by Shakespeare’s Plays. Arguably the play is a specific performance of actors in a set venue doing things. This is a bit much to expect of even a skilled abstract monkey. So let us switch to the book of a play. This has a clearer representation. It’s a string of characters. Mostly letters, some punctuation. Good chance there are numerals in there. It’s probably a lot of characters. So the text to match is some specific, long string of characters in a particular order.

And what do we mean by a monkey at the keyboard? Well, we mean some process that picks characters randomly from the allowed set. When I see something is picked “randomly” I want to know what the distribution rule is. Like, are Q’s exactly as probable as E’s? As &’s? As %’s? How likely it is a particular string will get typed is easiest to answer if we suppose a “uniform” distribution. This means that every character is equally likely. We can quibble about capital and lowercase letters. My sense is most people frame the problem supposing case-insensitivity. That the monkey is doing fine to type “whaT beArD weRe i BEsT tO pLAy It iN?”. Or we could set the monkey at an old typesetter’s station, with separate keys for capital and lowercase letters. Some will even forgive the monkeys punctuating terribly. Make your choices. It affects the numbers, but not the point.

I’ll suppose there are 91 characters to pick from, as a Linotype keyboard had. So the monkey has capitals and lowercase and common punctuation to get right. Let your monkey pick one character. What is the chance it hit the first character of one of Shakespeare’s plays? Well, the chance is 1 in 91 that you’ve hit the first character of one specific play. There are several dozen plays your monkey might be typing, though. I bet some of them even start with the same character, so giving an exact answer is tedious. If all we want is monkey-typed Shakespeare plays, we’re being fussy if we want The Tempest typed up first and Cymbeline last. If we want a more tractable problem, it’s easier to insist on a set order.

So suppose we do have a set order. Then there’s a one-in-91 chance the first character matches the first character of the desired text. A one-in-91 chance the second character typed matches the second character of the desired text. A one-in-91 chance the third character typed matches the third character of the desired text. And so on, for the whole length of the play’s text. Getting one character right doesn’t make it more or less likely the next one is right. So the chance of getting a whole play correct is 1/91 raised to the power of however many characters are in the script. Call it 800,000 for argument’s sake. More characters, if you put two spaces between sentences. The prospects of getting this all correct are … dismal.
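Just how dismal is easy to put a number on, taking the figures above — 91 possible characters and a script of about 800,000 characters — as given. The probability itself underflows anything a computer stores, so this sketch counts digits instead:

```python
import math

alphabet = 91      # characters on the Linotype keyboard, as above
length = 800_000   # rough character count of one play, as above

# The chance of typing the play perfectly is (1/91)**800000 -- far too
# small for floating point, so count the digits in the denominator instead.
digits = length * math.log10(alphabet)
print(f"chance is about 1 in 10^{digits:,.0f}")
```

That comes to roughly one chance in 10 to the 1.5-millionth power, a denominator with more digits than the play has characters.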

I mean, there’s some cause for hope. Spelling was much less fixed in Shakespeare’s time. There are acceptable variations for many of his words. It’d be silly to rule out a possible script that (say) wrote “look’d” or “look’t”, rather than “looked”. Still, that’s a slender thread.

But there is more reason to hope. Chances are the first monkey will botch the first character. But what if they get the first character of the text right on the second character struck? Or on the third character struck? It’s all right if there’s some garbage before the text comes up. Many writers have trouble starting and build from a first paragraph meant to be thrown away. After every wrong letter is a new chance to type the perfect thing, reassurance for us all.

Since the monkey does type, hypothetically, forever … well, so each character struck has a probability of only (1/91)^800,000 (or whatever) of starting the lucky sequence. The monkey will have infinitely many chances to start. More chances than that, even.
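On a toy scale this restarting is easy to simulate: type random characters from a 91-symbol alphabet (the same assumption as above) until a short target string turns up, garbage beforehand and all. The target and the alphabet stand-in here are just illustrative choices:

```python
import random
import string

ALPHABET = string.printable[:91]   # stand-in for the 91 Linotype characters
random.seed(42)                    # a reproducible monkey

def keystrokes_until(target):
    """Count keystrokes until `target` appears at the end of what's typed."""
    typed = ""
    strokes = 0
    while not typed.endswith(target):
        typed += random.choice(ALPHABET)
        strokes += 1
    return strokes

# A two-character target takes about 91**2 = 8,281 strokes on average.
print(keystrokes_until("to"))
```

Going to three characters multiplies the expected wait by 91 again, which is why the full script is hopeless and a short phrase is merely tedious.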

And we don’t have only one monkey. We have a thousand monkeys. At least. A million monkeys. Maybe infinitely many monkeys. Each one, we trust, is working independently, owing to the monkeys’ strong sense of academic integrity. There are infinitely many monkeys working on the project. And more than that. Each one takes their chance.

There are dizzying possibilities here. There’s the chance some monkey will get it all exactly right first time out. More. Think of a row of monkeys. What’s the chance the first thing the first monkey in the row types is the first character of the play? What’s the chance the first thing the second monkey in the row types is the second character of the play? The chance the first thing the third monkey in the row types is the third character in the play? What’s the chance a long enough row of monkeys happen to hit the right buttons so the whole play appears in one massive simultaneous stroke of the keys? Not any worse than the chance your one monkey will type this all out. Monkeys at keyboards are ergodic. It’s as good to have a few monkeys working a long while as to have many monkeys working a short while. The Mythical Man-Month is, for this project, mistaken.

That solves it then, doesn’t it? A monkey, or a team of monkeys, has a nonzero probability of typing out all Shakespeare’s plays. Or the works of Dickens. Or of Jorge Luis Borges. Whatever you like. Given infinitely many chances at it, they will, someday, succeed.

Except.

What is the chance that the monkeys screw up? They get the works of Shakespeare just right, but for a flaw. The monkeys’ Midsummer Night’s Dream insists on having the fearsome lion played by “Smaug the joiner” instead. This would send the play-within-the-play in novel directions. The result, though interesting, would not be Shakespeare. There’s a nonzero chance they’ll write the play that way. And so, given infinitely many chances, they will.

What’s the chance that they always will? That they just miss every single chance to write “Snug”. It comes out “Smaug” every time?

We can say. Call the probability that they make this Snug-to-Smaug typo any given time p. That’s a number from 0 to 1. 0 corresponds to not making this mistake; 1 to certainly making it. The chance they get it right is 1 – p. The chance they make this mistake twice is p², smaller than p. The chance that they get it right at least once in two tries is 1 – p², closer to 1 than 1 – p is. The chance that, given three tries, they make the mistake every time is p³, even smaller still. The chance that they get it right at least once is 1 – p³, even closer to 1.

You see where this is going. Every extra try makes the chance they got it wrong every time smaller. Every extra try makes the chance they get it right at least once bigger. And now we can let some analysis come into play.

So give me a positive number. I don’t know your number, so I’ll call it ε. It’s how unlikely you want something to be before you say it won’t happen. Whatever your ε was, I can give you a number N. If the monkeys have taken more than N tries, the chance they get it wrong every single time is smaller than your ε. The chance they get it right at least once is bigger than 1 – ε. Let the monkeys have infinitely many tries. The chance the monkey gets it wrong every single time is smaller than any positive number. So the chance the monkey gets it wrong every single time is zero. It … can’t happen, right? The chance they get it right at least once is closer to 1 than to any other number. So it must be 1. So it must be certain. Right?
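Finding that N is a one-line computation. The typo probability and the tolerance below are made-up numbers for illustration, not anything from the essay:

```python
import math

p = 0.002       # hypothetical chance of the Snug-to-Smaug typo on one try
epsilon = 1e-9  # your tolerance: "too unlikely to take seriously"

# The chance of making the typo every single time in N tries is p**N.
# The smallest N driving that below epsilon is log(epsilon)/log(p).
N = math.ceil(math.log(epsilon) / math.log(p))
print(f"after {N} tries, the always-wrong chance is below epsilon")
```

Whatever positive ε you pick, the same formula hands back a finite N, which is all the argument above needs.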

But let me give you this. Detach a monkey from typewriter duty. This one has a coin to toss. It tosses fairly, with the coin having a 50% chance of coming up tails and 50% chance of coming up heads each time. The monkey tosses the coin infinitely many times. What is the chance the coin comes up tails every single one of these infinitely many times? The chance is zero, obviously. At least you can show the chance is smaller than any positive number. So, zero.
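The shrinking is easy to watch numerically. The chance of n tails in a row is (1/2) to the n-th power, and it dives below any positive number you care to name without any finite n ever making it exactly zero:

```python
# The chance a fair coin comes up tails n times in a row is (1/2)**n.
# It gets smaller than any positive threshold, but never reaches zero
# for any finite n.
for n in (10, 100, 1000):
    print(n, "tails in a row:", 0.5 ** n)
```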

Yet … what power enforces that? What forces the monkey to eventually have a coin come up heads? It’s … nothing. Each toss is a fair toss. Each toss is independent of its predecessors. But there is no force that causes the monkey, after a hundred million billion trillion tosses of “tails”, to then toss “heads”. It’s the gambler’s fallacy to think there is one. The hundred million billion trillionth-plus-one toss is as likely to come up tails as the first toss is. It’s impossible that the monkey should toss tails infinitely many times. But there’s no reason it can’t happen. It’s also impossible that the monkeys still on the typewriters should get Shakespeare wrong every single time. But there’s no reason that can’t happen.

It’s unsettling. Well, probability is unsettling. If you don’t find it disturbing you haven’t thought long enough about it. Infinities are unsettling too.

Formally, mathematicians interpret this — if not explain it — by saying the set of things that can happen is a “probability space”. The likelihood of something happening is what fraction of the probability space matches something happening. (I’m skipping a lot of background to say something that simple. Do not use this at your thesis defense without that background.) This sort of “impossible” event has “measure zero”. So its probability of happening is zero. Measure turns up in analysis, in understanding how calculus works. It complicates a bunch of otherwise-obvious ideas about continuity and stuff. It turns out to apply to probability questions too. Imagine the space of all the things that could possibly happen as being the real number line. Pick one number from that number line. What is the chance you have picked exactly the number -24.11390550338228506633488? I’ll go ahead and say you didn’t. It’s not that you couldn’t. It’s not impossible. It’s just that the chance that this happened, out of the infinity of possible outcomes, is zero.

The infinite monkeys give us this strange set of affairs. Some things have a probability of zero of happening, which does not rule out that they can. Some things have a probability of one of happening, which does not mean they must. I do not know what conclusion Borel ultimately drew about the reversibility problem. I expect his opinion to be that we have a clear answer, and unsettlingly great room for that answer to be incomplete.

I have to specify. There’s a bunch of mathematics concepts called “distribution”. Some of them are linked. Some of them are just called that because we don’t have a better word. Like, what else would you call multiplying something by a sum of things? I want to describe a distribution that comes to us in probability and in statistics. Through them it runs through modern physics, as well as truly difficult sciences like sociology and economics.

We get to distributions through random variables. These are variables that might be any one of multiple possible values. There might be as few as two options. There might be a finite number of possibilities. There might be infinitely many. They might be numbers. At the risk of sounding unimaginative, they often are. We’re always interested in measuring things. And we’re used to measuring them in numbers.

What makes a random variable hard to deal with is that, if we’re playing by the rules, we never know what its value is. Once we get through (high school) algebra we’re comfortable working with an ‘x’ whose value we don’t know. But that’s because we trust that, if we really cared, we would find out what it is. Or we would know that it’s a ‘dummy variable’, whose value is unimportant but gets us to something that is. A random variable is different. Its value matters, but we can’t know what it is.

Instead we get a distribution. This is a function which gives us information about what the outcomes are, and how likely they are. There are different ways to organize this data. If whoever’s talking about it doesn’t say just what they’re doing, bet on it being a “probability distribution function”. This follows slightly different rules based on whether the range of values is discrete or continuous, but the idea is roughly the same. Every possible outcome has a probability at least zero but not more than one. The total probability over every possible outcome is exactly one. There’s rules about the probability of two distinct outcomes happening. Stuff like that.
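For a discrete random variable those rules can be checked mechanically. Here’s a sketch with a made-up loaded die; the numbers are illustrative, not from anywhere in particular:

```python
# A made-up probability distribution for a loaded six-sided die.
pmf = {1: 0.10, 2: 0.10, 3: 0.15, 4: 0.15, 5: 0.20, 6: 0.30}

# Every outcome's probability is at least zero and at most one ...
assert all(0 <= prob <= 1 for prob in pmf.values())
# ... and the total over every possible outcome is one.
assert abs(sum(pmf.values()) - 1.0) < 1e-12
# The chance of one of two distinct outcomes is the sum of their chances.
print("P(roll is 1 or 6) =", pmf[1] + pmf[6])
```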

Distributions are interesting enough when they’re about fixed things. In learning probability this is stuff like hands of cards or totals of die rolls or numbers of snowstorms in the season. Fun enough. These get to be more personal when we take a census, or otherwise sample things that people do. There’s something wondrous in knowing that while, say, you might not know how long a commute your neighbor has, you know there’s an 80 percent chance it’s between 15 and 25 minutes (or whatever). It’s also good for urban planners to know.

It gets exciting when we look at how distributions can change. It’s hard not to think of that as “changing over time”. (You could make a fair argument that “change” is “time”.) But it doesn’t have to. We can take a function with a domain that contains all the possible values in the distribution, and a range that’s something else. The image of the distribution is some new distribution. (Trusting that the function doesn’t do something naughty.) These functions — these mappings — might reflect nothing more than relabelling, going from (say) a distribution of “false and true” values to one of “-5 and 5” values instead. They might reflect regathering data; say, going from the distribution of a die’s outcomes of “1, 2, 3, 4, 5, or 6” to something simpler, like, “less than two, exactly two, or more than two”. Or they might reflect how something does change in time. They’re all mappings; they’re all ways to change what a distribution represents.
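The regathering example can be sketched directly: push a fair die’s distribution through the map “less than two / exactly two / more than two” by summing the probabilities of every outcome that lands on the same new label. The helper name here is my own invention:

```python
from collections import defaultdict
from fractions import Fraction

# A fair six-sided die: every face has probability 1/6.
die = {face: Fraction(1, 6) for face in range(1, 7)}

def regather(dist, mapping):
    """Push a distribution through a function by summing the probabilities
    of every outcome the function sends to the same new label."""
    new = defaultdict(Fraction)
    for outcome, prob in dist.items():
        new[mapping(outcome)] += prob
    return dict(new)

def label(face):
    if face < 2:
        return "less than two"
    if face == 2:
        return "exactly two"
    return "more than two"

simpler = regather(die, label)
print(simpler)   # probabilities 1/6, 1/6, and 2/3 respectively
```

The image is itself a distribution — the new probabilities are still nonnegative and still total one — which is the point of insisting the function not do anything naughty.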

These mappings turn up in statistical mechanics. Processes will change the distribution of positions and momentums and electric charges and whatever else the things moving around do. It’s hard to learn. At least my first instinct was to try to warm up to it by doing a couple test cases. Pick specific values for the random variables and see how they change. This can help build confidence that one’s calculating correctly. Maybe give some idea of what sorts of behaviors to expect.

But it’s calculating the wrong thing. You need to look at the distribution as a specific thing, and how that changes. It’s a change of view. It’s like the change in view from thinking of a position as an x- and y- and maybe z-coordinate to thinking of position as a vector. (Which, I realize now, gave me slightly similar difficulties in thinking of what to do for any particular calculation.)

Distributions can change in time, just the way that — in simpler physics — positions might change. Distributions might stabilize, forming an equilibrium. This can mean that everything’s found a place to stop and rest. That will never happen for any interesting problem. What you might get is an equilibrium like the rings of Saturn. Everything’s moving, everything’s changing, but the overall shape stays the same. (Roughly.)

There are many specifically named distributions. They represent patterns that turn up all the time. The binomial distribution, for example, which represents what to expect if you have a lot of examples of something that can be one of two values each. The Poisson distribution, for representing how likely something that could happen any time (or any place) will happen in a particular span of time (or space). The normal distribution, also called the Gaussian distribution, which describes everything that isn’t trying to be difficult. There are like 400 billion dozen more named ones, each really good at describing particular kinds of problems. But they’re all distributions.
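The first two of those can be computed with nothing past the standard library. These are the usual textbook formulas, not anything particular to this essay:

```python
import math

def binomial_pmf(k, n, p):
    """Chance of exactly k successes in n tries, each with chance p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Chance of exactly k events when lam of them happen on average."""
    return lam**k * math.exp(-lam) / math.factorial(k)

print(binomial_pmf(5, 10, 0.5))  # most likely head count in ten fair tosses
print(poisson_pmf(0, 2.0))       # chance of no events when two are expected
```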