My 2018 Mathematics A To Z: Witch of Agnesi


Nobody had a suggested topic starting with ‘W’ for me! So I’ll take that as a free choice, and get lightly autobiographical.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble tiles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Witch of Agnesi.

I know I encountered the Witch of Agnesi while in middle school. Eighth grade, if I’m not mistaken. It was a footnote in a textbook. I don’t remember much of the textbook. What I mostly remember of the course was how much I did not fit with the teacher. The only relief from boredom that year was the month we had a substitute and the occasional interesting footnote.

It was in a chapter about graphing equations. That is, finding curves whose points have coordinates that satisfy some equation. In a bit of relief from lines and parabolas the footnote offered this:

y = \frac{8a^3}{x^2 + 4a^2}

In a weird tantalizing moment the footnote didn’t offer a picture. Or say what an ‘a’ was doing in there. In retrospect I recognize ‘a’ as a parameter, and that different values of it give different but related shapes. No hint what the ‘8’ or the ‘4’ were doing there. Nor why ‘a’ gets raised to the third power in the numerator or the second in the denominator. I did my best with the tools I had at the time. Picked a nice easy boring ‘a’. Picked out values of ‘x’ and found the corresponding ‘y’ which made the equation true, and tried connecting the dots. The result didn’t look anything like a witch. Nor a witch’s hat.
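If you’d like to reproduce my eighth-grade experiment with less pencil work, here’s a little Python sketch that tabulates points the same way. The function name and the choice of a = 1 are mine, not the textbook’s:

```python
# Tabulate points on the Witch of Agnesi, y = 8a^3 / (x^2 + 4a^2).
def witch(x, a=1.0):
    """Height of the curve at x, for the parameter a."""
    return 8 * a**3 / (x**2 + 4 * a**2)

# The nice easy boring choice, a = 1: connect these dots and you get
# a rounded hill, peak height 2a, not anything like a witch's hat.
points = [(x, round(witch(x), 3)) for x in range(-4, 5)]
```

The peak works out to 2a at x = 0, which is one answer to what the 8 and the 4 are doing there: they make the curve top out exactly at the diameter of the circle we’ll meet in a moment.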

It was one of a handful of biographical notes in the book. These were a little attempt to add some historical context to mathematics. It wasn’t much. But it was an attempt to show that mathematics came from people. Including, here, from Maria Gaëtana Agnesi. She was, I’m certain, the only woman mentioned in the textbook I’ve otherwise completely forgotten.

We have few names of ancient mathematicians. Those we have are often compilers like Euclid whose fame obliterated the people whose work they explained. Or they’re like Pythagoras, credited with discoveries by people who obliterated their own identities. In later times we have the mathematics done by, mostly, people whose social positions gave them time to write mathematics results. So we see centuries where every mathematician is doing it as their side hustle to being a priest or lawyer or physician or combination of these. Women don’t get the chance to stand out here.

Today of course we can name many women who did, and do, mathematics. We can name Emmy Noether, Ada Lovelace, and Marie-Sophie Germain. Challenged to do a bit more, we can offer Florence Nightingale and Sofia Kovalevskaya. Well, and also Grace Hopper and Margaret Hamilton if we decide computer scientists count. Katherine Johnson looks likely to make that cut. But in any case none of these people are known for work understandable in a pre-algebra textbook. This must be why Agnesi earned a place in this book. She’s among the earliest women we can specifically credit with doing noteworthy mathematics. (Also physics, but that’s off point for me.) Her curve might be a little advanced for that textbook’s intended audience. But it’s not far off, and pondering questions like “why 8a^3 ? Why not a^3 ?” is more pleasant, to a certain personality, than pondering what a directrix might be and why we might use one.

The equation might be a lousy way to visualize the curve described. The curve is one of that group of interesting shapes you get by constructions. That is, following some novel process. Constructions are fun. They’re almost a craft project.

For this we start with a circle. And two parallel tangent lines. Without loss of generality, suppose they’re horizontal, so, there’s a line at the top and a line at the bottom of the circle.

Take one of the two tangent points. Again without loss of generality, let’s say the bottom one. Draw a line from that point over to the other line. Anywhere on the other line. There’s a point where the line you drew intersects the circle. There’s another point where it intersects the other parallel line. We’ll find a new point by combining pieces of these two points. The point is on the same horizontal as wherever your line intersects the circle. It’s on the same vertical as wherever your line intersects the other parallel line. This point is on the Witch of Agnesi curve.

Now draw another line. Again, starting from the lower tangent point and going up to the other parallel line. Again it intersects the circle somewhere. This gives another point on the Witch of Agnesi curve. Draw another line. Another intersection with the circle, another intersection with the opposite parallel line. Another point on the Witch of Agnesi curve. And so on. Keep doing this. When you’ve drawn all the lines that reach from the tangent point to the other line, you’ll have generated the full Witch of Agnesi curve. This takes more work than writing out y = \frac{8a^3}{x^2 + 4a^2} , yes. But it’s more fun. It makes for neat animations. And I think it prepares us to expect the shape of the curve.
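If you’d rather not draw all those lines by hand, the construction translates directly into coordinates. Here’s a sketch, with my own names; I put the lower tangent point at the origin and the circle, radius a, above it:

```python
def witch_point(t, a=1.0):
    """Run the construction for the line from the origin to (t, 2a).

    The line from the lower tangent point (0, 0) to the point (t, 2a)
    on the upper tangent line crosses the circle x^2 + (y - a)^2 = a^2
    at the fraction s = 4a^2 / (t^2 + 4a^2) of the way along.  The new
    point takes its vertical from the top-line crossing and its
    horizontal from the circle crossing."""
    s = 4 * a**2 / (t**2 + 4 * a**2)
    return (t, 2 * a * s)
```

Multiply that out and the y-coordinate is 8a³/(t² + 4a²), so the construction really does generate the footnote’s equation.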

It’s a neat curve. Between it and the lower parallel line is an area four times that of the circle that generated it. The shape is one we would get from looking at the derivative of the arctangent. So there are some reasons someone working in calculus might find it interesting. And people did. Pierre de Fermat studied it, and found this area. Isaac Newton and Luigi Guido Grandi studied the shape, using this circle-and-parallel-lines construction. Maria Agnesi’s name attached to it after she published a calculus textbook which examined this curve. She showed, according to people who present themselves as having read her book, the curve and how to find it. And she showed its equation and found the vertex and asymptote line and the inflection points. The inflection points, here, are where the curve changes from being cupped upward to cupping downward, or vice-versa.
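That area claim is easy to check numerically, if you don’t trust the calculus. A midpoint-rule sum, with bounds wide enough that the tails don’t matter much (my own rough choices):

```python
def witch_area(a=1.0, half_width=1000.0, n=200_000):
    """Midpoint-rule estimate of the area between the curve and its asymptote."""
    dx = 2 * half_width / n
    total = 0.0
    for i in range(n):
        x = -half_width + (i + 0.5) * dx
        total += 8 * a**3 / (x**2 + 4 * a**2) * dx
    return total
```

For a = 1 this comes out near 4π, four times the generating circle’s area of π·1².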

It’s a neat function. It’s got some uses. It’s a natural smooth-hill shape, for example. So this makes a good generic landscape feature if you’re modeling the flow over a surface. I read that solitary waves can have this curve’s shape, too.

And the curve turns up as a probability distribution. Take a fixed point. Pick lines at random that pass through this point. See where those lines reach a separate, straight line. Some regions are more likely to be intersected than are others. Chart how often any particular point is the intersection point. That chart will (given some assumptions I ask you to pretend you agree with) be a Witch of Agnesi curve. This might not surprise you. It seems inevitable from the circle-and-intersecting-line construction process. And that’s nice enough. As a distribution it looks like the usual Gaussian bell curve.

It’s different, though. And it’s different in strange ways. Like, for a probability distribution we can find an expected value. That’s … well, what it sounds like. But this is the strange probability distribution for which the law of large numbers does not work. Imagine an experiment that produces real numbers, with the frequency of each number given by this distribution. Run the experiment zillions of times. What’s the mean value of all the zillions of generated numbers? And it … doesn’t … have one. I mean, we know it ought to, it should be the center of that hill. But the calculations for that don’t work right. Taking a bigger sample doesn’t make the sample mean settle down, the way it would for every other distribution you’re likely to meet. It keeps jumping around just as wildly. It’s a weird idea.
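You can watch this failure happen. The Witch, normalized, is what probability theorists call the Cauchy distribution, and you can draw from it by taking the tangent of a uniformly random angle. A sketch; the seed and the trial counts here are arbitrary choices of mine:

```python
import math
import random

def cauchy_draws(n, rng):
    """Standard Cauchy samples: tangents of uniform angles in (-pi/2, pi/2)."""
    return [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]

def sample_means(trials, n, seed=2018):
    """Means of many independent samples of size n."""
    rng = random.Random(seed)
    return [sum(cauchy_draws(n, rng)) / n for _ in range(trials)]
```

For a well-behaved distribution, means of samples of a thousand would cluster tightly, their spread shrinking like one over the square root of the sample size. Here the means of big samples are spread out about as widely as single draws are.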

Imagine carving a block of wood in the shape of this curve, with a horizontal lower bound and the Witch of Agnesi curve as the upper bound. Where would it balance? … The normal mathematical tools don’t say, even though the shape has an obvious line of symmetry. And a finite area. You don’t get this kind of weirdness with parabolas.

(Yes, you’ll get a balancing point if you actually carve a real one. This is because you work with finitely-long blocks of wood. Imagine you had a block of wood infinite in length. Then you would see some strange behavior.)

It teaches us more strange things, though. Consider interpolations, that is, taking a couple data points and fitting a curve to them. We usually start out looking for polynomials when we interpolate data points. This is because everything is polynomials. Toss in more data points. We need a higher-order polynomial, but we can usually fit all the given points. But sometimes polynomials won’t work. A problem called Runge’s Phenomenon can happen, where the more data points you have the worse your polynomial interpolation is. The Witch of Agnesi curve is one of those. Carl Runge discovered the problem by taking points on this curve and trying to fit polynomials to them. More data and higher-order polynomials make for worse interpolations. You get curves that look less and less like the original Witch. Runge is himself famous to mathematicians, known for “Runge-Kutta”. That’s a family of techniques to solve differential equations numerically. I don’t know whether Runge came to the weirdness of the Witch of Agnesi curve from considering how errors build in numerical integration. I can imagine it, though. The topics feel related to me.
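Here’s a sketch of Runge’s experiment, using the function he used, which is essentially a Witch of Agnesi rescaled to the interval from -1 to 1. It fits through equally spaced points with Lagrange’s interpolation formula:

```python
def runge(x):
    """Runge's test function: a rescaled Witch of Agnesi."""
    return 1.0 / (1.0 + 25.0 * x * x)

def lagrange(nodes, values, x):
    """Evaluate the unique polynomial through the given points at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(nodes, values)):
        term = yi
        for j, xj in enumerate(nodes):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def worst_error(n_nodes):
    """Largest miss, on a fine grid, of the interpolant through
    n equally spaced nodes on [-1, 1]."""
    nodes = [-1.0 + 2.0 * i / (n_nodes - 1) for i in range(n_nodes)]
    values = [runge(x) for x in nodes]
    grid = [-1.0 + k / 200.0 for k in range(401)]
    return max(abs(lagrange(nodes, values, x) - runge(x)) for x in grid)
```

More nodes make it worse: the interpolant through 21 points misses by far more than the one through 5, with the error piling up near the ends of the interval.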

I understand how none of this could fit that textbook’s slender footnote. I’m not sure any of the really good parts of the Witch of Agnesi could even fit thematically in that textbook. At least beyond the fact of its interesting name, which any good blog about the curve will explain. That there was no picture, and that the equation was beyond what the textbook had been describing, made it a challenge. Maybe not seeing what the shape was teased the mathematician out of this bored student.


And next is ‘X’. Will I take Mr Wu’s suggestion and use that to describe something “extreme”? Or will I take another topic or suggestion? We’ll see on Friday, barring unpleasant surprises. Thanks for reading.


Reading the Comics, December 4, 2018: Christmas Specials Edition


This installment took longer to write than you’d figure, because it’s the time of year we’re watching a lot of mostly Rankin/Bass Christmas specials around here. So I have to squeeze words out in-between baffling moments of animation and, like, arguing whether there’s any possibility that Jack Frost was not meant to be a Groundhog Day special that got rewritten to Christmas because the networks weren’t having it otherwise.

Graham Nolan’s Sunshine State for the 3rd is a misplaced Pi Day strip. I did check the copyright to see if it might be a rerun from when it was more seasonal.

Liz: 'I'm going to bake pies. What's your favorite?' 'Cherry!' 'Apple!' Liz 'Here comes Paul! Let's ask him, too.' Dink: 'He hates pie!' Paul: 'What are you talking about?' Dink: 'Nothing that would interest you.' Mel: 'We're talking about pie!' Paul: 'So you don't think I'm smart enough to discuss pi? Pi is the ratio of a circle's circumference to its diameter! It's a mathematical constant used in mathematics and physics! Its value is approximately 3.14159!' Mel: 'You forgot the most important thing about pie!' Paul: 'What's that?' Mel: 'It tastes delicious!' Dink: 'I hate pie!' Mel, Dink, and Liz: 'We know!'
Graham Nolan’s Sunshine State for the 3rd of December, 2018. This and other essays mentioning Sunshine State should be at this link. Or will be someday; it’s a new tag. Yeah, Paul’s so smart he almost knows the difference between it’s and its.

Jeffrey Caulfield and Brian Ponshock’s Yaffle for the 3rd is the anthropomorphic numerals joke for the week. … You know, I’ve always wondered in this sort of setting, what are two-digit numbers like? I mean, what’s the difference between a twelve and a one-and-two just standing near one another? How do people recognize a solitary number? This is a darned silly thing to wonder so there’s probably a good web comic about it.

An Old West town. An anthropomorphic 2 says to a 4, 'You know, Slim, I don't like the odds.' Standing opposite them, guns at the ready, are a hostile 5, 1, 3, and 7.
Jeffrey Caulfield and Brian Ponshock’s Yaffle for the 3rd of December, 2018. Essays inspired by Yaffle should appear at this link. It’s also a new tag, so don’t go worrying that there’s only this one essay there yet.

John Hambrock’s The Brilliant Mind of Edison Lee for the 4th has Edison forecast the outcome of a basketball game. I can’t imagine anyone really believing in forecasting the outcome, though. The elements of forecasting a sporting event are plausible enough. We can suppose a game to be a string of events. Each of them has possible outcomes. Some of them score points. Some block the other team’s score. Some cause control of the ball (or whatever makes scoring possible) to change teams. Some take a player out, for a while or for the rest of the game. So it’s possible to run through a simulated game. If you know well enough how the people playing do various things? How they’re likely to respond to different states of things? You could certainly simulate that.

Harley: 'C'mon, Edison, let's play basketball.' Edison: 'If I take into account the size and weight of the ball, the diameter of the hoop and your height in relation to it, and the number of hours someone your age would've had time to practice ... I can conclude that I'd win by 22 points. Nice game. Better luck next time.' Harley: 'But ... '
John Hambrock’s The Brilliant Mind of Edison Lee for the 4th of December, 2018. More ideas raised by Edison Lee I discuss at this link. Also it turns out Edison’s friend here is named Harley, which I mention so I have an easier time finding his name next time I need to refer to this strip. This will not work.

But all sorts of crazy things will happen, one game or another. Run the same simulation again, with different random numbers. The final score will likely be different. The course of action certainly will. Run the same simulation many times over. Vary it a little; what happens if the best player is a little worse than average? A little better? What if the referees make a lot of mistakes? What if the weather affects the outcome? What if the weather is a little different? So each possible outcome of the sporting event has some chance. We have a distribution of the possible results. We can judge an expected value, and what the range of likely outcomes is. This demands a lot of data about the players, though. Edison Lee can have it, I suppose. The premise of the strip is that he’s a genius of unlimited competence. It would be more reasonable to expect it for college and professional teams.
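A toy version of that kind of simulation, stripped down to coin-flip possessions. All the probabilities here are invented for illustration, nothing like real basketball data:

```python
import random

def simulate_game(rng, p_home=0.48, p_away=0.45, possessions=100):
    """One crude game: each team gets `possessions` tries, each worth
    two points with the given success chance."""
    home = 2 * sum(1 for _ in range(possessions) if rng.random() < p_home)
    away = 2 * sum(1 for _ in range(possessions) if rng.random() < p_away)
    return home, away

def home_win_chance(trials=10_000, seed=4):
    """Run many simulated games; count how often the home team wins."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        home, away = simulate_game(rng)
        if home > away:
            wins += 1
    return wins / trials
```

A three-percent edge per possession turns into winning only somewhere around two games in three. Any one game can still go either way, which is the point.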

Rover, dog: 'Can I help with your homework?' Red, kid: 'How are you at long division?' Rover: 'OK, I guess. Lemme see the problem first.' (Red holds the notes out to Rover, who tears the page off and chews it up.) Red: 'That was actually short division, but it'll do nicely for now.'
Brian Basset’s Red and Rover for the 4th of December, 2018. And more Red and Rover discussions are at this link.

Brian Basset’s Red and Rover for the 4th uses arithmetic as the homework to get torn up. I’m not sure it’s just a cameo appearance. It makes a difference to the joke as told that there’s division and long division, after all. But it could really be any subject.


I’m figuring to get to the letter ‘W’ in my Fall 2018 Mathematics A To Z glossary for Tuesday. And I also figure there should be two more Reading the Comics posts this week. When posted, they’ll be at this link.

My 2018 Mathematics A To Z: Randomness


Today’s topic is an always rich one. It was suggested by aajohannas, who so far as I know hasn’t got an active blog or other project. If I’m mistaken please let me know. I’m glad to mention the creative works of people hanging around my blog.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble tiles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Randomness.

An old Sydney Harris cartoon I probably won’t be able to find a copy of before this publishes. A couple people gather around an old fanfold-paper printer. On the printout is the sequence “1 … 2 … 3 … 4 … 5 … ” The caption: ‘Bizarre sequence of computer-generated random numbers’.

Randomness feels familiar. It feels knowable. It means surprise, unpredictability. The upending of patterns. The obliteration of structure. I imagine there are sociologists who’d say it’s what defines Modernity. It’s hard to avoid noticing that the first great scientific theories that embrace unpredictability — evolution and thermodynamics — came to public awareness at the same time impressionism came to arts, and the subconscious mind came to psychology. It’s grown since then. Quantum mechanics is built on unpredictable specifics. Chaos theory tells us that even when the statistics are predictable, predicting the specifics is hopeless. Randomness feels familiar, even necessary. Even desirable. A certain type of nerd thinks eagerly of the Singularity, the point past which no social interactions are predictable anymore. We live in randomness.

And yet … it is hard to find randomness. At least to be sure we have found it. We might choose between options we find ambivalent by tossing a coin. This seems random. But anyone who was six years old and trying to cheat a sibling knows ways around that. Drop the coin without spinning it, from a half-inch above the table, and you know the outcome, all the way through to the sibling’s punching you. When we’re older and can be made to be better sports we’re fairer about it. We toss the coin and give it a spin. There’s no way we could predict the outcome. Unless we knew just how strong a toss we gave it, and how fast it spun, and how the mass of the coin was distributed. … Really, if we knew enough, our tossed coin would be as predictable as the coin we dropped as a six-year-old. At least unless we tossed in some chaotic way, where each throw would be deterministic, but we couldn’t usefully make a prediction.

At a craps table, Commander Data looks with robo-concern at the dice in his hand. Riker, Worf, and some characters from the casino hotel watch, puzzled.
Dice are also predictable, if you are able to precisely measure how the weight inside them is distributed, and can be precise enough about how you’ll throw them, and know enough about the surface they’ll roll on. Screen capture from TrekCore’s archive of Star Trek: The Next Generation images.

Our instinctive idea of what randomness must be is flawed. That shouldn’t surprise. Our instinctive idea of anything is flawed. But randomness gives us trouble. It’s obvious, for example, that randomly selected things should have no pattern. But then how is that reasonable? If we draw letters from the alphabet at random, we should expect sometimes to get some cute pattern like ‘aaaaa’ or ‘qwertyuiop’ or the works of Shakespeare. Perhaps we mean we shouldn’t get patterns any more often than we would expect. All right; how often is that?

We can make tests. Some of them are obvious. Take something that generates possibly-random results. Look up how probable each of those outcomes is. Then run off a bunch of outcomes. Do we get about as many of each result as we should expect? Probability tells us we should get as close as we like to the expected frequency if we let the random process run long enough. If this doesn’t happen, great! We can conclude we don’t really have something random.
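The obvious test, sketched in Python with a die-roll generator standing in for whatever we’re suspicious of. The seed and the roll count are arbitrary choices of mine:

```python
import random
from collections import Counter

def worst_frequency_error(rolls, faces=6):
    """Biggest relative gap between how often a face came up and how
    often it should have."""
    counts = Counter(rolls)
    expected = len(rolls) / faces
    return max(abs(counts[f] - expected) for f in range(1, faces + 1)) / expected

rng = random.Random(2018)
deviation = worst_frequency_error([rng.randint(1, 6) for _ in range(60_000)])
```

A big deviation lets us throw the generator out. A small one tells us nothing for sure; it just fails to convict.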

We can do more tests. Some of them are brilliantly clever. Suppose there’s a way to order the results. Mathematicians usually want numbers, and numbers are easy to put in order. If the results aren’t numbers, there’s usually a way to match them to numbers. You’ll see me slide here into talking about random numbers as though that were the same as random results. But if I can distinguish different outcomes, then I can label them. If I can label them, I can use numbers as labels. If the order of the numbers doesn’t matter — should “red” be a 1 or a 2? Should “green” be a 3 or an 8? — then, fine; any order is good.

There are 120 ways to order five distinct things. So generate lots of sets of, say, five numbers. What order are they in? There’s 120 possibilities. Does each of the possibilities turn up as often as expected? If they don’t, great! We can conclude we don’t really have something random.
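And a sketch of the orderings test. Generate many sets of five numbers, label each set by which of the 120 possible orderings it’s in, and count; again the seed and counts are my arbitrary choices:

```python
import random
from collections import Counter

def ordering(values):
    """Which rank ordering distinct values fall in, as a tuple of indices."""
    return tuple(sorted(range(len(values)), key=lambda i: values[i]))

rng = random.Random(42)
counts = Counter(ordering([rng.random() for _ in range(5)])
                 for _ in range(120_000))
```

With 120,000 sets, each of the 120 orderings should appear about a thousand times. A generator that favored, say, ascending runs would show up immediately.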

I can go on. There are many tests which will let us say something isn’t a truly random sequence. They can even handle something like Sydney Harris’s peculiar sequence of random numbers, mostly by supposing that if we let it run long enough so obvious a pattern would have to break. But these tests can only rule random number generators out. Do we have any that rule them in? That say yes, this generates randomness?

I don’t know of any. I suspect there can’t be any, on the grounds that a test of a thousand or a thousand million or a thousand million quadrillion numbers can’t assure us the generator won’t break down next time we use it. If we knew the algorithm by which the random numbers were generated — oh, but there we’re foiled before we can start. An algorithm is the instructions of how to do a thing. How can an instruction tell us how to do a thing that can’t be predicted?

Algorithms seem, briefly, to offer a way to tell whether we do have a good random sequence, though. We can describe patterns. A strong pattern is easy to describe, the way a familiar story is easy to reference. A weak pattern, a random one, is hard to describe. It’s like a dream, in which you can just list events. So we can call random something which can’t be described any more efficiently than just giving a list of all the results. But how do we know that can’t be done? 7, 7, 2, 4, 5, 3, 8, 5, 0, 9 looks like a pretty good set of digits, whole numbers from 0 through 9. I’ll bet not more than one in ten of you guesses correctly what the next digit in the sequence is. Unless you’ve noticed that these are the digits in the square root of π, after its leading 1, so that the next couple digits have to be 0, 5, 5, and 1.
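You can check me on that, of course. Here’s one way, with Python’s decimal module and π pasted in to more digits than we need:

```python
from decimal import Decimal, getcontext

# Work to 30 significant digits, plenty for this check.
getcontext().prec = 30
PI = Decimal("3.14159265358979323846264338327")

# String of the digits of sqrt(pi), decimal point removed.
digits = str(PI.sqrt()).replace(".", "")
```

The string starts 177245385090551…, so after the leading 1 come my ten digits and then the 0, 5, 5, 1.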

We know, on theoretical grounds, that we have randomness all around us. Quantum mechanics depends on it. If we need truly random numbers we can set a sensor. It will turn the arrival of cosmic rays, or the decay of radioactive atoms, or the sighing of a material flexing in the heat into numbers. We trust we gather these and process them in a way that doesn’t spoil their unpredictability. To what end?

That is, why do we care about randomness? Especially why should mathematicians care? The image of mathematics is that it is a series of logical deductions. That is, things known to be true because they follow from premises known to be true. Where can randomness fit?

One answer, one close to my heart, is called Monte Carlo methods. These are techniques that find approximate answers to questions. They do well when exact answers are too hard for us to find. They use random numbers to approximate answers and, often, to make approximate answers better. This demands computations. The field didn’t really exist before computers, although there are some neat forebears. I mean the Buffon needle problem, which lets you calculate the digits of π about as slowly as you could hope to do.
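For the flavor of it, here’s a Buffon-needle sketch. And yes, there’s a cheat in it: the simulation already uses π to pick a random angle, which is part of why nobody does this seriously. The throw count and seed are my arbitrary choices:

```python
import math
import random

def buffon_pi(throws=200_000, needle=1.0, gap=2.0, seed=0):
    """Estimate pi from needle drops on lines spaced `gap` apart.

    A needle no longer than the gap crosses a line with probability
    2 * needle / (pi * gap), so pi ~ 2 * needle * throws / (gap * crossings)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(throws):
        center = rng.uniform(0.0, gap / 2.0)      # distance to nearest line
        angle = rng.uniform(0.0, math.pi / 2.0)   # needle's tilt to the lines
        if center <= (needle / 2.0) * math.sin(angle):
            crossings += 1
    return 2.0 * needle * throws / (gap * crossings)
```

Two hundred thousand throws gets you maybe two good decimal digits, which is the “about as slowly as you could hope” part.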

Another, linked to Monte Carlo methods, is stochastic geometry. “Stochastic” is the word mathematicians attach to things when they feel they’ve said “random” too often, or in an undignified manner. Stochastic geometry is what we can know about shapes when there’s randomness about how the shapes are formed. This sounds like it’d be too weak a subject to study. That it’s built on relatively weak assumptions means it describes things in many fields, though. It can be seen in understanding how forests grow. How to find structures inside images. How to place cell phone towers. Why materials should act like they do instead of some other way. Why galaxies cluster.

There’s also a stochastic calculus, a bit of calculus with randomness added. This is useful for understanding systems subject to some persistent, unpredictable jostling. It comes, if I understand the histories of this right, from studying the ways molecules will move around in weird zig-zagging twists. They do this even when there is no overall flow, just a fluid at a fixed temperature. It too has surprising applications. Without the assumption that some prices of things are regularly jostled by arbitrary and unpredictable forces, and the treatment of that by stochastic calculus methods, we wouldn’t have nearly the ability to hedge investments against weird chaotic events. This would be a bad thing, I am told by people with more sophisticated investments than I have. I personally own like ten shares of the Tootsie Roll corporation and am working my way to a $2.00 rebate check from Boyer.
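That zig-zag motion is easiest to meet as its discrete cousin, the simple random walk, the skeleton that Brownian motion and stochastic calculus get built on. A sketch, with step and trial counts I picked arbitrarily:

```python
import random

def walk_endpoint(steps, rng):
    """Final position of a symmetric one-dimensional random walk."""
    return sum(rng.choice((-1, 1)) for _ in range(steps))

# The signature fact: after n steps the typical distance from the start
# grows like the square root of n, not like n.
rng = random.Random(7)
typical = sum(abs(walk_endpoint(400, rng)) for _ in range(2000)) / 2000
```

For 400 steps the average distance works out near the square root of 2·400/π, about 16, nowhere near the 400 a straight march would give.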

Playland's Derby Racer in motion, at night, featuring a ride operator leaning maybe twenty degrees inward.
Rye Playland’s is the fastest carousel I’m aware of running. Riders are warned ahead of time to sit so they’re leaning to the left, and the ride will not get up to full speed until the ride operator checks everyone during the ride. To get some idea of its speed, notice the ride operator on the left and how far he leans. He’s not being dramatic; that’s the natural stance. Also the tilt in the carousel’s floor is not camera trickery; it does lean like that.

Given that we need randomness, but don’t know how to get it — or at least don’t know how to be sure we have it — what is there to do? We accept our failings and make do with “quasirandom numbers”. We find some process that generates numbers which look about like random numbers should. These have failings. Most important is that they can, in principle, be predicted. They’re random like “the date Easter will fall on” is random. The date Easter falls on is not at all random; it’s defined by a specific and humanly knowable formula. But if the only information you have is that this year, Easter fell on the 1st of April (Gregorian computus), you don’t have much guidance to whether it’ll fall on the 7th, 14th, or 21st of April the next year. Most notably, quasirandom number generators will tend to repeat after enough numbers are drawn. If we know we won’t need enough numbers to see a repetition, though? Another stereotype of the mathematician is that of a person who demands exactness. It is often more true to say she is looking for an answer good enough. We are usually all right with a merely good enough quasirandomness.
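The repetition is easy to see in a toy example. Here’s a deliberately tiny linear congruential generator, a classic scheme for this kind of number; real ones use enormous moduli, but the principle is the same. The constants are mine, chosen to give the full period:

```python
def lcg_stream(seed, a=21, c=1, m=64):
    """Yield values of the linear congruential recurrence x -> (a*x + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def period(seed):
    """Steps until the tiny generator first revisits a value."""
    seen = {}
    for i, v in enumerate(lcg_stream(seed)):
        if v in seen:
            return i - seen[v]
        seen[v] = i
```

With these constants every seed cycles through all 64 residues and then starts over, exactly. The numbers only look patternless if you never draw 65 of them.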

Boyer candies — Mallo Cups, most famously, although I more like the peanut butter Smoothies — come with a cardboard card backing. Each card has two play money “coins”, of values from 5 cents to 50 cents. These can be gathered up for a rebate check or for various prizes. Whether your coin is 5 cents, 10, 25, or 50 cents … well, there’s no way to tell, before you open the package. It’s, so far as you can tell, randomness.


My next A To Z post should be available at this link. It’s coming Tuesday and should be the letter ‘S’.

Reading the Comics, November 16, 2018: The Rest Of The Week Edition


After that busy start last Sunday, Comic Strip Master Command left only a few things for the rest of the week. Here’s everything that seemed worthy of some comment to me:

Alex Hallatt’s Arctic Circle for the 12th is an arithmetic cameo. It’s used as the sort of thing that can be tested, with the straightforward joke about animal testing to follow. It’s not a surprise that machines should be able to do arithmetic. We’ve built machines for centuries to do arithmetic. Literally; Gottfried Wilhelm Leibniz designed and built a calculating machine able to add, subtract, multiply, and divide. This accomplishment from one of the founders of integral calculus is a potent reminder of how much we can accomplish if we’re supposed to be writing instead. (That link is to Robert Benchley’s classic essay “How To Get Things Done”. It is well worth reading, both because it is funny and because it’s actually good, useful advice.)

Rabbit, reading the paper: 'Artificial intelligence could make animal testing obsolete.' Polar Bear: 'Thank goodness.' Penguin imagines the Polar Bear in school, being asked by the teacher the square root of 121, with a robot beside him whispering '11'.
Alex Hallatt’s Arctic Circle for the 12th of November, 2018. Other essays based on Arctic Circle should be at this link.

But it’s also true that animals do know arithmetic. At least a bit. Not — so far as we know — to the point they ponder square roots and such. But certainly to count, to understand addition and subtraction roughly, to have some instinct for calculations. Stanislas Dehaene’s The Number Sense: How the Mind Creates Mathematics is a fascinating book about this. I’m only wary about going deeper into the topic since I don’t know a second (and, better, third) pop book touching on how animals understand mathematics. I feel more comfortable with anything if I’ve encountered it from several different authors. Anyway it does imply the possibility of testing a polar bear’s abilities at arithmetic, only in the real world.

In school. Binkley: 'Don't say anything, Ms Harlow, but a giant spotted snorklewacker from my closet full of anxieties has followed me to school and since experience has proven that he plans to grab me, I'd like permission to go home and hide.' Ms Harlow: 'Mr Binkley, that's the stinkiest excuse I've ever heard for getting out of a geometry exam. Go sit down.' Binkley's face-down at his desk; the Giant Spotted Snorklewacker asks, 'Pssst! What's the Pythagorean theorem?'
Berkeley Breathed’s Bloom County rerun for the 13th of November, 2018. It originally ran the 17th of February, 1983. Never mind the copyright notice; those would often show the previous year the first couple weeks of the year. Essays based on topics raised by Bloom County — original or modern continuation — should be at this link.

Berkeley Breathed’s Bloom County rerun for the 13th has another mathematics cameo. Geometry’s a subject worthy of stoking Binkley’s anxieties, though. It has a lot of definitions that have to be carefully observed. And while geometry reflects the understanding we have of things from moving around in space, it demands a precision that we don’t really have an instinct for. It’s a lot to worry about.

Written into two ring stains on a napkin: 'People who drink coffee'. 'People who drink tea'. Pointing to the intersection: 'People who share napkins.'
Terry Border’s Bent Objects for the 15th of November, 2018. Other essays based on Bent Objects will be at this link. It’s a new tag, so for now, there’s just that.

Terry Border’s Bent Objects for the 15th is our Venn Diagram joke for the week. I like this better than I think the joke deserves, probably because it is done in real materials. (Which is the Bent Objects schtick; it’s always photographs of objects arranged to make the joke.)

Teacher: 'I need to buy some graph paper for my students. Is there a convenience store near here?' Guy: 'Yeah, just two miles away from campus.' Later: Teacher, driving, realizes: 'Wait, he didn't specify a coordinate system. NOOOOOOO!' as her car leaps into the air.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 15th of November, 2018. In case there’s ever another essay which mentions Saturday Morning Breakfast Cereal it’ll be at this link.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 15th is a joke about knowing how far to travel but not in what direction. Normal human conversations carry contextually reasonable suppositions. Told something is two miles away, it’s probably along the major road you’re on, or immediately nearby. I’d still ask for clarification if told something was “two miles away”. Two blocks, I’d let slide, on the grounds that it’s no big deal to correct a mistake.

Still, mathematicians carry defaults with them too. They might be open to a weird, general case, certainly. But we have expectations. There’s usually some obvious preferred coordinate system, or directions. If it’s important that we be ready for alternatives we highlight that. We specify the coordinate system we want. Perhaps we specify we’re taking that choice “without loss of generality”, that is, without supposing some other choice would be wrong.

I noticed the mathematician’s customized plate too. “EIPI1” is surely a reference to the expression e^{\imath \pi} + 1 . That sum, it turns out, equals zero. It reflects this curious connection between exponentiation, complex-valued numbers, and the trigonometric functions. It’s a weird thing to know is true, and it’s highly regarded in certain nerd circles for that weirdness.

The Odds. Guy checking his phone after his friend's been knocked down: 'There's tons of stuff about being struck by a bolt of lightning --- nothing about bolts of fabric.' [Title panel extra gag: 'Lucky for you it's soft and silky.']
Hilary Price’s Rhymes With Orange for the 16th of November, 2018. And times I’ve discussed something from Rhymes With Orange should be at this link.

Hilary Price’s Rhymes With Orange for the 16th features a what-are-the-odds sort of joke, this one about being struck by a bolt from the sky. Lightning’s the iconic bolt to strike someone, and be surprising about it. Fabric would be no less surprising, though. And there’s no end of stories of weird things falling from the skies. It’s easier to get stuff into the sky than you might think, and there are only a few options once that’s happened.


And as ever, all my Reading the Comics posts should be at this link.

Through the end of December my Fall 2018 Mathematics A To Z continues. I’m still open for topics to discuss from the last half-dozen letters of the alphabet. Even if someone’s already given a word for some letter, suggest something anyway. You might inspire me in good ways.

My 2018 Mathematics A To Z: Infinite Monkey Theorem


Dina Yagodich gave me the topic for today. She keeps up a YouTube channel with a variety of interesting videos. And she did me a favor. I’ve been thinking a long while about writing a major post about this theorem. Its subject turns up so often. I’d wanted to have a good essay about it. I hope this might be one.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble titles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Infinite Monkey Theorem.

Some mathematics escapes mathematicians and joins culture. This is one such. The monkeys are part of why. They’re funny and intelligent and sad and stupid and deft and clumsy, and they can sit at a keyboard and look almost in place. They’re so like humans, except that we empathize with them. To imagine lots of monkeys, and to put them to some silly task, is compelling.

Monkey Typewriter Theory: An immortal monkey pounding on a typewriter will eventually reproduce the text of 'Hamlet'. Baby Keyboard Theory: Left alone, a baby pounding on a computer keyboard will eventually order 32 cases of bathroom caulk from an online retailer.
Paul Trapp’s Thatababy for the 13th of February, 2014.

The metaphor traces back to a 1913 article by the mathematical physicist Émile Borel which I have not read. Searching the web I find much more comment about it than I find links to a translation of the text. And only one copy of the original, in French. And that page wants €10 for it. So I can tell you what everybody says was in Borel’s original text, but can’t verify it. The paper’s title is “Statistical Mechanics and Irreversibility”. From this I surmise that Borel discussed one of the great paradoxes of statistical mechanics. If we open a bottle of one gas in an airtight room, it disperses through the room. Why doesn’t every molecule of gas just happen, by chance, to end up back where it started? It does seem that if we waited long enough, it should. It’s unlikely it would happen on any one day, but give it enough days …

But let me turn to the many web sites that are surely not all copying Wikipedia on this. Borel asked us to imagine a million monkeys typing ten hours a day. He posited it was possible but extremely unlikely that they would exactly replicate all the books of the richest libraries of the world. But that would still be more likely than the atmosphere in a room un-mixing like that. Fair enough, but we’re not listening anymore. We’re thinking of monkeys. Borel’s is a fantastic image. It would see some adaptation over the years. The physicist Arthur Eddington, in 1928, made it an army of monkeys, with their goal being the writing of all the books in the British Museum. By 1960 Bob Newhart had an infinite number of monkeys and typewriters, and a goal of all the great books. Stating the premise got a laugh then, which I doubt the setup alone would get today. I’m curious whether Newhart brought the idea to the mass audience. (A Google Ngrams search for “monkeys at typewriters” suggests the phrase was unwritten, in books, before about 1965.) We may owe Bob Newhart thanks for a lot of monkeys-at-typewriters jokes.

Kid: 'Mom, Dad, I want to go bungee jumping this summer!' Dad: 'A thousand monkeys working a thousand typewriters would have a better chance of randomly typing the complete works of William Shakespeare over the summer than you have of bungee jumping.' (Awkward pause.) Kid: 'What's a typewriter?' Dad: 'A thousand monkeys randomly TEXTING!'
Bill Hinds’s Cleats rerun for the 1st of July, 2018.

Newhart has a monkey hit on a line from Hamlet. I don’t know if it was Newhart that set the monkeys after Shakespeare particularly, rather than some other great work of writing. Shakespeare does seem to be the most common goal now. Sometimes the number of monkeys diminishes, to a thousand or even to one. Some people move the monkeys off of typewriters and onto computers. Some take the cowardly measure of putting the monkeys at “keyboards”. The word is ambiguous enough to allow for typewriters, computers, and maybe a Mergenthaler Linotype. The monkeys now work 24 hours a day. This will be a comment someday about how bad we allowed pre-revolutionary capitalism to get.

The cultural legacy of monkeys-at-keyboards might well itself be infinite. It turns up in comic strips every few weeks at least. Television shows, usually writing for a comic beat, mention it. Computer nerds doing humor can’t resist the idea. Here’s a video of a 1979 Apple ][ program titled THE INFINITE NO. OF MONKEYS, which used this idea to show programming tricks. And it’s a great philosophical test case. If a random process puts together a play we find interesting, has it created art? No deliberate process creates a sunset, but we can find in it beauty and meaning. Why not words? There’s likely a book to write about the infinite monkeys in pop culture. Though the quotations of original materials would start to blend together.

But the big question. Have the monkeys got a chance? In a break from every probability question ever, the answer is: it depends on what the question precisely is. Occasional real-world experiments-cum-art-projects suggest that actual monkeys are worse typists than you’d think. They do more of bashing the keys with a stone before urinating on it, a reminder of how slight is the difference between humans and our fellow primates. So we turn to abstract monkeys who behave more predictably, and run experiments that need no ethical oversight.

Toby: 'So this English writer is like a genius, right? And he's the greatest playwright ever. And I want to be just like him! Cause what he does, see, is he gets infinite monkeys on typewriters and just lets 'em go nuts, so eventually they write ALL of Shakespeare's plays!' Brother: 'Cool! And what kind of monkey is an 'infinite'?' Toby: 'Beats me, but I hope I don't have to buy many of them.' Dad: 'Toby, are you *sure* you completely pay attention when your teachers are talking?' Toby: 'What? Yes! Why?'

Greg Cravens’ The Buckets for the 30th of March, 2014.

So we must think what we mean by Shakespeare’s Plays. Arguably the play is a specific performance of actors in a set venue doing things. This is a bit much to expect of even a skilled abstract monkey. So let us switch to the book of a play. This has a clearer representation. It’s a string of characters. Mostly letters, some punctuation. Good chance there’s numerals in there. It’s probably a lot of characters. So the text to match is some specific, long string of characters in a particular order.

And what do we mean by a monkey at the keyboard? Well, we mean some process that picks characters randomly from the allowed set. When I see something is picked “randomly” I want to know what the distribution rule is. Like, are Q’s exactly as probable as E’s? As &’s? As %’s? How likely it is a particular string will get typed is easiest to answer if we suppose a “uniform” distribution. This means that every character is equally likely. We can quibble about capital and lowercase letters. My sense is most people frame the problem supposing case-insensitivity. That the monkey is doing fine to type “whaT beArD weRe i BEsT tO pLAy It iN?”. Or we could set the monkey at an old typesetter’s station, with separate keys for capital and lowercase letters. Some will even forgive the monkeys punctuating terribly. Make your choices. It affects the numbers, but not the point.

Literary Calendar. Several jokes, including: Saturday 7pm: an infinite number of chimpanzees discuss their multi-volume 'Treasury of Western Literature with no Typos' at the Museum of Natural History. Nit picking to follow.
Richard Thompson’s Richard’s Poor Almanac rerun for the 7th of November, 2016.

I’ll suppose there are 91 characters to pick from, as a Linotype keyboard had. So the monkey has capitals and lowercase and common punctuation to get right. Let your monkey pick one character. What is the chance it hit the first character of one of Shakespeare’s plays? Well, the chance is 1 in 91 that you’ve hit the first character of one specific play. There’s several dozen plays your monkey might be typing, though. I bet some of them even start with the same character, so giving an exact answer is tedious. If all we want is monkey-typed Shakespeare plays, we’re being fussy if we want The Tempest typed up first and Cymbeline last. But if we want a more tractable problem, it’s easier to insist on a set order.

So suppose we do have a set order. Then there’s a one-in-91 chance the first character matches the first character of the desired text. A one-in-91 chance the second character typed matches the second character of the desired text. A one-in-91 chance the third character typed matches the third character of the desired text. And so on, for the whole length of the play’s text. Getting one character right doesn’t make it more or less likely the next one is right. So the chance of getting a whole play correct is \frac{1}{91} raised to the power of however many characters are in the script. Call it 800,000 for argument’s sake. More characters, if you put two spaces between sentences. The prospect of getting this all correct is … dismal.
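
Just how dismal can be sketched in a few lines of Python, using the 91-key alphabet and 800,000-character length supposed above. The probability itself underflows ordinary floating point, so this sketch reports its base-10 logarithm instead:

```python
import math

KEYS = 91         # the Linotype-style character set from above
LENGTH = 800_000  # the rough character count of one play's script

# Keystrokes are independent and uniform, so the chance of one perfect
# pass is (1/91)**800000. That underflows any float; take logarithms.
log10_chance = -LENGTH * math.log10(KEYS)
print(f"chance of one perfect pass: about 10^({log10_chance:,.0f})")
```

That exponent comes out around negative one and a half million, which is the kind of number that makes “unlikely” feel like an understatement.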

I mean, there’s some cause for hope. Spelling was much less fixed in Shakespeare’s time. There are acceptable variations for many of his words. It’d be silly to rule out a possible script that (say) wrote “look’d” or “look’t”, rather than “looked”. Still, that’s a slender thread.

Proverb Busters: testing the validity of old sayings. Doctor: 'A hundred monkeys at a hundred typewriters. Over time, will one of them eventually write a Shakepeare play?' Winky: 'Nope. Just the script for Grown-Ups 3'. Doctor: 'Another proverb busted.'
Tim Rickard’s Brewster Rockit for the 1st of April, 2014.

But there is more reason to hope. Chances are the first monkey will botch the first character. But what if they get the first character of the text right on the second character struck? Or on the third character struck? It’s all right if there’s some garbage before the text comes up. Many writers have trouble starting and build from a first paragraph meant to be thrown away. After every wrong letter is a new chance to type the perfect thing, reassurance for us all.

Since the monkey does type, hypothetically, forever, each character struck has a probability of only \left(\frac{1}{91}\right)^{800,000} (or whatever) of starting the lucky sequence. But the monkey will have 91^{800,000} chances to start. More chances than that.
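
This waiting-time idea can be played with directly. Here’s a toy Python simulation; the ten-key alphabet and the two-character target are my own stand-ins, scaled down so the experiment finishes before the heat death of the universe:

```python
import random
import string

def keystrokes_until(target, alphabet, rng):
    """Strike uniform random keys until `target` appears; count the strokes."""
    window = ""
    strokes = 0
    while not window.endswith(target):
        window = (window + rng.choice(alphabet))[-len(target):]
        strokes += 1
    return strokes

rng = random.Random(2018)
alphabet = string.ascii_lowercase[:10]  # a toy ten-key keyboard
waits = [keystrokes_until("ab", alphabet, rng) for _ in range(2000)]
mean = sum(waits) / len(waits)
# "ab" can't overlap itself, so the expected wait is 10**2 = 100 strokes.
print(f"average wait: {mean:.1f} strokes (theory says 100)")
```

The target “ab” can’t overlap with itself, which is why the theoretical average wait is exactly the alphabet size raised to the target’s length. After every wrong letter, a new chance.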

And we don’t have only one monkey. We have a thousand monkeys. At least. A million monkeys. Maybe infinitely many monkeys. Each one, we trust, is working independently, owing to the monkeys’ strong sense of academic integrity. There are 91^{800,000} monkeys working on the project. And more than that. Each one takes their chance.

Melvin: 'Hold on now --- replacement? Who could you find to do all the tasks only Melvin can perform?' Rita: 'A macaque, in fact. Listen, if an infinite number of monkeys can write all the great works, I'm confident that one will more than cover for you.'
John Zakour and Scott Roberts’s Working Daze for the 29th of May, 2018.

There are dizzying possibilities here. There’s the chance some monkey will get it all exactly right first time out. More. Think of a row of monkeys. What’s the chance the first thing the first monkey in the row types is the first character of the play? What’s the chance the first thing the second monkey in the row types is the second character of the play? The chance the first thing the third monkey in the row types is the third character in the play? What’s the chance a long enough row of monkeys happen to hit the right buttons so the whole play appears in one massive simultaneous stroke of the keys? Not any worse than the chance your one monkey will type this all out. Monkeys at keyboards are ergodic. It’s as good to have a few monkeys working a long while as to have many monkeys working a short while. The Mythical Man-Month is, for this project, mistaken.

That solves it then, doesn’t it? A monkey, or a team of monkeys, has a nonzero probability of typing out all Shakespeare’s plays. Or the works of Dickens. Or of Jorge Luis Borges. Whatever you like. Given infinitely many chances at it, they will, someday, succeed.

Except.

A thousand monkeys at a thousand typewriters ... will eventually write 'Hamlet'. A thousand cats at a thousand typewriters ... will tell you go to write your own danged 'Hamlet'.
Doug Savage’s Savage Chickens for the 14th of August, 2018.

What is the chance that the monkeys screw up? They get the works of Shakespeare just right, but for a flaw. The monkeys’ Midsummer Night’s Dream insists on having the fearsome lion played by “Smaug the joiner” instead. This would send the play-within-the-play in novel directions. The result, though interesting, would not be Shakespeare. There’s a nonzero chance they’ll write the play that way. And so, given infinitely many chances, they will.

What’s the chance that they always will? That they just miss every single chance to write “Snug”. It comes out “Smaug” every time?

Eddie: 'You know the old saying about putting an infinite number of monkeys at an infinite number of typewriters, and eventually they'll accidentally write Shakespeare's plays?' Toby: 'I guess.' Eddie: 'My English teacher says that nothing about our class should worry those monkeys ONE BIT!'
Greg Cravens’s The Buckets for the 6th of October, 2018.

We can say. Call the probability that they make this Snug-to-Smaug typo any given time p . That’s a number from 0 to 1. 0 corresponds to certainly not making this mistake; 1 to certainly making it. The chance they get it right on one try is 1 - p . The chance they make this mistake twice in a row is p times p, which is smaller than p. The chance that they get it right at least once in two tries is closer to 1 than 1 - p is. The chance that, given three tries, they make the mistake every time is smaller still. The chance that they get it right at least once is even closer to 1.

You see where this is going. Every extra try makes the chance they got it wrong every time smaller. Every extra try makes the chance they get it right at least once bigger. And now we can let some analysis come into play.

So give me a positive number. I don’t know your number, so I’ll call it ε. It’s how unlikely you want something to be before you say it won’t happen. Whatever your ε was, I can give you a number M . If the monkeys have taken more than M tries, the chance they get it wrong every single time is smaller than your ε. The chance they get it right at least once is bigger than 1 - ε. Let the monkeys have infinitely many tries. The chance the monkey gets it wrong every single time is smaller than any positive number. So the chance the monkey gets it wrong every single time is zero. It … can’t happen, right? The chance they get it right at least once is closer to 1 than to any other number. So it must be 1. So it must be certain. Right?
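
That M can be computed, if you like. Here’s a Python sketch; the slip probability of 0.999 per try is a number I made up, and the point is only that any p smaller than 1 eventually drops below any ε you name:

```python
import math

p = 0.999  # made-up chance of the Snug-to-Smaug slip on any one try

def tries_needed(eps, p):
    """Smallest M with p**M < eps: the M promised for your epsilon."""
    return math.ceil(math.log(eps) / math.log(p))

for eps in (1e-2, 1e-6, 1e-12):
    M = tries_needed(eps, p)
    print(f"eps = {eps:g}: after M = {M} tries, P(wrong every time) < eps")
```

Even a 99.9 percent slip rate gives in after a few thousand tries, for any particular ε. Infinitely many tries beats them all.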

Poncho, the dog, looking over his owner's laptop: 'They say if you let an infinite number of cats walk on an infinite number of keyboards, they'll eventually type all the great works of Shakespeare.' The cat walks across the laptop, connecting to their owner's bank site and entering the correct password. Poncho: 'I'll take it.'
Paul Gilligan’s Pooch Cafe for the 17th of September, 2018.

But let me give you this. Detach a monkey from typewriter duty. This one has a coin to toss. It tosses fairly, with the coin having a 50% chance of coming up tails and 50% chance of coming up heads each time. The monkey tosses the coin infinitely many times. What is the chance the coin comes up tails every single one of these infinitely many times? The chance is zero, obviously. At least you can show the chance is smaller than any positive number. So, zero.

Yet … what power enforces that? What forces the monkey to eventually have a coin come up heads? It’s … nothing. Each toss is a fair toss. Each toss is independent of its predecessors. But there is no force that causes the monkey, after a hundred million billion trillion tosses of “tails”, to then toss “heads”. It’s the gambler’s fallacy to think there is one. The hundred million billion trillionth-plus-one toss is as likely to come up tails as the first toss is. It’s impossible that the monkey should toss tails infinitely many times. But there’s no reason it can’t happen. It’s also impossible that the monkeys still on the typewriters should get Shakespeare wrong every single time. But there’s no reason that can’t happen.

It’s unsettling. Well, probability is unsettling. If you don’t find it disturbing you haven’t thought long enough about it. Infinities, too, are unsettling.

Researcher overseeing a room of monkeys: 'Shakespeare would be OK, but I'd prefer they come up with a good research grant proposal.'
John Deering’s Strange Brew for the 20th of February, 2014.

Formally, mathematicians interpret this — if not explain it — by saying the set of things that can happen is a “probability space”. The likelihood of something happening is what fraction of the probability space matches something happening. (I’m skipping a lot of background to say something that simple. Do not use this at your thesis defense without that background.) This sort of “impossible” event has “measure zero”. So its probability of happening is zero. Measure turns up in analysis, in understanding how calculus works. It complicates a bunch of otherwise-obvious ideas about continuity and stuff. It turns out to apply to probability questions too. Imagine the space of all the things that could possibly happen as being the real number line. Pick one number from that number line. What is the chance you have picked exactly the number -24.11390550338228506633488? I’ll go ahead and say you didn’t. It’s not that you couldn’t. It’s not impossible. It’s just that the chance that this happened, out of the infinity of possible outcomes, is zero.

The infinite monkeys give us this strange set of affairs. Some things have a probability of zero of happening, which does not rule out that they can. Some things have a probability of one of happening, which does not mean they must. I do not know what conclusion Borel ultimately drew about the reversibility problem. I expect his opinion was that we have a clear answer, and unsettlingly great room for that answer to be incomplete.


This and other Fall 2018 Mathematics A-To-Z posts can be read at this link. The next essay should come Friday and will, I hope, be shorter.

My 2018 Mathematics A To Z: Distribution (probability)


Today’s term ended up being a free choice. Nobody found anything appealing in the D’s to ask about. That’s all right.

I’m still looking for topics for the letters G through M, excluding L, if you’d like in on those letters.

And for my own sake, please check out the Playful Mathematics Education Blog Carnival, #121, if you haven’t already.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble titles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Distribution (probability).

I have to specify. There’s a bunch of mathematics concepts called ‘distribution’. Some of them are linked. Some of them are just called that because we don’t have a better word. Like, what else would you call multiplying a factor through a sum? I want to describe a distribution that comes to us in probability and in statistics. Through these it runs into modern physics, as well as truly difficult sciences like sociology and economics.

We get to distributions through random variables. These are variables that might be any one of multiple possible values. There might be as few as two options. There might be a finite number of possibilities. There might be infinitely many. They might be numbers. At the risk of sounding unimaginative, they often are. We’re always interested in measuring things. And we’re used to measuring them in numbers.

What makes random variables hard to deal with is that, if we’re playing by the rules, we never know what it is. Once we get through (high school) algebra we’re comfortable working with an ‘x’ whose value we don’t know. But that’s because we trust that, if we really cared, we would find out what it is. Or we would know that it’s a ‘dummy variable’, whose value is unimportant but gets us to something that is. A random variable is different. Its value matters, but we can’t know what it is.

Instead we get a distribution. This is a function which gives us information about what the outcomes are, and how likely they are. There are different ways to organize this data. If whoever’s talking about it doesn’t say just what they’re doing, bet on it being a “probability distribution function”. This follows slightly different rules based on whether the range of values is discrete or continuous, but the idea is roughly the same. Every possible outcome has a probability at least zero but not more than one. The total probability over every possible outcome is exactly one. There’s rules about the probability of two distinct outcomes happening. Stuff like that.
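
Those rules are easy to check mechanically. Here’s a Python sketch, with a discrete distribution as a plain dictionary; the commute-time numbers are made up for illustration:

```python
# A discrete distribution as a plain dict: outcome -> probability.
# The commute-time figures are invented, purely for illustration.
commute = {"under 15 min": 0.10, "15-25 min": 0.80, "over 25 min": 0.10}

# Every probability is at least zero and no more than one.
assert all(0 <= p <= 1 for p in commute.values())
# The total probability over every possible outcome is exactly one.
assert abs(sum(commute.values()) - 1) < 1e-12
# Distinct outcomes: the chance of one or the other is the sum.
p_at_least_15 = commute["15-25 min"] + commute["over 25 min"]
print(f"chance the commute runs 15 minutes or more: {p_at_least_15:.2f}")
```
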

Distributions are interesting enough when they’re about fixed things. In learning probability this is stuff like hands of cards or totals of die rolls or numbers of snowstorms in the season. Fun enough. These get to be more personal when we take a census, or otherwise sample things that people do. There’s something wondrous in knowing that while, say, you might not know how long a commute your neighbor has, you know there’s an 80 percent chance it’s between 15 and 25 minutes (or whatever). It’s also good for urban planners to know.

It gets exciting when we look at how distributions can change. It’s hard not to think of that as “changing over time”. (You could make a fair argument that “change” is “time”.) But it doesn’t have to. We can take a function with a domain that contains all the possible values in the distribution, and a range that’s something else. The image of the distribution is some new distribution. (Trusting that the function doesn’t do something naughty.) These functions — these mappings — might reflect nothing more than relabelling, going from (say) a distribution of “false and true” values to one of “-5 and 5” values instead. They might reflect regathering data; say, going from the distribution of a die’s outcomes of “1, 2, 3, 4, 5, or 6” to something simpler, like, “less than two, exactly two, or more than two”. Or they might reflect how something does change in time. They’re all mappings; they’re all ways to change what a distribution represents.
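
The die-regathering example can be written out explicitly. A Python sketch, with the mapping and the regathering made concrete (the function names here are mine, not standard terms):

```python
from collections import defaultdict
from fractions import Fraction

# The distribution of a fair die: each face carries probability 1/6.
die = {face: Fraction(1, 6) for face in range(1, 7)}

def push_forward(dist, f):
    """Map a distribution through f, regathering the probability mass."""
    image = defaultdict(Fraction)
    for outcome, prob in dist.items():
        image[f(outcome)] += prob
    return dict(image)

def coarse(face):
    if face < 2:
        return "less than two"
    if face == 2:
        return "exactly two"
    return "more than two"

regathered = push_forward(die, coarse)
for label, prob in regathered.items():
    print(label, prob)  # the mass regathers as 1/6, 1/6, and 2/3
```

The same `push_forward` handles the relabelling case too: map “false and true” to “-5 and 5” and the probabilities come along unchanged.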

These mappings turn up in statistical mechanics. Processes will change the distribution of positions and momentums and electric charges and whatever else the things moving around do. It’s hard to learn. At least my first instinct was to try to warm up to it by doing a couple test cases. Pick specific values for the random variables and see how they change. This can help build confidence that one’s calculating correctly. Maybe give some idea of what sorts of behaviors to expect.

But it’s calculating the wrong thing. You need to look at the distribution as a specific thing, and how that changes. It’s a change of view. It’s like the change in view from thinking of a position as an x- and y- and maybe z-coordinate to thinking of position as a vector. (Which, I realize now, gave me slightly similar difficulties in thinking of what to do for any particular calculation.)

Distributions can change in time, just the way that — in simpler physics — positions might change. Distributions might stabilize, forming an equilibrium. This can mean that everything’s found a place to stop and rest. That will never happen for any interesting problem. What you might get is an equilibrium like the rings of Saturn. Everything’s moving, everything’s changing, but the overall shape stays the same. (Roughly.)

There are many specifically named distributions. They represent patterns that turn up all the time. The binomial distribution, for example, which represents what to expect if you have a lot of examples of something that can be one of two values each. The Poisson distribution, for representing how likely something that could happen any time (or any place) will happen in a particular span of time (or space). The normal distribution, also called the Gaussian distribution, which describes everything that isn’t trying to be difficult. There are like 400 billion dozen more named ones, each really good at describing particular kinds of problems. But they’re all distributions.
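
The formulas behind those named distributions are compact enough to write down. A Python sketch; the particular numbers tried at the end are arbitrary examples, not anything from the essay:

```python
import math

def binomial_pmf(k, n, p):
    """Chance of exactly k successes in n two-valued trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Chance of exactly k events in a span averaging lam events."""
    return lam**k * math.exp(-lam) / math.factorial(k)

def normal_pdf(x, mu, sigma):
    """Height of the normal (Gaussian) curve at x."""
    return math.exp(-((x - mu) / sigma) ** 2 / 2) / (sigma * math.sqrt(2 * math.pi))

print(binomial_pmf(5, 10, 0.5))   # exactly 5 heads in 10 fair flips: 0.24609375
print(poisson_pmf(0, 2.0))        # no events when 2 are expected: about 0.135
print(normal_pdf(0.0, 0.0, 1.0))  # peak of the standard normal: about 0.399
```
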

Reading the Comics, August 4, 2018: August 4, 2018 Edition


And finally, at last, there’s a couple of comics left over from last week and that all ran the same day. If I hadn’t gone on forever about negative Kelvin temperatures I might have included them in the previous essay. That’s all right. These are strips I expect to need relatively short discussions to explore. Watch now as I put out 2,400 words explaining Wavehead misinterpreting the teacher’s question.

Dave Whamond’s Reality Check for the 4th is proof that my time spent last week writing about which is better, large numbers or small, wasn’t wasted. There I wrote about four versus five for Beetle Bailey. Here it’s the same joke, but with compound words. Well, that’s easy to take care of.

[ Caption: Most people have a forehead --- Dave has a Five-Head. ] (Dave has an extremely tall head with lots of space between his eyebrows and his hair.) Squirrel in the corner: 'He'll need a 12-gallon hat.'
Dave Whamond’s Reality Check for the 4th of August, 2018. I’m sure it’s a coincidence that the tall-headed person shares a name with the cartoonist.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 4th is driving me slightly crazy. The equation on the board looks like an electrostatics problem to me. The ‘E’ is a common enough symbol for the strength of an electric field. And the funny-looking K’s look to me like the Greek kappa. This often represents the dielectric constant. That measures how well an electric field can move through a material. The upside-down triangles, known in the trade as del (or nabla), describe — well, that’s getting complicated. By themselves, they describe measuring “how much the thing right after this changes in different directions”. When there’s an x symbol between the del and the thing, it measures something called the “curl”. This roughly measures how much the field inspires things caught up in it to turn. (Don’t try passing this off to your thesis defense committee.) The del x del x E describes the curl of the curl of E. Oh, I don’t like visualizing that. I don’t blame you if you don’t want to either.
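
For whatever reassurance it’s worth, nobody visualizes the curl of the curl directly. There’s a standard vector-calculus identity (background knowledge, not anything shown in the strip) that unpacks it:

\nabla \times \left( \nabla \times \vec{E} \right) = \nabla \left( \nabla \cdot \vec{E} \right) - \nabla^2 \vec{E}

This turns the double curl into a gradient-of-divergence piece and a Laplacian piece, which is the form electrostatics problems usually get solved in.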

Professor Ridley: 'Imagine an infinitely thin rod. Visualize it but don't laugh at it. I know it's difficult. Now, the following equations hold for ... ' [ Caption: Professor Ridley's cry for help goes unnoticed. ]
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 4th of August, 2018. Really not clear what the cry for help would be about. Just treat the rod as a limiting case of an enormous number of small spheres placed end to end and you’re done.

Anyway. So all this looks like it’s some problem about a rod inside an electric field. Fine enough. What I don’t know and can’t work out is what the problem is studying exactly. So I can’t tell you whether the equation, so far as we see it, is legitimately something to see in class. Envisioning a rod that’s infinitely thin is a common enough mathematical trick, though. Three-dimensional objects are hard to deal with. They have edges. These are fussy to deal with. Making sure the interior, the boundary, and the exterior match up in a logically consistent way is tedious. But a wire? A plane? A single point? That’s easy. They don’t have an interior. You don’t have to match up the complicated stuff.

For real world problems, yeah, you have to deal with the interior. Or you have to work out reasons why the interiors aren’t important in your problem. And it can be that your object is so small compared to the space it has to work in that the fact it’s not infinitely thin or flat or smooth just doesn’t matter. Mathematical models, such as give us equations, are a blend of describing what really is there and what we can work with.

Lotto official looking over a burnt, shattered check: 'What are the ODDS?! First he wins the lottery and then he gets struck by lightning!'
Mike Shiell’s The Wandering Melon for the 4th of August, 2018. Still, impressive watchband that it’s stood up to all that trouble.

Mike Shiell’s The Wandering Melon for the 4th is a probability joke, about two events that nobody’s likely to experience. The chance any individual will win a lottery is tiny, but enough people play them that someone wins just about any given week. The chance any individual will get struck by lightning is tiny too. But it happens to people. The combination? Well, that’s obviously impossible.

In July of 2015, Peter McCathie had this happen. He survived a lightning strike first. And then won the Atlantic Lotto 6/49. This was years apart, but the chance of both happening the same day, or same week? … Well, the world is vast and complicated. Unlikely things will happen.


And that’s all that I have for the past week. Come Sunday I should have my next Reading the Comics post, and you can find it and other essays at this link. Other essays that mention Reality Check are at this link. The many other essays which talk about Saturday Morning Breakfast Cereal are at this link. And other essays about The Wandering Melon are at this link. Thanks.

Reading the Comics, July 28, 2018: Command Performance Edition


One of the comics from the last half of last week is here mostly because Roy Kassinger asked if I was going to include it. Which one? Read on and see.

Scott Metzger’s The Bent Pinky for the 24th is the anthropomorphic-numerals joke for the week. It’s pretty easy to learn, or memorize, or test small numbers for whether they’re prime. The bigger a number gets the harder it is. Mostly it takes time. You can rule some numbers out easily enough. If they’re even numbers other than 2, for example. Or if their (base ten) digits add up to a multiple of three or nine. But once you’ve got past a couple easy filters … you don’t have to just try dividing them by all the prime numbers up to their square root. Comes close, though. Would save a lot of time if the numerals worked that out ahead of time and then kept the information around, in case it were needed. Seems a little creepy to be asking that of other numbers, really. Certainly to give special privileges to numbers for accidents of their creation.

Check-out at Whole Numbers Foods. The cashier, 7, asks, 'We have great discounts! Are you a prime member?' The customer is an unhappy-looking 8.
Scott Metzger’s The Bent Pinky for the 24th of July, 2018. I’m curious whether the background customers were a 2 and a 3 because they do represent prime numbers, or whether they were just picked because they look good.
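The quick filters and trial division described above fit in a few lines of Python. This is just a sketch of the idea; `is_prime` is my own name for the routine, not anything the numerals would recognize:

```python
def is_prime(n: int) -> bool:
    """Trial division, with the easy filters applied first."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    # Easy filters: even numbers other than 2 are out, and so are
    # numbers whose (base ten) digits add up to a multiple of three.
    if n % 2 == 0:
        return False
    if sum(int(d) for d in str(n)) % 3 == 0:
        return False
    # Past the filters, try odd divisors up to the square root.
    d = 5
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True
```

The square-root bound is the whole trick: if n has any divisor bigger than its square root, it must also have one smaller, so checking the small candidates suffices.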

Tony Rubino and Gary Markstein’s Daddy’s Home for the 25th is an iteration of bad-at-arithmetic jokes. In this case there’s the arithmetic that’s counting, and there’s the arithmetic that’s the addition and subtraction demanded for checkbook-balancing.

Dad: 'Dang ... my checkbook doesn't balance again. That's happened more times than I can count.' Neighbor: 'See, that's why your checkbook doesn't balance.'
Tony Rubino and Gary Markstein’s Daddy’s Home for the 25th of July, 2018. I don’t question the plausibility of people writing enough checks in 2018 that they need to balance them, even though my bank has been sold three times to new agencies before I’ve been able to use up one book of 25 checks. I do question whether people with the hobby of checkbook-balancing routinely do this outside, at the fence, while hanging out with the neighbor.

Wiley Miller’s Non Sequitur for the 25th is an Einstein joke. In a rare move for the breed this doesn’t have “E = mc2” in it, except in the implication that it was easier to think of than squirrel-proof bird feeders would be. Einstein usually gets acclaim for mathematical physics work. But he was also a legitimate inventor, with patents in his own right. He and his student Leó Szilárd developed a refrigerator that used no moving parts. Most refrigeration technology requires the use of toxic chemicals to actually do the cooling. Einstein and Szilárd hoped to make something less likely to leak these toxins. The design never saw widespread use. Ordinary refrigerators, using freon (shockingly harmless biologically, though dangerous to the upper atmosphere) got reliable enough that the danger of leaks got tolerable. And the electromagnetic pump the machine used instead made noise that at least some reports say was unbearable. The design as worked out also used a potassium-sodium alloy, not the sort of thing easy to work with. Now and then there’s talk of reviving the design. Its potential, as something that could use any heat source to provide refrigeration, seems neat. And everybody in this side of science and engineering wants to work on something that Einstein touched.

[ When Einstein switched to something a lot easier. ] (He's standing in front of a blackboard full of sketches.) Einstein: '*Sigh* ... I give up. No matter how many ways I work it out, the squirrels still get to the bird feeders.
Wiley Miller’s Non Sequitur for the 25th of July, 2018. In fairness to the squirrels, they have to eat something as long as the raccoons are getting into the squirrel feeders.

Mort Walker and Greg Walker’s Beetle Bailey for the 26th is here by special request. I wasn’t sure it was on-topic enough for my usual rigorous standards. But there is some social-aspects-of-mathematics to it. The assumption that ‘five’ is naturally better than ‘four’ for example. There is the connotation that some numbers are better than others. Yes, there are famously lucky numbers like 7 or unlucky ones like 13 (in contemporary Anglo-American culture, anyway; others have different lucks). But there’s also the sense that a larger number is of course better than a smaller one.

General Halftrack, hitting a golf ball onto the green: 'FIVE!' Lieutenant Fuzz: 'You're supposed to yell 'fore'!' Halftrack: 'A better shot deserves a better number!'
Mort Walker and Greg Walker’s Beetle Bailey for the 26th of July, 2018. Mort Walker’s name is still on the credits, and in the signature. I don’t know whether they’re still working through comics which he had a hand in writing or drawing before his death in January. It would seem amazing to be seven months ahead of deadline — I’ve never got more than two weeks, myself, and even that by cheating — but he did have a pretty good handle on the kinds of jokes he should be telling.

Except when it’s not. A first-rate performance is understood to be better than a third-rate one. A star of the first magnitude is more prominent than one of the fourth. This whether we mean celebrities or heavenly bodies. We have mixed systems. One at least respects the heritage of ancient Greek astronomers, who rated the brightest of stars as first magnitude and the next bunch as second and so on. In this context, if we take brightness to be a good thing, we understand lower numbers to be better. Another system regards the larger numbers as being more of what we’re assumed to want, and therefore be better.

Nasty confusions will happen when the schemes of thought collide. Is a class three hurricane more or less of a threat than a class four? Ought we be more worried if the United States Department of Defense declares it’s moved from Defense Condition Four to Defcon 3? In October 1966, the Fermi 1 fission reactor near Detroit suffered a “Class 1 emergency”. Does that mean the city was at the highest or the lowest health risk from the partial meltdown? (In this case, the term reflects that the lowest actionable level of radiation had been detected. I am not competent to speak on how great the risk to the population was.) It would have been nice to have unambiguous language on this point.

On to the joke’s logic, though. Wouldn’t General Halftrack be accustomed to thinking of lower numbers as better? Getting to the green in three strokes is obviously preferable to four, and getting there in five would be a disaster.

Bucky Katt: 'See, at first Whitey was giving me a million to one odds that the Patriots would win the World Series, but I was able to talk him down to ten to one odds. So, since it was much more likely to happen at ten to one, I upped my bet from one dollar to a thousand dollars.' Rob: 'Do you know *anything* about gambling?' Bucky: 'Duhhh, excuse me, Senor Skeptico, I think it was me who made the bet --- not you!'
Darby Conley’s Get Fuzzy rerun for the 28th of July, 2018. It originally ran the 13th of May, 2006. It may have been repeated since then, also.

Darby Conley’s Get Fuzzy for the 28th is an applied-probability strip. The calculating of odds is rich with mathematical and psychological influences. With some events it’s possible to define quite precisely what the odds should be. If there are a thousand numbers each equally likely to be the daily lottery winner, and only one that will be, we can go from that to saying what the chance of 254 being the winner is. But many events are impossible to forecast that way. We have to use other approaches. If something has happened several times recently, we can say it’s probably rather likely. Fluke events happen, yes. But we can do fairly good work by supposing that stuff is mostly normal, and that the next baseball season will look something like the last one.

As to how to bet wisely — well, many have tried to work that out. One of the big questions in financial mathematics is how to hedge bets. I write financial mathematics, but it applies to sports betting and really anything else with many outcomes. One of the common goals is to simply avoid catastrophe, to make sure that whatever happens you aren’t too badly off. This involves betting various amounts on many outcomes. More on the outcomes you think likely, but also some on the improbable outcomes. Long shots do sometimes happen, and pay out well; it can be worth putting a little money on that just in case. Judging the likelihood of those events, especially in complicated problems that can’t be reduced to logic, is one of the hard parts. If it could be made into a system we wouldn’t need people to do it. But it does seem that knowing what you bet on helps make better bets.
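For a toy version of that hedging idea: suppose there are three mutually exclusive outcomes paying out at decimal odds of 3, 4, and 6 (numbers I’ve made up for illustration). A sketch in Python of splitting a bankroll so the return is the same whichever outcome happens:

```python
def hedge_stakes(odds, bankroll=100.0):
    """Split a bankroll across mutually exclusive outcomes so the
    payout is the same no matter which outcome happens.

    odds: decimal payout odds per outcome (stake * odds returned on a win).
    """
    inverse_sum = sum(1.0 / o for o in odds)
    # Stake each outcome in proportion to the inverse of its odds.
    stakes = [bankroll * (1.0 / o) / inverse_sum for o in odds]
    payout = stakes[0] * odds[0]  # identical for every outcome
    return stakes, payout

# Hypothetical three-horse race with generous odds:
stakes, payout = hedge_stakes([3.0, 4.0, 6.0])
```

When the sum of the inverse odds comes out below 1, as it does here (3/4), the guaranteed payout beats the bankroll. Real bookmakers set odds so that sum comes out above 1, which is how they stay in business.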


If you’ve liked this essay you might like other Reading the Comics posts. Many of them are gathered at this link. When I have other essays that discuss The Bent Pinky at this link; it’s a new tag. Essays tagged Daddy’s Home are at this link. Essays that mention the comic strip Non Sequitur should be at this link, when they come up too. (This is another new tag, to my surprise.) Other appearances of Beetle Bailey should be on this page. And other Get Fuzzy-inspired discussions are at this link.

Reading the Comics, May 12, 2018: New Nancy Artist Edition


And now, closer to deadline than I like, let me wrap up last week’s mathematically-themed comic strips. I had a lot happening, that’s all I can say.

Glenn McCoy and Gary McCoy’s The Flying McCoys for the 10th is another tragic moment in the mathematics department. I’m amused that white lab coats are taken to read as “mathematician”. There are mathematicians who work in laboratories, naturally. Many interesting problems are about real-world things that can be modelled and tested and played with. It’s hardly the mathematics-department uniform, but then, I’m not sure mathematicians have a uniform. We just look like academics is all.

A wall of the Mathematics Department has fallen in. A guy in lab coat says, 'Quick --- someone call the square root of 829,921!!'
Glenn McCoy and Gary McCoy’s The Flying McCoys for the 10th of May, 2018. I suppose the piece of chalk serves as a mathematician’s professional badge, but it would be odd for a person walking into the room to happen to have a piece. I mean, there’s good reason he might, since there’s never enough chalk in the right places and it has to be stolen from somewhere. But that’s a bit too much backstory for a panel like this.

It also shows off that motif of mathematicians as doing anything with numbers in a more complicated way than necessary. I can’t imagine anyone in an emergency trying to reach 9-1-1 by solving any kind of puzzle. But comic strip characters are expected to do things at least a bit ridiculously, I suppose.

Mark Litzler’s Joe Vanilla for the 11th is about random numbers. We need random numbers; they do so much good. Getting them is hard. People are pretty lousy at picking random numbers in their head. We can say what “lousy” random numbers look like. They look wrong. There’s digits that don’t get used as much as the others do. There’s strings of digits that don’t get used as much as other strings of the same length do. There are patterns, and they can be subtle ones, that just don’t look right.

Person beside a sign, with the numbers 629510, 787921 and 864370 crossed out and 221473 at the bottom. Caption: 'Performance Art: Random Number Generator'.
Mark Litzler’s Joe Vanilla for the 11th of May, 2018. Bonus: depending on how you want to group a string of six numbers there’s as many as eleven random numbers to select there.

And yet we have a terrible time trying to say what good random numbers look like. Suppose we want to have a string of random zeroes and ones: is 101010 better or worse than 110101? Or 000111? Well, for a string of digits that short there’s no telling. It’s in big batches that we should expect to see no big patterns. … Except that occasionally randomness should produce patterns. How often should we expect patterns, and of what size? This seems to depend on what patterns we’ve found interesting enough to look for. But how can the cultural quirks that make something seem interesting be a substantial mathematical property?
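One thing we can do is count how the digits get used. Here’s a minimal sketch; the sample string and the function name are my own inventions, just to show the kind of tally a randomness test starts from:

```python
from collections import Counter

def digit_frequencies(digits: str):
    """Tally the share of each digit; 'lousy' random strings tend to
    favor some digits noticeably over others."""
    counts = Counter(digits)
    n = len(digits)
    return {d: counts.get(d, 0) / n for d in "0123456789"}

# A pattern a person might type when asked for random digits:
human = "123123123123123123"
freqs = digit_frequencies(human)
```

A truly random string of that length could produce exactly this tally too, which is the nub of the problem: no finite string is disqualified, only more or less suspicious.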

Nancy: 'Don't you hate when you sit down at a computer and can't remember what you were going to do. For the life of me I can't recall what I wanted to do when I sat down.' Teacher: 'Nice try, Nancy, but you still have to take the countywide math test.' (Two other rows of students are on similar computers.)
Olivia Jaimes’s Nancy for the 11th of May, 2018. … Or has the Internet moved on from talking about Nancy already? Bear in mind, I still post to Usenet, so that’s how out of touch I am.

Olivia Jaimes’s Nancy for the 11th uses mathematics-assessment tests for its joke. It’s of marginal relevance, yes, but it does give me a decent pretext to include the new artist’s work here. I don’t know how long the Internet is going to be interested in Nancy. I have to get what attention I can while it lasts.

Scott Hilburn’s The Argyle Sweater for the 12th is the anthropomorphic-geometry joke for the week. Unless there was one I did Sunday that I’ve already forgotten. Oh, no, that was anthropomorphic-numerals. It’s easy to see why a circle might be labelled irrational: either its radius or its area has to be. Both can be. The triangle, though …

Marriage Counsellor: 'She says you're very close-minded.' Triangle: 'It's called 'rational'. But she's all 'pi this' and 'pi that'. Circle: 'It's a constant struggle, doctor.'
Scott Hilburn’s The Argyle Sweater for the 12th of May, 2018. Will admit that I hadn’t heard of Heronian Triangles before I started poking around this, and I started to speculate whether it was even possible for all three legs of a triangle to be rational and the area also be rational. So you can imagine what I felt like when I did some searching and found the 5-12-13 right triangle, since that’s just the other Pythagorean Triplet you learn after the 3-4-5 one. Oh, I guess also the 3-4-5 one.

Well, that’s got me thinking. Obviously all the sides of a triangle can be rational, and so its perimeter can be too. But … the area of an equilateral triangle is \frac{\sqrt{3}}{4} times the square of the length of any side. It can have a rational side and an irrational area, or vice-versa. Just as the circle does. If it’s not an equilateral triangle?

Can you have a triangle that has three rational sides and a rational area? And yes, you can. Take the right triangle that has sides of length 5, 12, and 13. Or any scaling of that, larger or smaller. There is indeed a whole family of triangles, the Heronian Triangles. All their sides are integers, and their areas are integers too. (Rational sides and areas are just as good as integer sides and areas: scale the triangle up by a common denominator and the rationals all become integers.) So there’s that at least. The name derives from Heron/Hero, the ancient Greek mathematician whom we credit with that snappy formula that tells us, based on the lengths of the three sides, what the area of the triangle is. Not the Pythagorean formula, although you can get the Pythagorean formula from it.
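Heron’s formula makes these triangles easy to hunt for by computer. The identity 16 A^2 = (a+b+c)(-a+b+c)(a-b+c)(a+b-c) lets everything stay in integer arithmetic; here’s a small search sketch, with the function name my own:

```python
from math import isqrt

def heronian_triangles(max_side):
    """Integer-sided triangles with integer area, via Heron's formula.
    Uses 16*area^2 = (a+b+c)(-a+b+c)(a-b+c)(a+b-c) to avoid floats."""
    found = []
    for a in range(1, max_side + 1):
        for b in range(a, max_side + 1):
            for c in range(b, max_side + 1):
                if a + b <= c:
                    continue  # fails the triangle inequality
                t = (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)
                root = isqrt(t)
                # Area is an integer exactly when t is a perfect square
                # whose root is divisible by 4.
                if root * root == t and root % 4 == 0:
                    found.append((a, b, c, root // 4))
    return found
```

Running it with `max_side=13` turns up the 3-4-5 triangle (area 6) and the 5-12-13 triangle (area 30), along with non-right examples like 5-5-6.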

Still, I’m going to bet that there’s some key measure of even a Heronian Triangle that ends up being irrational. Interior angles, most likely. And there are many ways to measure triangles; they can’t all end up being rational at once. There are over two thousand ways to define a “center” of a triangle, for example. The odds of hitting a rational number on all of them at once? (Granted, most of these triangle centers are unknown except to the center’s discoverer/definer and that discoverer’s proud but baffled parents.)

Paul: 'Claire, this online business program looks good.' Claire: 'Yeah, I saw that one. But I think it's too intense. I mean, look at this. They make you take two courses in statistics and probability. What are the odds I'd ever need that? ... Oh, wait ... '
Carla Ventresca and Henry Beckett’s On A Claire Day rerun for the 12th of May, 2018. If I make it out right this originally ran the 14th of May, 2010. I forget whether I’ve featured this here already. Likely will drop it from repeats given how hard it is to write much about it. Shame, too, as I’ve just now added that tag to the roster here.

Carla Ventresca and Henry Beckett’s On A Claire Day for the 12th mentions taking classes in probability and statistics. They’re the classes nobody doubts are useful in the real world. It’s easy to figure probability is more likely to be needed than functional analysis on some ordinary day outside the university. I can’t even compose that last sentence without the language of probability.

I’d kind of agree with calling the courses intense, though. Well, “intense” might not be the right word. But challenging. Not that you’re asked to prove anything deep. The opposite, really. An introductory course in either provides a lot of tools. Many of them require no harder arithmetic work than multiplication, division, and the occasional square root. But you do need to learn which tool to use in which scenario. And there’s often not the sorts of proofs that make it easy to understand which tool does what. Doing the proofs would require too much fussing around. Many of them demand settling finicky little technical points that take you far from the original questions. But that leaves the course as this archipelago of small subjects, each easy in themselves. But the connections between them are obscured. Is that better or worse? It must depend on the person hoping to learn.

Someone Else’s Homework: A Probability Question


My friend’s finished the last of the exams and been happy with the results. And I’m stuck thinking harder about a little thing that came across my Twitter feed last night. So let me share a different problem that we had discussed over the term.

It’s a probability question. Probability’s a great subject. So much of what people actually do involves estimating probabilities and making judgements based on them. In real life, yes, but also for fun. Like a lot of probability questions, this one is abstracted into a puzzle that’s nothing like anything anybody does for fun. But that makes it practical, anyway.

So. You have a bowl with fifteen balls inside. Five of the balls are labelled ‘1’. Five of the balls are labelled ‘2’. Five of the balls are labelled ‘3’. The balls are well-mixed, which is how mathematicians say that all of the balls are equally likely to be drawn out. Three balls are picked out, without being put back in. What’s the probability that the three balls have values which, together, add up to 6?

My friend’s instincts about this were right, knowing what things to calculate. There was part of actually doing one of these calculations that went wrong. And was complicated by my making a dumb mistake in my arithmetic. Fortunately my friend wasn’t shaken by my authority, and we got to what we’re pretty sure is the right answer.
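If you want to check an answer of your own, the bowl is small enough to enumerate every possible draw by brute force. A sketch, which counts index-combinations so that the five identical balls of each label are treated as distinct draws:

```python
from itertools import combinations
from fractions import Fraction

# Five balls each labelled 1, 2, and 3; all fifteen equally likely.
balls = [1] * 5 + [2] * 5 + [3] * 5

# Every way to pick three balls, without replacement.
draws = list(combinations(balls, 3))
favorable = sum(1 for draw in draws if sum(draw) == 6)
probability = Fraction(favorable, len(draws))
```

The favorable draws come in two kinds, one ball of each label or three balls labelled ‘2’, which is the decomposition my friend’s instincts correctly pointed at.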

Did The Greatest Generation Hosts Get As Drunk As I Expected?


I finally finished listening to Benjamin Ahr Harrison and Adam Pranica’s Greatest Generation podcast reviews of the first season of Star Trek: Deep Space Nine. (We’ve had fewer long car trips for this.) So I can return to my projection of how their drinking game would turn out.

Their plan was to make the discussion of some of Deep Space Nine‘s episodes more exciting by recording their reviews while drinking a lot. For the fifteen episodes they had in the season, there would be a one-in-fifteen chance of recording any particular episode drunk. So how many drunk episodes would you expect to get, on this basis?

It’s a well-formed expectation value problem. There could be as few as zero or as many as fifteen, but some cases are more likely than others. Each episode could be recorded drunk or not-drunk. There’s an equal chance of each episode being recorded drunk. Whether one episode is drunk or not doesn’t depend on whether the one before was, and doesn’t affect whether the next one is. (I’ll come back to this.)

The most likely case was for there to be one drunk episode. The probability of exactly one drunk episode was a little over 38%. No drunk episodes was also a likely outcome. There was a better than 35% chance it would never have turned up. The chance of exactly two drunk episodes was about 19%. Three drunk episodes had a slightly less than 6% chance of happening. Four drunk episodes had a slightly more than 1% chance of happening. And after that you get into the deeply unlikely cases.
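These are binomial probabilities, and they’re quick to reproduce; a sketch:

```python
from math import comb

n, p = 15, 1 / 15  # fifteen episodes, one-in-fifteen chance each

def prob_drunk(k):
    """Binomial probability of exactly k drunk episodes out of n."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

table = {k: prob_drunk(k) for k in range(5)}
```

The expected number of drunk episodes, fifteen times one-fifteenth, is exactly 1, which fits with one drunk episode being the single most likely outcome.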

As the Deep Space Nine season turned out, this one-in-fifteen chance came up twice. It turned out they sort of did three drunk episodes, though. One of the drunk episodes turned out to be the first of two they planned to record that day. I’m not sure why they didn’t just swap what episode they recorded first, but I trust they had logistical reasons. As often happens with probability questions, the independence of events — whether a success for one affects the outcome of another — changes calculations.

There’s not going to be a second-season update to this. They’ve chosen to make a more elaborate recording game of things. They’ve set up a modified Snakes and Ladders type board with a handful of spots marked for stunts. Some sound like fun, such as recording without taking any notes about the episode. Some are, yes, drinking episodes. But this is all a very different and more complicated thing to project. If I were going to tackle that it’d probably be by running a bunch of simulations and taking averages from that.

Still from Deep Space Nine, season 6, episode 23, 'Profit and Lace', the sex-changed Quark feeling her breasts and looking horrified.
Real actual episode that was really actually made and really actually aired for real. I’m going to go ahead and guess that it hasn’t aged well.

Also I trust they’ve been warned about the episode where Quark has a sex change so he can meet a top Ferengi soda magnate after accidentally giving his mother a heart attack because gads but that was a thing that happened somehow.

Reading the Comics, January 23, 2018: Adult Content Edition


I was all set to say how complaining about GoComics.com’s pages not loading had gotten them fixed. But they only worked for Monday alone; today they’re broken again. Right. I haven’t tried sending an error report again; we’ll see if that works. Meanwhile, I’m still not through last week’s comic strips, and I had nearly enough comics from one day to justify an installment for that day alone. I should finish off the rest of the week next essay, probably in time for next week.

Mark Leiknes’s Cow and Boy rerun for the 23rd circles around some of Zeno’s Paradoxes. At the heart of some of them is the question of whether a thing can be divided infinitely many times, or whether there must be some smallest amount of a thing. Zeno wonders about space and time, but you can do as well with substance, with matter. Mathematics majors like to say the problem is easy; Zeno just didn’t realize that a sum of infinitely many things could be a finite and nonzero number. This misses the good question of how the sum of infinitely many things, none of which are zero, can be anything but infinitely large. Or, put another way, what’s different in adding \frac11 + \frac12 + \frac13 + \frac14 + \cdots and adding \frac11 + \frac14 + \frac19 + \frac{1}{16} + \cdots that the one is infinitely large and the other not?
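You can watch the difference between those two series numerically. A sketch comparing partial sums: the harmonic sums keep creeping upward (they grow like the logarithm of the number of terms), while the sums of reciprocal squares settle down near \pi^2/6 \approx 1.6449 , the famous Basel-problem value:

```python
def partial_sums(term, n):
    """Sum term(1) + term(2) + ... + term(n)."""
    return sum(term(k) for k in range(1, n + 1))

checkpoints = (10, 1000, 100000)
harmonic = [partial_sums(lambda k: 1 / k, n) for n in checkpoints]
squares = [partial_sums(lambda k: 1 / k ** 2, n) for n in checkpoints]
```

Of course no finite computation proves divergence; the harmonic sums grow so slowly that a computer alone would never convince you they pass, say, 100. That takes the classic grouping argument.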

Or how about this. Pick your favorite string of digits. 23. 314. 271828. Whatever. Add together the series \frac11 + \frac12 + \frac13 + \frac14 + \cdots except that you omit any terms that have your favorite string there. So, if you picked 23, don’t add \frac{1}{23} , or \frac{1}{123} , or \frac{1}{802301} or such. That depleted series does converge. The heck is happening there? (Here’s why it’s true for a single digit being thrown out. Showing it’s true for longer strings of digits takes more work but not really different work.)

J C Duffy’s Lug Nuts for the 23rd is, I think, the first time I have to give a content warning for one of these. It’s a porn-movie advertisement spoof. But it mentions Einstein and Pi and has the tagline “she didn’t go for eggheads … until he showed her a new equation!”. So, you know, it’s using mathematics skill as a signifier of intelligence and riffing on the idea that nerds like sex too.

John Graziano’s Ripley’s Believe It or Not for the 23rd has a bit of trivia that made me initially think “not”. It notes Vince Parker, Senior and Junior, of Alabama were both born on Leap Day, the 29th of February. I’ll accept this without further proof because of the very slight harm that would befall me were I to accept this wrongly. But it also asserted this was a 1-in-2.1-million chance. That sounded wrong. Whether it is depends on what you think the chance is of.

Because what’s the remarkable thing here? That a father and son have the same birthday? Surely the chance of that is 1 in 365. The father could be born any day of the year; the son, also any day. Trusting there’s no influence of the father’s birthday on the son’s, then, 1 in 365 it is. Or, well, 1 in about 365.25, since there are leap days. There’s approximately one leap day every four years, so, surely that, right?

And not quite. In four years there’ll be 1,461 days. Four of them will be the 29th of January and four the 29th of September and four the 29th of August and so on. So if the father was born any day but leap day (a “non-bissextile day”, if you want to use a word that starts a good fight in a Scrabble match), the chance the son’s birth is the same is 4 chances in 1,461. 1 in 365.25. If the father was born on Leap Day, then the chance the son was born the same day is only 1 chance in 1,461. Still way short of 1-in-2.1-million. So, Graziano’s Ripley’s is wrong if that’s the chance we’re looking at.

Ah, but what if we’re looking at a different chance? What if we’re looking for the chance that the father is born the 29th of February and the son is also born the 29th of February? There’s a 1-in-1,461 chance the father’s born on Leap Day. And a 1-in-1,461 chance the son’s born on Leap Day. And if those events are independent, the father’s birth date not influencing the son’s, then the chance of both those together is indeed 1 in 2,134,521. So Graziano’s Ripley’s is right if that’s the chance we’re looking at.
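Both readings are a few lines of exact arithmetic; a sketch:

```python
from fractions import Fraction

days_in_cycle = 4 * 365 + 1          # 1,461 days in four years
p_leap = Fraction(1, days_in_cycle)  # chance of a leap-day birthday

# Son shares the birthday of a father born on an ordinary day:
# 4 matching days out of every 1,461.
p_same_ordinary = Fraction(4, days_in_cycle)

# Father AND son both born on the 29th of February, independently:
p_both_leap = p_leap * p_leap
```

The product of two 1-in-1,461 chances comes out to 1 in 2,134,521, which is the "1-in-2.1-million" figure the strip quotes.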

Which is a good reminder: if you want to work out the probability of some event, work out precisely what the event is. Ordinary language is ambiguous. This is usually a good thing. But it’s fatal to discussing probability questions sensibly.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 23rd presents his mathematician discovering a new set of numbers. This will happen. Mathematics has had great success, historically, finding new sets of things that look only a bit like numbers were understood. And showing that if they follow rules that are, as much as possible, like the old numbers, we get useful stuff out of them. The mathematician claims to be a formalist, in the punch line. This is a philosophy that considers mathematical results to be the things you get by starting with some symbols and some rules for manipulating them. What this stuff means, and whether it reflects anything of interest in the real world, isn’t of interest. We can know the results are good because they follow the rules.

This sort of approach can be fruitful. It can force you to accept results that are true but intuition-defying. And it can give results impressive confidence. You can even, at least in principle, automate the creating and the checking of logical proofs. The disadvantages are that it takes forever to get anything done. And it’s hard to shake the idea that we ought to have some idea what any of this stuff means.

Reading the Comics, December 9, 2017: Zach Weinersmith Wants My Attention Edition


If anything dominated the week in mathematically-themed comic strips it was Zach Weinersmith’s Saturday Morning Breakfast Cereal. I don’t know how GoComics selects the strips to (re?)print on their site. But there were at least four that seemed on-point enough for me to mention. So, okay. He’s got my attention. What’s he do with it?

On the 3rd of December is a strip I can say is about conditional probability. The mathematician might be right that the chance someone will be murdered by a serial killer is less than one in ten million. But that is the chance of someone drawn from the whole universe of human experiences. There are people who will never be near a serial killer, for example, or who never come to his attention or who evade his interest. But if we know someone is near a serial killer, or does attract his interest? The information changes the probability. And this is where you get all those counter-intuitive and somewhat annoying logic puzzles about, like, the chance someone’s other child is a girl if the one who just walked in was, and how that changes if you’re told whether the girl who just entered was the elder.
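That two-child puzzle yields to a quick enumeration, which is one way to see why the answers come out differently. A sketch, assuming boys and girls equally likely and births independent:

```python
from fractions import Fraction
from itertools import product

# The four equally likely families of two children, eldest listed first.
families = list(product("BG", repeat=2))  # BB, BG, GB, GG

def p_both_girls(cases):
    """Chance both children are girls, among the surviving cases."""
    return Fraction(cases.count(("G", "G")), len(cases))

# We know only that at least one child is a girl:
at_least_one = [f for f in families if "G" in f]
p_given_some_girl = p_both_girls(at_least_one)

# We know the elder child, specifically, is a girl:
elder_girl = [f for f in families if f[0] == "G"]
p_given_elder_girl = p_both_girls(elder_girl)
```

Knowing "at least one is a girl" leaves three cases and a 1-in-3 chance the other is too; knowing it was the elder leaves two cases and a 1-in-2 chance. The extra information really does change the probability.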

On the 5th is a strip about sequences. And built on the famous example of exponential growth from doubling a reward enough times. Well, you know these things never work out for the wise guy. The “Fibonacci Spiral” spoken of in the next-to-last panel is a spiral, like you figure. The dimensions of the spiral are based on those of golden-ratio rectangles. It looks a great deal like a logarithmic spiral to the untrained eye. Also to the trained eye, but you knew that. I think it’s supposed to be humiliating that someone would call such a spiral “random”. But I admit I don’t get that part.
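The golden-ratio connection behind that spiral is easy to see in the numbers themselves: ratios of consecutive Fibonacci numbers close in on the golden ratio. A sketch:

```python
def fibonacci(n):
    """First n Fibonacci numbers."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

fib = fibonacci(20)
ratios = [fib[i + 1] / fib[i] for i in range(len(fib) - 1)]
golden = (1 + 5 ** 0.5) / 2  # about 1.61803
```

By the twentieth term the ratio already matches the golden ratio to six decimal places, which is why rectangles built on Fibonacci numbers look so much like golden-ratio rectangles, and why the resulting spiral so resembles a true logarithmic one.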

The strip for the 6th has a more implicit mathematical content. It hypothesizes that mathematicians, given the chance, will be more interested in doing recreational puzzles than even in eating and drinking. It’s amusing, but I’ll admit I’ve found very few puzzles all that compelling. This isn’t to say there aren’t problems I keep coming back to because I’m curious about them, just that they don’t overwhelm my common sense. Don’t ask me when I last received actual pay for doing something mathematical.

And then on the 9th is one more strip, about logicians. And logic puzzles, such as you might get in a Martin Gardner collection. The problem is written out on the chalkboard with some shorthand logical symbols. And they’re symbols both philosophers and mathematicians use. The letter that looks like a V with a crossbar means “for all”. (The mnemonic I got was “it’s an A-for-all, upside-down”. This paired with the other common symbol, which looks like a backwards E and means there exists: “E-for-exists, backwards”. Later I noticed upside-down A and backwards E could both be just 180-degree-rotated A and E. But try saying “180-degree-rotated” in a quick way.) The curvy E between the letters ‘x’ and ‘S’ means “belongs to the set”. So that first line says “for all x that belong to the set S this follows”. Writing out “isLiar(x)” instead of, say, “L(x)”, is more a philosopher’s thing than a mathematician’s. But it wouldn’t throw anyone off. And the T just means emphasizing that this is true.

And that is as much about Saturday Morning Breakfast Cereal as I have to say this week.

Sam Hurt’s Eyebeam for the 4th tells a cute story about twins trying to explain infinity to one another. I’m not sure I can agree with the older twin’s assertion that infinity means there’s no biggest number. But that’s just because I worry there’s something imprecise going on there. I’m looking forward to the kids learning about negative numbers, though, and getting to wonder what’s the biggest negative real number.

Percy Crosby’s Skippy for the 4th starts with Skippy explaining a story problem. One about buying potatoes, in this case. I’m tickled by how cranky Skippy is about boring old story problems. Motivation is always a challenge. The strip originally ran the 7th of October, 1930.

Dave Whamond’s Reality Check for the 6th uses a panel of (gibberish) mathematics as an example of an algorithm. Algorithms are mathematical, in origin at least. The word comes to us from the 9th century Persian mathematician Al-Khwarizmi’s text about how to calculate. The modern sense of the word comes from trying to describe the methods by which a problem can be solved. So, legitimate use of mathematics to show off the idea. The symbols still don’t mean anything.

Joe: 'Grandpa, what's 5x7?' Grandpa: 'Why do you wanna know?' Joe: 'I'm testing your memory.' Grandpa: 'Oh! The answer's 35.' Joe: 'Thanks! Now what is 8x8?' Grandpa: 'Joe, is that last night's homework?' Joe: 'We're almost done! Only 19 more!'
Rick Detorie’s One Big Happy for the 7th of December, 2017. And some attention, please, for Ruthie there. She’s completely irrelevant to the action, but it makes sense for her to be there if Grandpa is walking them to school, and she adds action — and acting — to the scenes.

Rick Detorie’s One Big Happy for the 7th has Joe trying to get his mathematics homework done at the last minute. … And it’s caused me to reflect on how twenty multiplication problems seems like a reasonable number to do. But there are only about fifty distinct multiplications to even do, at least if you’re doing the times tables up to the 10s. No wonder students get so bored seeing the same problems over and over. It’s a little less dire if you’re learning times tables up to the 12s, but not that much better. Yow.

Olivia Walch’s Imogen Quest for the 8th looks pretty legitimate to me. It’s going to read as gibberish to people who haven’t done parametric functions, though. Start with the plane and the familiar old idea of ‘x’ and ‘y’ representing how far one is along a horizontal and a vertical direction. Here, we’re given a dummy variable ‘t’, and functions to describe a value for ‘x’ and ‘y’ matching each value of ‘t’. The plot then shows all the points that ever match a pair of ‘x’ and ‘y’ coordinates for some ‘t’. The top drawing is a shape known as the cardioid, because it kind of looks like a Valentine-heart. The lower figure is a much more complicated parametric equation. It looks more anatomically accurate.
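If you want to see how a parametric plot produces a cardioid, here is a minimal sketch. The particular parametrization is my choice of a standard one, not necessarily the one Walch drew from:

```python
import math

def cardioid(t, a=1.0):
    """One standard parametrization of a cardioid:
    x(t) = a * (1 - cos t) * cos t
    y(t) = a * (1 - cos t) * sin t
    with t running from 0 to 2*pi."""
    r = a * (1 - math.cos(t))  # distance from the origin at parameter t
    return (r * math.cos(t), r * math.sin(t))

# Sample the curve: every value of t gives one (x, y) point on the plot.
points = [cardioid(2 * math.pi * k / 360) for k in range(361)]
```

The cusp of the heart sits at the origin (t = 0) and the round bottom at (-2a, 0) (t = pi); feeding the points to any plotting routine draws the Valentine shape.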

Still no sign of Mark Anderson’s Andertoons and the drought is worrying me, yes.

But they’re still going on the cartoonist’s web site, so there’s that.

How Drunk Can We Expect The Greatest Generation Podcast Hosts To Get?


Among my entertainments is listening to the Greatest Generation podcast, hosted by Benjamin Ahr Harrison and Adam Pranica. They recently finished reviewing all the Star Trek: The Next Generation episodes, and have started Deep Space Nine. To add some fun and risk to episode podcasts the hosts proposed to record some episodes while drinking heavily. I am not a fan of recreational over-drinking, but I understand their feelings. There’s an episode where Quark has a sex-change operation because he gave his mother a heart attack right before a politically charged meeting with a leading Ferengi soda executive. Nobody should face that mess sober.

At the end of the episode reviewing “Babel”, Harrison proposed: there’s 15 episodes left in the season. Use a random number generator to pick a number from 1 to 15; if it’s one, they do the next episode (“Captive Pursuit”) drunk. And it was; what are the odds? One in fifteen. I just said.

Still from Next Generation season 1, episode 3, 'The Naked Now', causing all us Trekkies at home to wonder if maybe this new show wasn't going to be as good as we so desperately needed it to be?
Space-drunk engineer Jim Shimoda throwing control chips around in the moment that made him a Greatest Generation running joke. In the podcast’s context this makes sense. In the original context this made us all in 1987 grit our teeth and say, “No, no, this really is as good a show as we need this to be shut up shut up shut up”.

The question: how many episodes would they be doing drunk? As they discussed in the next episode, this would imply they’d always get smashed for the last episode of the season. This is a straightforward expectation-value problem. The expectation value of a thing is the sum of all the possible outcomes times the chance of each outcome. Here, the possible outcome is adding 1 to the number of drunk episodes. The chance of any particular episode being a drunk episode is 1 divided by ‘N’, if ‘N’ is the number of episodes remaining. So the next-to-the-last episode has 1 chance in 2 of being drunk. The second-from-the-last has 1 chance in 3 of being drunk. And so on.

This expectation value isn’t hard to calculate. If we start counting from the last episode of the season, then it’s easy. Add up 1 + \frac12 + \frac13 + \frac14 + \frac15 + \frac16 + \cdots , ending when we get up to one divided by the number of episodes in the season. 25 or 26, for most seasons of Deep Space Nine. 15, from when they counted here. This is the start of the harmonic series.

The harmonic series gets taught in sequences and series in calculus because it does some neat stuff if you let it go on forever. For example, every term in this sequence gets smaller and smaller. (The “sequence” is the terms that go into the sum: 1, \frac12, \frac13, \frac14, \frac{1}{1054}, \frac{1}{2038} , and so on. The “series” is the sum of a sequence, a single number. I agree it seems weird to call a “series” that sum, but it’s the word we’re stuck with. If it helps, consider: when we talk about “a TV series” we usually mean the whole body of work, not individual episodes.) You can pick any number, however tiny you like. I can then respond with the last term in the sequence bigger than your number. Infinitely many terms in the sequence will be smaller than your pick. And yet: you can pick any number you like, however big. And I can take a finite number of terms in this sequence to make a sum bigger than whatever number you liked. The sum will eventually be bigger than 10, bigger than 100, bigger than a googolplex. These two facts are easy to prove, but they seem like they ought to be contradictory. You can see why infinite series are fun and produce much screaming on the part of students.

No Star Trek show has a season with infinitely many episodes, though, however long the second season of Enterprise seemed to drag out. So we don’t have to worry about infinitely many drunk episodes.

Since there were 15 episodes up for drunkenness in the first season of Deep Space Nine the calculation’s easy. I still did it on the computer. For the first season we could expect 1 + \frac12 + \frac13 + \cdots + \frac{1}{15} drunk episodes. This is a number a little bigger than 3.318. So, most likely three drunk episodes, with four quite plausible. For the 25-episode seasons (seasons four and seven, if I’m reading this right), we could expect 1 + \frac12 + \frac13 + \cdots + \frac{1}{25} or just over 3.816 drunk episodes. Likely four, maybe three. For the 26-episode seasons (seasons two, five, and six), we could expect 1 + \frac12 + \frac13 + \cdots + \frac{1}{26} drunk episodes. That’s just over 3.854.
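Those partial sums are quick to re-check. Here’s a sketch, using exact fractions and only converting to a decimal at the end:

```python
from fractions import Fraction

def expected_drunk_episodes(n_episodes):
    """Expected number of drunk episodes under the countdown rule:
    with n episodes left the chance is 1/n, so the last episode is
    certain, the next-to-last is 1-in-2, and so on. That's the
    harmonic sum H(n) = 1 + 1/2 + ... + 1/n."""
    return sum(Fraction(1, k) for k in range(1, n_episodes + 1))

for season_length in (15, 25, 26):
    print(season_length, float(expected_drunk_episodes(season_length)))
```

That reproduces the 3.318, 3.816, and 3.854 figures.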

The number of drunk episodes to expect keeps growing. The harmonic series grows without bounds. But it keeps growing slower, compared to the number of terms you add together. You need a 31-episode season to be able to expect at least four drunk episodes. To expect five drunk episodes you’d need an 83-episode season. If the guys at Worst Episode Ever, reviewing The Simpsons, did all 625-so-far episodes by this rule we could only expect seven drunk episodes.
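A small search confirms those season-length thresholds; it just accumulates harmonic terms until the target is reached:

```python
import itertools

def episodes_needed(target):
    """Smallest season length whose expected drunk-episode count,
    the harmonic sum 1 + 1/2 + ... + 1/n, reaches the target."""
    total = 0.0
    for n in itertools.count(1):
        total += 1.0 / n
        if total >= target:
            return n

print(episodes_needed(4))  # 31 episodes to expect four drunk ones
print(episodes_needed(5))  # 83 episodes to expect five
```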

Still, three, maybe four, drunk episodes of the 15 remaining first-season episodes is a fair number. They likely won’t be evenly spaced, though. The chance of a drunk episode rises the closer they get to the end of the season. The expected length between drunk episodes is interesting, but I don’t want to deal with that. I’ll just say that it probably isn’t the five episodes suggested by the quickest, easiest calculation, taking 15 divided by 3.

And it’s moot anyway. The hosts discussed it just before starting “Captive Pursuit”. Pranica pointed out, for example, the smashed-last-episode problem. What they decided they meant was there would be a 1-in-15 chance of recording each episode this season drunk. For the 25- or 26-episode seasons, each episode would get its 1-in-25 or 1-in-26 chance.

That changes the calculations. Not in spirit: that’s still the same. Count the number of possible outcomes and the chance of each one being a drunk episode and add that all up. But the work gets simpler. Each episode has a 1-in-15 chance of adding 1 to the total of drunk episodes. So the expected number of drunk episodes is the number of episodes (15) times the chance each is a drunk episode (1 divided by 15). We should expect 1 drunk episode. The same reasoning holds for all the other seasons; we should expect 1 drunk episode per season.

Still, since each episode gets an independent draw, there might be two drunk episodes. Could be three. There’s no reason that all 15 couldn’t be drunk. (Except that at the end of reviewing “Captive Pursuit” they drew for the next episode and it’s not to be a drunk one.) What are the chances there’s no drunk episodes? What are the chances there’s two, or three, or eight drunk episodes?

There’s a rule for this. This kind of problem is a mathematically-famous one. We get our results from the “binomial distribution”. This applies whenever there’s a bunch of attempts at something. And each attempt can either clearly succeed or clearly fail. And the chance of success (or failure) each attempt is always the same. That’s what applies here. If there’s ‘N’ episodes, and the chance is ‘p’ that any one will be drunk, then we get the chance ‘y’ of turning up exactly ‘k’ drunk episodes by the formula:

y = \frac{N!}{k! \cdot \left(N - k\right)!} p^k \left(1 - p\right)^{N - k}

That looks a bit ugly, yeah. (I don’t like using ‘y’ as the name for a probability. I ran out of good letters and didn’t want to do subscripts.) It’s just tedious to calculate is all. Factorials and everything. Better to let the computer work it out. There is a formula that’s easy enough to work with, though. That’s because the chance of a drunk episode is the same each episode. I don’t know a formula to get the chance of exactly zero or one or four drunk episodes under the first rule, the one-in-N-remaining chance. Probably the only thing to do is run a lot of simulations and trust that’s approximately right.
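In code the formula is nearly a one-liner, since Python’s `math.comb` takes care of the factorial part:

```python
from math import comb

def binomial_pmf(n_episodes, p, k):
    """Chance of exactly k drunk episodes out of n_episodes, when each
    episode independently has probability p of being a drunk one."""
    return comb(n_episodes, k) * p**k * (1 - p)**(n_episodes - k)

# A 15-episode season with a 1-in-15 chance per episode:
for k in range(6):
    print(k, round(binomial_pmf(15, 1/15, k), 3))
```

A handy sanity check: summed over every k from 0 to 15, the chances have to add up to exactly 1.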

But for this rule it’s easy enough. There’s this formula, like I said. I figured out the chance of all the possible drunk episode combinations for the seasons. I mean I had the computer work it out. All I figured out was how to make it give me the results in a format I liked. Here’s what I got.

Number of drunk episodes — Chance, in a 15-episode season
0 0.355
1 0.381
2 0.190
3 0.059
4 0.013
5 0.002
6 0.000
7 0.000
8 0.000
9 0.000
10 0.000
11 0.000
12 0.000
13 0.000
14 0.000
15 0.000

Sorry it’s so dull, but the chance of a one-in-fifteen event happening 15 times in a row? You’d expect that to be pretty small. It’s got a probability of something like 0.000 000 000 000 000 002 28 of happening. Not technically impossible, but yeah, impossible.

How about for the 25- and 26-episode seasons? Here’s the chance of all the outcomes:

Number of drunk episodes — Chance, in a 25-episode season
0 0.360
1 0.375
2 0.188
3 0.060
4 0.014
5 0.002
6 0.000
7 0.000
8 or more 0.000

And things are a tiny bit different for a 26-episode season.

Number of drunk episodes — Chance, in a 26-episode season
0 0.361
1 0.375
2 0.188
3 0.060
4 0.014
5 0.002
6 0.000
7 0.000
8 or more 0.000

Yes, there’s a greater chance of no drunk episodes. The difference is really slight. It only looks so big because of rounding. A no-drunk 25 episode season has a chance of about 0.3604, while a no-drunk 26 episodes season has a chance of about 0.3607. The difference comes from the chance of lots of drunk episodes all being even worse somehow.

And there’s some neat implications through this. There’s a slightly better than one in three chance that each of the second through seventh seasons won’t have any drunk episodes. We could expect two dry seasons, hopefully not the one with Quark’s sex-change episode. We can reasonably expect at least one season with two drunk episodes. There’s a slightly more than 40 percent chance that some season will have three drunk episodes. There’s just under a 10 percent chance some season will have four drunk episodes.

Still from Deep Space Nine, season 6, episode 23, 'Profit and Lace', the sex-changed Quark feeling her breasts and looking horrified.
Real actual episode that was really actually made and really actually aired for real. I don’t know when I last saw it. I’m going to go ahead and guess that it hasn’t aged well.

There’s no guarantees, though. Probability has a curious blend. There’s no predicting when any drunk episode will come. But we can make meaningful predictions about groups of episodes. These properties seem like they should be contradictions. And they’re not, and that’s wonderful.

Reading the Comics, November 25, 2017: Shapes and Probability Edition


This week was another average-grade week of mathematically-themed comic strips. I wonder if I should track them and see what spurious correlations between events and strips turn up. That seems like too much work and there’s better things I could do with my time, so it’s probably just a few weeks before I start doing that.

Ruben Bolling’s Super-Fun-Pax Comics for the 19th is an installment of A Voice From Another Dimension. It’s in that long line of mathematics jokes that are riffs on Flatland, and how we might try to imagine spaces other than ours. They’re taxing things. We can understand some of the rules of them perfectly well. Does that mean we can visualize them? Understand them? I’m not sure, and I don’t know a way to prove whether someone does or does not. This wasn’t one of the strips I was thinking of when I tossed “shapes” into the edition title, but you know what? It’s close enough to matching.

Olivia Walch’s Imogen Quest for the 20th — and I haven’t looked, but it feels to me like I’m always featuring Imogen Quest lately — riffs on the Monty Hall Problem. The problem is based on a game never actually played on Monty Hall’s Let’s Make A Deal, but very like ones they do. There’s many kinds of games there, but most of them amount to the contestant making a choice, and then being asked to second-guess the choice. In this case, pick a door and then second-guess whether to switch to another door. The Monty Hall Problem is a great one for Internet commenters to argue about while the rest of us do something productive. The trouble — well, one trouble — is that whether switching improves your chance to win the car depends on the rules of the game. It’s not stated, for example, whether the host must open a door showing a goat behind it. It’s not stated that the host certainly knows which doors have goats and so chooses one of those. It’s not certain the contestant even wants a car when, hey, goats. What assumptions you make about these issues affects the outcome.

If you take the assumptions that I would, given the problem — the host knows which door the car’s behind, and always offers the choice to switch, and the contestant would rather have a car, and such — then Walch’s analysis is spot on.
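Under those assumptions a quick Monte Carlo simulation backs the analysis up; the always-switch player wins about two-thirds of the time. This is my sketch, not anything from the strip:

```python
import random

def monty_hall_trial(switch, rng):
    """One game under the usual assumptions: the host knows where the
    car is, always opens a goat door the contestant didn't pick, and
    always offers the switch."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Host opens a door that is neither the contestant's pick nor the car.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(2017)
trials = 100_000
wins = sum(monty_hall_trial(True, rng) for _ in range(trials))
print(wins / trials)  # close to 2/3
```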

Jonathan Mahood’s Bleeker: The Rechargeable Dog for the 20th features a pretend virtual reality arithmetic game. The strip is of incredibly low mathematical value, but it’s one of those comics I like that I never hear anyone talking about, so, here.

Richard Thompson’s Cul de Sac rerun for the 20th talks about shapes. And the names for shapes. It does seem like mathematicians have a lot of names for slightly different quadrilaterals. In our defense, if you’re talking about these a lot, it helps to have more specific names than just “quadrilateral”. Rhombuses are those parallelograms which have all four sides the same length. A parallelogram has to have two pairs of equal-sized legs, but the two pairs’ sizes can be different. Not so a rhombus. Mathworld says a rhombus with a narrow angle that’s 45 degrees is sometimes called a lozenge, but I say they’re fibbing. They make even more preposterous claims on the “lozenge” page.

Todd Clark’s Lola for the 20th does the old “when do I need to know algebra” question and I admit getting grumpy like this when people ask. Do French teachers have to put up with this stuff?

Brian Fies’s Mom’s Cancer rerun for the 23rd is from one of the delicate moments in her story. Fies’s mother just learned the average survival rate for her cancer treatment is about five percent and, after months of things getting haltingly better, is shaken. But as with most real-world probability questions context matters. The five-percent chance is, as described, the chance someone who’d just been diagnosed in the state she’d been diagnosed in would survive. The information that she’s already survived months of radiation and chemical treatment and physical therapy means they’re now looking at a different question. What is the chance she will survive, given that she has survived this far with this care?

Mark Anderson’s Andertoons for the 24th is the Mark Anderson’s Andertoons for the week. It’s a protesting-student kind of joke. For the student’s question, I’m not sure how many sides a polygon has before we can stop memorizing them. I’d say probably eight. Maybe ten. Of the shapes whose names people actually care about, mm. Circle, triangle, a bunch of quadrilaterals, pentagons, hexagons, octagons, maybe decagon and dodecagon. No, I’ve never met anyone who cared about nonagons. I think we could drop heptagons without anyone noticing either. Among quadrilaterals, ugh, let’s see. Square, rectangle, rhombus, parallelogram, trapezoid (or trapezium), and I guess diamond although I’m not sure what that gets you that rhombus doesn’t already. Toss in circles, ellipses, and ovals, and I think that’s all the shapes whose names you use.

Stephan Pastis’s Pearls Before Swine for the 25th does the rounding-up joke that’s been going around this year. It’s got a new context, though.

When Is Thanksgiving Most Likely To Happen?


I thought I had written this up. Which is good because I didn’t want to spend the energy redoing these calculations.

The date of Thanksgiving, as observed in the United States, is the fourth Thursday of November. So it might happen anytime from the 22nd through the 28th. But because of the quirks of the Gregorian calendar, it can happen that a particular date, like the 23rd of November, is more or less likely to be a Thursday than some other day of the week.

So here’s the results of what days are most and least likely to be Thanksgiving. It turns out the 23rd, this year’s candidate, is tied for the rarest of Thanksgiving days. It’s not that rare, in comparison. It happens only two fewer times every 400 years than do Thanksgivings on the 22nd of November, the (tied) most common day.
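The tally is easy to redo with the standard calendar tools. Any 400 consecutive years form one full Gregorian cycle, so the choice of starting year doesn’t matter:

```python
import datetime
from collections import Counter

# For each year in a full 400-year Gregorian cycle, find the fourth
# Thursday of November and record which date (22nd-28th) it falls on.
# Exactly one of those seven consecutive dates is a Thursday each year.
counts = Counter()
for year in range(2000, 2400):
    thursdays = [day for day in range(22, 29)
                 if datetime.date(year, 11, day).weekday() == 3]  # 3 = Thursday
    counts[thursdays[0]] += 1

for day in sorted(counts):
    print(day, counts[day])
```

The most and least common dates differ by exactly two occurrences per 400 years, matching the figure in the text.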

Reading the Comics, November 18, 2017: Story Problems and Equation Blackboards Edition


It was a normal-paced week at Comic Strip Master Command. It was also one of those weeks that didn’t have anything from Comics Kingdom or Creators.Com. So I’m afraid you’ll all just have to click the links for strips you want to actually see. Sorry.

Bill Amend’s FoxTrot for the 12th has Jason and Marcus creating “mathic novels”. They, being a couple of mathematically-gifted smart people, credit mathematics knowledge with smartness. A “chiliagon” is a thousand-sided regular polygon that’s mostly of philosophical interest. A regular polygon with a thousand equal sides and a thousand equal angles looks like a circle. There’s really no way to draw one so that the human eye could see the whole figure and tell it apart from a circle. But if you can understand the idea of a regular polygon it seems like you can imagine a chiliagon and see how that’s not a circle. So there are some really easy geometry things that can’t be visualized, or at least not truly visualized, and just have to be reasoned with.

Rick Detorie’s One Big Happy for the 12th is a story-problem-subversion joke. The joke’s good enough as it is, but the supposition of the problem is that the driving does cover fifty miles in an hour. This may not be the speed the car travels at the whole time of the problem. Mister Green is maybe speeding to make up for all the time spent travelling slower.

Brandon Sheffield and Dami Lee’s Hot Comics for Cool People for the 13th uses a blackboard full of equations to represent the deep thinking being done on a silly subject.

Shannon Wheeler’s Too Much Coffee Man for the 15th also uses a blackboard full of equations to represent the deep thinking being done on a less silly subject. It’s a really good-looking blackboard full of equations, by the way. Beyond the appearance of our old friend E = mc^2 there’s a lot of stuff that looks like legitimate quantum mechanics symbols there. They’re at least not obvious nonsense, as best I can tell without the ability to zoom the image in. I wonder if Wheeler didn’t find a textbook and use some problems from it for the feeling of authenticity.

Samson’s Dark Side of the Horse for the 16th is a story-problem subversion joke.

Jef Mallett’s Frazz for the 18th talks about making a bet on the World Series, which wrapped up a couple weeks ago. It raises the question: can you bet on an already known outcome? Well, sure, you can bet on anything you like, given a willing partner. But there does seem to be something fundamentally different between betting on something whose outcome isn’t in principle knowable, such as the winner of the next World Series, and betting on something that could be known but happens not to be, such as the winner of the last. We see this expressed in questions like “is it true the 13th of a month is more likely to be Friday than any other day of the week?” If you know which month and year is under discussion the chance the 13th is Friday is either 1 or 0. But we mean something more like, if we don’t know what month and year it is, what’s the chance this is a month with a Friday the 13th? Something like this is at work in this World Series bet. (The Astros won the recently completed World Series.)

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 18th is also featured on some underemployed philosopher’s “Reading the Comics” WordPress blog and fair enough. Utilitarianism exists in an odd triple point, somewhere on the borders of ethics, economics, and mathematics. The idea that one could quantify the good or the utility or the happiness of society, and study how actions affect it, is a strong one. It fits very well the modern mindset that holds everything can be quantified even if we don’t know how to do it well just yet. And it appeals strongly to a mathematically-minded person since it sounds like pure reason. It’s not, of course, any more than any ethical scheme can be. But it sounds like the ethics a Vulcan would come up with and that appeals to a certain kind of person. (The comic is built on one of the implications of utilitarianism that makes it seem like the idea’s gone off the rails.)

There’s some mathematics symbols on The Utilitarian’s costume. The capital U on his face is probably too obvious to need explanation. The \sum u on his chest relies on some mathematical convention. For maybe a half-millennium now mathematicians have been using the capital sigma to mean “take a sum of things”. The things are whatever the expression after that symbol is. Usually, the Sigma will have something below and above which carries meaning. It says what the index is for the thing after the symbol, and what the bounds of the index are. Here, it’s not set. This is common enough, though, if this is understood from context. Or if it’s obvious. The small ‘u’ to the right suggests the utility of whatever’s thought about. (“Utility” being the name for the thing measured and maximized; it might be happiness, it might be general well-being, it might be the number of people alive.) So the symbols would suggest “take the sum of all the relevant utilities”. Which is the calculation that would be done in this case.

Reading the Comics, September 29, 2017: Anthropomorphic Mathematics Edition


The rest of last week had more mathematically-themed comic strips than Sunday alone did. As sometimes happens, I noticed an objectively unimportant detail in one of the comics and got to thinking about it. Whether I could solve the equation as posted, or whether at least part of it made sense as a mathematics problem. Well, you’ll see.

Patrick McDonnell’s Mutts for the 25th of September I include because it’s cute and I like when I can feature some comic in these roundups. Maybe there’s some discussion that could be had about what “equals” means in ordinary English versus what it means in mathematics. But I admit that’s a stretch.

Professor Earl's Math Class. (Earl is the dog.) 'One belly rub equals two pats on the head!'
Patrick McDonnell’s Mutts for the 25th of September, 2017. I should be interested in other people’s research on this. My love’s parents’ dogs are the ones I’ve had the most regular contact with the last few years, and the dogs have all been moderately to extremely alarmed by my doing suspicious things, such as existing or being near them or being away from them or reaching a hand to them or leaving a treat on the floor for them. I know this makes me sound worrisome, but my love’s parents are very good about taking care of dogs others would consider just too much trouble.

Olivia Walch’s Imogen Quest for the 25th uses, and describes, the mathematics of a famous probability problem. This is the surprising result of how few people you need to have a 50 percent chance that some pair of people have a birthday in common. It then goes over to some other probability problems. The examples are silly. But the reasoning is sound. And the approach is useful. To find the chance of something happens it’s often easiest to work out the chance it doesn’t. Which is as good as knowing the chance it does, since a thing can either happen or not happen. At least in probability problems, which define “thing” and “happen” so there’s not ambiguity about whether it happened or not.
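The calculation behind that birthday result is short: work out the chance everyone’s birthdays are distinct, then subtract from 1. This sketch assumes all 365 birthdays are equally likely and ignores leap days:

```python
def shared_birthday_chance(n_people, days=365):
    """Chance at least two of n people share a birthday, assuming
    every birthday is equally likely and ignoring leap days."""
    p_all_distinct = 1.0
    for i in range(n_people):
        # The (i+1)-th person must avoid the i birthdays already taken.
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

print(shared_birthday_chance(22), shared_birthday_chance(23))
```

The chance crosses 50 percent between 22 and 23 people, which is the famous surprise.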

Piers Baker’s Ollie and Quentin rerun for the 26th I’m pretty sure I’ve written about before, although back before I included pictures of the Comics Kingdom strips. (The strip moved from Comics Kingdom over to GoComics, which I haven’t caught removing old comics from their pages.) Anyway, it plays on a core piece of probability. It sets out the world as things, “events”, that can have one of multiple outcomes, and which must have one of those outcomes. Coin tossing is taken to mean, by default, an event that has exactly two possible outcomes, each equally likely. And that is near enough true for real-world coin tossing. But there is a little gap between “near enough” and “true”.

Rick Stromoski’s Soup To Nutz for the 27th is your standard sort of Dumb Royboy joke, in this case about him not knowing what percentages are. You could do the same joke about fractions, including with the same breakdown of what part of the mathematics geek population ruins it for the remainder.

Nate Fakes’s Break of Day for the 28th is not quite the anthropomorphic-numerals joke for the week. Anthropomorphic mathematics problems, anyway. The intriguing thing to me is that the difficult, calculus, problem looks almost legitimate to me. On the right-hand-side of the first two lines, for example, the calculation goes from

\int -8 e^{-\frac{\ln 3}{14} t}

to
-8 \cdot -\frac{14}{\ln 3} e^{-\frac{\ln 3}{14} t}

This is a little sloppy. The first line ought to end in a ‘dt’, and the second ought to have a constant of integration. If you don’t know what these calculus things are let me explain: they’re calculus things. You need to include them to express the work correctly. But if you’re just doing a quick check of something, the mathematical equivalent of a very rough preliminary sketch, it’s common enough to leave that out.

It doesn’t quite parse or mean anything precisely as it is. But it looks like the sort of thing that some context would make meaningful. That there’s repeated appearances of - \frac{\ln 3}{14} , or - \frac{14}{\ln 3} , particularly makes me wonder if Fakes used a problem he (or a friend) was doing for some reason.
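For what it’s worth, the antiderivative does survive a quick numerical check, if you read the second line as a product and set the constant of integration to zero: differentiating it by finite differences recovers the original integrand. A sketch:

```python
import math

K = math.log(3) / 14  # the recurring ln(3)/14 factor from the comic

def integrand(t):
    return -8 * math.exp(-K * t)

def antiderivative(t):
    # integral of -8 e^{-K t} dt = (-8 / -K) e^{-K t}
    #                            = -8 * (-14 / ln 3) * e^{-K t}, constant omitted
    return (-8 / -K) * math.exp(-K * t)

# A central finite difference of the antiderivative should match the integrand.
h = 1e-6
for t in (0.0, 1.0, 7.0):
    slope = (antiderivative(t + h) - antiderivative(t - h)) / (2 * h)
    print(t, slope, integrand(t))
```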

Mark Anderson’s Andertoons for the 29th is a welcome reassurance that something like normality still exists. Something something student blackboard story problem something.

Anthony Blades’s Bewley rerun for the 29th depicts a parent once again too eager to help with arithmetic homework.

Maria Scrivan’s Half Full for the 29th gives me a proper anthropomorphic numerals panel for the week, and none too soon.

The Summer 2017 Mathematics A To Z: Sárközy’s Theorem


Gaurish, of For the love of Mathematics, gives me another chance to talk number theory today. Let’s see how that turns out.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Sárközy’s Theorem.

I have two pieces to assemble for this. One is in factors. We can take any counting number, a positive whole number, and write it as the product of prime numbers. 2038 is equal to the prime 2 times the prime 1019. 4312 is equal to 2 raised to the third power times 7 raised to the second times 11. 1040 is 2 to the fourth power times 5 times 13. 455 is 5 times 7 times 13.

There are many ways to divide up numbers like this. Here’s one. Is there a square number among its factors? 2038 and 455 don’t have any. They’re each a product of prime numbers that are never repeated. 1040 has a square among its factors. 2 times 2 divides into 1040. 4312, similarly, has a square: we can write it as 2 squared times 2 times 7 squared times 11. So that is my first piece. We can divide counting numbers into squarefree and not-squarefree.
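A squarefree test only has to look for a repeated prime factor. A minimal sketch:

```python
def is_squarefree(n):
    """True when no prime appears twice (or more) in n's factorization."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False  # d*d divides n, so some prime square does too
        while n % d == 0:
            n //= d       # strip out d so composite d can't divide later
        d += 1
    return True

for n in (2038, 455, 1040, 4312):
    print(n, is_squarefree(n))
```

It confirms the examples: 2038 and 455 are squarefree, 1040 and 4312 are not.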

The other piece is in binomial coefficients. These are numbers, often quite big numbers, that get dumped on the high school algebra student as she tries to work with some expression like (a + b)^n . They’re also dumped on the poor student in calculus, as something about Newton’s binomial coefficient theorem. Which we hear is something really important. In my experience it wasn’t explained why this should rank up there with, like, the differential calculus. (Spoiler: it’s because of polynomials.) But it’s got some great stuff to it.

Binomial coefficients are among those utility players in mathematics. They turn up in weird places. In dealing with polynomials, of course. They also turn up in combinatorics, and through that, probability. If you run, for example, 10 experiments each of which could succeed or fail, the chance you’ll get exactly five successes is going to be proportional to one of these binomial coefficients. That they touch on polynomials and probability is a sign we’re looking at a thing woven into the whole universe of mathematics. We saw them some in talking, last A-To-Z around, about Yang Hui’s Triangle. That’s also known as Pascal’s Triangle. It has more names too, since it’s been found many times over.

The theorem under discussion is about central binomial coefficients. These are one specific coefficient in a row: the ones that appear, in the triangle, along the line of symmetry. They’re easy to describe in formulas. For a whole number ‘n’ that’s greater than or equal to zero, evaluate what we call ‘2n choose n’:

{{2n} \choose{n}} =  \frac{(2n)!}{(n!)^2}

If ‘n’ is zero, this number is \frac{0!}{(0!)^2} or 1. If ‘n’ is 1, this number is \frac{2!}{(1!)^2} or 2. If ‘n’ is 2, this number is \frac{4!}{(2!)^2} or 6. If ‘n’ is 3, this number is (sparing the formula) 20. The numbers keep growing: 70, 252, 924, 3432, 12870, and so on.

So. 1 and 2 and 6 are squarefree numbers. Not much arguing that. But 20? That’s 2 squared times 5. 70? 2 times 5 times 7. 252? 2 squared times 3 squared times 7. 924? That’s 2 squared times 3 times 7 times 11. 3432? 2 cubed times 3 times 11 times 13; there’s a 2 squared in there. 12870? 2 times 3 squared times it doesn’t matter anymore. It’s not a squarefree number.

There’s a bunch of not-squarefree numbers in there. The question: do we ever stop seeing squarefree numbers here?
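We can poke at the question by computer, without factoring the giant coefficients directly. Every prime factor of ‘2n choose n’ is at most 2n, and Legendre’s formula gives each prime’s exponent. A sketch, with the cutoff of 40 an arbitrary choice:

```python
def prime_exponent(p, n):
    """Exponent of prime p in (2n choose n), by Legendre's formula:
    sum over i of floor(2n/p^i) - 2*floor(n/p^i)."""
    e, q = 0, p
    while q <= 2 * n:
        e += (2 * n) // q - 2 * (n // q)
        q *= p
    return e

def primes_up_to(m):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (m + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(m ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, ok in enumerate(sieve) if ok]

def central_is_squarefree(n):
    # Squarefree means no prime appears with exponent 2 or more.
    if n == 0:
        return True  # (0 choose 0) = 1
    return all(prime_exponent(p, n) <= 1 for p in primes_up_to(2 * n))

squarefree_n = [n for n in range(41) if central_is_squarefree(n)]
print(squarefree_n)
```

Run as written, only n = 0, 1, 2, and 4 survive: the coefficients 1, 2, 6, and 70 from the lists above.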

So here’s Sárközy’s Theorem. It says that this central binomial coefficient {{2n} \choose{n}} is never squarefree as long as ‘n’ is big enough. András Sárközy showed in 1985 that this was true. How big is big enough? … We have a bound, at least, for this theorem. If ‘n’ is larger than the number 2^{8000} then the corresponding coefficient can’t be squarefree. It might not surprise you that the formulas involved here feature the Riemann Zeta function. That always seems to turn up for questions about large prime numbers.

That’s a common state of affairs for number theory problems. Very often we can show that something is true for big enough numbers. I’m not sure there’s a clear reason why. When numbers get large enough it can be more convenient to deal with their logarithms, I suppose. And those look more like the real numbers than the integers. And real numbers are typically easier to prove stuff about. Maybe that’s it. This is vague, yes. But to ask ‘why’ some things are easy and some are hard to prove is a hard question. What is a satisfying ‘cause’ here?

It’s tempting to say that since we know this is true for all ‘n’ above a bound, we’re done. We can just test all the numbers below that bound, and the rest is done. You can do a satisfying proof this way: show that eventually the statement is true, and show all the special little cases before it is. This particular result is kind of useless, though. 2^{8000} is a number that’s something like 2,400 digits long. For comparison, the total number of things in the universe is something like a number about 80 digits long. Certainly not more than 90. It’d take too long to test all those cases.

That’s all right. Since Sárközy’s proof in 1985 there’ve been other breakthroughs. In 1988 P Goetgheluck proved it was true for a big range of numbers: every ‘n’ that’s larger than 4 and less than 2^{42,205,184} . That’s a number something more than 12 million digits long. In 1991 I Vardi proved we had no squarefree central binomial coefficients for ‘n’ greater than 4 and less than 2^{774,840,978} , which is a number about 233 million digits long. And then in 1996 Andrew Granville and Olivier Ramaré showed directly that this was so for all ‘n’ larger than 4.

So the 70 that turned up just a few lines back is the last squarefree one of these coefficients.

Is this surprising? Maybe, maybe not. I’ll bet most of you didn’t have an opinion on this topic twenty minutes ago. Let me share something that did surprise me, and continues to surprise me. In 1974 David Singmaster proved that any integer divides almost all the binomial coefficients out there. “Almost all” is here a term of art, but it means just about what you’d expect. Imagine the giant list of all the numbers that can be binomial coefficients. Then pick any positive integer you like. The number you picked will divide into so many of the giant list that the exceptions won’t be noticeable. So that square numbers like 4 and 9 and 16 and 25 should divide into most binomial coefficients? … That’s to be expected, suddenly. Into the central binomial coefficients? That’s not so obvious to me. But then so much of number theory is strange and surprising and not so obvious.
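Singmaster’s “almost all” can be glimpsed empirically. Here is a sketch that counts how many entries in the first 1024 rows of Pascal’s triangle are divisible by the square number 4; the divisor and row count are arbitrary choices, and working mod 4 keeps the entries small:

```python
def fraction_divisible(divisor, rows):
    """Fraction of entries in the first `rows` rows of Pascal's
    triangle that are divisible by `divisor`, computed mod `divisor`
    so the entries never grow large."""
    row = [1 % divisor]   # row n = 0 is just [1]
    hits = total = 0
    for _ in range(rows):
        hits += sum(1 for v in row if v == 0)
        total += len(row)
        # Build the next row from this one, still mod divisor.
        row = [(a + b) % divisor for a, b in zip([0] + row, row + [0])]
    return hits / total

frac = fraction_divisible(4, 1024)
print(round(frac, 3))
```

Already at this modest depth, well over two-thirds of the entries are multiples of 4, and the fraction keeps climbing as more rows are taken in.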

The Summer 2017 Mathematics A To Z: Quasirandom numbers


Gaurish, host of For the love of Mathematics, gives me the excuse to talk about amusement parks. You may want to brace yourself. Yes, this essay includes a picture. It would have included a video if I had enough WordPress privileges for that.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Quasirandom numbers.

Think of a merry-go-round. Or carousel, if you prefer. I will venture a guess. You might like merry-go-rounds. They’re beautiful. They can evoke happy thoughts of childhood when they were a big ride it was safe to go on. But they don’t often make one think of thrills. They’re generally sedate things. They don’t need to be. There’s no great secret to making a carousel a thrill ride. They knew it a century ago, when all the great American carousels were carved. It’s simple. Make the thing spin fast enough, at the five or six rotations per minute the ride was made for. There are places that do this yet. There’s the Cedar Downs ride at Cedar Point, Sandusky, Ohio. There’s the antique carousel at Crossroads Village, a historical village/park just outside Flint, Michigan. There’s the Derby Racer at Playland in Rye, New York. There’s the carousel in the Merry-Go-Round Museum in Sandusky, Ohio. Any of them are great rides. Two of them have a special edge. I’ll come back to them.

Playland's Derby Racer in motion, at night, featuring a ride operator leaning maybe twenty degrees inward.
Rye (New York) Playland Amusement Park’s is the fastest carousel I’m aware of running. Riders are warned ahead of time to sit so they’re leaning to the left, and the ride will not get up to full speed until the ride operator checks everyone during the ride. To get some idea of its speed, notice the ride operator on the left and how far she leans. She’s not being dramatic; that’s the natural stance. Also the tilt in the carousel’s floor is not camera trickery; it does lean like that. If you have a spare day in the New York City area and any interest in classic amusement parks, this is worth the trip.

Randomness is a valuable resource. We know it’s key to many things. We have major fields of mathematics built on it. We can understand the behavior of variables without ever knowing what value they have. All we need is to know the chance they might be in some particular range. This makes possible all kinds of problems too complicated to do otherwise. We know it’s critical. Quantum mechanics would not work without randomness. Without quantum mechanics, matter doesn’t work. And that’s true randomness, the kind where something is unpredictable. It’s not the kind of randomness we talk about when we ask, say, what’s the chance someone was born on a Tuesday. That’s mere hidden information: if we knew the month and date and year of a person’s birth we would know whether they were born Tuesday or not. We need more.

So the trouble is actually getting a random number. Well, a sequence of randomly drawn numbers. We rarely need this if we’re doing analysis. We can understand how some process changes the shape of a distribution without ever using the distribution. We can take derivatives of a function without ever evaluating the original function, after all.

But we do need randomly drawn numbers. We do too much numerical work with them. For example, it’s impossible to exactly integrate most functions. Numerical methods can take a ferociously long time to evaluate. A family of methods called Monte Carlo rely on randomly-drawn values to estimate the integral. The results are strikingly good for the work required. But they must have random numbers. The name “Monte Carlo” is not some cryptic code. It is an expression of how randomly drawn numbers make the tool work.
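A minimal sketch of the Monte Carlo idea: estimate \int_0^1 x^2 dx , which is exactly one-third, by averaging the function at randomly drawn points. The sample count and seed here are arbitrary choices:

```python
import random

def monte_carlo_integral(f, samples, rng):
    """Estimate the integral of f over [0, 1] by averaging f at
    uniformly random points; the error shrinks like 1/sqrt(samples)."""
    return sum(f(rng.random()) for _ in range(samples)) / samples

rng = random.Random(12345)   # fixed seed so the run is repeatable
estimate = monte_carlo_integral(lambda x: x * x, 100_000, rng)
print(estimate)              # close to 1/3
```

For a one-dimensional integral this is overkill. The method earns its keep in high dimensions, where grid-based rules become hopeless.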

It’s hard to get random numbers. Consider: we can’t write an algorithm to do it. If we were to write one, then we’d be able to predict what the sequence of numbers was. We have some recourse. We could set up instruments to rely on the randomness that seems to be in the world. Thermal fluctuations, for example, created by processes outside any computer’s control, can give us a pleasant dose of randomness. If we need higher-quality random numbers than that we can go to exotic equipment. Geiger counters watching the decay of a not-alarmingly-radioactive sample. Cosmic ray detectors watching the sky.

Or we can write something that produces numbers that look random enough. They won’t really be random, and if we wait long enough we’ll notice the sequence repeats itself. But if we only need, say, ten numbers, who cares if the sequence will repeat after ten million numbers? (We’ll surely need more than ten numbers. But we can postpone the repetition until we’ve drawn far more than ten million numbers.)
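A toy version of such a generator makes the repetition visible. This is a linear congruential generator with a deliberately tiny modulus; the constants are chosen just so the full cycle fits on one line:

```python
def tiny_lcg(seed, modulus=16, a=5, c=3):
    """A deliberately tiny linear congruential generator:
    x -> (a*x + c) mod modulus. Real generators use enormous
    moduli, pushing the inevitable repeat far out of sight."""
    x = seed
    while True:
        x = (a * x + c) % modulus
        yield x

gen = tiny_lcg(seed=7)
sequence = [next(gen) for _ in range(20)]
print(sequence)   # after 16 draws the cycle starts over
```

These particular constants give the generator its full period of 16: every residue appears once before the sequence repeats.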

Two of the carousels I’ve mentioned have an astounding property. The horses in a file move. I mean, relative to each other. Some horse will start the race in front of its neighbors; some will start behind. The four move forward and back thanks to a mechanism of, I am assured, staggering complexity. There are only three carousels in the world that have it. There’s Cedar Downs at Cedar Point in Sandusky, Ohio; the Racing Downs at Playland in Rye, New York; and the Derby Racer at Blackpool Pleasure Beach in Blackpool, England. The mechanism in Blackpool’s hasn’t operated in years. The one at Playland’s had not run in years, but was restored for the 2017 season. My love and I made a trip specifically to ride that. (You may have heard of a fire at the carousel in Playland this summer. This was part of the building for their other, non-racing, antique carousel. My last information was that the carousel itself was all right.)

These racing derbies have the horses in a file move forward and back in a “random” way. It’s not truly random. If you knew exactly which gears were underneath each horse, and where in their rotations they were, you could say which horse was about to gain on its partners and which was about to fall back. But all that is concealed from the rider. The horse patterns will eventually, someday, repeat. If the gear cycles aren’t interrupted by maintenance or malfunctions. But nobody’s going to ride any horse long enough to notice. We have in these rides a randomness as good as what your computer makes, at least for the purpose it serves.

Cedar Point's Cedar Downs during the race, showing the blur of the ride's motion.
The racing nature of Playland’s and Cedar Point’s derby racers mean that every ride includes exciting extra moments of overtaking or falling behind your partners to the side. It also means quarreling with your siblings about who really won the race because your horse started like four feet behind your sister’s and it ended only two feet behind so hers didn’t beat yours and, long story short, there was some punching, there was some spitting, and now nobody is gonna be allowed to get ice cream at the Carvel’s (for Playland) or cheese on a stick (for Cedar Point). The photo is of the Cedar Downs ride at Cedar Point, and focuses on the poles that move the horses.

What does it mean to look random? Some things seem obvious. All the possible numbers ought to come up, sooner or later. Any particular possible number shouldn’t repeat too often. Any particular possible number shouldn’t go too long without repeating. There shouldn’t be clumps of numbers; if, say, ‘4’ turns up, we shouldn’t see ‘5’ turn up right away all the time.

We can make the idea of “looking” random quite literal. Suppose we’re selecting numbers from 0 through 9. We can draw the random numbers we’ve picked. Use the numbers as coordinates. Say we pick four digits: 1, 3, 9, and 0. Then draw the point that’s at x-coordinate 13, y-coordinate 90. Then the next four digits. Let’s say they’re 4, 2, 3, and 8. Then draw the point that’s at x-coordinate 42, y-coordinate 38. And repeat. What will this look like?

If it clumps up, we probably don’t have good random numbers. If we see lines that points collect along, or avoid, there’s a good chance our numbers aren’t very random. If there’s whole blocks of space that they occupy, and others they avoid, we may have a defective source of random numbers. We should expect the points to cover a space pretty uniformly. (There are more rigorous, logically sound, methods. The eye can be fooled easily enough. But it’s the same principle. We have some test that notices clumps and gaps.) But …
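That eyeball test can be roughed out numerically instead of drawn. Here’s a sketch that draws digit-pair points as described above and buckets them into coarse blocks to look for clumps or gaps; the block size and point count are arbitrary choices:

```python
import random

rng = random.Random(2017)   # fixed seed so the check is repeatable

# Draw digits four at a time: the first pair gives the x-coordinate
# (0 through 99), the second pair the y-coordinate.
points = []
for _ in range(20_000):
    d = [rng.randrange(10) for _ in range(4)]
    points.append((10 * d[0] + d[1], 10 * d[2] + d[3]))

# Crude uniformity check: count points in each 20x20 block of the
# 100x100 square. There are 25 blocks, so roughly 800 points each.
counts = [0] * 25
for x, y in points:
    counts[(x // 20) * 5 + y // 20] += 1
print(min(counts), max(counts))   # both should sit near 800
```

A grossly lopsided min/max pair would flag a bad source. As the next paragraphs say, though, some unevenness is exactly what honest randomness looks like.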

The thing is, there’s always going to be some clumps. There’ll always be some gaps. Part of randomness is that it forms patterns, or at least things that look like patterns to us. We can describe how big a clump (or gap; it’s the same thing, really) is for any particular quantity of randomly drawn numbers. If we see clumps bigger than that we can throw out the numbers as suspect. But … still …

Toss a coin fairly twenty times, and there’s no reason it can’t turn up tails sixteen times. This doesn’t happen often, but it will happen sometimes. Just luck. This surplus of tails should evaporate as we take more tosses. That is, we most likely won’t see 160 tails out of 200 tosses. We certainly will not see 1,600 tails out of 2,000 tosses. We know this as the Law of Large Numbers. Wait long enough and weird fluctuations will average out.
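The coin example is cheap to simulate. A sketch showing the fraction of tails settling toward one-half as the toss count grows; the seed and the toss counts are arbitrary:

```python
import random

rng = random.Random(99)   # fixed seed for a repeatable run

def tail_fraction(tosses, rng):
    """Fraction of tails in a run of fair coin tosses."""
    return sum(rng.random() < 0.5 for _ in range(tosses)) / tosses

# Fluctuations shrink roughly like 1/sqrt(number of tosses).
fractions = {n: tail_fraction(n, rng) for n in (20, 200, 2000, 20_000)}
for n, frac in fractions.items():
    print(n, frac)
```

Twenty tosses can wander well away from one-half; twenty thousand almost never do.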

What if we don’t have time, though? For coin-tossing that’s silly; of course we have time. But for Monte Carlo integration? It could take too long to be confident we haven’t got too-large gaps or too-tight clusters.

This is why we take quasi-random numbers. We begin with what randomness we’re able to manage. But we massage it. Imagine our coin-tossing example. Suppose after ten fair tosses we noticed there had been eight tails turn up. Then we would start tossing less fairly, trying to make heads more common. We would be happier if there were 12 rather than 16 tails after twenty tosses.

Draw the results. We get now a pattern that looks still like randomness. But it’s a finer sorting; it looks like static tidied up some. The quasi-random numbers are not properly random. Knowing that, say, the last several numbers were odd means the next one is more likely to be even, the Gambler’s Fallacy put to work. But in aggregate, we trust, we’ll be able to enjoy the speed and power of randomly-drawn numbers. It shows its strengths when we don’t know just how finely we must sample a range of numbers to get good, reliable results.
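One standard family of quasirandom numbers, a low-discrepancy sequence called the van der Corput sequence, can serve as a concrete sketch of numbers engineered to avoid clumps. The rule: write the index in base 2 and mirror its digits across the radix point. Each new term lands in the largest gap left so far:

```python
def van_der_corput(i, base=2):
    """The i-th term of the van der Corput low-discrepancy sequence:
    write i in the given base and mirror its digits across the
    radix point, so e.g. i = 6 = 110 in binary becomes 0.011 = 3/8."""
    value, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        value += (i % base) / denom
        i //= base
    return value

first_eight = [van_der_corput(i) for i in range(8)]
print(first_eight)   # 0, 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8
```

The first 2^k terms land exactly on the grid of multiples of 1/2^k, just in a shuffled order, which is why the gaps never grow the way an unlucky random draw’s can.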

To carousels. I don’t know whether the derby racers have quasirandom outcomes. I would find believable someone telling me that all the possible orderings of the four horses in any file are equally likely. To know would demand detailed knowledge of how the gearing works, though. Also probably simulations of how the system would work if it ran long enough. It might be easier to watch the ride for a couple of days and keep track of the outcomes. If someone wants to sponsor me doing a month-long research expedition to Cedar Point, drop me a note. Or just pay for my season pass. You folks would do that for me, wouldn’t you? Thanks.