My All 2020 Mathematics A to Z: Butterfly Effect


It’s a fun topic today, one suggested by Jacob Siehler, who I think is one of the people I met through Mathstodon. Mathstodon is a mathematics-themed instance of Mastodon, an open-source microblogging system. You can read its public messages here.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols +, ×, ÷ (the division obelus), and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Butterfly Effect.

I take the short walk from my home to the Red Cedar River, and I pour a cup of water in. What happens next? To the water, anyway. Me, I think about walking all the way back home with this empty cup.

Let me have some simplifying assumptions. Pretend the cup of water remains somehow identifiable. That it doesn’t evaporate or dissolve into the riverbed. That it isn’t scooped up by a city or factory, drunk by an animal, or absorbed into a plant’s roots. That it doesn’t meet any interesting ions that turn it into other chemicals. It just goes as the river flows dictate. The Red Cedar River merges into the Grand River. This then moves west, emptying into Lake Michigan. Water from that eventually passes the Straits of Mackinac into Lake Huron. Through the St Clair River it goes to Lake Saint Clair, the Detroit River, Lake Erie, the Niagara River, the Niagara Falls, and Lake Ontario. Then into the Saint Lawrence River, then the Gulf of Saint Lawrence, before joining finally the North Atlantic.

Photograph of a small, tree-lined riverbed from a wooden bridge over it.
To the right: East Lansing and the Michigan State University campus. To the left, in a sense: the Atlantic Ocean.

If I pour in a second cup of water, somewhere else on the Red Cedar River, it has a similar journey. The details are different, but the course does not change. Grand River to Lake Michigan to three more Great Lakes to the Saint Lawrence to the North Atlantic Ocean. If I wish to know when my water passes the Mackinac Bridge I have a difficult problem. If I just wish to know what its future is, the problem is easy.

So now you understand dynamical systems. There are some details to learn before you get a job, yes. But this is a perspective that explains what people in the field do, and why they do it. Dynamical systems are, largely, physics problems. They are about collections of things that interact according to some known potential energy. They may interact with each other. They may interact with the environment. We expect that where these things are changes in time. These changes are determined by the potential energies; there’s nothing random in it. Start a system from the same point twice and it will do the exact same thing twice.

We can describe the system as a set of coordinates. For a normal physics system the coordinates are the positions and momentums of everything that can move. If the potential energy’s rule changes with time, we probably have to include the time and the energy of the system as more coordinates. This collection of coordinates, describing the system at any moment, is a point. The point is somewhere inside phase space, which is an abstract idea, yes. But the geometry we know from the space we walk around in tells us things about phase space, too.

Imagine tracking my cup of water through its journey in the Red Cedar River. It draws out a thread, running from somewhere near my house into the Grand River and Lake Michigan and on. This great thin thread that I finally lose interest in when it flows into the Atlantic Ocean.

A dynamical system’s point in phase space acts much the same. As the system changes in time, the coordinates of its parts change, or we expect them to. So “the point representing the system” moves. Where it moves depends on the potentials around it, the same way my cup of water moves according to the flow around it. “The point representing the system” traces out a thread, called a trajectory. The whole history of the system is somewhere on that thread.

Harvey, sulking: 'What a horrible year! How did it come to this?' Penny: 'I blame chaos theory.' Harvey: 'If it's chaos theory, I know EXACTLY who to blame! Some stupid butterfly flapped its wings and here we are.'
Stephen Beals’s Adult Children for the 21st of June, 2020. There were at least two “chaos theory”/“butterfly effect” comic strips in my feed just last week, and I wasn’t even looking. You can find comics essays where I talk about Adult Children at this link, and comic strips in general here.

Phase space, like a map, has regions. For my cup of water there’s a region that represents “is in Lake Michigan”. There’s another that represents “is going over Niagara Falls”. There’s one that represents “is stuck in Sandusky Bay a while”. When we study dynamical systems we are often interested in what these regions are, and what the boundaries between them are. Then a glance at where the point representing a system is tells us what it is doing. If the system represents a satellite orbiting a planet, we can tell whether it’s in a stable orbit, about to crash into a moon, or about to escape to interplanetary space. If the system represents weather, we can say it’s calm or stormy. If the system is a rigid pendulum — a favorite system to study, because we can draw its phase space on the blackboard — we can say whether the pendulum rocks back and forth or spins wildly.
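The pendulum makes a good first numerical experiment here. Below is a minimal sketch in Python; the constants, step size, and starting pushes are arbitrary choices of mine, not anything the mathematics requires. It traces a point through the pendulum's two-coordinate phase space, angle and angular velocity, and uses the energy to say which region the system is in: rocking back and forth, or spinning wildly.

```python
import math

def pendulum_trajectory(theta0, omega0, g_over_l=1.0, dt=0.01, steps=5000):
    """Trace the point (theta, omega) through the pendulum's phase space.

    Semi-implicit Euler: update the angular velocity first, then the angle,
    which keeps the energy from drifting too badly over a long run.
    """
    theta, omega = theta0, omega0
    path = [(theta, omega)]
    for _ in range(steps):
        omega -= g_over_l * math.sin(theta) * dt
        theta += omega * dt
        path.append((theta, omega))
    return path

def behavior(theta0, omega0, g_over_l=1.0):
    """Which region of phase space are we in? Enough energy to clear the top and it spins."""
    energy = 0.5 * omega0 ** 2 - g_over_l * math.cos(theta0)
    return "spins wildly" if energy > g_over_l else "rocks back and forth"

print(behavior(0.5, 0.0))                    # a small push: rocks back and forth
print(behavior(0.5, 3.0))                    # a big push: spins wildly
print(len(pendulum_trajectory(0.5, 3.0)))    # 5001 points along one trajectory
```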

Come back to my second cup of water, the one with a different history. It has a different thread from the first. So, too, a dynamical system started from a different point traces out a different trajectory. To find a trajectory is, normally, to solve differential equations. This is often useful to do. But from the dynamical systems perspective we’re usually interested in other issues.

For example: when I pour my cup of water in, does it stay together? The cup of water started all quite close together. But the different drops of water inside the cup? They’ve all had their own slightly different trajectories. So if I went with a bucket, one second later, trying to scoop it all up, likely I’d succeed. A minute later? … Possibly. An hour later? A day later?

By then I can’t gather it back up, practically speaking, because the water’s gotten all spread out across the Grand River. Possibly Lake Michigan. If I knew the flow of the river perfectly and knew well enough where I dropped the water in? I could predict where each goes, and catch each molecule of water right before it falls over Niagara. This is tedious but, after all, if you start from different spots — as the first and the last drop of my cup do — you expect to, eventually, go different places. They all end up in the North Atlantic anyway.

Photograph of Niagara Falls, showing the American Falls and the Bridal Veil, with a faint rainbow visible to the left of image and a boat sailing to the right.
Me, screaming to the pilot of the boat at center-right: “There’s my water drop! No, to the left! The left — your other left!”

Except … well, there is the Chicago Sanitary and Ship Canal. It connects the Chicago River to the Des Plaines River. The result is that some of Lake Michigan drains to the Illinois River, and from there the Mississippi River, and the Gulf of Mexico. There are also some canals in Ohio which connect Lake Erie to the Ohio River. I don’t know offhand of ones in Indiana or Wisconsin bringing Great Lakes water to the Mississippi. I assume there are, though.

Then, too, there is the Erie Canal, and the other canals of the New York State Canal System. These link the Niagara River and Lake Erie and Lake Ontario to the Hudson River. The Pennsylvania Canal System, too, links Lake Erie to the Delaware River. The Delaware and the Hudson may bring my water to the mid-Atlantic. I don’t know the canal systems of Ontario well enough to say whether some water goes to Hudson Bay; I’d grant that’s possible, though.

Think of my poor cups of water, now. I had been sure their fate was the North Atlantic. But if they happen to be in the right spot? They visit my old home off the Jersey Shore. Or they flow through Louisiana and warmer weather. What is their fate?

I will have butterflies in here soon.

Imagine two adjacent drops of water, one about to be pulled into the Chicago River and one with Lake Huron in its future. There is almost no difference in their current states. Their destinies are wildly separate, though. It’s surprising that so small a difference matters. Thinking through the surprise, it’s fair that this can happen, even for a deterministic system. It happens that there is a border, separating those bound for the Gulf and those for the North Atlantic, between these drops.

But how did those water drops get there? Where were they an hour before? … Somewhere else, yes. But still, on opposite sides of the border between “Gulf of Mexico water” and “North Atlantic water”. A day before, the drops were somewhere else yet, and the border was still between them. This separation holds even if the two drops came from my cup of water. Within the Red Cedar River is a border between a destiny of flowing past Quebec and of flowing past Saint Louis. And between flowing past Quebec and flowing past Syracuse. Between Syracuse and Philadelphia.

How far apart are those borders in the Red Cedar River? If you’ll go along with my assumptions, smaller than my cup of water. Not that I have the cup in a special location. The borders between all these fates are, probably, a complicated spaghetti-tangle. Anywhere along the river would be as fortunate. But what happens if the borders are separated by a space smaller than a drop? Well, a “drop” is a vague size. What if the borders are separated by a width smaller than a water molecule? There are surely no subtleties in defining the “size” of a molecule.

That these borders are so close does not make the system random. It is still deterministic. Put a drop of water on this side of the border and it will go to this fate. But how do we know which side of the line the drop is on? If I toss this new cup out to the left rather than the right, does that matter? If my pinky twitches during the toss? If I am breathing in rather than out? What if a change too small to measure puts the drop on the other side?

And here we have the butterfly effect. It is about how a difference too small to observe has an effect too large to ignore. It is not about a system being random. It is about how we cannot know the system well enough for its predictability to tell us anything.

The term comes from the modern study of chaotic systems. One of the first topics in which the chaos was noticed, numerically, was weather simulations. The difference between a number’s full representation in the computer’s memory and its rounded-off printout was enough to grow into noticeably different forecasts. Edward Lorenz posed it aptly in 1963, saying that “one flap of a sea gull’s wings would be enough to alter the course of the weather forever”. Over the next few years this changed to a butterfly. In 1972 Philip Merilees titled a talk Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas? My impression is that these days the butterflies may be anywhere, and they alter hurricanes.
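Lorenz's 1963 system is simple enough to try yourself. Here is a minimal sketch using the standard parameter values and a crude forward-Euler integration; the step size and the size of the initial nudge are my own choices. Two copies of the system start a billionth of a unit apart, and the gap between them grows until the two forecasts have nothing to do with each other.

```python
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of Lorenz's 1963 system; crude, but enough to see the effect."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)   # a nudge far too small to measure

for step in range(1, 40001):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 10000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.001:5.1f}   separation = {gap:.6g}")
```

The separation starts microscopic and, by the end of the run, is about as large as the attractor itself; that is the butterfly effect in three lines of arithmetic repeated forty thousand times.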

Comic strip Chaos Butterfly Man: 'Bitten by a radioactive chaos butterfly, Mike Mason gained the powers of a chaos butterfly!' [ Outside a bank ] Mason flaps his arm at an escaping robber. Robber: 'Ha-ha! What harm can THAT do me?' [ Nine days later ] Robber: 'Where did this thunderstorm finally come fr ... YOW!' (He's struck by lightning.)
Ruben Bolling’s Super-Fun-Pak Comix for the 23rd of May, 2020. Ruben Bolling uses Chaos Butterfly a good bit in both Super-Fun-Pak Comix and in the main strip, Tom the Dancing Bug. I have fewer essays exploring these Chaos Butterfly strips than you might imagine from that because I ran out of different things to say about the joke. Bolling’s is a great strip, though, and I recommend you consider it.

That we settle on butterflies as agents of chaos we can likely credit to their image. They seem to be innocent things so slight they barely exist. Hummingbirds probably move with too much obvious determination to fit the role. The Big Bad Wolf huffing and puffing would realistically be almost as nothing as a butterfly. But he has the power of myth to make him seem mightier than the storms. There are other happy accidents supporting butterflies, though. Edward Lorenz’s 1960s weather model makes trajectories that, plotted, create two great elliptical lobes. The figures look like butterflies, all different but part of the same family. And there is Ray Bradbury’s classic short story, A Sound Of Thunder. If you don’t remember 7th grade English class, in the story time-travelling idiots change history, putting a fascist with terrible spelling in charge of a dystopian world, by stepping on a butterfly.

The butterfly then is metonymy for all the things too small to notice. Butterflies, sea gulls, turning the ceiling fan on in the wrong direction, prying open the living room window so there’s now a cross-breeze. They can matter, we learn.

Reading the Comics, March 17, 2020: Random Edition


I thought last week’s comic strips mentioning mathematics in detail were still subjects easy to describe in one or two paragraphs each. I wasn’t quite right. So here’s a half of a week, even if it is a day later than I had wanted to post.

John Zakour and Scott Roberts’s Working Daze for the 15th is a straggler Pi Day joke, built on the nerd couple Roy and Kathy letting the date slip their minds. This is a very slight Pi Day reference but I feel the need to include it for completeness’s sake. It reminds me of the sequence where one year Schroeder forgot Beethoven’s birthday, and was devastated.

Sue: 'So, Roy, what big fun did you and Kathy have for Pi Day this year?' Roy, caught by surprise, freezes, and then turns several colors in succession before he starts to cry. Ed, to Sue: 'Hard to say which is worse for him, that you forgot, or that you remembered.'
John Zakour and Scott Roberts’s Working Daze for the 15th of March, 2020. Essays featuring Working Daze, which often turns up in Pi Day events, are at this link. And generally essays tied to Pi Day are at this link.

Lincoln Peirce’s Big Nate for the 15th is a wordy bit of Nate refusing the story problem. Nate complains about a lack of motivation for the characters in it. But then what we need for a story problem isn’t the characters to do something so much as it is the student to want to solve the problem. That’s hard work. Everyone’s fascinated by some mathematical problems, but it’s hard to think of something that will compel everyone to wonder what the answer could be.

At one point Nate wonders what happens if Todd stops for gas. Here he’s just ignoring the premise of the question: Todd is given as travelling an average 55 mph until he reaches Saint Louis, and that’s that. So this question at least is answered. But he might need advice to see how it’s implied.

Quiz: 'Mandy lives in Los Angeles. Todd lives in Boston. They plan to meet in St Louis, which is 1,825 miles from Los Angeles and 1,192 miles from Boston. If Mandy takes a train travelling a constant 80 mph and Todd drives a car at a constant 55 mph, which of them will reach St Louis first?' Nate's answer: 'That depends. Who ARE these people? Are they a couple? Is this romance? If it is, wouldn't Todd drive way faster than 55 mph? He'd be all fired up to see Mandy, right? And wouldn't Mandy take a plane and get to St Louis in like three hours? Especially if she hasn't seen Todd in a while? But we don't know how long since they've been together because you decided not to tell us! Plus anything can happen while they're traveling. What if Todd stops for gas and the cashier is a total smoke show and he's like, Mandy Who? I can't answer until I have some real intel on these people. I can't believe you even asked the question.' Out loud, 'Also, Todd and Mandy are dorky names.' Teacher: 'This isn't what I meant by show your work.'
Lincoln Peirce’s Big Nate for the 15th of March, 2020. Essays with something mentioned by either Big Nate or the 1990s-repeats Big Nate: First Class are gathered at this link.

So this problem is doable by long division: 1825 divided by 80, and 1192 divided by 55, and see what’s larger. Can we avoid dividing by 55 if we’re doing it by hand? I think so. Here’s what I see: 1825 divided by 80 is equal to 1600 divided by 80 plus 225 divided by 80. That first is 20; that second is … eh. It’s a little less than 240 divided by 80, which is 3. So Mandy will need a little under 23 hours.

Is 23 hours enough for Todd to get to Saint Louis? Well, 23 times 55 will be 23 times 50 plus 23 times 5. 23 times 50 is 22 times 50 plus 1 times 50. 22 times 50 is 11 times 100, or 1100. So 23 times 50 is 1150. And 23 times 5 is 115, so 23 times 55 is 1265. That’s more than 1192, so Todd needs less than 23 hours. Better: even 22 times 55, which is 1210, is more than 1192. So Todd needs less than 22 hours, while Mandy needs nearly 23. Todd gets there first. I might want to figure just how much less than 23 hours Mandy needs, to be sure of my calculation, but this is how I do it without putting 55 into an ugly number like 1192.
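If you would rather let a machine do the long division, a couple of lines of Python confirm the hand arithmetic, using the distances and speeds given in the strip:

```python
mandy_hours = 1825 / 80    # train from Los Angeles at a constant 80 mph
todd_hours = 1192 / 55     # car from Boston at a constant 55 mph

print(f"Mandy: {mandy_hours:.2f} hours")   # about 22.81
print(f"Todd:  {todd_hours:.2f} hours")    # about 21.67
print("Todd arrives first" if todd_hours < mandy_hours else "Mandy arrives first")
```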

Cow: 'What're you doing?' Billy: 'I'm devising a system to win the lottery! Plugging in what I know about chaos theory and numerical behavior in nonlinear dynamical systems should give me the winning picks.' (Silent penultimate panel.) Cow: 'You're just writing down a bunch of numbers.' Billy: 'Maybe.'
Mark Leiknes’s Cow and Boy repeat for the 17th of March, 2020. The too-rare appearances of Cow and Boy Reruns in my essays are here.

Mark Leiknes’s Cow and Boy repeat for the 17th sees the Boy, Billy, trying to beat the lottery. He throws at it the terms chaos theory and nonlinear dynamical systems. They’re good and probably relevant systems. A “dynamical system” is what you’d guess from the name: a collection of things whose properties keep changing. They change because of other things in the collection. When “nonlinear” crops up in mathematics it means “oh but such a pain to deal with”. It has a more precise definition, but this is its meaning. More precisely: in a linear system, a change in the initial setup makes a proportional change in the outcome. If Todd drove to Saint Louis on a path two percent longer, he’d need two percent more time to get there. A nonlinear system doesn’t guarantee that; a two percent longer drive might take ten percent longer, or one-quarter the time, or some other weirdness. Nonlinear systems are really good for giving numbers that look random. There’ll be so many little factors that make non-negligible results that they can’t be predicted in any useful time. This is good for drawing number balls for a lottery.
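To see the linear/nonlinear contrast in numbers, here is a minimal sketch. The logistic map is my choice of nonlinear example, a standard one and not anything named in the strip. The same one-percent nudge to the starting value stays a one-percent difference under the linear rule and turns into something unrecognizable under the nonlinear one.

```python
def linear(x):
    return 0.9 * x                 # a linear rule: nudges stay in proportion

def logistic(x):
    return 4.0 * x * (1.0 - x)     # a nonlinear rule, a standard chaotic example

def run(rule, x, steps=50):
    """Apply the rule over and over and report where we end up."""
    for _ in range(steps):
        x = rule(x)
    return x

for rule in (linear, logistic):
    a = run(rule, 0.400)
    b = run(rule, 0.404)           # start one percent higher
    print(f"{rule.__name__:8s}  {a:.6f}  {b:.6f}")
```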

Chaos theory turns up a lot in dynamical systems. Dynamical systems, even nonlinear ones, often have regions that behave in predictable patterns. We may not be able to say what tomorrow’s weather will be exactly, but we can say whether it’ll be hot or freezing. But dynamical systems can have regions where no prediction is possible. Not because they don’t follow predictable rules. But because any perturbation, however small, produces changes that overwhelm the forecast. This includes the difference between any possible real-world measurement and the real quantity.

Obvious question: how is there anything to study in chaos theory, then? Is it all just people looking at complicated systems and saying, yup, we’re done here? Usually the questions turn on problems such as how probable it is we’re in a chaotic region. Or what factors influence whether the system is chaotic, and how much of it is chaotic. Even if we can’t say what will happen, we can usually say something about when we can’t say what will happen, and why. Anyway if Billy does believe the lottery is chaotic, there’s not a lot he can be doing with predicting winning numbers from it. Cow’s skepticism is fair.

T-Rex: 'Dromiceiomimus, pick a number between one and a hundred thousand million.' Dromiceiomimus: '17?' T-Rex: 'Gasp! That's the number I was thinking of!' Dromiceiomimus: 'Great! Do I win something?' T-Rex: 'You just came out on a one in a hundred thousand million chance and you want a prize? It's not enough to spit in the face of probability itself?' Utahraptor: 'It's not THAT unlikely she'd choose your number. We're actually pretty bad at random number generation and if you ask folks to pick a number in a range, some choices show up more often than others. It's not that unlikely you'd both land on the same number!' T-Rex: 'But *I* didn't choose 17 randomly! It's ... the number of times I have thought about ice cream today, I'm not even gonna lie.'
Ryan North’s Dinosaur Comics for the 17th of March, 2020. Essays that mention something brought up in Dinosaur Comics are gathered at this link.

Ryan North’s Dinosaur Comics for the 17th is one about people asked to summon random numbers. Utahraptor is absolutely right. People are terrible at calling out random numbers. We’re more likely to summon odd numbers than we should be. We shy away from strings of digits that look like patterns. We’d feel weird offering, say, 1234, though that’s as good a four-digit number as 1753. And to offer 2222 would feel really weird. Part of this is that there’s not really such a thing as “a” random number; it’s sequences of numbers that are random. We just pick a number from a random sequence. And we’re terrible at producing random sequences. Here’s one study, challenging people to produce digits from 1 through 9. Are their sequences predictable? If the numbers were uniformly distributed from 1 through 9, then any prediction of the next digit in a sequence should have a one chance in nine of being right. It turns out human-generated sequences form patterns that could be forecast, on average, 27% of the time. Some individuals’ sequences could be forecast 45% of the time.
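Here is a toy version of that forecasting idea, not the study's actual method: guess each digit from the one before it, using whatever bigram counts have been seen so far. On genuinely uniform random digits this should hover near the one-in-nine baseline; a human-typed sequence ought to score higher.

```python
import random
from collections import defaultdict

def predictability(sequence):
    """Guess each digit from the one before it, learning bigram counts as we go.

    Returns the fraction of correct guesses. On uniform random digits this
    should land near the 1-in-9 baseline, about 0.11.
    """
    counts = defaultdict(lambda: defaultdict(int))
    hits = 0
    for prev, nxt in zip(sequence, sequence[1:]):
        seen = counts[prev]
        if seen and nxt == max(seen, key=seen.get):
            hits += 1
        seen[nxt] += 1
    return hits / (len(sequence) - 1)

uniform = [random.randint(1, 9) for _ in range(5000)]
print(predictability(uniform))   # close to 0.11; try it on digits you type yourself
```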

There are some neat side results from that study too, particularly that they were able to pretty reliably tell the difference between two individuals by their “random” sequences. We may be bad at thinking up random numbers but the details of how we’re bad can be unique.


And I’m not done yet. There’s some more comic strips from last week to discuss and I’ll have that post here soon. Thanks for reading.

Reading the Comics, September 24, 2016: Infinities Happen Edition


I admit it’s a weak theme. But two of the comics this week give me reason to talk about infinitely large things and how the fact of being infinitely large affects the probability of something happening. That’s enough for a mid-September week of comics.

Kieran Meehan’s Pros and Cons for the 18th of September is a lottery problem. There’s a fun bit of mathematical philosophy behind it. Supposing that a lottery runs long enough without changing its rules, and that it does draw its numbers randomly, it does seem to follow that any valid set of numbers will come up eventually. At least, the probability is 1 that the pre-selected set of numbers will come up if the lottery runs long enough. But that doesn’t mean it’s assured. There’s not any law, physical or logical, compelling every set of numbers to come up. But that is exactly akin to tossing a coin fairly infinitely many times and having it come up tails every single time. There’s no reason that can’t happen, but it can’t happen.
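The "probability 1 but not assured" point can be put in numbers. A minimal sketch, with made-up odds of one in a million per drawing for the chosen combination: the chance that the numbers have come up at least once creeps toward 1 as the drawings pile up, without that ever being a guarantee for any particular history.

```python
p = 1e-6    # made-up odds: the chosen combination wins any one drawing

for drawings in (10**6, 10**7, 10**8):
    at_least_once = 1.0 - (1.0 - p) ** drawings
    print(f"{drawings:>9} drawings: chance the numbers have come up = {at_least_once:.6f}")
```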

'It's true, Dr Peel. I'm a bit of a psychic.' 'Would you share the winning lottery numbers with me?' '1, 10, 17, 39, 43, and 47'. 'Those are the winning lottery numbers?' 'Yes!' 'For this Tuesday?' 'Ah! That's where it gets a bit fuzzy.'
Kieran Meehan’s Pros and Cons for the 18th of September, 2016. I can’t say whether any of these are supposed to be the PowerBall number. (The comic strip’s title is a revision of its original, which more precisely described its gimmick but was harder to remember: A Lawyer, A Doctor, and a Cop.)

Leigh Rubin’s Rubes for the 19th name-drops chaos theory. It’s wordplay, as of course it is, since the mathematical chaos isn’t the confusion-and-panicky-disorder of the colloquial term. Mathematical chaos is about the bizarre idea that a system can follow exactly perfectly known rules, and yet still be impossible to predict. Henri Poincaré brought this disturbing possibility to mathematicians’ attention in the 1890s, in studying the question of whether the solar system is stable. But it lay mostly fallow until the 1960s when computers made it easy to work this out numerically and really see chaos unfold. The mathematician type in the drawing evokes Einstein without being too close to him, to my eye.

Allison Barrows’s PreTeena rerun of the 20th shows some motivated calculations. It’s always fun to see people getting excited over what a little multiplication can do. Multiplying a little change by a lot of chances is one of the ways to understanding integral calculus, and there’s much that’s thrilling in that. But cutting four hours a night of sleep is not a little thing and I wouldn’t advise it for anyone.

Jason Poland’s Robbie and Bobby for the 20th riffs on Jorge Luis Borges’s Library of Babel. It’s a great image, the idea of the library containing every book possible. And it’s good mathematics also; it’s a good way to probe one’s understanding of infinity and of probability. Probably logic, also. After all, grant that the index to the Library of Babel is a book, and therefore in the library somehow. How do you know you’ve found the index that hasn’t got any errors in it?

Ernie Bushmiller’s Nancy Classics for the 21st originally ran the 21st of September, 1949. It’s another example of arithmetic as a proof of intelligence. Routine example, although it’s crafted with the usual Bushmiller precision. Even the close-up, peering-into-your-soul image of Professor Stroodle in the second panel serves the joke; without it the stress on his wrinkled brow would be diffused. I can’t fault anyone not caring for the joke; it’s not much of one. But wow is the comic strip optimized to deliver it.

Thom Bluemel’s Birdbrains for the 23rd is also a mathematics-as-proof-of-intelligence strip, although this one name-drops calculus. It’s also a strip that probably would have played better had it come out before Blackfish got people asking unhappy questions about Sea World and other aquariums keeping large, deep-ocean animals. I would’ve thought Comic Strip Master Command would have sent an advisory out on the topic.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 23rd is, among other things, a guide for explaining the difference between speed and velocity. Speed’s a simple number, a scalar in the parlance. Velocity is (most often) a two- or three-dimensional vector, a speed in some particular direction. This has implications for understanding how things move, such as pedestrians.
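A minimal illustration of the difference, with numbers of my own choosing: the velocity keeps the direction, the speed throws it away.

```python
import math

velocity = (3.0, -4.0)            # metres per second: 3 east, 4 south
speed = math.hypot(*velocity)     # the scalar left over once direction is dropped
print(speed)                      # 5.0
```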

Reading the Comics, February 17, 2016: Using Mathematics Edition


Is there a unifying theme between many of the syndicated comic strips with mathematical themes the last few days? Of course there is. It’s students giving snarky answers to their teachers’ questions. That’s the theme every week. But other stuff comes up.

Joe Martin’s Boffo for the 12th of February depicts “the early days before all the bugs were worked out” of mathematics. And the early figure got a whole string of operations which don’t actually respect the equals sign, before getting finally to the end. Were I to do this, I would use an arrow, =>, and I suspect many mathematicians would too. It’s a way of indicating the flow of one’s thoughts without trying to assert that 2+2 is actually the same number as 1 + 1 + 1 + 1 + 6.

Mathematics, the early days before all the bugs were worked out: 2 + 2 = 1 + 1 + 1 + 1 = 10 + 5 + 5 = 20 x 5 = 100 / 4 = 25 x 7 + 5 = 180 x 2 = 360 / 9 = 40 - 15 = 25 + 1 + 10 = 36 / 9 = 4
Joe Martin’s Boffo for the 12th of February, 2016. The link will likely expire in mid-March.

And this comic is funny, in part, because it’s true. New mathematical discoveries tend to be somewhat complicated, sloppy messes to start. Over time, if the thing is of any use, the mathematical construct gets better. By better I mean the logic behind it gets better explained. You’d expect that, of course, just because time to reflect gives time to improve exposition. But the logic also tends to get better. We tend to find arguments that are, if not shorter, then better-constructed. We get to see how something gets used, and how to relate it to other things we’d like to do, and how to generalize the pieces of argument that go into it. If we think of a mathematical argument as a narrative, then, we learn how to write the better narrative.

Then, too, we get better at notation, at isolating what concepts we want to describe and how to describe them. For example, to write the fourth power of a number such as ‘x’, mathematicians used to write ‘xxxx’ — fair enough, but cumbersome. Or then xqq — the ‘q’ standing for quadratic, that is, square, of the thing before. That’s better. At least it’s less stuff to write. How about “xiiii” (as in the Roman numeral IV)? Getting to “x⁴” took time, and thought, and practice with what we wanted to raise numbers to powers to do. In short, we had to get the bugs worked out.

John Rose’s Barney Google and Snuffy Smith for the 12th of February is your normal student-resisting-word-problems joke. And hey, at least they have train service still in Smith’s hometown.

'If it's a hunnert miles to th' city an' a train is travelin' thurty miles an hour is due t'arrive at 5:00 pm --- what time does th' train leave Hootin' Holler, Jughaid?' 'I dunno, Miz Prunelly, but you better go now jest t'be on th' safe side!!'
John Rose’s Barney Google and Snuffy Smith for the 12th of February, 2016.

Randy Glasbergen’s Glasbergen Cartoons for the 12th (a rerun; Glasbergen died last year) is a similar student-resisting-problems joke. Arithmetic gets an appearance no doubt because it’s the easiest kind of problem to put on the board and not distract from the actual joke.

Mark Pett’s Lucky Cow for the 14th (a rerun from the early 2000s) mentions the chaos butterfly. I am considering retiring chaos butterfly mentions from these roundups because I seem to say the same thing each time. But I haven’t yet, so I’ll say it. Part of what makes a system chaotic is that it’s deterministic and unpredictable. Wildly different outcomes result from starting points so similar they can’t be told apart. There’s no guessing whether any action makes things better or worse, and whether that’s in the short or the long term.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 14th is surely not a response to that Pearls Before Swine from last time. I believe all the Saturday Morning Breakfast Cereal strips to appear on Gocomics are reruns from its earlier days as a web comic. But it serves as a riposte to the “nobody uses mathematics anyway” charge. And it’s a fine bit of revenge fantasy.

Historically, being the sole party that understands the financial calculations has not brought money lenders appreciation.

Tony Cochran’s Agnes for the 17th also can’t be a response to that Pearls Before Swine. The lead times just don’t work that way. But it gives another great reason to learn mathematics. I encourage anyone who wants to be Lord and Queen of Mathdom; it’s worth a try.

Tom Thaves’s Frank and Ernest for the 17th tells one of the obvious jokes about infinite sets. Fortunately mathematicians aren’t expected to list everything that goes into an infinitely large set. It would put a terrible strain on our wrists. Usually it’s enough to describe the things that go in it. Some descriptions are easy, especially if there’s a way to match the set with something already familiar, like counting numbers or real numbers. And sometimes a description has to be complicated.
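In code the same idea shows up as a membership rule rather than a list. A small sketch, with the even counting numbers as my example set: we can never write the whole set down, but we can say exactly what belongs to it.

```python
def in_even_counting_numbers(n):
    """Membership rule for a set we could never finish listing."""
    return isinstance(n, int) and n > 0 and n % 2 == 0

print(in_even_counting_numbers(10**100))   # True, no listing required
print(in_even_counting_numbers(7))         # False
```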

There are urban legends among grad students. Many of them are thesis nightmares. One is about such sets. The story goes of the student who had worked for years on a set whose elements all had some interesting collection of properties. At the defense her advisor — the person who’s supposed to have guided her through finding and addressing an interesting problem — actually looks at the student’s work for the first time in ages, or ever. And starts drawing conclusions from it. And proves that the only set whose elements all have these properties is the null set, which hasn’t got anything in it. The whole thesis is a bust. Thaves probably didn’t have that legend in mind. But you could read the comic that way.

Percy Crosby’s Skippy for the 17th gives a hint how long kids in comic strips have been giving smart answers to teachers. This installment’s from 1928 sometime. Skippy’s pretty confident in himself, it must be said.

Reading the Comics, January 15, 2015: Electric Brains and Klein Bottles Edition


I admit I don’t always find a theme running through Comic Strip Master Command’s latest set of mathematically-themed comics. The edition names are mostly so that I can tell them apart when I see a couple listed in the Popular Posts roundup anyway.

Little Iodine and her parents see an electronic brain capable of solving any problem; her father offers 'square root of 7,921 x^2 y^2'. It gets it correct, 89 xy. Little Iodine, inspired, makes her own. 'Where are you getting all the money for those ice cream cones and stuff?' her father demands. 'I made a 'lectric brain --- the kids pay me a nickel when they got homework --- here --- give me a problem.' He offers 9 times 16. The electric brain writes out 'Dere teecher, Plees xcuse my chidl for not doing his homwork'. 'And then these letters come out --- the kid gives it to the teacher and everything's okey --- '
Jimmy Hatlo’s Little Iodine for the 12th of January, 2016. Originally run the 7th of November, 1954.

Jimmy Hatlo’s Little Iodine is a vintage comic strip from the 1950s. It strikes me as an unlicensed adaptation of Baby Snooks, but that’s not something for me to worry about. The particular strip, originally from the 7th of November, 1954 (and just run the 12th of January this year) interests me for its ancient views of computers. It’s from the days they were called “electric brains”. I’m also impressed that the machine on display early on is able to work out the “square root of 7921 x² y²”. The square root of 7921 is no great feat. Being able to work with the symbols of x and y without knowing what they stand for, though, does impress me. I’m not sure there were computers which could handle that sort of symbolic manipulation in 1954. That sort of ability to work with a quantity by name rather than value is what we would buy Mathematica for, if we could afford it. It’s also at least a bit impressive that someone knows the square of 89 offhand. All told, I think this is my favorite of this essay’s set of strips. But it’s a weak field considering none of them are “students giving a snarky reply to a homework/exam/blackboard question”.
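That sort of symbolic work is commonplace now, and free software does it. Here is a minimal sketch using SymPy, my choice of tool standing in for the Mathematica mentioned above; telling it that x and y are positive lets the square root come out cleanly.

```python
import sympy

x, y = sympy.symbols('x y', positive=True)
expression = sympy.sqrt(7921 * x**2 * y**2)

print(sympy.simplify(expression))   # 89*x*y, matching the electric brain
```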

Joe Martin’s Willy and Ethel for the 13th of January is a percentages joke. Some might fault it for talking about people giving 110 percent, but of course, what is “100 percent”? If it’s the standard amount of work being done then it does seem like ten people giving 110 percent gets the job done as quickly as eleven people doing 100 percent. If work worked like that.

Willy asks his kid: 'OK, here's a question of my own that involves math and principles. If you're on an 11 man crew and 10 of them are giving 110%, do you have to show up for work?'
Joe Martin’s Willy and Ethel for the 13th of January, 2016. The link will likely expire in mid-February.

Steve Sicula’s Home and Away for the 13th (a rerun from the 8th of October, 2004) gives a wrongheaded application of a decent principle. The principle is that of taking several data points and averaging their value. The problem with data is that it’s often got errors in it. Something weird happened and it doesn’t represent what it’s supposed to. Or it doesn’t represent it well. By averaging several data points together we can minimize the influence of a fluke reading. Or if we’re measuring something that changes in time, we might use a running average of the last several sampled values. In this way a short-term spike or a meaningless flutter will be minimized. We can avoid wasting time reacting to something that doesn’t matter. (The cost of this, though, is that if a trend is developing we will notice it later than we otherwise would.) Still, sometimes a data point is obviously wrong.
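Here is a minimal sketch of the running-average idea, with a window length and readings that are entirely made up: the fluke gets flattened, at the cost of the smoothed values lagging behind any genuine trend.

```python
def running_average(readings, window=5):
    """Average each reading with the previous few, smoothing out flukes."""
    smoothed = []
    for i in range(len(readings)):
        recent = readings[max(0, i - window + 1):i + 1]
        smoothed.append(sum(recent) / len(recent))
    return smoothed

readings = [70, 71, 70, 72, 71, 98, 71, 70, 72, 71]   # one fluke reading of 98
print(running_average(readings))
```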

Zach Weinersmith’s Saturday Morning Breakfast Cereal wanted my attention, and so on the 13th it did a joke about Zeno’s Paradox. There are actually four classic Zeno’s Paradoxes, although the one riffed on here I think is the most popular. This one — the idea that you can’t finish something (leaving a room is the most common form) because you have to get halfway done, and have to get halfway to being halfway done, and halfway to halfway to halfway to being done — is often resolved by people saying that Zeno just didn’t understand that an infinite series could converge. That is, that you can add together infinitely many numbers and get a finite number. I’m inclined to think Zeno did not, somehow, think it was impossible to leave rooms. What the paradoxes as a whole get to are questions about space and time: they’re either infinitely divisible or they’re not. And either way produces effects that don’t seem to quite match our intuitions.
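The series in question is the one summing halves, 1/2 + 1/4 + 1/8 and so on. A few lines of Python show the partial sums closing in on 1, which is the "infinite series can converge" point in miniature:

```python
partial_sum = 0.0
for n in range(1, 21):
    partial_sum += 0.5 ** n
    if n % 5 == 0:
        print(f"after {n:2d} terms: {partial_sum:.7f}")
# the partial sums approach 1, which is why the infinitely divided walk covers a finite distance
```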

The next day Saturday Morning Breakfast Cereal does a joke about Klein bottles. These are famous topological constructs. At least they’re famous in the kinds of places people talk about topological constructs. It’s much like the Möbius strip, a ribbon given a half-twist and then joined end to end. The Klein bottle similarly you can imagine as a cylinder stretched out into the fourth dimension, given a twist, then joined back to itself. We can’t really do this, what with it being difficult to craft four-dimensional objects. But we can imagine this, and it creates an object that doesn’t have a boundary, and has only one side. There’s not an inside or an outside. There’s no making this in the real world, but we can make nice-looking approximations, usually as bottles.

Ruben Bolling’s Super-Fun-Pak Comix for the 13th of January is an extreme installment of Chaos Butterfly. The trouble with touching Chaos Butterfly to cause disasters is that you don’t know — you can’t know — what would have happened had you not touched the butterfly. You change your luck, but there’s no way to tell whether for the better or worse. One of the commenters at Gocomics.com alludes to this problem.

Jon Rosenberg’s Scenes From A Multiverse for the 13th of January makes quite literal the quantum-mechanics talk of probability waves and quantum foam and the like. The wave formulation of quantum mechanics, the most popular and accessible one, describes what’s going on in equations that look much like the equations for things diffusing into space. And quantum mechanical problems are often solved by supposing that the probability distribution we’re interested in can be broken up into a series of sinusoidal waves. Representing a complex function as a set of waves is a common trick, not just in quantum mechanics, because it works so well so often. Sinusoidal waves behave in nice, predictable ways for most differential equations. So converting a hard differential equation problem into a long string of relatively easy differential equation problems is usually a good trade.
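That same break-it-into-sinusoids trick is ordinary Fourier analysis, and it is easy to watch it work. A minimal sketch with NumPy, my choice of tool: build a signal from two sine waves and let the discrete Fourier transform find them again.

```python
import numpy as np

samples = 1024
t = np.linspace(0.0, 1.0, samples, endpoint=False)      # one second of evenly spaced samples
signal = 2.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

amplitudes = np.abs(np.fft.rfft(signal)) / (samples / 2)
frequencies = np.fft.rfftfreq(samples, d=1.0 / samples)

for f, a in zip(frequencies, amplitudes):
    if a > 0.1:
        print(f"frequency {f:.0f} Hz, amplitude {a:.2f}")
# recovers the 5 Hz and 40 Hz sinusoids the signal was built from
```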

Tom Thaves’s Frank and Ernest for the 14th of January ties together the baffling worlds of grammar and negative numbers. It puts Frank and Ernest on panel with Euclid, who’s a fair enough choice to represent the foundation of (western) mathematics. He’s famous for the geometry we now call Euclidean. That’s the common everyday kind of blackboards and tabletops and solid cubes and spheres. But among his writings are compilations of arithmetic, as understood at the time. So if we know anyone in Ancient Greece to have credentials to talk about negative numbers it’s him. But the choice of Euclid traps the panel into an anachronism: the Ancient Greeks just didn’t think of negative numbers. They could work through “a lack of things” or “a shortage of something”, but a negative? That’s a later innovation. But it’s hard to think of a good rewriting of the joke. You might have Isaac Newton be consulted, but Newton makes normal people think of gravity and physics, confounding the mathematics joke. There’s a similar problem with Albert Einstein. Leibniz or Gauss should be good, but I suspect they’re not the household names that even Euclid is. And if we have to go “less famous mathematician than Gauss” we’re in real trouble. (No, not Andrew Wiles. Normal people know him as “the guy that proved Fermat’s thing”, and that’s too many words to fit on panel.) Perhaps the joke can’t be made to read cleanly and make good historic sense.