My All 2020 Mathematics A to Z: Butterfly Effect

It’s a fun topic today, one suggested by Jacob Siehler, who I think is one of the people I met through Mathstodon. Mathstodon is a mathematics-themed instance of Mastodon, an open-source microblogging system. You can read its public messages here.

Butterfly Effect.

I take the short walk from my home to the Red Cedar River, and I pour a cup of water in. What happens next? To the water, anyway. Me, I think about walking all the way back home with this empty cup.

Let me have some simplifying assumptions. Pretend the cup of water remains somehow identifiable. That it doesn’t evaporate or dissolve into the riverbed. That it isn’t scooped up by a city or factory, drunk by an animal, or absorbed into a plant’s roots. That it doesn’t meet any interesting ions that turn it into other chemicals. It just goes as the river flows dictate. The Red Cedar River merges into the Grand River. This then moves west, emptying into Lake Michigan. Water from that eventually passes the Straits of Mackinac into Lake Huron. Through the St Clair River it goes to Lake Saint Clair, the Detroit River, Lake Erie, the Niagara River, the Niagara Falls, and Lake Ontario. Then into the Saint Lawrence River, then the Gulf of Saint Lawrence, before joining finally the North Atlantic.

If I pour in a second cup of water, somewhere else on the Red Cedar River, it has a similar journey. The details are different, but the course does not change. Grand River to Lake Michigan to three more Great Lakes to the Saint Lawrence to the North Atlantic Ocean. If I wish to know when my water passes the Mackinac Bridge I have a difficult problem. If I just wish to know what its future is, the problem is easy.

So now you understand dynamical systems. There are some details to learn before you get a job, yes. But this is the perspective that explains what people in the field do, and why they do it. Dynamical systems are, largely, physics problems. They are about collections of things that interact according to some known potential energy. They may interact with each other. They may interact with the environment. We expect that where these things are changes in time. These changes are determined by the potential energies; there’s nothing random in it. Start a system from the same point twice and it will do the exact same thing twice.

We can describe the system as a set of coordinates. For a normal physics system the coordinates are the positions and momenta of everything that can move. If the potential energy’s rule changes with time, we probably have to include the time and the energy of the system as more coordinates. This collection of coordinates, describing the system at any moment, is a point. The point is somewhere inside phase space, which is an abstract idea, yes. But the geometry we know from the space we walk around in tells us things about phase space, too.

Imagine tracking my cup of water through its journey in the Red Cedar River. It draws out a thread, running from somewhere near my house into the Grand River and Lake Michigan and on. This great thin thread that I finally lose interest in when it flows into the Atlantic Ocean.

A dynamical system’s point in phase space acts much the same. As the system changes in time, the coordinates of its parts change, or we expect them to. So “the point representing the system” moves. Where it moves depends on the potentials around it, the same way my cup of water moves according to the flow around it. “The point representing the system” traces out a thread, called a trajectory. The whole history of the system is somewhere on that thread.

Phase space, like a map, has regions. For my cup of water there’s a region that represents “is in Lake Michigan”. There’s another that represents “is going over Niagara Falls”. There’s one that represents “is stuck in Sandusky Bay a while”. When we study dynamical systems we are often interested in what these regions are, and what the boundaries between them are. Then a glance at where the point representing a system is tells us what it is doing. If the system represents a satellite orbiting a planet, we can tell whether it’s in a stable orbit, about to crash into a moon, or about to escape to interplanetary space. If the system represents weather, we can say it’s calm or stormy. If the system is a rigid pendulum — a favorite system to study, because we can draw its phase space on the blackboard — we can say whether the pendulum rocks back and forth or spins wildly.
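Those pendulum regions are concrete enough to compute. Here is a minimal Python sketch (my own illustration; the function name and the frictionless, rigid-pendulum assumptions are mine). For such a pendulum the total energy is conserved, so one glance at the phase-space point, a (theta, omega) pair, tells us which region we are in:

```python
import math

def pendulum_behavior(theta, omega, g_over_l=9.81):
    """Classify a frictionless rigid pendulum from one phase-space point.

    The conserved energy per unit (mass * length^2) is
        E = omega**2 / 2 - (g/L) * cos(theta).
    If E exceeds the potential energy of the inverted position, g/L,
    the pendulum can swing over the top: it spins. Otherwise it rocks.
    """
    energy = 0.5 * omega**2 - g_over_l * math.cos(theta)
    return "spins" if energy > g_over_l else "rocks"
```

A small displacement from rest rocks back and forth; a hard spin from the bottom goes over the top. The border between the two regions is the curve where the energy exactly equals g/L.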

Come back to my second cup of water, the one with a different history. It has a different thread from the first. So, too, a dynamical system started from a different point traces out a different trajectory. To find a trajectory is, normally, to solve differential equations. This is often useful to do. But from the dynamical systems perspective we’re usually interested in other issues.

For example: when I pour my cup of water in, does it stay together? The cup of water started all quite close together. But the different drops of water inside the cup? They’ve all had their own slightly different trajectories. So if I went with a bucket, one second later, trying to scoop it all up, likely I’d succeed. A minute later? … Possibly. An hour later? A day later?

By then I can’t gather it back up, practically speaking, because the water’s gotten all spread out across the Grand River. Possibly Lake Michigan. If I knew the flow of the river perfectly and knew well enough where I dropped the water in? I could predict where each drop goes, and catch each molecule of water right before it falls over Niagara. This is tedious but unsurprising: if you start from different spots — as the first and the last drop of my cup do — you expect, eventually, to go to different places. They all end up in the North Atlantic anyway.

Except … well, there is the Chicago Sanitary and Ship Canal. It connects the Chicago River to the Des Plaines River. The result is that some of Lake Michigan drains to the Illinois River, and from there the Mississippi River, and the Gulf of Mexico. There are also some canals in Ohio which connect Lake Erie to the Ohio River. I don’t know offhand of ones in Indiana or Wisconsin bringing Great Lakes water to the Mississippi. I assume there are, though.

Then, too, there is the Erie Canal, and the other canals of the New York State Canal System. These link the Niagara River and Lake Erie and Lake Ontario to the Hudson River. The Pennsylvania Canal System, too, links Lake Erie to the Delaware River. The Delaware and the Hudson may bring my water to the mid-Atlantic. I don’t know the canal systems of Ontario well enough to say whether some water goes to Hudson Bay; I’d grant that’s possible, though.

Think of my poor cups of water, now. I had been sure their fate was the North Atlantic. But if they happen to be in the right spot? They visit my old home off the Jersey Shore. Or they flow through Louisiana and warmer weather. What is their fate?

I will have butterflies in here soon.

Imagine two adjacent drops of water, one about to be pulled into the Chicago River and one with Lake Huron in its future. There is almost no difference in their current states. Their destinies are wildly separate, though. It’s surprising that so small a difference matters. Thinking through the surprise, it’s fair that this can happen, even for a deterministic system. It happens that there is a border, separating those bound for the Gulf and those for the North Atlantic, between these drops.

But how did those water drops get there? Where were they an hour before? … Somewhere else, yes. But still, on opposite sides of the border between “Gulf of Mexico water” and “North Atlantic water”. A day before, the drops were somewhere else yet, and the border was still between them. This separation goes back even to when the two drops were in my cup of water. Within the Red Cedar River is a border between a destiny of flowing past Quebec and of flowing past Saint Louis. And between flowing past Quebec and flowing past Syracuse. Between Syracuse and Philadelphia.

How far apart are those borders in the Red Cedar River? If you’ll go along with my assumptions, smaller than my cup of water. Not that I have the cup in a special location. The borders between all these fates are, probably, a complicated spaghetti-tangle. Anywhere along the river would be as fortunate. But what happens if the borders are separated by a space smaller than a drop? Well, a “drop” is a vague size. What if the borders are separated by a width smaller than a water molecule? There are surely no subtleties in defining the “size” of a molecule.

That these borders are so close does not make the system random. It is still deterministic. Put a drop of water on this side of the border and it will go to this fate. But how do we know which side of the line the drop is on? If I toss this new cup out to the left rather than the right, does that matter? If my pinky twitches during the toss? If I am breathing in rather than out? What if a change too small to measure puts the drop on the other side?

And here we have the butterfly effect. It is about how a difference too small to observe has an effect too large to ignore. It is not about a system being random. It is about how we cannot know the system well enough for its predictability to tell us anything.

The term comes from the modern study of chaotic systems. One of the first topics in which the chaos was noticed, numerically, was weather simulations. The difference between a number’s representation in the computer’s memory and its rounded-off printout was noticeable. Edward Lorenz posed it aptly in 1963, saying that “one flap of a sea gull’s wings would be enough to alter the course of the weather forever”. Over the next few years this changed to a butterfly. In 1972 Philip Merilees titled a talk Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas? My impression is that these days the butterflies may be anywhere, and they alter hurricanes.

That we settle on butterflies as agents of chaos we can likely credit to their image. They seem to be innocent things so slight they barely exist. Hummingbirds probably move with too much obvious determination to fit the role. The Big Bad Wolf huffing and puffing would realistically be almost as nothing as a butterfly. But he has the power of myth to make him seem mightier than the storms. There are other happy accidents supporting butterflies, though. Edward Lorenz’s 1960s weather model makes trajectories that, plotted, create two great ellipsoids. The figures look like butterflies, all different but part of the same family. And there is Ray Bradbury’s classic short story, A Sound Of Thunder. If you don’t remember 7th grade English class, in the story time-travelling idiots change history, putting a fascist with terrible spelling in charge of a dystopian world, by stepping on a butterfly.
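Lorenz’s model is a system of three ordinary differential equations, with the now-classic parameters sigma = 10, rho = 28, beta = 8/3. Here is a rough sketch of tracing one trajectory through its phase space (forward Euler stepping, which is crude but shows the structure; plot the x and z coordinates of the points and the two butterfly-wing lobes appear):

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of Lorenz's 1963 system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def lorenz_trajectory(state, steps):
    """The thread traced out in phase space from one starting point."""
    points = [state]
    for _ in range(steps):
        points.append(lorenz_step(points[-1]))
    return points

points = lorenz_trajectory((1.0, 1.0, 1.0), 2000)
```

Each point is the whole state of the system at one instant; the list of points is the trajectory, the thread our cup of water draws through the river.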

The butterfly then is metonymy for all the things too small to notice. Butterflies, sea gulls, turning the ceiling fan on in the wrong direction, prying open the living room window so there’s now a cross-breeze. They can matter, we learn.

My 2018 Mathematics A To Z: Randomness

Today’s topic is an always rich one. It was suggested by aajohannas, who so far as I know hasn’t got an active blog or other project. If I’m mistaken please let me know. I’m glad to mention the creative works of people hanging around my blog.

Randomness.

An old Sydney Harris cartoon I probably won’t be able to find a copy of before this publishes. A couple people gather around an old fanfold-paper printer. On the printout is the sequence “1 … 2 … 3 … 4 … 5 … ” The caption: ‘Bizarre sequence of computer-generated random numbers’.

Randomness feels familiar. It feels knowable. It means surprise, unpredictability. The upending of patterns. The obliteration of structure. I imagine there are sociologists who’d say it’s what defines Modernity. It’s hard to avoid noticing that the first great scientific theories that embrace unpredictability — evolution and thermodynamics — came to public awareness at the same time impressionism came to arts, and the subconscious mind came to psychology. It’s grown since then. Quantum mechanics is built on unpredictable specifics. Chaos theory tells us even if we could predict statistics it would do us no good. Randomness feels familiar, even necessary. Even desirable. A certain type of nerd thinks eagerly of the Singularity, the point past which no social interactions are predictable anymore. We live in randomness.

And yet … it is hard to find randomness. At least to be sure we have found it. We might choose between options we find ambivalent by tossing a coin. This seems random. But anyone who was six years old and trying to cheat a sibling knows ways around that. Drop the coin without spinning it, from a half-inch above the table, and you know the outcome, all the way through to the sibling’s punching you. When we’re older and can be made to be better sports we’re fairer about it. We toss the coin and give it a spin. There’s no way we could predict the outcome. Unless we knew just how strong a toss we gave it, and how fast it spun, and how the mass of the coin was distributed. … Really, if we knew enough, our tossed coin would be as predictable as the coin we dropped as a six-year-old. At least unless we tossed in some chaotic way, where each throw would be deterministic, but we couldn’t usefully make a prediction.

Our instinctive idea of what randomness must be is flawed. That shouldn’t surprise. Our instinctive idea of anything is flawed. But randomness gives us trouble. It’s obvious, for example, that randomly selected things should have no pattern. But then how is that reasonable? If we draw letters from the alphabet at random, we should expect sometimes to get some cute pattern like ‘aaaaa’ or ‘qwertyuiop’ or the works of Shakespeare. Perhaps we mean we shouldn’t get patterns any more often than we would expect. All right; how often is that?

We can make tests. Some of them are obvious. Take something that generates possibly-random results. Look up how probable each of those outcomes is. Then run off a bunch of outcomes. Do we get about as many of each result as we should expect? Probability tells us we should get as close as we like to the expected frequency if we let the random process run long enough. If this doesn’t happen, great! We can conclude we don’t really have something random.
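A minimal version of that first test, in Python, assuming for simplicity that all outcomes are meant to be equally likely (the function and its tolerance are my own illustrative choices; real test batteries are far more demanding):

```python
import random
from collections import Counter

def frequency_check(draw, outcomes, trials=60000, tolerance=0.02):
    """Crude frequency test: each outcome should appear about equally often.

    `draw` is a zero-argument function giving one result per call.
    """
    counts = Counter(draw() for _ in range(trials))
    expected = 1.0 / len(outcomes)
    return all(abs(counts[o] / trials - expected) <= tolerance
               for o in outcomes)

rng = random.Random(42)
fair = frequency_check(lambda: rng.randint(1, 6), range(1, 7))
rigged = frequency_check(lambda: 1, range(1, 7))  # always "rolls" a 1
```

The fair die passes; the generator that always says 1 fails, and we conclude it isn’t random.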

We can do more tests. Some of them are brilliantly clever. Suppose there’s a way to order the results. If the results are numbers — and mathematicians usually want numbers — putting them in order is easy. If they’re not, there’s usually a way to match results to numbers. You’ll see me slide here into talking about random numbers as though that were the same as random results. But if I can distinguish different outcomes, then I can label them. If I can label them, I can use numbers as labels. If the order of the numbers doesn’t matter — should “red” be a 1 or a 2? Should “green” be a 3 or an 8? — then, fine; any order is good.

There are 120 ways to order five distinct things. So generate lots of sets of, say, five numbers. What order are they in? There are 120 possibilities. Does each of the possibilities turn up as often as expected? If they don’t, great! We can conclude we don’t really have something random.
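A sketch of that ordering test (my own small implementation; with five numbers per run there are 5! = 120 possible orderings, each expected equally often from a good generator):

```python
import random
from collections import Counter

def order_pattern(values):
    """The relative ordering of distinct values: (0.9, 0.1, 0.5) -> (2, 0, 1)."""
    indices = sorted(range(len(values)), key=values.__getitem__)
    pattern = [0] * len(values)
    for rank, index in enumerate(indices):
        pattern[index] = rank
    return tuple(pattern)

def ordering_counts(rng, run_length=5, runs=120000):
    """Tally how often each possible ordering of a run appears."""
    return Counter(
        order_pattern([rng.random() for _ in range(run_length)])
        for _ in range(runs)
    )

counts = ordering_counts(random.Random(1))
# 120 possible orderings, each expected about 1000 times out of 120000.
```

If any ordering turned up far more or less often than a thousand times, we would conclude, again, that we don’t really have something random.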

I can go on. There are many tests which will let us say something isn’t a truly random sequence. They’ll allow for something like Sydney Harris’s peculiar sequence of random numbers. Mostly by supposing that if we let it run long enough the counting-up would stop. But these all rule out random number generators. Do we have any that rule them in? That say yes, this generates randomness?

I don’t know of any. I suspect there can’t be any, on the grounds that a test of a thousand or a thousand million or a thousand million quadrillion numbers can’t assure us the generator won’t break down next time we use it. If we knew the algorithm by which the random numbers were generated — oh, but there we’re foiled before we can start. An algorithm is the instructions of how to do a thing. How can an instruction tell us how to do a thing that can’t be predicted?

Algorithms seem, briefly, to offer a way to tell whether we do have a good random sequence, though. We can describe patterns. A strong pattern is easy to describe, the way a familiar story is easy to reference. A weak pattern, a random one, is hard to describe. It’s like a dream, in which you can just list events. So we can call random something which can’t be described any more efficiently than just giving a list of all the results. But how do we know that can’t be done? 7, 7, 2, 4, 5, 3, 8, 5, 0, 9 looks like a pretty good set of digits, whole numbers from 0 through 9. I’ll bet not more than one in ten of you guesses correctly what the next digit in the sequence is. Unless you’ve noticed that these are the digits in the square root of π, so that the next couple digits have to be 0, 5, 5, and 1.
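You can check those square-root-of-π digits with integer arithmetic alone. This sketch hardcodes π to 40 digits, so it can’t go past 20 digits of the root; the function name is my own:

```python
from math import isqrt

# pi to 40 significant digits, written out so no library constant is needed
PI_DIGITS = "3141592653589793238462643383279502884197"

def sqrt_pi_digits(n):
    """First n decimal digits of sqrt(pi), for n up to 20.

    floor(pi * 10**(2n - 2)) is just the first 2n - 1 digits of pi,
    and its integer square root is floor(sqrt(pi) * 10**(n - 1)).
    """
    scaled_pi = int(PI_DIGITS[:2 * n - 1])
    return str(isqrt(scaled_pi))
```

Asking for fifteen digits gives ‘177245385090551’: the leading 1, then the ten digits above, then the 0, 5, 5, 1 that follow.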

We know, on theoretical grounds, that we have randomness all around us. Quantum mechanics depends on it. If we need truly random numbers we can set a sensor. It will turn the arrival of cosmic rays, or the decay of radioactive atoms, or the sighing of a material flexing in the heat into numbers. We trust we gather these and process them in a way that doesn’t spoil their unpredictability. To what end?

That is, why do we care about randomness? Especially why should mathematicians care? The image of mathematics is that it is a series of logical deductions. That is, things known to be true because they follow from premises known to be true. Where can randomness fit?

One answer, one close to my heart, is called Monte Carlo methods. These are techniques that find approximate answers to questions. They do well when exact answers are too hard for us to find. They use random numbers to approximate answers and, often, to make approximate answers better. This demands computations. The field didn’t really exist before computers, although there are some neat forebears. I mean the Buffon needle problem, which lets you calculate the digits of π about as slowly as you could hope to do.

Another, linked to Monte Carlo methods, is stochastic geometry. “Stochastic” is the word mathematicians attach to things when they feel they’ve said “random” too often, or in an undignified manner. Stochastic geometry is what we can know about shapes when there’s randomness about how the shapes are formed. This sounds like it’d be too weak a subject to study. That it’s built on relatively weak assumptions means it describes things in many fields, though. It can be seen in understanding how forests grow. How to find structures inside images. How to place cell phone towers. Why materials should act like they do instead of some other way. Why galaxies cluster.

There’s also a stochastic calculus, a bit of calculus with randomness added. This is useful for understanding systems with some persistent unpredictable behavior. It comes, if I understand the histories of this right, from studying the ways molecules will move around in weird zig-zagging twists. They do this even when there is no overall flow, just a fluid at a fixed temperature. It too has surprising applications. Without the assumption that some prices of things are regularly jostled by arbitrary and unpredictable forces, and the treatment of that by stochastic calculus methods, we wouldn’t have nearly the ability to hedge investments against weird chaotic events. This would be a bad thing, I am told by people with more sophisticated investments than I have. I personally own like ten shares of the Tootsie Roll corporation and am working my way to a $2.00 rebate check from Boyer.

Given that we need randomness, but don’t know how to get it — or at least don’t know how to be sure we have it — what is there to do? We accept our failings and make do with “quasirandom numbers”. We find some process that generates numbers which look about like random numbers should. These have failings. Most important is that, if we knew the process, we could predict them. They’re random like “the date Easter will fall on” is random. The date Easter falls on is not at all random; it’s defined by a specific and humanly knowable formula. But if the only information you have is that this year, Easter fell on the 1st of April (Gregorian computus), you don’t have much guidance to whether next year it’ll be on the 7th, 14th, or 21st of April. Most notably, quasirandom number generators will tend to repeat after enough numbers are drawn. If we know we won’t need enough numbers to see a repetition, though? Another stereotype of the mathematician is that of a person who demands exactness. It is often more true to say she is looking for an answer good enough. We are usually all right with a merely good enough quasirandomness.
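A linear congruential generator is the classic example of such a process. Here is a deliberately tiny one (modulus 64, so the repetition is easy to watch; real generators use the same recurrence with enormous moduli):

```python
def lcg(seed, a=21, c=5, m=64):
    """A tiny linear congruential generator: x -> (a*x + c) mod m.

    These constants satisfy the Hull-Dobell conditions, so the generator
    visits every residue 0..m-1 before repeating: its period is exactly m.
    """
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen = lcg(seed=1)
first_pass = [next(gen) for _ in range(64)]
second_pass = [next(gen) for _ in range(64)]
```

The first 64 outputs look scattered enough. The next 64 repeat them exactly, in order, which is all the randomness this process will ever offer.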

Boyer candies — Mallo Cups, most famously, although I more like the peanut butter Smoothies — come with a cardboard card backing. Each card has two play money “coins”, of values from 5 cents to 50 cents. These can be gathered up for a rebate check or for various prizes. Whether your coin is 5 cents, 10, 25, or 50 cents … well, there’s no way to tell, before you open the package. It’s, so far as you can tell, randomness.

My next A To Z post should be available at this link. It’s coming Tuesday and should be the letter ‘S’.

Theorem Thursday: The Intermediate Value Theorem

I am still taking requests for this Theorem Thursdays sequence. I intend to post each Thursday in June and July an essay talking about some theorem and what it means and why it’s important. I have gotten a couple of requests in, but I’m happy to take more; please just give me a little lead time. But I want to start with one that delights me.

The Intermediate Value Theorem

I own a Scion tC. It’s a pleasant car, about 2400 percent more sporty than I am in real life. I got it because it met my most important criteria: it wasn’t expensive and it had a sun roof. That it looks stylish is an unsought bonus.

But being a car, and a black one at that, it has a common problem. Leave it parked a while, then get inside. In the winter, it gets so cold that snow can fall inside it. In the summer, it gets so hot that the interior, never mind the passengers, risks melting. While pondering this slight inconvenience I wondered, isn’t there any outside temperature that leaves my car comfortable?

Of course there is. We know this before thinking about it. The sun heats the car, yes. When the outside temperature is low enough, there’s enough heat flowing out that the car gets cold. When the outside temperature’s high enough, not enough heat flows out. The car stays warm. There must be some middle temperature where just enough heat flows out that the interior doesn’t get particularly warm or cold. Not just one middle temperature, come to that. There is a range of temperatures that are comfortable to sit in. But that just means there’s a range of outside temperatures for which the car’s interior stays comfortable. We know this range as late April, early May, here. Most years, anyway.

The reasoning that lets us know there is a comfort-producing outside temperature we can see as a use of the Intermediate Value Theorem. It addresses a function f with domain [a, b], and range in the real numbers. The domain is closed; that is, the numbers we call ‘a’ and ‘b’ are both in the set. And f has to be a continuous function. If you want to draw it, you can do so without having to lift pen from paper. (WARNING: Do not attempt to pass your Real Analysis course with that definition. But that’s what the proper definition means.)

So look at the numbers f(a) and f(b). Pick some number between them, and I’ll call that number ‘g’. There must be at least one number ‘c’, that’s between ‘a’ and ‘b’, and for which f(c) equals g.

Bernard Bolzano, an early-19th century mathematician/logician/theologian/priest, gets the credit for first proving this theorem. Bolzano’s version was a little different. It supposes that f(a) and f(b) are of opposite sign. That is, f(a) is a positive and f(b) a negative number. Or f(a) is negative and f(b) is positive. And Bolzano’s theorem says there must be some number ‘c’ for which f(c) is zero.

You can prove this by drawing any wiggly curve at all and then a horizontal line in the middle of it. Well, that doesn’t prove it to a mathematician’s satisfaction. But it will prove the matter in the sense that you’ll be convinced. It’ll also convince anyone you try explaining this to.

You might wonder why anyone needed this proved at all. It’s a bit like proving that as you pour water into the sink there’ll come a time the last dish gets covered with water. So it is. The need for a proof came about from the ongoing attempt to make mathematics rigorous. We have an intuitive idea of what it means for functions to be continuous; see my above comment about lifting pens from paper. Can that be put in terms that don’t depend on physical intuition? … Yes, it can. And we can divorce the Intermediate Value Theorem from our physical intuitions. We can know something that’s true even if we never see a car or a sink.

This theorem might leave you feeling a little hollow inside. Proving that there is some ‘c’ for which f(c) equals g, or even equals zero, doesn’t seem to tell us much about how to find it. It doesn’t even tell us that there’s only one ‘c’, rather than two or three or a hundred million candidates that meet our criteria. Fair enough. The Intermediate Value Theorem is more about proving the existence of solutions, rather than how to find them.

But knowing there is a solution can help us find one. The Intermediate Value Theorem as we know it grew out of finding roots for polynomials. One numerical method, easy to set up for any problem, is the bisection method. If you know that somewhere between ‘a’ and ‘b’ the function goes from positive to negative, then find the midpoint, ‘c’. The function is equal to zero either between ‘a’ and ‘c’, or between ‘c’ and ‘b’. Pick the side that it’s on, and bisect that. Pick the half of that which the zero must be in. Bisect that half. And repeat until you get close enough to the answer for your needs. (The same reasoning applies to a lot of problems in which you divide the search range in two each time until the answer appears.)
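The bisection method is short enough to write out in full. A sketch in Python (variable names mine), using the Bolzano form where f(a) and f(b) have opposite signs:

```python
def bisect_root(f, a, b, tolerance=1e-10):
    """Find a zero of continuous f on [a, b], where f(a), f(b) differ in sign.

    The Intermediate Value Theorem guarantees a zero exists; bisection
    halves the bracketing interval each step until it is small enough.
    """
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tolerance:
        c = (a + b) / 2.0
        fc = f(c)
        if fa * fc <= 0:
            b = c          # the sign change is within [a, c]
        else:
            a, fa = c, fc  # the sign change is within [c, b]
    return (a + b) / 2.0
```

Finding the square root of 2 as the zero of x² − 2 on [0, 2], say, takes a few dozen halvings to reach the default tolerance.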

We can get some pretty heady results from the Intermediate Value Theorem, too, even if we don’t know where any of them are. An example you’ll see everywhere is that there must be spots on the opposite sides of the globe with the exact same temperature. Or humidity, or daily rainfall, or any other quantity like that. I had thought everyone was ripping that example off from Richard Courant and Herbert Robbins’s masterpiece What Is Mathematics?. But I can’t find this particular example in there. I wonder what we are all ripping it off from.

So here’s a neat example that is ripped off from them. Draw two blobs on the plane. Is there a straight line that bisects both of them at once? Bisecting here means there’s exactly as much of one blob on one side of the line as on the other. There certainly is. The trick is there are any number of lines that will bisect one blob, and then look at what that does to the other.

A similar ripped-off result you can do with a single blob of any shape you like. Draw any line that bisects it. There are a lot of candidates. Can you draw a line perpendicular to that so that the blob gets quartered, divided into four spots of equal area? Yes. Try it.

But surely the best use of the Intermediate Value Theorem is in the problem of wobbly tables. If the table has four legs, all the same length, and the problem is the floor isn’t level, it’s all right. There is some way to adjust the table so it won’t wobble. (Well, the ground can’t be angled more than a bit over 35 degrees, but that’s all right. If the ground has a 35 degree angle you aren’t setting a table on it. You’re rolling down it.) Finally a mathematical proof can save us from despair!

Except that the proof doesn’t work if the table legs are uneven which, alas, they often are. But we can’t get everything.

Courant and Robbins put forth one more example that’s fantastic, although it doesn’t quite work. But it’s a train problem unlike those you’ve seen before. Let me give it to you as they set it out:

Suppose a train travels from station A to station B along a straight section of track. The journey need not be of uniform speed or acceleration. The train may act in any manner, speeding up, slowing down, coming to a halt, or even backing up for a while, before reaching B. But the exact motion of the train is supposed to be known in advance; that is, the function s = f(t) is given, where s is the distance of the train from station A, and t is the time, measured from the instant of departure.

On the floor of one of the cars a rod is pivoted so that it may move without friction either forward or backward until it touches the floor. If it does touch the floor, we assume that it remains on the floor henceforth; this will be the case if the rod does not bounce.

Is it possible to place the rod in such a position that, if it is released at the instant when the train starts and allowed to move solely under the influence of gravity and the motion of the train, it will not fall to the floor during the entire journey from A to B?

They argue it is possible, and use the Intermediate Value Theorem to show it. They admit the range of angles it’s safe to start the rod from may be too small to be useful.

But they’re not quite right. Ian Stewart, in the revision of What Is Mathematics?, includes an appendix about this. Stewart credits Tim Poston with pointing out, in 1976, the flaw. It’s possible to imagine a path which causes the rod, from one angle, to just graze tipping over, let’s say forward, and then get yanked back and fall over flat backwards. This would leave no room for any starting angles that avoid falling over entirely.

It’s a subtle flaw. You might expect so. Nobody mentioned it between the book’s original publication in 1941, after which everyone liking mathematics read it, and 1976. And it is one that touches on the complications of spaces. This little Intermediate Value Theorem problem draws us close to chaos theory. It’s one of those ideas that weaves through all mathematics.

Reading the Comics, October 17, 2015: Rerun Edition

I hate to make it sound like I’m running out of things to say about mathematical comics. But the most recent bunch of strips have been reruns, as with Bill Amend’s FoxTrot or Tom Toles’s Randolph Itch, 2 am. And there’s some figurative reruns too, as a couple of things I’ve talked about before come around again. Also I’m not sure but I think I might have used this Edition Title before. It feels like one I might have. I hope you’ll enjoy anyway, please.

Bill Amend’s FoxTrot Classics for the 15th of October, originally run in 2004, is about binary numerals. It’s built on the fact the numeral ‘100’ represents a rather smaller number in base-two arithmetic than it does in base-ten. This is the sort of thing that’s funny to a mathematically-inclined nerd, such as Jason here. It’s the numerical equivalent of a pun, playing on how if you pretend something is in a different context, it would have a different meaning.
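The pun is mechanical enough to compute. A positional numeral’s value depends on the base it’s read in (a short illustration, nothing deep):

```python
def numeral_value(numeral, base):
    """Value of a positional numeral string read in the given base."""
    value = 0
    for digit in numeral:
        value = value * base + int(digit)
    return value
```

So ‘100’ read in base two is four, while read in base ten it’s the hundred everyone else sees.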

Dave Blazek’s Loose Parts for the 15th of October puts a shape other than a triangle into the orchestra pit. I’m amused, and it puts me in mind of the classic question, “Can One Hear The Shape Of A Drum?” The answer is tricky.

Bob Scott’s Molly and the Bear for the 15th of October is a Pi Day joke. I don’t believe it’s a rerun, but the engagingly-drawn strip is in reruns terribly often.

Tom Toles’s Randolph Itch, 2 am for the 15th of October is a rerun, not just from 1999 but from earlier this year. I don’t know if the strip is being run out of order or if the strip ran a shorter time than I thought. Anyway, it’s still a funny drawing and “r” doesn’t figure into it at all.

Rick Detorie’s One Big Happy for the 16th of October shows Ruthie teaching her stuffed dolls about the number 1. Ruthie is a bit confused about the difference between the number one and the numeral, the way we represent the number. That’s common enough.

She does kind of have a point, though. The number one gets represented as a vertical stroke in the Arabic numerals we commonly use; also in Roman numerals used in making dates harder to read; also in Ancient Egyptian numerals; also in Chinese numerals. One almost suspects everyone is copying each other, or just started off with a tally mark and kept with it. Things get more complicated around ‘three’ or ‘four’. But it isn’t really universal, of course. The Mayans used a single dot, which is admittedly pretty close as a scheme. The Babylonians used a vertical wedge, a little triangle atop a stem that was presumably easy to carve with the tools available.

Ruben Bolling’s Super-Fun-Pak Comix for the 16th of October reprints a Chaos Butterfly installment. And the reminder that a system can be deterministic yet unpredictable sets me up for …

The rerun of Tom Toles’s Randolph Itch, 2 am that appeared on the 17th. The page of horoscopes saying “what happens to you today will be random, based on laws of probability” is funny, although, “random”? There is, it appears, randomness deeply encoded in the universe. There seems to be no way that atoms and molecules could work if they could not be random. But randomness follows laws. Those laws are so fundamental, and imply averages so relentlessly, that they create a human-scale world which might as well be deterministic. (I am deliberately bundling up the question of whether beings have free will and putting it off to the corner, in a little box, where I will not bother it.) In principle, we should be able to predict the day; we just need enough information, and time to compute.

Of course in practice we can’t, and can’t even come close. We may be able to predict the broad strokes of the day, but it is filled with the unpredictable. We call that random, but that is really a confession of ignorance. It’s much the way we might say there is a “probability” of one in seven that you were born on a Tuesday. There’s no such thing. The probability is either 1, because you were born on a Tuesday, or 0, because you were not. What day any given date in the Julian or Gregorian calendar occurred is a determined thing. What we mean by “a probability of one in seven” is that we are ignorant of your birthday, or have not done the work of finding out what day of the week that was. Thus the day of the week appears random.
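
The point that a weekday is a determined fact, not a random one, is easy to check mechanically. A minimal sketch in Python’s standard library, using a made-up example birthday:

```python
from datetime import date

# The weekday of any date in the Gregorian calendar is a determined
# fact; "a probability of one in seven" only measures our ignorance.
birthday = date(1990, 3, 13)  # a made-up example date
print(birthday.strftime("%A"))  # this particular date was a Tuesday
```

Once the date is known, nothing about the answer is uncertain; the “randomness” vanished the moment we did the work of looking it up.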

John Graziano’s Ripley’s Believe It or Not for the 17th of October claims that Les Stewart wrote out “every number from one to one million in words”, using seven typewriters, in a project that took sixteen years and seven months. Sixteen years and seven months is something close to half a billion seconds. So if we take the claim at face value, he was averaging about five hundred seconds to write out each number. This sounds unimpressive, but after all, he had to take some time to sleep and probably had other projects to work on as well. Perhaps he was also working on putting the numbers in alphabetical order.
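
The back-of-envelope arithmetic is quick to check in Python (approximating an average month at 30.44 days):

```python
# Rough check: sixteen years and seven months, in seconds, and the
# implied average writing time per number.
months = 16 * 12 + 7
seconds = months * 30.44 * 24 * 60 * 60  # ~30.44 days per average month
per_number = seconds / 1_000_000
print(round(seconds / 1e6), round(per_number))  # ~523 million seconds, ~523 seconds each
```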

Reading the Comics, September 16, 2015: Celebrity Appearance Edition

I couldn’t go on calling these Back To School Editions. A couple of the comic strips the past week have given me reason to mention people famous in mathematics or physics circles, and one who’s even famous in the real world too. That’ll do for a title.

Jeff Corriveau’s Deflocked for the 15th of September tells what I want to call an old joke about geese formations. The thing is that I’m not sure it is an old joke. At least I can’t think of it being done much. It seems like it should have been.

The formations that geese, or other birds, form have been a neat corner of mathematics. The question they inspire is “how do birds know what to do?” How can they form complicated groupings and, more, change their flight patterns at a moment’s notice? (Geese flying in V shapes don’t need to do that, but other flocking birds will.) One surprising answer is that if each bird just tries to follow a couple of simple rules, then once you have enough birds, the group will do amazingly complex things. This is good for people who want to say how complex things come about. It suggests you don’t need very much to have robust and flexible systems. It’s also bad for people who want to say how complex things come about. It suggests that many things that would be interesting can’t be studied in simpler models. Use a smaller number of birds or fewer rules and the interesting behavior doesn’t appear.
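
That idea, of simple local rules producing organized group behavior, can be sketched in a few lines. This is a toy model of just one rule, “steer toward the flock’s average heading”; it isn’t any published flocking simulation, and all the numbers are made up:

```python
import random

random.seed(1)
# Thirty birds start pointed in random directions (in degrees).
headings = [random.uniform(0.0, 360.0) for _ in range(30)]

def step(headings, rate=0.2):
    # Each bird nudges its heading toward the flock average.
    # (Averaging raw degrees ignores wraparound; fine for a toy.)
    avg = sum(headings) / len(headings)
    return [h + rate * (avg - h) for h in headings]

for _ in range(50):
    headings = step(headings)

spread = max(headings) - min(headings)
print(spread)  # tiny: one repeated rule has aligned the whole flock
```

Each step shrinks every bird’s deviation from the average by the same factor, so the disordered flock settles into a common direction with no leader and no global plan.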

Scott Adams’s Dilbert Classics from the 15th and 16th of September (originally run the 22nd and 23rd of July, 1992) are about mathematical forecasts of the future. This is a hard field. It’s one people have been dreaming of doing for a long while. J Willard Gibbs, the renowned 19th century physicist who put the mathematics of thermodynamics in essentially its modern form, pondered whether a thermodynamics of history could be made. But attempts at making such predictions top out at demographic or rough economic forecasts, and for obvious reason.

The next day Dilbert’s garbageman, the smartest person in the world, asserts the problem is chaos theory, that “any complex iterative model is no better than a wild guess”. I wouldn’t put it that way, although I’m not sure what would convey the idea within the space available. One problem with predicting complicated systems, even if they are deterministic, is that there is a difference between what we can measure a system to be and what the system actually is. And for some systems that slight error will be magnified quickly to the point that a prediction based on our measurement is useless. (Fortunately this seems to affect only interesting systems, so we can still do things like study physics in high school usefully.)
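
That gap between what we measure and what the system is can be demonstrated with the logistic map, a standard toy example of a deterministic but chaotic rule (the starting values here are arbitrary):

```python
# Two runs of the same deterministic rule, x -> 4x(1 - x), started a
# hair apart: stand-ins for "the system" and "our measurement of it".
x, y = 0.4, 0.4 + 1e-10
gap = 0.0
for _ in range(60):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    gap = max(gap, abs(x - y))
print(gap)  # an error of one part in ten billion, grown to order one
```

Sixty steps of the same rule, and the prediction based on the “measured” starting value tells us essentially nothing about where the “true” system is.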

Maria Scrivan’s Half Full for the 16th of September makes the Common Core joke. A generation ago this was a New Math joke. It’s got me curious about the history of attempts to reform mathematics teaching, and how poorly they get received. Surely someone’s written a popular or at least semipopular book about the process? I need some friends in the anthropology or sociology departments to ask, I suppose.

In Mark Tatulli’s Heart of the City for the 16th of September, Heart is already feeling lost in mathematics. She’s in enough trouble she doesn’t recognize mathematics terms. That is an old joke, too, although I think the best version of it was done in a Bloom County with no mathematical content. (Milo Bloom met his idol Betty Crocker and learned that she was a marketing icon who knew nothing of cooking. She didn’t even recognize “shish kebob” as a cooking term.)

Mell Lazarus’s Momma for the 16th of September sneers at the idea of predicting where specks of dust will land. But the motion of dust particles is interesting. What can be said about the way dust moves when the dust is being battered by air molecules that are moving as good as randomly? This becomes a problem in statistical mechanics, and one that depends on many things, including just how fast air particles move and how big molecules are. Now for the celebrity part of this story.

Albert Einstein published four papers in his “Annus mirabilis” year of 1905. One of them was the Special Theory of Relativity, and another the mass-energy equivalence. Those, and the General Theory of Relativity, are surely why he became and still is a familiar name to people. One of his others was on the photoelectric effect. It’s a cornerstone of quantum mechanics. If Einstein had done nothing in relativity he’d still be renowned among physicists for that. The last paper, though, that was on Brownian motion, the movement of particles buffeted by random forces like this. And if he’d done nothing in relativity or quantum mechanics, he’d still probably be known in statistical mechanics circles for this work. Among other things this work gave the first good estimates for the size of atoms and molecules, and gave easily observable, macroscopic-scale evidence that molecules must exist. That took some work, though.

Dave Whamond’s Reality Check for the 16th of September shows off the Metropolitan Museum of Symmetry. This is probably meant to be an art museum. Symmetries are studied in mathematics too, though. Many symmetries, the ways you can swap shapes around, form interesting groups or rings. And in mathematical physics, symmetries give us useful information about the behavior of systems. That’s enough for me to claim this comic is mathematically linked.

Reading the Comics, September 10, 2015: Back To School Edition

I assume that Comic Strip Master Command ordered many mathematically-themed comic strips to coincide with the United States school system getting back up to full speed. That or they knew I’d have a busy week. This is only the first part of the comic strips that have appeared since Tuesday.

Mel Henze’s Gentle Creatures for the 7th and the 8th of September use mathematical talk to fill out the technobabble. It’s a cute enough notion. These particular strips ran last year, and I talked about them then. The talk of a “Lagrangian model” interests me. It name-checks a real and important and interesting scientist who’s not Einstein or Stephen Hawking. But I’m still not aware of any “Lagrangian model” that would be relevant to starship operations.

Jon Rosenberg’s Scenes from a Multiverse for the 7th of September speaks of a society of “powerful thaumaturgic diagrammers” who used Venn diagrams not wisely but too well. The diagrammers got into trouble when one made “a Venn diagram that showed the intersection of all the Venns and all the diagrams”. I imagine this not to be a rigorous description of what happened. But Venn diagrams match up well with many logic problems. And self-referential logic, logic statements that describe their own truth or falsity, is often problematic. So I would accept a story in which Venn diagrams about Venn diagrams lead to trouble. The motif of tying logic and mathematics into magic is an old one. I understand it. A clever mathematical argument often feels like magic, especially the surprising ones. To me, the magical theorems are those that first prove a set of seemingly irrelevant lemmas. Then, with that stock in hand, they go on to the main point in a few wondrous lines. If you can do that, why not transmute lead, or accidentally retcon a society out of existence?

Mark Anderson’s Andertoons for the 8th of September just delights me. Occasionally I feel a bit like Mark Anderson’s volunteer publicity department. A panel like this, though, makes me feel that he deserves it.

Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 8th of September is the first anthropomorphic-geometric-figures joke we’ve had here in a while.

Mike Baldwin’s Cornered for the 9th of September is a drug testing joke, and a gambling joke. Both are subjects driven by probabilities. Any truly interesting system is always changing. If we want to know whether something affects the system we have to know whether we can make a change that’s bigger than the system does on its own. And this gives us drug-testing and other statistical inference tests. If we apply a drug, or some treatment, or whatever, how does the system change? Does it change enough, consistently, that it’s not plausible that the change just happened by chance? Or by some other influence?
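
One way to make that question concrete is a permutation test: shuffle the group labels many times and see how often chance alone produces a difference as big as the one actually observed. A sketch, with entirely made-up scores:

```python
import random

random.seed(4)
treated = [14, 11, 12, 9, 13, 12, 15, 11]  # invented scores, treated group
control = [10, 9, 11, 8, 9, 10, 12, 8]     # invented scores, control group

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treated) - mean(control)

# Shuffle the pooled scores and re-split them at random, many times.
pooled = treated + control
n = len(treated)
hits = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed:
        hits += 1

p_value = hits / trials
print(observed, p_value)  # a small p-value: hard to blame chance alone
```

If random re-labelings almost never match the observed difference, then “the change just happened by chance” stops being a plausible explanation.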

You might have noticed a controversy going around psychology journals. A fair number of experiments were re-run, by new experimenters following the original protocols as closely as possible. Quite a few of the reported results didn’t happen again, or happened in a weaker way. That’s produced some handwringing. No one thinks deliberate experimental fraud is that widespread in the field. There may be accidental fraud, people choosing data or analyses that heighten the effect they want to prove, or that pick out any effect. However, it may also simply be chance again. Psychology experiments tend to have a lower threshold of “this is sufficiently improbable that it indicates something is happening” than, say, physics has. Psychology has a harder time getting the raw data. A supercollider has enormous startup costs, but you can run the thing for as long as you like. And every electron is the same thing. A test of how sleep deprivation affects driving skills? That’s hard. No two sleepers or drivers are quite alike, even at different times of the day. There’s not an obvious cure. Independent replication of previously done experiments helps. That’s work that isn’t exciting — necessary as it is, it’s also repeating what others did — and it’s harder to get people to do it, or pay for it. But in the meantime it’s harder to be sure what interesting results to trust.

Ruben Bolling’s Super-Fun-Pak Comix for the 9th of September is another Chaos Butterfly installment. I don’t want to get folks too excited for posts I technically haven’t written yet, but there is more Chaos Butterfly soon.

Rick Stromoski’s Soup To Nutz for the 10th of September has Royboy guess the odds of winning a lottery are 50-50. Silly, yes, but only because we know that anyone is much more likely to lose a lottery than to win it. But then how do we know that?

Since the rules of a lottery are laid out clearly we can reason about the probability of winning. We can calculate the number of possible outcomes of the game, and how many of them count as winning. Suppose each of those possible outcomes is equally likely. Then the probability of winning is the number of winning outcomes divided by the number of possible outcomes. Quite easy.
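
For a concrete, hypothetical case, take a pick-six-of-49 lottery: count the equally likely draws, and divide.

```python
from math import comb

outcomes = comb(49, 6)  # number of possible six-number draws
winning = 1             # exactly one draw matches the jackpot numbers
print(outcomes, winning / outcomes)  # 13,983,816 possible draws
```

One winning outcome out of nearly fourteen million equally likely ones: rather worse than 50-50.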

— Of course, that’s exactly what Royboy did. There are two possible outcomes, winning or losing. Lacking reason to think they aren’t equally likely, he concluded a win and a loss were just as probable.

We have to be careful what we mean by “an outcome”. What we probably mean for a drawn-numbers lottery is the number of ways the lottery numbers can be drawn. For a scratch-off card we mean the number of tickets that can be printed. But we’re still stuck with this idea of “equally likely” outcomes. I suspect we know what we mean by this, but trying to say what that is clearly, and without question-begging, is hard. And even this works only because we know the rules by which the lottery operates. Or we can look them up. If we didn’t know the details of the lottery’s workings, past the assumption that it has consistently followed rules, what could we do?

Well, that’s what we have probability classes for, and particularly the field of Bayesian probability. This field tries to estimate the probabilities of things based on what actually happens. Suppose Royboy played the lottery fifty times and lost every time. That would smash the idea that his chances were 50-50, although that would not yet tell him what the chances really are.
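
The arithmetic of that update is simple in the standard textbook setup. Starting from a uniform prior on the chance p of winning, after w wins and l losses the posterior is a Beta(w + 1, l + 1) distribution, whose mean is (w + 1)/(w + l + 2). With Royboy’s hypothetical fifty straight losses:

```python
wins, losses = 0, 50

# Mean of the Beta(wins + 1, losses + 1) posterior from a uniform prior.
posterior_mean = (wins + 1) / (wins + losses + 2)

# How surprising fifty straight losses would be if p really were 0.5:
chance_if_fair = 0.5 ** losses

print(posterior_mean)   # about 0.019
print(chance_if_fair)   # about 9e-16: the 50-50 theory does not survive
```

The evidence pushes the estimate far below one-half, without yet pinning down the true odds; that would take far more plays, or knowledge of the lottery’s rules.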

Reading the Comics, August 14, 2015: Name-Dropping Edition

There have been fewer mathematically-themed comic strips than usual the past week, but they have kept coming in. This week seems to have included a fair number of name-drops of interesting mathematical concepts.

David L Hoyt and Jeff Knurek’s Jumble (August 10) name-drops the abacus. It has got me wondering about how abacuses were made in the pre-industrial age. On the one hand they could in principle be made by anybody who has beads and rods. On the other hand, a skillfully made abacus will make the tool so much more effective. Who made and who sold them? I honestly don’t know.

Mick Mastroianni and Mason Mastroianni’s Dogs of C Kennel (August 11) has Tucker reveal that most of the mathematics he scrawls is just to make his work look harder. I suspect Tucker overdid his performance. My experience is you can get the audience’s eyes to glaze over with much less mathematics on the board.

Leigh Rubin’s Rubes (August 11) mentions chaos theory. It’s not properly speaking a Chaos Butterfly comic strip. But certainly it’s in the vicinity.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (August 11) name-drops Banach-Tarski. This is a reference to a famous-in-some-circles theorem, or paradox. The theorem, published in 1924 by Stefan Banach and Alfred Tarski, shows something astounding. It’s possible to take a ball, and disassemble it into a number of pieces. Then, doing nothing more than sliding and rotating the pieces, one can reassemble them to get two balls, each with the same volume as the original. If that doesn’t sound ridiculous enough, consider that it’s possible to do this trick by cutting the ball into as few as five pieces. (Four, if you’re willing to exclude the exact center of the original ball.) So you can see why this is called a paradox, and why this joke works for people who know the background.

Scott Hilburn’s The Argyle Sweater (August 12) illustrates that joke about rounding up the cattle you might have seen going around.

Reading the Comics, June 30, 2015: Fumigating The Theater Edition

One of my favorite ever episodes of The Muppet Show when I was a kid had the premise the Muppet Theater was being fumigated and so they had to put on a show from the train station instead. (It was the Loretta Lynn episode, third season, number eight.) I loved seeing them try to carry on as normal when not a single thing was as it should be. Since then — probably before, too, but I don’t remember that — I’ve loved seeing stuff trying to carry on in adverse circumstances.

Why this is mentioned here is that Sunday night my computer had a nasty freeze and some video card mishaps. I discovered that my early-2011 MacBook Pro might be among those recalled earlier this year for a service glitch. My computer is in for what I hope is a simple, free, and quick repair. But obviously I’m not at my best right now. I might be even longer than usual answering people and goodness knows how the statistics survey of June will go.

Anyway. Rick Kirkman and Jerry Scott’s Baby Blues (June 26) is a joke about motivating kids to do mathematics. And about how you can’t do mathematics over summer vacation.

Ruben Bolling’s Tom The Dancing Bug (June 26) features a return appearance of Chaos Butterfly. Chaos Butterfly does what Chaos Butterfly does best.

Charles Schulz’s Peanuts Begins (June 26; actually just the Peanuts of March 23, 1951) uses arithmetic as a test of smartness. And as an example of something impractical.

Alex Hallatt’s Arctic Circle (June 28) is a riff on the Good Will Hunting premise. That movie’s particular premise — the janitor solves an impossible problem left on the board — is, so far as I know, something that hasn’t happened. But it’s not impossible. Training will help one develop reasoning ability. Training will provide context and definitions and models to work from. But that’s not essential. All that’s essential is the ability to reason. Everyone has that ability; everyone can do mathematics. Someone coming from outside the academy could do first-rate work. However, I’d bet on the person with the advanced degree in mathematics. There is value in training.

But as many note, the Good Will Hunting premise has got a kernel of truth in it. In 1939, George Dantzig, a grad student in mathematics at University of California/Berkeley, came in late to class. He didn’t know that two problems on the board were examples of unproven theorems, and assumed them to be homework. So he did them, though he apologized for taking so long to do them. Before you draw too much inspiration from this, though, remember that Dantzig was a graduate student almost ready to start work on a PhD thesis. And the problems were not thought unsolvable, just conjectures not yet proven. Snopes, as ever, provides some explanation of the legend and some of the variant ways the story is told.

Mac King and Bill King’s Magic In A Minute (June 28) shows off a magic trick that you could recast as a permutations problem. If you’ve been studying group theory, and many of my Mathematics A To Z terms have readied you for group theory, you can prove why this trick works.

Guy Gilchrist’s Nancy (June 28) carries on Baby Blues‘s theme of mathematics during summer vacation being simply undoable.

Piers Baker’s Ollie and Quentin (June 28) is a gambler’s fallacy-themed joke. It was run — on ComicsKingdom, back then — back in December, and I talked some more about it then.

Mike Twohy’s That’s Life (June 28) is about the perils of putting too much attention into mental arithmetic. It’s also about how perilously hypnotic decimals are: if the pitcher had realized “fourteen million over three years” must be “four and two-thirds million per year” he’d surely have been less distracted.

Reading the Comics, March 4, 2015: Driving Me Crazy Edition

I like it when there are themes to these collections of mathematical comics, but since I don’t decide what subjects cartoonists write about — Comic Strip Master Command does — it depends on luck and my ability to dig out loose connections to find any. Sometimes, a theme just drops into my lap, though, as with today’s collection: several cartoonists tossed off bits that had me double-checking their work and trying to figure out what it was I wasn’t understanding. Ultimately I came to the conclusion that they just made mistakes, and that’s unnerving since how could a mathematical error slip through the rigorous editing and checking of modern comic strips?

Mac and Bill King’s Magic in a Minute (March 1) tries to show off how to do a magic trick based on parity, using the spots on a die to tell whether it was turned in one direction or another. It’s a good gimmick, and parity — whether something is odd or even — can be a great way to encode information or to do simple checks against slight errors. That said, I believe the Kings made a mistake in describing the system: I can’t figure out how the parity of the three sides of a die facing you could not change, from odd to even or from even to odd, as the die is rotated one turn. I believe they mean that you should just count the dots on the vertical sides, so that for example in the “Howdy Do It?” panel in the lower right corner, add two and one to make three. But with that corrected it should be a good trick.

Reading the Comics, January 29, 2015: Returned Motifs Edition

I do occasionally worry that my little blog is going to become nothing but a review of mathematics-themed comic strips, especially when Comic Strip Master Command sends out abundant crops like it has the past few weeks. This week’s offerings bring out the return of a lot of familiar motifs, like fighting with word problems and anthropomorphized numbers; and there’s one strip that suggests a pair of articles I wrote a while back might be useful yet.

Bill Amend’s FoxTrot (January 25, and not a rerun) puts out a little word problem, about what grade one needs to get a B in this class, in the sort of passive-aggressive sniping teachers long to get away with. As Paige notes, it really isn’t a geometry problem, although I wonder if there’s a sensible way to represent it as a geometry problem.

Ruben Bolling’s Super-Fun-Pak Comix superstar Chaos Butterfly appears not just in the January 25th installment but also gets a passing mention in Mark Heath’s Spot the Frog (January 29, rerun). Chaos Butterfly in all its forms seems to be popping up a lot lately; I wonder if it’s something in the air.