## My All 2020 Mathematics A to Z: Butterfly Effect

It’s a fun topic today, one suggested by Jacob Siehler, who I think is one of the people I met through Mathstodon. Mathstodon is a mathematics-themed instance of Mastodon, an open-source microblogging system. You can read its public messages here.

# Butterfly Effect.

I take the short walk from my home to the Red Cedar River, and I pour a cup of water in. What happens next? To the water, anyway. Me, I think about walking all the way back home with this empty cup.

Let me have some simplifying assumptions. Pretend the cup of water remains somehow identifiable. That it doesn’t evaporate or dissolve into the riverbed. That it isn’t scooped up by a city or factory, drunk by an animal, or absorbed into a plant’s roots. That it doesn’t meet any interesting ions that turn it into other chemicals. It just goes as the river flows dictate. The Red Cedar River merges into the Grand River. This then moves west, emptying into Lake Michigan. Water from that eventually passes the Straits of Mackinac into Lake Huron. Through the St Clair River it goes to Lake Saint Clair, the Detroit River, Lake Erie, the Niagara River, the Niagara Falls, and Lake Ontario. Then into the Saint Lawrence River, then the Gulf of Saint Lawrence, before joining finally the North Atlantic.

If I pour in a second cup of water, somewhere else on the Red Cedar River, it has a similar journey. The details are different, but the course does not change. Grand River to Lake Michigan to three more Great Lakes to the Saint Lawrence to the North Atlantic Ocean. If I wish to know when my water passes the Mackinac Bridge I have a difficult problem. If I just wish to know what its future is, the problem is easy.

So now you understand dynamical systems. There’s some details to learn before you get a job, yes. But this is a perspective that explains what people in the field do, and why they do it. Dynamical systems are, largely, physics problems. They are about collections of things that interact according to some known potential energy. They may interact with each other. They may interact with the environment. We expect that where these things are changes in time. These changes are determined by the potential energies; there’s nothing random in it. Start a system from the same point twice and it will do the exact same thing twice.

We can describe the system as a set of coordinates. For a normal physics system the coordinates are the positions and momentums of everything that can move. If the potential energy’s rule changes with time, we probably have to include the time and the energy of the system as more coordinates. This collection of coordinates, describing the system at any moment, is a point. The point is somewhere inside phase space, which is an abstract idea, yes. But the geometry we know from the space we walk around in tells us things about phase space, too.

Imagine tracking my cup of water through its journey in the Red Cedar River. It draws out a thread, running from somewhere near my house into the Grand River and Lake Michigan and on. This great thin thread that I finally lose interest in when it flows into the Atlantic Ocean.

A dynamical system’s point in phase space acts much the same as these drops. As the system changes in time, the coordinates of its parts change, or we expect them to. So “the point representing the system” moves. Where it moves depends on the potentials around it, the same way my cup of water moves according to the flow around it. “The point representing the system” traces out a thread, called a trajectory. The whole history of the system is somewhere on that thread.

Phase space, like a map, has regions. For my cup of water there’s a region that represents “is in Lake Michigan”. There’s another that represents “is going over Niagara Falls”. There’s one that represents “is stuck in Sandusky Bay a while”. When we study dynamical systems we are often interested in what these regions are, and what the boundaries between them are. Then a glance at where the point representing a system is tells us what it is doing. If the system represents a satellite orbiting a planet, we can tell whether it’s in a stable orbit, about to crash into a moon, or about to escape to interplanetary space. If the system represents weather, we can say it’s calm or stormy. If the system is a rigid pendulum — a favorite system to study, because we can draw its phase space on the blackboard — we can say whether the pendulum rocks back and forth or spins wildly.

Come back to my second cup of water, the one with a different history. It has a different thread from the first. So, too, a dynamical system started from a different point traces out a different trajectory. To find a trajectory is, normally, to solve differential equations. This is often useful to do. But from the dynamical systems perspective we’re usually interested in other issues.

For example: when I pour my cup of water in, does it stay together? The cup of water started all quite close together. But the different drops of water inside the cup? They’ve all had their own slightly different trajectories. So if I went with a bucket, one second later, trying to scoop it all up, likely I’d succeed. A minute later? … Possibly. An hour later? A day later?

By then I can’t gather it back up, practically speaking, because the water’s gotten all spread out across the Grand River. Possibly Lake Michigan. If I knew the flow of the river perfectly and knew well enough where I dropped the water in? I could predict where each goes, and catch each molecule of water right before it falls over Niagara. This is tedious but, after all, if you start from different spots — as the first and the last drop of my cup do — you expect to, eventually, go different places. They all end up in the North Atlantic anyway.

Except … well, there is the Chicago Sanitary and Ship Canal. It connects the Chicago River to the Des Plaines River. The result is that some of Lake Michigan drains to the Ohio River, and from there the Mississippi River, and the Gulf of Mexico. There are also some canals in Ohio which connect Lake Erie to the Ohio River. I don’t know offhand of ones in Indiana or Wisconsin bringing Great Lakes water to the Mississippi. I assume there are, though.

Then, too, there is the Erie Canal, and the other canals of the New York State Canal System. These link the Niagara River and Lake Erie and Lake Ontario to the Hudson River. The Pennsylvania Canal System, too, links Lake Erie to the Delaware River. The Delaware and the Hudson may bring my water to the mid-Atlantic. I don’t know the canal systems of Ontario well enough to say whether some water goes to Hudson Bay; I’d grant that’s possible, though.

Think of my poor cups of water, now. I had been sure their fate was the North Atlantic. But if they happen to be in the right spot? They visit my old home off the Jersey Shore. Or they flow through Louisiana and warmer weather. What is their fate?

I will have butterflies in here soon.

Imagine two adjacent drops of water, one about to be pulled into the Chicago River and one with Lake Huron in its future. There is almost no difference in their current states. Their destinies are wildly separate, though. It’s surprising that so small a difference matters. Thinking through the surprise, it’s fair that this can happen, even for a deterministic system. It happens that there is a border, separating those bound for the Gulf and those for the North Atlantic, between these drops.

But how did those water drops get there? Where were they an hour before? … Somewhere else, yes. But still, on opposite sides of the border between “Gulf of Mexico water” and “North Atlantic water”. A day before, the drops were somewhere else yet, and the border was still between them. This separation goes back to, even, if the two drops came from my cup of water. Within the Red Cedar River is a border between a destiny of flowing past Quebec and of flowing past Saint Louis. And between flowing past Quebec and flowing past Syracuse. Between Syracuse and Philadelphia.

How far apart are those borders in the Red Cedar River? If you’ll go along with my assumptions, smaller than my cup of water. Not that I have the cup in a special location. The borders between all these fates are, probably, a complicated spaghetti-tangle. Anywhere along the river would be as fortunate. But what happens if the borders are separated by a space smaller than a drop? Well, a “drop” is a vague size. What if the borders are separated by a width smaller than a water molecule? As if there were no subtleties in defining the “size” of a molecule.

That these borders are so close does not make the system random. It is still deterministic. Put a drop of water on this side of the border and it will go to this fate. But how do we know which side of the line the drop is on? If I toss this new cup out to the left rather than the right, does that matter? If my pinky twitches during the toss? If I am breathing in rather than out? What if a change too small to measure puts the drop on the other side?

And here we have the butterfly effect. It is about how a difference too small to observe has an effect too large to ignore. It is not about a system being random. It is about how we cannot know the system well enough for its predictability to tell us anything.
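A toy system makes this concrete. This is my sketch, not anything from the original discussion: the logistic map x → 4x(1 − x), a standard one-line chaotic system, started from two points that differ by one part in a billion.

```python
# Two starting points for the chaotic logistic map x -> 4x(1 - x),
# differing by one part in a billion: far too small to ever measure.
a, b = 0.4, 0.4 + 1e-9
gap = []
for _ in range(60):
    a, b = 4 * a * (1 - a), 4 * b * (1 - b)
    gap.append(abs(a - b))

print(gap[0] < 1e-8)    # True: after one step, still about a billionth apart
print(max(gap) > 0.01)  # True: within sixty steps the gap is enormous by comparison
```

The system is perfectly deterministic; start it from exactly 0.4 twice and it does the same thing twice. The unpredictability comes entirely from the unmeasurably small difference between the two starts.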

The term comes from the modern study of chaotic systems. One of the first topics in which the chaos was noticed, numerically, was weather simulations. The difference between a number’s representation in the computer’s memory and its rounded-off printout was noticeable. Edward Lorenz posed it aptly in 1963, saying that “one flap of a sea gull’s wings would be enough to alter the course of the weather forever”. Over the next few years this changed to a butterfly. In 1972 Philip Merilees titled a talk Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas? My impression is that these days the butterflies may be anywhere, and they alter hurricanes.

That we settle on butterflies as agents of chaos we can likely credit to their image. They seem to be innocent things so slight they barely exist. Hummingbirds probably move with too much obvious determination to fit the role. The Big Bad Wolf huffing and puffing would realistically be almost as nothing as a butterfly. But he has the power of myth to make him seem mightier than the storms. There are other happy accidents supporting butterflies, though. Edward Lorenz’s 1960s weather model makes trajectories that, plotted, create two great spiralling lobes. The figures look like butterflies, all different but part of the same family. And there is Ray Bradbury’s classic short story, A Sound of Thunder. If you don’t remember 7th grade English class, in the story time-travelling idiots change history, putting a fascist with terrible spelling in charge of a dystopian world, by stepping on a butterfly.

The butterfly then is metonymy for all the things too small to notice. Butterflies, sea gulls, turning the ceiling fan on in the wrong direction, prying open the living room window so there’s now a cross-breeze. They can matter, we learn.

## Reading the Comics, March 17, 2020: Random Edition

I thought last week’s comic strips mentioning mathematics in detail were still subjects easy to describe in one or two paragraphs each. I wasn’t quite right. So here’s half of a week, even if it is a day later than I had wanted to post.

John Zakour and Scott Roberts’s Working Daze for the 15th is a straggler Pi Day joke, built on the nerd couple Roy and Kathy letting the date slip their minds. This is a very slight Pi Day reference but I feel the need to include it for completeness’s sake. It reminds me of the sequence where one year Schroeder forgot Beethoven’s birthday, and was devastated.

Lincoln Peirce’s Big Nate for the 15th is a wordy bit of Nate refusing the story problem. Nate complains about a lack of motivation for the characters in it. But then what we need for a story problem isn’t the characters to do something so much as it is the student to want to solve the problem. That’s hard work. Everyone’s fascinated by some mathematical problems, but it’s hard to think of something that will compel everyone to wonder what the answer could be.

At one point Nate wonders what happens if Todd stops for gas. Here he’s just ignoring the premise of the question: Todd is given as travelling an average 55 mph until he reaches Saint Louis, and that’s that. So this question at least is answered. But he might need advice to see how it’s implied.

So this problem is doable by long division: 1825 divided by 80, and 1192 divided by 55, and see what’s larger. Can we avoid dividing by 55 if we’re doing it by hand? I think so. Here’s what I see: 1825 divided by 80 is equal to 1600 divided by 80 plus 225 divided by 80. That first is 20; that second is … eh. It’s a little less than 240 divided by 80, which is 3. So Mandy will need a little under 23 hours.

Is 23 hours enough for Todd to get to Saint Louis? Well, 23 times 55 will be 23 times 50 plus 23 times 5. 23 times 50 is 22 times 50 plus 1 times 50. 22 times 50 is 11 times 100, or 1100. So 23 times 50 is 1150. And 23 times 5 is 115. So 23 times 55 is 1265. That’s more than 1192. So Todd gets there first. I might want to figure just how much less than 23 hours Mandy needs, to be sure of my calculation, but this is how I do it without putting 55 into an ugly number like 1192.
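If you’d rather let a machine do the arithmetic, the comparison is a few lines in (say) Python; the numbers are the ones the story problem gives.

```python
# The strip's numbers: Mandy drives 1825 miles at 80 mph,
# Todd drives 1192 miles at 55 mph. Who arrives first?
mandy_hours = 1825 / 80
todd_hours = 1192 / 55

print(mandy_hours)               # 22.8125, a little under 23 hours
print(round(todd_hours, 4))      # 21.6727
print(todd_hours < mandy_hours)  # True: Todd gets there first
```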

Mark Leiknes’s Cow and Boy repeat for the 17th sees the Boy, Billy, trying to beat the lottery. He throws at it the terms chaos theory and nonlinear dynamical systems. They’re good, and probably relevant, terms. A “dynamical system” is what you’d guess from the name: a collection of things whose properties keep changing. They change because of other things in the collection. When “nonlinear” crops up in mathematics it means “oh but such a pain to deal with”. It has a more precise definition, but this is its meaning. More precisely: in a linear system, a change in the initial setup makes a proportional change in the outcome. If Todd drove to Saint Louis on a path two percent longer, he’d need two percent more time to get there. A nonlinear system doesn’t guarantee that; a two percent longer drive might take ten percent longer, or one-quarter the time, or some other weirdness. Nonlinear systems are really good for giving numbers that look random. There’ll be so many little factors that make non-negligible results that they can’t be predicted in any useful time. This is good for drawing number balls for a lottery.
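That proportionality is the whole content of “linear”, and it’s easy to see in a sketch. The linear case is Todd’s steady 55 mph; the nonlinear travel-time function below is purely made up, a hypothetical congestion term, just to show a disproportionate response.

```python
def linear_time(d):
    """Travel time at a steady 55 mph: scales proportionally with distance."""
    return d / 55

def nonlinear_time(d):
    """A toy, purely hypothetical model where congestion piles up with distance."""
    return d / 55 + (d / 500) ** 3

d = 1192
linear_ratio = linear_time(1.02 * d) / linear_time(d)
nonlinear_ratio = nonlinear_time(1.02 * d) / nonlinear_time(d)

print(round(linear_ratio, 4))  # 1.02: two percent in, two percent out
print(nonlinear_ratio > 1.02)  # True: the response is disproportionate
```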

Chaos theory turns up a lot in dynamical systems. Dynamical systems, even nonlinear ones, often have regions that behave in predictable patterns. We may not be able to say what tomorrow’s weather will be exactly, but we can say whether it’ll be hot or freezing. But dynamical systems can have regions where no prediction is possible. Not because they don’t follow predictable rules. But because any perturbation, however small, produces changes that overwhelm the forecast. This includes the difference between any possible real-world measurement and the real quantity.

Obvious question: how is there anything to study in chaos theory, then? Is it all just people looking at complicated systems and saying, yup, we’re done here? Usually the questions turn on problems such as how probable it is we’re in a chaotic region. Or what factors influence whether the system is chaotic, and how much of it is chaotic. Even if we can’t say what will happen, we can usually say something about when we can’t say what will happen, and why. Anyway if Billy does believe the lottery is chaotic, there’s not a lot he can be doing with predicting winning numbers from it. Cow’s skepticism is fair.

Ryan North’s Dinosaur Comics for the 17th is one about people asked to summon random numbers. Utahraptor is absolutely right. People are terrible at calling out random numbers. We’re more likely to summon odd numbers than we should be. We shy away from generating strings of numbers. We’d feel weird offering, say, 1234, though that’s as good a four-digit number as 1753. And to offer 2222 would feel really weird. Part of this is that there’s not really such a thing as “a” random number; it’s sequences of numbers that are random. We just pick a number from a random sequence. And we’re terrible at producing random sequences. Here’s one study, challenging people to produce digits from 1 through 9. Are their sequences predictable? If the numbers were uniformly distributed from 1 through 9, then any prediction of the next digit in a sequence should have one chance in nine of being right. It turns out human-generated sequences form patterns that could be forecast, on average, 27% of the time. Individual cases could get forecast 45% of the time.
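The study’s finding, that human “random” sequences are partly forecastable, can be imitated in miniature. This sketch is mine, not the study’s method: it scores sequences with a simple first-order predictor that always guesses whichever digit has most often followed the previous one.

```python
import random
from collections import Counter, defaultdict

def predictability(seq):
    """Fraction of digits a first-order predictor guesses right.
    The predictor guesses the digit that has most often followed
    the previous digit, over the sequence seen so far."""
    follows = defaultdict(Counter)
    correct = 0
    for prev, nxt in zip(seq, seq[1:]):
        if follows[prev]:
            guess = follows[prev].most_common(1)[0][0]
            if guess == nxt:
                correct += 1
        follows[prev][nxt] += 1
    return correct / (len(seq) - 1)

random.seed(1)
uniform = [random.randint(1, 9) for _ in range(5000)]

# A caricature of human-generated digits: most of the time it just
# counts upward from the last digit, a habit a predictor can learn.
humanish = []
for _ in range(5000):
    if humanish and random.random() < 0.6:
        humanish.append(humanish[-1] % 9 + 1)
    else:
        humanish.append(random.randint(1, 9))

u_score = predictability(uniform)
h_score = predictability(humanish)
print(round(u_score, 3))  # close to 1/9
print(round(h_score, 3))  # far above 1/9
```

The predictor can’t do better than chance on genuinely uniform digits, but it quickly learns the habits of the pattern-prone generator.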

There are some neat side results from that study too, particularly that they were able to pretty reliably tell the difference between two individuals by their “random” sequences. We may be bad at thinking up random numbers but the details of how we’re bad can be unique.

And I’m not done yet. There’s some more comic strips from last week to discuss and I’ll have that post here soon. Thanks for reading.

## The Summer 2017 Mathematics A To Z: Volume Forms

I’ve been reading Elke Stangl’s Elkemental Force blog for years now. Sometimes I even feel social-media-caught-up enough to comment, or at least to like posts. This is relevant today as I discuss one of Stangl’s suggestions for my letter-V topic.

# Volume Forms.

So sometime in pre-algebra, or early in (high school) algebra, you start drawing equations. It’s a simple trick. Lay down a coordinate system, some set of axes for ‘x’ and ‘y’ and maybe ‘z’ or whatever letters are important. Look to the equation, made up of x’s and y’s and maybe z’s and so on. Highlight all the points with coordinates whose values make the equation true. This is the logical basis for saying (e.g.) that the straight line “is” $y = 2x + 1$.

A short while later, you learn about polar coordinates. Instead of using ‘x’ and ‘y’, you have ‘r’ and ‘θ’. ‘r’ is the distance from the center of the universe. ‘θ’ is the angle made with respect to some reference axis. It’s as legitimate a way of describing points in space. Some classrooms even have a part of the blackboard (whiteboard, whatever) with a polar-coordinates “grid” on it. This looks like the lines of a dartboard. And you learn that some shapes are easy to describe in polar coordinates. A circle, centered on the origin, is ‘r = 2’ or something like that. A line through the origin is ‘θ = 1’ or whatever. The line that we’d called $y = 2x + 1$ before? … That’s … some mess. And now $r = 2\theta + 1$ … that’s not even a line. That’s some kind of spiral. Two spirals, really. Kind of wild.
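You can check the difference numerically without graphing anything. A sketch (mine, not the original post’s): sample both curves and test whether the sampled points lie on one straight line.

```python
import math

def cartesian_of_polar(r, theta):
    """Convert a polar-coordinate point to Cartesian coordinates."""
    return (r * math.cos(theta), r * math.sin(theta))

# Sample the "same" equation in both coordinate systems.
line   = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0]]
spiral = [cartesian_of_polar(2 * t + 1, t) for t in [0.0, 1.0, 2.0, 3.0]]

def collinear(pts, tol=1e-9):
    """Do all the points lie on one straight line? (Cross-product test.)"""
    (x0, y0), (x1, y1) = pts[0], pts[1]
    return all(abs((x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)) < tol
               for x, y in pts)

print(collinear(line))    # True: y = 2x + 1 is a straight line
print(collinear(spiral))  # False: r = 2θ + 1 winds around the origin
```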

And something to bother you a while. $y = 2x + 1$ is an equation that looks the same as $r = 2\theta + 1$. You’ve changed the names of the variables, but not how they relate to each other. But one is a straight line and the other a spiral thing. How can that be?

The answer, ultimately, is that the letters in the equations aren’t these content-neutral labels. They carry meaning. ‘x’ and ‘y’ imply looking at space a particular way. ‘r’ and ‘θ’ imply looking at space a different way. A shape has different representations in different coordinate systems. Fair enough. That seems to settle the question.

But if you get to calculus the question comes back. You can integrate over a region of space that’s defined by Cartesian coordinates, x’s and y’s. Or you can integrate over a region that’s defined by polar coordinates, r’s and θ’s. The first time you try this, you find … well, that any region easy to describe in Cartesian coordinates is painful in polar coordinates. And vice-versa. Way too hard. But if you struggle through all that symbol manipulation, you get … different answers. Eventually the calculus teacher has mercy and explains. If you’re integrating in Cartesian coordinates you need to use “dx dy”. If you’re integrating in polar coordinates you need to use “r dr dθ”. If you’ve never taken calculus, never mind what this means. What is important is that “r dr dθ” looks like three things multiplied together, while “dx dy” is two.

We get this explained as a “change of variables”. If we want to go from one set of coordinates to a different one, we have to do something fiddly. The extra ‘r’ in “r dr dθ” is what we get going from Cartesian to polar coordinates. And we get formulas to describe what we should do if we need other kinds of coordinates. It’s some work that introduces us to the Jacobian, which looks like the most tedious possible calculation ever at that time. (In Intro to Differential Equations we learn we were wrong, and the Wronskian is the most tedious possible calculation ever. This is also wrong, but it might as well be true.) We typically move on after this and count ourselves lucky it got no worse than that.
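A crude numerical check makes the extra ‘r’ concrete. In this sketch (the grid size is an arbitrary choice of mine) the unit disk’s area comes out as a Riemann sum in polar coordinates, once with the Jacobian factor and once without it.

```python
import math

# Compute the unit disk's area as a Riemann sum over a polar grid.
m = 400
dr, dtheta = 1.0 / m, 2 * math.pi / m
r_values = [(i + 0.5) * dr for i in range(m)]

# With the Jacobian: each little cell contributes r * dr * dtheta.
with_r = sum(r * dr * dtheta for r in r_values for _ in range(m))
# Forgetting the extra r, as if "dr dtheta" alone measured area:
without_r = sum(dr * dtheta for r in r_values for _ in range(m))

print(round(with_r, 3))     # about 3.142, the correct area, pi
print(round(without_r, 3))  # about 6.283, which is 2*pi: twice too big
```

Without the r, every cell counts the same even though the cells near the origin are physically tiny, and the total comes out wrong.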

None of this is wrong, even from the perspective of more advanced mathematics. It’s not even misleading, which is a refreshing change. But we can look a little deeper, and get something good from doing so.

The deeper perspective looks at “differential forms”. These are about how to encode information about how your coordinate system represents space. They’re tensors. I don’t blame you for wondering if they would be. A differential form uses interactions between some of the directions in a space. A volume form is a differential form that uses all the directions in a space. And satisfies some other rules too. I’m skipping those because some of the symbols involved I don’t even know how to look up, much less make WordPress present.

What’s important is the volume form carries information compactly. As symbols it tells us that this represents a chunk of space that’s constant no matter what the coordinates look like. This makes it possible to do analysis on how functions work. It also tells us what we would need to do to calculate specific kinds of problem. This makes it possible to describe, for example, how something moving in space would change.

The volume form, and the tools to do anything useful with it, demand a lot of supporting work. You can dodge having to explicitly work with tensors. But you’ll need a lot of tensor-related materials, like wedge products and exterior derivatives and stuff like that. If you’ve never taken freshman calculus don’t worry: the people who have taken freshman calculus never heard of those things either. So what makes this worthwhile?

Yes, person who called out “polynomials”. Good instinct. Polynomials are usually a reason for any mathematics thing. This is one of maybe four exceptions. I have to appeal to my other standard answer: “group theory”. These volume forms match up naturally with groups. There’s not only information about how coordinates describe a space to consider. There’s ways to set up coordinates that tell us things.

That isn’t all. These volume forms can give us new invariants. Invariants are what mathematicians say instead of “conservation laws”. They’re properties whose value for a given problem is constant. This can make it easier to work out how one variable depends on another, or to work out specific values of variables.

For example, classical physics problems like how a bunch of planets orbit a sun often have a “symplectic manifold” that matches the problem. This is a description of how the positions and momentums of all the things in the problem relate. The symplectic manifold has a volume form. That volume is going to be constant as time progresses. That is, there’s this way of representing the positions and speeds of all the planets that does not change, no matter what. It’s much like the conservation of energy or the conservation of angular momentum. And this has practical value. It’s the subject that brought my and Elke Stangl’s blogs into contact, years ago. It also has broader applicability.

There’s no way to provide an exact answer for the movement of, like, the sun and nine-ish planets and a couple major moons and all that. So there’s no known way to answer the question of whether the Earth’s orbit is stable. All the planets are always tugging one another, changing their orbits a little. Could this converge in a weird way suddenly, on geologic timescales? Might the planet go flying off out of the solar system? It doesn’t seem like the solar system could be all that unstable, or it would have fallen apart already. But we can’t rule out that some freaky alignment of Jupiter, Saturn, and Halley’s Comet might tweak the Earth’s orbit just far enough for catastrophe to unfold. Granted there’s nothing we could do about the Earth flying out of the solar system, but it would be nice to know whether we face it, we tell ourselves.

But we can answer this numerically. We can set a computer to simulate the movement of the solar system. But there will always be numerical errors. For example, we can’t use the exact value of π in a numerical computation. 3.141592 (and more digits) might be good enough for projecting stuff out a day, a week, a thousand years. But if we’re looking at millions of years? The difference can add up. We can imagine compensating for not having the value of π exactly right. But what about compensating for something we don’t know precisely, like, where Jupiter will be in 16 million years and two months?

Symplectic forms can help us. The volume form represented by this space has to be conserved. So we can rewrite our simulation so that these forms are conserved, by design. This does not mean we avoid making errors. But it means we avoid making certain kinds of errors. We’re more likely to make what we call “phase” errors. We predict Jupiter’s location in 16 million years and two months. Our simulation puts it thirty degrees farther in its circular orbit than it actually would be. This is a less serious mistake to make than putting Jupiter, say, eight-tenths as far from the Sun as it would really be.
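The phase-versus-size distinction shows up even in the simplest oscillating system. This is a sketch, not the actual celestial-mechanics codes: a harmonic oscillator stepped forward ten thousand times by plain Euler and by leapfrog, a basic symplectic method.

```python
# A simple harmonic oscillator, x'' = -x, with energy (x^2 + v^2) / 2.
def euler_step(x, v, dt):
    """Plain explicit Euler: not symplectic; energy creeps upward."""
    return x + dt * v, v - dt * x

def leapfrog_step(x, v, dt):
    """Kick-drift-kick leapfrog: a basic symplectic update."""
    v -= x * dt / 2
    x += v * dt
    v -= x * dt / 2
    return x, v

def energy(x, v):
    return 0.5 * (x * x + v * v)

dt, steps = 0.1, 10_000
xe, ve = 1.0, 0.0   # Euler's state
xl, vl = 1.0, 0.0   # leapfrog's state
for _ in range(steps):
    xe, ve = euler_step(xe, ve, dt)
    xl, vl = leapfrog_step(xl, vl, dt)

print(energy(xe, ve) > 1.0)              # True: Euler's energy has blown up
print(abs(energy(xl, vl) - 0.5) < 0.01)  # True: leapfrog stays near 0.5
```

Euler’s orbit spirals outward, the analogue of putting Jupiter at the wrong distance from the Sun; leapfrog keeps the orbit’s size right but slowly slips in phase.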

Volume forms seem, at first, a lot of mechanism for a small problem. And, unfortunately for students, they are. They’re more trouble than they’re worth for changing Cartesian to polar coordinates, or similar problems. You know, ones that the student already has some feel for. They pay off on more abstract problems. Tracking the movement of a dozen interacting things, say, or describing a space that’s very strangely shaped. Those make the effort to learn about forms worthwhile.

## Peer Gibberish

Well, this is an embarrassing thing to see: according to Nature, the publisher Springer and the Institute of Electrical and Electronics Engineers (IEEE) have had to withdraw at least 120 papers from their subscription services, because the papers were gibberish produced by a program, SCIgen, that strings together words and phrases into computer science-ish texts. SCIgen and this sort of thing are meant for fun (Nature also linked to arXiv vs snarXiv, which lets you try to figure out whether titles are actual preprints on the arXiv server or gibberish), but such nonsense papers have been accepted for conferences or published in, typically, poorly-reviewed forums, to general amusement and embarrassment when it’s noticed.

I’m also reminded of a bit of folklore from my grad school days, in a class on dynamical systems. That’s the study of physics-type problems, with the attention being not so much on actually saying what something will do from this starting point — for example, if you push this swing this hard, how long will it take to stop swinging — and more on what the different kinds of behavior are — can you make the swing just rock around a little bit, or loop around once and then rock to a stop, or loop around twice, or loop around four hundred times, or so on — and what it takes to change that behavior mode. The instructor referred us to a paper that was an important result but warned us to not bother trying to read it because nobody had ever understood it from the paper. Instead, it was understood — going back to the paper’s introduction — by people having the salient points explained by other people who’d had it taught to them in conversations, all the way back to the first understanders, who got it from the original authors, possibly while talking mathematics over at the bar. I’m embarrassed to say I don’t remember which paper it was (it was a while ago and there are a lot of key results in the field), so I haven’t even been able to figure out how to search for the paper or the lore around it.

## October 2013’s Statistics

It’s been a month since I last looked over precisely how not-staggeringly-popular I am, so it’s time again.
For October 2013 I had 440 views, down from September’s total. These came from 220 distinct viewers, down again from the 237 that September gave me. This does mean there was a slender improvement in views per visitor, from 1.97 up to 2.00. Neither of these is a record, although given that I had a poor updating record again this month that’s all tolerable.

The most popular articles from the past month are … well, mostly the comics, and the trapezoids come back again. I’ve clearly got to start categorizing the other kinds of polygons. Or else plunge directly into dynamical systems as that’s the other thing people liked. October 2013’s top hits were:

The country sending me the most readers again was the United States (226 of them), with the United Kingdom coming up second (37). Austria popped into third for, I think, the first time (25 views), followed by Denmark (21) and at long last Canada (18). I hope they still like me in Canada.

Sending just the lone reader each were a bunch of countries: Bermuda, Chile, Colombia, Costa Rica, Finland, Guatemala, Hong Kong, Laos, Lebanon, Malta, Mexico, the Netherlands, Oman, Romania, Saudi Arabia, Slovenia, Sweden, Turkey, and Ukraine. Finland and the Netherlands are repeats from last month, and the Netherlands is going on at least three months like this.