Reading the Comics, September 24, 2019: I Make Something Of This Edition


I trust nobody’s too upset that I postponed the big Reading the Comics posts of this week a day. There’s enough comics from last week to split them into two essays. Please enjoy.

Scott Shaw! and Stan Sakai’s Popeye’s Cartoon Club for the 22nd is one of a yearlong series of Sunday strips, each by different cartoonists, celebrating the 90th year of Popeye’s existence as a character. And, I’m a Popeye fan from all the way back when Popeye was still a part of the pop culture. So that’s why I’m bringing such focus to a strip that, really, just mentions the existence of algebra teachers and that they might present a fearsome appearance to people.

Popeye and Eugene popping into Goon Island. Popeye: 'Thanks for bringing us to Goon Island! Watch out, li'l Jeep! Them Goons are nutty monskers that need civilizin'! Here's Alice the Goon!' Alice: 'MNWMNWMNMN' . Popeye: 'Whatever you sez, Alice! --- !' (Sees a large Goon holding a fist over a baby Goon.) Popeye: 'He's about to squash that li'l Goon! That's all I can stands, I can't stands no more!' Popeye slugs the big Goon. Little Goon holds up a sign: 'You dummy! He's my algebra teacher!' Popeye: 'Alice, I am disgustipated with meself!' Alice: 'MWNMWN!'
Scott Shaw! and Stan Sakai’s Popeye’s Cartoon Club for the 22nd of September, 2019. This is the first (and likely last) time Popeye’s Cartoon Club has gotten a mention here. But appearances by this and by the regular Popeye comic strip (Thimble Theatre, if you prefer) should be gathered at this link.

Lincoln Peirce’s Big Nate for the 22nd has Nate seeking an omen for his mathematics test. This too seems marginal. But I can bring it back to mathematics. One of the fascinating things about having data is finding correlations between things. Sometimes we’ll find two things that seem to go together, including apparently disparate things like basketball success and test-taking scores. This can be an avenue for further research. One of these things might cause the other, or at least encourage it. Or the link may be spurious, both things caused by the same common factor. (Superstition can be one of those things: doing a thing ritually, in a competitive event, can help you perform better, even if you don’t believe in superstitions. Psychology is weird.)

Nate, holding a basketball, thinking: 'If I make this shot it means I'm gonna ace the math test!' He shoots, missing. Nate: 'If I make *this* shot I'm gonna ace the math test!' He shoots, missing. Nate: 'If *this* one goes in, I'll ace the math test!' He shoots, missing. Nate: 'THIS one COUNTS! If I make it it means I'll ace the math test!' He shoots, missing. Nate: 'OK, this is IT! If I make THIS, I WILL ace the math test!' It goes in. Dad: 'Aren't you supposed to be studying for the math test?' Nate: 'Got it covered.'
Lincoln Peirce’s Big Nate for the 22nd of September, 2019. Essays inspired by something in Big Nate, either new-run or the Big Nate: First Class vintage strips, are at this link.

But there are dangers too. Nate shows off here the danger of selecting the data set to give the result one wants. Even people with honest intentions can fall prey to this. Any real data set will have some points that just do not make sense, and look like a fluke or some error in data-gathering. Often the obvious nonsense can be safely disregarded, but you do need to think carefully to see that you are disregarding it for safe reasons. The other danger is that two things might correlate, but only by coincidence. Have enough pieces of data and sometimes they will seem to match up.
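Here is a minimal sketch of that second danger, in Python with NumPy. It is my own illustration, not anything from the strip: none of the simulated “omens” has any connection to the simulated test scores, yet if you check enough of them, one will always look impressively predictive.

import numpy as np

rng = np.random.default_rng(42)

# 30 students' test scores, and 1,000 candidate "omens" (coin flips per student).
# None of the omens has anything to do with the scores.
scores = rng.normal(loc=75, scale=10, size=30)
omens = rng.integers(0, 2, size=(1000, 30))

# Correlate every omen with the scores and keep whichever looks best.
correlations = np.array([np.corrcoef(omen, scores)[0, 1] for omen in omens])
best = int(np.argmax(np.abs(correlations)))

print(f"Best 'omen' correlates with the scores at r = {correlations[best]:+.2f}")
# Typically prints a value around 0.5 or so: impressive-looking, and pure chance.

The only cure is remembering that a correlation found by searching has to be confirmed on data you did not search.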

Norm Feuti’s Gil rerun for the 22nd has Gil practicing multiplication. It’s really about the difficulties of any kind of educational reform, especially in arithmetic. Gil’s mother is horrified by the appearance of this long multiplication. She dubs it both inefficient and harder than the way she learned. She doesn’t say the way she learned, but I’m guessing it’s the way that I learned too, which would have these problems done in three rows beneath the horizontal equals sign, with a bunch of little carry notes dotting above.

Gil: 'Mom, can you check my multiplication homework?' Mom: 'Sure .. is THIS how they're teaching you to do it?' (eg, 37x22 as 14 + 60 + 140 + 600 = 814) Gil: 'Yes.' Mom: 'You know, there's an easier way to do this?' Gil: 'My teacher said the old way was just memorizing an algorithm. The new way helps us understand what we're doing.' Mom: '*I* always understood what I was doing. It seems like they're just teaching you a less efficient algorithm.' Gil: 'Maybe I should just check my work with a calculator.' Mom: 'I have to start going to the PTA meetings.'
Norm Feuti’s Gil rerun for the 22nd of September, 2019. Essays inspired by either the rerun or the new Sunday Gil strips should be gathered at this link.

Gil’s Mother is horrified for bad reasons. Gil is doing exactly the same work that she was doing. The components of it are just written out differently. The only part of this that’s less “efficient” is that it fills out a little more paper. To me, who has no shortage of paper, this efficiency doesn’t seem worth pursuing. I also like this way of writing things out, as it separates cleanly the partial products from the summations done with them. It also means that the carries from, say, multiplying the top number by the first digit of the lower can’t get in the way of carries from multiplying by the second digit. This seems likely to make it easier to avoid arithmetic errors, or to detect errors once suspected. I’d like to think that Gil’s Mom, having this pointed out, would drop her suspicions of this different way of writing things down. But people get very attached to the way they learned things, and will give that up only reluctantly. I include myself in this; there’s things I do for little better reason than inertia.

People will get hung up on the number of “steps” involved in a mathematical process. They shouldn’t. Whether, say, “37 x 2” is done in one step, two steps, or three steps is a matter of how you’re keeping the books. Even if we agree on how much computation is one step, we’re left with value judgements. Like, is it better to do many small steps, or few big steps? My own inclination is towards reliability. I’d rather take more steps than strictly necessary, if they can all be done more surely. If you want speed, my experience is, you’re better off aiming for reliability and consistency. Speed will follow from experience.
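To make concrete that the two layouts do the same arithmetic, here is a minimal sketch in Python (my own, not anything from the strip) of the partial-products style Gil’s class uses, applied to the strip’s 37 × 22:

def partial_products(a, b):
    """Multiply two whole numbers the way Gil's class writes it out:
    one product for every pair of digits, then add them all up."""
    products = []
    for i, db in enumerate(str(b)[::-1]):      # digits of b, ones place first
        for j, da in enumerate(str(a)[::-1]):  # digits of a, ones place first
            products.append(int(da) * int(db) * 10 ** (i + j))
    return products

terms = partial_products(37, 22)
print(terms, sum(terms))   # [14, 60, 140, 600] 814

The traditional layout computes exactly these numbers too. It just adds some of them together as it goes, instead of writing every one down.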

Professor showing multiple paths from A to B on the chalkboard: 'The universe wants particles to take the easiest route from point A to point B. Mysteriously, the universe accomplishes this by first considering *every* possible path. It's doing an enormous amount of calculation just to be certain it's not taking a suboptimal route.' Caption: 'You can model reality pretty well if you imagine it's your dad planning a road trip.'
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 22nd of September, 2019. Essays which go into some aspect of Saturday Morning Breakfast Cereal turn up all the time, such as at this link.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 22nd builds on mathematical physics. Lagrangian mechanics offers great, powerful tools for solving physics problems. It also offers a philosophically challenging interpretation of physics problems. Look at the space made up of all the possible configurations of the system. Take one point to represent the way the system starts. Take another point to represent the way the system ends. Grant that the system gets from that starting point to that ending point. How does it do that? What is the path in this configuration space that goes in-between this start and this end?

We can find the path by using the Lagrangian. Particularly, integrate the Lagrangian over every possible curve that connects the starting point and the ending point. This is every possible way to match start and end. The path that the system actually follows will be an extremum. The actual path will be one that minimizes (or maximizes) this integral, compared to all the other paths nearby that it might follow. Yes, that’s bizarre. How would the particle even know about those other paths?
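In symbols, with standard notation (this formula is my addition, not something from the strip or essay): the action S is the integral of the Lagrangian L over a candidate path q(t), and the realized path is the one that makes the action stationary.

S[q] = \int_{t_0}^{t_1} L\left(q(t), \dot{q}(t), t\right)\, dt \qquad \delta S = 0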

This seems bad enough. But we can ignore the problem in classical mechanics. The extremum turns out to always match the path that we’d get from taking derivatives of the Lagrangian. Those derivatives look like calculating forces and stuff, like normal.

Then in quantum mechanics the problem reappears and we can’t just ignore it. In the quantum mechanics view no particle follows “a” “path”. It instead is found more likely in some configurations than in others. The most likely configurations correspond to extreme values of this integral. But we can’t just pretend that only the best-possible path “exists”.

Thus the strip’s point. We can represent mechanics quite well. We do this by pretending there are designated starting and ending conditions. And pretending that the system selects the best of every imaginable alternative. The incautious pop physics writer, eager to find exciting stuff about quantum mechanics, will describe this as a particle “exploring” or “considering” all its options before “selecting” one. This is true in the same way that we can say a weight “wants” to roll down the hill, or two magnets “try” to match north and south poles together. We should not mistake it for thinking that electrons go out planning their days, though. Newtonian mechanics gets us used to the idea that if we knew the positions and momentums and forces between everything in the universe perfectly well, we could forecast the future and retrodict the past perfectly. Lagrangian mechanics seems to invite us to imagine a world where everything “perceives” its future and all its possible options. It would be amazing if this did not capture our imaginations.

Billy, pointing a much older kid out to his mother: 'Mommy, you should see HIS math! He has to know numbers AND letters to do it!'
Bil Keane and Jeff Keane’s Family Circus for the 24th of September, 2019. I’m surprised there are not more appearances of this comic strip here. But Family Circus panels inspire essays at these links.

Bil Keane and Jeff Keane’s Family Circus for the 24th has young Billy amazed by the prospect of algebra, of doing mathematics with both numbers and letters. I’m assuming Billy’s awestruck by the idea of letters representing numbers. Geometry also uses quite a few letters, mostly as labels for the parts of shapes. But that seems like a less fascinating use of letters.


The second half of last week’s comics I hope to post here on Wednesday. Stick around and we’ll see how close I come to making it. Thank you.

Reading the Comics, September 28, 2019: Laconic Edition


There were more mathematically-themed comic strips last week than I had time to deal with. This is in part because of something on Saturday which took several more hours than I had expected. So let me start this week with some of the comics that, last week, mentioned mathematics in a marginal enough way that there’s nothing to say about them besides yeah, that’s a comic strip which mentioned mathematics.

Joey Alison Sayers and Jonathan Lemon’s Little Oop — a variation of Alley Oop — for the 22nd has the caveman struggling with mathematics homework. It’s fun that he has an abacus. Also that the strip keeps with the joke from earlier this year about their only dreaming of a number larger than three.

Jef Mallett’s Frazz for the 22nd sees Caulfield stressing out over a mathematics test.

Ralph Dunagin and Dana Summers’s The Middletons for the 24th has more kids stressing out over a mathematics test. Also about how time is represented in numbers.

Mark Parisi’s Off The Mark for the 24th is a bit of animal-themed wordplay on the New Math.

Gary Wise and Lance Aldrich’s Real Life Adventures for the 24th has a parent offering excuses for not helping with mathematics homework.

Eric the Circle for the 27th, by GeoMaker this time, tries putting out a formula for the area of Eric the circle.

Jef Mallett’s Frazz for the 27th has a kid wondering why they need in-person instruction for arithmetic. (I’d agree that rehearsing arithmetic skills is very easy to automate. You can make practice problems pretty near without limit. How much this has to do with mathematics is a point of debate.)

Glenn McCoy and Gary McCoy’s The Flying McCoys for the 27th is a bit of wordplay and numerals humor.

Daniel Beyer’s Long Story Short for the 28th uses arithmetic, the ever-famous 2 + 2 =, as a symbol for knowing anything.


With that, I’ve cleared the easy part of comics for the past week. When I get to the comics needing discussion the essay should post here, likely on Monday. And the Fall 2019 A to Z series should post on Tuesday, with ‘I’. Thanks for reading and for your forbearance.

Exploiting My A-to-Z Archives: Hat


I do love mathematics. Much of what I love, though, is about its history and its culture. Occasionally I get to write about mathematical conventions and notation. Doing that lets me explore both interests. Hat, from the End 2016 A-to-Z, was one such exercise. The rest of the End 2016 A-to-Z essays are at this link.

Exploiting My A-to-Z Archives: Graph


If there’s any A-to-Z I think I could improve on, it’s the first I wrote, in the summer of 2015. But it takes a couple attempts to figure out how to do anything. While there are essays I think I could improve, there are still quite worthwhile ones. Among them is Graph, which I think was my first essay to touch on graph theory. For a while in my academic career it looked like I might move into graph theory, so I’m always glad for chances to talk about it. This essay just lays out what graph-theory type graphs are.

My 2019 Mathematics A To Z: Hamiltonian


Today’s A To Z term is another I drew from Mr Wu, of the Singapore Math Tuition blog. It gives me more chances to discuss differential equations and mathematical physics, too.

The Hamiltonian we name for Sir William Rowan Hamilton, the 19th century Irish mathematical physicist who worked on everything. You might have encountered his name from hearing about quaternions. Or for coining the terms “scalar” and “tensor”. Or for work in graph theory. There’s more. He did work in Fourier analysis, which is what you get into when you feel at ease with Fourier series. And then wild stuff combining matrices and rings. He’s not quite one of those people where there’s a Hamilton’s Theorem for every field of mathematics you might be interested in. It’s close, though.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Hamiltonian.

When you first learn about physics you learn about forces and accelerations and stuff. When you major in physics you learn to avoid dealing with forces and accelerations and stuff. It’s not explicit. But you get trained to look, so far as possible, away from vectors. Look to scalars. Look to single numbers that somehow encode your problem.

A great example of this is the Lagrangian. It’s built on “generalized coordinates”, which are not necessarily, like, position and velocity and all. They include the things that describe your system. This can be positions. It’s often angles. The Lagrangian shines in problems where it matters that something rotates. Or if you need to work with polar coordinates or spherical coordinates or anything non-rectangular. The Lagrangian is, in your general coordinates, equal to the kinetic energy minus the potential energy. It’ll be a function. It’ll depend on your coordinates and on the derivative-with-respect-to-time of your coordinates. You can take partial derivatives of the Lagrangian. This tells how the coordinates, and the change-in-time of your coordinates should change over time.
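Assembled, those partial derivatives give the Euler-Lagrange equations, one for each generalized coordinate q_i. The essay doesn’t write them out; this is the standard form:

\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i} = 0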

The Hamiltonian is a similar way of working out mechanics problems. The Hamiltonian function isn’t anything so primitive as the kinetic energy minus the potential energy. No, the Hamiltonian is the kinetic energy plus the potential energy. Totally different in idea.

From that description you maybe guessed you can transfer from the Lagrangian to the Hamiltonian. Maybe vice-versa. Yes, you can, although we use the term “transform”. Specifically a “Legendre transform”. We can use any coordinates we like, just as with Lagrangian mechanics. And, as with the Lagrangian, we can find how coordinates change over time. The change of any coordinate depends on the partial derivative of the Hamiltonian with respect to a particular other coordinate. This other coordinate is its “conjugate”. (It may either be this derivative, or minus one times this derivative. By the time you’re doing work in the field you’ll know which.)
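For the record, here are the standard formulas that paragraph alludes to (my addition, not spelled out in the essay): the momentum conjugate to a coordinate, the Legendre transform that builds the Hamiltonian from the Lagrangian, and Hamilton’s equations, which pin down which member of the pair gets the minus sign.

p_i = \frac{\partial L}{\partial \dot{q}_i} \qquad H(q, p, t) = \sum_i p_i \dot{q}_i - L \qquad \dot{q}_i = \frac{\partial H}{\partial p_i}, \quad \dot{p}_i = -\frac{\partial H}{\partial q_i}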

That conjugate coordinate is the important thing. It’s why we muck around with Hamiltonians when Lagrangians are so similar. In ordinary, common coordinate systems these conjugate coordinates form nice pairs. In Cartesian coordinates, the conjugate to a particle’s position is its momentum, and vice-versa. In polar coordinates, the conjugate to the angle is the angular momentum. These are nice-sounding pairs. But that’s our good luck. These happen to match stuff we already think is important. In general coordinates one or more of a pair can be some fusion of variables we don’t have a word for and would never care about. Sometimes it gets weird. In the problem of vortices swirling around each other on an infinitely great plane? The horizontal position is conjugate to the vertical position. Velocity doesn’t enter into it. For vortices on the sphere the longitude is conjugate to the cosine of the latitude.

What’s valuable about these pairings is that they make a “symplectic manifold”. A manifold is a patch of space where stuff works like normal Euclidean geometry does. In this case, the space is in “phase space”. This is the collection of all the possible combinations of all the variables that could ever turn up. Every particular moment of a mechanical system matches some point in phase space. Its evolution over time traces out a path in that space. Call it a trajectory or an orbit as you like.

We get good things from looking at the geometry that this symplectic manifold implies. For example, if we know that one variable doesn’t appear in the Hamiltonian, then its conjugate’s value never changes. This is almost the kindest thing you can do for a mathematical physicist. But more. A famous theorem by Emmy Noether tells us that symmetries in the Hamiltonian match with conservation laws in the physics. Time-invariance, for example — time not appearing in the Hamiltonian — gives us the conservation of energy. If only distances between things, not absolute positions, matter, then we get conservation of linear momentum. Stuff like that. To find conservation laws in physics problems is the kindest thing you can do for a mathematical physicist.

The Hamiltonian was born out of planetary physics. These are problems easy to understand and, apart from the case of one star with one planet orbiting each other, impossible to solve exactly. That’s all right. The formalism applies to all kinds of problems. It’s very good at handling particles that interact with each other and maybe some potential energy. This is a lot of stuff.

More, the approach extends naturally to quantum mechanics. It takes some damage along the way. We can’t talk about “the” position or “the” momentum of anything quantum-mechanical. But what we get when we look at quantum mechanics looks very much like what Hamiltonians do. We can calculate things which are quantum quite well by using these tools. This even though they came from questions like why Saturn’s rings haven’t fallen apart and whether the Earth will stay around its present orbit.

It holds surprising power, too. Notice that the Hamiltonian is the kinetic energy of a system plus its potential energy. For a lot of physics problems that’s all the energy there is. That is, the value of the Hamiltonian for some set of coordinates is the total energy of the system at that time. And, if there’s no energy lost to friction or heat or whatever? Then that’s the total energy of the system for all time.

Here’s where this becomes almost practical. We often want to do a numerical simulation of a physics problem. Generically, we do this by looking up what all the values of all the coordinates are at some starting time t0. Then we calculate how fast these coordinates are changing with time. We pick a small change in time, Δ t. Then we say that at time t0 plus Δ t, the coordinates are whatever they started at plus Δ t times that rate of change. And then we repeat, figuring out how fast the coordinates are changing now, at this position and time.
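Written out, one step of that scheme (this is the standard explicit Euler method) looks like this, for each coordinate x:

x(t_0 + \Delta t) \approx x(t_0) + \Delta t \cdot \frac{dx}{dt}(t_0)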

The trouble is we always make some mistake, and once we’ve made a mistake, we’re going to keep on making mistakes. We can do some clever stuff to make the smallest error possible figuring out where to go, but it’ll still happen. Usually, we stick to calculations where the error won’t mess up our results.

But when we look at stuff like whether the Earth will stay around its present orbit? We can’t make each step good enough for that. Unless we get to thinking about the Hamiltonian, and our symplectic variables. The actual system traces out a path in phase space. Everywhere on that path the Hamiltonian has a particular value, the energy of the system. So use the regular methods to project most of the variables to the new time, t0 + Δ t. But the rest? Pick the values that make the Hamiltonian work out right. Also momentum and angular momentum and other stuff we know gets conserved. We’ll still make an error. But it’s a different kind of error. It’ll project to a point that’s maybe in the wrong place on the trajectory. But it’s on the trajectory.

(OK, it’s near the trajectory. Suppose the real energy is, oh, the square root of 5. The computer simulation will have an energy of 2.23607. This is close but not exactly the same. That’s all right. Each step will stay close to the real energy.)

So what we’ll get is a projection of the Earth’s orbit that maybe puts it in the wrong place in its orbit. Putting the planet on the opposite side of the sun from Venus when we ought to see Venus transiting the Sun. That’s all right, if what we’re interested in is whether Venus and Earth are still in the solar system.

There’s a special cost for this. If there weren’t we’d use it all the time. The cost is computational complexity. It’s pricey enough that you haven’t heard about these “symplectic integrators” before. That’s all right. These are the kinds of things open to us once we look long into the Hamiltonian.
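Here is a minimal sketch of that difference in error behavior, in Python. It uses a plain harmonic oscillator and the semi-implicit (symplectic) Euler method, a simpler member of the symplectic-integrator family than the energy-projection scheme described above; the step size and step count are just illustrative choices. Plain Euler lets the energy drift steadily upward, while the symplectic step keeps it hovering near the true value.

dt, steps = 0.05, 2000      # illustrative step size and number of steps
q1, p1 = 1.0, 0.0           # explicit Euler: position and momentum
q2, p2 = 1.0, 0.0           # symplectic Euler: position and momentum

def energy(q, p):
    # The Hamiltonian: kinetic energy plus potential energy.
    return 0.5 * p**2 + 0.5 * q**2

for _ in range(steps):
    # Explicit Euler: update q and p from the old values simultaneously.
    q1, p1 = q1 + dt * p1, p1 - dt * q1
    # Semi-implicit (symplectic) Euler: update p first, then q using the new p.
    p2 = p2 - dt * q2
    q2 = q2 + dt * p2

print("true energy:     ", energy(1.0, 0.0))
print("explicit Euler:  ", energy(q1, p1))   # drifts far above the true value
print("symplectic Euler:", energy(q2, p2))   # stays close to the true value

This particular symplectic step is no more work than the plain one; the real computational cost the essay mentions shows up in fancier schemes applied to fancier problems.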


This wraps up my big essay-writing for the week. I will pluck some older essays out of obscurity to re-share tomorrow and Saturday. All of Fall 2019 A To Z posts should be at this link. Next week should have the letter I on Tuesday and J on Thursday. All of my A To Z essays should be available at this link. And I am still interested in topics I might use for the letters K through N. Thank you.

Reading the Comics, September 21, 2019: Prime Numbers and the Rest


This is almost all a post about some comics that don’t need more than a mention. You know, strips that just have someone in class not buying the word problem. These are the rest of last week’s.

Before I get there, though, I want to share something. I ran across an essay by Chris K Caldwell and Yeng Xiong: What Is The Smallest Prime? The topic is about 1, and whether that should be a prime number. Everyone who knows a little about mathematics knows that 1 is generally not considered a prime number. But we’re also a bit stumped to figure out why, since the idea of “a prime number is divisible by 1 and itself” seems to fit this, even if the fit is weird. And we have an explanation for this: 1 used to be thought of as prime, but it made various theorems more clumsy to present. So it was either cut 1 out of the definition or add the equivalent work to everything, and mathematicians went for the solution that was less work. I know that I’ve shared this story around here. (I’m surprised to find I didn’t share it in my Summer 2017 A-to-Z essay about prime numbers.)

The truth is more complicated than that. The truth of anything is always more complicated than its history. Even an excellent history’s. It’s not that the short story has things wrong, precisely. It’s that matters are more complicated than that. The history includes things we forget were ever problems, like, the question of whether 1 should be a number. And the question of whether mathematicians “used to” consider 1 a prime is built on the supposition that mathematicians were a lot more uniform in their thinking than they were. Even to the individual: people were inconsistent in what they themselves wrote, because most mathematicians turn out to be people.

It’s an eight-page paper, and not at all technical, so if you’re just interested in the history of whether 1 is a prime number, this is quite readable. It also points out a word ready for resurrection that we could use to mean “1 and the prime numbers”: the incomposites.


So that’s some good reading. Now to the comic strips that you can glance at and agree are comic strips which say “math” somewhere in there. (They’d say “maths” if I read more British comic strips.)

Bob Scott’s Bear With Me for the 16th has Bear trying to help Molly get out of algebra.

Tim Rickard’s Brewster Rockit for the 17th mentions entropy, which is so central to understanding statistical mechanics and information theory. The strip uses entropy in its popular sense, that of a thing which makes stuff get worse. But that’s of mathematical importance too.

John Zakour and Scott Roberts’s Maria’s Day for the 18th is about Maria having trouble with a mathematics exam. By the 20th, though, she’s doing better, and she has reasons.

Jef Mallett’s Frazz for the 20th is set during mathematics class.


This wraps up last week’s comic strips. I hope to have my next Reading the Comics post on Sunday. And then tomorrow I get to ‘H’ in the Fall 2019 A to Z essays. Thank you for reading.

My 2019 Mathematics A To Z: Green’s function


Today’s A To Z term is Green’s function. Vayuputrii nominated the topic, and once again I went for one close to my own interests.

These are named for George Green, an English mathematician of the early 19th century. He’s one of those people who gave us our idea of mathematical physics. He’s credited with coining the term “potential”, as in potential energy, and with making people realize how studying this simplified problems. Mostly problems in electricity and magnetism, which were so very interesting back then. On the side also came work in multivariable calculus. His work most famous to mathematics and physics majors connects integrals over the surface of a shape with (different) integrals over the entire interior volume. In more specific problems, he did work on the behavior of water in canals.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Green’s function.

There’s a patch of (high school) algebra where you solve systems of equations in a couple variables. Like, you have to do one system where you’re solving, say,

6x + 1y - 2z = 1 \\  7x + 3y + z = 4 \\  -2x - y + 2z = -2

And then maybe later on you get a different problem, one that looks like:

6x + 1y - 2z = 14 \\  7x + 3y + z = -4 \\  -2x - y + 2z = -6

If you solve both of them you notice you’re doing a lot of the same work. All the same hard work. It’s only the part on the right-hand side of the equals signs that’s different. Even then, the series of steps you follow on the right-hand-side are the same. They have different numbers is all. What makes the problem distinct is the stuff on the left-hand-side. It’s the set of what coefficients times what variables add together. If you learn enough about matrices and vectors you get in the habit of writing this set of equations as one matrix equation, as

A\vec{x} = \vec{b}

Here \vec{x} holds all the unknown variables, your x and y and z and anything else that turns up. Your \vec{b} holds the right-hand side. Do enough of these problems and you notice something. You can describe how to find the solution for these equations before you even know what the right-hand-side is. You can do all the hard work of solving this set of equations for a generic set of right-hand-side constants. Fill them in when you need a particular answer.
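Here is a minimal sketch of that reuse in Python, with the two systems from above. The use of SciPy’s LU factorization is my choice of illustration; any once-and-done factorization makes the same point. The expensive work happens once, on the left-hand-side matrix, and each right-hand side afterward is cheap.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[ 6.,  1., -2.],
              [ 7.,  3.,  1.],
              [-2., -1.,  2.]])

lu, piv = lu_factor(A)            # the hard, shared work: done once

b1 = np.array([ 1.,  4., -2.])
b2 = np.array([14., -4., -6.])

x1 = lu_solve((lu, piv), b1)      # cheap for each new right-hand side
x2 = lu_solve((lu, piv), b2)
print(x1)
print(x2)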


I mentioned, while writing about Fourier series, how it turns out most of what you do to numbers you can also do to functions. This really proves itself in differential equations. Both partial and ordinary differential equations. A differential equation works with some not-yet-known function u(x). For what I’m discussing here it doesn’t matter whether ‘x’ is a single variable or a whole set of independent variables, like, x and y and z. I’ll use ‘x’ as shorthand for all that. The differential equation takes u(x) and maybe multiplies it by something, and adds to that some derivatives of u(x) multiplied by something. Those somethings can be constants. They can be other, known, functions with independent variable x. They can be functions that depend on u(x) also. But if they are, then this is a nonlinear differential equation and there’s no solving that.

So suppose we have a linear differential equation. Partial or ordinary, whatever you like. There’s terms that have u(x) or its derivatives in them. Move them all to the left-hand-side. Move everything else to the right-hand-side. This right-hand-side might be constant. It might depend on x. Doesn’t matter. This right-hand-side is some function which I’ll call f(x). This f(x) might be constant; that’s fine. That’s still a legitimate function.

Put this way, every differential equation looks like:

(\mbox{stuff with } u(x) \mbox{ and its derivatives}) = f(x)

That stuff with u(x) and its derivatives we can call an operator. An operator’s a function which has a domain of functions and a range of functions. So we can give that a name. ‘L’ is a good name here, because if it’s not the operator for a linear differential equation — a linear operator — then we’re done anyway. So whatever our differential equation was we can write it:

Lu(x) = f(x)

Writing it Lu(x) makes it look like we’re multiplying L by u(x). We’re not. We’re really not. This is more like if ‘L’ is the predicate of a sentence and ‘u(x)’ is the object. Read it like, to make up an example, ‘L’ means ‘three times the second derivative plus two x times’ and ‘u(x)’ as ‘u(x)’.
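Written in symbols, that made-up example reads:

Lu(x) = 3\frac{d^2}{dx^2}u(x) + 2x\cdot u(x)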

Still, looking at Lu(x) = f(x) and then back up at A\vec{x} = \vec{b} tells you what I’m thinking. We can find some set of instructions to, for any \vec{b} , find the \vec{x} that makes A\vec{x} = \vec{b} true. So why can’t we find some set of instructions to, for any f(x) , find the u(x) that makes Lu(x) = f(x) true?

This is where a Green’s function comes in. Or, like everybody says, “the” Green’s function. “The” here we use like we might talk about “the” roots of a polynomial. Every polynomial has different roots. So, too, does every differential equation have a different Green’s function. What the Green’s function is depends on the equation. It can also depend on what domain the differential equation applies to. It can also depend on some extra information called initial values or boundary values.

The Green’s function for a differential equation has twice as many independent variables as the differential equation has. This seems like we’re making a mess of things. It’s all right. These new variables are the falsework, the scaffolding. Once they’ve helped us get our work done they disappear. This kind of thing we call a “dummy variable”. If x is the actual independent variable, then pick something else — s is a good choice — for the dummy variable. It’s from the same domain as the original x, though. So the Green’s function is some G(x, s) . All right, but how do you find it?

To get this, you have to solve a particular special case of the differential equation. You have to solve:

L G(x, s) = \delta(x - s)

This may look like we’re not getting anywhere. It may even look like we’re getting in more trouble. What is this \delta(x - s) , for example? Well, this is a particular and famous thing called the Dirac delta function. It’s called a function as a courtesy to our mathematical physics friends, who don’t care about whether it truly is a function. Dirac is Paul Dirac, from over in physics. The one whose biography is called The Strangest Man. His delta function is a strange function. Let me say that its independent variable is t. Then \delta(t) is zero, unless t is itself zero. If t is zero then \delta(t) is … something. What is that something? … Oh … something big. It’s … just … don’t look directly at it. What’s important is the integral of this function:

\int_D\delta(t) dt =  0, \mbox{ if 0 is not in D} \\  \int_D\delta(t) dt = 1, \mbox{ if 0 is in D}

I write it this way because there’s delta functions for two-dimensional spaces, three-dimensional spaces, everything. If you integrate over a region that includes the origin, the integral of the delta function is 1. If you integrate over a region that doesn’t, the integral of the delta function is 0.

The delta function has a neat property sometimes called filtering. This is what happens if you integrate some function times the Dirac delta function. Then …

\int_D f(t)\delta(t) dt =  0, \mbox{ if 0 is not in D} \\  \int_D f(t)\delta(t) dt = f(0), \mbox{ if 0 is in D}

This may look dumb. That’s fine. This scheme is so good at getting rid of integrals where you don’t want them. Or at getting integrals in where it’d be convenient to have.

So, I have a mental model of what the Dirac delta function does. It might help you. Think of beating a drum. It can sound like many different things. It depends on how hard you hit it, how fast you hit it, what kind of stick you use, where exactly you hit it. I think of each differential equation as a different drumhead. The Green’s function is then the sound of a specific, uniform, reference hit at a reference position. This produces a sound. I can use that sound to extrapolate how every different sort of drumming would sound on this particular drumhead.

So solving this one differential equation, to find the Green’s function for a particular case, may be hard. Maybe not. Often it’s easier than some particular f(x) because the Dirac delta function is so weird that it becomes kinda easy-ish. But you do have to find one solution to this differential equation, somehow.

Once you do, though? Once you have this G(x, s) ? That is glorious. Because then, whatever your f is? The solution to Lu(x) = f(x) is:

u(x) = \int G(x, s) f(s) ds

Here the integral is over whatever the domain of the differential equation is, and whatever the domain of f is. This last integral is where the dummy variable finally evaporates. All that remains is x, as we want.
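A concrete instance may help; this is a standard textbook example, not one from the essay. Take the operator Lu = -\frac{d^2}{dx^2}u(x) on the interval from 0 to 1, with boundary values u(0) = u(1) = 0 . The Green’s function, and the solution it gives, are:

G(x, s) = x(1 - s), \mbox{ if } x \le s \\  G(x, s) = s(1 - x), \mbox{ if } x \ge s

u(x) = \int_0^1 G(x, s) f(s)\, ds

As a check: if f(x) = 1 , this integral works out to u(x) = \frac{1}{2}x(1 - x) , which does satisfy -u'' = 1 and the boundary values.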

A little bit of … arithmetic isn’t the right word. But symbol manipulation will convince you this is right, if you need convincing. (The trick is remembering that ‘x’ and ‘s’ are different variables. When you differentiate with respect to ‘x’, ‘s’ acts like a constant. When you integrate with respect to ‘s’, ‘x’ acts like a constant.)
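Spelled out, the manipulation is just the filtering property again, assuming we may move L inside the integral (L acts on x, while the integral is over s):

L u(x) = \int L G(x, s) f(s)\, ds = \int \delta(x - s) f(s)\, ds = f(x)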

What can make a Green’s function worth finding is that we do a lot of the same kinds of differential equations. We do a lot of diffusion problems. A lot of wave transmission problems. A lot of wave-transmission-with-losses problems. So there are many problems that can all use the same tools to solve.

Consider remote detection problems. This can include things like finding things underground. It also includes, like, medical sensors. We would like to know “what kind of thing produces a signal like this?” We can detect the signal easily enough. We can model how whatever it is between the thing and our sensors changes what we could detect. (This kind of thing we call an “inverse problem”, finding the thing that could produce what we know.) Green’s functions are one of the ways we can get at the source of what we can see.

Now, Green’s functions are a powerful and useful idea. They sprawl over a lot of mathematical applications. As they do, they pick up regional dialects. Things like deciding that LG(x, s) = - \delta(x - s) , for example. None of these are significant differences. But before you go poking into someone else’s field and solving their problems, take a moment. Double-check that their symbols do mean precisely what you think they mean. It’ll save you some petty quarrels.


I should have the ‘H’ essay in the Fall 2019 series on Thursday. That and all other Fall 2019 A To Z posts should be at this link.

Also, I really don’t like how those systems of equations turned out up at the top of this essay. But I couldn’t work out how to do arrays of equations all lined up along the equals sign, or other mildly advanced LaTeX stuff like doing a function-definition-by-cases. If someone knows of the Real Official Proper List of what you can and can’t do with the LaTeX that comes from a standard free WordPress.com blog I’d appreciate a heads-up. Thank you.

Reading the Comics, September 21, 2019: Filling Out The Week, Part 1 Edition


There were a couple more comic strips than made a good fit in yesterday’s recap. Here’s the two that I had much to write about.

Jason Poland’s Robbie and Bobby for the 18th is another rerun. I mentioned it back in December of 2016. Zeno’s Paradoxical Pasta plays on the most famous of Zeno’s Paradoxes: to get to a place one has to first get halfway there, but to get halfway there requires getting halfway to halfway. This goes on in infinite regression. The paradox is not a failure to understand that we can get to a place, or finish swallowing a noodle.

Sock puppets at a restaurant table. Left sock: 'It all looks so good!' Right sock: 'Surprise me, Patrick!' Left: 'I'll have Zeno's Paradoxical Pasta for two!' Right: 'Oh, that sounds exotic!' Waiter sock: 'Legend has it that if your lips meet on the same noodle, you've found true love. Kali orexi!'
Jason Poland’s Robbie and Bobby rerun for the 18th of September, 2019. This is another strip I’m gathering has lapsed into perpetual reruns, so might drop it. But essays featuring Robbie and Bobby should be at this link.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 21st gets that strip back to my attention after, like, days out of it. It’s a logic joke, as promised, and that’s mathematics enough for me. Of course the risk of dying from a lightning strike has to be even lower than the risk of being struck by lightning.

Question: 'What did the logician say to the man who was struck by lightning?' (Panel showing a logician watching someone hit by lightning.) Answer: Logician saying to the burnt man: 'Relax, the odds of dying from this are less than the odds of getting struck by lightning.'
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 21st of September, 2019. I too am surprised it’s been almost a month since an essay with Saturday Morning Breakfast Cereal, as gathered at this link. But then Andertoons went missing for like four months in 2018. All sorts of things will happen and we’re not ready for any of them.

And then there were comic strips that are just of too slight mathematical content for me to go into at length. Several of them all ran on the same day, the 15th of September. Let me give you them.

Jenny Campbell’s Flo and Friends has a couple senior citizens remembering mathematics lessons from their youth. And getting oddly smug about doing it without calculators.

Richard Thompson’s Richard’s Poor Almanac reruns a mention of infinite monkey authorship. Always fun, to my way of thinking.

Samson’s Dark Side of the Horse was the Roman Numerals joke for the week.


And that’s enough for just now. I expect to finish off the casual mentions with a Wednesday Reading the Comics post. The A to Z series should have ‘G’ tomorrow. And I’m still open for suggestions for the letters I through N. Thank you for reading.

Reading the Comics, September 20, 2019: Quarters and Bunnies Edition


Norm Feuti’s Gil did not last long enough in syndication. This is a shame. The characters were great, the humor in a mode I like, and young Gil’s fascination with shows about the paranormal was eerily close to my own young self. But it didn’t last; my understanding is newspapers were reluctant to bring in a comic strip starring an impoverished family. This is a many-faceted shame, not least because the eternal tension between Gil’s fantasy life and his reality made it one of the few strips to reproduce the most vital element of Calvin and Hobbes. But Feuti decided to resume drawing Sunday strips, and I choose to include that in my Reading the Comics reading, because this is my blog and I can make the rules here, at least.

So here’s Norm Feuti’s Gil for the 15th. A couple days ago I saw someone amazed at finally learning where sunflower seeds come from. They’re the black part in the center of a sunflower, the part that makes the big yellow flower stand out in such contrast. People were giving the poster a hard time, asking, where did he think they came from? And the answer is just, he hadn’t thought about it. Why would he? It’s quite reasonable to go through life never encountering a sunflower seed except as a snack or as part of bird or squirrel food. Where on the sunflower plant it’d even be just doesn’t come up. If you want to make this a dire commentary on society losing its sense of where things come from, all right, I won’t stop you. But I think it’s more that there are a billion things to notice in the world, and so many things have names that are fanciful or allusive or ironic, that it’s normal not to realize that a phrase might literally represent its content.

Gil: 'You ever wonder why they call it 'quarter past'?' Shandra: 'What do you mean?' Gil: 'A quarter is 25 cents, so why doesn't 'quarter past' mean 25 minutes past the hour?' Shandra: 'A 'quarter' is one fourth of something. A quarter of a dollar is 25 cents. A quarter of an hour is 15 minutes.' Gil: 'Oh ... I should make fewer observations out loud.' Shandra: 'Yeah.'
Norm Feuti’s Gil for the 15th of September, 2019. The comic doesn’t get much attention here and most of what does is repeats of the syndicated run. Still, essays mentioning Gil are at this link.

So Gil having so associated a quarter with 25 cents, rather than one-fourth of a something, makes sense to me. (Especially given, as noted, that he and his mother are poor, and so he grows up attentive to cash.)

Isaac Asimov, prolific writer of cozy mysteries, had one short story built on the idea that a person might misremember 5:50, seen on a digital clock, as half-past five. I mention this to show how the difference between a quarter of a hundred of things, and the quarter of sixty things, will get mixed together.

Greg Evans’s Luann Againn for the 15th sees Luann struggling with algebra. And thinking of ways to at least get the answers. One advantage mathematics instructors have which many other subjects don’t is that you can create more problems easily. If for some reason \frac{7x + 3}{x - 3} isn’t usable anymore, you can make it \frac{7x + 5}{x - 5} and still be testing the same skills. But if you want to (as is reasonable) stick to what’s in a published text, yeah, you’re vulnerable to this.

Luann, glaring at homework: 'Brad, did you have Crawford for algebra?' Brad: 'Yeah.' Luann: 'What grade did you get?' 'Dunno. A 'B' I guess.' 'Did you do all the problems in the book?' Yup.' 'You don't still have them, do you?' 'Yeah, I think all that stuff's in my room somewhere.' They think. Brad: '50 bucks!' Luann: 'A dollar.' Brad: '$49.' Luann: '$1.50.'
Greg Evans’s Luann Againn for the 15th of September, 2019. It originally ran the 15th of September, 1991. Essays mentioning either current Luann or vintage Luann Againn strips are at this link.

And you can’t always just change a problem arbitrarily. For example, the expression in the second panel of the top row — \frac{x^2 - 5x + 6}{x^2 + 5x + 4} — I notice factors into \frac{(x - 3)(x - 2)}{(x + 4)(x + 1)} . I don’t know the objective of Luann’s homework, but it would probably be messed up if the problem were just changed to \frac{x^2 - 5x + 8}{x^2 + 5x + 3} . Not that this couldn’t be worked, but that the work would involve annoying and complicated expressions instead of nice whole numbers or reasonable fractions.

Paul Trap’s Thatababy for the 15th presents Thatababy’s first counting-exponentially book, with the number of rabbits doubling every time. I admire the work Trap put in to drawing — in what we see here — 255 bunnies. I’m trusting there’s 128 in the last bunny panel; I’m not counting. At any rate he drew enough bunnies to not make it obvious to me where he repeats figures.

Children's book illustrations to match: 1 bunny! 2 bunnies! 4 bunnies! 8 bunnies! 16 bunnies! 32 bunnies! 64 bunnies! 128 bunnies!' Reveal that Mom is reading Baby 'My First Counting (Exponentially) Book'.
Paul Trap’s Thatababy for the 15th of September, 2019. Times that I’ve found reason to write about Thatababy are at this link.

The traditional ever-increasing bunny spiral is the Fibonacci series. But in that, each panel would on average have only about three-fifths more bunnies than the one before it. That’s good, but it isn’t going to overwhelm as fast as the promise of 256 bunnies on the next page will.
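A quick sketch of that comparison, in Python (my own, not from the strip): the ratio between consecutive Fibonacci numbers settles near 1.618, roughly three-fifths more each time, while the book’s bunny count doubles with every page.

fib = [1, 1]
for _ in range(10):
    fib.append(fib[-1] + fib[-2])

ratios = [round(b / a, 3) for a, b in zip(fib, fib[1:])]
print(ratios)   # heads toward about 1.618; the book's page-to-page ratio is a flat 2.0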

Eric the Circle for the 17th, this by Griffenetsabine, has come up here before. That was back in October of 2013, though, so I don’t blame you for forgetting.

At The Shape Singles Bar. A cube, seeing an octahedron enter, says to Eric, 'Wait, man, there she is. Wow, Eric. I think I've found my dual.'
Eric the Circle rerun for the 17th of September, 2019, this by Griffenetsabine. Since I am running across more repeats I may need to retire this strip from my regular featuring here. But Eric the Circle comics that give me something to write about are at this link.

The “dual” here is a mathematical term. Many mathematical things have duals. Polyhedrons have a commonly defined dual shape, though. Start with a polyhedron like, oh, the cube. The dual is a new polyhedron. The vertices of the dual are at the centers of the faces of the original polyhedron. And if two faces of the original polyhedron meet at an edge, then there’s an edge connecting the vertices at the centers of those faces. If several faces meet at a vertex in the original polyhedron, then in the dual there’s a face connecting the vertices dual to the original faces. Work all this out and you get, as you might expect, that the shape that’s dual to a cube is the octahedron we’re told just walked into the bar. The dual to the octahedron, meanwhile … well, that is a cube, which is nice and orderly. You might get a bit of a smile working out what the dual to a tetrahedron is.

Duals are useful, generically, because usually if you can prove something about a dual then you can prove it about the original thing. And we may find that something is easier to prove for the dual than for the original. This isn’t guaranteed, especially for geometric shapes like this, where it’s hard to say that either shape is harder to work with than the other. But it’s one of the tools we have to try sliding between the problem we need to do and the problem we can do.

Teacher: 'There's a lot of math in cooking, Nancy.' Nancy: 'Yeah, yeah, I know. 'Adding fractions'. 'Counting how much time has passed'. 'Multiplying all the amounts by 2 if I'm cooking for myself.' 'Multiplying all the amounts by 2.1 if I'm cooking for myself and Sluggo.'
Olivia Jaimes’s Nancy for the 17th of September, 2019. I’ve featured both the Ernie Bushmiller 1940s-vintage Nancy and the current run. Both vintage and current Nancy appear in essays at this link.

Olivia Jaimes’s Nancy for the 17th has claims about the usefulness of arithmetic. And Nancy skeptical of them, as you expect for a kid facing mathematics in a comic strip. I admit I’ve never needed to do much arithmetic when I cooked. The most would be figuring out how to adjust the cooking time when two things need very different temperatures. But I always do that by winging it. Now I’m curious whether there are good references for suggested alternate times.


I expect to have another Reading the Comics post here, on Monday. The A to Z series should pick up on Tuesday. And I’m still glad for suggestions for the letters I through N. Thank you for reading.

Exploiting My A-To-Z Archives: Fredholm Alternative


There could only be one good choice for which of my letter-F essays to feature today. I got a request, I think from Vayuputrii, to explain the Fredholm Alternative for this year’s A-to-Z. I had an essay about exactly that for the End 2016 A-to-Z, and didn’t think I could improve it enough to justify rewriting. It’s an idea which comes from Functional Analysis, where mathematicians get really hard-core about doing to functions the stuff we otherwise do to numbers.

Happy to say, all of the End 2016 A-to-Z essays are gathered at this link. And I hope to continue this year’s sequence on Tuesday.

Exploiting My A-To-Z Archives: Energy


A-to-Z sequences tend to accumulate themes. There’s reasons for this. Usually someone suggesting one topic will suggest several topics. Everyone has their interests and would like to hear more about them. When I pick topics, I’m inclined to the ones I think I can be interesting about, and that biases me towards my strengths. But it’s still early to guess what will dominate this year’s sequence.

But I do feel like it’s leaning toward analysis, and to mathematical physics. Which brings up one of my earlier mathematical physics pieces. The Leap Day 2016 A-to-Z had an essay on energy. Looking at the energy of a physical system can let us turn force and acceleration problems, with a lot of potentially complicated vector stuff, into nice simple stuff. Scalars. Calculations that you can do more quickly. So that’s worth looking at.

My 2019 Mathematics A To Z: Fourier series


Today’s A To Z term came to me from two nominators. One was @aajohannas, again offering a great topic. Another was Mr Wu, author of the Singapore Maths Tuition blog. I hope neither’s disappointed here.

Fourier series are named for Jean-Baptiste Joseph Fourier, and are maybe the greatest example of the theory that’s brilliantly wrong. Anyone can be wrong about something. There’s genius in being wrong in a way that gives us good new insights into things. Fourier series were developed to understand how the fluid we call “heat” flows through and between objects. Heat is not a fluid. So what? Pretending it’s a fluid gives us good, accurate results. More, you don’t need a fluid, or a thing you’re pretending is a fluid, to use Fourier series. They work for lots of stuff. The Fourier series method challenged assumptions mathematicians had made about how functions worked, how continuity worked, how differential equations worked. These problems could be sorted out. It took a lot of work. It challenged and expanded our ideas of functions.

Fourier also managed to hold political offices in France during the Revolution, the Consulate, the Empire, the Bourbon Restoration, the Hundred Days, and the Second Bourbon Restoration without getting killed for his efforts. If nothing else this shows the depth of his talents.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Fourier series.

So, how do you solve differential equations? As long as they’re linear? There’s usually something we can do. This is one approach. It works well. It has a bit of a weird setup.

The weirdness of the setup: you want to think of functions as points in space. The analogy is rather close. Think of the common association between a point in space and the coordinates that describe that point. Pretend those are the same thing. Then you can do stuff like add points together. That is, take the coordinates of both points. Add the corresponding coordinates together. Match that sum-of-coordinates to a point. This gives us the “sum” of two points. You can subtract points from one another, again by going through their coordinates. Multiply a point by a constant and get a new point. Find the angle between two points. (This is the angle formed by the line segments connecting the origin and both points.)

Functions can work like this. You can add functions together and get a new function. Subtract one function from another. Multiply a function by a constant. It’s even possible to describe an “angle” between two functions. Mathematicians usually call that the dot product or the inner product. But we will sometimes call two functions “orthogonal”. That means the ordinary everyday meaning of “orthogonal”, if anyone said “orthogonal” in ordinary everyday life.

We can take equations of a bunch of variables and solve them. Call the values of that solution the coordinates of a point. Then we talk about finding the point where something interesting happens. Or the points where something interesting happens. We can do the same with differential equations. This is finding a point in the space of functions that makes the equation true. Maybe a set of points. So we can find a function or a family of functions solving the differential equation.

You have reasons for skepticism, even if you’ll grant me treating functions as being like points in space. You might remember solving systems of equations. You need as many equations as there are dimensions of space; a two-dimensional space needs two equations. A three-dimensional space needs three equations. You might have worked four equations in four variables. You were threatened with five equations in five variables if you didn’t all settle down. You’re not sure how many dimensions of space “all the possible functions” are. It’s got to be more than the one differential equation we started with.

This is fair. The approach I’m talking about uses the original differential equation, yes. But it breaks it up into a bunch of linear equations. Enough linear equations to match the space of functions. We turn a differential equation into a set of linear equations, a matrix problem, like we know how to solve. So that settles that.

So suppose f(x) solves the differential equation. Here I’m going to pretend that the function has one independent variable. Many functions have more than this. Doesn’t matter. Everything I say here extends into two or three or more independent variables. It takes longer and uses more symbols and we don’t need that. The thing about f(x) is that we don’t know what it is, but would quite like to.

What we’re going to do is choose a reference set of functions that we do know. Let me call them g_0(x), g_1(x), g_2(x), g_3(x), \cdots going on to however many we need. It can be infinitely many. It certainly is at least up to some g_N(x) for some big enough whole number N. These are a set of “basis functions”. For any function we want to represent we can find a bunch of constants, called coefficients. Let me use a_0, a_1, a_2, a_3, \cdots to represent them. Any function we want is the sum of the coefficient times the matching basis function. That is, there’s some coefficients so that

f(x) = a_0\cdot g_0(x) + a_1\cdot g_1(x) + a_2\cdot g_2(x) + a_3\cdot g_3(x) + \cdots

is true. That summation goes on until we run out of basis functions. Or it runs on forever. This is a great way to solve linear differential equations. This is because we know the basis functions. We know everything we care to know about them. We know their derivatives. We know everything on the right-hand side except the coefficients. The coefficients matching any particular function are constants. So the derivatives of f(x) , written as the sum of coefficients times basis functions, are easy to work with. If we need second or third or more derivatives? That’s no harder to work with.

You may know something about matrix equations. That is that solving them takes freaking forever. The bigger the equation, the more forever. If you have to solve eight equations in eight unknowns? If you start now, you might finish in your lifetime. For this function space? We need dozens, hundreds, maybe thousands of equations and as many unknowns. Maybe infinitely many. So we seem to have a solution that’s great apart from how we can’t use it.

Except. What if the equations we have to solve are all easy? If we have to solve a bunch that looks like, oh, 2a_0 = 4 and 3a_1 = -9 and 2a_2 = 10 … well, that’ll take some time, yes. But not forever. Great idea. Is there any way to guarantee that?

It’s in the basis functions. If we pick functions that are orthogonal, or are almost orthogonal, to each other? Then we can turn the differential equation into an easy matrix problem. Not as easy as in the last paragraph. But still, not hard.

So what’s a good set of basis functions?

And here, about 800 words later than everyone was expecting, let me introduce the sine and cosine functions. Sines and cosines make great basis functions. They don’t grow without bounds. They don’t dwindle to nothing. They’re easy to differentiate. They’re easy to integrate, which is really special. Most functions are hard to integrate. We even know what they look like. They’re waves. Some have long wavelengths, some short wavelengths. But waves. And … well, it’s easy to make sets of them orthogonal.
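That orthogonality is easy enough to check numerically, too. Here is a minimal sketch in Python (the period, the grid size, and the handful of sine functions are my own arbitrary choices, not anything from this essay). It approximates the inner products among a few sine basis functions and shows the resulting matrix is essentially diagonal, which is why each coefficient can be solved for on its own.

```python
import numpy as np

L = 2.0                                        # the period
x = np.linspace(0.0, L, 4000, endpoint=False)  # sample points over one period
dx = x[1] - x[0]

def s(j):
    """The j-th sine basis function, with j whole oscillations per period."""
    return np.sin(2.0 * np.pi * j * x / L)

def inner(f_vals, g_vals):
    """Approximate the inner product: integrate f times g over one period."""
    return np.sum(f_vals * g_vals) * dx

# Matrix of inner products among the first five basis functions.
gram = np.array([[inner(s(j), s(k)) for k in range(1, 6)] for j in range(1, 6)])
print(np.round(gram, 6))   # nearly diagonal: off-diagonal entries are about zero
```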

We have to set some rules. The first is that each of these sine and cosine basis functions has a period. That is, after some time (or distance), they repeat. They might repeat before that. Most of them do, in fact. But we’re guaranteed a repeat after no longer than some period. Call that period ‘L’.

Each of these sine and cosine basis functions has to have a whole number of complete oscillations within the period L. So we can say something about the sine and cosine functions. They have to look like these:

s_j(x) = \sin\left(\frac{2\pi j}{L} x\right)

c_k(x) = \cos\left(\frac{2\pi k}{L} x\right)

Here ‘j’ and ‘k’ are some whole numbers. I have two sets of basis functions at work here. Don’t let that throw you. We could have labelled them all as g_k(x) , with some clever scheme that told us for a given k whether it represents a sine or a cosine. It’s less hard work if we have s’s and c’s. And if we have coefficients of both a’s and b’s. That is, we suppose the function f(x) is:

f(x) = \frac{1}{2}a_0 + b_1 s_1(x) + a_1 c_1(x) + b_2 s_2(x) + a_2 c_2(x) + b_3 s_3(x) + a_3 c_3(x) + \cdots

This, at last, is the Fourier series. Each function has its own series. A “series” is a summation. It can be of finitely many terms. It can be of infinitely many. Often infinitely many terms give more interesting stuff. Like this, for example. Oh, and there’s a bare \frac{1}{2}a_0 there, not multiplied by anything more complicated. It makes life easier. It lets us see that the Fourier series for, like, 3 + f(x) is the same as the Fourier series for f(x), except for the leading term. The ½ before that makes easier some work that’s outside the scope of this essay. Accept it as one of the merry, wondrous appearances of ‘2’ in mathematics expressions.
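As a concrete illustration, here is a short Python sketch (the example function and the number of terms are my own choices, not anything from the essay) that estimates the coefficients numerically and rebuilds the function from the partial sum, in the same a’s-and-b’s notation as above.

```python
import numpy as np

L = 2.0
x = np.linspace(0.0, L, 4000, endpoint=False)
dx = x[1] - x[0]
f = x * (L - x)                  # an arbitrary smooth function on one period

def a(k):                        # cosine coefficients; a(0)/2 is the constant term
    return (2.0 / L) * np.sum(f * np.cos(2 * np.pi * k * x / L)) * dx

def b(k):                        # sine coefficients
    return (2.0 / L) * np.sum(f * np.sin(2 * np.pi * k * x / L)) * dx

N = 20
series = a(0) / 2 + sum(b(k) * np.sin(2 * np.pi * k * x / L)
                        + a(k) * np.cos(2 * np.pi * k * x / L)
                        for k in range(1, N + 1))
print(np.max(np.abs(series - f)))   # small, and it shrinks as N grows
```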

It’s great for solving differential equations. It’s also great for encoding and compressing signals. The sines and the cosines are standard functions, after all. We can send all the information we need to reconstruct a function by sending the coefficients for it. This can also help us pick out signal from noise. Noise has a Fourier series that looks a particular way. If you take the coefficients for a noisy signal and remove that? You can get a good approximation of the original, noiseless, signal.

This all seems great. That’s a good time to feel skeptical. First, like, not everything we want to work with looks like waves. Suppose we need a function that looks like a parabola. It’s silly to think we can add a bunch of sines and cosines and get a parabola. Like, a parabola isn’t periodic, to start with.

So it’s not. To use Fourier series methods on something that’s not periodic, we use a clever technique: we tell a fib. We declare that the period is something bigger than we care about. Say the period is, oh, ten million years long. A hundred light-years wide. Whatever. We trust that the difference between the function we do want, and the function that we calculate, will be small. We trust that if someone ten million years from now and a hundred light-years away wishes to complain about our work, we will be out of the office that day. Letting the period L be big enough is a good reliable tool.

The other thing? Can we approximate any function as a Fourier series? Like, at least chunks of parabolas? Polynomials? Chunks of exponential growths or decays? What about sawtooth functions, that rise and fall? What about step functions, that are constant for a while and then jump up or down?

The answer to all these questions is “yes,” although drawing out the word and raising a finger to say there are some issues we have to deal with. One issue is that most of the time, we need an infinitely long series to represent a function perfectly. This is fine if we’re trying to prove things about functions in general rather than solve some specific problem. It’s no harder to write the sum of infinitely many terms than the sum of finitely many terms. You write an ∞ symbol instead of an N in some important places. But if we want to solve specific problems? We probably want to deal with finitely many terms. (I hedge that statement on purpose. Sometimes it turns out we can find a formula for all the infinitely many coefficients.) This will usually give us an approximation of the f(x) we want. The approximation can be as good as we want, but to get a better approximation we need more terms. Fair enough. This kind of tradeoff doesn’t seem too weird.

Another issue is in discontinuities. If f(x) jumps around? If it has some point where it’s undefined? If it has corners? Then the Fourier series has problems. Summing up sines and cosines can’t give us a sudden jump or a gap or anything. Near a discontinuity, the Fourier series will get this high-frequency wobble. A bigger jump, a bigger wobble. You may not blame the series for not representing a discontinuity. But it does mean that what is, otherwise, a pretty good match for the f(x) you want gets this region where it stops being so good a match.
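That wobble has a name, the Gibbs phenomenon, and it’s easy to see in a sketch. Here is a little Python example (a square wave of my own choosing, not anything from the essay): the partial sums overshoot the jump by roughly nine percent of the jump’s size no matter how many terms we pile on.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)
square = np.sign(np.sin(x))               # jumps between -1 and +1

def partial_sum(n_terms):
    """Sum the square wave's Fourier series: (4/pi) sin(kx)/k over odd k."""
    total = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):
        total += (4.0 / np.pi) * np.sin(k * x) / k
    return total

for n in (10, 100, 1000):
    print(n, np.max(partial_sum(n)))      # hovers near 1.18, never settling to 1
```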

That’s all right. These issues aren’t bad enough, or unpredictable enough, to keep Fourier series from being powerful tools. Even when we find problems for which sines and cosines are poor fits, we use this same approach. Describe a function we would like to know as the sums of functions we choose to work with. Fourier series are one of those ideas that helps us solve problems, and guides us to new ways to solve problems.


This is my last big essay for the week. All of Fall 2019 A To Z posts should be at this link. The letter G should get its chance on Tuesday and H next Thursday. All of the A To Z essays should be available at this link. If you’d like to nominate topics for essays, I’m asking for the letters I through N at this link. Thank you.

Reading the Comics, September 14, 2019: Friday the 13th Edition


The past week included another Friday the 13th. Several comic strips found that worth mention. So that gives me a theme by which to name this look over the comic strips.

Charles Schulz’s Peanuts rerun for the 12th presents a pretty wordy algebra problem. And Peppermint Patty, in the grip of math anxiety, freezing up and shutting down. One feels for her. Great long strings of words frighten anyone. The problem seems a bit complicated for kids Peppermint Patty’s and Franklin’s age. But the problem’s phrasing isn’t helping. One might notice, say, that a parent’s age will be some nice multiple of a child’s in a year or two. That in ten years a man’s age will be 14 greater than his children’s combined ages then? What imagination does that inspire?

Francis, reading: 'Problem 5. A man has a daughter and a son. The son is three years older than the daughter. In one year the man will be 6 times as old as the daughter is now, and in ten years he will be 14 years older than the combined ages of his children. What is the man's present age?' Peppermint Patty: 'I'm sorry, we are unable to complete your call. Please check the number and dial again!'
Charles Schulz’s Peanuts rerun for the 12th of September, 2019. It originally ran the 14th of September, 1972. Essays mentioning something inspired by Peanuts should be gathered at this link.

Grant Peppermint Patty her fears. The situation isn’t hopeless. It helps to write out just what we know, and what we would like to know. At least what we would like to know if we’ve granted the problem worth solving. What we would like is to know the man’s age. That’s some number; let’s call it M. What we know are things about how M relates to his daughter’s and his son’s age, and how those relate to one another. Since we know several things about the daughter’s age and the son’s age it’s worth giving those names too. Let’s say D for the daughter’s age and S for the son’s.

So. We know the son is three years older than the daughter. This we can write as S = D + 3 . We know that in one year, the man will be six times as old as the daughter is now. In one year the man will be M + 1 years old. The daughter’s age now is D; six times that is 6D. So we know that M + 1 = 6D . In ten years the man’s age will be M + 10; the daughter’s age, D + 10; the son’s age, S + 10. In ten years, M + 10 will be 14 plus D + 10 plus S + 10. That is, M + 10 = 14 + D + 10 + S + 10 . Or if you prefer, M + 10 = D + S + 34 . Or even, M = D + S + 24 .

So this is a system of three equations, all linear, in three variables. This is hopeful. We can hope there will be a solution. And there is. There are different ways to find an answer. Since I’m not grading this, you can use whichever approach feels most comfortable to you. The problem still seems a bit advanced for Peppermint Patty and Franklin.
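If you would like to see the answer without picking a method by hand, here is a quick sketch (my own code, not part of the strip or of any official solution) that sets the three equations up as a matrix problem and lets numpy solve it.

```python
import numpy as np

# Variable order is [M, D, S]: the man's, daughter's, and son's present ages.
A = np.array([[0, -1,  1],    #      -D + S = 3   (son is three years older)
              [1, -6,  0],    #  M - 6D     = -1  (from M + 1 = 6D)
              [1, -1, -1]])   #  M -  D - S = 24  (from the ten-years condition)
rhs = np.array([3, -1, 24])

M, D, S = np.linalg.solve(A, rhs)
print(M, D, S)   # 41.0 7.0 10.0
```

That gives a man of 41, a daughter of 7, and a son of 10, which checks out against the original wording: in one year he is 42, six times the daughter’s current 7; in ten years he is 51, which is 14 more than the children’s combined 37.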

Timmy, reading the news: 'A Stanford University math computer found the largest prime number, grandpa. It's 13 million digits long.' Burl: '13 million digits! That's gonna cost a fortune to print in kids' math books!' Dale: 'It's probably a mistake. Joy? Get me my pocket calculator out of my office supplies caboodle.'
Julie Larson’s The Dinette Set rerun for the 13th of September, 2019. It originally ran the 5th of November, 2008. Essays built on something from The Dinette Set should be gathered at this link.

Julie Larson’s The Dinette Set rerun for the 13th has a bit of talk about a mathematical discovery. The comic is accurate enough for its publication. In 2008 a number known as M43112609 was proven to be prime. The number, 2^43,112,609 – 1, is some 12,978,189 digits long. It’s still the fifth-largest known prime number (as I write this).

Prime numbers of the form 2^N – 1 for some whole number N are known as Mersenne primes. These are named for Marin Mersenne, a 17th century French friar and mathematician. They’re a neat set of numbers. Each Mersenne prime matches some perfect number. Nobody knows whether there are finitely many or infinitely many Mersenne primes. Every even perfect number has a form that matches to some Mersenne prime. It’s unknown whether there are any odd perfect numbers. As often happens with number theory, the questions are easy to ask but hard to answer. But all the largest known prime numbers are Mersenne primes; they’re of a structure we can test pretty well. At least that electronic computers can test well; the last time the largest known prime was found by mere mechanical computer was 1951. The last time a non-Mersenne was the largest known prime was from 1989 to 1992, and before that, 1951.
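The structure that makes Mersenne numbers easy to test is the Lucas-Lehmer test, which neither the strip nor the paragraph above spells out, but it is short enough to sketch. Here is a toy Python version (my own implementation, fine for small exponents; the record-setting searches use heavily optimized arithmetic).

```python
def lucas_lehmer(p):
    """Return True if 2**p - 1 is prime, for an odd prime exponent p."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Keep the exponents that give Mersenne primes; note that 11 and 23 drop out.
print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 31) if lucas_lehmer(p)])
# [3, 5, 7, 13, 17, 19, 31]
```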

Numeral 3, alongside a 1, at a counselor: 'We used to be unlucky, but we turned it around!'
Mark Parisi’s Off The Mark for the 13th of September, 2019. Essays including discussion of Off The Mark should be gathered at this link.

Mark Parisi’s Off The Mark for the 13th starts off the jokes about 13 for this edition. It’s also the anthropomorphic-numerals joke for the week.

Panel of good luck: ladybugs, the number 7, and four-leaf clovers. Panel of bad luck: black cats, the number 13, and serial killers. Panel of neutral luck: hamsters, the number 20, and yams.
Doug Savage’s Savage Chickens for the 13th of September, 2019. Appearances by the Savage Chickens in my essays are at this link.

Doug Savage’s Savage Chickens for the 13th is a joke about the connotations of numbers, with (in the western tradition) 7 lucky and 13 unlucky. And many numbers just lack any particular connotation.

Nervous cat: 'Today is Friday the 13th! Isn't 13 an unlucky number?' Snow: 'It's always been a lucky number for me.' (Panel reveals Snow to have 13 kittens.)
T Shepherd’s Snow Sez for the 13th of September, 2019. The occasional appearance by Snow Sez in my essays should be at this link.

T Shepherd’s Snow Sez for the 13th finishes off the unlucky-13 jokes. It observes that whatever a symbol might connote generally, your individual circumstances are more important. There are people for whom 13 is a good omen, or for whom Mondays are magnificent days, or for whom black cats are lucky.


These are all the comics I can write paragraphs about. There were more comics mentioning mathematics last week. Here were some of them:

Brian Walker, Greg Walker, and Chance Browne’s Hi and Lois for the 14th supposes that a “math nerd” can improve Thirsty’s golf game.

Bill Amend’s FoxTrot Classics for the 14th, rerunning a strip from 1997, is a word problem joke. I needed to re-read the panels to see what Paige’s complaint was about.

Greg Evans’s Luann Againn for the 14th, repeating a strip from 1991, is about prioritizing mathematics homework. I can’t disagree with putting off the harder problems. It’s good to have experience, and doing similar but easier problems can help one crack the harder ones.

Jonathan Lemon’s Rabbits Against Magic for the 14th is the Rubik’s Cube joke for the week.


And that’s my comic strips for the week. I plan to have the next Reading the Comics post here on Sunday. The A to Z series resumes tomorrow, all going well. I am seeking topics for the letters I through N, at this post. Thank you for reading, and for offering your thoughts.

My 2019 Mathematics A To Z: Encryption schemes


Today’s A To Z term is encryption schemes. It’s another suggested by aajohannas. It’s a chance to dip into information theory.

Mr Wu, author of the Mathtuition88 blog, suggested the Extreme Value Theorem. I was tempted and then realized that I had written this in the 2018 A-to-Z, as the “X” letter. The end of the alphabet has a shortage of good mathematics words. Sometimes we have to work around problems.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Encryption schemes.

Why encrypt anything?

The oldest reason is to hide a message, at least from all but select recipients. Ancient encryption methods will substitute one letter for another, or will mix up the order of letters in a message. This won’t hide a message forever. But it will slow down a person trying to decrypt the message until they decide they don’t need to know what it says. Or decide to bludgeon the message-writer into revealing the secret.

Substituting one letter for another won’t stop an eavesdropper from working out the message. Not indefinitely, anyway. There are patterns in the language. Any language, but take English as an example. A single-letter word is either ‘I’ or ‘A’. A two-letter word has a great chance of being ‘in’, ‘on’, ‘by’, ‘of’, ‘an’, or a couple other choices. Solving this is a fun pastime, for people who like this. If you need it done fast, let a computer work it out.
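A computer does it by counting. Here is a tiny sketch (the ciphertext is a made-up example of mine, an English sentence with every letter shifted the same way): tally the letters, and the most common ones are strong candidates for ‘e’, ‘t’, and the other frequent English letters.

```python
from collections import Counter

ciphertext = "WKLV LV D VHFUHW PHVVDJH WKDW ZH ZRXOG OLNH WR KLGH"
counts = Counter(c for c in ciphertext if c.isalpha())
print(counts.most_common(5))
# The letters that appear most often likely stand for common English letters.
```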

To hide the message better requires being cleverer. For example, you could substitute letters according to a slightly different scheme for each letter in the original message. The Vigenère cipher is an example of this. I remember some books from my childhood, written in the second person. They had programs that you-the-reader could type in to live the thrill of being a child secret agent computer programmer. This encryption scheme was one of the programs used for passing on messages. We can make the schemes more complicated yet, but that won’t give us better insight.
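Here is a bare-bones sketch of the idea in Python (my own version, not the program from those books): each letter gets shifted by an amount that cycles through a keyword, so the same plaintext letter comes out differently in different positions.

```python
def vigenere(message, key, decrypt=False):
    """Shift each letter by the next letter of the key, cycling the key."""
    out = []
    key_shifts = [ord(k) - ord('A') for k in key.upper()]
    i = 0
    for ch in message.upper():
        if not ch.isalpha():
            out.append(ch)            # leave spaces and punctuation alone
            continue
        shift = key_shifts[i % len(key_shifts)]
        if decrypt:
            shift = -shift
        out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        i += 1
    return ''.join(out)

secret = vigenere("MEET AT THE OAK TREE", "RIBBON")
print(secret)                                    # the two E's in MEET encrypt differently
print(vigenere(secret, "RIBBON", decrypt=True))  # MEET AT THE OAK TREE
```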

The objective is to turn the message into something less predictable. An encryption which turns, say, ‘the’ into ‘rgw’ will slow the reader down. But if they pay attention and notice, oh, the text also has the words ‘rgwm’, ‘rgwy’, and ‘rgwb’ turn up a lot? It’s hard not to suspect these are ‘the’, ‘them’, ‘they’, and ‘then’. If a different three-letter code is used for every appearance of ‘the’, good. If there’s a way to conceal the spaces as something else, that’s even better, if we want it harder to decrypt the message.

So the messages hardest to decrypt should be the most random. We can give randomness a precise definition. We owe it to information theory, which is the study of how to encode and successfully transmit and decode messages. In this, the information content of a message is its entropy. Yes, the same word as used to describe broken eggs and cream stirred into coffee. The entropy measures how likely each possible message is. Encryption matches the message you really want with a message of higher entropy. That is, one that’s harder to predict. Decrypting reverses that matching.
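For the curious, the standard formula here, Shannon’s, is easy to compute even though the essay doesn’t spell it out. Here is a sketch with some made-up probability distributions: the more evenly spread the probabilities, the higher the entropy, and the harder each message is to predict.

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: -sum of p * log2(p) over the possible messages."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))       # 1.0 bit: a fair coin flip
print(entropy([0.9, 0.1]))       # about 0.47 bits: mostly predictable
print(entropy([0.25] * 4))       # 2.0 bits: four equally likely messages
```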

So what goes into a message? We call them words, or codewords, so we have a clear noun to use. A codeword is a string of letters from an agreed-on alphabet. The terminology draws from common ordinary language. Cryptography grew out of sending sentences.

But anything can be the letters of the alphabet. Any string of them can be a codeword. An unavoidable song from my childhood told the story of a man asking his former lover to tie a yellow ribbon around an oak tree. This is a tiny alphabet, but it only had to convey two words, signalling whether she was open to resuming their relationship. Digital computers use an alphabet of two memory states. We label them ‘0’ and ‘1’, although we could as well label them +5 and -5, or A and B, or whatever. It’s not like actual symbols are scrawled very tight into the chips. Morse code uses dots and dashes and short and long pauses. Naval signal flags have a set of shapes and patterns to represent the letters of the alphabet, as well as common or urgent messages. There is not a single universally correct number of letters or length of words for encryption. It depends on what the code will be used for, and how.

Naval signal flags help me to my next point. There’s a single pattern which, if shown, communicates the message “I require a pilot”. Another, “I am on fire and have dangerous cargo”. Still another, “All persons should report on board as the vessel is about to set to sea”. These are whole sentences; they’re encrypted into a single letter.

And this is the second great use of encryption. English — any human language — has redundancy to it. Think of the sentence “No, I’d rather not go out this evening”. It’s polite, but is there anything in it not communicated by texting back “N”? An encrypted message is, often, shorter than the original. To send a message costs something. Time, if nothing else. To send it more briefly is typically better.

There are dangers to this. Strike out any word from “No, I’d rather not go out this evening”. Ask someone to guess what belongs there. Only the extroverts will have trouble. I guess if you strike out “evening” people might guess “today” or “tomorrow” or something. The sentiment of the sentence remains.

But strike out a letter from “N” and ask someone to guess what was meant. And this is a danger of encryption. The encrypted message has a higher entropy, a higher unpredictability. If some mistake happens in transmission, we’re lost.

We can fight this. It’s possible to build checks into an encryption. To carry a bit of extra information that lets one know that the message was garbled. These are “error-detecting codes”. It’s even possible to carry enough extra information to correct some errors. These are “error-correcting codes”. There are limits, of course. This kind of error-correcting takes calculation time and message space. We lose some economy but gain reliability. There is a general lesson in this.
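The simplest error-detecting code is a single parity bit. Here is a toy sketch (my own example, far simpler than anything used in practice): one extra bit buys the ability to notice that a single bit flipped in transit, though not to say which one, let alone fix it.

```python
def add_parity(bits):
    """Append one bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def looks_garbled(bits_with_parity):
    """True if the number of 1s is odd, meaning some single bit flipped in transit."""
    return sum(bits_with_parity) % 2 == 1

sent = add_parity([1, 0, 1, 1, 0])
print(looks_garbled(sent))                 # False: arrived intact
received = sent.copy()
received[2] ^= 1                           # simulate one bit flipped in transit
print(looks_garbled(received))             # True: the error is detected
```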

And not everything can compress. There are (if I’m reading this right) 26 letter, 10 numeral, and four repeater flags used under the International Code of Signals. So there are at most 40 signals that could be reduced to a single flag. If we need to communicate “I am on fire but have no dangerous cargo” we’re at a loss. We have to spell things out more. A quick proof by way of the pigeonhole principle tells us that not every message can compress. But this is all right. There are many messages we will never need to send. (“I am on fire and my cargo needs updates on Funky Winkerbean.”) If it’s mostly those that have no compressed version, who cares?

Encryption schemes are almost as flexible as language itself. There are families of kinds of schemes. This lets us fit schemes to needs: how many different messages do we need to be able to send? How sure do we need to be that errors are corrected? Or that errors are detected? How hard do we want it to be for eavesdroppers to decode the message? Are we able to set up information with the intended recipients separately? What we need, and what we are willing to do without, guide the scheme we use.


Thank you again for reading. All of Fall 2019 A To Z posts should be at this link. I hope to have a letter F piece on Thursday. All of the A To Z essays should be at this link and if I can sort out some trouble with the first two, they will be soon. And if you’d like to nominate topics for essays, I’m asking for the letters I through N at this link.

I Ask For The Second Topics For My Fall 2019 Mathematics A-to-Z


We’re only in the third week of the Fall 2019 Mathematics A-to-Z, but this is when I should be nailing down topics for the next several letters. So again, I ask you kind readers for suggestions. I’ve done five A-to-Z sequences before, from 2015 through 2018, and am listing the essays I’ve already written for the middle part of the alphabet. I’m open to revisiting topics, if I think I can improve on what I already wrote. But I reserve the right to use whatever topic feels most interesting to me.

To suggest anything for the letters I through N please leave a comment here. Also do please let me know if you have a mathematics blog, a Twitter or Mathstodon account, a YouTube channel, or anything else that you’d like to share.

I.

J.

K.

L.

M.

N.

I thank you again for any thoughts you have. Please ask if there are any questions. I hope to be open to topics in any field of mathematics, including ones I don’t really know. The fun and terror of writing about a thing I’m only learning about is part of what I get from this kind of project.

Reading the Comics, September 12, 2019: This Threatens To Mess Up My Plan Edition


There were a healthy number of comic strips with at least a bit of mathematical content the past week. Enough that I would maybe be able to split them across three essays in all. This conflicts with my plans to post two A-To-Z essays, and two short pieces bringing archived things back to some attention, when you consider the other thing I need to post this week. Well, I’ll work out something, this week at least. But if Comic Strip Master Command ever sends me a really busy week I’m going to be in trouble.

Bud Blake’s Tiger rerun for the 7th has Punkinhead ask one of those questions so basic it ends up being good and deep. What is arithmetic, exactly? Other than that it’s the mathematics you learn in elementary school that isn’t geometry? — an answer that’s maybe not satisfying but at least has historical roots. The quadrivium, four of the seven liberal arts of old, were arithmetic, geometry, astronomy, and music. Each of these has a fair claim on being a mathematics study, though I’d agree that music is a small part of mathematics these days. (I first wrote that music was a “minor” part, and didn’t want people to think I was making a pun, but you’ll notice I’m sharing it anyway.) I can’t say what people who study music learn about mathematics these days. Still, I’m not sure I can give a punchy answer to the question.

Punkinhead: 'Can you answer an arithmetic question for me, Julian?' Julian: 'Sure.' Punkinhead: 'What is it?'
Bud Blake’s Tiger for the 7th of September, 2019. Essays built on something mentioned in Tiger should appear at this link.

Mathworld offers the not-quite-precise definition that arithmetic is the field of mathematics dealing with integers or, more generally, numerical computation. But then it also offers a mnemonic for the spelling of arithmetic, which I wouldn’t have put in the fourth sentence of an article on the subject. I’m also not confident in that limitation to integers. Arithmetic certainly is about things we do on the integers, like addition and subtraction, multiplication and division, powers, roots, and factoring. So, yes, adding five and two is certainly arithmetic. But would we say that adding one-fifth and two is not arithmetic? Most other definitions I find allow that it can be about the rational numbers, or the real numbers. Some even accept the complex-valued numbers. The core is addition and subtraction, multiplication and division.

Arithmetic blends almost seamlessly into more complicated fields. One is number theory, which is the posing of problems that anyone can understand and that nobody can solve. If you ever run across a mathematical conjecture that’s over 200 years old and that nobody’s made much progress on besides checking that it’s true for all the whole numbers below 2^1,000,000,000 – 1, it’s probably number theory. Another is group theory, in which we think about structures that look like arithmetic without necessarily having all its fancy features like, oh, multiplication or the ability to factor elements. And it weaves into computing. Most computers rely on some kind of floating-point arithmetic, which approximates a wide range of the rational numbers that we’d expect to actually need.

So arithmetic is one of those things so fundamental and universal that it’s hard to take a chunk and say that this is it.

Maria: 'So, Dad, we're doing division in school, OK? When ya divide two, ya get less, right? So now that you got me *an'* Lily, you got to divide your love, right?' Dad: 'Love doesn't work that way, sweetie. The more people you love, the more love you have to give!' Maria, later, to Lily: 'Know what? I don't understand love *or* math.' Lily, thinking: 'Hey, I just go with the flow.'
John Zakour and Scott Roberts’s Maria’s Day for the 8th of September, 2019. Essays with some mention of Maria’s Day should be gathered at this link.

John Zakour and Scott Roberts’s Maria’s Day for the 8th has Maria fretting over what division means for emotions. I was getting ready to worry about Maria having the idea division means getting less of something. Five divided by one-half is not less than either five or one-half. My understanding is this unsettles a great many people learning division. But she does explicitly say, divide two, which I’m reading as “divide by two”. (I mean to be charitable in my reading of comic strips. It’s only fair.)

Still, even division into two things does not necessarily make things less. One of the fascinating and baffling discoveries of the 20th century was the Banach-Tarski Paradox. It’s a paradox only in that it defies intuition. According to it, one ball can be divided into as few as five pieces, and the pieces reassembled to make two whole balls. I would not expect Maria’s Dad to understand this well enough to explain.

Slylock looking over a three-person lineup. 'One of these apes hijacked a truckload of bananas. When questioned, each one made a statement that was the opposite of the truth. Moe said: 'I took it.' Larry said: 'Moe took it.' Curly said: 'It wasn't Moe or Larry'. Help Slylock Fox decide which one is guilty.' Solution: the opposite of each ape's answer is ... moe: 'I didn't take it.' Larry: 'Moe didn't take it.' Curly: 'It was Moe or Larry.' If all three statements are true, only Larry could have hijacked the truck.'
Bob Weber Jr’s Slylock Fox and Comics for Kids for the 9th of September, 2019. I would have sworn there were more essays mentioning Slylock Fox than this, but here’s the whole set of tagged pieces. I guess they’re not doing as many logic puzzles and arithmetic games as I would have guessed.

Bob Weber Jr’s Slylock Fox and Comics for Kids for the 9th presents a logic puzzle. If you know the laws of Boolean algebra it’s a straightforward puzzle. But it’s light enough to understand just from ordinary English reading, too.
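If you would rather let Boolean algebra do the work, here is a small sketch (my own framing of the puzzle, not Weber’s): try each suspect as the culprit, flip every ape’s statement to its opposite as the caption instructs, and keep the suspect that makes all three flipped statements true.

```python
suspects = ["Moe", "Larry", "Curly"]

def statements_all_true(culprit):
    # Each ape's statement, already negated ("the opposite of the truth"):
    moe_said = culprit != "Moe"                  # "I took it" becomes "I didn't take it"
    larry_said = culprit != "Moe"                # "Moe took it" becomes "Moe didn't take it"
    curly_said = culprit in ("Moe", "Larry")     # "wasn't Moe or Larry" becomes "was Moe or Larry"
    return moe_said and larry_said and curly_said

print([s for s in suspects if statements_all_true(s)])   # ['Larry']
```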

Joe, looking at a fortune cookie: 'WHAT?' Dad: 'What's your fortune cookie say?' Joe: ''A thousand plus two is your lucky number today.' It's not a fortune; it's a stinking math problem!'
Rick Detorie’s One Big Happy for the 12th of September, 2019. Essays mentioning something inspired by One Big Happy are at this link.

Rick Detorie’s One Big Happy for the 12th is a little joke about finding mathematics problems in everyday life. Or it’s about the different ways one can represent numbers.


There were naturally comic strips with too marginal a mention of mathematics to rate paragraphs. Among them the past week were these.

Stephen Bentley’s Herb and Jamaal rerun for the 11th portrays the aftermath of realizing a mathematics problem is easier than it seemed. Realizing this after a lot of work should feel good, as discovering a clever way around tedious work is great. But the lost time can still hurt.

Ernie Bushmiller’s Nancy Classics for the 11th, rerunning a strip from the 6th of December, 1949, has Sluggo trying to cheat in arithmetic.

Eric the Circle for the 13th, by “Naratex”, is the Venn Diagram joke for the week.

Jason Poland’s Robbie and Bobby for the 13th is a joke about randomness, and the old phrase about doing random acts of kindness.


And that’s where I’ll pause a while. Tuesday I hope to publish another in the Fall 2019 A To Z series, and Thursday the piece after that. I plan to have the other Reading the Comics post for the past week published here on Wednesday. The great thing about having plans is that without them, nothing can go wrong.

Exploiting My A-To-Z Archives: Dedekind Domain


For the Leap Day 2016 A-to-Z — it started around Leap Day; I didn’t write everything in one impossibly massive burst — I threw subjects open to nominations. This was a great choice. People ask me about things I would never have thought of. Or that I did not know about before being asked. So I would learn things just ahead of explaining them.

Dedekind Domains, here, were I think the first example of that. It was also the first time I threw out and rewrote an essay from scratch. The first essay tried to lay out all the rules that made up a Dedekind Domain. Which, for an audience I couldn’t be sure had ever heard of rings before, took paragraph after paragraph of definition. When I realized I wasn’t staying interested in writing this, I understood I needed a different approach. So this essay taught me several things, one of them truly important.

It turns out I don’t have the Leap Day 2016 A-to-Z essays organized by a convenient tag either. I’ll have to fix that too.

Exploiting My A-To-Z Archives: Characteristic Function


For today I’d like to bring back attention to something from my original, summer 2015, A to Z. The characteristic function is a way of picking out sets of things. Like, there’s a characteristic function that picks out “all the real numbers between 2 and 4”. Or “all the prime integers”. Or “all the space within a set distance of this point”. It’s a handy tool for breaking up a problem into several smaller problems, without requiring that we write it out as more problems. We can use the tools for regular functions to deal with complicated and weird cases.

There are several things called characteristic functions. The other really important one turns up in probability, and the essay linked there isn’t about that one.

Other Summer 2015 A to Z essays are at this link. Also I learn I didn’t tag these essays the way I would come to later on. I’ll have to fix that sometime.

My 2019 Mathematics A To Z: Differential Equations


The thing most important to know about differential equations is that for short, we call it “diff eq”. This is pronounced “diffy q”. It’s a fun name. People who aren’t taking mathematics smile when they hear someone has to get to “diffy q”.

Sometimes we need to be more exact. Then the less exciting names “ODE” and “PDE” get used. The meaning of the “DE” part is an easy guess. The meaning of “O” or “P” will be clear by the time this essay’s finished. We can find approximate answers to differential equations by computer. This is known generally as “numerical solutions”. So you will encounter talk about, say, “NSPDE”. There’s an implied “of” between the S and the P there. I don’t often see “NSODE”. For some reason, probably a quite arbitrary historical choice, this is just called “numerical integration” instead.

To write about “differential equations” was suggested by aajohannas, who is on Twitter as @aajohannas.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Differential Equations.

One of algebra’s unsettling things is the idea that we can work with numbers without knowing their values. We can give them names, like ‘x’ or ‘a’ or ‘t’. We can know things about them. Often it’s equations telling us these things. We can make collections of numbers based on them all sharing some property. Often these things are solutions to equations. We can even describe changing those collections according to some rule, even before we know whether any of the numbers is 2. Often these things are functions, here matching one set of numbers to another.

One of analysis’s unsettling things is the idea that most things we can do with numbers we can also do with functions. We can give them names, like ‘f’ and ‘g’ and … ‘F’. That’s easy enough. We can add and subtract them. Multiply and divide. This is unsurprising. We can measure their sizes. This is odd but, all right. We can know things about functions even without knowing exactly what they are. We can group together collections of functions based on some properties they share. This is getting wild. We can even describe changing these collections according to some rule. This change is itself a function, but it is usually called an “operator”, saving us some confusion.

So we can describe a function in an equation. We may not know what f is, but suppose we know \sqrt{f(x) - 2} = x is true. We can suppose that if we cared we could find what function, or functions, f made that equation true. There is shorthand here. A function has a domain, a range, and a rule. The equation part helps us find the rule. The domain and range we get from the problem. Or we take the implicit rule that both are the biggest sets of real-valued numbers for which the rule parses. Sometimes biggest sets of complex-valued numbers. We get so used to saying “the function” to mean “the rule for the function” that we’ll forget to say that’s what we’re doing.

There are things we can do with functions that we can’t do with numbers. Or at least that are too boring to do with numbers. The most important here is taking derivatives. The derivative of a function is another function. One good way to think of a derivative is that it describes how a function changes when its variables change. (The derivative of a number is zero, which is boring except when it’s also useful.) Derivatives are great. You learn them in Intro Calculus, and there are a bunch of rules to follow. But follow them and you can pretty much take the derivative of any function even if it’s complicated. Yes, you might have to look up what the derivative of the arc-hyperbolic-secant is. Nobody has ever used the arc-hyperbolic-secant, except to tease a student.

And the derivative of a function is itself a function. So you can take a derivative again. Mathematicians call this the “second derivative”, because we didn’t expect someone would ask what to call it and we had to say something. We can take the derivative of the second derivative. This is the “third derivative” because by then changing the scheme would be awkward. If you need to talk about taking the derivative some large but unspecified number of times, this is the n-th derivative. Or m-th, if you’ve already used ‘n’ to mean something else.
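Computers follow those same rules happily. Here is a sketch with sympy (the example function is my own invention, not one from any textbook): first, second, and fifth derivatives of something moderately ugly, all produced mechanically.

```python
import sympy

x = sympy.symbols('x')
f = sympy.sin(x**2) * sympy.exp(-x)    # a made-up, moderately ugly function

print(sympy.diff(f, x))        # first derivative
print(sympy.diff(f, x, 2))     # second derivative
print(sympy.diff(f, x, 5))     # fifth derivative, just as routine
```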

And now we get to differential equations. These are equations in which we describe a function using at least one of its derivatives. The original function, that is, f, usually appears in the equation. It doesn’t have to, though.

We divide the earth naturally (we think) into two pairs of hemispheres, northern and southern, eastern and western. We divide differential equations naturally (we think) into two pairs of two kinds of differential equations.

The first division is into linear and nonlinear equations. I’ll describe the two kinds of problem loosely. Linear equations are the kind you don’t need a mathematician to solve. If the equation has solutions, we can write out procedures that find them, like, all the time. A well-programmed computer can solve them exactly. Nonlinear equations, meanwhile, are the kind no mathematician can solve. They’re just too hard. There are no processes sure to find an answer.

You may ask. We don’t need mathematicians to solve linear equations. Mathematicians can’t solve nonlinear ones. So what do we need mathematicians for? The answer is that I exaggerate. Linear equations aren’t quite that simple. Nonlinear equations aren’t quite that hopeless. There are nonlinear equations we can solve exactly, for example. This usually involves some ingenious transformation. We find a linear equation whose solution guides us to the function we do want.

And that is what mathematicians do in such a field. A nonlinear differential equation may, generally, be hopeless. But we can often find a linear differential equation which gives us insight to what we want. Finding that equation, and showing that its answers are relevant, is the work.

The other hemispheres we call ordinary differential equations and partial differential equations. In form, the difference between them is the kind of derivative that’s taken. If the function’s domain is more than one dimension, then there are different kinds of derivative. Or as normal people put it, if the function has more than one independent variable, then there are different kinds of derivatives. These are partial derivatives and ordinary (or “full”) derivatives. Partial derivatives give us partial differential equations. Ordinary derivatives give us ordinary differential equations. I think it’s easier to understand a partial derivative.

Suppose a function depends on three variables, imaginatively named x, y, and z. There are three partial first derivatives. One describes how the function changes if we pretend y and z are constants, but let x change. This is the “partial derivative with respect to x”. Another describes how the function changes if we pretend x and z are constants, but let y change. This is the “partial derivative with respect to y”. The third describes how the function changes if we pretend x and y are constants, but let z change. You can guess what we call this.

In an ordinary differential equation we would still like to know how the function changes when x changes. But we have to admit that a change in x might cause a change in y and z. So we have to account for that. If you don’t see how such a thing is possible don’t worry. The differential equations textbook has an example in which you wish to measure something on the surface of a hill. Temperature, usually. Maybe rainfall or wind speed. To move from one spot to another a bit east of it is also to move up or down. The change in (let’s say) x, how far east you are, demands a change in z, how far above sea level you are.
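Here is the hillside example as a sketch in sympy (the particular hill and temperature rule are my own inventions, not anything from that textbook): the partial derivative pretends the altitude is fixed, while the ordinary derivative substitutes the hill’s shape first, so the change in altitude comes along for the ride.

```python
import sympy

x, z = sympy.symbols('x z')
hill = 100 - x**2 / 50        # made-up altitude as a function of how far east you are
T = 20 - z / 10 + x / 25      # made-up temperature rule at position x and altitude z

# Partial derivative with respect to x: pretend the altitude z stays constant.
print(sympy.diff(T, x))                    # 1/25

# Ordinary derivative along the hillside: moving east also changes altitude,
# so substitute the hill's shape for z before differentiating.
print(sympy.diff(T.subs(z, hill), x))      # x/250 + 1/25
```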

That’s structure, though. What’s more interesting is the meaning. What kinds of problems do ordinary and partial differential equations usually represent? Partial differential equations are great for describing surfaces and flows and great bulk masses of things. If you see an equation about how heat transmits through a room? That’s a partial differential equation. About how sound passes through a forest? Partial differential equation. About the climate? Partial differential equations again.

Ordinary differential equations are great for describing a ball rolling on a lumpy hill. It’s given an initial push. There are some directions (downhill) that it’s easier to roll in. There are some directions (uphill) that it’s harder to roll in, but it can roll if the push was hard enough. There’s maybe friction that makes it roll to a stop.

Put that way it’s clear all the interesting stuff is partial differential equations. Balls on lumpy hills are nice but who cares? Miniature golf course designers and that’s all. This is because I’ve presented it to look silly. I’ve got you thinking of a “ball” and a “hill” as if I meant balls and hills. Nah. It’s usually possible to bundle a lot of information about a physical problem into something that looks like a ball. And then we can bundle the ways things interact into something that looks like a hill.

Like, suppose we have two blocks on a shared track, like in a high school physics class. We can describe their positions as one point in a two-dimensional space. One axis is where on the track the first block is, and the other axis is where on the track the second block is. Physics problems like this also usually depend on momentum. We can toss these in too, an axis that describes the momentum of the first block, and another axis that describes the momentum of the second block.

We’re already up to four dimensions, and we only have two things, both confined to one track. That’s all right. We don’t have to draw it. If we do, we draw something that looks like a two- or three-dimensional sketch, maybe with a note that says “D = 4” to remind us. There’s some point in this four-dimensional space that describes these blocks on the track. That’s the “ball” for this differential equation.

The things that the blocks can do? Like, they can collide? They maybe have rubber tips so they bounce off each other? Maybe someone’s put magnets on them so they’ll draw together or repel? Maybe there’s a spring connecting them? These possible interactions are the shape of the hills that the ball representing the system “rolls” over. An impenetrable barrier, like, two things colliding, is a vertical wall. Two things being attracted is a little divot. Two things being repulsed is a little hill. Things like that.
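Here is that setup as a sketch in Python (the masses, spring constant, and starting positions are my own toy numbers, not from any particular problem): the state is the four-dimensional point of two positions and two momenta, and scipy integrates the ordinary differential equation forward in time.

```python
import numpy as np
from scipy.integrate import solve_ivp

m1, m2, k, rest_length = 1.0, 2.0, 3.0, 1.0    # made-up masses and spring constant

def blocks(t, state):
    """The rule for how the four-dimensional point (x1, x2, p1, p2) changes."""
    x1, x2, p1, p2 = state                 # positions and momenta of the two blocks
    stretch = (x2 - x1) - rest_length      # how far the connecting spring is stretched
    force = k * stretch                    # Hooke's law
    return [p1 / m1, p2 / m2, force, -force]

# Start with the spring stretched and the blocks at rest; watch them oscillate.
solution = solve_ivp(blocks, (0.0, 10.0), [0.0, 2.0, 0.0, 0.0], max_step=0.01)
print(solution.y[:2, -1])    # the two positions at time t = 10
```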

Now you see why an ordinary differential equation might be interesting. It can capture what happens when many separate things interact.

I write this as though ordinary and partial differential equations are different continents of thought. They’re not. When you model something you make choices and they can guide you to ordinary or to partial differential equations. My own research work, for example, was on planetary atmospheres. Atmospheres are fluids. Representing how fluids move usually calls for partial differential equations. But my own interest was in vortices, swirls like hurricanes or Jupiter’s Great Red Spot. Since I was acting as if the atmosphere was a bunch of storms pushing each other around, this implied ordinary differential equations.

There are more hemispheres of differential equations. They have names like homogeneous and non-homogeneous. Coupled and decoupled. Separable and nonseparable. Exact and non-exact. Elliptic, parabolic, and hyperbolic partial differential equations. Don’t worry about those labels. They relate to how difficult the equations are to solve. What ways they’re difficult. In what ways they break computers trying to approximate their solutions.

What’s interesting about these, besides that they represent many physical problems, is that they capture the idea of feedback. Of control. If a system’s current state affects how it’s going to change, then it probably has a differential equation describing it. Many systems change based on their current state. So differential equations have long been near the center of professional mathematics. They offer great and exciting pure questions while still staying urgent and relevant to real-world problems. They’re great things.


Thanks again for reading. All Fall 2019 A To Z posts should be at this link. I should get to the letter E for Tuesday. All of the A To Z essays should be at this link. If you have thoughts about other topics I might cover, please offer suggestions for the letters G and H.

Reading the Comics, September 7, 2019: The Minor Ones of the Week


Part of my work reading lots of comic strips is to report the ones that mention mathematics, even if the mention is so casual there’s no building an essay around them. Here’s the minor mathematics mentions of last week.

Gordon Bess’s Redeye for the 1st is a joke about dividing up a prize. There’s also a side joke that amounts to a person having to do arithmetic on his fingers.

Bob Scott’s Bear With Me for the 3rd has Molly and the Bear in her geometry class. Bear’s shown as surprised the kids are still learning Euclidean geometry, which is your typical joke about the character with a particularly deep knowledge of a narrow field.

Wulff and Morgenthaler’s Truth Facts for the 4th is a Venn Diagram joke about the futility of attraction. I don’t know whether this is a repeat.

Gary Brookins’s Pluggers for the 5th is the old joke about how one never uses algebra in real life. The strip is not dated as a repeat. But I’d be surprised if this joke hasn’t run in Pluggers before. I didn’t have a tag for Pluggers before, but there was a time I wasn’t tagging the names of comic strips.

Richard Thompson’s Richard’s Poor Almanac for the 5th is a repeat (it has to be), featuring another of Thompson’s non-Euclidean plants.


And I continue to read the daily comics. Sunday at this link should be a fresh essay about the past week’s strips. Tomorrow, all going well, I’ll have the letter D’s representative in the Fall 2019 A-to-Z sequence. Thank you for reading.