Why does the Quantum Mechanics Momentum Operator look like that?


I don’t know. I say this up front for anyone whom the title has unintentionally clickbaited, or who’s looking at a search engine’s preview of the page.

I come to this question from a friend, though, and it’s got me wondering. I don’t have a good answer, either. But I’m putting the question out there in case someone reading this, sometime, does know. Even if it’s in the remote future, it’d be nice to know.

And before getting to the question I should admit that “why” questions are, to some extent, a mug’s game. Especially in mathematics. I can ask why the sum of two consecutive triangular numbers is a square number. But the answer is … well, that’s what we chose to mean by ‘triangular number’, ‘square number’, ‘sum’, and ‘consecutive’. We can show why the arithmetic of the combination makes sense. But that doesn’t seem to answer “why” the way, like, why Neil Armstrong was the first person to walk on the moon. It’s more a “why” like, “why are there Seven Sisters [ in the Pleiades ]?” [*]

But looking for “why” can, at least, give us hints to why a surprising result is reasonable. Draw dots representing a square number and slice the array along the space just below a diagonal. You see dots representing two successive triangular numbers. That’s the sort of “why” I’m looking for here.
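To make that concrete: the n-th triangular number is T_n = \frac{1}{2}n\left(n + 1\right) . So T_n + T_{n+1} = \frac{1}{2}n(n+1) + \frac{1}{2}(n+1)(n+2) = (n+1)^2 , a perfect square. The algebra is a “why”, even if it doesn’t quite feel like an explanation.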

From here, we get to some technical stuff and I apologize to readers who don’t know or care much about this kind of mathematics. It’s about the wave-mechanics formulation of quantum mechanics. In this, everything that’s observable about a system is contained within a function named \Psi . You find \Psi by solving a differential equation. The differential equation represents the problem. Like, a particle experiencing some force that depends on position. This is written as a potential energy, because that’s easier to work with. But that’s the kind of problem that gets done.

Grant that you’ve solved for \Psi , since that’s hard and I don’t want to deal with it. You still don’t know, like, where the particle is. You never know that, in quantum mechanics. What you do know is its distribution: where the particle is more likely to be, where it’s less likely to be. You get from \Psi to this distribution by applying an operator to \Psi . An operator is a function with a domain and a range that are spaces. Almost always these are spaces of functions.

Each thing that you can possibly observe, in a quantum-mechanics context, matches an operator. For example, there’s the x-coordinate operator, which tells you where along the x-axis your particle’s likely to be found. This operator is, conveniently, just x. So evaluate x\Psi and that’s your x-coordinate distribution. (This is assuming that we know \Psi in Cartesian coordinates, ones with an x-axis. Please let me do that.) This looks just like multiplying your old function by x, which is nice and easy.

Or you might want to know momentum. The momentum in the x-direction has an operator, \hat{p_x} , which equals -\imath \hbar \frac{\partial}{\partial x} . The \partial indicates partial derivatives. The \hbar is the reduced Planck constant, Planck’s constant divided by 2\pi , a number which in normal systems of measurement is amazingly tiny. And you know how \imath^2 = -1 . That – symbol is just the minus or the subtraction symbol. So to find the momentum distribution, evaluate -\imath \hbar \frac{\partial}{\partial x}\Psi . This means taking a derivative of the \Psi you already had. And multiplying it by some numbers.
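If you want to watch the operator do its thing, here’s a minimal sketch using the SymPy library. (The plane-wave example is mine, not part of the original question.) Apply -\imath \hbar \frac{\partial}{\partial x} to a plane wave e^{\imath k x} and you get back \hbar k times the same wave, which is why \hbar k gets read as the momentum.

```python
# Sketch: the momentum operator acting on a plane wave exp(i*k*x).
# The wave comes back multiplied by hbar*k, its momentum eigenvalue.
import sympy as sp

x, k, hbar = sp.symbols('x k hbar', real=True, positive=True)
psi = sp.exp(sp.I * k * x)              # a plane-wave Psi
p_psi = -sp.I * hbar * sp.diff(psi, x)  # apply -i*hbar*(d/dx) to Psi
print(sp.simplify(p_psi / psi))         # prints hbar*k
```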

I don’t mind this multiplication by \hbar . That’s just a number and it’s a quirk of our system of units that it isn’t 1. If we wanted, we could set up our measurements of length and duration and stuff so that it was 1 instead.

But. Why is there a -\imath in the momentum operator rather than the position operator? Why isn’t one \sqrt{-\imath} x and the other \sqrt{-\imath} \frac{\partial}{\partial x} ? From a mathematical physics perspective, position and momentum are equally good variables. We tend to think of position as fundamental, but that’s surely a result of our happening to be very good at seeing where things are. If we were primarily good at spotting the momentum of things around us, we’d surely see that as the more important variable. When we get into Hamiltonian mechanics we start treating position and momentum as equally fundamental. Even the notation emphasizes how equal they are in importance, and treatment. We stop using ‘x’ or ‘r’ as the variable representing position. We use ‘q’ instead, a mirror to the ‘p’ that’s the standard for momentum. (‘p’ we’ve always used for momentum because … … … uhm. I guess ‘m’ was already committed, for ‘mass’. What I have seen is that it was taken as the first letter in ‘impetus’ with no other work to do. I don’t know that this is true. I’m passing on what I was told explains what looks like an arbitrary choice.)

So I’m supposing that this reflects how we normally set up \Psi as a function of position. That this is maybe why the position operator is so simple and bare. And then why the momentum operator has a minus, an imaginary number, and this partial derivative stuff. That if we started out with the wave function as a function of momentum, the momentum operator would be just the momentum variable. The position operator might be some mess with \imath and derivatives or worse.
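For what it’s worth, the standard way to make this precise is the Fourier transform. If \Phi is the wave function written as a function of momentum, then \Psi(x) = \frac{1}{\sqrt{2\pi\hbar}} \int \Phi(p) e^{\imath p x / \hbar} dp . In that momentum representation the momentum operator really is just multiplication by p , and the position operator becomes \imath \hbar \frac{\partial}{\partial p} . So the mess does follow the representation around, as I was supposing.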

I don’t have a clear guess why one and not the other operator gets full possession of the \imath though. I suppose that has to reflect convenience. If position and momentum are dual quantities then I’d expect we could put a mere constant like -\imath wherever we want. But this is, mostly, me writing out notes and scattered thoughts. I could be trying to explain something that might be as explainable as why the four interior angles of a rectangle are all right angles.

So I would appreciate someone pointing out the obvious reason these operators look like that. I may grumble privately at not having seen the obvious myself. But I’d like to know it anyway.


[*] Because there are not eight.

Reading the Comics, November 30, 2019: Big Embarrassing Mistake Edition


See if you can spot where I discover my having made a big embarrassing mistake. It’s fun! For people who aren’t me!

Lincoln Peirce’s Big Nate for the 24th has boy-genius Peter drawing “electromagnetic vortex flow patterns”. Nate, reasonably, sees this sort of thing as completely abstract art. I’m not precisely sure what Peirce means by “electromagnetic vortex flow”. These are all terms that mathematicians, and mathematical physicists, would be interested in. That specific combination, though, I can find only a few references for. It seems to serve as a sensing tool.

Nate: 'Ah, now that's what I'm talking about! A boy, paper, and crayons, the simple pleasures. I know you're a genius, Peter, but it's great to see you just being a kid for a change! And you're really letting it rip! You're not trying to make something that looks real! It's just colors and shapes and --- ' Peter: 'This is a diagram of electromagnetic vortex flow patterns.' Nate: 'I knew that.' Peter: 'Hand me the turquoise.'
Lincoln Peirce’s Big Nate for the 24th of November, 2019. So, did you know I’ve been spelling Lincoln Peirce’s name wrong all this time? Yeah, I didn’t realize either. But look at past essays with Big Nate discussed in them and you’ll see. I’m sorry for this and embarrassed to have done such a lousy job looking at the words in front of me for so long.

No matter. Electromagnetic fields are interesting to a mathematical physicist, and so to mathematicians. Often a field like this can be represented as a system of vortices, too, points around which something swirls and which combine into the field that we observe. This can be a way to turn a continuous field into a set of discrete particles, which we might have better tools to study. And to draw what electromagnetic fields look like — even in a very rough form — can be a great help to understanding what they will do, and why. They also can be beautiful in ways that communicate even to those who don’t understand the thing modelled.

Megan Dong’s Sketchshark Comics for the 25th is a joke based on the reputation of the Golden Ratio. This is the idea that the ratio, 1:\frac{1}{2}\left(1 + \sqrt{5}\right) (roughly 1:1.6), is somehow a uniquely beautiful composition. You may sometimes see memes with some nice-looking animal and various boxes superimposed over it, possibly along with a spiral. The rectangles have width and height in the Golden Ratio. And the ratio is kind of attractive since \frac{1}{2}\left(1 + \sqrt{5}\right) is about 1.618, and 1 \div \frac{1}{2}\left(1 + \sqrt{5}\right) is about 0.618. It’s a cute pattern, and there are other similar cute patterns. There is a school of thought that this is somehow transcendently beautiful, though.
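The cute pattern has a tidy reason, by the way. Write \phi = \frac{1}{2}\left(1 + \sqrt{5}\right) ; this is a root of x^2 - x - 1 = 0 , so \phi^2 = \phi + 1 . Divide through by \phi and you get \phi = 1 + \frac{1}{\phi} , that is, \frac{1}{\phi} = \phi - 1 . That’s where the matching 1.618… and 0.618… come from.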

Man, shooing off a woman holding a cat: 'I don't like cute animals. I like BEAUTIFUL animals.' In front of portraits of an eagle, lion, and whale: 'Animals with golden-ratio proportions and nice bone-structure.'
Megan Dong’s Sketchshark Comics for the 25th of November, 2019. So far I’m aware I have never discussed this comic before, making this another new-tag day. This and future essays with Sketchshark Comics in them should be at this link.

It’s all bunk. People may find stuff that’s about one-and-a-half times as tall as it is wide, or as wide as it is tall, attractive. But experiments show that people don’t find something with Golden Ratio proportions more attractive than, say, something with 1:1.5 proportions, or 1:1.8 , or even that they’re particularly consistent about what they like. You might be able to find (say) that the ratio of an eagle’s body length to the wing span is something close to 1:1.6 . But any real-world thing has a lot of things you can measure. It would be surprising if you couldn’t find something near enough a ratio you liked. The guy is being ridiculous.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 26th builds on the idea that everyone could be matched to a suitable partner, given a proper sorting algorithm. I am skeptical of any “simple algorithm” being any good for handling complex human interactions such as marriage. But let’s suppose such an algorithm could exist.

Mathematician: 'Thanks to computer science we no longer need dating. We can produce perfect marriages with simple algorithms.' Assistant: 'ooh!' [ AND SO ] Date-o-Tron, to the mathematician and her assistant: 'There are many women you'd be happier with, but they're already with people whom they prefer to you. Thus, you will be paired with your 4,291th favorite choice. We have a stable equilibrium.' Mathematician: 'Hooray!'
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 26th of November, 2019. Someday I’ll go a week without an essay mentioning Saturday Morning Breakfast Cereal, but this is not that day. Or week. The phrasing gets a little necessarily awkward here.

This turns matchmaking into a problem of linear programming. Arguably it always was. But the best possible matches for society might not be — likely will not be — the matches everyone figures to be their first choices. Or even top several choices. For one, our desired choices are not necessarily the ones that would fit us best. And as the punch line of the comic implies, what might be the globally best solution, the one that has the greatest number of people matched with their best-fit partners, would require some unlucky souls to be in lousy fits.

Although, while I believe that’s the intention of the comic strip, it’s not quite what’s on panel. The assistant is told he’ll be matched with his 4,291st favorite choice, and I admit having to go that far down the favorites list is demoralizing. But there are about seven billion people in the world. This is someone who’ll be a happier match with him than 6,999,995,709 people would be. That’s a pretty good record, really. You can fairly ask how much worse that is than the person who “merely” makes him happier than 6,999,997,328 people would be.
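The Date-o-Tron’s “stable equilibrium” sounds a lot like the Gale-Shapley stable-matching algorithm, which is a real and genuinely simple procedure. Here’s a toy sketch of it, with made-up names and preference lists of my own; it’s not anything from the comic. One side proposes in order of preference; the other side holds on to its best offer so far and trades up when someone better comes along. The result is stable in the comic’s sense: nobody can find a partner who both prefers them and whom they prefer over what they’ve got.

```python
# Toy sketch of Gale-Shapley stable matching. Proposers ask in order of
# preference; reviewers keep the best offer so far and trade up. The
# outcome is stable, not globally "happiest" -- the comic's point.
def stable_match(proposer_prefs, reviewer_prefs):
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next reviewer to try
    matched = {}                                  # reviewer -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in matched:
            matched[r] = p
        elif rank[r][p] < rank[r][matched[r]]:    # reviewer prefers newcomer
            free.append(matched[r])
            matched[r] = p
        else:
            free.append(p)                        # rejected; try next choice later
    return matched

# Hypothetical preference lists, purely for illustration.
proposers = {'ann': ['xia', 'yul'], 'bea': ['yul', 'xia']}
reviewers = {'xia': ['bea', 'ann'], 'yul': ['bea', 'ann']}
print(stable_match(proposers, reviewers))   # pairs xia with ann and yul with bea
```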


And that’s all I have for last week. Sunday I hope to publish another Reading the Comics post, one way or another. And later this week I’ll have closing thoughts on the Fall 2019 A-to-Z sequence. And I do sincerely apologize to Lincoln Peirce for getting his name wrong, and this on a comic strip I’ve been reading since about 1991.

My 2019 Mathematics A To Z: Zeno’s Paradoxes


Today’s A To Z term was nominated by Dina Yagodich, who runs a YouTube channel with a host of mathematics topics. Zeno’s Paradoxes exist in the intersection of mathematics and philosophy. Mathematics majors like to declare that they’re all easy: the Ancient Greeks just didn’t understand infinite series or infinitesimals like we do, and now they’re no challenge at all. This reflects a belief that philosophers must be silly people who haven’t noticed that one can, say, exit a room.

This is your classic STEM-attitude of missing the point. We may suppose that Zeno of Elea occasionally exited rooms himself. That is a supposition, though. Zeno, like most philosophers who lived before Socrates, we know from other philosophers making fun of him a century after he died. Or at least trying to explain what they thought he was on about. Modern philosophers are expected to present others’ arguments as well and as strongly as possible. This even — especially — when describing an argument they want to say is the stupidest thing they ever heard. Or, to use the lingo, when they wish to refute it. Ancient philosophers had no such compulsion. They did not mind presenting someone else’s argument sketchily, if they supposed everyone already knew it. Or even badly, if they wanted to make the other philosopher sound ridiculous. Between that and the sparse nature of the record, we have to guess a bit about what Zeno precisely said and what he meant. This is all right. We have some idea of things that might reasonably have bothered Zeno.

And they have bothered philosophers for thousands of years. They are about change. The ones I mean to discuss here are particularly about motion. And there are things we do not understand about change. This essay will not answer what we don’t understand. But it will, I hope, show something about why that’s still an interesting thing to ponder.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Zeno’s Paradoxes.

When we capture a moment by photographing it we add lies to what we see. We impose a frame on its contents, discarding what is off-frame. We rip an instant out of its context. And that’s before considering how we stage photographs, making people smile and stop tilting their heads. We forgive many of these lies. The things excluded from or the moments around the one photographed might not alter what the photograph represents. Making everyone smile can convey the emotional average of the event in a way that no individual moment represents. Arranging people to stand in frame can convey the participation in the way a candid photograph would not.

But there remains the lie that a photograph is “a moment”. It is no such thing. We notice this when the photograph is blurred. It records all the light passing through the lens while the shutter is open. A photograph records an eighth of a second. A thirtieth of a second. A thousandth of a second. But still, some time. There is always the ghost of motion in a picture. If we do not see it, it is because our photograph’s resolution is too coarse. If we could photograph something with infinite fidelity we would see, even in still life, the wobbling of the molecules that make up a thing.

A photograph of a blurry roller coaster passing through a vertical loop.
One of the many loops of Vortex, a roller coaster at Kings Island amusement park from 1987 to 2019. Taken by me the last day of the ride’s operation; this was one of the roller coaster’s runs after 7 pm, the close of the park the last day of the season.

Which implies something fascinating to me. Think of a reel of film. Here I mean old-school pre-digital film, the thing that’s a great strip of pictures, a new one shown 24 times per second. Each frame of film is a photograph, recording some split-second of time. How much time is actually in a film, then? How long, cumulatively, was a camera shutter open during a two-hour film? I use pre-digital, strip-of-film movies for convenience. Digital films offer the same questions, but with different technical points. And I do not want the writing burden of describing both analog and digital film technologies. So I will stick to the long sequence of analog photographs model.

Let me imagine a movie. One of an ordinary everyday event; an actuality, to use the terminology of 1898. A person overtaking a walking tortoise. Look at the strip of film. There are many frames which show the person behind the tortoise. There are many frames showing the person ahead of the tortoise. When are the person and the tortoise at the same spot?

We have to put in some definitions. Fine; do that. Say we mean when the leading edge of the person’s nose overtakes the leading edge of the tortoise’s, as viewed from our camera. Or, since there must be blur, when the center of the blur of the person’s nose overtakes the center of the blur of the tortoise’s nose.

Do we have the frame when that moment happened? I’m sure we have frames from the moments before, and frames from the moments after. But the exact moment? Are you positive? If we zoomed in, would it actually show the person is a millimeter behind the tortoise? That the person is a hundredth of a millimeter ahead? A thousandth of a hair’s width behind? Suppose that our camera is very good. It can take frames representing as small a time as we need. Does it ever capture that precise moment? To the point that we know, no, it’s not the case that the tortoise is one-trillionth the width of a hydrogen atom ahead of the person?

If we can’t show the frame where this overtaking happened, then how do we know it happened? To put it in terms a STEM major will respect, how can we credit a thing we have not observed with happening? … Yes, we can suppose it happened if we suppose continuity in space and time. Then it follows from the intermediate value theorem. But then we are begging the question. We impose the assumption that there is a moment of overtaking. This does not prove that the moment exists.
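(For the record, the intermediate value theorem says that if a function f is continuous, and f(t_1) is negative while f(t_2) is positive, then there is some t between t_1 and t_2 where f(t) = 0 . Let f be the person’s position minus the tortoise’s and out pops a moment of overtaking. But only if we grant the continuity first, which is the thing in dispute.)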

Fine, then. What if time is not continuous? If there is a smallest moment of time? … If there is, then, we can imagine a frame of film that photographs only that one moment. So let’s look at its footage.

One thing stands out. There’s finally no blur in the picture. There can’t be; there’s no time during which to move. We might not catch the moment that the person overtakes the tortoise. It could “happen” in-between moments. But at least we have a moment to observe at leisure.

So … what is the difference between a picture of the person overtaking the tortoise, and a picture of the person and the tortoise standing still? A movie of the two walking should be different from a movie of the two pretending to be department store mannequins. What, in this frame, is the difference? If there is no observable difference, how does the universe tell whether, next instant, these two should have moved or not?

A mathematical physicist may toss in an answer. Our photograph is only of positions. We should also track momentum. Momentum carries within it the information of how position changes over time. We can’t photograph momentum, not without getting blurs. But analytically? If we interpret a photograph as “really” tracking the positions of a bunch of particles? To the mathematical physicist, momentum is as good a variable as position, and it’s as measurable. We can imagine a hyperspace photograph that gives us an image of positions and momentums. So, STEM types show up the philosophers finally, right?

Hold on. Let’s allow that somehow we get changes in position from the momentum of something. Hold off worrying about how momentum gets into position. Where does a change in momentum come from? In the mathematical physics problems we can do, the change in momentum has a value that depends on position. In the mathematical physics problems we have to deal with, the change in momentum has a value that depends on position and momentum. But that value? Put it in words. That value is the change in momentum. It has the same relationship to acceleration that momentum has to velocity. For want of a real term, I’ll call it acceleration. We need more variables. An even more hyperspatial film camera.

… And does acceleration change? Where does that change come from? That is going to demand another variable, the change-in-acceleration. (The “jerk”, according to people who want to tell you that “jerk” is a commonly used term for the change-in-acceleration, and no one else.) And the change-in-change-in-acceleration. Change-in-change-in-change-in-acceleration. We have to invoke an infinite regression of new variables. We got here because we wanted to suppose it wasn’t possible to divide a span of time infinitely many times. This seems like a lot to build into the universe to distinguish a person walking past a tortoise from a person standing near a tortoise. And then we still must admit not knowing how one variable propagates into another. That a person is wide is not usually enough explanation of how they are growing taller.

Numerical integration can model this kind of system with time divided into discrete chunks. It teaches us some ways that this can make logical sense. It also shows us that our projections will (generally) be wrong. At least unless we do things like have an infinite number of steps of time factor into each projection of the next timestep. Or use the forecast of future timesteps to correct the current one. Maybe use both. These are … not impossible. But being “ … not impossible” is not to say satisfying. (We allow numerical integration to be wrong by quantifying just how wrong it is. We call this an “error”, and have techniques that we can use to keep the error within some tolerated margin.)

So where has the movement happened? The original scene had movement to it. The movie seems to represent that movement. But that movement doesn’t seem to be in any frame of the movie. Where did it come from?

We can have properties that appear in a mass which don’t appear in any component piece. No molecule of a substance has a color, but a big enough mass does. No atom of iron is ferromagnetic, but a chunk might be. No grain of sand is a heap, but enough of them are. The Ancient Greeks knew this; we call it the Sorites paradox, after Eubulides of Miletus. (“Sorites” means “heap”, as in heap of sand. But if you had to bluff through a conversation about ancient Greek philosophers you could probably get away with making up a quote you credit to Sorites.) Could movement be, in the term mathematical physicists use, an intensive property? But intensive properties are obvious to the outside observer of a thing. We are not outside observers to the universe. It’s not clear what it would mean for there to be an outside observer to the universe. Even if there were, what space and time are they observing in? And aren’t their space and their time and their observations vulnerable to the same questions? We’re in danger of insisting on an infinite regression of “universes” just so a person can walk past a tortoise in ours.

We can say where movement comes from when we watch a movie. It is a trick of perception. Our eyes take some time to understand a new image. Our brains insist on forming a continuous whole story even out of disjoint ideas. Our memory fools us into remembering a continuous line of action. That a movie moves is entirely an illusion.

You see the implication here. Surely Zeno was not trying to lead us to understand all motion, in the real world, as an illusion? … Zeno seems to have been trying to support the work of Parmenides of Elea. Parmenides is another pre-Socratic philosopher. So we have about four words that we’re fairly sure he authored, and we’re not positive what order to put them in. Parmenides was arguing about the nature of reality, and what it means for a thing to come into or pass out of existence. He seems to have been arguing something like that there was a true reality that’s necessary and timeless and changeless. And there’s an apparent reality, the thing our senses observe. And in our sensing, we add lies which make things like change seem to happen. (Do not use this to get through your PhD defense in philosophy. I’m not sure I’d use it to get through your Intro to Ancient Greek Philosophy quiz.) That what we perceive as movement is not what is “really” going on is, at least, imaginable. So it is worth asking questions about what we mean for something to move. What difference there is between our intuitive understanding of movement and what logic says should happen.

(I know someone wishes to throw down the word Quantum. Quantum mechanics is a powerful tool for describing how many things behave. It implies limits on what we can simultaneously know about the position and the time of a thing. But there is a difference between “what time is” and “what we can know about a thing’s coordinates in time”. Quantum mechanics speaks more to the latter. There are also people who would like to say Relativity. Relativity, special and general, implies we should look at space and time as a unified set. But this does not change our questions about continuity of time or space, or where to find movement in both.)

And this is why we are likely never to finish pondering Zeno’s Paradoxes. In this essay I’ve only discussed two of them: Achilles and the Tortoise, and The Arrow. There are two other particularly famous ones: the Dichotomy, and the Stadium. The Dichotomy is the one about how to get somewhere, you have to get halfway there. But to get halfway there, you have to get a quarter of the way there. And an eighth of the way there, and so on. The Stadium is the hardest of the four great paradoxes to explain. This is in part because the earliest writings we have about it don’t make clear what Zeno was trying to get at. I can think of something which seems consistent with what’s described, and contrary-to-intuition enough to be interesting. I’m satisfied to ponder that one. But other people may have different ideas of what the paradox should be.

There are a handful of other paradoxes which don’t get so much love, although one of them is another version of the Sorites Paradox. Some of them the Stanford Encyclopedia of Philosophy dubs “paradoxes of plurality”. These ask how many things there could be. It’s hard to judge just what he was getting at with this. We know that one argument had three parts, and only two of them survive. Trying to fill in that gap is a challenge. We want to fill in the argument we would make, projecting from our modern idea of this plurality. It’s not Zeno’s idea, though, and we can’t know how close our projection is.

I don’t have the space to make a thematically coherent essay describing these all, though. The set of paradoxes has demanded thought, even just to come up with a reason to think they don’t demand thought, for thousands of years. We will, perhaps, have to keep trying again to fully understand what it is we don’t understand.


And with that — I find it hard to believe — I am done with the alphabet! All of the Fall 2019 A-to-Z essays should appear at this link. Additionally, the A-to-Z sequences of this and past years should be at this link. Tomorrow and Saturday I hope to bring up some mentions of specific past A-to-Z essays. Next week I hope to share my typical thoughts about what this experience has taught me, and some other writing about this writing.

Thank you, all who’ve been reading, and who’ve offered topics, comments on the material, or questions about things I was hoping readers wouldn’t notice I was shorting. I’ll probably do this again next year, after I’ve had some chance to rest.

Reading the Comics, September 29, 2019: September 29, 2019 Edition


Several of the mathematically-themed comic strips from last week featured the fine art of calculation. So that was set to be my title for this week. Then I realized that all the comics worth some detailed mention were published last Sunday, and I do like essays that are entirely one-day affairs. There are a couple of other comic strips that mentioned mathematics tangentially and I’ll list those later this week.

John Hambrock’s The Brilliant Mind of Edison lee for the 29th has Edison show off an organic computer. This is a person, naturally enough. Everyone can do some arithmetic in their heads, especially if we allow that approximate answers are often fine. People with good speed and precision have always been wonders, though. The setup may also riff on the ancient joke that a mathematician is a machine for turning coffee into theorems. (I would imagine that Hambrock has heard that joke. But it is enough to suppose that he’s aware many adult humans drink coffee.)

Edison: 'Welcome to Edison's Science Sunday. I'm going to show you how to build a simple organic calculator. I'll use a bale of hay, a pot of coffee, and Bob the postman. First, I'll have Bob sit on the hay.' Joules, rat: 'OK, now what?' Edison: 'Bob, what is 46 times 19?' Bob :'874.' Joules: 'You have GOT to be kidding me!' Edison: 'He's a whiz with numbers.' Joules: 'Where does the coffee come in?' Edison: 'It extends Bob's battery life.' Bob: 'Cream and sugar, please.'
John Hambrock’s The Brilliant Mind of Edison lee for the 29th of September, 2019. Essays featuring something mentioned in Edison Lee appear at this link.

John Kovaleski’s Daddy Daze for the 29th sees Paul, the dad, working out the calculations his son (Angus) proposed. It’s a good bit of arithmetic that Paul’s doing in his head. The process of multiplying an insubstantial thing many, many times over until you get something of moderate size happens all the time. Much of integral calculus is based on the idea that we can add together infinitely many infinitesimal numbers, and from that get something understandable on the human scale. Saving nine seconds every other day is useless for actual activities, though. You need a certain fungibility in the thing conserved for the bother to be worth it.

Kid: 'Ba ba'. Dad: 'A brilliant math-related idea?' Kid: 'Ba ba ba ba'. Dad: 'We don't need to wash *all* your toes every time you take a bath since they're not *that* dirty?' 'Ba ba ba ba ba' 'OK, if I've got this. There's 8 space between your 10 toes, each space takes 1.25 seconds to wash. If we wash only one space per bath we save 8.75 seconds each time. Three baths a week, this saves 1365 seconds (22.75 minutes) every year. Gee, what'll we do with all that extra time?' 'Ba ba ba'. 'Play 'This Little Piggy' 107.4 times.'
John Kovaleski’s Daddy Daze for the 29th of September, 2019. This is a new tag. Well, the comic is barely a year old. But this and other essays featuring Daddy Daze should be at this link.
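The strip’s arithmetic holds up, if you’re curious. Washing one space instead of eight saves 7 \times 1.25 = 8.75 seconds a bath; three baths a week makes 8.75 \times 3 \times 52 = 1365 seconds, or 22.75 minutes, a year. And 1365 seconds spread over 107.4 rounds of This Little Piggy comes to about 12.7 seconds a round, which seems plausible.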

Dan Thompson’s Harley for the 29th gets us into some comic strips not drawn by people named John. The comic has some mathematics in it qualitatively. The observation that you could jump a motorcycle farther, or higher, with more energy, and that you can get energy from rolling downhill. It’s here mostly because of the good fortune that another comic strip did a joke on the same topic, and did it quantitatively. That comic?

Harley, racing on the motorcycle: 'Speeding down this mountain should launch us over Pointy Rock Canyon.' Cat, riding behind: 'How do you figure that?' Harley: 'Math, my friend. Harley + Speed + Ramp = Jump The Canyon. It's so simple, it's genius!' Cat: 'We're going faster than we've ever gone!' Harley: 'I think I heard a sonic boom!' Cat: 'I see the ramp!' Harley: 'I see my brilliance!' (They race up the ramp. Final panel, they're floating in space.) Cat: 'Didn't you flunk math in school?' Harley: 'Not the third time.'
Dan Thompson’s Harley for the 29th of September, 2019. This just barely misses being a new tag. This essay and the other time I mentioned Harley are at this link. I’ll keep you updated if there are more essays to add to this pile.

Bill Amend’s FoxTrot for the 29th has young prodigies Jason and Marcus putting serious calculation into their Hot Wheels track, working out the biggest loop-the-loop possible from a given starting point. Their calculations are right, of course. Bill Amend, who’d been a physics major, likes putting authentic mathematics and mathematical physics in. The key is making sure the car moves fast enough in the loop that it stays on the track. This means the car experiences a centrifugal force that’s larger than the force of gravity. The centrifugal force on something moving in a circle is proportional to the square of the thing’s speed, and inversely proportional to the radius of the circle. This holds for a circle oriented in any direction, by the way.

So they need to know: if the car starts at height A, how fast will it be going at the top of the loop, at height B? If the car’s going fast enough at height B to stay on the track, it’s certainly going fast enough to stay on for the rest of the loop.

Diagram on ruled paper showing a track dropping down and circling around, with the conservation-of-energy implications resulting on the conclusion the largest possible loop-the-loop is 4/5 the starting height. Peter: 'I don't think this will work. Your calculations assume no friction.' Jason: 'Peter, please. We're not stupid.' (Jason's friend Marcus is working on the track.) Mom: 'Kids, why is there a Hot Wheels car soaking in a bowl of olive oil?'
Bill Amend’s FoxTrot for the 29th of September, 2019. Essays featuring either the current-run Sunday FoxTrot or the vintage FoxTrot comics from the 90s should be at this link.

The hard part would be figuring the speed at height B. Or it would be hard if we tried calculating the forces, and thus acceleration, of the car along the track. This would be a tedious problem. It would depend on the exact path of the track, for example. And it would be a long integration problem, which is trouble. There aren’t many integrals we can actually calculate directly. Most of the interesting ones we have to do numerically or work on approximations of the actual thing. This is all right, though. We don’t have to do that integral. We can look at potential energy instead. This turns what would be a tedious problem into the first three lines of work. And one of those was “Kinetic Energy = Δ Potential Energy”.
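Here, roughly, is how those three lines go; this is my reconstruction, not a reading of Jason’s paper. Measure heights from the bottom of the track, let the car start from rest at height A, and let the loop have radius r, so its top sits at height 2r. Conservation of energy gives \frac{1}{2}v^2 = g\left(A - 2r\right) at the top. Staying on the track there requires the centripetal condition v^2 \geq g r . Put together, 2g\left(A - 2r\right) \geq g r , so r \leq \frac{2}{5}A , and the loop’s height 2r can be at most \frac{4}{5}A . That’s the four-fifths in the diagram.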

But as Peter observes, this does depend on supposing the track is frictionless. We always do this in basic physics problems. Friction is hard. It does depend on the exact path one follows, for example. And it depends on speed in complicated ways. We can make approximations to allow for friction losses, often based in experiment. Or try to make the problem one that has less friction, as Jason and Marcus are trying to do.

Caption: 'ODDITIONS'. Several people with large numerals as head stand around, reading scripts; the one with a 3 head recites, 'To be or not to be? That is the question.' A 9 leans in, saying, 'Next!'
Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 29th of September, 2019. The occasional essay featuring Mustard and Boloney appears at this link. I feel a bit glad to see this doesn’t seem to be a rerun, or at least it’s not one I’ve discussed before.

Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 29th is the anthropomorphic numerals joke for the week. This is a slight joke to include here. But there were many comic strips of slight mathematical content. I intend to list them in an essay on Wednesday.

Tuesday, I plan, will be a day for the Fall 2019 A-to-Z. Again, thank you for reading.

Reading the Comics, September 24, 2019: I Make Something Of This Edition


I trust nobody’s too upset that I postponed the big Reading the Comics posts of this week a day. There’s enough comics from last week to split them into two essays. Please enjoy.

Scott Shaw! and Stan Sakai’s Popeye’s Cartoon Club for the 22nd is one of a yearlong series of Sunday strips, each by a different cartoonist, celebrating the 90th year of Popeye’s existence as a character. And, I’m a Popeye fan from all the way back when Popeye was still a part of the pop culture. So that’s why I’m bringing such focus to a strip that, really, just mentions the existence of algebra teachers and that they might present a fearsome appearance to people.

Popeye and Eugene popping into Goon Island. Popeye: 'Thanks for bringing us to Goon Island! Watch out, li'l Jeep! Them Goons are nutty monskers that need civilizin'! Here's Alice the Goon!' Alice: 'MNWMNWMNMN' . Popeye: 'Whatever you sez, Alice! --- !' (Sees a large Goon holding a fist over a baby Goon.) Popeye: 'He's about to squash that li'l Goon! That's all I can stands, I can't stands no more!' Popeye slugs the big Goon. Little Goon holds up a sign: 'You dummy! He's my algebra teacher!' Popeye: 'Alice, I am disgustipated with meself!' Alice: 'MWNMWN!'
Scott Shaw! and Stan Sakai’s Popeye’s Cartoon Club for the 22nd of September, 2019. This is the first (and likely last) time Popeye’s Cartoon Club has gotten a mention here. But appearances by this and by the regular Popeye comic strip (Thimble Theatre, if you prefer) should be gathered at this link.

Lincoln Peirce’s Big Nate for the 22nd has Nate seeking an omen for his mathematics test. This too seems marginal. But I can bring it back to mathematics. One of the fascinating things about having data is finding correlations between things. Sometimes we’ll find two things that seem to go together, including apparently disparate things like basketball success and test-taking scores. This can be an avenue for further research. One of these things might cause the other, or at least encourage it. Or the link may be spurious, both things caused by the same common factor. (Superstition can be one of those things: doing a thing ritually, in a competitive event, can help you perform better, even if you don’t believe in superstitions. Psychology is weird.)

Nate, holding a basketball, thinking: 'If I make this shot it means I'm gonna ace the math test!' He shoots, missing. Nate: 'If I make *this* shot I'm gonna ace the math test!' He shoots, missing. Nate: 'If *this* one goes in, I'll ace the math test!' He shoots, missing. Nate: 'THIS one COUNTS! If I make it it means I'll ace the math test!' He shoots, missing. Nate: 'OK, this is IT! If I make THIS, I WILL ace the math test!' It goes in. Dad: 'Aren't you supposed to be studying for the math test?' Nate: 'Got it covered.'
Lincoln Peirce’s Big Nate for the 22nd of September, 2019. Essays inspired by something in Big Nate, either new-run or the Big Nate: First Class vintage strips, are at this link.

But there are dangers too. Nate shows off here the danger of selecting the data set to give the result one wants. Even people with honest intentions can fall prey to this. Any real data set will have some points that just do not make sense, and look like a fluke or some error in data-gathering. Often the obvious nonsense can be safely disregarded, but you do need to think carefully to see that you are disregarding it for safe reasons. The other danger is that while two things do correlate, it’s all coincidence. Have enough pieces of data and sometimes they will seem to match up.

Norm Feuti’s Gil rerun for the 22nd has Gil practicing multiplication. It’s really about the difficulties of any kind of educational reform, especially in arithmetic. Gil’s mother is horrified by the appearance of this long multiplication. She dubs it both inefficient and harder than the way she learned. She doesn’t say the way she learned, but I’m guessing it’s the way that I learned too, which would have these problems done in three rows beneath the horizontal line, with a bunch of little carry notes dotted above.

Gil: 'Mom, can you check my multiplication homework?' Mom: 'Sure .. is THIS how they're teaching you to do it?' (eg, 37x22 as 14 + 60 + 140 + 600 = 814) Gil: 'Yes.' Mom: 'You know, there's an easier way to do this?' Gil: 'My teacher said the old way was just memorizing an algorithm. The new way helps us understand what we're doing.' Mom: '*I* always understood what I was doing. It seems like they're just teaching you a less efficient algorithm.' Gil: 'Maybe I should just check my work with a calculator.' Mom: 'I have to start going to the PTA meetings.'
Norm Feuti’s Gil rerun for the 22nd of September, 2019. Essays inspired by either the rerun or the new Sunday Gil strips should be gathered at this link.

Gil’s Mother is horrified for bad reasons. Gil is doing exactly the same work that she was doing. The components of it are just written out differently. The only part of this that’s less “efficient” is that it fills out a little more paper. To me, who has no shortage of paper, this efficiency doesn’t seem worth pursuing. I also like this way of writing things out, as it separates cleanly the partial products from the summations done with them. It also means that the carries from, say, multiplying the top number by the first digit of the lower can’t get in the way of carries from multiplying by the second digit. This seems likely to make it easier to avoid arithmetic errors, or to detect errors once suspected. I’d like to think that Gil’s Mom, having this pointed out, would drop her suspicions of this different way of writing things down. But people get very attached to the way they learned things, and will give that up only reluctantly. I include myself in this; there’s things I do for little better reason than inertia.
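To see what Gil’s method is doing, write 37 \times 22 as \left(30 + 7\right)\times\left(20 + 2\right) . Multiplying everything out gives the four partial products 7 \times 2 = 14 , 30 \times 2 = 60 , 7 \times 20 = 140 , and 30 \times 20 = 600 , which sum to 814. The traditional layout computes exactly the same four products. It just adds the first pair and the second pair as it goes, tucking the carries in above.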

People will get hung up on the number of “steps” involved in a mathematical process. They shouldn’t. Whether, say, “37 x 2” is done in one step, two steps, or three steps is a matter of how you’re keeping the books. Even if we agree on how much computation is one step, we’re left with value judgements. Like, is it better to do many small steps, or few big steps? My own inclination is towards reliability. I’d rather take more steps than strictly necessary, if they can all be done more surely. If you want speed, my experience is, you’re better off aiming for reliability and consistency. Speed will follow from experience.

Professor showing multiple paths from A to B on the chalkboard: 'The universe wants particles to take the easiest route from point A to point B. Mysteriously, the universe accomplishes this by first considering *every* possible path. It's doing an enormous amount of calculation just to be certain it's not taking a suboptimal route.' Caption: 'You can model reality pretty well if you imagine it's your dad planning a road trip.'
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 22nd of September, 2019. Essays which go into some aspect of Saturday Morning Breakfast Cereal turn up all the time, such as at this link.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 22nd builds on mathematical physics. Lagrangian mechanics offers great, powerful tools for solving physics problems. It also offers a philosophically challenging interpretation of physics problems. Look at the space made up of all the possible configurations of the system. Take one point to represent the way the system starts. Take another point to represent the way the system ends. Grant that the system gets from that starting point to that ending point. How does it do that? What is the path in this configuration space that goes in-between this start and this end?

We can find the path by using the Lagrangian. Particularly, integrate the Lagrangian over every possible curve that connects the starting point and the ending point. This is every possible way to match start and end. The path that the system actually follows will be an extremum. The actual path will be one that minimizes (or maximizes) this integral, compared to all the other paths nearby that it might follow. Yes, that’s bizarre. How would the particle even know about those other paths?

This seems bad enough. But we can ignore the problem in classical mechanics. The extremum turns out to always match the path that we’d get from taking derivatives of the Lagrangian. Those derivatives look like calculating forces and stuff, like normal.
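In symbols, the thing extremized is the action, S = \int_{t_0}^{t_1} L\left(q, \dot{q}, t\right) dt , evaluated over every path q(t) with the given endpoints. The calculus of variations says the extremizing path satisfies \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0 , the Euler-Lagrange equation. For a Lagrangian of kinetic minus potential energy this works out to force equals mass times acceleration, which is the calculating-forces-like-normal part.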

Then in quantum mechanics the problem reappears and we can’t just ignore it. In the quantum mechanics view no particle follows “a” “path”. It instead is found more likely in some configurations than in others. The most likely configurations correspond to extreme values of this integral. But we can’t just pretend that only the best-possible path “exists”.

Thus the strip’s point. We can represent mechanics quite well. We do this by pretending there are designated starting and ending conditions. And pretending that the system selects the best of every imaginable alternative. The incautious pop physics writer, eager to find exciting stuff about quantum mechanics, will describe this as a particle “exploring” or “considering” all its options before “selecting” one. This is true in the same way that we can say a weight “wants” to roll down the hill, or two magnets “try” to match north and south poles together. We should not mistake it for thinking that electrons go out planning their days, though. Newtonian mechanics gets us used to the idea that if we knew the positions and momentums and forces between everything in the universe perfectly well, we could forecast the future and retrodict the past perfectly. Lagrangian mechanics seems to invite us to imagine a world where everything “perceives” its future and all its possible options. It would be amazing if this did not capture our imaginations.

Billy, pointing a much older kid out to his mother: 'Mommy, you should see HIS math! He has to know numbers AND letters to do it!'
Bil Keane and Jeff Keane’s Family Circus for the 24th of September, 2019. I’m surprised there are not more appearances of this comic strip here. But Family Circus panels inspire essays at these links.

Bil Keane and Jeff Keane’s Family Circus for the 24th has young Billy amazed by the prospect of algebra, of doing mathematics with both numbers and letters. I’m assuming Billy’s awestruck by the idea of letters representing numbers. Geometry also uses quite a few letters, mostly as labels for the parts of shapes. But that seems like a less fascinating use of letters.


The second half of last week’s comics I hope to post here on Wednesday. Stick around and we’ll see how close I come to making it. Thank you.

My 2019 Mathematics A To Z: Hamiltonian


Today’s A To Z term is another I drew from Mr Wu, of the Singapore Math Tuition blog. It gives me more chances to discuss differential equations and mathematical physics, too.

The Hamiltonian we name for Sir William Rowan Hamilton, the 19th century Irish mathematical physicist who worked on everything. You might have encountered his name from hearing about quaternions. Or for coining the terms “scalar” and “tensor”. Or for work in graph theory. There’s more. He did work in Fourier analysis, which is what you get into when you feel at ease with Fourier series. And then wild stuff combining matrices and rings. He’s not quite one of those people where there’s a Hamilton’s Theorem for every field of mathematics you might be interested in. It’s close, though.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Hamiltonian.

When you first learn about physics you learn about forces and accelerations and stuff. When you major in physics you learn to avoid dealing with forces and accelerations and stuff. It’s not explicit. But you get trained to look, so far as possible, away from vectors. Look to scalars. Look to single numbers that somehow encode your problem.

A great example of this is the Lagrangian. It’s built on “generalized coordinates”, which are not necessarily, like, position and velocity and all. They include the things that describe your system. This can be positions. It’s often angles. The Lagrangian shines in problems where it matters that something rotates. Or if you need to work with polar coordinates or spherical coordinates or anything non-rectangular. The Lagrangian is, in your general coordinates, equal to the kinetic energy minus the potential energy. It’ll be a function. It’ll depend on your coordinates and on the derivative-with-respect-to-time of your coordinates. You can take partial derivatives of the Lagrangian. This tells you how the coordinates, and the change-in-time of your coordinates, should change over time.

The Hamiltonian is a similar way of working out mechanics problems. The Hamiltonian function isn’t anything so primitive as the kinetic energy minus the potential energy. No, the Hamiltonian is the kinetic energy plus the potential energy. Totally different in idea.

From that description you maybe guessed you can transfer from the Lagrangian to the Hamiltonian. Maybe vice-versa. Yes, you can, although we use the term “transform”. Specifically a “Legendre transform”. We can use any coordinates we like, just as with Lagrangian mechanics. And, as with the Lagrangian, we can find how coordinates change over time. The change of any coordinate depends on the partial derivative of the Hamiltonian with respect to a particular other coordinate. This other coordinate is its “conjugate”. (It may either be this derivative, or minus one times this derivative. By the time you’re doing work in the field you’ll know which.)
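Written out: if q is a coordinate and p its conjugate, the equations of motion are \frac{dq}{dt} = \frac{\partial H}{\partial p} and \frac{dp}{dt} = -\frac{\partial H}{\partial q} . That’s the promised partial-derivative-of-the-conjugate, with the minus sign landing on one member of the pair.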

That conjugate coordinate is the important thing. It’s why we muck around with Hamiltonians when Lagrangians are so similar. In ordinary, common coordinate systems these conjugate coordinates form nice pairs. In Cartesian coordinates, the conjugate to a particle’s position is its momentum, and vice-versa. In polar coordinates, the conjugate to the angle is the angular momentum. These are nice-sounding pairs. But that’s our good luck. These happen to match stuff we already think is important. In general coordinates one or more of a pair can be some fusion of variables we don’t have a word for and would never care about. Sometimes it gets weird. In the problem of vortices swirling around each other on an infinitely great plane? The horizontal position is conjugate to the vertical position. Velocity doesn’t enter into it. For vortices on the sphere the longitude is conjugate to the cosine of the latitude.

What’s valuable about these pairings is that they make a “symplectic manifold”. A manifold is a patch of space where stuff works like normal Euclidean geometry does. In this case, the space is in “phase space”. This is the collection of all the possible combinations of all the variables that could ever turn up. Every particular moment of a mechanical system matches some point in phase space. Its evolution over time traces out a path in that space. Call it a trajectory or an orbit as you like.

We get good things from looking at the geometry that this symplectic manifold implies. For example, if we know that one variable doesn’t appear in the Hamiltonian, then its conjugate’s value never changes. This is almost the kindest thing you can do for a mathematical physicist. But more. A famous theorem by Emmy Noether tells us that symmetries in the Hamiltonian match with conservation laws in the physics. Time-invariance, for example — time not appearing in the Hamiltonian — gives us the conservation of energy. If only distances between things, not absolute positions, matter, then we get conservation of linear momentum. Stuff like that. To find conservation laws in physics problems is the kindest thing you can do for a mathematical physicist.

The Hamiltonian was born out of planetary physics. These are problems easy to understand and, apart from the case of one star with one planet orbiting each other, impossible to solve exactly. That’s all right. The formalism applies to all kinds of problems. It’s very good at handling particles that interact with each other and maybe some potential energy. This is a lot of stuff.

More, the approach extends naturally to quantum mechanics. It takes some damage along the way. We can’t talk about “the” position or “the” momentum of anything quantum-mechanical. But what we get when we look at quantum mechanics looks very much like what Hamiltonians do. We can calculate things which are quantum quite well by using these tools. This though they came from questions like why Saturn’s rings haven’t fallen apart and whether the Earth will stay around its present orbit.

It holds surprising power, too. Notice that the Hamiltonian is the kinetic energy of a system plus its potential energy. For a lot of physics problems that’s all the energy there is. That is, the value of the Hamiltonian for some set of coordinates is the total energy of the system at that time. And, if there’s no energy lost to friction or heat or whatever? Then that’s the total energy of the system for all time.

Here’s where this becomes almost practical. We often want to do a numerical simulation of a physics problem. Generically, we do this by looking up what all the values of all the coordinates are at some starting time t0. Then we calculate how fast these coordinates are changing with time. We pick a small change in time, Δ t. Then we say that at time t0 plus Δ t, the coordinates are whatever they started at plus Δ t times that rate of change. And then we repeat, figuring out how fast the coordinates are changing now, at this position and time.

The trouble is we always make some mistake, and once we’ve made a mistake, we’re going to keep on making mistakes. We can do some clever stuff to make the smallest error possible figuring out where to go, but it’ll still happen. Usually, we stick to calculations where the error won’t mess up our results.

But when we look at stuff like whether the Earth will stay around its present orbit? We can’t make each step good enough for that. Unless we get to thinking about the Hamiltonian, and our symplectic variables. The actual system traces out a path in phase space. Everywhere on that path the Hamiltonian has a particular value, the energy of the system. So use the regular methods to project most of the variables to the new time, t0 + Δ t. But the rest? Pick the values that make the Hamiltonian work out right. Also momentum and angular momentum and other stuff we know gets conserved. We’ll still make an error. But it’s a different kind of error. It’ll project to a point that’s maybe in the wrong place on the trajectory. But it’s on the trajectory.

(OK, it’s near the trajectory. Suppose the real energy is, oh, the square root of 5. The computer simulation will have an energy of 2.23607. This is close but not exactly the same. That’s all right. Each step will stay close to the real energy.)
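If you want to see the difference this kind of thinking makes, here’s a toy computation of my own, with a plain harmonic oscillator standing in for the solar system. A naive Euler step lets the energy drift steadily. A symplectic step, here the semi-implicit Euler method, keeps the energy error bounded, so the computed point stays near the true trajectory in phase space even after many steps.

```python
# Toy comparison: plain Euler versus a symplectic (semi-implicit Euler)
# step for a unit-mass harmonic oscillator, H = p^2/2 + q^2/2.
def energy(q, p):
    return 0.5 * (p * p + q * q)

def euler_step(q, p, dt):
    return q + dt * p, p - dt * q

def symplectic_step(q, p, dt):
    p = p - dt * q       # update the momentum first ...
    q = q + dt * p       # ... then the position, using the new momentum
    return q, p

dt, steps = 0.1, 1000
qe, pe = 1.0, 0.0        # Euler trajectory
qs, ps = 1.0, 0.0        # symplectic trajectory
for _ in range(steps):
    qe, pe = euler_step(qe, pe, dt)
    qs, ps = symplectic_step(qs, ps, dt)

print("starting energy:", energy(1.0, 0.0))       # 0.5
print("Euler energy:", energy(qe, pe))            # has drifted far above 0.5
print("symplectic energy:", energy(qs, ps))       # still close to 0.5
```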

So what we’ll get is a projection of the Earth’s orbit that maybe puts it in the wrong place in its orbit. Putting the planet on the opposite side of the sun from Venus when we ought to see Venus transiting the Sun. That’s all right, if what we’re interested in is whether Venus and Earth are still in the solar system.

There’s a special cost for this. If there weren’t we’d use it all the time. The cost is computational complexity. It’s pricey enough that you haven’t heard about these “symplectic integrators” before. That’s all right. These are the kinds of things open to us once we look long into the Hamiltonian.


This wraps up my big essay-writing for the week. I will pluck some older essays out of obscurity to re-share tomorrow and Saturday. All of Fall 2019 A To Z posts should be at this link. Next week should have the letter I on Tuesday and J on Thursday. All of my A To Z essays should be available at this link. And I am still interested in topics I might use for the letters K through N. Thank you.

My 2019 Mathematics A To Z: Differential Equations


The thing most important to know about differential equations is that for short, we call it “diff eq”. This is pronounced “diffy q”. It’s a fun name. People who aren’t taking mathematics smile when they hear someone has to get to “diffy q”.

Sometimes we need to be more exact. Then the less exciting names “ODE” and “PDE” get used. The meaning of the “DE” part is an easy guess. The meaning of “O” or “P” will be clear by the time this essay’s finished. We can find approximate answers to differential equations by computer. This is known generally as “numerical solutions”. So you will encounter talk about, say, “NSPDE”. There’s an implied “of” between the S and the P there. I don’t often see “NSODE”. For some reason, probably a quite arbitrary historical choice, this is just called “numerical integration” instead.

To write about “differential equations” was suggested by aajohannas, who is on Twitter as @aajohannas.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Differential Equations.

One of algebra’s unsettling things is the idea that we can work with numbers without knowing their values. We can give them names, like ‘x’ or ‘a’ or ‘t’. We can know things about them. Often it’s equations telling us these things. We can make collections of numbers based on them all sharing some property. Often these things are solutions to equations. We can even describe changing those collections according to some rule, even before we know whether any of the numbers is 2. Often these things are functions, here matching one set of numbers to another.

One of analysis’s unsettling things is the idea that most things we can do with numbers we can also do with functions. We can give them names, like ‘f’ and ‘g’ and … ‘F’. That’s easy enough. We can add and subtract them. Multiply and divide. This is unsurprising. We can measure their sizes. This is odd but, all right. We can know things about functions even without knowing exactly what they are. We can group together collections of functions based on some properties they share. This is getting wild. We can even describe changing these collections according to some rule. This change is itself a function, but it is usually called an “operator”, saving us some confusion.

So we can describe a function in an equation. We may not know what f is, but suppose we know \sqrt{f(x) - 2} = x is true. We can suppose that if we cared we could find what function, or functions, f made that equation true. There is shorthand here. A function has a domain, a range, and a rule. The equation part helps us find the rule. The domain and range we get from the problem. Or we take the implicit rule that both are the biggest sets of real-valued numbers for which the rule parses. Sometimes biggest sets of complex-valued numbers. We get so used to saying “the function” to mean “the rule for the function” that we’ll forget to say that’s what we’re doing.

There are things we can do with functions that we can’t do with numbers. Or at least that are too boring to do with numbers. The most important here is taking derivatives. The derivative of a function is another function. One good way to think of a derivative is that it describes how a function changes when its variables change. (The derivative of a number is zero, which is boring except when it’s also useful.) Derivatives are great. You learn them in Intro Calculus, and there are a bunch of rules to follow. But follow them and you can pretty much take the derivative of any function even if it’s complicated. Yes, you might have to look up what the derivative of the arc-hyperbolic-secant is. Nobody has ever used the arc-hyperbolic-secant, except to tease a student.

And the derivative of a function is itself a function. So you can take a derivative again. Mathematicians call this the “second derivative”, because we didn’t expect someone would ask what to call it and we had to say something. We can take the derivative of the second derivative. This is the “third derivative” because by then changing the scheme would be awkward. If you need to talk about taking the derivative some large but unspecified number of times, this is the n-th derivative. Or m-th, if you’ve already used ‘n’ to mean something else.

And now we get to differential equations. These are equations in which we describe a function using at least one of its derivatives. The original function, that is, f, usually appears in the equation. It doesn’t have to, though.
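As a minimal example of my own, not tied to any particular textbook problem, here is a differential equation and one family of functions satisfying it:

\frac{df}{dx}(x) = 3 f(x), \qquad f(x) = C e^{3x} \text{ for any constant } C.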

We divide the earth naturally (we think) into two pairs of hemispheres, northern and southern, eastern and western. We divide differential equations naturally (we think) into two pairs of two kinds of differential equations.

The first division is into linear and nonlinear equations. I’ll describe the two kinds of problem loosely. Linear equations are the kind you don’t need a mathematician to solve. If the equation has solutions, we can write out procedures that find them, like, all the time. A well-programmed computer can solve them exactly. Nonlinear equations, meanwhile, are the kind no mathematician can solve. They’re just too hard. There are no processes that are sure to find an answer.

You may ask. We don’t need mathematicians to solve linear equations. Mathematicians can’t solve nonlinear ones. So what do we need mathematicians for? The answer is that I exaggerate. Linear equations aren’t quite that simple. Nonlinear equations aren’t quite that hopeless. There are nonlinear equations we can solve exactly, for example. This usually involves some ingenious transformation. We find a linear equation whose solution guides us to the function we do want.

And that is what mathematicians do in such a field. A nonlinear differential equation may, generally, be hopeless. But we can often find a linear differential equation which gives us insight to what we want. Finding that equation, and showing that its answers are relevant, is the work.

The other hemispheres we call ordinary differential equations and partial differential equations. In form, the difference between them is the kind of derivative that’s taken. If the function’s domain is more than one dimension, then there are different kinds of derivative. Or as normal people put it, if the function has more than one independent variable, then there are different kinds of derivatives. These are partial derivatives and ordinary (or “full”) derivatives. Partial derivatives give us partial differential equations. Ordinary derivatives give us ordinary differential equations. I think it’s easier to understand a partial derivative.

Suppose a function depends on three variables, imaginatively named x, y, and z. There are three partial first derivatives. One describes how the function changes if we pretend y and z are constants, but let x change. This is the “partial derivative with respect to x”. Another describes how the function changes if we pretend x and z are constants, but let y change. This is the “partial derivative with respect to y”. The third describes how the function changes if we pretend x and y are constants, but let z change. You can guess what we call this.
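For a concrete example of my own choosing, take f(x, y, z) = x^2 y + \sin(z) . Its three partial first derivatives are

\frac{\partial f}{\partial x} = 2 x y, \qquad \frac{\partial f}{\partial y} = x^2, \qquad \frac{\partial f}{\partial z} = \cos(z).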

With an ordinary derivative we would still like to know how the function changes when x changes. But we have to admit that a change in x might cause a change in y and z. So we have to account for that. If you don’t see how such a thing is possible don’t worry. The differential equations textbook has an example in which you wish to measure something on the surface of a hill. Temperature, usually. Maybe rainfall or wind speed. To move from one spot to another a bit east of it is also to move up or down. The change in (let’s say) x, how far east you are, demands a change in z, how far above sea level you are.

That’s structure, though. What’s more interesting is the meaning. What kinds of problems do ordinary and partial differential equations usually represent? Partial differential equations are great for describing surfaces and flows and great bulk masses of things. If you see an equation about how heat transmits through a room? That’s a partial differential equation. About how sound passes through a forest? Partial differential equation. About the climate? Partial differential equations again.

Ordinary differential equations are great for describing a ball rolling on a lumpy hill. It’s given an initial push. There are some directions (downhill) that it’s easier to roll in. There are some directions (uphill) that it’s harder to roll in, but it can roll if the push was hard enough. There’s maybe friction that makes it roll to a stop.

Put that way it’s clear all the interesting stuff is partial differential equations. Balls on lumpy hills are nice but who cares? Miniature golf course designers and that’s all. This is because I’ve presented it to look silly. I’ve got you thinking of a “ball” and a “hill” as if I meant balls and hills. Nah. It’s usually possible to bundle a lot of information about a physical problem into something that looks like a ball. And then we can bundle the ways things interact into something that looks like a hill.

Like, suppose we have two blocks on a shared track, like in a high school physics class. We can describe their positions as one point in a two-dimensional space. One axis is where on the track the first block is, and the other axis is where on the track the second block is. Physics problems like this also usually depend on momentum. We can toss these in too, an axis that describes the momentum of the first block, and another axis that describes the momentum of the second block.

We’re already up to four dimensions, and we only have two things, both confined to one track. That’s all right. We don’t have to draw it. If we do, we draw something that looks like a two- or three-dimensional sketch, maybe with a note that says “D = 4” to remind us. There’s some point in this four-dimensional space that describes these blocks on the track. That’s the “ball” for this differential equation.

The things that the blocks can do? Like, they can collide? They maybe have rubber tips so they bounce off each other? Maybe someone’s put magnets on them so they’ll draw together or repel? Maybe there’s a spring connecting them? These possible interactions are the shape of the hills that the ball representing the system “rolls” over. An impenetrable barrier, like, two things colliding, is a vertical wall. Two things being attracted is a little divot. Two things being repulsed is a little hill. Things like that.
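To make that four-dimensional “ball” concrete, here is a minimal sketch in Python, with made-up masses and a made-up spring connecting the two blocks. The state is a single point (x1, x2, p1, p2); the rule tells us which way that point moves.

m1, m2, k = 1.0, 2.0, 4.0          # made-up masses and spring constant
rest_length = 1.0                  # made-up natural length of the spring

def rate_of_change(state):
    x1, x2, p1, p2 = state         # one point in four-dimensional phase space
    stretch = (x2 - x1) - rest_length
    force = k * stretch            # the spring pulls the blocks together when stretched
    return (p1 / m1, p2 / m2, force, -force)   # (dx1/dt, dx2/dt, dp1/dt, dp2/dt)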

Now you see why an ordinary differential equation might be interesting. It can capture what happens when many separate things interact.

I write this as though ordinary and partial differential equations are different continents of thought. They’re not. When you model something you make choices and they can guide you to ordinary or to partial differential equations. My own research work, for example, was on planetary atmospheres. Atmospheres are fluids. Representing how fluids move usually calls for partial differential equations. But my own interest was in vortices, swirls like hurricanes or Jupiter’s Great Red Spot. Since I was acting as if the atmosphere was a bunch of storms pushing each other around, this implied ordinary differential equations.

There are more hemispheres of differential equations. They have names like homogeneous and non-homogeneous. Coupled and decoupled. Separable and nonseparable. Exact and non-exact. Elliptic, parabolic, and hyperbolic partial differential equations. Don’t worry about those labels. They relate to how difficult the equations are to solve. In what ways they’re difficult. In what ways they break computers trying to approximate their solutions.

What’s interesting about these, besides that they represent many physical problems, is that they capture the idea of feedback. Of control. If a system’s current state affects how it’s going to change, then it probably has a differential equation describing it. Many systems change based on their current state. So differential equations have long been near the center of professional mathematics. They offer great and exciting pure questions while still staying urgent and relevant to real-world problems. They’re great things.


Thanks again for reading. All Fall 2019 A To Z posts should be at this link. I should get to the letter E for Tuesday. All of the A To Z essays should be at this link. If you have thoughts about other topics I might cover, please offer suggestions for the letters G and H.

In Our Time podcast repeated its Emmy Noether episode


One of the podcasts I regularly listen to is the BBC’s In Our Time. This is a roughly 50-minute chat, each week, about some topic of general interest. It’s broad in its subjects; they can be historical, cultural, scientific, artistic, and even sometimes mathematical.

Recently they repeated an episode about Emmy Noether. I knew, before, that she was one of the great figures in our modern understanding of physics. Noether’s Theorem tells us how the geometry of a physics problem constrains the physics we have, and in useful ways. That, for example, what we understand as the conservation of angular momentum results from a physical problem being rotationally symmetric. (That if we rotated everything about the problem by the same angle around the same axis, we’d not see any different behaviors.) Similarly, that you could start a physics scenario at any time, sooner or later, without changing the results, forces the scenario to obey conservation of energy. This is a powerful and stunning way to connect physics and geometry.

What I had not appreciated until listening to this episode was her work in group theory, and in organizing it in the way we still learn the subject. This startled and embarrassed me. It forced me to realize I knew little about the history of group theory. Group theory has over the past two centuries been a key piece of mathematics. It’s given us results as basic as showing there are polynomials that no quadratic formula-type expression will ever solve. It’s given results as esoteric as predicting what kinds of exotic subatomic particles we should expect to exist. And her work’s led into the modern understanding of the fundamentals of mathematics. So it’s exciting to learn some more about this.

This episode of In Our Time should be at this link although I just let iTunes grab episodes from the podcast’s feed. There are a healthy number of mathematics- and science-related conversations in its archives.

Reading the Comics, August 16, 2019: The Comments Drive Me Crazy Edition


Last week was another light week of work from Comic Strip Master Command. One could fairly argue that nothing is worth my attention. Except … one comic strip got onto the calendar. And that, my friends, is demanding I pay attention. Because the comic strip got multiple things wrong. And then the comments on GoComics got it more wrong. Got things wrong to the point that I could not be sure people weren’t trolling each other. I know how nerds work. They do this. It’s not pretty. So since I have the responsibility to correct strangers online I’ll focus a bit on that.

Robb Armstrong’s JumpStart for the 13th starts off all right. The early Roman calendar had ten months, December the tenth of them. This was a calendar that didn’t try to cover the whole year. It just started in spring and ran into early winter and that was it. This may seem baffling to us moderns, but it is, I promise you, the least confusing aspect of the Roman calendar. This may seem less strange if you think of the Roman calendar as like a sports team’s calendar, or a playhouse’s schedule of shows, or a timeline for a particular complicated event. There are just some fallow months that don’t need mention.

Joe: 'Originally December was the tenth month of the calendar year. Guess what happens every 823 years? December is about to have five Saturdays, five Sundays, and five Mondays! It's a rare phenomenon!' Crunchy: 'Kinda like a cop who trusts the Internet.'
Robb Armstrong’s JumpStart for the 13th of August, 2019. Essays featuring JumpStart should appear at this link. I am startled to learn that this is a new tag, though. I hope the comic makes more appearances; it’s pleasantly weird in low-key ways. Well, I mean, those are cops driving an ice cream truck and that’s one of the more mundane things about the comic, you know?

Things go wrong with Joe’s claim that December will have five Saturdays, five Sundays, and five Mondays. December 2019 will have no such thing. It has four Saturdays. There are five Sundays, Mondays, and Tuesdays. From Crunchy’s response it sounds like Joe’s run across some Internet Dubious Science Folklore. You know, where you see a claim that (like) Saturn will be larger in the sky than anytime since the glaciers receded or something. And as you’d expect, it’s gotten a bit out of date. December 2018 had five Saturdays, Sundays, and Mondays. So did December 2012. And December 2007.

And as this shows, that’s not a rare thing. Any month with 31 days will have five of some three days in the week. August 2019, for example, has five Thursdays, Fridays, and Saturdays. October 2019 will have five Tuesdays, Wednesdays, and Thursdays. This we can show by the pigeonhole principle. And there are seven months each with 31 days in every year.

It’s not every year that has some month with five Saturdays, Sundays, and Mondays in it. 2024 will not, for example. But a lot of years do. I’m not sure why December gets singled out for attention here. From the setup about December having long ago been the tenth month, I guess it’s some attempt to link the fives of the weekend days to the ten of the month number. But we get this kind of December about every five or six years.
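If you want to check this sort of claim yourself, a few lines of Python will do it; this is a quick sketch, nothing clever. A 31-day month has five Saturdays, Sundays, and Mondays exactly when its first day is a Saturday.

import datetime

# Decembers between 2000 and 2050 whose first day is a Saturday
# (weekday() counts Monday as 0, so Saturday is 5).
years = [y for y in range(2000, 2051)
         if datetime.date(y, 12, 1).weekday() == 5]
print(years)   # [2001, 2007, 2012, 2018, 2029, 2035, 2040, 2046]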

This 823 years stuff, now that’s just gibberish. The Gregorian calendar has its wonders and mysteries yes. None of them have anything to do with 823 years. Here, people in the comments got really bad at explaining what was going on.

So. There are fourteen different … let me call them year plans, available to the Gregorian calendar. January can start on a Sunday when it is a leap year. Or January can start on a Sunday when it is not a leap year. January can start on a Monday when it is a leap year. January can start on a Monday when it is not a leap year. And so on. So there are fourteen possible arrangements of the twelve months of the year, what days of the week the twentieth of January and the thirtieth of December can occur on. The incautious might think this means there’s a period of fourteen years in the calendar. This comes from misapplying the pigeonhole principle.

Here’s the trouble. January 2019 started on a Tuesday. This implies that January 2020 starts on a Wednesday. January 2025 also starts on a Wednesday. But January 2024 starts on a Monday. You start to see the pattern. If this is not a leap year, the next year starts one day of the week later than this one. If this is a leap year, the next year starts two days of the week later. This is all a slightly annoying pattern, but it means that, typically, it takes 28 years to get back where you started. January 2019 started on Tuesday; January 2020 on Wednesday, and January 2021 on Friday. The same will hold for January 2047 and 2048 and 2049. There are other successive years that will start on Tuesday and Wednesday and Friday before that.

Except.

The important difference between the Julian and the Gregorian calendars is century years. 1900. 2000. 2100. These are all leap years by the Julian calendar reckoning. Most of them are not, by the Gregorian. Only century years divisible by 400 are. 2000 was a leap year; 2400 will be. 1900 was not; 2100 will not be, by the Gregorian scheme.

These exceptions to the leap-year-every-four-years pattern mess things up. The 28-year-period does not work if it stretches across a non-leap-year century year. By the way, if you have a friend who’s a programmer who has to deal with calendars? That friend hates being a programmer who has to deal with calendars.

There is still a period. It’s just a longer period. Happily the Gregorian calendar has a period of 400 years. The whole sequence of year patterns from 2000 through 2019 will reappear, 2400 through 2419. 2800 through 2819. 3200 through 3219.

(Whether they were also the year patterns for 1600 through 1619 depends on where you are. Countries which adopted the Gregorian calendar promptly? Yes. Countries which held out against it, such as Turkey or the United Kingdom? No. Other places? Other, possibly quite complicated, stories. If you ask your computer for the 1619 calendar it may well look nothing like 2019’s, and that’s because it is showing the Julian rather than Gregorian calendar.)

Except.

This is all in reference to the days of the week. The date of Easter, and all of the movable holidays tied to Easter, is on a completely different cycle. Easter is set by … oh, dear. Well, it’s supposed to be a simple enough idea: the Sunday after the first spring full moon. It uses a notional moon that’s less difficult to predict than the real one. It’s still a bit of a mess. The date of Easter is periodic again, yes. But the period is crazy long. It would take 5,700,000 years to complete its cycle on the Gregorian calendar. It never will. Never try to predict Easter. It won’t go well. Don’t believe anything amazing you read about Easter online.

Norm, pondering: 'I have a new theory about life.' (Illustrated with a textbook, 'Quantum Silliness'.) 'It's not as simple as everything-is-easy, or everything-is-hard.' (Paper with 1 + 1 = 2; another with Phi = BA.) 'Instead, life is only hard when it should be easy and easy when it's expected to be hard. That way you're never prepared.' (The papers are torn up.) Friend: 'Seems to me you've stepped right into the middle of chaos theory.' Norm: 'Or just my 30s.'
Michael Jantze’s The Norm (Classics) for the 15th of August, 2019. I had just written how I wanted to share this strip more. Essays about The Norm, both the current (“4.0”) run and older reruns (“Classics”), are at this link.

Michael Jantze’s The Norm (Classics) for the 15th is much less trouble. It uses some mathematics to represent things being easy and things being hard. Easy’s represented with arithmetic. Hard is represented with the calculations of quantum mechanics. Which, oddly, look very much like arithmetic. \phi = BA even has fewer symbols than 1 + 1 = 2 has. But the symbols mean different abstract things. In a quantum mechanics context, ‘A’ and ‘B’ represent — well, possibly matrices. More likely operators. Operators work a lot like functions and I’m going to skip discussing the ways they don’t. Multiplying operators together — B times A, here — works by using the range of one function as the domain of the other. Like, imagine ‘B’ means ‘take the square of’ and ‘A’ means ‘take the sine of’. Then ‘BA’ would mean ‘take the square of the sine of’ (something). The fun part is the ‘AB’ would mean ‘take the sine of the square of’ (something). Which is fun because most of the time, those won’t have the same value. We accept that, mathematically. It turns out to work well for some quantum mechanics properties, even though it doesn’t work like regular arithmetic. So \phi = BA holds complexity, or at least strangeness, in its few symbols.
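The square-of-the-sine versus sine-of-the-square point is easy to try for yourself. Here is a quick sketch treating the operators as things that act on functions, which is close to the spirit of how quantum mechanics operators behave:

import math

def A(f):
    return lambda x: math.sin(f(x))   # A means "take the sine of" whatever f gives
def B(f):
    return lambda x: f(x) ** 2        # B means "take the square of" whatever f gives

identity = lambda x: x
BA = B(A(identity))   # the square of the sine
AB = A(B(identity))   # the sine of the square

print(BA(2.0))   # sin(2)^2, about 0.827
print(AB(2.0))   # sin(2^2) = sin(4), about -0.757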

Moose, bringing change and food back from the beach snack stand: 'Arch gave me five and a single so he gets ... $2.11 in change!' Archie: 'Right, Moose! Thanks!' (To Betty.) 'Notice how Moose can do math faster at the beach than he can anywhere else?' Betty: 'Why is that?' Moose, pointing to his feet: 'Easy! I don't have to take off my shoes to count my toes!'
Henry Scarpelli and Craig Boldman’s Archie rerun for the 16th of August, 2019. Essays exploring something mentioned by Archie ought to be at this link. The strip is in perpetual reruns but I don’t think I’ve exhausted the cycle of comics they reprint yet.

Henry Scarpelli and Craig Boldman’s Archie for the 16th is a joke about doing arithmetic on your fingers and toes. That’s enough for me.


There were some more comic strips which just mentioned mathematics in passing.

Brian Boychuk and Ron Boychuk’s The Chuckle Brothers rerun for the 11th has a blackboard of mathematics used to represent deep thinking. Also, I think, the colorist didn’t realize that they were standing in front of a blackboard. You can see mathematicians doing work in several colors, either to convey information in shorthand or because they had several colors of chalk. Not this way, though.

Mark Leiknes’s Cow and Boy rerun for the 16th mentions “being good at math” as something to respect cows for. The comic’s just this past week started over from its beginning. If you’re interested in deeply weird and long-since cancelled comics this is as good a chance to jump on as you can get.

And Stephen Bentley’s Herb and Jamaal rerun for the 16th has a kid worried about a mathematics test.


That’s the mathematically-themed comic strips for last week. All my Reading the Comics essays should be at this link. I’ve traditionally run at least one essay a week on Sunday. But recently that’s moved to Tuesday for no truly compelling reason. That seems like it’s working for me, though. I may stick with it. If you do have an opinion about Sunday versus Tuesday please let me know.

Don’t let me know on Twitter. I continue to have this problem where Twitter won’t load on Safari. I don’t know why. I’m this close to trying it out on a different web browser.

And, again, I’m planning a fresh A To Z sequence. It’s never too early to think of mathematics topics that I might explain. I should probably have already started writing some. But you’ll know the official announcement when it comes. It’ll have art and everything.

Reading the Comics, July 20, 2019: What Are The Chances Edition


The temperature’s cooled. So let me get to the comics that, Saturday, I thought were substantial enough to get specific discussion. It’s possible I was overestimating how much there was to say about some of these. These are the risks I take.

Paige Braddock’s Jane’s World for the 15th sees Jane’s niece talk about enjoying mathematics. I’m glad to see it. You sometimes see comic strip characters who are preposterously good at mathematics. Here I mean Jason and Marcus over in Bill Amend’s FoxTrot. But even they don’t often talk about why mathematics is appealing. There is no one answer for all people. I suspect even for a single person the biggest appeal changes over time. That mathematics seems to offer certainty, though, appeals to many. Deductive logic promises truths that can be known independent of any human failings. (The catch is actually doing a full proof, because that takes way too many boring steps. Mathematicians more often do enough of a proof to convince anyone that the full proof could be produced if needed.)

Alexa: 'I sort of like math.' Jane: 'Hm. You could have a fever.' Alexa: 'No, really. Math is stable, not like emotional stuff or social stuff that's all over the place. Math is comforting. ... Because, in math, there is always a right answer.' Jane: 'Who cares if there's a right answer if I DON'T KNOW WHAT IT IS?' Alexa: 'Aunt Jane, I was talking about me.'
Paige Braddock’s Jane’s World for the 15th of July, 2019. The comic originally ran, if I’m reading the dates right, the 28th of October, 2002. Essays mentioning Jane’s World should appear at this link. I think that so far the only mention would be Sunday’s post, when I pointed out the existence of this storyline.

Alexa also enjoys math for there always being a right answer. Given her age there probably always is. There are mathematical questions for which there is no known right answer. Some of these are questions for which we just don’t know the answer, like, “is there an odd perfect number?” Some of these are more like value judgements, though. Is Euclidean geometry or non-Euclidean geometry more correct? The answer depends on what you want to do. There’s no more a right answer to that question than there is a right answer to “what shall I eat for dinner”.

Jane is disturbed by the idea of there being a right answer that she doesn’t know. She would not be happy to learn about “existence proofs”. This is a kind of proof in which the goal is not to find an answer. It’s just to show that there is an answer. This might seem pointless. But there are problems for which there can’t be an answer. If an answer’s been hard to find, it’s worth checking whether there are answers to find.

Son: 'I heard the chances of winning the lottery are the same as the chances of being hit by lightning!' Father: 'That's probably true. Did you know Uncle Ted was once hit by lightning on the golf course?' Son: 'No kidding? Did he buy a lottery ticket?'
Art Sansom and Chip Sansom’s The Born Loser for the 16th of July, 2019. There are a couple of essays mentioning The Born Loser, gathered at this link.

Art Sansom and Chip Sansom’s The Born Loser for the 16th builds on comparing the probability of winning the lottery to that of being hit by lightning. This comparison’s turned up a couple of times, including in Mister Boffo and The Wandering Melon, when I learned that Peter McCathie had both won the lottery and been hit by lightning.

Fun With Barfly And Schrodinger! Schrodinger: 'The pirate told the sailor he would walk the plank. The pirate explained that it would not happen until the moon had risen high enough in the sky to illuminate the deck. The sailor asked 'Why? Isn't the plank constant?' The pirate replied 'How the h would I know?''
Pab Sungenis’s New Adventures of Queen Victoria for the 17th of July, 2019. I thought I mentioned this strip more than it seems I have. Well, the essays inspired by something in New Adventures of Queen Victoria should be at this link.

Pab Sungenis’s New Adventures of Queen Victoria for the 17th is maybe too marginal for full discussion. It’s just reeling off a physics-major joke. The comedy is from it being a pun: Planck’s Constant is a number important in many quantum mechanics problems. It’s named for Max Planck, one of the pioneers of the field. The constant is represented in symbols as either h or as \hbar . The constant \hbar is equal to \frac{h}{2 \pi} and might be used even more often. It turns out \frac{h}{2 \pi} appears all over the place in quantum mechanics, so it’s convenient to write it with fewer symbols. \hbar is maybe properly called the reduced Planck’s constant, although in my physics classes I never encountered anyone calling it “reduced”. We just accepted there were these two Planck’s Constants and trusted context to make clear which one we wanted. It was \hbar . Planck’s Constant made some news among mensuration fans recently. The International Bureau of Weights and Measures chose to fix the value of this constant. This, through various physics truths, thus fixes the mass of the kilogram in terms of physical constants. This is regarded as better than the old method, where we just had a lump of metal that we used as reference.
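For the record, and this is my own aside rather than anything in the comic: the redefinition fixed Planck’s constant at exactly h = 6.62607015 \times 10^{-34} joule-seconds, which makes the reduced constant \hbar = \frac{h}{2\pi} \approx 1.054571817 \times 10^{-34} joule-seconds.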

Weenus: 'What's all the noise? I have work in the morning and I'm trying to sleep.' Eight-ball: 'Lettuce [rabbit] just dropped a slice of toast butter-side-up twenty times in a row!' Next panel, they're racing, dragging Lettuce to a flight to Las Vegas.
Jonathan Lemon’s Rabbits Against Magic for the 17th of July, 2019. This comic is trying to become the next Andertoons. Essays mentioninng Rabbits Against Magic are at this link.

Jonathan Lemon’s Rabbits Against Magic for the 17th is another probability joke. If a dropped piece of toast is equally likely to land butter-side-up or butter-side-down, then it’s quite unlikely to have it turn up the same way twenty times in a row. There’s about one chance in 524,288 of doing it in a string of twenty toast-flips. (That is, of twenty butter-side-up or butter-side-down in a row. If all you want is twenty butter-side-up, then there’s one chance in 1,048,576.) It’s understandable that Eight-Ball would take Lettuce to be quite lucky just now.
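The numbers are quick to check; here is the arithmetic as a tiny sketch:

same_side_twenty_times = 2 * (0.5 ** 20)   # all butter-side-up or all butter-side-down
all_butter_side_up = 0.5 ** 20

print(1 / same_side_twenty_times)   # 524288.0
print(1 / all_butter_side_up)       # 1048576.0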

But there’s problems with the reasoning. First is the supposition that toast is as likely to fall butter-side-up as butter-side-down. I have a dim recollection of a mid-2000s pop physics book explaining why, given how tall a table usually is, a piece of toast is more likely to make half a turn — to land butter-side-down — while falling. Lettuce isn’t shown anywhere near a table, though. She might be dropping toast from a height that makes butter-side-up more likely. And there’s no reason to suppose that luck in toast-dropping connects to any formal game of chance. Or that her luck would continue to hold: even if she can drop the toast consistently twenty times there’s not much reason to think she could do it twenty-five times, or even twenty-one.

And then there’s this, a bit of trivia that’s flawed but striking. Suppose that all seven billion people in the world have, at some point, tossed a coin at least twenty times. Then there should be seven thousand of them who had the coin turn up tails every single one of the first twenty times they’ve tossed a coin. And, yes, not everyone in the world has touched a coin, much less tossed it twenty times. But there could reasonably be quite a few people who grew up just thinking that every time you toss a coin it comes up tails. That doesn’t mean they’re going to have any luck gambling.


Thanks for waiting for me. The weather looks like I should have my next Reading the Comics post at this link, and on time. I’ll let you know if circumstances change.

Reading the Comics, July 12, 2019: Ricci Tensor Edition


So a couple days ago I was chatting with a mathematician friend. He mentioned how he was struggling with the Ricci Tensor. Not the definition, not exactly, but its point. What the Ricci Tensor was for, and why it was a useful thing. He wished he knew of a pop mathematics essay about the thing. And this brought to my mind, slowly at first, that I knew of one. I wrote such a pop-mathematics essay about the Ricci Tensor, as part of my 2017 A To Z sequence. In it, I spend several paragraphs admitting that I’m not sure I understand what the Ricci tensor is for, and why it’s a useful thing.

Caption: 'Physics Hypotheses That Are Still on The Table'. The No-Boundary Proposal (illustrated with a wireframe of what looks like an open wine glass). The Weyl Conjecture (illustrated with a wireframe of what looks like a football). The Victoria Principal (illustrated with a tableful of cosmetics).
Daniel Beyer’s Long Story Short for the 11th of July, 2019. Essays inspired by something mentioned in Long Story Short should be at this link.

Daniel Beyer’s Long Story Short for the 11th mentions some physics hypotheses. These are ideas about how the universe might be constructed. Like many such cosmological thoughts they blend into geometry. The no-boundary proposal, also known as the Hartle-Hawking state (for James Hartle and Stephen Hawking), is a hypothesis about the … I want to write “the start of time”. But I am not confident that this doesn’t beg the question. Well, we think we know what we mean by “the start of the universe”. A natural question in mathematical physics is, what was the starting condition? At the first moment that there was anything, what did it look like? And this becomes difficult to answer, difficult to even discuss, because part of the creation of the universe was the creation of spacetime. In this no-boundary proposal, the shape of spacetime at the creation of the universe is such that there just isn’t a “time” dimension at the “moment” of the Big Bang. The metaphor I see reprinted often about this is how there’s not a direction south of the south pole, even though south is otherwise a quite understandable concept on the rest of the Earth. (I agree with this proposal, but I feel like the analogy isn’t quite tight enough.)

Still, there are mathematical concepts which seem akin to this. What is the start of the positive numbers, for example? Any positive number you might name has some smaller number we could have picked instead, until we fall out of the positive numbers altogether and into zero. For a mathematical physics concept there’s absolute zero, the coldest temperature there is. But there is no achieving absolute zero. The thermodynamical reasons behind this are hard to argue. (I’m not sure I could put them in a two-thousand-word essay, not the way I write.) It might be that the “moment of the Big Bang” is similarly inaccessible but, at least for the correct observer, incredibly close by.

The Weyl Curvature is a creation of differential geometry. So it is important in relativity, in describing the curve of spacetime. It describes several things that we can think we understand. One is the tidal forces on something moving along a geodesic. Moving along a geodesic is the general-relativity equivalent of moving in a straight line at a constant speed. Tidal forces are those things we remember reading about. They come from the Moon, sometimes the Sun, sometimes from a black hole a theoretical starship is falling into. Another way we are supposed to understand it is that it describes how gravitational waves move through empty space, space which has no other mass in it. I am not sure that this is that understandable, but it feels accessible.

The Weyl tensor describes how the shapes of things change under tidal forces, but it tracks no information about how the volume changes. The Ricci tensor, in contrast, tracks how the volume of a shape changes, but not the shape. Between the Ricci and the Weyl tensors we have all the information about how the shape of spacetime affects the things within it.

Ted Baum, writing to John Baez, offers a great piece of advice in understanding what the Weyl Tensor offers. Baum compares the subject to electricity and magnetism. If one knew all the electric charges and current distributions in space, one would … not quite know what the electromagnetic fields were. This is because there are electromagnetic waves, which exist independently of electric charges and currents. We need to account for those to have a full understanding of electromagnetic fields. So, similarly, the Weyl curvature gives us this for gravity. How is a gravitational field affected by waves, which exist and move independently of some source?

I am not sure that the Weyl Curvature is truly, as the comic strip proposes, a physics hypothesis “still on the table”. It’s certainly something still researched, but that’s because it offers answers to interesting questions. But that’s also surely close enough for the comic strip’s needs.

Elderly man: 'Remember coefficients?' Elderly woman: 'No.' Elderly man: 'Me neither.' Caption: 'Nostalgebra.'
Dave Coverly’s Speed Bump for the 11th of July, 2019. Essays which discuss something that appeared in Speed Bump should be at this link.

Dave Coverly’s Speed Bump for the 11th is a wordplay joke, and I have to admit its marginality. I can’t say it’s implausible that people who (presumably) don’t work much with coefficients would forget them after a long while. I don’t do much with French verb tenses, so I don’t remember anything about the pluperfect except that it existed. (I have a hazy impression that I liked it, but not an idea why. I think it was something in the auxiliary verb.) Still, this mention of coefficients nearly forms a comic strip synchronicity with Mike Thompson’s Grand Avenue for the 11th, in which a Math Joke allegedly has a mistaken coefficient as its punch line.

Gabby: 'It's craft time here at summer camp.' Michael: 'Finally! An activity that won't hurt my brain. Are we weaving? Painting? Making placemats?' Gabby: 'No. We're making probability flash cards.' Michael: 'The probability of us enjoying that activity? Zero.' Gabby: 'Finally! An answer at math camp that we can get right.'
Mike Thompson’s Grand Avenue for the 12th of July, 2019. The fair number of essays in which I complain about Grand Avenue I gather at this link.

Mike Thompson’s Grand Avenue for the 12th is the one I’m taking as representative for the week, though. The premise has been that Gabby and Michael were sent to Math Camp. They do not want to go to Math Camp. They find mathematics to be a bewildering set of arbitrary and petty rules to accomplish things of no interest to them. From their experience, it’s hard to argue. The comic has, since I started paying attention to it, consistently had mathematics be a chore dropped on them. And not merely from teachers who want them to solve boring story problems. Their grandmother dumps workbooks on them, even in the middle of summer vacation, presenting it as a chore they must do. Most comic strips present mathematics as a thing students would rather not do, and that’s both true enough and a good starting point for jokes. But I don’t remember any that make mathematics look so tedious. Anyway, I highlight this one because, of the Math Camp jokes, it and the coefficients mention above are the most direct mentions of some mathematical thing. The rest are along the lines of the strip from the 9th, asserting that the “Math Camp Activity Board” spelled the last word wrong. The joke’s correct but it’s not mathematical.


So I had to put this essay to bed before I could read Saturday’s comics. Were any of them mathematically themed? I may know soon! And were there comic strips with some mention of mathematics, but too slight for me to make a paragraph about? What could be even slighter than the mathematical content of the Speed Bump and the Grand Avenue I did choose to highlight? Please check the Reading the Comics essay I intend to publish Tuesday. I’m curious myself.

Particle Physics Made Hard


A friend was playing with that cute little particle-physics simulator idea I mentioned last week. And encountered a problem. With a little bit of thought, I was able to not solve the problem. But I was able to explain why it was a subtler and more difficult problem than they had realized. These are the moments that make me feel justified calling myself a mathematician.

The proposed simulation was simple enough: imagine a bunch of particles that interact by rules that aren’t necessarily symmetric. Like, the attraction particle A exerts on particle B isn’t the same as what B exerts on A. Or there are multiple species of particles. So (say) red particles are attracted to blue but repelled by green. But green is attracted to red and repelled by blue twice as strongly as red is attracted to blue. Your choice.

Give a mathematician a perfectly good model of something. She’ll have the impulse to try tinkering with it. One reliable way to tinker with it is to change the domain on which it works. If your simulation supposes you have particles moving on the plane, then, what if they were in space instead? Or on the surface of a sphere? Or what if something was strange about the plane? My friend had this idea: what if the particles were moving on the surface of a cube?

And the problem was how to find the shortest distance between two particles on the surface of a cube. The distance matters since most any attraction rule depends on the distance. This may be as simple as “particles more than this distance apart don’t interact in any way”. The obvious approach, or if you prefer the naive approach, is to pretend the cube is a sphere and find distances that way. This doesn’t get it right, not if the two points are on different faces of the cube. If they’re on adjacent faces, ones which share an edge — think the floor and the wall of a room — it seems straightforward enough. My friend got into trouble with points on opposite faces. Think the floor and the ceiling.

This problem was posed (to the public) in January 1905 by Henry Ernest Dudeney. Dudeney was a newspaper columnist with an exhaustive list of mathematical puzzles. A couple of the books collecting them are on Project Gutenberg. The puzzles show their age in spots. Some in language; some in problems that ask to calculate money in pounds-shillings-and-pence. Many of them are chess problems. But many are also still obviously interesting, and worth thinking about. This one, I was able to find, was a variation of The Spider and the Fly, problem 75 in The Canterbury Puzzles:

Inside a rectangular room, measuring 30 feet in length and 12 feet in width and height, a spider is at a point on the middle of one of the end walls, 1 foot from the ceiling, as at A; and a fly is on the opposite wall, 1 foot from the floor in the centre, as shown at B. What is the shortest distance that the spider must crawl in order to reach the fly, which remains stationary? Of course the spider never drops or uses its web, but crawls fairly.

(Also I admire Dudeney’s efficient closing off of the snarky, problem-breaking answer someone was sure to give. It suggests experienced thought about how to pose problems.)

What makes this a puzzle, even a paradox, is that the obvious answer is wrong. At least, what seems like the obvious answer is to start at point A, move to one of the surfaces connecting the spider’s and the fly’s starting points, and from that move to the fly’s surface. But, no: you get a shorter answer by using more surfaces. Going on a path that seems like it wanders more gets you a shorter distance. The solution’s presented here, along with some follow-up problems. In this case, the spider’s shortest path uses five of the six surfaces of the room.
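To put numbers to it, here is my summary of the arithmetic, worth checking against Dudeney’s own solution. The seemingly obvious route — straight up the one foot to the ceiling, thirty feet along the ceiling, and eleven feet down the far wall to the fly — takes

1 + 30 + 11 = 42 \text{ feet.}

Unfold the room so that the five surfaces of the winning path lie flat, and that path becomes a single straight line, the hypotenuse of a right triangle with legs 32 and 24:

\sqrt{32^2 + 24^2} = \sqrt{1024 + 576} = \sqrt{1600} = 40 \text{ feet.}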

The approach to finding this is an ingenious one. Imagine the room as a box, and unfold it into something flat. Then find the shortest distance on that flat surface. Then fold the box back up. It’s a good trick. It turns out to be useful in many problems. Mathematical physicists often have reason to ponder paths of things on flattenable surfaces like this. Sometimes they’re boxes. Sometimes they’re toruses, the shape of a doughnut. This kind of unfolding often makes questions like “what’s the shortest distance between points” easier to solve.

There are wrinkles to the unfolding. Of course there are. How interesting would it be if there weren’t? The wrinkles amount to this. Imagine you start at the corner of the room, and walk up a wall at a 45 degree angle to the horizon. You’ll get to the far corner eventually, if the room has proportions that allow it. All right. But suppose you walked up at an angle of 30 degrees to the horizon? At an angle of 75 degrees? You’ll wind your way around the walls (and maybe floor and ceiling) some number of times, depending on the path you start with. Probably different numbers of times. Some path will be shortest, and that’s fine. But … like, think about the path that goes along the walls and ceiling and floor three times over. The room, unfolded into a flat panel, has only one floor and one ceiling and each wall once. The straight line you might be walking goes right off the page.

And this is the wrinkle. You might need to tile the room. In a column of blocks (like in Dudeney’s solution) every fourth block might be the floor, with, between any two of them, a ceiling. This is fine, and what’s needed. It can be a bit dizzying to imagine such a state of affairs. But if you’ve ever zoomed a map of the globe out far enough that you see Australia six times over then you’ve understood how this works.

I cannot attest that this has helped my friend in the slightest. I am glad that my friend wanted to think about the surface of the cube. The surface of a dodecahedron would be far, far past my ability to help with.

A Neat Fake Particle Physics Simulator


A friend sent me this video, after realizing that I had missed an earlier mention of it and thought it weird I never commented on it. And I wanted to pass it on, partly because it’s neat and partly because I haven’t done enough writing about topics besides the comics recently.

Particle Life: A Game Of Life Made Of Particles is, at least in video form, a fascinating little puzzle. The Game of Life referenced is one that anybody reading a pop mathematics blog is likely to know. But here goes. The Game of Life is this iterative process. We look at a grid of points, with each point having one of a small set of possible states. Traditionally, just two. At each iteration we go through every grid location. We might change that state. Whether we do depends on some simple rules. In the original Game of Life it’s (depending on your point of view) either two or three rules. A common variation is to include “mutations”, where a location’s state changes despite what the other rules would dictate. And the fascinating thing is that these very simple rules can yield incredibly complicated and beautiful patterns. It’s a neat mathematical refutation of the idea that life is so complicated that it must take a supernatural force to generate. It turns out that many things following simple rules can produce complicated patterns. We will often call them “unpredictable”, although (unless we do have mutations) they are literally perfectly predictable. They’re just chaotic, with tiny changes in the starting conditions often resulting in huge changes in behavior quickly.
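If you have never implemented it, one iteration of the original Game of Life fits in a few lines. This is a bare sketch of my own, with the grid wrapped around at the edges to keep it short:

def step(grid):
    rows, cols = len(grid), len(grid[0])
    new_grid = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbors, wrapping around the edges.
            neighbors = sum(grid[(r + dr) % rows][(c + dc) % cols]
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                            if (dr, dc) != (0, 0))
            if grid[r][c] == 1:
                new_grid[r][c] = 1 if neighbors in (2, 3) else 0   # survival rule
            else:
                new_grid[r][c] = 1 if neighbors == 3 else 0        # birth rule
    return new_grid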

This Particle Life problem is built on similar principles. The model is different. Instead of grid locations there is a cloud of particles. The rules are a handful of laws of attraction-or-repulsion. That is, that each particle exerts a force on all the other particles in the system. This is very like the real physics, of clouds of asteroids or of masses of electrically charged gases or the like. But, like, a cloud of asteroids has everything following the same rule, everything attracts everything else with an intensity that depends on their distance apart. Masses of charged particles follow two rules, particles attracting or repelling each other with an intensity that depends on their distance apart.

This simulation gets more playful. There can be many kinds of particles. They can follow different and non-physically-realistic rules. Like, a red particle can be attracted to a blue, while a blue particle is repelled by a red. A green particle can be attracted to a red with twice the intensity that a red particle’s attracted to a green. Whatever; set different rules and you create different mock physics.
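Here is a minimal sketch of that kind of rule set, with made-up species and made-up attraction numbers; it is nothing like the code behind the video, just enough to show where the asymmetric rules come in.

import math, random

# attraction[a][b] is how strongly a particle of species a is pulled toward
# one of species b; note it need not equal attraction[b][a].
attraction = {
    "red":   {"red": 0.0,  "blue": 1.0,  "green": -0.5},
    "blue":  {"red": -1.0, "blue": 0.0,  "green": 0.3},
    "green": {"red": 2.0,  "blue": -0.6, "green": 0.0},
}

particles = [{"kind": random.choice(list(attraction)),
              "x": random.random(), "y": random.random(),
              "vx": 0.0, "vy": 0.0} for _ in range(50)]

def step(dt=0.01, cutoff=0.25):
    for p in particles:
        fx = fy = 0.0
        for q in particles:
            if q is p:
                continue
            dx, dy = q["x"] - p["x"], q["y"] - p["y"]
            dist = math.hypot(dx, dy)
            if 0 < dist < cutoff:   # particles farther apart than this don't interact
                strength = attraction[p["kind"]][q["kind"]] / dist
                fx += strength * dx / dist
                fy += strength * dy / dist
        p["vx"] += dt * fx
        p["vy"] += dt * fy
    for p in particles:
        p["x"] += dt * p["vx"]
        p["y"] += dt * p["vy"]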

The result is, as the video shows, particles moving in “unpredictable” ways. Again, here, it’s “unpredictable” in the same way that I couldn’t predict when my birthday will next fall on a Tuesday. That is to say, it’s absolutely predictable; it’s just not obvious before you do the calculations. Still, it’s wonderful watching and tinkering with, if you have time to create some physics simulators. There’s source code for one in C++ that you might use. If you’re looking for little toy projects to write on your own, I suspect this would be a good little project to practice your Lua/LOVE coding, too.

Reading the Comics, May 25, 2019: Slighter Comics Edition.


It turned out to be Thursday. These things happen. The comics for the second half of last week were more marginal.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 20th is a joke about holographic cosmology, proving that there are such things as jokes about holographic cosmology. Cosmology is about the big picture stuff, like, why there is a universe and why it looks like that. It’s a rather mathematical field, owing to the difficulty of doing controlled experiments. Holograms are that same technology used back in the 80s to put shoddy three-dimensional-ish pictures of eagles on credit cards. (In the United States. I imagine they were other animals in other countries.) Holograms, at least when they’re well-made, encode the information needed to make a three-dimensional image in a two-dimensional surface. (Please pretend that anything made of matter is two-dimensional like that.)

Professor: '... therefore, we can explain our apparent three-dimensional universe as a hologram encoded in a two-dimensional field! You see, brothers and sisters? We were right all along!' Caption: 'Every so often, Professor Susskind sneaks into meetings of the Flat Earth Society to promote holographic cosmology.'
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 20th of May, 2019. Always glad to discuss Saturday Morning Breakfast Cereal, as you can see from these essays.

Holographic cosmology is a mathematical model for the universe. It represents the things in a space with a description of information on the boundary of this space. This seems bizarre and it won’t surprise you that key inspiration was in the strange physics of black holes. Properties of everything which falls into a black hole manifest in the event horizon, the boundary between normal space and whatever’s going on inside the black hole. The black hole is this three-dimensional volume, but in some way everything there is to say about it is the two-dimensional edge.

Dr Leonard Susskind did much to give this precise mathematical form. You didn’t think the character name was just a bit of whimsy, did you? Susskind’s work showed how the information of a particle falling into a black hole — information here meaning stuff like its position and momentum — turns into oscillations in the event horizon. The holographic principle argues this can be extended to ordinary space, the whole of the regular universe. Is this so? It’s hard to say. It’s a corner of string theory. It’s difficult to run experiments that prove very much. And we are stuck with an epistemological problem. If all the things in the universe and their interactions are equally well described as a three-dimensional volume or as a two-dimensional surface, which is “real”? It may seem intuitively obvious that we experience a three-dimensional space. But that intuition is a way we organize our understanding of our experiences. That’s not the same thing as truth.

Researcher one: 'Using simulated neural nets and quantum computing ... ' Researcher two: 'we've made a breakthrough in advanced AI. Behold.' One: 'Computer, two plus two equals five.' Computer: 'False. Two plus two equals four.' One, ready to yank the power cords out: 'Computer, two plus two equals five.' Computer: 'Correct, two plus two equals five.' Two: 'Adaptive reasoning, aka sense of self-preservation.' Duane: 'Impressive.'
Gene Weingarten, Dan Weingarten, and David Clark’s Barney and Clyde for the 22nd of May, 2019. Essays which mention some aspect of Barney and Clyde should appear at this link.

Gene Weingarten, Dan Weingarten, and David Clark’s Barney and Clyde for the 22nd is a joke about power, and how it can coerce someone out of truth. Arithmetic serves as an example of indisputable truth. It could be any deductive logic statement, or for that matter a definition. Arithmetic is great for the comic purpose needed here, though. Anyone can understand, at least the simpler statements, and work out their truth or falsity. And need very little word balloon space for it.

Caption: 'Why taco sauce? Why not steak sauce? Or Hollandaise? Barbecue?' Dingburg resident one: 'It's got to be taco sauce!' Dingburg resident two: 'Any other sauce would be sacrilegious!' Caption: 'But in an abandoned warehouse in Teaneck, New Jersey, a team of non-believers are at work!' One: 'This mix of duck sauce and salsa is just about ready!' Two: 'Piquant, yet chewy!' Caption: 'The new sauce gradually makes its way to Dingburg supermarkets, labelled Taco Sauce X-Treme.' Dingburger Three: 'After a swig, I feel all rationally ... ' Dingburger four: 'I think I just understood algebra!' Caption: 'An unexpected side effect of the new brew was a sudden ability to think logically for up to an hour after chugging a bottle.' Dingburger Five: 'Stop me before I rewrite the tax codes!'
Bill Griffith’s Zippy the Pinhead for the 25th of May, 2019. My attempts to form a quite rational and faintly linear discussion out of Zippy the Pinhead should be gathered here.

Bill Griffith’s Zippy the Pinhead for the 25th also features a quick mention of algebra as the height of rationality. Also as something difficult to understand. Most fields are hard to understand, when you truly try. But algebra works well for this writing purpose. Anyone who’d read Zippy the Pinhead has an idea of what understanding algebra would be like, the way they might not have an idea of holographic cosmology.

Two-bubble Venn diagram. The left bubble is 'Ryan Gosling', the right 'John Krasinski', and the intersection is 'Ryan Reynolds'. Caption: 'Menn Diagram'.
Teresa Logan’s Laughing Redhead Comics for the 25th of May, 2019. This one is a new tag. So there’s just the one Laughing Redhead Comics essay at this link. But that might change any day now!

Teresa Logan’s Laughing Redhead Comics for the 25th is the Venn diagram joke for the week, this one with a celebrity theme. Your choice whether the logic of the joke makes sense. Ryan Reynolds and John Krasinski are among those celebrities that I keep thinking I don’t know, but that it turns out I do know. Ryan Gosling I’m still not sure about.

And then there are a couple strips too slight even to appear in this collection. Dean Young and John Marshall’s Blondie on the 22nd did a lottery joke, with discussion of probability along the way. (And I hadn’t had a tag for ‘Blondie’ before, so that’s an addition which someday will baffle me.) Bob Shannon’s Tough Town for the 23rd mentions mathematics teaching. It’s in service of a pun.


And now I’ve had the past week covered. The next Reading the Comics post should be at this link come Sunday.

Reading the Comics, March 26, 2019: March 26, 2019 Edition


And we had another of those peculiar days where a lot of strips are on-topic enough for me to talk about.

Eric the Circle, this one by Kyle, for the 26th has a bit of mathematical physics in it. This is the kind of diagram you’ll see all the time, at least if you do the mathematics that tells you where things will be and when. The particular example is an easy problem, a thing rolling down an inclined plane. But the work done for it applies to more complicated problems. The question it’s for is, “what happens when this thing slides down the plane?” And that depends on the forces at work. There’s gravity, certainly. If there were something else it’d be labelled. Gravity’s represented with that arrow pointing straight down. That gives us the direction. The label (Eric)(g) gives us how strong this force is.

Caption: Eric on an inclined plane. It shows a circle on a right triangle, with the incline of the angle labelled 'x'. The force of gravity is pointing vertically down, labelled (Eric)(g). The force parallel to the incline is labelled (Eric)(g)sin(x); the force perpendicular to the incline is labelled (Eric)(g)cos(x).
Eric the Circle, by Kyle, for the 26th of March, 2019. Essays inspired at all by Eric the Circle are at this link.

Where the diagram gets interesting, and useful, are those dashed lines ending in arrows. One of those lines is, or at least means to be, parallel to the incline. The other is perpendicular to it. These both reflect gravity. We can represent the force of gravity as a vector. That means, we can represent the force of gravity as the sum of vectors. This is like how we can write “8” or we can write “3 + 5”, depending on what’s more useful for what we’re doing. (For example, if you wanted to work out “67 + 8”, you might be better off doing “67 + 3 + 5”.) The vector parallel to the plane and the one perpendicular to the plane add up to the original gravity vector.

The force that’s parallel to the plane is the only force that’ll actually accelerate Eric. The force perpendicular to the plane just … keeps it snug against the plane. (Well, it can produce friction. We try not to deal with that in introductory physics because it is so hard. At most we might look at whether there’s enough friction to keep Eric from starting to slide downhill.) The magnitudes of the forces parallel and perpendicular to the plane are easy enough to work out. These two forces and the original gravity can be put together into a little right triangle. It’s the same shape as, though a different size from, the right triangle made by the inclined plane plus a horizontal and a vertical axis. So that’s how the diagram knows the parallel force is the original gravity times the sine of x. And that the perpendicular force is the original gravity times the cosine of x.
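If you’d like to check the decomposition with numbers, here’s a minimal sketch in Python. The weight and angle are placeholder values I picked for illustration, not anything taken from the strip.

```python
import math

def gravity_components(weight, incline_degrees):
    """Split a weight (the force of gravity on Eric) into the parts
    parallel and perpendicular to an incline at the given angle."""
    x = math.radians(incline_degrees)
    parallel = weight * math.sin(x)        # the part that accelerates Eric down the slope
    perpendicular = weight * math.cos(x)   # the part that holds Eric against the plane
    return parallel, perpendicular

# Placeholder numbers: a 10-newton Eric on a 30-degree incline.
par, perp = gravity_components(10.0, 30.0)
print(par, perp)                  # 5.0 and about 8.66
print(math.hypot(par, perp))      # 10.0 again: the two parts add back up to the original
```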

The perpendicular force is often called the “normal” force. This is because mathematical physicists noticed we had only 2,038 other, unrelated, things called “normal”.

Rick Detorie’s One Big Happy for the 26th sees Ruthie demand to know who this Venn person was. Fair question. Mathematics often gets presented as these things that just are. That someone first thought about these things gets forgotten.

Ruthie, on the phone: 'Homework hot line? On the Same/Different page of our workbook there are two circles like this. They're called Venn diagrams and I wanna know who this Venn person is. And if I put two squares together, can we call it the Ruthie diagram, and how much money do I get for that? ... Huh? Well, I'll wait here 'til you find somebody who DOES know!'
Rick Detorie’s One Big Happy for the 26th of March, 2019. This is a rerun from … 2007, I want to say? There are two separate feeds on the web, one of current strips and one of several-years-old ones. Essays including One Big Happy, current or years-old reruns, should be at this link.

John Venn, who lived from 1834 to 1923 — he died the 4th of April, it happens — was an English mathematician and philosopher and logician and (Anglican) priest. This is not a rare combination of professions. From 1862 he was a lecturer in Moral Science at Cambridge. This included work in logic, yes. But he also worked on probability questions. Wikipedia credits his 1866 Logic Of Chance with advancing the frequentist interpretation of probability. This is one of the major schools of thought about what the “probability of an event” is. It’s the one where probability means the fraction of times an event comes up, if you could repeat the experiment as often as you like. For equally likely outcomes that amounts to listing all the things that could possibly happen, and counting how many of those are the thing you’re interested in. So when you do a problem like “what’s the probability of rolling two six-sided dice and getting a total of four?” you’re doing a frequentist probability problem.
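For the dice question the counting is small enough to do by hand, but here’s a sketch of it in Python anyway, just enumerating the 36 equally likely rolls:

```python
from itertools import product

# Every equally likely outcome of rolling two six-sided dice.
outcomes = list(product(range(1, 7), repeat=2))

# The ones we're interested in: a total of four.
favorable = [roll for roll in outcomes if sum(roll) == 4]

print(len(favorable), "out of", len(outcomes))         # 3 out of 36
print("probability:", len(favorable) / len(outcomes))  # about 0.083
```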

Venn Diagrams he presented to the world around 1880. These show the relationships between different sets. And the relationships of mathematical logic problems they represent. Venn, if my sources aren’t fibbing, didn’t take these diagrams to be a new invention of his own. He wrote of them as “Euler diagrams”. Venn diagrams, properly, need to show all the possible intersections of all the sets in play. You just mark in some way the intersections that happen to have nothing in them. Euler diagrams don’t require this overlapping. The name “Venn diagram” got attached to these pictures in the early 20th century. Euler here is Leonhard Euler, who created every symbol and notation mathematicians use for everything, and who has a different “Euler’s Theorem” that’s foundational to every field of mathematics, including the ones we don’t yet know exist. I exaggerate by 0.04 percent here.
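To see what “all the possible intersections” means in practice, here’s a small Python sketch with three made-up sets, invented purely for illustration. A proper Venn diagram has to draw a region for every combination of “inside these sets, outside the rest”, even the combinations that turn out empty; an Euler diagram is allowed to leave the empty ones out.

```python
from itertools import combinations

# Three arbitrary example sets, made up for illustration.
sets = {"A": {1, 2, 3, 4}, "B": {3, 4, 5, 6}, "C": {4, 6, 7}}
names = list(sets)

for r in range(1, len(names) + 1):
    for inside in combinations(names, r):
        # Start from everything common to the chosen sets ...
        region = set.intersection(*(sets[n] for n in inside))
        # ... then remove anything that also belongs to a set we're outside of.
        for n in names:
            if n not in inside:
                region -= sets[n]
        print("in", "+".join(inside), "only:", region or "(empty, but a Venn diagram still draws it)")
```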

Although we always start Venn diagrams off with circles, they don’t have to be circles. Circles are good shapes if you have two or three sets. It gets hard to represent all the possible intersections with four circles, though. This is when you start seeing weirder shapes. Wikipedia offers some pictures of Venn diagrams for four, five, and six sets. Meanwhile Mathworld has illustrations for seven- and eleven-set Venn diagrams. At this point, the diagrams are more for aesthetic value than to clarify anything, though. You could draw them with squares. Some people already do. Euler diagrams, particularly, are often squares, sometimes with rounded corners.

Venn had his other projects, too. His biography at St Andrews writes of his composing The Biographical History of Gonville and Caius College (Cambridge). And then he wrote another history of the whole of Cambridge University. It also mentions his skills in building machines, though it only cites one, a device for bowling cricket balls. The St Andrews biography says that in 1909 “Venn’s machine clean bowled one of [the Australian Cricket Team’s] top stars four times”. I do not know precisely what it means but I infer it to be a pretty good showing for the machine. His Wikipedia biography calls him a “passionate gardener”. Apparently the Cambridgeshire Horticultural Society awarded him prizes for his roses in July 1885 and for white carrots in September that year. And it notes that he was a supporter of votes for women.

An illustration of an abacus. Caption: 'No matter what the category, you'll usually find me in the upper 99%.'
Ashleigh Brilliant’s Pot-Shots for the 26th of March, 2019. The strip originally appeared sometime in 1979. Essays discussing anything from Pot-Shots should appear at this link.

Ashleigh Brilliant’s Pot-Shots for the 26th makes a cute and true claim about percentiles. That a person will usually be in the upper 99% of whatever’s being measured? Hard to dispute. But, measure enough things and eventually you’ll fall out of at least one of them. How many things? This is easy to calculate if we look at different things that are independent of each other. In that case we could look at 69 things before we’d expect a 50% chance of at least one not being in the upper 99%, since 0.99^{69} is just a touch under one-half.
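The 69 comes out of a quick logarithm, as in this little sketch:

```python
import math

# The chance of landing in the upper 99% of every one of n independent
# measurements is 0.99**n; find the first n where that drops below one-half.
n = math.ceil(math.log(0.5) / math.log(0.99))
print(n, 0.99 ** n)   # 69, and about 0.4998
```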

It’s getting that independence that’s hard. There’s often links between things. For example, a person’s height does not tell us much about their weight. But it does tell us something. A person six foot, ten inches tall is almost certainly not also 35 pounds, even though a person could be that size or could be that weight. A person’s scores on a reading comprehension test and their income? Those might seem like separate things. But test-taking results and wealth are certainly tied together. Age and income? Most of us have a bigger income at 46 than at 6. This is part of what makes studying populations so hard.

Snow, cat, to a kitten: '1 + 1 = 2 ... unless it's spring.' (Looking at a bird's nest with five eggs.) 'Then 1 + 1 = 5.'
T Shepherd’s Snow Sez for the 26th of March, 2019. Essays including an appearance of Snow Sez should be gathered at this link. They will be, anyway; this is a new tag.

T Shepherd’s Snow Sez for the 26th is finally a strip I can talk about briefly, for a change. Snow does a bit of arithmetic wordplay, toying with what an expression like “1 + 1” might represent.


There were a lot of mathematically-themed comic strips last week. There’ll be another essay soon, and it should appear at this link. And then there’s always Sunday, as long as I stay ahead of deadline. I am never ahead of deadline.

Six Or Arguably Four Things For Pi Day


I hope you’ll pardon me for being busy. I haven’t had the chance to read all the Pi Day comic strips yet today. But I’d be a fool to let the day pass without something around here. I confess I’m still not sure that Pi Day does anything lasting to encourage people to think more warmly of mathematics. But there is probably some benefit if people temporarily think more fondly of the subject. Certainly I’ll do more foolish things than to point at things and say, “pi, cool, huh?” this week alone.

I’ve got a couple of essays that discuss π some. The first noteworthy one is Calculating Pi Terribly, discussing a way to calculate the value of π using nothing but a needle, a tile floor, and a hilariously excessive amount of time. Or you can use an HTML5-and-JavaScript applet and slightly less time, and maybe even experimentally calculate the digits of π to two decimal places, if you get lucky.
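That needle-and-tile-floor scheme is Buffon’s needle. Here’s a minimal Monte Carlo sketch of it; the needle length, line spacing, and number of drops are arbitrary choices of mine, not anything from that essay or its applet.

```python
import math
import random

def buffon_pi(drops, needle=1.0, gap=2.0):
    """Drop a needle onto a floor ruled with parallel lines a fixed gap apart
    (needle no longer than the gap). The chance of crossing a line is
    2*needle/(pi*gap), so counting crossings gives an estimate of pi."""
    crossings = 0
    for _ in range(drops):
        center = random.uniform(0, gap / 2)      # distance from the needle's midpoint to the nearest line
        angle = random.uniform(0, math.pi / 2)   # needle's angle against the lines
        if center <= (needle / 2) * math.sin(angle):
            crossings += 1
    return 2 * needle * drops / (gap * crossings)

print(buffon_pi(1_000_000))   # lands somewhere near 3.14, on a good day
```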

Randolph dreaming about his presentation; it shows a Pie Chart: Landed On Stage, 28%. Back wall, 13%. Glancing blow off torso, 22%. Hit podium, 12%. Direct hit in face, 25%. Several pies have been thrown, hitting the stage, back wall, his torso, the podium, his face. Corner illustration: 'I turn now to the bar graph.'
Tom Toles’s Randolph Itch, 2am for the 11th of June, 2018. I’m not sure when it did first run, past that it was in 2000, but I’ve featured it at least two times before, both of those in 2015, peculiarly. So in short I have no idea how GoComics picks its reruns for this strip.

In Calculating Pi Less Terribly I showed a way to calculate π that’s … well, you see where that sentence was going. This is a method that uses an alternating series. To get π exactly correct you have to do an infinite amount of work. But if you just want π to a certain precision, all right. This will even tell you how much work you have to do. There are other formulas that will get you digits of π with less work, though, and maybe I’ll write up one of those sometime.
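I won’t swear to which alternating series that essay leaned on, but the classic example is the Leibniz series, \frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots . A sketch of it, with the alternating-series bound saying how much work a given precision costs:

```python
def leibniz_pi(terms):
    """Partial sum of pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..., plus an error bound.
    For an alternating series the error is at most the first term left out."""
    total = sum((-1) ** k / (2 * k + 1) for k in range(terms))
    return 4 * total, 4 / (2 * terms + 1)

approx, bound = leibniz_pi(500_000)
print(approx, "+/-", bound)   # within 4e-06 of pi
```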

Jack-o-lantern standing on a scale: 'Hey! I weigh exactly 3.14 pounds!' Caption: 'Pumpkin Pi'.
Dave Whamond’s Reality Check for the 27th of October, 2018. Does the weight count if the jack-o-lantern is wearing sneakers?

And the last of the relevant essays I’ve already written is an A To Z essay about normal numbers. I don’t know whether π is a normal number. No human, to the best of my knowledge, does. Well, anyone with an opinion on the matter would likely say, of course it’s normal. There are fantastic reasons to think it is. But none of those amount to a proof that it is.

[ Pisces ] Guy at bar talking to Pi: 'Wow, so you were born on March 14th at 1:59, 26 seconds? What're the odds?'
Scott Hilburn’s The Argyle Sweater for the 14th of March, 2018. Also a free probability question, if you’re going to assume that every second of the year is equally likely to be the time of birth.

That’s my three items. After that I’d like to share … I don’t know whether to classify this as one or three pieces. They’re YouTube videos which a couple months ago everybody in the world was asking me if I’d seen. Now it’s your turn. I apologize if you too got this, a couple months ago, but don’t worry. You can tell people you watched and not actually do it. I’ll alibi you.

Pi figure, wearing glasses, reading The Neverending Story.
Mark Parisi’s Off The Mark for the 14th of March, 2018. Really the book seems a little short for that.

It’s a string of videos posted on YouTube by 3Blue1Brown. The first lays out the matter with a neat physics problem. Imagine you have an impenetrable wall, a frictionless floor, and two blocks. One starts at rest. The other is sliding towards the first block and the wall. How many times will one thing collide with another? That is, how many collisions will there be altogether, whether of one block with the other block or of a block with the wall?

[ How ancient mathematicians amused themselves, AKA how to celebrate Pi Day today; third annual Pi-Eating Contest. ] Emcee: 'And HERE he is, our defending champ, that father of conic sections --- ARCHIMEDES!' They're all eating cakes shaped like pi.
Michael Cavna’s Warped for the 14th of March, 2018. Yes, but have you seen Pythagoras and his golden thigh?

The answer seems like it should depend on many things. What it actually depends on is the ratio of the masses of the two blocks. If they’re the same mass, then there are three collisions. You can probably work that sequence out in your head and convince yourself it’s right. If the outer block has 100 times the mass of the inner block? There’ll be 31 collisions before all the hits are done. You might work that out by hand. I did not. You will not work out what happens if the outer block has 10,000 times the mass of the inner block. That’ll be 314 collisions. If the outer block has 1,000,000 times the mass of the inner block? 3,141 collisions. You see where this is going.
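If you’d rather let a computer do the bookkeeping before watching the videos, here’s a sketch that just brute-forces the count. It uses nothing but the standard one-dimensional elastic-collision formulas (conservation of energy and momentum) and a sign flip at the wall; it is not the clever method the videos build to.

```python
def count_collisions(mass_ratio):
    """Count collisions for a small block at rest between a wall and a big
    block (mass_ratio times heavier) sliding in toward it. Rightward is
    positive; the wall sits to the left of the small block."""
    m_small, m_big = 1.0, float(mass_ratio)
    v_small, v_big = 0.0, -1.0   # the big block slides in from the right
    collisions = 0
    # Everything is over once both blocks move away from the wall and
    # the small one can no longer catch the big one.
    while not (0 <= v_small <= v_big):
        if v_small < 0:
            v_small = -v_small   # small block bounces off the wall
        else:
            # One-dimensional elastic collision between the two blocks.
            v_small, v_big = (
                ((m_small - m_big) * v_small + 2 * m_big * v_big) / (m_small + m_big),
                ((m_big - m_small) * v_big + 2 * m_small * v_small) / (m_small + m_big),
            )
        collisions += 1
    return collisions

print([count_collisions(100 ** n) for n in range(4)])   # [3, 31, 314, 3141]
```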

[ To Stephen Hawking, Thanks for making the Universe a little easier for the rest of us to understand ] Jay: 'I suppose it's only appropriate that he'd go on Pi Day.' Roy: 'Not to mention, Einstein's birthday.' Katherine: 'I'll bet they're off in some far reach of the universe right now playing backgammon.'
John Zakour and Scott Roberts’s Working Daze for the 15th of March, 2018. No, you should never read the comments, but here, really, don’t read the comments.

The second video in the sequence explains why the digits of π turn up in this. And shows how to calculate this. You could, in principle, do this all using Newtonian mechanics. You will not live long enough to finish that, though.

Pie chart. Most of the chart: 'likes pie'. Small wedge of the chart: 'likes charts'.
Daniel Beyer’s Long Story Short for the 14th of March, 2015.

The video shows a way that saves an incredible load of work. But you save on that tedious labor by having to think harder. Part of it is making use of conservation laws, that energy and linear momentum are conserved in collisions. But part is by recasting the problem. Recast it into “phase space”. This uses points in an abstract space to represent different configurations of a system. Like, how fast blocks are moving, and in what direction. The recasting of the problem turns something that’s impossibly tedious into something that’s merely … well, it’s still a bit tedious. But it’s much less hard work. And it’s a good chance to show off you remember the Inscribed Angle Theorem. You do remember the Inscribed Angle Theorem, don’t you? The video will catch you up. It’s a good show of how phase spaces can make physics problems so much more manageable.

'Happy Pi Day.' 'Mmm. I love apple pie.' 'Pi day, not Pie Day. Pi ... you know ... 3.14 ... March 14th. Get it?' 'Today is a pie-eating holiday?' 'Sort of. They do celebrate it with pie, but it's mostly about pi.' 'I don't understand what that kid says half the time.'
John Hambrock’s The Brilliant Mind of Edison Lee for the 14th of March, 2016. The strip is like this a lot.

The third video recasts the problem yet again. In this form, it’s about rays of light reflecting between mirrors. And this is a great recasting. That blocks bouncing off each other and walls should have anything to do with light hitting mirrors seems ridiculous. But set out your phase space, and look hard at what collisions and reflections are like, and you see the resemblance. The sort of trick used to make counting reflections easy turns up often in phase spaces. It also turns up in physics problems on toruses, doughnut shapes. You might ask when do we ever do anything on a doughnut shape. Well, real physical doughnuts, not so much. But problems where there are two independent quantities, and both quantities are periodic? There’s a torus lurking in there. There might be a phase space using that shape, and making your life easier by doing so.

Anthropomorphic numerals at a cocktail party. 2: 'You're greater than me. I could listen to you forever.' Pi: 'Aw, shucks. I'm blushing.' (It is.) Caption: 'Humble Pi.'
Scott Hilburn’s The Argyle Sweater for the 14th of March, 2017. And while the strip is true, arguably, 2 goes on forever also; it’s just not very interesting how it does.

That’s my promised four or maybe six items. Pardon, please, now, as I do need to get back to reading the comics.

Reading the Comics, December 22, 2018: Christmas Break Edition


There were just enough mathematically-themed comic strips last week for me to make two posts out of it. This current week? Is looking much slower, at least as of Wednesday night. But that’s a problem for me to worry about on Sunday.

Eric the Circle for the 20th, this one by Griffinetsabine, mentions a couple of shapes. That’s enough for me, at least on a slow comics week. There is a fictional tradition of X marking the spot. It can be particularly credited to Robert Louis Stevenson’s Treasure Island. Any symbol could be used to note a special place on maps, certainly. Many maps are loaded with a host of different symbols to convey different information. Circles and crosses have the advantage of being easy to draw and difficult to confuse for one another. Squares, triangles, and stars are good too.

Eric on a treasure hunt: Eric asking, 'Wait ... triangle marks the spot? No ... rhombus marks the spot? Dodecahedron marks the spot?' A square sighs; an X coughs, 'ahem!'
Eric the Circle for the 20th of December, 2018, this one by Griffinetsabine. This and other essays featuring Eric the Circle are at this link.

Bill Whitehead’s Free Range for the 22nd spoofs Wheel of Fortune with “theoretical mathematics”. Making a game out of filling in parts of a mathematical expression isn’t ridiculous, although it is rather niche. I don’t see how the revealed string of mathematical expressions builds to a coherent piece, but perhaps a few further pieces would help.

Wheel Of Theoretical Mathematics. Contestant: 'I'd like to buy a sqrt(x).' On the board are several mathematical expressions, including 'dx = sqrt(pi)', 'a^2 + b^2 = (a + b)', and 'dy/dx = x^4 - (1 - x^2)^4'.
Bill Whitehead’s Free Range for the 22nd of December, 2018. Appearances by Free Range should be at this link.

The parts shown are all legitimate enough expressions. Well, like a^2 + b^2 = (a + b) is only true for some specific numbers ‘a’ and ‘b’, but you can find solutions. -b \pm \sqrt{b^2 - x^2y^2} is just an expression, not picking out any particular values of ‘b’ or ‘x’ or ‘y’ as interesting. But in conjunction with a^2 + b^2 = (a + b) or other expressions there might be something useful. On the second row is a graph, highlighting a region underneath a curve (and above the x-axis) between two vertical lines. This is often the sort of thing looked at in calculus. It also turns up in probability, as the area under a curve like this can show the chance that an experiment will turn up something in a range of values. And \frac{dy}{dx} = x^4 - \left(1 - x^2\right)^4 is a straightforward differential equation. Its solution is a family of similar-looking polynomials.
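If you want to see that family explicitly, a quick check with SymPy (assuming you have it installed) will do the integration:

```python
import sympy as sp

x = sp.symbols('x')

# The board's differential equation is dy/dx = x^4 - (1 - x^2)^4, so every
# solution is an antiderivative of the right-hand side, up to a constant.
rhs = x**4 - (1 - x**2)**4
print(sp.expand(sp.integrate(rhs, x)))
# -x**9/9 + 4*x**7/7 - x**5 + 4*x**3/3 - x   (add whatever constant you like)
```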

Scientist guy runs in to the Lucky Cow restaurant. The scientist begs of cashier Neil, the left half of Schrodinger's equation equals ?!? Neil thinks about it some and then provides the answer, earning the scientist's gratitude and the admiration of his coworkers. Later, out in back, Neil pays off the scientist.
Mark Pett’s Lucky Cow for the 22nd of December, 2018. This and other discussions inspired by Lucky Cow should be at this link.

Mark Pett’s Lucky Cow for the 22nd has run before. I’ve even made it the title strip for a Reading the Comics post back in 2014. So it’s probably time to drop this from my regular Reading the Comics reporting. The physicist comes running in with the left half of the time-dependent Schrödinger Equation. This is all over quantum mechanics. In this form, quantum mechanics contains information about how a system behaves by putting it into a function named \psi . Its value depends on space (‘x’). It can also depend on time (‘t’). The physicist pretends not to be able to complete this. Neil arranges to give the answer.

Schrödinger’s Equation looks very much like a diffusion problem. Normal diffusion problems don’t have that \imath which appears in the part of Neil’s answer. But this form of equation turns up a lot. If you have something that acts like a fluid — and heat counts — then a diffusion problem is likely important in understanding it.
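The strip doesn’t write the equations out in full, but the resemblance is easy to see in the standard textbook forms. For a free particle the time-dependent Schrödinger Equation can be rearranged to

\frac{\partial \psi}{\partial t} = \frac{\imath \hbar}{2m} \frac{\partial^2 \psi}{\partial x^2}

while an ordinary diffusion (heat) equation reads

\frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2} .

Same shape; the difference is that the Schrödinger coefficient is imaginary, which is exactly that \imath .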

And, yes, the setup reminds me of a mathematical joke that I only encounter in lists of mathematics jokes. That one I told the last time this strip came up in the rotation. You might chuckle, or at least be convinced that it is a correctly formed joke.


All of the Reading the Comics posts should be at this link. And I have finished the alphabet in my Fall 2018 Mathematics A To Z glossary. There should be a few postscript thoughts to come this week, though.

Reading the Comics, August 4, 2018: August 4, 2018 Edition


And finally, at last, there are a couple of comics left over from last week, and they all ran the same day. If I hadn’t gone on forever about negative Kelvin temperatures I might have included them in the previous essay. That’s all right. These are strips I expect to need relatively short discussions to explore. Watch now as I put out 2,400 words explaining Wavehead misinterpreting the teacher’s question.

Dave Whamond’s Reality Check for the 4th is proof that the time I spent last week writing about which is better, large numbers or small, wasn’t wasted. There I wrote about four versus five for Beetle Bailey. Here it’s the same joke, but with compound words. Well, that’s easy to take care of.

[ Caption: Most people have a forehead --- Dave has a Five-Head. ] (Dave has an extremely tall head with lots of space between his eyebrows and his hair.) Squirrel in the corner: 'He'll need a 12-gallon hat.'
Dave Whamond’s Reality Check for the 4th of August, 2018. I’m sure it’s a coincidence that the tall-headed person shares a name with the cartoonist.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 4th is driving me slightly crazy. The equation on the board looks like an electrostatics problem to me. The ‘E’ is a common enough symbol for the strength of an electric field. And the funny-looking K’s look to me like the Greek kappa. This often represents the dielectric constant. That measures how well an electric field can move through a material. The upside-down triangles, known in the trade as del (or nabla), describe — well, that’s getting complicated. By themselves, they describe measuring “how much the thing right after this changes in different directions”. When there’s a x symbol between the del and the thing, it measures something called the “curl”. This roughly measures how much the field inspires things caught up in it to turn. (Don’t try passing this off to your thesis defense committee.) The del x del x E describes the curl of the curl of E. Oh, I don’t like visualizing that. I don’t blame you if you don’t want to either.
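If you do have to think about the curl of the curl, there’s a standard vector-calculus identity that trades it for slightly friendlier pieces:

\nabla \times \left( \nabla \times \vec{E} \right) = \nabla \left( \nabla \cdot \vec{E} \right) - \nabla^2 \vec{E}

That is, the gradient of the divergence, minus the Laplacian. It’s the identity used in, for example, turning Maxwell’s equations into a wave equation, which is one common place the curl of the curl turns up.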

Professor Ridley: 'Imagine an infinitely thin rod. Visualize it but don't laugh at it. I know it's difficult. Now, the following equations hold for ... ' [ Caption: Professor Ridley's cry for help goes unnoticed. ]
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 4th of August, 2018. Really not clear what the cry for help would be about. Just treat the rod as a limiting case of an enormous number of small spheres placed end to end and you’re done.

Anyway. So all this looks like it’s some problem about a rod inside an electric field. Fine enough. What I don’t know and can’t work out is what the problem is studying exactly. So I can’t tell you whether the equation, so far as we see it, is legitimately something to see in class. Envisioning a rod that’s infinitely thin is a common enough mathematical trick, though. Three-dimensional objects are hard to deal with. They have edges. These are fussy to deal with. Making sure the interior, the boundary, and the exterior match up in a logically consistent way is tedious. But a wire? A plane? A single point? That’s easy. They don’t have an interior. You don’t have to match up the complicated stuff.

For real world problems, yeah, you have to deal with the interior. Or you have to work out reasons why the interiors aren’t important in your problem. And it can be that your object is so small compared to the space it has to work in that the fact it’s not infinitely thin or flat or smooth just doesn’t matter. Mathematical models, such as give us equations, are a blend of describing what really is there and what we can work with.

Lotto official looking over a burnt, shattered check: 'What are the ODDS?! First he wins the lottery and then he gets struck by lightning!'
Mike Shiell’s The Wandering Melon for the 4th of August, 2018. Still, impressive watchband that it’s stood up to all that trouble.

Mike Shiell’s The Wandering Melon for the 4th is a probability joke, about two events that nobody’s likely to experience. The chance any individual will win a lottery is tiny, but enough people play them that someone wins just about any given week. The chance any individual will get struck by lightning is tiny too. But it happens to people. The combination? Well, that’s obviously impossible.

In July of 2015, Peter McCathie had this happen. He survived a lightning strike first. And then won the Atlantic Lotto 6/49. These were years apart, but the chance of both happening the same day, or the same week? … Well, the world is vast and complicated. Unlikely things will happen.
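Just to put rough numbers on the “obviously impossible” part: the jackpot odds for a 6-of-49 lottery come straight from a binomial coefficient, and if you’re willing to assume some figure for lightning (the 1-in-15,000 below is purely a placeholder of mine, not a researched statistic), then independence would let you multiply:

```python
import math

# One winning ticket among all the ways of choosing 6 numbers from 49.
p_lottery = 1 / math.comb(49, 6)     # 1 in 13,983,816

# Placeholder lifetime lightning-strike chance; real estimates vary widely.
p_lightning = 1 / 15_000

# If the two were independent, the chance of both happening to one person:
print(p_lottery * p_lightning)       # roughly 5e-12, about one in 200 billion
```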


And that’s all that I have for the past week. Come Sunday I should have my next Reading the Comics post, and you can find it and other essays at this link. Other essays that mention Reality Check are at this link. The many other essays which talk about Saturday Morning Breakfast Cereal are at this link. And other essays about The Wandering Melon are at this link. Thanks.

Without Tipping My Hand To My Plans For The Next Couple Weeks


I wanted to get this out of the way before I did it:

And the supplemental reading:


Why Stuff Can Orbit, featuring a dazed-looking coati (it's a raccoon-like creature from Latin America) and a starry background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Reading the Comics, November 4, 2017: Slow, Small Week Edition


It was a slow week for mathematically-themed comic strips. What I have are meager examples. Small topics to discuss. The end of the week didn’t have anything even under loose standards of being on-topic. Which is fine, since I lost an afternoon of prep time to thunderstorms that rolled through town and knocked out power for hours. Who saw that coming? … If I had, I’d have written more the day before.

Mac King and Bill King’s Magic in a Minute for the 29th of October looks like a word problem. Well, it is a word problem. It looks like a problem about extrapolating a thing (price) from another thing (quantity). Well, it is an extrapolation problem. The fun is in figuring out what quantities are relevant. Now I’ve spoiled the puzzle by explaining it all so.

Olivia Walch’s Imogen Quest for the 30th doesn’t say it’s about a mathematics textbook. But it’s got to be. What other kind of textbook will have at least 28 questions in a section and only give answers to the odd-numbered problems in back? You never see that in your social studies text.

Eric the Circle for the 30th, this one by Dennill, tests how slow a week this was. I guess there’s a geometry joke in Jane Austen? I’ll trust my literate readers to tell me. My doing the world’s most casual search suggests there’s no mention of triangles in Pride and Prejudice. The previous might be the most ridiculously mathematics-nerdy thing I have written in a long while.

Tony Murphy’s It’s All About You for the 31st does some advanced-mathematics name-dropping. In so doing, it’s earned a spot taped to the door of two people in any mathematics department with more than 24 professors across the country. Or will, when they hear there was a gap unification theory joke in the comics. I’m not sure whether Murphy was thinking of anything particular in naming the subject “gap unification theory”. It sounds like a field of mathematical study. But as far as I can tell there’s just one (1) paper written that even says “gap unification theory”. It’s in partition theory. Partition theory is a rich and developed field, which seems surprising considering it’s about breaking up sets of the counting numbers into smaller sets. It seems like a time-waster game. But the game sneaks into everything, so the field turns out to be important. Gap unification, in the paper I can find, is about studying the gaps between these smaller sets.

There’s also a “band-gap unification” problem. I could accept this name being shortened to “gap unification” by people who have to say its name a lot. It’s about the physics of semiconductors, or the chemistry of semiconductors, as you like. The physics or chemistry of them is governed by the energies that electrons can have. Some of these energies are precise levels. Some of these energies are bands, continuums of possible values. When will bands converge? When will they not? Ask a materials science person. Going to say that’s not mathematics? Don’t go looking at the papers.

Whether it’s partition theory or materials science, it seems like a weird topic. Maybe Murphy just put together words that sounded mathematical. Maybe he has a friend in the field.

Bill Amend’s FoxTrot Classics for the 1st of November is aiming to be taped up to the high school teacher’s door. It’s easy to show that the square root of two is irrational. It takes a bit longer to show the square root of three is. It turns out all the counting numbers are either perfect squares — 1, 4, 9, 16, and so on — or else have irrational square roots. There’s no whole number with a square root of, like, something-and-three-quarters or something-and-85-117ths. You can show that, easily if tediously, for any particular whole number. What’s it look like to show for all the whole numbers that aren’t perfect squares already? (This strip originally ran the 8th of November, 2006.)
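Here’s one way the general argument can go, for any counting number n . Suppose \sqrt{n} = \frac{p}{q} for whole numbers p and q with no common factor. Then p^2 = n q^2 . Any prime that divides q must divide p^2 , and so must divide p , contradicting their having no common factor, unless q has no prime factors at all; that is, unless q = 1 . So \sqrt{n} is either a whole number or not rational at all. And when n isn’t a perfect square, it can’t be a whole number.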

Guy Gilchrist’s Nancy for the 1st does an alphabet soup joke, so like I said, it’s been a slow week around here.

John Zakour and Scott Roberts’s Maria’s Day for the 2nd is really just mathematics being declared hated, so like I said, it’s been a slow week around here.