My All 2020 Mathematics A to Z: Unitary Matrix


I assume I disappointed Mr Wu, of the Singapore Maths Tuition blog, last week when I passed on a topic he suggested, only to unintentionally rewrite a good enough essay. I hope to make it up this week with a piece of linear algebra.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x (division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Unitary Matrix.

A Unitary Matrix — note the article; there is not a singular the Unitary Matrix — starts with a matrix. This is an ordered collection of scalars. The scalars we call elements. I can’t think of a time I ever saw a matrix represented except as a rectangular grid of elements, or as a capital letter for the name of a matrix. Or a block inside a matrix. In principle the elements can be anything. In practice, they’re almost always either real numbers or complex numbers. To speak of Unitary Matrixes invokes complex-valued numbers. If a matrix that would be Unitary has only real-valued elements, we call that an Orthogonal Matrix. It’s not wrong to call an Orthogonal matrix “Unitary”. It’s like pointing to a known square, though, and calling it a parallelogram. Your audience will grant that’s true. But they’ll wonder what you’re getting at, unless you’re talking about a bunch of parallelograms and some of them happen to be squares.

As with polygons, though, there are many names for particular kinds of matrices. The flurry of them settles down on the Intro to Linear Algebra student and it takes three or four courses before most of them feel like familiar names. I will try to keep the flurry clear. First, we’re talking about square matrices, ones with the same number of rows as columns.

Start with any old square matrix. Give it the name U because you see where this is going. There are a couple of new matrices we can derive from it. One of them is the complex conjugate. This is the matrix you get by taking the complex conjugate of every term. So, if one element is 3 + 4\imath , in the complex conjugate, that element would be 3 - 4\imath . Reverse the plus or minus sign of the imaginary component. The shorthand for “the complex conjugate to matrix U” is U^* . Also we’ll often just say “the conjugate”, taking the “complex” part as implied.

Start back with any old square matrix, again called U. Another thing you can do with it is take the transposition. This matrix, U-transpose, you get by keeping the order of elements but changing rows and columns. That is, the elements in the first row become the elements in the first column. The elements in the second row become the elements in the second column. Third row becomes the third column, and so on. The diagonal — first row, first column; second row, second column; third row, third column; and so on — stays where it was. The shorthand for “the transposition of U” is U^T .

You can chain these together. If you start with U and take both its complex-conjugate and its transposition, you get the adjoint. We write that with a little dagger: U^{\dagger} = (U^*)^T . For a wonder, as matrices go, it doesn’t matter whether you take the transpose or the conjugate first. It’s the same U^{\dagger} = (U^T)^* . You may ask how people writing this out by hand never mistake U^T for U^{\dagger} . This is a good question and I hope to have an answer someday. (I would write it as U^{A} in my notes.)

And the last thing you can maybe do with a square matrix is take its inverse. This is like taking the reciprocal of a number. When you multiply a matrix by its inverse, you get the Identity Matrix. Not every matrix has an inverse, though. It’s worse than real numbers, where only zero doesn’t have a reciprocal. You can have a matrix that isn’t all zeroes and that doesn’t have an inverse. This is part of why linear algebra mathematicians command the big money. But if a matrix U has an inverse, we write that inverse as U^{-1} .

The Identity Matrix is one of a family of square matrices. Every element in an identity matrix is zero, except on the diagonal. That is, the element at row one, column one, is the number 1. The element at row two, column two is also the number 1. Same with row three, column three: another one. And so on. This is the “identity” matrix because it works like the multiplicative identity. Pick any matrix you like, and multiply it by the identity matrix; you get the original matrix right back. We use the name I for an identity matrix. If we have to be clear how many rows and columns the matrix has, we write that as a subscript: I_2 or I_3 or I_N or so on.

So this, finally, lets me say what a Unitary Matrix is. It’s any square matrix U where the adjoint, U^{\dagger} , is the same matrix as the inverse, U^{-1} . It’s wonderful to learn you have a Unitary Matrix. Not just because, most of the time, finding the inverse of a matrix is a long and tedious procedure. Here? You have to write the elements in a different order and change the plus-or-minus sign on the imaginary numbers. The only way it would be easier is if you had only real numbers, and didn’t have to take the conjugates.
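If you’d like to see that happen numerically, here’s a little Python sketch using NumPy, with a 2-by-2 unitary matrix I made up for the occasion (a plane rotation times an overall complex phase):

    import numpy as np

    # A made-up unitary matrix: a plane rotation times a complex phase.
    theta = 0.7
    U = np.exp(0.3j) * np.array([[np.cos(theta), -np.sin(theta)],
                                 [np.sin(theta),  np.cos(theta)]])

    adjoint = U.conj().T    # conjugate and transpose, in either order
    print(np.allclose(adjoint, np.linalg.inv(U)))   # True: the adjoint is the inverse
    print(np.allclose(U @ adjoint, np.eye(2)))      # True: their product is the identity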

That’s all a nice heap of terms. What makes any of them important, other than so Intro to Linear Algebra professors can test their students?

Well, you know mathematicians. If we like something like this, it’s usually because it holds out the prospect of turning hard problems into easier ones. So it is. Start out with any old matrix. Call it A. Then there exist some unitary matrixes, call them U and V. And their product does something wonderful: UAV is a “diagonal” matrix. A diagonal matrix has zeroes for every element except the diagonal ones. That is, row one, column one; row two, column two; row three, column three; and so on. The elements that trace a path from the upper-left to the lower-right corner of the matrix. (We have nothing to do with the diagonal from the upper-right to the lower-left.) Everything we might do with matrices is easier on a diagonal matrix. So we process our matrix A into this diagonal matrix D. Process it by whatever the heck we’re doing. If we then multiply this by the inverses of U and V? If we calculate U^{-1}DV^{-1} ? We get whatever our process would have given us had we done it to A. And, since U and V are unitary matrices, it’s easy to find these inverses. Wonderful!
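What I’ve described is the singular value decomposition, and you can watch it happen in a few lines of Python. A sketch, with a random complex matrix standing in for A; note that NumPy’s svd hands back the adjoints of the U and V described above, so I flip them:

    import numpy as np

    rng = np.random.default_rng(42)
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

    # NumPy writes A as W @ diag(s) @ Vh, with W and Vh unitary.
    # So the U and V of the essay are their adjoints:
    W, s, Vh = np.linalg.svd(A)
    U = W.conj().T
    V = Vh.conj().T

    print(np.allclose(U @ A @ V, np.diag(s)))   # True: UAV is diagonal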

Also this sounds like I just said Unitary Matrixes are great because they solve a problem you never heard of before.

The 20th Century’s first great use for Unitary Matrixes, and I imagine the impulse for Mr Wu’s suggestion, was quantum mechanics. (A later use would be data compression.) Unitary Matrixes help us calculate how quantum systems evolve. This should be a little easier to understand if I use a simple physics problem as demonstration.

So imagine three blocks, all the same mass. They’re connected in a row, left to right. There are two springs, one between the left and the center block, one between the center and the right block. The springs have the same strength. The blocks can only move left-to-right. But, within those bounds, you can do anything you like with the blocks. Move them wherever you like and let go. Let them go with a kick moving to the left or the right. The only restraint is they can’t pass through one another; you can’t slide the center block to the right of the right block.

This is not quantum mechanics, by the way. But it’s not far, either. You can turn this into a fine toy of a molecule. For now, though, think of it as a toy. What can you do with it?

A bunch of things, but there are two really distinct ways these blocks can move. These are the ways the blocks would move if you just hit them with some energy and let the system do what felt natural. One is to have the center block stay right where it is, while the left and right blocks swing out and in. We know they’ll swing symmetrically, the left block going as far to the left as the right block goes to the right. But all these symmetric oscillations look about the same. They’re one mode.

The other is … not quite antisymmetric. In this mode, the center block moves in one direction and the outer blocks move in the other, just enough to keep momentum conserved. Eventually the center block switches direction and swings the other way. But the outer blocks switch direction and swing the other way too. If you’re having trouble imagining this, imagine looking at it from the outer blocks’ point of view. To them, it’s just the center block wobbling back and forth. That’s the other mode.

And it turns out? It doesn’t matter how you started these blocks moving. The movement looks like a combination of the symmetric and the not-quite-antisymmetric modes. So if you know how the symmetric mode evolves, and how the not-quite-antisymmetric mode evolves? Then you know how every possible arrangement of this system evolves.
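If you want to see the modes fall out of the mathematics, they’re the eigenvectors of the system’s stiffness matrix. A Python sketch, with masses and spring constants all set to 1 for convenience:

    import numpy as np

    # Three equal blocks, two equal springs; Newton's laws give x'' = -K x.
    K = np.array([[ 1., -1.,  0.],
                  [-1.,  2., -1.],
                  [ 0., -1.,  1.]])

    freq_squared, modes = np.linalg.eigh(K)
    print(np.round(freq_squared, 6))   # [0. 1. 3.]: the squared mode frequencies
    print(np.round(modes, 3))
    # The 0 column is all blocks sliding together, which never oscillates.
    # The 1 column is like (1, 0, -1): outer blocks swinging, center still.
    # The 3 column is like (1, -2, 1): the center block against the outer two.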

So here’s where we get to quantum mechanics. Suppose we know the quantum mechanics description of a system at some time. This we can do as a vector. And we know the Hamiltonian, the description of all the potential and kinetic energy, for how the system evolves. The evolution in time of our quantum mechanics description we can see as a unitary matrix multiplied by this vector.

The Hamiltonian, by itself, won’t (normally) be a Unitary Matrix. It gets the boring name H. It’ll be some complicated messy thing. But perhaps we can find a Unitary Matrix U, so that UHU^{\dagger} is a diagonal matrix. And then that’s great. The original H is hard to work with. The diagonalized version? That one we can almost always work with. And then we can go from solutions on the diagonalized version back to solutions on the original. (If the function \psi describes the evolution of UHU^{\dagger} , then U^{\dagger}\psi U describes the evolution of H .) The work that U (and U^{\dagger} ) does to H is basically what we did with that three-block, two-spring model. It’s picking out the modes, and letting us figure out their behavior. Then put that together to work out the behavior of what we’re interested in.
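Here’s that diagonalization done numerically, in a sketch where a random Hermitian matrix stands in for the real problem’s H:

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    H = (M + M.conj().T) / 2          # any Hermitian matrix serves as a stand-in

    energies, V = np.linalg.eigh(H)   # V's columns are eigenvectors; V is unitary
    U = V.conj().T

    D = U @ H @ U.conj().T            # this is U H U-dagger
    print(np.allclose(D, np.diag(energies)))   # True: it's diagonal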

There are other uses, besides time-evolution. For instance, an important part of quantum mechanics and thermodynamics is that we can swap particles of the same type. Like, there’s no telling an electron that’s on your nose from an electron that’s in one of the reflective mirrors the Apollo astronauts left on the Moon. If they swapped positions, somehow, we wouldn’t know. It’s important for calculating things like entropy that we consider this possibility. Two particles swapping positions is a permutation. We can describe that as multiplying the vector that describes what every electron on the Earth and Moon is doing by a Unitary Matrix. Here it’s a matrix that does nothing but swap the descriptions of these two electrons. I concede this doesn’t sound thrilling. But anything that goes into calculating entropy is first-rank important.
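For a toy version of that swap, pretend the state has only three components worth tracking. The matrix trading the first two components is unitary, as a couple lines of Python will confirm:

    import numpy as np

    # Swap the first two components of a state; leave the third alone.
    P = np.array([[0., 1., 0.],
                  [1., 0., 0.],
                  [0., 0., 1.]])

    print(P @ np.array([0.2, 0.5, 0.3]))              # the first two entries trade places
    print(np.allclose(P.conj().T, np.linalg.inv(P)))  # True: the swap is unitary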

As with time-evolution and with permutation, though, any symmetry matches a Unitary Matrix. This includes obvious things like reflecting across a plane. But it also covers, like, being displaced a set distance. And some outright obscure symmetries too, such as the phase of the state function \Psi . I don’t have a good way to describe what this is, physically; we can’t observe it directly. This symmetry, though, manifests as the conservation of electric charge, a thing we rather like.

This, then, is the sort of problem that draws Unitary Matrixes to our attention.


Thank you for reading. This and all of my 2020 A-to-Z essays should be at this link. All the essays from every A-to-Z series should be at this link. Next week, I hope to have something to say for the letter V.

My All 2020 Mathematics A to Z: Renormalization


I have again Elke Stangl, author of elkemental Force, to thank for the subject this week. Again, Stangl’s is a blog of wide-ranging interests. And it’s got more poetry this week, this time haikus about the Dirac delta function.

I also have Kerson Huang, of the Massachusetts Institute of Technology and of Nanyang Technological University, to thank for much insight into the week’s subject. Huang published this A Critical History of Renormalization, which gave me much to think about. It’s likely a paper that would help anyone hoping to know the history of the technique better.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x (division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Renormalization.

There is a mathematical model, the Ising Model, for how magnets work. The model has the simplicity of a toy model given by a professor (Wilhelm Lenz) to his grad student (Ernst Ising). Suppose matter is a uniform, uniformly-spaced grid. At each point on the grid we have either a bit of magnetism pointed up (value +1) or down (value -1). It is a nearest-neighbor model. Each point interacts with its nearest neighbors and none of the other points. For a one-dimensional grid this is easy. It’s the stuff of thermodynamics homework for physics majors. They don’t understand it, because you need the hyperbolic trigonometric functions. But they could. For two dimensions … it’s hard. But doable. And interesting. It describes important things like phase changes. The way that you can take a perfectly good strong magnet and heat it up until it’s an iron goo, then cool it down to being a strong magnet again.
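If you’d like evidence the one-dimensional model is that tractable, here’s a brute-force check in Python, with made-up coupling and temperature. For a free-ended chain of N spins with no external field, the partition function works out to 2^N \cosh^{N-1}(\beta J) , hyperbolic cosine and all:

    import itertools, math

    # Brute-force the 1D Ising partition function for a small chain,
    # then compare with the closed form (free ends, no external field).
    N, J, beta = 8, 1.0, 0.7
    Z = sum(math.exp(beta * J * sum(s[i] * s[i + 1] for i in range(N - 1)))
            for s in itertools.product([-1, 1], repeat=N))
    print(Z)                                          # the two numbers agree
    print(2**N * math.cosh(beta * J)**(N - 1))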

For such a simple model it works well. A lot of the solids we find interesting are crystals, or are almost crystals. These are molecules arranged in a grid. So that part of the model is fine. They do interact, foremost, with their nearest neighbors. But not exclusively. In principle, every molecule in a crystal interacts with every other molecule. Can we account for this? Can we make a better model?

Yes, many ways. Here’s one. It’s designed for a square grid, the kind you get by looking at the intersections on a normal piece of graph paper. Each point is in a row and a column. The rows are a distance ‘a’ apart. So are the columns.

Now draw a new grid, on top of the old. Do it by grouping together two-by-two blocks of the original. Draw new rows and columns through the centers of these new blocks. Put at the new intersections a bit of magnetism. Its value is the mean of the four original points around it. So, could be 1, could be -1, could be 0, could be ½, could be -½. That’s more options than the two we started with. But look at what we have. It’s still an Ising-like model, with interactions between nearest-neighbors. There’s more choices for what value each point can have. And the grid spacing is now 2a instead of a. But it all looks pretty similar.
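If it helps to see the blocking done explicitly, here’s a sketch in Python, with a small random grid of spins:

    import numpy as np

    rng = np.random.default_rng(1)
    spins = rng.choice([-1, 1], size=(8, 8))   # an 8x8 grid of up/down magnetism

    # Group the grid into 2x2 blocks; the new point takes each block's mean.
    coarse = spins.reshape(4, 2, 4, 2).mean(axis=(1, 3))
    print(coarse)   # a 4x4 grid whose entries come from {-1, -1/2, 0, 1/2, 1}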

And now the great insight, that we can trace to Leo P Kadanoff in 1966. What if we relabel the distance between grid points? We called it 2a before. Call it a, now, again. What’s important that’s different from the Ising model we started with?

There’s the not-negligible point that there’s five different values a point can have, instead of two. But otherwise? In the operations we do, not much is different. How about in what it models? And there it’s interesting. Think of the original grid points. In the original scaling, they interacted only with units one original-row or one original-column away. Now? Their average interacts with the average of grid points that were as far as three original-rows or three original-columns away. It’s a small change. But it’s closer to reflecting the reality of every molecule interacting with every other molecule.

You know what happens when mathematicians get one good trick. We figure what happens if we do it again. Take the rescaled grid, the one that represents two-by-two blocks of the original. Rescale it again, making two-by-two blocks of these two-by-two blocks. Do the same rules about setting the center points as a new grid. And then re-scaling. What we have now are blocks that represent averages of four-by-four blocks of the original. And that, imperfectly, lets a point interact with a point seven original-rows or original-columns away. (Or farther: seven original-rows down and three original-columns to the left, say. Have fun counting all the distances.) And again: we have eight-by-eight blocks and even more range. Again: sixteen-by-sixteen blocks and double the range again. Why not carry this on forever?

This is renormalization. It’s a specific sort, called the block-spin renormalization group. It comes from condensed matter physics, where we try to understand how molecules come together to form bulks of matter. Kenneth Wilson stretched this over to studying the Kondo Effect. This is a problem in how magnetic impurities affect electrical resistance. (It’s named for Jun Kondo.) It’s great work. It (in part) earned Wilson a Nobel Prize. But the idea is simple. We can understand complex interactions by making them simple ones. The interactions have a natural scale, cutting off at the nearest neighbor. But we redefine ‘nearest neighbor’, again and again, until it reaches infinitely far away.

This problem, and its solution, come from thermodynamics. Particularly, statistical mechanics. This is a bit ahistoric. Physicists first used renormalization in quantum mechanics. This is all right. As a general guideline, everything in statistical mechanics turns into something in quantum mechanics, and vice-versa. What quantum mechanics lacked, for a generation, was logical rigor for renormalization. This statistical mechanics approach provided that.

Renormalization in quantum mechanics we needed because of virtual particles. Quantum mechanics requires that particles can pop into existence, carrying momentum, and then pop back out again. This gives us electromagnetism, and the strong nuclear force (which holds particles together), and the weak nuclear force (which causes nuclear decay). Leave gravity over on the side. The more momentum in the virtual particle, the shorter a time it can exist. It’s actually the more energy, the shorter the particle lasts. In that guise you know it as the Uncertainty Principle. But it’s momentum that’s important here. This means short-range interactions transfer more momentum, and long-range ones transfer less. And here we had thought forces got stronger as the particles interacting got closer together.

In principle, there is no upper limit to how much momentum one of these virtual particles can have. And, worse, the original particle can interact with its virtual particle. This by exchanging another virtual particle. Which is even higher-energy and shorter-range. The virtual particle can also interact with the field that’s around the original particle. Pairs of virtual particles can exchange more virtual particles. And so on. What we get, when we add this all together, seems like it should be infinitely large. Every particle the center of an infinitely great bundle of energy.

Renormalization, the original renormalization, cuts that off. Sets an effective limit on the system. The limit is not “only particles this close will interact” exactly. It’s more “only virtual particles with less than this momentum will”. (Yes, there’s some overlap between these ideas.) This seems different to us mere dwellers in reality. But to a mathematical physicist, knowing that position and momentum are conjugate variables? Limiting one is the same work as limiting the other.

This, when developed, left physicists uneasy. It’s for good reasons. The cutoff is arbitrary. Its existence we can forgive; we often deal with arbitrary cutoffs for things. When we calculate a weather satellite’s orbit we do not care that other star systems exist. We barely care that Jupiter exists. Still, where to put the cutoff? Quantum Electrodynamics, using this, could provide excellent predictions of physical properties. But shouldn’t we get different predictions with different cutoffs? How do we know we’re not picking a cutoff because it makes our test problem work right? That we’re not picking one that produces garbage for every other problem? Read the writing of a physicist of the time and — oh, why be coy? We all read Richard Feynman, his QED at least. We see him sulking about a technique he used to brilliant effect.

Wilson-style renormalization answered Feynman’s objections. (Though not to Feynman’s satisfaction, if I understand the history right.) The momentum cutoff serves as a scale. Or if you prefer, the scale of interactions we consider tells us the cutoff. Different scales give us different quantum mechanics. One scale, one cutoff, gives us the way molecules interact together, on the scale of condensed-matter physics. A different scale, with a different cutoff, describes the particles of Quantum Electrodynamics. Other scales describe something more recognizable as classical physics. Or the Yang-Mills gauge theory, which describes the Standard Model of subatomic particles, all those quarks and leptons.

Renormalization offers a capsule of much of mathematical physics, though. It started as an arbitrary trick to avoid calculation problems. In time, we found a rationale for the trick. But found it from looking at a problem that seemed unrelated. On learning the related trick well, though, we see they’re different aspects of the same problem. It’s a neat bit of work.


This and all the other 2020 A-to-Z essays should be at this link. Essays from every A-to-Z series should be gathered at this link. I am looking eagerly for topics for the letters S, T, and U, and am scouting ahead for V, W, and X topics also. Thanks for your thoughts, and thank you for reading.

In Our Time podcast repeats episode on Zeno’s Paradoxes


It seems like barely yesterday I was giving people a tip about this podcast. In Our Time, a BBC panel-discussion programme about topics of general interest, this week repeated an episode about Zeno’s Paradoxes. It originally ran in 2016.

The panel this time is two philosophers and a mathematician, which is probably about the correct blend to get the topic down. The mathematician here is Marcus du Sautoy, with the University of Oxford, who’s a renowned mathematics popularizer in his own right. That said, I think he falls into a trap that we STEM types often fall into in talking about Zeno, that of thinking the problem is merely “how can we talk about an infinity of something”. Or “how can we talk about an infinitesimal of something”. Mathematicians have got what seems to be a pretty good hold on how to do these calculations. But that we can provide a logically coherent way to talk about, say, how a line can be composed of points with no length does not tell us where the length of a line comes from. Still, du Sautoy knows rather a few things that I don’t. (The philosophers are Barbara Sattler, with the University of St Andrews, and James Warren, with the University of Cambridge. I know nothing further of either of them.)

The episode also discusses the Quantum Zeno Effect. This is physics, not mathematics, but it’s unsettling nonetheless. The time-evolution of certain systems can be stopped, or accelerated, by frequent measurements of the system. This is not something Zeno would have been pondering. But it is a challenge to our intuition about how change ought to work.

I’ve written some of my own thoughts about some of Zeno’s paradoxes, as well as on the Sorites paradox, which is discussed along the way in this episode. And the episode has prompted new thoughts in me, particularly about what it might mean to do infinitely many things. And what a “thing” might be. This is probably a topic Zeno was hoping listeners would ponder.

Paul Dirac discussed on the In Our Time podcast


It’s a touch off my professed mathematics focus. Also off my comic strips focus. But Paul Dirac was one of the 20th century’s greatest physicists, this in a century rich in great physicists. Part of his genius was in innovative mathematics, and in trusting strange implications of his mathematics.

This week the BBC podcast In Our Time, a not-quite-hourlong panel show discussing varied topics, came to Paul Dirac. It can be heard here, or from other podcast sources. I get it off iTunes myself. The discussion is partly about his career and about the magnitude of his work. It’s not going to make anyone suddenly understand how to do any of his groundbreaking work in quantum mechanics. But it is, after all, an hourlong podcast for the general audience about, in this case, a physicist. It couldn’t explain spinors.

And even if you know a fair bit about Dirac and his work you might pick up something new. This might be slight: one of the panelists mentioned Dirac, in retirement, getting to know Sting. This is not something impossible, but it’s also not a meeting I would have ever imagined happening. So my week has been broadened a bit.

The web site for In Our Time doesn’t have a useful archive category for mathematics, at least that I could find. But many mathematical topics are included in the archive of science subjects, including important topics like the kinetic theory of gases and the work of Emmy Noether.

Why does the Quantum Mechanics Momentum Operator look like that?


I don’t know. I say this for anyone this has unintentionally clickbaited, or who’s looking at a search engine’s preview of the page.

I come to this question from a friend, though, and it’s got me wondering. I don’t have a good answer, either. But I’m putting the question out there in case someone reading this, sometime, does know. Even if it’s in the remote future, it’d be nice to know.

And before getting to the question I should admit that “why” questions are, to some extent, a mug’s game. Especially in mathematics. I can ask why the sum of two consecutive triangular numbers is a square number. But the answer is … well, that’s what we chose to mean by ‘triangular number’, ‘square number’, ‘sum’, and ‘consecutive’. We can show why the arithmetic of the combination makes sense. But that doesn’t seem to answer “why” the way, like, why Neil Armstrong was the first person to walk on the moon. It’s more a “why” like, “why are there Seven Sisters [ in the Pleiades ]?” [*]

But looking for “why” can, at least, give us hints to why a surprising result is reasonable. Draw dots representing a square number, slice it along the space right below a diagonal. You see dots representing two successive triangular numbers. That’s the sort of question I’m asking here.
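(To put symbols to it: the n-th triangular number is \frac{1}{2}n(n+1) , and two consecutive ones add to \frac{1}{2}(n-1)n + \frac{1}{2}n(n+1) = n^2 . For example, 6 + 10 = 16 , which is 4^2 .)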

From here, we get to some technical stuff and I apologize to readers who don’t know or care much about this kind of mathematics. It’s about the wave-mechanics formulation of quantum mechanics. In this, everything that’s observable about a system is contained within a function named \Psi . You find \Psi by solving a differential equation. The differential equation represents the problem. Like, a particle experiencing some force that depends on position. The force is written as a potential energy, because that’s easier to work with. But that’s the kind of problem we do.

Grant that you’ve solved \Psi , since that’s hard and I don’t want to deal with it. You still don’t know, like, where the particle is. You never know that, in quantum mechanics. What you do know is its distribution: where the particle is more likely to be, where it’s less likely to be. You get from \Psi to this distribution for, like, particles by applying an operator to \Psi . An operator is a function with a domain and a range that are spaces. Almost always these are spaces of functions.

Each thing that you can possibly observe, in a quantum-mechanics context, matches an operator. For example, there’s the x-coordinate operator, which tells you where along the x-axis your particle’s likely to be found. This operator is, conveniently, just x. So evaluate x\Psi and that’s your x-coordinate distribution. (This is assuming that we know \Psi in Cartesian coordinates, ones with an x-axis. Please let me do that.) This looks just like multiplying your old function by x, which is nice and easy.

Or you might want to know momentum. The momentum in the x-direction has an operator, \hat{p_x} , which equals -\imath \hbar \frac{\partial}{\partial x} . The \partial is partial derivatives. The \hbar is Planck’s constant, a number which in normal systems of measurement is amazingly tiny. And you know how \imath^2 = -1 . That - symbol is just the minus or the subtraction symbol. So to find the momentum distribution, evaluate -\imath \hbar \frac{\partial}{\partial x}\Psi . This means taking a derivative of the \Psi you already had. And multiplying it by some numbers.
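To see the operator at work, here’s a numerical sketch of my own, in made-up units where \hbar is 1, applied to a wave packet built to carry momentum 2:

    import numpy as np

    x = np.linspace(-10, 10, 2001)
    psi = np.exp(-x**2 / 2) * np.exp(2j * x)   # a Gaussian packet with momentum 2

    p_psi = -1j * np.gradient(psi, x)          # -i hbar d(psi)/dx, with hbar = 1

    # The average momentum: integrate psi-conjugate times p_psi, then normalize.
    dx = x[1] - x[0]
    p_avg = (np.sum(psi.conj() * p_psi) * dx).real / (np.sum(np.abs(psi)**2) * dx)
    print(p_avg)   # about 2.0, the momentum built into the packet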

I don’t mind this multiplication by \hbar . That’s just a number and it’s a quirk of our coordinate system that it isn’t 1. If we wanted, we could set up our measurements of length and duration and stuff so that it was 1 instead.

But. Why is there a -\imath in the momentum operator rather than the position operator? Why isn’t one \sqrt{-\imath} x and the other \sqrt{-\imath} \frac{\partial}{\partial x} ? From a mathematical physics perspective, position and momentum are equally good variables. We tend to think of position as fundamental, but that’s surely a result of our happening to be very good at seeing where things are. If we were primarily good at spotting the momentum of things around us, we’d surely see that as the more important variable. When we get into Hamiltonian mechanics we start treating position and momentum as equally fundamental. Even the notation emphasizes how equal they are in importance, and treatment. We stop using ‘x’ or ‘r’ as the variable representing position. We use ‘q’ instead, a mirror to the ‘p’ that’s the standard for momentum. (‘p’ we’ve always used for momentum because … … … uhm. I guess ‘m’ was already committed, for ‘mass’. What I have seen is that it was taken as the first letter in ‘impetus’ with no other work to do. I don’t know that this is true. I’m passing on what I was told explains what looks like an arbitrary choice.)

So I’m supposing that this reflects how we normally set up \Psi as a function of position. That this is maybe why the position operator is so simple and bare. And then why the momentum operator has a minus, an imaginary number, and this partial derivative stuff. That if we started out with the wave function as a function of momentum, the momentum operator would be just the momentum variable. The position operator might be some mess with \imath and derivatives or worse.

I don’t have a clear guess why one and not the other operator gets full possession of the \imath though. I suppose that has to reflect convenience. If position and momentum are dual quantities then I’d expect we could put a mere constant like -\imath wherever we want. But this is, mostly, me writing out notes and scattered thoughts. I could be trying to explain something that might be as explainable as why the four interior angles of a rectangle are all right angles.

So I would appreciate someone pointing out the obvious reason these operators look like that. I may grumble privately at not having seen the obvious myself. But I’d like to know it anyway.


[*] Because there are not eight.

Reading the Comics, August 16, 2019: The Comments Drive Me Crazy Edition


Last week was another light week of work from Comic Strip Master Command. One could fairly argue that nothing is worth my attention. Except … one comic strip got onto the calendar. And that, my friends, is demanding I pay attention. Because the comic strip got multiple things wrong. And then the comments on GoComics got it more wrong. Got things wrong to the point that I could not be sure people weren’t trolling each other. I know how nerds work. They do this. It’s not pretty. So since I have the responsibility to correct strangers online I’ll focus a bit on that.

Robb Armstrong’s JumpStart for the 13th starts off all right. The early Roman calendar had ten months, December the tenth of them. This was a calendar that didn’t try to cover the whole year. It just started in spring and ran into early winter and that was it. This may seem baffling to us moderns, but it is, I promise you, the least confusing aspect of the Roman calendar. This may seem less strange if you think of the Roman calendar as like a sports team’s calendar, or a playhouse’s schedule of shows, or a timeline for a particular complicated event. There are just some fallow months that don’t need mention.

Joe: 'Originally December was the tenth month of the calendar year. Guess what happens every 823 years? December is about to have five Saturdays, five Sundays, and five Mondays! It's a rare phenomenon!' Crunchy: 'Kinda like a cop who trusts the Internet.'
Robb Armstrong’s JumpStart for the 13th of August, 2019. Essays featuring JumpStart should appear at this link. I am startled to learn that this is a new tag, though. I hope the comic makes more appearances; it’s pleasantly weird in low-key ways. Well, I mean, those are cops driving an ice cream truck and that’s one of the more mundane things about the comic, you know?

Things go wrong with Joe’s claim that December will have five Saturdays, five Sundays, and five Mondays. December 2019 will have no such thing. It has four Saturdays. There are five Sundays, Mondays, and Tuesdays. From Crunchy’s response it sounds like Joe’s run across some Internet Dubious Science Folklore. You know, where you see a claim that (like) Saturn will be larger in the sky than anytime since the glaciers receded or something. And as you’d expect, it’s gotten a bit out of date. December 2018 had five Saturdays, Sundays, and Mondays. So did December 2012. And December 2007.

And as this shows, that’s not a rare thing. Any month with 31 days will have five of some three days in the week. August 2019, for example, has five Thursdays, Fridays, and Saturdays. October 2019 will have five Tuesdays, Wednesdays, and Thursdays. This we can show by the pigeonhole principle. And there are seven months each with 31 days in every year.

It’s not every year that has some month with five Saturdays, Sundays, and Mondays in it. 2024 will not, for example. But a lot of years do. I’m not sure why December gets singled out for attention here. From the setup about December having long ago been the tenth month, I guess it’s some attempt to link the fives of the weekend days to the ten of the month number. But we get this kind of December roughly every five, six, or eleven years, depending on how the leap days fall.
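If you want a computer to hunt for these Decembers, the reasoning compresses nicely: December has 31 days, so it has five Saturdays, Sundays, and Mondays exactly when the 1st falls on a Saturday. A sketch with Python’s standard calendar module:

    import calendar

    # December has five Saturdays, Sundays, and Mondays exactly when
    # the 1st of December is a Saturday.
    for year in range(2000, 2041):
        if calendar.weekday(year, 12, 1) == calendar.SATURDAY:
            print(year)
    # Prints 2001, 2007, 2012, 2018, 2029, 2035, 2040.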

This 823 years stuff, now that’s just gibberish. The Gregorian calendar has its wonders and mysteries yes. None of them have anything to do with 823 years. Here, people in the comments got really bad at explaining what was going on.

So. There are fourteen different … let me call them year plans, available to the Gregorian calendar. January can start on a Sunday when it is a leap year. Or January can start on a Sunday when it is not a leap year. January can start on a Monday when it is a leap year. January can start on a Monday when it is not a leap year. And so on. So there are fourteen possible arrangements of the twelve months of the year, what days of the week the twentieth of January and the thirtieth of December can occur on. The incautious might think this means there’s a period of fourteen years in the calendar. This comes from misapplying the pigeonhole principle.

Here’s the trouble. January 2019 started on a Tuesday. This implies that January 2020 starts on a Wednesday. January 2025 also starts on a Wednesday. But January 2024 starts on a Monday. You start to see the pattern. If this is not a leap year, the next year starts one day of the week later than this one. If this is a leap year, the next year starts two days of the week later. This is all a slightly annoying pattern, but it means that, typically, it takes 28 years to get back where you started. January 2019 started on Tuesday; January 2020 on Wednesday, and January 2021 on Friday. The same will hold for January 2047 and 2048 and 2049. There are other successive years that will start on Tuesday and Wednesday and Friday before that.

Except.

The important difference between the Julian and the Gregorian calendars is century years. 1900. 2000. 2100. These are all leap years by the Julian calendar reckoning. Most of them are not, by the Gregorian. Only century years divisible by 400 are. 2000 was a leap year; 2400 will be. 1900 was not; 2100 will not be, by the Gregorian scheme.

These exceptions to the leap-year-every-four-years pattern mess things up. The 28-year period does not work if it stretches across a non-leap-year century year. By the way, if you have a friend who’s a programmer who has to deal with calendars? That friend hates being a programmer who has to deal with calendars.

There is still a period. It’s just a longer period. Happily the Gregorian calendar has a period of 400 years. The whole sequence of year patterns from 2000 through 2019 will reappear, 2400 through 2419. 2800 through 2819. 3200 through 3219.
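The arithmetic behind that period: 400 Gregorian years hold 97 leap days, so they span 400 \times 365 + 97 = 146,097 days, which is exactly 20,871 weeks. Every date therefore lands on the same day of the week it did 400 years before, and Python will vouch for it:

    import calendar

    # 400 Gregorian years = 146,097 days = exactly 20,871 weeks.
    print(calendar.weekday(2019, 8, 16) == calendar.weekday(2419, 8, 16))  # True
    print(calendar.weekday(2000, 2, 29) == calendar.weekday(2400, 2, 29))  # True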

(Whether they were also the year patterns for 1600 through 1619 depends on where you are. Countries which adopted the Gregorian calendar promptly? Yes. Countries which held out against it, such as Turkey or the United Kingdom? No. Other places? Other, possibly quite complicated, stories. If you ask your computer for the 1619 calendar it may well look nothing like 2019’s, and that’s because it is showing the Julian rather than Gregorian calendar.)

Except.

This is all in reference to the days of the week. The date of Easter, and all of the movable holidays tied to Easter, is on a completely different cycle. Easter is set by … oh, dear. Well, it’s supposed to be a simple enough idea: the Sunday after the first spring full moon. It uses a notional moon that’s less difficult to predict than the real one. It’s still a bit of a mess. The date of Easter is periodic again, yes. But the period is crazy long. It would take 5,700,000 years to complete its cycle on the Gregorian calendar. It never will. Never try to predict Easter. It won’t go well. Don’t believe anything amazing you read about Easter online.

Norm, pondering: 'I have a new theory about life.' (Illustrated with a textbook, 'Quantum Silliness'.) 'It's not as simple as everything-is-easy, or everything-is-hard.' (Paper with 1 + 1 = 2; another with Phi = BA.) 'Instead, life is only hard when it should be easy and easy when it's expected to be hard. That way you're never prepared.' (The papers are torn up.) Friend: 'Seems to me you've stepped right into the middle of chaos theory.' Norm: 'Or just my 30s.'
Michael Jantze’s The Norm (Classics) for the 15th of August, 2019. I had just written how I wanted to share this strip more. Essays about The Norm, both the current (“4.0”) run and older reruns (“Classics”), are at this link.

Michael Jantze’s The Norm (Classics) for the 15th is much less trouble. It uses some mathematics to represent things being easy and things being hard. Easy’s represented with arithmetic. Hard is represented with the calculations of quantum mechanics. Which, oddly, look very much like arithmetic. \phi = BA even has fewer symbols than 1 + 1 = 2 has. But the symbols mean different abstract things. In a quantum mechanics context, ‘A’ and ‘B’ represent — well, possibly matrices. More likely operators. Operators work a lot like functions and I’m going to skip discussing the ways they don’t. Multiplying operators together — B times A, here — works by using the range of one function as the domain of the other. Like, imagine ‘B’ means ‘take the square of’ and ‘A’ means ‘take the sine of’. Then ‘BA’ would mean ‘take the square of the sine of’ (something). The fun part is the ‘AB’ would mean ‘take the sine of the square of’ (something). Which is fun because most of the time, those won’t have the same value. We accept that, mathematically. It turns out to work well for some quantum mechanics properties, even though it doesn’t work like regular arithmetic. So \phi = BA holds complexity, or at least strangeness, in its few symbols.
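If the operators were ordinary function compositions, a couple lines of Python would show the order mattering. My example, not Jantze’s:

    import math

    def A(x): return math.sin(x)   # 'take the sine of'
    def B(x): return x ** 2        # 'take the square of'

    print(B(A(1.0)))   # 'BA', the square of the sine: about 0.708
    print(A(B(1.0)))   # 'AB', the sine of the square: about 0.841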

Moose, bringing change and food back from the beach snack stand: 'Arch gave me five and a single so he gets ... $2.11 in change!' Archie: 'Right, Moose! Thanks!' (To Betty.) 'Notice how Moose can do math faster at the beach than he can anywhere else?' Betty: 'Why is that?' Moose, pointing to his feet: 'Easy! I don't have to take off my shoes to count my toes!'
Henry Scarpelli and Craig Boldman’s Archie rerun for the 16th of August, 2019. Essays exploring something mentioned by Archie ought to be at this link. The strip is in perpetual reruns but I don’t think I’ve exhausted the cycle of comics they reprint yet.

Henry Scarpelli and Craig Boldman’s Archie for the 16th is a joke about doing arithmetic on your fingers and toes. That’s enough for me.


There were some more comic strips which just mentioned mathematics in passing.

Brian Boychuk and Ron Boychuk’s The Chuckle Brothers rerun for the 11th has a blackboard of mathematics used to represent deep thinking. Also, I think, the colorist didn’t realize that they were standing in front of a blackboard. You can see mathematicians doing work in several colors, either to convey information in shorthand or because they had several colors of chalk. Not this way, though.

Mark Leiknes’s Cow and Boy rerun for the 16th mentions “being good at math” as something to respect cows for. The comic’s just this past week started over from its beginning. If you’re interested in deeply weird and long-since cancelled comics this is as good a chance to jump on as you can get.

And Stephen Bentley’s Herb and Jamaal rerun for the 16th has a kid worried about a mathematics test.


That’s the mathematically-themed comic strips for last week. All my Reading the Comics essays should be at this link. I’ve traditionally run at least one essay a week on Sunday. But recently that’s moved to Tuesday for no truly compelling reason. That seems like it’s working for me, though. I may stick with it. If you do have an opinion about Sunday versus Tuesday please let me know.

Don’t let me know on Twitter. I continue to have this problem where Twitter won’t load on Safari. I don’t know why. I’m this close to trying it out on a different web browser.

And, again, I’m planning a fresh A To Z sequence. It’s never too early to think of mathematics topics that I might explain. I should probably have already started writing some. But you’ll know the official announcement when it comes. It’ll have art and everything.

My 2018 Mathematics A To Z: Commutative


Today’s A to Z term comes from Reynardo, @Reynardo_red on Twitter, and is a challenge. And the other A To Z posts for this year should be at this link.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble titles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Commutative.

Some terms are hard to discuss. This is among them. Mathematicians find commutative things early on. Addition of whole numbers. Addition of real numbers. Multiplication of whole numbers. Multiplication of real numbers. Multiplication of complex-valued numbers. It’s easy to think of this commuting as just having liberty to swap the order of things. And it’s easy to think of commuting as “two things you can do in either order”. It inspires physical examples like rotating a dial, clockwise or counterclockwise, however much you like. Or outside the things that seem obviously mathematical. Add milk and then cereal to the bowl, or cereal and then milk. As long as you don’t overfill the bowl, there’s not an important difference. Per Wikipedia, if you’re putting one sock on each foot, it doesn’t matter which foot gets a sock first.

When something is this accessible, and this universal, it gets hard to talk about. It threatens to be invisible. It was hard to say much interesting about the still air in a closed room, at least before there was a chemistry that could tell it wasn’t a homogeneous invisible something, and before there was a statistical mechanics that could tell it was doing something even when it was doing nothing.

But commutativity is different. It’s easy to think of mathematics that doesn’t commute. Subtraction doesn’t, for all that it’s as familiar as addition. And despite that we try, in high school algebra, to fuse it into addition. Division doesn’t either, for all that we try to think of it as multiplication. Rotating things in three dimensions doesn’t commute. Nor does multiplying quaternions, which are a kind of number still. (I’m double-dipping here. You can use quaternions to represent three-dimensional rotations, and vice-versa. So they aren’t quite different examples, even though you can use quaternions to do things unrelated to rotations.) Clothing is a mass of things that can and can’t be put on first.

We talk about commuting as if it’s something in (or not in) the operations we do. Adding. Rotating. Walking in some direction. But it’s not entirely in that. Consider walking directions. From an intersection in the city, walk north to the first intersection you encounter. And walk east to the first intersection you encounter. Does it matter whether you walk north first and then east, or east first and then north? In some cases, no; famously, in Midtown Manhattan there’s no difference. At least if we pretend Broadway doesn’t exist.

Also if we don’t start from near the edge of the island, or near Central Park. An operation, even something familiar like addition, is a function. Its domain is a set of ordered pairs. Each thing in the pair is from the set of whatever might be added together. (Or multiplied, or whatever the name of the operation is.) The operation commutes if the order of the pair doesn’t matter. It’s easy to find sets and operations that won’t commute. I suppose it’s for the same reason it’s easier to find rectangular rather than square things. We’re so used to working with operations like multiplication that we forget that multiplication needs things to multiply.

Whether a thing commutes turns up often in group theory. This shouldn’t surprise. Group theory studies how arithmetic works. A “group”, which is a set of things with an operation like multiplication on it, might or might not commute. A “ring”, which has a set of things and two operations, has some commutativity built into it. One ring operation is something like addition. That commutes, or else you don’t have a ring. The other operation is something like multiplication. That might or might not commute. It depends what you need for your problem. A ring with commuting multiplication, plus some other stuff, can reach the heights of being a “field”. Fields are neat. They look a lot like the real numbers, but they can be all weird, too.

But even in a group, that doesn’t have to have a commuting multiplication, we can tease out commutativity. There is a thing named the “commutator”, which is this particular way of multiplying elements together. You can use it to split the original group in the way that odds and evens split the whole numbers. That splitting is based on the same multiplication as the original group. But its domain is now classes based on elements of the original group. What’s created, the “commutator subgroup”, is commutative. We can find a thing, based on what we are interested in, which offers commutativity right nearby.

It reaches further. In analysis, it can be useful to think of functions as “mappings”. We describe this as though a function took a domain and transformed it into a range. We can compose these functions together: take the range from one function and use it as the domain for another. Sometimes these chains of functions will commute. We can get from the original set to the final set by several paths. This can produce fascinating and beautiful proofs that look as if you just drew a lattice-work. The MathWorld page on “Commutative Diagram” has some examples of this, and I recommend just looking at the pictures. Appreciate their aesthetic, particularly the ones immediately after the sentence about “Commutative diagrams are usually composed by commutative triangles and commutative squares”.

Whether these mappings commute can have meaning. This takes us, maybe inevitably, to quantum mechanics. Mathematically, quantum mechanics represents systems as either a wave function or a matrix, whichever is more convenient. We can use this to find the distribution of positions or momentums or energies or anything else we would like to know. Distributions are as much as we can hope for from quantum mechanics. We can say what (eg) the position of something is most likely to be but not what it is. That’s all right.

The mathematics of finding these distributions is just applying an operator, taking a mapping, on this wave function or this matrix. Some pairs of these operators commute, like the ones that let us find momentum and find kinetic energy. Some do not, like those to find position and angular momentum.

We can describe how much two operators do or don’t commute. This is through a thing called the “commutator”. Its form looks almost playfully simple. Call the operators ‘f’ and ‘g’. And by ‘fg’ we mean, “do g, then do f”. (This seems awkward. But if you think of ‘fg’ as ‘f(g(x))’, where ‘x’ is just something in the domain of g, then this seems less awkward.) The commutator of ‘f’ and ‘g’ is then whatever ‘fg - gf’ is. If it’s always zero, then ‘f’ and ‘g’ commute. If it’s ever not zero, then they don’t.
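When the operators are matrices the commutator is a few lines of arithmetic. A sketch with the Pauli spin matrices, a standard quantum-mechanics example of operators that don’t commute:

    import numpy as np

    sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
    sigma_y = np.array([[0, -1j], [1j, 0]])
    sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

    commutator = sigma_x @ sigma_y - sigma_y @ sigma_x   # 'fg - gf'
    print(commutator)                                    # not zero: they don't commute
    print(np.allclose(commutator, 2j * sigma_z))         # True: it equals 2i sigma-z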

This is easy to understand physically. Imagine starting from a point on the surface of the earth. Travel south one mile and then west one mile. You are at a different spot than you would be, had you instead travelled west one mile and then south one mile. How different? That’s the commutator. It’s obviously zero, for just multiplying some regular old numbers together. It’s sometimes zero, for these paths on the Earth’s surface. It’s never zero, for finding-the-position and finding-the-angular-momentum. The amount by which that’s never zero we can see as the famous Uncertainty Principle, the limits of what kinds of information we can know about the world.

Still, it is a hard subject to describe. Things which commute are so familiar that it takes work to imagine them not commuting. (How could three times four equal anything but four times three?) Things which do not commute either obviously shouldn’t (add hot water to the instant oatmeal, and eat it), or are unfamiliar enough people need to stop and think about them. (Rotating something in one direction and then another, in three dimensions, generally doesn’t commute. But I wouldn’t fault you for testing this out with a couple objects on hand before being sure about it.) But it can be noticed, once you know to explore.

Reading the Comics, December 11, 2017: Vamping For Andertoons Edition


So Mark Anderson’s Andertoons has been missing from the list of mathematically-themed comics the last couple weeks. Don’t think I haven’t been worried about that. But it’s finally given another on-topic-enough strip and I’m not going to include it here. I’ve had a terrible week and I’m going to use the comics we got in last week slowly.

Hector D Cantu and Carlos Castellanos’s Baldo for the 10th of December uses algebra as the type for homework you’d need help with. It reads plausibly enough to me, at least so far as I remember learning algebra.

Greg Evans’s Luann Againn for the 10th reprints the strip of the 10th of December, 1989. And as often happens, mathematics is put up as the stuff that’s too hard to really do. The expressions put up don’t quite parse; there’s nothing to solve. But that’s fair enough for a panicked brain. To not recognize what the problem even is makes it rather hard to solve.

Ruben Bolling’s Super-Fun-Pak Comix for the 10th is an installation of Quantum Mechanic, playing on the most fun example of non-commutative processes I know. That’s the uncertainty principle, which expresses itself as pairs of quantities that can’t be precisely measured simultaneously. There are less esoteric kinds of non-commutative processes. Like, rotating something 90 degrees about a horizontal axis and then 90 degrees about a vertical axis leaves it oriented differently than doing the vertical rotation first and then the horizontal one. But that’s too easy to understand to capture the imagination, at least until you’re as smart as an adult and as thoughtful as a child.
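If you don’t have a book handy to rotate, rotation matrices will make the same point. A sketch:

    import numpy as np

    def rotate_x(degrees):   # rotation about a horizontal axis
        t = np.radians(degrees)
        return np.array([[1, 0, 0],
                         [0, np.cos(t), -np.sin(t)],
                         [0, np.sin(t),  np.cos(t)]])

    def rotate_z(degrees):   # rotation about the vertical axis
        t = np.radians(degrees)
        return np.array([[np.cos(t), -np.sin(t), 0],
                         [np.sin(t),  np.cos(t), 0],
                         [0, 0, 1]])

    one_way   = rotate_z(90) @ rotate_x(90)   # horizontal rotation first
    other_way = rotate_x(90) @ rotate_z(90)   # vertical rotation first
    print(np.allclose(one_way, other_way))    # False: the order matters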

Maria Scrivan’s Half Full for the 11th features Albert Einstein and one of the few equations that everybody knows. So that’s something.

Jeff Stahler’s Moderately Confused for the 11th features the classic blackboard full of equations, this time to explain why Christmas lights wouldn’t work. There is proper mathematics in lights not working. It’s that electrical-engineering work about the flow of electricity. The problem is, typically, a broken or loose bulb. Maybe a burnt-out fuse, although I have never fixed a Christmas lights problem by replacing the fuse. It’s just something to do so you can feel like you’ve taken action before screaming in rage and throwing the lights out onto the front porch. More interesting to me is the mathematics of strands getting tangled. The idea — a foldable thread, marked at regular intervals by points that can hook together — seems trivially simple. But it can give insight into how long molecules, particularly proteins, will fold together. It may help someone frustrated to ponder that their light strands are knotted for the same reasons life can exist. But I’m not sure it ever does.

Reading the Comics, September 24, 2017: September 24, 2017 Edition


Comic Strip Master Command sent a nice little flood of comics this week, probably to make sure that I transitioned from the A To Z project to normal activity without feeling too lost. I’m going to cut the strips not quite in half because I’m always delighted when I can make a post that’s just a single day’s mathematically-themed comics. Last Sunday, the 24th of September, was such a busy day. I’m cheating a little on what counts as noteworthy enough to talk about here. But people like comic strips, and good on them for liking them.

Norm Feuti’s Gil for the 24th sees Gil discover and try to apply some higher mathematics. There’s probably a good discussion about what we mean by division to explain why Gil’s experiment didn’t pan out. I would pin it down to eliding the difference between “dividing in half” and “dividing by a half”, which is a hard one. Terms that seem almost alike but mean such different things are probably the hardest part of mathematics.
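To spell the difference out: dividing by a half means multiplying by two, so \frac{1}{2} \div \frac{1}{2} = \frac{1}{2} \times \frac{2}{1} = 1 . Dividing in half means multiplying by a half, so the snapped half-cookie is \frac{1}{2} \times \frac{1}{2} = \frac{1}{4} of a cookie. Both calculations are right; they are different calculations.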

Gil, eating cookies and doing mathematics. 'Dividing fractions. 1/2 divided by 1/2', which he works out to be 1. 'One half divided in half equals one? Wait a minute. If these calculations are correct, then that means ... ' And he takes a half-cookie and snaps it in half, to his disappointment. 'Humph. what's the point of this advanced math if it only works on paper?'
Norm Feuti’s Gil for the 24th of September, 2017, didn’t appear on Gocomics.com or Comics Kingdom, my usual haunts for these comics. But I started reading the strip when it was on Comics Kingdom, and keep reading its reruns. Feuti has continued the comic strip on his own web site, and posts it on Twitter. So it’s quite easy to pick the strip back up, if you have a Twitter account or can read RSS from it. I assume you can read RSS from it.

Russell Myers’s Broom Hilda looks like my padding. But the last panel of the middle row gets my eye. The squirrels talk about how on the equinox night and day “can never be of identical length, due to the angular size of the sun and atmospheric refraction”. This is true enough for the equinox. While any spot on the Earth might see twelve hours facing the sun and twelve hours facing away, the fact the sun isn’t a point, and that the atmosphere carries light around to the “dark” side of the planet, means daylight lasts a little longer than night.

Ah, but. This gets my mathematical modelling interest going. Because it is true that, at least away from the equator, there are times of year that day is way shorter than night. And there are times of year that day is way longer than night. Shouldn’t there be some time in the middle when day is exactly equal to night?

The easy argument for “yes” is built on the Intermediate Value Theorem. Let me define a function, with domain the days of the year. The range is the real numbers. It’s defined to be the length of day minus the length of night. Let me say it’s in minutes, but it doesn’t change things if you argue that it’s seconds, or milliseconds, or hours, so long as you keep parts of hours in also. So, like, 12.015 hours or something. At the height of winter, this function is definitely negative; night is longer than day. At the height of summer, this function is definitely positive; night is shorter than day. So there must be some time, between the height of winter and the height of summer, when the function is zero. And therefore there must be some day, even if it isn’t the equinox, when night and day are the same length.
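If you’d like to watch the argument run, here’s a sketch in Python. The day-length model is made up — a sinusoid plus the eight or so minutes the sun’s angular size and refraction contribute — and note that being a continuous model, it quietly dodges the flaw mentioned next. The zero lands a bit after the model’s equinox, which is the squirrels’ point.

import math

def day_minus_night(t):
    # t in days from January 1; a 300-minute swing plus an 8-minute bonus
    # for the sun's angular size and refraction. All assumed numbers.
    return 300 * math.sin(2 * math.pi * (t - 80) / 365) + 8

lo, hi = 172, 355          # height of summer to height of winter, roughly
while hi - lo > 1e-6:      # bisection: keep the sign change bracketed
    mid = (lo + hi) / 2
    if day_minus_night(lo) * day_minus_night(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(lo)                  # about day 264, a little after this model's equinox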

There’s a flaw here and I leave that to classroom discussions to work out. I’m also surprised to learn that my onetime colleague Dr Helmer Aslaksen’s grand page of mathematical astronomy and calendar essays doesn’t seem to have anything about length of day calculations. But go read that anyway; you’re sure to find something fascinating.

Mike Baldwin’s Cornered features an old-fashioned adding machine being used to drown an audience in calculations. Which makes for a curious pairing with …

Bill Amend’s FoxTrot, and its representation of “math hipsters”. I hate to encourage Jason or Marcus in being deliberately difficult. But there are arguments to make for avoiding digital calculators in favor of old-fashioned — let’s call them analog — calculators. One is that people understand tactile operations better, or at least sooner, than they do digital ones. The slide rule changes multiplication and division into combining or removing lengths of things, and we probably have an instinctive understanding of lengths. So this should train people into anticipating what a result is likely to be. This encourages sanity checks, verifying that an answer could plausibly be right. And since a calculation takes effort, it encourages people to think out how to arrange the calculation to require less work. This should make it less vulnerable to accidents.

I suspect that many of these benefits are what you get in the ideal case, though. Slide rules, and abacuses, are no less vulnerable to accidents than anything else is. And if you are skilled enough with the abacus that you have no trouble multiplying 18 by 7, you probably would not find multiplying 17 by 8 any harder, and wouldn’t notice if you mistook one for the other.

Jef Mallett’s Frazz asserts that numbers are cool but the real insight is comparisons. And we can argue that comparisons are more basic than numbers. We can talk about one thing being bigger than another even if we don’t have a precise idea of numbers, or how to measure them. See every mathematics blog introducing the idea of different sizes of infinity.

Bill Whitehead’s Free Range features Albert Einstein, universal symbol for really deep thinking about mathematics and physics and stuff. And even a blackboard full of equations for the title panel. I’m not sure whether the joke is a simple absent-minded-professor joke, or whether it’s a relabelled joke about Werner Heisenberg. Absent-minded-professor jokes are not mathematical enough for me, so let me point once again to American Cornball. They’re the first subject in Christopher Miller’s encyclopedia of comic topics. So I’ll carry on as if the Werner Heisenberg joke were the one meant.

Heisenberg is famous, outside World War II history, for the Uncertainty Principle. This is one of the core parts of quantum mechanics, under which there’s a limit to how precisely one can know both the position and momentum of a thing. To identify, with absolutely zero error, where something is requires losing all information about what its momentum might be, and vice-versa. You see the application of this to a traffic cop’s question about knowing how fast someone was going. This makes some neat mathematics because all the information about something is bundled up in a quantity called the Psi function. To make a measurement is to modify the Psi function by having an “operator” work on it. An operator is what we call a function whose domain and range are themselves sets of functions. To measure both position and momentum is equivalent to working on Psi with one operator and then another. But these operators don’t commute. You get different results in measuring momentum and then position than you do measuring position and then momentum. And so we can’t know both of these with infinite precision.
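You can watch the non-commuting happen with matrices standing in for the operators. This is only a sketch — a position operator and a finite-difference momentum operator on a grid, with \hbar set to 1 — but their commutator acts on a smooth wave packet very nearly as multiplying by \imath does, which is the textbook result.

import numpy as np

n, dx = 400, 0.05
x = (np.arange(n) - n // 2) * dx
X = np.diag(x)                  # position operator: multiply by x
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dx)
P = -1j * D                     # momentum operator: -i d/dx, discretized
commutator = X @ P - P @ X
psi = np.exp(-x ** 2)           # a smooth wave packet
interior = slice(10, -10)       # stay clear of the grid's edges
print(np.max(np.abs((commutator @ psi - 1j * psi)[interior])))  # tiny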

There are pairs of operators that do commute. They’re not necessarily ones we care about, though. Like, the total energy commutes with the square of the angular momentum. So, you know, if you need to measure with infinite precision the energy and the angular momentum of something you can do it. If you had measuring tools that were perfect. You don’t, but you could imagine having them, and in that case, good. Underlying physics wouldn’t spoil your work.

Probably the panel was an absent-minded professor joke.

The Summer 2017 Mathematics A To Z: Young Tableau


I never heard of today’s entry topic three months ago. Indeed, three weeks ago I was still making guesses about just what Gaurish, author of For the love of Mathematics, was asking about. It turns out to be maybe the grand union of everything that’s ever been in one of my A To Z sequences. I overstate, but barely.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Young Tableau.

What a Young Tableau is, specifically, is beautiful in its simplicity. It could almost be a recreational mathematics puzzle, except that it isn’t challenging enough.

Start with a couple of boxes laid in a row. As many or as few as you like.

Now set another row of boxes. You can have as many as the first row did, or fewer. You just can’t have more. Set the second row of boxes — well, your choice. Either below the first row, or else above. I’m going to assume you’re going below the first row, and will write my directions accordingly. If you do things the other way you’re following a common enough convention. I’m leaving it on you to figure out what the directions should be, though.

Now add in a third row of boxes, if you like. Again, as many or as few boxes as you like. There can’t be more than there are in the second row. Set it below the second row.

And a fourth row, if you want four rows. Again, no more boxes in it than the third row had. Keep this up until you’ve got tired of adding rows of boxes.

How many boxes do you have? I don’t know. But take the numbers 1, 2, 3, 4, 5, and so on, up to whatever the count of your boxes is. Can you fill in one number for each box? So that the numbers are always increasing as you go left to right in a single row? And as you go top to bottom in a single column? Yes, of course. Go in order: ‘1’ for the first box you laid down, then ‘2’, then ‘3’, and so on, increasing up to the last box in the last row.

Can you do it in another way? Any other order?

Except for the simplest of arrangements, like a single row of four boxes or three rows of one box atop another, the answer is yes. There can be many of them, turns out. Seven boxes, arranged three in the first row, two in the second, one in the third, and one in the fourth, have 35 possible arrangements. It doesn’t take a very big diagram to get an enormous number of possibilities. Could be fun drawing an arbitrary stack of boxes and working out how many arrangements there are, if you have some time in a dull meeting to pass.
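If your dull meeting needs the answer faster, there is a formula for this count — the hook length formula, which the essay doesn’t otherwise get into. Here’s a sketch of it in Python; it reproduces the 35 above, and the one-one-two counts for the three-box shapes discussed below.

from math import factorial

def count_tableaux(shape):
    # shape lists row lengths, longest first, e.g. (3, 2, 1, 1).
    n = sum(shape)
    product = 1
    for i, row_len in enumerate(shape):
        for j in range(row_len):
            arm = row_len - j - 1                         # boxes to the right
            leg = sum(1 for r in shape[i + 1:] if r > j)  # boxes below
            product *= arm + leg + 1                      # this box's hook length
    return factorial(n) // product

print(count_tableaux((3, 2, 1, 1)))   # 35
print(count_tableaux((3,)), count_tableaux((1, 1, 1)), count_tableaux((2, 1)))  # 1 1 2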

Let me step away from filling boxes. In one of its later, disappointing, seasons Futurama finally did a body-swap episode. The gimmick: two bodies could only swap the brains within them one time. So would it be possible to put Bender’s brain back in his original body, if he and Amy (or whoever) had already swapped once? The episode drew minor amusement in mathematics circles, and a lot of amazement in pop-culture circles. The writer, a mathematics major, found a proof that showed it was indeed always possible, even after many pairs of people had swapped bodies. The idea that a theorem was created for a TV show impressed many people who think theorems are rarer and harder to create than they necessarily are.

It was a legitimate theorem, and in a well-developed field of mathematics. It’s about permutation groups. These are the study of the ways you can swap pairs of things. I grant this doesn’t sound like much of a field. There are surprisingly many interesting things to learn just from studying how stuff can be swapped, though. It’s even of real-world relevance. Most subatomic particles of a kind — electrons, top quarks, gluons, whatever — are identical to every other particle of the same kind. Physics wouldn’t work if they weren’t. What would happen if we swap the electron on the left for the electron on the right, and vice-versa? How would that change our physics?

A chunk of quantum mechanics studies what kinds of swaps of particles would produce an observable change, and what kind of swaps wouldn’t. When the swap doesn’t make a change we can describe this as a symmetric operation. When the swap does make a change, that’s an antisymmetric operation. And — the Young Tableau that’s a single row of two boxes? That matches up well with this symmetric operation. The Young Tableau that’s two rows of a single box each? That matches up with the antisymmetric operation.

How many ways could you set up three boxes, according to the rules of the game? A single row of three boxes, sure. One row of two boxes and a row of one box. Three rows of one box each. How many ways are there to assign the numbers 1, 2, and 3 to those boxes, and satisfy the rules? One way to do the single row of three boxes. Also one way to do the three rows of a single box. There’s two ways to do the one-row-of-two-boxes, one-row-of-one-box case.

What if we have three particles? How could they interact? Well, all three could be symmetric with each other. This matches the first case, the single row of three boxes. All three could be antisymmetric with each other. This matches the three rows of one box. Or you could have two particles that are symmetric with each other and antisymmetric with the third particle. Or two particles that are antisymmetric with each other but symmetric with the third particle. Two ways to do that. Two ways to fill in the one-row-of-two-boxes, one-row-of-one-box case.

This isn’t merely a neat, aesthetically interesting coincidence. I wouldn’t spend so much time on it if it were. There’s a matching here that’s built on something meaningful. The different ways to arrange numbers in a set of boxes like this pair up with a select, interesting set of matrices whose elements are complex-valued numbers. You might wonder who introduced complex-valued numbers, let alone matrices of them, into evidence. Well, who cares? We’ve got them. They do a lot of work for us. So much work they have a common name, the “symmetric group over the complex numbers”. As my leading example suggests, they’re all over the place in quantum mechanics. They’re good to have around in regular physics too, at least in the right neighborhoods.

These Young Tableaus turn up over and over in group theory. They match up with polynomials, because yeah, everything is polynomials. But they turn out to describe polynomial representations of some of the superstar groups out there. Groups with names like the General Linear Group (square matrices), or the Special Linear Group (square matrices with determinant equal to 1), or the Special Unitary Group (that thing where quantum mechanics says there have to be particles whose names are obscure Greek letters with superscripts of up to five + marks). If you’d care for more, here’s a chapter by Dr Frank Porter describing, in part, how you get from Young Tableaus to the obscure baryons.

Porter’s chapter also lets me tie this back to tensors. Tensors have varied ranks, the number of different indices you can have on the things. What happens when you swap pairs of indices in a tensor? How many ways can you swap them, and what does that do to what the tensor describes? Please tell me you already suspect this is going to match something in Young Tableaus. It does, by way of the symmetries and permutations mentioned above. They are there.
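For rank-2 tensors the swapping is easy enough to see in a few lines. A sketch, with an arbitrary matrix standing in for the tensor: it splits into a symmetric piece and an antisymmetric piece, matching the row-of-two-boxes and column-of-two-boxes tableaus.

import numpy as np

T = np.arange(9.0).reshape(3, 3)   # any old rank-2 tensor
sym = (T + T.T) / 2                # unchanged by swapping indices
antisym = (T - T.T) / 2            # flips sign when indices swap
print(np.allclose(T, sym + antisym))   # True: the two shapes cover everything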

As I say, three months ago I had no idea these things existed. If I ever ran across them it was from seeing the name at MathWorld’s list of terms that start with ‘Y’. The article shows some nice examples (with each row atop the previous one) but doesn’t make clear how much stuff this subject runs through. I can’t fit everything into a reasonable essay. (For example: the number of ways to arrange, say, 20 boxes into rows meeting these rules is itself a partition problem. Partition problems are probability and statistical mechanics. Statistical mechanics is the flow of heat, and the movement of the stars in a galaxy, and the chemistry of life.) I am delighted by what does fit.

The Summer 2017 Mathematics A To Z: Morse Theory


Today’s A To Z entry is a change of pace. It dives deeper into analysis than this round has been. The term comes from Mr Wu, of the Singapore Maths Tuition blog, whom I thank for the request.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Morse Theory.

An old joke, as most of my academia-related ones are. The young scholar says to his teacher how amazing it was in the old days, when people were foolish, and thought the Sun and the Stars moved around the Earth. How fortunate we are to know better. The elder says, ah yes, but what would it look like if it were the other way around?

There are many things to ponder packed into that joke. For one, the elder scholar’s awareness that our ancestors were no less smart or perceptive or clever than we are. For another, the awareness that there is a problem. We want to know about the universe. But we can only know what we perceive now, where we are at this moment. Even a note we’ve written in the past, or a message from a trusted friend, we can’t take uncritically. What we know is that we perceive this information in this way, now. When we pay attention to our friends in the philosophy department we learn that knowledge is even harder than we imagine. But I’ll stop there. The problem is hard enough already.

We can put it in a mathematical form, one that seems immune to many of the worst problems of knowledge. In this form it looks something like this: what can we know about the universe, if all we really know is what things in that universe are doing near us? The things that we look at are functions. The universe we’re hoping to understand is the domain of the functions. One filter we use to see the universe is Morse Theory.

We don’t look at every possible function. Functions are too varied and weird for that. We look at functions whose range is the real numbers. And they must be smooth. This is a term of art. It means the function has derivatives. It has to be continuous. It can’t have sharp corners. And it has to have lots of derivatives. The first derivative of a smooth function has to also be continuous, and has to also lack corners. And the derivative of that first derivative has to be continuous, and to lack corners. And the derivative of that derivative has to be the same. A smooth function can be differentiated over and over again, infinitely many times. None of those derivatives can have corners or jumps or missing patches or anything. This is what makes it smooth.

Most functions are not smooth, in much the same way most shapes are not circles. That’s all right. There are many smooth functions anyway, and they describe things we find interesting. Or we think they’re interesting, anyway. Smooth functions are easy for us to work with, and to know things about. There’s plenty of smooth functions. If you’re interested in something else there’s probably a smooth function that’s close enough for practical use.

Morse Theory builds on the “critical points” of these smooth functions. A critical point, in this context, is one where the derivative is zero. Derivatives being zero usually signal something interesting going on. Often they show where the function changes behavior. In freshman calculus they signal where a function changes from increasing to decreasing, so the critical point is a maximum. In physics they show where a moving body no longer has an acceleration, so the critical point is an equilibrium. Or where a system changes from one kind of behavior to another. And here — well, many things can happen.

So take a smooth function. And take a critical point that it’s got. (And, erg. Technical point. The derivative of your smooth function, at that critical point, shouldn’t be having its own critical point going on at the same spot. That makes stuff more complicated.) It’s possible to approximate your smooth function near that critical point with, of course, a polynomial. It’s always polynomials. The shape of these polynomials gives you an index for these points. And that can tell you something about the shape of the domain you’re on.

At least, it tells you something about what the shape is where you are. The universal model for this — based on my skimming of texts, papers, and popularizations — is of a torus standing vertically. Like a doughnut that hasn’t tipped over, or like a tire on a car that’s working as normal. I suspect this is the best shape to use for teaching, as anyone can understand it while it still shows the different behaviors. I won’t resist.

Imagine slicing this tire horizontally. Slice it close to the bottom, below the central hole, and the part that drops down is a disc. At least, it could be flattened out tolerably well to a disc.

Slice it somewhere that intersects the hole, though, and you have a different shape. You can’t squash that down to a disc. You have a noodle shape. A cylinder at least. That’s different from what you got with the first slice.

Slice the tire somewhere higher. Somewhere above the central hole, and you have … well, it’s still a tire. It’s got a hole in it, but you could imagine patching it and driving on. There’s another different shape that we’ve gotten from this.

Imagine we were confined to the surface of the tire, but did not know what surface it was. That we start at the lowest point on the tire and ascend it. From the way the smooth functions around us change we can tell how the surface we’re on has changed. We can see its change from “basically a disc” to “basically a noodle” to “basically a doughnut”. We could work out what the surface we’re on has to be, thanks to how these smooth functions around us change behavior.
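If you’d like to see those critical points with symbols, here’s a sketch using sympy. The parametrization of the standing torus and the radii are my own assumptions; the code finds the four critical points of the height function and their indices — one minimum, two saddles (where the slices change character), one maximum.

import sympy as sp

theta, phi = sp.symbols('theta phi')
R, r = 2, 1    # big and small radii of the tire, assumed values
height = (R + r * sp.cos(theta)) * sp.sin(phi)   # height on a standing torus
grad = [sp.diff(height, v) for v in (theta, phi)]
hess = sp.hessian(height, (theta, phi))
for t0 in (0, sp.pi):
    for p0 in (sp.pi / 2, -sp.pi / 2):
        point = {theta: t0, phi: p0}
        assert all(g.subs(point) == 0 for g in grad)       # it's a critical point
        eigs = hess.subs(point).eigenvals()
        index = sum(m for e, m in eigs.items() if e < 0)   # negative directions
        print(height.subs(point), index)   # 3 (index 2), -3 (index 0), 1 and -1 (saddles, index 1)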

Occasionally we mathematical-physics types want to act as though we’re not afraid of our friends in the philosophy department. So we deploy the second thing we know about Immanuel Kant. He observed that knowing the force of gravity falls off as the square of the distance between two things implies that the things should exist in a three-dimensional space. (Source: I dunno, I never read his paper or book or whatever and dunno I ever heard anyone say they did.) It’s a good observation. Geometry tells us what physics can happen, but what physics does happen tells us what geometry they happen in. And it tells the philosophy department that we’ve heard of Immanuel Kant. This impresses them greatly, we tell ourselves.

Morse Theory is a manifestation of how observable physics teaches us the geometry they happen on. And in an urgent way, too. Some of Edward Witten’s pioneering work in superstring theory was in bringing Morse Theory to quantum field theory. He showed a set of problems called the Morse Inequalities gave us insight into supersymmetric quantum mechanics. The link between physics and doughnut-shapes may seem vague. This is because you’re not remembering that mathematical physics sees “stuff happening” as curves drawn on shapes which represent the kind of problem you’re interested in. Learning what the shapes representing the problem look like is solving the problem.

If you’re interested in the substance of this, the universally-agreed reference is J Milnor’s 1963 text Morse Theory. I confess it’s hard going to read, because it’s a symbols-heavy textbook written before the existence of LaTeX. Each page reminds one why typesetters used to get hazard pay, and not enough of it.

The End 2016 Mathematics A To Z: Algebra


So let me start the End 2016 Mathematics A To Z with a word everybody figures they know. As will happen, everybody’s right and everybody’s wrong about that.

Algebra.

Everybody knows what algebra is. It’s the point where suddenly mathematics involves spelling. Instead of long division we’re on a never-ending search for ‘x’. Years later we pass along gifs of either someone saying “stop asking us to find your ex” or someone who’s circled the letter ‘x’ and written “there it is”. And make jokes about how we got through life without using algebra. And we know it’s the thing mathematicians are always doing.

Mathematicians aren’t always doing that. I expect the average mathematician would say she almost never does that. That’s a bit of a fib. We have a lot of work where we do stuff that would be recognizable as high school algebra. It’s just we don’t really care about that. We’re doing that because it’s how we get the problem we are interested in done. The most recent few pieces in my “Why Stuff can Orbit” series include a bunch of high school algebra-style work. But that was just because it was the easiest way to answer some calculus-inspired questions.

Still, “algebra” is a much-used word. It comes back around the second or third year of a mathematics major’s career. It comes in two forms in undergraduate life. One form is “linear algebra”, which is a great subject. That field’s about how stuff moves. You get to imagine space as this stretchy material. You can stretch it out. You can squash it down. You can stretch it in some directions and squash it in others. You can rotate it. These are simple things to build on. You can spend a whole career building on that. It becomes practical in surprising ways. For example, it’s the field of study behind finding equations that best match some complicated, messy real data.

The second form is “abstract algebra”, which comes in about the same time. This one is alien and baffling for a long while. It doesn’t help that the books all call it Introduction to Algebra or just Algebra and all your friends think you’re slumming. The mathematics major stumbles through confusing definitions and theorems that ought to sound comforting. (“Fermat’s Little Theorem”? That’s a good thing, right?) But the confusion passes, in time. There’s a beautiful subject here, one of my favorites. I’ve talked about it a lot.

We start with something that looks like the loosest cartoon of arithmetic. We get a bunch of things we can add together, and an ‘addition’ operation. This lets us do a lot of stuff that looks like addition modulo numbers. Then we go on to stuff that looks like picking up floor tiles and rotating them. Add in something that we call ‘multiplication’ and we get rings. This is a bit more like normal arithmetic. Add in some other stuff and we get ‘fields’ and other structures. We can keep falling back on arithmetic and on rotating tiles to build our intuition about what we’re doing. This trains mathematicians to look for particular patterns in new, abstract constructs.
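Here’s a sketch of that loosest cartoon, in Python: addition modulo 5, checked against the group axioms by brute force. Nothing deep, just the tiles laid out.

elements = range(5)
add = lambda a, b: (a + b) % 5   # 'addition' on the set {0, 1, 2, 3, 4}
assert all(add(a, b) in elements for a in elements for b in elements)  # closure
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a in elements for b in elements for c in elements)      # associative
assert all(add(a, 0) == a for a in elements)                           # identity
assert all(any(add(a, b) == 0 for b in elements) for a in elements)    # inverses
print('Z/5Z is a group under addition')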

Linear algebra is not an abstract-algebra sort of algebra. Sorry about that.

And there’s another kind of algebra that mathematicians talk about. At least once they get into grad school they do. There’s a huge family of these kinds of algebras. The family trait for them is that they share a particular rule about how you can multiply their elements together. I won’t get into that here. There are many kinds of these algebras. One that I keep trying to study on my own and crash hard against is Lie Algebra. That’s named for the Norwegian mathematician Sophus Lie. Pronounce it “lee”, as in “leaning”. You can understand quantum mechanics much better if you’re comfortable with Lie Algebras and so now you know one of my weaknesses. Another kind is the Clifford Algebra. This lets us create something called a “hypercomplex number”. It isn’t much like a complex number. Sorry. Clifford Algebra does lend itself to a construct called spinors. These help physicists understand the behavior of bosons and fermions. Every bit of matter seems to be either a boson or a fermion. So you see why this is something people might like to understand.

Boolean Algebra is the algebra of this type that a normal person is likely to have heard of. It’s about what we can build using two values and a few operations. Those values by tradition we call True and False, or 1 and 0. The operations we call things like ‘and’ and ‘or’ and ‘not’. It doesn’t sound like much. It gives us computational logic. Isn’t that amazing stuff?
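A sketch, using Python’s own True and False standing in for 1 and 0: the operations, and one of De Morgan’s laws checked over every combination. It isn’t computational logic yet, but it’s the raw material.

values = (False, True)
for a in values:
    for b in values:
        print(a, b, a and b, a or b, not a)
assert all((not (a and b)) == ((not a) or (not b))
           for a in values for b in values)   # De Morgan's law holds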

So if someone says “algebra” she might mean any of these. A normal person in a non-academic context probably means high school algebra. A mathematician speaking without further context probably means abstract algebra. If you hear something about “matrices” it’s more likely that she’s speaking of linear algebra. But abstract algebra can’t be ruled out yet. If you hear a word like “eigenvector” or “eigenvalue” or anything else starting “eigen” (or “characteristic”) she’s more probably speaking of linear algebra. And if there’s someone’s name before the word “algebra” then she’s probably speaking of the last of these. This is not a perfect guide. But it is the sort of context mathematicians expect other mathematicians to notice.

Reading the Comics, August 27, 2016: Calm Before The Term Edition


Here in the United States schools are just lurching back into the mode where they have students come in and do stuff all day. Perhaps this is why it was a routine week. Comic Strip Master Command wants to save up a bunch of story problems for us. But here’s what the last seven days sent into my attention.

Jeff Harris’s Shortcuts educational feature for the 21st is about algebra. It’s got a fair enough blend of historical trivia and definitions and examples and jokes. I don’t remember running across the “number cruncher” joke before.

Mark Anderson’s Andertoons for the 23rd is your typical student-in-lecture joke. But I do sympathize with students not understanding when a symbol gets used for different meanings. It throws everyone. But sometimes the things important to note clearly in one section are different from the needs in another section. No amount of warning will clear things up for everybody, but we try anyway.

Tom Thaves’s Frank and Ernest for the 23rd tells a joke about collapsing wave functions, which is why you never see this comic in a newspaper but always see it on a physics teacher’s door. This is properly physics, specifically quantum mechanics. But it has mathematical import. The most practical model of quantum mechanics describes what state a system is in by something called a wave function. And we can turn this wave function into a probability distribution, which describes how likely the system is to be in each of its possible states. “Collapsing” the wave function is a somewhat mysterious and controversial practice. It comes about because if we know nothing about a system then it may have one of many possible values. If we observe, say, the position of something though, then we have one possible value. The wave functions before and after the observation are different. We call it collapsing, reflecting how a universe of possibilities collapsed into a mere fact. But it’s hard to find an explanation for what that is that’s philosophically and physically satisfying. This problem leads us to Schrödinger’s Cat, and to other challenges to our sense of how the world could make sense. So, if you want to make your mark here’s a good problem for you. It’s not going to be easy.
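Here’s a sketch of the uncontroversial part, at least: a made-up wave function over three states, turned into a probability distribution, and then “collapsed” by a simulated observation. The amplitudes are arbitrary choices of mine.

import numpy as np

amplitudes = np.array([1 + 1j, 2, -1j])
amplitudes = amplitudes / np.linalg.norm(amplitudes)   # normalize
probabilities = np.abs(amplitudes) ** 2                # the Born rule
print(probabilities, probabilities.sum())              # sums to 1
observed = np.random.choice(len(amplitudes), p=probabilities)
collapsed = np.zeros_like(amplitudes)
collapsed[observed] = 1    # a universe of possibilities, now a mere fact
print(observed, collapsed)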

John Allison’s Bad Machinery for the 24th tosses off a panel full of mathematics symbols as proof of hard thinking. In other routine references John Deering’s Strange Brew for the 26th is just some talk about how hard fractions are.

While it’s outside the proper bounds of mathematics talk, Tom Toles’s Randolph Itch, 2 am for the 23rd is a delight. My favorite strip of this bunch. Should go on the syllabus.

Reading the Comics, August 12, 2016: Skipping Saturday Edition


I have no idea how many or how few comic strips on Saturday included some mathematical content. I was away most of the day. We made a quick trip to the Michigan’s Adventure amusement park and then to play pinball in a kind-of competitive league. The park turned out to have every person in the world there. If I didn’t wave to you from the queue on Shivering Timbers I apologize but it hasn’t got the greatest lines of sight. The pinball stuff took longer than I expected too and, long story short, we got back home about 4:15 am. So I’m behind on my comics and here’s what I did get to.

Tak Bui’s PC and Pixel for the 8th depicts the classic horror of the cleaning people wiping away an enormous amount of hard work. It’s a primal fear among mathematicians at least. Boards with a space blocked off with the “DO NOT ERASE” warning are common. At this point, though, at least, the work is probably savable. You can almost always reconstruct work, and a few smeared lines like this are not bad at all.

The work appears to be quantum mechanics work. The tell is in the upper right corner. There’s a line defining E (energy) as equal to something including \imath \hbar \frac{\partial}{\partial t}\phi(r, t) . This appears in the time-dependent Schrödinger Equation. It describes how probability waveforms look when the potential energies involved may change in time. These equations are interesting and, except for special cases, impossible to solve exactly. We have to resort to approximations, including numerical approximations, all the time. So that’s why the computer lab would be working on this.
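Here’s a sketch of the sort of thing that computer lab might run: one explicit time step of the time-dependent Schrödinger equation on a periodic grid, with \hbar and the mass set to 1 and everything else made up. A real solver would use a stabler scheme, Crank-Nicolson or the like, but the shape of the calculation is the same.

import numpy as np

n, dx, dt = 200, 0.1, 0.0005
x = (np.arange(n) - n // 2) * dx
psi = np.exp(-x ** 2) * np.exp(2j * x)   # a moving Gaussian wave packet
V = 0.5 * x ** 2                         # a potential; it could change in time
laplacian = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx ** 2
dpsi_dt = 1j * (0.5 * laplacian - V * psi)   # i dpsi/dt = -(1/2)psi'' + V psi
psi = psi + dt * dpsi_dt                     # one crude Euler step
print(np.sum(np.abs(psi) ** 2) * dx)         # the norm should barely budge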

Mark Anderson’s Andertoons! Where would I be without them? Besides short on content. The strip for the 10th depicts a pollster saying to “put the margin of error at 50%”, guaranteeing the results are right. If you follow elections polls you do see the results come with a margin of error, usually of about three percent. But every sampling technique carries with it a margin of error. The point of a sample is to learn something about the whole without testing everything in it, after all. And probability describes how likely it is the quantity measured by a sample will be far from the quantity the whole would have. The logic behind this is independent of the thing being sampled. It depends on what the whole is like. It depends on how the sampling is done. It doesn’t matter whether you’re sampling voter preferences or whether there are the right number of peanuts in a bag of squirrel food.

So a sample’s measurement will almost never be exactly the same as the whole population’s. That’s just requesting too much of luck. The margin of error represents how far it is likely we’re off. If we’ve sampled the voting population fairly — the hardest part — then it’s quite reasonable the actual vote tally would be, say, one percent different from our poll. It’s implausible that the actual votes would be ninety percent different. The margin of error is roughly the biggest plausible difference we would expect to see.

Except. Sometimes we do, even with the best sampling methods possible, get a freak case. Rarely noticed beside the margin of error is the confidence level. This is what the probability is that the actual population value is within the sampling error of the sample’s value. We don’t pay much attention to this because we don’t do statistical-sampling on a daily basis. The most normal people do is read election polling results. And most election polls settle for a confidence level of about 95 percent. That is, 95 percent of the time the actual voting preference will be within the three or so percentage points of the survey. The 95 percent confidence level is popular maybe because it feels like a nice round number. It’ll be off only about one time out of twenty. It also makes a nice balance between a margin of error that doesn’t seem too large and that doesn’t need too many people to be surveyed. As often with statistics the common standard is an imperfectly-logical blend of good work and ease of use.
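The arithmetic behind those three-or-so percentage points is short enough to show. A sketch, with made-up poll numbers, using the normal approximation that underlies the usual margin-of-error formula:

import math

n = 1000    # people surveyed, assumed
p = 0.52    # fraction favoring a candidate, assumed
z = 1.96    # z-score for 95 percent confidence
margin = z * math.sqrt(p * (1 - p) / n)
print(f"{p:.0%} plus or minus {margin:.1%}")   # about 52% +/- 3.1%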

For the 11th Mark Anderson gives me less to talk about, but a cute bit of wordplay. I’ll take it.

Anthony Blades’s Bewley for the 12th is a rerun. It’s at least the third time this strip has turned up since I started writing these Reading The Comics posts. For the record it ran also the 27th of April, 2015 and on the 24th of May, 2013. It also suggests mathematicians have a particular tell. Try this out next time you do word problem poker and let me know how it works for you.

Julie Larson’s The Dinette Set for the 12th I would have sworn I’d seen here before. I don’t find it in my archives, though. We are meant to just giggle at Larson’s characters who bring their penny-wise pound-foolishness to everything. But there is a decent practical mathematics problem here. (This is why I thought it had run here before.) How far is it worth going out of one’s way for cheaper gas? How much cheaper? It’s simple algebra, and I’d bet there are many simple Javascript calculator tools for it. The comic strip originally ran the 4th of October, 2005. Possibly it’s been rerun since.
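The problem fits in a few lines, if you spot it some made-up numbers. A sketch: compare the savings on a tankful against the cost of the extra driving.

tank_gallons = 12
price_near, price_far = 3.00, 2.90   # dollars per gallon, assumed
extra_miles = 8                      # round trip out of the way, assumed
mpg = 25
savings = tank_gallons * (price_near - price_far)
extra_cost = (extra_miles / mpg) * price_far
print(savings, extra_cost, savings > extra_cost)   # 1.20 vs about 0.93: barely worth it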

Bill Amend’s FoxTrot Classics for the 12th is a bunch of gags about a mathematics fighting game. I think Amend might be on to something here. I assume mathematics-education contest games have evolved from what I went to elementary school on. That was a Commodore PET with a game where every time you got a multiplication problem right your rocket got closer to the ASCII Moon. But the game would probably quickly turn into people figuring how to multiply the other person’s function by zero. I know a game exploit when I see it.

The most obscure reference is in the third panel. Jason speaks of “a z = 0 transform”. This would seem to be some kind of z-transform, a thing from digital signals processing. You can represent the amplification, or noise-removal, or averaging, or other processing of a string of digits as a polynomial. Of course you can. Everything is polynomials. (OK, sometimes you must use something that looks like a polynomial but includes stuff like the variable z raised to a negative power. Don’t let that throw you. You treat it like a polynomial still.) So I get what Jason is going for here; he’s processing Peter’s function down to zero.
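A sketch of the real thing, since the strip’s version doesn’t exist: the z-transform of a short made-up signal, which is nothing but a polynomial in 1/z with the samples as coefficients.

signal = [1.0, 0.5, 0.25, 0.125]   # an arbitrary string of digits

def z_transform(samples, z):
    # X(z) = sum over n of x[n] * z^(-n)
    return sum(xn * z ** (-n) for n, xn in enumerate(samples))

print(z_transform(signal, 2))   # 1.328125
# Filtering the signal amounts to multiplying by another such polynomial.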

That said, let me warn you that I don’t do digital signal processing. I just taught a course in it. (It’s a great way to learn a subject.) But I don’t think a “z = 0 transform” is anything. Maybe Amend encountered it as an instructor’s or friend’s idiosyncratic usage. (Amend was a physics student in college, and shows his comfort with mathematics-major talk often. He by the way isn’t even the only syndicated cartoonist with a physics degree. Bud Grace of The Piranha Club was also a physics major.) I suppose he figured “z = 0 transform” would read clearly to the non-mathematician and be interpretable to the mathematician. He’s right about that.

A Leap Day 2016 Mathematics A To Z: Yukawa Potential


Yeah, ‘Y’ is a lousy letter in the Mathematics Glossary. I have a half-dozen mathematics books on the shelf by my computer. Some is semi-popular stuff like Richard Courant and Herbert Robbins’s What Is Mathematics? (the Ian Stewart revision). Some is fairly technical stuff, by which I mean Hidetoshi Nishimori’s Statistical Physics of Spin Glasses and Information Processing. There are just no ‘Y’ terms in any of them worth anything. But I can rope something into the field. For example …

Yukawa Potential

When you as a physics undergraduate first take mechanics it’s mostly about very simple objects doing things according to one rule. The objects are usually these indivisible chunks. They’re either perfectly solid or they’re points, too tiny to have a surface area or volume that might mess things up. We draw them as circles or as blocks because they’re too hard to see on the paper or board otherwise. We spend a little time describing how they fall in a room. This lends itself to demonstrations in which the instructor drops a rubber ball. Then we go on to a mass on a spring hanging from the ceiling. Then to a mass on a spring hanging from another mass.

Then we go on to two things sliding on a surface and colliding, which would really lend itself to bouncing pool balls against one another. Instead we use smaller solid balls. Sometimes those “Newton’s Cradle” things with the five balls that dangle from wires and just barely touch each other. They give a good reason to start talking about vectors. I mean positional vectors, the ones that say “stuff moving this much in this direction”. Ordinary vectors, that is. Then we get into stars and planets and moons attracting each other by gravity. And then we get into the stuff that really needs calculus. The earlier stuff is helped by it, yes. It’s just by this point we can’t do without.

The “things colliding” and “balls dropped in a room” are the odd cases in this. Most of the interesting stuff in an introduction to mechanics course is about things attracting, or repelling, other things. And, particularly, they’re particles that interact by “central forces”. Their attraction or repulsion is along the line that connects the two particles. (Impossible for a force to do otherwise? Just wait until Intro to Mechanics II, when magnetism gets in the game. After that, somewhere in a fluid dynamics course, you’ll see how a vortex interacts with another vortex.) The potential energies for these all vary with distance between the points.

Yeah, they also depend on the mass, or charge, or some kind of strength-constant for the points. They also depend on some universal constant for the strength of the interacting force. But those are, well, constant. If you move the particles closer together or farther apart the potential changes just by how much you moved them, nothing else.

Particles hooked together by a spring have a potential that looks like \frac{1}{2}k r^2 . Here ‘r’ is how far the particles are from each other. ‘k’ is the spring constant; it’s just how strong the spring is. The one-half makes some other stuff neater. It doesn’t do anything much for us here. A particle attracted by another gravitationally has a potential that looks like -G M \frac{1}{r} . Again ‘r’ is how far the particles are from each other. ‘G’ is the gravitational constant of the universe. ‘M’ is the mass of the other particle. (The particle’s own mass doesn’t enter into it.) The electric potential looks like the gravitational potential but we have different symbols for stuff besides the \frac{1}{r} bit.

The spring potential and the gravitational/electric potential have an interesting property. You can have “closed orbits” with a pair of them. You can set a particle orbiting another and, with time, get back to exactly the original positions and velocities. (Three or more particles you’re not guaranteed of anything.) The curious thing is this doesn’t always happen for potentials that look like “something or other times r to a power”. In fact, it never happens, except for the spring potential, the gravitational/electric potential, and — peculiarly — for the potential k r^7 . ‘k’ doesn’t mean anything there, and we don’t put a one-seventh or anything out front for convenience, because nobody knows anything that needs anything like that, ever. We can have stable orbits, ones that stay within a minimum and a maximum radius, for a potential k r^n whenever n is larger than -2, at least. And that’s it, for potentials that are nothing but r-to-a-power.
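That list of powers looks less arbitrary if you compute the apsidal angle, the turn between closest and farthest approach for a nearly circular orbit. For a potential k r^n it’s \pi / \sqrt{n + 2} , a result I’m pulling from standard mechanics texts rather than deriving here. The near-circular orbit closes when that angle divides evenly into a full turn.

import math

# The apsidal angle for nearly circular orbits in a k r^n potential.
for n in (-1, 2, 7):
    print(n, math.pi / math.sqrt(n + 2))   # pi, pi/2, pi/3: all close up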

Ah, but does the potential have to be r-to-a-power? And here we see Dr Hideki Yukawa’s potential energy. Like these springs and gravitational/electric potentials, it varies only with the distance between particles. Its strength isn’t just the radius to a power, though. It uses a more complicated expression:

-K \frac{e^{-br}}{r}

Here ‘K’ is a scaling constant for the strength of the whole force. It’s the kind of thing we have ‘G M’ for in the gravitational potential, or ‘k’ in the spring potential. The ‘b’ is a second kind of scaling. And that’s a kind of range. A range of what? It’ll help to look at this potential rewritten a little. It’s the same as -\left(K \frac{1}{r}\right) \cdot \left(e^{-br}\right) . That’s the gravitational/electric potential, times e^{-br} . That’s a number near 1 when r is small, but it drops to zero surprisingly quickly as r gets larger. How quickly depends on b. The larger a number b is, the faster this drops to zero. The smaller a number b is, the slower this drops to zero. And if b is equal to zero, then e^{-br} is equal to 1, and we have the gravitational/electric potential all over again.
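A sketch of that range business, with made-up constants: tabulate the Yukawa potential for a few values of b and watch the b = 0 column match -K/r exactly, while larger b chokes the potential off faster.

import math

K = 1.0   # overall strength, an assumed value

def yukawa(r, b):
    return -K * math.exp(-b * r) / r

for r in (0.5, 1.0, 2.0, 4.0):
    print(r, yukawa(r, 0.0), yukawa(r, 1.0), yukawa(r, 3.0))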

Yukawa introduced this potential to physics in the 1930s. He was trying to model the forces which keep an atom’s nucleus together. It represents the potential we expect from particles that attract one another by exchanging some particles with a rest mass. This rest mass is hidden within that number ‘b’ there. If the rest mass is zero, the particles are exchanging something like light, and that’s just what we expect for the electric potential. For the gravitational potential … um. It’s complicated. It’s one of the reasons why we expect that gravitons, if they exist, have zero rest mass. But we don’t know that gravitons exist. We have a lot of trouble making theoretical gravitons and quantum mechanics work together. I’d rather be skeptical of the things until we need them.

Still, the Yukawa potential is an interesting mathematical creature even if we ignore its important role in modern physics. When I took my Introduction to Mechanics final one of the exam problems was deriving the equivalent of Kepler’s Laws of Motion for the Yukawa Potential. I thought then it was a brilliant problem. I still do. It struck me while writing this that I don’t remember whether it allows for closed orbits, except when b is zero. I’m a bit afraid to try to work out whether it does, lest I learn that I can’t follow the reasoning for that anymore. That would be a terrible thing to learn.

More Things To Read


My long streak of posting something every day will end. There’s just no keeping up mathematics content like this indefinitely, not with my stamina. But it’s not over just quite yet. I wanted to share some stuff that people had brought to my attention and that’s just too interesting to pass up.

The first comes from … I’m not really sure. I lost my note about wherever it did come from. It’s from the Continuous Everywhere But Differentiable Nowhere blog. It’s about teaching the Crossed Chord Theorem. It’s one I had forgotten about, if I heard it in the first place. The result is one of those small, neat things and it’s fun to work through how and why it might be true.
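The theorem is quick to check numerically, if not to prove. A sketch: pick an interior point of the unit circle, run two chords through it at arbitrary angles, and compare the products of the segment lengths.

import math

P = (0.3, 0.2)   # an interior point, chosen arbitrarily

def chord_segments(angle):
    # The chord through P with direction (cos angle, sin angle) meets the
    # circle where |P + t*d| = 1; solve the quadratic in t for both hits.
    dx, dy = math.cos(angle), math.sin(angle)
    b = 2 * (P[0] * dx + P[1] * dy)
    c = P[0] ** 2 + P[1] ** 2 - 1
    disc = math.sqrt(b * b - 4 * c)
    return abs((-b + disc) / 2), abs((-b - disc) / 2)

a1, a2 = chord_segments(0.7)
b1, b2 = chord_segments(2.1)
print(a1 * a2, b1 * b2)   # equal up to rounding: 1 - |P|^2 either way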

Next comes from a comment by Gerry on a rather old article, “What’s The Worst Way To Pack?” Gerry located a conversation on MathOverflow.net that’s about finding low-density packings of discs, circles, on the plane. As these sorts of discussions go, it gets into some questions about just what we mean by packings, and whether Wikipedia has typos. This is normal for discovering new mathematics. We have to spend time pinning down just what we mean to talk about. Then we can maybe figure out what we’re saying.

And the last I picked up from Elke Stangl, of what’s now known as the elkemental Force blog. She had pointed me first to lecture notes from Dr Scott Aaronson which try to explain quantum mechanics from starting principles. Normally, almost invariably, they’re taught in historical sequence. Aaronson here skips all the history to look at what mathematical structures make quantum mechanics make sense. It’s not for casual readers, I’m afraid. It assumes you’re comfortable with things like linear transformations and p-norms. But if you are, then it’s a great overview. I figure to read it over several more times myself.

Those notes are from a class in Quantum Computing. I haven’t had nearly the time to read them all. But the second lecture in the series is on Set Theory. That’s not quite friendly to a lay audience, but it is friendlier, at least.

Reading the Comics, April 5, 2016: April 5, 2016 Edition


I’ve mentioned I like to have five or six comic strips for a Reading The Comics entry. On the 5th, it happens, I got a set of five all at once. Perhaps some are marginal for mathematics content but since when does that stop me? Especially when there’s the fun of a single-day Reading The Comics post to consider. So here goes:

Mark Anderson’s Andertoons is a student-resisting-the-problem joke. And it’s about long division. I can’t blame the student for resisting. Long division’s hard to learn. It’s probably the first bit of arithmetic in which you just have to make an educated guess for an answer and face possibly being wrong. And this is a problem that’ll have a remainder in it. I think I remember early on in long division finding a remainder left over feeling like an accusation. Surely if I’d done it right, the divisor would go into the original number a whole number of times, right? No, but you have to warm up to being comfortable with that.
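Python’s divmod hands back exactly what long division produces, remainder and all. A sketch with made-up numbers:

dividend, divisor = 947, 7
quotient, remainder = divmod(dividend, divisor)
print(quotient, remainder)   # 135 and 2, no accusation intended
assert divisor * quotient + remainder == dividend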

'This problem has me stumped, Hazel.' 'Think of it this way. Pac-Man gobbles up four monsters. Along come six space demons ... '
Ted Key’s Hazel rerun the 5th of April, 2016. Were the Pac-Man ghosts ever called space demons? It seems like they might’ve been described that way in some boring official manual that nobody ever read. I always heard them as “ghosts” anyway.

Ted Key’s Hazel feels less charmingly out-of-date when you remember these are reruns. Ted Key — who created Peabody’s Improbable History as well as the sitcom based on this comic panel — retired in 1993. So Hazel’s attempt to create a less abstract version of the mathematics problem for Harold is probably relatively time-appropriate. And recasting a problem as something less abstract is often a good way to find a solution. It’s all right to do side work as a way to get the work you want to do.

John McNamee’s Pie Comic is a joke about the uselessness of mathematics. Tch. I wonder if the problem here isn’t the abstractness of a word like “hypotenuse”. I grant the word doesn’t evoke anything besides “hypotenuse”. But one irony is that hypotenuses are extremely useful things. We can use them to calculate how far away things are, without the trouble of going out to the spot. We can imagine post-apocalyptic warlords wanting to know how far things are, so as to better aim the trebuchets.
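A sketch of the warlord’s range-finding, with assumed distances: pace off two legs you can walk, and the hypotenuse gives the distance you can’t.

import math

east, north = 300.0, 400.0       # paces along each measurable leg
print(math.hypot(east, north))   # 500.0 paces to the target, no walking needed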

Percy Crosby’s Skippy is a rerun from 1928, of course. It’s also only marginally on point here. The mention of arithmetic is irrelevant to the joke. But it’s a fine joke and I wanted people to read it. Longtime readers know I’m a Skippy fan. (Saturday’s strip follows up on this. It’s worth reading too.)

Zippy knows just enough about quantum mechanics to engender a smug attitude. 'BLACK HOLES don't exist! Actually, they're MAUVE!' He sees 14 dimensions inside a vitamin D pill. 'Wait! I just saw 3 more! I'll call them Larry, Moe, and Curly!' He holds sway on street corners even when no one is listening. 'GRAVITY is like a sheet of saran wrap! And we are DRUMSTICKS, encased in a riddle!' Does Zippy actually believe what he says when in rant mode? 'God doesn't play dice with the universe! He plays PARCHEESI!'
Bill Griffith’s Zippy the Pinhead for the 5th of April, 2016. Not what Neil DeGrasse Tyson is like when he’s not been getting enough sleep.

Bill Griffith’s Zippy the Pinhead has picked up some quantum mechanics talk. At least he’s throwing around the sorts of things we see in pop science and, er, pop mathematical talk about the mathematics of cutting-edge physics. I’m not aware of any current models of everything which suppose there to be fourteen, or seventeen, dimensions of space. But high-dimension spaces are common points of speculation. Most of those dimensions appear to be arranged in ways we don’t see in the everyday world, but which leave behind mathematical traces. The crack about God not playing dice with the universe is famously attributed to Albert Einstein. Einstein was not comfortable with the non-deterministic nature of quantum mechanics, that there is this essential randomness to this model of the world.

Reading the Comics, January 27, 2015: Rabbit In A Trapezoid Edition


So the reason I fell behind on this Reading the Comics post is that I spent more time than I should have dithering about which ones to include. I hope it’s not disillusioning to learn that I have no clearly defined rules about what comics to include and what to leave out. It depends on how clearly mathematical in content the comic strip is; but it also depends on how much stuff I have gathered. If there’s a slow week, I start getting more generous about what I might include. And last week gave me a string of comics that I could argue my way into including, but few that obviously belonged. So I had a lot of time dithering.

To make it up to you, at the end of the post I should have our pet rabbit tucked within a trapezoid of his own construction. If that doesn’t make everything better I don’t know what will.

Mark Pett’s Mr Lowe for the 22nd of January (a rerun from the 19th of January, 2001) is really a standardized-test-question joke. But it brings up a debate about cultural biases in standardized tests that I don’t remember hearing lately. I may just be moving in the wrong circles. I remember self-assured rich white males explaining how it’s absurd to think cultural bias could affect test results since, after all, they’re standardized tests. I’ve sulked some around these parts about how I don’t buy mathematics’ self-promoted image of being culturally neutral either. A mathematical truth may be universal, but that we care about this truth is not. Anyway, Pett uses a mathematics word problem to tell the joke. That was probably the easiest way to put a cultural bias into a panel that size.

T Lewis and Michael Fry’s Over The Hedge for the 25th of January uses a bit of calculus to represent “a lot of hard thinking”. Hammy the Squirrel particularly is thinking of the Fundamental Theorem of Calculus. This particular part is the one that says the derivative of the integral of a function is the original function. It’s part of how integration and differentiation link together. And it shows part of calculus’s great appeal. It has those beautiful long-s integral signs that make this part of mathematics look like artwork.
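Hammy’s theorem is easy to watch in action with sympy. A sketch, with an arbitrary function standing in for whatever he’s pondering:

import sympy as sp

x, t = sp.symbols('x t')
f = sp.cos(t) * sp.exp(t)          # any reasonably nice function will do
F = sp.integrate(f, (t, 0, x))     # the integral from 0 to x
print(sp.simplify(sp.diff(F, x) - f.subs(t, x)))   # 0: the original comes back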

Leigh Rubin’s Rubes for the 25th of January is a panel showing “Schrödinger’s Job Application”. It’s referring to Schrödinger’s famous thought experiment, meant to show there are things we don’t understand about quantum mechanics. It describes a way a quantum phenomenon can be arranged to have distinct results in the everyday environment. The mathematics suggests that a cat, poisoned or not by toxic gas released or not by the decay of one atom, would be both alive and dead until some outside observer checks and settles the matter. How can this be? For that matter, how can the cat not be a qualified judge of whether it’s alive? Well, there are things we don’t understand about quantum mechanics.

Roy Schneider’s The Humble Stumble for the 26th of January (a rerun from the 30th of January, 2007) uses a bit of mathematics to mark Tommy, there, as a frighteningly brilliant weirdo. The equation is right, although trivial. The force it takes to keep something with a mass m moving in a circle of radius R at the linear speed v is \frac{m v^2}{R} . The radius of the Moon’s orbit around the Earth is strikingly close to sixty times the Earth’s radius. The Ancient Greeks were able to argue that from some brilliantly considered geometry. Here, R_e gets used as a name for “the radius of the Earth”. So the force holding the Moon in its orbit has to be approximately \frac{m v^2}{60 R_e} . That’s if we say m is the mass of the Moon, and v is its linear speed, and if we suppose the Moon’s orbit is a circle. It nearly is, and this would give us a good approximate answer to how much force holds the Moon in its orbit. It would be only a start, though; the precise movements of the Moon are surprisingly complicated. Newton himself could not fully explain them, even with the calculus and physics tools he invented for the task.
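This is, incidentally, a check Newton himself made, and it fits in a few lines. A sketch with rounded-off figures: the Moon’s centripetal acceleration at sixty Earth radii, against surface gravity diluted by sixty squared.

import math

R_e = 6.371e6             # Earth's radius, meters
R = 60 * R_e              # the Moon's orbital radius, roughly
T = 27.32 * 24 * 3600     # sidereal month, seconds
v = 2 * math.pi * R / T   # linear speed on a circular orbit
print(v ** 2 / R)         # about 0.0027 m/s^2
print(9.81 / 60 ** 2)     # about the same, as Newton was pleased to find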

Dave Whamond’s Reality Check for the 26th of January isn’t quite the anthropomorphic-numerals joke for this essay. But we do get personified geometric constructs, which is close, and some silly wordplay. Much as I like the art for Over The Hedge showcasing a squirrel so burdened with thoughts that his head flops over, this might be my favorite of this bunch.

Dave Blazek’s Loose Parts for the 27th of January is a runner-up for the silly jokes trophy this time around.

Our pet rabbit flopped out inside of a cardboard box. The box was set up, upside-down, so he could go inside and chew on the contents. He's pulled the side flaps inward, so that the base is a trapezoidal prism.
Cardboard boxes are normally pretty good environments for rabbits, given that they’re places the rabbits can go in and not be seen. We set the box up, but he did all the chewing.

Now I know what you’re thinking: isn’t that actually a trapezoidal prism, underneath a rectangular prism? Yes, I suppose so. The only people who’re going to say so are trying to impress people by saying so, though. And those people won’t be impressed by it. I’m sorry. We gave him the box because rabbits generally like having cardboard boxes to go in and chew apart. He did on his own the pulling-in of the side flaps to make it stand so trapezoidal.

Reading the Comics, November 18, 2015: All Caught Up Edition


Yes, I feel a bit bad that I didn’t have anything posted yesterday. I’d had a nice every-other-day streak going for a couple weeks there. But I had honestly expected more mathematically themed comic strips, and there just weren’t enough in my box by the end of the 17th. So I didn’t have anything to schedule for a post the 18th. The 18th came through, though, and now I’ve got enough to talk about. And that before I get to reading today’s comics. So, please, enjoy.

Scott Adams’s Dilbert Classics for the 16th of November (originally published the 21st of September, 1992) features Dilbert discovering Bell’s Theorem. Bell’s Theorem is an important piece of our understanding of quantum mechanics. It’s a theorem that excites people who first hear about it. It implies quantum mechanics can’t explain reality unless it can allow information to be transmitted between interacting particles faster than light. And quantum mechanics does explain reality. The thing is, and the thing that casual readers don’t understand, is that there’s no way to use this to send a signal. Imagine that I took two cards, one an ace and one an eight, seal them in envelopes, and gave them to astronauts. The astronauts each travel to ten light-years away from me in opposite directions. (They took extreme offense at something I said and didn’t like one another anyway.) Then one of them opens her envelope, finding that she’s got the eight. Then instantly, even though they’re twenty light-years apart, she knows the other astronaut has an ace in her envelope. But there is no way the astronauts can use this to send information to one another, which is what people want Bell’s Theorem to tell us. (My example is not legitimate quantum mechanics and do not try to use it to pass your thesis defense. It just shows why Bell’s Theorem does not give us a way to send information we care about faster than light.) The next day Dilbert’s Garbageman, the Smartest Man in the World, mentions Dilbert’s added something to Bell’s Theorem. It’s the same thing everybody figuring they can use quantum entanglement to communicate adds to the idea.

Tom Thaves’ Frank and Ernest for the 16th of November riffs on the idea of a lottery as a “tax on people who are bad at math”. Longtime readers here know that I have mixed feelings about that, and not just because I’m wary of cliché. If the jackpot is high enough, you can reach the point where the expectation value of the prize is positive. That is, you would expect to make money if you played the game under the same conditions often enough. But that chance is still vanishingly small. Even playing a million times would not make it likely you would earn more money than you spent. I’m not dogmatic enough to say what your decision should be, at least if the prize is big enough. (And that’s not considering the value placed on the fun of playing. One may complain that it shouldn’t be any fun to buy a soon-to-be-worthless ticket. But many people do enjoy it and I can’t bring myself to say they’re all wrong about feeling enjoyment.)
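Here’s a sketch of that arithmetic. The odds, jackpot, and ticket price are round numbers I made up for illustration, chosen so the expectation value comes out positive:

```python
# Expectation value for a hypothetical lottery, with invented numbers.
p = 1 / 300_000_000       # chance one ticket hits the jackpot
jackpot = 900_000_000     # dollars; big enough to make the expectation positive
cost = 2                  # dollars per ticket

expected = p * jackpot - cost
print(f"expected value per ticket: ${expected:+.2f}")   # comes out positive

# And yet even a million tickets almost surely win nothing.
plays = 1_000_000
print(f"chance of no jackpot in {plays:,} plays: {(1 - p) ** plays:.4f}")
```

The expectation is a dollar per ticket in your favor, and still more than 99.6 percent of million-ticket players would walk away having won nothing.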

And it happens that on the 18th Brant Parker and Johnny Hart’s Wizard of Id Classics (originally run the 20th of November, 1965) did a lottery joke. That one is about a lottery one shouldn’t play, except that the King keeps track of who refuses to buy a ticket. I know when we’re in a genre.

Peter Mann’s The Quixote Syndrome for the 16th of November explores something I had never known but that at least the web seems to think is true. Apparently in 1958 Samuel Beckett knew the 12-year-old André Roussimoff. People of my age cohort will only have an idea who that is when they hear that Roussimoff grew up to become pro wrestling star André the Giant. And Beckett drove the kid to school. Mann — taking, I think, a break from his usual adaptations of classic literature — speculates on what they might have talked about. His guess: Beckett attempting to ease one of his fears through careful study and mathematical treatment. The problem is goofily funny. But the treatment is the sort of mathematics everyone understands needing and understands using.

John Deering’s Strange Brew for the 17th of November tells a rounding up joke. Scott Hilburn’s The Argyle Sweater told it back in August. I suspect the joke is just in the air. Most jokes were formed between 1922 and 1978 anyway, and we’re just shuffling around the remains of that fruitful era.

Tony Cochrane’s Agnes for the 18th of November tells a resisting-the-word-problem joke. I admit expecting better from Cochrane. But casting arithmetic problems into word problems is fraught with peril. It isn’t enough to avoid obsolete references. (If we accept trains as obsolete. I’m from the United States Northeast, where subways and even commuter trains are viable things.) The problem also has to ask something the problem-solver can imagine wanting to know. It may not matter whether the question asks how far apart two trains, two cars, or two airplanes are, if the student can’t see their distance as anything but trivia. We may need better practice in writing stories if we’re to write story problems.

Reading the Comics, November 1, 2015: Uncertainty and TV Schedules Edition


Brian Fies’s Mom’s Cancer is a heartbreaking story. It’s compelling reading, but people who are emotionally raw from lost loved ones, or who know they’re particularly sensitive to such stories, should consider before reading that the comic is about exactly what the title says.

But it belongs here because the October 29th and the November 2nd installments are about a curiosity of area, and volume, and hypervolume, and more. That is that our perception of how big a thing is tends to be governed by one dimension, the length or the diameter of the thing. But its area is the square of that, its volume the cube of that, its hypervolume some higher power yet of that. So very slight changes in the diameter produce great changes in the volume. Conversely, though, great changes in volume will look like only slight changes. This can hurt.
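A two-line sketch makes the point, with numbers I’ve chosen just for illustration:

```python
import math

# How a small change in diameter becomes a big change in volume.
def sphere_volume(d):
    return math.pi * d**3 / 6

d1, d2 = 2.0, 2.2   # a ten percent increase in diameter
print(f"{sphere_volume(d2) / sphere_volume(d1):.2f}")   # about 1.33
```

A ten percent change in the diameter, something the eye might miss, is a third again as much volume.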

Tom Toles’s Randolph Itch, 2 am from the 29th of October is a Roman numerals joke. I include it as comic relief. The clock face in the strip does depict 4 as IV. That’s eccentric but not unknown for clock faces; IIII seems to be more common. There’s not a clear reason why this should be. The explanation I find most nearly convincing is an aesthetic one. Roman numerals are flexible things, and can be arranged for artistic virtue in ways that Arabic numerals make impossible.

The aesthetic argument is that the four-character symbol IIII takes up nearly as much horizontal space as the VIII opposite it. The two-character IV would look distractingly skinny. Now, none of the symbols takes up exactly the same space as their counterpart. X is shorter than II, VII longer than V. But IV-versus-VIII does seem like the biggest discrepancy. Still, Toles’s art shows it wouldn’t look all that weird. And he had to conserve line strokes, so that the clock would read cleanly in newsprint. I imagine he also wanted to avoid using different representations of “4” so close together.

Jon Rosenberg’s Scenes From A Multiverse for the 29th of October is a riff on both quantum mechanics — Schrödinger’s Cat in a box — and the uncertainty principle. The uncertainty principle can be expressed as a fascinating mathematical construct. It starts with Ψ, a probability function that has spacetime as its domain, and the complex-valued numbers as its range. By applying a function to this function we can derive yet another function. This function-of-a-function we call an operator, because we’re saying “function” so much it’s starting to sound funny. But this new function, the one we get by applying an operator to Ψ, tells us the probability that the thing described is in this place versus that place. Or that it has this speed rather than that speed. Or this angular momentum — the tendency to keep spinning — versus that angular momentum. And so on.

If we apply an operator — let me call it A — to the function Ψ, we get a new function. What happens if we apply another operator — let me call it B — to this new function? Well, we get a second new function. It’s much the way if we take a number, and multiply it by another number, and then multiply it again by yet another number. Of course we get a new number out of it. What would you expect? This operators-on-functions things looks and acts in many ways like multiplication. We even use symbols that look like multiplication: AΨ is operator A applied to function Ψ, and BAΨ is operator B applied to the function AΨ.

Now here is the thing we don’t expect. What if we applied operator B to Ψ first, and then operator A to the product? That is, what if we worked out ABΨ? If this was ordinary multiplication, then, nothing all that interesting. Changing the order of the real numbers we multiply together doesn’t change what the product is.

Operators are stranger creatures than real numbers are. It can be that BAΨ is not the same function as ABΨ. We say this means the operators A and B do not commute. But it can be that BAΨ is exactly the same function as ABΨ. When this happens we say that A and B do commute.

Whether they do or they don’t commute depends on the operators. When we know what the operators are we can say whether they commute. We don’t have to try them out on some functions and see what happens, although that sometimes is the easiest way to double-check your work. And here is where we get the uncertainty principle from.

The operator that lets us learn the probability of particles’ positions does not commute with the operator that lets us learn the probability of particles’ momentums. We get different answers if we measure a particle’s position and then its velocity than we do if we measure its velocity and then its position. (Velocity is not the same thing as momentum. But they are related. There’s nothing you can say about momentum in this context that you can’t say about velocity.)
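Here’s a minimal numerical sketch of that failure to commute, using matrix stand-ins of my own devising for the two operators (NumPy assumed):

```python
import numpy as np

# Matrix stand-ins, on a sampled grid, for the position operator
# ("multiply by x") and the derivative that powers the momentum operator.
n = 200
x = np.linspace(-5, 5, n)
dx = x[1] - x[0]

X = np.diag(x)                                   # psi(x) -> x * psi(x)
D = (np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / (2 * dx)   # centered first derivative

psi = np.exp(-x**2)          # a smooth test function
order_one = X @ (D @ psi)    # differentiate first, then multiply by x
order_two = D @ (X @ psi)    # multiply by x first, then differentiate

# The two orders disagree by psi itself, the discrete echo of the
# operator identity (x d/dx - d/dx x) psi = -psi.
residual = (order_one - order_two) + psi
print(np.abs(residual[10:-10]).max())   # near zero, away from grid edges
```

The difference between the two orders is never zero; it’s the function itself, which is why no cleverness will make these two operators commute.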

The uncertainty principle is a great source for humor, and for science fiction. It seems to allow for all kinds of magic. Its reality is no less amazing, though. For example, it implies that it is impossible for an electron to spiral down into the nucleus of an atom, collapsing atoms the way satellites eventually fall to Earth. Matter can exist, in ways that let us have solid objects and chemistry and biology. This is at least as good as a cat being perhaps boxed.

Jan Eliot’s Stone Soup Classics for the 29th of October is a rerun from 1995. (The strip itself has gone to Sunday-only publication.) It’s a joke about how arithmetic is easy when you have the proper motivation. In 1995 that would include catching TV shows at a particular time. You see, in 1995 it was possible to record and watch TV shows when you wanted, but it required coordinating multiple pieces of electronics. It would often be easier to just watch when the show actually aired. Today we have it much better. You can watch anything you want anytime you want, using any piece of consumer electronics you have within reach, including several current models of microwave ovens and programmable thermostats. This does, sadly, remove one motivation for doing arithmetic. Also, I’m not certain the kids’ TV schedule is actually consistent with what was on TV in 1995.

Oh, heck, why not. Obviously we’re 14 minutes before the hour. Let me move onto the hour for convenience. It’s 744 minutes to the morning cartoons; that’s 12.4 hours. Taking the morning cartoons to start at 8 am, that means it’s currently 14 minutes before 24 minutes before 8 pm. I suspect a rounding error. Let me say they’re coming up on 8 pm. 194 minutes to Jeopardy implies the game show is on at 11 pm. 254 minutes to The Simpsons puts that on at midnight, which is probably true today, though I don’t think it was so in 1995 just yet. 284 minutes to Grace puts that on at 12:30 am.

I suspect that Eliot wanted it to be 978 minutes to the morning cartoons, which would bump Oprah to 4:00, Jeopardy to 7:00, Simpsons and Grace to 8:00 and 8:30, and still let the cartoons begin at 8 am. Or perhaps the kids aren’t that great at arithmetic yet.
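If you’d like to check the schedule arithmetic mechanically, here’s a small sketch. The 7:36 pm starting time is the one implied by the 744-minute figure, and the exact outputs land within the rounding already complained about:

```python
from datetime import datetime, timedelta

# Start at 7:36 pm, the time implied by "744 minutes to morning cartoons"
# if those cartoons start at 8 am. The date itself is arbitrary.
now = datetime(1995, 1, 1, 19, 36)
shows = {'morning cartoons': 744, 'Jeopardy': 194,
         'The Simpsons': 254, 'Grace': 284}
for name, minutes in shows.items():
    print(name, (now + timedelta(minutes=minutes)).strftime('%I:%M %p'))
```

That puts Jeopardy at 10:50 pm, The Simpsons at 11:50 pm, and Grace at 12:20 am — ten to fifteen minutes off the round figures above, consistent with the rounding error I suspected.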

Stephen Beals’s Adult Children for the 30th of October tries to build a “math error” out of repeated use of the phrase “I couldn’t care less”. The argument is that the thing one cares least about is unique. But why can’t there be two equally least-cared-about things?

We can consider caring about things as an optimization problem. Optimization problems are about finding the most of something given some constraints. If you want the least of something, multiply the thing you have by minus one and look for the most of that. You may giggle at this. But it’s the sensible thing to do. And many things can be equally high, or low. Take a bundt cake pan, and drizzle a little water in it. The water separates into many small, elliptic puddles. If the cake pan were perfectly formed, and set on a perfectly level counter, then the bottom of each puddle would be at the same minimum height. I grant a real cake pan is not perfect; neither is any counter. But you can imagine such.
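As a tiny illustration of that negate-and-maximize trick, here is a sketch with a “caring” function I’ve invented for the occasion:

```python
# Finding a least-cared-about point by negating and maximizing.
def f(x):
    return (x - 1.0) ** 2 + 0.5   # an invented function with one minimum

xs = [i / 1000 for i in range(-5000, 5001)]   # a simple grid search
best = max(xs, key=lambda x: -f(x))           # maximize -f to minimize f
print(best, f(best))                          # about x = 1, where f = 0.5
```

The maximizer never knows it’s been tricked into doing minimization, which is exactly the point.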

Just because you can imagine it, though, must it exist? Think of the “smallest positive number”. The idea is simple. Positive numbers are a set of numbers. Surely there’s some smallest number. Yet there isn’t; name any positive number and we can name a smaller number. Divide it by two, for example. Zero is smaller than any positive number, but it’s not itself a positive number. A minimum might not exist, at least not within the confines where we are to look. It could be that there is no single thing one cares about least, nothing one could truthfully say one couldn’t care less about.

So a minimum might or might not exist, and it might or might not be unique. This is why optimization problems are exciting, challenging things.

A bedbug declares that 'according to our quantum mechanical computations, our entire observable universe is almost certainly Fred Wardle's bed.'
Niklas Eriksson’s Carpe Diem for the 1st of November, 2015. I’m not sure how accurately the art depicts bedbugs, although I’m also not sure how accurate Eriksson needs to be.

Niklas Eriksson’s Carpe Diem for the 1st of November is about understanding the universe by way of observation and calculation. We do rely on mathematics to tell us things about the universe. Immanuel Kant has a bit of a reputation in mathematical physics circles for this observation. (I admit I’ve never seen the original text where Kant observed this, so I may be passing on an urban legend. My love has several thousands of pages of Kant’s writing, but I do not know if any of them touch on natural philosophy.) If all we knew about space was that gravitation falls off as the square of the distance between two things, though, we could infer that space must have three dimensions. Otherwise that relationship would not make geometric sense.
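A sketch of the geometric argument, in modern dress: the pull of gravity from a point mass spreads out over a sphere, and in a space of d dimensions the surface area of that sphere grows as r^{d-1} . So we would expect

F(r) \propto \frac{1}{r^{d-1}}

and an observed inverse-square law, with d - 1 = 2 , picks out d = 3 .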

Jeff Harris’s kids-information feature Shortcuts for the 1st of November was about the Harvard Computers. By this we mean the people who did the hard work of numerical computation, back in the days before this could be done by electrical and then electronic computer. Mathematicians relied on people who could do arithmetic in those days. There is the folkloric belief that mathematicians are inherently terrible at arithmetic. (I suspect the truth is people assume mathematicians must be better at arithmetic than they really are.) But here, there’s the mathematics of thinking what needs to be calculated, and there’s the mathematics of doing the calculations.

Their existence tends to be mentioned as a rare bit of human interest in numerical mathematics books, usually in the preface, where the author speaks with amazement of how people who did computing were once called computers. I wonder if books about font and graphic design mention in their prefaces how people who typed used to be called typewriters.

Reading the Comics, September 22, 2015: Rock Star Edition


The good news is I’ve got a couple of comic strips I feel responsible for including the pictures of. (While I’m confident I could include all the comics I talk about as fair use — I make comments which expand on the strips’ content and which don’t make sense without the original — Gocomics.com links seem reasonably stable and likely to be there in the future. Comics Kingdom links generally expire after a month except to subscribers and I don’t know how long Creators.com links last.) And a couple of them talk about rock bands, so, that’s why I picked that title.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 17th of September is a subverted-fairy-tale-moral strip, naturally enough. It’s also a legitimate point, though. Unlikely events do happen sometimes, and it’s a mistake to draw too-strong conclusions from them. This is why it’s important to reproduce interesting results. It’s also why, generally, we like larger sample sizes. It’s not likely that twenty fair coins flipped will all come up tails at once. But it’s far more likely that will happen than that two hundred fair coins flipped will all come up tails. And that’s far more likely than that two thousand fair coins will. For that matter, it’s more likely that three-quarters of twenty fair coins flipped will come up tails than that three-quarters of two hundred fair coins will. And the chance that three-quarters of two thousand fair coins will come up tails is ignorable. If that happens, then something interesting has been found.
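For scale, here’s a sketch computing these chances. With two thousand coins the probabilities underflow ordinary floating point, so I work with base-ten logarithms via the log-gamma function:

```python
from math import lgamma, log

def log10_prob_k_tails(n, k):
    # log10 of C(n, k) / 2^n: the chance of exactly k tails in n fair flips
    ln_p = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1) - n * log(2)
    return ln_p / log(10)

for n in (20, 200, 2000):
    all_tails = log10_prob_k_tails(n, n)
    three_quarters = log10_prob_k_tails(n, 3 * n // 4)
    print(f"n={n}: all tails ~10^{all_tails:.0f}, "
          f"exactly 3/4 tails ~10^{three_quarters:.0f}")
```

All tails goes from about one in a million (twenty coins) to one in 10^{602} (two thousand coins); exactly three-quarters tails goes from about one percent down to roughly 10^{-114}, which is the ignorable chance mentioned above.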

In Juba’s Viivi and Wagner for the 17th of September, Wagner announces his decision to be a wandering mathematician. I applaud his ambition. If I had any idea where to find someone who needed mathematics done I’d be doing that myself. If you hear something give me a call. I’ll be down at the City Market, in front of my love’s guitar case, multiplying things by seven. I may get it wrong, but nobody will know how to correct me.

A whole panel full of calculations allows him to work out when the next Tool album will be finished.
Daniel Beyer’s Long Story Short for the 18th of September, 2015. I never actually heard of Tool before this comic.

Daniel Beyer’s Long Story Short for the 18th of September uses a page full of calculations to predict when prog-rock band Tool will release their next album. (Wikipedia indicates they’re hoping for sometime before the end of 2015, but they’ve been working on it since 2008.) Some of the symbols make a bit of sense as resembling those of quantum physics. An expression like (in the lower left of the board) \langle \psi_1 u_1 | {H}_{\gamma} | \psi_1 \rangle resembles a probability distribution calculation. (There should be a ^ above the H there, but that’s a little beyond what WordPress can render in the simple mathematical LaTeX tools it has available. It’s in the panel, though.) The letter ψ stands for a probability wave, containing somehow all the information about a system. The composition of symbols literally means to calculate how an operator — a function that has a domain of functions and a range of functions — changes that probability distribution. In quantum mechanics every interesting physical property has a matching operator, and calculating this set of symbols tells us the distribution of whatever that property is. H generally suggests the total energy of the system, so the implication is this measures, somehow, what energies are more and are less probable. I’d be interested to know if Beyer took the symbols from a textbook or paper and what the original context was.

Dave Whamond’s Reality Check for the 19th of September brings in another band to this review. It uses a more basic level of mathematics, though.

Percy Crosby’s Skippy from the 19th of September — rerun from sometime in 1928 — is a clever way to get a word problem calculated. It also shows off what’s probably been the most important use of arithmetic, which is keeping track of money. Accountants and shopkeepers get little attention in histories of mathematics, but a lot of what we do has been shaped by their needs for speed, efficiency, and accuracy. And one of Gocomics’s commenters pointed out that the shopkeeper didn’t give the right answer. Possibly the shopkeeper suspected what was up.

Paul Trap’s Thatababy for the 20th of September uses a basic geometry fact as an example of being “very educated”. I don’t think the area of the circle rises to the level of “very” — the word means “truly”, after all — but I would include it as part of the general all-around awareness of the world people should have. Also it fits in the truly confined space available. I like the dad’s eyes in the concluding panel. Also, there’s people who put eggplant on pizza? Really? Also, bacon? Really?

Gordo's luck is incredible, as he's won twenty card hands in a row. Some people make their own luck; he put his down to finding a leprechaun in a box of Lucky Charms.
Alex Hallatt’s Arctic Circle for the 21st of September, 2015.

Alex Hallatt’s Arctic Circle for the 21st of September is about making your own luck. I find it interesting in that it rationalizes magic as a thing which manipulates probability. As ways to explain magic for stories go, that isn’t a bad one. We can at least imagine the rigging of card decks and weighting of dice. And its plot happens in the real world, too: people faking things — deceptive experimental results, rigged gambling devices, financial fraud — can often be found because the available results are too improbable. For example, a property called Benford’s Law tells us that in many kinds of data the first digit is more likely to be a 1 than a 2, a 2 than a 3, a 3 than a 4, et cetera. This fact serves to uncover fraud surprisingly often: people will try to steal money close to but not at some limit, like the $10,000 (United States) limit before money transactions get reported to the federal government. But that means they work with checks worth nine thousand and something dollars much more often than they do checks worth one thousand and something dollars, which is suspicious. Randomness can be a tool for honesty.
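The law itself is a one-liner — the chance the leading digit is d comes to \log_{10}\left(1 + \frac{1}{d}\right) — and it’s easy to watch a Benford-friendly sequence like the powers of two obey it:

```python
import math
from collections import Counter

# Benford's Law: the predicted share of each leading digit.
for d in range(1, 10):
    print(d, round(math.log10(1 + 1 / d), 3))

# First digits of the powers of two, a classic Benford-obeying sequence.
firsts = Counter(int(str(2 ** k)[0]) for k in range(1, 1001))
print(sorted(firsts.items()))   # digit 1 leads about 30 percent of the time
```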

Peter Maresca’s Origins of the Sunday Comics feature for the 21st of September ran a Rube Goldberg comic strip from the 19th of November, 1913. That strip, Mike and Ike, precedes its surprisingly grim storyline with a kids-resisting-the-word-problem joke. The joke interests me because it shows a century-old example of the joke about word problems being strings of non sequiturs stuffed with unpleasant numbers. I enjoyed Mike and Ike’s answer, and the subversion of even that answer.

Mark Anderson’s Andertoons for the 22nd of September tries to optimize its targeting toward me by being an anthropomorphized-mathematical-objects joke and a Venn diagram joke. Also being Mark Anderson’s Andertoons today. If I didn’t identify this as my favorite strip of this set Anderson would just come back with this, but featuring monkeys at typewriters too.

Reading the Comics, September 14, 2015: Back To School Edition, Part II


Today’s mathematical-comics blog should get us up to the present. Or at least to Monday. Yes, I’m aware of which paradox of Zeno this evokes. Be nice.

Scott Adams’s Dilbert Classics for the 11th of September is a rerun from, it looks like, the 18th of July, 1992. Anyway, Dilbert has acquired a supercomputer and figures to calculate π to a lot of decimal places, to finally do something about the areas of circles. The calculation of the digits of pi is done, often on home-brewed supercomputers, to lots of digits. But the calculation of areas, or volumes, or circumferences or whatever, isn’t why. We just don’t need that many digits; forty digits of pi would be plenty for almost any calculation involving measuring things in the real world.

It’s actually a bit mysterious why the digits of pi should be worth bothering with. It’s not yet known if pi is a “normal” number, and that’s of some interest. In a normal number every finite sequence of digits appears, and as often as every other sequence of digits just as long. That is, if you break the digits of pi up into four-digit blocks, you should see “1701” and “2038” and “2468” and “9999” and all, each appearing roughly once per ten thousand blocks. It’s almost certain that pi is normal, because just about every number is. But it’s not proven. And if there were numerical evidence that pi wasn’t normal that would be mathematically interesting, though I wouldn’t blame everybody not in number theory for saying “huh” and “so what?” before moving on. As it is, calculating digits of pi is a good, challenging task that can be used to prove coding and computing abilities, and it might turn up something interesting. It may as well be that as anything.
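Anyone who wants to poke at this can. Here’s a sketch that counts four-digit blocks in the first hundred thousand digits, assuming the mpmath arbitrary-precision package is installed; at this size each of the ten thousand possible blocks should appear only two or three times, so expect noisy counts, not evidence:

```python
from collections import Counter
import mpmath   # arbitrary-precision arithmetic; pip install mpmath

mpmath.mp.dps = 100_000               # work with a hundred thousand digits
digits = str(+mpmath.pi)[2:]          # the decimal digits, '3.' stripped off
blocks = Counter(digits[i:i + 4] for i in range(0, len(digits) - 3, 4))

# Roughly 25,000 blocks over 10,000 possible values: normality predicts
# each value turns up about two or three times.
print(len(digits) // 4, blocks['1701'], blocks['2038'], blocks['9999'])
```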

Steve Breen and Mike Thompson’s Grand Avenue for the 11th of September is a “motivate the word problem” joke. So is Bill Amend’s FoxTrot Classics for the 14th (a rerun from the same day in 2004). I like Amend’s version better, partly because it gives more realistic problems. I also like that it mixes in a bit of French class. It’s not always mathematics that needs motivation.

J C Duffy’s Lug Nuts for the 11th of September is another pun strip badly timed for Pi Day.

Ruben Bolling’s Tom The Dancing Bug gave us a Super Fun-Pak Comics installment on the 11th. And that included a Chaos Butterfly installment pitting deterministic chaos against Schrödinger’s Cat. The Cat represents one (of many) interpretations of quantum mechanics, the “superposition” interpretation. It’s difficult to explain the idea philosophically, to say what is really going on. The mathematics is straightforward, though. In the most common model of quantum mechanics we describe what is going on by a probability distribution, a function that describes how likely each possible outcome is. Quantum mechanics describes how that distribution changes in time. In the superpositioning we have two, or more, probability distributions that describe very different states, but (in a way) averaged together. The changes of this combined distribution then become our idea of how the system changes in time. It’s just hard to say what it could mean when the superposition implies very different things, like a cat being both wet and dry, being equally true at once.

Julie Larson’s Dinette Set for the 12th of September is about double negatives. It’s also about the doomed attempt to bring logic to the constructions of English. At least in English a double negative — “not unwanted”, say — generally parses to a positive, even if the connotation is that the thing is only a bit wanted. This corresponds to what logicians would say. A logician might use “C” to stand in for some statement that can only be true or false. Then, saying “not not C” — an “is true” gets implicitly added to the end of that — is equivalent to saying “C [is true]”. My love, the philosopher, who knows much more Spanish than I do has pointed out that in Spanish the “not not” construction can intensify the strength of the negation, rather than annulling it. This causes us to wonder if Spanish-speaking logic students have a harder time understanding the “not not C” construction. I don’t know and would welcome insight. (Also I hope I have it right that a “not not” is an intensifier, rather than a softener. But I suppose it doesn’t matter, as long as the Spanish equivalent of “not not wanted” still connotes “unwanted”.)

Dan Collins’s Looks Good On Paper for the 12th of September is a simple early-autumn panorama kind of strip. Mathematics — particularly, geometry — gets used as the type case for elementary school. I suppose as long as diagramming sentences is out of fashion there’s no better easy-to-draw choice.

When he showed his wife the abacus he'd bought, she thought it was ``_ _ _ - _ _ _''.
David L Hoyt and Jeff Knurek’s Jumble for the 14th of September.

David L Hoyt and Jeff Knurek’s Jumble for the 14th of September is an abacus joke. For folks who want to do the Jumble themselves, a hint: the second word is not “Dummy” however appealing an unscramble that looks.

Stephen Beals’s Adult Children for the 14th builds on the idea of what if the universe were made wrong. And that’s expressed as a mathematics error in the building of the universe. The idea of mathematics as a transcendent and even god-touching thing is an old one. I imagine this reflects how a logically deduced fact has some kind of universal truth, that a sound argument is sound regardless of who makes it, or considers it. It’s a heady idea. Mathematics also allows us to say some very specific, and remarkable, things about the infinite. This is another god-touching notion. But we don’t have sound reason to think that universe-making must be mathematical. Mathematics can describe many aspects of the universe eerily well, yes. But is it necessary that a universe be mathematically consistent? The question seems to defy any kind of empirical answer; we must listen to philosophers, who can at least give us a reasoned answer.

Tom Thaves’s Frank and Ernest for the 14th of September depicts cave-Frank and cave-Ernest at the dawn of numbers. It suggests the symbol 1 being a representation of a stick, and 0 as a stone. The 1 as a stick seems at least imaginable; counting off things by representing them as sticks or as stroke marks feels natural. Of course I say that coming from a long heritage of doing just that. 0, as I understand it, seems to derive from marking with a dot a place where zero of whatever was to be studied should appear; the dot grew into a loop probably to make it harder to miss.

Reading the Comics, June 21, 2015: Blatantly Padded Edition, Part 2


I said yesterday I was padding one mathematics-comics post into two for silly reasons. And I was. But there were enough Sunday comics on point that splitting one entry into two has turned out to be legitimate. Nice how that works out sometimes.

Mason Mastroianni, Mick Mastroianni, and Perri Hart’s B.C. (June 19) uses mathematics as something to heap upon a person until they yield to your argument. It’s a fallacious way to argue, but it does work. Even at a mathematical conference the terror produced by a screen full of symbols can chase follow-up questions away. On the 21st, they present mathematics as a more obviously useful thing. Well, mathematics with a bit of physics.

Nate Frakes’s Break Of Day (June 19) is this week’s anthropomorphic algebra joke.

Life at the quantum level: one subatomic particle suspects the other of being unfaithful because both know he could be in two places at once.
Niklas Eriksson’s Carpe Diem for the 20th of June, 2015.

Niklas Eriksson’s Carpe Diem (June 20) is captioned “Life at the Quantum Level”. And it’s built on the idea that quantum particles could be in multiple places at once. Whether something can be in two places at once depends on coming up with a clear idea about what you mean by “thing” and “places” and for that matter “at once”; when you try to pin the ideas down they prove to be slippery. But the mathematics of quantum mechanics is fascinating. It cries out for treating things we would like to know about, such as positions and momentums and energies of particles, as distributions instead of fixed values. That is, we know how likely it is a particle is in some region of space compared to how likely it is somewhere else. In statistical mechanics we resort to this because we want to study so many particles, or so many interactions, that it’s impractical to keep track of them all. In quantum mechanics we need to resort to this because it appears this is just how the world works.

(It’s even less on point, but Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 21st of June has a bit of riffing on Schrödinger’s Cat.)

Brian and Ron Boychuk’s Chuckle Brothers (June 20) name-drops algebra as the kind of mathematics kids still living with their parents have trouble with. That’s probably required by the desire to make a joking definition of “aftermath”, so that some specific subject has to be named. And it needs parents to still be watching closely over their kids, something that doesn’t quite fit for college-level classes like Intro to Differential Equations. So algebra, geometry, or trigonometry it must be. I am curious whether algebra reads as the funniest of that set of words, or if it just fits better in the space available. ‘Geometry’ is about as long a word as ‘algebra’, but it may not have the same connotation of being an impossibly hard class.

Little Iodine does badly in arithmetic in class. But she's very good at counting the calories, and the cost, of what her teacher eats.
Jimmy Hatlo’s Little Iodine for the 18th of April, 1954, and rerun the 18th of June, 2015.

And from the world of vintage comic strips, Jimmy Hatlo’s Little Iodine (June 21, originally run the 18th of April, 1954) reminds us that anybody can do any amount of arithmetic if it’s something they really want to calculate.

Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney (June 21) is another strip using the idea of mathematics — and particularly word problems — to signify great intelligence. I suppose it’s easier to recognize the form of a word problem than it is to recognize a good paper on the humanities if you only have two dozen words to show it in.

Juba’s Viivi and Wagner (June 21) is a timely reminder that while sudokus may be fun logic puzzles, they are ultimately the puzzle you decide to make of them.

Reading the Comics, March 15, 2015: Pi Day Edition


I had kind of expected the 14th of March — the Pi Day Of The Century — would produce a flurry of mathematics-themed comics. There were some, although they were fewer and less creatively diverse than I had expected. Anyway, between that, and the regular pace of comics, there’s plenty for me to write about. Recently featured, mostly on Gocomics.com, a little bit on Creators.com, have been:

Brian Anderson’s Dog Eat Doug (March 11) features a cat who claims to be “pondering several quantum equations” to prove something about a parallel universe. It’s an interesting thing to claim because, really, how can the results of an equation prove something about reality? We’re extremely used to the idea that equations can model reality, and that the results of equations predict real things, to the point that it’s easy to forget that there is a difference. A model’s predictions still need some kind of validation, reason to think that these predictions are meaningful and correct when done correctly, and it’s quite hard to think of a meaningful way to validate a prediction about “another” universe.


Reading The Comics, November 9, 2014: Finally, A Picture Edition


I knew if I kept going long enough some cartoonist not on Gocomics.com would have to mention mathematics. That finally happened with one from Comics Kingdom, and then one from the slightly odd case of Rick Detorie’s One Big Happy. Detorie’s strip is on Gocomics.com, but as a rerun from several years ago. He has a different one that runs on the normal daily pages. This is for sound economic reasons: actual newspapers pay much better than the online groupings of them (considering how cheap Comics Kingdom and Gocomics are for subscribers I’m not surprised) so he doesn’t want his current strips run on Gocomics.com. As for why his current strips do appear on, for example, the fairly good online comics page of AZcentral.com, that’s a good question, and one that deserves a full answer.

The psychiatric patient is looking for something in the middle of curved space-time.
Vic Lee’s Pardon My Planet for the 6th of November, 2014.

Vic Lee’s Pardon My Planet (November 9), which broke the streak of Comics Kingdom not making it into these pages, builds around a quote from Einstein I’d never heard before but which sounds like the sort of vaguely inspirational message that naturally attaches to famous names. The patient talks about the difficulty of finding something in “the middle of four-dimensional curved space-time”, although properly speaking it could be tricky finding anything within a bounded space, whether it’s curved or not. The generic mathematics problem you’d build from this would be to have some function whose maximum in a region you want to find (if you want the minimum, just multiply your function by minus one and then find the maximum of that), and there’s multiple ways to do that. One obvious way is the mathematical equivalent of getting to the top of a hill by starting from wherever you are and walking the steepest way uphill. Another way is to just amble around, picking your next direction at random, always taking directions that get you higher and usually but not always refusing directions that bring you lower. You can probably see some of the obvious problems with either approach, and this is why finding the spot you want can be harder than it sounds, even if it’s easy to get started looking.
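Here’s a minimal sketch of that second, amble-around strategy, on a one-peak hill I’ve made up for the demonstration; it mostly climbs but sometimes accepts a downhill step:

```python
import random

# An invented hill with a single peak at (1, -2).
def f(x, y):
    return -(x - 1) ** 2 - (y + 2) ** 2

x = y = 0.0
height = f(x, y)
best = (x, y, height)
for _ in range(10_000):
    nx = x + random.uniform(-0.1, 0.1)
    ny = y + random.uniform(-0.1, 0.1)
    nh = f(nx, ny)
    # Always take improvements; take a worsening step ten percent of the time.
    if nh > height or random.random() < 0.1:
        x, y, height = nx, ny, nh
        if height > best[2]:
            best = (x, y, height)
print(round(best[0], 2), round(best[1], 2))   # wanders up near (1, -2)
```

The occasional downhill step is what keeps a search like this from getting permanently stuck on a small local bump, which is one of the obvious problems alluded to above.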

Ruben Bolling’s Super Fun-Pak Comix (November 6), which is technically a rerun since the Super Fun-Pak Comix have been a longrunning feature in his Tom The Dancing Bug pages, is primarily a joke about the Heisenberg Uncertainty Principle, that there is a limit to what information one can know about the universe. This limit can be understood mathematically, though. The wave formulation of quantum mechanics describes everything there is to know about a system in terms of a function, called the state function and normally designated Ψ, the value of which can vary with location and time. Determining the location or the momentum or anything about the system is done by a process called “applying an operator to the state function”. An operator is a function that turns one function into another, which sounds like pretty sophisticated stuff until you learn that, like, “multiply this function by minus one” counts.

In quantum mechanics anything that can be observed has its own operator, normally a bit trickier than just “multiply this function by minus one” (although some are not very much harder!), and applying that operator to the state function is the mathematical representation of making that observation. If you want to observe two distinct things, such as location and momentum, that’s a matter of applying the operator for the first thing to your state function, and then taking the result of that and applying the operator for the second thing to it. And here’s where it gets really interesting: it doesn’t have to, but it can depend on what order you do this in, so that you get different results applying the first operator and then the second from what you get applying the second operator and then the first. The operators for location and momentum are such a pair, and the result is that we can’t know to arbitrary precision both at once. But there are pairs of operators for which it doesn’t make a difference. You could, for example, know both the momentum and the electrical charge of Scott Baio simultaneously to as great a precision as your Scott-Baio-momentum-and-electrical-charge-determination needs are, and the mathematics will back you up on that.
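Both kinds of pairs are easy to exhibit with a symbolic-algebra sketch (SymPy assumed). “Multiply by x” and “take the derivative” don’t commute; “multiply by minus one” and “take the derivative” do:

```python
import sympy

x = sympy.symbols('x')
f = sympy.Function('f')(x)   # a generic, unspecified function

# Multiply by x, then differentiate -- versus -- differentiate, then multiply.
one_order = sympy.diff(x * f, x)
other_order = x * sympy.diff(f, x)
print(sympy.simplify(one_order - other_order))   # f(x): the orders disagree

# "Multiply by minus one" and "differentiate" in both orders.
one_order = sympy.diff(-f, x)
other_order = -sympy.diff(f, x)
print(sympy.simplify(one_order - other_order))   # 0: these two commute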

Ruben Bolling’s Tom The Dancing Bug (November 6), meanwhile, was a rerun from a few years back when it looked like the Large Hadron Collider might never get to working and the glitches started seeming absurd, as if an enormous project involving thousands of people and millions of parts could ever suffer annoying setbacks because not everything was perfectly right the first time around. There was an amusing notion going around, illustrated by Bolling nicely enough, that perhaps the results of the Large Hadron Collider would be so disastrous somehow that the universe would in a fit of teleological outrage prevent its successful completion. It’s still a funny idea, and a good one for science fiction stories: Isaac Asimov used the idea in a short story dubbed “Thiotimoline and the Space Age”, published 1959, in which attempts to manipulate a compound that dissolves before water is added to it might have accidentally sent hurricanes Carol, Edna, and Diane into New England in 1954 and 1955.

Chip Sansom’s The Born Loser (November 7) gives me a bit of a writing break by just being a pun strip that you can save for next March 14.

Dan Thompson’s Brevity (November 7), out of reruns, is another pun strip, though with giant monsters.

Francesco Marciuliano’s Medium Large (November 7) is about two of the fads of the early 80s, those of turning everything into a breakfast cereal somehow and that of playing with Rubik’s Cubes. Rubik’s Cubes have long been loved by a certain streak of mathematicians because they are a nice tangible representation of group theory — the study of things that can do things that look like addition without necessarily being numbers — that’s more interesting than just picking up a square and rotating it one, two, three, or four quarter-turns. I still think it’s easier to just peel the stickers off (and yet, the die-hard Rubik’s Cube Popularizer can point out there’s good questions about polarity you can represent by working out the rules of how to peel off only some stickers and put them back on without being detected).

Ruthie questions whether she'd be friends with people taking carrot sticks from her plate, or whether anyone would take them in the first place. Word problems can be tricky things.
Rick Detorie’s One Big Happy for the 9th of November, 2014.

Rick Detorie’s One Big Happy (November 9), and I’m sorry, readers about a month in the future from now, because that link’s almost certainly expired, is another entry in the subject of word problems resisted because the thing used to make the problem seem less abstract has connotations that the student doesn’t like.

Fred Wagner’s Animal Crackers (November 9) is your rare comic that could be used to teach positional notation, although when you actually pay attention you realize it doesn’t actually require that.

Mac and Bill King’s Magic In A Minute (November 9) shows off a mathematically-based sleight-of-hand trick, describing a way to make it look like you’re reading your partner-monkey’s mind. This is probably a nice prealgebra problem to work out just why it works. You could also consider this a toe-step into the problem of encoding messages, finding a way to send information about something in a way that the original information can be recovered, although obviously this particular method isn’t terribly secure for more than a quick bit of stage magic.

Realistic Modeling


“Economic Realism (Wonkish)”, a blog entry by Paul Krugman in The New York Times, discusses a paper, “Chameleons: The Misuse Of Mathematical Models In Finance And Economics”, by Paul Pfleiderer of Stanford University, which surprises me by including a color picture of a chameleon right there on the front page, and in an academic paper at that, and I didn’t know you could have color pictures included just for their visual appeal in academia these days. Anyway, Pfleiderer discusses the difficulty of what they term filtering, making sure that the assumptions one makes to build a model — which are simplifications and abstractions of the real-world thing in which you’re interested — aren’t too far out of line with the way the real thing behaves.

This challenge, which I think of as verification or validation, is important when you deal with pure mathematical or physical models. Some of that will be at the theoretical stage: is it realistic to model a fluid as if it had no viscosity? Unless you’re dealing with superfluid helium or something exotic like that, no, but you can do very good work that isn’t too far off. Or there’s a classic model of the way magnetism forms, known as the Ising model, which in a very special case — a one-dimensional line — is simple enough that a high school student could solve it. (Well, a very smart high school student, one who’s run across an exotic function called the hyperbolic cosine, could do it.) But that model is so simple that it can’t capture the phase change: that if you warm a magnet up past a critical temperature it stops being magnetic. Is the model no good? If you aren’t interested in the phase change, it may still be good enough.
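For the curious, that solvable case is compact enough to quote. For a chain of N spins with coupling J and no external field, at inverse temperature \beta , the standard textbook answer (with free ends) for the partition function is

Z_N = 2 \left( 2 \cosh(\beta J) \right)^{N-1}

There’s the hyperbolic cosine. And since this is a smooth function of temperature, the one-dimensional model has no critical point to show, which is exactly the phase change it can’t capture.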

And then there is the numerical stage: if you’ve set up a computer program that is supposed to represent fluid flow, does it correctly find solutions? I’ve heard it claimed that the majority of time spent on a numerical project is spent in validating the results, and that isn’t even simply in finding and fixing bugs in the code. Even once the code is doing perfectly what we mean it to do, it must be checked that what we mean it to do is relevant to what we want to know.

Pfleiderer’s is an interesting paper and I think worth the read; despite its financial mathematics focus (and a brief chat about quantum mechanics) it doesn’t require any particularly specialized training. There’s some discussions of particular financial models, but what’s important are the assumptions being made behind those models, and those are intelligible without prior training in the field.

Reading the Comics, March 1, 2014: Isn’t It One-Half X Squared Plus C? Edition


So the subject line references here a mathematics joke that I never have heard anybody ever tell, and only encounter in lists of mathematics jokes. It goes like this: a couple professors are arguing at lunch about whether normal people actually learn anything about calculus. One of them says he’s so sure normal people learn calculus that even their waiter would be able to answer a basic calc question, and they make a bet on that. He goes back and finds their waiter and says, when she comes with the check he’s going to ask her if she knows what the integral of x is, and she should just say, “why, it’s one-half x squared, of course”. She agrees. He goes back and asks her what the integral of x is, and she says of course it’s one-half x squared, and he wins the bet. As he’s paid off, she says, “But excuse me, professor, isn’t it one-half x squared plus C?”

Let me explain why this is an accurately structured joke construct and must therefore be classified as funny. “The integral of x”, as the question puts it, has not just one correct answer but rather a whole collection of correct answers, which are different from one another only by the addition of a constant, by convention denoted C, and the inclusion of that “plus C” denotes that whole collection. The professor was being sloppy in referring to just a single example from that collection instead of the whole set, as the waiter knew to do. You’ll see why this is relevant to today’s collection of mathematics-themed comics.

Jef Mallet’s Frazz (February 22) points out one of the grand things about mathematics, that if you follow the proper steps in a mathematical problem you get to be right, and to be extraordinarily confident in that rightness. And that’s true, although, at least to me a good part of what’s fun in mathematics is working out what the proper steps are: figuring out what the important parts of something you want to study should be, and what follows from your representation of them, and — particularly if you’re trying to represent a complicated real-world phenomenon with a model — whether you’re representing the things you find interesting in the real-world phenomenon well. So, while following the proper steps gets you an answer that is correct within the limits of whatever it is you’re doing, you still get to work out whether you’re working on the right problem, which is the real fun.

Mark Pett’s Lucky Cow (February 23, rerun) uses that ambiguous place between mathematics and physics to represent extreme smartness. The equation the physicist brings to Neil is the (time-dependent) Schrödinger Equation, describing how probability evolves in time, and the answer is correct. If Neil’s coworkers at Lucky Cow were smarter they’d realize the scam, though: while the equation is impressively scary-looking to people not in the know, a particle physicist would have about as much chance of forgetting this as of forgetting the end of “E equals m c … ”.
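For reference, the time-dependent Schrödinger Equation in its general operator form — I take this as the standard textbook statement, rather than vouching for Pett’s exact rendering — is

i \hbar \frac{\partial}{\partial t} \Psi = \hat{H} \Psi

where \hat{H} is the energy operator for the system, the thing a particle physicist could no more forget than the end of that famous equation.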

Hilary Price’s Rhymes With Orange (February 24) builds on the familiar infinite-monkeys metaphor, but misses an important point. Price is right that yes, an infinite number of monkeys already did create the works of Shakespeare, as a result of evolving into a species that could have a Shakespeare. But the infinite monkeys problem is about selecting letters at random, uniformly: the letter following “th” is as likely to be “q” as it is to be “e”. An evolutionary system, however, encourages the more successful combinations in each generation, and discourages the less successful: after writing “th” Shakespeare would be far more likely to put “e” and never “q”, which makes calculating the probability rather less obvious. And Shakespeare was writing with awareness that the words mean things and they must be strings of words which make reasonable sense in context, which the monkeys on typewriters would not. Shakespeare could have followed the line “to be or not to be” with many things, but one of the possibilities would never be “carport licking hammer worbnoggle mrxl 2038 donkey donkey donkey donkey donkey donkey donkey”. The typewriter monkeys are not so selective.
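To put a number on how unhelpful uniform random typing is — the phrase and the keyboard size here are my own choices for illustration:

```python
# The chance a monkey's next keystrokes are exactly "to be or not to be",
# assuming a 27-key typewriter (26 letters plus space) struck uniformly.
phrase = "to be or not to be"
keys = 27
p = (1 / keys) ** len(phrase)
print(f"{p:.1e}")   # about 1.7e-26 for these eighteen keystrokes
```

Eighteen uniformly random keystrokes match the phrase about once in 10^{26} tries, and that’s before the monkeys face the rest of Hamlet.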

Dan Thompson’s Brevity (February 26) is a cute joke about a number’s fashion sense.

Mark Pett’s Lucky Cow turns up again (February 28, rerun) for the Rubik’s Cube. The tolerably fun puzzle and astoundingly bad Saturday morning cartoon of the 80s can be used to introduce abstract algebra. When you rotate the nine little cubes on the edge of a Rubik’s cube, you’re doing something which is kind of like addition. Think of what you can do with the top row of cubes: you can leave it alone, unchanged; you can rotate it one quarter-turn clockwise; you can rotate it one quarter-turn counterclockwise; you can rotate it two quarter-turns clockwise; you can rotate it two quarter-turns counterclockwise (which will result in something suspiciously similar to the two quarter-turns clockwise); you can rotate it three quarter-turns clockwise; you can rotate it three quarter-turns counterclockwise.

If you rotate the top row one quarter-turn clockwise, and then another one quarter-turn clockwise, you’ve done something equivalent to two quarter-turns clockwise. If you rotate the top row two quarter-turns clockwise, and then one quarter-turn counterclockwise, you’ve done the same as if you’d just turned it one quarter-turn clockwise and walked away. You’re doing something that looks a lot like addition, without being exactly like it. Something odd happens when you get to four quarter-turns either clockwise or counterclockwise, particularly, but it all follows clear rules that become pretty familiar when you notice how much it’s like saying four hours after 10:00 will be 2:00.
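That clock-arithmetic structure is the cyclic group of order four, and a toy encoding of it — my own, nothing Rubik-specific — fits in a few lines:

```python
# Quarter-turns of the top row behave like addition on a four-hour clock.
# A turn is a count of clockwise quarter-turns; negative means counterclockwise.
def combine(a, b):
    return (a + b) % 4   # do one turn, then the other

print(combine(1, 1))    # 2: two quarter-turns clockwise
print(combine(2, -1))   # 1: two clockwise then one counter is one clockwise
print(combine(3, 1))    # 0: four quarter-turns is no turn at all
```

The full cube is this same kind of structure, only with six faces’ worth of turns that interact in much more tangled ways.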

Abstract algebra marks one of the things you have to learn as a mathematics major that really changes the way you start looking at mathematics, as it really stops being about trying to solve equations of any kind. You instead start looking at how structures are put together — rotations are seen a lot, probably because they’re familiar enough you still have some physical intuition, while still having significant new aspects — and following this trail can get you, for example, to the parts of particle physics where you predict some exotic new subatomic particle has to exist because there’s this structure that makes sense if it does.

Jenny Campbell’s Flo and Friends (March 1) is set off with the sort of abstract question that comes to mind when you aren’t thinking about mathematics: how many five-card combinations are there in a deck of (52) cards? Ruthie offers an answer, although — as the commenters get to disputing — whether she’s right depends on what exactly you mean by a “five-card combination”. Would you say that a hand of “2 of hearts, 3 of hearts, 4 of clubs, Jack of diamonds, Queen of diamonds” is a different one to “3 of hearts, Jack of diamonds, 4 of clubs, Queen of diamonds, 2 of hearts”? If you’re playing a game in which the order of the deal doesn’t matter, you probably wouldn’t; but, what if the order does matter? (I admit I don’t offhand know a card game where you’d get five cards and the order would be important, but I don’t know many card games.)

For that matter, if you accept those two hands as the same, would you accept “2 of clubs, 3 of clubs, 4 of diamonds, Jack of spades, Queen of spades” as a different hand? The suits are different, yes, but they’re not differently structured: you’re still three cards away from a flush, and two away from a straight. Granted there are some games in which one suit is worth more than another, in which case it matters whether you had two diamonds or two spades; but if you got the two-of-clubs hand just after getting the two-of-hearts hand you’d probably be struck by how weird it was you got the same hand twice in a row. You can’t give a correct answer to the question until you’ve thought about exactly what you mean when you say two hands of cards are different.
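The two readings of the question give two well-known counts, easy to check:

```python
from math import comb, perm, factorial

# Order ignored: the usual count of five-card hands.
print(comb(52, 5))        # 2,598,960

# Order mattering: each hand can be dealt in 5! = 120 different orders.
print(perm(52, 5))        # 311,875,200
print(comb(52, 5) * factorial(5) == perm(52, 5))   # True
```

Whether Ruthie is right comes down to which of those two numbers the question means, which is the commenters’ dispute in a nutshell.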

Reading the Comics, February 21, 2014: Circumferences and Monkeys Edition


And now to finish off the bundle of mathematic comics that I had run out of time for last time around. Once again the infinite monkeys situation comes into play; there’s also more talk about circumferences than average.

Brian and Ron Boychuk’s The Chuckle Brothers (February 13) does a little wordplay on how “circumference” sounds like it could kind of be a knightly name, which I remember seeing in a minor Bugs Bunny cartoon back in the day. “Circumference” the word derives from the Latin, “circum” meaning around and “fero” meaning “to carry”; and to my mind, the really interesting question is why do we have the words “perimeter” and “circumference” when it seems like either one would do? “Circumference” does have the connotation of referring to just the boundary of a circular or roughly circular form, but why should the perimeter of circular things be so exceptional as to usefully have its own distinct term? But English is just like that, I suppose.

Paul Trap’s Thatababy (February 13) brings back the infinite-monkey metaphor. The infinite monkeys also appear in John Deering’s Strange Brew (February 20), which is probably just a coincidence based on how successfully tossing in lots of monkeys can produce giggling. Or maybe the last time Comic Strip Master Command issued its orders it sent out a directive, “more infinite monkey comics!”

Ruben Bolling’s Tom The Dancing Bug (February 14) delivers some satirical jabs about Biblical textual inerrancy by pointing out where the Bible makes mathematical errors. I tend to think nitpicking the Bible mostly a waste of good time on everyone’s part, although the handful of arithmetic errors are a fair wedge against the idea that the text can’t have any errors and requires no interpretation or even forgiveness, with the Ezra case the stronger one. The 1 Kings one is about the circumference and the diameter for a vessel being given, and those being incompatible, but it isn’t hard to come up with a rationalization that brings them plausibly in line (you have to suppose that the diameter goes from outer wall to outer wall, while the circumference is that of an inner wall, which may be a bit odd but isn’t actually ruled out by the text), which is why I think it’s the weaker.

Bill Whitehead’s Free Range (February 16) uses a blackboard full of mathematics as a generic “this is something really complicated” signifier. The symbols as written don’t make a lot of sense, although I admit it’s common enough while working out a difficult problem to work out weird bundles of partly-written expressions or abuses of notation (like on the middle left of the board, where a bracket around several equations is shown as being less than a bracket around fewer equations), just because ideas are exploding faster than they can be written out sensibly. Hopefully once the point is proven you’re able to go back and rebuild it all in a form which makes sense, either by going into standard notation or by discovering that you have some new kind of notation that has to be used. It’s very exciting to come up with some new bit of notation, even if it’s only you and a couple people you work with who ever use it. Developing a good way of writing a concept might be the biggest thrill in mathematics, even better than proving something obscure or surprising.

Jonathan Lemon’s Rabbits Against Magic (February 18) uses a blackboard full of mathematics symbols again to give the impression of someone working on something really hard. The first two lines of equations on 8-Ball’s board are the time-dependent Schrödinger Equations, describing how the probability distribution for something evolves in time. The last line is Euler’s formula, the curious and fascinating relationship between pi, the base of the natural logarithm e, imaginary numbers, one, and zero.
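Written out, that last line is

e^{\imath \pi} + 1 = 0

which ties together five of the most famous constants in mathematics in a single short equation.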

Todd Clark’s Lola (February 20) uses the person-on-an-airplane setup for a word problem, in this case, about armrest squabbling. Interesting to me about this is that the commenters get into a squabble about how airplane speeds aren’t measured in miles per hour but rather in nautical miles, although nobody not involved in air traffic control really sees that. What amuses me about this is that what units you use to measure the speed of the plane don’t matter; the kind of work you’d do for a plane-travelling-at-speed problem is exactly the same whatever the units are. For that matter, none of the unique properties of the airplane, such as that it’s travelling through the air rather than on a highway or a train track, matter at all to the problem. The plane could be swapped out and replaced with any other method of travel without affecting the work — except that airplanes are more likely than trains (let’s say) to have an armrest shortage and so the mock question about armrest fights is one in which it matters that it’s on an airplane.

Bill Watterson’s Calvin and Hobbes (February 21) is one of the all-time classics, with Calvin wondering about just how fast his sledding is going, and being interested right up to the point that Hobbes identifies mathematics as the way to know. There’s a lot of mathematics to be seen in finding how fast they’re going downhill. Measuring the size of the hill and how long it takes to go downhill provides the average speed, certainly. Working out how far one drops, as opposed to how far one travels, is a trigonometry problem. Trying the run multiple times, and seeing how the speed varies, introduces statistics. Trying to answer questions like when are they travelling fastest — at a single instant, rather than over the whole run — introduces differential calculus. Integral calculus could be found from trying to tell what the exact distance travelled is. Working out the shortest or the fastest possible trips introduces the calculus of variations, which leads in remarkably quick steps to optics, statistical mechanics, and even quantum mechanics. It’s pretty heady stuff, but I admit, yeah, it’s math.
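The first two of those are small enough to sketch outright, with toy numbers invented for the occasion rather than anything from the strip:

```python
import math

# Invented figures for one sled run.
length = 50.0              # meters travelled along the slope
time = 10.0                # seconds for the run
slope = math.radians(20)   # angle of the hill

avg_speed = length / time          # the arithmetic part
drop = length * math.sin(slope)    # the trigonometry part
print(f"{avg_speed} m/s average, {drop:.1f} m of vertical drop")
```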

From ElKement: On The Relation Of Jurassic Park and Alien Jelly Flowing Through Hyperspace


I’m frightfully late on following up on this, but ElKement has another entry in the series regarding quantum field theory, this one engagingly titled “On The Relation Of Jurassic Park and Alien Jelly Flowing Through Hyperspace”. The objective is to introduce the concept of phase space, a way of looking at physics problems that is maybe the biggest thing one really needs to understand to move from being a physics major (or, for many parts of the field, a mathematics major) to being a grad student.

As an undergraduate you get all sorts of problems in which, to pick an example, you model a damped harmonic oscillator. A good example of this is modelling the way a car bounces up and down after it goes over a bump, when the shock absorbers are working. You as a student are given some physical properties — how easily the car bounces, how well the shock absorbers soak up bounces — and how the first bounce went — how far the car bounced upward, how quickly it started going upward — and then work out from that what the motion will be ever after. It’s a bit of calculus and you might do it analytically, working out a complicated formula, or numerically, letting one of many different computer programs do the work and probably draw a picture showing what happens. That’s shown in class, and then for homework you do a couple of problems just like that but with different numbers, and for the exam you get another one yet, and one more might turn up on the final exam.
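For the curious, here’s a minimal numerical sketch of that homework problem. The mass, damping, and spring constants are invented, not from any particular car or from ElKement’s essay; and plotting the velocity against the displacement, rather than either against time, would trace out a spiral in exactly the phase space the essay describes:

# Damped harmonic oscillator, m x'' + c x' + k x = 0, solved numerically.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1.0, 0.5, 4.0            # mass, damping, stiffness (all made up)

def bounce(t, y):
    x, v = y                       # displacement and velocity
    return [v, -(c * v + k * x) / m]

# The first bounce: start 0.1 units up, at rest.
sol = solve_ivp(bounce, (0.0, 20.0), [0.1, 0.0], dense_output=True)

for t in np.linspace(0.0, 20.0, 9):
    x, v = sol.sol(t)
    print(f"t = {t:5.1f}   x = {x:+.4f}   v = {v:+.4f}")   # the bounce dying away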

Continue reading “From ElKement: On The Relation Of Jurassic Park and Alien Jelly Flowing Through Hyperspace”

From ElKement: May The Force Field Be With You


I’m derelict in mentioning this, but ElKement’s blog, Theory And Practice Of Trying To Combine Just Anything, has published the second part of a non-equation-based description of quantum field theory. This one, titled “May The Force Field Be With You: Primer on Quantum Mechanics and Why We Need Quantum Field Theory”, introduces the idea of a field, and a bit of how fields can be understood in quantum-mechanical terms.

A field, in this context, means some quantity that’s got a defined value at every point in the space and time you’re studying. As ElKement notes, the temperature is probably the field most familiar to people. I’d imagine that’s partly because it’s relatively easy to feel the temperature change as one goes about one’s business — after all, gravity is also a field, but almost none of us feel it appreciably change — and because weather maps make its changes in space and in time available in attractive pictures.

The thing the field contains can be just about anything. The temperature would be just a plain old number, or as mathematicians would have it a “scalar”. But you can also have fields that describe stuff like the pull of gravity, which is a certain amount of pull pointing, for us, toward the center of the Earth. You can also have fields that describe, for example, how quickly and in what direction the water within a river is flowing. These strengths-and-directions are called “vectors” [1], and a field of vectors offers a lot of interesting mathematics and useful physics. You can also plunge into more exotic mathematical constructs, but you don’t have to. And you don’t need to understand any of this to read ElKement’s more robust introduction to all this.
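As a minimal sketch of that distinction — the formulas here are invented for illustration, not anything physical — a scalar field hands you one number at each point, while a vector field hands you a strength and a direction:

import math

def temperature(x, y):
    """Scalar field: one plain number at each point (made-up formula)."""
    return 20.0 + 5.0 * math.cos(x) * math.sin(y)

def river_flow(x, y):
    """Vector field: an amount *and* a direction at each point (made up)."""
    return (1.0 + 0.1 * y, 0.2 * math.sin(x))    # (eastward, northward)

print(temperature(1.0, 2.0))   # a single number
print(river_flow(1.0, 2.0))    # a pair encoding strength and direction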

[1] The independent student newspaper for the New Jersey Institute of Technology is named The Vector, and has as motto “With Magnitude and Direction Since 1924”. I don’t know if other tech schools have newspapers which use a similar joke.

From ElKement: Space Balls, Baywatch, and the Geekiness of Classical Mechanics


Over on ElKement’s blog, Theory and Practice of Trying To Combine Just Anything, is the start of a new series about quantum field theory. Elke Stangl is attempting a pretty impressive trick here: describing a pretty advanced field without resorting to the piles of equations that maybe are needed to be precise, but which also fill the page with piles of equations.

The first entry is about classical mechanics, contrasting the familiar way it gets introduced to people — the whole force-equals-mass-times-acceleration bit — with an alternate description based on what’s called the Principle of Least Action. This alternate description is as good as the familiar old Newton’s Laws at describing what’s going on, but it also makes a host of powerful new mathematical tools available. So when you get into serious physics work you tend to shift over to that model; and if you want to start talking Modern Physics, stuff like quantum mechanics, you pretty nearly have to start with it.
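In symbols — my summary, not ElKement’s — the action S is the integral over the trip of the Lagrangian L , the kinetic energy minus the potential energy:

S = \int_{t_1}^{t_2} L(x, \dot{x}, t) \, dt

Demanding that the actual motion make S stationary gives the Euler-Lagrange equation,

\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{x}} \right) - \frac{\partial L}{\partial x} = 0

and for L = \frac{1}{2} m \dot{x}^2 - V(x) that works out to m \ddot{x} = -\frac{dV}{dx} , which is just force equals mass times acceleration again.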

So, since it introduces in clear language a fascinating and important part of physics and mathematics, I’d recommend folks try reading the essay. It’s building up to an explanation of fields as the modern physicist understands them, too, which is similarly an important topic worth being informed about.

Reading the Comics, October 13, 2012


I suppose it’s been long enough to resume the review of math-themed comic strips. I admit there are weeks I don’t have much chance to write regular articles, and then I feel embarrassed that I post only links to comic strips, but I do enjoy the comics and the comic strip reviews. This one gets slightly truncated because King Features Syndicate has locked down its Comics Kingdom archives, making it blasted inconvenient to read its strips and nearly impossible to link to them in any practical, archivable way. They do offer a service, DailyInk.com, with their comic strips, but I can hardly expect every reader of mine to pay up over there just for the odd day when Mandrake the Magician mentions something I can build a math problem from. Until I work out an acceptable-to-me resolution, then, I’ll be dropping back to gocomics.com and a few oddball strips that the Houston Chronicle carries.

Continue reading “Reading the Comics, October 13, 2012”
