Reading the Comics, November 30, 2019: Big Embarrassing Mistake Edition


See if you can spot where I discover my having made a big embarrassing mistake. It’s fun! For people who aren’t me!

Lincoln Peirce’s Big Nate for the 24th has boy-genius Peter drawing “electromagnetic vortex flow patterns”. Nate, reasonably, sees this sort of thing as completely abstract art. I’m not precisely sure what Peirce means by “electromagnetic vortex flow”. These are all terms that mathematicians, and mathematical physicists, would be interested in. That specific combination, though, I can find only a few references for. It seems to serve as a sensing tool, though.

Nate: 'Ah, now that's what I'm talking about! A boy, paper, and crayons, the simple pleasures. I know you're a genius, Peter, but it's great to see you just being a kid for a change! And you're really letting it rip! You're not trying to make something that looks real! It's just colors and shapes and --- ' Peter: 'This is a diagram of electromagnetic vortex flow patterns.' Nate: 'I knew that.' Peter: 'Hand me the turquoise.'
Lincoln Peirce’s Big Nate for the 24th of November, 2019. So, did you know I’ve been spelling Lincoln Peirce’s name wrong all this time? Yeah, I didn’t realize either. But look at past essays with Big Nate discussed in them and you’ll see. I’m sorry for this and embarrassed to have done such a lousy job looking at the words in front of me for so long.

No matter. Electromagnetic fields are interesting to a mathematical physicist, and so to mathematicians. Often a field like this can be represented as a system of vortices, too, points around which something swirls and which combine into the field that we observe. This can be a way to turn a continuous field into a set of discrete particles, which we might have better tools to study. And to draw what electromagnetic fields look like — even in a very rough form — can be a great help to understanding what they will do, and why. They also can be beautiful in ways that communicate even to those who don’t understand the thing modelled.

Megan Dong’s Sketchshark Comics for the 25th is a joke based on the reputation of the Golden Ratio. This is the idea that the ratio, 1:\frac{1}{2}\left(1 + \sqrt{5}\right) (roughly 1:1.6), is somehow a uniquely beautiful composition. You may sometimes see memes with some nice-looking animal and various boxes superimposed over it, possibly along with a spiral. The rectangles have width and height in the Golden Ratio. And the ratio is kind of attractive since \frac{1}{2}\left(1 + \sqrt{5}\right) is about 1.618, and 1 \div \frac{1}{2}\left(1 + \sqrt{5}\right) is about 0.618. It’s a cute pattern, and there are other similar cute patterns. There is a school of thought that this is somehow transcendently beautiful, though.

Man, shooing off a woman holding a cat: 'I don't like cute animals. I like BEAUTIFUL animals.' In front of portraits of an eagle, lion, and whale: 'Animals with golden-ratio proportions and nice bone-structure.'
Megan Dong’s Sketchshark Comics for the 25th of November, 2019. So far I’m aware I have never discussed this comic before, making this another new-tag day. This and future essays with Sketchshark Comics in them should be at this link.

It’s all bunk. People may find stuff that’s about one-and-a-half times as tall as it is wide, or as wide as it is tall, attractive. But experiments show that they aren’t likely to find something with Golden Ratio proportions more attractive than, say, something with 1:1.5 proportions, or 1:1.8, or even to be particularly consistent about what they like. You might be able to find (say) that the ratio of an eagle’s body length to its wing span is something close to 1:1.6. But any real-world thing has a lot of things you can measure. It would be surprising if you couldn’t find something near enough a ratio you liked. The guy is being ridiculous.
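If you want to see that cute pattern for yourself, it takes only a couple of lines to check: the reciprocal of the Golden Ratio has the same digits after the decimal point as the ratio itself.

```python
# The Golden Ratio: phi = (1 + sqrt(5)) / 2.
# Its reciprocal is exactly phi - 1, so the two numbers
# share the same decimal part, 0.618033...
import math

phi = (1 + math.sqrt(5)) / 2
print(phi)       # about 1.618
print(1 / phi)   # about 0.618
print(math.isclose(1 / phi, phi - 1))  # True
```

That coincidence follows from the Golden Ratio satisfying x^2 = x + 1 , so dividing through by x gives x = 1 + \frac{1}{x} .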

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 26th builds on the idea that everyone could be matched to a suitable partner, given a proper sorting algorithm. I am skeptical of any “simple algorithm” being any good for handling complex human interactions such as marriage. But let’s suppose such an algorithm could exist.

Mathematician: 'Thanks to computer science we no longer need dating. We can produce perfect marriages with simple algorithms.' Assistant: 'ooh!' [ AND SO ] Date-o-Tron, to the mathematician and her assistant: 'There are many women you'd be happier with, but they're already with people whom they prefer to you. Thus, you will be paired with your 4,291th favorite choice. We have a stable equilibrium.' Mathematician: 'Hooray!'
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 26th of November, 2019. Someday I’ll go a week without an essay mentioning Saturday Morning Breakfast Cereal, but this is not that day. Or week. The phrasing gets a little necessarily awkward here.

This turns matchmaking into a problem of linear programming. Arguably it always was. But the best possible matches for society might not be — likely will not be — the matches everyone figures to be their first choices. Or even top several choices. For one, our desired choices are not necessarily the ones that would fit us best. And as the punch line of the comic implies, what might be the globally best solution, the one that has the greatest number of people matched with their best-fit partners, would require some unlucky souls to be in lousy fits.

Although, while I believe that’s the intention of the comic strip, it’s not quite what’s on panel. The assistant is told he’ll be matched with his 4,291st favorite choice, and I admit having to go that far down the favorites list is demoralizing. But there are about 7.7 billion people in the world. This is someone who’ll be a happier match with him than 7,699,995,709 people would be. That’s a pretty good record, really. You can fairly ask how much worse that is than the person who “merely” makes him happier than 7,699,997,328 people would.
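The Date-o-Tron’s “stable equilibrium” is the language of stable matching, the kind of thing the Gale-Shapley algorithm produces: a pairing in which no two people would both rather be with each other than with their assigned partners. Here is a minimal sketch of that algorithm; the names and preference lists are invented for illustration and have nothing to do with the strip.

```python
# A minimal Gale-Shapley stable-matching sketch.
# Proposers work down their preference lists; each reviewer keeps the
# best proposal seen so far. The result is stable: no proposer and
# reviewer both prefer each other to their assigned partners.

def gale_shapley(proposer_prefs, reviewer_prefs):
    # rank[r][p] = how highly reviewer r ranks proposer p (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)           # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                            # reviewer -> proposer

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p                  # reviewer was unmatched
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])         # bump the current partner
            match[r] = p
        else:
            free.append(p)                # rejected; try next choice
    return match

# Hypothetical preference lists, purely for illustration.
proposers = {'a': ['x', 'y'], 'b': ['x', 'y']}
reviewers = {'x': ['b', 'a'], 'y': ['a', 'b']}
print(gale_shapley(proposers, reviewers))  # {'x': 'b', 'y': 'a'}
```

Note that stability is a weaker promise than everyone getting a top choice, which is exactly the comic’s joke: you can be stably matched to your 4,291st favorite so long as nobody better would have you.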


And that’s all I have for last week. Sunday I hope to publish another Reading the Comics post, one way or another. And later this week I’ll have closing thoughts on the Fall 2019 A-to-Z sequence. And I do sincerely apologize to Lincoln Peirce for getting his name wrong, and this on a comic strip I’ve been reading since about 1991.

Reading the Comics, May 31, 2017: Feast Week Edition


You know we’re getting near the end of the (United States) school year when Comic Strip Master Command orders everyone to clear out their mathematics jokes. I’m assuming that’s what happened here. Or else a lot of cartoonists had word problems on their minds eight weeks ago. Also eight weeks ago plus whenever they originally drew the comics, for those that are deep in reruns. It was busy enough to split this week’s load into two pieces and might have been worth splitting into three, if I thought I had publishing dates free for all that.

Larry Wright’s Motley Classics for the 28th of May, a rerun from 1989, is a joke about using algebra. Occasionally mathematicians try to use the ability of people to catch things in midair as evidence of the sorts of differential-equation solving that we all can do, if imperfectly, in our heads. But I’m not aware of evidence that anyone does anything that sophisticated. I would be stunned if we didn’t really work by a process of making a guess of where the thing should be and refining it as time allows, with experience helping us make better guesses. There’s good stuff to learn in modeling how to catch stuff, though.

Michael Jantze’s The Norm Classics rerun for the 28th opines about why in algebra you had to not just have an answer but explain why that was the answer. I suppose mathematicians get trained to stop thinking about individual problems and instead look to classes of problems. Is it possible to work out a scheme that works for many cases instead of one? If it isn’t, can we at least say something interesting about why it’s not? And perhaps that’s part of what makes algebra classes hard. To think about a collection of things is usually harder than to think about one, and maybe instructors aren’t always clear about how to turn the specific into the general.

Also I want to say some very good words about Jantze’s graphical design. The mock textbook cover for the title panel on the left is so spot-on for a particular era in mathematics textbooks it’s uncanny. The all-caps Helvetica, the use of two slightly different tans, the minimalist cover art … I know shelves stuffed full in the university mathematics library where every book looks like that. Plus, “[Mathematics Thing] And Their Applications” is one of the roughly four standard approved mathematics book titles. He paid good attention to his references.

Gary Wise and Lance Aldrich’s Real Life Adventures for the 28th deploys a big old whiteboard full of equations for the “secret” of the universe. This makes a neat change from finding the “meaning” of the universe, or of life. The equations themselves look mostly like gibberish to me, but Wise and Aldrich make good use of their symbols. The symbol \vec{B} , a vector-valued quantity named B, turns up a lot. This symbol we often use to represent magnetic flux. The B without a little arrow above it would represent the intensity of the magnetic field. Similarly an \vec{H} turns up. This we often use for magnetic field strength. While I didn’t spot a \vec{E} — electric field — which would be the natural partner to all this, there are plenty of bare E symbols. Those would represent electric potential. And many of the other symbols are what would naturally turn up if you were trying to model how something is tossed around by a magnetic field. Q, for example, is often the electric charge. ω is a common symbol for how fast an electromagnetic wave oscillates. (It’s not the frequency, but it’s related to the frequency.) The use of symbols is consistent enough, in fact, that I wonder if Wise and Aldrich did use a legitimate sprawl of equations and I’m missing the referenced problem.

John Graziano’s Ripley’s Believe It Or Not for the 28th mentions how many symbols are needed to write out the numbers from 1 to 100. Is this properly mathematics? … Oh, who knows. It’s just neat to know.

Mark O’Hare’s Citizen Dog rerun for the 29th has the dog Fergus struggle against a word problem. Ordinary setup and everything, but I love the way O’Hare draws Fergus in that outfit and thinking hard.

The Eric the Circle rerun for the 29th by ACE10203040 is a mistimed Pi Day joke.

Bill Amend’s FoxTrot Classic for the 31st, a rerun from the 7th of June, 2006, shows the conflation of “genius” and “good at mathematics” in everyday use. Amend has picked a quixotic but in-character thing for Jason Fox to try doing. Euclid’s Fifth Postulate is one of the classic obsessions of mathematicians throughout history. Euclid admitted the thing — a confusing-reading mess of propositions — as a postulate because … well, there’s interesting geometry you can’t do without it, and there doesn’t seem any way to prove it from the rest of his geometric postulates. So it must be assumed to be true.

There isn’t a way to prove it from the rest of the geometric postulates, but it took mathematicians over two thousand years of trying to become convinced of the fact. But I know I went through a time of wanting to try finding a proof myself. It was a mercifully short-lived time that ended in my humbly understanding that as smart as I figured I was, I wasn’t that smart. We can suppose Euclid’s Fifth Postulate to be false and get interesting geometries out of that, particularly the geometries of the surface of the sphere, and the geometry of general relativity. Jason will surely sometime learn.

A Leap Day 2016 Mathematics A To Z: Yukawa Potential


Yeah, ‘Y’ is a lousy letter in the Mathematics Glossary. I have a half-dozen mathematics books on the shelf by my computer. Some is semi-popular stuff like Richard Courant and Herbert Robbins’s What Is Mathematics? (the Ian Stewart revision). Some is fairly technical stuff, by which I mean Hidetoshi Nishimori’s Statistical Physics of Spin Glasses and Information Processing. There’s just no ‘Y’ terms in any of them worth anything. But I can rope something into the field. For example …

Yukawa Potential

When you as a physics undergraduate first take mechanics it’s mostly about very simple objects doing things according to one rule. The objects are usually these indivisible chunks. They’re either perfectly solid or they’re points, too tiny to have a surface area or volume that might mess things up. We draw them as circles or as blocks because they’re too hard to see on the paper or board otherwise. We spend a little time describing how they fall in a room. This lends itself to demonstrations in which the instructor drops a rubber ball. Then we go on to a mass on a spring hanging from the ceiling. Then to a mass on a spring attached to another mass.

Then we go on to two things sliding on a surface and colliding, which would really lend itself to bouncing pool balls against one another. Instead we use smaller solid balls. Sometimes those “Newton’s Cradle” things with the five balls that dangle from wires and just barely touch each other. They give a good reason to start talking about vectors. I mean positional vectors, the ones that say “stuff moving this much in this direction”. Ordinary vectors, that is. Then we get into stars and planets and moons attracting each other by gravity. And then we get into the stuff that really needs calculus. The earlier stuff is helped by it, yes. It’s just by this point we can’t do without it.

The “things colliding” and “balls dropped in a room” are the odd cases in this. Most of the interesting stuff in an introduction to mechanics course is about things attracting, or repelling, other things. And, particularly, they’re particles that interact by “central forces”. Their attraction or repulsion is along the line that connects the two particles. (Impossible for a force to do otherwise? Just wait until Intro to Mechanics II, when magnetism gets in the game. After that, somewhere in a fluid dynamics course, you’ll see how a vortex interacts with another vortex.) The potential energies for these all vary with distance between the points.

Yeah, they also depend on the mass, or charge, or some kind of strength-constant for the points. They also depend on some universal constant for the strength of the interacting force. But those are, well, constant. If you move the particles closer together or farther apart the potential changes just by how much you moved them, nothing else.

Particles hooked together by a spring have a potential that looks like \frac{1}{2}k r^2 . Here ‘r’ is how far the particles are from each other. ‘k’ is the spring constant; it’s just how strong the spring is. The one-half makes some other stuff neater. It doesn’t do anything much for us here. A particle attracted by another gravitationally has a potential that looks like -G M \frac{1}{r} . Again ‘r’ is how far the particles are from each other. ‘G’ is the gravitational constant of the universe. ‘M’ is the mass of the other particle. (The particle’s own mass doesn’t enter into it.) The electric potential looks like the gravitational potential but we have different symbols for stuff besides the \frac{1}{r} bit.

The spring potential and the gravitational/electric potential have an interesting property. You can have “closed orbits” with a pair of them. You can set a particle orbiting another and, with time, get back to exactly the original positions and velocities. (With three or more particles you’re not guaranteed anything.) The curious thing is this doesn’t always happen for potentials that look like “something or other times r to a power”. In fact, it never happens, except for the spring potential, the gravitational/electric potential, and — peculiarly — for the potential k r^7 . ‘k’ doesn’t mean anything there, and we don’t put a one-seventh or anything out front for convenience, because nobody knows anything that needs anything like that, ever. We can have stable orbits, ones that stay within a minimum and a maximum radius, for a potential k r^n whenever n is larger than -2, at least. And that’s it, for potentials that are nothing but r-to-a-power.
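If you’d like to check that stability claim for yourself, here’s one way. For V(r) = k r^n a circular orbit sits where the effective potential k r^n + \frac{L^2}{2 m r^2} has a critical point, and the orbit is stable when the second derivative there is positive. A numerical sketch, with the constants L, m, and k being arbitrary choices of mine:

```python
# Stability of circular orbits in a potential V(r) = k * r**n.
# The effective potential adds the centrifugal term L^2 / (2 m r^2);
# the circular orbit at its critical point is stable when the second
# derivative there is positive. Analytically that sign works out to
# k*n*(n+2), so with an attractive force (k*n > 0) stability needs n > -2.

def stable_circular_orbit(n, L=1.0, m=1.0):
    k = 1.0 if n > 0 else -1.0                      # keep the force attractive
    r_c = (L**2 / (m * k * n)) ** (1.0 / (n + 2))   # where V_eff'(r_c) = 0

    def v_eff(r):
        return k * r**n + L**2 / (2 * m * r**2)

    h = 1e-5   # central-difference second derivative at r_c
    second = (v_eff(r_c + h) - 2 * v_eff(r_c) + v_eff(r_c - h)) / h**2
    return second > 0

for n in (2, -1, 7, -1.5, -3):
    print(n, stable_circular_orbit(n))
# n = 2, -1, 7, and -1.5 come out stable; n = -3 does not
```

This only tests stability of nearly circular orbits, not closure; closure is the much more delicate property that singles out the handful of special powers.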

Ah, but does the potential have to be r-to-a-power? And here we see Dr Hideki Yukawa’s potential energy. Like these springs and gravitational/electric potentials, it varies only with the distance between particles. Its strength isn’t just the radius to a power, though. It uses a more complicated expression:

-K \frac{e^{-br}}{r}

Here ‘K’ is a scaling constant for the strength of the whole force. It’s the kind of thing we have ‘G M’ for in the gravitational potential, or ‘k’ in the spring potential. The ‘b’ is a second kind of scaling. And that’s a kind of range. A range of what? It’ll help to look at this potential rewritten a little. It’s the same as -\left(K \frac{1}{r}\right) \cdot \left(e^{-br}\right) . That’s the gravitational/electric potential, times e^{-br} . That’s a number that will be very large as r is small, but will drop to zero surprisingly quickly as r gets larger. How quickly will depend on b. The larger a number b is, the faster this drops to zero. The smaller a number b is, the slower this drops to zero. And if b is equal to zero, then e^{-br} is equal to 1, and we have the gravitational/electric potential all over again.
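A quick numerical check of that limiting behaviour, with the constants being arbitrary choices of mine:

```python
# The Yukawa potential -K * exp(-b*r) / r reduces to the
# gravitational/electric form -K / r as the range parameter b -> 0,
# and dies off far faster than 1/r when b is large.
import math

def yukawa(r, K=1.0, b=1.0):
    return -K * math.exp(-b * r) / r

r = 2.0
print(yukawa(r, b=0.0))    # exactly -K/r = -0.5
print(yukawa(r, b=0.001))  # nearly -0.5
print(yukawa(r, b=5.0))    # nearly zero: e^(-10) suppresses it
```

So ‘b’ really does act as an inverse range: out past distances of about 1/b the force has all but vanished.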

Yukawa introduced this potential to physics in the 1930s. He was trying to model the forces which keep an atom’s nucleus together. It represents the potential we expect from particles that attract one another by exchanging some particles with a rest mass. This rest mass is hidden within that number ‘b’ there. If the rest mass is zero, the particles are exchanging something like light, and that’s just what we expect for the electric potential. For the gravitational potential … um. It’s complicated. It’s one of the reasons why we expect that gravitons, if they exist, have zero rest mass. But we don’t know that gravitons exist. We have a lot of trouble making theoretical gravitons and quantum mechanics work together. I’d rather be skeptical of the things until we need them.

Still, the Yukawa potential is an interesting mathematical creature even if we ignore its important role in modern physics. When I took my Introduction to Mechanics final one of the exam problems was deriving the equivalent of Kepler’s Laws of Motion for the Yukawa Potential. I thought then it was a brilliant problem. I still do. It struck me while writing this that I don’t remember whether it allows for closed orbits, except when b is zero. I’m a bit afraid to try to work out whether it does, lest I learn that I can’t follow the reasoning for that anymore. That would be a terrible thing to learn.