Reading the Comics, December 2, 2019: Laconic Week Edition


You know, I had picked these comic strips out as the ones that, last week, had the most substantial mathematics content. And on preparing this essay I realize there’s still not much. Maybe I could have skipped out on the whole week instead.

Bill Amend’s FoxTrot for the 1st is mostly some wordplay. Jason’s finding ways to represent the counting numbers with square roots. The joke plays more tightly than one might expect. Root beer was, traditionally, made with sassafras root, hence the name. (Most commercial root beers don’t use actual sassafras anymore as the safrole in it is carcinogenic.) The mathematical term root, meanwhile, derives from the idea that the root of a number is the thing which generates it. That 2 is the fourth root of 16, because four 2’s multiplied together is 16. That idea. This draws on the metaphor of the roots of a plant being the thing which lets the plant grow. This isn’t one of those cases where two words have fused together into one set of letters.

Jason, pouring pop: 'Sqrt(9) ounces .. sqrt(16) ounces ... sqrt(81) ounces ... sqrt(144) cold, delicious ounces!' Paige: 'Weirdo.' Jason: 'I take my root beer pouring seriously.'
Bill Amend’s FoxTrot for the 1st of December, 2019. Essays mentioning either the reprint or Sunday-only new issues of FoxTrot appear at this link.

Jef Mallett’s Frazz for the 1st is set up with an exponential growth premise. The kid — I can’t figure out his name — promises to increase the number of push-ups he does each day by ten percent, with exciting forecasts for how many that will be before long. As Frazz observes, it’s not especially realistic. It’s hard to figure someone working themselves up from nothing to 300 push-ups a day in only two months.

Also much else of the kid’s plan doesn’t make sense. On the second day he plans to do 1.1 push-ups? On the third 1.21 push-ups? I suppose we can rationalize that, anyway, by talking about getting a fraction of the way through a push-up. But if we do that, then, I make out by the end of the month that he’d be doing about 15.863 push-ups a day. At the end of two months, at this rate, he’d be at 276.8 push-ups a day. That’s close enough to three hundred that I’d let him round it off. But nobody could be generous enough to round 15.8 up to 90.

Kid: 'I'm going to do one push-up today. And I'm going to keep doing push-ups every day for a month. And: I'm going to increase the number of push-ups by a modest 10 percent each day. Know how many push-ups I'll do on the last day of the month? 90! And if I keep it up one more month, I'll be up to 300 push-ups at a time!' Frazz: 'Well-intended, if not especially realistic.' Kid: 'Also by then, the world will have completely forgotten about this history assignment I'm avoiding right now.' Frazz: 'Realistic, if not especially well-intended.'
Jef Mallett’s Frazz for the 1st of December, 2019. Essays which mention something from Frazz should be at this link.

An alternate interpretation of his plans would be to say that each day he’s doing ten percent more, and round that up. So that, like, on the second day he’d do 1.1 rounded up to 2 push-ups, and on the third day 2.2 rounded up to 3 push-ups, and so on. Then day thirty looks good: he’d be doing 94. But the end of two months is a mess as by then he’d be doing about 1,710 push-ups a day. I don’t see a way to fit all these pieces together. I’m curious what the kid thought his calculation was. Or, possibly, what Jef Mallett thought the calculation was.
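If you want to check the arithmetic, here’s a quick sketch in Python of both readings. The rounding-up convention is my guess at what the kid intends, and I use exact integer arithmetic for the ceiling so floating-point quirks don’t skew the schedule.

```python
# Literal reading: 1.1**(day - 1) push-ups on each day.
plain = [1.1 ** (day - 1) for day in range(1, 61)]
print(round(plain[29], 3), round(plain[59], 1))   # day 30: 15.863, day 60: 276.8

# Alternate reading: ten percent more than yesterday, rounded up each day.
# (count * 11 + 9) // 10 is ceil(count * 1.1) computed exactly in integers.
count, schedule = 1, [1]
for _ in range(59):
    count = (count * 11 + 9) // 10
    schedule.append(count)
print(schedule[29], schedule[59])   # day 30: 94; day 60: 1,710
```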

Kid: 'I'm not gonna be an accountant like you, dad! [Holding guitar] I'll become a musician so I don't have to work a real job!' [In front of computer, in suit.] 'I can just sit with my guitar, optimizing search results and maximizing click velocity and ... ' [ Realizing he's studying spreadsheets, clicks-per-ad-dollar; curses himself ]
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 2nd of December, 2019. There are a lot of essays that get into Saturday Morning Breakfast Cereal, and those essays are gathered here.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 2nd has a kid rejecting accounting in favor of his art. But, wanting to do that art with optimum efficiency … ends up doing accounting. It’s a common story. A common question after working out that someone can do a thing is how to do it best. Best has many measures, yes. But the logic behind how to find it stays the same. Here I admit my favorite kinds of games tend to have screen after screen of numbers, with the goal being to make some number as great as possible given the constraints. If they ever made Multiple Entry Accounting Simulator none of you would ever hear from me again.


Which may be some time! Between Reading the Comics, A to Z, recap posts, and the occasional bit of filler I’ve just finished slightly over a hundred days in a row posting something. That is, however, at its end. I don’t figure to post anything tomorrow. I may not have anything before Sunday’s Reading the Comics post, at this link. I’ll be letting my typing fingers sleep in instead. Thanks for reading.

Why does the Quantum Mechanics Momentum Operator look like that?


I don’t know. I say this for anyone this has unintentionally clickbaited, or who’s looking at a search engine’s preview of the page.

I come to this question from a friend, though, and it’s got me wondering. I don’t have a good answer, either. But I’m putting the question out there in case someone reading this, sometime, does know. Even if it’s in the remote future, it’d be nice to know.

And before getting to the question I should admit that “why” questions are, to some extent, a mug’s game. Especially in mathematics. I can ask why the sum of two consecutive triangular numbers is a square number. But the answer is … well, that’s what we chose to mean by ‘triangular number’, ‘square number’, ‘sum’, and ‘consecutive’. We can show why the arithmetic of the combination makes sense. But that doesn’t seem to answer “why” the way, like, why Neil Armstrong was the first person to walk on the moon. It’s more a “why” like, “why are there Seven Sisters [ in the Pleiades ]?” [*]

But looking for “why” can, at least, give us hints to why a surprising result is reasonable. Draw dots representing a square number, slice it along the space right below a diagonal. You see dots representing two successive triangular numbers. That’s the sort of question I’m asking here.
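A quick check of that arithmetic, for the record: the n-th triangular number is \frac{1}{2}n\left(n + 1\right) . So two consecutive ones add up to \frac{1}{2}\left(n - 1\right)n + \frac{1}{2}n\left(n + 1\right) = \frac{1}{2}n\left(\left(n - 1\right) + \left(n + 1\right)\right) = n^2 , a square number.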

From here, we get to some technical stuff and I apologize to readers who don’t know or care much about this kind of mathematics. It’s about the wave-mechanics formulation of quantum mechanics. In this, everything that’s observable about a system is contained within a function named \Psi . You find \Psi by solving a differential equation. The differential equation represents problems. Like, a particle experiencing some force that depends on position. This is written as a potential energy, because that’s easier to work with. But that’s the kind of problem that gets done.

Grant that you’ve solved \Psi , since that’s hard and I don’t want to deal with it. You still don’t know, like, where the particle is. You never know that, in quantum mechanics. What you do know is its distribution: where the particle is more likely to be, where it’s less likely to be. You get from \Psi to this distribution for, like, particles by applying an operator to \Psi . An operator is a function with a domain and a range that are spaces. Almost always these are spaces of functions.

Each thing that you can possibly observe, in a quantum-mechanics context, matches an operator. For example, there’s the x-coordinate operator, which tells you where along the x-axis your particle’s likely to be found. This operator is, conveniently, just x. So evaluate x\Psi and that’s your x-coordinate distribution. (This is assuming that we know \Psi in Cartesian coordinates, ones with an x-axis. Please let me do that.) This looks just like multiplying your old function by x, which is nice and easy.

Or you might want to know momentum. The momentum in the x-direction has an operator, \hat{p_x} , which equals -\imath \hbar \frac{\partial}{\partial x} . The \partial is partial derivatives. The \hbar is Planck’s constant, a number which in normal systems of measurement is amazingly tiny. And you know how \imath^2 = -1 . That – symbol is just the minus or the subtraction symbol. So to find the momentum distribution, evaluate -\imath \hbar \frac{\partial}{\partial x}\Psi . This means taking a derivative of the \Psi you already had. And multiplying it by some numbers.
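As a quick example of what the operator does, take a plane wave, \Psi = e^{\imath k x} . Then -\imath \hbar \frac{\partial}{\partial x} e^{\imath k x} = -\imath \hbar \left(\imath k\right) e^{\imath k x} = \hbar k e^{\imath k x} . The wave comes back unchanged except for the factor \hbar k , which is the momentum that wave carries. This is the standard textbook example, not anything particular to my question here.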

I don’t mind this multiplication by \hbar . That’s just a number and it’s a quirk of our coordinate system that it isn’t 1. If we wanted, we could set up our measurements of length and duration and stuff so that it was 1 instead.

But. Why is there a -\imath in the momentum operator rather than the position operator? Why isn’t one \sqrt{-\imath} x and the other \sqrt{-\imath} \frac{\partial}{\partial x} ? From a mathematical physics perspective, position and momentum are equally good variables. We tend to think of position as fundamental, but that’s surely a result of our happening to be very good at seeing where things are. If we were primarily good at spotting the momentum of things around us, we’d surely see that as the more important variable. When we get into Hamiltonian mechanics we start treating position and momentum as equally fundamental. Even the notation emphasizes how equal they are in importance, and treatment. We stop using ‘x’ or ‘r’ as the variable representing position. We use ‘q’ instead, a mirror to the ‘p’ that’s the standard for momentum. (‘p’ we’ve always used for momentum because … … … uhm. I guess ‘m’ was already committed, for ‘mass’. What I have seen is that it was taken as the first letter in ‘impetus’ with no other work to do. I don’t know that this is true. I’m passing on what I was told explains what looks like an arbitrary choice.)

So I’m supposing that this reflects how we normally set up \Psi as a function of position. That this is maybe why the position operator is so simple and bare. And then why the momentum operator has a minus, an imaginary number, and this partial derivative stuff. That if we started out with the wave function as a function of momentum, the momentum operator would be just the momentum variable. The position operator might be some mess with \imath and derivatives or worse.

I don’t have a clear guess why one and not the other operator gets full possession of the \imath though. I suppose that has to reflect convenience. If position and momentum are dual quantities then I’d expect we could put a mere constant like -\imath wherever we want. But this is, mostly, me writing out notes and scattered thoughts. I could be trying to explain something that might be as explainable as why the four interior angles of a rectangle are all right angles.

So I would appreciate someone pointing out the obvious reason these operators look like that. I may grumble privately at not having seen the obvious myself. But I’d like to know it anyway.


[*] Because there are not eight.

Reading the Comics, December 6, 2019: The Glances Edition


Although I’m out of the A to Z sequence, I like the habit of posting just the comic strips that name-drop mathematics for the Sunday post. It frees up so much of my Saturday, at the cost of committing my Sunday. So here’s last week’s casual mentions of some mathematics topic.

Wayno and Piraro’s Bizarro for the 3rd of December has a kid doing badly in arithmetic and blaming forces beyond their control.

Bill Holbrook’s On The Fastrack for the 5th has the CEO of Fastrack, Inc, disappointed in what analytics can do. Analytics, here, is the search for statistical correlations, traits that are easy to spot and that indicate greater risks or opportunities. The desire to find these is great and natural. Real data is, though, tantalizingly not quite good enough to answer most interesting questions.

Ruben Bolling’s Super-Fun-Pak Comix for the 5th repeats A Voice From Another Dimension, Bolling’s riff on the Flatland premise.

Tauhid Bondia’s Crabgrass for the 6th uses a background panel of calculus work as part of illustrating deep thinking about something, in this case, how to fairly divide chocolate. One of calculus’s traditional strengths is calculating the volumes of interesting figures.

Richard Thompson’s Richard’s Poor Almanac for the 6th reprints the Christmas Tree guide with a Cubist Fir that “no longer inhabits Euclidean space”.

The World And The Way It Would Be If Numbers Never Existed. Man looking at an elevator control panel: 'This is why I don't like elevators. It's always pot luck.'
Joe Martin’s Mr Boffo for the 6th of December, 2019. The occasional essay mentioning Mr Boffo is put at this link.

Joe Martin’s Mr Boffo for the 6th is a cute joke on one of the uses of numbers, that of being a convenient and inexhaustible index. The strip ran on Friday and I don’t know how to link to the archives in a stable way. This is why I’ve put the comic up here.

And that’s enough comics for just now. Later this week I’ll get to the comics that inspire me to write more.

Here Are All My Past A To Z Sequences


While I’m not necessarily going to continue highlighting old A to Z essays every Friday and Saturday, it is a fact I’ve now got six pages listing all the topics for the six A to Z’s that I have completed. So let me share them here. This may be convenient for you, the reader, to see what kinds of things I’ve written up. It’s certainly convenient for me, since someday I’ll want all this stuff organized. The past A to Z sequences have been:

  • Summer 2015. Featuring ansatz, into, and well-posed problem.
  • Leap Day 2016. With continued fractions, polynomials, quaternions, and transcendental numbers.
  • End 2016. Featuring the Fredholm alternative, general covariance, normal numbers, and the Monster Group.
  • Summer 2017. Starring Benford’s Law, topology, and x.
  • Fall 2018. Featuring Jokes, the Infinite Monkey Theorem, the Sorites Paradox, and the Pigeonhole Principle.
  • Fall 2019. With Buffon’s Needle, Versine, the Julia Set, and Fourier Series.

What You Need To Pass This Class


It’s near the end of the (US) college fall semester. So it’s a good time to point out again that it is possible to work out exactly what you need on the final exam to get whatever grade you want in the course. What it’s not possible to do is study just exactly enough to get that grade, mind you. I suppose it can give you some idea where a good study session can most make a difference, but really, what you need is to study routinely and to get enough sleep.

But if you are just trying to get a rough idea of what you need, here’s a table with common cases, of final exam weight and to-date averages. If that chart misses what your particular case needs (and, remember, there is no point in looking for great precision) here’s the formula to work out specifically your problem.
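And if you would rather compute than read a table, the underlying formula is just a weighted average rearranged. Here’s a minimal sketch in Python, assuming the final is the only grade left to earn and everything is on a 0-to-100 scale:

```python
def needed_on_final(current_average, final_weight, target_grade):
    """Score needed on the final exam.  current_average and target_grade are
    percentages; final_weight is the final's share of the course grade,
    as a fraction between 0 and 1."""
    return (target_grade - (1 - final_weight) * current_average) / final_weight

# Example: an 82 average going in, a final worth 30 percent, hoping for an 85.
print(needed_on_final(82, 0.30, 85))   # about 92
```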

And, as long as I have this open, let me share an episode of The Vic and SadeCast, about the renowned and strange 15-minute old-time-radio serial comedy Vic and Sade. Most episodes of the serial were two or three people talking past one another. The show may not be to your tastes, but if it is, it’s very much to your taste. This episode of the podcast features an October 1941 show aptly titled It’s Algebra, Uncle Fletcher.

What I Learned Doing My Fall 2019 Mathematics A To Z


Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

The most important thing I learned this time around was that I should have started a week or two earlier. Not that this should have been a Summer A to Z. It would be true for any season. It’s more that I started soliciting subjects for the first letters of the alphabet about two weeks ahead of publication. I didn’t miss a deadline this time around, and I didn’t hit the dread of starting an essay the day of publication. But the great thing about an A to Z sequence like this is being able to write well ahead of publication and I never got near that.

The Reading the Comics posts are already, necessarily, done close to publication. The only way to alter that is to make the Reading the Comics posts go even more than a week past the comics’ publication. Or lean on syndicated cartoonists to send me heads-ups. Anyway, if neither Reading the Comics nor A to Zs can give me breathing room, then what’s going wrong? So probably having topics picked as much as a month ahead of publication is the way I should go.

Picking topics is always the hardest part of writing things here. The A to Z gimmick makes it easy to get topics, though. The premise is both open and structured. I’m not sure I’d have as fruitful a response if I tossed out “explainer Fridays” or something and hoped people had ideas. A structured choice tends to be easier to make.

The biggest structural experiment this time around is that I put in two “recap” posts each week. These were little one- and two-paragraph things pointing to past A to Z essays. I’ve occasionally reblogged a piece, or done a post that points to old posts. Never systematically, though. Two recap posts a week seemed to work well enough. Some old stuff got new readers and nobody seemed upset. I even got those, at least, done comfortably ahead of deadline. When I finished a Thursday post I could feel like I was luxuriating in a long weekend, until I remembered the comics needed to be read.

Also, this now completes the sixth of my A to Z sequences. I’ve got enough that if I really wanted, I could drop to one new post a week, and do nothing but recaps the rest of the time. It would give me six months posting something every day. I have got nearly nine years’ worth of material here. Much of it is Reading the Comics posts, which date instantly. But the rest of the stuff in principle hasn’t aged, except in how my prose style has changed.

Another thing learned, and a bit of a surprise, was that I found a lot of fundamentals this time around. Things like “differential equations” or “Fourier series” or “Taylor series”. These are things that any mathematics major would know. These are even things that anyone a bit curious about mathematics might know. There is a place for very specific, technical terms. But some big-picture essays turn out to be comfortable too.

One of the things I wanted to write about and couldn’t was the Yang-Mills Equation. It would have taken too many words for me to write. If I’d used earlier essays as lemmas, to set up parts of this, I might have made it. In past A to Z sequences some essays built on one another. But by the time I was considering Y, the prerequisite letters had already been filled. This is an argument for soliciting for the whole alphabet from the start, rather than breaking it up into several requests for topics. But even then I’d have had to be planning Y, at a time when I know I’d be trying to think about D’s and E’s. I’m not sure that’s plausible. It does imply, as I started out thinking, that I need to work farther ahead of deadline anyway.

The versine is a fun little thing.

How November 2019 Treated My Mathematics Blog


A couple months back I switched to comparing monthly readership figures to a twelve-month running average. Running averages offer some advantages in looking for any signal. They make statistics less sensitive to fluke events. The cost, of course, is that they take longer to recognize trends starting. But in October I had a singular freak event, with the A to Z essay on linear programming getting linked to from some forum vastly larger than mine. So that got an extra 4,900 page views in one day, and an extra six hundred or so the next, and so on. Can’t expect that to be regular, though.

There were a “mere” 2,333 page views around here in November. That’s small only compared to October’s spike. It’s a little down from September, but still, it’s above the twelve-month running average of 1,996.9 views in a month. Those views came from 1,568 unique visitors, which compares nicely to the running average of 1,330.3 visitors per month.

Bar chart of monthly views and visitors for the past two years. There's a spike to a bit under 9000 views from over 5000 visitors in October; it's fallen back to 2,333 views and 1,568 visitors for November.
As with my humor blog this month, I was so delighted that I was able to get the statistics at exactly the start of December, WordPress server time, that I forgot to do the URL hacking to get like five years’ data at once. Ah well.

There were 95 likes given to things around here in November, which is also above the running average of 68.8 likes in a month. And 23 comments, once again above the running average of 17.5 comments. So, posting stuff every single day works; who would have guessed, apart from everyone who knows anything about attracting audiences?

Well, more about posting to a predictable schedule, and stuff that people are interested in. But “just post a lot” can work too.

Or can it? November saw 77.8 views per posting, which is close to what September offered. But both are below the twelve-month running average of 114.0 views per posting. There were 52.3 visitors per posting, down from the average of 75.4 visitors per post. It’s back to around September’s 46.2 visitors per post though. There were 3.2 likes per post, down from the running average of 4.4. And there were 0.8 comments per posting, below the average of 1.1. It all implies there’s a best rate for these things. Or that filling out Fridays and Saturdays with mentions of older posts is not all that engaging.

Counting my home page there were 300 pages that got any views at all in November. There’d been 311 in October and 296 in September. 160 of them got more than one view, a bit under the 187 of October and 172 of September. 42 posts got at least ten views, down from October’s 52 but comparable to September’s 37. The most popular pieces, meanwhile, were:

Nice to see trapezoids back again. Also I’m happy that the versine’s been liked. I’m coming to enjoy this obscure trig function, although not so much as to use it for anything I care about.

Mercator-style map of the world, with the United States in darkest red, India in a less-dark red, and much of the Americas, Europe, and Pacific Asia in a roughly uniform pink. Not much from Africa, though, nor an arc from Syria through Iran and up to Kazakhstan.
I’m a touch surprised to learn I haven’t had a single page view from Madagascar all this year. Less surprised that I can’t tell whether I’ve had any page views from Comoros. (It’s several islands between Africa and Madagascar and is invisible at WordPress’s mapsize.)

94 countries or country-like entities sent me any page views in November. That’s down from October’s 116, and even September’s 69. 24 of these were single-reader countries, the same count as in October and above September’s 19. Here’s the roster of reading lands:

Country Readers
United States 1,205
India 172
Philippines 91
Canada 89
United Kingdom 72
Australia 55
Germany 50
Finland 36
Spain 33
Singapore 28
France 25
Hong Kong SAR China 23
Latvia 22
Mexico 21
Malaysia 20
South Africa 19
Ireland 16
Italy 16
Pakistan 16
Brazil 15
Sweden 14
Turkey 14
Poland 13
Netherlands 12
Bangladesh 11
Indonesia 10
Norway 10
Vietnam 10
Austria 9
Belgium 9
Greece 9
Japan 8
Ukraine 8
Israel 7
Nigeria 7
Bulgaria 6
China 6
Malta 6
Romania 6
Switzerland 6
Thailand 6
Belarus 5
Colombia 5
Ecuador 5
Kenya 5
New Zealand 5
Portugal 5
Taiwan 5
Egypt 4
Morocco 4
Myanmar (Burma) 4
Russia 4
Serbia 4
South Korea 4
United Arab Emirates 4
Croatia 3
Czech Republic 3
Hungary 3
Slovakia 3
Tanzania 3
Algeria 2
Cyprus 2
El Salvador 2
European Union 2
Ghana 2
Luxembourg 2
Mongolia 2
Saudi Arabia 2
Slovenia 2
Uganda 2
Albania 1 (*)
Argentina 1
Azerbaijan 1 (*)
Bosnia & Herzegovina 1
Botswana 1
Brunei 1
Chile 1
Denmark 1
Estonia 1
Jordan 1
Laos 1
Lithuania 1
Macedonia 1
Marshall Islands 1
Mauritius 1
Moldova 1
Nicaragua 1
Palestinian Territories 1
Papua New Guinea 1
Puerto Rico 1
Rwanda 1 (*)
Somalia 1
Sri Lanka 1
Trinidad & Tobago 1 (*)

Albania, Azerbaijan, Rwanda, and Trinidad & Tobago were single-view countries in October too. No countries are on a two-month single-view streak. The Philippines are back to being among the three countries sending me the greatest number of page views. Hi, whoever there finds me interesting.

From the start of this blog through the start of December I’ve posted 1,385 things. These have drawn a total of 96,191 page views, from 52,069 logged unique visitors, which does not count people from the earliest couple years.

From the start of 2019 to the start of December I’d posted 183 things, putting me one up over all of 2018 already. Only 2015 (188 posts) and 2016 (213 posts) have had more, to date. I’ve had 164,245 words published so far this year, which is also already my third most verbose year on record. 24,185 of these words were posted in November, for an average of 808 and one-sixth words per post. That’s below the year’s average of 898 words per post. October’s posts averaged 803.8 words, by the way, so apparently I’ve stabilized some.

Tomorrow I hope to post thoughts on what I learned doing the Fall 2019 A to Z sequence, the traditional close of that sort of project. And I do hope to keep up at least one Reading the Comics post per week. Past that, who can say what I’ll do?

If you’d like to know, you can follow me regularly. The easiest way is to add https://nebusresearch.wordpress.com/feed/ to your RSS reader. If you don’t have an RSS reader, a free account at Dreamwidth or Livejournal will serve as one: you can add RSS feeds to your Friends page. If you’ve already got a WordPress account, you can use “Follow Nebusresearch”, a button on the upper right corner of this page. My Twitter account as @Nebusj remains fallow, but still posts announcements, so that hasn’t broken at least.

In any case, thank you for reading, however it is you do it.

What I Wrote About In My 2019 Mathematics A To Z


And I have made it to the end! As is traditional, I mean to write a few words about what I learned in doing all of this. Also as is traditional, I need to collapse after the work of thirteen weeks of two essays per week describing a small glossary of terms mostly suggested by kind readers. So while I wait to do that, let me gather in one bundle a list of all the essays from this project. If this seems to you like a lazy use of old content to fill a publication hole let me assure you: this will make my life so much easier next time I do an A-to-Z. I’ve learned that, at least, over the years.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Reading the Comics, November 30, 2019: Big Embarrassing Mistake Edition


See if you can spot where I discover my having made a big embarrassing mistake. It’s fun! For people who aren’t me!

Lincoln Peirce’s Big Nate for the 24th has boy-genius Peter drawing “electromagnetic vortex flow patterns”. Nate, reasonably, sees this sort of thing as completely abstract art. I’m not precisely sure what Peirce means by “electromagnetic vortex flow”. These are all terms that mathematicians, and mathematical physicists, would be interested in. That specific combination, though, I can find only a few references for. It seems to serve as a sensing tool, though.

Nate: 'Ah, now that's what I'm talking about! A boy, paper, and crayons, the simple pleasures. I know you're a genius, Peter, but it's great to see you just being a kid for a change! And you're really letting it rip! You're not trying to make something that looks real! It's just colors and shapes and --- ' Peter: 'This is a diagram of electromagnetic vortex flow patterns.' Nate: 'I knew that.' Peter: 'Hand me the turquoise.'
Lincoln Peirce’s Big Nate for the 24th of November, 2019. So, did you know I’ve been spelling Lincoln Peirce’s name wrong all this time? Yeah, I didn’t realize either. But look at past essays with Big Nate discussed in them and you’ll see. I’m sorry for this and embarrassed to have done such a lousy job looking at the words in front of me for so long.

No matter. Electromagnetic fields are interesting to a mathematical physicist, and so to mathematicians. Often a field like this can be represented as a system of vortices, too, points around which something swirls and which combine into the field that we observe. This can be a way to turn a continuous field into a set of discrete particles, which we might have better tools to study. And to draw what electromagnetic fields look like — even in a very rough form — can be a great help to understanding what they will do, and why. They also can be beautiful in ways that communicate even to those who don’t understand the thing modelled.

Megan Dong’s Sketchshark Comics for the 25th is a joke based on the reputation of the Golden Ratio. This is the idea that the ratio, 1:\frac{1}{2}\left(1 + \sqrt{5}\right) (roughly 1:1.6), is somehow a uniquely beautiful composition. You may sometimes see memes with some nice-looking animal and various boxes superimposed over it, possibly along with a spiral. The rectangles have the Golden Ratio ratio of width to height. And the ratio is kind of attractive since \frac{1}{2}\left(1 + \sqrt{5}\right) is about 1.618, and 1 \div \frac{1}{2}\left(1 + \sqrt{5}\right) is about 0.618. It’s a cute pattern, and there are other similar cute patterns. There is a school of thought that this is somehow transcendently beautiful, though.
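(Why 1.618 and 0.618 pair up so neatly: write \varphi = \frac{1}{2}\left(1 + \sqrt{5}\right) . Squaring gives \varphi^2 = \varphi + 1 , and dividing that through by \varphi gives \varphi = 1 + \frac{1}{\varphi} . So \frac{1}{\varphi} = \varphi - 1 , which is about 0.618.)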

Man, shooing off a woman holding a cat: 'I don't like cute animals. I like BEAUTIFUL animals.' In front of portraits of an eagle, lion, and whale: 'Animals with golden-ratio proportions and nice bone-structure.'
Megan Dong’s Sketchshark Comics for the 25th of November, 2019. So far I’m aware I have never discussed this comic before, making this another new-tag day. This and future essays with Sketchshark Comics in them should be at this link.

It’s all bunk. People may find stuff that’s about one-and-a-half times as tall as it is wide, or as wide as it is tall, attractive. But experiments show that they aren’t any more likely to find something with Golden Ratio proportions attractive than, say, something with 1:1.5 proportions, or 1:1.8, or even to be particularly consistent about what they like. You might be able to find (say) that the ratio of an eagle’s body length to the wing span is something close to 1:1.6. But any real-world thing has a lot of things you can measure. It would be surprising if you couldn’t find something near enough a ratio you liked. The guy is being ridiculous.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 26th builds on the idea that everyone could be matched to a suitable partner, given a proper sorting algorithm. I am skeptical of any “simple algorithm” being any good for handling complex human interactions such as marriage. But let’s suppose such an algorithm could exist.

Mathematician: 'Thanks to computer science we no longer need dating. We can produce perfect marriages with simple algorithms.' Assistant: 'ooh!' [ AND SO ] Date-o-Tron, to the mathematician and her assistant: 'There are many women you'd be happier with, but they're already with people whom they prefer to you. Thus, you will be paired with your 4,291th favorite choice. We have a stable equilibrium.' Mathematician: 'Hooray!'
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 26th of November, 2019. Someday I’ll go a week without an essay mentioning Saturday Morning Breakfast Cereal, but this is not that day. Or week. The phrasing gets a little necessarily awkward here.

This turns matchmaking into a problem of linear programming. Arguably it always was. But the best possible matches for society might not — likely will not be — the matches everyone figures to be their first choices. Or even top several choices. For one, our desired choices are not necessarily the ones that would fit us best. And as the punch line of the comic implies, what might be the globally best solution, the one that has the greatest number of people matched with their best-fit partners, would require some unlucky souls to be in lousy fits.

Although, while I believe that’s the intention of the comic strip, it’s not quite what’s on panel. The assistant is told he’ll be matched with his 4,291th favorite choice, and I admit having to go that far down the favorites list is demoralizing. But there are about 7.7 billion people in the world. This is someone who’ll be a happier match with him than 7,699,995,709 people would be. That’s a pretty good record, really. You can fairly ask how much worse that is than the person who “merely” makes him happier than 7,699,997,328 people would be.
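The “stable equilibrium” line in the punch line evokes the classic Gale-Shapley stable-matching procedure. I’m not claiming that’s the Date-o-Tron’s actual algorithm, but for the curious, here’s a minimal sketch of it, with made-up names and preferences:

```python
def stable_match(proposer_prefs, acceptor_prefs):
    """Gale-Shapley stable matching.  Each argument maps a person to a list
    of the other side's names, most preferred first.  Returns a dict pairing
    each acceptor with a proposer."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)                 # proposers without a match yet
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                                # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]   # p's best not-yet-tried choice
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])             # the jilted proposer tries again
            engaged[a] = p
        else:
            free.append(p)                      # rejected; p tries the next name
    return engaged

proposers = {'ann': ['xia', 'yve'], 'bea': ['yve', 'xia']}
acceptors = {'xia': ['bea', 'ann'], 'yve': ['ann', 'bea']}
print(stable_match(proposers, acceptors))       # {'yve': 'bea', 'xia': 'ann'}
```

Everyone ends up matched, and no two people would both rather be with each other than with their assigned partners. Whether that counts as a “perfect marriage” is the comic’s joke.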


And that’s all I have for last week. Sunday I hope to publish another Reading the Comics post, one way or another. And later this week I’ll have closing thoughts on the Fall 2019 A-to-Z sequence. And I do sincerely apologize to Lincoln Peirce for getting his name wrong, and this on a comic strip I’ve been reading since about 1991.

Reading the Comics, November 30, 2019: The Glances Edition


I like this scheme where I use the Sunday publication slot to list comics that mention mathematics without inspiring conversation. I may need a better name for that branch of the series, though. But, nevertheless, here are comic strips from last week that don’t need much said about them.

Mell Lazarus’s Momma rerun for the 24th has Momma complain about Francis’s ability to do arithmetic. It originally ran the 23rd of February, 2014.

John Deering’s Strange Brew for the 24th features Pythagoras, here being asked about his angles. I’m not aware of anything actually called a Pythagorean Angle, but there’s enough geometric things with Pythagoras’s name attached for the joke to make sense.

Maria Scrivan’s Half Full for the 25th is a Venn Diagram joke for the week. It doesn’t quite make sense as a Venn Diagram, as it’s not clear to me that “invasive questions” is sensibly a part of “food”. But it’s a break from every comic strip doing a week full of jokes about turkeys preferring to not be killed.

Tony Carrillo’s F Minus for the 26th is set in mathematics class. And talks about how the process of teaching mathematics is “an important step on the road to hating math”, which is funny because it’s painfully true.

Jonathan Mahood’s Bleeker: The Rechargeable Dog for the 27th had Bleeker trying to help Skip with his mathematics homework. By the 28th Skip was not getting much done.

Bill Watterson’s Calvin and Hobbes rerun for the 30th wrapped up a storyline that saw Calvin being distracted away from his mathematics homework. The strip originally ran the 2nd of December, 1989.


And that’s that. Later this week I’ll publish something on the comic strips with substantial mathematics mention. And I do hope to have a couple thoughts on the recently-concluded Fall 2019 A-to-Z sequence. Plus, it’s the start of a new month, so that means I’ll be posting a map of the world. Maybe some other things too.

Exploiting my A-to-Z Archives: Z-transform


I have several times taught a class in a subject I did not already know well. This is always exciting, and is even sometimes fun. It depends on how well you cope with discovering all your notes for the coming week are gibberish to yourself and will need a complete rewriting. One of the courses I taught in those conditions was on digital signal processing. This was a delight, and I’m sorry to not have more excuses to write about it. In the Summer 2015 A-to-Z I wrote about the z-transform, something we get to know really well in signal processing. The z-transform is also related to the Fourier transform, which is related to Fourier series, which do a lot to turn differential equations into polynomials. (And I am surprised I don’t yet have an essay about the Fourier transform specifically. Maybe sometime later.) The z-transform is a good place to finish off the spotlights shone on these older A-to-Z essays.
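For reference, the z-transform turns a sequence x\left[n\right] into the function X\left(z\right) = \sum_{n = 0}^{\infty} x\left[n\right] z^{-n} . (That’s the one-sided version; the two-sided version sums over all integer n.) It’s the discrete-time counterpart to the transforms that turn differential equations into polynomials, which is why it keeps such good company.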

Exploiting my A-to-Z Archives: Yamada Polynomial


For a while there in grad school I thought I would do a thesis in knot theory. I didn’t, ultimately. I do better in problems that I can set a computer to, and then start thinking about once it has teased some interesting phenomenon out of simulations. But the affection, at least from me towards knot theory, remains. In the Fall 2018 A-to-Z sequence I got to share several subjects from this field. One of them is the Yamada Polynomial, a polynomial-like construct that lets us describe knots. I don’t know how anyone might not find that a fascinating prospect, even if they aren’t good at making the polynomials themselves.

My 2019 Mathematics A To Z: Zeno’s Paradoxes


Today’s A To Z term was nominated by Dina Yagodich, who runs a YouTube channel with a host of mathematics topics. Zeno’s Paradoxes exist in the intersection of mathematics and philosophy. Mathematics majors like to declare that they’re all easy. The Ancient Greeks didn’t understand infinite series or infinitesimals like we do. Now they’re no challenge at all. This reflects a belief that philosophers must be silly people who haven’t noticed that one can, say, exit a room.

This is your classic STEM-attitude of missing the point. We may suppose that Zeno of Elea occasionally exited rooms himself. That is a supposition, though. Zeno, like most philosophers who lived before Socrates, we know from other philosophers making fun of him a century after he died. Or at least trying to explain what they thought he was on about. Modern philosophers are expected to present others’ arguments as well and as strongly as possible. This even — especially — when describing an argument they want to say is the stupidest thing they ever heard. Or, to use the lingo, when they wish to refute it. Ancient philosophers had no such compulsion. They did not mind presenting someone else’s argument sketchily, if they supposed everyone already knew it. Or even badly, if they wanted to make the other philosopher sound ridiculous. Between that and the sparse nature of the record, we have to guess a bit about what Zeno precisely said and what he meant. This is all right. We have some idea of things that might reasonably have bothered Zeno.

And they have bothered philosophers for thousands of years. They are about change. The ones I mean to discuss here are particularly about motion. And there are things we do not understand about change. This essay will not answer what we don’t understand. But it will, I hope, show something about why that’s still an interesting thing to ponder.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Zeno’s Paradoxes.

When we capture a moment by photographing it we add lies to what we see. We impose a frame on its contents, discarding what is off-frame. We rip an instant out of its context. And that’s before considering how we stage photographs, making people smile and stop tilting their heads. We forgive many of these lies. The things excluded from or the moments around the one photographed might not alter what the photograph represents. Making everyone smile can convey the emotional average of the event in a way that no individual moment represents. Arranging people to stand in frame can convey the participation in the way a candid photograph would not.

But there remains the lie that a photograph is “a moment”. It is no such thing. We notice this when the photograph is blurred. It records all the light passing through the lens while the shutter is open. A photograph records an eighth of a second. A thirtieth of a second. A thousandth of a second. But still, some time. There is always the ghost of motion in a picture. If we do not see it, it is because our photograph’s resolution is too coarse. If we could photograph something with infinite fidelity we would see, even in still life, the wobbling of the molecules that make up a thing.

A photograph of a blurry roller coaster passing through a vertical loop.
One of the many loops of Vortex, a roller coaster at Kings Island amusement park from 1987 to 2019. Taken by me the last day of the ride’s operation; this was one of the roller coaster’s runs after 7 pm, the close of the park the last day of the season.

Which implies something fascinating to me. Think of a reel of film. Here I mean old-school pre-digital film, the thing that’s a great strip of pictures, a new one shown 24 times per second. Each frame of film is a photograph, recording some split-second of time. How much time is actually in a film, then? How long, cumulatively, was a camera shutter open during a two-hour film? I use pre-digital, strip-of-film movies for convenience. Digital films offer the same questions, but with different technical points. And I do not want the writing burden of describing both analog and digital film technologies. So I will stick to the long sequence of analog photographs model.

Let me imagine a movie. One of an ordinary everyday event; an actuality, to use the terminology of 1898. A person overtaking a walking tortoise. Look at the strip of film. There are many frames which show the person behind the tortoise. There are many frames showing the person ahead of the tortoise. When are the person and the tortoise at the same spot?

We have to put in some definitions. Fine; do that. Say we mean when the leading edge of the person’s nose overtakes the leading edge of the tortoise’s, as viewed from our camera. Or, since there must be blur, when the center of the blur of the person’s nose overtakes the center of the blur of the tortoise’s nose.

Do we have the frame when that moment happened? I’m sure we have frames from the moments before, and frames from the moments after. But the exact moment? Are you positive? If we zoomed in, would it actually show the person is a millimeter behind the tortoise? That the person is a hundredth of a millimeter ahead? A thousandth of a hair’s width behind? Suppose that our camera is very good. It can take frames representing as small a time as we need. Does it ever capture that precise moment? To the point that we know, no, it’s not the case that the tortoise is one-trillionth the width of a hydrogen atom ahead of the person?

If we can’t show the frame where this overtaking happened, then how do we know it happened? To put it in terms a STEM major will respect, how can we credit a thing we have not observed with happening? … Yes, we can suppose it happened if we suppose continuity in space and time. Then it follows from the intermediate value theorem. But then we are begging the question. We impose the assumption that there is a moment of overtaking. This does not prove that the moment exists.

Fine, then. What if time is not continuous? If there is a smallest moment of time? … If there is, then, we can imagine a frame of film that photographs only that one moment. So let’s look at its footage.

One thing stands out. There’s finally no blur in the picture. There can’t be; there’s no time during which to move. We might not catch the moment that the person overtakes the tortoise. It could “happen” in-between moments. But at least we have a moment to observe at leisure.

So … what is the difference between a picture of the person overtaking the tortoise, and a picture of the person and the tortoise standing still? A movie of the two walking should be different from a movie of the two pretending to be department store mannequins. What, in this frame, is the difference? If there is no observable difference, how does the universe tell whether, next instant, these two should have moved or not?

A mathematical physicist may toss in an answer. Our photograph is only of positions. We should also track momentum. Momentum carries within it the information of how position changes over time. We can’t photograph momentum, not without getting blurs. But analytically? If we interpret a photograph as “really” tracking the positions of a bunch of particles? To the mathematical physicist, momentum is as good a variable as position, and it’s as measurable. We can imagine a hyperspace photograph that gives us an image of positions and momentums. So, STEM types show up the philosophers finally, right?

Hold on. Let’s allow that somehow we get changes in position from the momentum of something. Hold off worrying about how momentum gets into position. Where does a change in momentum come from? In the mathematical physics problems we can do, the change in momentum has a value that depends on position. In the mathematical physics problems we have to deal with, the change in momentum has a value that depends on position and momentum. But that value? Put it in words. That value is the change in momentum. It has the same relationship to acceleration that momentum has to velocity. For want of a real term, I’ll call it acceleration. We need more variables. An even more hyperspatial film camera.

… And does acceleration change? Where does that change come from? That is going to demand another variable, the change-in-acceleration. (The “jerk”, according to people who want to tell you that “jerk” is a commonly used term for the change-in-acceleration, and no one else.) And the change-in-change-in-acceleration. Change-in-change-in-change-in-acceleration. We have to invoke an infinite regression of new variables. We got here because we wanted to suppose it wasn’t possible to divide a span of time infinitely many times. This seems like a lot to build into the universe to distinguish a person walking past a tortoise from a person standing near a tortoise. And then we still must admit not knowing how one variable propagates into another. That a person is wide is not usually enough explanation of how they are growing taller.

Numerical integration can model this kind of system with time divided into discrete chunks. It teaches us some ways that this can make logical sense. It also shows us that our projections will (generally) be wrong. At least unless we do things like have an infinite number of steps of time factor into each projection of the next timestep. Or use the forecast of future timesteps to correct the current one. Maybe use both. These are … not impossible. But being “ … not impossible” is not to say satisfying. (We allow numerical integration to be wrong by quantifying just how wrong it is. We call this an “error”, and have techniques that we can use to keep the error within some tolerated margin.)
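For a concrete taste of that give-and-take, here’s a minimal sketch of the simplest scheme, Euler’s method, marching a harmonic oscillator forward in discrete chunks of time. The oscillator and the step sizes are just my choice of illustration:

```python
import math

def euler_oscillator(dt, t_end=1.0):
    """March x' = v, v' = -x forward in steps of dt, starting from x = 1
    and v = 0.  The exact answer at time t is x(t) = cos(t)."""
    x, v = 1.0, 0.0
    for _ in range(round(t_end / dt)):
        x, v = x + dt * v, v - dt * x   # one explicit Euler step
    return x

for dt in (0.1, 0.01, 0.001):
    print(dt, abs(euler_oscillator(dt) - math.cos(1.0)))
# The error shrinks roughly in proportion to dt: cut the time step by ten,
# cut the mistake by about ten.  That's the tolerated margin in action.
```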

So where has the movement happened? The original scene had movement to it. The movie seems to represent that movement. But that movement doesn’t seem to be in any frame of the movie. Where did it come from?

We can have properties that appear in a mass which don’t appear in any component piece. No molecule of a substance has a color, but a big enough mass does. No atom of iron is ferromagnetic, but a chunk might be. No grain of sand is a heap, but enough of them are. The Ancient Greeks knew this; we call it the Sorites paradox, after Eubulides of Miletus. (“Sorites” means “heap”, as in heap of sand. But if you had to bluff through a conversation about ancient Greek philosophers you could probably get away with making up a quote you credit to Sorites.) Could movement be, in the term mathematical physicists use, an intensive property? But intensive properties are obvious to the outside observer of a thing. We are not outside observers to the universe. It’s not clear what it would mean for there to be an outside observer to the universe. Even if there were, what space and time are they observing in? And aren’t their space and their time and their observations vulnerable to the same questions? We’re in danger of insisting on an infinite regression of “universes” just so a person can walk past a tortoise in ours.

We can say where movement comes from when we watch a movie. It is a trick of perception. Our eyes take some time to understand a new image. Our brains insist on forming a continuous whole story even out of disjoint ideas. Our memory fools us into remembering a continuous line of action. That a movie moves is entirely an illusion.

You see the implication here. Surely Zeno was not trying to lead us to understand all motion, in the real world, as an illusion? … Zeno seems to have been trying to support the work of Parmenides of Elea. Parmenides is another pre-Socratic philosopher. So we have about four words that we’re fairly sure he authored, and we’re not positive what order to put them in. Parmenides was arguing about the nature of reality, and what it means for a thing to come into or pass out of existence. He seems to have been arguing something like that there was a true reality that’s necessary and timeless and changeless. And there’s an apparent reality, the thing our senses observe. And in our sensing, we add lies which make things like change seem to happen. (Do not use this to get through your PhD defense in philosophy. I’m not sure I’d use it to get through your Intro to Ancient Greek Philosophy quiz.) That what we perceive as movement is not what is “really” going on is, at least, imaginable. So it is worth asking questions about what we mean for something to move. What difference there is between our intuitive understanding of movement and what logic says should happen.

(I know someone wishes to throw down the word Quantum. Quantum mechanics is a powerful tool for describing how many things behave. It implies limits on what we can simultaneously know about the position and the time of a thing. But there is a difference between “what time is” and “what we can know about a thing’s coordinates in time”. Quantum mechanics speaks more to the latter. There are also people who would like to say Relativity. Relativity, special and general, implies we should look at space and time as a unified set. But this does not change our questions about continuity of time or space, or where to find movement in both.)

And this is why we are likely never to finish pondering Zeno’s Paradoxes. In this essay I’ve only discussed two of them: Achilles and the Tortoise, and The Arrow. There are two other particularly famous ones: the Dichotomy, and the Stadium. The Dichotomy is the one about how to get somewhere, you have to get halfway there. But to get halfway there, you have to get a quarter of the way there. And an eighth of the way there, and so on. The Stadium is the hardest of the four great paradoxes to explain. This is in part because the earliest writings we have about it don’t make clear what Zeno was trying to get at. I can think of something which seems consistent with what’s described, and contrary-to-intuition enough to be interesting. I’m satisfied to ponder that one. But other people may have different ideas of what the paradox should be.

There are a handful of other paradoxes which don’t get so much love, although one of them is another version of the Sorites Paradox. Some of them the Stanford Encyclopedia of Philosophy dubs “paradoxes of plurality”. These ask how many things there could be. It’s hard to judge just what he was getting at with this. We know that one argument had three parts, and only two of them survive. Trying to fill in that gap is a challenge. We want to fill in the argument we would make, projecting from our modern idea of this plurality. It’s not Zeno’s idea, though, and we can’t know how close our projection is.

I don’t have the space to make a thematically coherent essay describing these all, though. These paradoxes have demanded thought, even just to come up with a reason to think they don’t demand thought, for thousands of years. We will, perhaps, have to keep trying again to fully understand what it is we don’t understand.


And with that — I find it hard to believe — I am done with the alphabet! All of the Fall 2019 A-to-Z essays should appear at this link. Additionally, the A-to-Z sequences of this and past years should be at this link. Tomorrow and Saturday I hope to bring up some mentions of specific past A-to-Z essays. Next week I hope to share my typical thoughts about what this experience has taught me, and some other writing about this writing.

Thank you, all who’ve been reading, and who’ve offered topics, comments on the material, or questions about things I was hoping readers wouldn’t notice I was shorting. I’ll probably do this again next year, after I’ve had some chance to rest.

A Weird Kind Of Ruler


I ran across something neat. It’s something I’ve seen before, but the new element is that I have a name for it. This is the Golomb Ruler. It’s a ruler made with as few marks as possible. The marks are supposed to be arranged so that the greatest possible number of different distances can be made, by measuring between selected pairs of points.

So, like, in a regularly spaced ruler, you have a lot of ways to measure a distance of 1 unit of length. Only one fewer way to measure a distance of 2 units. One fewer still ways to measure a distance of 3 units and so on. Convenient but wasteful of marks. A Golomb ruler might, say, put marks only where the regularly spaced ruler has the units 0, 1, and 3. Then by choosing the correct pairs you can measure a distance of 1, 2, or 3 units, and no distance can be measured in more than one way.

There are applications of the Golomb ruler, stuff in information theory and sensor design. Also logistics. Never mind those. They present a neat little puzzle: can you find, for a given number of marks, the best possible arrangement of them into a ruler? That would be the arrangement that allows the greatest number of different lengths. Or perhaps the one that allows the longest string of whole-number differences. Your definition of best-possible determines what the answer is.

As a number theory problem it won’t surprise you to know there’s not a general answer. If I’m reading accurately, most of the known best arrangements — the ones that allow the greatest number of differences — were proven by testing out cases. The 24-mark arrangement needed a test of 555,529,785,505,835,800 different rulers. MathWorld’s page on this tells me that optimal mark placement isn’t known for 25 or more marks. It also says that the 25-mark ruler’s optimal arrangement was published in 2008. So it isn’t just Wikipedia where someone will write an article, and then someone else will throw a new heap of words onto it, and nobody will read to see if the whole thing still makes sense. Wikipedia meanwhile lists optimal configurations for up to 27 points, demonstrated by 2014.

And as this suggests, you aren’t going to discover an optimal arrangement for some number of marks yourself. Unless you should be the first person to figure out an algorithm to do it. It’s not even known how complex an algorithm has to be. It’s suspected that it has to be NP-hard, though. But, while you won’t discover anything new to mathematics in pondering this, you can still have the fun of working out arrangements yourself, at least for a handful of points. There are numbers of points with more than one optimal arrangement.
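Still, for a handful of marks a computer search goes quickly. Here’s a small brute-force sketch, under the simplifying definition that the best ruler is the shortest one on which every pair of marks measures a different distance:

```python
from itertools import combinations

def is_golomb(marks):
    """True when every pair of marks measures a different distance."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

def shortest_golomb(order, max_length=40):
    """Brute-force the shortest Golomb ruler with `order` marks, taking the
    first mark to sit at 0 and the last at the ruler's full length."""
    for length in range(order - 1, max_length + 1):
        for inner in combinations(range(1, length), order - 2):
            marks = (0,) + inner + (length,)
            if is_golomb(marks):
                return marks
    return None

print(shortest_golomb(4))   # (0, 1, 4, 6): the classic four-mark ruler
print(shortest_golomb(5))   # a five-mark ruler of length 11, the known optimum
```

Past seven or eight marks the search space explodes, which is the point of the paragraph above.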

(Golomb here is Solomon W Golomb, a mathematician and electrical engineer with a long history in information theory and also recreational mathematics problems. There are several parties who independently invented the problem. But Golomb actually did work with rulers, so at least they aren’t incorrectly named.)

My 2019 Mathematics A To Z: The Game of ‘Y’


Today’s A To Z term is … well, my second choice. Goldenoj suggested Yang-Mills and I was so interested. Yang-Mills describes a class of mathematical structures. They particularly offer insight into how to do quantum mechanics. Especially particle physics. It’s of great importance. But, on thinking out what I would have to explain I realized I couldn’t write a coherent essay about it. Getting to what the theory is made of would take explaining a bunch of complicated mathematical structures. If I’d scheduled the A-to-Z differently, setting up matters like Lie algebras, maybe I could do it, but this time around? No such help. And I don’t feel comfortable enough in my knowledge of Yang-Mills to describe it without describing its technical points.

That said, I hope that Jacob Siehler, who suggested the Game of ‘Y’, does not feel slighted. I hadn’t known anything of the game going into the essay-writing. When I started research I was delighted. I have yet to actually play a for-real game of this. But I like what I see, and what I think I can write about it.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Game of ‘Y’.

This is, as the name implies, a game. It has two players. They have the same objective: to create a ‘y’. Here, they do it by laying down tokens representing their side. They take turns, each laying down one token in a turn. They do this on a board with three edges. The ‘y’ is created when there’s a continuous path of their tokens that reaches all three edges. The corner spaces count as belonging to both edges they sit between, so yes, it counts to have just a single line running along one edge of the board: its ends are in corners that touch the other two edges. This makes a pretty sorry ‘y’ but it suggests your opponent isn’t trying.
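
Checking whether a player has made their ‘y’ is a tidy little graph problem: does any connected clump of their tokens touch all three edges? Here’s a minimal sketch of that check in Python. The board itself is abstracted away; you hand the function an adjacency map and three sets saying which cells lie on which edge, so nothing here depends on the cells being hexagons.

from collections import deque

def has_y(cells, neighbors, edges):
    """cells: the set of cells one player occupies.
    neighbors: dict mapping each cell to the cells adjacent to it.
    edges: three sets of cells, one per board edge (corner cells appear in two).
    Returns True if some connected group of the player's cells touches all three edges."""
    unvisited = set(cells)
    while unvisited:
        # Flood-fill one connected group of this player's cells.
        start = unvisited.pop()
        group, queue = {start}, deque([start])
        while queue:
            cell = queue.popleft()
            for other in neighbors[cell]:
                if other in unvisited:
                    unvisited.discard(other)
                    group.add(other)
                    queue.append(other)
        if all(group & edge for edge in edges):
            return True
    return False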

There are details of implementation. The board is a mesh of, mostly, hexagons. I take this to be for the same reason that so many conquest-type strategy games use hexagons. They tile space well, they give every space a good number of neighbors, and the distance from the center of a cell to the center of any of its neighbors is the same. In a square grid, the centers of diagonal neighbors are farther apart than the centers of left-right or up-down neighbors. Hexagons do well for this kind of game, where the goal is to fill space, or at least fill paths in space. There’s even a game named Hex, slightly older than Y, with similar rules. In that one the goal is to draw a continuous path connecting your two opposite sides of the grid. The grids of the commercial boards that I see are around nine hexagons on a side. This probably reflects a desire to have a big enough board that games go on a while, but not so big that they go on forever.

Mathematicians have things to say about this game. It fits nicely in game theory. It’s well-designed to show some things about game theory. It’s a game of perfect information, for example. Each player knows, at all times, the moves all the players have made. Just look at the board and see where they’ve placed their tokens. A player might have forgotten the order the tokens were placed in, but that’s the player’s problem, not the game’s. Anyway in Y, the order of token-placing doesn’t much matter.

It’s also a game of complete information. Every player knows, at every step, what the other player could do. And what objective they’re working towards. One party, thinking enough, could forecast the other’s entire game. This comes close to the joke about the prisoners telling each other jokes by shouting numbers out to one another.

It is also a game in which a draw is impossible. Play long enough and someone must win. This is true even if both parties are for some reason trying to lose. There are ingenious proofs of this, but we can get a feel for it by considering a really simple game. Imagine playing Y on a tiny board, one that’s just one hex on each side. That lone hex touches all three edges, so whoever puts the first token down wins. Definitely want to be the first player there.

So now imagine playing a slightly bigger board. Augment this one-by-one-by-one board by one row. That is, here, add two hexes along one of the sides of the original board. So there are two pieces here; one is the original territory, and one is this one-row augmented territory. Look first at the original territory. Suppose that one of the players has gotten a ‘Y’ for the original territory. Will that player win the full-size board? … Well, sure. The other player can put a token down on either hex in the augmented territory. But there are two hexes, either of which would extend the winning path to connect the three edges of the bigger board. The first player can put a token down on the other hex in the augmented territory, and that connects all three of the new sides again. First player wins.

All right, but how about a slightly bigger board? So take that two-by-two-by-two board and augment it, adding three hexes along one of the sides. Imagine a player’s won the original territory board. Do they have to win the full-size board? … Sure. The second player can put something in the augmented territory. But there are, again, two hexes either of which would complete a path connecting all three sides of the full board. The second player can take one of those hexes. But the first player can put a token on the other of them. First player wins again.

How about a slightly bigger board yet? … The same logic holds. Really the only reason that the first player doesn’t always win is that, at some point, the first player screws up. And this is an existence proof, showing that the first player can always win. It doesn’t give any guidance on how to play, though. If the first player plays perfectly, she’s guaranteed to win. This is something which happens in many two-player, symmetric games. A symmetric game is one where either player has the same set of available moves, and can make the same moves with the same results. This proof needs to be tightened up to really hold. But it should convince you, at least, that the first player has an advantage.
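
You can also check the no-draw claim directly, for a small board, by sheer enumeration: fill the board every possible way with two colors and confirm that one color or the other always has its ‘y’. Here’s a sketch for a board three cells on a side, with the cells laid out in rows of one, two, and three; it’s a sanity check for the tiny case, not a proof for bigger boards.

from itertools import product

# Cells of a side-3 Y board: (row, position), with rows of 1, 2, and 3 cells.
cells = [(r, i) for r in range(3) for i in range(r + 1)]

def adjacent(a, b):
    (r1, i1), (r2, i2) = a, b
    if r1 == r2:
        return abs(i1 - i2) == 1            # same row, side by side
    if abs(r1 - r2) == 1:
        upper, lower = (a, b) if r1 < r2 else (b, a)
        return lower[1] in (upper[1], upper[1] + 1)   # row below: same slot, or one right
    return False

edges = [
    {c for c in cells if c[1] == 0},        # left edge
    {c for c in cells if c[1] == c[0]},     # right edge
    {c for c in cells if c[0] == 2},        # bottom edge
]

def connects_all_edges(mine):
    unvisited = set(mine)
    while unvisited:
        stack = [unvisited.pop()]
        group = set(stack)
        while stack:
            c = stack.pop()
            for d in list(unvisited):
                if adjacent(c, d):
                    unvisited.discard(d)
                    group.add(d)
                    stack.append(d)
        if all(group & edge for edge in edges):
            return True
    return False

draws = sum(
    1 for colors in product((0, 1), repeat=len(cells))
    if not connects_all_edges({c for c, k in zip(cells, colors) if k == 0})
    and not connects_all_edges({c for c, k in zip(cells, colors) if k == 1})
)
print(draws)   # 0: every way of filling the board hands somebody a 'y'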

So given that, the question becomes: why play this game at all, once you’ve decided who’ll go first? The reason you might play anyway is, what, you have something better to do? And maybe you think you’ll make fewer mistakes than your opponent. One approach often used in symmetric games like this is the “pie rule”. The name comes from the story about how to slice a pie so you and your sibling don’t fight over the results. One cuts the pie, the other gets first pick of the slice, and then you fight anyway. In this game, though, one player makes a tentative first move. The other decides whether they will be Player One, with that first move made, or whether they’ll be Player Two, responding.

There are some neat quirks in the commercial Y games. One is that they don’t actually show hexes, and you don’t put tokens in the middle of hexes. Instead you put tokens on the spots that would be the centers of the hexes. On the board are lines pointing to the neighbors. This makes the board actually a mesh of triangles. This is the dual to the hex grid. It shows a set of vertices, and their connections, instead of hexes and their neighbors. Whether you think the hex grid or this dual makes it easier to tell when you’ve connected all three edges is a matter of taste. It does make the edges less jagged all around.

Another is that there will be three vertices that don’t connect to six others. They connect to five others, instead. Their spaces would be pentagons. As I understand the literature on this, this is a concession to game balance. It makes it easier for one side to fend off a path coming from the center.

It has geometric significance, though. A pure hexagonal grid is a structure that tiles the plane. A mostly hexagonal grid, with a few pentagons mixed in, though? That can tile the sphere. To cover the whole sphere with hexagons and pentagons you need exactly twelve pentagons, at least if three faces meet at every corner. But this? With the three pentagons? That gives you a space that’s topologically equivalent to a hemisphere, or at least a slice of the sphere. If we do imagine the board to be part of a covered sphere, then the effect of the handful of pentagon spaces is to make the “pole” closer to the equator.
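
That count of twelve falls out of Euler’s formula, if you suppose the sphere is covered by P pentagons and H hexagons with three faces meeting at every corner. Then the face, edge, and vertex counts are F = P + H, E = \frac{5P + 6H}{2}, and V = \frac{5P + 6H}{3}, and V - E + F = 2 becomes

\frac{5P + 6H}{3} - \frac{5P + 6H}{2} + (P + H) = \frac{P}{6} = 2,

so P = 12, no matter how many hexagons join them. The three pentagons of the Y board are a quarter of that allotment.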

So as I say the game seems fun enough to play. And it shows off some of the ways that game theorists classify games. And the questions they ask about games. Is the game always won by someone? Does one party have an advantage? Can one party always force a win? It also shows the kinds of approach game theorists can use to answer these questions. This before they consider whether they’d enjoy playing it.


I am excited to say that there’s just the one more time this year that I will realize: it’s Wednesday evening and I’m 1200 words short. Please stop in Thursday when I hope to have the letter Z represented. That, and all of this year’s A-to-Z essays, should appear at this link. And if that isn’t enough, I’ll feature some past essays on Friday and Saturday, and have most of my past A-to-Z essays at this link. Thank you.

Reading the Comics, November 21, 2019: Computational Science Edition


There were just a handful of comic strips that mentioned mathematical topics I found substantial. Of those that did, computational science came up a couple times. So that’s how we got to here.

Rick Detorie’s One Big Happy for the 17th has Joe writing an essay on the history of computing. It’s basically right, too, within the confines of space and understandable mistakes like replacing Pennsylvania with an easier-to-spell state. And within the confines of simplification for the sake of getting the idea across briefly. Most notable is Joe explaining ENIAC as “the first electronic digital computer”. Anyone calling anything “the first” of an invention is simplifying history, possibly to the point of misleading. But we must simplify any history to have it be understandable. ENIAC is among the first computers that anyone today would agree is of a kind with the laptop I use. And it’s certainly the one that, among its contemporaries, most captured the public imagination.

Kid's report on Computers, with illustrations: 'Before computers there were calculators, and the first calculator was an abacus. [Caveman counting ug, tug, trug, frug on one.] The first mechanical kind of calculator was built by a French kid named Blaise Pascal in 1644. [Kid saying, yo, Papa, look!] In 1886 an American named Herman Hollerith invented a punch card machine to be used in the 1890 census. [ Hollerith dragging a computer on a cart and saying, 'I'm coming to my census!' ] Then in 1946 some smart guys in Pennsa^H Penssy^H Ohio invented the first electronic digital computer called ENIAC, which was bigger than a houseboat, but couldn't float. [ computer sinking in water ] In the 1970s the microprocessor was invented, and computers got small enough to come into your house and be personal. [ computer waking someone from bed saying 'Good morning, Larry' ] Some personal computers are called laptops because if they were called lapbottoms you might sit on them. [ guy yiking after sitting on one ] Computers are now in a lot of very important things, like talking action figures, video games, and bionic superheroes. Computers help with just about everything, except writing this report, because my mom told me to do it the caveman way with paper and pencils and books.'
Rick Detorie’s One Big Happy for the 17th of November, 2019. This strip is a reprint of one from several years ago (all the ones on GoComics are reruns; the ones on Creators.com are new releases), but I don’t know when it originally appeared. This and other essays mentioning One Big Happy, current run or repeats, should be at this link.

Incidentally, Herman Hollerith was born on Leap Day, 1860; this coming year will in that sense see only his 39th birthday.

Ryan North’s Dinosaur Comics for the 18th is based on the question of whether P equals NP. This is, as T-Rex says, the greatest unsolved problem in computer science. These are what appear to be two different kinds of problems. Some of them we can solve in “polynomial time”, with the number of steps to find a solution growing as some polynomial function of the size of the problem. The others are the “NP” problems, which stands for “nondeterministic polynomial” rather than “non-polynomial”: problems where we can at least check a proposed solution in a polynomial number of steps, even though nobody knows how to find one that fast.

T-Rex: 'God, do you like poutine?' God: 'Man, does P equal NP?' T-Rex: 'Um. Maybe? It's kinda the greatest unsolved problem in computer science! If P=NP then a whole class of problems are easily solvable! But we've been trying to efficiently solve these for years. But if P doesn't equal NP, why haven't we been able to prove it? So are you saying 'probably I hate poutine, but it's really hard to prove'? Or are you saying, 'If I like poutine, then all public-key crypto is insecure?' Utahraptor: 'So who likes poutine?' T-Rex: 'God! Possible. And the problem is equivalent to the P=NP problem.' Utahraptor: 'So the Clay Mathematics Institute has a $1,000,000 prize for the first correct solution to the question 'Does God like poutine'?' T-Rex: 'Yes. This is the world we live in: 'does God like poutine' is the most important question in computer science. Dr Professor Stephen Cook first pondered whether God likes poutine in 1971; his seminal paper on the subject has made him one of computational complexity theory/God poutine ... actually, that's awesome. I'm glad we live in this wicked sweet world!'
Ryan North’s Dinosaur Comics for the 18th of November, 2019. I take many chances to write about this strip. Essays based on Dinosaur Comics should appear at this link.

You see one problem. Not knowing a way to solve a problem in polynomial time does not necessarily mean there isn’t one. It may mean we just haven’t thought of it. If every one of these problems has a polynomial-time method we just haven’t thought of, then we would say P equals NP. And many people assume that very exciting things would then follow. Part of this is because computational complexity researchers know that many NP problems — the NP-complete ones — are equivalent to one another. That is, we can describe any of these problems as a translation of another of these problems. This is the other part which makes this joke: the declaration that ‘whether God likes poutine’ is equivalent to the question ‘does P equal NP’.

We tend to assume, also, that if P does equal NP then NP problems, such as breaking public-key cryptography, are all suddenly easy. This isn’t necessarily guaranteed. When we describe something as polynomial or non-polynomial time we’re talking about the pattern by which the number of steps needed to find the solution grows. In that case, then, an algorithm that takes one million steps plus one billion times the size-of-the-problem to the one trillionth power is polynomial time. An algorithm that takes two raised to the size-of-the-problem divided by one quintillion (rounded up to the next whole number) is non-polynomial. But for most any problem you’d care to do, this non-polynomial algorithm will be done sooner. If it turns out P does equal NP, we still don’t necessarily know that NP problems are practical to solve.
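
To see how lopsided that comparison is, you can put both of those made-up step counts on a logarithmic scale, which spares you from ever building the absurdly large numbers themselves. A quick sketch in Python; the two algorithms are the inventions of the paragraph above, not anything real:

import math

def log10_polynomial_steps(n):
    # 1e6 + 1e9 * n**1e12 steps; the second term utterly dominates for n >= 2.
    return 9 + 1e12 * math.log10(n)

def log10_exponential_steps(n):
    # ceil(2**(n / 1e18)) steps, measured on the same log-10 scale.
    return (n / 1e18) * math.log10(2)

for n in (10, 10**6, 10**12):
    print(n, log10_polynomial_steps(n), log10_exponential_steps(n))
# The 'polynomial' method needs about 10**(10**12) steps even at n = 10;
# the 'exponential' one is still only a couple of steps when n reaches 10**18.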

Dolly, writing out letters on a paper, explaining to Jeffy: 'The alphabet ends at 'Z', but numbers just keep going.'
Bil Keane and Jeff Keane’s The Family Circus for the 20th of November, 2019. Essays with some discussion of The Family Circus appear at this link.

Bil Keane and Jeff Keane’s The Family Circus for the 20th has Dolly explaining to Jeffy about the finiteness of the alphabet and the infinity of numbers. I remember in my childhood coming to understand this and feeling something unjust in the difference between the kinds of symbols. That we can represent any of those whole numbers with just ten symbols (thirteen, if we include commas, decimals, and a multiplication symbol for the sake of using scientific notation) is an astounding feat of symbolic economy.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 21st builds on the statistics of genetics. In studying the correlations between one thing and another we look at something which varies, usually as the result of many factors, including some plain randomness. If there is a correlation between one variable and another we usually can describe how much of the variation in one quantity is accounted for by the other. This is what the scientist means in saying the presence of this one gene accounts for 0.1% of the variance in eeeeevil. The way this is presented, the activity of one gene is responsible for about one-thousandth of the person-to-person variation in eeeeevil.
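
To put a number to that: under the simplest linear-correlation reading — my assumption, not necessarily how a geneticist would frame it — the fraction of variance explained is the square of the correlation coefficient. So explaining 0.1% of the variance corresponds to a correlation of about \sqrt{0.001} \approx 0.032, which is part of why the father sounds unimpressed.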

Scientist: 'I'm afraid your baby has ... THE SATAN GENE!' Father: 'My baby!' Scientist: 'Yes! The Satan Gene is responsible for 0.1% of the variance in EEEEEEVIL!' Father: 'Did you say 0.1%?' Scientist: 'It's ONE GENE, dude! That's a really high correlation!'
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 21st of November, 2019. Some of the many appearances by Saturday Morning Breakfast Cereal in these essays are gathered at this link. I’m probably missing several.

As the father observes, this doesn’t seem like much. This is because there are a lot of genes contributing to most traits. And that’s before we consider epigenetics, the factors besides what is in DNA that affect how an organism develops. I am, unfortunately, too ignorant of the language of genetics to be able to say what a typical variation for a single gene would be, and thus to check whether Weinersmith has the scale of numbers right.


This finishes the mathematically-themed comic strips from this past week. If all goes to my plan, Tuesday and Thursday will see the last of this year’s A-to-Z postings. And Wednesday? I’ll try to think of something for Wednesday. It’d be a shame to just leave it hanging loose.

Reading the Comics, November 22, 2019: The Minor Comics of the Week Edition


I’m finding it surprisingly good for my workflow to use Sundays for the comic strips which mention mathematics only casually. Tomorrow or so I’ll get to the ones with substantial material, in an essay available at this link.

Scott Hilburn’s The Argyle Sweater for the 18th is a wordplay joke, based on a word containing syllables which roughly sound like “algebra”.

Jim Meddick’s Monty for the 19th is a sudoku joke, with Monty filling in things that aren’t numerals. Many of them are commonly used mathematical symbols. The ones that I don’t recognize I suspect come from physics applications, especially particle physics. These rely heavily on differential equations and group theory and are likely where Meddick got things like the \Omega_b and the \nu^{\pm} from.

Samson’s Dark Side of the Horse for the 22nd is the Roman Numerals joke for the week.

Thank you. And please stop in Tuesday when I hope to reach the next-to-final of my A-to-Z essays for the year.

Exploiting my A-to-Z Archives: X


And for today’s revival I offer something from the 2017 A-to-Z. This is about maybe the most mathematical of possible subjects: x. This was a fun one as I got to get into the cultural import of a mathematical thing, which is right up my alley. This and other of the 2017 A-to-Z essays are at this link.

Exploiting my A-to-Z Archives: Wlog


I’d like today to bring up something from the Leap Day 2016 A-to-Z. It’s a term which may seem unexciting, but it turns up all over the place. Wlog, short for without-loss-of-generality, is one of those phrases that appear in mathematical proofs everywhere. It’s usually difficult solving abstract problems. It’s usually less hard solving specific ones. Sometimes, we can find a specific problem whose solution settles the whole abstract problem. Isn’t it wonderful when that happens? That and the other Leap Day 2016 A-to-Z essays are at this link.

My 2019 Mathematics A To Z: Chi-squared test


Today’s A To Z term is another from Mr Wu, of mathtuition88.com. The term does not, technically, start with X. But the Greek letter χ certainly looks like an X. And the modern English letter X traces back to that χ. So that’s near enough for my needs.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

χ² Test.

The χ² test is a creature of statistics. Creatures, really. But if one just says “the χ² test” without qualification they mean Pearson’s χ² test. Pearson here is a familiar name to anyone reading the biographical sidebar in their statistics book. He was Karl Pearson, who in the late 19th and early 20th century developed pretty much every tool of inferential statistics.

Pearson was, besides a ferocious mathematical talent, a white supremacist and eugenicist. This is something one has to say about many pioneers of statistics. Many of the important basics of statistics were created to prove that some groups of humans were inferior to the kinds of people who get offered an OBE. They were created at a time that white society was very afraid that it might be out-bred by Italians or something even worse. This is not to say the tools of statistics are wrong, or bad. It is to say that anyone telling you mathematics is a socially independent, politically neutral thing is a fool or a liar.

Inferential statistics is the branch of statistics used to test hypotheses. The hypothesis, generally, is about whether one sample of things is really distinguishable from a population of things. It is different from descriptive statistics, which is that thing I do each month when I say how many pages got a view and from how many countries. Descriptive statistics give us a handful of numbers with which to approximate a complicated thing. Both do valuable work, although I agree it seems like descriptive statistics are the boring part. Without them, though, inferential statistics has nothing to study.

The χ² test works like many hypothesis-testing tools do. It takes two parts. One of these is observations. We start with something that comes in two or more categories. Categories come in many kinds: the postal code where a person comes from. The color of a car. The number of years of schooling someone has had. The species of flower. What is important is that the categories be mutually exclusive. One has either been a smoker for more than one year or else one has not.

Count the number of observations of … whatever is interesting … for each category. There is some fraction of observations that belong to the first category, some fraction that belong to the second, some to the third, and so on. Find those fractions. This is all easy enough stuff, really. Counting and dividing by the total number of observations. Which is a hallmark of many inferential statistics tools. They are often tedious, involving a lot of calculation. But they rarely involve difficult calculations. Square roots are often where they top out.

That covers observations. What we also need are expectations. This is our hypothesis for what fraction “ought” to be in each category. How do you know what there “ought” to be? … This is the hard part of inferential statistics. Often we are interested in showing that some class is more likely than another to have whatever we’ve observed happen. So we can use as a hypothesis that the thing is observed just as much in one case as another. If we want to test whether one sample is indistinguishable from another, we use the proportions from the other sample. If we want to test whether one sample matches a theoretical ideal, we use that theoretical ideal. People writing probability and statistics problems love throwing dice. Let me make that my example. We hypothesize that on throwing a six-sided die a thousand times, each number comes up exactly one-sixth of the time.

It’s impossible that each number will come up exactly one-sixth of the time, in a thousand throws. We could only hope to achieve this if we tossed some ridiculous number like a thousand and two times. But even if we went to that much extra work, it’s impossible that each number would come up exactly 167 times. Here I mean it’s “impossible” in the same way it’s impossible I could drop eight coins from my pocket and have them all come up tails. Undoubtedly, some number will be unlucky and just not turn up the full 167 times. Some other number will come up a bit too much. Nothing requires that; it’s just how these things go. Some coin lands heads.

This doesn’t necessarily mean the die is biased. The question is whether the observations are too far off from the prediction. How far is that? For each category, take the difference between the observed frequency and the expected frequency. Square that. Divide it by the expected frequency. Once you’ve done that for every category, add up all these numbers. This is χ². Do all this and you’ll get some nice nonnegative number like, oh, 5.094 or 11.216 or, heck, 20.482.
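
As a formula, that’s \chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}, with O_i the observed count in category i and E_i the expected count. Here’s a minimal sketch of the computation in Python, using made-up die-throw counts rather than anything observed:

observed = [180, 170, 162, 158, 165, 165]   # hypothetical counts from 1,000 throws
expected = [1000 / 6] * 6                   # the fair-die hypothesis

chi_squared = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_squared, 3))                # about 1.748 for these made-up numbers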

The χ² test works like many inferential-statistics tests do. It tells us how likely it is that, if the hypothesized expected values were right, random chance would give us the observed data. The farther χ² is from zero, the less likely it is that this was pure chance. Which, all right. But how big does it have to be?

It depends on two important things. First is the number of categories that you have. Or, to use the lingo, the degrees of freedom in your problem. This is the total number of categories minus one. The greater the number of degrees of freedom, the bigger χ² can be before we conclude the difference can’t just be chance.

The second important thing is called the alpha level. This is a judgement call. This is how unlikely you want a result to be before you’ll declare that it couldn’t be chance. We have an instinctive idea of this. If you toss a coin twenty times and it comes up tails every time, you’ll declare that was impossible and the coin must be rigged. But it isn’t impossible. Start a run of twenty coin flips right now. You have a 0.000 095 37% chance of it being all tails. But I would be comfortable, on the 20th tail, to say something is up. I accept that I am ascribing to malice what is in fact just one of those things.

So the choice of alpha level is a measure of how willing we are to make a mistake in our conclusions. In a simple science like particle physics we can set very stringent standards. There are many particles around and we can smash them as long as the budget holds out. In more difficult sciences, such as epidemiology, we must let alpha be larger. We often accept an alpha of five-percent or one-percent.

What we must do, then, is find, for an alpha level and a number of degrees of freedom, what the threshold χ² is. If the sample’s χ² is below that threshold, fine: the observations are consistent with the hypothesis. If the sample’s χ² is larger than that threshold, then, supposing the hypothesis were true, there would be less than an alpha-level chance of seeing observations this far off, and we reject the hypothesis. This is what most statistical inference tests are like. You calculate a number and check whether it is above or below a threshold. Below the threshold, and the observation is consistent with the hypothesis. Above the threshold, and the observation is so unlikely under the hypothesis that we reject the hypothesis.

How do we find these threshold values? … Well, under no circumstances do we try to calculate those ourselves. They come from things called the χ² distributions, which is the name you’d expect. They’re hard to calculate. There is no earthly reason for you to calculate them. You can find them in the back of your statistics textbook. Or do a web search for χ² test tables. I’m sure Matlab has a function to give you this. If it doesn’t, there’s a function you can download from somebody to work it out. There’s no need to calculate that yourself. Which is again common to inferential statistics tests. You find the thresholds by just looking them up.
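
If you have Python around rather than Matlab, SciPy will do the lookup; this sketch assumes SciPy is installed, and checks the made-up die-throw figure from above against the five-percent threshold:

from scipy.stats import chi2

alpha = 0.05
degrees_of_freedom = 5                    # six die faces, minus one
threshold = chi2.ppf(1 - alpha, degrees_of_freedom)
print(round(threshold, 3))                # about 11.07
print(1.748 < threshold)                  # True: consistent with the fair-die hypothesis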

χ² tests are just one of the hypothesis-testing tools of inferential statistics. They are a good example of them. They’re designed for observations that fall into several categories, compared against an expected forecast. But the calculations one does, and the way one interprets them, are typical of these tests. Even the way they are more tedious than hard is typical. It’s a good example of the family of tools.


I have two letters, and one more week, to go in this series. I hope to have the letter Y published on Tuesday. All the other A-to-Z essays for this year are also at that link. Past A-to-Z essays are at this link, and for the end of this week I’ll feature two past essays at this link. Thank you for reading all this.