Comic Strip Master Command decided to respect my need for a writing break. At least a break around here. So here’s the first half of last week’s comic strips that mention mathematics. None of them get into material substantial enough that I feel justified including pictures. Some of them are even repeats, at least to my Reading the Comics essays.

Ed Allison’s Unstrange Phenomena for the 8th riffs on mathematics-formula rules of thumb. Here, by presenting a complicated expression for the Woolly Bear Caterpillar’s judgement.

John Kovaleski’s Bo Nanas rerun for the 9th shows someone proclaiming himself the superhero “Perpendicular Man”. Then, “Parallel Man”. It’s basically wordplay with implied slapstick.

You know, I had picked these comic strips out as the ones that, last week, had the most substantial mathematics content. And on preparing this essay I realize there’s still not much. Maybe I could have skipped out on the whole week instead.

Bill Amend’s FoxTrot for the 1st is mostly some wordplay. Jason’s finding ways to represent the counting numbers with square roots. The joke plays more tightly than one might expect. Root beer was, traditionally, made with sassafras root, hence the name. (Most commercial root beers don’t use actual sassafras anymore as the safrole in it is carcinogenic.) The mathematical term root, meanwhile, derives from the idea that the root of a number is the thing which generates it. That 2 is the fourth root of 16, because four 2’s multiplied together is 16. That idea. This draws on the metaphor of the roots of a plant being the thing which lets the plant grow. This isn’t one of those cases where two words have fused together into one set of letters.

Jef Mallett’s Frazz for the 1st is set up with an exponential growth premise. The kid — I can’t figure out his name — promises to increase the number of push-ups he does each day by ten percent, with exciting forecasts for how many that will be before long. As Frazz observes, it’s not especially realistic. It’s hard to figure someone working themselves up from nothing to 300 push-ups a day in only two months.

Also much else of the kid’s plan doesn’t make sense. On the second day he plans to do 1.1 push-ups? On the third 1.21 push-ups? I suppose we can rationalize that, anyway, by talking about getting a fraction of the way through a push-up. But if we do that, then, I make out by the end of the month that he’d be doing about 15.863 push-ups a day. At the end of two months, at this rate, he’d be at 276.8 push-ups a day. That’s close enough to three hundred that I’d let him round it off. But nobody could be generous enough to round 15.8 up to 90.

An alternate interpretation of his plans would be to say that each day he’s doing ten percent more, and round that up. So that, like, on the second day he’d do 1.1 rounded up to 2 push-ups, and on the third day 2.2 rounded up to 3 push-ups, and so on. Then day thirty looks good: he’d be doing 94. But the end of two months is a mess as by then he’d be doing 1,714 push-ups a day. I don’t see a way to fit all these pieces together. I’m curious what the kid thought his calculation was. Or, possibly, what Jef Mallett thought the calculation was.
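Both readings are easy to check by machine. A sketch in Python; the round-up rule in the second reading is my guess at the kid’s arithmetic, not anything the strip states:

```python
from fractions import Fraction
from math import ceil

# Reading 1: strict ten-percent compounding from one push-up on day one.
compound = [1.1 ** (day - 1) for day in range(1, 61)]
print(round(compound[29], 3))   # day 30: about 15.863 push-ups
print(round(compound[59], 1))   # day 60: about 276.8 push-ups

# Reading 2: ten percent more each day, rounded up to whole push-ups.
# Exact fractions avoid float noise like ceil(11.000000000000002) = 12.
p = Fraction(1)
rounded = []
for day in range(60):
    rounded.append(p)
    p = ceil(p * Fraction(11, 10))
print(rounded[29])              # day 30: 94 push-ups
print(rounded[59])              # day 60: on the order of 1,700 a day
```

The second reading reproduces the 94 for day thirty; the day-sixty figure depends a little on exactly when and how you round, which may be why my total doesn’t land on precisely the same number.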

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 2nd has a kid rejecting accounting in favor of his art. But, wanting to do that art with optimum efficiency … ends up doing accounting. It’s a common story. A common question after working out that someone can do a thing is how to do it best. Best has many measures, yes. But the logic behind how to find it stays the same. Here I admit my favorite kinds of games tend to have screen after screen of numbers, with the goal being to make some number as great as possible, all things considered. If they ever made Multiple Entry Accounting Simulator none of you would ever hear from me again.

Which may be some time! Between Reading the Comics, A to Z, recap posts, and the occasional bit of filler I’ve just finished slightly over a hundred days in a row posting something. That is, however, at its end. I don’t figure to post anything tomorrow. I may not have anything before Sunday’s Reading the Comics post, at this link. I’ll be letting my typing fingers sleep in instead. Thanks for reading.

I don’t know. I say this for anyone this has unintentionally clickbaited, or who’s looking at a search engine’s preview of the page.

I come to this question from a friend, though, and it’s got me wondering. I don’t have a good answer, either. But I’m putting the question out there in case someone reading this, sometime, does know. Even if it’s in the remote future, it’d be nice to know.

And before getting to the question I should admit that “why” questions are, to some extent, a mug’s game. Especially in mathematics. I can ask why the sum of two consecutive triangular numbers is a square number. But the answer is … well, that’s what we chose to mean by ‘triangular number’, ‘square number’, ‘sum’, and ‘consecutive’. We can show why the arithmetic of the combination makes sense. But that doesn’t seem to answer “why” the way, like, why Neil Armstrong was the first person to walk on the moon. It’s more a “why” like, “why are there Seven Sisters [in the Pleiades]?”

But looking for “why” can, at least, give us hints to why a surprising result is reasonable. Draw dots representing a square number, slice it along the space right below a diagonal. You see dots representing two successive triangular numbers. That’s the sort of question I’m asking here.
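That picture argument is quick to spot-check by machine, too. In Python, with the n-th triangular number T(n) = n(n + 1)/2:

```python
def triangular(n):
    # n-th triangular number: 1 + 2 + ... + n
    return n * (n + 1) // 2

# Slicing a square of n*n dots just below a diagonal leaves the dots of
# two successive triangular numbers, T(n - 1) and T(n):
for n in range(1, 50):
    assert triangular(n - 1) + triangular(n) == n * n

print(triangular(3), triangular(4), triangular(3) + triangular(4))  # 6 10 16
```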

From here, we get to some technical stuff and I apologize to readers who don’t know or care much about this kind of mathematics. It’s about the wave-mechanics formulation of quantum mechanics. In this, everything that’s observable about a system is contained within a function named Ψ. You find Ψ by solving a differential equation. The differential equation represents a physical problem. Like, a particle experiencing some force that depends on position. This is written as a potential energy, because that’s easier to work with. But it’s the kind of problem that gets done.

Grant that you’ve solved for Ψ, since that’s hard and I don’t want to deal with it. You still don’t know, like, where the particle is. You never know that, in quantum mechanics. What you do know is its distribution: where the particle is more likely to be, where it’s less likely to be. You get from Ψ to this distribution for, like, position by applying an operator to Ψ. An operator is a function with a domain and a range that are spaces. Almost always these are spaces of functions.

Each thing that you can possibly observe, in a quantum-mechanics context, matches an operator. For example, there’s the x-coordinate operator, which tells you where along the x-axis your particle’s likely to be found. This operator is, conveniently, just x. So evaluate xΨ and that’s your x-coordinate distribution. (This is assuming that we know Ψ in Cartesian coordinates, ones with an x-axis. Please let me do that.) This looks just like multiplying your old function by x, which is nice and easy.

Or you might want to know momentum. The momentum in the x-direction has an operator, p̂ₓ, which equals −iℏ ∂/∂x. The ∂/∂x is a partial derivative. The ℏ is Planck’s constant, a number which in normal systems of measurement is amazingly tiny. And you know how i² = −1. That − symbol is just the minus or the subtraction symbol. So to find the momentum distribution, evaluate −iℏ ∂Ψ/∂x. This means taking a derivative of the Ψ you already had. And multiplying it by some numbers.
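One way to see why that operator deserves the name: applied to a plane wave Ψ(x) = e^(ikx), the operator −iℏ ∂/∂x hands back ℏk times Ψ, and ℏk is the momentum a wave of wavenumber k carries. A numerical spot-check in Python, with ℏ set to 1 for convenience and the derivative taken by finite differences:

```python
import cmath

hbar = 1.0   # natural units; the SI value is about 1.05e-34 joule-seconds
k = 2.5      # wavenumber of a sample plane wave
h = 1e-6     # step for the finite-difference derivative

def psi(x):
    # plane wave e^{ikx}, a momentum eigenstate
    return cmath.exp(1j * k * x)

def momentum_op(f, x):
    # -i hbar d/dx, with the derivative done by central differences
    return -1j * hbar * (f(x + h) - f(x - h)) / (2 * h)

ratio = momentum_op(psi, 0.3) / psi(0.3)
print(ratio.real)  # close to hbar * k = 2.5
```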

But. Why is there a −iℏ in the momentum operator rather than the position operator? Why isn’t one −iℏx and the other ∂/∂x? From a mathematical physics perspective, position and momentum are equally good variables. We tend to think of position as fundamental, but that’s surely a result of our happening to be very good at seeing where things are. If we were primarily good at spotting the momentum of things around us, we’d surely see that as the more important variable. When we get into Hamiltonian mechanics we start treating position and momentum as equally fundamental. Even the notation emphasizes how equal they are in importance, and treatment. We stop using ‘x’ or ‘r’ as the variable representing position. We use ‘q’ instead, a mirror to the ‘p’ that’s the standard for momentum. (‘p’ we’ve always used for momentum because … … … uhm. I guess ‘m’ was already committed, for ‘mass’. What I have seen is that it was taken as the first letter in ‘impetus’ with no other work to do. I don’t know that this is true. I’m passing on what I was told explains what looks like an arbitrary choice.)

So I’m supposing that this reflects how we normally set up Ψ as a function of position. That this is maybe why the position operator is so simple and bare. And then why the momentum operator has a minus, an imaginary number, and this partial derivative stuff. That if we started out with the wave function as a function of momentum, the momentum operator would be just the momentum variable. The position operator might be some mess with −iℏ and derivatives or worse.
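For whatever it’s worth, the supposition matches the standard textbook statement of the two pictures; none of this is my own result. Writing Ψ̃ for the wave function expressed as a function of momentum:

```latex
\begin{aligned}
\text{position picture:}\quad & \hat{x}\,\Psi(x) = x\,\Psi(x),
  & \hat{p}_x\,\Psi(x) &= -i\hbar\,\frac{\partial \Psi}{\partial x} \\
\text{momentum picture:}\quad & \hat{p}_x\,\tilde{\Psi}(p) = p\,\tilde{\Psi}(p),
  & \hat{x}\,\tilde{\Psi}(p) &= +i\hbar\,\frac{\partial \tilde{\Psi}}{\partial p}
\end{aligned}
```

The Fourier transform carries one picture to the other, and the sign flip between the two derivative terms is what keeps the commutator [x̂, p̂ₓ] = iℏ the same in both pictures.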

I don’t have a clear guess why one and not the other operator gets full possession of the −iℏ, though. I suppose that has to reflect convenience. If position and momentum are dual quantities then I’d expect we could put a mere constant like −iℏ wherever we want. But this is, mostly, me writing out notes and scattered thoughts. I could be trying to explain something that might be as explainable as why the four interior angles of a rectangle are all right angles.

So I would appreciate someone pointing out the obvious reason these operators look like that. I may grumble privately at not having seen the obvious myself. But I’d like to know it anyway.

Although I’m out of the A to Z sequence, I like the habit of posting just the comic strips that name-drop mathematics for the Sunday post. It frees up so much of my Saturday, at the cost of committing my Sunday. So here’s last week’s casual mentions of some mathematics topic.

Bill Holbrook’s On The Fastrack for the 5th has the CEO of Fastrack, Inc, disappointed in what analytics can do. Analytics, here, is the search for statistical correlations, traits that are easy to spot and that indicate greater risks or opportunities. The desire to find these is great and natural. Real data is, though, tantalizingly not quite good enough to answer most interesting questions.

Tauhid Bondia’s Crabgrass for the 6th uses a background panel of calculus work as part of illustrating deep thinking about something, in this case, how to fairly divide chocolate. One of calculus’s traditional strengths is calculating the volumes of interesting figures.

Joe Martin’s Mr Boffo for the 6th is a cute joke on one of the uses of numbers, that of being a convenient and inexhaustible index. The strip ran on Friday and I don’t know how to link to the archives in a stable way. This is why I’ve put the comic up here.

And that’s enough comics for just now. Later this week I’ll get to the comics that inspire me to write more.

While I’m not necessarily going to continue highlighting old A to Z essays every Friday and Saturday, it is a fact I’ve now got six pages listing all the topics for the six A to Z’s that I have completed. So let me share them here. This may be convenient for you, the reader, to see what kinds of things I’ve written up. It’s certainly convenient for me, since someday I’ll want all this stuff organized. The past A to Z sequences have been:

Summer 2015. Featuring ansatz, into, and well-posed problem.

Leap Day 2016. With continued fractions, polynomials, quaternions, and transcendental numbers.

End 2016. Featuring the Fredholm alternative, general covariance, normal numbers, and the Monster Group.

Summer 2017. Starring Benford’s Law, topology, and x.

Fall 2018. Featuring Jokes, the Infinite Monkey Theorem, the Sorites Paradox, and the Pigeonhole Principle.

Fall 2019. With Buffon’s Needle, Versine, the Julia Set, and Fourier Series.

It’s near the end of the (US) college fall semester. So it’s a good time to point out again that it is possible to work out exactly what you need on the final exam to get whatever grade you want in the course. What it’s not possible to do is study just exactly enough to get that grade, mind you. I suppose it can give you some idea where a good study session can most make a difference, but really, what you need is to study routinely and to get enough sleep.
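The arithmetic is the easy part. If the final exam counts for a fraction w of the course grade, you carry an average of C into it, and you want a course grade of G, then G = (1 − w)·C + w·F, solved for the final-exam score F. A sketch in Python; the numbers in the example are invented:

```python
def needed_on_final(current_avg, final_weight, target):
    """Score needed on the final for the course grade to hit `target`.

    current_avg and target are on the same 0-100 scale; final_weight is
    the fraction of the course grade the final exam carries.
    """
    return (target - (1 - final_weight) * current_avg) / final_weight

# Carrying an 82 into a final worth 30% of the grade, hoping for a 90:
print(needed_on_final(82, 0.30, 90))
```

An answer over 100, as in this example, is the formula’s way of saying no amount of last-minute studying reaches the target.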

And, as long as I have this open, let me share an episode of The Vic and SadeCast, about the renowned and strange 15-minute old-time-radio serial comedy Vic and Sade. Most episodes of the serial were two or three people talking past one another. The show may not be to your tastes, but if it is, it’s very much to your taste. This episode of the podcast features an October 1941 show aptly titled It’s Algebra, Uncle Fletcher.

The most important thing I learned this time around was that I should have started a week or two earlier. Not that this should have been a Summer A to Z. It would be true for any season. It’s more that I started soliciting subjects for the first letters of the alphabet about two weeks ahead of publication. I didn’t miss a deadline this time around, and I didn’t hit the dread of starting an essay the day of publication. But the great thing about an A to Z sequence like this is being able to write well ahead of publication and I never got near that.

The Reading the Comics posts are already, necessarily, done close to publication. The only way to alter that is to make the Reading the Comics posts go even more than a week past the comics’ publication. Or lean on syndicated cartoonists to send me heads-ups. Anyway, if neither Reading the Comics nor A to Zs can give me breathing room, then what’s going wrong? So probably having topics picked as much as a month ahead of publication is the way I should go.

Picking topics is always the hardest part of writing things here. The A to Z gimmick makes it easy to get topics, though. The premise is both open and structured. I’m not sure I’d have as fruitful a response if I tossed out “explainer Fridays” or something and hoped people had ideas. A structured choice tends to be easier to make.

The biggest structural experiment this time around is that I put in two “recap” posts each week. These were little one- and two-paragraph things pointing to past A to Z essays. I’ve occasionally reblogged a piece, or done a post that points to old posts. Never systematically, though. Two recap posts a week seemed to work well enough. Some old stuff got new readers and nobody seemed upset. I even got those, at least, done comfortably ahead of deadline. When I finished a Thursday post I could feel like I was luxuriating in a long weekend, until I remembered the comics needed to be read.

Also, this now completes the sixth of my A to Z sequences. I’ve got enough that if I really wanted, I could drop to one new post a week, and do nothing but recaps the rest of the time. It would give me six months posting something every day. I have got nearly nine years’ worth of material here. Much of it is Reading the Comics posts, which date instantly. But the rest of the stuff in principle hasn’t aged, except in how my prose style has changed.

Another thing learned, and a bit of a surprise, was that I found a lot of fundamentals this time around. Things like “differential equations” or “Fourier series” or “Taylor series”. These are things that any mathematics major would know. These are even things that anyone a bit curious about mathematics might know. There is a place for very specific, technical terms. But some big-picture essays turn out to be comfortable too.

One of the things I wanted to write about and couldn’t was the Yang-Mills Equation. It would have taken too many words for me to write. If I’d used earlier essays as lemmas, to set up parts of this, I might have made it. In past A to Z sequences some essays built on one another. But by the time I was considering Y, the prerequisite letters had already been filled. This is an argument for soliciting for the whole alphabet from the start, rather than breaking it up into several requests for topics. But even then I’d have had to be planning Y, at a time when I know I’d be trying to think about D’s and E’s. I’m not sure that’s plausible. It does imply, as I started out thinking, that I need to work farther ahead of deadline anyway.

A couple months back I switched to looking at comparing monthly readership figures to a twelve-month running average. Running averages offer some advantages in looking for any signal. They make statistics less sensitive to fluke events. The cost, of course, is that they take longer to recognize trends starting. But in October I had a singular freak event, with the A to Z essay on linear programming getting linked to from some forum vastly larger than mine. So that got an extra 4,900 page views in one day, and an extra six hundred or so the next, and so on. Can’t expect that to be regular, though.
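A twelve-month running average is nothing fancier than the mean of the trailing twelve monthly figures, recomputed each month. A sketch in Python, with invented numbers rather than my actual statistics:

```python
def running_average(monthly, window=12):
    # trailing mean over up to `window` entries, one value per month
    out = []
    for i in range(len(monthly)):
        recent = monthly[max(0, i - window + 1):i + 1]
        out.append(sum(recent) / len(recent))
    return out

views = [1800, 2100, 1900, 2000, 7200, 2333]  # note the one-month spike
print(running_average(views)[-1])
```

One freak month lifts the average, a little, for a full year afterward; that is the smoothing trade-off mentioned above.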

There were a “mere” 2,333 page views around here in November. That’s small only compared to October’s spike. It’s a little down from September, but still, it’s above the twelve-month running average of 1,996.9 views in a month. Those views came from 1,568 unique visitors, which compares nicely to the running average of 1,330.3 visitors per month.

There were 95 likes given to things around here in November, which is also above the running average of 68.8 likes in a month. And 23 comments, once again above the running average of 17.5 comments. So, posting stuff every single day works; who would have guessed, apart from everyone who knows anything about attracting audiences?

Well, more about posting to a predictable schedule, and stuff that people are interested in. But “just post a lot” can work too.

Or can it? November saw 77.8 views per posting, which is close to what September offered. But both are below the twelve-month running average of 114.0 views per posting. There were 52.3 visitors per posting, down from the average of 75.4 visitors per post. It’s back to around September’s 46.2 visitors per post though. There were 3.2 likes per post, down from the running average of 4.4. And there were 0.8 comments per posting, below the average of 1.1. It all implies there’s a best rate for these things. Or that filling out Fridays and Saturdays with mentions of older posts is not all that engaging.

Counting my home page there were 300 pages that got any views at all in November. There’d been 311 in October and 296 in September. 160 of them got more than one view, a bit under the 187 of October and 172 of September. 42 posts got at least ten views, down from October’s 52 but comparable to September’s 37. The most popular pieces, meanwhile, were:

Nice to see trapezoids back again. Also I’m happy that the versine’s been liked. I’m coming to enjoy this obscure trig function, although not so much as to use it for anything I care about.

94 countries or country-like entities sent me any page views in November. That’s down from October’s 116, though above September’s 69. 24 of these were single-reader countries, the same count as in October and above September’s 19. Here’s the roster of reading lands:

Country | Readers
United States | 1,205
India | 172
Philippines | 91
Canada | 89
United Kingdom | 72
Australia | 55
Germany | 50
Finland | 36
Spain | 33
Singapore | 28
France | 25
Hong Kong SAR China | 23
Latvia | 22
Mexico | 21
Malaysia | 20
South Africa | 19
Ireland | 16
Italy | 16
Pakistan | 16
Brazil | 15
Sweden | 14
Turkey | 14
Poland | 13
Netherlands | 12
Bangladesh | 11
Indonesia | 10
Norway | 10
Vietnam | 10
Austria | 9
Belgium | 9
Greece | 9
Japan | 8
Ukraine | 8
Israel | 7
Nigeria | 7
Bulgaria | 6
China | 6
Malta | 6
Romania | 6
Switzerland | 6
Thailand | 6
Belarus | 5
Colombia | 5
Ecuador | 5
Kenya | 5
New Zealand | 5
Portugal | 5
Taiwan | 5
Egypt | 4
Morocco | 4
Myanmar (Burma) | 4
Russia | 4
Serbia | 4
South Korea | 4
United Arab Emirates | 4
Croatia | 3
Czech Republic | 3
Hungary | 3
Slovakia | 3
Tanzania | 3
Algeria | 2
Cyprus | 2
El Salvador | 2
European Union | 2
Ghana | 2
Luxembourg | 2
Mongolia | 2
Saudi Arabia | 2
Slovenia | 2
Uganda | 2
Albania | 1 (*)
Argentina | 1
Azerbaijan | 1 (*)
Bosnia & Herzegovina | 1
Botswana | 1
Brunei | 1
Chile | 1
Denmark | 1
Estonia | 1
Jordan | 1
Laos | 1
Lithuania | 1
Macedonia | 1
Marshall Islands | 1
Mauritius | 1
Moldova | 1
Nicaragua | 1
Palestinian Territories | 1
Papua New Guinea | 1
Puerto Rico | 1
Rwanda | 1 (*)
Somalia | 1
Sri Lanka | 1
Trinidad & Tobago | 1 (*)

Albania, Azerbaijan, Rwanda, and Trinidad & Tobago were single-view countries in October too; no other country is on a two-month single-view streak. The Philippines are back to being among the three countries sending me the greatest number of page views. Hi, whoever there finds me interesting.

From the start of this blog through the start of December I’ve posted 1,385 things. These have drawn a total of 96,191 page views, from 52,069 logged unique visitors, which does not count people from the earliest couple years.

From the start of 2019 to the start of December I’d posted 183 things, putting me one up over all of 2018 already. Only 2015 (188 posts) and 2016 (213 posts) have had more, to date. I’ve had 164,245 words published so far this year, which is also already my third most verbose year on record. 24,185 of these words were posted in November, for an average post of 808 and one-sixth words per posting in November. That’s below the year’s average of 898 words per post. October’s posts averaged 803.8 words, by the way, so apparently I’ve stabilized some.

If you’d like to, you can follow me regularly. The easiest way is to add https://nebusresearch.wordpress.com/feed/ to your RSS reader. If you don’t have an RSS reader, a free account at Dreamwidth or Livejournal will serve as one: you can add RSS feeds to your Friends page. If you’ve already got a WordPress account, you can use “Follow Nebusresearch”, a button on the upper right corner of this page. My Twitter account @Nebusj lies fallow, but it still posts announcements, so that hasn’t broken at least.

In any case, thank you for reading, however it is you do it.

And I have made it to the end! As is traditional, I mean to write a few words about what I learned in doing all of this. Also as is traditional, I need to collapse after the work of thirteen weeks of two essays per week describing a small glossary of terms mostly suggested by kind readers. So while I wait to do that, let me gather in one bundle a list of all the essays from this project. If this seems to you like a lazy use of old content to fill a publication hole let me assure you: this will make my life so much easier next time I do an A-to-Z. I’ve learned that, at least, over the years.

See if you can spot where I discover my having made a big embarrassing mistake. It’s fun! For people who aren’t me!

Lincoln Peirce’s Big Nate for the 24th has boy-genius Peter drawing “electromagnetic vortex flow patterns”. Nate, reasonably, sees this sort of thing as completely abstract art. I’m not precisely sure what Peirce means by “electromagnetic vortex flow”. These are all terms that mathematicians, and mathematical physicists, would be interested in. That specific combination, though, I can find only a few references for. It seems to serve as a sensing tool, though.

No matter. Electromagnetic fields are interesting to a mathematical physicist, and so to mathematicians. Often a field like this can be represented as a system of vortices, too, points around which something swirls and which combine into the field that we observe. This can be a way to turn a continuous field into a set of discrete particles, which we might have better tools to study. And to draw what electromagnetic fields look like — even in a very rough form — can be a great help to understanding what they will do, and why. They also can be beautiful in ways that communicate even to those who don’t understand the thing modelled.

Megan Dong’s Sketchshark Comics for the 25th is a joke based on the reputation of the Golden Ratio. This is the idea that the ratio φ = (1 + √5)/2, roughly 1:1.618, is somehow a uniquely beautiful composition. You may sometimes see memes with some nice-looking animal and various boxes superimposed over it, possibly along with a spiral. The rectangles have the Golden Ratio ratio of width to height. And the ratio is kind of attractive since φ is about 1.618, and 1/φ is about 0.618. It’s a cute pattern, and there are other similar cute patterns. There is a school of thought that this is somehow transcendently beautiful, though.
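That last pair of numbers is the cute pattern: φ is the positive root of x² = x + 1, and dividing that equation through by x gives φ − 1 = 1/φ. Quick to confirm in Python:

```python
from math import sqrt

# The Golden Ratio, the positive root of x*x = x + 1
phi = (1 + sqrt(5)) / 2
print(round(phi, 3))      # 1.618
print(round(1 / phi, 3))  # 0.618: the reciprocal is phi minus one
```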

It’s all bunk. People may find stuff that’s about one-and-a-half times as tall as it is wide, or as wide as it is tall, attractive. But experiments show that they aren’t more likely to find something with Golden Ratio proportions attractive than, say, something with slightly different proportions, or even to be particularly consistent about what they like. You might be able to find (say) that the ratio of an eagle’s body length to the wing span is something close to φ. But any real-world thing has a lot of things you can measure. It would be surprising if you couldn’t find something near enough a ratio you liked. The guy is being ridiculous.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 26th builds on the idea that everyone could be matched to a suitable partner, given a proper sorting algorithm. I am skeptical of any “simple algorithm” being any good for handling complex human interactions such as marriage. But let’s suppose such an algorithm could exist.

This turns matchmaking into a problem of linear programming. Arguably it always was. But the best possible matches for society might not be — likely will not be — the matches everyone figures to be their first choices. Or even top several choices. For one, our desired choices are not necessarily the ones that would fit us best. And as the punch line of the comic implies, what might be the globally best solution, the one that has the greatest number of people matched with their best-fit partners, would require some unlucky souls to be in lousy fits.

Although, while I believe that’s the intention of the comic strip, it’s not quite what’s on panel. The assistant is told he’ll be matched with his 4,291st favorite choice, and I admit having to go that far down the favorites list is demoralizing. But there are about seven billion people in the world. This is someone who’ll be a happier match with him than 6,999,995,709 people would be. That’s a pretty good record, really. You can fairly ask how much worse that is than the person who “merely” makes him happier than 6,999,997,328 people would.
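The tension between the globally best matching and everyone’s individual first choice shows up even in a toy version. A brute-force sketch in Python, with an invented three-person happiness table; this is the assignment problem, the kind of thing linear programming handles at scale:

```python
from itertools import permutations

# happiness[i][j]: how happy person i is when matched with partner j
happiness = [
    [9, 7, 1],
    [9, 2, 1],
    [9, 3, 2],
]

def best_total(scores):
    # try every assignment of partners to people; keep the best overall
    n = len(scores)
    return max(permutations(range(n)),
               key=lambda perm: sum(scores[i][perm[i]] for i in range(n)))

match = best_total(happiness)
print(match)  # (1, 0, 2): person i gets partner match[i]
```

Here everyone’s first choice is partner 0, but the total-happiness optimum can hand partner 0 to only one of them; the others take worse fits for the collective good.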

I like this scheme where I use the Sunday publication slot to list comics that mention mathematics without inspiring conversation. I may need a better name for that branch of the series, though. But, nevertheless, here are comic strips from last week that don’t need much said about them.

John Deering’s Strange Brew for the 24th features Pythagoras, here being asked about his angles. I’m not aware of anything actually called a Pythagorean Angle, but there’s enough geometric things with Pythagoras’s name attached for the joke to make sense.

Maria Scrivan’s Half Full for the 25th is a Venn Diagram joke for the week. It doesn’t quite make sense as a Venn Diagram, as it’s not clear to me that “invasive questions” is sensibly a part of “food”. But it’s a break from every comic strip doing a week full of jokes about turkeys preferring to not be killed.

Tony Carrillo’s F Minus for the 26th is set in mathematics class. And talks about how the process of teaching mathematics is “an important step on the road to hating math”, which is funny because it’s painfully true.

I have several times taught a class in a subject I did not already know well. This is always exciting, and is even sometimes fun. It depends on how well you cope with discovering all your notes for the coming week are gibberish to yourself and will need a complete rewriting. One of the courses I taught in those conditions was on digital signal processing. This was a delight, and I’m sorry to not have more excuses to write about it. In the Summer 2015 A-to-Z I wrote about the z-transform, something we get to know really well in signal processing. The z-transform is also related to the Fourier transform, which is related to Fourier series, which do a lot to turn differential equations into polynomials. (And I am surprised I don’t yet have an essay about the Fourier transform specifically. Maybe sometime later.) The z-transform is a good place to finish off the spotlights shone on these older A-to-Z essays.

For a while there in grad school I thought I would do a thesis in knot theory. I didn’t, ultimately. I do better in problems that I can set a computer to, and then start thinking about once it has teased some interesting phenomenon out of simulations. But the affection, at least from me towards knot theory, remains. In the Fall 2018 A-to-Z sequence I got to share several subjects from this field. One of them is the Yamada Polynomial, a polynomial-like construct that lets us describe knots. I don’t know how anyone might not find that a fascinating prospect, even if they aren’t good at making the polynomials themselves.

Today’s A To Z term was nominated by Dina Yagodich, who runs a YouTube channel with a host of mathematics topics. Zeno’s Paradoxes exist in the intersection of mathematics and philosophy. Mathematics majors like to declare that they’re all easy. The Ancient Greeks didn’t understand infinite series or infinitesimals like we do. Now they’re no challenge at all. This reflects a belief that philosophers must be silly people who haven’t noticed that one can, say, exit a room.

This is your classic STEM-attitude of missing the point. We may suppose that Zeno of Elea occasionally exited rooms himself. That is a supposition, though. Zeno, like most philosophers who lived before Socrates, we know from other philosophers making fun of him a century after he died. Or at least trying to explain what they thought he was on about. Modern philosophers are expected to present others’ arguments as well and as strongly as possible. This even — especially — when describing an argument they want to say is the stupidest thing they ever heard. Or, to use the lingo, when they wish to refute it. Ancient philosophers had no such compulsion. They did not mind presenting someone else’s argument sketchily, if they supposed everyone already knew it. Or even badly, if they wanted to make the other philosopher sound ridiculous. Between that and the sparse nature of the record, we have to guess a bit about what Zeno precisely said and what he meant. This is all right. We have some idea of things that might reasonably have bothered Zeno.

And they have bothered philosophers for thousands of years. They are about change. The ones I mean to discuss here are particularly about motion. And there are things we do not understand about change. This essay will not answer what we don’t understand. But it will, I hope, show something about why that’s still an interesting thing to ponder.

Zeno’s Paradoxes.

When we capture a moment by photographing it we add lies to what we see. We impose a frame on its contents, discarding what is off-frame. We rip an instant out of its context. And that’s before considering how we stage photographs, making people smile and stop tilting their heads. We forgive many of these lies. The things excluded from or the moments around the one photographed might not alter what the photograph represents. Making everyone smile can convey the emotional average of the event in a way that no individual moment represents. Arranging people to stand in frame can convey the participation in the way a candid photograph would not.

But there remains the lie that a photograph is “a moment”. It is no such thing. We notice this when the photograph is blurred. It records all the light passing through the lens while the shutter is open. A photograph records an eighth of a second. A thirtieth of a second. A thousandth of a second. But still, some time. There is always the ghost of motion in a picture. If we do not see it, it is because our photograph’s resolution is too coarse. If we could photograph something with infinite fidelity we would see, even in still life, the wobbling of the molecules that make up a thing.

Which implies something fascinating to me. Think of a reel of film. Here I mean old-school pre-digital film, the thing that’s a great strip of pictures, a new one shown 24 times per second. Each frame of film is a photograph, recording some split-second of time. How much time is actually in a film, then? How long, cumulatively, was a camera shutter open during a two-hour film? I use pre-digital, strip-of-film movies for convenience. Digital films offer the same questions, but with different technical points. And I do not want the writing burden of describing both analog and digital film technologies. So I will stick to the long sequence of analog photographs model.

Let me imagine a movie. One of an ordinary everyday event; an actuality, to use the terminology of 1898. A person overtaking a walking tortoise. Look at the strip of film. There are many frames which show the person behind the tortoise. There are many frames showing the person ahead of the tortoise. When are the person and the tortoise at the same spot?

We have to put in some definitions. Fine; do that. Say we mean when the leading edge of the person’s nose overtakes the leading edge of the tortoise’s, as viewed from our camera. Or, since there must be blur, when the center of the blur of the person’s nose overtakes the center of the blur of the tortoise’s nose.

Do we have the frame when that moment happened? I’m sure we have frames from the moments before, and frames from the moments after. But the exact moment? Are you positive? If we zoomed in, would it actually show the person is a millimeter behind the tortoise? That the person is a hundredth of a millimeter ahead? A thousandth of a hair’s width behind? Suppose that our camera is very good. It can take frames representing as small a time as we need. Does it ever capture that precise moment? To the point that we know, no, it’s not the case that the tortoise is one-trillionth the width of a hydrogen atom ahead of the person?

If we can’t show the frame where this overtaking happened, then how do we know it happened? To put it in terms a STEM major will respect, how can we credit a thing we have not observed with happening? … Yes, we can suppose it happened if we suppose continuity in space and time. Then it follows from the intermediate value theorem. But then we are begging the question. We impose the assumption that there is a moment of overtaking. This does not prove that the moment exists.

Fine, then. What if time is not continuous? If there is a smallest moment of time? … If there is, then, we can imagine a frame of film that photographs only that one moment. So let’s look at its footage.

One thing stands out. There’s finally no blur in the picture. There can’t be; there’s no time during which to move. We might not catch the moment that the person overtakes the tortoise. It could “happen” in-between moments. But at least we have a moment to observe at leisure.

So … what is the difference between a picture of the person overtaking the tortoise, and a picture of the person and the tortoise standing still? A movie of the two walking should be different from a movie of the two pretending to be department store mannequins. What, in this frame, is the difference? If there is no observable difference, how does the universe tell whether, next instant, these two should have moved or not?

A mathematical physicist may toss in an answer. Our photograph is only of positions. We should also track momentum. Momentum carries within it the information of how position changes over time. We can’t photograph momentum, not without getting blurs. But analytically? If we interpret a photograph as “really” tracking the positions of a bunch of particles? To the mathematical physicist, momentum is as good a variable as position, and it’s as measurable. We can imagine a hyperspace photograph that gives us an image of positions and momentums. So, STEM types show up the philosophers finally, right?

Hold on. Let’s allow that somehow we get changes in position from the momentum of something. Hold off worrying about how momentum gets into position. Where does a change in momentum come from? In the mathematical physics problems we can do, the change in momentum has a value that depends on position. In the mathematical physics problems we have to deal with, the change in momentum has a value that depends on position and momentum. But that value? Put it in words. That value is the change in momentum. It has the same relationship to acceleration that momentum has to velocity. For want of a real term, I’ll call it acceleration. We need more variables. An even more hyperspatial film camera.

… And does acceleration change? Where does that change come from? That is going to demand another variable, the change-in-acceleration. (The “jerk”, according to people who want to tell you that “jerk” is a commonly used term for the change-in-acceleration, and no one else.) And the change-in-change-in-acceleration. Change-in-change-in-change-in-acceleration. We have to invoke an infinite regression of new variables. We got here because we wanted to suppose it wasn’t possible to divide a span of time infinitely many times. This seems like a lot to build into the universe to distinguish a person walking past a tortoise from a person standing near a tortoise. And then we still must admit not knowing how one variable propagates into another. That a person is wide is not usually enough explanation of how they are growing taller.

Numerical integration can model this kind of system with time divided into discrete chunks. It teaches us some ways that this can make logical sense. It also shows us that our projections will (generally) be wrong. At least unless we do things like have an infinite number of steps of time factor into each projection of the next timestep. Or use the forecast of future timesteps to correct the current one. Maybe use both. These are … not impossible. But being “ … not impossible” is not to say satisfying. (We allow numerical integration to be wrong by quantifying just how wrong it is. We call this an “error”, and have techniques that we can use to keep the error within some tolerated margin.)
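To make that concrete, here is a minimal sketch of the simplest numerical integrator, Euler's method, tracking a harmonic oscillator in discrete chunks of time. This is my own illustration, not anything from the original discussion; the oscillator and step counts are just convenient choices. The error at a fixed end time shrinks roughly in proportion to the timestep, which is how we keep it inside a tolerated margin:

```python
from math import cos

# A minimal sketch of numerical integration: Euler's method for the
# harmonic oscillator dx/dt = v, dv/dt = -x, with x(0) = 1, v(0) = 0.
# The exact position at time t is cos(t).
def euler(steps, t_end=1.0):
    dt = t_end / steps
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x  # one discrete chunk of time
    return x

# The error at t = 1 shrinks roughly in proportion to the timestep.
for steps in (10, 100, 1000):
    print(steps, abs(euler(steps) - cos(1.0)))
```

Dividing time ten times more finely buys roughly ten times less error, but never no error; the discrete model only ever approximates the continuous one.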

So where has the movement happened? The original scene had movement to it. The movie seems to represent that movement. But that movement doesn’t seem to be in any frame of the movie. Where did it come from?

We can have properties that appear in a mass which don’t appear in any component piece. No molecule of a substance has a color, but a big enough mass does. No atom of iron is ferromagnetic, but a chunk might be. No grain of sand is a heap, but enough of them are. The Ancient Greeks knew this; we call it the Sorites paradox, after Eubulides of Miletus. (“Sorites” means “heap”, as in heap of sand. But if you had to bluff through a conversation about ancient Greek philosophers you could probably get away with making up a quote you credit to Sorites.) Could movement be, in the term physicists use, an emergent property? But emergent properties are obvious to the outside observer of a thing. We are not outside observers to the universe. It’s not clear what it would mean for there to be an outside observer to the universe. Even if there were, what space and time are they observing in? And aren’t their space and their time and their observations vulnerable to the same questions? We’re in danger of insisting on an infinite regression of “universes” just so a person can walk past a tortoise in ours.

We can say where movement comes from when we watch a movie. It is a trick of perception. Our eyes take some time to understand a new image. Our brains insist on forming a continuous whole story even out of disjoint ideas. Our memory fools us into remembering a continuous line of action. That a movie moves is entirely an illusion.

You see the implication here. Surely Zeno was not trying to lead us to understand all motion, in the real world, as an illusion? … Zeno seems to have been trying to support the work of Parmenides of Elea. Parmenides is another pre-Socratic philosopher. So we have about four words that we’re fairly sure he authored, and we’re not positive what order to put them in. Parmenides was arguing about the nature of reality, and what it means for a thing to come into or pass out of existence. He seems to have been arguing something like that there was a true reality that’s necessary and timeless and changeless. And there’s an apparent reality, the thing our senses observe. And in our sensing, we add lies which make things like change seem to happen. (Do not use this to get through your PhD defense in philosophy. I’m not sure I’d use it to get through your Intro to Ancient Greek Philosophy quiz.) That what we perceive as movement is not what is “really” going on is, at least, imaginable. So it is worth asking questions about what we mean for something to move. What difference there is between our intuitive understanding of movement and what logic says should happen.

(I know someone wishes to throw down the word Quantum. Quantum mechanics is a powerful tool for describing how many things behave. It implies limits on what we can simultaneously know about pairs of quantities, such as the position and the momentum of a thing. But there is a difference between “what time is” and “what we can know about a thing’s coordinates in time”. Quantum mechanics speaks more to the latter. There are also people who would like to say Relativity. Relativity, special and general, implies we should look at space and time as a unified set. But this does not change our questions about continuity of time or space, or where to find movement in both.)

And this is why we are likely never to finish pondering Zeno’s Paradoxes. In this essay I’ve only discussed two of them: Achilles and the Tortoise, and The Arrow. There are two other particularly famous ones: the Dichotomy, and the Stadium. The Dichotomy is the one about how to get somewhere, you have to get halfway there. But to get halfway there, you have to get a quarter of the way there. And an eighth of the way there, and so on. The Stadium is the hardest of the four great paradoxes to explain. This is in part because the earliest writings we have about it don’t make clear what Zeno was trying to get at. I can think of something which seems consistent with what’s described, and contrary-to-intuition enough to be interesting. I’m satisfied to ponder that one. But other people may have different ideas of what the paradox should be.
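The Dichotomy’s halves and quarters and eighths, incidentally, are the kind of thing modern analysis is comfortable adding up. The infinitely many legs of the trip form a geometric series with a finite sum:

\[
\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{k=1}^{\infty} \frac{1}{2^k} = \lim_{n \to \infty} \left( 1 - \frac{1}{2^n} \right) = 1 .
\]

So the trip covers a finite distance and, at a constant speed, takes a finite time. Whether that settles what bothered Zeno, or only restates it in notation he lacked, is the sort of question philosophers keep open.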

There are a handful of other paradoxes which don’t get so much love, although one of them is another version of the Sorites Paradox. Some of them the Stanford Encyclopedia of Philosophy dubs “paradoxes of plurality”. These ask how many things there could be. It’s hard to judge just what he was getting at with this. We know that one argument had three parts, and only two of them survive. Trying to fill in that gap is a challenge. We want to fill in the argument we would make, projecting from our modern idea of this plurality. It’s not Zeno’s idea, though, and we can’t know how close our projection is.

I don’t have the space to make a thematically coherent essay describing these all, though. The set of paradoxes have demanded thought, even just to come up with a reason to think they don’t demand thought, for thousands of years. We will, perhaps, have to keep trying again to fully understand what it is we don’t understand.

Thank you, all who’ve been reading, and who’ve offered topics, comments on the material, or questions about things I was hoping readers wouldn’t notice I was shorting. I’ll probably do this again next year, after I’ve had some chance to rest.

I ran across something neat. It’s something I’ve seen before, but the new element is that I have a name for it. This is the Golomb Ruler. It’s a ruler made with as few marks as possible. The marks are supposed to be arranged so that the greatest possible number of different distances can be made, by measuring between selected pairs of marks. In particular, no two pairs of marks are the same distance apart.

So, like, on a regularly spaced ruler, you have a lot of ways to measure a distance of 1 unit of length. One fewer way to measure a distance of 2 units. One fewer still to measure a distance of 3 units, and so on. Convenient, but wasteful of marks. A Golomb ruler might, say, put marks only where the regularly spaced ruler has the units 1, 2, and 4. Then by choosing the correct pairs you can measure a distance of 1, 2, or 3 units, each in exactly one way.

There’s applications of the Golomb ruler, stuff in information theory and sensor design and stuff. Also logistics. Never mind those. They present a neat little puzzle: can you find, for a given number of marks, the best possible arrangement of them into a ruler? That would be the arrangement that allows the greatest number of different lengths. Or perhaps the one that allows the longest string of whole-number differences. Your definition of best-possible determines what the answer is.

As a number theory problem it won’t surprise you to know there’s not a general answer. If I’m reading accurately, most of the known best arrangements — the ones that allow the greatest number of differences — were proven by testing out cases. The 24-mark arrangement needed a test of 555,529,785,505,835,800 different rulers. MathWorld’s page on this tells me that optimal mark placement isn’t known for 25 or more marks. It also says that the 25-mark ruler’s optimal arrangement was published in 2008. So it isn’t just Wikipedia where someone will write an article, and then someone else will throw a new heap of words onto it, and nobody will read to see if the whole thing still makes sense. Wikipedia meanwhile lists optimal configurations for up to 27 points, demonstrated by 2014.

And as this suggests, you aren’t going to discover an optimal arrangement for some number of marks yourself. Unless you should be the first person to figure out an algorithm to do it. It’s not even known how hard the problem is, computationally. It’s suspected to be NP-hard, though. But, while you won’t discover anything new to mathematics in pondering this, you can still have the fun of working out arrangements yourself, at least for a handful of marks. There are numbers of marks with more than one optimal arrangement.
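If you do want to work out arrangements yourself, a computer search is easy to sketch. This is my own minimal brute-force version, taking “optimal” to mean the shortest ruler with a given number of marks and all pairwise distances distinct; it is fine for a handful of marks and hopeless beyond that:

```python
from itertools import combinations

def is_golomb(marks):
    """A ruler is Golomb if every pair of marks gives a distinct distance."""
    dists = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(dists) == len(set(dists))

def optimal_golomb(n):
    """Shortest Golomb ruler with n marks, by exhaustive search.
    Fine for small n; the search space explodes quickly."""
    length = n - 1
    while True:
        # first mark at 0, last at `length`; choose the interior marks
        for interior in combinations(range(1, length), n - 2):
            marks = (0,) + interior + (length,)
            if is_golomb(marks):
                return marks
        length += 1

print(optimal_golomb(4))  # (0, 1, 4, 6)
```

Four marks at 0, 1, 4, and 6 measure every distance from 1 through 6 units, each in exactly one way, which is as good as four marks can possibly do.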

(Golomb here is Solomon W Golomb, a mathematician and electrical engineer with a long history in information theory and also recreational mathematics problems. There are several parties who independently invented the problem. But Golomb actually did work with rulers, so at least they aren’t incorrectly named.)

Today’s A To Z term is … well, my second choice. Goldenoj suggested Yang-Mills and I was so interested. Yang-Mills describes a class of mathematical structures. They particularly offer insight into how to do quantum mechanics. Especially particle physics. It’s of great importance. But, on thinking out what I would have to explain I realized I couldn’t write a coherent essay about it. Getting to what the theory is made of would take explaining a bunch of complicated mathematical structures. If I’d scheduled the A-to-Z differently, setting up matters like Lie algebras, maybe I could do it, but this time around? No such help. And I don’t feel comfortable enough in my knowledge of Yang-Mills to describe it without describing its technical points.

That said I hope that Jacob Siehler, who suggested the Game of ‘Y’, does not feel slighted. I hadn’t known anything of the game going into the essay-writing. When I started research I was delighted. I have yet to actually play a for-real game of this. But I like what I see, and what I think I can write about it.

Game of ‘Y’.

This is, as the name implies, a game. It has two players. They have the same objective: to create a ‘y’. Here, they do it by laying down tokens representing their side. They take turns, each laying down one token in a turn. They do this on a shape with three edges. The ‘y’ is created when there’s a continuous path of their tokens that reaches all three edges. Yes, it counts to have just a single line running along one edge of the board. This makes a pretty sorry ‘y’ but it suggests your opponent isn’t trying.

There are details of implementation. The board is a mesh of, mostly, hexagons. I take this to be for the same reason that so many conquest-type strategy games use hexagons. They tile space well, they give every space a good number of neighbors, and the distance from the center of one cell to the center of any neighbor is constant. In a square grid, the centers of diagonal neighbors are farther apart than the centers of left-right or up-down neighbors. Hexagons do well for this kind of game, where the goal is to fill space, or at least fill paths in space. There’s even a game named Hex, slightly older than Y, with similar rules. In that, the goal is to draw a continuous path from one end of the rectangular grid to the other. The grids of commercial boards, from what I see, are around nine hexagons on a side. This probably reflects a desire to have a big enough board that games go on a while, but not so big that they go on forever.

Mathematicians have things to say about this game. It fits nicely in game theory. It’s well-designed to show some things about game theory. It’s a game of perfect information, for example. Each player knows, at all times, the moves all the players have made. Just look at the board and see where they’ve placed their tokens. A player might have forgotten the order the tokens were placed in, but that’s the player’s problem, not the game’s. Anyway, in Y, the order of token-placing doesn’t much matter.

It’s also a game of complete information. Every player knows, at every step, what the other player could do. And what objective they’re working towards. One party, thinking enough, could forecast the other’s entire game. This comes close to the joke about the prisoners telling each other jokes by shouting numbers out to one another.

It is also a game in which a draw is impossible. Play long enough and someone must win. This even if both parties are for some reason trying to lose. There are ingenious proofs of this, but we can show it by considering a really simple game. Imagine playing Y on a tiny board, one that’s just one hex on each side. Definitely want to be the first player there.
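You can check the no-draw claim by brute force on a small board. This sketch is my own; the coordinate scheme and the three-per-side board are just convenient choices. It enumerates every way of filling the board with two players’ tokens and confirms that exactly one player always ends up with a ‘Y’:

```python
from itertools import product

# A tiny Y board: a triangular grid, three cells on a side.
N = 3
cells = [(r, c) for r in range(N) for c in range(r + 1)]
cell_set = set(cells)

def neighbors(r, c):
    # the six hex-style neighbors, in triangular coordinates
    cand = [(r, c - 1), (r, c + 1), (r - 1, c), (r + 1, c),
            (r - 1, c - 1), (r + 1, c + 1)]
    return [p for p in cand if p in cell_set]

def sides_touched(group):
    sides = set()
    for r, c in group:
        if c == 0:
            sides.add("left")
        if c == r:
            sides.add("right")
        if r == N - 1:
            sides.add("bottom")
    return sides

def wins(player_cells):
    # flood-fill each connected group; a 'Y' touches all three sides
    seen = set()
    for start in player_cells:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            p = stack.pop()
            if p in group:
                continue
            group.add(p)
            stack.extend(q for q in neighbors(*p) if q in player_cells)
        seen |= group
        if len(sides_touched(group)) == 3:
            return True
    return False

draws = double_wins = 0
for coloring in product("XO", repeat=len(cells)):
    x = {cell for cell, mark in zip(cells, coloring) if mark == "X"}
    wx, wo = wins(x), wins(cell_set - x)
    draws += not (wx or wo)
    double_wins += wx and wo

print(draws, double_wins)  # both come out 0
```

No full board is ever drawn, and no full board is ever won by both players. The same holds on bigger boards, though checking every coloring stops being practical almost immediately.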

So now imagine playing a slightly bigger board. Augment this one-by-one-by-one board by one row. That is, here, add two hexes along one of the sides of the original board. So there’s two pieces here; one is the original territory, and one is this one-row augmented territory. Look first at the original territory. Suppose that one of the players has gotten a ‘Y’ for the original territory. Will that player win the full-size board? … Well, sure. The other player can put a token down on either hex in the augmented territory. But there’s two hexes, either of which would make a path that connects the three edges of the board. The first player can put a token down on the other hex in the augmented territory, and now connects all three of the new sides again. First player wins.

All right, but how about a slightly bigger board? So take that two-by-two-by-two board and augment it, adding three hexes along one of the sides. Imagine a player’s won the original territory board. Do they have to win the full-size board? … Sure. The second player can put something in the augmented territory. But there’s again two hexes that would make the path connecting all three sides of the full board. The second player can put a token in one of those hexes. But the first player can put a token in the other of those. First player wins again.

How about a slightly bigger board yet? … Same logic holds. Really the only reason that the first player doesn’t always win is that, at some point, the first player screws up. And this is an existence proof, showing that the first player can always win. It doesn’t give any guidance into how to play, though. If the first player plays perfectly, she’s compelled to win. This is something which happens in many two-player, symmetric games. A symmetric game is one where either player has the same set of available moves, and can make the same moves with the same results. This proof needs to be tightened up to really hold. But it should convince you, at least, that the first player has an advantage.

So given that, the question becomes why play this game after you’ve decided who’ll go first? The reason you might play anyway is, what, you have something better to do? And maybe you think you’ll make fewer mistakes than your opponent. One approach often used in symmetric games like this is the “pie rule”. The name comes from the story about how to slice a pie so you and your sibling don’t fight over the results. One cuts the pie, the other gets first pick of the slice, and then you fight anyway. In this game, though, one player makes a tentative first move. The other decides whether they will be Player One, with that first move made, or whether they’ll be Player Two, responding.

There are some neat quirks in the commercial Y games. One is that they don’t actually show hexes, and you don’t put tokens in the middle of hexes. Instead you put tokens on the spots that would be the centers of the hexes. On the board are lines pointing to the neighbors. This makes the board actually a mesh of triangles. This is the dual to the hex grid. It shows a set of vertices, and their connections, instead of hexes and their neighbors. Whether you think the hex grid or this dual makes it easier to tell when you’ve connected all three edges is a matter of taste. It does make the edges less jagged all around.

Another is that there will be three vertices that don’t connect to six others. They connect to five others, instead. Their spaces would be pentagons. As I understand the literature on this, this is a concession to game balance. It makes it easier for one side to fend off a path coming from the center.

It has geometric significance, though. A pure hexagonal grid is a structure that tiles the plane. A mostly hexagonal grid, with a couple of pentagons, though? That can tile the sphere. To cover the whole sphere you need exactly twelve pentagons mixed in among the hexagons. But this? With the three pentagons? That gives you a space that’s topologically equivalent to a hemisphere, or at least a slice of the sphere. If we do imagine the board to be a hemisphere covered, then the result of the handful of pentagon spaces is to make the “pole” closer to the equator.
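That twelve is no accident; it falls out of Euler’s formula. Suppose a closed surface is covered by \(P\) pentagons and \(H\) hexagons, with three faces meeting at each vertex. Then

\[
F = P + H, \qquad E = \frac{5P + 6H}{2}, \qquad V = \frac{5P + 6H}{3},
\]

and Euler’s formula \( V - E + F = 2 \) becomes

\[
\frac{5P + 6H}{3} - \frac{5P + 6H}{2} + P + H = 2, \qquad \text{which simplifies to} \qquad P = 12,
\]

no matter how many hexagons there are. Three pentagons alone can’t close the surface up, which is why the board reads as a slice of a sphere rather than a whole one.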

So as I say the game seems fun enough to play. And it shows off some of the ways that game theorists classify games. And the questions they ask about games. Is the game always won by someone? Does one party have an advantage? Can one party always force a win? It also shows the kinds of approach game theorists can use to answer these questions. This before they consider whether they’d enjoy playing it.

There were just a handful of comic strips that mentioned mathematical topics I found substantial. Of those that did, computational science came up a couple times. So that’s how we got to here.

Rick Detorie’s One Big Happy for the 17th has Joe writing an essay on the history of computing. It’s basically right, too, within the confines of space and understandable mistakes like replacing Pennsylvania with an easier-to-spell state. And within the confines of simplification for the sake of getting the idea across briefly. Most notable is Joe explaining ENIAC as “the first electronic digital computer”. Anyone calling anything “the first” of an invention is simplifying history, possibly to the point of misleading. But we must simplify any history to have it be understandable. ENIAC is among the first computers that anyone today would agree is of a kind with the laptop I use. And it’s certainly the one that, among its contemporaries, most captured the public imagination.

Incidentally, Herman Hollerith was born on Leap Day, 1860; this coming year will in that sense see only his 39th birthday.

Ryan North’s Dinosaur Comics for the 18th is based on the question of whether P equals NP. This is, as T-Rex says, the greatest unsolved problem in computer science. These are what appear to be two different kinds of problems. Some of them we can solve in “polynomial time”, with the number of steps to find a solution growing as some polynomial function of the size of the problem. Others are “NP”, for nondeterministic polynomial time. For these we can at least check a proposed solution in a polynomial number of steps, but nobody knows a way to find a solution that fast.

You see one problem. Not knowing a way to solve a problem in polynomial time does not necessarily mean there isn’t a solution. It may mean we just haven’t thought of one. If there is a way we haven’t thought of, then we would say P equals NP. And many people assume that very exciting things would then follow. Part of this is because computational complexity researchers know that many NP problems, the ones called NP-complete, are equivalent to one another. That is, we can describe any of these problems as a translation of any other of them. This is the other part which makes this joke: the declaration that ‘whether God likes poutine’ is isomorphic to the question ‘does P equal NP’.

We tend to assume, also, that if P does equal NP then NP problems, such as breaking public-key cryptography, are all suddenly easy. This isn’t necessarily guaranteed. When we describe something as polynomial or non-polynomial time we’re talking about the pattern by which the number of steps needed to find the solution grows. In that case, then, an algorithm that takes one million steps plus one billion times the size-of-the-problem to the one trillionth power is polynomial time. An algorithm that takes two raised to the size-of-the-problem divided by one quintillion (rounded up to the next whole number) is non-polynomial. But for most any problem you’d care to do, this non-polynomial algorithm will be done sooner. If it turns out P does equal NP, we still don’t necessarily know that NP problems are practical to solve.
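The point about growth patterns versus actual step counts is easy to see numerically. Here is a toy comparison, with stand-in formulas of my own that are milder than the paragraph’s (a one-trillionth power is too big to play with). The exponential algorithm needs fewer steps at every modest problem size, and only loses once the problem gets quite large:

```python
from math import ceil

# Toy step-count formulas, my own stand-ins for the paragraph's examples.
def poly_steps(n):
    # polynomial time, but with punishing constants
    return 10**6 + 10**9 * n**3

def exp_steps(n):
    # exponential, "non-polynomial" time, with a tiny constant factor
    return ceil(2**n / 10**18)

# The crossover here happens somewhere past n = 100.
for n in (10, 50, 100, 200):
    faster = "exponential" if exp_steps(n) < poly_steps(n) else "polynomial"
    print(f"size {n}: the {faster} algorithm needs fewer steps")
```

Which is the whole caution: “polynomial” describes the eventual growth pattern, not which algorithm you would rather run today.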

Bil Keane and Jeff Keane’s The Family Circus for the 20th has Dolly explaining to Jeffy about the finiteness of the alphabet and the infinity of numbers. I remember in my childhood coming to understand this and feeling something unjust in the difference between the kinds of symbols. That we can represent any of those whole numbers with just ten symbols (thirteen, if we include commas, decimals, and a multiplication symbol for the sake of using scientific notation) is an astounding feat of symbolic economy.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 21st builds on the statistics of genetics. In studying the correlations between one thing and another we look at something which varies, usually as the result of many factors, including some plain randomness. If there is a correlation between one variable and another we usually can describe how much of the change in one quantity depends on the other. This is what the scientist means in saying the presence of this one gene accounts for 0.1% of the variance in eeeeevil. The way this is presented, the activity of one gene is responsible for about one-thousandth of the variation in eeeeevil from person to person.

As the father observes, this doesn’t seem like much. This is because there are a lot of genes describing most traits. And that is before we consider epigenetics, the factors besides what is in DNA that affect how an organism develops. I am, unfortunately, too ignorant of the language of genetics to be able to say what a typical variation for a single gene would be, and thus to check whether Weinersmith has the scale of numbers right.
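To give a sense of what “accounts for 0.1% of the variance” means in practice, here is a toy simulation of my own. All the numbers are made up; the gene’s effect size is chosen so the squared correlation lands near one-thousandth:

```python
import random
random.seed(42)

# Toy model, my own numbers: an "eeeeevil" score nudged slightly by one
# gene, swamped by everything else (modeled as Gaussian noise).
n = 100_000
gene = [random.choice((0, 1)) for _ in range(n)]
evil = [0.063 * g + random.gauss(0, 1) for g in gene]

def mean(xs):
    return sum(xs) / len(xs)

# Fraction of variance in `evil` explained by `gene`: the squared correlation.
mg, me = mean(gene), mean(evil)
cov = mean([(g - mg) * (e - me) for g, e in zip(gene, evil)])
var_g = mean([(g - mg) ** 2 for g in gene])
var_e = mean([(e - me) ** 2 for e in evil])
r2 = cov ** 2 / (var_g * var_e)
print(f"variance explained: {r2:.2%}")  # should land near 0.1%
```

Note how small an effect this is: carrying the gene shifts the average score by a fraction of the noise’s standard deviation, and you would never spot it without a large sample.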

I’m finding it surprisingly good for my workflow to use Sundays for the comic strips which mention mathematics only casually. Tomorrow or so I’ll get to the ones with substantial material, in an essay available at this link.

Jim Meddick’s Monty for the 19th is a sudoku joke, with Monty filling in things that aren’t numerals. Many of them are commonly used mathematical symbols. The ones that I don’t recognize I suspect come from physics applications, especially particle physics. These rely heavily on differential equations and group theory, and are likely where Meddick got some of those symbols from.