The week was looking ready to be one where I have my five paragraphs about how something shows off a word problem and that’s it. And then Comic Strip Master Command turned up the flow of comics for Saturday. So, here’s my five paragraphs about something being word problems and we’ll pick up the other half of them soon.

Bill Whitehead’s Free Range for the 10th is an Albert Einstein joke. That’s usually been enough. That it mentions curved space, the exotic geometries that make general relativity so interesting, gives it a little more grounding as a mathematical comic. It’s a bit curious, surely, that curved space strikes people as so absurd. Nobody seriously argues about whether we live on a curved surface, though, not when we can see globes and think about shapes that cover a big part of the surface of the Earth. But there is something different about thinking of three-dimensional space as curved; it’s hard to imagine what it could be curved around.

Brian Basset’s Red and Rover started some word problems on the 11th, this time with trains travelling in separate directions. The word problem seemed peculiar, since the trains wouldn’t be 246 miles apart at any whole number of hours. But they will be at a reasonable fraction more than a whole number of hours, so I guess Red has gotten to division with fractions.

Red and Rover are back at it the 12th with basically the same problem. This time it’s with airplanes. Also this time it’s a much worse problem. You can still do the problem, but the numbers are uglier. It’ll be just enough over two hours and ten minutes that I wonder if the numbers got rewritten away from some nicer set. For example, if the planes had been flying at 360 and 540 miles per hour, and the question was when they would be 2,100 miles apart, then you’d have a nice two-and-a-third hours.
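The nicer numbers can be checked in a line or two of Python. (The speeds and distance here are my hypothetical replacements named above, not the strip’s actual figures.)

```python
# Planes flying in opposite directions separate at the sum of their speeds.
closing_speed = 360 + 540   # miles per hour, the hypothetical nicer speeds
distance = 2100             # miles apart
t = distance / closing_speed
print(t)  # 2.333..., that is, two and a third hours
```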

I’ve in the past done essays about what I’ve taken away from an A to Z project. Please indulge me with this.

The big thing I learned from the Summer 2017 A To Z, besides that it would have been a little better started two weeks earlier? (I couldn’t have started it two weeks earlier. July was a frightfully busy month. As it was I was writing much too close to deadline. Starting sooner would have been impossible.)

Category theory, mostly. Many of the topics requested had some category theory component. Next would be tensors and tensor-related subjects. This is exciting and dangerous. Neither’s a field I know well. Both are fields I want to know better. It’s a truism that to really learn an advanced subject you have to teach a course in it. That’s how I picked up what I know about signal processing and about numerical quantum mechanics. Still, it’s perilous, especially when I would realize the subject asked for wasn’t what I faintly remembered had been asked for, and that I’d been composing an essay in my head for a week already.

Also, scheduling. The past A To Z sequences were relatively low-stress things for me. I could get as many as six essays ahead of what I needed to post. That’s so comfortable a place to be. This time around, I was working much closer to deadline, with some pieces needing major rewriting as few as fifteen hours before my posting hour. More needed minor editing the day of posting. There are several causes for this. But the biggest is that I wrote much longer this time. Past A To Z sequences could have at least a couple essays that were only a few paragraphs. This time around I don’t think any piece came in at under a thousand words, and the default was getting to be around 1,500 words. I don’t think I broke 2,000 words, but I came close.

That’s fine, because the essays came out great. This has been the A To Z sequence I’m proudest of, so far. They’re the ones that make me think my father’s ever-supportive assurance that I could put these into a book that people would give me actual money for can be right. Still, the combination of writing about stuff I had to research more first and writing longer pieces made the workload more than I’d figured on. When I get to doing this again — and I will, when the exhaustion’s faded enough from memory — I will need more lead time between asking for topics and starting to write. And will need to freeze topics farther in advance than I did this time. I still suspect my father’s too supportive to say I could get money for this. But it’s a less unrealistic thought than I had figured before.

Also learned: hire an artist! I got a better-banner-than-I-paid-for from Thomas K Dye for this series. His work added a snappy bit of visual appeal to my sentence heaps. I’d also gotten from him a banner for the Why Stuff Can Orbit sequence, which I mean to resume now that I have some more writing time. But the banners give a needed bit of unity to my writing, and the automatically-generated Twitter announcements of these posts, and that’s helped the look of the place. Something like nine-tenths of the people I know online are visual artists of one kind or another. (The rest are writers, my siblings, and my mother.) I should be making reasons to commission them. For example, if I want to describe something too complicated to do in words alone I should turn it over to them. Remember, I don’t do the few-pictures thing because I’m a good writer. It’s because I’m too lazy to make an illustration myself. A bit of money can be as good as effort.

Speaking of effort, between the A To Z essays and Reading the Comics posts, and a couple miscellaneous other pieces, I wrote five to six thousand words per week for two months. That’s probably not sustainable indefinitely, but a slightly lower pace? And for a specific big project? It’s good to know that’s something I can do, albeit possibly by putting this blog on hold.

Learned to my personal everlasting humiliation: I spelled “Klein Bottle” wrong. Fortunately, I only spelled it “Klien” in the title of the essay, so it sits there in my tweet publicizing the post and in the full-length URL to the post, forever. I’ll recover, I hope.

The most interesting mathematically-themed comic strips from last week were also reruns. So be it; at least I have an excuse to show a 1931-vintage comic. Also, after discovering my old theme didn’t show the category of essay I was posting, I did literally minutes of search for a new theme that did. And that showed tags. And that didn’t put a weird color behind LaTeX inline equations. So I’m using the same theme as my humor blog does, albeit with a different typeface, and we’ll hope that means I don’t post stuff to the wrong blog. As it is I start posting something to the wrong place about once every twenty times. All I want is a WordPress theme with all the good traits of the themes I look at and none of the drawbacks; why is that so hard to get?

Elzie Segar’s Thimble Theatre rerun for the 5th originally ran the 25th of April, 1931. It’s just a joke about Popeye not being good at bookkeeping. In the story, Popeye’s taken the $50,000 reward from his last adventure and opened a One-Way Bank, giving people whatever money they say they need. And now you understand how the first panel of the last row has several jokes in it. The strip is partly a joke about Popeye being better with stuff he can hit than anything else, of course. I wonder if there’s an old stereotype of sailors being bad at arithmetic. I remember reading about pirate crews where, for example, not-as-canny-as-they-think sailors would demand a fortieth or a fiftieth of the prizes as their pay, instead of a mere thirtieth. But it’s so hard to tell what really happened and what’s just a story about the stupidity of people. Marginal? Maybe, but I’m a Popeye fan and this is my blog, so there.

Norm Feuti’s Gil rerun for the 6th is a subverted word problem joke. And it’s a reminder of how hard story problems can be. You need something that has a mathematics question on point. And the question has to be framed as asking something someone would actually care to learn. Plus the story has to make sense. Much easier when you’re teaching calculus, I think.

Gary Wise and Lance Aldrich’s Real Life Adventures for the 6th is a parent-can’t-help-with-homework joke, done with arithmetic since it’s hard to figure another subject that would make the joke possible. I suppose a spelling assignment could be made to work. But that would be hard to write so it didn’t seem contrived.

Thaves’ Frank and Ernest for the 7th feels like it’s a riff on the old saw about Plato’s Academy. (The young royal sent home with a coin because he asked what the use of this instruction was, and since he must get something from everything, here’s his drachma.) Maybe. Or it’s just the joke that you make if you have “division” and “royals” in mind.

Mark Tatulli’s Lio for the 7th is not quite the anthropomorphic symbols joke for this past week. It’s circling that territory, though.

It was another busy week in mathematically-themed comic strips last week. Busy enough I’m comfortable rating some as too minor to include. So it’s another week where I post two of these Reading the Comics roundups, which is fine, as I’m still recuperating from the Summer 2017 A To Z project. This first half of the week includes a lot of rerun comics, and you’ll see why my choice of title makes sense.

Ashleigh Brilliant’s Pot-Shots for the 1st is a rerun from sometime in 1975. And it’s an example of the time-honored tradition of specifying how many statistics are made up. Here it comes in at 43 percent of statistics being “totally worthless” and I’m curious how the number attached to this form of joke changes over time.

The Joey Alison Sayers Comic for the 2nd uses a blackboard with mathematics — a bit of algebra and a drawing of a sphere — as the designation for genius. That’s all I have to say about this. I remember being set straight about the difference between ponies and horses and it wasn’t by my sister, who’s got a professional interest in the subject.

Mark Pett’s Lucky Cow rerun for the 2nd is a joke about cashiers trying to work out change. As one of the GoComics.com commenters mentions, probably the best way to do this is to count up from the purchase price to the amount you have to give change for. That is, work out $12.43 to $12.50 is seven cents, then from $12.50 to $13.00 is fifty more cents (57 cents total), then from $13.00 to $20.00 is seven dollars ($7.57 total) and then from $20 to $50 is thirty dollars ($37.57 total).
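The counting-up method sketches out in a few lines of Python, using the amounts from the worked example above. (Cents as integers avoid floating-point fuzz.)

```python
price, tendered = 1243, 5000       # $12.43 purchase, $50 tendered, in cents
marks = [1250, 1300, 2000, 5000]   # the stopping points a cashier counts up to
change = 0
start = price
for mark in marks:
    change += mark - start   # 7 cents, then 50 cents, then $7, then $30
    start = mark
print(change)  # 3757 cents, that is, $37.57
assert change == tendered - price  # counting up agrees with subtracting
```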

It does make me wonder, though: what did Neil enter as the amount tendered, if it wasn’t $50? Maybe he hit “exact change” or whatever the equivalent was. It’s been a long, long time since I worked a cash register job and while I would occasionally type in the wrong amount of money, the kinds of errors I would make would be easy to correct for. (Entering $30 instead of $20 for the tendered amount, that sort of thing.) But the cash register works however Mark Pett decides it works, so who am I to argue?

Keith Robinson’s Making It rerun for the 2nd includes a fair bit of talk about ratios and percentages, and how to inflate percentages. Also about the underpaying of employees by employers.

Mark Anderson’s Andertoons for the 3rd continues the streak of Andertoons being the strip for this sort of thing. It has the traditional form of the student explaining why the teacher’s wrong to say the answer was wrong.

Brian Fies’s The Last Mechanical Monster for the 4th includes a bit of legitimate physics in the mad scientist’s captioning. Ballistic arcs are about a thing given an initial speed in a particular direction, moving under constant gravity, without any of the complicating problems of the world involved. No air resistance, no curvature of the Earth, level surfaces to land on, and so on. So, if you start from a given height (‘y_0’) and a given speed (‘v’) at a given angle (‘θ’) when the gravity is a given strength (‘g’), how far will you travel? That’s ‘d’. How long will you travel? That’s ‘t’, as worked out here.
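The closed forms for ‘t’ and ‘d’ fall out of the constant-acceleration equations. Here’s a sketch in Python (the function name and sample numbers are my own; the symbols match the ones above):

```python
import math

def ballistic(y0, v, theta, g=9.81):
    """Flight time t and range d for a launch from height y0
    at speed v and angle theta (radians), under constant gravity g,
    with no air resistance and a level surface to land on."""
    vx = v * math.cos(theta)   # horizontal speed, constant throughout
    vy = v * math.sin(theta)   # initial vertical speed
    # Solve y0 + vy*t - (g/2)*t**2 = 0 for the positive root:
    t = (vy + math.sqrt(vy * vy + 2.0 * g * y0)) / g
    d = vx * t   # range is just horizontal speed times flight time
    return t, d

t, d = ballistic(y0=2.0, v=12.0, theta=math.pi / 6)
```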

(I should maybe explain the story. The mad scientist here is the one from the first, Fleischer Studios, Superman cartoon. In it the mad scientist sends mechanical monsters out to loot the city’s treasures and whatnot. As the cartoon has passed into the public domain, Brian Fies is telling a story of that mad scientist, finally out of jail, salvaging the one remaining usable robot. Here, training the robot to push aside bank tellers has gone awry. Also, the ground in his lair is not level.)

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 4th uses the time-honored tradition of little bits of physics equations as designation of many deep thoughts. And then it gets into a bit more pure mathematics along the way. It also reflects the time-honored tradition of people who like mathematics and physics supposing that those are the deepest and most important kinds of thoughts to have. But I suppose we all figure the things we do best are the things it’s important to do best. It’s traditional.

And by the way, if you’d like more of these Reading the Comics posts, I put them all in the category ‘Comic Strips’ and I just now learned the theme I use doesn’t show categories for some reason? This is unsettling and unpleasant. Hm.

The rest of last week had more mathematically-themed comic strips than Sunday alone did. As sometimes happens, I noticed an objectively unimportant detail in one of the comics and got to thinking about it. Whether I could solve the equation as posted, or whether at least part of it made sense as a mathematics problem. Well, you’ll see.

Patrick McDonnell’s Mutts for the 25th of September I include because it’s cute and I like when I can feature some comic in these roundups. Maybe there’s some discussion that could be had about what “equals” means in ordinary English versus what it means in mathematics. But I admit that’s a stretch.

Olivia Walch’s Imogen Quest for the 25th uses, and describes, the mathematics of a famous probability problem. This is the surprising result of how few people you need to have a 50 percent chance that some pair of people have a birthday in common. It then goes over to some other probability problems. The examples are silly. But the reasoning is sound. And the approach is useful. To find the chance that something happens it’s often easiest to work out the chance it doesn’t. Which is as good as knowing the chance it does, since a thing can either happen or not happen. At least in probability problems, which define “thing” and “happen” so there’s not ambiguity about whether it happened or not.
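The birthday problem, done chance-it-doesn’t style, fits in a few lines of Python. (The 365 equally likely birthdays is the usual simplifying assumption.)

```python
def people_for_even_odds(days=365):
    """Smallest group with at least a 50 percent chance of a shared birthday."""
    p_all_distinct = 1.0
    n = 0
    while 1.0 - p_all_distinct < 0.5:
        n += 1
        # the nth person must avoid the n - 1 birthdays already taken
        p_all_distinct *= (days - (n - 1)) / days
    return n

print(people_for_even_odds())  # 23
```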

Piers Baker’s Ollie and Quentin rerun for the 26th I’m pretty sure I’ve written about before, although back before I included pictures of the Comics Kingdom strips. (The strip moved from Comics Kingdom over to GoComics, which I haven’t caught removing old comics from their pages.) Anyway, it plays on a core piece of probability. It sets out the world as things, “events”, that can have one of multiple outcomes, and which must have one of those outcomes. Coin tossing is taken to mean, by default, an event that has exactly two possible outcomes, each equally likely. And that is near enough true for real-world coin tossing. But there is a little gap between “near enough” and “true”.

Rick Stromoski’s Soup To Nutz for the 27th is your standard sort of Dumb Royboy joke, in this case about him not knowing what percentages are. You could do the same joke about fractions, including with the same breakdown of what part of the mathematics geek population ruins it for the remainder.

Nate Fakes’s Break of Day for the 28th is not quite the anthropomorphic-numerals joke for the week. Anthropomorphic mathematics problems, anyway. The intriguing thing to me is that the difficult, calculus, problem looks almost legitimate to me. On the right-hand side of the first two lines, for example, the calculation goes from

to

This is a little sloppy. The first line ought to end in a ‘dt’, and the second ought to have a constant of integration. If you don’t know what these calculus things are let me explain: they’re calculus things. You need to include them to express the work correctly. But if you’re just doing a quick check of something, the mathematical equivalent of a very rough preliminary sketch, it’s common enough to leave that out.
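For a hypothetical illustration of what’s being left out (my example, not the strip’s actual problem), the full form of a simple antiderivative is:

```latex
\int 2t \, dt = t^2 + C
```

The ‘dt’ records which variable is being integrated over, and the constant ‘C’ records that every vertical shift of t² is an equally good antiderivative.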

It doesn’t quite parse or mean anything precisely as it is. But it looks like the sort of thing that some context would make meaningful. That there are repeated appearances of the same expressions particularly makes me wonder if Fakes used a problem he (or a friend) was doing for some reason.

Mark Anderson’s Andertoons for the 29th is a welcome reassurance that something like normality still exists. Something something student blackboard story problem something.

So, pretty well. That’s a common trait to months when I’m running an A To Z. I post something in the sequence three times a week, and that, plus “Reading the Comics” features, and the occasional fill-in extra mean I have a lot of stuff that people find interesting. According to WordPress’s statistics there were 1,232 pages viewed around here in September, which is comfortably over the 1,000 mark that I think is important for some reason. It’s also the third-highest monthly total I have, coming in just behind the March-April 2016 Leap Day A To Z peak. Back then I went two whole months with something posted every day. Some of that, back then, was reblogs, but that’s all right. It looks the same to the statistics page. September it looks like somebody did a deep archive binge at least once, but again, that’s all people looking at something they find interesting enough to try. There had been 1,030 pages viewed in August, and a relatively mere 911 in July. But in August and September there were 21 and 20 posts, compared to only 13 in July.

The number of unique visitors was off, but not by much: down to 672 from August’s 680. In July there had been 568. This isn’t quite the peaks of March-April 2016, but it’s not far off. I seem to do fairly well getting a reliable number of readers in, lately, although June and July of this year were low. (But those were also months I was pulled away, repeatedly, from WordPress and writing.)

For all that, and for as happy as I was with my writing — I think this A To Z was my best glossary sequence yet — it got fewer reader ‘like’s. Only 98 in September, down from August’s 147 and even July’s 118. I’ve been in a rut with those lately and I’m not sure what I need do. In the first A to Z month I ever did there were 518 likes clicked, and where all those potential readers have gone is beyond me. Especially since the number of pages viewed has not shrunk in all that time.

Also mysterious: while the month felt like a chatty one in my comments, it wasn’t really. 42 comments posted, including my own, in September, down from August’s 46 and July’s 45. That beats the doldrum months before that, but still. June 2015: 114 comments. Same number of page views as back then. Even more unique visitors than back then. I don’t mean to say things that shy people away from commenting, but I seem to be doing it anyway.

The popular articles were one perennial, one comics post, and three A To Z posts:

There’s no real sense to deciding what you want your audience to like. They’ll like what they do and you have to yield gracefully to that. But I am glad those three were the top A To Z posts this past month. They’re ones I think I did well on. I also think that if it had come earlier in the month, then X would have made the top five. Maybe it’ll make next month.

So: what are the countries my readers come from? And is this really quite as popular a thing as I always say it is? Here we go.

Country | Readers
United States | 644
United Kingdom | 156
Philippines | 83
India | 55
Canada | 33
Austria | 28
Singapore | 19
Denmark | 17
Australia | 14
Germany | 14
Brazil | 12
Sweden | 10
France | 9
Spain | 9
Thailand | 9
Hong Kong SAR China | 8
Slovenia | 8
Mexico | 6
Argentina | 5
Russia | 4
South Africa | 4
Bangladesh | 3
Costa Rica | 3
Finland | 3
Italy | 3
Netherlands | 3
Nigeria | 3
Pakistan | 3
Romania | 3
Switzerland | 3
Vietnam | 3
Barbados | 2
Hungary | 2
Israel | 2
Japan | 2
Nepal | 2
Norway | 2
Portugal | 2
Saudi Arabia | 2
Ukraine | 2
Angola | 1
Armenia | 1 (*)
Belarus | 1
Belgium | 1
Bulgaria | 1
Chile | 1 (*)
Cyprus | 1
Ghana | 1
Greece | 1
Guam | 1
Indonesia | 1
Ireland | 1
Kenya | 1
Luxembourg | 1
Madagascar | 1
Malaysia | 1
New Zealand | 1
Paraguay | 1
Puerto Rico | 1 (*)
Serbia | 1
Slovakia | 1
South Korea | 1
Turkey | 1
United Arab Emirates | 1 (*)
Venezuela | 1 (*)

There were, I honestly believe, 65 countries sending me readers this past month. 25 of them were single-reader countries. In August there were 62 countries sending readers, if you count the European Union and the US Virgin Islands, and for that matter Puerto Rico, as distinct countries. This month, yeah, WordPress lists Guam and Puerto Rico as countries. Also September made me aware of how many of my countrymen apparently didn’t hear about the War of 1898 somehow? I honestly don’t know. I mean, I realize that I’m an unusually history-oriented person, in that I have, without exaggeration, delighted people with trivia about the Webster-Ashburton Treaty. But jeez, this was war with Spain and the coming-out party of American imperialism. You’d think word would have filtered through. Anyway, in August there had been 20 single-reader countries, with the usual sorts of notes about that.

Armenia, Chile, Puerto Rico, United Arab Emirates, and Venezuela were single-reader countries last month; no country’s on a two- or more-month streak.

Insights says my most popular day for reading was Monday, with 20 percent of page views then. Last month it was Wednesday with 18 percent of page views. The most popular hour was 6 pm, with 8 percent of page views. 6 pm WordPress Time is when I schedule stuff to post, so you’d expect that to be popular. But 8 percent is not exactly a major bump. I guess people come whenever it’s convenient to their schedule, not my publication. Which is fine.

I start the month with 53,298 page views, from an admitted 24,673 unique visitors, though that’s a probably incomplete count. I’ve also got 717 followers, most of them by WordPress — as you can do from the “Follow Nebusresearch” button at the upper-right corner of the page — and a handful from email. That you can do by the “Follow Blog Via E-Mail” button up there too.

On Twitter I’m @Nebusj. There I’m a lot like I am here, but shorter. Please feel free to join me there.

Comic Strip Master Command sent a nice little flood of comics this week, probably to make sure that I transitioned from the A To Z project to normal activity without feeling too lost. I’m going to cut the strips not quite in half because I’m always delighted when I can make a post that’s just a single day’s mathematically-themed comics. Last Sunday, the 24th of September, was such a busy day. I’m cheating a little on what counts as noteworthy enough to talk about here. But people like comic strips, and good on them for liking them.

Norm Feuti’s Gil for the 24th sees Gil discover and try to apply some higher mathematics. There’s probably a good discussion about what we mean by division to explain why Gil’s experiment didn’t pan out. I would pin it down to eliding the difference between “dividing in half” and “dividing by a half”, which is a hard one. Terms that seem almost alike but mean such different things are probably the hardest part of mathematics.
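The difference Gil runs into is easy to see with actual numbers. A quick check, in Python:

```python
x = 6
print(x / 2)        # dividing in half:  3.0
print(x / (1 / 2))  # dividing by a half: 12.0, which doubles instead
```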

Russell Myers’s Broom Hilda looks like my padding. But the last panel of the middle row gets my eye. The squirrels talk about how on the equinox night and day “can never be of identical length, due to the angular size of the sun and atmospheric refraction”. This is true enough for the equinox. While any spot on the Earth might see twelve hours facing the sun and twelve hours facing away, the fact the sun isn’t a point, and that the atmosphere carries light around to the “dark” side of the planet, means daylight lasts a little longer than night.

Ah, but. This gets my mathematical modelling interest going. Because it is true that, at least away from the equator, there’s times of year that day is way shorter than night. And there’s times of year that day is way longer than night. Shouldn’t there be some time in the middle when day is exactly equal to night?

The easy argument for is built on the Intermediate Value Theorem. Let me define a function, with domain each of the days of the year. The range is real numbers. It’s defined to be the length of day minus the length of night. Let me say it’s in minutes, but it doesn’t change things if you argue that it’s seconds, or milliseconds, or hours, if you keep parts of hours in also. So, like, 12.015 hours or something. At the height of winter, this function is definitely negative; night is longer than day. At the height of summer, this function is definitely positive; night is shorter than day. So therefore there must be some time, between the height of winter and the height of summer, when the function is zero. And therefore there must be some day, even if it isn’t the equinox, when night and day are the same length.
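The same argument drives the bisection method for finding that crossing day. Here’s a sketch with a toy day-length model; the sinusoid is my assumption for illustration, not real astronomy:

```python
import math

def day_minus_night(day):
    # toy model: minutes of daylight minus minutes of night over a
    # 365-day year, crossing zero around day 80
    return 180.0 * math.sin(2 * math.pi * (day - 80) / 365.0)

lo, hi = 0.0, 170.0   # a winter day (negative) and a summer day (positive)
for _ in range(60):   # halve the interval; the sign change stays trapped inside
    mid = (lo + hi) / 2
    if day_minus_night(mid) < 0:
        lo = mid
    else:
        hi = mid
print(round(lo))  # 80: the day when day and night balance, in this model
```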

There’s a flaw here and I leave that to classroom discussions to work out. I’m also surprised to learn that my onetime colleague Dr Helmer Aslaksen’s grand page of mathematical astronomy and calendar essays doesn’t seem to have anything about length of day calculations. But go read that anyway; you’re sure to find something fascinating.

Mike Baldwin’s Cornered features an old-fashioned adding machine being used to drown an audience in calculations. Which makes for a curious pairing with …

Bill Amend’s FoxTrot, and its representation of “math hipsters”. I hate to encourage Jason or Marcus in being deliberately difficult. But there are arguments to make for avoiding digital calculators in favor of old-fashioned — let’s call them analog — calculators. One is that people understand tactile operations better, or at least sooner, than they do digital ones. The slide rule changes multiplication and division into combining or removing lengths of things, and we probably have an instinctive understanding of lengths. So this should train people into anticipating what a result is likely to be. This encourages sanity checks, verifying that an answer could plausibly be right. And since a calculation takes effort, it encourages people to think out how to arrange the calculation to require less work. This should make it less vulnerable to accidents.
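The slide rule’s trick is that lengths proportional to logarithms add, and adding logarithms multiplies the underlying numbers. In Python terms:

```python
import math

a, b = 18, 7
# sliding one scale along the other adds the two lengths:
length = math.log10(a) + math.log10(b)
product = 10 ** length   # reading the combined length back off the scale
print(round(product))    # 126
```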

I suspect that many of these benefits are what you get in the ideal case, though. Slide rules, and abacuses, are no less vulnerable to accidents than anything else is. And if you are skilled enough with the abacus you have no trouble multiplying 18 by 7, you probably would not find multiplying 17 by 8 any harder, and wouldn’t notice if you mistook one for the other.

Jef Mallett’s Frazz asserts that numbers are cool but the real insight is comparisons. And we can argue that comparisons are more basic than numbers. We can talk about one thing being bigger than another even if we don’t have a precise idea of numbers, or how to measure them. See every mathematics blog introducing the idea of different sizes of infinity.

Bill Whitehead’s Free Range features Albert Einstein, universal symbol for really deep thinking about mathematics and physics and stuff. And even a blackboard full of equations for the title panel. I’m not sure whether the joke is a simple absent-minded-professor joke, or whether it’s a relabelled joke about Werner Heisenberg. Absent-minded-professor jokes are not mathematical enough for me, so let me point once again to American Cornball. They’re the first subject in Christopher Miller’s encyclopedia of comic topics. So I’ll carry on as if the Werner Heisenberg joke were the one meant.

Heisenberg is famous, outside World War II history, for the Uncertainty Principle. This is one of the core parts of quantum mechanics, under which there’s a limit to how precisely one can know both the position and momentum of a thing. To identify, with absolutely zero error, where something is requires losing all information about what its momentum might be, and vice-versa. You see the application of this to a traffic cop’s question about knowing how fast someone was going. This makes some neat mathematics because all the information about something is bundled up in a quantity called the Psi function. To make a measurement is to modify the Psi function by having an “operator” work on it. An operator is what we call a function that has domains and ranges of other functions. To measure both position and momentum is equivalent to working on Psi with one operator and then another. But these operators don’t commute. You get different results in measuring momentum and then position than you do measuring position and then momentum. And so we can’t know both of these with infinite precision.
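The order-of-operators point can be illustrated with finite-dimensional stand-ins. The Pauli spin matrices are my choice of example here; the real position and momentum operators need an infinite-dimensional space, but these small matrices show non-commuting just as well:

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

sigma_x = [[0, 1], [1, 0]]
sigma_y = [[0, -1j], [1j, 0]]

xy = matmul(sigma_x, sigma_y)   # apply y, then x
yx = matmul(sigma_y, sigma_x)   # apply x, then y
print(xy == yx)  # False: the two orders give different results
```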

There are pairs of operators that do commute. They’re not necessarily ones we care about, though. Like, the total energy commutes with the square of the angular momentum. So, you know, if you need to measure with infinite precision the energy and the angular momentum of something you can do it. If you had measuring tools that were perfect. You don’t, but you could imagine having them, and in that case, good. Underlying physics wouldn’t spoil your work.

Probably the panel was an absent-minded professor joke.

Today Gaurish, of For the love of Mathematics, gives me the last subject for my Summer 2017 A To Z sequence. And also my greatest challenge: the Zeta function. The subject comes to all pop mathematics blogs. It comes to all mathematics blogs. It’s not difficult to say something about a particular zeta function. But to say something at all original? Let’s watch.

Zeta Function.

The spring semester of my sophomore year I had Intro to Complex Analysis. Monday Wednesday 7:30; a rare evening class, one of the few times I’d eat dinner and then go to a lecture hall. There I discovered something strange and wonderful. Complex Analysis is a far easier topic than Real Analysis. Both are courses about why calculus works. But why calculus for complex-valued numbers works is a much easier problem than why calculus for real-valued numbers works. It’s dazzling. Part of this is that Complex Analysis, yes, builds on Real Analysis. So Complex can take for granted some things that Real has to prove. I didn’t mind. Given the way I crashed through Intro to Real Analysis I was glad for a subject that was, relatively, a breeze.

As we worked through Complex Variables and Applications so many things, so very many things, got to be easy. The basic unit of complex analysis, at least as we young majors learned it, was in contour integrals. These are integrals whose value depends on the values of a function on a closed loop. The loop is in the complex plane. The complex plane is, well, your ordinary plane. But we say the x-coordinate and the y-coordinate are parts of the same complex-valued number. The x-coordinate is the real-valued part. The y-coordinate is the imaginary-valued part. And we call that summation ‘z’. In complex-valued functions ‘z’ serves the role that ‘x’ does in normal mathematics.

So a closed loop is exactly what you think. Take a rubber band and twist it up and drop it on the table. That’s a closed loop. Suppose you want to integrate a function, ‘f(z)’. If you can always take its derivative on this loop and on the interior of that loop, then its contour integral is … zero. No matter what the function is. As long as it’s “analytic”, as the terminology has it. Yeah, we were all stunned into silence too. (Granted, mathematics classes are usually quiet, since it’s hard to get a good discussion going. Plus many of us were in post-dinner digestive lulls.)
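The stunning zero can even be seen numerically. A rough sketch, integrating the everywhere-analytic exponential around the unit circle (the discretization scheme here is my own, not anything from the course):

```python
import cmath

def contour_integral(f, n=2000):
    """Approximate the integral of f around the unit circle by summing
    f(chord midpoint) times the chord, over n small steps."""
    total = 0
    for k in range(n):
        z0 = cmath.exp(2j * cmath.pi * k / n)
        z1 = cmath.exp(2j * cmath.pi * (k + 1) / n)
        total += f((z0 + z1) / 2) * (z1 - z0)
    return total

print(abs(contour_integral(cmath.exp)))  # essentially zero
```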

Integrating regular old functions of real-valued numbers is this tedious process. There’s sooooo many rules and possibilities and special cases to consider. There’s sooooo many tricks that get you the integrals of some functions. And then here, with complex-valued integrals for analytic functions, you know the answer before you even look at the function.

As you might imagine, since this is only page 113 of a 341-page book there’s more to it. Most functions that anyone cares about aren’t analytic. At least they’re not analytic everywhere inside regions that might be interesting. There’s usually some points where an interesting function ‘f(z)’ is undefined. We call these “singularities”. Yes, like starships are always running into. Only we rarely get propelled into other universes or other times or turned into ghosts or stuff like that.

So much of the rest of the course turns into ways to avoid singularities. Sometimes you can spackle them over. This is when the function happens not to be defined somewhere, but you can see what it ought to be. These are the “removable” singularities. Sometimes you have to do something more. And this does something so brilliant it looks illicit. You modify your closed loop, so that it comes up very close, as close as possible, to the singularity, but studiously avoids it. Follow this game of I’m-not-touching-you right and you can turn your integral into two parts. One is the part that’s equal to zero. The other is the part that’s a constant times whatever the function is at the singularity you’re avoiding. And that ought to be easy to find the value for. (Being able to find a function’s value doesn’t mean you can find its derivative.)
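That constant turns out to be 2πi; this is Cauchy's integral formula. A quick numerical check of my own, again with a crude Riemann-sum approximation of the loop integral:

```python
import cmath

def contour_integral(f, n=4000):
    """Approximate the integral of f around the unit circle."""
    pts = [cmath.exp(2j * cmath.pi * k / n) for k in range(n + 1)]
    return sum(f(pts[k]) * (pts[k + 1] - pts[k]) for k in range(n))

# cos(z)/z has a singularity at z = 0, inside the loop. The loop
# integral is the constant 2*pi*i times the value cos(0) = 1 that
# the function "ought" to have there.
integral = contour_integral(lambda z: cmath.cos(z) / z)
print(integral / (2j * cmath.pi))  # approximately 1, which is cos(0)
```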

Those tricks were hard to master. Not because they were hard. Because they were easy, in a context where we expected hard. But after that we got into how to move singularities. That is, how to do a change of variables that moved the singularities to where they’re more convenient for some reason. How could this be more convenient? Because of chapter five, series. In regular old calculus we learn how to approximate well-behaved functions with polynomials. In complex-variable calculus, we learn the same thing all over again. They’re polynomials of complex-valued variables, but it’s the same sort of thing. And not just polynomials, but things that look like polynomials except they’re powers of 1/z instead. These open up new ways to approximate functions, and to remove singularities from functions.

And then we get into transformations. These are about turning a problem that’s hard into one that’s easy. Or at least different. They’re a change of variable, yes. But they also change what exactly the function is. This reshuffles the problem. Makes for a change in singularities. Could make ones that are easier to work with.

One of the useful, and so common, transforms is called the Laplace-Stieltjes Transform. (“Laplace” is said like you might guess. “Stieltjes” is said, or at least we were taught to say it, like “Stilton cheese” without the “ton”.) And it tends to create functions that look like a series, the sum of a bunch of terms. Infinitely many terms. Each of those terms looks like a number times another number raised to some constant times ‘z’. As the course came to its conclusion, we were all prepared to think about these infinite series. Where singularities might be. Which of them might be removable.

These functions, these results of the Laplace-Stieltjes Transform, we collectively call ‘zeta functions’. There are infinitely many of them. Some of them are relatively tame. Some of them are exotic. One of them is world-famous. Professor Walsh — I don’t mean to name-drop, but I discovered the syllabus for the course tucked in the back of my textbook and I’m delighted to rediscover it — talked about it.

That world-famous one is, of course, the Riemann Zeta function. Yes, that same Riemann who keeps turning up, over and over again. It looks simple enough. Almost tame. Take the counting numbers, 1, 2, 3, and so on. Take your ‘z’. Raise each of the counting numbers to that ‘z’. Take the reciprocals of all those numbers. Add them up. What do you get?
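The sum is easy to try out. A sketch of my own: partial sums of the series, which converge whenever the real part of ‘z’ is bigger than 1:

```python
import math

def zeta_partial(z, terms=100000):
    """Partial sum of the Riemann zeta series: the sum of 1/n^z
    over the counting numbers n up to `terms`. Only meaningful
    when the real part of z is bigger than 1."""
    return sum(1 / n**z for n in range(1, terms + 1))

# Euler's famous value: zeta(2) = pi^2 / 6.
print(zeta_partial(2))   # ~1.64492...
print(math.pi**2 / 6)    # 1.6449340668...

# 'z' can be complex, so long as its real part exceeds 1.
print(zeta_partial(1.5 + 2j, 5000))
```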

A mass of fascinating results, for one. Functions you wouldn’t expect are concealed in there. There’s strips where the real part is zero. There’s strips where the imaginary part is zero. There’s points where both the real and imaginary parts are zero. We know infinitely many of them. If ‘z’ is -2, for example, the function is zero. Also if ‘z’ is -4. -6. -8. And so on. (Strictly, the series as written only converges when the real part of ‘z’ is bigger than 1; these zeroes belong to the function’s analytic continuation, its unique extension to the rest of the plane.) These are easy to show, and so are dubbed ‘trivial’ zeroes. To say some are ‘trivial’ is to say that there are others that are not trivial. Where are they?

Professor Walsh explained. We know of many of them. The nontrivial zeroes we know of all share something in common. They have a real part that’s equal to 1/2. There’s a zero at about 1/2 + 14.135i. Also at 1/2 - 14.135i. There’s one at about 1/2 + 21.022i. Also about 1/2 - 21.022i. (There’s a symmetry, you maybe guessed.) Every nontrivial zero we’ve found has that same real part, 1/2. But we don’t know that they all do. Nobody does. It is the Riemann Hypothesis, the great unsolved problem of mathematics. Much more important than Fermat’s Last Theorem, which back then was still merely a conjecture.

What a prospect! What a promise! What a way to set us up for the final exam in a couple of weeks.

I had an inspiration, a kind of scheme of showing that a nontrivial zero couldn’t be within a given circular contour. Make the size of this circle grow. Move its center farther away from the z-coordinate to match. Show there’s still no nontrivial zeroes inside. And therefore, logically, since I would have shown nontrivial zeroes couldn’t be anywhere but on this special line, and we know nontrivial zeroes exist … I leapt enthusiastically into this project. A little less enthusiastically the next day. Less so the day after. And on. After maybe a week I went a day without working on it. But came back, now and then, prodding at my brilliant would-be proof.

The Riemann Zeta function was not on the final exam, which I’ve discovered was also tucked into the back of my textbook. It asked more things like finding all the singular points, and classifying what kinds of singularities they were, for various given functions instead. If the syllabus is accurate, we got as far as page 218. And I’m surprised to see the professor put his e-mail address on the syllabus. It was merely “bwalsh@math”, but understand, the Internet was a smaller place back then.

I finished the course with an A-, but without answering any of the great unsolved problems of mathematics.

The back half of last week’s mathematically themed comic strips aren’t all that deep. They make up for it by being numerous. This is how calculus works, so, good job, Comic Strip Master Command. Here’s what I have for you.

Mark Anderson’s Andertoons for the 20th marks its long-awaited return to these Reading The Comics posts. It’s of the traditional form of the student misunderstanding the teacher’s explanations. Arithmetic edition.

Marty Links’s Emmy Lou for the 20th was a rerun from the 22nd of September, 1976. It’s just a name-drop. It’s not like it matters for the joke which textbook was lost. I just include it because, what the heck, might as well.

Jef Mallett’s Frazz for the 21st uses the form of a story problem. It’s a trick question anyway; there’s really no way the Doppler effect is going to make an ice cream truck’s song unrecognizable, not even at highway speeds. Too distant to hear, that’s a possibility. Also I don’t know how strictly regional this is but the ice cream trucks around here have gone in for interrupting the music every couple seconds with some comical sound effect, like a “boing” or something. I don’t know what this hopes to achieve besides altering the timeline of when the ice cream seller goes mad.

Mark Litzler’s Joe Vanilla for the 21st I already snuck in here last week, in talking about ‘x’. The variable does seem like a good starting point. And, yeah, hypothesis block is kind of a thing. There’s nothing quite like staring at a problem that should be interesting and having no idea where to start. This happens even beyond grade school and the story problems you do then. What to do about it? There’s never one thing. Study it a good while, read about related problems a while. Maybe work on something that seems less obscure a while. It’s very much like writer’s block.

Ryan North’s Dinosaur Comics rerun for the 22nd straddles the borders between mathematics, economics, and psychology. It’s a problem about making forecasts about other people’s behavior. It’s a mystery of game theory. I don’t know a proper analysis for this game. I expect it depends on how many rounds you get to play: if you have a sense of what people typically do, you can make a good guess of what they will do. If everyone gets a single shot to play, all kinds of crazy things might happen.

Jef Mallett’s Frazz gets in again on the 22nd with some mathematics gibberish-talk, including some tossing around of the commutative property. Among other mistakes Caulfield was making here, going from “less is more” to therefore “more is less” isn’t commutation. Commutation is about binary operations, where you match a pair of things to a single thing. The operation commutes if it never matters what the order of the pair of things is. It doesn’t commute if it ever matters, even a single time, what the order is. Commutativity gets introduced in arithmetic where there are some good examples of the thing. Addition and multiplication commute. Subtraction and division don’t. From there it gets forgotten until maybe eventually it turns up in matrix multiplication, which doesn’t commute. And then it gets forgotten once more until maybe group theory. There, whether operations commute or not is as important a divide as the one between vertebrates and invertebrates. But I understand kids not getting why they should care about commuting. Early on it seems like a longwinded way to say what’s obvious about addition.
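The divide is easy to demonstrate. A little sketch of mine, with hand-rolled 2×2 matrices, showing multiplication of matrices failing to commute where addition of numbers never does:

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Addition commutes: order never matters.
print(3 + 5 == 5 + 3)  # True

# Matrix multiplication does not: a single counterexample suffices.
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]] -- a different answer
```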

Bud Blake’s Tiger rerun for the 23rd starts with a real-world example of your classic story problem. I like the joke in it, and I also like Hugo’s look of betrayal and anger in the second panel. A spot of expressive art does a joke so much good.

I never heard of today’s entry topic three months ago. Indeed, three weeks ago I was still making guesses about just what Gaurish, author of For the love of Mathematics, was asking about. It turns out to be maybe the grand union of everything that’s ever been in one of my A To Z sequences. I overstate, but barely.

Young Tableau.

A Young Tableau, the specific thing itself, is beautiful in its simplicity. It could almost be a recreational mathematics puzzle, except that it isn’t challenging enough.

Start with a couple of boxes laid in a row. As many or as few as you like.

Now set another row of boxes. You can have as many as the first row did, or fewer. You just can’t have more. Set the second row of boxes — well, your choice. Either below the first row, or else above. I’m going to assume you’re going below the first row, and will write my directions accordingly. If you do things the other way you’re following a common enough convention. I’m leaving it on you to figure out what the directions should be, though.

Now add in a third row of boxes, if you like. Again, as many or as few boxes as you like. There can’t be more than there are in the second row. Set it below the second row.

And a fourth row, if you want four rows. Again, no more boxes in it than the third row had. Keep this up until you’ve got tired of adding rows of boxes.

How many boxes do you have? I don’t know. But take the numbers 1, 2, 3, 4, 5, and so on, up to whatever the count of your boxes is. Can you fill in one number for each box? So that the numbers are always increasing as you go left to right in a single row? And as you go top to bottom in a single column? Yes, of course. Go in order: ‘1’ for the first box you laid down, then ‘2’, then ‘3’, and so on, increasing up to the last box in the last row.

Can you do it in another way? Any other order?

Except for the simplest of arrangements, like a single row of four boxes or three rows of one box atop another, the answer is yes. There can be many of them, turns out. Seven boxes, arranged three in the first row, two in the second, one in the third, and one in the fourth, have 35 possible arrangements. It doesn’t take a very big diagram to get an enormous number of possibilities. Could be fun drawing an arbitrary stack of boxes and working out how many arrangements there are, if you have some time in a dull meeting to pass.
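If a dull meeting isn't handy, brute force works too. This sketch of mine tries every assignment of the numbers 1 through n to the boxes and keeps the ones that increase along rows and down columns; it recovers the 35 for the seven-box shape above:

```python
from itertools import permutations

def count_fillings(shape):
    """Count fillings of a Young diagram (row lengths in `shape`) with
    1..n so that rows increase left-to-right and columns increase
    top-to-bottom. Brute force; fine for small diagrams."""
    n = sum(shape)
    cells = [(r, c) for r, width in enumerate(shape) for c in range(width)]
    count = 0
    for perm in permutations(range(1, n + 1)):
        grid = dict(zip(cells, perm))
        rows_ok = all(grid[r, c] < grid[r, c + 1]
                      for r, c in cells if (r, c + 1) in grid)
        cols_ok = all(grid[r, c] < grid[r + 1, c]
                      for r, c in cells if (r + 1, c) in grid)
        if rows_ok and cols_ok:
            count += 1
    return count

# Three boxes in the first row, two in the second, one each in the
# third and fourth rows:
print(count_fillings((3, 2, 1, 1)))  # 35
print(count_fillings((4,)))          # 1: a single row can only go in order
```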

Let me step away from filling boxes. In one of its later, disappointing, seasons Futurama finally did a body-swap episode. The gimmick: two bodies could only swap the brains within them one time. So would it be possible to put Bender’s brain back in his original body, if he and Amy (or whoever) had already swapped once? The episode drew minor amusement in mathematics circles, and a lot of amazement in pop-culture circles. The writer, a mathematics major, found a proof that showed it was indeed always possible, even after many pairs of people had swapped bodies. The idea that a theorem was created for a TV show impressed many people who think theorems are rarer and harder to create than they necessarily are.

It was a legitimate theorem, and in a well-developed field of mathematics. It’s about permutation groups. These are the study of the ways you can swap pairs of things. I grant this doesn’t sound like much of a field. There is a surprising lot of interesting things to learn just from studying how stuff can be swapped, though. It’s even of real-world relevance. Most subatomic particles of a kind — electrons, top quarks, gluons, whatever — are identical to every other particle of the same kind. Physics wouldn’t work if they weren’t. What would happen if we swap the electron on the left for the electron on the right, and vice-versa? How would that change our physics?

A chunk of quantum mechanics studies what kinds of swaps of particles would produce an observable change, and what kind of swaps wouldn’t. When the swap doesn’t make a change we can describe this as a symmetric operation. When the swap does make a change, that’s an antisymmetric operation. And — the Young Tableau that’s a single row of two boxes? That matches up well with this symmetric operation. The Young Tableau that’s two rows of a single box each? That matches up with the antisymmetric operation.
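In miniature, the symmetric/antisymmetric distinction looks like this. A toy sketch of mine, nothing like real quantum mechanics, just functions of two arguments and what a swap does to them:

```python
# A toy model of swap symmetry: trade a function's two arguments
# and see what happens to its value.

def symmetric(x, y):
    return x * y       # unchanged when x and y trade places

def antisymmetric(x, y):
    return x - y       # picks up a minus sign when x and y trade places

print(symmetric(2, 5), symmetric(5, 2))           # 10 10
print(antisymmetric(2, 5), antisymmetric(5, 2))   # -3 3
```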

How many ways could you set up three boxes, according to the rules of the game? A single row of three boxes, sure. One row of two boxes and a row of one box. Three rows of one box each. How many ways are there to assign the numbers 1, 2, and 3 to those boxes, and satisfy the rules? One way to do the single row of three boxes. Also one way to do the three rows of a single box. There’s two ways to do the one-row-of-two-boxes, one-row-of-one-box case.
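There's a lovely standard result the essay doesn't need but that makes these counts quick to verify: the hook length formula, which says the number of valid fillings is n! divided by the product of every box's "hook length". A sketch:

```python
from math import factorial

def hook_length_count(shape):
    """Number of valid fillings of a Young diagram, via the hook
    length formula: n! divided by the product of each box's hook
    length (1 + boxes to its right + boxes below it)."""
    n = sum(shape)
    cols = [sum(1 for width in shape if width > c) for c in range(shape[0])]
    product = 1
    for r, width in enumerate(shape):
        for c in range(width):
            arm = width - c - 1        # boxes to the right in this row
            leg = cols[c] - r - 1      # boxes below in this column
            product *= arm + leg + 1
    return factorial(n) // product

# The three three-box shapes and their counts:
print(hook_length_count((3,)))       # 1: a single row of three
print(hook_length_count((2, 1)))     # 2: two boxes, then one
print(hook_length_count((1, 1, 1)))  # 1: three rows of one
```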

What if we have three particles? How could they interact? Well, all three could be symmetric with each other. This matches the first case, the single row of three boxes. All three could be antisymmetric with each other. This matches the three rows of one box. Or you could have two particles that are symmetric with each other and antisymmetric with the third particle. Or two particles that are antisymmetric with each other but symmetric with the third particle. Two ways to do that. Two ways to fill in the one-row-of-two-boxes, one-row-of-one-box case.

This isn’t merely a neat, aesthetically interesting coincidence. I wouldn’t spend so much time on it if it were. There’s a matching here that’s built on something meaningful. The different ways to arrange numbers in a set of boxes like this pair up with a select, interesting set of matrices whose elements are complex-valued numbers. You might wonder who introduced complex-valued numbers, let alone matrices of them, into evidence. Well, who cares? We’ve got them. They do a lot of work for us. So much work they have a common name, the “symmetric group over the complex numbers”. As my leading example suggests, they’re all over the place in quantum mechanics. They’re good to have around in regular physics too, at least in the right neighborhoods.

These Young Tableaus turn up over and over in group theory. They match up with polynomials, because yeah, everything is polynomials. But they turn out to describe polynomial representations of some of the superstar groups out there. Groups with names like the General Linear Group (square matrices), or the Special Linear Group (square matrices with determinant equal to 1), or the Special Unitary Group (that thing where quantum mechanics says there have to be particles whose names are obscure Greek letters with superscripts of up to five + marks). If you’d care for more, here’s a chapter by Dr Frank Porter describing, in part, how you get from Young Tableaus to the obscure baryons.

Porter’s chapter also lets me tie this back to tensors. Tensors have varied ranks, the number of different indices you can have on the things. What happens when you swap pairs of indices in a tensor? How many ways can you swap them, and what does that do to what the tensor describes? Please tell me you already suspect this is going to match something in Young Tableaus. They do this by way of the symmetries and permutations mentioned above. But they are there.

As I say, three months ago I had no idea these things existed. If I ever ran across them it was from seeing the name at MathWorld’s list of terms that start with ‘Y’. The article shows some nice examples (with each row set atop the previous one) but doesn’t make clear how much stuff this subject runs through. I can’t fit everything into a reasonable essay. (For example: the number of ways to arrange, say, 20 boxes into rows meeting these rules is itself a partition problem. Partition problems are probability and statistical mechanics. Statistical mechanics is the flow of heat, and the movement of the stars in a galaxy, and the chemistry of life.) I am delighted by what does fit.

Comic Strip Master Command apparently doesn’t want me talking about the chances of Friday’s Showcase Showdown. They sent me enough of a flood of mathematically-themed strips that I don’t know when I’ll have the time to talk about the probability of that episode. (The three contestants spinning the wheel all tied, each spinning $1.00. And then in the spin-off, two of the three contestants also spun $1.00. And this after what was already a perfect show, in which the contestants won all six of the pricing games.) Well, I’ll do what comic strips I can this time, and carry on the last week of the Summer 2017 A To Z project, and we’ll see if I can say anything timely for Thursday or Saturday or so.

Jim Scancarelli’s Gasoline Alley for the 17th is a joke about the student embarrassing the teacher. It uses mathematics vocabulary for the specifics. And it does depict one of those moments that never stops, as you learn mathematics. There’s always more vocabulary. There’s good reasons to have so much vocabulary. Having names for things seems to make them easier to work with. We can bundle together ideas about what a thing is like, and what it may do, under a name. I suppose the trouble is that we’ve accepted a convention that we should define terms before we use them. It’s nice, like having the dramatis personae listed at the start of the play. But having that list isn’t the same as saying why anyone should care. I don’t know how to balance the need to make clear up front what one means and the need to not bury someone under a heap of similar-sounding names.

Mac King and Bill King’s Magic in a Minute for the 17th is another puzzle drawn from arithmetic. Look at it now if you want to have the fun of working it out, as I can’t think of anything to say about it that doesn’t spoil how the trick is done. The top commenter does have a suggestion about how to do the problem by breaking one of the unstated assumptions in the problem. This is the kind of puzzle created for people who want to motivate talking about parity or equivalence classes. It’s neat when you can say something of substance about a problem using simple information, though.

Terri Libenson’s Pajama Diaries for the 18th uses trigonometry as the marker for deep thinking. It comes complete with a coherent equation, too. It gives the area of a triangle with two legs that meet at a 45 degree angle. I admit I am uncomfortable with promoting the idea that people who are autistic have some super-reasoning powers. (Also with the pop-culture idea that someone who spots things others don’t is probably at least a bit autistic.) I understand wanting to think someone’s troubles have some compensation. But people are who they are; it’s not like they need to observe some “balance”.

Lee Falk and Wilson McCoy’s The Phantom for the 10th of August, 1950 was rerun Monday. It’s a side bit of joking about between stories. And it uses knowledge of mathematics — and an interest in relativity — as signifier of civilization. I can only hope King Hano does better learning tensors on his own than I do.

Mike Thompson’s Grand Avenue for the 18th goes back to classrooms and stuff for clever answers that subvert the teacher. And I notice, per the title given this edition, that the teacher’s trying to make the abstractness of three minus two tangible, by giving it an example. Which pairs it with …

Will Henry’s Wallace the Brave for the 18th, wherein Wallace asserts that arithmetic is easier if you visualize real things. I agree it seems to help with stuff like basic arithmetic. I wouldn’t want to try taking the cosine of an apple, though. Separating the quantity of a thing from the kind of thing measured is one of those subtle breakthroughs. It’s one of the ways that, for example, modern calculations differ from those of the Ancient Greeks. But it does mean thinking of numbers in, we’d say, a more abstract way than they did, and in a way that seems to tax us more.

Wallace the Brave recently had a book collection published, by the way. I mention because this is one of a handful of comics with a character who likes pinball, and more, who really really loves the Williams game FunHouse. This is an utterly correct choice for favorite pinball game. It’s one of the games that made me a pinball enthusiast.

Ryan North’s Dinosaur Comics rerun for the 19th I mention on loose grounds. In it T-Rex suggests trying out an alternate model for how gravity works. The idea, of what seems to be gravity “really” being the shade cast by massive objects in a particle storm, was explored in the late 17th and early 18th century. It avoids the problem of not being able to quite say what propagates gravitational attraction. But it also doesn’t work, analytically. We would see the planets orbit differently if this were how gravity worked. And there’s the problem about mass and energy absorption, as pointed out in the comic. But it can often be interesting or productive to play with models that don’t work. You might learn something about models that do, or that could.

We come now almost to the end of the Summer 2017 A To Z. Possibly also the end of all these A To Z sequences. Gaurish of, For the love of Mathematics, proposed that I talk about the obvious logical choice. The last promising thing I hadn’t talked about. I have no idea what to do for future A To Z’s, if they’re even possible anymore. But that’s a problem for some later time.

X.

Some good advice that I don’t always take. When starting a new problem, make a list of all the things that seem likely to be relevant. Problems that are worth doing are usually about things. They’ll be quantities like the radius or volume of some interesting surface. The amount of a quantity under consideration. The speed at which something is moving. The rate at which that speed is changing. The length something has to travel. The number of nodes something must go across. Whatever. This all sounds like stuff from story problems. But most interesting mathematics is from a story problem; we want to know what this property is like. Even if we stick to a purely mathematical problem, there’s usually a couple of things that we’re interested in and that we describe. If we’re attacking the four-color map theorem, we have the number of territories to color. We have, for each territory, the number of territories that touch it.

Next, select a name for each of these quantities. Write it down, in the table, next to the term. The volume of the tank is ‘V’. The radius of the tank is ‘r’. The height of the tank is ‘h’. The fluid is flowing in at a rate, say, ‘q’, since ‘r’ is already taken. The fluid is flowing out at a rate, oh, let’s say ‘s’. And so on. You might take a moment to go through and think out which of these variables are connected to which other ones, and how. Volume, for example, is surely something to do with the radius times something to do with the height. It’s nice to have that stuff written down. You may not know the thing you set out to solve, but you at least know you’ve got this under control.
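That habit of listing quantities translates directly into code, where you have to name everything anyway. A sketch of the tank's variable table, with every number, and the cylindrical shape, my own illustrative assumptions:

```python
import math

# A hypothetical tank problem: every name and value below is an
# illustrative assumption, and the tank is assumed cylindrical.
r = 3.0    # radius of the tank, feet
h = 4.0    # height of the tank, feet
q = 2.0    # rate fluid flows in, cubic feet per minute
           # ('q' so it doesn't collide with the radius 'r')
s = 0.5    # rate fluid flows out, cubic feet per minute

# Volume: something to do with the radius times something to do with
# the height. For a cylinder, pi times r squared times h.
V = math.pi * r**2 * h

# Net rate at which the volume changes: inflow minus outflow.
dV_dt = q - s

print(V)      # about 113.1 cubic feet
print(dV_dt)  # 1.5 cubic feet per minute
```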

I recommend this. It’s a good way to organize your thoughts. It establishes what things you expect you could know, or could want to know, about the problem. It gives you some hint how these things relate to each other. It sets you up to think about what kinds of relationships you figure to study when you solve the problem. It gives you a lifeline, when you’re lost in a sea of calculation. It’s reassurance that these symbols do mean something. Better, it shows what those things are.

I don’t always do it. I have my excuses. If I’m doing a problem that’s very like one I’ve already recently done, the things affecting it are probably the same. The names to give these variables are probably going to be about the same. Maybe I’ll make a quick sketch to show how the parts of the problem relate. If it seems like less work to recreate my thoughts than to write them down, I skip writing them down. Not always good practice. I tell myself I can always go back and do things the fully right way if I do get lost. So far that’s been true.

So, the names. Suppose I am interested in, say, the length of the longest rod that will fit around this hallway corridor. Then I am in a freshman calculus book, yes. Fine. Suppose I am interested in whether this pinball machine can be angled up the flight of stairs that has a turn in it. Then I will measure things like the width of the pinball machine. And the width of the stairs, and of the landing. I will measure this carefully. Pinball machines are heavy and there are many hilarious sad stories of people wedging them into hallways and stairwells four and a half stories up from the street. But: once I have identified, say, ‘width of pinball machine’ as a quantity of interest, why would I ever refer to it as anything but?

This is no dumb question. It is always dangerous to lose the link between the thing we calculate and the thing we are interested in. Without that link we are less able to notice mistakes in either our calculations or the thing we mean to calculate. Without that link we can’t do a sanity check, that reassurance that it’s not plausible we just might fit something 96 feet long around the corner. Or that we estimated that we could fit something of six square feet around the corner. It is common advice in programming computers to always give variables meaningful names. Don’t write ‘T’ when ‘Total’ or, better, ‘Total_Value_Of_Purchase’ is available. Why do we disregard this in mathematics, and switch to ‘T’ instead?

First reason is, well, try writing this stuff out. Your hand (h) will fall off (f_{off}) in about fifteen minutes, twenty seconds. (15′ 20”). If you’re writing a program, the programming environment you have will auto-complete the variable after one or two letters in. Or you can copy and paste the whole name. It’s still good practice to leave a comment about what the variable should represent, if the name leaves any reasonable ambiguity.

Another reason is that sure, we do specific problems for specific cases. But a mathematician is naturally drawn to thinking of general problems, in abstract cases. We see something in common between the problem “a length and a quarter of the length is fifteen feet; what is the length?” and the problem “a volume plus a quarter of the volume is fifteen gallons; what is the volume?”. That one is about lengths and the other about volumes doesn’t concern us. We see a saving in effort by separating the quantity of a thing from the kind of the thing. This restores danger. We must think, after we are done calculating, about whether the answer could make sense. But we can minimize that, we hope. At the least we can check once we’re done to see if our answer makes sense. Maybe even whether it’s right.

For centuries, as the things we now recognize as algebra developed, we would use words. We would talk about the “thing” or the “quantity” or “it”. Some impersonal name, or convenient pronoun. This would often get shortened because anything you write often you write shorter. “Re”, perhaps. In the late 16th century we start to see the “New Algebra”. Here mathematics starts looking like … you know … mathematics. We start to see stuff like “addition” represented with the + symbol instead of an abbreviation for “addition” or a p with a squiggle over it or some other shorthand. We get equals signs. You start to see decimals and exponents. And we start to see letters used in place of numbers whose value we don’t know.

There are a couple kinds of “numbers whose value we don’t know”. One is the number whose value we don’t know, but hope to learn. This is the classic variable we want to solve for. Another kind is the number whose value we don’t know because we don’t care. I mean, it has some value, and presumably it doesn’t change over the course of our problem. But it’s not like our work will be so different if, say, the tank is two feet high rather than four.

Is there a problem? If we pick our letters to fit a specific problem, no. Presumably all the things we want to describe have some clear name, and some letter that best represents the name. It’s annoying when we have to consider, say, the pinball machine width and the corridor width. But we can work something out.

But what about general problems?

Is an easy problem to solve?

If we want to figure what ‘m’ is, yes. Similarly ‘y’. If we want to know what ‘b’ is, it’s tedious, but we can do that. If we want to know what ‘e’ is? Run and hide, that stuff is crazy. If you have to, do it numerically and accept an estimate. Don’t try figuring what that is.

And so we’ve developed conventions. There are some letters that, except in weird circumstances, are coefficients. They’re numbers whose value we don’t know, but either don’t care about or could look up. And there are some that, by default, are variables. They’re the ones whose value we want to know.

These conventions started forming, as mentioned, in the late 16th century. François Viète here made a name that lasts to mathematics historians at least. His texts described how to do algebra problems in the sort of procedural methods that we would recognize as algebra today. And he had a great idea for these letters. Use the whole alphabet, if needed. Use the consonants to represent the coefficients, the numbers we know but don’t care what they are. Use the vowels to represent the variables, whose values we want to learn. So he would look at that equation and see right away: it’s a terrible mess. (I exaggerate. He doesn’t seem to have known the = sign, and I don’t know offhand when ‘log’ and ‘cos’ became common. But suppose the rest of the equation were translated into his terminology.)

It’s not a bad approach. Besides the mnemonic value of consonant-coefficient, vowel-variable, it’s true that we usually have fewer variables than anything else. The more variables in a problem the harder it is. If someone expects you to solve an equation with ten variables in it, you’re excused for refusing. So five or maybe six or possibly seven choices for variables is plenty.

But it’s not what we settled on. René Descartes had a better idea. He had a lot of them, but here’s one. Use the letters at the end of the alphabet for the unknowns. Use the letters at the start of the alphabet for coefficients. And that is, roughly, what we’ve settled on. In my example nightmare equation, we’d suppose ‘y’ to probably be the variable we want to solve for.

And so, and finally, x. It is almost the variable. It says “mathematics” in only two strokes. Even π takes more writing. Descartes used it. We follow him. It’s way off at the end of the alphabet. It starts few words, very few things, almost nothing we would want to measure. (Xylem … mass? Flow? What thing is the xylem anyway?) Even mathematical dictionaries don’t have much to say about it. The letter transports almost no connotations, no messy specific problems to it. If it suggests anything, it suggests the horizontal coordinate in a Cartesian system. It almost is mathematics. It signifies nothing in itself, but long use has given it an identity as the thing we hope to learn by study.

And pirate treasure maps. I don’t know when ‘X’ became the symbol of where to look for buried treasure. My casual reading suggests “never”. Treasure maps don’t really exist. Maps in general don’t work that way. Or at least didn’t before cartoons. X marking the spot seems to be the work of Robert Louis Stevenson, renowned for creating a fanciful map and then putting together a book to justify publishing it. (I jest. But according to Simon Garfield’s On The Map: A Mind-Expanding Exploration of the Way The World Looks, his map did get lost on the way to the publisher, and he had to re-create it from studying the text of Treasure Island. This delights me to no end.) It makes me wonder if Stevenson was thinking of x’s service in mathematics. But the advantages of x as a symbol are hard to ignore. It highlights a point clearly. It’s fast to write. Its use might be coincidence.

But it is a letter that does a needed job really well.

It’s the last full week of the Summer 2017 A To Z! Four more essays and I’ll have completed this project and curl up into a word coma. But I’m not there yet. Today’s request is another from Gaurish, who’s given me another delightful topic to write about. Gaurish hosts a fine blog, For the love of Mathematics, which I hope you’ve given a try.

Well-Ordering Principle.

An old mathematics joke. Or paradox, if you prefer. What is the smallest whole number with no interesting properties?

Not one. That’s for sure. We could talk about one forever. It’s the first number we ever know. It’s the multiplicative identity. It divides into everything. It exists outside the realm of prime or composite numbers. It’s — all right, we don’t need to talk about one forever. Two? The smallest prime number. The smallest even number. The only even prime. The only — yeah, let’s move on. Three; the smallest odd prime number. Triangular number. One of only two prime numbers that isn’t one more or one less than a multiple of six. Let’s move on. Four. A square number. The smallest whole number that isn’t 1 or a prime. Five. Prime number. First sum of two distinct prime numbers. Part of the first prime pair. Six. Smallest perfect number. Smallest product of two different prime numbers. Let’s move on.

And so on. Somewhere around 22 or so, the imagination fails and we can’t think of anything not-boring about this number. So we’ve found the first number that hasn’t got any interesting properties! … Except that being the smallest boring number must be interesting. So we have to note that this is otherwise the smallest boring number except for that bit where it’s interesting. On to 23, which used to be the default funny number. 24. … Oh, carry on. Maybe around 31 things settle down again. Our first boring number! Except that, again, being the smallest boring number is interesting. We move on to 32, 33, 34. When we find one that couldn’t be interesting, we find that’s interesting. We’re left to conclude there is no such thing as a boring number.

This would be a nice thing to say for numbers that otherwise get no attention, if we pretend they can have hurt feelings. But we do have to admit, 1729 is actually only interesting because it’s a part of the legend of Srinivasa Ramanujan. Enjoy the silliness for a few paragraphs more.

(This is, if I’m not mistaken, a form of the heap paradox. Don’t remember that? Start with a heap of sand. Remove one grain; you’ve still got a heap of sand. Remove one grain again. Still a heap of sand. Remove another grain. Still a heap of sand. And yet if you did this enough you’d leave one or two grains, not a heap of sand. Where does that change?)

Another problem, something you might consider right after learning about fractions. What’s the smallest positive number? Not one-half, since one-third is smaller and still positive. Not one-third, since one-fourth is smaller and still positive. Not one-fourth, since one-fifth is smaller and still positive. Pick any number you like and there’s something smaller and still positive. This is a difference between the positive integers and the positive real numbers. (Or the positive rational numbers, if you prefer.) The thing the positive integers have, a smallest member, seems obvious, but it is not a given.

The difference is that the positive integers are well-ordered, while the positive real numbers aren’t. Well-ordering we build on ordering. Ordering is exactly what you imagine it to be. Suppose you can say, for any two things in a set, which one is less than another. A set is well-ordered if whenever you have a non-empty subset you can pick out the smallest element. Smallest means exactly what you think, too.

The positive integers are well-ordered. And more. The way they’re set up, they have a property called the “well-ordering principle”. This means any non-empty set of positive integers has a smallest number in it.

This is one of those principles that seems so obvious and so basic that it can’t teach anything interesting. That it serves a role in some proofs, sure, that’s easy to imagine. But something important?

Look back to the joke/paradox I started with. It proves that every positive integer has to be interesting. Every number, including the ones we use every day. Including the ones that no one has ever used in any mathematics or physics or economics paper, and never will. We can avoid that paradox by attacking the vagueness of “interesting” as a word. Are you interested to know the 137th number you can write as the sum of cubes in two different ways? Before you say ‘yes’, consider whether you could name it ten days after you’ve heard the number.

(Granted, yes, it would be nice to know the 137th such number. But would you ever remember it? Would you trust that it’ll be on some Wikipedia page that somehow is never threatened with deletion for not being noteworthy? Be honest.)
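Whether or not you could name the 137th, the first few such numbers are easy to hunt down by brute force. A quick sketch in Python (the function name is mine, nothing standard):

```python
# Find numbers writable as a sum of two positive cubes in at least two
# different ways, by tabulating every a^3 + b^3 up to a limit.

from collections import defaultdict

def two_ways_cube_sums(limit):
    """Numbers <= limit equal to a^3 + b^3 (a <= b) in two or more ways."""
    ways = defaultdict(list)
    a = 1
    while a ** 3 + a ** 3 <= limit:
        b = a
        while a ** 3 + b ** 3 <= limit:
            ways[a ** 3 + b ** 3].append((a, b))
            b += 1
        a += 1
    return sorted(n for n, pairs in ways.items() if len(pairs) >= 2)

print(two_ways_cube_sums(50_000)[:3])  # [1729, 4104, 13832]
```

The first number on the list, 1729, is the one from the Ramanujan legend.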

But suppose we have some property that isn’t so mushy. Suppose that we can describe it in some way that’s indexed by the positive integers. Furthermore, suppose that we show that in any non-empty set of positive integers it must be true for the smallest number in that set. What do we know?

We know that it must be true for all the positive integers. There’s a smallest positive integer. The positive integers have this well-ordering principle. So any non-empty subset of the positive integers has some smallest member. And if we can show that something or other is always true for the smallest number in a subset of the positive integers, there you go.

This technique we call, when it’s introduced, induction. It’s usually a baffling subject because it’s usually taught like this: suppose the thing you want to show is indexed to the positive integers. Show that it’s true when the index is ‘1’. Show that if the thing is true for an arbitrary index ‘n’, then you know it’s true for ‘n + 1’. It’s baffling because that second part is hard to visualize. The student makes a lot of mistakes while learning, usually on examples like finding the sum of the first ‘N’ whole numbers, or of their squares or cubes. I don’t think induction is ever taught by way of the well-ordering principle. But it does get used in proofs, once you get to the part of analysis where you don’t have to interact with actual specific numbers much anymore.
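The well-ordering view does suggest a computational picture of induction: if the claim ever failed, there would be a smallest counterexample, so go looking for one. A sketch in Python, using that classroom claim about the sum of the first ‘N’ whole numbers (function names are mine):

```python
# Induction, seen through the well-ordering principle: a claim indexed
# by positive integers that fails anywhere must have a *smallest*
# counterexample. Searching for one (and not finding it) is the
# computational shadow of the proof, not the proof itself.

def claim(n):
    """The classic claim: 1 + 2 + ... + n equals n*(n+1)/2."""
    return sum(range(1, n + 1)) == n * (n + 1) // 2

def smallest_counterexample(check, limit):
    """The smallest n in 1..limit where the claim fails, or None."""
    for n in range(1, limit + 1):
        if not check(n):
            return n
    return None

print(smallest_counterexample(claim, 1000))  # None
```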

The well-ordering principle also gives us the method of infinite descent. You encountered this in learning proofs about, like, how the square root of two must be an irrational number. In this, you show that if something is true for some positive integer, then it must also be true for some other, smaller positive integer. And therefore some other, smaller positive integer again. And again, until you get into numbers small enough you can check by hand.

It keeps creeping in. The Fundamental Theorem of Arithmetic says that every positive whole number larger than one is a product of a unique string of prime numbers. (Well, the order of the primes doesn’t matter. 2 times 3 times 5 is the same number as 3 times 2 times 5, and so on.) The well-ordering principle guarantees you can factor numbers into a product of primes. Watch this slick argument.

Suppose there’s a set of whole numbers that aren’t the product of prime numbers. There must, by the well-ordering principle, be some smallest number in that set. Call that number ‘n’. We know that ‘n’ can’t be prime, because if it were, then that would be its prime factorization. So it must be the product of at least two other numbers. Let’s suppose it’s two numbers. Call them ‘a’ and ‘b’. So, ‘n’ is equal to ‘a’ times ‘b’.

Well, ‘a’ and ‘b’ have to be less than ‘n’. So they’re smaller than the smallest number that isn’t a product of primes. So, ‘a’ is the product of some set of primes. And ‘b’ is the product of some set of primes. And so, ‘n’ has to equal the primes that factor ‘a’ times the primes that factor ‘b’. … Which is the prime factorization of ‘n’. So, ‘n’ can’t be in the set of numbers that don’t have prime factorizations. And so there can’t be any numbers that don’t have prime factorizations. It’s for the same reason we worked out there aren’t any numbers with nothing interesting to say about them.
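The argument translates almost line for line into a recursive program. This is a sketch of the reasoning, not an efficient factoring algorithm: find any divisor; if there is none, the number is prime; otherwise recurse on the two smaller factors. The recursion bottoms out precisely because the factors are smaller positive integers.

```python
# Infinite descent as code: factor n by splitting off any divisor and
# recursing on the two (smaller!) pieces. Termination is guaranteed by
# the well-ordering of the positive integers.

def prime_factors(n):
    """The prime factorization of n >= 2, as a sorted list."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return sorted(prime_factors(d) + prime_factors(n // d))
    return [n]  # no divisor found: n is prime

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
```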

And isn’t it delightful to find so simple a principle can prove such specific things?

I’ve been reading Elke Stangl’s Elkemental Force blog for years now. Sometimes I even feel social-media-caught-up enough to comment, or at least to like posts. This is relevant today as I discuss one of Stangl’s suggestions for my letter-V topic.

Volume Forms.

So sometime in pre-algebra, or early in (high school) algebra, you start drawing equations. It’s a simple trick. Lay down a coordinate system, some set of axes for ‘x’ and ‘y’ and maybe ‘z’ or whatever letters are important. Look to the equation, made up of x’s and y’s and maybe z’s and so on. Highlight all the points with coordinates whose values make the equation true. This is the logical basis for saying (e.g.) that the straight line “is”, say, ‘y = 2x + 1’.

A short while later, you learn about polar coordinates. Instead of using ‘x’ and ‘y’, you have ‘r’ and ‘θ’. ‘r’ is the distance from the center of the universe. ‘θ’ is the angle made with respect to some reference axis. It’s as legitimate a way of describing points in space. Some classrooms even have a part of the blackboard (whiteboard, whatever) with a polar-coordinates “grid” on it. This looks like the lines of a dartboard. And you learn that some shapes are easy to describe in polar coordinates. A circle, centered on the origin, is ‘r = 2’ or something like that. A line through the origin is ‘θ = 1’ or whatever. The line that we’d called ‘y = 2x + 1’ before? … That’s … some mess. And now ‘r = 2θ + 1’ … that’s not even a line. That’s some kind of spiral. Two spirals, really. Kind of wild.

And something to bother you a while. ‘r = 2θ + 1’ is an equation that looks the same as ‘y = 2x + 1’. You’ve changed the names of the variables, but not how they relate to each other. But one is a straight line and the other a spiral thing. How can that be?

The answer, ultimately, is that the letters in the equations aren’t these content-neutral labels. They carry meaning. ‘x’ and ‘y’ imply looking at space a particular way. ‘r’ and ‘θ’ imply looking at space a different way. A shape has different representations in different coordinate systems. Fair enough. That seems to settle the question.

But if you get to calculus the question comes back. You can integrate over a region of space that’s defined by Cartesian coordinates, x’s and y’s. Or you can integrate over a region that’s defined by polar coordinates, r’s and θ’s. The first time you try this, you find … well, that any region easy to describe in Cartesian coordinates is painful in polar coordinates. And vice-versa. Way too hard. But if you struggle through all that symbol manipulation, you get … different answers. Eventually the calculus teacher has mercy and explains. If you’re integrating in Cartesian coordinates you need to use “dx dy”. If you’re integrating in polar coordinates you need to use “r dr dθ”. If you’ve never taken calculus, never mind what this means. What is important is that “r dr dθ” looks like three things multiplied together, while “dx dy” is two.

We get this explained as a “change of variables”. If we want to go from one set of coordinates to a different one, we have to do something fiddly. The extra ‘r’ in “r dr dθ” is what we get going from Cartesian to polar coordinates. And we get formulas to describe what we should do if we need other kinds of coordinates. It’s some work that introduces us to the Jacobian, which looks like the most tedious possible calculation ever at that time. (In Intro to Differential Equations we learn we were wrong, and the Wronskian is the most tedious possible calculation ever. This is also wrong, but it might as well be true.) We typically move on after this and count ourselves lucky it got no worse than that.
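The extra ‘r’ is easy to see numerically. Here’s a rough midpoint-rule sketch in Python (grid size and names are my choices) that computes the area of the unit disk both ways; drop the factor of r from the polar sum and the answer comes out wrong:

```python
# Integrate the area of the unit disk in Cartesian coordinates (element
# dx dy) and in polar coordinates (element r dr dθ). Both should land
# near pi = 3.14159...

import math

N = 400

# Cartesian: sum dx dy over grid cells whose centers satisfy
# x^2 + y^2 <= 1, on the square [-1, 1] x [-1, 1].
dx = dy = 2.0 / N
cartesian = sum(
    dx * dy
    for i in range(N) for j in range(N)
    if (-1 + (i + 0.5) * dx) ** 2 + (-1 + (j + 0.5) * dy) ** 2 <= 1
)

# Polar: sum r dr dθ over the rectangle 0 <= r <= 1, 0 <= θ <= 2π.
dr, dth = 1.0 / N, 2 * math.pi / N
polar = sum(
    ((i + 0.5) * dr) * dr * dth   # note the extra factor of r
    for i in range(N) for j in range(N)
)

print(round(cartesian, 3), round(polar, 3))
```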

None of this is wrong, even from the perspective of more advanced mathematics. It’s not even misleading, which is a refreshing change. But we can look a little deeper, and get something good from doing so.

The deeper perspective looks at “differential forms”. These are about how to encode information about how your coordinate system represents space. They’re tensors. I don’t blame you for wondering if they would be. A differential form uses interactions between some of the directions in a space. A volume form is a differential form that uses all the directions in a space. And satisfies some other rules too. I’m skipping those because some of the symbols involved I don’t even know how to look up, much less make WordPress present.

What’s important is the volume form carries information compactly. As symbols it tells us that this represents a chunk of space that’s constant no matter what the coordinates look like. This makes it possible to do analysis on how functions work. It also tells us what we would need to do to calculate specific kinds of problem. This makes it possible to describe, for example, how something moving in space would change.

The volume form, and the tools to do anything useful with it, demand a lot of supporting work. You can dodge having to explicitly work with tensors. But you’ll need a lot of tensor-related materials, like wedge products and exterior derivatives and stuff like that. If you’ve never taken freshman calculus don’t worry: the people who have taken freshman calculus never heard of those things either. So what makes this worthwhile?

Yes, person who called out “polynomials”. Good instinct. Polynomials are usually a reason for any mathematics thing. This is one of maybe four exceptions. I have to appeal to my other standard answer: “group theory”. These volume forms match up naturally with groups. There’s not only information about how coordinates describe a space to consider. There’s ways to set up coordinates that tell us things.

That isn’t all. These volume forms can give us new invariants. Invariants are what mathematicians say instead of “conservation laws”. They’re properties whose value for a given problem is constant. This can make it easier to work out how one variable depends on another, or to work out specific values of variables.

For example, classical physics problems like how a bunch of planets orbit a sun often have a “symplectic manifold” that matches the problem. This is a description of how the positions and momentums of all the things in the problem relate. The symplectic manifold has a volume form. That volume is going to be constant as time progresses. That is, there’s this way of representing the positions and speeds of all the planets that does not change, no matter what. It’s much like the conservation of energy or the conservation of angular momentum. And this has practical value. It’s the subject that brought my and Elke Stangl’s blogs into contact, years ago. It also has broader applicability.

There’s no way to provide an exact answer for the movement of, like, the sun and nine-ish planets and a couple major moons and all that. So there’s no known way to answer the question of whether the Earth’s orbit is stable. All the planets are always tugging one another, changing their orbits a little. Could this converge in a weird way suddenly, on geologic timescales? Might the planet go flying off out of the solar system? It doesn’t seem like the solar system could be all that unstable, or it would have fallen apart already. But we can’t rule out that some freaky alignment of Jupiter, Saturn, and Halley’s Comet might tweak the Earth’s orbit just far enough for catastrophe to unfold. Granted there’s nothing we could do about the Earth flying out of the solar system. But it would be nice to know if we face it, we tell ourselves.

But we can explore the question numerically. We can set a computer to simulate the movement of the solar system. But there will always be numerical errors. For example, we can’t use the exact value of π in a numerical computation. 3.141592 (and more digits) might be good enough for projecting stuff out a day, a week, a thousand years. But if we’re looking at millions of years? The difference can add up. We can imagine compensating for not having the value of π exactly right. But what about compensating for something we don’t know precisely, like, where Jupiter will be in 16 million years and two months?

Symplectic forms can help us. The volume form represented by this space has to be conserved. So we can rewrite our simulation so that these forms are conserved, by design. This does not mean we avoid making errors. But it means we avoid making certain kinds of errors. We’re more likely to make what we call “phase” errors. We predict Jupiter’s location in 16 million years and two months. Our simulation puts it thirty degrees farther in its circular orbit than it actually would be. This is a less serious mistake to make than putting Jupiter, say, eight-tenths as far from the Sun as it would really be.
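A toy version of this trade is easy to set up. The sketch below (a simple harmonic oscillator, not a solar system; the function names are mine) compares plain explicit Euler with semi-implicit Euler, a standard symplectic integrator. The explicit method’s energy drifts steadily upward; the symplectic method’s energy stays bounded, though its phase slowly slips:

```python
# Harmonic oscillator (unit mass, unit spring constant), integrated two
# ways. Explicit Euler multiplies the energy by (1 + dt^2) each step,
# so it blows up; symplectic (semi-implicit) Euler keeps the energy
# oscillating near its true value of 0.5.

def energy(x, v):
    return 0.5 * (x * x + v * v)

def explicit_euler(steps, dt=0.01):
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x   # both updates use old values
    return energy(x, v)

def symplectic_euler(steps, dt=0.01):
    x, v = 1.0, 0.0
    for _ in range(steps):
        v = v - dt * x                  # update velocity first...
        x = x + dt * v                  # ...then position with new velocity
    return energy(x, v)

print(explicit_euler(100_000))    # drifts far above 0.5
print(symplectic_euler(100_000))  # stays near 0.5
```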

Volume forms seem, at first, a lot of mechanism for a small problem. And, unfortunately for students, they are. They’re more trouble than they’re worth for changing Cartesian to polar coordinates, or similar problems. You know, ones that the student already has some feel for. They pay off on more abstract problems. Tracking the movement of a dozen interacting things, say, or describing a space that’s very strangely shaped. Those make the effort to learn about forms worthwhile.

It was an ordinary enough week when I realized I wasn’t sure about the name of the schoolmarm in Barney Google and Snuffy Smith. So I looked it up on Comics Kingdom’s official cast page for John Rose’s comic strip. And then I realized something about the Smiths’ next-door neighbor Elviney and Jughaid’s teacher Miss Prunelly:

Are … are they the same character, just wearing different glasses? I’ve been reading this comic strip for like forty years and I’ve never noticed this before. I’ve also never heard any of you all joking about this, by the way, so I stand by my argument that if they’re prominent enough then, yes, glasses could be an adequate disguise for Superman. Anyway, I’m startled. (Are they sisters? Cousins? But wouldn’t that make mention on the cast page? There are missing pieces here.)

Mac King and Bill King’s Magic In A Minute feature for the 10th sneaks in here yet again with a magic trick based in arithmetic. Here, they use what’s got to be some Magic Square-based technology for a card trick. This probably could be put to use with other arrangements of numbers, but cards have the advantage of being stuff a magician is likely to have around and that are expected to do something weird.

Susan Camilleri Konair’s Six Chix for the 13th name-drops mathematics as the homework likely to be impossible to do. I think this is the first time Konair’s turned up in a Reading The Comics survey.

Thom Bluemel’s Birdbrains for the 13th is an Albert Einstein Needing Help panel. It’s got your blackboard full of symbols, not one of which is the famous E = mc^2 equation. But given the setup it couldn’t feature that equation, not and be a correct joke.

John Rose’s Barney Google for the 14th does a little more work than necessary for its subtraction-explained-with-candy joke. I non-sarcastically appreciate Rose’s dodging the obvious joke in favor of a guy-is-stupid joke.

Niklas Eriksson’s Carpe Diem for the 14th is a kind of lying-with-statistics joke. That’s as much as it needs to be. Still, thought always should go into exactly how one presents data, especially visually. There are connotations to things. Just inverting an axis is dangerous stuff, though. The convention of matching an increase in number to moving up on the graph is so ingrained that it should be avoided only for enormous cause.

This joke also seems conceptually close, to me, to the jokes about the strangeness of how a “negative” medical test is so often the good news.

Olivia Walch’s Imogen Quest for the 15th is not about solitaire. But “solving” a game by simulating many gameplays and drawing strategic advice from that is a classic numerical mathematics trick. Whether a game is fun once it’s been solved so is up to you. And often in actual play, for a game with many options at each step, it’s impossible without a computer to know the best possible move. You could use simulations like this to develop general guidelines, and a couple rules that often pan out.

Gaurish, of For the love of Mathematics, asked me about one of those modestly famous (among mathematicians) mathematical figures. Yeah, I don’t have a picture of it. Too much effort. It’s easier to write instead.

Ulam’s Spiral.

Boredom is unfairly maligned in our society. I’ve said this before, but that was years ago, and I have some different readers today. We treat boredom as a terrible thing, something to eliminate. We treat it as a state in which nothing seems interesting. It’s not. Boredom is a state in which anything, however trivial, engages the mind. We would not count the tiles on the floor, or time the rocking of a chandelier, or wonder what fraction of solitaire games can be won if we were never bored. A bored mind is a mind ready to discover things. We should welcome the state.

Several times in the 20th century Stanislaw Ulam was bored. I mention solitaire games because, according to Ulam, he spent some time in 1946 bored, convalescent, and playing a lot of solitaire. He got to wondering: what’s the probability that a particular solitaire game is winnable? (He was specifically playing Canfield solitaire. The game’s also called Demon, Chameleon, or Storehouse, if Wikipedia is right.) What’s the chance the cards can’t be played right, no matter how skilled the player is? It’s a problem impossible to do exactly. But Ulam was one of the mathematicians designing and programming the computers of the day.

He, with John von Neumann, worked out how to get a computer to simulate many, many rounds of cards. They would get an answer that I have never seen given in any history of the field. The field is Monte Carlo simulations. It’s built on using random numbers to conduct experiments that approximate an answer. (They’re also what my specialty is in. I mention this for those who’ve wondered what, if any, mathematics field I do consider myself competent in. This is not it.) The chance of a winnable deal is about 71 to 72 percent, although actual humans can’t hope to do more than about 35 percent. My evening’s experience with this Canfield Solitaire game suggests the chance of winning is about zero.
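Canfield itself is too elaborate to sketch here, but the Monte Carlo idea fits in a few lines. As a stand-in with a known answer, this estimates a different card question: the chance that a shuffled 52-card deck leaves no card in its original position, which is very close to 1/e ≈ 0.368. (All the names here are mine; the seed is a nod to 1946.)

```python
# The Monte Carlo method in miniature: simulate many shuffles, count
# how often the event of interest happens, report the fraction.

import math
import random

def no_card_in_place(deck_size=52):
    deck = list(range(deck_size))
    random.shuffle(deck)
    return all(card != slot for slot, card in enumerate(deck))

def estimate(trials=100_000, seed=1946):
    random.seed(seed)   # fixed seed so the experiment is repeatable
    hits = sum(no_card_in_place() for _ in range(trials))
    return hits / trials

print(estimate(), 1 / math.e)  # the estimate should land near 0.368
```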

In 1963, Ulam told Martin Gardner, he was bored again during a paper’s presentation. Ulam doodled, and doodled something interesting enough to have a computer doodle more than mere pen and paper could. It was interesting enough to feature in Gardner’s Mathematical Games column for March 1964. It started with what the name suggested, a spiral.

Write down ‘1’ in the center. Write a ‘2’ next to it. This is usually done to the right of the ‘1’. If you want the ‘2’ to be on the left, or above, or below, fine, it’s your spiral. Write a ‘3’ above the ‘2’. (Or below if you want, or left or right if you’re doing your spiral that way. You’re tracing out a right angle from the “path” of numbers before that.) A ‘4’ to the left of that, a ‘5’ to the left of that, a ‘6’ under that, a ‘7’ under that, an ‘8’ to the right of that, and so on. A spiral, for as long as your paper or your patience lasts. Now draw a circle around the ‘2’. Or a box. Whatever. Highlight it. Also do this for the ‘3’, and the ‘5’, and the ‘7’, and all the other prime numbers. Do this for every prime number in your spiral. And look at what’s highlighted.

It looks like …

It’s …

Well, it’s something.
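The doodle is easy to hand to a computer. A rough sketch in Python (the layout choices, like starting rightward and turning counterclockwise, are mine):

```python
# Lay the counting numbers out in a square spiral and print '#' where
# the number is prime, '.' where it isn't.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def ulam_spiral(size):
    """A size-by-size grid holding the spiral's numbers, '1' at center."""
    grid = [[0] * size for _ in range(size)]
    x = y = size // 2
    dx, dy = 1, 0                      # first step goes right
    step_len, n = 1, 1
    while True:
        for _ in range(2):             # each arm length is used twice
            for _ in range(step_len):
                if not (0 <= x < size and 0 <= y < size):
                    return grid        # walked off the page: done
                grid[y][x] = n
                n += 1
                x, y = x + dx, y + dy
            dx, dy = dy, -dx           # turn counterclockwise
        step_len += 1

for row in ulam_spiral(23):
    print(''.join('#' if is_prime(v) else '.' for v in row))
```

Printed even this small, with ‘#’ for the primes, the broken diagonals already start to show.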

It’s hard to say what exactly. There’s a lot of diagonal lines to it. Not uninterrupted lines. Every diagonal line has some spottiness to it. There are blank regions too. There are some long stretches of numbers not highlighted, many of them horizontal or vertical lines with no prime numbers in them. Those stop too. The eye can’t help seeing clumps, especially. Imperfect diagonal stitching across the fabric of the counting numbers.

Maybe seeing this is some fluke. Start with another number in the center. 2, if you like. 41, if you feel ambitious. Repeat the process. The details vary. But the pattern looks much the same. Regions of dense-packed broken diagonals, all over the plane.

It begs us to believe there’s some knowable pattern here. That we could get an artist to draw a figure, with each spot in the figure corresponding to a prime number. This would be great. We know many things about prime numbers, but we don’t really have any system to generate a lot of prime numbers. Not much better than “here’s a thing, try dividing it”. Back in the 80s and 90s we had the big Fractal Boom. Everybody got computers that could draw what passed for pictures. And we could write programs that drew them. The Ulam Spiral was a minor but exciting prospect there. Was it a fractal? I don’t know. I’m not sure if anyone knows. (The spiral like you’d draw on paper wouldn’t be. The spiral that went out to infinitely large numbers might conceivably be.) It seemed plausible enough for computing magazines to be interested in. Maybe we could describe the pattern by something as simple as the Koch curve (that wriggly triangular snowflake shape). Or as easy to program as the Mandelbrot set.

We haven’t found one. As keeps happening with prime numbers, the answers evade us. We can understand why diagonals should appear. Write a polynomial of the form an^2 + bn + c. Evaluate it for n of 1, 2, 3, 4, and so on. Highlight those numbers. This will tend to highlight numbers that, in this spiral, are diagonal or horizontal or vertical lines. A lot of polynomials like this give a string of prime numbers. But the polynomials all peter out. The lines all have interruptions.
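The most famous of these polynomials is Euler’s n^2 + n + 41, which is prime for every n from 0 through 39 and then, as they all do, peters out:

```python
# Euler's prime-generating polynomial: n^2 + n + 41. Prime for forty
# values of n in a row, then composite at n = 40, where it gives
# 1681 = 41 * 41.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

values = [n * n + n + 41 for n in range(41)]
print(all(is_prime(v) for v in values[:40]))  # True
print(values[40], is_prime(values[40]))       # 1681 False
```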

There are other patterns. One, predating Ulam’s boring paper by thirty years, was made by Laurence Klauber. Klauber was a herpetologist of some renown, if Wikipedia isn’t misleading me. It claims his Rattlesnakes: Their Habits, Life Histories, and Influence on Mankind is still an authoritative text. I don’t know and will defer to people versed in the field. It also credits him with several patents in electrical power transmission.

Anyway, Klauber’s Triangle sets a ‘1’ at the top of the triangle. The numbers ‘2 3 4’ under that, with the ‘3’ directly beneath the ‘1’. The numbers ‘5 6 7 8 9’ beneath that, the ‘7’ directly beneath the ‘3’. ‘10 11 12 13 14 15 16’ beneath that, the ‘13’ underneath the ‘7’. And so on. Again highlight the prime numbers. You get again these patterns of dots and lines. Many vertical lines. Some lines in isometric view. It looks like strands of Morse Code.

In 1994 Robert Sacks created another variant. This one places the counting numbers on an Archimedean spiral. Space the numbers correctly and highlight the primes. The primes will trace out broken curves. Some are radial. Some spiral in (or out, if you rather). Some open up islands. The pattern looks like a Saul Bass logo for a “Nifty Fifty”-era telecommunications firm or maybe an airline.

You can do more. Draw a hexagonal spiral. Triangular ones. Other patterns of laying down numbers. You get patterns. The eye can’t help seeing order there. We can’t quite pin down what it is. Prime numbers keep evading our full understanding. Perhaps it would help to doodle a little during a tiresome conference call.

Stanislaw Ulam did enough fascinating numerical mathematics that I could probably do a sequence just on his work. I do want to mention one thing. It’s part of information theory. You know the game Twenty Questions. Play that, but allow for some lying. The game is still playable. Ulam did not invent this game; Alfréd Rényi did. (I do not know anything else about Rényi.) But Ulam ran across Rényi’s game, and pointed out how interesting it was, and mathematicians paid attention to him.

Today’s glossary entry comes from Elke Stangl, author of the Elkemental Force blog. I’ll do my best, although it would have made my essay a bit easier if I’d had the chance to do another topic first. We’ll get there.

Topology.

Start with a universe. Nice thing to have around. Call it ‘M’. I’ll get to why that name.

I’ve talked a fair bit about weird mathematical objects that need some bundle of traits to be interesting. So this will change the pace some. Here, I request only that the universe have a concept of “sets”. OK, that carries a little baggage along with it. We have to have intersections and unions. Those come about from having pairs of sets. The intersection of two sets is all the things that are in both sets simultaneously. The union of two sets is all the things that are in one set, or the other, or both simultaneously. But it’s hard to think of something that could have sets that couldn’t have intersections and unions.

So from your universe ‘M’ create a new collection of things. Call it ‘T’. I’ll get to why that name. But if you’ve formed a guess about why, then you know. So I suppose I don’t need to say why, now. ‘T’ is a collection of subsets of ‘M’. Now let’s suppose these four things are true.

First. ‘M’ is one of the sets in ‘T’.

Second. The empty set ∅ (which has nothing at all in it) is one of the sets in ‘T’.

Third. Whenever two sets are in ‘T’, their intersection is also in ‘T’.

Fourth. Whenever two (or more) sets are in ‘T’, their union is also in ‘T’.

Got all that? I imagine a lot of shrugging and head-nodding out there. So let’s take that. Your universe ‘M’ and your collection of sets ‘T’ are a topology. And that’s that.

Yeah, that’s never that. Let me put in some more text. Suppose we have a universe that consists of two symbols, say, ‘a’ and ‘b’. There’s four distinct topologies you can make of that. Take the universe plus the collection of sets ∅, {a}, {b}, and {a, b}. That’s a topology. Try it out. That’s the first collection you would probably think of.

Here’s another collection. Take this two-thing universe and the collection of sets ∅, {a}, and {a, b}. That’s another topology and you might want to double-check that. Or there’s this one: the universe and the collection of sets ∅, {b}, and {a, b}. Last one: the universe and the collection of sets ∅ and {a, b} and nothing else. That one barely looks legitimate, but it is. Not a topology: the universe and the collection of sets ∅, {a}, and {b}.

The number of topologies grows surprisingly with the number of things in the universe. Like, if we had three symbols, ‘a’, ‘b’, and ‘c’, there would be 29 possible topologies. The universe of the three symbols and the collection of sets ∅, {a}, {b, c}, and {a, b, c}, for example, would be a topology. But the universe and the collection of sets ∅, {a}, {b}, {c}, and {a, b, c} would not. It’s a good thing to ponder if you need something to occupy your mind while awake in bed.
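The four rules are mechanical enough to check by computer, and that makes the count of topologies checkable too. Here’s a sketch in Python (the helper names are my own) that brute-forces every possible collection of subsets of a three-symbol universe and counts the ones satisfying all four rules:

```python
from itertools import combinations

def powerset(universe):
    """Every subset of `universe`, as frozensets."""
    items = sorted(universe)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def is_topology(universe, collection):
    """The four rules: the universe and the empty set are in the
    collection, and it is closed under intersections and unions."""
    sets = set(collection)
    if frozenset(universe) not in sets or frozenset() not in sets:
        return False
    return all(a & b in sets and a | b in sets
               for a in sets for b in sets)

universe = {'a', 'b', 'c'}
subsets = powerset(universe)
# Try every possible collection of subsets; count the topologies.
count = sum(1 for r in range(len(subsets) + 1)
            for coll in combinations(subsets, r)
            if is_topology(universe, coll))
print(count)  # 29
```

The same brute force confirms the four topologies on a two-symbol universe, though the search space grows much too fast to reach five symbols this way.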

With four symbols, there’s 355 possibilities. Good luck working those all out before you fall asleep. Five symbols have 6,942 possibilities. You maybe notice this doesn’t look like any familiar sequence. After the ‘4’ the count of topologies isn’t anything obvious like “two to the number of symbols” or “the number of symbols factorial” or something.

Are you getting ready to call me on being inconsistent? In the past I’ve talked about topology as studying what we can know about geometry without involving the idea of distance. How’s that got anything to do with this fiddling about with sets and intersections and stuff?

So now we come to that name ‘M’, and what it’s finally mnemonic for. I have to touch on something Elke Stangl hoped I’d write about, but which fell to a letter someone else had bid on first. That would be a manifold. I come from an applied-mathematics background so I’m not sure I ever got a proper introduction to manifolds. They appeared one day in the background of some talk about physics problems. I think they were introduced as “it’s a space that works like normal space”, and that was it. We were supposed to pretend we had always known about them. (I’m translating. What we were actually told would be that it “works like R^3”. That’s how mathematicians say “like normal space”.) That was all we needed.

Properly, a manifold is … eh. It’s something that works kind of like normal space. That is, it’s a set, something that can be a universe. And it has to be something we can define “open sets” on. The open sets for the manifold follow the rules I gave for a topology above. You can make a collection of these open sets. And the empty set has to be in that collection. So does the whole universe. The intersection of two open sets in that collection is itself in that collection. The union of open sets in that collection is in that collection. And here’s the part that makes it a manifold rather than just any old topology: around every point there has to be some open set that can be matched up with a chunk of ordinary R^n, for whatever dimension ‘n’ the manifold has. That’s the “works like normal space” part. If all that’s true, then we have a manifold.

And now the piece that makes every pop mathematics article about topology talk about doughnuts and coffee cups. It’s possible that two topologies might be homeomorphic to each other. “Homeomorphic” is a term of art. But you understand it if you remember that “morph” means shape, and that “homeo” means something like “similar”. Two things being homeomorphic means you can match their parts up. In the matching there’s nothing left over in the first thing or the second. And the relations between the parts of the first thing are the same as the relations between the parts of the second thing.

So. Imagine the snippet of the number line for the numbers larger than -π and smaller than π. Think of all the open sets you can use to cover that. It will have a set like “the numbers bigger than 0 and less than 1”. A set like “the numbers bigger than -π and smaller than 2.1”. A set like “the numbers bigger than 0.01 and smaller than 0.011”. And so on.

Now imagine the points that exist on a circle, if you’ve omitted one point. Let’s say it’s the unit circle, centered on the origin, and that what we’re leaving out is the point that’s exactly to the left of the origin. The open sets for this are the arcs that cover some part of this punctured circle. There’s the arc that corresponds to the angles from 0 to 1 radian measure. There’s the arc that corresponds to the angles from -π to 2.1 radians. There’s the arc that corresponds to the angles from 0.01 to 0.011 radians. You see where this is going. You see why I say we can match those sets on the number line to the arcs of this punctured circle. There’s some details to fill in here. But you probably believe me this could be done if I had to.
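That matching between the interval and the punctured circle can be written down directly: send the number x to the point at angle x radians. A quick numerical sketch in Python (the function names are my own) shows the matching runs both ways with nothing left over:

```python
import math

def to_circle(x):
    """Match a number in (-pi, pi) to a point on the unit circle.
    The omitted point (-1, 0) would need x = pi or -pi, which the
    open interval leaves out."""
    return (math.cos(x), math.sin(x))

def from_circle(point):
    """The inverse matching: recover the number from the point."""
    return math.atan2(point[1], point[0])

# Each number matches exactly one point, and the round trip
# returns where we started.
for x in [-3.0, -1.0, 0.0, 0.011, 2.1]:
    assert abs(from_circle(to_circle(x)) - x) < 1e-9
```

Open sets of numbers go to open arcs and vice versa under this matching, which is the rest of what a homeomorphism asks for.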

There’s two (or three) great branches of topology. One is called “algebraic topology”. It’s the one that makes for fun pop mathematics articles about imaginary rubber sheets. It’s called “algebraic” because this field makes it natural to study the holes in a sheet. And those holes tend to form groups and rings, basic pieces of Not That Algebra. The field (I’m told) can be interpreted as looking at functors on groups and rings. This makes for some neat tying-together of subjects this A To Z round.

The other branch is called “differential topology”, which is a great field to study because it sounds like what Mister Spock is thinking about. It inspires awestruck looks where saying you study, like, Bayesian probability gets blank stares. Differential topology is about differentiable functions on manifolds. This gets deep into mathematical physics.

As you study mathematical physics, you stop worrying about ever solving specific physics problems. Specific problems are petty stuff. What you like is solving whole classes of problems. A steady trick for this is to try to find some properties that are true about the problem regardless of what exactly it’s doing at the time. This amounts to finding a manifold that relates to the problem. Consider a central-force problem, for example, with planets orbiting a sun. A planet can’t move just anywhere. It can only be in places and moving in directions that give the system the same total energy that it had to start. And the same linear momentum. And the same angular momentum. We can match these constraints to manifolds. Whatever the planet does, it does it without ever leaving these manifolds. To know the shapes of these manifolds — how they are connected — and what kinds of functions are defined on them tells us something of how the planets move.
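Those conserved quantities are easy to compute from a planet’s position and velocity. A minimal sketch in Python for the planar central-force problem (the function name and the unit choices are my own):

```python
import math

def invariants(pos, vel, GM=1.0):
    """Energy and angular momentum, per unit mass, for a planet in a
    central-force (Kepler) problem.  Whatever the planet does, these
    numbers never change; that pins its motion to a manifold in the
    space of positions-and-velocities."""
    x, y = pos
    vx, vy = vel
    r = math.hypot(x, y)
    energy = 0.5 * (vx * vx + vy * vy) - GM / r
    ang_mom = x * vy - y * vx  # z-component of r cross v
    return energy, ang_mom

# A circular orbit of radius 1 with speed 1:
print(invariants((1.0, 0.0), (0.0, 1.0)))  # (-0.5, 1.0)
```

Every state the planet ever visits has to give back these same two numbers, which is the constraint the manifolds describe.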

The maybe-third branch is “low-dimensional topology”. This is what differential topology is for two- or three- or four-dimensional spaces. You know, shapes we can imagine with ease in the real world. Maybe imagine with some effort, for four dimensions. This kind of branches out of differential topology because having so few dimensions to work in makes a lot of problems harder. We need specialized theoretical tools that only work for these cases. Is that enough to count as a separate branch? It depends what topologists you want to pick a fight with. (I don’t want a fight with any of them. I’m over here in numerical mathematics when I’m not merely blogging. I’m happy to provide space for anyone wishing to defend her branch of topology.)

But each grows out of this quite general, quite abstract idea, also known as “point-set topology”, that’s all about sets and collections of sets. There is much that we can learn from thinking about how to collect the things that are possible.

I don’t actually like it when a split week has so many more comics one day than the next, but I also don’t like splitting across a day if I can avoid it. This week, I had to do a little of both since there were so many comic strips that were relevant enough on the 8th. But they were dominated by the idea of going back to school.

Randy Glasbergen’s Glasbergen Cartoons rerun for the 8th is another back-to-school gag. And it uses arithmetic as the mathematics at its most basic. Arithmetic might not be the most fundamental mathematics, but it does seem to be one of the parts we understand first. It’s probably last to be forgotten even on a long summer break.

Mark Pett’s Mr Lowe rerun for the 8th is built on the familiar old question of why learn arithmetic when there’s computers. Quentin is unconvinced of this as motive for learning long division. I’ll grant the case could be made better. I admit I’m not sure how, though. I think long division is good as a way to teach, especially, the process of estimating and improving estimates of a calculation. There’s a lot of real mathematics in doing that.

Guy Gilchrist’s Nancy for the 8th is another back-to-school strip. Nancy’s faced with “this much math” so close to summer. Her given problem’s a bit of a mess to me. But it’s mostly teaching whether the student’s got the hang of the order of operations. And the instructor clearly hasn’t got the sense right. People can ask whether we should parse “12 divided by 3 times 4” as “(12 divided by 3) times 4” or as “12 divided by (3 times 4)”, and that does make a major difference. Multiplication commutes; you can do it in any order. Division doesn’t. Leaving ambiguous phrasing is the sort of thing you learn, instinctively, to avoid. Nancy would be justified in refusing to do the problem on the grounds that there is no unambiguous way to evaluate it, and that the instructor surely did not mean for her to evaluate it all four different plausible ways.
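Programming languages have to pick a convention for this, and most of them, Python included, read multiplication and division left to right. Which means code gives the two parsings exactly the two different answers:

```python
# "12 divided by 3 times 4", read left to right as Python does:
left_to_right = 12 / 3 * 4
# "12 divided by (3 times 4)", with the grouping made explicit:
grouped = 12 / (3 * 4)

print(left_to_right)  # 16.0
print(grouped)        # 1.0
```

The parentheses settle the question, which is the whole lesson about avoiding the ambiguous phrasing in the first place.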

By the way, I’ve seen going around Normal Person Twitter this week a comment about how they just discovered the division symbol ÷, the obelus, is “just” the fraction bar with dots above and below where the unknown numbers go. I agree this is a great mnemonic for understanding what is being asked for with the symbol. But I see no evidence that this is where the symbol, historically, comes from. We first see ÷ used for division in the writings of Johann Heinrich Rahn, in 1659, and the symbol gained popularity particularly when John Pell picked it up nine years later. But it’s not like Rahn invented the symbol out of nowhere; it had been used for subtraction for over 125 years at that point. There were also a good number of writers using : or / or \ for division. There were some people using a center dot before and after a / mark for this, as if the % sign had fallen on its side. That ÷ gained popularity in English and American writing seems to be a quirk of fate, possibly augmented by its being relatively easy to produce on a standard typewriter. (Florian Cajori notes that the National Committee on Mathematical Requirements recommended dropping ÷ altogether in favor of a symbol that actually has use in non-mathematical life, the / mark. The Committee recommended this in 1923, so you see how well the reform agenda is doing.)

Mark Leiknes’s Cow and Boy rerun for the 9th only mentions mathematics, and that as a course that Billy would rather be skipping. But I like the comic strip and want to promote its memory as much as possible. It’s a deeply weird thing, because it has something like 400 running jokes, and it’s hard to get into because the first couple times you see a pastoral conversation interrupted by an orca firing a bazooka at a cat-helicopter while a panda brags of blowing up the moon it seems like pure gibberish. If you can get through that, you realize why this is funny.

Dave Blazek’s Loose Parts for the 9th uses chalkboards full of stuff as the sign of a professor doing serious thinking. Mathematics is well-suited for chalkboards, at least in comic strips. It conveys a lot of thought and doesn’t need much preplanning. Although a joke about the difficulties in planning out blackboard use does take that planning. Yes, there is a particular pain that comes from having more stuff to write down in the quick yet easily collaborative medium of the chalkboard than there is board space to write.

Brian Basset’s Red and Rover for the 9th also really only casually mentions mathematics. But it’s another comic strip I like a good deal so would like to talk up. Anyway, it does show Red discovering he doesn’t mind doing mathematics when he sees the use.

I have two pieces to assemble for this. One is in factors. We can take any counting number, a positive whole number, and write it as the product of prime numbers. 2038 is equal to the prime 2 times the prime 1019. 4312 is equal to 2 raised to the third power times 7 raised to the second times 11. 1040 is 2 to the fourth power times 5 times 13. 455 is 5 times 7 times 13.

There are many ways to divide up numbers like this. Here’s one. Is there a square number among its factors? 2038 and 455 don’t have any. They’re each a product of prime numbers that are never repeated. 1040 has a square among its factors. 2 times 2 divides into 1040. 4312, similarly, has a square: we can write it as 2 squared times 2 times 7 squared times 11. So that is my first piece. We can divide counting numbers into squarefree and not-squarefree.
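This squarefree-or-not test comes down to factoring and checking for repeats. A sketch in Python with plain trial division (function names are my own):

```python
def prime_factors(n):
    """The prime factorization of a counting number, repeats included."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

def is_squarefree(n):
    """True when no prime appears more than once in the factorization."""
    fs = prime_factors(n)
    return len(fs) == len(set(fs))

print(prime_factors(4312))  # [2, 2, 2, 7, 7, 11]
print(is_squarefree(455))   # True
print(is_squarefree(1040))  # False
```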

The other piece is in binomial coefficients. These are numbers, often quite big numbers, that get dumped on the high school algebra student as she tries to work with some expression like (a + b)^n. They’re also dumped on the poor student in calculus, as something about Newton’s binomial coefficient theorem. Which we hear is something really important. In my experience it wasn’t explained why this should rank up there with, like, the differential calculus. (Spoiler: it’s because of polynomials.) But it’s got some great stuff to it.

Binomial coefficients are among those utility players in mathematics. They turn up in weird places. In dealing with polynomials, of course. They also turn up in combinatorics, and through that, probability. If you run, for example, 10 experiments each of which could succeed or fail, the chance you’ll get exactly five successes is going to be proportional to one of these binomial coefficients. That they touch on polynomials and probability is a sign we’re looking at a thing woven into the whole universe of mathematics. We saw them some in talking, last A-To-Z around, about Yang Hui’s Triangle. That’s also known as Pascal’s Triangle. It has more names too, since it’s been found many times over.
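That ten-experiment example is quick to check with Python’s math.comb:

```python
from math import comb

# 10 experiments, each equally likely to succeed or fail.
# The chance of exactly 5 successes is C(10, 5) out of the
# 2**10 equally likely strings of outcomes.
ways = comb(10, 5)
chance = ways / 2 ** 10

print(ways)    # 252
print(chance)  # 0.24609375
```

When success and failure aren’t equally likely the coefficient still appears, just multiplied by the appropriate powers of the two probabilities.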

The theorem under discussion is about central binomial coefficients. These are one specific coefficient in a row. The ones that appear, in the triangle, along the line of symmetry. They’re easy to describe in formulas. For a whole number ‘n’ that’s greater than or equal to zero, evaluate what we call 2n choose n, which works out to (2n)! divided by (n! times n!).

If ‘n’ is zero, this number is 1. If ‘n’ is 1, this number is 2. If ‘n’ is 2, this number is 6. If ‘n’ is 3, this number is (sparing the arithmetic) 20. The numbers keep growing. 70, 252, 924, 3432, 12870, and so on.
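The run of numbers is easy to reproduce, again with math.comb:

```python
from math import comb

# The central binomial coefficients C(2n, n) for n = 0 through 9:
centrals = [comb(2 * n, n) for n in range(10)]
print(centrals)
# [1, 2, 6, 20, 70, 252, 924, 3432, 12870, 48620]
```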

So. 1 and 2 and 6 are squarefree numbers. Not much arguing that. But 20? That’s 2 squared times 5. 70? 2 times 5 times 7. 252? 2 squared times 3 squared times 7. 924? That’s 2 squared times 3 times 7 times 11. 3432? 2 cubed times 3 times 11 times 13; there’s a 2 squared in there. 12870? 2 times 3 squared times it doesn’t matter anymore. It’s not a squarefree number.

There’s a bunch of not-squarefree numbers in there. The question: do we ever stop seeing squarefree numbers here?

So here’s Sárközy’s Theorem. It says that this central binomial coefficient is never squarefree as long as ‘n’ is big enough. András Sárközy showed in 1985 that this was true. How big is big enough? … We have a bound, at least, for this theorem. If ‘n’ is larger than a certain enormous number, then the corresponding coefficient can’t be squarefree. It might not surprise you that the formulas for that bound feature the Riemann Zeta function. That always seems to turn up for questions about large prime numbers.

That’s a common state of affairs for number theory problems. Very often we can show that something is true for big enough numbers. I’m not sure there’s a clear reason why. When numbers get large enough it can be more convenient to deal with their logarithms, I suppose. And those look more like the real numbers than the integers. And real numbers are typically easier to prove stuff about. Maybe that’s it. This is vague, yes. But to ask ‘why’ some things are easy and some are hard to prove is a hard question. What is a satisfying ’cause’ here?

It’s tempting to say that since we know this is true for all ‘n’ above a bound, we’re done. We can just test all the numbers below that bound, and the rest is done. You can do a satisfying proof this way: show that eventually the statement is true, and show all the special little cases before it is. This particular result is kind of useless for that, though. Sárközy’s bound is a number something like 241 digits long. For comparison, the total number of things in the universe is something like a number about 80 digits long. Certainly not more than 90. It’d take too long to test all those cases.

That’s all right. Since Sárközy’s proof in 1985 there’ve been other breakthroughs. In 1988 P Goetgheluck proved it was true for a big range of numbers: every ‘n’ that’s larger than 4 and less than a bound that’s a number something more than 12 million digits long. In 1991 I Vardi proved we had no squarefree central binomial coefficients for ‘n’ greater than 4 and less than a still larger bound, a number about 233 million digits long. And then in 1996 Andrew Granville and Olivier Ramaré showed directly that this was so for all ‘n’ larger than 4.

So that 70 that turned up just a few lines back is the last squarefree one of these coefficients.
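For modest ‘n’ this is checkable without ever computing the giant coefficients, thanks to Legendre’s formula: the exponent of a prime p in n! is the sum of the quotients n/p, n/p², and so on, so the exponent of p in 2n choose n is that sum for 2n minus twice the sum for n. A sketch in Python (the function names are my own):

```python
def legendre(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    e = 0
    while n:
        n //= p
        e += n
    return e

def primes_up_to(m):
    """All primes up to m, by the sieve of Eratosthenes."""
    sieve = [True] * (m + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(m ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, flag in enumerate(sieve) if flag]

def central_is_squarefree(n):
    """Is C(2n, n) squarefree?  The exponent of p in C(2n, n) is
    legendre(2n, p) - 2 * legendre(n, p); squarefree means every
    such exponent is at most 1."""
    if n == 0:
        return True  # C(0, 0) = 1
    return all(legendre(2 * n, p) - 2 * legendre(n, p) <= 1
               for p in primes_up_to(2 * n))

squarefree_ns = [n for n in range(100) if central_is_squarefree(n)]
print(squarefree_ns)  # [0, 1, 2, 4]
```

Those are the coefficients 1, 2, 6, and 70, matching the Granville–Ramaré result that 70 is the last of them.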

Is this surprising? Maybe, maybe not. I’ll bet most of you didn’t have an opinion on this topic twenty minutes ago. Let me share something that did surprise me, and continues to surprise me. In 1974 David Singmaster proved that any integer divides almost all the binomial coefficients out there. “Almost all” is here a term of art, but it means just about what you’d expect. Imagine the giant list of all the numbers that can be binomial coefficients. Then pick any positive integer you like. The number you picked will divide into so many of the giant list that the exceptions won’t be noticeable. So that square numbers like 4 and 9 and 16 and 25 should divide into most binomial coefficients? … That’s to be expected, suddenly. Into the central binomial coefficients? That’s not so obvious to me. But then so much of number theory is strange and surprising and not so obvious.