## My 2018 Mathematics A To Z: Hyperbolic Half-Plane

Today’s term was one of several nominations I got for ‘H’. This one comes from John Golden, @mathhobre on Twitter and author of the Math Hombre blog on Blogspot. He brings in a lot of thought about mathematics education and teaching tools that you might find interesting or useful or, better, both.

# Hyperbolic Half-Plane.

The half-plane part is easy to explain. By the “plane” mathematicians mean, well, the plane. What you’d get if a sheet of paper extended forever. Also if it had zero width. To cut it in half … well, first we have to think hard what we mean by cutting an infinitely large thing in half. Then we realize we’re overthinking this. Cut it by picking a line on the plane, and then throwing away everything on one side or the other of that line. Maybe throw away everything on the line too. It’s logically as good to pick any line. But there are a couple lines mathematicians use all the time. This is because they’re easy to describe, or easy to work with. At least once you fix an origin and, with it, x- and y-axes. The “right half-plane”, for example, is everything in the positive-x-axis direction. Every point with coordinates you’d describe with positive x-coordinate values. Maybe the non-negative ones, if you want the edge included. The “upper half plane” is everything in the positive-y-axis direction. All the points whose coordinates have a positive y-coordinate value. Non-negative, if you want the edge included. You can make guesses about what the “left half-plane” or the “lower half-plane” are. You are correct.

The “hyperbolic” part takes some thought. What is there to even exaggerate? Wrong sense of the word “hyperbolic”. The word here is the same one used in “hyperbolic geometry”. That takes explanation.

The Western mathematics tradition, as we trace it back to Ancient Greece and Ancient Egypt and Ancient Babylon and all, gave us “Euclidean” geometry. It’s a pretty good geometry. It describes how stuff on flat surfaces works. In the Euclidean formulation we set out a couple of axioms that aren’t too controversial. Like, lines can be extended indefinitely and all right angles are congruent. And one axiom that is controversial. But which turns out to be equivalent to the idea that there’s only one line that goes through a given point and is parallel to some other line.

And it turns out that you don’t have to assume that. You can make a coherent “spherical” geometry, one that describes shapes on the surface of a … you know. You have to change your idea of what a line is; it becomes a “geodesic” or, on the globe, a “great circle”. And it turns out that there are no geodesics that go through a point and that are parallel to some other geodesic. (I know you want to think about globes. I do too. You maybe want to say the lines of latitude are parallel to one another. They’re even called parallels, sometimes. So they are. But they’re not geodesics. They’re “little circles”. I am not throwing in ad hoc reasons I’m right and you’re not.)

There is another, though. This is “hyperbolic” geometry. This is the way shapes work on surfaces that mathematicians call saddle-shaped. I don’t know what the horse enthusiasts out there call these shapes. My guess is they chuckle and point out how that would be the most painful saddle ever. Doesn’t matter. We have surfaces. They act weird. You can draw, through a point, infinitely many lines parallel to a given other line.

That’s some neat stuff. That’s weird and interesting. They’re even called “hyperparallel lines” if that didn’t sound great enough. You can see why some people would find this worth studying. The catch is that it’s hard to order a pad of saddle-shaped paper to try stuff out on. It’s even harder to get a hyperbolic blackboard. So what we’d like is some way to represent these strange geometries using something easier to work with.

The hyperbolic half-plane is one of those approaches. This uses the upper half-plane. It works by a move as brilliant and as preposterous as that time Q told Data and LaForge how to stop that falling moon. “Simple. Change the gravitational constant of the universe.”

What we change here is the “metric”. The metric is a function. It tells us something about how points in a space relate to each other. It gives us distance. In Euclidean geometry, plane geometry, we use the Euclidean metric. You can find the distance between point A and point B by looking at their coordinates, $(x_A, y_A)$ and $(x_B, y_B)$. This distance is $\sqrt{\left(x_B - x_A\right)^2 + \left(y_B - y_A\right)^2}$. Don’t worry about the formulas. The lines on a sheet of graph paper are a reflection of this metric. Each line is (normally) a fixed distance from its parallel neighbors. (Yes, there are polar-coordinate graph papers. And there are graph papers with logarithmic or semilogarithmic spacing. I mean graph paper like you can find at the office supply store without asking for help.)
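If you’d like to poke at that formula yourself, here’s a little Python sketch of the Euclidean metric. The function name is my own label, not anything standard:

```python
import math

def euclidean_distance(a, b):
    """Euclidean metric: straight-line distance between points a and b."""
    ax, ay = a
    bx, by = b
    return math.sqrt((bx - ax) ** 2 + (by - ay) ** 2)

# The hypotenuse of a 3-4-5 right triangle:
print(euclidean_distance((0, 0), (3, 4)))  # 5.0
```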

But the metric is something we choose. There are some rules it has to follow to be logically coherent, yes. But those rules give us plenty of room to play. By picking the correct metric, we can make this flat plane obey the same geometric rules as the hyperbolic surface. This metric looks more complicated than the Euclidean metric does, but only because it has more terms and takes longer to write out. What’s important about it is that the distance your thumb put on top of the paper covers up is bigger if your thumb is near the bottom of the upper-half plane than if your thumb is near the top of the paper.
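I said not to worry about the formulas, but for the curious: the metric here is the standard Poincaré half-plane metric, and its distance function has a known closed form. Here’s a Python sketch (the function name is mine) showing the thumb-width effect:

```python
import math

def half_plane_distance(a, b):
    """Hyperbolic distance between points (x, y), y > 0, in the upper half-plane.
    Uses the standard closed form: cosh d = 1 + (dx^2 + dy^2) / (2 * y_a * y_b)."""
    ax, ay = a
    bx, by = b
    return math.acosh(1 + ((bx - ax) ** 2 + (by - ay) ** 2) / (2 * ay * by))

# The same thumb-width, high up versus near the bottom edge:
print(half_plane_distance((0, 10), (1, 10)))    # a short hyperbolic distance
print(half_plane_distance((0, 0.1), (1, 0.1)))  # a much longer one
```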

So. There are now two things that are “lines” in this. One of them is vertical lines. The graph paper we would make for this has a nice file of parallel lines like ordinary paper does. The other thing, though … well, that’s half-circles. They’re half-circles with a center on the edge of the half-plane. So our graph paper would also have a bunch of circles, of different sizes, coming from regularly-spaced sources on the bottom of the paper. A line segment is a piece of either these vertical lines or these half-circles. You can make any polygon you like with these, if you pick out enough line segments. They’re there.
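If you want to know which half-circle is the “line” through two given points, its center is wherever on the boundary is equidistant, in the ordinary sense, from both points. A sketch, assuming the two points aren’t on the same vertical line (that case is already a geodesic):

```python
def geodesic_center(a, b):
    """x-coordinate of the center of the half-circle geodesic through a and b.
    Assumes a and b don't share an x-coordinate (that case is a vertical line)."""
    ax, ay = a
    bx, by = b
    # Solve (ax - c)^2 + ay^2 == (bx - c)^2 + by^2 for c:
    return (bx ** 2 + by ** 2 - ax ** 2 - ay ** 2) / (2 * (bx - ax))

print(geodesic_center((-1, 1), (1, 1)))  # 0.0, by symmetry
```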

There are many ways to represent hyperbolic surfaces. This is one of them. It’s got some nice properties. One of them is that it’s “conformal”. Angles that you draw using this metric are the same size as those on the corresponding hyperbolic surface. You don’t appreciate how sweet that is until you’re working in non-Euclidean geometries. Circles that are entirely within the hyperbolic half-plane match to circles on a hyperbolic surface. Once you’ve got your intuition for this hyperbolic half-plane, you can step into hyperbolic half-volumes. And that lets you talk about the geometry of hyperbolic spaces that reach into four or more dimensions of human-imaginable spaces. Isometries — picking up a shape and moving it in ways that don’t change distance — match up with the Möbius Transformations. These are a well-understood set of transformations of the plane that comes from a different corner of geometry. Also from that fellow with the strip, August Ferdinand Möbius. It’s always exciting to find relationships like that in mathematical structures.

Pictures often help. I don’t know why I don’t include them. But here is a web site with pages, and pictures, that describe much of the hyperbolic half-plane. It includes code to use with the Geometer’s Sketchpad software, which I have never used and know nothing about. That’s all right. There’s at least one page there showing a wondrous picture. I hope you enjoy.

This and other essays in the Fall 2018 A-To-Z should be at this link. And I’ll start asking for nominations for more letters soon.

## Reading the Comics, October 14, 2018: Possessive Edition

The first two comics for this essay have titles of the form Name’s Thing, so, that’s why this edition title. That’s good enough, isn’t it? And besides this series there was a Perry Bible Fellowship which at least depicted mathematical symbols. It’s a rerun, though, even among those shown on GoComics.com. It was rerun recently enough that I featured it around here back in June. It’s a bit risque. But the strip was rerun the 12th. Maybe I also need to drop Perry Bible Fellowship from the roster of comics I read for this.

On to the comics I haven’t dropped.

Tony Buino and Gary Markstein’s Daddy’s Home for the 11th tries using specific examples to teach mathematics. There’s strangeness to arithmetic. It’s about these abstract things like “thirty” and “addition” and such. But these things match very well the behaviors of discrete objects, ones that don’t blend together or shatter by themselves. So we can use the intuition we have for specific things to get comfortable working with the abstract. This doesn’t stop, either. Mathematicians like to work on general, abstract questions; they let us answer big swaths of questions all at once. But working out a specific case is usually easier, both to prove and to understand. I don’t know what’s the most advanced mathematics that could be usefully practiced by thinking about cupcakes. Probably something in group theory, in studying the rotations of objects that are perfectly, or nearly, rotationally symmetric.

John Zakour and Scott Roberts’s Maria’s Day for the 11th is a follow-up to a strip featured last week. Maria’s been getting help on her mathematics from one of her closet monsters. And includes the usual joke about Common Core being such a horrible thing that it must come from monsters. I don’t know whether in the comic strip’s universe the monster is supposed to be imaginary. (Usually, in a comic strip, the question of whether a character is imaginary-or-real is pointless. I think Richard Thompson’s Cul de Sac is the only one to have done something good with it.) But if the closet monster is in Maria’s imagination, it’s quite in line for her to think that teaching comes from some malevolent and inscrutable force.

Olivia Jaimes’s Nancy for the 12th features one of the first interesting mathematics questions you do in physics. This is often done with calculus. Not much, but more than Nancy and Esther could realistically have. It could be worked out experimentally, and that’s likely what the teacher was hoping for. Calculus isn’t really necessary, although it does show skeptical students there’s some value in all this d-dx business they’ve been working through. You can find the same answers by dimensional analysis, which is less intimidating. But you’d still need to know some trigonometry functions. That’s beyond whatever Nancy’s grade level is too. In any case, Nancy is an expert at identifying unstated assumptions, and working out loopholes in them. I’m curious whether the teacher would respect Nancy’s skill here. (The way the writing’s been going, I think she would.)

Francesco Marciuliano and Jim Keefe’s Sally Forth for the 13th is about new-friend Jenny trying to work out her relationship with Hilary-Faye-and-Nona. It’s a good bit of character work, but that is outside my subject here. In the last panel Nona admits she’s been talking, or at least thinking about τ versus π. This references a minor nerd-squabble that’s been going on a couple years. π is an incredibly well-known, useful number. It’s the only transcendental number you can expect a normal person to have ever heard of. Humans noticed it, historically, because the length of the circumference of a circle is π times the length of its diameter. Going between “the distance across” and “the distance around” turns out to be useful.

The thing is, many mathematical and physics formulas find it more convenient to write things in terms of the radius of a circle or sphere. And this makes 2π show up in formulas. A lot. Even in things that don’t obviously have circles in them. For example, the Gaussian distribution, which describes how much a sample looks like the population it’s sampled from, has 2π in it. So, the τ argument goes, why write out 2π in all these places? Why not decide that that’s the useful number to think about, give it the catchy name τ, and use that instead? All the interesting questions about π have exact, obvious parallel questions about τ. Any answers about one give us answers about the other. So why not make this switch and then … pocket the savings in having shorter formulas?
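You can check numerically that 2π really is lurking in the Gaussian: the integral of $e^{-x^2/2}$ over the whole line comes out to $\sqrt{2\pi}$. A rough trapezoid-rule check in Python, with the interval and step count being my own arbitrary choices:

```python
import math

# Integrate exp(-x^2/2) over a wide interval; the tails beyond it are negligible.
n, lo, hi = 200_000, -20.0, 20.0
h = (hi - lo) / n
total = sum(math.exp(-((lo + i * h) ** 2) / 2) for i in range(n + 1))
total -= 0.5 * (math.exp(-lo ** 2 / 2) + math.exp(-hi ** 2 / 2))  # trapezoid ends
integral = total * h

print(integral)                # about 2.5066...
print(math.sqrt(2 * math.pi)) # the same; a tau fan would write sqrt(tau)
```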

You may sense in me a certain skepticism. I don’t see where changing over gets us anything worth the bother. But there are fashions in mathematics as with everything else. Perhaps τ has some ability to clarify things in ways we’ll come to better appreciate.

This and my other Reading the Comics posts are at this link. Essays inspired by Daddy’s Home are at this link. Other essays that mention Maria’s Day discussions should be at this link. Essays with a mention of Nancy, old and new, are at this link. And essays in which Sally Forth gets discussed will be at this link. It’s a new tag today, which does surprise me.

## My 2018 Mathematics A To Z: Group Action

I got several great suggestions for topics for ‘G’. The one that most caught my imagination was mathtuition88’s, the group action. Mathtuition88 is run by Mr Wu, a mathematics tutor in Singapore. His mathematics blog recounts his own explorations of interesting topics.

# Group Action.

This starts from groups. A group, here, means a pair of things. The first thing is a set of elements. The second is some operation. It takes a pair of things in the set and matches it to something in the set. (There are rules the operation has to follow — it has to be associative, there has to be an identity element, everything needs an inverse — but don’t worry about them just yet.) For example, try the integers as the set, with addition as the operation. There are many kinds of groups you can make. There can be finite groups, ones with as few as one element or as many as you like. (The one-element groups are so boring. We usually need at least two to have much to say about them.) There can be infinite groups, like the integers. There can be discrete groups, where there’s always some minimum distance between elements. There can be continuous groups, like the real numbers, where there’s no smallest distance between distinct elements.

Groups came about from looking at how numbers work. So the first examples anyone gets are based on numbers. The integers, especially, and then the integers modulo something. For example, there’s $Z_2$, which has two numbers, 0 and 1. Addition works by the rule that 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 0. There are similar rules for $Z_3$, which has three numbers, 0, 1, and 2.
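These modular-addition groups are easy to play with in code. A sketch, with make_z_n being my own name for the constructor:

```python
def make_z_n(n):
    """The group Z_n: elements 0 through n-1, with addition modulo n."""
    elements = list(range(n))
    add = lambda a, b: (a + b) % n
    return elements, add

elements, add = make_z_n(2)
print(add(1, 1))  # 0, matching the rule 1 + 1 = 0 in Z_2

_, add3 = make_z_n(3)
print(add3(2, 2))  # 1, since 4 modulo 3 is 1
```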

But after a few comfortable minutes on this, group theory moves on to more abstract things. Things with names like the “permutation group”. This starts with some set of things and we don’t even care what the things are. They can be numbers. They can be letters. They can be places. They can be anything. We don’t care. The group is all of the ways to swap elements around. All the relabellings we can do without losing or gaining an item. Or another, the “symmetry group”. This is, for some given thing — plates, blocks, and wallpaper patterns are great examples — all the ways you can rotate or move or reflect the thing without changing the way it looks.

And now we’re creeping up on what a “group action” is. Let me just talk about permutations here. These are where you swap around items. Like, start out with a list of items “1 2 3 4”. And pick out a permutation, say, swap the second with the fourth item. We write that, in shorthand, as (2 4). Maybe another permutation too. Say, swap the first item with the third. Write that out as (1 3). We can multiply these permutations together. Doing these permutations, in this order, has a particular effect: it swaps the second and fourth items, and swaps the first and third items. This is another permutation on these four items.

These permutations, these “swap this item with that” rules, are a group. The set for the group is instructions like “swap this with that”, or “swap this with that, and that with this other thing, and this other thing with the first thing”. Or even “leave this thing alone”. The operation between two things in the set is, do one and then the other. For example, (2 3) and then (3 4) has the effect of moving the second thing to the fourth spot, the (original) fourth thing to the third spot, and the original third thing to the second spot. That is, it’s the permutation (2 3 4). If you ever need something to doodle during a slow meeting, try working out all the ways you can shuffle around, say, six things. And what happens as you do all the possible combinations of these things. Hey, you’re only permuting six items. How many ways could that be?
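Here’s a Python sketch of that composition, tracking where each item ends up, plus an answer to the six-item doodle (it’s 720, which is 6 factorial). The dictionaries are my own encoding of the cycles:

```python
from itertools import permutations

def compose(p, q):
    """Do permutation p, then q. Each maps 'item i ends up in spot ...'."""
    return {i: q[p[i]] for i in p}

p = {1: 1, 2: 3, 3: 2, 4: 4}  # the swap (2 3)
q = {1: 1, 2: 2, 3: 4, 4: 3}  # the swap (3 4)
result = compose(p, q)
print(result)  # item 2 to spot 4, item 3 to spot 2, item 4 to spot 3

# How many ways to shuffle six items?
print(len(list(permutations(range(6)))))  # 720
```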

So here’s what sounds like a fussy point. The group here is made up of the ways you can permute these items. The items aren’t part of the group. They just gave us something to talk about. This is where I got so confused, as an undergraduate, working out groups and group actions.

When we move back to talking about the original items, then we get a group action. You get a group action by putting together a group with some set of things. Let me call the group ‘G’ and the set ‘X’. If I need something particular in the group I’ll call that ‘g’. If I need something particular from the set ‘X’ I’ll call that ‘x’. This is fairly standard mathematics notation. You see how subtly clever this notation is. The group action comes from taking things in G and applying them to things in X, to get things in X. Usually other things, but not always. In the lingo, we say the group action maps the pair of things G and X to the set X.

There are rules these actions have to follow. They’re what you would expect, if you’ve done any fiddling with groups. Don’t worry about them. What’s interesting is what we get from group actions.

First is group orbits. Take some ‘g’ out of the group G. Take some ‘x’ out of the set ‘X’. And build this new set. First, x. Then, whatever g does to x, which we write as ‘gx’. But ‘gx’ is still something in ‘X’, so … what does g do to that? So toss in ‘ggx’. Which is still something in ‘X’, so, toss in ‘gggx’. And ‘ggggx’. And keep going, until you stop getting new things. If ‘X’ is finite, this sequence has to be finite. It might be the whole set of X. It might be some subset of X. But if ‘X’ is finite, it’ll get back, eventually, to where you started, which is why we call this the “group orbit”. We use the same term even if X isn’t finite and we can’t guarantee that all these iterations of g on x eventually get back to the original x. This is a subset of X, not a group in its own right; X doesn’t even need an operation. (The full orbit of x, properly, collects what every element of G does to x, not just the powers of one g. But iterating a single g gives you the flavor.)
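Here’s a sketch of building that list — x, gx, ggx, and so on — for a finite example. Take g to be “add 3” acting on the twelve hours of a clock; the function name is mine:

```python
def iterate_orbit(g, x, apply):
    """Collect x, gx, ggx, ... until the sequence cycles back to x."""
    seen = [x]
    current = apply(g, x)
    while current != x:
        seen.append(current)
        current = apply(g, current)
    return seen

# g = "add 3" acting on Z_12, starting from 0:
print(iterate_orbit(3, 0, lambda g, x: (x + g) % 12))  # [0, 3, 6, 9]
```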

There can be other special subsets. Like, are there elements ‘g’ that map ‘x’ to ‘x’? Sure. There has to be at least one, since the group G has an identity element. There might be others. So, for any given ‘x’, what are all the elements ‘g’ that don’t change it? The set of all the values of g for which gx is x is the “isotropy group” Gx. Or the “stabilizer subgroup”. This is a subgroup of G, based on x.
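A sketch of picking out a stabilizer, using the permutation group on three items; the names and the encoding are my own:

```python
from itertools import permutations

def stabilizer(group, x, apply):
    """All the elements g of the group with gx equal to x."""
    return [g for g in group if apply(g, x) == x]

s3 = list(permutations(range(3)))   # all six permutations of three items
apply_perm = lambda g, x: g[x]      # g sends item x to position g[x]
print(stabilizer(s3, 0, apply_perm))
# The identity and the swap of items 1 and 2 -- the ones leaving item 0 alone
```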

Yes, but the point?

Well, the biggest thing we get from group actions is the chance to put group theory principles to work on specific things. A group might describe the ways you can rotate or reflect a square plate without leaving an obvious change in the plate. The group action lets you make this about the plate. Much of modern physics is about learning how the geometry of a thing affects its behavior. This can be the obvious sorts of geometry, like, whether it’s rotationally symmetric. But it can be subtler things, like, whether the forces in the system are different at different times. Group actions let us put what we know from geometry and topology to work in specifics.

A particular favorite of mine is that they let us express the wallpaper groups. These are the ways we can use rotations and reflections and translations (linear displacements) to create different patterns. There are fewer different patterns than you might have guessed: only seventeen of them. (Different, here, overlooks such petty things as whether the repeated pattern is a diamond, a flower, or a hexagon. Or whether the pattern repeats every two inches versus every three inches.)

And they stay useful for abstract mathematical problems. All this talk about orbits and stabilizers lets us find something called the Orbit-Stabilizer Theorem. This connects the size of the group G to the size of orbits of x and of the stabilizer subgroups. This has the exciting advantage of letting us turn many proofs into counting arguments. A counting argument is just what you think: showing there’s as many of one thing as there are another. Here’s a nice page about the Orbit-Stabilizer Theorem, and how to use it. This includes some nice, easy-to-understand problems like “how many different necklaces could you make with three red, two green, and one blue bead?” Or if that seems too mundane a problem, an equivalent one from organic chemistry: how many isomers of naphthol could there be? You see where these group actions give us useful information about specific problems.
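You can brute-force that necklace problem in a few lines and check the counting argument’s answer. Treat two bead strings as the same necklace when some rotation or flip matches them up; the helper name is mine:

```python
from itertools import permutations

def canonical(necklace):
    """A class label: the smallest among all rotations and reflections."""
    n = len(necklace)
    variants = []
    for seq in (necklace, necklace[::-1]):
        variants.extend(seq[i:] + seq[:i] for i in range(n))
    return min(variants)

beads = "RRRGGB"  # three red, two green, one blue
classes = {canonical(p) for p in set(permutations(beads))}
print(len(classes))  # 6 genuinely different necklaces
```

Six, which is a good deal fewer than the sixty arrangements you’d count if rotations and flips mattered.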

If you should like a more detailed introduction, although one that supposes you’re more conversant with group theory than I suppose here, this is a good sequence: Group Actions I, which actually defines the things. Group actions II: the orbit-stabilizer theorem, which is about just what it says. Group actions III — what’s the point of them?, which has the sort of snappy title I like, but which gives points that make sense when you’re comfortable talking about quotient groups and isomorphisms and the like. And what I think is the last in the sequence, Group actions IV: intrinsic actions, which is about using group actions to prove stuff. And includes a mention of one of my favorite topics, the points the essay-writer just didn’t get the first time through. (And more; there’s a point where the essay goes wrong, and needs correction. I am not the Joseph who found the problem.)

## Reading the Comics, October 11, 2018: Under Weather Edition

I ended up not finding more comics on-topic on GoComics yesterday. So this past week’s mathematically-themed strips should fit into two posts well. I apologize for any loss of coherence in this essay, as I’m getting a bit of a cold. I’m looking forward to what this cold does for the A To Z essays coming Tuesday and Friday this week, too.

Stephen Beals’s Adult Children for the 7th uses Albert Einstein’s famous equation as shorthand for knowledge. I’m a little surprised it’s written out in words, rather than symbols. This might reflect that $E = mc^2$ is often understood just as this important series of sounds, rather than as an equation relating things to one another. Or it might just reflect the needs of the page composition. It could be too small a word balloon otherwise.

Julie Larson’s The Dinette Set for the 9th continues the thread of tip-calculation jokes around here. I have no explanation for this phenomenon. In this case, Burl is doing the calculation correctly. If the tip is supposed to be 15% of the bill, and the bill is reduced 10%, then the tip would be reduced 10%. If you already have the tip calculated, it might be quicker to figure out a tenth of that rather than work out 15% of the original bill. And, yes, the characters are being rather unpleasantly penny-pinching. That was just the comic strip’s sense of humor.

Todd Clark’s Lola for the 9th takes the form of your traditional grumbling about story problems. It also shows off the motif of updating the words in a story problem to be awkwardly un-hip. The problem seems to be starting in a confounding direction anyway. The first sentence isn’t even over and it’s introducing the rate at which Frank is shedding social-media friends over time and the rate at which a train is travelling, some distance per time. Having one quantity with dimensions of friends-per-time and another with dimensions of distance-per-time is begging for confusion. Or for some weird gibberish thing, like, determining something to be (say) ninety mile-friends. There’s trouble ahead.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 10th proposes naming a particular kind of series. A series is the sum of a sequence of numbers. It doesn’t have to be a sequence with infinitely many numbers in it, but it usually is, if it’s to be an interesting series. Properly, a series gets defined by something like the symbols in the upper caption of the panel:

$\sum_{i = 1}^{\infty} a_i$

Here the ‘i’ is a “dummy variable”, of no particular interest and not even detectable once the calculation is done. It’s not that thing with the square roots of -1 in this case. ‘i’ is specifically known as the ‘index’, since it indexes the terms in the sequence. Despite the logic of ‘i’ as index, I prefer to use ‘j’, ‘k’, or ‘n’. This avoids confusion with that square-root-of-minus-1 meaning for i. The index starts at some value, the one to the right of the equals sign underneath the capital sigma; in this case, 1. The series evaluates whatever the formula described by $a_i$ is, for each whole number between that lowest ‘i’, in this case 1, and whatever the value above the sigma is. For the infinite series, that’s infinitely large. That is, work out $a_i$ for every counting number ‘i’. For the first sum in the caption, that highest number is 4, and you only need to evaluate four terms and add them together. There’s no rule given for $a_i$ in the caption; that just means that, in this case, we don’t yet have reason to care what the formula is.

This is the way to define a series if we’re being careful, and doing mathematics properly. But there are shorthands, and we fall back on them all the time. On the blackboard is one of them: $24 + 12 + 6 + 3 + \cdots$. The $\cdots$ at the end of a summation like this means “carry on this pattern for infinitely many terms”. If it appears in the middle of a summation, like $2 + 4 + 6 + 8 + \cdots + 20$ it means “carry on this pattern for the appropriate number of terms”. In that case, it would be $10 + 12 + 14 + 16 + 18$.

The flaw with this “carry on this pattern” is that, properly, there’s no such thing as “the” pattern. There are infinitely many ways to continue from whatever the start was, and they’re all equally valid. What lets this scheme work is cultural expectations. We expect the difference between one term and the next to follow some easy patterns. They increase or decrease by the same amount as we’ve seen before (an arithmetic progression, like 2 + 4 + 6 + 8, increasing by two each time). They increase or decrease by the same ratio as we’ve seen before (a geometric progression, like 24 + 12 + 6 + 3, cutting in half each time). Maybe the sign alternates, or changes by some straightforward rule. If it isn’t one of these, then we have to fall back on being explicit. In this case, it would be that $a_i = 24 \cdot \left(\frac{1}{2}\right)^{i - 1}$.
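With the rule for $a_i$ written out, you can watch the partial sums settle down; this particular geometric series heads to 48. A quick check, along with the finite shorthand sum from before:

```python
# Partial sums of 24 + 12 + 6 + 3 + ... , where a_i = 24 * (1/2)^(i-1):
partial = 0.0
for i in range(1, 31):
    partial += 24 * 0.5 ** (i - 1)
print(partial)  # within a hair of 48, the sum of the infinite series

# The finite shorthand 2 + 4 + 6 + ... + 20:
print(sum(range(2, 21, 2)))  # 110
```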

The capital-sigma as shorthand for “sum” traces to Leonhard Euler, because of course. I’m finding it hard, in my copy of Florian Cajori’s History of Mathematical Notations, to find just where the series notation as we use it got started. Also I’m not finding where ellipses got into mathematical notation either. It might reflect everybody realizing this was a pretty good way to represent “we’re not going to write out the whole thing here”.

Norm Feuti’s Retail for the 11th riffs on how many people, fundamentally, don’t know what percentages are. I think it reflects thinking of a percentage as some kind of unit. We get used to measurements of things, like, pounds or seconds or dollars or degrees or such that are fixed in value. But a percentage is relative. It’s a fraction of some original quantity. A difference of (say) two pounds in weight is the same amount of weight whatever the original was; why wouldn’t two percent of the weight behave similarly? … Gads, yes, I feel for the next retailer who gets these customers.

I think I’ve already used the story from when I worked in the bookstore about the customer concerned whether the ten-percent-off sticker applied before or after sales tax was calculated. So I’ll only share if people ask to hear it. (They won’t ask.)

When I’m not getting a bit ill, I put my Reading the Comics posts at this link. Essays which mention Adult Children are at this link. Essays with The Dinette Set discussions should be at this link. The essays inspired by Lola are at this link. There’s some mention of Saturday Morning Breakfast Cereal in essays at this link, or pretty much every Reading the Comics post. And Retail gets discussed at this link.

## Reading the Comics, October 6, 2018: Square Root of 144 Edition

And I have three last strips from last week to talk about. For those curious, I have ten comics for this week that I flagged for mention, at least before reading the Saturday GoComics pages. So that will probably be two or three installments next week. It’ll depend how many Saturday GoComics strips raise a point I feel like discussing.

Jim Toomey’s Sherman’s Lagoon for the 5th uses arithmetic as the archetypical homework problem that’s short enough to fit in a panel but also too hard for an adult to do. And, neatly, easy for a computer to do. Were I either shark here I’d have reasoned out the square root of 144 something like this: they’re not getting homework where they’d be asked the square root of something that wasn’t a perfect square. So it’s got to be a whole number. 144 is between 100 and 400, so its square root has to be between 10 and 20. 144 is pretty close to 100, so 144’s square root is probably close to 10. The square of 1 is 1, so 11 squared has to be one-hundred-something-and-one. The square of 2 is 4, so 12 squared has to be one-hundred-something-and-four. The square of 3 is 9, so 13 squared has to be one-hundred-something-and-nine. The square of 4 is 16, so 14 squared has to be one-hundred-something-and-six. And by then we’re getting pretty far from 10. So the only plausible candidate is 12. Test that out and, what do you know, there it is.
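The sharks’ reasoning translates to code pretty directly: narrow by size, narrow by last digit, then test what’s left. A sketch, with the function name being my own invention:

```python
def sharks_square_root(n):
    """Find the square root of a perfect square between 100 and 400,
    the way the essay reasons it out: by size, then by last digit."""
    candidates = [r for r in range(10, 20) if (r * r) % 10 == n % 10]
    for r in candidates:
        if r * r == n:
            return r

print(sharks_square_root(144))  # 12
```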

Greg Cravens’s The Buckets for the 6th is a riff on the monkeys-at-keyboards joke. Well, what keeps monkeys-at-typewriters from writing interesting things is that they don’t have any selection. They just produce text to no end, in principle. Picking out characters and words that carry narrative is what makes essayists and playwrights. … That said, I think every instructor has faced the essay that is, somehow, worse than gibberish. The process is to try to find anything that could be credited, even if it’s just including at least one of the words from the topic of the essay, and move briskly on.

Larry Wright’s Motley for the 6th is a riff on the idea tips are impossibly complicated to calculate. And that any mathematics might as well be algebra. My question: what the heck calculation is Debbie describing here? There are different ways to find a 15 percent tip. One two-step one is to divide the bill by ten, which is easy and gets you 10 percent. Then divide that by two, which is not-hard, and gets you 5 percent. Add together the 10 percent and 5 percent and you get 15 percent. A one-step method is to just divide by six. This gets you a bit under 17 percent, but that’s close enough. It just requires an ability to divide by six.
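Both methods are one-liners in code; here are the numbers on a $48 bill, say (my example, not the strip’s):

```python
bill = 48.00

# Two-step method: ten percent, then half of that again, added together.
ten_percent = bill / 10
fifteen_percent_tip = ten_percent + ten_percent / 2

# One-step method: divide by six, giving a bit under 17 percent.
divide_by_six_tip = bill / 6

print(fifteen_percent_tip)  # 7.2
print(divide_by_six_tip)    # 8.0
```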

There’s other ways to go about it, surely. There are many ways to do any calculation you like. Some of them have the advantages of requiring fewer steps. Some require more steps, but hopefully easier steps. Debbie is, obviously, just describing a nonsensically complicated calculation, to fit the needs of the joke. I’m just trying to think of what a plausible process would lead into the first panel and still get the right answer.

My many Reading the Comics posts are at this link. Essays which mention Sherman’s Lagoon should be at this link. Other essays with The Buckets should appear at this link. And other essays discussing Motley Classics should be here.

## My 2018 Mathematics A To Z: Fermat’s Last Theorem

Today’s topic is another request, this one from a Dina. I’m not sure if this is Dina Yagodich, who’d also suggested using the letter ‘e’ for the number ‘e’. Trusting that it is, Dina Yagodich has a YouTube channel of mathematics videos. They cover topics like how to convert degrees and radians to one another, what the chance of a false positive (or false negative) on a medical test is, ways to solve differential equations, and how to use computer tools like MathXL, TI-83/84 calculators, or Matlab. If I’m mistaken, original-commenter Dina, please let me know and let me know if you have any creative projects that should be mentioned here.

# Fermat’s Last Theorem.

It comes to us from number theory. Like many great problems in number theory, it’s easy to understand. If you’ve heard of the Pythagorean Theorem you know, at least, there are triplets of whole numbers so that the first number squared plus the second number squared equals the third number squared. It’s easy to wonder about generalizing. Are there quartets of numbers, so the squares of the first three add up to the square of the fourth? Quintuplets? Sextuplets? … Oh, yes. That’s easy. What about triplets of whole numbers, including negative numbers? Yeah, and that turns out to be boring. Triplets of rational numbers? Turns out to be the same as triplets of whole numbers. Triplets of real-valued numbers? Turns out to be very boring. Triplets of complex-valued numbers? Also none too interesting.

Ah, but, what about a triplet of numbers, only raised to some other power? All three numbers raised to the first power is easy; we call that addition. To the third power, though? … The fourth? Any other whole number power? That’s hard. It’s hard finding, for any given power, a trio of numbers that work, although some come close. I’m informed there was an episode of The Simpsons which included, as a joke, the equation $1782^{12} + 1841^{12} = 1922^{12}$. If it were true, this would be enough to show Fermat’s Last Theorem was false. … Which happens. Sometimes, mathematicians believe they have found something which turns out to be wrong. Often this comes from noticing a pattern, and finding a proof for a specific case, and supposing the pattern holds up. This equation isn’t true, but the two sides agree for the first nine digits. The Simpsons episode The Wizard of Evergreen Terrace puts forth $3987^{12} + 4365^{12} = 4472^{12}$, whose sides apparently match for ten digits. This includes the final digit, also known as “the only one anybody could check”. (The last digit of $3987^{12}$ is 1. The last digit of $4365^{12}$ is 5. The last digit of $4472^{12}$ is 6, and there you go.) Really makes you think there’s something weird going on with 12th powers.
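Exact integer arithmetic makes both near-misses easy to check nowadays. A quick sketch:

```python
# The first equation above: false, but close.
lhs = 1782**12 + 1841**12
rhs = 1922**12
print(lhs == rhs)                    # False
print(str(lhs)[:9] == str(rhs)[:9])  # True: the first nine digits agree

# The second equation above: also false, also close.
lhs2 = 3987**12 + 4365**12
rhs2 = 4472**12
print(lhs2 == rhs2)         # False
print(lhs2 % 10, rhs2 % 10)  # 6 6 -- the famous matching final digit
```

Python’s integers are arbitrary-precision, so there is no rounding to hide behind; the comparisons are exact.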

For a Fermat-like example, Leonhard Euler conjectured a thing about “Sums of Like Powers”. That for a whole number ‘n’, you need at least n whole numbers-raised-to-an-nth-power to equal something else raised to an n-th power. That is, you need at least three whole numbers raised to the third power to equal some other whole number raised to the third power. At least four whole numbers raised to the fourth power to equal something raised to the fourth power. At least five whole numbers raised to the fifth power to equal some number raised to the fifth power. Euler was wrong, in this case. L J Lander and T R Parkin published, in 1966, the one-paragraph paper Counterexample to Euler’s Conjecture on Sums of Like Powers. $27^5 + 84^5 + 110^5 + 133^5 = 144^5$ and there we go. Thanks, CDC 6600 computer!
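Lander and Parkin’s counterexample is even easier to verify now than it was to find. A one-line check:

```python
# Counterexample to Euler's conjecture on sums of like powers:
# four fifth powers summing to a fifth power, not the five Euler expected.
lhs = 27**5 + 84**5 + 110**5 + 133**5
rhs = 144**5
print(lhs, rhs, lhs == rhs)  # 61917364224 61917364224 True
```

Finding it took a CDC 6600 searching; verifying it takes a moment of exact arithmetic.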

But Fermat’s hypothesis. Let me put it in symbols. It’s easier than giving everything long, descriptive names. Suppose that the power ‘n’ is a whole number greater than 2. Then there are no three counting numbers ‘a’, ‘b’, and ‘c’ which make true the equation $a^n + b^n = c^n$. It looks doable. It looks like once you’ve mastered high school algebra you could do it. Heck, it looks like if you know the proof about how the square root of two is irrational you could approach it. Pierre de Fermat himself said he had a wonderful little proof of it.

He was wrong. No shame in that. He was right about a lot of mathematics, including a lot of stuff that leads into the basics of calculus. And he was right in his feeling that this $a^n + b^n = c^n$ stuff was impossible. He was wrong that he had a proof. At least not one that worked for every possible whole number ‘n’ larger than 2.

For specific values of ‘n’, though? Oh yes, that’s doable. Fermat did it himself for an ‘n’ of 4. Euler, a century later, filled in ‘n’ of 3. Peter Dirichlet, a great name in number theory and analysis, and Adrien-Marie Legendre, who worked on everything, proved the case of ‘n’ of 5. Dirichlet, in 1832, proved the case for ‘n’ of 14. And there were more partial solutions. You could show that if Fermat’s Last Theorem were ever false, it would have to be false for some prime-number value of ‘n’. That’s great work, answering as it does infinitely many possible cases. It just leaves … infinitely many to go.

And that’s how things went for centuries. I don’t know that every mathematician made some attempt on Fermat’s Last Theorem. But it seems hard to imagine a person could love mathematics enough to spend their lives doing it and not at least take an attempt at it. Nobody ever found a proof, though. In a 1989 episode of Star Trek: The Next Generation, Captain Picard muses on how eight centuries after Fermat nobody’s proven his theorem. This struck me at the time as too pessimistic. Granted humans were stumped for 400 years. But for 800 years? And stumping everyone in a whole Federation of a thousand worlds? And more than a thousand mathematical traditions? And, for some of these species, tens of thousands of years of recorded history? … Still, there wasn’t much sign of anyone solving the problem. In 1992 Analog Science Fiction Magazine published a funny short-short story by Ian Randal Strock, “Fermat’s Legacy”. In it, Fermat — jealous of figures like René Descartes and Blaise Pascal who upstaged his mathematical accomplishments — jots down the note. He figures an unsupported claim like that will earn true lasting fame.

So that takes us to 1993, when the world heard about elliptic curves for the first time. Elliptic curves are neat things. They’re curves described by simple polynomial equations. They have some nice mathematical properties. People first noticed them in studying how long arcs of ellipses are. (This is why they’re called elliptic curves, even though most of them have nothing to do with any ellipse you’d ever tolerate in your presence.) They look ready to use for encryption. And in 1985, Gerhard Frey noticed something. Suppose you did have, for some ‘n’ bigger than 2, a solution $a^n + b^n = c^n$. Then you could use that a, b, and n to make a new elliptic curve. That curve is the one that satisfies $y^2 = x\cdot\left(x - a^n\right)\cdot\left(x + b^n\right)$. And then that elliptic curve would not be “modular”.

I would like to tell you what it means for an elliptic curve to be modular. But getting to that point would take at least four subsidiary essays. MathWorld has a description of what it means to be modular, and even links to explaining terms like “meromorphic”. It gets into exotic stuff quickly.

Frey didn’t show whether elliptic curves of this type had to be modular or not. This is normal enough, for mathematicians. You want to find things which are true and interesting. This includes conjectures like this one: that if elliptic curves are all modular, then Fermat’s Last Theorem has to be true. Frey was working on consequences of the Taniyama-Shimura Conjecture, itself three decades old at that point. Yutaka Taniyama and Goro Shimura had found there seemed to be a link between elliptic curves and these “modular forms”, which are functions with a rich supply of symmetries. That is, a group-theory thing.

So in fall of 1993 I was taking an advanced, though still undergraduate, course in (not-high-school) algebra at Rutgers. It’s where we learn group theory, after Intro to Algebra introduced us to group theory. Some exciting news came out. This fellow named Andrew Wiles at Princeton had shown an impressive bunch of things. Most important, that the Taniyama-Shimura Conjecture was true for semistable elliptic curves. This includes the kind of elliptic curve Frey made out of solutions to Fermat’s Last Theorem. So the curves based on solutions to Fermat’s Last Theorem would have to be modular. But Ken Ribet had shown, back in 1986, that any curve based on a solution to Fermat’s Last Theorem couldn’t be modular. The conclusion: there can’t be any solutions to Fermat’s Last Theorem. Our professor did his best to explain the proof to us. Abstract Algebra was the undergraduate course closest to the stuff Wiles was working on. It wasn’t very close. When you’re still trying to work out what it means for something to be an ideal it’s hard to even follow the setup of the problem. The proof itself was inaccessible.

Which is all right. Wiles’s original proof had some flaws. At least this mathematics major shrugged when that news came down and wondered, well, maybe it’ll be fixed someday. Maybe not. I remembered how exciting cold fusion was for about six weeks, too. But this someday didn’t take long. Wiles, with Richard Taylor, revised the proof and published about a year later. So far as I’m aware, nobody has any serious qualms about the proof.

So does knowing Fermat’s Last Theorem get us anything interesting? … And here is a sad anticlimax. It’s neat to know that $a^n + b^n = c^n$ can’t be true unless ‘n’ is 1 or 2, at least for positive whole numbers. But I’m not aware of any neat results that follow from that, or that would follow if it were untrue. There are results that follow from the Taniyama-Shimura Conjecture that are interesting, according to people who know them and don’t seem to be fibbing to me. But Fermat’s Last Theorem turns out to be a cute little aside.

Which is not to say studying it was foolish. This easy-to-understand, hard-to-solve problem certainly attracted talented minds to think about mathematics. Mathematicians found interesting stuff in trying to solve it. Some of it might be slight. I learned, studying Pythagorean triplets — ‘a’, ‘b’, and ‘c’ with $a^2 + b^2 = c^2$ — that I was not the infinitely brilliant mathematician at age fifteen I hoped I might be. Also that if ‘a’, ‘b’, and ‘c’ are relatively prime, you can’t have ‘a’ and ‘b’ both odd and ‘c’ even. You have to have ‘c’ and either ‘a’ or ‘b’ odd, with the other number even. Other mathematicians of more nearly infinite ability found stuff of greater import. Ernst Eduard Kummer in the 19th century developed ideals. These are an important piece of ring theory, and of abstract algebra generally. He developed them while proving special cases of Fermat’s Last Theorem.
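That parity fact holds up to brute-force checking. Here’s a sketch which searches every primitive triple with small ‘c’; the function name is my own:

```python
from math import gcd

def parity_ok(limit):
    """Check: in every primitive Pythagorean triple with c below limit,
    c is odd and exactly one of a, b is even."""
    for c in range(2, limit):
        for a in range(1, c):
            for b in range(a, c):
                if a*a + b*b == c*c and gcd(gcd(a, b), c) == 1:
                    if c % 2 == 0 or (a % 2) + (b % 2) != 1:
                        return False
    return True

print(parity_ok(200))  # True
```

A plain triple loop is slow but honest; generating triples with Euclid’s $a = m^2 - n^2$, $b = 2mn$, $c = m^2 + n^2$ formula would be faster but assumes part of what we’re checking.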

Kind viewers have tried to retcon Picard’s statement about Fermat’s Last Theorem. They say Picard was really searching for the proof Fermat had, or believed he had. Something using the mathematical techniques available to the early 17th century. Or that follow closely enough from that. The Taniyama-Shimura Conjecture definitely isn’t it. I don’t buy the retcon, but I’m willing to play along for the sake of not causing trouble. I suspect there’s not a proof of the general case that uses anything Fermat could have recognized, or thought he had. That’s all right. The search for a thing can be useful even if the thing doesn’t exist.

## Reading the Comics, October 6, 2018: Curve Edition

There’s three more comics from last week I want to talk about. To ease my workload I’m going to put those off until Saturday. This is not an attempt to inflate the number of posts I make so that I can do a post-a-day-for-a-month again, as has happened in previous A-to-Z series. I already missed yesterday anyway. I just didn’t have time to think of things to write about six comics yesterday.

Morrie Turner’s Wee Pals for the 3rd has an interesting description of a circle. Definitions are a big part of mathematical work. This is especially so as we tend to think of mathematical objects as things that relate to one another in different ways. You want a definition that includes the relationships that are important, and excludes the ones you don’t want.

Nipper’s definition of a circle … well, eh. I wouldn’t say that captures a circle. A ‘closed smooth curve’, yes. It’s closed because the ends join up. It’s smooth because there aren’t any corners, any kinks in it. It’s a curve because … well, there you go. There are many interesting shapes that are closed smooth curves. You can find some by tossing a rubber band in the air and seeing what it looks like when it lands. But I think what most people find important about circles are ideas like all the points on a curve being the same distance from some single “center” point. Nipper would probably realize his definition didn’t work by experimenting. Try drawing shapes that meet the rule he set out, but that aren’t what he thinks a circle ought to be.

This can be fruitful. It can develop a sharper idea of what a definition ought to have. Or it might force you to accept, in order to get the cases you want included, that something which seems wrong has to count too. This is a dilemma mathematicians faced in the late 19th and early 20th centuries. We learned that the best definition we’ve had for an idea like “a continuous function” means we have to allow weird conclusions, like that it’s possible to have a function continuous at a single point and nowhere else. But any other definition rules out things we absolutely have to call continuous, so, what’s there to do?
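A standard example of that weird conclusion — my choice of example, not anything from the strip — is the function that copies its input at rational numbers and is zero everywhere else:

```latex
f(x) =
\begin{cases}
  x & \text{if } x \text{ is rational} \\
  0 & \text{if } x \text{ is irrational}
\end{cases}
```

Near zero, $|f(x)| \le |x|$, so the function is continuous at zero. At any other point, rational inputs pull the function’s value toward that point while irrational inputs hold it at zero, so the function can’t be continuous anywhere else.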

Jenny Campbell’s Flo and Friends for the 4th presents algebra as one of the burdens of youth. And one that’s so harsh that it makes old age more pleasant. I get the unpleasantness of being stuck in a class one doesn’t understand or like. But my own slight experience with that thing where you wake up, and a thing hurts, and there’s no good reason but eventually it either goes away or you get so used to it you don’t realize it still actually hurts? I would take the boring class, most of the time.

John Zakour and Scott Roberts’s Maria’s Day for the 4th is a joke about how hard mathematics is. Maria’s finding the monsters in her room less frightening than arithmetic. Well, as long as she’s picking up a couple useful things about multiplication.

I do at least one Reading the Comics post per week, and often two. They’ll be at this link. Other appearances by Wee Pals should be at this link. Topics raised by Flo and Friends are discussed at this link. And essays mentioning Maria’s Day are at this link. Thanks as ever for reading. I’m trusting that you did, or you wouldn’t be seeing this.

## My 2018 Mathematics A To Z: e

I’m back to requests! Today’s comes from commenter Dina Yagodich. I don’t know whether Yagodich has a web site, YouTube channel, or other mathematics-discussion site, but am happy to pass along word if I hear of one.

# e.

Let me start by explaining integral calculus in two paragraphs. One of the things done in it is finding a ‘definite integral’. This is itself a function. The definite integral has as its domain the combination of a function, plus some boundaries, and its range is numbers. Real numbers, if nobody tells you otherwise. Complex-valued numbers, if someone says it’s complex-valued numbers. Yes, it could have some other range. But if someone wants you to do that they’re obliged to set warning flares around the problem and precede and follow it with flag-bearers. And you get at least double pay for the hazardous work. The function that gets definite-integrated has its own domain and range. The boundaries of the definite integral have to be within the domain of the integrated function.

For real-valued functions this definite integral has a great physical interpretation. A real-valued function means the domain and range are both real numbers. You see a lot of these. Call the function ‘f’, please. Call its independent variable ‘x’ and its dependent variable ‘y’. Using Euclidean coordinates, or as normal people call it “graph paper”, draw the points that make true the equation “y = f(x)”. Then draw in the x-axis, that is, the points where “y = 0”. The boundaries of the definite integral are going to be two values of ‘x’, a lower and an upper bound. Call that lower bound ‘a’ and the upper bound ‘b’. And heck, call that a “left boundary” and a “right boundary”, because … I mean, look at them. Draw the vertical line at “x = a” and the vertical line at “x = b”. If ‘f(x)’ is always a positive number, then there’s a shape bounded below by “y = 0”, on the left by “x = a”, on the right by “x = b”, and above by “y = f(x)”. And the definite integral is the area of that enclosed space. If ‘f(x)’ is sometimes zero, then there’s several segments, but their combined area is the definite integral. If ‘f(x)’ is sometimes below zero, then there’s several segments. The definite integral is the sum of the areas of parts above “y = 0” minus the area of the parts below “y = 0”.
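That signed-area picture can be checked numerically. A sketch using a midpoint Riemann sum — crude, and all names mine, but enough to see the idea:

```python
def definite_integral(f, a, b, n=10_000):
    """Approximate the definite integral of f from a to b by a
    midpoint Riemann sum: n thin rectangles, with any area below
    y = 0 counted as negative."""
    width = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * width  # midpoint of the i-th strip
        total += f(x) * width
    return total

# f(x) = x from -1 to 2: area above the axis (2) minus area below (1/2).
print(round(definite_integral(lambda x: x, -1.0, 2.0), 6))  # 1.5
```

The function dips below zero between $x = -1$ and $x = 0$, and that triangle of area one-half gets subtracted, just as the paragraph above describes.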

(Why say “left boundary” instead of “lower boundary”? Taste, pretty much. But I look at the words “lower boundary” and think about the lower edge, that is, the line where “y = 0” here. And “upper boundary” makes sense as a way to describe the curve where “y = f(x)” as well as “x = b”. I’m confusing enough without making the simple stuff ambiguous.)

Don’t try to pass your thesis defense on this alone. But it’s what you need to understand ‘e’. Start out with the function ‘f’, which has domain of the positive real numbers and range of the positive real numbers. For every ‘x’ in the domain, ‘f(x)’ is the reciprocal, one divided by x. This is a shape you probably know well. It’s a hyperbola. Its asymptotes are the x-axis and the y-axis. It’s a nice gentle curve. Its plot passes through such famous points as (1, 1), (2, 1/2), (1/3, 3), and pairs like that. (10, 1/10) and (1/100, 100) too. ‘f(x)’ is always positive on this domain. Use as left boundary the line “x = 1”. And then — let’s think about different right boundaries.

If the right boundary is close to the left boundary, then this area is tiny. If it’s at, like, “x = 1.1” then the area can’t be more than 0.1. (It’s less than that. If you don’t see why that’s so, fit a rectangle of height 1 and width 0.1 around this curve and these boundaries. See?) But if the right boundary is farther out, this area is more. It’s getting bigger if the right boundary is “x = 2” or “x = 3”. It can get bigger yet. Give me any positive number you like. I can find a right boundary so the area inside this is bigger than your number.

Is there a right boundary where the area is exactly 1? … Well, it’s hard to see how there couldn’t be. If a quantity (“area between x = 1 and x = b”) changes from less than one to greater than one, it’s got to pass through 1, right? … Yes, it does, provided some technical points are true, and in this case they are. So that’s nice.

And there is. It’s a number (settle down, I see you quivering with excitement back there, waiting for me to unveil this) a slight bit more than 2.718. It’s a neat number. Carry it out a couple more digits and it turns out to be 2.718281828. So it looks like a great candidate to memorize. It’s not. It’s an irrational number. The digits go off without repeating or falling into obvious patterns after that. It’s a transcendental number, meaning it’s not the root of any polynomial with rational coefficients. Nobody knows whether it’s a normal number, because remember, a normal number is just any real number that you never heard of. To be a normal number, every finite string of digits has to appear in the decimal expansion, just as often as every other string of digits of the same length. We can show by clever counting arguments that almost every number is normal. The trick is it’s hard to show that any particular number is.
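The hunt for that right boundary can be retraced numerically: a midpoint Riemann sum for the area, and bisection to find where the area hits 1. This is not how anyone computes ‘e’ seriously; it’s a sketch of the argument above, with names of my own choosing.

```python
def area_under_reciprocal(left, right, n=20_000):
    """Midpoint Riemann sum for the area under f(x) = 1/x."""
    width = (right - left) / n
    return sum(width / (left + (i + 0.5) * width) for i in range(n))

# Bisect for the right boundary b where the area from x = 1 equals 1.
lo, hi = 2.0, 3.0
for _ in range(40):
    mid = (lo + hi) / 2
    if area_under_reciprocal(1.0, mid) < 1.0:
        lo = mid   # area still too small; push the boundary right
    else:
        hi = mid   # area too big; pull the boundary left
print(round((lo + hi) / 2, 6))  # 2.718282
```

Because the area grows steadily as the right boundary moves out, bisection has to close in on a single answer, which is the intermediate-value argument in code form.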

So let me do another definite integral. Set the left boundary to this “x = 2.718281828(etc)”. Set the right boundary a little more than that. The enclosed area is less than 1. Set the right boundary way off to the right. The enclosed area is more than 1. What right boundary makes the enclosed area ‘1’ again? … Well, that will be at about “x = 7.389”. That is, at the square of 2.718281828(etc).

Repeat this. Set the left boundary at “x = (2.718281828etc)^2”. Where does the right boundary have to be so the enclosed area is 1? … Did you guess “x = (2.718281828etc)^3”? Yeah, of course. You know my rhetorical tricks. What do you want to guess the area is between, oh, “x = (2.718281828etc)^3” and “x = (2.718281828etc)^5”? (Notice I put a ‘5’ in the exponent there.)
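Those claims check out numerically, too. (And if you guessed the last area is 2, good.) A sketch, with the same sort of crude Riemann sum:

```python
e = 2.718281828459045

def area(left, right, n=20_000):
    """Midpoint Riemann sum for the area under f(x) = 1/x."""
    width = (right - left) / n
    return sum(width / (left + (i + 0.5) * width) for i in range(n))

print(round(area(1, e), 6))        # 1.0
print(round(area(e, e**2), 6))     # 1.0
print(round(area(e**3, e**5), 6))  # 2.0
```

Each jump of the boundaries by a factor of 2.718281828(etc) adds exactly one unit of area, which is the property that makes the number worth naming.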

Now, relationships like this will happen with other functions, and with other left- and right-boundaries. But if you want it to work with a function whose rule is as simple as “f(x) = 1 / x”, and areas of 1, then you’re going to end up noticing this 2.718281828(etc). It stands out. It’s worthy of a name.

Which is why this 2.718281828(etc) is a number you’ve heard of. It’s named ‘e’. Leonhard Euler, whom you will remember as having written or proved the fundamental theorem for every area of mathematics ever, gave it that name. He used it first when writing for his own work. Then (in November 1731) in a letter to Christian Goldbach. Finally (in 1736) in his textbook Mechanica. Everyone went along with him because Euler knew how to write about stuff, and how to pick symbols that worked for stuff.

Once you know ‘e’ is there, you start to see it everywhere. In Western mathematics it seems to have been first noticed by Jacob (I) Bernoulli, who noticed it in toy compound interest problems. (Given this, I’d imagine it has to have been noticed by the people who did finance. But I am ignorant of the history of financial calculations. Writers of the kind of pop-mathematics history I read don’t notice them either.) Bernoulli and Pierre Raymond de Montmort noticed the reciprocal of ‘e’ turning up in what we’ve come to call the ‘hat check problem’. A large number of guests all check one hat each. The person checking hats has no idea who anybody is. What is the chance that nobody gets their correct hat back? … That chance is the reciprocal of ‘e’. The number’s about 0.368. In a connected but not identical problem, suppose something has one chance in some number ‘N’ of happening each attempt. And it’s given ‘N’ attempts to happen. What’s the chance that it doesn’t happen? The bigger ‘N’ gets, the closer the chance it doesn’t happen gets to the reciprocal of ‘e’.
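Both chances are easy to check. A sketch — the guest count, trial count, and seed are arbitrary choices of mine:

```python
import random

def hat_check_estimate(guests=20, trials=100_000, seed=1731):
    """Estimate the chance that handing hats back at random
    gives nobody their own hat."""
    rng = random.Random(seed)
    nobody_right = 0
    for _ in range(trials):
        hats = list(range(guests))
        rng.shuffle(hats)
        if all(hats[i] != i for i in range(guests)):
            nobody_right += 1
    return nobody_right / trials

print(hat_check_estimate())          # close to 1/e, about 0.368
print(round((1 - 1/1000)**1000, 3))  # 0.368: one-in-N chance, missed N times
```

The simulation and the one-in-N formula land on the same number, the reciprocal of ‘e’, from two different directions.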

It comes up in peculiar ways. In high school or freshman calculus you see it defined as what you get if you take $\left(1 + \frac{1}{x}\right)^x$ for ever-larger real numbers ‘x’. (This is the toy-compound-interest problem Bernoulli found.) But you can find the number other ways. You can calculate it — if you have the stamina — by working out the value of

$1 + 1 + \frac12\left( 1 + \frac13\left( 1 + \frac14\left( 1 + \frac15\left( 1 + \cdots \right)\right)\right)\right)$

There’s a simpler way to write that. There always is. Take all the nonnegative whole numbers — 0, 1, 2, 3, 4, and so on. Take their factorials. That’s 1, 1, 2, 6, 24, and so on. Take the reciprocals of all those. That’s … 1, 1, one-half, one-sixth, one-twenty-fourth, and so on. Add them all together. That’s ‘e’.
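Both routes to the number — the sum of reciprocal factorials and the compound-interest limit — can be checked in a few lines:

```python
import math

# Sum the reciprocals of the factorials: 1/0! + 1/1! + 1/2! + ...
# Twenty terms is already far more precision than a float can hold.
total = sum(1 / math.factorial(k) for k in range(20))
print(round(total, 12))  # 2.718281828459

# The compound-interest limit creeps toward the same number.
print((1 + 1 / 1_000_000) ** 1_000_000)  # 2.71828..., a little low
```

The factorial series converges startlingly fast; the compound-interest limit, by contrast, is still off in the sixth decimal place even with a million compoundings.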

This ‘e’ turns up all the time. Any system whose rate of growth depends on its current value has an ‘e’ lurking in its description. That’s true if it declines, too, as long as the decline depends on its current value. It gets stranger. Cross ‘e’ with complex-valued numbers and you get, not just growth or decay, but oscillations. And many problems that are hard to solve to start with become doable, even simple, if you rewrite them as growths and decays and oscillations. Through ‘e’ problems too hard to do become problems of polynomials, or even simpler things.

Simple problems become that too. That property about the area underneath “f(x) = 1/x” between “x = 1” and “x = b” makes ‘e’ such a natural base for logarithms that we call it the base for natural logarithms. Logarithms let us replace multiplication with addition, and division with subtraction, easier work. They change exponentiation problems to multiplication, again easier. It’s a strange touch, a wondrous one.

There are some numbers interesting enough to attract books about them. π, obviously. 0. The base of imaginary numbers, $\imath$, has a couple. I only know one pop-mathematics treatment of ‘e’, Eli Maor’s e: The Story Of A Number. I believe there’s room for more.

Oh, one little remarkable thing that’s of no use whatsoever. Mathworld’s page about approximations to ‘e’ mentions this. Work out, if you can coax your calculator into letting you do this, the number:

$\left(1 + 9^{-(4^{(42)})}\right)^{\left(3^{(2^{85})}\right)}$

You know, the way anyone’s calculator will let you raise 2 to the 85th power. And then raise 3 to whatever number that is. Anyway. The digits of this will agree with the digits of ‘e’ for the first 18,457,734,525,360,901,453,873,570 decimal digits. One Richard Sabey found that, by what means I do not know, in 2004. (Why it approximates ‘e’ at all is less mysterious: $9^{4^{42}}$ is the same number as $3^{2^{85}}$, so the whole expression is $\left(1 + \frac{1}{n}\right)^n$ for an enormous ‘n’.) The page linked there includes a bunch of other, no less amazing, approximations to numbers like ‘e’ and π and the Euler-Mascheroni Constant.

## How September 2018 Treated My Mathematics Blog, Finally

I like to do my monthly recap of my readership, like, at the start of the month. It’s just that between the Carnival, the A-to-Z, Reading the Comics, and my being busy on Friday I didn’t have the time before now. I’d say that it doesn’t matter because these statistics-review posts are mostly for my own entertainment. But I do feel there’s something untidy in my being a week late.

It was a well-read month! My second-most-read month, if I’m not missing something. WordPress reports 1,505 page views, up from August’s 1,421 and July’s 1,058. (The highest I have on record is March’s 1,779 page views.) Third-highest number of unique visitors, as well: 874 of them. August had 913 unique visitors. July a mere 668, but that was a more normal month. (March had 999 unique visitors and yes, it still burns me up that I didn’t have just the one more.)

The number of ‘likes’ around here rose, to 65. Had been 57 in August and 37 in July. That’s still tiny, though, compared to what was normal around here even a year ago (98 in September 2017, and that was down from all of 2016). The number of comments was up, to 36 from August’s 27 and July’s 28. But the number of comments around here is so erratic that I’ve mostly given up on figuring any kind of pattern.

The top most popular articles for September were one perennial, one that I’d have expected to be a perennial (and was the number-two post last month), comics, and the carnival:

I have the suspicion that the Playful Mathematics Education Blog Carnival #121 post will be most popular next month too. And that only my publishing it the last day of September kept it from being on top for that month too.

52 countries sent me any readers at all in August. 16 of them were single-reader countries. The same numbers accurately described countries and single-reader countries for July. And for September? … Here’s the roster.

United States 885
Philippines 149
United Kingdom 64
India 63
Australia 34
Turkey 23
Singapore 22
Germany 21
Denmark 16
Slovenia 16
European Union 12
France 8
South Africa 8
Switzerland 8
Brazil 7
Netherlands 7
Pakistan 7
Brunei 6
Kazakhstan 6
Ghana 5
Puerto Rico 5
Russia 5
Spain 5
Egypt 4
Ireland 4
Malaysia 4
Sweden 4
Taiwan 4
Thailand 4
Austria 3
Finland 3
Morocco 3
Norway 3
Poland 3
South Korea 3
Belize 2
Greece 2
Iceland 2
Indonesia 2
Jamaica 2
Kenya 2
Mexico 2
Vietnam 2
Algeria 1
Argentina 1 (*)
Belgium 1
Chile 1 (*)
Colombia 1 (*)
Croatia 1
Czech Republic 1
Guam 1
Israel 1
Kuwait 1
New Zealand 1
Saudi Arabia 1
Slovakia 1
United Arab Emirates 1

So that’s 58 countries total, with only 14 of them single-readers. Argentina, Chile, and Colombia were single-reader countries in August; nobody else was. No countries are on a longer than two-month streak. I don’t think I’ve ever seen more than a hundred readers from the Philippines. Also 12 readers listed as from the European Union, distinct from the countries participating in it, seems unusually many.

According to the Insights panel I’d had 119 posts this year, right before October began. And had gathered 67,885 page views. This would be from a total of 33,475 acknowledged unique visitors. (My blog started before WordPress told us anything about unique visitors.) As of the start of October, there had been 313 total comments. This makes an average of 2.6 comments per post. At the start of August there had been 2.6 comments per post on average. But remember that was fifteen fewer posts. At the end of September I’d gotten 696 total likes, for an average of 5.8 likes per post. That’s down from 6.0 at the end of August. By the end of September I’d had a total of 112,648 words posted around here. 15,014 of them were posted in September. Since there were fifteen posts altogether that’s an average of 1000.1 words per post in September. For the year, through the end of September, that’s 946.6 words per post this year. At the end of August that had been 930 words per post. So as ever, my attempts to write more quick, simple, short things that don’t wear me out has failed. And I have the rest of an A to Z to write, too! I’m so doomed.

## Reading the Comics, October 2, 2018: Frazz Loves Mathematics Edition

Jef Mallett’s Frazz did its best to take over my entire Reading-the-Comics bit this week. I won’t disrespect his efforts, especially as I take the viewpoint of the strip to be that arithmetic is a good thing to learn. Meanwhile let me offer another mention of Playful Mathematics Education Blog Carnival #121, hosted here last week. And to point out the Fall 2018 Mathematics A To Z continues this week with the letters ‘E’ and ‘F’. And I’m still looking for topics to discuss for select letters between H and M yet.

Sandra Bell-Lundy’s Between Friends for the 1st is a Venn Diagram joke to start off the week. The form looks wrong, though. This can fool the reader into thinking the cartoonist messed up the illustration. Here’s why. The point of a Venn Diagram is to show the two or more groups of things and identify what they have in common. It is true that any life will have regrets about things done. And regrets about things not done. But what are the things that one both ‘did do’ and ‘didn’t do’? Unless you accept the weasel-wording of “did halfheartedly”, there is nothing that one both did and did not do.

And here is where I will argue Bell-Lundy did this right. The overlap of things one ‘did do’ and ‘didn’t do’ must be empty. Do not be fooled by there being area in common in the overlap. One thing Venn Diagrams help us establish are the different kinds of things we are studying, and to work out whether that kind of thing can have any examples. And if the set of things in your life that you regret is empty — well! Is it not “living your best life”, as the caption advances, to have nothing one regrets doing, and nothing one regrets not doing? Thus I say to you the jury of readers, Sandra Bell-Lundy has correctly used the Venn Diagram form to make a “No Regrets” art.

That said, I can’t explain why the protagonist on the left is slumping and looking depressed. I suppose we have to take that she hasn’t lived her best life, but does have information about what might have been.

Jef Mallett’s Frazz for the 1st starts a string of mathematics class jokes. Here is one about story problems, particularly ones about pricing apples and groups of apples. I don’t know why apples are so often used as story problem examples. They seem like good example objects. They’re reasonably familiar. A person can have up to several dozen of them without it being ridiculously many. (Count a half-bushel of apples sometime.) You can imagine dividing them among people or tasks. You can even imagine halving and quartering them without getting ridiculous. Great set of traits. But the kid has overlooked that if Mrs Olsen wanted the price of an apple she would just look at the price sign.

(Every time I’m at the market I mean to check the apple prices, and I do, and I forget the total on the way out. I mention this because I live in the same area as Jef Mallett. So there is a small but not-ridiculous chance he and I have bought apples from the same place. If he has a strip mentioning the place with the free coffee, popcorn, and gelato samples I’ll know to my satisfaction.)

Jef Mallett’s Frazz for the 2nd has a complaint about having to show one’s work. But as with apple prices, we don’t really care whether someone has the right answer. We care whether they have the right method for finding an answer. Or, better, whether they have a method that could plausibly find the right answer, and an idea of how to check whether they did get it. This is why it’s worth, for example, working out a rough expected answer before doing a final calculation.

The talk about flight paths reminds me of a story passed around sci.space.history back in the day. The story is about development of the automatic landing computers used for the Apollo Missions. The guidance computers were programmed to get the lunar module from this starting point to a final point on the lunar surface. This turns into a question of polynomial interpolation. That’s coming up with a curve that fits some data points, particularly, the positions and velocities the last couple times those were known plus the intended landing position. You can always find a polynomial that passes smoothly through a finite bunch of data points. That’s not hard. But, allegedly, the guidance computer would project paths where the height above the lunar surface was negative for a while. Numerically, there’s nothing wrong with a negative number. It’s just got some practical problems, as the earliest Apollo missions were before any subway tunnels could be built.
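The interpolation idea is easy to play with. Here is a minimal sketch of fitting a cubic (Hermite) curve to a made-up starting height and descent rate, with a soft landing at the end; the numbers are mine for illustration, not anything from Apollo. The smooth curve happily dips below the surface on its way down.

```python
# Hedged sketch: cubic Hermite interpolation for a landing path.
# The numbers are invented for illustration, not actual Apollo data.

def hermite_height(h0, v0, T, t):
    """Cubic satisfying h(0)=h0, h'(0)=v0, h(T)=0, h'(T)=0 (soft landing)."""
    s = t / T
    basis_h0 = 2*s**3 - 3*s**2 + 1   # worth 1 at s=0 and 0 at s=1
    basis_v0 = s**3 - 2*s**2 + s     # carries the starting velocity
    return h0 * basis_h0 + v0 * T * basis_v0

# Start 100 meters up, descending at 500 meters per unit time.
samples = [hermite_height(100, -500, 1, t / 10) for t in range(11)]
print(min(samples))   # negative: the smooth path tunnels below the surface
```

The endpoints behave perfectly; it is only the interior of the curve that goes subterranean.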

Jef Mallett’s Frazz for the 3rd continues the protest against showing one’s work. I do like the analogy of arithmetic skills for mathematics being like spelling skills for writing. You can carry on without these skills, for either mathematics or writing. But knowing them makes your life easier. And enjoying these building-block units foreshadows enjoying the whole. But yeah, addition and multiplication tables can look like tedium if you don’t find something at least a little thrilling in how, say, 9 times 7 is 63.

Tim Lachowski’s Get a Life for the 2nd is a bit of mathematics wordplay. So that closes the essay out well.

Thanks for reading Reading the Comics. Other comic strip review essays are at this link. More essays with Between Friends should be at this link. Other essays with Frazz in them are at this link. And appearances by Get A Life should be at this link.

## Some More Mathematics I’ve Been Reading, 6 October 2018

I have a couple links I’d not included in the recent Playful Mathematics Education Blog Carnival. Looking at them, I can’t say why.

The top page of this asks, with animated text, whether you want to see something amazing. Forgive its animated text. It does do something amazing. This paper by Javier Cilleruelo, Florian Luca, and Lewis Baxter proves that every positive whole number is the sum of at most three palindromic numbers. The web site, by Mathstodon host Christian Lawson-Perfect, demonstrates it. Enter a number and watch the palindromes appear and add up.
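If you’d like the result without the animated text, here is a little brute-force search of my own — not the paper’s constructive algorithm — for three palindromes summing to a given number:

```python
# My own brute-force search, not the paper's constructive algorithm:
# find at most three palindromes summing to n.

def is_pal(n):
    s = str(n)
    return s == s[::-1]

def three_palindromes(n):
    pals = [p for p in range(1, n + 1) if is_pal(p)]
    pal_set = set(pals)
    if n in pal_set:
        return [n]
    for a in pals:
        for b in pals:
            rest = n - a - b
            if rest < 0:
                break
            if rest == 0:
                return [a, b]
            if rest in pal_set:
                return [a, b, rest]
    return None   # the theorem says this never happens for positive n

print(three_palindromes(2018))   # prints a decomposition such as [1, 686, 1331]
```

The double loop is slow for large numbers; the paper’s construction does much better, but this is enough to watch the theorem work.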

Next bit is an article that relates to my years-long odd interest in pasta making. Mathematicians solve age-old spaghetti mystery reports a group of researchers at MIT — the renowned “Rensselaer Polytechnic Institute of Boston” [*] — studying why dry spaghetti fractures the way it does. Like many great problems, it sounds ridiculous to study at first. Who cares why, basically, you can’t snap a dry spaghetti strand in two equal pieces by bending it at the edges? The problem has familiarity to it and seems to have little else. But then you realize this is a matter of how materials work, and how they break. And realize it’s a great question. It’s easy to understand and subtle to solve.

And then, how about quaternions? Everybody loves quaternions. Well, @SheckyR here links to an article from Thatsmath.com, The Many Modern Uses of Quaternions. It’s some modern uses anyway. The major uses for quaternions are in rotations. They’re rather good at representing rotations. And they’re really good at representing doing several rotations, along different axes, in a row.
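To give a taste of that, here is a hedged sketch — the helper names are my own — of composing two rotations by multiplying their quaternions, then applying the product to a vector:

```python
import math

# Sketch: quaternions as (w, x, y, z) tuples; composing rotations is
# multiplying the quaternions.  Helper names are my own invention.

def qmult(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotation(axis, angle):
    """Unit quaternion for a rotation by `angle` about a unit `axis`."""
    h = angle / 2
    return (math.cos(h), *(math.sin(h) * c for c in axis))

def rotate(q, v):
    """Apply the rotation q to the vector v, via q v q*."""
    conj = (q[0], -q[1], -q[2], -q[3])
    return qmult(qmult(q, (0, *v)), conj)[1:]

qz = rotation((0, 0, 1), math.pi / 2)   # 90 degrees about the z-axis
qx = rotation((1, 0, 0), math.pi / 2)   # 90 degrees about the x-axis
combined = qmult(qx, qz)                # first the z rotation, then the x
print(rotate(combined, (1, 0, 0)))      # the x-axis ends up along +z
```

One multiplication and the two rotations become a single quaternion, which is the whole appeal.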

The article finishes with (as teased in the tweet above) a report of an electric toothbrush that should keep track of positions inside the user’s head, even as the head rotates. This is intriguing. I say as a person who’s reluctantly started using an electric toothbrush. I’m one of those who brushes, manually, too hard, to the point of damaging my gums. The electric toothbrush makes that harder to do. I’m not sure how an orientation-aware electric toothbrush will improve the situation any, but I’m open-minded.

[*] I went to graduate school at Rensselaer Polytechnic Institute, the “RPI of New York”. The school would be a rival to MIT if RPI had any self-esteem. I’m guessing, as I never went to a school that had self-esteem.

## My 2018 Mathematics A To Z: Distribution (probability)

Today’s term ended up being a free choice. Nobody found anything appealing in the D’s to ask about. That’s all right.

I’m still looking for topics for the letters G through M, excluding L, if you’d like in on those letters.

And for my own sake, please check out the Playful Mathematics Education Blog Carnival, #121, if you haven’t already.

# Distribution (probability).

I have to specify. There’s a bunch of mathematics concepts called ‘distribution’. Some of them are linked. Some of them are just called that because we don’t have a better word. Like, what else would you call multiplying something through a sum, term by term? I want to describe the distribution that comes to us in probability and in statistics. Through those fields it runs into modern physics, as well as truly difficult sciences like sociology and economics.

We get to distributions through random variables. These are variables that might be any one of multiple possible values. There might be as few as two options. There might be a finite number of possibilities. There might be infinitely many. They might be numbers. At the risk of sounding unimaginative, they often are. We’re always interested in measuring things. And we’re used to measuring them in numbers.

What makes random variables hard to deal with is that, if we’re playing by the rules, we never know what it is. Once we get through (high school) algebra we’re comfortable working with an ‘x’ whose value we don’t know. But that’s because we trust that, if we really cared, we would find out what it is. Or we would know that it’s a ‘dummy variable’, whose value is unimportant but gets us to something that is. A random variable is different. Its value matters, but we can’t know what it is.

Instead we get a distribution. This is a function which gives us information about what the outcomes are, and how likely they are. There are different ways to organize this data. If whoever’s talking about it doesn’t say just what they’re doing, bet on it being a “probability distribution function”. This follows slightly different rules based on whether the range of values is discrete or continuous, but the idea is roughly the same. Every possible outcome has a probability at least zero but not more than one. The total probability over every possible outcome is exactly one. There’s rules about the probability of two distinct outcomes happening. Stuff like that.
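Those rules are easy to check for a small case. A sketch, using the distribution of the total of two fair dice:

```python
from fractions import Fraction

# Sketch: the distribution for the total of two fair dice, checked
# against the rules above.

dist = {}
for a in range(1, 7):
    for b in range(1, 7):
        dist[a + b] = dist.get(a + b, Fraction(0)) + Fraction(1, 36)

assert all(0 <= p <= 1 for p in dist.values())  # every probability in [0, 1]
assert sum(dist.values()) == 1                  # total probability exactly 1
print(dist[7])   # the most likely total, with probability 1/6
```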

Distributions are interesting enough when they’re about fixed things. In learning probability this is stuff like hands of cards or totals of die rolls or numbers of snowstorms in the season. Fun enough. These get to be more personal when we take a census, or otherwise sample things that people do. There’s something wondrous in knowing that while, say, you might not know how long a commute your neighbor has, you know there’s an 80 percent chance it’s between 15 and 25 minutes (or whatever). It’s also good for urban planners to know.

It gets exciting when we look at how distributions can change. It’s hard not to think of that as “changing over time”. (You could make a fair argument that “change” is “time”.) But it doesn’t have to. We can take a function with a domain that contains all the possible values in the distribution, and a range that’s something else. The image of the distribution is some new distribution. (Trusting that the function doesn’t do something naughty.) These functions — these mappings — might reflect nothing more than relabelling, going from (say) a distribution of “false and true” values to one of “-5 and 5” values instead. They might reflect regathering data; say, going from the distribution of a die’s outcomes of “1, 2, 3, 4, 5, or 6” to something simpler, like, “less than two, exactly two, or more than two”. Or they might reflect how something does change in time. They’re all mappings; they’re all ways to change what a distribution represents.
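The die example can be written out directly. This sketch — the function names are mine — pushes the distribution over 1 through 6 forward to the coarser “less than two / exactly two / more than two” one:

```python
from fractions import Fraction

# Sketch (function names mine): mapping a die's distribution onto a
# coarser set of outcomes.

die = {k: Fraction(1, 6) for k in range(1, 7)}

def pushforward(dist, f):
    """Distribution of f(X), given the distribution of X."""
    out = {}
    for value, p in dist.items():
        out[f(value)] = out.get(f(value), Fraction(0)) + p
    return out

def regroup(n):
    if n < 2:
        return "less than two"
    if n == 2:
        return "exactly two"
    return "more than two"

coarse = pushforward(die, regroup)
print(coarse["more than two"])   # 4 of the 6 faces, so 2/3
```

The same `pushforward` handles relabelling (false/true to -5/5) just as well; only the function changes.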

These mappings turn up in statistical mechanics. Processes will change the distribution of positions and momentums and electric charges and whatever else the things moving around do. It’s hard to learn. At least my first instinct was to try to warm up to it by doing a couple test cases. Pick specific values for the random variables and see how they change. This can help build confidence that one’s calculating correctly. Maybe give some idea of what sorts of behaviors to expect.

But it’s calculating the wrong thing. You need to look at the distribution as a specific thing, and how that changes. It’s a change of view. It’s like the change in view from thinking of a position as an x- and y- and maybe z-coordinate to thinking of position as a vector. (Which, I realize now, gave me slightly similar difficulties in thinking of what to do for any particular calculation.)

Distributions can change in time, just the way that — in simpler physics — positions might change. Distributions might stabilize, forming an equilibrium. This can mean that everything’s found a place to stop and rest. That will never happen for any interesting problem. What you might get is an equilibrium like the rings of Saturn. Everything’s moving, everything’s changing, but the overall shape stays the same. (Roughly.)

There are many specifically named distributions. They represent patterns that turn up all the time. The binomial distribution, for example, which represents what to expect if you have a lot of examples of something that can be one of two values each. The Poisson distribution, for representing how likely something that could happen any time (or any place) will happen in a particular span of time (or space). The normal distribution, also called the Gaussian distribution, which describes everything that isn’t trying to be difficult. There are like 400 billion dozen more named ones, each really good at describing particular kinds of problems. But they’re all distributions.
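The named ones have tidy formulas. Here are quick sketches of the binomial and Poisson probability functions, written straight from their standard definitions:

```python
import math

# Sketches of two named distributions, from their standard formulas.

def binomial_pmf(n, p, k):
    """Chance of exactly k successes in n tries, each with probability p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    """Chance of exactly k events when lam events are expected."""
    return lam**k * math.exp(-lam) / math.factorial(k)

print(binomial_pmf(10, 0.5, 5))                      # 252/1024 = 0.24609375
print(sum(poisson_pmf(3.0, k) for k in range(60)))   # totals (essentially) 1
```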

## Reading the Comics, September 29, 2018: Vintage Comics Edition

Four more comics from last week struck me as worth mentioning. Two of them are over sixty years old.

Incidentally, Walt Kelly’s Pogo first appeared in the newspaper seventy years ago today. I don’t know anyone rerunning the comics the way Skippy or Thimble Theatre (Popeye) or the like are, which is a shame. (Few if any strips would be on-point around here, but it’s still worth reading.) But I did think some of the folks around here would like to know.

Percy Crosby’s Skippy for the 25th is a vintage-1931 strip about the miseries of learning arithmetic. Skippy’s scheme for the two of them to improve by copying one another’s 50-percent-right papers is not necessarily a bad one. It depends on a couple things to work. For example, do they both get the same questions wrong? Possibly; it’d be natural for both students to do worse on the harder questions. But suppose that the questions Skippy and Sooky get wrong are independent of one another. That is, knowing that Skippy got a question right doesn’t affect our estimate of the probability that Sooky got that question right. In that case, we’d expect them both to be right on about 25 percent of the questions. And at least one of them to be right on about 75 percent of the questions. So, if they could copy the right answers, they could each get about a 25-point improvement. That’s pretty good.
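If you don’t trust the expected values, a quick simulation backs them up. Two students each answer each question right with independent probability one-half:

```python
from random import random, seed

# A quick check of the independence reasoning above: each student
# answers each question right with independent probability 1/2.

seed(1931)   # the strip's year, for reproducibility
TRIALS = 100_000
both = either = 0
for _ in range(TRIALS):
    skippy, sooky = random() < 0.5, random() < 0.5
    both += skippy and sooky
    either += skippy or sooky

print(both / TRIALS)    # about 0.25: questions they both get right
print(either / TRIALS)  # about 0.75: questions at least one gets right
```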

Telling which are the right answers is hard. But, it’s typically easier to check whether an answer is right than it is to find an answer. Arithmetic is a point where this might not be usefully so. You can verify that 25 – 17 is indeed 8 by trying to calculate 17 + 8. But I don’t know that one equation is easier than the other.

Gene Weingarten, Dan Weingarten, and David Clark’s Barney and Clyde for the 26th is a percentages joke. Miss Latham is making the supposition that one hundred percent effort is needed to get the assignment done correctly. That’s fair if the full effort to make is “what effort it takes to do the assignment correctly”. Tautological, but indisputable. If the one-hundred-percent-effort is whatever’s considered the appropriate standard effort to make for an assignment this size … well, that’s harder to agree with. Some assignments, some days, are easy; some just aren’t. Depends on what’s being asked.

Bill Schorr’s The Grizzwells for the 27th says it’s about mathematics. The particular question is about how many quarts go into a gallon. Measurement questions like this do get bundled into mathematics. It’s a bit hard to say why, though. It’s arbitrary how big a unit is; all we really demand is that it be convenient for whatever we’re doing. It’s even more arbitrary what the subdivisions of a unit are. A quart — well, the name gives it away, it should be a quarter of something bigger. But there’s no reason we couldn’t have divided a gallon into three pieces, or six, or twelve instead. We just didn’t happen to do that. And similarly for subdividing a quart (or whatever name it would get, if it were a sixth of a gallon).

I suppose it’s from thinking of arithmetic as a tool for clerks and shopkeepers. These calculations would need to carry along units. Even the currency might need to carry units. Decimal currency obscures the units. Older-style pound-shilling-pence units (or whatever they were called in the local language) don’t allow that. So I’m guessing that it was natural to think of, say, “quadruple three quarts” as the same sort of problem as “one-sixth of 8s/4d”.

Charles Schulz’s Peanuts Begins for the 29th speaks of “a perfect circle”. Violet asks an excellent question. But to say “a perfect circle” does communicate something. We name things like circles and lines and squares and agree they have certain properties. Also that the circles or lines or squares that we see in the world don’t have those properties. We might emphasize that something is a perfect circle or a straight line or something, to insist that it approaches this ideal of circle-ness. I’m not well-versed in the philosophy of mathematics. But it does seem hard to avoid Platonist thoughts about it. It’s hard to do geometry without pictures. But we insist to ourselves that the pictures may lie to us.

My other Reading the Comics posts should appear at this link. Percy Crosby’s Skippy gets mentions in essays at this link. There’s not many of them, but I really like the strip, so I hope there’s chances for more soon. Essays discussing topics raised by Barney and Clyde are at this link. Essays which discuss The Grizzwells are at this link. And Peanuts — both the 1970s “current” runs syndicated to newspapers and the 1950s “vintage” rerun only online — are at this link. And please stick around; there’ll be another A to Z post in about a day unless things go wrong.

## I’m Looking For Some More Topics For My 2018 Mathematics A-To-Z

As I’d said about a month ago, I’m hoping to gather topics for this year’s A-To-Z in a more piecemeal manner. Mostly this is so I don’t lose track of requests. I’m hoping not to go more than about three weeks between when a topic gets brought up and when I actually commit words to page.

But please, if you have any mathematical topics with a name that starts G through M, let me know! I generally take topics on a first-come, first-served basis for each letter. But I reserve the right to use a not-first-pick choice if I realize the topic’s enchanted me. Also to use a synonym or an alternate phrasing if both topics for a particular letter interest me. Also when you do make a request, please feel free to mention your blog, Twitter feed, Mathstodon account, or any other project of yours that readers might find interesting. I’m happy to throw in a mention as I get to the word of the day.

So! I’m open for nominations. Here are the words I’ve used in past A to Z sequences, for reference. I probably don’t want to revisit them, but if someone’s interested, I’ll at least think over whether I have new opinions about them. Thank you.

#### Excerpted From The Summer 2017 A To Z

And there we go! … To avoid confusion I’ll mark off here when I have taken a letter.

#### Available Letters for the Fall 2018 A To Z:

• G
• H
• I
• J
• K
• L
• M

And all the Fall 2018 Mathematics A-To-Z should appear at this link, along with some extra stuff like these topic-request pages and such.

## My 2018 Mathematics A To Z: Commutative

Today’s A to Z term comes from Reynardo, @Reynardo_red on Twitter, and is a challenge. And the other A To Z posts for this year should be at this link.

# Commutative.

Some terms are hard to discuss. This is among them. Mathematicians find commutative things early on. Addition of whole numbers. Addition of real numbers. Multiplication of whole numbers. Multiplication of real numbers. Multiplication of complex-valued numbers. It’s easy to think of this commuting as just having liberty to swap the order of things. And it’s easy to think of commuting as “two things you can do in either order”. It inspires physical examples like rotating a dial, clockwise or counterclockwise, however much you like. Or outside the things that seem obviously mathematical. Add milk and then cereal to the bowl, or cereal and then milk. As long as you don’t overfill the bowl, there’s not an important difference. Per Wikipedia, if you’re putting one sock on each foot, it doesn’t matter which foot gets a sock first.

When something is this accessible, and this universal, it gets hard to talk about. It threatens to be invisible. It was hard to say much interesting about the still air in a closed room, at least before there was a chemistry that could tell it wasn’t a homogeneous invisible something, and before there was a statistical mechanics that could tell it was doing something even when it seemed to be doing nothing.

But commutativity is different. It’s easy to think of mathematics that doesn’t commute. Subtraction doesn’t, for all that it’s as familiar as addition. And despite that we try, in high school algebra, to fuse it into addition. Division doesn’t either, for all that we try to think of it as multiplication. Rotating things in three dimensions doesn’t commute. Nor does multiplying quaternions, which are a kind of number still. (I’m double-dipping here. You can use quaternions to represent three-dimensional rotations, and vice-versa. So they aren’t quite different examples, even though you can use quaternions to do things unrelated to rotations.) Clothing is a mass of things that can and can’t be put on first.

We talk about commuting as if it’s something in (or not in) the operations we do. Adding. Rotating. Walking in some direction. But it’s not entirely in that. Consider walking directions. From an intersection in the city, walk north to the first intersection you encounter. And walk east to the first intersection you encounter. Does it matter whether you walk north first and then east, or east first and then north? In some cases, no; famously, in Midtown Manhattan there’s no difference. At least if we pretend Broadway doesn’t exist.

Also if we don’t start from near the edge of the island, or near Central Park. An operation, even something familiar like addition, is a function. Its domain is made of ordered pairs. Each thing in a pair is from the set of whatever might be added together. (Or multiplied, or whatever the name of the operation is.) The operation commutes if the order of the pair never matters. It’s easy to find sets and operations that won’t commute. I suppose it’s for the same reason it’s easier to find rectangular rather than square things. We’re so used to working with operations like multiplication that we forget that multiplication needs things to multiply.
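That definition is concrete enough to test by computer. A small sketch: check every ordered pair from a finite set and see whether swapping it ever changes the result.

```python
from itertools import product

# Sketch of the definition above: an operation commutes on a set when
# no ordered pair gives a different answer from its swap.

def commutes(op, elements):
    return all(op(a, b) == op(b, a) for a, b in product(elements, repeat=2))

nums = range(-5, 6)
print(commutes(lambda a, b: a + b, nums))         # True: addition
print(commutes(lambda a, b: a - b, nums))         # False: subtraction
print(commutes(lambda a, b: a + b, ["x", "y"]))   # False: string concatenation
```

Note the last line uses the same `+` as the first; whether the operation commutes depends on the set as much as the symbol.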

Whether a thing commutes turns up often in group theory. This shouldn’t surprise. Group theory studies how arithmetic works. A “group”, which is a set of things with an operation like multiplication on it, might or might not commute. A “ring”, which has a set of things and two operations, has some commutativity built into it. One ring operation is something like addition. That commutes, or else you don’t have a ring. The other operation is something like multiplication. That might or might not commute. It depends what you need for your problem. A ring with commuting multiplication, plus some other stuff, can reach the heights of being a “field”. Fields are neat. They look a lot like the real numbers, but they can be all weird, too.

But even in a group, which doesn’t have to have a commuting multiplication, we can tease out commutativity. There is a thing named the “commutator”, a particular way of multiplying a pair of elements together with their inverses. The commutators generate the “commutator subgroup”. You can use that subgroup to split the original group into classes, in the way that odds and evens split the whole numbers. The splitting is based on the same multiplication as the original group, but its domain is now those classes of elements. What’s created — the quotient group, known as the abelianization — is commutative. We can find a thing, based on what we are interested in, which offers commutativity right nearby.

It reaches further. In analysis, it can be useful to think of functions as “mappings”. We describe this as though a function took a domain and transformed it into a range. We can compose these functions together: take the range from one function and use it as the domain for another. Sometimes these chains of functions will commute. We can get from the original set to the final set by several paths. This can produce fascinating and beautiful proofs that look as if you just drew a lattice-work. The MathWorld page on “Commutative Diagram” has some examples of this, and I recommend just looking at the pictures. Appreciate their aesthetic, particularly the ones immediately after the sentence about “Commutative diagrams are usually composed by commutative triangles and commutative squares”.

Whether these mappings commute can have meaning. This takes us, maybe inevitably, to quantum mechanics. Quantum mechanics represents systems mathematically as either a wave function or a matrix, whichever is more convenient. We can use this to find the distribution of positions or momentums or energies or anything else we would like to know. Distributions are as much as we can hope for from quantum mechanics. We can say what (eg) the position of something is most likely to be but not what it is. That’s all right.

The mathematics of finding these distributions is just applying an operator, taking a mapping, on this wave function or this matrix. Some pairs of these operators commute, like the ones that let us find momentum and find kinetic energy. Some do not, like those to find position and to find momentum.

We can describe how much two operators do or don’t commute. This is through a thing called the “commutator”. Its form looks almost playfully simple. Call the operators ‘f’ and ‘g’. And that by ‘fg’ we mean, “do g, then do f”. (This seems awkward. But if you think of ‘fg’ as ‘f(g(x))’, where ‘x’ is just something in the domain of g, then this seems less awkward.) The commutator of ‘f’ and ‘g’ is then whatever ‘fg – gf’ is. If it’s always zero, then ‘f’ and ‘g’ commute. If it’s ever not zero, then they don’t.
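For operators represented as matrices the commutator is a short calculation. A sketch with 2-by-2 matrices; the second pair are the Pauli x and z matrices, standing in for genuinely non-commuting operators:

```python
# Sketch: the commutator fg - gf for 2-by-2 matrices standing in for
# operators.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(f, g):
    fg, gf = matmul(f, g), matmul(g, f)
    return [[fg[i][j] - gf[i][j] for j in range(2)] for i in range(2)]

diag1, diag2 = [[2, 0], [0, 3]], [[5, 0], [0, 7]]
print(commutator(diag1, diag2))   # [[0, 0], [0, 0]]: these commute

sx, sz = [[0, 1], [1, 0]], [[1, 0], [0, -1]]
print(commutator(sx, sz))         # [[0, -2], [2, 0]]: these don't
```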

This is easy to understand physically. Imagine starting from a point on the surface of the earth. Travel south one mile and then west one mile. You are at a different spot than you would be, had you instead travelled west one mile and then south one mile. How different? That’s the commutator. It’s obviously zero, for just multiplying some regular old numbers together. It’s sometimes zero, for these paths on the Earth’s surface. It’s never zero, for finding-the-position and finding-the-momentum. The amount by which that’s never zero we can see as the famous Uncertainty Principle, the limits of what kinds of information we can know about the world.
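The south-then-west example can even be computed, at least approximately, by treating one mile as short enough to ignore path curvature along each step:

```python
import math

# Sketch: walk on a sphere of (roughly) Earth's radius.  Each one-mile
# step is treated as short enough to ignore curvature along the step.

R = 3959.0   # miles

def south(lat, lon, miles):
    return lat - miles / R, lon

def west(lat, lon, miles):
    return lat, lon - miles / (R * math.cos(lat))

start = (math.radians(45.0), 0.0)
a = west(*south(*start, 1.0), 1.0)    # south first, then west
b = south(*west(*start, 1.0), 1.0)    # west first, then south
print(a[1] - b[1])   # a tiny nonzero longitude gap: the "commutator"
```

The latitudes agree exactly; the longitudes differ because walking west covers more longitude at higher latitudes.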

Still, it is a hard subject to describe. Things which commute are so familiar that it takes work to imagine them not commuting. (How could three times four equal anything but four times three?) Things which do not commute either obviously shouldn’t (add hot water to the instant oatmeal, and eat it), or are unfamiliar enough people need to stop and think about them. (Rotating something in one direction and then another, in three dimensions, generally doesn’t commute. But I wouldn’t fault you for testing this out with a couple objects on hand before being sure about it.) But it can be noticed, once you know to explore.

## Reading the Comics, September 24, 2018: Carnival Delay Edition

It’s unusual for me to have a Reading the Comics post on Monday, but that’s what fits my schedule. The Playful Mathematics Education Blog Carnival took my Sunday spot, and Tuesday and Friday I hope to continue the A to Z posts. It’s going to be a rather full week. I’m looking forward to, I hope, surviving. Meanwhile, here’s some comics.

Mike Thompson’s Grand Avenue for the 23rd resumes its efforts to become my archenemy with a strip about why learn arithmetic. Michael is right that we don’t need people to do multiplication. So why should we learn it? Grandmom Kate offers only the answer that he’ll be punished if he doesn’t learn them. This could motivate Michael to practice multiplication tables. But it’ll never convince him that learning multiplication tables is something of value.

That said, what would convince him? It’s ridiculous to suppose Michael would be in a spot where he’d need to know eight times seven right away and without a computer to tell him. I find a certain amount of arithmetic-doing fun. But I already like doing it. (I admit a bootstrapping problem. Do I find it fun because I do it well, or do I do arithmetic well because I find it fun? I don’t know.) And that I find something fun is a lousy argument that everyone should learn to do it. I can argue that practicing multiplication tables is practice for finding neat patterns in other things, in higher mathematics. But is that reason to care? If Michael isn’t interested in eight times seven, is he going to be interested in the outer products of the set of symmetries on the octagon and the permutations of the heptagon?

I don’t have an actual answer here. I think it’s worth learning to do arithmetic. But not because we need people to do arithmetic. At least not except when we’re too lazy to take out our phones. But “or else you’ll lose money” is a terrible reason.

Dave Whamond’s Reality Check for the 23rd is a smorgasbord strip of things cartoonists get told too often. It comes in here because I like the strip, and because the punch line is built in the fear of arithmetic. It’s traditional to think that cartoonists, as artists, haven’t got an interest in mathematics or science. I can’t deny that the time it takes to learn how to draw, and the focus it takes to make a syndication-worthy comic strip, hurt someone’s ability to study much mathematics. And vice-versa. But people are a varied bunch. Bill Amend, of FoxTrot, and Bud Grace, of the discontinued The Piranha Club, were both physics majors. Darrin Bell, of Candorville and Rudy Park, writes well about mathematical (and scientific) topics. Crockett Johnson, of the renowned 1940s comic strip Barnaby and the Harold and the Purple Crayon books, was literate enough in mathematics to do over a hundred paintings based on geometry theorems. Part of why I note when the mathematics put into the background of a strip is that I do like pointing out there’s no reason artists and mathematicians or scientists need to be separate people.

Tony Carrillo’s F Minus for the 24th uses the form of the story problem. This one is of the classic form of apples distributed amongst people. The problem presented makes its politics bare. But any narrative, however thin, carries along with it cultural values. That mathematicians may work out things whose truth is (we believe) independent of the posed problem doesn’t mean the posed problem is universal.

Steve Boreman’s Little Dog Lost rerun for the 24th is the Roman Numerals joke for the week. There is a connotation of great age to anything written in Roman Numerals. Likely because we are centuries past the time they were used for anything but ornament. And even in ornament they seem to be declining in use. I do wonder if the puniness of, say, ‘MMI’ or ‘MMXX’ as a sequence of numerals, compared to (say) ‘MCMXLVII’, makes it look better to just write ‘2001’ or ‘2020’ instead.

The full set of Reading the Comics posts should be at this link. Essays that discuss Grand Avenue should be at this link. This and other appearances by Reality Check should be at this link. Appearances by F Minus are at this link. And other essays with Little Dog Lost should be at this link. Thanks for reading along.

## Playful Mathematics Education Blog Carnival #121

Greetings one and all! Come, gather round! Wonder and spectate and — above all else — tell your friends of the Playful Mathematics Blog Carnival! Within is a buffet of delights and treats, fortifications for the mind and fire for the imagination.

121 is a special number. When I was a mere tot, growing in the wilds of suburban central New Jersey, it stood there. It held a spot of privilege in the multiplication tables on the inside front cover of composition books. On the forward diagonal, yet insulated from the borders. It anchors the safe interior. A square number, eleventh of that set in the positive numbers.

## The First Tent

The first wonder to consider is Iva Sallay’s Find the Factors blog. She brings each week a sequence of puzzles, all factoring challenges. The result of each, done right, is a scrambling of the multiplication tables; it’s up to you the patron to find the scramble. She further examines each number in turn, finding its factors and its interesting traits. And furthermore, usually, beginning a new century of digits opens a horserace, to see which of the numbers have the greatest number of factorizations. She furthermore was the host of this Playful Mathematics Education Carnival for August of 2018.

121 is more than just a square. It is the lone square known to be the sum of the first several powers of a prime number: it is $1 + 3 + 3^2 + 3^3 + 3^4$, a fantastic combination. If there is another square that is such a sum of consecutive prime powers, it is unknown to any human — and must be at least 35 digits long.

We look now for a moment at some astounding animals. From the renowned Dr Nic: Introducing Cat Maths cards, activities, games and lessons — a fine collection of feline companions, such toys as will entertain them. A dozen attributes each; twenty-seven value cards. These cats, and these cards, and these activity puzzles, promise games and delights, to teach counting, subtraction, statistics, and inference!

Next and no less incredible is the wooly Mathstodon. Christian Lawson-Perfect hosts this site, an instance of the open-source Twitter-like service Mastodon. Its focus: a place for people interested in mathematics to write of what they know. To date over 1,300 users have joined, and have shared nearly 25,000 messages. You need not join to read many of these posts — your host here has yet to — but may sample its wares as you like.

## The Second Tent

121 is one of only two perfect squares known to be four less than the cube of a whole number. The great Fermat conjectured that 4 and 121 are the only such numbers; no one has found a counter-example. Nor a proof.
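Doubters may hunt for a counter-example themselves. This Python sketch of my own (the search bound is an arbitrary choice) looks for perfect squares sitting four below a cube:

```python
from math import isqrt

# Look for whole numbers n with n**3 - 4 a perfect square.
hits = []
for n in range(2, 100_000):
    m = n**3 - 4
    if isqrt(m)**2 == m:
        hits.append(m)
# Only 4 (= 2**3 - 4) and 121 (= 5**3 - 4) turn up in this range.
```

A larger bound slows the loop but, so far as anyone knows, finds nothing new.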

Friends, do you know the secret to popularity? There is an astonishing truth behind it. Elias Worth of the MathSection blog explains the Friendship Paradox. This mind-warping phenomenon tells us your friends have more friends than you do. It will change forever how you look at your followers and following accounts.

And now to thoughts of learning. Stepping forward now is Monica Utsey, @Liveonpurpose47 of Chocolate Covered Boy Joy. Her declaration: “I incorporated Montessori Math materials with my right brain learner because he needed literal representations of the work we were doing. It worked and we still use it.” See now for yourself the representations, counting and comparing and all the joys of several aspects of arithmetic.

Take now a moment for your own fun. Blog Carnival patron and organizer Denise Gaskins wishes us to know: “The fun of mathematical coloring isn’t limited to one day. Enjoy these coloring resources all year ’round!” Happy National Coloring Book Day offers the title, and we may keep its spirit all the year round.

Confident in that? Then take on a challenge. Can you scroll down faster than Christian Lawson-Perfect’s web site can find factors? Prove your speed, prove your endurance, and see if you can overcome this infinite scroll.

## The Third Tent

121 is a star number, the fifth of that select set. 121 identical items can be tiled to form a centered hexagram, the six-pointed star. You may have seen it in the German game of Chinese Checkers, as the board of that has 121 holes.
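The star numbers follow the rule $S_n = 6n(n-1) + 1$. A quick Python check (my own sketch) lists the first five:

```python
def star(n):
    # n-th star number: a centered hexagram, the six-pointed star figure
    return 6 * n * (n - 1) + 1

first_five = [star(n) for n in range(1, 6)]
# first_five == [1, 13, 37, 73, 121]
```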

We come back again to teaching. “Many homeschoolers struggle with teaching their children math. Here are some tips to make it easier”, offers Denise Gaskins. Step forth and benefit from this FAQ: Struggling with Arithmetic, a collection of tips and thoughts and resources to help make arithmetic the more manageable.

Step now over to the arcade, and to the challenge of Pac-Man. This humble circle-inspired polygon must visit the entirety of a maze, and avoid ghosts as he does. Matthew Scroggs of Chalk Dust Magazine here seeks and shows us Optimal Pac-Man. Graph theory tells us there are thirteen billion different paths to take. Which of them is shortest? Which is fastest? Can it be known, and can it help you through the game?

And now a recreation, one to become useful if winter arrives. Think of the mysteries of the snowball rolling down a hill. How does it grow in size? How does it speed up? When does it stop? Rodolfo A Diaz, Diego L Gonzalez, Francisco Marin, and R Martinez satisfy your curiosity with Comparative kinetics of the snowball respect to other dynamical objects. Be warned! This material is best suited for the college-age student of the mathematical snow sciences.

## The Fourth Tent

121 is furthermore the sixth of the centered octagonal numbers. 121 of a thing may be set into six concentric octagons of one, then two, then three, then four, then five, and then six of them on a side.
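The centered octagonal numbers can be generated the same way, by wrapping octagonal rings of 8, then 16, then 24 dots, and so on, around a central point. A Python sketch of my own:

```python
def centered_octagonal(n):
    # a central dot, wrapped in n - 1 octagonal rings of 8, 16, 24, ... dots
    return 1 + sum(8 * k for k in range(1, n))

first_six = [centered_octagonal(n) for n in range(1, 7)]
# first_six == [1, 9, 25, 49, 81, 121]
# each is an odd square: centered_octagonal(n) == (2*n - 1)**2
```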

To teach is to learn! And we have here an example of such learning. James Sheldon writing for the American Mathematical Society Graduate Student blog offers Teaching Lessons from a Summer of Taking Mathematics Courses. What secrets has Sheldon to reveal? Come inside and learn what you may.

And now step over to the games area. The game Entanglement wraps you up in knots, challenging you to find the longest knot possible. David Richeson of Division By Zero sees in this A game for budding knot theorists. What is the greatest score that could be had in this game? Can it ever be found? Only Richeson has your answer.

Step now back to the amazing Mathstodon. Gaze in wonder at the account @dudeney_puzzles. Since the September of 2017 it has brought out challenges from Henry Ernest Dudeney’s Amusements in Mathematics. Puzzles given, yes, with answers that follow along. The impatient may find Dudeney’s 1917 book on Project Gutenberg among other places.

## The Fifth Tent

Sum the digits of 121; you will find that you have four. Take its prime factors, 11 and 11, and sum their digits; you will find that this is four again. This makes 121 a Smith number. These marvels of the ages were named by Albert Wilansky, in honor of his brother-in-law, a man known to history as Harold Smith, whose telephone number, 4,937,775, was one such number.
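The curious may verify Smith-hood by machine. A short Python routine of my own devising (trial division suffices at this scale):

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def prime_factors(n):
    # prime factors with multiplicity, by trial division
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_smith(n):
    factors = prime_factors(n)
    if len(factors) < 2:   # primes are excluded by convention
        return False
    return digit_sum(n) == sum(digit_sum(f) for f in factors)

# 121 = 11 * 11: digit sums (1+2+1) = 4 and (1+1) + (1+1) = 4.
# Harold Smith's telephone number qualifies too.
```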

Now let us consider terror. What is it to enter a PhD program? Many have attempted it; some have made it through. Mathieu Besançon gives to you a peek behind academia’s curtain. A year in PhD describes some of this life.

And now to an astounding challenge. Imagine an assassin readies your death. Can you protect yourself? At all? Tai-Danae Bradley invites you to consider: Is the Square a Secure Polygon? This question takes you on a tour of geometries familiar and exotic. Learn how mathematicians consider how to walk between places on a torus — and the lessons this has for a square room. The fate of the universe itself may depend on the methods described herein — the techniques used to study it relate to those that study whether a physical system can return to its original state. And then J2kun turned this into code, Visualizing an Assassin Puzzle, for those who dare to program it.

Have you overcome this challenge? Then step into the world of linear algebra, and this delight from the Mathstodon account of Christian Lawson-Perfect. The puzzle is built on the wonders of eigenvectors, those marvels of matrix multiplication. They emerge from multiplication longer or shorter but unchanged in direction. Lawson-Perfect uses whole numbers, represented by Scrabble tiles, and finds a great matrix with a neat eigenvalue. Can you prove that this is true?

## The Sixth Tent

Another wonder of the digits of 121. Take them apart, then put them together again. Contorted into the form $11^2$ they represent the same number. 121 is, in the base ten commonly used in the land, a Friedman number, second of that line. These marvels are named for Erich Friedman, a figure of mystery from Stetson University.
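The Friedman claim is easy to check by machine. A Python sketch of my own (eval applied only to a fixed string of our choosing):

```python
# 121 is a Friedman number: the expression 11**2 evaluates to 121
# and uses exactly the digits of 121, namely two 1s and one 2.
expr = "11**2"
assert eval(expr) == 121
assert sorted(ch for ch in expr if ch.isdigit()) == sorted("121")
```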

We draw closer to the end of this carnival’s attractions! To the left I show a tool for those hoping to write mathematics: Donald E Knuth, Tracy Larrabee, and Paul M Roberts’s Mathematical Writing. It’s a compilation of thoughts about how one may write to be understood, or to avoid being misunderstood. Either would be a marvel for the ages.

To the right please see Gregory Taylor’s web comic Any ~Qs. Taylor — @mathtans on Twitter — brings a world of math-tans, personifications of mathematical concepts, together for adventures and wordplay. And if the strip is not to your tastes, Taylor is working on ε Project, a serialized written story with new installments twice a month.

If you will look above you will see the marvels of curved space. On YouTube, Eigenchris hopes to learn differential geometry, and shares what he has learned. While he has a series under way, he suggested Episode 15, ‘Geodesics and Christoffel Symbols’, as one that new viewers could usefully try. Episode 16, ‘Geodesic Examples on Plane and Sphere’, puts this work to good use.

And as we reach the end of the fairgrounds, please take a moment to try Find the Factors Puzzle number 121, a challenge from 2014 that still speaks to us today!

And do always stop and gaze in awe at the fantastic and amazing geometrical constructs of Robert Loves Pi. You shall never see stellations of its like elsewhere!

## The Concessions Tent

With no thought of the risk to my life or limb I read the newspaper comics for mathematical topics they may illuminate! You may gape in awe at the results here. And furthermore this week and for the remainder of this calendar year of 2018 I dare to explain one and only one mathematical concept for each letter of our alphabet! I remind the sensitive patron that I have already done not one, not two, not three, but four previous entries all finding mathematical words for the letter “X” — will there be one come December? There is but one way you might ever know.

Denise Gaskins coordinates the Playful Mathematics Education Blog Carnival. Upcoming scheduled carnivals, including the chance to volunteer to host it yourself, or to recommend your site for mention, are listed here. And October’s 122nd Playful Mathematics Education Blog Carnival is scheduled to be hosted by Arithmophobia No More, and may this new host have the best of days!

## My 2018 Mathematics A To Z: Box-And-Whisker Plot

Today’s A To Z term is another from Iva Sallay, Find The Factors blog creator and, as with asymptote, friend of the blog. Thank you for it.

# Box-And-Whisker Plot.

People can’t remember many things at once. This has effects. Some of them are obvious. Like, how a phone number, back in the days you might have to memorize them, wouldn’t be more than about seven or eight digits. Some are subtle, such as that we have descriptive statistics. We have descriptive statistics because we want to understand collections of a lot of data. But we can’t understand all the data. We have to simplify it. From this we get many numbers, based on data, that try to represent it. Means. Medians. Variance. Quartiles. All these.

And it’s not enough. We try to understand data further by visualization. Usually this is literal, making pictures that represent data. Now and then somebody visualizes data by something slick, like turning it into an audio recording. (Somewhere here I have an early-60s album turning 18 months of solar radio measurements into something music-like.) But that’s rare, and usually more of an artistic statement. Mostly it’s pictures. Sighted people learn much of the world from the experience of seeing it and moving around it. Visualization turns arithmetic into geometry. We can support our sense of number with our sense of space.

Many of the ways we visualize data came from the same person. William Playfair set out the rules for line charts and area charts and bar charts and pie charts and circle graphs. Florence Nightingale used many of them in her reports on medical care in the Crimean War. And this made them public and familiar enough that we still use them.

Box-and-whisker plots are not among them. I’m startled too. Playfair had a great talent for these sorts of visualizations. That he missed this is a reminder to us all. There are great, simple ideas still available for us to discover.

At least for the brilliant among us to discover. Box-and-whisker plots were introduced in 1969. I’m surprised it’s that recent. John Tukey developed them. Computer scientists remember Tukey’s name; he coined the term ‘bit’, as in the element of computer memory. They also remember he was an early user, if not the coiner, of the term ‘software’. Mathematicians know Tukey’s name too. He and James Cooley developed the Fast Fourier Transform. The Fast Fourier Transform appears on every list of the Most Important Algorithms of the 20th Century. Sometimes the Most Important Algorithms of All Time. The Fourier Transform is this great thing. It’s a way of finding patterns in messy, complicated data. It’s hard to calculate, though. Cooley and Tukey, though, found that the calculations you have to do can be made simpler, and much quicker. (In certain conditions. Mostly depending on how the data’s gathered. Fortunately, computers encourage gathering data in ways that make the Fast Fourier Transform possible. And then go and calculate it nice and fast.)
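The idea behind the Fourier transform can be sketched in a few lines of Python. This toy example (my own, with two made-up sine waves, and using the plain $O(N^2)$ sum rather than Cooley and Tukey's fast algorithm) picks the hidden frequencies out of a signal:

```python
import cmath
import math

# A signal built from two hidden sine waves, at 5 and 12 cycles per window.
N = 256
signal = [math.sin(2 * math.pi * 5 * t / N)
          + 0.4 * math.sin(2 * math.pi * 12 * t / N)
          for t in range(N)]

def dft_magnitude(samples, k):
    # Strength of the k-th frequency bin, by the plain O(N^2) definition.
    # The Fast Fourier Transform gets every bin at once in O(N log N).
    n = len(samples)
    return abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                   for t, s in enumerate(samples)))

strengths = {k: dft_magnitude(signal, k) for k in range(1, 20)}
peaks = sorted(strengths, key=strengths.get, reverse=True)[:2]
# sorted(peaks) == [5, 12]: the transform recovers the hidden frequencies
```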

Box-and-whisker plots are a way to visualize sets of data. Too many data points to look at all at once, not without getting confused. They extract a couple bits of information about the distribution. Distributions say what ranges a data point, picked at random, is likely to be in, and what ranges it is unlikely to be in. Distributions can be good things to look at. They let you know what typical experiences of a thing are likely to be. And they’re stable. A handful of weird fluke events don’t change them much. If you have a lot of fluke events, that changes the distribution. But if you have a lot of fluke events, they’re not flukes. They’re just events.

Box-and-whisker plots start from the median. This is the second of the three things commonly called “average”. It’s the data point that half the remaining data is less than, and half the remaining data is greater than. It’s a nice number to know. Start your box-and-whisker plot with a short line, horizontal or vertical as fits your worksheet, and labelled with that median.

Around this line we’ll draw a box. It’ll be as wide as the line you made for the median. But how tall should it be?

That is, normally, based on the first and third quartiles. These are data points like the median. The first quartile has one-quarter the data points less than it, and three-quarters the data points more than it. The third quartile has three-quarters the data points less than it, and one-quarter the data points more than it. (And now you might ask if we can’t call the median the “second quartile”. We sure can. And will if we want to think about how the quartiles relate to each other.) Between the first and the third quartile are half of all the data points. The first and the third quartiles are the boundaries of your box. They’re where the edges of the rectangle are.

That’s the box. What are the whiskers?

Well, they’re vertical lines. Or horizontal lines. Whatever’s perpendicular to how you started. They start at the quartile lines. Should they go to the maximum or minimum data points?

Maybe. Maximum and minimum data are neat, yes. But they’re also suspect. They’re extremes. They’re not quite reliable. If you went back to the same source of data, and collected it again, you’d get about the same median, and the same first and third quartile. You’d get different minimums and maximums, though. Often crazily different. Still, if you want to understand the data you did get, it’s hard to ignore that this is the data you have. So one choice for representing these is to just use the maximum and minimum points. Draw the whiskers out to the maximum and minimum, and then add a little cross bar or a circle at the end. This makes clear you meant the line to end there, rather than that your ink ran out. (Making a figure safe against misprinting is one of the understated essentials of good visualization.)

But again, the very highest and lowest data may be flukes. So we could look at other, more stable endpoints for the whiskers. The point of this is to show the range of what we believe most data points are. There are different ways to do this. There’s not one that’s always right. It’s important, when showing a box-and-whisker plot, to explain how far out the whiskers go.

Tukey’s original idea, for example, was to extend the whiskers based on the interquartile range. This is the difference between the third quartile and the first quartile. Like, just subtraction. Find a number that’s one-and-a-half times the interquartile range above the third quartile. The upper whisker goes to the data point that’s closest to that boundary without going over. This might well be the maximum already. The other number is the one that’s the first quartile minus one-and-a-half times the interquartile range. The lower whisker goes to the data point that’s closest to that boundary without falling underneath it. And this might be the minimum. It depends how the data’s distributed. The upper whisker and the lower whisker aren’t guaranteed to be the same lengths. If there are data outside these whisker ranges, mark them with dots or x’s or something else easy to spot. There’ll typically be only a few of these.

But you can use other rules too. Again as long as you are clear about what they represent. The whiskers might go out, for example, to particular percentiles. Or might reach out a certain number of standard deviations from the mean.

The point of doing this box-and-whisker plot is to show where half the data are. That’s inside the box. And where the rest of the non-fluke data is. That’s the whiskers. And the flukes, those are the odd little dots left outside the whiskers. And it doesn’t take any deep calculations. You need to sort the data in ascending order. You need to count how many data points there are, to find the median and the first and third quartiles. (You might have to do addition and division. If you have, for example, twelve distinct data points, then the median is the arithmetic mean of the sixth and seventh values. The first quartile is the arithmetic mean of the third and fourth values. The third quartile is the arithmetic mean of the ninth and tenth values.) You (might) need to subtract, to find the interquartile range. And multiply that by one and a half, and add or subtract that from the quartiles.
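The whole recipe fits in a few lines of Python. This sketch (my own, with twelve made-up data points) follows the conventions described above, including Tukey's one-and-a-half-interquartile-range whiskers:

```python
data = sorted([3, 7, 8, 5, 12, 14, 21, 13, 18, 2, 30, 9])
# data == [2, 3, 5, 7, 8, 9, 12, 13, 14, 18, 21, 30]

median = (data[5] + data[6]) / 2   # mean of the 6th and 7th values: 10.5
q1 = (data[2] + data[3]) / 2       # mean of the 3rd and 4th values: 6.0
q3 = (data[8] + data[9]) / 2       # mean of the 9th and 10th values: 16.0

iqr = q3 - q1                      # interquartile range: 10.0
upper_fence = q3 + 1.5 * iqr       # 31.0
lower_fence = q1 - 1.5 * iqr       # -9.0

# Whiskers reach to the most extreme data points still inside the fences.
upper_whisker = max(x for x in data if x <= upper_fence)   # 30
lower_whisker = min(x for x in data if x >= lower_fence)   # 2
outliers = [x for x in data if not lower_fence <= x <= upper_fence]
# outliers == []: no flukes to mark with dots in this particular set
```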

This shows you which values are likely and which are improbable. It gives you a cruder picture than, say, the standard deviation and the coefficient of variation do. But it needs no hard calculations. None of what you need for box-and-whisker plots is computationally intensive. Heck, none of what you need is hard. You knew everything you needed to find these numbers by fourth grade. And yet they tell you about the distribution. You can compare whether two sets of data are similar by eye. Telling whether sets of data are similar becomes telling whether two shapes look about the same. It’s brilliant to represent so much from such simple work.

## Reading the Comics, September 22, 2018: Last Chance Edition

I plan tomorrow to have another of my Mathematics A To Z posts. This weekend I’ll publish this month’s Playful Mathematics Blog Carnival. So if you’ve seen any web site, blog, video, podcast, or other reference that had something that delighted and taught you something, this is your last chance to let me know, and let my audience know about it. Please leave a comment if you know about anything I ought to see. Thank you.

Mark Tatulli’s Lio for the 20th is a numerals and a wordplay joke. It is not hard to make numerals tattooed on a person an alarming thing. But when done with (I trust) the person’s consent, and done whimsically like this, it’s more a slightly odd bit of play.

Tony Cochrane’s Agnes for the 21st is ultimately a strip about motivating someone to learn arithmetic. Agnes’s reasoning is sound, though. If the only reason to learn this unpleasant chore is because your job may need it, why not look at another job? We wouldn’t try to convince someone who didn’t want to learn French that they’ll need it for their job as … a tour guide in Quebec? There’s plenty of work that doesn’t need that. I suspect kids don’t buy “this is good for your future job” as a reason. Even if it were, general education should not be job training either.

Juba’s Viivi and Wagner for the 21st gives Wagner a short-lived ambition to be a wandering mathematician. The abacus serves as badge of office. There are times and places that his ambition wouldn’t be completely absurd. Before the advent of electric and electronic computing, people who could calculate were worth hiring for their arithmetic. In 18th Century London there was a culture of “penny universities”, people with academic training making a living by giving lectures and courses to whatever members of the public cared to come to their talk, often in coffee-houses or barns.

Mathematicians learn that there used to be public spectacles, mathematicians challenging one another to do problems, with real cash or jobs on the line. They learn this because one such challenge figures in to the story of Gerolamo Cardano and Niccolò Fontana, known as Tartaglia. It’s about how we learned formulas to solve some kinds of polynomials. You may sense uncertainty in my claim there. It’s because it turns out it’s hard to find clear records of this sort of challenge outside the Cardano-Tartaglia match. That isn’t to say these things weren’t common. It’s just that I’ve been slowly learning to be careful about my claims.

(I’m aided here by a startling pair of episodes of The History of Philosophy Without Any Gaps podcast. This pair — “Trivial Pursuits: Fourteenth Century Logic” and “Sara Uckelman on Obligations” — describe a fascinating logic game that sounds like it would still be a great party game, for which there’s numerous commentaries and rule sets and descriptions of how to play. But no records of people actually ever playing it, or talking about games they had played, or complaining about being cheated out of a win or stuff like that. It’s a strong reminder to look closely at what your evidence does support.)

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 22nd is the comforting return of Zach Weinersmith to these essays. And yes, it’s horrible parenting to promise something fun and have it turn out to be a mathematics lecture, but that’s part of the joke.

Karl Weierstrass was a real person, and a great mathematician best known for giving us a good, rigorous idea of what a limit is. We need limits because, besides their being nice things to have, calculus depends on them. At least, calculus depends on thinking about calculations on infinitely many things. Or on things infinitesimally small. Trying to do this works pretty well, much of the time. But you can also start calculating like this and get nonsense. How to tell whether your particular calculation works out or is nonsense?

Weierstrass worked out a good, rigorous idea for what we mean by a limit. It mostly tracks with what we’d intuitively expect. And it avoids all the dangerous spots we’ve noticed so far. Particularly, it doesn’t require us to ever look at anything that’s infinitely vast, or infinitesimally small. Anything we calculate on is done with regular arithmetic, that we’re quite confident in. But it lets us draw conclusions about the infinitely numerous or tiny. It’s brilliant work. When it’s presented to someone in the start of calculus, it leaves them completely baffled but they can maybe follow along with the rules. When it’s presented to mathematics majors in real analysis, it leaves them largely baffled but they can maybe follow along with the reasons. Somewhere around grad school I got comfortable with it, even excited. Weierstrass’s sort of definition turns up all over the place in real and in functional analysis. So at the least you get very comfortable with it.

So it is part of Weinersmith’s joke that this is way above that kid’s class level. As a joke, that fails for me. The luchador might as well be talking complete nonsense and the kid would realize that right away. There’s not the threat that this is something he ought to be able to understand. But it will probably always be funny to imagine mathematician wrestlers. Can count on that. I didn’t mean that as a joke, but you’ll notice I’m letting it stand.

And with that, you know what I figure to post on Sunday. It and my other Reading the Comics posts should be at this tag. Other appearances of Lio should be at this link. The mentions of Agnes should be at this link. Essays with some mention of Viivi and Wagner will be at this link, although it’s a new tag, so who knows how long it’ll take for the next to appear? And other essays with Saturday Morning Breakfast Cereal will be at this link when there’s any to mention.

## My 2018 Mathematics A To Z: Asymptote

Welcome, all, to the start of my 2018 Mathematics A To Z. Twice each week for the rest of the year I hope to have a short essay explaining a term from mathematics. These are fun and exciting for me to do, since I mostly take requests for the words, and I always think I’m going to be farther ahead of deadline than I actually am.

Today’s word comes from longtime friend of my blog Iva Sallay, whose Find the Factors page offers a nice daily recreational logic puzzle. Also trivia about each whole number, in turn.

# Asymptote.

You know how everything feels messy and complicated right now? But you also feel that, at least in the distant past, things were simpler and easier to understand? And how you hope that, sometime in the future, all our current woes will have faded and things will be simple again? Hold that thought.

There is no one thing that every mathematician does, apart from insist to friends that they can’t do arithmetic well. But there are things many mathematicians do. One of those is to work with functions. A function is this abstract concept. It’s a triplet of things. One is a domain, a set of things that we draw the independent variables from. One is a range, a set of things that we draw the dependent variables from. And the last is a rule, something that matches each thing in the domain to one thing in the range.

The domain and range can be the same thing. They’re often things like “the real numbers”. They don’t have to be. The rule can be almost anything. It can be simple. It can be complicated. Usually, if it’s interesting, there’s at least something complicated about it.

The asymptote, then, is an expression of our hope that we have to work with something that’s truly simple, but has some temporary complicated stuff messing it up just now. Outside some local embarrassment, our function is close enough to this simpler asymptote. The past and the future are these simpler things. It’s only the present, the local area, that’s messy and confusing.

We can make this precise. Start off with some function we both agree is interesting. Reach deep into the imagination to call it ‘f’. Suppose that there is an asymptote. That’s also a function, with the same domain and range as ‘f’. Let me call it ‘g’, because that’s a letter very near ‘f’.

You give me some tolerance for error. This number mathematicians usually call ‘ε’. We usually think of it as a small thing. But all we need is that it’s larger than zero. Anyway, you give me that ε. Then I can give you, for that ε, some bounded region in the domain. Everywhere outside that region, the difference between ‘f’ and ‘g’ is smaller than ε. That is, our complicated original function ‘f’ and the asymptote ‘g’ are indistinguishable enough. At least everywhere except this little patch of the domain. There are different regions for different ε values, unless something weird is going on. The smaller the ε, the bigger the region of exceptions. But if the domain is something like the real numbers, well, big deal. Our function and our asymptote are indistinguishable roughly everywhere.
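Here is that definition at work, in a Python sketch with a made-up pair of functions: $f(x) = x + 1/x$ has the line $g(x) = x$ as an asymptote, since their difference is $1/x$, which shrinks as we leave the origin.

```python
def f(x):
    return x + 1 / x   # complicated near the origin, simple far away

def g(x):
    return x           # the asymptote: a plain straight line

# For any tolerance eps, everywhere outside the region [-1/eps, 1/eps]
# the two functions differ by less than eps.
for eps in (0.1, 0.01, 0.001):
    bound = 1 / eps
    for x in (bound + 1, 2 * bound, 10 * bound):
        assert abs(f(x) - g(x)) < eps
```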

If there is an asymptote. We’re not guaranteed there is one. But if there is, we know some nice things. We know what our function looks like, at least outside this local range of extra complication. If the domain represents something like time or space, and it often does, then the asymptote represents the big picture. What things look like in deep time. What things look like globally. When studying a function we can divide it into the easy part of the asymptote and the local part that’s “function minus the asymptote”.

Usually we meet asymptotes in high school algebra. They’re a pair of crossed lines that hang around hyperbolas. They help you sketch out the hyperbola. Find equations for the asymptotes. Draw these crossed lines. Figure whether the hyperbola should go above-and-below or left-and-right of the crossed lines. Draw arcs accordingly. Then match them up to the crossed lines. Asymptotes don’t seem to do much else there. A parabola, the other exotic shape you meet about the same time, doesn’t have any asymptote that’s any simpler than itself. A circle or an ellipse, which you met before but now have equations to deal with, doesn’t have an asymptote at all. They aren’t big enough to have any. So at first introduction asymptotes seem like a lot of mechanism for a slight problem. We don’t need accurate hand-drawn graphs of hyperbolas that much.

In more complicated mathematics they get useful again. In dynamical systems we look at descriptions of how something behaves in time. Often its behavior will have an asymptote. Not always, but it’s nice to see when it does. When we study operations, how long it takes to do a task, we see asymptotes all over the place. How long it takes to perform a task depends on how big a problem it is we’re trying to solve. The relationship between how big the thing is and how long it takes to do is some function. The asymptote appears when thinking about solving huge examples of the problem. What rule most dominates how hard the biggest problems are? That’s the asymptote, in this case.

Not everything has an asymptote. Some functions are always as complicated as they started. Oscillations, for example, if they don’t dampen out. A sine wave isn’t complicated. Not if you’re the kind of person who’ll write things like “a sine wave isn’t complicated”. But if the size of the oscillations doesn’t decrease, then there can’t be an asymptote. Functions might be chaotic, with values that vary along some truly complicated system, and so never have an asymptote.

But often we can find a simpler function that looks enough like the function we care about. Everywhere except some local little embarrassment. We can enjoy the promise that things were understandable at one point, and maybe will be again.