If there is a theme to the final comic strips from last week, it’s that kids find arithmetic hard. That’s a title for you.
Bill Watterson’s Calvin and Hobbes for the 2nd is one of the classics, of course. Calvin’s made the mistake of supposing that mathematics is only about getting true answers. We’ll accept the merely true, if that’s what we can get. But we want interesting. Which is stuff that’s not just true but is unexpected or unforeseeable in some way. We see this when we talk about finding a “proper” answer, or subset, or divisor, or whatever. Some things are true for every question, and so, who cares?
Also, is it really true that Calvin doesn’t know any of his homework problems? It’s possible, but did he check?
Were I grading, I would accept an “I don’t know”, at least for partial credit, in certain conditions. Those involve the student writing out what they would like to do to try to solve the problem. If the student has a fair idea of something that ought to find a correct answer, then the student’s showing some mathematical understanding. But there are times that what’s being tested is proficiency at an operation, and a blank “I don’t know” would not help much with that.
Patrick Roberts’s Todd the Dinosaur for the 2nd has an arithmetic cameo. Fractions, particularly. They’re mentioned as something too dull to stay awake through. So for the joke’s purpose this could have been any subject that has an exposition-heavy segment. Fractions do have more complicated rules than adding whole numbers do. And introducing those rules can be hard. But anything where you introduce rules instead of showing what you can do with them is hard. I’m thinking here of several times people have tried to teach me board games by listing all the rules, instead of setting things up and letting me ask “what am I allowed to do now?” the first couple turns. I’m not sure how that would translate to fractions, but there might be something.
John Zakour and Scott Roberts’s Maria’s Day for the 2nd has another of Maria’s struggles with arithmetic. It’s presented as a challenge so fierce it can defeat even superheroes. Could be any subject, really. It’s hard to beat the visual economy of having it be a division problem, though.
Rick Kirkman and Jerry Scott’s Baby Blues for the 3rd shows a bit of youthful enthusiasm. Hammie’s parents would rather that enthusiasm be put to memorizing multiplication facts. I’m not sure this would match the fun of building stuff. But I remember finding patterns inside the multiplication table fascinating. Like how you could start from a perfect square and get the same sequence of numbers as you moved out along a diagonal. Or tracing out where the same number appeared in different rows and columns, like how just everything could multiply into 24. Might be worth playing with some.
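Those table patterns are easy to play with in a few lines of code. This sketch (my own illustration, nothing from the strip) checks the diagonal pattern: step out from any perfect square along the anti-diagonal of the multiplication table and the products fall short of the square by 1, 4, 9, 16, the same sequence no matter which square you start from.

```python
# Step out from n*n along the table's anti-diagonal: the products are
# (n-1)*(n+1), (n-2)*(n+2), and so on. The gap below n*n is always the
# sequence of squares 1, 4, 9, ..., whatever n you start from.
def diagonal_gaps(n, steps=4):
    return [n * n - (n - j) * (n + j) for j in range(1, steps + 1)]

for n in (5, 9, 12):
    print(n, diagonal_gaps(n))   # every row prints [1, 4, 9, 16]
```

That the gap is always a perfect square is just the difference-of-squares identity, n² − (n − j)(n + j) = j², in disguise.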
I had a free choice of topics for today! Nobody had a suggestion for the letter ‘N’, so, I’ll take one of my own. If you did put in a suggestion, I apologize; I somehow missed the comment in which you did. I’ll try to do better in future.
Nearest Neighbor Model.
Why are restaurants noisy?
It’s one of those things I wondered while at a noisy restaurant. I have heard it is because restaurateurs believe patrons buy more, and more expensive stuff, in a noisy place. I don’t know that I have heard this correctly, nor that what I heard was correct. I’ll leave it to people who work that end of restaurants to say. But I wondered idly whether mathematics could answer why.
It’s easy to form a rough model. Suppose I want my brilliant words to be heard by the delightful people at my table. Then I have to be louder, to them, than the background noise is. Fine. I don’t like talking loudly. My normal voice is soft enough even I have a hard time making it out. And I’ll drop the ends of sentences when I feel like I’ve said all the interesting parts of them. But I can overcome my instinct if I must.
The trouble comes from other people thinking of themselves the way I think of myself. They want to be heard over how loud I have been. And there’s no convincing them they’re wrong. If there’s bunches of tables near one another, we’re going to have trouble. We’ll each be talking loud enough to drown one another out, until the whole place is a racket. If we’re close enough together, that is. If the tables around mine are empty, chances are my normal voice is enough for the cause. If they’re not, we might have trouble.
So this inspires a model. The restaurant is a space. The tables are set positions, points inside it. Each table is making some volume of noise. Each table is trying to be louder than the background noise. At least until the people at the table reach the limits of their screaming. Or decide they can’t talk, they’ll just eat and go somewhere pleasant.
Making calculations on this demands some more work. Some is obvious: how do you represent “quiet” and “loud”? Some is harder: how far do voices carry? Grant that a loud table is still loud if you’re near it. How far away before it doesn’t sound loud? How far away before you can’t hear it anyway? Imagine a dining room that’s 100 miles long. There’s no possible party at one end that could ever be heard at the other. Never mind that a 100-mile-long restaurant would be absurd. It shows that the limits of people’s voices are a thing we have to consider.
There are many ways to model this distance effect. A realistic one would fall off with distance, sure. But it would also allow for echoes and absorption by the walls, and by other patrons, and maybe by restaurant decor. This would take forever to get answers from, but if done right it would get very good answers. A simpler model would give answers less fitted to your actual restaurant. But the answers may be close enough, and let you understand the system. And may be simple enough that you can get answers quickly. Maybe even by hand.
And so I come to the “nearest neighbor model”. The common English meaning of the words suggests what it’s about. It comes from models like my restaurant noise problem: a bunch of points that each have some value. For my problem, tables and their noise levels. And that value affects stuff in some region around these points.
In the “nearest neighbor model”, each point directly affects only its nearest neighbors. Saying which is the nearest neighbor is easy if the points are arranged in some regular grid. If they’re evenly spaced points on a line, say. Or a square grid. Or a triangular grid. If the points are in some other pattern, you need to think about what the nearest neighbors are. This is why people working in neighbor-nearness problems get paid the big money.
Suppose I use a nearest neighbor model for my restaurant problem. In this, I pretend the only background noise at my table is that of the people the next table over, in each direction. Two tables over? Nope. I don’t hear them at my table. I do get an indirect effect. Two tables over affects the table that’s between mine and theirs. But vice-versa, too. The table that’s 100 miles away can’t affect me directly, but it can affect a table in-between it and me. And that in-between table can affect the next one closer to me, and so on. The effect is attenuated, yes. Shouldn’t it be, if we’re looking at something farther away?
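That chain of direct and indirect effects can be simulated in a few lines. Here is a toy version of the model (every parameter invented for illustration): tables along a line, each talking at a base volume plus a fraction of the noise it hears from its immediate neighbors only, iterated until the volumes settle.

```python
# Toy nearest-neighbor restaurant: each table's volume is a base level
# plus a coupling times the volume of its immediate neighbors only.
# Repeated sweeps let the indirect effects propagate down the line.
def settle(n_tables=10, base=1.0, coupling=0.3, sweeps=200):
    v = [base] * n_tables
    for _ in range(sweeps):
        new = []
        for i in range(n_tables):
            heard = 0.0
            if i > 0:
                heard += v[i - 1]          # table to the left
            if i < n_tables - 1:
                heard += v[i + 1]          # table to the right
            new.append(base + coupling * heard)
        v = new
    return v

volumes = settle()
print(volumes)   # middle tables end up louder than the ones at the ends
```

With the coupling below one half the volumes converge rather than screaming off to infinity, and tables in the middle of the room, with neighbors on both sides, end up the loudest.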
This sort of model is easy to work with numerically. I’m inclined toward problems that work numerically. Analytically … well, it can be easy. It can be hard. There’s a one-dimensional version of this problem, a bunch of evenly-spaced sites on an infinitely long line. If each site is limited to one of exactly two values, the problem becomes easy enough that freshman physics majors can solve it exactly. They don’t, not the first time out. This is because it requires recognizing a trigonometry trick that they don’t realize would be relevant. But once they know the trick, they agree it’s easy, when they go back two years later and look at it again. It just takes familiarity.
This comes up in thermodynamics, because it makes a nice model for how ferromagnetism can work. More realistic problems, like, two-dimensional grids? … That’s harder to solve exactly. Can be done, though not by undergraduates. Three-dimensional can’t, last time I looked. Weirdly, four-dimensional can. You expect problems to only get harder with more dimensions of space, and then you get a surprise like that.
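That one-dimensional, two-value chain is the Ising-model setup from ferromagnetism. The freshman exact solution needs the trigonometry trick, but a computer can at least verify the answer by brute force. This sketch (my own check, with made-up parameters) sums over every configuration of a short open chain with no external field and compares against the textbook closed form Z = 2·(2 cosh βJ)^(N−1).

```python
# Brute-force partition function of an open two-value (Ising-type) chain
# with nearest-neighbor coupling J at inverse temperature beta, compared
# against the exact open-chain result Z = 2 * (2*cosh(beta*J))**(N-1).
from itertools import product
from math import cosh, exp

def brute_Z(N, beta, J):
    total = 0.0
    for spins in product((-1, 1), repeat=N):
        energy = -J * sum(spins[i] * spins[i + 1] for i in range(N - 1))
        total += exp(-beta * energy)
    return total

N, beta, J = 8, 0.7, 1.0
exact = 2 * (2 * cosh(beta * J)) ** (N - 1)
print(brute_Z(N, beta, J), exact)   # the two agree
```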
The nearest-neighbor model is a first choice. It’s hardly the only one. If I told you there were a next-nearest-neighbor model, what would you suppose it was? Yeah, you’d be right. As long as you supposed it was “things are affected by the nearest and the next-nearest neighbors”. Mathematicians have heard of loopholes too, you know.
As for my restaurant model? … I never actually modelled it. I did think about the model. I concluded my model wasn’t different enough from ferromagnetism models to need me to study it more. I might be mistaken. There may be interesting weird effects caused by the facts of restaurants. That restaurants are pretty small things. That they can have echo-y walls and ceilings. That they can have sound-absorbing things like partial walls or plants. Perhaps I gave up too easily when I thought I knew the answer. Some of my idle thoughts end up too idle.
The edition title says it all. Comic Strip Master Command sent me enough strips the past week for two editions and I made an unhappy discovery about one of the comics in today’s.
Dave Coverly’s Speed Bump for the 28th is your anthropomorphic-numerals joke for the week. We get to know the lowest common denominator from fractions. It’s easier to compute anything with a fraction in it if you can put everything under a common denominator. But it’s also — usually — easier to work with smaller denominators than larger ones. It’s always okay to multiply a number by 1. It may not help, but it can always be done. This has the result of multiplying both the numerator and denominator by the same number. So suppose you have something that’s written in terms of sixths, and something else written in terms of eighths. You can multiply the first thing by four-fourths, and the second thing by three-thirds. Then both fractions are in terms of 24ths and your calculation is, probably, easier.
So this strip is the rare one where I have to say the joke doesn’t work on mathematical grounds. Coverly was misled by the association between “lowest” and “smallest”. 2 is going to be the lowest common denominator very rarely. Everything in the problem needs to be in terms of even denominators to start with, and even that won’t guarantee it: fourths and sixths are both even, and their lowest common denominator is 12. I hate to do that, since the point of a comic strip is humor and getting any mathematics right is a bonus. But in this case, knowing the terminology shatters the joke. Coverly would have a mathematically valid joke were 9 offering the consolation “you’re not always the greatest common divisor”, the largest number that goes into a set of numbers. But nobody thinks being called the “greatest” anything ever needs consolation, so the joke would fail everywhere but mathematics class.
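The mechanics here are that the lowest common denominator is the least common multiple of the denominators in play, which is why 2 so rarely wins. A quick sketch:

```python
# The lowest common denominator of a set of fractions is the least
# common multiple of their denominators. It can only be 2 if every
# denominator already divides 2, i.e. everything was halves or wholes.
from math import gcd

def lcd(*denominators):
    out = 1
    for d in denominators:
        out = out * d // gcd(out, d)
    return out

print(lcd(6, 8))   # 24, the sixths-and-eighths example from the text
print(lcd(4, 6))   # 12: both denominators even, yet the LCD isn't 2
print(lcd(2, 2))   # 2, the rare case where 2 actually is lowest
```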
Randy Glasbergen’s Glasbergen Cartoons for the 29th is a joke of the why-learn-mathematics model. “Because we always have done this” is not a reason compelling by the rules of deductive logic. It can have great practical value. Experience can encode things which are hard to state explicitly, or to untangle from one another. And an experienced system will have workarounds for the most obvious problems, ones that a new system will not have. And any attempt at educational reform, however well-planned or meant, must answer parents’ reasonable question of why their child should be your test case.
I do sometimes see algebra attacked as being too little-useful for the class time given. I could see good cases made for spending the time on other fields of mathematics. (Probability and statistics always stands out as potentially useful; the subjects were born from things people urgently needed to know.) I’m not competent to judge those arguments and so shall not.
Carl Skanberg’s That New Carl Smell for the 29th is a riff on jokes about giving more than 100%. Interpreting this giving-more-than-everything as running a deficit is a reasonable one. I’ve given my usual talk about “100% of what?” enough times now; I don’t need to repeat it until I think of something fresh to say.
Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 30th uses mathematics — story problems, specifically — as icons of intelligence. I can’t speak to the Mensa experience, but intellectual types trying to out-do each other? Yes, that’s a thing that happens. I mostly dodge attempts to put me to a fun mathematics puzzle. I’m embarrassed by how long it can take me to actually do one of these, when put on the spot. (I have a similar reaction to people testing my knowledge of trivia in the stuff I actually do know a ridiculous amount about.) Mostly I hope Dave Coverly doesn’t think I’m being this kid.
Two commenters suggested the topic for today’s A to Z post. I suspect I’d have been interested in it if only one had. (Although Dina Yagoditch’s suggestion of the Menger Sponge is hard to resist.) But a double nomination? The topic got suggested by Mr Wu, author of MathTuition88, and by John Golden, author of Math Hombre. My thanks to all for interesting things to think about.
So you know how in the first car you ever owned the alternator was always going bad? If you’re lucky, you reach a point where you start owning cars good enough that the alternator is not the thing always going bad. Once you’re there, congratulations. Now the thing that’s always going bad in your car will be the manifold. That one’s for my dad.
Manifolds are a way to do normal geometry on weird shapes. What’s normal geometry? It’s … you know, the way shapes work on your table, or in a room. The Euclidean geometry that we’re so used to that it’s hard to imagine it not working. Why worry about weird shapes? They’re interesting, for one. And they don’t have to be that weird to count as weird. A sphere, like the surface of the Earth, can be weird. And these weird shapes can be useful. Mathematical physics, for example, can represent the evolution of some complicated thing as a path drawn on a weird shape. Bringing what we know about geometry from years of study, and moving around rooms, to a problem that abstract makes our lives easier.
We use language that sounds like that of map-makers when discussing manifolds. We have maps. We gather together charts. The collection of charts describing a surface can be an atlas. All these words have common meanings. Mercifully, these common meanings don’t lead us too far from the mathematical meanings. We can even use the problem of mapping the surface of the Earth to understand manifolds.
If you love maps, the geography kind, you learn quickly that there’s no making a perfect two-dimensional map of the Earth’s surface. Some of these imperfections are obvious. You can distort shapes trying to make a flat map of the globe. You can distort sizes. But you can’t represent every point on the globe with a point on the paper. Not without doing something that really breaks continuity. Like, say, turning the North Pole into the whole line at the top of the map. Like in the Equirectangular projection. Or skipping some of the points, like in the Mercator projection. Or adding some cuts into a surface that doesn’t have them, like in the Goode homolosine projection. You may recognize this as the one used in classrooms back when the world had first begun.
But what if we don’t need the whole globe done in a single map? Turns out we can do that easily. We can make charts that cover a part of the surface. No one chart has to cover the whole of the Earth’s surface. It only has to cover some part of it. It covers its piece of the globe with something that looks like a common ordinary Euclidean space, where ordinary geometry holds. It’s the collection of charts that covers the whole surface. This collection of charts is an atlas. You have a manifold if it’s possible to make a coherent atlas. For this every point on the manifold has to be on at least one chart. It’s okay if a point is on several charts. It’s okay if some point is on all the charts. Like, suppose your original surface is a circle. You can represent this with an atlas of two charts. Each chart maps the circle, except for one point, onto a line segment. The two charts don’t both skip the same point. All but two points on this circle are on both charts of this atlas. That’s cool. What’s not okay is if some point can’t be coherently put onto some chart.
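The circle’s two-chart atlas can be written out concretely. In this sketch (my own construction) each chart maps the circle, minus one point, onto an open interval of angles, and a quick sampling confirms every point lands on at least one chart.

```python
# Two charts for the unit circle. Chart A covers everything except the
# point (1, 0); chart B covers everything except (-1, 0). Each sends a
# point of the circle to an angle in an open interval of the line.
from math import atan2, cos, sin, pi

def chart_a(x, y):
    t = atan2(y, x) % (2 * pi)
    return t if 0 < t < 2 * pi else None      # skips (1, 0)

def chart_b(x, y):
    t = atan2(y, x)
    return t if -pi < t < pi else None        # skips (-1, 0)

for k in range(12):
    t = 2 * pi * k / 12
    x, y = cos(t), sin(t)
    assert chart_a(x, y) is not None or chart_b(x, y) is not None
print("every sampled point is on at least one chart")
```

All but the two skipped points lie on both charts, which is exactly the overlap the essay describes.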
This sad fate can happen. Suppose instead of a circle you want to chart a figure-eight loop. That won’t work. The point where the figure crosses itself doesn’t look, locally, like a Euclidean space. It looks like an ‘x’. There’s no getting around that. There’s no atlas that can cover the whole of that surface. So that surface isn’t a manifold.
But many things are manifolds nevertheless. Toruses, the doughnut shapes, are. Möbius strips and Klein bottles are. Ellipsoids and hyperbolic surfaces are, or at least can be. Mathematical physics finds surfaces that describe all the ways the planets could move and still conserve the energy and momentum and angular momentum of the solar system. That cheesecloth surface, stretched through 54 dimensions, is a manifold. There are many possible atlases, with many more charts. But each of those means we can, at least locally, for particular problems, understand them the same way we understand cutouts of triangles and pentagons and circles on construction paper.
So to get back to cars: no one has ever said “my car runs okay, but I regret how I replaced the brake covers the moment I suspected they were wearing out”. Every car problem is easier when it’s done as soon as your budget and schedule allow.
I expected there to be a fair number of readers here in October. The A to Z project, particularly, implied that. A To Z months are exhausting, but they give me a lot of posts. And I’m as sure as I can be without actually checking that the number of posts is the biggest determining factor in how many readers I attract. There were 23 posts in October, compared to September’s 15 and my summertime usual of 12 to 14.
Then there’s the Playful Mathematics Education Blog Carnival. This posted in September — by six hours — but it was destined to bring new and, I hope, happy readers in. And that happened also. Here’s what the WordPress statistics showed me for the month:
So this was my highest-readership month since the blog started seven years ago. 2,010 page views, from 1,063 unique visitors. That’s also the greatest number of unique visitors I’ve had in one month, and the first time I’ve broken a thousand visitors. September had 1,505 page views from 874 unique visitors; August, 1,421 page views from 913 unique visitors.
There was a bit of an upswing in the number of likes: 94 of them issued in October, compared to September’s 65 and August’s 57. This is on the higher side for this year, but it is down a good bit from the comparable month two or three years ago. In June 2015, for my first A to Z, I drew over 500 likes; I don’t know where likers have gone.
There were 60 comments on the blog in October, partly people who liked or wanted to talk about A To Z topics, partly people suggesting others. It’s the greatest number of comments I’ve had in one month in two years now. September had 36 comments; August, 27. Have to go back to March 2016 to find a month when more people said anything around here. That, too, was an A-to-Z month, and one of the handful of months when I posted something every single day.
There were a lot of popular posts this month, naturally. This might be the first time in years that none of the top five were Reading the Comics posts. The A to Z and the Playful Mathematics Education Blog Carnival squeezed out very nearly everything:
52 countries sent me any readers in August. 58 did so in September. There were 16 single-reader countries in August and 14 in September. For busy October?
74 countries sent me any readers at all. 23 countries sent me a single reader for the month. The Czech Republic has sent me a single reader two months in a row now; Colombia, three months in a row.
Insights tells me that I started November with a total of 69,895 page views, from a logged 34,538 unique visitors. As ever, please recall the first couple years WordPress didn’t tell us anything about the unique visitor count, so for all I know there “should” be more.
In October I published 28,733 words, which is nearly double September’s total. Whew. I’d posted 142 things this year by the start of November, and gathered a total of 391 comments and 787 likes for the year to date. This averaged to 3.1 comments per post, up from 2.6 at the start of October. And 5.5 likes per posting, down from 5.8 at the start of October. The 142 posts through the start of November averaged 996 words each. That’s up from 946 words per post at the start of October. I’m going to crush myself beneath a pile of words that I meant to be less deep.
While putting together the last comics from a week ago I realized there was a repeat among them. And a pretty recent repeat too. I’m supposing this is a one-off, but who can be sure? We’ll get there. I figure to cover last week’s mathematically-themed comics in posts on Wednesday and Thursday, subject to circumstances.
As fits the joke, the bit of calculus in this textbook paragraph is wrong: the integral shown does not equal the expression the paragraph gives for it. This is even ignoring that we should expect, with an indefinite integral like this, a constant of integration. An indefinite integral like this is equal to a family of related functions, though it’s common shorthand to write out one representative function. You can confirm the error by differentiating the claimed answer; the result is nothing like the function being integrated, and differentiating an indefinite integral should get the original function back. Here are the rules you need to do that for yourself.
As I make it out, a correct indefinite integral would be:
Plus that “constant of integration” the value of which we can’t tell just from the function we want to indefinitely-integrate. I admit I haven’t double-checked that I’m right in my work here. I trust someone will tell me if I’m not. I’m going to feel proud enough if I can get the LaTeX there to display.
Stephen Beals’s Adult Children for the 27th has run already. It turned up in late March of this year. Michael Spivak’s Calculus is a good choice for representative textbook. Calculus holds its terrors, too. Even someone who’s gotten through trigonometry can find the subject full of weird, apparently arbitrary rules. And formulas like those in the above paragraph.
Rob Harrell’s Big Top for the 27th is a strip about the difficulties of splitting a restaurant bill. And they’ve not even got to calculating the tip. (Maybe it’s just a strip about trying to push the group to splitting the bill a way that lets you off cheap. I haven’t had to face a group bill like this in several years. My skills with it are rusty.)
I’ve settled to a pace of about four comics each essay. It makes for several Reading the Comics posts each week. But none of them are monsters that eat up whole evenings to prepare. Except that last week there were enough comics which made my initial cut that I either have to write a huge essay or I have to let last week’s strips spill over to Sunday. I choose that option. It’s the only way to square it with the demands of the A to Z posts, which keep creeping above a thousand words each however much I swear that this next topic is a nice quick one.
Roy Schneider’s The Humble Stumble for the 25th has some mathematics in a supporting part. It’s used to set up how strange Tommy is. Mathematics makes a good shorthand for this. It’s usually compact to write, important for word balloons. And it’s usually about things people find esoteric if not hilariously irrelevant to life. Tommy’s equation is an accurate description of what centripetal force would be needed to keep the Moon in a circular orbit at about the distance it really is. I’m not sure how to take Tommy’s doubts. If he’s just unclear about why this should be so, all right. Part of good mathematical learning can be working out the logic of some claim. If he’s not sure that Newtonian mechanics is correct — well, fair enough to wonder how we know it’s right. Spoiler: it is right. (For the problem of the Moon orbiting the Earth it’s right, at least to any reasonable precision.)
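Tommy’s equation can be checked against rounded reference values (the numbers below are mine, not the strip’s): the centripetal acceleration a near-circular lunar orbit requires matches the gravitational acceleration Earth supplies at that distance, to within about a percent.

```python
# Compare the centripetal acceleration the Moon's orbit needs with the
# gravitational acceleration Earth provides. Constants are rounded
# reference values for illustration.
from math import pi

r = 3.844e8          # mean Earth-Moon distance, meters
T = 27.32 * 86400    # sidereal month, seconds
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2

v = 2 * pi * r / T               # orbital speed for a circular orbit
centripetal = v * v / r          # acceleration needed to stay circular
gravity = GM / (r * r)           # acceleration gravity actually supplies
print(centripetal, gravity)      # both come out near 2.7e-3 m/s^2
```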
Stephan Pastis’s Pearls Before Swine for the 25th shows how we can use statistics to improve our lives. At least, it shows how tracking different things can let us find correlations. These correlations might give us information about how to do things better. It’s usually a shaky plan to act on a correlation before you have a working hypothesis about why the correlation should hold. But it can give you leads to pursue.
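The tracking idea is cheap to try. This sketch (invented data, with a hand-rolled Pearson coefficient) shows the sort of correlation hunt the strip gestures at:

```python
# Log two things over several days, then compute the Pearson correlation
# coefficient: near +1 means they rise and fall together, near 0 means
# no linear relationship. The data here is entirely made up.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

hours_slept = [6, 7, 8, 5, 9, 7]
mood_rating = [4, 6, 7, 3, 8, 6]
print(round(pearson(hours_slept, mood_rating), 3))   # close to 1
```

A high number here is a lead, not a conclusion; as the text says, you still want a hypothesis for why the two should move together.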
Eric the Circle for the 26th, this one by Vissoro, is a “two types of people in the world” joke. Given the artwork I believe it’s also riffing on the binary-arithmetic version of the joke. Which is, “there are 10 types of people in the world, those who understand binary and those who don’t”.
I got an irresistible topic for today’s essay. It’s courtesy Peter Mander, author of Carnot Cycle, “the classical blog about thermodynamics”. It’s bimonthly and it’s one worth waiting for. Some of the essays are historical; some are statistical-mechanics; many are mixtures of them. You could make a fair argument that thermodynamics is the most important field of physics. It’s certainly one that hasn’t gotten the popularization treatment it deserves, for its importance. Mander is doing something to correct that.
It is hard to think of limits without thinking of motion. The language even professional mathematicians use suggests it. We speak of the limit of a function “as x goes to a”, or “as x goes to infinity”. Maybe “as x goes to zero”. But a function is a fixed thing, a relationship between stuff in a domain and stuff in a range. It can’t change any more than January, AD 1988 can change. And ‘x’ here is a dummy variable, part of the scaffolding to let us find what we want to know. I suppose ‘x’ can change, but if we ever see it, something’s gone very wrong. But we want to use it to learn something about a function for a point like ‘a’ or ‘infinity’ or ‘zero’.
The language of motion helps us learn, to a point. We can do little experiments with a function that has trouble at zero itself: what should we expect it to be for x near zero? It’s irresistible to try out the calculator. Let x be 0.1. 0.01. 0.001. 0.0001. The numbers say this f(x) gets closer and closer to 1. That’s good, right? We know we can’t just put in an x of zero, because there’s some trouble that makes. But we can imagine creeping up on the zero we really wanted. We might spot some obvious prospects for mischief: what if x is negative? We should try -0.1, -0.01, -0.001 and so on. And maybe we won’t get exactly the right answer. But if all we care about is the first (say) three digits and we try out a bunch of x’s and the corresponding f(x)’s agree to those three digits, that’s good enough, right?
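That calculator experiment can be scripted. Here f(x) = sin(x)/x stands in for the function under discussion (my choice of a classic example with exactly this behavior: undefined at zero itself, values creeping up on 1 from both sides).

```python
# Creep up on zero from both sides and watch the values settle on 1.
from math import sin

def f(x):
    return sin(x) / x   # no good at x = 0 itself, fine everywhere else

for x in (0.1, 0.01, 0.001, 0.0001, -0.1, -0.01, -0.001):
    print(x, f(x))   # agreement with 1 improves as x shrinks
```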
This is good for giving an idea of what to expect a limit to look like. It should be, well, what it really really really looks like a function should be. It takes some thinking to see where it might go wrong. It might go to different numbers based on which side you approach from. But that seems like something you can rationalize. Indeed, we do; we can speak of functions having different limits based on what direction you approach from. Sometimes that’s the best one can say about them.
But it can get worse. It’s possible to make functions that do crazy weird things. Some of these look like you’re just trying to be difficult. Like, set f(x) equal to 1 if x is rational and 0 if x is irrational. If you don’t expect that to be weird you’re not paying attention. Can’t blame someone for deciding that falls outside the realm of stuff you should be able to find limits for. And who would make, say, an f(x) that was 1 if x was 0.1 raised to some power, but 2 if x was 0.2 raised to some power, and 3 otherwise? Besides someone trying to prove a point?
Fine. But you can make a function that looks innocent and yet acts weird if the domain is two-dimensional. Or more. It makes sense to say that the functions I wrote in the above paragraph should be ruled out of consideration. But an innocent-looking two-variable function can still have no limit at the origin: you get different results approaching in different directions. And the function doesn’t give obvious signs of imminent danger here.
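One classic function with exactly this behavior (my illustration) is f(x, y) = xy/(x² + y²). It is perfectly tame everywhere except the origin, yet it approaches different values along different straight lines through the origin:

```python
# f looks harmless away from the origin, where it is undefined.
def f(x, y):
    return x * y / (x ** 2 + y ** 2)

# Approach the origin along the line y = m*x: the value freezes at
# m / (1 + m**2), which depends on the direction you came from.
for m in (0.0, 1.0, 2.0):
    t = 1e-8                       # a point very near the origin
    print(m, f(t, m * t))          # 0.0, 0.5, 0.4 respectively
```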
We need a better idea. And we even have one. This took centuries of mathematical wrangling and arguments about what should and shouldn’t be allowed. This should inspire sympathy with Intro Calc students who don’t understand all this by the end of week three. But here’s what we have.
I need a supplementary idea first. That is the neighborhood. A point has a neighborhood if there’s some open set that contains it. We represent this by drawing a little blob around the point we care about. If we’re looking at the neighborhood of a real number, then this is a little interval, that’s all. When we actually get around to calculating, we make these neighborhoods little circles. Maybe balls. But when we’re doing proofs about how limits work, or how we use them to prove things, we make blobs. This “neighborhood” idea looks simple, but we need it, so here we go.
So start with a function, named ‘f’. It has a domain, which I’ll call ‘D’. And a range, which I want to call ‘R’, but I don’t think I need the shorthand. Now pick some point ‘a’. This is the point at which we want to evaluate the limit. This seems like it ought to be called the “limit point” and it’s not. I’m sorry. Mathematicians use “limit point” to talk about something else. And, unfortunately, it makes so much sense in that context that we aren’t going to change away from that.
‘a’ might be in the domain ‘D’. It might not. It might be on the border of ‘D’. All that’s important is that there be a neighborhood inside ‘D’ that contains ‘a’.
I don’t know what f(a) is. There might not even be an f(a), if a is on the boundary of the domain ‘D’. But I do know that everything inside the neighborhood of ‘a’, apart from ‘a’, is in the domain. So we can look at the values of f(x) for all the x’s in this neighborhood. This will create a set, in the range, that’s known as the image of the neighborhood. It might be a continuous chunk in the range. It might be a couple of chunks. It might be a single point. It might be some crazy-quilt set. Depends on ‘f’. And the neighborhood. No matter.
Now I need you to imagine the reverse. Pick a point in the range. And then draw a neighborhood around it. Then pick out what we call the pre-image of it. That’s all the points in the domain that get matched to values inside that neighborhood. Don’t worry about trying to do it; that’s for the homework practice. Would you agree with me that you can imagine it?
I hope so because I’m about to describe the part where Intro Calc students think hard about whether they need this class after all.
All right. Then I want something in the range. I’m going to call it ‘L’. And it’s special. It’s the limit of ‘f’ at ‘a’ if this following bit is true:
Think of every neighborhood you could pick of ‘L’. Can be big, can be small. Just has to be a neighborhood of ‘L’. Now think of the pre-image of that neighborhood. Is there always a neighborhood of ‘a’ inside that pre-image? It’s okay if it’s a tiny neighborhood. Just has to be an open neighborhood. It doesn’t have to contain ‘a’. You can allow a pinpoint hole there.
If you can always do this, however tiny the neighborhood of ‘L’ is, then the limit of ‘f’ at ‘a’ is ‘L’. If you can’t always do this — if there’s even a single exception — then there is no limit of ‘f’ at ‘a’.
I know. I felt like that the first couple times through the subject too. The definition feels backward. Worse, it feels like it begs the question. We suppose there’s an ‘L’ and then test these properties about it and then if it works we say we’re done? I know. It’s a pain when you start calculating this with specific formulas and all that, too. But supposing there is an answer and then learning properties about it, including whether it can exist? That’s a slick trick. We can use it.
Thing is, the pain is worth it. We can calculate with it and not have to out-think tricky functions. It works for domains with as many dimensions as you need. It works for limits that aren’t inside the domain. It works with domains and ranges that aren’t real numbers. It works for functions with weird and complicated domains. We can adapt it if we want to consider limits that are constrained in some way. It won’t be fooled by tricks like I put up above, the f(x) with different rules for the rational and irrational numbers.
So mathematicians shrug, and do enough problems that they get the hang of it, and use this definition. It’s worth it, once you get there.
Brian Fies’s The Last Mechanical Monster for the 24th is a repeat. I included it last October, when I first saw it on GoComics. Still, the equations in it are right, for ballistic flight. Ballistic means that something gets an initial velocity in a particular direction and then travels without any further impulse. Just gravity. It’s a pretty good description for any system where acceleration’s done for very brief times. So, things fired from guns. Rockets, which typically have thrust for a tiny slice of their whole journey and coast the rest of the time. Anything that gets dropped. Or, as in here, a mad scientist training his robot to smash his way through a bank, and getting flung so.
The symbols in the equations are not officially standardized. But they might as well be. ‘v’ here means the speed that something’s tossed into the air. It really wants to be ‘velocity’, but velocity, in the trades, carries with it directional information. And here that’s buried in ‘θ’, the angle with respect to vertical that the thing starts flight in. ‘g’ is the acceleration of gravity, near enough constant if you don’t travel any great distance over the surface of the Earth. ‘y0’ is the height from which the thing started to fly. And so then ‘d’ becomes the distance travelled, while ‘t’ is the time it takes to travel. I’m impressed the mad scientist (the one from the original Superman cartoon, in 1941; Fies wrote a graphic novel about that man after his release from jail in the present day) keeps these equations so readily at hand.
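The flight equations can be sketched numerically. This is my own illustration, not Fies’s working; it assumes, as the comic’s symbols do, that θ is measured from the vertical:

```python
import math

def ballistic(v, theta_deg, y0, g=9.81):
    """Horizontal distance and flight time for something launched
    at speed v, at angle theta measured from the vertical (per the
    comic's convention), from starting height y0."""
    theta = math.radians(theta_deg)
    vx = v * math.sin(theta)   # horizontal speed (angle from vertical)
    vy = v * math.cos(theta)   # vertical speed
    # Time until the height returns to zero, from y0 + vy*t - g*t^2/2 = 0:
    t = (vy + math.sqrt(vy**2 + 2 * g * y0)) / g
    d = vx * t                 # ballistic: no thrust after launch
    return d, t

# Thrown at 10 m/s, 45 degrees, from ground level. With y0 = 0 this
# reduces to the textbook range v^2/g at 45 degrees, about 10.2 m:
d, t = ballistic(10, 45, 0)
```

Launched straight up (θ = 0) the horizontal distance is zero, as it should be, which is a cheap sanity check on the angle convention.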
Greg Cravens’s Hubris! for the 24th jokes about the dangers of tangled earbuds. For once, mathematics can help! There’s even a whole field of mathematics about this. Not earbuds specifically, but about knots. It’s called knot theory. I trust the field was named by someone caught by surprise by the question. A knot, in this context, is made of a loop of thread that’s assumed to be infinitely elastic, so you can always stretch it out or twist it around some. And it’s frictionless, so you can slide the surface against itself without resistance. And you can push it along an end. These are properties that real-world materials rarely have.
But. They can be close enough. And knot theory tells us some great, useful stuff. Among them: your earbuds are never truly knotted. To be a knot at all, the string has to loop back and join itself. That is, it has to be like a rubber band, or maybe an infinity scarf. If it’s got loose ends, it’s no knot. It’s topologically just a straight thread with some twists made in the surface. They can come loose.
All that holds these earbuds together is the friction of the wires against each other. (That the earbud wire splits into a left and a right bud doesn’t matter, here.) They can be loosened. Let me share how.
My love owns, among other things, a marionette dragon. And once, despite it being stored properly, the threads for it got tangled, and those things are impossible to untangle on purpose. I, having had one (1) whole semester of knot theory in grad school, knew an answer. I held the marionette upside-down, by the dragon. The tangled wires and the crossed sticks that control it hung loose underneath. And then shook the puppet around. This made the wires, and the sticks, shake around. They untangled, quickly.
What held the marionette strings, and what holds earbuds, together, is just friction. It’s hard to make the wire slide loosely against itself. Shaking it around, though? That gives it some energy. That gives the wire some play. And here we have one of the handful of cases where entropy does something useful for us. There’s a limit to how tightly a wire can loop around itself. There’s no limit to how loosely it can go. Little, regular, random shakes will tend to loosen the wire. When it’s loose enough, it untangles naturally.
You can help this along. We all know how. Use a pen-point, a toothpick, or a needle to pry some of the wires apart. That makes the “knot” easier to remove. This works by the same principle. If you reduce how much the wire contacts itself, you reduce the friction on the wire. The wire can slide more easily into the un-knot that it truly is. The comic’s tech support guy gave up too easily.
Samson’s Dark Side of the Horse for the 25th is the Roman numerals joke for this essay. And a cute bit about coincidences between what you can spell out with Roman numerals and sounds people might make. Writing out calculations evokes peculiar, magical prowess. When they include, however obliquely, words? Or parts of words? Can’t blame people for seeing the supernatural in it.
We’re at the end of another month. So it’s a good chance to set out requests for the next several weeks’ worth of my mathematics A-To-Z. As I say, I’ve been doing this piecemeal so that I can keep track of requests better. I think it’s been working out, too.
If you have any mathematical topics with a name that starts N through T, let me know! I usually go by a first-come, first-served basis for each letter. But I will vary that if I realize one of the alternatives is more suggestive of a good essay topic. And I may use a synonym or an alternate phrasing if both topics for a particular letter interest me.
Also when you do make a request, please feel free to mention your blog, Twitter feed, Mathstodon account, or any other project of yours that readers might find interesting. I’m happy to throw in a mention as I get to the word of the day.
So! I’m open for nominations. Here are the words I’ve used in past A to Z sequences. I probably don’t want to revisit them. But I will think over, if I get a request, whether I might have new opinions.
Today’s request is another from John Golden, @mathhombre on Twitter and similarly on Blogspot. It’s specifically for Kelvin — “scientist or temperature unit”, the sort of open-ended goal I delight in. I decided on the scientist. But that’s a lot even for what I honestly thought would be a quick little essay. So I’m going to take out a tiny slice of a long and amazingly fruitful career. There’s so much more than this.
Before I get into what I did pick, let me repeat an important warning about historical essays. Every history is incomplete, yes. But any claim about something being done for the first time is simplified to the point of being wrong. Any claim about an individual discovering or inventing something is simplified to the point of being wrong. Everything is more complicated and, especially, more ambiguous than this. If you do not love the challenge of working out a coherent narrative when the most discrete and specific facts are also the ones that are trivia, do not get into history. It will only break your heart and mislead your readers. With that disclaimer, let me try a tiny slice of the life of William Thomson, the Baron Kelvin.
Kelvin (the scientist).
The great thing about a magnetic compass is that it’s easy. Set the thing on an axis and let it float freely. It aligns itself to the magnetic poles. It’s easy to see why this looks like magic.
The trouble is that it’s not quite right. It’s near enough for many purposes. But the direction a magnetic compass picks out as north is not the true geographic north. Fortunately, we’ve got a fair idea just how far off north that is. It depends on where you are. If you have a rough idea where you already are, you can make a correction. We can print up charts saying how much of a correction to make.
The trouble is that it’s still not quite right. The location of the magnetic north and south poles wanders. Fortunately we’ve got a fair idea of how quickly it’s moving, and in what direction. So if you have a rough idea how out of date your chart is, and what direction the poles were moving in, you can make a correction. We can communicate how much the variance between true north and magnetic north has changed.
The trouble is that it’s still not quite right. The size of the variation depends on the season of the year. But all right; we should have a rough idea what season it is. We can correct for that. The size of the variation also depends on what time of day it is. Compasses point farther east at around 8 am (sun time) than they do the rest of the day, and farther west around 1 pm. At least they did when Alan Gurney’s Compass: A Story of Exploration and Innovation was published. I would be unsurprised if that’s changed since the book came out a dozen years ago. Still. These are all, we might say, global concerns. They’re based on where you are and when you look at the compass. But they don’t depend on you, the specific observer.
The trouble is that it’s still not quite right yet. Almost as soon as compasses were used for navigation, on ships, mariners noticed the compass could vary. And not just because compasses were often badly designed and badly made. The ships themselves got in the way. The problem started with guns, the iron of which led compasses astray. When it was just the ship’s guns the problem could be coped with. Set the compass binnacle far from any source of iron, and the error should be small enough.
The trouble is when the time comes to make ships with iron. There are great benefits you get from cladding ships in iron, or making them of iron altogether. Losing the benefits of navigation, though … that’s a bit much.
There’s an obvious answer. Suppose you know the construction of the ship throws off compass bearings. Then measure what the compass reads, at some point when you know what it should read. Use that to correct your measurements when you aren’t sure. From the early 1800s mariners could use a method called “swinging the ship”, setting the ship at known angles and comparing what the compass read. It’s a bit of a chore. And you should arrange things you need to do so that it’s harder to make a careless mistake at them.
In the 1850s John Gray of Liverpool patented a binnacle — the little pillar that holds the compass — which used the other obvious but brilliant approach. If the iron which builds the ship sends the compass awry, why not put iron near the compass to put the compass back where it should be? This set up a contraption of a binnacle surrounded by adjustable, correcting magnets.
Enter finally William Thomson, who would become Baron Kelvin in 1892. In 1871 the magazine Good Words asked him to write an article about the marine compass. In 1874 he published his first essay on the subject. The second part appeared five years after that. I am not certain that this is directly related to the tiny slice of story I tell. I just mention it to reassure every academic who’s falling behind on their paper-writing, which is all of them.
But come the 1880s Thomson patented an improved binnacle. Thomson had the sort of talents normally associated only with the heroes of some lovable yet dopey space-opera of the 1930s. He was a talented scientist, competent in thermodynamics and electricity and magnetism and fluid flow. He was a skilled mathematician, as you’d need to be to keep up with all that and along the way prove the Stokes theorem. (This is one of those incredibly useful theorems that gives information about the interior of a volume using only integrals over the surface.) He was a magnificent engineer, with a particular skill at developing instruments that would brilliantly measure delicate matters. He’s famous for saving the trans-Atlantic telegraph cable project. He recognized that what was needed was not more voltage to drive signal through three thousand miles of dubiously made copper wire, but rather ways to pick up the feeble signals that could come across, and amplify them into usability. And also described the forces at work on a ship that is laying a long line of submarine cable. And he was a manufacturer, able to turn these designs into mass-produced products. This through collaborating with James White, of Glasgow, for over half a century. And a businessman, able to convince people and organizations to use the things. He’s an implausible protagonist; and yet, there he is.
Thomson’s revision for the binnacle made it simpler. A pair of spheres, flanking the compass, and adjustable. The Royal Museums Greenwich web site offers a picture of this sort of system. It’s not so shiny as others in the collection. But this angle shows how adjustable the system would be. It’s a design that shows brilliance behind it. What work you might have to do to use it is obvious. At least it’s obvious once you’re told the spheres are adjustable. To reduce a massive, lingering, challenging problem to something easy is one of the great accomplishments of any practical mathematician.
This was not all Thomson did in maritime work. He’d developed an analog computer which would calculate the tides. Wikipedia tells me that Thomson claimed a similar mechanism could solve arbitrary differential equations. I’d accept that claim, if he made it. Thomson also developed better tools for sounding depths. And developed compasses proper, not just the correcting tools for binnacles. A maritime compass is a great practical challenge. It has to be able to move freely, so that it can give a correct direction even as the ship changes direction. But it can’t move too freely, or it becomes useless in rolling seas. It has to offer great precision, or it loses its use in directing long journeys. It has to be quick to read, or it won’t be consulted. Thomson designed a compass that was, my readings indicate, a great fit for all these constraints. By the time of his death in 1907 Kelvin and White (the company had various names) had made something like ten thousand compasses and binnacles.
And this from a person attached to all sorts of statistical mechanics stuff and who’s important for designing electrical circuits and the like.
It’s another week with several on-topic installments of Frazz. Again, Jef Mallet, you and I live in the same metro area. Wave to me at the farmer’s market or something. I’m kind of able to talk to people in real life, if I can keep in view three different paths to escape and know two bathrooms to hide in. Horrock’s is great for that.
Jef Mallet’s Frazz for the 22nd is a bit of wordplay. It’s built on the association between “negative” and “wrong”. And the confusing fact that multiplying a negative number by a negative number results in a positive number. It sounds like a trick. Still, negative numbers are tricky. The name connotes something that’s gone a bit wrong. It took time to understand what they were and how they should work. This weird multiplication rule follows from that. If we don’t suppose this to be true, then we break other ideas we have about multiplication and comparative sizes and such. Mathematicians needed to get comfortable with negative numbers. For a long time, for example, mathematicians would treat x² + 4x = 4 and x² − 4x = 4 as different kinds of polynomials to solve. Today we see a -4 as no harder than a +4, now that we’re good at multiplying it out. And I have read, but have not seen explained, that there was uncertainty among the philosophers of mathematics about whether we should consider negative numbers, as a group, to be greater than or less than positive numbers. (I have reasons for thinking this a mighty interesting speculation.) There’s reasons to doubt them, is what I have to say.
Bob Weber Jr and Jay Stephens’s Oh Brother for the 22nd reminds me of my childhood. At some point I was pairing up the counting numbers and the letters of the alphabet, and realized that the alphabet ended while the numbers did not. Something about that offended my young sense of justice. I’m not sure how, anymore. But that it was always possible to find a bigger number than whatever you thought was the biggest caught my imagination.
There is, surely, a largest finite number that anybody will ever use for something, even if it’s just hyperbole. I’m curious what it will be. Surely we can’t have already used it. A number named Skewes’s Number was famous, for a while, as the largest number actually used in a proof of something. The fame came from Isaac Asimov writing an essay about the number, and why someone might care, and how hard it is just describing how big the number is in a comprehensible way. Wikipedia tells me this number’s been far exceeded by, among other things, something called Rayo’s Number. It’s “the smallest number bigger than any finite number named by an expression in the language of set theory with a googol symbols or less” (plus some technical points to keep you from cheating). Which, all right, but I’d like to know if we think the first digit is a 1, maybe a 2? Somehow I don’t demand that of Skewes, perhaps because I read that Asimov essay when I was at an impressionable age.
Jef Mallet’s Frazz for the 23rd has Caulfield talk about a fraction divided by a fraction. And particularly he says “a fraction divided by a fraction is just a fraction times a flipped fraction”. This offends me, somehow. This even though that is how I’d calculate the value of the division, if I needed to know that. But it seems to me like automatically going to that process skips recognizing that, say, one-half divided by one-quarter shouldn’t be surprising if it turns out not to be a fraction. Well, Caulfield’s just looking to cause trouble with a string of wordplay. I can think of how to divide a fraction by a fraction and get zero.
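Caulfield’s rule, multiplying by the flipped fraction, is easy to verify mechanically, and doing so shows the result needn’t look like a fraction at all. A sketch using Python’s exact fractions; the specific values are my own, not the comic’s:

```python
from fractions import Fraction

a = Fraction(1, 2)
b = Fraction(1, 4)

# Dividing by a fraction equals multiplying by its reciprocal:
assert a / b == a * Fraction(b.denominator, b.numerator)

print(a / b)               # 2 -- a whole number, not a "proper" fraction
print(Fraction(0, 1) / b)  # 0 -- dividing a fraction by a fraction, getting zero
```

The zero case is the one the essay mentions: 0 is a perfectly good fraction (0/1), and dividing it by anything nonzero gives zero.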
Ashleigh Brilliant’s Pot-Shots for the 23rd promises to recapitulate the whole history of mathematics in a single panel. Ambitious bit of work. It’s easy to picture going from the idea of 1 to any of the positive whole numbers, though. It’s so easy it doesn’t even need humans to do it; animals can count, at least a bit. We just carry on to a greater extent than the crows or the raccoons do, so far as we’ve heard. From those, it takes some squinting, but you can think of negative whole numbers. And from that you get zero pretty quickly. You can also get rational numbers. The western mathematical tradition did this by looking at … er … ratios, that something might be to another thing as two is to five. Circumlocutions like that. Getting to irrational numbers is harder. Can be harder. Some irrational numbers beg you to notice them: the square root of two, for example. Square root of three. Numbers that come up from solving polynomial equations. But there are more numbers than those. Many more numbers. You might suspect the existence of a transcendental number, that isn’t the root of any polynomial that’s decently behaved. But finding one? Or finding that there are more transcendental numbers than there are algebraic numbers? This takes a certain brilliance to suspect, and to prove out. But we can get there with rational numbers — which we get to from collections of ones — and the idea of cutting sets of numbers into those smaller than and those bigger than something. Ashleigh Brilliant has more truth than, perhaps, he realized when he drew this panel.
Niklas Eriksson’s Carpe Diem for the 24th has goldfish work out the shape of space. A goldfish in this case has the advantage of being able to go nearly everywhere in the space. But working out what the universe must look like, when you can only run local experiments, is a great geometric problem. It’s akin to working out that the Earth must be a sphere, and about how big a sphere, from the surveying job one can do without travelling more than a few hundred kilometers.
For today’s entry, Iva Sallay, of Find The Factors, gave me an irresistible topic. I did not resist.
What’s purple and commutes?
An Abelian grape.
Whatever else you say about mathematics we are human. We tell jokes. I will tell some here. You may not understand the words in them. That’s all right. From the Abelian grape there, you gather this is some manner of wordplay. A pun, particularly. It’s built on a technical term. “Abelian groups” come from (not high school) Algebra. In an Abelian group, the group multiplication commutes. That is, if ‘a’ and ‘b’ are any things in the group, then their product “ab” is the same as “ba”. That is, the group works like ordinary addition on numbers does. We say “Abelian” in honor of Niels Henrik Abel, who taught us some fascinating stuff about polynomials. Puns are a common kind of humor. So common, they’re almost base. Even a good pun earns less laughter than groans.
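The commuting property itself is easy to demonstrate by brute force. A toy sketch in Python, checking that addition modulo 5 (my own stand-in for a group operation, not anything from the essay) commutes for every pair of elements:

```python
n = 5
elements = range(n)

# The group operation: addition modulo n. The integers 0..n-1 under
# this operation form an Abelian group.
def op(a, b):
    return (a + b) % n

# Abelian means op(a, b) == op(b, a) for every pair in the group:
assert all(op(a, b) == op(b, a) for a in elements for b in elements)
```

Matrix multiplication fails this same pairwise check, which is why “the group works like ordinary addition” is worth saying at all.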
But mathematicians make many puns. A typical page of mathematics jokes has a whole section of puns. “What’s yellow and equivalent to the Axiom of Choice? Zorn’s Lemon.” “What’s nonorientable and lives in the sea? Möbius Dick.” “One day Jesus said to his disciples, ‘The Kingdom of Heaven is like 3x² + 8x − 9’. Thomas looked very confused and asked Peter, ‘What does the teacher mean?’ Peter replied, ‘Don’t worry. It’s just another one of his parabolas’.” And there are many jokes built on how it is impossible to tell the difference between the sounds of “π” and “pie”.
It shouldn’t surprise that mathematicians make so many puns. Mathematics trains people to know definitions. To think about precisely what we mean. Puns ignore definitions. They build nonsense out of the ways that sounds interact. Mathematicians practice how to make things interact, even if they don’t know or care what the underlying things are. If you’ve gotten used to proving things about ‘ab’, without knowing what ‘a’ or ‘b’ are, it’s difficult to avoid turning “poles on the half-plane” (which matters in some mathematical physics) into a story about Polish people on an aircraft.
If there’s a flaw to this kind of humor it’s that these jokes may sound juvenile. One of the first things that strikes kids as funny is that a thing might have several meanings. Or might sound like another thing. “Why do mathematicians like parks? Because of all the natural logs!”
Jokes can be built tightly around definitions. “What do you get if you cross a mosquito with a mountain climber? Nothing; you can’t cross a vector with a scalar.” “There are 10 kinds of people in the world, those who understand binary mathematics and those who don’t.” “Life is complex; it has real and imaginary parts.”
There are more sophisticated jokes. Many of them are self-deprecating. “A mathematician is a device for turning coffee into theorems.” “An introvert mathematician looks at her shoes while talking to you. An extrovert mathematician looks at your shoes.” “A mathematics professor is someone who talks in someone else’s sleep”. “Two people are adrift in a hot air balloon. Finally they see someone and shout down, ‘Where are we?’ The person looks up, and studies them, watching the balloon drift away. Finally, when they are barely in shouting range, the person on the ground shouts back, ‘You are in a balloon!’ The first passenger curses their luck at running across a mathematician. ‘How do you know that was a mathematician?’ ‘Because her answer took a long time, was perfectly correct, and absolutely useless!’” These have the form of being about mathematicians. But they’re not really. It would be the same joke to say “a poet is a device for turning coffee into couplets”, the sleep-talker anyone who teaches, or have the hot-air balloonists discover a lawyer or a consultant.
Some of these jokes get more specific, with mathematics harder to extract from the story. The tale of the nervous flyer who, before going to the conference, sends a postcard that she has a proof of the Riemann hypothesis. She arrives and admits she has no such thing, of course. But she sends that word ahead of every conference. She knows if she died in a plane crash after that, she’d be famous forever, and God would never give her that. (I wonder if Ian Randal Strock’s little joke of a story about Pierre de Fermat was an adaptation of this joke.) You could recast the joke for physicists uniting gravity and quantum mechanics. But I can’t imagine a way to make this joke about an ISO 9000 consultant.
A dairy farmer knew he could be milking his cows better. He could surely get more milk, and faster, if only the operations of his farm were arranged better. So he hired a mathematician to find the optimal way to configure everything. The mathematician toured every part of the pastures, the milking barn, the cows, everything relevant. And then the mathematician set to work devising a plan for the most efficient possible cow-milking operation. The mathematician declared, “First, assume a spherical cow.”
This joke is very mathematical. I know of no important results actually based on spherical cows. But the attitude that tries to make spheres of cows comes from observing mathematicians. To describe any real-world process is to make a model of that thing. A model is a simplification of the real thing. You suppose that things behave more predictably than the real thing. You trust the error made by this supposition is small enough for your needs. A cow is complicated, all those pointy ends and weird contours. A sphere is easy. And, besides, cows are funny. “Spherical cow” is a funny string of sounds, at least in English.
The spherical-cow approach parodies the work mathematicians do. Many mathematical jokes are burlesques of deductive logic. Or not even burlesques. Charles Dodgson, known to humans as Lewis Carroll, wrote this in Symbolic Logic:
“No one, who means to go by the train and cannot get a conveyance, and has not enough time to walk to the station, can do without running;
This party of tourists mean to go by the train and cannot get a conveyance, but they have plenty of time to walk to the station.
∴ This party of tourists need not run.”
[ Here is another opportunity, gentle Reader, for playing a trick on your innocent friend. Put the proposed Syllogism before him, and ask him what he thinks of the Conclusion.
He will reply “Why, it’s perfectly correct, of course! And if your precious Logic-book tells you it isn’t, don’t believe it! You don’t mean to tell me those tourists need to run? If I were one of them, and knew the Premises to be true, I should be quite clear that I needn’t run — and I should walk!”
And you will reply “But suppose there was a mad bull behind you?”
And then your innocent friend will say “Hum! Ha! I must think that over a bit!” ]
The punch line is defused by the text being so educational. And by being written in the 19th century, when it was bad form to excise any word from any writing. But you can recognize the joke, and why it should be a joke.
Not every mathematical-reasoning joke features some manner of cattle. Some are legitimate:
Claim. There are no uninteresting whole numbers.
Proof. Suppose there is a smallest uninteresting whole number. Call it N. That N is the smallest uninteresting whole number is an interesting fact. Therefore N is not an uninteresting whole number, a contradiction.
Three mathematicians step up to the bar. The bartender asks, “you all want a beer?” The first mathematician says, “I don’t know.” The second mathematician says, “I don’t know.” The third says, “Yes”.
Some mock reasoning uses nonsense methods to get a true conclusion. It’s the fun of watching Mister Magoo walk unharmed through a construction site to find the department store exchange counter.
Venn Diagrams are not by themselves jokes (most of the time). But they are a great structure for jokes. And easy to draw, which is great for those of us who want to be funny but don’t feel sure about our drafting abilities.
And then there are personality jokes. Mathematics encourages people to think obsessively. Obsessive people are often funny people. Alexander Grothendieck was one of the candidates for “greatest 20th century mathematician”. His reputation is that he worked so well on abstract problems that he was incompetent at practical ones. The story goes that he was demonstrating something about prime numbers and his audience begged him to speak about a specific number, that they could follow an example. And that he grumbled a bit and, finally, said, “57”. It’s not a prime number. But if you speak of “Grothendieck’s prime”, many will recognize what you mean, and grin.
There are more outstanding, preposterous personalities. Paul Erdös was prolific, and a restless traveller. The stories go that he would show up at some poor mathematician’s door and stay with them several months. And then co-author a paper with the elevator operator. (Erdös is also credited as the originator of the “coffee into theorems” quip above.) John von Neumann was supposedly presented with this problem:
Two trains are on the same track, 60 miles apart, heading toward each other, each travelling 30 miles per hour. A fly travels 60 miles per hour, leaving one engine flying toward the other. When it reaches the other engine it turns around immediately and flies back to the other engine. This is repeated until the two trains crash. How far does the fly travel before the crash?
The first, hard way to do this is to realize how far the fly travels is a series. It starts at, let’s say, the left engine and flies to the right. Add to that the distance back from the right train to the left train, closer now. Then left to the right again. Right to left. This is a bunch of calculations. Most people give up on that and realize the problem is easier. The trains will crash in one hour. The fly travels 60 miles per hour for an hour. It’ll fly 60 miles total. John von Neumann, say witnesses, had the answer instantly. Had he recognized the trick? “I summed the series.”
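The series von Neumann summed can at least be delegated to a computer. A sketch of the leg-by-leg calculation, using the problem’s own numbers:

```python
# Trains 60 miles apart, each at 30 mph; the fly goes 60 mph.
gap = 60.0          # distance between the trains, in miles
fly_speed = 60.0
train_speed = 30.0

total = 0.0
for _ in range(60):  # plenty of legs for the sum to converge
    # The fly and the oncoming train close the gap at 60 + 30 = 90 mph.
    leg_time = gap / (fly_speed + train_speed)
    total += fly_speed * leg_time
    # Meanwhile both trains kept moving; the new gap is what's left.
    gap -= 2 * train_speed * leg_time

print(total)  # converges to 60.0 -- the easy way's 60 mph for 1 hour
```

Each leg is a third the length of the one before (40 + 40/3 + 40/9 + …), a geometric series summing to 60, which is the content of the punch line.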
The personalities can be known more remotely, from a handful of facts about who they were or what they did. “Cantor did it diagonally.” Georg Cantor is famous for great thinking about infinitely large sets. His “diagonal proof” shows the set of real numbers must be larger than the set of rational numbers. “Fermat tried to do it in the margin but couldn’t fit it in.” “Galois did it on the night before.” (Évariste Galois wrote out important pieces of group theory the night before a duel. It went badly for him. French politics of the 1830s.) Every field has its celebrities. Mathematicians learn just enough about theirs to know a couple of jokes.
The jokes can attach to a generic mathematician personality. “How can you possibly visualize something that happens in a 12-dimensional space?” “Easy, first visualize it in an N-dimensional space, and then let N go to 12.” Three statisticians go hunting. They spot a deer. One shoots, missing it on the left. The second shoots, missing it on the right. The third leaps up, shouting, “We’ve hit it!” An engineer and a mathematician are sleeping in a hotel room when the fire alarm goes off. The engineer ties the bedsheets into a rope and shimmies out of the room. The mathematician looks at this, unties the bedsheets, sets them back on the bed, declares, “this is a problem already solved” and goes back to sleep. (Engineers and mathematicians pair up a lot in mathematics jokes. I assume in engineering jokes too, but that the engineers make wrong assumptions about who the joke is on. If there’s a third person in the party, she’s a physicist.)
Do I have a favorite mathematics joke? I suppose I must. There are jokes I like better than others, and there are — I assume — finitely many different mathematics jokes. So I must have a favorite. What is it? I don’t know. It must vary with the day and my mood and the last thing I thought about. I know a bit of doggerel keeps popping into my head, unbidden. Let me close by giving it to you.
Integral z-squared dz
From 1 to the cube root of 3
Times the cosine
Of three π over nine
Equals log of the cube root of e.
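For anyone who wants to check the doggerel, written out in symbols it is a true statement:

```latex
\int_{1}^{\sqrt[3]{3}} z^2 \, dz \times \cos\left( \frac{3\pi}{9} \right) = \log \sqrt[3]{e}
```

The integral comes to 2/3, the cosine of π/3 is 1/2, and the natural logarithm of the cube root of e is 1/3, so both sides are one-third.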
This may not strike you as very funny. I’m not sure it strikes me as very funny. But it keeps showing up, all the time. That has to add up.
Thaves’s Frank and Ernest for the 18th is a bit of wordplay. There’s something interesting culturally about phrasing “lots of math, but no chemistry”. Algorithms as mathematics makes sense. Much of mathematics is about finding processes to do interesting things. Algorithms, and the mathematics which justifies them, can at least in principle be justified with deductive logic. And we like to think that the universe must make deductive-logical sense. So it is easy to suppose that something mathematical simply must make logical sense.
Chemistry, though. It’s a metaphor for whatever the difference is between a thing’s roster of components and the effect of the whole. The suggestion is that it is mysterious and unpredictable. It’s an attitude strange to actual chemists, who have a rather good understanding of why most things happen. My suspicion is that this sense of chemistry is old, dating to before we had a good understanding of why chemical bonds work. We have that understanding thanks to quantum mechanics, and its mathematical representations.
But we can still allow for things that happen but aren’t obvious. When we write about “emergent properties” we describe things which are inherent in whatever we talk about. But they only appear when the things are a large enough mass, or interact long enough. Some things become significant only when they have enough chance to be seen.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 18th is about mathematicians’ favorite Ancient Greek philosopher they haven’t actually read. (In fairness, Zeno is hard to read, even for those who know the language.) Zeno’s famous for four paradoxes, the most familiar of which is alluded to here. To travel across a space requires travelling across half of it first. But this applies recursively. To travel any distance requires accomplishing infinitely many partial-crossings. How can you do infinitely many things, each of which take more than zero time, in less than an infinitely great time? But we know we do this; so, what aren’t we understanding? A callow young mathematics major would answer: well, pick any tiny interval of time you like. All but a handful of the partial-crossings take less than your tiny interval of time. This seems like a sufficient answer and reason to chuckle at philosophers. But the philosopher can answer: fine; an instant has zero time elapse during it. Nothing must move during that instant, then. So when does movement happen, if there is no movement during any of the moments of time? Reconciling these two points slows the mathematician down.
Patrick Roberts’s Todd the Dinosaur for the 19th mentions fractions. It’s only used to list a kind of mathematics problem a student might feign unconsciousness rather than do. And takes quite little space in the word balloon to describe. It’d be the same joke if Todd were asked to come up and give a ten-minute presentation on the Battle of Bunker Hill.
Julie Larson’s The Dinette Set for the 19th mentions the Rubik’s Cube. Sometime I should do a proper essay about its mathematics. Any Rubik’s Cube can be solved in at most 20 moves. And it’s apparently known there are some cube configurations that take at least 20 moves, so, that’s nice to have worked out. But there are many approaches to solving a cube, none of which I am competent to do. Some algorithms are, apparently, easier for people to learn, at the cost of taking more steps. And that’s fine. You should understand something before you try to do it efficiently.
Some mathematics escapes mathematicians and joins culture. This is one such. The monkeys are part of why. They’re funny and intelligent and sad and stupid and deft and clumsy, and they can sit at a keyboard and almost look in place. They’re so like humans, except that we empathize with them. To imagine lots of monkeys, and to put them to some silly task, is compelling.
The metaphor traces back to a 1913 article by the mathematical physicist Émile Borel which I have not read. Searching the web I find much more comment about it than I find links to a translation of the text. And only one copy of the original, in French. And that page wants €10 for it. So I can tell you what everybody says was in Borel’s original text, but can’t verify it. The paper’s title is “Statistical Mechanics and Irreversibility”. From this I surmise that Borel discussed one of the great paradoxes of statistical mechanics. If we open a bottle of one gas in an airtight room, it disperses through the room. Why doesn’t every molecule of gas just happen, by chance, to end up back where it started? It does seem that if we waited long enough, it should. It’s unlikely it would happen on any one day, but give it enough days …
But let me turn to the many web sites that are surely not all copying Wikipedia on this. Borel asked us to imagine a million monkeys typing ten hours a day. He posited it was possible but extremely unlikely that they would exactly replicate all the books of the richest libraries of the world. But that would be more likely than the atmosphere in a room un-mixing like that. Fair enough, but we’re not listening anymore. We’re thinking of monkeys. Borel’s is a fantastic image. It would see some adaptation in the years since. The physicist Arthur Eddington, in 1928, made it an army of monkeys, with their goal being the writing of all the books in the British Museum. By 1960 Bob Newhart had an infinite number of monkeys and typewriters, and a goal of all the great books. Stating the premise gets a laugh, one I doubt the setup would get today. I’m curious whether Newhart brought the idea to the mass audience. (Google NGrams for “monkeys at typewriters” suggest that phrase was unwritten, in books, before about 1965.) We may owe Bob Newhart thanks for a lot of monkeys-at-typewriters jokes.
Newhart has a monkey hit on a line from Hamlet. I don’t know if it was Newhart that set the monkeys after Shakespeare particularly, rather than some other great work of writing. Shakespeare does seem to be the most common goal now. Sometimes the number of monkeys diminishes, to a thousand or even to one. Some people move the monkeys off of typewriters and onto computers. Some take the cowardly measure of putting the monkeys at “keyboards”. The word is ambiguous enough to allow for typewriters, computers, and maybe a Mergenthaler Linotype. The monkeys now work 24 hours a day. This will be a comment someday about how bad we allowed pre-revolutionary capitalism to get.
The cultural legacy of monkeys-at-keyboards might well itself be infinite. It turns up in comic strips every few weeks at least. Television shows, usually writing for a comic beat, mention it. Computer nerds doing humor can’t resist the idea. Here’s a video of a 1979 Apple ][ program titled THE INFINITE NO. OF MONKEYS, which used this idea to show programming tricks. And it’s a great philosophical test case. If a random process puts together a play we find interesting, has it created art? No deliberate process creates a sunset, but we can find in it beauty and meaning. Why not words? There’s likely a book to write about the infinite monkeys in pop culture. Though the quotations of original materials would start to blend together.
But the big question. Have the monkeys got a chance? In a break from every probability question ever, the answer is: it depends on what the question precisely is. Occasional real-world experiments-cum-art-projects suggest that actual monkeys are worse typists than you’d think. They do more of bashing the keys with a stone before urinating on it, a reminder of how slight is the difference between humans and our fellow primates. So we turn to abstract monkeys who behave more predictably, and run experiments that need no ethical oversight.
So we must think what we mean by Shakespeare’s Plays. Arguably the play is a specific performance of actors in a set venue doing things. This is a bit much to expect of even a skilled abstract monkey. So let us switch to the book of a play. This has a more clear representation. It’s a string of characters. Mostly letters, some punctuation. Good chance there’s numerals in there. It’s probably a lot of characters. So the text to match is some specific, long string of characters in a particular order.
And what do we mean by a monkey at the keyboard? Well, we mean some process that picks characters randomly from the allowed set. When I see something is picked “randomly” I want to know what the distribution rule is. Like, are Q’s exactly as probable as E’s? As &’s? As %’s? How likely it is a particular string will get typed is easiest to answer if we suppose a “uniform” distribution. This means that every character is equally likely. We can quibble about capital and lowercase letters. My sense is most people frame the problem supposing case-insensitivity. That the monkey is doing fine to type “whaT beArD weRe i BEsT tO pLAy It iN?”. Or we could set the monkey at an old typesetter’s station, with separate keys for capital and lowercase letters. Some will even forgive the monkeys punctuating terribly. Make your choices. It affects the numbers, but not the point.
I’ll suppose there are 91 characters to pick from, as a Linotype keyboard had. So the monkey has capitals and lowercase and common punctuation to get right. Let your monkey pick one character. What is the chance it hit the first character of one of Shakespeare’s plays? Well, the chance is 1 in 91 that you’ve hit the first character of one specific play. There’s several dozen plays your monkey might be typing, though. I bet some of them even start with the same character, so giving an exact answer is tedious. If all we want is monkey-typed Shakespeare plays, we’re being fussy if we want The Tempest typed up first and Cymbeline last. If we want a more tractable problem, it’s easier to insist on a set order.
So suppose we do have a set order. Then there’s a one-in-91 chance the first character matches the first character of the desired text. A one-in-91 chance the second character typed matches the second character of the desired text. A one-in-91 chance the third character typed matches the third character of the desired text. And so on, for the whole length of the play’s text. Getting one character right doesn’t make it more or less likely the next one is right. So the chance of getting a whole play correct is one-in-91 raised to the power of however many characters are in the first script. Call it 800,000 for argument’s sake. More characters, if you put two spaces between sentences. The prospects of getting this all correct is … dismal.
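Just how dismal can at least be estimated. The probability itself is far too small for a computer’s floating point, but its logarithm isn’t. A sketch, using the rough figures above (91 characters, an 800,000-character script):

```python
import math

# Chance of typing one specific 800,000-character text straight through,
# picking uniformly from 91 characters: (1/91) ** 800_000.
# That underflows to zero if computed directly, so work with its logarithm.
chars_in_play = 800_000
alphabet_size = 91
log10_probability = -chars_in_play * math.log10(alphabet_size)
print(f"about 1 in 10 ** {-log10_probability:,.0f}")
```

That comes to about one chance in 10 to the 1.57-millionth power. Dismal is the right word.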
I mean, there’s some cause for hope. Spelling was much less fixed in Shakespeare’s time. There are acceptable variations for many of his words. It’d be silly to rule out a possible script that (say) wrote “look’d” or “look’t”, rather than “looked”. Still, that’s a slender thread.
But there is more reason to hope. Chances are the first monkey will botch the first character. But what if they get the first character of the text right on the second character struck? Or on the third character struck? It’s all right if there’s some garbage before the text comes up. Many writers have trouble starting and build from a first paragraph meant to be thrown away. After every wrong letter is a new chance to type the perfect thing, reassurance for us all.
Since the monkey does type, hypothetically, forever … well, so each character has a probability of only $\left(\frac{1}{91}\right)^{800,000}$ (or whatever) of starting the lucky sequence. The monkey will have infinitely many chances to start. More chances than that.
And we don’t have only one monkey. We have a thousand monkeys. At least. A million monkeys. Maybe infinitely many monkeys. Each one, we trust, is working independently, owing to the monkeys’ strong sense of academic integrity. There are endlessly many monkeys working on the project. And more than that. Each one takes their chance.
There are dizzying possibilities here. There’s the chance some monkey will get it all exactly right first time out. More. Think of a row of monkeys. What’s the chance the first thing the first monkey in the row types is the first character of the play? What’s the chance the first thing the second monkey in the row types is the second character of the play? The chance the first thing the third monkey in the row types is the third character in the play? What’s the chance a long enough row of monkeys happen to hit the right buttons so the whole play appears in one massive simultaneous stroke of the keys? Not any worse than the chance your one monkey will type this all out. Monkeys at keyboards are ergodic. It’s as good to have a few monkeys working a long while as to have many monkeys working a short while. The Mythical Man-Month is, for this project, mistaken.
That solves it then, doesn’t it? A monkey, or a team of monkeys, has a nonzero probability of typing out all Shakespeare’s plays. Or the works of Dickens. Or of Jorge Luis Borges. Whatever you like. Given infinitely many chances at it, they will, someday, succeed.
What is the chance that the monkeys screw up? They get the works of Shakespeare just right, but for a flaw. The monkeys’ Midsummer Night’s Dream insists on having the fearsome lion played by “Smaug the joiner” instead. This would send the play-within-the-play in novel directions. The result, though interesting, would not be Shakespeare. There’s a nonzero chance they’ll write the play that way. And so, given infinitely many chances, they will.
What’s the chance that they always will? That they just miss every single chance to write “Snug”. It comes out “Smaug” every time?
We can say. Call the probability that they make this Snug-to-Smaug typo any given time $p$. That’s a number from 0 to 1. 0 corresponds to not making this mistake; 1 to certainly making it. The chance they get it right is $1 - p$. The chance they make this mistake twice is $p^2$, smaller than $p$. The chance that they get it right at least once in two tries is $1 - p^2$, closer to 1 than $1 - p$ is. The chance that, given three tries, they make the mistake every time is $p^3$, even smaller still. The chance that they get it right at least once is $1 - p^3$, even closer to 1.
You see where this is going. Every extra try makes the chance they got it wrong every time smaller. Every extra try makes the chance they get it right at least once bigger. And now we can let some analysis come into play.
So give me a positive number. I don’t know your number, so I’ll call it ε. It’s how unlikely you want something to be before you say it won’t happen. Whatever your ε was, I can give you a number $N$. If the monkeys have taken more than $N$ tries, the chance they get it wrong every single time is smaller than your ε. The chance they get it right at least once is bigger than 1 – ε. Let the monkeys have infinitely many tries. The chance the monkey gets it wrong every single time is smaller than any positive number. So the chance the monkey gets it wrong every single time is zero. It … can’t happen, right? The chance they get it right at least once is closer to 1 than to any other number. So it must be 1. So it must be certain. Right?
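The ε-and-N argument is concrete enough to compute. A sketch with numbers of my own choosing; nothing here comes from the comics, just the logic:

```python
import math

# Given a per-try chance p of making the typo, and a tolerance eps,
# find an N so that "wrong every single time in N tries" has
# probability p ** N below eps.
def tries_needed(p, eps):
    return math.ceil(math.log(eps) / math.log(p))

p = 0.999999   # overwhelmingly likely to get it wrong on any one try
eps = 1e-12    # your threshold for saying something won't happen
N = tries_needed(p, eps)
print(N, p ** N < eps)
```

However stubborn $p$ is, as long as it is less than 1, some finite $N$ pushes the always-wrong chance below any ε you name.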
But let me give you this. Detach a monkey from typewriter duty. This one has a coin to toss. It tosses fairly, with the coin having a 50% chance of coming up tails and 50% chance of coming up heads each time. The monkey tosses the coin infinitely many times. What is the chance the coin comes up tails every single one of these infinitely many times? The chance is zero, obviously. At least you can show the chance is smaller than any positive number. So, zero.
Yet … what power enforces that? What forces the monkey to eventually have a coin come up heads? It’s … nothing. Each toss is a fair toss. Each toss is independent of its predecessors. But there is no force that causes the monkey, after a hundred million billion trillion tosses of “tails”, to then toss “heads”. It’s the gambler’s fallacy to think there is one. The hundred million billion trillionth-plus-one toss is as likely to come up tails as the first toss is. It’s impossible that the monkey should toss tails infinitely many times. But there’s no reason it can’t happen. It’s also impossible that the monkeys still on the typewriters should get Shakespeare wrong every single time. But there’s no reason that can’t happen.
It’s unsettling. Well, probability is unsettling. If you don’t find it disturbing you haven’t thought long enough about it. Infinities are unsettling too.
Formally, mathematicians interpret this — if not explain it — by saying the set of things that can happen is a “probability space”. The likelihood of something happening is what fraction of the probability space matches something happening. (I’m skipping a lot of background to say something that simple. Do not use this at your thesis defense without that background.) This sort of “impossible” event has “measure zero”. So its probability of happening is zero. Measure turns up in analysis, in understanding how calculus works. It complicates a bunch of otherwise-obvious ideas about continuity and stuff. It turns out to apply to probability questions too. Imagine the space of all the things that could possibly happen as being the real number line. Pick one number from that number line. What is the chance you have picked exactly the number -24.11390550338228506633488? I’ll go ahead and say you didn’t. It’s not that you couldn’t. It’s not impossible. It’s just that the chance that this happened, out of the infinity of possible outcomes, is zero.
The infinite monkeys give us this strange set of affairs. Some things have a probability of zero of happening, which does not rule out that they can. Some things have a probability of one of happening, which does not mean they must. I do not know what conclusion Borel ultimately drew about the reversibility problem. I expect his opinion to be that we have a clear answer, and unsettlingly great room for that answer to be incomplete.
Thaves’s Frank and Ernest for the 17th is, for me, extremely relatable content. I don’t say that my interest in mathematics is entirely because there was this Berenstain Bears book about jobs which made it look like a mathematician’s job was to do sums in an observatory on the Moon. But it didn’t hurt. When I joke about how seven-year-old me wanted to be the astronaut who drew Popeye, understand, that’s not much comic exaggeration.
Justin Thompson’s Mythtickle rerun for the 17th is a timely choice about lotteries and probabilities. Vlad raises a fair point about your chance of being struck by lightning. It seems like that’s got to depend on things like where you are. But it does seem like we know what we mean when we say “the chance you’ll be hit by lightning”. At least I think it means “the probability that a person will be hit by lightning at some point in their life, if we have no information about any environmental facts that might influence this”. So it would be something like the number of people struck by lightning over the course of a year divided by the number of people in the world that year. You might have a different idea of what “the chance you’ll be hit by lightning” means, and it’s worth trying to think what precisely that does mean to you.
Lotteries are one of those subjects that a particular kind of nerd likes to feel all smug about. Pretty sure every lottery comic ever has drawn a comment about a tax on people who can’t do mathematics. This one did too. But then try doing the mathematics. The Mega Millions lottery, in the US, has a jackpot for the first drawing this week estimated at more than a billion dollars. The chance of winning is about one in 300 million. A ticket costs two dollars. So what is the expectation value of playing? You lose two dollars right up front, in the cost of the ticket. What do you get back? A one-in-300-million chance of winning a billion dollars. That is, you can expect to get back a bit more than three dollars. The implication is: you make a profit of more than a dollar on each ticket you buy. There’s something a bit awry here, as you can tell from my decision not to put my entire savings into lottery tickets this week. But I won’t say someone is foolish or wrong if they buy a couple.
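Worked out in code, with the rough figures above; this is a sketch that ignores smaller prizes, taxes, and the chance of splitting the jackpot:

```python
# Expectation value of one ticket: chance of winning times the jackpot,
# minus what the ticket costs. Figures are rough estimates from the text.
jackpot = 1_000_000_000
chance_of_winning = 1 / 300_000_000
ticket_price = 2

expected_winnings = chance_of_winning * jackpot   # a bit more than $3.33
expected_profit = expected_winnings - ticket_price
print(f"${expected_winnings:.2f} back, ${expected_profit:.2f} profit per ticket")
```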
Mike Baldwin’s Cornered for the 18th is a bit of mathematics-circling wordplay, featuring the blackboard full of equations. The blackboard doesn’t have any real content on it, but it is a good visual shorthand. And it does make me notice that rounding a quantity off is, in a way, making it simpler. If we are only a little interested in the count of the thing, “two thousand forty” or even “two thousand” may be more useful than the exact 2,038. The loss of precision may be worth it for the ease with which the rounded-off version is remembered and communicated.
Today’s term was one of several nominations I got for ‘H’. This one comes from John Golden, @mathhobre on Twitter and author of the Math Hombre blog on Blogspot. He brings in a lot of thought about mathematics education and teaching tools that you might find interesting or useful or, better, both.
The half-plane part is easy to explain. By the “plane” mathematicians mean, well, the plane. What you’d get if a sheet of paper extended forever. Also if it had zero width. To cut it in half … well, first we have to think hard what we mean by cutting an infinitely large thing in half. Then we realize we’re overthinking this. Cut it by picking a line on the plane, and then throwing away everything on one side or the other of that line. Maybe throw away everything on the line too. It’s logically as good to pick any line. But there are a couple lines mathematicians use all the time. This is because they’re easy to describe, or easy to work with. At least once you fix an origin and, with it, x- and y-axes. The “right half-plane”, for example, is everything in the positive-x-axis direction. Every point with coordinates you’d describe with positive x-coordinate values. Maybe the non-negative ones, if you want the edge included. The “upper half plane” is everything in the positive-y-axis direction. All the points whose coordinates have a positive y-coordinate value. Non-negative, if you want the edge included. You can make guesses about what the “left half-plane” or the “lower half-plane” are. You are correct.
The “hyperbolic” part takes some thought. What is there to even exaggerate? Wrong sense of the word “hyperbolic”. The word here is the same one used in “hyperbolic geometry”. That takes explanation.
The Western mathematics tradition, as we trace it back to Ancient Greece and Ancient Egypt and Ancient Babylon and all, gave us “Euclidean” geometry. It’s a pretty good geometry. It describes how stuff on flat surfaces works. In the Euclidean formulation we set out a couple of axioms that aren’t too controversial. Like, that lines can be extended indefinitely and that all right angles are congruent. And one axiom that is controversial, but which turns out to be equivalent to the idea that there’s only one line that goes through a given point and is parallel to some other line.
And it turns out that you don’t have to assume that. You can make a coherent “spherical” geometry, one that describes shapes on the surface of a … you know. You have to change your idea of what a line is; it becomes a “geodesic” or, on the globe, a “great circle”. And it turns out that there are no geodesics that go through a point and are parallel to some other geodesic. (I know you want to think about globes. I do too. You maybe want to say the lines of latitude are parallel one another. They’re even called parallels, sometimes. So they are. But they’re not geodesics. They’re “little circles”. I am not throwing in ad hoc reasons I’m right and you’re not.)
There is another, though. This is “hyperbolic” geometry. This is the way shapes work on surfaces that mathematicians call saddle-shaped. I don’t know what the horse enthusiasts out there call these shapes. My guess is they chuckle and point out how that would be the most painful saddle ever. Doesn’t matter. We have surfaces. They act weird. You can draw, through a point, infinitely many lines parallel to a given other line.
That’s some neat stuff. That’s weird and interesting. They’re even called “hyperparallel lines” if that didn’t sound great enough. You can see why some people would find this worth studying. The catch is that it’s hard to order a pad of saddle-shaped paper to try stuff out on. It’s even harder to get a hyperbolic blackboard. So what we’d like is some way to represent these strange geometries using something easier to work with.
The hyperbolic half-plane is one of those approaches. This uses the upper half-plane. It works by a move as brilliant and as preposterous as that time Q told Data and LaForge how to stop that falling moon. “Simple. Change the gravitational constant of the universe.”
What we change here is the “metric”. The metric is a function. It tells us something about how points in a space relate to each other. It gives us distance. In Euclidean geometry, plane geometry, we use the Euclidean metric. You can find the distance between point A and point B by looking at their coordinates, $(x_A, y_A)$ and $(x_B, y_B)$. This distance is $\sqrt{(x_B - x_A)^2 + (y_B - y_A)^2}$. Don’t worry about the formulas. The lines on a sheet of graph paper are a reflection of this metric. Each line is (normally) a fixed distance from its parallel neighbors. (Yes, there are polar-coordinate graph papers. And there are graph papers with logarithmic or semilogarithmic spacing. I mean graph paper like you can find at the office supply store without asking for help.)
But the metric is something we choose. There are some rules it has to follow to be logically coherent, yes. But those rules give us plenty of room to play. By picking the correct metric, we can make this flat plane obey the same geometric rules as the hyperbolic surface. This metric looks more complicated than the Euclidean metric does, but only because it has more terms and takes longer to write out. What’s important about it is that the distance your thumb put on top of the paper covers up is bigger if your thumb is near the bottom of the upper-half plane than if your thumb is near the top of the paper.
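For the record, the metric normally used for this model, what gets called the Poincaré half-plane metric, can be written:

```latex
ds^2 = \frac{dx^2 + dy^2}{y^2}
```

That division by $y^2$ is the whole trick: the same paper-width of thumb covers more distance the closer it sits to the bottom edge, where $y$ is small.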
So. There are now two things that are “lines” in this. One of them is vertical lines. The graph paper we would make for this has a nice file of parallel lines like ordinary paper does. The other thing, though … well, that’s half-circles. They’re half-circles with a center on the edge of the half-plane. So our graph paper would also have a bunch of circles, of different sizes, coming from regularly-spaced sources on the bottom of the paper. A line segment is a piece of either these vertical lines or these half-circles. You can make any polygon you like with these, if you pick out enough line segments. They’re there.
There are many ways to represent hyperbolic surfaces. This is one of them. It’s got some nice properties. One of them is that it’s “conformal”. Angles that you draw using this metric are the same size as those on the corresponding hyperbolic surface. You don’t appreciate how sweet that is until you’re working in non-Euclidean geometries. Circles that are entirely within the hyperbolic half-plane match to circles on a hyperbolic surface. Once you’ve got your intuition for this hyperbolic half-plane, you can step into hyperbolic half-volumes. And that lets you talk about the geometry of hyperbolic spaces that reach into four or more dimensions of human-imaginable spaces. Isometries — picking up a shape and moving it in ways that don’t change distance — match up with the Möbius Transformations. These are a well-understood set of ways of altering the plane that come from a different corner of geometry. Also from that fellow with the strip, August Ferdinand Möbius. It’s always exciting to find relationships like that in mathematical structures.
The first two comics for this essay have titles of the form Name’s Thing, so, that’s why this edition title. That’s good enough, isn’t it? And besides this series there was a Perry Bible Fellowship strip which at least depicted mathematical symbols. It’s a rerun, though, even among those shown on GoComics.com. It was rerun recently enough that I featured it around here back in June. It’s a bit risque. But the strip was rerun the 12th. Maybe I also need to drop Perry Bible Fellowship from the roster of comics I read for this.
On to the comics I haven’t dropped.
Tony Buino and Gary Markstein’s Daddy’s Home for the 11th tries using specific examples to teach mathematics. There’s strangeness to arithmetic. It’s about these abstract things like “thirty” and “addition” and such. But these things match very well the behaviors of discrete objects, ones that don’t blend together or shatter by themselves. So we can use the intuition we have for specific things to get comfortable working with the abstract. This doesn’t stop, either. Mathematicians like to work on general, abstract questions; they let us answer big swaths of questions all at once. But working out a specific case is usually easier, both to prove and to understand. I don’t know what’s the most advanced mathematics that could be usefully practiced by thinking about cupcakes. Probably something in group theory, in studying the rotations of objects that are perfectly, or nearly, rotationally symmetric.
John Zakour and Scott Roberts’s Maria’s Day for the 11th is a follow-up to a strip featured last week. Maria’s been getting help on her mathematics from one of her closet monsters. And includes the usual joke about Common Core being such a horrible thing that it must come from monsters. I don’t know whether in the comic strip’s universe the monster is supposed to be imaginary. (Usually, in a comic strip, the question of whether a character is imaginary-or-real is pointless. I think Richard Thompson’s Cul de Sac is the only one to have done something good with it.) But if the closet monster is in Maria’s imagination, it’s quite in line for her to think that teaching comes from some malevolent and inscrutable force.
Olivia Jaimes’s Nancy for the 12th features one of the first interesting mathematics questions you do in physics. This is often done with calculus. Not much, but more than Nancy and Esther could realistically have. It could be worked out experimentally, and that’s likely what the teacher was hoping for. Calculus isn’t really necessary, although it does show skeptical students there’s some value in all this d-dx business they’ve been working through. You can find the same answers by dimensional analysis, which is less intimidating. But you’d still need to know some trigonometry functions. That’s beyond whatever Nancy’s grade level is too. In any case, Nancy is an expert at identifying unstated assumptions, and working out loopholes in them. I’m curious whether the teacher would respect Nancy’s skill here. (The way the writing’s been going, I think she would.)
Francesco Marciuliano and Jim Keefe’s Sally Forth for the 13th is about new-friend Jenny trying to work out her relationship with Hilary-Faye-and-Nona. It’s a good bit of character work, but that is outside my subject here. In the last panel Nona admits she’s been talking, or at least thinking about τ versus π. This references a minor nerd-squabble that’s been going on a couple years. π is an incredibly well-known, useful number. It’s the only transcendental number you can expect a normal person to have ever heard of. Humans noticed it, historically, because the length of the circumference of a circle is π times the length of its diameter. Going between “the distance across” and “the distance around” turns out to be useful.
The thing is, many mathematical and physics formulas find it more convenient to write things in terms of the radius of a circle or sphere. And this makes 2π show up in formulas. A lot. Even in things that don’t obviously have circles in them. For example, the Gaussian distribution, which describes how much a sample looks like the population it’s sampled from, has 2π in it. So, the τ argument goes, why write out 2π in all these places? Why not decide that that’s the useful number to think about, give it the catchy name τ, and use that instead? All the interesting questions about π have exact, obvious parallel questions about τ. Any answers about one give us answers about the other. So why not make this switch and then … pocket the savings in having shorter formulas?
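That 2π in the Gaussian distribution sits right in the density’s normalizing constant, for instance:

```latex
f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)
```

A τ partisan would write $\sqrt{\tau\sigma^2}$ there instead and feel tidier for it.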
You may sense in me a certain skepticism. I don’t see where changing over gets us anything worth the bother. But there are fashions in mathematics as with everything else. Perhaps τ has some ability to clarify things in ways we’ll come to better appreciate.
This starts from groups. A group, here, means a pair of things. The first thing is a set of elements. The second is some operation. It takes a pair of things in the set and matches it to something in the set. For example, try the integers as the set, with addition as the operation. There are many kinds of groups you can make. There can be finite groups, ones with as few as one element or as many as you like. (The one-element groups are so boring. We usually need at least two to have much to say about them.) There can be infinite groups, like the integers. There can be discrete groups, where there’s always some minimum distance between elements. There can be continuous groups, like the real numbers, where there’s no smallest distance between distinct elements.
Groups came about from looking at how numbers work. So the first examples anyone gets are based on numbers. The integers, especially, and then the integers modulo something. For example, there’s Z_2, the integers modulo 2, which has two numbers, 0 and 1. Addition works by the rule that 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 0. There are similar rules for Z_3, the integers modulo 3, which has three numbers, 0, 1, and 2.
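A minimal sketch of this in Python, treating the integers modulo n as a set with a wrap-around addition:

```python
def add_mod(n):
    """Return the addition operation for the integers modulo n."""
    return lambda a, b: (a + b) % n

add2 = add_mod(2)
# The four rules for Z_2 given above:
assert add2(0, 0) == 0
assert add2(0, 1) == 1
assert add2(1, 0) == 1
assert add2(1, 1) == 0

# Z_3 works the same way, wrapping around at 3:
add3 = add_mod(3)
assert add3(2, 2) == 1
assert add3(1, 2) == 0
```

The set {0, 1} together with `add2` is the group; neither piece is a group on its own.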
But after a few comfortable minutes on this, group theory moves on to more abstract things. Things with names like the “permutation group”. This starts with some set of things and we don’t even care what the things are. They can be numbers. They can be letters. They can be places. They can be anything. We don’t care. The group is all of the ways to swap elements around. All the relabellings we can do without losing or gaining an item. Or another, the “symmetry group”. This is, for some given thing — plates, blocks, and wallpaper patterns are great examples — all the ways you can rotate or move or reflect the thing without changing the way it looks.
And now we’re creeping up on what a “group action” is. Let me just talk about permutations here. These are where you swap around items. Like, start out with a list of items “1 2 3 4”. And pick out a permutation, say, swap the second with the fourth item. We write that, in shorthand, as (2 4). Maybe another permutation too. Say, swap the first item with the third. Write that out as (1 3). We can multiply these permutations together. Doing these permutations, in this order, has a particular effect: it swaps the second and fourth items, and swaps the first and third items. This is another permutation on these four items.
These permutations, these “swap this item with that” rules, are a group. The set for the group is instructions like “swap this with that”, or “swap this with that, and that with this other thing, and this other thing with the first thing”. Or even “leave this thing alone”. The operation between two things in the set is, do one and then the other. For example, (2 3) and then (3 4) has the effect of moving the second thing to the fourth spot, the (original) fourth thing to the third spot, and the original third thing to the second spot. That is, it’s the permutation (2 3 4). If you ever need something to doodle during a slow meeting, try working out all the ways you can shuffle around, say, six things. And what happens as you do all the possible combinations of these things. Hey, you’re only permuting six items. How many ways could that be?
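If you would rather check the doodling by machine, here’s a short Python sketch. It applies transpositions in order to a list, with positions 1-indexed to match the cycle notation above:

```python
import math

def apply_swaps(items, swaps):
    """Apply a sequence of transpositions (1-indexed position pairs) in order."""
    items = list(items)
    for i, j in swaps:
        items[i - 1], items[j - 1] = items[j - 1], items[i - 1]
    return items

# (2 4) and then (1 3), as in the earlier example:
print(apply_swaps([1, 2, 3, 4], [(2, 4), (1, 3)]))  # [3, 4, 1, 2]

# (2 3) and then (3 4): the second thing ends in the fourth spot,
# the fourth in the third, the third in the second -- the cycle (2 3 4).
print(apply_swaps([1, 2, 3, 4], [(2, 3), (3, 4)]))  # [1, 3, 4, 2]

# And the answer to the doodling question: six items permute in
print(math.factorial(6))  # 720 different ways
```

Enough ways to fill quite a long meeting, as it happens.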
So here’s what sounds like a fussy point. The group here is made up the ways you can permute these items. The items aren’t part of the group. They just gave us something to talk about. This is where I got so confused, as an undergraduate, working out groups and group actions.
When we move back to talking about the original items, then we get a group action. You get a group action by putting together a group with some set of things. Let me call the group ‘G’ and the set ‘X’. If I need something particular in the group I’ll call that ‘g’. If I need something particular from the set ‘X’ I’ll call that ‘x’. This is fairly standard mathematics notation. You see how subtly clever this notation is. The group action comes from taking things in G and applying them to things in X, to get things in X. Usually other things, but not always. In the lingo, we say the group action maps the pair of things G and X to the set X.
There are rules these actions have to follow. They’re what you would expect, if you’ve done any fiddling with groups. Don’t worry about them. What’s interesting is what we get from group actions.
First is group orbits. Take some ‘g’ out of the group G. Take some ‘x’ out of the set ‘X’. And build this new set. First, x. Then, whatever g does to x, which we write as ‘gx’. But ‘gx’ is still something in ‘X’, so … what does g do to that? So toss in ‘ggx’. Which is still something in ‘X’, so, toss in ‘gggx’. And ‘ggggx’. And keep going, until you stop getting new things. If ‘X’ is finite, this sequence has to be finite. It might be the whole set of X. It might be some subset of X. But if ‘X’ is finite, it’ll get back, eventually, to where you started, which is why we call this the “group orbit”. We use the same term even if X isn’t finite and we can’t guarantee that all these iterations of g on x eventually get back to the original x. Either way, what we’ve built is a subset of X.
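Here’s a small Python sketch of that iteration: apply one element g over and over until you come back around. (Strictly, this traces the orbit under the cyclic subgroup that g generates, which is how the description above proceeds; the names and the rotate-by-two example are my own choices for illustration.)

```python
def orbit_under(g, x):
    """Collect x, g(x), g(g(x)), ... until the values start repeating."""
    seen = []
    while x not in seen:
        seen.append(x)
        x = g(x)
    return seen

# X = {0, 1, ..., 5}; g rotates everything ahead by two steps.
g = lambda x: (x + 2) % 6
print(orbit_under(g, 0))  # [0, 2, 4]
print(orbit_under(g, 1))  # [1, 3, 5]
```

Notice the orbit need not be all of X: here it splits the six elements into two orbits of three.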
There can be other special groups. Like, are there elements ‘g’ that map ‘x’ to ‘x’? Sure. There has to be at least one, since the group G has an identity element. There might be others. So, for any given ‘x’, what are all the elements ‘g’ in G that don’t change it? The set of all the values of g for which gx is x is the “isotropy group” Gx. Or the “stabilizer subgroup”. This is a subgroup of G, based on x.
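A Python sketch may help, using the symmetries of a square acting on its four corners. The vertex labels 0 through 3 and the tuple encoding are my own choices for illustration:

```python
# The symmetry group of a square, acting on vertices labeled 0..3 in order
# around the square. Each symmetry is a tuple: perm[v] is where vertex v goes.
rotations = [tuple((v + k) % 4 for v in range(4)) for k in range(4)]
reflections = [tuple((k - v) % 4 for v in range(4)) for k in range(4)]
group = rotations + reflections  # 8 elements: the square's full symmetry group

def orbit(group, x):
    """Everywhere x can be sent by some element of the group."""
    return {g[x] for g in group}

def stabilizer(group, x):
    """Every element of the group that leaves x where it is."""
    return [g for g in group if g[x] == x]

stab = stabilizer(group, 0)
print(len(stab))        # 2: the identity and one diagonal reflection
print(orbit(group, 0))  # {0, 1, 2, 3}: a corner can be sent to any corner

# The orbit-stabilizer relationship discussed later: |G| = |orbit| * |stabilizer|
assert len(group) == len(orbit(group, 0)) * len(stab)
```

The only symmetries fixing a corner are the identity and the reflection across the diagonal through that corner, so the stabilizer has two elements.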
Yes, but the point?
Well, the biggest thing we get from group actions is the chance to put group theory principles to work on specific things. A group might describe the ways you can rotate or reflect a square plate without leaving an obvious change in the plate. The group action lets you make this about the plate. Much of modern physics is about learning how the geometry of a thing affects its behavior. This can be the obvious sorts of geometry, like, whether it’s rotationally symmetric. But it can be subtler things, like, whether the forces in the system are different at different times. Group actions let us put what we know from geometry and topology to work in specifics.
A particular favorite of mine is that they let us express the wallpaper groups. These are the ways we can use rotations and reflections and translations (linear displacements) to create different patterns. There are fewer different patterns than you might have guessed. (Different, here, overlooks such petty things as whether the repeated pattern is a diamond, a flower, or a hexagon. Or whether the pattern repeats every two inches versus every three inches.)
And they stay useful for abstract mathematical problems. All this talk about orbits and stabilizers lets us find something called the Orbit Stabilization Theorem. This connects the size of the group G to the size of orbits of x and of the stabilizer subgroups. This has the exciting advantage of letting us turn many proofs into counting arguments. A counting argument is just what you think: showing there’s as many of one thing as there are another. Here’s a nice page about the Orbit Stabilization Theorem, and how to use it. This includes some nice, easy-to-understand problems like “how many different necklaces could you make with three red, two green, and one blue bead?” Or if that seems too mundane a problem, an equivalent one from organic chemistry: how many isomers of naphthol could there be? You see where these group actions give us useful information about specific problems.
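You can brute-force that necklace count in a few lines of Python. This sketch treats two necklaces as the same only when one is a rotation of the other; if the linked page also allows flipping the necklace over, it would get a smaller count:

```python
from itertools import permutations

beads = "RRRGGB"  # three red, two green, one blue

def canonical(necklace):
    """Pick one standard representative among all rotations of a necklace."""
    n = len(necklace)
    return min(necklace[i:] + necklace[:i] for i in range(n))

linear = set(permutations(beads))  # distinct ways to lay the beads in a row
print(len(linear))                 # 60 arrangements in a row

necklaces = {canonical("".join(p)) for p in linear}
print(len(necklaces))              # 10 distinct necklaces, up to rotation

# Orbit-stabilizer at work: the single blue bead means no nontrivial rotation
# fixes any arrangement, so every orbit has the full size 6, and 60 / 6 = 10.
```

Counting orbits this way is exactly the sort of counting argument the theorem makes routine.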
If you should like a more detailed introduction, although one that supposes you’re more conversant with group theory than I suppose here, this is a good sequence: Group Actions I, which actually defines the things. Group actions II: the orbit-stabilizer theorem, which is about just what it says. Group actions III — what’s the point of them?, which has the sort of snappy title I like, but which gives points that make sense when you’re comfortable talking about quotient groups and isomorphisms and the like. And what I think is the last in the sequence, Group actions IV: intrinsic actions, which is about using group actions to prove stuff. And includes a mention of one of my favorite topics, the points the essay-writer just didn’t get the first time through. (And more; there’s a point where the essay goes wrong, and needs correction. I am not the Joseph who found the problem.)
I ended up not finding more comics on-topic on GoComics yesterday. So this past week’s mathematically-themed strips should fit into two posts well. I apologize for any loss of coherence in this essay, as I’m getting a bit of a cold. I’m looking forward to what this cold does for the A To Z essays coming Tuesday and Friday this week, too.
Stephen Beals’s Adult Children for the 7th uses Albert Einstein’s famous equation as shorthand for knowledge. I’m a little surprised it’s written out in words, rather than symbols. This might reflect that E = mc² is often understood just as this important series of sounds, rather than as an equation relating things to one another. Or it might just reflect the needs of the page composition. It could be too small a word balloon otherwise.
Julie Larson’s The Dinette Set for the 9th continues the thread of tip-calculation jokes around here. I have no explanation for this phenomenon. In this case, Burl is doing the calculation correctly. If the tip is supposed to be 15% of the bill, and the bill is reduced 10%, then the tip would be reduced 10%. If you already have the tip calculated, it might be quicker to figure out a tenth of that rather than work out 15% of the original bill. And, yes, the characters are being rather unpleasantly penny-pinching. That was just the comic strip’s sense of humor.
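The arithmetic, as a quick Python sketch (the forty-dollar bill is a made-up figure):

```python
bill = 40.00
tip = 0.15 * bill             # the full 15% tip: 6.00

discounted_bill = 0.90 * bill        # the bill reduced ten percent
new_tip = 0.15 * discounted_bill     # 15% of the smaller bill

# Burl's shortcut: just take ten percent off the tip already figured.
shortcut_tip = 0.90 * tip

print(round(new_tip, 2), round(shortcut_tip, 2))  # 5.4 5.4
assert abs(new_tip - shortcut_tip) < 1e-9
```

The two routes agree because 0.15 × 0.90 × bill and 0.90 × 0.15 × bill are the same product, just grouped differently.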
Todd Clark’s Lola for the 9th takes the form of your traditional grumbling about story problems. It also shows off the motif of updating the words in a story problem to be awkwardly un-hip. The problem seems to be starting in a confounding direction anyway. The first sentence isn’t even done and it’s already introducing the rate at which Frank is shedding social-media friends over time and the rate at which a train is travelling, some distance per time. Having one quantity with dimensions friends-per-time and another with dimensions distance-per-time is begging for confusion. Or for some weird gibberish thing, like, determining something to be (say) ninety mile-friends. There’s trouble ahead.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 10th proposes naming a particular kind of series. A series is the sum of a sequence of numbers. It doesn’t have to be a sequence with infinitely many numbers in it, but it usually is, if it’s to be an interesting series. Properly, a series gets defined by something like the symbols in the upper caption of the panel: $\sum_{i=1}^{4} a_i$ and $\sum_{i=1}^{\infty} a_i$.
Here the ‘i’ is a “dummy variable”, of no particular interest and not even detectable once the calculation is done. It’s not that thing with the square root of -1 in this case. ‘i’ is specifically known as the ‘index’, since it indexes the terms in the sequence. Despite the logic of i-for-index, I prefer to use ‘j’, ‘k’, or ‘n’. This avoids confusion with that square-root-of-minus-1 meaning for i. The index starts at some value, the one to the right of the equals sign underneath the capital sigma; in this case, 1. The sequence evaluates whatever the formula described by $a_i$ is, for each whole number between that lowest ‘i’, in this case 1, and whatever the value above the sigma is. For the infinite series, that’s infinitely large. That is, work out $a_i$ for every counting number ‘i’. For the first sum in the caption, that highest number is 4, and you only need to evaluate four terms and add them together. There’s no rule given for $a_i$ in the caption; that just means that, in this case, we don’t yet have reason to care what the formula is.
This is the way to define a series if we’re being careful, and doing mathematics properly. But there are shorthands, and we fall back on them all the time. On the blackboard is one of them: the ellipsis. An ellipsis at the end of a summation, as in $a_1 + a_2 + a_3 + \cdots$, means “carry on this pattern for infinitely many terms”. If it appears in the middle of a summation, as in $a_1 + a_2 + \cdots + a_{10}$, it means “carry on this pattern for the appropriate number of terms”.
The flaw with this “carry on this pattern” is that, properly, there’s no such thing as “the” pattern. There are infinitely many ways to continue from whatever the start was, and they’re all equally valid. What lets this scheme work is cultural expectations. We expect the difference between one term and the next to follow some easy patterns. They increase or decrease by the same amount as we’ve seen before (an arithmetic progression, like 2 + 4 + 6 + 8, increasing by two each time). They increase or decrease by the same ratio as we’ve seen before (a geometric progression, like 24 + 12 + 6 + 3, cutting in half each time). Maybe the sign alternates, or changes by some straightforward rule. If it isn’t one of these, then we have to fall back on being explicit about what the rule is.
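A Python sketch of those expectations, using the two finite progressions above and then carrying the geometric pattern on toward its infinite sum:

```python
# An arithmetic progression: each term increases by the same amount.
arithmetic = [2, 4, 6, 8]
print(sum(arithmetic))    # 20

# A geometric progression: each term is the same ratio of the one before.
geometric = [24, 12, 6, 3]
print(sum(geometric))     # 45

# "Carry on this pattern for infinitely many terms": the infinite geometric
# series 24 + 12 + 6 + 3 + ... sums to 48, and partial sums approach it.
partial = sum(24 * (0.5 ** i) for i in range(50))
print(round(partial, 6))  # 48.0
```

Fifty terms is already indistinguishable from the limit at this precision, which is what makes the tail of the series safe to wave away with an ellipsis.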
The capital-sigma as shorthand for “sum” traces to Leonhard Euler, because of course. I’m finding it hard, in my copy of Florian Cajori’s History of Mathematical Notations, to find just where the series notation as we use it got started. Also I’m not finding where ellipses got into mathematical notation either. It might reflect everybody realizing this was a pretty good way to represent “we’re not going to write out the whole thing here”.
Norm Feuti’s Retail for the 11th riffs on how many people, fundamentally, don’t know what percentages are. I think it reflects thinking of a percentage as some kind of unit. We get used to measurements of things, like, pounds or seconds or dollars or degrees or such that are fixed in value. But a percentage is relative. It’s a fraction of some original quantity. A difference of (say) two pounds in weight is the same amount of weight whatever the original was; why wouldn’t two percent of the weight behave similarly? … Gads, yes, I feel for the next retailer who gets these customers.
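A sketch of the difference in Python, with made-up weights, a made-up price, and a hypothetical seven percent sales tax:

```python
# A fixed unit: two pounds off is the same two pounds whatever you started with.
# A percentage is relative to the original quantity.
for original in (50.0, 200.0):
    print(original - 2.0)    # two pounds off:   48.0, then 198.0
    print(original * 0.98)   # two percent off:  49.0, then 196.0

# The bookstore question about whether ten percent off applies before or after
# sales tax: multiplication commutes, so the total is the same either way.
price = 20.0
discount_first = (price * 0.90) * 1.07
tax_first = (price * 1.07) * 0.90
assert abs(discount_first - tax_first) < 1e-9
```

Two percent of 50 pounds and two percent of 200 pounds are different amounts of weight, which is the whole trouble the customers are having.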
I think I’ve already used the story from when I worked in the bookstore about the customer concerned whether the ten-percent-off sticker applied before or after sales tax was calculated. So I’ll only share if people ask to hear it. (They won’t ask.)