## Reading the Comics, August 14, 2022: Not Being Wrong Edition

The handful of comic strips I’ve chosen to write about this week include a couple with characters who want to not be wrong. That’s a common impulse among people learning mathematics, that drive to have the right answer.

Will Henry’s Wallace the Brave for the 8th opens the theme, with Rose excited to go to mathematics camp as a way of learning more ways to be right. I imagine everyone feels this appeal of mathematics, arithmetic particularly. If you follow these knowable rules, and avoid calculation errors, you get results that are correct. Not just coincidentally right, but right for all time. It’s a wonderful sense of security, even when you get past that childhood age where so little is in your control.

A thing that creates a problem, if you hold this love too closely, is that much of mathematics builds on approximations. Things we know not to be right, but which we know are not too far wrong. You expect this from numerical mathematics, yes. But it happens in analytic mathematics too. I remember struggling in high school physics, in modeling a pendulum’s swing. To do this you have to approximate the sine of the angle the pendulum bob makes with the angle itself. This approximation is quite good, if the angle is small, as you can see from comparing the sine of 0.01 radians to the number 0.01. But I wanted to know when that difference was accounted for, and it never was.
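How good is that approximation? A few lines of Python (my own quick check, nothing from the physics class) make the point:

```python
import math

# The small-angle approximation: for a small angle x, in radians,
# sin(x) is very nearly x itself.
for x in [0.5, 0.1, 0.01]:
    error = abs(math.sin(x) - x)
    print(f"x = {x}: error = {error:.2e}")
```

The error shrinks roughly like the cube of the angle, which is why the approximation is so comfortable for small swings.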

(An alternative interpretation is to treat the path swung by the end of the pendulum as though it were part of a parabola, instead of the section of circle that it really is. A small arc of parabola looks much like a small arc of circle. But there is a difference, not accounted for.)

Nor would it be. A regular trick in analytic mathematics is to show that the thing you want is approximated well enough by a thing you can calculate. And then show that if one takes a limit of the thing you can calculate you make the error infinitesimally small. This is all rigorous and you can in time come to accept it. I hope Rose someday handles the discovery that we get to right answers through wrong-but-useful ones well.

Charles Schulz’s Peanuts Begins for the 8th is one that I have featured here before. It’s built on Lucy not accepting that the answer to a multiplication can be zero, even if it is zero times zero. It’s also built on the mixture of meanings among “zero” and “nothing” and “nonexistent”. Lucy’s right that zero times zero has to be something, as in a thing with some value. But that we so often use zero to mean “nothing that exists” makes zero a struggle to learn and to work with.

Dan Thompson’s Brevity for the 12th is an anthropomorphic numerals joke, built on the ancient playground pun about why six is afraid of seven. And a bit of wordplay about odd and even numbers on top of that. For this I again offer the followup joke that I first heard a couple of years ago. Why was it that 7 ate 9? Because 7 knows to eat 3-squared meals a day!

Lincoln Peirce’s Big Nate for the 14th is a baseball statistics joke. Really a sabermetrics joke. Sabermetrics and other fine-grained sports analysis study the enormous number of games played, and situations within those games. The goal is to find enough similar situations to make estimates about outcomes. This is through what’s called the “frequentist” interpretation of statistics. That is, if this situation has come up a hundred times before, and it’s led to one particular outcome 85 of those times, then there’s an 85 percent chance of that outcome in this situation.

Baseball is well-positioned for this sort of analysis. The organized game has always demanded the keeping of box scores, close records of what happened in what order. Other sports can have the same techniques applied, though. It’s not likely that Randy has thrown enough pitches to estimate his chance of giving up a walk-off grand slam. But combine all the little league teams there are, and all the seasons they’ve played? That starts to sound plausible. It doesn’t help the feeling that one was scheduled for a win that then didn’t happen.

And that’s enough comics for now. All of my Reading the Comics posts should be at this link, and I hope to have another next week. Thanks for reading.

## From my Seventh A-to-Z: Zero Divisor

Here I stand at the end of the pause I took in 2021’s Little Mathematics A-to-Z, in the hopes of building the time and buffer space to write its last three essays. Have I succeeded? We’ll see next week, but I will say that I feel myself in a much better place than I was in December.

The Zero Divisor closed out my big project for the first plague year. It let me get back to talking about abstract algebra, one of the cores of a mathematics major’s education. And it let me get into graph theory, the unrequited love of my grad school life. The subject also let me tie back to Michael Atiyah, the start of that year’s A-to-Z. Often a sequence will pick up a theme, and 2020’s gave a great illusion of being tightly constructed.

Jacob Siehler had several suggestions for this last of the A-to-Z essays for 2020. Zorn’s Lemma was an obvious choice. It’s got an important place in set theory, it’s got some neat and weird implications. It’s got a great name. The zero divisor is one of those technical things mathematics majors have to deal with. It never gets any pop-mathematics attention. I picked the less-travelled road and found a delightful scenic spot.

# Zero Divisor.

3 times 4 is 12. That’s a clear, unambiguous, and easily-agreed-upon arithmetic statement. The thing to wonder is what kind of mathematics it takes to mess that up. The answer is algebra. Not the high school kind, with x’s and quadratic formulas and all. The college kind, with group theory and rings.

A ring is a mathematical construct that lets you do a bit of arithmetic. Something that looks like arithmetic, anyway. It has a set of elements.  (An element is just a thing in a set.  We say “element” because it feels weird to call it “thing” all the time.) The ring has an addition operation. The ring has a multiplication operation. Addition has an identity element, something you can add to any element without changing the original element. We can call that ‘0’. The integers, or to use the lingo $Z$, are a ring (among other things).

Among the rings you learn, after the integers, is the integers modulo … something. This can be modulo any counting number. The integers modulo 10, for example, we write as $Z_{10}$ for short. There are different ways to think of what this means. The one convenient for this essay is that it’s the integers 0, 1, 2, up through 9. And that the result of any calculation is “how much more than a whole multiple of 10 this calculation would otherwise be”. So then 3 times 4 is now 2. 3 times 5 is 5; 3 times 6 is 8. 3 times 7 is 1, and doesn’t that seem peculiar? That’s part of how modulo arithmetic warns us that groups and rings can be quite strange things.
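If you’d like to try this arithmetic yourself, the modulo operator in most programming languages does exactly this reduction. A quick sketch in Python (the helper name is mine, not standard notation):

```python
# Multiplication in Z_10: reduce every product to its remainder
# after dividing by 10.
def times_mod(a, b, n=10):
    return (a * b) % n

print(times_mod(3, 4))  # 2
print(times_mod(3, 5))  # 5
print(times_mod(3, 6))  # 8
print(times_mod(3, 7))  # 1
```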

We can do modulo arithmetic with any of the counting numbers. Look, for example, at $Z_{5}$ instead. In the integers modulo 5, 3 times 4 is … 2. This doesn’t seem to get us anything new. How about $Z_{8}$? In this, 3 times 4 is 4. That’s interesting. It doesn’t make 3 the multiplicative identity for this ring. 3 times 3 is 1, for example. But you’d never see something like that for regular arithmetic.

How about $Z_{12}$? Now we have 3 times 4 equalling 0. And that’s a dramatic break from how regular numbers work. One thing we know about regular numbers is that if a times b is 0, then either a is 0, or b is zero, or they’re both 0. We rely on this so much in high school algebra. It’s what lets us pick out roots of polynomials. Now? Now we can’t count on that.

When this does happen, when one thing times another equals zero, we have “zero divisors”. These are anything in your ring that can multiply by something else to give 0. Is zero, the additive identity, always a zero divisor? … That depends on what the textbook you first learned algebra from said. To avoid ambiguity, you can write a “nonzero zero divisor”. This clarifies your intentions and slows down your copy editing every time you read “nonzero zero”. Or call it a “nontrivial zero divisor” or “proper zero divisor” instead. My preference is to accept 0 as always being a zero divisor. We can disagree on this. What of zero divisors other than zero?

Your ring might or might not have them. It depends on the ring. The ring of integers $Z$, for example, doesn’t have any zero divisors except for 0. The ring of integers modulo 12, $Z_{12}$, though? Anything that isn’t relatively prime to 12 is a zero divisor. So, 2, 3, 4, 6, 8, 9, and 10 are zero divisors here. The ring of integers modulo 13, $Z_{13}$? That doesn’t have any zero divisors, other than zero itself. In fact any ring of integers modulo a prime number, $Z_{p}$, lacks zero divisors besides 0.
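Nothing mysterious finds these; for a small ring a brute-force search does fine. A sketch in Python (the function name is my own):

```python
# Find the nonzero zero divisors of Z_n: the nonzero elements a for
# which some nonzero b gives a * b congruent to 0 modulo n.
def zero_divisors(n):
    return sorted(a for a in range(1, n)
                  if any((a * b) % n == 0 for b in range(1, n)))

print(zero_divisors(12))  # [2, 3, 4, 6, 8, 9, 10]
print(zero_divisors(13))  # [] (empty: 13 is prime)
```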

Focusing too much on integers modulo something makes zero divisors sound like some curious shadow of prime numbers. There are some similarities. Whether a number is prime depends on your multiplication rule and what set of things it’s in. Being a zero divisor in one ring doesn’t directly relate to whether something’s a zero divisor in any other. Knowing what the zero divisors are tells you something about the structure of the ring.

It’s hard to resist focusing on integers-modulo-something when learning rings. They work very much like regular arithmetic does. Even the strange thing about them, that every result is from a finite set of digits, isn’t too alien. We do something quite like it when we observe that three hours after 10:00 is 1:00. But many sets of elements can create rings. Square matrices are the obvious extension. Matrices are grids of elements, each of which … well, they’re most often going to be numbers. Maybe integers, or real numbers, or complex numbers. They can be more abstract things, like rotations or whatnot, but they’re hard to typeset. It’s easy to find zero divisors in matrices of numbers. Imagine, like, a matrix that’s all zeroes except for one element, somewhere. There are a lot of matrices which, multiplied by that, will be a zero matrix, one with nothing but zeroes in it. Another common kind of ring is the polynomials. For these you need some constraint like the polynomial coefficients being integers-modulo-something. You can make that work.
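For a concrete case, take the 2-by-2 matrix with a single 1 in the upper right corner: multiplied by itself it gives the zero matrix, so it’s a zero divisor. A sketch in plain Python, no linear algebra library needed:

```python
# Multiply two 2x2 matrices, represented as nested lists.
def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]
print(mat_mult(A, A))  # [[0, 0], [0, 0]]: a nonzero matrix squaring to zero
```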

In 1988 Istvan Beck tried to establish a link between graph theory and ring theory. We now have a usable standard definition of one. If $R$ is any ring, then $\Gamma(R)$ is the zero-divisor graph of $R$. (I know some of you think $R$ is the real numbers. No; that’s a bold-faced $\mathbb{R}$ instead. Unless that’s too much bother to typeset.) You make the graph by putting in a vertex for each element in $R$. You connect two vertices a and b if the product of the corresponding elements is zero. That is, if they’re zero divisors for one another. (In Beck’s original form, this included all the elements. In modern use, we don’t bother including the elements that are not zero divisors.)
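The construction is easy to carry out for a small ring like $Z_{12}$. Here’s a sketch in Python, following the modern convention of keeping only the nonzero zero divisors:

```python
# Build the zero-divisor graph of Z_12: vertices are the nonzero zero
# divisors; an edge joins a and b when a * b is 0 modulo 12.
n = 12
vertices = [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]
edges = [(a, b) for i, a in enumerate(vertices) for b in vertices[i + 1:]
         if (a * b) % n == 0]

print(vertices)  # [2, 3, 4, 6, 8, 9, 10]
print(edges)     # eight edges, among them (3, 4) and (2, 6)
```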

Drawing this graph $\Gamma(R)$ makes tools from graph theory available to study rings. We can measure things like the distance between elements, or what paths from one vertex to another exist. What cycles — paths that start and end at the same vertex — exist, and how large they are. Whether the graphs are bipartite. A bipartite graph is one where you can divide the vertices into two sets, and every edge connects one thing in the first set with one thing in the second. What the chromatic number — the minimum number of colors it takes to make sure no two adjacent vertices have the same color — is. What shape does the graph have?

It’s easy to think that zero divisors are just a thing which emerges from a ring. The graph theory connection tells us otherwise. You can make a potential zero divisor graph and ask whether any ring could fit that. And, from that, what we can know about a ring from its zero divisors. Mathematicians are drawn as if by an occult hand to things that let you answer questions about a thing from its “shape”.

And this lets me complete a cycle in this year’s A-to-Z, to my delight. There is an important question in topology which group theory could answer. It’s a generalization of the zero-divisors conjecture, a hypothesis about what fits in a ring based on certain types of groups. This hypothesis — actually, these hypotheses. There are a bunch of similar questions about what invariants called the $L^2$-Betti numbers can be. These we call the Atiyah Conjecture, because of work Michael Atiyah did in the cohomology of manifolds starting in the 1970s. It’s work, I admit, I don’t understand well enough to summarize, and hope you’ll forgive me for that. I’m still amazed that one can get to cutting-edge mathematics research like this. It seems, at its introduction, to be only a subversion of how we find x for which $(x - 2)(x + 1) = 0$.

And this, I am amazed to say, completes the All 2020 A-to-Z project. All of this year’s essays should be gathered at this link. In the next couple days I plan to check that they actually are. All the essays from every A-to-Z series, going back to 2015, should be at this link. I plan to soon have an essay about what I learned in doing the A-to-Z this year. And then we can look to 2021 and hope that works out all right. Thank you for reading.

## Reading the Comics, March 12, 2019: Back To Sequential Time Edition

Since I took the Pi Day comics ahead of their normal sequence on Sunday, it’s time I got back to the rest of the week. There weren’t any mathematically-themed comics worth mentioning from last Friday or Saturday, so I’m spending the latter part of this week covering stuff published before Pi Day. It’s got me slightly out of joint. It’ll all be better soon.

Mark Anderson’s Andertoons for the 11th is the Mark Anderson’s Andertoons for this week. That’s nice to have. It’s built on the concept of story problems. That there should be “stories” behind a problem makes sense. Most actual mathematics, even among mathematicians, is done because we want to know a thing. Acting on a want is a story. Wanting to know a thing justifies the work of doing this calculation. And real mathematics work involves looking at some thing, full of the messiness of the real world, and extracting from it mathematics. This would be the question to solve, the operations to do, the numbers (or shapes or connections or whatever) to use. We surely learn how to do that by doing simple examples. The kid — not Wavehead, for a change — points out a common problem here. There’s often not much of a story to a story problem. That is, one where we don’t just want something, but someone else wants something too.

Parker and Hart’s The Wizard of Id for the 11th is a riff on the “when do you use algebra in real life” snark. Well, no one disputes that there are fields which depend on advanced mathematics. The snark comes in from supposing that a thing is worth learning only if it’s regularly “useful”.

Rick Detorie’s One Big Happy for the 12th has Joe stalling class to speak to “the guy who invented zero”. I really like this strip since it’s one of those cute little wordplay jokes that also raises a legitimate point. Zero is this fantastic idea and it’s hard to imagine mathematics as we know it without the concept. Of course, we could say the same thing about trying to do mathematics without the concept of, say, “twelve”.

We don’t know who’s “the guy” who invented zero. It’s probably not all a single person, though, or even a single group of people. There are several threads of thought which merged together into zero. One is the notion of emptiness, the absence of a measurable thing. That probably occurred to whoever was the first person to notice a thing wasn’t where it was expected. Another part is the notion of zero as a number, something you could add to or subtract from a conventional number. That is, there’s this concept of “having nothing”, yes. But can you add “nothing” to a pile of things? And represent that using the addition we do with numbers? Sure, but that’s because we’re so comfortable with the idea of zero that we don’t ponder whether “2 + 1” and “2 + 0” are expressing similar ideas. You’ll occasionally see people asking web forums whether zero is really a number, often without getting much sympathy for their confusion. I admit I have to think hard to not let long reflex stop me wondering what I mean by a number and why zero should be one.

And then there’s zero, the symbol. As in having a representation, almost always a circle, to mean “there is a zero here”. We don’t know who wrote the first of that. The oldest instance of it that we know of dates to the year 683, and was written in what’s now Cambodia. It’s in a stone carving that seems to be some kind of bill of sale. I’m not aware whether there’s any indication from that who the zero was written for, or who wrote it, though. And there’s no reason to think that’s the first time zero was represented with a symbol. It’s the earliest we know about.

Darrin Bell’s Candorville for the 12th has some talk about numbers, and favorite numbers. Lemont claims to have had 8 as his favorite number because its shape, rotated, is that of the infinity symbol. C-Dog disputes Lemont’s recollection of his motives. Which is fair enough; it’s hard to remember what motivated you that long ago. What people mostly do is think of a reason that they, today, would have done that, in the past.

The ∞ symbol as we know it is credited to John Wallis, one of that bunch of 17th-century English mathematicians. He did a good bit of substantial work, in fields like conic sections and physics and whatnot. But he was also one of those people good at coming up with notation. He developed what’s now the standard notation for raising a number to a power, that $x^n$ stuff, and showed how to define raising a number to a rational-number power. Bunch of other things. He also seems to be the person who gave the name “continued fraction” to that concept.

Wallis never explained why he picked ∞ as a shape, of all the symbols one could draw, for this concept. There’s speculation he might have been varying the Roman numeral for 1,000, which we’ve simplified to M but which had been rendered as (|) or () and I can see that. (Well, really more of a C and a mirror-reflected C rather than parentheses, but I don’t have the typesetting skills to render that.) Conflating “a thousand” with “many” or “infinitely many” has a good heritage. We do the same thing when we talk about something having millions of parts or costing trillions of dollars or such. But, Wallis never explained (so far as we’re aware), so all this has to be considered speculation, and maybe a mnemonic help for remembering the symbol.

Terry LaBan and Patty LaBan’s Edge City for the 12th is another story problem joke. Curiously the joke seems to be simply that the father gets confused following the convolutions of the story. The specific story problem circles around the “participation awards are the WORST” attitude that newspaper comics are surprisingly prone to. I think the LaBans just wanted the story problem to be long and seem tedious enough that our eyes glazed over. Anyway you could not pay me to read whatever the comments on this comic are. Sorry not sorry.

I figure to have one more Reading the Comics post this week. When that’s posted it should be available at this link. Thanks for being here.

## Reading the Comics, February 10, 2018: I Meant To Post This Thursday Edition

Ah, yes, so, in the midst of feeling all proud that I’d gotten my Reading the Comics workflow improved, I went out to do my afternoon chores without posting the essay. I’m embarrassed. But it really only affects me looking at the WordPress Insights page. It publishes this neat little calendar-style grid that highlights the days when someone’s posted and this breaks up the columns. This can only unnerve me. I deserve it.

Tom Thaves’s Frank and Ernest for the 8th of February is about the struggle to understand zero. As often happens, the joke has a lot of truth to it. Zero bundles together several ideas, overlapping but not precisely equal. And part of that is the idea of “nothing”. Which is a subtly elusive concept: to talk about the properties of a thing that does not exist is hard. As adults it’s easy to not notice this anymore. Part’s likely because mastering a concept makes one forget what it took to understand. Part is likely because if you don’t have to ponder whether the “zero” that’s “one less than one” is the same as the “zero” that denotes “what separates the count of thousands from the count of tens in the numeral 2,038” you might not, and just assume you could explain the difference or similarity to someone who has no idea.

John Zakour and Scott Roberts’s Maria’s Day for the 8th has Maria and another girl bonding over their hatred of mathematics. Well, at least they’re getting something out of it. The date in the strip leads me to realize this is probably a rerun. I’m not sure just when it’s from.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 8th proposes a prank based on mathematical use of the word “arbitrarily”. This is a word that appears a lot in analysis, and the strip makes me realize I’m not sure I can give a precise definition. An “arbitrarily large number”, for example, would be any number that’s large enough. But this also makes me realize I’m not sure precisely what joke Weinersmith is going for. I suppose that if someone were to select an arbitrarily large number they might pick 53, or a hundred, or a million billion trillion. I suppose Weinersmith’s point is that in ordinary speech an arbitrarily made choice is one selection from all the possible alternatives. In mathematical speech an arbitrarily made choice reflects every possible choice. To speak of an arbitrarily large number is to say that whatever selection is made, we can go on to show this interesting stuff is true. We’d typically like to prove the most generically true thing possible. But picking a single example can be easier to prove. It can certainly be easier to visualize. 53 is probably easier to imagine than “every number 52 or larger”, for example.

Ted Shearer’s Quincy for the 16th of December, 1978 was rerun the 9th of February. It just shows Quincy at work on his mathematics homework, and considering dedicating it to his grandmother. Mathematics books have dedications, just as any other book does. I’m not aware of dedications of proofs or other shorter mathematics works, but there’s likely some. There’s often a note of thanks, usually given to people who’ve made the paper’s writers think harder about the subjects. But I don’t think there’s any reason a paper wouldn’t thank someone who provided “mere” emotional support. I just don’t have examples offhand.

Jef Mallett’s Frazz for the 9th looks like one of those creative-teaching exercises I sometimes see in Mathematics Education Twitter: the teacher gives answers and the students come up with story problems to match. That’s not a bad project. I’m not sure how to grade it, but I haven’t done anything that creative when I’ve taught. I’m sorry I haven’t got more to say about it since the idea seems fun.

Gordon Bess’s Redeye for the 30th of September, 1971 was rerun the 10th. It’s a bit of extremely long division and I don’t blame Pokey for giving up on that problem. Starting from 5,967,342 divided by 973 I’d say, well, that’s about six million divided by a thousand, so the answer should be near six thousand. I don’t think the last digits of 2 and 3 suggest anything about what the final digit should be, if this divides evenly. So the only guidance I have is that my answer ought to be around six thousand and then we have to go into actually working. It turns out that 973 doesn’t go into 5,967,342 a whole number of times, so I sympathize more with Pokey. The answer is a little more than 6,132.9311.
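If you’d rather spare yourself the pencil work, Python’s built-in divmod confirms the estimate:

```python
# Pokey's long-division problem: 5,967,342 divided by 973.
quotient, remainder = divmod(5_967_342, 973)
print(quotient, remainder)        # 6132 906: it doesn't divide evenly
print(round(5_967_342 / 973, 4))  # 6132.9311
```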

## Reading the Comics, June 24, 2017: Saturday Morning Breakfast Cereal Edition

Somehow this is not the title of every Reading The Comics review! But it is for this post and we’ll explore why below.

Dave Coverly’s Speed Bump for the 18th is not exactly an anthropomorphic-numerals joke. It is about making symbols manifest in the real world, at least. The greater-than and less-than signs as we know them were created by the English mathematician Thomas Harriot, and introduced to the world in his posthumous Artis Analyticae Praxis (1631). He also had an idea of putting a . between the numerals of an expression and the letters multiplied by them, for example, “4.x” to mean four times x. We mostly do without that now, taking multiplication as assumed if two meaningful quantities are put next to one another. But we will use, now, a vertically-centered dot to separate terms multiplied together when that helps our organization. The equals sign we trace to the 16th century mathematician Robert Recorde, whose 1557 Whetstone of Witte uses long but recognizable equals signs. The = sign went into hibernation after that, though, until the 17th century, and even then it took some time to become well-used. So it often is with symbols.

Ted Shearer’s Quincy for the 25th of April, 1978 and rerun the 19th of June, starts from the history of zero. It’s worth noting there are a couple of threads woven together in the concept of zero. One is the idea of “nothing”, which we’ve had just forever. I mean, the idea that there isn’t something to work with. Another is the idea of the … well, the additive identity, there being some number that’s one less than one and two less than two. That you can add to anything without changing the thing. And then there’s symbols. There’s the placeholder for “there are no examples of this quantity here”. There’s the denotation of … well, the additive identity. All these things are zeroes, and if you listen closely, they are not quite the same thing. Which is not weird. Most words mean a collection of several concepts. We’re lucky the concepts we mean by “zero” are so compatible in meaning. Think of the poor person trying to understand the word “bear”, or “cleave”.

John Deering’s Strange Brew for the 19th is a “New Math” joke, fittingly done with cavemen. Well, numerals were new things once. Amusing to me is that — while I’m not an expert — in quite a few cultures the symbol for “one” was pretty much the same thing, a single slash mark. It’s hard not to suppose that numbers started out with simple tallies, and the first thing to tally might get dressed up a bit with serifs or such but is, at heart, the same thing you’d get jabbing a sharp thing into a soft rock.

Guy Gilchrist’s Today’s Dogg for the 19th I’m sure is a rerun and I think I’ve featured it here before. So be it. It’s silly symbol-play and dog arithmetic. It’s a comic strip about how dogs are cute; embrace it or skip it.

Zach Weinersmith’s Saturday Morning Breakfast Cereal is properly speaking reruns when it appears on GoComics.com. For whatever reason Weinersmith ran a patch of mathematics strips there this past week. So let me bundle all that up. On the 19th he did a joke mathematicians get a lot, about how the only small talk anyone has about mathematics is how they hated mathematics. I’m not sure mathematicians have it any better than any other teachers, though. Have you ever known someone to say, “My high school gym class gave me a greater appreciation of the world”? Or talk about how grade school history opened their eyes to the wonders of the subject? It’s a sad thing. But there are a lot of things keeping teachers from making students feel joy in their subjects.

For the 21st Weinersmith makes a statisticians joke. I can wrangle some actual mathematics out of an otherwise correctly-formed joke. How do we ever know that something is true? Well, we gather evidence. But how do we know the evidence is relevant? Even if the evidence is relevant, how do we know we’ve interpreted it correctly? Even if we have interpreted it correctly, how do we know that it shows what we want to know? Statisticians become very familiar with hypothesis testing, which amounts to the question, “does this evidence indicate that some condition is implausibly unlikely”? And they can do great work with that. But “implausibly unlikely” is not the same thing as “false”. A person knowledgeable enough and honest turns out to have few things that can be said for certain.
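A toy version of that hypothesis-testing logic can be simulated. The numbers here are invented for illustration: we imagine seeing 62 heads in 100 coin flips, and ask how often a fair coin would do at least that well by chance alone.

```python
import random

random.seed(2022)

observed = 62    # invented observation: 62 heads in 100 flips
trials = 20_000  # simulated repetitions under the fair-coin hypothesis

# Count how often pure chance matches or beats the observation.
extreme = sum(
    1 for _ in range(trials)
    if sum(random.random() < 0.5 for _ in range(100)) >= observed
)
p_value = extreme / trials
print(p_value)  # around 0.01: "implausibly unlikely", but not impossible
```

The small p-value says a fair coin would rarely produce such a run; it does not say the coin is unfair. That is the gap between “implausibly unlikely” and “false”.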

The June 23rd strip I’ve seen go around Mathematics Twitter several times, as in the tweet above, about the ways in which mathematical literacy would destroy modern society. It’s a cute and flattering portrait of mathematics’ power, which is probably why mathematicians like passing it back and forth. But … well, how would “logic” keep people from being fooled by scams? What makes a scam work is that the premise seems logical. And real-world problems — as opposed to logic-class problems — are rarely completely resolvable by deductive logic. There have to be the assumptions, the logical gaps, and the room for humbuggery that allow hoaxes and scams to slip through. And does anyone need a logic class to not “buy products that do nothing”? And what is “nothing”? I have more keychains than I have keys to chain, even if we allow for emergencies and reasonable unexpected extra needs. This doesn’t stop my buying keychains as souvenirs. Does a Penn Central-logo keychain “do nothing” merely because it sits on the windowsill rather than hold any sort of key? If so, was my love foolish to buy it as a present? Granted that buying a lottery ticket is a foolish use of money; is my life any worse for buying that than, say, a peanut butter cup that I won’t remember having eaten a week afterwards? As for credit cards — It’s not clear to me that people max out their credit cards because they don’t understand they will have to pay it back with interest. My experience has been people max out their credit cards because they have things they must pay for and no alternative but going further into debt. That people need more money is a problem of society, yes, but it’s not clear to me that a failure to understand differential equations is at the heart of it. (Also, really, differential equations are overkill to understand credit card debt. A calculator with a repeat-the-last-operation feature and ten minutes to play is enough.)
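For what it’s worth, the ten-minutes-with-a-calculator point is easy to make concrete. This sketch uses invented figures — a balance and a monthly interest rate I picked for illustration:

```python
# Repeated multiplication is all compound interest is: a $2,000
# balance at 1.8% interest per month, with no payments made.
balance = 2000.00
monthly_rate = 0.018
for month in range(12):
    balance *= 1 + monthly_rate   # the "repeat the last operation" step
print(round(balance, 2))  # after a year, over 20% more than was borrowed
```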

## Some Mathematical Tweets To Read

Can’t deny that I will sometimes stockpile links of mathematics stuff to talk about. Sometimes I even remember to post it. Sometimes it’s a tweet like this, which apparently I’ve been carrying around since April:

I admit I do not know whether the claim is true. It’s plausible enough. English has many variants in England alone, and any trade will pick up its own specialized jargon. The words are fun as it is.

From the American Mathematical Society there’s this:

I talk a good bit about knot theory. It captures the imagination and it’s good for people who like to doodle. And it has a lot of real-world applications. Tangled wires, protein strands, high-energy plasmas, they all have knots in them. Some work by Paul Sutcliffe and Fabian Maucher, both of Durham University, studies tangled vortices. These are vortices that are, er, tangled together, just like you imagine. Knot theory tells us much about this kind of vortex. And it turns out these tangled vortices can untangle themselves and smooth out again, even without something to break them up and rebuild them. It gives hope for power cords everywhere.

Nerds have a streak which compels them to make blueprints of things. It can be part of the healthier side of nerd culture, the one that celebrates everything. The side that tries to fill in the real-world things that the thing-celebrated would have if it existed. So here’s a bit of news about doing that:

I like the attempt to map Sir Thomas More’s Utopia. It’s a fun exercise in matching stuff to a thin set of data. But as mentioned in the article, nobody should take it too seriously. The exact arrangement of things in Utopia isn’t the point of the book. More probably didn’t have a map for it himself.

(Although maybe. I believe I got this from Simon Garfield’s On The Map: A Mind-Expanding Exploration Of The Way The World Looks and apologize generally if I’ve got it wrong. My understanding is Robert Louis Stevenson drew a map of Treasure Island and used it to make sure references in the book were consistent. Then the map was lost in the mail to his publishers. He had to read his text and re-create it as best he could. Which, if true, makes the map all the better. It makes it so good a lost-map story that I start instinctively to doubt it; it’s so colorfully perfect, after all.)

And finally there’s this gem from the Magic Realism Bot:

## Reading the Comics, July 13, 2016: Catching Up On Vacation Week Edition

I confess I spent the last week on vacation, away from home and without the time to write about the comics. And it was another of those curiously busy weeks that happens when it’s inconvenient. I’ll try to get caught up ahead of the weekend. No promises.

Art and Chip Sansom’s The Born Loser for the 10th talks about the statistics of body measurements. Measuring bodies is one of the foundations of modern statistics. Adolphe Quetelet, in the mid-19th century, found a rough relationship between body mass and the square of a person’s height, used today as the base for the body mass index. Francis Galton spent much of the late 19th century developing the tools of statistics and how they might be used to understand human populations, with work I will describe as “problematic” because I don’t have the time to get into how much trouble the right mind with the wrong idea can be.
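Quetelet’s relationship, for the record, is the familiar quotient: with mass in kilograms and height in meters,

$\mbox{BMI} = \frac{\mbox{mass}}{\mbox{height}^2}$

which is why a taller person can carry more mass and still land at the same index.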

No attempt to measure people’s health with a few simple measurements and derived quantities can be fully successful. Health is too complicated a thing for one or two or even ten quantities to describe. Measures like height-to-waist ratios and body mass indices and the like should be understood as filters, the way temperature and blood pressure are. If one or more of these measurements are in dangerous ranges there’s reason to think there’s a health problem worth investigating here. It doesn’t mean there is; it means there’s reason to think it’s worth spending resources on tests that are more expensive in time and money and energy. And similarly just because all the simple numbers are fine doesn’t mean someone is perfectly healthy. But it suggests that the person is more likely all right than not. They’re guides to setting priorities, easy to understand and requiring no training to use. They’re not a replacement for thought; no guides are.

Jeff Harris’s Shortcuts educational panel for the 10th is about zero. It’s got a mix of facts and trivia and puzzles with a few jokes on the side.

I don’t have a strong reason to discuss Ashleigh Brilliant’s Pot-Shots rerun for the 11th. It only mentions odds in a way that doesn’t open up to discussing probability. But I do like Brilliant’s “Embrace-the-Doom” tone and I want to share that when I can.

John Hambrock’s The Brilliant Mind of Edison Lee for the 13th of July riffs on the world’s leading exporter of statistics, baseball. Organized baseball has always been a statistics-keeping game. The Olympic Ball Club of Philadelphia’s 1837 rules set out what statistics to keep. I’m not sure why the game is so statistics-friendly. It must be in part that the game lends itself to representation as a series of identical events — pitcher throws ball at batter, while runners wait on up to three bases — with so many different outcomes.

Alan Schwarz’s book The Numbers Game: Baseball’s Lifelong Fascination With Statistics describes much of the sport’s statistics and record-keeping history. The things recorded have varied over time, with the list of things mostly growing. The number of statistics kept have also tended to grow. Sometimes they get dropped. Runs Batted In were first calculated in 1880, then dropped as an inherently unfair statistic to keep; leadoff hitters were necessarily cheated of chances to get someone else home. How people’s idea of what is worth measuring changes is interesting. It speaks to how we change the ways we look at the same event.

Dana Summers’s Bound And Gagged for the 13th uses the old joke about computers being abacuses and the like. I suppose it’s properly true that anything you could do on a real computer could be done on the abacus, just, with a lot more time and manual labor involved. At some point it’s not worth it, though.

Nate Fakes’s Break of Day for the 13th uses the whiteboard full of mathematics to denote intelligence. Cute birds, though. But any animal in eyeglasses looks good. Lab coats are almost as good as eyeglasses.

David L Hoyt and Jeff Knurek’s Jumble for the 13th is about one of geometry’s great applications, measuring how large the Earth is. It’s something that can be worked out through ingenuity and a bit of luck. Once you have that, some clever argument lets you work out the distance to the Moon, and its size. And that will let you work out the distance to the Sun, and its size. The Ancient Greeks had worked out all of this reasoning. But they had to make observations with the unaided eye, without good timekeeping — time and position are conjoined ideas — and without photographs or other instantly-made permanent records. So their numbers are, to our eyes, lousy. No matter. The reasoning is brilliant and deserves respect.
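The classic form of the measurement, usually credited to Eratosthenes, needs nothing but a shadow angle and a distance between two cities. The numbers below are the traditional rounded ones, so treat this as a sketch of the reasoning rather than a precise record:

```python
# At noon the Sun is directly overhead at one city while, at a city
# about 5,000 stadia due north, it casts shadows at about 7.2 degrees.
# That angle is 7.2/360 of a full circle, so the distance between the
# cities is the same fraction of the Earth's circumference.
angle_degrees = 7.2
distance_stadia = 5000
circumference = distance_stadia * 360 / angle_degrees
print(circumference)  # about 250,000 stadia, the traditional figure
```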

## Reading the Comics, November 21, 2015: Communication Edition

And then three days pass and I have enough comic strips for another essay. That’s fine by me, really. I picked this edition’s name because there’s a comic strip that actually touches on information theory, and another that’s about a much-needed mathematical symbol, and another about the ways we represent numbers. That’s enough grounds for me to use the title.

Samson’s Dark Side Of The Horse for the 19th of November looks like this week’s bid for an anthropomorphic numerals joke. I suppose it’s actually numeral cosplay instead. I’m amused, anyway.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 19th of November makes a patent-law joke out of the invention of zero. It’s also an amusing joke. It may be misplaced, though. The origins of zero as a concept are hard enough to trace. We can at least trace the symbol zero. In Finding Zero: A Mathematician’s Odyssey to Uncover the Origins of Numbers, Amir D Aczel traces out not just the (currently understood) history of Arabic numerals, but some of how the history of that history has evolved, and finally traces down the oldest known example of a written (well, carved) zero.

Tony Cochrane’s Agnes for the 20th of November is at heart just a joke about a student’s apocalyptically bad grades. It contains an interesting punch line, though, in Agnes’s statement that “math people are dreadful spellers”. I haven’t heard that before. It might be a joke about algebra introducing letters into numbers. But it does seem to me there’s a supposition that mathematics people aren’t very good writers or speakers. I do remember back as an undergraduate other people on the student newspaper being surprised I could write despite majoring in physics and mathematics. That may reflect people remembering bad experiences of sitting in class with no idea what the instructor was going on about. It’s easy to go from “I don’t understand this mathematics class” to “I don’t understand mathematics people”.

Steve Sicula’s Home and Away for the 20th of November is about using gambling as a way to teach mathematics. So it would be a late entry for the recent Gambling Edition of the Reading The Comics posts. Although this strip is a rerun from the 15th of August, 2008, so it’s actually an extremely early entry.

Ruben Bolling’s Tom The Dancing Bug for the 20th of November is a Super-Fun-Pak Comix installment. And for a wonder it hasn’t got a Chaos Butterfly sequence. Under the Guy Walks Into A Bar label is a joke about a horse doing arithmetic that itself swings into a base-ten joke. In this case it’s suggested the horse would count in base four, and I suppose that’s plausible enough. The joke depends on the horse pronouncing a base four “10” as “ten”, when the number is actually “four”. But the lure of the digits is very hard to resist, and saying “four” suggests the numeral “4” whatever the base is supposed to be.
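The horse’s joke is easy to mechanize. A small sketch (my own, nothing from the strip) writing numbers in base four:

```python
def to_base_four(n):
    """Write a nonnegative integer using only the digits 0-3."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, digit = divmod(n, 4)
        digits.append(str(digit))
    return "".join(reversed(digits))

print(to_base_four(4))   # "10" -- which the horse would read aloud as "ten"
print(to_base_four(10))  # "22" -- ten takes two digits in base four
```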

Mark Leiknes’s Cow and Boy for the 21st of November is a rerun from the 9th of August, 2008. It mentions the holographic principle, which is a neat concept. The principle’s explained all right in the comic. The idea was first developed in the late 1970s, following the study of black hole thermodynamics. Black holes are fascinating because the mathematics of them suggest they have a temperature, and an entropy, and even information which can pass into and out of them. This study implied that information about the three-dimensional volume of the black hole was contained entirely in its two-dimensional surface. From here things get complicated, and I’m going to shy away from describing the whole thing because I’m not sure I can do it competently. It is an amazing thing that information about a volume can be encoded in the surface, and vice-versa. And it is astounding that we can imagine a logically consistent organization of the universe that has a structure completely unlike the one our senses suggest. It’s a lasting and hard-to-dismiss philosophical question. How much of the way the world appears to be structured is the result of our minds, our senses, imposing that structure on it? How much of it is because the world is ‘really’ like that? (And does ‘really’ mean anything that isn’t trivial, then?)

I should make clear that while we can imagine it, we haven’t been able to prove that this holographic universe is a valid organization. Explaining gravity in quantum mechanics terms is a difficult point, as it often is.

Dave Blazek’s Loose Parts for the 21st of November is a two- versus three-dimensions joke. The three-dimension figure on the right is a standard way of drawing x-, y-, and z-axes, organized in an ‘isometric’ view. That’s one of the common ways of drawing three-dimensional figures on a two-dimensional surface. The two-dimension figure on the left is a quirky representation, but it’s probably unavoidable as a way to make the whole panel read cleanly. Usually when the axes are drawn isometrically, the x- and y-axes are the lower ones, with the z-axis the one pointing vertically upward. That is, they’re the ones in the floor of the room. So the typical two-dimensional figure would be the lower axes.

## The Set Tour, Part 6: One Big One Plus Some Rubble

I have a couple of sets for this installment of the Set Tour. It’s still an unusual installment because only one of the sets is that important for my purposes here. The rest I mention because they appear a lot, even if they aren’t much used in these contexts.

## I, or J, or maybe Z

The important set here is the integers. You know the integers: they’re the numbers everyone knows. They’re the numbers we count with. They’re 1 and 2 and 3 and a hundred million billion. As we get older we come to accept 0 as an integer, and even the negative integers like “negative 12” and “minus 40” and all that. The integers might be the easiest mathematical construct to know. The positive integers, anyway. The negative ones are still a little suspicious.

The set of integers has several shorthand names. I is a popular and common one. As with the real-valued numbers R and the complex-valued numbers C it gets written by hand, and typically typeset, with a double vertical stroke. And we’ll put horizontal serifs on the top and bottom of the symbol. That’s a concession to readability. You see the same effect in comic strip lettering. A capital “I” in the middle of a word will often be written without serifs, while the word by itself needs the extra visual bulk.

The next popular symbol is J, again with a double vertical stroke. This gets used if we want to reserve “I”, or the word “I”, for some other purpose. J probably gets used because it’s so very close to I, and it’s only quite recently (in historic terms) that they’ve even been seen as different letters.

The symbol that seems to come out of nowhere is Z. It comes less from nowhere than it does from German. The symbol derives from “Zahl”, meaning “number”. It seems to have got into mathematics by way of Nicolas Bourbaki, the renowned imaginary French mathematician. The Z gets written with a double diagonal stroke.

Personally, I like Z the best of this set, but on trivial grounds. It’s a more fun letter to write, especially since I write it with a horizontal stroke through the middle. I’ve got no good cultural or historical reason for this. I just picked it up as a kid and never set it back down.

In these Set Tour essays I’m trying to write about sets that get used often as domains and ranges for functions. The integers get used a fair bit, although not nearly as often as real numbers do. The integers are a natural way to organize sequences of numbers. If the record of a week’s temperatures (in Fahrenheit) is “58, 45, 49, 54, 58, 60, 64”, there’s an almost compelling temperature function here. f(1) = 58, f(2) = 45, f(3) = 49, f(4) = 54, f(5) = 58, f(6) = 60, f(7) = 64. This is a function that has as its domain the integers. It happens that the range here is also integers, although you might be able to imagine a day when the temperature reading was 54.5.
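That temperature function is small enough to write out literally, say as a Python dictionary:

```python
# The week's temperatures (Fahrenheit) as a function whose domain is
# the integers 1 through 7.
f = {1: 58, 2: 45, 3: 49, 4: 54, 5: 58, 6: 60, 7: 64}
print(f[2])       # 45, the second day's reading
print(sorted(f))  # the domain: [1, 2, 3, 4, 5, 6, 7]
```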

Sequences turn up a lot. We are almost required to measure things we are interested in in discrete samples. So mathematical work with sequences uses integers as the domain almost by default. The use of integers as a domain gets done so often that it often becomes invisible, though. Someone studying my temperature data above might write the data as f1, f2, f3, and so on. One might reasonably never even notice there’s a function there, or a domain.

And that’s fine. A tool can be so useful it disappears. Attend a play; the stage is in light and the audience in darkness. The roles the light and darkness play disappear unless the director chooses to draw attention to this choice.

And to be honest, integers are a lousy domain for functions. It’s achingly hard to prove things for functions defined just on the integers. The easiest way to do anything useful is typically to find an equivalent problem for a related function that’s got the real numbers as a domain. Then show the answer for that gives you your best-possible answer for the original question.

If all we want are the positive integers, we put a little superscript + to our symbol: I+ or J+ or Z+. That’s a popular choice if we’re using the integers as an index. If we just want the negative numbers that’s a little weird, but, change the plus sign to a minus: I- or J- or Z-.

Now for some trouble.

Sometimes we want the positive numbers and zero, or in the lingo, the “nonnegative numbers”. Good luck with that. Mathematicians haven’t quite settled on what this should be called, or abbreviated. The “Natural numbers” is a common name for the numbers 0, 1, 2, 3, 4, and so on, and this makes perfect sense and gets abbreviated N. You can double up the left vertical stroke, or the diagonal stroke, as you like and that will be understood by everybody.

That is, everybody except the people who figure “natural numbers” should be 1, 2, 3, 4, and so on, and that zero has no place in this set. After all, every human culture counts with 1 and 2 and 3, and for that matter crows and raccoons understand the concept of “four”. Yet it took thousands of years for anyone to think of “zero”, so how natural could that be?

So we might resort to speaking of the “whole numbers” instead. More good luck with that. Besides leaving open the question of whether zero should be considered “whole” there’s the linguistic problem. “Whole” number carries, for many, the implication of a number that is an integer with no fractional part. We already have the word “integer” for that, yes. But the fact people will talk about rounding off to a whole number suggests the phrase “whole number” serves some role that the word “integer” doesn’t. Still, W is sitting around not doing anything useful.

Then there’s “counting numbers”. I would be willing to endorse this as a term for the integers 0, 1, 2, 3, 4, and so on, except. Have you ever met anybody who starts counting from zero? Yes, programmers for some — not all! — computer languages. You know which computer languages. They’re the languages which baffle new students because why on earth would we start counting things from zero all of a sudden? And the obvious single-letter abbreviation C is no good because we need that for complex numbers, a set that people actually use for domains a lot.

There is a good side to this, if you aren’t willing to sit out the 150 years or so mathematicians are going to need to sort this all out. You can set out a symbol that makes sense to you, early on in your writing, and stick with it. If you find you don’t like it, you can switch to something else in your next paper and nobody will protest. If you figure out a good one, people may imitate you. If you figure out a really good one, people will change it just a tiny bit so that their usage drives you crazy. Life is like that.

Eric Weisstein’s Mathworld recommends using Z* for the nonnegative integers. I don’t happen to care for that. I usually associate superscript * symbols with some operations involving complex-valued numbers and with the duals of sets, neither of which is in play here. But it’s not like he’s wrong and I’m right. If I were forced to pick a symbol right now I’d probably give Z0+. And for the nonpositive integers — the negative integers and zero — Z0- suggests itself. I fully understand there are people who would be driven stark raving mad by this. Maybe you have a better one. I’d believe that.

Let me close with something non-controversial.

These are some sets that are too important to go unmentioned. But they don’t get used much in the domain-and-range role I’ve been using as basis for these essays. They are, in the terrain of these essays, some rubble.

You know the rational numbers? They’re the things you can write as fractions: 1/2, 5/13, 32/7, -6/7, 0 (think about it). This is a quite useful set, although it doesn’t get used much for the domain or range of functions, at least not in the fields of mathematics I see. It gets abbreviated as Q, though. There’s an extra vertical stroke on the left side of the loop, just as a vertical stroke gets added to the C for complex-valued numbers. Why Q? Well, “R” is already spoken for, as we need it for the real numbers. The key here is that every rational number can be written as the quotient of one integer divided by another. So, this is the set of Quotients. This abbreviation we get thanks to Bourbaki, the same folks who gave us Z for integers. If it strikes you that the imaginary French mathematician Bourbaki used a lot of German words, all I can say is I think that might have been part of the fun of the Bourbaki project. (Well, and German mathematicians gave us many breakthroughs in the understanding of sets in the late 19th and early 20th centuries. We speak with their language because they spoke so well.)

If you’re comfortable with real numbers and with rational numbers, you know of irrational numbers. These are (most) square roots, and pi and e, and the golden ratio and a lot of cosines of angles. Strangely, there really isn’t any common shorthand name or common notation for the irrational numbers. If we need to talk about them, we have the shorthand “R \ Q”. This means “the real numbers except for the rational numbers”. Or we have the shorthand “Qc”. This means “everything except the rational numbers”. That “everything” carries the implication “everything in the real numbers”. The “c” in the superscript stands for “complement”, everything outside the set we’re talking about. These are ungainly, yes. And it’s a bit odd considering that most real numbers are irrational numbers. The rational numbers are a most ineffable cloud of dust in the atmosphere of the real numbers.

But, mostly, we don’t need to talk about functions that have an irrational-number domain. We can do our work with a real-number domain instead. So we leave that set with a clumsy symbol. If there’s ever a gold rush of fruitful mathematics to be done with functions on irrational domains then we’ll put in some better notation. Until then, there are better jobs for our letters to do.

## Quick Little Calculus Puzzle

fluffy, one of my friends and regular readers, got to discussing with me a couple of limit problems, particularly ones that seemed to be solved through L’Hopital’s Rule, and then ran across some that don’t call for that tool of Freshman Calculus, which you maybe remember. It’s the thing about limits of zero divided by zero, or infinity divided by infinity. (It can also be applied to a couple of other “indeterminate forms”; I remember when I took this level of calculus the teacher explaining there were seven such forms. Without looking them up, I think they’re $\frac00, \frac{\infty}{\infty}, 0 \cdot \infty, \infty^{0}, 0^0, 1^{\infty}, \mbox{ and } \infty - \infty$ but I would not recommend trusting my memory in favor of actually studying for your test.)

Anyway, fluffy put forth two cute little puzzles that I had immediate responses for, and then started getting plagued by doubts about, so I thought I’d put them out here for people who want the recreation. They’re both about taking the limit at zero of fractions, specifically:

$\lim_{x \rightarrow 0} \frac{e^x}{x^e}$

$\lim_{x \rightarrow 0} \frac{x^e}{e^x}$

where e here is the base of the natural logarithm, that is, the number just a bit above 2.71828 that mathematicians find so interesting even though it isn’t pi.

The limit is, if you want to be exact, a subtly and carefully defined idea that took centuries of really bright work to explain. But the first good feeling I got for it is to imagine a function evaluated at points near, but not exactly at, the target point — in the limits here, where x equals zero — and to see: if you keep evaluating at x very near zero, are the values of your expression very near something? If they are, that thing the expression gets near is probably the limit at that point.

So, yes, you can plug in values of x like 0.1 and 0.01 and 0.0001 and so on into $\frac{e^x}{x^e}$ and $\frac{x^e}{e^x}$ and get a feeling for what the limit probably is. Saying what it definitely is takes a little more work.
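If you want to do that probing numerically, here is a minimal sketch (sticking to positive x, since x^e is not defined for negative x):

```python
import math

# Evaluate both expressions at points marching toward zero from the
# right and watch where the values head.
for x in (0.1, 0.01, 0.001, 0.0001):
    first = math.exp(x) / x ** math.e    # e^x / x^e
    second = x ** math.e / math.exp(x)   # x^e / e^x
    print(x, first, second)
# The first ratio grows without bound; the second shrinks toward zero.
```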