Reading the Comics, June 18, 2022: Pizza Edition


I’m back with my longest-running regular feature here. As I’ve warned I’m trying not to include every time one of the newspaper comics (that is, mostly, ones running on Comics Kingdom or GoComics) mentions the existence of arithmetic. So, for example, both Frank and Ernest and Rhymes with Orange did jokes about the names of the kinds of triangles. You can clip those at your leisure; I’m looking to discuss deeper subjects.

Scott Hilburn’s The Argyle Sweater is … well, it’s just an anthropomorphic-numerals joke. I have a weakness for The Wizard of Oz, that’s all. Also, I don’t know, but somewhere in the nine kerspillion authorized books written since Baum’s death there must be at least one with a “wizard of odds” plot.

A scene as from The Wizard of Oz. A numeral 10, in tin, with an axe beside it, speaks to an 8 in a gingham dress and a 22. The 10 says, 'I'm a 10 man, but I'd like to be an 11.' The 8 says, 'Come with me and two-two! We're off to see the Wizard! The Wonderful Wizard of Odds!'
Scott Hilburn’s The Argyle Sweater for the 12th of June, 2022. This and many other essays mentioning The Argyle Sweater are at this link.

Bill Amend’s FoxTrot reads almost like a word problem’s setup. There’s a difference in cost between pizzas of different sizes. Jason and Marcus make the supposition that they could buy the difference in sizes. They are asking for something physically unreasonable, but in a way that mathematics problems might do. The ring of pizza they’d be buying would be largely crust, after all. (Some people like crust, but I doubt any are ten-year-olds like Jason and Marcus.) The obvious word problem to spin out of this is extrapolating the costs of 20-inch or 8-inch pizzas, and maybe the base cost of making any pizza however tiny.

Jason and Marcus, kids, at a pizzeria's cashier: 'Your 16-inch cheese pizzas cost $17.99 and your 12-inch ones cost $14.99?' Steve the cashier: 'Um, correct.' Jason: 'We'd like to order the difference.' Steve: 'The what?' Jason: 'A 16-inch-diameter circle has an area of 201 square inches and a 12-inch diameter circle has an area of 113 square inches. We'd like the difference of 88 square inches of pizza.' Marcus, offering: 'Here's $3.' Silent penultimate panel as Steve looks at this strange pair. Later, Jason's older brother Pete says, 'My friend Steve says you very briefly dropped by the pizza shop today.' Jason: 'Your friend Steve needs a math tutor.'
Bill Amend’s FoxTrot for the 12th of June, 2022. This and other essays about FoxTrot are at this link.

You can think of a 16-inch-wide circle as a 12-inch-wide circle with an extra ring around it. (An annulus, we’d say in the trades.) This is often a useful way to look at circles. If you get into calculus you’ll see the extra area you get from a slight increase in the diameter (or, more likely, the radius) all over the place. Also, in three dimensions, the difference in volume you get from an increase in diameter. There are also a good number of theorems with names like Green’s and Stokes’s. These are all about what you can know about the interior of a shape, like a pizza, from what you know about the ring around the edge.
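If you’d like to check the strip’s arithmetic, here’s a little Python sketch. The function name and the printed comparisons are my own choices; the point is only to reproduce the areas Jason cites and to see what the pizzeria implicitly charges per square inch.

```python
import math

def pizza_area(diameter_inches):
    """Area of a circular pizza, in square inches."""
    return math.pi * (diameter_inches / 2) ** 2

large = pizza_area(16)   # about 201 square inches
small = pizza_area(12)   # about 113 square inches
ring = large - small     # the annulus Jason and Marcus want: about 88 square inches

price_large, price_small = 17.99, 14.99
print(f"annulus area: {ring:.0f} square inches")
print(f"large pizza:  {price_large / large:.4f} dollars per square inch")
print(f"small pizza:  {price_small / small:.4f} dollars per square inch")
print(f"the ring:     {(price_large - price_small) / ring:.4f} dollars per square inch")
```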

Jarvis, the valet: 'Preparing for your mathematics final, sir?' Sedgwick, the awful child: 'Yes. But I'm not too terribly concerned. We're allowed an abacus during the test to aid in our calculations.' Jarvis looks over the abacus and says, 'Well ... this should help simplify the ... ' Sedgwick: 'And of course we're allowed a hyperbolic abacus to perform functions like square roots ... sine ... cosine ... etc ... ' He holds up an icosahedral device with beads all over.
Jim Meddick’s Monty for the 15th of June, 2022. The essays with some mention of Monty are at this link.

Jim Meddick’s Monty sees Sedgwick, spoiled scion of New Jersey money, preparing for a mathematics test. He’s allowed the use of an abacus, one of the oldest and best-recognized computational aids. The abacus works by letting us turn the operations of basic arithmetic into physical operations. This has several benefits. We (generally) understand things in space pretty well. And the beads and wires serve as aids to memory, always a struggle. Sedgwick also brings out a “hyperbolic abacus”, a tool for more abstract operations like square roots and sines and cosines. I don’t know of anything by that name, but you can design mechanical tools to do particular computations. Slide rules, for example, generally have markings to let one calculate square roots and cube roots easily. Aircraft pilots might use a flight computer, a set of plastic discs to do quick estimates of flight time, fuel consumption, ground speed, and such. (There’s even an episode of the original Star Trek where Spock fiddles with one!)

I have heard, but not seen, that specialized curves were made to let people square circles with something approximating a compass-and-straightedge method. A contraption to calculate sines and cosines would not be hard to imagine. It would need to be a post on a hinge, mostly, with a set of lines to read off sine and cosine values over a range of angles. I don’t know of one that existed, as it’s easy enough to print out a table of trig functions, but it wouldn’t be hard to make.

And that’s enough for this week. This and all my other Reading the Comics posts should be at this link. I hope to get this back to a weekly column, but that does depend on Comic Strip Master Command doing what’s convenient for me. We’ll see how it turns out.

My 2018 Mathematics A To Z: Hyperbolic Half-Plane


Today’s term was one of several nominations I got for ‘H’. This one comes from John Golden, @mathhombre on Twitter and author of the Math Hombre blog on Blogspot. He brings in a lot of thought about mathematics education and teaching tools that you might find interesting or useful or, better, both.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble tiles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Hyperbolic Half-Plane.

The half-plane part is easy to explain. By the “plane” mathematicians mean, well, the plane. What you’d get if a sheet of paper extended forever. Also if it had zero width. To cut it in half … well, first we have to think hard what we mean by cutting an infinitely large thing in half. Then we realize we’re overthinking this. Cut it by picking a line on the plane, and then throwing away everything on one side or the other of that line. Maybe throw away everything on the line too. It’s logically as good to pick any line. But there are a couple lines mathematicians use all the time. This is because they’re easy to describe, or easy to work with. At least once you fix an origin and, with it, x- and y-axes. The “right half-plane”, for example, is everything in the positive-x-axis direction. Every point with coordinates you’d describe with positive x-coordinate values. Maybe the non-negative ones, if you want the edge included. The “upper half plane” is everything in the positive-y-axis direction. All the points whose coordinates have a positive y-coordinate value. Non-negative, if you want the edge included. You can make guesses about what the “left half-plane” or the “lower half-plane” are. You are correct.

The “hyperbolic” part takes some thought. What is there to even exaggerate? Wrong sense of the word “hyperbolic”. The word here is the same one used in “hyperbolic geometry”. That takes explanation.

The Western mathematics tradition, as we trace it back to Ancient Greece and Ancient Egypt and Ancient Babylon and all, gave us “Euclidean” geometry. It’s a pretty good geometry. It describes how stuff on flat surfaces works. In the Euclidean formulation we set out a couple of axioms that aren’t too controversial. Like, that lines can be extended indefinitely and that all right angles are congruent. And one axiom that is controversial. But which turns out to be equivalent to the idea that through a given point there’s only one line parallel to some other given line.

And it turns out that you don’t have to assume that. You can make a coherent “spherical” geometry, one that describes shapes on the surface of a … you know. You have to change your idea of what a line is; it becomes a “geodesic” or, on the globe, a “great circle”. And it turns out that there are no geodesics that go through a point and are parallel to some other geodesic. (I know you want to think about globes. I do too. You maybe want to say the lines of latitude are parallel to one another. They’re even called parallels, sometimes. So they are. But they’re not geodesics. They’re “little circles”. I am not throwing in ad hoc reasons I’m right and you’re not.)

There is another, though. This is “hyperbolic” geometry. This is the way shapes work on surfaces that mathematicians call saddle-shaped. I don’t know what the horse enthusiasts out there call these shapes. My guess is they chuckle and point out how that would be the most painful saddle ever. Doesn’t matter. We have surfaces. They act weird. You can draw, through a point, infinitely many lines parallel to a given other line.

That’s some neat stuff. That’s weird and interesting. They’re even called “hyperparallel lines” if that didn’t sound great enough. You can see why some people would find this worth studying. The catch is that it’s hard to order a pad of saddle-shaped paper to try stuff out on. It’s even harder to get a hyperbolic blackboard. So what we’d like is some way to represent these strange geometries using something easier to work with.

The hyperbolic half-plane is one of those approaches. This uses the upper half-plane. It works by a move as brilliant and as preposterous as that time Q told Data and LaForge how to stop that falling moon. “Simple. Change the gravitational constant of the universe.”

What we change here is the “metric”. The metric is a function. It tells us something about how points in a space relate to each other. It gives us distance. In Euclidean geometry, plane geometry, we use the Euclidean metric. You can find the distance between point A and point B by looking at their coordinates, (x_A, y_A) and (x_B, y_B) . This distance is \sqrt{\left(x_B - x_A\right)^2 + \left(y_B - y_A\right)^2} . Don’t worry about the formulas. The lines on a sheet of graph paper are a reflection of this metric. Each line is (normally) a fixed distance from its parallel neighbors. (Yes, there are polar-coordinate graph papers. And there are graph papers with logarithmic or semilogarithmic spacing. I mean graph paper like you can find at the office supply store without asking for help.)

But the metric is something we choose. There are some rules it has to follow to be logically coherent, yes. But those rules give us plenty of room to play. By picking the correct metric, we can make this flat plane obey the same geometric rules as the hyperbolic surface. This metric looks more complicated than the Euclidean metric does, but only because it has more terms and takes longer to write out. What’s important about it is that the distance your thumb put on top of the paper covers up is bigger if your thumb is near the bottom of the upper-half plane than if your thumb is near the top of the paper.
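If you like seeing that in numbers, here’s a short Python sketch using the standard distance formula for this metric. The function name and the sample points are arbitrary picks of mine.

```python
import math

def hyperbolic_distance(p, q):
    """Distance between two points of the upper half-plane, (x1, y1) and (x2, y2),
    under the standard hyperbolic metric. Both y-coordinates must be positive."""
    (x1, y1), (x2, y2) = p, q
    return math.acosh(1 + ((x2 - x1)**2 + (y2 - y1)**2) / (2 * y1 * y2))

# A 'thumb' one-tenth of a unit wide covers much more hyperbolic distance
# when it sits near the bottom edge than when it sits far above it.
print(hyperbolic_distance((0.0, 0.1), (0.1, 0.1)))    # near the edge: about 0.96
print(hyperbolic_distance((0.0, 10.0), (0.1, 10.0)))  # high up: about 0.01
```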

So. There are now two things that are “lines” in this. One of them is vertical lines. The graph paper we would make for this has a nice file of parallel lines like ordinary paper does. The other thing, though … well, that’s half-circles. They’re half-circles with a center on the edge of the half-plane. So our graph paper would also have a bunch of circles, of different sizes, coming from regularly-spaced sources on the bottom of the paper. A line segment is a piece of either these vertical lines or these half-circles. You can make any polygon you like with these, if you pick out enough line segments. They’re there.
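And if you want to find the “line” through two points, a little coordinate geometry does it: a vertical line if the points share an x-coordinate, and otherwise the half-circle whose center sits on the bottom edge, the same Euclidean distance from both points. A sketch, with my own function name:

```python
import math

def geodesic_through(p, q):
    """The hyperbolic 'line' through two points of the upper half-plane:
    either a vertical line or a half-circle centered on the bottom edge."""
    (x1, y1), (x2, y2) = p, q
    if math.isclose(x1, x2):
        return ("vertical line", x1)
    # The center (c, 0) has to be the same Euclidean distance from both points.
    c = (x2**2 + y2**2 - x1**2 - y1**2) / (2 * (x2 - x1))
    return ("half-circle", c, math.hypot(x1 - c, y1))

print(geodesic_through((0.0, 1.0), (0.0, 3.0)))   # ('vertical line', 0.0)
print(geodesic_through((-1.0, 1.0), (1.0, 1.0)))  # ('half-circle', 0.0, 1.414...)
```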

There are many ways to represent hyperbolic surfaces. This is one of them. It’s got some nice properties. One of them is that it’s “conformal”. Angles that you draw using this metric are the same size as those on the corresponding hyperbolic surface. You don’t appreciate how sweet that is until you’re working in non-Euclidean geometries. Circles that are entirely within the hyperbolic half-plane match to circles on a hyperbolic surface. Once you’ve got your intuition for this hyperbolic half-plane, you can step into hyperbolic half-volumes. And that lets you talk about the geometry of hyperbolic spaces that reach into four or more dimensions. Isometries — picking up a shape and moving it in ways that don’t change distance — match up with the Möbius Transformations. These are a well-understood set of ways of transforming the plane that comes from a different corner of geometry. Also from that fellow with the strip, August Ferdinand Möbius. It’s always exciting to find relationships like that in mathematical structures.

Pictures often help. I don’t know why I don’t include them. But here is a web site with pages, and pictures, that describe much of the hyperbolic half-plane. It includes code to use with the Geometer’s Sketchpad software, which I have never used and know nothing about. That’s all right. There’s at least one page there showing a wondrous picture. I hope you enjoy.


This and other essays in the Fall 2018 A-To-Z should be at this link. And I’ll be getting to more letters soon.

My 2018 Mathematics A To Z: Fermat’s Last Theorem


Today’s topic is another request, this one from a Dina. I’m not sure if this is Dina Yagodich, who’d also suggested using the letter ‘e’ for the number ‘e’. Trusting that it is, Dina Yagodich has a YouTube channel of mathematics videos. They cover topics like how to convert degrees and radians to one another, what the chance of a false positive (or false negative) on a medical test is, ways to solve differential equations, and how to use computer tools like MathXL, TI-83/84 calculators, or Matlab. If I’m mistaken, original-commenter Dina, please let me know and let me know if you have any creative projects that should be mentioned here.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble tiles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Fermat’s Last Theorem.

It comes to us from number theory. Like many great problems in number theory, it’s easy to understand. If you’ve heard of the Pythagorean Theorem you know, at least, there are triplets of whole numbers so that the first number squared plus the second number squared equals the third number squared. It’s easy to wonder about generalizing. Are there quartets of numbers, so the squares of the first three add up to the square of the fourth? Quintuplets? Sextuplets? … Oh, yes. That’s easy. What about triplets of whole numbers, including negative numbers? Yeah, and that turns out to be boring. Triplets of rational numbers? Turns out to be the same as triplets of whole numbers. Triplets of real-valued numbers? Turns out to be very boring. Triplets of complex-valued numbers? Also none too interesting.

Ah, but, what about a triplet of numbers, only raised to some other power? All three numbers raised to the first power is easy; we call that addition. To the third power, though? … The fourth? Any other whole number power? That’s hard. It’s hard finding, for any given power, a trio of numbers that work, although some come close. I’m informed there was an episode of The Simpsons which included, as a joke, the equation 1782^{12} + 1841^{12} = 1922^{12} . If it were true, this would be enough to show Fermat’s Last Theorem was false. … Which happens. Sometimes, mathematicians believe they have found something which turns out to be wrong. Often this comes from noticing a pattern, and finding a proof for a specific case, and supposing the pattern holds up. This equation isn’t true, but it is correct for the first nine digits. The episode “The Wizard of Evergreen Terrace” puts forth 3987^{12} + 4365^{12} = 4472^{12} , which apparently matches ten digits. This includes the final digit, also known as “the only one anybody could check”. (The last digit of 3987^{12} is 1. Last digit of 4365^{12} is 5. Last digit of 4472^{12} is 6, and there you go.) Really makes you think there’s something weird going on with 12th powers.
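You don’t have to take anyone’s word for the near-misses. Python’s integers are exact, however large, so a few lines settle it. The digit-printing here is just my way of eyeballing how close the two sides come.

```python
# Exact-integer check of the two Simpsons near-misses.
for a, b, c in [(1782, 1841, 1922), (3987, 4365, 4472)]:
    lhs, rhs = a**12 + b**12, c**12
    print(a, b, c, "equal?", lhs == rhs)          # False both times; Fermat is safe
    print(" leading digits:", str(lhs)[:12], "vs", str(rhs)[:12])
    print(" final digits:  ", lhs % 10, "vs", rhs % 10)  # they only agree for the second pair
```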

For a Fermat-like example, Leonhard Euler conjectured a thing about “Sums of Like Powers”. That for a whole number ‘n’, you need at least n whole numbers-raised-to-an-nth-power to equal something else raised to an n-th power. That is, you need at least three whole numbers raised to the third power to equal some other whole number raised to the third power. At least four whole numbers raised to the fourth power to equal something raised to the fourth power. At least five whole numbers raised to the fifth power to equal some number raised to the fifth power. Euler was wrong, in this case. L J Lander and T R Parkin published, in 1966, the one-paragraph paper Counterexample to Euler’s Conjecture on Sums of Like Powers. 27^5 + 84^5 + 110^5 + 133^5 = 144^5 and there we go. Thanks, CDC 6600 computer!
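The counterexample is just as easy to verify, and worth doing once if only to feel briefly superior to a CDC 6600:

```python
# Lander and Parkin's 1966 counterexample to Euler's conjecture, checked exactly.
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5
print("four fifth powers really do add up to a fifth power")
```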

But Fermat’s hypothesis. Let me put it in symbols. It’s easier than giving everything long, descriptive names. Suppose that the power ‘n’ is a whole number greater than 2. Then there are no three counting numbers ‘a’, ‘b’, and ‘c’ which make true the equation a^n + b^n = c^n . It looks doable. It looks like once you’ve mastered high school algebra you could do it. Heck, it looks like if you know the proof about how the square root of two is irrational you could approach it. Pierre de Fermat himself said he had a wonderful little proof of it.

He was wrong. No shame in that. He was right about a lot of mathematics, including a lot of stuff that leads into the basics of calculus. And he was right in his feeling that this a^n + b^n = c^n stuff was impossible. He was wrong that he had a proof. At least not one that worked for every possible whole number ‘n’ larger than 2.

For specific values of ‘n’, though? Oh yes, that’s doable. Fermat did it himself for an ‘n’ of 4. Euler, a century later, filled in ‘n’ of 3. Peter Dirichlet, a great name in number theory and analysis, and Adrien-Marie Legendre, who worked on nearly everything, proved the case of ‘n’ of 5. Dirichlet, in 1832, proved the case for ‘n’ of 14. And there were more partial solutions. You could show that if Fermat’s Last Theorem were ever false, it would have to be false for some prime-number value of ‘n’. That’s great work, answering as it does infinitely many possible cases. It just leaves … infinitely many to go.

And that’s how things went for centuries. I don’t know that every mathematician made some attempt on Fermat’s Last Theorem. But it seems hard to imagine a person could love mathematics enough to spend their lives doing it and not at least take an attempt at it. Nobody ever found a proof, though. In a 1989 episode of Star Trek: The Next Generation, Captain Picard muses on how eight centuries after Fermat nobody’s proven his theorem. This struck me at the time as too pessimistic. Granted humans were stumped for 400 years. But for 800 years? And stumping everyone in a whole Federation of a thousand worlds? And more than a thousand mathematical traditions? And, for some of these species, tens of thousands of years of recorded history? … Still, there wasn’t much sign of the problem being solved. In 1992 Analog Science Fiction Magazine published a funny short-short story by Ian Randal Strock, “Fermat’s Legacy”. In it, Fermat — jealous of figures like René Descartes and Blaise Pascal who upstaged his mathematical accomplishments — jots down the note. He figures an unsupported claim like that will earn true lasting fame.

So that takes us to 1993, when the world heard about elliptic curves for the first time. Elliptic curves are neat things. They’re curves described by polynomials. They have some nice mathematical properties. People first noticed them in studying how long arcs of ellipses are. (This is why they’re called elliptic curves, even though most of them have nothing to do with any ellipse you’d ever tolerate in your presence.) They look ready to use for encryption. And in 1985, Gerhard Frey noticed something. Suppose you did have, for some ‘n’ bigger than 2, a solution a^n + b^n = c^n . Then you could use that a, b, and n to make a new elliptic curve. That curve is the one that satisfies y^2 = x\cdot\left(x - a^n\right)\cdot\left(x + b^n\right) . And then that elliptic curve would not be “modular”.

I would like to tell you what it means for an elliptic curve to be modular. But getting to that point would take at least four subsidiary essays. MathWorld has a description of what it means to be modular, and even links to explaining terms like “meromorphic”. It gets into exotic stuff.

Frey didn’t show whether elliptic curves of this type had to be modular or not. This is normal enough, for mathematicians. You want to find things which are true and interesting. This includes conjectures like this one, that if elliptic curves are all modular then Fermat’s Last Theorem has to be true. Frey was working on consequences of the Taniyama-Shimura Conjecture, itself three decades old at that point. Yutaka Taniyama and Goro Shimura had found there seemed to be a link between elliptic curves and these “modular forms”, functions with a rich collection of symmetries. That is, a group-theory thing.

So in fall of 1993 I was taking an advanced, though still undergraduate, course in (not-high-school) algebra at Rutgers. It’s where we learn group theory, after Intro to Algebra introduced us to group theory. Some exciting news came out. This fellow named Andrew Wiles at Princeton had shown an impressive bunch of things. Most important, that the Taniyama-Shimura Conjecture was true for semistable elliptic curves. This includes the kind of elliptic curve Frey made out of solutions to Fermat’s Last Theorem. So the curves based on solutions to Fermat’s Last Theorem would have to be modular. But Ken Ribet had shown that any curve based on a solution to Fermat’s Last Theorem couldn’t be modular. The conclusion: there can’t be any solutions to Fermat’s Last Theorem. Our professor did his best to explain the proof to us. Abstract Algebra was the undergraduate course closest to the stuff Wiles was working on. It wasn’t very close. When you’re still trying to work out what it means for something to be an ideal it’s hard to even follow the setup of the problem. The proof itself was inaccessible.

Which is all right. Wiles’s original proof had some flaws. At least this mathematics major shrugged when that news came down and wondered, well, maybe it’ll be fixed someday. Maybe not. I remembered how exciting cold fusion was for about six weeks, too. But this someday didn’t take long. Wiles, with Richard Taylor, revised the proof and published about a year later. So far as I’m aware, nobody has any serious qualms about the proof.

So does knowing Fermat’s Last Theorem get us anything interesting? … And here is a sad anticlimax. It’s neat to know that a^n + b^n = c^n can’t be true unless ‘n’ is 1 or 2, at least for positive whole numbers. But I’m not aware of any neat results that follow from that, or that would follow if it were untrue. There are results that follow from the Taniyama-Shimura Conjecture that are interesting, according to people who know them and don’t seem to be fibbing to me. But Fermat’s Last Theorem turns out to be a cute little aside.

Which is not to say studying it was foolish. This easy-to-understand, hard-to-solve problem certainly attracted talented minds to think about mathematics. Mathematicians found interesting stuff in trying to solve it. Some of it might be slight. Studying Pythagorean triplets — ‘a’, ‘b’, and ‘c’ with a^2 + b^2 = c^2 — I learned that I was not the infinitely brilliant mathematician at age fifteen I hoped I might be. Also that if ‘a’, ‘b’, and ‘c’ are relatively prime, you can’t have ‘a’ and ‘b’ both odd and ‘c’ even. You had to have ‘c’ and either ‘a’ or ‘b’ odd, with the other number even. Other mathematicians of more nearly infinite ability found stuff of greater import. Ernst Eduard Kummer in the 19th century developed ideals. These are an important piece of ring theory. He was busy proving special cases of Fermat’s Last Theorem.
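That parity fact about primitive triples is the sort of thing you can convince yourself of with a brute-force search before going looking for the (short) proof. A sketch, checking every primitive triple with ‘c’ below 200:

```python
from math import gcd

# Check the parity claim: in a primitive Pythagorean triple,
# 'c' is odd and exactly one of 'a', 'b' is even.
for c in range(1, 200):
    for b in range(1, c):
        for a in range(1, b + 1):
            if a*a + b*b == c*c and gcd(gcd(a, b), c) == 1:
                assert c % 2 == 1 and (a % 2) != (b % 2)
print("no primitive triple with c below 200 breaks the parity rule")
```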

Kind viewers have tried to retcon Picard’s statement about Fermat’s Last Theorem. They say Picard was really searching for the proof Fermat had, or believed he had. Something using the mathematical techniques available to the early 17th century. Or that follow closely enough from that. The Taniyama-Shimura Conjecture definitely isn’t it. I don’t buy the retcon, but I’m willing to play along for the sake of not causing trouble. I suspect there’s not a proof of the general case that uses anything Fermat could have recognized, or thought he had. That’s all right. The search for a thing can be useful even if the thing doesn’t exist.

Did The Greatest Generation Hosts Get As Drunk As I Expected?


I finally finished listening to Benjamin Ahr Harrison and Adam Pranica’s Greatest Generation podcast reviews of the first season of Star Trek: Deep Space Nine. (We’ve had fewer long car trips for this.) So I can return to my projection of how their drinking game would turn out.

Their plan was to make the discussion of some of Deep Space Nine‘s episodes more exciting by recording their reviews while drinking a lot. The plan was, for the fifteen episodes they had in the season, there would be a one-in-fifteen chance of doing any particular episode drunk. So how many drunk episodes would you expect to get, on this basis?

It’s a well-formed expectation value problem. There could be as few as zero or as many as fifteen, but some cases are more likely than others. Each episode could be recorded drunk or not-drunk. There’s an equal chance of each episode being recorded drunk. Whether one episode is drunk or not doesn’t depend on whether the one before was, and doesn’t affect whether the next one is. (I’ll come back to this.)

The most likely case was for there to be one drunk episode. The probability of exactly one drunk episode was a little over 38%. No drunk episodes was also a likely outcome. There was a better than 35% chance it would never have turned up. The chance of exactly two drunk episodes was about 19%. Three drunk episodes had a slightly less than 6% chance of happening. Four drunk episodes had a slightly more than 1% chance of happening. And after that you get into the deeply unlikely cases.
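If you don’t trust the percentages, a quick simulation gets you close enough. This is just a sketch in Python, with an arbitrary number of trials; the exact figures come from the binomial distribution.

```python
import random

# Simulate seasons of 15 episodes, each independently with a 1-in-15
# chance of being recorded drunk, and tally how many drunk episodes show up.
random.seed(42)
trials = 200_000
counts = [0] * 16
for _ in range(trials):
    drunk = sum(1 for _ in range(15) if random.randrange(15) == 0)
    counts[drunk] += 1

for k in range(5):
    print(f"{k} drunk episodes: {counts[k] / trials:.3f}")
# Should land near 0.355, 0.381, 0.190, 0.059, 0.013.
```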

As the Deep Space Nine season turned out, this one-in-fifteen chance came up twice. It turned out they sort of did three drunk episodes, though. One of the drunk episodes turned out to be the first of two they planned to record that day. I’m not sure why they didn’t just swap what episode they recorded first, but I trust they had logistical reasons. As often happens with probability questions, the independence of events — whether a success for one affects the outcome of another — changes calculations.

There’s not going to be a second-season update to this. They’ve chosen to make a more elaborate recording game of things. They’ve set up a modified Snakes and Ladders type board with a handful of spots marked for stunts. Some sound like fun, such as recording without taking any notes about the episode. Some are, yes, drinking episodes. But this is all a very different and more complicated thing to project. If I were going to tackle that it’d probably be by running a bunch of simulations and taking averages from that.

Still from Deep Space Nine, season 6, episode 23, 'Profit and Lace', the sex-changed Quark feeling her breasts and looking horrified.
Real actual episode that was really actually made and really actually aired for real. I’m going to go ahead and guess that it hasn’t aged well.

Also I trust they’ve been warned about the episode where Quark has a sex change so he can meet a top Ferengi soda magnate after accidentally giving his mother a heart attack because gads but that was a thing that happened somehow.

How Drunk Can We Expect The Greatest Generation Podcast Hosts To Get?


Among my entertainments is listening to the Greatest Generation podcast, hosted by Benjamin Ahr Harrison and Adam Pranica. They recently finished reviewing all the Star Trek: The Next Generation episodes, and have started Deep Space Nine. To add some fun and risk to episode podcasts the hosts proposed to record some episodes while drinking heavily. I am not a fan of recreational over-drinking, but I understand their feelings. There’s an episode where Quark has a sex-change operation because he gave his mother a heart attack right before a politically charged meeting with a leading Ferengi soda executive. Nobody should face that mess sober.

At the end of the episode reviewing “Babel”, Harrison proposed: there’s 15 episodes left in the season. Use a random number generator to pick a number from 1 to 15; if it’s one, they do the next episode (“Captive Pursuit”) drunk. And it was; what are the odds? One in fifteen. I just said.

Still from Next Generation season 1, episode 3, 'The Naked Now', causing all us Trekkies at home to wonder if maybe this new show wasn't going to be as good as we so desperately needed it to be?
Space-drunk engineer Jim Shimoda throwing control chips around in the moment that made him a Greatest Generation running joke. In the podcast’s context this makes sense. In the original context this made us all in 1987 grit our teeth and say, “No, no, this really is as good a show as we need this to be shut up shut up shut up”.

The question: how many episodes would they be doing drunk? As they discussed in the next episode, this would imply they’d always get smashed for the last episode of the season. This is a straightforward expectation-value problem. The expectation value of a thing is the sum of all the possible outcomes times the chance of each outcome. Here, the possible outcome is adding 1 to the number of drunk episodes. The chance of any particular episode being a drunk episode is 1 divided by ‘N’, if ‘N’ is the number of episodes remaining. So the next-to-the-last episode has 1 chance in 2 of being drunk. The second-from-the-last has 1 chance in 3 of being drunk. And so on.

This expectation value isn’t hard to calculate. If we start counting from the last episode of the season, then it’s easy. Add up 1 + \frac12 + \frac13 + \frac14 + \frac15 + \frac16 + \cdots , ending when we get up to one divided by the number of episodes in the season. 25 or 26, for most seasons of Deep Space Nine. 15, from when they counted here. This is the start of the harmonic series.

The harmonic series gets taught in sequences and series in calculus because it does some neat stuff if you let it go on forever. For example, every term in this sequence gets smaller and smaller. (The “sequence” is the terms that go into the sum: 1, \frac12, \frac13, \frac14, \frac{1}{1054}, \frac{1}{2038} , and so on. The “series” is the sum of a sequence, a single number. I agree it seems weird to call a “series” that sum, but it’s the word we’re stuck with. If it helps, consider: when we talk about “a TV series” we usually mean the whole body of work, not individual episodes.) You can pick any number, however tiny you like. I can then respond with the last term in the sequence bigger than your number. Infinitely many terms in the sequence will be smaller than your pick. And yet: you can pick any number you like, however big. And I can take a finite number of terms in this sequence to make a sum bigger than whatever number you liked. The sum will eventually be bigger than 10, bigger than 100, bigger than a googolplex. These two facts are easy to prove, but they seem like they ought to be contradictory. You can see why infinite series are fun and produce much screaming on the part of students.

No Star Trek show has a season with infinitely many episodes, though, however long the second season of Enterprise seemed to drag out. So we don’t have to worry about infinitely many drunk episodes.

Since there were 15 episodes up for drunkenness in the first season of Deep Space Nine the calculation’s easy. I still did it on the computer. For the first season we could expect 1 + \frac12 + \frac13 + \cdots + \frac{1}{15} drunk episodes. This is a number a little bigger than 3.318. So, more likely three drunk episodes, four being likely. For the 25-episode seasons (seasons four and seven, if I’m reading this right), we could expect 1 + \frac12 + \frac13 + \cdots + \frac{1}{25} or just over 3.816 drunk episodes. Likely four, maybe three. For the 26-episode seasons (seasons two, five, and six), we could expect 1 + \frac12 + \frac13 + \cdots + \frac{1}{26} drunk episodes. That’s just over 3.854.

The number of drunk episodes to expect keeps growing. The harmonic series grows without bounds. But it keeps growing slower, compared to the number of terms you add together. You need a 31-episode season to be able to expect at least four drunk episodes. To expect five drunk episodes you’d need an 83-episode season. If the guys at Worst Episode Ever, reviewing The Simpsons, did all 625-so-far episodes by this rule we could only expect seven drunk episodes.
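Here’s a little Python sketch of those sums, if you want to fiddle with season lengths yourself. The function names are mine; the arithmetic is just the harmonic series.

```python
from itertools import count

def harmonic(n):
    """1 + 1/2 + 1/3 + ... + 1/n, the expected drunk episodes under the original rule."""
    return sum(1 / k for k in range(1, n + 1))

print(harmonic(15), harmonic(25), harmonic(26))  # about 3.318, 3.816, 3.854

def season_length_needed(target):
    """Shortest season whose expected number of drunk episodes reaches the target."""
    total = 0.0
    for n in count(1):
        total += 1 / n
        if total >= target:
            return n

print(season_length_needed(4))  # 31
print(season_length_needed(5))  # 83
print(harmonic(625))            # a bit over 7, for the Worst Episode Ever case
```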

Still, three, maybe four, drunk episodes of the 15 remaining first season is a fair number. They shouldn’t likely be evenly spaced. The chance of a drunk episode rises the closer they get to the end of the season. Expected length between drunk episodes is interesting but I don’t want to deal with that. I’ll just say that it probably isn’t the five episodes that the quickest, easiest calculation (taking 15 divided by 3) would suggest.

And it’s moot anyway. The hosts discussed it just before starting “Captive Pursuit”. Pranica pointed out, for example, the smashed-last-episode problem. What they decided they meant was there would be a 1-in-15 chance of recording each episode this season drunk. For the 25- or 26-episode seasons, each episode would get its 1-in-25 or 1-in-26 chance.

That changes the calculations. Not in spirit: that’s still the same. Count the number of possible outcomes and the chance of each one being a drunk episode and add that all up. But the work gets simpler. Each episode has a 1-in-15 chance of adding 1 to the total of drunk episodes. So the expected number of drunk episodes is the number of episodes (15) times the chance each is a drunk episode (1 divided by 15). We should expect 1 drunk episode. The same reasoning holds for all the other seasons; we should expect 1 drunk episode per season.

Still, since each episode gets an independent draw, there might be two drunk episodes. Could be three. There’s no reason that all 15 couldn’t be drunk. (Except that at the end of reviewing “Captive Pursuit” they drew for the next episode and it’s not to be a drunk one.) What are the chances there’s no drunk episodes? What are the chances there’s two, or three, or eight drunk episodes?

There’s a rule for this. This kind of problem is a mathematically-famous one. We get our results from the “binomial distribution”. This applies whenever there’s a bunch of attempts at something. And each attempt can either clearly succeed or clearly fail. And the chance of success (or failure) each attempt is always the same. That’s what applies here. If there’s ‘N’ episodes, and the chance is ‘p’ that any one will be drunk, then we get the chance ‘y’ of turning up exactly ‘k’ drunk episodes by the formula:

y = \frac{N!}{k! \cdot \left(N - k\right)!} p^k \left(1 - p\right)^{N - k}

That looks a bit ugly, yeah. (I don’t like using ‘y’ as the name for a probability. I ran out of good letters and didn’t want to do subscripts.) It’s just tedious to calculate is all. Factorials and everything. Better to let the computer work it out. There is a formula that’s easy enough to work with, though. That’s because the chance of a drunk episode is the same each episode. I don’t know a formula to get the chance of exactly zero or one or four drunk episodes with the first, one-in-N chance. Probably the only thing to do is run a lot of simulations and trust that’s approximately right.
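For the even-chance rule, though, the computer’s work is short. Here’s a sketch of the sort of thing I mean; the function name is my own, and the output is only meant to resemble the tables below.

```python
from math import comb

def drunk_episode_chances(episodes):
    """Binomial distribution: chance of exactly k drunk episodes when each of
    `episodes` episodes independently has a 1-in-`episodes` chance of being drunk."""
    p = 1 / episodes
    return [comb(episodes, k) * p**k * (1 - p)**(episodes - k)
            for k in range(episodes + 1)]

for season_length in (15, 25, 26):
    print(f"{season_length}-episode season:")
    for k, chance in enumerate(drunk_episode_chances(season_length)[:6]):
        print(f"  {k} drunk episodes: {chance:.3f}")
```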

But for this rule it’s easy enough. There’s this formula, like I said. I figured out the chance of all the possible drunk episode combinations for the seasons. I mean I had the computer work it out. All I figured out was how to make it give me the results in a format I liked. Here’s what I got.

Drunk episodes   Chance in a 15-episode season
0 0.355
1 0.381
2 0.190
3 0.059
4 0.013
5 0.002
6 0.000
7 0.000
8 0.000
9 0.000
10 0.000
11 0.000
12 0.000
13 0.000
14 0.000
15 0.000

Sorry it’s so dull, but the chance of a one-in-fifteen event happening 15 times in a row? You’d expect that to be pretty small. It’s got a probability of something like 0.000 000 000 000 000 002 28 of happening. Not technically impossible, but yeah, impossible.

How about for the 25- and 26-episode seasons? Here’s the chance of all the outcomes:

Drunk episodes   Chance in a 25-episode season
0 0.360
1 0.375
2 0.188
3 0.060
4 0.014
5 0.002
6 0.000
7 0.000
8 or more 0.000

And things are a tiny bit different for a 26-episode season.

Drunk episodes   Chance in a 26-episode season
0 0.361
1 0.375
2 0.188
3 0.060
4 0.014
5 0.002
6 0.000
7 0.000
8 or more 0.000

Yes, there’s a greater chance of no drunk episodes. The difference is really slight. It only looks so big because of rounding. A no-drunk 25 episode season has a chance of about 0.3604, while a no-drunk 26 episodes season has a chance of about 0.3607. The difference comes from the chance of lots of drunk episodes all being even worse somehow.

And there’s some neat implications through this. There’s a slightly better than one in three chance that each of the second through seventh seasons won’t have any drunk episodes. We could expect two dry seasons, hopefully not the one with Quark’s sex-change episode. We can reasonably expect at least one season with two drunk episodes. There’s a slightly more than 40 percent chance that some season will have three drunk episodes. There’s just under a 10 percent chance some season will have four drunk episodes.

Still from Deep Space Nine, season 6, episode 23, 'Profit and Lace', the sex-changed Quark feeling her breasts and looking horrified.
Real actual episode that was really actually made and really actually aired for real. I don’t know when I last saw it. I’m going to go ahead and guess that it hasn’t aged well.

There’s no guarantees, though. Probability is a curious blend of the unpredictable and the knowable. There’s no predicting when any drunk episode will come. But we can make meaningful predictions about groups of episodes. These properties seem like they should be contradictions. And they’re not, and that’s wonderful.

Why Stuff Can Orbit, Part 4: On The L


Less way previously:


We were chatting about central forces. In these a small object — a satellite, a planet, a weight on a spring — is attracted to the center of the universe, called the origin. We’ve been studying this by looking at potential energy, a function that in this case depends only on how far the object is from the origin. But to find circular orbits, we can’t just look at the potential energy. We have to modify this potential energy to account for angular momentum. In this essay I mean to discuss that angular momentum some.

Let me talk first about the potential energy. Mathematical physicists usually write this as a function named U or V. I’m using V. That’s what my professor used teaching this, back when I was an undergraduate several hundred thousand years ago. A central force, by definition, changes only with how far you are from the center. I’ve put the center at the origin, because I am not a madman. This lets me write the potential energy as V = V(r).

V(r) could, in principle, be anything. In practice, though, I am going to want it to be r raised to a power. That is, V(r) is equal to C r^n. The ‘C’ here is a constant. It’s a scaling constant. The bigger a number it is the stronger the central force. The closer the number is to zero the weaker the force is. In standard units, gravity has a constant incredibly close to zero. This makes orbits very big things, which generally works out well for planets. In the mathematics of masses on springs, the constant is closer to middling little numbers like 1.

The ‘n’ here is a deceiver. It’s a constant number, yes, and it can be anything we want. But the use of ‘n’ as a symbol has connotations. Usually when a mathematician or a physicist writes ‘n’ it’s because she needs a whole number. Usually a positive whole number. Sometimes it’s negative. But we have a legitimate central force if ‘n’ is any real number: 2, -1, one-half, the square root of π, any of that is good. If you just write ‘n’ without explanation, the reader will probably think “integers”, possibly “counting numbers”. So it’s worth making explicit when this isn’t so. It’s bad form to surprise the reader with what kind of number you’re even talking about.

(Some number of essays on we’ll find out that the only values ‘n’ can have that are worth anything are -1, 2, and 7. And 7 isn’t all that good. But we aren’t supposed to know that yet.)

C r^n isn’t the only kind of central force that could exist. Any function rule would do. But it’s enough. If we wanted a more complicated rule we could just add two, or three, or more potential energies together. This would give us V(r) = C_1 r^{n_1} + C_2 r^{n_2} , with C_1 and C_2 two possibly different numbers, and n_1 and n_2 two definitely different numbers. (If n_1 and n_2 were the same number then we should just add C_1 and C_2 together and stop using a more complicated expression than we need.) Remember that Newton’s Law of Motion about the sum of multiple forces being something vector something something direction? When we look at forces as potential energy functions, that law turns into just adding potential energies together. They’re well-behaved that way.

And if we can add these r-to-a-power potential energies together then we’ve got everything we need. Why? Polynomials. We can approximate most any potential energy that would actually happen with a big enough polynomial. Or at least a polynomial-like function. These r-to-a-power forces are a basis set for all the potential energies we’re likely to care about. Understand how to work with one and you understand how to work with them all.

Well, one exception. The logarithmic potential, V(r) = C log(r), is really interesting. And it has real-world applicability. It describes how strongly two vortices, two whirlpools, attract each other. You can approximate the logarithm with polynomials. But logarithms are pretty well-behaved functions. You might be better off just doing that as a special case.

Still, at least to start with, we’ll stick with V(r) = C r^n and you know what I mean by all those letters now. So I’m free to talk about angular momentum.

You’ve probably heard of momentum. It’s got something to do with movement, only sports teams and political campaigns are always gaining or losing it somehow. When we talk of that we’re talking of linear momentum. It describes how much mass is moving how fast in what direction. So it’s a vector, in three-dimensional space. Or two-dimensional space if you’re making the calculations easier. To find what the vector is, we make a list of every object that’s moving. We take its velocity — how fast it’s moving and in what direction — and multiply that by its mass. Mass is a single number, a scalar, and we’re always allowed to multiply a vector by a scalar. This gets us another vector. Once we’ve done that for everything that’s moving, we add all those product vectors together. We can always add vectors together. And this gives us a grand total vector, the linear momentum of the system.

And that’s conserved. If one part of the system starts moving slower it’s because other parts are moving faster, and vice-versa. In the real world momentum seems to evaporate. That’s because some of the stuff moving faster turns out to be air objects bumped into, or particles of the floor that get dragged along by friction, or other stuff we don’t care about. That momentum can seem to evaporate is what makes its use in talking about sports teams or political campaigns make sense. It also annoys people who want you to know they understand science words better than you. So please consider this my authorization to use “gaining” and “losing” momentum in this sense. Ignore complainers. They’re the people who complain the word “decimate” gets used to mean “destroy way more than ten percent of something”, even though that’s the least bad mutation of an English word’s meaning in three centuries.

Angular momentum is also a vector. It’s also conserved. We can calculate what that vector is by the same sort of process, that of calculating something on each object that’s spinning and adding it all up. In real applications it can seem to evaporate. But that’s also because the angular momentum is going into particles of air. Or it rubs off grease on the axle. Or it does other stuff we wish we didn’t have to deal with.

The calculation is a little harder to deal with. There’s three parts to a spinning thing. There’s the thing, and there’s how far it is from the axis it’s spinning around, and there’s how fast it’s spinning. So you need to know how fast it’s travelling in the direction perpendicular to the shortest line between the thing and the axis it’s spinning around. Its angular momentum is going to be as big as the mass times the distance from the axis times the perpendicular speed. It’s going to be pointing in whichever axis direction makes its movement counterclockwise. (Because that’s how physicists started working this out and it would be too much bother to change now.)

You might ask: wait, what about stuff like a wheel that’s spinning around its center? Or a ball being spun? That can’t have an angular momentum of zero, can it? How do we work that out? The answer is: calculus. Also, we don’t need that. This central force problem I’ve framed so that we barely even need algebra for it.

See, we only have a single object that’s moving. That’s the planet or satellite or weight or whatever it is. It’s got some mass, the value of which we call ‘m’ because why make it any harder on ourselves. And it’s spinning around the origin. We’ve been using ‘r’ to mean the number describing how far it is from the origin. That’s the distance to the axis it’s spinning around. Its velocity — well, we don’t have any symbols to describe what that is yet. But you can imagine working that out. Or you trust that I have some clever mathematical-physics tool ready to introduce to work it out. I have, kind of. I’m going to ignore it altogether. For now.

The symbol we use for the total angular momentum in a system is \vec{L} . The little arrow above the symbol is one way to denote “this is a vector”. It’s a good scheme, what with arrows making people think of vectors and it being easy to write on a whiteboard. In books, sometimes, we make do just by putting the letter in boldface, L, which is easier for old-fashioned word processors to do. If we’re sure that the reader isn’t going to forget that L is this vector then we might stop highlighting the fact altogether. That’s even less work to do.

It’s going to be less work yet. Central force problems like this mean the object can move only in a two-dimensional plane. (If it didn’t, it wouldn’t conserve angular momentum: the direction of \vec{L} would have to change. Sounds like magic, but trust me.) The angular momentum’s direction has to be perpendicular to that plane. If the object is spinning around on a sheet of paper, the angular momentum is pointing straight outward from the sheet of paper. It’s pointing toward you if the object is moving counterclockwise. It’s pointing away from you if the object is moving clockwise. What direction it’s pointing is locked in.

All we need to know is how big this angular momentum vector is, and whether it’s positive or negative. So we just care about this number. We can call it ‘L’, no arrow, no boldface, no nothing. It’s just a number, the same as the mass ‘m’ or the distance from the origin ‘r’ or any of our other variables.

If ‘L’ is zero, this means there’s no total angular momentum. This means the object can be moving directly out from the origin, or directly in. This is the only way that something can crash into the center. So if setting L to be zero doesn’t allow that then we know we did something wrong, somewhere. If ‘L’ isn’t zero, then the object can’t crash into the center. If it did we’d be losing angular momentum. The object’s mass times its distance from the center times its perpendicular speed would have to be some non-zero number, even when the distance was zero. We know better than to look for that.

You maybe wonder why we use ‘L’ of all letters for the angular momentum. I do. I don’t know. I haven’t found any sources that say why this letter. Linear momentum, which we represent with \vec{p} , I know. Or, well, I know the story every physicist says about it. p is the designated letter for linear momentum because we used to use the word “impetus”, as in “impulse”, to mean what we mean by momentum these days. And “p” is the first letter in “impetus” that isn’t needed for some more urgent purpose. (“m” is too good a fit for mass. “i” has to work both as an index and as that number which, squared, gives us -1. And for that matter, “e” we need for that exponentials stuff, and “t” is too good a fit for time.) That said, while everybody, everybody, repeats this, I don’t know the source. Perhaps it is true. I can imagine, say, Euler or Lagrange in their writing settling on “p” for momentum and everybody copying them. I just haven’t seen a primary citation showing this is so.

(I don’t mean to sound too unnecessarily suspicious. But just because everyone agrees on the impetus-thus-p story doesn’t mean it’s so. I mean, every Star Trek fan or space historian will tell you that the first space shuttle would have been named Constitution until the Trekkies wrote in and got it renamed Enterprise. But the actual primary documentation that the shuttle would have been named Constitution is weak to nonexistent. I’ve come to the conclusion NASA had no plan in mind to name space shuttles until the Trekkies wrote in and got one named. I’ve done less poking around the impetus-thus-p story, in that I’ve really done none, but I do want it on record that I would like more proof.)

Anyway, “p” for momentum is well-established. So I would guess that when mathematical physicists needed a symbol for angular momentum they looked for letters close to “p”. When you get into more advanced corners of physics “q” gets called on to be position a lot. (Momentum and position, it turns out, are nearly-identical-twins mathematically. So making their symbols p and q offers aesthetic charm. Also great danger if you make one little slip with the pen.) “r” is called on for “radius” a lot. Looking on, “t” is going to be time.

On the other side of the alphabet, well, “o” is just inviting danger. “n” we need to count stuff. “m” is mass or we’re crazy. “l” might have just been the nearest we could get to “p” without intruding on a more urgently-needed symbol. (“s” we use a lot for parameters like length of an arc that work kind of like time but aren’t time.) And then shift to the capital letter, I expect, because a lowercase l looks like a “1”, to everybody’s certain doom.

The modified potential energy, then, is going to include the angular momentum L. At least, the amount of angular momentum. It’s also going to include the mass of the object moving, and the radius r that says how far the object is from the center. It will be:

V_{eff}(r) = V(r) + \frac{L^2}{2 m r^2}

V(r) was the original potential, whatever that was. The modifying term, with this square of the angular momentum and all that, I kind of hope you’ll just accept on my word. The L^2 means that whether the angular momentum is positive or negative, the potential will grow very large as the radius gets small. If it didn’t, there might not be orbits at all. And if the angular momentum is zero, then the effective potential is the same original potential that let stuff crash into the center.

For the sort of r-to-a-power potentials I’ve been looking at, I get an effective potential of:

V_{eff}(r) = C r^n + \frac{L^2}{2 m r^2}

where n might be an integer. I’m going to pretend a while longer that it might not be, though. C is certainly some number, maybe positive, maybe negative.

If you pick some values for C, n, L, and m you can sketch this out. If you just want a feel for how this Veff looks it doesn’t much matter what values you pick. Changing values just changes the scale, that is, where a circular orbit might happen. It doesn’t change whether it happens. Picking some arbitrary numbers is a good way to get a feel for how this sort of problem works. It’s good practice.

Sketching will convince you there are energy minimums, where we can get circular orbits. It won’t say where to find them without some trial-and-error or building a model of this energy and seeing where a ball bearing dropped into it rolls to a stop. We can do this more efficiently.
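If you’d rather let the computer do the sketching, here’s a Python sketch with arbitrary made-up values for C, n, L, and m. It scans a range of radii for the smallest effective potential, which is where the circular orbit sits; for n = 2 you can check the answer with a little calculus.

```python
def v_eff(r, C=1.0, n=2, L=1.0, m=1.0):
    """Effective potential V_eff(r) = C r^n + L^2 / (2 m r^2)."""
    return C * r**n + L**2 / (2 * m * r**2)

# Crude sketch: scan radii and find where the effective potential bottoms out.
radii = [0.05 * k for k in range(1, 200)]
values = [v_eff(r) for r in radii]
r_circular = radii[values.index(min(values))]
print(f"approximate circular-orbit radius: {r_circular:.2f}")

# For n = 2 calculus gives the minimum exactly: r = (L^2 / (2 m C)) ** (1/4).
print(f"exact answer for these numbers:    {(1.0 / (2 * 1.0 * 1.0)) ** 0.25:.2f}")
```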

Reading the Comics, September 5, 2015: Again No Pictures Edition


I’m disappointed to say this is another week of mathematically-themed comic strips I don’t have reason to include as pictures here. Gocomics.com links seem, as best I can tell, to stay up and working even for people who haven’t got accounts there. On the other hand this saves space for pictures in my WordPress account. It’s down to about 99 percent empty.

Julie Larson’s The Dinette Set for the 30th of August is about terrible people blustering through life. That’s the premise of the strip. But in this case they apply their blustering terribleness to the problem of working out tips. Any time you want to chain percentages together — 15 percent of something 10 percent off, or so — stop. Don’t work it out in your head. It’s too easy to get what you’re trying to calculate confused. And pay attention to what “100 percent” of whatever you’re talking about would mean. Then proceed with care.

Ed Allison’s surreal Unstrange Phenomena for the 31st of August uses the Möbius strip as a way for the ever-swimming Fletcher to do laps. I suppose the trouble is this challenges ideas of what a “lap” means.

Richard Thompson’s Richard’s Poor Almanac for the 1st of September is a rerun. I think I may have discussed it here before. It features an appearance by probability’s favorite author, an infinite number of monkeys. The monkeys are portrayed by chimpanzees here, but that’s all right. They work on something more ambitious than just writing the works of Shakespeare.

Eric Teitelbaum and Bill Teitelbaum’s Bottomliners for the 3rd of September is, I guess, really an accounting joke. But it does seem to me that everything adding up perfectly could be a sign of trouble. After all, anything real has some error in it. Numbers get rounded off, or people miscount inventories, or stuff just gets lost. I hear tell that there are even people who will take things not belonging to them. It would be surprising if all the errors happened to cancel out exactly, and suspicious if there were no errors at all.

Lorie Ransom’s The Daily Drawing for the 4th of September is an anthropomorphized calculators joke. And, for that matter, a teaching specialization joke. Yes, I noticed what number is on Sam’s display.

Tom Thaves’s Frank and Ernest also for the 4th of September is a reminder about the use of motivation in encouraging mathematics.

Mark Leiknes’s Cow and Boy Classics from the 5th of September pits Billy in a fight with Mathematics. Mathematics is depicted by the one equation we can count on people recognizing. That said, I’m not sure what Boy — Billy — is getting at by speaking of “math theory explaining why life is too variabled and chaotic to be equationed”. I think that he’s trying to understand something like the Incompleteness Theorems, which tell us that there are mathematical truths that can never be proved true. That’s a heady conclusion to draw, and it doesn’t require a great deal of training to find it. It’s more esoteric than the proof that the set of integers and the set of real numbers are different sizes, but I think it’s about as accessible. Anyway, the notion of mathematics popping in and slapping Billy around is so appealingly silly this is my favorite of the week’s strips.

So Doug Bratton’s Pop Culture Shock Therapy for the 5th of September feels like a letdown by comparison. It’s a silly word problem strip is all.

Reading the Comics, February 11, 2014: Running Out Pi Edition


I’d figured I had enough mathematics comic strips for another of these entries, and discovered during the writing that I had much more to say about one than I had anticipated. So, although it’s no longer quite the 11th, or close to it, I’m going to exile the comics from after that date to the next of these entries.

Melissa DeJesus and Ed Power’s My Cage (February 6, rerun) makes another reference to the infinite-monkeys-with-typewriters scenario which, since it takes place in a furry universe, allows access to the punchline you might expect. I’ve written about that before, as the infinite monkeys problem sits at a wonderful intersection of important mathematics and captivating metaphors.

Gene Weingarten, Dan Weingarten, and David Clark’s Barney and Clyde (starting February 10) (and when am I going to make a macro for that credit and title?) has Cynthia given a slightly baffling homework lesson: to calculate the first ten digits of π. The story continues through the 11th, the 12th, the 13th, finally resolving on the 14th, in the way such stories must. I admit I’m not sure why exactly calculating the digits of π would be a suitable homework assignment; I can see working out division problems until the numbers start repeating, or doing a square root or something by hand until you’ve found enough digits.

π, though … well, there’s the question of why it’d be an assignment to start with, but also, what formula for generating π could be plausibly appropriate for an elementary school class. The one that seems obvious to me — π is equal to four times (1/1 minus 1/3 plus 1/5 minus 1/7 plus 1/9 minus 1/11 and so on and so on) — also takes way too long to work. If a little bit of coding is right, it takes something like 160 terms to get just the first two digits of π correct and that isn’t even stable. (The first 160 terms add to 3.135; the first 161 terms to 3.147.) Getting it to ten digits would take —

Well, I thought it might be as few as 10,000 terms, because it turns out the sum of the first ten thousand terms in that series is 3.1414926536, which looks dead-on until you notice that π is 3.1415926536. That’s a neat coincidence, though.
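My little bit of coding amounted to something like this Python sketch of the partial sums; the particular term counts are the ones mentioned above.

```python
# Partial sums of the Leibniz series, pi = 4*(1 - 1/3 + 1/5 - 1/7 + ...).
def leibniz_partial_sum(n_terms):
    return 4 * sum((-1)**k / (2*k + 1) for k in range(n_terms))

print(leibniz_partial_sum(160))      # about 3.135
print(leibniz_partial_sum(161))      # about 3.147
print(leibniz_partial_sum(10_000))   # 3.14149265..., wrong only in the fourth decimal place
```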

Anyway, obviously, that formula wouldn’t do, and we see on the strip of the 14th that Lucretia isn’t using that. There are a great many formulas that generate the value of π, any of which might be used for a project like this; some of them get the digits right quite rapidly, usually at a cost of being very complicated. The formula shown in the strip of the 14th, though, does seem to be a sound choice. Lucretia’s work uses the formula \pi = \sqrt{12} \cdot \sum_{k = 0}^{\infty} \frac{(-3)^{-k}}{2k + 1} , which takes only about 21 terms to get to the demanded ten digits of accuracy. I don’t want to guess how many pages of work it would take to get to 13,908 places.

If I don’t miss my guess the formula used here is one by Abraham Sharp, an astronomer and mathematician who worked for the Royal Observatory at Greenwich and set a record by calculating π to 72 decimal digits. He was also an instrument-maker, of rather some skill, and I found a page purporting to show his notes of how to cut some complicated polyhedrons out of a block of wood, so, if my father wants to carve a 120-sided figure, here’s his chance. Sharp seems to have started with Leibniz’s formula (yes, that Leibniz) — that the arctangent of a number x is equal to x minus one-third x cubed plus one-fifth x to the fifth power minus one-seventh x to the seventh power, et cetera — with the knowledge that the arctangent of the square root of one-third is equal to one-sixth π and produced this series that looks a lot like the one we started with, but which gets digits correct so very much more quickly.
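And here is a Python sketch of Sharp’s series as written above, the square root of 12 times the sum of (-3)^{-k}/(2k+1), just to check that about 21 terms really do deliver the ten digits.

```python
# Sharp's series, pi = sqrt(12) * sum over k of (-3)^(-k) / (2k + 1),
# which is the arctangent series evaluated at x = sqrt(1/3), multiplied by six.
import math

def sharp_partial_sum(n_terms):
    return math.sqrt(12) * sum((-3.0)**(-k) / (2*k + 1) for k in range(n_terms))

print(sharp_partial_sum(21))   # 3.1415926535..., the demanded ten digits already
print(math.pi)                 # 3.141592653589793, for comparison
```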

Darrin Bell’s Candorville (February 13) is primarily a bit of guys insulting friends but, what do you know, π makes a cameo appearance here.

Shannon Wheeler’s Too Much Coffee Man (February 10) is a Venn Diagram cartoon in the service of arguing that Venn Diagram cartoons aren’t funny. Putting aside the smoke and sparks popping out of the Nomad space probe which Kirk and Spock are rushing to the transporter room, I don’t think it’s quite fair: the ease with which a Venn diagram groups concepts and shows how they relate helps organize one’s understanding, and it can be a really efficient way to set up a joke. Granting that, perhaps Wheeler’s seen too many Venn Diagram cartoons that fail, a complaint I’m sympathetic to.

Bill Amend’s FoxTrot (February 11, rerun) was one of those strips destined to be taped to the math teacher’s door, with its pun-based programming for the Math Channel.

Arthur Christmas and the End of Time


In working out my little Arthur Christmas-inspired problem, I argued that if the reindeer take some nice rational number of hours to complete one orbit of the Earth, eventually they’ll meet back up with Arthur and Grand-Santa stranded on the ground. And if the reindeer take an irrational number of hours to make one orbit, they’ll never meet again, although if they wait long enough, they’ll get pretty close together, eventually.

So far this doesn’t sound like a really thrilling result: the two parties, moving on their own paths, either meet again, or they don’t. Doesn’t sound quite like I earned the four-figure income I got from mathematics work last year. But here’s where I get to be worth it: if the reindeer and Arthur don’t meet up again, but I can accept their being very near one another, then they will get as close as I like. I only figured how long it would take for the two to get about 23 centimeters apart, but I could just as well wait for them to be two centimeters apart, or two millimeters, or two angstroms. I’d pay for this nearer miss with a longer wait. And this gives me my opening to a really stunning bit of mathematics.
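To see that tradeoff in miniature, here is a Python sketch. The square-root-of-two period is made up, standing in for whatever irrational number the reindeer’s orbit actually works out to; each new record near-miss is smaller than the last, and takes a longer wait to reach.

```python
# With an irrational orbital period, the reindeer never line up exactly
# with the stranded pair again, but their record near-misses keep shrinking.
# The period is made up; any irrational value shows the same pattern.
import math

period = math.sqrt(2)                    # hypothetical period of one orbit, in days
best_miss = 1.0
for n in range(1, 200_000):
    miss = (n * period) % 1.0            # fraction of a lap by which they miss
    miss = min(miss, 1.0 - miss)         # a miss can be in either direction
    if miss < best_miss:
        best_miss = miss
        print(n, best_miss)              # each new record is smaller, and rarer
```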

Continue reading “Arthur Christmas and the End of Time”

What We Can Say About Nonexistent Things


The modern interpretation of what we mean by a statement like “all unicorns are one-horned animals” is that we aren’t making the assertion that any unicorns exist. If any did happen to exist, sure, they’d be one-horned animals, if our proposition is true, but we’re reserving judgement about whether they do exist. If we don’t like the way the natural-language interpretation of the proposition leads us, we might be satisfied by saying it’s equivalent to saying, “there are no non-one-horned animals which are unicorns”, and that doesn’t feel quite like it claims unicorns exist. You might not even come away feeling there ought to be non-one-horned animals from that sentence alone.
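If it helps to see this in symbols (mine, for this note: U(x) for “x is a unicorn” and H(x) for “x is a one-horned animal”), both readings amount to the statement \forall x \, \left( U(x) \rightarrow H(x) \right) , which is logically equivalent to \neg \exists x \, \left( U(x) \wedge \neg H(x) \right) . Neither form asserts that there is any x making U(x) true.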

Continue reading “What We Can Say About Nonexistent Things”
