## Reading the Comics, November 23, 2016: Featuring A Betty Boop Cartoon Edition

I admit to padding this week’s collection of mathematically-themed comic strips. There’s just barely enough to justify my splitting this into a Sunday and a Tuesday installment. I’m including a follow-the-bouncing-ball cartoon to make up for that though. Enjoy!

Jimmy Hatlo’s Little Iodine from the 20th originally ran the 18th of September, 1955. It’s a cute enough bit riffing on realistic word problems. If the problems do reflect stuff ordinary people want to know, after all, then they’re going to be questions people in the relevant fields know how to solve. A limitation is that word problems will tend to pick numbers that make for reasonable calculations, which may be implausible for actual problems. None of the examples Iodine gives seem implausible to me, but what do I know about horses? But I do sometimes encounter problems which have the form but not content of a reasonable question, like an early 80s probability book asking about the chances of one or more defective transistors in a five-transistor radio set. (The problem surely began as one about burned-out vacuum tubes in a radio.)

Daniel Beyer’s Long Story Short for the 21st is another use of Albert Einstein as iconic for superlative first-rate genius. I’m curious how long it did take for people to casually refer to a genius as an Einstein. The 1930 song Kitty From Kansas City (and its 1931 Screen Songs adaptation, starring Betty Boop) mentions Einstein as one of those names any non-stupid person should know. But that isn’t quite the same as being the name for a genius.

My love asked if I’d include Stephan Pastis’s Pearls Before Swine of the 22nd. It has one of the impossibly stupid crocodiles say, poorly, that he was a mathematics major. I admitted it depended how busy the week was. On a slow week I’ll include more marginal stuff.

Is it plausible that the Croc is, for all his stupidity, a mathematics major? Well, sure. Perseverance makes it possible to get any degree. And given that Croc’s spent twenty years trying to eat Zebra without getting close, clearly perseverance is one of his traits. But are mathematics majors bad at communication?

Certainly we get the reputation for it. Part of that must be that any specialized field — whether mathematics, rocket science, music, or pasta-making — has its own vocabulary and grammar for that vocabulary that outsiders just don’t know. If it were easy to follow it wouldn’t be something people need to be trained in. And a lay audience starts scared of mathematics in a way they’re not afraid of pasta technology; you can’t communicate with people who’ve decided they can’t hear you. And many mathematical constructs just can’t be explained in a few sentences, the way vacuum extrusion of spaghetti noodles could be. And, must be said, it’s often the case a mathematics major (or a major in a similar science or engineering-related field) has English as a second (or third) language. Even a slight accent can make someone hard to follow, and build an undeserved reputation.

The Pearls crocodiles are idiots, though. The main ones, anyway; their wives and children are normal.

Ernie Bushmiller’s Nancy Classics for the 23rd originally appeared the 23rd of November, 1949. It’s just a name-drop of mathematics, though, using it as the sort of problem that can be put on the blackboard easily. And it’s not the most important thing going on here, but I do notice Bushmiller drawing the blackboard as … er … not black. It makes the composition of the last panel easier to read, certainly. And makes the visual link between the paper in the second panel and the blackboard in the last stronger. It seems more common these days to draw a blackboard that’s black. I wonder if that’s so, or if it reflects modern technology making white-on-black-text easier to render. A Photoshop select-and-invert is instantaneous compared to what Bushmiller had to do.

## Reading the Comics, April 5, 2016: April 5, 2016 Edition

I’ve mentioned I like to have five or six comic strips for a Reading The Comics entry. On the 5th, it happens, I got a set of five all at once. Perhaps some are marginal for mathematics content but since when does that stop me? Especially when there’s the fun of a single-day Reading The Comics post to consider. So here goes:

Mark Anderson’s Andertoons is a student-resisting-the-problem joke. And it’s about long division. I can’t blame the student for resisting. Long division’s hard to learn. It’s probably the first bit of arithmetic in which you just have to make an educated guess for an answer and face possibly being wrong. And this is a problem that’ll have a remainder in it. I think I remember, early on in long division, that finding a remainder left over felt like an accusation. Surely if I’d done it right, the divisor would go into the original number a whole number of times, right? No, but you have to warm up to being comfortable with that.
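As an aside, that guess-quotient-and-remainder structure is a single built-in operation in most programming languages; a quick Python illustration, nothing to do with the strip itself:

```python
# Long division of 47 by 5, quotient and remainder in one step.
quotient, remainder = divmod(47, 5)
print(quotient, remainder)  # 9 2

# The remainder is not an accusation: 5 * 9 + 2 really is 47.
assert 5 * quotient + remainder == 47
```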

Ted Key’s Hazel feels less charmingly out-of-date when you remember these are reruns. Ted Key — who created Peabody’s Improbable History as well as the sitcom based on this comic panel — retired in 1993. So Hazel’s attempt to create a less abstract version of the mathematics problem for Harold is probably relatively time-appropriate. And recasting a problem as something less abstract is often a good way to find a solution. It’s all right to do side work as a way to get the work you want to do.

John McNamee’s Pie Comic is a joke about the uselessness of mathematics. Tch. I wonder if the problem here isn’t the abstractness of a word like “hypotenuse”. I grant the word doesn’t evoke anything besides “hypotenuse”. But one irony is that hypotenuses are extremely useful things. We can use them to calculate how far away things are, without the trouble of going out to the spot. We can imagine post-apocalyptic warlords wanting to know how far things are, so as to better aim the trebuchets.
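To make that usefulness concrete (my numbers, not the strip’s): the distance to a spot 300 meters east and 400 meters north comes straight from the hypotenuse, no walking out to the spot required:

```python
import math

# Distance to a point 300 m east and 400 m north: the hypotenuse
# of a right triangle with legs 300 and 400.
distance = math.hypot(300, 400)
print(distance)  # 500.0
```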

Percy Crosby’s Skippy is a rerun from 1928, of course. It’s also only marginally on point here. The mention of arithmetic is irrelevant to the joke. But it’s a fine joke and I wanted people to read it. Longtime readers know I’m a Skippy fan. (Saturday’s strip follows up on this. It’s worth reading too.)

Bill Griffith’s Zippy the Pinhead has picked up some quantum mechanics talk. At least he’s throwing around the sorts of things we see in pop science and, er, pop mathematical talk about the mathematics of cutting-edge physics. I’m not aware of any current models of everything which suppose there to be fourteen, or seventeen, dimensions of space. But high-dimension spaces are common points of speculation. Most of those dimensions appear to be arranged in ways we don’t see in the everyday world, but which leave behind mathematical traces. The crack about God not playing dice with the universe is famously attributed to Albert Einstein. Einstein was not comfortable with the non-deterministic nature of quantum mechanics, that there is this essential randomness to this model of the world.

## JH van ‘t Hoff and the Gaseous Theory of Solutions; also, Pricing Games

Do you ever think about why stuff dissolves? Like, why a spoon of sugar in a glass of water should seem to disappear instead of turning into a slight change in the water’s clarity? Well, sure, in those moods when you look at the world as a child does, not accepting that life is just like that and instead can imagine it being otherwise. Take that sort of question and put it to adult inquiry and you get great science.

Peter Mander of the Carnot Cycle blog this month writes a tale about Jacobus Henricus van ‘t Hoff, the first winner of a Nobel Prize for Chemistry. In 1883, on hearing of an interesting experiment with semipermeable membranes, van ‘t Hoff had a brilliant insight about why things go into solution, and how. The insight had only one little problem. It makes for fine reading about the history of chemistry and of its mathematical study.

In other, television-related news, the United States edition of The Price Is Right included a mention of “square root day” yesterday, 4/4/16. It was in the game “Cover-Up”, in which the contestant tries making successively better guesses at the price of a car. This they do by covering up wrong digits with new guesses. For the start of the game, before the contestant’s made any guesses, they need something irrelevant to the game to be on the board. So, they put up mock calendar pages for 1/1/2001, 2/2/2004, 3/3/2009, 4/4/2016, and finally a card reading $\sqrt{DAY}$. The game show also had a round devoted to Pi Day a few weeks back. So I suppose they’re trying to reach out to people into pop mathematics. It’s cute.

## Bringing Up Arthur Christmas Again

Since it’s the week for this, I would like to remind folks they could be watching the Aardman Animation film Arthur Christmas. Also, I was able to spin out a couple of mathematical and physics questions from one scene in the film. Last year I collected links to the essays — there’s five of them — into a single cover page. I hope you’ll consider them.

## How Interesting Is A Basketball Tournament?

Yes, I can hear people snarking, “not even the tiniest bit”. These are people who think calling all athletic contests “sportsball” is still a fresh and witty insult. No matter; what I mean to talk about applies to anything where there are multiple possible outcomes. If you would rather talk about how interesting the results of some elections are, or whether the stock market rises or falls, whether your preferred web browser gains or loses market share, whatever, read it as that instead. The work is all the same.

To quantify how interesting the outcome of a game (election, trading day, whatever) is, we have to think about what “interesting” means qualitatively. A sure thing, a result that’s bound to happen, is not at all interesting, since we know going in that it’s the result. A result that’s nearly sure but not guaranteed is at least a bit interesting, since after all, it might not happen. An extremely unlikely result would be extremely interesting, if it could happen.
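That qualitative idea has a standard quantitative form, the surprisal: an outcome of probability p carries -log2(p) bits of information, so surer things carry less. A minimal sketch of the definition:

```python
import math

def surprisal_bits(p):
    """Information, in bits, carried by seeing an outcome of probability p."""
    return -math.log2(p)

# A coin flip's outcome carries one bit; a 1-in-64 upset carries six.
print(surprisal_bits(0.5))     # 1.0
print(surprisal_bits(1 / 64))  # 6.0
```

A sure thing (p = 1) carries zero bits, matching the intuition that a foregone conclusion isn’t interesting at all.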

## The Arthur Christmas Problem

Since it’s the season for it I’d like to point new or new-ish readers to a couple of posts I did in 2012-13, based on the Aardman Animation film Arthur Christmas, which was just so very charming in every way. It also puts forth some good mathematical and mathematical-physics questions.

Opening the scene is “Could ‘Arthur Christmas’ Happen In Real Life?”, which begins with a scene in the movie: Arthur and Grand-Santa are stranded on a Caribbean island while the reindeer and sleigh, without them, go flying off in a straight line. This raises the question of what a straight line is if you’re on the surface of something spherical like the Earth.

“Returning To Arthur Christmas” was titled that because I’d left the subject for a couple weeks, as is my wont, and it gets a little bit more spoiler-y since the film seems to come down on the side of the reindeer moving on a path called a Great Circle. This forces us to ask another question: if the reindeer are moving around the Earth, are they moving with the Earth’s rotation, like an airplane does, or freely of it, like a satellite does?

“Arthur Christmas And The Least Common Multiple” starts by supposing that the reindeer are moving the way satellites do, independent of the Earth’s rotation, and on making some assumptions about the speed of the reindeer and the path they’re taking, works out how long Arthur and Grand-Santa would need to wait before the reindeer and sled are back if they’re lucky enough to be waiting on the equator.
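The least common multiple is the whole mechanism there. With toy periods (the essay’s actual figures aren’t reproduced above), the idea in Python:

```python
import math

# Toy numbers, not the essay's: say the reindeer circle the Earth once
# every 16 hours, while the island they left comes back under the
# flight path once every 24 hours.  Both line up again after the
# least common multiple of the two periods.
reindeer_period = 16
earth_period = 24
wait = math.lcm(reindeer_period, earth_period)
print(wait)  # 48 hours
```

The essay proper works from assumed reindeer speeds and the Earth’s rotation, but the lcm is doing the work either way.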

“Six Minutes Off” shifts matters a little, by supposing that they’re not on the equator, which makes meeting up with the reindeer a much nastier bit of timing. If they’re willing to wait long enough the reindeer will come as close as they want to their position, but the wait can be impractically long, for example, eight years, or over five thousand years, which would really slow down the movie.

And finally “Arthur Christmas and the End of Time” wraps up matters with a bit of heady speculation about recurrence: the way that a physical system can, if the proper conditions are met, come back either to its starting point or to a condition arbitrarily close to its starting point, if you wait long enough. This offers some dazzling ideas about the really, really long-term fate of the universe, which is always a heady thought. I hope you enjoy.

I did join a little group of people competing to try calling the various NCAA basketball tournament brackets. It’s a silly pastime and a way to commiserate with other people about how badly we’re doing forecasting the outcome of the 63 games in the tournament. We’re competing just for points and the glory of doing a little better than our friends, but there’s some actual betting pools out there, and some contests that offer, for perfect brackets, a billion dollars (Warren Buffett, if I have that right), or maybe even a new car (WLNS-TV, channel 6, Lansing).

Working out what the odds are of getting all 63 games right is more interesting than it might seem at first. The natural (it seems to me) first guess at working out the odds is to say, well, there are 63 games, and whatever team you pick has a 50 percent chance of winning that game, so the chance of getting all 63 games right is $\left(\frac{1}{2}\right)^{63}$, or one chance in 9,223,372,036,854,775,808.
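In code, that first guess is a one-liner or two (Python here, just to make the arithmetic concrete):

```python
# Sixty-three games, each treated as a fair coin flip.
brackets = 2 ** 63
chance = (1 / 2) ** 63
print(brackets)  # 9223372036854775808
print(chance)    # roughly 1.1e-19
```

That denominator is the same 9,223,372,036,854,775,808 quoted above.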

But it’s not quite so, and the reason is buried in the assumption that every team has a 50 percent chance of winning any given game. And that’s just not so: it’s plausible (as of this writing) to think that the final game will be Michigan State playing the University of Michigan. It’s just ridiculous to think that the final game will be SUNY/Albany (16th seeded) playing Wofford (15th).

The thing is that not all the matches are equally likely to be won by either team. The contest starts out with the number one seed playing the number 16, the number two seed playing the number 15, and so on. The seeding order roughly approximates the order of how good the teams are. It doesn’t take any great stretch to imagine the number ten seed beating the number nine seed; but, has a number 16 seed ever beaten the number one?

To really work out the probability of getting all the brackets right turns into a fairly involved problem. We can probably assume that the chance of, say, number-one seed Virginia beating number-16 seed Coastal Carolina is close to how frequently number-one seeds have beaten number-16 seeds in the past, and similarly that number-four seed Michigan State’s chances over number-13 Delaware is close to that historical average. But there are some 9,223,372,036,854,775,808 possible ways that the tournament could, in principle, go, and they’ve all got different probabilities of happening.
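To show the shape of that computation, here’s a sketch with made-up per-matchup probabilities (they’re guesses for illustration, not the historical rates): the chance that every one of the thirty-two first-round games goes to the higher seed is a product of thirty-two factors.

```python
# Illustrative guesses, not historical data: the chance the higher seed
# wins each first-round matchup, from 1-vs-16 down to 8-vs-9.
higher_seed_wins = [1.00, 0.94, 0.85, 0.79, 0.64, 0.63, 0.61, 0.51]

prob_chalk_round_one = 1.0
for p in higher_seed_wins:
    prob_chalk_round_one *= p ** 4  # each matchup occurs in all four regions

print(prob_chalk_round_one)  # a few chances in a hundred thousand
```

Even this all-favorites bracket is a long shot for its first round alone, and any particular bracket with upsets in it is less likely still, which is why there isn’t one tidy answer.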

So there isn’t a unique answer to what is the chance that you’ve picked a perfect bracket set. It’s higher if you’ve picked a lot of higher-ranking seeds, certainly, at least assuming that this year’s tournament is much like previous years’, and that seeds do reflect reasonably well how likely teams are to win. At some point it starts to be easier to accept “one chance in 9,223,372,036,854,775,808” as close enough. Me, I’ll be gloating for the whole tournament thanks to my guess that Ohio State would lose to Dayton.

[Edit: first paragraph originally read “games in the match”, which doesn’t quite parse.]

## The Mathematics Of A Pricing Game

There was a new pricing game that debuted on The Price Is Right for the start of its 42nd season, with a name that’s designed to get my attention: it’s called “Do The Math”. This seems like a dangerous thing to challenge contestants to do since the evidence is that pricing games which depend on doing some arithmetic tend to be challenging (“Grocery Game”, “Bullseye”), or confusing (“The Check Game”), or outright disasters (“Add Em Up”). This one looks likely to be more successful, though.

The setup is this: The contestant is shown two prizes. In the first (and, so far, only) playing of the game this was a 3-D HDTV and a motorcycle. The names of those prizes are put on either side of a monitor made up to look like a green chalkboard. The difference in prize values is shown; in this case, it was $1160, and that’s drawn in the middle of the monitor in Schoolboard Extra-Large font. The contestant has to answer whether the price of the prize listed on the left (here, the 3-D HDTV) plus the cash ($1160) is the price of the prize on the right (the motorcycle), or whether the price of the prize on the left minus the cash is the price of the prize on the right. The contestant makes her or his guess and, if right, wins both prizes and the money.
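The contestant’s entire problem fits in one comparison. A sketch with hypothetical prices (only the $1160 difference comes from the broadcast):

```python
# Made-up prices for illustration; only the cash difference is real.
tv_price = 1798          # hypothetical left-hand prize
cash = 1160              # the difference shown on the board
motorcycle_price = 2958  # hypothetical right-hand prize

# The one decision: does left plus cash, or left minus cash,
# equal the right prize's price?
answer = "plus" if tv_price + cash == motorcycle_price else "minus"
print(answer)  # plus
```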

There’s not really much mathematics involved here. The game is really just a two-prize version of “Most Expensive” (in which the contestant has to say which of three prizes is the most expensive, and then the answer is right there on the label). I think there’s maybe a bit of educational value in it, though, in that by representing the prices of the two prizes — which are fixed quantities, at least for the duration of taping, and may or may not be known to the contestant — with abstractions it might make people more comfortable with the mathematical use of symbols. x and all the other letters of the English (and Greek) alphabets get called into place to represent quantities that might be fixed, or might not be; and that might be known, or might be unknown; and that we might actually wish to know or might not really care about but need to reference somehow.

That conceptual leap often confuses people, as witness any joke about how high school algebra teachers can’t come up with a consistent answer about what x is. This pricing game is a bit away from mathematics classes, but it might yet be a way people could see that the abstraction idea is not as abstract or complicated as they fear.

I suspect, getting away from my flimsy mathematics link, that this should be a successful pricing game, since it looks to be quick and probably not too difficult for players to get. I’m sorry the producers went with a computer monitor for the game’s props, rather than — say — having a model actually write plus or minus, or some other physical prop. Computer screens are boring television; real objects that move are interesting. There are some engagingly apocalyptic reviews of the season premiere over at golden-road.net, a great fan site for The Price Is Right.

## Reading the Comics, September 11, 2013

I may need to revise my seven-or-so-comic standard for hosting one of these roundups of mathematics-themed comic strips, at least during the summer vacation. We’ll see how they go as the school year picks up and cartoonists return to the traditional jokes of students not caring about algebra and kids giving wiseacre responses to word problems.

Jan Eliot’s Stone Soup began a sequence on the 26th of August in which Holly, the teenager, has to do flash cards to improve her memorization of the multiplication tables. It’s a baffling sequence to me, at least, since I can’t figure why a high schooler needs to study the times tables (on the 27th, Grandmom says it’s because it will make mathematics easier the more arithmetic she can do in her head). It’s also a bit infuriating because I can’t see a better way to make sure Holly sees mathematics as tedious drudge work than getting drilled by flash cards through summer vacation, particularly as she’s at an age where she ought to be doing algebra or trigonometry or geometry.

Steve Moore’s In The Bleachers (September 1) uses a bit of mathematics as a throwaway “something complicated to be thinking about” bit. I do like that the mathematics shown at least parses. I’m not sure offhand what problem the pitcher is trying to solve, that is, but the steps in it are done correctly, and even show off a nice bit of implicit differentiation. That’s a bit of differential calculus where you find the rate of change of one variable with respect to another even though you can’t, or don’t, write the one variable explicitly as a function of the other. It isn’t actually hard to do if you follow the rules correctly but, as I remember it, it produces a vague sense of unease at its introduction. Probably it feels vaguely illicit to have a function defined, in part, in terms of itself.
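For a concrete taste of implicit differentiation (my example; I can’t say it’s the one on the pitcher’s mind): differentiate both sides of the circle equation $x^2 + y^2 = 25$ with respect to $x$ and you get $2x + 2y \frac{dy}{dx} = 0$, so $\frac{dy}{dx} = -\frac{x}{y}$. The slope comes out in terms of both $x$ and $y$, since we never solved for $y$ explicitly, and that’s the faintly illicit feeling in action.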

## Reading the Comics, August 18, 2013

I’m sorry to have fallen silent so long; I was away from home and thought I’d be able to put up a couple of short pieces along the way, and turned out to be rather busy doing other things instead. It’s given me at least one nice problem with dramatic photographs to use in a near-future entry, though, so not all is lost (although I’m trying to think of a way to re-do the work in it that doesn’t involve quite so much algebra; I’m afraid of losing my readers and worse of making a hash of the LaTeX involved). Meanwhile, it’s been surprisingly close to a month since the last summary of comic strips with mathematical themes — I imagine the cartoonists are taking a break on Students In Classroom setups what with it being summer vacation across so much of the United States — so let me return to that.

## And The $64 Question Was …

I ran across something interesting — I always do, but this was something I wasn’t looking for — in John Dunning’s On The Air: The Encyclopedia of Old-Time Radio, which is about exactly what it says. In the entry for the quiz show Take It Or Leave It (which, like the quiz shows it evolved into, The $64 Question and The $64,000 Question, asked questions worth amounts doubling all the way to $64), Dunning says:

> Researcher Edith Oliver tried to increase the difficulty with each step, but it was widely believed that the $32 question was the toughest. Perhaps that’s why 75 percent of contestants who got that far decided to go all the way, though only 20 percent of those won the $64.

I am a bit skeptical of those percentages, because they look too much to me like someone, probably for a press release, said something like “three out of four contestants go all the way” and it got turned into a percentage because of the hypnotic lure that decimal digits have on people. However, I can accept that the producers would have a pretty good idea how likely it was a contestant who won $32 would decide to go for the jackpot, rather than take the winnings and go safely home, since that’s information indispensable to making out the show’s budget. I’m a little surprised the final question might have a success rate of only one in five, but then, this is the program that launched the taunting cry “You’ll be sorrrreeeeee” into many cartoons that baffled kids born a generation after the show went off the air (December 1951, in the original incarnation).

It strikes me that topics like how many contestants go on for bigger prizes, and how many win, could be used to produce a series of word problems grounded in a plausible background, at least if the kids learning probability and statistics these days even remember Who Wants To Be A Millionaire is still technically running. (Check your local listings!) Sensible questions could include how likely it is any given contestant would go on to the million-dollar question, how many questions the average contestant answers successfully, and — if you include an estimate for how long the average question takes to answer — how many contestants and questions the show is going to need to fill a day or a week or a month’s time.

## Roger Cotes’s Birthday

Amongst the Twitter feeds I follow and which aren’t based on fictional squirrels is the @mathshistory one, reporting just what it sounds like. It noted the 10th of July was the birthday of Roger Cotes (1682 – 1716) and I knew there was something naggingly familiar about his name.
His biography at the MacTutor History of Mathematics Archive features what surely kept him in my mind: that on Cotes’s death at age 34 Isaac Newton said, “… if he had lived we might have known something”. Given Newton’s standing, that’s a eulogy almost good enough to get Cotes tenure, even today.

MacTutor credits Cotes with, among other things, inventing the radian measure of angles; I’m wise enough, I hope, to view skeptically all claims of anyone uniquely inventing anything mathematical, although it’s certainly so that radian measure — in which you give an angle of arc, not by how many degrees it reaches, but by how long the arc is, in units of the radius — is extraordinarily convenient analytically and it’s hard to see how mathematicians did without it. People who advanced the idea and its use deserve their praise. (Normal people can carry on with degrees of arc, for which the numbers are just more pleasant.) As a bonus it serves as one of the points on which people coming into trigonometry classes can feel their heads exploding.

Cotes’s name also gets a decent and, if I have it right, appropriate amount of fame for what are called Newton-Cotes formulas. These are methods for “numerical quadrature”, the slightly old-fashioned name we use to talk about numerical approximations of integrals. In an introductory calculus class one’s likely to run across a couple of rules for numerical quadrature — given names like the Trapezoid Rule, Simpson’s Rule, Simpson’s 3/8ths rule, or the Midpoint Rule — and these are all examples of the Newton-Cotes formulas. Teaching the routine for getting all these Newton-Cotes formulas was, for whatever reason, one of the things I found particularly delightful when I taught numerical mathematics; some subjects are just fun to explain.

MacTutor also notes that from 1709 through 1713, Cotes edited the second edition of Newton’s Principia, and apparently did a most thorough job of it.
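As an aside, the Trapezoid Rule mentioned above, the simplest of the Newton-Cotes formulas, is short enough to sketch outright (a toy version, not anyone’s production quadrature code):

```python
def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] using n trapezoids."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2  # endpoints get half weight
    for i in range(1, n):
        total += f(a + i * h)  # interior points get full weight
    return total * h

# The integral of x^2 from 0 to 1 is exactly 1/3; with 100 trapezoids
# the approximation is off only around the fifth decimal place.
print(trapezoid(lambda x: x * x, 0.0, 1.0, 100))
```

Doubling n roughly quarters the error, which is just the sort of behavior the Newton-Cotes analysis predicts.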
It claims he studied the Principia and argued its points with Newton in enough detail that Newton finally removed the thanks he gave to Cotes in the first draft of his preface. A difficult but correct editor is probably more pleasant to have when the project is finished.

## Solving The Price Is Right’s “Any Number” Game

A friend who’s also into The Price Is Right claimed to have noticed something peculiar about the “Any Number” game. Let me give context before the peculiarity. This pricing game is the show’s oldest — it was actually the first one played when the current series began in 1972, and also the first pricing game won — and it’s got a wonderful simplicity: four digits from the price of a car (the first digit, nearly invariably a 1 or a 2, is given to the contestant and not part of the game), three digits from the price of a decent but mid-range prize, and three digits from a “piggy bank” worth up to $9.87 are concealed. The contestant guesses digits from zero through nine inclusive, and they’re revealed in the three prices. The contestant wins whichever prize has its price fully revealed first. This is a steadily popular game, and one of the rare Price games which guarantees the contestant wins something.

A couple things probably stand out. The first is that if you’re very lucky (or unlucky) you can win with as few as three digits called, although it might be the piggy bank for a measly twelve cents. (Past producers have said they’d never let the piggy bank hold less than \$1.02, which still qualifies as “technically something”.) The other is that no matter how bad you are, you can’t take more than eight digits to win something, though it might still be the piggy bank.

What my friend claimed to notice was that these “Any Number” games went on to the last possible digit “all the time”, and he wanted to know, why?

My first reaction was: “all” the time? Well, at least it happened an awful lot of the time. But I couldn’t think of a particular reason that they should so often take the full eight digits needed, or whether they actually did; it’s extremely easy to fool yourself about how often events happen when there’s a complicated set of possible events. But stipulating that eight digits were often needed, then, why should they be needed? (For that matter, trusting the game not to be rigged — and United States televised game shows are by legend extremely sensitive to charges of rigging — how could they be needed?) Could I explain why this happened? And he asked again, enough times that I got curious myself.
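Out of curiosity I tried the question as a simulation (my model, with a contestant guessing blindly; the game arranges the ten concealed digits as the digits zero through nine, each used once):

```python
import random

def digits_to_win(rng):
    """Simulate one Any Number game with a blindly guessing contestant."""
    # The ten concealed digits are a permutation of 0-9, split into the
    # car (four digits), the mid-range prize (three), and the piggy
    # bank (three).
    digits = list(range(10))
    rng.shuffle(digits)
    groups = [set(digits[:4]), set(digits[4:7]), set(digits[7:])]
    for count, guess in enumerate(rng.sample(range(10), 10), start=1):
        for group in groups:
            group.discard(guess)
            if not group:
                return count  # some price is fully revealed
    raise AssertionError("some group must empty by the eighth guess")

rng = random.Random(42)
trials = [digits_to_win(rng) for _ in range(100_000)]
fraction_eight = sum(t == 8 for t in trials) / len(trials)
print(min(trials), max(trials))  # the three-to-eight range described above
print(fraction_eight)
```

In this model the game needs all eight guesses exactly when the three digits left unguessed after seven picks include one from each prize, which happens with probability $\frac{4 \cdot 3 \cdot 3}{120} = 0.3$, and the simulated fraction comes out near that. Real contestants bring hunches about plausible prices, so the show’s true rate needn’t be 30 percent, but it does suggest why going the distance looks like it happens “all the time”.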

## The Rare Days

The subject doesn’t quite feel right for my occasional roundups of mathematics-themed comic strips, but I noticed this month that the bit about “what is so rare as a day in June?” is coming up … well, twice, so it’s silly to call that “a lot” just yet, but it’s coming up at all. First was back on June 10th, with Jef Mallet’s Frazz (which actually enlightened me as I didn’t know where the line came from, and yes, it’s the Lowell family that also produced Percival), and then John Rose’s Barney Google and Snuffy Smith repeated the question on the 13th.

The question made me immediately think of an installment of Walt Kelly’s Pogo, where Pogo (I believe) asked the question and Porky Pine immediately answered “any day in February”. But it got me wondering whether the question could be answered more subtly, that is, more counter-intuitively.

## Reading the Comics, June 1, 2013

I’ve got a fresh batch of comic strips with mathematical themes. Actually, something I realized only as I was putting the list of them together: they’re all word problem themes. There aren’t any anthropomorphized numerals or puns on Wilhelm Leibniz’s name or anything like that. I can conjure easily reasons why word problems are good starting points for comic strip writers: they’re familiar to the reader, they don’t require any careful integration into character or storyline, and they can be designed to set up any punch line the cartoonist has in mind. Jason Chatfield’s Ginger Meggs and Gary Brookins’ and Susie MacNelly’s Shoe have running jokes in which Ginger or Skyler are asked for the collective name for a group of things, and some appropriately pun-like construct is given, and this is accepted, though I don’t know why anyone would suppose there to be a collective name for a group of grocery store clerks or DSL technicians or whatnot.

Mason Mastroianni’s B.C. (May 23) sets things off with the classic form of a high school algebra word problem. I have wondered how long train-leaving-the-station problems are going to linger as example algebra problems, given that people (in the United States) really don’t take the trains for long distances if they can help it. The service is there; I just don’t believe it’s part of the common experience of students, which makes it a bit baffling as a word problem source. But the problems can be rewritten easily as airplane travel or cars on highways, if you want to salvage the question. (I’d also like to mention I generally like how Mastroianni has revitalized B.C. since Johnny Hart’s death. Particularly, the strip’s doing more of the comic anachronism that built the strip up in the first place, and this particular example contains a demonstration of that.)
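For the record, the classic two-trains setup is light work once it’s written as algebra; a sketch with made-up numbers:

```python
# Made-up numbers: two trains leave stations 300 miles apart at the
# same time, heading toward each other at 70 mph and 50 mph.
# They meet when the distances covered add up to 300:
#   70 t + 50 t = 300   =>   t = 300 / 120
distance = 300
speed_a, speed_b = 70, 50
t = distance / (speed_a + speed_b)
print(t)  # 2.5 hours until they meet
```

Swap in airplanes or cars on highways and the arithmetic doesn’t notice.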

## Odd Proofs

May 2013 turned out to be an interesting month for number theory, in that there’ve been big breakthroughs on two long-standing conjectures. Number theory is great fun in part because it’s got many conjectures that anybody can understand but which are really hard to prove. The one that’s gotten the most attention, at least from the web sites that I read which dip into popular mathematics, has been the one about twin primes.

It’s long been suspected that there are infinitely many pairs of “twin primes”, such as 5 and 7, or 11 and 13, or 29 and 31, separated by only two. It’s not proven that there are such, not yet. Yitang Zhang of the University of New Hampshire has announced proof that there are infinitely many pairings of primes that are no more than 70,000,000 apart. This is admittedly not the tightest bound out there, but it’s better than what there was before. But while there are infinitely many primes — anyone can prove that — how many there are in any fixed-width range tends to decrease, and it would be imaginable to think that the distance between primes just keeps increasing, without bounds, the way that (say) each pair of successive powers of two is farther apart than the previous pair were. But it’s not so, and that’s neat to see.
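For the curious, a few lines of Python (mine, nothing to do with Zhang’s methods) list the small twin prime pairs:

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# The twin prime pairs starting below 100.
twins = [(p, p + 2) for p in range(2, 100) if is_prime(p) and is_prime(p + 2)]
print(twins)
```

The conjecture is that this list never runs out, however far you push the range.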

Less publicized is a proof of Goldbach’s Odd Conjecture. Goldbach’s Conjecture is the famous one that every even number bigger than two can be written as the sum of two primes. An equivalent form would be to say that every whole number — even or odd — larger than five can be written as the sum of three primes. Goldbach’s Odd Conjecture cuts the problem down, saying just that every odd whole number greater than five can be written as the sum of three primes. And it’s this which Harald Andres Helfgott claims to have a proof for. (He also claims a proof that every odd number greater than seven can be written as the sum of three odd primes, that is, that two isn’t needed for any but the single-digit odd numbers.)
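The odd conjecture is easy to check by brute force for small numbers, which of course proves nothing but is reassuring. A sketch (the function names are mine):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def three_prime_sum(n, primes):
    """Return one way to write n as a sum of three primes a <= b <= c, or None."""
    pset = set(primes)
    for a in primes:
        for b in primes:
            if b < a or a + b >= n:
                continue
            c = n - a - b
            if c >= b and c in pset:
                return (a, b, c)
    return None

primes = primes_up_to(500)
# Every odd number from 7 up through 499 checks out:
assert all(three_prime_sum(n, primes) for n in range(7, 500, 2))
print(three_prime_sum(99, primes))
```

Helfgott’s actual work, of course, is in handling the infinitely many cases no computer search can reach.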

## Reading the Comics, 16 May 2013

It’s a good time for another round of comic strip reading, particularly since I haven’t had the time to think in detail about all the news in number theory that’s come out this past week, and since I’m not sure whether I should go into explaining arc lengths after I trapped at least one friend into trying to work out the circumference of an ellipse. (You can’t do it in elementary closed form either, though there are a lot of curves whose lengths you could work out.) I also notice I’m approaching that precious 10,000th blog hit here, so I can get back to work verifying that law about random data starting with the digit 1.
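While the ellipse’s circumference has no elementary closed form, getting a number out of it is easy enough; here’s a throwaway sketch of mine applying the midpoint rule to the arc-length integral:

```python
import math

def ellipse_circumference(a, b, n=100_000):
    """Arc length of the ellipse (a cos t, b sin t), t in [0, 2*pi],
    by numerically integrating the speed of the parametrization."""
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h  # midpoint of the i-th subinterval
        total += math.hypot(a * math.sin(t), b * math.cos(t)) * h
    return total

# Sanity check: a "circle" of radius 1 should come out to 2*pi.
print(ellipse_circumference(1, 1), 2 * math.pi)
```

The frustration my friend ran into is that no amount of cleverness turns that integral into a tidy formula; it defines a new kind of function, the elliptic integral, instead.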

Berkeley Breathed’s Bloom County (May 2, rerun) throws up a bunch of mathematical symbols with the intention of producing a baffling result, so that Milo can make a clean getaway from Freida. The splendid thing to me, though, is that Milo’s answer — “log 10 times 10 to the derivative of 10,000” — actually does parse, if you read it a bit charitably. The “log 10” bit we can safely suppose to mean the logarithm base 10, because the strip originally ran in 1981 or so, when there was still some use for the common logarithm. These days we have calculators, and “log” is moving over to mean the natural logarithm, base e, which used to be denoted “ln”.
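For what it’s worth, under that charitable reading Milo’s gibberish even evaluates to something definite. This is my parsing of the phrase, not anything the strip endorses:

```python
import math

# "log 10 times 10 to the derivative of 10,000", read as:
#   (log base 10 of 10) * 10 ** (d/dx of the constant 10,000)
# The derivative of any constant is 0, and log10(10) is 1,
# so the whole expression collapses to 1 * 10**0 = 1.
derivative_of_constant = 0
value = math.log10(10) * 10 ** derivative_of_constant
print(value)
```

So Milo’s baffling answer is, charitably, just “one” — which may be the joke’s best-kept secret.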

## Reading the Comics, April 28, 2013

The flow of mathematics-themed comic strips almost dried up in April. I’m going to assume this reflects the kids of the cartoonists being on Spring Break, and teachers not scheduling exams immediately after the break, in early to mid-March, and that we were just seeing the lag from that. I’m joking a little bit, but surely there’s some explanation for the rash of did-you-get-your-taxes-done comics appearing two weeks after April 15, and I’m fairly sure it isn’t the high regard United States newspaper syndicates have for their Canadian readership.

Dave Whamond’s Reality Check (April 8) uses the “infinity” symbol and tossed pizza dough together. The ∞ symbol, I understand, is credited to the English mathematician John Wallis, who introduced it in the Treatise on the Conic Sections, a text that made it clearer how conic sections could be described algebraically. Wikipedia claims that Wallis had the idea that negative numbers were, rather than less than zero, actually greater than infinity, which we’d regard as a quirky interpretation, but (if I can verify it) it makes for an interesting point in figuring out how long people took to understand negative numbers the way we believe we do today.

Jonathan Lemon’s Rabbits Against Magic (April 9) does a playing-the-odds joke, in this case about the efficiency of alligator repellent. The joke in this sort of thing comes down to the assumption of independence of events — whether the chance that a thing works this time is the same as the chance of it working last time — and a bit of the idea that you find the probability of something working by trying it many times and counting the successes. Trusting in the Law of Large Numbers (and the independence of the events), this empirically-generated probability can be expected to match the actual probability, once you spend enough time thinking about what you mean by a term like “the actual probability”.
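The Law of Large Numbers is easy to watch in action. A sketch (the names and parameters are mine) that estimates a known success probability by running many independent trials:

```python
import random

def empirical_probability(p, trials, seed=0):
    """Estimate the success chance of independent Bernoulli trials
    by counting successes, the way the strip's repellent-tester would."""
    rng = random.Random(seed)  # seeded for reproducibility
    successes = sum(rng.random() < p for _ in range(trials))
    return successes / trials

# The estimate wanders for small samples and settles down for large ones:
for trials in (10, 1_000, 100_000):
    print(trials, empirical_probability(0.3, trials))
```

The independence assumption is doing all the work here; if each trial nudged the odds of the next (repellent wearing off, say), the count of successes would estimate something else entirely.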

## Kenneth Appel and Colored Maps

Word’s come through mathematics circles about the death of Kenneth Ira Appel, who along with Wolfgang Haken did one of those things every mathematically-inclined person really wishes to do: solve one of the long-running unsolved problems of mathematics. Even better, he solved one of those accessible problems. There are a lot of great unsolved problems that take a couple paragraphs just to set up for the lay audience (who then will wonder what use the problem is, as if that were the measure of interesting); Appel and Haken’s was the Four Color Theorem, which people can understand once they’ve used crayons and coloring books (even if they wonder whether it’s useful for anyone besides Hammond).

It was, by everything I’ve read, a controversial proof at the time, although by the time I was an undergraduate the controversy had faded, the way controversial stuff stops seeming that exciting decades on. The proximate controversy was that much of the proof was worked out by computer, which is the sort of thing that naturally alarms people whose jobs are to hand-carve proofs using coffee and scraps of lumber. The worry about that seems to have faded as more people get to use computers and find they’re not putting the proof-carvers out of work to any great extent, and as proof-checking software gets up to the task of doing what we would hope.

Still, the proof, right as it probably is, offers philosophers of mathematics a great example for figuring out just what is meant by a “proof”. The word implies that a proof is an argument which convinces a person of some proposition. But the Four Color Theorem proof is … well, according to Appel and Haken, 50 pages of text and diagrams, with 85 pages containing an additional 2,500 diagrams, and 400 microfiche pages with additional diagrams of verifications of claims made in the main text. I’ll never read all that, much less understand all that; it’s probably fair to say very few people ever will.

So I couldn’t, honestly, say it was proved to me. But that’s hardly the standard for saying whether something is proved. If it were, then every calculus class would produce the discovery that just about none of calculus has been proved, and that this whole “infinite series” thing sounds like it’s all doubletalk made up on the spot. And yet, we could imagine — at least, I could imagine — a day when none of the people who wrote the proof, or verified it for publication, or have verified it since then, are still alive. At that point, would the theorem still be proved?

(Well, yes: the original proof has been improved a bit, although it’s still a monstrously large one. And Neil Robertson, Daniel P Sanders, Paul Seymour, and Robin Thomas published a proof, similar in spirit but rather smaller, and have been distributing the tools needed to check their work; I can’t imagine a time when nobody alive has done, or at least could do, the checking work.)

I’m treading into the philosophy of mathematics, and I realize my naivete about questions like what constitutes a proof is painful to anyone who really studies the field. I apologize for inflicting that pain.

## Looking to Euler

I haven’t forgotten about writing original material here — actually I’ve been trying to think of why something I’ve not thought about in a long while is true, which is embarrassing and hard to do — but in the meanwhile I’d like to remember Leonhard Euler’s 306th birthday and point to Richard Elwes’s essay here about Euler’s totient function. “Totient” is, as best I can determine, a word that exists only for this mathematical concept — it’s the count of the whole numbers from 1 up to a given number that are relatively prime to it — but even if the word comes only from the mildly esoteric world of prime number studies, it’s still one of my favorite mathematical terms. It feels like a word that ought to be more successful. Someday I’ll probably get in a nasty argument with other people playing Boggle about it.
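A brute-force totient pins the definition down, even if it’s nothing anyone would use for large numbers (Euler’s product formula does that job; the naming below is mine):

```python
from math import gcd

def totient(n):
    """Euler's totient: how many of 1, 2, ..., n are relatively prime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# The first ten totient values; note totient(p) = p - 1 for a prime p,
# since every smaller positive number is relatively prime to it.
print([totient(n) for n in range(1, 11)])
```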

Apparently, though, Euler didn’t dub this quantity the “totient”, and the word is a neologism coined by James Joseph Sylvester (1814 – 1897). That’s pretty respectable company, though: Sylvester — whose name you probably brush up against if you study mathematical matrices — is widely praised for his skill in naming things, although the only terms I know offhand that he gave us are “totient” and “discriminant”. That $b^2 - 4ac$ term in the quadratic formula, which tells you whether a quadratic equation has two real, one real, or two complex solutions, got its name (though not the concept) from him, and he named (and extended) the similar concept for cubic equations. I do believe there are more such Sylvester-dubbed terms, just that we’d need a Wikipedia category to gather them all together.

I’m amused to be reminded that, according to the St Andrews biographies of mathematicians, Sylvester at least once tossed off this version of the Chicken McNuggets problem, possibly after he’d worked out the general solution:

I have a large number of stamps to the value of 5d and 17d only. What is the largest denomination which I cannot make up with a combination of these two different values.
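Sylvester’s stamp puzzle is the two-denomination Frobenius problem, and for relatively prime denominations $a$ and $b$ the general solution he worked out is $ab - a - b$. A brute-force check of his 5d/17d version (my own sketch, small inputs only):

```python
def largest_unmakeable(a, b):
    """Largest amount not makeable from stamps worth a and b,
    assuming a and b are relatively prime; every amount from
    (a-1)*(b-1) onward is makeable, so searching below a*b suffices."""
    limit = a * b
    makeable = {x * a + y * b
                for x in range(limit // a + 1)
                for y in range(limit // b + 1)}
    return max(n for n in range(limit) if n not in makeable)

print(largest_unmakeable(5, 17))  # Sylvester's puzzle
print(5 * 17 - 5 - 17)            # his closed-form answer
```

Both lines print 63, so the largest denomination Sylvester couldn’t make up is 63d.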