About Chances of Winning on The Price Is Right, Again


While I continue to wait for time and muse and energy and inspiration to write fresh material, let me share another old piece. This bit from a decade ago examines statistical quirks in The Price Is Right. Game shows offer a lot of material for probability questions. The specific numbers have changed since this was posted, but, the substance hasn’t. I got a bunch of essays out of one odd incident mentioned once on the show, and let me do something useful with that now.

To the serious game show fans: Yes, I am aware that the “Item Up For Bid” is properly called the “One-Bid”. I am writing for a popular audience. (The name “One-Bid” comes from the original, 1950s, run of the show, when the game was entirely about bidding for prizes. A prize might have several rounds of bidding, or might have just the one, and that format is the one used for the Item Up For Bid for the current, 1972-present, show.)


Putting together links to all my essays about trapezoid areas made me realize I also had a string of articles examining that problem of The Price Is Right, with Drew Carey’s claim that only once in the show’s history had all six contestants who won the Item Up For Bid come from the same seat in Contestants’ Row. As with the trapezoid pieces they form a more or less coherent whole, so, let me make it easy for people searching the web for the likelihood of clean sweeps or of perfect games on The Price Is Right to find my thoughts.

Continue reading “About Chances of Winning on The Price Is Right, Again”

Reading The Comics, September 24, 2014: Explained In Class Edition


I’m a fan of early 20th century humorist Robert Benchley. You might not be yourself, but it’s rather likely that among the humorists you do like are a good number of people who are fans of his. He’s one of the people who shaped the modern American written-humor voice, and as such his writing hasn’t dated, the way that, for example, a 1920s comic strip will often seem to come from a completely different theory of what humor might be. Among Benchley’s better-remembered quotes, and one of those striking insights into humanity, not to mention the best productivity tip I’ve ever encountered, was something he dubbed the Benchley Principle: “Anyone can do any amount of work, provided it isn’t the work he is supposed to be doing at the moment.” One of the comics in today’s roundup of mathematics-themed comics brought the Benchley Principle to mind, and I mean to get to how it did and why.

Eric The Circle (by ‘Griffinetsabine’ this time) (September 18) steps again into the concerns of anthropomorphized shapes. It’s also got a charming-to-me mention of the trapezium, the geometric shape that’s going to give my mathematics blog whatever immortality it shall have.

Bill Watterson’s Calvin and Hobbes (September 20, rerun) dodged on me: I thought after the strip from the 19th that there’d be a fresh round of explanations of arithmetic, this time including imaginary numbers like “eleventeen” and “thirty-twelve” and the like. Not so. After some explanation of addition by Calvin’s Dad, Spaceman Spiff would, on the 22nd, take up the task of smashing together Mysterio planets 6 and 5, which takes a little time to really get started, and finally sees the successful collision of the worlds. Let this serve as a reminder: translating a problem to a real-world application can be a fine way to understand what is wanted, but you have to make sure that in the translation you preserve the result you wanted from the calculation.

Joe has memorized the odds for various poker hands. Four times four, not so much.
Rick Detorie’s One Big Happy for the 21st of September, 2014. I confess ignorance as to whether these odds are accurate.

It’s Rick Detorie’s One Big Happy (September 21) which brought the Benchley Principle to my mind. Here, Joe is shown to know extremely well the odds of poker hands, but to have no chance at having learned the multiplication table. It seems like something akin to Benchley’s Principle is at work here: Joe memorizing the times tables might be socially approved, but it isn’t what he wants to do, and that’s that. But inspiring the desire to know something is probably the one great challenge facing everyone who means to teach, isn’t it?

Jonathan Lemon’s Rabbits Against Magic (September 21) features a Möbius strip joke that I imagine was a good deal of fun to draw. The Möbius strip is one of those concepts that really catches the imagination, since it seems to defy intuition that something should have only the one side. I’m a little surprised that topology isn’t better-popularized, as it seems like it should be fairly accessible — you don’t need equations to get some surprising results, and you can draw pictures — but maybe I just don’t understand the field well enough to understand what’s difficult about bringing it to a mass audience.

Hector D. Cantu and Carlos Castellanos’s Baldo (September 23) tells a joke about percentages and students’ self-confidence about how good they are with “numbers”. In strict logic, yes, the number of people who say they are and who say they aren’t good at numbers should add up to something under 100 percent. But people don’t tend to be logically perfect, and are quite vulnerable to the way questions are framed, so the scenario is probably more plausible in the real world than the writer intended.

Steve Moore’s In The Bleachers (September 23) falls back on the most famous of all equations as representative of “something it takes a lot of intelligence to understand”.

Reading the Comics, August 29, 2014: Recurring Jokes Edition


Well, I did say we were getting to the end of summer. It’s taken only a couple days to get a fresh batch of enough mathematics-themed comics to include here, although the majority of them are about mathematics in ways that we’ve seen before, sometimes many times. I suppose that’s fair; it’s hard to keep thinking of wholly original mathematics jokes, after all. When you’ve had one killer gag about “537”, it’s tough to move on to “539” and have it still feel fresh.

Tom Toles’s Randolph Itch, 2 am (August 27, rerun) presents Randolph suffering the nightmare of contracting a case of entropy. Entropy might be the 19th-century mathematical concept that’s most achieved popular recognition: everyone knows it as some kind of measure of how disorganized things are, and that it’s going to ever increase, and if pressed there’s maybe something about milk being stirred into coffee that’s linked with it. The mathematical definition of entropy is tied to the probability one will find whatever one is looking at in a given state. Work out the probability of finding a system in a particular state — having particles in these positions, with these speeds, maybe these bits of magnetism, whatever — and multiply that by the logarithm of that probability. Work out that product for all the possible ways the system could possibly be configured, however likely or however improbable, just so long as they’re not impossible states. Then add together all those products over all possible states, and flip the sign of the sum — the logarithm of a probability is never positive, so this last step makes the entropy come out nonnegative. (This is when you become grateful for learning calculus, since that makes it imaginable to do all these multiplications and additions.) That’s the entropy of the system. And it applies to things with stunning universality: it can be meaningfully measured for the stirring of milk into coffee, to heat flowing through an engine, to a body falling apart, to messages sent over the Internet, all the way to the outcomes of sports brackets. It isn’t just body parts falling off.
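If you’d like to see that recipe as something more concrete than words, here’s a little Python sketch of the discrete case — my own toy, nothing to do with the strip, and with the conventional sign flip so the entropy comes out nonnegative:

```python
import math

def entropy(probabilities):
    """Entropy of a discrete distribution: take each state's probability,
    multiply it by the logarithm of that probability, add the products up
    over every possible state, and flip the sign of the sum."""
    return -sum(p * math.log(p) for p in probabilities if p > 0)

# A system certain to be found in a single state carries no entropy at all...
print(entropy([1.0]) == 0.0)  # True
# ...and the entropy grows as the states become closer to equally likely,
# which is the mathematical shape of "more disorganized".
print(entropy([0.9, 0.1]) < entropy([0.5, 0.5]))  # True
```

The continuous case trades the sum for an integral, which is where the gratitude for calculus comes in.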

Stanley's old algebra teacher insists there is yet hope for him.
Randy Glasbergen’s _The Better Half_ for the 28th of August, 2014.

Randy Glasbergen’s The Better Half (August 28) does the old joke about not giving up on algebra someday being useful. Do teachers in other subjects get this? “Don’t worry, someday your knowledge of the Panic of 1819 will be useful to you!” “Never fear, someday they’ll all look up to you for being able to diagram a sentence!” “Keep the faith: you will eventually need to tell someone who only speaks French that the notebook of your uncle is on the table of your aunt!”

Eric the Circle (August 28, by “Gilly” this time) sneaks into my pages again by bringing a famous mathematical symbol into things. I’d like to make a mention of the links between mathematics and music which go back at minimum as far as the Ancient Greeks and the observation that a lyre string twice as long produced the same note one octave lower, but lyres and strings don’t fit the reference Gilly was going for here. Too bad.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (August 28) is another strip to use a “blackboard full of mathematical symbols” as visual shorthand for “incredibly smart stuff is going on”. The symbols look to me like they at least started out as being meaningful — they’re the kinds of symbols I expect in describing the curvature of space, and which you can find by opening up a book about general relativity — though I’m not sure they actually stay sensible. (It’s not the kind of mathematics I’ve really studied.) However, work in progress tends to be sloppy, the rough sketch of an idea which can hopefully be made sound.

Anthony Blades’s Bewley (August 29) has the characters stare into space pondering the notion that in the vastness of infinity there could be another of them out there. This is basically the same existentially troublesome question of the recurrence of the universe in enough time, something not actually prohibited by the second law of thermodynamics and the way entropy tends to increase with the passing of time, but we have already talked about that.

Reading the Comics, August 25, 2014: Summer Must Be Ending Edition


I’m sorry to admit that I can’t think of a unifying theme for the most recent round of comic strips which mention mathematical topics, other than that this is one of those rare instances of nobody mentioning infinite numbers of typing monkeys. I have to guess Comic Strip Master Command sent around a notice that summer vacation (in the United States) will be ending soon, so cartoonists should start practicing their mathematics jokes.

Tom Toles’s Randolph Itch, 2 a.m. (August 22, rerun) presents what’s surely the lowest-probability outcome of a toss of a fair coin: its landing on the edge. (I remember this as also the gimmick starting a genial episode of The Twilight Zone.) It’s a nice reminder that you do have to consider all the things that might affect an experiment’s outcome before concluding what are likely and unlikely results.

It also inspires, in me, a side question: a single coin, obviously, has a tiny chance of landing on its edge. A roll of coins has a tiny chance of not landing on its side. How thick a roll has to be assembled before the chance of landing on the side and the chance of landing on either end become equal? (Without working it out, my guess is it’s about when the roll of coins is as tall as it is across, but I wouldn’t be surprised if it were some slightly oddball thing like the roll has to be the square root of two times the diameter of the coins.)

Doug Savage’s Savage Chickens (August 22) presents an “advanced Sudoku”, in a puzzle that’s either trivially easy or utterly impossible: there are so few constraints on the numbers in the presented puzzle that it’s not hard to write in digits that will satisfy the results, but, if there’s one right answer, there’s not nearly enough information to tell which one it is. I do find the problem of satisfiability — giving just enough information to solve the puzzle, without allowing more than one solution to be valid — an interesting one. I imagine there’s a very similar problem at work in composing Ivasallay’s Find The Factors puzzles.

Phil Frank and Joe Troise’s The Elderberries (August 24, rerun) presents a “mind aerobics” puzzle in the classic mathematical form of drawing socks out of a drawer. Talking about pulling socks out of drawers suggests a probability puzzle, but the question actually takes it a different direction, into a different sort of logic, and asks about how many socks need to be taken out in order to be sure you have one of each color. The easiest way to apply this is, I believe, to use what’s termed the “pigeon hole principle”, which is one of those mathematical concepts so clear it’s hard to actually notice it. The principle is just that if you have fewer pigeon holes than you have pigeons, and put every pigeon in a pigeon hole, then there’s got to be at least one pigeon hole with more than one pigeon. (Wolfram’s MathWorld credits the statement to Peter Gustav Lejeune Dirichlet, a 19th century German mathematician with a long record of things named for him in number theory, probability, analysis, and differential equations.)
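The worst case for this kind of sock puzzle has a tidy general answer: you could be so unlucky as to first draw every sock except those of the scarcest color, and only then does the next draw force the missing color. Here’s a quick Python sketch of that reasoning (the drawer contents are made up by me, not taken from the strip):

```python
def draws_to_guarantee_one_of_each(counts):
    """How many draws guarantee one sock of every color.
    Worst case: you first draw every sock EXCEPT those of the scarcest
    color; one more draw then must produce the missing color.
    `counts` maps each color to how many socks of it are in the drawer."""
    total = sum(counts.values())
    return total - min(counts.values()) + 1

# Ten black and ten brown socks: you might pull all ten black socks first,
# so only the eleventh draw guarantees one of each color.
print(draws_to_guarantee_one_of_each({"black": 10, "brown": 10}))  # 11
```

Note that the answer doesn’t depend on probabilities at all, which is why the puzzle belongs to logic rather than to chance.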

Dave Whamond’s Reality Check (August 24) pulls out the old little pun about algebra and former romantic partners. You’ve probably seen this joke passed around your friends’ Twitter or Facebook feeds too.

Julie Larson’s The Dinette Set (August 25) presents some terrible people’s definition of calculus, as “useless math with letters instead of numbers”, which I have to gripe about because that seems like a more on-point definition of algebra. I’m actually sympathetic to the complaint that calculus is useless, at least if you don’t go into a field that requires it (although that’s rather a circular definition, isn’t it?), but I don’t hold to the idea that whether something is “useful” should determine whether it’s worth learning. My suspicion is that things you find interesting are worth learning, either because you’ll find uses for them, or just because you’ll be surrounding yourself with things you find interesting.

Shifting from numbers to letters, as are used in algebra and calculus, is a great advantage. It allows you to prove things that are true for many problems at once, rather than just the one you’re interested in at the moment. This generality may be too much work to bother with, at least for some problems, but it’s easy to see what’s attractive in solving a problem once and for all.

Mikael Wulff and Anders Morgenthaler’s WuMo (August 25) uses a couple of motifs, neither of which I’m sure is precisely mathematical, but they seem close enough for my needs. First there’s the motif of Albert Einstein as just being so spectacularly brilliant that he can form an argument in favor of anything, regardless of whether it’s right or wrong. Surely that derives from Einstein’s general reputation of utter brilliance, perhaps flavored by the point that he was able to show how common-sense intuitive ideas about things like “it’s possible to say whether this event happened before or after that event” go wrong. And then there’s the motif of a sophistic argument being so massive and impressive in its bulk that it’s easier to just give in to it rather than try to understand or refute it.

It’s fair of the strip to present Einstein as beginning with questions about how one perceives the universe, though: his relativity work in many ways depends on questions like “how can you tell whether time has passed?” and “how can you tell whether two things happened at the same time?” These are questions which straddle physics, mathematics, and philosophy, and trying to find answers which are logically coherent and testable produced much of the work that’s given him such lasting fame.

Reading the Comics, July 24, 2014: Math Is Just Hard Stuff, Right? Edition


Maybe there is no pattern to how Comic Strip Master Command directs the making of mathematics-themed comic strips. It hasn’t quite been a week since I had enough to gather up again. But it’s clearly the summertime anyway; the most common theme this time seems to be just that mathematics is some hard stuff, without digging much into particular subjects. I can work with that.

Pab Sungenis’s The New Adventures of Queen Victoria (July 19) brings in Erwin Schrödinger and his in-strip cat Barfly for a knock-knock joke about proof, with Andrew Wiles’s name dropped probably because he’s the only person who’s gotten to be famous for a mathematical proof. Wiles certainly deserves fame for proving Fermat’s Last Theorem and opening up what I understand to be a useful new field for mathematical research (Fermat’s Last Theorem by itself is nice but unimportant; the tools developed to prove it, though, that’s worthwhile), but remembering only Wiles does slight Richard Taylor, whose help Wiles needed to close a flaw in his proof.

Incidentally I don’t know why the cat is named Barfly. It has the feel to me of a name that was a punchline for one strip and then Sungenis felt stuck with it. As Thomas Dye of the web comic Newshounds said, “Joke names’ll kill you”. (I’m inclined to think that funny names can work, as the Marx Brothers, Fred Allen, and Vic and Sade did well with them, but they have to be a less demanding kind of funny.)

John Deering’s Strange Brew (July 19) uses a panel full of mathematical symbols scrawled out as the representation of “this is something really hard being worked out”. I suppose this one could also be filed under “rocket science themed comics”, but it comes from almost the first problem of mathematical physics: if you shoot something straight up, how long will it take to fall back down? The faster the thing starts up, the longer it takes to fall back, until at some speed — the escape velocity — it never comes back. This is because the size of the gravitational attraction between two things decreases as they get farther apart. At or above the escape velocity, the thing has enough speed that all the pulling of gravity, from the planet or moon or whatever you’re escaping from, will not suffice to slow the thing down to a stop and make it fall back down.

The escape velocity depends on the size of the planet or moon or sun or galaxy or whatever you’re escaping from, of course, and how close to the surface (or center) you start from. It also assumes you’re talking about the speed when the thing starts flying away, that is, that the thing doesn’t fire rockets or get a speed boost by flying past another planet or anything like that. And things don’t have to reach the escape velocity to be useful. Nothing that’s in earth orbit has reached the earth’s escape velocity, for example. I suppose that last case is akin to how you can still get some stuff done without getting out of the recliner.
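For the curious, the escape velocity falls out of balancing kinetic energy against gravitational binding energy, which gives the formula v = √(2GM/r). A quick Python sketch, using roughly the textbook figures for the Earth (the numbers are standard reference values, not anything from the strip):

```python
import math

G = 6.674e-11  # Newtonian gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """Speed at which kinetic energy just matches gravitational binding:
    (1/2) m v^2 = G M m / r, so v = sqrt(2 G M / r)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

# From the Earth's surface (mass about 5.97e24 kg, radius about 6.37e6 m)
# this comes out to the familiar figure of a little over 11 km/s.
print(round(escape_velocity(5.97e24, 6.37e6)))
```

Notice the mass of the escaping thing cancels out entirely, which is why the escape velocity is a property of the planet and the starting distance, not of the projectile.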

Mel Henze’s Gentle Creatures (July 21) uses mathematics as the standard for proving intelligence exists. I’ve got a vested interest in supporting that proposition, but I can’t bring myself to say more than that it shows a particular kind of intelligence exists. I appreciate the equation of the final panel, though, as it can be pretty well generalized.

To disguise a sports venue it’s labelled “Math Arena”, with “lectures on the actual odds of beating the casino”.
Bill Holbrook’s _Safe Havens_ for the 22nd of July, 2014.

Bill Holbrook’s Safe Havens (July 22) plays on mathematics’ reputation of being not very much a crowd-pleasing activity. That’s all right, although I think Holbrook makes a mistake by having the arena claim to offer a “lecture on the actual odds of beating the casino”, since the mathematics of gambling is just the sort of mathematics I think would draw a crowd. Probability enjoys a particular sweet spot for popular treatment: many problems don’t require great amounts of background to understand, and have results that are surprising, but which have reasons that are easy to follow and don’t require sophisticated arguments, and are about problems that are easy to imagine or easy to find interesting: cards being drawn, dice being rolled, coincidences being found, or secrets being revealed. I understand Holbrook’s editorial cartoon-type point behind the lecture notice he put up, but the venue would have better scared off audiences if it offered a lecture on, say, “Chromatic polynomials for rigidly achiral graphs: new work on Yamada’s invariant”. I’m not sure I could even explain that title in 1200 words.

Missy Meyer’s Holiday Doodles (July 22) reveals to me that apparently the 22nd of July was “Casual Pi Day”. Yeah, I suppose that passes. I didn’t see much about it in my Twitter feed, but maybe I need some more acquaintances who don’t write dates American-fashion.

Thom Bluemel’s Birdbrains (July 24) again uses mathematics — particularly, Calculus — as not just the marker for intelligence but also as The Thing which will decide whether a kid goes on to success in life. I think the dolphin (I guess it’s a dolphin?) parent is being particularly horrible here, as it’s not as if a “B+” is in any way a grade to be ashamed of, and telling kids it is either drives them to give up on caring about grades, or makes them send whiny e-mails to their instructors about how they need this grade and don’t understand why they can’t just do some make-up work for it. Anyway, it makes the kid miserable, it makes the kid’s teachers or professors miserable, and for crying out loud, it’s a B+.

(I’m also not sure whether a dolphin would consider a career at Sea World success in life, but that’s a separate and very sad issue.)

Reading the Comics, June 11, 2014: Unsound Edition


I can tell the school year is getting near the end: it took a full week to get enough mathematics-themed comic strips to put together a useful bundle of them this time. I don’t know what I’m going to do this summer when there’s maybe two comic strips I can talk about per week and I have to go finding my own initiative to write about things.

Jef Mallett’s Frazz (June 6) is a pun strip, yeah, although it’s one that’s more or less legitimate for a word problem. The reason I have to say “more or less” is that it’s not clear to me whether, per Caulfield’s specification, the amount of ore lost across each Great Lake is three percent of the original cargo or three percent of the remaining cargo. But writing a word problem so that there’s only the one correct solution is a skill that needs development no less than solving word problems is, and probably if we imagine Caulfield grading he’d realize there was an ambiguity when a substantial number of the papers make the opposite assumption to what he’d had in his mind.
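The two readings really do produce different answers, and it’s easy to see how far apart they land. Here’s a Python sketch — the hundred units of cargo and the flat three percent per lake are my own stand-ins, since the strip’s exact figures aren’t in front of me:

```python
def remaining_fixed(cargo, lakes=5, rate=0.03):
    """Reading one: each lake claims three percent of the ORIGINAL cargo."""
    return cargo * (1 - rate * lakes)

def remaining_compound(cargo, lakes=5, rate=0.03):
    """Reading two: each lake claims three percent of whatever REMAINS."""
    for _ in range(lakes):
        cargo *= 1 - rate
    return cargo

# Starting with 100 units across the five Great Lakes:
print(round(remaining_fixed(100.0), 2))     # 85.0
print(round(remaining_compound(100.0), 2))  # 85.87
```

Close enough that a grader might not notice at a glance, but different enough that a class full of papers would split into two camps.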

Ruben Bolling’s Tom the Dancing Bug (June 6, and I believe it’s a rerun) steps into some of the philosophically heady waters that one gets into when you look seriously at probability, and that get outright silly when you throw omniscience into the mix. The Supreme Planner has worked out what he concludes to be a plan certain of success, but: does that actually mean one will succeed? Even if we assume that the Supreme Planner is able to successfully know and account for every factor which might affect his success — well, for a less criminal plan, consider: one is certain to toss heads at least once, if one flips a fair coin infinitely many times. And yet it would not actually be impossible to flip a fair coin infinitely many times and have it turn up tails every time. That something can have a probability of 1 (or 100%) of happening and nevertheless not happen — or equivalently, that something can have a probability of 0 (0%) of happening and still happen — is exactly analogous to how a concept can be true almost everywhere, that is, it can be true with exceptions that in some sense don’t matter. Ruben Bolling tosses in the troublesome notion of the multiverse, the idea that everything which might conceivably happen does happen “somewhere”, to make these impossible events all the more imminent. I’m impressed Bolling is able to touch on so much, with a taste of how unsettling the implications are, in a dozen panels and stay funny about it.

Enos cheats, badly, on his test.
Bud Grace’s The Piranha Club for the 9th of June, 2014.

Bud Grace’s The Piranha Club (June 9) gives us Enos cheating with perfectly appropriate formulas for a mathematics exam. I’m kind of surprised the Pythagorean Theorem would rate cheat-sheet knowledge, actually, as I thought that had reached the popular culture at least as well as Einstein’s E = mc2 had, although perhaps it’s reached it much as Einstein’s has, as a charming set of sounds without any particular meaning behind them. I admit my tendency in giving exams, too, has been to allow students to bring their own sheet of notes, or even to have open-book exams, on the grounds that I don’t really care whether they’ve memorized formulas and am more interested in whether they can find and apply the relevant formulas. But that doesn’t make me right; I agree there’s value in being able to identify what the important parts of the course are and to remember them well, and even more value in being able to figure out the area of a triangle or a trapezoid from thinking hard about the subject on your own.

Jason Poland’s Robbie and Bobbie (June 10) is looking for philosophy and mathematics majors, so, here’s hoping it’s found a couple more. The joke here is about the classification of logical arguments. A valid argument is one in which the conclusion does indeed follow from the premises according to the rules of deductive logic. A sound argument is a valid argument in which the premises are also true. The reason these aren’t exactly the same thing is that whether a conclusion follows from the premise depends on the structure of the argument; the content is irrelevant. This means we can do a great deal of work, reasoning out things which follow if we suppose that proposition A being true implies B is false, or that we know B and C cannot both be false, or whatnot. But this means we may fill in, Mad-Libs-style, whatever we like to those propositions and come away with some funny-sounding arguments.

So this is how we can have an argument that’s valid yet not sound. It is valid to say that, if baseball is a form of band organ always found in amusement parks, and if amusement parks are always found in the cubby-hole under my bathroom sink, then, baseball is always found in the cubby-hole under my bathroom sink. But as none of the premises going into that argument are true, the argument’s not sound, which is how you can have anything be “valid but not sound”. Identifying arguments that are valid but not sound is good for a couple questions on your logic exam, so, be ready for that.
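Since validity depends only on the structure of the argument, a computer can check it by brute force: try every assignment of true and false to the propositions, and see whether any assignment makes all the premises true while the conclusion comes out false. A small Python sketch of that idea (the function name and setup are my own, not standard logic-course notation):

```python
from itertools import product

def is_valid(premises, conclusion, num_vars):
    """An argument is valid when NO assignment of truth values makes
    every premise true while the conclusion comes out false.
    Premises and conclusion are functions of a tuple of booleans."""
    for assignment in product([False, True], repeat=num_vars):
        if all(p(assignment) for p in premises) and not conclusion(assignment):
            return False
    return True

# "If A then B; A; therefore B" -- valid no matter what A and B stand for,
# even baseball and cubby-holes under bathroom sinks.
print(is_valid(
    premises=[lambda v: (not v[0]) or v[1], lambda v: v[0]],
    conclusion=lambda v: v[1],
    num_vars=2,
))  # True

# "If A then B; B; therefore A" -- not valid (affirming the consequent).
print(is_valid(
    premises=[lambda v: (not v[0]) or v[1], lambda v: v[1]],
    conclusion=lambda v: v[0],
    num_vars=2,
))  # False
```

Soundness is the part the computer can’t check for you: whether the premises are actually true is a question about the world, not about the argument’s shape.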

Edison Lee fails to catch a ball because he miscalculates where it should land.
John Hambrock’s _The Brilliant Mind of Edison Lee_ for the 11th of June, 2014.

John Hambrock’s The Brilliant Mind of Edison Lee (June 11) has the brilliant yet annoying Edison trying to prove his genius by calculating precisely where the baseball will drop. This is a legitimate mathematics/physics problem, of course: one could argue that the modern history of mathematical physics comes from the study of falling balls, albeit more of cannonballs than baseballs. If there’s no air resistance and if gravity is uniform, the problem is easy and you get to show off your knowledge of parabolas. If gravity isn’t uniform, you have to show off your knowledge of ellipses. Either way, you can get into some fine differential equations work, and that work gets all the more impressive if you do have to pay attention to the fact that a ball moving through the air loses some of its speed to the air molecules. That said, it’s amazing that people are able to, in effect, work out approximate solutions to “where is this ball going” in their heads, not to mention to act on it and get to the roughly correct spot, at least when they’ve had some practice.
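The easy, parabolic case really is easy enough to fit in a few lines. Here’s a Python sketch of where a ball launched from the ground comes down, assuming uniform gravity and no air resistance (the launch numbers are my own example, not Edison’s):

```python
import math

def landing_distance(speed, angle_deg, g=9.81):
    """Horizontal distance a ball launched from the ground travels,
    in the easy case: uniform gravity, no air resistance."""
    angle = math.radians(angle_deg)
    # Time aloft: the vertical speed runs down to zero and builds back up,
    # so t = 2 * v * sin(angle) / g.
    t = 2 * speed * math.sin(angle) / g
    # Horizontal speed is constant the whole flight.
    return speed * math.cos(angle) * t

# A 30 m/s launch at 45 degrees carries a bit under 92 meters.
print(round(landing_distance(30, 45), 1))  # 91.7
```

Add air resistance and the closed-form answer disappears, which is where the fine differential equations work begins.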

Reading the Comics, May 13, 2014: Good Class Problems Edition


Someone in Comic Strip Master Command must be readying for the end of term, as there’s been enough comic strips mentioning mathematics themes to justify another of these entries, and that’s before I even start reading Wednesday’s comics. I can’t say that there seem to be any overarching themes in the past week’s grab-bag of strips, but, there are a bunch of pretty good problems that would fit well in a mathematics class here.

Darrin Bell’s Candorville (May 6) comes back around to the default application of probability, questions in coin-flipping. You could build a good swath of a probability course just from the questions the strip implies: how many coins have to come up heads before it becomes reasonable to suspect that something funny is going on? Two is obviously too few; two thousand is likely too many. But improbable things do happen, without it signifying anything. So what’s the risk of supposing something’s up when it isn’t? What’s the risk of dismissing the hints that something is happening?
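The arithmetic behind that suspicion is simple enough to sketch: the chance of a fair coin coming up heads n times running is one in 2^n, and it shrinks fast. A quick Python illustration (my own numbers, not the strip’s):

```python
def p_all_heads(n):
    """Chance a fair coin comes up heads n times in a row: (1/2)^n."""
    return 0.5 ** n

# Two heads in a row happens a quarter of the time -- nothing suspicious.
print(p_all_heads(2))   # 0.25
# Twenty in a row happens about once per million runs of twenty flips --
# at which point it's fair to start inspecting the coin.
print(p_all_heads(20))  # 9.5367431640625e-07
```

The hard part, as the strip implies, isn’t computing the probability; it’s deciding where along that slide from 1/4 to one-in-a-million your suspicion ought to kick in.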

Mark Anderson’s Andertoons (May 8) is another entry in the wiseacre schoolchild genre (I wonder if I’ve actually been consistent in describing this kind of comic, but, you know what I mean) and suggesting that arithmetic just be done on the computer. I’m sympathetic, however much fun it is doing arithmetic by hand.

Justin Boyd’s Invisible Bread (May 9) is honestly a marginal inclusion here, but it does show a mathematics problem that’s correctly formed and would reasonably be included on a precalculus or calculus class’s worksheets. It is a problem that’s a no-brainer, really, but that fits the comic’s theme of things functioning poorly.

Steve Moore’s In The Bleachers (May 12) uses baseball scores and the start of a series. A series, at least once you’re into calculus, is the sum of a sequence of numbers, and if there’s only finitely many of them, here, there’s not much that’s interesting to say. Each sequence of numbers has some sum and that’s it. But if you have an infinite series — well, there, all sorts of amazing things become possible (or at least logically justified), including integral calculus and numerical computing. The series from the panel, if carried out, would come to a pair of infinitely large sums — this is called divergence, and is why your mathematician friends on Facebook or Twitter are passing around that movie poster with a math formula for a divergent series on it — and you can probably get a fair argument going about whether the sum of all the even numbers would be equal to the sum of all the odd numbers. (My advice: if pressed to give an answer, point to the other side of the room, yell, “Look, a big, distracting thing!” and run off.)
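To see that divergence concretely: the sum of the first n even numbers is n(n + 1) and the sum of the first n odd numbers is n², and both just keep growing. A Python sketch of the partial sums (the framing is mine; the panel itself doesn’t carry the series this far):

```python
def partial_sum_evens(n):
    """2 + 4 + ... + 2n, which works out to n * (n + 1)."""
    return sum(2 * k for k in range(1, n + 1))

def partial_sum_odds(n):
    """1 + 3 + ... + (2n - 1), which works out to n * n."""
    return sum(2 * k - 1 for k in range(1, n + 1))

# Neither sequence of partial sums settles down -- both series diverge --
# and at every stage the even sum leads the odd sum by exactly n.
for n in (10, 100, 1000):
    print(n, partial_sum_evens(n), partial_sum_odds(n))
```

Which is why the argument about whether the two sums are “equal” can go on as long as you like: divergent series don’t have sums in the ordinary sense, so any answer you give depends on what extra rules you smuggle in.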

Samson’s Dark Side Of The Horse (May 13) is something akin to a pun, playing as it does on the difference between a number and a numeral and shifting between the ways we might talk about “three”. Also, I notice for the first time that apparently the little bird sometimes seen in the comic is named “Sine”, which is probably why it flies in such a wavy pattern. I don’t know how I’d missed that before.

Rick Detorie’s One Big Happy (May 13, rerun) is also a strip that plays on the difference between a number and its representation as a numeral, really. Come to think of it, it’s a bit surprising that in Arabic numerals there aren’t any relationships between the representations for numbers; one could easily imagine a system in which, say, the symbol for “four” were a pair of whatever represents “two”. In A History Of Mathematical Notations Florian Cajori notes that there really isn’t any system behind why a particular numeral has any particular shape, and he takes a section (Section 96 in Book 1) to get engagingly catty about people who insist there is. I’d like to quote it because it’s appealing, in that way:

A problem as fascinating as the puzzle of the origin of language relates to the evolution of the forms of our numerals. Proceeding on the tacit assumption that each of our numerals contains within itself, as a skeleton so to speak, as many dots, strokes, or angles as it represents units, imaginative writers of different countries and ages have advanced hypotheses as to their origin. Nor did these writers feel that they were indulging simply in pleasing pastimes or merely contributing to mathematical recreations. With perhaps only one exception, they were as convinced of the correctness of their explanations as are circle-squarers of the soundness of their quadratures.

Cajori goes on to describe attempts to rationalize the Arabic numerals as “merely … entertaining illustrations of the operation of a pseudo-scientific imagination, uncontrolled by all the known facts”, which gives some idea why Cajori is engaging reading for seven hundred pages about stuff like where the plus sign comes from.

Reading the Comics, April 27, 2014: The Poetry of Calculus Edition


I think there are enough comic strips for another installment of this series, so, here you go. There are a couple comics once again using mathematics, and calculus particularly, just to signify that there’s something requiring a lot of brainpower going on, which is flattering to people who learned calculus well enough, at the risk of conveying a sense that normal people can’t hope to become literate in mathematics. I don’t buy that. Anyway, there were comics that went in other directions, which is why there’s more talk about Dutch military engineering than you might have expected for today’s entry.

Mark Anderson’s Andertoons (April 22) uses the traditional blackboard full of calculus to indicate a genius. The exact formulas on the board don’t suggest anything particular to me, although they do seem to parse. I wouldn’t be surprised if they turned out to be taken from a textbook, possibly in fluid mechanics, that I just happen not to have noticed.

Piers Baker’s Ollie and Quentin (April 23, rerun) has Ollie and Quentin flipping a coin repeatedly until Quentin (the lugworm) sees his choice come up. Of course, if it is a fair coin, a call of heads or tails will come up eventually, at least if we carefully define what we mean by “eventually”, and for that matter, Quentin’s choice will surely come up if he tries long enough.
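You can see that “eventually” in action with a short simulation; this is only a sketch, with the seed and trial count chosen arbitrarily so the run is reproducible:

```python
import random

random.seed(2014)  # fixed seed so the run repeats exactly

def flips_until_heads():
    """Flip a fair coin until heads appears; return how many flips it took."""
    count = 1
    while random.random() < 0.5:  # treat this as tails and flip again
        count += 1
    return count

trials = [flips_until_heads() for _ in range(100_000)]
average = sum(trials) / len(trials)
# The chance heads still hasn't shown after n flips is (1/2)^n, which
# shrinks toward zero, so every trial ends; the average wait is 2 flips.
```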

Continue reading “Reading the Comics, April 27, 2014: The Poetry of Calculus Edition”

What Is True Almost Everywhere?


I was reading a thermodynamics book (C Truesdell and S Bharatha’s The Concepts and Logic of Classical Thermodynamics as a Theory of Heat Engines, which is a fascinating read, for the field, and includes a number of entertaining, for the field, snipes at the stuff textbook writers put in because they’re just passing on stuff without rethinking it carefully), and ran across a couple proofs which mentioned equations that were true “almost everywhere”. That’s a construction it might be surprising to know even exists in mathematics, so, let me take a couple hundred words to talk about it.

The idea isn’t really exotic. You’ve seen a kind of version of it when you see an equation containing the note that there’s an exception, such as, \frac{\left(x - 1\right)^2}{\left(x - 1\right)} = x \mbox{ for } x \neq 1 . If the exceptions are tedious to list — because there are many of them to write down, or because they’re wordy to describe (the thermodynamics book mentioned the exceptions were where a particular set of conditions on several differential equations happened simultaneously, if it ever happened) — and if they’re unlikely to come up, then, we might just write whatever it is we want to say and add an “almost everywhere”, or for shorthand, put an “ae” after the line. This “almost everywhere” will, except in freak cases, propagate through the rest of the proof, but I only see people writing that when they’re students working through the concept. In publications, the “almost everywhere” gets put in where the condition first stops being true everywhere-everywhere and becomes only almost-everywhere, and taken as read after that.

I introduced this with an equation, but it can apply to any relationship: something is greater than something else, something is less than or equal to something else, even something is not equal to something else. (After all, “x \neq -x” is true almost everywhere, but there is that nagging exception.) A mathematical proof is normally about things which are true. Whether one thing is equal to another is often incidental to that.

What’s meant by “unlikely to come up” is actually rigorously defined, which is why we can get away with this. It’s otherwise a bit daft to think we can just talk about things that are true except where they aren’t and not even post warnings about where they’re not true. If we say something is true “almost everywhere” on the real number line, for example, that means that the set of exceptions has a total length of zero. So if the only exception is where x equals 1, sure enough, that’s a set with no length. Similarly if the exceptions are where x equals positive 1 or negative 1, that’s still a total length of zero. But if the set of exceptions were all values of x from 0 to 4, well, that’s a set of total length 4 and we can’t say “almost everywhere” for that.

This is all quite like saying that it can’t happen that if you flip a fair coin infinitely many times it will come up tails every single time. It won’t, even though properly speaking there’s no reason that it couldn’t. If something is true almost everywhere, then your chance of picking an exception out of all the possibilities is about like your chance of flipping that fair coin and getting tails infinitely many times over.
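A computer can only gesture at this, since floating-point numbers are a finite stand-in for the continuum, but the spirit survives; here's a sketch, with an arbitrary fixed seed:

```python
import random

random.seed(1)  # arbitrary fixed seed for reproducibility

# Draw a million numbers uniformly from [0, 1) and count how many land
# exactly on 1/2.  The exceptional set {1/2} has length zero, so the
# count comes out (effectively surely) zero.
draws = (random.random() for _ in range(1_000_000))
hits = sum(1 for x in draws if x == 0.5)
```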

Reading the Comics, April 1, 2014: Name-Dropping Monkeys Edition


There’s been a little rash of comics that bring up mathematical themes, now, which is ordinarily pretty good news. But when I went back to look at my notes I realized most of them are pretty much name-drops, mentioning stuff that’s mathematical without giving me much to expand upon. The exceptions are what might well be the greatest gift which early 20th century probability could give humor writers. That’s enough for me.

Mark Anderson’s Andertoons (March 27) plays on the double meaning of “fifth” as representing a term in a sequence and as representing a reciprocal fraction. It also makes me realize that I hadn’t paid attention to the fact that English (at least) lets you get away with using the ordinal number for the reciprocal fraction, at least apart from “first” and “second”. I can make some guesses about why English allows that, but would like to avoid unnecessarily creating folk etymologies.

Hector D Cantu and Carlos Castellanos’s Baldo (March 27) has Baldo not do as well as he expected in predictive analytics, which I suppose doesn’t explicitly require mathematics, but would be rather hard to do without. Making predictions is one of mathematics’s great applications, and drives much mathematical work, in the extrapolation of curves and the solving of differential equations most obviously.

Dave Whamond’s Reality Check (March 27) name-drops the New Math, in the service of the increasingly popular sayings that suggest Baby Boomers aren’t quite as old as they actually are.

Rick Stromoski’s Soup To Nutz (March 29) name-drops the metric system, as Royboy notices his ten fingers and ten toes and concludes that he is indeed metric. The metric system is built around base ten, of course, and the idea that changing units should be as easy as multiplying and dividing by powers of ten, and powers of ten are easy to multiply and divide by because we use base ten for ordinary calculations. And why do we use base ten? Almost certainly because most people have ten fingers and ten toes, and it’s so easy to make the connection between counting fingers, counting objects, and then to the abstract idea of counting. There are cultures that used other numerical bases; for example, the Maya used base 20, but it’s hard not to notice that that’s just using fingers and toes together.

Greg Cravens’s The Buckets (March 30) brings out a perennial mathematics topic, the infinite monkeys. Here Toby figures he could be the greatest playwright by simply getting infinite monkeys and typewriters to match, letting them work, and harvesting the best results. He hopes that he doesn’t have to buy many of them, to spoil the joke, but the remarkable thing about the infinite monkeys problem is that you don’t actually need that many monkeys. You’ll get the same result — that, eventually, all the works of Shakespeare will be typed — with one monkey or with a million or with infinitely many monkeys; with fewer monkeys you just have to wait longer to expect success. Tim Rickard’s Brewster Rockit (April 1) manages with a mere hundred monkeys, although he doesn’t reach Shakespearean levels.

But making do with fewer monkeys is a surprisingly common tradeoff in random processes. You can often get the same results with many agents running for a shorter while, or a few agents running for a longer while. Processes that allow you to do this are called “ergodic”, and being able to prove that a process is ergodic is good news because it means a complicated system can be represented with a simple one. Unfortunately it’s often difficult to prove that something is ergodic, so you might instead just warn that you are assuming the ergodic hypothesis or ergodicity, and if nothing else you can probably get a good fight going about the validity of “ergodicity” next time you play Scrabble or Boggle.
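Here's a toy version of that tradeoff, with a two-letter target so the waits stay short; the alphabet, the target, and the character budgets are all invented for the illustration:

```python
import random
import string

random.seed(42)  # fixed seed so the run repeats exactly
ALPHABET = string.ascii_lowercase + " "  # 27 keys on the monkey typewriter
TARGET = "be"

def count_hits(n_chars):
    """How often TARGET shows up in n_chars of uniformly random typing."""
    text = "".join(random.choice(ALPHABET) for _ in range(n_chars))
    return sum(1 for i in range(len(text) - 1) if text[i:i + 2] == TARGET)

one_monkey = count_hits(270_000)                           # one long session
many_monkeys = sum(count_hits(27_000) for _ in range(10))  # ten short sessions
# Both arrangements type 270,000 characters in total and turn up the
# target about equally often (roughly 370 times): many agents for a
# short while do about as well as one agent for a long while.
```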

What Are The Chances Of An Upset?


I’d wondered idly the other day if a number-one seed had ever lost to a number-16 seed in the NCAA Men’s Basketball tournament. This finally made me go and actually try looking it up; a page on statistics.about.com has what it claims are the first-round results from 1985 (when the current 64-team format was adopted) to 2012. This lets us work out roughly the probability of, for example, the number-three seed beating the number-14, at least by what’s termed the “frequentist” interpretation of probability. In that interpretation, the probability of something happening is roughly the number of times the thing you’re interested in happens divided by the number of times it could happen. From 1985 to 2012 each of the various first-round possibilities was played 112 times (28 tournaments with four divisions each); if we make some plausible assumptions about games being independent events (how one seed did last year doesn’t affect how it does this year), we should have a decent rough idea of the probability of each seed winning.
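The frequentist arithmetic itself is nothing deeper than division; the win counts below are invented stand-ins for illustration, not the page's actual figures:

```python
games_played = 112  # 28 tournaments, four regions each

hypothetical_wins = {  # higher seed's first-round wins (invented numbers)
    (1, 16): 112,
    (2, 15): 108,
    (3, 14): 95,
}

# Frequentist estimate: how often the thing happened, divided by how
# often it could have happened.
probabilities = {
    matchup: wins / games_played
    for matchup, wins in hypothetical_wins.items()
}
```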

According to its statistics, and remarkably to me, the number-one seed has apparently never been beaten by the number-16. I’m surprised; I’d have guessed the bottom team had at least a one percent chance of victory. I’m also surprised that the Internet seems to have only the one page that’s gathered explicitly how often the first rounds go to the various seeds, although perhaps I’m just not searching for the right terms.

From http://bracketodds.cs.illinois.edu I learn that Dr Sheldon Jacobson and Dr Douglas M King of the University of Illinois (Urbana) published an interesting paper “Seeding In The NCAA Men’s Basketball Tournament: When is A Higher Seed Better?” which runs a variety of statistical tests on the outcomes of March Madness tournaments and finds that the seeding does seem to correspond to the stronger team in the first few rounds, but that after the Elite Eight round there’s not the evidence that a higher seed is more likely to win than the lower; effectively, after the first few rounds you might as well make a random pick.

Jacobson and King, along with Dr Alexander Nikolaev at SUNY/Buffalo and Dr Adrian J Lee, Central Illinois Technology and Education Research Institute, also wrote “Seed Distributions for the NCAA Men’s Basketball Tournament” which tries to model the tournament’s outcomes as random variables, and compares how these random-variable projections compare to what actually happened between 1985 and 2010. This includes some interesting projections about how often we might expect the various seeds to appear in the Sweet Sixteen, Elite Eight, or Final Four. It brings out some surprises — which make sense when you look back at the brackets — such as that the number-eight or number-nine seed has a worse chance of getting to the Sweet Sixteen than the eleventh- or twelfth-seed does.

(The eighth or ninth seed, if they win, have to play whoever wins the sixteen-versus-one contest, which will be the number-one seed. The eleventh seed has to beat first the number-six seed, and then either the number-three or the number-14 seed, either of which is a more winnable game than facing the number-one seed.)

Meanwhile, it turns out that in my brackets I had picked Connecticut to beat Villanova, which has me doing well in my group — we get bonus points for calling upsets — apart from the accusations of witchcraft.

Calculating March Madness


I did join a little group of people competing to try calling the various NCAA basketball tournament brackets. It’s a silly pastime and a way to commiserate with other people about how badly we’re doing forecasting the outcome of the 63 games in the tournament. We’re competing just for points and the glory of doing a little better than our friends, but there’s some actual betting pools out there, and some contests that offer, for perfect brackets, a billion dollars (Warren Buffet, if I have that right), or maybe even a new car (WLNS-TV, channel 6, Lansing).

Working out what the odds are of getting all 63 games right is more interesting than it might seem at first. The natural (it seems to me) first guess at working out the odds is to say, well, there are 63 games, and whatever team you pick has a 50 percent chance of winning that game, so the chance of getting all 63 games right is \left(\frac{1}{2}\right)^{63} , or one chance in 9,223,372,036,854,775,808.
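The arithmetic, for the record; Python's integers are arbitrary-precision, so the full count comes out exactly:

```python
from fractions import Fraction

games = 63
outcomes = 2 ** games             # distinct ways to fill out a bracket
chance = Fraction(1, 2) ** games  # coin-flip odds of a perfect bracket
# outcomes is 9,223,372,036,854,775,808, the figure quoted above.
```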

But it’s not quite so, and the reason is buried in the assumption that every team has a 50 percent chance of winning any given game. And that’s just not so: it’s plausible (as of this writing) to think that the final game will be Michigan State playing the University of Michigan. It’s just ridiculous to think that the final game will be SUNY/Albany (16th seeded) playing Wofford (15th).

The thing is that not all the matches are equally likely to be won by either team. The contest starts out with the number one seed playing the number 16, the number two seed playing the number 15, and so on. The seeding order roughly approximates the order of how good the teams are. It doesn’t take any great stretch to imagine the number ten seed beating the number nine seed; but, has a number 16 seed ever beaten the number one?

To really work out the probability of getting all the brackets right turns into a fairly involved problem. We can probably assume that the chance of, say, number-one seed Virginia beating number-16 seed Coastal Carolina is close to how frequently number-one seeds have beaten number-16 seeds in the past, and similarly that number-four seed Michigan State’s chances over number-13 Delaware is close to that historical average. But there are some 9,223,372,036,854,775,808 possible ways that the tournament could, in principle, go, and they’ve all got different probabilities of happening.

So there isn’t a unique answer to what is the chance that you’ve picked a perfect bracket set. It’s higher if you’ve picked a lot of higher-ranking seeds, certainly, at least assuming that this year’s tournament is much like previous years’, and that seeds do reflect reasonably well how likely teams are to win. At some point it starts to be easier to accept “one chance in 9,223,372,036,854,775,808” as close enough. Me, I’ll be gloating for the whole tournament thanks to my guess that Ohio State would lose to Dayton.
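To see how the seeding changes the arithmetic, here's a sketch for a single region's first round; the favorite's win chances are invented for illustration, not actual historical rates:

```python
from math import prod

# Hypothetical chance the favorite wins each first-round game in one
# region, for seeds 1 through 8 (invented numbers, roughly ordered the
# way you'd expect the historical record to run).
favorite_win_chance = [1.00, 0.96, 0.85, 0.79, 0.67, 0.64, 0.61, 0.51]

chalk_region = prod(favorite_win_chance)  # picking every favorite
coin_flip_region = 0.5 ** 8               # treating each game as a toss-up
# The all-favorites pick comes out far more likely than the coin-flip
# figure, which is why one-in-2^63 overstates the difficulty.
```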

[Edit: first paragraph originally read “games in the match”, which doesn’t quite parse.]

Reading the Comics, March 1, 2014: Isn’t It One-Half X Squared Plus C? Edition


So the subject line references here a mathematics joke that I never have heard anybody ever tell, and only encounter in lists of mathematics jokes. It goes like this: a couple professors are arguing at lunch about whether normal people actually learn anything about calculus. One of them says he’s so sure normal people learn calculus that even their waiter would be able to answer a basic calc question, and they make a bet on that. He goes back and finds their waiter and says, when she comes with the check he’s going to ask her if she knows what the integral of x is, and she should just say, “why, it’s one-half x squared, of course”. She agrees. He goes back and asks her what the integral of x is, and she says of course it’s one-half x squared, and he wins the bet. As he’s paid off, she says, “But excuse me, professor, isn’t it one-half x squared plus C?”

Let me explain why this is an accurately structured joke construct and must therefore be classified as funny. “The integral of x”, as the question puts it, has not just one correct answer but rather a whole collection of correct answers, which are different from one another only by the addition of a constant, any fixed number, by convention denoted C, and the inclusion of that “plus C” denotes that whole collection. The professor was being sloppy in referring to just a single example from that collection instead of the whole set, as the waiter knew to do. You’ll see why this is relevant to today’s collection of mathematics-themed comics.

Jef Mallet’s Frazz (February 22) points out one of the grand things about mathematics, that if you follow the proper steps in a mathematical problem you get to be right, and to be extraordinarily confident in that rightness. And that’s true, although, at least to me a good part of what’s fun in mathematics is working out what the proper steps are: figuring out what the important parts of something you want to study should be, and what follows from your representation of them, and — particularly if you’re trying to represent a complicated real-world phenomenon with a model — whether you’re representing the things you find interesting in the real-world phenomenon well. So, while following the proper steps gets you an answer that is correct within the limits of whatever it is you’re doing, you still get to work out whether you’re working on the right problem, which is the real fun.

Mark Pett’s Lucky Cow (February 23, rerun) uses that ambiguous place between mathematics and physics to represent extreme smartness. The equation the physicist brings to Neil is the (time-dependent) Schrödinger Equation, describing how probability evolves in time, and the answer is correct. If Neil’s coworkers at Lucky Cow were smarter they’d realize the scam, though: while the equation is impressively scary-looking to people not in the know, a particle physicist would have about as much chance of forgetting this as of forgetting the end of “E equals m c … ”.

Hilary Price’s Rhymes With Orange (February 24) builds on the familiar infinite-monkeys metaphor, but misses an important point. Price is right that yes, an infinite number of monkeys already did create the works of Shakespeare, as a result of evolving into a species that could have a Shakespeare. But the infinite monkeys problem is about selecting letters at random, uniformly: the letter following “th” is as likely to be “q” as it is to be “e”. An evolutionary system, however, encourages the more successful combinations in each generation, and discourages the less successful: after writing “th” Shakespeare would be far more likely to put “e” and never “q”, which makes calculating the probability rather less obvious. And Shakespeare was writing with awareness that the words mean things and they must be strings of words which make reasonable sense in context, which the monkeys on typewriters would not. Shakespeare could have followed the line “to be or not to be” with many things, but one of the possibilities would never be “carport licking hammer worbnoggle mrxl 2038 donkey donkey donkey donkey donkey donkey donkey”. The typewriter monkeys are not so selective.

Dan Thompson’s Brevity (February 26) is a cute joke about a number’s fashion sense.

Mark Pett’s Lucky Cow turns up again (February 28, rerun) for the Rubik’s Cube. The tolerably fun puzzle and astoundingly bad Saturday morning cartoon of the 80s can be used to introduce abstract algebra. When you rotate the nine little cubes on the edge of a Rubik’s cube, you’re doing something which is kind of like addition. Think of what you can do with the top row of cubes: you can leave it alone, unchanged; you can rotate it one quarter-turn clockwise; you can rotate it one quarter-turn counterclockwise; you can rotate it two quarter-turns clockwise; you can rotate it two quarter-turns counterclockwise (which will result in something suspiciously similar to the two quarter-turns clockwise); you can rotate it three quarter-turns clockwise; you can rotate it three quarter-turns counterclockwise.

If you rotate the top row one quarter-turn clockwise, and then another one quarter-turn clockwise, you’ve done something equivalent to two quarter-turns clockwise. If you rotate the top row two quarter-turns clockwise, and then one quarter-turn counterclockwise, you’ve done the same as if you’d just turned it one quarter-turn clockwise and walked away. You’re doing something that looks a lot like addition, without being exactly like it. Something odd happens when you get to four quarter-turns either clockwise or counterclockwise, particularly, but it all follows clear rules that become pretty familiar when you notice how much it’s like saying four hours after 10:00 will be 2:00.
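The top-row arithmetic can be sketched in a few lines; calling clockwise +1 and counterclockwise -1 is my own labeling, not anything from the strip:

```python
def compose(*turns):
    """Net effect of a sequence of quarter-turns on the top row.

    A clockwise quarter-turn counts +1, counterclockwise -1; the result
    is reduced modulo 4 because four quarter-turns restore the row,
    just as twelve hours restore the clock face.
    """
    return sum(turns) % 4

one_then_one = compose(1, 1)       # two clockwise quarter-turns: a half-turn
two_then_back = compose(2, -1)     # same as a single clockwise quarter-turn
full_circle = compose(1, 1, 1, 1)  # four quarter-turns: back where it started
```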

Abstract algebra marks one of the things you have to learn as a mathematics major that really changes the way you look at mathematics, since the subject stops being about trying to solve equations of any kind. You instead start looking at how structures are put together — rotations are seen a lot, probably because they’re familiar enough you still have some physical intuition, while still having significant new aspects — and following this trail can lead, for example, to the parts of particle physics where you predict some exotic new subatomic particle has to exist because there’s a structure that makes sense if it does.

Jenny Campbell’s Flo and Friends (March 1) is set off with the sort of abstract question that comes to mind when you aren’t thinking about mathematics: how many five-card combinations are there in a deck of (52) cards? Ruthie offers an answer, although — as the commenters get to disputing — whether she’s right depends on what exactly you mean by a “five-card combination”. Would you say that a hand of “2 of hearts, 3 of hearts, 4 of clubs, Jack of diamonds, Queen of diamonds” is a different one to “3 of hearts, Jack of diamonds, 4 of clubs, Queen of diamonds, 2 of hearts”? If you’re playing a game in which the order of the deal doesn’t matter, you probably wouldn’t; but, what if the order does matter? (I admit I don’t offhand know a card game where you’d get five cards and the order would be important, but I don’t know many card games.)

For that matter, if you accept those two hands as the same, would you accept “2 of clubs, 3 of clubs, 4 of diamonds, Jack of spades, Queen of spades” as a different hand? The suits are different, yes, but they’re not differently structured: you’re still three cards away from a flush, and two away from a straight. Granted there are some games in which one suit is worth more than another, in which case it matters whether you had two diamonds or two spades; but if you got the two-of-clubs hand just after getting the two-of-hearts hand you’d probably be struck by how weird it was you got the same hand twice in a row. You can’t give a correct answer to the question until you’ve thought about exactly what you mean when you say two hands of cards are different.
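For the record, both readings of the question have tidy answers, which Python's standard library will count for you:

```python
from math import comb, factorial, perm

unordered_hands = comb(52, 5)  # order of the deal ignored: 2,598,960
ordered_hands = perm(52, 5)    # order of the deal matters: 311,875,200

# Each unordered hand can be dealt in 5! = 120 different orders, which
# ties the two counts together.
assert ordered_hands == unordered_hands * factorial(5)
```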

I Know Nothing Of John Venn’s Diagram Work


My Dearly Beloved, the professional philosopher, mentioned after reading the last comics review that one thing to protest in the Too Much Coffee Man strip — showing Venn diagram cartoons and Things That Are Funny as disjoint sets — was that the Venn diagram was drawn wrong. In philosophy, you see, they’re taught to draw a Venn diagram for two sets as two slightly overlapping circles, and then to black out any parts of the diagram which haven’t got any elements. If there are three sets, you draw three overlapping circles of equal size and again black out the parts that are empty.

I granted that this is certainly better form, and indispensable if you don’t know ahead of time which sets, intersections, and unions have any elements in them, but that it was pretty much the default in mathematics to draw the loops that represent sets as not touching if you know the intersection of the sets is empty. That did get me to wondering what the proper way of doing things was, though, and I looked it up. And, indeed, according to MathWorld, I have been doing it wrong for a very long time. Per MathWorld (which is as good a general reference for this sort of thing as I can figure), to draw a Venn diagram reflecting data for N sets, the rules are:

  1. Draw N simple, closed curves on the plane, so that the curves partition the plane into 2^N connected regions.
  2. Have each subset of the N different sets correspond to one and only one region formed by the intersection of the curves.

Partitioning the plane is pretty much exactly what you might imagine from the ordinary English meaning of the word: you divide the plane into parts that are in this group or that group or some other group, with every point in the plane in exactly one of these partitions (or on the border between them). And drawing circles which never touch means that I (and Shannon Wheeler, and many people who draw Venn diagram cartoons) are not doing that first thing right: two circles that have no overlap the way the cartoon shows partition the plane into three pieces, not four.
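Counting the regions is a matter of counting membership patterns: each point of the plane is either in or out of each of the N sets. A short sketch:

```python
from itertools import product

def venn_region_count(n):
    """Regions a proper Venn diagram on n sets must show: one per
    in/out membership pattern, including the region outside every set."""
    return len(list(product((False, True), repeat=n)))

two_sets = venn_region_count(2)    # 4, so two disjoint circles (3 regions) fall short
three_sets = venn_region_count(3)  # 8, the familiar three-circle picture
```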

I can make excuses for my sloppiness. For one, I learned about Venn diagrams in the far distant past and never went back to check I was using them right. For another, the thing I most often do with Venn diagrams is work out probability problems. One approach for figuring out the probability of something happening is to identify the set of all possible outcomes of an experiment — for a much-used example, all the possible numbers that can come up if you throw three fair dice simultaneously — and identify how many of those outcomes are in the set of whatever you’re interested in — say, rolling a nine total, or rolling a prime number, or for something complicated, “rolling a prime number or a nine”. When you’ve done this, if every possible outcome is equally likely, the probability of the outcome you’re interested in is the number of outcomes that satisfy what you’re looking for divided by the number of outcomes possible.

If you get to working that way, then, you might end up writing a list of all the possible outcomes and drawing a big bubble around the outcomes that give you nine, and around the outcomes that give you a prime number, and those aren’t going to touch for the reasons you’d expect. I’m not sure that this approach is properly considered a Venn diagram anymore, though, although I’d introduced it in statistics classes as such and seen it called that in the textbook. There might not be a better name for it, but it is doing violence to the Venn diagram concept and I’ll try to be more careful in future.
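The counting approach is easy to mechanize for the three-dice example; enumerating all 216 outcomes takes no cleverness at all:

```python
from itertools import product

PRIMES = {3, 5, 7, 11, 13, 17}  # the primes a three-dice total can reach

rolls = list(product(range(1, 7), repeat=3))  # all 6^3 = 216 outcomes
favorable = [r for r in rolls if sum(r) == 9 or sum(r) in PRIMES]
probability = len(favorable) / len(rolls)
# Every outcome is equally likely, so the probability is just the count
# of favorable outcomes over the count of possible ones.
```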

The MathWorld page, by the way, provides a couple examples of Venn diagrams for more than three propositions, down towards the bottom of the page. The last one that I can imagine being of any actual use is the starfish shape used to work out five propositions at once. That shows off 32 possible combinations of sets and I can barely imagine finding that useful as a way to visualize the relations between things. There are also representations based on seven sets, which have 128 different combinations, and for 11 propositions, a mind-boggling 2,048 possible combinations. By that point the diagram is no use for visualizing relationships of sets and is simply mathematics as artwork.

Something else I had no idea about is that if you draw the three-circle Venn diagram, and set it so that the intersection of any two circles is at the center of the third, then the innermost intersection is a Reuleaux triangle, one of those oddball shapes that rolls as smoothly as a circle without actually being a circle. (MathWorld has an animated gif showing it rolling so.) This figure, it turns out, is also the base for something called the Henry Watt square drill bit. It can be used as a spinning drill bit to produce a (nearly) square hole, which is again pretty amazing as I make these things out, and which my father will be delighted to know I finally understand or have heard of.

In any case, the philosophy department did better teaching Venn diagrams properly than whatever math teacher I picked them up from did, or at least, my spouse retained the knowledge better than I did.

Reading the Comics, December 12, 2013


It’s a bit of a shame there weren’t quite enough comics to run my little roundup on the 11th of December, for that nice 11/12/13 sequence, but I’m not in charge of arranging these things. For this week’s gathering of mathematically themed comic strips there’s not any deeper theme than that they mention mathematical points, but at least the first couple of them have some real meat to the subject matter. (It feels to me like if one of the gathered comics inspires an essay, it’s usually one of the first couple in a collection. That might indicate that I get tired while writing these out, or it might reflect a biased recollection of when I do break out an essay.)

John Allen’s Nest Heads (December 5) is built around a kid not understanding a probability distribution: how many days in a row does it take to get the chance of snow to be 100 percent? The big flaw here is the supposition that the chance of snow is a (uhm) cumulative thing, so that if the snow didn’t happen yesterday or the day before it’s the more likely to happen today or tomorrow. As we actually use weather forecasts, though, they’re … well, I’m not sure I’d say they’re independent, that yesterday’s 30 percent chance of snow has nothing to do with today’s 25 percent chance, since it seems to me plausible that whether it snowed yesterday affects whether it snows today. But they don’t just add up until we get a 100 percent chance of snow when things start to drop.
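If the daily forecasts really were independent, the right way to combine them is through the chance of no snow at all; a sketch with invented forecast numbers:

```python
from math import prod

daily_chance = [0.30, 0.25, 0.40, 0.20]  # invented forecasts for four days

naive_sum = sum(daily_chance)  # 1.15, which is not a probability at all
no_snow = prod(1 - p for p in daily_chance)
at_least_one_snowy_day = 1 - no_snow
# The combined chance climbs toward, but never reaches, 100 percent so
# long as each day's forecast stays short of certainty.
```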

Continue reading “Reading the Comics, December 12, 2013”

The Intersecting Lines


I haven’t had much chance to sit and think about this, but that’s no reason to keep my readers away from it. Elke Stangl has been pondering a probability problem regarding three intersecting lines on a plane, a spinoff of a physics problem about finding the center of mass of an object by the method of pinning it up from a couple different points and dropping the plumb line. My first impulse, of turning this into a matrix equation, flopped for reasons that became obvious as soon as I worked out a determinant, but that hardly means I’m stuck just yet.

Reading the Comics, November 13, 2013


For this week’s round of comic strips there’s almost a subtler theme than “they mention math in some way”: several have got links to statistical mechanics and the problem of recurrence. I’m not sure what’s gotten into Comic Strip Master Command that they sent out instructions to do comics that I can tie to the astounding interactions of infinities and improbable events, but it makes me wonder if I need to write a few essays about it.

Gene Weingarten, Dan Weingarten, and David Clark’s Barney and Clyde (October 30) summons the classic “infinite monkeys” problem of probability for its punch line. The concept is that something producing strings of letters at random (taken to be monkeys because, I suppose, it’s assumed they would hit every key without knowing what sensibly comes next) would, given enough time, produce any given result. The idea goes back a long way, and it’s blessed with a compelling mental image even though typewriters are a touch old-fashioned these days.

It seems to have gotten its canonical formulation in Émile Borel’s 1913 article “Statistical Mechanics and Irreversibility”, as you might expect since statistical mechanics brings up the curious problem of entropy. In short: every physical interaction, say, when two gases — let’s say clear air and some pink smoke as 1960s TV shows used to knock characters out — mix, is time-reversible. Look at the interaction of one clear-gas molecule and one pink-gas molecule and you can’t tell whether it’s playing forward or backward. But look at the entire room and it’s obvious whether they’re mixing or unmixing. How can something be time-reversible at every step of every interaction but not in whole?

The idea got a second compelling metaphor with Jorge Luis Borges’s Library of Babel, with a bit more literary class and in many printings fewer monkeys.

Continue reading “Reading the Comics, November 13, 2013”

The Big Zero


I want to try re-proving the little point from last time, that the chance of picking one specific number from the range of zero to one is actually zero. This might not seem like a big point but it can be done using a mechanism that turns up in about three-quarters of all the proofs in real analysis, which is probably the most spirit-crushing of courses you take as a mathematics undergraduate, and I like that it can be shown in a way that you can understand without knowing anything more sophisticated than the idea of “less than or equal to”.

So here’s my proposition: that the probability of selecting the number 1/2 from the range of numbers running from zero to one, is zero. This is assuming that you’re equally likely to pick any number. The technique I mean to use, and it’s an almost ubiquitous one, is to show that the probability has to be no smaller than zero, and no greater than zero, and therefore it has to be exactly zero. Very many proofs are done like this, showing that the thing you want can’t be smaller than some number, and can’t be greater than that same number, and we thus prove that it has to be that number.

Showing that the probability of picking exactly 1/2 can’t be smaller than zero is easy: the probability of anything is a number greater than or equal to zero, and less than or equal to one. (A few bright people have tried working out ways to treat probabilities that can be negative numbers, or that can be greater than one, but nobody’s come up with a problem that these approaches solve in a compelling way, and it’s really hard to figure out what a negative probability would mean in the observable world, so we leave the whole idea for someone after us to work out.) That was easy enough.
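For anyone who wants to peek ahead, the whole squeeze can be written in a couple of lines. This is my compressed version of where the argument is headed, with the upper bound coming from the fact that a uniform pick lands in a little interval with probability equal to the interval’s length:

```latex
% For the uniform pick from [0, 1], the chance of landing in an interval
% is the interval's length.  So for every 0 < \epsilon < 1/2:
0 \;\le\; P\left(X = \tfrac{1}{2}\right)
  \;\le\; P\left(\tfrac{1}{2} - \epsilon \le X \le \tfrac{1}{2} + \epsilon\right)
  \;=\; 2\epsilon
% The only number that is at least zero and at most 2\epsilon for every
% positive \epsilon is zero itself, which forces the probability to be zero.
```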

Continue reading “The Big Zero”

Split Lines


My spouse, the professional philosopher, was sharing some of the engagingly wrong student responses. I hope it hasn’t shocked you to learn your instructors do this, but, if you got something wrong in an amusing way, and it was easy to find someone to commiserate with, yes, they said something.

The particular point this time was about Plato’s Analogy of the Divided Line, part of a Socratic dialogue that tries to classify the different kinds of knowledge. I’m not informed enough to describe fairly the point Plato was getting at, but the mathematics is plain enough. It starts with a line segment that gets divided into two unequal parts; each of the two parts is then divided into parts of the same proportion. Why this has to be I’m not sure (my understanding is it’s not clear exactly why Plato thought it important they be unequal parts), although it has got the interesting side effect of making exactly two of the four line segments of equal length.

Continue reading “Split Lines”

Why I Don’t Believe It’s 1/e




The above picture, showing the Leap-the-Dips roller coaster at Lakemont Park before its renovation, kind of answers why despite my neat reasoning and mental calculations I don’t really believe that there’s a chance of something like one in three that any particular board from the roller coaster’s original, 1902, construction is still in place. The picture — from the end of the track, if I’m not mistaken — dates to shortly before the renovation of the roller coaster began in the late 90s. Leap-the-Dips had stood without operating, and almost certainly without maintenance, from 1986 (coincidental to the park’s acquisition by the Boyer Candy company and its temporary renaming as Boyertown USA, in miniature imitation of Hershey Park) to 1998.

The result of this period seems almost to demand replacing every board in the thing. But we don’t know that happened, and after all, surely some boards took it better than others, didn’t they? Not every board was equally exposed to the elements, or to vandalism, or to whatever does smash up wood. And there’s a lot of pieces of wood that go into a wooden roller coaster. Surely some were lucky by virtue of being in the right spot?

Continue reading “Why I Don’t Believe It’s 1/e”

Why I Say 1/e About This Roller Coaster


The Leap-The-Dips at Lakemont Park, Altoona, Pennsylvania, as photographed by Joseph Nebus in July 2013 from the edge of the launch platform.

So in my head I worked out an estimate of about one in three that any particular board would have remained from the Leap-The-Dips’ original, 1902, configuration, even though I didn’t really believe it. Here’s how I got that figure.

First, you have to take a guess as to how likely it is that any board is going to be replaced in any particular stretch of time. Guessing that one percent of boards need replacing per year sounded plausible, what with how neatly a chance of one-in-a-hundred fits with our base ten numbering system, and how it’s been about a hundred years in operation. So any particular board would have about a 99 percent chance of making it through any particular year. If we suppose that the chance of a board making it through the year is independent — it doesn’t change with the board’s age, or the condition of neighboring boards, or anything but the fact that a year has passed — then the chance of any particular board lasting a hundred years is going to be 0.99^{100} . That takes a little thought to work out if you haven’t got a calculator on hand.
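For the skeptical or calculator-less, here’s the computation in a couple lines of Python, along with the reason the answer lands near 1/e: a 99 percent chance repeated a hundred times is an instance of the (1 − 1/n)^n pattern, which converges to 1/e as n grows.

```python
import math

# Chance one board survives one year, under the one-percent-a-year guess:
survival_per_year = 0.99

# Chance it survives a century of independent years:
survival = survival_per_year ** 100
print(survival)      # about 0.366
print(1 / math.e)    # about 0.368, which is why 1/e is the headline answer
```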

Continue reading “Why I Say 1/e About This Roller Coaster”

Just Answer 1/e Whenever Anyone Asks This Kind Of Question


I recently had the chance to ride the Leap-the-Dips at Lakemont Park (Altoona, Pennsylvania), the world’s oldest operating roller coaster. The statistics of this 1902-vintage roller coaster might not sound impressive, as it has a maximum height of about forty feet and a greatest drop of about nine feet, but it gets rather more exciting when you consider that the roller coaster car hasn’t got any seat belts or lap bar or other restraints (just a bar you can grab onto if you so choose), and that the ride was built before the invention of upstop wheels, the wheels that actually go underneath the track and keep roller coaster cars from jumping off. At each of the dips, yes, the car does jump up and off the track, and the car just keeps accelerating the whole ride. (Side boards ensure that once the car jumps off the tracks it falls back into place.) It’s worth the visit.

Looking at the wonderful mesh of wood that makes up a classic roller coaster like this inspired the question: could any of it be original? What’s the chance that any board in it has lasted the hundred-plus years of the roller coaster’s life (including a twelve-year stretch when the ride was not running, a state which usually means routine maintenance is being skipped and which just destroys amusement park rides)? Taking some reasonable guesses about the replacement rate per year, and a quite unreasonable guess about replacement procedure, I worked out my guess, given in the subject line above, and I figure to come back and explain where that all came from.

And The $64 Question Was …


I ran across something interesting — I always do, but this was something I wasn’t looking for — in John Dunning’s On The Air: The Encyclopedia of Old-Time Radio, which is about exactly what it says. The interesting bit is in the entry for the quiz show Take It Or Leave It, which asked questions worth amounts doubling all the way to $64, and which evolved into the quiz shows The $64 Question and, eventually, The $64,000 Question. Says Dunning:

Researcher Edith Oliver tried to increase the difficulty with each step, but it was widely believed that the $32 question was the toughest. Perhaps that’s why 75 percent of contestants who got that far decided to go all the way, though only 20 percent of those won the $64.

I am a bit skeptical of those percentages, because they look too much to me like someone, probably for a press release, said something like “three out of four contestants go all the way” and it got turned into a percentage because of the hypnotic lure that decimal digits have on people. However, I can accept that the producers would have a pretty good idea how likely it was a contestant who won $32 would decide to go for the jackpot, rather than take the winnings and go safely home, since that’s information indispensable to making out the show’s budget. I’m a little surprised the final question might have a success rate of only one in five, but then, this is the program that launched the taunting cry “You’ll be sorrrreeeeee” into many cartoons that baffled kids born a generation after the show went off the air (December 1951, in the original incarnation).
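Taking Dunning’s percentages at face value suggests a little expected-value check. I’m assuming here that a contestant who misses the $64 question goes home with nothing, which is the part I’d want to verify against the show’s actual rules before trusting the conclusion:

```python
# Expected winnings for a contestant holding $32, with a 20 percent
# chance of answering the final question, assuming a miss forfeits
# everything (an assumption about the rules, not a checked fact).
p_win = 0.20
go_on = p_win * 64 + (1 - p_win) * 0   # risk it all on the $64 question
stay = 32.0                             # take the $32 and go home
print(go_on, stay)   # 12.8 versus 32.0
```

If those figures are right, the 75 percent of contestants who went on were taking a strikingly bad bet, which is itself an interesting bit of psychology.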

It strikes me that topics like how many contestants go on for bigger prizes, and how many win, could be used to produce a series of word problems grounded in a plausible background, at least if the kids learning probability and statistics these days even remember Who Wants To Be A Millionaire is still technically running. (Check your local listings!) Sensible questions could include how likely it is any given contestant would go on to the million-dollar question, how many questions the average contestant answers successfully, and — if you include an estimate for how long the average question takes to answer — how many contestants and questions the show is going to need to fill a day or a week or a month’s time.

Distribution of the batting order slot that ends a baseball game


The God Plays Dice blog has a nice piece attempting to model a baseball question. Baseball is wonderful for all kinds of mathematics questions, partly because the game has since its creation kept data about the plays made, partly because the game breaks its action neatly into discrete units with well-defined outcomes.

Here, Dr Michael Lugo ponders whether games are more likely to end at any particular spot in the batting order. Lugo points out that certainly we could just count where games actually end, since baseball records are enough to make an estimate from that route possible. But that’s tedious, and it’s easier to work out a simple model and see what that suggests. Lugo also uses the number of perfect games as a test of whether the model is remotely plausible, and a test like this — a simple check of whether the scheme could possibly tell us something meaningful — is worth doing whenever one builds a model of something interesting.
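A crude version of the sort of model Lugo describes can be sketched in a few lines. The numbers here are my own round guesses — a fixed 70 percent chance each plate appearance ends in an out, 27 outs to a game, no extra innings or walk-offs — so this shows the shape of the idea rather than his actual calculation:

```python
import random

# Crude model: each plate appearance ends in an out with fixed
# probability, a team bats until 27 outs, and we ask which slot in the
# batting order (the plate-appearance count mod 9) takes the last out.
# The 0.7 out probability is a round guess of my own.

def last_slot(p_out=0.7, outs_needed=27, rng=random):
    pa = 0
    outs = 0
    while outs < outs_needed:
        pa += 1
        if rng.random() < p_out:
            outs += 1
    return pa % 9

random.seed(42)
trials = 90_000
counts = [0] * 9
for _ in range(trials):
    counts[last_slot()] += 1

# Each slot should come up roughly one time in nine.
print([round(c / trials, 3) for c in counts])
```

The shares come out roughly, though not exactly, one-ninth apiece, which is precisely the kind of thing the model is for checking.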

God plays dice

Tom Tango, while writing about lineup construction in baseball, pointed out that batters batting closer to the top of the batting order have a greater chance of setting records that are based on counting something – for example, Chris Davis’ chase for 62 home runs. (It’s interesting that enough people see Roger Maris’ 61 as the “real” record that 62 is a big deal.) He observes that over a 162-game season, each slot further down in the batting order (of 9) means 18 fewer plate appearances.

Implicitly this means that every slot in the batting order is equally likely to end the game — that is, that the number of plate appearances for a team in a game, mod 9, is uniformly distributed over {0, 1, …, 8}.

Can we check this? There are two ways to check it:

  • 1. find the number of plate appearances in every game…

View original post 652 more words

Monopoly Chances


While the whole world surely heard about it before, I just today ran across a web page purporting to give the probabilities and expected incomes for the various squares on a Monopoly board. There are many similar versions of this table around — the Monopoly app for iPad even offers the probability that your opponents will land on any given square in the next turn, which is superlatively useful if you want to micromanage your building — and I wouldn’t be surprised if there are little variations and differences between tables.

What’s interesting to me is that the author, Truman Collins, works out the answers by two different models, and considers the results to probably be fairly close to correct because the different models of the game agree fairly well. There are some important programming differences between Collins’s two models (both of which are shown, as code written in C, though it won’t compile on your system without a lot of irritating extra work), but the one that’s most obvious is that in one model the effect of being tossed into jail after rolling three doubles in a row is modelled, while in the other it’s ignored.

Does this matter? Well, it matters a bit, since one is closer to the true game than the other, but at the cost of making a more complicated simulation, which is the normal sort of trade-off someone building a model has to make. Any simulation simplifies the thing being modelled, and a rule like the jail-on-three-doubles might be too much bother for the improvement in accuracy it offers.
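Here’s a stripped-down sketch of that modelling choice, with nearly everything but movement thrown away: a 40-square board, square 30 sending you to jail at square 10, doubles rolling again, and — in one variant only — three doubles in a row sending you to jail. Chance and Community Chest cards are ignored, which Collins’s real models do not do, so treat the numbers as illustrating the comparison rather than the game:

```python
import random

# Where does a turn end, with and without the three-doubles-to-jail
# rule?  Board: 40 squares, square 30 is Go To Jail, jail is square 10.
# Cards and jail-exit strategy are deliberately left out of this sketch.

def simulate(turns, three_doubles_rule, seed=0):
    rng = random.Random(seed)
    counts = [0] * 40
    pos = 0
    for _ in range(turns):
        doubles = 0
        while True:
            d1, d2 = rng.randint(1, 6), rng.randint(1, 6)
            if d1 == d2:
                doubles += 1
                if three_doubles_rule and doubles == 3:
                    pos = 10          # straight to jail, without moving
                    break
            pos = (pos + d1 + d2) % 40
            if pos == 30:             # Go To Jail square ends the turn
                pos = 10
                break
            if d1 != d2:
                break                 # doubles roll again, otherwise done
        counts[pos] += 1
    return [c / turns for c in counts]

with_rule = simulate(200_000, True)
without = simulate(200_000, False)
print(with_rule[10], without[10])   # jail's share of turn-endings
```

The jail square’s share of turn-endings comes out a bit higher with the three-doubles rule, which is the sort of small difference the trade-off weighs: a bit more fidelity for a bit more code.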

Here’s another thing to decide in building the model: when you land in jail, you can either pay a $50 fine and get out immediately, or can try to roll doubles. If there are a lot of properties bought by your opponents, sitting in jail (as the rolling-doubles method implies) can be better, as it reduces the chance you have to pay rent to someone else. That’s likely the state in the later part of the game. If there are a lot of unclaimed properties, you want to get out and buy stuff. Collins simulates this by supposing that in the early game one buys one’s way out, and in the late game one rolls for doubles. But even that’s a simplification: suppose you owned much of the sides of the board after jail. (You’re likely crushing me, in that case.) Why not get out and get around toward Go that much sooner, as long as it’s not likely to cost you?

That Collins tries different models and gets similar results suggests that these estimates are tolerably close to right, and often, that’s the best one can really know about how well a model of a complicated thing represents the reality.

Probability Spaces and Plain English


Lucas Wilkins over on the blog Jellymatter writes an article which starts from a grand old point: being annoyed by something on Wikipedia. In this case, it’s Wikipedia’s entry on the axioms of probability, which, like many Wikipedia entries on mathematical subjects is precise, correct, and useless.

Why useless? Because while the entry does draw from a nice, logically rigorous introduction to the way probability is defined, it’s done by way of measure theory, a mildly exotic field of mathematics — I didn’t get my toes wet in it until my senior year as a math major, and didn’t do any serious work with it until grad school — for a subject, probability, that an eight-year-old could reasonably be expected to study. (Measure theory gets called in for a number of tasks; in my grad school career, its biggest job was rebuilding integral calculus, compared to what I’d learned in high school and as an undergraduate, for greater analytic power. So, yes, calculus can be done harder.)

Wilkins goes on to explain the same topic but in plain English, to what seems to me great effect, including an introduction to measure theory that won’t make Wikipedia’s precise-but-curt definition make sense, but will leave someone better-prepared to read it.

Solving The Price Is Right’s “Any Number” Game


A friend who’s also into The Price Is Right claimed to have noticed something peculiar about the “Any Number” game. Let me give context before the peculiarity.

This pricing game is the show’s oldest — it was actually the first one played when the current series began in 1972, and also the first pricing game won — and it’s got a wonderful simplicity: four digits from the price of a car (the first digit, nearly invariably a 1 or a 2, is given to the contestant and not part of the game), three digits from the price of a decent but mid-range prize, and three digits from a “piggy bank” worth up to $9.87 are concealed. The contestant guesses digits from zero through nine inclusive, and they’re revealed in the three prices. The contestant wins whichever prize has its price fully revealed first. This is a steadily popular game, and one of the rare Price games which guarantees the contestant wins something.

A couple things probably stand out. The first is that if you’re very lucky (or unlucky) you can win with as few as three digits called, although it might be the piggy bank for a measly twelve cents. (Past producers have said they’d never let the piggy bank hold less than $1.02, which still qualifies as “technically something”.) The other is that no matter how bad you are, you can’t take more than eight digits to win something, though it might still be the piggy bank.

What my friend claimed to notice was that these “Any Number” games went on to the last possible digit “all the time”, and he wanted to know, why?

My first reaction was: “all” the time? Well, at least it happened an awful lot of the time. But I couldn’t think of a particular reason that they should so often take the full eight digits needed, or whether they actually did; it’s extremely easy to fool yourself about how often events happen when there’s a complicated set of possible events. But stipulating that eight digits were often needed, then, why should they be needed? (For that matter, trusting the game not to be rigged — and United States televised game shows are by legend extremely sensitive to charges of rigging — how could they be needed?) Could I explain why this happened? And he asked again, enough times that I got curious myself.
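One way to test the “all the time” impression is to simulate the game under a simple model: each of the digits zero through nine hidden exactly once among the ten slots, and a contestant calling digits in a uniformly random order (which real contestants, armed with intuitions about prices, surely don’t do). This is my sketch, not anything the show publishes:

```python
import random
from collections import Counter

# "Any Number" as a permutation problem: ten hidden slots hold the
# digits 0-9 once each -- four for the car, three for the smaller prize,
# three for the piggy bank.  The game ends at the first call that
# completes one of the three groups.

def game_length(rng):
    car, prize, bank = set(range(4)), set(range(4, 7)), set(range(7, 10))
    order = list(range(10))
    rng.shuffle(order)        # the contestant's (random) calling order
    called = set()
    for n, digit in enumerate(order, start=1):
        called.add(digit)
        if car <= called or prize <= called or bank <= called:
            return n

rng = random.Random(2014)
lengths = Counter(game_length(rng) for _ in range(100_000))
for n in range(3, 9):
    print(n, lengths[n] / 100_000)
```

Under that model the game needs the full eight digits about 30 percent of the time — often, but a long way from always, which says something about how memorable the drawn-out games are.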

Continue reading “Solving The Price Is Right’s “Any Number” Game”

Reading the Comics, April 28, 2013


The flow of mathematics-themed comic strips almost dried up in April. I’m going to assume this reflects the kids of the cartoonists being on Spring Break, and teachers not giving exams immediately after the break, in early to mid-March, and that we were just seeing the lag from that. I’m joking a little bit, but surely there’s some explanation for the rash of did-you-get-your-taxes-done comics appearing two weeks after April 15, and I’m fairly sure it isn’t the high regard United States newspaper syndicates have for their Canadian readership.

Dave Whamond’s Reality Check (April 8) uses the “infinity” symbol and tossed pizza dough together. The ∞ symbol, I understand, is credited to the English mathematician John Wallis, who introduced it in the Treatise on the Conic Sections, a text that made clearer how conic sections could be described algebraically. Wikipedia claims that Wallis had the idea that negative numbers were, rather than less than zero, actually greater than infinity, which we’d regard as a quirky interpretation, but (if I can verify it) it makes for an interesting point in figuring out how long people took to understand negative numbers like we believe we do today.

Jonathan Lemon’s Rabbits Against Magic (April 9) does a playing-the-odds joke, in this case in the efficiency of alligator repellent. The joke in this sort of thing comes to the assumption of independence of events — whether the chance that a thing works this time is the same as the chance of it working last time — and a bit of the idea that you find the probability of something working by trying it many times and counting the successes. Trusting in the Law of Large Numbers (and the independence of the events), this empirically-generated probability can be expected to match the actual probability, once you spend enough time thinking about what you mean by a term like “the actual probability”.
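A quick demonstration of that empirical approach, with a made-up repellent that genuinely works 99 percent of the time: count successes over more and more trials and watch the estimate settle toward the true value, which is the Law of Large Numbers doing its job.

```python
import random

# Estimate a probability by trying the thing many times and counting
# successes.  The "true" 0.99 here is invented for the demonstration.
rng = random.Random(7)
true_p = 0.99
for trials in (10, 1_000, 100_000):
    successes = sum(rng.random() < true_p for _ in range(trials))
    print(trials, successes / trials)
```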

Continue reading “Reading the Comics, April 28, 2013”

Reblog: A quick guide to non-transitive Grime Dice


The bayesianbiologist blog here has an entry just about a special set of dice which allow for an intransitive game. Intransitivity is a neat little property, maybe most familiar from the rock-paper-scissors game, and it’s a property that sneaks into many practical applications, voting preferences among the most interesting of them.

bayesianbiologist

A very special package that I am rather excited about arrived in the mail recently. The package contained a set of 6-sided dice. These dice, however, don’t have the standard numbers one to six on their faces. Instead, they have assorted numbers between zero and nine. Here’s the exact configuration:

Aside from maybe making for a more interesting version of snakes and ladders, why the heck am I so excited about these wacky dice? To find out what makes them so interesting, lets start by just rolling one against another and seeing which one rolls the higher number. Simple enough. Lets roll Red against Blue. Until you get your own set, you can roll in silico.

That was fun. We can do it over and over again and we’ll find that Red beats Blue more often than not. So it seems like Red is a pretty good…

View original post 485 more words

Reblog: Parrondo’s Paradox


Ad Nihil here presents an interesting-looking game demonstrating something I hadn’t heard of before, Parrondo’s Paradox, which apparently is a phenomenon in which a combination of losing strategies becomes a winning strategy. I do want to think about this more, so I offer the link to that blog’s report so that I hopefully will go back and consider it more when I’m able.
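Since I can’t resist poking at it a little now, here’s the standard coin-flipping formulation of Parrondo’s paradox — a different presentation from the spinning wheels in the excerpt below, but the same phenomenon. Game A is a coin very slightly biased against you; game B uses a terrible coin when your capital is a multiple of three and a favorable one otherwise. Each game played alone loses on average, and picking between them at random wins:

```python
import random

# The classic capital-dependent Parrondo games, with the usual bias
# parameter epsilon = 0.005.
EPS = 0.005

def play(choose_game, plays, rng):
    capital = 0
    for _ in range(plays):
        if choose_game(rng) == "A":
            p = 0.5 - EPS              # game A: slightly unfair coin
        elif capital % 3 == 0:
            p = 0.1 - EPS              # game B, bad coin at multiples of 3
        else:
            p = 0.75 - EPS             # game B, good coin otherwise
        capital += 1 if rng.random() < p else -1
    return capital

N = 200_000
a_only = play(lambda r: "A", N, random.Random(1))
b_only = play(lambda r: "B", N, random.Random(2))
mixed = play(lambda r: r.choice("AB"), N, random.Random(3))
print(a_only, b_only, mixed)   # A and B drift down; the mixture drifts up
```

The trick is that mixing in game A changes how often your capital sits at a multiple of three, which changes how often game B’s good coin gets used.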

ad nihil

My inspiration with my daughter’s 8th grade probability problems continues. In a previous post I worked on a hypothetical story of monitoring all communications for security with a Bayesian analysis approach. This time when I saw the spinning wheel problems in her text book, I was yet again inspired to create a game system to demonstrate Parrondo’s Paradox.

“Parrondo’s paradox, a paradox in game theory, has been described as: A combination of losing strategies becomes a winning strategy. It is named after its creator, Juan Parrondo, who discovered the paradox in 1996.” – Wikipedia.org

Simply put, with certain (not all) combinations, you may create an overall winning strategy by playing different losing scenarios alternatively in the long run. Here’s the game system I came up with this (simpler than the original I believe):

Let’s imagine a spinning wheel like below, divided into eight equal parts with 6 parts red…

View original post 230 more words

Reading the Comics, February 26, 2013


I hit the seven-comics line without quite realizing it, because I’d been dividing my notes between my home computer and one I can access from work. (I haven’t quite taken to writing entries on my iPad, much as I told myself that’d be a great use for it before I bought it, mostly because it’s too annoying to enter all the HTML tags by hand on the iPad keyboard. I’m of the generation that tries to hew its own HTML, even when there’s no benefit to doing that.) This is also skipping a couple strips that just mentioned the kids were in math class because that felt too slight a link to even me.

Carla Ventresca and Henry Beckett’s On A Claire Day (February 15) discusses the “probability formulas” of a box of chocolates. Distribution functions are just what the name suggests: the set of the possible outcomes of something (like, picking this candy) with the chance of each turning up. It’s useful in simple random-luck problems like gathering candies, but by adding probability distributions to mechanics you create the tool of statistical mechanics, which lets the messy, complicated reality of things be treated mathematically.

Pascal Wyse and Joe Berger’s Berger and Wyse (February 18) uses one of the classic motifs of the word problem: fractions as portions of apples, and visualizing fractions by thinking of apple slices. (I tend to eat apples whole, or at least nearly whole, which makes me realize that I probably visualize fractions of apples as a particular instance of fractions rather than as particular versions of apples.)

Chip Sansom’s The Born Loser (February 21) just shows off Roman numerals and makes fun of the fact they can be misunderstood. But then what can’t?

Tom Thaves’s Frank and Ernest (February 22) uses the tolerably famous bit of mathematical history about negative numbers being unknown to the ancients and tosses in a joke about the current crisis in the Greek economy so, as ever, don’t read the comments thread.

William Wilson’s Ordinary Bill (February 22) possibly qualifies for entry into the “silent penultimate panel” family of comic strips (I feel like having significant implied developments in the next-to-the-final panel violates the spirit of the thing but it isn’t my category to define) for a joke about how complicated it is to do one’s taxes. I suspect this is something that’s going to turn up a lot in the coming two months.

Marc Anderson’s Andertoons (February 24) (I’m wondering whether this or Frank and Ernest gets in here more) pops in with a chalkboard full of math symbols as the way to draw “something incredibly hard to understand”.

Brian and Ron Boychuk’s The Chuckle Brothers (February 26) has a pie joke that’s so slight I’d almost think they were just angling for the chance for me to notice them. But the name-dropping of the Helsinki Mathematical Institute, and earlier strips with features like references such as to Joseph Henry, make me suspect they’re just enjoying being moderately nerdy. That said, I’m not aware of a specific “Helsinki Mathematical Institute”, although the Rolf Nevanlinna Institute at the University of Helsinki would probably get called something like that. They wouldn’t consider hiring me, anyway.

Meteors and Money Management


I had probably heard of Wethersfield, Connecticut, at some point, though I’d forgotten about it until teaching a statistics course last academic year. The town vanished from my memory shortly thereafter, because as far as I know I’ve never been there or known anyone who had. The rather exciting meteor strike in Russia last week brought it back to mind, though, because the town worked its way into a probability book I was using for reference.

Here’s the setup: the town is about 14 square miles in area, out of something like 200,000,000 square miles of land and water on the surface of the Earth. Something like three meteors of appreciable size strike the surface of the Earth, somewhere, every day. Suppose that every spot on the planet is equally likely to get a meteor strike. So, what’s the probability that Wethersfield should get struck in any one year?
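The back-of-the-envelope version of the arithmetic runs like this — the figures are the round ones above, not precise geography:

```python
# If strikes land uniformly at random, the chance any single meteor hits
# Wethersfield is the town's share of the Earth's surface area.
town_area = 14             # square miles
earth_area = 200_000_000   # square miles, rounded as above
meteors_per_year = 3 * 365

p_per_meteor = town_area / earth_area
p_per_year = 1 - (1 - p_per_meteor) ** meteors_per_year
print(p_per_year)   # about 7.7e-5, or once in roughly 13,000 years
```

That comes out to about a one-in-thirteen-thousand chance per year, which is small enough to make you wonder what the book wanted with Wethersfield in particular.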

Continue reading “Meteors and Money Management”

Can Rex Morgan Be Made Plausible?


'And ... My batting average of predicting the [unborn child's] sex is 97% accurate!'

The comic strip Rex Morgan, MD, put up an interesting bit of nonsense in its current ridiculous story. (Rex and June are investigating a condo where nobody’s been paying rent; the residents haven’t because everyone living there is strippers who’re raising money for a cancer-stricken compatriot; the details are dopier, and much more slowly told, than this makes it sound.) But on the 7th of this month it put up one of those things that caught my eye. Never mind the claim that Delores here (the cancer-stricken woman) puts up about being able to sense pregnancy. She claims she can predict the sex of the unborn child with 97 percent accuracy.

Is that plausible? Well, she may be just making the number up, since putting a decimal point or a percent sign on a claim carries connotations of “only a fool would dare question me” similar to those of holding a clipboard and glaring at it while walking purposefully around. If she’s doing ordinary human-style rounding off, that could mean that she’s guessed five of six pregnancies correctly. I could believe a person thinking that makes her 97 percent accurate, but I wouldn’t be convinced by the claim and I doubt you would either.

So here’s a little recreational puzzle for you: how many pregnancies would Delores have to have predicted, and how many called accurately, for the claim of 97 percent accuracy to be hard to dismiss? How many until it isn’t clearly just luck or a small sample size?
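For anyone who wants to play with the puzzle numerically, here’s the binomial tail that measures how easily luck explains a record, supposing Delores were just flipping a mental coin on each pregnancy. The particular pairs of numbers tried are my examples, not anything from the strip:

```python
from math import comb

# P(at least k successes out of n tries), each try succeeding with
# probability p -- the chance sheer luck does at least this well.
def tail(n, k, p=0.5):
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

print(tail(6, 5))     # 5-of-6: luck manages this about 11% of the time
print(tail(33, 32))   # 32-of-33 (97%): about 4 in a billion by luck
```

Five right out of six is the sort of thing luck produces better than a tenth of the time; thirty-two out of thirty-three essentially never is. Somewhere between those sample sizes the claim stops being easy to dismiss.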

What Is The Most Common Jeopardy! Response?


Happy New Year!

I want to bring a pretty remarkable project to people’s attention. Dan Slimmon here has taken the archive of Jeopardy! responses (you know, the answers, only the ones given in the form of a question) from the whole Jeopardy! fan archive, http://www.j-archive.com, and analyzed them. He was interested not just in the most common response — which turns out to be “What is Australia?” — but in the expectation value of the responses.

Expectation value I’ve talked about before, and for that matter, so has everyone mentioning probability or statistics. Slimmon works out approximately what the expectation value would be for each response. That is, imagine this: if you ignored the clue on the board entirely and, to every clue, either responded absolutely nothing or else responded “What is Australia?”, some of the time you’d be right, and you’d get whatever that clue was worth. How much would you expect to get if you just guessed that answer? Responses that turn up often, such as “Australia”, or that turn up more often in higher-value squares, are worth more. Responses that turn up rarely, or only in low-value squares, have a lower expectation value.
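The calculation itself is simple once the data is in hand. Here’s the idea with entirely made-up numbers — the J! Archive data, not this toy list, is what Slimmon actually used:

```python
# Sketch of the expectation-value idea with invented figures.  Suppose
# "What is Australia?" had been the correct response to clues with these
# dollar values, out of a (hypothetical) pool of 1000 clues:
australia_clues = [200, 400, 400, 800, 1000, 1000, 2000]
total_clues = 1000

# Responding "What is Australia?" whenever it's right and staying silent
# otherwise (so wrong guesses cost nothing), the expected take per clue
# seen is:
expected_value = sum(australia_clues) / total_clues
print(expected_value)   # $5.80 per clue, for this made-up data
```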

Slimmon goes on to list, based on his data, what the 1000 most frequent Jeopardy! responses are, and what the 1000 responses with the highest expectation value are. I’m so delighted to discover this work that I want to bring folks’ attention to it. (I do have a reservation about his calculations, but I need some time to convince myself that I understand exactly his calculation, and my reservation, before I bother anyone with it.)

The comments at his page include a discussion of a technical point about the expectation value calculation, with an interesting point about the approximations often useful, or inevitable, in this kind of work; but it would take a separate essay to explain that properly, and I haven’t the time for it just today.

[ Edit: I initially misunderstood Slimmon’s method and have amended the article to reflect the calculation’s details. Specifically I misunderstood him at first to have calculated the expectation value of giving a particular response, and either having it be right or wrong. Slimmon assumed that one would either give the response or not at all; getting the answer wrong costs the contestant money and so has a negative value, while not answering has no value. ]

Reading The Comics, December 28, 2012


As per my declaration that I’d do these reviews when I had about seven to ten comics to show off, I’m entering another in the string of mathematics-touching comic strip summaries. Unless the last two days of the year are a bumper crop, this finishes out 2012 in the comics, and I hope to see everyone in the new year.

Continue reading “Reading The Comics, December 28, 2012”