While I continue to wait for time and muse and energy and inspiration to write fresh material, let me share another old piece. This bit from a decade ago examines statistical quirks in The Price Is Right. Game shows offer a lot of material for probability questions. The specific numbers have changed since this was posted, but the substance hasn't. I got a bunch of essays out of one odd incident mentioned once on the show, so let me do something useful with that now.

To the serious game show fans: Yes, I am aware that the “Item Up For Bid” is properly called the “One-Bid”. I am writing for a popular audience. (The name “One-Bid” comes from the original, 1950s, run of the show, when the game was entirely about bidding for prizes. A prize might have several rounds of bidding, or might have just the one, and that format is the one used for the Item Up For Bid for the current, 1972-present, show.)

Putting together links to all my essays about trapezoid areas made me realize I also had a string of articles examining that problem of The Price Is Right, with Drew Carey's claim that only once in the show's history had all six contestants who won the Item Up For Bid come from the same seat in Contestants' Row. As with the trapezoid pieces they form a more or less coherent whole, so let me make it easy for people searching the web for the likelihood of clean sweeps or of perfect games on The Price Is Right to find my thoughts.

One natural question is: does the order matter? Are you better off going first, second, or third? Contestants don’t get to choose order; they’re ranked by how much they’ve won on the show already. (I believe this includes the value of their One-Bids, the item-up-for-bid that gets them on stage. This lets them rank contestants when all three lost their pricing games.) The first contestant always has a choice of whether to spin once or twice. The second and third contestants don’t necessarily get to choose what to do. Is that an advantage or a disadvantage?

In this paper, published in 2002, Tenorio and Cason look at the game-theoretic logic, and compare it to how people actually play the game, on the show and in laboratory experiments. (The advantage of laboratory experiments, besides letting you run more than two a day, is that participants' behavior won't be thrown off by the thought of winning a thousand or more dollars for a good spin.) They also look some at how the psychology of risk affects people's play.

(I'm compelled — literally, I can't help myself — to note they make some terminology errors. They mislabel the Showcase Showdown as the bit at the end of the show, where two contestants put up bids for showcases. It's a common mistake, and probably reflects that "showdown" has connotations of being one-on-one. But that segment is simply the Showcase Round. The Showcase Showdown is the spinning-the-big-wheel part.)

Their research, anyway, suggests that if every contestant played perfectly — achieving a “Nash equilibrium”, in which nobody can pick a better strategy given the choices other players make — going later does, indeed, give a slight advantage. The first contestant would win about 31% of the time, the second about 33%, and the third about 36% of the time. In watching the show to see what happens they found the first contestant won about 30% of the time, the second about 34%, and the third about 36% of the time. That’s no big difference.

The article includes more fascinating statistical breakdowns, answering questions such as "are spins on the wheel uniformly distributed?" That is, are you as likely to spin $1.00 on the first spin as you are to spin $0.05? Or $0.50? They have records of what people actually do. And they work out what prize payouts would be expected from theoretically perfect play, and how those compare to actual play.

The paper is written for an academic audience, particularly one versed in game theory. If you are somehow not, it can be tough going. It’s all right to let your eye zip past a paragraph of jargon, or of calculations, to get back to the parts that read as English. Real mathematicians do that too, as a way of understanding the point. They can come back around later to learn how the authors got to the point.

Their plan was to liven up the discussion of some of Deep Space Nine's episodes by recording their reviews while drinking a lot. The plan was: for the fifteen episodes they had left in the season, there would be a one-in-fifteen chance of doing any particular episode drunk. So how many drunk episodes would you expect to get, on this basis?

It’s a well-formed expectation value problem. There could be as few as zero or as many as fifteen, but some cases are more likely than others. Each episode could be recorded drunk or not-drunk. There’s an equal chance of each episode being recorded drunk. Whether one episode is drunk or not doesn’t depend on whether the one before was, and doesn’t affect whether the next one is. (I’ll come back to this.)

The most likely case was for there to be one drunk episode. The probability of exactly one drunk episode was a little over 38%. No drunk episodes was also a likely outcome. There was a better than 35% chance it would never have turned up. The chance of exactly two drunk episodes was about 19%. Three drunk episodes had a slightly less than 6% chance of happening. Four drunk episodes had a slightly more than 1% chance of happening. And after that you get into the deeply unlikely cases.

As the Deep Space Nine season turned out, this one-in-fifteen chance came up twice. It turned out they sort of did three drunk episodes, though. One of the drunk episodes turned out to be the first of two they planned to record that day. I’m not sure why they didn’t just swap what episode they recorded first, but I trust they had logistical reasons. As often happens with probability questions, the independence of events — whether a success for one affects the outcome of another — changes calculations.

There’s not going to be a second-season update to this. They’ve chosen to make a more elaborate recording game of things. They’ve set up a modified Snakes and Ladders type board with a handful of spots marked for stunts. Some sound like fun, such as recording without taking any notes about the episode. Some are, yes, drinking episodes. But this is all a very different and more complicated thing to project. If I were going to tackle that it’d probably be by running a bunch of simulations and taking averages from that.

Also I trust they’ve been warned about the episode where Quark has a sex change so he can meet a top Ferengi soda magnate after accidentally giving his mother a heart attack because gads but that was a thing that happened somehow.

Among my entertainments is listening to the Greatest Generation podcast, hosted by Benjamin Ahr Harrison and Adam Pranica. They recently finished reviewing all the Star Trek: The Next Generation episodes, and have started Deep Space Nine. To add some fun and risk to episode podcasts the hosts proposed to record some episodes while drinking heavily. I am not a fan of recreational over-drinking, but I understand their feelings. There's an episode where Quark has a sex-change operation because he gave his mother a heart attack right before a politically charged meeting with a leading Ferengi soda executive. Nobody should face that mess sober.

At the end of the episode reviewing “Babel”, Harrison proposed: there’s 15 episodes left in the season. Use a random number generator to pick a number from 1 to 15; if it’s one, they do the next episode (“Captive Pursuit”) drunk. And it was; what are the odds? One in fifteen. I just said.

The question: how many episodes would they be doing drunk? As they discussed in the next episode, this rule would imply they'd always get smashed for the last episode of the season. This is a straightforward expectation-value problem. The expectation value of a thing is the sum of all the possible outcomes times the chance of each outcome. Here, each possible outcome adds 1 to the number of drunk episodes. The chance of any particular episode being a drunk episode is 1 divided by 'N', if 'N' is the number of episodes remaining. So the next-to-the-last episode has 1 chance in 2 of being drunk. The one before that has 1 chance in 3 of being drunk. And so on.

This expectation value isn't hard to calculate. If we start counting from the last episode of the season, then it's easy. Add up 1 + 1/2 + 1/3 + 1/4 and so on, ending when we get up to one divided by the number of episodes in the season: 25 or 26, for most seasons of Deep Space Nine; 15, from where they started counting here. This is the start of the harmonic series.

The harmonic series gets taught in sequences and series in calculus because it does some neat stuff if you let it go on forever. For example, every term in this sequence gets smaller and smaller. (The "sequence" is the terms that go into the sum: 1, 1/2, 1/3, 1/4, and so on. The "series" is the sum of a sequence, a single number. I agree it seems weird to call a "series" that sum, but it's the word we're stuck with. If it helps, consider: when we talk about "a TV series" we usually mean the whole body of work, not individual episodes.) You can pick any number, however tiny you like. I can then respond with the last term in the sequence bigger than your number. Infinitely many terms in the sequence will be smaller than your pick. And yet: you can pick any number you like, however big. And I can take a finite number of terms in this sequence to make a sum bigger than whatever number you liked. The sum will eventually be bigger than 10, bigger than 100, bigger than a googolplex. These two facts are easy to prove, but they seem like they ought to be contradictory. You can see why infinite series are fun and produce much screaming on the part of students.

No Star Trek season has infinitely many episodes, though, however long the second season of Enterprise seemed to drag out. So we don't have to worry about infinitely many drunk episodes.

Since there were 15 episodes up for drunkenness in the first season of Deep Space Nine the calculation's easy. I still did it on the computer. For the first season we could expect 1 + 1/2 + 1/3 + … + 1/15 drunk episodes. This is a number a little bigger than 3.318. So, most likely three drunk episodes, with four quite possible. For the 25-episode seasons (seasons four and seven, if I'm reading this right), we could expect 1 + 1/2 + … + 1/25, or just over 3.816 drunk episodes. Likely four, maybe three. For the 26-episode seasons (seasons two, five, and six), we could expect 1 + 1/2 + … + 1/26 drunk episodes. That's just over 3.854.
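Those partial sums are quick to verify. Here's a minimal sketch in Python (my own check, not anything from the podcast):

```python
from fractions import Fraction

def harmonic(n):
    """Partial sum of the harmonic series: 1 + 1/2 + ... + 1/n."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

# Expected drunk episodes when the episode k-from-the-end has a 1/k chance.
print(float(harmonic(15)))  # about 3.318, for the 15 remaining first-season episodes
print(float(harmonic(25)))  # about 3.816, for a 25-episode season
print(float(harmonic(26)))  # about 3.854, for a 26-episode season
```

Using exact fractions for the running sum means there's no worry about rounding error creeping in before the final conversion.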

The number of drunk episodes to expect keeps growing. The harmonic series grows without bound. But it grows ever more slowly, compared to the number of terms you add together. You need a 31-episode season to be able to expect at least four drunk episodes. To expect five drunk episodes you'd need an 83-episode season. If the guys at Worst Episode Ever, reviewing The Simpsons, did all 625-so-far episodes by this rule we could only expect seven drunk episodes.
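Finding how long a season must be before you can expect a given number of drunk episodes is a short search. A sketch (the function name is my own invention):

```python
def smallest_season(target):
    """Smallest episode count whose harmonic sum 1 + 1/2 + ... + 1/n reaches target."""
    total, n = 0.0, 0
    while total < target:
        n += 1
        total += 1.0 / n
    return n

print(smallest_season(4))  # 31: season length needed to expect four drunk episodes
print(smallest_season(5))  # 83: season length needed to expect five
```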

Still, three, maybe four, drunk episodes out of the 15 remaining first-season episodes is a fair number. They likely wouldn't be evenly spaced, though. The chance of a drunk episode rises the closer they get to the end of the season. The expected spacing between drunk episodes is interesting, but I don't want to deal with that here. I'll just say that it probably isn't the five episodes that the quickest, easiest calculation, 15 divided by 3, would suggest.

And it’s moot anyway. The hosts discussed it just before starting “Captive Pursuit”. Pranica pointed out, for example, the smashed-last-episode problem. What they decided they meant was there would be a 1-in-15 chance of recording each episode this season drunk. For the 25- or 26-episode seasons, each episode would get its 1-in-25 or 1-in-26 chance.

That changes the calculations. Not in spirit: that's still the same. Count the possible outcomes, weight each by its chance of happening, and add that all up. But the work gets simpler. Each episode has a 1-in-15 chance of adding 1 to the total of drunk episodes. So the expected number of drunk episodes is the number of episodes (15) times the chance each is a drunk episode (1 divided by 15). We should expect 1 drunk episode. The same reasoning holds for all the other seasons; we should expect 1 drunk episode per season.

Still, since each episode gets an independent draw, there might be two drunk episodes. Could be three. There’s no reason that all 15 couldn’t be drunk. (Except that at the end of reviewing “Captive Pursuit” they drew for the next episode and it’s not to be a drunk one.) What are the chances there’s no drunk episodes? What are the chances there’s two, or three, or eight drunk episodes?

There's a rule for this. This kind of problem is a mathematically-famous one. We get our results from the "binomial distribution". This applies whenever there's a bunch of attempts at something. And each attempt can either clearly succeed or clearly fail. And the chance of success (or failure) each attempt is always the same. That's what applies here. If there's 'N' episodes, and the chance is 'p' that any one will be drunk, then we get the chance 'y' of turning up exactly 'k' drunk episodes by the formula:

y = \frac{N!}{k! \, (N - k)!} \, p^k \, (1 - p)^{N - k}

That looks a bit ugly, yeah. (I don’t like using ‘y’ as the name for a probability. I ran out of good letters and didn’t want to do subscripts.) It’s just tedious to calculate is all. Factorials and everything. Better to let the computer work it out. There is a formula that’s easy enough to work with, though. That’s because the chance of a drunk episode is the same each episode. I don’t know a formula to get the chance of exactly zero or one or four drunk episodes with the first, one-in-N chance. Probably the only thing to do is run a lot of simulations and trust that’s approximately right.
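Letting the computer work it out really is easy. A minimal sketch of the binomial calculation for the 15-episode case (the function name is mine):

```python
from math import comb

def binom_pmf(k, n, p):
    """Chance of exactly k successes in n independent tries, each with chance p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 15, 1 / 15
for k in range(6):
    print(k, round(binom_pmf(k, n, p), 3))
```

A handy sanity check: the probabilities for every k from 0 through 15 add up to 1.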

But for this rule it’s easy enough. There’s this formula, like I said. I figured out the chance of all the possible drunk episode combinations for the seasons. I mean I had the computer work it out. All I figured out was how to make it give me the results in a format I liked. Here’s what I got.

The chance of these many drunk episodes, in a 15-episode season:

0: 0.355
1: 0.381
2: 0.190
3: 0.059
4: 0.013
5: 0.002
6: 0.000
7: 0.000
8: 0.000
9: 0.000
10: 0.000
11: 0.000
12: 0.000
13: 0.000
14: 0.000
15: 0.000

Sorry it’s so dull, but the chance of a one-in-fifteen event happening 15 times in a row? You’d expect that to be pretty small. It’s got a probability of something like 0.000 000 000 000 000 002 28 of happening. Not technically impossible, but yeah, impossible.

How about for the 25- and 26-episode seasons? Here’s the chance of all the outcomes:

The chance of these many drunk episodes, in a 25-episode season:

0: 0.360
1: 0.375
2: 0.188
3: 0.060
4: 0.014
5: 0.002
6: 0.000
7: 0.000
8 or more: 0.000

And things are a tiny bit different for a 26-episode season.

The chance of these many drunk episodes, in a 26-episode season:

0: 0.361
1: 0.375
2: 0.188
3: 0.060
4: 0.014
5: 0.002
6: 0.000
7: 0.000
8 or more: 0.000

Yes, there's a greater chance of no drunk episodes. The difference is really slight; it only looks big because of rounding. A no-drunk 25-episode season has a chance of about 0.3604, while a no-drunk 26-episode season has a chance of about 0.3607. The difference comes from the chances of the many-drunk-episode outcomes all being slightly smaller still.

And there’s some neat implications through this. There’s a slightly better than one in three chance that each of the second through seventh seasons won’t have any drunk episodes. We could expect two dry seasons, hopefully not the one with Quark’s sex-change episode. We can reasonably expect at least one season with two drunk episodes. There’s a slightly more than 40 percent chance that some season will have three drunk episodes. There’s just under a 10 percent chance some season will have four drunk episodes.

There’s no guarantees, though. Probability has a curious blend. There’s no predicting when any drunk episode will come. But we can make meaningful predictions about groups of episodes. These properties seem like they should be contradictions. And they’re not, and that’s wonderful.

Do you ever think about why stuff dissolves? Like, why a spoon of sugar in a glass of water should seem to disappear instead of turning into a slight change in the water's clarity? Well, sure, in those moods when you look at the world as a child does, not accepting that life is just like that but instead imagining it could be otherwise. Take that sort of question and put it to adult inquiry and you get great science.

Peter Mander of the Carnot Cycle blog this month writes a tale about Jacobus Henricus van ‘t Hoff, the first winner of a Nobel Prize for Chemistry. In 1883, on hearing of an interesting experiment with semipermeable membranes, van ‘t Hoff had a brilliant insight about why things go into solution, and how. The insight had only one little problem. It makes for fine reading about the history of chemistry and of its mathematical study.

In other, television-related news, the United States edition of The Price Is Right included a mention of "square root day" yesterday, 4/4/16. It was in the game "Cover-Up", in which the contestant tries making successively better guesses at the price of a car. This they do by covering up wrong digits with new guesses. For the start of the game, before the contestant's made any guesses, they need something irrelevant to the game to be on the board. So they put up mock calendar pages for 1/1/2001, 2/2/2004, 3/3/2009, 4/4/2016, and finally a card showing a square root sign. The game show also had a round devoted to Pi Day a few weeks back. So I suppose they're trying to reach out to people into pop mathematics. It's cute.

Brian Fies's Mom's Cancer is a heartbreaking story. It's compelling reading, but people who are emotionally raw from losing loved ones, or who know they're particularly sensitive to such stories, should consider before reading that the comic is about exactly what the title says.

But it belongs here because the October 29th and the November 2nd installments are about a curiosity of area, and volume, and hypervolume, and more. That is that our perception of how big a thing is tends to be governed by one dimension, the length or the diameter of the thing. But its area is the square of that, its volume the cube, its hypervolume some higher power yet. So very slight changes in the diameter produce great changes in the volume. Conversely, great changes in volume will look like only slight changes in diameter. This can hurt.
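To put numbers to that square-and-cube effect (the 10 percent figure is my example, not the comic's):

```python
# A modest change in one dimension compounds in higher dimensions.
scale = 1.10               # diameter grows by 10 percent
print(round(scale**2, 3))  # area grows by a factor of 1.21
print(round(scale**3, 3))  # volume grows by a factor of 1.331, a third again as much
```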

Tom Toles’s Randolph Itch, 2 am from the 29th of October is a Roman numerals joke. I include it as comic relief. The clock face in the strip does depict 4 as IV. That’s eccentric but not unknown for clock faces; IIII seems to be more common. There’s not a clear reason why this should be. The explanation I find most nearly convincing is an aesthetic one. Roman numerals are flexible things, and can be arranged for artistic virtue in ways that Arabic numerals make impossible.

The aesthetic argument is that the four-character symbol IIII takes up nearly as much horizontal space as the VIII opposite it. The two-character IV would look distractingly skinny. Now, none of the symbols takes up exactly the same space as their counterpart. X is shorter than II, VII longer than V. But IV-versus-VIII does seem like the biggest discrepancy. Still, Toles’s art shows it wouldn’t look all that weird. And he had to conserve line strokes, so that the clock would read cleanly in newsprint. I imagine he also wanted to avoid using different representations of “4” so close together.

Jon Rosenberg's Scenes From A Multiverse for the 29th of October is a riff on both quantum mechanics — Schrödinger's Cat in a box — and the uncertainty principle. The uncertainty principle can be expressed as a fascinating mathematical construct. It starts with Ψ, a probability function that has spacetime as its domain, and the complex-valued numbers as its range. By applying a function to this function we can derive yet another function. This function-of-a-function we call an operator, because we're saying "function" so much it's starting to sound funny. But this new function, the one we get by applying an operator to Ψ, tells us the probability that the thing described is in this place versus that place. Or that it has this speed rather than that speed. Or this angular momentum — the tendency to keep spinning — versus that angular momentum. And so on.

If we apply an operator — let me call it A — to the function Ψ, we get a new function. What happens if we apply another operator — let me call it B — to this new function? Well, we get a second new function. It’s much the way if we take a number, and multiply it by another number, and then multiply it again by yet another number. Of course we get a new number out of it. What would you expect? This operators-on-functions things looks and acts in many ways like multiplication. We even use symbols that look like multiplication: AΨ is operator A applied to function Ψ, and BAΨ is operator B applied to the function AΨ.

Now here is the thing we don’t expect. What if we applied operator B to Ψ first, and then operator A to the product? That is, what if we worked out ABΨ? If this was ordinary multiplication, then, nothing all that interesting. Changing the order of the real numbers we multiply together doesn’t change what the product is.

Operators are stranger creatures than real numbers are. It can be that BAΨ is not the same function as ABΨ. We say this means the operators A and B do not commute. But it can be that BAΨ is exactly the same function as ABΨ. When this happens we say that A and B do commute.
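The smallest concrete examples of operators that fail to commute are 2×2 matrices. This sketch uses two particular matrices of my own choosing, standing in for operators A and B:

```python
def matmul(a, b):
    """Multiply two 2x2 matrices, our stand-ins for operators."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [1, 0]]   # swaps a vector's two components
B = [[1, 0], [0, -1]]  # flips the sign of the second component

print(matmul(A, B))  # [[0, -1], [1, 0]]
print(matmul(B, A))  # [[0, 1], [-1, 0]] -- a different result: A and B do not commute
```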

Whether they do or they don’t commute depends on the operators. When we know what the operators are we can say whether they commute. We don’t have to try them out on some functions and see what happens, although that sometimes is the easiest way to double-check your work. And here is where we get the uncertainty principle from.

The operator that lets us learn the probability of particles’ positions does not commute with the operator that lets us learn the probability of particles’ momentums. We get different answers if we measure a particle’s position and then its velocity than we do if we measure its velocity and then its position. (Velocity is not the same thing as momentum. But they are related. There’s nothing you can say about momentum in this context that you can’t say about velocity.)

The uncertainty principle is a great source for humor, and for science fiction. It seems to allow for all kinds of magic. Its reality is no less amazing, though. For example, it implies that it is impossible for an electron to spiral down into the nucleus of an atom, collapsing atoms the way satellites eventually fall to Earth. Matter can exist, in ways that let us have solid objects and chemistry and biology. This is at least as good as a cat being perhaps boxed.

Jan Eliot’s Stone Soup Classics for the 29th of October is a rerun from 1995. (The strip itself has gone to Sunday-only publication.) It’s a joke about how arithmetic is easy when you have the proper motivation. In 1995 that would include catching TV shows at a particular time. You see, in 1995 it was possible to record and watch TV shows when you wanted, but it required coordinating multiple pieces of electronics. It would often be easier to just watch when the show actually aired. Today we have it much better. You can watch anything you want anytime you want, using any piece of consumer electronics you have within reach, including several current models of microwave ovens and programmable thermostats. This does, sadly, remove one motivation for doing arithmetic. Also, I’m not certain the kids’ TV schedule is actually consistent with what was on TV in 1995.

Oh, heck, why not. Obviously we’re 14 minutes before the hour. Let me move onto the hour for convenience. It’s 744 minutes to the morning cartoons; that’s 12.4 hours. Taking the morning cartoons to start at 8 am, that means it’s currently 14 minutes before 24 minutes before 8 pm. I suspect a rounding error. Let me say they’re coming up on 8 pm. 194 minutes to Jeopardy implies the game show is on at 11 pm. 254 minutes to The Simpsons puts that on at midnight, which is probably true today, though I don’t think it was so in 1995 just yet. 284 minutes to Grace puts that on at 12:30 am.

I suspect that Eliot wanted it to be 978 minutes to the morning cartoons, which would bump Oprah to 4:00, Jeopardy to 7:00, Simpsons and Grace to 8:00 and 8:30, and still let the cartoons begin at 8 am. Or perhaps the kids aren’t that great at arithmetic yet.

Stephen Beals’s Adult Children for the 30th of October tries to build a “math error” out of repeated use of the phrase “I couldn’t care less”. The argument is that the thing one cares least about is unique. But why can’t there be two equally least-cared-about things?

We can consider caring about things as an optimization problem. Optimization problems are about finding the most of something given some constraints. If you want the least of something, multiply the thing you have by minus one and look for the most of that. You may giggle at this. But it’s the sensible thing to do. And many things can be equally high, or low. Take a bundt cake pan, and drizzle a little water in it. The water separates into many small, elliptic puddles. If the cake pan were perfectly formed, and set on a perfectly level counter, then the bottom of each puddle would be at the same minimum height. I grant a real cake pan is not perfect; neither is any counter. But you can imagine such.
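The multiply-by-minus-one trick looks like this in code, with caring scores I made up for the example:

```python
# Made-up scores for how much we care about each thing.
cares = {"taxes": 2, "celebrity gossip": 0, "weather on Neptune": 0}

# Find the least-cared-about thing by maximizing the negated score.
least = max(cares, key=lambda thing: -cares[thing])
print(least)

# The minimum need not be unique: here two things tie at zero.
ties = [t for t, v in cares.items() if v == min(cares.values())]
print(ties)
```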

Just because you can imagine it, though, must it exist? Think of the "smallest positive number". The idea is simple. The positive numbers are a set of numbers; surely there's some smallest one. Yet there isn't: name any positive number and we can name a smaller one. Divide it by two, for example. Zero is smaller than every positive number, but it isn't itself a positive number. A minimum might not exist, at least not within the confines where we're looking. It could be that there is no one thing you could not care less about.

So a minimum might or might not exist, and it might or might not be unique. This is why optimization problems are exciting, challenging things.

Niklas Eriksson’s Carpe Diem for the 1st of November is about understanding the universe by way of observation and calculation. We do rely on mathematics to tell us things about the universe. Immanuel Kant has a bit of reputation in mathematical physics circles for this observation. (I admit I’ve never seen the original text where Kant observed this, so I may be passing on an urban legend. My love has several thousands of pages of Kant’s writing, but I do not know if any of them touch on natural philosophy.) If all we knew about space was that gravitation falls off as the square of the distance between two things, though, we could infer that space must have three dimensions. Otherwise that relationship would not make geometric sense.

Jeff Harris’s kids-information feature Shortcuts for the 1st of November was about the Harvard Computers. By this we mean the people who did the hard work of numerical computation, back in the days before this could be done by electrical and then electronic computer. Mathematicians relied on people who could do arithmetic in those days. There is the folkloric belief that mathematicians are inherently terrible at arithmetic. (I suspect the truth is people assume mathematicians must be better at arithmetic than they really are.) But here, there’s the mathematics of thinking what needs to be calculated, and there’s the mathematics of doing the calculations.

Their existence tends to be mentioned as a rare bit of human interest in numerical mathematics books, usually in the preface in which the author speaks with amazement of how people who did computing were once called computers. I wonder if books about font and graphic design mention how people who typed used to be called typewriters in their prefaces.

There was a new pricing game that debuted on The Price Is Right for the start of its 42nd season, with a name that’s designed to get my attention: it’s called “Do The Math”. This seems like a dangerous thing to challenge contestants to do since the evidence is that pricing games which depend on doing some arithmetic tend to be challenging (“Grocery Game”, “Bullseye”), or confusing (“The Check Game”), or outright disasters (“Add Em Up”). This one looks likely to be more successful, though.

The setup is this: The contestant is shown two prizes. In the first (and, so far, only) playing of the game this was a 3-D HDTV and a motorcycle. The names of those prizes are put on either side of a monitor made up to look like a green chalkboard. The difference in prize values is shown; in this case, it was $1160, and that’s drawn in the middle of the monitor in Schoolboard Extra-Large font. The contestant has to answer whether the price of the prize listed on the left (here, the 3-D HDTV) plus the cash ($1160) is the price of the prize on the right (the motorcycle), or whether the price of the prize on the left minus the cash is the price of the prize on the right. The contestant makes her or his guess and, if right, wins both prizes and the money.

There's not really much mathematics involved here. The game is really just a two-prize version of "Most Expensive" (in which the contestant has to say which of three prizes is the most expensive; the goal is right there in the name). I think there's maybe a bit of educational value in it, though, in that by representing the prices of the two prizes — which are fixed quantities, at least for the duration of taping, and may or may not be known to the contestant — with abstractions it might make people more comfortable with the mathematical use of symbols. x and all the other letters of the English (and Greek) alphabets get called into place to represent quantities that might be fixed, or might not be; and that might be known, or might be unknown; and that we might actually wish to know or might not really care about but need to reference somehow.

That conceptual leap often confuses people, as see any joke about how high school algebra teachers can’t come up with a consistent answer about what x is. This pricing game is a bit away from mathematics classes, but it might yet be a way people could see that the abstraction idea is not as abstract or complicated as they fear.

I suspect, getting away from my flimsy mathematics link, that this should be a successful pricing game, since it looks to be quick and probably not too difficult for players to get. I’m sorry the producers went with a computer monitor for the game’s props, rather than — say — having a model actually write plus or minus, or some other physical prop. Computer screens are boring television; real objects that move are interesting. There are some engagingly apocalyptic reviews of the season premiere over at golden-road.net, a great fan site for The Price Is Right.

A friend who’s also into The Price Is Right claimed to have noticed something peculiar about the “Any Number” game. Let me give context before the peculiarity.

This pricing game is the show’s oldest — it was actually the first one played when the current series began in 1972, and also the first pricing game won — and it’s got a wonderful simplicity: four digits from the price of a car (the first digit, nearly invariably a 1 or a 2, is given to the contestant and not part of the game), three digits from the price of a decent but mid-range prize, and three digits from a “piggy bank” worth up to $9.87 are concealed. The contestant guesses digits from zero through nine inclusive, and they’re revealed in the three prices. The contestant wins whichever prize has its price fully revealed first. This is a steadily popular game, and one of the rare Price games which guarantees the contestant wins something.

A couple things probably stand out. The first is that if you’re very lucky (or unlucky) you can win with as few as three digits called, although it might be the piggy bank for a measly twelve cents. (Past producers have said they’d never let the piggy bank hold less than $1.02, which still qualifies as “technically something”.) The other is that no matter how bad you are, you can’t take more than eight digits to win something, though it might still be the piggy bank.

What my friend claimed to notice was that these “Any Number” games went on to the last possible digit “all the time”, and he wanted to know, why?

My first reaction was: “all” the time? Well, at least it happened an awful lot of the time. But I couldn’t think of a particular reason that they should so often take the full eight digits needed, or whether they actually did; it’s extremely easy to fool yourself about how often events happen when there’s a complicated set of possible events. But stipulating that eight digits were often needed, then, why should they be needed? (For that matter, trusting the game not to be rigged — and United States televised game shows are by legend extremely sensitive to charges of rigging — how could they be needed?) Could I explain why this happened? And he asked again, enough times that I got curious myself.
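The question can be put to a quick test with a toy model: split the ten digits 4/3/3 among car, mid-range prize, and piggy bank, and suppose the contestant calls digits in a uniformly random order. Real contestants know something about prices, so this is only a sketch, but under it the combinatorics are exact: the game needs all eight calls whenever the three uncalled digits include one from each prize, which is 4 × 3 × 3 = 36 of the C(10,3) = 120 possible leftover sets, or 30 percent of the time. Often, though hardly “all the time”.

```python
# Sketch: model "Any Number" as calling the ten digits 0-9 in random
# order, with 4 hidden car digits, 3 prize digits, 3 piggy-bank digits.
# Assumes purely random guessing, which real contestants don't quite do.
import random
from math import comb

def calls_to_win(rng):
    # Assign the ten digit slots to the three prizes, then shuffle to get
    # the order in which the contestant happens to reveal them.
    groups = ['car'] * 4 + ['prize'] * 3 + ['piggy'] * 3
    rng.shuffle(groups)
    remaining = {'car': 4, 'prize': 3, 'piggy': 3}
    for i, g in enumerate(groups, start=1):
        remaining[g] -= 1
        if remaining[g] == 0:   # some prize fully revealed: game over
            return i

rng = random.Random(1)
trials = 100_000
eight = sum(calls_to_win(rng) == 8 for _ in range(trials)) / trials

# Exact combinatorics: the game reaches the eighth call exactly when the
# three uncalled digits include one digit from each prize, which happens
# in 4 * 3 * 3 = 36 of the C(10, 3) = 120 equally likely leftover sets.
exact = 4 * 3 * 3 / comb(10, 3)
print(f"simulated {eight:.3f}, exact {exact:.3f}")  # both near 0.300
```

The simulation and the exact count agree, so if real games run to the eighth digit much more than 30 percent of the time, contestants’ non-random guessing would be the thing to investigate.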

My commenters, thank them, quite nicely outlined the major reasons that someone in the Deal or No Deal problem I posited would be wiser to take the Banker’s offer of a sure $11,750 rather than to keep a randomly selected one of $1, $10, $7,500, $25,000, or $35,000. Even though the expectation value, the average the Contestant could expect from sticking with her suitcase if she played the game an enormous number of times, is $13,502.20, fairly noticeably larger than the Banker’s offer, she is playing the game just the once. She’s more likely to do worse than the Banker’s offer than better, and is as likely to do much worse — $1 or $10 — as to do any better.
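The numbers above check out, and a few lines make the comparison explicit:

```python
# The arithmetic behind the comparison: five equally likely suitcase
# values against the Banker's sure offer.
values = [1, 10, 7_500, 25_000, 35_000]
offer = 11_750

expectation = sum(values) / len(values)                    # $13,502.20
p_worse = sum(v < offer for v in values) / len(values)     # 3 in 5 do worse
p_much_worse = sum(v <= 10 for v in values) / len(values)  # 2 in 5 get $1 or $10
p_better = sum(v > offer for v in values) / len(values)    # 2 in 5 do better
print(expectation, p_worse, p_much_worse, p_better)
```

So the one-shot player faces a 3-in-5 chance of doing worse than the sure thing, and is exactly as likely to land on $1 or $10 as on anything above the offer.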

If we suppose the contestant’s objective is to get as much money as possible from playing, her strategy is different if she plays just the once versus if she plays unlimitedly many times. I don’t know a name for this class of problems; maybe we can dub it the “lottery paradox”. It’s not rare for a lottery jackpot to rise high enough that the expected value of one’s winnings is more than the ticket price, which is typically when I’ll bother to buy one (well, two), but I know it’s effectively certain that all I’ll get from the purchase is one (well, two) dollars poorer.
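That paradox is easy to make concrete. The odds, ticket price, and jackpot below are invented for illustration, not taken from any real lottery:

```python
# A hypothetical lottery: the expected value of a ticket can exceed its
# price while losing remains all but certain. All figures are made up.
ticket = 2
odds = 1 / 300_000_000      # hypothetical chance of hitting the jackpot
jackpot = 800_000_000       # hypothetical jackpot

expected_value = odds * jackpot   # about $2.67, more than the ticket costs
p_lose = 1 - odds                 # yet a loss is essentially guaranteed
print(expected_value, p_lose)
```

The expectation favors buying; the single-play reality is that you are, for all practical purposes, simply two dollars poorer.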

It also strikes me that I have the article subjects for this and the previous entry reversed. Too bad.

Mathstina, in a post from August 25, put up a video from the Australian version of Deal Or No Deal which showed a spectacularly unlucky contestant, a contestant unlucky enough to inspire word problems. I quite like game shows, partly because I was a kid in an era — the late 70s and early 80s — when the American daytime game show was at a creative and commercial peak, when one could reasonably expect to see novel shows on two or three networks from 9 am until 1 or 2 pm, and partly because they give many wonderful, easy-to-understand mathematics problems. Here’s one I based on the show and used as an exam problem.

I have one last important thing to discuss before I finish my months of essays spun off from an offhand comment on The Price Is Right. There are a couple minor points I can also follow up on, but I don’t think they’re tied tightly enough to the show to deserve explicit mention or rate getting “tv” included as one of my keywords. Here’s my question: what’s the chance of winning an average pricing game, once one has won the Item Up For Bid?

At first glance this is several dozen questions, since there are quite a few games, some winnable on pure skill — “Clock Game”, particularly, although contestants this season have been rotten at it, and “Hole In One … Or Two”, since a good miniature golfer could beat it — and some that are just never won — “Temptation” particularly — and some for which partial wins are possible — “Money Game” most obviously. For all, skill in pricing things helps. For nearly all, there’s an element of luck.

I’m not going to attempt to estimate the chance of winning each of the dozens of pricing games. What I want is some kind of mean chance of winning, based on how contestants actually do. The tool I’ll use for this is the number of perfect episodes, episodes in which all six pricing games are won, and I’ll leave to the definers of “perfect” such questions as what counts as a win for “Pay The Rent” (in which a prize of $100,000 is theoretically possible, but $10,000 is the most that has yet been paid out) or “Plinko” (theoretically paying up to $50,000, but which hasn’t done so in decades of playing).

One week, it seems, isn’t enough to tell the difference conclusively between the first bidder on Contestants Row having a 25 percent chance of winning — winning one out of four times — or a 17 percent chance of winning — winning one out of six times. But we’re not limited to watching just the one week of The Price Is Right, at least in principle. Some more episodes might help us, and we can test how many episodes are needed to be confident that we can tell the difference. I won’t be clever about this. I have a tool — Octave — which makes it very easy to figure out whether it’s plausible for something which happens 1/4 of the time to turn up only 1/6 of the time in a set number of attempts, and I’ll just keep trying larger numbers of attempts until I’m satisfied. Sometimes the easiest way to solve a problem is to keep trying numbers until something works.
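That keep-trying-numbers loop is easy to sketch; Python stands in for Octave here. Starting from one week’s worth of one-bids and growing by a full episode at a time, we look for the first sample size at which a seat winning only 1/6 of the time would be implausible (below five percent) for a seat that truly wins 1/4 of the time:

```python
# Trial-and-error search: how many one-bids before a 1/6 showing becomes
# implausible (under the 5 percent level) for a seat with a true 1/4 chance?
from math import comb

def binom_cdf(k, n, p):
    # Probability of k or fewer successes in n trials of chance p each.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n = 30                     # one week: six one-bids over five episodes
while binom_cdf(n // 6, n, 0.25) >= 0.05:
    n += 6                 # add an episode's worth of one-bids
print(n, binom_cdf(n // 6, n, 0.25))
```

This is the same brute-force spirit as the Octave session: no cleverness, just larger sample sizes until the tail probability drops below the chosen significance level.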

In two weeks (or any ten episodes, really, as talked about above), with 60 items up for bid, a 25 percent chance of winning suggests the first bidder should win 15 times. A 17 percent chance of winning would be a touch over 10 wins. The chance of 10 or fewer successes out of 60 attempts, with a 25 percent chance of success each time, is about 8.6 percent, still none too compelling.
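That 8.6 percent figure is a straightforward binomial tail sum, easy to verify directly:

```python
# Checking the figure above: chance of 10 or fewer first-seat wins in 60
# one-bids when each carries a 25 percent chance.
from math import comb

tail = sum(comb(60, k) * 0.25**k * 0.75**(60 - k) for k in range(11))
print(f"P(10 or fewer of 60) = {tail:.3f}")
```

Seeing as few as 10 wins is unusual for a true 1/4 chance, but not unusual enough to rule anything out at the five percent level.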

Here we might turn to despair: 6,000 episodes — about 35 years of production — weren’t enough to give perfectly unambiguous answers about whether there were fewer clean sweeps than we expected. There were too few at the 5 percent significance level, but not too few at the 1 percent significance level. Do we really expect to do better with only 60 shows?

The obvious thing to do is test: watch a couple episodes, and see whether the winning bids come from the first seat closer to 1/6 or to 1/4 of the time. It’s easy to tally the number of items up for bid and how often the first bidder wins. However, there are only six items up for bid each episode, and there are five episodes per week, for 30 trials in all. I talk about a week’s worth of episodes because it’s a convenient unit, easy to record on the Tivo or an equivalent device, easy to watch at The Price Is Right’s online site, but it doesn’t have to be a single week. It could be any five episodes. But I’ll say a week just because it’s convenient to do so.

If the first seat has a chance of 25 percent of winning, we expect 30 times 1/4, or seven or eight, first-seat wins per week. If the first seat has a 17 percent chance of winning, we expect 30 times 1/6, or 5, first-seat wins per week. That’s not much difference. What’s the chance we see 5 first-seat wins if the first seat has a 25 percent chance of winning?
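The question just posed can be worked out with the binomial formula, and it’s worth computing the same count under both hypotheses:

```python
# Chance of exactly 5 first-seat wins in 30 one-bids, under each
# candidate probability for the first seat.
from math import comb

def binom_pmf(k, n, p):
    # Probability of exactly k successes in n trials of chance p each.
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(f"{binom_pmf(5, 30, 1/4):.3f}")  # about 0.10 if the seat has a 1/4 chance
print(f"{binom_pmf(5, 30, 1/6):.3f}")  # about 0.19 if it wins only 1/6 of the time
```

Five first-seat wins is quite plausible either way, which is the trouble with a single week of watching.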

Let’s accept the conclusion that the small number of clean sweeps of Contestants Row is statistically significant, that all six winning contestants on a single episode of The Price Is Right come from the same seat less often than we would expect from chance alone, and that the reason for this is that whichever seat won the last item up for bids is less likely to win the next. It seems natural to suppose the seat which won last time — and which is therefore bidding first this next time — is at a disadvantage. The irresistible question, to me anyway, is: how big is that disadvantage? If no seats had any advantage, the first, second, third, and fourth bidders would be expected to have a probability of 1/4 of winning any particular item. How much less a chance does the first bidder need to have to get the one clean sweep in 6,000 episodes reported?

Chiaroscuro came to an estimate that the first bidder had a probability of about 17.6 percent of winning the item up for bids, and I agree with that, at least if we make a couple of assumptions which I’m confident we are making together. But it’s worth saying what those assumptions are because if the assumptions do not hold, the answers come out different.
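One way to reproduce that 17.6 percent figure, assuming (as I believe the estimate does) that a clean sweep requires the episode’s first winner’s seat to win the next five one-bids with some fixed probability q, and that we expect exactly the one sweep reported in 6,000 episodes:

```python
# If the seat that just won takes the next one-bid with probability q,
# a clean sweep needs five straight repeat wins, so the expected number
# of sweeps in 6,000 episodes is 6000 * q**5. Set that equal to the one
# sweep reported and solve for q.
q = (1 / 6000) ** (1 / 5)
print(f"q = {q:.3f}")  # about 0.176, i.e. 17.6 percent
```

Change the assumptions — more or fewer episodes, or a different count of sweeps — and q moves accordingly, which is why stating the assumptions matters.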

The first assumption was made explicitly in the first paragraph here: that the low number of clean sweeps is because the chance of a clean sweep is less than the 1 in 1000 (or to be exact, 1 in 1024) chance which supposes every seat has an equal probability of winning. After all, the probability that we saw so few clean sweeps from chance alone was only a bit under two percent; that’s unlikely but hardly unthinkable. We’re supposing there is something to explain.
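Both numbers in that paragraph come out of a short calculation: the 1 in 1024 is the chance that the five later winners all match the first winner’s seat, and the “bit under two percent” is the binomial chance of at most one clean sweep in 6,000 episodes at that rate:

```python
# Where "1 in 1024" comes from, and the chance of at most one clean sweep
# in 6,000 episodes if every seat really had a 1-in-4 chance of winning.
from math import comb

p_sweep = (1 / 4) ** 5   # five later winners must match the first one's seat
n = 6_000

p_at_most_one = sum(
    comb(n, k) * p_sweep**k * (1 - p_sweep)**(n - k) for k in range(2)
)
print(1 / p_sweep, f"{p_at_most_one:.3f}")  # 1024, a bit under two percent
```

At a 1/1024 chance per episode we would expect nearly six sweeps in 6,000 episodes, which is why seeing only one looks suspicious.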

One assumption for a binomial distribution is that we have some trial, some event, which happens many times. Each episode is the obvious trial here. The outcome we’re interested in seeing has some probability of happening on each trial; there is indeed some probability of a clean sweep each episode. The binomial distribution assumes that this probability is constant for every trial, that it doesn’t become more or less likely the tenth or hundredth or thousandth time around, and this seems likely to hold for The Price Is Right episodes. Granted there is some chance of a clean sweep in one episode; what could be done to increase or decrease the likelihood from episode to episode?

If the probability of having one or fewer clean sweep episodes of The Price Is Right out of 6,000 aired shows is a little over one and a half percent — and it is — and we consider outcomes whose probability is less than five percent to be so unlikely that we can rule them out as happening by chance — and, last time, we did — then there are improbably few episodes where all six winning contestants came from the same seat in Contestants Row, and we can usefully start looking for possible explanations as to why there are so few clean sweeps. At least, that’s the conclusion at our significance level, that five percent.

But there’s no law dictating that we pick that five percent significance level. If we picked a one percent significance level, which is still common enough and not too stringent, then we would say this might be fewer clean sweeps than we expected, but it isn’t so drastically few as to raise our eyebrows yet. And we would be correct to do so. Depending on the significance level, what we saw is either so few clean sweeps as to be suspicious, or it’s not. This is why it’s better form to choose the significance level before we know the outcome; doing it the other way around feels like drawing the bullseye after shooting the arrow.

The last important idea missing before we can judge this problem about The Price Is Right clean sweeps of Contestants Row is the significance level. Whenever an experiment is run — whether it’s the classic probability class problems of flipping coins or rolling dice, or whether it’s watching 6,000 episodes of a game show to see whether any seat produces the most winners, or whether it’s counting the number of red traffic lights one gets during the commute — there are some outcomes which are reasonably likely, some which are unlikely, and some which are vanishingly improbable.

We have to decide that some outcomes have such a low probability of happening naturally that they represent something going on, and are not just the result of chance. How low that probability should be is our decision. There are some common dividing lines, but they’re common just because they represent numbers which human beings find to be nice round figures: five percent, one percent, half a percent, one-tenth of a percent. What significance level one picks depends on many factors, including what’s common in the field, how different outcomes are expected to be, even what one can afford. Physicists looking for evidence of new subatomic particles have an extremely high standard before declaring something is definitely a new particle, but, they can run particle detection experiments until they get such clear evidence.

To be fair, we ought to pick our significance level before we’ve worked out the probability of something happening, but this is the earliest I could discuss it with motivation for you to read about it. If we take the five percent significance level, though, then, knowing as we do that there’s a little more than a one and a half percent chance of there being as few clean sweeps as observed, the conclusion is obvious: all six winning contestants in an episode should have come from the same seat, over 6,000 episodes, more often than the one time Drew Carey claimed they had. We can start looking for explanations for why there should be this deficiency.