## The Significance of the Item Up For Bids

The last important idea missing before we can judge this problem about The Price Is Right clean sweeps of Contestants Row is the significance level. Whenever an experiment is run — whether it’s the classic probability class problems of flipping coins or rolling dice, or watching 6,000 episodes of a game show to see whether any seat produces more winners than the others, or counting the number of red traffic lights one gets during the commute — there are some outcomes which are reasonably likely, some which are unlikely, and some which are vanishingly improbable.

We have to decide that some outcomes have such a low probability of happening naturally that they represent something going on, and are not just the result of chance. How low that probability should be is our decision. There are some common dividing lines, but they’re common just because they represent numbers which human beings find to be nice round figures: five percent, one percent, half a percent, one-tenth of a percent. What significance level one picks depends on many factors, including what’s common in the field, how different the outcomes are expected to be, even what one can afford. Physicists looking for evidence of new subatomic particles hold to an extremely strict standard before declaring something is definitely a new particle; but then, they can run particle-detection experiments until they get such clear evidence.

To be fair, we ought to pick our significance level before we’ve worked out the probability of something happening, but this is the earliest I could discuss it with motivation for you to read about it. But if we take the five percent significance level, we see we know already that there’s a little more than a one and a half percent chance of there being as few clean sweeps as observed. The conclusion is obvious: all six winning contestants in an episode should have come from the same seat, over 6,000 episodes, more often than the one time Drew Carey claimed they had. We can start looking for explanations for why there should be this deficiency.
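For a sense of how the numbers compare, that cumulative probability can be worked out in a few lines of Python (the 6,000 episodes and the 1-in-1,000 chance per episode are the estimates from the earlier posts in this series):

```python
from math import comb

N = 6000        # approximate number of hour-long episodes
p = 1 / 1000    # estimated chance any one episode is a clean sweep

def binom_pmf(x, n, prob):
    """Probability of exactly x successes in n independent trials."""
    return comb(n, x) * prob**x * (1 - prob)**(n - x)

# Chance of seeing one clean sweep or fewer, as Drew Carey claimed
p_at_most_one = binom_pmf(0, N, p) + binom_pmf(1, N, p)
print(f"P(at most 1 clean sweep) = {p_at_most_one:.4f}")

# Compare against the five percent significance level
print("Significant at 5%?", p_at_most_one < 0.05)
```

The result comes out a bit above one and a half percent, comfortably under the five percent line.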

Or …

## A Brief Word for the Comic Pages

There’s legitimate mathematical content linked from here, but mostly, I want to promote what seems to be a little-known comic strip that’s working very hard at making me love it. Part of that work has been in producing a couple of mathematics-oriented strips. Grant Snider’s Incidental Comics, part of the gocomics.com comics empire, is a roughly twice-a-week strip filling the page with lots of detail humor. It’s the sort of comic strip which assumes you will remember Ludwig Mies van der Rohe’s Farnsworth House. (However, I did see a Lego block version of the Farnsworth House in Barnes and Noble the other day, so maybe Mies van der Rohe has gone and become all trendy while I wasn’t looking.)

Relevant to the nominal base for this little blog, though, is that Snider has posted a few comics based on mathematics jokes. The most recent is from January 23, titled “Axes of Evil”, and mixes descriptive statistics with horror that is somehow not associated with calculating standard deviations. A little farther back is the December 12, 2011, strip, titled “Function World”, which adapts graphs of some popular functions, such as hyperbolas, the natural logarithm, and the inverse cosine (which is not actually popular, but don’t tell it) into amusement park rides. Do enjoy.

I am not certain how far in the archives people who haven’t got gocomics.com accounts can go before they’re nagged into getting gocomics.com accounts.

## The First Tail

We became suspicious of the number of clean sweeps in The Price Is Right when there were not the expected six of them in 6,000 episodes. The chance there would be only one was about one and a half percent, not very high. But are there so few clean sweeps that we should be suspicious? That is, is the difference between the expected number of sweeps and the observed number so large as to be significant? Is it too big to just result from chance?

This is significance testing: is whatever quantity we mean to observe dramatically less than what is expected? Is it dramatically more? Is it at least different? Are these differences bigger than what could be expected by mere chance? To take every statistician’s favorite example: a tossed fair coin will come up tails half the time; that means, of twenty flips, there are expected to be ten tails. But there being merely nine or as many as twelve is reasonable. Three or fifteen tails may be a little unlikely. Zero or twenty seem impossible. There’s a point where our observations are so different from what we expect that we have to reject the idea that our observations and our expectations agree.
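Those coin-flip claims are easy to check directly; here’s a quick sketch of the relevant binomial probabilities for twenty flips of a fair coin:

```python
from math import comb

def tails_probability(k, flips=20):
    """Chance of exactly k tails in `flips` tosses of a fair coin."""
    return comb(flips, k) / 2**flips

print(f"exactly 10 tails: {tails_probability(10):.3f}")   # the most likely single outcome
print(f"exactly  3 tails: {tails_probability(3):.5f}")    # a little unlikely indeed
print(f"exactly  0 tails: {tails_probability(0):.2e}")    # about one chance in a million
```

Ten tails exactly happens a bit under a fifth of the time; three tails, about once in a thousand tries; zero tails, almost never.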

It’s not enough to say there’s a probability of only 1.5 percent that there should be exactly one clean sweep episode out of 6,000, though. It’s unlikely that should happen, but then, any particular outcome is unlikely. Even the most likely result of 6,000 episodes, six clean sweeps, has only about one chance in six of happening. That’s near the chance that the next person you meet will have a birthday in either September or November. That isn’t absurdly unlikely, but, the person betting against it has the surer deal.

## Significance Intrudes on Contestants Row

We worked out the likelihood that there would be only one clean sweep, with all six contestants getting on stage coming from the same seat in Contestants Row, out of six thousand episodes of The Price Is Right. That turned out to be not terribly likely: it had about a one and a half percent chance of being the case. For a sense of scale, that’s around the same probability that the moment you finish reading this sentence will be exactly 26 seconds past the minute. It’s pretty safe to bet that it wasn’t.

However, it isn’t particularly outlandish to suppose that it was. I’d certainly hope at least some reader found that it was. Events which aren’t particularly likely do happen, all the time. Compare the likelihood of this single-clean-sweep or the 26-seconds-past-the-minute thing happening to the likelihood of any given hand of poker: any specific hand is phenomenally less likely, but something has to happen once you start dealing. So do we have any grounds for saying the particular outcome of one clean sweep in 6,000 shows is improbable? Or for saying that it’s reasonable?

## A Simple Demonstration Which Does Not Clarify

When last we talked about the “clean sweep” of winning contestants coming from the same of four seats in Contestants Row for all six Items Up For Bid on The Price Is Right, we had established the pieces needed if we suppose this to be a binomial distribution problem. That is, we suppose that any given episode has a probability, p, of successfully having all six contestants from the same seat, and a probability 1 – p of failing to have all six contestants from the same seat. There are N episodes, and we are interested in the chance of x of them being clean sweeps. From the production schedule we know the number of episodes N is about 6,000. We supposed the probability of a clean sweep to be about p = 1/1000, on the assumption that the chance of winning isn’t any better or worse for any contestant. The probability of there not being a clean sweep is then 1 – p = 999/1000. And we expected x = 6 clean sweeps, while Drew Carey claimed there had been only 1.

The chance of finding x successes out of N attempts, according to the binomial distribution, is the probability of any one combination of x successes and N – x failures — which is equal to p^x * (1 – p)^(N – x) — times the number of ways there are to select x items out of N candidates. Either of those is easy enough to calculate, up to the point where we try calculating it. Let’s start out by supposing x to be the expected 6, and later we’ll look at it being 1 or other numbers.

## Off By A Factor Of 720 (Or More)

To work out whether it was plausible that there had been only one “clean sweep”, of all six winners of the Item Up For Bid on The Price Is Right coming from the same seat, we had started a little into the binomial distribution. The key ideas included that we have “Bernoulli trials”, a number of independent chances for some condition to happen — in this case, we had about 6,000 such trials, the number of hourlong episodes of The Price Is Right — and a probability p of successfully seeing some event occur on any one episode. We worked that out to be somewhere about p = 1/1000, if every seat is equally likely to win every time. There is also a probability of 1 – p, or 999/1000, of the event failing to occur, that is, of one or more winning contestants coming from a different seat.

To find the probability of seeing some number, call it x since we don’t particularly care what it is, of successes out of some larger number, call it N because that’s a convenient number, of trials, we need to figure out how many ways there are to arrange x successes out of N trials. For small x and N values we can figure this out by hand, given time. For large numbers, we’d never finish if we tried by hand. But we can solve it, if we attack the problem methodically.
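The methodical attack ends at the standard count, N! / (x! (N – x)!); here’s a sketch of it, checked against the version Python’s standard library happens to provide:

```python
from math import comb, factorial

def ways_to_choose(x, N):
    """Number of ways to arrange x successes among N trials: N! / (x! (N - x)!)."""
    return factorial(N) // (factorial(x) * factorial(N - x))

# A small case we could list out by hand: two successes in four trials
print(ways_to_choose(2, 4))  # 6

# A case we'd never finish by hand, checked against the standard library
print(ways_to_choose(6, 6000) == comb(6000, 6))  # True
```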

## From Drew Carey To An Imaginary Baseball Player

So, we calculated that on any given episode of The Price Is Right there’s around one chance of all six winners of the Item Up For Bid coming from the same seat. And we know there have been about six thousand episodes with six Items Up For Bid. So we expect there to have been about six clean sweep episodes; yet if Drew Carey is to be believed, there has been just the one. What’s wrong?

Possibly, nothing. Just because there is a certain probability of a thing happening does not mean it happens all that often. Consider an analogous situation: a baseball batter might hit safely one time out of every three at-bats; but there would be nothing particularly odd in the batter going hitless in four at-bats during a single game, however much we would expect him to get at least one. There wouldn’t be much very peculiar in his hitting all four times, either. Our expected value, the number of times something could happen times the probability of it happening each time, is not necessarily what we actually see. (We might get suspicious if we always saw the expected value turn up.)
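A quick check of those intuitions, assuming a batter who hits safely exactly one time in three and whose at-bats are independent:

```python
p_hit = 1 / 3                  # chance of a hit in any one at-bat

p_hitless = (1 - p_hit) ** 4   # goes hitless in all four at-bats of a game
p_perfect = p_hit ** 4         # hits safely all four times
expected_hits = 4 * p_hit      # the expected value over four at-bats

print(f"hitless: {p_hitless:.3f}, perfect: {p_perfect:.3f}, expected hits: {expected_hits:.2f}")
```

The hitless game happens about one time in five, and even the perfect game about one time in eighty; neither is anything to write the commissioner about.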

Still, there must be some limits. We might accept a batter who hits one time out of every three getting no hits in four at-bats. If he got no hits in four hundred at-bats, we’d be inclined to say he’s not a decent hitter having some bad luck. More likely he’s failing to bring the bat with him to the plate. We need a tool to say whether some particular outcome is tolerably likely or so improbable that something must be up.

## Came On Down

On the December 15th episode of The Price Is Right, host Drew Carey mentioned, as the sixth Item Up For Bids began, that so far that show all five contestants who had won their Items Up For Bid (and so got on-stage for the pricing games) had come from the same spot. He said that only once before on the show had all the contestants come from the same seat in Contestants Row. That seems awfully few, but, how many should there be?

We can say roughly how many “clean sweep” shows we should expect. There’ve been just about 6,000 episodes of The Price Is Right played in the current hour-long format (the show was a half-hour its first few years after being revived in 1972; it was a very different show in previous decades). If we know the probability of all six contestants in one show winning their Items Up For Bid — properly speaking, each is called the One-Bid, but nobody cares — and multiply that probability by the number of shows, we have the number of shows we should expect to have had such a clean sweep. This product, the chance of something happening times the number of times it could happen, is termed the “expected value” or “expectation value”, or sometimes just the “mean”, as in the average number to be, well, expected.
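In code the expected value here is nothing more than a product, using the figures worked out in this series:

```python
p_sweep = 1 / 1000       # estimated chance one show is a clean sweep
episodes = 6000          # roughly the number of hour-long episodes

# Expected value: chance per show times number of shows
expected_sweeps = p_sweep * episodes
print(expected_sweeps)   # 6.0
```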

This makes a couple of assumptions. All probability problems do. For example, it assumes the chance of a clean sweep in one show is unaffected by clean sweeps in other shows. That is, if everyone in the red seat won on Thursday, that wouldn’t make everyone in the blue seat winning Friday more or less likely. That condition is termed “independence”, and it is frequently relied upon to make probability problems work out. Unfortunately, it’s often hard to prove: how do you prove that one thing happening doesn’t affect the other?

## Mid-Course Correction 1, Continued

The next part of examining just how well this blog is doing is to think about the mechanical details of it. I’ve been publishing something typically three times a week, which feels to me like an easy enough schedule, and posting that in the evenings, which leaves me the workdays of Monday, Wednesday, and Friday to actually think and compose things. The pieces have been around a thousand words long, although that has a greater tendency to be long than short. Thinking of something to write is hard; keeping going once I’ve started is easy.

That’s all set to my convenience. But I’m curious what readers think of these properties. For one, is the three-a-week schedule a good one? I feel like a weekly blog is too easy to forget about reading, and a daily one might be too much if the subject hasn’t got the fun aspects of ridiculing comic strips or the exciting aspects of ridiculing other people’s politics. Is my instinct reasonable? Also, the publication in the evening is nice for me, but it does mean articles go up when at least one of my readers has gone to sleep, and the Friday evening article can go missing completely against the backdrop of the weekend. Is there a better time?

Is the length reasonable? Should I try writing to a shorter length so as to not present so many walls of text, particularly when I start getting into topics that need equations or algebraic manipulations; or should I let several short pieces run into a more unified and longer post? What’s the natural reading length for pop mathematics that you find interesting? Yes, yes, a good essay is never long enough, but I’m not arrogant enough to think I’m always being very interesting.

As ever I’ll try to be good-spirited about complaints. I don’t promise to take everybody’s advice, but I do promise to consider it.

## Mid-Course Correction 1

I’m about three months into this particular blog-writing experiment, so it’s probably time to start over-thinking it. For the most part I’m happy; I like doing some thinking about mathematics in this kind of organized way, and I really like that I keep finding a thousand or so words to say on different topics, and that those feel to me to be topics that aren’t written about obsessively much in the rest of the pop mathematics universe. And the results have fit my typical self-estimation, that I find it all quite satisfying until the moment I publish, then realize I’ve just shown to the world the stupidest words ever strung together, and as I get some distance from publication come to find I didn’t say what I wanted quite right, but I did acceptably well.

My satisfaction’s not necessarily the important part, though; somewhere in the list of motives I have for writing is to communicate. So, I’d like to know whether you-the-presumed-reader does think I’m communicating. Am I, at least generally, writing about interesting topics; am I varying the topics at a reasonable rate, or should I keep on one thread for more or fewer posts in a row; are the individual essays as interesting as the topics demand?

I’ll try to be good-natured about criticisms, whether put out here or sent to me directly. I don’t promise to change in response to any particular complaint, but I will do my best to listen and consider whether it feels right and whether it might be something I can or want to act on. For example, one person said I’m harder to start than to finish reading. This feels odd to me, but I’m curious how other people see the same writings.

When last we discussed divisibility rules, particularly, rules for just adding up the digits in a number to tell what it might divide by, we had worked out rules for testing divisibility by eight. In that, we take the sum of four times the hundreds digit, plus two times the tens digit, plus the units digit, and if that sum is divisible by eight, then so is the original number. This hasn’t got the slick, smooth memorability of the rules for three and nine — just add all the digits up — or the simplicity of checking for divisibility by ten, five, or two — just look at the last digit — but it’s not a complicated rule either.

Still, we came at it through an experimental method, fiddling around with possible rules until we found one which seemed to work. And since there are only a thousand possible cases to consider, we can check that it works in every one of those cases. That’s tiresome to do, but it works, and it’s a legitimate way of forming mathematical rules. Quite a number of proofs amount to dividing a problem into several different cases and showing that whatever we mean to prove is so in each case.
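That tiresome exhaustive check is exactly the kind of thing a short program does in an instant; here’s a sketch running the rule over every three-digit combination (digits beyond the hundreds place never matter, since any multiple of 1000 is already a multiple of 8):

```python
def rule_for_eight(n):
    """The divisibility rule: four times the hundreds digit, plus twice the
    tens digit, plus the units digit, tested for divisibility by eight."""
    hundreds, tens, units = (n // 100) % 10, (n // 10) % 10, n % 10
    return (4 * hundreds + 2 * tens + units) % 8 == 0

# Check the rule against plain divisibility for all thousand cases
print(all(rule_for_eight(n) == (n % 8 == 0) for n in range(1000)))  # True
```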

Let’s see what we can do to tidy up the proof, though, and see if we can make it work without having to test out so many cases. We can, or I’d have been foolish to start this essay rather than another; along the way, though, we can remove the traces that show the experimenting that led to the technique. We can put forth the cleaned-up reasoning and look all the more clever because it isn’t so obvious how we got there. This is another common property of proofs; the most attractive or elegant method of presenting them can leave the reader wondering how it was ever imagined.

## Hopefully, Saying Something True

I wanted to talk about drawing graphs that represent something, and to get there have to say what kinds of things I mean to represent. The quick and expected answer is that I mean to represent some kind of equation, such as “y = 3*x – 2” or “x² + y² = 4”, and that probably does come up the most often. We might also be interested in representing an inequality, something like “x² – 2y² ≤ 1”. On occasion we’re interested just in the region where something is not true, saying something like “y ≠ 3 – x”. (I’ve used nice small counting numbers here not out of any interest in these numbers, or because larger ones or non-whole numbers or even irrational numbers don’t work, but because there is something pleasantly reassuring about seeing a “1” or a “2” in an equation. We strongly believe we know what we mean by “1”.)

Anyway, what we’ve written down is something describing a relationship which we are willing to suppose is true. We might not know what x or y are, and we might not care, but at least for the length of the problem we will suppose that the number represented by y must be equal to three times whatever number is represented by x and minus two. There might be only a single value of x we find interesting; there might be several; there might be infinitely many such values. There’ll be a corresponding number of y’s, at least, so long as the equation is true.

Sometimes we’ll turn the description in terms of an equation into a description in terms of a graph right away. Some of these descriptions are like those of a line — the “y = 3*x – 2” equation — or a simple shape — “x² + y² = 4” is a circle — in that we can turn them into graphs right away without having to process them, at least not once we’re familiar and comfortable with the idea of graphing. Some of these descriptions are going to be in awkward forms. “x + 2 = –y²/x + 2y/x” is really just an awkward way to describe a circle (more or less), but that shape is hidden in the writing.
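If I’ve done the algebra right, multiplying that awkward equation through by x and completing the square gives (x + 1)² + (y – 1)² = 2, a circle centered on (–1, 1) with radius √2; a quick numerical check (skipping points too near x = 0, where the original form divides by zero):

```python
from math import cos, sin, pi, sqrt

# Walk around the circle (x + 1)^2 + (y - 1)^2 = 2 and confirm each point
# satisfies the awkward original form, x + 2 = -y^2/x + 2y/x.
worst = 0.0
for k in range(360):
    t = 2 * pi * k / 360
    x = -1 + sqrt(2) * cos(t)
    y = 1 + sqrt(2) * sin(t)
    if abs(x) < 1e-3:       # the original form is undefined at x = 0
        continue
    residual = abs((x + 2) - (-y**2 / x + 2 * y / x))
    worst = max(worst, residual)

print(worst < 1e-9)  # True: the two descriptions agree away from x = 0
```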

## Before Drawing a Graph

I want to talk about drawing graphs, specifically, drawing curves on graphs. We know roughly what’s meant by that: it’s about wiggly shapes with a faint rectangular grid, usually in grey or maybe drawn in dotted lines, behind them. Sometimes the wiggly shapes will be in bright colors, to clarify a complicated figure or to justify printing the textbook in color. Those graphs.

I clarify because there is a type of math called graph theory in which, yes, you might draw graphs, but there what’s meant by a graph is just any sort of group of points, called vertices, connected by lines or curves. It makes great sense as a name, but it’s not what someone who talks about drawing a graph means, up until graph theory gets into consideration. Those graphs are fun, particularly because they’re insensitive to exactly where the vertices are, so you get to exercise some artistic talent instead of figuring out whatever you were trying to prove in the problem.

The ordinary kind of graphs offer some wonderful advantages. The obvious one is that they’re pictures. People can very often understand a picture of something much faster than they can understand other sorts of descriptions. This probably doesn’t need any demonstration; if it does, try looking at a map of the boundaries of South Carolina versus reading a description of its boundaries. Some problems are much easier to work out if we can approach them as geometric problems. (And I admit feeling a particular delight when I can prove a problem geometrically; it feels cleverer.)

## Ted Baxter and the Binomial Distribution

There are many hard things about teaching, although I appreciate that since I’m in mathematics I have advantages over many other fields. For example, students come in with the assumption that there are certainly right and certainly wrong answers to questions. I’m generally spared the problem of convincing students that I have authority to rule some answers in or out. There’s actually a lot of discretion and judgement and opinion involved, but most of that comes in when one is doing research. In an introductory course, there are some techniques that have gotten so well-established and useful we could fairly well pretend there isn’t any judgement left.

But one hard part is probably common to all fields: how closely to guide a student working out something. This case comes from office hours, as I tried getting a student to work out a problem in binomial distributions. Binomial distributions come up in studying the case where there are many attempts at something; and each attempt has a certain, fixed, chance of succeeding; and you want to know the chance of there being exactly some particular number of successes out of all those tries. For example, imagine rolling four dice, and being interested in getting exactly two 6’s on the four dice.

To work it out, you need the number of attempts, and the number of successes you’re interested in, and the chance of each attempt at something succeeding, and the chance of each attempt failing. For the four-dice problem, each attempt is the rolling of one die; there are four attempts at rolling a die; we’re interested in finding two successful rolls of 6; the chance of successfully getting a 6 on any roll is 1/6; and the chance of failure on any one roll is —
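For whoever wants to check their own answer, the whole four-dice calculation fits in a few lines:

```python
from math import comb

attempts = 4                  # four dice rolled
successes = 2                 # we want exactly two 6's
p_success = 1 / 6             # chance any one die shows a 6
p_failure = 1 - p_success     # chance it shows anything else

probability = (comb(attempts, successes)
               * p_success**successes
               * p_failure**(attempts - successes))
print(f"{probability:.4f}")   # 0.1157, or 150/1296
```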

## One Explanation For Friday the 13th’s Chance

So to give one answer to my calendar puzzle, which you may recall as this: for any given month and year, we know with certainty whether there’s a Friday the 13th in it. And yet, we can say that “Friday the 13ths are more likely than any other day of the week”, and mean something by it, and even mean something true by it. Thanks to the patterns of the Gregorian calendar we are more likely to see a Friday the 13th than we are a Thursday the 13th, or Tuesday the 13th, or so on. (We’re also more likely to see a Saturday the 14th than the 14th being any other day of the week, but somehow that’s not so interesting.)

Here’s one way to look at it. In December 2011 there’s zero chance of encountering a Friday the 13th. As it happens, 2011 has only one month with a Friday the 13th in it, the fewest a year can have. In January 2012 there’s a probability of one of encountering a Friday the 13th; it’s right there on the schedule. There’ll also be Fridays the 13th in April and July of 2012. For the other months of 2012, there’s zero probability of encountering a Friday the 13th.

Imagine that I pick one of the months in either 2011 or 2012. What is the chance that it has a Friday the 13th? If I tell you which month it is, you know right away the chance is zero or one; or, at least, you can tell as soon as you find a calendar. Or you might work out from various formulas what day of the week the 13th of that month should be, but you’re more likely to find a calendar before you are to find that formula, much less work it out.
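You’re likelier still to find the calendar by asking Python, whose datetime module knows the Gregorian rules:

```python
from datetime import date

def friday_13th_months(year):
    """Months of the given year whose 13th falls on a Friday."""
    return [month for month in range(1, 13)
            if date(year, month, 13).weekday() == 4]

print(friday_13th_months(2011))  # [5]: May alone, the year's single case
print(friday_13th_months(2012))  # [1, 4, 7]: January, April, and July
```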

## How Did Friday The 13th Get A Chance?

Here’s a little puzzle in probability which, in a slightly different form, I gave to my students to work out. I get the papers back tomorrow. To brace myself against that I’m curious what my readers here would make of it.

Possibly you’ve encountered a bit of calendrical folklore which says that Friday the 13ths are more likely than any other day of the week’s 13th. That’s not that there are more Fridays the 13th than all the other days of the week combined, but rather that a Friday the 13th is more likely to happen than a Thursday the 13th, or a Sunday, or what have you. And this is true; one is slightly more likely to see a Friday the 13th than to see any other specific day of the week be the 13th.
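The slight edge comes from the Gregorian calendar repeating exactly every 400 years, a span holding 4,800 thirteenths that don’t spread evenly across the weekdays; a tally over one full cycle shows Friday ahead:

```python
from collections import Counter
from datetime import date

# Which weekday each 13th lands on, across one full 400-year Gregorian cycle
counts = Counter(
    date(year, month, 13).weekday()
    for year in range(2000, 2400)
    for month in range(1, 13)
)

friday = 4   # Monday is 0 in Python's weekday numbering
print(counts[friday])                          # 688 Fridays, the most of any weekday
print(counts[friday] == max(counts.values()))  # True
```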

And yet … there’s a problem in talking about the probability of any month having a Friday the 13th. Arguably, no month has any probability of holding a Friday the 13th. Consider.

Is there a Friday the 13th this month? For the month of this writing, December 2011, the answer is no; the 13th is a Tuesday; the Fridays are the 2nd, 9th, 16th, 23rd, and 30th. But were this January 2012, the answer would be yes. For February 2012, the answer is no again, as the 13th comes on a Monday. But altogether, every month has a Friday the 13th or it hasn’t. Technically, we might say that a month which definitely has a Friday the 13th has a probability of 1, or 100%; and a month which definitely doesn’t has a probability of 0, or 0%, but we tend to think of those as chances in the same way we think of white or black as colors, mostly when we want to divert an argument into nitpicking over definitions.

## What Makes Eight Different From Nine?

When last speaking about divisibility rules, we had finally worked out why it is that adding up the digits in a number will tell you whether the number is divisible by nine, or by three. We take the digits in the number, and add them up. If that sum is itself divisible by nine or three, so is the original number.

It’s a great trick. We have to want to do more. In one direction this is easy to expand. Last time we showed it explicitly by working on three-digit numbers; but we could show that adding a fourth digit doesn’t change the reasoning which makes it work. Nor does adding a fifth, nor a sixth. We can carry on until we lose interest in showing longer numbers still work. However long the number is, we can just add up its digits and the same divisible-by-three or divisible-by-nine trick works.
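A machine check of that however-long claim, at least for every number up to a hundred thousand:

```python
def digit_sum(n):
    """Add up the decimal digits of n."""
    return sum(int(digit) for digit in str(n))

# The trick: a number divides by three (or nine) exactly when its digit sum does
print(all(
    (digit_sum(n) % divisor == 0) == (n % divisor == 0)
    for n in range(1, 100_000)
    for divisor in (3, 9)
))  # True
```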

Of course that isn’t enough. We want to check divisibility of more numbers. The obvious thing, at least the thing obvious to me in elementary school when I checked this, was to try other numbers. For example, how about divisibility by eight? And we test quickly … well, 14, one plus four is 5, that doesn’t divide by eight, and neither does fourteen. OK so far. 15 gives us similarly optimistic results. For 16, one plus six is 7, which doesn’t divide by eight, but 16 does, so, ah, obviously there’s something more we have to look at here. Maybe we need to patch up the rule, and look at the sum of the digits plus one and whether that’s divisible by eight.

This may sound a little fishy, but it’s at least a normal part of discovering mathematics, at least in my experience: notice a pattern, and try out little cases, and see if that suggests some overall rule. Sometimes it does; sometimes we find exceptions right away; sometimes a rule looks initially like it’s there and we learn something interesting by finding how it doesn’t.

## What is .19 of a bathroom?

I’ve had a little more time attempting to teach probability to my students and realized I had been overlooking something obvious in the communication of ideas such as the probability of events or the expectation value of a random variable. Students have a much easier time getting the abstract idea if the examples used for it are already ones they find interesting, and if the examples can avoid confusing interpretations. This is probably about 3,500 years behind the curve in educational discoveries, but at least I got there eventually.

A “random variable”, here, sounds a bit scary, but shouldn’t. It means that the variable, for which x is a popular name, is some quantity which might be any of a collection of possible values. We don’t know for any particular experiment what value it has, at least before the experiment is done, but we know how likely it is to be any of those. For example, the number of bathrooms in a house is going to be one of 1, 1.5, 2, 2.5, 3, 3.5, up to the limits of tolerance of the zoning committee.

The expectation value of a random variable is kind of the average value of that variable. You find it by taking the sum of each of the possible values of the random variable times the probability of the random variable having that value. This is at least for a discrete random variable, where the imaginable values are, er, discrete: there’s no continuous ranges of possible values. Number of bathrooms is clearly discrete; the number of seconds one spends in the bathroom is, at least in principle, continuous. For a continuous random variable you don’t take the sum, but instead take an integral, which is just a sum that handles the idea of infinitely many possible values quite well.
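A sketch of that sum, with a made-up distribution of bathroom counts (the probabilities here are invented for illustration, not taken from any housing data):

```python
# Hypothetical distribution of bathrooms per house; probabilities are invented
distribution = {1: 0.13, 1.5: 0.09, 2: 0.33, 2.5: 0.24, 3: 0.14, 3.5: 0.07}

assert abs(sum(distribution.values()) - 1) < 1e-12  # a distribution must total 1

# Expected value: each possible value times the probability of seeing it
expected = sum(value * prob for value, prob in distribution.items())
print(f"{expected:.2f}")  # 2.19, a number of bathrooms no actual house has
```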

## At Least One Daughter Exists

In the class I’m teaching we’ve entered probability. This is a fun subject. It’s one of the bits of mathematics which people encounter most often, about as much as the elements of geometry enter ordinary life. It seems like everyone has some instinctive understanding of probability, at least given how people will hear a probability puzzle and give a solution with confidence. You don’t get that with pure algebra problems. Ask someone “the neighbor’s two children were born three years apart and twice the sum of their ages is 42; how old are they?” and you get an assurance of how mathematics was always their weakest subject and they never could do it. Ask someone “one of the neighbor’s children just walked in, and was a girl; what is the probability the other child is also a girl?” and you’ll get an answer.

But it’s getting a correct answer that is really interesting, and unfortunately, while everyone has some instinctive understanding and will give an answer as above, there’s little guarantee it’ll be the right one. Sometimes, and I say this looking over the exam papers, it seems our instinctive understanding of probability is designed to be the wrong one. I’m happy that people aren’t afraid of doing probability questions, not the way they are afraid of algebra or geometry or calculus or the more exotic realms, though, and feel like it’s my role to find the most straightforward ways to understanding which start from that willingness to try.

Some of the rotten track record people have in probability puzzles probably derives from how so many probability puzzles start as recreational puzzles, that is, things which are meant to look easy and turn out to be subtly complicated. I suspect the daughters-question comes from recreational puzzles, since there’s the follow-up question that “the elder child enters, and is a girl; what is the probability the younger is a girl?” There’s some soundness in presenting the two as a learning path, since they present what looks like the same question twice, and get different answers, and learning why there are different answers teaches something about how to do probability questions. But it still feels to me like the goal is that pleasant confusion a trick offers.
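Reading the first question as “at least one of the two children is a girl” (the standard textbook reading, with boys and girls equally likely and independent), both versions can be settled by enumerating the four equally likely sibling pairs:

```python
from itertools import product
from fractions import Fraction

# Each family: (elder, younger), all four combinations equally likely
families = list(product("BG", repeat=2))

def probability(condition, given):
    """P(condition | given), counted over the equally likely families."""
    satisfying_given = [f for f in families if given(f)]
    satisfying_both = [f for f in satisfying_given if condition(f)]
    return Fraction(len(satisfying_both), len(satisfying_given))

# "At least one child is a girl": what's the chance both are girls?
print(probability(lambda f: f == ("G", "G"), lambda f: "G" in f))  # 1/3
# "The elder child is a girl": what's the chance the younger is too?
print(probability(lambda f: f[1] == "G", lambda f: f[0] == "G"))   # 1/2
```

The same-looking question twice, with two different answers; the difference is in which families the condition rules out.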

## Are You Stronger Than Jupiter?

A comment on my earlier piece, which compared the acceleration due to gravity we feel from the Moon to what we feel from someone else in the room, challenged me: how strong is the gravitational pull from Jupiter, compared to that of someone else in the room? Jupiter has a great edge in mass: someone else in the room weighs in at somewhere around 75 kilograms, while the planet comes to around 1,898,600,000,000,000,000,000,000,000 kilograms. On the other hand, your neighbor is somewhere around one meter away, while Jupiter will be something like 816,520,800,000 meters away. Maybe farther: that’s the farthest Jupiter gets from the Sun, and the Earth can be on the opposite side of the Sun from Jupiter, so add another 152,098,232,000 meters on top of that.
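Those figures drop straight into Newton’s law of gravitation, a = GM/r²; a sketch comparing the two pulls with Jupiter at its very farthest:

```python
G = 6.674e-11                   # gravitational constant, m^3 / (kg s^2)

mass_jupiter = 1.8986e27        # kg
# Worst case: Jupiter at aphelion, the Earth on the far side of the Sun
r_jupiter = 816_520_800_000 + 152_098_232_000   # meters

mass_neighbor = 75              # kg
r_neighbor = 1.0                # meters

# Acceleration toward each mass: a = G m / r^2
a_jupiter = G * mass_jupiter / r_jupiter**2
a_neighbor = G * mass_neighbor / r_neighbor**2

print(f"toward Jupiter:  {a_jupiter:.2e} m/s^2")
print(f"toward neighbor: {a_neighbor:.2e} m/s^2")
print(a_jupiter > a_neighbor)   # True, even with Jupiter at its worst
```

Even from the far side of the Sun, Jupiter’s pull works out a couple dozen times the neighbor’s.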

That distance is going to be a little bit of a nuisance. The acceleration we feel towards any planet will be stronger the nearer it gets, and while, say, Neptune is always about the same distance from the Earth, there are times that Venus and Mars are surprisingly close. Usually these are announced by clouds of forwarded e-mails announcing that Mars will be closer to the Earth than it’s been in 33,000 years, and will appear to be as large as the Empire State Building. Before you have even had the chance to delete these e-mails unread, the spoilsport in your group of e-mail friends will pass along the snopes.com report that none of this is true and the e-mail has been going around unaltered since 1997 anyway. But there is still a smallest distance and a greatest distance any planet gets from the Earth.

If we want to give planets the best shot, let’s look at the smallest distance any planet gets from the Earth. For Mercury and Venus, this happens when the planet is at aphelion, the farthest it gets from the Sun, and the Earth at perihelion, the nearest it gets. For the outer planets, it happens with the Earth at aphelion and the other planet at perihelion. (Some might say ‘apogee’ and ‘perigee’, although these are properly speaking only the words to use when something orbits the Earth. Some might say ‘apoapsis’ and ‘periapsis’, which talk about the nearest and farthest points of an orbit without being particular about what is being orbited, but no one actually does.) Here I’m making the assumption that there’s no weird orbital locks where, say, the Earth can’t be at perihelion while Venus is at aphelion, which might even be true. It’s probably close enough.