## A Simple Demonstration Which Does Not Clarify

When last we talked about the “clean sweep” of winning contestants coming from the same one of the four seats in Contestants Row for all six Items Up For Bid on The Price Is Right, we had established the pieces needed if we suppose this to be a binomial distribution problem. That is, we suppose that any given episode has a probability, p, of successfully having all six contestants from the same seat, and a probability 1 – p of failing to have all six contestants from the same seat. There are N episodes, and we are interested in the chance of x of them being clean sweeps. From the production schedule we know the number of episodes N is about 6,000. We supposed the probability of a clean sweep to be about p = 1/1000, on the assumption that the chance of winning isn’t any better or worse for any seat. The probability of there not being a clean sweep is then 1 – p = 999/1000. And we expected x = 6 clean sweeps, while Drew Carey claimed there had been only 1.

The chance of finding x successes out of N attempts, according to the binomial distribution, is the probability of any one combination of x successes and N – x failures — which is equal to p^x · (1 – p)^(N – x) — times the number of ways there are to select x items out of N candidates. Either of those is easy enough to calculate, up to the point where we try calculating it. Let’s start out by supposing x to be the expected 6, and later we’ll look at it being 1 or other numbers.
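If you’d like to follow along by machine, here’s a quick sketch of that formula in Python. The figures N = 6,000 and p = 1/1000 are the rough estimates from above, not exact production data.

```python
from math import comb

def binomial_pmf(x, N, p):
    """Chance of exactly x successes in N trials: C(N, x) * p^x * (1-p)^(N-x)."""
    return comb(N, x) * p**x * (1 - p)**(N - x)

N, p = 6000, 1/1000            # rough episode count and clean-sweep chance
for x in (0, 1, 6):
    print(x, binomial_pmf(x, N, p))
```

With these numbers, the “expected” six clean sweeps has a chance of only about sixteen percent, and exactly one clean sweep comes in at about a percent and a half.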

## Off By A Factor Of 720 (Or More)

To work out whether it was plausible that there had been only one “clean sweep” — all six contestants winning the Item Up For Bid on The Price Is Right coming from the same seat — we had started a little into the binomial distribution. The key ideas included that we have “Bernoulli trials”, a number of independent chances for some condition to happen — in this case, we had about 6,000 such trials, the number of hourlong episodes of The Price Is Right — and a probability p of successfully seeing the event occur on any one episode. We worked that out to be somewhere about p = 1/1000, if every seat is equally likely to win every time. There is also a probability of 1 – p, or 999/1000, of the event failing to occur, that is, of one or more contestants coming from a different seat.

To find the probability of seeing some number, call it x since we don’t particularly care what it is, of successes out of some larger number, call it N because that’s a convenient number, of trials, we need to figure out how many ways there are to arrange x successes out of N trials. For small x and N values we can figure this out by hand, given time. For large numbers, we’d never finish if we tried by hand. But we can solve it, if we attack the problem methodically.
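For a sense of scale, a small sketch in Python: the count of arrangements is the binomial coefficient N!/(x!(N − x)!), which the standard library can compute directly.

```python
from math import comb, factorial

def choose(N, x):
    """Ways to arrange x successes among N trials: N! / (x! * (N - x)!)."""
    return factorial(N) // (factorial(x) * factorial(N - x))

print(choose(4, 2))     # a small case we could list by hand: 6 arrangements
print(choose(6000, 6))  # far too many to ever check by hand

# the standard library agrees, without spelling out the giant factorials:
assert choose(6000, 6) == comb(6000, 6)
```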

## From Drew Carey To An Imaginary Baseball Player

So, we calculated that on any given episode of The Price Is Right there’s around one chance in a thousand of all six winners of the Item Up For Bid coming from the same seat. And we know there have been about six thousand episodes with six Items Up For Bid. So we expect there to have been about six clean sweep episodes; yet if Drew Carey is to be believed, there has been just the one. What’s wrong?

Possibly, nothing. Just because there is a certain probability of a thing happening does not mean it happens all that often. Consider an analogous situation: a baseball batter might hit safely one time out of every three at-bats; but there would be nothing particularly odd in the batter going hitless in four at-bats during a single game, however much we would expect him to get at least one. There wouldn’t be much very peculiar in his hitting all four times, either. Our expected value, the number of times something could happen times the probability of it happening each time, is not necessarily what we actually see. (We might get suspicious if we always saw the expected value turn up.)
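The batter’s situation is easy enough to put into numbers; a sketch of the arithmetic:

```python
p_hit = 1/3                   # chance of a hit in any one at-bat
p_hitless = (1 - p_hit)**4    # no hits in four at-bats: (2/3)^4
p_perfect = p_hit**4          # hits in all four at-bats: (1/3)^4
expected  = 4 * p_hit         # the expected number of hits in a game

print(p_hitless)  # about 0.198, nearly one game in five
print(p_perfect)  # about 0.012
print(expected)   # about 1.33 hits per game
```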

Still, there must be some limits. We might accept a batter who hits one time out of every three getting no hits in four at-bats. If he got no hits in four hundred at-bats, we’d be inclined to say he’s not a decent hitter having some bad luck. More likely he’s failing to bring the bat with him to the plate. We need a tool to say whether some particular outcome is tolerably likely or so improbable that something must be up.

## Came On Down

On the December 15th episode of The Price Is Right, host Drew Carey mentioned as the sixth Item Up For Bids began that, so far that show, all five of the contestants who had won their Item Up For Bids (and so got on-stage for the pricing games) had come from the same spot. He said that only once before on the show had all six contestants come from the same seat in Contestants Row. That seems awfully few, but, how many should there be?

We can say roughly how many “clean sweep” shows we should expect. There’ve been just about 6,000 episodes of The Price Is Right played in the current hour-long format (the show was a half-hour its first few years after being revived in 1972; it was a very different show in previous decades). If we know the probability of all six contestants in one show winning their Item Up For Bids — properly speaking, it’s called the One-Bid, but nobody cares — from the same seat, and multiply that probability by the number of shows, we have the number of shows we should expect to have had such a clean sweep. This product, the chance of something happening times the number of times it could happen, is termed the “expected value” or “expectation value”, or sometimes just the “mean”, as in the average number to be, well, expected.
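The arithmetic of that expected value is about as short as arithmetic gets; as a sketch, using the rough figures above:

```python
episodes = 6000     # rough count of hour-long episodes
p_sweep  = 1/1000   # supposed chance of a clean sweep on any one show

expected_sweeps = episodes * p_sweep
print(expected_sweeps)  # 6.0 clean sweeps expected over the show's run
```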

This makes a couple of assumptions. All probability problems do. For example, it assumes the chance of a clean sweep in one show is unaffected by clean sweeps in other shows. That is, if everyone in the red seat won on Thursday, that wouldn’t make everyone in the blue seat winning Friday more or less likely. That condition is termed “independence”, and it is frequently relied upon to make probability problems work out. Unfortunately, it’s often hard to prove: how do you prove that one thing happening doesn’t affect the other?

## Mid-Course Correction 1, Continued

The next part of examining just how well this blog is doing is to think about the mechanical details of it. I’ve been publishing something typically three times a week, which feels to me like an easy enough schedule, and posting that in the evenings, which leaves me the workdays of Monday, Wednesday, and Friday to actually think and compose things. The pieces have been around a thousand words long, although they have a greater tendency to run long than short. Thinking of something to write is hard; keeping going once I’ve started is easy.

That’s all set to my convenience. But I’m curious what readers think of these properties. For one, is the three-a-week schedule a good one? I feel like a weekly blog is too easy to forget about reading, and a daily one might be too much if the subject hasn’t got the fun aspects of ridiculing comic strips or the exciting aspects of ridiculing other people’s politics. Is my instinct reasonable? Also, the publication in the evening is nice for me, but it does mean articles go up when at least one of my readers has gone to sleep, and the Friday evening article can go missing completely against the backdrop of the weekend. Is there a better time?

Is the length reasonable? Should I try writing to a shorter length so as to not present so many walls of text, particularly when I start getting into topics that need equations or algebraic manipulations; or should I let several short pieces run into a more unified and longer post? What’s the natural reading length for pop mathematics that you find interesting? Yes, yes, a good essay is never long enough, but I’m not arrogant enough to think I’m always being very interesting.

As ever I’ll try to be good-spirited about complaints. I don’t promise to take everybody’s advice, but I do promise to consider it.

## Mid-Course Correction 1

I’m about three months into this particular blog-writing experiment, so it’s probably time to start over-thinking it. For the most part I’m happy; I like doing some thinking about mathematics in this kind of organized way, and I really like that I keep finding a thousand or so words to say on different topics, and that those felt to me to be topics that aren’t written about obsessively much in the rest of the pop mathematics universe. And the results have fit my typical self-estimation, that I find it all quite satisfying until the moment I publish, then realize I’ve just shown to the world the stupidest words ever strung together, and as I get some distance from publication come to find I didn’t say what I wanted quite right, but I did acceptably well.

My satisfaction’s not necessarily the important part, though; somewhere in the list of motives I have for writing is to communicate. So, I’d like to know whether you-the-presumed-reader does think I’m communicating. Am I, at least generally, writing about interesting topics; am I varying the topics at a reasonable rate, or should I keep on one thread for more or fewer posts in a row; are the individual essays as interesting as the topics demand?

I’ll try to be good-natured about criticisms, whether put out here or sent to me directly. I don’t promise to change in response to any particular complaint, but I will do my best to listen and consider whether it feels right and whether it might be something I can or want to act on. For example, one person said I’m harder to start than to finish reading. This feels odd to me, but I’m curious how other people see the same writings.

When last we discussed divisibility rules, particularly, rules for just adding up the digits in a number to tell what it might divide by, we had worked out rules for testing divisibility by eight. In that, we take the sum of four times the hundreds digit, plus two times the tens digit, plus the units digit, and if that sum is divisible by eight, then so is the original number. This hasn’t got the slick, smooth memorability of the rules for three and nine — just add all the digits up — or the simplicity of checking for divisibility by ten, five, or two — just look at the last digit — but it’s not a complicated rule either.
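Since there are only a thousand three-digit cases, a machine can check the rule exhaustively; a quick sketch:

```python
# Verify: a number from 0 to 999 is divisible by eight exactly when
# 4*(hundreds digit) + 2*(tens digit) + (units digit) is divisible by eight.
for n in range(1000):
    hundreds, tens, units = n // 100, (n // 10) % 10, n % 10
    rule_sum = 4*hundreds + 2*tens + units
    assert (n % 8 == 0) == (rule_sum % 8 == 0)
print("the rule holds in all 1000 cases")
```

The multipliers aren’t mysterious: 100 leaves a remainder of 4 when divided by eight, and 10 a remainder of 2, which is where the 4 and the 2 come from.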

Still, we came at it through an experimental method, fiddling around with possible rules until we found one which seemed to work. And since there are only a thousand possible cases to consider, we can check that it works in every one of those cases. That’s tiresome to do, but it works, and it’s a legitimate way of forming mathematical rules. Quite a number of proofs amount to dividing a problem into several different cases and showing that whatever we mean to prove is so in each case.

Let’s see what we can do to tidy up the proof, though, and see if we can make it work without having to test out so many cases. We can, or I’d have been foolish to start this essay rather than another; along the way, though, we can remove the traces that show the experimenting that led to the technique. We can put forth the cleaned-up reasoning and look all the more clever because it isn’t so obvious how we got there. This is another common property of proofs; the most attractive or elegant method of presenting them can leave the reader wondering how it was ever imagined.

## Hopefully, Saying Something True

I wanted to talk about drawing graphs that represent something, and to get there have to say what kinds of things I mean to represent. The quick and expected answer is that I mean to represent some kind of equation, such as “y = 3*x – 2” or “x² + y² = 4”, and that probably does come up the most often. We might also be interested in representing an inequality, something like “x² – 2y² ≤ 1”. On occasion we’re interested just in the region where something is not true, saying something like “y ≠ 3 – x”. (I’ve used nice small counting numbers here not out of any interest in these numbers, or because larger ones or non-whole numbers or even irrational numbers don’t work, but because there is something pleasantly reassuring about seeing a “1” or a “2” in an equation. We strongly believe we know what we mean by “1”.)

Anyway, what we’ve written down is something describing a relationship which we are willing to suppose is true. We might not know what x or y are, and we might not care, but at least for the length of the problem we will suppose that the number represented by y must be equal to three times whatever number is represented by x, minus two. There might be only a single value of x we find interesting; there might be several; there might be infinitely many such values. There’ll be a corresponding number of y’s, at least, so long as the equation is true.

Sometimes we’ll turn the description in terms of an equation into a description in terms of a graph right away. Some of these descriptions are like those of a line — the “y = 3*x – 2” equation — or a simple shape — “x² + y² = 4” is a circle — in that we can turn them into graphs right away without having to process them, at least not once we’re familiar and comfortable with the idea of graphing. Some of these descriptions are going to be in awkward forms. “x + 2 = –y²/x + 2y/x” is really just an awkward way to describe a circle (more or less), but that shape is hidden in the writing.
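That hidden circle can be checked numerically: multiplying the awkward equation through by x gives x² + 2x + y² – 2y = 0, which is (x + 1)² + (y – 1)² = 2, a circle of radius √2 around (–1, 1). A sketch of the check (the sampling step is arbitrary, chosen only so no sample lands exactly on x = 0):

```python
from math import cos, sin, sqrt, pi, isclose

# Walk around the circle (x+1)^2 + (y-1)^2 = 2 and confirm each sampled
# point also satisfies the awkward form x + 2 = -y^2/x + 2*y/x.
for k in range(1, 12):
    t = 2*pi*k/12.7           # odd step size, so we never hit x = 0 exactly
    x = -1 + sqrt(2)*cos(t)
    y =  1 + sqrt(2)*sin(t)
    if abs(x) > 1e-6:         # the awkward form divides by x
        assert isclose(x + 2, -y**2/x + 2*y/x, abs_tol=1e-9)
print("every sampled point satisfies the awkward form")
```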

## Before Drawing a Graph

I want to talk about drawing graphs, specifically, drawing curves on graphs. We know roughly what’s meant by that: it’s about wiggly shapes with a faint rectangular grid, usually in grey or maybe drawn in dotted lines, behind them. Sometimes the wiggly shapes will be in bright colors, to clarify a complicated figure or to justify printing the textbook in color. Those graphs.

I clarify because there is a type of math called graph theory in which, yes, you might draw graphs, but there what’s meant by a graph is just any sort of group of points, called vertices, connected by lines or curves. It makes great sense as a name, but it’s not what someone who talks about drawing a graph means, up until graph theory gets into consideration. Those graphs are fun, particularly because they’re insensitive to exactly where the vertices are, so you get to exercise some artistic talent instead of figuring out whatever you were trying to prove in the problem.

The ordinary kind of graphs offer some wonderful advantages. The obvious one is that they’re pictures. People can very often understand a picture of something much faster than they can understand other sorts of descriptions. This probably doesn’t need any demonstration; if it does, try looking at a map of the boundaries of South Carolina versus reading a description of its boundaries. Some problems are much easier to work out if we can approach them as geometric problems. (And I admit feeling a particular delight when I can prove a problem geometrically; it feels cleverer.)

## Ted Baxter and the Binomial Distribution

There are many hard things about teaching, although I appreciate that since I’m in mathematics I have advantages over many other fields. For example, students come in with the assumption that there are certainly right and certainly wrong answers to questions. I’m generally spared the problem of convincing students that I have authority to rule some answers in or out. There’s actually a lot of discretion and judgement and opinion involved, but most of that comes in when one is doing research. In an introductory course, there are some techniques that have gotten so well-established and useful we could fairly well pretend there isn’t any judgement left.

But one hard part is probably common to all fields: how closely to guide a student working out something. This case comes from office hours, as I tried getting a student to work out a problem in binomial distributions. Binomial distributions come up in studying the case where there are many attempts at something; and each attempt has a certain, fixed, chance of succeeding; and you want to know the chance of there being exactly some particular number of successes out of all those tries. For example, imagine rolling four dice, and being interested in getting exactly two 6’s on the four dice.

To work it out, you need the number of attempts, and the number of successes you’re interested in, and the chance of each attempt at something succeeding, and the chance of each attempt failing. For the four-dice problem, each attempt is the rolling of one die; there are four attempts at rolling a die; we’re interested in finding two successful rolls of 6; the chance of successfully getting a 6 on any roll is 1/6; and the chance of failure on any one roll is —
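(Filling in the blank: 5/6.) The whole four-dice calculation then fits in a few lines; a sketch:

```python
from math import comb

p_success = 1/6   # chance of rolling a 6 on one die
p_failure = 5/6   # chance of not rolling a 6

# exactly two 6's among four dice: C(4, 2) arrangements, each of
# probability p_success^2 * p_failure^2
prob = comb(4, 2) * p_success**2 * p_failure**2
print(prob)  # 25/216, a bit over eleven percent
```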

## One Explanation For Friday the 13th’s Chance

So to give one answer to my calendar puzzle, which you may recall as this: for any given month and year, we know with certainty whether there’s a Friday the 13th in it. And yet, we can say that “Friday the 13ths are more likely than any other day of the week”, and mean something by it, and even mean something true by it. Thanks to the patterns of the Gregorian calendar we are more likely to see a Friday the 13th than we are a Thursday the 13th, or Tuesday the 13th, or so on. (We’re also more likely to see a Saturday the 14th than the 14th being any other day of the week, but somehow that’s not so interesting.)

Here’s one way to look at it. In December 2011 there’s zero chance of encountering a Friday the 13th. As it happens, 2011 has only one month with a Friday the 13th in it, the fewest any year can have. In January 2012 there’s a probability of one of encountering a Friday the 13th; it’s right there on the schedule. There’ll also be Fridays the 13th in April and July of 2012. For the other months of 2012, there’s zero probability of encountering a Friday the 13th.

Imagine that I pick one of the months in either 2011 or 2012. What is the chance that it has a Friday the 13th? If I tell you which month it is, you know right away the chance is zero or one; or, at least, you can tell as soon as you find a calendar. Or you might work out from various formulas what day of the week the 13th of that month should be, but you’re more likely to find a calendar before you are to find that formula, much less work it out.

## How Did Friday The 13th Get A Chance?

Here’s a little puzzle in probability which, in a slightly different form, I gave to my students to work out. I get the papers back tomorrow. To brace myself against that I’m curious what my readers here would make of it.

Possibly you’ve encountered a bit of calendrical folklore which says that Friday the 13ths are more likely than any other day of the week’s 13th. That’s not that there are more Fridays the 13th than all the other days of the week combined, but rather that a Friday the 13th is more likely to happen than a Thursday the 13th, or a Sunday, or what have you. And this is true; one is slightly more likely to see a Friday the 13th than any other specific day of the week being the 13th.
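That claim is checkable: the Gregorian calendar repeats exactly every 400 years, so tallying the 4,800 thirteenths in one full cycle settles it. A sketch using Python’s standard library:

```python
from datetime import date
from collections import Counter

# Count which weekday the 13th lands on over one 400-year Gregorian
# cycle (4,800 months); any 400-year span gives the same tallies.
counts = Counter(date(year, month, 13).strftime("%A")
                 for year in range(2000, 2400)
                 for month in range(1, 13))
print(counts.most_common())
```

Friday comes out on top with 688 of the 4,800, just ahead of Wednesday and Sunday at 687 apiece.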

And yet … there’s a problem in talking about the probability of any month having a Friday the 13th. Arguably, no month has any probability of holding a Friday the 13th. Consider.

Is there a Friday the 13th this month? For the month of this writing, December 2011, the answer is no; the 13th is a Tuesday; the Fridays are the 2nd, 9th, 16th, 23rd, and 30th. But were this January 2012, the answer would be yes. For February 2012, the answer is no again, as the 13th comes on a Monday. But altogether, every month has a Friday the 13th or it hasn’t. Technically, we might say that a month which definitely has a Friday the 13th has a probability of 1, or 100%; and a month which definitely doesn’t has a probability of 0, or 0%, but we tend to think of those as chances in the same way we think of white or black as colors, mostly when we want to divert an argument into nitpicking over definitions.

## What Makes Eight Different From Nine?

When last speaking about divisibility rules, we had finally worked out why it is that adding up the digits in a number will tell you whether the number is divisible by nine, or by three. We take the digits in the number, and add them up. If that sum is itself divisible by nine or three, so is the original number.

It’s a great trick. We have to want to do more. In one direction this is easy to expand. Last time we showed it explicitly by working on three-digit numbers; but we could show that adding a fourth digit doesn’t change the reasoning which makes it work. Nor does adding a fifth, nor a sixth. We can carry on until we lose interest in showing longer numbers still work. However long the number is, we can just add up its digits and the same divisible-by-three or divisible-by-nine trick works.
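There’s no need to take the digit-sum trick on faith, either; a sketch that grinds through the first hundred thousand numbers:

```python
# The digit-sum test: a number divides by nine (or by three) exactly
# when the sum of its digits does.
def digit_sum(n):
    return sum(int(d) for d in str(n))

for n in range(1, 100000):
    assert (n % 9 == 0) == (digit_sum(n) % 9 == 0)
    assert (n % 3 == 0) == (digit_sum(n) % 3 == 0)
print("digit-sum rule verified up to 99,999")
```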

Of course that isn’t enough. We want to check divisibility of more numbers. The obvious thing, at least the thing obvious to me in elementary school when I checked this, was to try other numbers. For example, how about divisibility by eight? And we test quickly … well, 14, one plus four is 5, that doesn’t divide by eight, and neither does fourteen. OK so far. 15 gives us similarly optimistic results. For 16, one plus six is 7, which doesn’t divide by eight, but 16 does, so, ah, obviously there’s something more we have to look at here. Maybe we need to patch up the rule, and look at the sum of the digits plus one and whether that divides eight.

This may sound a little fishy, but it’s at least a normal part of discovering mathematics, at least in my experience: notice a pattern, and try out little cases, and see if that suggests some overall rule. Sometimes it does; sometimes we find exceptions right away; sometimes a rule looks initially like it’s there and we learn something interesting by finding how it doesn’t.

## What is .19 of a bathroom?

I’ve had a little more time attempting to teach probability to my students and realized I had been overlooking something obvious in the communication of ideas such as the probability of events or the expectation value of a random variable. Students have a much easier time getting the abstract idea if the examples used for it are already ones they find interesting, and if the examples can avoid confusing interpretations. This is probably about 3,500 years behind the curve in educational discoveries, but at least I got there eventually.

A “random variable”, here, sounds a bit scary, but shouldn’t. It means that the variable, for which x is a popular name, is some quantity which might be any of a collection of possible values. We don’t know for any particular experiment what value it has, at least before the experiment is done, but we know how likely it is to be any of those. For example, the number of bathrooms in a house is going to be one of 1, 1.5, 2, 2.5, 3, 3.5, up to the limits of tolerance of the zoning committee.

The expectation value of a random variable is kind of the average value of that variable. You find it by taking the sum of each of the possible values of the random variable times the probability of the random variable having that value. This is at least for a discrete random variable, where the imaginable values are, er, discrete: there’s no continuous ranges of possible values. Number of bathrooms is clearly discrete; the number of seconds one spends in the bathroom is, at least in principle, continuous. For a continuous random variable you don’t take the sum, but instead take an integral, which is just a sum that handles the idea of infinitely many possible values quite well.
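As a sketch of the discrete case, with a made-up distribution of bathroom counts (the probabilities here are invented for illustration, not drawn from any housing data):

```python
# Hypothetical probabilities for the number of bathrooms in a house.
distribution = {1: 0.20, 1.5: 0.25, 2: 0.30, 2.5: 0.15, 3: 0.10}
assert abs(sum(distribution.values()) - 1) < 1e-12  # a proper distribution

# Expectation value: each possible value times its probability, summed.
expected = sum(value * prob for value, prob in distribution.items())
print(expected)  # about 1.85 -- a fraction of a bathroom no house actually has
```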

## At Least One Daughter Exists

In the class I’m teaching we’ve entered probability. This is a fun subject. It’s one of the bits of mathematics which people encounter most often, about as much as the elements of geometry enter ordinary life. It seems like everyone has some instinctive understanding of probability, at least given how people will hear a probability puzzle and give a solution with confidence. You don’t get that with pure algebra problems. Ask someone “the neighbor’s two children were born three years apart and twice the sum of their ages is 42; how old are they?” and you get an assurance of how mathematics was always their weakest subject and they never could do it. Ask someone “one of the neighbor’s children just walked in, and was a girl; what is the probability the other child is also a girl?” and you’ll get an answer.

But it’s getting a correct answer that is really interesting, and unfortunately, while everyone has some instinctive understanding and will give an answer as above, there’s little guarantee it’ll be the right one. Sometimes, and I say this looking over the exam papers, it seems our instinctive understanding of probability is designed to be the wrong one. I’m happy that people aren’t afraid of doing probability questions, not the way they are afraid of algebra or geometry or calculus or the more exotic realms, though, and feel like it’s my role to find the most straightforward ways to understanding which start from that willingness to try.

Some of the rotten track record people have in probability puzzles probably derives from how so many probability puzzles start as recreational puzzles, that is, things which are meant to look easy and turn out to be subtly complicated. I suspect the daughters-question comes from recreational puzzles, since there’s the follow-up question that “the elder child enters, and is a girl; what is the probability the younger is a girl?” There’s some soundness in presenting the two as a learning path, since they present what looks like the same question twice, and get different answers, and learning why there are different answers teaches something about how to do probability questions. But it still feels to me like the goal is that pleasant confusion a trick offers.
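The two versions of the puzzle can be simulated rather than argued over; a Monte Carlo sketch, under the usual recreational-puzzle reading where each child is independently a girl or boy with even odds:

```python
import random

random.seed(13)  # fixed seed so the run is repeatable
families = [(random.choice("GB"), random.choice("GB")) for _ in range(100_000)]

# "At least one of the children is a girl": chance both are girls.
at_least_one = [f for f in families if "G" in f]
p_both = sum(f == ("G", "G") for f in at_least_one) / len(at_least_one)

# "The elder child is a girl": chance the younger is a girl too.
elder_girl = [f for f in families if f[0] == "G"]
p_younger = sum(f[1] == "G" for f in elder_girl) / len(elder_girl)

print(p_both)     # near 1/3
print(p_younger)  # near 1/2
```

The same-looking question twice, and two different answers: conditioning on “at least one girl” leaves three equally likely families, only one of them two-girl, while conditioning on the elder child leaves the younger child’s odds untouched.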

## Are You Stronger Than Jupiter?

A comment on my earlier piece comparing the acceleration due to gravity that we feel from the Moon compared to what we feel from someone else in the room challenged me: how strong is the gravitational pull from Jupiter, compared to that of someone else in the room? Jupiter has a great edge in mass: someone else in the room weighs in at somewhere around 75 kilograms, while the planet comes to around 1,898,600,000,000,000,000,000,000,000 kilograms. On the other hand, your neighbor is somewhere around one meter away, while Jupiter will be something like 816,520,800,000 meters away. Maybe farther: that’s the farthest Jupiter gets from the Sun, and the Earth can be on the opposite side of the Sun from Jupiter, so add up to another 152,098,232,000 meters on top of that.

That distance is going to be a little bit of a nuisance. The acceleration we feel towards any planet will be stronger the nearer it gets, and while, say, Neptune is always about the same distance from the Earth, there are times that Venus and Mars are surprisingly close. Usually these are announced by clouds of forwarded e-mails announcing that Mars will be closer to the Earth than it’s been in 33,000 years, and will appear to be as large as the Empire State Building. Before you have even had the chance to delete these e-mails unread the spoilsport in your group of e-mail friends will pass along the snopes.com report that none of this is true and the e-mail has been going around unaltered since 1997 anyway. But there is still a smallest distance and a greatest distance any planet gets from the Earth.

If we want to give planets the best shot, let’s look at the smallest distance any planet gets from the Earth. For Mercury and Venus, this happens when the planet is at aphelion, the farthest it gets from the Sun, and the Earth at perihelion, the nearest it gets. For the outer planets, it happens with the Earth at aphelion and the other planet at perihelion. (Some might say ‘apogee’ and ‘perigee’, although these are properly speaking only the words to use when something orbits the Earth. Some might say ‘apoapsis’ and ‘periapsis’, which talk about the nearest and farthest points of an orbit without being particular about what is being orbited, but no one actually does.) Here I’m making the assumption that there’s no weird orbital locks where, say, the Earth can’t be at perihelion while Venus is at aphelion, which might even be true. It’s probably close enough.
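With those assumptions in hand, the comparison is a one-line formula, a = G·M/r². A sketch with the figures quoted above (the gravitational constant and the inverse-square form are standard; the masses and distances are the rough values from the text):

```python
G = 6.674e-11  # gravitational constant, m^3 / (kg s^2)

def accel(mass_kg, distance_m):
    """Acceleration toward a body of the given mass at the given distance."""
    return G * mass_kg / distance_m**2

neighbor = accel(75, 1.0)  # someone about 75 kg, about a meter away
jupiter  = accel(1.8986e27, 8.165208e11 + 1.52098232e11)  # worst-case distance

print(neighbor)  # about 5e-9 m/s^2
print(jupiter)   # about 1.4e-7 m/s^2, even at Jupiter's farthest
```

So even giving the neighbor every advantage, Jupiter’s pull comes out a couple dozen times the stronger.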

## A Planet Is Not A Dot

A lot of what I said in describing how we might fall into the Moon, if you and I were in the same room and suddenly the rest of the world stopped existing, was incorrect. That isn’t to say it was wrong or even bad to consider; it just means that the equations that I produced and the numbers that came out from them aren’t exactly what would happen if the sudden-failure-of-planet-Earth case were to happen. I knew they wouldn’t be exactly right going in, which leaves us the question of what I thought I was doing and why I bothered doing it.

The first reason, and the reason why it wasn’t a waste of time to consider these simple approximations of how strongly the Moon is attracting us — how fast we are falling into it, and how fast we would be falling if the Earth weren’t falling into the Moon along with us — is thanks to something which Isaac Asimov perfectly described. In an essay called “The Relativity Of Wrong”, he wrote about — well, the title says it. Ideas are not just right or wrong; they can be wrong by differing amounts, and can be wrong by such a tiny amount that it isn’t worth the complications to get it exactly right. Probably the most familiar example is the flatness of the Earth. To model the globe, or a large nation, the idea that the Earth is nearly flat is sufficiently wrong as to produce measurable, important errors where plots of land are justifiably claimed by multiple owners, maybe from multiple governments, or aren’t claimed at all and form the basis for nowhere towns in which mild fantasy or comic stories can be set. But if one wants to draw a map of the town, or of one’s own property, the curvature of the Earth is not worth considering. We can pretend the Earth is flat and get our work done a lot sooner. Other sources of error will mess up the precise result before that does.

## In Case Of Sudden Failure Of Planet Earth

Have you ever figured out just exactly what you would do if the Earth were to suddenly disappear from the universe, leaving just you and whatever’s around to fall towards whatever the nearest heavenly bodies are? No, me neither. Asked to improvise one, I suppose I’d suffocate within minutes and then everything else becomes not so interesting to me, although possibly my heirs might be interested, if they’re somewhere.

I did double-check, though, that she meant the gravitational pull of the Moon, rather than its tidal pull. The shorthand reason for this is that arguments for astrology having some physical basis tend to run along the lines of, the Moon creates the tides (the Sun does too, but smaller ones), tides are made of water (rock moves, too, although much less), human bodies are mostly water (I don’t know what the fluid properties of cytoplasm are, but I’m almost curious enough to look them up), so there must be something tide-like in human bodies too (so there). The gravitational pull of the Moon, meanwhile, doesn’t really mean much: the Moon is going to accelerate the Earth and the people standing on it by just about the same amount. The force of gravity between two objects grows with the two objects’ masses, and the Earth is more massive than any person on it. But this means the Earth feels a greater force pulling it towards the Moon, and the acceleration works out to be just the same. The force of gravity between two objects falls off as the square of the distance between them, and the people on the surface of the Earth are a little bit closer or a little bit farther away from the Moon than the center of the Earth is, but that’s not very different considering just how far away the Moon is. We spend all our lives falling into the Moon, as fast as we possibly can, and we are falling into the Moon as fast as the Earth is.
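How fast is “as fast as we possibly can”? The acceleration toward the Moon depends only on the Moon’s mass and our distance from it, not on our own mass; a sketch with round figures (the mass and distance are approximate textbook values):

```python
G = 6.674e-11            # gravitational constant, m^3 / (kg s^2)
moon_mass = 7.35e22      # kg, approximately
moon_distance = 3.84e8   # m, roughly the mean Earth-Moon distance

# Acceleration toward the Moon; our own mass cancels out, which is why
# we and the Earth fall toward it at essentially the same rate.
accel = G * moon_mass / moon_distance**2
print(accel)  # about 3.3e-5 m/s^2
```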

## Pinball and Large Numbers

I had another little occasion to reflect on the ways of representing numbers, as well as the chance to feel a bit foolish, this past weekend, so I’m naturally driven to share it. This came about on visiting the Silverball Museum, a pinball museum, or arcade, in Asbury Park, New Jersey. (I’m not sure the exact difference between a museum in which games are playable by visitors and an arcade, except for the signs affixed to nearly all the games.) Naturally I failed to bring my camera, so I can’t easily show what I had in mind; too bad.

Pinballs, at least once they got around to having electricity installed, need to show the scores. Since about the mid-1990s these have been shown by dot-matrix displays, which are pretty easy to read — the current player’s score can be shown extremely large, for example — and make it easy for the game to go into different modes, where the scoring and objectives of play vary for a time. From about the mid-1970s to the mid-1990s seven-segment light-emitting diode displays were preferred, for that “small alarm clock” look. And going before that were rotating number wheels, which are probably the iconic look to pinball score boards, to the extent anyone thinks of a classic pinball machine in that detail.

But there’s another score display, which I must admit offends my sense of order. In this, which I noticed mostly in the machines from the 1950s, with a few outliers in the early 60s (often used in conjunction with the rotating wheels), the parts of the number are broken apart, and the score is read by adding up the parts which are lit up. The machine I was looking at had one column of digits for the millions, another for hundreds of thousands, and then another with two-digit numbers.
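The arithmetic such a board demands of its player is nothing worse than addition. A tiny Python sketch, with lamp values I have made up for illustration rather than taken from any particular machine:

```python
# A 1950s-style split score display: the score is the sum of
# whichever lamps happen to be lit.  Hypothetical lamp values,
# one from the millions column, one from the hundred-thousands
# column, and one two-digit value.

lit_lamps = [3_000_000, 500_000, 47]

score = sum(lit_lamps)
print(score)  # 3500047
```

The machine never has to "carry" anything; it only switches lamps on and off, and leaves the positional arithmetic to the player squinting at the backglass.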

## Descartes and the Terror of the Negative

When René Descartes first described the system we’ve turned into Cartesian coordinates he didn’t put it forth in quite the way we build them these days. This shouldn’t be too surprising; he lived about four centuries ago, and we have experience with the idea of matching every point on the plane to some ordered pair of numbers that he couldn’t have. The idea has been expanded on, and improved, and logical rigor I only pretend to understand laid underneath the concept. But the core remains: we put somewhere on our surface an origin point — usually this gets labelled O, mnemonic for “origin” and also suggesting the zeroes which fill its coordinates — and we pick some direction to be the x-coordinate and some direction to be the y-coordinate, and the ordered pair for a point records how far in the x-direction and how far in the y-direction one must go from the origin to get there.

The most obvious difference between Cartesian coordinates as Descartes set them up and Cartesian coordinates as we use them is that Descartes would fill a plane with four charts, one for each quadrant of the plane. The first quadrant is the points to the right of and above the origin. The second quadrant is to the left of and still above the origin. The third quadrant is to the left of and below the origin, and the fourth is to the right of the origin but below it. This division of the plane into quadrants, and even their identification as quadrants I, II, III, and IV respectively, still exists, one of those minor points on which prealgebra and algebra students briefly trip on their way to tripping over the trigonometric identities.
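The quadrant rules can be put as a little Python function, a sketch of my own rather than anything Descartes wrote, using the convention that points lying on an axis belong to no quadrant:

```python
# Which quadrant a point (x, y) falls in, by the numbering that
# survives from Descartes' four-part division of the plane.
# Points on an axis, or at the origin, belong to no quadrant.

def quadrant(x, y):
    if x > 0 and y > 0:
        return "I"    # right of and above the origin
    if x < 0 and y > 0:
        return "II"   # left of and above
    if x < 0 and y < 0:
        return "III"  # left of and below
    if x > 0 and y < 0:
        return "IV"   # right of and below
    return None       # on an axis or at the origin

print(quadrant(2, 3))    # I
print(quadrant(-2, 3))   # II
print(quadrant(-2, -3))  # III
print(quadrant(2, -3))   # IV
```

Notice the function needs negative numbers to say anything about three of the four quadrants, which is exactly the convenience Descartes's four separate charts let him avoid.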

Descartes had, from his perspective, excellent reason to divide the plane up this way. It’s a reason difficult to imagine today. By separating the plane like this he avoided dealing with something mathematicians of the day were still uncomfortable with. It’s easy enough to describe a point in the first quadrant as being so far to the right and so far above the origin. But a point in the second quadrant is … not any distance to the right. It’s to the left. How far to the right is something that’s to the left?