What Pinball Games Are Turing Machines?


I got to thinking about Turing machines. This is the conceptual model for basically all computers. The classic concept is to imagine a string of cells. In each cell is some symbol. The string is gone over by some device which keeps a little internal state of its own and follows rules about whether and how to change the symbol it reads. We have other rules that let us move the device from one cell to the next. This doesn’t sound like much. But it’s enough. We can imagine all software to be some sufficiently involved bit of work on a string of cells, changing (or not) the symbols in those cells.
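
To make that concrete, here is a minimal sketch of such a device in Python. Everything in it, the rule table especially, is made up for illustration; it is one way of realizing the idea, not the canonical machine.

    from collections import defaultdict

    # A rule table maps (state, symbol read) to (symbol to write, move, next state).
    # This particular table is a small three-state example that halts after
    # writing a short row of 1s; any table of this shape defines a machine.
    RULES = {
        ("A", 0): (1, +1, "B"), ("A", 1): (1, +1, "HALT"),
        ("B", 0): (0, +1, "C"), ("B", 1): (1, +1, "B"),
        ("C", 0): (1, -1, "C"), ("C", 1): (1, -1, "A"),
    }

    def run(rules, state="A", halt="HALT", max_steps=10000):
        tape = defaultdict(int)   # a pretend-infinite tape; blank cells read as 0
        head = 0
        for _ in range(max_steps):
            if state == halt:
                break
            write, move, state = rules[(state, tape[head])]
            tape[head] = write    # change (or not) the symbol in this cell
            head += move          # step to a neighboring cell
        return [tape[i] for i in sorted(tape)], state

    print(run(RULES))   # ([1, 1, 1, 1, 1, 1], 'HALT')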

We don’t normally do this, because it’s too much tedious work. But we know we could go back to this if we truly must. A proper Turing machine has infinitely many cells, which no actual computer does, owing to the high cost of memory chips and the limited electricity budget. We can pretend that “a large enough number of cells” is good enough; it often is. And it turns out a suitably designed Turing machine, a universal one, can simulate any other Turing machine. This requires us to not care about how long it takes to do something, but that’s all right. Conceptually, we don’t care.

And I specifically got wondering what was the first pinball machine to be a Turing machine. I’m sure that modern pinball machines are, since there have been computers of some kind in pinball machines since the mid-1970s. So that’s a boring question. My question is: were there earlier pinball machines that satisfy the requirements of a Turing machine?

My gut tells me there must be. This is mostly because it’s surprisingly hard not to create a Turing machine. If you hang around near mathematics or computer science people you’ll occasionally run across stories of someone creating a computer inside a game like Minecraft. It’s possible to create a Turing machine using the elements of the game. The number of things that are Turing-complete, as they say, is surprising. CSS version 3, a rule system for how to dress up content on a web site, turns out to be Turing-complete (if you make some reasonable extra suppositions). Magic: The Gathering cards are, too. So you could set up a game of Magic: The Gathering which simulated a game of Minecraft which itself simulated the styling rules of a web page. Note the “you” in that sentence.

That’s not proof, though. But I feel pretty good about supposing that some must be. Pinball machines consist, at heart, of a bunch of switches which are activated or not by whether a ball rolls over them. They can store a bit of information: a ball can be locked in a scoop, or kicked out of the scoop as need be. Points can be tallied on the scoring reel. The number of balls a player gets to plunge can be increased — or decreased — based on things that happen on the playfield. This feels to me like it’s got to be a Turing-complete scheme.

So I suspect that the layout of a pinball game, and the various ways to store a bit of information, with (presumably) perfect ball-flipping and table-nudging skills, should make it possible to make a Turing machine. (There ought not be a human in the loop, but I’m supposing that we could replace the person with a mechanism that flips or nudges at the right times or when the ball is in the right place.) I’m wanting for proof, though, and I leave the question here to tease people who’re better than I am at this field of mathematics and computer science.

And I’m curious when the first game that was so capable was made. The very earliest games were like large tabletop versions of those disappointing car toys, the tiny transparent-plastic things with a ball bearing you shoot into one of a series of scoops. Eventually, tilt mechanisms were added, and scoring reels, and then flippers, and then the chance to lock balls. Each changed what the games could do. Did it reach the level of complexity I think it did? I’d like to know.

Yes, this means that I believe it would be theoretically possible to play a pinball game that itself simulated the Pinball Arcade program simulating another pinball game. If this prospect does not delight you then I do not know that we can hope to ever understand one another.

A Little More Talk About What We Talk About When We Talk About How Interesting What We Talk About Is


I had been talking about how much information there is in the outcome of basketball games, or tournaments, or the like. I wanted to fill in at least one technical term, to match some of the others I’d given.

In this information-theory context, an experiment is just anything that could have different outcomes. A team can win or can lose or can tie in a game; that makes the game an experiment. The outcomes are the team wins, or loses, or ties. A team can get a particular score in the game; that makes that game a different experiment. The possible outcomes are the team scores zero points, or one point, or two points, or so on up to whatever the greatest possible score is.

If you know the probability p_j of each of the different outcomes, and since this is a mathematics thing we suppose that you do, then we have what I was calling the information content of the outcome of the experiment. That’s a number, measured in bits, and given by the formula

\sum_{j} - p_j \cdot \log\left(p_j\right)

The sigma summation symbol means to evaluate the expression to the right of it for every value of some index j, and to add the results together. The p_j means the probability of outcome number j. And the logarithm may be that of any base, although if we use base two then we have an information content measured in bits. Those are the same bits as are in the bytes that make up the megabytes and gigabytes in your computer. You can see this number as an estimate of how many well-chosen yes-or-no questions you’d have to ask to pick the actual result out of all the possible ones.
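
As a quick check on the formula, here is a short sketch in Python; the probability lists are made-up examples. A fair coin works out to exactly one bit, a heavily lopsided coin to much less, and four equally likely outcomes to two bits.

    import math

    def shannon_entropy(probabilities):
        """Entropy in bits: the sum of -p * log2(p) over the outcomes with p > 0."""
        return sum(-p * math.log2(p) for p in probabilities if p > 0)

    print(shannon_entropy([0.5, 0.5]))      # fair coin: 1.0 bit
    print(shannon_entropy([0.99, 0.01]))    # lopsided coin: about 0.08 bits
    print(shannon_entropy([0.25] * 4))      # four equally likely outcomes: 2.0 bits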

I’d called this the information content of the experiment’s outcome. That’s an idiosyncratic term, chosen because I wanted to hide what it’s normally called. The normal name for this is the “entropy”.

To be more precise, it’s known as the “Shannon entropy”, after Claude Shannon, pioneer of the modern theory of information. However, the equation defining it looks the same as one that defines the entropy of statistical mechanics, that thing everyone knows is always increasing and somehow connected with stuff breaking down. Well, almost the same. The statistical mechanics one multiplies the sum by a constant number called the Boltzmann constant, after Ludwig Boltzmann, who did so much to put statistical mechanics in its present and very useful form. We aren’t thrown by that. The statistical mechanics entropy describes energy that is in a system but that can’t be used. It’s almost background noise, present but nothing of interest.
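
For the record, the statistical-mechanics entropy in its Gibbs form, for a system whose microstates have probabilities p_j, is

S = - k_B \sum_{j} p_j \cdot \ln\left(p_j\right)

where k_B is that Boltzmann constant; take the constant away and you have the same sum as above, give or take the base of the logarithm.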

Is this Shannon entropy the same entropy as in statistical mechanics? This gets into some abstract grounds. If two things are described by the same formula, are they the same kind of thing? Maybe they are, although it’s hard to see what kind of thing might be shared by “how interesting the score of a basketball game is” and “how much unavailable energy there is in an engine”.

The legend has it that when Shannon was working out his information theory he needed a name for this quantity. John von Neumann, the mathematician and pioneer of computer science, suggested, “You should call it entropy. In the first place, a mathematical development very much like yours already exists in Boltzmann’s statistical mechanics, and in the second place, no one understands entropy very well, so in any discussion you will be in a position of advantage.” There are variations of the quote, but they have the same structure and punch line. The anecdote appears to trace back to an April 1961 seminar at MIT given by one Myron Tribus, who claimed to have heard the story from Shannon. I am not sure whether it is literally true, but it does express a feeling about how people understand entropy that is true.

Well, these entropies have the same form. And they’re given the same name, give or take a modifier of “Shannon” or “statistical” or some other qualifier. They’re even often given the same symbol; normally a capital S or maybe an H stands for the quantity of entropy. (H tends to be more common for the Shannon entropy, but your equation would be understood either way.)

I’m not comfortable saying they’re the same thing, though. After all, we use the same formula to calculate a batting average and to work out the average time of a commute. But we don’t think those are the same thing, at least not more generally than “they’re both averages”. These entropies measure different kinds of things. They have different units that just can’t be sensibly converted from one to another. And the statistical mechanics entropy has many definitions that not only lack parallels for information, but wouldn’t even make sense for information. I would call these entropies siblings, with strikingly similar profiles, but not more than that.

But let me point out something about the Shannon entropy. It is low when an outcome is predictable. If the outcome is unpredictable, presumably knowing the outcome will be interesting, because there is no way to guess beforehand what it might be. This is where the entropy is maximized. But an absolutely random outcome also has a high entropy. And that’s boring. There’s no reason for the outcome to be one option instead of another. Somehow, as looked at by the measure of entropy, the most interesting of outcomes and the most meaningless of outcomes blur together. There is something wondrous and strange in that.
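
To put numbers on both ends of that: a sure thing, a single outcome with probability 1, has entropy zero, since the lone term is -1 times the logarithm of 1. And n equally likely outcomes, each with probability 1/n, give the biggest entropy that n outcomes can have:

\sum_{j = 1}^{n} - \frac{1}{n} \cdot \log_2\left(\frac{1}{n}\right) = \log_2\left(n\right)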

Stable Marriages and Designing Markets


A few days ago Jeremy Kun with the Math ∩ Programming blog wrote about the problem of stable marriages, by which here is meant pairing off people so that everyone is happy with their pairing. Put like that it almost sounds like the sort of thing people used to complain about in letters to Ann Landers about mathematicians doing foolish things — don’t mathematicians know that feelings matter in this, and how does this help them teach kids to do arithmetic?

But the problem is just put that way because it’s one convenient representation of a difficult problem. Given a number of agents that can be paired up, and some way of measuring the collection of pairings, how can you select the best pairing? And what do you mean by best? Do you mean the one that maximizes whatever it is you’re measuring? The one that minimizes it (if you’re measuring, say, unhappiness, or cost, or something else you’d want as little of)? Jeremy Kun describes the search for a pairing that’s stable, which requires, in part, coming up with a definition of just what “stable” means.

The work can be used to describe any two-party matching: marriages, or people choosing where to work and employers choosing whom to hire, or people deciding what to buy or where to live, all sorts of things where people have preferences and good fits matter. Once the model’s developed it has more applications than what it was originally meant for, which is part of what makes this a good question. Kun also writes a bit about how to expand the problem so as to handle some more complicated cases, and shows how the problem can be put onto a computer; a small sketch of the classic procedure follows the excerpt below.

Math ∩ Programming

Here is a fun puzzle. Suppose we have a group of 10 men and 10 women, and each of the men has sorted the women in order of their preference for marriage (that is, a man prefers to marry a woman earlier in his list over a woman later in the list). Likewise, each of the women has sorted the men in order of marriageability. We might ask if there is any way that we, the omniscient cupids of love, can decide who should marry to make everyone happy.

Of course, the word happy is entirely imprecise. The mathematician balks at the prospect of leaving such terms undefined! In this case, it’s quite obvious that not everyone will get their first pick. Indeed, if even two women prefer the same man someone will have to settle for less than their top choice. So if we define happiness in this naive way…

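
For the curious, here is a compact sketch in Python of the Gale-Shapley deferred-acceptance procedure, the classic answer to this kind of matching problem; see Kun’s post for the full treatment. The preference lists below are made up purely for illustration.

    def stable_matching(suitor_prefs, reviewer_prefs):
        """Pair every suitor with a reviewer so that no pair would both rather swap."""
        # rank[r][s]: how highly reviewer r regards suitor s (lower is better)
        rank = {r: {s: i for i, s in enumerate(prefs)}
                for r, prefs in reviewer_prefs.items()}
        free = list(suitor_prefs)                    # suitors with no partner yet
        next_pick = {s: 0 for s in suitor_prefs}     # index of each suitor's next proposal
        partner = {}                                 # reviewer -> current suitor

        while free:
            s = free.pop()
            r = suitor_prefs[s][next_pick[s]]        # propose to the best not yet tried
            next_pick[s] += 1
            current = partner.get(r)
            if current is None:
                partner[r] = s                       # r accepts, provisionally
            elif rank[r][s] < rank[r][current]:
                partner[r] = s                       # r trades up; the old suitor is free again
                free.append(current)
            else:
                free.append(s)                       # r says no; s will try the next choice
        return {s: r for r, s in partner.items()}

    suitor_prefs = {"a": ["X", "Y", "Z"], "b": ["Y", "X", "Z"], "c": ["X", "Y", "Z"]}
    reviewer_prefs = {"X": ["b", "a", "c"], "Y": ["a", "b", "c"], "Z": ["a", "b", "c"]}
    print(stable_matching(suitor_prefs, reviewer_prefs))   # {'a': 'X', 'b': 'Y', 'c': 'Z'}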

Peer Gibberish


Well, this is an embarrassing thing to see: according to Nature, the publisher Springer and the Institute of Electrical and Electronics Engineers (IEEE) have had to withdraw at least 120 papers from their subscription services, because the papers were gibberish produced by a program, SCIgen, that strings together words and phrases into computer-science-ish texts. SCIgen and this sort of thing are meant for fun (Nature also linked to arXiv vs snarXiv, which lets you try to figure out whether titles are actual preprints on the arXiv server or gibberish), but such nonsense papers have been accepted for conferences or published in, typically, poorly-reviewed forums, to general amusement and embarrassment when it’s noticed.

I’m sympathetic to the people who were supposed to review these papers. It’s hard reading any kind of academic paper, for one. They tend to be written with the goal of presenting novel findings efficiently; whether they’re pleasant to read isn’t a factor. (I wouldn’t be surprised if authors had no idea how to write so as to be enjoyable to read, either. I didn’t get any training in writing-to-be-read and I don’t remember seeing courses in that.) It’s also very hard to read something outside your specialty: the terminology and vocabulary and writing styles can be ferociously localized. Just today I was reading a WordPress post which started from the equations Euler used to describe the flow of viscosity-free fluids, which was at the heart of my thesis, and before eight paragraphs it had got into symbols I barely recognized and into points I’ll need to re-read and re-think before I can grasp them. And reviewing papers is really unappreciated; the best you can hope for is to dig deep into the paper and understand it so thoroughly you can write a better version of it than the authors did, and so be thanked for making perceptive criticisms when the revised version of the paper comes out. The system makes it too easy to conclude something like “well, I don’t really have the time to understand all of this, but on skimming it I don’t see anything plainly offensive to all persons, so it probably makes sense to people who are looking for this kind of paper” and go on to a more pressing deadline, and I admit I don’t have a better system in mind.

I’m also reminded of a bit of folklore from my grad school days, in a class on dynamical systems. That’s the study of physics-type problems, with the attention being not so much on actually saying what something will do from this starting point — for example, if you push this swing this hard, how long will it take to stop swinging — and more on what the different kinds of behavior are — can you make the swing just rock around a little bit, or loop around once and then rock to a stop, or loop around twice, or loop around four hundred times, or so on — and what it takes to change that behavior mode. The instructor referred us to a paper that was an important result but warned us to not bother trying to read it because nobody had ever understood it from the paper. Instead, it was understood, going back to when the paper was first introduced, by people having the salient points explained by other people who’d had it explained to them in conversations, all the way back to the first understanders, who got it from the original authors, possibly while talking mathematics over at the bar. I’m embarrassed to say I don’t remember which paper it was (it was a while ago and there are a lot of key results in the field), so I haven’t even been able to figure out how to search for the paper or the lore around it.
