Tagged: computer science

  • Joseph Nebus 6:00 pm on Friday, 24 March, 2017 Permalink | Reply
    Tags: computer science, open questions, Turing machines

    What Pinball Games Are Turing Machines? 


    I got to thinking about Turing machines. This is the conceptual model for basically all computers. The classic concept is to imagine a string of cells. In each cell is some symbol. It’s gone over by some device that follows some rule about whether and how to change the symbol. We have other rules that let us move the machine from one cell to the next. This doesn’t sound like much. But it’s enough. We can imagine all software to be some sufficiently involved bit of work on a string of cells and changing (or not) the symbols in those cells.
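
    Just to show how little machinery that takes, here is a minimal sketch of the idea in Python. The function name, the symbols, and the sample rule table are all made up for illustration; they are not any standard notation.

        # A tiny Turing machine simulator: a tape of cells, a head position,
        # a current state, and a rule table saying what symbol to write,
        # which way to move, and which state to enter next.
        def run_turing_machine(rules, tape, state="start", steps=100):
            cells = dict(enumerate(tape))   # cell index -> symbol; missing cells are blank ("_")
            head = 0
            for _ in range(steps):
                if state == "halt":
                    break
                symbol = cells.get(head, "_")
                write, move, state = rules[(state, symbol)]
                cells[head] = write
                head += 1 if move == "R" else -1
            return [cells[i] for i in sorted(cells)]

        # Example rule table: turn every 0 into 1 until the first blank, then halt.
        rules = {
            ("start", "0"): ("1", "R", "start"),
            ("start", "1"): ("1", "R", "start"),
            ("start", "_"): ("_", "R", "halt"),
        }
        print(run_turing_machine(rules, "0110"))   # ['1', '1', '1', '1', '_']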

    We don’t normally do this, because it’s too much tedious work. But we know we could go back to this if we truly must. A proper Turing machine has infinitely many cells, which no actual computer does, owing to the high cost of memory chips and the limited electricity budget. We can pretend that “a large enough number of cells” is good enough; it often is. And it turns out any one Turing machine can be used to simulate another Turing machine. This requires us to not care about how long it takes to do something, but that’s all right. Conceptually, we don’t care.

    And I specifically got wondering what was the first pinball machine to be a Turing machine. I’m sure that modern pinball machines are, since there have been computers of some kind in pinball machines since the mid-1970s. So that’s a boring question. My question is: were there earlier pinball machines that satisfy the requirements of a Turing machine?

    My gut tells me there must be. This is mostly because it’s surprisingly hard not to create a Turing machine. If you hang around near mathematics or computer science people you’ll occasionally run across things like someone building a computer inside a game like Minecraft. It’s possible to create a Turing machine using the elements of the game. The number of things that are Turing-complete, as they say, is surprising. CSS version 3, a rule system for how to dress up content on a web site, turns out to be Turing-complete (if you make some reasonable extra suppositions). Magic: The Gathering cards are, too. So you could set up a game of Magic: The Gathering which simulated a game of Minecraft which itself simulated the styling rules of a web page. Note the “you” in that sentence.

    That’s not proof, though. But I feel pretty good about supposing that some must be. Pinball machines consist, at heart, of a bunch of switches which are activated or not by whether a ball rolls over them. They can store a bit of information: a ball can be locked in a scoop, or kicked out of the scoop as need be. Points can be tallied on the scoring reel. The number of balls a player gets to plunge can be increased — or decreased — based on things that happen on the playfield. This feels to me like it’s got to be a Turing-complete scheme.

    So I suspect that the layout of a pinball game, and the various ways to store a bit of information, with (presumably) perfect ball-flipping and table-nudging skills, should make it possible to make a Turing machine. (There ought not be a human in the loop, but I’m supposing that we could replace the person with a mechanism that flips or nudges at the right times or when the ball is in the right place.) I’m wanting for proof, though, and I leave the question here to tease people who’re better than I am at this field of mathematics and computer science.

    And I’m curious when the first game that was so capable was made. The very earliest games were like large tabletop versions of those disappointing car toys, the tiny transparent-plastic things with a ball bearing you shoot into one of a series of scoops. Eventually, tilt mechanisms were added, and scoring reels, and then flippers, and then the chance to lock balls. Each changed what the games could do. Did it reach the level of complexity I think it did? I’d like to know.

    Yes, this means that I believe it would be theoretically possible to play a pinball game that itself simulated the Pinball Arcade program simulating another pinball game. If this prospect does not delight you then I do not know that we can hope to ever understand one another.

     
    • John Friedrich 12:15 am on Saturday, 25 March, 2017 Permalink | Reply

      I’m still struggling with the possibility that the entire universe is a computer simulation. No one has come up with a compelling argument against it yet.


      • Joseph Nebus 10:52 pm on Thursday, 30 March, 2017 Permalink | Reply

        I’m getting pretty far outside my competence to talk about a problem like that. It really calls on the expertise of the philosophy community to work out well.

        My poorly-considered thoughts about it run along these lines, though. Suppose our universe is a simulation run for whatever purpose in the super-universe. If there is no possible way for us to detect the super-universe’s existence, or to demonstrate that it is affecting our universe, then it’s hard to see what the difference is between the status of the simulated universe and whatever the universe being real might mean.

        But suppose that there is a super-universe. Then it’s hard to see what arguments for our universe being a simulation would not also apply to our super-universe; why shouldn’t it be a simulation in a super-super-universe? But then why wouldn’t that be a simulation in a super-super-super-universe? And so on to an infinite regression of universes simulated within more computationally powerful universes.

        That’s nothing conclusive, certainly. There’s no reason we can’t have an infinite regression like that. It feels wasteful of existence, somehow. But it also suggests there’s no point at which any entity in any of the super-(etc)-universes could be confident they were in reality. So either the infinite stack of simulations is wrong or there’s no such thing as “real”. Neither seems quite satisfying.

        I expect the professionals have better reasoning than mine, though. And it might be something that produces useful insights even if it can’t be resolved, akin to Zeno’s Paradoxes.


    • fluffy 5:47 pm on Saturday, 25 March, 2017 Permalink | Reply

      The thing with all those provable Turing machines is that they have dynamic and conceptually-unbounded amounts of storage space (Minecraft levels grow, Magic decks grow, etc.), whereas purely-mechanical pinball machines only have a finite amount of state. Which is to say that they are at best a finite state machine. I mean unless you want to go down the rabbit hole of considering every possible physical position of the ball to be a different state, in which case all you need for a Turing machine (given perfect nudge mechanics etc.) is a ball.


      • Joseph Nebus 10:59 pm on Thursday, 30 March, 2017 Permalink | Reply

        This is true, although I take a forgiving view of storage space. If we can imagine a Magic deck growing as large as needed for the problem we’re working, why can’t we imagine a Pop-A-Card that has a six- or seven- or even 10^100-digit score reel? (Taking the score reel to be how the state is stored.) At least it seems to me if someone can keep getting all this tape for their classical Turing machine then we can hook another reel up.


  • Joseph Nebus 10:22 pm on Friday, 24 April, 2015 Permalink | Reply
    Tags: computer science, John von Neumann, Ludwig Boltzmann, Shannon entropy

    A Little More Talk About What We Talk About When We Talk About How Interesting What We Talk About Is 


    I had been talking about how much information there is in the outcome of basketball games, or tournaments, or the like. I wanted to fill in at least one technical term, to match some of the others I’d given.

    In this information-theory context, an experiment is just anything that could have different outcomes. A team can win or can lose or can tie in a game; that makes the game an experiment. The outcomes are the team wins, or loses, or ties. A team can get a particular score in the game; that makes that game a different experiment. The possible outcomes are the team scores zero points, or one point, or two points, or so on up to whatever the greatest possible score is.

    If you know the probability p of each of the different outcomes, and since this is a mathematics thing we suppose that you do, then we have what I was calling the information content of the outcome of the experiment. That’s a number, measured in bits, and given by the formula

    \sum_{j} - p_j \cdot \log\left(p_j\right)

    The sigma summation symbol means to evaluate the expression to the right of it for every value of some index j. The p_j means the probability of outcome number j. And the logarithm may be that of any base, although if we use base two then we have an information content measured in bits. Those are the same bits as are in the bytes that make up the megabytes and gigabytes in your computer. You can see this number as an estimate of how many well-chosen yes-or-no questions you’d have to ask to pick the actual result out of all the possible ones.
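
    If you would like to see that worked out, here is a minimal sketch in Python; the probabilities in the examples are ones I made up for illustration.

        from math import log2

        def shannon_entropy(probabilities):
            # Information content, in bits, of an experiment with these outcome probabilities.
            return sum(-p * log2(p) for p in probabilities if p > 0)

        print(shannon_entropy([0.5, 0.5]))     # a fair coin: 1.0 bit
        print(shannon_entropy([0.99, 0.01]))   # a nearly certain outcome: about 0.08 bits
        print(shannon_entropy([0.25] * 4))     # four equally likely outcomes: 2.0 bits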

    I’d called this the information content of the experiment’s outcome. That’s an idiosyncratic term, chosen because I wanted to hide what it’s normally called. The normal name for this is the “entropy”.

    To be more precise, it’s known as the “Shannon entropy”, after Claude Shannon, pioneer of the modern theory of information. However, the equation defining it looks the same as one that defines the entropy of statistical mechanics, that thing everyone knows is always increasing and somehow connected with stuff breaking down. Well, almost the same. The statistical mechanics one multiplies the sum by a constant number called the Boltzmann constant, after Ludwig Boltzmann, who did so much to put statistical mechanics in its present and very useful form. We aren’t thrown by that. The statistical mechanics entropy describes energy that is in a system but that can’t be used. It’s almost background noise, present but nothing of interest.
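
    Set side by side, with the Shannon logarithm taken base two and the statistical-mechanics one taken as the natural logarithm (the usual conventions), the two formulas are:

    H = \sum_{j} - p_j \cdot \log_2\left(p_j\right)

    S = k_B \cdot \sum_{j} - p_j \cdot \ln\left(p_j\right)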

    Is this Shannon entropy the same entropy as in statistical mechanics? This gets into some abstract grounds. If two things are described by the same formula, are they the same kind of thing? Maybe they are, although it’s hard to see what kind of thing might be shared by “how interesting the score of a basketball game is” and “how much unavailable energy there is in an engine”.

    The legend has it that when Shannon was working out his information theory he needed a name for this quantity. John von Neumann, the mathematician and pioneer of computer science, suggested, “You should call it entropy. In the first place, a mathematical development very much like yours already exists in Boltzmann’s statistical mechanics, and in the second place, no one understands entropy very well, so in any discussion you will be in a position of advantage.” There are variations of the quote, but they have the same structure and punch line. The anecdote appears to trace back to an April 1961 seminar at MIT given by one Myron Tribus, who claimed to have heard the story from Shannon. I am not sure whether it is literally true, but it does express a feeling about how people understand entropy that is true.

    Well, these entropies have the same form. And they’re given the same name, give or take a modifier of “Shannon” or “statistical” or some other qualifier. They’re even often given the same symbol; normally a capital S or maybe an H is used as the quantity of entropy. (H tends to be more common for the Shannon entropy, but your equation would be understood either way.)

    I’m not comfortable saying they’re the same thing, though. After all, we use the same formula to calculate a batting average and to work out the average time of a commute. But we don’t think those are the same thing, at least not more generally than “they’re both averages”. These entropies measure different kinds of things. They have different units that just can’t be sensibly converted from one to another. And the statistical mechanics entropy has many definitions that not just don’t have parallels for information, but wouldn’t even make sense for information. I would call these entropies siblings, with strikingly similar profiles, but not more than that.

    But let me point out something about the Shannon entropy. It is low when an outcome is predictable. If the outcome is unpredictable, presumably knowing the outcome will be interesting, because there is no guessing what it might be; that is where the entropy gets high. But the entropy is maximized by an outcome that is absolutely random. And that’s boring. There’s no reason for the outcome to be one option instead of another. Somehow, as looked at by the measure of entropy, the most interesting of outcomes and the most meaningless of outcomes blur together. There is something wondrous and strange in that.
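
    To put a number on that blur: if there are n outcomes, each equally likely with probability 1/n, the entropy is as large as it can possibly be,

    \sum_{j = 1}^{n} - \frac{1}{n} \cdot \log_2\left(\frac{1}{n}\right) = \log_2\left(n\right)

    so the perfectly random experiment, the one with nothing to say about why any particular outcome happened, sits right at the maximum.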

     
    • Angie Mc 9:43 pm on Saturday, 25 April, 2015 Permalink | Reply

      Clever title to go with an interesting post, Joseph :)


    • ivasallay 3:35 am on Sunday, 26 April, 2015 Permalink | Reply

      There is so much entropy in my life that I just didn’t know there were two different kinds.


      • Joseph Nebus 8:21 pm on Monday, 27 April, 2015 Permalink | Reply

        It’s worse than that: there’s many kinds of entropy out there. There’s even a kind of entropy that describes how large black holes are.


    • Aquileana 12:08 pm on Sunday, 26 April, 2015 Permalink | Reply

      Shannon Entropy is so interesting … The last paragraph of your post is eloquent… Thanks for teaching us about the sigma summation in which the p_j means the probability of outcome number j.
      Best wishes to you. Aquileana :star:


    • vagabondurges 7:55 pm on Monday, 27 April, 2015 Permalink | Reply

      I always enjoy trying to follow along with your math posts, and throwing some mathematician anecdotes in there seasons it to perfection.


      • Joseph Nebus 8:24 pm on Monday, 27 April, 2015 Permalink | Reply

        Thank you. I’m fortunate with mathematician anecdotes that so many of them have this charming off-kilter logic. They almost naturally have the structure of a simple vaudeville joke.


    • elkement 7:41 pm on Wednesday, 29 April, 2015 Permalink | Reply

      I totally agree on your way of introducing the entropy ‘siblings’. Actually, I had once wondered why you call the ‘information entropy’ ‘entropy’ just because of similar mathematical definitions.

      Again Feynman comes to my mind: In his physics lectures he said that very rarely did work in engineering contribute to theoretical foundations in science: One time Carnot did it – describing his ideal cycle and introducing thermodynamical entropy – and the other thing Feynman mentioned was Shannon’s information theory.


      • Joseph Nebus 5:55 am on Tuesday, 5 May, 2015 Permalink | Reply

        It’s curious to me how this p-times-log-p form turns up in things that don’t seem related. I do wonder if there’s a common phenomenon we need to understand that we haven’t quite pinned down yet and that makes for a logical unification of the different kinds of entropy.

        I hadn’t noticed that Feynman quote before, but he’s surely right about Carnot and Shannon. They did much to give clear central models and definitions to fields that were forming, and put out problems so compelling that they shaped the fields.


    • LFFL 9:58 am on Friday, 1 May, 2015 Permalink | Reply

      Omg the TITLE of this! Lol :D I’m getting motion sickness as I speak.


      • Joseph Nebus 6:04 am on Tuesday, 5 May, 2015 Permalink | Reply

        Yeah, I was a little afraid of that. But it’s just so wonderful to say. And more fun to diagram.

        I hope the text came out all right.


  • Joseph Nebus 2:37 pm on Friday, 11 April, 2014 Permalink | Reply
    Tags: computer science, markets, NP problems

    Stable Marriages and Designing Markets 


    A few days ago Jeremy Kun with the Math ∩ Programming blog wrote about the problem of stable marriages, by which here is meant pairing off people so that everyone is happy with their pairing. Put like that it almost sounds like the sort of thing people used to complain about in letters to Ann Landers about mathematicians doing foolish things — don’t mathematicians know that feelings matter in this, and how does this help them teach kids to do arithmetic?

    But the problem is just put that way because it’s one convenient representation of a difficult problem. Given a number of agents that can be paired up, and some way of measuring the collection of pairings, how can you select the best pairing? And what do you mean by best? Do you mean the one that maximizes whatever it is you’re measuring? The one that minimizes it (if you’re measuring, say, unhappiness, or cost, or something else you’d want as little of)? Jeremy Kun describes the search for a pairing that’s stable, which requires, in part, coming up with a definition of just what “stable” means.

    The work can be put to describing any two-party matching, which can be marriages, or people choosing where to work and employers choosing whom to hire, or people deciding what to buy or where to live, all sorts of things where people have preferences and look for good fits. Once the model’s developed it has more applications than what it was originally meant for, which is part of what makes this a good question. Kun also writes a bit about how to expand the problem so as to handle some more complicated cases, and shows how the problem can be put onto a computer.
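
    The classic way to actually find a stable pairing is the Gale-Shapley deferred-acceptance algorithm, in which one side proposes down its preference lists and the other side tentatively accepts the best offer so far. The excerpt below does not spell the algorithm out, so take this as my own rough Python sketch of the standard method, with made-up names and preference lists, rather than a transcription of Kun’s code.

        def stable_matching(men_prefs, women_prefs):
            # Gale-Shapley deferred acceptance: men propose, women tentatively accept.
            # men_prefs / women_prefs map each person to a list of the other group,
            # ordered from most to least preferred (lists assumed complete).
            rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
            free_men = list(men_prefs)                  # men not yet engaged
            next_choice = {m: 0 for m in men_prefs}     # index of the next woman each man will ask
            engaged_to = {}                             # woman -> man

            while free_men:
                man = free_men.pop()
                woman = men_prefs[man][next_choice[man]]
                next_choice[man] += 1
                if woman not in engaged_to:
                    engaged_to[woman] = man
                elif rank[woman][man] < rank[woman][engaged_to[woman]]:
                    free_men.append(engaged_to[woman])  # she trades up; her old fiance is free again
                    engaged_to[woman] = man
                else:
                    free_men.append(man)                # rejected; he asks his next choice later
            return engaged_to

        # A made-up three-couple example.
        men = {"a": ["x", "y", "z"], "b": ["y", "x", "z"], "c": ["x", "z", "y"]}
        women = {"x": ["b", "a", "c"], "y": ["a", "b", "c"], "z": ["c", "a", "b"]}
        print(stable_matching(men, women))   # {'x': 'a', 'y': 'b', 'z': 'c'}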


    Math ∩ Programming

    Here is a fun puzzle. Suppose we have a group of 10 men and 10 women, and each of the men has sorted the women in order of their preference for marriage (that is, a man prefers to marry a woman earlier in his list over a woman later in the list). Likewise, each of the women has sorted the men in order of marriageability. We might ask if there is any way that we, the omniscient cupids of love, can decide who should marry to make everyone happy.

    Of course, the word happy is entirely imprecise. The mathematician balks at the prospect of leaving such terms undefined! In this case, it’s quite obvious that not everyone will get their first pick. Indeed, if even two women prefer the same man someone will have to settle for less than their top choice. So if we define happiness in this naive way…

    View original post 2,343 more words

     
    • elkement 4:20 pm on Sunday, 13 April, 2014 Permalink | Reply

      Ha – this time I remember :-) I have just read about this in Scott Aaronson’s book about quantum computing – which is actually a book about all of computing, quite condensed actually.


      • Joseph Nebus 6:42 pm on Wednesday, 16 April, 2014 Permalink | Reply

        I didn’t realize it was. I’ve got only a passing interest in quantum computing but a review of the whole field of computing could be good reading.


  • Joseph Nebus 9:11 pm on Wednesday, 26 February, 2014 Permalink | Reply
    Tags: academia, computer science, journals

    Peer Gibberish 


    Well, this is an embarrassing thing to see: according to Nature, the publisher Springer and the Institute of Electrical and Electronics Engineers (IEEE) have had to withdraw at least 120 papers from their subscription services, because the papers were gibberish produced by a program, SCIgen, that strings together words and phrases into computer science-ish texts. SCIgen and this sort of thing are meant for fun (Nature also linked to arXiv vs snarXiv, which lets you try to figure out whether titles are actual preprints on the arXiv server or gibberish), but such nonsense papers have been accepted for conferences or published in, typically, poorly-reviewed forums, to general amusement and embarrassment when it’s noticed.

    I’m sympathetic to the people who were supposed to review these papers. It’s hard reading any kind of academic paper, for one. They tend to be written with the goal of presenting novel findings efficiently; whether they’re pleasant to read isn’t a factor. (I wouldn’t be surprised if authors had no idea how to write so as to be enjoyable to read, either. I didn’t get any training in writing-to-be-read and I don’t remember seeing courses in that.) It’s also very hard to read something outside your specialty: the terminology and vocabulary and writing styles can be ferociously localized. Just today I was reading a WordPress post which started from the equations Euler used to describe the flow of viscosity-free fluids, which was at the heart of my thesis, and before eight paragraphs it had got into symbols I barely recognized and into points I’ll need to re-read and re-think before I can grasp them. And reviewing papers is really unappreciated; the best you can really hope for is to dig deep into the paper and understand it so thoroughly you can write a better version of it than the authors did, and so be thanked for making perceptive criticisms when the revised version of the paper comes out. The system makes it too easy to conclude something like “well, I don’t really have the time to understand all of this, but on skimming it I don’t see anything plainly offensive to all persons, so, it probably makes sense to people who are looking for this kind of paper” and go on to a more pressing deadline, and I admit I don’t have a better system in mind.

    I’m also reminded of a bit of folklore from my grad school days, in a class on dynamical systems. That’s the study of physics-type problems, with the attention being not so much on actually saying what something will do from this starting point — for example, if you push this swing this hard, how long will it take to stop swinging — and more on what the different kinds of behavior are — can you make the swing just rock around a little bit, or loop around once and then rock to a stop, or loop around twice, or loop around four hundred times, or so on — and what it takes to change that behavior mode. The instructor referred us to a paper that was an important result but warned us to not bother trying to read it because nobody had ever understood it from the paper. Instead, it was understood — going back to the paper’s introduction — by people having the salient points explained by other people who’d had it taught to them in conversations, all the way back to the first understanders, who got it from the original authors, possibly in talking mathematics over while at the bar. I’m embarrassed to say I don’t remember which paper it was (it was a while ago and there are a lot of key results in the field), so I haven’t even been able to figure how to search for the paper or the lore around it.

     
    • elkement 12:34 pm on Sunday, 2 March, 2014 Permalink | Reply

      I have enjoyed this article as I have always been a fan of the Sokal hoax.

      When I had to review articles at the university these were only from my own narrow niche of specialization. I probably did what the reviewers of my papers also did: Zoom in on my pet topics and check for things that I knew from experience were difficult to measure or difficult to reproduce (it was experimental physics).

      It would be interesting to know how much the focus of papers to be reviewed deviates from the specialization of reviewers – and if there is a measure for that.. a measure for specialization / deviation. I guess interdisciplinary research is then most difficult to evaluate – as the Sokal hoax also proved.


      • Joseph Nebus 6:37 pm on Monday, 3 March, 2014 Permalink | Reply

        I’m of mixed minds about the Sokal hoax, and of related things like the SCIgen papers that’ve been slipped in. Part of me really appreciates pranks and hoaxes: putting aside the comic value of slipping nonsense into the sensible, they also provide a way of testing that a person, or group, or organization is not just reading things but thinking critically about them. If too much nonsense is passing through one’s filters that suggests a problem with the filtering.

        On the other hand there’s something mean spirited about pranks meant to deceive without enlightening. And the way (some) “hard” science people get snobby about the “soft” fields is truly obnoxious, since, as the SCIgen thing shows, the “hardness” of a field hasn’t got much to do with whether peer review is functioning the way it’s supposed to.

        I think you’re right that it’d be interesting to measure how much the specialization of reviewers correlates to poorly-conducted peer review. I’m not sure just how to quantify a reviewer’s specialization, much less a paper’s, but I think I can sort of imagine ways to estimate both.

