Tagged: computation

  • Joseph Nebus 6:00 pm on Friday, 24 March, 2017 Permalink | Reply
    Tags: computation, open questions, Turing machines

    What Pinball Games Are Turing Machines? 


    I got to thinking about Turing machines. This is the conceptual model for basically all computers. The classic concept is to imagine a string of cells. In each cell is some symbol. It’s gone over by some device that follows some rule about whether and how to change the symbol. We have other rules that let us move the machine from one cell to the next. This doesn’t sound like much. But it’s enough. We can imagine all software to be some sufficiently involved bit of work on a string of cells and changing (or not) the symbols in those cells.
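
    If you want to see how little machinery that takes, here is a minimal sketch in Python (my own toy illustration, not anything from a standard reference) of a Turing machine that adds one to a binary number. The tape symbols, the state names, and the rule table are all invented for the example.

        # A tiny Turing machine: a tape of cells, a read/write head, and a
        # rule table mapping (state, symbol read) to (write, move, next state).
        def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
            cells = dict(enumerate(tape))      # cell index -> symbol
            head = 0
            for _ in range(max_steps):
                if state == "halt":
                    break
                symbol = cells.get(head, blank)
                write, move, state = rules[(state, symbol)]
                cells[head] = write
                head += 1 if move == "R" else -1
            span = range(min(cells), max(cells) + 1)
            return "".join(cells[i] for i in span).strip(blank)

        # Rules to increment a binary number: run to the right-hand end of
        # the number, then carry back toward the left.
        rules = {
            ("start", "0"): ("0", "R", "start"),
            ("start", "1"): ("1", "R", "start"),
            ("start", "_"): ("_", "L", "carry"),
            ("carry", "1"): ("0", "L", "carry"),
            ("carry", "0"): ("1", "R", "halt"),
            ("carry", "_"): ("1", "R", "halt"),
        }

        print(run_turing_machine("1011", rules))   # prints 1100, that is, 11 + 1 = 12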

    We don’t normally do this, because it’s too much tedious work. But we know we could go back to this if we truly must. A proper Turing machine has infinitely many cells, which no actual computer does, owing to the high cost of memory chips and the limited electricity budget. We can pretend that “a large enough number of cells” is good enough; it often is. And it turns out any one Turing machine can be used to simulate another Turing machine. This requires us to not care about how long it takes to do something, but that’s all right. Conceptually, we don’t care.

    And I specifically got wondering what was the first pinball machine to be a Turing machine. I’m sure that modern pinball machines are, since there have been computers of some kind in pinball machines since the mid-1970s. So that’s a boring question. My question is: were there earlier pinball machines that satisfy the requirements of a Turing machine?

    My gut tells me there must be. This is mostly because it’s surprisingly hard not to create a Turing machine. If you hang around near mathematics or computer science people you’ll occasionally run across things like someone creating a computer inside a game like Minecraft. It’s possible to create a Turing machine using the elements of the game. The number of things that are Turing-complete, as they say, is surprising. CSS version 3, a rule system for how to dress up content on a web site, turns out to be Turing-complete (if you make some reasonable extra suppositions). Magic: The Gathering cards are, too. So you could set up a game of Magic: the Gathering which simulated a game of Minecraft which itself simulated the styling rules of a web page. Note the “you” in that sentence.

    That’s not proof, though. But I feel pretty good about supposing that some must be. Pinball machines consist, at heart, of a bunch of switches which are activated or not by whether a ball rolls over them. They can store a bit of information: a ball can be locked in a scoop, or kicked out of the scoop as need be. Points can be tallied on the scoring reel. The number of balls a player gets to plunge can be increased — or decreased — based on things that happen on the playfield. This feels to me like it’s got to be a Turing-complete scheme.

    So I suspect that the layout of a pinball game, and the various ways to store a bit of information, with (presumably) perfect ball-flipping and table-nudging skills, should make it possible to make a Turing machine. (There ought not be a human in the loop, but I’m supposing that we could replace the person with a mechanism that flips or nudges at the right times or when the ball is in the right place.) I’m wanting for proof, though, and I leave the question here to tease people who’re better than I am at this field of mathematics and computer science.

    And I’m curious when the first game that was so capable was made. The very earliest games were like large tabletop versions of those disappointing car toys, the tiny transparent-plastic things with a ball bearing you shoot into one of a series of scoops. Eventually, tilt mechanisms were added, and scoring reels, and then flippers, and then the chance to lock balls. Each changed what the games could do. Did it reach the level of complexity I think it did? I’d like to know.

    Yes, this means that I believe it would be theoretically possible to play a pinball game that itself simulated the Pinball Arcade program simulating another pinball game. If this prospect does not delight you then I do not know that we can hope to ever understand one another.

     
    • John Friedrich 12:15 am on Saturday, 25 March, 2017 Permalink | Reply

      I’m still struggling with the possibility that the entire universe is a computer simulation. No one has come up with a compelling argument against it yet.

      Like

      • Joseph Nebus 10:52 pm on Thursday, 30 March, 2017 Permalink | Reply

        I’m getting pretty far outside my competence to talk about a problem like that. It really calls on the expertise of the philosophy community to work out well.

        My poorly-considered thoughts about it run along these lines, though. Suppose our universe is a simulation run for whatever purpose in the super-universe. If there is no possible way for us to detect the super-universe’s existence, or to demonstrate that it is affecting our universe, then it’s hard to see what the difference is between the status of the simulated universe and whatever the universe being real might mean.

        But suppose that there is a super-universe. Then it’s hard to see what arguments for our universe being a simulation would not also apply to our super-universe; why shouldn’t it be a simulation in a super-super-universe? But then why wouldn’t that be a simulation in a super-super-super-universe? And so on to an infinite regression of universes simulated within more computationally powerful universes.

        That’s nothing conclusive, certainly. There’s no reason we can’t have an infinite regression like that. It feels wasteful of existence, somehow. But it also suggests there’s no point at which any entity in any of the super-(etc)-universes could be confident they were in reality. So either the infinite stack of simulations is wrong or there’s no such thing as “real”. Neither seems quite satisfying.

        I expect the professionals have better reasoning than mine, though. And it might be something that produces useful insights even if it can’t be resolved, akin to Zeno’s Paradoxes.

        Like

    • fluffy 5:47 pm on Saturday, 25 March, 2017 Permalink | Reply

      The thing with all those provable Turing machines is that they have dynamic and conceptually-unbounded amounts of storage space (Minecraft levels grow, Magic decks grow, etc.), whereas purely-mechanical pinball machines only have a finite amount of state. Which is to say that they are at best a finite state machine. I mean unless you want to go down the rabbit hole of considering every possible physical position of the ball to be a different state, in which case all you need for a Turing machine (given perfect nudge mechanics etc.) is a ball.

      Like

      • Joseph Nebus 10:59 pm on Thursday, 30 March, 2017 Permalink | Reply

        This is true, although I take a forgiving view of storage space. If we can imagine a Magic deck growing as large as needed for the problem we’re working, why can’t we imagine a Pop-A-Card that has a six- or seven- or even 10^100-digit score reel? (Taking the score reel to be how the state is stored.) At least it seems to me if someone can keep getting all this tape for their classical Turing machine then we can hook another reel up.

        Like

  • Joseph Nebus 6:00 pm on Sunday, 13 November, 2016 Permalink | Reply
    Tags: complexity, computation

    Reading the Comics, November 12, 2016: Frazz and Monkeys Edition 


    Two things made repeat appearances in the mathematically-themed comics this week. They’re the comic strip Frazz and the idea of having infinitely many monkeys typing. Well, silly answers to word problems also turned up, but that’s hard to say many different things about. Here’s what I make the week in comics out to be.

    'An infinite number of monkeys sitting at an infinite number of typewriters will eventually reproduce the works of Shakespeare. ... Justy sayin' ... a four digit pin number is statistically sooo much better.'

    Sandra Bell-Lundy’s Between Friends for the 6th of November, 2016. I’m surprised Bell-Lundy used the broader space of a Sunday strip for a joke that doesn’t need that much illustration, but I understand sometimes you just have to go with the joke that you have. And it isn’t as though Sunday comics get that much space anymore either. Anyway, I suppose we have all been there, although for me that’s more often because I used to have a six-digit pin, and a six-digit library card pin, and those were just close enough to each other that I could never convince myself I was remembering the right one in context, so I would guess wrong.

    Sandra Bell-Lundy’s Between Friends for the 6th introduces the infinite monkeys problem. I wonder sometimes why the monkeys-on-typewriters thing has so caught the public imagination. And then I remember it encourages us to stare directly into infinity and its intuition-destroying nature from the comfortable furniture of the mundane — typewriters, or keyboards, for goodness’ sake — with that childish comic dose of monkeys. Given that it’s a wonder we ever talk about anything else, really.

    Monkeys writing Shakespeare has for over a century stood as a marker for what’s possible but incredibly improbable. I haven’t seen it compared to finding a four-digit PIN. It has got me wondering about the chance that four randomly picked letters will be a legitimate English word. I’m sure the chance is more than the one-in-ten-thousand chance someone would guess a randomly drawn PIN correctly on one try. More than one in a hundred? I’m less sure. The easy-to-imagine thing to do is set a computer to try out all 456,976 possible sets of four letters and check them against a dictionary. The number of hits divided by the number of possibilities would be the chance of drawing a legitimate word. If I had a less capable computer, or were checking even longer words, I might instead draw some set number of words, never minding that I didn’t get every possibility. The fraction of successful words in my sample would be something close to the chance of drawing any legitimate word.

    If I thought a little deeper about the problem, though, I’d just count how many four-letter words are already in my dictionary and divide that by 456,976. It’s always a mistake to start programming before you’ve thought the problem out. The trouble is not being able to tell when that thinking-out is done.
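
    For what it’s worth, either version of the count is only a few lines of code. Here’s a rough sketch of mine; the path to the word list is a guess at where a Unix-like system keeps one, so substitute whatever dictionary file you actually have.

        # Chance that four random letters form an English word: count the
        # four-letter entries in a word list and divide by 26^4.
        words = set()
        with open("/usr/share/dict/words") as f:       # path is an assumption
            for line in f:
                w = line.strip().lower()
                if len(w) == 4 and w.isalpha():
                    words.add(w)

        total = 26 ** 4                                # 456,976 possible strings
        print(len(words), "four-letter words")
        print("chance a random draw is a word:", len(words) / total)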

    Richard Thompson’s Poor Richard’s Almanac for the 7th is the other comic strip to mention infinite monkeys. Well, chimpanzees in this case. But for the mathematical problem they’re not different. I’ve featured this particular strip before. But I’m a Thompson fan. And goodness but look at the face on the T S Eliot fan in the lower left corner there.

    Jef Mallett’s Frazz for the 6th gives Caulfield one of those flashes of insight that seems like it should be something but doesn’t mean much. He’s had several of these lately, as mentioned here last week. As before this is a fun discovery about Roman Numerals, but it doesn’t seem like it leads to much. Perhaps it could lead to a discussion of how the subtractive principle — that you can write “four” as “IV” instead of “IIII” — evolved over time. But then there isn’t much point to learning Roman Numerals at all. It’s got some value in showing how much mathematics depends on culture. Not just that stuff can be expressed in different ways, but that those different expressions make different things easier or harder to do. But I suspect that isn’t the objective of lessons about Roman Numerals.

    Frazz got my attention again the 12th. This time it just uses arithmetic, and a real bear of an arithmetic problem, as a signifier for “a big pile of hard work”. This particular problem would be — well, I have to call it tedious, rather than hard. Doing it is just a long string of adding together two numbers. But to do that over and over, by my count, at least 47 times for this one problem? Hardly any point to doing that much for one result.

    Patrick Roberts’s Todd the Dinosaur for the 7th calls out fractions, and arithmetic generally, as the stuff that ruins a child’s dreams. (Well, a dinosaur child’s dreams.) Still, it’s nice to see someone reminding mathematicians that a lot of their field is mostly used by accountants. Actuaries we know about; mathematics departments like to point out that majors can get jobs as actuaries. I don’t know of anyone I went to school with who chose to become one or expressed a desire to be an actuary. But I admit not asking either.

    Todd declares that after hearing one speak to his class he wants to be an accountant when he grows up. Trent, Todd's caretaker, says that's great but he'll need to do stuff that involves 'carrying the one and probably some fractions.' Todd panics. 'AAAGH! Not 'carrying the one' and that other word you said!'

    Patrick Roberts’s Todd the Dinosaur for the 7th of November, 2016. I don’t remember being talked to by classmates’ parents about what they were, but that might just be that it’s been a long time since I was in elementary school and everybody had the normal sorts of jobs that kids don’t understand. I guess we talked about what our parents did but that should make a weaker impression.

    Mike Thompson’s Grand Avenue started off a week of students-resisting-the-test-question jokes on the 7th. Most of them are hoary old word problem jokes. But, hey, I signed up to talk about it when a comic strip touches a mathematics topic and word problems do count.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal reprinted the 7th is a higher level of mathematical joke. It’s from the genre of nonsense calculation. This one starts off with what’s almost a cliche, at least for mathematics and physics majors. The equation it starts with, e^{i Pi} = -1, is true. And famous. It should be. It links exponentiation, imaginary numbers, π, and negative numbers. Nobody would have seen it coming. And from there it’s the sort of typical gibberish reasoning, like writing “Pi” instead of π so that it can be thought of as “P times i”, to draw the silly conclusion that P = 0. That much work is legitimate.

    From there it sidelines into “P = NP”, which is another equation famous to mathematicians and computer scientists. It’s a shorthand expression of a problem about how long it takes to find solutions. That is, how many steps it takes. How much time it would take a computer to solve a problem. You can see why it’s important to have some study of how long it takes to do a problem. It would be poor form to tie up your computer on a problem that won’t be finished before the computer dies of old age. Or just take too long to be practical.

    Most problems have some sense of size. You can look for a solution in a small problem or in a big one. You expect searching for the solution in a big problem to take longer. The question is how much longer? Some methods of solving problems take a length of time that grows only slowly as the size of the problem grows. Some take a length of time that grows crazy fast as the size of the problem grows. And there are different kinds of time growth. One kind is called Polynomial, because everything is polynomials. But there’s a polynomial in the problem’s size that describes how long it takes to solve. We call this kind of problem P. Another is called Non-Deterministic Polynomial, for problems that … can’t. We assume. We don’t know. But we know some problems that look like they should be NP (“NP Complete”, to be exact).
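
    For a concrete feeling of the difference, here is a small sketch of my own (not anything from the strip or the original post) built on subset sum, one of the classic NP-complete problems. Checking a proposed answer is quick, but the obvious way of finding one tries every subset, and the number of subsets doubles with each element you add.

        from itertools import combinations

        # Subset sum: does some subset of `numbers` add up to `target`?

        # Verifying a proposed answer is cheap, polynomial in the input size.
        def verify(subset, target):
            return sum(subset) == target

        # Finding an answer the obvious way means trying all 2^n subsets,
        # which blows up fast; nobody knows a polynomial-time method.
        def brute_force(numbers, target):
            for r in range(len(numbers) + 1):
                for subset in combinations(numbers, r):
                    if verify(subset, target):
                        return subset
            return None

        print(brute_force([3, 34, 4, 12, 5, 2], 9))    # finds (4, 5)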

    It’s an open question whether P and NP are the same thing. It’s possible that everything we think might be NP actually can be solved by a P-class algorithm we just haven’t thought of yet. It would be a revolution in our understanding of how to find solutions if it were. Most people who study algorithms think P is not NP. But that’s mostly (as I understand it) because it seems like if P were NP then we’d have some leads on proving that by now. You see how this falls short of being rigorous. But it is part of expertise to get a feel for what seems to make sense in light of everything else we know. We may be surprised. But it would be inhuman not to have any expectations of a problem like this.

    Mark Anderson’s Andertoons for the 8th gives us the Andertoons content for the week. It’s a fair question why a right triangle might have three sides, three angles, three vertices, and just the one hypotenuse. The word’s origin is Greek, meaning “stretching under” or “stretching between”. It’s unobjectionable that we might say this is the stretch from one leg of the right triangle to another. But that leaves unanswered why there’s just the one hypothenuse, since the other two legs also stretch from the end of one leg to another. Dr Sarah on The Math Forum suggests we need to think of circles. Draw a circle and a diameter line on it. Now pick any point on the circle other than where the diameter cuts it. Draw a line from one end of the diameter to your point. And from your point to the other end of the diameter. You have a right triangle! And the hypothenuse is the leg stretching under the other two. Yes, I’m assuming you picked a point above the diameter. You did, though, didn’t you? Humans do that sort of thing.

    I don’t know if Dr Sarah’s explanation is right. It sounds plausible and sensible. But those are weak pins to hang an etymology on. But I have no reason to think she’s mistaken. And the explanation might help people accept there is the one hypothenuse and there’s something interesting about it.

    The first (and as I write this only) commenter, Kristiaan, has a good if cheap joke there.

     
    • davekingsbury 10:38 pm on Monday, 14 November, 2016 Permalink | Reply

      I reckon it was Bob Newhart’s sketch about it that made the monkey idea so popular. Best bit, something like, hey one of them has something over here er to be or not to be that is the … gezoinebplatf!

      Like

      • Joseph Nebus 3:35 am on Sunday, 20 November, 2016 Permalink | Reply

        I like to think that helped. I fear that that particular routine’s been forgotten, though. I was surprised back in the 90s when I was getting his albums and ran across that bit, as I’d never heard it before. But it might’ve been important in feeding the idea to other funny people. There’s probably a good essay to be written tracing the monkeys at typewriters through pop culture.

        Liked by 1 person

  • Joseph Nebus 6:00 pm on Sunday, 6 November, 2016 Permalink | Reply
    Tags: computation, Halloween

    Reading the Comics, November 5, 2016: Surprisingly Few Halloween Costumes Edition 


    Comic Strip Master Command gave me a light load this week, which suits me fine. I’ve been trying to get the End 2016 Mathematics A To Z comfortably under way instead. It does strike me that there were fewer Halloween-themed jokes than I’d have expected. For all the jokes there are to make about Halloween I’d imagine some with some mathematical relevance would come up. But they didn’t and, huh. So it goes. The one big exception is the one I’d have guessed would be the exception.

    Bill Amend’s FoxTrot for the 30th — a new strip — plays with the scariness of mathematics. Trigonometry specifically. Trig is probably second only to algebra for the scariest mathematics normal people encounter. And that’s probably more because people get to algebra before they might get to trigonometry. Which is madness, in its way. Trigonometry is about how we can relate angles, arcs, and linear distances. It’s about stuff anyone would like to know, like how to go from an easy-to-make observation of the angle spanned by a thing to how big the thing must be. But the field does require a bunch of exotic new functions like sine and tangent and novelty acts like “arc-cosecant”. And the numbers involved can be terrible things. The sine of an angle, for example, is almost always going to be some irrational number. For common angles we use a lot it’ll be an irrational number with an easy-to-understand form. For example the sine of 45 degrees, mentioned here, is “one-half the square root of two”. Anyone not trying to be intimidating will use that instead. But the sine of, say, 50 degrees? I don’t know what that is either except that it’s some never-ending sequence of digits. People love to have digits, but when they’re asked to do something with them, they get afraid and I don’t blame them.

    Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 30th uses sudoku as shorthand for “genius thinking”. I am aware some complain sudoku isn’t mathematics. It’s certainly logic, though, and if we’re going to rule out logic puzzles from mathematics we’re going to lose a lot of fun fields. One of the commenters provided what I suppose the solution to be. (I haven’t checked.) If you wish to do the puzzle, be careful about scrolling.

    In Jef Mallett’s Frazz for the 2nd Caulfield notices something cute about 100. A perfect square is a familiar enough idea; it’s a whole number that’s the square of another whole number. The “roundest of round numbers” is a value judgement I’m not sure I can get behind. It’s a good round number, anyway, at least for stuff that’s sensibly between about 50 and 150. Or maybe between 50 and 500 if you’re just interested in about how big something might be. An irrational number, well, you know where that joke’s going.

    Mrs Olsen doesn’t seem impressed by Caulfield’s discovery, although in fairness we don’t see the actual aftermath. Sometimes you notice stuff like that and it is only good for a “huh”. But sometimes you get into some good recreational mathematics. It’s the sort of thinking that leads to discovering magic squares and amicable numbers and palindromic prime numbers and the like. Do they lead to important mathematics? Some of them do. Or at least into interesting mathematics. Sometimes they’re just passingly amusing.

    Greg Curfman’s Meg rerun for the 12th quotes Einstein’s famous equation as the sort of thing you could just expect would be asked in school. I’m not sure I ever had a class where knowing E = mc^2 was the right answer to a question, though. Maybe there was one as I got into physics, since we did spend a bit of time on special relativity and E = mc^2 turns up naturally there. Maybe I’ve been out of elementary school too long to remember.

    Mark Tatulli’s Heart of the City for the 4th has Heart and Dean talking about postapocalyptic society. Heart doubts that postapocalyptic society would need people like him, “with long-division experience”. Ah, but, grant the loss of computing devices. People will still need to compute. Before the days of electrical, and practical mechanical, computing, people who could compute accurately were in demand. The example mathematicians learn to remember is Zacharias Dase, a German mental calculator. He was able to do astounding work, all in his head. But he didn’t earn so much money as pro-mental-arithmetic propaganda would like us to believe. And why work entirely in your head if you don’t need to?

    Larry Wright’s Motley Classics rerun for the 5th is a word problem joke. And it’s mixed with labor relations humor for the sake of … I’m not quite sure, actually. Anyway I would have sworn I’d featured this strip in a long-ago Reading The Comics post, but I don’t see it on a casual search. So, go figure.

     
  • Joseph Nebus 5:12 pm on Sunday, 14 September, 2014 Permalink | Reply
    Tags: common logarithms, computation, desert island problems, natural logarithms

    Without Machines That Think About Logarithms 


    I’ve got a few more thoughts about calculating logarithms, based on how the Harvard IBM Automatic Sequence-Controlled Calculator did things, and wanted to share them. I also have some further thoughts coming up shortly courtesy my first guest blogger, which is exciting to me.

    The procedure that was used back then to compute common logarithms — logarithms base ten — was built on several legs: that we can work out some logarithms ahead of time, that we can work out the natural (base e) logarithm of a number using an infinite series, that we can convert the natural logarithm to a common logarithm by a single multiplication, and that the logarithm of the product of two (or more) numbers equals the sum of the logarithm of the separate numbers.

    From that we got a pretty nice, fairly slick algorithm for producing logarithms. Ahead of time you have to work out the logarithms for 1, 2, 3, 4, 5, 6, 7, 8, and 9; and then, to make things more efficient, you’ll want the logarithms for 1.1, 1.2, 1.3, 1.4, et cetera up to 1.9; for that matter, you’ll also want 1.01, 1.02, 1.03, 1.04, and so on to 1.09. You can get more accurate numbers quickly by working out the logarithms for three digits past the decimal — 1.001, 1.002, 1.003, 1.004, and so on — and for that matter to four digits (1.0001) and more. You’re buying either speed of calculation or precision of result with memory.

    The process as described before worked out common logarithms, although there isn’t much reason that it has to be those. It’s a bit convenient, because if you want the logarithm of 47.2286 you’ll want to shift that to the logarithm of 4.72286 plus the logarithm of 10, and the common logarithm of 10 is a nice, easy 1. The same logic works in natural logarithms: the natural logarithm of 47.2286 is the natural logarithm of 4.72286 plus the natural logarithm of 10, but the natural logarithm of 10 is a not-quite-catchy 2.3026 (approximately). You pretty much have to decide whether you want to deal with factors of 10 being an unpleasant number or to deal with calculating natural logarithms and then multiplying them by the common logarithm of e, about 0.43429.

    But the point is if you found yourself with no computational tools, but plenty of paper and time, you could reconstruct logarithms for any number you liked pretty well: decide whether you want natural or common logarithms. I’d probably try working out both, since there’s presumably the time, after all, and who knows what kind of problems I’ll want to work out afterwards. And I can get quite nice accuracy after working out maybe 36 logarithms using the formula:

    \log_e\left(1 + h\right) = h - \frac12 h^2 + \frac13 h^3 - \frac14 h^4 + \frac15 h^5 - \cdots

    This will work very well for numbers like 1.1, 1.2, 1.01, 1.02, and so on: for this formula to work, h has to be between -1 and 1, or put another way, we have to be looking for the logarithms of numbers between 0 and 2. And it takes fewer terms to get the result as precise as you want the closer h is to zero, that is, the closer the number whose logarithm we want is to 1.

    So most of my reference table is easy enough to make. But there’s a column left out: what is the logarithm of 2? Or 3, or 4, or so on? The infinite-series formula there doesn’t work that far out, and if you give it a try, let’s say with the logarithm of 5, you get a good bit of nonsense, numbers swinging positive and negative and ever-larger.

    Of course we’re not limited to formulas; we can think, too. 3, for example, is equal to 1.5 times 2, so the logarithm of 3 is the logarithm of 1.5 plus the logarithm of 2, and we have the logarithm of 1.5, and the logarithm of 2 is … OK, that’s a bit of a problem. But if we had the logarithm of 2, we’d be able to work out the logarithm of 4 — it’s just twice that — and we could get to other numbers pretty easily: 5 is, among other things, 2 times 2 times 1.25 so its logarithm is twice the logarithm of 2 plus the logarithm of 1.25. We’d have to work out the logarithm of 1.25, but we can do that by formula. 6 is 2 times 2 times 1.5, and we already had 1.5 worked out. 7 is 2 times 2 times 1.75, and we have a formula for the logarithm of 1.75. 8 is 2 times 2 times 2, so, triple whatever the logarithm of 2 is. 9 is 3 times 3, so, double the logarithm of 3.

    We’re not required to do things this way. I just picked some nice, easy ways to factor the whole numbers up to 9, and that didn’t seem to demand doing too much more work. I’d need the logarithms of 1.25 and 1.75, as well as 2, but I can use the formula or, for that matter, work it out using the rest of my table: 1.25 is 1.2 times 1.04 times 1.001 times 1.000602, approximately. But there are infinitely many ways to get 3 by multiplying together numbers between 1 and 2, and we can use any that are convenient.

    We do still need the logarithm of 2, but, then, 2 is among other things equal to 1.6 times 1.25, and we’d been planning to work out the logarithm of 1.6 all the time, and 1.25 is useful in getting us to 5 also, so, why not do that?

    So in summary we could get logarithms for any numbers we wanted by working out the logarithms for 1.1, 1.2, 1.3, and so on, and 1.01, 1.02, 1.03, et cetera, and 1.001, 1.002, 1.003 and so on, and then 1.25 and 1.75, which lets us work out the logarithms of 2, 3, 4, and so on up to 9.
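
    Here is that plan carried out in a few lines of code (my own sketch, using ordinary floating-point arithmetic rather than hand calculation). The series does the work for the factors close to 1, and the factorizations above assemble the whole numbers; these come out as natural logarithms, so multiplying by 0.43429 would turn them into common ones.

        import math

        # Natural logarithm of 1 + h from the alternating series; fine for |h| < 1.
        def log1p_series(h, tol=1e-12):
            total, term, n = 0.0, h, 1
            while abs(term / n) >= tol:
                total += term / n if n % 2 else -term / n
                term *= h
                n += 1
            return total

        # Logarithms of the helper numbers, straight from the series ...
        log = {x: log1p_series(x - 1) for x in (1.25, 1.5, 1.6, 1.75)}
        # ... and the whole numbers 2 through 9 from the factorizations above.
        log[2] = log[1.6] + log[1.25]       # 2 = 1.6 * 1.25
        log[3] = log[1.5] + log[2]          # 3 = 1.5 * 2
        log[4] = 2 * log[2]                 # 4 = 2 * 2
        log[5] = 2 * log[2] + log[1.25]     # 5 = 2 * 2 * 1.25
        log[6] = 2 * log[2] + log[1.5]      # 6 = 2 * 2 * 1.5
        log[7] = 2 * log[2] + log[1.75]     # 7 = 2 * 2 * 1.75
        log[8] = 3 * log[2]                 # 8 = 2 * 2 * 2
        log[9] = 2 * log[3]                 # 9 = 3 * 3

        for x in range(2, 10):              # natural logarithms; compare math.log
            print(x, round(log[x], 8), round(math.log(x), 8))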

    I haven’t yet worked out, but I am curious about, what the fewest number of “extra” numbers I’d have to calculate is. That is, granted that I have to figure out the logarithms of 1.1, 1.01, 1.001, et cetera anyway. The way I outlined things I have to also work out the logarithms of 1.25 and 1.75 to get all the numbers I need. Is it possible to figure out a cleverer bit of factorization that requires only one extra number be worked out? For that matter, is it possible to need no extra numbers? My instinctive response is to say no, but that’s hardly a proof. I’d be interested to know better.

     
  • Joseph Nebus 10:27 pm on Saturday, 6 September, 2014 Permalink | Reply
    Tags: computation

    Machines That Give You Logarithms 


    As I’ve laid out the tools that the Harvard IBM Automatic Sequence-Controlled Calculator would use to work out a common logarithm, now I can show how this computer of the 1940s and 1950s would do it. The goal, remember, is to compute logarithms to a desired accuracy, using computers that haven’t got abundant memory, and as quickly as possible. As quickly as possible means, roughly, avoiding multiplication (which takes time) and doing as few divisions as can possibly be done (divisions take forever).

    As a reminder, the tools we have are:

    1. We can work out at least some logarithms ahead of time and look them up as needed.
    2. The natural logarithm of a number close to 1 is \log_e\left(1 + h\right) = h - \frac12h^2 + \frac13h^3 - \frac14h^4 + \frac15h^5 - \cdots .
    3. If we know a number’s natural logarithm (base e), then we can get its common logarithm (base 10): multiply the natural logarithm by the common logarithm of e, which is about 0.43429.
    4. Whether the natural or the common logarithm (or any other logarithm you might like) \log\left(a\cdot b\cdot c \cdot d \cdots \right) = \log(a) + \log(b) + \log(c) + \log(d) + \cdots

    Now we’ll put this to work. The first step is which logarithms to work out ahead of time. Since we’re dealing with common logarithms, we only need to be able to work out the logarithms for numbers between 1 and 10: the common logarithm of, say, 47.2286 is one plus the logarithm of 4.72286, and the common logarithm of 0.472286 is minus two plus the logarithm of 4.72286. So we’ll start by working out the logarithms of 1, 2, 3, 4, 5, 6, 7, 8, and 9, and storing them in what, in 1944, was still a pretty tiny block of memory. The original computer using this could store 72 numbers at a time, remember, though to 23 decimal digits.

    So let’s say we want to know the logarithm of 47.2286. We have to divide this by 10 in order to get the number 4.72286, which is between 1 and 10, so we’ll need to add one to whatever we get for the logarithm of 4.72286. (And, yes, we want to avoid doing divisions, but dividing by 10 is a special case. The Automatic Sequence-Controlled Calculator stored numbers, if I am not grossly misunderstanding things, in base ten, and so dividing or multiplying by ten was as fast for it as moving the decimal point is for us. Modern computers, using binary arithmetic, find it as fast to divide or multiply by powers of two, even though division in general is a relatively sluggish thing.)

    We haven’t worked out what the logarithm of 4.72286 is. And we don’t have a formula that’s good for that. But: 4.72286 is equal to 4 times 1.1807, and therefore the logarithm of 4.72286 is going to be the logarithm of 4 plus the logarithm of 1.1807. We worked out the logarithm of 4 ahead of time (it’s about 0.60206, if you’re curious).

    We can use the infinite series formula to get the natural logarithm of 1.1807 to as many digits as we like. The natural logarithm of 1.1807 will be about 0.1807 - \frac12 0.1807^2 + \frac13 0.1807^3 - \frac14 0.1807^4 + \frac15 0.1807^5 - \cdots or 0.16613. Multiply this by the logarithm of e (about 0.43429) and we have a common logarithm of about 0.07214. (We have an error estimate, too: we’ve got the natural logarithm of 1.1807 within a margin of error of \frac16 0.1807^6 , or about 0.000 0058, which, multiplied by the logarithm of e, corresponds to a margin of error for the common logarithm of about 0.000 0025.)

    Therefore: the logarithm of 47.2286 is about 1 plus 0.60206 plus 0.07214, which is 1.6742. And it is, too; we’ve done very well at getting the number just right considering how little work we really did.

    Although … that infinite series formula. That requires a fair number of multiplications, at least eight as I figure it, however you look at it, and those are sluggish. It also properly speaking requires divisions, although you could easily write your code so that instead of dividing by 4 (say) you multiply by 0.25 instead. For this particular example number of 47.2286 we didn’t need very many terms in the series to get four decimal digits of accuracy, but maybe we got lucky and some other number would have required dozens of multiplications. Can we make this process, on average, faster?

    And here’s one way to do it. Besides working out the common logarithms for the whole numbers 1 through 9, also work out the common logarithms for 1.1, 1.2, 1.3, 1.4, et cetera up to 1.9. And then …

    We started with 47.2286. Divide by 10 (a free bit of work) and we have 4.72286. And 4.72286 is 4 times 1.180715. And 1.180715 is equal to 1.1 — the whole number and the first digit past the decimal — times 1.07337. That is, 47.2286 is 10 times 4 times 1.1 times 1.07337. And so the logarithm of 47.2286 is the logarithm of 10 plus the logarithm of 4 plus the logarithm of 1.1 plus the logarithm of 1.07337. We are almost certainly going to need fewer terms in the infinite series to get the logarithm of 1.07337 than we need for 1.180715 and so, at the cost of one more division, we probably save a good number of multiplications.

    The common logarithm of 1.1 is about 0.041393. So the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) is 1.6435, which falls a little short of the actual logarithm we’d wanted, about 1.6742, but two or three terms in the infinite series should be enough to make that up.

    Or we could work out a few more common logarithms ahead of time: those for 1.01, 1.02, 1.03, and so on up to 1.09. Our original 47.2286 divided by 10 is 4.72286. Divide that by the first number, 4, and you get 1.180715. Divide 1.180715 by 1.1, the first two digits, and you get 1.07337. Divide 1.07337 by 1.07, the first three digits, and you get 1.003156. So 47.2286 is 10 times 4 times 1.1 times 1.07 times 1.003156. So the common logarithm of 47.2286 is the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) plus the logarithm of 1.07 (about 0.02938) plus the logarithm of 1.003156 (to be determined). Even ignoring the to-be-determined part, that adds up to 1.6728, which is a little short of the 1.6742 we want but is doing pretty good considering we’ve reduced the whole problem to three divisions, looking stuff up, and four additions.

    If we go a tiny bit farther, and also have worked out ahead of time the logarithms for 1.001, 1.002, 1.003, and so on out to 1.009, and do the same process all over again, then we get some better accuracy and quite cheaply yet: 47.2286 divided by 10 is 4.72286. 4.72286 divided by 4 is 1.180715. 1.180715 divided by 1.1 is 1.07337. 1.07337 divided by 1.07 is 1.003156. 1.003156 divided by 1.003 is 1.0001558.

    So the logarithm of 47.2286 is the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) plus the logarithm of 1.07 (0.029383) plus the logarithm of 1.003 (0.001301) plus the logarithm of 1.001558 (to be determined). Leaving aside the to-be-determined part, that adds up to 1.6741.

    And the to-be-determined part is great: if we used just a single term in this series, the margin for error would be, at most, 0.000 000 0052, which is probably small enough for practical purposes. The first term in the to-be-determined part is awfully easy to calculate, too: it’s just 1.0001558 – 1, that is, 0.0001558; multiply that by the common logarithm of e (0.43429) and you get about 0.000 0677. Add that and we have an approximate logarithm of 1.6742, which is dead on.

    And I do mean dead on: work out more decimal places of the logarithm based on this summation and you get 1.674 205 077 226 78. That’s no more than five billionths away from the correct logarithm for the original 47.2286. And it required doing four divisions, one multiplication, and five additions. It’s difficult to picture getting such good precision with less work.

    Of course, that’s done in part by having stockpiled a lot of hard work ahead of time: we need to know the logarithms of 1, 1.1, 1.01, 1.001, and then 2, 1.2, 1.02, 1.002, and so on. That’s 36 numbers altogether and there are many ways to work out logarithms. But people have already done that work, and we can use that work to make the problems we want to do considerably easier.

    But there’s the process. Work out ahead of time logarithms for 1, 1.1, 1.01, 1.001, and so on, to whatever the limits of your patience. Then take the number whose logarithm you want and divide (or multiply) by ten until you get your working number into the range of 1 through 10. Divide out the first digit, which will be a whole number from 1 through 9. Divide out the first two digits, which will be something from 1.1 to 1.9. Divide out the first three digits, something from 1.01 to 1.09. Divide out the first four digits, something from 1.001 to 1.009. And so on. Then add up the logarithms of the power of ten you divided or multiplied by with the logarithm of the first divisor and the second divisor and third divisor and fourth divisor, until you run out of divisors. And then — if you haven’t already got the answer as accurately as you need — work out as many terms in the infinite series as you need; probably, it won’t be very many. Add that to your total. And you are, amazingly, done.
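
    Here is the whole procedure as a sketch in Python (mine, and using floating point rather than the Calculator’s 23-decimal-digit registers, so read it as an illustration of the scheme rather than a reconstruction of the machine’s actual program):

        import math

        LOG10_E = 0.43429448190325176       # the common logarithm of e

        # Common logarithms "worked out ahead of time": 1 through 9, then
        # 1.1 through 1.9, 1.01 through 1.09, and 1.001 through 1.009.
        TABLE = {}
        for decimals in (0, 1, 2, 3):
            for digit in range(1, 10):
                value = digit if decimals == 0 else 1 + digit / 10 ** decimals
                TABLE[round(value, decimals)] = math.log10(value)

        def common_log(x):
            # Powers of ten are free work: shift x into the range 1 to 10.
            result = 0.0
            while x >= 10:
                x, result = x / 10, result + 1
            while x < 1:
                x, result = x * 10, result - 1
            # Divide out the first digit, then the first two, three, four
            # digits, adding the looked-up logarithm of each divisor.
            for decimals in (0, 1, 2, 3):
                divisor = round(math.floor(x * 10 ** decimals) / 10 ** decimals, decimals)
                if divisor in TABLE:        # rounding can leave x a hair under 1
                    x /= divisor
                    result += TABLE[divisor]
            # Whatever is left is very close to 1; a couple of terms of the
            # natural-log series, converted to base ten, finish the job.
            h = x - 1
            return result + (h - h * h / 2) * LOG10_E

        print(common_log(47.2286))          # about 1.6742050772
        print(math.log10(47.2286))          # the library's answer, for comparison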

     
  • Joseph Nebus 10:05 pm on Wednesday, 3 September, 2014 Permalink | Reply
    Tags: computation

    Machines That Do Something About Logarithms 


    I’m going to assume everyone reading this accepts that logarithms are worth computing, and try to describe how Harvard’s IBM Automatic Sequence-Controlled Calculator would work them out.

    The first part of this is kind of an observation: the quickest way to give the logarithm of a number is to already know it. Looking it up in a table is way faster than evaluating it, and that’s as true for the computer as for you. Obviously we can’t work out logarithms for every number, what with there being so many of them, but we could work out the logarithms for a reasonable range and to a certain precision and trust that the logarithm of (say) 4.42286 is going to be tolerably close to the logarithm of 4.423 that we worked out ahead of time. Working out a range of, say, 1 to 10 for logarithms base ten is plenty, because that’s all the range we need: the logarithm base ten of 44.2286 is the logarithm base ten of 4.42286 plus one. The logarithm base ten of 0.442286 is the logarithm base ten of 4.42286 minus one. You can guess from that what the logarithm of 4,422.86 is, compared to that of 4.42286.

    This is trading computer memory for computational speed, which is often worth doing. But the old Automatic Sequence-Controlled Calculator can’t do that, at least not as easily as we’d like: it had the ability to store 72 numbers, albeit to 23 decimal digits. We can’t just use “worked it out ahead of time”, although we’re not going to abandon that idea either.

    The next piece we have is something useful if we want to work out the natural logarithm — the logarithm base e — of a number that’s close to 1. We have a formula that will let us work out this natural logarithm to whatever accuracy we want:

    \log_{e}\left(1 + h\right) = h - \frac12 h^2 + \frac13 h^3 - \frac14 h^4 + \frac15 h^5 - \cdots \mbox{ if } |h| < 1

    In principle, we have to add up infinitely many terms to get the answer right. In practice, we only add up terms until the error — the difference between our sum and the correct answer — is smaller than some acceptable margin. This seems to beg the question, because how can we know how big that error is without knowing what the correct answer is? In fact we don’t know just what the error is, but we do know that the error can’t be any larger than the absolute value of the first term we neglect.

    Let me give an example. Suppose we want the natural logarithm of 1.5, which the alert have noticed is equal to 1 + 0.5. Then h is 0.5. If we add together the first five terms of the natural logarithm series, then we have 0.5 - \frac12 0.5^2 + \frac13 0.5^3 - \frac14 0.5^4 + \frac15 0.5^5 which is approximately 0.40729. If we were to work out the next term in the series, that would be -\frac16 0.5^6 , which has an absolute value of about 0.0026. So the natural logarithm of 1.5 is 0.40729, plus or minus 0.0026. If we only need the natural logarithm to within 0.0026, that’s good: we’re done.

    In fact, the natural logarithm of 1.5 is approximately 0.40547, so our error is closer to 0.00183, but that’s all right. Few people complain that our error is smaller than what we estimated it to be.

    If we know what margin of error we’ll tolerate, by the way, then we know how many terms we have to calculate. Suppose we want the natural logarithm of 1.5 accurate to 0.001. Then we have to find the first number n so that \frac1n 0.5^n < 0.001 ; if I'm not mistaken, that's eight. Just how many terms we have to calculate will depend on what h is; the bigger it is — the farther the number is from 1 — the more terms we'll need.
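
    As a quick check of those figures (a sketch of my own, not anything from the Calculator’s actual program), here is code that finds the first term small enough to neglect and sums the series out to a chosen number of terms:

        import math

        # First n for which h^n / n drops below the tolerance; terms from
        # that point on can be neglected.
        def first_negligible_term(h, tol):
            n = 1
            while h ** n / n >= tol:
                n += 1
            return n

        # Sum h - h^2/2 + h^3/3 - ... out to the given number of terms.
        def log1p_series(h, terms):
            return sum((-1) ** (n + 1) * h ** n / n for n in range(1, terms + 1))

        print(first_negligible_term(0.5, 0.001))   # 8, as claimed above
        print(log1p_series(0.5, 5))                # about 0.40729
        print(math.log(1.5))                       # about 0.40547, for comparison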

    The trouble with this is that it’s only good for working out the natural logarithms of numbers between 0 and 2. (And it’s better the closer the number is to 1.) If you want the natural logarithm of 44.2286, you have to divide out the highest power of e that’s less than it — well, you can fake that by dividing by e repeatedly — and what you get is that it’s e times e times e times 2.202 and we’re stuck there. Not hopelessly, mind you: we could find the logarithm of 1/2.202, which will be minus the logarithm of 2.202, at least, and we can work back to the original number from there. Still, this is a bit of a mess. We can do better.

    The third piece we can use is one of the fundamental properties of logarithms. This is true for any base, as long as we use the same base for each logarithm in the equation here, and I’ve mentioned it in passing before:

    \log\left(a\cdot b\cdot c\cdot d \cdots\right) = \log\left(a\right) + \log\left(b\right) + \log\left(c\right) + \log\left(d\right) + \cdots

    That is, if we could factor a number whose logarithm we want into components which we can either look up or we can calculate very quickly, then we know its logarithm is the sum of the logarithms of those components. And this, finally, is how we can work out logarithms quickly and without too much hard work.

     
    • howardat58 12:54 am on Thursday, 4 September, 2014 Permalink | Reply

      If you take the terms of the expansion of log(1+x) in pairs things are better:
      Take h^3/3-h^4/4 and get h^3*(4-3*h)/12 for example
      These are all positive for h<1 and the series will converge much more quickly.
      ps my post will soon be with you

      Like

      • Joseph Nebus 12:14 am on Friday, 5 September, 2014 Permalink | Reply

        Pairing things up looks nice, although I’m not sure it saves work. It might make for more numerically stable calculations but I haven’t tested that. (To explain: calculations on the computer naturally include a bit of error, because, basically, what we might want to write as 1/3 we write on the computer as 0.333 and that’s a tiny bit different. Sometimes doing calculations in one order will magnify that little difference between what you want and what you actually compute. Sometimes changing the order will mean those little differences stay little, and that’s a stable computation.)

        I’m looking forward to your post and appreciate the writing. Thank you.

        Like

    • mommycookforme 2:18 pm on Thursday, 4 September, 2014 Permalink | Reply

      Wow! A mathematical genius! Thanks for sharing us, have a wonderful day!:)

      Like

  • Joseph Nebus 4:27 pm on Sunday, 24 August, 2014 Permalink | Reply
    Tags: computation

    Machines That Think About Logarithms 


    I confess that I picked up Edmund Callis Berkeley’s Giant Brains: Or Machines That Think, originally published 1949, from the library shelf as a source of cheap ironic giggles. After all, what is funnier than an attempt to explain to a popular audience that, wild as it may be to contemplate, electrically-driven machines could “remember” information and follow “programs” of instructions based on different conditions satisfied by that information? There’s a certain amount of that, though not as much as I imagined, and a good amount of descriptions of how the hardware of different partly or fully electrical computing machines of the 1940s worked.

    But a good part, and the most interesting part, of the book is about algorithms, the ways to solve complicated problems without demanding too much computing power. This is fun to read because it showcases the ingenuity and creativity required to do useful work. The need for ingenuity will never leave us — we will always want to compute things that are a little beyond our ability — but to see how it’s done for a simple problem is instructive, if for nothing else to learn the kinds of tricks you can do to get the most of your computing resources.

    The example that most struck me and which I want to share is from the chapter on the IBM Automatic Sequence-Controlled Calculator, built at Harvard at a cost of “somewhere near 3 or 4 hundred thousand dollars, if we leave out some of the cost of research and development, which would have been done whether or not this particular machine had ever been built”. It started working in April 1944, and wasn’t officially retired until 1959. It could store 72 numbers, each with 23 decimal digits. Like most computers (then and now) it could do addition and subtraction very quickly, in the then-blazing speed of about a third of a second; it could do multiplication tolerably quickly, in about six seconds; and division, rather slowly, in about fifteen seconds.

    The process I want to describe is the taking of logarithms, and why logarithms should be interesting to compute takes a little bit of justification, although it’s implicitly there just in how fast calculations get done. Logarithms let one replace the multiplication of numbers with their addition, for a considerable savings in time; better, they let you replace the division of numbers with subtraction. They further let you turn exponentiation and roots into multiplication and division, which is almost always faster to do. Many human senses seem to work on a logarithmic scale, as well: we can tell that one weight is twice as heavy as the other much more reliably than we can tell that one weight is four pounds heavier than the other, or that one light is twice as bright as the other rather than is ten lumens brighter.
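
    In symbols, those time-saving trades are the familiar identities (true in any base, provided the same base is used throughout):

    \log\left(a\cdot b\right) = \log\left(a\right) + \log\left(b\right), \qquad \log\left(\frac{a}{b}\right) = \log\left(a\right) - \log\left(b\right), \qquad \log\left(a^n\right) = n\,\log\left(a\right)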

    What the logarithm of a number is depends on some other, fixed, quantity, known as the base. In principle any positive number will do as base; in practice, these days people mostly only care about base e (which is a little over 2.718), the “natural” logarithm, because it has some nice analytic properties. Back in the day, which includes when this book was written, we also cared about base 10, the “common” logarithm, because we mostly work in base ten. I have heard of people who use base 2, but haven’t seen them myself and must regard them as an urban legend. The other bases are mostly used by people who are writing homework problems for the part of the class dealing with logarithms. To some extent it doesn’t matter what base you use. If you work out the logarithm in one base, you can convert that to the logarithm in another base by a multiplication.

    The logarithm of some number in your base is the exponent you have to raise the base to to get your desired number. For example, the logarithm of 100, in base 10, is going to be 2 because 10^2 is 100, and the logarithm of e^{1/3} (a touch greater than 1.3956), in base e, is going to be 1/3. To dig deeper in my reserve of in-jokes, the logarithm of 2038, in base 10, is approximately 3.3092, because 10^{3.3092} is just about 2038. The logarithm of e, in base 10, is about 0.4343, and the logarithm of 10, in base e, is about 2.303. Your calculator will verify all that.

    All that talk about “approximately” should have given you some hint of the trouble with logarithms. They’re only really easy to compute if you’re looking for whole powers of whatever your base is, and that’s if your base is 10 or 2 or something else simple like that. If you’re clever and determined you can work out, say, that the logarithm of 2, base 10, has to be close to 0.3. It’s fun to do that, but it’ll involve such reasoning as “two to the tenth power is 1,024, which is very close to ten to the third power, which is 1,000, so therefore the logarithm of two to the tenth power must be about the same as the logarithm of ten to the third power”. That’s clever and fun, but it’s hardly systematic, and it doesn’t get you many digits of accuracy.
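
    Set out in symbols, that back-of-the-envelope estimate is just

    2^{10} = 1024 \approx 1000 = 10^3, \qquad\mbox{so}\qquad 10\,\log_{10}\left(2\right) \approx 3 \qquad\mbox{and}\qquad \log_{10}\left(2\right) \approx 0.3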

    So when I pick up this thread I hope to explain one way to produce as many decimal digits of a logarithm as you could want, without asking for too much from your poor Automatic Sequence-Controlled Calculator.

     
    • Boxing Pythagoras 6:33 pm on Sunday, 24 August, 2014 Permalink | Reply

      I definitely need this book, now. Computers have been so naturally ingrained in our lives that people often forget how incredulous the concept once was.

      Like

      • Joseph Nebus 8:40 pm on Tuesday, 26 August, 2014 Permalink | Reply

        Oh, yes, people do forget how fantastical computers were until quite recently. This book goes into a lot of the hardware, though, which I think is great. There’s something wonderful in how a piece of metal can be made to remember a fact, and it’s not really explained anymore in introductions to computers even though that’s become much more standard than memory had been in the 1940s.

        I’d picked up the book from the university library. I haven’t checked how available and expensive it is as a used book.

        Like

    • chrisbrakeshow 3:04 am on Tuesday, 26 August, 2014 Permalink | Reply

      This is some pretty heavy stuff, man. Thanks for digging deep!

      -john

      Like

    • howardat58 3:43 am on Wednesday, 27 August, 2014 Permalink | Reply

      I am a bit stuck with the text only in the comment section but here goes:
      It started with logs-like-products so I wrote the chord slope function for log as
      (log(x + x*h) – log(x))/(x*h)
      which quickly and to my surprise became
      (1/x)*log(1+h)/h
      This provided a rationale for the formal definition of log(x) as
      integral(1/t) from t=1 to t=x
      I then thought to try the standard “add up the rectangles” approach, but with unequal widths, in a geometric progression.
      So for 5 intervals and k^5=x the interval points were 1, k, k^2, k^3, k^4 and k^5 (=x)
      The sum of the rectangles came to 5*(1 – 1/k) which eventually, via the trapezoidal rule, gave the n th estimate of log(x) as

      n(root(x,n) – 1/root(x,n))

      where root(x,n) is the nth root of x.

      I tried it out with n as a power of 2, so square rooting is the only messy bit, and with n=2^10 I got log(e) = 0.9999…..
      That is 4 dp in ten steps
      Not brilliant but it works, and I have never seen anything like this before.

      Like

      • Joseph Nebus 2:22 am on Thursday, 28 August, 2014 Permalink | Reply

        I’m still working out to my satisfaction the algebra behind this (in particular the formula n(root(x, n) – 1/root(x, n)) keeps coming out about twice the natural logarithm and I’m not sure just where the two comes in, but have suspicions), but I do believe you’re right about the basic approach. I believe that it’s structurally similar to the “method of exhaustion” that the Ancient Greeks would use to work out the areas underneath a curve, long before there was a calculus to work out these problems. In any case it’s a good scheme.

        Of course, the tradeoff for this is that you need to have the n-th root of the number whose logarithm you want. This might not be too bad, especially if you decide to use an n that’s a power of two, though.

        Like

        • howardat58 5:01 pm on Thursday, 28 August, 2014 Permalink | Reply

          Transcription error !
          The divisor of 2 was scribbled so small that I missed it.
          Yes, it should be divided by 2.
          More on a separate post, the lines are getting shorter.

          Like

          • Joseph Nebus 2:44 am on Friday, 29 August, 2014 Permalink | Reply

            I’m glad to have that sorted out.

            If you’d like, I’d be happy to set your work as a guest post on the main page. WordPress even allows the use of basic LaTeX commands … certainly in main posts; I’m not sure how it does in comments. (I’m afraid to try, given how hard it can be to get right the first time and that there’s no preview and editing of comments that I’ve found.)

            Like

            • howardat58 3:26 am on Friday, 29 August, 2014 Permalink | Reply

              I would like that. Thankyou.
              What I use is a very simple program called mathedit , which lets you put the stuff in picture form (media) and I figured out how to modify the display size to fill the width. Have a look at my latest post to see the effect.
              If I save any pics to my blogsite you can access them, put them in a post and so on using the url
              I was about to start on this, with explanation, anyway, and I have figured out a way of improving the trapezium rule method dramatically.
              If you email me at howard_at_58@yahoo.co.uk I can send you stuff directly to check out.
              Gracias

              Like

  • Joseph Nebus 4:02 pm on Friday, 22 August, 2014 Permalink | Reply
    Tags: computation, E

    Writing About E (Not By Me) 


    It’s tricky to write about e. That is, it’s not a difficult thing to write about, but it’s hard to find the audience for this number. It’s quite important, mathematically, but it hasn’t got an easy-to-understand definition like pi’s “the circumference of a circle divided by its diameter”. E’s most concise definition, I guess, is “the base of the natural logarithm”, which as an explanation to someone who hasn’t done much mathematics is only marginally more enlightening than slapping him with a slice of cold pizza. And it hasn’t got the sort of renown of something like the golden ratio which makes the number sound familiar and even welcoming.

    Still, the Mean Green Math blog (“Explaining the whys of mathematics”) has been running a series of essays explaining e, by looking at different definitions of the number. The most recent of these has been the twelfth in the series. They seem to be arranged in chronological order under the category of Algebra II topics, and under the tag of “E” essays, although I can’t promise how long it’ll be before you have to flip through so many “older” page links on the category and tag pages that it’s harder to find that way. If I see a master page collecting all the Definitions Of E essays into one guide I’ll post that.

     