The BBC’s general-discussion podcast In Our Time repeated another mathematics-themed session this week. The topic is P versus NP, a matter of computational complexity. P and NP here are shorthands to describe the amount of work needed to follow a procedure. And, particularly, how the amount of work changes as the size of the problem being worked on changes. We know there are problems of a complexity type P, and problems of a complexity type NP. What’s not known is whether those are, actually, the same, whether there’s a clever way of describing an NP problem so we can solve it with a P approach.

I do not remember whether I heard this program when it originally broadcast in 2015. And I haven’t had time to listen to this one yet. But these discussions are usually pretty solid, and will get to discussing the ambiguities and limitations and qualifications of the field. So I feel comfortable recommending it even without a recent listen, which I will likely get to sometime during this week’s walks.

I do read other people’s mathematics writing, even if I don’t do it enough. A couple days ago RJ Lipton and KW Regan’s Reductions And Jokes discussed how one can take a problem and rewrite it as a different problem. This is one of the standard mathematician’s tricks. The point to doing this is that you might have a better handle on the new problem.

“Better” is an aesthetic judgement. It reflects whether the new problem is easier to work with. Along the way, they offer an example that surprised and delighted me, and that I wanted to share. It’s about multiplying whole numbers. Multiplication can take a fair while, as anyone who’s tried to do 38 times 23 by hand has found out. But we can speed that up. A multiplication table is a special case of a lookup table, a chunk of stored memory which has computed ahead of time all the multiplications someone is likely to do. Then instead of doing them, you just look them up.

The catch is that a multiplication table takes memory. To do all the multiplications for whole numbers 1 through 10 you need … well, not 100 memory cells. But 55. To have 1 through 20 worked out ahead of time you need 210 memory cells. Can we do better?
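
The counts come from symmetry: since a times b equals b times a, you only need the pairs with a no bigger than b, and there are n(n+1)/2 of those. A quick sketch (the function names are my own):

```python
def build_times_table(n):
    """Lookup table for a * b with 1 <= a <= b <= n.
    Storing only the pairs with a <= b halves the memory, since a * b == b * a."""
    return {(a, b): a * b for a in range(1, n + 1) for b in range(a, n + 1)}

def look_up(table, a, b):
    """Order the pair so either a * b or b * a finds the stored entry."""
    return table[(a, b)] if a <= b else table[(b, a)]

table10 = build_times_table(10)
print(len(table10))                # 55 cells for 1 through 10
print(len(build_times_table(20)))  # 210 cells for 1 through 20
print(look_up(table10, 7, 3))      # 21
```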

If addition and subtraction are easy enough to do? And if dividing by two is easy enough? Then, yes. Instead of working out every pair multiplication, work out the squares of the whole numbers. And then make use of this identity:

a · b = ( (a + b)² − a² − b² ) ÷ 2

And that delights me. It’s one of those relationships that’s sitting there, waiting for anyone who’s ever squared a binomial to notice. I don’t know that anyone actually uses this. But it’s fun to see multiplication worked out in a different yet practical way.
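
Here is a sketch of that scheme. One way to write the identity, which falls out of expanding (a + b)² = a² + 2ab + b², is a · b = ((a + b)² − a² − b²) ÷ 2, so a table of squares plus addition, subtraction, and one halving does the whole job:

```python
# Squares worked out ahead of time: for factors up to 10 we need squares up to 20,
# just 21 cells against the 55 a pairwise table takes.
SQUARES = {n: n * n for n in range(0, 21)}

def multiply_by_squares(a, b):
    """a * b = ((a + b)^2 - a^2 - b^2) / 2, using only lookups, +, -, and halving."""
    return (SQUARES[a + b] - SQUARES[a] - SQUARES[b]) // 2

print(multiply_by_squares(7, 8))  # 56
print(multiply_by_squares(9, 9))  # 81
```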

Today’s A To Z term is the Abacus. It was suggested by aajohannas, on Twitter as @aajohannas. Particularly asked for was how to use an abacus. The abacus has been used by a great many cultures over thousands of years. So it’s hard to say that there is any one right way to use it, any more than there is one right way to use a hammer. There are many hammers, and many things to hammer. But there are similarities between all hammers, and the ways to use them as hammers are similar. So learning one kind, and one way to use that kind, can be a useful start. I’m going to get into one way to use the abacus to compute.

Abacus.

I taught at the National University of Singapore in the first half of the 2000s. At the student union was this sheltered overhang formed by a stairwell. Underneath it, partly exposed to the elements (a common building style there), was a convenience store. Up front were the things with high turnover, snacks and pop and daily newspapers, that sort of thing. In the back, beyond the register, in the areas that the rain, the only non-gentle element, couldn’t reach even whipped by wind, were other things. Miscellaneous things. Exam bluebooks faded with age and dust. Good-luck cat statues colonized by spiderwebs. Unlabelled power cables for obsolete electronics. Once when browsing through this I encountered two things that I bought as badges of office.

One was a slide rule, a proper twelve-inch one. I’d had one already, a $2 six-inch-long one I’d gotten as an undergraduate from a convenience store the university had already decided to evict. The NUS one was a slide rule you could do actual work on. Another was a soroban, a compact Japanese abacus, in a patterned cardboard box a half-inch too short to hold it. I got both. For the novelty, yes. Also, I taught Computational Science. I felt it appropriate to have these iconic human computing devices.

But do I use them? Other than for decoration? … No, not really. I have too many calculators to need them. Also I am annoyed that while I can lay my hands on the slide rule I have put the soroban somewhere so logical and safe I can’t find it. A couple photographs would improve this essay. Too bad.

Do I know how to use them? If I find them? The slide rule, sure. If you know that a slide rule works via logarithms, and you play with it a little? You know how to use a slide rule. At least a little, after a bit of experimentation and playing with the three times table.

The abacus, though? How do you use that?

In childhood I heard about abacuses. That there’s a series of parallel rods, each with beads on them. Four placed below a center beam, one placed above. Sometimes two placed above. That the lower beads on a rod represent one each. That the upper bead represents five. That some people can do arithmetic on that faster than others can use an electric calculator. And that was about all I got, or at least retained. How to do this arithmetic never penetrated my brain. I imagined, well, addition must be easy. Say you wanted to do three plus six, well, move three lower beads up to the center bar. Then slide one lower and one upper bead, for six, to the center bar, and read that off. Right?

The bizarre thing is my naive childhood idea is right. At least in the big picture. Let each rod represent one of the numbers in base-ten style. It’s anachronistic to the abacus’s origins to speak of a ones rod, a tens rod, a hundreds rod, and so on. So what? We’re using this tool today. We can use the ideas of base ten to make our understanding easier.

Pick a rod of beads that you want to represent the ones. The rod to the left of that represents tens. To the left of that, hundreds. To the right of the ones rod are the one-tenths, then the one-hundredths, and so on. This goes on as far as your needs, and your abacus, allow.

Move beads to the center to represent numbers you want. If you want ’21’, slide two lower beads up in the tens column and one lower bead in the ones column. If you want ’38’, slide three lower beads up in the tens column and one upper and three lower beads in the ones column.
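
Reading a digit off a rod is just division by five with remainder. A tiny sketch (the naming is mine) for the one-upper-bead, four-lower-bead soroban style:

```python
def rod_beads(digit):
    """Split a decimal digit into (upper beads worth 5 each, lower beads worth 1 each)."""
    upper, lower = divmod(digit, 5)
    return upper, lower

# '38': three lower beads in the tens column; one upper and three lower in the ones.
print(rod_beads(3))  # (0, 3)
print(rod_beads(8))  # (1, 3)
```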

To add two numbers, slide more beads representing those numbers toward the center bar. To subtract, slide beads away. Multiplication and division were beyond my young imagination. I’ll let them wait a bit.

There are complications. The complications are for good reason. When you master them, they make computation swifter. But you pay for that later speed with more time spent learning. This is a trade we make, and keep making, in computational mathematics. We make a process more reliable, more speedy, by making it less obvious.

Some of this isn’t too difficult. Like, work in one direction so far as possible. It’s easy to suppose this is better than jumping around from, say, the thousands digit to the tens to the hundreds to the ones. The advice I’ve read says work from the left to the right, that is, from the highest place to the lowest. Arithmetic as I learned it works from the ones to the tens to the hundreds, but this seems wiser. The most significant digits get calculated first this way. It’s usually more important to know the answer is closer to 2,000 than to 3,000 than to know that the answer ends in an 8 rather than a 6.

Some of this is subtle. This is to cope with practical problems. Suppose you want to add 5 to 6? There aren’t that many beads on any row. A Chinese abacus, which has two beads on the upper part, could cope with this particular problem. It’s going to be in trouble when you want to add 8 to 9, though. That’s not unique to an abacus. Any numerical computing technique can be broken by some problem. This is why it’s never enough to calculate; we still have to think. For example, thinking will let us handle this five plus six difficulty.

Consider this: four plus one is five. So four and one are “complementary numbers”, with respect to five. Similarly, three and two are five’s complementary numbers. So if we need to add four to a number, that’s equivalent to adding five and subtracting one. If we need to add two, that’s equivalent to adding five and subtracting three. This will get us through some shortages in bead count.

And consider this: four plus six is ten. So four and six are ten-complementary numbers. Similarly, three and seven are ten’s complementary numbers. Two and eight. One and nine. This gets us through much of the rest of the shortage.

So here’s how this works. Suppose we have 35, and wish to add 6 to it. There aren’t the beads to add six to the ones column. So? Instead subtract the complement of six. That is, add ten and subtract four. Moving across the rods, when you reach the tens, slide one lower bead up, making the abacus represent 45. Then from the ones column subtract four. That is, slide the upper bead away from the center bar, and slide one lower bead, the complement to four with respect to five, up to the center. And now the abacus represents 41, just like it ought.
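
The complement rule can be written out in a few lines. This sketch (all the names are mine, and it glosses over a carry that itself overflows) adds a single digit the abacus way:

```python
def abacus_add_digit(digits, place, amount):
    """Add `amount` (1-9) to the digit at `place` (0 = ones, 1 = tens, ...),
    with digits stored most-significant first. If the column can't hold it,
    add ten (one bead up in the next rod left) and subtract the ten's
    complement of `amount` instead."""
    i = len(digits) - 1 - place
    if digits[i] + amount <= 9:
        digits[i] += amount
    else:
        digits[i - 1] += 1           # add ten: one lower bead up, one rod left
        digits[i] -= 10 - amount     # subtract the complement of `amount`
    return digits

print(abacus_add_digit([3, 5], 0, 6))  # [4, 1] : 35 + 6 = 41
```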

If you’re experienced enough you can reduce some of these operations, sliding the beads above and below the center bar at once. Or sliding a bead in the tens and another in the ones column at once. Don’t fret doing this. Worry about making correct steps. You’ll speed up with practice. I remember advice from a typesetting manual I collected once: “strive for consistent, regular keystrokes. Speed comes with practice. Errors are time-consuming to correct”. This is, mutatis mutandis, always good advice.

Subtraction works like addition. This should surprise few. If you have the beads in place, just remove them: four minus two takes no particular insight. If there aren’t enough beads? Fall back on complements. If you wish to do 35 minus 6? Set up 35, and calculate 35 minus 10 plus 4. When you get to the tens rod, slide one bead down; this leaves you with 25. Then in the ones column, slide four beads up. This leaves you with 29. I’m so glad these seem to be working out.

Doing longer additions and subtractions takes more rows, but not actually more work. 35.2 plus 6.4 is the same work as 35 plus 6 and 2 plus 4, each of which you, in principle, know how to do. 35.2 minus 6.4 is a bit more fuss. When you get to the 2 minus 4 bit you have to do that addition-of-complements stuff. But that’s not any new work.

Where the decimal point goes is something you have to keep track of. As with the slide rule, the magnitude of these numbers is notional. Your fingers move the same way to add 352 and 64 as they will 0.352 and 0.064. That’s convenient.

Multiplication gets more tedious. It demands paying attention to where the decimal point is. Just like the slide rule demands, come to think of it. You’ll need columns on the abacus for both the multiplicands and the product. And you’ll do a lot of adding up. But at heart? 2038 times 3.7 amounts to doing eight multiplications. 8 times 7, 3 times 7, 0 times 7 (OK, that one’s easy), 2 times 7, 8 times 3, 3 times 3, 0 times 3 (again, easy), and 2 times 3. And then add up these results in the correct columns. This may be tedious, but it’s not hard. It does mean the abacus doesn’t spare you having to know some times tables. I mean, you could use the abacus to work out 8 times 7 by adding seven to itself over and over, but that’s time-consuming. You can save time, and calculation steps, by memorization. By knowing some answers ahead of time.

Totton Heffelfinger and Gary Flom’s page, from which I’m drawing almost all my practical advice, offers a good notation of lettering the rods of the abacus, A, B, C, D, and so on. To multiply, say, 352 by 64, start by putting the 64 on rods BC. Set the 352 on rods EFG. We’ll get the answer with rod K as the ones column. 2 times 4 is 8; put that on rod K. 5 times 4 is 20; add that to rods IJ. 3 times 4 is 12; add that to rods HI. 2 times 6 is 12; add that to rods IJ. 5 times 6 is 30; add that to rods HI. 3 times 6 is 18; add that to rods GH. All going well this should add up to 22,528, spread out along rods GHIJK. I can see right away that at least the 8 is correct.
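
The same bookkeeping, with the lettered rods swapped for list positions (my own framing), can be sketched as dropping each one-digit product onto its columns and then resolving the carries:

```python
def abacus_multiply(a_digits, b_digits):
    """Multiply two numbers given as digit lists, most significant first,
    by summing one-digit products into the correct result columns,
    the way the partial products land on the abacus rods."""
    result = [0] * (len(a_digits) + len(b_digits))
    for i, da in enumerate(reversed(a_digits)):      # i = place value of da
        for j, db in enumerate(reversed(b_digits)):  # j = place value of db
            result[i + j] += da * db                 # drop the product on its rods
    # Resolve carries, as the abacus worker does while sliding beads
    for k in range(len(result) - 1):
        result[k + 1] += result[k] // 10
        result[k] %= 10
    return list(reversed(result))

print(abacus_multiply([3, 5, 2], [6, 4]))  # [2, 2, 5, 2, 8] : 352 x 64 = 22528
```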

You would do the same physical steps to multiply, oh, 3.52 by 0.0064. You have to take care of the decimal place yourself, though.

I see you, in the back there, growing suspicious. I’ll come around to this. Don’t worry.

Division is … oh, I have to fess up. Division is not something I feel comfortable with. I can read the instructions and repeat the examples given. I haven’t done it enough to have that flash where I understand the point of things. I recognize what’s happening. It’s the work of division as done by hand. You know, 821 divided by 56 worked out by, well, 56 goes into 82 once with a remainder of 26. Then drop down the 1 to make this 261. 56 goes into 261 … oh, it would be so nice if it went five times, but it doesn’t. It goes in four times, with a remainder of 37. I can walk you through the steps but all I am truly doing is trying to keep up with Totton Heffelfinger and Gary Flom’s instructions here.

There are, I read, also schemes to calculate square roots on the abacus. I don’t know that there are cube-root schemes also. I would bet on there being such, though.

Never mind, though. The suspicious thing I expect you’ve noticed is the steps being done. They’re represented as sliding beads along rods, yes. But the meaning of these steps? They’re the same steps you would do by doing arithmetic on paper. Sliding two beads and then two more beads up to the center bar isn’t any different from looking at 2 + 2 and representing that as 4. All this ten’s-complement stuff to subtract one number from another is just … well, I learned it as subtraction by “borrowing”. I don’t know the present techniques but I’m sure they’re at heart the same. But the work is eerily like what you would do on paper, using Arabic numerals.

The slide rule uses a logarithm-based ruler. This makes the addition of distances along the slides match the multiplication of the values of the rulers. What does the abacus do to help us compute?

Why use an abacus?

What an abacus gives us is memory. It stores numbers. It lets us break a big problem into a series of small problems. It lets us keep a partial computation while we work through those steps. We don’t add 35.2 to 6.4. We add 3 to 0 and 5 to 6 and 2 to 4. We don’t multiply 2038 by 3.7. We multiply 8 by 7, and 8 by 3, and 3 by 7, and 3 by 3, and so on.

And this is most of numerical computing, even today. We describe what we want to do as these high-level operations. But the computation is a lot of calculations, each one of them simple. We use some memory to hold partially completed results. Memory, the ability to store results, lets us change hard problems into long strings of simple ones.

We do more things the way the abacus encourages. We even use those complementary numbers. Not five’s or ten’s complements, not with binary arithmetic computers. Two’s complement arithmetic makes it possible to subtract, or write negative numbers, in ways that are easy to calculate. That there are a set number of rods even has its parallel in modern computing. When representing a real number on the computer we have only so many decimal places. (Yes, yes, binary digit places.) At least unless we use a weird data structure. This affects our calculations. There are numbers we can’t represent perfectly, such as one-third. We need to think about whether this affects what we are using our calculation for.
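
Two’s complement is this same trick in base two. A sketch in eight bits (my own construction), where the complement of b with respect to 2^8 stands in for negative b, so subtraction becomes ordinary addition:

```python
BITS = 8
MOD = 1 << BITS  # 256

def twos_complement(b):
    """The eight-bit stand-in for -b: flip every bit and add one."""
    return (~b + 1) & (MOD - 1)

def subtract(a, b):
    """a - b done as addition: add the complement, let the overflow fall away."""
    return (a + twos_complement(b)) & (MOD - 1)

print(twos_complement(6))  # 250, the byte pattern standing in for -6
print(subtract(35, 6))     # 29
```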

There are major differences between a digital computer and a person using the abacus. But the processes are similar. This may help us to understand why computational science works the way it does. It may at least help us understand those contests in the 1950s where the abacus user was faster than the calculator user.

But no, I confess, I only use mine for decoration, or will when I find it again.

I got to thinking about Turing machines. This is the conceptual model for basically all computers. The classic concept is to imagine a string of cells. In each cell is some symbol. It’s gone over by some device that follows some rule about whether and how to change the symbol. We have other rules that let us move the machine from one cell to the next. This doesn’t sound like much. But it’s enough. We can imagine all software to be some sufficiently involved bit of work on a string of cells and changing (or not) the symbols in those cells.

We don’t normally do this, because it’s too much tedious work. But we know we could go back to this if we truly must. A proper Turing machine has infinitely many cells, which no actual computer does, owing to the high cost of memory chips and the limited electricity budget. We can pretend that “a large enough number of cells” is good enough; it often is. And it turns out any one Turing machine can be used to simulate another Turing machine. This requires us to not care about how long it takes to do something, but that’s all right. Conceptually, we don’t care.
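
It takes surprisingly little machinery. This little simulator (my own construction, with a dictionary standing in for the unbounded tape) runs a rule table that adds one to a binary number:

```python
def run_turing(tape, rules, state="start", pos=0, max_steps=1000):
    """Simulate a one-tape Turing machine. `rules` maps (state, symbol) to
    (new_symbol, move, new_state); the tape is a dict so it acts unbounded."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Increment a binary number: walk right to the end, then carry leftward.
rules = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),   # 1 plus carry = 0, carry continues
    ("carry", "0"): ("1",  0, "halt"),    # 0 plus carry = 1, done
    ("carry", "_"): ("1",  0, "halt"),    # carry past the left edge
}

print(run_turing("1011", rules))  # 1100
```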

And I specifically got wondering what was the first pinball machine to be a Turing machine. I’m sure that modern pinball machines are, since there have been computers of some kind in pinball machines since the mid-1970s. So that’s a boring question. My question is: were there earlier pinball machines that satisfy the requirements of a Turing machine?

My gut tells me there must be. This is mostly because it’s surprisingly hard not to create a Turing machine. If you hang around near mathematics or computer science people you’ll occasionally run across things like someone building a computer inside a game like Minecraft. It’s possible to create a Turing machine using the elements of the game. The number of things that are Turing-complete, as they say, is surprising. CSS version 3, a rule system for how to dress up content on a web site, turns out to be Turing-complete (if you make some reasonable extra suppositions). Magic: The Gathering cards are, too. So you could set up a game of Magic: The Gathering which simulated a game of Minecraft which itself simulated the styling rules of a web page. Note the “you” in that sentence.

That’s not proof, though. But I feel pretty good about supposing that some must have been. Pinball machines consist, at heart, of a bunch of switches which are activated or not by whether a ball rolls over them. They can store a bit of information: a ball can be locked in a scoop, or kicked out of the scoop as need be. Points can be tallied on the scoring reel. The number of balls a player gets to plunge can be increased — or decreased — based on things that happen on the playfield. This feels to me like it’s got to be a Turing-complete scheme.

So I suspect that the layout of a pinball game, and the various ways to store a bit of information, with (presumably) perfect ball-flipping and table-nudging skills, should make it possible to make a Turing machine. (There ought not be a human in the loop, but I’m supposing that we could replace the person with a mechanism that flips or nudges at the right times or when the ball is in the right place.) I’m wanting for proof, though, and I leave the question here to tease people who’re better than I am at this field of mathematics and computer science.

And I’m curious when the first game that was so capable was made. The very earliest games were like large tabletop versions of those disappointing car toys, the tiny transparent-plastic things with a ball bearing you shoot into one of a series of scoops. Eventually, tilt mechanisms were added, and scoring reels, and then flippers, and then the chance to lock balls. Each changed what the games could do. Did it reach the level of complexity I think it did? I’d like to know.

Yes, this means that I believe it would be theoretically possible to play a pinball game that itself simulated the Pinball Arcade program simulating another pinball game. If this prospect does not delight you then I do not know that we can hope to ever understand one another.

Two things made repeat appearances in the mathematically-themed comics this week. They’re the comic strip Frazz and the idea of having infinitely many monkeys typing. Well, silly answers to word problems also turned up, but that’s hard to say many different things about. Here’s what I make the week in comics out to be.

Sandra Bell-Lundy’s Between Friends for the 6th introduces the infinite monkeys problem. I wonder sometimes why the monkeys-on-typewriters thing has so caught the public imagination. And then I remember it encourages us to stare directly into infinity and its intuition-destroying nature from the comfortable furniture of the mundane — typewriters, or keyboards, for goodness’ sake — with that childish comic dose of monkeys. Given that it’s a wonder we ever talk about anything else, really.

Monkeys writing Shakespeare has for over a century stood as a marker for what’s possible but incredibly improbable. I haven’t seen it compared to finding a four-digit PIN. It has got me wondering about the chance that four randomly picked letters will be a legitimate English word. I’m sure the chance is more than the one-in-ten-thousand chance someone would guess a randomly drawn four-digit PIN correctly on one try. More than one in a hundred? I’m less sure. The easy-to-imagine thing to do is set a computer to try out all 456,976 possible sets of four letters and check them against a dictionary. The number of hits divided by the number of possibilities would be the chance of drawing a legitimate word. If I had a less capable computer, or were checking even longer words, I might instead draw some set number of words, never minding that I didn’t get every possibility. The fraction of successful words in my sample would be something close to the chance of drawing any legitimate word.

If I thought a little deeper about the problem, though, I’d just count how many four-letter words are already in my dictionary and divide that by 456,976. It’s always a mistake to start programming before you’ve thought the problem out. The trouble is not being able to tell when that thinking-out is done.
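
Both approaches can be sketched side by side. The dictionary here is a stand-in of my own; a real run would load a full word list:

```python
from itertools import product
import string

# A stand-in dictionary; a real run would load, say, /usr/share/dict/words.
WORDS = {"ball", "cart", "dime", "echo", "flat", "gaze", "hint", "junk"}

TOTAL = 26 ** 4  # 456,976 possible four-letter strings

# The brute-force way: try every four-letter string against the dictionary...
hits = sum(1 for s in product(string.ascii_lowercase, repeat=4)
           if "".join(s) in WORDS)

# ...though just counting the four-letter dictionary entries gives the same number.
count = sum(1 for w in WORDS if len(w) == 4)

print(hits, count, hits / TOTAL)
```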

Richard Thompson’s Poor Richard’s Almanac for the 7th is the other comic strip to mention infinite monkeys. Well, chimpanzees in this case. But for the mathematical problem they’re not different. I’ve featured this particular strip before. But I’m a Thompson fan. And goodness but look at the face on the T S Eliot fan in the lower left corner there.

Jef Mallett’s Frazz for the 6th gives Caulfield one of those flashes of insight that seems like it should be something but doesn’t mean much. He’s had several of these lately, as mentioned here last week. As before this is a fun discovery about Roman Numerals, but it doesn’t seem like it leads to much. Perhaps a discussion of how the subtractive principle — that you can write “four” as “IV” instead of “IIII” — evolved over time. But then there isn’t much point to learning Roman Numerals at all. It’s got some value in showing how much mathematics depends on culture. Not just that stuff can be expressed in different ways, but that those different expressions make different things easier or harder to do. But I suspect that isn’t the objective of lessons about Roman Numerals.

Frazz got my attention again on the 12th. This time it just uses arithmetic, and a real bear of an arithmetic problem, as signifier for “a big pile of hard work”. This particular problem would be — well, I have to call it tedious, rather than hard. Doing it is just a long string of adding together two numbers. But to do that over and over, by my count, at least 47 times for this one problem? Hardly any point to doing that much for one result.

Patrick Roberts’s Todd the Dinosaur for the 7th calls out fractions, and arithmetic generally, as the stuff that ruins a child’s dreams. (Well, a dinosaur child’s dreams.) Still, it’s nice to see someone reminding mathematicians that a lot of their field is mostly used by accountants. Actuaries we know about; mathematics departments like to point out that majors can get jobs as actuaries. I don’t know of anyone I went to school with who became, or wanted to become, an actuary. But I admit not asking, either.

Mike Thompson’s Grand Avenue started off a week of students-resisting-the-test-question jokes on the 7th. Most of them are hoary old word problem jokes. But, hey, I signed up to talk about it when a comic strip touches a mathematics topic and word problems do count.

Zach Weinersmith’s Saturday Morning Breakfast Cereal reprinted the 7th is a higher level of mathematical joke. It’s from the genre of nonsense calculation. This one starts off with what’s almost a cliche, at least for mathematics and physics majors. The equation it starts with, e^{iπ} = -1, is true. And famous. It should be. It links exponentiation, imaginary numbers, π, and negative numbers. Nobody would have seen it coming. And from there is the sort of typical gibberish reasoning, like writing “Pi” instead of π so that it can be thought of as “P times i”, to draw to the silly conclusion that P = 0. That much work is legitimate.

From there it sidelines into “P = NP”, which is another equation famous to mathematicians and computer scientists. It’s a shorthand expression of a problem about how long it takes to find solutions. That is, how many steps it takes. How much time it would take a computer to solve a problem. You can see why it’s important to have some study of how long it takes to do a problem. It would be poor form to tie up your computer on a problem that won’t be finished before the computer dies of old age. Or just take too long to be practical.

Most problems have some sense of size. You can look for a solution in a small problem or in a big one. You expect searching for the solution in a big problem to take longer. The question is how much longer? Some methods of solving problems take a length of time that grows only slowly as the size of the problem grows. Some take a length of time that grows crazy fast as the size of the problem grows. And there are different kinds of time growth. One kind is called Polynomial, because there’s a polynomial in the problem’s size that describes how long it takes to solve. We call this kind of problem P. Another is called Non-Deterministic Polynomial, for problems that … can’t be done that way. We assume. We don’t know. But we know the problems likeliest to be in NP without being in P (the “NP-Complete” problems, to be exact).

It’s an open question whether P and NP are the same thing. It’s possible that everything we think might be NP actually can be solved by a P-class algorithm we just haven’t thought of yet. It would be a revolution in our understanding of how to find solutions if it were. Most people who study algorithms think P is not NP. But that’s mostly (as I understand it) because it seems like if P were NP then we’d have some leads on proving that by now. You see how this falls short of being rigorous. But it is part of expertise to get a feel for what seems to make sense in light of everything else we know. We may be surprised. But it would be inhuman not to have any expectations of a problem like this.
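
The gap between the growth rates is easy to sketch. Here a squared step count stands in for a P-style method and a doubling step count for the growth we fear the hard problems require (the names, and the choice of degree, are my own):

```python
def polynomial_steps(n, degree=2):
    """A P-style cost: steps grow as a polynomial in the problem size."""
    return n ** degree

def exponential_steps(n):
    """The growth we fear hard problems need: doubling with each size step."""
    return 2 ** n

for n in (10, 20, 40, 80):
    print(n, polynomial_steps(n), exponential_steps(n))
# By n = 80 the polynomial method needs 6,400 steps; the exponential one needs
# about 1.2e24, more steps than any computer will take in our lifetimes.
```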

Mark Anderson’s Andertoons for the 8th gives us the Andertoons content for the week. It’s a fair question why a right triangle might have three sides, three angles, three vertices, and just the one hypotenuse. The word’s origin is Greek, meaning “stretching under” or “stretching between”. It’s unobjectionable that we might say this is the stretch from one leg of the right triangle to another. But that leaves unanswered why there’s just the one hypotenuse, since the other two sides also stretch from the end of one side to another. Dr Sarah on The Math Forum suggests we need to think of circles. Draw a circle and a diameter line on it. Now pick any point on the circle other than where the diameter cuts it. Draw a line from one end of the diameter to your point. And from your point to the other end of the diameter. You have a right triangle! And the hypotenuse is the side stretching under the other two. Yes, I’m assuming you picked a point above the diameter. You did, though, didn’t you? Humans do that sort of thing.

I don’t know if Dr Sarah’s explanation is right. It sounds plausible and sensible. But those are weak pins to hang an etymology on. Still, I have no reason to think she’s mistaken. And the explanation might help people accept that there is the one hypotenuse and that there’s something interesting about it.

The first (and as I write this only) commenter, Kristiaan, has a good if cheap joke there.

Comic Strip Master Command gave me a light load this week, which suits me fine. I’ve been trying to get the End 2016 Mathematics A To Z comfortably under way instead. It does strike me that there were fewer Halloween-themed jokes than I’d have expected. For all the jokes there are to make about Halloween, I’d have imagined some with mathematical relevance would come up. But they didn’t and, huh. So it goes. The one big exception is the one I’d have guessed would be the exception.

Bill Amend’s FoxTrot for the 30th — a new strip — plays with the scariness of mathematics. Trigonometry specifically. Trig is probably second only to algebra for the scariest mathematics normal people encounter. And that’s probably more because people get to algebra before they might get to trigonometry. Which is madness, in its way. Trigonometry is about how we can relate angles, arcs, and linear distances. It’s about stuff anyone would like to know, like how to go from an easy-to-make observation of the angle spanned by a thing to how big the thing must be. But the field does require a bunch of exotic new functions like sine and tangent and novelty acts like “arc-cosecant”. And the numbers involved can be terrible things. The sine of an angle, for example, is almost always going to be some irrational number. For common angles we use a lot it’ll be an irrational number with an easy-to-understand form. For example the sine of 45 degrees, mentioned here, is “one-half the square root of two”. Anyone not trying to be intimidating will use that instead. But the sine of, say, 50 degrees? I don’t know what that is either except that it’s some never-ending sequence of digits. People love to have digits, but when they’re asked to do something with them, they get afraid and I don’t blame them.

Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 30th uses sudoku as shorthand for “genius thinking”. I am aware some complain sudoku isn’t mathematics. It’s certainly logic, though, and if we’re going to rule out logic puzzles from mathematics we’re going to lose a lot of fun fields. One of the commenters provided what I suppose to be the solution. (I haven’t checked.) If you wish to do the puzzle, be careful about scrolling.

In Jef Mallett’s Frazz for the 2nd, Caulfield notices something cute about 100. A perfect square is a familiar enough idea; it’s a whole number that’s the square of another whole number. The “roundest of round numbers” is a value judgement I’m not sure I can get behind. It’s a good round number, anyway, at least for stuff that’s sensibly between about 50 and 150. Or maybe between 50 and 500 if you’re just interested in about how big something might be. An irrational number, well, you know where that joke’s going.

Mrs Olsen doesn’t seem impressed by Caulfield’s discovery, although in fairness we don’t see the actual aftermath. Sometimes you notice stuff like that and it is only good for a “huh”. But sometimes you get into some good recreational mathematics. It’s the sort of thinking that leads to discovering magic squares and amicable numbers and palindromic prime numbers and the like. Do they lead to important mathematics? Some of them do. Or at least into interesting mathematics. Sometimes they’re just passingly amusing.

Greg Curfman’s Meg rerun for the 12th quotes Einstein’s famous equation as the sort of thing you could just expect would be asked in school. I’m not sure I ever had a class where knowing E = mc^{2} was the right answer to a question, though. Maybe once I got into physics, since we did spend a bit of time on special relativity and E = mc^{2} turns up naturally there. Maybe I’ve been out of elementary school too long to remember.

Mark Tatulli’s Heart of the City for the 4th has Heart and Dean talking about postapocalyptic society. Heart doubts that postapocalyptic society would need people like him, “with long-division experience”. Ah, but, grant the loss of computing devices. People will still need to compute. Before the days of electrical, and practical mechanical, computing, people who could compute accurately were in demand. The example mathematicians learn to remember is Zacharias Dase, a German mental calculator. He was able to do astounding work, all in his head. But he didn’t earn so much money as pro-mental-arithmetic propaganda would like us to believe. And why work entirely in your head if you don’t need to?

Larry Wright’s Motley Classics rerun for the 5th is a word problem joke. And it’s mixed with labor relations humor for the sake of … I’m not quite sure, actually. Anyway I would have sworn I’d featured this strip in a long-ago Reading The Comics post, but I don’t see it on a casual search. So, go figure.

I’ve got a few more thoughts about calculating logarithms, based on how the Harvard IBM Automatic Sequence-Controlled Calculator did things, and wanted to share them. I also have some further thoughts coming up shortly courtesy my first guest blogger, which is exciting to me.

The procedure that was used back then to compute common logarithms — logarithms base ten — was built on several legs: that we can work out some logarithms ahead of time, that we can work out the natural (base e) logarithm of a number using an infinite series, that we can convert the natural logarithm to a common logarithm by a single multiplication, and that the logarithm of the product of two (or more) numbers equals the sum of the logarithm of the separate numbers.

From that we got a pretty nice, fairly slick algorithm for producing logarithms. Ahead of time you have to work out the logarithms for 1, 2, 3, 4, 5, 6, 7, 8, and 9; and then, to make things more efficient, you’ll want the logarithms for 1.1, 1.2, 1.3, 1.4, et cetera up to 1.9; for that matter, you’ll also want 1.01, 1.02, 1.03, 1.04, and so on to 1.09. You can get more accurate numbers quickly by working out the logarithms for three digits past the decimal — 1.001, 1.002, 1.003, 1.004, and so on — and for that matter to four digits (1.0001) and more. You’re buying either speed of calculation or precision of result with memory.
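
For a sense of what that stockpile looks like, here’s a sketch in Python. The library’s `math.log10` stands in for the hand computation, and `build_log_table` and `depth` are names of my own invention; the point is the table’s shape and size:

```python
import math

def build_log_table(depth=4):
    # Common logarithms for 1-9, then 1.1-1.9, 1.01-1.09, 1.001-1.009.
    table = {}
    for d in range(1, 10):
        table[float(d)] = math.log10(d)
    for level in range(1, depth):
        for d in range(1, 10):
            key = round(1 + d * 10 ** -level, level)
            table[key] = math.log10(key)
    return table

table = build_log_table()
print(len(table))  # 36 entries: buying speed with memory
```

Each extra level of depth costs nine more memory cells and buys roughly another digit of cheap precision.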

The process as described before worked out common logarithms, although there isn’t much reason that it has to be those. It’s a bit convenient, because if you want the logarithm of 47.2286 you’ll want to shift that to the logarithm of 4.72286 plus the logarithm of 10, and the common logarithm of 10 is a nice, easy 1. The same logic works in natural logarithms: the natural logarithm of 47.2286 is the natural logarithm of 4.72286 plus the natural logarithm of 10, but the natural logarithm of 10 is a not-quite-catchy 2.3026 (approximately). You pretty much have to decide whether you want to deal with factors of 10 being an unpleasant number or to deal with calculating natural logarithms and then multiplying them by the common logarithm of e, about 0.43429.

But the point is if you found yourself with no computational tools, but plenty of paper and time, you could reconstruct logarithms for any number you liked pretty well: decide whether you want natural or common logarithms. I’d probably try working out both, since there’s presumably the time, after all, and who knows what kind of problems I’ll want to work out afterwards. And I can get quite nice accuracy after working out maybe 36 logarithms using the formula:

$\log\left(1 + h\right) = h - \frac{h^2}{2} + \frac{h^3}{3} - \frac{h^4}{4} + \frac{h^5}{5} - \cdots$

This will work very well for numbers like 1.1, 1.2, 1.01, 1.02, and so on: for this formula to work, h has to be between -1 and 1, or put another way, we have to be looking for the logarithms of numbers between 0 and 2. And it takes fewer terms to get the result as precise as you want the closer h is to zero, that is, the closer the number whose logarithm we want is to 1.
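
To see how much faster the series settles down the closer h is to zero, here’s a quick count in Python, using the standard alternating-series fact that the error is no bigger than the first neglected term (the name `terms_needed` is my own invention):

```python
# Count how many terms of log(1+h) = h - h^2/2 + h^3/3 - ... we must
# sum before the first neglected term drops below a tolerance.
def terms_needed(h, tol=1e-10):
    n = 1
    while abs(h) ** n / n >= tol:
        n += 1
    return n - 1  # terms actually summed before the bound is met

for h in (0.5, 0.1, 0.01, 0.001):
    print(h, terms_needed(h))
```

A number like 1.5 needs a few dozen terms for ten-digit accuracy; 1.001 needs only a handful.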

So most of my reference table is easy enough to make. But there’s a column left out: what is the logarithm of 2? Or 3, or 4, or so on? The infinite-series formula there doesn’t work that far out, and if you give it a try, let’s say with the logarithm of 5, you get a good bit of nonsense, numbers swinging positive and negative and ever-larger.

Of course we’re not limited to formulas; we can think, too. 3, for example, is equal to 1.5 times 2, so the logarithm of 3 is the logarithm of 1.5 plus the logarithm of 2, and we have the logarithm of 1.5, and the logarithm of 2 is … OK, that’s a bit of a problem. But if we had the logarithm of 2, we’d be able to work out the logarithm of 4 — it’s just twice that — and we could get to other numbers pretty easily: 5 is, among other things, 2 times 2 times 1.25 so its logarithm is twice the logarithm of 2 plus the logarithm of 1.25. We’d have to work out the logarithm of 1.25, but we can do that by formula. 6 is 2 times 2 times 1.5, and we already had 1.5 worked out. 7 is 2 times 2 times 1.75, and we have a formula for the logarithm of 1.75. 8 is 2 times 2 times 2, so, triple whatever the logarithm of 2 is. 9 is 3 times 3, so, double the logarithm of 3.

We’re not required to do things this way. I just picked some nice, easy ways to factor the whole numbers up to 9, and that didn’t seem to demand doing too much more work. I’d need the logarithms of 1.25 and 1.75, as well as 2, but I can use the formula or, for that matter, work it out using the rest of my table: 1.25 is 1.2 times 1.04 times 1.001 times 1.000602, approximately. But there are infinitely many ways to get 3 by multiplying together numbers between 1 and 2, and we can use any that are convenient.

We do still need the logarithm of 2, but, then, 2 is among other things equal to 1.6 times 1.25, and we’d been planning to work out the logarithm of 1.6 all along, and 1.25 is useful in getting us to 5 also, so, why not do that?

So in summary we could get logarithms for any numbers we wanted by working out the logarithms for 1.1, 1.2, 1.3, and so on, and 1.01, 1.02, 1.03, et cetera, and 1.001, 1.002, 1.003 and so on, and then 1.25 and 1.75, which lets us work out the logarithms of 2, 3, 4, and so on up to 9.
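
The whole plan is easy to check in a few lines of Python. The name `ln_series` is my own for the series formula, and the factorizations are the ones worked out above:

```python
import math

def ln_series(x, terms=80):
    # Natural logarithm by log(1+h) = h - h^2/2 + h^3/3 - ...,
    # good for x between 0 and 2.
    h = x - 1.0
    return sum((-1) ** (k + 1) * h ** k / k for k in range(1, terms + 1))

# Pieces the series can reach directly:
ln_125, ln_15, ln_16, ln_175 = (ln_series(x) for x in (1.25, 1.5, 1.6, 1.75))

ln2 = ln_16 + ln_125        # 2 = 1.6 * 1.25
ln3 = ln_15 + ln2           # 3 = 1.5 * 2
ln = {
    2: ln2,
    3: ln3,
    4: 2 * ln2,             # 4 = 2 * 2
    5: 2 * ln2 + ln_125,    # 5 = 2 * 2 * 1.25
    6: 2 * ln2 + ln_15,     # 6 = 2 * 2 * 1.5
    7: 2 * ln2 + ln_175,    # 7 = 2 * 2 * 1.75
    8: 3 * ln2,             # 8 = 2 * 2 * 2
    9: 2 * ln3,             # 9 = 3 * 3
}
for n, value in ln.items():
    print(n, round(value, 6), round(math.log(n), 6))
```

Every entry agrees with the library’s own natural logarithm to better than a billionth, and we never once fed the series a number outside its range.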

I haven’t yet worked out, though I am curious about, the fewest “extra” numbers I’d have to calculate. That is, granted that I have to figure out the logarithms of 1.1, 1.01, 1.001, et cetera anyway, the way I outlined things I also have to work out the logarithms of 1.25 and 1.75 to get all the numbers I need. Is it possible to figure out a cleverer bit of factorization that requires only one extra number be worked out? For that matter, is it possible to need no extra numbers? My instinctive response is to say no, but that’s hardly a proof. I’d be interested to know better.

We can work out at least some logarithms ahead of time and look them up as needed.

The natural logarithm of a number close to 1, a number we can write as 1 + h, is $h - \frac{h^2}{2} + \frac{h^3}{3} - \frac{h^4}{4} + \cdots$.

If we know a number’s natural logarithm (base e), then we can get its common logarithm (base 10): multiply the natural logarithm by the common logarithm of e, which is about 0.43429.

Whether we use the natural or the common logarithm (or any other logarithm you might like), the logarithm of the product of two or more numbers equals the sum of the logarithms of the separate numbers.

Now we’ll put this to work. The first step is deciding which logarithms to work out ahead of time. Since we’re dealing with common logarithms, we only need to be able to work out the logarithms for numbers between 1 and 10: the common logarithm of, say, 47.2286 is one plus the logarithm of 4.72286, and the common logarithm of 0.472286 is minus two plus the logarithm of 4.72286. So we’ll start by working out the logarithms of 1, 2, 3, 4, 5, 6, 7, 8, and 9, and storing them in what, in 1944, was still a pretty tiny block of memory. The original computer using this could store 72 numbers at a time, remember, though to 23 decimal digits.

So let’s say we want to know the logarithm of 47.2286. We have to divide this by 10 in order to get the number 4.72286, which is between 1 and 10, so we’ll need to add one to whatever we get for the logarithm of 4.72286. (And, yes, we want to avoid doing divisions, but dividing by 10 is a special case. The Automatic Sequence-Controlled Calculator stored numbers, if I am not grossly misunderstanding things, in base ten, and so dividing or multiplying by ten was as fast for it as moving the decimal point is for us. Modern computers, using binary arithmetic, find it as fast to divide or multiply by powers of two, even though division in general is a relatively sluggish thing.)

We haven’t worked out what the logarithm of 4.72286 is. And we don’t have a formula that’s good for that. But: 4.72286 is equal to 4 times 1.1807, and therefore the logarithm of 4.72286 is going to be the logarithm of 4 plus the logarithm of 1.1807. We worked out the logarithm of 4 ahead of time (it’s about 0.60206, if you’re curious).

We can use the infinite series formula to get the natural logarithm of 1.180715 to as many digits as we like. The natural logarithm of 1.180715 will be about $h - \frac{h^2}{2} + \frac{h^3}{3} - \frac{h^4}{4} + \frac{h^5}{5}$ with $h = 0.180715$, or 0.16613. Multiply this by the logarithm of e (about 0.43429) and we have a common logarithm of about 0.07214. (We have an error estimate, too: we’ve got the natural logarithm of 1.180715 within a margin of error of $\frac{h^6}{6}$, or about 0.000 0058, which, multiplied by the logarithm of e, corresponds to a margin of error for the common logarithm of about 0.000 0025.)

Therefore: the logarithm of 47.2286 is about 1 plus 0.60206 plus 0.07214, which is 1.6742. And it is, too; we’ve done very well at getting the number just right considering how little work we really did.
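
That worked example, as a quick sketch in Python; the names `LOG10_E` and `ln_near_one` are mine, and `math.log10(4)` stands in for the precomputed table entry:

```python
import math

LOG10_E = 0.43429448190325176  # common logarithm of e

def ln_near_one(x, terms=40):
    # The series formula, good for x between 0 and 2.
    h = x - 1.0
    return sum((-1) ** (k + 1) * h ** k / k for k in range(1, terms + 1))

# 47.2286 = 10 * 4 * 1.180715; only the last factor needs the series.
x = 47.2286 / 10          # the power of ten contributes 1 to the logarithm
leftover = x / 4          # log10(4) would come from the table
log10_x = 1 + math.log10(4) + LOG10_E * ln_near_one(leftover)
print(round(log10_x, 4))  # 1.6742
```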

Although … that infinite series formula. That requires a fair number of multiplications, at least eight as I figure it, however you look at it, and those are sluggish. It also properly speaking requires divisions, although you could easily write your code so that instead of dividing by 4 (say) you multiply by 0.25 instead. For this particular example number of 47.2286 we didn’t need very many terms in the series to get four decimal digits of accuracy, but maybe we got lucky and some other number would have required dozens of multiplications. Can we make this process, on average, faster?

And here’s one way to do it. Besides working out the common logarithms for the whole numbers 1 through 9, also work out the common logarithms for 1.1, 1.2, 1.3, 1.4, et cetera up to 1.9. And then …

We started with 47.2286. Divide by 10 (a free bit of work) and we have 4.72286. 4.72286 is 4 times 1.180715. And 1.180715 is equal to 1.1 — the whole number and the first digit past the decimal — times 1.07337. That is, 47.2286 is 10 times 4 times 1.1 times 1.07337. And so the logarithm of 47.2286 is the logarithm of 10 plus the logarithm of 4 plus the logarithm of 1.1 plus the logarithm of 1.07337. We are almost certainly going to need fewer terms in the infinite series to get the logarithm of 1.07337 than we need for 1.180715 and so, at the cost of one more division, we probably save a good number of multiplications.

The common logarithm of 1.1 is about 0.041393. So the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) is 1.6435, which falls a little short of the actual logarithm we’d wanted, about 1.6742, but two or three terms in the infinite series should be enough to make that up.

Or we could work out a few more common logarithms ahead of time: those for 1.01, 1.02, 1.03, and so on up to 1.09. Our original 47.2286 divided by 10 is 4.72286. Divide that by the first number, 4, and you get 1.180715. Divide 1.180715 by 1.1, the first two digits, and you get 1.07337. Divide 1.07337 by 1.07, the first three digits, and you get 1.003156. So 47.2286 is 10 times 4 times 1.1 times 1.07 times 1.003156. So the common logarithm of 47.2286 is the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) plus the logarithm of 1.07 (about 0.02938) plus the logarithm of 1.003156 (to be determined). Even ignoring the to-be-determined part, that adds up to 1.6728, which is a little short of the 1.6742 we want but is doing pretty well considering we’ve reduced the whole problem to three divisions, looking stuff up, and four additions.

If we go a tiny bit farther, and also have worked out ahead of time the logarithms for 1.001, 1.002, 1.003, and so on out to 1.009, and do the same process all over again, then we get some better accuracy and quite cheaply yet: 47.2286 divided by 10 is 4.72286. 4.72286 divided by 4 is 1.180715. 1.180715 divided by 1.1 is 1.07337. 1.07337 divided by 1.07 is 1.003156. 1.003156 divided by 1.003 is 1.0001558.

So the logarithm of 47.2286 is the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) plus the logarithm of 1.07 (0.029383) plus the logarithm of 1.003 (0.001301) plus the logarithm of 1.0001558 (to be determined). Leaving aside the to-be-determined part, that adds up to 1.6741.

And the to-be-determined part is great: if we used just a single term in this series, the margin for error would be, at most, 0.000 000 0052, which is probably small enough for practical purposes. The first term in the to-be-determined part is awfully easy to calculate, too: it’s just 1.0001558 – 1, that is, 0.0001558, which multiplied by the logarithm of e gives a common-logarithm contribution of about 0.000 068. Add that and we have an approximate logarithm of 1.6742, which is dead on.

And I do mean dead on: work out more decimal places of the logarithm based on this summation and you get 1.674 205 077 226 78. That’s no more than five billionths away from the correct logarithm for the original 47.2286. And it required doing four divisions, one multiplication, and five additions. It’s difficult to picture getting such good precision with less work.

Of course, that’s done in part by having stockpiled a lot of hard work ahead of time: we need to know the logarithms of 1, 1.1, 1.01, 1.001, and then 2, 1.2, 1.02, 1.002, and so on. That’s 36 numbers altogether, and working them out is its own substantial chore. But people have already done that work, and we can use that work to make the problems we want to do considerably easier.

But there’s the process. Work out ahead of time logarithms for 1, 1.1, 1.01, 1.001, and so on, to whatever the limits of your patience. Then take the number whose logarithm you want and divide (or multiply) by ten until you get your working number into the range of 1 through 10. Divide out the first digit, which will be a whole number from 1 through 9. Divide out the first two digits, which will be something from 1.1 to 1.9. Divide out the first three digits, something from 1.01 to 1.09. Divide out the first four digits, something from 1.001 to 1.009. And so on. Then add up the logarithms of the power of ten you divided or multiplied by with the logarithm of the first divisor and the second divisor and third divisor and fourth divisor, until you run out of divisors. And then — if you haven’t already got the answer as accurately as you need — work out as many terms in the infinite series as you need; probably, it won’t be very many. Add that to your total. And you are, amazingly, done.
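
If you’d rather see that process as code than as prose, here’s a sketch in Python. The table lookups are mimicked with `math.log10`, and the names `common_log`, `ln_near_one`, and `table_depth` are mine, not anything from 1944; the structure, though, is the divide-out-the-leading-digits procedure just described:

```python
import math

LOG10_E = 0.43429448190325176  # common logarithm of e

def ln_near_one(x, terms=10):
    # A few series terms finish the job once x is very close to 1.
    h = x - 1.0
    return sum((-1) ** (k + 1) * h ** k / k for k in range(1, terms + 1))

def common_log(x, table_depth=4):
    total = 0.0
    while x >= 10:        # shift into the range 1 to 10,
        x /= 10           # counting the powers of ten
        total += 1
    while x < 1:
        x *= 10
        total -= 1
    for k in range(table_depth):
        scale = 10 ** k
        # Divide out the leading digits: 4, then 1.1, then 1.07, then 1.003.
        divisor = math.floor(x * scale) / scale
        total += math.log10(divisor)  # stands in for a table lookup
        x /= divisor
    # x is now very close to 1; the series handles the remainder.
    return total + LOG10_E * ln_near_one(x)

print(round(common_log(47.2286), 6))  # 1.674205
```

The same function handles numbers below 1, since the shifting loop counts negative powers of ten.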

The first part of this is kind of an observation: the quickest way to give the logarithm of a number is to already know it. Looking it up in a table is way faster than evaluating it, and that’s as true for the computer as for you. Obviously we can’t work out logarithms for every number, what with there being so many of them, but we could work out the logarithms for a reasonable range and to a certain precision and trust that the logarithm of (say) 4.42286 is going to be tolerably close to the logarithm of 4.423 that we worked out ahead of time. Working out a range of, say, 1 to 10 for logarithms base ten is plenty, because that’s all the range we need: the logarithm base ten of 44.2286 is the logarithm base ten of 4.42286 plus one. The logarithm base ten of 0.442286 is the logarithm base ten of 4.42286 minus one. You can guess from that what the logarithm of 4,422.86 is, compared to that of 4.42286.

This is trading computer memory for computational speed, which is often worth doing. But the old Automatic Sequence-Controlled Calculator can’t do that, at least not as easily as we’d like: it had the ability to store 72 numbers, albeit to 23 decimal digits. We can’t just use “worked it out ahead of time”, although we’re not going to abandon that idea either.

The next piece we have is something useful if we want to work out the natural logarithm — the logarithm base e — of a number that’s close to 1. We have a formula that will let us work out this natural logarithm to whatever accuracy we want:

$\log_{e}\left(1 + h\right) = h - \frac{h^2}{2} + \frac{h^3}{3} - \frac{h^4}{4} + \frac{h^5}{5} - \cdots$

In principle, we have to add up infinitely many terms to get the answer right. In practice, we only add up terms until the error — the difference between our sum and the correct answer — is smaller than some acceptable margin. This seems to beg the question, because how can we know how big that error is without knowing what the correct answer is? In fact we don’t know just what the error is, but we do know that the error can’t be any larger than the absolute value of the first term we neglect.

Let me give an example. Suppose we want the natural logarithm of 1.5, which the alert have noticed is equal to 1 + 0.5. Then h is 0.5. If we add together the first five terms of the natural logarithm series, then we have $0.5 - \frac{(0.5)^2}{2} + \frac{(0.5)^3}{3} - \frac{(0.5)^4}{4} + \frac{(0.5)^5}{5}$, which is approximately 0.40729. If we were to work out the next term in the series, that would be $-\frac{(0.5)^6}{6}$, which has an absolute value of about 0.0026. So the natural logarithm of 1.5 is 0.40729, plus or minus 0.0026. If we only need the natural logarithm to within 0.0026, that’s good: we’re done.
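
Here’s that arithmetic checked in a couple of lines of Python:

```python
import math

# Five terms of the series for the natural logarithm of 1.5, with the
# absolute value of the sixth term as the error bound.
h = 0.5
partial = sum((-1) ** (k + 1) * h ** k / k for k in range(1, 6))
bound = h ** 6 / 6
print(round(partial, 5), round(bound, 4))  # 0.40729 0.0026
```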

In fact, the natural logarithm of 1.5 is approximately 0.40547, so our error is closer to 0.00183, but that’s all right. Few people complain that our error is smaller than what we estimated it to be.

If we know what margin of error we’ll tolerate, by the way, then we know how many terms we have to calculate. Suppose we want the natural logarithm of 1.5 accurate to 0.001. Then we have to find the first number n so that $\frac{(0.5)^n}{n} < 0.001$; if I'm not mistaken, that's eight. Just how many terms we have to calculate will depend on what h is; the bigger it is — the farther the number is from 1 — the more terms we'll need.
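
And that count is a quick check in Python:

```python
# Find the first n with (0.5)^n / n below the 0.001 tolerance.
n = 1
while 0.5 ** n / n >= 0.001:
    n += 1
print(n)  # 8
```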

The trouble with this is that it’s only good for working out the natural logarithms of numbers between 0 and 2. (And it’s better the closer the number is to 1.) If you want the natural logarithm of 44.2286, you have to divide out the highest power of e that’s less than it — well, you can fake that by dividing by e repeatedly — and what you get is that it’s e times e times e times 2.202 and we’re stuck there. Not hopelessly, mind you: we could find the logarithm of 1/2.202, which will be minus the logarithm of 2.202, at least, and we can work back to the original number from there. Still, this is a bit of a mess. We can do better.
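
Here’s that fallback sketched in Python; `ln_series` is my own name for the series formula:

```python
import math

def ln_series(x, terms=80):
    # The series for the natural logarithm, for x between 0 and 2.
    h = x - 1.0
    return sum((-1) ** (k + 1) * h ** k / k for k in range(1, terms + 1))

x = 44.2286
count = 0
while x >= math.e:         # divide out whole powers of e
    x /= math.e
    count += 1
print(count, round(x, 3))  # 3 powers of e, leftover about 2.202

# 2.202 is outside the series' range, but its reciprocal is not:
# log(44.2286) = count - log(1 / leftover)
ln_x = count - ln_series(1 / x)
print(round(ln_x, 4))
```

It works, but the reciprocal’s h is around -0.55, so the series crawls; the digit-peeling approach above gets there far faster.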

The third piece we can use is one of the fundamental properties of logarithms. This is true for any base, as long as we use the same base for each logarithm in the equation here, and I’ve mentioned it in passing before:

$\log\left(a \cdot b\right) = \log\left(a\right) + \log\left(b\right)$

That is, if we could factor a number whose logarithm we want into components which we can either look up or we can calculate very quickly, then we know its logarithm is the sum of the logarithms of those components. And this, finally, is how we can work out logarithms quickly and without too much hard work.
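
The property is easy to see numerically, in base 10 say:

```python
import math

# log(a * b) = log(a) + log(b), in any one base (base 10 here).
lhs = math.log10(6 * 7)
rhs = math.log10(6) + math.log10(7)
print(lhs, rhs)
```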

I confess that I picked up Edmund Callis Berkeley’s Giant Brains: Or Machines That Think, originally published 1949, from the library shelf as a source of cheap ironic giggles. After all, what is funnier than an attempt to explain to a popular audience that, wild as it may be to contemplate, electrically-driven machines could “remember” information and follow “programs” of instructions based on different conditions satisfied by that information? There’s a certain amount of that, though not as much as I imagined, and a good amount of descriptions of how the hardware of different partly or fully electrical computing machines of the 1940s worked.

But a good part, and the most interesting part, of the book is about algorithms, the ways to solve complicated problems without demanding too much computing power. This is fun to read because it showcases the ingenuity and creativity required to do useful work. The need for ingenuity will never leave us — we will always want to compute things that are a little beyond our ability — but to see how it’s done for a simple problem is instructive, if for nothing else than to learn the kinds of tricks you can use to get the most out of your computing resources.

The example that most struck me and which I want to share is from the chapter on the IBM Automatic Sequence-Controlled Calculator, built at Harvard at a cost of “somewhere near 3 or 4 hundred thousand dollars, if we leave out some of the cost of research and development, which would have been done whether or not this particular machine had ever been built”. It started working in April 1944, and wasn’t officially retired until 1959. It could store 72 numbers, each with 23 decimal digits. Like most computers (then and now) it could do addition and subtraction very quickly, in the then-blazing speed of about a third of a second; it could do multiplication tolerably quickly, in about six seconds; and division, rather slowly, in about fifteen seconds.

The process I want to describe is the taking of logarithms, and why logarithms should be interesting to compute takes a little bit of justification, although it’s implicitly there just in how fast calculations get done. Logarithms let one replace the multiplication of numbers with their addition, for a considerable savings in time; better, they let you replace the division of numbers with subtraction. They further let you turn exponentiation and roots into multiplication and division, which is almost always faster to do. Many human senses seem to work on a logarithmic scale, as well: we can tell that one weight is twice as heavy as the other much more reliably than we can tell that one weight is four pounds heavier than the other, or that one light is twice as bright as the other rather than is ten lumens brighter.

What the logarithm of a number is depends on some other, fixed, quantity, known as the base. In principle any positive number will do as base; in practice, these days people mostly only care about base e (which is a little over 2.718), the “natural” logarithm, because it has some nice analytic properties. Back in the day, which includes when this book was written, we also cared about base 10, the “common” logarithm, because we mostly work in base ten. I have heard of people who use base 2, but haven’t seen them myself and must regard them as an urban legend. The other bases are mostly used by people who are writing homework problems for the part of the class dealing with logarithms. To some extent it doesn’t matter what base you use. If you work out the logarithm in one base, you can convert that to the logarithm in another base by a multiplication.

The logarithm of some number in your base is the exponent you have to raise the base to to get your desired number. For example, the logarithm of 100, in base 10, is going to be 2 because 10^{2} is 100, and the logarithm of e^{1/3} (a touch greater than 1.3956), in base e, is going to be 1/3. To dig deeper in my reserve of in-jokes, the logarithm of 2038, in base 10, is approximately 3.3092, because 10^{3.3092} is just about 2038. The logarithm of e, in base 10, is about 0.4343, and the logarithm of 10, in base e, is about 2.303. Your calculator will verify all that.
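
A few lines of Python will verify those examples as well:

```python
import math

print(math.log10(100))              # 2.0
print(math.log(math.e ** (1 / 3)))  # 1/3, up to rounding
print(round(math.log10(2038), 4))   # 3.3092
print(round(math.log10(math.e), 4)) # 0.4343
print(round(math.log(10), 3))       # 2.303
```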

All that talk about “approximately” should have given you some hint of the trouble with logarithms. They’re only really easy to compute if you’re looking for whole powers of whatever your base is, and that if your base is 10 or 2 or something else simple like that. If you’re clever and determined you can work out, say, that the logarithm of 2, base 10, has to be close to 0.3. It’s fun to do that, but it’ll involve such reasoning as “two to the tenth power is 1,024, which is very close to ten to the third power, which is 1,000, so therefore the logarithm of two to the tenth power must be about the same as the logarithm of ten to the third power”. That’s clever and fun, but it’s hardly systematic, and it doesn’t get you many digits of accuracy.
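
That clever-but-unsystematic estimate is easy to check:

```python
import math

# 2^10 = 1024 is nearly 10^3, so 10 * log10(2) is nearly 3,
# giving the estimate log10(2) is about 3/10.
estimate = 3 / 10
actual = math.log10(2)
print(estimate, round(actual, 5))  # 0.3 versus 0.30103
```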

So when I pick up this thread I hope to explain one way to produce as many decimal digits of a logarithm as you could want, without asking for too much from your poor Automatic Sequence-Controlled Calculator.

It’s tricky to write about e. That is, it’s not a difficult thing to write about, but it’s hard to find the audience for this number. It’s quite important, mathematically, but it hasn’t got an easy-to-understand definition like pi’s “the circumference of a circle divided by its diameter”. Its most concise definition, I guess, is “the base of the natural logarithm”, which as an explanation to someone who hasn’t done much mathematics is only marginally more enlightening than slapping him with a slice of cold pizza. And it hasn’t got the sort of renown of something like the golden ratio which makes the number sound familiar and even welcoming.