Letting The Computer Do The Hard Work


Sometime in late August or early September 1994 I had one of those quietly astounding moments on a computer. It would have been while using Maple, a program capable of doing symbolic mathematics. It was something capable not just of working out what the product of two numbers is, but of holding representations of functions and working out what the product of those functions was. That’s striking enough, but more was to come: I could describe a function and have Maple do the work of symbolically integrating it. That was astounding then, and it really ought to be still. Let me explain why.

It’s fairly easy to think of symbolic representations of functions: if f(x) = x^3 \cdot \sin(3x) , well, you know if I give you some value for x, you can give me back an f(x), and if you’re a little better you can describe, roughly, a plot of x versus f(x). That is, that’s the plot of all the points on the plane for which the value of the x-coordinate and the value of the y-coordinate make the statement “y = f(x)” a true statement.

If you’ve gotten into calculus, though, you’d like to know other things: the derivative, for example, of f(x). That is (among other interpretations), if I give you some value for x, you can tell me how quickly f(x) is changing at that x. Working out the derivative of a function is a bit of work, but it’s not all that hard; there are maybe a half-dozen rules you have to follow, plus some basic cases where you learn what the derivative of x to a power is, or what the derivative of the sine is, and so on. (You don’t really need to learn those basic cases, but it saves you a lot of work if you do.) It takes some time to learn the rules, and what order to apply them in, but once you do it’s almost automatic. If you’re smart you might do some problems better, but you don’t have to be smart, just indefatigable.

Integrating a function (among other interpretations, that’s finding the amount of area underneath a curve) is different, though, even though it’s kind of an inverse of finding the derivative. If you integrate a function, and then take its derivative, you get back the original function, unless you did it wrong. (If you take a derivative and then integrate, though, you won’t necessarily get back the original function, but you’ll get something close to it: differentiating wipes out any constant term, so the constant has to be guessed back in.) However, that integration is still really, really hard. There are rules to follow, yes, but despite that it’s not necessarily obvious what to do, or why to do it, and even if you do know the various rules and use them perfectly you’re not guaranteed to get an answer. Being indefatigable might help, but you also need to be smart.

So, it’s easy to imagine writing a computer program that can find a derivative; to find an integral, though? That was amazing, and it still is. And that brings me at last to this tweet from @mathematicsprof:

The document linked to by this is a master’s thesis, titled Symbolic Integration, prepared by one Björn Terelius for the Royal Institute of Technology in Stockholm. It’s a fair-sized document, but it does open with a history of computers that work out integrals, one that anyone ought to be able to follow. It goes on to describe the logic behind algorithms that do this sort of calculation, and should be quite helpful in understanding just how it is the computer does this amazing thing.

(For a bonus, it also contains a short proof of why you can’t integrate e^{x^2} in elementary terms, one of those functions that looks nice and easy and that drives you crazy in Calculus II when you give it your best try.)
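If you’d like to poke at this kind of astounding thing yourself, here’s a minimal sketch using the free SymPy library — my choice of stand-in for Maple; nothing in the thesis depends on it:

    import sympy

    x = sympy.symbols('x')
    f = x**3 * sympy.sin(3*x)

    # The mechanical direction: a half-dozen rules, applied tirelessly.
    print(sympy.diff(f, x))   # the derivative, 3*x**2*sin(3*x) + 3*x**3*cos(3*x)

    # The astounding direction: a symbolic antiderivative.
    print(sympy.integrate(f, x))
    # -x**3*cos(3*x)/3 + x**2*sin(3*x)/3 + 2*x*cos(3*x)/9 - 2*sin(3*x)/27

    # And the bonus function: its antiderivative exists, but SymPy can
    # only express it with the non-elementary function erfi.
    print(sympy.integrate(sympy.exp(x**2), x))   # sqrt(pi)*erfi(x)/2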

Reading the Comics, September 15, 2014: Are You Trying To Overload Me Edition


One of the little challenges in writing about mathematics-themed comics is one of pacing: how often should I do a roundup? Posting weekly, say, helps figure out a reasonable posting schedule for those rare moments when I’m working ahead of deadline, but that leaves the problem of weeks that just don’t have anything. Waiting for a certain number of comics before writing about them seems more reasonable, but then I have to figure how many comics are enough. I’ve settled into five-to-six as my threshold for a new post, but that can mean I have weeks where it seems like I’m doing nothing but comic strip posts. And then there are conditions like this one, where Comic Strip Master Command had its cartoonists put up just enough that I’d started composing a fresh post, and then tossed in a whole bunch more the next day. It’s like they’re trying to shake me by having too many strips to write about. I’d have thought they’d be flattered to have me writing about them so.

Tiger studied times tables today. Studied it, not learned it.

Bud Blake’s _Tiger_ for the 11th of September, 2014.

Bud Blake’s Tiger (September 11, rerun) mentions Tiger as studying the times tables and points out the difference between studying a thing and learning it.

Marc Anderson’s Andertoons (September 12) belongs to that vein of humor about using technology words to explain stuff to kids. I admit I’m vague enough on the concept of mashups that I can accept that it might be a way of explaining addition, but it feels like it might also be a way of describing multiplication or for that matter the composition of functions. I suppose the kids would be drawn as older in those cases, though.

Bill Amend’s FoxTrot (September 13, rerun) does a word problem joke, but it does have the nice beat in the penultimate panel of Paige running a sanity check and telling at a glance that “two dollars” can’t possibly be the right answer. Sanity checks are nice things to have; they don’t guarantee against making mistakes, but they at least provide some protection against the easiest mistakes, and having some idea of what an answer could plausibly be might help in working out the answer. For example, if Paige had absolutely no idea how to set up equations for this problem, she could reason that the apple and the orange have to cost something from 1 to 29 cents, and could try out prices until finding something that satisfies both requirements. This is an exhausting method, but it would eventually work, too, and sometimes “working eventually” is better than “working cleverly”.
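Trying out every price is exactly the kind of indefatigable work a short loop does well. Here’s a sketch with made-up numbers, since I’m not copying the strip’s actual problem — suppose the apple and orange cost 30 cents together and the apple costs 10 cents more than the orange (the 10-cent difference is my invention):

    # Hypothetical version of the word problem, not the strip's:
    # apple + orange = 30 cents, and the apple costs 10 cents more.
    for apple in range(1, 30):            # every plausible price in cents
        orange = 30 - apple
        if apple == orange + 10:          # the second requirement
            print(f"apple: {apple} cents, orange: {orange} cents")
    # prints: apple: 20 cents, orange: 10 cents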

Bill Schorr’s The Grizzwells (September 13) starts out by playing on the fact that “yard” has multiple meanings; it also circles around one of those things that distinguishes word problems from normal mathematics. A word problem, by convention, normally contains exactly the information needed to solve what’s being asked — there’s neither useless information included nor necessary information omitted, except if the question-writer has made a mistake. In a real world application, figuring out what you need, and what you don’t need, is part of the work, possibly the most important part of the work. So to answer how many feet are in a yard, Gunther (the bear) is right to ask more questions about how big the yard is, as a start.

Ed would rather snack bars came in 100-calorie forms than 70-calorie ones.

Steve Kelley and Jeff Parker’s _Dustin_ for the 14th of September, 2014.

Steve Kelley and Jeff Parker’s Dustin (September 14) is about one of the applications for mental arithmetic that people find awfully practical: counting the number of food calories that you eat. Ed’s point about it being convenient to have food servings be nice round numbers, as they’re easier to work with, is a pretty good one, and it’s already kind of accounted for in food labelling: it’s permitted (in the United States) to round off calorie counts to the nearest ten or so, on the rather sure grounds that if you are counting calories you’d rather add 70 to the daily total than 68 or 73. Don’t read the comments thread, which includes the usual whining about the Common Core and the wild idea that mental arithmetic might be well done by working out a calculation that’s close to the one you want but easier to do and then refining it to get the accuracy you need.

Mac and Bill King’s Magic In A Minute kids activity panel (September 14) presents a magic trick that depends on a bit of mental arithmetic. It’s a nice stunt, although it is certainly going to require kids to practice things because, besides dividing numbers by 4, it also requires adding 6, and that’s an annoying number to deal with. There’s also a nice little high school algebra problem to be done in explaining why the trick works.

Bill Watterson’s Calvin and Hobbes (September 15, rerun) includes one of Hobbes’s brilliant explanations of how arithmetic works, and if I haven’t wasted the time spent memorizing the strips where Calvin tries to do arithmetic homework then Hobbes follows up tomorrow with imaginary numbers. Can’t wait.

Jef Mallett’s Frazz (September 15) expresses skepticism about a projection being made for the year 2040. Extrapolations and interpolations are a big part of numerical mathematics and there are fair grounds to be skeptical: even having a model of whatever your phenomenon is that accurately matches past data isn’t a guarantee that there isn’t some important factor that’s been trivial so far but will become important and will make the reality very different from the calculations. But that hardly makes extrapolations useless: for one, the fact that there might be something unknown which becomes important is hardly a guarantee that there is. If the modelling is good and the reasoning sound, what else are you supposed to use for a plan? And of course you should watch for evidence that the model and the reality aren’t drifting too far apart as time goes on.

Gary Wise and Lance Aldrich’s Real Life Adventures (September 15) describes mathematics as “insufferable and enigmatic”, which is a shame, as mathematics hasn’t said anything nasty about them, now has it?

Without Machines That Think About Logarithms


I’ve got a few more thoughts about calculating logarithms, based on how the Harvard IBM Automatic Sequence-Controlled Calculator did things, and wanted to share them. I also have some further thoughts coming up shortly, courtesy of my first guest blogger, which is exciting to me.

The procedure that was used back then to compute common logarithms — logarithms base ten — was built on several legs: that we can work out some logarithms ahead of time, that we can work out the natural (base e) logarithm of a number using an infinite series, that we can convert the natural logarithm to a common logarithm by a single multiplication, and that the logarithm of the product of two (or more) numbers equals the sum of the logarithm of the separate numbers.

From that we got a pretty nice, fairly slick algorithm for producing logarithms. Ahead of time you have to work out the logarithms for 1, 2, 3, 4, 5, 6, 7, 8, and 9; and then, to make things more efficient, you’ll want the logarithms for 1.1, 1.2, 1.3, 1.4, et cetera up to 1.9; for that matter, you’ll also want 1.01, 1.02, 1.03, 1.04, and so on to 1.09. You can get more accurate numbers quickly by working out the logarithms for three digits past the decimal — 1.001, 1.002, 1.003, 1.004, and so on — and for that matter to four digits (1.0001) and more. You’re buying either speed of calculation or precision of result with memory.

The process as described before worked out common logarithms, although there isn’t much reason that it has to be those. It’s a bit convenient, because if you want the logarithm of 47.2286 you’ll want to shift that to the logarithm of 4.72286 plus the logarithm of 10, and the common logarithm of 10 is a nice, easy 1. The same logic works in natural logarithms: the natural logarithm of 47.2286 is the natural logarithm of 4.72286 plus the natural logarithm of 10, but the natural logarithm of 10 is a not-quite-catchy 2.3026 (approximately). You pretty much have to decide whether you want to deal with factors of 10 being an unpleasant number, or to deal with calculating natural logarithms and then multiplying them by the common logarithm of e, about 0.43429.

But the point is, if you found yourself with no computational tools but plenty of paper and time, you could reconstruct logarithms for any number you liked pretty well. First decide whether you want natural or common logarithms; I’d probably try working out both, since there’s presumably the time, after all, and who knows what kind of problems I’ll want to work out afterwards. And I can get quite nice accuracy after working out maybe 36 logarithms using the formula:

\log_e\left(1 + h\right) = h - \frac12 h^2 + \frac13 h^3 - \frac14 h^4 + \frac15 h^5 - \cdots

This will work very well for numbers like 1.1, 1.2, 1.01, 1.02, and so on: for this formula to work, h has to be between -1 and 1, or put another way, we have to be looking for the logarithms of numbers between 0 and 2. And it takes fewer terms to get the result as precise as you want the closer h is to zero, that is, the closer the number whose logarithm we want is to 1.

So most of my reference table is easy enough to make. But there’s a column left out: what is the logarithm of 2? Or 3, or 4, or so on? The infinite-series formula there doesn’t work that far out, and if you give it a try, let’s say with the logarithm of 5, you get a good bit of nonsense, numbers swinging positive and negative and ever-larger.

Of course we’re not limited to formulas; we can think, too. 3, for example, is equal to 1.5 times 2, so the logarithm of 3 is the logarithm of 1.5 plus the logarithm of 2, and we have the logarithm of 1.5, and the logarithm of 2 is … OK, that’s a bit of a problem. But if we had the logarithm of 2, we’d be able to work out the logarithm of 4 — it’s just twice that — and we could get to other numbers pretty easily: 5 is, among other things, 2 times 2 times 1.25, so its logarithm is twice the logarithm of 2 plus the logarithm of 1.25. We’d have to work out the logarithm of 1.25, but we can do that by formula. 6 is 2 times 2 times 1.5, and we already had 1.5 worked out. 7 is 2 times 2 times 1.75, and we have a formula for the logarithm of 1.75. 8 is 2 times 2 times 2, so, triple whatever the logarithm of 2 is. 9 is 3 times 3, so, double the logarithm of 3.

We’re not required to do things this way. I just picked some nice, easy ways to factor the whole numbers up to 9, and that didn’t seem to demand doing too much more work. I’d need the logarithms of 1.25 and 1.75, as well as 2, but I can use the formula or, for that matter, work it out using the rest of my table: 1.25 is 1.2 times 1.04 times 1.001 times 1.000602, approximately. But there are infinitely many ways to get 3 by multiplying together numbers between 1 and 2, and we can use any that are convenient.

We do still need the logarithm of 2, but, then, 2 is among other things equal to 1.6 times 1.25, and we’d been planning to work out the logarithm of 1.6 all along, and 1.25 is useful in getting us to 5 also, so, why not do that?

So in summary we could get logarithms for any numbers we wanted by working out the logarithms for 1.1, 1.2, 1.3, and so on, and 1.01, 1.02, 1.03, et cetera, and 1.001, 1.002, 1.003 and so on, and then 1.25 and 1.75, which lets us work out the logarithms of 2, 3, 4, and so on up to 9.
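Just to make the bookkeeping concrete, here’s that plan as a sketch in Python — my own rendering, using the factorizations named above; the stopping tolerance is an arbitrary choice of mine:

    LOG10_E = 0.4342944819032518          # common logarithm of e

    def log10_series(x, tol=1e-12):
        """Common log of x between 0 and 2, by the series for log_e(1+h).
        Sums terms until the first neglected one is below tol."""
        h = x - 1.0
        total, power, n = 0.0, h, 1
        while abs(power) / n >= tol:
            total += power / n if n % 2 else -power / n
            power *= h
            n += 1
        return LOG10_E * total

    # The series handles everything between 1 and 2: the tenths, the
    # hundredths, and the two extras, 1.25 and 1.75.
    table = {}
    for v in (1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9,
              1.01, 1.02, 1.03, 1.04, 1.05, 1.06, 1.07, 1.08, 1.09,
              1.25, 1.75):
        table[v] = log10_series(v)

    # Factoring reaches the whole numbers, exactly as described above.
    table[2] = table[1.6] + table[1.25]        # 2 = 1.6 * 1.25
    table[3] = table[1.5] + table[2]           # 3 = 1.5 * 2
    table[4] = 2 * table[2]
    table[5] = 2 * table[2] + table[1.25]      # 5 = 2 * 2 * 1.25
    table[6] = 2 * table[2] + table[1.5]
    table[7] = 2 * table[2] + table[1.75]
    table[8] = 3 * table[2]
    table[9] = 2 * table[3]

    print(table[2], table[3])                  # about 0.30103 and 0.47712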

I haven’t yet worked out, but I am curious about, the fewest number of “extra” numbers I’d have to calculate. That is, granted that I have to figure out the logarithms of 1.1, 1.01, 1.001, et cetera anyway, the way I outlined things I also have to work out the logarithms of 1.25 and 1.75 to get all the numbers I need. Is it possible to figure out a cleverer bit of factorization that requires only one extra number to be worked out? For that matter, is it possible to need no extra numbers? My instinctive response is to say no, but that’s hardly a proof. I’d be interested to know better.

Splitting a Cake with a Missing Piece


Joseph Nebus:

I wanted to offer something a little light today, as I’m in the midst of figuring out the next articles in a couple of my ongoing threads and getting ready for a guest posting. So here, from Mathematics Lounge, please enjoy this nice little puzzle about how to cut, into two even pieces, a cake that’s already had a piece cut out of it. It’s got a lovely answer, and it’s worth pondering the problem, and why that answer’s true, before reading the solution. And there’s another, grin-worthy, solution offered in the comments.

Originally posted on Mathematics Lounge:

Problem:
Jeremy and Jane would like to divide a rectangular cake in half, but their friend Bob (who can be a jerk sometimes) has already cut out a piece for himself. Bob’s slice is a rectangle of some arbitrary size and rotation. How can Jeremy and Jane divide the remaining cake into two equal portions, using a single cut with a sufficiently long knife?


Description:
This is an interesting problem with a fairly elegant solution. It is the type of problem that can be posed as a math puzzle/riddle, and figured out on the spot with some ingenuity.

For this problem, we define a single cut as a separation of the area made by a straight line, viewed from above. For example, a cut that crosses a gap (like below) may intersect the cake in two separate places, but still counts as one cut. (This example, of course, clearly does…


Reading the Comics, September 8, 2014: What Is The Problem Edition


Must be the start of school or something. In today’s roundup of mathematically-themed comics there are a couple of strips that I think touch on the question of defining just what the problem is: what are you trying to measure, what are you trying to calculate, what are the rules of this sort of calculation? That’s a lot of what’s really interesting about mathematics, which is how I’m able to say something about a rerun Archie comic. It’s not easy work but that’s why I get that big math-blogger paycheck.

Edison Lee works out the shape of the universe, and as ever in this sort of thing, he forgot to carry a number.

I’d have thought the universe to be at least three-dimensional.

John Hambrock’s The Brilliant Mind of Edison Lee (September 2) talks about the shape of the universe. Measuring the world, or the universe, is certainly one of the older influences on mathematical thought. From a handful of observations and some careful reasoning, for example, one can understand how large the Earth is, and how far away the Moon and the Sun must be, without going past the kinds of reasoning or calculations that a middle school student would probably be able to follow.

There is something deeper to consider about the shape of space, though: the geometry of the universe affects what things can happen in it, and can even be seen in the kinds of physics that happen. A famous, and astounding, result by the mathematical physicist Emmy Noether shows that symmetries in space correspond to conservation laws. That the universe is, apparently, rotationally symmetric — everything would look the same if the whole universe were picked up and rotated (say) 80 degrees about one axis — means that there is such a thing as the conservation of angular momentum. That the universe is time-symmetric — the universe would look the same if it had got started five hours later (please pretend that’s a statement that can have any coherent meaning) — means that energy is conserved. And so on. It may seem, superficially, like a cosmologist is engaged in some almost ancient-Greek-style abstract reasoning to wonder what shapes the universe could have and what it does, but (putting aside that it gets hard to divide mathematics, physics, and philosophy in this kind of field) we can imagine observable, testable consequences of the answer.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (September 5) tells a joke starting with “two perfectly rational perfectly informed individuals walk into a bar”, along the way to a joke about economists. The idea of “perfectly rational perfectly informed” people is part of the mathematical modeling that’s become a popular strain of economic thought in recent decades. It’s a model, and like many models, is properly speaking wrong, but it allows one to describe interesting behavior — in this case, how people will make decisions — without complications you either can’t handle or aren’t interested in. The joke goes on to the idea that one can assign costs and benefits to continuing in the joke. The idea that one can quantify preferences and pleasures and happiness I think of as being made concrete by Jeremy Bentham and the utilitarian philosophers, although trying to find ways to measure things has been a streak in Western thought for close to a thousand years now, and rather fruitfully so. But I wouldn’t have much to do with protagonists who can’t stay around through the whole joke either.

Marc Anderson’s Andertoons (September 6) was probably composed in the spirit of joking, but it does hit something that I understand baffles kids learning it every year: that subtracting a negative number does the same thing as adding a positive number. To be fair to kids who need a couple months to feel quite confident in what they’re doing, mathematicians needed a couple generations to get the hang of it too. We have now a pretty sound set of rules for how to work with negative numbers, that’s nice and logically tested and very successful at representing things we want to know, but there seems to be a strong intuition that says “subtracting a negative three” and “adding a positive three” might just be different somehow, and we won’t really know negative numbers until that sense of something being awry is resolved.
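If a worked instance helps, here’s the rule in action, with numbers of my own choosing:

8 - (-3) = 8 + 3 = 11

Taking away a debt of three leaves you three better off, which is about the homeliest interpretation I know that makes the rule feel necessary rather than arbitrary.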

Andertoons pops up again the next day (September 7) with a completely different drawing of a chalkboard and this time a scientist and a rabbit standing in front of it. The rabbit’s shown to be able to do more than multiply and, indeed, the mathematics is correct. Cosines and sines have a rather famous link to exponentiation and to imaginary- and complex-valued numbers, and it can be useful to change an ordinary cosine or sine into this exponentiation of a complex-valued number. Why? Mostly, because exponentiation tends to be pretty nice, analytically: you can multiply and divide terms pretty easily, you can take derivatives and integrals almost effortlessly, and then if you need a cosine or a sine you can get that out at the end again. It’s a good trick to know how to do.
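That famous link is Euler’s formula and its immediate consequences; writing them out shows exactly the exchange being made:

e^{i\theta} = \cos\theta + i\sin\theta , \qquad \cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2} , \qquad \sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i}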

Jeff Harris’s Shortcuts children’s activity panel (September 9) is a page of stuff about “Geometry”, and it’s got some nice facts (some mathematical, some historical), and a fair bunch of puzzles about the field.

Morrie Turner’s Wee Pals (September 7, perhaps a rerun; Turner died several months ago, though I don’t know how far ahead of publication he was working) features a word problem in terms of jellybeans that underlines the danger of unwarranted assumptions in this sort of problem-phrasing.

Moose has trouble working out 15 percent of $8.95; Jughead explains why.

How far back is this rerun from if Moose got lunch for two for $8.95?

Craig Boldman and Henry Scarpelli’s Archie (September 8, rerun) goes back to one of arithmetic’s traditional comic strip applications, that of working out the tip. Poor Moose is driving himself crazy trying to work out 15 percent of $8.95, probably from a quiz-inspired fear that if he doesn’t get it correct to the penny he’s completely wrong. Being able to do a calculation precisely is useful, certainly, but he’s forgetting that in this real-world application he gets some flexibility in what has to be calculated. He’d save some effort if he realized the tip for $8.95 is probably close enough to the tip for $9.00 that he could afford the difference, most obviously, and (if his budget allows) that he could just as well work out one-sixth the bill instead of fifteen percent, and give up that workload in exchange for sixteen cents.
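Worked out, with my figures rounded to the penny: fifteen percent is ten percent (a decimal shift, 90 cents) plus half that again (45 cents), while one-sixth is just halving and then dividing by three. The sixteen-cent gap is the price of the easier arithmetic:

0.15 \times \$8.95 \approx \$1.34 , \qquad \frac{\$9.00}{6} = \$1.50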

Mark Parisi’s Off The Mark (September 8) is another entry into the world of anthropomorphized numbers, so you can probably imagine just what π has to say here.

Machines That Give You Logarithms


As I’ve laid out the tools that the Harvard IBM Automatic Sequence-Controlled Calculator would use to work out a common logarithm, now I can show how this computer of the 1940s and 1950s would do it. The goal, remember, is to compute logarithms to a desired accuracy, using computers that haven’t got abundant memory, and as quickly as possible. As quickly as possible means, roughly, avoiding multiplication (which takes time) and doing as few divisions as can possibly be done (divisions take forever).

As a reminder, the tools we have are:

  1. We can work out at least some logarithms ahead of time and look them up as needed.
  2. The natural logarithm of a number close to 1 is \log_e\left(1 + h\right) = h - \frac12h^2 + \frac13h^3 - \frac14h^4 + \frac15h^5 - \cdots .
  3. If we know a number’s natural logarithm (base e), then we can get its common logarithm (base 10): multiply the natural logarithm by the common logarithm of e, which is about 0.43429.
  4. Whether for the natural or the common logarithm (or any other logarithm you might like), the logarithm of a product is the sum of the logarithms: \log\left(a\cdot b\cdot c \cdot d \cdots \right) = \log(a) + \log(b) + \log(c) + \log(d) + \cdots

Now we’ll put this to work. The first step is deciding which logarithms to work out ahead of time. Since we’re dealing with common logarithms, we only need to be able to work out the logarithms for numbers between 1 and 10: the common logarithm of, say, 47.2286 is one plus the logarithm of 4.72286, and the common logarithm of 0.472286 is minus one plus the logarithm of 4.72286. So we’ll start by working out the logarithms of 1, 2, 3, 4, 5, 6, 7, 8, and 9, and storing them in what, in 1944, was still a pretty tiny block of memory. The original computer using this could store 72 numbers at a time, remember, albeit to 23 decimal digits.

So let’s say we want to know the logarithm of 47.2286. We have to divide this by 10 in order to get the number 4.72286, which is between 1 and 10, so we’ll need to add one to whatever we get for the logarithm of 4.72286. (And, yes, we want to avoid doing divisions, but dividing by 10 is a special case. The Automatic Sequence-Controlled Calculator stored numbers, if I am not grossly misunderstanding things, in base ten, and so dividing or multiplying by ten was as fast for it as moving the decimal point is for us. Modern computers, using binary arithmetic, find it just as fast to divide or multiply by powers of two, even though division in general is a relatively sluggish thing.)

We haven’t worked out what the logarithm of 4.72286 is. And we don’t have a formula that’s good for that. But: 4.72286 is equal to 4 times 1.1807, and therefore the logarithm of 4.72286 is going to be the logarithm of 4 plus the logarithm of 1.1807. We worked out the logarithm of 4 ahead of time (it’s about 0.60206, if you’re curious).

We can use the infinite series formula to get the natural logarithm of 1.1807 to as many digits as we like. The natural logarithm of 1.1807 will be about 0.1807 - \frac12 0.1807^2 + \frac13 0.1807^3 - \frac14 0.1807^4 + \frac15 0.1807^5 - \cdots or 0.16613. Multiply this by the common logarithm of e (about 0.43429) and we have a common logarithm of about 0.07214. (We have an error estimate, too: we’ve got the natural logarithm of 1.1807 within a margin of error of \frac16 0.1807^6 , or about 0.000 0058, which, multiplied by the logarithm of e, corresponds to a margin of error for the common logarithm of about 0.000 0025.)

Therefore: the logarithm of 47.2286 is about 1 plus 0.60206 plus 0.07214, which is 1.6742. And it is, too; we’ve done very well at getting the number just right considering how little work we really did.

Although … that infinite series formula. That requires a fair number of multiplications, at least eight as I figure it, and those are sluggish. It also, properly speaking, requires divisions, although you could easily write your code so that instead of dividing by 4 (say) you multiply by 0.25. For this particular example number of 47.2286 we didn’t need very many terms in the series to get four decimal digits of accuracy, but maybe we got lucky and some other number would have required dozens of multiplications. Can we make this process, on average, faster?

And here’s one way to do it. Besides working out the common logarithms for the whole numbers 1 through 9, also work out the common logarithms for 1.1, 1.2, 1.3, 1.4, et cetera up to 1.9. And then …

We started with 47.2286. Divide by 10 (a free bit of work) and we have 4.72286. Now, 4.72286 is 4 times 1.180715. And 1.180715 is equal to 1.1 — the whole number and the first digit past the decimal — times 1.07337. That is, 47.2286 is 10 times 4 times 1.1 times 1.07337. And so the logarithm of 47.2286 is the logarithm of 10 plus the logarithm of 4 plus the logarithm of 1.1 plus the logarithm of 1.07337. We are almost certainly going to need fewer terms in the infinite series to get the logarithm of 1.07337 than we need for 1.180715, and so, at the cost of one more division, we probably save a good number of multiplications.

The common logarithm of 1.1 is about 0.041393. So the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) is 1.6435, which falls a little short of the actual logarithm we’d wanted, about 1.6742, but two or three terms in the infinite series should be enough to make that up.

Or we could work out a few more common logarithms ahead of time: those for 1.01, 1.02, 1.03, and so on up to 1.09. Our original 47.2286 divided by 10 is 4.72286. Divide that by the first number, 4, and you get 1.180715. Divide 1.180715 by 1.1, the first two digits, and you get 1.07337. Divide 1.07337 by 1.07, the first three digits, and you get 1.003156. So 47.2286 is 10 times 4 times 1.1 times 1.07 times 1.003156. So the common logarithm of 47.2286 is the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) plus the logarithm of 1.07 (about 0.02938) plus the logarithm of 1.003156 (to be determined). Even ignoring the to-be-determined part, that adds up to 1.6728, which is a little short of the 1.6742 we want but is doing pretty well considering we’ve reduced the whole problem to three divisions, looking stuff up, and four additions.

If we go a tiny bit farther, and also have worked out ahead of time the logarithms for 1.001, 1.002, 1.003, and so on out to 1.009, and do the same process all over again, then we get some better accuracy and quite cheaply yet: 47.2286 divided by 10 is 4.72286. 4.72286 divided by 4 is 1.180715. 1.180715 divided by 1.1 is 1.07337. 1.07337 divided by 1.07 is 1.003156. 1.003156 divided by 1.003 is 1.0001558.

So the logarithm of 47.2286 is the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) plus the logarithm of 1.07 (0.029383) plus the logarithm of 1.003 (0.001301) plus the logarithm of 1.0001558 (to be determined). Leaving aside the to-be-determined part, that adds up to 1.6741.

And the to-be-determined part is great: if we used just a single term in this series, the margin for error would be, at most, 0.000 000 0052, which is probably small enough for practical purposes. The first term in the to-be-determined part is awfully easy to calculate, too: it’s just 1.0001558 minus 1, that is, 0.0001558, and multiplying that by the common logarithm of e gives about 0.0000677. Add that and we have an approximate logarithm of 1.6742, which is dead on.

And I do mean dead on: work out more decimal places of the logarithm based on this summation and you get 1.674 205 077 226 78. That’s no more than five billionths away from the correct logarithm for the original 47.2286. And it required doing four divisions, one multiplication, and five additions. It’s difficult to picture getting such good precision with less work.

Of course, that’s done in part by having stockpiled a lot of hard work ahead of time: we need to know the logarithms of 1, 1.1, 1.01, 1.001, and then 2, 1.2, 1.02, 1.002, and so on. That’s 36 numbers altogether and there are many ways to work out logarithms. But people have already done that work, and we can use that work to make the problems we want to do considerably easier.

But there’s the process. Work out ahead of time logarithms for 1, 1.1, 1.01, 1.001, and so on, to whatever the limits of your patience. Then take the number whose logarithm you want and divide (or multiply) by ten until you get your working number into the range of 1 through 10. Divide out the first digit, which will be a whole number from 1 through 9. Divide out the first two digits, which will be something from 1.1 to 1.9. Divide out the first three digits, something from 1.01 to 1.09. Divide out the first four digits, something from 1.001 to 1.009. And so on. Then add up the logarithms of the power of ten you divided or multiplied by with the logarithm of the first divisor and the second divisor and third divisor and fourth divisor, until you run out of divisors. And then — if you haven’t already got the answer as accurately as you need — work out as many terms in the infinite series as you need; probably, it won’t be very many. Add that to your total. And you are, amazingly, done.
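Here’s the whole process as a sketch in Python. The table lookups lean on math.log10 as a stand-in for the values the Automatic Sequence-Controlled Calculator would have had stored ahead of time; everything else follows the steps above:

    import math

    LOG10_E = 0.4342944819032518          # common logarithm of e

    def ln_one_plus(h, tol=1e-12):
        """log_e(1+h) by the series h - h^2/2 + h^3/3 - ..., for |h| < 1.
        Stops once the first neglected term is below tol, which bounds
        the error."""
        total, power, n = 0.0, h, 1
        while abs(power) / n >= tol:
            total += power / n if n % 2 else -power / n
            power *= h
            n += 1
        return total

    def log10_by_table(x):
        total = 0.0
        # Shift into the range 1 to 10; in a base-ten machine this is
        # as cheap as moving the decimal point.
        while x >= 10.0:
            x /= 10.0
            total += 1.0
        while x < 1.0:
            x *= 10.0
            total -= 1.0
        # Divide out the first digit, then the first two, three, and
        # four digits, adding each divisor's tabulated logarithm.
        for scale in (1, 10, 100, 1000):
            divisor = math.floor(x * scale) / scale   # 4, 1.1, 1.07, 1.003, ...
            x /= divisor
            total += math.log10(divisor)              # the table lookup, in spirit
        # What's left is barely above 1; a term or two of the series,
        # converted to a common logarithm, finishes the job.
        return total + LOG10_E * ln_one_plus(x - 1.0)

    print(log10_by_table(47.2286))        # 1.6742050..., as in the text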

Machines That Do Something About Logarithms


I’m going to assume everyone reading this accepts that logarithms are worth computing, and try to describe how Harvard’s IBM Automatic Sequence-Controlled Calculator would work them out.

The first part of this is kind of an observation: the quickest way to give the logarithm of a number is to already know it. Looking it up in a table is way faster than evaluating it, and that’s as true for the computer as for you. Obviously we can’t work out logarithms for every number, what with there being so many of them, but we could work out the logarithms for a reasonable range and to a certain precision and trust that the logarithm of (say) 4.42286 is going to be tolerably close to the logarithm of 4.423 that we worked out ahead of time. Working out a range of, say, 1 to 10 for logarithms base ten is plenty, because that’s all the range we need: the logarithm base ten of 44.2286 is the logarithm base ten of 4.42286 plus one. The logarithm base ten of 0.442286 is the logarithm base ten of 4.42286 minus one. You can guess from that what the logarithm of 4,422.86 is, compared to that of 4.42286.

This is trading computer memory for computational speed, which is often worth doing. But the old Automatic Sequence-Controlled Calculator can’t do that, at least not as easily as we’d like: it had the ability to store 72 numbers, albeit to 23 decimal digits. We can’t just use “worked it out ahead of time”, although we’re not going to abandon that idea either.

The next piece we have is something useful if we want to work out the natural logarithm — the logarithm base e — of a number that’s close to 1. We have a formula that will let us work out this natural logarithm to whatever accuracy we want:

\log_{e}\left(1 + h\right) = h - \frac12 h^2 + \frac13 h^3 - \frac14 h^4 + \frac15 h^5 - \cdots \mbox{ if } |h| < 1

In principle, we have to add up infinitely many terms to get the answer right. In practice, we only add up terms until the error — the difference between our sum and the correct answer — is smaller than some acceptable margin. This seems to beg the question, because how can we know how big that error is without knowing what the correct answer is? In fact we don’t know just what the error is, but we do know that the error can’t be any larger than the absolute value of the first term we neglect.

Let me give an example. Suppose we want the natural logarithm of 1.5, which the alert have noticed is equal to 1 + 0.5. Then h is 0.5. If we add together the first five terms of the natural logarithm series, then we have 0.5 - \frac12 0.5^2 + \frac13 0.5^3 - \frac14 0.5^4 + \frac15 0.5^5 which is approximately 0.40729. If we were to work out the next term in the series, that would be -\frac16 0.5^6 , which has an absolute value of about 0.0026. So the natural logarithm of 1.5 is 0.40729, plus or minus 0.0026. If we only need the natural logarithm to within 0.0026, that’s good: we’re done.

In fact, the natural logarithm of 1.5 is approximately 0.40547, so our error is closer to 0.00183, but that’s all right. Few people complain that our error is smaller than what we estimated it to be.

If we know what margin of error we’ll tolerate, by the way, then we know how many terms we have to calculate. Suppose we want the natural logarithm of 1.5 accurate to 0.001. Then we have to find the first number n so that \frac1n 0.5^n < 0.001 ; if I’m not mistaken, that’s eight, meaning we calculate seven terms and the eighth is the first we may neglect. Just how many terms we have to calculate will depend on what h is; the bigger it is — the farther the number is from 1 — the more terms we’ll need.
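In code form, the stopping rule is pleasantly short. A sketch, using the natural logarithm of 1.5 from above:

    def ln_series(h, tol):
        """Sum h - h^2/2 + h^3/3 - ... until the next term is below tol.
        That first neglected term bounds the error."""
        total, power, n = 0.0, h, 1
        while abs(power) / n >= tol:
            total += power / n if n % 2 else -power / n
            power *= h
            n += 1
        return total, n                   # n indexes the first neglected term

    approx, n = ln_series(0.5, 0.001)
    print(approx, n)                      # about 0.40580, with n = 8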

The trouble with this is that it’s only good for working out the natural logarithms of numbers between 0 and 2. (And it’s better the closer the number is to 1.) If you want the natural logarithm of 44.2286, you have to divide out the highest power of e that’s less than it — well, you can fake that by dividing by e repeatedly — and what you get is that it’s e times e times e times 2.202 and we’re stuck there. Not hopelessly, mind you: we could find the logarithm of 1/2.202, which will be minus the logarithm of 2.202, at least, and we can work back to the original number from there. Still, this is a bit of a mess. We can do better.

The third piece we can use is one of the fundamental properties of logarithms. This is true for any base, as long as we use the same base for each logarithm in the equation here, and I’ve mentioned it in passing before:

\log\left(a\cdot b\cdot c\cdot d \cdots\right) = \log\left(a\right) + \log\left(b\right) + \log\left(c\right) + \log\left(d\right) + \cdots

That is, if we could factor a number whose logarithm we want into components which we can either look up or we can calculate very quickly, then we know its logarithm is the sum of the logarithms of those components. And this, finally, is how we can work out logarithms quickly and without too much hard work.
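A quick numeric instance of that property, in common logarithms:

\log_{10}(6) = \log_{10}(2 \cdot 3) = \log_{10}(2) + \log_{10}(3) \approx 0.30103 + 0.47712 = 0.77815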

My Math Blog Statistics, August 2014


So, August 2014: it’s been a month that brought some interesting threads into my writing here. It’s also had slightly longer gaps in my writing than I quite like, because I’d just not had the time to do as much writing as I hoped. But that leaves the question of how this affected my readership: are people still sticking around and do they like what they see?

The number of unique readers around here, according to WordPress, rose slightly, from 231 in July to 255 in August. This doesn’t compare favorably to numbers like the 315 visitors in May, but still, it’s an increase. The total number of page views dropped from 589 in July to 561 in August and don’t think that the last few days of the month I wasn’t tempted to hit refresh a bunch of times. Anyway, views per visitor dropped from 2.55 to 2.20, which seems to be closer to my long-term average. And at some point in the month — I failed to track when — I reached my 17,000th reader, and got up to 17,323 by the end of the month. If I’m really interesting this month I could hit 18,000 by the end of September.

The countries sending me the most readers were, in first place, the ever-unsurprising United States (345). Second place was Spain (36) which did take me by surprise, and Puerto Rico was third (30). The United Kingdom, Austria, and Canada came up next so at least that’s all familiar enough, and India sent me a nice round dozen readers. I got a single reader from each of Argentina, Belgium, Brazil, Finland, Germany, Hong Kong, Indonesia, Latvia, Mexico, Romania, Serbia, South Korea, Sweden, Thailand, and Venezuela. The only country that also sent me a single reader in July was Hong Kong (which also sent a lone reader in June and in May), and going back over last month’s post revealed that Spain and Puerto Rico were single-reader countries in July. I don’t know what I did to become more interesting there in August but I’ll try to keep it going.

The most popular articles in August were:

I fear I lack any good Search Term Poetry this month. Actually, the biggest search terms have been pretty rote ones, e.g.:

  • trapezoid
  • barney and clyde carl friedrich comic
  • moment of inertia of cube around the longest diagonal
  • where do negative numbers come from
  • comic strip math cube of binomials

Actually, Gauss comic strips were searched for a lot. I’m sorry I don’t have more of them for folks, but have you ever tried to draw Gauss? I thought not. At least I had something relevant for the moment of inertia question even if I didn’t answer it completely.

Reading the Comics, August 29, 2014: Recurring Jokes Edition


Well, I did say we were getting to the end of summer. It’s taken only a couple days to get a fresh batch of enough mathematics-themed comics to include here, although the majority of them are about mathematics in ways that we’ve seen before, sometimes many times. I suppose that’s fair; it’s hard to keep thinking of wholly original mathematics jokes, after all. When you’ve had one killer gag about “537”, it’s tough to move on to “539” and have it still feel fresh.

Tom Toles’s Randolph Itch, 2 am (August 27, rerun) presents Randolph suffering the nightmare of contracting a case of entropy. Entropy might be the 19th-century mathematical concept that’s most achieved popular recognition: everyone knows it as some kind of measure of how disorganized things are, and that it’s going to ever increase, and if pressed there’s maybe something about milk being stirred into coffee that’s linked with it. The mathematical definition of entropy is tied to the probability one will find whatever one is looking at in a given state. Work out the probability of finding a system in a particular state — having particles in these positions, with these speeds, maybe these bits of magnetism, whatever — and multiply that by the logarithm of that probability. Work out that product for all the possible ways the system could possibly be configured, however likely or however improbable, just so long as they’re not impossible states. Then add together all those products over all possible states. (This is when you become grateful for learning calculus, since that makes it imaginable to do all these multiplications and additions.) That’s the entropy of the system. And it applies to things with stunning universality: it can be meaningfully measured for the stirring of milk into coffee, for heat flowing through an engine, for a body falling apart, for messages sent over the Internet, all the way to the outcomes of sports brackets. It isn’t just body parts falling off.
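In symbols, with p_i the probability of finding the system in state i and k a constant that sets the units, that recipe is the familiar entropy formula:

S = -k \sum_i p_i \log p_i

The minus sign makes the total positive, since the logarithm of a probability is never positive.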

Stanley's old algebra teacher insists there is yet hope for him.

Randy Glasbergen’s _The Better Half_ for the 28th of August, 2014.

Randy Glasbergen’s The Better Half (August 28) does the old joke about not giving up on algebra someday being useful. Do teachers in other subjects get this? “Don’t worry, someday your knowledge of the Panic of 1819 will be useful to you!” “Never fear, someday they’ll all look up to you for being able to diagram a sentence!” “Keep the faith: you will eventually need to tell someone who only speaks French that the notebook of your uncle is on the table of your aunt!”

Eric the Circle (August 28, by “Gilly” this time) sneaks into my pages again by bringing a famous mathematical symbol into things. I’d like to make a mention of the links between mathematics and music which go back at minimum as far as the Ancient Greeks and the observation that a lyre string twice as long produced the same note one octave lower, but lyres and strings don’t fit the reference Gilly was going for here. Too bad.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (August 28) is another strip to use a “blackboard full of mathematical symbols” as visual shorthand for “is incredibly smart stuff going on”. The symbols look to me like they at least started out as being meaningful — they’re the kinds of symbols I expect in describing the curvature of space, and which you can find by opening up a book about general relativity — though I’m not sure they actually stay sensible. (It’s not the kind of mathematics I’ve really studied.) However, work in progress tends to be sloppy, the rough sketch of an idea which can hopefully be made sound.

Anthony Blades’s Bewley (August 29) has the characters stare into space pondering the notion that in the vastness of infinity there could be another of them out there. This is basically the same existentially troublesome question of the recurrence of the universe in enough time, something not actually prohibited by the second law of thermodynamics and the way entropy tends to increase with the passing of time, but we have already talked about that.

Reading the Comics, August 25, 2014: Summer Must Be Ending Edition


I’m sorry to admit that I can’t think of a unifying theme for the most recent round of comic strips which mention mathematical topics, other than that this is one of those rare instances of nobody mentioning infinite numbers of typing monkeys. I have to guess Comic Strip Master Command sent around a notice that summer vacation (in the United States) will be ending soon, so cartoonists should start practicing their mathematics jokes.

Tom Toles’s Randolph Itch, 2 a.m. (August 22, rerun) presents what’s surely the lowest-probability outcome of a toss of a fair coin: its landing on the edge. (I remember this as also the gimmick starting a genial episode of The Twilight Zone.) It’s a nice reminder that you do have to consider all the things that might affect an experiment’s outcome before concluding what are likely and unlikely results.

It also inspires, in me, a side question: a single coin, obviously, has a tiny chance of landing on its side. A roll of coins has a tiny chance of not landing on its side. How thick a roll has to be assembled before the chance of landing on the side and the chance of landing on either end become equal? (Without working it out, my guess is it’s about when the roll of coins is as tall as it is across, but I wouldn’t be surprised if it were some slightly oddball thing like the roll has to be the square root of two times the diameter of the coins.)

Doug Savage’s Savage Chickens (August 22) presents an “advanced Sudoku”, in a puzzle that’s either trivially easy or utterly impossible: there are so few constraints on the numbers in the presented puzzle that it’s not hard to write in digits that will satisfy the results, but, if there’s one right answer, there’s not nearly enough information to tell which one it is. I do find the problem of satisfiability — giving just enough information to solve the puzzle, without allowing more than one solution to be valid — an interesting one. I imagine there’s a very similar problem at work in composing Ivasallay’s Find The Factors puzzles.

Phil Frank and Joe Troise’s The Elderberries (August 24, rerun) presents a “mind aerobics” puzzle in the classic mathematical form of drawing socks out of a drawer. Talking about pulling socks out of drawers suggests a probability puzzle, but the question actually takes it a different direction, into a different sort of logic, and asks about how many socks need to be taken out in order to be sure you have one of each color. The easiest way to apply this is, I believe, to use what’s termed the “pigeon hole principle”, which is one of those mathematical concepts so clear it’s hard to actually notice it. The principle is just that if you have fewer pigeon holes than you have pigeons, and put every pigeon in a pigeon hole, then there’s got to be at least one pigeon hole with more than one pigeon. (So if the drawer held, say, ten black socks and eight white ones, you might be unlucky enough to pull all ten black socks first, and only an eleventh draw would guarantee one of each color.) (Wolfram’s MathWorld credits the statement to Peter Gustav Lejeune Dirichlet, a 19th century German mathematician with a long record of things named for him in number theory, probability, analysis, and differential equations.)

Dave Whamond’s Reality Check (August 24) pulls out the old little pun about algebra and former romantic partners. You’ve probably seen this joke passed around your friends’ Twitter or Facebook feeds too.

Julie Larson’s The Dinette Set (August 25) presents some terrible people’s definition of calculus, as “useless math with letters instead of numbers”, which I have to gripe about because that seems like a more on-point definition of algebra. I’m actually sympathetic to the complaint that calculus is useless, at least if you don’t go into a field that requires it (although that’s rather a circular definition, isn’t it?), but I don’t hold to the idea that whether something is “useful” should determine whether it’s worth learning. My suspicion is that things you find interesting are worth learning, either because you’ll find uses for them, or just because you’ll be surrounding yourself with things you find interesting.

Shifting from numbers to letters, as are used in algebra and calculus, is a great advantage. It allows you to prove things that are true for many problems at once, rather than just the one you’re interested in at the moment. This generality may be too much work to bother with, at least for some problems, but it’s easy to see what’s attractive in solving a problem once and for all.

Mikael Wulff and Anders Morgenthaler’s WuMo (August 25) uses a couple of motifs, neither of which I’m sure is precisely mathematical, but they seem close enough for my needs. First there’s the motif of Albert Einstein as being so spectacularly brilliant that he can form an argument in favor of anything, regardless of whether it’s right or wrong. Surely that derives from Einstein’s general reputation of utter brilliance, perhaps flavored by the point that he was able to show how common-sense intuitive ideas about things like “it’s possible to say whether this event happened before or after that event” go wrong. And then there’s the motif of a sophistic argument being so massive and impressive in its bulk that it’s easier to just give in to it rather than try to understand or refute it.

It’s fair of the strip to present Einstein as beginning with questions about how one perceives the universe, though: his relativity work in many ways depends on questions like “how can you tell whether time has passed?” and “how can you tell whether two things happened at the same time?” These are questions which straddle physics, mathematics, and philosophy, and trying to find answers which are logically coherent and testable produced much of the work that’s given him such lasting fame.