Bob Newhart interviews Herman Hollerith


Yesterday was the birthday of Herman Hollerith. His 40th since his birth in 1860. He’s renowned in computing circles. His work in automating the counting and tabulating of data made the United States’s 1890 Census possible. This is not ordinary hyperbole: the 1880 Census’s data took eight years to fully collate. Hollerith’s tabulating machines took … well, six years for the full job, but they were keeping track of quite a bit of information. Hollerith’s system would go on to be used for other censuses, and also for general inventory and data-tracking purposes. His tabulating company would go on to be one of the original components of IBM. Cards, card readers, and card sorters with a clear lineage to this system would be used until fully electronic computers took over.

(It’s commonly assumed that the traditional 80-character width of a text terminal traces to the 80-column punch cards which became the standard. Programmers particularly love to tell that tale, ignoring early computing screens that had different widths, particularly 72 characters. More plausibly, the 80-character width owes to two things: it’s a nice round number, and it’s close to the number of characters you can type on a standard sheet of paper with a normal typewriter font. So it’s about the “right” length, one that we’ve been trained to accept as enough text to read at a glance.)

Well. In about 1970 IBM hired Bob Newhart to record a bit, for … fun, if that word applies to IBM. Part of the publicity for launching the famous System/370 machine. The structure echoes the bit where Bob Newhart imagines being the first guy to hear of Sir Walter Raleigh’s importing of tobacco, and just how weird every bit of that is. In this bit, Newhart imagines talking on the phone with Herman Hollerith and hearing about just how this punched-card system is supposed to work. For decades, though, the film was reported lost.

What I did not know until mentioning it to a friend two days ago is: the film was found! And a decade ago! In a Swedish bank vault, because that’s the way this sort of thing always happens. Which is a neat bit of historical rhyming: the original fine-grained data from the first Hollerith census of 1890 is lost, most likely destroyed in 1933 or 1934. So, please let me share with you Bob Newhart hearing about Herman Hollerith’s system. The end appears to be cut off, and there are Swedish subtitles that might just give away a couple of jokes, if you can’t help paying attention to them.

Like a lot of comic work-for-hire it’s not Newhart’s best. It’s not going to displace the Voyage of the USS Codfish in my heart. There are a few spots where it seems to me that Newhart’s overlooked a good additional punch line, and I don’t know whether that reflects Newhart wanting to keep the piece from growing too long or too technical or what. It’s possible Newhart didn’t feel familiar enough with punch card technology to get too technical, either. Newhart did work, briefly, as an accountant and might have had some reason to use the things. But I’m not aware of his telling any stories of doing so, and that seems a telling omission.

Still, it’s great to see this bit has been preserved, and is available. And is a Bob Newhart routine about early computer technologies, somehow.

Reading the Comics, October 4, 2016: Split Week Edition Part 1


The last week in mathematically themed comics was a pleasant one. By “a pleasant one” I mean Comic Strip Master Command sent enough comics out that I feel comfortable splitting them across two essays. Look for the other half of the past week’s strips in a couple days at a very similar URL.

Mac King and Bill King’s Magic in a Minute feature for the 2nd shows off a bit of number-pattern wonder. Set numbers in order on a four-by-four grid and select four as directed and add them up. You get the same number every time. It’s a cute trick. I would not be surprised if there are some good group theory questions underlying this, like about what different ways one could arrange the numbers 1 through 16. Or what other size grids the pattern will work for: 2 by 2? (Obviously.) 3 by 3? 5 by 5? 6 by 6? I’m not saying I actually have been having fun doing this. I just sense there’s fun to be had there.
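If you’d like to see the wonder without the crayons, here’s a minimal sketch in Python. I’m assuming the directions amount to the usual version of this trick, picking one number from each row and each column; that’s my reading, not anything quoted from the strip:

    from itertools import permutations

    # The numbers 1 through 16, set in order on a four-by-four grid.
    grid = [[4 * row + col + 1 for col in range(4)] for row in range(4)]

    # Pick one number from each row so that no two picks share a column.
    totals = set()
    for cols in permutations(range(4)):
        totals.add(sum(grid[row][cols[row]] for row in range(4)))

    print(totals)   # {34}: every allowed selection adds up to the same number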

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 2nd is based on one of those weirdnesses of the way computers add. I remember in the 90s being on a Java mailing list. Routinely it would draw questions from people worried that something was very wrong, as adding 0.01 to a running total repeatedly wouldn’t get to exactly 1.00. Java was working correctly, in that it was doing what the specifications said. It’s just the specifications didn’t work quite like new programmers expected.
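Here’s the phenomenon in miniature, in Python rather than Java, though any language using ordinary double-precision floating point behaves the same way:

    # Add a hundredth a hundred times; the total is not exactly one.
    total = 0.0
    for _ in range(100):
        total += 0.01

    print(total)          # 1.0000000000000007 on a typical double-precision system
    print(total == 1.0)   # False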

What’s going on here is the same problem you get if you write down 1/3 as 0.333. You know that 1/3 plus 1/3 plus 1/3 ought to be 1 exactly. But 0.333 plus 0.333 plus 0.333 is 0.999. 1/3 is really a little bit more than 0.333, but we skip that part because it’s convenient to use only a few places past the decimal. Computers normally represent real-valued numbers with a scheme called floating point representation. At heart, that’s representing numbers with a limited number of significant digits. Enough that we don’t normally see the difference between the number we want and the number the computer represents.

Every number base has some rational numbers it can’t represent exactly using finitely many digits. Our normal base ten, for example, has “one-third” and “two-thirds”. Floating point arithmetic is built on base two, and that has some problems with tenths and hundredths and thousandths. That’s embarrassing but in the main harmless. Programmers learn about these problems and how to handle them. And if they ask the mathematicians we tell them how to write code so as to keep these floating-point errors from growing uncontrollably. If they ask nice.
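One standard trick, offered here as an example rather than as whatever advice a particular mathematician might actually give, is compensated summation, usually credited to William Kahan: keep a running estimate of the rounding error and fold it back into the next addition.

    def kahan_sum(values):
        total = 0.0
        compensation = 0.0                    # the low-order bits lost so far
        for v in values:
            y = v - compensation
            t = total + y                     # low-order digits of y get lost here ...
            compensation = (t - total) - y    # ... and recovered here
            total = t
        return total

    print(sum([0.01] * 100))         # 1.0000000000000007
    print(kahan_sum([0.01] * 100))   # 1.0, or at least far closer than the plain sum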

Random Acts of Nancy for the 3rd is a panel from Ernie Bushmiller’s Nancy. That panel’s from the 23rd of November, 1946. And it just uses mathematics in passing, arithmetic serving the role of most of Nancy’s homework. There’s a bit of spelling (I suppose) in there too, which probably just represents what’s going to read most cleanly. Random Acts is curated by Ernie Bushmiller fans Guy Gilchrist (who draws the current Nancy) and John Lotshaw.

Thom Bluemel’s Birdbrains for the 4th depicts the discovery of a new highest number. When humans discovered ‘1’ is, I would imagine, probably unknowable. Given the number sense that animals have it’s probably something that predates humans, that it’s something we’re evolved to recognize and understand. A single stroke for 1 seems to be a common symbol for the number. I’ve read histories claiming that a culture’s symbol for ‘1’ is often what they use for any kind of tally mark. Obviously nothing in human cultures is truly universal. But when I look at number symbols other than the Arabic and Roman schemes I’m used to, it is usually the symbol for ‘1’ that feels familiar. Then I get to the Thai numeral and shrug at my helplessness.

Bill Amend’s FoxTrot Classics for the 4th is a rerun of the strip from the 11th of October, 2005. And it’s made for mathematics people to clip out and post on the walls. Jason and Marcus are in their traditional nerdly way calling out sequences of numbers. Jason’s is the Fibonacci Sequence, which is as famous as mathematics sequences get. That’s the sequence of numbers in which every number is the sum of the previous two terms. You can start that sequence with 0 and 1, or with 1 and 1, or with 1 and 2. It doesn’t matter.

Marcus calls out the Perrin Sequence, which I never heard of before either. It’s like the Fibonacci Sequence. Each term in it is the sum of two earlier terms. Specifically, each term is the sum of the second-previous and the third-previous terms. And it starts with the numbers 3, 0, and 2. The sequence is named for François Perrin, who described it in 1899, and that’s as much as I know about him. The sequence describes some interesting stuff. Take n points and put them in a ‘cycle graph’, which looks to the untrained eye like a polygon with n corners and n sides. You can pick subsets of those points. An independent set is a subset in which no two of the points are adjacent, and a maximal independent set is one you can’t make any bigger without breaking that rule. And the number of these maximal independent sets in a cycle graph on n points is the n-th number in the Perrin sequence. I admit this seems like a nice but not compelling thing to know. But I’m not a cyclic graph kind of person so what do I know?
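If you don’t quite trust that claim, a brute-force check is easy enough to write. Here’s a sketch, nothing efficient, comparing the Perrin numbers against a direct count of the maximal independent sets of small cycle graphs:

    from itertools import combinations

    def perrin(n):
        p = [3, 0, 2]
        while len(p) <= n:
            p.append(p[-2] + p[-3])   # each term: second-previous plus third-previous
        return p[n]

    def maximal_independent_sets(n):
        edges = {frozenset((i, (i + 1) % n)) for i in range(n)}   # the cycle graph
        def independent(s):
            return all(frozenset(pair) not in edges for pair in combinations(s, 2))
        count = 0
        for size in range(1, n + 1):
            for s in combinations(range(n), size):
                if independent(s) and not any(independent(s + (v,))
                                              for v in range(n) if v not in s):
                    count += 1
        return count

    for n in range(3, 10):
        print(n, perrin(n), maximal_independent_sets(n))   # the last two columns agree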

Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 4th is the anthropomorphic numerals joke for this essay and I was starting to worry we wouldn’t get one.

Reading the Comics, April 6, 2015: Little Infinite Edition


As I warned, there were a lot of mathematically-themed comic strips the last week, and here I can at least get us through the start of April. This doesn’t include the strips that ran today, the 7th of April by my calendar, because I have to get some serious-looking men to look at my car and I just know they’re going to disapprove of what my CV joint covers look like, even though I’ve done nothing to them. But I won’t be reading most of today’s comic strips until after that’s done, and so I’ll comment on them later.

Mark Anderson’s Andertoons (April 3) makes its traditional appearance in my roundup, in this case with a business-type guy declaring infinity to be “the loophole of all loopholes!” I think that’s overstating things a fair bit, but strange and very counter-intuitive things do happen when you try to work out a problem in which infinities turn up. For example: in ordinary arithmetic, the order in which you add together a bunch of real numbers makes no difference. If you want to add together infinitely many real numbers, though, it is possible to have them add to different numbers depending on what order you add them in. Most unsettlingly, it’s possible to have infinitely many real numbers add up to literally any real number you like, depending on the order in which you add them. And then things get really weird.
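Here’s a sketch of that rearrangement in action, using the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + … , which in its printed order adds up to the natural logarithm of 2, about 0.6931. Greedily reordering the very same terms steers the running total toward whatever target you name:

    import math

    def rearranged_partial_sum(target, n_terms=200000):
        total = 0.0
        next_odd, next_even = 1, 2    # positive terms 1/1, 1/3, ...; negative terms -1/2, -1/4, ...
        for _ in range(n_terms):
            if total < target:
                total += 1.0 / next_odd
                next_odd += 2
            else:
                total -= 1.0 / next_even
                next_even += 2
        return total

    print(math.log(2))                    # about 0.6931, the usual-order sum
    print(rearranged_partial_sum(3.0))    # about 3.0, the same terms in a different order
    print(rearranged_partial_sum(-1.0))   # about -1.0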

Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips (April 3) is the other strip in this roundup to at least name-drop infinity. I confess I don’t see how “being infinite” would help in bringing about world peace, but I suppose being finite hasn’t managed the trick just yet so we might want to think outside the box.


20,000: My Math Blog’s Statistics


I reached my 20,000th view around here sometime on the final day of 2014, which is an appealingly tidy coincidence. I’m glad for it. It also gives me a starting point to review the blog’s statistics, as gathered by WordPress, which is trying to move us to a new and perfectly awful statistics page that shows less information more inconveniently.

The total number of page views grew from 625 in October to 674 in November all the way to 831 in December 2014, which just ties my record number of views from back in January 2013. The number of unique visitors grew from October’s 323 to November’s 366 up to 424 total, which comes in second place to January 2013’s 473. I don’t know what I was doing in January 2013 that I’m only gradually getting back up to these days. The number of views per visitor went from 1.93, to 1.84, back up to 1.96, which is probably just a meaningless fluctuation. January 2013 had 1.76.

My most popular articles — with 25 views or more each — were Reading The Comics posts, mostly, with the exceptions being two things almost designed to be clickbait, although I mean them sincerely:

  1. Reading the Comics, December 14, 2014: Pictures Gone Again? Edition, in which I work out one of the calculus-y expressions and find it isn’t all that interesting.
  2. Reading the Comics, December 5, 2014: Good Questions Edition, in which I figured out a Dark Side Of The Horse comic was using a page of symbols from orbital perturbation problems.
  3. Reading the Comics, December 27, 2014: Last of the Year Edition?, which it wasn’t, and which let me talk about how Sally Brown is going to discover rational numbers if Charlie Brown doesn’t over-instruct her.
  4. Reading The Comics, December 22, 2014: National Mathematics Day Edition, which celebrated Srinivasa Ramanujan’s birth by showing a formula that Leonhard Euler discovered, but Euler’s formula is much more comic-strip-readable than any of Ramanujan’s.
  5. What Do I Need To Pass This Class? (December 2014 Edition), which gathered and reposted for general accessibility the formula and the charts so people can figure out what the subject line says. Also what you need to get a B, or A, or any other desired grade. (Mostly, you needed to start caring about your grade earlier.)
  6. How Many Trapezoids I Can Draw, my life’s crowning achievement. (Six. If you find a seventh please let me know and I’ll do a follow-up post.)

The country sending me the most readers was, as ever, Bangladesh with 535 viewers. Well, two viewers, but it’s boring just listing the United States up front every time. The United Kingdom (37) and Canada (33) came up next, then Argentina (17), which surprises me every month by having a healthy number of readers there, Australia (12), Austria (11), and the Netherlands (10), proving that people in countries that don’t start with ‘A’ can still kind of like me too. The single-reader countries this month were the Czech Republic, Greece, Macedonia, Mexico, Romania, and South Africa. That’s far fewer than last month; of November’s 17 single-reader countries only Romania is a repeat.

Among search terms that brought people here were popeye comic computer king — I don’t know just how that’s going to end up either, folks, but I’m guessing “not that satisfyingly”, since Bud Sagendorf was fond of shaggy-dog non-endings to tales — and which reindeer was in arthur christmas (they were descendants of the “Original” canonical eight, though Grand-Santa forgets some of the names), daily press, “the dinette set” answer for december 11, 2014, and solution to the comic puzzle, “the dinette set”. in the daily press, december 12, 2014, and answer to the comic puzzle, “the dinette set”. in the daily press, december 12, 2014, which suggests maybe I should ditch the pop-math racket and just get into explaining The Dinette Set, which is admittedly kind of a complicated panel strip. There’s multiple riffs around the central joke in each panel, but if you don’t get the main joke then the riffs look like they’re part of the main joke, and they aren’t, so the whole thing becomes confusing. And the artist includes a “Find-It” bit in every panel, usually hiding something like a triangle or a star or something in the craggly details of the art and that can be hard to find. Mostly, though, the joke is: these people are genially and obliviously obnoxious people who you might love but you’d still find annoying. That’s it, nearly every panel. I hope that helps.

Reading the Comics, December 27, 2014: Last of the Year Edition?


I’m curious whether this is going to be the final bunch of mathematics-themed comics for the year 2014. Given the feast-or-famine nature of the strips it’s plausible we might not have anything good through to mid-January, but, who knows? Of the comics in this set I think the first Peanuts is the most interesting, since it’s funny and gets at something big and important, although the Ollie and Quentin is a better laugh.

Mark Leiknes’s Cow and Boy (December 23, rerun) talks about chaos theory, the notion that incredibly small differences in a state can produce enormous differences in a system’s behavior. Chaos theory became a pop-cultural thing in the 1980s, when Edward Lorenz’s work (of twenty years earlier) broke out into public consciousness. In chaos theory the chaos isn’t that the system is unpredictable — if you have perfect knowledge of the system, and the rules by which it interacts, you could make perfect predictions of its future. What matters is that, in non-chaotic systems, a small error will grow only slightly: if you predict the path of a thrown ball, and you have the ball’s mass slightly wrong, you’ll make a proportionately small error on what the path is like. If you predict the orbit of a satellite around a planet, and have the satellite’s starting speed a little wrong, your prediction is proportionately wrong. But in a chaotic system there are at least some starting points where tiny errors in your understanding of the system produce huge differences between your prediction and the actual outcome. Weather looks like it’s such a system, and that’s why it’s plausible that all of us change the weather just by existing, although of course we don’t know whether we’ve made it better or worse, or for whom.
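The strip doesn’t name a system, so as an illustration here’s the usual textbook example rather than anything from the comic: the logistic map, which sends x to 4x(1 - x). Start two copies a ten-billionth apart and watch them stay together for a while and then go their separate ways:

    x, y = 0.4, 0.4 + 1e-10
    for step in range(1, 61):
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        if step % 10 == 0:
            print(step, round(x, 6), round(y, 6), round(abs(x - y), 6))
    # By around step 40 the two trajectories have nothing to do with one another.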

Charles Schulz’s Peanuts (December 23, rerun from December 26, 1967) features Sally trying to divide 25 by 50 and Charlie Brown insisting she can’t do it. Sally’s practical response: “You can if you push it!” I am a bit curious why Sally, who’s normally around six years old, is doing division in school (and over Christmas break), but then the kids are always being assigned Thomas Hardy’s Tess of the d’Urbervilles for a book report and that is hilariously wrong for kids their age to read, so, let’s give that a pass.


Letting The Computer Do The Hard Work


Sometime in late August or early September 1994 I had one of those quietly astounding moments on a computer. It would have been while using Maple, a program capable of doing symbolic mathematics. It was something capable not just of working out what the product of two numbers is, but of holding representations of functions and working out what the product of those functions was. That’s striking enough, but more was to come: I could describe a function and have Maple do the work of symbolically integrating it. That was astounding then, and it really ought to be still. Let me explain why.

It’s fairly easy to think of symbolic representations of functions: if f(x) equals x^3 \cdot \sin(3 \cdot x) , well, you know if I give you some value for x, you can give me back an f(x), and if you’re a little better you can describe, roughly, a plot of x versus f(x). That is, that’s the plot of all the points on the plane for which the value of the x-coordinate and the value of the y-coordinate make the statement “y = f(x)” a true statement.

If you’ve gotten into calculus, though, you’d like to know other things: the derivative, for example, of f(x). That is (among other interpretations), if I give you some value for x, you can tell me how quickly f(x) is changing at that x. Working out the derivative of a function is a bit of work, but it’s not all that hard; there’s maybe a half-dozen or so rules you have to follow, plus some basic cases where you learn what the derivative of x to a power is, or what the derivative of the sine is, or so on. (You don’t really need to learn those basic cases, but it saves you a lot of work if you do.) It takes some time to learn them, and what order to apply them in, but once you do it’s almost automatic. If you’re smart you might do some problems better, but, you don’t have to be smart, just indefatigable.

Integrating a function (among other interpretations, that’s finding the amount of area underneath a curve) is different, though, even though it’s kind of an inverse of finding the derivative. If you integrate a function, and then take its derivative, you get back the original function, unless you did it wrong. (For various reasons if you take a derivative and then integrate you won’t necessarily get back the original function, but you’ll get something close to it.) However, that integration is still really, really hard. There are rules to follow, yes, but despite that it’s not necessarily obvious what to do, or why to do it, and even if you do know the various rules and use them perfectly you’re not necessarily guaranteed to get an answer. Being indefatigable might help, but you also need to be smart.

So, it’s easy to imagine writing a computer program that can find a derivative; to find an integral, though? That’s amazing, and still is amazing. And that brings me at last to this tweet from @mathematicsprof:

The document linked to by this is a master’s thesis, titled Symbolic Integration, prepared by one Björn Terelius for the Royal Institute of Technology in Stockholm. It’s a fair-sized document, but it does open with a history of computers that work out integrals that anyone ought to be able to follow. It goes on to describe the logic behind algorithms that do this sort of calculation, though, and should be quite helpful in understanding just how it is the computer does this amazing thing.

(For a bonus, it also contains a short proof of why you can’t integrate e^{x^2} in terms of elementary functions, one of those functions that looks nice and easy and that drives you crazy in Calculus II when you give it your best try.)
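If you’d like to poke at this sort of thing yourself without a Maple license, here’s a minimal sketch using SymPy, a freely available symbolic-mathematics package for Python. That’s my stand-in for the idea, not anything the thesis or the original Maple session relied on:

    import sympy as sp

    x = sp.symbols('x')
    f = x**3 * sp.sin(3*x)

    print(sp.diff(f, x))        # the derivative: mechanical rule-following
    print(sp.integrate(f, x))   # the antiderivative: the genuinely hard part

    # The bonus function above: the best a symbolic engine can offer is an answer
    # in terms of erfi, a function that is itself defined by an integral.
    print(sp.integrate(sp.exp(x**2), x))   # sqrt(pi)*erfi(x)/2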

Without Machines That Think About Logarithms


I’ve got a few more thoughts about calculating logarithms, based on how the Harvard IBM Automatic Sequence-Controlled Calculator did things, and wanted to share them. I also have some further thoughts coming up shortly courtesy of my first guest blogger, which is exciting to me.

The procedure that was used back then to compute common logarithms — logarithms base ten — was built on several legs: that we can work out some logarithms ahead of time, that we can work out the natural (base e) logarithm of a number using an infinite series, that we can convert the natural logarithm to a common logarithm by a single multiplication, and that the logarithm of the product of two (or more) numbers equals the sum of the logarithms of the separate numbers.

From that we got a pretty nice, fairly slick algorithm for producing logarithms. Ahead of time you have to work out the logarithms for 1, 2, 3, 4, 5, 6, 7, 8, and 9; and then, to make things more efficient, you’ll want the logarithms for 1.1, 1.2, 1.3, 1.4, et cetera up to 1.9; for that matter, you’ll also want 1.01, 1.02, 1.03, 1.04, and so on to 1.09. You can get more accurate numbers quickly by working out the logarithms for three digits past the decimal — 1.001, 1.002, 1.003, 1.004, and so on — and for that matter to four digits (1.0001) and more. You’re buying either speed of calculation or precision of result with memory.

The process as described before worked out common logarithms, although there isn’t much reason that it has to be those. It’s a bit convenient, because if you want the logarithm of 47.2286 you’ll want to shift that to the logarithm of 4.72286 plus the logarithm of 10, and the common logarithm of 10 is a nice, easy 1. The same logic works in natural logarithms: the natural logarithm of 47.2286 is the natural logarithm of 4.72286 plus the natural logarithm of 10, but the natural logarithm of 10 is a not-quite-catchy 2.3026 (approximately). You pretty much have to decide whether you want to deal with factors of 10 being an unpleasant number or deal with calculating natural logarithms and then multiplying them by the common logarithm of e, about 0.43429.

But the point is if you found yourself with no computational tools, but plenty of paper and time, you could reconstruct logarithms for any number you liked pretty well: decide whether you want natural or common logarithms. I’d probably try working out both, since there’s presumably the time, after all, and who knows what kind of problems I’ll want to work out afterwards. And I can get quite nice accuracy after working out maybe 36 logarithms using the formula:

\log_e\left(1 + h\right) = h - \frac12 h^2 + \frac13 h^3 - \frac14 h^4 + \frac15 h^5 - \cdots

This will work very well for numbers like 1.1, 1.2, 1.01, 1.02, and so on: for this formula to work, h has to be between -1 and 1, or put another way, we have to be looking for the logarithms of numbers between 0 and 2. And it takes fewer terms to get the result as precise as you want the closer h is to zero, that is, the closer the number whose logarithm we want is to 1.
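Of course, if you do have a computer handy the series is only a few lines of code. Here’s a sketch, along with a count of how many terms it takes to reach a given precision for a few values of h:

    import math

    def ln1p_series(h, tol=1e-12):
        # log_e(1 + h) = h - h^2/2 + h^3/3 - ..., good for |h| < 1.
        total, power, n = 0.0, h, 1
        while abs(power) / n > tol:
            total += (-1) ** (n + 1) * power / n
            power *= h
            n += 1
        return total, n - 1                  # the value, and how many terms it took

    for h in (0.5, 0.1, 0.01):
        value, terms = ln1p_series(h)
        print(1 + h, value, math.log(1 + h), terms)
    # The closer the number is to 1, the fewer terms you need.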

So most of my reference table is easy enough to make. But there’s a column left out: what is the logarithm of 2? Or 3, or 4, or so on? The infinite-series formula there doesn’t work that far out, and if you give it a try, let’s say with the logarithm of 5, you get a good bit of nonsense, numbers swinging positive and negative and ever-larger.

Of course we’re not limited to formulas; we can think, too. 3, for example, is equal to 1.5 times 2, so the logarithm of 3 is the logarithm of 1.5 plus the logarithm of 2, and we have the logarithm of 1.5, and the logarithm of 2 is … OK, that’s a bit of a problem. But if we had the logarithm of 2, we’d be able to work out the logarithm of 4 — it’s just twice that — and we could get to other numbers pretty easily: 5 is, among other things, 2 times 2 times 1.25 so its logarithm is twice the logarithm of 2 plus the logarithm of 1.25. We’d have to work out the logarithm of 1.25, but we can do that by formula. 6 is 2 times 2 times 1.5, and we already had 1.5 worked out. 7 is 2 times 2 times 1.75, and we have a formula for the logarithm of 1.75. 8 is 2 times 2 times 2, so, triple whatever the logarithm of 2 is. 9 is 3 times 3, so, double the logarithm of 3.

We’re not required to do things this way. I just picked some nice, easy ways to factor the whole numbers up to 9, and that didn’t seem to demand doing too much more work. I’d need the logarithms of 1.25 and 1.75, as well as 2, but I can use the formula or, for that matter, work it out using the rest of my table: 1.25 is 1.2 times 1.04 times 1.001 times 1.000602, approximately. But there are infinitely many ways to get 3 by multiplying together numbers between 1 and 2, and we can use any that are convenient.

We do still need the logarithm of 2, but, then, 2 is among other things equal to 1.6 times 1.25, and we’d been planning to work out the logarithm of 1.6 all the time, and 1.25 is useful in getting us to 5 also, so, why not do that?

So in summary we could get logarithms for any numbers we wanted by working out the logarithms for 1.1, 1.2, 1.3, and so on, and 1.01, 1.02, 1.03, et cetera, and 1.001, 1.002, 1.003 and so on, and then 1.25 and 1.75, which lets us work out the logarithms of 2, 3, 4, and so on up to 9.
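And if you’d rather let a machine do the bookkeeping, here’s that bootstrapping written out as a sketch, for natural logarithms (multiply everything by about 0.43429 if you want common ones). The factorizations are just the ones I picked above; they’re not the only possible choices:

    import math

    def ln_series(x, terms=60):
        # Natural logarithm by the series above; x has to be between 0 and 2.
        h = x - 1
        return sum((-1) ** (n + 1) * h ** n / n for n in range(1, terms + 1))

    log2 = ln_series(1.6) + ln_series(1.25)       # 2 = 1.6 * 1.25
    table = {
        2: log2,
        3: ln_series(1.5) + log2,                 # 3 = 1.5 * 2
        4: 2 * log2,                              # 4 = 2 * 2
        5: 2 * log2 + ln_series(1.25),            # 5 = 2 * 2 * 1.25
        6: 2 * log2 + ln_series(1.5),             # 6 = 2 * 2 * 1.5
        7: 2 * log2 + ln_series(1.75),            # 7 = 2 * 2 * 1.75
        8: 3 * log2,                              # 8 = 2 * 2 * 2
        9: 2 * (ln_series(1.5) + log2),           # 9 = 3 * 3
    }
    for n, value in sorted(table.items()):
        print(n, value, math.log(n))              # the two columns agree closely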

I haven’t yet worked out, but I am curious about, what the fewest number of “extra” numbers I’d have to calculate is. That is, granted that I have to figure out the logarithms of 1.1, 1.01, 1.001, et cetera anyway. The way I outlined things I have to also work out the logarithms of 1.25 and 1.75 to get all the numbers I need. Is it possible to figure out a cleverer bit of factorization that requires only one extra number be worked out? For that matter, is it possible to need no extra numbers? My instinctive response is to say no, but that’s hardly a proof. I’d be interested to know better.

Machines That Give You Logarithms


As I’ve laid out the tools that the Harvard IBM Automatic Sequence-Controlled Calculator would use to work out a common logarithm, now I can show how this computer of the 1940s and 1950s would do it. The goal, remember, is to compute logarithms to a desired accuracy, using computers that haven’t got abundant memory, and as quickly as possible. As quickly as possible means, roughly, avoiding multiplication (which takes time) and doing as few divisions as can possibly be done (divisions take forever).

As a reminder, the tools we have are:

  1. We can work out at least some logarithms ahead of time and look them up as needed.
  2. The natural logarithm of a number close to 1 is \log_e\left(1 + h\right) = h - \frac12h^2 + \frac13h^3 - \frac14h^4 + \frac15h^5 - \cdots .
  3. If we know a number’s natural logarithm (base e), then we can get its common logarithm (base 10): multiply the natural logarithm by the common logarithm of e, which is about 0.43429.
  4. Whether for the natural or the common logarithm (or any other logarithm you might like), the logarithm of a product is the sum of the logarithms: \log\left(a\cdot b\cdot c \cdot d \cdots \right) = \log(a) + \log(b) + \log(c) + \log(d) + \cdots

Now we’ll put this to work. The first step is which logarithms to work out ahead of time. Since we’re dealing with common logarithms, we only need to be able to work out the logarithms for numbers between 1 and 10: the common logarithm of, say, 47.2286 is one plus the logarithm of 4.72286, and the common logarithm of 0.472286 is minus two plus the logarithm of 4.72286. So we’ll start by working out the logarithms of 1, 2, 3, 4, 5, 6, 7, 8, and 9, and storing them in what, in 1944, was still a pretty tiny block of memory. The original computer using this could store 72 numbers at a time, remember, though to 23 decimal digits.

So let’s say we want to know the logarithm of 47.2286. We have to divide this by 10 in order to get the number 4.72286, which is between 1 and 10, so we’ll need to add one to whatever we get for the logarithm of 4.72286 is. (And, yes, we want to avoid doing divisions, but dividing by 10 is a special case. The Automatic Sequence-Controlled Calculator stored numbers, if I am not grossly misunderstanding things, in base ten, and so dividing or multiplying by ten was as fast for it as moving the decimal point is for us. Modern computers, using binary arithmetic, find it as fast to divide or multiply by powers of two, even though division in general is a relatively sluggish thing.)

We haven’t worked out what the logarithm of 4.72286 is. And we don’t have a formula that’s good for that. But: 4.72286 is equal to 4 times 1.1807, and therefore the logarithm of 4.72286 is going to be the logarithm of 4 plus the logarithm of 1.1807. We worked out the logarithm of 4 ahead of time (it’s about 0.60206, if you’re curious).

We can use the infinite series formula to get the natural logarithm of 1.1807 to as many digits as we like. The natural logarithm of 1.1807 will be about 0.1807 - \frac12 0.1807^2 + \frac13 0.1807^3 - \frac14 0.1807^4 + \frac15 0.1807^5 - \cdots or 0.16613. Multiply this by the logarithm of e (about 0.43429) and we have a common logarithm of about 0.07214. (We have an error estimate, too: we’ve got the natural logarithm of 1.1807 within a margin of error of \frac16 0.1807^6 , or about 0.000 0058, which, multiplied by the logarithm of e, corresponds to a margin of error for the common logarithm of about 0.000 0025.)

Therefore: the logarithm of 47.2286 is about 1 plus 0.60206 plus 0.07214, which is 1.6742. And it is, too; we’ve done very well at getting the number just right considering how little work we really did.

Although … that infinite series formula. That requires a fair number of multiplications, at least eight as I figure it, however you look at it, and those are sluggish. It also properly speaking requires divisions, although you could easily write your code so that instead of dividing by 4 (say) you multiply by 0.25 instead. For this particular example number of 47.2286 we didn’t need very many terms in the series to get four decimal digits of accuracy, but maybe we got lucky and some other number would have required dozens of multiplications. Can we make this process, on average, faster?

And here’s one way to do it. Besides working out the common logarithms for the whole numbers 1 through 9, also work out the common logarithms for 1.1, 1.2, 1.3, 1.4, et cetera up to 1.9. And then …

We started with 47.2286. Divide by 10 (a free bit of work) and we have 4.72286. Divide out the first digit: 4.72286 is 4 times 1.180715. And 1.180715 is equal to 1.1 — the whole number and the first digit past the decimal — times 1.07337. That is, 47.2286 is 10 times 4 times 1.1 times 1.07337. And so the logarithm of 47.2286 is the logarithm of 10 plus the logarithm of 4 plus the logarithm of 1.1 plus the logarithm of 1.07337. We are almost certainly going to need fewer terms in the infinite series to get the logarithm of 1.07337 than we need for 1.180715 and so, at the cost of one more division, we probably save a good number of multiplications.

The common logarithm of 1.1 is about 0.041393. So the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) is 1.6435, which falls a little short of the actual logarithm we’d wanted, about 1.6742, but two or three terms in the infinite series should be enough to make that up.

Or we could work out a few more common logarithms ahead of time: those for 1.01, 1.02, 1.03, and so on up to 1.09. Our original 47.2286 divided by 10 is 4.72286. Divide that by the first number, 4, and you get 1.180715. Divide 1.180715 by 1.1, the first two digits, and you get 1.07337. Divide 1.07337 by 1.07, the first three digits, and you get 1.003156. So 47.2286 is 10 times 4 times 1.1 times 1.07 times 1.003156. So the common logarithm of 47.2286 is the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) plus the logarithm of 1.07 (about 0.02938) plus the logarithm of 1.003156 (to be determined). Even ignoring the to-be-determined part that adds up to 1.6728, which is a little short of the 1.6742 we want but is doing pretty good considering we’ve reduced the whole problem to three divisions, looking stuff up, and four additions.

If we go a tiny bit farther, and also have worked out ahead of time the logarithms for 1.001, 1.002, 1.003, and so on out to 1.009, and do the same process all over again, then we get some better accuracy and quite cheaply yet: 47.2286 divided by 10 is 4.72286. 4.72286 divided by 4 is 1.180715. 1.180715 divided by 1.1 is 1.07337. 1.07337 divided by 1.07 is 1.003156. 1.003156 divided by 1.003 is 1.0001558.

So the logarithm of 47.2286 is the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) plus the logarithm of 1.07 (0.029383) plus the logarithm of 1.003 (0.001301) plus the logarithm of 1.001558 (to be determined). Leaving aside the to-be-determined part, that adds up to 1.6741.

And the to-be-determined part is great: if we used just a single term in this series, the margin for error would be, at most, 0.000 000 0052, which is probably small enough for practical purposes. The first term in the to-be-determined part is awfully easy to calculate, too: it’s just 1.0001558 – 1, that is, 0.0001558, which multiplied by the 0.43429 conversion factor comes to about 0.000 068. Add that and we have an approximate logarithm of 1.6742, which is dead on.

And I do mean dead on: work out more decimal places of the logarithm based on this summation and you get 1.674 205 077 226 78. That’s no more than five billionths away from the correct logarithm for the original 47.2286. And it required doing four divisions, one multiplication, and five additions. It’s difficult to picture getting such good precision with less work.

Of course, that’s done in part by having stockpiled a lot of hard work ahead of time: we need to know the logarithms of 1, 1.1, 1.01, 1.001, and then 2, 1.2, 1.02, 1.002, and so on. That’s 36 numbers altogether and there are many ways to work out logarithms. But people have already done that work, and we can use that work to make the problems we want to do considerably easier.

But there’s the process. Work out ahead of time logarithms for 1, 1.1, 1.01, 1.001, and so on, to whatever the limits of your patience. Then take the number whose logarithm you want and divide (or multiply) by ten until you get your working number into the range of 1 through 10. Divide out the first digit, which will be a whole number from 1 through 9. Divide out the first two digits, which will be something from 1.1 to 1.9. Divide out the first three digits, something from 1.01 to 1.09. Divide out the first four digits, something from 1.001 to 1.009. And so on. Then add up the logarithms of the power of ten you divided or multiplied by with the logarithm of the first divisor and the second divisor and third divisor and fourth divisor, until you run out of divisors. And then — if you haven’t already got the answer as accurately as you need — work out as many terms in the infinite series as you need; probably, it won’t be very many. Add that to your total. And you are, amazingly, done.
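For the curious, here’s the whole procedure as a sketch in Python. I fill the lookup table by cheating, calling the library’s own log10, since the point here is the bookkeeping rather than the table; the actual machine would have had those 36 values worked out ahead of time by the series:

    import math

    # Common logarithms of 1..9, 1.1..1.9, 1.01..1.09, and 1.001..1.009.
    TABLE = {}
    for digit in range(1, 10):
        TABLE[float(digit)] = math.log10(digit)
        for scale in (10, 100, 1000):
            TABLE[1 + digit / scale] = math.log10(1 + digit / scale)

    def common_log(x):
        # Shift into the range 1 to 10 by powers of ten (the "free" divisions).
        exponent = 0
        while x >= 10:
            x, exponent = x / 10, exponent + 1
        while x < 1:
            x, exponent = x * 10, exponent - 1
        total = float(exponent)

        # Divide out the leading digit, a whole number from 1 through 9.
        digit = int(x)
        x /= digit
        total += TABLE[float(digit)]

        # Divide out the 1.d, then 1.0d, then 1.00d pieces, looking each one up.
        for scale in (10, 100, 1000):
            digit = int((x - 1) * scale)
            if digit:
                divisor = 1 + digit / scale
                x /= divisor
                total += TABLE[divisor]

        # What's left is so close to 1 that one series term, converted to base ten, will do.
        return total + (x - 1) * math.log10(math.e)

    print(common_log(47.2286), math.log10(47.2286))   # both about 1.6742051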

Machines That Do Something About Logarithms


I’m going to assume everyone reading this accepts that logarithms are worth computing, and try to describe how Harvard’s IBM Automatic Sequence-Controlled Calculator would work them out.

The first part of this is kind of an observation: the quickest way to give the logarithm of a number is to already know it. Looking it up in a table is way faster than evaluating it, and that’s as true for the computer as for you. Obviously we can’t work out logarithms for every number, what with there being so many of them, but we could work out the logarithms for a reasonable range and to a certain precision and trust that the logarithm of (say) 4.42286 is going to be tolerably close to the logarithm of 4.423 that we worked out ahead of time. Working out a range of, say, 1 to 10 for logarithms base ten is plenty, because that’s all the range we need: the logarithm base ten of 44.2286 is the logarithm base ten of 4.42286 plus one. The logarithm base ten of 0.442286 is the logarithm base ten of 4.42286 minus one. You can guess from that what the logarithm of 4,422.86 is, compared to that of 4.42286.

This is trading computer memory for computational speed, which is often worth doing. But the old Automatic Sequence-Controlled Calculator can’t do that, at least not as easily as we’d like: it had the ability to store 72 numbers, albeit to 23 decimal digits. We can’t just use “worked it out ahead of time”, although we’re not going to abandon that idea either.

The next piece we have is something useful if we want to work out the natural logarithm — the logarithm base e — of a number that’s close to 1. We have a formula that will let us work out this natural logarithm to whatever accuracy we want:

\log_{e}\left(1 + h\right) = h - \frac12 h^2 + \frac13 h^3 - \frac14 h^4 + \frac15 h^5 - \cdots \mbox{ if } |h| < 1

In principle, we have to add up infinitely many terms to get the answer right. In practice, we only add up terms until the error — the difference between our sum and the correct answer — is smaller than some acceptable margin. This seems to beg the question, because how can we know how big that error is without knowing what the correct answer is? In fact we don’t know just what the error is, but we do know that the error can’t be any larger than the absolute value of the first term we neglect.

Let me give an example. Suppose we want the natural logarithm of 1.5, which the alert have noticed is equal to 1 + 0.5. Then h is 0.5. If we add together the first five terms of the natural logarithm series, then we have 0.5 - \frac12 0.5^2 + \frac13 0.5^3 - \frac14 0.5^4 + \frac15 0.5^5 which is approximately 0.40729. If we were to work out the next term in the series, that would be -\frac16 0.5^6 , which has an absolute value of about 0.0026. So the natural logarithm of 1.5 is 0.40729, plus or minus 0.0026. If we only need the natural logarithm to within 0.0026, that’s good: we’re done.

In fact, the natural logarithm of 1.5 is approximately 0.40547, so our error is closer to 0.00183, but that’s all right. Few people complain that our error is smaller than what we estimated it to be.

If we know what margin of error we’ll tolerate, by the way, then we know how many terms we have to calculate. Suppose we want the natural logarithm of 1.5 accurate to 0.001. Then we have to find the first number n so that \frac1n 0.5^n < 0.001 ; if I'm not mistaken, that's eight. Just how many terms we have to calculate will depend on what h is; the bigger it is — the farther the number is from 1 — the more terms we'll need.
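That count is quick to check by machine, for whatever it’s worth; a sketch, using the same tolerance of 0.001:

    import math

    # Find the first n for which the n-th term, (1/n) * 0.5^n, drops below 0.001.
    n = 1
    while (0.5 ** n) / n >= 0.001:
        n += 1
    print(n)   # 8: the eighth term is the first one small enough to neglect

    partial = sum((-1) ** (k + 1) * 0.5 ** k / k for k in range(1, n))
    print(partial, math.log(1.5))   # 0.40580..., within 0.001 of the true 0.40546...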

The trouble with this is that it’s only good for working out the natural logarithms of numbers between 0 and 2. (And it’s better the closer the number is to 1.) If you want the natural logarithm of 44.2286, you have to divide out the highest power of e that’s less than it — well, you can fake that by dividing by e repeatedly — and what you get is that it’s e times e times e times 2.202 and we’re stuck there. Not hopelessly, mind you: we could find the logarithm of 1/2.202, which will be minus the logarithm of 2.202, at least, and we can work back to the original number from there. Still, this is a bit of a mess. We can do better.

The third piece we can use is one of the fundamental properties of logarithms. This is true for any base, as long as we use the same base for each logarithm in the equation here, and I’ve mentioned it in passing before:

\log\left(a\cdot b\cdot c\cdot d \cdots\right) = \log\left(a\right) + \log\left(b\right) + \log\left(c\right) + \log\left(d\right) + \cdots

That is, if we could factor a number whose logarithm we want into components which we can either look up or we can calculate very quickly, then we know its logarithm is the sum of the logarithms of those components. And this, finally, is how we can work out logarithms quickly and without too much hard work.

Machines That Think About Logarithms


I confess that I picked up Edmund Callis Berkeley’s Giant Brains: Or Machines That Think, originally published 1949, from the library shelf as a source of cheap ironic giggles. After all, what is funnier than an attempt to explain to a popular audience that, wild as it may be to contemplate, electrically-driven machines could “remember” information and follow “programs” of instructions based on different conditions satisfied by that information? There’s a certain amount of that, though not as much as I imagined, and a good amount of descriptions of how the hardware of different partly or fully electrical computing machines of the 1940s worked.

But a good part, and the most interesting part, of the book is about algorithms, the ways to solve complicated problems without demanding too much computing power. This is fun to read because it showcases the ingenuity and creativity required to do useful work. The need for ingenuity will never leave us — we will always want to compute things that are a little beyond our ability — but to see how it’s done for a simple problem is instructive, if for nothing else to learn the kinds of tricks you can do to get the most out of your computing resources.

The example that most struck me and which I want to share is from the chapter on the IBM Automatic Sequence-Controlled Calculator, built at Harvard at a cost of “somewhere near 3 or 4 hundred thousand dollars, if we leave out some of the cost of research and development, which would have been done whether or not this particular machine had ever been built”. It started working in April 1944, and wasn’t officially retired until 1959. It could store 72 numbers, each with 23 decimal digits. Like most computers (then and now) it could do addition and subtraction very quickly, in the then-blazing speed of about a third of a second; it could do multiplication tolerably quickly, in about six seconds; and division, rather slowly, in about fifteen seconds.

The process I want to describe is the taking of logarithms, and why logarithms should be interesting to compute takes a little bit of justification, although it’s implicitly there just in how fast calculations get done. Logarithms let one replace the multiplication of numbers with their addition, for a considerable savings in time; better, they let you replace the division of numbers with subtraction. They further let you turn exponentiation and roots into multiplication and division, which is almost always faster to do. Many human senses seem to work on a logarithmic scale, as well: we can tell that one weight is twice as heavy as the other much more reliably than we can tell that one weight is four pounds heavier than the other, or that one light is twice as bright as the other rather than is ten lumens brighter.

What the logarithm of a number is depends on some other, fixed, quantity, known as the base. In principle any positive number will do as base; in practice, these days people mostly only care about base e (which is a little over 2.718), the “natural” logarithm, because it has some nice analytic properties. Back in the day, which includes when this book was written, we also cared about base 10, the “common” logarithm, because we mostly work in base ten. I have heard of people who use base 2, but haven’t seen them myself and must regard them as an urban legend. The other bases are mostly used by people who are writing homework problems for the part of the class dealing with logarithms. To some extent it doesn’t matter what base you use. If you work out the logarithm in one base, you can convert that to the logarithm in another base by a multiplication.

The logarithm of some number in your base is the exponent you have to raise the base to to get your desired number. For example, the logarithm of 100, in base 10, is going to be 2 because 10^2 is 100, and the logarithm of e^{1/3} (a touch greater than 1.3956), in base e, is going to be 1/3. To dig deeper in my reserve of in-jokes, the logarithm of 2038, in base 10, is approximately 3.3092, because 10^{3.3092} is just about 2038. The logarithm of e, in base 10, is about 0.4343, and the logarithm of 10, in base e, is about 2.303. Your calculator will verify all that.

All that talk about “approximately” should have given you some hint of the trouble with logarithms. They’re only really easy to compute if you’re looking for whole powers of whatever your base is, and even then only if your base is 10 or 2 or something else simple like that. If you’re clever and determined you can work out, say, that the logarithm of 2, base 10, has to be close to 0.3. It’s fun to do that, but it’ll involve such reasoning as “two to the tenth power is 1,024, which is very close to ten to the third power, which is 1,000, so therefore the logarithm of two to the tenth power must be about the same as the logarithm of ten to the third power”. That’s clever and fun, but it’s hardly systematic, and it doesn’t get you many digits of accuracy.
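Written out, that clever-and-fun reasoning amounts to:

2^{10} = 1024 \approx 1000 = 10^3, \quad\mbox{so}\quad 10 \cdot \log_{10} 2 \approx 3 \quad\mbox{and}\quad \log_{10} 2 \approx 0.3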

So when I pick up this thread I hope to explain one way to produce as many decimal digits of a logarithm as you could want, without asking for too much from your poor Automatic Sequence-Controlled Calculator.

Reading the Comics, May 4, 2014: Summing the Series Edition


Before I get to today’s round of mathematics comics, a legend-or-joke, traditionally starring John von Neumann as the mathematician.

The recreational word problem goes like this: two bicyclists, twenty miles apart, are pedaling toward each other, each at a steady ten miles an hour. A fly takes off from the first bicyclist, heading straight for the second at fifteen miles per hour (ground speed); when it touches the second bicyclist it instantly turns around and returns to the first at again fifteen miles per hour, at which point it turns around again and heads for the second, and back to the first, and so on. By the time the bicyclists reach one another, the fly — having made, incidentally, infinitely many trips between them — has travelled some distance. What is it?

And this is not a hard problem to set up, inherently: each leg of the fly’s trip is going to be a certain ratio of the previous leg, which means that formulas for a geometric infinite series can be used. You just need to work out what the lengths of those legs are to start with, and what that ratio is, and then work out the formula in your head. This is a bit tedious and people given the problem may need some time and a couple of sheets of paper to make it work.

Von Neumann, who was an expert in pretty much every field of mathematics and a good number of those in physics, allegedly heard the problem and immediately answered: 15 miles! And the problem-giver said, oh, he saw the trick. (Since the bicyclists will spend one hour pedaling before meeting, and the fly is travelling fifteen miles per hour all that time, it travels a total of fifteen miles. Most people don’t think of that, and try to sum the infinite series instead.) And von Neumann said, “What trick? All I did was sum the infinite series.”
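If you’d like to check the legend’s arithmetic the patient way, here’s a sketch that sums the fly’s legs one at a time. Each leg ends when the fly meets the oncoming bicyclist, the two closing at 15 + 10 = 25 miles per hour:

    gap = 20.0              # miles between the bicyclists
    fly_distance = 0.0
    while gap > 1e-12:
        leg_time = gap / (15 + 10)    # hours until the fly meets the oncoming rider
        fly_distance += 15 * leg_time
        gap -= 20 * leg_time          # both riders keep pedaling during the leg

    print(round(fly_distance, 6))     # 15.0, the same answer as the shortcut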

Did this charming story of a mathematician being all mathematicky happen? Wikipedia’s description of the event credits Paul Halmos’s recounting of Nicholas Metropolis’s recounting of the story, which as a source seems only marginally better than “I heard it on the Internet somewhere”. (Other versions of the story give different distances for the bicyclists and different speeds for the fly.) But it’s a wonderful legend and can be linked to a Herb and Jamaal comic strip from this past week.

Paul Trap’s Thatababy (April 29) has the baby “blame entropy”, which fits as a mathematical concept, it seems to me. Entropy as a concept was developed in the mid-19th century as a thermodynamical concept, and it’s one of those rare mathematical constructs which becomes a superstar of pop culture. It’s become something of a fancy word for disorder or chaos or just plain messes, and the notion that the entropy of a system is ever-increasing is probably the only bit of statistical mechanics an average person can be expected to know. (And the situation is more complicated than that; for example, it’s just more probable that the entropy is increasing in time.)

Entropy is a great concept, though, as besides capturing very well an idea that’s almost universally present, it also turns out to be meaningful in surprising new places. The most powerful of those is in information theory, which is just what the label suggests; the field grew out of the problem of making messages understandable even though the telegraph or telephone lines or radio beams on which they were sent would garble the messages some, even if people sent or received the messages perfectly, which they would not. The most captivating (to my mind) new place is in black holes: the event horizon of a black hole has a surface area which is (proportional to) its entropy, and consideration of such things as the conservation of energy and the link between entropy and surface area allow one to understand something of the way black holes ought to interact with matter and with one another, without the mathematics involved being nearly as complicated as I might have imagined a priori.

Meanwhile, Lincoln Peirce’s Big Nate (April 30) mentions how Nate’s Earned Run Average has changed over the course of two innings. Baseball is maybe the archetypical record-keeping statistics-driven sport; Alan Schwarz’s The Numbers Game: Baseball’s Lifelong Fascination With Statistics notes that the keeping of some statistical records was required at least as far back as 1837 (in the Constitution of the Olympic Ball Club of Philadelphia). Earned runs — along with nearly every other baseball statistic the non-stathead has heard of other than batting averages — were developed as a concept by the baseball evangelist and reporter Henry Chadwick, who presented them from 1867 as an attempt to measure the effectiveness of batting and fielding. (The idea of the pitcher as an active player, as opposed to a convenient way to get the ball into play, was still developing.) But — and isn’t this typical? — he would come to oppose the earned run average as a measure of pitching performance, because things that were really outside the pitcher’s control, such as stolen bases, contributed to it.

It seems to me there must be some connection between the record-keeping of baseball and the development of statistics as a concept in the 19th century. Granted the 19th century was a century of statistics, starting with nation-states measuring their populations, their demographics, their economies, and projecting what this would imply for future needs; and then with science, as statistical mechanics found it possible to quite well understand the behavior of millions of particles despite it being impossible to perfectly understand four; and in business, as manufacturing and money were made less individual and more standard. There was plenty to drive the field without an amusing game, but, I can’t help thinking of sports as a gateway into the field.

Creator.com's _Donald Duck_ for 2 May 2014: Ludwig von Drake orders his computer to stop with the thinking.

The Disney Company’s Donald Duck (May 2, rerun) suggests that Ludwig von Drake is continuing to have problems with his computing machine. Indeed, he’s apparently still having the same problem. I’d like to know when these strips originally ran, but the host site of creators.com doesn’t give any hint.

Stephen Bentley’s Herb and Jamaal (May 3) has the kid whose name I don’t really know fret how he spent “so much time” on an equation which would’ve been easy if he’d used “common sense” instead. But that’s not a rare phenomenon mathematically: it’s quite possible to set up an equation, or a process, or a something which does indeed inevitably get you to a correct answer but which demands a lot of time and effort to finish, when a stroke of insight or recasting of the problem would remove that effort, as in the von Neumann legend. The commenter Dartpaw86, on the Comics Curmudgeon site, brought up another excellent example, from Katie Tiedrich’s Awkward Zombie web comic. (I didn’t use the insight shown in the comic to solve it, but I’m happy to say, I did get it right without going to pages of calculations, whether or not you believe me.)

However, having insights is hard. You can learn many of the tricks people use for different problems, but, say, no amount of studying the Awkward Zombie puzzle about a square inscribed in a circle inscribed in a square inscribed in a circle inscribed in a square will help you in working out the area left behind when a cylindrical tube is drilled out of a sphere. Setting up an approach that will, given enough work, get you a correct solution is worth knowing how to do, especially if you can give the boring part of actually doing the calculations to a computer, which is indefatigable and, certain duck-based operating systems aside, pretty reliable. That doesn’t mean you don’t feel dumb for missing the recasting.

Rick Detorie's _One Big Happy_ for 3 May 2014: Joe names the whole numbers.

Rick Detorie’s One Big Happy (May 3) puns a little on the meaning of whole numbers. It might sound a little silly to have a name for only a handful of numbers, but, there’s no reason not to if the group is interesting enough. It’s possible (although I’d be surprised if it were the case) that there are only 47 Mersenne primes (a prime, such as 7 or 31, that is one less than a whole power of 2), and we have the concept of the “odd perfect number”, when there might well not be any such thing.

George Boole’s Birthday


The Maths History feed on Twitter reminded me that the second of November is the birthday of George Boole, one of the handful of people who’ve managed to get a critically important computer data type named for them (others, of course, include Arthur von Integer and the Lady Annabelle String). Reminded is the wrong word; actually, I didn’t have any idea when his birthday was, other than that it was in the first half of the 19th century. To that extent I was right (it was 1815).

He’s famous, to the extent anyone in mathematics who isn’t Newton or Leibniz is, for his work in logic. “Boolean algebra” is even almost the default term for the kind of reasoning done on variables that may have either of exactly two possible values, which match neatly to the idea of propositions being either true or false. He’d also publicized how neatly the study of logic and the manipulation of algebraic symbols could parallel one another, which is a familiar enough notion that it takes some imagination to realize that it isn’t obviously so.

Boole also did work on linear differential equations, which are important because differential equations are nearly inevitably the way one describes a system in which the current state of the system affects how it is going to change, and linear differential equations are nearly the only kinds of differential equations that can actually be exactly solved. (There are some nonlinear differential equations that can be solved, but more commonly, we’ll find a linear differential equation that’s close enough to the original. Many nonlinear differential equations can also be approximately solved numerically, but that’s also quite difficult.)

His MacTutor History of Mathematics biography notes that Boole (when young) spent five years trying to teach himself differential and integral calculus — money just didn’t allow for him to attend school or hire a tutor — although given that he was, before the age of fourteen, able to teach himself ancient Greek I can certainly understand his supposition that he just needed the right books and some hard work. Apparently, at age fourteen he translated a poem by Meleager — I assume the poet from the first century BCE, though MacTutor doesn’t specify; there was also a Meleager who was briefly king of Macedon in 279 BCE, and another some decades before that who was a general serving Alexander the Great — so well that when it was published a local schoolmaster argued that a 14-year-old could not possibly have done that translation. He’d also, something I didn’t know until today, married Mary Everest, niece of the fellow whose name is on that tall mountain.

Professor Ludwig von Drake Explains Numerical Mathematics


The reruns of Donald Duck comics which appear at creators.com recently offered the above daily strip, featuring Ludwig von Drake and one of those computers of the kind movies and TV shows and comic strips had before anybody had computers of their own and, of course, the classic IBM motto that maybe they still have but I never hear anyone talking about it except as something from the distant and musty past. (Unfortunately, creators.com doesn’t note the date a strip originally ran, so all I can say is the strip first ran sometime after September of 1961 and before whenever Disney stopped having original daily strips drawn; I haven’t been able to find any hint of when that was other than not earlier than 1969 when cartoonist Al Taliaferro retired from it.)


Kenneth Appel and Colored Maps


Word’s come through mathematics circles about the death of Kenneth Ira Appel, who along with Wolfgang Haken did one of those things every mathematically-inclined person really wishes to do: solve one of the long-running unsolved problems of mathematics. Even better, he solved one of those accessible problems. There are a lot of great unsolved problems that take a couple paragraphs just to set up for the lay audience (who then will wonder what use the problem is, as if that were the measure of interesting); Appel and Haken’s was the Four Color Theorem, which people can understand once they’ve used crayons and coloring books (even if they wonder whether it’s useful for anyone besides Hammond).

It was, by everything I’ve read, a controversial proof at the time, although by the time I was an undergraduate the controversy had faded the way controversial stuff doesn’t seem that exciting decades on. The proximate controversy was that much of the proof was worked out by computer, which is the sort of thing that naturally alarms people whose jobs are to hand-carve proofs using coffee and scraps of lumber. The worry about that seems to have faded as more people get to use computers and find they’re not putting the proof-carvers out of work to any great extent, and as proof-checking software gets up to the task of doing what we would hope.

Still, the proof, right as it probably is, probably offers philosophers of mathematics a great example for figuring out just what is meant by a “proof”. The word implies that a proof is an argument which convinces a person of some proposition. But the Four Color Theorem proof is … well, according to Appel and Haken, 50 pages of text and diagrams, with 85 pages containing an additional 2,500 diagrams, and 400 microfiche pages with additional diagrams of verifications of claims made in the main text. I’ll never read all that, much less understand all that; it’s probably fair to say very few people ever will.

So I couldn’t, honestly, say it was proved to me. But that’s hardly the standard for saying whether something is proved. If it were, then every calculus class would produce the discovery that just about none of calculus has been proved, and that this whole “infinite series” thing sounds like it’s all doubletalk made up on the spot. And yet, we could imagine — at least, I could imagine — a day when none of the people who wrote the proof, or verified it for publication, or have verified it since then, are still alive. At that point, would the theorem still be proved?

(Well, yes: the original proof has been improved a bit, although it’s still a monstrously large one. And Neil Robertson, Daniel P Sanders, Paul Seymour, and Robin Thomas published a proof, similar in spirit but rather smaller, and have been distributing the tools needed to check their work; I can’t imagine there being nobody alive who has done, or at least has the ability to do, the checking work.)

I’m treading into the philosophy of mathematics, and I realize my naivete about questions like what constitutes a proof is painful to anyone who really studies the field. I apologize for inflicting that pain.

Philosophical Origins of Computers


Scott Pellegrino here talks a bit about Boole’s laws, logic, set theory, and is building up into computation and information theory if his writing continues along the line promised here.

The Modern Dilettante

As indicated by my last post, I’d really like to tie in philosophical contributions to mathematics to the rise of the computer.  I’d like to jump from Leibniz to Boole, since Boole got the ball rolling to finally bring to fruition what Leibniz first speculated on the possibility.

In graduate school, I came across a series of lectures by a former head of the entire research and development division of IBM, which covered, in a surprising level of detail, the philosophical origins of the computer industry. To be honest, it’s the sort of subject that really should be book in length.  But I think it really is a great contemporary example of exactly what philosophy is supposed to be, discovering new methods of analysis that as they develop are spun out of philosophy and are given birth as a new independent (or semi-independent) field from their philosophical origins.  Theoretical linguistics is a…

