Reading the Comics, July 12, 2015: Chuckling At Hart Edition


I haven’t had the chance to read the Gocomics.com comics yet today, but I’d had enough strips to bring up anyway. And I might need something to talk about on Tuesday. Two of today’s strips are from the legacy of Johnny Hart. Hart’s last decades, especially at B.C., when he most often wrote about his fundamentalist religious views, hurt his reputation and obscured the fact that his comics were really, really funny when they started. His heirs and successors have been doing fairly well at reviving the deliberately anachronistic and lightly satirical edge that made the strips funny to begin with, and one of them’s a perennial around here. The other, Wizard of Id Classics, is literally reprints from the earliest days of the comic strip’s run. That shows the strip when it was earning its place on every comics page everywhere, and makes a good case for it.

Mason Mastroianni, Mick Mastroianni, and Perri Hart’s B.C. (July 8) shows how a compass, without straightedge, can be used to ensure one’s survival. I suppose it’s really only loosely mathematical but I giggled quite a bit.

Ken Cursoe’s Tiny Sepuku (July 9) talks about luck as being just the result of probability. That’s fair enough. Random chance will produce strings of particularly good, or bad, results. Those strings of results can look so long or impressive that we suppose they have to represent something real. Look to any sport for the arguments about whether there are “hot hands” or “clutch performers”. And Maneki-Neko is right that a probability manipulator would help. You can get a string of ten tails in a row on a fair coin, but you’ll get many more if the coin has an eighty percent chance of coming up tails.
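You can see the effect in a quick simulation. This is just an illustrative sketch; the flip count and the seed are arbitrary choices of mine, not anything from the strip:

```python
import random

def longest_tails_run(p_tails, flips=10_000, seed=42):
    # Flip a (possibly biased) coin many times and record the longest
    # run of consecutive tails that turns up by chance alone.
    rng = random.Random(seed)
    longest = current = 0
    for _ in range(flips):
        if rng.random() < p_tails:   # this flip came up tails
            current += 1
            longest = max(longest, current)
        else:
            current = 0
    return longest

print(longest_tails_run(0.5))   # fair coin: impressive-looking runs anyway
print(longest_tails_run(0.8))   # Maneki-Neko's manipulated coin: far longer runs
```

Even the fair coin produces runs long enough to look meaningful, which is the point: streaks alone don’t prove anything beyond probability at work.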

Brant Parker and Johnny Hart’s Wizard of Id Classics (July 9, rerun from July 12, 1965) is a fun bit of volume-guessing and logic. So, yes, I giggled pretty solidly at both B.C. and The Wizard of Id this week.

Mell Lazarus’s Momma (July 11) identifies “long division” as the first thing a person has to master to be an engineer. I don’t know that this is literally true. It’s certainly true that liking doing arithmetic helps one in a career that depends on calculation, though. But you can be a skilled songwriter without being any good at writing sheet music. I wouldn’t be surprised if there are skilled engineers who are helpless at dividing fourteen into 588.

Bunny Hoest and John Reiner’s Lockhorns for the 12th of July, 2015. In the panel of interest, Loretta says the numbers (presumably the bills) don’t add up, but they subtract down fine.

Bunny Hoest and John Reiner’s Lockhorns (July 12) includes an example of using “adding up” to mean “make sense”. It’s a slight thing. But the same idiom was used last week, in Eric Teitelbaum and Bill Teitelbaum’s Bottomliners. I don’t think Comic Strip Master Command is ordering this punch line yet, but you never know.

And finally, I do want to try something a tiny bit new, and explicitly invite you-the-readers to say what strip most amused you. Please feel free to comment about your choices, or warn me that I set up the poll wrong. I haven’t tried this before.

A Summer 2015 Mathematics A To Z: error


Error

This is one of my A to Z words that everyone knows. An error is some mistake, evidence of our human failings, to be minimized at all costs. That’s … well, it’s an attitude that doesn’t let you use error as a tool.

An error is the difference between what we would like to know and what we do know. Usually, what we would like to know is something hard to work out. Sometimes it requires complicated work. Sometimes it requires an infinite amount of work to get exactly right. Who has the time for that?

This is how we use errors. We look for methods that approximate the thing we want, along with estimates of how much of an error that method makes. Usually, the method involves doing some basic step some large number of times. And usually, if we do the step more times, the error we make gets smaller. My essay “Calculating Pi Less Terribly” shows an example of this. If we add together more terms from that Leibniz formula we get a running total that’s closer to the actual value of π.
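As a concrete sketch of that (the term counts here are arbitrary, picked just to show the trend):

```python
import math

def leibniz_pi(n_terms):
    # Partial sum of the Leibniz formula: pi = 4*(1 - 1/3 + 1/5 - 1/7 + ...).
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

for n in (10, 100, 1000):
    approx = leibniz_pi(n)
    print(n, approx, abs(math.pi - approx))   # error shrinks as n grows
```

Better, for an alternating series like this one we can estimate the error without knowing π at all: it’s never bigger than the first term we leave out, 4/(2n+1). That’s the error being used as a tool.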

Continue reading “A Summer 2015 Mathematics A To Z: error”

How Richard Feynman Got From The Square Root of 2 to e


I wanted to bring to greater prominence something that might have got lost in comments. Elke Stangl, author of the Theory And Practice Of Trying To Combine Just Anything blog, noticed that among the Richard Feynman Lectures on Physics, and available online, is his derivation of how to discover e — the base of natural logarithms — from playing around.

e is an important number, certainly, but it’s tricky to explain why it’s important; it hasn’t got a catchy definition like pi has, and even the description that most efficiently says why it’s interesting (“the base of the natural logarithm”) sounds perilously close to technobabble. As an explanation for why e should be interesting Feynman’s text isn’t economical — I make it out as something around two thousand words — but it’s a really good explanation since it starts from a good starting point.

That point is: it’s easy to understand what you mean by raising a number, say 10, to a positive integer: 10^4, for example, is four tens multiplied together. And it doesn’t take much work to extend that to negative numbers: 10^{-4} is one divided by the product of four tens multiplied together. Fractions aren’t too bad either: 10^{1/2} would be the number which, multiplied by itself, gives you 10. 10^{3/2} would be 10^{1/2} times 10^{1/2} times 10^{1/2}; or, if you think this is easier (it might be!), the number which, multiplied by itself, gives you 10^3. But what about the number 10^{\sqrt{2}} ? And if you can work that out, what about the number 10^{\pi} ?

There’s a pretty good, natural way to go about working that out, and as Feynman shows, by doing so you find there’s something special about some particular number pretty close to 2.71828.
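The starting idea — that repeated square roots let you reach any exponent — can be sketched in a few lines. This is my hypothetical Python rendering of the principle, not Feynman’s actual table-based hand computation: write the fractional part of the exponent in binary, and multiply in the matching repeated square roots of the base.

```python
import math

def power_by_square_roots(base, exponent, bits=40):
    # Approximate base**exponent using only multiplication and square roots.
    # Idea: exponent = integer part + sum of 2**(-k) terms (its binary digits),
    # and base**(2**(-k)) is just the k-th repeated square root of base.
    frac = exponent - int(exponent)
    result = float(base) ** int(exponent)   # handle the whole-number part directly
    root = float(base)
    for _ in range(bits):
        root = math.sqrt(root)    # base**(1/2), base**(1/4), base**(1/8), ...
        frac *= 2
        if frac >= 1:             # this binary digit of the exponent is 1
            result *= root
            frac -= 1
    return result

print(power_by_square_roots(10, math.sqrt(2)))   # close to 10**sqrt(2)
```

Nothing here beyond multiplying and taking square roots, which is why an irrational exponent like \sqrt{2} stops being mysterious: it’s approached through ordinary fractional powers.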

How Big Is This Number? Answered


My little question about just how big a number 3^{3^{15}} was got answered just exactly right by John Friedrich, so if you wondered about how I could say a number took about seven million digits just to write out, there’s your answer. Friedrich gives it as a number with 6,846,169 digits, and I agree. Better, the calculator I found which was able to handle this (MatCalcLite, a free calculator app I have on my iPad) agrees too: it claims that 3^{3^{15}} is about 3.25 \times 10^{6,846,168} which has that magic 6,846,169 digits.

Friedrich uses logarithms to work it out, and this is one of the things logarithms are good for in these days when you don’t generally need them to do multiplications and divisions. You can look at logarithms as letting you evaluate the lengths of numbers — how many digits they need to write out — rather than the numbers themselves, and this brings into accessibility numbers that would otherwise be too big to work with, even on the calculator. (Another thing logarithms are good for is that they’re quite nice to work with if you have to do calculus, so once you’re comfortable with them, you start looking for chances to slip them into analysis.)
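The digit-counting trick fits in a few lines. A sketch, which leans on the computer already knowing the logarithm of 3 — which, as noted below, is exactly what Friedrich’s method needs looked up:

```python
import math

# The base-ten logarithm of 3**(3**15) is (3**15) * log10(3);
# the digit count of a positive integer is the floor of its log10, plus one.
exponent = 3 ** 15                       # 14,348,907
log_value = exponent * math.log10(3)     # about 6,846,168.51
digits = math.floor(log_value) + 1       # 6,846,169 digits
leading = 10 ** (log_value % 1)          # about 3.25, the leading factor
print(digits, leading)
```

The fractional part of the logarithm recovers the leading digits, too, which is how the 3.25 \times 10^{6,846,168} form comes out.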

One nagging little point about Friedrich’s work, though, is that you need to know the logarithm of 3 to work it out. (Also you need the logarithm of 10, or you could try using the common logarithm — the logarithm base ten — of 3 instead.) For finding the actual number that’s fine; trying to get this answer with any precision without looking up the logarithm of 3 is quirky if not crazy.

But what if you want to do this purely by the joys of mental arithmetic? Could you work out 3^{3^{15}} without finding a table of logarithms? Obviously you can’t if you want a really precise answer, and here 3.25 \times 10^{6 846 168} counts as precise, but could you at least get a good idea of how big a number it is?

Ted Baxter and the Binomial Distribution


There are many hard things about teaching, although I appreciate that since I’m in mathematics I have advantages over many other fields. For example, students come in with the assumption that there are certainly right and certainly wrong answers to questions. I’m generally spared the problem of convincing students that I have authority to rule some answers in or out. There’s actually a lot of discretion and judgement and opinion involved, but most of that comes in when one is doing research. In an introductory course, there are some techniques that have gotten so well-established and useful we could fairly well pretend there isn’t any judgement left.

But one hard part is probably common to all fields: how closely to guide a student working out something. This case comes from office hours, as I tried getting a student to work out a problem in binomial distributions. Binomial distributions come up in studying the case where there are many attempts at something; and each attempt has a certain, fixed, chance of succeeding; and you want to know the chance of there being exactly some particular number of successes out of all those tries. For example, imagine rolling four dice, and being interested in getting exactly two 6’s on the four dice.

To work it out, you need the number of attempts, and the number of successes you’re interested in, and the chance of each attempt at something succeeding, and the chance of each attempt failing. For the four-dice problem, each attempt is the rolling of one die; there are four attempts at rolling a die; we’re interested in finding two successful rolls of 6; the chance of successfully getting a 6 on any roll is 1/6; and the chance of failure on any one roll is —
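The computation those four ingredients feed can be written out directly. A quick sketch using the standard binomial formula (the function name is mine, not anything from the office-hours session):

```python
from math import comb

def binomial_pmf(n, k, p):
    # Chance of exactly k successes in n independent tries,
    # each try succeeding with probability p (and failing with 1 - p).
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Four dice, wanting exactly two 6's: n = 4 tries, k = 2 successes, p = 1/6.
print(binomial_pmf(4, 2, 1 / 6))   # 25/216, about 0.116
```

The comb(n, k) factor counts the ways to pick which of the attempts are the successful ones, which is the piece students most often forget.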

Continue reading “Ted Baxter and the Binomial Distribution”
