Reading the Comics, October 25, 2014: No Images Again Edition


I had assumed it was a freak event last time that there weren’t any Comics Kingdom strips with mathematical topics to discuss. (Those are the comics I include as pictures here, because I don’t know that the links made to them will work for everyone arbitrarily far in the future.) Apparently they’re just not in a very mathematical mood this month, though. Such happens; I’m sure they’ll reappear soon enough.

John Zakour and Scott Roberts’ Working Daze (October 22, a “best of” rerun) brings up one of my very many peeves-regarding-pedantry, the notion that you “can’t give more than 100 percent”. It depends on what 100 percent means. The metaphor of “giving 110 percent” is based on the one-would-think-obvious point that there is a standard quantity of effort, which is the 100 percent, and to give 110 percent is to give measurably more than the standard effort. The English language has enough illogical phrases in it; we don’t need to attack ones that are only senseless if you go out to pick a fight with them.

Mark Anderson’s Andertoons (October 23) shows a student attacking a problem with appreciable persistence. As the teacher says, though, there’s no way the student’s attempts at making 2 plus 2 equal 5 are ever going to be anything but wrong, at least unless we have different ideas about what is meant by 2, plus, equals, and 5. It’s easy to get from this point to some pretty heady territory: since it’s true that two plus two can’t equal five (using the ordinary definitions of these words), then this statement is true not just everywhere in this universe but in all possible universes. This — indeed, all — arithmetic would even be true if there were no universe. But if something can be true regardless of what the universe is like, or even if there is no universe, then how can it tell us anything about the specific universe that actually exists? And yet it seems to do so, quite well.

Tim Lachowski’s Get A Life (October 23) is really an accounting joke, or really more a “taxes they so mean” joke, but I thought it worth mentioning that, really, the majority of the mathematics the world has done has got to have been for the purposes of bookkeeping and accounting. I’m sorry that I’m not better-informed about this so as to better appreciate what is, in some ways, the dark matter of mathematical history.

Keith Tutt and Daniel Saunders’s chipper Lard’s World Peace Tips (October 23) recommends “be a genius” as one of the ways to bring about world peace, and uses mathematics as the comic shorthand for “genius activity”, not to mention sudoku as the comic shorthand for “mathematics”. People have tried to gripe that sudoku isn’t really mathematics; while it’s not arithmetic, though — you could replace the numerals with letters or with arbitrary symbols not to be repeated in one line, column, or subsquare and not change the problem at all — it’s certainly logic.

John Graziano’s Ripley’s Believe It or Not (October 23), besides giving me a spot of dizziness with that attribution line, makes the claim that “elephants have been found to be better at some numerical tasks than chimps or even humans”. I can believe that, more or less, though I notice it doesn’t say exactly what tasks elephants are so good (or chimps and humans so bad) at. Counting and addition or subtraction seem most likely, though, because those are processes it seems possible to create tests for. At some stages in human and animal development the animals have a clear edge in speed or accuracy. I don’t remember reading evidence of elephant skills before but I can accept that they surely have some.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (October 24) applies the tools of infinite series — adding up infinitely many of a sequence of terms, often to a finite total — to parenting and the problem of one kid hitting another. This is held up as Real Analysis — the field in which you learn why Calculus works — and it is, yeah, although this is the part of Real Analysis you can do in high school.
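The standard first example of such a series, and plausibly the sort of thing the strip has in mind (my guess, not anything the comic states), is the geometric series of repeated halvings:

```latex
\sum_{n=1}^{\infty} \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1
```

Each partial sum falls short of 1 by exactly the size of the last term added, which is why the total never exceeds 1 even though the adding never stops.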

John Zakour and Scott Roberts’s Maria’s Day (October 25) picks up on the Math Wiz Monster in Maria’s closet mentioned last time I did one of these roundups. And it includes an attack on the “Common Core” standards, understandably: it’s unreasonable to today’s generation of parents that mathematics should be taught any differently from how it was taught to them, when they didn’t understand the mathematics they were being taught. Innovation in teaching never has a chance.

Dave Whamond’s Reality Check (October 25) reminds us that just because stock framing can be used to turn a subtraction problem into a word problem doesn’t mean that it can’t jump all the way out of mathematics into another field.

I haven’t included any comics from today — the 26th of October — in my reading yet but really, what are the odds there’s like a half-dozen comics of obvious relevance with nice, juicy topics to discuss?

Calculus without limits 5: log and exp


Joseph Nebus:

I’ve been on a bit of a logarithms kick lately, and I should say I’m not the only one. HowardAt58 has had a good number of articles about it, too, and I wanted to point some out to you. In this particular reblogging he brings a bit of calculus to show why the logarithm of the product of two numbers has to be the sum of the logarithms of the two separate numbers, in a way that’s more rigorous (if you’re comfortable with freshman calculus) than just writing down a couple examples along the lines of how 10^2 times 10^3 is equal to 10^5. (I won’t argue that having a couple specific examples might be better at communicating the point, but there’s a difference between believing something is so and being able to prove that it’s true.)
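For the flavor of the rigorous version, here is one common calculus route, using the integral definition of the logarithm; this is my paraphrase of the general idea, not necessarily the exact presentation in the reblogged post:

```latex
\log(xy) = \int_1^{xy} \frac{dt}{t}
         = \int_1^{x} \frac{dt}{t} + \int_x^{xy} \frac{dt}{t}
         = \log x + \int_1^{y} \frac{du}{u}
         = \log x + \log y ,
```

where the second integral gets rewritten with the substitution t = xu, so that dt/t becomes du/u and the limits run from 1 to y.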

Originally posted on Saving school math:

The derivative of the log function can be investigated informally, as log(x) is seen as the inverse of the exponential function, written here as exp(x). The exponential function appears naturally from numbers raised to varying powers, but formal definitions of the exponential function are difficult to achieve. For example, what exactly is the meaning of exp(pi) or exp(root(2))?
So we look at the log function:

[The derivation continues in two figures in the original post.]


Reading The Comics, October 20, 2014: No Images This Edition


Since I started including Comics Kingdom strips in my roundups of mathematically-themed strips I’ve been including images of those, because I’m none too confident that Comics Kingdom’s pages are accessible to normal readers after some time has passed. Gocomics.com has — as far as I’m aware, and as far as anyone has told me — no such problems, so I haven’t bothered doing more than linking to them. So this is the first roundup in a long while, as best I remember, that has only Gocomics strips, with nothing from Comics Kingdom. It’s also the first roundup for which I’m fairly sure I’ve done one of these strips before.

Guy Endore-Kaiser and Rodd Perry and Dan Thompson’s Brevity (October 15, but a rerun) is an entry in the anthropomorphic-numbers line of mathematics comics, and I believe it’s one that I’ve already mentioned in the past. This particular strip is a rerun; in modern times the apparently indefatigable Dan Thompson has added this strip to the estimated fourteen he does by himself. In any event it stands out in the anthropomorphic-numbers subgenre for featuring non-integers that aren’t pi.

Ralph Hagen’s The Barn (October 16) ponders how aliens might communicate with Earthlings, and, like pretty much everyone who’s considered the question, concludes mathematics would be the way they’d do it. It’s easy to see why mathematics is plausible as a universal language: a mathematical truth should be true anywhere that deductive logic holds, and it’s difficult to conceive of a universe existing in which it could not hold true. I have somewhere around here a mention of a late-19th-century proposal to try contacting Martians by planting trees in Siberia which, in bloom, would show a proof of the Pythagorean theorem.

In modern times we tend to think of contact with aliens being done by radio more likely (or at least some modulated-light signal), which makes a signal like a series of pulses counting out prime numbers sound likely. It’s easy to see why prime numbers should be interesting too: any species that has understood multiplication has almost certainly noticed them, and you can send enough prime numbers in a short time to make clear that there is a deliberate signal being sent. For comparison, perfect numbers — whose proper divisors, the factors less than the number itself, add up to the original number — are also almost surely noticed by any species that understands multiplication, but the first several of those are 6, 28, 496, and 8,128; by the time 8,128 pulses of anything have been sent the whole point of the message has been lost.
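Checking which numbers are perfect is easy enough to sketch in code; this is my own minimal Python illustration, nothing from the strip:

```python
def proper_divisor_sum(n):
    """Sum of the divisors of n that are less than n itself (assumes n > 1)."""
    total = 1  # 1 divides everything
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:  # avoid double-counting a square-root divisor
                total += n // d
        d += 1
    return total

# A perfect number equals the sum of its proper divisors.
perfect = [n for n in range(2, 10000) if proper_divisor_sum(n) == n]
print(perfect)  # → [6, 28, 496, 8128]
```

Notice how fast the list thins out; that sparseness is exactly why they make a poor attention-getting signal compared to primes.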

And yet finding prime numbers is still not really quite universal. You or I might see prime numbers as key, but why not triangular numbers, like the sequence 1, 3, 6, 10, 15? Why not square or cube numbers? The only good answer is, well, we have to pick something, so to start communicating let’s hope we find something that everyone will be able to recognize. But there’s an arbitrariness that can’t be fully shed from the process.

John Zakour and Scott Roberts’s Maria’s Day (October 17) reminds us of the value of having a tutor for mathematics problems — if you’re having trouble in class, go to one — and of paying them appropriately.

Steve Melcher’s That Is Priceless (October 17) puts comic captions to classic paintings, and so presents Jusepe de Ribera’s 1630 Euclid, Letting Me Copy His Math Homework. I confess I have a broad-based ignorance of art history, but if I’m using search engines correctly the correct title was actually … Euclid. Hm. It seems like Melcher usually has to work harder at these things. Well, I admit it doesn’t quite match my mental picture of Euclid, but that would have mostly involved some guy in a toga wielding a compass. Ribera seems to have had a series of Greek Mathematician pictures from about 1630, including Pythagoras and Archimedes, with similar poses that I’ll take as stylized representations of the great thinkers.

Mark Anderson’s Andertoons (October 18) plays around with statistical ideas that include expectation values and the gambler’s fallacy, but it’s a good puzzle: has the doctor done the procedure hundreds of times without a problem because he’s better than average at it, or because he’s been lucky? For an alternate formulation, baseball offers a fine question: Ted Williams is the most recent Major League Baseball player to have a season batting average over .400, getting a hit in at least two-fifths of his at-bats over the course of the season. Was he actually good enough to get a hit that often, though, or did he just get lucky? Consider that a .250 hitter — with a 25 percent chance of a hit at any at-bat — could quite plausibly get hits in three out of his four chances in one game, or for that matter over two or three games in a row. Why not a whole season?

Well, because at some point it becomes ridiculous, rather the way we would suspect something was up if a tossed coin came up tails thirty times in a row. Yes, possibly it’s just luck, but there’s good reason to suspect this coin doesn’t have a fifty percent chance of coming up heads, or that the hitter is likely to do better than one hit for every four at-bats, or, to the original comic, that the doctor is just better at getting through the procedure without complications.
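The arithmetic behind that suspicion is easy to check; a quick Python sketch, with my own example numbers rather than anything from the strip:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# A .250 hitter going 3-for-4 in one game: uncommon but unremarkable.
print(binom_pmf(3, 4, 0.25))   # → 0.046875, about 1 in 21

# Thirty tails in a row from a fair coin: grounds for suspicion.
print(0.5 ** 30)               # → about 9.3e-10, under one in a billion
```

The one-game fluke happens every few weeks to somebody; the thirty-in-a-row streak essentially never does, which is why at some point “just luck” stops being the best explanation.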

Ryan North’s quasi-clip-art Dinosaur Comics (October 20) thrilled the part of me that secretly wanted to study language instead by discussing “light verb constructions”, a grammatical touch I hadn’t paid attention to before. The strip is dubbed “Compressed Thesis Comics”, though, from the notion that the Tyrannosaurus Rex is inspired to study “computationally” what forms of light verb construction are more and what are less acceptable. The impulse is almost the perfect thesis project, really: notice a thing and wonder how to quantify it. A good piece of this thesis would probably be just working out how to measure acceptability of a particular verb construction. I imagine the linguistics community has a rough idea how to measure these; if not, T Rex is taking on way too big a project for a thesis, since working out the measure would be an obvious point for the thesis to crash against.

Well, I still like the punch line.

How Richard Feynman Got From The Square Root of 2 to e


I wanted to bring to greater prominence something that might have got lost in comments. Elke Stangl, author of the Theory And Practice Of Trying To Combine Just Anything blog, noticed that among the Richard Feynman Lectures on Physics, and available online, is his derivation of how to discover e — the base of natural logarithms — from playing around.

e is an important number, certainly, but it’s tricky to explain why it’s important; it hasn’t got a catchy definition like pi has, and even the description that most efficiently says why it’s interesting (“the base of the natural logarithm”) sounds perilously close to technobabble. As an explanation for why e should be interesting Feynman’s text isn’t economical — I make it out as something around two thousand words — but it’s a really good explanation since it starts from a good starting point.

That point is: it’s easy to understand what you mean by raising a number, say 10, to a positive integer: 10^4, for example, is four tens multiplied together. And it doesn’t take much work to extend that to negative numbers: 10^{-4} is one divided by the product of four tens multiplied together. Fractions aren’t too bad either: 10^{1/2} would be the number which, multiplied by itself, gives you 10. 10^{3/2} would be 10^{1/2} times 10^{1/2} times 10^{1/2}; or if you think this is easier (it might be!), the number which, multiplied by itself, gives you 10^3. But what about the number 10^{\sqrt{2}} ? And if you can work that out, what about the number 10^{\pi} ?

There’s a pretty good, natural way to go about writing that and as Feynman shows you find there’s something special about some particular number pretty close to 2.71828 by doing so.
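You can replay the flavor of the experiment numerically, though I should say this is my own shortcut and not Feynman’s actual derivation: estimate the slope of 10^x at x = 0, which turns out to be the natural logarithm of 10, and then e is the base whose slope there would be exactly 1:

```python
import math

# Crude one-sided difference for the slope of 10**x at x = 0.
h = 1e-6
slope = (10**h - 1) / h       # approximately ln(10), about 2.302585

# The base b whose slope at x = 0 equals 1 satisfies ln(b) = 1,
# so b = 10**(1/slope).
e_estimate = 10 ** (1 / slope)
print(e_estimate)             # lands close to 2.71828...
```

The number that pops out is within a few parts in a million of e, which is the “something special about some particular number pretty close to 2.71828” the text arrives at.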

Reading the Comics, October 14, 2014: Not Talking About Fourier Transforms Edition


I know that it’s disappointing to everyone, given that one of the comic strips in today’s roundup of mathematically-themed strips gives me such a good excuse to explain what Fourier Transforms are and why they’re interesting and well worth the time learning. But I’m not going to do that today. There are enough other things to think about, and besides, you probably aren’t going to need Fourier Transforms in class for a couple more weeks yet. For today, though, no, I’ll go on to other things instead. Sorry to disappoint.

Glen McCoy and Gary McCoy’s The Flying McCoys (October 9) jokes about how one can go through life without ever using algebra. I imagine other departments get this, too, like, “I made it through my whole life without knowing anything about US History!” or “And did any of that time I spent learning Art do anything for me?” I admit a bias here: I like learning stuff even if it isn’t useful because I find it fun to learn stuff. I don’t insist that you share in finding that fun, but I am going to look at you weird if you feel some sense of triumph about not learning stuff.

Tom Thaves’s Frank and Ernest (October 10) does a gag about theoretical physics, and string theory, which is that field where physics merges almost imperceptibly into mathematics and philosophy. The rough idea of string theory is that it’d be nice to understand why the particles we actually observe exist, as opposed to things that we could imagine existing that don’t seem to — like, why couldn’t there be something that’s just like an electron, but two times as heavy? Why couldn’t there be something with the mass of a proton but three-quarters the electric charge? — by supposing that what we see are the different natural modes of behavior of some more basic construct, these strings. A natural mode is, well, what something will do if it’s got a bunch of energy and is left to do what it will with it.

Probably the most familiar kind of natural mode is how if you strike a glass or a fork or such it’ll vibrate, if we’re lucky at a tone we can hear, and if we’re really lucky, at one that sounds good. Things can have more than one natural mode. String theory hopes to explain all the different kinds of particles, and the different ways in which they interact, as being different modes of a hopefully small and reasonable variety of “strings”. It’s a controversial theory because it’s been very hard to find experiments that prove, or soundly rule out, a particular model of it as a representation of reality, and the models require invoking exotic things like more dimensions of space than we notice. This could reflect string theory being an intriguing but ultimately non-physical model of the world; it could reflect that we just haven’t found the right way to go about proving it yet.

Charles Schulz’s Peanuts (October 10, originally run October 13, 1967) has Sally press Charlie Brown into helping her with her times tables. She does a fair bit of guessing, which isn’t by itself a bad approach. For one, if you don’t know the exact answer, but you can pin down a lower and an upper bound, you’re doing work that might be all you really need and you’re doing work that may give you a hint how to get what you really want. And for that matter, guessing at a solution can be the first step to finding one. One of my favorite areas of mathematics, Monte Carlo methods, finds solutions to complicated problems by starting with a wild guess and making incremental refinements. It’s not guaranteed to work, but when it does, it gets extremely good solutions and with a remarkable ease. Granted, this doesn’t really help the times tables much.

On the 11th (originally run October 14, 1967), Sally incidentally shows the hard part of refining guesses about a solution; there has to be some way of telling whether you’re getting warmer. In your typical problem for a Monte Carlo approach, for example, you have some objective function — say, the distance travelled by something going along a path, or the total energy of a system — and can measure whether an attempted change is improving your solution — say, minimizing your distance or reducing the potential energy — or is making it worse. Typically, you take any refinement that makes the provisional answer better, and reject most, but not all, refinements that make the provisional answer worse.
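A toy version of that refine-the-guess loop, in Python; this is my own illustration of the general idea, with a made-up objective function standing in for the distance or energy:

```python
import random

def objective(x):
    """A stand-in for 'distance travelled' or 'total energy': lower is better."""
    return (x - 3.0) ** 2 + 1.0

random.seed(42)          # fixed seed so the run is repeatable
x = 0.0                  # the wild initial guess
for _ in range(2000):
    candidate = x + random.gauss(0.0, 0.5)   # a small random refinement
    # Accept any refinement that improves the provisional answer.
    # (Serious Monte Carlo methods also accept some worsening moves,
    # with low probability, so they don't get stuck in a local minimum.)
    if objective(candidate) < objective(x):
        x = candidate

print(x)   # wanders close to the minimum at 3.0
```

The hard part Sally runs into is hidden in `objective`: without some way to score a guess, there is no telling whether a refinement made things warmer or colder.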

That said, “Overly-Eight” is one of my favorite made-up numbers. A “Quillion” is also a pretty good one.

Jeff Mallet’s Frazz (October 12) isn’t explicitly about mathematics, but it’s about mathematics. “Why do I have to show my work? I got the right answer?” There are good responses on two levels, the first of which is practical, and which blends into the second: if you give me-the-instructor the wrong answer then I can hopefully work out why you got it wrong. Did you get it wrong because you made a minor but ultimately meaningless slip in your calculations, or did you get it wrong because you misunderstood the problem and did not know what kind of calculation to do? Error comes in many forms; some are boring — wrote the wrong number down at the start and never noticed, missed a carry — some are revealing — doesn’t know the order of operations, doesn’t know how the chain rule applies in differentiation — and some are majestic.

These last are the great ones, the errors that I love seeing, even though they’re the hardest to give a fair grade to. Sometimes a student will go off on a tack that doesn’t look anything like what we did in class, or could have reasonably seen in the textbook, but that shows some strange and possibly mad burst of creative energy. Usually this is rubbish and reflects the student flailing around, but sometimes the student is on to something, might be trying an approach that, all right, doesn’t work here, but which if it were cleaned of its logical flaws might be a new and different way to work out the problem.

And that blends to the second reason: finding answers is nice enough and if you’re good at that, I’m glad, but is it all that important? We have calculators, after all. What’s interesting, and what is really worth learning in mathematics, is how to find answers: what approaches can efficiently be used on this problem, and how do you select one, and how do you do it to get a correct answer? That’s what’s really worth learning, and what is being looked for when the instruction is to show your work. Caulfield had the right answer, great, but is it because he knew a good way to work out the problem, or is it because he noticed the answer was left on the blackboard from the earlier class when this one started, or is it because he guessed and got lucky, or is it because he thought of a clever new way to solve the problem? If he did have a clever new way to do the problem, shouldn’t other people get to see it? Coming up with clever new ways to find answers is the sort of thing that gets you mathematical immortality as a pioneer of some approach that gets mysteriously named for somebody else.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (October 14) makes fun of tenure, the process by which people with a long track record of skill, talent, and drive are rewarded with no longer having to fear being laid off or fired except for cause. (Though I should sometime write about Fourier Transforms, as they’re rather neat.)

'Albert, stop daydreaming and eat your soup', which is alphabet soup, apparently, and where you could find E = m c c if you looked just right.

Margaret Shulock’s Six Chix comic for the 14th of October, 2014: Albert Einstein is evoked alongside the origins of his famous equation about c and m and soup and stuff.

Margaret Shulock’s turn at Six Chix (October 14) (the comic strip is shared among six women because … we couldn’t have six different comic strips written and drawn by women all at the same time, I guess?) evokes the classic image of Albert Einstein, the genius, and drawing his famous equation out of the ordinary stuff of daily life. (I snark a little; Shulock is also the writer for Apartment 3-G, to the extent that things can be said to be written in Apartment 3-G.)

How To Numerically Integrate Like A Mathematician


I have a guest post that I mean to put up shortly which is a spinoff of the talk last month about calculating logarithms. There are several ways to define a logarithm but one of the most popular is to define it as an integral. That has the advantages of allowing the logarithm to be studied using a lot of powerful analytic tools already built up for calculus, and of allowing it to be calculated numerically, because there are a lot of methods out there for numerically calculating integrals. I wanted to precede that post with a discussion of a couple of the ways to do these numerical integrations.

A great way to interpret integrating a function is to imagine drawing a plot of the function; the integral is the net area between the x-axis and the plot of that function. That may be pretty hard to do, though, so we fall back on a standard mathematician’s trick that they never tell you about in grade school, probably for good reason: don’t bother doing the problem you actually have, and instead do a problem that looks kind of like it but that you are able to do.

Normally, for what’s called a definite integral, we’re interested in the area underneath a curve and across an “interval”, that is, between some minimum and some maximum value on the x-axis. Definite integrals are the kind we can approximate numerically. An indefinite integral gives a function that would tell us what the definite integral on any interval would be, but that takes symbolic mathematics to work out and that’s way beyond this article’s scope.

A squiggly function between two vertical green lines that show the interval to be integrated over.

The red curve here represents some function. Any function will do, really. This is just one of them.

While we may have no idea what the area underneath a complicated squiggle on some interval is, we do know what the area inside a rectangle is. So if we pretend we’re interested in the area of the rectangle instead of the original area, good. Take my little drawing of a generic function here, the wavy red curve. The integral of it, from the left vertical green line to the right one, is the area between the x-axis, the horizontal black line, and the red curve.

A generic function, with rectangles approximating the function's value along the y-axis based on the function's value at the left and at the right of the area we're integrating over.

For the Rectangle Rule, find the area of a rectangle that approximates the function whose integral you’re actually interested in. Shown here are the left-endpoint (yellow line) and the right-endpoint (blue line) approximations, but any rule can be used to find the approximation.

If we use the “Rectangle Rule”, we draw a horizontal line based on the value of the function somewhere from the left line to the right. The yellow line up top is based on the value at the left endpoint. The blue line is based on the value the function has at the right endpoint. We can use any point, although the most popular ones are the left endpoint, the right endpoint, and the midpoint, because those are nice, easy picks to make. (And if we’re trying to integrate a function whose definition we don’t know, for example because it’s the data we got from an experiment, these will often be the only data points we have.) The area under the curve is going to be something like the area of the rectangle bounded by the green lines, the horizontal black line, and the blue horizontal line or the yellow horizontal line.

Drawn this way you might complain this approximation is rubbish: the area of the blue-topped rectangle is obviously way too low, and that of the yellow-topped rectangle is way too high. The mathematician’s answer to this is: oh, hush. We were looking for easy, not good. The area is the width of the interval times the value of the function at the chosen point; how much easier can you get?

(It also happens that the blue rectangle obviously gives too low an area, while the yellow gives too high an area. This is a coincidence, caused by my not thinking to make my function wiggle up and down quite enough. Generally speaking neither the left- nor the right-endpoints are maximum or minimum values for the function. It can be useful analytically to select the points that are “where the function is its highest” and “where the function is its lowest” — this lets you find the upper and lower bounds for the area — but that’s generally too hard to use computationally.)
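Just to show how little work the Rectangle Rule asks for, here is my own minimal Python version, with the choice of point left as a parameter:

```python
def rectangle_rule(f, a, b, point="midpoint"):
    """Approximate the integral of f on [a, b] by a single rectangle."""
    if point == "left":
        height = f(a)
    elif point == "right":
        height = f(b)
    else:                          # default to the midpoint
        height = f((a + b) / 2)
    return (b - a) * height

f = lambda x: x ** 2               # true integral on [0, 1] is 1/3
print(rectangle_rule(f, 0.0, 1.0, "left"))      # → 0.0   (too low)
print(rectangle_rule(f, 0.0, 1.0, "right"))     # → 1.0   (too high)
print(rectangle_rule(f, 0.0, 1.0, "midpoint"))  # → 0.25  (closer)
```

One multiplication and one function evaluation; as promised, easy rather than good.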

A generic function divided along the x-axis, with rectangles approximating the function's value along the y-axis.

For the Composite Rectangle Rule, slice the area you want to integrate into a bunch of small strips — not necessarily all the same width — and find the areas of the rectangles that approximate the function in each of those smaller strips.

But we can turn this into a good approximation. What makes the blue or the yellow lines lousy approximations is that the function changes a lot in the distance between the green lines. If we were to chop up the strip into a bunch of smaller ones, and use the rectangle rule on each of those pieces, the function would change less in each of those smaller pieces, and so we’d get an area total that’s closer to the actual area. We find the distance between a pair of adjacent vertical green lines, multiply that by the height of the function at the chosen point, and add that to the running total. This is properly called the “Composite Rectangle Rule”, although it’s really only textbooks introducing the idea that make a fuss about including the “composite”. It just makes so much sense to break the interval up that we do that all the time and forget to explicitly say that except in the class where we introduce this.

(And notice, in my drawings, that in some of the regions between adjacent vertical green lines the left-endpoint and the right-endpoint are not where the function gets its highest, or lowest, value. They can just be points.)

There’s nothing special about the Rectangle Rule that makes it uniquely suited for composition. It’s just easier to draw that way. Any numerical integration rule lets you do the same trick. Also, it’s very common to make all the smaller rectangles — called the subintervals — the same width, but that’s not because the method needs that to work. It’s easier to calculate if all the subintervals are the same width, because then you don’t have to remember how wide each different subinterval is.
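The composite version is barely more code; in this sketch of mine I use equal-width subintervals and the midpoint, purely for convenience:

```python
def composite_rectangle(f, a, b, n):
    """Split [a, b] into n equal strips; one midpoint rectangle per strip."""
    width = (b - a) / n
    total = 0.0
    for i in range(n):
        left = a + i * width
        total += width * f(left + width / 2)   # midpoint of this strip
    return total

f = lambda x: x ** 2                 # true integral on [0, 1] is 1/3
print(composite_rectangle(f, 0.0, 1.0, 100))   # ≈ 0.3333, already quite close
```

The same loop works with unequal strips; you just have to carry each strip’s own width through the sum.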

A generic function, with a trapezoid approximating the function's value. The trapezoid matches the original function at the left and the right ends of the interval we integrate over.

For the Trapezoid Rule, approximate the area under the function by finding the area of a right trapezoid (or trapezium) which is as wide as the interval you want to integrate over and which matches the original function at the left- and the right-endpoints.

Rectangles are probably the easiest shape of all to deal with, but they’re not the only easy shapes. Trapezoids, or trapeziums if you prefer, are hardly a challenge to find the area for. This gives me the next really popular integration rule, the “Trapezoid Rule” or “Trapezium Rule” as your dialect favors. We take the function and approximate its area by working out the area of the trapezoid formed by the left green edge, the bottom black edge, the right green edge, and the sloping blue line that goes from where the red function touches the left end to where the function touches the right end. This is a little harder than the Rectangle Rule: we have to multiply the width of the interval between the green lines by the arithmetic mean of the function’s value at the left and at the right endpoints. That means, evaluate the function at the left endpoint and at the right endpoint, add those two values together, and divide by two. Not much harder and it’s pleasantly more accurate than the Rectangle Rule.

If that’s not good enough for you, you can break the interval up into a bunch of subintervals, just as with the Composite Rectangle Rule, and find the areas of all the trapezoids created there. This is properly called the “Composite Trapezoid Rule”, but again, after your Numerical Methods I class you won’t see the word “composite” prefixed to the name again.
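In code the Composite Trapezoid Rule is the same loop with the rectangle’s single height swapped for the mean of the two endpoint values; again, my own sketch:

```python
def composite_trapezoid(f, a, b, n):
    """Split [a, b] into n equal strips; one trapezoid per strip."""
    width = (b - a) / n
    total = 0.0
    for i in range(n):
        left = a + i * width
        right = left + width
        # Arithmetic mean of the function at the two endpoints of the strip.
        total += width * (f(left) + f(right)) / 2
    return total

f = lambda x: x ** 2                 # true integral on [0, 1] is 1/3
print(composite_trapezoid(f, 0.0, 1.0, 100))   # ≈ 0.33335
```

Two function evaluations per strip instead of one, and a noticeably smaller error for the same number of strips.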

And yet we can do better still. We’ll remember this when we pause a moment and think about what we’re really trying to do. When we do a numerical integration like this we want to find, instead of the area underneath our original curve, the area underneath a curve that looks like it but that’s easier to deal with. (Yes, we’re calling the straight lines of the Rectangle and Trapezoid Rules “curves”. Hush.) We can use any curve that we know how to deal with. Parabolas — the curving arc that you see if, say, you shoot the water from a garden hose into the air — may not seem terribly easy to deal with, but it turns out it’s not hard to figure out the area underneath a slice of one of them. This gives us the integration technique called “Simpson’s Rule”.

The Simpson here is Thomas Simpson, 1710 – 1761, who in accord with Mathematics Law did not discover or invent the rule named for him. Johannes Kepler knew the rule a century before Simpson got into the game, at minimum, and both Galileo’s student Bonaventura Cavalieri (who introduced logarithms to Italy, and was one of those people creeping up on infinitesimal calculus ahead of Newton) and the English mathematician/physicist James Gregory (who discovered diffraction grating, and seems to be the first person to have published a proof of the Fundamental Theorem of Calculus) were in on it too. But Simpson wrote a number of long-lived textbooks about calculus, which probably is why his name got tied to this method.

A generic function, with a parabola approximating the function's value. The parabola matches the original function at the left, middle, and right of the region we're integrating over.

For Simpson’s Rule, approximate the function you’re interested in with the parabola that exactly matches the original function at the left endpoint, the midpoint, and the right endpoint of the region you’re integrating over.

In Simpson’s Rule, you need the value of the function at the left endpoint, the midpoint, and the right endpoint of the interval. You can draw the parabola which connects those points — it’s the blue curve in my drawing — and find the area underneath that parabola. The formula may sound a little funny but it isn’t hard: the area underneath the parabola is one-third the spacing between consecutive points (that is, one-sixth the width of the whole interval) times the sum of the value at the left endpoint, the value at the right endpoint, and four times the value at the midpoint. It’s a bit more work but it’s a lot more accurate than the Trapezoid Rule.
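In Python, the whole rule fits in a few lines. Another sketch of my own devising, not from any particular source:

```python
import math

def simpsons_rule(f, a, b):
    """Simpson's Rule on the single interval [a, b]: fit a parabola
    through the function's values at the left endpoint, the midpoint,
    and the right endpoint, and report the area underneath it.
    h is the spacing between consecutive sample points."""
    h = (b - a) / 2.0
    m = (a + b) / 2.0
    return (h / 3.0) * (f(a) + 4.0 * f(m) + f(b))

# Example: the area under sin(x) from 0 to pi/2 is exactly 1;
# one application of Simpson's Rule already gets within a few
# thousandths, far better than a single trapezoid manages.
approx = simpsons_rule(math.sin, 0.0, math.pi / 2)
```

A pleasant surprise lurking in the rule: although it's built from a parabola, it happens to be exact for cubics too.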

There are literally infinitely many more rules you could use, with such names as “Simpson’s Three-Eighths Rule” (also called “Simpson’s Second Rule”) or “Boole’s Rule”[1], but they’re based on similar tricks of making a function that looks like the one you’re interested in but whose area you know how to calculate exactly. For Simpson’s Three-Eighths Rule, for example, you make a cubic polynomial instead of a parabola. If you’re good at finding the areas underneath wedges of circles or underneath hyperbolas or underneath sinusoidal functions, go ahead and use those. You can find the balance of ease of use and accuracy of result that you like.
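For the curious, Simpson's Three-Eighths Rule is barely more work: four equally spaced points instead of three, and a cubic instead of a parabola. One more hedged Python sketch of my own:

```python
def simpsons_three_eighths(f, a, b):
    """Simpson's Three-Eighths Rule: fit a cubic polynomial through the
    function's values at four equally spaced points spanning [a, b],
    and report the area underneath it.  The eponymous 3/8 is the
    weight in front; h is the spacing between consecutive points."""
    h = (b - a) / 3.0
    return (3.0 * h / 8.0) * (
        f(a) + 3.0 * f(a + h) + 3.0 * f(a + 2.0 * h) + f(b)
    )
```

Like the ordinary Simpson's Rule, it's exact for cubics, which is a handy way to check an implementation.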


[1]: Boole’s Rule is also known as Bode’s Rule, because of a typo in the 1972 edition of the Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, or as everyone ever has always referred to this definitive mathematical reference, Abramowitz and Stegun. (Milton Abramowitz and Irene Stegun were the reference’s editors.)

Reading the Comics, October 7, 2014: Repeated Comics Edition


Since my last roundup of mathematics-themed comic strips there’s been a modest drizzle of new ones, and I’m not sure that I can find any particular themes to them, except that Zach Weinersmith and the artistic collective behind Eric the Circle apparently like my attention. Well, what the heck; that’s easy enough to give.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (September 29) hopes to be that guy who appears somewhere around the fourth comment of every news article ever that mentions a correlation being found between two quantities. A lot of what’s valuable about science is finding causal links between things, but it’s only in rare and, often, rather artificial circumstances that such links are easy to show. What’s more often necessary is showing that as one quantity changes so does another, which allows one to suspect a link. Then, typically, one would look for a plausible reason they might have anything to do with one another, and look for ways to experiment and establish whether there is a link or not.

But just because there is a correlation doesn’t by itself mean that one thing necessarily has anything to do with another. The correlation could be coincidence, for example, or both quantities could be influenced by some other, confounding factor. To be worth mention in a decent journal, a correlation is probably going to be strong enough that it’s hard to believe it’s just coincidence, but there could yet be some confounding factor. And even if there is a causal link, in the complicated mess that is reality it can be difficult to discern which way the link flows. This is summarized in deductive logic by saying that correlation does not imply causation, but that uses deductive logic’s definition of “imply”.

In deductive logic to say “this implies that” means it is impossible for “this” to be true and “that” false simultaneously. It is perfectly permissible for both “this” and “that” to be true, and permissible for “this” to be false and “that” false, and — this is the point where Intro to Logic students typically crash — permissible for “this” to be false and “that” true. Colloquially, though, “imply” has a different connotation, something more along the lines of “this” and “that” have to both be false or both be true together. Don’t make that mistake on your logic test.
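The whole definition fits in one line of code, if you'd like to see it laid out. A tiny Python sketch of my own (the function name is mine, not standard notation):

```python
def implies(p, q):
    """Material implication: "p implies q" is false in exactly one
    case, the one where p is true and q is false."""
    return (not p) or q

# The full truth table.  Note the row that trips up Intro to Logic
# students: a false "this" with a true "that" still makes the
# implication true.
table = {(p, q): implies(p, q) for p in (True, False)
                               for q in (True, False)}
```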

When a logician says that correlation does not imply causation, she is saying that it is imaginable for the correlation to be true while the causation is false. She is not saying the causation is false; she is just saying that the case is not proved from the fact of a correlation being true. And that’s so; if we just knew two things were correlated we would have to experiment to find whether there is a causal link. But finding a correlation is one of the ways to start finding causal links; it’d be obviously daft not to use it as the start of one’s search. Anyway, that guy in about the fourth comment of every news report about a correlation just wants you to know it’s very important he tell you he’s smarter than journalists.

Saturday Morning Breakfast Cereal pops back up again (October 1) with an easier-to-describe joke about August Ferdinand Möbius and his rather famous strip, here applied to the old gag about walking to school uphill both ways. One hates to be a spoilsport, but Möbius was educated at home until 13, so this comic is not reliable as a shorthand biography of the renowned mathematician.

Eric the Circle has had a couple strips by “Griffinetsabine”, one on October 3, and another on the 7th of October, based on the Shape Singles Bar. Both strips are jokes about two points connecting by a line, suggesting that Griffinetsabine knew the premise was good for a couple of variants. I’d have spaced out the publication of them farther but perhaps this was the best that could be done.

Mikael Wulff and Anders Morgenthaler’s Truth Facts (September 30) — a panel strip that’s often engaging in showing comic charts — gives a guide to what the number of digits you’ve memorized says about you. (For what it’s worth, I peter out at “897932”.) I’m mildly delighted to find that their marker for Isaac Newton is more or less correct: Newton did work out pi to fifteen decimal places, by using his binomial theorem and a calculation of the area within a particular wedge of the circle. (As I make it out Wulff and Morgenthaler put Newton at fourteen decimal points, but they might have read references to Newton working out “fifteen decimal points” as meaning something different to what I do.) Newton’s was not the best calculation of pi in the 1660s when he worked it out — Christoph Grienberger, an Austrian Jesuit astronomer, had calculated 38 decimal places a generation earlier — but I can’t blame Wulff and Morgenthaler for supposing Newton to be a more recognizable name than Grienberger. I imagine if Einstein or Stephen Hawking had done any particularly unique work in calculating the digits of pi they’d have appeared on the chart too.

John Graziano’s Ripley’s Believe It or Not (October 1) — and don’t tell me that attribution doesn’t look weird — shares a story about the followers of the Ancient Greek mathematician, philosopher, and mystic Pythagoras, that they were forbidden to wear wool, eat beans, or pick up things they had dropped. I have heard the beans thing before and I think I’ve heard the wool prohibition before, but I don’t remember hearing about them not being able to pick up things before.

I’m not sure I can believe it, though: Pythagoras was a strange fellow, so far as the historical record is clear. It’s hard to be sure just what is true about him and his followers, though, and what is made up, either out of devoted followers building up the figure they admire or out of critics making fun of a strange fellow with his own little cult. Perhaps it’s so, perhaps it’s not. I would like to see a primary source, and I don’t think any exist.

The Little King skates a figure 8 that requires less tricky curving.

Otto Soglow’s The Little King for 29 February 1948.

Otto Soglow’s The Little King (October 5; originally run February 29, 1948) provides its normal gentle, genial humor in the Little King working his way around the problem of doing a figure 8.

How weird is it that three pairs of same-market teams made the playoffs this year?


Joseph Nebus:

The “God Plays Dice” blog has a nice little baseball-themed post built on the coincidence that a number of the teams in the postseason this year are from the same or at least neighboring markets — two from Los Angeles, a pair from San Francisco and Oakland, and another pair from Washington and Baltimore. It can’t be likely that this should happen much, but how unlikely is it? Michael Lugo works it out in what’s probably the easiest way to do it.

Originally posted on God plays dice:

The Major League Baseball postseason is starting just as I write this.

From the National League, we have Washington, St. Louis, Pittsburgh, Los Angeles, and San Francisco.
From the American League, we have Baltimore, Kansas City, Detroit, Los Angeles (Anaheim), and Oakland.

These match up pretty well geographically, and this hasn’t gone unnoticed: see for example the New York Times blog post “the 2014 MLB playoffs have a neighborly feel” (apologies for not providing a link; I’m out of NYT views for the month, and I saw this back when I wasn’t); a couple mathematically inclined Facebook friends of mine have mentioned it as well.

In particular there are three pairs of “same-market” teams in here: Washington/Baltimore, Los Angeles/Los Angeles, San Francisco/Oakland. How likely is that?

(People have pointed out St. Louis/Kansas City as being both in Missouri, but that’s a bit more of a judgment call, and St. Louis…


My Math Blog Statistics, September 2014


Since it’s the start of a new month it’s time to review statistics for the previous month, which gives me the chance to list a bunch of countries, which is strangely popular with readers. I don’t pretend to understand this, I just accept the inevitable.

In total views I haven’t seen much change the last several months: September 2014 looks to be closing out with about 558 pages viewed, not a substantial change from August’s 561, and triflingly fewer than July’s 589. The number of unique visitors has been growing steadily, though: 286 visitors in September, compared to 255 the month before, and 231 the month before that. One can choose to read this as the views per visitor dropping to 1.95, its lowest figure since March, but I’ll take it as more people finding things that interest them, at least.

As to what those things are — well, mostly it’s comic strip posts, which I suppose makes sense given that they’re quite accessible and often contain jokes people understand. The most popular articles for September 2014 were:

As usual the country sending me the greatest number of readers was the United States (347), with Canada (29), Austria (27), the United Kingdom (26), and Puerto Rico and Turkey (20 each) coming up close behind. My single-reader countries for September were Bahrain, Brazil, Costa Rica, Czech Republic, Estonia, Finland, Germany, Iceland, Jamaica, Kazakhstan, Malaysia, the Netherlands, Pakistan, Saudi Arabia, Slovenia, and Sweden. Finland, Germany, and Sweden were single-reader countries in August, too, but at least none of them were single-reader countries in July as well.

Among the search terms bringing people here the past month have been:

I got to my 17,882nd reader this month, a little short of that tolerably nice and round 18,000 readers. If I don’t come down with sudden-onset boringness, though, I’ll reach that in the next week or so, especially if I have a couple more days of twenty or thirty readers.

Reading the Comics, September 28, 2014: Punning On A Sunday Edition


I honestly don’t intend this blog to become nothing but talk about the comic strips, but then something like this Sunday happens where Comic Strip Master Command decided to send out math joke priority orders and what am I to do? And here I had a wonderful bit about the natural logarithm of 2 that I meant to start writing sometime soon. Anyway, for whatever reason, there’s a lot of punning going on this time around; I don’t pretend to explain that.

Jason Poland’s Robbie and Bobby (September 25) puns off of a “meth lab explosion” in a joke that I’ve seen passed around Twitter and the like but not in a comic strip, possibly because I don’t tend to read web comics until they get absorbed into the Gocomics.com collective.

Brian Boychuk and Ron Boychuk’s The Chuckle Brothers (September 26) shows how an infinity pool offers the chance to finally, finally, do a one-point perspective drawing just like the art instruction book says.

Bill Watterson’s Calvin and Hobbes (September 27, rerun) wrapped up the latest round of Calvin not learning arithmetic with a gag about needing to know the difference between the numbers of things and the values of things. It also surely helps the confusion that the (United States) dime is a tiny coin, much smaller in size than the penny or nickel that it far out-values. I’m glad I don’t have to teach coin values to kids.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (September 27) mentions Lagrange points. These are mathematically (and physically) very interesting because they come about from what might be the first interesting physics problem. If you have two objects in the universe, attracting one another gravitationally, then you can describe their behavior perfectly, using just freshman or even high school calculus. For that matter, describing their behavior is practically what Isaac Newton invented his calculus to do.

Add in a third body, though, and you’ve suddenly created a problem that just can’t be done by freshman calculus, or really, done perfectly by anything but really exotic methods. You’re left with approximations, analytic or numerical. (Karl Fritiof Sundman proved in 1912 that one could create an infinite series solution, but it’s not a usable solution. To get a desired accuracy requires so many terms and so much calculation that you’re better off not using it. This almost sounds like the classical joke about mathematicians, coming up with solutions that are perfect but unusable. It is the most extreme case of a possible-but-not-practical solution I’m aware of, if stories I’ve heard about its convergence rate are accurate. I haven’t tried to follow the technique myself.)

But just because you can’t solve every problem of a type doesn’t mean you can’t solve some of them, and the ones you do solve might be useful anyway. Joseph-Louis Lagrange did that, studying the problem of one large body — like a sun, or a planet — and one middle-sized body — a planet, or a moon — and one tiny body — like an asteroid, or a satellite. If the middle-sized body is orbiting the large body in a nice circular orbit, then, there are five special points, dubbed the Lagrange points. A satellite that’s at one of those points (with the right speed) will keep on orbiting at the same rotational speed that the middle body takes around the large body; that is, the system will turn as if the large, middle, and tiny bodies were fixed in place, relative to each other.

Two of these spots, dubbed numbers 4 and 5, are stable: if your tiny body is not quite in the right location that’s all right, because it’ll stay nearby, much in the same way that if you roll a ball into a pit it’ll stay in the pit. But three of these spots, numbers 1, 2, and 3, are unstable: if your tiny body is not quite on those spots, it’ll fall away, in much the same way if you set a ball on the peak of the roof it’ll roll off one way or another.

When Lagrange noticed these points there wasn’t any particular reason to think of them as anything but a neat mathematical construct. But the points do exist, and they can be stable even if the medium body doesn’t have a perfectly circular orbit, or even if there are other planets in the universe, which throw off the nice simple calculations a bit. Something like 1700 asteroids are known to exist in the number 4 and 5 Lagrange points for the Sun and Jupiter, and there are a handful known for Saturn and Neptune, and apparently at least five known for Mars. For Earth apparently there’s just the one known to exist, catchily named 2010 TK7, discovered in October 2010, although I’d be surprised if that were the only one. They’re just small.

Professor Peter Peddle has the crazy idea of studying boxing scientifically and preparing strategy accordingly.

Elliot Caplin and John Cullen Murphy’s Big Ben Bolt, from the 23rd of August, 1953 (rerun the 28th of September, 2014).

Elliot Caplin and John Cullen Murphy’s Big Ben Bolt (September 28, originally run August 23, 1953) has been on the Sunday strips now running a tale about a mathematics professor, Peter Peddle, who’s threatening to revolutionize Big Ben Bolt’s boxing world by reducing it to mathematical abstraction; past Sunday strips have even shown the rather stereotypically meek-looking professor overwhelming much larger boxers. The mathematics described here is nonsense, of course, but it’d be asking a bit of the comic strip writers to have a plausible mathematical description of the perfect boxer, after all.

But it’s hard for me anyway to not notice that the professor’s approach is really hard to gainsay. The past generation of baseball, particularly, has been revolutionized by a very mathematical, very rigorous bit of study, looking at questions like how many pitches can a pitcher actually throw before he loses control, and where a batter is likely to hit based on past performance (of this batter and of batters in general), and how likely is this player to have a better or a worse season if he’s signed on for another year, and how likely is it he’ll have a better enough season than some cheaper or more promising player? Baseball is extremely well structured to ask these kinds of questions, with football almost as good for it — else there wouldn’t be fantasy football leagues — and while I am ignorant of modern boxing, I would be surprised if a lot of modern boxing strategy weren’t being studied in Professor Peddle’s spirit.

Eric the Circle (September 28), this one by Griffinetsabine, goes to the Shapes Singles Bar for a geometry pun.

Bill Amend’s FoxTrot (September 28) (and not a rerun; the strip is new runs on Sundays) jumps on the Internet Instructional Video bandwagon that I’m sure exists somewhere, with child prodigy Jason Fox having the idea that he could make mathematics instruction popular enough to earn millions of dollars. His instincts are probably right, too: instructional videos that feature someone who looks cheerful and to be having fun and maybe a little crazy — well, let’s say eccentric — are probably the ones that will be most watched, at least. It’s fun to see people who are enjoying themselves, and the odder they act the better up to a point. I kind of hate to point out, though, that Jason Fox in the comic strip is supposed to be ten years old, implying that (this year, anyway) he was born nine years after Bob Ross died. I know that nothing ever really goes away anymore, but, would this be a pop culture reference that makes sense to Jason?

Tom Thaves’s Frank and Ernest (September 28) sets up the idea of Euclid as a playwright, offering a string of geometry puns.

Jef Mallet’s Frazz (September 28) wonders about why trains show up so often in story problems. I’m not sure that they do, actually — haven’t planes and cars taken their place here, too? — although the reasons aren’t that obscure. Questions about the distance between things changing over time let you test a good bit of arithmetic and algebra while being naturally about stuff it’s reasonable to imagine wanting to know. What more does the homework-assigner want?

Zach Weinersmith’s Saturday Morning Breakfast Cereal (September 28) pops back up again with the prospect of blowing one’s mind, and it is legitimately one of those amazing things, that e^{i \pi} = -1 . It is a remarkable relationship between a string of numbers each of which is mind-blowing in its own way — negative 1, and pi, and the base of the natural logarithms e, and dear old i (which, multiplied by itself, is equal to negative 1) — and here they are all bundled together in one, quite true, relationship. I do have to wonder, though, whether anyone who would in a social situation like this understand being told “e raised to the i times pi power equals negative one”, without the framing of “we’re talking now about exponentials raised to imaginary powers”, wouldn’t have already encountered this and had some of the mind-blowing potential worn off.
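And if you'd like the computer's second opinion on the identity, Python's standard complex-math library will happily oblige. A quick check of my own:

```python
import cmath

# e raised to the (i times pi) power.  Floating-point arithmetic
# can't represent pi exactly, so the result lands a hair's breadth
# from exactly -1 rather than on it: the real part is -1 and the
# imaginary part is a stray speck around 10^-16.
value = cmath.exp(1j * cmath.pi)
```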