My love read a thread about the < and > signs, and mnemonics people had learned to tell which was which. And my love wondered, is a mnemonic needed? The symbol is wider on the side with the larger quantity; that’s what it means, right? Why imagine an alligator that’s already swallowed the smaller and is ready to eat the larger? In my elementary school it was goldfish, not alligators. Much easier to draw them in.
All right, but just because an interpretation seems obvious doesn’t mean it is. The questions are, who introduced the < and > symbols to mathematics, and what were they thinking?
And here we get complications. The symbols first appear, meaning what they do today, in Artis Analyticae Praxis ad Aequationes Algebraicas Resolvendas (“The Analytical Art by which Algebraic Equations can be Resolved”). This is a book, by Thomas Harriot, published in 1631. Thomas Harriot was one of the great English mathematicians of the late 16th and early 17th centuries. He worked on the longitude problem, on optics, on astronomy. Harriot’s observations are our first record of sunspots. He also observed what we now call Halley’s Comet, leaving records that were later used to work out its orbit. And he worked on how to solve equations, in ways that look at least recognizably close to what we do today.
There is a tradition that holds Harriot drew these symbols from the arm markings on a Native American. Harriot did sail to the New World at least once. He was on Walter Raleigh’s 1585-86 expedition to Virginia and observed the solar eclipse of April 1585, a rare chance to calculate the longitude of a ship at sea. So that’s possible. But there is also an argument that Harriot (or his editor) drew from the example of the equals sign.
The = sign we first see in the mid-16th century, written by Robert Recorde, another of the great English mathematicians. Recorde did write, in The Whetstone of Witte (1557), that he used parallel lines of a common length because no two things could be more equal. Good mnemonic there. It seems Harriot (or his editor) interpreted the common distance between the lines in the equals sign as the thing kept equal. So, on the side of the symbol with the greater number, make the distance between lines greater. On the lower number’s side, make the distance between lines smaller. Which is another useful mnemonic for the symbol, if you need one.
It’s not an inevitable scheme. William Oughtred also had symbols for less-than and greater-than. Oughtred’s another vaguely familiar name in mathematics symbols. He gave us the × symbol for multiplication, and the ‘sin’ and ‘cos’ abbreviations for the trig functions. He also pioneered slide rules. Oughtred’s symbols look like a block-letter U set on its side, with the upper leg longer than the lower. The vertical stroke and the shorter horizontal stroke would be on the left, to represent the left being greater than the right. The vertical stroke and shorter horizontal stroke would be on the right, for the left being less than the right. That is, the “open” side would face the smaller of the numbers, opposite to what we do with < and >.
And that seems to be as much as can be definitely said. If I’m reading right, we don’t have Harriot’s (or editor’s) statement of what inspired these symbols. We have guesses that seem reasonable, but that might only seem reasonable because we’ve brought our own interpretations to it. I’d love to know if there’s better information available.
There were just a handful of comic strips that mentioned mathematical topics I found substantial. Of those that did, computational science came up a couple times. So that’s how we got to here.
Rick Detorie’s One Big Happy for the 17th has Joe writing an essay on the history of computing. It’s basically right, too, within the confines of space and understandable mistakes like replacing Pennsylvania with an easier-to-spell state. And within the confines of simplification for the sake of getting the idea across briefly. Most notable is Joe explaining ENIAC as “the first electronic digital computer”. Anyone calling anything “the first” of an invention is simplifying history, possibly to the point of misleading. But we must simplify any history to have it be understandable. ENIAC is among the first computers that anyone today would agree is of a kind with the laptop I use. And it’s certainly the one that, among its contemporaries, most captured the public imagination.
Incidentally, Herman Hollerith was born on Leap Day, 1860; this coming year will in that sense see only his 39th birthday.
Ryan North’s Dinosaur Comics for the 18th is based on the question of whether P equals NP. This is, as T-Rex says, the greatest unsolved problem in computer science. These are what appear to be two different kinds of problems. Some of them we can solve in “polynomial time”, with the number of steps to find a solution growing as some polynomial function of the size of the problem. Others seem to be “non-polynomial”, meaning the number of steps to find a solution grows as … something not a polynomial.
You see one problem. Not knowing a way to solve a problem in polynomial time does not necessarily mean there isn’t a solution. It may mean we just haven’t thought of one. If there is a way we haven’t thought of, then we would say P equals NP. And many people assume that very exciting things would then follow. Part of this is because computational complexity researchers know that many NP problems are isomorphic to one another. That is, we can describe any of these problems as a translation of another of these problems. That’s the other part of what makes the joke: the declaration that ‘whether God likes poutine’ is isomorphic to the question ‘does P equal NP’.
We tend to assume, also, that if P does equal NP then NP problems, such as breaking public-key cryptography, are all suddenly easy. This isn’t necessarily guaranteed. When we describe something as polynomial or non-polynomial time we’re talking about the pattern by which the number of steps needed to find the solution grows. By that standard, an algorithm that takes one million steps plus one billion times the size-of-the-problem to the one-trillionth power is polynomial time. An algorithm that takes a number of steps equal to two raised to the size-of-the-problem, divided by one quintillion (rounded up to the next whole number), is non-polynomial. But for most any problem you’d care to do, this non-polynomial algorithm will be done sooner. If it turns out P does equal NP, we still don’t necessarily know that NP problems are practical to solve.
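To make that concrete, here is a small sketch with scaled-down, made-up constants (the trillion-power polynomial above is too big to compute directly); the names and numbers are my own illustration, not anything standard:

```python
# Made-up example: a polynomial-time algorithm with ugly constants
# versus a non-polynomial one with friendly constants.
def polynomial_steps(n):
    # Polynomial time: one billion times n to the tenth power.
    return 10**9 * n**10

def exponential_steps(n):
    # Non-polynomial time: 2^n divided by a million, rounded up
    # (integer ceiling via the negation trick).
    return -(-2**n // 10**6)

# For modest problem sizes the "slow" exponential algorithm actually
# finishes first; only for large n does the polynomial one win out.
assert exponential_steps(50) < polynomial_steps(50)
assert exponential_steps(200) > polynomial_steps(200)
```

The crossover here happens somewhere around n of 119, far past the sizes of many problems one would actually care to run.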
Bil Keane and Jeff Keane’s The Family Circus for the 20th has Dolly explaining to Jeffy about the finiteness of the alphabet and the infinity of numbers. I remember in my childhood coming to understand this and feeling something unjust in the difference between the kinds of symbols. That we can represent any of those whole numbers with just ten symbols (thirteen, if we include commas, decimals, and a multiplication symbol for the sake of using scientific notation) is an astounding feat of symbolic economy.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 21st builds on the statistics of genetics. In studying the correlations between one thing and another we look at something which varies, usually as the result of many factors, including some plain randomness. If there is a correlation between one variable and another we usually can describe how much of the change in one quantity depends on the other. This is what the scientist means in saying the presence of this one gene accounts for 0.1% of the variance in eeeeevil. The way this is presented, the activity of one gene is responsible for about one-thousandth of the level of eeeeevil in the person.
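As a crude sketch of that bookkeeping, with made-up numbers and my own simplified additive model:

```python
# Toy model: treat eeeeevil as a sum of independent contributions,
# one from this gene and one from everything else. The gene's share
# of the total variance is then a simple ratio.
def variance_explained(v_gene, v_everything_else):
    return v_gene / (v_gene + v_everything_else)

# If the gene contributes 0.001 units of variance and all other
# factors contribute 0.999, it accounts for 0.1% of the variance,
# as the scientist in the strip says.
share = variance_explained(0.001, 0.999)
assert abs(share - 0.001) < 1e-9
```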
As the father observes, this doesn’t seem like much. This is because there are a lot of genes describing most traits. And that’s before we consider epigenetics, the factors besides what is in DNA that affect how an organism develops. I am, unfortunately, too ignorant of the language of genetics to be able to say what a typical variation for a single gene would be, and thus to check whether Weinersmith has the scale of numbers right.
We come now almost to the end of the Summer 2017 A To Z. Possibly also the end of all these A To Z sequences. Gaurish, of For the love of Mathematics, proposed that I talk about the obvious logical choice. The last promising thing I hadn’t talked about. I have no idea what to do for future A To Z’s, if they’re even possible anymore. But that’s a problem for some later time.
Some good advice that I don’t always take. When starting a new problem, make a list of all the things that seem likely to be relevant. Problems that are worth doing are usually about things. They’ll be quantities like the radius or volume of some interesting surface. The amount of a quantity under consideration. The speed at which something is moving. The rate at which that speed is changing. The length something has to travel. The number of nodes something must go across. Whatever. This all sounds like stuff from story problems. But most interesting mathematics is from a story problem; we want to know what this property is like. Even if we stick to a purely mathematical problem, there’s usually a couple of things that we’re interested in and that we describe. If we’re attacking the four-color map theorem, we have the number of territories to color. We have, for each territory, the number of territories that touch it.
Next, select a name for each of these quantities. Write it down, in the table, next to the term. The volume of the tank is ‘V’. The radius of the tank is ‘r’. The height of the tank is ‘h’. The fluid is flowing in at a rate, oh, let’s say ‘q’, since ‘r’ is already spoken for. The fluid is flowing out at a rate ‘s’. And so on. You might take a moment to go through and think out which of these variables are connected to which other ones, and how. Volume, for example, is surely something to do with the radius times something to do with the height. It’s nice to have that stuff written down. You may not know the thing you set out to solve, but you at least know you’ve got this under control.
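The same table, sketched as code; the names are my hypothetical choices, with ‘q’ standing in for the inflow rate since ‘r’ is spoken for by the radius:

```python
import math

# V: volume of the (cylindrical) tank, r: its radius, h: the fluid
# height, q: rate fluid flows in, s: rate fluid flows out.
def volume(r, h):
    # Volume really is something to do with the radius times
    # something to do with the height: for a cylinder, V = pi*r^2*h.
    return math.pi * r**2 * h

def net_inflow(q, s):
    # The volume changes at the inflow rate minus the outflow rate.
    return q - s

assert abs(volume(1.0, 1.0) - math.pi) < 1e-12
assert net_inflow(3.0, 1.0) == 2.0
```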
I recommend this. It’s a good way to organize your thoughts. It establishes what things you expect you could know, or could want to know, about the problem. It gives you some hint how these things relate to each other. It sets you up to think about what kinds of relationships you figure to study when you solve the problem. It gives you a lifeline, when you’re lost in a sea of calculation. It’s reassurance that these symbols do mean something. Better, it shows what those things are.
I don’t always do it. I have my excuses. If I’m doing a problem that’s very like one I’ve already recently done, the things affecting it are probably the same. The names to give these variables are probably going to be about the same. Maybe I’ll make a quick sketch to show how the parts of the problem relate. If it seems like less work to recreate my thoughts than to write them down, I skip writing them down. Not always good practice. I tell myself I can always go back and do things the fully right way if I do get lost. So far that’s been true.
So, the names. Suppose I am interested in, say, the length of the longest rod that will fit around this hallway corridor. Then I am in a freshman calculus book, yes. Fine. Suppose I am interested in whether this pinball machine can be angled up the flight of stairs that has a turn in it. Then I will measure things like the width of the pinball machine. And the width of the stairs, and of the landing. I will measure this carefully. Pinball machines are heavy and there are many hilarious sad stories of people wedging them into hallways and stairwells four and a half stories up from the street. But: once I have identified, say, ‘width of pinball machine’ as a quantity of interest, why would I ever refer to it as anything but?
This is no dumb question. It is always dangerous to lose the link between the thing we calculate and the thing we are interested in. Without that link we are less able to notice mistakes in either our calculations or the thing we mean to calculate. Without that link we can’t do a sanity check, the reassurance that, no, it isn’t plausible we could fit something 96 feet long around the corner. Or the chance to notice that an answer of six square feet, for a length, means something has gone wrong. It is common advice in programming computers to always give variables meaningful names. Don’t write ‘T’ when ‘Total’ or, better, ‘Total_Value_Of_Purchase’ is available. Why do we disregard this in mathematics, and switch to ‘T’ instead?
First reason is, well, try writing this stuff out. Your hand (h) will fall off (foff) in about fifteen minutes, twenty seconds. (15′ 20”). If you’re writing a program, the programming environment you have will auto-complete the variable after one or two letters in. Or you can copy and paste the whole name. It’s still good practice to leave a comment about what the variable should represent, if the name leaves any reasonable ambiguity.
Another reason is that sure, we do specific problems for specific cases. But a mathematician is naturally drawn to thinking of general problems, in abstract cases. We see something in common between the problem “a length and a quarter of the length is fifteen feet; what is the length?” and the problem “a volume plus a quarter of the volume is fifteen gallons; what is the volume?”. That one is about lengths and the other about volumes doesn’t concern us. We see a saving in effort by separating the quantity of a thing from the kind of the thing. This restores danger. We must think, after we are done calculating, about whether the answer could make sense. But we can minimize that, we hope. At the least we can check once we’re done to see if our answer makes sense. Maybe even whether it’s right.
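Worked out, the two problems are the same equation wearing different units:

```latex
x + \tfrac{1}{4}x = 15
\quad\Longrightarrow\quad
\tfrac{5}{4}x = 15
\quad\Longrightarrow\quad
x = 12
```

Twelve feet in the one case, twelve gallons in the other; the algebra never had to know which.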
For centuries, as the things we now recognize as algebra developed, we would use words. We would talk about the “thing” or the “quantity” or “it”. Some impersonal name, or convenient pronoun. This would often get shortened because anything you write often you write shorter. “Re”, perhaps. In the late 16th century we start to see the “New Algebra”. Here mathematics starts looking like … you know … mathematics. We start to see stuff like “addition” represented with the + symbol instead of an abbreviation for “addition” or a p with a squiggle over it or some other shorthand. We get equals signs. You start to see decimals and exponents. And we start to see letters used in place of numbers whose value we don’t know.
There are a couple kinds of “numbers whose value we don’t know”. One is the number whose value we don’t know, but hope to learn. This is the classic variable we want to solve for. Another kind is the number whose value we don’t know because we don’t care. I mean, it has some value, and presumably it doesn’t change over the course of our problem. But it’s not like our work will be so different if, say, the tank is two feet high rather than four.
Is there a problem? If we pick our letters to fit a specific problem, no. Presumably all the things we want to describe have some clear name, and some letter that best represents the name. It’s annoying when we have to consider, say, the pinball machine width and the corridor width. But we can work something out.
But what about general problems?
Is y = e^(mx) + log_e(cos(x) + b) an easy problem to solve?
If we want to figure what ‘m’ is, yes. Similarly ‘y’. If we want to know what ‘b’ is, it’s tedious, but we can do that. If we want to know what ‘e’ is? Run and hide, that stuff is crazy. If you have to, do it numerically and accept an estimate. Don’t try figuring what that is.
And so we’ve developed conventions. There are some letters that, except in weird circumstances, are coefficients. They’re numbers whose value we don’t know, but either don’t care about or could look up. And there are some that, by default, are variables. They’re the ones whose value we want to know.
These conventions started forming, as mentioned, in the late 16th century. François Viète here made a name for himself that lasts, among mathematics historians at least. His texts described how to do algebra problems in the sort of procedural methods that we would recognize as algebra today. And he had a great idea for these letters. Use the whole alphabet, if needed. Use the consonants to represent the coefficients, the numbers we know but don’t care what they are. Use the vowels to represent the variables, whose values we want to learn. So he would look at that equation and see right away: it’s a terrible mess. (I exaggerate. He doesn’t seem to have known the = sign, and I don’t know offhand when ‘log’ and ‘cos’ became common. But suppose the rest of the equation were translated into his terminology.)
It’s not a bad approach. Besides the mnemonic value of consonant-coefficient, vowel-variable, it’s true that we usually have fewer variables than anything else. The more variables in a problem the harder it is. If someone expects you to solve an equation with ten variables in it, you’re excused for refusing. So five or maybe six or possibly seven choices for variables is plenty.
But it’s not what we settled on. René Descartes had a better idea. He had a lot of them, but here’s one. Use the letters at the end of the alphabet for the unknowns. Use the letters at the start of the alphabet for coefficients. And that is, roughly, what we’ve settled on. In my example nightmare equation, we’d suppose ‘y’ to probably be the variable we want to solve for.
And so, and finally, x. It is almost the variable. It says “mathematics” in only two strokes. Even π takes more writing. Descartes used it. We follow him. It’s way off at the end of the alphabet. It starts few words, very few things, almost nothing we would want to measure. (Xylem … mass? Flow? What thing is the xylem anyway?) Even mathematical dictionaries don’t have much to say about it. The letter transports almost no connotations, no messy specific problems to it. If it suggests anything, it suggests the horizontal coordinate in a Cartesian system. It almost is mathematics. It signifies nothing in itself, but long use has given it an identity as the thing we hope to learn by study.
And pirate treasure maps. I don’t know when ‘X’ became the symbol of where to look for buried treasure. My casual reading suggests “never”. Treasure maps don’t really exist. Maps in general don’t work that way. Or at least didn’t before cartoons. X marking the spot seems to be the work of Robert Louis Stevenson, renowned for creating a fanciful map and then putting together a book to justify publishing it. (I jest. But according to Simon Garfield’s On The Map: A Mind-Expanding Exploration of the Way The World Looks, his map did get lost on the way to the publisher, and he had to re-create it from studying the text of Treasure Island. This delights me to no end.) It makes me wonder if Stevenson was thinking of x’s service in mathematics. But the advantages of x as a symbol are hard to ignore. It highlights a point clearly. It’s fast to write. Its use might be coincidence.
But it is a letter that does a needed job really well.
I don’t actually like it when a split week has so many more comics one day than the next, but I also don’t like splitting across a day if I can avoid it. This week, I had to do a little of both since there were so many comic strips that were relevant enough on the 8th. But they were dominated by the idea of going back to school, yet.
Randy Glasbergen’s Glasbergen Cartoons rerun for the 8th is another back-to-school gag. And it uses arithmetic as the mathematics at its most basic. Arithmetic might not be the most fundamental mathematics, but it does seem to be one of the parts we understand first. It’s probably last to be forgotten even on a long summer break.
Mark Pett’s Mr Lowe rerun for the 8th is built on the familiar old question of why learn arithmetic when there’s computers. Quentin is unconvinced of this as motive for learning long division. I’ll grant the case could be made better. I admit I’m not sure how, though. I think long division is good as a way to teach, especially, the process of estimating and improving estimates of a calculation. There’s a lot of real mathematics in doing that.
Guy Gilchrist’s Nancy for the 8th is another back-to-school strip. Nancy’s faced with “this much math” so close to summer. Her given problem’s a bit of a mess to me. But it’s mostly testing whether the student’s got the hang of the order of operations. And the instructor clearly hasn’t got the phrasing right. People can ask whether we should parse “12 divided by 3 times 4” as “(12 divided by 3) times 4” or as “12 divided by (3 times 4)”, and that does make a major difference. Multiplication commutes; you can do it in any order. Division doesn’t. Leaving ambiguous phrasing is the sort of thing you learn, instinctively, to avoid. Nancy would be justified in refusing to do the problem on the grounds that there is no unambiguous way to evaluate it, and that the instructor surely did not mean for her to evaluate it all four different plausible ways.
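Programming languages settle the ambiguity by decree, which makes a handy way to see both readings side by side:

```python
# Python, like most programming languages, parses "12 / 3 * 4" by
# fiat: division and multiplication share precedence and group left
# to right. English phrasing has no such rule.
left_to_right = 12 / 3 * 4    # parsed as (12 / 3) * 4
grouped = 12 / (3 * 4)        # the other plausible reading

assert left_to_right == 16.0
assert grouped == 1.0
```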
By the way, I’ve seen going around Normal Person Twitter this week a comment about how they just discovered the division symbol ÷, the obelus, is “just” the fraction bar with dots above and below where the unknown numbers go. I agree this is a great mnemonic for understanding what is being asked for with the symbol. But I see no evidence that this is where the symbol, historically, comes from. We first see ÷ used for division in the writings of Johann Heinrich Rahn, in 1659, and the symbol gained popularity particularly when John Pell picked it up nine years later. But it’s not like Rahn invented the symbol out of nowhere; it had been used for subtraction for over 125 years at that point. There were also a good number of writers using : or / or \ for division. There were some people using a center dot before and after a / mark for this, as if the % sign had fallen on its side. That ÷ gained popularity in English and American writing seems to be a quirk of fate, possibly augmented by it being relatively easy to produce on a standard typewriter. (Florian Cajori notes that the National Committee on Mathematical Requirements recommended dropping ÷ altogether in favor of a symbol that actually has use in non-mathematical life, the / mark. The Committee recommended this in 1923, so you see how well the reform agenda is doing.)
Mark Leiknes’s Cow and Boy rerun for the 9th only mentions mathematics, and that as a course that Billy would rather be skipping. But I like the comic strip and want to promote its memory as much as possible. It’s a deeply weird thing, because it has something like 400 running jokes, and it’s hard to get into because the first couple times you see a pastoral conversation interrupted by an orca firing a bazooka at a cat-helicopter while a panda brags of blowing up the moon it seems like pure gibberish. If you can get through that, you realize why this is funny.
Dave Blazek’s Loose Parts for the 9th uses chalkboards full of stuff as the sign of a professor doing serious thinking. Mathematics is well-suited for chalkboards, at least in comic strips. It conveys a lot of thought and doesn’t need much preplanning. Although a joke about the difficulties in planning out blackboard use does take that planning. Yes, there is a particular pain that comes from having more stuff to write down in the quick yet easily collaborative medium of the chalkboard than there is board space to write.
Brian Basset’s Red and Rover for the 9th also really only casually mentions mathematics. But it’s another comic strip I like a good deal so would like to talk up. Anyway, it does show Red discovering he doesn’t mind doing mathematics when he sees the use.
As though to reinforce how nothing was basically wrong, Comic Strip Master Command sent a normal number of mathematically themed comics around this past week. They bunched the strips up in the first half of the week, but that will happen. It was a fun set of strips in any event.
Rob Harrell’s Adam @ Home for the 11th tells of a teacher explaining division through violent means. I’m all for visualization tools and if we are going to use them, the more dramatic the better. But I suspect Mrs Clark’s students will end up confused about what exactly they’ve learned. If a doll is torn into five parts, is that communicating that one divided by five is five? If the students were supposed to identify the mass of the parts of the torn-up dolls as the result of dividing one by five, was that made clear to them? Maybe it was. But there’s always the risk in a dramatic presentation that the audience will misunderstand the point. The showier the drama the greater the risk, it seems to me. But I did only get the demonstration secondhand; who knows how well it was done?
Greg Cravens’ The Buckets for the 11th has the kid, Toby, struggling to turn a shirt backwards and inside-out without taking it off. As the commenters note, this is the sort of problem we get into all the time in topology. The field is about what we can say about shapes when we don’t worry about distance. If all we know about a shape is the ways it’s connected, the number of holes it has, whether we can distinguish one side from another, what else can we conclude? I believe Gocomics.com commenter Mike is right: take one hand out the bottom of the shirt and slide it into the other sleeve from the outside end, and proceed from there. But I have not tried it myself. I haven’t yet started wearing long-sleeve shirts for the season.
Bill Amend’s FoxTrot for the 11th — a new strip — does a story problem featuring pizzas cut into some improbable numbers of slices. I don’t say it’s unrealistic someone might get this homework problem. Just that the story writer should really ask whether they’ve ever seen a pizza cut into sevenths. I have a faint memory of being served a pizza cut into tenths by some daft pizza shop, which implies fifths is at least possible. Sevenths I refuse, though.
Mark Tatulli’s Heart of the City for the 12th plays on the show-your-work directive many mathematics assignments carry. I like Heart’s showiness. But the point of showing your work is because nobody cares what (say) 224 divided by 14 is. What’s worth teaching is the ability to recognize what approaches are likely to solve what problems. What’s tested is whether someone can identify a way to solve the problem that’s likely to succeed, and whether that can be carried out successfully. This is why it’s always a good idea, if you are stumped on a problem, to write out how you think this problem should be solved. Writing out what you mean to do can clarify the steps you should take. And it can guide your instructor to whether you’re misunderstanding something fundamental, or whether you just missed something small, or whether you just had a bad day.
Norm Feuti’s Gil for the 12th, another rerun, has another fanciful depiction of showing your work. The teacher’s got a fair complaint in the note. We moved away from tally marks as a way to denote numbers for reasons. Twelve depictions of apples are harder to read than the number 12. And they’re terrible if we need to depict numbers like one-half or one-third. Might be an interesting side lesson in that.
Brian Basset’s Red and Rover for the 14th is a rerun and one I’ve mentioned in these parts before. I understand Red getting fired up to be an animator by the movie. It’s been a while since I watched Donald Duck in Mathmagic Land but my recollection is that while it was breathtaking and visually inventive it didn’t really get at mathematics. I mean, not at noticing interesting little oddities and working out whether they might be true always, or sometimes, or almost never. There is a lot of play in mathematics, especially in the exciting early stages where one looks for a thing to prove. But it’s also in seeing how an ingenious method lets you get just what you wanted to know. I don’t know that the short demonstrates enough of that.
Bud Blake’s Tiger rerun for the 15th gives Punkinhead the chance to ask a question. And it’s a great question. I’m not sure what I’d say arithmetic is, not if I’m going to be careful. Offhand I’d say arithmetic is a set of rules we apply to a set of things we call numbers. The rules are mostly about how we can take two numbers and a rule and replace them with a single number. And these turn out to correspond uncannily well with the sorts of things we do with counting, combining, separating, and doing some other stuff with real-world objects. That it’s so useful is why, I believe, arithmetic and geometry were the first mathematics humans learned. But much of geometry we can see. We can look at objects and see how they fit together. Arithmetic we have to infer from the way the stuff we like to count works. And that’s probably why it’s harder to do when we start school.
What’s not good about that as an answer is that it actually applies to a lot of mathematical constructs, including those crazy exotic ones you sometimes see in science press. You know, the ones where there’s this impossibly complicated tangle with ribbons of every color and a headline about “It’s Revolutionary. It’s 46-Dimensional. It’s Breaking The Rules Of Geometry. Is It The Shape That Finally Quantizes Gravity?” or something like that. Well, describe a thing vaguely and it’ll match a lot of other things. But also when we look to new mathematical structures, we tend to look for things that resemble arithmetic. Group theory, for example, is one of the cornerstones of modern mathematical thought. It’s built around having a set of things on which we can do something that looks like addition. So it shouldn’t be a surprise that many groups have a passing resemblance to arithmetic. Mathematics may produce universal truths. But the ones we see are also ones we are readied to see by our common experience. Arithmetic is part of that common experience.
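A miniature example of that addition-like structure, which brute force can check; the code is my illustration, not anything from the strip:

```python
# The numbers 0 through 4, under addition modulo 5, form a group.
elements = list(range(5))

def add(a, b):
    return (a + b) % 5

# Closure and associativity:
assert all(add(a, b) in elements for a in elements for b in elements)
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a in elements for b in elements for c in elements)
# An identity element (0), and an inverse for every element:
assert all(add(a, 0) == a for a in elements)
assert all(any(add(a, b) == 0 for b in elements) for a in elements)
```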
I concede this isn’t a set of mathematically-themed comics that inspires deep discussions. That’s all right. It’s got three that I can give pictures for, which is important. Also it means I can wrap up April with another essay. This gives me two months in a row of posting something every day, and I’d have bet that couldn’t happen.
Ted Shearer’s Quincy for the 1st of March, 1977, rerun the 25th of April, is not actually a “mathematics is useless in the real world” comic strip. It’s more about the uselessness of any school stuff in the face of problems like the neighborhood bully. Arithmetic just fits on the blackboard efficiently. There’s some sadness in the setting. There’s also some lovely artwork, though, and it’s worth noticing it. The lines are nice and expressive, and the greyscale wash well-placed. It’s good to look at.
dro-mo for the 26th I admit I’m not sure what exactly is going on. I suppose it’s a contest to describe the most interesting geometric shape. I believe the fourth panel is meant to be a representation of the tesseract, the four-dimensional analog of the cube. This causes me to realize I don’t remember any illustrations of a five-dimensional hypercube. Wikipedia has a couple, but they’re a bit disappointing. They look like the four-dimensional cube with some more lines. Maybe it has some more flattering angles somewhere.
Bill Amend’s FoxTrot for the 26th (a rerun from the 3rd of May, 2005) poses a legitimate geometry problem. Amend likes to do this. It was one of the things that first attracted me to the comic strip, actually, that his mathematics or physics or computer science jokes were correct. “Determine the sum of the interior angles for an N-sided polygon” makes sense. The commenters at Gocomics.com are quick to say what the sum is. If there are N sides, the interior angles sum up to (N – 2) times 180 degrees. I believe the commenters misread the question. “Determine”, to me, implies explaining why the sum is given by that formula. That’s a more interesting question and I think still reasonable for a freshman in high school. I would do it by way of triangles.
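The triangle approach I have in mind can be sketched like so, for a convex polygon at least:

```latex
% Pick one vertex and draw the diagonals from it to each of the
% N - 3 vertices not adjacent to it. That slices the polygon into
% N - 2 triangles whose angles, taken together, are exactly the
% polygon's interior angles. Each triangle's angles sum to 180
% degrees, so:
\text{sum of interior angles} = (N - 2) \cdot 180^\circ
```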
David L Hoyt and Jeff Knurek’s Jumble for the 27th of April gives us another arithmetic puzzle. As often happens, you can solve the surprise-answer by looking hard at the cartoon and picking up the clues from there. And it gives us an anthropomorphic-numerals gag for this collection.
Bill Holbrook’s On The Fastrack for the 28th of April has the misanthropic Fi explain some of the glories of numbers. As she says, they can be reliable, consistent partners. If you have learned something about ‘6’, then it not only is true, it must be true, at least as long as we are using ‘6’ to mean the same thing. That sort of certainty transcends ordinary knowledge, and it’s part of what’s so wonderful about mathematics.
Fi describes ‘x’ and ‘y’ as “shifty little goobers”, which is a bit unfair. ‘x’ and ‘y’ are names we give to numbers when we don’t yet know what values they have, or when we don’t care what they have. We’ve settled on those names mostly in imitation of René Descartes. Trying to do without names is a mess. You can do it, but it’s rather like novels in which none of the characters has a name. The most skilled writers can carry that off. The rest of us make a horrid mess. So we give placeholder names. Before ‘x’ and ‘y’ mathematicians would use names like ‘the thing’ (well, ‘res’) or ‘the heap’, anything that the quantity we talk about might measure. It’s done better that way.
Elzie Segar’s Thimble Theater is a comic strip you maybe vaguely remember hearing about for some reason. The reason is that, ten years into its run, Segar discovered a charismatic sailor named Popeye. People who read my humor blog know I’m a bit Popeye-mad, even still. Comics Kingdom has in its Vintage comics run the strips from the first story where Popeye appeared. This isn’t it. That story resolved, and the comic tried to carry on with the old cast. It didn’t last. After a few dull weeks Segar started making excuses to put Popeye back on-screen. It’s quite like Dickens’s Pickwick Papers and the discovery of Sam Weller, right down to this being the character that made the author famous.
As part of Segar’s excuses to keep Popeye on panel, nominal lead Castor Oyl has hired a tutor. It’s not going well. I blame the tutor, who’s berating Popeye for being wrong and giving no hint what to do right. But in this installment, originally run the 14th of September, 1929, we get around to arithmetic. Popeye is either a natural, has experience we don’t know about, or is quite lucky. It wouldn’t be absurd for Popeye to be good at some kinds of arithmetic. If he’s trained in navigation he’d probably pick up a good bit of practice calculating. I don’t know anything but the most trivial points of how to calculate one’s position at sea. So I can’t say if it’s plausible Popeye would have practiced calculations like “six and a half times 656”. He may just be lucky.
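For what it’s worth, the arithmetic itself is easy to check; here’s a navigator’s-style split of the problem (my own illustration, not anything from the strip):

```python
# "Six and a half times 656": six 656s, plus half of another one.
whole_part = 6 * 656     # 3936
half_part = 656 // 2     # 328
answer = whole_part + half_part
assert answer == 4264
assert answer == 6.5 * 656   # matches the direct multiplication
```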
Mark Tatulli’s Lio for the 26th features soap bubbles made into geometry diagrams. I like that; it’s cute. Coincidentally, Guy Gilchrist’s Nancy for the 29th turns the pieces of a geometry puzzle into pizza. I think that’s a lesser version of the joke. It’s less absurd.
Nick Seluk’s The Awkward Yeti for the 2nd of March is a Schrödinger’s Cat reference alongside a butterfly reference. It seems Comic Strip Master Command challenges my “I’ve said all I can say, for now, about Schrödinger’s Cat and Chaos Butterflies” policy.
Missy Meyer’s Holiday Doodles mentions that the 2nd of March was World Maths Day. I hadn’t heard about this; had you? Wikipedia indicates it’s a worldwide mathematics competition event sponsored by 3P Learning. Also that the first one was held on “Pi Day”, the 14th of March, which would make sense. I didn’t know it was Dr Seuss’s birthday either until I ran across a third comic strip doing some Dr Seuss jokes. Comic strips sometimes line up by accident. But I’m always impressed when they spontaneously (I assume) line up for some minor event like that.
Charles Schulz’s Peanuts for the 3rd of March originally ran the 6th of March, 1969. It’s part of a storyline in which Linus’s favorite teacher, Miss Othmar, is replaced following a teacher’s strike. This is why he complains to the new teacher about how Miss Othmar never did things that way.
It gets to appear here because Linus suggests that for some problem or other “we could divide instead of subtract”. I’m a little curious what the problem might have been. Division is often presented as a sort of hurried-up subtraction, or at least it was when I was Linus’s age. But they don’t quite address the same sorts of questions. I suppose something like “how many times eight goes into thirty-two”. But I wouldn’t do that by subtraction except to point out how division answers that question so much better. Still, there is a good point in showing how there can be several ways to do a problem. There almost always are. Sometimes a particular approach is faster than another. Sometimes it’s less confusing than another. Sometimes it gives better insight into other problems than another. If all you are interested in is the right answer, then you can use whatever method works, including letting Popeye guess for you. But, except on the frontier of research where we don’t quite know what we’re studying, there are always choices in how to find an answer.
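The “hurried-up subtraction” view can be made literal. A minimal sketch, using the eight-into-thirty-two example from above:

```python
def times_it_goes(dividend, divisor):
    """Count how many times divisor fits, by repeated subtraction."""
    count = 0
    while dividend >= divisor:
        dividend -= divisor
        count += 1
    return count

# Subtraction gets there eventually, but division answers directly:
assert times_it_goes(32, 8) == 4
assert times_it_goes(32, 8) == 32 // 8
```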
Tom Toles’s Randolph Itch, 2 am for the 3rd I feel confident I’ve shown before. The strip didn’t run long originally and it’s in its third or fourth rerun cycle on Gocomics.com. It’s still an amusing bit of figure drawing, drawn by figures, being figured out. I make it out to 111,193.
I confess I’m dissatisfied with this batch of Reading the Comics posts. I like having something like six to eight comics for one of these roundups. But there was this small flood of mathematically-themed comics on the 6th of December. I could either make do with a slightly short edition, or have an overstuffed edition. I suppose it’s possible to split one day’s comics across two Reading the Comics posts, but that’s crazy talk. So, a short edition today.
Jef Mallett’s Frazz for the 4th of December was part of a series in which Caulfield resists learning about reciprocals. The 4th offers a fair example of the story. At heart the joke is just the student-resisting-class, or student-resisting-story-problems. It certainly reflects a lack of motivation to learn what they are.
We use reciprocals most often to write division problems as multiplication. “a ÷ b” is the same as “a times the reciprocal of b”. But where do we get the reciprocal of b from? … Well, we can say it’s the multiplicative inverse of b. That is, it’s whatever number you have to multiply ‘b’ by in order to get ‘1’. But we’re almost surely going to find that by taking 1 and dividing it by b. So we’ve swapped out one division problem for a slightly different one. This doesn’t seem to be getting us anywhere.
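In code the circularity is plain to see: the defining property is multiplicative, but the computation is a division (a minimal sketch):

```python
b = 8.0
r = 1.0 / b   # we compute the reciprocal ... by dividing
# ... but what makes r "the" reciprocal is the multiplicative test:
assert b * r == 1.0
```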
But we have gotten a new idea. If we can define the multiplication of things, we might be able to get division for almost free. Could we divide one matrix by another? We can certainly multiply a matrix by the inverse of another. (There are complications at work here. We’ll save them for another time.) A lot of sets allow us to define things that make sense as addition and multiplication. And if we can define a complicated operation in terms of addition and multiplication … If we follow this path, we get to do things like define the cosine of a matrix. Then we just have to figure out why we’d want to have a cosine of a matrix.
There’s a simpler practical use of reciprocals. This relates to numerical mathematics, computer work. Computer chips do addition (and subtraction) really fast. They do multiplication a little slower. They do division a lot slower. Division is harder than multiplication, as anyone who’s done both knows. However, dividing by (say) 4 is the same thing as multiplying by 0.25. So if you know you need to divide by a number a lot, then it might make for a faster program to change division into multiplication by a reciprocal. You have to work out the reciprocal, but if you only have to do that once instead of many times over, this might make for faster code. Reciprocals are one of the tools we can use to change a mathematical process into something faster.
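The swap might look like this (a sketch; in real code the compiler performs this transformation for you):

```python
values = [10.0, 24.0, 36.5, 100.0]
divisor = 4.0

reciprocal = 1.0 / divisor                   # one division, done once
scaled = [v * reciprocal for v in values]    # many cheap multiplications

# Same results as dividing each value directly:
assert scaled == [v / divisor for v in values]
```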
(In practice, you should never do this. You have a compiler that does this, and you should let it do its work. But it’s enlightening to know these are the sorts of things your compiler is looking for when it turns your code into something the computer does. And looking for ways to do the same work in less time is a noble side of mathematics.)
Charles Schulz’s Peanuts for the 4th of December (originally from 1968, on the same day) sees Peppermint Patty’s education crash against a word problem. It’s another problem in motivating a student to do a word problem. I admit when I was a kid I’d have been enchanted by this puzzle. But I was a weird one.
Dave Coverly’s Speed Bump for the 4th of December is a mathematics-symbols joke as applied to toast. I think you could probably actually sell those. At least the greater-than and the less-than signs. The approximately-equal-to signs would be hard to use. And people would think they were for bacon anyway.
Ruben Bolling’s Super-Fun-Pak Comix for the 4th of December showcases Young Albert Einstein. That counts as mathematical content, doesn’t it? The strip does make me wonder if they’re still selling music CDs and other stuff for infant or even prenatal development. I’m skeptical that they ever did any good, but it isn’t a field I’ve studied.
Bill Whitehead’s Free Range for the 5th of December uses a blackboard full of mathematical and semi-mathematical symbols to denote “stuff too complicated to understand”. The symbols don’t parse as anything. It is authentic to mathematical work to sometimes skip writing all the details of a thing and write in instead a few words describing it. Or to put in an abbreviation for the thing. That often gets circled or boxed or in some way marked off. That keeps us from later on mistaking, say, “MUB” as the product of M and U and B, whatever that would mean. Then we just have to remember we meant “minimum upper bound” by that.
Bill Amend’s FoxTrot Classics for the 28th of November (originally run in 2004) depicts a “Christmas Card For Smart People”. It uses the familiar motif of “ability to do arithmetic” as denoting smartness. The key to the first word is remembering that mathematicians use the symbol ‘e’ to represent a number that’s just a little over 2.71828. We call the number ‘e’, or sometimes ‘the base of the natural logarithm’. It turns up all over the place. If you have almost any quantity that grows or that shrinks at a speed proportional to how much there is, and you describe how much stuff there is over time, you’ll find an ‘e’. Leonhard Euler, who’s renowned for major advances in every field of mathematics, is also renowned for major advances in mathematical notation, and he gave us ‘e’ for that number.
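One place ‘e’ turns up, as a quick sketch: compound a 100 percent growth rate in ever-finer steps, and the total approaches e. That’s the proportional-growth story in miniature.

```python
import math

n = 1_000_000
approx_e = (1 + 1 / n) ** n   # compounding growth in a million tiny steps
# approx_e is about 2.71828, within roughly 1/(2n) of the true value
assert abs(approx_e - math.e) < 1e-5
```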
The key to the second word there is remembering from physics that force equals mass times acceleration. Therefore the force divided by the acceleration is …
And so that inspires this essay’s edition title. There are several comics in this selection that are about the symbols or the representations of mathematics, and that touch on the subject as a visual art.
Matt Janz’s Out of the Gene Pool for the 28th of November first ran the 26th of October, 2002. It would make for a good word problem, too, with a couple of levels: given the constraints of (a slightly looser) budget, how do they get the greatest number of cookies? Or if some cookies are better than others, how do they get the most enjoyment from their cookie purchase? Working out the greatest amount of enjoyment within a given cookie budget, with different qualities of cookies, can be a good introduction to optimization problems and how subtle they can be.
Bill Holbrook’s On The Fastrack for the 29th of November speaks in support of accounting. It’s a worthwhile message. The field doesn’t get much respect, not from the general public, and not from the typical mathematics department. The general public maybe thinks of accounting as not much more than a way companies nickel-and-dime them. If the mathematics departments I’ve associated with are fair representatives, accounting isn’t even thought of except by the assistant professor doing a seminar on financial mathematics. (And I’m not sure accounting gets mentioned there, since there’s exciting stuff about the Black-Scholes Equation and options markets to think about instead.) This despite accounting probably being, by volume, the most-used part of mathematics. Anyway, Holbrook’s strip probably won’t get the field a better reputation. But it has got some great illustrations of doing things with numbers. The folks in mathematics departments certainly have had days feeling like they’ve done each of these things.
Dave Coverly’s Speed Bump for the 30th of November is a compound interest joke. I admit I’ve told this sort of joke myself, proposing that the hour cut out of the day in spring when Daylight Saving Time starts comes back as a healthy hour and three minutes in autumn when it’s taken out of saving. If I can get the delivery right I might have someone going for that three minutes.
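For the record, the implied rate in my version of the joke works out tidily (my arithmetic, not anything from the strip):

```python
deposited = 60    # the hour of minutes lost in spring
returned = 63     # the "healthy hour and three minutes" in autumn
rate = (returned - deposited) / deposited
assert rate == 0.05   # a tidy five percent over roughly half a year
```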
Mikael Wulff and Anders Morgenthaler’s Truth Facts for the 30th of November is a Venn diagram joke for breakfast. I would bet they’re kicking themselves for not making the intersection be the holes in the center.
Mark Anderson’s Andertoons for this week interests me. It uses a figure to explain how gallons and quarts and pints and other units relate to one another. I like it, but I’m embarrassed to say how long it took in my life to work out the relations between pints, quarts, and gallons, and particularly whether the quart or the pint was the larger unit. I blame part of that on my never really having to mix a pint of something with a quart of something else, which ought to have sorted that out. Anyway, let’s always cherish good representations of information. Good representations organize information and relationships in ways that are easy to remember, or easy to reconstruct or extend.
John Graziano’s Ripley’s Believe It or Not for the 2nd of December tries to visualize how many ways there are to arrange a Rubik’s Cube. Counting off permutations of things by how many seconds it’d take to get through them all is a common game. The key to producing a staggering length of time is that one billion seconds are nearly 32 years, and the number of combinations of things adds up really, really fast. There are over eight billion ways to draw seven letters in a row, after all, if every letter is equally likely and if you don’t limit yourself to real or even imaginable words. Rubik’s Cubes have a lot of potential arrangements. Graziano misspells Rubik, but I have to double-check and make sure I’ve got it right every time myself. I didn’t know that about the pigeons.
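The counting claims are easy to check (a sketch, assuming a 26-letter alphabet, a 365.25-day year, and the standard count of Rubik’s Cube positions):

```python
import math

seven_letter_strings = 26 ** 7
assert seven_letter_strings > 8_000_000_000   # "over eight billion"

seconds_per_year = 60 * 60 * 24 * 365.25
years = 1_000_000_000 / seconds_per_year      # one billion seconds
assert 31 < years < 32                        # "nearly 32 years"

# The standard count of cube arrangements: corner and edge
# permutations and orientations, cut down by reachability rules.
cube = math.factorial(8) * 3**7 * math.factorial(12) * 2**10
assert cube == 43_252_003_274_489_856_000
```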
Charles Schulz’s Peanuts for the 2nd of December (originally run in 1968) has Peppermint Patty reflecting on the beauty of numbers. I don’t think it’s unusual to find some numbers particularly pleasant and others not. Some numbers are easy to work with; if I’m trying to add up a set of numbers and I have a 3, I look instinctively for a 7 because of how nice 10 is. If I’m trying to multiply numbers, I’d much rather multiply by a 5 or a 25 than by a 7 or an 18. Typically, people find they do better on addition and multiplication with lower numbers like two and three, and get shaky with sevens and eights and such. It may be quirky. My love is a wizard with 7’s, but can’t do a thing with 8. But it’s no more irrational than the way a person might find a pyramid attractive but a sphere boring and a stellated icosahedron ugly.
I’ve seen some comments suggesting that Peppermint Patty is talking about numerals, that is, the way we represent numbers. That she might find the shape of the 2 gentle, while 5 looks hostile. (I can imagine turning a 5 into a drawing of a shouting person with a few pencil strokes.) But she doesn’t seem to say one way or another. She might see a page of numbers as visual art; she might see them as wonderful things with which to play.
Eric the Circle for the 5th of November, by “andei”, is a mathematics-vocabulary pun. Ellipses are measured with a property called eccentricity. It measures, in a sense, how far any conic section is from being a circle. A circle has an eccentricity of zero. An ellipse, other than a circle, has an eccentricity between 0 and 1. The smaller the eccentricity the harder it is to tell the ellipse from a circle. The larger the eccentricity the longer one direction of the ellipse is compared to the other. For example, the Earth’s orbit around the sun, a very circular thing, has an eccentricity of about 0.0167 these days. Halley’s Comet, which gets closer to the Sun than Venus does, and farther from the sun than Neptune does, has an eccentricity of about 0.967. An eccentricity of exactly 1 means the shape is a parabola. An eccentricity of greater than 1 means the shape is a hyperbola.
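Eccentricity has a tidy formula for an ellipse with semi-major axis a and semi-minor axis b; a minimal sketch (the standard formula, not anything from the strip):

```python
import math

def eccentricity(a, b):
    """Eccentricity of an ellipse with semi-axes a >= b > 0."""
    return math.sqrt(1 - (b / a) ** 2)

assert eccentricity(1.0, 1.0) == 0.0    # a circle
assert eccentricity(10.0, 1.0) > 0.99   # long and skinny: near 1
```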
Mark Pett’s Mr Lowe for the 5th of November (originally the 2nd of November, 2000) gives a lousy reason to learn long division. I admit I’m not sure I can give a good reason anyone needs to know long division now that calculators are a well-proven technology. Perhaps the best reason is that long division works like much of computational mathematics does. You make a best guess for an answer, and test it, and improve it as necessary. Needing to improve an answer does not mean one started out wrong. It just means that we can approximate and modify solutions.
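Guess-and-improve is a real computational method, not just a schoolroom habit. A sketch of Newton’s iteration for a reciprocal, which refines an estimate of 1/d without ever dividing (the names here are mine):

```python
def refined_reciprocal(d, guess, steps=6):
    """Improve an estimate of 1/d; each pass roughly doubles
    the number of correct digits."""
    r = guess
    for _ in range(steps):
        r = r * (2 - d * r)
    return r

# Start with a rough guess for 1/4 and let refinement do the work:
approx = refined_reciprocal(4.0, 0.2)
assert abs(approx - 0.25) < 1e-9
```

Needing those improvement steps doesn’t mean the starting guess was wrong, only that it was approximate.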
Russell Myers’s Broom Hilda for the 6th of November is almost this entry’s anthropomorphic numerals joke. I’m not sure just how to categorize it. Perhaps “literal” is the best to be done.
Mark Anderson’s Andertoons for the 8th of November is a joke about turning a wrong answer into a “teach the controversy!” special plea. There are mathematical controversies. But I think the only ones thriving are in fields too abstract for the average person to know or care about. But we can look to controversies of the past. An example an elementary school kid might understand is “should 1 be considered a prime number?” It’s generally not regarded as a prime number. If it were, it would add special cases or extra words to many theorems about prime numbers. That would add boring parts to a lot of work. If we move the number 1 off to its own category (a “unit”), then we can talk about prime numbers and composite numbers more easily. Is that good enough reason? If it isn’t, then what would be a good enough reason?
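The convention shows up in code, too; a typical primality test (a sketch) handles 1 by simply ruling it out:

```python
def is_prime(n):
    """Trial-division primality test. By convention, 0 and 1
    are neither prime nor composite."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

assert not is_prime(1)   # a "unit", not a prime
assert is_prime(2)
assert not is_prime(9)
```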
Bill Amend’s FoxTrot for the 8th of November (a new strip, not a rerun) is a subverted word problem joke. It does contain a mention of curves (of happiness) going to infinity, and how they might do that. There’s some interesting linguistics at work here. A plot of a function — call it f(x), for convenience — is a graph that shows sets of values where the equation y = f(x) is true. We talk about functions “going to infinity”, although properly speaking they don’t “go” anywhere at all, any more than a photograph in a paper book moves.
But it’s hard to resist the image we get from imagining drawing the curve. The eye follows the pen that sweeps, usually left to right, fluttering up and down. And near some points the pen goes soaring off the top (or bottom) of the page. If we imagine zooming out, again and again, the pen still soars off the edge of the page. So we call that “going to infinity”. What we mean is there are some values in the domain which the function matches to numbers in the range that are greater than any finite number. (Or less than any finite but negative number, if we’re going off to negative infinity.)
We can even talk about how curves “go to” infinity. If the function y = f(x) becomes infinitely large at some point, what does the function f(x)/x do? If that function stays finite we can say f(x) grows to infinity in the same way that x does. If f(x)/x grows infinitely large we can say that f(x) grows to infinity faster than x does. If f(x)/e^x stays finite, we can say that f(x) grows to infinity in the same way that the exponential function e^x does.
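Those ratio comparisons can be watched numerically; a sketch with made-up example functions:

```python
import math

def ratios(f, g, xs):
    """Track f(x)/g(x) along increasing values of x."""
    return [f(x) / g(x) for x in xs]

xs = [5.0, 20.0, 50.0]

# 3x + 5 grows "in the same way" x does: the ratio settles near 3.
same_way = ratios(lambda x: 3 * x + 5, lambda x: x, xs)

# x^2 against e^x: the ratio shrinks toward zero, so the
# exponential grows to infinity faster than any power of x.
outrun = ratios(lambda x: x * x, math.exp, xs)
assert outrun[0] > outrun[1] > outrun[2]
```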
Rates of growth may seem like a dull thing to worry about. They become more obviously relevant if we’re interested in functions that measure, for example, how much of a resource is required to do something. Suppose we have different ways to find the best choice out of a set of things. How long finding that takes depends on how many things there are to look through. If we are looking at scalability — how well we’ll be able to find the best choice out of a much larger set of things — then the rate of growth of these functions can be quite important. If doubling the set of things to look through means searching takes ten thousand times longer, we know we’re probably searching wrong, and should find a better way to do it. If doubling the set of things to look through means we have to take one-and-a-half times as long to find what we want, we’re probably using a good approach.
Greg Evans and Karen Evans’s Luann for the 8th of November builds its joke on the idea that mathematical symbols are funny-looking things you have to interpret, just the same way emojis are. Gunther gives his best shot at explaining the various symbols. The grouping of them makes me wonder exactly what mathematics class he’s taking, though. I can’t think offhand of one that would have all of these in the same textbook.
There’s also an actual mistake right up front. He identifies “(f, g)” as the inner product. The “inner product” is a name we give to a collection of functions, all with different domains but all with the range of real numbers. It allows us to describe a “norm”, or size, of whatever kind of thing we have. It also allows us to describe something that works like an angle between two things, and from it, orthogonality. If we’re looking at vectors, then this inner product is also known as the dot product. The mistake, though, is that the inner product is normally written with angle brackets, as <f, g> instead. Normal parentheses usually mean we are giving a set of coordinates or an n-tuple. They can also mean that we are taking a Cartesian product, which looks a lot like giving a set of coordinates or an n-tuple. Probably the writer or artist made an understandable mistake while transcribing notes.
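For vectors the idea is concrete; a minimal sketch of the dot product serving as an inner product, giving both a norm and an orthogonality test:

```python
import math

def inner(f, g):
    """Dot product: the inner product for real vectors."""
    return sum(a * b for a, b in zip(f, g))

def norm(f):
    """The size the inner product induces."""
    return math.sqrt(inner(f, f))

assert inner([1, 0], [0, 1]) == 0   # orthogonal vectors
assert norm([3, 4]) == 5.0          # the familiar 3-4-5 size
```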
The talk of an inner product suggests more than anything else that the subject is linear algebra. The reference to “Dim(U)” is consistent with this. If U is a vector space, its dimension is the number of linearly independent vectors needed to build every element of it. If U is a matrix, the corresponding idea, the number of rows that cannot be made as sums of scalar multiples of the other rows, is called its rank. Either way it counts how many rows (or vectors) tell us something we can’t get from the others. So this is linear algebra work.
φ is indeed the Golden Ratio, the number approximately 1.618. It’s a famous number but it’s really got no mathematical significance. Its reciprocal, 1/φ, is about 0.618, and that’s pretty, but that’s all. Many have tried to imbue the Golden Ratio with biological or aesthetic significance, and have failed, because it has none. In mathematics, the Golden Ratio is one of those celebrities who’s famous for no discernible reason or accomplishment.
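The “pretty” reciprocal fact is quick to verify: φ satisfies φ² = φ + 1, and dividing through by φ gives 1/φ = φ − 1.

```python
import math

phi = (1 + math.sqrt(5)) / 2   # about 1.618
assert abs(phi**2 - (phi + 1)) < 1e-12    # the defining relation
assert abs(1 / phi - (phi - 1)) < 1e-12   # hence 1/phi is about 0.618
```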
Δ is the Delta symbol, yes. It’s often used as a shorthand for “change in”. So “Δ x” means “the change in x”. We usually take this to mean a small but noticeable change. If we mean a much smaller change, or a perturbation from what we originally wanted, we might switch to a lowercase “δ x”. If we mean an incredibly tiny change we go to “dx”. This is important in calculus and analysis, as well as in many numerical methods classes.
∝ does mean proportional to. We use it to say one quantity varies as the other one does. For example, that the distance you go in an hour is proportional to how fast you go. Go twice as fast, you go twice as far. This turns up in analysis some, and in applied mathematics that tries to model real-world phenomena. We may be unsure of the precise relationship between two things, but we can say how we expect one thing to affect the other. ∝ is a symbol that lets us talk about qualitative relationships among things.
The equals sign with a triangle above it baffled me, and I had to search about for it. It seems to baffle a modest number of people. Apparently it’s used as a way of saying “is defined as”. That is, the term on the left side of this symbol is by definition equal to whatever appears on the right side. I don’t remember seeing it before, and I don’t get what role it serves that the three-line equals sign ≡ doesn’t already do. I’m not saying the Evanses are wrong to use it, just that it’s not one I’m familiar with.
But you see why I can’t figure what course Gunther is taking. Two of the symbols make sense for linear algebra. One fits in almost anywhere in calculus or applied mathematics. One is mostly an applied mathematics term. One is useless. The last is obscure, anyway. What do they have in common? And what could Tiffany’s message showing a heart-eyed smiley face, pizza, and two check marks mean? “I love to watch pizza voting”?
Dave Kellett’s science fiction/humor comic Drive for the 9th of November reveals the probability of a catastrophe has been mis-reported. The choice of numbers is amusing. It’s hard to have an instinctive feel for the difference between a chance of 1-in-600 and a chance of 1-in-400. The difference makes itself known after a few hundred attempts, at least.
Comic Strip Master Command was pretty kind to me this week, and didn’t overload me with too many comics when my computer problems were the most time-demanding. You’ve seen how bad that is by how long it’s taken me to get to answering people’s comments. But they have kept publishing mathematical comic strips, and so I’m ready for another review. This time around a couple of the strips talk about the symbols of mathematics, so that’s enough of a hook for my titling needs.
Henry Scarpelli and Craig Boldman’s Archie (June 30, rerun) is about living with long odds. People react to very improbable events in strange ways. Moose is being maybe more consistent than normal for folks in figuring that if he’s going to be lucky enough to win a contest then he’s just lucky enough to be hit by a meteor too. (It feels like a lottery to me, although I guess Moose has to be too young to enter a lottery.) And I’m amused by the logic of someone’s behavior becoming funny because it is logically consistent.
I’ve got enough comics to do a mathematics-comics roundup post again, but none of them are the King Features or Creators or other miscellaneous sources that demand they be included here in pictures. I could wait a little over three hours and give the King Features Syndicate comics another chance to say anything on point, or I could shrug and go with what I’ve got. It’s a tough call. Ah, what the heck; besides, it’s been over a week since I did the last one of these.
Bill Amend’s FoxTrot (December 7) bids to get posted on mathematics teachers’ walls with a bit of play on two common uses of the term “degree”. It’s also natural to wonder why the same word “degree” should be used to represent the units of temperature and the size of an angle, to the point that they even use the same symbol of a tiny circle elevated from the baseline as a shorthand representation. As best I can make out, the use of the word degree traces back to Old French, and “degré”, meaning a step, as in a stair. In Middle English this got expanded to the notion of one of a hierarchy of steps, and if you consider the temperature of a thing, or the width of an angle, as something that can be grown or shrunk then … I’m left wondering if the Middle English folks who extended “degree” to temperatures and angles thought there were discrete steps by which either quantity could change.
As for the little degree symbol, Florian Cajori notes in A History Of Mathematical Notations that while the symbol (and the ‘ and ” for minutes and seconds) can be found in Ptolemy (!), in describing Babylonian sexagesimal fractions, this doesn’t directly lead to the modern symbols. Medieval manuscripts and early printed books would use abbreviations of Latin words describing what the numbers represented. Cajori rates as the first modern appearance of the degree symbol an appendix, composed by one Jacques Peletier, to the 1569 edition of the text Arithmeticae practicae methodus facilis by Gemma Frisius (you remember him; the guy who made triangulation into something that could be used for surveying territories). Peletier was describing astronomical fractions, and used the symbol to denote that the thing before it was a whole number. By 1571 Erasmus Reinhold (whom you remember from working out the “Prutenic Tables”, updated astronomical charts that helped convince people of the use of the Copernican model of the solar system and advance the cause of calendar reform) was using the little circle to represent degrees, and Tycho Brahe followed his example, and soon … well, it took a century or so of competing symbols, including “Grad” or “Gr” or “G” to represent degree, but the little circle eventually won out. (Assume the story is more complicated than this. It always is.)
Mark Litzer’s Joe Vanilla (December 7) uses a panel of calculus to suggest something particularly deep or intellectually challenging. As it happens, the problem isn’t quite defined well enough to solve, but if you make a reasonable assumption about what’s meant, then it becomes easy to say: this expression is “some infinitely large number”. Here’s why.
The numerator is an integral, taken from 0 up to infinity. You can think of the integral of a positive-valued expression as the area underneath that expression, between the line marked by the number on the bottom of the integral sign (on the left) and the line marked by the number on the top of the integral sign (on the right). (You know the variable is x because the integral symbol ends with “dx”; if it ended “dy” then the integral would give the left and right bounds for the variable y instead.) Now, the expression being integrated is a number that depends on x, yes, but one which is never smaller than about 23.14 nor bigger than about 24.14. So the area underneath this expression has to be at least as big as the area within a rectangle that’s got a bottom edge at y = 0, a top edge at y = 23, a left edge at x = 0, and a right edge infinitely far off to the right. That rectangle’s got an infinitely large area. The area underneath this expression has to be no smaller than that.
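The comparison argument at work, stated generally (a sketch; c stands in for the lower bound, here about 23.14):

```latex
% If f(x) >= c > 0 for every x >= 0, then comparing areas:
f(x) \ge c > 0 \quad\Longrightarrow\quad
\int_0^{\infty} f(x)\,dx \;\ge\; \int_0^{\infty} c\,dx \;=\; \infty
```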
Just because the numerator’s infinitely large doesn’t mean that the fraction is, though. It’s imaginable that the denominator is also infinitely large, and more wondrously, is large in a way that makes the ratio some more familiar number like “3”. Spoiler: it isn’t.
Actually, as it is, the denominator isn’t quite much of anything. It’s a summation; that’s what the capital sigma designates there. By convention, the summation symbol means to evaluate whatever expression appears to the right of it for each of a series of values of some index variable. That variable is normally identified underneath the sigma, with a line such as x = 1, and (again by convention) for x = 2, x = 3, x = 4, and so on, until x equals whatever the number on top of the sigma is. In this case, the bottom doesn’t actually say what the index should be, although since “x” is the only thing that makes sense as a variable within the expression (“cos” means the cosine function, and “e” means the number that’s about 2.71828 unless otherwise made explicit), we can suppose this is a normal bit of shorthand, the kind you use when context is clear.
With that assumption about what’s meant, then, the denominator works out to a definite number (for reference, 1/e is about 0.368). That’s a number about 16.549, which falls short of being infinitely large by an infinitely large amount.
So, the original fraction shown represents an infinitely large number.
Greg Evans’s Luann Againn (December 7, I suppose technically a rerun) only has a bit of mathematical content, as it’s really playing more on short- and long-term memories. Normal people, it seems, have a buffer of around eight numbers that they can remember without losing track of them, and it’s surprisingly easy to overload that. I recall reading, I think in Joseph T Hallinan’s Why We Make Mistakes: How We Look Without Seeing, Forget Things in Seconds, and Are All Pretty Sure We Are Way Above Average, and don’t think I’m not aware of how funny it would be if I were getting this source wrong, that it’s possible to cheat a little bit on the size of one’s number-buffer.
Hallinan (?) gave the example of a runner who was able to remember strings of dozens of numbers, well past the norm, but apparently by the trick of parsing numbers into plausible running times. That is, the person would remember “834126120820” perfectly because it could be expressed as four numbers, “8:34, 1:26, 1:20, 8:20”, that might be credible running times for something or other and the runner was used to remembering such times. Supporting the idea that this trick was based on turning a lot of digits into a few small numbers was that the runner would be lost if the digits could not be parsed into a meaningful time, like, “489162693077”. So, in short, people are really weird in how they remember and don’t remember things.
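The runner’s trick can be sketched in code. This is one guess at how the parsing might have worked — three digits to a chunk, read as minutes and seconds, rejected if the seconds aren’t a plausible count:

```python
def parse_run_times(digits, chunk=3):
    """Split a digit string into minute:second chunks, one guess at
    how the runner's trick might have worked. Returns None when any
    chunk fails to be a plausible time (seconds must be under 60)."""
    if len(digits) % chunk:
        return None
    times = []
    for i in range(0, len(digits), chunk):
        piece = digits[i:i + chunk]
        minutes, seconds = int(piece[:-2]), int(piece[-2:])
        if seconds >= 60:
            return None
        times.append(f"{minutes}:{seconds:02d}")
    return times

print(parse_run_times("834126120820"))  # ['8:34', '1:26', '1:20', '8:20']
print(parse_run_times("489162693077"))  # None; 4:89 isn't a time
```

Twelve digits collapse to four familiar-shaped items, well within the buffer; the second string jams on its very first chunk, just as the runner did.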
Harley Schwadron’s 9 to 5 (December 8) features a “reluctant student” who, in the tradition of kids in comic strips, tosses out the word “app” in the hopes of upgrading the exchange into a joke. I’m sympathetic to the kid not wanting to do long division. In arithmetic as I was taught it, this was the first kind of problem where you pretty much had to approximate, make a guess at what the answer might be, and improve your guess from that starting point. That’s a terrifying thing when, up to that point, arithmetic has been a series of predictable, discrete, universally applicable rules that never require you to guess. It feels wasteful of effort to work out, say, seven times your divisor when it turns out the divisor goes into the dividend eight times. I am glad that teaching approaches to arithmetic seem to be turning towards “make approximate or estimated answers, and try to improve those” as a general rule, since taking your best guess and then improving it is often the best way to get a good answer, not just in long division, and the less terrifying that move is, the better.
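That guess-and-improve step is concrete enough to write out. Here’s a sketch of the schoolroom procedure as I remember it — not anything official, just bring down a digit, guess from the leading figures, and adjust:

```python
def quotient_digit(divisor, partial):
    """One step of schoolroom long division: guess a digit from the
    leading figures, then improve the guess until it fits."""
    guess = min(partial // 10 ** (len(str(divisor)) - 1), 9)
    while guess * divisor > partial:         # guessed too high
        guess -= 1
    while (guess + 1) * divisor <= partial:  # guessed too low
        guess += 1
    return guess

def long_division(dividend, divisor):
    """Digit-by-digit division of positive integers, the way it's
    done on paper: bring down a digit, guess, improve, repeat."""
    quotient, remainder = 0, 0
    for ch in str(dividend):
        remainder = remainder * 10 + int(ch)
        d = quotient_digit(divisor, remainder)
        quotient = quotient * 10 + d
        remainder -= d * divisor
    return quotient, remainder

print(long_division(1000, 37))  # (27, 1), matching divmod(1000, 37)
```

The two little while loops are the whole “terrifying” part rendered harmless: a wrong first guess costs only a step or two of correction.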
Several comics chose to mention the coincidence of the 13th of December being (in the United States standard for shorthand dating) 12-13-14. Chip Sansom’s The Born Loser does the joke about how yes, this sequence won’t recur in (most of) our lives, but neither will any other date. Stuart Carlson and Jerry Resler’s Gray Matters is a little imprecise in calling it “the last date this century to have a consecutive pattern”, something the Grays, if the strip is still running, will realize on 1/2/34 at the latest. And Francesco Marciuliano’s Medium Large uses the neat pattern of the date as a dip into numerology and the kinds of manias that staring too closely into neat patterns can encourage.
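If we read “a consecutive pattern” as the month-day-year digits counting up by one — which is how 1/2/34 qualifies, its digits reading “1234” — a short scan turns up all the dates Gray Matters overlooked:

```python
from datetime import date, timedelta

def counts_up(s):
    """True when each digit is one more than the digit before it."""
    return all(int(b) == int(a) + 1 for a, b in zip(s, s[1:]))

# Scan the rest of the century for dates whose month-day-year digits,
# US shorthand style, form one ascending run (so 1/2/34 reads "1234").
d, found = date(2015, 1, 1), []
while d.year < 2100:
    if counts_up(f"{d.month}{d.day}{d.year % 100:02d}"):
        found.append(f"{d.month}/{d.day}/{d.year % 100:02d}")
    d += timedelta(days=1)

print(found)  # starts with 1/2/34
```

The list runs from 1/2/34 out through 6/7/89, so the Grays have several more chances at the thrill this century.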
I reached my 16,000th page view sometime on Thursday. That’s a tiny bit slower than I projected based on May’s readership statistics, but May was a busy month and I’ve had a little less time to write this month, so I’m not feeling bad about that.
Meanwhile, while looking for something else, I ran across a bit about mathematical notation in Florian Cajori’s A History of Mathematical Notation which has left me with a grin since. The book is very good about telling the stories of just what the title suggests. It’s a book well worth dipping into because everything you see written down is the result of a long process of experimentation and fiddling about to find the right balance of “expressing an idea clearly” and “expressing an idea concisely” and “expressing an idea so it’s not too hard to work with”.
The idea here is the square of a variable, which these days we’d normally write as x². According to Cajori (section 304), René Descartes “preferred the notation xx to x².” Cajori notes that Carl Gauss had this same preference and defended it on the grounds that doubling the symbol didn’t take any more (or less) space than the superscript 2 did. Cajori lists other great mathematicians who preferred doubling the letter for squaring, including Christiaan Huygens, Edmond Halley, Leonhard Euler, and Isaac Newton. Among mathematicians who preferred x² were Blaise Pascal, David Gregory (who was big in infinite series), and Gottfried Wilhelm Leibniz.
Well of course Newton and Leibniz would be on opposite sides of the xx versus x² debate. How could the universe be sensible otherwise?
I just noticed that over at archive.org they have Volume I of Florian Cajori’s A History Of Mathematical Notations. There’s a fair chance this means nothing to you, but Dr Cajori did a great deal of work in writing the history of mathematics in the early 20th century, with a scope and prose style that still leave me a bit awed. (He also wrote a history of physics; I remember reading that book, originally written in the mid-1920s, and his description of one of the mysteries of the day. With the advantage of decades on my side I knew this to be the Zeeman effect, a way that magnetic fields affect spectral lines.)
Archive.org has several of Cajori’s books, including the histories mentioned, but Mathematical Notations I like best as it’s an indispensable reference. It describes, with abundant examples, the origins of all sorts of the ways we write out mathematical ideas, from the numerals themselves to the choices of symbols like the + and × signs, to how we got to using letters to represent quantities, to something called alligation, which was apparently practiced in 15th-century Venice.
Unfortunately archive.org hasn’t yet got Volume II, which includes topics like where the $ symbol for United States currency came from — Cajori had some strong opinions about this, suggesting he was tired of tracking down false leads — but it’s a book you can feel confident in leafing through to find something interesting most any time. I find his description of the way historical opinions have changed especially fascinating, and recommend Paragraph 96 (pages 64 through 68 of the book, and not one enormous block of text), describing “Fanciful hypotheses on the origins of the numeral forms”, many of them based on ideas that the symbols for numbers contain the number of vertices or strokes or some other mnemonic for how big a number is represented. Of those hypothesis-formers he says, “Nor did these writers feel that they were indulging simply in pleasing pastimes or merely contributing to mathematical recreations. With perhaps only one exception, they were as convinced of the correctness of their explanations as are circle-squarers of the soundness of their quadratures”.
Dover Publications, of course, reprints the entire work on paper if you want Volumes I and II together. I admit that’s the form I have, and enjoy, since it becomes one of those books you could use to beat off an intruder if need be.