## Reading the Comics, May 20, 2019: I Guess I Took A Week Off Edition

I’d meant to get back into discussing continuous functions this week, and then didn’t have the time. I hope nobody was too worried.

Bill Amend’s FoxTrot for the 19th is set up as geometry or trigonometry homework. There are a couple of angles that we use all the time, and they do correspond to some common unit fractions of a circle: a quarter, a sixth, an eighth, a twelfth. These map nicely to common cuts of circular pies, at least. Well, it’s a bit of a freak move to cut a pie into twelve pieces, but it’s not totally out there. If someone cuts a pie into 24 pieces, flee.

Tom Batiuk’s vintage Funky Winkerbean for the 19th of May is a real vintage piece, showing off the days when pocket electronic calculators were new. The sales clerk describes the calculator as having “a floating decimal”. And here I must admit: I’m poorly read on early-70s consumer electronics. So I can’t say that this wasn’t a thing. But I suspect that Batiuk either misunderstood “floating-point decimal”, which would be a selling point, or shortened the phrase in order to make the dialogue less needlessly long. Which is fine, and his right as an author. The technical detail does its work, for the setup, by existing. It does not have to be an actual sales brochure. Reducing “floating point decimal” to “floating decimal” is a useful artistic shorthand. It’s the dialogue equivalent to the implausibly few, but easy to understand, buttons on the calculator in the title panel.

Floating point is one of the ways to represent numbers electronically. The storage scheme is much like scientific notation. That is, rather than think of 2,038, think of 2.038 times $10^3$. In the computer’s memory are stored the 2.038 and the 3, with the “times ten to the” part implicit in the storage scheme. The advantage of this is the range of numbers one can use now. There are different ways to implement this scheme; a common one will let one represent numbers as tiny as $10^{-308}$ or as large as $10^{308}$, which is enough for most people’s needs.

The disadvantage is that floating point numbers aren’t perfect. They have only around (commonly) sixteen digits of significance. That is, the first sixteen or so significant digits in the number you represent mean anything; everything after that is garbage. Most of the time, that trailing garbage doesn’t hurt. But most is not always. Trying to add, for example, a tiny number, like $10^{-20}$, to a huge number, like $10^{20}$, won’t get the right answer. And there are numbers that can’t be represented correctly anyway, including such exotic and novel numbers as $\frac{1}{3}$. A lot of numerical mathematics is about finding ways to compute that avoid these problems.
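Both failure modes are easy to see in Python, whose ordinary floats are standard double-precision numbers:

```python
# A double carries only about sixteen significant digits, so adding a
# tiny number to a huge one loses the tiny part entirely.
huge, tiny = 1e20, 1e-20
assert huge + tiny == huge  # the sum rounds right back to 1e20

# And some simple fractions have no exact binary representation.
print(0.1 + 0.2 == 0.3)  # False: both sides pick up trailing garbage
print(f"{1/3:.20f}")     # only the first sixteen or so digits mean anything
```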

Back when I was a grad student I did have one casual friend who proclaimed that no real mathematician ever worked with floating point numbers, because of the limitations they impose. I could not get him to accept that no, in fact, mathematicians are fine with these limitations. Every scheme for representing numbers on a computer has limitations, and floating point numbers work quite well. At some point, you have to suspect some people would rather fight for a mistaken idea they already have than accept something new.

Mac King and Bill King’s Magic in a Minute for the 19th does a bit of stage magic supported by arithmetic: forecasting the sum of three numbers. The trick is that all eight possible choices someone would make have the same sum. There’s a nice bit of group theory hidden in the “Howdydoit?” panel, about how to do the trick a second time. Rotating the square of numbers makes what looks, casually, like a different square. It’s hard for a human to memorize a string of digits that doesn’t have any obvious meaning, and the longer the string the worse people are at it. If you’ve had a person — as directed — black out the rows or columns they didn’t pick, then it’s harder to notice the reused pattern.

The different directions that you could write the digits down in represent symmetries of the square. That is, geometric operations that would replace a square with something that looks like the original. This includes rotations, by 90 or 180 or 270 degrees clockwise. Mac King and Bill King don’t mention it, but reflections would also work: if the top row were 4, 9, 2, for example, and the middle 3, 5, 7, and the bottom 8, 1, 6. Combining rotations and reflections also works.
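A quick Python check that those symmetries all preserve the sums, starting from the classic square whose reflection the paragraph above describes:

```python
# Verify that every rotation and reflection of the magic square keeps
# all rows, columns, and diagonals summing to 15.
square = [[4, 9, 2],
          [3, 5, 7],
          [8, 1, 6]]

def rotate(sq):
    """Rotate the square 90 degrees clockwise."""
    return [list(row) for row in zip(*sq[::-1])]

def reflect(sq):
    """Mirror the square left-to-right."""
    return [row[::-1] for row in sq]

def is_magic(sq, total=15):
    rows = [sum(row) for row in sq]
    cols = [sum(col) for col in zip(*sq)]
    diags = [sum(sq[i][i] for i in range(3)),
             sum(sq[i][2 - i] for i in range(3))]
    return all(s == total for s in rows + cols + diags)

# All eight symmetries: four rotations, each either mirrored or not.
current = square
for _ in range(4):
    assert is_magic(current) and is_magic(reflect(current))
    current = rotate(current)
```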

If you do the trick a second time, your mark might notice it’s odd that the sum came up 15 again. Do it a third time, even with a different rotation or reflection, and they’ll know something’s up. There are things you could do to disguise that further. Just double each number in the square, for example: a square of 4/18/8, 14/10/6, 12/2/16 will have each row or column or diagonal add up to 30. But this loses the beauty of doing this with the digits 1 through 9, and your mark might grow suspicious anyway. The same happens if, say, you add one to each number in the square, and forecast a sum of 18. Even mathematical magic tricks are best not repeated too often, not unless you have good stage patter.

Mark Anderson’s Andertoons for the 20th is the Mark Anderson’s Andertoons for the week. Wavehead’s marveling at what seems at first like an asymmetry, about squares all being rhombuses yet rhombuses not all being squares. There are similar results with squares and rectangles. Still, it makes me notice something. Nobody would write a strip where the kid marveled that all squares were polygons but not all polygons were squares. It seems that the rhombus connotes something different. This might just be familiarity. Polygons are … well, if not a common term, at least something anyone might feel familiar with. Rhombus is a more technical term. It maybe never quite gets familiar, not in the ways polygons do. And the defining feature of a rhombus — all four sides the same length — seems like the same thing that makes a square a square.

There should be another Reading the Comics post this coming week, and it should appear at this link. I’d like to publish it Tuesday but, really, Wednesday is more probable.

## Reading the Comics, October 4, 2016: Split Week Edition Part 1

The last week in mathematically themed comics was a pleasant one. By “a pleasant one” I mean Comic Strip Master Command sent enough comics out that I feel comfortable splitting them across two essays. Look for the other half of the past week’s strips in a couple days at a very similar URL.

Mac King and Bill King’s Magic in a Minute feature for the 2nd shows off a bit of number-pattern wonder. Set numbers in order on a four-by-four grid and select four as directed and add them up. You get the same number every time. It’s a cute trick. I would not be surprised if there’s some good group theory questions underlying this, like about what different ways one could arrange the numbers 1 through 16. Or what other size grids the pattern will work for: 2 by 2? (Obviously.) 3 by 3? 5 by 5? 6 by 6? I’m not saying I actually have been having fun doing this. I just sense there’s fun to be had there.
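Here’s a Python sketch of that, assuming the “as directed” selection amounts to taking one number from each row and each column, which is the usual form of this trick:

```python
from itertools import permutations

# Lay out 1 through 16 in order on a four-by-four grid, then try every
# possible way of picking one number from each row and each column.
n = 4
grid = [[n * r + c + 1 for c in range(n)] for r in range(n)]

sums = {sum(grid[r][cols[r]] for r in range(n))
        for cols in permutations(range(n))}
print(sums)  # a single value: every selection gives the same sum, 34
```

The same argument shows the trick works for any size grid: an n-by-n grid of 1 through n² always gives n(n² + 1)/2, so 3 by 3 yields 15 and 5 by 5 yields 65.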

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 2nd is based on one of those weirdnesses of the way computers add. I remember in the 90s being on a Java mailing list. Routinely it would draw questions from people worried that something was very wrong, as adding 0.01 to a running total repeatedly wouldn’t get to exactly 1.00. Java was working correctly, in that it was doing what the specifications said. It’s just the specifications didn’t work quite like new programmers expected.

What’s going on here is the same problem you get if you write down 1/3 as 0.333. You know that 1/3 plus 1/3 plus 1/3 ought to be 1 exactly. But 0.333 plus 0.333 plus 0.333 is 0.999. 1/3 is really a little bit more than 0.333, but we skip that part because it’s convenient to use only a few points past the decimal. Computers normally represent real-valued numbers with a scheme called floating point representation. At heart, that’s representing numbers with only a fixed, finite number of digits. Enough that we don’t normally see the difference between the number we want and the number the computer represents.
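The mailing-list surprise is easy to reproduce in Python, which uses the same double-precision arithmetic:

```python
# Add 0.01 to a running total a hundred times. Because 0.01 has no
# exact binary representation, the total never lands exactly on 1.0.
total = 0.0
for _ in range(100):
    total += 0.01

print(total == 1.0)  # False
print(total)         # something like 1.0000000000000007
```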

Every number base has some rational numbers it can’t represent exactly using finitely many digits. Our normal base ten, for example, has “one-third” and “two-thirds”. Floating point arithmetic is built on base two, and that has some problems with tenths and hundredths and thousandths. That’s embarrassing but in the main harmless. Programmers learn about these problems and how to handle them. And if they ask the mathematicians we tell them how to write code so as to keep these floating-point errors from growing uncontrollably. If they ask nice.
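One standard trick of the sort we hand out, sketched here in Python with no claim it’s what any particular library does, is compensated (Kahan) summation: carry along an estimate of the rounding error and feed it back into the next addition.

```python
def kahan_sum(values):
    """Sum a sequence while compensating for floating-point rounding."""
    total = 0.0
    compensation = 0.0  # running estimate of the lost low-order bits
    for x in values:
        y = x - compensation
        t = total + y
        compensation = (t - total) - y  # recover what got rounded away
        total = t
    return total

naive = sum([0.01] * 100)
careful = kahan_sum([0.01] * 100)
# The compensated sum lands at least as close to 1.0 as the naive one.
print(abs(naive - 1.0), abs(careful - 1.0))
```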

Random Acts of Nancy for the 3rd is a panel from Ernie Bushmiller’s Nancy. That panel’s from the 23rd of November, 1946. And it just uses mathematics in passing, arithmetic serving the role of most of Nancy’s homework. There’s a bit of spelling (I suppose) in there too, which probably just represents what’s going to read most cleanly. Random Acts is curated by Ernie Bushmiller fans Guy Gilchrist (who draws the current Nancy) and John Lotshaw.

Thom Bluemel’s Birdbrains for the 4th depicts the discovery of a new highest number. When humans discovered ‘1’ is, I would imagine, probably unknowable. Given the number sense that animals have it’s probably something that predates humans, that it’s something we’re evolved to recognize and understand. A single stroke for 1 seems to be a common symbol for the number. I’ve read histories claiming that a culture’s symbol for ‘1’ is often what they use for any kind of tally mark. Obviously nothing in human cultures is truly universal. But when I look at number symbols other than the Arabic and Roman schemes I’m used to, it is usually the symbol for ‘1’ that feels familiar. Then I get to the Thai numeral and shrug at my helplessness.

Bill Amend’s FoxTrot Classics for the 4th is a rerun of the strip from the 11th of October, 2005. And it’s made for mathematics people to clip out and post on the walls. Jason and Marcus are in their traditional nerdly way calling out sequences of numbers. Jason’s is the Fibonacci Sequence, which is as famous as mathematics sequences get. That’s the sequence of numbers in which every number is the sum of the previous two terms. You can start that sequence with 0 and 1, or with 1 and 1, or with 1 and 2. It doesn’t matter.
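A minimal sketch of the sequence in Python, showing why the starting pair doesn’t matter: the different starts give the same run of numbers, just offset.

```python
# Each Fibonacci number is the sum of the previous two terms.
def fibonacci(count, a=0, b=1):
    """Return the first `count` terms, starting from a and b."""
    terms = []
    for _ in range(count):
        terms.append(a)
        a, b = b, a + b
    return terms

print(fibonacci(10))        # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fibonacci(9, 1, 1))   # the same numbers, shifted by one term
print(fibonacci(8, 1, 2))   # and shifted by two
```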

Marcus calls out the Perrin Sequence, which I never heard of before either. It’s like the Fibonacci Sequence. Each term in it is the sum of two other terms. Specifically, each term is the sum of the second-previous and the third-previous terms. And it starts with the numbers 3, 0, and 2. The sequence is named for François Perrin, who described it in 1899, and that’s as much as I know about him. The sequence describes some interesting stuff. Take n points and put them in a ‘cycle graph’, which looks to the untrained eye like a polygon with n corners and n sides. You can pick subsets of those points. An independent set is a subset in which no two of the points you pick are adjacent; a maximal independent set is one you can’t enlarge by adding another point without breaking that. And the number of these maximal independent sets in a cyclic graph is the n-th number in the Perrin sequence. I admit this seems like a nice but not compelling thing to know. But I’m not a cyclic graph kind of person so what do I know?
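That claim about maximal independent sets can be checked by brute force for small cycles. A sketch in Python, where the helper names are my own:

```python
from itertools import combinations

def perrin(n):
    """Return the n-th Perrin number: P(n) = P(n-2) + P(n-3), from 3, 0, 2."""
    p = [3, 0, 2]
    while len(p) <= n:
        p.append(p[-2] + p[-3])
    return p[n]

def maximal_independent_sets(n):
    """Count maximal independent sets in the cycle graph on n points."""
    def independent(s):
        # In a cycle, point v is adjacent to (v + 1) mod n and (v - 1) mod n.
        return all((v + 1) % n not in s for v in s)

    count = 0
    for size in range(1, n + 1):
        for combo in combinations(range(n), size):
            s = set(combo)
            if not independent(s):
                continue
            # Maximal: no point outside s can be added without a clash.
            if all(not independent(s | {v}) for v in range(n) if v not in s):
                count += 1
    return count

for n in range(3, 10):
    assert maximal_independent_sets(n) == perrin(n)
```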

Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 4th is the anthropomorphic numerals joke for this essay and I was starting to worry we wouldn’t get one.

## Reading the Comics, December 5, 2014: Good Questions Edition

This week’s bundle of mathematics-themed comic strips has a pretty nice blend, to my tastes: about half of them get at good and juicy topics, and about half are pretty quick and easy things to describe. So, my thanks to Comic Strip Master Command for the distribution.

Bill Watterson’s Calvin and Hobbes (December 1, rerun) slips in a pretty good probability question, although the good part is figuring out how to word it: what are the chances Calvin’s Dad was thinking of 92,376,051 of all the possible numbers out there? Given that there’s infinitely many possible choices, if every one of them is equally likely to be drawn, then the chance he was thinking of that particular number is zero. But Calvin’s Dad couldn’t be picking from every possible number; all humanity, working for its entire existence, will only ever think of finitely many numbers, which is the kind of fact that humbles me when I stare too hard at it. And people, asked to pick a number, have ones they prefer: 37, for example, or 17. Christopher Miller’s American Cornball: A Laffopedic Guide To The Formerly Funny (a fine guide to jokes that you see lingering around long after they were actually funny) notes that what number people tend to pick seems to vary in time, and in the early 20th century 23 was just as funny a number as you could think of on a moment’s notice.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (December 1) is entitled “how introductory physics problems are written”, and yeah, that’s about the way that a fair enough problem gets rewritten so as to technically qualify as a word problem. I think I’ve mentioned my favorite example of quietly out-of-touch word problems, a 1970s-era probability book which asked the probability of two out of five transistors in a radio failing. That was blatantly a rewrite of a problem about a five-vacuum-tube radio (my understanding is many radios in that era used five tubes) and each would have a non-negligible chance of failing on any given day. But that’s a slightly different problem, as the original question would have made good sense when it was composed, and it only in the updating became ridiculous.

Julie Larson’s The Dinette Set (December 2) illustrates one of the classic sampling difficulties: how can something be generally true if, in your experience, it isn’t? If you make the reasonable assumption that there’s nothing exceptional about you, then, shouldn’t your experience of, say, fraction of people who exercise, or average length of commute, or neighborhood crime rate be tolerably close to what’s really going on? You could probably build an entire philosophy-of-mathematics course around this panel before even starting the question of how do you get a fair survey of a population.

Scott Hilburn’s The Argyle Sweater (December 3) tells a Roman numeral joke that actually I don’t remember encountering before. Huh.

Samson’s Dark Side Of The Horse (December 3) does some play with mathematical symbols and of course I got distracted by thinking what kind of problem Horace was working on in the first panel; it looks obvious to me that it’s something about the orbit of one body around another. In principle, it might be anything, since the great discovery of algebra is that you can replace numbers with symbols like “a” and work out relations without having to know anything about them. “G”, for example, tends to mean the gravitational constant of the universe, and “GM” makes this identification almost certain: gravitation problems need the masses of a main body, like a planet, and a smaller body, like a satellite, and that’s usually represented as either $m_1$ and $m_2$ or as M and m.

In orbital mechanics problems, “a” often refers to the semimajor axis — the long diameter of the ellipse the orbiting body makes — and “e” the eccentricity — a measure of how close to a circle the ellipse is (an eccentricity of zero means it’s a circle). But the fact that there are subscripts of k makes that identification suspect: subscripts are often used to distinguish which of multiple similar things you mean to talk about, and if it’s just one body orbiting the other there’s no need for that. So what is Horace working on?

The answer is: Horace is working on an orbital perturbation problem, describing how far from the true circular orbit a satellite will drift when you consider things like atmospheric drag and the slightly non-spherical shape of the Earth. $a_k$ is still a semi-major axis and $e_k$ the eccentricity, but of the little changes from the original orbit, rather than the original orbit itself. And now I wonder if Samson plucked the original symbol just because it looked so graphically pleasant, or if Samson was slipping in a further joke about the way an attractive body will alter another body’s course.

Jenny Campbell’s Flo and Friends (December 4) offers a less exciting joke: it’s a simple word problem joke, playing on the ambiguity of “calculate how many seconds there are in the year”. Now, the dull way to work this out is to multiply 60 seconds per minute times 60 minutes per hour times 24 hours per day times 365 (or 365.25, or 365.2422 if you want to start that argument) days per year. But we can do almost as well and purely in our heads, if we remember that a million seconds is almost twelve days long. How many twelve-day stretches are there in a year? Well, right about 31 — after all, the year is (nearly) 12 groups of 31 days, and therefore it’s also 31 groups of 12 days. Therefore the year is about 31 million seconds long. If we pull out the calculator we find that a 365-day year is 31,536,000 seconds, but isn’t it more satisfying to say “about 31 million seconds” like we just did?
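Both calculations, the dull one and the in-your-head one, fit in a couple of lines of Python:

```python
# The dull way: multiply it all out for a 365-day year.
exact = 60 * 60 * 24 * 365
print(exact)  # 31536000

# The in-your-head way: a million seconds is nearly twelve days, and a
# year holds about 31 twelve-day stretches, so call it 31 million.
estimate = 31 * 1_000_000
print(abs(exact - estimate) / exact)  # off by under two percent
```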

John Deering’s Strange Brew (December 4) took me the longest time to work out what the joke was supposed to be. I’m still not positive but I think it’s just one colleague sneering at the higher mathematics of another.

Patrick Roberts’s Todd the Dinosaur (December 5) discovers that some numbers are quite big ones, actually. There is a challenge in working with really big numbers, even if they’re usually bigger than 2. Usually we’re not interested in a number by itself, and would rather do some kind of calculation with it, and that’s boring to do too much of, but a computer can only work with so many digits at once. The average computer uses floating point arithmetic schemes which will track, at most, about 19 decimal digits, on the reasonable guess that twenty decimal digits is the difference between 3.1415926535897932384 and 3.1415926535897932385 and how often is that difference — a millionth of a millionth of a millionth of a percent — going to matter? If it does, then, you do the kind of work that gets numerical mathematicians their big paydays: using schemes that work with more digits, or chopping up a problem so that you never have to use all 86 relevant digits at once, or rewriting your calculation so that you don’t need so many digits of accuracy all at once.
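A small illustration of whole numbers blurring together, in Python. Its ordinary floats are 64-bit doubles, good for about sixteen digits rather than nineteen, but the effect is the same:

```python
# Past the digits a double can track, adding 1 changes nothing at all.
big = float(10**19)
assert big + 1.0 == big

# Python's integers, by contrast, are exact at any length, which is one
# way of "using schemes that work with more digits".
assert 10**19 + 1 != 10**19
```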

Daniel Beyer’s Offbeat Comics (December 5) gives a couple of ways to express the number 4 — including, look closely, holding up fingers — as part of a joke about the driver being a former mathematics teacher.

Greg Cravens’s The Buckets (December 5) is the old, old, old joke about never using algebra in real life. Do English teachers get this same gag about never using the knowledge of how to diagram sentences? In any case, I did use my knowledge of sentence-diagramming, and for the best possible application: I made fun of a guy on the Internet with it.

I advise against reading the comments — I mean, that’s normally good advice, but comic strips attract readers who want to complain about how stupid kids are anymore and strips that mention education give plenty of grounds for it — but I noticed one of the early comments said “try to do any repair at home without some understanding of it”. I like the claim, but, I can’t think of any home repair I’ve done that’s needed algebra. The most I’ve needed has been working out the area of a piece of plywood I needed, but if multiplying length by width is algebra then we’ve badly debased the term. Even my really ambitious project, building a PVC-frame pond cover, isn’t going to be one that uses algebra unless we take an extremely generous view of the subject.

## The Best Thing About Polynomials

[ Curious: one of the search engine terms which brought people here yesterday was “inner obnoxious”. I can think of when I’d used the words together, e.g., in a phrase like “your inner obnoxious twelve-year-old”, the person who makes any kind of attempt at instruction difficult. But who’s searching for that? I find also that “the gil blog by norm feuti” and “heavenly nostrils” brought me visitors so, good for everyone, I think. ]

So polynomials have a number of really nice properties. They’re easy to work with, which is a big one. We might work with difficult mathematical objects, but, rather as with people, we’ll only work with the difficult if they offer something worthwhile in trade, such as solving problems we otherwise can’t hope to tackle. Polynomials are nice and friendly, uncomplaining, and as mathematical objects go, quite un-difficult. Polynomials can be used to approximate any function, which is another big one, as long as we don’t take that “any function” too literally. We still have to think about it some.

But here’s an advantage so big it’s almost invisible: to evaluate a polynomial we take some number x and raise it to a variety of powers, which we get by multiplying x by itself over and over again. We take each of those powers and multiply them by a corresponding number, a coefficient. We then add up the products of those coefficients with those powers of x. And in all that time we’ve done nothing more exotic than multiplication and addition, which is something great.
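That evaluation scheme fits in a few lines of Python. Horner’s rule (a standard rearrangement, not anything claimed above) even avoids computing the powers separately:

```python
# Evaluating a polynomial needs nothing but multiplication and addition.
# Horner's rule rewrites 2x^3 + 3x^2 - 5x + 7 as ((2x + 3)x - 5)x + 7.
def evaluate(coefficients, x):
    """Evaluate a polynomial, coefficients given from highest power down."""
    result = 0
    for c in coefficients:
        result = result * x + c  # one multiply, one add per coefficient
    return result

print(evaluate([2, 3, -5, 7], 2))  # 2*8 + 3*4 - 5*2 + 7 = 25
```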