## Reading the Comics, February 8, 2020: Delta Edition

With this essay, I finally finish the comic strips from the first full week of February. You know how these things happen. I’ll get to the comics from last week soon enough, at an essay gathered under this link. For now, some pictures with words:

Art Sansom and Chip Sansom’s The Born Loser for the 7th builds on one of the probability questions people most often encounter: the chance of an event, in the weather forecast. Predictions of what the weather will do are so common that it takes work to realize there’s something difficult about the concept. The weather is a very complicated fluid-dynamics problem. It’s almost certainly chaotic. A chaotic system is deterministic, but unpredictable, because a meaningful prediction requires precision that’s impossible to ever have in the real world. The slight difference between the number π and the number 3.1415926535897932 throws calculations off too quickly. Since the system is deterministic, though, the “chance” of snow on the weekend means about the same thing as the “chance” that Valentine’s Day was on the weekend this year. The way the system is set up implies it will be one or the other. This is a probability distribution, yes, but it’s a weird one.

What we talk about when we say the “chance” of snow, or of Valentine’s Day falling on a weekend, is a probability of ignorance. It’s our estimate that the true state of something has the property we find interesting. Here, past knowledge can guide us. If we know that the past hundred times the weather was like this on Friday, snow came on the weekend fewer than ten times, we have evidence that these conditions don’t often lead to snow. This is backed up, these days, by numerical simulations. These are not perfect models of the weather. But they represent something very like the weather, and they stay reasonably good for several days or a week or so.

And then there is the question of whether the forecast is right at all. That observation is the joke here. Still, there must be some measure of confidence in a forecast. Around here, the weather forecast is for a cold but not abnormally cold week ahead. This seems likely. A forecast that the temperature was to jump into the 80s and stay there for the rest of February would be so implausible that we’d ignore it altogether. A forecast that it would be ten degrees (Fahrenheit) below normal, or above, though? We could accept that pretty easily.

Proving a forecast is wrong takes work, though. Mostly it takes evidence. If we look at a hundred times the forecast was for a 10% chance of snow, and it actually snowed 11% of the time, is it implausible that the forecast was right? Not really, not any more than a coin coming up tails 52 times out of 100 would be suspicious. If it actually snowed 20% of the time? That might suggest that the forecast was wrong. If it snowed 80% of the time? That suggests something’s very wrong with the forecasting methods. It’s hard to say any one forecast is wrong, but we can get a sense of which forecasters are right more often than others.
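
We can put numbers to that. The question is binomial, and a short sketch covers the 11, 20, and 80 snowy outcomes from the paragraph above:

```python
# How implausible is an observed snow count, if "10% chance" was right
# each time? A binomial tail probability, all in the standard library.
from math import comb

def tail_prob(n, k, p):
    """P(at least k successes in n independent trials of probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A hundred forecasts of a 10% chance of snow:
for snowy_days in (11, 20, 80):
    print(f"P(snow on {snowy_days}+ days) = {tail_prob(100, snowy_days, 0.10):.3g}")
```

Eleven snowy days out of a hundred turns out to be thoroughly unremarkable; twenty is already below a one-percent chance; eighty is out of the question.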

Doug Savage’s Savage Chickens for the 7th is a cute little bit about counting. Counting things out is an interesting process; for some people, hearing numbers said aloud will disrupt their progress. For others, it won’t, but seeing numbers may disrupt it instead.

Niklas Eriksson’s Carpe Diem for the 8th is a bit of silliness about the mathematical sense of animals. Studying how animals understand number is a real science, and it turns up interesting results. It shouldn’t be surprising that animals can do a fair bit of counting and some geometric reasoning, although it’s rougher than even our untrained childhood expertise. We get a good bit of our basic mathematical ability from somewhere, because we’re evolved to notice some things. It’s silly to suppose that dogs would be able to state the Pythagorean Theorem, at least in a form that we recognize. But it is probably someone’s good research problem to work out whether we can test whether dogs understand the implications of the theorem, and whether it helps them go about dog work any.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 8th speaks of the “Cinnamon Roll Delta Function”. The point is clear enough on its own. So let me spoil a good enough bit of fluff by explaining that it’s a reference to something. There is, lurking in mathematical physics, a concept called the “Dirac delta function”, named for that innovative and imaginative fellow Paul Dirac. It has some weird properties. Its domain is … well, it has many domains. The real numbers. The set of ordered pairs of real numbers, R2. The set of ordered triples of real numbers, R3. Pick basically any space you like and there’s a Dirac delta function for it. The Dirac delta function is equal to zero everywhere in this domain, except at one point, the “origin”. At that one point, though? There it’s equal to …

Here we step back a moment. We really, really, really want to say that it’s infinitely large at that point, which is what Weinersmith’s graph shows. If we’re being careful, we don’t say that, though. Because if we did say that, then we would lose the thing we use the Dirac delta function for. The Dirac delta function, represented with δ, is a function with the property that for any set D in the domain that you choose to integrate over,

$\int_D \delta(x) dx = 1$

whenever the origin is inside the set of integration D. It’s equal to 0 if the origin is not inside D. This, whatever the set is. If we use the ordinary definitions for what it means to integrate a function, and say that the delta function is “infinitely big” at the origin, then this won’t happen; the integral will be zero over every set.
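
One way to see how this property works without any distribution theory is to use a “nascent delta”: a spike of finite width and unit area, and watch what the integrals do as the width shrinks. A sketch, where the Gaussian spike and the midpoint-rule integrator are my own choices of stand-in:

```python
# The defining property, checked numerically with a "nascent delta":
# a Gaussian spike whose width shrinks while its total area stays 1.
import math

def nascent_delta(x, eps):
    """A unit-area Gaussian spike of width eps, centered at the origin."""
    return math.exp(-(x / eps) ** 2 / 2) / (eps * math.sqrt(2 * math.pi))

def integrate(f, lo, hi, n=100_000):
    """Plain midpoint-rule numerical integration."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

for eps in (0.1, 0.01, 0.001):
    over_origin = integrate(lambda x: nascent_delta(x, eps), -1.0, 1.0)
    off_origin = integrate(lambda x: nascent_delta(x, eps), 2.0, 3.0)
    print(f"eps={eps}: [-1,1] gives {over_origin:.6f}, [2,3] gives {off_origin:.2e}")
```

However narrow the spike gets, any set containing the origin integrates to 1, and any set missing the origin integrates to essentially 0. The Dirac delta is the idealized limit of this process.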

This is one of those cases where physicists worked out new mathematical concepts, and the mathematicians had to come up with a rationalization by which this made sense. This is because the function is quite useful. It allows us, mathematically, to turn descriptions of point particles into descriptions of continuous fields. And vice-versa: we can turn continuous fields into point particles. It turns out we like to do this a lot. So if we’re being careful we don’t say just what the Dirac delta function “is” at the origin, only some properties about what it does. And if we’re being further careful we’ll speak of it as a “distribution” rather than a function.

But colloquially, we think of the Dirac delta function as one that’s zero everywhere, except for the one point where it’s somehow “a really big infinity” and we try to not look directly at it.

The sharp-eyed observer may notice that Weinersmith’s graph does not put the great delta spike at the origin, that is, where the x-axis represents zero. This is true. We can create a delta-like function with a singular spot anywhere we like by the process called “translation”. That is, if we would like the function to be zero everywhere except at the point $a$, then we define a function $\delta_a(x) = \delta(x - a)$ and are done. Translation is a simple step, but it turns out to be useful all the time.
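
The translation recipe is short enough to write down directly. The tent function here is just a stand-in for any function with a feature at the origin:

```python
# Translation in code: to move a function's feature from 0 to a,
# evaluate it at x - a.
def translate(f, a):
    """Return the function x -> f(x - a)."""
    return lambda x: f(x - a)

bump = lambda x: max(0.0, 1.0 - abs(x))   # a little tent, peaked at 0
bump_at_5 = translate(bump, 5.0)

print(bump(0.0), bump_at_5(5.0))   # the peak has moved: both print 1.0
print(bump(5.0), bump_at_5(0.0))   # both print 0.0
```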

Thanks again for reading. See you soon.

## Reading the Comics, January 27, 2020: Alley Oop Followup Edition

I apologize for missing Sunday. I wasn’t able to make the time to write about last week’s mathematically-themed comic strips. But I’m back in the swing of things. Here are some of the comic strips that got my attention.

Jonathan Lemon and Joey Alison Sayers’s Little Oop for the 26th has something neat in the background. Oop and Garg walk past a vendor showing off New Numbers. This is, among other things, a cute callback to one of the first of Lemon and Sayers’s Little Oop strips. (And has nothing to do with the daily storyline featuring the adult Alley Oop.) And it is a funny idea to think of “new numbers”. I imagine most of us trust that numbers are just … existing, somewhere, as concepts independent of our knowing them. We may not be too sure about the Platonic Forms. But, like, “eight” seems like something that could plausibly exist independently of our understanding of it.

Still, we do keep discovering things we didn’t know were numbers before. The earliest number notations, in the western tradition, for example, used letters to represent numbers. This did well for counting numbers, up to a large enough total. But it required idiosyncratic treatment if you wanted to handle large numbers. Hindu-Arabic numerals make it easy to represent whole numbers as large as you like. But that’s at the cost of adding ten (well, I guess eight) symbols that have nothing to do with the concept represented. Not that, like, ‘J’ looks like the letter J either. (There is a folk etymology that the Arabic numerals correspond to the number of angles made if you write them out in a particular way. Or less implausibly, the number of strokes needed for the symbol. This is ingenious and maybe possibly has helped one person somewhere, ever, learn the symbols. But it requires writing, like, ‘7’ in a way nobody has ever done, and it’s ahistorical nonsense. See section 96, on page 64 of the book and 84 of the web presentation, in Florian Cajori’s History of Mathematical Notations.)

Still, in time we discovered, for example, that there were irrational numbers and those were useful to have. Negative numbers, and those are useful to have. That there are complex-valued numbers, and those are useful to have. That there are quaternions, and … I guess we can use them. And that we can set up systems that resemble arithmetic, and work a bit like numbers. Those are often quite useful. I expect Lemon and Sayers were having fun with the idea of new numbers. They are a thing that, effectively, happens.

Lincoln Peirce’s Big Nate: First Class for the 26th has Nate badgering Francis for mathematics homework answers. Could be any subject, but arithmetic will let Peirce fit in a couple answers in one panel.

Art Sansom and Chip Sansom’s The Born Loser for the 26th is another strip on the theme of people winning the lottery and being hit by lightning. And, as I’ve mentioned, there is at least one person known to have won a lottery and survived a lightning strike.

David Malki’s Wondermark for the 27th describes engineering as “like math, but louder”, which is a pretty good line. And it uses backgrounds of long calculations to make the point of deep thought going on. I don’t recognize just what calculations are being done there, but they do look naggingly familiar. And, you know, that’s still a pretty lucky day.

Mark Anderson’s Andertoons for the 27th is the Mark Anderson’s Andertoons for the week. It depicts Wavehead having trouble figuring where to put the decimal point in the multiplication of two decimal numbers. Relatable issue. There are rules you can follow for where to put the decimal in this sort of operation. But the convention of dropping terminal zeroes after the decimal point can make that hazardous. It’s something that needs practice, or better, a sense of the sizes of numbers. In this case, what catches my eye is that 2.95 times 3.2 has to be some number close to 3 times 3. So 9.440 is the only plausible place to put the decimal point.
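
Both approaches fit in a few lines; the numbers are the ones from the strip:

```python
# Wavehead's problem two ways: the digit-counting rule, then the
# estimation sanity check described above.
raw_digits = 295 * 32          # multiply as if there were no decimal points
decimal_places = 2 + 1         # 2.95 has two; 3.2 has one
answer = raw_digits / 10 ** decimal_places
print(raw_digits, answer)      # 9440, then 9.44

# Sanity check: 2.95 x 3.2 must land near 3 x 3 = 9, so of the candidate
# placements (0.944, 9.44, 94.4, 944.0) only 9.44 is plausible.
assert abs(answer - 9) < 1
```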

Mike Twohy’s That’s Life for the 27th presents a couple of plausible enough word problems, framed as Sports Math. It’s funny because of the idea that the workers who create events worth billions of dollars a year should be paid correspondingly.

This isn’t all for the week from me. I hope to have another Reading the Comics installment at this link, soon. Thanks for reading.

## Reading the Comics, January 13, 2020: The State Pinball Championships Were Yesterday Edition

I am not my state’s pinball champion, although for the first time I did make it through the first round of play. What is important about this is that, between that and a work trip, I needed time this past week for things which were not mathematics. So my first piece this week will be a partial listing of comic strips that, last week, mentioned mathematics but not in a way I could build an essay around. It’s not going to be a week with long essays, either. Here’s a start, though.

Henry Scarpelli’s Archie rerun for the 12th of January was about Moose’s sudden understanding of algebra, and wish for it to be handy. Well, every mathematician knows the moment when suddenly something makes sense, maybe even feels inevitably true. And then we do go looking for excuses to show it off.

Art Sansom and Chip Sansom’s The Born Loser for the 12th has the Loser helping his kid with mathematics homework. And the kid asking about when they’ll use it outside school.

Jason Chatfield’s Ginger Meggs for the 13th has Meggs fail a probability quiz, an outcome his teacher claims is almost impossible. If the test were multiple-choice (including true-or-false) it is possible to calculate the probability of a person making wild guesses getting every answer wrong (or right) and it usually is quite the feat, at least if the test is of appreciable length. For more open answers it’s harder to say what the chance of someone getting the question right, or wrong, is. And then there’s the strange middle world of partial credit.
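
For the multiple-choice case the calculation is short. A sketch, with quiz sizes I’ve made up for illustration:

```python
# The chance of a pure guesser getting every answer wrong (or right),
# for a quiz of q questions with c choices each.
def all_wrong(q, c):
    return ((c - 1) / c) ** q

def all_right(q, c):
    return (1 / c) ** q

print(f"20 questions, 4 choices, all wrong: {all_wrong(20, 4):.4f}")
print(f"20 questions, 4 choices, all right: {all_right(20, 4):.2e}")
# With true-or-false, all-wrong is exactly as hard as all-right:
print(f"20 true/false, all wrong: {all_wrong(20, 2):.2e}")
```

Even for the four-choice quiz, getting all twenty wrong by wild guessing has only about a one-in-three-hundred chance, so the teacher’s “almost impossible” holds up.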

My love does give multiple-choice quizzes occasionally and it is always a source of wonder when a student does worse than blind chance would. Everyone who teaches has seen that, though.

Jan Eliot’s Stone Soup Classics for the 13th just mentions the existence of mathematics homework, as part of the morning rush of events.

Ed Allison’s Unstrange Phenomenon for the 13th plays with optical illusions, which include several based on geometric tricks. Humans have some abilities at estimating relative areas and distances and lengths. But they’re not, like, smart abilities. They can be fooled, basically because their settings are circumstances where there’s no evolutionary penalty for being fooled this way. So we can go on letting the presence of arrow pointers mislead us about the precise lengths of lines, and that’s all right. There are, like, eight billion cognitive tricks going on all around us and most of them are much more disturbing.

That’s a fair start for the week. I hope to have a second part to this Tuesday. Thanks for reading.

## Reading the Comics, August 23, 2019: Basics of Logic Edition

While there were a good number of comic strips mentioning mathematics this past week, there were only a few that seemed substantial to me. This works well enough. This probably is going to be the last time I hold the Reading the Comics post until after Sunday, at least until the Fall 2019 A To Z is finished.

And I’m still open to topics for the first third of the alphabet. If you’d like to see me try to understand a thing of your choice, please nominate one or more concepts over at this page. You might be the one to name a topic I can’t possibly summarize!

Gordon Bess’s Redeye rerun for the 18th is a joke building on animals’ number sense. And, yeah, about dumb parents too. Horses doing arithmetic have a noteworthy history. But more in the field of understanding how animals learn, than in how they do arithmetic. In particular in how animals learn to respond to human cues, and how slight a cue has to be to be recognized and acted on. I imagine this reflects horses being unwieldy experimental animals. Birds — pigeons and ravens, particularly — make better test animals.

Art Sansom and Chip Sansom’s The Born Loser for the 18th gives a mental arithmetic problem. It’s a trick question, yes. But Brutus gives up too soon on what the problem is supposed to be. Now there’s no calculating, in your head, exactly how many seconds are in a year; that’s just too much work. But an estimate? That’s easy.

At least it’s easy if you remember one thing: a million seconds is about eleven and a half days. I find this easy to remember because it’s one of the ideas used all the time to express how big a million, a billion, and a trillion are. A million seconds are about eleven and a half days. A billion seconds are a little under 32 years. A trillion seconds are about 32,000 years, which is about how long it’s been since the oldest known domesticated dog skulls were fossilized. I’m sure that gives everyone a clear idea of how big a trillion is. The important thing, though, is that a million seconds is about eleven and a half days.

So. Think of the year. There are — as the punch line to Hattie’s riddle puts it — twelve 2nd’s in the year. So there are something like a million seconds spent each year on days that are the 2nd of the month. There are about a million seconds spent each year on days that are the 1st of the month, too. There are about a million seconds spent each year on days that are the 3rd of the month. And so on. So, there’s something like 31 million seconds in the year.

You protest. There aren’t a million seconds in twelve days; there’s a million seconds in eleven and a half days. True. Also there aren’t 31 days in every month; there’s 31 days in seven months of the year. There’s 30 days in four months, and 28 or 29 in the remainder. That’s fine. This is mental arithmetic. I’m undercounting the number of seconds by supposing that a million seconds makes twelve days. I’m overcounting the number of seconds by supposing that there are twelve months of 31 days each. I’m willing to bet this undercount and this overcount roughly balance out. How close do I get?

There are 31,536,000 seconds in a common year. That is, a non-leap-year. So “31 million” is a bit low. But it’s not bad for working without a calculator.
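
The whole estimate fits in a few lines, if you’d like to check it:

```python
# The mental-arithmetic estimate from the paragraphs above, next to
# the exact count.
print(f"a million seconds is {1_000_000 / 86_400:.2f} days")  # about 11.57

# Undercount: pretend a million seconds is twelve days. Overcount:
# pretend all twelve months have 31 days. Hope the errors cancel.
estimate = 31 * 1_000_000
exact = 365 * 24 * 60 * 60        # seconds in a common year

print(estimate, exact)
print(f"the estimate is {100 * (exact - estimate) / exact:.1f}% low")
```

The estimate comes out less than two percent low, which is about as well as mental arithmetic ever needs to do.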

Ryan North’s Dinosaur Comics for the 19th lays on us the Eubulides Paradox. It’s traced back to the fourth century BCE. Eubulides was a Greek philosopher, student of “Not That” Euclid of Megara. We know Eubulides for a set of paradoxes, including the Sorites paradox. As T-Rex’s friends point out, we’ve all heard this paradox. We’ve all gone on with our lives, knowing that the person who said it wanted us to say they were very clever. Fine.

But if we take this seriously we find … this keeps not being simple. We can avoid the problem by declaring that self-referential statements exist outside of truth or falsity. This forces us to declare the sentence “this sentence is true” can’t be true. This seems goofy. We can avoid the problem by supposing there are things that are neither true nor false. That solves our problem here at the mere cost of ruining our ability to prove stuff by contradiction. There’s a lot of stuff we prove by contradiction. It’s hard to give all that up for this. (Although, so far as I’m aware, anything that can be proved by contradiction can also be proved by a direct line of reasoning. The direct line may just be tedious.) We can solve this problem by saying that our words are fuzzy, imprecise things. This is true enough, as see any time my love and I debate how many things are in “a couple of things”. But declaring that we just can’t express the problem well enough to answer it seems like running away from the question. We can resolve things by accepting there are limits to what can be proved by logic. Gödel’s Incompleteness Theorem shows that any interesting enough logic system has statements that are true but unprovable. A version of this paradox helps us get to this interesting conclusion.

So this is one of those things that should be easy to laugh off. Saying just why it deserves to be laughed off, though, is hard.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 21st is about the other great logic problem of the 20th century. The Halting Problem here refers to Turing Machines. This is the algorithmic model for computing devices. It’s rather abstract, so the model won’t help you with your C++ homework, but nothing will. But it turns out we can represent a computer running a program as a string of cells. Each cell holds one of a couple possible values. The program is a series of steps. Each step starts at one cell. The program resets the value of that cell to something dictated by the algorithm. Then, the program moves focus to another cell, again as the algorithm dictates. Do enough of this and you get SimCity 2000. I don’t know all the steps in-between.
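
The cells-and-focus description can be made concrete with a toy machine. Everything here, states and rules included, is invented for illustration, and real formalizations differ in their details:

```python
# A toy Turing-style machine: a tape of cells, a focus position, and a
# rule table saying what to write, where to move, and what state is next.
def run(rules, tape, state="A", pos=0, max_steps=100):
    tape = dict(enumerate(tape))          # sparse tape; absent cells read 0
    for step in range(max_steps):
        if state == "HALT":
            return step, [tape[k] for k in sorted(tape)]
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write                  # reset the focused cell...
        pos += move                        # ...and move focus elsewhere
    return None, [tape[k] for k in sorted(tape)]   # never halted

# A rule set that walks right, writing 1s until it finds a 1:
rules = {
    ("A", 0): (1, +1, "A"),    # on a 0: write 1, move right, stay in A
    ("A", 1): (1, 0, "HALT"),  # on a 1: stop
}
steps, final = run(rules, [0, 0, 0, 1])
print(steps, final)
```

Do enough of this, with enough states and cells, and in principle you do get SimCity 2000.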

So. The Halting Problem is this: take a program. Run it. What happens in the long run? Well, it does something or other, yes. But there are three kinds of things it can do. It can run for a while and then finish, that is, ‘halt’. It can run for a while and then get into a repeating loop, after which it repeats things forever. It can run forever without repeating itself. (Yeah, I see the structural resemblance to terminating decimals, repeating decimals, and irrational numbers too, but I don’t know of any link there.) The Halting Problem asks, if all we know is the algorithm, can we know what happens? Can we say for sure the program will always end, regardless of what data it works on? Can we say for sure the program won’t end if we feed it the right data to start?

If the program is simple enough — and it has to be extremely simple — we can say. But, basically, if the program is complicated enough to be even the least bit interesting, it’s impossible to say. Even just running the program isn’t enough: how do you know the difference between a program that takes a trillion seconds to finish and one that never finishes?
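
That last point is easy to feel with a step-budget runner, which can only ever prove halting, never non-halting. A sketch; the slow example uses Collatz-style updates as my stand-in for “finishes, but not soon”:

```python
# Why "just run it" can't settle halting: a step budget can prove a
# program halts, but running out of budget proves nothing.
def countdown(n):              # obviously halts
    while n > 0:
        n -= 1
        yield

def collatz_like(n):           # halts eventually, but how soon?
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        yield

def halts_within(program, budget):
    """True if the program finishes within budget steps; None if we gave up."""
    for _ in range(budget):
        try:
            next(program)
        except StopIteration:
            return True
    return None    # slow and endless look exactly alike from here

print(halts_within(countdown(10), 100))      # True
print(halts_within(collatz_like(27), 10))    # None: slow, or endless?
print(halts_within(collatz_like(27), 1000))  # True, after 111 steps
```

Whatever budget you pick, a `None` leaves you exactly where you started.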

For human needs, yes, a program that needs a trillion seconds might as well be one that never finishes. Which is not precisely the joke Weinersmith makes here, but is circling around similar territory.

Mark Anderson’s Andertoons for the 23rd is the Mark Anderson’s Andertoons for the week. And it teases my planned post for Thursday, available soon at this link. Thanks for reading.

## Reading the Comics, July 20, 2019: What Are The Chances Edition

The temperature’s cooled. So let me get to the comics that, Saturday, I thought were substantial enough to get specific discussion. It’s possible I was overestimating how much there was to say about some of these. These are the risks I take.

Paige Braddock’s Jane’s World for the 15th sees Jane’s niece talk about enjoying mathematics. I’m glad to see it. You sometimes see comic strip characters who are preposterously good at mathematics. Here I mean Jason and Marcus over in Bill Amend’s FoxTrot. But even they don’t often talk about why mathematics is appealing. There is no one answer for all people. I suspect even for a single person the biggest appeal changes over time. That mathematics seems to offer certainty, though, appeals to many. Deductive logic promises truths that can be known independent of any human failings. (The catch is actually doing a full proof, because that takes way too many boring steps. Mathematicians more often do enough of a proof to convince anyone that the full proof could be produced if needed.)

Alexa also enjoys math for there always being a right answer. Given her age there probably always is. There are mathematical questions for which there is no known right answer. Some of these are questions for which we just don’t know the answer, like, “is there an odd perfect number?” Some of these are more like value judgements, though. Is Euclidean geometry or non-Euclidean geometry more correct? The answer depends on what you want to do. There’s no more a right answer to that question than there is a right answer to “what shall I eat for dinner”.

Jane is disturbed by the idea of there being a right answer that she doesn’t know. She would not be happy to learn about “existence proofs”. This is a kind of proof in which the goal is not to find an answer. It’s just to show that there is an answer. This might seem pointless. But there are problems for which there can’t be an answer. If an answer’s been hard to find, it’s worth checking whether there are answers to find.

Art Sansom and Chip Sansom’s The Born Loser for the 16th builds on comparing the probability of winning the lottery to that of being hit by lightning. This comparison’s turned up a couple of times, including in Mister Boffo and The Wandering Melon, when I learned that Peter McCathie had both won the lottery and been hit by lightning.

Pab Sungenis’s New Adventures of Queen Victoria for the 17th is maybe too marginal for full discussion. It’s just reeling off a physics-major joke. The comedy is from it being a pun: Planck’s Constant is a number important in many quantum mechanics problems. It’s named for Max Planck, one of the pioneers of the field. The constant is represented in symbols as either $h$ or as $\hbar$. The constant $\hbar$ is equal to $\frac{h}{2 \pi}$ and might be used even more often. It turns out $\frac{h}{2 \pi}$ appears all over the place in quantum mechanics, so it’s convenient to write it with fewer symbols. $\hbar$ is maybe properly called the reduced Planck’s constant, although in my physics classes I never encountered anyone calling it “reduced”. We just accepted there were these two Planck’s Constants and trusted context to make clear which one we wanted. It was $\hbar$. Planck’s Constant made some news among mensuration fans recently. The International Bureau of Weights and Measures chose to fix the value of this constant. This, through various physics truths, thus fixes the mass of the kilogram in terms of physical constants. This is regarded as better than the old method, where we just had a lump of metal that we used as reference.
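
The relationship between the two constants is a one-liner. Since the redefinition mentioned above, the value of $h$ is exact by definition:

```python
# The two Planck constants. Since the 2019 SI redefinition, h is exact
# by definition, and hbar is simply h over 2*pi.
import math

h = 6.62607015e-34             # joule-seconds, exact by definition
hbar = h / (2 * math.pi)       # the "reduced" Planck constant
print(f"hbar = {hbar:.9e} J*s")
```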

Jonathan Lemon’s Rabbits Against Magic for the 17th is another probability joke. If a dropped piece of toast is equally likely to land butter-side-up or butter-side-down, then it’s quite unlikely to have it turn up the same way twenty times in a row. There’s about one chance in 524,288 of doing it in a string of twenty toast-flips. (That is, of twenty butter-side-up or butter-side-down in a row. If all you want is twenty butter-side-up, then there’s one chance in 1,048,576.) It’s understandable that Eight-Ball would take Lettuce to be quite lucky just now.
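
The toast arithmetic, for the record:

```python
# Twenty drops in a row landing the same way, butter-side whichever.
same_side_twenty = 2 ** 19    # the first drop picks the side; 19 must match
one_given_side = 2 ** 20      # twenty butter-side-up specifically
print(same_side_twenty, one_given_side)   # 524288 1048576
```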

But there’s problems with the reasoning. First is the supposition that toast is as likely to fall butter-side-up as butter-side-down. I have a dim recollection of a mid-2000s pop physics book explaining why, given how tall a table usually is, a piece of toast is more likely to make half a turn — to land butter-side-down — before falling. Lettuce isn’t shown anywhere near a table, though. She might be dropping toast from a height that makes butter-side-up more likely. And there’s no reason to suppose that luck in toast-dropping connects to any formal game of chance. Or that her luck would continue to hold: even if she can drop the toast consistently twenty times there’s not much reason to think she could do it twenty-five times, or even twenty-one.

And then there’s this, a bit of trivia that’s flawed but striking. Suppose that all seven billion people in the world have, at some point, tossed a coin at least twenty times. Then there should be about seven thousand of them who had the coin turn up tails every single one of the first twenty times they tossed a coin. And, yes, not everyone in the world has touched a coin, much less tossed it twenty times. But there could reasonably be quite a few people who grew up just thinking that every time you toss a coin it comes up tails. That doesn’t mean they’re going to have any luck gambling.

Thanks for waiting for me. The weather looks like I should have my next Reading the Comics post at this link, and on time. I’ll let you know if circumstances change.

## Reading the Comics, March 2, 2019: Process Edition

There were a handful of comic strips from last week which I didn’t already discuss. Two of them inspire me to write about how we know how to do things. That makes a good theme.

Marcus Hamilton and Scott Ketcham’s Dennis the Menace for the 27th gets into deep territory. How do we know we could count to a million? Maybe some determined soul has actually done it. But it would take the better part of a month. Things improve some if we allow that anything a computing machine can do, a person could do. This seems reasonable enough. It’s heady to imagine that all the computing done to support, say, a game of Roller Coaster Tycoon could be done by one person working alone with a sheet of paper. Anyway, a computer could show counting up to a million, a billion, a trillion, although then we start asking whether anyone’s checked that it hasn’t skipped some numbers. (Don’t laugh. The New York Times print edition includes an issue number, today at 58,258, at the top of the front page. It’s meant to list the number of published daily editions since the paper started. They mis-counted once, in 1898, and nobody noticed until 1999.)

Anyway, allow that. Nobody doubts that, if we put enough time and effort into it, we could count up to any positive whole number, or as they say in the trade, “counting number”. But … there is some largest number that we could possibly count to, even if we put every possible resource and all the time left in the universe to that counting. So how do we know we “could” count to a number bigger than that? What does it mean to say we “could” if the circumstances of the universe are such that we literally could not?

Counting up to a number seems uncontroversial enough. If I wanted to prove it I’d say something like “if we can count to the whole number with value N, then we can count to the whole number with value N + 1 by … going one higher.” And “We can count to the whole number 1”, proving that by enunciating as clearly as I can. The induction follows. Fine enough. That’s a nice little induction proof.

But … what if we needed to do more work? What if we needed to do a lot of work? There is a corner of logic which considers infinitely long proofs, or infinitely long statements. They’re not part of the usual deductive logic that any mathematician knows and relies on. We’re used to, at least in principle, being able to go through and check every step of a proof. If that becomes impossible is that still a proof? It’s not my field, so I feel comfortable not saying what’s right and what’s wrong. But it is one of those lectures in your Mathematical Logic course that leaves you hanging your jaw open.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 27th is a joke about algorithms. These are the processes by which we know how to do a thing. Here, Hansel and Gretel are shown using what’s termed a “greedy algorithm” to follow pebbles back home. This kind of thing reflects trying to find an acceptable solution, in this case, finding a path somewhere. What makes it “greedy” is each step. You’re at a pebble. You can see other pebbles nearby. Which one do you go to? Go to some extreme one; in this case, the nearest. It could instead have been the biggest, or the shiniest, the one at the greatest altitude, the one nearest a water source. Doesn’t matter. You choose your summum bonum and, at each step, take the move that maximizes that.

The wicked mother knows something about this sort of algorithm, one that promises merely a solution and not the best solution. And that is that all such algorithms can be broken. You can set up a problem that the algorithm can’t solve. Greedy algorithms are particularly vulnerable to this, through traps called “local maximums”: you find the best answer of the ones nearby, but not the best one you possibly could locate.

Why use an algorithm like this, that can be broken so? That’s because we often want to do problems like finding a path through the woods. There are so many possible paths that it’s hard to find one of the acceptable ones. But there are processes that will, typically, find an acceptable answer. Maybe processes that will let us take an acceptable answer and improve it to a good answer. And this is getting into my field.

Actual persons encountering one of these pebble rings would (probably) notice they were caught in a loop. And what they’d do, then, is suspend the greedy rule: instead of going to the nearest pebble they could find, they’d pick something else. Maybe simply the nearest pebble they hadn’t recently visited. Maybe the second-nearest pebble. Maybe they’d give up and strike out in a random direction, trusting they’ll find some more pebbles. This can lead them out of the local maximum they don’t want toward the “global maximum”, the path home, that they do. There’s no reason they can’t get trapped again — this is why the wicked mother made many loops — and no reason they might not get caught in a loop of loops again. Every algorithm like this can get broken by some problem, after all. But sometimes taking the not-the-best steps can lead you to a better solution. That’s the insight at the heart of “Metropolis-Hastings” algorithms, which was my field before I just read comic strips all the time.
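
The trap and the escape rule both fit in a short sketch. The coordinates and the memory rule here are my own invention; the point is only that remembering a few recently-visited pebbles breaks the ring:

```python
# A greedy "nearest pebble" walk, and the escape rule described above:
# refuse to revisit recently seen pebbles.
import math

def nearest(pos, pebbles, banned=()):
    """The closest pebble that isn't the current spot or recently visited."""
    candidates = [p for p in pebbles if p != pos and p not in banned]
    return min(candidates, key=lambda p: math.dist(pos, p))

def walk(start, pebbles, home, steps=20, memory=0):
    """Greedy walk remembering (and avoiding) the last `memory` pebbles."""
    path, recent = [start], []
    while len(path) <= steps and path[-1] != home:
        step = nearest(path[-1], pebbles, banned=recent)
        path.append(step)
        recent = (recent + [step])[-memory:] if memory else []
    return path

# A ring of four pebbles around the start is the trap; home lies beyond
# a stepping-stone pebble at (3, 0).
ring = [(0, 1), (1, 0), (0, -1), (-1, 0)]
home = (6, 0)
pebbles = ring + [(3, 0), home]

trapped = walk((0, 0.9), pebbles, home, memory=0)
freed = walk((0, 0.9), pebbles, home, memory=4)
print("no memory reaches home:", trapped[-1] == home)   # False
print("memory=4 reaches home:", freed[-1] == home)      # True
```

With no memory the walker bounces among the ring pebbles forever; remembering the last four visited forces it out to the stepping-stone and then home.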

Dan Thompson’s Brevity for the 28th is a nice simple anthropomorphic figures joke. It would’ve been a good match for the strips I talked about Sunday. I’m just normally reluctant to sort these comic strips other than by publication date.

And there were some comic strips I didn’t think worth making paragraphs about. Chris Giarrusso’s G-Man Webcomics for the 25th of February mentioned negative numbers and built a joke on the … negative … connotations of that word. (And inaugurates a tag for that comic strip. This fact will certainly come back to baffle me some later day.) Art Sansom and Chip Sansom’s The Born Loser for the 2nd of March has a bad mathematics report card. Tony Rubino and Gary Markstein’s Daddy’s Home for the 2nd has geometry be the subject parents don’t understand. Bill Amend’s FoxTrot Classics for the 2nd has a mathematics-anxiety dream.

And this closes out my mathematics comics for the week. Come Sunday I should have a fresh post with more comics, and I thank you for considering reading that.

## Reading the Comics, February 16, 2019: The Rest And The Rejects

I’d promised on Sunday the remainder of last week’s mathematically-themed comic strips. I got busy with house chores yesterday and failed to post on time. That’s why this is late. It’s only a couple of comics here, but it does include my list of strips that I didn’t think were on-topic enough. You might like them, or be able to use them, yourself, though.

Niklas Eriksson’s Carpe Diem for the 14th depicts a kid enthusiastic about the abilities of mathematics to uncover truths. Suppressed truths, in this case. Well, it’s not as if mathematics hasn’t been put to the service of conspiracy theories before. Mathematics holds a great promise of truth. Answers calculated correctly are, after all, universally true. They can also offer a hypnotizing precision, with all the digits past the decimal point that anyone could want. But one catch among many is whether your calculations are about anything relevant to what you want to know. Another is whether the calculations were done correctly. It’s easy to make a mistake. And if one thinks one has found exciting results, it’s hard to bring oneself to go looking for the mistake.

You can’t use shadow analysis to prove the Moon landings fake. But the analysis of shadows can be good mathematics. It can locate things in space and in time. This is a kind of “inverse problem”: given this observable result, what combinations of light and shadow and position would have caused that? And there is a related problem. Johannes Vermeer produced many paintings with awesome, photorealistic detail. One hypothesis for how he achieved this skill is that he used optical tools, including a camera obscura and appropriate curved mirrors. So, is it possible to use the objects shown in perspective in his paintings to project where the original objects had to be, and where the painter had to be, to see them? We can calculate this, at least. I am not well enough versed in art history to say whether we have compelling answers.
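A toy version of that sort of inverse problem, assuming only a vertical stick on level ground: from the stick's height and the length of its shadow we can recover the sun's elevation angle, which in turn constrains the time and date. The function name and setup here are my own illustration.

```python
import math

def sun_elevation(object_height, shadow_length):
    """Recover the sun's elevation angle, in degrees, from an
    object's height and its shadow's length.  The shadow observed
    is the forward problem; this undoes it."""
    return math.degrees(math.atan2(object_height, shadow_length))
```

A stick casting a shadow exactly its own length puts the sun 45 degrees above the horizon; a long shadow means a low sun.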

Art Sansom and Chip Sansom’s The Born Loser for the 16th is the rare Roman Numerals joke strip that isn’t anthropomorphizing the numerals. Or a play on how the numerals used are also letters. But yeah, there’s not much use for them that isn’t decorative. Hindu-Arabic numerals have great advantages in compactness, and multiplication and division, and handling fractions of a whole number. And handling big numbers. Roman numerals are probably about as good for adding or subtracting small numbers, but that’s not enough of what we do anymore.

And past that there were three comic strips that had some mathematics element. But they were slight ones, and I didn’t feel I could write about them at length. You might like them anyway. Gordon Bess’s Redeye for the 10th of February, originally run the 24th of September, 1972, has the start of a word problem as example of Pokey’s homework. Mark Litzler’s Joe Vanilla for the 11th has a couple scientist-types standing in front of a board with some mathematics symbols. The symbols don’t quite parse, to me, but they look close to it. Like, the line about $l(\omega) = \int_{-\infty}^{\infty} l(x) e$ is close to what one would write for the Fourier transformation of the function named l. It would need to be more like $L(\omega) = \int_{-\infty}^{\infty} l(x) e^{\imath \omega x} dx$ and even then it wouldn’t be quite done. So I guess Litzler used some actual reference but only copied as much as worked for the composition. (Which is not a problem, of course. The mathematics has no role in this strip beyond its visual appeal, so only the part that looks good needs to be there.) The Fourier transform’s a commonly-used trick; among many things, it lets us replace differential equations (hard, but instructive, and everywhere) with polynomials (comfortable and familiar and well-understood). Finally among the not-quite-comment-worthy is Pascal Wyse and Joe Berger’s Berger And Wyse for the 12th, showing off a Venn Diagram for its joke.
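None of this is in the strip, but the "differential equations become polynomials" trick is easy to see numerically: in Fourier space, taking a derivative becomes mere multiplication by $\imath \omega$. A sketch in Python, assuming NumPy is available; the test function $\sin(2x)$ is my own choice.

```python
import numpy as np

# Sample f(x) = sin(2x) on a periodic grid; its derivative is 2*cos(2x).
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(2 * x)

# In Fourier space, d/dx turns into multiplication by i*omega:
omega = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi
df = np.fft.ifft(1j * omega * np.fft.fft(f)).real
```

The computed `df` matches $2\cos(2x)$ to machine precision: the calculus has been traded for arithmetic on coefficients.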

Next Sunday should be a fresh Reading the Comics post, which like all its kind, should appear at this link.

## Reading the Comics, December 8, 2018: Sam and Son Edition

That there were twelve comic strips making my cut as mention-worthy this week should have let me do three essays of four comics each. But the desire to include all the comics from the same day in one essay leaves me one short here. So be it. Three of the four cartoonists featured here have a name of Sansom or Samson, so, that’s an edition title for you. No, Sam and Silo do not appear here.

Art Sansom and Chip Sansom’s Born Loser for the 6th uses arithmetic as a test of deference. Will someone deny a true thing in order to demonstrate loyalty? Arithmetic is full of things that are inarguably true. If we take the ordinary meanings of one, plus, equals, and three, it can’t be that one plus one equals three. Most fields of human endeavor are vulnerable to personal taste, or can get lost in definitions and technicalities. Or the advance of knowledge: my love and I were talking last night about how we remembered hearing, as kids, the trivia that panda bears were not really bears, but a kind of raccoon. (Genetic evidence has us now put giant pandas with the bears, and red pandas as part of the same superfamily as raccoons, but barely.) Or even be subject to sarcasm. Arithmetic has a harder time of that. Mathematical ideas do evolve in time, certainly. But basic arithmetic is pretty stable. Logic is also a reliable source of things we can be confident are true. But arithmetic is more familiar than most logical propositions.

Samson’s Dark Side of the Horse for the 8th is the Roman Numerals joke for the week. It’s also a bit of a wordplay joke, although the music wordplay rather than mathematics. Me, I still haven’t heard a clear reason why ‘MIC’ wouldn’t be a legitimate Roman numeral representation of 1099. I’m not sure whether ‘MIC’ would step on or augment the final joke, though.
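For what it's worth, the permissive reading of the subtractive rule, where any numeral standing before a larger one subtracts, does make 'MIC' equal 1099, the same as the standard 'MXCIX'. A quick sketch in Python; this simple rule is my own simplification, since there is no single official standard for Roman numeral composition.

```python
# Values of the seven Roman numeral symbols.
VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def parse_roman(s):
    """Permissive reading: any numeral standing immediately before a
    larger one counts as negative.  Accepts the standard forms and
    nonstandard ones like 'MIC' alike."""
    total = 0
    for ch, nxt in zip(s, s[1:] + ' '):
        v = VALUES[ch]
        total += -v if nxt in VALUES and VALUES[nxt] > v else v
    return total
```

Under this rule 'MIC' is 1000 minus 1 plus 100, which is 1099, exactly what 'MXCIX' gives.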

Pab Sungenis’s New Adventures of Queen Victoria for the 8th has a commedia dell’arte-based structure for its joke. (The strip does that, now and then.) The comic uses a story problem, with the calculated answer rejected for the nonsense it would be. I suppose it must be possible for someone to eat eighty apples over a long enough time that it’s not distressing, and yet another twenty apples wouldn’t spoil. I wouldn’t try it, though.

This and my other Reading the Comics posts should all be available at this link.