While there were a good number of comic strips mentioning mathematics this past week, only a few seemed substantial to me. That works well enough for my purposes. This is probably the last time I hold the Reading the Comics post until after Sunday, at least until the Fall 2019 A To Z is finished.

And I’m still open to topics for the first third of the alphabet. If you’d like to see me try to understand a thing of your choice, please nominate one or more concepts over at this page. You might be the one to name a topic I can’t possibly summarize!

Gordon Bess’s **Redeye** rerun for the 18th is a joke building on animals’ number sense. And, yeah, about dumb parents too. Horses doing arithmetic have a noteworthy history. But more in the field of understanding how animals learn than in how they do arithmetic. In particular, in how animals learn to respond to human cues, and how slight a cue can be and still be recognized and acted on. I imagine this reflects horses being unwieldy experimental animals. Birds — pigeons and ravens, particularly — make better test animals.

Art Sansom and Chip Sansom’s **The Born Loser** for the 18th gives a mental arithmetic problem. It’s a trick question, yes. But Brutus gives up too soon on what the problem is supposed to be. Now there’s no calculating, in your head, exactly how many seconds are in a year; that’s just too much work. But an estimate? That’s easy.

At least it’s easy if you remember one thing: a million seconds is about eleven and a half days. I find this easy to remember because it’s one of the ideas used all the time to express how big a million, a billion, and a trillion are. A million seconds is about eleven and a half days. A billion seconds is a little under 32 years. A trillion seconds is about 32,000 years, which is about how long it’s been since the oldest known domesticated dog skulls were fossilized. I’m sure that gives everyone a clear idea of how big a trillion is. The important thing, though, is that a million seconds is about eleven and a half days.

So. Think of the year. There are — as the punch line to Hattie’s riddle puts it — twelve 2nd’s in the year. So there are something like a million seconds spent each year on days that are the 2nd of the month. There are about a million seconds spent each year on days that are the 1st of the month, too. There are about a million seconds spent each year on days that are the 3rd of the month. And so on. So, there’s something like 31 million seconds in the year.

You protest. There aren’t a million seconds in twelve days; there are a million seconds in eleven and a half days. True. Also there aren’t 31 days in every month; there are 31 days in seven months of the year. There are 30 days in four months, and 28 or 29 in the one remaining. That’s fine. This is mental arithmetic. I’m undercounting the number of seconds by supposing that a million seconds makes twelve days. I’m overcounting the number of seconds by supposing that there are twelve months of 31 days each. I’m willing to bet this undercount and this overcount roughly balance out. How close do I get?

There are 31,536,000 seconds in a common year, that is, a non-leap year. So “31 million” is a bit low. But it’s not bad for working without a calculator.
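If you do want to check the estimate against the exact figure with a calculator (or a few lines of Python — my choice here, nothing from the strip), it goes like this:

```python
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400

# The mental-arithmetic estimate: twelve "months" of 31 days each,
# treating a million seconds as roughly twelve days' worth.
estimate = 31 * 1_000_000

# The exact count for a common (non-leap) year.
exact = 365 * SECONDS_PER_DAY

print(exact)             # 31536000
print(estimate / exact)  # about 0.983 -- under two percent low
```

So the undercount and the overcount very nearly cancel, as hoped.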

Ryan North’s **Dinosaur Comics** for the 19th lays on us the Eubulides Paradox. It’s traced back to the fourth century BCE. Eubulides was a Greek philosopher, student of “Not That” Euclid of Megara. We know Eubulides for a set of paradoxes, including the Sorites paradox. As T-Rex’s friends point out, we’ve all heard this paradox. We’ve all gone on with our lives, knowing that the person who said it wanted us to say they were very clever. Fine.

But if we take this seriously we find … this keeps not being simple. We can avoid the problem by declaring that self-referential statements exist outside of truth or falsity. But this forces us to declare that the sentence “this sentence is true” can’t be true either. This seems goofy.

We can avoid the problem by supposing there are things that are neither true nor false. That solves our problem here at the mere cost of ruining our ability to prove stuff by contradiction. There’s a lot of stuff we prove by contradiction. It’s hard to give all that up for *this*. (Although, so far as I’m aware, anything that can be proved by contradiction can also be proven by a direct line of reasoning. The direct line may just be tedious.)

We can solve the problem by saying that our words are fuzzy, imprecise things. This is true enough, as see any time my love and I debate how many things are in “a couple of things”. But declaring that we just can’t express the problem well enough to answer it seems like running away from the question.

We can resolve things by accepting that there are limits to what can be proved by logic. Gödel’s Incompleteness Theorem shows that any interesting-enough logical system has statements that are true but unprovable. A version of this paradox helps us get to that interesting conclusion.

So this is one of those things that should be easy to laugh off. But explaining why it should be easy turns out to be hard.

Zach Weinersmith’s **Saturday Morning Breakfast Cereal** for the 21st is about the other great logic problem of the 20th century. The Halting Problem here refers to Turing Machines. This is *the* algorithmic model for computing devices. It’s rather abstract, so the model won’t help you with your C++ homework, but nothing will. But it turns out we can represent a computer running a program as a string of cells. Each cell holds one of a couple possible values. The program is a series of steps. Each step starts at one cell. The program resets the value of that cell to something dictated by the algorithm. Then, the program moves focus to another cell, again as the algorithm dictates. Do enough of this and you get SimCity 2000. I don’t know all the steps in-between.
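The cells-and-steps description above fits in a tiny simulator, if you’d like to see one. Everything here — the rule-table layout, the state names, the example machine — is my own toy illustration, not anything from the strip or from Turing:

```python
# A toy Turing machine: a tape of cells, a head position, and a rule
# table. Each rule maps (state, cell value) to (new value, move, new state).
def run(rules, tape, state="start", head=0, max_steps=100):
    tape = dict(enumerate(tape))   # sparse tape; unvisited cells default to 0
    for _ in range(max_steps):
        if state == "halt":
            return [tape[i] for i in sorted(tape)]
        value = tape.get(head, 0)
        new_value, move, state = rules[(state, value)]
        tape[head] = new_value
        head += move               # -1 for left, +1 for right
    raise RuntimeError("gave up -- maybe it never halts?")

# Example machine: walk right, zeroing every 1, and write a 1 where
# the first blank cell was before halting.
flipper = {
    ("start", 1): (0, +1, "start"),
    ("start", 0): (1, +1, "halt"),
}
print(run(flipper, [1, 1, 1]))   # [0, 0, 0, 1]
```

It is a long, long way from this to SimCity 2000, but in principle nothing more is needed.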

So. The Halting Problem is this: take a program. Run it. What happens in the long run? Well, it does something or other, yes. But there are three kinds of things it can do. It can run for a while and then finish, that is, ‘halt’. It can run for a while and then get into a repeating loop, after which it repeats things forever. It can run forever without repeating itself. (Yeah, I see the structural resemblance to terminating decimals, repeating decimals, and irrational numbers too, but I don’t know of any link there.) The Halting Problem asks, if all we know is the algorithm, can we know what happens? Can we say for sure the program will always end, regardless of what the data it works on are? Can we say for sure the program won’t end if we feed it the right data to start?

If the program is simple enough — and it has to be *extremely* simple — we can say. But, basically, if the program is complicated enough to be even the least bit interesting, it’s impossible to say. Even just running the program isn’t enough: how do you know the difference between a program that takes a trillion seconds to finish and one that never finishes?
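The standard argument for why no general checker can exist fits in a few lines, and it has the same self-referential flavor as the liar paradox above. The names `halts` and `contrary` are my own, hypothetical, for the sketch:

```python
# Suppose (for contradiction) this oracle existed and always answered
# correctly: does program(data) eventually finish?
def halts(program, data):
    ...   # no such implementation can exist; that's the point

# Build a program that does the opposite of whatever the oracle
# predicts it will do when fed its own source.
def contrary(program):
    if halts(program, program):
        while True:   # oracle said "halts", so loop forever
            pass
    return            # oracle said "loops", so halt at once

# Now ask: does contrary(contrary) halt? If the oracle says yes,
# contrary loops forever; if it says no, contrary halts immediately.
# Either answer is wrong, so no such halts() can exist.
```

This is Turing’s diagonal argument in miniature: any proposed checker can be fed a program built to contradict it.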

For human needs, yes, a program that needs a trillion seconds might as well be one that never finishes. Which is not precisely the joke Weinersmith makes here, but is circling around similar territory.

Mark Anderson’s **Andertoons** for the 23rd is the Mark Anderson’s **Andertoons** for the week. And it teases my planned post for Thursday, available soon at this link. Thanks for reading.