## My All 2020 Mathematics A to Z: Fibonacci

Dina Yagodich suggested today’s A-to-Z topic. I thought a quick little biography piece would be a nice change of pace. I discovered things were more interesting than that.

# Fibonacci.

I realized, preparing for this, that I have never read a biography of Fibonacci. This is hardly unique to Fibonacci. Mathematicians buy into the legend that mathematics is independent of human creation, and so that the people who describe it are of lesser importance. Mathematicians learn a handful of romantic tales or good stories instead. In this way they are much like humans. I know at least a loose sketch of many mathematicians’ lives. But Fibonacci is a hard one for biography. Here I draw heavily on the book Fibonacci, his numbers and his rabbits, by Andriy Drozdyuk and Denys Drozdyuk.

We know, for example, that Fibonacci lived until at least 1240. This because in 1240 Pisa awarded him an annual salary in recognition of his public service. We think he was born around 1170, and died … sometime after 1240. This seems like a dismal historical record. But, for the time, for a person of slight political or military importance? That’s about as good as we could hope for. It is hard to appreciate how much documentation we have of lives now, and how recent a phenomenon that is.

Even a fact like “he was alive in the year 1240” evaporates under study. Italian cities, then as now, based the year on the time since the notional birth of Christ. Pisa, as was common, used the notional conception of Christ, on the 25th of March, as the new year. But we have a problem of standards. Should we count the year as the number of full years since the notional conception of Christ? Or as the number of full and partial years since that important 25th of March?

If the question seems confusing and perhaps angering let me try to clarify. Would you say that the notional birth of Christ, that first 25th of December of the Christian Era, happened in the year zero or in the year one? (Pretend there was a year zero. You already pretend there was a year one AD.) Pisa of Leonardo’s time would have said the year one. Florence would have said the year zero, if they knew of “zero”. Florence matters because when Florence took over Pisa, they changed Pisa’s dating system. Sometime later Pisa changed back. And back again. Historians writing later, aware of the Pisan 1240 on the document, may have corrected it to the Florentine-style 1241. Or, aware of the change of calendar but not aware that their source had already accounted for it, redated it 1242. Or tried to re-correct it back and made things worse.

This is not a problem unique to Leonardo. Different parts of Europe, at the time, had different notions for the year count. Some also had different notions for what New Year’s Day would be. There were many challenges to long-distance travel and commerce in the time. Not the least is that the same sun might shine on at least three different years at once.

We call him Fibonacci. Did he? The question defies a quick answer. His given name was Leonardo, and he came from Pisa, so a reliable way to address him would have been “Leonardo of Pisa”, albeit in Italian. He was born into the Bonacci family. He did in some manuscripts describe himself as “Leonardo filio Bonacci Pisano”, give or take a few letters. My understanding is you can get a good fun quarrel going among scholars of this era by asking whether “filio Bonacci” would mean “the son of Bonacci” or “of the family Bonacci”. Either is as good for us. It’s tempting to imagine the “filio” being shrunk to “Fi” and the two words smashed together. But that doesn’t quite say that Leonardo did that smashing together.

Nor, exactly, when it did happen. We see “Fibonacci” used in mathematical works in the 19th century, followed shortly by attempts to explain what it means. We know of a 1506 manuscript identifying Leonardo as Fibonacci. But there remains a lot of unexplored territory.

If one knows one thing about Fibonacci though, one knows about the rabbits. They give birth to more rabbits and to the Fibonacci Sequence. More on that to come. If one knows two things about Fibonacci, the other is about his introducing Arabic numerals to western mathematics. I’ve written of this before. And the subject is … more ambiguous, again.

Most of what we “know” of Fibonacci’s life is some words he wrote to explain why he was writing his bigger works. If we trust he was not creating a pleasant story for the sake of engaging readers, then we can finally say something. (If one knows three things about Fibonacci, and then five things, and then eight, one is making a joke.)

Fibonacci’s father was, in the 1190s, posted to Bejaia, a port city on the Algerian coast. The father did something for Pisa’s duana there. And what is a duana? … Again, certainty evaporates. We have settled on saying it’s a customs house, and suppose our readers know what goes on in a customs house. The duana had something to do with clearing trade through the port. The father’s post was as a scribe; he was likely responsible for collecting duties, registering accounts, and keeping the books. We don’t know how long Fibonacci spent there. “Some days”, during which he alleges he learned the digits 1 through 9. After that, travelling around the Mediterranean, he saw why this system was good, and useful. He wrote books to explain it all and convince Europe that while Roman numerals were great, Arabic numerals were more practical.

It is always dangerous to write about “the first” person to do anything. Except for Yuri Gagarin, Alexei Leonov, and Neil Armstrong, “the first” to do anything dissolves into ambiguity. Gerbert, who would become Pope Sylvester II, described Arabic numerals (other than zero) by the end of the 10th century. He described how this system, along with the abacus, made computation easier. Arabic numerals appear in the Codex Conciliorum Albeldensis seu Vigilanus, written in 976 AD in Spain. And it is not as though Fibonacci was the first European to travel to a land with Arabic numerals, or the first perceptive enough to see their value.

Allow that, though. Every invention has precursors, some so close that it takes great thinking to come up with a reason to ignore them. There must be some credit given to the person who gathers an idea into a coherent, well-explained whole. And with Fibonacci, and his famous manuscript of 1202, the Liber Abaci, we have … more frustration.

It’s not that Liber Abaci does not exist, or that it does not have what we credit it for having. We don’t have any copies of the 1202 edition, but we do have a 1228 manuscript, at least, and that starts out with the Arabic numeral system. And why this system is so good, and how to use it. It should convince anyone who reads it.

If anyone read it. We know of about fifteen manuscripts of Liber Abaci, only two of them reasonably complete. This seems sparse even for manuscripts in the days they had to be hand-copied. This until you learn that Baldassarre Boncompagni published the first known printed version in 1857. In print, in the original Latin, it took up 459 pages of text. Its first English translation, published by Laurence E Sigler in 2002(!), takes up 636 pages (!!). Suddenly it’s amazing that as many as two complete manuscripts survive. (Wikipedia claims three complete versions from the 13th and 14th centuries exist. And says there are about nineteen partial manuscripts with another nine incomplete copies. I do not explain this discrepancy.)

He had other books. The Liber Quadratorum, for example, a book about algebra. Wikipedia seems to say we have it through a single manuscript, copied in the 15th century. Practica Geometriae, translated from Latin in 1442 at least. A couple other now-lost manuscripts. A couple pieces about specific problems.

So perhaps only a handful of people read Fibonacci. Ah, but if they were the right people? He could have been a mathematical Velvet Underground, read by a hundred people, each of whom founded a new mathematics.

We could trace those hundred readers by the first thing anyone knows Fibonacci for. His rabbits, breeding in ways that rabbits do not, and the sequence of whole numbers those provide. Fibonacci did not discover this sequence. You knew that. Nothing in mathematics gets named for its discoverer. Virahanka, an Indian mathematician who lived somewhere between the sixth and eighth centuries, described the sequence exactly. Gopala, writing sometime in the 1130s, expanded on this.
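The rabbit model itself is quick to state: every mature pair of rabbits produces a new pair each month, and a new pair takes one month to mature. A minimal sketch in Python (the month count below is my own illustrative choice):

```python
def fibonacci(n):
    """Return the first n terms of the Fibonacci sequence: 1, 1, 2, 3, 5, ..."""
    terms = []
    a, b = 1, 1  # start from one newborn pair, as the rabbit story has it
    for _ in range(n):
        terms.append(a)
        a, b = b, a + b  # next month's count: this month's pairs plus newly matured ones
    return terms

print(fibonacci(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

Each term is the sum of the two before it, which is the whole of the rule, and why bored people keep re-inventing it.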

This is not to say Fibonacci copied any of these (and more) Indian mathematicians. The world is large and manuscripts are hard to read. The sequence can be re-invented by anyone bored in the right way. Ah, but think of those who learned of the sequence and used it later on, following Fibonacci’s lead. For example, in 1611 Johannes Kepler wrote a piece that described Fibonacci’s sequence, but did not name Fibonacci. He mentions other mathematicians, ancient and contemporary. The easiest supposition is he did not know he was writing something already seen. In 1844, Gabriel Lamé used Fibonacci numbers in studying the running time of Euclid’s algorithm for the greatest common divisor. He did not name Fibonacci either, though. (Lamé is famous today for making some progress on Fermat’s Last Theorem. He’s renowned for work in differential equations and on ellipse-like curves. If you have thought what a neat weird shape the equation $x^4 + y^4 = 1$ can describe you have trodden in Lamé’s path.)

Things picked up for Fibonacci’s reputation in 1876, thanks to Édouard Lucas. (Lucas is notable for other things. Normal people might find interesting that he proved by hand that the number $2^{127} - 1$ was prime. This seems to be the largest prime number ever proven by hand. He also created the Tower of Hanoi problem.) In January of 1876, Lucas wrote about the Fibonacci sequence, describing it as “the series of Lamé”. By May, though, writing about prime numbers, he had read Boncompagni’s publications, and he wrote that this thing “commonly known as the sequence of Lamé was first presented by Fibonacci”.

And Fibonacci caught Lucas’s imagination. Lucas shared, particularly, the phrasing of this sequence as something in the reproduction of rabbits. This captured mathematicians’, and then people’s imaginations. It’s akin to Émile Borel’s room of a million typing monkeys. By the end of the 19th century Leonardo of Pisa had both a name and fame.

We can still ask why. The proximate cause is Édouard Lucas, impressed (I trust) by Boncompagni’s editions of Fibonacci’s work. Why did Baldassarre Boncompagni think it important to publish editions of Fibonacci? Well, he was interested in the history of science. He edited the first Italian journal dedicated to the history of mathematics. He may have understood that Fibonacci was, if not an important mathematician, at least one who had interesting things to write. Boncompagni’s edition of Liber Abaci came out in 1857. By 1859 the state of Tuscany voted to erect a statue.

So I speculate, without confirming, that at least some of Fibonacci’s good name in the 19th century was a reflection of Italian unification, and of the search for great scholars whose intellectual achievements could reflect well on a nation trying to build itself.

And so we have bundles of ironies. Fibonacci did write impressive works of great mathematical insight. And he was recognized at the time for that work. The things he wrote about Arabic numerals were correct. His recommendation to use them was taken, but by people who did not read his advice. After centuries of obscurity he got some notice. And a problem he did not create nor particularly advance brought him a fame that’s lasted a century and a half now, and looks likely to continue.

I am always amazed to learn there are people not interested in history.

And now I can try to get ahead of deadline for next week. This and all my other A-to-Z topics for the year should be at this link. All my essays for this and past A-to-Z sequences are at this link. And I am still taking topics to discuss in the coming weeks. Thank you for reading and please take care.

## How December 2017 Treated My Mathematics Journal

Before I even look at the statistics I can say: December 2017 treated my mathematics journal better than it treated me. A third of the way in, our pet rabbit died, suddenly and unexpectedly. And this was days short of a year from our previous pet rabbit’s death. So that’s the cryptic plan-scrambling stuff I had been talking about, and why my writing productivity dropped. We don’t know when we’ll take in a new rabbit (or rabbits). Possibly this month, although not until late in January at soonest.

And … well, thank you for the condolences that I honestly do not mean to troll for. I can’t say we’re used to the idea that we lost our rabbit so soon. It’s becoming a familiar thought is all.

But to the blog contents. How did they, quantifiably, go?

I fell back below a thousand page views. Just under 900, too: 899 page views over the month, from 599 unique visitors, as if both numbers were trying to tease Price Is Right Item-Up-For-Bids offerings. That’s down from the 1,052 page views in November, but only technically down from the 604 unique visitors then. October had 1,069 page views from a basically-equal 614 unique visitors. And it turns out that while I thought I stopped writing stuff, especially after our rabbit’s death, I had 11 posts in the month. That’s low but in the normal range for a month that has no A-to-Z sequence going. Curious.

There were 71 pages liked around here in December. That’s technically up from November’s 70, but not really. It’s less technically up from October’s 64. Still makes me wonder what might have happened if I’d scraped together a 12th post for the month. And the other big measure of reader involvement? 24 comments posted in December, down from November’s 28 but above October’s 12. I may need to start offering bounties for interesting comments. Or, less ridiculously, start some open threads for people who want to recommend good blogs or books or Twitter threads they’ve found.

2018 starts with a total of 56,318 page views from 26,491 tracked unique visitors. The numbers don’t look bad, although I keep running across those WordPress blogs that are, like, someone who started posting an inspirational message once a week two months ago and has just broken a million page views and gets 242 likes on every post, and I wonder if it’s just me. It’s not.

How about the roster of nations? For that I figure there were 53 countries sending me readers in December, technically down from November’s 56 and technically up from October’s 51. There were 15 single-reader countries, down from November’s 22 but slightly above October’s 13. And who were they? These places:

United States 553
United Kingdom 41
India 35
Ireland 19
Philippines 16
Austria 13
Germany 12
Turkey 12
Australia 11
Sweden 9
Singapore 8
France 7
Italy 7
Slovenia 7
New Zealand 6
Spain 6
Indonesia 5
Norway 5
South Korea 5
Brazil 4
Hong Kong SAR China 4
Malaysia 4
Poland 4
Belgium 3
Denmark 3
Finland 3
Japan 3
Netherlands 3
Portugal 3
Taiwan 3
Thailand 3
Argentina 2
Colombia 2
Serbia 2
Slovakia 2
United Arab Emirates 2
Albania 1
Croatia 1
Egypt 1
Israel 1
Jamaica 1
Lebanon 1 (*)
Mexico 1 (*)
Peru 1 (*)
Romania 1 (*)
Russia 1
South Africa 1
Switzerland 1
Uruguay 1
Venezuela 1

Lebanon, Mexico, Peru, and Romania were also single-reader countries in November, and there’s no nation that’s on a three-month single-reader streak.

So what was the roster of popular posts for the month? My perennials, plus Reading the Comics, and some of that Wronski π stuff just squeaks in, tied for fifth place. What people wanted to read here was:

Have I got plans for January 2018? Yes, I have. Besides keeping on Reading the Comics, I hope to get through Wronski’s formula for π. I know there’s readers eager to find out what the punch line is. I know at least one has already worked it out and been surprised. And I’m hoping to work out a question about pinball tournaments that my love set me on. I’ve done a little thinking about the issue, and don’t believe the results, so I’m hoping to think some more and then make my computer do a bunch of simulations. Could be fun.

And I’ll be spending it hoping that you, the reader, are around. If you’re here now there’s a good chance you’re reading this. If you’d like to follow on your WordPress reader, there’s a ‘Follow on WordPress’ button in the upper right corner of the page. If you’d rather get it by e-mail, before I’ve made corrections to things that are only obviously wrong two minutes after publication, there’s the ‘Follow by e-mail’ button near that. And if you’d like to follow me on Twitter, try @Nebusj. I’m currently running only like four weeks behind on responding to follow-up tweets or direct messages, which is practically living a year in the future compared to my e-mail. Thanks for being here.

## Reading the Comics, July 22, 2017: Counter-mudgeon Edition

I’m not sure there is an overarching theme to the past week’s gifts from Comic Strip Master Command. If there is, it’s that I feel like some strips are making cranky points and I want to argue against their cases. I’m not sure what the opposite of a curmudgeon is. So I shall dub myself, pending a better idea, a counter-mudgeon. This won’t last, as it’s not really a good name, but there must be a better one somewhere. We’ll see it, now that I’ve said I don’t know what it is.

Niklas Eriksson’s Carpe Diem for the 17th features the blackboard full of equations as icon for serious, deep mathematical work. It also features rabbits, although probably not for their role in shaping mathematical thinking. Rabbits and their breeding were used in the simple toy model that gave us Fibonacci numbers, famously. And the population of Arctic hares gives those of us who’ve reached differential equations a great problem to do. The ecosystem in which Arctic hares live can be modelled very simply, as hares and a generic predator. We can model how the populations of both grow with simple equations that nevertheless give us surprises. In a rich, diverse ecosystem we see a lot of population stability: one year where an animal is a little more fecund than usual doesn’t matter much. In the sparse ecosystem of the Arctic, and the one we’re building worldwide, small changes can matter enormously. We can even produce deterministic chaos: if we knew exactly how many hares and predators there were, and exactly how many of them would be born and exactly how many would die, we could predict future populations. But the tiny difference between our attainable estimate and the reality, even if it’s as small as one hare too many or too few in our model, makes our predictions worthless. It’s thrilling stuff.
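The hares-and-predator model can be sketched with the classic Lotka-Volterra equations. A minimal Euler-stepped simulation in Python; the coefficients and starting populations here are illustrative choices of mine, not measurements of any Arctic ecosystem, and this basic two-species version gives cycles rather than chaos (chaos needs something extra, like seasonal forcing or a third species):

```python
def simulate(hares, predators, steps=20000, dt=0.001,
             birth=1.0, predation=0.1, growth=0.075, death=1.5):
    """Euler-step the Lotka-Volterra predator-prey equations."""
    history = [(hares, predators)]
    for _ in range(steps):
        # hares breed on their own but get eaten; predators need hares to breed
        dh = (birth * hares - predation * hares * predators) * dt
        dp = (growth * hares * predators - death * predators) * dt
        hares += dh
        predators += dp
        history.append((hares, predators))
    return history

run = simulate(10.0, 5.0)
peak_hares = max(h for h, _ in run)  # the hare population cycles far above its start
```

Even this toy shows the surprise the paragraph mentions: neither population settles down, and the hare count swings through peaks several times its starting value.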

Vic Lee’s Pardon My Planet for the 17th reads, to me, as a word problem joke. The talk about how much change Marian should get back from Blake could be any kind of minor hassle in the real world where one friend covers the cost of something for another but expects to be repaid. But counting how many more nickels one person has than another? That’s of interest to kids and to story-problem authors. Who else worries about that count?

Jef Mallet’s Frazz for the 17th straddles that triple point joining mathematics, philosophy, and economics. It seems sensible, in an age that embraces the idea that everything can be measured, to try to quantify happiness. And it seems sensible, in an age that embraces the idea that we can model and extrapolate and act on reasonable projections, to try to see what might improve our happiness. This is so even if it’s as simple as identifying what we should or shouldn’t be happy about. Caulfield is circling around the discovery of utilitarianism. It’s a philosophy that (for my money) is better-suited to problems like how the city ought to arrange its bus lines than to matters too integral to life. But it, too, can bring comfort.

Corey Pandolph’s Barkeater Lake rerun for the 20th features some mischievous arithmetic. I’m amused. It turns out that people do have enough of a number sense that very few people would let “17 plus 79 is 4,178” pass without comment. People might not be able to say exactly what it is, on a glance. If you answered that 17 plus 79 was 95, or 102, most people would need to stop and think about whether either was right. But they’re likely to know without thinking that it can’t be, say, 56 or 206. This, I understand, is so even for people who aren’t good at arithmetic. There is something amazing that we can do this sort of arithmetic so well, considering that there’s little obvious in the natural world that would need the human animal to add 17 and 79. There are things about how animals understand numbers which we don’t know yet.

Alex Hallatt’s Human Cull for the 21st seems almost a direct response to the Barkeater Lake rerun. Somehow “making change” is treated as the highest calling of mathematics. I suppose it has a fair claim to the title of mathematics most often done. Still, I can’t get behind Hallatt’s crankiness here, and not just because Human Cull is one of the most needlessly curmudgeonly strips I regularly read. For one, store clerks don’t need to do mathematics. The cash registers do all the mathematics that clerks might need to do, and do it very well. The machines are cheap, fast, and reliable. Not using them is an affectation. I’ll grant it gives some charm to antiques shops and boutiques where they write your receipt out by hand, but that’s for atmosphere, not reliability. And it is useful for the clerk to have a rough idea what the change should be. But that’s just to avoid the risk of mistakes getting through. No matter how mathematically skilled the clerk is, there’ll sometimes be a price entered wrong, or the customer’s money counted wrong, or a one-dollar bill put in the five-dollar bill’s tray, or a clerk picking up two nickels when three would have been more appropriate. We should have empathy for the people doing this work.

## The End 2016 Mathematics A To Z: Image

It’s another free-choice entry. I’ve got something that I can use to make my Friday easier.

## Image.

So remember a while back I talked about what functions are? I described them the way modern mathematicians like. A function’s got three components to it. One is a set of things called the domain. Another is a set of things called the range. And there’s some rule linking things in the domain to things in the range. In shorthand we’ll write something like “f(x) = y”, where we know that x is in the domain and y is in the range. In a slightly more advanced mathematics class we’ll write $f: x \rightarrow y$. That maybe looks a little more computer-y. But I bet you can read that already: “f matches x to y”. Or maybe “f maps x to y”.

We have a couple ways to think about what ‘y’ is here. One is to say that ‘y’ is the image of ‘x’, under ‘f’. The language evokes camera trickery, or at least the way a trick lens might make us see something different. Pretend that the domain is something you could gaze at. If the domain is, say, some part of the real line, or a two-dimensional plane, or the like that’s not too hard to do. Then we can think of the rule part of ‘f’ as some distorting filter. When we look to where ‘x’ would be, we see the thing in the range we know as ‘y’.

At this point you probably imagine this is a pointless word to have. And that it’s backed up by a useless analogy. So it is. As far as I’ve gone this addresses a problem we don’t need to solve. If we want “the thing f matches x to” we can just say “f(x)”. Well, we write “f(x)”. We say “f of x”. Maybe “f at x”, or “f evaluated at x” if we want to emphasize ‘f’ more than ‘x’ or ‘f(x)’.

Where it gets useful is that we start looking at subsets. Bunches of points, not just one. Call ‘D’ some interesting-looking subset of the domain. What would it mean if we wrote the expression ‘f(D)’? Could we make that meaningful?

We do mean something by it. We mean what you might imagine by it. If you haven’t thought about what ‘f(D)’ might mean, take a moment — a short moment — and guess what it might. Don’t overthink it and you’ll have it right. I’ll put the answer just after this little bit so you can ponder.

So. ‘f(D)’ is a set. We make that set by taking, in turn, every single thing that’s in ‘D’. And find everything in the range that’s matched by ‘f’ to those things in ‘D’. Collect them all together. This set, ‘f(D)’, is “the image of D under f”.
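In code the idea is almost nothing, which is part of its charm. A small Python sketch (the particular function and subset here are my own examples):

```python
def image(f, domain_subset):
    """The image of a set under f: everything in the range that f matches some element to."""
    return {f(x) for x in domain_subset}

D = {-2, -1, 0, 1, 2}
print(image(lambda x: x * x, D))  # {0, 1, 4}
```

Notice the image can be smaller than the set we started with: squaring folds $-2$ and $2$ onto the same point, so five inputs produce only three outputs.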

We use images a lot when we’re studying how functions work. A function that maps a simple lump into a simple lump of about the same size is one thing. A function that maps a simple lump into a cloud of disparate particles is a very different thing. A function that describes how physical systems evolve will preserve the volume and some other properties of these lumps of space. But it might stretch out and twist around that space, which is how we discovered chaos.

Properly speaking, the range of a function ‘f’ is just the image of the whole domain under that ‘f’. But we’re not usually that careful about defining ranges. We’ll say something like ‘the domain and range are the sets of real numbers’ even though we only need the positive real numbers in the range. Well, it’s not like we’re paying for unnecessary range. Let me call the whole domain ‘X’, because I went and used ‘D’ earlier. Then the range, let me call that ‘Y’, would be ‘Y = f(X)’.

Images will turn up again. They’re a handy way to let us get at some useful ideas.

## Reading the Comics, January 27, 2015: Rabbit In A Trapezoid Edition

So the reason I fell behind on this Reading the Comics post is that I spent more time than I should have dithering about which ones to include. I hope it’s not disillusioning to learn that I have no clearly defined rules about what comics to include and what to leave out. It depends on how clearly mathematical in content the comic strip is; but it also depends on how much stuff I have gathered. If there’s a slow week, I start getting more generous about what I might include. And last week gave me a string of comics that I could argue my way into including, but few that obviously belonged. So I had a lot of time dithering.

To make it up to you, at the end of the post I should have our pet rabbit tucked within a trapezoid of his own construction. If that doesn’t make everything better I don’t know what will.

Mark Pett’s Mr Lowe for the 22nd of January (a rerun from the 19th of January, 2001) is really a standardized-test-question joke. But it brings up a debate about cultural biases in standardized tests that I don’t remember hearing lately. I may just be moving in the wrong circles. I remember self-assured rich white males explaining how it’s absurd to think cultural bias could affect test results since, after all, they’re standardized tests. I’ve sulked some around these parts about how I don’t buy mathematics’ self-promoted image of being culturally neutral either. A mathematical truth may be universal, but that we care about this truth is not. Anyway, Pett uses a mathematics word problem to tell the joke. That was probably the easiest way to put a cultural bias into a panel that

T Lewis and Michael Fry’s Over The Hedge for the 25th of January uses a bit of calculus to represent “a lot of hard thinking”. Hammy the Squirrel particularly is thinking of the Fundamental Theorem of Calculus. This particular part is the one that says the derivative of the integral of a function is the original function. It’s part of how integration and differentiation link together. And it shows part of calculus’s great appeal. It has those beautiful long-s integral signs that make this part of mathematics look like artwork.
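In symbols, the piece Hammy is contemplating reads $\frac{d}{dx} \int_a^x f(t)\,dt = f(x)$, for a continuous function $f$ and a fixed lower limit $a$: differentiating the accumulated integral hands back the original function.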

Leigh Rubin’s Rubes for the 25th of January is a panel showing “Schrödinger’s Job Application”. It’s referring to Schrödinger’s famous thought experiment, meant to show there are things we don’t understand about quantum mechanics. It describes a way a quantum phenomenon can be arranged to have distinct results in the everyday environment. The mathematics suggests that a cat, poisoned or not by toxic gas released or not by the decay of one atom, would be both alive and dead until some outside observer checks and settles the matter. How can this be? For that matter, how can the cat not be a qualified judge of whether it’s alive? Well, there are things we don’t understand about quantum mechanics.

Roy Schneider’s The Humble Stumble for the 26th of January (a rerun from the 30th of January, 2007) uses a bit of mathematics to mark Tommy, there, as a frighteningly brilliant weirdo. The equation is right, although trivial. The force it takes to keep something with a mass m moving in a circle of radius R at the linear speed v is $\frac{m v^2}{R}$. The radius of the Moon’s orbit around the Earth is strikingly close to sixty times the Earth’s radius. The Ancient Greeks were able to argue that from some brilliantly considered geometry. Here, $R_E$ gets used as a name for “the radius of the Earth”. So the force holding the Moon in its orbit has to be approximately $\frac{m v^2}{60 R_E}$. That’s if we say m is the mass of the Moon, and v is its linear speed, and if we suppose the Moon’s orbit is a circle. It nearly is, and this would give us a good approximate answer to how much force holds the Moon in its orbit. It would be only a start, though; the precise movements of the Moon are surprisingly complicated. Newton himself could not fully explain them, even with the calculus and physics tools he invented for the task.
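Newton’s own famous check of this is quick to reproduce. If gravity weakens with the square of distance, then the Moon’s centripetal acceleration at sixty Earth radii should be about $g/60^2$. A sketch in Python, using standard rounded values for the Earth’s radius and the sidereal month:

```python
import math

R_E = 6.371e6        # radius of the Earth, meters
g = 9.81             # surface gravity, m/s^2
T = 27.32 * 86400    # sidereal month (the Moon's orbital period), seconds

r = 60 * R_E                    # radius of the Moon's nearly circular orbit
v = 2 * math.pi * r / T         # the Moon's linear speed along that orbit
a_centripetal = v**2 / r        # the v^2 / R of the comic's equation, per unit mass
a_inverse_square = g / 60**2    # what an inverse-square law of gravity predicts

print(a_centripetal, a_inverse_square)  # both come out around 2.7e-3 m/s^2
```

The two accelerations agree to within about one percent, which is the kind of coincidence that encourages a person to keep inventing calculus.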

Dave Whamond’s Reality Check for the 26th of January isn’t quite the anthropomorphic-numerals joke for this essay. But we do get personified geometric constructs, which is close, and some silly wordplay. Much as I like the art for Over The Hedge showcasing a squirrel so burdened with thoughts that his head flops over, this might be my favorite of this bunch.

Dave Blazek’s Loose Parts for the 27th of January is a runner-up for the silly jokes trophy this time around.

Now I know what you’re thinking: isn’t that actually a trapezoidal prism, underneath a rectangular prism? Yes, I suppose so. The only people who’re going to say so are trying to impress people by saying so, though. And those people won’t be impressed by it. I’m sorry. We gave him the box because rabbits generally like having cardboard boxes to go in and chew apart. He did on his own the pulling-in of the side flaps to make it stand so trapezoidal.

## Reading the Comics, November 28, 2014: Greatest Hits Edition?

I don’t ever try speaking for Comic Strip Master Command, and it almost never speaks to me, but it does seem like this week’s strips mentioning mathematical themes was trying to stick to the classic subjects: anthropomorphized numbers, word problems, ways to measure time and space, under-defined probability questions, and sudoku. It feels almost like a reunion weekend to have all these topics come together.

Dan Thompson’s Brevity (November 23) is a return to the world-of-anthropomorphic-numbers kind of joke, and a pun on the arithmetic mean, which is after all the statistic which most lends itself to puns, just edging out the “range” and the “single-factor ANOVA F-Test”.

Phil Frank and Joe Troise’s The Elderberries (November 23, rerun) brings out word problem humor, using train-leaves-the-station humor as a representative of the kinds of thinking academics do. Nagging slightly at me is that I think the strip had established the Professor as a professor of philosophy, and while it’s certainly not unreasonable for a philosopher to be interested in mathematics I wouldn’t expect this kind of mathematics to strike him as very interesting. But then there is the need to get the idea across in two panels, too.

Jonathan Lemon’s Rabbits Against Magic (November 25) brings up a way of identifying the time — “half seven” — which recalls one of my earliest essays around here, “How Many Numbers Have We Named?”, because the construction is one that I find charming and that I was glad to hear was still current. “Half seven” strikes me as similar in construction to saying a number as “five and twenty” instead of “twenty-five”, although I’m ignorant as to whether there actually is any similarity.

Scott Hilburn’s The Argyle Sweater (November 26) brings out a joke that I thought had faded out back around, oh, 1978, when the United States decided it wasn’t going to try converting to metric after all, now that we had two-liter bottles of soda. The curious thing about this sort of hyperconversion (it’s surely a satiric cousin to the hypercorrection that makes people mangle a sentence in the misguided hope of perfecting it) — besides that the “yard” in Scotland Yard is obviously not a unit of measure — is the notion that it’d be necessary to update idiomatic references that contain old-fashioned units of measurement. Part of what makes idioms at all interesting is that they can be old-fashioned while still making sense; “in for a penny, in for a pound” is a sensible thing to say in the United States, where the pound hasn’t been legal tender since 1857; why would (say) “an ounce of prevention is worth a pound of cure” be any different? Other than that it’s about the only joke easily found on the ground once you’ve decided to look for jokes in the “systems of measurement” field.

Mark Heath’s Spot the Frog (November 26, rerun) I’m not sure actually counts as a mathematics joke, although it’s got me intrigued: Surly Toad claims to have a stick in his mouth to use to give the impression of a smile, or 37 (“Sorry, 38”) other facial expressions. The stick’s shown as a bundle of maple twigs, wound tightly together and designed to take shapes easily. This seems to me the kind of thing that’s grown as an application of knot theory, the study of, well, it’s almost right there in the name. Knots, the study of how strings of things can curl over and around and cross themselves (or other strings), seemed for a very long time to be a purely theoretical playground, not least because, to be addressable by theory, the knots had to be made of an imaginary material that could be stretched arbitrarily finely and pushed frictionlessly through itself, which allows for good theoretical work but doesn’t act a thing like a shoelace. Then I think everyone was caught by surprise when it turned out the mathematics of these very abstract knots also describes the way proteins and other long molecules fold, and unfold; and from there it’s not too far to discovering wonderful structures that can change almost by magic with slight bits of pressure. (For my money, the most astounding thing about knots is that you can describe thermodynamics — the way heat works — on them, but I’m inclined towards thermodynamic problems.)

Henry Scarpelli and Crag Boldman’s Archie (November 28, rerun) offers an interesting problem: when Veronica was out of town for a week, Archie’s test scores improved. Is there a link? This kind of thing is awfully interesting to study, and awfully difficult to: there’s no way to run a truly controlled experiment to see whether Veronica’s presence affects Archie’s test scores. After all, he never takes the same test twice, even if he re-takes a test on the same subject (and even if the re-test were the exact same questions, he would go into it the second time with relevant experience that he didn’t have the first time). And a couple good test scores might be relevant, or might just be luck, or it might be that something else happened to change that week that we haven’t noticed yet. How can you trace down plausible causal links in a complicated system?

One approach is an experimental design that, at least in the psychology textbooks I’ve read, gets called A-B-A (or A-B-A-B) experiment design: measure whatever it is you’re interested in during a normal time, “A”, before whatever it is whose influence you want to see has taken hold. Then measure it for a time “B” where something has changed, like Veronica being out of town. Then go back as best as possible to the normal situation, “A” again; and, if your time and research budget allow, going back to another stretch of “B” (and, hey, maybe even “A” again) helps. If there is an influence, it ought to appear sometime after “B” starts, and fade out again after the return to “A”. The more you’re able to replicate this the sounder the evidence for a link is.
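To make the comparison concrete, here is a toy sketch of evaluating such a design. Every number in it is invented for illustration (nothing here comes from the strip): we just compare the average of some measured quantity across the phases.

```python
# Toy A-B-A evaluation: compare the mean of a measured quantity per phase.
# All scores below are made up purely for illustration.

def phase_means(measurements):
    """Return the mean measurement for each phase, keyed by phase name."""
    return {phase: sum(vals) / len(vals) for phase, vals in measurements.items()}

scores = {
    "A1": [72, 70, 75],   # baseline ("A"): the normal situation
    "B":  [88, 85, 90],   # intervention ("B"): something has changed
    "A2": [71, 74, 73],   # return to baseline ("A" again)
}

means = phase_means(scores)
# Evidence for a link: the "B" mean stands apart from both "A" means,
# appearing after "B" starts and fading after the return to "A".
linked = means["B"] > means["A1"] and means["B"] > means["A2"]
print(means, linked)
```

With more replications of the A and B stretches the same comparison just repeats, and the more consistently the “B” phases stand apart, the sounder the case for a link.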

(We’re actually in the midst of something like this around home: our pet rabbit was diagnosed with a touch of arthritis in his last checkup, but mildly enough and in a strange place, so we couldn’t tell whether it’s worth putting him on medication. So we got a ten-day prescription and let that run its course and have tried to evaluate whether it’s affected his behavior. This has proved difficult to say because we don’t really have a clear way of measuring his behavior, although we can say that the arthritis medicine is apparently his favorite thing in the world, based on his racing up to take the liquid and his trying to grab it if we don’t feed it to him fast enough.)

Ralph Hagen’s The Barn (November 28) has Rory the sheep wonder about the chances he and Stan the bull should be together in the pasture, given how incredibly vast the universe is. That’s a subtly tricky question to ask, though. If you want to show that everything that ever existed is impossibly unlikely you can work out, say, how many pastures there are on Earth, multiply that by an estimate of how many Earth-like planets there likely are in the universe, and take one divided by that number and marvel at Rory’s incredible luck. But that number’s fairly meaningless: among other obvious objections, wouldn’t Rory wonder the same thing if he were in a pasture with Dan the bull instead? And Rory wouldn’t be wondering anything at all if it weren’t for the accident by which he happened to be born; how impossibly unlikely was that? And that Stan was born too? (And, obviously, that all Rory and Stan’s ancestors were born and survived to the age of reproducing?)

Except that in this sort of question we seem to take it for granted, for instance, that all Stan’s ancestors would have done their part by existing and bringing Stan around. And we’d take it for granted that the pasture should exist, rather than be a farmhouse or an outlet mall or a rocket base. To come up with odds that mean anything we have to work out what the probability space of all possible relevant outcomes is, and what the set of all conditions that satisfy the concept of “we’re stuck here together in this pasture” is.
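The back-of-envelope version of Rory’s computation looks something like this. Every figure below is a placeholder I made up, not a real estimate; the point is only that multiplying big counts together and taking the reciprocal will always produce an impressively tiny number for any particular outcome.

```python
# Rory's "impossibly unlikely" arithmetic, with invented placeholder counts.
pastures_on_earth = 10**8     # made-up guess at pastures on Earth
earthlike_planets = 10**20    # made-up guess at Earth-like planets

# Total candidate pastures across the universe, under these guesses.
candidate_pastures = pastures_on_earth * earthlike_planets

# "Probability" of being in this particular pasture: one over the count.
# Any specific pasture gets an equally tiny number, which is the problem.
p = 1 / candidate_pastures
print(p)
```

The number is astronomically small whichever pasture you pick, which is exactly why it tells you nothing about Rory’s luck until you’ve pinned down the probability space of relevant outcomes.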

Mark Pett’s Lucky Cow (November 28) brings up sudoku puzzles and the mystery of where they come from, exactly. This prompted me to wonder about the mechanics of making sudoku puzzles, and while it certainly seems they could be automated pretty well, making your own amounts to just writing the digits one through nine nine times over, and then blanking out squares until the puzzle is hard. A casual search of the net suggests the most popular way of making sure you haven’t blanked out squares so that the puzzle becomes unsolvable (in this case, that two or more solutions fit the revealed information) is to let an automated sudoku solver tell you. That’s true enough, but I don’t see any mention of algorithms by which one could check if you’re blanking out a solution-foiling set of squares. I don’t know whether that reflects there being no algorithm for this that’s more efficient than “try out possible solutions”, or just no algorithm being more practical. It’s relatively easy to make a computer try out possible solutions, after all.
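The “let a solver tell you” check amounts to counting solutions and stopping as soon as you find a second one. Here’s a minimal sketch of that idea; the grid it tests is one I built from a standard shifting pattern, not from any published puzzle, and the blanked cells are arbitrary.

```python
# Uniqueness check for a sudoku puzzle: count completions by backtracking,
# stopping once a second solution appears (so 1 means uniquely solvable).

def count_solutions(grid, cap=2):
    """Count completions of a 9x9 grid (0 = blank), up to `cap`."""
    def ok(r, c, v):
        if any(grid[r][j] == v for j in range(9)): return False
        if any(grid[i][c] == v for i in range(9)): return False
        br, bc = 3 * (r // 3), 3 * (c // 3)
        return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                total = 0
                for v in range(1, 10):
                    if ok(r, c, v):
                        grid[r][c] = v
                        total += count_solutions(grid, cap - total)
                        grid[r][c] = 0           # undo the guess
                        if total >= cap:
                            break
                return total
    return 1  # no blanks left: exactly one completed grid

# Make a full valid grid by a shifting pattern, then blank a few cells.
full = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
puzzle = [row[:] for row in full]
for r, c in [(0, 0), (2, 5), (4, 4), (7, 8)]:
    puzzle[r][c] = 0

print(count_solutions(puzzle))  # 1, since each blank is alone in its row
```

Blank out enough squares that some cell’s value is no longer forced and the count jumps to 2 (or more), which is the puzzle-maker’s signal to put a clue back.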

A paper published by Mária Ercsey-Ravasz and Zoltán Toroczkai in Nature Scientific Reports in 2012 describes the recasting of the problem of solving sudoku into a deterministic, dynamical system, and matches the difficulty of a sudoku puzzle to chaotic behavior of that system. (If you’re looking at the article and despairing, don’t worry. Go to the ‘Puzzle hardness as transient chaotic dynamics’ section, and read the parts of the sentence that aren’t technical terms.) Ercsey-Ravasz and Toroczkai point out their chaos-theory-based definition of hardness matches pretty well, though not perfectly, the estimates of difficulty provided by sudoku editors and solvers. The most interesting (to me) result they report is that sudoku puzzles which give you the minimum information — 17 or 18 non-blank numbers to start — are generally not the hardest puzzles. 21 or 22 non-blank numbers seem to match the hardest of puzzles, though they point out that difficulty has got to depend on the positioning of the non-blank numbers and not just how many there are.

## Reading the Comics, February 1, 2014

For today’s round of mathematics-themed comic strips a little deeper pattern turns out to have emerged: π, that most popular of the transcendental numbers, turns up quite a bit in the comics that drew my attention the past couple weeks. Let me explain.

Dan Thompson’s Brevity (January 23) returns to the anthropomorphic numbers racket, with the kind of mathematics puns designed to get the strip pasted to the walls of the teacher’s lounge. I wonder how that’s going for him.

Greg Evans’s Luann Againn (January 25, rerun from 1986) has Luann not understanding how to work out an arithmetic problem until it’s shown how to do it: use the calculator. This is a joke that’s probably going to be with us as long as there are practical, personal calculating devices, because it is a good question why someone should bother learning arithmetic when a device will do it faster and better by every reasonable measure. I admit not being sure there is much point to learning arithmetic, other than as practice in learning how to apply algorithms. I suppose it also stands as a way to get people who are really into mathematics to highlight themselves: someone who memorizes the times tables is probably interested in the kinds of systematic thought that mathematics depends on. But that’s a weak reason to demand it of every student. I suppose arithmetic is very testable, but that’s an even worse reason to make students go through it.

Mind you, I am quite open to the idea that arithmetic drills are useful for students. That I don’t know a particular reason why I should care whether a seventh-grader can divide 391 by 17 by hand doesn’t mean that I don’t think there is one.

## Reading the Comics, December 29, 2013

I haven’t quite got seven comics mentioning mathematics themes this time around, but the end of the year is so busy that maybe it’s better to publish what I have and not worry about an arbitrary quota like mine.

Wulff and Morgenthaler’s WuMo (December 16) uses a spray of a bit of mathematics to stand in for “something just too complicated to understand”, and even uses a caricature of Albert Einstein to represent the person who’s just too smart to be understood. I’m a touch disappointed that, as best I can tell, the equations sprayed out don’t mean anything; I’ve enjoyed WuMo — a new comic to North American audiences — so far, and kind of expected they would get an irrelevant detail like that plausibly right.

I’m also interested that sixty years after his death the portrait of Einstein still hasn’t been topped as an image for The Really, Really Smart Guy. Possibly nobody since him has managed to combine being both incredibly important — even if it weren’t for relativity, Einstein would be an important figure in science for his work in quantum mechanics, and if he didn’t have relativity or quantum mechanics, he’d still be important for statistical mechanics — and iconic-looking, which I guess really means he let his hair grow wild. I wonder if Stephen Hawking will be able to hold some of that similar pop cultural presence.