With this essay, I finally finish the comic strips from the first full week of February. You know how these things happen. I’ll get to the comics from last week soon enough, in essays gathered under this link. For now, some pictures with words:

Art Sansom and Chip Sansom’s **The Born Loser** for the 7th builds on one of the probability questions people meet most often: the probability of an event in the weather forecast. Predictions for what the weather will do are so common that it takes work to realize there’s something difficult about the concept. The weather is a very complicated fluid-dynamics problem. It’s almost certainly chaotic. A chaotic system is deterministic, but unpredictable, because a meaningful prediction requires precision that’s impossible to ever have in the real world. The slight difference between the number π and the number 3.1415926535897932 throws calculations off too quickly. But the determinism implies that the “chance” of snow on the weekend means about the same thing as the “chance” that Valentine’s Day was on the weekend this year. The way the system is set up implies it will be one or the other. This is a probability distribution, yes, but it’s a weird one.
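That sensitivity shows up in toy systems far simpler than the weather. Here’s a little sketch of my own (nothing from the strip, and certainly not a weather model) using the logistic map, a standard textbook example of chaos. Two starting values that differ only in the sixteenth decimal place end up disagreeing completely:

```python
# Toy illustration of sensitive dependence: the logistic map x -> 4x(1 - x)
# is chaotic. A difference of about 1e-16 in the starting value grows,
# roughly doubling each step, until the two trajectories are unrelated.

def logistic(x, steps):
    """Apply the logistic map x -> 4x(1 - x) the given number of times."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a = 0.1234567890123456
b = 0.1234567890123457   # differs from a by about 1e-16

for steps in (10, 30, 60):
    print(steps, abs(logistic(a, steps) - logistic(b, steps)))
```

After ten steps the two trajectories still agree to many decimal places; by sixty steps the tiny initial difference has blown up to something the size of the whole interval.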

What we talk about when we say the “chance” of snow, or of Valentine’s Day falling on a weekend, is a probability of ignorance. It’s our estimate of how likely it is that the true state of things is one we find interesting. Here, past knowledge can guide us. If we know that the past hundred times the weather was like this on Friday, snow came on the weekend less than ten times, we have evidence that these conditions don’t often lead to snow. This is backed up, these days, by numerical simulations. They are not perfect models of the weather. But they represent something very like the weather, and they stay reasonably good for several days or a week or so.

And we have the question of whether the forecast is right. That doubt is the joke here. Still, there must be some measure of confidence in a forecast. Around here, the weather forecast is for a cold but not abnormally cold week ahead. This seems likely. A forecast that the temperature was to jump into the 80s and stay there for the rest of February would be so implausible that we’d ignore it altogether. A forecast that it would be ten degrees (Fahrenheit) below normal, or above, though? We could accept that pretty easily.

Proving a forecast wrong takes work, though. Mostly it takes evidence. If we look at a hundred times the forecast was for a 10% chance of snow, and it actually snowed 11% of the time, is it implausible that the forecast was right? Not really, no more than a coin coming up tails 52 times out of 100 would be suspicious. If it actually snowed 20% of the time? That might suggest that the forecast was wrong. If it snowed 80% of the time? That suggests something’s very wrong with the forecasting methods. It’s hard to say any one forecast is wrong, but we can get a sense of which forecasters are more often right than others.
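That kind of check can be sketched in a few lines. Here’s an illustration of my own, assuming the simplest possible model, in which each of the hundred forecast days is an independent coin toss with a 10% chance of snow. The question is how surprising each observed count would be if the forecasts were right:

```python
# If "10% chance of snow" forecasts were correct, how surprising would it
# be to see k or more snowy days out of n? Model each day as an
# independent Bernoulli trial and sum the binomial tail.

from math import comb

def tail_probability(n, k, p):
    """P(at least k successes in n independent trials at success rate p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for snowy in (11, 20, 80):
    print(snowy, "snowy days:", tail_probability(100, snowy, 0.10))
```

Eleven snowy days out of a hundred turns out to be quite ordinary; twenty is already a red flag; eighty would be astronomically unlikely under correct forecasts, which is the essay’s point in miniature.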

Doug Savage’s **Savage Chickens** for the 7th is a cute little bit about counting. Counting things out is an interesting process; for some people, hearing numbers said aloud will disrupt their progress. For others, it won’t, but seeing numbers may disrupt it instead.

Niklas Eriksson’s **Carpe Diem** for the 8th is a bit of silliness about the mathematical sense of animals. Studying how animals understand number is a real science, and it turns up interesting results. It shouldn’t be surprising that animals can do a fair bit of counting and some geometric reasoning, although it’s rougher than even our untrained childhood expertise. We get a good bit of our basic mathematical ability from somewhere, because we’re evolved to notice some things. It’s silly to suppose that dogs would be able to state the Pythagorean Theorem, at least in a form that we recognize. But it is probably someone’s good research problem to work out whether we can test whether dogs understand the implications of the theorem, and whether it helps them go about dog work any.

Zach Weinersmith’s **Saturday Morning Breakfast Cereal** for the 8th speaks of the “Cinnamon Roll Delta Function”. The point is clear enough on its own. So let me spoil a good enough bit of fluff by explaining that it’s a reference to something. There is, lurking in mathematical physics, a concept called the “Dirac delta function”, named for that innovative and imaginative fellow Paul Dirac. It has some weird properties. Its domain is … well, it has many domains. The real numbers. The set of ordered pairs of real numbers, R^{2}. The set of ordered triples of real numbers, R^{3}. Basically any space you like, there’s a Dirac delta function for it. The Dirac delta function is equal to zero *everywhere* in this domain, except at one point, the “origin”. At that one point, though? There it’s equal to …

Here we step back a moment. We really, really, really want to say that it’s infinitely large at that point, which is what Weinersmith’s graph shows. If we’re being careful, we don’t say that, though. Because if we *did* say that, then we would lose the thing that we use the Dirac delta function *for*. The Dirac delta function, represented with δ, is a function with the property that for any set D in the domain that you choose to integrate over,

\int_D \delta(x) \, dx = 1

whenever the origin is inside the region of integration D. It’s equal to 0 if the origin is not inside the region of integration. This, whatever the set is. If we use the ordinary definitions for what it means to integrate a function, and say that the delta function is “infinitely big” at the origin, then this won’t happen; the integral will be zero everywhere.

This is one of those cases where physicists worked out new mathematical concepts, and the mathematicians had to come up with a rationalization by which this made sense. This is because the function is quite useful. It allows us, mathematically, to turn descriptions of point particles into descriptions of continuous fields. And vice-versa: we can turn continuous fields into point particles. It turns out we like to do this a lot. So if we’re being careful we don’t say just what the Dirac delta function “is” at the origin, only what properties it has. And if we’re being further careful we’ll speak of it as a “distribution” rather than a function.

But colloquially, we think of the Dirac delta function as one that’s zero everywhere, except for the one point where it’s somehow “a really big infinity” and we try to not look directly at it.
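One way to make that colloquial picture concrete, in a numerical sketch of my own, is the standard trick of standing in for the delta with ever-narrower Gaussian bumps, each of total area 1. Integrating one of these over a region that contains the origin gives about 1; over a region that misses the origin, about 0:

```python
# Approximate the Dirac delta with a narrow normalized Gaussian and
# check its defining property numerically: the integral is about 1 when
# the region contains the origin, about 0 when it does not.

from math import exp, pi, sqrt

def gaussian(x, eps):
    """Normalized Gaussian of width eps; approaches the delta as eps -> 0."""
    return exp(-x * x / (2 * eps * eps)) / (eps * sqrt(2 * pi))

def integrate(f, a, b, n=100_000):
    """Simple midpoint-rule integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

eps = 0.001
print(integrate(lambda x: gaussian(x, eps), -1, 1))   # about 1
print(integrate(lambda x: gaussian(x, eps), 0.5, 1))  # about 0
```

Shrinking `eps` makes the spike taller and narrower while the area stays 1, which is the behavior the careful definition pins down without ever saying what the value at the origin “is”.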

The sharp-eyed observer may notice that Weinersmith’s graph does not put the great delta spike at the origin, that is, where the x-axis represents zero. This is true. We can create a delta-like function with its singular spot anywhere we like by the process called “translation”. That is, if we would like the function to be zero everywhere except at the point a, then we define a new function δ_a by the rule δ_a(x) = δ(x − a), and are done. Translation is a simple step, but it turns out to be useful *all* the *time*.

Thanks again for reading. See you soon.

Rare for me to disagree with Zach Weinersmith, but I think cinnamon rolls are pretty darn tasty throughout. Of course, my family’s recipe is the best. (And our other one 2nd best, but that’s bragging.)


Yeah, I similarly don’t know that I’ve ever had a cinnamon roll I didn’t think was tasty enough for my needs. It’s possible this reflects my big-family childhood, which got me practiced in thinking that all food was pretty good, really, as long as you got to eat it.
