## The End 2016 Mathematics A To Z: Principal

Functions. They’re at the center of so much mathematics. They have three pieces: a domain, a range, and a rule. The one thing functions absolutely must do is match stuff in the domain to one and only one thing in the range. So this is where it gets tricky.

## Principal.

Thing with this one-and-only-one thing in the range is it’s not always practical. Sometimes it only makes sense to allow for something in the domain to match several things in the range. For example, suppose we have the domain of positive numbers. And we want a function that gives us the numbers which, squared, are whatever the original number was. For any positive real number there’s two numbers that do that. 4 should match to both +2 and -2.

You might ask why I want a function that tells me the numbers which, squared, equal something. I ask back, what business is that of yours? I want a function that does this and shouldn’t that be enough? We’re getting off to a bad start here. I’m sorry; I’ve been running ragged the last few days. I blame the flat tire on my car.

Anyway. I’d want something like that function because I’m looking for what state of things makes some other thing true. This turns up often in “inverse problems”, problems in which we know what some measurement is and want to know what caused the measurement. We do that sort of problem all the time.

We can handle these multi-valued functions. Of course we can. Mathematicians are as good at loopholes as anyone else is. Formally we declare that the range isn’t the real numbers but rather sets of real numbers. My what-number-squared function then matches ‘4’ in the domain to the set of numbers ‘+2 and -2’. The set has several things in it, but there’s just the one set. Clever, huh?
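That loophole is easy to sketch in code. Here’s a hypothetical `square_roots` function (the name is mine, just for illustration) whose range is sets of real numbers, so each input still matches one and only one thing: one set.

```python
import math

def square_roots(x):
    """Match a positive number to the set of numbers which, squared, give x.

    The range is sets of real numbers, so every input maps to exactly
    one thing: a single set, even though the set holds two numbers.
    """
    r = math.sqrt(x)
    return {r, -r}

print(square_roots(4))  # {2.0, -2.0}: one set, two numbers in it
```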

This sort of thing turns up a lot. There’s two numbers that, squared, give us any real number (except zero). There’s three numbers that, cubed, give us any real number (again except zero). Polynomials might have a whole bunch of numbers that make some equation true. Trig functions are worse. The tangent of 45 degrees equals 1. So is the tangent of 225 degrees. Also 405 degrees. Also -45 degrees. Also -585 degrees. OK, a mathematician would use radians instead of degrees, but that just changes what the numbers are. It doesn’t change that there’s infinitely many of them.

It’s nice to have options. We don’t always want options. Sometimes we just want one blasted simple answer to things. It’s coded into the language. We say “the square root of four”. We speak of “the arctangent of 1”, which is to say, “the angle with tangent of 1”. We only say “all square roots of four” if we’re making a point about overlooking options.

If we’ve got a set of things, then we can pick out one of them. This is obvious, which means it is so very hard to prove. We just have to assume we can. Go ahead; assume we can. Our pick of the one thing out of this set is the “principal”. It’s not any more inherently right than the other possibilities. It’s just the one we choose to grab first.

So. The principal square root of four is positive two. The principal arctangent of 1 is 45 degrees, or in the dialect of mathematicians π divided by four. We pick these values over other possibilities because they’re nice. What makes them nice? Well, they’re nice. Um. Most of their numbers aren’t that big. They use positive numbers if we have a choice in the matter. Deep down we still suspect negative numbers of being up to something.

If nobody says otherwise then the principal square root is the positive one, or the one with a positive number in front of the imaginary part. If nobody says otherwise the principal arcsine is between -90 and +90 degrees (-π/2 and π/2). The principal arccosine is between 0 and 180 degrees (0 and π), unless someone says otherwise. The principal arctangent is … between -90 and 90 degrees, unless it’s between 0 and 180 degrees. You can count on the 0 to 90 part. Use your best judgement and roll with whatever develops for the other half of the range there. There’s not one answer that’s right for every possible case. The point of a principal value is to pick out one answer that’s usually a good starting point.
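These are the conventions standard math libraries implement. Python’s `math` module, for one, returns exactly these principal values:

```python
import math

# Principal arcsine: always in [-pi/2, pi/2]
print(math.asin(1))   # pi/2, never 5*pi/2 or any of the other candidates
# Principal arccosine: always in [0, pi]
print(math.acos(-1))  # pi
# Principal arctangent: always in (-pi/2, pi/2)
print(math.atan(1))   # pi/4, the 45-degree answer
# Principal square root: the non-negative one
print(math.sqrt(4))   # 2.0, not -2.0
```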

When you stare at what it means to be a function you realize that there’s a difference between the original function and the one that returns the principal value. The original function has a range that’s “sets of values”. The principal-value version has a range that’s just one value. If you’re being kind to your audience you make some note of that. Usually we note this by capitalizing the start of the function: “arcsin z” gives way to “Arcsin z”. “Log z” would be the principal-value version of “log z”. When you start pondering logarithms for negative numbers or for complex-valued numbers you get multiple values, much the same way the arcsine function does.

And it’s good to warn your audience which principal value you mean, especially for the arc-trigonometric-functions or logarithms. (I’ve never seen someone break the square root convention.) The principal value is about picking the most obvious and easy-to-work-with value out of a set of them. It’s just impossible to get everyone to agree on what the obvious is.

## How November 2016 Treated My Mathematics Blog

I didn’t forget about reviewing my last month’s readership statistics. I just ran short on time to gather and publish results is all. But now there’s an hour or so free to review what WordPress says my readership was like in November and I can see what was going on.

Well.

So, that was a bit disappointing. The start of an A To Z Glossary usually sees a pretty good bump in my readership. The steady publishing of a diverse set of articles usually helps. My busiest months have always been ones with an A To Z series going on. This November, though, there were 923 page views around here, from 575 distinct visitors. That’s up from October, with 907 page views and 536 distinct visitors. But it’s essentially the same as September’s 922 page views from 575 distinct visitors. I blame the US presidential election. I don’t think it’s just that everyone I can still speak to was depressed by it. My weekly readership the two weeks after the election was about three-quarters that of the week before or the last two weeks of November. I’d be curious what other people saw. My humor blog didn’t see as severe a crash the week of the 14th, though.

Well, the people who were around liked what they saw. There were 157 pages liked in November, up from 115 in September and October. That’s lower than what June and July, with Theorem Thursdays posts, had, and below what the A To Z in March and April drew. But it’s up still. Comments were similarly up, to 35 in November from October’s 24 and September’s 20. That’s up to around what Theorem Thursdays attracted.

December starts with my mathematics blog having had 43,145 page views from a reported 18,022 distinct viewers. And it had 636 WordPress.com followers. You can be among them by clicking the “Follow” button on the upper right corner. It’s up from the 626 WordPress.com followers I had at the start of November. That’s not too bad, considering.

I had a couple of perennial favorites among the most popular articles in November:

This is the first time I can remember that a Reading The Comics post didn’t make the top five.

Sundays are the most popular days for reading posts here. 18 percent of page views come that day. I suppose that’s because I have settled on Sunday as a day to reliably post Reading the Comics essays. The most popular hour is 6 pm, which drew 11 percent of page views. In October Sundays were also the most popular day, with 18 percent of page views, and 6 pm the most popular hour, though then it drew 14 percent of page views. September was the same. I don’t know why 6 pm is so special.

As ever there wasn’t any search term poetry. But there were some good searches, including:

• how many different ways can you draw a trapizium
• comics back ground of the big bang nucleosynthesis
• why cramer’s rule sucks (well, it kinda does)
• oliver twist comic strip digarm
• work standard approach sample comics
• what is big bang nucleusynthesis comics strip

I don’t understand the Oliver Twist or the nucleosynthesis stuff.

And now the roster of countries and their readership, which for some reason is always popular:

| Country | Page Views |
| --- | --- |
| United States | 534 |
| United Kingdom | 78 |
| India | 36 |
| Philippines | 22 |
| Germany | 21 |
| Austria | 18 |
| Puerto Rico | 17 |
| Slovenia | 14 |
| Singapore | 13 |
| France | 12 |
| Sweden | 8 |
| Spain | 8 |
| New Zealand | 7 |
| Australia | 6 |
| Israel | 6 |
| Pakistan | 5 |
| Hong Kong SAR China | 4 |
| Portugal | 4 |
| Belgium | 3 |
| Colombia | 3 |
| Netherlands | 3 |
| Norway | 3 |
| Serbia | 3 |
| Thailand | 3 |
| Brazil | 2 |
| Croatia | 2 |
| Finland | 2 |
| Malaysia | 2 |
| Poland | 2 |
| Switzerland | 2 |
| Argentina | 1 |
| Bulgaria | 1 |
| Cameroon | 1 |
| Cyprus | 1 |
| Czech Republic | 1 (***) |
| Denmark | 1 (*) |
| Japan | 1 (*) |
| Lithuania | 1 |
| Macedonia | 1 |
| Mexico | 1 (*) |
| Russia | 1 |
| Saudi Arabia | 1 (*) |
| South Africa | 1 (*) |
| United Arab Emirates | 1 (*) |
| Vietnam | 1 |

That’s 46 countries, the same as last month. 15 of them were single-reader countries; there were 20 single-reader countries in October. Japan, Mexico, Saudi Arabia, South Africa, and the United Arab Emirates have been single-reader countries for two months running. The Czech Republic has been one for four months.

Always happy to see Singapore reading me (I taught there for several years). The “European Union” listing seems to have vanished, here and on my humor blog. I’m sure that doesn’t signal anything ominous at all.

## The End 2016 Mathematics A To Z: Osculating Circle

I’m happy to say it’s another request today. This one’s from HowardAt58, author of the Saving School Math blog. He’s given me some great inspiration in the past.

## Osculating Circle.

It’s right there in the name. Osculating. You know what that is from that one Daffy Duck cartoon where he cries out “Greetings, Gate, let’s osculate” while wearing a moustache. Daffy’s imitating somebody there, but goodness knows who. Someday the mystery drives the young you to a dictionary web site. Osculate means kiss. This doesn’t seem to explain the scene. Daffy was imitating Jerry Colonna. That meant something in 1943. You can find him on old-time radio recordings. I think he’s funny, in that 40s style.

Make the substitution. A kissing circle. Suppose it’s not some playground antic one level up from the Kissing Bandit that plagues recess, yet one or two levels down from what we imagine we’d do in high school. It suggests a circle that comes really close to something, that touches it a moment, and then goes off its own way.

But then touching. We know another word for that. It’s the root behind “tangent”. Tangent is a trigonometry term. But it appears in calculus too. The tangent line is a line that touches a curve at one specific point and is going in the same direction as the original curve is at that point. We like this because … well, we do. The tangent line is a good approximation of the original curve, at least at the tangent point and for some region local to that. The tangent touches the original curve, and maybe it does something else later on. What could kissing be?

The osculating circle is about approximating an interesting thing with a well-behaved thing. So are similar things with names like “osculating curve” or “osculating sphere”. We need that a lot. Interesting things are complicated. Well-behaved things are understood. We move from what we understand to what we would like to know, often, by an approximation. This is why we have tangent lines. This is why we build polynomials that approximate an interesting function. They share the original function’s value, and its derivative’s value. A polynomial approximation can share many derivatives. If the function is nice enough, and the polynomial big enough, it can be impossible to tell the difference between the polynomial and the original function.

The osculating circle, or sphere, isn’t so concerned with matching derivatives. I know, I’m as shocked as you are. Well, it matches the first and the second derivatives of the original curve. Anything past that, though, it matches only by luck. The osculating circle is instead about matching the curvature of the original curve. The curvature is what you think it would be: it’s how much a function curves. If you imagine looking closely at the original curve and an osculating circle they appear to be two arcs that come together. They must touch at one point. They might touch at others, but that’s incidental.
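For a curve given as y = f(x), the matching can be sketched in a few lines: the first two derivatives give the curvature, the curvature gives the kissing circle’s radius, and the center sits along the normal direction. The function name here is mine, just for illustration.

```python
def osculating_circle(x, y, yp, ypp):
    """Radius and center of the osculating circle of y = f(x) at (x, y).

    yp and ypp are the first and second derivatives there; ypp must be
    nonzero (a straight line gets a tangent line, not a kissing circle).
    """
    curvature = abs(ypp) / (1 + yp ** 2) ** 1.5
    radius = 1 / curvature
    # The center lies along the normal to the curve, one radius away.
    cx = x - yp * (1 + yp ** 2) / ypp
    cy = y + (1 + yp ** 2) / ypp
    return radius, (cx, cy)

# The parabola y = x^2 at the origin: y' = 0, y'' = 2.
print(osculating_circle(0, 0, 0, 2))  # (0.5, (0.0, 0.5))
```

So at the bottom of that parabola the kissing circle has radius one-half and floats its center half a unit straight up, matching the curvature exactly at the point of contact.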

Osculating circles, and osculating spheres, sneak out of mathematics and into practical work. This is because we often want to work with things that are almost circles. The surface of the Earth, for example, is not a sphere. But it’s only a tiny bit off. It’s off in ways that you only notice if you are doing high-precision mapping. Or taking close measurements of things in the sky. Sometimes we do this. So we map the Earth locally as if it were a perfect sphere, with curvature exactly what its curvature is at our observation post.

Or we might be observing something moving in orbit. If the universe had only two things in it, and they were the correct two things, all orbits would be simple: they would be ellipses. They would have to be “point masses”, things that have mass without any volume. They never are. They’re always shapes. Spheres would be fine, but they’re never perfect spheres even. The slight difference between a perfect sphere and whatever the things really are affects the orbit. Or the other things in the universe tug on the orbiting things. Or the thing orbiting makes a course correction. All these things make little changes in the orbiting thing’s orbit. The actual orbit of the thing is a complicated curve. The orbit we could calculate is an osculating — well, an osculating ellipse, rather than an osculating circle. Similar idea, though. Call it an osculating orbit if you’d rather.

That osculating circles have practical uses doesn’t mean they aren’t respectable mathematics. I’ll concede they’re not used as much as polynomials or sine curves are. I suppose that’s because polynomials and sine curves have nicer derivatives than circles do. But osculating circles do turn up as ways to try solving nonlinear differential equations. We need the help. Linear differential equations anyone can solve. Nonlinear differential equations are pretty much impossible. They also turn up in signal processing, as ways to find the frequencies of a signal from a sampling of data. This, too, we would like to know.

We get the name “osculating circle” from Gottfried Wilhelm Leibniz. This might not surprise. Finding easy-to-understand shapes that approximate interesting shapes is why we have calculus. Isaac Newton described a way of making them in the Principia Mathematica. This also might not surprise. Of course they would on this subject come so close together without kissing.

## Reading the Comics, December 3, 2016: Cute Little Jokes Edition

Comic Strip Master Command apparently wanted me to have a bunch of easy little pieces that don’t inspire rambling essays. Message received!

Mark Litzler’s Joe Vanilla for the 27th is a wordplay joke in which any mathematical content is incidental. It could be anything put in a positive light; numbers are just easy things to arrange so. From the prominent appearance of ‘3’ and ‘4’ I supposed Litzler was using the digits of π, but if he is, it’s from some part of π that I don’t recognize. (That would be any part after the seventeenth digit. I’m not obsessive about π digits.)

Samson’s Dark Side Of The Horse is whatever the equivalent of punning is for Roman Numerals. I like Horace blushing.

John Deering’s Strange Brew for the 28th is a paint-by-numbers joke, and one I don’t see done often. And there is beauty in the appearance of mathematics. It’s not appreciated enough. I think looking at the tables of integral formulas on the inside back cover of a calculus book should prove the point, though. All those rows of integral signs and sprawls of symbols after show this abstract beauty. I’ve surely mentioned the time when the creative-arts editor for my undergraduate leftist weekly paper asked for a page of mathematics or physics work to include as a picture, too. I used the problem that inspired my “Why Stuff Can Orbit” sequence over on my mathematics blog. The editor loved the look of it all, even if he didn’t know what most of it meant.

Niklas Eriksson’s Carpe Diem for the 29th of November, 2016. I’m not sure why this has to be worked out in the break room but I guess you work out life where you do. Anyway, I’m glad to see the Grey Aliens allow for Green Aliens representing them on t-shirts.

Niklas Eriksson’s Carpe Diem for the 29th is a joke about life, I suppose. It uses a sprawled blackboard full of symbols to play the part of the proof. It’s gibberish, of course, although I notice how many mathematics cliches get smooshed into it. There’s a 3.1451 — I assume that’s a garbled rendering of the digits of π — under a square root sign. There’s an “E = mc”, I suppose a garbled bit of Einstein’s Famous Equation in there. There’s a “cos 360”. 360 evokes the number of degrees in a circle, but mathematicians don’t tend to use degrees. There’s analytic reasons why we find it nicer to use radians, for which the equivalent would be “cos 2π”. If we wrote that at all, since the cosine of 2π is one of the few cosines everyone knows. Every mathematician knows. It’s 1. Well, maybe the work just got to that point and it hasn’t been cleaned up.

Eriksson’s Carpe Diem reappears the 30th, with a few blackboards with less mathematics to suggest someone having a creative block. It does happen to us all. My experience is mathematicians don’t tend to say “Eureka” when we do get a good idea, though. It’s more often some vague mutterings and “well what if” while we form the idea. And then giggling or even laughing once we’re sure we’ve got something. This may be just me and my friends. But it is a real rush when we have it.

Dan Collins’s Looks Good On Paper for the 29th tells the Möbius strip joke. It’s a well-rendered one, though; I like that there is a readable strip in there and that it’s distorted to fit the geometry.

Henry Scarpelli and Craig Boldman’s Archie rerun for the 2nd of December tosses off the old gag about not needing mathematics now that we have calculators. It’s not a strip about that, and that’s fine.

Henry Scarpelli and Craig Boldman’s Archie rerun for the 2nd of December, 2016. Now, not to nitpick, but Jughead and Archie don’t declare it *is* a waste of time to learn mathematics or spelling when a computer can do that work. Also, why don’t we have a word like ‘calculator’ for ‘spell-checker’? I mean, yes, ‘spell-checker’ is an acceptable word, but it’s like calling a ‘calculator’ an ‘arithmetic-doer’.

Mark Anderson’s Andertoons finally appeared the 2nd. It’s a resistant-student joke. And a bit of wordplay.

Ruben Bolling’s Super-Fun-Pak Comix from the 2nd featured an installment of Tautological But True. One might point out they’re using “average” here to mean “arithmetic mean”. There probably isn’t enough egg salad consumed to let everyone have a median-sized serving. And I wouldn’t make any guesses about the geometric mean serving. But the default meaning of “average” is the arithmetic mean. Anyone using one of the other averages would say so ahead of time or else is trying to pull something.
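The three averages in question are easy to compare with Python’s statistics module. The serving sizes here are numbers I made up purely for illustration, with one big outlier to pull the averages apart:

```python
import statistics

servings = [1, 2, 2, 3, 50]  # hypothetical egg-salad servings; one outlier

print(statistics.mean(servings))            # arithmetic mean: 11.6
print(statistics.median(servings))          # median: 2
print(statistics.geometric_mean(servings))  # geometric mean: about 3.59
```

The default “average” is the first of these; quoting one of the other two without saying so is exactly the pulling-something move.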

## The End 2016 Mathematics A To Z: Normal Numbers

Today’s A To Z term is another of gaurish’s requests. It’s also a fun one so I’m glad to have reason to write about it.

## Normal Numbers

A normal number is any real number you never heard of.

Yeah, that’s not what we say a normal number is. But that’s what a normal number is. If we could imagine the real numbers to be a stream, and that we could reach into it and pluck out a water-drop that was a single number, we know what we would likely pick. It would be an irrational number. It would be a transcendental number. And it would be a normal number.

We know normal numbers — or we would, anyway — by looking at their representation in digits. For example, π is a number that starts out 3.141592653589793238462643383279502884197 and so on forever. Look at those digits. Some of them are 1’s. How many? How many are 2’s? How many are 3’s? Are there more than you would expect? Are there fewer? What would you expect?

Expect. That’s the key. What should we expect in the digits of any number? The numbers we work with don’t offer much help. A whole number, like 2? That has a decimal representation of a single ‘2’ and infinitely many zeroes past the decimal point. Two and a half? A single ‘2’, a single ‘5’, and then infinitely many zeroes past the decimal point. One-seventh? Well, we get infinitely many 1’s, 4’s, 2’s, 8’s, 5’s, and 7’s. Never any 3’s, nor any 0’s, nor 6’s or 9’s. This doesn’t tell us anything about how often we would expect ‘8’ to appear in the digits of π.

In a normal number we get all the decimal digits. And we get each of them about one-tenth of the time. If all we had was a chart of how often digits turn up we couldn’t tell the summary of one normal number from the summary of any other normal number. Nor could we tell either from the summary of a perfectly uniform randomly drawn number.

It goes beyond single digits, though. Look at pairs of digits. How often does ’14’ turn up in the digits of a normal number? … Well, something like once for every hundred pairs of digits you draw from the normal number. Look at triplets of digits. ‘141’ should turn up about once in every thousand sets of three digits. ‘1415’ should turn up about once in every ten thousand sets of four digits. Any finite string of digits should turn up, and exactly as often as any other finite string of the same length.

That’s in the full representation. If you look at all the infinitely many digits the normal number has to offer. If all you have is a slice then some digits are going to be more common and some less common. That’s similar to how if you fairly toss a coin (say) forty times, there’s a good chance you’ll get tails something other than exactly twenty times. Look at the first 30 or so digits of π and there’s not a zero to be found. But as you survey more digits you get closer and closer to the expected average frequency. It’s the same way coin flips get closer and closer to 50 percent tails. Zero is a rarity in the first 30 digits. It’s about one-tenth of the first 3500 digits.

The digits of a specific number are not random, not if we know what the number is. But we can be presented with a subset of its digits and have no good way of guessing what the next digit might be. That is getting into the same strange territory in which we can speak about the “chance” of a month having a Friday the 13th even though the appearances of Fridays the 13th have absolutely no randomness to them.

This has staggering implications. Some of them inspire an argument in science fiction Usenet newsgroup rec.arts.sf.written every two years or so. Probably it does so in other venues; Usenet is just my first home and love for this. In a minor point in Carl Sagan’s novel Contact possibly-imaginary aliens reveal there’s a pattern hidden in the digits of π. (It’s not in the movie version, which is a shame. But to include it would require people watching a computer. So that could not make for a good movie scene, we now know.) Look far enough into π, says the book, and there’s suddenly a string of digits that are nearly all zeroes, interrupted with a few ones. Arrange the zeroes and ones into a rectangle and it draws a pixel-art circle. And the aliens don’t know how something astounding like that could be.

Nonsense, respond the kind of science fiction reader that likes to identify what the nonsense in science fiction stories is. (Spoiler: it’s the science. In this case, the mathematics too.) In a normal number every finite string of digits appears. It would be truly astounding if there weren’t an encoded circle in the digits of π. Indeed, it would be impossible for there not to be infinitely many circles of every possible size encoded in every possible way in the digits of π. If the aliens are amazed by that they would be amazed to find how every triangle has three corners.

I’m a more forgiving reader. And I’ll give Sagan this amazingness. I have two reasons. The first reason is on the grounds of discoverability. Yes, the digits of a normal number will have in them every possible finite “message” encoded every possible way. (I put the quotes around “message” because it feels like an abuse to call something a message if it has no sender. But it’s hard to not see as a “message” something that seems to mean something, since we live in an era that accepts the Death of the Author as a concept at least.) Pick your classic cypher, ‘1 = A, 2 = B, 3 = C’ and so on, and take any normal number. If you look far enough into its digits you will find every message you might ever wish to send, every book you could read. Every normal number holds Jorge Luis Borges’s Library of Babel, and almost every real number is a normal number.

But. The key there is if you look far enough. Look above; the first 30 or so digits of π have no 0’s, when you would expect three of them. There’s no 22’s, even though that pair has as much right to appear as does 26, which gets in at least twice that I see. And we will only ever know finitely many digits of π. It may be staggeringly many digits, sure. It already is. But it will never be enough to be confident that a circle, or any other long enough “message”, must appear. It is staggering that a detectable “message” that long should be in the tiny slice of digits that we might ever get to see.

And it’s harder than that. Sagan’s book says the circle appears in whatever base π gets represented in. So not only does the aliens’ circle pop up in base ten, but also in base two and base sixteen and all the other, even less important bases. The circle happening to appear in the accessible digits of π might be an imaginable coincidence in some base. There’s infinitely many bases, one of them has to be lucky, right? But to appear in the accessible digits of π in every one of them? That’s staggeringly impossible. I say the aliens are correct to be amazed.

Now to my second reason to side with the book. It’s true that any normal number will have any “message” contained in it. So who says that π is a normal number?

We think it is. It looks like a normal number. We have figured out many, many digits of π and they’re distributed the way we would expect from a normal number. And we know that nearly all real numbers are normal numbers. If I had to put money on it I would bet π is normal. It’s the clearly safe bet. But nobody has ever proved that it is, nor that it isn’t. Whether π is normal or not is a fit subject for conjecture. A writer of science fiction may suppose anything she likes about its normality without current knowledge saying she’s wrong.

It’s easy to imagine numbers that aren’t normal. Rational numbers aren’t, for example. If you followed my instructions and made your own transcendental number then you made a non-normal number. It’s possible that π should be non-normal. The first thirty million digits or so look good, though, if you think normal is good. But what’s thirty million against infinitely many possible counterexamples? For all we know, there comes a time when π runs out of interesting-looking digits and turns into an unpredictable little fluttering between 6 and 8.

It’s hard to prove that any numbers we’d like to know about are normal. We don’t know about π. We don’t know about e, the base of the natural logarithm. We don’t know about the natural logarithm of 2. There is a proof that the square root of two (and other non-square whole numbers, like 3 or 5) is normal in base two. But my understanding is it’s a nonstandard approach that isn’t quite satisfactory to experts in the field. I’m not expert so I can’t say why it isn’t quite satisfactory. If the proof’s authors or grad students wish to quarrel with my characterization I’m happy to give space for their rebuttal.

It’s much the way transcendental numbers were in the 19th century. We understood there to be this class of numbers that comprises nearly every number. We just didn’t have many examples. We’re still a bit short on examples of transcendental numbers. Maybe we’re not that badly off with normal numbers.

We can construct normal numbers. For example, there’s the Champernowne Constant. It’s the number you would make if you wanted to show you could make a normal number. It’s 0.12345678910111213141516171819202122232425 and I bet you can imagine how that develops from that point. (David Gawen Champernowne proved it was normal, which is the hard part.) There’s other ways to build normal numbers too, if you like. But those numbers aren’t of any interest except that we know them to be normal.

Mere normality is tied to a base. A number might be normal in base ten (the way normal people write numbers) but not in base two or base sixteen (which computers and people working on computers use). It might be normal in base twelve, used by nobody except mathematics popularizers of the 1960s explaining bases, but not normal in base ten. There can be numbers normal in every base. They’re called “absolutely normal”. Nearly all real numbers are absolutely normal. Wacław Sierpiński constructed the first known absolutely normal number in 1917. If you got in on the fractals boom of the 80s and 90s you know his name, although without the Polish spelling. He did stuff with gaskets and curves and carpets you wouldn’t believe. I’ve never seen Sierpiński’s construction of an absolutely normal number. From my references I’m not sure if we know how to construct any other absolutely normal numbers.

So that is the strange state of things. Nearly every real number is normal. Nearly every number is absolutely normal. We know a couple normal numbers. We know at least one absolutely normal number. But we haven’t (to my knowledge) proved any number that’s otherwise interesting is also a normal number. This is why I say: a normal number is any real number you never heard of.

• #### gaurish 5:42 am on Saturday, 3 December, 2016 Permalink | Reply

Beautiful exposition! Using pi as motivation for the discussion was a great idea. The fact that unlike primality, normality is associated with the base system involved fascinated me when I first came across normal numbers. Thanks!

Liked by 1 person

## When Is Thanksgiving Most Likely To Happen?

So my question from last Thursday nagged at my mind. And I learned that Octave (a Matlab clone that’s rather cheaper) has a function that calculates the day of the week for any given day. And I spent longer than I would have expected fiddling with the formatting to get what I wanted to know.

It turns out there are some days in November more likely to be the fourth Thursday than others are. (This is the current standard for Thanksgiving Day in the United States.) And as I’d suspected without being able to prove, this doesn’t quite match the breakdown of which months are more likely to have Friday the 13ths. That is, it’s more likely that an arbitrarily selected month will start on Sunday than any other day of the week. It’s least likely that an arbitrarily selected month will start on a Saturday or Monday. The difference is extremely tiny; there are only four more Sunday-starting months than there are Monday-starting months over the course of 400 years.

But an arbitrary month is different from an arbitrary November. It turns out Novembers are most likely to start on a Sunday, Tuesday, or Thursday. And that makes the 26th, 24th, and 22nd the most likely days to be Thanksgiving. The 23rd and 25th are the least likely days to be Thanksgiving. Here’s the full roster, if I haven’t made any serious mistakes with it:

| Day of November | Times it’s Thanksgiving (in 400 years) |
| --- | --- |
| 22 | 58 |
| 23 | 56 |
| 24 | 58 |
| 25 | 56 |
| 26 | 58 |
| 27 | 57 |
| 28 | 57 |

I don’t pretend there’s any significance to this. But it is another of those interesting quirks of probability. What you would say the probability is of a month starting on a Sunday — equivalently, of having a Friday the 13th, or a Fourth Thursday of the Month that’s the 26th — depends on how much you know about the month. If you know only that it’s a month on the Gregorian calendar it’s one thing (specifically, it’s 688/4800, or about 0.14333). If you know only that it’s a November then it’s another (58/400, or 0.145). If you know only that it’s a month in 2016 then it’s another yet (1/12, or about 0.08333). If you know that it’s November 2016 then the probability is 0. Information does strange things to probability questions.
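If you’d like to check my Octave fiddling without Octave, the same tally fits in a few lines of Python. This is a sketch rather than my original script; the function name and the particular 400-year span are my own choices, though any 400 consecutive Gregorian years give the same counts.

```python
# Tally which day of November is the fourth Thursday -- that is,
# Thanksgiving -- over one full 400-year Gregorian cycle.
import datetime
from collections import Counter

def fourth_thursday_of_november(year):
    first = datetime.date(year, 11, 1)
    offset = (3 - first.weekday()) % 7  # weekday(): Monday is 0, so Thursday is 3
    return 1 + offset + 21              # day-of-month of the fourth Thursday

counts = Counter(fourth_thursday_of_november(y) for y in range(2000, 2400))
print(dict(sorted(counts.items())))
```

The calendar repeats exactly every 400 years (146,097 days, which is a whole number of weeks), which is why the span of years doesn’t matter as long as there are 400 of them.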

## The End 2016 Mathematics A To Z: Monster Group

Today’s is one of my requested mathematics terms. This one comes to us from group theory, by way of Gaurish, and as ever I’m thankful for the prompt.

## Monster Group.

It’s hard to learn from an example. Examples are great, and I wouldn’t try teaching anything subtle without one. Might not even try teaching the obvious without one. But a single example is dangerous. The learner has trouble telling what parts of the example are the general lesson to learn and what parts are just things that happen to be true for that case. Having several examples, of different kinds of things, saves the student. The thing in common to many different examples is the thing to retain.

The mathematics major learns group theory in Introduction To Not That Kind Of Algebra, MAT 351. A group extracts the barest essence of arithmetic: a bunch of things and the ability to add them together. So what’s an example? … Well, the integers do nicely. What’s another example? … Well, the integers modulo two, where the only things are 0 and 1 and we know 1 + 1 equals 0. What’s another example? … The integers modulo three, where the only things are 0 and 1 and 2 and we know 1 + 2 equals 0. How about another? … The integers modulo four? Modulo five?
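If code reads easier than prose, here’s a little sketch of the integers modulo three passing the tests for group-ness. (The helper name `add` is my own; nothing official about it.)

```python
# The integers modulo three: the things are 0, 1, 2, and addition
# wraps around. A quick check that this really is a group.
n = 3
elements = range(n)

def add(a, b):
    return (a + b) % n

# Closure: adding any two things gives another thing in the set.
assert all(add(a, b) in elements for a in elements for b in elements)
# Identity: adding 0 changes nothing.
assert all(add(a, 0) == a for a in elements)
# Inverses: everything has something that cancels it back to 0.
assert all(any(add(a, b) == 0 for b in elements) for a in elements)

print(add(1, 2))  # 0, just as promised above
```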

All true. All, also, basically the same thing. The whole set of integers, or of real numbers, is different. But as finite groups, the integers modulo anything are nice, easy-to-understand groups. They’re known as Cyclic Groups for reasons I’ll explain if asked. But all the Cyclic Groups are kind of the same.

So how about another example? And here we get some good ones. There’s the Permutation Groups. These are fun. You start off with a set of things. You can label them anything you like, but you’re daft if you don’t label them the counting numbers. So, say, the set of things 1, 2, 3, 4, 5. Start with them in that order. A permutation is the swapping of any pair of those things. So swapping, say, the second and fifth things to get the list 1, 5, 3, 4, 2. The collection of all the swaps you can make is the Permutation Group on this set of things. The things in the group are not 1, 2, 3, 4, 5. The things in the permutation group are “swap the second and fifth thing” or “swap the third and first thing” or “swap the fourth and the third thing”. You maybe feel uneasy about this. That’s all right. I suggest playing with this until you feel comfortable because it is a lot of fun to play with. Playing in this case means writing out all the ways you can swap stuff, which you can always do as a string of swaps of exactly two things.
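If you’d like the computer to do the playing for you, here’s a sketch: start from one ordering of five things and keep applying two-element swaps until nothing new turns up. Every ordering appears, which is the claim above that any permutation is a string of swaps.

```python
# Start with (1, 2, 3, 4, 5) and keep composing pair-swaps.
# Every one of the 120 orderings of five things shows up.
from itertools import combinations

def swap(order, i, j):
    order = list(order)
    order[i], order[j] = order[j], order[i]
    return tuple(order)

start = (1, 2, 3, 4, 5)
seen = {start}
frontier = [start]
while frontier:
    current = frontier.pop()
    for i, j in combinations(range(5), 2):
        nxt = swap(current, i, j)
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)

print(len(seen))  # 120 orderings, all reachable by strings of swaps
```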

(Some people may remember an episode of Futurama that involved a brain-swapping machine. Or a body-swapping machine, if you prefer. The gimmick of the episode is that two people could only swap bodies/brains exactly one time. The problem was how to get everybody back in their correct bodies. It turns out to be possible to do, and one of the show’s writers did write a proof of it. It’s shown on-screen for a moment. Many fans were awestruck by an episode of the show inspiring a Mathematical Theorem. They’re overestimating how rare theorems are. But it is fun when real mathematics gets done as a side effect of telling a good joke. Anyway, the theorem fits well in group theory and the study of these permutation groups.)

So the student wanting examples of groups can get the Permutation Group on three elements. Or the Permutation Group on four elements. The Permutation Group on five elements. … You kind of see, this is certainly different from those Cyclic Groups. But they’re all kind of like each other.

An “Alternating Group” is one where all the elements in it are built from an even number of swaps. So, “swap the second and fifth things” would not be in an alternating group. But “swap the second and fifth things, and swap the fourth and second things” would be. And so the student needing examples can look at the Alternating Group on two elements. Or the Alternating Group on three elements. The Alternating Group on four elements. And so on. It’s slightly different from the Permutation Group. It’s certainly different from the Cyclic Group. But still, if you’ve mastered the Alternating Group on five elements you aren’t going to see the Alternating Group on six elements as all that different.
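Here’s a sketch of counting that group. Whether a permutation takes an even number of swaps can be read off from its number of “inversions”, the out-of-order pairs; the helper name is mine.

```python
# Of the 120 permutations of five things, keep the ones built from
# an even number of swaps. Evenness equals evenness of the count
# of out-of-order pairs ("inversions").
from itertools import permutations

def is_even(perm):
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return inversions % 2 == 0

evens = [p for p in permutations(range(5)) if is_even(p)]
print(len(evens))  # 60: exactly half of the permutations
```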

Cyclic Groups and Alternating Groups have some stuff in common. Permutation Groups not so much and I’m going to leave them in the above paragraph, waving, since they got me to the Alternating Groups I wanted.

One is that they’re finite. At least they can be. I like finite groups. I imagine students like them too. It’s nice having a mathematical thing you can write out in full and know you aren’t missing anything.

The second thing is that they are, or they can be, “simple groups”. That’s … a challenge to explain. This has to do with the structure of the group and the kinds of subgroup you can extract from it. It’s very very loosely and figuratively and do not try to pass this off at your thesis defense kind of like being a prime number. In fact, Cyclic Groups for a prime number of elements are simple groups. So are Alternating Groups on five or more elements.

So we get to wondering: what are the finite simple groups? Turns out they come in four main families. One family is the Cyclic Groups for a prime number of things. One family is the Alternating Groups on five or more things. One family is this collection called the Chevalley Groups. Those are mostly things about projections: the ways to map one set of coordinates into another. We don’t talk about them much in Introduction To Not That Kind Of Algebra. They’re too deep into Geometry for people learning Algebra. The last family is this collection called the Twisted Chevalley Groups, or the Steinberg Groups. And they … uhm. Well, I never got far enough into Geometry I’m Guessing to understand what they’re for. I’m certain they’re quite useful to people working in the field of order-three automorphisms of the whatever exactly D4 is.

And that’s it. That’s all the families there are. If it’s a finite simple group then it’s one of these. … Unless it isn’t.

Because there are a couple of stragglers. There are a few finite simple groups that don’t fit in any of the four big families. And it really is only a few. I would have expected an infinite number of weird little cases that don’t quite fit any family. Instead, there are 26. (27 if you decide a particular one of the Steinberg Groups doesn’t really belong in that family. I’m not familiar enough with the case to have an opinion.) Funny number to have turn up. It took ten thousand pages to prove there were just the 26 special cases. I haven’t read them all. (I haven’t read any of the pages. But my Algebra professors at Rutgers were proud to mention their department’s work in tracking down all these cases.)

Some of these cases have some resemblance to one another. But not enough to see them as a family the way the Cyclic Groups are. We bundle all these together in a wastebasket taxon called “the sporadic groups”. The first five of them were worked out in the 1860s. The last of them was worked out in 1980, seven years after its existence was first suspected.

The sporadic groups all have weird sizes. The smallest one, known as M11 (for “Mathieu”, who found it and four of its siblings in the 1860s) has 7,920 things in it. They get enormous soon after that.

The biggest of the sporadic groups, and the last one described, is the Monster Group. It’s known as M. It has a lot of things in it. In particular it’s got 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 things in it. So, you know, it’s not like we’ve written out everything that’s in it. We’ve just got descriptions of how you would write out everything in it, if you wanted to try. And you can get a good argument going about what it means for a mathematical object to “exist”, or to be “created”. There are something like 10^54 things in it. That’s something like a trillion times a trillion times the number of stars in the observable universe. Not just the stars in our galaxy, but all the stars in all the galaxies we could in principle ever see.
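That number isn’t anyone’s whim, either. It has a tidy, well-recorded prime factorization, and multiplying it back out is a fine way to convince yourself the 54-digit figure above is real. A sketch:

```python
# The Monster Group's size, rebuilt from its prime factorization:
# 2^46 * 3^20 * 5^9 * 7^6 * 11^2 * 13^3 * 17 * 19 * 23 * 29 * 31
#   * 41 * 47 * 59 * 71.
factors = {
    2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
    17: 1, 19: 1, 23: 1, 29: 1, 31: 1,
    41: 1, 47: 1, 59: 1, 71: 1,
}

order = 1
for prime, power in factors.items():
    order *= prime ** power

print(order)            # the 54-digit number quoted above
print(len(str(order)))  # 54
```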

It’s one of the rare things for which “Brobdingnagian” is an understatement. Everything about it is mind-boggling, the sort of thing that staggers the imagination more than infinitely large things do. We don’t really think of infinitely large things; we just picture “something big”. A number like that one above is definite, and awesomely big. Just read off the digits of that number; it sounds like what we imagine infinity ought to be.

We can make a chart, called the “character table”, which describes how subsets of the group interact with one another. The character table for the Monster Group is 194 rows tall and 194 columns wide. The Monster Group can be represented as this, I am solemnly assured, logical and beautiful algebraic structure. It’s something like a polyhedron in rather more than three dimensions of space. In particular it needs 196,884 dimensions to show off its particular beauty. I am taking experts’ word for it. I can’t quite imagine more than 196,883 dimensions for a thing.

And it’s a thing full of mystery. This creature of group theory makes us think of the number 196,884. The same 196,884 turns up in number theory, the study of how integers are put together. It’s the first non-boring coefficient in a thing called the j-function. It’s not coincidence. This bit of number theory and this bit of group theory are bound together, but it took some years for anyone to quite understand why.

There are more mysteries. The character table has 194 rows and columns. Each column implies a function. Some of those functions are duplicated; there are 171 distinct ones. But some of the distinct ones it turns out you can find by adding together multiples of others. There are 163 distinct ones. 163 appears again in number theory, in the study of algebraic integers. These are, of course, not integers at all. They’re things that look like complex-valued numbers: some real number plus some (possibly other) real number times the square root of some specified negative number. They’ve got neat properties. Or weird ones.

You know how with integers there’s just one way to factor them? Like, fifteen is equal to three times five and no other set of prime numbers? Algebraic integers don’t work like that. There’s usually multiple ways to do that. There are exceptions, algebraic integers that still have unique factorings. They happen only for a few square roots of negative numbers. The biggest of those negative numbers? Minus 163.
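The textbook illustration of factoring going wrong uses the square root of minus five, which is not on that short list of exceptional numbers. The sketch below only checks the arithmetic — that 6 really does come out two different ways — and takes on faith the number-theory fact that none of the four factors splits any further in that system:

```python
# Among numbers of the form a + b*sqrt(-5), the number 6 factors
# two genuinely different ways: 2 * 3, and (1 + sqrt(-5))(1 - sqrt(-5)).
import cmath

root = cmath.sqrt(-5)  # an imaginary number, about 2.236i

first = 2 * 3
second = (1 + root) * (1 - root)

print(first)                                         # 6
print(round(second.real), abs(second.imag) < 1e-12)  # 6 True
```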

I don’t know if this 163 appearance means something. As I understand the matter, neither does anybody else.

There is some link to the mathematics of string theory. That’s an interesting but controversial and hard-to-experiment-upon model for how the physics of the universe may work. But I don’t know string theory well enough to say what it is or how surprising this should be.

The Monster Group creates a monster essay. I suppose it couldn’t do otherwise. I suppose I can’t adequately describe all its sublime mystery. Dr Mark Ronan has written a fine web page describing much of the Monster Group and the history of our understanding of it. He also has written a book, Symmetry and the Monster, to explain all this in greater depths. I’ve not read the book. But I do mean to, now.

## Reading the Comics, November 26, 2016: What is Pre-Algebra Edition

Here I’m just closing out last week’s mathematically-themed comics. The new week seems to be bringing some more in at a good pace, too. Should have stuff to talk about come Sunday.

Darrin Bell and Theron Heir’s Rudy Park for the 24th brings out the ancient question, why do people need to do mathematics when we have calculators? As befitting a comic strip (and Sadie’s character) the question goes unanswered. But it shows off the understandable confusion people have between mathematics and calculation. Calculation is a fine and necessary thing. And it’s fun to do, within limits. And someone who doesn’t like to calculate probably won’t be a good mathematician. (Or will become one of those master mathematicians who sees ways to avoid calculations in getting to an answer!) But put aside the obvious point that we need mathematics to know what calculations to do, or to tell whether a calculation done makes sense. Much of what’s interesting about mathematics isn’t a calculation. Geometry, for an example that people in primary education will know, doesn’t need more than slight bits of calculation. Group theory swipes a few nice ideas from arithmetic and builds its own structure. Knot theory uses polynomials — everything does — but more as a way of naming structures. There aren’t things to do that a calculator would recognize.

Richard Thompson’s Poor Richard’s Almanac for the 25th I include because I’m a fan, and on the grounds that the Summer Reading includes the names of shapes. And I’ve started to notice how often “rhomboid” is used as a funny word. Those who search for the evolution and development of jokes, take heed.

John Atkinson’s Wrong Hands for the 25th is the awaited anthropomorphic-numerals and symbols joke for this past week. I enjoy the first commenter’s suggestion that they should have stayed in unknown territory.

Rick Kirkman and Jerry Scott’s Baby Blues for the 26th of November, 2016. I suppose Kirkman and Scott know their characters better than I do but isn’t Zoe like nine or ten? Isn’t pre-algebra more a 7th or 8th grade thing? I can’t argue Grandma being post-algebra but I feel like the punch line was written and then retrofitted onto the characters.

Rick Kirkman and Jerry Scott’s Baby Blues for the 26th does a little wordplay built on pre-algebra. I’m not sure that Zoe is quite old enough to take pre-algebra. But I also admit not being quite sure what pre-algebra is. The central idea of (primary school) algebra — that you can do calculations with a number without knowing what the number is — certainly can use some preparatory work. It’s a dazzling idea and needs plenty of introduction. But my dim recollection of taking it was that it was a bit of a subject heap, with some arithmetic, some number theory, some variables, some geometry. It’s all stuff you’ll need once algebra starts. But it is hard to say quickly what belongs in pre-algebra and what doesn’t.

Art Sansom and Chip Sansom’s The Born Loser for the 26th uses two ancient staples of jokes, probabilities and weather forecasting. It’s a hard joke not to make. The prediction for something is that it’s very unlikely, and it happens anyway? We all laugh at people being wrong, which might be our whistling past the graveyard of knowing we will be wrong ourselves. It’s hard to prove that a probability is wrong, though. A fairly tossed die may have only one chance in six of turning up a ‘4’. But there’s no reason to think it won’t, and nothing inherently suspicious in it turning up ‘4’ four times in a row.

We could do it, though. If the die turned up ‘4’ four hundred times in a row we would no longer call it fair. (This even if examination proved the die really was fair after all!) Or if it just turned up a ‘4’ significantly more often than it should; if it turned up two hundred times out of four hundred rolls, say. But one or two events won’t tell us much of anything. Even the unlikely happens sometimes.
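To put a number on that four-‘4’s-in-a-row case, here’s a one-line sketch of the arithmetic:

```python
# The chance a fair die turns up '4' four times in a row:
# (1/6)^4. Small, but nothing like impossible.
from fractions import Fraction

p = Fraction(1, 6) ** 4
print(p)         # 1/1296
print(float(p))  # about 0.00077
```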

Even the impossibly unlikely happens if given enough attempts. If we do not understand that instinctively, we realize it when we ponder that someone wins the lottery most weeks. Presumably the comic’s weather forecaster supposed the chance of snow was so small it could be safely rounded down to zero. But even something with literally zero percent chance of happening might.

Imagine tossing a fair coin. Imagine tossing it infinitely many times. Imagine it coming up tails every single one of those infinitely many times. Impossible: the chance that at least one toss of a fair coin will turn up heads, eventually, is 1. 100 percent. The chance heads never comes up is zero. But why could it not happen? What law of physics or logic would it defy? It challenges our understanding of ideas like “zero” and “probability” and “infinity”. But we’re well-served to test those ideas. They hold surprises for us.

• #### Matthew Wright 6:55 pm on Tuesday, 29 November, 2016 Permalink | Reply

‘Rhomboid’ is a wonderful word. Always makes me think of British First World War tanks.


• #### Joseph Nebus 9:30 pm on Wednesday, 30 November, 2016 Permalink | Reply

It is a great word and you’re right; it’s perfectly captured by British First World War tanks.


• #### Matthew Wright 6:09 am on Thursday, 1 December, 2016 Permalink | Reply

A triumph of mathematics on the part of Sir Eustace Tennyson-d’Eyncourt and his colleagues – as I understand it the shape was calculated to match the diameter of a 60-foot wheel as a trench-crossing mechanism, but without the radius (well, a triumph of geometry, which isn’t exactly mathematical in the pure sense…). I probably should stop making appalling puns now…


• #### davekingsbury 5:35 pm on Wednesday, 30 November, 2016 Permalink | Reply

Your comments about tossing a coin suggest to me that working out probability is an inherited instinct, which is probably why it’s so tempting to enter a betting shop. (Do you guys have betting shops over the Pond?)


• #### Joseph Nebus 9:40 pm on Wednesday, 30 November, 2016 Permalink | Reply

I think we don’t have any instinct for probability. There’s maybe a vague idea but it’s just awful for any but the simplest problems. Which is fair enough; for most of our existence probability questions were relatively straightforward things. But it took a generation of mathematicians to work out whether you were more likely to roll a 9 or a 10 on tossing three dice.
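(These days brute force settles that three-dice question in an instant, which makes the generation of work it once took all the more humbling. A sketch:)

```python
# Is a total of 9 or a total of 10 more likely on three dice?
# Count all 216 equally likely rolls.
from itertools import product

totals = [sum(roll) for roll in product(range(1, 7), repeat=3)]
print(totals.count(9), totals.count(10))  # 25 27 -- so 10 wins
```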

There are some betting parlors in the United States, mostly under the name Off-Track Betting shops. I don’t think there’s really a culture of them, though, at least not away from the major horse-racing tracks. I may be mistaken though; it’s not a hobby I’ve been interested in. I believe they’re all limited to horse- and greyhound-racing, though. There are many places that sell state-sponsored lotteries but that isn’t really what I understand betting shops to be about. And lottery tickets are just sidelines from some more reputable concern like being a convenience store.


• #### davekingsbury 1:37 am on Thursday, 1 December, 2016 Permalink | Reply

Our betting shops are plentiful, several on every high street, and they are full of FOBTs – fixed odds betting terminals – which are a prime source of problem gambling in poorer communities. Looking this up, I’ve just watched a worrying clip of somebody gambling while convincing themselves erroneously that they’re on the verge of a big win … it’s been described as the crack cocaine of gambling and there are 35,000 machines in the UK. If we have any instinct for probability, it’s being abused …


## The End 2016 Mathematics A To Z: Local

Today’s is another of those words that means nearly what you would guess. There are still seven letters left, by the way, which haven’t had any requested terms. If you’d like something described please try asking.

## Local.

Stops at every station, rather than just the main ones.

OK, I’ll take it seriously.

So a couple years ago I visited Niagara Falls, and I stepped into the river, just above the really big drop.

Niagara Falls, demonstrating some locally unsafe waters to be in. Background: Canada (left), United States (right).

I didn’t have any plans to go over the falls, and didn’t, but I liked the thrill of claiming I had. I’m not crazy, though; I picked a spot I knew was safe to step in. It’s only in the retelling I went into the Niagara River just above the falls.

Because yes, there is surely danger in certain spots of the Niagara River. But there are also spots that are perfectly safe. And not isolated spots either. I wouldn’t have been less safe if I’d stepped into the river a few feet closer to the edge. Nor if I’d stepped in a few feet farther away. Where I stepped in was locally safe.

The Niagara River, and some locally safe enough waters to be in. That’s not me in the picture; if you do know who it is, I have no way of challenging you. But it’s the area I stepped into and felt this lovely illicit thrill doing so.

Over in mathematics we do a lot of work on stuff that’s true or false depending on what some parameters are. We can look at bunches of those parameters, and they often look something like normal everyday space. There’s some values that are close to what we started from. There’s others that are far from that.

So, a “neighborhood” of some point is that point and some set of points containing it. It needs to be an “open” set, which means it doesn’t contain its boundary. So, like, everything less than one minute’s walk away, but not the stuff that’s precisely one minute’s walk away. (If we include boundaries we break stuff that we don’t want broken is why.) And certainly not the stuff more than one minute’s walk away. A neighborhood could have any shape. It’s easy to think of it as a little disc around the point you want. That’s usually the easiest to describe in a proof, because it’s “everything a distance less than (something) away”. (That “something” is either ‘δ’ or ‘ε’. Both Greek letters are called in to mean “a tiny distance”. They have different connotations about what the tiny distance is in.) It’s easiest to draw as a little amoeba-like blob around a point, and contained inside a bigger amoeba-like blob.

Anyway, something is true “locally” to a point if it’s true in that neighborhood. That means true for everything in that neighborhood. Which is what you’d expect. “Local” means just that. It’s the stuff that’s close to where we started out.

Often we would like to know something “globally”, which means … er … everywhere. Universally so. But it’s usually easier to prove a thing locally. I suppose having a point where we know something is so makes it easier to prove things about what’s nearby. Distant stuff, who knows?

“Local” serves as an adjective for many things. We think of a “local maximum”, for example, or “local minimum”. This is where whatever we’re studying has a value bigger (or smaller) than anywhere else nearby has. Or we speak of a function being “locally continuous”, meaning that we know it’s continuous near this point and we make no promises away from it. It might be “locally differentiable”, meaning we can take derivatives of it close to some interesting point. We say nothing about what happens far from it.
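Here’s a discrete sketch of the “local maximum” idea, with a made-up list of values standing in for a function: a spot counts if its value beats both immediate neighbors, whatever happens farther away.

```python
# A "local maximum" in miniature: bigger than everything nearby,
# with no promises about anywhere else.
def local_maxima(values):
    return [i for i in range(1, len(values) - 1)
            if values[i - 1] < values[i] > values[i + 1]]

heights = [1, 3, 2, 5, 4, 4, 6, 0]
print(local_maxima(heights))  # [1, 3, 6] -- the 3, the 5, and the 6
```

Notice the 3 at position 1 qualifies even though the list holds bigger values elsewhere; that’s the whole local-versus-global distinction.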

Unless we do. We can talk about something being “local to infinity”. Your first reaction to that should probably be to slap the table and declare that’s it, we’re done. But we can make it sensible, at least to other mathematicians. We do it by starting with a neighborhood that contains the origin, zero, that point in the middle of everything. So, what’s the inverse of that? It’s everything that’s far enough away from the origin. (Don’t include the boundary, we don’t need those headaches.) So why not call that the “neighborhood of infinity”? Other than that it’s a weird set of words to put together? And if something is true in that “neighborhood of infinity”, what is that thing other than true “local to infinity”?

I don’t blame you for being skeptical.

## Reading the Comics, November 23, 2016: Featuring A Betty Boop Cartoon Edition

I admit to padding this week’s collection of mathematically-themed comic strips. There’s just barely enough to justify my splitting this into a Sunday and a Tuesday installment. I’m including a follow-the-bouncing-ball cartoon to make up for that though. Enjoy!

Jimmy Hatlo’s Little Iodine from the 20th originally ran the 18th of September, 1955. It’s a cute enough bit riffing on realistic word problems. If the problems do reflect stuff ordinary people want to know, after all, then they’re going to be questions people in the relevant fields know how to solve. A limitation is that word problems will tend to pick numbers that make for reasonable calculations, which may be implausible for actual problems. None of the examples Iodine gives seem implausible to me, but what do I know about horses? But I do sometimes encounter problems which have the form but not content of a reasonable question, like an early 80s probability book asking about the chances of one or more defective transistors in a five-transistor radio set. (The problem surely began as one about burned-out vacuum tubes in a radio.)

Daniel Beyer’s Long Story Short for the 21st is another use of Albert Einstein as iconic for superlative first-rate genius. I’m curious how long it did take for people to casually refer to genius as Einstein. The 1930 song Kitty From Kansas City (and its 1931 Screen Songs adaptation, starring Betty Boop) mention Einstein as one of those names any non-stupid person should know. But that isn’t quite the same as being the name for a genius.

My love asked if I’d include Stephan Pastis’s Pearls Before Swine of the 22nd. It has one of the impossibly stupid crocodiles say, poorly, that he was a mathematics major. I admitted it depended how busy the week was. On a slow week I’ll include more marginal stuff.

Is it plausible that the Croc is, for all his stupidity, a mathematics major? Well, sure. Perseverance makes it possible to get any degree. And given Croc’s spent twenty years trying to eat Zebra without getting close, clearly perseverance is one of his traits. But are mathematics majors bad at communication?

Certainly we get the reputation for it. Part of that must be that any specialized field — whether mathematics, rocket science, music, or pasta-making — has its own vocabulary and grammar for that vocabulary that outsiders just don’t know. If it were easy to follow it wouldn’t be something people need to be trained in. And a lay audience starts scared of mathematics in a way they’re not afraid of pasta technology; you can’t communicate with people who’ve decided they can’t hear you. And many mathematical constructs just can’t be explained in a few sentences, the way vacuum extrusion of spaghetti noodles could be. And, must be said, it’s often the case a mathematics major (or a major in a similar science or engineering-related field) has English as a second (or third) language. Even a slight accent can make someone hard to follow, and build an undeserved reputation.

The Pearls crocodiles are idiots, though. The main ones, anyway; their wives and children are normal.

Ernie Bushmiller’s Nancy Classics for the 23rd originally appeared the 23rd of November, 1949. It’s just a name-drop of mathematics, though, using it as the sort of problem that can be put on the blackboard easily. And it’s not the most important thing going on here, but I do notice Bushmiller drawing the blackboard as … er … not black. It makes the composition of the last panel easier to read, certainly. And makes the visual link between the paper in the second panel and the blackboard in the last stronger. It seems more common these days to draw a blackboard that’s black. I wonder if that’s so, or if it reflects modern technology making white-on-black-text easier to render. A Photoshop select-and-invert is instantaneous compared to what Bushmiller had to do.

• #### davekingsbury 11:04 pm on Monday, 28 November, 2016 Permalink | Reply

Where do you find the comics you review, may I ask? It’s extraordinary how many math-related items you find …


• #### Joseph Nebus 9:29 pm on Wednesday, 30 November, 2016 Permalink | Reply

Nearly all the comics I read come from two sites. The first is Gocomics.com, and the other is ComicsKingdom.com. I’ve got paid subscriptions to both; the value is fantastic, especially given ComicsKingdom’s vintage selection.

And yeah, I read nearly everything they offer. There’s some comics I don’t read because they’re in perpetual reruns and I’ve seen them all enough times (eg, Berkeley Breathed’s Academia Waltz or the Gocomics slice of Al Capp’s Li’l Abner). And there’s some I don’t read because they’ve irritated or offended me too often. Not many, though; it takes a special knack to be a truly objectionable comic strip.

After that there’s a couple stragglers on Creators.com. Jumble and the various Joe Martin comics I get from the Houston Chronicle’s web site. Those don’t have subscriptions; I just have a tab group for them.

And yeah, this does leave me spending quite some time reading comics each day. It makes a pleasant way to ease into putting off the rest of the morning’s tasks.


• #### davekingsbury 12:54 am on Thursday, 1 December, 2016 Permalink | Reply

Thanks for that! You cover the mathematics but there’s also plenty of social history in there, which I also glean from your excellent posts. I’m 67 but I sometimes think I learned most – quickest, anyway – when I was reading comics on a regular basis. My favourite comic strip moment is described in this post, if I may be so bold as to provide a link to it …

https://davekingsbury.wordpress.com/2015/12/01/the-kids-are-all-right/


## The End 2016 Mathematics A To Z: Kernel

I told you that Image thing would reappear. Meanwhile I learned something about myself in writing this.

## Kernel.

I want to talk about functions again. I’ve been keeping like a proper mathematician to a nice general idea of what a function is. The sort where a function’s this rule matching stuff in a set called the domain with stuff in a set called the range. And I’ve tried not to commit myself to saying anything about what that domain and range are. They could be numbers. They could be other functions. They could be the set of DVDs you own but haven’t watched in more than two years. They could be collections of socks. Haven’t said.

But we know what functions anyone cares about. They’re stuff that have domains and ranges that are numbers. Preferably real numbers. Complex-valued numbers if we must. If we look at more exotic sets they’re ones that stick close to being numbers: vectors made up of an ordered set of numbers. Matrices of numbers. Functions that are themselves about numbers. Maybe we’ll get to something exotic like a rotation, but then what is a rotation but spinning something a certain number of degrees? There are a bunch of unavoidably common domains and ranges.

Fine, then. I’ll stick to functions with ranges that look enough like regular old numbers. By “enough” I mean they have a zero. That is, something that works like zero does. You know, add it to something else and that something else isn’t changed. That’s all I need.

A natural thing to wonder about a function — hold on. “Natural” is the wrong word. Something we learn to wonder about in functions, in pre-algebra class where they’re all polynomials, is where the zeroes are. They’re generally not at zero. Why would we say “zeroes” to mean “zero”? That could let non-mathematicians think they knew what we were on about. By the “zeroes” we mean the things in the domain that get matched to the zero in the range. It might be zero; no reason it couldn’t, until we know what the function’s rule is. Just we can’t count on that.

A polynomial we know has … well, it might have zero zeroes. Might have no zeroes. It might have one, or two, or so on. If it’s an n-th degree polynomial it can have up to n zeroes. And if it’s not a polynomial? Well, then it could have any conceivable number of zeroes and nobody is going to give you a nice little formula to say where they all are. It’s not that we’re being mean. It’s just that there isn’t a nice little formula that works for all possibilities. There aren’t even nice little formulas that work for all polynomials. You have to find zeroes by thinking about the problem. Sorry.
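To see the easy end of that in action, here’s a small Python sketch (standard library only; the helper name `quadratic_zeroes` is just mine, for illustration). Even a friendly degree-2 polynomial might have two real zeroes, or one, or none at all:

```python
import math

def quadratic_zeroes(a, b, c):
    """Real zeroes of a*x^2 + b*x + c.

    A degree-2 polynomial has at most 2 real zeroes; the
    discriminant tells us whether we get 2, 1, or none."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                      # no real zeroes
    if disc == 0:
        return [-b / (2 * a)]          # one repeated zero
    root = math.sqrt(disc)
    return sorted([(-b - root) / (2 * a), (-b + root) / (2 * a)])

print(quadratic_zeroes(1, 0, -4))   # x^2 - 4: zeroes at -2 and +2
print(quadratic_zeroes(1, -2, 1))   # x^2 - 2x + 1: just the one zero, at 1
print(quadratic_zeroes(1, 0, 4))    # x^2 + 4: no real zeroes at all
```

Past degree 4 there’s no formula like this at all, which is the point of the paragraph above.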

But! Suppose you have a collection of all the zeroes for your function. That’s all the points in the domain that match with zero in the range. Then we have a new name for the thing you have. And that’s the kernel of your function. It’s the biggest subset in the domain with an image that’s just the zero in the range.

So we have a name for the zeroes that isn’t just “the zeroes”. What does this get us?

If we don’t know anything about the kind of function we have, not much. If the function belongs to some common kinds of functions, though, it tells us stuff.

For example. Suppose the function has domain and range that are vectors. And that the function is linear, which is to say, easy to deal with. Let me call the function ‘f’. And let me pick out two things in the domain. I’ll call them ‘x’ and ‘y’ because I’m writing this after Thanksgiving dinner and can’t work up a cleverer name for anything. If f is linear then f(x + y) is the same thing as f(x) + f(y). And now something magic happens. If x and y are both in the kernel, then x + y has to be in the kernel too. Think about it. Meanwhile, if x is in the kernel but y isn’t, then f(x + y) is f(y). Again think about it.
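If you’d like a machine to do the thinking-about-it with you, here’s a minimal Python sketch, using a made-up linear map whose kernel we happen to know in advance:

```python
def f(v):
    """A linear map from R^2 to R^2, given by the matrix [[1, 1], [2, 2]].

    Its kernel is every multiple of (1, -1): those are exactly
    the vectors this matrix sends to (0, 0)."""
    x, y = v
    return (x + y, 2 * x + 2 * y)

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

x = (1, -1)     # in the kernel: f(x) == (0, 0)
y = (3, -3)     # also in the kernel
z = (5, 0)      # not in the kernel

print(f(add(x, y)))          # (0, 0): the sum of kernel vectors stays in the kernel
print(f(add(x, z)) == f(z))  # True: adding a kernel vector is invisible to f
```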

What we can see is that the domain fractures into two directions. One of them, the direction of the kernel, is invisible to the function. You can move however much you like in that direction and f can’t see it. The other direction, perpendicular (“orthogonal”, we say in the trade) to the kernel, is visible. Everything that might change changes in that direction.

This idea threads through vector spaces, and we study a lot of things that turn out to look like vector spaces. It keeps surprising us by letting us solve problems, or find the best-possible approximate solutions. This kernel gives us room to match some fiddly conditions without breaking the real solution. The size of the null space alone can tell us whether some problems are solvable, or whether they’ll have infinitely large sets of solutions.

In this vector-space construct the kernel often takes on another name, the “null space”. This means the same thing. But it reminds us that superhero comics writers miss out on many excellent pieces of terminology by not taking advanced courses in mathematics.

Kernels also appear in group theory, and again whenever we get into rings. We’re always working with rings. They’re nearly as unavoidable as vector spaces.

You know how you can divide the whole numbers into odd and even? And you can do some neat tricks with that for some problems? You can do that with every ring, using the kernel as a dividing point. This gives us information about how the ring is shaped, and what other structures might look like the ring. This often lets us turn proofs that might be hard into a collection of proofs on individual cases that are, at least, doable. Tricks about odd and even numbers become, in trained hands, subtle proofs of surprising results.
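The odd-and-even case is small enough to check by machine. Here’s a Python sketch, with the remainder-mod-2 map playing the role of a ring homomorphism whose kernel is the even numbers:

```python
# The map n -> n % 2 sends the integers to {0, 1}. Its kernel is the
# even numbers, and the quotient splits the integers into exactly two
# classes: "even" (the kernel itself) and "odd" (its shift).
def parity(n):
    return n % 2

# The map respects addition and multiplication...
for a in range(-5, 6):
    for b in range(-5, 6):
        assert parity(a + b) == (parity(a) + parity(b)) % 2
        assert parity(a * b) == (parity(a) * parity(b)) % 2

# ...so a question about all integers can fracture into two smaller cases.
# For instance: any odd number times any odd number is odd.
print((parity(1) * parity(1)) % 2)   # 1, settling the odd-times-odd case
```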

We see vector spaces and rings all over the place in mathematics. Some of that’s selection bias. Vector spaces capture a lot of what’s important about geometry. Rings capture a lot of what’s important about arithmetic. We have understandings of geometry and arithmetic that transcend even our species. Raccoons understand space. Crows understand number. When we look to do mathematics we look for patterns we understand, and these are major patterns we understand. And there are kernels that matter to each of them.

Some mathematical ideas inspire metaphors to me. Kernels are one. Kernels feel to me like the process of holding a polarized lens up to a crystal. This lets one see how the crystal is put together. I realize writing this down that my metaphor is unclear: is the kernel the lens or the structure seen in the crystal? I suppose the function has to be the lens, with the kernel the crystallization planes made clear under it. It’s curious I had enjoyed this feeling about kernels and functions for so long without making it precise. Feelings about mathematical structures can be like that.

• #### Barb Knowles 8:42 pm on Friday, 25 November, 2016 Permalink | Reply

Don’t be mad if I tell you I’ve never had a feeling about a mathematical structure, lol. But it is immensely satisfying to solve an equation. I’m not a math person. As an English as a New Language teacher, I have to help kids with algebra at times. I usually break out in a sweat and am ecstatic when I can actually help them.


• #### Joseph Nebus 11:24 pm on Friday, 25 November, 2016 Permalink | Reply

I couldn’t be mad about that! I don’t have feeling like that about most mathematical constructs myself. There’s just a few that stand out for one reason or another.

I am intrigued by the ways teaching differs for different subjects. How other people teach mathematics (or physics) interests me too, but I’ve noticed some strong cultural similarities across different departments and fields. Other subjects have a greater novelty value for me.


• #### Barb Knowles 11:42 pm on Friday, 25 November, 2016 Permalink | Reply

My advisor in college (Romance Language major) told me that I should do well in math because it is a language, formulas are like grammar and there is a lot of memorization. Not being someone with math skills, I replied ummmm. I don’t think she was impressed, lol.


• #### Joseph Nebus 9:22 pm on Wednesday, 30 November, 2016 Permalink | Reply

I’m not sure that I could go along with the idea of mathematics as a language. But there is something that seems like a grammar to formulas. That is, there are formulas that just look right or look wrong, even before exploring their content. Sometimes a formula just looks … ungrammatical. Sometimes that impression is wrong. But there is something that stands out.

As for mathematics skills, well, I think people usually have more skill than they realize. There’s a lot of mathematics out there, much of it not related to calculations, and it’d be amazing if none of it intrigued you or came easily.


## A Thanksgiving Thought Fresh From The Shower

It’s well-known, at least in calendar-appreciation circles, that the 13th of a month is more likely to be Friday than any other day of the week. That’s on the Gregorian calendar, which has some funny rules about whether a century year — 1900, 2000, 2100 — will be a leap year. Three of them aren’t in every four centuries. The result is the pattern of dates on the calendar is locked into this 400-year cycle, instead of the 28-year cycle you might imagine. And this makes some days of the week more likely for some dates than they otherwise might be.
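This is a claim you can verify with a short script; Python’s standard `datetime` module knows the Gregorian rules, so a loop over one full 400-year cycle settles it:

```python
import datetime

# Count which weekday the 13th lands on across one full 400-year
# Gregorian cycle (4800 months). The cycle is 146097 days, which is
# divisible by 7, so the pattern repeats exactly; any 400-year span works.
counts = {day: 0 for day in range(7)}   # 0 = Monday ... 6 = Sunday
for year in range(2000, 2400):
    for month in range(1, 13):
        counts[datetime.date(year, month, 13).weekday()] += 1

names = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
for day in range(7):
    print(names[day], counts[day])
# Friday comes out ahead, with 688 of the 4800 months.
```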

This got me wondering. Does the 13th being slightly more likely imply that the United States Thanksgiving is more likely to be on the 26th of the month? The current rule is that Thanksgiving is the fourth Thursday of November. We’ll pretend that’s an unalterable fact of nature for the sake of having a problem we can solve. So if the 13th is more likely to be a Friday than any other day of the week, isn’t the 26th more likely to be a Thursday than any other day of the week?

And that’s so, but I’m not quite certain yet. What’s got me pondering this in the shower is that the 13th is more likely a Friday for an arbitrary month. That is, if I think of a month and don’t tell you anything about what it is, all we can say is its chance of the 13th being a Friday is such-and-such. But if I pick a particular month — say, November 2017 — things are different. The chance the 13th of November, 2017 is a Friday is zero. So the chance the 26th of November, 2017 is a Thursday is zero. Our calendar system sets rules. We’ll pretend that’s an unalterable fact of nature for the sake of having a problem we can solve, too.

So: does knowing that I am thinking of November, rather than a completely unknown month, change the probabilities? And I don’t know. My gut says “it’s plausible the dates of Novembers are different from the dates of arbitrary months”. I don’t know a way to argue this purely logically, though. It might have to be tested by going through 400 years of calendars and counting when the fourth Thursdays are. (The problem isn’t so tedious as that. There’s formulas computers are good at which can do this pretty well.)
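That going-through-400-years-of-calendars test is less tedious than it sounds. Here’s a Python sketch that tallies which date the fourth Thursday of November lands on across one full Gregorian cycle (I won’t spoil which dates come out ahead):

```python
import datetime

# Over one full 400-year Gregorian cycle, on which date does the
# fourth Thursday of November (the U.S. Thanksgiving rule) fall?
counts = {}
for year in range(2000, 2400):
    # The fourth Thursday always falls between the 22nd and the 28th.
    for day in range(22, 29):
        if datetime.date(year, 11, day).weekday() == 3:   # 3 = Thursday
            counts[day] = counts.get(day, 0) + 1
            break

for day in sorted(counts):
    print(day, counts[day])
```

Run it and you can see for yourself whether Novembers distribute their fourth Thursdays evenly or play favorites.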

But I would like to know if it can be argued there’s a difference, or that there isn’t.

## The End 2016 Mathematics A To Z: Jordan Curve

I realize I used this thing in one of my Theorem Thursday posts but never quite said what it was. Let me fix that.

## Jordan Curve

Get a rubber band. Well, maybe you can’t just now, even if you wanted to after I gave orders like that. Imagine a rubber band. I apologize to anyone so offended by my imperious tone that they’re refusing. It’s the convention for pop mathematics or science.

Anyway, take your rubber band. Drop it on a table. Fiddle with it so it hasn’t got any loops in it and it doesn’t twist over any. I want the whole of one edge of the band touching the table. You can imagine the table too. That is a Jordan Curve, at least as long as the rubber band hasn’t broken.

This may not look much like a circle. It might be close, but I bet it’s got some wriggles in its curves. Maybe it even curves so much the thing looks more like a kidney bean than a circle. Maybe it pinches so much that it looks like a figure eight, a couple of loops connected by a tiny bridge on the interior. Doesn’t matter. You can bring out the circle. Put your finger inside the rubber band’s loops and spiral your finger around. Do this gently and the rubber band won’t jump off the table. It’ll round out to as perfect a circle as the limitations of matter allow.

And for that matter, if we wanted, we could take a rubber band laid down as a perfect circle. Then nudge it here and push it there and wrinkle it up into as complicated a figure as you like. Either way is as possible.

A Jordan Curve is a closed curve, a curve that loops around back to itself. And it’s simple. That is, it doesn’t cross over itself at any point. However weird and loopy this figure is, as long as it doesn’t cross over itself, it’s got in a sense the same shape as a circle. We can imagine a function that matches every point on a true circle to a point on the Jordan Curve. A set of points in order on the original circle will match to points in the same order on the Jordan Curve. There’s nothing missing and there are no jumps or ambiguous points. And no point on the Jordan Curve matches to two or more on the original circle. (This is why we don’t let the curve cross over itself.)

When I wrote about the Jordan Curve Theorem it was about how to tell how a curve divides a plane into two pieces, an inside and an outside. You can have some pretty complicated-looking figures. I have an example on the Jordan Curve Theorem essay, but you can make your own by doodling. And we can look at it as a circle, as a rubber band, twisted all around.

This all dips into topology, the study of how shapes connect when we don’t care about distance. But there are simple wondrous things to find about them. For example. Draw a Jordan Curve, please. Any that you like. Now draw a triangle. Again, any that you like.

There is some trio of points in your Jordan Curve which connect to a triangle the same shape as the one you drew. It may be bigger than your triangle, or smaller. But it’ll look similar. The angles inside will all be the same as the ones you started with. This should help make doodling during a dull meeting even more exciting.

There may be four points on your Jordan Curve that make a square. I don’t know. Nobody knows for sure. There certainly are if your curve is convex, that is, if no line between any two points on the curve goes outside the curve. And it’s true even for curves that aren’t convex, if they are smooth enough. But generally? For an arbitrary curve? We don’t know. It might be true. It might be impossible to find a square in some Jordan Curve. It might be the Jordan Curve you drew. Good luck looking.

• #### gaurish 3:52 am on Thursday, 24 November, 2016 Permalink | Reply

Jordan curve theorem is again in news: http://wp.me/p3qzP-2tV


• #### Joseph Nebus 11:11 pm on Friday, 25 November, 2016 Permalink | Reply

Ooh, thank you, that’s interesting stuff. And on that conjecture about squares, too, which is so neat.


## Reading the Comics, November 19, 2016: Thought I Featured This Already Edition

For the second half of last week Comic Strip Master Command sent me a couple comics I would have sworn I showed off here before.

Jason Poland’s Robbie and Bobby for the 16th I would have sworn I’d featured around here before. I still think it’s a rerun but apparently I haven’t written it up. It’s a pun, I suppose, playing on the use of “power” to mean both exponentials and the thing knowledge is. I’m curious why Poland used 10 for the new exponent. Normally if there isn’t an exponent explicitly written we take that to be “1”, and incrementing 1 would give 2. Possibly that would have made a less-clear illustration. Or possibly the idea of sleeping squared lacked the Brobdingnagian excess of sleeping to the tenth power.

Exponentials have been written as a small number elevated from the baseline since 1636. James Hume then published an edition of François Viète’s text on algebra. Hume used a Roman numeral in the superscript — $x^{ii}$ instead of $x^2$ — but apart from that it’s the scheme we use today. The scheme was in the air, though. René Descartes also used the notation, but with Arabic numerals throughout, from 1637. (With quirks; he would write “xx” instead of “$x^2$”, possibly because it’s the same number of characters to write.) And Pierre Hérigone just wrote the exponent after the variable: x2, like you see in bad character-recognition texts. That isn’t a bad scheme, particularly since it’s so easy to type, although we would add a caret: x^2. (I draw all this history, as ever, from Florian Cajori’s A History of Mathematical Notations, particularly sections 297 through 299).

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 16th has a fun concept about statisticians running wild and causing chaos. I appreciate a good healthy prank myself. It does point out something valuable, though. People in general have gotten to understand the idea that there are correlations between things. An event happening and some effect happening seem to go together. This is sometimes because the event causes the effect. Sometimes they’re both caused by some other factor; the event and effect are spuriously linked. Sometimes there’s just no meaningful connection. Coincidences do happen. But there’s really no good linking of how strong effects can be. And that’s not just a pop culture thing. For example, doing anything other than driving while driving increases the risk of crashing. But by how much? It’s easy to take something with the shape of a fact. Suppose it’s “looking at a text quadruples your risk of crashing”. (I don’t know what the risk increase is. Pretend it’s quadruple for the sake of this.) That’s easy to remember. But what’s my risk of crashing? Suppose it’s a clear, dry day, no winds, and I’m on a limited-access highway with light traffic. What’s the risk of crashing? Can’t be very high, considering how long I’ve done that without a crash. Quadruple that risk? That doesn’t seem terrifying. But I don’t know what that is, or how to express it in a way that helps make decisions. It’s not just newscasters who have this weakness.

Mark Anderson’s Andertoons for the 18th is the soothing appearance of Andertoons for this essay. And while it’s the familiar form of the student protesting the assignment the kid does have a point. There are times an estimate is all we need, and there’s times an exact answer is necessary. When are those times? That’s another skill that people have to develop.

Arthur C Clarke, in his semi-memoir Astounding Days, wrote of how his early-40s civil service job had him auditing schoolteacher pension contributions. He worked out that he really didn’t need to get the answers exactly. If the contribution was within about one percent of right it wasn’t worth his time to track it down more precisely. I’m not sure that his supervisors would take the same attitude. But the war soon took everyone to other matters without clarifying just how exactly he was supposed to audit.

Mark Anderson’s Mr Lowe rerun for the 18th is another I would have sworn I’ve brought up before. The strip was short-lived and this is at least its second time through. But then mathematics is only mentioned here as a dull things students must suffer through. It might not have seemed interesting enough for me to mention before.

Rick Detorie’s One Big Happy rerun for the 19th is another sort of pun. At least it plays on the multiple meanings of “negative”. And I suspect that negative numbers acquired a name with, er, negative connotations because the numbers were suspicious. It took centuries for mathematicians to move them from “obvious nonsense” to “convenient but meaningless tools for useful calculations” to “acceptable things” to “essential stuff”. Non-mathematicians can be forgiven for needing time to work through that progression. Also I’m not sure I didn’t show this one off here when it was first-run. Might be wrong.

Saturday Morning Breakfast Cereal pops back into my attention for the 19th. That’s with a bit about Dad messing with his kid’s head. Not much to say about that so let me bury the whimsy with my earnestness. The strip does point out that what we name stuff is arbitrary. We would say that 4 and 12 and 6 are “composite numbers”, while 2 and 3 are “prime numbers”. But if we all decided one day to swap the meanings of the terms around we wouldn’t be making any mathematics wrong. Or linguistics either. We would probably want to clarify what “a really good factor” is, but all the comic really does is mess with the labels of groups of numbers we’re already interested in.

## The End 2016 Mathematics A To Z: Image

It’s another free-choice entry. I’ve got something that I can use to make my Friday easier.

## Image.

So remember a while back I talked about what functions are? I described them the way modern mathematicians like. A function’s got three components to it. One is a set of things called the domain. Another is a set of things called the range. And there’s some rule linking things in the domain to things in the range. In shorthand we’ll write something like “f(x) = y”, where we know that x is in the domain and y is in the range. In a slightly more advanced mathematics class we’ll write $f: x \rightarrow y$. That maybe looks a little more computer-y. But I bet you can read that already: “f matches x to y”. Or maybe “f maps x to y”.

We have a couple ways to think about what ‘y’ is here. One is to say that ‘y’ is the image of ‘x’, under ‘f’. The language evokes camera trickery, or at least the way a trick lens might make us see something different. Pretend that the domain is something you could gaze at. If the domain is, say, some part of the real line, or a two-dimensional plane, or the like that’s not too hard to do. Then we can think of the rule part of ‘f’ as some distorting filter. When we look to where ‘x’ would be, we see the thing in the range we know as ‘y’.

At this point you probably imagine this is a pointless word to have. And that it’s backed up by a useless analogy. So it is. As far as I’ve gone this addresses a problem we don’t need to solve. If we want “the thing f matches x to” we can just say “f(x)”. Well, we write “f(x)”. We say “f of x”. Maybe “f at x”, or “f evaluated at x” if we want to emphasize ‘f’ more than ‘x’ or ‘f(x)’.

Where it gets useful is that we start looking at subsets. Bunches of points, not just one. Call ‘D’ some interesting-looking subset of the domain. What would it mean if we wrote the expression ‘f(D)’? Could we make that meaningful?

We do mean something by it. We mean what you might imagine by it. If you haven’t thought about what ‘f(D)’ might mean, take a moment — a short moment — and guess what it might. Don’t overthink it and you’ll have it right. I’ll put the answer just after this little bit so you can ponder.

Our pet rabbit on the beach in Omena, Michigan back in July this year. Which is a small town on the Traverse Bay, which is just off Lake Michigan where … oh, you have Google Maps, you don’t need me. Anyway we wondered what he would make of vast expanses of water, considering he doesn’t like water what with being a rabbit and all that. And he watched it for a while and then shuffled his way in to where the waves come up and could wash over his front legs, making us wonder what kind of crazy rabbit he is, exactly.

So. ‘f(D)’ is a set. We make that set by taking, in turn, every single thing that’s in ‘D’. And find everything in the range that’s matched by ‘f’ to those things in ‘D’. Collect them all together. This set, ‘f(D)’, is “the image of D under f”.
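As a sketch, that definition is nearly a one-liner in Python; here the squaring function stands in for ‘f’:

```python
def f(x):
    return x * x    # the squaring function, serving as our rule

def image(f, D):
    """The image of the set D under f: everything in the range that f
    matches the members of D to, collected into one set."""
    return {f(x) for x in D}

D = {-2, -1, 0, 1, 2}
print(image(f, D))   # {0, 1, 4}: note -2 and 2 land on the same spot
```

The collapsing of -2 and 2 onto 4 is why the image can be a smaller, differently-shaped thing than the set you started with.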

We use images a lot when we’re studying how functions work. A function that maps a simple lump into a simple lump of about the same size is one thing. A function that maps a simple lump into a cloud of disparate particles is a very different thing. A function that describes how physical systems evolve will preserve the volume and some other properties of these lumps of space. But it might stretch out and twist around that space, which is how we discovered chaos.

Properly speaking, the range of a function ‘f’ is just the image of the whole domain under that ‘f’. But we’re not usually that careful about defining ranges. We’ll say something like ‘the domain and range are the sets of real numbers’ even though we only need the positive real numbers in the range. Well, it’s not like we’re paying for unnecessary range. Let me call the whole domain ‘X’, because I went and used ‘D’ earlier. Then the range, let me call that ‘Y’, would be ‘Y = f(X)’.

Images will turn up again. They’re a handy way to let us get at some useful ideas.

## Reading the Comics, November 16, 2016: Seeing the Return of Jokes

Comic Strip Master Command sent out a big mass of comics this past week. Today’s installment will only cover about half of them. This half does feature a number of comics that show off jokes that’ve run here before. I’m sure it was coincidence. Comic Strip Master Command must have heard I was considering alerting cartoonists that I was talking about them. That’s fine for something like last week when I could talk about NP-complete problems or why we call something a “hypotenuse”. It can start a conversation. But “here’s a joke treating numerals as if they were beings”? All they can do is agree, that is what the joke is. If they disagree at that point they’re just trying to start a funny argument.

Scott Metzger’s The Bent Pinky for the 14th sees the return of anthropomorphic numerals humor. I’m a bit surprised Metzger goes so far as to make every numeral either a 3 or a 9. I’d have expected a couple of 2’s and 4’s. I understand not wanting to get into two-digit numbers. The premise of anthropomorphic numerals is troublesome if you need multiple-digit numbers.

Jon Rosenberg’s Goats for the 14th doesn’t directly mention a mathematical topic. But the story has the characters transported to a world with monkeys at typewriters. We know where that is. So we see that return after no time away, really.

Rick Detorie’s One Big Happy rerun for the 14th sees the return of “110 percent”. Happily the joke’s structured so that we can dodge arguing about whether it’s even possible to give 110 percent. I’m inclined to say of course it’s possible. “Giving 100 percent” in the context of playing a sport would mean giving the full reasonable effort. Or it does if we want to insist on idiomatic expressions making sense. It seems late to be insisting on that standard, but some people like it as an idea.

George Herriman’s Krazy Kat for the 22nd of December, 1938. Rerun the 15th of November, 2016. Really though who could sleep when they have a sweet adding machine like that to play with? Someone who noticed that that isn’t machine tape coming out the top, of course, but rather is the punch-cards for a band organ. Curiously low-dialect installment of the comic.

George Herriman’s Krazy Kat for the 22nd of December, 1938, was rerun on Tuesday. And it’s built on counting as a way of soothing the mind into restful sleep. Mathematics as a guide to sleep also appears, in minor form, in Darrin Bell’s Candorville for the 13th. I’m not sure why counting, or mental arithmetic, is able to soothe one into sleep. I suppose it’s just that it’s a task that’s engaging enough the semi-conscious mind can do it without having the emotional charge or complexity to wake someone up. I’ve taken to Collatz Conjecture problems, myself.

Terri Libenson’s Pajama Diaries for the 16th sees the return of Venn Diagram jokes. And it’s a properly-formed Venn Diagram, with the three circles coming together to indicate seven different conditions.

Terri Libenson’s Pajama Diaries for the 16th of November, 2016. I was never one for buying too much of the bakery aisle, myself, but then I also haven’t got teenagers. And I did go through so much of my life figuring there was no reason I shouldn’t eat another bagel again.

Gary Wise and Lance Aldrich’s Real Life Adventures for the 16th just name-drops rhomboids, using them as just a funny word. Geometry is filled with wonderful, funny-sounding words. I’m fond of “icosahedron” myself. But “rhomboid” and its related words are good ones. I think they hit that sweet spot between being uncommon in ordinary language without being so exotic that a reader’s eye trips over it. However funny a “triacontahedron” might be, no writer should expect the reader to forgive that pile of syllables. A rhomboid is a kind of parallelogram, so it’s got four sides. The sides come in two parallel pairs. Both members of a pair have the same length, but the different pairs don’t. They look like the kitchen tiles you’d get for a house you couldn’t really afford, not with tiling like that.

• #### sheldonk2014 12:08 am on Monday, 21 November, 2016 Permalink | Reply

Hey Joseph I just dropped by to see
How the other half live
Play any pinball lately
Sheldon


• #### Joseph Nebus 11:03 pm on Friday, 25 November, 2016 Permalink | Reply

Aw thanks, and glad to see you around. I’ve been pretty well. Played a lot of pinball lately, but the schedule is letting up after a lot of busy weeks. Everyone’s more or less found their place in the state’s championship series and there’s only a few people who can change their positions usefully. Should be a calm month ahead.


## The End 2016 Mathematics A To Z: Hat

I was hoping to pick a term that was a quick and easy one to dash off. I learned better.

## Hat.

This is a simple one. It’s about notation. Notation is never simple. But it’s important. Good symbols organize our thoughts. They tell us what are the common ordinary bits of our problem, and what are the unique bits we need to pay attention to here. We like them to be easy to write. Easy to type is nice, too, but in my experience mathematicians work by hand first. Typing is tidying-up, and we accept that being sluggish. Unique would be nice, so that anyone knows what kind of work we’re doing just by looking at the symbols. I don’t think anything manages that. But at least some notation has alternate uses rare enough we don’t have to worry about it.

“Hat” has two major uses I know of. And we call it “hat”, although our friends in the languages department would point out this is a caret. The little pointy corner that goes above a letter, like so: $\hat{i}$. $\hat{x}$. $\hat{e}$. It’s not something we see on its own. It’s always above some variable.

The first use of the hat like this comes up in statistics. It’s a way of marking that something is an estimate. By “estimate” here we mean what anyone might mean by “estimate”. Statistics is full of uses for this sort of thing. For example, we often want to know what the arithmetic mean of some quantity is. The average height of people. The average temperature for the 18th of November. The average weight of a loaf of bread. We have some letter that we use to mean “the value this has for any one example”. By some letter we mean ‘x’, maybe sometimes ‘y’. We can use any letter, and maybe the problem begs for some particular one. But it’s ‘x’, maybe sometimes ‘y’.

For the arithmetic mean of ‘x’ for the whole population we write the letter with a horizontal bar over it. (The arithmetic mean is the thing everybody in the world except mathematicians calls the average. Also, it’s what mathematicians mean when they say the average. We just get fussy because we know if we don’t say “arithmetic mean” someone will come along and point out there are other averages.) That arithmetic mean is $\bar{x}$. Maybe $\bar{y}$ if we must. It must be some number. But what is it? If we can’t measure whatever it is for every single example of our group — the whole population — then we have to make an estimate. We do that by taking a sample, ideally one that isn’t biased in some way. (This is so hard to do, or at least to be sure you’ve done.) We can find the mean for this sample, though, because that’s how we picked it. The mean of this sample is probably close to the mean of the whole population. It’s an estimate. So we can write $\hat{x}$ and understand. This is not $\bar{x}$ but it does give us a good idea what $\bar{x}$ should be.
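Here’s a little Python sketch of the idea, with an invented population of heights (all the numbers are made up for illustration). The sample mean, our $\hat{x}$, lands close to the population mean $\bar{x}$ without anyone measuring everybody:

```python
import random
import statistics

# A made-up "population": 100,000 heights in centimeters. The population
# mean (x-bar) is something we could only know by measuring everyone.
random.seed(42)
population = [random.gauss(170, 8) for _ in range(100_000)]
x_bar = statistics.mean(population)

# The estimate (x-hat): the mean of a modest random sample.
sample = random.sample(population, 500)
x_hat = statistics.mean(sample)

print(round(x_bar, 2))   # the true population mean
print(round(x_hat, 2))   # close to it, but only an estimate
```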

(We don’t always use the caret ^ for this. Sometimes we use a tilde ~ instead. ~ has the advantage that it’s often used for “approximately equal to”. So it will carry that suggestion over to its new context.)

The other major use of the hat comes in vectors. Mathematics types do a lot of work with vectors. It turns out a lot of mathematical structures work the way that pointing and moving in directions in ordinary space do. That’s why back when I talked about what vectors were I didn’t say “they’re like arrows pointing some length in some direction”. Arrows pointing some length in some direction are vectors, yes, but there are many more things that are vectors. Thinking of moving in particular directions gives us good intuition for how to work with vectors, and for stuff that turns out to be vectors. But they’re not everything.

If we need to highlight that something is a vector we put a little arrow over its name. $\vec{x}$. $\vec{e}$. That sort of thing. (Or if we’re typing, we might put the letter in boldface: x. This was good back before computers let us put in mathematics without giving the typesetters hazard pay.) We don’t always do that. By the time we do a lot of stuff with vectors we don’t always need the reminder. But we will include it if we need a warning. Like if we want to have both $\vec{r}$ telling us where something is and to use a plain old $r$ to tell us how big the vector $\vec{r}$ is. That turns up a lot in physics problems.

Every vector has some length. Even vectors that don’t seem to have anything to do with distances do. We can make a perfectly good vector out of “polynomials defined for the domain of numbers between -2 and +2”. Those polynomials are vectors, and they have lengths.

There’s a special class of vectors, ones that we really like in mathematics. They’re the “unit vectors”. Those are vectors with a length of 1. And we are always glad to see them. They’re usually good choices for a basis. Basis vectors are useful things. They give us, in a way, a representative slate of cases to solve. Then we can use that representative slate to give us whatever our specific problem’s solution is. So mathematicians learn to look instinctively to them. We want basis vectors, and we really like them to have a length of 1. Even if we aren’t putting the arrow over our variables we’ll put the caret over the unit vectors.

There are some unit vectors we use all the time. One is just the directions in space. That’s $\hat{e}_1$ and $\hat{e}_2$ and for that matter $\hat{e}_3$ and I bet you have an idea what the next one in the set might be. You might be right. These are basis vectors for normal, Euclidean space, which is why they’re labelled “e”. We have as many of them as we have dimensions of space. We have as many dimensions of space as we need for whatever problem we’re working on. If we need a basis vector and aren’t sure which one, we summon one of the letters used as indices all the time. $\hat{e}_i$, say, or $\hat{e}_j$. If we have an n-dimensional space, then we have unit vectors all the way up to $\hat{e}_n$.
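As a small aside, the step that turns any vector into its "hat" version is easy to sketch in code. This is my own toy illustration; the function name `unit` is my invention, not anything standard:

```python
import math

def unit(v):
    """Scale a vector to length 1 -- its 'hat' version."""
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

# The standard basis vectors e_1, e_2, e_3 for three-dimensional space
# are already unit vectors.
e1, e2, e3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]

v = [3, 4, 0]
v_hat = unit(v)   # [0.6, 0.8, 0.0], pointing the same way as v
```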

We also use the hat a lot if we’re writing quaternions. You remember quaternions, vaguely. They’re complex-valued numbers for people who’re bored with complex-valued numbers and want some thrills again. We build them as a quartet of numbers, each added together. Three of them are multiplied by the mysterious numbers ‘i’, ‘j’, and ‘k’. Each ‘i’, ‘j’, or ‘k’ multiplied by itself is equal to -1. But ‘i’ doesn’t equal ‘j’. Nor does ‘j’ equal ‘k’. Nor does ‘k’ equal ‘i’. And ‘i’ times ‘j’ is ‘k’, while ‘j’ times ‘i’ is minus ‘k’. That sort of thing. Easy to look up. You don’t need to know all the rules just now.

But we often end up writing a quaternion as a number like $4 + 2\hat{i} - 3\hat{j} + 1 \hat{k}$. OK, that’s just the one number. But we will write numbers like $a + b\hat{i} + c\hat{j} + d\hat{k}$. Here a, b, c, and d are all real numbers. This is kind of sloppy; the pieces of a quaternion aren’t in fact vectors added together. But it is hard not to look at a quaternion and see something pointing in some direction, like the first vectors we ever learn about. And there are some problems in pointing-in-a-direction vectors that quaternions handle so well. (Mostly how to rotate one direction around another axis.) So a bit of vector notation seeps in where it isn’t appropriate.
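The multiplication rules I waved at above are enough to compute with. Here's a minimal sketch of quaternion arithmetic, representing $a + b\hat{i} + c\hat{j} + d\hat{k}$ as a plain 4-tuple; the representation is my own choice for illustration:

```python
# A quaternion as a 4-tuple (a, b, c, d) meaning a + b*i + c*j + d*k.
def qmul(p, q):
    """Hamilton's quaternion product."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

assert qmul(i, i) == (-1, 0, 0, 0)   # i times i is -1
assert qmul(i, j) == (0, 0, 0, 1)    # i times j is k
assert qmul(j, i) == (0, 0, 0, -1)   # but j times i is minus k
```

Notice the order of multiplication matters, which is one of the thrills quaternions offer over complex numbers.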

I suppose there’s some value in pointing out that the ‘i’ and ‘j’ and ‘k’ in a quaternion are fixed and set numbers. They’re unlike an ‘a’ or an ‘x’ we might see in the expression. I’m not sure anyone was thinking they were, though. Notation is a tricky thing. It’s as hard to get sensible and consistent and clear as it is to make words and grammar sensible. But the hat is a simple one. It’s good to have something like that to rely on.

• #### howardat58 7:26 pm on Friday, 18 November, 2016 Permalink | Reply

Kill the division sign !

I am posting a problem for you. If you haven’t yet followed me you can do it now!


• #### Joseph Nebus 4:02 am on Sunday, 20 November, 2016 Permalink | Reply

Minimal Abstract Algebra? Thank you, I’m interested.

I think — yes, I am following you, and relieved for that as you’ve long been one of my kind readers. I’ve been failing to get to visit other blogs recently and am sorry to be this discourteous. It’s just the demands of each day getting to me.


## The End 2016 Mathematics A To Z: General Covariance

Today’s term is another request, and another of those that tests my ability to make something understandable. I’ll try anyway. The request comes from Elke Stangl, whose “Research Notes on Energy, Software, Life, the Universe, and Everything” blog I first ran across years ago, when she was explaining some dynamical systems work.

## General Covariance

So, tensors. They’re the things mathematicians get into when they figure vectors just aren’t hard enough. Physics majors learn about them too. Electrical engineers really get into them. Some material science types too.

You maybe notice something about those last three groups. They’re interested in subjects that are about space. Like, just, regions of the universe. Material scientists wonder how pressure exerted on something will get transmitted. The structure of what’s in the space matters here. Electrical engineers wonder how electric and magnetic fields send energy in different directions. And physicists — well, everybody who’s ever read a pop science treatment of general relativity knows. There’s something about the shape of space something something gravity something equivalent acceleration.

So this gets us to tensors. Tensors are this mathematical structure. They’re about how stuff that starts in one direction gets transmitted into other directions. You can see how that’s got to have something to do with transmitting pressure through objects. It’s probably not too much work to figure how that’s relevant to energy moving through space. That it has something to do with space as just volume is harder to imagine. But physics types have talked about it quite casually for over a century now. Science fiction writers have been enthusiastic about it almost that long. So it’s kind of like the Roman Empire. It’s an idea we hear about early and often enough we’re never really introduced to it. It’s never a big new idea we’re presented, the way, like, you get specifically told there was (say) a War of 1812. We just soak up a couple bits we overhear about the idea and carry on as best our lives allow.

But to think of space. Start from somewhere. Imagine moving a little bit in one direction. How far have you moved? If you started out in this one direction, did you somehow end up in a different one? Now imagine moving in a different direction. Now how far are you from where you started? How far is your direction from where you might have imagined you’d be? Our intuition is built around a Euclidean space, or one close enough to Euclidean. These directions and distances and combined movements work as they would on a sheet of paper, or in our living room. But there is a difference. Walk a kilometer due east and then one due north and you will not be in exactly the same spot as if you had walked a kilometer due north and then one due east. Tensors are efficient ways to describe those little differences. And they tell us something of the shape of the Earth from knowing these differences. And they do it using much of the form that matrices and vectors do, so they’re not so hard to learn as they might be.
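The east-then-north versus north-then-east difference is small but computable. Here's a rough sketch, treating each one-kilometer leg as short enough to use the simple spherical formulas; the starting latitude is an arbitrary choice of mine:

```python
import math

R = 6371.0   # Earth's radius in kilometers, approximately
d = 1.0      # each leg of the walk, in kilometers

def walk_north(lat, lon, dist):
    return lat + dist / R, lon

def walk_east(lat, lon, dist):
    # A kilometer east covers more longitude the closer you are to a pole.
    return lat, lon + dist / (R * math.cos(lat))

lat0, lon0 = math.radians(40.0), 0.0   # start at 40 degrees north

a = walk_north(*walk_east(lat0, lon0, d), d)   # east first, then north
b = walk_east(*walk_north(lat0, lon0, d), d)   # north first, then east

print(b[1] - a[1])   # the final longitudes differ, by a tiny amount
```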

That’s all prelude. Here’s the next piece. We go looking at transformations. We take a perfectly good coordinate system and a point in it. Now let the light of the full Moon shine upon it, so that it shifts to being a coordinate werewolf. Look around you. There’s a tensor that describes how your coordinates look here. What is it?

You might wonder why we care about transformations. What was wrong with the coordinates we started with? But that’s because mathematicians have lumped a lot of stuff into the same name of “transformation”. A transformation might be something as dull as “sliding things over a little bit”. Or “turning things a bit”. It might be “letting a second of time pass”. Or “following the flow of whatever’s moving”. Stuff we’d like to know for physics work.

“General covariance” is a term that comes up when thinking about transformations. Suppose we have a description of some physics problem. By this mostly we mean “something moving in space” or “a bit of light moving in space”. That’s because they’re good building blocks. A lot of what we might want to know can be understood as some mix of those two problems.

Put your description through the same transformation your coordinate system had. This will (most of the time) change the details of how your problem’s represented. But does it change the overall description? Is our old description no longer even meaningful?

I trust at this point you’ve nodded and thought something like “well, that makes sense”. Give it another thought. How could we not have a “generally covariant” description of something? Coordinate systems are our impositions on a problem. We create them to make our lives easier. They’re real things in exactly the same way that lines of longitude and latitude are real. If we increased the number describing the longitude of every point in the world by 14, we wouldn’t change anything real about where stuff was or how to navigate to it. We couldn’t.

Here I admit I’m stumped. I can’t think of a good example of a system that would look good but not be generally covariant. I’m forced to resort to metaphors and analogies that make this essay particularly unsuitable to use for your thesis defense.

So here’s the thing. Longitude is a completely arbitrary thing. Measuring where you are east or west of some prime meridian might be universal, or easy for anyone to tumble onto. But the prime meridian is a cultural choice. It’s changed before. It may change again. Indeed, Geographic Information Systems people still work with many different prime meridians. Most of them are for specialized purposes. Stuff like mapping New Jersey in feet north and east of some reference, for which Greenwich would make the numbers too ugly. But if our planet is mapped in an alien’s records, that map has at its center some line almost surely not Greenwich.

But latitude? Latitude is, at least, less arbitrary. That we measure it from zero to ninety degrees, north or south, is a cultural choice. (Or from -90 to 90 degrees. Same thing.) But that there’s a north pole and a south pole? That’s true as long as the planet is rotating. And that’s forced on us. If we tried to describe the Earth as rotating on an axis between Paris and Mexico City, we would … be fighting an uphill struggle, at least. It’s hard to see what problem that might make easier, apart from getting between Paris and Mexico City.

In models of the laws of physics we don’t really care about the north or south pole. A planet might have them or might not. But it has got some privileged stuff that just has to be so. We can’t have stuff that makes the speed of light in a vacuum change. And we have to make sense of a block of space that hasn’t got anything in it, no matter, no light, no energy, no gravity. I think those are the important pieces actually. But I’ll defer, growling angrily, to an expert in general relativity or non-Euclidean coordinates if I’ve misunderstood.

It’s often put that “general covariance” is one of the requirements for a scheme to describe General Relativity. I shall risk sounding like I’m making a joke and say that depends on your perspective. One can use different philosophical bases for describing General Relativity. In some of them you can see general covariance as a result rather than use it as a basic assumption. Here’s a 1993 paper by Dr John D Norton that describes some of the different ways to understand the point of general covariance.

By the way the term “general covariance” comes from two pieces. The “covariance” is because it describes how changes in one coordinate system are reflected in another. It’s “general” because we talk about coordinate transformations without knowing much about them. That is, we’re talking about transformations in general, instead of some specific case that’s easy to work with. This is why the mathematics of this can be frightfully tricky; we don’t know much about the transformations we’re working with. For a parallel, it’s easy to tell someone how to divide 14 into 112. It’s harder to tell them how to divide absolutely any number into absolutely any other number.

Quite a bit of mathematical physics plays into geometry. Physicists mostly see gravity as a problem of geometry. People who like reading up on science take that as given too. But many problems can be understood as a point or a blob of points in some kind of space, and how that point moves or that blob evolves in time. We don’t see “general covariance” in these other fields exactly. But we do see things that resemble it. It’s an idea with considerable reach.

I’m not sure how I feel about this. For most of my essays I’ve kept away from equations, even for the Why Stuff Can Orbit sequence. But this is one of those subjects it’s hard to be exact about without equations. I might revisit this in a special all-symbols, calculus-included, edition. Depends what my schedule looks like.

• #### elkement (Elke Stangl) 7:03 pm on Wednesday, 16 November, 2016 Permalink | Reply

Thanks for accepting this challenge – I think you explained it as good as one possibly can without equations!!

I think for understanding General Relativity you have to revisit some ideas from ‘flat space’ tensor calculus you took for granted, like a vector being sort of an arrow that can be moved around carelessly in space or what a coordinate transformation actually means (when applied to curved space). It seems GR is introduced either very formally, not to raise any false intuition, explaining the abstract big machinery with differentiable manifolds and atlases etc. and adding the actual physics as late as possible, or by starting from flat space metrics, staying close to ‘tangible physics’ and adding unfamiliar stuff slowly.
Sometimes I wonder if one (when trying to explain this to a freshman) could skip the ‘flat space’ part and start with the seemingly abstract but more general foundations as those cover anything? Perhaps it would be easier and more efficient never to learn about Gauss and Stokes theorem first but start with integration on manifolds and present such theorems as special cases?

And thanks for the pointer to this very interesting paper!


• #### Joseph Nebus 3:47 am on Sunday, 20 November, 2016 Permalink | Reply

Thanks so for the kind words. I worried through the writing of this that I was going too far wrong and I admit I’m still waiting for a real expert to come along and destroy my essay and my spirits. Another few weeks and I should be far enough from writing it that I can take being told all the ways I’m wrong, though.

You’re right about the ways General Relativity seems to be often taught. And I also wonder if it couldn’t be better-taught starting from a completely abstract base and then filling in why this matches the way the world looks. Something equivalent to introducing vectors as “things that are in a vector space, which are things with these properties” instead of as arrows in space. I suspect it might not be really doable, though, based on how many times I crashed against covariant versus contravariant indices and that’s incredibly small stuff.

But there are so many oddball-perspective physics books out there that someone must have tried it at least once. And many of them are really good at least in making stuff look different, if not better. I’m sorry not to be skilled enough in the field to give it a fair try. Maybe some semester I’ll go through a proper text on this and post the notes I make on it.


• #### elkement (Elke Stangl) 10:35 am on Sunday, 20 November, 2016 Permalink | Reply

I’ve recently stumbled upon this GR course http://www.infocobuild.com/education/audio-video-courses/physics/gravity-and-light-2015-we-heraeus.html : this lecturer is really very careful in introducing the foundations in the most abstract way, just as you say, without any intuitive references. No physics until lecture 9 (to prove that Newtonian gravity can also be presented in a generally covariant way – very interesting to read the history of science paper you linked in relation to this, BTW), then more math only, until finally in lecture 13 we return to ‘our’ spacetime.

I am also learning GR as a hobbyist project as this was not a mandatory subject in my physics degree program (I specialized in condensed matter, lasers, optics, superconductors…), and I admit I use mainly freely available sources like such lectures or detailed lecture notes. I have sort of planned to post about my favorite resources and/or that learning experience, too, but given my typical blogging frequency compared to yours I suppose I can wait for your postings and just use those as a reference :-)


• #### Joseph Nebus 11:02 pm on Friday, 25 November, 2016 Permalink | Reply

Ooh, that’s a great-looking series, though I’ve lacked the time to watch it yet. I’ve regretted not taking a proper course on general relativity. When I was an undergraduate my physics department did a lecture series on general relativity without advanced mathematics, but it conflicted with something on my schedule and I hoped they’d rerun the series another semester. Of course they didn’t at least during my time there.


## The End 2016 Mathematics A To Z: The Fredholm Alternative

Some things are created with magnificent names. My essay today is about one of them. It’s one of my favorite terms and I get a strange little delight whenever it needs to be mentioned in a proof. It’s also the title I shall use for my 1970s Paranoid-Conspiracy Thriller.

## The Fredholm Alternative.

So the Fredholm Alternative is about whether this supercomputer with the ability to monitor every commercial transaction in the country falls into the hands of the Parallax Corporation or whether — ahm. Sorry. Wrong one. OK.

The Fredholm Alternative comes from the world of functional analysis. In functional analysis we study sets of functions with tools from elsewhere in mathematics. Some you’d be surprised aren’t already in there. There’s adding functions together, multiplying them, the stuff of arithmetic. Some might be a bit surprising, like the stuff we draw from linear algebra. That’s ideas like functions having length, or being at angles to each other. Or that length and those angles changing when we take a function of those functions. This may sound baffling. But a mathematics student who’s got into functional analysis usually has a happy surprise waiting. She discovers the subject is easy. At least, it relies on a lot of stuff she’s learned already, applied to stuff that’s less difficult to work with than, like, numbers.

(This may be a personal bias. I found functional analysis a thoroughgoing delight, even though I didn’t specialize in it. But I got the impression from other grad students that functional analysis was well-liked. Maybe we just got the right instructor for it.)

I’ve mentioned in passing “operators”. These are functions that have a domain that’s a set of functions and a range that’s another set of functions. Suppose you come up to me with some function, let’s say $f(x) = x^2$. I give you back some other function — say, $F(x) = \frac{1}{3}x^3 - 4$. Then I’m acting as an operator.
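In code terms, an operator is a function that eats a function and hands back a function. A sketch of my own, using an approximate-derivative operator as the example:

```python
def derivative(f, h=1e-6):
    """An operator: take a function f, give back (an approximation of)
    its derivative, via the central-difference formula."""
    def df(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return df

f = lambda x: x**2
Tf = derivative(f)    # the operator applied to f
print(Tf(3.0))        # close to 6.0, since the derivative of x^2 is 2x
```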

Why should I do such a thing? Many operators correspond to doing interesting stuff. Taking derivatives of functions, for example. Or undoing the work of taking a derivative. Describing how changing a condition changes what sorts of outcomes a process has. We do a lot of stuff with these. Trust me.

Let me use the name ‘T’ for some operator. I’m not going to say anything about what it does. The letter’s arbitrary. We like to use capital letters for operators because it makes the operators look extra important. And we don’t want to use ‘O’ because that just looks like zero and we don’t need that confusion.

Anyway. We need two functions. One of them will be called ‘f’ because we always call functions ‘f’. The other we’ll call ‘v’. In setting up the Fredholm Alternative we have this important thing: we know what ‘f’ is. We don’t know what ‘v’ is. We’re finding out something about what ‘v’ might be. The operator doing whatever it does to a function we write down as if it were multiplication, that is, like ‘Tv’. We get this notation from linear algebra. There we multiply matrices by vectors. Matrix-times-vector multiplication works like operator-on-a-function stuff. So much so that if we didn’t use the same notation young mathematics grad students would rise in rebellion. “This is absurd,” they would say, in unison. “The connotations of these processes are too alike not to use the same notation!” And the department chair would admit they have a point. So we write ‘Tv’.

If you skipped out on mathematics after high school you might guess we’d write ‘T(v)’ and that would make sense too. And, actually, we do sometimes. But by the time we’re doing a lot of functional analysis we don’t need the parentheses so much. They don’t clarify anything we’re confused about, and they require all the work of parenthesis-making. But I do see it sometimes, mostly in older books. This makes me think mathematicians started out with ‘T(v)’ and then wrote less as people got used to what they were doing.

I admit we might not literally know what ‘f’ is. I mean we know what ‘f’ is in the same way that, for a quadratic equation $ax^2 + bx + c = 0$, we “know” what ‘a’, ‘b’, and ‘c’ are. Similarly, we don’t know what ‘v’ is in the same way we don’t know what ‘x’ is. The Fredholm Alternative tells us exactly one of these two things has to be true:

For operators that meet some requirements I don’t feel like getting into, either:

1. There’s one and only one ‘v’ which makes the equation $Tv = f$ true.
2. Or else $Tv = 0$ for some ‘v’ that isn’t just zero everywhere.

That is, either there’s exactly one solution, or else there’s no solving this particular equation. We can rule out there being two solutions (the way quadratic equations often have), or ten solutions (the way some annoying problems will), or infinitely many solutions (oh, it happens).
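The finite-dimensional cousin of this is familiar from linear algebra: a square matrix either is invertible, so that $Tv = f$ has exactly one solution for any f, or it has a nontrivial null space. A tiny sketch with 2-by-2 matrices; the particular matrices are examples of my own choosing:

```python
# For a square matrix T: either Tv = f has exactly one solution,
# or Tv = 0 has a solution v that isn't just zero everywhere.

def det2(T):
    return T[0][0] * T[1][1] - T[0][1] * T[1][0]

T_good = [[2, 1], [1, 1]]    # determinant 1: unique solution for any f
T_bad  = [[1, 2], [2, 4]]    # determinant 0: a nonzero null vector exists

assert det2(T_good) != 0
assert det2(T_bad) == 0

# Check that v = (2, -1) is in the null space of T_bad:
v = (2, -1)
assert all(row[0] * v[0] + row[1] * v[1] == 0 for row in T_bad)
```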

It turns up often in boundary value problems. Often before we try solving one we spend some time working out whether there is a solution. You can imagine why it’s worth spending a little time working that out before committing to a big equation-solving project. But it comes up elsewhere. Very often we have problems that, at their core, are “does this operator match anything at all in the domain to a particular function in the range?” When we try to answer we stumble across Fredholm’s Alternative over and over.

Fredholm here was Ivar Fredholm, a Swedish mathematician of the late 19th and early 20th centuries. He worked for Uppsala University, and for the Swedish Social Insurance Agency, and as an actuary for the Skandia insurance company. Wikipedia tells me that his mathematical work was used to calculate buyback prices. I have no idea how.

## Reading the Comics, November 12, 2016: Frazz and Monkeys Edition

Two things made repeat appearances in the mathematically-themed comics this week. They’re the comic strip Frazz and the idea of having infinitely many monkeys typing. Well, silly answers to word problems also turned up, but that’s hard to say many different things about. Here’s what I make the week in comics out to be.

Sandra Bell-Lundy’s Between Friends for the 6th of November, 2016. I’m surprised Bell-Lundy used the broader space of a Sunday strip for a joke that doesn’t need that much illustration, but I understand sometimes you just have to go with the joke that you have. And it isn’t as though Sunday comics get that much space anymore either. Anyway, I suppose we have all been there, although for me that’s more often because I used to have a six-digit pin, and a six-digit library card pin, and those were just close enough to each other that I could never convince myself I was remembering the right one in context, so I would guess wrong.

Sandra Bell-Lundy’s Between Friends for the 6th introduces the infinite monkeys problem. I wonder sometimes why the monkeys-on-typewriters thing has so caught the public imagination. And then I remember it encourages us to stare directly into infinity and its intuition-destroying nature from the comfortable furniture of the mundane — typewriters, or keyboards, for goodness’ sake — with that childish comic dose of monkeys. Given that it’s a wonder we ever talk about anything else, really.

Monkeys writing Shakespeare has for over a century stood as a marker for what’s possible but incredibly improbable. I haven’t seen it compared to finding a four-digit PIN. It has got me wondering about the chance that four randomly picked letters will be a legitimate English word. I’m sure the chance is more than the one-in-ten-thousand chance someone would guess a randomly drawn four-digit PIN correctly on one try. More than one in a hundred? I’m less sure. The easy-to-imagine thing to do is set a computer to try out all 456,976 possible sets of four letters and check them against a dictionary. The number of hits divided by the number of possibilities would be the chance of drawing a legitimate word. If I had a less capable computer, or were checking even longer words, I might instead draw some set number of strings, never minding that I didn’t get every possibility. The fraction of successful words in my sample would be something close to the chance of drawing any legitimate word.

If I thought a little deeper about the problem, though, I’d just count how many four-letter words are already in my dictionary and divide that into 456,976. It’s always a mistake to start programming before you’ve thought the problem out. The trouble is not being able to tell when that thinking-out is done.
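That count-and-divide approach is only a few lines of code. The word list below is a tiny hypothetical stand-in; a real run would load an actual dictionary file:

```python
# Count the four-letter words in a word list and divide by 26^4.
# This word set is a made-up sample for illustration only.
words = {"cat", "word", "math", "four", "tree", "blog"}
four_letter = [w for w in words if len(w) == 4 and w.isalpha()]

total = 26 ** 4              # 456,976 possible four-letter strings
chance = len(four_letter) / total
print(total, chance)
```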

Richard Thompson’s Poor Richard’s Almanac for the 7th is the other comic strip to mention infinite monkeys. Well, chimpanzees in this case. But for the mathematical problem they’re not different. I’ve featured this particular strip before. But I’m a Thompson fan. And goodness but look at the face on the T S Eliot fan in the lower left corner there.

Jef Mallett’s Frazz for the 6th gives Caulfield one of those flashes of insight that seems like it should be something but doesn’t mean much. He’s had several of these lately, as mentioned here last week. As before this is a fun discovery about Roman Numerals, but it doesn’t seem like it leads to much. Perhaps a discussion of how the subtractive principle — that you can write “four” as “IV” instead of “IIII” — evolved over time. But then there isn’t much point to learning Roman Numerals at all. It’s got some value in showing how much mathematics depends on culture. Not just that stuff can be expressed in different ways, but that those different expressions make different things easier or harder to do. But I suspect that isn’t the objective of lessons about Roman Numerals.

Frazz got my attention again on the 12th. This time it just uses arithmetic, and a real bear of an arithmetic problem, as signifier for “a big pile of hard work”. This particular problem would be — well, I have to call it tedious, rather than hard. Doing it is just a long string of adding together two numbers. But to do that over and over, by my count, at least 47 times for this one problem? Hardly any point to doing that much for one result.

Patrick Roberts’s Todd the Dinosaur for the 7th calls out fractions, and arithmetic generally, as the stuff that ruins a child’s dreams. (Well, a dinosaur child’s dreams.) Still, it’s nice to see someone reminding mathematicians that a lot of their field is mostly used by accountants. Actuaries we know about; mathematics departments like to point out that majors can get jobs as actuaries. I don’t know of anyone I went to school with who chose to become one or expressed a desire to be an actuary. But I admit not asking either.

Patrick Roberts’s Todd the Dinosaur for the 7th of November, 2016. I don’t remember being talked to by classmates’ parents about what they were, but that might just be that it’s been a long time since I was in elementary school and everybody had the normal sorts of jobs that kids don’t understand. I guess we talked about what our parents did but that should make a weaker impression.

Mike Thompson’s Grand Avenue started off a week of students-resisting-the-test-question jokes on the 7th. Most of them are hoary old word problem jokes. But, hey, I signed up to talk about it when a comic strip touches a mathematics topic and word problems do count.

Zach Weinersmith’s Saturday Morning Breakfast Cereal reprinted the 7th is a higher level of mathematical joke. It’s from the genre of nonsense calculation. This one starts off with what’s almost a cliche, at least for mathematics and physics majors. The equation it starts with, $e^{i \pi} = -1$, is true. And famous. It should be. It links exponentiation, imaginary numbers, π, and negative numbers. Nobody would have seen it coming. And from there is the sort of typical gibberish reasoning, like writing “Pi” instead of π so that it can be thought of as “P times i”, to draw to the silly conclusion that P = 0. That much work is legitimate.
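The famous equation is easy to check numerically, if you don't mind a whisker of rounding error:

```python
import cmath

# Euler's identity: e^(i*pi) equals -1, up to floating-point rounding.
z = cmath.exp(1j * cmath.pi)
print(z)   # approximately -1, plus a negligible imaginary part
```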

From there it sidelines into “P = NP”, which is another equation famous to mathematicians and computer scientists. It’s a shorthand expression of a problem about how long it takes to find solutions. That is, how many steps it takes. How much time it would take a computer to solve a problem. You can see why it’s important to have some study of how long it takes to do a problem. It would be poor form to tie up your computer on a problem that won’t be finished before the computer dies of old age. Or just take too long to be practical.

Most problems have some sense of size. You can look for a solution in a small problem or in a big one. You expect searching for the solution in a big problem to take longer. The question is how much longer? Some methods of solving problems take a length of time that grows only slowly as the size of the problem grows. Some take a length of time that grows crazy fast as the size of the problem grows. And there are different kinds of time growth. One kind is called Polynomial: there’s a polynomial in the problem’s size that describes how long it takes to solve. We call this kind of problem P. Another is called Non-Deterministic Polynomial, for problems that … can’t. We assume. We don’t know. But we know some problems that look like they should be NP (“NP Complete”, to be exact).

It’s an open question whether P and NP are the same thing. It’s possible that everything we think might be NP actually can be solved by a P-class algorithm we just haven’t thought of yet. It would be a revolution in our understanding of how to find solutions if it were. Most people who study algorithms think P is not NP. But that’s mostly (as I understand it) because it seems like if P were NP then we’d have some leads on proving that by now. You see how this falls short of being rigorous. But it is part of expertise to get a feel for what seems to make sense in light of everything else we know. We may be surprised. But it would be inhuman not to have any expectations of a problem like this.

Mark Anderson’s Andertoons for the 8th gives us the Andertoons content for the week. It’s a fair question why a right triangle might have three sides, three angles, three vertices, and just the one hypotenuse. The word’s origin is Greek, meaning “stretching under” or “stretching between”. It’s unobjectionable that we might say this is the stretch from one leg of the right triangle to another. But that leaves unanswered why there’s just the one hypotenuse, since the other two legs also stretch from the end of one leg to another. Dr Sarah on The Math Forum suggests we need to think of circles. Draw a circle and a diameter line on it. Now pick any point on the circle other than where the diameter cuts it. Draw a line from one end of the diameter to your point. And from your point to the other end of the diameter. You have a right triangle! And the hypotenuse is the leg stretching under the other two. Yes, I’m assuming you picked a point above the diameter. You did, though, didn’t you? Humans do that sort of thing.
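Dr Sarah's circle construction is easy to verify numerically: the two lines from any point on the circle to the ends of a diameter really do meet at a right angle. A sketch, with the particular point an arbitrary choice of mine:

```python
import math

# A point on a circle sees a diameter at a right angle (Thales' theorem).
# Diameter from (-1, 0) to (1, 0) on the unit circle; pick a point above it.
theta = math.radians(70)            # any angle besides 0 or 180 will do
p = (math.cos(theta), math.sin(theta))

# Vectors from the point to each end of the diameter.
u = (-1 - p[0], 0 - p[1])
w = (1 - p[0], 0 - p[1])

dot = u[0] * w[0] + u[1] * w[1]
print(dot)   # effectively zero: the two legs meet at a right angle
```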

I don’t know if Dr Sarah’s explanation is right. It sounds plausible and sensible. But those are weak pins to hang an etymology on. But I have no reason to think she’s mistaken. And the explanation might help people accept there is the one hypotenuse and there’s something interesting about it.

The first (and as I write this only) commenter, Kristiaan, has a good if cheap joke there.

• #### davekingsbury 10:38 pm on Monday, 14 November, 2016 Permalink | Reply

I reckon it was Bob Newhart’s sketch about it that made the monkey idea so popular. Best bit, something like, hey one of them has something over here er to be or not to be that is the … gezoinebplatf!


• #### Joseph Nebus 3:35 am on Sunday, 20 November, 2016 Permalink | Reply

I like to think that helped. I fear that that particular routine’s been forgotten, though. I was surprised back in the 90s when I was getting his albums and ran across that bit, as I’d never heard it before. But it might’ve been important in feeding the idea to other funny people. There’s probably a good essay to be written tracing the monkeys at typewriters through pop culture.

