A Leap Day 2016 Mathematics A To Z: Grammar


My next entry for this A To Z was another request, this one from Jacob Kanev, who doesn’t seem to have a WordPress or other blog. (If I’m mistaken, please, let me know.) Kanev’s given me several requests, some of them quite challenging. Some too challenging: I have to step back from describing “both context sensitive and not” kinds of grammar just now. I hope all will forgive me if I just introduce the base idea.

Grammar.

One of the ideals humans hold when writing a mathematical proof is to crush all humanity from the proof. It’s nothing personal. It reflects a desire to be certain we have proved things without letting any unstated assumptions or unnoticed biases interfere. The 19th century was a lousy century for mathematicians and their intuitions. Many ideas that seemed clear enough turned out to be paradoxical. It’s natural to want to not make those mistakes again. We can succeed.

We can do this by stripping out everything but the essentials. We can even do away with words. After all, if I say something is a “square”, that suggests I mean what we mean by “square” in English. Our mathematics might not have proved all the square-ness of the thing. And so we reduce the universe to symbols. Letters will do as symbols, if we want to be kind to our typesetters. We do want to be kind now that, thanks to LaTeX, we do our own typesetting.

This is called building a “formal language”. The “formal” here means “relating to the form” rather than “the way you address people when you can’t just say `heya, gang’.” A formal language has two important components. One is the symbols that can be operated on. The other is the operations you can do on the symbols.

If we’ve set it all up correctly then we get something wonderful. We have “statements”. They’re strings of the various symbols. Some of the statements are axioms; they’re assumed to be true without proof. We can turn a statement into another one by using a statement we have and one of the operations. If the operation requires, we can add in something else we already know to be true. Something we’ve already proven.

Any statement we build this way — starting from an axiom and building with the valid operations — is a new and true statement. It’s a theorem. The proof of the theorem? It’s the full sequence of symbols and operations that we’ve built. The line between advanced mathematics and magic is blurred. To give a theorem its full name is to give its proof. (And now you understand why the biographies of many of the pioneering logicians of the late 19th and early 20th centuries include a period of fascination with the Kabbalah and other forms of occult or gnostic mysticism.)

A grammar is what’s required to describe a language like this. It’s defined as a quartet of properties. The first property is the collection of symbols that appear only in the intermediate stages of building a statement; they have to be rewritten away before we’re done. These are called nonterminal symbols. The second property is the collection of symbols that a finished statement may contain. These are called terminal symbols. (You see why we want to have those as separate lists.) The third property is the collection of rules that let you build new statements from old. The fourth property is the collection of things we take to be true to start. We only have finitely many options for each of these, at least for your typical grammar. I imagine someone has experimented with infinite grammars. But that hasn’t become enough of a research field that people have to pay attention to it. Not yet, anyway.
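Just to make the quartet concrete, here is one way it might be written down in code. This is only a minimal sketch, not anything standard; the class and field names are my own invention, and the scheme it encodes is the I, M, U toy system described later in this essay.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Grammar:
    """One hypothetical way to bundle the four properties of a grammar."""
    nonterminals: frozenset   # working symbols, rewritten away before a statement is finished
    terminals: frozenset      # symbols a finished statement may be made of
    rules: tuple              # rewriting rules, here as (pattern, replacement) pairs
    axioms: frozenset         # statements taken to be true without proof

# The I, M, U scheme from later in the essay, packed into this form.
miu = Grammar(
    nonterminals=frozenset({"x", "y"}),   # schematic variables; they never appear in theorems
    terminals=frozenset({"M", "I", "U"}),
    rules=(("xI", "xIM"), ("Mx", "Mxx"), ("MxIIIy", "MxUy"), ("MxUUy", "Mxy")),
    axioms=frozenset({"MI"}),
)
```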

Now it’s reasonable to ask if we need mathematicians at all. If building up theorems is just a matter of applying the finitely many rules of inference on finitely many collections of symbols, finitely many times over, then what about this can’t be done by computer? And done better by a computer, since a computer doesn’t need coffee, or bathroom breaks an hour later, or the hope of moving to a tenure-track position?

Well, we do need mathematicians. I don’t say that just because I hope someone will give me money in exchange for doing mathematics. It’s because setting up a computer to just grind out every possible theorem will never turn up what you want to know now. There are several reasons for this.

Here’s a way to see why. It’s drawn from Douglas Hofstadter’s Gödel, Escher, Bach, a copy of which you can find in any college dorm room or student organization office. At least you could back when I was an undergraduate. I don’t know what the kids today use.

Anyway, this scheme has three terminal symbols: I, M, and U. Finished statements are strings of those. To mark where a statement ends … oh, let’s just use the space at the end of a string. That way everything looks like words. We will include a couple of variables, lowercase letters like x and y and z, which do the nonterminal work. They stand for any string of terminal symbols. They’re falsework. They help us get work done, but must not appear in our final result.

There are four rules of inference. The first: if xI is valid, then so is xIM. The second: if Mx is valid, then so is Mxx. The third: if MxIIIy is valid, then so is MxUy. The fourth: if MxUUy is valid, then so is Mxy.

We have one axiom, assumed without proof to be true: MI.

So let’s putter around some. MI is true. So by the second rule, so is MII. That’s a theorem. And since MII is true, by the second rule again, so is MIIII. That’s another theorem. Since MIIII is true, by the first rule, so is MIIIIM. We’ve got another theorem already. Since MIIIIM is true, by the third rule, so is MIUM. We’ve got another theorem. For that matter, since MIIIIM is true, again by the third rule, so is MUIM. Would you like MIUMIUM? That’s waiting there to be proved too.
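If you’d like to see the symbol-shuffling done mechanically, here’s a rough sketch in Python. The function name and the way I apply each rule at every possible position are my own choices; the four rules and the derivation it checks are the ones above.

```python
def successors(s):
    """Every string reachable from s by one application of an I, M, U rule."""
    out = set()
    if s.endswith("I"):                     # Rule 1: xI -> xIM
        out.add(s + "M")
    if s.startswith("M"):                   # Rule 2: Mx -> Mxx
        out.add(s + s[1:])
    for i in range(len(s) - 2):             # Rule 3: replace any III with U
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])
    for i in range(len(s) - 1):             # Rule 4: drop any UU
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])
    return out

# The derivation worked out above, from the axiom MI through to MIUM, step by step.
derivation = ["MI", "MII", "MIIII", "MIIIIM", "MIUM"]
for before, after in zip(derivation, derivation[1:]):
    assert after in successors(before), (before, after)
print("every step checks out")
```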

And that will do. First question: what does any of this even mean? Nobody cares about whether MIUMIUM is a theorem in this system. Nobody cares about figuring out whether MUIUMUIUI might be a theorem. We care about questions like “what’s the smallest odd perfect number?” or “how many equally-strong vortices can be placed in a ring without the system becoming unstable?” With everything reduced to symbol-shuffling like this we’re safe from accidentally assuming something which isn’t justified. But we’re pretty far from understanding what these theorems even mean.

In this case, these strings don’t mean anything. They’re a toy so we can get comfortable with the idea of building theorems this way. We don’t expect them to do any more work than we expect Lincoln Logs to build usable housing. But you can see how we’re starting pretty far from most interesting mathematics questions.

Still, if we started from a system that meant something, we would get there in time, right? … Surely? …

Well, maybe. The thing is, even with this I, M, U scheme and its four rules there are a lot of things to try out. From the first axiom, MI, we can produce either MII or MIM. From MII we can produce MIIM or MIIII. From MIIII we could produce MIIIIM, or MUI, or MIU, or MIIIIIIII. From each of those we can produce … quite a bit of stuff.

All of those are theorems in this scheme and that’s nice. But it’s a lot. Suppose we have set up symbols and axioms and rules that have clear interpretations that relate to something we care about. If we set the computer to produce every possible legitimate result we are going to produce an enormous number of results that we don’t care about. They’re not wrong, they’re just off-point. And there are a lot more true things that are off-point than there are true things on-point. We need something with judgement to pick out results that have anything to do with what we want to know. And trying out combinations to see if we can produce the pattern we want is hard. Really hard.
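Here’s a rough sketch of how fast this grows, using the I, M, U scheme again. The rule-application function is the same idea as in the earlier sketch, and the cutoff of seven rule applications is arbitrary, just enough to watch the count climb.

```python
def successors(s):
    """Every string one rule application away from s, under the I, M, U rules."""
    out = set()
    if s.endswith("I"):
        out.add(s + "M")
    if s.startswith("M"):
        out.add(s + s[1:])
    out |= {s[:i] + "U" + s[i+3:] for i in range(len(s) - 2) if s[i:i+3] == "III"}
    out |= {s[:i] + s[i+2:] for i in range(len(s) - 1) if s[i:i+2] == "UU"}
    return out

# Breadth-first generation of every theorem reachable from the axiom MI.
frontier, theorems = {"MI"}, {"MI"}
for depth in range(1, 8):
    frontier = {t for s in frontier for t in successors(s)} - theorems
    theorems |= frontier
    print(f"within {depth} rule applications: {len(theorems)} distinct theorems")
```

Each pass of the loop finds everything one more rule application away. The count grows quickly, and nothing in the machinery says which of those strings we might actually care about.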

And there’s worse. If we set up a formal language that matches real mathematics, then we need a lot of work to prove anything. Even simple statements can take forever. I seem to remember my logic professor needing 27 steps to work out the uncontroversial theorem “if x = y and y = z, then x = z”. (Granting he may have been taking the long way around for demonstration purposes.) We would have to dig through theorems of unspeakably many symbols to find the good stuff.
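For contrast, a modern proof assistant lets you lean on lemmas its library already has, so the 27 steps collapse into one line. A minimal sketch in Lean 4, using nothing beyond the core library (the variable names are mine):

```lean
-- Transitivity of equality, for any type α: the built-in Eq.trans does the work.
-- The kernel still checks a fully formal derivation; we just don't write it out by hand.
example {α : Type} (x y z : α) (h₁ : x = y) (h₂ : y = z) : x = z :=
  h₁.trans h₂
```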

Now it’s reasonable to ask what the point of all this is. Why create a scheme that lets us find everything that can be proved, only to have all we’re interested in buried in garbage?

There are some uses. To make us swear we’ve read Jorge Luis Borges, for one. Another is to study the theory of what we can prove. That is, what are we able to learn by logical deduction? And another is to design systems meant to let us solve particular kinds of problems. That approach makes the subject merge into computer science. Code for a computer is, in a sense, about how to change a string of data into another string of data. What are the legitimate data to start with? What are the rules by which to change the data? And these are the sorts of things grammars, and the study of grammars, are about.

Reading the Comics, January 15, 2016: Electric Brains and Klein Bottles Edition


I admit I don’t always find a theme running through Comic Strip Master Command’s latest set of mathematically-themed comics. The edition names are mostly so that I can tell them apart when I see a couple listed in the Popular Posts roundup anyway.

Little Iodine and her parents see an electronic brain capable of solving any problem; her father offers 'square root of 7,921 x^2 y^2'. It gets it correct, 89 xy. Little Iodine, inspired, makes her own. 'Where are you getting all the money for those ice cream cones and stuff?' her father demands. 'I made a 'lectric brain --- the kids pay me a nickel when they got homework --- here --- give me a problem.' He offers 9 times 16. The electric brain writes out 'Dere teecher, Plees xcuse my chidl for not doing his homwork'. 'And then these letters come out --- the kid gives it to the teacher and everything's okey --- '
Jimmy Hatlo’s Little Iodine for the 12th of January, 2016. Originally run the 7th of November, 1954.

Jimmy Hatlo’s Little Iodine is a vintage comic strip from the 1950s. It strikes me as an unlicensed adaptation of Baby Snooks, but that’s not something for me to worry about. The particular strip, originally from the 7th of November, 1954 (and just run the 12th of January this year) interests me for its ancient views of computers. It’s from the days they were called “electric brains”. I’m also impressed that the machine on display early on is able to work out the “square root of 7921 x^2 y^2”. The square root of 7921 is no great feat. Being able to work with the symbols of x and y without knowing what they stand for, though, does impress me. I’m not sure there were computers which could handle that sort of symbolic manipulation in 1954. That sort of ability to work with a quantity by name rather than value is what we would buy Mathematica for, if we could afford it. It’s also at least a bit impressive that someone knows the square of 89 offhand. All told, I think this is my favorite of this essay’s set of strips. But it’s a weak field considering none of them are “students giving a snarky reply to a homework/exam/blackboard question”.
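I can’t say what 1954 hardware could manage, but the symbolic step is routine for a modern computer-algebra system. A small sketch using the free SymPy library as a stand-in for Mathematica (the positivity assumption on x and y is mine, to keep the square root from hedging about signs):

```python
import sympy as sp

# Treat x and y as positive so sqrt(x**2 * y**2) simplifies to x*y outright.
x, y = sp.symbols("x y", positive=True)

answer = sp.simplify(sp.sqrt(7921 * x**2 * y**2))
print(answer)   # 89*x*y, since 89**2 = 7921
```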

Joe Martin’s Willy and Ethel for the 13th of January is a percentages joke. Some might fault it for talking about people giving 110 percent, but of course, what is “100 percent”? If it’s the standard amount of work being done then it does seem like ten people giving 110 percent gets the job done as quickly as eleven people doing 100 percent. If work worked like that.

Willy asks his kid: 'OK, here's a question of my own that involves math and principles. If you're on an 11 man crew and 10 of them are giving 110%, do you have to show up for work?'
Joe Martin’s Willy and Ethel for the 13th of January, 2016. The link will likely expire in mid-February.

Steve Sicula’s Home and Away for the 13th (a rerun from the 8th of October, 2004) gives a wrongheaded application of a decent principle. The principle is that of taking several data points and averaging their value. The problem with data is that it’s often got errors in it. Something weird happened and it doesn’t represent what it’s supposed to. Or it doesn’t represent it well. By averaging several data points together we can minimize the influence of a fluke reading. Or if we’re measuring something that changes in time, we might use a running average of the last several sampled values. In this way a short-term spike or a meaningless flutter will be minimized. We can avoid wasting time reacting to something that doesn’t matter. (The cost of this, though, is that if a trend is developing we will notice it later than we otherwise would.) Still, sometimes a data point is obviously wrong.
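As a sketch of the running-average idea (the window size of five and the sample numbers are made up for illustration):

```python
from collections import deque

def running_average(samples, window=5):
    """Yield the mean of the most recent `window` samples as each one arrives."""
    recent = deque(maxlen=window)
    for value in samples:
        recent.append(value)
        yield sum(recent) / len(recent)

# A mostly steady signal with one fluke reading; the averaged output barely jumps.
readings = [10, 10, 11, 10, 90, 10, 11, 10, 10]
print([round(a, 1) for a in running_average(readings)])
```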

Zach Weinersmith’s Saturday Morning Breakfast Cereal wanted my attention, and so on the 13th it did a joke about Zeno’s Paradox. There are actually four classic Zeno’s Paradoxes, although the one riffed on here I think is the most popular. This one — the idea that you can’t finish something (leaving a room is the most common form) because you have to get halfway done, and have to get halfway to being halfway done, and halfway to halfway to halfway to being done — is often resolved by people saying that Zeno just didn’t understand that an infinite series could converge. That is, that you can add together infinitely many numbers and get a finite number. I’m inclined to think Zeno did not, somehow, think it was impossible to leave rooms. What the paradoxes as a whole get to are questions about space and time: they’re either infinitely divisible or they’re not. And either way produces effects that don’t seem to quite match our intuitions.
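The convergence at issue is just the geometric series: the halfway, then quarter-of-the-way, then eighth-of-the-way legs add up to the whole trip,

\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{n=1}^{\infty} \left(\frac{1}{2}\right)^{n} = 1 .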

The next day Saturday Morning Breakfast Cereal does a joke about Klein bottles. These are famous topological constructs. At least they’re famous in the kinds of places people talk about topological constructs. A Klein bottle is much like the Möbius strip, a ribbon given a half-twist and then joined end to end. The Klein bottle you can imagine as a cylinder stretched out into the fourth dimension, given a twist, then joined back to itself. We can’t really do this, what with it being difficult to craft four-dimensional objects. But we can imagine it, and it describes an object that has no boundary and only one side. There’s no inside or outside. There’s no making this in the real world, but we can make nice-looking approximations, usually as bottles.

Ruben Bolling’s Super-Fun-Pak Comix for the 13th of January is an extreme installment of Chaos Butterfly. The trouble with touching Chaos Butterfly to cause disasters is that you don’t know — you can’t know — what would have happened had you not touched the butterfly. You change your luck, but there’s no way to tell whether for the better or worse. One of the commenters at Gocomics.com alludes to this problem.

Jon Rosenberg’s Scenes From A Multiverse for the 13th of January makes quite literal the quantum-mechanics talk of probability waves and quantum foam and the like. The wave formulation of quantum mechanics, the most popular and accessible one, describes what’s going on in equations that look much like the equations for things diffusing into space. And quantum mechanical problems are often solved by supposing that the probability distribution we’re interested in can be broken up into a series of sinusoidal waves. Representing a complicated function as a sum of waves is a common trick, not just in quantum mechanics, because it works so well so often. Sinusoidal waves behave in nice, predictable ways under the linear differential equations that turn up most often. So converting a hard differential equation problem into a long string of relatively easy differential equation problems is usually a good trade.
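Schematically, the trick is to write the wave function as a superposition of sinusoidal pieces, something like (the coefficients c_n and wave numbers k_n depend on the particular problem)

\psi(x, t) = \sum_{n} c_n \, e^{i\left(k_n x - \omega_n t\right)} .

Each term on its own solves the easy version of the equation; the work is in finding coefficients that make the sum match what you started with.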

Tom Thaves’s Frank and Ernest for the 14th of January ties together the baffling worlds of grammar and negative numbers. It puts Frank and Ernest on panel with Euclid, who’s a fair enough choice to represent the foundation of (western) mathematics. He’s famous for the geometry we now call Euclidean. That’s the common everyday kind of blackboards and tabletops and solid cubes and spheres. But among his writings are compilations of arithmetic, as understood at the time. So if we know anyone in Ancient Greece to have credentials to talk about negative numbers it’s him. But the choice of Euclid traps the panel into an anachronism: the Ancient Greeks just didn’t think of negative numbers. They could work through “a lack of things” or “a shortage of something”, but a negative? That’s a later innovation. But it’s hard to think of a good rewriting of the joke. You might have Isaac Newton be consulted, but Newton makes normal people think of gravity and physics, confounding the mathematics joke. There’s a similar problem with Albert Einstein. Leibniz or Gauss should be good, but I suspect they’re not the household names that even Euclid is. And if we have to go “less famous mathematician than Gauss” we’re in real trouble. (No, not Andrew Wiles. Normal people know him as “the guy that proved Fermat’s thing”, and that’s too many words to fit on panel.) Perhaps the joke can’t be made to read cleanly and make good historic sense.

Reading the Comics, June 11, 2015: Bonus Education Edition


The coming US summer vacation suggests Comic Strip Master Command will slow down production of mathematics-themed comic strips. But they haven’t quite yet. And this week I also found a couple comics that, while not about mathematics, amused me enough that I want to include them anyway. So those bonus strips I’ll run at the end of my regular business here.

Bill Hinds’s Tank McNamara (June 6) does a pi pun. The pithon mathematical-snake idea is fun enough and I’d be interested in a character design. I think the strip’s unjustifiably snotty about tattoos. But comic strips have a strange tendency to get snotty about other forms of art.

A friend happened to mention that one problem with tattoos that require straight lines or regular shapes is that human skin has a non-flat Gaussian curvature. Yes, that’s how the friend talks. Gaussian curvature is, well, a measure of how curved a surface is. That sounds obvious enough, but there are surprises: a circular cylinder, such as the label of a can, has the same curvature as a flat sheet of paper. You can see that by how easy it is to wrap a sheet of paper around a can. But a ball doesn’t, and you can see that by how you can’t neatly wrap a sheet of paper around a ball without crumpling or tearing the paper. Human skin is kind of cylindrical in many places, but not perfectly so, and it changes as the body moves. So any design that looks good on paper requires some artistic imagination to adapt to the skin.
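For the record, the Gaussian curvature at a point is the product of the two principal curvatures there. A cylinder of radius r curves one way but is flat along its length, so the product is zero, same as a flat sheet; a sphere of radius R curves the same way in every direction:

K = \kappa_1 \kappa_2, \qquad K_{\text{cylinder}} = \frac{1}{r} \cdot 0 = 0, \qquad K_{\text{sphere}} = \frac{1}{R} \cdot \frac{1}{R} = \frac{1}{R^2} .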

Bill Amend’s FoxTrot (June 7) sets Jason and Marcus working on their summer tans. It’s a good strip for adding to the cover of a trigonometry test as part of the cheat-sheet.

Dana Simpson’s Phoebe and her Unicorn (June 8) makes what I think is its first appearance in my Reading the Comics series. The strip, as a web comic, had been named Heavenly Nostrils. Then it got the vanishingly rare chance to run as a syndicated newspaper comic strip. And newspaper comics page editors, it seems, don’t share web readers’ fondness for the word “nostril”. Thus the more marketable name. After that interesting background I’m sad to say Simpson delivers a bog-standard “kids not understanding fractions” joke. I can’t say much about that.

Ruben Bolling’s Super Fun-Pak Comix (June 10, rerun) is an installment of everyone’s favorite literary-device model of infinite probability, A Million Monkeys At A Million Typewriters. This installment subverts the model: a monkey thinking about the text destroys the randomness that the scheme depends upon. This one’s my favorite of the mathematics strips this time around.

And Dan Thompson’s traditional Brevity appearance is the June 11th strip, an Anthropomorphic Numerals joke combining a traditional schoolyard gag with a pun I didn’t notice the first time I read the panel.


And now here’s a couple strips that aren’t mathematical but that I just liked too much to ignore. Also this lets Mark Anderson’s Andertoons get back on my page. The June 10th strip is a funny bit of grammar play.

Percy Crosby’s Skippy (June 6, rerun from sometime in 1928) tickles me for its point about what you get at the top and the bottom of the class. Although tutorials and office hours and extracurricular help, and automated teaching tools, do customize things a bit, teaching is ultimately a performance given to an audience. Some will be perfectly in tune with the performance, and some won’t. Audiences are like that.

Reading the Comics, April 27, 2015: Anthropomorphic Mathematics Edition


They’re not running at the frantic pace of April 21st, but there’s still been a fair clip of comic strips that mention some kind of mathematical topic. I imagine Comic Strip Master Command wants to be sure to use as many of these jokes up as possible before the (United States) summer vacation sets in.

Dan Thompson’s Brevity (April 23) is a straightforward pun strip. It also shows a correct understanding of how to draw a proper Venn Diagram. And after all why shouldn’t an anthropomorphized Venn Diagram star in movies too?

John Atkinson’s Wrong Hands (April 23) gets into more comfortable territory with plain old numbers being anthropomorphized. The 1 is fair to call this a problem. What kind of problem depends on whether you read the x as a multiplication sign or as a variable x. If it’s a multiplication sign then I can’t think of any true statement that can be made from that bundle of symbols. If it’s the variable x then there are surprisingly many problems which could be made, particularly if you’re willing to count something like “x = 718” as a problem. I think that it works out to 24 problems but would accept contrary views. This one ended up being the most interesting to me once I started working out how many problems you could make with just those symbols. There’s a fun question for your combinatorics exam in that.

