I had remembered this comic strip, and I hoped to use it for yesterday’s A-to-Z essay about Imaginary Numbers. But I wasn’t able to find it before publishing deadline. I figured I could go back and add this to the essay once I found it, and I likely will anyway. (The essay is quite long and any kind of visual appeal helps.)
But I also wanted folks to have the chance to notice it, and an after-the-fact addition doesn’t give that chance.
It is almost certain that Bill Watterson read this strip, and long before his own comic with eleventeen and thirty-twelve and such. Watterson has spoken of Schulz’s influence. That isn’t to say that he copied the joke. “Gibberish number-like words” is not a unique idea, and it’s certainly not original to Schulz. I’d imagine a bit of effort could find prior examples even within comic strips. (I’m reminded in Pogo of Howland Owl describing the Groundhog Child’s gibberish as first-rate algebra.) It’s just fun to see great creative minds working out similar ideas, and how they use those ideas for different jokes.
I have another topic today suggested by Beth, of the I Didn’t Have My Glasses On …. inspiration blog. It overlaps a bit with other essays I’ve posted this A-to-Z sequence, but that’s all right. We get a better understanding of things by considering them from several perspectives. This one will be a bit more historical.
Pop science writer Isaac Asimov told a story he was proud of about his undergraduate days. A friend’s philosophy professor held court after class. One day he declared mathematicians were mystics, believing in things they even admit are “imaginary numbers”. Young Asimov, taking offense, offered to prove the reality of the square root of minus one, if the professor gave him one-half pieces of chalk. The professor snapped a piece of chalk in half and gave one piece to him. Asimov said this is one piece of chalk. The professor answered it was half the length of a piece of chalk and Asimov said that’s not what he asked for. Even if we accept “half the length” is okay, how do we know this isn’t 48 percent the length of a standard piece of chalk? If the professor was that bad on “one-half”, how could he have opinions on “imaginary numbers”?
This story is another “STEM undergraduates outwitting the philosophy expert” legend. (Even if it did happen. What we know is the story Asimov spun it into, in which a plucky young science fiction fan out-argued someone whose job is forming arguments.) Richard Feynman tells a similar story, befuddling a philosophy class with the question of how we can prove a brick has an interior. It helps young mathematicians and science majors feel better about their knowledge. But Asimov’s story does get at a couple points. First, that “imaginary” is a terrible name for a class of numbers. The square root of minus one is as “real” as one-half is. Second, we’ve decided that one-half is “real” in some way. What the philosophy professor could have baffled Asimov with is the question: in what way is one-half real? Or minus one?
We’re introduced to imaginary numbers through polynomials. I mean in education. It’s usually right after getting into quadratics, looking for solutions to equations like $x^2 - 3x + 2 = 0$. That quadratic has two solutions, but it’s possible to have a quadratic with only one, such as $x^2 - 2x + 1 = 0$. Or to have a quadratic with no solutions, such as, iconically, $x^2 + 1 = 0$. We might underscore that by plotting the curve whose x- and y-coordinates make true the equation $y = x^2 + 1$. There’s no point on the curve with a y-coordinate of zero, so, there we go.
Having established that $x^2 + 1 = 0$ has no solutions, the course then asks “what if we go ahead and say there was one”? Two solutions, in fact, $i$ and $-i$. This is all right for introducing the idea that mathematics is a tool. If it doesn’t do something we need, we can alter it.
But I see trouble in teaching someone how you can’t take square roots of negative numbers and then teaching them how to take square roots of negative numbers. It’s confusing at least. It needs some explanation about what changed. We might do better introducing them in a more historical method.
Historically, imaginary numbers (in the West) come from polynomials, yes. Different polynomials. Cubics, and quartics. Mathematicians still liked finding roots of them. Mathematicians would challenge one another to solve sets of polynomials. This seems hard to believe, but many sources agree on this. I hope we’re not all copying Eric Temple Bell here. (Bell’s Men of Mathematics is an inspiring collection of biographical sketches. But it’s not careful differentiating legends from documented facts.) And there are enough nerd challenges today that I can accept people daring one another to find solutions of $x^3 = 15x + 4$.
Quadratics, equations we can write as $ax^2 + bx + c = 0$ for some real numbers a, b, and c, we’ve known about forever. Euclid solved these kinds of equations using geometric reasoning. Chinese mathematicians 2200 years ago described rules for how to find roots. The Indian mathematician Brahmagupta, by the early 7th century, described the quadratic formula to find at least one root. Both possible roots were known to Indian mathematicians a thousand years ago. We’ve reduced the formula today to $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$.
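The formula is easy to check mechanically. Here is a quick sketch in Python, using the standard-library cmath module so that the “impossible” cases come out as complex numbers rather than errors; the function name is my own, not any library’s:

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*x^2 + b*x + c = 0, by the quadratic formula."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(quadratic_roots(1, -5, 6))  # x^2 - 5x + 6 = (x - 2)(x - 3): roots 3 and 2
print(quadratic_roots(1, 0, 1))   # x^2 + 1 = 0: roots i and -i
```

Using cmath.sqrt rather than math.sqrt is the whole trick: it accepts a negative discriminant without complaint, which is this essay’s point in miniature.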
With that filtering into Western Europe, the search was on for similar formulas for other polynomials. This turns into several interesting threads. One is a tale of intrigue and treachery involving Gerolamo Cardano, Niccolò Tartaglia, and Ludovico Ferrari. I’ll save that for another essay because I have to cut something out, so of course I skip the dramatic thing. Another thread is the search for quadratic-like formulas for other polynomials. They exist for third-power and fourth-power polynomials. Not (generally) for the fifth- or higher-powers. That is, there are individual polynomials you can solve by formulas, like $x^6 - 9x^3 + 8 = 0$. But stare at it and you can see where that’s “really” a quadratic in $x^3$ pretending to be sixth-power. Finding there was no formula to find, though, led people to develop group theory. And group theory underlies much of mathematics and modern physics.
The first great breakthrough solving the general cubic, $ax^3 + bx^2 + cx + d = 0$, came near the end of the 14th century in some manuscripts out of Florence. It’s built on a transformation. Transformations are key to mathematics. The point of a transformation is to turn a problem you don’t know how to do into one you do. As I write this, MathWorld lists 543 pages as matching “transformation”. That’s about half what “polynomial” matches (1,199) and about three times “trigonometric” (184). So that can help you judge importance.
Here, the transformation to make is to write a related polynomial in terms of a new variable. You can call that new variable x’ if you like, or z. I’ll use z so as to not have too many superscript marks flying around. This will be a “depressed polynomial”. “Depressed” here means that at least one of the coefficients in the new polynomial is zero. (Here, for this problem, it means we won’t have a squared term in the new polynomial.) I suspect the term is old-fashioned.
Let z be the new variable, related to x by the equation $z = x + \frac{b}{3a}$. And then figure out what $z^2$ and $z^3$ are. Using all that, and the knowledge that $ax^3 + bx^2 + cx + d = 0$, and a lot of arithmetic, you get to one of these three equations:

$z^3 + pz = q \quad \text{or} \quad z^3 = pz + q \quad \text{or} \quad z^3 + q = pz$
where p and q are some new coefficients. They’re positive numbers, or possibly zeros. They’re both derived from a, b, c, and d. And so in the 15th Century the search was on to solve one or more of these equations.
From our perspective in the 21st century, our first question is: what three equations? How are these not all the same equation? And today, yes, we would write this as one depressed equation, most likely $z^3 + pz + q = 0$. We would allow that p or q or both might be negative numbers.
And here is part of the mystery in this historical development. These days we generally learn about negative numbers first. Once we are comfortable with those, our teachers hope, we get imaginary numbers. But in the Western tradition mathematicians noticed both, and approached both, at roughly the same time. With roughly similar doubts, too. It’s easy to point to three apples; who can point to “minus three” apples? We can arrange nine apples into a neat square. How big a square can we set “minus nine” apples in?
Hesitation and uncertainty about negative numbers would continue quite a long while. At least among Western mathematicians. Indian mathematicians seem to have been more comfortable with them sooner. And merchants, who could model a negative number as a debt, seem to have gotten the idea better.
But even seemingly simple questions could be challenging. John Wallis, in the 17th century, postulated that negative numbers were larger than infinity. Leonhard Euler seems to have agreed. (The notion may seem odd. It has echoes today, though. Computers store numbers as bit patterns. The normal scheme represents negative numbers by making the first bit in a pattern 1. These bit patterns make the negative numbers look bigger than the biggest positive numbers. And thermodynamics gives us a temperature defined by the relationship of energy to entropy. That definition implies there can be negative temperatures. Those are “hotter” — higher-energy, at least — than infinitely-high positive temperatures.) In the 18th century we see temperature scales designed so that the weather won’t give negative numbers too often. Augustus De Morgan wrote in 1831 that a negative number “occurring as the solution of a problem indicates some inconsistency or absurdity”. De Morgan was not an amateur. He codified the rules for deductive logic so well we still call them De Morgan’s laws. He put induction on a logical footing. And he found negative numbers (and imaginary numbers) a sign of defective work. In 1831. 1831!
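The bit-pattern quirk is easy to see directly. A small sketch, assuming the usual two’s-complement encoding and an 8-bit width chosen just for illustration:

```python
def twos_complement_bits(n, bits=8):
    """The unsigned value whose bit pattern encodes n in two's complement."""
    return n & ((1 << bits) - 1)

# -3 is stored as 11111101. Read that pattern as an unsigned number and
# it's 253: bigger than any positive value an 8-bit signed number holds (127).
print(format(twos_complement_bits(-3), '08b'), twos_complement_bits(-3))
print(twos_complement_bits(127))
```

So in the raw bit patterns, just as in Wallis’s scheme, the negative numbers sit “beyond” the largest positives.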
But back to cubic equations. Allow that we’ve gotten comfortable enough with negative numbers that we only want to solve the one depressed equation $z^3 + pz + q = 0$. How to do it? … Another transformation, then. There are a couple you can do. Modern mathematicians would likely define a new variable w, set so that $z = w - \frac{p}{3w}$. This turns the depressed equation into

$w^3 - \frac{p^3}{27 w^3} + q = 0$
And this, believe it or not, is a disguised quadratic. Multiply everything in it by $w^3$ and move things around a little. You get

$w^6 + q w^3 - \frac{p^3}{27} = 0$
From there, quadratic formula to solve for $w^3$. Then from that, take cube roots to get w, and from w your three values of z. From that, you get your three values of x.
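The whole chain (depress, substitute, solve the quadratic in $w^3$, unwind) fits in a dozen lines. A sketch in Python, following just one branch of the square and cube roots and so recovering just one of the three roots; the function name is my own:

```python
import cmath

def one_cubic_root(a, b, c, d):
    """One root of a*x^3 + b*x^2 + c*x + d = 0 via the two transformations:
    depress with z = x + b/(3a), then substitute z = w - p/(3w)."""
    # Coefficients of the depressed cubic z^3 + p*z + q = 0.
    p = c / a - b * b / (3 * a * a)
    q = 2 * b**3 / (27 * a**3) - b * c / (3 * a * a) + d / a
    if p == 0:                    # degenerate case: no w-substitution needed
        z = (-q) ** (1 / 3)
    else:
        # w^6 + q*w^3 - p^3/27 = 0 is a quadratic in w^3.
        w3 = (-q + cmath.sqrt(q * q + 4 * p**3 / 27)) / 2
        w = w3 ** (1 / 3)         # principal cube root
        z = w - p / (3 * w)
    return z - b / (3 * a)

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6; this branch lands on x = 3.
print(one_cubic_root(1, -6, 11, -6))
```

Notice that even this all-real example routes through complex arithmetic along the way, which is exactly the historical surprise.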
You see why nobody has taught this in high school algebra since 1959. Also why I am not touching the quartic formula, the equivalent of this for polynomials of degree four.
There are other approaches. And they can work out easier for particular problems. Take, for example, $x^3 = 15x + 4$, which I introduced in the first act. It’s past the time we set it off.
Rafael Bombelli, in the 1570s, pondered this particular equation. Notice it’s already depressed. A formula developed by Cardano addressed this, in the form $x^3 = px + q$. Notice that’s the second of the three sorts of depressed polynomial. Cardano’s formula says that one of the roots will be at

$x = \sqrt[3]{\frac{q}{2} + \sqrt{\frac{q^2}{4} - \frac{p^3}{27}}} + \sqrt[3]{\frac{q}{2} - \sqrt{\frac{q^2}{4} - \frac{p^3}{27}}}$
Put to this problem, with $p = 15$ and $q = 4$, we get something that looks like a compelling reason to stop:

$x = \sqrt[3]{2 + \sqrt{-121}} + \sqrt[3]{2 - \sqrt{-121}}$
Bombelli did not stop with that, though. He carried on as though these expressions of the square root of -121 made sense. And doing that he found these terms added up. The cube root of $2 + \sqrt{-121}$ works out to $2 + \sqrt{-1}$, and the cube root of $2 - \sqrt{-121}$ to $2 - \sqrt{-1}$. You get an x of 4.
Which is true. It’s easy to check that it’s right. And here is the great surprising thing. Start from the respectable enough equation. It has nothing suspicious in it, not even negative numbers. Follow it through and you need to use negative numbers. Worse, you need to use the square roots of negative numbers. But keep going, as though you were confident in this, and you get a correct answer. And a real number.
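You can watch Bombelli’s audacity pay off numerically. A sketch in Python; the cube roots here are the principal complex cube roots, which happen to be the ones that pair up correctly for this equation:

```python
import cmath

p, q = 15, 4                               # Bombelli's x^3 = 15x + 4
inner = cmath.sqrt(q**2 / 4 - p**3 / 27)   # sqrt(4 - 125) = sqrt(-121) = 11i
t1 = (q / 2 + inner) ** (1 / 3)            # cube root of 2 + 11i, namely 2 + i
t2 = (q / 2 - inner) ** (1 / 3)            # cube root of 2 - 11i, namely 2 - i
print(t1 + t2)                             # imaginary parts cancel, leaving 4
```

The intermediate values are shamelessly complex; the sum is, up to rounding, the plain real number 4.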
We can get the other roots. Divide $x - 4$ out of $x^3 - 15x - 4$. What’s left is $x^2 + 4x + 1$. You can use the quadratic formula for this. The other two roots are $-2 + \sqrt{3}$, about -0.268, and $-2 - \sqrt{3}$, about -3.732.
So here we have good reasons to work with negative numbers, and with imaginary numbers. We may not trust them. But they get us to correct answers. And this brings up another little secret of mathematics. If all you care about is an answer, then it’s all right to use a dubious method to get an answer.
There is a logical rigor missing in “we got away with it, I guess”. The name “imaginary numbers” tells of the disapproval of its users. We get the name from René Descartes, who was more generally discussing complex numbers. He wrote something like “in many cases no quantity exists which corresponds to what one imagines”.
John Wallis, taking a break from negative numbers and his other projects and quarrels, thought of how to represent imaginary numbers as branches off a number line. It’s a good scheme that nobody noticed at the time. Leonhard Euler envisioned matching complex numbers with points on the plane, but didn’t work out a logical basis for this. In 1797 Caspar Wessel presented a paper that described using vectors to represent complex numbers. It’s a good approach. Unfortunately that paper too sank without a trace, undiscovered for a century.
In 1806 Jean-Robert Argand wrote an “Essay on the Geometrical Interpretation of Imaginary Quantities”. Jacques Français got a copy, and published a paper describing the basics of complex numbers. He credited the essay, but noted that there was no author on the title page and asked the author to identify himself. Argand did. We started to get some good rigor behind the concept.
In 1831 William Rowan Hamilton, of Hamiltonian fame, described complex numbers using ordered pairs. Once we can define their arithmetic using the arithmetic of real numbers we have a second solid basis. More reason to trust them. Augustin-Louis Cauchy, who proved about four billion theorems of complex analysis, published a new construction of them. This used a group theory approach, a polynomial ring we denote as $\mathbb{R}[x] / (x^2 + 1)$. I don’t have the strength to explain all that today. Matrices give us another approach. This matches complex numbers with particular two-row, two-column matrices. This turns the addition and multiplication of numbers into what Hamilton described.
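Hamilton’s ordered-pair construction is easy to sketch. Nothing in this arithmetic ever mentions a square root of anything; the pair (0, 1) just happens to square to the pair playing the role of -1:

```python
class Pair:
    """Hamilton-style complex number: an ordered pair of reals."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __add__(self, other):
        return Pair(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a, b)(c, d) = (ac - bd, ad + bc), built from real arithmetic only.
        return Pair(self.a * other.a - self.b * other.b,
                    self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"({self.a}, {self.b})"

i = Pair(0, 1)
print(i * i)   # (-1, 0): the pair that plays the role of -1
```

If we trust the real-number arithmetic inside those two rules, we trust the whole construction; that was Hamilton’s point.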
And here we have some idea why mathematicians use negative numbers, and trust imaginary numbers. We are pushed toward them by convenience. Negative numbers let us work with one equation, $z^3 + pz + q = 0$, rather than three. (Or more than three equations, if we have to work with an x we know to be negative.) Imaginary numbers we can start with, and find answers we know to be true. And this encourages us to find reasons to trust the results. Having one line of reasoning is good. Having several lines — Argand’s geometric, Hamilton’s coordinates, Cauchy’s rings — is reassuring. We may not be able to point to an imaginary number of anything. But if we can trust our arithmetic on real numbers we can trust our arithmetic on imaginary numbers.
As I mentioned, Descartes gave the name “imaginary number” to all of what we would now call “complex numbers”. Gauss published a geometric interpretation of complex numbers in 1831. And gave us the term “complex number”. Along the way he complained about the terminology, though. He noted “had +1, -1, and $\sqrt{-1}$, instead of being called positive, negative, and imaginary (or worse still, impossible) unity, been given the names say, of direct, inverse, and lateral unity, there would hardly have been any scope for such obscurity”. I’ve never heard the term “impossible numbers”, except as an adjective.
The name of a thing doesn’t affect what it is. It can affect how we think about it, though. We can ask whether Asimov’s professor would dismiss “lateral numbers” as mysticism. Or at least as more mystical than “three” is. We can, in context, understand why Descartes thought of these as “imaginary numbers”. He saw them as something to use for the length of a calculation, and that would disappear once its use was done. We still have such concepts, things like “dummy variables” in a calculus problem. We can’t think of a use for dummy variables except to let a calculation proceed. But perhaps we’ll see things differently in four hundred years. Shall have to come back and check.
Learning of imaginary numbers, things created to be the square roots of negative numbers, inspired me. It probably inspires anyone who’s the sort of person who’d become a mathematician. The trick was great. I wondered could I do it? Could I find some other useful expansion of the number system?
The square root of a complex-valued number sounded like the obvious way to go, until a little later that week when I learned that’s just some other complex-valued numbers. The next thing I hit on: how about the logarithm of a negative number? Couldn’t that be a useful expansion of numbers?
No. It turns out you can make a sensible logarithm of negative, and complex-valued, numbers using complex-valued numbers. Same with trigonometric and inverse trig functions, tangents and arccosines and all that. There isn’t anything we can do with the normal mathematical operations that needs something bigger than the complex-valued numbers to play with. It’s possible to expand on the complex-valued numbers. We can make quaternions and some more elaborate constructs there. They don’t solve any particular shortcoming in complex-valued numbers, but they’ve got their uses. I never got anywhere near reinventing them. I don’t regret the time spent on that. There’s something useful in trying to invent something even if it fails.
One problem with mathematics — with all intellectual fields, really — is that it’s easy, when teaching, to give the impression that this stuff is the Word of God, built into the nature of the universe and inarguable. It’s so not. The stuff we find interesting and how we describe those things are the results of human thought, attempts to say what is interesting about a thing and what is useful. And what best approximates our ideas of what we would like to know. So I was happy to see this come across my Twitter feed:
Some background on Euler, Leibniz, Bernoulli and the controversy about logs of negative numbers. https://t.co/MZAJ6LkTwX
Also: it turns out there’s not “the” logarithm of a complex-valued number. There’s infinitely many logarithms. But they’re a family, all strikingly similar, so we can pick one that’s convenient and just use that. Ask if you’re really interested.
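For the curious: Python’s cmath shows the family directly. The principal logarithm is what cmath.log returns; every other logarithm of the same number differs from it by a whole multiple of $2\pi i$:

```python
import cmath

z = -1 + 0j
principal = cmath.log(z)     # pi times i
# The other logarithms of z differ by whole multiples of 2*pi*i.
family = [principal + 2 * cmath.pi * 1j * k for k in (-1, 0, 1)]
for w in family:
    print(w, cmath.exp(w))   # each exponentiates back to (about) -1
```

Each member of the family, fed back through the exponential, recovers -1 up to rounding; they all have equal claim to being “a” logarithm.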
I had figured to do Reading the Comics posts weekly, and then last week went and gave me too big a flood of things to do. I have no idea what the rest of this week is going to look like. But given that I had four strips dated before last Sunday I’m going to err on the side of posting too much about comic strips.
Scott Metzger’s The Bent Pinky for the 24th uses mathematics as something that dogs can be adorable about not understanding. Thus all the heads tilted, as if it were me in a photograph. The graph here is from economics, which has long had a challenging relationship with mathematics. This particular graph is qualitative; it doesn’t exactly match anything in the real world. But it helps one visualize how we might expect changes in the price of something to affect its sales. A graph doesn’t need to be precise to be instructional.
Dave Whamond’s Reality Check for the 24th is this essay’s anthropomorphic-numerals joke. And it’s a reminder that something can be quite true without being reassuring. It plays on the difference between “real” numbers and things that really exist. It’s hard to think of a way that a number such as two could “really” exist that doesn’t also allow the square root of -1 to “really” exist.
And to be a bit curmudgeonly, it’s a bit sloppy to speak of “the square root of negative one”, even though everyone does. It’s all right to expand the idea of square roots to cover stuff it didn’t before. But there are at least two numbers that would, squared, equal -1. We usually call them i and -i. Square roots naturally have this problem. Both +2 and -2 squared give us 4. We pick out “the” square root by selecting the positive one of the two. But neither i nor -i is “positive”. (Don’t let the – sign fool you. It doesn’t count.) You can’t say either i or -i is greater than zero. There’s no way to define a “greater than” or “less than” for complex-valued numbers that respects their arithmetic. And that’s even before we get into quaternions, in which we summon two more “square roots” of -1 into existence. Octonions can be even stranger. I don’t blame 1 for being worried.
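Programming languages side with the mathematicians here. Python, for instance, happily squares i and -i but refuses to order complex numbers at all:

```python
# Both i and -i square to -1; Python writes i as 1j.
print((1j) ** 2, (-1j) ** 2)   # both print (-1+0j)

# And there is no "greater than" for complex numbers:
try:
    1j > 0
except TypeError as err:
    print("complex numbers are unordered:", err)
```

That TypeError is the language designers agreeing that no ordering of the complex numbers deserves to be the default.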
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 24th is a pleasant bit of pop-mathematics debunking. I’ve explained in the past how I’m a doubter of the golden ratio. The Fibonacci Sequence has a bit more legitimate interest to it. That’s sequences of numbers in which the next term is the sum of the previous two terms. The famous one of that is 1, 1, 2, 3, 5, 8, 13, 21, et cetera. It may not surprise you to know that the Fibonacci Sequence has a link to the golden ratio. As it goes on, the ratio between one term and the next one gets close to the golden ratio.
The Harmonic Series is much more deeply weird. A series is the number we get from adding together everything in a sequence. The Harmonic Series grows out of the first sequence you’d imagine ever adding up. It’s 1 plus 1/2 plus 1/3 plus 1/4 plus 1/5 plus 1/6 plus … et cetera. The first time you hear of this you get the surprise: this sum doesn’t ever stop piling up. We say it ‘diverges’. It won’t on your computer; the floating-point arithmetic it does won’t let you add enormous numbers like ‘1’ to tiny numbers like ‘1/531,325,263,953,066,893,142,231,356,120’ and get the right answer. But if you actually added this all up, it would.
The proof gets a little messy. But it amounts to this: 1/2 plus 1/3 plus 1/4? That’s more than 1. 1/5 + 1/6 + 1/7 + 1/8 + 1/9 + 1/10 + 1/11 + 1/12? That’s also more than 1. 1/13 + 1/14 + 1/15 + et cetera up through + 1/32 + 1/33 + 1/34 is also more than 1. You need to pile up more and more terms each time, but a finite string of these numbers will add up to more than 1. So the whole series has to be more than 1 + 1 + 1 + 1 + 1 … and so more than any finite number.
That’s all amazing enough. And then the series goes on to defy all kinds of intuition. Obviously dropping a couple of terms from the series won’t change whether it converges or diverges. Multiplying alternating terms by -1, so you have (say) 1 – 1/2 + 1/3 – 1/4 + 1/5 et cetera produces something that looks like it converges. It equals the natural logarithm of 2. But if you take those terms and rearrange them, you can produce any real number, positive or negative, that you want.
And, as Weinersmith describes here, if you just skip the correct set of terms, you can make the sum converge. The ones with 9 in the denominator will be, then, 1/9, 1/19, 1/29, 1/90, 1/91, 1/92, 1/290, 1/999, those sorts of things. Amazing? Yes. Absurd? I suppose so. This is why mathematicians learn to be very careful when they do anything, even addition, infinitely many times.
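Both halves of this are pleasant to check by machine. A sketch: the first part verifies the grouping argument above with exact rational arithmetic; the second adds up the early terms of the nines-depleted series, whose full sum is known to be finite (a bit under 23), though the partial sums creep toward it very slowly:

```python
from fractions import Fraction

# The grouping argument: each of these blocks of the harmonic series
# adds up to more than 1, so the whole series outgrows any bound.
for start, end in [(2, 4), (5, 12), (13, 34)]:
    block = sum(Fraction(1, n) for n in range(start, end + 1))
    print(f"1/{start} + ... + 1/{end} = {float(block):.4f}")

# Kempner's variation: skip every term whose denominator contains a 9.
depleted = sum(1 / n for n in range(1, 100_000) if '9' not in str(n))
print(f"{depleted:.3f}")   # stays well under the series' limit, about 22.92
```

Using Fraction for the blocks avoids the floating-point trouble mentioned above: the comparisons against 1 are exact.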
John Deering’s Strange Brew for the 25th is a fear-of-mathematics joke. The sign the warrior’s carrying is legitimate algebra, at least so far as it goes. The right-hand side of the equation gets cut off. In time, it would get to the conclusion that x equals –19/2, or -9.5.
I’ve got another request from Gaurish today. And it’s a word I had been thinking to do anyway. When one looks for mathematical terms starting with ‘q’ this is one that stands out. I’m a little surprised I didn’t do it for last summer’s A To Z. But here it is at last:
I remember the seizing of my imagination the summer I learned imaginary numbers. If we could define a number i, so that i-squared equalled negative 1, and work out arithmetic which made sense out of that, why not do it again? Complex-valued numbers are great. Why not something more? Maybe we could also have some other non-real number. I reached deep into my imagination and picked j as its name. It could be something else. Maybe the logarithm of -1. Maybe the square root of i. Maybe something else. And maybe we could build arithmetic with a whole second other non-real number.
My hopes of this brilliant idea petered out over the summer. It’s easy to imagine a super-complex number, something that’s “1 + 2i + 3j”. And it’s easy to work out adding two super-complex numbers like this together. But multiplying them together? What should i times j be? I couldn’t solve the problem. Also I learned that we didn’t need another number to be the logarithm of -1. It would be π times i. (Or some other numbers. There’s some surprising stuff in logarithms of negative or of complex-valued numbers.) We also don’t need something special to be the square root of i, either. $\frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2} i$ will do. (So will another number.) So I shelved the project.
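Python’s cmath module knows both of these facts:

```python
import cmath

print(cmath.log(-1))        # pi times i: no new kind of number needed
w = (1 + 1j) / cmath.sqrt(2)
print(w * w)                # very nearly 1j: i already has a square root
```

The complex numbers are closed under all these operations; that closure is exactly why my teenage project had nowhere to go.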
Even if I hadn’t given up, I wouldn’t have invented something. Not along those lines. Finer minds had done the same work and had found a way to do it. The most famous of these is the quaternions. Its discovery is famous. Sir William Rowan Hamilton — the namesake of “Hamiltonian mechanics”, so you already know what a fantastic mind he was — had a flash of insight that’s come down in the folklore and romance of mathematical history. He had the idea on the 16th of October, 1843, while walking with his wife along the Royal Canal, in Dublin, Ireland. While walking across the bridge he saw what was missing. It seems he lacked pencil and paper. He carved it into the bridge:

$i^2 = j^2 = k^2 = ijk = -1$
And they are a mysterious three! i, j, and k are somehow not the same number. But each of them, multiplied by itself, gives us -1. And the product of the three of them together is also -1. They are even more mysterious. To work sensibly, i times j can’t be the same thing as j times i. Instead, i times j equals minus j times i. And j times k equals minus k times j. And k times i equals minus i times k. We must give up commutativity, the idea that the order in which we multiply things doesn’t matter.
But if we’re willing to accept that the order matters, then quaternions are well-behaved things. We can add and subtract them just as we would think to do if we didn’t know they were strange constructs. If we keep the funny rules about the products of i and j and k straight, then we can multiply them as easily as we multiply polynomials together. We can even divide them. We can do all the things we do with real numbers, only with these odd sets of four real numbers.
The way they look, that pattern of 1 + 2i + 3j + 4k, makes them look a lot like vectors. And we can use them like vectors pointing to stuff in three-dimensional space. It’s not quite a comfortable fit, though. That plain old real number at the start of things seems like it ought to signify something, but it doesn’t. In practice, it doesn’t give us anything that regular old vectors don’t. And vectors allow us to ponder not just three- or maybe four-dimensional spaces, but as many as we need. You might wonder why we need more than four dimensions, even allowing for time. It’s because if we want to track a lot of interacting things, it’s surprisingly useful to put them all into one big vector in a very high-dimension space. It’s hard to draw, but the mathematics is nice. Hamiltonian mechanics, particularly, almost beg for it.
That’s not to call them useless, or even a niche interest. They do some things fantastically well. One of them is rotations. We can represent rotating a point around an arbitrary axis by an arbitrary angle as the multiplication of quaternions. There are many ways to calculate rotations. But if we need to do three-dimensional rotations this is a great one because it’s easy to understand and easier to program. And as you’d imagine, being able to calculate what rotations do is useful in all sorts of applications.
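A minimal sketch of that use, with quaternions as plain (w, x, y, z) tuples; the function and variable names are my own invention, not any particular library’s API:

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(v, axis, angle):
    """Rotate the 3-vector v about a unit-length axis, via r v r*."""
    half = angle / 2
    r = (math.cos(half),) + tuple(math.sin(half) * a for a in axis)
    r_conj = (r[0], -r[1], -r[2], -r[3])
    _, x, y, z = qmul(qmul(r, (0.0,) + tuple(v)), r_conj)
    return (x, y, z)

# A quarter turn about the z axis carries the x unit vector to the y axis.
print(rotate((1, 0, 0), (0, 0, 1), math.pi / 2))
```

The same qmul confirms Hamilton’s bridge carving: i squared is -1 and i times j is k, with all the sign-flipping the text describes.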
They’ve got good uses in number theory too, as they correspond well to the different ways to solve problems, often polynomials. They’re also popular in group theory. They might be the simplest rings that work like arithmetic but that don’t commute. So they can serve as ways to learn properties of more exotic ring structures.
Knowing of these marvelous exotic creatures of the deep mathematics your imagination might be fired. Can we do this again? Can we make something with, say, four unreal numbers? No, no we can’t. Four won’t work. Nor will five. If we keep going, though, we do hit upon success with seven unreal numbers.
This is a set called the octonions. Hamilton had barely worked out the scheme for quaternions when John T Graves, a friend of his at least up through the 16th of December, 1843, wrote of this new scheme. (Graves didn’t publish before Arthur Cayley did. Cayley’s one of those unspeakably prolific 19th century mathematicians. He has at least 967 papers to his credit. And he was a lawyer doing mathematics on the side for about 250 of those papers. This depresses every mathematician who ponders it these days.)
But where quaternions are peculiar, octonions are really peculiar. Let me call three quaternions p, q, and r. The product p times q might not be the same thing as q times p. But p times the product of q and r will be the same thing as the product of p and q, times r. This we call associativity. Octonions don’t have that. Let me call three octonions s, t, and u. s times the product of t and u may be either positive or negative the product of s and t, times u. (It depends.)
Octonions have some neat mathematical properties. But I don’t know of any general uses for them that are as catchy as understanding rotations. Not rotations in the three-dimensional world, anyway.
Yes, yes, we can go farther still. There’s a construct called “sedenions”, which have fifteen non-real numbers on them. That’s 16 terms in each number. Where octonions are peculiar, sedenions are really peculiar. They work even less like regular old numbers than octonions do. With octonions, at least, when you multiply s by the product of s and t, you get the same number as you would multiplying s by s and then multiplying that by t. Sedenions don’t even offer that shred of normality. Besides being a way to learn about abstract algebra structures I don’t know what they’re used for.
I also don’t know of further exotic terms along this line. It would seem to fit a pattern if there’s some 32-term construct that we can define something like multiplication for. But it would presumably be even less like regular multiplication than sedenion multiplication is. If you want to fiddle about with that please do enjoy yourself. I’d be interested to hear if you turn up anything, but I don’t expect it’ll revolutionize the way I look at numbers. Sorry. But the discovery might be the fun part anyway.
This will be a hastily-written installment since I married just this weekend and have other things occupying me. But there’s still comics mentioning math subjects so let me summarize them for you. The first since my last collection of these, on the 13th of June, came on the 15th, with Dave Whamond’s Reality Check. It goes into one of the minor linguistic quirks that bothers me: the claim that one can’t give “110 percent,” since 100 percent is all there is. I don’t object to phrases like “110 percent”, though, since it seems to me the baseline, the 100 percent, must be some standard reference performance. For example, the Space Shuttle Main Engines routinely operated at around 104 percent. That wasn’t because they exceeded their theoretical limits. The original design thrust was found to be not quite enough, and the engines were redesigned to deliver more, and it would have been far too confusing to rewrite all the documentation so that the new design thrust was the new 100 percent. Instead 100 percent stayed the design capacity of an engine which never flew but which existed in paper form. So I’m forgiving of “110 percent” constructions, is the important thing to me.
Since I suspect that the comics roundup posts are the most popular ones I post, I’m very glad to see there was a bumper crop of strips among the ones I read regularly (from King Features Syndicate and from gocomics.com) this past week. Some of those were from cancelled strips in perpetual reruns, but that’s fine, I think: there aren’t any particular limits on how big an electronic comics page one can have, after all, and while it’s possible to read a short-lived strip long enough that you see all its entries, it takes a couple go-rounds to actually have them all memorized.
I want to talk about some numbers which have names, and to argue that surprisingly few numbers do. To make that argument it would be useful to say which numbers I think have names, and which ones haven’t; perhaps if I say enough I will find out.
For example, “one” is certainly a name of a number. So are “two” and “three” and so on, and going up to “twenty”, and going down to “zero”. But is “twenty-one” the name of a number, or just a label for the number described by the formula “take the number called twenty and add to it the number called one”?
It feels to me more like a label. I note for support the former London-dialect preference for writing such numbers as one-and-twenty, two-and-twenty, and so on, a construction still remembered in Charles Dickens, in nursery rhymes about blackbirds baked in pies, in poetry about the ways of constructing tribal lays correctly. It tells you how to calculate the number based on a few named numbers and some operations.
None of these are negative numbers. I can’t think of a properly named negative number, just ones we specify by prepending “minus” or “negative” to the label given a positive number. But negative numbers are fairly new things, a concept we have found comfortable for only a few centuries. Perhaps we will find something that simply must be named.
That tips my attitude (for today) about these names, that I admit “thirty” and “forty” and so up to a “hundred” as names. After that we return to what feel like formulas: a hundred and one, a hundred and ten, two hundred and fifty. We name a number, to say how many hundreds there are, and then whatever is left over. In ruling “thirty” in as a name and “three hundred” out I am being inconsistent; fortunately, I am speaking of peculiarities of the English language, so no one will notice. My dictionary notes the “-ty” suffix, going back to old English, means “groups of ten”. This makes “thirty” just “three tens”, stuffed down a little, yet somehow I think of “thirty” as different from “three hundred”, possibly because the latter does not appear in my dictionary. Somehow the impression formed in my mind before I thought to look.