From my First A-to-Z: Orthogonal


I haven’t had the space yet to finish my Little 2021 A-to-Z, so let me resume playing the hits of past ones. For my first, the Summer 2015 edition, I picked all the topics myself. This one, Orthogonal, I remember as one of the challenging ones. The challenge was the question put in the first paragraph: why do we have this term, which is so nearly a synonym for “perpendicular”? I didn’t find an answer, then, or since. But I was able to think about how we use “orthogonal” and what it might do that “perpendicular” doesn’t.


Orthogonal.

Orthogonal is another word for perpendicular. So why do we need another word for that?

It helps to think about why “perpendicular” is a useful way to organize things. For example, we can describe the directions to a place in terms of how far it is north-south and how far it is east-west, and talk about how fast it’s travelling in terms of its speed heading north or south and its speed heading east or west. We can separate the north-south motion from the east-west motion. If we’re lucky these motions separate entirely, and we turn a complicated two- or three-dimensional problem into two or three simpler problems. If they can’t be fully separated, they can often be largely separated. We turn a complicated problem into a set of simpler problems with a nice and easy part plus an annoying yet small hard part.

And this is why we like perpendicular directions. We can often turn a problem into several simpler ones describing each direction separately, or nearly so.

And now the amazing thing. We can separate these motions because the north-south and the east-west directions are at right angles to one another. But we can describe something that works like an angle between things that aren’t necessarily directions. For example, we can describe an angle between things like functions that have the same domain. And once we can describe the angle between two functions, we can describe functions that make right angles between each other.

This means we can describe functions as being perpendicular to one another. An example. On the domain of real numbers from -1 to 1, the function f(x) = x is perpendicular to the function g(x) = x^2 . And when we want to study a more complicated function we can separate the part that’s in the “direction” of f(x) from the part that’s in the “direction” of g(x). We can treat functions, even functions we don’t know, as if they were locations in space. And we can study and even solve for the different parts of the function as if we were pinning down the north-south and the east-west movements of a thing.
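If you want to check this, the “angle” comes from treating the integral of the product of two functions as a dot product. Here is a quick sketch in Python (my addition, using nothing fancier than a midpoint-rule integral) showing that the pairing of f(x) = x with g(x) = x^2 over -1 to 1 comes out to zero:

```python
def inner(f, g, a=-1.0, b=1.0, n=10_000):
    """Approximate the 'dot product' of f and g: the integral of
    f(x)*g(x) over [a, b], by a midpoint Riemann sum."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h)
               for i in range(n)) * h

f = lambda x: x       # f(x) = x
g = lambda x: x ** 2  # g(x) = x^2

print(inner(f, g))  # ~0: f and g are perpendicular on [-1, 1]
print(inner(f, f))  # ~2/3: f is, reassuringly, not perpendicular to itself
```

The dot product of f with itself is not zero, which is as it should be; nothing ought to be perpendicular to itself.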

So if we want to study, say, how heat flows through a body, we can work out a series of “directions” for functions, and work out the flow in each of those “directions”. These don’t have anything to do with left-right or up-down directions, but the concepts and the convenience are similar.

I’ve spoken about this in terms of functions. But we can define the “angle” between things for many kinds of mathematical structures. Once we can do that, we can have “perpendicular” pairs of things. I’ve spoken only about functions, but that’s because functions are more familiar than many of the mathematical structures that have orthogonality.

Ah, but why call it “orthogonal” rather than “perpendicular”? And I don’t know. The best I can work out is that it feels weird to speak of, say, the cosine function being “perpendicular” to the sine function when you can’t really say either is in any particular direction. “Orthogonal” seems to appeal less directly to physical intuition while still meaning something. But that’s my guess, rather than the verdict of a skilled etymologist.

A Leap Day 2016 Mathematics A To Z: Vector


And as we approach the last letters of the alphabet, my Leap Day A To Z gets to the last of Gaurish’s requests.

Vector.

A vector’s a thing you can multiply by a number and then add to another vector.

Oh, I know what you’re thinking. Wasn’t a vector one of those things that points somewhere? A direction and a length in that direction? (Maybe dressed up in more formal language. I’m glad to see that apparently New Jersey Tech’s student newspaper is still The Vector and still uses the motto “With Magnitude And Direction”.) Yeah, that’s how we’re always introduced to it. Pointing to stuff is a good introduction to vectors. Nearly everyone finds their way around places. And it’s a good learning model, to learn how to multiply vectors by numbers and to add vectors together.

But thinking too much about directions, either in real-world three-dimensional space, or in the two-dimensional space of the thing we’re writing notes on, can be limiting. We can get too hung up on a particular representation of a vector. Usually that’s an ordered set of numbers. That’s all right as far as it goes, but why limit ourselves? A particular representation can be easy to understand, but as the scary people in the philosophy department have been pointing out for 26 centuries now, a particular example of a thing and the thing are not identical.

And if we look at vectors as “things we can multiply by a number, then add another vector to”, then we see something grand. We see a commonality in many different kinds of things. We can do this multiply-and-add with those things that point somewhere. Call those coordinates. But we can also do this with matrices, grids of numbers or other stuff it’s convenient to have. We can also do this with ordinary old numbers. (Think about it.) We can do this with polynomials. We can do this with sets of linear equations. We can do this with functions, as long as they’re defined for compatible domains. We can even do this with differential equations. We can see a unity in things that seem, at first, to have nothing to do with one another.
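To make the multiply-and-add concrete, here is a small Python sketch (my own illustration, not anything from the post) treating polynomials as lists of coefficients. Scaling and adding coefficient lists, term by term, is all the structure we are asking for:

```python
# A polynomial as a coefficient list: [a0, a1, a2] means a0 + a1*x + a2*x^2.

def scale(c, p):
    """Multiply the polynomial p by the number c."""
    return [c * a for a in p]

def add(p, q):
    """Add two polynomials, padding the shorter one with zeros."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

p = [1, 0, 3]  # 1 + 3x^2
q = [0, 2]     # 2x
print(add(scale(2, p), q))  # 2*(1 + 3x^2) + 2x = 2 + 2x + 6x^2 -> [2, 2, 6]
```

The same two functions, unchanged, would serve for grids of numbers or for lists of anything else you can scale and add, which is the unity the paragraph above is pointing at.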

We call these collections of things “vector spaces”. It’s a space much like the one you happen to exist in. Adding two things in the space together is much like moving from one place to another, then moving again. You can’t get out of the space. Multiplying a thing in the space by a real number is like going in one direction a short or a long or whatever great distance you want. Again you can’t get out of the space. This is called “being closed”.

(I know, you may be wondering if it isn’t question-begging to say a vector is a thing in a vector space, which is made up of vectors. It isn’t. We define a vector space as a set of things that satisfy a certain group of rules. The things in that set are the vectors.)

Vector spaces are nice things. They work much like ordinary space does. We can bring many of the ideas we know from spatial awareness to vector spaces. For example, we can usually define a “length” of things. And something that works like the “angle” between things. We can define bases, breaking down a particular element into a combination of standard reference elements. This helps us solve problems, by finding ways they’re shadows of things we already know how to solve. And it doesn’t take much to satisfy the rules of being a vector space. I think mathematicians studying new groups of objects look instinctively for how we might organize them into a vector space.
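Here is what breaking a thing down against a basis looks like, in a Python sketch of my own (using an orthogonal but deliberately non-standard basis for the plane). Each component is just a dot product with a basis element, divided by that element’s dot product with itself:

```python
def dot(u, v):
    """Ordinary dot product of two tuples of numbers."""
    return sum(a * b for a, b in zip(u, v))

# An orthogonal basis for the plane, though not the usual east-and-north one.
b1 = (1.0, 1.0)
b2 = (1.0, -1.0)

v = (3.0, 5.0)

# The component of v in each basis "direction".
c1 = dot(v, b1) / dot(b1, b1)  # 8/2 = 4
c2 = dot(v, b2) / dot(b2, b2)  # -2/2 = -1
print(c1, c2)
# Check: 4*(1, 1) + (-1)*(1, -1) = (3, 5), the original vector.
```

The same recipe, with the integral standing in for the dot product, is how one pins down the function “directions” of the orthogonal essay above.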

We can organize them further. A vector space that satisfies some rules about sequences of terms, and that has a “norm” which is pretty much a size, becomes a Banach space. It works a little more like ordinary three-dimensional space. A Banach space that has a norm defined by a certain common method is a Hilbert space. These work even more like ordinary space, but they don’t need anything in common with it. For example, the functions that describe quantum mechanics are in a Hilbert space. There’s a thing called a Sobolev Space, a kind of vector space that also meets criteria I forget, but the name has stuck with me for decades because it is so wonderfully assonant.

I mentioned how vectors are stuff you can multiply by numbers, and add to other vectors. That’s true, but it’s a little limiting. The thing we multiply a vector by is called a scalar. And the scalar is so often a number — real or complex-valued — that it’s easy to think that’s the default. But it doesn’t have to be. The scalar just has to be an element of some field. A ‘field’ is a ring in which you can do addition, multiplication, and division by anything that isn’t zero. So numbers are the obvious choice. They’re not the only ones, though. The scalar has to be able to multiply with the vector, since otherwise the entire concept collapses into gibberish. But we wouldn’t go looking among the gibberish except to be funny anyway.

The idea of the ‘vector’ is straightforward and powerful. So we see it all over a wide swath of mathematics. It’s one of the things that shapes how we expect mathematics to look.

Reading the Comics, December 13, 2015: More Nearly Like It Edition


This has got me closer to the number of comics I like for a Reading the Comics post. There’s two comics already in my file, for the 14th of December, but those can wait until later in the week.

David L Hoyt and Jeff Knurek’s Jumble for the 11th of December has a mathematics topic. The quotes in the final answer are the hint that it’s a bit of wordplay; the mention of “subtraction” points to what kind.

Words: 'SOLPI', 'NALST', 'BAVEHE', 'CANYLU'. Circled letters, O O - - O, O - - O -, - O - O - O, O - O - - -. The puzzle: To teach subtraction the teacher had a '- - - - - -' - - - -.

David L Hoyt and Jeff Knurek’s Jumble for the 11th of December, 2015. The link will probably expire in mid-January 2016. Also somehow I’m writing about 2016 being in the imminent future.

Brian Kliban’s cartoon for the 11th of December (a rerun from who knows when) promises an Illegal Cube Den, and delivers. I’m just delighted by the silliness of it all.

Greg Evans’s Luann Againn for the 11th of December reprints the 1987 Luann. “Geometric principles of equitorial [sic] astronomical coordinate systems” gets mentioned as a math-or-physics-sounding complicated thing to do. The basic idea is to tell where things are in the sky, as we see them from the surface of the Earth. In an equatorial coordinate system we imagine — we project — where the plane of the equator is, and we can measure things as north or south of that plane. (North is on the same side that the Earth’s north pole is.) Distance from that celestial equator is functionally equivalent to latitude, although it’s called declination.

We also need something functionally equivalent to longitude; that’s called the right ascension. To define that, we need something that works like the prime meridian. Projecting the actual prime meridian out to the stars doesn’t work. The prime meridian is spinning around every 24 hours, and we can’t publish updated star charts that quickly. What we use as a reference meridian instead is spring. That is, it’s the point where the path of the sun in the sky crosses the celestial equator in March, at the start of the (northern hemisphere) spring.

There are catches and subtleties, which is why this makes for a good research project. The biggest one is that this crossing point changes over time. This is because the direction of the Earth’s axis slowly changes, the way a spinning top’s axis wobbles, and the equator’s plane moves along with it. So right ascensions of points change a little every year. So when we give coordinates, we have to say in which system, and which reference year. 2000 is a popular one these days, but its time will pass. 1950 and 1900 were popular in their generations. It’s boring but not hard to convert between these reference dates. And if you need this much precision, it’s not hard to convert between the reference year of 2000 and the present year. I understand many telescopes will do that automatically. I don’t know directly because I have little telescope experience, and I couldn’t even swear I had seen a meteor until 2013. In fairness, I grew up in New Jersey, so with the light pollution I was lucky to see the night sky at all.

Peter Maresca’s Origins of the Sunday Comics for the 11th of December showcases a strip from 1914. That, Clare Victor Dwiggins’s District School for the 12th of April, 1914, is just a bunch of silly vignettes. It’s worth zooming in to look at. It’s got a student going “figger juggling” and that gives me an excuse to point out the strip to anyone who’ll listen.

Samson’s Dark Side of the Horse for the 13th of December enters another counting-sheep joke into the ranks. Tying it into angles is cute. It’s tricky to estimate angles by sight. I think people tend to over-estimate how big an angle is when it’s around fifteen or twenty degrees. 45 degrees is easy enough to tell by sight. But for angles smaller than that, I tend to estimate angles by taking the number I think it is and cutting it in half, and I get closer to correct. I’m sure other people use a similar trick.

Brian Anderson’s Dog Eat Doug for the 13th of December has the dog, Sophie, deploy a lot of fraction talk to confuse a cookie out of Doug. A lot of new fields of mathematics are like that the first time you encounter them. I am curious where Sophie’s reasoning would have led, if not interrupted. How much cookie might she have cadged by the judicious splitting of halves and quarters and, perhaps, eighths and such? I’m not sure where her patter was going.

Shannon Wheeler’s Too Much Coffee Man for the 13th of December uses the traditional blackboard full of symbols to denote a lot of deeply considered thinking. Did you spot the error?

A Summer 2015 Mathematics A To Z: orthogonal


Orthogonal.

Orthogonal is another word for perpendicular. So why do we need another word for that?

It helps to think about why “perpendicular” is a useful way to organize things. For example, we can describe the directions to a place in terms of how far it is north-south and how far it is east-west, and talk about how fast it’s travelling in terms of its speed heading north or south and its speed heading east or west. We can separate the north-south motion from the east-west motion. If we’re lucky these motions separate entirely, and we turn a complicated two- or three-dimensional problem into two or three simpler problems. If they can’t be fully separated, they can often be largely separated. We turn a complicated problem into a set of simpler problems with a nice and easy part plus an annoying yet small hard part.

And this is why we like perpendicular directions. We can often turn a problem into several simpler ones describing each direction separately, or nearly so.

And now the amazing thing. We can separate these motions because the north-south and the east-west directions are at right angles to one another. But we can describe something that works like an angle between things that aren’t necessarily directions. For example, we can describe an angle between things like functions that have the same domain. And once we can describe the angle between two functions, we can describe functions that make right angles between each other.

This means we can describe functions as being perpendicular to one another. An example. On the domain of real numbers from -1 to 1, the function f(x) = x is perpendicular to the function g(x) = x^2 . And when we want to study a more complicated function we can separate the part that’s in the “direction” of f(x) from the part that’s in the “direction” of g(x). We can treat functions, even functions we don’t know, as if they were locations in space. And we can study and even solve for the different parts of the function as if we were pinning down the north-south and the east-west movements of a thing.

So if we want to study, say, how heat flows through a body, we can work out a series of “directions” for functions, and work out the flow in each of those “directions”. These don’t have anything to do with left-right or up-down directions, but the concepts and the convenience are similar.

I’ve spoken about this in terms of functions. But we can define the “angle” between things for many kinds of mathematical structures. Once we can do that, we can have “perpendicular” pairs of things. I’ve spoken only about functions, but that’s because functions are more familiar than many of the mathematical structures that have orthogonality.

Ah, but why call it “orthogonal” rather than “perpendicular”? And I don’t know. The best I can work out is that it feels weird to speak of, say, the cosine function being “perpendicular” to the sine function when you can’t really say either is in any particular direction. “Orthogonal” seems to appeal less directly to physical intuition while still meaning something. But that’s my guess, rather than the verdict of a skilled etymologist.
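That sine-and-cosine pairing is easy to check numerically, too. This Python sketch (my addition, using a crude midpoint-rule integral) shows sine and cosine coming out “perpendicular” over a full period, while sine is not perpendicular to itself:

```python
import math

def inner(f, g, a=0.0, b=2 * math.pi, n=10_000):
    """Midpoint-rule approximation of the integral of f(x)*g(x) over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h)
               for i in range(n)) * h

print(inner(math.sin, math.cos))  # ~0: sine and cosine are "perpendicular" here
print(inner(math.sin, math.sin))  # ~pi: sine is not perpendicular to itself
```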

Reading the Comics, December 14, 2014: Pictures Gone Again? Edition


I’ve got enough comics to do a mathematics-comics roundup post again, but none of them are the King Features or Creators or other miscellaneous sources that demand they be included here in pictures. I could wait a little over three hours and give the King Features Syndicate comics another chance to say anything on point, or I could shrug and go with what I’ve got. It’s a tough call. Ah, what the heck; besides, it’s been over a week since I did the last one of these.

Bill Amend’s FoxTrot (December 7) bids to get posted on mathematics teachers’ walls with a bit of play on two common uses of the term “degree”. It’s also natural to wonder why the same word “degree” should be used to represent the units of temperature and the size of an angle, to the point that they even use the same symbol of a tiny circle elevated from the baseline as a shorthand representation. As best I can make out, the use of the word degree traces back to Old French, and “degré”, meaning a step, as in a stair. In Middle English this got expanded to the notion of one of a hierarchy of steps, and if you consider the temperature of a thing, or the width of an angle, as something that can be grown or shrunk then … I’m left wondering if the Middle English folks who extended “degree” to temperatures and angles thought there were discrete steps by which either quantity could change.

As for the little degree symbol, Florian Cajori notes in A History Of Mathematical Notations that while the symbol (and the ‘ and ” for minutes and seconds) can be found in Ptolemy (!), in describing Babylonian sexagesimal fractions, this doesn’t directly lead to the modern symbols. Medieval manuscripts and early printed books would use abbreviations of Latin words describing what the numbers represented. Cajori rates as the first modern appearance of the degree symbol an appendix, composed by one Jacques Peletier, to the 1569 edition of the text Arithmeticae practicae methodus facilis by Gemma Frisius (you remember him; the guy who made triangulation into something that could be used for surveying territories). Peletier was describing astronomical fractions, and used the symbol to denote that the thing before it was a whole number. By 1571 Erasmus Reinhold (whom you remember from working out the “Prutenic Tables”, updated astronomical charts that helped convince people of the use of the Copernican model of the solar system and advance the cause of calendar reform) was using the little circle to represent degrees, and Tycho Brahe followed his example, and soon … well, it took a century or so of competing symbols, including “Grad” or “Gr” or “G” to represent degree, but the little circle eventually won out. (Assume the story is more complicated than this. It always is.)

Mark Litzer’s Joe Vanilla (December 7) uses a panel of calculus to suggest something particularly deep or intellectually challenging. As it happens, the problem isn’t quite defined well enough to solve, but if you make a reasonable assumption about what’s meant, then it becomes easy to say: this expression is “some infinitely large number”. Here’s why.

The numerator is the integral \int_{0}^{\infty} e^{\pi} + \sin^2\left(x\right) dx . You can think of the integral of a positive-valued expression as the area underneath that expression and between the lines marked by, on the left, x = 0 (the number on the bottom of the integral sign), and on the right, x = \infty (the number on the top of the integral sign). (You know that it’s x because the integral symbol ends with “dx”; if it ended “dy” then the integral would tell you the left and the right bounds for the variable y instead.) Now, e^{\pi} + \sin^2\left(x\right) is a number that depends on x, yes, but which is never smaller than e^{\pi} (about 23.14) nor bigger than e^{\pi} + 1 (about 24.14). So the area underneath this expression has to be at least as big as the area within a rectangle that’s got a bottom edge at y = 0, a top edge at y = 23, a left edge at x = 0, and a right edge at x infinitely far off to the right. That rectangle’s got an infinitely large area. The area underneath this expression has to be no smaller than that.
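You can watch this happen numerically. Here is a Python sketch of my own that estimates the area under e^π + sin²(x) out to farther and farther right edges; every estimate beats the 23-per-unit-of-width lower bound, so the area grows without limit:

```python
import math

def area(right_edge, steps=100_000):
    """Midpoint-rule area under e^pi + sin(x)^2 from x = 0 to x = right_edge."""
    h = right_edge / steps
    return sum(math.e ** math.pi + math.sin((i + 0.5) * h) ** 2
               for i in range(steps)) * h

for right_edge in (10, 100, 1000):
    print(right_edge, area(right_edge))
# Each area is at least 23 times the width, and at most about 24.14 times it.
```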

Just because the numerator’s infinitely large doesn’t mean that the fraction is, though. It’s imaginable that the denominator is also infinitely large, and more wondrously, is large in a way that makes the ratio some more familiar number like “3”. Spoiler: it isn’t.

Actually, as it is, the denominator isn’t quite much of anything. It’s a summation; that’s what the capital sigma designates there. By convention, the summation symbol means to evaluate whatever expression there is to the right of it — in this case, it’s x^{\frac{1}{e}} + \cos\left(x\right) — for each of a series of values of some index variable. That variable is normally identified underneath the sigma, with a line such as x = 1, and (again by convention) for x = 2, x = 3, x = 4, and so on, until x equals whatever the number on top of the sigma is. In this case, the bottom doesn’t actually say what the index should be, although since “x” is the only thing that makes sense as a variable within the expression — “cos” means the cosine function, and “e” means the number that’s about 2.71828 unless it’s otherwise made explicit — we can suppose that this is a normal bit of shorthand like you use when context is clear.

With that assumption about what’s meant, then, we know the denominator is whatever number is represented by \left(1^{\frac{1}{e}} + \cos\left(1\right)\right) + \left(2^{\frac{1}{e}} + \cos\left(2\right)\right) + \left(3^{\frac{1}{e}} + \cos\left(3\right)\right) + \left(4^{\frac{1}{e}} + \cos\left(4\right)\right) +  \cdots + \left(10^{\frac{1}{e}} + \cos\left(10\right)\right) (and 1/e is about 0.368). That’s a number about 16.549, which falls short of being infinitely large by an infinitely large amount.
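That 16.549 is easy to verify. In Python (my addition, reading the ambiguous index as x running from 1 through 10, as above):

```python
import math

# The denominator: sum of x^(1/e) + cos(x) for x = 1, 2, ..., 10.
total = sum(x ** (1 / math.e) + math.cos(x) for x in range(1, 11))
print(total)  # about 16.55
```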

So, the original fraction shown represents an infinitely large number.

Mark Tatulli’s Lio (December 7) is another “anthropomorphic numbers” genre comic, and since it’s Lio the numbers naturally act a bit mischievously.

Greg Evans’s Luann Againn (December 7, I suppose technically a rerun) only has a bit of mathematical content, as it’s really playing more on short- and long-term memories. Normal people, it seems, have a buffer of something around eight numbers that they can remember without losing track of them, and it’s surprisingly easy to overload that. I recall reading, I think in Joseph T Hallinan’s Why We Make Mistakes: How We Look Without Seeing, Forget Things in Seconds, and Are All Pretty Sure We Are Way Above Average, and don’t think I’m not aware of how funny it would be if I were getting this source wrong, that it’s possible to cheat a little bit on the size of one’s number-buffer.

Hallinan (?) gave the example of a runner who was able to remember strings of dozens of numbers, well past the norm, but apparently by the trick of parsing numbers into plausible running times. That is, the person would remember “834126120820” perfectly because it could be expressed as four numbers, “8:34, 1:26, 1:20, 8:20”, that might be credible running times for something or other and the runner was used to remembering such times. Supporting the idea that this trick was based on turning a lot of digits into a few small numbers was that the runner would be lost if the digits could not be parsed into a meaningful time, like, “489162693077”. So, in short, people are really weird in how they remember and don’t remember things.

Harley Schwadron’s 9 to 5 (December 8) is a “reluctant student” strip whose kid, in the tradition of kids in comic strips, tosses out the word “app” in the hopes of upgrading the complaint into a joke. I’m sympathetic to the kid not wanting to do long division. In arithmetic the way I was taught it, this was the first kind of problem where you pretty much had to approximate and make a guess what the answer might be and improve your guess from that starting point, and that’s a terrifying thing when, up to that point, arithmetic has been a series of predictable, discrete, universally applicable rules not requiring you to make a guess. It feels wasteful of effort to work out, say, what seven times your divisor is when it turns out it’ll go into the dividend eight times. I am glad that teaching approaches to arithmetic seem to be turning towards “make approximate or estimated answers, and try to improve those” as a general rule, since often taking your best guess and then improving it is the best way to get a good answer, not just in long division, and the less terrifying that move is, the better.
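That guess-and-improve rhythm is easy to write down as a loop. Here is a toy Python sketch (mine, not anything from the strip): for each digit of the dividend, guess a quotient digit, then step the guess down or up until it fits.

```python
def long_divide(dividend, divisor):
    """Digit-by-digit long division: guess each quotient digit,
    then improve the guess until it fits."""
    quotient = 0
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)
        guess = 5  # start in the middle and adjust
        while guess * divisor > remainder:        # too big? step down
            guess -= 1
        while (guess + 1) * divisor <= remainder:  # too small? step up
            guess += 1
        quotient = quotient * 10 + guess
        remainder -= guess * divisor
    return quotient, remainder

print(long_divide(7489, 23))  # (325, 14), since 23 * 325 + 14 = 7489
```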

Justin Boyd’s Invisible Bread (December 12) reveals the joy and the potential menace of charts and graphs. It’s a reassuring red dot at the end of this graph of relevant-graph-probabilities.

Several comics chose to mention the coincidence of the 13th of December being (in the United States standard for shorthand dating) 12-13-14. Chip Sansom’s The Born Loser does the joke about how yes, this sequence won’t recur in (most of our) lives, but neither will any other. Stuart Carlson and Jerry Resler’s Gray Matters takes a little imprecision in calling it “the last date this century to have a consecutive pattern”, something the Grays, if the strip is still running, will realize on 1/2/34 at the latest. And Francesco Marciuliano’s Medium Large uses the neat pattern of the dates as a dip into numerology and the kinds of manias that staring too closely into neat patterns can encourage.
