Tagged: algebra

  • Joseph Nebus 6:00 pm on Thursday, 23 February, 2017 Permalink | Reply
    Tags: algebra, compulsions, Over The Hedge, Super-Fun-Pak Comix, Wide Open

    Reading the Comics, February 15, 2017: SMBC Does Not Cut In Line Edition 

    On reflection, that Saturday Morning Breakfast Cereal I was thinking about was not mathematically-inclined enough to be worth including here. Helping make my mind up on that was that I had enough other comic strips to discuss here that I didn’t need to pad my essay. Yes, on a slow week I let even more marginal stuff in. Here’s the comic I don’t figure to talk about. Enjoy!

    Jack Pullan’s Boomerangs rerun for the 16th is another strip built around the “algebra is useless in real life” notion. I’m too busy noticing Mom in the first panel saying “what are you doing play [sic] video games?” to respond.

    Ruben Bolling’s Super-Fun-Pak Comix excerpt for the 16th is marginal, yeah, but fun. Numeric coincidence and numerology can sneak into compulsions with terrible ease. I can easily believe the need to make the number of steps divisible by some favored number.

    Rich Powell’s Wide Open for the 16th is a caveman science joke, and it does rely on a chalkboard full of algebra for flavor. The symbols come tantalizingly close to meaningful. The amount of kinetic energy, K or KE, of a particle of mass m moving at speed v is indeed K = \frac{1}{2} m v^2 . Both 16 and 32 turn up often in the physics of falling bodies, at least if we’re using feet to measure. a = -\frac{k}{m} x turns up in physics too. It comes from the acceleration of a mass on a spring. But an equation of the same shape turns up whenever you describe things that go through tiny wobbles around the normal value. So the blackboard is gibberish, but it’s a higher grade of gibberish than usual.
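Since the blackboard comes tantalizingly close to meaning something, here is a little Python sketch of the two real formulas, kinetic energy and the spring acceleration. (The function names and the sample numbers are my own inventions.)

```python
def kinetic_energy(m, v):
    """Kinetic energy K = (1/2) m v^2 of mass m moving at speed v."""
    return 0.5 * m * v ** 2

def spring_acceleration(k, m, x):
    """Acceleration a = -(k/m) x of mass m on a spring of stiffness k,
    displaced x from equilibrium."""
    return -(k / m) * x

# A 2 kg mass at 3 m/s carries 9 joules of kinetic energy.
print(kinetic_energy(2.0, 3.0))            # 9.0
# The restoring acceleration points back toward equilibrium.
print(spring_acceleration(8.0, 2.0, 0.5))  # -2.0
```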

    Rick Detorie’s One Big Happy rerun for the 17th is a resisting-the-word-problem joke, made fresher by setting it in little Ruthie’s playing at school.

    T Lewis and Michael Fry’s Over The Hedge for the 18th mentions the three-body problem. As Verne the turtle says, it’s a problem from physics. The way two objects — sun and planet, planet and moon, pair of planets, whatever — orbit each other if they’re the only things in the universe is easy. You can describe it all perfectly and without using more than freshman physics majors know. Introduce a third body, though, and we don’t know anymore. Chaos can happen.

    Emphasis on can. There’s no good way to solve the “general” three-body problem, the one where the star and planets can have any sizes and any starting positions and any starting speeds. We can do well for special cases, though. If you have a sun, a planet, and a satellite — the satellite’s mass negligible compared to the other two — we can predict orbits perfectly well. If the bodies have to stay in one plane of motion, instead of moving in three-dimensional space, we can do pretty well. If we know two of the bodies orbit each other tightly and the third is way off in the middle of nowhere we can do pretty well.
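There’s no nice formula for the general case, but we can always integrate numerically and watch what happens. Here’s a toy sketch, with made-up units and masses, using a velocity-Verlet step; the check at the end is that the total energy stays put, which is how you know the integrator isn’t lying to you.

```python
import math

G = 1.0  # gravitational constant in toy units; everything here is illustrative

def accelerations(pos, mass):
    """Newtonian gravity each body feels from the others (2D)."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * mass[j] * dx / r3
            acc[i][1] += G * mass[j] * dy / r3
    return acc

def verlet_step(pos, vel, mass, dt):
    """Advance one velocity-Verlet step; returns new positions, velocities."""
    a0 = accelerations(pos, mass)
    pos = [[p[0] + v[0]*dt + 0.5*a[0]*dt*dt,
            p[1] + v[1]*dt + 0.5*a[1]*dt*dt] for p, v, a in zip(pos, vel, a0)]
    a1 = accelerations(pos, mass)
    vel = [[v[0] + 0.5*(a[0] + b[0])*dt,
            v[1] + 0.5*(a[1] + b[1])*dt] for v, a, b in zip(vel, a0, a1)]
    return pos, vel

def total_energy(pos, vel, mass):
    """Kinetic plus gravitational potential energy."""
    ke = sum(0.5*m*(v[0]**2 + v[1]**2) for m, v in zip(mass, vel))
    pe = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = math.hypot(pos[j][0]-pos[i][0], pos[j][1]-pos[i][1])
            pe -= G * mass[i] * mass[j] / r
    return ke + pe

# Three equal masses dropped from rest at the corners of a triangle.
mass = [1.0, 1.0, 1.0]
pos = [[1.0, 0.0], [-0.5, math.sqrt(3)/2], [-0.5, -math.sqrt(3)/2]]
vel = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
e0 = total_energy(pos, vel, mass)
for _ in range(200):
    pos, vel = verlet_step(pos, vel, mass, 0.001)
print(abs(total_energy(pos, vel, mass) - e0))  # tiny: energy is conserved
```

Chaos shows up when you run two simulations from nearly identical starting conditions and watch how quickly they stop resembling each other.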

    But there are still so many interesting cases for which we just can’t be sure chaos will not break out. Three interacting bodies just offer so much more chance for things to happen. (To mention something surely coincidental, it does seem to be a lot easier to write good comedy, or drama, with three important characters rather than two. Any pair of characters can gang up on the third, after all. I notice how much more energetic Over The Hedge became when Hammy the Squirrel joined RJ and Verne as the core cast.)

    Dave Whamond’s Reality Check for the 18th is your basic mathematics-illiteracy joke, done well enough.

  • Joseph Nebus 6:00 pm on Sunday, 5 February, 2017 Permalink | Reply
    Tags: algebra, Pajama Diaries

    Reading the Comics, February 2, 2017: I Haven’t Got A Jumble Replacement Source Yet 

    If there was one major theme for this week it was my confidence that there must be another source of Jumble strips out there. I haven’t found it, but I admit not making it a priority either. The official Jumble site says I can play if I activate Flash, but I don’t have enough days in the year to keep up with Flash updates. And that wouldn’t help me post mathematics-relevant puzzles here anyway.

    Mark Anderson’s Andertoons for January 29th satisfies my Andertoons need for this week. And it name-drops the one bit of geometry everyone remembers. To be dour and humorless about it, though, I don’t think one could likely apply the Pythagorean Theorem. Typically the horizontal axis and the vertical axis in a graph like this measure different things. Squaring the different kinds of quantities and adding them together wouldn’t mean anything intelligible. What would even be the square root of (say) a squared-dollars-plus-squared-weeks? This is something one learns from dimensional analysis, a corner of mathematics I’ve thought about writing about some. I admit this particular insight isn’t deep, but everything starts somewhere.
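The dimensional-analysis point can be made concrete with a toy sketch. This little class (my own invention, nothing standard) refuses to add quantities measured in different units, which is exactly why squared-dollars-plus-squared-weeks means nothing:

```python
class Quantity:
    """A number that remembers what it measures."""
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        # Adding dollars to weeks is not an arithmetic problem;
        # it's a category error, so we raise one.
        if self.unit != other.unit:
            raise TypeError(f"can't add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

dollars = Quantity(125.0, "dollars")
weeks = Quantity(6.0, "weeks")
try:
    dollars + weeks
except TypeError as err:
    print(err)  # can't add dollars to weeks
```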

    Norm Feuti’s Gil rerun for the 30th is a geometry name-drop, listing it as the sort of category Jeopardy! features. Gil shouldn’t quit so soon. The responses for the category are “What is the Pythagorean Theorem?”, “What is acute?”, “What is parallel?”, “What is 180 degrees?” (or, possibly, 360 or 90 degrees), and “What is a pentagon?”.

    Parents' Glossary Of Terms: 'Mortifraction': That utter shame when you realize you can no longer do math in your head. Parent having trouble making change at a volunteer event.

    Terri Libenson’s Pajama Diaries for the 1st of February, 2017. You know even for a fundraising event $17.50 seems a bit much for a hot dog and bottled water. Maybe the friend’s 8-year-old child is way off too.

    Terri Libenson’s Pajama Diaries for the 1st of February shows off the other major theme of this past week, which was busy enough that I have to again split the comics post into two pieces. That theme is people getting basic mathematics wrong. Mostly counting. (You’ll see.) I know there’s no controlling what people feel embarrassed about. But I think it’s unfair to conclude you “can no longer” do mathematics in your head because you’re not able to make change right away. It’s normal to be slow or unreliable about something you don’t do often. Inexperience and inability are not the same thing, and it’s unfair to people to conflate them.

    Gordon Bess’s Redeye for the 21st of September, 1970, got rerun the 1st of February. And it’s another in the theme of people getting basic mathematics wrong. And even more basic mathematics this time. There are more problems-with-counting comics coming when I finish the comics from the past week.

    'That was his sixth shot!' 'Good! OK, Paleface! You've had it now!' (BLAM) 'I could never get that straight, does six come after four or after five?'

    Gordon Bess’s Redeye for the 21st of September, 1970. Rerun the 1st of February, 2017. I don’t see why they’re so worried about counting bullets if being shot just leaves you a little discombobulated.

    Dave Whamond’s Reality Check for the 1st hopes that you won’t notice the label on the door is painted backwards. Just saying. It’s an easy joke to make about algebra, also, that it puts letters into perfectly good mathematics. Letters are used for good reasons, though. We’ve always wanted to work out the value of numbers we only know descriptions of. But it’s way too wordy to use the whole description of the number every time we might speak of it. Before we started using letters we could use placeholder names like “re”, meaning “thing” (as in “thing we want to calculate”). That works fine, although it crashes horribly when we want to track two or three things at once. It’s hard to find words that are decently noncommittal about their values but that we aren’t going to confuse with each other.

    So the alphabet works great for this. An individual letter doesn’t suggest any particular number, as long as we pretend ‘O’ and ‘I’ and ‘l’ don’t look like they do. But we also haven’t got any problem telling ‘x’ from ‘y’ unless our handwriting is bad. They’re quick to write and to say aloud, and they don’t require learning to write any new symbols.

    Later, yes, letters do start picking up connotations. And sometimes we need more letters than the Roman alphabet allows. So we import from the Greek alphabet the letters that look different from their Roman analogues. That’s a bit exotic. But at least in a Western-European-based culture they aren’t completely novel. Mathematicians aren’t really trying to make this hard because, after all, they’re the ones who have to deal with the hard parts.

    Bud Fisher’s Mutt and Jeff rerun for the 2nd is another of the basic-mathematics-wrong jokes. But it does get there by throwing out a baffling set of story-problem-starter points. Particularly interesting to me is Jeff’s protest in the first panel that they couldn’t have been doing 60 miles an hour as they hadn’t been out an hour. It’s the sort of protest easy to use as introduction to the ideas of average speed and instantaneous speed and, from that, derivatives.
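Jeff’s protest can be sketched in a few lines. Here’s a made-up trip where the car accelerates from a standstill; the average speed over an interval is one thing, and the instantaneous speed is what you get by shrinking the interval, which is to say, the derivative:

```python
def position(t):
    """A made-up trip: miles travelled after t hours, accelerating
    from a standstill."""
    return 30 * t ** 2

def average_speed(t0, t1):
    """Distance covered divided by time elapsed: the average speed."""
    return (position(t1) - position(t0)) / (t1 - t0)

# Over the first half hour the average speed is only 15 mph ...
print(average_speed(0.0, 0.5))  # 15.0
# ... but shrink the interval around t = 0.5 and the averages settle
# on the instantaneous speed there: the derivative 60t gives 30 mph.
print(average_speed(0.5 - 1e-6, 0.5 + 1e-6))  # about 30.0
```

So you can be doing 60 miles an hour without having been out an hour, or even a minute.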

  • Joseph Nebus 6:00 pm on Sunday, 18 December, 2016 Permalink | Reply
    Tags: algebra, dinosaurs

    Reading the Comics, December 17, 2016: Sleepy Week Edition 

    Comic Strip Master Command sent me a slow week in mathematical comics. I suppose they knew I was somehow on a busier schedule than usual and couldn’t spend all the time I wanted just writing. I appreciate that but don’t want to see another of those weeks when nothing qualifies. Just a warning there.

    'Dadburnit! I ain't never gonna git geometry!' 'Bah! Don't fret, Jughaid --- I never understood it neither! But I still manage to work all th' angles!'

    John Rose’s Barney Google and Snuffy Smith for the 12th of December, 2016. I appreciate the desire to pay attention to continuity that makes Rose draw in the coffee cup both panels, but Snuffy Smith has to swap it from one hand to the other to keep it in view there. Not implausible, just kind of busy. Also I can’t fault Jughaid for looking at two pages full of unillustrated text and feeling lost. That’s some Bourbaki-grade geometry going on there.

    John Rose’s Barney Google and Snuffy Smith for the 12th is a bit of mathematical wordplay. It does use geometry as the “hard mathematics we don’t know how to do”. That’s a change from the usual algebra. And that’s odd considering the joke depends on an idiom that is actually used by real people.

    Patrick Roberts’s Todd the Dinosaur for the 12th uses mathematics as the classic impossibly hard subject a seven-year-old can’t be expected to understand. The worry about fractions seems age-appropriate. I don’t know whether it’s fashionable to give elementary school students experience thinking of ‘x’ and ‘y’ as numbers. I remember that as a time when we’d get a square or circle and try to figure what number fits in the gap. It wasn’t a 0 or a square often enough.

    'Teacher! Todd just passed out! But he's waring one of those medic alert bracelets! ... Do not expose the wearer of this bracelet to anything mathematical, especially x's and y's, fractions, or anything that he should remember for a test!' 'Amazing how much writing they were able to fit on a little ol' T-Rex wrist!'

    Patrick Roberts’s Todd the Dinosaur for the 12th of December, 2016. Granting that Todd’s a kid dinosaur and that T-Rexes are not renowned for the hugeness of their arms, wouldn’t that still be enough space for a lot of text to fit around? I would have thought so anyway. I feel like I’m pluralizing ‘T-Rex’ wrong, but what would possibly be right? ‘Ts-rex’? Don’t make me try to spell tyrannosaurus.

    Jef Mallett’s Frazz for the 12th uses one of those great questions I think every child has. And it uses it to question how we can learn things from statistical study. This is circling around the “Bayesian” interpretation of probability, of what odds mean. It’s a big idea and I’m not sure I’m competent to explain it. It amounts to asking what explanations would be plausibly consistent with observations. As we get more data we may be able to rule some cases in or out. It can be unsettling. It demands we accept right up front that we may be wrong. But it lets us find reasonably clean conclusions out of the confusing and muddy world of actual data.
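The core of the Bayesian idea fits in a few lines of Python. A sketch with made-up numbers: three candidate explanations (coins of different biases), and each observation nudging our belief among them.

```python
def update(priors, likelihoods):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    raw = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(raw)
    return [r / total for r in raw]

biases = [0.25, 0.50, 0.75]   # the candidate explanations of the coin
beliefs = [1/3, 1/3, 1/3]     # start out open-minded
for flip in "HHTHH":          # made-up observations
    likelihoods = [b if flip == "H" else 1 - b for b in biases]
    beliefs = update(beliefs, likelihoods)
print(beliefs)  # most of the weight has moved to the 0.75 coin
```

No explanation is ruled in for certain; the data just makes some explanations less plausible than others, which is the unsettling part and the useful part at once.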

    Sam Hepburn’s Questionable Quotebook for the 14th illustrates an old observation about the hypnotic power of decimal points. I think Hepburn’s gone overboard in this, though: six digits past the decimal in this percentage is too many. It draws attention to the fakeness of the number. One, two, maybe three digits past the decimal would have a more authentic ring to them. I had thought the John Allen Paulos tweet above was about this comic, but it’s mere coincidence. Funny how that happens.

  • Joseph Nebus 6:00 pm on Tuesday, 13 December, 2016 Permalink | Reply
    Tags: algebra, MacArthur Genius Grants

    Reading the Comics, December 10, 2016: E = mc^2 Edition 

    And now I can finish off last week’s mathematically-themed comic strips. There’s a strong theme to them, for a refreshing change. It would almost be what we’d call a Comics Synchronicity, on Usenet group rec.arts.comics.strips, had they all appeared the same day. Some folks claiming to be open-minded would allow a Synchronicity for strips appearing on subsequent days or close enough in publication, but I won’t have any of that unless it suits my needs at the time.

    Ernie Bushmiller’s for the 6th would fit thematically better as a Cameo Edition comic. It mentions arithmetic but only because it’s the sort of thing a student might need a cheat sheet on. I can’t fault Sluggo needing help on adding eight or multiplying by six; they’re hard. Not remembering 4 x 2 is unusual. But everybody has their own hangups. The strip originally ran the 6th of December, 1949.

    People contorted to look like a 4, a 2, and a 7 bounce past Dethany's desk. She ponders: 'Performance review time ... when the company reduces people to numbers.' Wendy, previous star of the strip, tells Dethany 'You're next.' Wendy's hair is curled into an 8.

    Bill Holbrook’s On The Fastrack for the 7th of December, 2016. Don’t worry about the people in the first three panels; they’re just temps, and weren’t going to appear in the comic again.

    Bill Holbrook’s On The Fastrack for the 7th seems like it should be the anthropomorphic numerals joke for this essay. It doesn’t seem to quite fit the definition, but, what the heck.

    Brian Boychuk and Ron Boychuk’s The Chuckle Brothers on the 7th starts off the run of E = mc^2 jokes for this essay. This one reminds me of Gary Larson’s Far Side classic with the cleaning woman giving Einstein just that little last bit of inspiration about squaring things away. It shouldn’t surprise anyone that E equalling m times c squared isn’t a matter of what makes an attractive-looking formula. There’s good reasons when one thinks what energy and mass are to realize they’re connected like that. Einstein’s famous, deservedly, for recognizing that link and making it clear.

    Mark Pett’s Lucky Cow rerun for the 7th has Claire try to use Einstein’s famous quote to look like a genius. The mathematical content is accidental. It could be anything profound yet easy to express, and it’s hard to beat the economy of “E = mc^2” for both. I’d agree that it suggests Claire doesn’t know statistics well to suppose she could get a MacArthur “Genius” Grant by being overheard by a grant nominator. On the other hand, does anybody have a better idea how to get their attention?

    Harley Schwadron’s 9 to 5 for the 8th completes the “E = mc^2” triptych. Calling a tie with the equation on it a power tie elevates the gag for me. I don’t think of “E = mc^2” as something that uses powers, even though it literally does. I suppose what gets me is that “c” is a constant number. It’s the speed of light in a vacuum. So “c^2” is also a constant number. In form the equation isn’t different from “E = m times seven”, and nobody thinks of seven as a power.
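Spelled out in code the structural point is plain: c squared is just one big fixed number. (The one-gram example is my own.)

```python
C = 299_792_458.0      # speed of light in a vacuum, metres per second
C_SQUARED = C ** 2     # also just a constant, if a large one

def rest_energy(mass_kg):
    """E = m c^2, in joules: mass times a fixed number."""
    return mass_kg * C_SQUARED

# One gram of mass corresponds to about 9.0e13 joules.
print(rest_energy(0.001))
```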

    Morrie Turner’s Wee Pals rerun for the 8th is a bit of mathematics wordplay. It’s also got that weird Morrie Turner thing going on where it feels unquestionably earnest and well-intentioned but prejudiced in that way smart 60s comedies would be.

    Sarge demands to know who left this algebra book on his desk; Zero says not him. Sarge ignores him and asks 'Who's been figuring all over my desk pad?' Zero also unnecessarily denies it. 'Come on, whose is it?!' Zero reflects, 'Gee, he *never* picks on *me*!'

    Mort Walker’s vintage Beetle Bailey for the 18th of May, 1960. Rerun the 9th of December, 2016. For me the really fascinating thing about ancient Beetle Bailey strips is that they could run today with almost no changes and yet they feel like they’re from almost a different cartoon universe from the contemporary comic. I don’t know how that is, or why it is.

    Mort Walker’s Beetle Bailey for the 18th of May, 1960 was reprinted on the 9th. It mentions mathematics — algebra specifically — as the sort of thing intelligent people do. I’m going to take a leap and suppose it’s the sort of algebra done in high school about finding values of ‘x’ rather than the mathematics-major sort of algebra, done with groups and rings and fields. I wonder when holding a mop became the signifier of not just low intelligence but low ambition. It’s subverted in Jef Mallett’s Frazz, the title character of which works as a janitor to support his exercise and music habits. But it is a standard prop to signal something.

  • Joseph Nebus 6:00 pm on Wednesday, 30 November, 2016 Permalink | Reply
    Tags: algebra, Monster Group

    The End 2016 Mathematics A To Z: Monster Group 

    Today’s is one of my requested mathematics terms. This one comes to us from group theory, by way of Gaurish, and as ever I’m thankful for the prompt.

    Monster Group.

    It’s hard to learn from an example. Examples are great, and I wouldn’t try teaching anything subtle without one. Might not even try teaching the obvious without one. But a single example is dangerous. The learner has trouble telling what parts of the example are the general lesson to learn and what parts are just things that happen to be true for that case. Having several examples, of different kinds of things, saves the student. The thing in common to many different examples is the thing to retain.

    The mathematics major learns group theory in Introduction To Not That Kind Of Algebra, MAT 351. A group extracts the barest essence of arithmetic: a bunch of things and the ability to add them together. So what’s an example? … Well, the integers do nicely. What’s another example? … Well, the integers modulo two, where the only things are 0 and 1 and we know 1 + 1 equals 0. What’s another example? … The integers modulo three, where the only things are 0 and 1 and 2 and we know 1 + 2 equals 0. How about another? … The integers modulo four? Modulo five?

    All true. All, also, basically the same thing. The whole set of integers, or of real numbers, is different. But as finite groups, the integers modulo anything are nice, easy-to-understand groups. They’re known as Cyclic Groups for reasons I’ll explain if asked. But all the Cyclic Groups are kind of the same.
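The Cyclic Groups are easy to play with in code. A minimal sketch of the integers modulo three, matching the examples above:

```python
def cyclic_group(n):
    """The addition table for the integers modulo n."""
    return {(a, b): (a + b) % n for a in range(n) for b in range(n)}

z3 = cyclic_group(3)
print(z3[(1, 2)])  # 0 -- just as in the essay

# "Cyclic" because one element generates everything: keep adding 1
# and you cycle through the whole group.
element, seen = 0, []
for _ in range(3):
    seen.append(element)
    element = (element + 1) % 3
print(seen)  # [0, 1, 2]
```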

    So how about another example? And here we get some good ones. There’s the Permutation Groups. These are fun. You start off with a set of things. You can label them anything you like, but you’re daft if you don’t label them the counting numbers. So, say, the set of things 1, 2, 3, 4, 5. Start with them in that order. A permutation is the swapping of any pair of those things. So swapping, say, the second and fifth things to get the list 1, 5, 3, 4, 2. The collection of all the swaps you can make is the Permutation Group on this set of things. The things in the group are not 1, 2, 3, 4, 5. The things in the permutation group are “swap the second and fifth thing” or “swap the third and first thing” or “swap the fourth and the third thing”. You maybe feel uneasy about this. That’s all right. I suggest playing with this until you feel comfortable because it is a lot of fun to play with. Playing in this case means writing out all the ways you can swap stuff, which you can always do as a string of swaps of exactly two things.
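That playing is easy to do in Python too. A sketch, using the essay’s own example of swapping the second and fifth things:

```python
def swap(seq, i, j):
    """Return a copy of seq with the i-th and j-th things exchanged
    (counting from 1, as in the essay)."""
    out = list(seq)
    out[i - 1], out[j - 1] = out[j - 1], out[i - 1]
    return out

things = [1, 2, 3, 4, 5]
print(swap(things, 2, 5))               # [1, 5, 3, 4, 2], as in the text
# Composing two swaps, one after the other, gives another permutation:
print(swap(swap(things, 2, 5), 4, 2))   # [1, 4, 3, 5, 2]
```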

    (Some people may remember an episode of Futurama that involved a brain-swapping machine. Or a body-swapping machine, if you prefer. The gimmick of the episode is that two people could only swap bodies/brains exactly one time. The problem was how to get everybody back in their correct bodies. It turns out to be possible to do, and one of the show’s writers did write a proof of it. It’s shown on-screen for a moment. Many fans were awestruck by an episode of the show inspiring a Mathematical Theorem. They’re overestimating how rare theorems are. But it is fun when real mathematics gets done as a side effect of telling a good joke. Anyway, the theorem fits well in group theory and the study of these permutation groups.)

    So the student wanting examples of groups can get the Permutation Group on three elements. Or the Permutation Group on four elements. The Permutation Group on five elements. … You kind of see, this is certainly different from those Cyclic Groups. But they’re all kind of like each other.

    An “Alternating Group” is one where every element in it is an even number of swaps. So, “swap the second and fifth things” would not be in an alternating group. But “swap the second and fifth things, and swap the fourth and second things” would be. And so the student needing examples can look at the Alternating Group on two elements. Or the Alternating Group on three elements. The Alternating Group on four elements. And so on. It’s slightly different from the Permutation Group. It’s certainly different from the Cyclic Group. But still, if you’ve mastered the Alternating Group on five elements you aren’t going to see the Alternating Group on six elements as all that different.
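Whether a permutation is even is easy to compute: count how many swaps of two things it takes to sort the arrangement back. A sketch (the sorting trick here is one standard way of counting, not the only one):

```python
from itertools import permutations

def parity(perm):
    """0 if perm is an even permutation of 0..n-1, 1 if odd."""
    perm = list(perm)
    swaps = 0
    for i in range(len(perm)):
        while perm[i] != i:            # sort by swapping pairs, and count
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            swaps += 1
    return swaps % 2

# The even permutations of three things form the Alternating Group on
# three elements: 3 of the 6 permutations.
evens = [p for p in permutations(range(3)) if parity(p) == 0]
print(evens)
```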

    Cyclic Groups and Alternating Groups have some stuff in common. Permutation Groups not so much and I’m going to leave them in the above paragraph, waving, since they got me to the Alternating Groups I wanted.

    One is that they’re finite. At least they can be. I like finite groups. I imagine students like them too. It’s nice having a mathematical thing you can write out in full and know you aren’t missing anything.

    The second thing is that they are, or they can be, “simple groups”. That’s … a challenge to explain. This has to do with the structure of the group and the kinds of subgroup you can extract from it. It’s very very loosely and figuratively and do not try to pass this off at your thesis defense kind of like being a prime number. In fact, Cyclic Groups for a prime number of elements are simple groups. So are Alternating Groups on five or more elements.

    So we get to wondering: what are the finite simple groups? Turns out they come in four main families. One family is the Cyclic Groups for a prime number of things. One family is the Alternating Groups on five or more things. One family is this collection called the Chevalley Groups. Those are mostly things about projections: the ways to map one set of coordinates into another. We don’t talk about them much in Introduction To Not That Kind Of Algebra. They’re too deep into Geometry for people learning Algebra. The last family is this collection called the Twisted Chevalley Groups, or the Steinberg Groups. And they … uhm. Well, I never got far enough into Geometry I’m Guessing to understand what they’re for. I’m certain they’re quite useful to people working in the field of order-three automorphisms of the whatever exactly D4 is.

    And that’s it. That’s all the families there are. If it’s a finite simple group then it’s one of these. … Unless it isn’t.

    Because there are a couple of stragglers. There are a few finite simple groups that don’t fit in any of the four big families. And it really is only a few. I would have expected an infinite number of weird little cases that don’t belong to a family that looks similar. Instead, there are 26. (27 if you decide a particular one of the Steinberg Groups doesn’t really belong in that family. I’m not familiar enough with the case to have an opinion.) Funny number to have turn up. It took ten thousand pages to prove there were just the 26 special cases. I haven’t read them all. (I haven’t read any of the pages. But my Algebra professors at Rutgers were proud to mention their department’s work in tracking down all these cases.)

    Some of these cases have some resemblance to one another. But not enough to see them as a family the way the Cyclic Groups are. We bundle all these together in a wastebasket taxon called “the sporadic groups”. The first five of them were worked out in the 1860s. The last of them was worked out in 1980, seven years after its existence was first suspected.

    The sporadic groups all have weird sizes. The smallest one, known as M11 (for “Mathieu”, who found it and four of its siblings in the 1860s) has 7,920 things in it. They get enormous soon after that.

    The biggest of the sporadic groups, and the last one described, is the Monster Group. It’s known as M. It has a lot of things in it. In particular it’s got 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 things in it. So, you know, it’s not like we’ve written out everything that’s in it. We’ve just got descriptions of how you would write out everything in it, if you wanted to try. And you can get a good argument going about what it means for a mathematical object to “exist”, or to be “created”. There are something like 10^54 things in it. That’s something like a trillion times a trillion times the number of stars in the observable universe. Not just the stars in our galaxy, but all the stars in all the galaxies we could in principle ever see.
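That number isn’t itself mysterious; it’s known exactly, as a product of prime powers. Here’s a check, multiplying out the published factorization (arranging it in code is my own doing):

```python
# The standard prime factorization of the order of the Monster Group.
MONSTER_FACTORS = {
    2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
    17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1, 47: 1, 59: 1, 71: 1,
}

order = 1
for prime, exponent in MONSTER_FACTORS.items():
    order *= prime ** exponent

print(order)             # 808017424794512875886459904961710757005754368000000000
print(len(str(order)))   # 54 digits -- hence "something like 10^54"
```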

    It’s one of the rare things for which “Brobdingnagian” is an understatement. Everything about it is mind-boggling, the sort of thing that staggers the imagination more than infinitely large things do. We don’t really think of infinitely large things; we just picture “something big”. A number like that one above is definite, and awesomely big. Just read off the digits of that number; it sounds like what we imagine infinity ought to be.

    We can make a chart, called the “character table”, which describes how subsets of the group interact with one another. The character table for the Monster Group is 194 rows tall and 194 columns wide. The Monster Group can be represented as this, I am solemnly assured, logical and beautiful algebraic structure. It’s something like a polyhedron in rather more than three dimensions of space. In particular it needs 196,884 dimensions to show off its particular beauty. I am taking experts’ word for it. I can’t quite imagine more than 196,883 dimensions for a thing.

    And it’s a thing full of mystery. This creature of group theory makes us think of the number 196,884. The same 196,884 turns up in number theory, the study of how integers are put together. It’s the first non-boring coefficient in a thing called the j-function. It’s not coincidence. This bit of number theory and this bit of group theory are bound together, but it took some years for anyone to quite understand why.

    There are more mysteries. The character table has 194 rows and columns. Each column implies a function. Some of those functions are duplicated; there are 171 distinct ones. But it turns out some of the distinct ones can be built by adding together multiples of others. There are 163 distinct ones. 163 appears again in number theory, in the study of algebraic integers. These are, of course, not integers at all. They’re things that look like complex-valued numbers: some real number plus some (possibly other) real number times the square root of some specified negative number. They’ve got neat properties. Or weird ones.

    You know how with integers there’s just one way to factor them? Like, fifteen is equal to three times five and no other set of prime numbers? Algebraic integers don’t work like that. There’s usually multiple ways to do that. There are exceptions, algebraic integers that still have unique factorings. They happen only for a few square roots of negative numbers. The biggest of those negative numbers? Minus 163.
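The classic small example of factoring going non-unique uses the square root of minus five, not minus 163 (the choice of example is mine). A sketch, tracking elements a + b·√(-5) as pairs (a, b) and using the norm a² + 5b², which is multiplicative and so keeps the books on factorizations:

```python
def mul(p, q):
    """Multiply a + b*sqrt(-5) pairs: (a, b) times (c, d)."""
    (a, b), (c, d) = p, q
    return (a * c - 5 * b * d, a * d + b * c)

def norm(p):
    """The norm a^2 + 5 b^2; it multiplies along with the elements."""
    a, b = p
    return a * a + 5 * b * b

six_one = mul((2, 0), (3, 0))    # 2 * 3
six_two = mul((1, 1), (1, -1))   # (1 + sqrt(-5)) * (1 - sqrt(-5))
print(six_one, six_two)          # both come out to (6, 0): two factorings of 6
# No element of this system has norm 2 or 3 (a^2 + 5b^2 can't equal
# them), so none of the four factors splits further: the factorings
# really are different.
print(norm((2, 0)), norm((3, 0)), norm((1, 1)))  # 4 9 6
```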

    I don’t know if this 163 appearance means something. As I understand the matter, neither does anybody else.

    There is some link to the mathematics of string theory. That’s an interesting but controversial and hard-to-experiment-upon model for how the physics of the universe may work. But I don’t know string theory well enough to say what it is or how surprising this should be.

    The Monster Group creates a monster essay. I suppose it couldn’t do otherwise. I suppose I can’t adequately describe all its sublime mystery. Dr Mark Ronan has written a fine web page describing much of the Monster Group and the history of our understanding of it. He also has written a book, Symmetry and the Monster, to explain all this in greater depths. I’ve not read the book. But I do mean to, now.

    • gaurish 9:17 am on Saturday, 10 December, 2016 Permalink | Reply

      It’s a shame that I somehow missed this blog post. Have you read “Symmetry and the Monster”? Will you recommend reading it?


      • Joseph Nebus 5:57 am on Saturday, 17 December, 2016 Permalink | Reply

        Not to fear. Given how I looked away a moment and got fourteen days behind writing comments I can’t fault anyone for missing a post or two here.

        I haven’t read Symmetry and the Monster, but from Dr Ronan’s web site about the Monster Group I’m interested and mean to get to it when I find a library copy. I keep getting farther behind in my reading, admittedly. Today I realized I’d rather like to read Dan Bouk’s How Our Days Became Numbered: Risk and the Rise of the Statistical Individual, which focuses in large part on the growth of the life insurance industry in the 19th century. And even so I just got a book about the sale of timing data that was so common back when standard time was being discovered-or-invented.


  • Joseph Nebus 6:00 pm on Friday, 25 November, 2016 Permalink | Reply
    Tags: algebra, kernels, null space

    The End 2016 Mathematics A To Z: Kernel 

    I told you that Image thing would reappear. Meanwhile I learned something about myself in writing this.


    I want to talk about functions again. I’ve been keeping like a proper mathematician to a nice general idea of what a function is. The sort where a function’s this rule matching stuff in a set called the domain with stuff in a set called the range. And I’ve tried not to commit myself to saying anything about what that domain and range are. They could be numbers. They could be other functions. They could be the set of DVDs you own but haven’t watched in more than two years. They could be collections of socks. Haven’t said.

    But we know what functions anyone cares about. They’re stuff that have domains and ranges that are numbers. Preferably real numbers. Complex-valued numbers if we must. If we look at more exotic sets they’re ones that stick close to being numbers: vectors made up of an ordered set of numbers. Matrices of numbers. Functions that are themselves about numbers. Maybe we’ll get to something exotic like a rotation, but then what is a rotation but spinning something a certain number of degrees? There are a bunch of unavoidably common domains and ranges.

    Fine, then. I’ll stick to functions with ranges that look enough like regular old numbers. By “enough” I mean they have a zero. That is, something that works like zero does. You know, add it to something else and that something else isn’t changed. That’s all I need.

    A natural thing to wonder about a function — hold on. “Natural” is the wrong word. Something we learn to wonder about in functions, in pre-algebra class where they’re all polynomials, is where the zeroes are. They’re generally not at zero. Why would we say “zeroes” to mean “zero”? That could let non-mathematicians think they knew what we were on about. By the “zeroes” we mean the things in the domain that get matched to the zero in the range. It might be zero; no reason it couldn’t, until we know what the function’s rule is. Just we can’t count on that.

    A polynomial we know has … well, it might have zero zeroes. That is, no zeroes at all. It might have one, or two, or so on. If it’s an n-th degree polynomial it can have up to n zeroes. And if it’s not a polynomial? Well, then it could have any conceivable number of zeroes and nobody is going to give you a nice little formula to say where they all are. It’s not that we’re being mean. It’s just that there isn’t a nice little formula that works for all possibilities. There aren’t even nice little formulas that work for all polynomials. You have to find zeroes by thinking about the problem. Sorry.
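    To make the counting concrete, here’s a small sketch in Python. It assumes numpy is available, and the polynomials are made up for illustration; numpy.roots returns every complex root, so we keep only the ones with negligible imaginary part.

```python
import numpy as np

# numpy.roots returns every complex root; keep those with (numerically)
# zero imaginary part to count the real zeroes, multiplicity included.
def real_zero_count(coeffs):
    roots = np.roots(coeffs)
    return int(np.sum(np.abs(roots.imag) < 1e-9))

assert real_zero_count([1, 0, 1]) == 0     # x^2 + 1: no real zeroes
assert real_zero_count([1, 0, -1]) == 2    # x^2 - 1: zeroes at -1 and 1
assert real_zero_count([1, 0, 0, 0]) == 3  # x^3: a triple zero at 0
```

    Note that a repeated zero gets counted once for each repetition, which matches the “up to n zeroes” bookkeeping.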

    But! Suppose you have a collection of all the zeroes for your function. That’s all the points in the domain that match with zero in the range. Then we have a new name for the thing you have. And that’s the kernel of your function. It’s the biggest subset in the domain with an image that’s just the zero in the range.

    So we have a name for the zeroes that isn’t just “the zeroes”. What does this get us?

    If we don’t know anything about the kind of function we have, not much. If the function belongs to some common kinds of functions, though, it tells us stuff.

    For example. Suppose the function has domain and range that are vectors. And that the function is linear, which is to say, easy to deal with. Let me call the function ‘f’. And let me pick out two things in the domain. I’ll call them ‘x’ and ‘y’ because I’m writing this after Thanksgiving dinner and can’t work up a cleverer name for anything. If f is linear then f(x + y) is the same thing as f(x) + f(y). And now something magic happens. If x and y are both in the kernel, then x + y has to be in the kernel too. Think about it. Meanwhile, if x is in the kernel but y isn’t, then f(x + y) is f(y). Again think about it.
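    A small numerical sketch of that magic, assuming numpy is available. The matrix here is invented for the demonstration; any linear map with a nontrivial kernel would do.

```python
import numpy as np

# A linear map f(v) = A v from R^3 to R^2. The matrix is made up; it
# sends every multiple of (-1, -1, 1) to zero, so that's the kernel.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])

def f(v):
    return A @ v

x = np.array([-1.0, -1.0, 1.0])   # in the kernel
y = 2.5 * x                       # also in the kernel
z = np.array([1.0, 0.0, 0.0])     # not in the kernel

assert np.allclose(f(x), 0) and np.allclose(f(y), 0)
assert np.allclose(f(x + y), 0)     # kernel members sum to a kernel member
assert np.allclose(f(x + z), f(z))  # adding a kernel member is invisible to f
```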

    What we can see is that the domain fractures into two directions. One of them, the direction of the kernel, is invisible to the function. You can move however much you like in that direction and f can’t see it. The other direction, perpendicular (“orthogonal”, we say in the trade) to the kernel, is visible. Everything that might change changes in that direction.

    This idea threads through vector spaces, and we study a lot of things that turn out to look like vector spaces. It keeps surprising us by letting us solve problems, or find the best-possible approximate solutions. This kernel gives us room to match some fiddly conditions without breaking the real solution. The size of the null space alone can tell us whether some problems are solvable, or whether they’ll have infinitely large sets of solutions.

    In this vector-space construct the kernel often takes on another name, the “null space”. This means the same thing. But it reminds us that superhero comics writers miss out on many excellent pieces of terminology by not taking advanced courses in mathematics.

    Kernels also appear in group theory, and again whenever we get into rings. We’re always working with rings. They’re nearly as unavoidable as vector spaces.

    You know how you can divide the whole numbers into odd and even? And you can do some neat tricks with that for some problems? You can do that with every ring, using the kernel as a dividing point. This gives us information about how the ring is shaped, and what other structures might look like the ring. This often lets us turn proofs that might be hard into a collection of proofs on individual cases that are, at least, doable. Tricks about odd and even numbers become, in trained hands, subtle proofs of surprising results.
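    The odd-and-even trick, restated in kernel language: the map sending each integer to its remainder mod 2 has the even numbers as its kernel, and arithmetic survives the passage to the quotient. A brute-force check in Python:

```python
# The parity of a sum or product depends only on the parities of the
# inputs: arithmetic "descends" to the two-element quotient {0, 1}.
def parity(n):
    return n % 2

for a in range(-5, 6):
    for b in range(-5, 6):
        assert parity(a + b) == (parity(a) + parity(b)) % 2
        assert parity(a * b) == (parity(a) * parity(b)) % 2
```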

    We see vector spaces and rings all over the place in mathematics. Some of that’s selection bias. Vector spaces capture a lot of what’s important about geometry. Rings capture a lot of what’s important about arithmetic. We have understandings of geometry and arithmetic that transcend even our species. Raccoons understand space. Crows understand number. When we look to do mathematics we look for patterns we understand, and these are major patterns we understand. And there are kernels that matter to each of them.

    Some mathematical ideas inspire metaphors to me. Kernels are one. Kernels feel to me like the process of holding a polarized lens up to a crystal. This lets one see how the crystal is put together. I realize writing this down that my metaphor is unclear: is the kernel the lens or the structure seen in the crystal? I suppose the function has to be the lens, with the kernel the crystallization planes made clear under it. It’s curious I had enjoyed this feeling about kernels and functions for so long without making it precise. Feelings about mathematical structures can be like that.

    • Barb Knowles 8:42 pm on Friday, 25 November, 2016 Permalink | Reply

      Don’t be mad if I tell you I’ve never had a feeling about a mathematical structure, lol. But it is immensely satisfying to solve an equation. I’m not a math person. As an English as a New Language teacher, I have to help kids with algebra at times. I usually break out in a sweat and am ecstatic when I can actually help them.


      • Joseph Nebus 11:24 pm on Friday, 25 November, 2016 Permalink | Reply

        I couldn’t be mad about that! I don’t have feelings like that about most mathematical constructs myself. There’s just a few that stand out for one reason or another.

        I am intrigued by the ways teaching differs for different subjects. How other people teach mathematics (or physics) interests me too, but I’ve noticed some strong cultural similarities across different departments and fields. Other subjects have a greater novelty value for me.


        • Barb Knowles 11:42 pm on Friday, 25 November, 2016 Permalink | Reply

          My advisor in college (Romance Language major) told me that I should do well in math because it is a language, formulas are like grammar and there is a lot of memorization. Not being someone with math skills, I replied ummmm. I don’t think she was impressed, lol.


          • Joseph Nebus 9:22 pm on Wednesday, 30 November, 2016 Permalink | Reply

            I’m not sure that I could go along with the idea of mathematics as a language. But there is something that seems like a grammar to formulas. That is, there are formulas that just look right or look wrong, even before exploring their content. Sometimes a formula just looks … ungrammatical. Sometimes that impression is wrong. But there is something that stands out.

            As for mathematics skills, well, I think people usually have more skill than they realize. There’s a lot of mathematics out there, much of it not related to calculations, and it’d be amazing if none of it intrigued you or came easily.


  • Joseph Nebus 6:00 pm on Wednesday, 2 November, 2016 Permalink | Reply
    Tags: algebra, eigenvalues   

    The End 2016 Mathematics A To Z: Algebra 

    So let me start the End 2016 Mathematics A To Z with a word everybody figures they know. As will happen, everybody’s right and everybody’s wrong about that.


    Everybody knows what algebra is. It’s the point where suddenly mathematics involves spelling. Instead of long division we’re on a never-ending search for ‘x’. Years later we pass along gifs of either someone saying “stop asking us to find your ex” or someone who’s circled the letter ‘x’ and written “there it is”. And make jokes about how we got through life without using algebra. And we know it’s the thing mathematicians are always doing.

    Mathematicians aren’t always doing that. I expect the average mathematician would say she almost never does that. That’s a bit of a fib. We have a lot of work where we do stuff that would be recognizable as high school algebra. It’s just we don’t really care about that. We’re doing that because it’s how we get the problem we are interested in done. The most recent few pieces in my “Why Stuff can Orbit” series include a bunch of high school algebra-style work. But that was just because it was the easiest way to answer some calculus-inspired questions.

    Still, “algebra” is a much-used word. It comes back around the second or third year of a mathematics major’s career. It comes in two forms in undergraduate life. One form is “linear algebra”, which is a great subject. That field’s about how stuff moves. You get to imagine space as this stretchy material. You can stretch it out. You can squash it down. You can stretch it in some directions and squash it in others. You can rotate it. These are simple things to build on. You can spend a whole career building on that. It becomes practical in surprising ways. For example, it’s the field of study behind finding equations that best match some complicated, messy real data.
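    That last claim, finding equations that best match messy data, can be sketched in a few lines with numpy. The data here are invented; the point is that line-fitting is a linear algebra problem, solved by least squares.

```python
import numpy as np

# Invented data lying near the line y = 2x + 1, plus a little noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

# Ask for the slope m and intercept c that come closest to satisfying
# y = m*x + c at every data point at once.
A = np.column_stack([x, np.ones_like(x)])
(m, c), _, _, _ = np.linalg.lstsq(A, y, rcond=None)

assert abs(m - 2.0) < 0.2
assert abs(c - 1.0) < 0.3
```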

    The second form is “abstract algebra”, which comes in about the same time. This one is alien and baffling for a long while. It doesn’t help that the books all call it Introduction to Algebra or just Algebra and all your friends think you’re slumming. The mathematics major stumbles through confusing definitions and theorems that ought to sound comforting. (“Fermat’s Little Theorem”? That’s a good thing, right?) But the confusion passes, in time. There’s a beautiful subject here, one of my favorites. I’ve talked about it a lot.

    We start with something that looks like the loosest cartoon of arithmetic. We get a bunch of things we can add together, and an ‘addition’ operation. This lets us do a lot of stuff that looks like addition modulo numbers. Then we go on to stuff that looks like picking up floor tiles and rotating them. Add in something that we call ‘multiplication’ and we get rings. This is a bit more like normal arithmetic. Add in some other stuff and we get ‘fields’ and other structures. We can keep falling back on arithmetic and on rotating tiles to build our intuition about what we’re doing. This trains mathematicians to look for particular patterns in new, abstract constructs.
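    As a tiny illustration of “addition modulo numbers”, here is a brute-force check in Python that the integers mod 4 satisfy the group axioms. It’s a sketch, not how anyone proves this, but it shows the kind of structure being described.

```python
# Brute-force check that the integers mod 4 form a group under addition.
n = 4
elements = range(n)

def add(a, b):
    return (a + b) % n

for a in elements:
    for b in elements:
        assert add(a, b) in elements                       # closure
        for c in elements:
            assert add(add(a, b), c) == add(a, add(b, c))  # associativity

for a in elements:
    assert add(a, 0) == a                                  # identity
    assert any(add(a, b) == 0 for b in elements)           # inverses
```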

    Linear algebra is not an abstract-algebra sort of algebra. Sorry about that.

    And there’s another kind of algebra that mathematicians talk about. At least once they get into grad school they do. There’s a huge family of these kinds of algebras. The family trait for them is that they share a particular rule about how you can multiply their elements together. I won’t get into that here. There are many kinds of these algebras. One that I keep trying to study on my own and crash hard against is Lie Algebra. That’s named for the Norwegian mathematician Sophus Lie. Pronounce it “lee”, as in “leaning”. You can understand quantum mechanics much better if you’re comfortable with Lie Algebras and so now you know one of my weaknesses. Another kind is the Clifford Algebra. This lets us create something called a “hypercomplex number”. It isn’t much like a complex number. Sorry. Clifford Algebra does lend itself to a construct called spinors. These help physicists understand the behavior of bosons and fermions. Every bit of matter seems to be either a boson or a fermion. So you see why this is something people might like to understand.
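    That “particular rule” can at least be glimpsed numerically. For square matrices the Lie bracket is the commutator [A, B] = AB − BA, and it satisfies antisymmetry and the Jacobi identity. A sketch with numpy, using arbitrary random matrices:

```python
import numpy as np

# Square matrices under the commutator bracket [A, B] = AB - BA.
def bracket(A, B):
    return A @ B - B @ A

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# Antisymmetry:
assert np.allclose(bracket(A, B), -bracket(B, A))

# The Jacobi identity, the rule that marks out a Lie algebra:
jacobi = (bracket(A, bracket(B, C))
          + bracket(B, bracket(C, A))
          + bracket(C, bracket(A, B)))
assert np.allclose(jacobi, 0)
```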

    Boolean Algebra is the algebra of this type that a normal person is likely to have heard of. It’s about what we can build using two values and a few operations. Those values by tradition we call True and False, or 1 and 0. The operations we call things like ‘and’ and ‘or’ and ‘not’. It doesn’t sound like much. It gives us computational logic. Isn’t that amazing stuff?
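    Boolean Algebra is small enough to check exhaustively. Here’s a Python sketch verifying De Morgan’s laws, a pair of its classic identities, over every possible input:

```python
# Two values, a few operations: checking De Morgan's laws exhaustively.
values = (False, True)

for p in values:
    for q in values:
        assert (not (p and q)) == ((not p) or (not q))
        assert (not (p or q)) == ((not p) and (not q))
```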

    So if someone says “algebra” she might mean any of these. A normal person in a non-academic context probably means high school algebra. A mathematician speaking without further context probably means abstract algebra. If you hear something about “matrices” it’s more likely that she’s speaking of linear algebra. But abstract algebra can’t be ruled out yet. If you hear a word like “eigenvector” or “eigenvalue” or anything else starting “eigen” (or “characteristic”) she’s more probably speaking of abstract algebra. And if there’s someone’s name before the word “algebra” then she’s probably speaking of the last of these. This is not a perfect guide. But it is the sort of context mathematicians expect other mathematicians to notice.

    • John Friedrich 2:13 am on Thursday, 3 November, 2016 Permalink | Reply

      The cruelest trick that happened to me was when a grad school professor labeled the Galois Theory class “Algebra”. Until then, the lowest score I’d ever gotten in a math class was a B. After that, I decided to enter the work force and abandon my attempts at a master’s degree.


      • Joseph Nebus 3:32 pm on Friday, 4 November, 2016 Permalink | Reply

        Well, it’s true enough that it’s part of algebra. But I’d feel uncomfortable plunging right into that without the prerequisites being really clear. I’m not sure I’ve even run into a nice clear pop-culture explanation of Galois Theory past some notes about how there’s two roots to a quadratic equation and see how they mirror each other.


  • Joseph Nebus 6:00 pm on Sunday, 28 August, 2016 Permalink | Reply
    Tags: algebra   

    Reading the Comics, August 27, 2016: Calm Before The Term Edition 

    Here in the United States schools are just lurching back into the mode where they have students come in and do stuff all day. Perhaps this is why it was a routine week. Comic Strip Master Command wants to save up a bunch of story problems for us. But here’s what the last seven days sent into my attention.

    Jeff Harris’s Shortcuts educational feature for the 21st is about algebra. It’s got a fair enough blend of historical trivia and definitions and examples and jokes. I don’t remember running across the “number cruncher” joke before.

    Mark Anderson’s Andertoons for the 23rd is your typical student-in-lecture joke. But I do sympathize with students not understanding when a symbol gets used for different meanings. It throws everyone. But sometimes the things important to note clearly in one section are different from the needs in another section. No amount of warning will clear things up for everybody, but we try anyway.

    Tom Thaves’s Frank and Ernest for the 23rd tells a joke about collapsing wave functions, which is why you never see this comic in a newspaper but always see it on a physics teacher’s door. This is properly physics, specifically quantum mechanics. But it has mathematical import. The most practical model of quantum mechanics describes what state a system is in by something called a wave function. And we can turn this wave function into a probability distribution, which describes how likely the system is to be in each of its possible states. “Collapsing” the wave function is a somewhat mysterious and controversial practice. It comes about because if we know nothing about a system then it may have one of many possible values. If we observe, say, the position of something though, then we have one possible value. The wave functions before and after the observation are different. We call it collapsing, reflecting how a universe of possibilities collapsed into a mere fact. But it’s hard to find an explanation for what that is that’s philosophically and physically satisfying. This problem leads us to Schrödinger’s Cat, and to other challenges to our sense of how the world could make sense. So, if you want to make your mark here’s a good problem for you. It’s not going to be easy.

    John Allison’s Bad Machinery for the 24th tosses off a panel full of mathematics symbols as proof of hard thinking. In other routine references John Deering’s Strange Brew for the 26th is just some talk about how hard fractions are.

    While it’s outside the proper bounds of mathematics talk, Tom Toles’s Randolph Itch, 2 am for the 23rd is a delight. My favorite strip of this bunch. Should go on the syllabus.

  • Joseph Nebus 6:00 pm on Sunday, 3 July, 2016 Permalink | Reply
    Tags: algebra, fairy tales   

    Reading the Comics, June 29, 2016: Math Is Just This Hard Stuff, Right? Edition 

    We’ve got into that stretch of the year when (United States) schools are out of session. Comic Strip Master Command seems to have thus ordered everyone to clean out their mathematics gags, even if they didn’t have any particularly strong ones. There were enough the past week I’m breaking this collection into two segments, though. And the first segment, I admit, is mostly the same joke repeated.

    Russell Myers’s Broom Hilda for the 27th is the type case for my “Math Is Just This Hard Stuff, Right?” name here. In fairness to Broom Hilda, mathematics is a lot harder now than it was 1,500 years ago. It’s fair not to be able to keep up. There was a time that finding roots of third-degree polynomials was the stuff of experts. Today it’s within the powers of any Boring Algebra student, although she’ll have to look up the formula for it.
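    Or she can skip the formula entirely and let a library do it. A sketch with numpy (the polynomial is made up, chosen to have roots 1, 2, and 3):

```python
import numpy as np

# x^3 - 6x^2 + 11x - 6 factors as (x - 1)(x - 2)(x - 3).
coeffs = [1, -6, 11, -6]
roots = np.sort(np.roots(coeffs).real)

assert np.allclose(roots, [1.0, 2.0, 3.0])
```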

    John McPherson’s Close To Home for the 27th is a bunch of trigonometry-cheat tattoos. I’m sure some folks have gotten mathematics tattoos that include … probably not these formulas. They’re not beautiful enough. Maybe some diagrams of triangles and the like, though. The proof of the Pythagorean Theorem in Euclid’s Elements, for example, includes this intricate figure I would expect captures imaginations and could be appreciated as a beautiful drawing.

    Missy Meyer’s Holiday Doodles observed that the 28th was “Tau Day”, which takes everything I find dubious about “Pi Day” and matches it to the silly idea that we would somehow make life better by replacing π with a symbol for 2π.

    Ginger Bread Boulevard: a witch with her candy house, and a witch with a house made of a Geometry book, a compass, erasers, that sort of thing. 'I'll eat any kid, but my sister prefers the nerdy ones.' Bonus text in the title panel: 'Come on in, little child, we'll do quadratic equations'.

    Hilary Price’s Rhymes With Orange for the 29th of June, 2016. I like the Number-two-pencil fence.

    Hilary Price’s Rhymes With Orange for the 29th uses mathematics as the way to sort out nerds. I can’t say that’s necessarily wrong. It’s interesting to me that geometry and algebra communicate “nerdy” in a shorthand way that, say, an obsession with poetry or history or other interests wouldn’t. It wouldn’t fit the needs of this particular strip, but I imagine that a well-diagrammed sentence would be as good as a page full of equations for expressing nerdiness. The title card’s promise of doing quadratic equations would have worked on me as a kid; I thought they sounded neat and exotic and something to discover because they sounded hard. When I took Boring High School Algebra that charm wore off.

    Aaron McGruder’s The Boondocks rerun for the 29th starts a sequence of Riley doubting the use of parts of mathematics. The parts about making numbers smaller. It’s a better-than-normal treatment of the problem of getting a student motivated. The strip originally ran the 18th of April, 2001, and the story continued the several days after that.

    Bill Whitehead’s Free Range for the 29th uses Boring Algebra as an example of the stuff kids have to do for homework.

  • Joseph Nebus 3:00 pm on Thursday, 16 June, 2016 Permalink | Reply
    Tags: algebra, differentials, slopes   

    Theorem Thursday: One Mean Value Theorem Of Many 

    For this week I have something I want to follow up on. We’ll see if I make it that far.

    The Mean Value Theorem.

    My subject line disagrees with the header just above here. I want to talk about the Mean Value Theorem. It’s one of those things that turns up in freshman calculus and then again in Analysis. It’s introduced as “the” Mean Value Theorem. But like many things in calculus it comes in several forms. So I figure to talk about one of them here, and another form in a while, when I’ve had time to make up drawings.

    Calculus can split effortlessly into two kinds of things. One is differential calculus. This is the study of continuity and smoothness. It studies how a quantity changes if something affecting it changes. It tells us how to optimize things. It tells us how to approximate complicated functions with simpler ones. Usually polynomials. It leads us to differential equations, problems in which the rate at which something changes depends on what value the thing has.

    The other kind is integral calculus. This is the study of shapes and areas. It studies how infinitely many things, all infinitely small, add together. It tells us what the net change in things is. It tells us how to go from information about every point in a volume to information about the whole volume.

    They aren’t really separate. Each kind informs the other, and gives us tools to use in studying the other. And they are almost mirrors of one another. Differentials and integrals are not quite inverses, but they come quite close. And as a result most of the important stuff you learn in differential calculus has an echo in integral calculus. The Mean Value Theorem is among them.

    The Mean Value Theorem is a rule about functions. In this case it’s functions with a domain that’s an interval of the real numbers. I’ll use ‘a’ as the name for the smallest number in the domain and ‘b’ as the largest number. People talking about the Mean Value Theorem often do. The range is also the real numbers, although it doesn’t matter which ones.

    I’ll call the function ‘f’ in accord with a long-running tradition of not working too hard to name functions. What does matter is that ‘f’ is continuous on the interval [a, b]. I’ve described what ‘continuous’ means before. It means that here too.

    And we need one more thing. The function f has to be differentiable on the interval (a, b). You maybe noticed that before I wrote [a, b], and here I just wrote (a, b). There’s a difference here. We need the function to be continuous on the “closed” interval [a, b]. That is, it’s got to be continuous for ‘a’, for ‘b’, and for every point in-between.

    But we only need the function to be differentiable on the “open” interval (a, b). That is, it’s got to be differentiable for all the points in-between ‘a’ and ‘b’. If it happens to be differentiable for ‘a’, or for ‘b’, or for both, that’s great. But we won’t turn away a function f for not being differentiable at those points. Only the interior. That sort of distinction between stuff true on the interior and stuff true on the boundaries is common. This is why mathematicians have words for “including the boundaries” (“closed”) and “never minding the boundaries” (“open”).

    As to what “differentiable” is … A function is differentiable at a point if you can take its derivative at that point. I’m sure that clears everything up. There are many ways to describe what differentiability is. One that’s not too bad is to imagine zooming way in on the curve representing a function. If you start with a big old wobbly function it waves all around. But pick a point. Zoom in on that. Does the function stay all wobbly, or does it get more steady, more straight? Keep zooming in. Does it get even straighter still? If you zoomed in over and over again on the curve at some point, would it look almost exactly like a straight line?

    If it does, then the function is differentiable at that point. It has a derivative there. The derivative’s value is whatever the slope of that line is. The slope is that thing you remember from taking Boring Algebra in high school. That rise-over-run thing. But this derivative is a great thing to know. You could approximate the original function with a straight line, with slope equal to that derivative. Close to that point, you’ll make a small enough error nobody has to worry about it.
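    That straight-line approximation is easy to watch in action. A Python sketch with f(x) = x², where the derivative at 1 is 2 and the approximation error works out to be exactly h²:

```python
# Approximating f(x) = x^2 near a = 1 with the tangent line of slope 2.
def f(x):
    return x * x

a, slope = 1.0, 2.0  # the derivative of x^2 at x = 1 is 2

for h in (0.1, 0.01, 0.001):
    approx = f(a) + slope * h          # tangent-line guess for f(a + h)
    error = abs(f(a + h) - approx)
    assert abs(error - h * h) < 1e-12  # the error here is exactly h^2
```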

    That there will be this straight line approximation isn’t true for every function. Here’s an example. Picture a line that goes up and then takes a 90-degree turn to go back down again. Look at the corner. However close you zoom in on the corner, there’s going to be a corner. It’s never going to look like a straight line; there’s a 90-degree angle there. It can be a smaller angle if you like, but any sort of corner breaks this differentiability. This is a point where the function isn’t differentiable.

    There are functions that are nothing but corners. They can be differentiable nowhere, or only at a tiny set of points that can be ignored. (A set of measure zero, as the dialect would put it.) Mathematicians discovered this over the course of the 19th century. They got into some good arguments about how that can even make sense. It can get worse. Also found in the 19th century were functions that are continuous only at a single point. This smashes just about everyone’s intuition. But we can’t find a definition of continuity that’s as useful as the one we use now and avoids that problem. So we accept that it implies some pathological conclusions and carry on as best we can.

    Now I get to the Mean Value Theorem in its differential calculus pelage. It starts with the endpoints, ‘a’ and ‘b’, and the values of the function at those points, ‘f(a)’ and ‘f(b)’. And from here it’s easiest to figure what’s going on if you imagine the plot of a generic function f. I recommend drawing one. Just make sure you draw it without lifting the pen from paper, and without including any corners anywhere. Something wiggly.

    Draw the line that connects the ends of the wiggly graph. Formally, we’re adding the line segment that connects the points with coordinates (a, f(a)) and (b, f(b)). That’s coordinate pairs, not intervals. That’s clear in the minds of the mathematicians who don’t see why not to use parentheses over and over like this. (We are short on good grouping symbols like parentheses and brackets and braces.)

    Per the Mean Value Theorem, there is at least one point whose derivative is the same as the slope of that line segment. If you were to slide the line up or down, without changing its orientation, you’d find something wonderful. Most of the time this line intersects the curve, crossing from above to below or vice-versa. But there’ll be at least one point where the shifted line is “tangent”, where it just touches the original curve. Close to that touching point, the “tangent point”, the shifted line and the curve blend together and can’t be easily told apart. As long as the function is differentiable on the open interval (a, b), and continuous on the closed interval [a, b], this will be true. You might convince yourself of it by drawing a couple of curves and taking a straightedge to the results.
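    You can also convince yourself numerically. A Python sketch for f(x) = x³ on the interval [0, 2]: the secant slope is 4, the derivative is 3x², so the theorem’s promised point is c = √(4/3), and bisection finds it.

```python
# f(x) = x^3 on [0, 2]: secant slope 4, derivative 3x^2, so the theorem
# promises a point c with 3c^2 = 4. Bisection narrows in on it, since
# f'(x) - 4 is negative at 0 and positive at 2 (and rises in between).
def f(x):
    return x ** 3

def fprime(x):
    return 3 * x ** 2

a, b = 0.0, 2.0
secant = (f(b) - f(a)) / (b - a)

lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if fprime(mid) < secant:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2

assert abs(fprime(c) - secant) < 1e-9
assert abs(c - (4 / 3) ** 0.5) < 1e-9
```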

    This is an existence theorem. Like the Intermediate Value Theorem, it doesn’t tell us which point, or points, make the thing we’re interested in true. It just promises us that there is some point that does it. So it gets used in other proofs. It lets us mix information about intervals and information about points.

    It’s tempting to try using it numerically. It looks as if it justifies a common differential-calculus trick. Suppose we want to know the value of the derivative at a point. We could pick a little interval around that point and find the endpoints. And then find the slope of the line segment connecting the endpoints. And won’t that be close enough to the derivative at the point we care about?

    Well. Um. No, we really can’t be sure about that. We don’t have any idea what interval might make the derivative of the point we care about equal to this line-segment slope. The Mean Value Theorem won’t tell us. It won’t even tell us if there exists an interval that would let that trick work. We can’t invoke the Mean Value Theorem to let us get away with that.

    Often, though, we can get away with it. Differentiable functions do have to follow some rules. Among them is that if you do pick a small enough interval then approximations that look like this will work all right. If the function flutters around a lot, we need a smaller interval. But a lot of the functions we’re interested in don’t flutter around that much. So we can get away with it. And there’s some grounds to trust in getting away with it. The Mean Value Theorem isn’t any part of the grounds. It just looks so much like it ought to be.
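    Here’s the trick in action, with the hedge made visible: for a smooth function like sine, the secant slope over a shrinking interval closes in on the derivative, with the error falling off like the square of the interval width. That’s smoothness at work, not the Mean Value Theorem.

```python
import math

# Secant slopes of sin at x = 1, over shrinking intervals, versus cos(1).
def f(x):
    return math.sin(x)

point = 1.0
exact = math.cos(point)  # the true derivative of sin at 1

errors = []
for h in (0.1, 0.01, 0.001):
    slope = (f(point + h) - f(point - h)) / (2 * h)
    errors.append(abs(slope - exact))

assert errors[0] > errors[1] > errors[2]  # smaller interval, smaller error
assert errors[2] < 1e-6
```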

    I hope on a later Thursday to look at an integral-calculus form of the Mean Value Theorem.

  • Joseph Nebus 3:00 pm on Thursday, 9 June, 2016 Permalink | Reply
    Tags: algebra, Cramer's Rule, determinants   

    Theorem Thursday: What Is Cramer’s Rule? 

    KnotTheorist asked for this one during my appeal for theorems to discuss. And I’m taking an open interpretation of what a “theorem” is. I can do a rule.

    Cramer’s Rule

    I first learned of Cramer’s Rule in the way I expect most people do. It was an algebra course. I mean high school algebra. By high school algebra I mean you spend roughly eight hundred years learning ways to solve for x or to plot y versus x. Then take a pause for polar coordinates and matrices. Then you go back to finding both x and y.

    Cramer’s Rule came up in the context of solving simultaneous equations. You have more than one variable. So x and y. Maybe z. Maybe even a w, before whoever set up the problem gives up and renames everything x1 and x2 and x62 and all that. You also have more than one equation. In fact, you have exactly as many equations as you have variables. Are there any sets of values those variables can have which make all those equations true simultaneously? Thus the imaginative name “simultaneous equations” or the search for “simultaneous solutions”.

    If all the equations are linear then we can always say whether there’s simultaneous solutions. By “linear” we mean what we always mean in mathematics, which is, “something we can handle”. But more exactly it means the equations have x and y and whatever other variables only to the first power. No x-squared or square roots of y or tangents of z or anything. (The equations are also allowed to omit a variable. That is, if you have one equation with x, y, and z, and another with just x and z, and another with just y and z, that’s fine. We pretend the missing variable is there and just multiplied by zero, and proceed as before.) One way to find these solutions is with Cramer’s Rule.

    Cramer’s Rule sets up some matrices based on the system of equations. If the system has two equations, it sets up three matrices. If the system has three equations, it sets up four matrices. If the system has twelve equations, it sets up thirteen matrices. You see the pattern here. And then you can take the determinant of each of these matrices. Dividing the determinant of one of these matrices by another one tells you what value of x makes all the equations true. Dividing the determinant of another matrix by the determinant of one of these matrices tells you which values of y makes all the equations true. And so on. The Rule tells you which determinants to use. It also says what it means if the determinant you want to divide by equals zero. It means there’s either no set of simultaneous solutions or there’s infinitely many solutions.
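    A sketch of the Rule itself for a three-equation system, in Python with numpy doing the determinants. The system is made up, chosen to have the solution x = 1, y = 2, z = 1.

```python
import numpy as np

# Solve, by Cramer's Rule:
#   2x +  y -  z = 3
#    x - 2y + 4z = 1
#   3x + 3y - 2z = 7
A = np.array([[2.0, 1.0, -1.0],
              [1.0, -2.0, 4.0],
              [3.0, 3.0, -2.0]])
b = np.array([3.0, 1.0, 7.0])

def cramer(A, b):
    d = np.linalg.det(A)  # if this were zero, no unique solution exists
    solution = []
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b  # replace column i with the right-hand side
        solution.append(np.linalg.det(Ai) / d)
    return np.array(solution)

assert np.allclose(cramer(A, b), [1.0, 2.0, 1.0])
assert np.allclose(cramer(A, b), np.linalg.solve(A, b))
```

    The agreement with numpy’s general solver is the point; the Rule is one correct way to get the answer, just an expensive one as systems grow.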

    This gets dropped on us students in the vain effort to convince us knowing how to calculate determinants is worth it. It’s not that determinants aren’t worth knowing. It’s just that they don’t seem to tell us anything we care about. Not until we get into mappings and calculus and differential equations and other mathematics-major stuff. We never see it in high school.

    And the hard part of determinants is that for all the cool stuff they tell us, they take forever to calculate. The determinant for a matrix with two rows and two columns isn’t bad. Three rows and three columns is getting bad. Four rows and four columns is awful. The determinant for a matrix with five rows and five columns you only ever calculate if you’ve made your teacher extremely cross with you.

    So there’s the genius of Cramer’s Rule, and its first problem: it takes a lot of calculating. Make any errors along the way with the calculation and your work is wrong. And worse, it won’t be wrong in an obvious way. You can find the error only by going over every single step and hoping to catch the spot where you, somehow, got 36 times -7 minus 21 times -8 wrong.

    The second problem is nobody in high school algebra mentions why systems of linear equations should be interesting to solve. Oh, maybe they’ll explain how this is the work you do to figure out where two straight lines intersect. But that just shifts the “and we care because … ?” problem back one step. Later on we might come to understand the lines represent cases where something we’re interested in is true, or where it changes from true to false.

    This sort of simultaneous-solution problem turns up naturally in optimization problems. These are problems where you try to find a maximum subject to some constraints. Or find a minimum. Maximums and minimums are the same thing when you think about them long enough. If all the constraints can be satisfied at once and you get a maximum (or minimum, whatever), great! If they can’t … Well, you can study how close it’s possible to get, and what happens if you loosen one or more constraints. That’s worth knowing about.

    The third problem with Cramer’s Rule is that, as a method, it kind of sucks. We can be convinced that simultaneous linear equations are worth solving, or at least that we have to solve them to get out of High School Algebra. And we have computers. They can grind away and work out thirteen determinants of twelve-row-by-twelve-column matrices. They might even get an answer back before the end of the term. (The amount of work needed for a determinant grows scary fast as the matrix gets bigger.) But all that work might be meaningless.

    The trouble is that Cramer’s Rule is numerically unstable. Before I even explain what that is you already sense it’s a bad thing. Think of all the good things in your life you’ve heard described as unstable. Fair enough. But here’s what we mean by numerically unstable.

    Is 1/3 equal to 0.3333333? No, and we know that. But is it close enough? Sure, most of the time. Suppose we need a third of sixty million. 0.3333333 times 60,000,000 equals 19,999,998. That’s a little off of the correct 20,000,000. But I bet you wouldn’t even notice the difference if nobody pointed it out to you. Even if you did notice it you might write off the difference. “If we must, make up the difference out of petty cash”, you might declare, as if that were quite sensible in the context.
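    That arithmetic is easy enough to check on a machine, if you would like to see the shortfall for yourself:

```python
# the seven-digit approximation of one-third, times sixty million
payment = 0.3333333 * 60_000_000
print(payment)                # close to 19,999,998
print(20_000_000 - payment)   # roughly 2 -- the petty-cash difference
```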

    And that’s so because this multiplication is numerically stable. Make a small error in either term and you get a proportional error in the result. A small mistake will — well, maybe it won’t stay small, necessarily. But it won’t grow too large too quickly.

    So now you know intuitively what an unstable calculation is. This is one in which a small error doesn’t necessarily stay proportionally small. It might grow huge, arbitrarily huge, and in only a few calculations. So your answer might be computed just fine, but actually be meaningless.

    This isn’t because of a flaw in the computer per se. That is, it’s working as designed. It’s just that we might need, effectively, infinitely many digits of precision for the result to be correct. You see where there may be problems achieving that.

    Cramer’s Rule isn’t guaranteed to be nonsense, and that’s a relief. But it is vulnerable to this. You can set up problems that look harmless but which the computer can’t do. And that’s surely the worst of all worlds, since we wouldn’t bother calculating them numerically if it weren’t too hard to do by hand.
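    Here is a toy of the sort of harmless-looking trouble I mean. Strictly speaking this shows an ill-conditioned system, where any method suffers; but Cramer’s Rule, dividing by a determinant that’s nearly zero, is a fine way to watch it happen. The setup is my own:

```python
def solve2(a, b, c, d, e, f):
    # Cramer's Rule for the pair  a x + b y = e,  c x + d y = f
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# x + y = 2 and x + 1.0001 y = 2.0001 are satisfied by x = 1, y = 1
print(solve2(1, 1, 1, 1.0001, 2, 2.0001))
# Nudge one coefficient in its fifth decimal place and the answer leaps
print(solve2(1, 1, 1, 1.0002, 2, 2.0001))  # near (1.5, 0.5)
```

    A coefficient error of one part in ten thousand moved the answer by half a unit. That’s the kind of growth that makes a computed answer meaningless.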

    (Let me direct the reader who’s unintimidated by mathematical jargon, and who likes seeing a good Wikipedia Editors quarrel, to the Cramer’s Rule Talk Page. Specifically to the section “Cramer’s Rule is useless.”)

    I don’t want to get too down on Cramer’s Rule. It’s not like the numerical instability hurts every problem you might use it on. And you can, at the cost of some more work, detect whether a particular set of equations will have instabilities. That requires a lot of calculation, but if we have the computer to do the work, fine. Let it. And a computer can limit its numerical instabilities if it can do symbolic manipulations. That is, if it can use the idea of “one-third” rather than 0.3333333. The software package Mathematica, for example, does symbolic manipulations very well. You can shed many numerical-instability problems, although you gain the problem of paying for a copy of Mathematica.
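    You can get a modest version of the symbolic trick without buying anything, through Python’s standard fractions module. I mention it only as an illustration of the idea of holding “one-third” exactly:

```python
from fractions import Fraction

third = Fraction(1, 3)           # the idea of one-third, held exactly
print(third * 60_000_000)        # exactly 20000000
print(0.3333333 * 60_000_000)    # the decimal stand-in comes up short
```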

    If you just care about, or just need, one of the variables then what the heck. Cramer’s Rule lets you solve for just one or just some of the variables. That seems like a niche application to me, but it is there.

    And the Rule re-emerges in pure analysis, where numerical instability doesn’t matter. When we look to differential equations, for example, we often find solutions are combinations of several independent component functions. Bases, in fact. Testing whether we have found independent bases can be done through a thing called the Wronskian. That’s a way that Cramer’s Rule appears in differential equations.

    Wikipedia also asserts the use of Cramer’s Rule in differential geometry. I believe that’s a true statement, and that it will be reflected in many mechanics problems. In these we can use our knowledge that, say, energy and angular momentum of a system are constant values to tell us something of how positions and velocities depend on each other. But I admit I’m not well-read in differential geometry. That’s something which has indeed caused me pain in my scholarly life. I don’t know whether differential geometers thank Cramer’s Rule for this insight or whether they’re just glad to have got all that out of the way. (See the above Wikipedia Editors quarrel.)

    I admit for all this talk about Cramer’s Rule I haven’t said what it is. Not in enough detail to pass your high school algebra class. That’s all right. It’s easy to find. MathWorld has the rule in pretty simple form. MathWorld does forget to define what it means by the vector d. (It’s the vector with components d1, d2, et cetera.) But that’s enough technical detail. If you need to calculate something using it, you can probably look closer at the problem and see if you can do it another way instead. Or you’re in high school algebra and just have to slog through it. It’s all right. Eventually you can put x and y aside and do geometry.

    • KnotTheorist 3:44 pm on Thursday, 9 June, 2016 Permalink | Reply

      Thanks for the post! It’s good to know I’m not the only one who wondered about the usefulness of Cramer’s Rule for computation. That was part of my motivation for asking about it, actually; I was curious about what, if anything, it was good for.

      Also, thanks for linking to that Wikipedia article. It was an interesting read.


      • Joseph Nebus 3:21 am on Saturday, 11 June, 2016 Permalink | Reply

        I’m happy to be of service. And, as I say, it’s not like the rule is ever wrong. The worst you can hold against it is that it’s not the quickest or most stable way of doing a lot of problems. But if you can control that, then it’s a tool you have.

        But I admit not using it except as the bit that justifies some work in later proofs since I got out of high school algebra. It’s so beautiful a thing it seems like it ought to be useful.


    • xianyouhoule 12:08 pm on Friday, 10 June, 2016 Permalink | Reply

      Can we understand Cramer’s Rule in a geometrical way??


    • xianyouhoule 12:16 pm on Friday, 10 June, 2016 Permalink | Reply

      Thanks for your post!
      Can we understand Cramer’s Rule in a geometrical way??


      • Joseph Nebus 3:32 am on Saturday, 11 June, 2016 Permalink | Reply

        Happy to help.

        We can work out geometric interpretations of Cramer’s Rule. But I’m not sure how compelling they are. They come about from looking at sets of linear equations as a linear transformation. That is, they’re stretching out and rotating and adding together directions in space. Then the determinant of the matrix corresponding to a set of equations has a good geometric interpretation. It’s how much a unit square gets expanded, or shrunk, by the projection the matrix represents.

        For Cramer’s Rule we look at the determinants of two matrices. One of them is the matrix of the original set of equations. And the other is a similar matrix that has, in the place of (say) constants-times-x, the constant numbers from the right-hand-sides of the original equations. The constants with no variables on them. This matrix projects space in a slightly different way.

        So Cramer’s Rule tells us that the value of x (say) which makes all the equations true is equal to how much the modified matrix with constants instead of x-coefficients expands space, divided by how much the original matrix expands space. And similarly for y and for z and whatever other coordinates you have. And as I say, Wikipedia’s entry on Cramer’s Rule has some fair pictures showing this.

        I admit I’m not sure that’s compelling, though. I don’t have a good answer offhand for why we should expect these ratios to be important, or why these particular modified matrices should enter into it. But it is there and it might help someone at least remember how this rule works.


    • xianyouhoule 4:21 pm on Sunday, 12 June, 2016 Permalink | Reply

      Thanks for your detail explanation.


    • howardat58 3:48 pm on Thursday, 16 June, 2016 Permalink | Reply

      I am of the opinion that cramers rule sucks. What is wrong with Gaussian Elimination ????????

      (apart from the relatively enormous speed !!!)


      • Joseph Nebus 4:34 am on Friday, 17 June, 2016 Permalink | Reply

        Well, speed is the big strike against Gaussian Elimination. But Gaussian Elimination is a lot better off than Cramer’s Rule on that count. Gaussian Elimination also isn’t numerically stable for every matrix. But for diagonally dominant or positive-definite matrices it is, and that’s usually good enough.

        As often happens with numerical techniques, nothing’s quite right all the time. Best you can do is have some idea what’s usually all right.


  • Joseph Nebus 3:00 pm on Monday, 23 May, 2016 Permalink | Reply
    Tags: , algebra, , , , , pyramids   

    Reading the Comics, May 17, 2016: Again, No Pictures Edition 

    Last week’s Reading The Comics was a bunch of Gocomics.com strips. And I don’t feel the need to post the images for those, since they’re reasonably stable links. Today’s is also a bunch of Gocomics.com strips. I know how every how-to-bring-in-readers post ever says you should include images. Maybe I will commission someone to do some icons. It couldn’t hurt.

    Someone looking close at the title, with responsible eye protection, might notice it’s dated the 17th, a day this is not. There haven’t been many mathematically-themed comic strips since the 17th is all. And I’m thinking to try out, at least for a while, making the day on which a Reading the Comics post is issued regular. Maybe Monday. This might mean there are some long and some short posts, but being a bit more scheduled might help my writing.

    Mark Anderson’s Andertoons for the 14th is the charting joke for this essay. Also the Mark Anderson joke for this essay. I was all ready to start explaining ways that the entropy of something can decrease. The easiest way is by expending energy, which we can see as just increasing entropy somewhere else in the universe. The one requiring the most patience is simply waiting: entropy almost always increases, or at least doesn’t decrease. But “almost always” isn’t the same as “always”. But I have to pass. I suspect Anderson drew the chart going down because of the sense of entropy being a winding-down of useful stuff. Or because of down having connotations of failure, and the increase of entropy suggesting the failing of the universe. And we can also read this as a further joke: things are falling apart so badly that even entropy isn’t working like it ought. Anderson might not have meant for a joke that sophisticated, but if he wants to say he did I won’t argue it.

    Scott Adams’s Dilbert Classics for the 14th reprinted the comic of the 20th of March, 1993. I admit I do this sort of compulsive “change-simplifying” when paying, myself. It’s easy to do if you have committed to memory pairs of numbers separated by five: 0 and 5, 1 and 6, 2 and 7, and so on. So if I get a bill for (say) $4.18, I would look for whether I have three cents in change. If I have, have I got 23 cents? That would give me back a nickel. 43 cents would give me back a quarter in change. And a quarter is great because I can use that for pinball.
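    For the curious, the whole scheme reduces to a little greedy change calculation. A sketch, with cents as integers to dodge floating-point pennies (the function name is my own):

```python
def change_back(bill_cents, paid_cents):
    # Greedy United States change: quarters, then dimes, nickels, pennies.
    coins = {}
    change = paid_cents - bill_cents
    for name, value in (("quarters", 25), ("dimes", 10),
                        ("nickels", 5), ("pennies", 1)):
        coins[name], change = divmod(change, value)
    return coins

# A $4.18 bill: handing over $4.23 gets a nickel back, $4.43 a quarter.
print(change_back(418, 423))  # one nickel
print(change_back(418, 443))  # one quarter, ready for pinball
```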

    Sometimes the person at the cash register doesn’t want a ridiculous bunch of change. I don’t blame them. It’s easy to suppose that someone who’s given you $5.03 for a $4.18 charge misunderstood what the bill was. Some folks will take this as a chance to complain mightily about how kids don’t learn even the basics of mathematics anymore and the world is doomed because the young will follow their job training and let machines that are vastly better at arithmetic than they are do arithmetic. This is probably what Adams was thinking, since, well, look at the clerk’s thought balloon in the final panel.

    But consider this: why would Dilbert have handed over $7.14? Or, specifically, how could he give $7.14 to the clerk but not have been able to give $2.14, which would make things easier on everybody? There’s no combination of bills — in United States or, so far as I’m aware, any major world currency — in which you can give seven dollars but not two dollars. He had to be handing over five dollars he was getting right back. The clerk would be right to suspect this. It looks like the start of a change scam, begun by giving a confusing amount of money.

    Had Adams written it so that the charge was $6.89, and Dilbert “helpfully” gave $12.14, then Dilbert wouldn’t be needlessly confusing things.

    Dave Whamond’s Reality Check for the 15th is that pirate-based find-x joke that feels like it should be going around Facebook, even though I don’t think it has been. I can’t say the combination of jokes quite makes logical sense, but I’m amused. It might be from the Reality Check squirrel in the corner.

    Nate Fakes’s Break of Day for the 16th is the anthropomorphized shapes joke for this essay. It’s not the only shapes joke, though.

    Doug Bratton’s Pop Culture Shock Therapy for the 16th is the Einstein joke for this essay.

    Rick Detorie’s One Big Happy rerun for the 17th is another shapes joke. Ruthie has strong ideas about what distinguishes a pyramid from a triangle. In this context I can’t say she’s wrong to assert what a pyramid is.

  • Joseph Nebus 3:00 pm on Wednesday, 4 May, 2016 Permalink | Reply
    Tags: , algebra, , , , , , ,   

    Reading the Comics, May 3, 2016: Lots Of Images Edition 

    After the heavy pace of March and April I figure to take it easy and settle to about a three-a-week schedule around here. That doesn’t mean that Comic Strip Master Command wants things to be too slow for me. And this time they gave me more comics than usual that have expiring URLs. I don’t think I’ve had this many pictures to include in a long while.

    Bill Whitehead’s Free Range for the 28th presents an equation-solving nightmare. From my experience, this would be … a great pain, yes. But it wouldn’t be a career-wrecking mess. Typically a problem that’s hard to solve is hard because you have no idea what to do. Given an expression, you’re allowed to do anything that doesn’t change its truth value. And many approaches might look promising without quite resolving to something useful. The real breakthrough is working out what approach should be used. For an astrophysics problem, there are some classes of key decisions to make. One class is what to include and what to omit in the model. Another class is what to approximate — and how — versus what to treat exactly. Another class is what sorts of substitutions and transformations turn the original expression into one that reveals what you want. Those are the hard parts, and those are unlikely to have been forgotten. Applying those may be tedious, and I don’t doubt it would be anguishing to have the finished work wiped out. But it wouldn’t set one back years either. It would just hurt.

    Christopher Grady’s Lunarbaboon for the 29th I classify as the “anthropomorphic numerals” joke for this essay. Boy, have we all been there.

    'Numbers are boring!' complains the audience. 'Not so. They contain high drama and narrative. Here's an expense account that was turned in to me last week. Can you create a *story* based on these numbers?' 'Once upon a time, a guy was fired for malfeasance ... ' 'If you skip right to the big finish, sure.'

    Bill Holbrook’s On The Fastrack for the 29th of April, 2016. Spoiler: there aren’t any numbers in the second panel.

    Bill Holbrook’s On The Fastrack for the 29th continues the storyline about Fi giving her STEM talk. She is right, as I see it, in attributing drama and narrative to numbers. This is most easily seen in the sorts of finance and accounting mathematics which the character does. And the inevitable answer to “numbers are boring” (or “mathematics is boring”) is surely to show how they are about people. Even abstract mathematics is about things (some) people find interesting, and that must be about the people too.

    'Look, Grandpa! I got 100% on my math test! Do you know what that means? It means that out of ten questions, I got at least half of them correct!' 'It must be that new, new, new math.' 'So many friendly numbers!'

    Rick Detorie’s One Big Happy for the 3rd of May, 2016. Ever notice how many shirt pockets Grandpa has? I’m not saying it’s unrealistic, just that it’s more than the average.

    Rick Detorie’s One Big Happy for the 3rd is a confused-mathematics joke. Grandpa tosses off a New Math joke that’s reasonably age-appropriate too, which is always nice to see in a comic strip. I don’t know how seriously to take Ruthie’s assertion that a 100% means she only got at least half of the questions correct. It could be a cartoonist grumbling about how kids these days never learn anything, the same way every past generation of cartoonists had complained. But Ruthie is also the sort of perpetually-confused, perpetually-confusing character who would get the implications of a 100% on a test wrong. Or would state them weirdly, since yes, a 100% does imply getting at least half the test’s questions right.

    Border Collies, as we know, are highly intelligent. 'Yup, the math confirms it --- we can't get by without people.'

    Niklas Eriksson’s Carpe Diem for the 3rd of May, 2016. I’m a little unnerved there seems to be a multiplication x at the end of the square root vinculum on the third line there.

    Niklas Eriksson’s Carpe Diem for the 3rd uses the traditional board full of mathematical symbols as signifier of intelligence. There’s some interesting mixes of symbols here. The c2, for example, isn’t wrong for mathematics. But it does evoke Einstein and physics. There’s the curious mix of the symbol π and the approximation 3.14. But then I’m not sure how we would get from any of this to a proposition like “whether we can survive without people”.

    'What comes after eleven?' 'I can't do it. I don't have enough fingers to count on!' Tiger hands him a baseball glove. 'Use this.'

    Bud Blake’s Tiger for the 3rd of May, 2016. How did Punkinhead get up to eleven?

    Bud Blake’s Tiger for the 3rd is a cute little kids-learning-to-count thing. I suppose it doesn’t really need to be here. But Punkinhead looks so cute wearing his tie dangling down onto the floor, the way kids wear their ties these days.

    Tony Murphy’s It’s All About You for the 3rd name-drops algebra. I think what the author really wanted here was arithmetic, if the goal is to figure out the right time based on four clocks. They seem to be trying to do a simple arithmetic mean of the time on the four clocks, which is fair if we make some assumptions about how clocks drift away from the correct time. Mostly those assumptions are that the clocks all started right and are equally likely to drift backwards or forwards, and do that drifting at the same rate. If some clocks are more reliable than others, then, their claimed time should get more weight than the others. And something like that must be at work here. The mean of 7:56, 8:02, 8:07, and 8:13, uncorrected, is 8:04 and thirty seconds. That’s not close enough to 8:03 “and five-eighths” unless someone’s been calculating wrong, or supposing that 8:02 is more probably right than 8:13 is.
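    The arithmetic mean of clock readings is easiest to take in minutes past midnight. A quick check of that 8:04-and-thirty-seconds figure:

```python
def mean_time(readings):
    # Average (hour, minute) clock readings via minutes past midnight.
    minutes = [60 * h + m for h, m in readings]
    avg = sum(minutes) / len(minutes)
    return divmod(avg, 60)  # back to (hours, minutes)

print(mean_time([(7, 56), (8, 2), (8, 7), (8, 13)]))  # (8.0, 4.5), i.e. 8:04:30
```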

  • Joseph Nebus 3:00 pm on Saturday, 30 April, 2016 Permalink | Reply
    Tags: algebra, , , ,   

    Reading the Comics, April 27, 2016: Closing The Month (April) Out Edition 

    I concede this isn’t a set of mathematically-themed comics that inspires deep discussions. That’s all right. It’s got three that I can give pictures for, which is important. Also it means I can wrap up April with another essay. This gives me two months in a row of posting something every day, and I’d have bet that couldn’t happen.

    Ted Shearer’s Quincy for the 1st of March, 1977, rerun the 25th of April, is not actually a “mathematics is useless in the real world” comic strip. It’s more about the uselessness of any school stuff in the face of problems like the neighborhood bully. Arithmetic just fits on the blackboard efficiently. There’s some sadness in the setting. There’s also some lovely artwork, though, and it’s worth noticing it. The lines are nice and expressive, and the greyscale wash well-placed. It’s good to look at.

    'I've got a question, Miss Reid, about this stuff we're learning in school.' 'What's that, Quincy?' Quincy points to the bully out the window. 'How's it gonna help us out in the real world?'

    Ted Shearer’s Quincy for the 1st of March, 1977, rerun the 25th of April, 2016. I just noticed ‘Miss Reid’ is probably a funny-character-name.

    dro-mo for the 26th baffles me a bit; I admit I’m not sure what exactly is going on. I suppose it’s a contest to describe the most interesting geometric shape. I believe the fourth panel is meant to be a representation of the tesseract, the four-dimensional analog of the cube. This causes me to realize I don’t remember any illustrations of a five-dimensional hypercube. Wikipedia has a couple, but they’re a bit disappointing. They look like the four-dimensional cube with some more lines. Maybe it has some more flattering angles somewhere.

    Bill Amend’s FoxTrot for the 26th (a rerun from the 3rd of May, 2005) poses a legitimate geometry problem. Amend likes to do this. It was one of the things that first attracted me to the comic strip, actually, that his mathematics or physics or computer science jokes were correct. “Determine the sum of the interior angles for an N-sided polygon” makes sense. The commenters at Gocomics.com are quick to say what the sum is. If there are N sides, the interior angles sum up to (N – 2) times 180 degrees. I believe the commenters misread the question. “Determine”, to me, implies explaining why the sum is given by that formula. That’s a more interesting question and I think still reasonable for a freshman in high school. I would do it by way of triangles.
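    The formula itself is quick to state in code, though as I say the interesting part is the why: the N-sided polygon splits into N - 2 triangles drawn from one vertex, and each triangle carries 180 degrees. A trivial sketch:

```python
def interior_angle_sum(n):
    # An n-sided polygon splits into n - 2 triangles from one vertex,
    # and each triangle's interior angles sum to 180 degrees.
    if n < 3:
        raise ValueError("need at least three sides")
    return (n - 2) * 180

print(interior_angle_sum(3))   # 180
print(interior_angle_sum(4))   # 360
print(interior_angle_sum(12))  # 1800
```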

    BEIRB -OO-O; CINEM OOO--; CCHITE O--OO-; RUSPRE O---OO. Two, Three, Five, and Seven will always be -- ----- -----.

    David L Hoyt and Jeff Knurek’s Jumble for the 27th of April, 2016. I bet the link’s already expired by the time you read this.

    David L Hoyt and Jeff Knurek’s Jumble for the 27th of April gives us another arithmetic puzzle. As often happens, you can solve the surprise-answer by looking hard at the cartoon and picking up the clues from there. And it gives us an anthropomorphic-numerals gag for this collection.

    Bill Holbrook’s On The Fastrack for the 28th of April has the misanthropic Fi explain some of the glories of numbers. As she says, they can be reliable, consistent partners. If you have learned something about ‘6’, then it not only is true, it must be true, at least if we are using ‘6’ to mean the same thing. This is the sort of thing that transcends ordinary knowledge and that’s so wonderful about mathematics.

    Fi's STEM talk: 'Numbers are reliable. They're consistent. They don't lie. They don't betray you. They don't pretend to be something they're NOT. x and y, on the other hand, are shifty little goobers.'

    Bill Holbrook’s On The Fastrack for the 28th of April, 2016. I don’t know why the ‘y’ would be in kind-of-cursive while the ‘x’ isn’t, but you do see this sort of thing a fair bit in normal mathematics.

    Fi describes ‘x’ and ‘y’ as “shifty little goobers”, which is a bit unfair. ‘x’ and ‘y’ are names we give to numbers when we don’t yet know what values they have, or when we don’t care what they have. We’ve settled on those names mostly in imitation of René Descartes. Trying to do without names is a mess. You can do it, but it’s rather like novels in which none of the characters has a name. The most skilled writers can carry that off. The rest of us make a horrid mess. So we give placeholder names. Before ‘x’ and ‘y’ mathematicians would use names like ‘the thing’ (well, ‘re’) or ‘the heap’. Anything that the quantity we talk about might measure. It’s done better that way.

  • Joseph Nebus 3:00 pm on Thursday, 31 March, 2016 Permalink | Reply
    Tags: algebra, ,   

    Reading the Comics, March 30, 2016: Official At-Bat Edition 

    Comic Strip Master Command slowed down the pace at which the newspaper comics were to talk mathematical subjects. All right, that’s their prerogative. But it leaves me here, at Thursday, with slightly too few comics for my tastes. On the other hand, if I don’t run with what I have, I might not have anything to post for the 31st of March, and it would be a shame to go this whole month with something posted every day only to spoil it on the 31st. This is a pretty juvenile reason to do a thing, so here we are. Enjoy, please.

    Tom Thaves’s Frank and Ernest for the 25th of March is a students-grumbling joke. I’m not sure what to make of the argument “arithmetic might be education, but that algebra stuff is indoctrination”. I imagine it reflects the feeling that the rules of arithmetic are all these nice straightforward things, and then algebra’s rules seem a bewildering set of near-gibberish. I can understand people looking at the quadratic formula, being told it has something to do with parabolas and an axis, throwing up their hands, and declaring it all this crazy game they’ll never play.

    What people are forgetting in this is that everything sounds like this crazy gibberish game at first. The confusion you felt when first trying to factor a quadratic polynomial? It’s the same confusion you felt when first doing long division. And when you first multiplied a three-digit by a two-digit number. And when you had to subtract with borrowing. It’s also the same confusion you have when you first hear the first European settlement of Manhattan was driven by the Netherlands’ war for independence from Spain. Learning is changing the baffling confusion of life into an understandable pattern.

    Which is not to deny that we could do a better job motivating stuff. You have no idea how many drafts of the Dedekind Domain essay I threw out because there were just too many words describing conditions and not why any of them mattered. I’m lazy; I don’t like scrapping that much text. And I’m still not quite happy with Normal Groups.

    Jef Mallett’s Frazz for the 27th is an easier joke to explain. It’s also one whose appeal I really understand. There is a compelling beauty to the notation and the symbols of higher mathematics. I remember, as a kid, peering at one of my parents’ calculus textbooks. The reference page of common integrals was enchanting. It wasn’t the only thing that drove me towards mathematics. But the aesthetic beauty is there.

    And it’s not just mathematicians and mathematics-based fields that see it. The arts editor for my undergraduate school’s unread leftist weekly newspaper asked me to work out a problem, any problem, to include as graphic arts. I was happy to. (I was the managing editor for the paper at the time.) I even had a great problem, from the final exam in my freshman Classical Mechanics course. The problem was to derive the equivalent of Kepler’s Laws of Motion with a different force law. Instead of the inverse-square attraction of gravity we used the exponential-decay-style interactions of the weak force. It was a brilliant exam question, frankly, and made for a page of symbols that maybe nobody understood but that I’ll bet everyone thought pretty.

    John Forgetta and L A Rose’s The Meaning of Lila for the 27th is probably a rerun. The strip mostly is, although a few new or updated comics are fit into the rotation. It’s an example of a census joke, in which you classify away the whole population of the world. I remember first seeing it, as a kid, in a church bulletin. That one worked out how the entire working population of the United States was actually only two people and that’s why you’re always so tired. You could probably use the logic of this sort of joke to teach Venn diagrams. The logic that produces a funny low count relies on counting people several times, once for each of many categories they might fit in.

    Mark Anderson’s Andertoons for the 30th made me giggle. I suppose there’s an essay to be written about whether we need mathematics, and what we need it for. But wouldn’t that just take away from the fun of it?

  • Joseph Nebus 3:00 pm on Wednesday, 30 March, 2016 Permalink | Reply
    Tags: , algebra, , , , subgroups   

    A Leap Day 2016 Mathematics A To Z: Normal Subgroup 

    The Leap Day Mathematics A to Z term today is another abstract algebra term. This one again comes from Gaurish, chief author of the Gaurish4Math blog. Part of it is going to be easy. Part of it is going to need a running start.

    Normal Subgroup.

    The “subgroup” part of this is easy. Remember that a “group” means a collection of things and some operation that lets us combine them. We usually call that either addition or multiplication. We usually write it out like it’s multiplication. If a and b are things from the collection, we write “ab” to mean adding or multiplying them together. (If we had a ring, we’d have something like addition and something like multiplication, and we’d be able to do “a + b” or “ab” as needed.)

    So with that in mind, the first thing you’d imagine a subgroup to be? That’s what it is. It’s a collection of things, all of which are in the original group, and that uses the same operation as the original group. For example, if the original group has a set that’s the whole numbers and the operation of addition, a subgroup would be the even numbers and the same old addition.

    Now things will get much clearer if I have names. Let me use G to mean some group. This is a common generic name for a group. Let me use H as the name for a subgroup of G. This is a common generic name for a subgroup of G. You see how deeply we reach to find names for things. And we’ll still want names for elements inside groups. Those are almost always lowercase letters: a and b, for example. If we want to make clear it’s something from G’s set, we might use g. If we want to make clear it’s something from H’s set, we might use h.

    I need to tax your imagination again. Suppose “g” is some element in G’s set. What would you imagine the symbol “gH” means? No, imagine something simpler.

    Mathematicians call this “left-multiplying H by g”. What we mean is, take every single element h that’s in the set H, and find out what gh is. Then take all these products together. That’s the set “gH”. This might be a subgroup. It might not. No telling. Not without knowing what G is, what H is, what g is, and what the operation is. And we call it left-multiplying even if the operation is called addition or something else. It’s just easier to have a standard name even if the name doesn’t make perfect sense.

    That we named something left-multiplying probably inspires a question. Is there right-multiplying? Yes, there is. We’d write that as “Hg”. And that means take every single element h that’s in the set H, and find out what hg is. Then take all these products together.

    You see the subtle difference between left-multiplying and right-multiplying. In the one, you multiply everything in H on the left. In the other, you multiply everything in H on the right.

    So. Take anything in G. Let me call that g. If it’s always, necessarily, true that the left-product, gH, is the same set as the right-product, Hg, then H is a normal subgroup of G.

    The mistake mathematics majors make in doing this: we need the set gH to be the same as the set Hg. That is, the whole collection of products has to be the same for left-multiplying as right-multiplying. Nobody cares whether for any particular thing, h, inside H whether gh is the same as hg. It doesn’t matter. It’s whether the whole collection of things is the same that counts. I assume every mathematics major makes this mistake. I did, anyway.
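If you’d like to see the sets-not-elements distinction worked out by a computer, here’s a little Python sketch. The group and subgroups in it are my own example picks, nothing from the essay above: the six permutations of three things, with composition as the operation.

```python
from itertools import permutations

def compose(p, q):
    """Apply q first, then p. This is the group operation, written like multiplication."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))        # all six permutations
H = [(0, 1, 2), (1, 0, 2)]              # the identity plus one swap: a subgroup

g = (1, 2, 0)                           # an element of G that isn't in H
gH = {compose(g, h) for h in H}         # left-multiply everything in H by g
Hg = {compose(h, g) for h in H}         # right-multiply everything in H by g
print(gH == Hg)                         # False: this H is not a normal subgroup

K = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]   # the three "rotations": another subgroup
s = (1, 0, 2)
print({compose(s, k) for k in K} == {compose(k, s) for k in K})   # True
```

The same sK = Ks check passes for every s you pick out of G, which is what makes K a normal subgroup. The two-element H fails for this one g, and one failure is all it takes.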

    The natural thing to wonder here: how can the set gH ever not be the same as Hg? For that matter, how can a single product gh ever not be the same as hg? Do mathematicians just forget how multiplication works?

    Technically speaking no, we don’t. We just want to be able to talk about operations where maybe the order does too matter. With ordinary regular-old-number addition and multiplication the order doesn’t matter. gh always equals hg. We say this “commutes”. And if the operation for a group commutes, then every subgroup is a normal subgroup.

    But sometimes we’re interested in things that don’t commute. Or that we can’t assume commute. The example every algebra book uses for this is three-dimensional rotations. Set your algebra book down on a table. If you don’t have an algebra book you may use another one instead. I recommend Christopher Miller’s American Cornball: A Laffopedic Guide To The Formerly Funny. It’s a fine guide to all sorts of jokes that used to amuse and what was supposed to be amusing about them. If you don’t have a table then I don’t know what to suggest.

    Spin the book clockwise on the table and then stand it up on the edge nearer you. Then try again. Put the book back where it started. Stand it up on the edge nearer you and then spin it clockwise on the table. The book faces a different way this time around. (If it doesn’t, you spun too much. Try again until you get the answer I said.)

    Three-dimensional rotations like this form a group. The different ways you can turn something are the elements of its set. The operation between two rotations is just to do one and then the other, in order. But they don’t commute, not most of the time. So they can have a subgroup that isn’t normal.

    You may believe me now that such things exist. Now you can move on to wondering why we should care.

Let me start by saying every group has at least two normal subgroups. Whatever your group G is, there’s a subgroup that’s made up just of the identity element and the group’s operation. The identity element is the thing that acts like 1 does for multiplication. You can multiply stuff by it and you get the same thing you started with. The identity and the operation make a subgroup. And you’ll convince yourself that it’s a normal subgroup as soon as you write down g1 = 1g.

    (Wait, you might ask! What if multiplying on the left has a different identity than multiplying on the right does? Great question. Very good insight. You’ve got a knack for asking good questions. If we have that then we’re working with a more exotic group-like mathematical object, so don’t worry.)

    So the identity, ‘1’, makes a normal subgroup. Here’s another normal subgroup. The whole of G qualifies. (It’s OK if you feel uneasy. Think it over.)

    So ‘1’ is a normal subgroup of G. G is a normal subgroup of G. They’re boring answers. We know them before we even know anything about G. But they qualify.

    Does this sound familiar any? We have a thing. ‘1’ and the original thing subdivide it. It might be possible to subdivide it more, but maybe not.

    Is this all … factoring?

    Please here pretend I make a bunch of awkward faces while trying not to say either yes or no. But if H is a normal subgroup of G, then we can write something G/H, just like we might write 4/2, and that means something.

That G/H we call a quotient group. It’s a group in its own right, sure. As to what it is … well, let me go back to examples.

    Let’s say that G is the set of whole numbers and the operation of ordinary old addition. And H is the set of whole numbers that are multiples of 4, again with addition. So the things in H are 0, 4, 8, 12, and so on. Also -4, -8, -12, and so on.

    Suppose we pick things in G. And we use the group operation on the set of things in H. How many different sets can we get out of it? So for example we might pick the number 1 out of G. The set 1 + H is … well, list all the things that are in H, and add 1 to them. So that’s 1 + 0, 1 + 4, 1 + 8, 1 + 12, and 1 + -4, 1 + -8, 1 + -12, and so on. All told, it’s a bunch of numbers one more than a whole multiple of 4.

    Or we might pick the number 7 out of G. The set 7 + H is 7 + 0, 7 + 4, 7 + 8, 7 + 12, and so on. It’s also got 7 + -4, 7 + -8, 7 + -12, and all that. These are all the numbers that are three more than a whole multiple of 4.

    We might pick the number 8 out of G. This happens to be in H, but so what? The set 8 + H is going to be 8 + 0, 8 + 4, 8 + 8 … you know, these are all going to be multiples of 4 again. So 8 + H is just H. Some of these are simple.

    How about the number 3? 3 + H is 3 + 0, 3 + 4, 3 + 8, and so on. The thing is, the collection of numbers you get by 3 + H is the same as the collection of numbers you get by 7 + H. Both 3 and 7 do the same thing when we add them to H.

Fiddle around with this and you realize there are only four possible different sets you get out of this. You can get 0 + H, 1 + H, 2 + H, or 3 + H. Any other numbers in G give you a set that looks exactly like one of those. So we can speak of 0, 1, 2, and 3 as being a new group, the “quotient group” that you get by G/H. (This looks more like remainders to me, too, but that’s the terminology we have.)
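If you’d like to do the fiddling with a computer instead, here’s a small Python sketch of the same check. The function names are my own; the mathematical content is just that two numbers give the same set exactly when their difference lands in H.

```python
# G is the integers with addition; H is the multiples of 4.

def same_coset(a, b, m=4):
    """Does a + H equal b + H, for H the multiples of m?"""
    return (a - b) % m == 0

print(same_coset(3, 7))    # True: 3 + H and 7 + H are the same set
print(same_coset(1, 7))    # False: those two sets share nothing
print(same_coset(8, 0))    # True: 8 + H is just H again

# One canonical representative per coset: the quotient group G/H
reps = sorted({g % 4 for g in range(-12, 13)})
print(reps)                # [0, 1, 2, 3]
```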

    But we can do something like this with any group and any normal subgroup of that group. The normal subgroup gives us a way of picking out a representative set of the original group. That set shows off all the different ways we can manipulate the normal subgroup. It tells us things about the way the original group is put together.

    Normal subgroups are not just “factors, but for groups”. They do give us a way to see groups as things built up of other groups. We can see structures in sets of things.

    • gaurish 4:23 pm on Wednesday, 30 March, 2016 Permalink | Reply

Why are you writing “G is the set of whole numbers and the operation of ordinary old addition” but you consider negative numbers while studying subgroups? Whole numbers are 0, 1, 2, 3, … and don’t have inverse elements for this operation.

      I agree with: “The mistake mathematics majors make in doing this: we need the set gH to be the same as the set Hg….Nobody cares whether for any particular thing, h, inside H whether gh is the same as hg. It doesn’t matter…. assume every mathematics major makes this mistake. I did, anyway..”


      • Joseph Nebus 6:18 pm on Monday, 4 April, 2016 Permalink | Reply

Well, that’s just a mistake on my part. I’d been trying to write “whole numbers” instead of “integers” so as to sound less technical. And then I failed to specify I mean positive and negative whole numbers, probably because I knew what I meant to write and didn’t notice that I failed to. But thanks for spotting and asking; that’s what engaged readers are for.

        I don’t know how many introduction-to-algebra problems I made harder for myself by looking for gh equalling hg instead of gH equalling Hg. It’s a level of abstraction I needed to work up to.


    • davekingsbury 9:41 pm on Thursday, 31 March, 2016 Permalink | Reply

      Yeah, but what would be the point of belonging to one?


      • Joseph Nebus 6:18 pm on Monday, 4 April, 2016 Permalink | Reply

        Oh, I don’t know. It’s nice to have something to be a member of, isn’t it?


  • Joseph Nebus 3:00 pm on Monday, 28 March, 2016 Permalink | Reply
    Tags: , algebra, , , ,   

    A Leap Day 2016 Mathematics A To Z: Matrix 

    I get to start this week with another request. Today’s Leap Day Mathematics A To Z term is a famous one, and one that I remember terrifying me in the earliest days of high school. The request comes from Gaurish, chief author of the Gaurish4Math blog.


    Lewis Carroll didn’t like the matrix. Well, Charles Dodgson, anyway. And it isn’t that he disliked matrices particularly. He believed it was a bad use of a word. “Surely,” he wrote, “[ matrix ] means rather the mould, or form, into which algebraical quantities may be introduced, than an actual assemblage of such quantities”. He might have had etymology on his side. The word meant the place where something was developed, the source of something else. History has outvoted him, and his preferred “block”. The first mathematicians to use the word “matrix” were interested in things derived from the matrix. So for them, the matrix was the source of something else.

What we mean by a matrix is a collection of some number of rows and columns. At each spot where a row and a column cross there’s some mathematical entity. We call this an element. Elements are almost always real numbers. When they’re not real numbers they’re complex-valued numbers. (I’m sure somebody, somewhere has created matrices with something else as elements. You’ll never see these freaks.)

    Matrices work a lot like vectors do. We can add them together. We can multiply them by real- or complex-valued numbers, called scalars. But we can do other things with them. We can define multiplication, at least sometimes. The definition looks like a lot of work, but it represents something useful that way. And for square matrices, ones with equal numbers of rows and columns, we can find other useful stuff. We give that stuff wonderful names like traces and determinants and eigenvalues and eigenvectors and such.

    One of the big uses of matrices is to represent a mapping. A matrix can describe how points in a domain map to points in a range. Properly, a matrix made up of real numbers can only describe what are called linear mappings. These are ones that turn the domain into the range by stretching or squeezing down or rotating the whole domain the same amount. A mapping might follow different rules in different regions, but that’s all right. We can write a matrix that approximates the original mapping, at least in some areas. We do this in the same way, and for pretty much the same reason, we can approximate a real and complicated curve with a bunch of straight lines. Or the way we can approximate a complicated surface with a bunch of triangular plates.

We can compound mappings. That is, we can start with a domain and a mapping, and find the image of that domain. We can then use a mapping again and find the image of the image of that domain. The matrix that describes this mapping-of-a-mapping is the one you get by multiplying the matrix of the first mapping and the matrix of the second mapping together. This is why we define matrix multiplication the odd way we do. Mappings are that useful, and matrices are that tied to them.
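We can check that claim numerically, if you like that kind of reassurance. Here’s a short Python sketch with two made-up mappings of the plane; everything in it is my own example.

```python
def apply(M, v):
    """Apply a 2x2 matrix M to a point v of the plane."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def matmul(A, B):
    """The product of two 2x2 matrices, defined the odd way we define it."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

rotate = [[0, -1], [1, 0]]     # turn the plane a quarter-circle
stretch = [[2, 0], [0, 1]]     # double everything in the x direction

v = [3, 1]
one_then_other = apply(stretch, apply(rotate, v))   # rotate, then stretch
combined = apply(matmul(stretch, rotate), v)        # one matrix does both
print(one_then_other, combined)                     # [-2, 3] [-2, 3]
```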

    I wrote about some of the uses of matrices in a Set Tour essay. That was based on a use of matrices in physics. We can describe the changing of a physical system with a mapping. And we can understand equilibriums, states where a system doesn’t change, by looking at the matrix that approximates what the mapping does near but not exactly on the equilibrium.

    But there are other uses of matrices. Many of them have nothing to do with mappings or physical systems or anything. For example, we have graph theory. A graph, here, means a bunch of points, “vertices”, connected by curves, “edges”. Many interesting properties of graphs depend on how many other vertices each vertex is connected to. And this is well-represented by a matrix. Index your vertices. Then create a matrix. If vertex number 1 connects to vertex number 2, put a ‘1’ in the first row, second column. If vertex number 1 connects to vertex number 3, put a ‘1’ in the first row, third column. If vertex number 2 isn’t connected to vertex number 3, put a ‘0’ in the second row, third column. And so on.

    We don’t have to use ones and zeroes. A “network” is a kind of graph where there’s some cost associated with each edge. We can put that cost, that number, into the matrix. Studying the matrix of a graph or network can tell us things that aren’t obvious from looking at the drawing.
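The recipe above is short enough to write out whole. A Python sketch, using a little graph I made up for the purpose: vertex 0 connects to 1 and 2, and vertex 1 also connects to 3.

```python
edges = [(0, 1), (0, 2), (1, 3)]
n = 4
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = 1
    A[j][i] = 1            # undirected graph: the connection goes both ways

for row in A:
    print(row)

# Summing a row counts how many other vertices that vertex connects to
print([sum(row) for row in A])    # [2, 2, 1, 1]
```

For a network you’d put the edge’s cost where the 1 goes, and that’s the whole difference.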

    • howardat58 3:19 pm on Monday, 28 March, 2016 Permalink | Reply

It should be noted that a matrix as operator or function usually gets its numbers from the coefficients of a bunch of linear functions which represent the operator.

      And for next year, how about “E” for envelopes of families of straight lines, eg normals to a curve?

      Liked by 1 person

      • Joseph Nebus 5:20 am on Wednesday, 30 March, 2016 Permalink | Reply

        This is true, and it is worth noting. I ended up being vague to the point of useless in saying where the things in a matrix might come from.

        Envelopes might work. I need to get a better-organized list together for the next season.


    • gaurish 4:20 pm on Monday, 28 March, 2016 Permalink | Reply

      Nice explanation, especially graph theory part. :)
      My favourite matrix is Wrońskian matrix (http://specfun.inria.fr/bostan/publications/BoDu10.pdf)


      • Joseph Nebus 5:23 am on Wednesday, 30 March, 2016 Permalink | Reply

        Oh, now, the Wrońskian was such a delight to learn, mostly for the fun of saying it. Calculating it was a nightmare, at least back when I was a major and we were doing all this by hand. A lot of hand.


  • Joseph Nebus 3:00 pm on Monday, 21 March, 2016 Permalink | Reply
    Tags: , algebra, , ,   

    A Leap Day 2016 Mathematics A To Z: Jacobian 

    I don’t believe I got any requests for a mathematics term starting ‘J’. I’m as surprised as you. Well, maybe less surprised. I’ve looked at the alphabetical index for Wolfram MathWorld and noticed its relative poverty for ‘J’. It’s not as bad as ‘X’ or ‘Y’, though. But it gives me room to pick a word of my own.


    The Jacobian is named for Carl Gustav Jacob Jacobi, who lived in the first half of the 19th century. He’s renowned for work in mechanics, the study of mathematically modeling physics. He’s also renowned for matrices, rectangular grids of numbers which represent problems. There’s more, of course, but those are the points that bring me to the Jacobian I mean to talk about. There are other things named for Jacobi, including other things named “Jacobian”. But I mean to limit the focus to two, related, things.

    I discussed mappings some while describing homomorphisms and isomorphisms. A mapping’s a relationship matching things in one set, a domain, to things in a set, the range. The domain and the range can be anything at all. They can even be the same thing, if you like.

A very common domain is … space. Like, the thing you move around in. It’s a region full of points that are all some distance and some direction from one another. There’s almost always assumed to be multiple directions possible. We often call this “Euclidean space”. It’s the space that works like we expect for normal geometry. We might start with a two- or three-dimensional space. But it’s often convenient, especially for physics problems, to work with more dimensions. Four dimensions. Six dimensions. Incredibly huge numbers of dimensions. Honest, this often helps. It’s just harder to sketch out.

    So we might for a problem need, say, 12-dimensional space. We can describe a point in that with an ordered set of twelve coordinates. Each describes how far you are from some standard reference point known as The Origin. If it doesn’t matter how many dimensions we’re working with, we call it an N-dimensional space. Or we use another letter if N is committed to something or other.

    This is our stage. We are going to be interested in some N-dimensional Euclidean space. Let’s pretend N is 2; then our stage looks like the screen you’re reading now. We don’t need to pretend N is larger yet.

    Our player is a mapping. It matches things in our N-dimensional space back to the same N-dimensional space. For example, maybe we have a mapping that takes the point with coordinates (3, 1) to the point (-3, -1). And it takes the point with coordinates (5.5, -2) to the point (-5.5, 2). And it takes the point with coordinates (-6, -π) to the point (6, π). You get the pattern. If we start from the point with coordinates (x, y) for some real numbers x and y, then the mapping gives us the point with coordinates (-x, -y).

    One more step and then the play begins. Let’s not just think about a single point. Think about a whole region. If we look at the mapping of every point in that whole region, we get out … probably, some new region. We call this the “image” of the original region. With the mapping from the paragraph above, it’s easy to say what the image of a region is. It’ll look like the reflection in a corner mirror of the original region.

    What if the mapping’s more complicated? What if we had a mapping that described how something was reflected in a cylindrical mirror? Or a mapping that describes how the points would move if they represent points of water flowing around a drain? — And that last explains why Jacobians appear in mathematical physics.

    Many physics problems can be understood as describing how points that describe the system move in time. The dynamics of a system can be understood by how moving in time changes a region of starting conditions. A system might keep a region pretty much unchanged. Maybe it makes the region move, but it doesn’t change size or shape much. Or a system might change the region impressively. It might keep the area about the same, but stretch it out and fold it back, the way one might knead cookie dough.

The Jacobian, the one I’m interested in here, is a way of measuring these changes. The Jacobian matrix describes, for each point in the original domain, how a tiny change in one coordinate causes a change in the mapping’s coordinates. So if we have a mapping from an N-dimensional space to an N-dimensional space, there are going to be N times N values at work. Each one represents a different piece. How much does a tiny change in the first coordinate of the original point change the first coordinate of the mapping of the point? How much does a tiny change in the first coordinate of the original point change the second coordinate of the mapping of the point? How much does a tiny change in the first coordinate of the original point change the third coordinate of the mapping of the point? … how much does a tiny change in the second coordinate of the original point change the first coordinate of the mapping of the point? And on and on and now you know why mathematics majors are trained on Jacobians with two-by-two and three-by-three matrices. We do maybe a couple four-by-four matrices to remind us that we are born to suffer. We never actually work out bigger matrices. Life is just too short.

    (I’ve been talking, by the way, about the mapping of an N-dimensional space to an N-dimensional space. This is because we’re about to get to something that requires it. But we can write a matrix like this for a mapping of an N-dimensional space to an M-dimensional space, a different-sized space. It has uses. Let’s not worry about that.)

    If you have a square matrix, one that has as many rows as columns, then you can calculate something named the determinant. This involves a lot of work. It takes even more work the bigger the matrix is. This is why mathematics majors learn to calculate determinants on two-by-two and three-by-three matrices. We do a couple four-by-four matrices and maybe one five-by-five to again remind us about suffering.

    Anyway, by calculating the determinant of a Jacobian matrix, we get the Jacobian determinant. Finally we have something simple. The Jacobian determinant says how the area of a region changes in the mapping. Suppose the Jacobian determinant at a point is 2. Then a small region containing that point has an image with twice the original area. Suppose the Jacobian determinant is 0.8. Then a small region containing that point has an image with area 0.8 times the original area. Suppose the Jacobian determinant is -1. Then —

    Well, what would you imagine?

    If the Jacobian determinant is -1, then a small region around that point gets mapped to something with the same area. What changes is called the handedness. The mapping doesn’t just stretch or squash the region, but it also flips it along at least one dimension. The Jacobian determinant can tell us that.
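Those tiny changes can be estimated numerically if you want to play with this. Here’s a Python sketch; the mappings in it are my own examples, and the scheme is the usual centered finite difference, nothing fancier.

```python
def jacobian_det(f, x, y, eps=1e-6):
    """Estimate the Jacobian determinant of a plane mapping f at (x, y)."""
    u1, v1 = f(x + eps, y)
    u0, v0 = f(x - eps, y)
    du_dx = (u1 - u0) / (2 * eps)      # change in first output per change in x
    dv_dx = (v1 - v0) / (2 * eps)      # change in second output per change in x
    u1, v1 = f(x, y + eps)
    u0, v0 = f(x, y - eps)
    du_dy = (u1 - u0) / (2 * eps)      # change in first output per change in y
    dv_dy = (v1 - v0) / (2 * eps)      # change in second output per change in y
    return du_dx * dv_dy - du_dy * dv_dx

def stretch(x, y):
    return (2 * x, 3 * y)              # areas should grow by a factor of 6

def flip(x, y):
    return (-x, y)                     # same area, but the handedness changes

print(round(jacobian_det(stretch, 1.0, 1.0), 6))   # 6.0
print(round(jacobian_det(flip, 1.0, 1.0), 6))      # -1.0
```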

    So the Jacobian matrix, and the Jacobian determinant, are ways to describe how mappings change areas. Mathematicians will often call either of them just “the Jacobian”. We trust context to make clear what we mean. Either one is a way of describing how mappings change space: how they expand or contract, how they rotate, how they reflect spaces. Some fields of mathematics, including a surprising amount of the study of physics, are about studying how space changes.

    • howardat58 3:59 pm on Monday, 21 March, 2016 Permalink | Reply

      Well done ! That was going to be a hard one.

      Liked by 2 people

      • Joseph Nebus 2:59 am on Thursday, 24 March, 2016 Permalink | Reply

        I was sweating over how to explain it! I seem to be doing better the farther I get from definitions, though.


    • gaurish 4:04 pm on Monday, 21 March, 2016 Permalink | Reply

      Finally I know what a Jacobian is (heard a lot about it from my Physics major friends) :)

      Liked by 1 person

      • Joseph Nebus 3:06 am on Thursday, 24 March, 2016 Permalink | Reply

Glad to be of service. The Jacobian also turns up when you make a change of variables for vector-valued variables. This is really the same thing as what I spent most of my time talking about. But it’s got different connotations.


    • elkement (Elke Stangl) 7:24 am on Tuesday, 5 April, 2016 Permalink | Reply

      I am stunned again and again by how you can explain such things so easily without a single formula or figure! Great post!

      Liked by 1 person

      • Joseph Nebus 1:58 am on Saturday, 9 April, 2016 Permalink | Reply

        Oh, goodness, thank you so.

        Formulas I do try to limit, but that’s mostly because I feel like I can’t make WordPress’s LaTeX engine print them large enough to be cleanly read. (Is there some trick I’m not getting to producing equations on their own lines rather than as tiny inline things?) Figures, now, those I’d include more of except I get terribly lazy. It’s less effort to write another 200 words than it is to get to ArtRage on the iPad and finish something there. And even that wouldn’t be so bad except that lettering is so awful without a nice firm-pointed stylus.

        Liked by 1 person

        • elkement (Elke Stangl) 6:26 am on Sunday, 10 April, 2016 Permalink | Reply

          Re LaTex equations: I (nearly) always put them on a separate line, just by hitting Enter and starting a new paragraph. I also find them a bit too small sometimes, but there is a parameter ‘s’ that can be added at the end of the string in the editor, like ‘&s=2’, before the $ sign at the end of the LaTex string.
In this post of mine the first three equations (math-y part is the ‘Appendix’, the second 1000 of 2000 words ;-)) are in size 2, then I switched to normal size 1 (or no s parameter) as the exponential function would have looked too big in size 2:
          Sorry for posting the link, I hope it is not too ‘spammy’ but I could not resist. I wanted to write this for a long time but had postponed it as I was not sure about how not to intimidate readers too much. Finally I came up with the ‘Appendix’ idea. There is a saying among ‘science writers’: Every equation halves your number of readers, that’s why I admire your style…


          • Joseph Nebus 2:38 am on Friday, 15 April, 2016 Permalink | Reply

            Ooh, now, thank you. That’s just the sort of thing I hoped for. Your size-2 equations are about the right size for my tastes.

Also nothing to apologize for. I’m always happy to see interesting articles posted here and there’s the obvious relevance.

            The appendix is probably the best workable solution between writing to a mass audience and wanting to show one’s work. I admit I do like trying to write without equations or diagrams, but that’s just laziness. I can type fast enough that a couple hundred extra words are nothing. At least not on my part.

            Liked by 1 person

  • Joseph Nebus 6:00 pm on Sunday, 20 March, 2016 Permalink | Reply
    Tags: algebra, , otr, Vic and Sade   

    Algebra, Explained 

    It always feels odd to toss folks from my mathematics to my humor blog. I suppose it only sometimes seems on-point. Last week, though, I ran a series of essays about the old-time radio series Vic and Sade. One of them, happening to star neither Vic nor Sade, was all about Uncle Fletcher trying to explain algebra, or arithmetic, or something or other. The radio program won’t be to everyone’s tastes. It had a very dry style, closer in tone to a modern one-camera sitcom than anything where there’s a studio audience and easy-to-quote patter. And it does start with an interminable advertisement for sponsor Crisco. (It also includes a contest that adds to the announcement’s length. I assume we’ve missed the contest deadline.) But past the first three minutes and twenty seconds you get some fine mathematics exposition. I hope you enjoy.

  • Joseph Nebus 3:00 pm on Friday, 18 March, 2016 Permalink | Reply
    Tags: , , algebra, , , , , , ,   

    A Leap Day 2016 Mathematics A To Z: Isomorphism 

    Gillian B made the request that’s today’s A To Z word. I’d said it would be challenging. Many have been, so far. But I set up some of the work with “homomorphism” last time. As with “homomorphism” it’s a word that appears in several fields and about different kinds of mathematical structure. As with homomorphism, I’ll try describing what it is for groups. They seem least challenging to the imagination.


    An isomorphism is a kind of homomorphism. And a homomorphism is a kind of thing we do with groups. A group is a mathematical construct made up of two things. One is a set of things. The other is an operation, like addition, where we take two of the things and get one of the things in the set. I think that’s as far as we need to go in this chain of defining things.

    A homomorphism is a mapping, or if you like the word better, a function. The homomorphism matches everything in a group to the things in a group. It might be the same group; it might be a different group. What makes it a homomorphism is that it preserves addition.

I gave an example last time, with groups I called G and H. G had as its set the whole numbers 0 through 3 and as operation addition modulo 4. H had as its set the whole numbers 0 through 7 and as operation addition modulo 8. And I defined a homomorphism φ which took a number in G and matched it to the number in H which was twice that. Then for any a and b which were in G’s set, φ(a + b) was equal to φ(a) + φ(b).
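Since G is so small, that defining property is quick to verify exhaustively. A Python sketch of the check (φ is spelled phi here):

```python
# The groups from last time: G is 0 through 3 with addition modulo 4,
# H is 0 through 7 with addition modulo 8, and phi doubles its input.

def phi(a):
    return (2 * a) % 8

ok = all(phi((a + b) % 4) == (phi(a) + phi(b)) % 8
         for a in range(4) for b in range(4))
print(ok)    # True: phi preserves addition, so it's a homomorphism
```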

    We can have all kinds of homomorphisms. For example, imagine my new φ1. It takes whatever you start with in G and maps it to the 0 inside H. φ1(1) = 0, φ1(2) = 0, φ1(3) = 0, φ1(0) = 0. It’s a legitimate homomorphism. Seems like it’s wasting a lot of what’s in H, though.

An isomorphism doesn’t waste anything that’s in H. It’s a homomorphism in which everything in G’s set matches to exactly one thing in H’s, and vice-versa. That is, it’s both a homomorphism and a bijection, to use one of the terms from the Summer 2015 A To Z. The key to remembering this is the “iso” prefix. It comes from the Greek “isos”, meaning “equal”. You can often understand an isomorphism from group G to group H as showing how they’re the same thing. They might be represented differently, but they’re equivalent by the lights you use.

    I can’t make an isomorphism between the G and the H I started with. Their sets are different sizes. There’s no matching everything in H’s set to everything in G’s set without some duplication. But we can make other examples.

    For instance, let me start with a new group G. It’s got as its set the positive real numbers. And it has as its operation ordinary multiplication, the kind you always do. And I want a new group H. It’s got as its set all the real numbers, positive and negative. It has as its operation ordinary addition, the kind you always do.

For an isomorphism φ, take the number x that’s in G’s set. Match it to the number that’s the logarithm of x, found in H’s set. This is a one-to-one pairing: if the logarithm of x equals the logarithm of y, then x has to equal y. And it covers everything: every real number, positive or negative, is the logarithm of some positive real number.

And this is a homomorphism. Take any x and y that are in G’s set. Their “addition”, the group operation, is to multiply them together. So “x + y”, in G, gives us the number xy. (I know, I know. But trust me.) φ(x + y) is equal to log(xy), which equals log(x) + log(y), which is the same number as φ(x) + φ(y). There’s a way to see the positive real numbers being multiplied together as equivalent to all the real numbers being added together.
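A computer can reassure you about this too. A tiny Python sketch of the check, with a couple of numbers I picked arbitrarily:

```python
import math

phi = math.log    # the isomorphism from the paragraph above

x, y = 2.5, 8.0
print(math.isclose(phi(x * y), phi(x) + phi(y)))   # True: multiplying in G
                                                   # matches adding in H

print(math.isclose(math.exp(phi(x)), x))           # True: the pairing reverses,
                                                   # so nothing gets wasted
```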

    You might figure that the positive real numbers and all the real numbers aren’t very different-looking things. Perhaps so. Here’s another example I like, drawn from Wikipedia’s entry on Isomorphism. It has as sets things that don’t seem to have anything to do with one another.

    Let me have another brand-new group G. It has as its set the whole numbers 0, 1, 2, 3, 4, and 5. Its operation is addition modulo 6. So 2 + 2 is 4, while 2 + 3 is 5, and 2 + 4 is 0, and 2 + 5 is 1, and so on. You get the pattern, I hope.

    The brand-new group H, now, that has a more complicated-looking set. Its set is ordered pairs of whole numbers, which I’ll represent as (a, b). Here ‘a’ may be either 0 or 1. ‘b’ may be 0, 1, or 2. To describe its addition rule, let me say we have the elements (a, b) and (c, d). Find their sum first by adding together a and c, modulo 2. So 0 + 0 is 0, 1 + 0 is 1, 0 + 1 is 1, and 1 + 1 is 0. That result is the first number in the pair. The second number we find by adding together b and d, modulo 3. So 1 + 0 is 1, and 1 + 1 is 2, and 1 + 2 is 0, and so on.

    So, for example, (0, 1) plus (1, 1) will be (1, 2). But (0, 1) plus (1, 2) will be (1, 0). (1, 2) plus (1, 0) will be (0, 2). (1, 2) plus (1, 2) will be (0, 1). And so on.

    The isomorphism matches up things in G to things in H this way:

    x, in G     φ(x), in H
    0           (0, 0)
    1           (1, 1)
    2           (0, 2)
    3           (1, 0)
    4           (0, 1)
    5           (1, 2)

    I recommend playing with this a while. Pick any pair of numbers x and y that you like from G. And check their matching ordered pairs φ(x) and φ(y) in H. φ(x + y) is the same thing as φ(x) + φ(y) even though the things in G’s set don’t look anything like the things in H’s.
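If you’d rather let a computer do the playing, here’s a Python sketch that checks the whole table at once. The dictionary `phi` (my name for it) is exactly the matching above; the two loops test every pair x and y in G:

```python
# The matching from the table above: each n in G goes to an ordered pair in H.
phi = {0: (0, 0), 1: (1, 1), 2: (0, 2), 3: (1, 0), 4: (0, 1), 5: (1, 2)}

def add_G(x, y):
    """G's operation: addition modulo 6."""
    return (x + y) % 6

def add_H(p, q):
    """H's operation: first parts modulo 2, second parts modulo 3."""
    return ((p[0] + q[0]) % 2, (p[1] + q[1]) % 3)

# One-to-one and covering everything: six elements, six distinct pairs.
assert len(set(phi.values())) == 6

# Homomorphism: phi(x + y) equals phi(x) + phi(y) for every choice of x and y.
for x in range(6):
    for y in range(6):
        assert phi[add_G(x, y)] == add_H(phi[x], phi[y])
```

Every one of the thirty-six pairs checks out, which is what makes the matching an isomorphism rather than a lucky coincidence.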

    Isomorphisms exist for other structures. The idea extends the way homomorphisms do. A ring, for example, has two operations which we think of as addition and multiplication. An isomorphism matches two rings in ways that preserve the addition and multiplication, and which match everything in the first ring’s set to everything in the second ring’s set, one-to-one. The idea of the isomorphism is that two different things can be paired up so that they look, and work, remarkably like one another.

    One of the common uses of isomorphisms is describing the evolution of systems. We often like to look at how some physical system develops from different starting conditions. If you make a little variation in how things start, does this produce a small change in how it develops, or does it produce a big change? How big? And the description of how time changes the system is, often, an isomorphism.

    Isomorphisms also appear when we study the structures of groups. They turn up naturally when we look at things called “normal subgroups”. The name alone gives you a good idea what a “subgroup” is. “Normal”, well, that’ll be another essay.

    • Gillian B 10:27 pm on Friday, 18 March, 2016 Permalink | Reply


      I knew that, of all things, from an old Dr Who episode, “Pyramids of Mars”. Sutekh (old Egyptian god) wants to use the TARDIS himself, but the Doctor tells him it’s isomorphic – and my mother yelled from the kitchen “I KNOW WHAT THAT MEANS!” (she was about halfway through her maths degree at the time). So thank you! I’m going to pass this on to her, for the memories.


    • Gillian B 2:14 am on Monday, 21 March, 2016 Permalink | Reply

      Allow me to reprint an email I received today:

      “Liz Richards”


      Re: Isomorphism

      Thank you, thank you, thank you. I’ve printed out the isomorphic page.



