Tagged: glossary

  • Joseph Nebus 6:00 pm on Tuesday, 3 January, 2017
    Tags: glossary

    The End 2016 Mathematics A To Z Roundup 


    As is my tradition for the end of these roundups (see Summer 2015 and then Leap Day 2016), I want to just put up a page listing the whole set of articles. It’s a chance for people who missed a piece to easily see what they missed. And it lets me recover that little bit extra from the experience. Run over the past two months were:

     
  • Joseph Nebus 6:00 pm on Wednesday, 9 November, 2016
    Tags: glossary, Josiah Willard Gibbs

    The End 2016 Mathematics A To Z: Distribution (statistics) 


    As I’ve done before, I’m using one of my essays to set up for another essay. It makes the later essay easier. And what I want to talk about is worth some paragraphs on its own.

    Distribution (statistics)

    The 19th Century saw the discovery of some unsettling truths about … well, everything, really. If there is an intellectual theme of the 19th Century it’s that everything has an unsettling side. In the 20th Century craziness broke loose. The 19th Century, though, saw great reasons to doubt that we knew what we knew.

    But one of the unsettling truths grew out of mathematical physics. We start out studying physics the way Galileo or Newton might have, with falling balls. Ones that don’t suffer from air resistance. Then we move up to more complicated problems, like balls on a spring. Or two balls bouncing off each other. Maybe one ball, called a “planet”, orbiting another, called a “sun”. Maybe a ball on a lever swinging back and forth. We try a couple of simple problems with three balls and find out that’s just too hard. We have to track so much information about the balls, about their positions and momentums, that we can’t solve any problems anymore. Oh, we can do the simplest ones, but we’re helpless against the interesting ones.

    And then we discovered something. By “we” I mean people like James Clerk Maxwell and Josiah Willard Gibbs. And that is that we can know important stuff about how millions and billions and even vaster numbers of things move around. Maxwell could work out how the enormously many chunks of rock and ice that make up Saturn’s rings move. Gibbs could work out how the trillions of trillions of trillions of trillions of particles of gas in a room move. We can’t work out how four particles move. How is it we can work out how a godzillion particles move?

    We do it by letting go. We stop looking for that precision and exactitude and knowledge down to infinitely many decimal points. Even though we think that’s what mathematicians and physicists should have. What we do instead is consider the things we would like to know. Where something is. What its momentum is. What side of a coin is showing after a toss. What card was taken off the top of the deck. What tile was drawn out of the Scrabble bag.

    There are possible results for each of these things we would like to know. Perhaps some of them are quite likely. Perhaps some of them are unlikely. We track how likely each of these outcomes is. This is called the distribution of the values. This can be simple. The distribution for a fairly tossed coin is “heads, 1/2; tails, 1/2”. The distribution for a fairly tossed six-sided die is “1/6 chance of 1; 1/6 chance of 2; 1/6 chance of 3” and so on. It can be more complicated. The distribution for a fairly tossed pair of six-sided dice starts out “1/36 chance of 2; 2/36 chance of 3; 3/36 chance of 4” and so on. If we’re measuring something that doesn’t come in nice discrete chunks we have to talk about ranges: the chance that a 30-year-old male weighs between 180 and 185 pounds, or between 185 and 190 pounds. The chance that a particle in the rings of Saturn is moving between 20 and 21 kilometers per second, or between 21 and 22 kilometers per second, and so on.
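
    For a discrete case like the pair of dice the whole distribution is a small computation. Here’s a minimal sketch in Python; the names are mine, just for illustration:

        from fractions import Fraction
        from collections import Counter

        # Count how many of the 36 equally likely rolls give each sum.
        counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))

        # Divide by 36 to turn the counts into the distribution.
        distribution = {total: Fraction(n, 36) for total, n in counts.items()}

        for total in sorted(distribution):
            print(total, distribution[total])

        # The fractions of any distribution add up to exactly 1.
        assert sum(distribution.values()) == 1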

    We may be unable to describe how a system evolves exactly. But often we’re able to describe how the distribution of its possible values evolves. And the laws by which probability works conspire in our favor here. We can get quite precise predictions for how a whole bunch of things behave even without ever knowing what any one thing is doing.

    That’s unsettling to start with. It’s made worse by one of the 19th Century’s late discoveries, that of chaos. That a system can be perfectly deterministic. That you might know what every part of it is doing as precisely as you care to measure. And you’re still unable to predict its long-term behavior. That’s unsettling too, although statistical techniques will give you an idea of how likely different behaviors are. You can learn the distribution of what is likely, what is unlikely, and how often the outright impossible will happen.

    Distributions follow rules. Of course they do. They’re basically the rules you’d imagine from looking at and thinking about something with a range of values. Something like a chart of how many students got what grades in a class, or how tall the people in a group are, or so on. Each possible outcome turns up some fraction of the time. That fraction’s never less than zero nor greater than 1. Add up all the fractions representing all the times every possible outcome happens and the sum is exactly 1. Something happens, even if we never know just what. But we know how often each outcome will.

    There is something amazing to consider here. We can know and track everything there is to know about a physical problem. But we will be unable to do anything with it, except for the most basic and simple problems. We can choose to relax, to accept that the world is unknown and unknowable in detail. And this makes imaginable all sorts of problems that should be beyond our power. Once we’ve given up on this precision we get precise, exact information about what could happen. We can choose to see it as a moral about the benefits and costs and risks of how tightly we control a situation. It’s a surprising lesson to learn from one’s training in mathematics.

     
  • Joseph Nebus 6:00 pm on Monday, 7 November, 2016
    Tags: glossary

    The End 2016 Mathematics A To Z: Cantor’s Middle Third 


    Today’s term is a request, the first of this series. It comes from HowardAt58, head of the Saving School Math blog. There are many letters not yet claimed; if you have a term you’d like to see me write about please head over to the “Any Requests?” page and pick a letter. Just please don’t pick one I figure to get to in the next day or two.

    Cantor’s Middle Third.

    I think one could make a defensible history of mathematics by describing it as a series of ridiculous things that get discovered. And then, by thinking about these ridiculous things long enough, mathematicians come to accept them. Even rely on them. Sometime later the public even comes to accept them. I don’t mean to say getting people to accept ridiculous things is the point of mathematics. But there is a pattern which happens.

    Consider. People doing mathematics came to see how a number could be detached from a count or a measure of things. That we can do work on, say, “three” whether it’s three people, three kilograms, or three square meters. We’re so used to this it’s only when we try teaching mathematics to the young that we realize it isn’t obvious.

    Or consider that we can have, rather than a whole number of things, a fraction. Some part of a thing, as if you could have one-half pieces of chalk or two-thirds a fruit. Counting is relatively obvious; fractions are something novel but important.

    We have “zero”; somehow, the lack of something is still a number, the way two or five or one-half might be. For that matter, “one” is a number. How can something that isn’t numerous be a number? We’re used to it anyway. We can have not just fractions and one and zero but irrational numbers, ones that can’t be represented as a fraction. We have negative numbers, somehow a lack of whatever we were counting so great that we might add some of what we were counting to the pile and still have nothing.

    That takes us up to about eight hundred years ago or something like that. The public’s gotten to accept all this as recently as maybe three hundred years ago. They’ve still got doubts. I don’t blame folks. Complex numbers mathematicians like; the public’s still getting used to the idea, but at least they’ve heard of them.

    Cantor’s Middle Third is part of the current edge. It’s something mathematicians are aware of, and something that defies common sense, at least at first. But we’ve come to accept it. The public, well, they don’t know about it. Maybe some do; it turns up in pop mathematics books that like sharing the strangeness of infinities. Few people read them. Sometimes it feels like all those who do go online to tell mathematicians they’re crazy. It comes to us, as you might guess from the name, from Georg Cantor. Cantor established the modern mathematical concept of how to study infinitely large sets in the late 19th century. And he was repeatedly hospitalized for depression. It’s cruel to write all that off as “and he was crazy”. His work’s withstood a hundred and thirty-five years of extremely smart people looking at it skeptically.

    The Middle Third starts out easily enough. Take a line segment. Then chop it into three equal pieces and throw away the middle third. You see where the name comes from. What do you have left? Some of the original line. Two-thirds of the original line length. A big gap in the middle.

    Now take the two line segments. Chop each of them into three equal pieces. Throw away the middle thirds of the two pieces. Now we’re left with four chunks of line and four-ninths of the original length. One big and two little gaps in the middle.

    Now take the four little line segments. Chop each of them into three equal pieces. Throw away the middle thirds of the four pieces. We’re left with eight chunks of line, about eight-twenty-sevenths of the original length. Lots of little gaps. Keep doing this, chopping up line segments and throwing away middle pieces. Never stop. Well, pretend you never stop and imagine what’s left.
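
    If you’d like to watch the chopping happen, here’s a short Python sketch that tracks the line segments as pairs of endpoints; the names are mine:

        from fractions import Fraction

        segments = [(Fraction(0), Fraction(1))]  # start with the unit interval

        for step in range(1, 6):
            pieces = []
            for left, right in segments:
                third = (right - left) / 3
                pieces.append((left, left + third))    # keep the first third
                pieces.append((right - third, right))  # keep the last third
            segments = pieces
            length = sum(right - left for left, right in segments)
            # 2^step chunks remain, covering (2/3)^step of the original length
            print(step, len(segments), length)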

    What’s left is deeply weird. What’s left has no length, no measure. That’s easy enough to prove. But we haven’t thrown everything away. There are bits of the original line segment left over. The left endpoint of the original line is left behind. So is the right endpoint of the original line. The endpoints of the line segments after the first time we chopped out a third? Those are left behind. The endpoints of the line segments after chopping out a third the second time, the third time? Those have to be in the set. We have a dust, isolated little spots of the original line, none of them combining together to cover any length. And there are infinitely many of these isolated dots.

    We’ve seen that before. At least we have if we’ve read anything about the Cantor Diagonal Argument. You can find that among the first ten posts of every mathematics blog. (Not this one. I was saving the subject until I had something good to say about it. Then I realized many bloggers have covered it better than I could.) Part of it is pondering how there can be a set of infinitely many things that don’t cover any length. The whole numbers are such a set and it seems reasonable they don’t cover any length. The rational numbers, though, are also an infinitely-large set that doesn’t cover any length. And there’s exactly as many rational numbers as there are whole numbers. This is unsettling but if you’re the sort of person who reads about infinities you come to accept it. Or you get into arguments with mathematicians online and never know you’ve lost.

    Here’s where things get weird. How many bits of dust are there in this middle third set? It seems like it should be countable, the same size as the whole numbers. After all, we pick up some of these points every time we throw away a middle third. So we double the number of points left behind every time we throw away a middle third. That’s countable, right?

    It’s not. We can prove it. The proof looks uncannily like that of the Cantor Diagonal Argument. That’s the one that proves there are more real numbers than there are whole numbers. There are points in this leftover set that were not endpoints of any of these middle-third excisions. This dust has more points in it than there are rational numbers, but it covers no length.

    (The dust does, as it happens, have the same size as the real numbers. Write the leftover points as base-three numbers; they’re exactly the ones that need only the digits 0 and 2. Swap every 2 for a 1, read the results as base-two numbers, and you get every number in the unit interval.)

    It’s got other neat properties. It’s a fractal, which is why someone might have heard of it, back in the Great Fractal Land Rush of the 80s and 90s. Look closely at part of this set and it looks like the original set, with bits of dust edging gaps of bigger and smaller sizes. It’s got a fractal dimension, or “Hausdorff dimension” in the lingo, that’s the logarithm of two divided by the logarithm of three. That’s a number actually known to be transcendental, which is reassuring. Nearly all numbers are transcendental, but we only know a few examples of them.
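
    If you want to see that number itself, it’s a one-liner:

        import math

        # Hausdorff dimension of the middle-thirds set: log 2 / log 3
        print(math.log(2) / math.log(3))  # about 0.6309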

    HowardAt58 asked me about the Middle Third set, and that’s how I’ve referred to it here. It’s more often called the “Cantor set” or “Cantor comb”. The “comb” makes sense because if you draw successive middle-thirds-thrown-away, one after the other, you get something that looks kind of like a hair comb, if you squint.

    You can build sets like this that aren’t based around thirds. You can, for example, develop one by cutting lines into five chunks and throwing away the second and fourth. You get results that are similar, and similarly heady, but different. They’re all astounding. They’re all hard to believe in yet. They may get to be stuff we just accept as part of how mathematics works.

     
  • Joseph Nebus 6:00 pm on Wednesday, 2 November, 2016
    Tags: eigenvalues, glossary

    The End 2016 Mathematics A To Z: Algebra 


    So let me start the End 2016 Mathematics A To Z with a word everybody figures they know. As will happen, everybody’s right and everybody’s wrong about that.

    Algebra.

    Everybody knows what algebra is. It’s the point where suddenly mathematics involves spelling. Instead of long division we’re on a never-ending search for ‘x’. Years later we pass along gifs of either someone saying “stop asking us to find your ex” or someone who’s circled the letter ‘x’ and written “there it is”. And make jokes about how we got through life without using algebra. And we know it’s the thing mathematicians are always doing.

    Mathematicians aren’t always doing that. I expect the average mathematician would say she almost never does that. That’s a bit of a fib. We have a lot of work where we do stuff that would be recognizable as high school algebra. It’s just that we don’t really care about it. We’re doing it because it’s how we get the problem we are interested in done. The most recent few pieces in my “Why Stuff Can Orbit” series include a bunch of high school algebra-style work. But that was just because it was the easiest way to answer some calculus-inspired questions.

    Still, “algebra” is a much-used word. It comes back around the second or third year of a mathematics major’s career. It comes in two forms in undergraduate life. One form is “linear algebra”, which is a great subject. That field’s about how stuff moves. You get to imagine space as this stretchy material. You can stretch it out. You can squash it down. You can stretch it in some directions and squash it in others. You can rotate it. These are simple things to build on. You can spend a whole career building on that. It becomes practical in surprising ways. For example, it’s the field of study behind finding equations that best match some complicated, messy real data.
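
    Here’s a sketch of that stretching and rotating in plain Python; the little apply helper is my own, just for illustration:

        import math

        def apply(matrix, v):
            # Multiply a 2-by-2 matrix by a 2-vector.
            (a, b), (c, d) = matrix
            x, y = v
            return (a * x + b * y, c * x + d * y)

        stretch = ((2, 0), (0, 1))      # double the x-direction, leave y alone
        print(apply(stretch, (1, 1)))   # (2, 1)

        t = math.pi / 2
        rotate = ((math.cos(t), -math.sin(t)),
                  (math.sin(t), math.cos(t)))
        print(apply(rotate, (1, 0)))    # about (0, 1), a quarter turn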

    The second form is “abstract algebra”, which comes in about the same time. This one is alien and baffling for a long while. It doesn’t help that the books all call it Introduction to Algebra or just Algebra and all your friends think you’re slumming. The mathematics major stumbles through confusing definitions and theorems that ought to sound comforting. (“Fermat’s Little Theorem”? That’s a good thing, right?) But the confusion passes, in time. There’s a beautiful subject here, one of my favorites. I’ve talked about it a lot.

    We start with something that looks like the loosest cartoon of arithmetic. We get a bunch of things we can add together, and an ‘addition’ operation. This lets us do a lot of stuff that looks like addition modulo numbers. Then we go on to stuff that looks like picking up floor tiles and rotating them. Add in something that we call ‘multiplication’ and we get rings. This is a bit more like normal arithmetic. Add in some other stuff and we get ‘fields’ and other structures. We can keep falling back on arithmetic and on rotating tiles to build our intuition about what we’re doing. This trains mathematicians to look for particular patterns in new, abstract constructs.

    Linear algebra is not an abstract-algebra sort of algebra. Sorry about that.

    And there’s another kind of algebra that mathematicians talk about. At least once they get into grad school they do. There’s a huge family of these kinds of algebras. The family trait for them is that they share a particular rule about how you can multiply their elements together. I won’t get into that here. There are many kinds of these algebras. One that I keep trying to study on my own and crash hard against is Lie Algebra. That’s named for the Norwegian mathematician Sophus Lie. Pronounce it “lee”, as in “leaning”. You can understand quantum mechanics much better if you’re comfortable with Lie Algebras, and so now you know one of my weaknesses. Another kind is the Clifford Algebra. This lets us create something called a “hypercomplex number”. It isn’t much like a complex number. Sorry. Clifford Algebra does lend itself to a construct called spinors. These help physicists understand the behavior of bosons and fermions. Every bit of matter seems to be either a boson or a fermion. So you see why this is something people might like to understand.

    Boolean Algebra is the algebra of this type that a normal person is likely to have heard of. It’s about what we can build using two values and a few operations. Those values by tradition we call True and False, or 1 and 0. The operations we call things like ‘and’ and ‘or’ and ‘not’. It doesn’t sound like much. It gives us computational logic. Isn’t that amazing stuff?
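
    The whole thing is small enough to check by exhaustion. A sketch:

        # Boolean algebra on the two values 0 (False) and 1 (True).
        def b_and(x, y): return x & y
        def b_or(x, y): return x | y
        def b_not(x): return 1 - x

        # The complete 'and'/'or' table fits in four lines.
        for x in (0, 1):
            for y in (0, 1):
                print(x, y, b_and(x, y), b_or(x, y))

        # One of De Morgan's laws: not(x and y) equals (not x) or (not y).
        for x in (0, 1):
            for y in (0, 1):
                assert b_not(b_and(x, y)) == b_or(b_not(x), b_not(y))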

    So if someone says “algebra” she might mean any of these. A normal person in a non-academic context probably means high school algebra. A mathematician speaking without further context probably means abstract algebra. If you hear something about “matrices” it’s more likely that she’s speaking of linear algebra. But abstract algebra can’t be ruled out yet. If you hear a word like “eigenvector” or “eigenvalue” or anything else starting “eigen” (or “characteristic”) she’s more probably speaking of linear algebra. And if there’s someone’s name before the word “algebra” then she’s probably speaking of the last of these. This is not a perfect guide. But it is the sort of context mathematicians expect other mathematicians to notice.

     
    • John Friedrich 2:13 am on Thursday, 3 November, 2016

      The cruelest trick that happened to me was when a grad school professor labeled the Galois Theory class “Algebra”. Until then, the lowest score I’d ever gotten in a math class was a B. After that, I decided to enter the work force and abandon my attempts at a master’s degree.


      • Joseph Nebus 3:32 pm on Friday, 4 November, 2016

        Well, it’s true enough that it’s part of algebra. But I’d feel uncomfortable plunging right into that without the prerequisites being really clear. I’m not sure I’ve even run into a nice clear pop-culture explanation of Galois Theory past some notes about how there are two roots to a quadratic equation and how they mirror each other.


  • Joseph Nebus 3:00 pm on Friday, 6 May, 2016
    Tags: glossary, lessons, planning

    What I Learned Doing The Leap Day 2016 Mathematics A To Z 


    The biggest thing I learned in the recently concluded mathematics glossary is that continued fractions have enthusiasts. I hadn’t intended to cause controversy when I claimed they weren’t much used anymore. The most I have grounds to say is that the United States educational process as I experienced it doesn’t use them for more than a few special purposes. There is a general lesson there. While my experience may be typical, that doesn’t mean everyone’s is like it. There is a mystery to learn from in that.

    The next big thing I learned was the Kullback-Leibler Divergence. I’m glad to know it now. And I would not have known it, I imagine, if it weren’t for my trying something novel and getting a fine result from it. That was throwing open the A To Z glossary to requests from readers. At least half the terms were ones that someone reading my original call had asked for.

    And that was thrilling. The biggest point is that it gave me a greater feeling of communicating with specific people than most of the things I’ve written have. I understand that I have readers, and occasionally chat with some. This was a rare chance to feel engaged, though.

    And getting asked things I hadn’t thought of, or in some cases hadn’t heard of, was great. It foiled the idea of two months’ worth of easy postings, but it made me look up and learn and think about a variety of things. And also to re-think them. My first drafts of the Dedekind Domain and the Kullback-Leibler divergence essays were completely scrapped, and the Jacobian made it through only with a lot of rewriting. I’ve been inclined to write with few equations and even fewer drawings around here. Part of that’s to be less intimidating. Part of that’s because of laziness. Some stuff is wonderfully easy to express in a sketch, but transferring that to a digital form is the heavy work of getting out the scanner and plugging it in. Or drawing from scratch on my iPad. Cleaning it up is even more work. So better to spend a thousand extra words on the setup.

    But that seemed to work! I’m especially surprised that the Jacobian and the Lagrangian essays seemed to make sense without pictures or equations. Homomorphisms and isomorphisms were only a bit less surprising. I feel like I’ve been writing better thanks to this.

    I do figure on another A To Z for sometime this summer. Perhaps I should open nominations already, and with a better-organized scheme for knocking out letters. Some people were disappointed (I suppose) to pick letters that had already been assigned. And I could certainly use time and help finding more x- and y-words. Q isn’t an easy one either.

     
  • Joseph Nebus 3:00 pm on Friday, 29 April, 2016
    Tags: glossary

    A Leap Day 2016 Mathematics A To Z: The Roundup 


    And with the conclusion of the alphabet I move now into posting about each of the counting numbers. … No, wait, that’s already being done. But I should gather together the A To Z posts in order that it’s easier to find them later on.

    I mean to put together some thoughts about this A To Z. I haven’t had time yet. I can say that it’s been a lot of fun to write, even if after the first two weeks I was never as far ahead of deadline as I hoped to be. I do expect to run another one of these, although I don’t know when that will be. After I’ve had some chance to recuperate, though. It’s fun going two months without missing a day’s posting on my mathematics blog. But it’s also work and who wants that?

     
  • Joseph Nebus 3:00 pm on Friday, 18 March, 2016
    Tags: glossary

    A Leap Day 2016 Mathematics A To Z: Isomorphism 


    Gillian B made the request that’s today’s A To Z word. I’d said it would be challenging. Many have been, so far. But I set up some of the work with “homomorphism” last time. As with “homomorphism” it’s a word that appears in several fields and about different kinds of mathematical structure. As with homomorphism, I’ll try describing what it is for groups. They seem least challenging to the imagination.

    Isomorphism.

    An isomorphism is a kind of homomorphism. And a homomorphism is a kind of thing we do with groups. A group is a mathematical construct made up of two things. One is a set of things. The other is an operation, like addition, where we take two of the things and get one of the things in the set. I think that’s as far as we need to go in this chain of defining things.

    A homomorphism is a mapping, or if you like the word better, a function. The homomorphism matches everything in one group to things in a second group. It might be the same group; it might be a different group. What makes it a homomorphism is that it preserves addition.

    I gave an example last time, with groups I called G and H. G had as its set the whole numbers 0 through 3 and as operation addition modulo 4. H had as its set the whole numbers 0 through 7 and as operation addition modulo 8. And I defined a homomorphism φ which took a number in G and matched it to the number in H which was twice that. Then for any a and b which were in G’s set, φ(a + b) was equal to φ(a) + φ(b).

    We can have all kinds of homomorphisms. For example, imagine my new φ1. It takes whatever you start with in G and maps it to the 0 inside H. φ1(1) = 0, φ1(2) = 0, φ1(3) = 0, φ1(0) = 0. It’s a legitimate homomorphism. Seems like it’s wasting a lot of what’s in H, though.

    An isomorphism doesn’t waste anything that’s in H. It’s a homomorphism in which everything in G’s set matches to exactly one thing in H’s, and vice-versa. That is, it’s both a homomorphism and a bijection, to use one of the terms from the Summer 2015 A To Z. The key to remembering this is the “iso” prefix. It comes from the Greek “isos”, meaning “equal”. You can often understand an isomorphism from group G to group H as showing how they’re the same thing. They might be represented differently, but they’re equivalent in the lights you use.

    I can’t make an isomorphism between the G and the H I started with. Their sets are different sizes. There’s no matching everything in H’s set to everything in G’s set without some duplication. But we can make other examples.

    For instance, let me start with a new group G. It’s got as its set the positive real numbers. And it has as its operation ordinary multiplication, the kind you always do. And I want a new group H. It’s got as its set all the real numbers, positive and negative. It has as its operation ordinary addition, the kind you always do.

    For an isomorphism φ, take the number x that’s in G’s set. Match it to the number that’s the logarithm of x, found in H’s set. This is a one-to-one pairing: if the logarithm of x equals the logarithm of y, then x has to equal y. And it covers everything: every number in H’s set, positive or negative or zero, is the logarithm of some positive real number.

    And this is a homomorphism. Take any x and y that are in G’s set. Their “addition”, the group operation, is to multiply them together. So “x + y”, in G, gives us the number xy. (I know, I know. But trust me.) φ(x + y) is equal to log(xy), which equals log(x) + log(y), which is the same number as φ(x) + φ(y). There’s a way to see the positive real numbers being multiplied together as equivalent to all the real numbers being added together.
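
    You can check the claim numerically, too. A sketch, with a little tolerance since computer logarithms are approximate:

        import math
        import random

        # G: positive reals under multiplication. H: all reals under addition.
        # The map phi(x) = log(x) should turn multiplication into addition.
        for trial in range(1000):
            x = random.uniform(0.001, 1000.0)
            y = random.uniform(0.001, 1000.0)
            assert math.isclose(math.log(x * y), math.log(x) + math.log(y))
        print("log(x * y) matched log(x) + log(y) every time")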

    You might figure that the positive real numbers and all the real numbers aren’t very different-looking things. Perhaps so. Here’s another example I like, drawn from Wikipedia’s entry on Isomorphism. It has as sets things that don’t seem to have anything to do with one another.

    Let me have another brand-new group G. It has as its set the whole numbers 0, 1, 2, 3, 4, and 5. Its operation is addition modulo 6. So 2 + 2 is 4, while 2 + 3 is 5, and 2 + 4 is 0, and 2 + 5 is 1, and so on. You get the pattern, I hope.

    The brand-new group H, now, that has a more complicated-looking set. Its set is ordered pairs of whole numbers, which I’ll represent as (a, b). Here ‘a’ may be either 0 or 1. ‘b’ may be 0, 1, or 2. To describe its addition rule, let me say we have the elements (a, b) and (c, d). Find their sum first by adding together a and c, modulo 2. So 0 + 0 is 0, 1 + 0 is 1, 0 + 1 is 1, and 1 + 1 is 0. That result is the first number in the pair. The second number we find by adding together b and d, modulo 3. So 1 + 0 is 1, and 1 + 1 is 2, and 1 + 2 is 0, and so on.

    So, for example, (0, 1) plus (1, 1) will be (1, 2). But (0, 1) plus (1, 2) will be (1, 0). (1, 2) plus (1, 0) will be (0, 2). (1, 2) plus (1, 2) will be (0, 1). And so on.

    The isomorphism matches up things in G to things in H this way:

    In G    φ(G), in H
    0       (0, 0)
    1       (1, 1)
    2       (0, 2)
    3       (1, 0)
    4       (0, 1)
    5       (1, 2)

    I recommend playing with this a while. Pick any pair of numbers x and y that you like from G. And check their matching ordered pairs φ(x) and φ(y) in H. φ(x + y) is the same thing as φ(x) + φ(y) even though the things in G’s set don’t look anything like the things in H’s.
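
    If you’d rather let a computer do the playing, here’s a sketch that checks all thirty-six pairs:

        # phi matches G (whole numbers 0-5, addition modulo 6)
        # to H (pairs added modulo 2 and modulo 3).
        phi = {0: (0, 0), 1: (1, 1), 2: (0, 2), 3: (1, 0), 4: (0, 1), 5: (1, 2)}

        def add_h(p, q):
            return ((p[0] + q[0]) % 2, (p[1] + q[1]) % 3)

        for x in range(6):
            for y in range(6):
                assert phi[(x + y) % 6] == add_h(phi[x], phi[y])

        # One-to-one and onto: all six pairs show up exactly once.
        assert len(set(phi.values())) == 6
        print("phi is an isomorphism")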

    Isomorphisms exist for other structures. The idea extends the way homomorphisms do. A ring, for example, has two operations which we think of as addition and multiplication. An isomorphism matches two rings in ways that preserve the addition and multiplication, and which match everything in the first ring’s set to everything in the second ring’s set, one-to-one. The idea of the isomorphism is that two different things can be paired up so that they look, and work, remarkably like one another.

    One of the common uses of isomorphisms is describing the evolution of systems. We often like to look at how some physical system develops from different starting conditions. If you make a little variation in how things start, does this produce a small change in how it develops, or does it produce a big change? How big? And the description of how time changes the system is, often, an isomorphism.

    Isomorphisms also appear when we study the structures of groups. They turn up naturally when we look at things called “normal subgroups”. The name alone gives you a good idea what a “subgroup” is. “Normal”, well, that’ll be another essay.

     
    • Gillian B 10:27 pm on Friday, 18 March, 2016

      Yayay!

      I chose that, of all things, from an old Dr Who episode in “The Pyramids of Mars”. Sutek (old Egyptian god) wants to use the TARDIS himself, but the Doctor tells him it’s isomorphic – and my mother yelled from the kitchen “I KNOW WHAT THAT MEANS!” (she was about halfway through her maths degree at the time). So thank you! I’m going to pass this on to her, for the memories.


    • Gillian B 2:14 am on Monday, 21 March, 2016

      Allow me to reprint an email I received today:

      From: “Liz Richards”
      To: reynardo
      Subject: Re: Isomorphism

      Thank you, thank you, thank you. I’ve printed out the isomorphic page.

      Love

      Mum


  • Joseph Nebus 3:00 pm on Wednesday, 16 March, 2016
    Tags: glossary

    A Leap Day 2016 Mathematics A To Z: Homomorphism 


    I’m not sure how, but many of my Mathematics A To Z essays seem to circle around algebra. I mean abstract algebra, not the kind that involves petty concerns like ‘x’ and ‘y’. In abstract algebra we worry about letters like ‘g’ and ‘h’. For special purposes we might even have ‘e’. Maybe it’s that the subject has a lot of familiar-looking words. For today’s term, I’m doing an algebra term, and one that wasn’t requested. But it’ll make my life a little easier when I get to a word that was requested.

    Homomorphism.

    Also, I lied when I said this was an abstract algebra word. At least I was imprecise. The word appears in a fairly wide swath of mathematics. But abstract algebra is where most mathematics majors first encounter it. And the other uses hearken back to this. If you understand what an algebraist means by “homomorphism” then you understand the essence of what someone else means by it.

    One of the things mathematicians study a lot is mapping. This is matching the things in one set to things in another set. Most often we want this to be done by some easy-to-understand rule. Why? Well, we often want to understand how one group of things relates to another group. So we set up maps between them. These describe how to match the things in one set to the things in another set. You may think this sounds like it’s just a function. You’re right. I suppose the name “mapping” carries connotations of transforming things into other things that a “function” might not have. And “functions”, I think, suggest we’re working with numbers. “Mappings” sound more abstract, at least to my ear. But it’s just a difference in dialect, not substance.

    A homomorphism is a mapping that obeys a couple of rules. What they are depends on the kind of things the homomorphism maps between. I want a simple example, so I’m going to use groups.

    A group is made up of two things. One is a set, a collection of elements. For example, take the whole numbers 0, 1, 2, and 3. That’s a good enough set. The second thing in the group is an operation, something to work like addition. For example, we might use “addition modulo 4”. In this scheme, addition (and subtraction) work like they do with ordinary whole numbers. But if the result would be more than 3, we subtract 4 from the result, until we get something that’s 0, 1, 2, or 3. Similarly if the result would be less than 0, we add 4, until we get something that’s 0, 1, 2, or 3. The result is an addition table that looks like this:

    + 0 1 2 3
    0 0 1 2 3
    1 1 2 3 0
    2 2 3 0 1
    3 3 0 1 2

    So let me call G the group that has as its elements 0, 1, 2, and 3, and that has addition be this modulo-4 addition.

    Now I want another group. I’m going to name it H, because the alternative is calling it G2 and subscripts are tedious to put on web pages. H will have a set with the elements 0, 1, 2, 3, 4, 5, 6, and 7. Its addition will be modulo-8 addition, which works the way you might have guessed after looking at the above. But here’s the addition table:

    + 0 1 2 3 4 5 6 7
    0 0 1 2 3 4 5 6 7
    1 1 2 3 4 5 6 7 0
    2 2 3 4 5 6 7 0 1
    3 3 4 5 6 7 0 1 2
    4 4 5 6 7 0 1 2 3
    5 5 6 7 0 1 2 3 4
    6 6 7 0 1 2 3 4 5
    7 7 0 1 2 3 4 5 6

    G and H look a fair bit like each other. Their sets are made up of familiar numbers, anyway. And the addition rules look a lot like what we’re used to.
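
    Tables like these are quick to generate for any modulus, if you want to experiment. A sketch:

        def addition_table(n):
            # Print the addition-modulo-n table in the style above.
            print('+', *range(n))
            for a in range(n):
                print(a, *((a + b) % n for b in range(n)))

        addition_table(4)  # the table for G
        addition_table(8)  # the table for H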

    We can imagine mapping from one to the other pretty easily. At least it’s easy to imagine mapping from G to H. Just match a number in G’s set — say, ‘1’ — to a number in H’s set — say, ‘2’. Easy enough. We’ll do something just as daring in matching ‘0’ to ‘1’, and we’ll map ‘2’ to ‘3’. And ‘3’? Let’s match that to ‘4’. Let me call that mapping f.

    But f is not a homomorphism. What makes a homomorphism an interesting map is that the group’s original addition rule carries through. This is easier to show than to explain.

    In the original group G, what’s 1 + 2? … 3. That’s easy to work out. But in H, what’s f(1) + f(2)? f(1) is 2, and f(2) is 3. So f(1) + f(2) is 5. But what is f(3)? We set that to be 4. So in this mapping, f(1) + f(2) is not equal to f(3). And so f is not a homomorphism.

    Could anything be? After all, G and H have different sets, sets that aren’t even the same size. And they have different addition rules, even if the addition rules look like they should be related. Why should we expect it’s possible to match the things in group G to the things in group H?

    Let me show you how they could be. I’m going to define a mapping φ. The letter’s often used for homomorphisms. φ matches things in G’s set to things in H’s set. φ(0) I choose to be 0. φ(1) I choose to be 2. φ(2) I choose to be 4. φ(3) I choose to be 6.

    And now look at this … φ(1) + φ(2) is equal to 2 + 4, which is 6 … which is φ(3). Was I lucky? Try some more. φ(2) + φ(2) is 4 + 4, which in the group H is 0. In the group G, 2 + 2 is 0, and φ(0) is … 0. We’re all right so far.

    One more. φ(3) + φ(3) is 6 + 6, which in group H is 4. In group G, 3 + 3 is 2. φ(2) is 4.

    If you want to test the other thirteen possibilities go ahead. If you want to argue there’s actually only seven other possibilities do that, too. What makes φ a homomorphism is that if x and y are things from the set of G, then φ(x) + φ(y) equals φ(x + y). φ(x) + φ(y) uses the addition rule for group H. φ(x + y) uses the addition rule for group G. Some mappings keep the addition of things from breaking. We call this “preserving” addition.
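
    Or let a computer test every pair at once. A sketch:

        # phi maps G (addition modulo 4) into H (addition modulo 8) by doubling.
        def phi(x):
            return (2 * x) % 8

        for x in range(4):
            for y in range(4):
                # The left side adds in H; the right side adds in G first.
                assert (phi(x) + phi(y)) % 8 == phi((x + y) % 4)
        print("phi preserves addition for all sixteen pairs")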

    This particular example is called a group homomorphism. That’s because it’s a homomorphism that starts with one group and ends with a group. There are other kinds of homomorphism. For example, a ring homomorphism is a homomorphism that maps a ring to a ring. A ring is like a group, but it has two operations. One works like addition and the other works like multiplication. A ring homomorphism preserves both the addition and the multiplication simultaneously.

    And there are homomorphisms for other structures. What makes them homomorphisms is that they preserve whatever the important operations on the structures are. That’s typically what you might expect when you are introduced to a homomorphism, whatever the field.

     
  • Joseph Nebus 3:00 pm on Friday, 4 March, 2016
    Tags: glossary

    A Leap Day 2016 Mathematics A To Z: Conjecture 


    For today’s entry in the Leap Day 2016 Mathematics A To Z I have an actual request from Elke Stangl. I’d had another ‘c’ request, for ‘continued fractions’. I’ve decided to address that by putting ‘Fractions, continued’ on the roster. If you have other requests, for letters not already committed, please let me know. I’ve got some letters I can use yet.

    Conjecture.

    An old joke says a mathematician’s job is to turn coffee into theorems. I prefer tea, which may be why I’m not employed as a mathematician. A theorem is a logical argument that starts from something known to be true. Or we might start from something assumed to be true, if we think the setup interesting and plausible. And it uses laws of logical inference to draw a conclusion that’s also true and, hopefully, interesting. If it isn’t interesting, maybe it’s useful. If it isn’t either, maybe at least the argument is clever.

    How does a mathematician know what theorems to try proving? We could assemble any combination of premises as the setup to a possible theorem. And we could imagine all sorts of possible conclusions. Most of them will be syntactically gibberish, the equivalent of our friends the monkeys banging away on keyboards. Of those that aren’t, most will be untrue, or at least impossible to argue. Of the rest, potential theorems that could be argued, many will be too long or too unfocused to follow. Only a tiny few potential combinations of premises and conclusions could form theorems of any value. How does a mathematician get a good idea where to spend her time?

    She gets it from experience. In learning what theorems, what arguments, have been true in the past she develops a feeling for things that would plausibly be true. In playing with mathematical constructs she notices patterns that seem to be true. As she gains expertise she gets a sense for things that feel right. And she gets a feel for what would be a reasonable set of premises to bundle together. And what kinds of conclusions probably follow from an argument that people can follow.

    This potential theorem, this thing that feels like it should be true, is a conjecture.

    Properly, we don’t know whether a conjecture is true or false. The most we can say is that we don’t have evidence that it’s false. New information might show that we’re wrong and we would have to give up the conjecture. Finding new examples that it’s true might reinforce our idea that it’s true, but that doesn’t prove it’s true.

    For example, we have the Goldbach Conjecture. According to it every even number greater than two can be written as the sum of exactly two prime numbers. The evidence for it is very good: every even number we’ve tried has worked out, up through at least 4,000,000,000,000,000,000. But it isn’t proven. It’s possible that it’s impossible to prove from the standard rules of arithmetic.
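
    Checking modest cases yourself is easy, though nothing like the serious computations that pushed past four quintillion. A sketch:

        def is_prime(n):
            if n < 2:
                return False
            return all(n % d for d in range(2, int(n ** 0.5) + 1))

        # Verify the Goldbach Conjecture for small even numbers.
        for n in range(4, 10000, 2):
            assert any(is_prime(p) and is_prime(n - p)
                       for p in range(2, n // 2 + 1)), n
        print("every even number from 4 through 9998 worked out")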

    That’s a famous conjecture. It’s frustrated mathematicians for centuries. It’s easy to understand and nobody’s found a proof. Famous conjectures, the ones that get names, tend to do that. They looked nice and simple and had hidden depths.

    Most conjectures aren’t so storied. They instead appear as notes at the end of a section in a journal article or a book chapter. Or they’re put on slides meant to refresh the audience’s interest where it’s needed. They are needed at the fifteen-minute mark of a presentation, just after four slides full of dense equations. They are also needed at the 35-minute mark, in the middle of a field of plots with too many symbols and not enough labels. And one’s needed just before the summary of the talk, so that the audience can try to remember what the presentation was about and why they thought they could understand it. If the deadline were not so tight, if the conference were a month or so later, perhaps the mathematician would find a proof for these conjectures.

    Perhaps. As above, some conjectures turn out to be hard. Fermat’s Last Theorem stood for three and a half centuries as a conjecture. Its first proof turned out to be nothing like anything Fermat could have had in mind. Mathematics popularizers lost an easy hook when that was proven. We used to be able to start an essay on Fermat’s Last Theorem by huffing about how it was properly a conjecture but the wrong term stuck to it because English is a perverse language. Now we have to start by saying how it used to be a conjecture instead.

    But few are like that. Most conjectures are ideas that feel like they ought to be true. They appear because a curious mind will look for new ideas that resemble old ones, or will notice patterns that seem to resemble old patterns.

    And sometimes conjectures turn out to be false. Something can look like it ought to be true, or maybe would be true, and yet be false. Often we can prove something isn’t true by finding an example, just as you might expect. But that doesn’t mean it’s easy. Here’s a false conjecture, one that was put forth by Goldbach. All odd numbers are either prime, or can be written as the sum of a prime and twice a square number. (He considered 1 to be a prime number.) It’s not true, but it took over a century to show that. If you want to find a counterexample go ahead and have fun trying.
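
    If you do want to try, a computer search makes quick work of it; I’ll leave what it prints as the surprise. A sketch:

        def is_prime(n):
            if n < 2:
                return False
            return all(n % d for d in range(2, int(n ** 0.5) + 1))

        def goldbach_prime(n):
            return n == 1 or is_prime(n)  # Goldbach counted 1 as a prime

        def fits(n):
            # True if odd n is prime, or is a prime plus twice a square.
            if goldbach_prime(n):
                return True
            k = 1
            while 2 * k * k < n:
                if goldbach_prime(n - 2 * k * k):
                    return True
                k += 1
            return False

        n = 3
        while fits(n):
            n += 2
        print(n, "is the first odd number that breaks the conjecture")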

    Still, if a mathematician turns coffee into theorems, it is through the step of finding conjectures, promising little paths in the forest of what is not yet known.

     
    • elkement (Elke Stangl) 9:38 pm on Friday, 4 March, 2016

      Thanks :-) So you say that experts’ intuition that might look like magic to laymen is actually pattern recognition, correct? (I think I have read about this in pop-sci psychology books) And if an unproven theorem passes the pattern recognition filter it is promoted to conjecture.


      • Joseph Nebus 7:27 am on Wednesday, 9 March, 2016

        I think that there is a large aspect of it that’s pattern recognition, yes. But some of that may be that we look for things that resemble what’s already worked. So, like, if we already have a theorem about how a sequence of real-valued functions converges to a new real-valued function, then it’s natural to think about variants. Can we say something about sequences of complex-valued functions? If the original theorem demanded functions that were continuous and had infinitely many derivatives, can we loosen that to a function that’s continuous and has only finitely many derivatives? Can we lose the requirement that there be derivatives and still say something?

        I realized at one point while taking real analysis in grad school that many of the theorems we were moving into looked a lot like what we already had with one or two variations, and could sometimes write out the next theorem almost by rote. There is certainly a kind of pattern recognition at work here, though sometimes it can feel like playing with the variations on a theme.


        • elkement (Elke Stangl) 7:37 am on Wednesday, 9 March, 2016

          Yes, I agree – I meant pattern recognition in exactly this way, in a very broad way … searching for a similar pattern in your own experiences, among things you have encountered and that worked. I was thinking in general terms and comparing to other skills and expertise, like what makes you successful in any kind of tech troubleshooting. It seems that you have an intuitive feeling about what may work but actually you draw on related scenarios or aspects of scenarios we had solved.


    • Pen & Shutter 1:09 pm on Saturday, 5 March, 2016

      I understood all that! I definitely deserve a prize … I am no mathematician … And I enjoyed every word! I love your use of English.


    • davekingsbury 3:25 pm on Saturday, 5 March, 2016

      If you’ve nothing for Q, what about Quadratic Equations … though I start twitching whenever I think about them!


      • Joseph Nebus 7:43 am on Wednesday, 9 March, 2016

        I’m sorry to say Q already got claimed, by ‘quaternion’. But P got ‘polynomial’, which should be close enough to quadratic equations that there’s at least some help there.


  • Joseph Nebus 3:00 pm on Wednesday, 2 March, 2016
    Tags: glossary

    A Leap Day 2016 Mathematics A To Z: Basis 


    Today’s glossary term is one that turns up in many areas of mathematics. But these all share some connotations. So I mean to start with the easiest one to understand.

    Basis.

    Suppose you are somewhere. Most of us are. Where is something else?

    That isn’t hard to answer if conditions are right. If we’re allowed to point and the something else is in sight, we’re done. It’s when pointing and following the line of sight breaks down that we’re in trouble. We’re also in trouble if we want to say how to get from that something to yet another spot. How can we guide someone from one point to another?

    We have a good answer from everyday life. We can impose some order, some direction, on space. We’re familiar with this from the cardinal directions. We say where things on the surface of the Earth are by how far they are north or south, east or west, from something else. The scheme breaks down a bit if we’re at the North or the South pole exactly, but there we can fall back on pointing.

    When we start using north and south and east and west as directions we are choosing basis vectors. A vector says how far to move and in what direction. Suppose we have two vectors that aren’t pointing along the same line. Then we can describe any two-dimensional movement using them. We can say “go this far in the direction of the first vector and also that far in the direction of the second vector”. With the cardinal directions, we consider north and east, or east and south, or south and west, or west and north to be a pair of vectors going in different directions.

    (North and south, in this context, are the same thing. “Go twenty paces north” says the same thing as “go negative twenty paces south”. Most mathematicians don’t pull this sort of stunt when telling you how to get somewhere unless they’re trying to be funny without succeeding.)

    A basis vector is just a direction, and distance in that direction, that we’ve decided to be a reference for telling different points in space apart. A basis set, or basis, is the collection of all the basis vectors we need. What do we need? We need enough basis vectors to get to all the points in whatever space we’re working with.

    (If you are going to ask about doesn’t “east” point in different directions as we go around the surface of the Earth, you’re doing very well. Please pretend we never move so far from where we start that anyone could notice the difference. If you can’t do that, please pretend the Earth has been smooshed into a huge flat square with north at one end and we’re only just now noticing.)

    We are free to choose whatever basis vectors we like. The worst that can happen if we choose a lousy basis is that we have to write out more things than we otherwise would. Our work won’t be less true, it’ll just be more tedious. But there are some properties that often make for a good basis.

    One is that the basis should relate to the problem you’re doing. Suppose you were in one of mathematicians’ favorite places, midtown Manhattan. There is a compelling grid here of streets running east-west and avenues running north-south. (Broadway we ignore as an implementation error retained for reasons of backwards compatibility.) Well, we pretend the streets and avenues run exactly east-west and north-south. They’re actually a good bit clockwise of that. They do that to better match the geography of the island. An avenue runs about parallel to the way Manhattan’s long dimension runs. In the circumstance, it would be daft to describe directions by true north or true east. We would say to go so many streets “north” and so many avenues “east”.

    Purely mathematical problems aren’t concerned with streets and avenues. But there will often be preferred directions. Mathematicians often look at the way a process alters shapes or redirects forces. There’ll be some directions where the alterations are biggest. There’ll be some where the alterations are shortest. Those directions are probably good choices for a basis. They stand out as important.

    We also tend to like basis vectors that are a unit length. That is, their size is 1 in some convenient unit. That’s for the same reason it’s easier to say how expensive something is if it costs 45 dollars instead of nine five-dollar bills. Or if you’re told it was 180 quarter-dollars. The length of your basis vector is just a scaling factor. But the more factors you have to work with the more likely you are to misunderstand something.

    And we tend to like basis vectors that are perpendicular to one another. They don’t have to be. But if they are then it’s easier to divide up our work. We can study each direction separately. Mathematicians tend to like techniques that let us divide problems up into smaller ones that we can study separately.

    I’ve described basis sets using vectors. They have intuitive appeal. It’s easy to understand directions of things in space. But the idea carries across into other things. For example, we can build functions out of other functions. So we can choose a set of basis functions. We can multiply them by real numbers (scalars) and add them together. This makes whatever function we’re interested in into a kind of weighted average of basis functions.

    Why do that? Well, again, we often study processes that change shapes and directions. If we choose a basis well, though, the process changes the basis vectors in easy to describe ways. And many interesting processes let us describe the changing of an arbitrary function as the weighted sum of the changes in the basis vectors. By solving a couple of simple problems we get the ability to solve every interesting problem.

    We can even define something that works like the angle between functions. And something that works a lot like perpendicularity for functions.
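
    Here’s a sketch of both ideas at once: an integral standing in for the dot product, a few sine functions as the basis, and a function’s weights recovered by that perpendicularity. The names are mine, for illustration.

        import math

        def inner(f, g, n=10000):
            # A crude integral of f(x) * g(x) over [0, pi];
            # it plays the role of a dot product for functions.
            dx = math.pi / n
            return sum(f(i * dx) * g(i * dx) * dx for i in range(n))

        basis = [lambda x, k=k: math.sin(k * x) for k in (1, 2, 3)]

        print(inner(basis[0], basis[1]))  # nearly 0: "perpendicular" functions
        print(inner(basis[0], basis[0]))  # about pi/2: a "length" squared

        # Build a function as a weighted sum of the basis,
        # then recover the weights with inner products.
        weights = [2.0, -1.0, 0.5]
        def f(x):
            return sum(w * b(x) for w, b in zip(weights, basis))

        recovered = [inner(f, b) / inner(b, b) for b in basis]
        print(recovered)  # close to [2.0, -1.0, 0.5]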

    And this carries on to other mathematical constructs. We look for ways to impose some order, some direction, on whatever structure we’re looking at. We’re often successful, and can work with unreal things using tools like those that let us find our place in a city.

     
  • Joseph Nebus 3:00 pm on Monday, 29 February, 2016
    Tags: axioms, glossary, reality

    A Leap Day 2016 Mathematics A To Z: Axiom 


    I had a great deal of fun last summer with an A To Z glossary of mathematics terms. To repeat a trick with some variation, I called for requests a couple weeks back. I think the requests have settled down so let me start. (However, if you’ve got a request for one of the latter alphabet letters, please let me know. There’s ten letters not yet committed.) I’m going to call this a Leap Day 2016 Mathematics A To Z to mark when it sets off. This way I’m not committed to wrapping things up before a particular season ends. On, now, to the start and the first request, this one from Elke Stangl:

    Axiom.

    Mathematics is built of arguments. Ideally, these are all grounded in deductive logic. These would be arguments that start from things we know to be true, and use the laws of logical inference to conclude other things that are true. We want valid arguments, ones in which every implication is based on true premises and correct inferences. In practice we accept some looseness about this, because it would just take forever to justify every single little step. But the structure is there. From some things we know to be true, deduce something we hadn’t before proven was true.

    But where do we get things we know to be true? Well, we could ask the philosophy department. The question’s one of their specialties. But we might be scared of them, and they of us. After all, the mathematics department and the philosophy department are only usually both put in the College of Arts and Sciences. Sometimes philosophy is put in the College of Humanities instead. Let’s stay where we were instead.

    We know to be true stuff we’ve already proved to be true. So we can use the results of arguments we’ve already finished. That’s comforting. Whatever work we, or our forerunners, have done was not in vain. But how did we know those results were true? Maybe they were the consequences of earlier stuff we knew to be true. Maybe they came from earlier valid arguments.

    You see the regression problem. We don’t have anything we know to be true except the results of arguments, and the arguments depended on having something true to build from. We need to start somewhere.

    The real world turns out to be a poor starting point, by the way. Oh, it’s got some good sides. Reality is useful in many ways, but it has a lot of problems to be resolved. Most things we could say about the real world are transitory: they were once untrue, became true, and will someday be false again. It’s hard to see how you can build a universal truth on a transitory foundation. And that’s even if we know what’s true in the real world. We have senses that seem to tell us things about the real world. But the philosophy department, if we eavesdrop on them, would remind us of some dreadful implications. The concept of “the real world” is hard to make precise. Even if we suppose we’ve done that, we don’t know that what we could perceive has anything to do with the real world. The folks in the psychology department and the people who study physiology reinforce the direness of the situation. Even if perceptions can tell us something relevant, and even if our senses aren’t deliberately deceived, they’re still bad at perceiving stuff. We need to start somewhere else if we want certainty.

    That somewhere is the axiom. We declare some things to be a kind of basic law. Here are some things we need not prove true; they simply are.

    (Sometimes mathematicians say “postulate” instead of “axiom”. This is because some things sound better called “postulates”. Meanwhile other things sound better called “axioms”. There is no functional difference.)

    Most axioms tend to be straightforward things. We tend to like having uncontroversial foundations for our arguments. It may hardly seem necessary to say “all right angles are congruent”, but how would you prove that? It may seem obvious that, given a collection of sets of things, it’s possible to select exactly one thing from each of those sets. How do you know you can?

    Well, they might follow from some other axioms, by some clever enough argument. This is possible. Mathematicians consider it elegant to have as few axioms as necessary for their work. (They’re not alone, or rare, in that preference.) I think that reflects a cultural desire to say as much as possible with as little work as possible. The more things we have to assume to show a thing is true, the more likely that in a new application one of those assumptions won’t hold. And that would spoil our knowledge of that conclusion. Sometimes we can show that the interesting point of one axiom can be derived from some other axiom or axioms. We might replace an axiom with these alternates if that gives us more enlightening arguments.

    Sometimes people seize on this whole axiom business to argue that mathematics (and science, dragged along behind) is a kind of religion. After all, you need to have faith that some things are true. This strikes me as bad theology and poor mathematics. The most obvious difference between an article of faith and an axiom must be that axioms are voluntary. They are things you assume to be true because you expect them to enlighten something you wish to study. If they don’t, you’re free to try other axioms.

    The axiom I mentioned three paragraphs back, about selecting exactly one thing from each of a collection of sets? That’s known as the Axiom of Choice. It’s used in the theory of sets. But you don’t have to assume it’s true. Much of set theory stands independent of it. Many set theorists go about their work committing neither to the idea that it’s true nor to the idea that it’s false.
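    For the finite case, at least, nothing mysterious is going on, and we can even write the selection down. Here’s a minimal Python sketch (my own illustration, not anything official from set theory): for finitely many sets of numbers there’s an explicit rule to choose by, such as “take the smallest element”. The Axiom of Choice earns its keep when there are infinitely many sets and no such rule can be written down.

```python
# A "choice function" for a finite collection of non-empty sets.
# Here the rule is explicit -- take the smallest element of each set --
# so no axiom is needed. The Axiom of Choice matters for infinite
# collections where no rule for the choosing can be stated.

def choose(collection):
    """Map each set (frozen, so it can serve as a dict key) to one chosen element."""
    return {frozenset(s): min(s) for s in collection}

sets = [{3, 1, 4}, {1, 5}, {9, 2, 6}]
print(choose(sets))
# e.g. {frozenset({1, 3, 4}): 1, frozenset({1, 5}): 1, frozenset({2, 6, 9}): 2}
```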

    What makes a good set of axioms is rather like what makes a good set of rules for a sport. You do want to have a set that’s reasonably clear. You want them to provide for many interesting consequences. You want them to not have any contradictions. (You settle for them having no contradictions anyone’s found or suspects.) You want them to have as few ambiguities as possible. What makes up that set may evolve as the field, or as the sport, evolves. People do things that weren’t originally thought about. People get more experience and more perspective on the way the rules are laid out. People notice they had been assuming something without stating it. We revise and, we hope, improve the foundations with time.

    There’s no guarantee that every set of axioms will produce something interesting. Well, you wouldn’t expect to necessarily get a playable game by throwing together some random collection of rules from several different sports, either. Most mathematicians stick to familiar groups of axioms, for the same reason most athletes stick to sports they didn’t make up. We know from long experience that this set will give us an interesting geometry, or calculus, or topology, or so on.

    There’ll never be a standard universal set of axioms covering all mathematics. There are different sets of axioms that directly contradict each other but that are, to the best of our knowledge, internally self-consistent. The axioms that describe geometry on a flat surface, like a map, are inconsistent with those that describe geometry on a curved surface, like a globe. We need both maps and globes. So we have both flat and curved geometries, and we decide what kind fits the work we want to do.

    And there’ll never be a complete list of axioms for any interesting field, either. One of the unsettling discoveries of 20th Century logic, due to Kurt Gödel, was incompleteness. Any set of axioms interesting enough to cover the ability to do arithmetic will have statements that are meaningful, but that can’t be proven true or false within that system. We might add some of these undecidable things to the set of axioms, if they seem useful. But we’ll always have other things not provably true or provably false.

     
    • gaurish 3:30 pm on Monday, 29 February, 2016 Permalink | Reply

      Amazing explanation :)

      Like

    • howardat58 5:33 pm on Monday, 29 February, 2016 Permalink | Reply

      It is difficult to believe that none of this geometry stuff existed before Euclid. His contribution was to show that an abstract system based on some reasonable axioms, those which matched practical experience, could be constructed, and from which all the results and conclusions would follow, WITHOUT the use of pictures and hand-waving. Euclid’s definition of a line, “That which has no breadth”, makes it impossible to draw one!!! Nobody attempted to do this for even the natural numbers until Peano and others in 1900-1909.
      https://en.wikipedia.org/wiki/Peano_axioms
      (worth a read)

      Like

      • Joseph Nebus 8:50 pm on Tuesday, 1 March, 2016 Permalink | Reply

        I don’t mean to suggest I think geometry started with Euclid. I’d be surprised if it turned out Euclid were even the first Ancient Greek to have a system we’d recognize as organized and logically rigorous geometry. But the record of evidence is scattered, and Euclid did his work so very well that he must have obliterated his precursors. It’s got to be something like how The Jazz Singer obliterated memory of the synchronized-sound movies made before it.

        The problem with definitions does point out something true about axioms. The obvious stuff, like what we mean by a line, is often extremely hard to explain. Perhaps it’s because the desire to explain terms using only simpler terms leaves us without the vocabulary, or even the concepts, to do the work. Perhaps it’s that the most familiar things carry with them so many connotations and unstated assumptions that we don’t know how to separate them out again.

        Peano axioms are a great read, yes. I’m a bit sad my undergraduate training in mathematics never gave me reason to study them directly; we were preparing for other things.

        Like

    • elkement (Elke Stangl) 7:19 am on Tuesday, 1 March, 2016 Permalink | Reply

      Thanks for the mention, but Axiom Fame should go to Christopher Adamson. He suggested Axiom and I suggested Conjecture in the Requests comment thread :-)

      Like

  • Joseph Nebus 3:00 pm on Saturday, 30 January, 2016 Permalink | Reply
    Tags: glossary, ,   

    Any Requests? 


    I’m thinking to do a second Mathematics A-To-Z Glossary. For those who missed it, last summer I had a fun string of several weeks in which I picked a mathematical term and explained it to within an inch of its life, or 950 words, whichever came first. I’m curious if there’s anything readers out there would like to see me attempt to explain. So, please, let me know of any requests. All requests must begin with a letter, although numbers might be considered.

    Meanwhile since there’s been some golden ratio talk around these parts the last few days, I thought people might like to see this neat Algebra Fact of the Day:

    People following up on the tweet pointed out that it’s technically speaking wrong. The idea can be saved, though. You can produce the golden ratio using exactly four 4’s this way:

    \phi = \frac{\sqrt{4} + \sqrt{4! - 4}}{4}
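    (A quick numerical check, my addition rather than anything from the original tweet thread: the identity works because \sqrt{4! - 4} = \sqrt{20} = 2\sqrt{5}, so the whole expression reduces to (1 + \sqrt{5})/2, the usual form of the golden ratio.)

```python
from math import factorial, sqrt

phi = (1 + sqrt(5)) / 2                              # golden ratio, ~1.6180339887
four_fours = (sqrt(4) + sqrt(factorial(4) - 4)) / 4  # exactly four 4's

print(phi, four_fours)
assert abs(phi - four_fours) < 1e-12                 # they agree to machine precision
```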

    If you’d like to do it with eight 4’s, here’s one approach:

    And this brings things back around to how Paul Dirac worked out a way to produce any whole number using exactly four 2’s and the normal arithmetic operations anybody knows.
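    The heart of the trick, as the anecdote is usually told, is that n nested square roots turn 2 into 2^{1/2^n}, and two base-2 logarithms then recover n. Here’s a sketch of that step (mine; tellings of the story differ on exactly how the 2’s get counted):

```python
from math import log2, sqrt

def dirac(n):
    """Recover n from n nested square roots of 2, using two base-2 logs."""
    x = 2.0
    for _ in range(n):
        x = sqrt(x)
    # now x == 2 ** (1 / 2**n), so log2(x) == 2**(-n)
    return -log2(log2(x))

print([round(dirac(n)) for n in range(1, 8)])   # [1, 2, 3, 4, 5, 6, 7]
```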

     
    • Christopher Adamson 3:06 pm on Saturday, 30 January, 2016 Permalink | Reply

      How about A is for axiom?

      Like

    • KnotTheorist 8:48 pm on Saturday, 30 January, 2016 Permalink | Reply

      I enjoyed last year’s Mathematical A-To-Z Glossary, so I’m glad to see you’ll be doing another one!

      I’d like to see C for continued fractions.

      Like

      • Joseph Nebus 12:39 am on Tuesday, 2 February, 2016 Permalink | Reply

        Continued fractions … mm. Well, I’ll have to learn more about them, but that’s part of the fun of this. Thank you.

        Liked by 1 person

    • davekingsbury 6:29 pm on Sunday, 31 January, 2016 Permalink | Reply

      Energy = Mass times Twice the Speed of Light … or is that more like Physics?

      Like

      • Joseph Nebus 12:40 am on Tuesday, 2 February, 2016 Permalink | Reply

        E = mc^2 is physics, although it’s something that we learned from mathematical considerations. And a big swath of mathematics is the study of physics. There’s a lot to talk about in energy for mathematicians.

        Like

        • elkement (Elke Stangl) 7:38 am on Monday, 8 February, 2016 Permalink | Reply

          Of course I second that :-) What about explaining a Lagrangian in layman’s terms? ;-)

          Like

          • Joseph Nebus 5:27 am on Wednesday, 10 February, 2016 Permalink | Reply

            You know, I think I’ve got a hook on how to explain that. It might even get to include a bit from my high school physics class.

            Liked by 1 person

    • davekingsbury 9:14 am on Tuesday, 2 February, 2016 Permalink | Reply

      Is the equation based on theory or is there a practical mathematics behind it?

      Like

      • Joseph Nebus 12:03 am on Wednesday, 3 February, 2016 Permalink | Reply

        I’m not sure what you mean by theory versus practical mathematics. The energy-mass equivalence does follow, mathematically, from some remarkably simple principles. Those amount to uncontroversial things like the speed of light being a constant, independent of the observer, and that momentum and energy are conserved.

        It is experimentally verified, though. We can, for example, measure the mass of atoms before and after they fuse, or fission, and measure the amount of energy released or absorbed as light in the process. The amounts match up as expected. (That’s not the only test to run, of course, but it’s an easy one to understand.) So the reasoning isn’t just good, but matches what we see in the real world.
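        To put rough numbers on that, here’s the arithmetic for deuterium-tritium fusion. (These are standard textbook atomic masses, quoted approximately; treat the sketch as illustrative rather than authoritative.)

```python
# Mass defect of D + T -> He-4 + n, converted to energy.
# Masses in unified atomic mass units (u); approximate textbook values.
U_TO_MEV = 931.494        # energy equivalent of 1 u, in MeV

m_D  = 2.014102           # deuterium
m_T  = 3.016049           # tritium
m_He = 4.002602           # helium-4
m_n  = 1.008665           # neutron

defect = (m_D + m_T) - (m_He + m_n)   # mass that "goes missing"
print(defect * U_TO_MEV)              # about 17.6 MeV, matching the measured yield
```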

        Like

        • davekingsbury 10:36 am on Wednesday, 3 February, 2016 Permalink | Reply

          Thanks for your clear explanation. I’m not a scientist. Theory wasn’t the right word, then – I was thinking of empirically verifiable, which your 2nd paragraph shows. Are the ‘uncontroversial things’ in your first paragraph also measurable in the real world?

          Like

          • Joseph Nebus 8:34 pm on Friday, 5 February, 2016 Permalink | Reply

            OK. Well, these are measurable things, in that experiments give results that are what we would expect from the assumptions, and that are inconsistent with what we’d expect from alternate assumptions. For example, we now assume the speed of light (in a vacuum) to be constant. That followed a century of experimentation that finds it does appear to always be constant, and it’s consistent with tests that look to see if there might be something surprising now that we have a new effect to measure or a new tool to measure with. Assumptions about, for example, the way that velocities have to add together in order for this constant-speed-of-light to work have implications for how, say, moving electric charges will produce magnetic fields, and we see magnetic fields induced by moving electric charges consistently with that.

            We can imagine our current understanding to be incomplete, and that the real world has subtleties we haven’t yet detected. But I’m not aware of any outstanding mysteries that suggest strongly that we’re near that point.

            So, given assumptions that seem straightforward enough, and that match experiment as well as we’re able to measure, physicists and mathematicians are generally inclined to say that these assumptions are correct. Or at least correct enough for the context in which they’re used. This is starting to get into the philosophy of science and the concept of experimental proof and gets, I admit, beyond what I’m competent to discuss with authority.

            Like

            • davekingsbury 8:43 pm on Friday, 5 February, 2016 Permalink | Reply

              Thanks for taking the time (and space) to explain this so clearly and enjoyably to a rookie. No more questions, promise … for now!

              Like

    • Gillian B 5:39 am on Wednesday, 3 February, 2016 Permalink | Reply

      Isomorphism.

      Like

    • gaurish 7:00 am on Monday, 8 February, 2016 Permalink | Reply

      Normal subgroup (easy one) or Number (difficult one, Bertrand Russell tried it once).

      Like

      • Joseph Nebus 5:24 am on Wednesday, 10 February, 2016 Permalink | Reply

        Oh, number is easy. Three, for example, is the thing that’s in common among Marx Brothers, blind mice, tricycle wheels, penny operas, and balls in the Midnight Multiball of the pinball game FunHouse. Normal subgroup, now that’s hard.

        Like

    • gaurish 7:10 am on Monday, 8 February, 2016 Permalink | Reply

      Transcendental numbers; Dedekind domain; matrix; polynomial; quaternions; surjective map; vector.

      Like

      • Joseph Nebus 5:26 am on Wednesday, 10 February, 2016 Permalink | Reply

        There’s some good challenges here! My first reaction was to say I didn’t even know what a Dedekind domain was, although in looking it up I realize that I must have learned of them. I just haven’t thought of one in obviously too long, and I like the chance to learn something just in time to explain it.

        Like

    • elkement (Elke Stangl) 7:40 am on Monday, 8 February, 2016 Permalink | Reply

      C as Conjecture. More of a history of science question: When is an ‘unproven idea’ honored by being called a conjecture?

      Like

      • Joseph Nebus 5:35 am on Wednesday, 10 February, 2016 Permalink | Reply

        Conjecture may work, yes, and fits neatly against axiom, assuming I use that.

        I’m not sure there is a clear guide to when an unproven idea gets elevated to the status of a name-worthy conjecture. I suspect the process would defy any rationally describable rule. There are conjectures throughout the mathematical literature, and those tend to mean the person writing the paper had a hunch that something might be so, but didn’t have the time or ability to prove it and is happy to let someone else try.

        But to be, let’s say, the Stangl Conjecture takes more. I suspect part is that it has to be something that feels likely to be true, and which has some obviously interesting consequence if true (or false). That can’t be all, though. The Collatz Conjecture, as I’ve mentioned, seems to be nothing but an amusing trifle. But then that’s also a conjecture that’s very easy for anyone to understand, and it has some beauty to it. The low importance of it might be balanced by how much fun it seems to be and how everyone can be in on the fun.

        I’ll have to do some more poking around famous conjectures, though, and see if I can better characterize what they have in common.

        Liked by 1 person

    • elkement (Elke Stangl) 7:30 am on Tuesday, 1 March, 2016 Permalink | Reply

      Equation or Differential Equation, depending on which letter is still open. I am thinking of the way THE FORMULA is depicted in movies, and I believe that it might imply that anything with an equal sign in it is more like Ohm’s Law – a ‘formula’ you just have to plug numbers into. I am sure you can explain the difference between a simple formula and a differential equation nicely :-) Or use Formula instead if F has not been taken.

      Like

      • Joseph Nebus 8:57 pm on Tuesday, 1 March, 2016 Permalink | Reply

        Hm. I may take you up on differential equation, since the first nominee — Dedekind domains — is taxing my imagination. And I’d slid continued fractions over to F … but I will think about whether I can find a way to put Formula in under another letter.

        Liked by 1 person

    • Jacob Kanev 11:54 pm on Friday, 4 March, 2016 Permalink | Reply

      First things that tumble into my mind: Itô integral, Stratonovich integral, Kullback-Leibler divergence, Fisher information, Turing machine, Church’s lemma (is this the correct term in English? And you have ‘C’ already, haven’t you?), grammars (both context sensitive and not), Girsanov transformation (sorry for using ‘G’ twice), filtration (I’d really like a good explanation of this one) (and ‘F’), Banach spaces. Orthogonal. Projection. Distance. Metric. Measure. NP-completeness? Gödel’s theorem? Laws of form (that calculus by George Spencer Brown)?

      Might be too nichey, though. You decide.

      Lots of regards, Jacob.

      Like

      • Joseph Nebus 7:36 am on Wednesday, 9 March, 2016 Permalink | Reply

        Well, wow. I do have a couple of these letters taken already — I’ve got through ‘F’ penciled in, plus a couple such as ‘I’ taken after that. But I’ll try to get as many of these as I can done in a coherent form. It’s going to be an exciting month ahead.

        Like

  • Joseph Nebus 9:01 pm on Thursday, 17 September, 2015 Permalink | Reply
    Tags: glossary   

    Mean Green Math Likes Me 


    I’m embarrassed to be late writing about this. I can only plead that it’s been a busy month around my parts and I somehow haven’t got back on top of things. The Mean Green Math blog, though, has been so good as to highlight my Summer A To Z series. At the risk of being too self-involved, I’d like to point folks over to that, since John Quintanilla adds some comments about what those are about, or sometimes about his own experiences learning mathematics. And the blog’s quite interesting in its own right, and points to quite a few interesting mathematical or mathematics-education topics.

     
  • Joseph Nebus 2:49 pm on Wednesday, 1 July, 2015 Permalink | Reply
    Tags: , , glossary, median, , quintiles, , word counts   

    A Summer 2015 Mathematics A To Z: quintile 


    Quintile.

    Why is there statistics?

    There are many reasons statistics got organized as a field of study mostly in the late 19th and early 20th century. Mostly they reflect wanting to be able to say something about big collections of data. People can only keep track of so much information at once. Even if we could keep track of more information, we’re usually interested in relationships between pieces of data. When there’s enough data there are so many possible relationships that we can’t see what’s interesting.

    One of the things statistics gives us is a way of representing lots of data with fewer numbers. We trust there’ll be few enough numbers we can understand them all simultaneously, and so understand something about the whole data.

    Quintiles are one of the tools we have. They’re a lesser tool, I admit, but that makes them sound more exotic. They’re descriptions of how the values of a set of data are distributed. Distributions are interesting. They tell us what kinds of values are likely and which are rare. They tell us also how variable the data is, or how reliably we are measuring data. These are things we often want to know: what is normal for the thing we’re measuring, and what’s a normal range?

    We get quintiles from imagining the data set placed in ascending order. There’s some value that one-fifth of the data points are smaller than, and four-fifths are greater than. That’s your first quintile. Suppose we had the values 269, 444, 525, 745, and 1284 as our data set. The first quintile would be the arithmetic mean of the 269 and 444, that is, 356.5.

    The second quintile is some value that two-fifths of your data points are smaller than, and that three-fifths are greater than. With that data set we started with that would be the mean of 444 and 525, or 484.5.

    The third quintile is a value that three-fifths of the data set is less than, and two-fifths greater than; in this case, that’s the mean of 525 and 745, or 635.

    And the fourth quintile is a value that four-fifths of the data set is less than, and one-fifth greater than. That’s the mean of 745 and 1284, or 1014.5.
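    If you’d like the convention spelled out in code, here’s a minimal Python sketch of it (my own; it averages the neighboring values whenever a cut lands exactly on a boundary, the way the calculations above do):

```python
import math

def quintiles(data):
    """Cut points below which 1/5, 2/5, 3/5, 4/5 of the data fall.

    Midpoint convention: when a cut lands exactly between two data
    points, average them. Statistics libraries mostly interpolate
    instead, so their numbers can differ slightly.
    """
    xs = sorted(data)
    m = len(xs)
    cuts = []
    for k in range(1, 5):
        p = k * m / 5
        if p == int(p):                       # cut falls on a boundary
            p = int(p)
            cuts.append((xs[p - 1] + xs[p]) / 2)
        else:                                 # cut falls inside one value
            cuts.append(xs[math.ceil(p) - 1])
    return cuts

# The data above: word counts of the first five A-to-Z essays.
print(quintiles([269, 444, 525, 745, 1284]))  # [356.5, 484.5, 635.0, 1014.5]
```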

    From looking at the quintiles we can say … well, not much, because this is a silly made-up problem that demonstrates how quintiles are calculated rather than why we’d want to do anything with them. At least the numbers come from real data. They’re the word counts of my first five A-to-Z definitions. But the existence of the quintiles at 356.5, 484.5, 635, and 1014.5, along with the minimum and maximum data points at 269 and 1284, tells us something. Mostly that the numbers are bunched up in the three and four hundreds, but that there can be some weirdly high values. If we had a bigger data set the results would be less obvious.

    If the calculating of quintiles sounds much like the way we work out the median, that’s because it is. The median is the value that half the data is less than, and half the data is greater than. There are other ways of breaking down distributions. The first quartile is the value one-quarter of the data is less than. The second quartile is a value two-quarters of the data is less than (so, yes, that’s the median all over again). The third quartile is a value three-quarters of the data is less than.

    Percentiles are another variation on this. The (say) 26th percentile is a value that 26 percent — 26 hundredths — of the data is less than. The 72nd percentile is a value greater than 72 percent of the data.
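    Python’s standard library, for what it’s worth, computes quartiles, quintiles, and percentiles with a single function. It interpolates rather than averaging neighbors, so on a tiny data set its quintiles come out a little different from the ones worked out above:

```python
import statistics

data = [269, 444, 525, 745, 1284]

print(statistics.quantiles(data, n=4))    # 3 quartile cut points
print(statistics.quantiles(data, n=5))    # 4 quintile cut points
print(statistics.quantiles(data, n=100))  # 99 percentile cut points
```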

    Are quintiles useful? Well, that’s a loaded question. They are used less than quartiles are. And I’m not sure knowing them is better than looking at a spreadsheet’s plot of the data. A plot of the data with the quintiles, or quartiles if you prefer, drawn in is better than either separately. But these are among the tools we have to tell what data values are likely, and how tightly bunched-up they are.

     
  • Joseph Nebus 3:04 pm on Monday, 29 June, 2015 Permalink | Reply
    Tags: , , glossary, , proper, subsets, triviality   

    A Summer 2015 Mathematics A To Z: proper 


    Proper.

    So there’s this family of mathematical jokes. They run about like this:

    A couple people are in a hot air balloon that’s drifted off course. They’re floating towards a hill, and they can barely make out a person on the hill. They cry out, “Where are we?” And the person stares at them, and thinks, and watches the balloon sail aimlessly on. Just as the balloon is about to leave shouting range, the person cries out, “You are in a balloon!” And one of the balloonists says, “Great, we would have to get a mathematician.” “How do you know that was a mathematician?” “The person gave us an answer that’s perfectly true, is completely useless, and took a long time to produce.”

    (There are equivalent jokes told about lawyers and consultants and many other sorts of people.)

    A lot of mathematical questions have multiple answers. Factoring is a nice familiar example. If I ask “what’s a factor of 5,280”, you can answer “1” or “2” or “55” or “1,320” or some 44 other answers, each of them right. But some answers are boring. For example, 1 is a factor of every whole number. And any number is a factor of itself; you can divide 5,280 by 5,280 and get 1. The answers are right, yes, but they don’t tell you anything interesting. You know these two answers before you’ve even heard the question. So a boring answer like that we often write off as trivial.

    A proper solution, then, is one that isn’t boring. The word runs through mathematics, attaching to many concepts. What exactly it means depends on the concept, but the general idea is the same: it means “not one of the obvious, useless answers”. A proper factor, for example, excludes the original number. Sometimes it excludes “1”, sometimes not. Depends on who’s writing the textbook. For another example, consider sets, which are collections of things. A subset is a collection of things all of which are already in a set. Every set is therefore a subset of itself. To be a proper subset, there has to be at least one thing in the original set that isn’t in the proper subset.
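    Both of those examples are easy to poke at in code. A small sketch (mine): list the divisors of 5,280, trim off the improper ones, and note that Python’s set comparisons already know the difference between a subset and a proper subset.

```python
def divisors(n):
    """All positive divisors of n, in increasing order."""
    return [d for d in range(1, n + 1) if n % d == 0]

ds = divisors(5280)
print(len(ds))            # 48 -- the four answers above plus "some 44 other answers"
proper = ds[:-1]          # drop 5280 itself; some textbooks drop 1 as well
print(proper[:6], "...", proper[-1])   # [1, 2, 3, 4, 5, 6] ... 2640

# For sets, `<=` tests subset and `<` tests *proper* subset:
a = {1, 2}
b = {1, 2, 3}
print(a <= b, a < b)      # True True
print(b <= b, b < b)      # True False -- a set is never a proper subset of itself
```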

     