Tagged: integers

  • Joseph Nebus 6:00 pm on Monday, 7 August, 2017
    Tags: integers

    The Summer 2017 Mathematics A To Z: Diophantine Equations 


    I have another request from Gaurish, of the For The Love Of Mathematics blog, today. It’s another change of pace.

    Diophantine Equations

    A Diophantine equation is a polynomial equation. Well, of course it is. It’s an equation, or a set of equations, setting one polynomial equal to another. Possibly equal to a constant. What makes this different from “any old equation” is the coefficients. These are the constant numbers that you multiply the variables, your x and y and x^2 and z^8 and so on, by. To make a Diophantine equation all these coefficients have to be integers. You know one well, because it’s that x^n + y^n = z^n thing that Fermat’s Last Theorem is all about. And you’ve probably seen ax + by = 1. It turns up a lot because that’s a line, and we do a lot of stuff with lines.

    Diophantine equations are interesting. There are a couple of cases that are easy to solve. I mean, at least that we can find solutions for. ax + by = 1, for example, that’s easy to solve. x^n + y^n = z^n it turns out we can’t solve. Well, we can if n is equal to 1 or 2. Or if x or y or z is zero. These are obvious, that is, they’re quite boring. That one took about three and a half centuries to solve, and the solution was “there aren’t any solutions”. This may convince you of how interesting these problems are. What, from looking at it, tells you that ax + by = 1 is simple while x^n + y^n = z^n is (most of the time) impossible?
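
    To see why ax + by = 1 counts as easy: the extended Euclidean algorithm hands us a solution whenever a and b share no common factor. Here’s a quick sketch in Python; the function name and the example numbers are just my own choices.

        def extended_gcd(a, b):
            # Returns (g, x, y) with a*x + b*y == g, the greatest common divisor.
            if b == 0:
                return (a, 1, 0)
            g, x, y = extended_gcd(b, a % b)
            return (g, y, x - (a // b) * y)

        # Solve 15x + 28y = 1. A solution exists because gcd(15, 28) = 1.
        g, x, y = extended_gcd(15, 28)
        print(g, x, y)    # 1 -13 7, and indeed 15*(-13) + 28*7 = -195 + 196 = 1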

    I don’t know. Nobody really does. There are many kinds of Diophantine equation, all different-looking polynomials. Some of them are special one-off cases, like x^n + y^n = z^n. For example, there’s x^4 + y^4 + z^4 = w^4 for some integers x, y, z, and w. Leonhard Euler conjectured this equation had only boring solutions. You’ll remember Euler. He wrote the foundational work for every field of mathematics. It turns out he was wrong. It has infinitely many interesting solutions. But the first one found is 2,682,440^4 + 15,365,639^4 + 18,796,760^4 = 20,615,673^4 and that one took a computer search to find. We can forgive Euler not noticing it.
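
    Python’s whole numbers never overflow, so if you don’t want to take that identity on faith it’s a one-line check:

        # Checking the quartic counterexample with exact, arbitrary-precision integers.
        left = 2682440**4 + 15365639**4 + 18796760**4
        right = 20615673**4
        print(left == right)    # True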

    Some are groups of equations that have similar shapes. There’s the Fermat’s Last Theorem formula, for example, which is a different equation for every different integer n. Then there’s what we call Pell’s Equation. This one is x^2 - D y^2 = 1 (or equals -1), for some counting number D. It’s named for the English mathematician John Pell, who did not discover the equation (even in the Western European tradition; Indian mathematicians were familiar with it for a millennium), did not solve the equation, and did not do anything particularly noteworthy in advancing human understanding of the solution. Pell owes his fame in this regard to Leonhard Euler, who misunderstood Pell’s revising a translation of a book discussing a solution for Pell’s authoring a solution. I confess Euler isn’t looking very good on Diophantine equations.
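
    Pell’s Equation is fun to poke at by brute force, at least for small D. A sketch of that poking; the search cutoff is an arbitrary choice of mine, and the real tool for the job is continued fractions:

        from math import isqrt

        def smallest_pell_solution(D, limit=10**6):
            # Find the least y >= 1 making D*y*y + 1 a perfect square; return (x, y).
            for y in range(1, limit):
                x_squared = D * y * y + 1
                x = isqrt(x_squared)
                if x * x == x_squared:
                    return (x, y)
            return None

        for D in (2, 3, 5, 61):
            print(D, smallest_pell_solution(D))
        # 2 (3, 2)   3 (2, 1)   5 (9, 4)
        # 61 None -- its smallest solution has y = 226,153,980, far past the cutoff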

    But nobody looks very good on Diophantine equations. Make up a Diophantine equation of your own. Use whatever whole numbers, positive or negative, that you like for your equation. Use whatever powers of however many variables you like for your equation. So you get something that looks maybe like this:

    7x^2 - 20y + 18y^2 - 38z = 9

    Does it have any solutions? I can’t say just from looking at it. And, more to the point, nobody has a general all-around method that would say, for this or for whatever equation you might write next. You know how with a quadratic equation we have this formula where you recite some incantation about “b squared minus four a c” and get any roots that exist? Nothing like that exists for Diophantine equations in general. Specific ones, yes. But they’re all specialties, crafted to fit the equation that has just that shape.
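
    About the best general-purpose tool there is, then, is plain searching. A brute-force sketch in Python (the search box is an arbitrary choice of mine) turns up solutions for my made-up equation, though it can’t say a thing about what lies outside the box:

        # Hunt for integer solutions of 7x^2 - 20y + 18y^2 - 38z = 9 by scanning
        # x and y over a small box and solving for z whenever it comes out whole.
        solutions = []
        for x in range(-30, 31):
            for y in range(-30, 31):
                leftover = 7*x*x - 20*y + 18*y*y - 9
                if leftover % 38 == 0:
                    solutions.append((x, y, leftover // 38))
        print(len(solutions) > 0)          # True: solutions do exist
        print((11, 1, 22) in solutions)    # True: 7*121 - 20*1 + 18*1 - 38*22 = 9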

    So for each equation we have to ask: is there a solution? Is there any solution that isn’t obvious? Are there finitely many solutions? Are there infinitely many? Either way, can we find all the solutions? And we have to answer these anew for each kind of equation. What answers the questions have, whether answers are known to exist, whether answers can exist at all: all of it we have to discover afresh every time. Knowing answers for one kind doesn’t help us for any others, except as inspiration. If some trick worked before, maybe it will work this time.

    There are a couple of usually reliable tricks. Can the equation be rewritten in some way that it becomes the equation for a line? If it can, we probably have a good handle on any solutions. Can we apply modulo arithmetic to the equation? If we can, we might be able to reduce the number of possible solutions that the equation has. In particular we might be able to reduce the number of possible solutions until we can just check every case. Can we use induction? That is, can we show there’s some parameter for the equations, and that knowing the solutions for one value of that parameter implies knowing solutions for larger values? And then find some small enough value we can test it out by hand? Or can we show that if there is a solution, then there must be a smaller solution, and smaller yet, until we can either find an answer or show there aren’t any? Sometimes. Not always. The field blends seamlessly into number theory. And number theory is full of problems easy to pose and hard or impossible to solve.
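
    Here’s the modulo trick in miniature, on a textbook example that isn’t from anywhere in particular: x^2 + y^2 = 4z + 3 has no integer solutions at all, because squares are only ever 0 or 1 modulo 4, so the left side can never be 3 more than a multiple of 4. The check, spelled out:

        # Squares modulo 4 are only ever 0 or 1, so x^2 + y^2 is never 3 modulo 4.
        # That rules out every possible solution of x^2 + y^2 = 4z + 3 at once.
        square_residues = {(x * x) % 4 for x in range(4)}
        sum_residues = {(a + b) % 4 for a in square_residues for b in square_residues}
        print(square_residues)    # {0, 1}
        print(sum_residues)       # {0, 1, 2}
        print(3 in sum_residues)  # False -- no solutions, in any range whatsoever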

    We name these equations after Diophantus of Alexandria, a 3rd century Greek mathematician. His writings, what we have of them, discuss how to solve equations. Not general solutions, the way we might want to solve ax^2 + bx + c = 0, but specific ones, like 1x^2 - 5x + 6 = 0. His books are among those whose rediscovery shaped the rebirth of mathematics. Pierre de Fermat scribbled his famous note in the too-small margins of Diophantus’s Arithmetica. (Well, a popular translation.)

    But the field predates Diophantus, at least if we look at specific problems. Of course it does. In mathematics, as in life, any search for a source ends in a vast, marshy ambiguity. The field stays vital. If we loosen ourselves to looking at inequalities — x - Dy^2 < A , let's say — then we start seeing optimization problems. What values of x and y will make this equation most nearly true? What values will come closest to satisfying this bunch of equations? The questions are about how to find the best possible fit to whatever our complicated sets of needs are. We can't always answer. We keep searching.

     
  • Joseph Nebus 6:00 pm on Friday, 9 December, 2016
    Tags: evens, integers, normal subgroups, odds

    The End 2016 Mathematics A To Z: Quotient Groups 


    I’ve got another request today, from the ever-interested and group-theory-minded gaurish. It’s another inspirational one.

    Quotient Groups.

    We all know about even and odd numbers. We don’t have to think about them. That’s why it’s worth discussing them some.

    We do know what they are, though. The integers — whole numbers, positive and negative — we can split into two sets. One of them is the even numbers, two and four and eight and twelve. Zero, negative two, negative six, negative 2,038. The other is the odd numbers, one and three and nine. Negative five, negative nine, negative one.

    What do we know about numbers, if all we look at is whether numbers are even or odd? Well, we know every integer is either an odd or an even number. It’s not both; it’s not neither.

    We know that if we start with an even number, its negative is also an even number. If we start with an odd number, its negative is also an odd number.

    We know that if we start with a number, even or odd, and add to it its negative then we get an even number. A specific number, too: zero. And that zero is interesting because any number plus zero is that same original number.

    We know we can add odds or evens together. An even number plus an even number will be an even number. An odd number plus an odd number is an even number. An odd number plus an even number is an odd number. And subtraction is the same as addition, by these lights. One number minus another number is just the first number plus the negative of the other number. So even minus even is even. Odd minus odd is even. Odd minus even is odd.

    We can pluck out some of the even and odd numbers as representative of these sets. We don’t want to deal with big numbers, nor do we want to deal with negative numbers if we don’t have to. So take ‘0’ as representative of the even numbers. ‘1’ as representative of the odd numbers. 0 + 0 is 0. 0 + 1 is 1. 1 + 0 is 1. The addition is the same thing we would do with the original set of integers. 1 + 1 would be 2, which is one of the even numbers, which we represent with 0. So 1 + 1 is 0. If we’ve picked out just these two numbers each is the minus of itself: 0 – 0 is 0 + 0. 1 – 1 is 1 + 1. All that gives us 0, like we should expect.
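
    Those little sums are nothing but arithmetic modulo 2, which a couple of lines of Python will spell out:

        # Adding the representatives 0 (for even) and 1 (for odd), reduced modulo 2.
        for a in (0, 1):
            for b in (0, 1):
                print(a, "+", b, "=", (a + b) % 2)
        # 0 + 0 = 0    0 + 1 = 1    1 + 0 = 1    1 + 1 = 0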

    Two paragraphs back I said something that’s obvious, but deserves attention anyway. An even plus an even is an even number. You can’t get an odd number out of it. An odd plus an odd is an even number. You can’t get an odd number out of it. There’s something fundamentally different between the even and the odd numbers.

    And now, kindly reader, you’ve learned quotient groups.

    OK, I’ll do some backfilling. It starts with groups. A group is the most skeletal cartoon of arithmetic. It’s a set of things and some operation that works like addition. The thing-like-addition has to work on pairs of things in your set, and it has to give something else in the set. There has to be a zero, something you can add to anything without changing it. We call that the identity, or the additive identity, because it doesn’t change something else’s identity. It makes sense if you don’t stare at it too hard. Everything has an additive inverse. That is, everything has a “minus” that you can add to it to get zero.
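
    If you like, you can have a computer check those requirements for any finite set and addition rule. A sketch, with names of my own choosing; I also check associativity, which the description above takes for granted:

        def is_group(elements, add):
            # Closure: adding any two things gives another thing in the set.
            if any(add(a, b) not in elements for a in elements for b in elements):
                return False
            # Associativity: grouping doesn't matter, just like ordinary addition.
            if any(add(add(a, b), c) != add(a, add(b, c))
                   for a in elements for b in elements for c in elements):
                return False
            # Identity: some 'zero' leaves everything unchanged.
            zeros = [z for z in elements
                     if all(add(z, a) == a and add(a, z) == a for a in elements)]
            if not zeros:
                return False
            zero = zeros[0]
            # Inverses: everything has a 'minus' that adds back to zero.
            return all(any(add(a, b) == zero for b in elements) for a in elements)

        # The integers modulo 2, with ordinary addition reduced modulo 2:
        print(is_group({0, 1}, lambda a, b: (a + b) % 2))    # True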

    With odd and even numbers the set of things is the integers. The thing-like-addition is, well, addition. I said groups were based on how normal arithmetic works, right?

    And then you need a subgroup. A subgroup is … well, it’s a subset of the original group that’s itself a group. It has to use the same addition the original group does. The even numbers are such a subgroup of the integers. Formally they make something called a “normal subgroup”, which is a little too much for me to explain right now. If your addition works like it does for normal numbers, that is, “a + b” is the same thing as “b + a”, then all your subgroups are normal subgroups. Yes, it can happen that they’re not. If the addition is something like rotations in three-dimensional space, or swapping the order of things, then the order you “add” things in matters.

    We make a quotient group by … OK, this isn’t going to sound like anything. It’s a group, though, like the name says. It uses the same addition that the original group does. Its set, though, that’s itself made up of sets. One of the sets is the normal subgroup. That’s the easy part.

    Then there’s something called cosets. You make a coset by picking something from the original group and adding it to everything in the subgroup. If the thing you pick was from the original subgroup that’s just going to be the subgroup again. If you pick something outside the original subgroup then you’ll get some other set.

    Starting from the subgroup of even numbers there’s not a lot to do. You can get the even numbers and you get the odd numbers. Doesn’t seem like much. We can do otherwise, though. Suppose we start from the subgroup of numbers divisible by 4. That’s 0, 4, 8, 12, -4, -8, -12, and so on. Now there are four cosets we can make from that. We can start with the original set of numbers. Or we have 1 plus that set: 1, 5, 9, 13, -3, -7, -11, and so on. Or we have 2 plus that set: 2, 6, 10, 14, -2, -6, -10, and so on. Or we have 3 plus that set: 3, 7, 11, 15, -1, -5, -9, and so on. None of these others are subgroups, which is why we don’t call them subgroups. We call them cosets.
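
    You can watch those cosets appear by shifting a stretch of the subgroup. A sketch in Python, looking at just a finite window of the integers:

        # A finite window onto the multiples of 4, and the sets you get by shifting it.
        Q = [n for n in range(-20, 21) if n % 4 == 0]
        for shift in (0, 1, 2, 3):
            print(shift, [shift + q for q in Q[:6]], "...")
        # Shift 0 is the subgroup itself; a shift of 4 lands on multiples of 4 again,
        # so these four are the only distinct cosets there are.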

    These collections of cosets, though, they’re the pieces of a new group. The quotient group. One of them, the normal subgroup you started with, is the identity, the thing that’s as good as zero. And you can “add” the cosets together, in just the same way you can add “odd plus odd” or “odd plus even” or “even plus even”.

    For example. Let me start with the numbers divisible by 4. I will have so much a better time if I give this a name. I’ll pick ‘Q’. This is because, you know, quarters, quartet, quadrilateral, this all sounds like four-y stuff. The integers — the integers have a couple of names. ‘I’, ‘J’, and ‘Z’ are the most common ones. We get ‘Z’ from German; a lot of important group theory was done by German-speaking mathematicians. I’m used to it so I’ll stick with that. The quotient group ‘Z / Q’, read “Z modulo Q”, has (it happens) four cosets. One of them is Q. One of them is “1 + Q”, that set 1, 5, 9, and so on. Another of them is “2 + Q”, that set 2, 6, 10, and so on. And the last is “3 + Q”, that set 3, 7, 11, and so on.

    And you can add them together. 1 + Q plus 1 + Q turns out to be 2 + Q. Try it out, you’ll see. 1 + Q plus 2 + Q turns out to be 3 + Q. 2 + Q plus 2 + Q is Q again.
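
    The “try it out” is easy to automate, too: add every element of 1 + Q to every element of 1 + Q and see which coset all the sums land in. Again just over a finite window:

        # Which residue class, modulo 4, do sums of two elements of 1 + Q land in?
        one_plus_Q = {1 + n for n in range(-40, 41) if n % 4 == 0}
        sums = {a + b for a in one_plus_Q for b in one_plus_Q}
        print({s % 4 for s in sums})    # {2} -- every single sum sits inside 2 + Q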

    The quotient group uses the same addition as the original group. But it doesn’t add together elements of the original group, or even of the normal subgroup. It adds together sets made from the normal subgroup. We’ll denote them using some form that looks like “a + N”, or maybe “a N”, if ‘N’ was the normal subgroup and ‘a’ something that wasn’t in it. (Sometimes it’s more convenient writing the group operation like it was multiplication, because we do that by not writing anything at all, which saves us from writing stuff.)

    If we’re comfortable with the idea that “odd plus odd is even” and “even plus odd is odd” then we should be comfortable with adding together quotient groups. We’re not, not without practice, but that’s all right. In the Introduction To Not That Kind Of Algebra course mathematics majors take they get a lot of practice, just in time to be thrown into rings.

    Quotient groups land on the mathematics major as a baffling thing. They don’t actually turn up things from the original group. And they lead into important theorems. But to an undergraduate they all look like text huddling up to ladders of quotient groups. We’re told these are important theorems and they are. They also go along with beautiful diagrams of how these quotient groups relate to each other. But they’re hard going. It’s tough finding good examples and almost impossible to explain what a question is. It comes as a relief to be thrown into rings. By the time we come back around to quotient groups we’ve usually had enough time to get used to the idea, and they don’t seem so hard anymore.

    Really, looking at odds and evens, they shouldn’t be so hard.

     
    • gaurish 9:10 am on Saturday, 10 December, 2016

      Thanks! When I first learnt about quotient groups (two years ago) I visualized them as the equivalence classes we create so as to have a better understanding of a bigger group (since my study of algebra has been motivated by its need in Number theory as a generalization of modulo arithmetic). Then the isomorphism theorems just changed the way I look at quotient of an algebraic structure. See: http://math.stackexchange.com/q/1816921/214604


      • Joseph Nebus 5:47 am on Saturday, 17 December, 2016

        I’m glad that you liked. I do think equivalence classes are the easiest way into quotient groups — it’s essentially what I did here — but that’s because people get introduced to equivalence classes without knowing what they are. Odd and even numbers, for example, or checking arithmetic by casting out nines are making use of these classes. Isomorphism theorems are great and substantial but they do take so much preparation to get used to. Probably shifting from the first to the second is the sign of really mastering the idea of a quotient group.


  • Joseph Nebus 6:00 pm on Wednesday, 30 November, 2016
    Tags: integers, Monster Group

    The End 2016 Mathematics A To Z: Monster Group 


    Today’s is one of my requested mathematics terms. This one comes to us from group theory, by way of Gaurish, and as ever I’m thankful for the prompt.

    Monster Group.

    It’s hard to learn from an example. Examples are great, and I wouldn’t try teaching anything subtle without one. Might not even try teaching the obvious without one. But a single example is dangerous. The learner has trouble telling what parts of the example are the general lesson to learn and what parts are just things that happen to be true for that case. Having several examples, of different kinds of things, saves the student. The thing in common to many different examples is the thing to retain.

    The mathematics major learns group theory in Introduction To Not That Kind Of Algebra, MAT 351. A group extracts the barest essence of arithmetic: a bunch of things and the ability to add them together. So what’s an example? … Well, the integers do nicely. What’s another example? … Well, the integers modulo two, where the only things are 0 and 1 and we know 1 + 1 equals 0. What’s another example? … The integers modulo three, where the only things are 0 and 1 and 2 and we know 1 + 2 equals 0. How about another? … The integers modulo four? Modulo five?

    All true. All, also, basically the same thing. The whole set of integers, or of real numbers, is different. But as finite groups, the integers modulo anything are nice, easy-to-understand groups. They’re known as Cyclic Groups for reasons I’ll explain if asked. But all the Cyclic Groups are kind of the same.

    So how about another example? And here we get some good ones. There’s the Permutation Groups. These are fun. You start off with a set of things. You can label them anything you like, but you’re daft if you don’t label them the counting numbers. So, say, the set of things 1, 2, 3, 4, 5. Start with them in that order. A permutation is a rearrangement of those things. The simplest kind is the swapping of a pair: say, swapping the second and fifth things to get the list 1, 5, 3, 4, 2. The collection of all the rearrangements you can build out of swaps is the Permutation Group on this set of things. The things in the group are not 1, 2, 3, 4, 5. The things in the permutation group are “swap the second and fifth thing” or “swap the third and first thing” or “swap the fourth and the third thing”. You maybe feel uneasy about this. That’s all right. I suggest playing with this until you feel comfortable because it is a lot of fun to play with. Playing in this case means writing out all the ways you can swap stuff, which you can always do as a string of swaps of exactly two things.
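
    If you’d like to play along on a computer, here’s a sketch of the sort of thing to try; the function name and the starting list are just my choices:

        from itertools import permutations

        def swap(items, i, j):
            # Swap the i-th and j-th things (counting positions from 1, as in the prose).
            items = list(items)
            items[i - 1], items[j - 1] = items[j - 1], items[i - 1]
            return items

        start = [1, 2, 3, 4, 5]
        print(swap(start, 2, 5))                 # [1, 5, 3, 4, 2]
        print(swap(swap(start, 2, 5), 4, 2))     # [1, 4, 3, 5, 2]
        print(len(list(permutations(start))))    # 120 rearrangements of five things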

    (Some people may remember an episode of Futurama that involved a brain-swapping machine. Or a body-swapping machine, if you prefer. The gimmick of the episode is that two people could only swap bodies/brains exactly one time. The problem was how to get everybody back in their correct bodies. It turns out to be possible to do, and one of the show’s writers did write a proof of it. It’s shown on-screen for a moment. Many fans were awestruck by an episode of the show inspiring a Mathematical Theorem. They’re overestimating how rare theorems are. But it is fun when real mathematics gets done as a side effect of telling a good joke. Anyway, the theorem fits well in group theory and the study of these permutation groups.)

    So the student wanting examples of groups can get the Permutation Group on three elements. Or the Permutation Group on four elements. The Permutation Group on five elements. … You kind of see, this is certainly different from those Cyclic Groups. But they’re all kind of like each other.

    An “Alternating Group” is one where every element is built from an even number of swaps. So, “swap the second and fifth things” would not be in an alternating group. But “swap the second and fifth things, and swap the fourth and second things” would be. And so the student needing examples can look at the Alternating Group on two elements. Or the Alternating Group on three elements. The Alternating Group on four elements. And so on. It’s slightly different from the Permutation Group. It’s certainly different from the Cyclic Group. But still, if you’ve mastered the Alternating Group on five elements you aren’t going to see the Alternating Group on six elements as all that different.
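
    Whether a rearrangement takes an even or an odd number of swaps is something a short function can decide, by counting how many pairs sit out of order. A sketch that picks out the Alternating Group on four things:

        from itertools import permutations

        def is_even(perm):
            # A permutation is even exactly when its count of out-of-order pairs
            # (inversions) is even; each swap changes that count by an odd amount.
            inversions = sum(1 for i in range(len(perm))
                               for j in range(i + 1, len(perm))
                               if perm[i] > perm[j])
            return inversions % 2 == 0

        everything = list(permutations(range(4)))
        alternating = [p for p in everything if is_even(p)]
        print(len(everything), len(alternating))    # 24 12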

    Cyclic Groups and Alternating Groups have some stuff in common. Permutation Groups not so much and I’m going to leave them in the above paragraph, waving, since they got me to the Alternating Groups I wanted.

    One is that they’re finite. At least they can be. I like finite groups. I imagine students like them too. It’s nice having a mathematical thing you can write out in full and know you aren’t missing anything.

    The second thing is that they are, or they can be, “simple groups”. That’s … a challenge to explain. This has to do with the structure of the group and the kinds of subgroup you can extract from it. It’s very very loosely and figuratively and do not try to pass this off at your thesis defense kind of like being a prime number. In fact, Cyclic Groups for a prime number of elements are simple groups. So are Alternating Groups on five or more elements.

    So we get to wondering: what are the finite simple groups? Turns out they come in four main families. One family is the Cyclic Groups for a prime number of things. One family is the Alternating Groups on five or more things. One family is this collection called the Chevalley Groups. Those are mostly things about projections: the ways to map one set of coordinates into another. We don’t talk about them much in Introduction To Not That Kind Of Algebra. They’re too deep into Geometry for people learning Algebra. The last family is this collection called the Twisted Chevalley Groups, or the Steinberg Groups. And they … uhm. Well, I never got far enough into Geometry I’m Guessing to understand what they’re for. I’m certain they’re quite useful to people working in the field of order-three automorphisms of the whatever exactly D4 is.

    And that’s it. That’s all the families there are. If it’s a finite simple group then it’s one of these. … Unless it isn’t.

    Because there are a couple of stragglers. There are a few finite simple groups that don’t fit in any of the four big families. And it really is only a few. I would have expected an infinite number of weird little cases that don’t belong to a family that looks similar. Instead, there are 26. (27 if you decide a particular one of the Steinberg Groups doesn’t really belong in that family. I’m not familiar enough with the case to have an opinion.) Funny number to have turn up. It took ten thousand pages to prove there were just the 26 special cases. I haven’t read them all. (I haven’t read any of the pages. But my Algebra professors at Rutgers were proud to mention their department’s work in tracking down all these cases.)

    Some of these cases have some resemblance to one another. But not enough to see them as a family the way the Cyclic Groups are. We bundle all these together in a wastebasket taxon called “the sporadic groups”. The first five of them were worked out in the 1860s and 1870s. The last of them was worked out in 1980, seven years after its existence was first suspected.

    The sporadic groups all have weird sizes. The smallest one, known as M11 (for “Mathieu”, who found it and four of its siblings in the 1860s and 1870s) has 7,920 things in it. They get enormous soon after that.

    The biggest of the sporadic groups, and the last one described, is the Monster Group. It’s known as M. It has a lot of things in it. In particular it’s got 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 things in it. So, you know, it’s not like we’ve written out everything that’s in it. We’ve just got descriptions of how you would write out everything in it, if you wanted to try. And you can get a good argument going about what it means for a mathematical object to “exist”, or to be “created”. There are something like 10^54 things in it. That’s far more than even a trillion times a trillion times the number of stars in the observable universe. Not just the stars in our galaxy, but all the stars in all the galaxies we could in principle ever see.
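
    The number is known exactly, and it’s built out of ordinary small primes. Here’s a check using Python’s arbitrary-precision integers; the factorization is the standard published one, quoted from memory, so treat the check as verifying my transcription as much as anything:

        # The Monster's order, rebuilt from its standard prime factorization.
        factorization = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3, 17: 1, 19: 1,
                         23: 1, 29: 1, 31: 1, 41: 1, 47: 1, 59: 1, 71: 1}
        order = 1
        for prime, power in factorization.items():
            order *= prime ** power

        quoted = 808017424794512875886459904961710757005754368000000000
        print(order == quoted)    # True, if I've copied everything correctly
        print(len(str(order)))    # 54 digits, so a bit over 10^53 things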

    It’s one of the rare things for which “Brobdingnagian” is an understatement. Everything about it is mind-boggling, the sort of thing that staggers the imagination more than infinitely large things do. We don’t really think of infinitely large things; we just picture “something big”. A number like that one above is definite, and awesomely big. Just read off the digits of that number; it sounds like what we imagine infinity ought to be.

    We can make a chart, called the “character table”, which describes how subsets of the group interact with one another. The character table for the Monster Group is 194 rows tall and 194 columns wide. The Monster Group can be represented as this, I am solemnly assured, logical and beautiful algebraic structure. It’s something like a polyhedron in rather more than three dimensions of space. In particular it needs 196,884 dimensions to show off its particular beauty. I am taking experts’ word for it. I can’t quite imagine more than 196,883 dimensions for a thing.

    And it’s a thing full of mystery. This creature of group theory makes us think of the number 196,884. The same 196,884 turns up in number theory, the study of how integers are put together. It’s the first non-boring coefficient in a thing called the j-function. It’s not coincidence. This bit of number theory and this bit of group theory are bound together, but it took some years for anyone to quite understand why.

    There are more mysteries. The character table has 194 rows and columns. Each column implies a function. Some of those functions are duplicated; there are 171 distinct ones. But some of the distinct ones it turns out you can find by adding together multiples of others. Take those out and 163 genuinely independent ones are left. 163 appears again in number theory, in the study of algebraic integers. These are, of course, not integers at all. They’re things that look like complex-valued numbers: some whole number plus some (possibly other) whole number times the square root of some specified negative number. They’ve got neat properties. Or weird ones.

    You know how with integers there’s just one way to factor them? Like, fifteen is equal to three times five and no other set of prime numbers? Algebraic integers don’t work like that. There are usually multiple ways to do that. There are exceptions, algebraic integers that still have unique factorings. They happen only for a few square roots of negative numbers. The one farthest from zero? Minus 163.
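
    The classic example lives among the numbers built as a whole number plus a whole number times the square root of minus five. There, 6 factors in two genuinely different ways: as 2 times 3, and as (1 plus the square root of minus five) times (1 minus the square root of minus five). A sketch that checks the arithmetic, keeping everything as exact pairs of integers; this example is a textbook standard, nothing particular to the 163 business:

        # Numbers of the form a + b*sqrt(-5), stored exactly as integer pairs (a, b).
        def multiply(p, q):
            a, b = p
            c, d = q
            # (a + b*r)(c + d*r), using r*r = -5
            return (a * c - 5 * b * d, a * d + b * c)

        print(multiply((2, 0), (3, 0)))     # (6, 0)
        print(multiply((1, 1), (1, -1)))    # (6, 0) -- the same 6, factored differently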

    I don’t know if this 163 appearance means something. As I understand the matter, neither does anybody else.

    There is some link to the mathematics of string theory. That’s an interesting but controversial and hard-to-experiment-upon model for how the physics of the universe may work. But I don’t know string theory well enough to say what it is or how surprising this should be.

    The Monster Group creates a monster essay. I suppose it couldn’t do otherwise. I suppose I can’t adequately describe all its sublime mystery. Dr Mark Ronan has written a fine web page describing much of the Monster Group and the history of our understanding of it. He also has written a book, Symmetry and the Monster, to explain all this in greater depth. I’ve not read the book. But I do mean to, now.

     
    • gaurish 9:17 am on Saturday, 10 December, 2016

      It’s a shame that I somehow missed this blog post. Have you read “Symmetry and the Monster”? Will you recommend reading it?


      • Joseph Nebus 5:57 am on Saturday, 17 December, 2016

        Not to fear. Given how I looked away a moment and got fourteen days behind writing comments I can’t fault anyone for missing a post or two here.

        I haven’t read Symmetry and the Monster, but from Dr Ronan’s web site about the Monster Group I’m interested and mean to get to it when I find a library copy. I keep getting farther behind in my reading, admittedly. Today I realized I’d rather like to read Dan Bouk’s How Our Days Became Numbered: Risk and the Rise of the Statistical Individual, which focuses in large part on the growth of the life insurance industry in the 19th century. And even so I just got a book about the sale of timing data that was so common back when standard time was being discovered-or-invented.


  • Joseph Nebus 12:00 pm on Thursday, 29 October, 2015
    Tags: integers

    The Set Tour, Part 6: One Big One Plus Some Rubble 


    I have a couple of sets for this installment of the Set Tour. It’s still an unusual installment because only one of the sets is that important for my purposes here. The rest I mention because they appear a lot, even if they aren’t much used in these contexts.

    I, or J, or maybe Z

    The important set here is the integers. You know the integers: they’re the numbers everyone knows. They’re the numbers we count with. They’re 1 and 2 and 3 and a hundred million billion. As we get older we come to accept 0 as an integer, and even the negative integers like “negative 12” and “minus 40” and all that. The integers might be the easiest mathematical construct to know. The positive integers, anyway. The negative ones are still a little suspicious.

    The set of integers has several shorthand names. I is a popular and common one. As with the real-valued numbers R and the complex-valued numbers C it gets written by hand, and typically typeset, with a double vertical stroke. And we’ll put horizontal serifs on the top and bottom of the symbol. That’s a concession to readability. You see the same effect in comic strip lettering. A capital “I” in the middle of a word will often be written without serifs, while the word by itself needs the extra visual bulk.

    The next popular symbol is J, again with a double vertical stroke. This gets used if we want to reserve “I”, or the word “I”, for some other purpose. J probably gets used because it’s so very close to I, and it’s only quite recently (in historic terms) that they’ve even been seen as different letters.

    The symbol that seems to come out of nowhere is Z. It comes less from nowhere than it does from German. The symbol derives from “Zahl”, meaning “number”. It seems to have got into mathematics by way of Nicolas Bourbaki, the renowned imaginary French mathematician. The Z gets written with a double diagonal stroke.

    Personally, I like Z most of this set, but on trivial grounds. It’s a more fun letter to write, especially since I write it with the middle horizontal stroke. I’ve got no good cultural or historical reason for this. I just picked it up as a kid and never set it back down.

    In these Set Tour essays I’m trying to write about sets that get used often as domains and ranges for functions. The integers get used a fair bit, although not nearly as often as real numbers do. The integers are a natural way to organize sequences of numbers. If the record of a week’s temperatures (in Fahrenheit) is “58, 45, 49, 54, 58, 60, 64”, there’s an almost compelling temperature function here. f(1) = 58, f(2) = 45, f(3) = 49, f(4) = 54, f(5) = 58, f(6) = 60, f(7) = 64. This is a function that has as its domain the integers. It happens that the range here is also integers, although you might be able to imagine a day when the temperature reading was 54.5.
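
    In code that function reads naturally as a lookup keyed by integers. A little sketch, using the made-up week of temperatures above:

        # The week's temperatures as a function whose domain is the integers 1 through 7.
        f = {1: 58, 2: 45, 3: 49, 4: 54, 5: 58, 6: 60, 7: 64}
        print(f[3])                        # 49, the third day's reading
        print(sum(f.values()) / len(f))    # 55.43 or so, the week's average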

    Sequences turn up a lot. We are almost required to measure things we are interested in in discrete samples. So mathematical work with sequences uses integers as the domain almost by default. The use of integers as a domain gets done so often that it often becomes invisible, though. Someone studying my temperature data above might write the data as f_1, f_2, f_3, and so on. One might reasonably never even notice there’s a function there, or a domain.

    And that’s fine. A tool can be so useful it disappears. Attend a play; the stage is in light and the audience in darkness. The roles the light and darkness play disappear unless the director chooses to draw attention to this choice.

    And to be honest, integers are a lousy domain for functions. It’s achingly hard to prove things for functions defined just on the integers. The easiest way to do anything useful is typically to find an equivalent problem for a related function that’s got the real numbers as a domain. Then show that the answer for that gives you your best possible answer for the original question.

    If all we want are the positive integers, we put a little superscript + on our symbol: I+ or J+ or Z+. That’s a popular choice if we’re using the integers as an index. If we just want the negative numbers that’s a little weird, but, change the plus sign to a minus: I-.

    Now for some trouble.

    Sometimes we want the positive numbers and zero, or in the lingo, the “nonnegative numbers”. Good luck with that. Mathematicians haven’t quite settled on what this should be called, or abbreviated. The “Natural numbers” is a common name for the numbers 0, 1, 2, 3, 4, and so on, and this makes perfect sense and gets abbreviated N. You can double-brace the left vertical stroke, or the diagonal stroke, as you like and that will be understood by everybody.

    That is, everybody except the people who figure “natural numbers” should be 1, 2, 3, 4, and so on, and that zero has no place in this set. After all, every human culture counts with 1 and 2 and 3, and for that matter crows and raccoons understand the concept of “four”. Yet it took thousands of years for anyone to think of “zero”, so how natural could that be?

    So we might resort to speaking of the “whole numbers” instead. More good luck with that. Besides leaving open the question of whether zero should be considered “whole” there’s the linguistic problem. “Whole” number carries, for many, the implication of a number that is an integer with no fractional part. We already have the word “integer” for that, yes. But the fact people will talk about rounding off to a whole number suggests the phrase “whole number” serves some role that the word “integer” doesn’t. Still, W is sitting around not doing anything useful.

    Then there’s “counting numbers”. I would be willing to endorse this as a term for the integers 0, 1, 2, 3, 4, and so on, except. Have you ever met anybody who starts counting from zero? Yes, programmers for some — not all! — computer languages. You know which computer languages. They’re the languages which baffle new students because why on earth would we start counting things from zero all of a sudden? And the obvious single-letter abbreviation C is no good because we need that for complex numbers, a set that people actually use for domains a lot.

    There is a good side to this, if you aren’t willing to sit out the 150 years or so mathematicians are going to need to sort this all out. You can set out a symbol that makes sense to you, early on in your writing, and stick with it. If you find you don’t like it, you can switch to something else in your next paper and nobody will protest. If you figure out a good one, people may imitate you. If you figure out a really good one, people will change it just a tiny bit so that their usage drives you crazy. Life is like that.

    Eric Weisstein’s Mathworld recommends using Z* for the nonnegative integers. I don’t happen to care for that. I usually associate superscript * symbols with some operations involving complex-valued numbers and with the duals of sets, neither of which is in play here. But it’s not like he’s wrong and I’m right. If I were forced to pick a symbol right now I’d probably give Z0+. And for the nonpositive integers — the negative integers and zero — Z0- presents itself. I fully understand there are people who would be driven stark raving mad by this. Maybe you have a better one. I’d believe that.

    Let me close with something non-controversial.

    These are some sets that are too important to go unmentioned. But they don’t get used much in the domain-and-range role I’ve been using as basis for these essays. They are, in the terrain of these essays, some rubble.

    You know the rational numbers? They’re the things you can write as fractions: 1/2, 5/13, 32/7, -6/7, 0 (think about it). This is a quite useful set, although it doesn’t get used much for the domain or range of functions, at least not in the fields of mathematics I see. It gets abbreviated as Q, though. There’s an extra vertical stroke on the left side of the loop, just as a vertical stroke gets added to the C for complex-valued numbers. Why Q? Well, “R” is already spoken for, as we need it for the real numbers. The key here is that every rational number can be written as the quotient of one integer divided by another. So, this is the set of Quotients. This abbreviation we get thanks to Bourbaki, the same folks who gave us Z for integers. If it strikes you that the imaginary French mathematician Bourbaki used a lot of German words, all I can say is I think that might have been part of the fun of the Bourbaki project. (Well, and German mathematicians gave us many breakthroughs in the understanding of sets in the late 19th and early 20th centuries. We speak with their language because they spoke so well.)
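
    Python keeps this quotient-of-integers idea in plain sight with its fractions module; a small sketch:

        from fractions import Fraction

        # Rational numbers stored exactly as a quotient of two integers.
        print(Fraction(5, 13) + Fraction(1, 2))     # 23/26
        print(Fraction(-6, 7) * Fraction(32, 7))    # -192/49
        print(Fraction(0, 8))                       # 0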

    If you’re comfortable with real numbers and with rational numbers, you know of irrational numbers. These are (most) square roots, and pi and e, and the golden ratio and a lot of cosines of angles. Strangely, there really isn’t any common shorthand name or common notation for the irrational numbers. If we need to talk about them, we have the shorthand “R \ Q”. This means “the real numbers except for the rational numbers”. Or we have the shorthand “Q^c”. This means “everything except the rational numbers”. That “everything” carries the implication “everything in the real numbers”. The “c” in the superscript stands for “complement”, everything outside the set we’re talking about. These are ungainly, yes. And it’s a bit odd considering that most real numbers are irrational numbers. The rational numbers are a most ineffable cloud of dust in the atmosphere of the real numbers.

    But, mostly, we don’t need to talk about functions that have an irrational-number domain. We can do our work with a real-number domain instead. So we leave that set with a clumsy symbol. If there’s ever a gold rush of fruitful mathematics to be done with functions on irrational domains then we’ll put in some better notation. Until then, there are better jobs for our letters to do.

     
    • howardat58 1:24 pm on Thursday, 29 October, 2015

      I have never understood why zero causes so much anguish.


      • Joseph Nebus 5:02 am on Saturday, 31 October, 2015

        I’m not sure if anguish is quite the right term, but it is … well, it’s tricky. We start with an idea of numbers as counting things, and then here’s a number that counts things that aren’t there. And this number that isn’t anything can appear in the middle of other numbers (well, numerals).

        If that weren’t enough, when we work out equations we can add zero anywhere we like, and decide that this zero is going to suddenly be “add a thing and subtract the same thing” and somehow doing that and rearranging terms can make a problem easier? I suppose anything can be mysterious if you stare hard at it, but I can see where zero holds a lot of mystery.

