Tagged: number theory

  • Joseph Nebus 6:00 pm on Wednesday, 9 August, 2017 Permalink | Reply
    Tags: number theory

    The Summer 2017 Mathematics A To Z: Elliptic Curves 


    Gaurish, of the For The Love Of Mathematics blog, gives me another subject today. It’s one that isn’t about ellipses. Sad to say it’s also not about elliptic integrals. This is sad to me because I have a cute little anecdote about a time I accidentally gave my class an impossible problem. I did apologize. No, nobody solved it anyway.

    Elliptic Curves.

    Elliptic Curves start, of course, with polynomials. Particularly, they’re polynomials with two variables. We call them ‘x’ and ‘y’ because we have no reason to be difficult. They’re of at most third degree. That is, we can have terms like ‘x’ and ‘y^2’ and ‘x^2 y’ and ‘y^3’. Something with higher powers, like ‘x^4’ or ‘x^2 y^2’ — a fourth power, all together — is right out. Doesn’t matter. Start from this and we can do some slick changes of variables so that we can rewrite it to look like this:

    y^2 = x^3 + Ax + B

    Here, ‘A’ and ‘B’ are some numbers that don’t change for this particular curve. Also, we need it to be true that 4A^3 + 27B^2 doesn’t equal zero. It avoids problems. What we’ll be looking at are coordinates, values of ‘x’ and ‘y’ together which make this equation true. That is, it’s points on the curve. If you pick some real numbers ‘A’ and ‘B’ and draw all the values of ‘x’ and ‘y’ that make the equation true you get … well, there’s different shapes. They all look like those microscope photos of a water drop emerging and falling from a tap, only rotated clockwise ninety degrees.

    So. Pick any of these curves that you like. Pick a point. I’m going to name your point ‘P’. Now pick a point once more. I’m going to name that point ‘Q’. Now draw a line from P through Q. Keep drawing it. It’ll cross the original elliptic curve again. And that point is … not actually special. What is special is the reflection of that point. That is, the same x-coordinate, but flip the plus or minus sign for the y-coordinate. (WARNING! Do not call it “the reflection” at your thesis defense! Call it the “conjugate” point. It means “reflection”.) Your elliptic curve will be symmetric around the x-axis. If, say, the point with x-coordinate 4 and y-coordinate 3 is on the curve, so is the point with x-coordinate 4 and y-coordinate -3. So that reflected point is … something special.

    Kind of a curved-out less-than-sign shape.

    y^2 = x^3 - 1 . The water drop bulges out from the surface.

    This lets us do something wonderful. We can think of this reflected point as the sum of your ‘P’ and ‘Q’. You can ‘add’ any two points on the curve and get a third point. This means we can do something that looks like addition for points on the elliptic curve. And this means the points on this curve are a group, and we can bring all our group-theory knowledge to studying them. It’s a commutative group, too; ‘P’ added to ‘Q’ leads to the same point as ‘Q’ added to ‘P’.

    Let me head off some clever thoughts that make fair objections. What if ‘P’ and ‘Q’ are already reflections, so the line between them is vertical? That never touches the original elliptic curve again, right? Yeah, fair complaint. We patch this by saying that there’s one more point, ‘O’, that’s off “at infinity”. Where is infinity? It’s wherever your vertical lines end. Shut up, this can too be made rigorous. In any case it’s a common hack for this sort of problem. When we add that, everything’s nice. The ‘O’ serves the role in this group that zero serves in arithmetic: the sum of point ‘O’ and any point ‘P’ is going to be ‘P’ again.

    Second clever thought to head off: what if ‘P’ and ‘Q’ are the same point? There’s infinitely many lines that go through a single point, so how do we pick one to find an intersection with the elliptic curve? In that case we pick the tangent line to the elliptic curve that touches ‘P’, and carry on as before.
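
    The chord-and-tangent rules above, plus the reflection, can be sketched in a few lines of code. This is a hypothetical minimal implementation, not anything from the essay: it works over the integers modulo a prime p (exact arithmetic is easier there than with real numbers), and the curve y^2 = x^3 + 2x + 2 mod 17 with its point (5, 1) is just a convenient textbook-sized example.

```python
def ec_add(P, Q, A, p):
    """Add points on y^2 = x^3 + A*x + B over the integers mod p (prime).
    None plays the role of O, the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # vertical line: P and Q are reflections, so P + Q = O
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p  # slope of the tangent
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p  # slope of the chord
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p  # third intersection, reflected in the x-axis
    return (x3, y3)
```

    With A = 2 and p = 17, adding (5, 1) to itself lands on (6, 3), which sits on the curve again, and adding a point to its reflection gives None, the point at infinity.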

    The curved-out less-than-sign shape has a noticeable c-shaped bulge on the end.

    y^2 = x^3 + 1 . The water drop is close to breaking off, but surface tension has not yet pinched off the falling form.

    There’s more. What kind of number is ‘x’? Or ‘y’? I’ll bet that you figured they were real numbers. You know, ordinary stuff. I didn’t say what they were, so I left it to our instinct, and that usually runs toward real numbers. Those are what I meant, yes. But we didn’t have to. ‘x’ and ‘y’ could be in other sets of numbers too. They could be complex-valued numbers. They could be just the rational numbers. They could even be part of a finite collection of possible numbers. As long as the equation y^2 = x^3 + Ax + B means something (and some technical points are met) we can carry on. The elliptic curves, and the points we “add” on them, might not look like the curves we started with anymore. They might not look like anything recognizable anymore. But the logic continues to hold. We still create these groups out of the points on these lines intersecting a curve.

    By now you probably admit this is neat stuff. You may also think: so what? We can take this thing you never thought about, draw points and lines on it, and make it look very loosely kind of like just adding numbers together. Why is this interesting? No appreciation just for the beauty of the structure involved? Well, we live in a fallen world.

    It comes back to number theory. The modern study of Diophantine equations grows out of studying elliptic curves on the rational numbers. It turns out the group of points you get for that looks like a finite collection of points with some collection of integers hanging on. How long that collection of numbers is is called the ‘rank’, and there are deep mysteries at work. We know there are elliptic equations that have a rank as big as 28. Nobody knows if the rank can be arbitrarily high, though. And I believe we don’t even know if there are any curves with rank of, like, 27, or 25.

    Yeah, I’m still sensing skepticism out there. Fine. We’ll go back to the only part of number theory everybody agrees is useful. Encryption. We have roughly the same goals for every encryption scheme. We want it to be easy to encode a message. We want it to be easy to decode the message if you have the key. We want it to be hard to decode the message if you don’t have the key.

    The curved-out sign has a bulge with convex loops to it, so that it resembles the cut of a jigsaw puzzle piece.

    y^2 = 3x^2 - 3x + 3 . The water drop is almost large enough that its weight overcomes the surface tension holding it to the main body of water.

    Take something inside one of these elliptic curve groups. Especially one built on a finite field. Let me call your thing ‘g’. It’s really easy for you, knowing what ‘g’ is and what your field is, to raise it to a power. You can pretty well impress me by sharing the value of ‘g’ raised to some whole number ‘m’. Call that ‘h’.

    Why am I impressed? Because if all I know is ‘g’ and ‘h’, I have a heck of a time figuring out what ‘m’ is. Especially on these finite field groups there’s no obvious connection between how big ‘h’ is and how big ‘g’ is and how big ‘m’ is. Start with a big enough finite field and you can encode messages in ways that are crazy hard to crack.
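
    A toy version of that asymmetry, using ordinary modular exponentiation rather than an elliptic curve group (the principle is the same): computing h from g and m is one fast call, while recovering m from g and h takes a search through the exponents. The prime 101 and generator 2 here are my own illustrative choices, absurdly small compared to anything used in practice.

```python
def brute_dlog(g, h, p):
    """Recover m with g**m == h (mod p) the slow way: try every exponent."""
    x = 1
    for m in range(p):
        if x == h:
            return m
        x = x * g % p
    return None

# The forward direction is one cheap built-in call, even for enormous numbers.
h = pow(2, 37, 101)
```

    brute_dlog(2, h, 101) does find 37 here, but only because the field is tiny; make the prime hundreds of digits long and the search becomes hopeless while pow stays fast.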

    We trust. At least, if there are any ways to break the code quickly, nobody’s shared them. And there’s one of those enormous-money-prize awards waiting for someone who does know how to break such a code quickly. (I don’t know which. I’m going by what I expect from people.)

    And then there’s fame. These were used to prove Fermat’s Last Theorem. Suppose there are some non-boring numbers ‘a’, ‘b’, and ‘c’, so that for some prime number ‘p’ that’s five or larger, it’s true that a^p + b^p = c^p . (We can separately prove Fermat’s Last Theorem for a power that isn’t a prime number, or a power that’s 3 or 4.) Then this implies properties about the elliptic curve:

    y^2 = x(x - a^p)(x + b^p)

    This is a convenient way of writing things since it showcases the a^p and b^p. It’s equal to:

    y^2 = x^3 + \left(b^p - a^p\right)x^2 - a^p b^p x

    (I was so tempted to leave an arithmetic error in there so I could make sure someone commented.)
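
    For anyone who wants to check the algebra mechanically: the expansion is x(x - A)(x + B) = x^3 + (B - A)x^2 - ABx, writing A and B as stand-ins for a^p and b^p, with a minus sign on the linear term. A quick integer spot-check, and since both sides are cubics in x, agreement at this many points settles it:

```python
# Check x*(x - A)*(x + B) == x^3 + (B - A)*x^2 - A*B*x at many integer points.
for x in range(-5, 6):
    for A in range(1, 4):
        for B in range(1, 4):
            assert x * (x - A) * (x + B) == x**3 + (B - A) * x**2 - A * B * x
```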

    A little ball off to the side of a curved-out less-than-sign shape.

    y^2 = 3x^3 - 4x . The water drop has broken off, and the remaining surface rebounds to its normal meniscus.

    If there’s a solution to Fermat’s Last Theorem, then this elliptic equation can’t be modular. I don’t have enough words to explain what ‘modular’ means here. Andrew Wiles and Richard Taylor showed that the equation was modular. So there is no solution to Fermat’s Last Theorem except the boring ones. (Like, where ‘b’ is zero and ‘a’ and ‘c’ equal each other.) And it all comes from looking close at these neat curves, none of which looks like an ellipse.

    They’re named elliptic curves because we first noticed them when Carl Jacobi — yes, that Carl Jacobi — was studying the length of arcs of an ellipse. That’s interesting enough on its own. But it is hard. Maybe I could have fit in that anecdote about giving my class an impossible problem after all.

     
  • Joseph Nebus 6:00 pm on Monday, 7 August, 2017 Permalink | Reply
    Tags: number theory

    The Summer 2017 Mathematics A To Z: Diophantine Equations 


    I have another request from Gaurish, of the For The Love Of Mathematics blog, today. It’s another change of pace.

    Diophantine Equations

    A Diophantine equation is a polynomial. Well, of course it is. It’s an equation, or a set of equations, setting one polynomial equal to another. Possibly equal to a constant. What makes this different from “any old equation” is the coefficients. These are the constant numbers that you multiply the variables, your x and y and x^2 and z^8 and so on, by. To make a Diophantine equation all these coefficients have to be integers. You know one well, because it’s that x^n + y^n = z^n thing that Fermat’s Last Theorem is all about. And you’ve probably seen ax + by = 1 . It turns up a lot because that’s a line, and we do a lot of stuff with lines.

    Diophantine equations are interesting. There are a couple of cases that are easy to solve. I mean, at least that we can find solutions for. ax + by = 1 , for example, that’s easy to solve. x^n + y^n = z^n it turns out we can’t solve. Well, we can if n is equal to 1 or 2. Or if x or y or z are zero. These are obvious, that is, they’re quite boring. That one took about four hundred years to solve, and the solution was “there aren’t any solutions”. This may convince you of how interesting these problems are. What, from looking at it, tells you that ax + by = 1 is simple while x^n + y^n = z^n is (most of the time) impossible?
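
    What makes ax + by = 1 easy is that the extended Euclidean algorithm constructs a solution whenever a and b share no common factor. This is a standard sketch of that algorithm, not anything from the essay:

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g, the greatest common divisor.
    When g == 1, the pair (x, y) solves the Diophantine equation a*x + b*y = 1."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y
```

    For example ext_gcd(7, 13) hands back integers x and y with 7x + 13y = 1. When the gcd comes out bigger than 1, the same call shows why ax + by = 1 can have no integer solution at all.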

    I don’t know. Nobody really does. There are many kinds of Diophantine equation, all different-looking polynomials. Some of them are special one-off cases, like x^n + y^n = z^n . For example, there’s x^4 + y^4 + z^4 = w^4 for some integers x, y, z, and w. Leonhard Euler conjectured this equation had only boring solutions. You’ll remember Euler. He wrote the foundational work for every field of mathematics. It turns out he was wrong. It has infinitely many interesting solutions. But the first one found was 2,682,440^4 + 15,365,639^4 + 18,796,760^4 = 20,615,673^4 and that one took elliptic-curve theory and a computer search to find. We can forgive Euler not noticing it.
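
    Exact integer arithmetic makes the counterexample easy to confirm today, which would have been a very long afternoon for Euler:

```python
# Verify the known counterexample to Euler's quartic conjecture, exactly.
lhs = 2682440**4 + 15365639**4 + 18796760**4
rhs = 20615673**4
assert lhs == rhs
```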

    Some are groups of equations that have similar shapes. There’s the Fermat’s Last Theorem formula, for example, which is a different equation for every different integer n. Then there’s what we call Pell’s Equation. This one is x^2 - D y^2 = 1 (or equals -1), for some counting number D. It’s named for the English mathematician John Pell, who did not discover the equation (even in the Western European tradition; Indian mathematicians were familiar with it for a millennium), did not solve the equation, and did not do anything particularly noteworthy in advancing human understanding of the solution. Pell owes his fame in this regard to Leonhard Euler, who misunderstood Pell’s revising a translation of a book discussing a solution for Pell’s authoring a solution. I confess Euler isn’t looking very good on Diophantine equations.
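
    For small D a brute-force search finds the fundamental solution of Pell's Equation; this sketch is illustrative only, since for unlucky D (the famously nasty D = 61, say) the smallest solution is astronomically large and continued fractions are the practical route.

```python
from math import isqrt

def pell_fundamental(D):
    """Smallest positive (x, y) with x*x - D*y*y == 1, by searching over y.
    D should not be a perfect square (otherwise only trivial solutions exist)."""
    y = 1
    while True:
        x2 = D * y * y + 1  # candidate value of x^2
        x = isqrt(x2)
        if x * x == x2:     # x2 is a perfect square: found a solution
            return x, y
        y += 1
```

    pell_fundamental(2) gives (3, 2), since 3^2 - 2 * 2^2 = 1; each D has its own, often wildly different, smallest solution.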

    But nobody looks very good on Diophantine equations. Make up a Diophantine equation of your own. Use whatever whole numbers, positive or negative, that you like for your equation. Use whatever powers of however many variables you like for your equation. So you get something that looks maybe like this:

    7x^2 - 20y + 18y^2 - 38z = 9

    Does it have any solutions? I don’t know. Nobody does. There isn’t a general all-around solution. You know how with a quadratic equation we have this formula where you recite some incantation about “b squared minus four a c” and get any roots that exist? Nothing like that exists for Diophantine equations in general. Specific ones, yes. But they’re all specialties, crafted to fit the equation that has just that shape.

    So for each equation we have to ask: is there a solution? Is there any solution that isn’t obvious? Are there finitely many solutions? Are there infinitely many? Either way, can we find all the solutions? And we have to answer these questions anew for each kind of equation. What answers they have, whether answers are known to exist, whether answers can exist — all of it we have to discover afresh every time. Knowing answers for one kind doesn’t help us for any others, except as inspiration. If some trick worked before, maybe it will work this time.

    There are a couple of usually reliable tricks. Can the equation be rewritten in some way that it becomes the equation for a line? If it can we probably have a good handle on any solutions. Can we apply modulo arithmetic to the equation? If we can, we might be able to reduce the number of possible solutions that the equation has. In particular we might be able to reduce the number of possible solutions until we can just check every case. Can we use induction? That is, can we show there’s some parameter for the equations, and that knowing the solutions for one value of that parameter implies knowing solutions for larger values? And then find some small enough value we can test it out by hand? Or can we show that if there is a solution, then there must be a smaller solution, and smaller yet, until we can either find an answer or show there aren’t any? Sometimes. Not always. The field blends seamlessly into number theory. And number theory is all sorts of problems easy to pose and hard or impossible to solve.
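
    The modulo trick in action, on a made-up example: squares leave remainder 0 or 1 when divided by 4, so x^2 + y^2 = 4z + 3 has no integer solutions at all, and a finite check of the residues is a complete proof.

```python
# Every integer is congruent to 0, 1, 2, or 3 mod 4, so these sixteen
# pairs cover every possible (x, y) in x^2 + y^2 modulo 4.
residues = {(x * x + y * y) % 4 for x in range(4) for y in range(4)}
assert residues == {0, 1, 2}  # 3 never appears: x^2 + y^2 = 4z + 3 is hopeless
```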

    We name these equations after Diophantus of Alexandria, a 3rd century Greek mathematician. His writings, what we have of them, discuss how to solve equations. Not general solutions, the way we might want to solve ax^2 + bx + c = 0 , but specific ones, like 1x^2 - 5x + 6 = 0 . His books are among those whose rediscovery shaped the rebirth of mathematics. Pierre de Fermat scribbled his famous note in the too-small margins of Diophantus’s Arithmetica. (Well, a popular translation.)

    But the field predates Diophantus, at least if we look at specific problems. Of course it does. In mathematics, as in life, any search for a source ends in a vast, marshy ambiguity. The field stays vital. If we loosen ourselves to looking at inequalities — x - Dy^2 < A , let's say — then we start seeing optimization problems. What values of x and y will make this equation most nearly true? What values will come closest to satisfying this bunch of equations? The questions are about how to find the best possible fit to whatever our complicated sets of needs are. We can't always answer. We keep searching.

     
  • Joseph Nebus 4:00 pm on Monday, 31 July, 2017 Permalink | Reply
    Tags: number theory

    The Summer 2017 Mathematics A To Z: Arithmetic 


    And now as summer (United States edition) reaches its closing months I plunge into the fourth of my A To Z mathematics-glossary sequences. I hope I know what I’m doing! Today’s request is one of several from Gaurish, who’s got to be my top requester for mathematical terms and whom I thank for it. It’s a lot easier writing these things when I don’t have to think up topics. Gaurish hosts a fine blog, For the love of Mathematics, which you might consider reading.

    Arithmetic.

    Arithmetic is what people who aren’t mathematicians figure mathematicians do all day. I remember in my childhood a Berenstain Bears book about people’s jobs. Its mathematician was an adorable little bear adding up sums on the chalkboard, in an observatory, on the Moon. I liked every part of this. I wouldn’t say it’s the whole reason I became a mathematician but it did make the prospect look good early on.

    People who aren’t mathematicians are right. At least, the bulk of what mathematics people do is arithmetic. If we work by volume. Arithmetic is about the calculations we do to evaluate or solve polynomials. And polynomials are everything that humans find interesting. Arithmetic is adding and subtracting, multiplying and dividing, taking powers and taking roots. Arithmetic is changing the units of a thing, breaking something into several smaller units, or merging several smaller units into one big one. Arithmetic’s role in commerce and in finance must overwhelm the higher mathematics. Higher mathematics offers cohomologies and Ricci tensors. Arithmetic offers a budget.

    This is old mathematics. There’s evidence of humans twenty thousand years ago recording their arithmetic computations. My understanding is the evidence is ambiguous and interpretations vary. This seems fair. I assume that humans did such arithmetic then, granting that I do not know how to interpret archeological evidence. The thing is that arithmetic is older than humans. Animals are able to count, to do addition and subtraction, perhaps to do harder computations. (I crib this from The Number Sense: How the Mind Creates Mathematics, by Stanislas Dehaene.) We learn it first, refining our rough instinctively developed sense to something rigorous. At least we learn it at the same time we learn geometry, the other branch of mathematics that must predate human existence.

    The primality of arithmetic governs how it becomes an adjective. We will have, for example, the “arithmetic progression” of terms in a sequence. This is a sequence of numbers such as 1, 3, 5, 7, 9, and so on. Or 4, 9, 14, 19, 24, 29, and so on. The difference between one term and its successor is the same as the difference between the predecessor and this term. Or we speak of the “arithmetic mean”. This is the one found by adding together all the numbers of a sample and dividing by the number of terms in the sample. These are important concepts, useful concepts. They are among the first concepts we have when we think of a thing. Their familiarity makes them easy tools to overlook.
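
    The two definitions above fit in a couple of lines; the sequences checked are the ones from the paragraph itself.

```python
def arithmetic_progression(start, step, count):
    """Terms that each differ from their predecessor by the same fixed step."""
    return [start + step * k for k in range(count)]

def arithmetic_mean(xs):
    """Add everything up, divide by how many there are."""
    return sum(xs) / len(xs)
```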

    Consider the Fundamental Theorem of Arithmetic. There are many Fundamental Theorems; that of Algebra guarantees us the number of roots of a polynomial equation. That of Calculus guarantees us that derivatives and integrals are joined concepts. The Fundamental Theorem of Arithmetic tells us that every whole number greater than one is equal to one and only one product of prime numbers. If a number is equal to (say) two times two times thirteen times nineteen, it cannot also be equal to (say) five times eleven times seventeen. This may seem uncontroversial. The budding mathematician will convince herself it’s so by trying to work out all the ways to write 60 as the product of prime numbers. It’s hard to imagine mathematics for which it isn’t true.
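
    Trial division produces the factorization that the theorem promises is unique; factoring 60 is exactly the budding mathematician's exercise from the paragraph above. A minimal sketch:

```python
def prime_factors(n):
    """Factor n > 1 by trial division. The Fundamental Theorem of Arithmetic
    says this list of primes is the only one whose product equals n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors
```

    prime_factors(60) gives [2, 2, 3, 5], and no amount of rearranging will produce a different multiset of primes for 60.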

    But it needn’t be true. As we study why arithmetic works we discover many strange things. This mathematics that we know even without learning is sophisticated. To build a logical justification for it requires a theory of sets and hundreds of pages of tight reasoning. Or a theory of categories and I don’t even know how much reasoning. The thing that is obvious from putting a couple objects on a table and then a couple more is hard to prove.

    As we continue studying arithmetic we start to ponder things like Goldbach’s Conjecture, about even numbers (other than two) being the sum of exactly two prime numbers. This brings us into number theory, a land of fascinating problems. Many of them are so accessible you could pose them to a person while waiting in a fast-food line. This befits a field that grows out of such simple stuff. Many of those are so hard to answer that no person knows whether they are true, or are false, or are even answerable.

    And it splits off other ideas. Arithmetic starts, at least, with the counting numbers. It moves into the whole numbers and soon all the integers. With division we soon get rational numbers. With roots we soon get certain irrational numbers. A close study of this implies there are irrational numbers that must exist, at least as much as “four” exists. Yet they can’t be reached by studying polynomials. Not polynomials that don’t already use these exotic irrational numbers. These are transcendental numbers. If we were to say the transcendental numbers were the only real numbers we would be making only a very slight mistake. We learn they exist by thinking long enough and deep enough about arithmetic to realize there must be more there than we realized.

    Thought compounds thought. The integers and the rational numbers and the real numbers have a structure. They interact in certain ways. We can look for things that are not numbers, but which follow rules like that for addition and for multiplication. Sometimes even for powers and for roots. Some of these can be strange: polynomials themselves, for example, follow rules like those of arithmetic. Matrices, which we can represent as grids of numbers, can have powers and even something like roots. Arithmetic is inspiration to finding mathematical structures that look little like our arithmetic. We can find things that follow mathematical operations but which don’t have a Fundamental Theorem of Arithmetic.

    And there are more related ideas. These are often very useful. There’s modular arithmetic, in which we adjust the rules of addition and multiplication so that we can work with a finite set of numbers. There’s floating point arithmetic, in which we set machines to do our calculations. These calculations are no longer precise. But they are fast, and reliable, and that is often what we need.
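
    Both ideas in miniature: clock-style arithmetic on a finite set of numbers, and floating point's trade of exactness for speed.

```python
# Modular arithmetic: twelve numbers are all we need for clock time.
assert (9 + 5) % 12 == 2  # five hours after nine o'clock is two o'clock

# Floating point: fast and reliable, but not exact.
assert 0.1 + 0.2 != 0.3
assert abs((0.1 + 0.2) - 0.3) < 1e-15  # close enough for most purposes
```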

    So arithmetic is what people who aren’t mathematicians figure mathematicians do all day. And they are mistaken, but not by much. Arithmetic gives us an idea of what mathematics we can hope to understand. So it structures the way we think about mathematics.

     
    • ivasallay 5:34 pm on Monday, 31 July, 2017 Permalink | Reply

      I think you covered arithmetic in a very clear, scholarly way.

      When I was in the early elementary grades, we didn’t study math. We studied arithmetic.

      Here’s a couple more things some people might not know about arithmetic:
      1) How to remember the proper spelling of arithmetic: A Rat In The House May Eat The Ice Cream.
      2) How to pronounce arithmetic: https://www.quora.com/Why-does-the-pronunciation-of-arithmetic-depend-on-context

      Like

      • Joseph Nebus 6:27 pm on Wednesday, 2 August, 2017 Permalink | Reply

        Thanks! … My recollection is that in elementary school we called it mathematics (or just math), but the teachers were pretty clear about whether we were doing arithmetic or geometry. If that was clear, since I grew up on the tail end of the New Math wave and we could do stuff that was more playful than multiplication tables were.

        I hadn’t thought about the shifting pronunciations of ‘arithmetic’ as a word. I suppose it’s not different from many multi-syllable words in doing that, though. My suspicion is that the distinction between ‘arithmetic’ as an adjective and as a noun is spurious, though. My hunch is people shift the emphasis based on the structure of the whole sentence, with the words coming after ‘arithmetic’ having a big role to play. I’d expect that an important word immediately follows ‘arithmetic’ often if it’s being used as an adjective (like, ‘arithmetic mean’), but that’s not infallible. As opposed to those many rules of English grammar and pronunciation that are infallible.

        Liked by 1 person

    • gaurish 9:48 am on Saturday, 12 August, 2017 Permalink | Reply

      A Beautiful introduction to Arithmetic!

      Like

  • Joseph Nebus 4:00 pm on Friday, 9 June, 2017 Permalink | Reply
    Tags: Mr Boffo, number theory, perfect numbers, Pop Culture Shock Therapy

    Reading the Comics, June 3, 2017: Feast Week Conclusion Edition 


    And now finally I can close out last week’s many mathematically-themed comic strips. I had hoped to post this Thursday, but the Why Stuff Can Orbit supplemental took up my writing energies and eventually timeslot. This also ends up being the first time I’ve had one of Joe Martin’s comic strips since the Houston Chronicle ended its comics pages and I admit I’m not sure how I’m going to work this. I’m also not perfectly sure what the comic strip means.

    So Joe Martin’s Mister Boffo for the 1st of June seems to be about a disastrous mathematics exam, with a kid doing badly enough that he hasn’t even got numbers to express the score exactly. Also I’m not sure there is a way to link to the strip I mean exactly; the archives for Martin’s strips are not … organized the way I would have done. Well, they’re his business.

    A Time To Worry: '[Our son] says he got a one-de-two-three-z on the math test.'

    So Joe Martin’s Mister Boffo for the 1st of June, 2017. The link is probably worthless, since I can’t figure out how to work its archives. Good luck yourselves with it.

    Greg Evans’s Luann Againn for the 1st reruns the strip from the 1st of June, 1989. It’s your standard resisting-the-word-problem joke. On first reading the strip I didn’t get what the problem was asking for, and supposed that the text had garbled the problem, if there were an original problem. That was my sloppiness is all; it’s a perfectly solvable question once you actually read it.

    J C Duffy’s Lug Nuts for the 1st — another day that threatened to be a Reading the Comics post all on its own — is a straggler Pi Day joke. It’s just some Dadaist clowning about.

    Doug Bratton’s Pop Culture Shock Therapy for the 1st is a wordplay joke that uses word problems as emblematic of mathematics. I’m okay with that; much of the mathematics that people actually want to do amounts to extracting from a situation the things that are relevant and forming an equation based on that. This is what a word problem is supposed to teach us to do.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 1st — maybe I should have done a Reading the Comics for that day alone — riffs on the idle speculation that God would be a mathematician. It does this by showing a God uninterested in two logical problems. The first is the question of whether there’s an odd perfect number. Perfect numbers are these things that haunt number theory. (Everything haunts number theory.) It starts with idly noticing what happens if you pick a number, find the numbers that divide into it, and add those up. For example, 4 can be divided by 1 and 2; those add to 3. 5 can only be divided by 1; that adds to 1. 6 can be divided by 1, 2, and 3; those add to 6. For a perfect number the divisors add up to the original number. Perfect numbers look rare; for a thousand years or so only four of them (6, 28, 496, and 8128) were known to exist.

    All the perfect numbers we know of are even. More, they’re all numbers that can be written as the product 2^{p - 1} \cdot \left(2^p - 1\right) for certain prime numbers ‘p’. (They’re the ones for which 2^p - 1 is itself a prime number.) What we don’t know, and haven’t got a hint about proving, is whether there are any odd perfect numbers. We know some things about odd perfect numbers, if they exist, the most notable of them being that they’ve got to be incredibly huge numbers, much larger than a googol, the standard idea of an incredibly huge number. Presumably an omniscient God would be able to tell whether there were an odd perfect number, or at least would be able to care whether there were. (It’s also not known if there are infinitely many perfect numbers, by the way. This reminds us that number theory is pretty much nothing but a bunch of easy-to-state problems that we can’t solve.)
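
    The divisor-sum definition is quick to check by machine, and a search over the small numbers recovers exactly the four perfect numbers known to the ancients. A minimal sketch:

```python
from math import isqrt

def divisor_sum(n):
    """Sum of the divisors of n that are smaller than n itself (n > 1)."""
    total = 1  # 1 divides everything
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d  # the paired divisor n/d
    return total

# A number is perfect when its proper divisors add back up to it.
perfect = [n for n in range(2, 10000) if divisor_sum(n) == n]
```

    The list comes out [6, 28, 496, 8128], and each entry fits the 2^{p-1}(2^p - 1) pattern: 8128, for instance, is 2^6 times 127.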

    Some miscellaneous other things we know about an odd perfect number, other than whether any exist: if there are odd perfect numbers, they’re not divisible by 105. They’re equal to one more than a whole multiple of 12. They’re also 117 more than a whole multiple of 468, and they’re 81 more than a whole multiple of 324. They’ve got to have at least 101 prime factors, and there have to be at least ten distinct prime factors. There have to be at least twelve distinct prime factors if 3 isn’t a factor of the odd perfect number. If this seems like a screwy list of things to know about a thing we don’t even know exists, then welcome to number theory.

    The beard question I believe is a reference to the logician’s paradox. This is the one postulating a village in which the village barber shaves all, but only, the people who do not shave themselves. Given that, who shaves the barber? It’s an old joke, but if you take it seriously you learn something about the limits of what a system of logic can tell you about itself.

    Tiger: 'I've got two plus four hours of homework. I won't be finished until ten minus three o'clock, or maybe even six plus one and a half o'clock.' Punkin: 'What subject?' Tiger: 'Arithmetic, stupid!'

    Bud Blake’s Tiger rerun for the 2nd of June, 2017. Bonus arithmetic problem: what’s the latest time that this could be? Also, don’t you like how the dog’s tail spills over the panel borders twice? I do.

    Bud Blake’s Tiger rerun for the 2nd has Tiger’s arithmetic homework spill out into real life. This happens sometimes.

    Officer Pupp: 'That Mouse is most sure an oaf of awful dumbness, Mrs Kwakk Wakk - y'know that?' Mrs Kwakk Wakk: 'By what means do you find proof of this, Officer Pupp?' 'His sense of speed is insipid - he doesn't seem to know that if I ran 60 miles an hour, and he only 40, that I would eventually catch up to him.' 'No-' 'Yes- I tell you- yes.' 'He seemed to know that a brick going 60 would catch up to a kat going 40.' 'Oh, he did, did he?' 'Why, yes.'

    George Herriman’s Krazy Kat for the 10th of July, 1939 and rerun the 2nd of June, 2017. I realize that by contemporary standards this is a very talky comic strip. But read Officer Pupp’s dialogue, particularly in the second panel. It just flows with a wonderful archness.

    George Herriman’s Krazy Kat for the 10th of July, 1939 was rerun the 2nd of June. I’m not sure that it properly fits here, but the talk about Officer Pupp running at 60 miles per hour and Ignatz Mouse running forty and whether Pupp will catch Mouse sure reads like a word problem. Later strips in the sequence, including the ways that a tossed brick could hit someone who’d be running faster than it, did not change my mind about this. Plus I like Krazy Kat so I’ll take a flimsy excuse to feature it.

     
    • Joshua K. 1:33 am on Saturday, 10 June, 2017 Permalink | Reply

      I thought that the second question in “Saturday Morning Breakfast Cereal” was meant to imply that mathematicians often have beards; therefore, if God would prefer not to have a beard, he probably isn’t a mathematician.

      Like

      • Joseph Nebus 11:48 pm on Monday, 12 June, 2017 Permalink | Reply

Oh, you may have something there. I’m so used to thinking of beards as a logic problem that I didn’t think of them as a mathematician thing. (In my defense, back in grad school I’m not sure any of the faculty had beards.) I’ll take that interpretation too.

        Like

  • Joseph Nebus 6:00 pm on Wednesday, 30 November, 2016 Permalink | Reply
    Tags: , , , , , , , , Monster Group, number theory, ,   

    The End 2016 Mathematics A To Z: Monster Group 


    Today’s is one of my requested mathematics terms. This one comes to us from group theory, by way of Gaurish, and as ever I’m thankful for the prompt.

    Monster Group.

    It’s hard to learn from an example. Examples are great, and I wouldn’t try teaching anything subtle without one. Might not even try teaching the obvious without one. But a single example is dangerous. The learner has trouble telling what parts of the example are the general lesson to learn and what parts are just things that happen to be true for that case. Having several examples, of different kinds of things, saves the student. The thing in common to many different examples is the thing to retain.

    The mathematics major learns group theory in Introduction To Not That Kind Of Algebra, MAT 351. A group extracts the barest essence of arithmetic: a bunch of things and the ability to add them together. So what’s an example? … Well, the integers do nicely. What’s another example? … Well, the integers modulo two, where the only things are 0 and 1 and we know 1 + 1 equals 0. What’s another example? … The integers modulo three, where the only things are 0 and 1 and 2 and we know 1 + 2 equals 0. How about another? … The integers modulo four? Modulo five?
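The modular arithmetic here is easy to poke at with a few lines of code. Here’s a little Python sketch of my own (the function name is just mine for illustration) that builds the whole addition table for the integers modulo some n:

```python
# A small sketch of the integers modulo n as a group: the "things" are
# 0 through n-1, and addition wraps around.

def cayley_table(n):
    """The full addition table for the integers modulo n."""
    return [[(a + b) % n for b in range(n)] for a in range(n)]

# Integers modulo three: 1 + 2 wraps around to 0, as the text says.
table = cayley_table(3)
print(table[1][2])  # 0
print(table)        # [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
```

Notice every row is the top row shifted over a step, which is the “cyclic” pattern showing itself.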

    All true. All, also, basically the same thing. The whole set of integers, or of real numbers, is different. But as finite groups, the integers modulo anything are nice, easy-to-understand groups. They’re known as Cyclic Groups for reasons I’ll explain if asked. But all the Cyclic Groups are kind of the same.

    So how about another example? And here we get some good ones. There’s the Permutation Groups. These are fun. You start off with a set of things. You can label them anything you like, but you’re daft if you don’t label them the counting numbers. So, say, the set of things 1, 2, 3, 4, 5. Start with them in that order. A permutation is the swapping of any pair of those things. So swapping, say, the second and fifth things to get the list 1, 5, 3, 4, 2. The collection of all the swaps you can make is the Permutation Group on this set of things. The things in the group are not 1, 2, 3, 4, 5. The things in the permutation group are “swap the second and fifth thing” or “swap the third and first thing” or “swap the fourth and the third thing”. You maybe feel uneasy about this. That’s all right. I suggest playing with this until you feel comfortable because it is a lot of fun to play with. Playing in this case means writing out all the ways you can swap stuff, which you can always do as a string of swaps of exactly two things.
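If you’d rather do that playing at a keyboard, here’s a Python sketch of the sort of thing I mean. The function name is mine, not anything standard:

```python
# A "swap" exchanges two positions (counted from 1, as in the text).

def swap(seq, i, j):
    """Return a copy of seq with the i-th and j-th things exchanged."""
    out = list(seq)
    out[i - 1], out[j - 1] = out[j - 1], out[i - 1]
    return out

start = [1, 2, 3, 4, 5]
print(swap(start, 2, 5))              # [1, 5, 3, 4, 2]
# Stringing swaps together builds up more elaborate rearrangements:
print(swap(swap(start, 2, 5), 1, 3))  # [3, 5, 1, 4, 2]
```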

    (Some people may remember an episode of Futurama that involved a brain-swapping machine. Or a body-swapping machine, if you prefer. The gimmick of the episode is that two people could only swap bodies/brains exactly one time. The problem was how to get everybody back in their correct bodies. It turns out to be possible to do, and one of the show’s writers did write a proof of it. It’s shown on-screen for a moment. Many fans were awestruck by an episode of the show inspiring a Mathematical Theorem. They’re overestimating how rare theorems are. But it is fun when real mathematics gets done as a side effect of telling a good joke. Anyway, the theorem fits well in group theory and the study of these permutation groups.)

    So the student wanting examples of groups can get the Permutation Group on three elements. Or the Permutation Group on four elements. The Permutation Group on five elements. … You kind of see, this is certainly different from those Cyclic Groups. But they’re all kind of like each other.

    An “Alternating Group” is one where every element in it is built from an even number of swaps. So, “swap the second and fifth things” would not be in an alternating group. But “swap the second and fifth things, and swap the fourth and second things” would be. And so the student needing examples can look at the Alternating Group on two elements. Or the Alternating Group on three elements. The Alternating Group on four elements. And so on. It’s slightly different from the Permutation Group. It’s certainly different from the Cyclic Group. But still, if you’ve mastered the Alternating Group on five elements you aren’t going to see the Alternating Group on six elements as all that different.
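You can count these even rearrangements mechanically. A Python sketch, under my own convention (parity by counting out-of-order pairs, which matches counting swaps):

```python
from itertools import permutations

def parity(perm):
    """0 for even, 1 for odd: count the pairs that are out of order."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return inversions % 2

# The even rearrangements of four things form the Alternating Group
# on four elements: exactly half of the 24 rearrangements qualify.
even = [p for p in permutations(range(4)) if parity(p) == 0]
print(len(even))  # 12
```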

    Cyclic Groups and Alternating Groups have some stuff in common. Permutation Groups not so much and I’m going to leave them in the above paragraph, waving, since they got me to the Alternating Groups I wanted.

    One is that they’re finite. At least they can be. I like finite groups. I imagine students like them too. It’s nice having a mathematical thing you can write out in full and know you aren’t missing anything.

    The second thing is that they are, or they can be, “simple groups”. That’s … a challenge to explain. This has to do with the structure of the group and the kinds of subgroup you can extract from it. It’s very very loosely and figuratively and do not try to pass this off at your thesis defense kind of like being a prime number. In fact, Cyclic Groups for a prime number of elements are simple groups. So are Alternating Groups on five or more elements.

    So we get to wondering: what are the finite simple groups? Turns out they come in four main families. One family is the Cyclic Groups for a prime number of things. One family is the Alternating Groups on five or more things. One family is this collection called the Chevalley Groups. Those are mostly things about projections: the ways to map one set of coordinates into another. We don’t talk about them much in Introduction To Not That Kind Of Algebra. They’re too deep into Geometry for people learning Algebra. The last family is this collection called the Twisted Chevalley Groups, or the Steinberg Groups. And they … uhm. Well, I never got far enough into Geometry I’m Guessing to understand what they’re for. I’m certain they’re quite useful to people working in the field of order-three automorphisms of the whatever exactly D4 is.

    And that’s it. That’s all the families there are. If it’s a finite simple group then it’s one of these. … Unless it isn’t.

    Because there are a couple of stragglers. There are a few finite simple groups that don’t fit in any of the four big families. And it really is only a few. I would have expected an infinite number of weird little cases that don’t belong to a family that looks similar. Instead, there are 26. (27 if you decide a particular one of the Steinberg Groups doesn’t really belong in that family. I’m not familiar enough with the case to have an opinion.) Funny number to have turn up. It took ten thousand pages to prove there were just the 26 special cases. I haven’t read them all. (I haven’t read any of the pages. But my Algebra professors at Rutgers were proud to mention their department’s work in tracking down all these cases.)

    Some of these cases have some resemblance to one another. But not enough to see them as a family the way the Cyclic Groups are. We bundle all these together in a wastebasket taxon called “the sporadic groups”. The first five of them were worked out in the 1860s. The last of them was worked out in 1980, seven years after its existence was first suspected.

    The sporadic groups all have weird sizes. The smallest one, known as M11 (for “Mathieu”, who found it and four of its siblings in the 1860s) has 7,920 things in it. They get enormous soon after that.

    The biggest of the sporadic groups, and the last one described, is the Monster Group. It’s known as M. It has a lot of things in it. In particular it’s got 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 things in it. So, you know, it’s not like we’ve written out everything that’s in it. We’ve just got descriptions of how you would write out everything in it, if you wanted to try. And you can get a good argument going about what it means for a mathematical object to “exist”, or to be “created”. There are something like 10^54 things in it. That’s something like a trillion times a trillion times the number of stars in the observable universe. Not just the stars in our galaxy, but all the stars in all the galaxies we could in principle ever see.
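If you want that number on your own screen: the Monster’s size is usually quoted in references as a product of prime powers. That factorization is a standard fact from the literature, not something in this essay; multiplying it back out recovers the figure above.

```python
# The Monster Group's order, as standard references factor it.
factors = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
           17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1,
           47: 1, 59: 1, 71: 1}

order = 1
for p, e in factors.items():
    order *= p ** e

print(order)
print(len(str(order)))  # 54 digits, hence "something like 10^54"
```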

    It’s one of the rare things for which “Brobdingnagian” is an understatement. Everything about it is mind-boggling, the sort of thing that staggers the imagination more than infinitely large things do. We don’t really think of infinitely large things; we just picture “something big”. A number like that one above is definite, and awesomely big. Just read off the digits of that number; it sounds like what we imagine infinity ought to be.

    We can make a chart, called the “character table”, which describes how subsets of the group interact with one another. The character table for the Monster Group is 194 rows tall and 194 columns wide. The Monster Group can be represented as this, I am solemnly assured, logical and beautiful algebraic structure. It’s something like a polyhedron in rather more than three dimensions of space. In particular it needs 196,884 dimensions to show off its particular beauty. I am taking experts’ word for it. I can’t quite imagine more than 196,883 dimensions for a thing.

    And it’s a thing full of mystery. This creature of group theory makes us think of the number 196,884. The same 196,884 turns up in number theory, the study of how integers are put together. It’s the first non-boring coefficient in a thing called the j-function. It’s not coincidence. This bit of number theory and this bit of group theory are bound together, but it took some years for anyone to quite understand why.

    There are more mysteries. The character table has 194 rows and columns. Each column implies a function. Some of those functions are duplicated; there are 171 distinct ones. But some of the distinct ones it turns out you can find by adding together multiples of others. There are 163 distinct ones. 163 appears again in number theory, in the study of algebraic integers. These are, of course, not integers at all. They’re things that look like complex-valued numbers: some real number plus some (possibly other) real number times the square root of some specified negative number. They’ve got neat properties. Or weird ones.

    You know how with integers there’s just one way to factor them? Like, fifteen is equal to three times five and no other set of prime numbers? Algebraic integers don’t work like that. There are usually multiple ways to do that. There are exceptions, algebraic integers that still have unique factorings. They happen only for a few square roots of negative numbers. The biggest of those negative numbers? Minus 163.
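The textbook example of this, which isn’t in the essay but is the standard one, lives among the numbers a + b√-5. Representing such a number as the pair (a, b), exact integer arithmetic shows six factoring two different ways:

```python
# Numbers of the form a + b*sqrt(-5), stored as the pair (a, b).

def mul(x, y):
    a, b = x
    c, d = y
    # (a + b sqrt(-5))(c + d sqrt(-5)) = (ac - 5bd) + (ad + bc) sqrt(-5)
    return (a * c - 5 * b * d, a * d + b * c)

print(mul((2, 0), (3, 0)))   # (6, 0): six is 2 times 3
print(mul((1, 1), (1, -1)))  # (6, 0): six is also (1 + sqrt(-5))(1 - sqrt(-5))
```

None of 2, 3, or 1 ± √-5 breaks down any further in that system, so the two factorings really are different.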

    I don’t know if this 163 appearance means something. As I understand the matter, neither does anybody else.

    There is some link to the mathematics of string theory. That’s an interesting but controversial and hard-to-experiment-upon model for how the physics of the universe may work. But I don’t know string theory well enough to say what it is or how surprising this should be.

    The Monster Group creates a monster essay. I suppose it couldn’t do otherwise. I suppose I can’t adequately describe all its sublime mystery. Dr Mark Ronan has written a fine web page describing much of the Monster Group and the history of our understanding of it. He also has written a book, Symmetry and the Monster, to explain all this in greater depths. I’ve not read the book. But I do mean to, now.

     
    • gaurish 9:17 am on Saturday, 10 December, 2016 Permalink | Reply

      It’s a shame that I somehow missed this blog post. Have you read “Symmetry and the Monster”? Will you recommend reading it?

      Like

      • Joseph Nebus 5:57 am on Saturday, 17 December, 2016 Permalink | Reply

        Not to fear. Given how I looked away a moment and got fourteen days behind writing comments I can’t fault anyone for missing a post or two here.

        I haven’t read Symmetry and the Monster, but from Dr Ronan’s web site about the Monster Group I’m interested and mean to get to it when I find a library copy. I keep getting farther behind in my reading, admittedly. Today I realized I’d rather like to read Dan Bouk’s How Our Days Became Numbered: Risk and the Rise of the Statistical Individual, which focuses in large part on the growth of the life insurance industry in the 19th century. And even so I just got a book about the sale of timing data that was so common back when standard time was being discovered-or-invented.

        Like

  • Joseph Nebus 6:00 pm on Sunday, 21 August, 2016 Permalink | Reply
    Tags: cheese, , , number theory, , ,   

    Reading the Comics, August 19, 2016: Mathematics Signifier Edition 


    I know it seems like when I write these essays I spend the most time on the first comic in the bunch and give the last ones a sentence, maybe two at most. I admit when there’s a lot of comics I have to write up at once my energy will droop. But Comic Strip Master Command apparently wants the juiciest topics sent out earlier in the week. I have to follow their lead.

    Stephen Beals’s Adult Children for the 14th uses mathematics to signify deep thinking. In this case Claremont, the dog, is thinking of the Riemann Zeta function. It’s something important in number theory, so longtime readers should know this means it leads right to an unsolved problem. In this case it’s the Riemann Hypothesis. That’s the most popular candidate for “what is the most important unsolved problem in mathematics right now?” So you know Claremont is a deep-thinking dog.

    The big Σ ordinary people might recognize as representing “sum”. The notation means to evaluate, for each legitimate value of the thing underneath — here it’s ‘n’ — the value of the expression to the right of the Sigma. Here that’s \frac{1}{n^s} . Then add up all those terms. It’s not explicit here, but context would make clear that n runs over the positive whole numbers: 1, 2, 3, and so on. s is a number bigger than 1; any smaller and the sum would grow without bound.

    The big capital Pi is more mysterious. It’s Sigma’s less popular brother. It means “product”. For each legitimate value of the thing underneath it — here it’s “p” — evaluate the expression on the right. Here that’s \frac{1}{1 - \frac{1}{p^s}} . Then multiply all that together. In the context of the Riemann Zeta function, “p” here isn’t just any old number, or even any old whole number. It’s only the prime numbers. Hence the “p”. Good notation, right? Yeah.

    This particular equation, once shored up with the context the symbols live in, was proved by Leonhard Euler, who proved so much you sometimes wonder if later mathematicians were needed at all. It ties in to how often whole numbers are going to be prime, and what the chances are that some set of numbers are going to have no factors in common. (Other than 1, which is too boring a number to call a factor.) But even if Claremont did know that Euler got there first, it’s almost impossible to do good new work without understanding the old.
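You can watch Euler’s identity take shape numerically. Here’s a little Python check of my own, with s = 2 so we can compare against the known value π²/6; the helper names are mine:

```python
from math import pi

s = 2
# Partial sum of 1/n^s over the first hundred thousand whole numbers.
zeta_sum = sum(1 / n**s for n in range(1, 100001))

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes up to limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for n in range(2, int(limit**0.5) + 1):
        if sieve[n]:
            for m in range(n * n, limit + 1, n):
                sieve[m] = False
    return [n for n, is_p in enumerate(sieve) if is_p]

# Partial Euler product of 1/(1 - 1/p^s) over primes up to a thousand.
euler_product = 1.0
for p in primes_up_to(1000):
    euler_product *= 1 / (1 - 1 / p**s)

print(round(zeta_sum, 3), round(euler_product, 3), round(pi**2 / 6, 3))
# all three come out to about 1.645
```

The partial sum and the partial product both settle toward the same number, which is the identity doing its job.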

    Charlos Gary’s Working It Out for the 14th is this essay’s riff on pie charts. Or bar charts. Somewhere around here the past week I read that a French idiom for the pie chart is the “cheese chart”. That’s a good enough bit I don’t want to look more closely and find out whether it’s true. If it turned out to be false I’d be heartbroken.

    Ryan North’s Dinosaur Comics for the 15th talks about everyone’s favorite physics term, entropy. Everyone knows that it tends to increase. Few advanced physics concepts feel so important to everyday life. I almost made one expression of this — Boltzmann’s H-Theorem — a Theorem Thursday post. I might do a proper essay on it yet. Utahraptor describes this as one of “the few statistical laws of physics”, which I think is a bit unfair. There’s a lot about physics that is statistical; it’s often easier to deal with averages and distributions than the mass of real messy data.

    Utahraptor’s right to point out that it isn’t impossible for entropy to decrease. It can be expected not to, in time. Indeed decent scientists thinking as philosophers have proposed that “increasing entropy” might be the only way to meaningfully define the flow of time. (I do not know how decent the philosophy of this is. This is far outside my expertise.) However: we would expect at least one tails to come up if we simultaneously flipped infinitely many coins fairly. But there is no reason that it couldn’t happen, that infinitely many fairly-tossed coins might all come up heads. The probability of this ever happening is zero. Yet it remains possible. Such is the intuition-destroying nature of probability and of infinitely large things.

    Tony Cochran’s Agnes on the 16th proposes to decode the Voynich Manuscript. Mathematics comes in as something with answers that one can check for comparison. It’s a familiar role. As I seem to write three times a month, this is fair enough to say to an extent. Coming up with an answer to a mathematical question is hard. Checking the answer is typically easier. Well, there are many ways we can try to find an answer. To see whether a proposed answer works usually we just need to go through it and see if the logic holds. This might be tedious to do, especially in those enormous brute-force problems where the proof amounts to showing there are a hundred zillion special cases and here’s an answer for each one of them. But it’s usually a much less hard thing to do.

    Johnny Hart and Brant Parker’s Wizard of Id Classics for the 17th uses what seems like should be an old joke about bad accountants and nepotism. Well, you all know how important bookkeeping is to the history of mathematics, even if I’m never that specific about it because it never gets mentioned in the histories of mathematics I read. And apparently sometime between the strip’s original appearance (the 20th of August, 1966) and my childhood the Royal Accountant character got forgotten. That seems odd given the comic potential I’d imagine him to have. Sometimes a character’s only good for a short while is all.

    Mark Anderson’s Andertoons for the 18th is the Andertoons representative for this essay. Fair enough. The kid speaks of exponents as a kind of repeating oneself. This is how exponents are inevitably introduced: as multiplying a number by itself many times over. That’s a solid way to introduce raising a number to a whole number. It gets a little strained to describe raising a number to a rational number. It’s a confusing mess to describe raising a number to an irrational number. But you can make that logical enough, with effort. And that’s how we do make the idea rigorous. A number raised to (say) the square root of two is something greater than the number raised to 1.4, but less than the number raised to 1.5. More than the number raised to 1.41, less than the number raised to 1.42. More than the number raised to 1.414, less than the number raised to 1.415. This takes work, but it all hangs together. And then we ask about raising numbers to an imaginary or complex-valued number and we wave that off to a higher-level mathematics class.
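The squeeze described above is easy to watch happen. A Python sketch, with 2 as the base:

```python
from math import sqrt

# Pin 2^sqrt(2) between powers with tighter and tighter rational exponents.
target = 2 ** sqrt(2)
for low, high in [(1.4, 1.5), (1.41, 1.42), (1.414, 1.415)]:
    print(f"2^{low} = {2**low:.6f} < {target:.6f} < 2^{high} = {2**high:.6f}")
```

Floating point only carries this so far, but the principle, tighter and tighter rational exponents pinning down the irrational one, is what makes the definition rigorous.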

    Nate Fakes’s Break of Day for the 18th is the anthropomorphic-numerals joke for this essay.

    Lachowski’s Get A Life for the 18th is the sudoku joke for this essay. It’s also a representative of the idea that any mathematical thing is some deep, complicated puzzle at least as challenging as calculating one’s taxes. I feel like this is a rerun, but I don’t see any copyright dates. Sudoku jokes like this feel old, but comic strips have been known to make dated references before.

    Samson’s Dark Side Of The Horse for the 19th is this essay’s Dark Side Of The Horse gag. I thought initially this was a counting-sheep in a lab coat. I’m going to stick to that mistaken interpretation because it’s more adorable that way.

     
  • Joseph Nebus 6:00 pm on Sunday, 7 August, 2016 Permalink | Reply
    Tags: , discovery, divisors, number theory,   

    Reading the Comics, August 1, 2016: Kalends Edition 


    The last day of July and first day of August saw enough mathematically-themed comic strips to fill a standard-issue entry. The rest of the week wasn’t so well-stocked. But I’ll cover those comics on Tuesday if all goes well. This may be a silly plan, but it is a plan, and I will stick to that.

    Johnny Hart’s Back To BC reprints the venerable and groundbreaking comic strip from its origins. On the 31st of July it reprinted a strip from February 1959 in which Peter discovers mathematics. The work’s elaborate, much more than we would use to solve the problem today. But it’s always like that. Newly-discovered mathematics is much like any new invention or innovation, a rickety set of things that just barely work. With time we learn better how the idea should be developed. And we become comfortable with the cultural assumptions going into the work. So we get more streamlined, faster, easier-to-use mathematics in time.

    The early invention of mathematics reappears the 1st of August, in a strip from earlier in February 1959. In this case it’s the sort of word problem confusion strip that any comic with a student could do. That’s a bit disappointing but Hart had much less space than he’d have for the Sunday strip above. One must do what one can.

    Mac King and Bill King’s Magic in a Minute for the 31st maybe isn’t really mathematics. I guess there’s something in the modular-arithmetic implied by it. But it depends on a neat coincidence. Follow the directions in the comic about picking a number from one to twelve and counting out the letters in the word for that number. And then the letters in the word for the number you’re pointing to, and then once again. It turns out this leads to the same number. I’d never seen this before and it’s neat that it does.
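As I read the trick, its engine is replacing a number by the letter count of its English name and repeating. That reading is my assumption about how the comic’s directions work, but here’s a Python sketch of it:

```python
# Replace a number by the letter count of its English name; repeat.
words = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five",
         6: "six", 7: "seven", 8: "eight", 9: "nine", 10: "ten",
         11: "eleven", 12: "twelve"}

def letter_count(n):
    return len(words[n])

for start in range(1, 13):
    n = start
    while n != letter_count(n):
        n = letter_count(n)
    print(start, "->", n)  # every chain ends at 4
```

Four is the only number from one to twelve whose name has as many letters as the number itself, so every chain funnels into it.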

    Rick Detorie’s One Big Happy rerun for the 31st features Ruthie teaching, as she will. She mentions offhand the “friendlier numbers”. By this she undoubtedly means the numbers that are attractive in some way, like being nice to draw. There are “friendly numbers”, though, as number theorists see things. These are sets of numbers. For each number in this set you get the same ratio, called the “index”, if you add together all its divisors (including 1 and the original number) and divide that by the original number. For example, the divisors of six are 1, 2, 3, and 6. Add that together and you get 12; divide that by the original 6 and you get 2. The divisors of 28 are 1, 2, 4, 7, 14, and 28. Add that pile of numbers together and you get 56; divide that by the original 28 and you get 2. So 6 and 28 are friendly numbers, each the friend of the other.

    As often happens with number theory there’s a lot of obvious things we don’t know. For example, we know that 1, 2, 3, 4, and 5 have no friends. But we do not know whether 10 has. Nor 14 nor 20. I do not know if it is proved whether there are infinitely many sets of friendly numbers. Nor do I know if it is proved whether there are infinitely many numbers without friends. Those last two sentences are about my ignorance, though, and don’t reflect what number theory people know. I’m open to hearing from people who know better.

    There are also things called “amicable numbers”, which are easier to explain and to understand than “friendly numbers”. A pair of numbers are amicable if the sum of one number’s divisors, other than itself, is the other number. 220 and 284 are the smallest pair of amicable numbers. Fermat found that 17,296 and 18,416 were an amicable pair; Descartes found that 9,363,584 and 9,437,056 were. Both pairs were known to Arab mathematicians already. Amicable pairs are easy enough to produce. From the ninth century we’ve had Thâbit ibn Kurrah’s rule, which generates amicable pairs whenever certain related numbers all happen to be prime. Ruthie wasn’t thinking of any of this, though, and was more thinking how much fun it is to write a 7.
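Both definitions come down to divisor sums, which are pleasant to compute. A brute-force Python check (slow but clear; the function name is mine):

```python
def divisor_sum(n):
    """Sum of all divisors of n, including 1 and n itself."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

# Friendly numbers: same ratio of divisor sum to the number itself.
print(divisor_sum(6) / 6)    # 2.0
print(divisor_sum(28) / 28)  # 2.0, so 6 and 28 are friends

# Amicable numbers: each one's proper divisors sum to the other.
print(divisor_sum(220) - 220)  # 284
print(divisor_sum(284) - 284)  # 220
```

So 6 and 28 share the index 2, and 220 and 284 each point at the other.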

    Terry Border’s Bent Objects for the 1st just missed the anniversary of John Venn’s birth and all the joke Venn Diagrams that were going around, at least if your social media universe looks anything like mine.

    Jon Rosenberg’s Scenes from a Multiverse for the 1st is set in “Mathpinion City”, in the “Numerically Flexible Zones”. And I appreciate it’s a joke about the politicization of science. But science and mathematics are human activities. They are culturally dependent. And especially at the dawn of a new field of study there will be long and bitter disputes about what basic terms should mean. It’s absurd for us to think that the question of whether 1 + 1 should equal 2 or 3 could even arise.

    But we think that because we have absorbed ideas about what we mean by ‘1’, ‘2’, ‘3’, ‘plus’, and ‘equals’ that settle the question. There was, if I understand my mathematics history right — and I’m not happy with my reading on this — a period in which it was debated whether negative numbers should be considered as less than or greater than the positive numbers. Absurd? Thermodynamics allows for the existence of negative temperatures, and those represent extremely high-energy states, things that are hotter than positive temperatures. A thing may get hotter, from 1 Kelvin to 4 Kelvin to a million Kelvin to infinitely many Kelvin to -1000 Kelvin to -6 Kelvin. If there are intuition-defying things to consider about “negative six” then we should at least be open to the proposition that the universal truths of mathematics are understood by subjective processes.

     
  • Joseph Nebus 3:00 pm on Wednesday, 6 April, 2016 Permalink | Reply
    Tags: , , Dublin, , , number theory, , rotations,   

    A Leap Day 2016 Mathematics A To Z: Quaternion 


    I’ve got another request from Gaurish today. And it’s a word I had been thinking to do anyway. When one looks for mathematical terms starting with ‘q’ this is one that stands out. I’m a little surprised I didn’t do it for last summer’s A To Z. But here it is at last:

    Quaternion.

    I remember the seizing of my imagination the summer I learned imaginary numbers. If we could define a number i, so that i-squared equalled negative 1, and work out arithmetic which made sense out of that, why not do it again? Complex-valued numbers are great. Why not something more? Maybe we could also have some other non-real number. I reached deep into my imagination and picked j as its name. It could be something else. Maybe the logarithm of -1. Maybe the square root of i. Maybe something else. And maybe we could build arithmetic with a whole second other non-real number.

    My hopes of this brilliant idea petered out over the summer. It’s easy to imagine a super-complex number, something that’s “1 + 2i + 3j”. And it’s easy to work out adding two super-complex numbers like this together. But multiplying them together? What should i times j be? I couldn’t solve the problem. Also I learned that we didn’t need another number to be the logarithm of -1. It would be π times i. (Or some other numbers. There’s some surprising stuff in logarithms of negative or of complex-valued numbers.) We also don’t need something special to be the square root of i, either. \frac{1}{2}\sqrt{2} + \frac{1}{2}\sqrt{2}\imath will do. (So will another number.) So I shelved the project.

    Even if I hadn’t given up, I wouldn’t have invented something. Not along those lines. Finer minds had done the same work and had found a way to do it. The most famous of these is the quaternions. Their discovery is famous. Sir William Rowan Hamilton — the namesake of “Hamiltonian mechanics”, so you already know what a fantastic mind he was — had a flash of insight that’s come down in the folklore and romance of mathematical history. He had the idea on the 16th of October, 1843, while walking with his wife along the Royal Canal, in Dublin, Ireland. While walking across the bridge he saw what was missing. It seems he lacked pencil and paper. He carved it into the bridge:

    i^2 = j^2 = k^2 = ijk = -1

    The bridge now has a plaque commemorating the moment. You can’t make a sensible system with two non-real numbers. But three? Three works.

    And they are a mysterious three! i, j, and k are somehow not the same number. But each of them, multiplied by themselves, gives us -1. And the product of the three is -1. They are even more mysterious. To work sensibly, i times j can’t be the same thing as j times i. Instead, i times j equals minus j times i. And j times k equals minus k times j. And k times i equals minus i times k. We must give up commutativity, the idea that the order in which we multiply things doesn’t matter.

    But if we’re willing to accept that the order matters, then quaternions are well-behaved things. We can add and subtract them just as we would think to do if we didn’t know they were strange constructs. If we keep the funny rules about the products of i and j and k straight, then we can multiply them as easily as we multiply polynomials together. We can even divide them. We can do all the things we do with real numbers, only with these odd sets of four real numbers.
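Those funny rules fit in a few lines of code. Here’s a minimal Python sketch of my own: a quaternion as a plain 4-tuple (w, x, y, z) meaning w + xi + yj + zk, with the multiplication encoding Hamilton’s carving:

```python
def qmul(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

print(qmul(i, i))           # (-1, 0, 0, 0): i squared is -1
print(qmul(i, j))           # (0, 0, 0, 1):  ij = k
print(qmul(j, i))           # (0, 0, 0, -1): ji = -k, so order matters
print(qmul(qmul(i, j), k))  # (-1, 0, 0, 0): ijk = -1
```

Addition is component-wise, and division comes from multiplying by the conjugate and dividing by the squared length, just as with complex numbers.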

    The way they look, that pattern of 1 + 2i + 3j + 4k, makes them look a lot like vectors. And we can use them like vectors pointing to stuff in three-dimensional space. It’s not quite a comfortable fit, though. That plain old real number at the start of things seems like it ought to signify something, but it doesn’t. In practice, it doesn’t give us anything that regular old vectors don’t. And vectors allow us to ponder not just three- or maybe four-dimensional spaces, but as many as we need. You might wonder why we need more than four dimensions, even allowing for time. It’s because if we want to track a lot of interacting things, it’s surprisingly useful to put them all into one big vector in a very high-dimension space. It’s hard to draw, but the mathematics is nice. Hamiltonian mechanics, particularly, almost beg for it.

    That’s not to call them useless, or even a niche interest. They do some things fantastically well. One of them is rotations. We can represent rotating a point around an arbitrary axis by an arbitrary angle as the multiplication of quaternions. There are many ways to calculate rotations. But if we need to do three-dimensional rotations this is a great one because it’s easy to understand and easier to program. And as you’d imagine, being able to calculate what rotations do is useful in all sorts of applications.
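Here’s a sketch of that rotation recipe, under my assumptions about conventions (the standard sandwich v → q v q⁻¹ with a unit quaternion):

```python
from math import cos, sin, pi

def qmul(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(v, axis, angle):
    """Rotate 3-vector v by angle radians about a unit-length axis."""
    q = (cos(angle / 2),) + tuple(sin(angle / 2) * a for a in axis)
    q_inv = (q[0], -q[1], -q[2], -q[3])  # conjugate = inverse, for unit q
    w, x, y, z = qmul(qmul(q, (0,) + tuple(v)), q_inv)
    return (x, y, z)

# A quarter turn about the z-axis carries (1, 0, 0) to about (0, 1, 0).
print(rotate((1, 0, 0), (0, 0, 1), pi / 2))
```

Composing two rotations is just multiplying their quaternions, which is part of why programmers like this representation.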

    They’ve got good uses in number theory too, as they correspond well to the different ways to solve problems, often polynomials. They’re also popular in group theory. They might be the simplest rings that work like arithmetic but that don’t commute. So they can serve as ways to learn properties of more exotic ring structures.

    Knowing of these marvelous exotic creatures of the deep mathematics your imagination might be fired. Can we do this again? Can we make something with, say, four unreal numbers? No, no we can’t. Four won’t work. Nor will five. If we keep going, though, we do hit upon success with seven unreal numbers.

    This is a set called the octonions. Hamilton had barely worked out the scheme for quaternions when John T Graves, a friend of his at least up through the 16th of December, 1843, wrote of this new scheme. (Graves didn’t publish before Arthur Cayley did. Cayley’s one of those unspeakably prolific 19th century mathematicians. He has at least 967 papers to his credit. And he was a lawyer doing mathematics on the side for about 250 of those papers. This depresses every mathematician who ponders it these days.)

    But where quaternions are peculiar, octonions are really peculiar. Let me call a couple of quaternions p, q, and r. p times q might not be the same thing as q times p. But p times the product of q and r will be the same thing as the product of p and q, itself times r. This we call associativity. Octonions don't have that. Let me call a couple of octonions s, t, and u. s times the product of t and u may be either plus or minus the product of s and t, itself times u. (It depends.)
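Both quaternion properties are easy to check numerically. A quick sketch (`qmul` is my own helper for the Hamilton product, not a library routine): random quaternions fail to commute, yet associate to within rounding error.

```python
import random

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

random.seed(1)
p, q, r = [tuple(random.uniform(-1, 1) for _ in range(4)) for _ in range(3)]

pq, qp = qmul(p, q), qmul(q, p)
assert any(abs(a - b) > 1e-9 for a, b in zip(pq, qp))    # multiplication does not commute

lhs = qmul(p, qmul(q, r))
rhs = qmul(qmul(p, q), r)
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))  # but it does associate
```

The textbook special case is i·j = k while j·i = −k, which the same helper reproduces directly.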

    Octonions have some neat mathematical properties. But I don’t know of any general uses for them that are as catchy as understanding rotations. Not rotations in the three-dimensional world, anyway.

    Yes, yes, we can go farther still. There’s a construct called “sedenions”, which have fifteen non-real numbers on them. That’s 16 terms in each number. Where octonions are peculiar, sedenions are really peculiar. They work even less like regular old numbers than octonions do. With octonions, at least, when you multiply s by the product of s and t, you get the same number as you would multiplying s by s and then multiplying that by t. Sedenions don’t even offer that shred of normality. Besides being a way to learn about abstract algebra structures I don’t know what they’re used for.

    I also don’t know of further exotic terms along this line. It would seem to fit a pattern if there’s some 32-term construct that we can define something like multiplication for. But it would presumably be even less like regular multiplication than sedenion multiplication is. If you want to fiddle about with that please do enjoy yourself. I’d be interested to hear if you turn up anything, but I don’t expect it’ll revolutionize the way I look at numbers. Sorry. But the discovery might be the fun part anyway.

     
    • elkement (Elke Stangl) 7:04 am on Sunday, 10 April, 2016 Permalink | Reply

      I wonder if quaternions would be useful in physics – as so often describing the same physics using different math leads to new insights. I vaguely remember some articles proposed by people who wanted to ‘revive’ quaternions for physics (sometimes this was close to … uhm … ‘outsider physics’, so I was reminded of people willing to apply Lord Kelvin’s theory of smoke rings to atomic physics…), but I have not encountered them in theoretical physics courses.

      Like

      • elkement (Elke Stangl) 7:41 am on Sunday, 10 April, 2016 Permalink | Reply

        I should post an update – before somebody points out my ignorance of history of science and tells me to check out Wikipedia :-) https://en.wikipedia.org/wiki/Quaternion
        This quote explains it:
        “From the mid-1880s, quaternions began to be displaced by vector analysis, which had been developed by Josiah Willard Gibbs, Oliver Heaviside, and Hermann von Helmholtz. Vector analysis described the same phenomena as quaternions, so it borrowed some ideas and terminology liberally from the literature of quaternions. However, vector analysis was conceptually simpler and notationally cleaner, and eventually quaternions were relegated to a minor role in mathematics and physics.”

        Like

        • Joseph Nebus 2:57 am on Friday, 15 April, 2016 Permalink | Reply

          I was going to say, but did figure you’d get to it soon enough. And it isn’t like quaternions are wrong. If you’ve got a programming language construct for quaternions, such as because you’re using Fortran, they’ll be fine for an array of three- or four-dimensional vectors as long as you’re careful about multiplications. It’s just that if you’ve turned your system into a 3N-dimensional vector, you might as well use a vector with 3N spots, instead of an array of N quaternions.

          Liked by 1 person

  • Joseph Nebus 3:00 pm on Tuesday, 29 December, 2015 Permalink | Reply
    Tags: accessibility, , number theory, ,   

    The Equidistribution of Lattice Shapes of Rings: A Friendly Thesis 


    So, you’re never going to read my doctoral thesis. That’s fair enough. Few people are ever going to, and even the parts that got turned into papers or textbooks are going to attract few readers. They’re meant for people interested in a particular problem that I never expected most people to find interesting. I’m at ease with that. I wrote for an audience I expected knew nearly all the relevant background, and that was used to reading professional, research-level mathematics. But I knew that the non-mathematics-PhDs in my life would look at it and say they only understood the word ‘the’ on the title page. Even if they could understand more, they wouldn’t try.

    Dr Piper Alexis Harron tried meeting normal folks halfway. She took her research, and the incredible frustrations of doing that research — doctoral research is hard, and is almost as much an endurance test as anything else — and wrote what she first meant to be “a grenade launched at my ex-prison”. This turned into something more exotic. It’s a thesis “written for those who do not feel that they are encouraged to be themselves”, people who “don’t do math the `right way’ but could still greatly contribute to the community”.

    The result is certainly the most interesting mathematical academic writing I’ve run across. It’s written on several levels, including one meant for people who have heard of mathematics certainly but don’t partake in the stuff themselves. It’s exciting reading and I hope you give it a try. It may not be to your tastes — I’m wary of words like ‘laysplanation’ — but it isn’t going to be boring. And Dr Harron’s secondary thesis, that mathematicians need to more aggressively welcome people into the fold, is certainly right.

    You’re not missing anything by not reading my thesis. Although my thesis defense offered the amusing sidelight that the campus’s albino squirrel got into the building. My father and some of the grad student friends of mine were trying without anything approaching success to catch it. I’m still not sure why they thought the squirrel needed handling by them.

     
  • Joseph Nebus 3:00 pm on Wednesday, 23 December, 2015 Permalink | Reply
    Tags: Collatz Conjecture, elevators, , number theory   

    Elevator Mathematics 


    My friend ChefMongoose sent a neat little puzzle that came in a dream. I wanted to share it.

    So! It’s not often that my dreams give me math puzzles. Here’s one: You are on floor 20 of a hotel. The stairs are blocked.

    There are four elevators in front of you with display panels saying ‘3’, ‘4’, ‘7’, and ’35’. They will take you up that many floors, then the number will double. Going down an elevator will take you down that many floors, then the number will halve.

    (The dream didn’t tell me what will happen if you can’t halve the number. For good puzzle logic, let’s assume the elevator goes down that much, then breaks.)

    There is no basement, the hotel has an infinite amount of floors. Your challenge: get to floor 101. Can it be done?

    (And I have no idea if it can be done. But apparently I, Riker, and Worf were trying to do it.)

    The puzzle caught my imagination. It so happens the dream set things up so that this is possible: I worked out one path, and ChefMongoose found another.

    ChefMongoose was right, of course, that something has to be done about halving the floor steps. I’d thought to make it half of either one less than or one more than the count. That is, going down 7 would turn the elevator into one that goes down either 3 or 4 floors. (My solution turned out not to need either.)

    It looks lucky that ChefMongoose, Riker, and Worf picked a set of elevator moves, and rules, and starting, and ending floors that had a solution. Is it, though? Suppose we wanted to get to, say, floor 35? … Well, that’s possible. (Up 7, up 14, down 4, down 2.) 34 obviously follows. (Down 1 more.) 36? (Up 7, up 3, up 6.) 38? (Up 35, down 7, down 3, down 4, down 2, down 1.) The universe of reachable floors is bigger than it might seem at first.
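One way to probe that universe of floors is to let a computer do the exploring: a breadth-first search over states of (current floor, the four elevator displays). This sketch is my own, and it uses the stricter rule assumed above, that riding down on an odd display breaks the elevator:

```python
from collections import deque

def solve(start=20, goal=101, elevators=(3, 4, 7, 35), limit=12):
    """Breadth-first search for a shortest elevator path; 0 marks a broken car."""
    first = (start, elevators)
    seen = {first}
    queue = deque([(first, ())])
    while queue:
        (floor, cars), path = queue.popleft()
        if floor == goal:
            return list(path)
        if len(path) >= limit:
            continue
        for i, k in enumerate(cars):
            if k == 0:                                       # broken elevator
                continue
            moves = [(floor + k, 2 * k, f'up {k}')]          # ride up: display doubles
            if floor - k >= 1:                               # no basement
                # ride down: display halves, or the car breaks if it was odd
                moves.append((floor - k, k // 2 if k % 2 == 0 else 0, f'down {k}'))
            for new_floor, new_k, label in moves:
                state = (new_floor, cars[:i] + (new_k,) + cars[i + 1:])
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + (label,)))
    return None

print(solve())   # one shortest route from floor 20 to floor 101
```

Whatever route it returns, the ups minus the downs have to total 81 floors, since we start on 20 and want 101.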

    The elevator problem had nagged at me with the thought it was related to some famous mathematical problem. At least that it was a type of one. ChefMongoose worked out what I was thinking of, the Collatz Conjecture. But on further reflection that’s the wrong parallel. This elevator problem is more akin to the McNuggets Problem. (When McDonald’s first sold Chicken McNuggets they were in packs of six, nine, and twenty. So what is the largest number of McNuggets that could not be bought by some combination of packages?) The doubling and halving of floor range makes the problem different, though. I am curious if there are finitely many unreachable floors. I’m also curious whether allowing negative numbers — basement floors — would change what floors are accessible.
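The original McNuggets problem is small enough to settle by direct computation: a count is buyable exactly when subtracting some pack size leaves a buyable count. This sketch (function name mine) recovers the classic answer, that 43 is the largest unbuyable number for packs of 6, 9, and 20:

```python
def buyable_counts(packs=(6, 9, 20), limit=200):
    """Every count up to `limit` reachable as a sum of pack sizes."""
    ok = {0}
    for n in range(1, limit + 1):
        if any(n - p in ok for p in packs if n >= p):
            ok.add(n)
    return ok

ok = buyable_counts()
gaps = [n for n in range(1, 201) if n not in ok]
print(max(gaps))   # → 43
```

Once six consecutive counts are all buyable (44 through 49 are), adding packs of six covers everything beyond, so checking up to 200 is more than enough.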

    The Collatz Conjecture is a fun one. It’s almost a game. Start with a positive whole number. If it’s even, divide it in half. If it’s odd, multiply it by three and add one. Then repeat.

    If we start with 1, that’s odd, so we triple it and add one, giving us 4. Even, so cut in half: 2. Even again, so cut in half: 1. That’s going to bring us back to 4.

    If we start with 2, we know where this is going. Cut in half: 1. Triple and add one: 4. Cut in half: 2. And repeat.

    Starting with 3 suggests something new might happen. Triple 3 and add one: 10. Halve that: 5. Triple and add one: 16. Halve: 8. Halve: 4. Halve: 2. Halve: 1.

    4 we’re already a bit sick of at this point. 5 — well, we just worked 5 out. That’ll go 5, 16, 8, 4, 2, 1, etc. Start from 6: we halve it to 3 and then we just worked out 3.

    7 jumps right up to 22, then 11, then 34 — what an interesting number there — then 17, and then 52, 26, 13, 40, 20, 10 and we’ve seen that routine already. 10, 5, 16, 8, 4, 2, 1.

    The Collatz Conjecture is that whatever positive whole number you start from will lead, eventually, to the 4, 2, 1 cycle. It may take a while to get there. I was working the numbers in my head while falling asleep the other night and got to wondering what exactly was 27’s problem anyway. (It takes over a hundred steps to settle down, and gets to numbers as high as 9,232 before finishing.)
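Chasing 27 by hand is tedious, but a few lines of code (a sketch, not any standard routine) confirm both figures, the 111 steps and the peak of 9,232:

```python
def collatz(n):
    """The 3n+1 trajectory of n, down to its first arrival at 1."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

traj = collatz(27)
print(len(traj) - 1, max(traj))   # → 111 9232
```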

    Nobody knows whether it’s true. It seems plausible that it might be false: we can imagine a number whose sequence never reaches that cycle. At least I can imagine there’s some number, let me call it N, and suppose it’s odd. Triple that and add one, so we get an even number; halve that maybe a couple times until we get an odd number, and triple that and add one and get back the original N. You might have fun trying out numbers and seeing if you can find a loop like that.

    Just do that for fun, though. Mathematicians have tested out every number less than 1,152,921,504,606,846,976. (It’s a round number in binary.) They all end in that 4, 2, 1 cycle. So it seems hard to believe that 1,152,921,504,606,846,977 and onward wouldn’t. We just don’t know that’s so.

    If you allow zero, then that’s a valid but very short cycle: 0 halves to 0 and never budges. If you allow negative numbers, then there are at least three more cycles. They start from -1, from -5, and from -17. It’s not known whether there are any more in the negative integers.

    The conjecture’s named for Lothar Collatz, 1910 – 1990, a German mathematician who specialized in numerical analysis. That’s the study of how to do calculations that meaningfully reflect the mathematics we would like to know. The Collatz Conjecture is, to the best of my knowledge, a novelty act. I don’t know of any interesting or useful results that depend on it being true (or false). It’s just a question easy to ask and understand, and that we don’t know how to solve. But those are fun to have around too.

     
    • sheldonk2014 8:41 pm on Wednesday, 20 January, 2016 Permalink | Reply

      I don’t know why Joseph but I am not getting your posts
      So I came to visit
      Just to let you know

      Like

      • Joseph Nebus 5:01 am on Sunday, 24 January, 2016 Permalink | Reply

        That is curious and I’m not sure what’s happening. I do know of one RSS aggregator that’s not been getting the feed. That might be connected, but I don’t know if it isn’t just coincidence since that aggregator isn’t very reliable.

        Like

  • Joseph Nebus 9:53 pm on Monday, 22 December, 2014 Permalink | Reply
    Tags: , Jurassic Park, Las Vegas, , , number theory, , romance, Srinivasa Ramanujan   

    Reading The Comics, December 22, 2014: National Mathematics Day Edition 


    It was a busy week — well, it’s a season for busy weeks, after all — which is why the mathematics comics pile grew so large before I could do anything about it this time around. I’m not sure which I’d pick as my favorite; the Truth Facts tickles me by playing symbols up for confusion and ambiguity, but Quincy is surely the best-drawn of this collection, and good comic strip art deserves attention. Happily that’s a vintage strip from King Features so I feel comfortable including the comic strip for you to see easily.

    Tony Murphy’s It’s All About You (December 15), a comic strip about people not being quite couples, tells a “what happens in Vegas” joke themed to mathematics. The particular topic — a “seminar on gap unification theory” — is something that might actually be a mathematics department specialty. The topic appears in number theory, and particularly in the field of partitions, the study of ways to subdivide collections of discrete things. At this point the subject starts getting specialized enough I can’t say very much intelligible about it; apparently there’s a way of studying these divisions by looking at the distances (the gaps) between where divisions are made (the partitions), but my attempts to find a clear explanation for this all turn up papers in number theory journals that I haven’t got access to and that, I confess, would take me a long while to understand. If anyone from the number theory group wanted to explain things I’d be glad to offer the space.

    (More …)

     
    • ivasallay 4:11 am on Tuesday, 23 December, 2014 Permalink | Reply

      There’s a lot to like here.

      Like

    • elkement 12:45 pm on Monday, 5 January, 2015 Permalink | Reply

      For the first time I can appreciate your descriptions as the comic about dragging time is not available anymore. I once read there is a whole subfield of psychology dedicated to investigating these effects – e.g. why we feel that time starts to ‘fly’ as we age. If I recall correctly one explanation was that it is the number of interesting and new events per period that defines our subjective impression of the ‘speed of time’. So when your life has become more of a routine there are fewer of those events.

      Like

      • Joseph Nebus 4:58 pm on Monday, 5 January, 2015 Permalink | Reply

        It’s not available? That’s strange and a little worrisome since that comic was from a site that I thought left links up and functional indefinitely.

        I should have realized that there are people who study why time seems to pass so quickly as we age, and I just didn’t think of it or think to look up what psychology bloggers might have to say about it. (Well, I don’t have time to read even an introductory book about the subject.) But there are probably several contributing causes and that a year is a diminishing fraction of one’s lifespan-to-date is surely just one, and the number of surprising and novel events is probably another one, and I shouldn’t be surprised if they reinforce one another.

        Liked by 1 person

  • Joseph Nebus 3:49 pm on Friday, 31 May, 2013 Permalink | Reply
    Tags: Goldbach's Conjecture, , number theory, , , twin primes   

    Odd Proofs 


    May 2013 turned out to be an interesting month for number theory, in that there’ve been big breakthroughs on two long-standing conjectures. Number theory is great fun in part because it’s got many conjectures that anybody can understand but which are really hard to prove. The one that’s gotten the most attention, at least from the web sites that I read which dip into popular mathematics, has been the one about twin primes.

    It’s long been suspected that there are infinitely many pairs of “twin primes”, such as 5 and 7, or 11 and 13, or 29 and 31, separated by only two. It’s not proven that there are, not yet. Yitang Zhang of the University of New Hampshire has announced a proof that there are infinitely many pairings of primes that are no more than 70,000,000 apart. That is admittedly a much looser pairing than the twin prime conjecture asks about, but it’s a finite bound, which is better than what there was before. While there are infinitely many primes — anyone can prove that — the number of them in any fixed-width range tends to decrease, and it would be imaginable that the distance between successive primes just keeps increasing, without bound, the way that (say) each pair of successive powers of two is farther apart than the previous pair. But it’s not so, and that’s neat to see.

    Less publicized is a proof of Goldbach’s Odd Conjecture. Goldbach’s Conjecture is the famous one that every even number bigger than two can be written as the sum of two primes. An equivalent form would be to say that every whole number — even or odd — larger than five can be written as the sum of three primes. Goldbach’s Odd Conjecture cuts the problem by just saying that every odd whole number greater than five can be written as the sum of three primes. And it’s this which Harald Andres Helfgott claims to have a proof for. (He also claims to have a proof that every odd number greater than seven can be written as the sum of three odd primes, that is, that two isn’t needed for more than single-digit odd numbers.)

    (More …)

     
    • elkement 12:58 pm on Saturday, 1 June, 2013 Permalink | Reply

      I have a stupid question: Why is it that you can prove the conjecture for all numbers higher than a certain threshold? Why does the proof work for numbers > 10 to the power of 30, but not for smaller ones? What is special about 10^30?

      Like

      • Joseph Nebus 5:27 pm on Saturday, 1 June, 2013 Permalink | Reply

        It’s not a stupid question although it’s one I’m not sure I can answer quite well enough given that a lot of the paper calls on things that I don’t know well.

        If I am reading it correctly, the core idea of the proof is to substitute summations over numbers (as you might sum over primes) with integrals. This is an approximation, yes, but if you’re summing over a large enough number of small enough things, the substitution becomes as accurate as you could possibly need. Again, if I have this correctly, the bound of 10^{30} comes about because for numbers smaller than that bound, the integral approximation isn’t quite close enough to the summation to guarantee that there aren’t any overlooked numbers.

        I’d defer happily to the expertise of anyone who does know number theory better and can more confidently follow the course of the proof. I admit I’m intimidated just by some of the symbols (a valentine heart as a subscript? How did that come about?) and, after all, it is a 130-page proof.

        Like

        • elkement 4:42 pm on Sunday, 2 June, 2013 Permalink | Reply

          Thanks a lot! Replacing sums by integrals is very common in solid state physics (summing over all electron states…), so I can relate to that. I would not have expected that in pure mathematics though!

          Like

          • Joseph Nebus 4:56 pm on Sunday, 2 June, 2013 Permalink | Reply

            Quite glad to help (assuming I have got it right).

            I should say for anyone who’s reading this but never ventured far enough into calculus to look at integrals on purpose, the point of replacing summations with integrals is that, generally, it’s a lot easier to study integrals than it is summations. So if you can replace a summation with an integral you’re probably well-off doing that, although often, you can only make that replacement if it’s a sum over many enough pieces.

            Like

    • Danny Sichel 2:10 pm on Saturday, 16 July, 2016 Permalink | Reply

      “And yet you know just what I mean, too, and trying to be clear about what I mean would produce a pile of unintelligible words”

      hm. How about saying ‘its full decimal expansion is nearly seven million digits’?

      Like

      • Joseph Nebus 4:28 pm on Wednesday, 20 July, 2016 Permalink | Reply

        Yeah, you’re right. That works. Sometimes I don’t know how to say things clearly. Thank you.

        Like

  • Joseph Nebus 3:55 am on Tuesday, 25 October, 2011 Permalink | Reply
    Tags: , , number theory, ,   

    How I Make Myself Look Foolish 


    It seems to me that I need to factor numbers more often than most people do. I can’t even attribute this to my being a mathematician, since I don’t think along the lines of anything like mathematical work; I just find that I need to know, say, that 272,250 is what you get by multiplying 2 and 3 to the second power and 5 to the third power and 11 to the second power. And I reliably go to places I know will do calculations quickly, like the desktop Calculator application or what you get from typing mathematical expressions into Google, and find that since the last time I looked they still haven’t added a factorization tool. I have tools I can use, particularly Matlab or its open-source work-just-enough-alike-to-make-swapping-code-difficult replica Octave, which takes a long time to start up for one lousy number.

    So I got to thinking: I’ve wanted to learn a bit about writing apps, and surely, writing a factorization app is both easy and quick and would prove I could write something. The routine is easy, too: take a number (272,250) as input; then divide by two as many times as you can (just one, giving 136,125), then divide by three as many times as you can (twice, giving 15,125), then by five as many times as you can (three times, reaching 121), then by seven (you can’t), then eleven (twice, reaching 1), until you’ve run the whole number down. You just need to divide repeatedly by the prime numbers, starting at two, and going up only to the square root of whatever your input number is.
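That routine translates almost directly into code. One simplification worth noting: you can trial-divide by every integer from 2 up, not only the primes, because by the time you reach a composite candidate its prime factors have already been divided out of the number. A sketch (the function name is mine):

```python
def factorize(n):
    """Prime factorization by trial division; returns {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= n:              # only candidates up to the square root
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                      # whatever is left over is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(factorize(272250))   # → {2: 1, 3: 2, 5: 3, 11: 2}
```

So 272,250 comes out as 2 times 3 squared times 5 cubed times 11 squared, matching the count above.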

    Without bothering to program, then, I thought about how I could make this a more efficient routine. Figuring out more efficient ways to code is good practice, because if you think long enough about how to code efficiently, you can feel satisfied that you would have written a very good program and never bother to actually do it, which would only spoil the beauty of the code anyway. Here’s where the possible inefficiency sets in: how do you know what all the prime numbers up to the square root of whatever you’re interested in is?

    (More …)

     
    • MJ Howard (@random_bunny) 1:41 am on Wednesday, 26 October, 2011 Permalink | Reply

      Just in case you don’t know. http://www.wolframalpha.com/input/?i=factor+272250

      As an aside, prime factorization is something I find myself doing while stuck in traffic.

      Like

      • nebusresearch 3:02 am on Wednesday, 26 October, 2011 Permalink | Reply

        I had no idea about that site. That almost completely saves me from having to write anything of use on my own. Thank you.

        After publishing this I started to wonder how closely my little puzzle was tied to the McNuggets Problem.

        Like

        • MJ Howard (@random_bunny) 3:15 am on Wednesday, 26 October, 2011 Permalink | Reply

          The McNuggets Problem?

          That probably sounds much cooler in German.

          Like

          • nebusresearch 10:59 am on Wednesday, 26 October, 2011 Permalink | Reply

            It must; in German it’d join the ranks of things like eigenfunctions and Zustandssummes. And those inclined might get it with a McBier. (If that’s still served; I haven’t been in a German McDonalds in a long long while.)

            Like

    • bugq 1:34 am on Sunday, 30 October, 2011 Permalink | Reply

      Well, if you only have n primes to work with, the nth of which is M, then you should be able to factor everything up to the (n+1)th prime (let’s call it N). Since N is the first multiple of N, it’s also the first number to have a factor of N, so everything below it should be able to be factored completely by primes M and smaller. Of course, that means that the only way to find out how many numbers you can factor with a certain number of primes is to add to your list of primes. :P

      Like

      • nebusresearch 3:38 pm on Tuesday, 1 November, 2011 Permalink | Reply

        Yes, that’s the problem. If I have all the prime numbers listed up to N, then I know I can factor everything up to N+1. It’d be nice if I could say I can certainly factor up to, say, 1.5 times N, or at least have a 50-50 chance of factoring everything up to twice N, or something that extends the range some. Too bad.

        Like
