Tagged: A-To-Z

  • Joseph Nebus 4:00 pm on Tuesday, 25 July, 2017 Permalink | Reply
    Tags: A-To-Z

    There’s Still Time To Ask For Things For The Mathematics A To Z 


    I’m figuring to begin my Summer 2017 Mathematics A To Z next week. And I’ve got the first several letters pinned down, partly by a healthy number of requests from Gaurish, a lover of mathematics, and partly by some things I wanted to talk about.

    There are many letters not yet spoken for, though. If you’ve got something you’d like me to talk about, please head over to my first appeal and add a comment. The letters crossed out have been committed, but many are free. And the challenges are so much fun.

     
  • Joseph Nebus 4:00 pm on Thursday, 13 July, 2017 Permalink | Reply
    Tags: A-To-Z

    What Would You Like In The Summer 2017 Mathematics A To Z? 


    I would like to now announce exactly what everyone with the ability to draw conclusions expected after I listed the things covered in previous Mathematics A To Z summaries. I’m hoping to write essays about another 26 topics, one for each of the major letters of the alphabet. And, as ever, I’d like your requests. It’s great fun to be tossed a subject and either know enough about it, or learn enough about it in a hurry, to write a couple hundred words about it.

    So that’s what this is for. Please, in comments, list something you’d like to see explained.

    For the most part, I’ll do a letter on a first-come, first-served basis. I’ll try to keep this page updated so that people know which letters have already been taken. If I can’t do a request under its original letter, I might try rewording or rephrasing it, if I can think of a legitimate way to cover it under another. I’m open to taking another try at something I’ve already defined in the three A To Z runs I’ve previously done, especially since many of the terms have different meanings in different contexts.

    I’m always in need of requests for letters such as X and Y. But you knew that if you looked at how sparse MathWorld’s list of words for those letters is.

    Letters To Request:

    • A
    • B
    • C
    • D
    • E
    • F
    • G
    • H
    • I
    • J
    • K
    • L
    • M
    • N
    • O
    • P
    • Q
    • R
    • S
    • T
    • U
    • V
    • W
    • X
    • Y
    • Z

    I’m flexible about what I mean by “a word” or “a term” in requesting something, especially if it gives me a good subject to write about. And if you think of a clever way to get a particular word covered under a letter that’s really inappropriate, then, good. I like cleverness. I’m not sure what makes for the best kinds of glossary terms. Sometimes a broad topic is good because I can talk about how an idea expresses itself across multiple fields. Sometimes a narrow topic is good because I can dig into a particular way of thinking. I’m just hoping I’m not going to commit myself to three 2500-word essays a week. Those are fun, but they’re exhausting, as the time between Why Stuff Can Orbit essays may have hinted.

    And finally, I’d like to thank Thomas K Dye for creating banner art for this sequence. He’s the creator of the long-running web comic Newshounds. He’s also got the book version, Newshounds: The Complete Story, freshly published, a Patreon to support his comics habit, and plans to resume his Infinity Refugees spinoff strip shortly.

     
    • gaurish 2:12 pm on Monday, 17 July, 2017 Permalink | Reply

      A – Arithmetic
      C – Cohomology
      D – Diophantine Equations
      E – Elliptic curves
      F – Functor
      G – Gaussian primes/integers/distribution
      H – Height function (elliptic curves)
      I – integration
      L – L-function
      P – Prime number
      Z – zeta function

      I will tell more later. The banner art is very nice.


      • Joseph Nebus 5:37 pm on Tuesday, 18 July, 2017 Permalink | Reply

        Thank you! That’s a great set of topics to start on.

        And thanks for the kind words about the art. I’m quite happy with it and hope to get more for other projects. And, as ever, do hope people consider Thomas K Dye’s comic strips and Patreon.


      • gaurish 4:57 am on Wednesday, 26 July, 2017 Permalink | Reply

        J – Jordan Canonical Form
        K – Klein Bottle
        M – Meromorphic function
        N – Nine point circle
        O – Open set
        Q – Quasirandom numbers
        R – Real number
        S – Sárközy’s Theorem
        T – Torus
        U – Ulam Spiral
        V – Venn diagram
        W – Well ordering principle
        X – <I couldn’t find a word with x, but can discuss the importance of x as a variable>
        Y – Young tableau


        • elkement (Elke Stangl) 7:06 am on Wednesday, 26 July, 2017 Permalink | Reply

          Ha – I also suggested Open Set yesterday, see my comments below ;-) (from July 25 – on letters M N O R T V).


        • Joseph Nebus 12:39 pm on Thursday, 27 July, 2017 Permalink | Reply

          And thank you again! This gets the alphabet a good bit more done.

          And, yeah, ‘x’ is hard. But it’ll all be worth it in the end, I hope.


    • The Chaos Realm 4:53 pm on Monday, 17 July, 2017 Permalink | Reply

      I used the Riemann Tensor definition/explanation to front one of my sub-chapter pages in my poetry book (courtesy of the guidance of a teacher I know). :-)


      • Joseph Nebus 5:35 pm on Tuesday, 18 July, 2017 Permalink | Reply

        Ah, that’s wonderful! There is this beauty in the way mathematical concepts are expressed — not the structure of the ideas, but the way we write them out, especially when we get a good idea of what we want to express. I’d like it if more people could appreciate that without worrying that they don’t know, say, what a Ricci Flow would be.


        • The Chaos Realm 5:51 pm on Tuesday, 18 July, 2017 Permalink | Reply

          Thanks! I know there’s a really poetic beauty about astrophysics that I have loved for years. I may not understand all the equations, but I do feel I “get” physics in a way. *looks up Ricci Flow* It’s definitely one of my major forms of inspiration…one of my most used muses!


          • Joseph Nebus 6:18 pm on Sunday, 23 July, 2017 Permalink | Reply

            I’m glad you do enjoy. There’s a lot about physics and mathematics that can’t be understood without great equations, but then there’s a lot about architecture that can’t be understood without a lot of mathematics and legal analyses. Nevertheless anyone can appreciate a beautiful building, and surely people can be told interesting enough stories about mathematics to appreciate the beauty there. Ideally, anyway.


    • mathtuition88 5:11 pm on Monday, 24 July, 2017 Permalink | Reply

      V for Voronoi diagram would be nice


    • mathtuition88 4:12 pm on Tuesday, 25 July, 2017 Permalink | Reply

      How about D for discrete Morse theory and M for Morse theory? These are subjects I am not familiar with myself… it would be great to have an article describing the gist of them :)


      • Joseph Nebus 12:35 pm on Thursday, 27 July, 2017 Permalink | Reply

        I’m thoroughly unfamiliar with either myself, but I’m excited to give them a try! ‘M’ had been free, at least.


    • elkement (Elke Stangl) 4:14 pm on Tuesday, 25 July, 2017 Permalink | Reply

      N – N-Sphere or N-Ball
      O – Open Set
      R – Riemann Tensor


      • elkement (Elke Stangl) 4:35 pm on Tuesday, 25 July, 2017 Permalink | Reply

        Ah, the Riemann Tensor has already been claimed :-) Sorry, I did not read the other comments carefully. So then:
        R – Ricci Tensor


        • Joseph Nebus 12:38 pm on Thursday, 27 July, 2017 Permalink | Reply

          You know, deep down, I worried I was making trouble for myself mentioning the Ricci Tensor (or was it the Ricci Flow I mentioned? Ricci something, anyway) but what’s the fun of this without making trouble for myself?


      • Joseph Nebus 12:35 pm on Thursday, 27 July, 2017 Permalink | Reply

        Thank you, that’s getting the alphabet filled out a bit more.


    • elkement (Elke Stangl) 4:16 pm on Tuesday, 25 July, 2017 Permalink | Reply

      BTW – service for the other readers: Here is the neat table showing what Joseph has already covered in previous A-Z series: https://nebusresearch.wordpress.com/2017/06/29/a-listing-of-mathematics-subjects-i-have-covered-in-a-to-z-sequences-of-the-past/


    • elkement (Elke Stangl) 4:26 pm on Tuesday, 25 July, 2017 Permalink | Reply

      M – Manifold
      T – Topology (or Topological Manifold). Alternative: Tangent Bundle
      V – Volume Forms or Vector Bundle

      So you see, I am still in awe of the math that underpins General Relativity :-) But please, totally explain it from a ‘pure math’ perspective … I am most interested in whether and how your perspective may differ from how such things are introduced in theoretical physics.


      • Joseph Nebus 12:36 pm on Thursday, 27 July, 2017 Permalink | Reply

        I’m in awe of them myself! With luck I can get up to something like speed, though.


    • mathtuition88 9:20 am on Wednesday, 26 July, 2017 Permalink | Reply

      Reblogged this on Singapore Maths Tuition.


  • Joseph Nebus 4:00 pm on Thursday, 29 June, 2017 Permalink | Reply
    Tags: 2015, 2016, A-To-Z, terms

    A Listing Of Mathematics Subjects I Have Covered In A To Z Sequences Of The Past 


    I am not saying why I am posting this recap of past lists just now. But now you know why I am posting this recap of past lists just now.

    Summer 2015           | Leap Day 2016               | End 2016
    ----------------------+-----------------------------+--------------------------
    Ansatz                | Axiom                       | Algebra
    Bijection             | Basis                       | Boundary value problems
    Characteristic        | Conjecture                  | Cantor’s middle third
    Dual                  | Dedekind Domain             | Distribution (statistics)
    Error                 | Energy                      | Ergodic
    Fallacy               | Fractions (Continued)       | Fredholm alternative
    Graph                 | Grammar                     | General covariance
    Hypersphere           | Homomorphism                | Hat
    Into                  | Isomorphism                 | Image
    Jump (discontinuity)  | Jacobian                    | Jordan curve
    Knot                  | Kullback-Leibler Divergence | Kernel
    Locus                 | Lagrangian                  | Local
    Measure               | Matrix                      | Monster Group
    N-tuple               | Normal Subgroup             | Normal numbers
    Orthogonal            | Orthonormal                 | Osculating circle
    Proper                | Polynomials                 | Principal
    Quintile              | Quaternion                  | Quotient groups
    Ring                  | Riemann Sphere              | Riemann sum
    Step                  | Surjective Map              | Smooth
    Tensor                | Transcendental Number       | Tree
    Unbounded             | Uncountable                 | Unlink
    Vertex (graph theory) | Vector                      | Voronoi diagram
    Well-Posed Problem    | Wlog                        | Weierstrass Function
    Xor                   | X-Intercept                 | Xi function
    Y-Axis                | Yukawa Potential            | Yang Hui’s Triangle
    Z-Transform           | Z-score                     | Zermelo-Fraenkel Axioms

    And do, please, watch this space.

     
  • Joseph Nebus 6:00 pm on Thursday, 5 January, 2017 Permalink | Reply
    Tags: A-To-Z, mathematics history, recap

    What I Learned Doing The End 2016 Mathematics A To Z 


    The slightest thing I learned in the most recent set of essays is that I somehow slid from the descriptive “End Of 2016” title to the prescriptive “End 2016” identifier for the series. My unscientific survey suggests that most people would agree that we had too much 2016 and would have been better off doing without it altogether. So it goes.

    The most important thing I learned about this is I have to pace things better. The A To Z essays have been creeping up in length. I didn’t keep close track of their lengths, but I don’t think any of them came in under a thousand words; 1500 words was more common. And that’s fine enough, but at three per week, plus the Reading the Comics posts, that’s 5500 or 6000 words of mathematics alone. And that’s before getting to my humor blog, which even on a brief week will be a couple thousand words. I understand in retrospect why November and December felt like I didn’t have any time outside the word mines.

    I’m not bothered by writing longer essays, mind. I can apparently go on at any length on any subject. And I like the words I’ve been using. My suspicion is that between these A To Zs and the Theorem Thursdays over the summer I’ve found a mode for writing pop mathematics that works for me. It’s just a matter of how to balance workloads. The humor blog has gotten consistently better readership, for the obvious reasons (lately I’ve been trying to explain what the story comics are doing), but the mathematics is more satisfying. If I should have to cut back on either it’d be the humor blog that gets the cut first.

    Another little discovery is that I can swap out equations and formulas and the like for historical discussion. That’s probably a useful tradeoff for most of my readers. And it plays to my natural tendencies. It is very easy to imagine me having gone into history rather than into mathematics or science. It makes me aware how mediocre my knowledge of mathematics history is, though. For example, several times in the End 2016 A To Z the Crisis of Foundations came up, directly or in passing. But I’ve never read a proper history, not even a basic essay, about the Crisis. I don’t even know of a good description of this important-to-the-field event. Most mathematics history focuses around biographies of a few figures, often cribbed from Eric Temple Bell’s great but unreliable book, or a couple of famous specific incidents. (Newton versus Leibniz, the bridges of Königsberg, Cantor’s insanity, Gödel’s citizenship exam.) Plus Bourbaki.

    That’s not enough for someone taking the subject seriously, and I do mean to. So if someone has a suggestion for good histories of, for example, how Fourier series affected mathematicians’ understanding of what functions are, I’d love to know it. Maybe I should set that as a standing open request.

    In looking over the subjects I wrote about I find a pretty strong mix of group theory and real analysis. Maybe that shouldn’t surprise. Those are two of the maybe three legs that form a mathematics major’s education. So anyone wanting to understand mathematicians would see this stuff and have questions about it. (There are more things mathematics majors learn, but there are a handful of things almost any mathematics major is sure to spend a year being baffled by.)

    The third leg, I’d say, is differential equations. That’s a fantastic field, but it’s hard to describe without equations. Also pictures of what the equations imply. I’ve tended towards essays with few equations and pictures. That’s my laziness. Equations are best written in LaTeX, a typesetting tool that might as well be the standard for mathematicians writing papers and books. While WordPress supports a bit of LaTeX it isn’t quite effortless. That comes back around to balancing my workload. If I do that a little better I can explain solving first-order differential equations by integrating factors. (This is a prank. Nobody has ever needed to solve a first-order differential equation by integrating factors except for mathematics majors being taught the method.) But maybe I could make a go of that.
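
    To show off the sort of equation-laden passage I mean, here is the method in miniature. A first-order linear equation y' + p(x) y = q(x) gets multiplied through by the integrating factor \mu(x) = e^{\int p(x)\,dx} . That choice collapses the left side into a single derivative, \left( \mu(x) y \right)' = \mu(x) q(x) , and then integrating both sides and dividing by \mu(x) gives y(x) = \frac{1}{\mu(x)} \left( \int \mu(x) q(x)\,dx + C \right) . Two lines of calculus, several paragraphs of explaining what the lines mean; that is the workload problem in a nutshell.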

    I’m not setting any particular date for the next A-To-Z, or similar, project. I need some time to recuperate. And maybe some time to think of other running projects that would be fun or educational for me. There’ll be something, though.

     
  • Joseph Nebus 6:00 pm on Tuesday, 3 January, 2017 Permalink | Reply
    Tags: A-To-Z

    The End 2016 Mathematics A To Z Roundup 


    As is my tradition for the end of these roundups (see Summer 2015 and then Leap Day 2016) I want to just put up a page listing the whole set of articles. It’s a chance for people who missed a piece to easily see what they missed. And it lets me recover that little bit extra from the experience. Run over the past two months were:

     
  • Joseph Nebus 6:00 pm on Saturday, 31 December, 2016 Permalink | Reply
    Tags: 19th Century, A-To-Z, Axiom of Choice, continuum hypothesis, ZFC

    The End 2016 Mathematics A To Z: Zermelo-Fraenkel Axioms 


    gaurish gave me a choice for the Z-term to finish off the End 2016 A To Z. I appreciate it. I’m picking the more abstract thing because I’m not sure that I can explain zero briefly. The foundations of mathematics are a lot easier.

    Zermelo-Fraenkel Axioms

    I remember the look on my father’s face when I asked if he’d tell me what he knew about sets. He misheard what I was asking about. When we had that straightened out my father admitted that he didn’t know anything particular. I thanked him and went off disappointed. In hindsight, I kind of understand why everyone treated me like that in middle school.

    My father’s always quick to dismiss how much mathematics he knows, or could understand. It’s a common habit. But in this case he was probably right. I knew a bit about set theory as a kid because I came to mathematics late in the “New Math” wave. Sets were seen as fundamental to why mathematics worked without being so exotic that kids couldn’t understand them. Perhaps so; both my love and I delighted in what we got of set theory as kids. But if you grew up before that stuff was popular you probably had a vague, intuitive, and imprecise idea of what sets were. Mathematicians had only a vague, intuitive, and imprecise idea of what sets were through to the late 19th century.

    And then came what mathematics majors hear of as the Crisis of Foundations. (Or a similar name, like Foundational Crisis. I suspect there are dialect differences here.) It reflected mathematics taking seriously one of its ideals: that everything in it could be deduced from clearly stated axioms and definitions using logically rigorous arguments. As often happens, taking one’s ideals seriously produces great turmoil and strife.

    Before about 1900 we could get away with saying that a set was a bunch of things which all satisfied some description. That’s how I would describe it to a new acquaintance if I didn’t want to be treated like I was in middle school. The definition is fine if we don’t look at it too hard. “The set of all roots of this polynomial”. “The set of all rectangles with area 2”. “The set of all animals with four-fingered front paws”. “The set of all houses in Central New Jersey that are yellow”. That’s all fine.

    And then if we try to be logically rigorous we get problems. We always did, though. They’re embodied by ancient jokes like the person from Crete who declared that all Cretans always lie; is the statement true? Or the slightly less ancient joke about the barber who shaves only the men who do not shave themselves; does he shave himself? If not jokes these should at least be puzzles faced in fairy-tale quests. Logicians dressed this up some. Bertrand Russell gave us the quite respectable “The set consisting of all sets which are not members of themselves”, and asked us to stare hard into that set. To this we have only one logical response, which is to shout, “Look at that big, distracting thing!” and run away. This satisfies the problem only for a while.
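
    In symbols Russell’s set is R = \left\{ S : S \notin S \right\} . Asking whether R belongs to itself forces R \in R \iff R \notin R , a contradiction whichever way you answer. That is the big, distracting thing we are running from.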

    The while ended in — well, that took a while too. But between 1908 and the early 1920s Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem paused from arguing whose name would also be the best indie rock band name long enough to put set theory right. Their structure is known as Zermelo-Fraenkel Set Theory, or ZF. It gives us a reliable base for set theory that avoids any contradictions or catastrophic pitfalls. Or does, so far as we have found in a century of work.

    It’s built on a set of axioms, of course. Most of them are uncontroversial, things like declaring two sets are equivalent if they have the same elements. Declaring that the union of sets is itself a set. Obvious, sure, but it’s the obvious things that we have to make into axioms. Maybe you could start an argument about whether we should just assume there exists some infinitely large set. But if we’re aware sets probably have something to teach us about numbers, and that numbers can get infinitely large, then it seems fair to suppose that there must be some infinitely large set. The axioms that aren’t simple obvious things like that are too useful to do without. They assert things like that no set is an element of itself. Or that every set has a “power set”, a new set comprised of all the subsets of the original set. Good stuff to know.

    There is one axiom that’s controversial. Not controversial the way Euclid’s Parallel Postulate was. That’s the ugly one about lines crossing another line meeting on the same side they make angles smaller than something something or other. That axiom was controversial because it read so weird, so needlessly complicated. (It isn’t; it’s exactly as complicated as it must be. Or better, it’s as simple as it could possibly be and still be useful.) The controversial axiom of Zermelo-Fraenkel Set Theory is known as the Axiom of Choice. It says if we have a collection of mutually disjoint sets, each with at least one thing in them, then it’s possible to pick exactly one item from each of the sets.
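
    Stated a bit more formally: for any collection \left\{ S_i \right\}_{i \in I} of pairwise disjoint, nonempty sets there exists a set C containing exactly one element from each S_i . Equivalently, there is a choice function f with f(S_i) \in S_i for every index i.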

    It’s impossible to dispute this is what we have axioms for. It’s about something that feels like it should be obvious: we can always pick something from a set. How could this not be true?

    If it is true, though, we get some unsavory conclusions. For example, it becomes possible to take a ball the size of an orange and slice it up. We slice using mathematical blades. They’re not halted by something as petty as the desire not to slice atoms down the middle. We can reassemble the pieces. Into two balls. And worse, it doesn’t require we do something like cut the orange into infinitely many pieces. We expect crazy things to happen when we let infinities get involved. No, though, we can do this cut-and-duplicate thing by cutting the orange into five pieces. (This result is the famous Banach-Tarski Paradox.) When you hear that it’s hard to know whether to point to the big, distracting thing and run away. If we dump the Axiom of Choice we don’t have that problem. But can we do anything useful without the ability to make a choice like that?

    And we’ve learned that we can. If we want to use the Zermelo-Fraenkel Set Theory with the Axiom of Choice we say we were working in “ZFC”, Zermelo-Fraenkel-with-Choice. We don’t have to. If we don’t want to make any assumption about choices we say we’re working in “ZF”. Which to use depends on what one wants to do.

    Either way Zermelo and Fraenkel and Skolem established set theory on the foundation we use to this day. We’re not required to use them, no; there’s a construction called von Neumann-Bernays-Gödel Set Theory that’s supposed to be more elegant. They didn’t mention it in my logic classes that I remember, though.

    And still there’s important stuff we would like to know which even ZFC can’t answer. The most famous of these is the continuum hypothesis. Everyone knows — excuse me. That’s wrong. Everyone who would be reading a pop mathematics blog knows there are different-sized infinitely-large sets. And knows that the set of integers is smaller than the set of real numbers. The question is: is there a set bigger than the integers yet smaller than the real numbers? The Continuum Hypothesis says there is not.
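
    In the usual notation the integers have cardinality \aleph_0 and the real numbers have cardinality 2^{\aleph_0} . The Continuum Hypothesis is the claim that there is no set S with \aleph_0 < |S| < 2^{\aleph_0} .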

    Zermelo-Fraenkel Set Theory, even though it’s all about the properties of sets, can’t tell us if the Continuum Hypothesis is true. But that’s all right; it can’t tell us if it’s false, either. Whether the Continuum Hypothesis is true or false stands independent of the rest of the theory. We can assume whichever state is more useful for our work.

    Back to the ideals of mathematics. One question that produced the Crisis of Foundations was consistency. How do we know our axioms don’t contain a contradiction? It’s hard to say. Typically a set of axioms we can prove consistent is also a set too boring to do anything useful in. Zermelo-Fraenkel Set Theory, with or without the Axiom of Choice, has a lot of interesting results. Do we know the axioms are consistent?

    No, not yet. We know some of the axioms are mutually consistent, at least. And we have some results which, if true, would prove the axioms to be consistent. We don’t know if they’re true. Mathematicians are generally confident that these axioms are consistent. Mostly on the grounds that if there were a problem something would have turned up by now. It’s withstood all the obvious faults. But the universe is vaster than we imagine. We could be wrong.

    It’s hard to live up to our ideals. After a generation of valiant struggling we settle into hoping we’re doing good enough. And waiting for some brilliant mind that can get us a bit closer to what we ought to be.

     
    • elkement (Elke Stangl) 10:42 am on Sunday, 1 January, 2017 Permalink | Reply

      Very interesting – as usual! I was also subjected to the New Math in elementary school – the upside was that you got a lot of nice toys for free, as ‘add-ons’ to school books ( … plastic cubes and other toy blocks that were supposed to represent members of sets …). Not sure if it prepared one better to understand Russell’s paradox later ;-)


      • elkement (Elke Stangl) 10:43 am on Sunday, 1 January, 2017 Permalink | Reply

        … and I wish you a Happy New Year and more A-Zs in 2017 :-)


        • Joseph Nebus 5:34 am on Thursday, 5 January, 2017 Permalink | Reply

          Thanks kindly. I am going to do a fresh A-to-Z, although I don’t know just when. Not in January; haven’t got the energy for it right away.


      • Joseph Nebus 5:34 am on Thursday, 5 January, 2017 Permalink | Reply

        Oh, now, the toys were fantastic. I suppose it’s a fair question whether the people who got something out of the New Math got it because they understood fundamentals better in that form or whether it was just that the toys and games made the subject more engaging.

        I am, I admit, a fan of the New Math, but that may just be because it’s the way I learned mathematics, and the way you did something as a kid is always the one natural way to do it.


  • Joseph Nebus 6:00 pm on Thursday, 29 December, 2016 Permalink | Reply
    Tags: A-To-Z, China, Mersenne numbers

    The End 2016 Mathematics A To Z: Yang Hui’s Triangle 


    Today’s is another request from gaurish and another I’m glad to have as it let me learn things too. That’s a particularly fun kind of essay to have here.

    Yang Hui’s Triangle.

    It’s a triangle. Not because we’re interested in triangles, but because it’s a particularly good way to organize what we’re doing and show why we do that. We’re making an arrangement of numbers. First we need cells to put the numbers in.

    Start with a single cell in what’ll be the top middle of the triangle. It spreads out in rows beneath that. The rows are staggered. The second row has two cells, each one-half width to the side of the starting one. The third row has three cells, each one-half width to the sides of the row above, so that its center cell is directly under the original one. The fourth row has four cells, two of which are exactly underneath the cells of the second row. The fifth row has five cells, three of them directly underneath the third row’s cells. And so on. You know the pattern. It’s the one the pins in a plinko board take. Just trimmed down to a triangle. Make as many rows as you find interesting. You can always add more later.

    In the top cell goes the number ‘1’. There’s also a ‘1’ in the leftmost cell of each row, and a ‘1’ in the rightmost cell of each row.

    What of interior cells? The number for those we work out by looking to the row above. Take the cells to the immediate left and right of it. Add the values of those together. So for example the center cell in the third row will be ‘1’ plus ‘1’, commonly regarded as ‘2’. In the fourth row the leftmost cell is ‘1’; it always is. The next cell over will be ‘1’ plus ‘2’, from the row above. That’s ‘3’. The cell next to that will be ‘2’ plus ‘1’, a subtly different ‘3’. And the last cell in the row is ‘1’ because it always is. In the fifth row we get, starting from the left, ‘1’, ‘4’, ‘6’, ‘4’, and ‘1’. And so on.

    It’s a neat little arithmetic project. It has useful application beyond the joy of making something neat. Many neat little arithmetic projects don’t have that. But the numbers in each row give us binomial coefficients, which we often want to know. That is, if we wanted to work out (a + b) to, say, the fourth power, we would know what it looks like from looking at the fifth row of Yang Hui’s Triangle. It will be 1\cdot a^4 + 4\cdot a^3 \cdot b^1 + 6\cdot a^2\cdot b^2 + 4\cdot a^1\cdot b^3 + 1\cdot b^4 . This turns up in polynomials all the time.
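
    If you would rather watch the rows grow than draw them, the addition rule fits in a few lines of Python. (Python is just my choice of language here; nothing about the triangle demands it.) The assert double-checks the binomial-coefficient claim against the standard library:

        import math

        def yang_hui_rows(count):
            """Yield the first `count` rows of Yang Hui's triangle."""
            row = [1]
            for _ in range(count):
                yield row
                # Each interior cell is the sum of the two cells above it.
                row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]

        for n, row in enumerate(yang_hui_rows(6)):
            # The (n+1)-th row holds the binomial coefficients C(n, k).
            assert row == [math.comb(n, k) for k in range(n + 1)]
            print(row)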

    Look at diagonals. By diagonal here I mean a line parallel to the line of ‘1’s. Left side or right side; it doesn’t matter. Yang Hui’s triangle is bilaterally symmetric around its center. The first diagonal under the edges is a bit boring but familiar enough: 1-2-3-4-5-6-7-et cetera. The second diagonal is more curious: 1-3-6-10-15-21-28 and so on. You’ve seen those numbers before. They’re called the triangular numbers. They’re the number of dots you need to make a uniformly spaced, staggered-row triangle. Doodle a bit and you’ll see. Or play with coins or pool balls.

    The third diagonal looks more arbitrary yet: 1-4-10-20-35-56-84 and on. But these are something too. They’re the tetrahedronal numbers. They’re the number of things you need to make a tetrahedron. Try it out with a couple of balls. Oranges if you’re bored at the grocer’s. Four, ten, twenty, these make a nice stack. The fourth diagonal is a bunch of numbers I never paid attention to before. 1-5-15-35-70-126-210 and so on. This is — well. We just did tetrahedrons, the triangular arrangement of three-dimensional balls. Before that we did triangles, the triangular arrangement of two-dimensional discs. Do you want to put in a guess what these “pentatope numbers” are about? Sure, but you hardly need to. If we’ve got a bunch of four-dimensional hyperspheres and want to stack them in a neat triangular pile we need one, or five, or fifteen, or so on to make the pile come out neat. You can guess what might be in the fifth diagonal. I don’t want to think too hard about making triangular heaps of five-dimensional hyperspheres.
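
    All these diagonals are one phenomenon. The entries along the d-th diagonal are the binomial coefficients \binom{n}{d} : the triangular numbers are \binom{n}{2} = \frac{n(n+1)}{2} , the tetrahedronal numbers are \binom{n}{3} , the pentatope numbers are \binom{n}{4} , and so on into as many dimensions as you can stand.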

    There’s more stuff lurking in here, waiting to be decoded. Add the numbers of, say, row four up and you get two raised to the third power. Add the numbers of row ten up and you get two raised to the ninth power. You see the pattern. Add everything in, say, the top five rows together and you get the fifth Mersenne number, two raised to the fifth power (32) minus one (31, when we’re done). Add everything in the top ten rows together and you get the tenth Mersenne number, two raised to the tenth power (1024) minus one (1023).

    Or add together things on “shallow diagonals”. Start from a ‘1’ on the outer edge. I’m going to suppose you started on the left edge, but remember symmetry; it’ll be fine if you go from the right instead. Add to that ‘1’ the number you get by moving one cell to the right and going up-and-right. And then again, go one cell to the right and then one cell up-and-right. And again and again, until you run out of cells. You get the Fibonacci sequence, 1-1-2-3-5-8-13-21-and so on.
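
    Both of these patterns, the Mersenne sums and the shallow diagonals, are quick to confirm with a few more lines of Python, this time building the rows directly from binomial coefficients:

        import math

        rows = [[math.comb(n, k) for k in range(n + 1)] for n in range(12)]

        # Each row sums to a power of two: the (n+1)-th row adds up to 2^n.
        assert all(sum(rows[n]) == 2 ** n for n in range(12))

        # Everything in the top ten rows together is the Mersenne number 2^10 - 1.
        assert sum(sum(rows[n]) for n in range(10)) == 2 ** 10 - 1

        # Shallow diagonals: summing the cells C(n - k, k) gives Fibonacci numbers.
        fib = [sum(math.comb(n - k, k) for k in range(n // 2 + 1)) for n in range(10)]
        print(fib)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]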

    We can even make an astounding picture from this. Take the cells of Yang Hui’s triangle. Color them in. One shade if the cell has an odd number, another if the cell has an even number. It will create a pattern we know as the Sierpiński Triangle. (Wacław Sierpiński is proving to be the surprise special guest star in many of this A To Z sequence’s essays.) That’s the fractal of a triangle subdivided into four triangles with the center one knocked out, and the remaining triangles themselves subdivided into four triangles with the center knocked out, and on and on.

    By now I imagine even my most skeptical readers agree this is an interesting, useful mathematical construct. Also that they’re wondering why I haven’t said the name “Blaise Pascal”. The Western mathematical tradition knows of this from Pascal’s work, particularly his 1653 Traité du triangle arithmétique. But mathematicians like to say their work is universal, and independent of the mere human beings who find it. Constructions like this triangle give support to this. Yang lived in China, in the 13th century. I imagine it possible Pascal had heard of his work or been influenced by it, by some chain, but I know of no evidence that he did.

    And even if he had, there are other apparently independent inventions. The Avanti Indian astronomer-mathematician-astrologer Varāhamihira described the addition rule which makes the triangle work, in commentaries written around the year 500. Omar Khayyám, who keeps appearing in the history of science and mathematics, wrote about the triangle in his 1070 Treatise on Demonstration of Problems of Algebra. Again so far as I am aware there’s not a direct link between any of these discoveries. They are things different people in different traditions found because the tools — arithmetic and aesthetically-pleasing orders of things — were ready for them.

    Yang Hui wrote about his triangle in the 1261 book Xiangjie Jiuzhang Suanfa. In it he credits the triangle, used for finding roots, to the mathematician Jia Xian, who invented it around 1100. This reminds us that it is not merely mathematical discoveries that are found by many peoples at many times and places. So is Boyer’s Law, discovered by Hubert Kennedy.

     
    • gaurish 6:46 pm on Thursday, 29 December, 2016 Permalink | Reply

      This is the first time that I have read an article about the Pascal triangle without a picture of it in front of me and could still imagine it in my mind. :)


      • Joseph Nebus 5:22 am on Thursday, 5 January, 2017 Permalink | Reply

        Thank you; I’m glad you like it. I did spend a good bit of time before writing the essay thinking about why it is a triangle that we use for this figure, and that helped me think about how things are organized and why. (The one thing I didn’t get into was identifying the top row, the single cell, as row zero. Computers may index things starting from zero and there may be fair reasons to do it, but that is always going to be a weird choice for humans.)


  • Joseph Nebus 6:00 pm on Tuesday, 27 December, 2016 Permalink | Reply
    Tags: A-To-Z, Riemann hypothesis

    The End 2016 Mathematics A To Z: Xi Function 


    I have today another request from gaurish, who’s also been good enough to give me requests for ‘Y’ and ‘Z’. I apologize for coming to this a day late. But it was Christmas and many things demanded my attention.

    Xi Function.

    We start with complex-valued numbers. People discovered them because they were useful tools to solve polynomials. They turned out to be more than useful fictions, if numbers are anything more than useful fictions. We can add and subtract them easily. Multiply and divide them less easily. We can even raise them to powers, or raise numbers to them.

    If you become a mathematics major then somewhere in Intro to Complex Analysis you’re introduced to an exotic, infinitely large sum. It’s spoken of reverently as the Riemann Zeta Function, and it connects to something named the Riemann Hypothesis. Then you remember that you’ve heard of this, because if you’re willing to become a mathematics major you’ve read mathematics popularizations. And you know the Riemann Hypothesis is an unsolved problem. It proposes something that might be true or might be false. Either way has astounding implications for the way numbers fit together.

    Riemann here is Bernhard Riemann, who’s turned up often in these A To Z sequences. We saw him in spheres and in sums, leading to integrals. We’ll see him again. Riemann just covered so much of 19th century mathematics; we can’t talk about calculus without him. Zeta, Xi, and later on, Gamma are the famous Greek letters. Mathematicians fall back on them because the Roman alphabet just hasn’t got enough letters for our needs. I’m writing them out as English words instead because if you aren’t familiar with them they look like an indistinct set of squiggles. Even if you are familiar, sometimes. I got confused some in researching this because I slipped between a lowercase-xi and a lowercase-zeta in my mind. All I can plead is it’s been a hard week.

    Riemann’s Zeta function is famous. It’s easy to approach. You can write it as a sum. An infinite sum, but still, those are easy to understand. Pick a complex-valued number. I’ll call it ‘s’ because that’s the standard. Next take each of the counting numbers: 1, 2, 3, and so on. Raise each of them to the power ‘s’. And take the reciprocal, one divided by those numbers. Add all that together. You’ll get something. Might be real. Might be complex-valued. Might be zero. We know many values of ‘s’ that would give us a zero. The Riemann Hypothesis is about characterizing all the possible values of ‘s’ that give us a zero. We know some of them, so boring we call them trivial: -2, -4, -6, -8, and so on. (This looks crazy. There’s another way of writing the Riemann Zeta function which makes it obvious instead.) The Riemann Hypothesis is about whether all the proper, that is, non-boring values of ‘s’ that give us a zero are 1/2 plus some imaginary number.
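
    For readers who do want the symbols, the sum just described is \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} . (Strictly, the sum converges only where the real part of s is greater than 1; analytic continuation extends the function to the rest of the complex plane, and that continued form is the other way of writing it that makes the trivial zeroes obvious.)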

    It’s a rare thing mathematicians have only one way of writing. If something’s been known and studied for a long time there are usually variations. We find different ways to write the problem. Or we find different problems which, if solved, would solve the original problem. The Riemann Xi function is an example of this.

    I’m going to spare you the formula for it. That’s in self-defense. I haven’t found an expression of the Xi function that isn’t a mess. The normal ways to write it themselves call on the Zeta function, as well as the Gamma function. The Gamma function looks like factorials, for the counting numbers. It does its own thing for other complex-valued numbers.

    That said, I’m not sure what the advantages are in looking at the Xi function. The one that people talk about is its symmetry. Its value at a particular complex-valued number ‘s’ is the same as its value at the number ‘1 – s’. This may not seem like much. But it gives us this way of rewriting the Riemann Hypothesis. Imagine all the complex-valued numbers with the same imaginary part. That is, all the numbers that we could write as, say, ‘x + 4i’, where ‘x’ is some real number. If the size of the value of Xi, evaluated at ‘x + 4i’, always increases as ‘x’ starts out equal to 1/2 and increases, then the Riemann hypothesis is true. (This has to be true not just for ‘x + 4i’, but for all possible imaginary numbers. So, ‘x + 5i’, and ‘x + 6i’, and even ‘x + 4.1 i’ and so on. But it’s easier to start with a single example.)
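
    (In symbols the symmetry is simply \xi(s) = \xi(1 - s) . The swap between s and 1 - s sends the line of numbers with real part 1/2 to itself, which is part of why that line is the natural home for the interesting zeroes.)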

    Or another way to write it. Suppose the size of the value of Xi, evaluated at ‘x + 4i’ (or whatever), always gets smaller as ‘x’ starts out at a negative infinitely large number and keeps increasing all the way to 1/2. If that’s true, and true for every imaginary number, including ‘x – i’, then the Riemann hypothesis is true.

    And it turns out if the Riemann hypothesis is true we can prove the two cases above. We’d write the theorem about this in our papers with the start ‘The Following Are Equivalent’. In our notes we’d write ‘TFAE’, which is just as good. Then we’d take whichever of them seemed easiest to prove and find out it isn’t that easy after all. But if we do get through we declare ourselves fortunate, sit back feeling triumphant, and consider going out somewhere to celebrate. But we haven’t got any of these alternatives solved yet. None of the equivalent ways to write it has helped so far.

    We know some things. For example, we know there are infinitely many roots for the Xi function with a real part that’s 1/2. This is what we’d need for the Riemann hypothesis to be true. But we don’t know that all of them are.

    The Xi function isn’t entirely about what it can tell us for the Zeta function. The Xi function has its own exotic and wonderful properties. In a 2009 paper on arxiv.org, for example, Drs Yang-Hui He, Vishnu Jejjala, and Djordje Minic describe how if the zeroes of the Xi function are all exactly where we expect them to be then we learn something about a particular kind of string theory. I admit not knowing just what to say about a genus-one free energy of the topological string past what I have read in this paper. In another paper they write of how the zeroes of the Xi function correspond to the description of the behavior for a quantum-mechanical operator that I just can’t find a way to describe clearly in under three thousand words.

    But mathematicians often speak of the strangeness that mathematical constructs can match reality so well. And here is surely a powerful one. We learned of the Riemann Hypothesis originally by studying how many prime numbers there are compared to the counting numbers. If it’s true, then the physics of the universe may be set up one particular way. Is that not astounding?

     
    • gaurish 5:34 am on Wednesday, 28 December, 2016 Permalink | Reply

      Yes it’s astounding. You have a very nice talent for talking about mathematical quantities without showing formulas :)


      • Joseph Nebus 5:15 am on Thursday, 5 January, 2017 Permalink | Reply

        You’re most kind, thank you. I’ve probably gone overboard in avoiding formulas lately though.


  • Joseph Nebus 6:00 pm on Friday, 23 December, 2016 Permalink | Reply
    Tags: A-To-Z

    The End 2016 Mathematics A To Z: Weierstrass Function 


    I’ve teased this one before.

    Weierstrass Function.

    So you know how the Earth is a sphere, but from our normal vantage point right up close to its surface it looks flat? That happens with functions too. Here I mean the normal kinds of functions we deal with, ones with domains that are the real numbers or a Euclidean space. And ranges that are real numbers. The functions you can draw on a sheet of paper with some wiggly bits. Let the function wiggle as much as you want. Pick a part of it and zoom in close. That zoomed-in part will look straight. If it doesn’t look straight, zoom in closer.

    We rely on this. Functions that are straight, or at least straight enough, are easy to work with. We can do calculus on them. We can do analysis on them. Functions with plots that look like straight lines are easy to work with. Often the best approach to working with the function you’re interested in is to approximate it with an easy-to-work-with function. I bet it’ll be a polynomial. That serves us well. Polynomials are these continuous functions. They’re differentiable. They’re smooth.

    That thing about the Earth looking flat, though? That’s a lie. I’ve never been to any of the really great cuts in the Earth’s surface, but I have been to some decent gorges. I went to grad school in the Hudson River Valley. I’ve driven I-80 over Pennsylvania’s scariest bridges. There’s points where the surface of the Earth just drops a great distance between one footstep and the next.

    Functions do that too. We can have points where a function isn’t differentiable, where it’s impossible to define the direction it’s headed. We can have points where a function isn’t continuous, where it jumps from one region of values to another region. Everyone knows this. We can’t dismiss those as aberrations not worthy of the name “function”; too many of them are too useful. Typically we handle this by admitting there’s points that aren’t continuous and we chop the function up. We make it into a couple of functions, each stretching from discontinuity to discontinuity. Between them we have continuous regions and we can go about our business as before.

    Then came the 19th century when things got crazy. This particular craziness we credit to Karl Weierstrass. Weierstrass’s name is all over 19th century analysis. He had that talent for probing the limits of our intuition about basic mathematical ideas. We have a calculus that is logically rigorous because he found great counterexamples to what we had assumed without proving.

    The Weierstrass function challenges this idea that any function is going to eventually level out. Or that we can even smooth a function out into basically straight, predictable chunks in-between sudden changes of direction. The function is continuous everywhere; you can draw it perfectly without lifting your pen from paper. But it always looks like a zig-zag pattern, jumping around like it was always randomly deciding whether to go up or down next. Zoom in on any patch and it still jumps around, zig-zagging up and down. There’s never an interval where it’s always moving up, or always moving down, or even just staying constant.
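
    For the curious, the function Weierstrass presented can be written W(x) = \sum_{n=0}^{\infty} a^n \cos\left( b^n \pi x \right) , where 0 < a < 1, b is a positive odd integer, and ab > 1 + \frac{3}{2}\pi . The shrinking factors a^n make the sum converge everywhere, which gives the continuity. The rapidly growing factors b^n pile ever-faster wiggles on top of one another, and that is what destroys the derivative.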

    Despite being continuous it’s not differentiable. I’ve described that casually as it being impossible to predict where the function is going. That’s an abuse of words, yes. The function is defined. Its value at a point isn’t any more random than the value of “x^2” is for any particular x. The unpredictability I’m talking about here is a side effect of ignorance. Imagine I showed you a plot of “x^2” with a part of it concealed and asked you to fill in the gap. You’d probably do pretty well estimating it. The Weierstrass function, though? No; your guess would be lousy. My guess would be lousy too.

    That’s a weird thing to have happen. A century and a half later it’s still weird. It gets weirder. The Weierstrass function is nowhere differentiable; there is not even one point where its derivative exists. A close cousin, the function Riemann offered as his own candidate monster, goes a step stranger. It isn’t differentiable generally, but there are exceptions. There are little dots of differentiability, where the rate at which the function changes is known. Not intervals, though. Single points. This is crazy. Derivatives are about how a function changes. We work out what they should even mean by thinking of a function’s value on strips of the domain. Those strips are small, but they’re still, you know, strips. But on almost all of that strip the derivative isn’t defined. It’s only at isolated points, a set with measure zero, that this derivative even exists. It evokes the medieval Mysteries, of how we are supposed to try, even though we know we shall fail, to understand how God can have contradictory properties.

    It’s not quite that Mysterious here. Properties like this challenge our intuition, if we’ve gotten any. Once we’ve laid out good definitions for ideas like “derivative” and “continuous” and “limit” and “function” we can work out whether results like this make sense. And they — well, they follow. We can avoid weird conclusions like this, but at the cost of messing up our definitions for what a “function” and other things are. Making those useless. For the mathematical world to make sense, we have to change our idea of what quite makes sense.

    That’s all right. When we look close we realize the Earth around us is never flat. Even reasonably flat areas have slight rises and falls. The ends of properties are marked with curbs or ditches, and bordered by streets that rise to a center. Look closely even at the dirt and we notice that as level as it gets there are still rocks and scratches in the ground, clumps of dirt an infinitesimal bit higher here and lower there. The flatness of the Earth around us is a useful tool, but we miss a lot by pretending it’s everything. The Weierstrass function is one of the ways a student mathematician learns that while smooth, predictable functions are essential, there is much more out there.

     
  • Joseph Nebus 6:00 pm on Wednesday, 21 December, 2016 Permalink | Reply
    Tags: A-To-Z, compression, Markov Chains

    The End 2016 Mathematics A To Z: Voronoi Diagram 


    This is one I never heard of before grad school. And not my first year in grad school either; I was pretty well past the point I should’ve been out of grad school before I remember hearing of it, somehow. I can’t explain that.

    Voronoi Diagram.

    Take a sheet of paper. Draw two dots on it. Anywhere you like. It’s your paper. But here’s the obvious thing: you can divide the paper into the parts of it that are nearer to the first, or that are nearer to the second. Yes, yes, I see you saying there’s also a line that’s exactly the same distance between the two and shouldn’t that be a third part? Fine, go ahead. We’ll be drawing that in anyway. But here we’ve got a piece of paper and two dots and this line dividing it into two chunks.

    Now drop in a third point. Now every point on your paper might be closer to the first, or closer to the second, or closer to the third. Or, yeah, it might be on an edge equidistant between two of those points. Maybe even equidistant to all three points. It’s not guaranteed there is such a “triple point”, but if you weren’t picking points to cause trouble there probably is. You get the page divided up into three regions that you say are coming together in a triangle before realizing that no, it’s a Y intersection. Or else the regions are three strips and they don’t come together at all.

    What if you have four points … You should get four regions. They might all come together in one grand intersection. Or they might come together at weird angles, two and three regions touching each other. You might get a weird one where there’s a triangle in the center and three regions that go off to the edge of the paper. Or all sorts of fun little abstract flag icons, maybe. It’s hard to say. If we had, say, 26 points all sorts of weird things could happen.

    These weird things are Voronoi Diagrams. They’re a partition of some surface. Usually it’s a plane or some well-behaved subset of the plane like a sheet of paper. The partitioning is into polygons. Exactly one of the points you start with is inside each of the polygons. And everything else inside that polygon is nearer to its one contained starting point than it is any other point. All you need for the diagram are your original points and the edges dividing spots between them. But the thing begs to be colored. Give in to it and you have your own, abstract, stained-glass window pattern. So I’m glad to give you some useful mathematics to play with.
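
    If you want to grow your own stained-glass pattern, here is a brute-force sketch in Python. It is not the clever computational-geometry way of building the diagram; every cell of a character grid simply reports which starting point it is nearest, and the regions emerge on their own:

        import random

        WIDTH, HEIGHT, SEEDS = 60, 30, 8

        # Scatter a few starting points over the "sheet of paper".
        points = [(random.uniform(0, WIDTH), random.uniform(0, HEIGHT))
                  for _ in range(SEEDS)]

        def nearest(x, y):
            """Index of the starting point closest to (x, y)."""
            return min(range(len(points)),
                       key=lambda i: (points[i][0] - x) ** 2 + (points[i][1] - y) ** 2)

        # Label every grid cell by its nearest starting point.
        # The blocks of repeated letters are the Voronoi regions.
        for y in range(HEIGHT):
            print(''.join(chr(ord('A') + nearest(x, y)) for x in range(WIDTH)))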

    Voronoi diagrams turn up naturally whenever you want to divide up space by the shortest route to get something. Sometimes this is literally so. For example, a radio picking up two FM signals will lock onto the stronger of the two. That’s the capture effect in FM receivers. If the two signals are transmitted with equal strength, then the receiver will pick up on whichever the nearer signal is. And unless the other mathematicians who’ve talked about this were just as misinformed, cell phones pick which signal tower to communicate with by which one has the stronger signal. If you could look at what tower your cell phone communicates with as you move around, you would produce a Voronoi diagram of cell phone towers in your area.

    Mathematicians hoping to get credit for a good thing may also bring up Dr John Snow’s famous halting of an 1854 cholera epidemic in London. He did this by tracking cholera outbreaks and measuring their proximity to public water pumps. He shut down the water pump at the center of the severest outbreak and the epidemic soon stopped. One could claim this as a triumph for Voronoi diagrams, although Snow could not have had this tool in mind. Georgy Voronoy (yes, the spelling isn’t consistent. Fashions in transliterating Eastern European names — Voronoy was Ukrainian and worked in Warsaw when Poland was part of the Russian Empire — have changed over the years) wasn’t even born until 1868. And it doesn’t require great mathematical insight to look for the things an infected population has in common. But mathematicians need some tales of heroism too. And it isn’t as though we’ve run out of epidemics with sources that need tracking down.

    Voronoi diagrams turned out to be useful in my own meager research. I needed to model the flow of a fluid over a whole planet, but could only do so with a modest number of points to represent the whole thing. Scattering points over the planet was easy enough. To represent the fluid over the whole planet as a collection of single values at a couple hundred points required this Voronoi-diagram type division. … Well, it used them anyway. I suppose there might have been other ways. But I’d just learned about them and was happy to find a reason to use them. Anyway, this is the sort of technique often used to turn information about a single point into approximate information about a region.

    (And I discover some amusing connections here. Voronoy’s thesis advisor was Andrey Markov, the person Markov Chains are named for. You know those as those predictive-word things that are kind of amusing for a while. Markov Chains were part of the tool I used to scatter points over the whole planet. Also, Voronoy’s thesis was On A Generalization Of A Continued Fraction, so, hi, Gaurish! … And one of Voronoy’s doctoral students was Wacław Sierpiński, famous for fractals and normal numbers.)

    Voronoi diagrams have a lot of beauty to them. Some of it is subtle. Take a point inside its polygon and look to a neighboring polygon. Where is the representative point inside that neighbor polygon? … There’s only one place it can be. It’s got to be exactly as far as the original point is from the edge between them, and it’s got to be along the direction perpendicular to the edge between them. It’s where you’d see the reflection of the original point if the border between them were a mirror. And that has to apply to all the polygons and their neighbors.

    From there it’s a short step to wondering: imagine you knew the edges. The mirrors. But you don’t know the original points. Could you figure out where the representative points must be to fit that diagram? … Or at least some points where they may be? This is the inverse problem, and it’s how I first encountered them. This inverse problem allows nice stuff like compression algorithms. Remember my description of the result of a Voronoi diagram being a stained glass window image? There’s no reason a stained glass image can’t be quite good, if we have enough points and enough gradations of color. And storing a bunch of points and the color for the region is probably less demanding than storing the color information for every point in the original image.

    If we want images. Many kinds of data turn out to work pretty much like pictures, set up right.

     
    • gaurish 5:10 am on Thursday, 22 December, 2016 Permalink | Reply

      I didn’t know that Voronoy’s thesis was on continued fractions :) A few months ago, I was delighted to see the application of the Voronoi Diagram to find the answer to this simple geometry problem about maximization: http://math.stackexchange.com/a/1812338/214604


      • Joseph Nebus 5:11 am on Thursday, 5 January, 2017 Permalink | Reply

        I did not know either, until I started writing the essay. I’m glad for the side bits of information I get in writing this sort of thing.

        And I’m delighted to see the problem. I didn’t think of Voronoi diagrams as a way to study maximization problems but obviously, yeah, they would be.


    • gaurish 5:04 am on Tuesday, 3 January, 2017 Permalink | Reply

      • Joseph Nebus 5:40 am on Thursday, 5 January, 2017 Permalink | Reply

        You are quite right; I do like that. And it even has a loose connection as it is to my original thesis and its work; part of the problem was getting points spread out uniformly on a plane without them spreading out infinitely far, that is, getting them to cluster according to some imposed preference. It wasn’t artistic except in the way abstract mathematics is a bit artistic.


  • Joseph Nebus 6:00 pm on Monday, 19 December, 2016 Permalink | Reply
    Tags: A-To-Z, links

    The End 2016 Mathematics A To Z: Unlink 


    This is going to be a fun one. It lets me get into knot theory again.

    Unlink.

    An unlink is what knot theorists call that heap of loose rubber bands in that one drawer compartment.

    The longer way around. It starts with knots. I love knots. If I were stronger on abstract reasoning and weaker on computation I’d have been a knot theorist. Or at least a graph theorist. The mathematical idea of a knot is inspired by a string tied together. In making it a mathematical idea we perfect the string. It becomes as thin as a line, though it curves as much as we want. It can stretch out or squash down as much as we want. It slides frictionlessly against itself. Gravity doesn’t make it drop any. This removes the hassles of real-world objects from it. It also means actual strings or yarns or whatever can’t be knots anymore. Only something that’s a loop which closes back on itself can be a knot. The knot you might make in a shoelace, to use an example, could be undone by pushing the tip back through the ‘knot’. Since our mathematical string is frictionless we can do that, effortlessly. We’re left with nothing.

    But you can create a pretty good approximation to a mathematical knot if you have some kind of cable that can be connected to its own end. Loop the thing around as you like, connect end to end, and you’ve got it. I recommend the glow sticks sold for people to take to parties or raves or the like. They’re fun. If you tie it up so that the string (rope, glow stick, whatever) can’t spread out into a simple O shape no matter how you shake it up (short of breaking the cable) then you have a knot. There are many of them. Trefoil knots are probably the easiest to get, but if you’re short on inspiration try looking at Celtic knot patterns.

    But if the string can be shaken out until it’s a simple O shape, the sort of thing you can place flat on a table, then you have an unknot. Just from this vocabulary you see why I like the subject so. Since this hasn’t quite got silly enough, let me assure you that an unknot is itself a kind of knot; we call it the trivial knot. It’s the knot that’s too simple to be a knot. I’m sure you were worried about that. I only hear people call it an unknot, but maybe there are heritages that prefer “trivial knot”.

    So that’s knots. What happens if you have more than one thing, though? What if you have a couple of string-loops? Several cables. We know these things can happen in the real world, since we’ve looked behind the TV set or the wireless router and we know there’s somehow more cables there than there are even things to connect.

    Even mathematicians wouldn’t want to ignore something so caught up in real-world implications. And we don’t. We get to them after we’re pretty comfortable working with knots. Describing them, working out the theoretical tools we’d use to un-knot a proper knot (spoiler: we cut things), coming up with polynomials that describe them, that sort of thing. When we’re ready for a new trick there we consider what happens if we have several knots. We call this bundle of knots a “link”. Well, what would you call it?

    A link is a collection of knots. By talking about a link we expect that at least some of the knots are going to loop around each other. This covers a lot of possibilities. We could picture one of those construction-paper chains, made of intertwined loops, that are good for elementary school craft projects as a link. We can picture a keychain with a bunch of keys dangling from it as a link. (Imagine each key is a knot, just made of a very fat, metal “string”. C’mon, you can give me that.) The mass of cables hiding behind the TV stand is not properly a link, since it’s not properly made out of knots. But if you can imagine taking the ends of each of those wires and looping them back to their origins, then the somehow vaster mess you get from that would be a link again.

    And then we come to an “unlink”. This has two pieces. The first is that it’s a collection of knots, yes, but knots that don’t interlink. We can pull them apart without any of them tugging the others along. The second piece is that each of the knots is itself an unknot. Trivial knots. Whichever you like to call them.

    The “unlink” also gets called the “trivial link”, since it’s as boring a link as you can imagine. Manifested in the real world, well, an unbroken rubber band is a pretty good unknot. And a pile of unbroken rubber bands will therefore be an unlink.

    If you get into knot theory you end up trying to prove stuff about complicated knots, and complicated links. Often these are easiest to prove by chopping up the knot or the link into something simpler. Maybe you chop those smaller pieces up again. And you can’t get simpler than an unlink. If you can prove whatever you want to show for that then you’ve got a step done toward proving your whole actually interesting thing. This is why we see unknots and unlinks enough to give them names and attention.

     
  • Joseph Nebus 6:00 pm on Friday, 16 December, 2016 Permalink | Reply
    Tags: A-To-Z, , , , , trees   

    The End 2016 Mathematics A To Z: Tree 


    Graph theory begins with a beautiful legend. I have no reason to suppose it’s false, except my natural suspicion of beautiful legends as origin stories. Its organization as a field is traced to 18th century Königsberg, where seven bridges connected the banks of a river and a small island in the center. Whether it was possible to cross each bridge exactly once and get back where one started was, they say, a pleasant idle thought to ponder and a path to try walking. Then Leonhard Euler solved the problem. It’s impossible.

    Tree.

    Graph theory arises whenever we have a bunch of things that can be connected. We call the things “vertices”, because that’s a good corner-type word. The connections we call “edges”, because that’s a good connection-type word. It’s easy to create graphs that look like the edges of a crystal, especially if you draw edges as straight as much as possible. You don’t have to. You can draw them curved. Then they look like the scary tangle of wires around your wireless router.

    Graph theory really got organized in the 19th century, and went crazy in the 20th. It turns out there’s lots of things that connect to other things. Networks, whether computers or social or thematically linked concepts. Anything that has to be delivered from one place to another. All the interesting chemicals. Anything that could be put in a pipe or taken on a road has some graph theory thing applicable to it.

    A lot of graph theory ponders loops. The original problem was about how to use every bridge, every edge, exactly one time. Look at a tangled mass of a graph and it’s hard not to start looking for loops. They’re often interesting. It’s not easy to tell if there’s a loop that lets you get to every vertex exactly once.

    What if there aren’t loops? What if there aren’t any vertices you can step away from and get back to by another route? Well, then you have a tree.

    A tree’s a graph where all the vertices are connected so that there aren’t any closed loops. We normally draw them with straight lines, the better to look like actual trees. We then stop trying to make them look like actual trees by doing stuff like drawing them as a long horizontal spine with a couple branches sticking off above and below, or as * type stars, or H shapes. They still correspond to real-world things. If you’re not sure how, consider the layout of one of those long, single-corridor hallways as in a hotel or dormitory. The rooms connect to one another as a tree, once again, as long as no room opens to anything but its own closet or bathroom or the central hallway.

    We can talk about the radius of a graph. That’s how many edges away any vertex can be from the center. And every tree has a center. Or two centers. If it has two centers they share an edge between the two. And that’s one of the quietly amazing things about trees to me. However complicated and messy the tree might be, we can find its center. How many things allow us that?
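
    Here’s a little Python sketch of finding that center, if you’d like to watch it happen. The trick is to pluck off all the leaves, a layer at a time, until one or two vertices survive. The example tree is one I made up.

        # Find a tree's center by peeling away leaves, layer by layer.
        from collections import defaultdict

        def tree_centers(edges):
            neighbors = defaultdict(set)
            for a, b in edges:
                neighbors[a].add(b)
                neighbors[b].add(a)
            remaining = set(neighbors)
            leaves = {v for v in remaining if len(neighbors[v]) <= 1}
            while len(remaining) > 2:
                remaining -= leaves
                new_leaves = set()
                for leaf in leaves:
                    for nb in neighbors[leaf]:
                        neighbors[nb].discard(leaf)
                        if nb in remaining and len(neighbors[nb]) == 1:
                            new_leaves.add(nb)
                leaves = new_leaves
            return remaining           # one vertex, or two sharing an edge

        print(tree_centers([(1, 2), (2, 3), (3, 4), (3, 5), (5, 6)]))  # {3}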

    A tree might have some special vertex. That’s called the ‘root’. It’s what the vertices and the connections represent that make a root; it’s not something inherent in the way trees look. We pick one for some special reason and then we highlight it. Maybe put it at the bottom of the drawing, making ‘root’ for once a sensible name for a mathematics thing. Often we put it at the top of the drawing, because I guess we’re just being difficult. Well, we do that because we were modelling stuff where a thing’s properties depend on what it comes from. And that puts us into thoughts of inheritance and of family trees. And weird as it is to put the root of a tree at the top, it’s also weird to put the eldest ancestors at the bottom of a family tree. People do it, but in those illuminated drawings that make a literal tree out of things. You don’t see it in family trees used for actual work, like filling up a couple pages at the start of a king or a queen’s biography.

    Trees give us neat new questions to ponder, like, how many are there? I mean, if you have a certain number of vertices then how many ways are there to arrange them? One or two or three vertices all have just the one way to arrange them. Four vertices can be hooked up a whole two ways. Five vertices offer a whole three different ways to connect them. Six vertices offer six ways to connect and now we’re finally getting something interesting. There’s eleven ways to connect seven vertices, and 23 ways to connect eight vertices. The number keeps on rising, but it doesn’t follow the obvious patterns for growth of this sort of thing.

    And if that’s not enough to idly ponder then think of destroying trees. Draw a tree, any shape you like. Pick one of the vertices. Imagine you obliterate that. How many separate pieces has the tree been broken into? It might be as few as one, if you picked a vertex at the tip of a branch. It might be as many as the number of remaining vertices. (It’s exactly the number of edges that met the vertex you obliterated.) If graph theory took away the pastime of wandering around Königsberg’s bridges, it has given us this pastime we can create anytime we have pen, paper, and a long meeting.
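
    And here’s a Python sketch of the obliteration game, with a made-up tree. It counts the pieces left behind using a small union-find; the count comes out to however many edges met the vertex you destroyed.

        # Obliterate one vertex of a tree and count the pieces left over.
        def pieces_after_removal(edges, doomed):
            vertices = {v for e in edges for v in e} - {doomed}
            surviving = [e for e in edges if doomed not in e]
            parent = {v: v for v in vertices}     # a small union-find
            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v
            for a, b in surviving:
                parent[find(a)] = find(b)
            return len({find(v) for v in vertices})

        edges = [(1, 2), (2, 3), (3, 4), (3, 5), (5, 6)]
        print(pieces_after_removal(edges, 3))  # 3 pieces; three edges met vertex 3
        print(pieces_after_removal(edges, 6))  # 1 piece; a leaf leaves the rest whole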

     
  • Joseph Nebus 6:00 pm on Wednesday, 14 December, 2016 Permalink | Reply
    Tags: A-To-Z, , , ,   

    The End 2016 Mathematics A To Z: Smooth 


    Mathematicians affect a pose of objectivity. We justify this by working on things whose truth we can know, and which must be true whenever we accept certain rules of deduction and certain definitions and axioms. This seems fair. But we choose to pay attention to things that interest us for particular reasons. We study things we like. My A To Z glossary term for today is about one of those things we like.

    Smooth.

    Functions. Not everything mathematicians do is functions. But functions turn up a lot. We need to set some rules. “A function” is so generic a thing we can’t handle it much. Narrow it down. Pick functions with domains that are numbers. Range too. By numbers I mean real numbers, maybe complex numbers. That gives us something.

    There are functions that are hard to work with. This is almost all of them, so we don’t touch them unless we absolutely must. These are the functions that aren’t continuous. That means what you imagine. The value of the function at some point is wholly unrelated to its value at some nearby point. It’s hard to work with anything that’s unpredictable like that. Functions as well as people.

    We like functions that are continuous. They’re predictable. We can make approximations. We can estimate the function’s value at some point using its value at some more convenient point. It’s easy to see why that’s useful for numerical mathematics, for calculations to approximate stuff. The dazzling thing is it’s useful analytically. We step into the Platonic-ideal world of pure mathematics. We have tools that let us work as if we had infinitely many digits of precision, for infinitely many numbers at once. And yet we use estimates and approximations and errors. We use them in ways to give us perfect knowledge; we get there by estimates.

    Continuous functions are nice. Well, they’re nicer to us than functions that aren’t continuous. But there are even nicer functions. Functions nicer to us. A continuous function, for example, can have corners; it can change direction suddenly and without warning. A differentiable function is more predictable. It can’t have corners like that. Knowing the function well at one point gives us more information about what it’s like nearby.

    The derivative of a function doesn’t have to be continuous. Grumble. It’s nice when it is, though. It makes the function easier to work with. It’s really nice for us when the derivative itself has a derivative. Nothing guarantees that the derivative of a derivative is continuous. But maybe it is. Maybe the derivative of the derivative has a derivative. That’s a function we can do a lot with.
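
    There’s a classic example, if you want to see that grumble in action: the function f(x) = x^2 sin(1/x), patched so that f(0) = 0, has a derivative at every point, but that derivative never settles down near zero. A little Python peek at it, nothing authoritative, just the standard math module:

        # f(x) = x^2 sin(1/x), with f(0) = 0, is differentiable everywhere.
        # Its derivative, 2x sin(1/x) - cos(1/x), never settles near zero.
        import math

        def f_prime(x):
            return 2 * x * math.sin(1 / x) - math.cos(1 / x)

        for x in (1e-2, 1e-4, 1e-6, 1e-8):
            print(x, f_prime(x))       # the values refuse to converge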

    A function is “smooth” if it has as many derivatives as we need for whatever it is we’re doing. And if those derivatives are continuous. If this seems loose that’s because it is. A proof for whatever we’re interested in might need only the original function and its first derivative. It might need the original function and its first, second, third, and fourth derivatives. It might need hundreds of derivatives. If we look through the details of the proof we might find exactly how many derivatives we need and how many of them need to be continuous. But that’s tedious. We save ourselves considerable time by saying the function is “smooth”, as in, “smooth enough for what we need”.

    If we do want to specify how many continuous derivatives a function has we call it a “C^k function”. The C here means continuous. The ‘k’ counts how many continuous derivatives it has. This is completely different from a “C^k function” in which the C is boldface, which would be one whose values are k-dimensional complex vectors. Whether the “C” is boldface or not is important. A function might have infinitely many continuous derivatives. That we call a “C^∞ function”. That’s got wonderful properties, especially if the domain and range are complex-valued numbers. We couldn’t do Complex Analysis without it. Complex Analysis is the course students take after wondering how they’ll ever survive Real Analysis. It’s much easier than Real Analysis. Mathematics can be strange.

     
  • Joseph Nebus 6:00 pm on Monday, 12 December, 2016 Permalink | Reply
    Tags: A-To-Z, , , definite integrals,   

    The End 2016 Mathematics A To Z: Riemann Sum 


    I see for the other A To Z I did this year I did something else named for Riemann. So I did. Bernhard Riemann did a lot of work that’s essential to how we see mathematics today. We name all kinds of things for him, and correctly so. Here’s one of his many essential bits of work.

    Riemann Sum.

    The Riemann Sum is a thing we learn in Intro to Calculus. It’s essential in getting us to definite integrals. We’re introduced to it in functions of a single variable. The functions have a domain that’s an interval of real numbers and a range that’s somewhere in the real numbers. The Riemann Sum — and from it, the integral — is a real number.

    We get this number by following a couple steps. The first is we chop the interval up into a bunch of smaller intervals. That chopping-up we call a “partition”, because it’s another of those times mathematicians use a word pretty much the way everyday people use it. From each one of those chopped-up pieces we pick a representative point. Now for each piece evaluate the function at its representative point. Multiply that by the width of the piece it was in. Then take those products for each of those pieces and add them all together. If you’ve done it right you’ve got a number.
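
    The recipe is only a few lines of Python, if you’d like it spelled out. The function, interval, partition, and representative points here are all just examples I made up:

        # Chop [a, b] into pieces, pick a point in each, add value times width.
        def riemann_sum(f, partition, representatives):
            total = 0.0
            for left, right, point in zip(partition, partition[1:],
                                          representatives):
                total += f(point) * (right - left)
            return total

        f = lambda x: x ** 2
        partition = [0.0, 0.3, 0.5, 0.9, 1.0]   # one partition of [0, 1]
        reps = [0.1, 0.4, 0.7, 0.95]            # a point inside each piece
        print(riemann_sum(f, partition, reps))  # about 0.321, near the true 1/3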

    You need a couple pieces in place to have “the” Riemann Sum for something. You need a function, which is fair enough. And you need a partitioning of the interval. And you need some representative point for each of the pieces. Change any of them — function, partition, or point — and you may change the sum you get. You expect that for changing the function. Changing the partition? That’s less obvious. But draw some wiggly curvy function on a sheet of paper. Draw a couple of partitions of the horizontal axis. (You’ll probably want to use different colors for different partitions.) That should coax you into seeing it. And you’d probably take it on my word that different representative points give you different sums.

    Very different? It’s possible. There’s nothing stopping it from happening. But if the results aren’t very different then we might just have an integrable function. That’s a function that gives us the same Riemann Sum no matter how we pick representative points, as long as we pick partitions that get finer and finer enough. We measure how fine a partition is by how big the widest chopped-up piece is. To be integrable the Riemann Sum for a function has to get to the same number whenever the partition’s size gets small enough and however we pick points inside. We get the lovely quiet paradox in which we add together infinitely many things, each of them infinitesimally tiny, and get a regular old number out of all that work.

    We use the Riemann Sum for what we call numerical quadrature. That’s working out integrals on the computer. Or calculator. Or by hand. When we do it by evaluating numbers instead of using analysis. It’s very easy to program. And we can do some tricks based on the Riemann Sum to make the numerical estimate a closer match to the actual integral.

    And we use the Riemann Sum to learn how the Riemann Integral works. It’s a blessedly straightforward thing. It appeals to intuition well. It lets us draw all sorts of curves with rectangular boxes overlaying them. It’s so easy to work out the area of a rectangular box. We can imagine adding up these areas without being confused.

    We don’t use the Riemann Sum to actually do integrals, though. Numerical approximations to an integral, yes. For the actual integral it’s too hard to use. What makes it hard is you need to evaluate this for every possible partition and every possible pick of representative points. In grad school my analysis professor worked through — once — using this to integrate the number 1. This is the easiest possible thing to integrate and it was barely manageable. He gave a good try at integrating the function ‘f(x) = x’ but admitted he couldn’t do it. None of us could.

    When you see the Riemann Sum in an Introduction to Calculus course you see it in simplified form. You get partitions that are very easy to work with. Like, you break the interval up into some number of equally-sized chunks. You get representative points that follow one of a couple good choices. The left end of the partition. The right end of the partition. The middle of the partition.

    That’s fine, numerically. If the function is integrable it doesn’t matter what partition or representative points we pick. And it’s fine for learning about whether functions are integrable. If it matters whether you pick left or middle or right ends of the partition then the function isn’t integrable. The instructor can give functions that break integrability based on a given partition or endpoint choice or whatever.

    But that isn’t every possible partition and every possible pick of representative points. I suppose it’s possible to work all that out for a couple of really, really simple functions. But it’s so much work. We’re better off using the Riemann Sum to get to formulas about integrals that don’t depend on actually using the Riemann Sum.

    So that is the curious position the Riemann Sum has. It is a fundament of integral calculus. It is the way we first define the definite integral. We rely on it to learn what definite integrals are like. We use it all the time numerically. We never use it analytically. It’s too hard. I hope you appreciate the strange beauty of that.

     
  • Joseph Nebus 6:00 pm on Friday, 9 December, 2016 Permalink | Reply
    Tags: A-To-Z, , , evens, , , normal subgroups, odds,   

    The End 2016 Mathematics A To Z: Quotient Groups 


    I’ve got another request today, from the ever-interested and group-theory-minded gaurish. It’s another inspirational one.

    Quotient Groups.

    We all know about even and odd numbers. We don’t have to think about them. That’s why it’s worth discussing them some.

    We do know what they are, though. The integers — whole numbers, positive and negative — we can split into two sets. One of them is the even numbers, two and four and eight and twelve. Zero, negative two, negative six, negative 2,038. The other is the odd numbers, one and three and nine. Negative five, negative nine, negative one.

    What do we know about numbers, if all we look at is whether numbers are even or odd? Well, we know every integer is either an odd or an even number. It’s not both; it’s not neither.

    We know that if we start with an even number, its negative is also an even number. If we start with an odd number, its negative is also an odd number.

    We know that if we start with a number, even or odd, and add to it its negative then we get an even number. A specific number, too: zero. And that zero is interesting because any number plus zero is that same original number.

    We know we can add odds or evens together. An even number plus an even number will be an even number. An odd number plus an odd number is an even number. An odd number plus an even number is an odd number. And subtraction is the same as addition, by these lights. One number minus another number is just one number plus the negative of the other number. So even minus even is even. Odd minus odd is even. Odd minus even is odd.

    We can pluck out some of the even and odd numbers as representative of these sets. We don’t want to deal with big numbers, nor do we want to deal with negative numbers if we don’t have to. So take ‘0’ as representative of the even numbers. ‘1’ as representative of the odd numbers. 0 + 0 is 0. 0 + 1 is 1. 1 + 0 is 1. The addition is the same thing we would do with the original set of integers. 1 + 1 would be 2, which is one of the even numbers, which we represent with 0. So 1 + 1 is 0. If we’ve picked out just these two numbers each is the minus of itself: 0 – 0 is 0 + 0. 1 – 1 is 1 + 1. All that gives us 0, like we should expect.

    Two paragraphs back I said something that’s obvious, but deserves attention anyway. An even plus an even is an even number. You can’t get an odd number out of it. An odd plus an odd is an even number. You can’t get an odd number out of it. There’s something fundamentally different between the even and the odd numbers.

    And now, kindly reader, you’ve learned quotient groups.

    OK, I’ll do some backfilling. It starts with groups. A group is the most skeletal cartoon of arithmetic. It’s a set of things and some operation that works like addition. The thing-like-addition has to work on pairs of things in your set, and it has to give something else in the set. There has to be a zero, something you can add to anything without changing it. We call that the identity, or the additive identity, because it doesn’t change something else’s identity. It makes sense if you don’t stare at it too hard. Everything has an additive inverse. That is, everything has a “minus”, something you can add to it to get zero.

    With odd and even numbers the set of things is the integers. The thing-like-addition is, well, addition. I said groups were based on how normal arithmetic works, right?

    And then you need a subgroup. A subgroup is … well, it’s a subset of the original group that’s itself a group. It has to use the same addition the original group does. The even numbers are such a subgroup of the integers. Formally they make something called a “normal subgroup”, which is a little too much for me to explain right now. If your addition works like it does for normal numbers, that is, “a + b” is the same thing as “b + a”, then all your subgroups are normal subgroups. Yes, it can happen that they’re not. If the addition is something like rotations in three-dimensional space, or swapping the order of things, then the order you “add” things in matters.

    We make a quotient group by … OK, this isn’t going to sound like anything. It’s a group, though, like the name says. It uses the same addition that the original group does. Its set, though, that’s itself made up of sets. One of the sets is the normal subgroup. That’s the easy part.

    Then there’s something called cosets. You make a coset by picking something from the original group and adding it to everything in the subgroup. If the thing you pick was from the original subgroup that’s just going to be the subgroup again. If you pick something outside the original subgroup then you’ll get some other set.

    Starting from the subgroup of even numbers there’s not a lot to do. You can get the even numbers and you get the odd numbers. Doesn’t seem like much. We can do otherwise though. Suppose we start from the subgroup of numbers divisible by 4. That’s 0, 4, 8, 12, -4, -8, -12, and so on. Now there are four cosets we can make from that. We can start with the original set of numbers. Or we have 1 plus that set: 1, 5, 9, 13, -3, -7, -11, and so on. Or we have 2 plus that set: 2, 6, 10, 14, -2, -6, -10, and so on. Or we have 3 plus that set: 3, 7, 11, 15, -1, -5, -9, and so on. None of these others are subgroups, which is why we don’t call them subgroups. We call them cosets.

    These collections of cosets, though, they’re the pieces of a new group. The quotient group. One of them, the normal subgroup you started with, is the identity, the thing that’s as good as zero. And you can “add” the cosets together, in just the same way you can add “odd plus odd” or “odd plus even” or “even plus even”.

    For example. Let me start with the numbers divisible by 4. I will have so much a better time if I give this a name. I’ll pick ‘Q’. This is because, you know, quarters, quartet, quadrilateral, this all sounds like four-y stuff. The integers — the integers have a couple of names. ‘I’, ‘J’, and ‘Z’ are the most common ones. We get ‘Z’ from German; a lot of important group theory was done by German-speaking mathematicians. I’m used to it so I’ll stick with that. The quotient group ‘Z / Q’, read “Z modulo Q”, has (it happens) four cosets. One of them is Q. One of them is “1 + Q”, that set 1, 5, 9, and so on. Another of them is “2 + Q”, that set 2, 6, 10, and so on. And the last is “3 + Q”, that set 3, 7, 11, and so on.

    And you can add them together. 1 + Q plus 1 + Q turns out to be 2 + Q. Try it out, you’ll see. 1 + Q plus 2 + Q turns out to be 3 + Q. 2 + Q plus 2 + Q is Q again.
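
    If code helps, here’s a tiny Python sketch of that coset arithmetic. It pins each coset down by its canonical representative, 0 or 1 or 2 or 3, and adds cosets by adding representatives and reducing again. The function names are mine, nothing standard:

        # Cosets of Q = { multiples of 4 } inside Z, done by representatives.
        def coset(a):
            return a % 4               # canonical representative of a + Q

        def add_cosets(a, b):
            return coset(a + b)        # add representatives, reduce again

        print(add_cosets(1, 1))        # 2: (1 + Q) + (1 + Q) = 2 + Q
        print(add_cosets(1, 2))        # 3: (1 + Q) + (2 + Q) = 3 + Q
        print(add_cosets(2, 2))        # 0: (2 + Q) + (2 + Q) = Q itself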

    The quotient group uses the same addition as the original group. But it doesn’t add together elements of the original group, or even of the normal subgroup. It adds together sets made from the normal subgroup. We’ll denote them using some form that looks like “a + N”, or maybe “a N”, if ‘N’ was the normal subgroup and ‘a’ something that wasn’t in it. (Sometimes it’s more convenient writing the group operation like it was multiplication, because we do that by not writing anything at all, which saves us from writing stuff.)

    If we’re comfortable with the idea that “odd plus odd is even” and “even plus odd is odd” then we should be comfortable with adding together quotient groups. We’re not, not without practice, but that’s all right. In the Introduction To Not That Kind Of Algebra course mathematics majors take they get a lot of practice, just in time to be thrown into rings.

    Quotient groups land on the mathematics major as a baffling thing. They don’t actually turn up things from the original group. And they lead into important theorems. But to an undergraduate they all look like text huddling up to ladders of quotient groups. We’re told these are important theorems and they are. They also go along with beautiful diagrams of how these quotient groups relate to each other. But they’re hard going. It’s tough finding good examples and almost impossible to explain what a question is. It comes as a relief to be thrown into rings. By the time we come back around to quotient groups we’ve usually had enough time to get used to the idea that they don’t seem so hard.

    Really, looking at odds and evens, they shouldn’t be so hard.

     
    • gaurish 9:10 am on Saturday, 10 December, 2016 Permalink | Reply

      Thanks! When I first learnt about quotient groups (two years ago) I visualized them as the equivalence classes we create so as to have a better understanding of a bigger group (since my study of algebra has been motivated by its need in Number theory as a generalization of modulo arithmetic). Then the isomorphism theorems just changed the way I look at quotient of an algebraic structure. See: http://math.stackexchange.com/q/1816921/214604


      • Joseph Nebus 5:47 am on Saturday, 17 December, 2016 Permalink | Reply

        I’m glad that you liked it. I do think equivalence classes are the easiest way into quotient groups — it’s essentially what I did here — but that’s because people get introduced to equivalence classes without knowing what they are. Odd and even numbers, for example, or checking arithmetic by casting out nines are making use of these classes. Isomorphism theorems are great and substantial but they do take so much preparation to get used to. Probably shifting from the first to the second is the sign of really mastering the idea of a quotient group.


  • Joseph Nebus 6:00 pm on Wednesday, 7 December, 2016 Permalink | Reply
    Tags: A-To-Z, , , inverse functions, principal values,   

    The End 2016 Mathematics A To Z: Principal 


    Functions. They’re at the center of so much mathematics. They have three pieces: a domain, a range, and a rule. The one thing functions absolutely must do is match stuff in the domain to one and only one thing in the range. So this is where it gets tricky.

    Principal.

    Thing with this one-and-only-one thing in the range is it’s not always practical. Sometimes it only makes sense to allow for something in the domain to match several things in the range. For example, suppose we have the domain of positive numbers. And we want a function that gives us the numbers which, squared, equal whatever the original number was. For any positive real number there’s two numbers that do that. 4 should match to both +2 and -2.

    You might ask why I want a function that tells me the numbers which, squared, equal something. I ask back, what business is that of yours? I want a function that does this and shouldn’t that be enough? We’re getting off to a bad start here. I’m sorry; I’ve been running ragged the last few days. I blame the flat tire on my car.

    Anyway. I’d want something like that function because I’m looking for what state of things makes some other thing true. This turns up often in “inverse problems”, problems in which we know what some measurement is and want to know what caused the measurement. We do that sort of problem all the time.

    We can handle these multi-valued functions. Of course we can. Mathematicians are as good at loopholes as anyone else is. Formally we declare that the range isn’t the real numbers but rather sets of real numbers. My what-number-squared function then matches ‘4’ in the domain to the set of numbers ‘+2 and -2’. The set has several things in it, but there’s just the one set. Clever, huh?

    This sort of thing turns up a lot. There’s two numbers that, squared, give us any real number (except zero). There’s three numbers that, cubed, give us any real number (again except zero). Polynomials might have a whole bunch of numbers that make some equation true. Trig functions are worse. The tangent of 45 degrees equals 1. So is the tangent of 225 degrees. Also 405 degrees. Also -45 degrees. Also -585 degrees. OK, a mathematician would use radians instead of degrees, but that just changes what the numbers are. It doesn’t change that there are infinitely many of them.

    It’s nice to have options. We don’t always want options. Sometimes we just want one blasted simple answer to things. It’s coded into the language. We say “the square root of four”. We speak of “the arctangent of 1”, which is to say, “the angle with tangent of 1”. We only say “all square roots of four” if we’re making a point about overlooking options.

    If we’ve got a set of things, then we can pick out one of them. This is obvious, which means it is so very hard to prove. We just have to assume we can. Go ahead; assume we can. Our pick of the one thing out of this set is the “principal”. It’s not any more inherently right than the other possibilities. It’s just the one we choose to grab first.

    So. The principal square root of four is positive two. The principal arctangent of 1 is 45 degrees, or in the dialect of mathematicians π divided by four. We pick these values over other possibilities because they’re nice. What makes them nice? Well, they’re nice. Um. Most of their numbers aren’t that big. They use positive numbers if we have a choice in the matter. Deep down we still suspect negative numbers of being up to something.

    If nobody says otherwise then the principal square root is the positive one, or the one with a positive number in front of the imaginary part. If nobody says otherwise the principal arcsine is between -90 and +90 degrees (-π/2 and π/2). The principal arccosine is between 0 and 180 degrees (0 and π), unless someone says otherwise. The principal arctangent is … between -90 and 90 degrees, unless it’s between 0 and 180 degrees. You can count on the 0 to 90 part. Use your best judgement and roll with whatever develops for the other half of the range there. There’s not one answer that’s right for every possible case. The point of a principal value is to pick out one answer that’s usually a good starting point.
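
    Programming languages bake these conventions in, by the way. Here’s a small Python check, using the standard math and cmath modules; each call hands back the principal value and nothing else:

        # Python's math and cmath modules return principal values.
        import cmath
        import math

        print(math.sqrt(4))    # 2.0, never -2.0
        print(math.atan(1))    # 0.785..., i.e. pi/4, not 5*pi/4 or -3*pi/4
        print(math.asin(1))    # pi/2; asin always lands in [-pi/2, pi/2]
        print(math.acos(-1))   # pi; acos always lands in [0, pi]
        print(cmath.sqrt(-4))  # 2j: a positive number in front of the imaginary part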

    When you stare at what it means to be a function you realize that there’s a difference between the original function and the one that returns the principal value. The original function has a range that’s “sets of values”. The principal-value version has a range that’s just one value. If you’re being kind to your audience you make some note of that. Usually we note this by capitalizing the start of the function: “arcsin z” gives way to “Arcsin z”. “Log z” would be the principal-value version of “log z”. When you start pondering logarithms for negative numbers or for complex-valued numbers you get multiple values, the same way the arcsine function does.

    And it’s good to warn your audience which principal value you mean, especially for the arc-trigonometric-functions or logarithms. (I’ve never seen someone break the square root convention.) The principal value is about picking the most obvious and easy-to-work-with value out of a set of them. It’s just impossible to get everyone to agree on what the obvious is.

     
  • Joseph Nebus 6:00 pm on Monday, 5 December, 2016 Permalink | Reply
    Tags: A-To-Z, , , , Daffy Duck, , , , ,   

    The End 2016 Mathematics A To Z: Osculating Circle 


    I’m happy to say it’s another request today. This one’s from HowardAt58, author of the Saving School Math blog. He’s given me some great inspiration in the past.

    Osculating Circle.

    It’s right there in the name. Osculating. You know what that is from that one Daffy Duck cartoon where he cries out “Greetings, Gate, let’s osculate” while wearing a moustache. Daffy’s imitating somebody there, but goodness knows who. Someday the mystery drives the young you to a dictionary web site. Osculate means kiss. This doesn’t seem to explain the scene. Daffy was imitating Jerry Colonna. That meant something in 1943. You can find him on old-time radio recordings. I think he’s funny, in that 40s style.

    Make the substitution. A kissing circle. Suppose it’s not some playground antic one level up from the Kissing Bandit that plagues recess, yet one or two levels down from what we imagine we’d do in high school. It suggests a circle that comes really close to something, that touches it a moment, and then goes off its own way.

    But then touching. We know another word for that. It’s the root behind “tangent”. Tangent is a trigonometry term. But it appears in calculus too. The tangent line is a line that touches a curve at one specific point and is going in the same direction as the original curve is at that point. We like this because … well, we do. The tangent line is a good approximation of the original curve, at least at the tangent point and for some region local to that. The tangent touches the original curve, and maybe it does something else later on. What could kissing be?

    The osculating circle is about approximating an interesting thing with a well-behaved thing. So are similar things with names like “osculating curve” or “osculating sphere”. We need that a lot. Interesting things are complicated. Well-behaved things are understood. We move from what we understand to what we would like to know, often, by an approximation. This is why we have tangent lines. This is why we build polynomials that approximate an interesting function. They share the original function’s value, and its derivative’s value. A polynomial approximation can share many derivatives. If the function is nice enough, and the polynomial big enough, it can be impossible to tell the difference between the polynomial and the original function.

    The osculating circle, or sphere, isn’t so concerned with matching derivatives. I know, I’m as shocked as you are. Well, it matches the first and the second derivatives of the original curve. Anything past that, though, it matches only by luck. The osculating circle is instead about matching the curvature of the original curve. The curvature is what you think it would be: it’s how much a function curves. If you imagine looking closely at the original curve and an osculating circle they appear to be two arcs that come together. They must touch at one point. They might touch at others, but that’s incidental.
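
    If you’d like actual numbers, here’s a Python sketch built on the standard curvature formulas for a plane curve y = f(x). You hand it the function and its first two derivatives at a point; it hands back the kissing circle’s center and radius. All the names in it are my own invention:

        # Osculating circle of y = f(x) at a point, from the standard
        # curvature formulas.  Needs the first two derivatives there.
        def osculating_circle(x, f, fp, fpp):
            y, d1, d2 = f(x), fp(x), fpp(x)    # d2 must not be zero
            radius = (1 + d1 ** 2) ** 1.5 / abs(d2)
            center_x = x - d1 * (1 + d1 ** 2) / d2
            center_y = y + (1 + d1 ** 2) / d2
            return (center_x, center_y), radius

        # The parabola y = x^2 at the origin: center (0, 0.5), radius 0.5.
        print(osculating_circle(0.0, lambda x: x * x,
                                lambda x: 2 * x, lambda x: 2.0))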

    Osculating circles, and osculating spheres, sneak out of mathematics and into practical work. This is because we often want to work with things that are almost circles. The surface of the Earth, for example, is not a sphere. But it’s only a tiny bit off. It’s off in ways that you only notice if you are doing high-precision mapping. Or taking close measurements of things in the sky. Sometimes we do this. So we map the Earth locally as if it were a perfect sphere, with curvature exactly what its curvature is at our observation post.

    Or we might be observing something moving in orbit. If the universe had only two things in it, and they were the correct two things, all orbits would be simple: they would be ellipses. They would have to be “point masses”, things that have mass without any volume. They never are. They’re always shapes. Spheres would be fine, but they’re never perfect spheres even. The slight difference between a perfect sphere and whatever the things really are affects the orbit. Or the other things in the universe tug on the orbiting things. Or the thing orbiting makes a course correction. All these things make little changes in the orbiting thing’s orbit. The actual orbit of the thing is a complicated curve. The orbit we could calculate is an osculating — well, an osculating ellipse, rather than an osculating circle. Similar idea, though. Call it an osculating orbit if you’d rather.

    That osculating circles have practical uses doesn’t mean they aren’t respectable mathematics. I’ll concede they’re not used as much as polynomials or sine curves are. I suppose that’s because polynomials and sine curves have nicer derivatives than circles do. But osculating circles do turn up as ways to try solving nonlinear differential equations. We need the help. Linear differential equations anyone can solve. Nonlinear differential equations are pretty much impossible. They also turn up in signal processing, as ways to find the frequencies of a signal from a sampling of data. This, too, we would like to know.

    We get the name “osculating circle” from Gottfried Wilhelm Leibniz. This might not surprise. Finding easy-to-understand shapes that approximate interesting shapes is why we have calculus. Isaac Newton described a way of making them in the Principia Mathematica. This also might not surprise. Of course they would on this subject come so close together without kissing.

     
  • Joseph Nebus 6:00 pm on Friday, 2 December, 2016 Permalink | Reply
    Tags: A-To-Z, , , , , , , ,   

    The End 2016 Mathematics A To Z: Normal Numbers 


    Today’s A To Z term is another of gaurish’s requests. It’s also a fun one so I’m glad to have reason to write about it.

    Normal Numbers

    A normal number is any real number you never heard of.

    Yeah, that’s not what we say a normal number is. But that’s what a normal number is. If we could imagine the real numbers to be a stream, and that we could reach into it and pluck out a water-drop that was a single number, we know what we would likely pick. It would be an irrational number. It would be a transcendental number. And it would be a normal number.

    We know normal numbers — or we would, anyway — by looking at their representation in digits. For example, π is a number that starts out 3.14159265358979323846264338327950288419 and so on forever. Look at those digits. Some of them are 1’s. How many? How many are 2’s? How many are 3’s? Are there more than you would expect? Are there fewer? What would you expect?

    Expect. That’s the key. What should we expect in the digits of any number? The numbers we work with don’t offer much help. A whole number, like 2? That has a decimal representation of a single ‘2’ and infinitely many zeroes past the decimal point. Two and a half? A single ‘2’, a single ‘5’, and then infinitely many zeroes past the decimal point. One-seventh? Well, we get infinitely many 1’s, 4’s, 2’s, 8’s, 5’s, and 7’s. Never any 3’s, nor any 0’s, nor 6’s or 9’s. This doesn’t tell us anything about how often we would expect ‘8’ to appear in the digits of π.

    In a normal number we get all the decimal digits. And we get each of them about one-tenth of the time. If all we had was a chart of how often digits turn up we couldn’t tell the summary of one normal number from the summary of any other normal number. Nor could we tell either from the summary of a perfectly uniform randomly drawn number.

    It goes beyond single digits, though. Look at pairs of digits. How often does ’14’ turn up in the digits of a normal number? … Well, something like once for every hundred pairs of digits you draw from it. Look at triplets of digits. ‘141’ should turn up about once in every thousand sets of three digits. ‘1415’ should turn up about once in every ten thousand sets of four digits. Any finite string of digits should turn up, and exactly as often as any other finite string of digits of the same length.

    That’s in the full representation. If you look at all the infinitely many digits the normal number has to offer. If all you have is a slice then some digits are going to be more common and some less common. That’s similar to how if you fairly toss a coin (say) forty times, there’s a good chance you’ll get tails something other than exactly twenty times. Look at the first 30 or so digits of π and there’s not a zero to be found. But as you survey more digits you get closer and closer to the expected average frequency. It’s the same way coin flips get closer and closer to 50 percent tails. Zero is a rarity in the first 30 digits. It’s about one-tenth of the first 3500 digits.
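
    You can watch that settling-down yourself. Here’s a Python sketch that counts digit frequencies in a few thousand digits of π; it leans on the mpmath library to produce the digits, and the precision I ask for is an arbitrary choice:

        # Count digit frequencies in a long slice of pi's decimal digits.
        from collections import Counter
        from mpmath import mp

        mp.dps = 3500                              # work with ~3500 digits
        digits = str(mp.pi).replace(".", "")[:3500]
        counts = Counter(digits)
        for d in "0123456789":
            print(d, counts[d] / len(digits))      # each share sits near 0.1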

    The digits of a specific number are not random, not if we know what the number is. But we can be presented with a subset of its digits and have no good way of guessing what the next digit might be. That is getting into the same strange territory in which we can speak about the “chance” of a month having a Friday the 13th even though the appearances of Fridays the 13th have absolutely no randomness to them.

    This has staggering implications. Some of them inspire an argument in the science fiction Usenet newsgroup rec.arts.sf.written every two years or so. Probably it does so in other venues; Usenet is just my first home and love for this. In a minor point in Carl Sagan’s novel Contact, possibly-imaginary aliens reveal there’s a pattern hidden in the digits of π. (It’s not in the movie version, which is a shame. But to include it would require people watching a computer. So that could not make for a good movie scene, we now know.) Look far enough into π, says the book, and there’s suddenly a string of digits that are nearly all zeroes, interrupted with a few ones. Arrange the zeroes and ones into a rectangle and it draws a pixel-art circle. And the aliens don’t know how something astounding like that could be.

    Nonsense, respond the kind of science fiction reader that likes to identify what the nonsense in science fiction stories is. (Spoiler: it’s the science. In this case, the mathematics too.) In a normal number every finite string of digits appears. It would be truly astounding if there weren’t an encoded circle in the digits of π. Indeed, it would be impossible for there not to be infinitely many circles of every possible size encoded in every possible way in the digits of π. If the aliens are amazed by that they would be amazed to find how every triangle has three corners.

    I’m a more forgiving reader. And I’ll give Sagan this amazingness. I have two reasons. The first reason is on the grounds of discoverability. Yes, the digits of a normal number will have in them every possible finite “message” encoded every possible way. (I put the quotes around “message” because it feels like an abuse to call something a message if it has no sender. But it’s hard to not see as a “message” something that seems to mean something, since we live in an era that accepts the Death of the Author as a concept at least.) Pick your classic cypher “1 = A, 2 = B, 3 = C” and so on, and take any normal number. If you look far enough into its digits you will find every message you might ever wish to send, every book you could read. Every normal number holds Jorge Luis Borges’s Library of Babel, and almost every real number is a normal number.

    But. The key there is if you look far enough. Look above; the first 30 or so digits of π have no 0’s, when you would expect three of them. There’s no 22’s, even though that pair has as much right to appear as does 26, which gets in at least twice that I see. And we will only ever know finitely many digits of π. It may be staggeringly many digits, sure. It already is. But it will never be enough to be confident that a circle, or any other long enough “message”, must appear. It is staggering that a detectable “message” that long should be in the tiny slice of digits that we might ever get to see.

    And it’s harder than that. Sagan’s book says the circle appears in whatever base π gets represented in. So not only does the aliens’ circle pop up in base ten, but also in base two and base sixteen and all the other, even less important bases. The circle happening to appear in the accessible digits of π might be an imaginable coincidence in some base. There’s infinitely many bases, one of them has to be lucky, right? But to appear in the accessible digits of π in every one of them? That’s staggeringly impossible. I say the aliens are correct to be amazed.

    Now to my second reason to side with the book. It’s true that any normal number will have any “message” contained in it. So who says that π is a normal number?

    We think it is. It looks like a normal number. We have figured out many, many digits of π and they’re distributed the way we would expect from a normal number. And we know that nearly all real numbers are normal numbers. If I had to put money on it I would bet π is normal. It’s the clearly safe bet. But nobody has ever proved that it is, nor that it isn’t. Whether π is normal or not is a fit subject for conjecture. A writer of science fiction may suppose anything she likes about its normality without current knowledge saying she’s wrong.

    It’s easy to imagine numbers that aren’t normal. Rational numbers aren’t, for example. If you followed my instructions and made your own transcendental number then you made a non-normal number. It’s possible that π should be non-normal. The first thirty million digits or so look good, though, if you think normal is good. But what’s thirty million against infinitely many possible counterexamples? For all we know, there comes a time when π runs out of interesting-looking digits and turns into an unpredictable little fluttering between 6 and 8.

    It’s hard to prove that any numbers we’d like to know about are normal. We don’t know about π. We don’t know about e, the base of the natural logarithm. We don’t know about the natural logarithm of 2. There is a proof that the square root of two (and other non-square whole numbers, like 3 or 5) is normal in base two. But my understanding is it’s a nonstandard approach that isn’t quite satisfactory to experts in the field. I’m not expert so I can’t say why it isn’t quite satisfactory. If the proof’s authors or grad students wish to quarrel with my characterization I’m happy to give space for their rebuttal.

    It’s much the way transcendental numbers were in the 19th century. We understood there to be this class of numbers that comprises nearly every number. We just didn’t have many examples. For that matter we’re still short on examples of transcendental numbers. Maybe we’re not that badly off with normal numbers.

    We can construct normal numbers. For example, there’s the Champernowne Constant. It’s the number you would make if you wanted to show you could make a normal number. It’s 0.12345678910111213141516171819202122232425 and I bet you can imagine how that develops from that point. (David Gawen Champernowne proved it was normal, which is the hard part.) There’s other ways to build normal numbers too, if you like. But those numbers aren’t of any interest except that we know them to be normal.
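
    Building Champernowne digits is as easy as it sounds, too. Here’s a Python sketch that glues the counting numbers together and tallies the digits; the tallies wander toward one-tenth apiece, though this number takes its time getting there:

        # The Champernowne constant: glue the counting numbers together.
        from collections import Counter
        from itertools import count, islice

        def champernowne_digits(n):
            """First n digits after the decimal point."""
            chars = (c for k in count(1) for c in str(k))
            return "".join(islice(chars, n))

        digits = champernowne_digits(100000)
        print(Counter(digits).most_common(3))   # the shares head toward 1/10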

    Mere normality is tied to a base. A number might be normal in base ten (the way normal people write numbers) but not in base two or base sixteen (which computers and people working on computers use). It might be normal in base twelve, used by nobody except mathematics popularizers of the 1960s explaining bases, but not normal in base ten. There can be numbers normal in every base. They’re called “absolutely normal”. Nearly all real numbers are absolutely normal. Wacław Sierpiński constructed the first known absolutely normal number in 1917. If you got in on the fractals boom of the 80s and 90s you know his name, although without the Polish spelling. He did stuff with gaskets and curves and carpets you wouldn’t believe. I’ve never seen Sierpiński’s construction of an absolutely normal number. From my references I’m not sure if we know how to construct any other absolutely normal numbers.

    So that is the strange state of things. Nearly every real number is normal. Nearly every number is absolutely normal. We know a couple normal numbers. We know at least one absolutely normal number. But we haven’t (to my knowledge) proved any number that’s otherwise interesting is also a normal number. This is why I say: a normal number is any real number you never heard of.

     
    • gaurish 5:42 am on Saturday, 3 December, 2016 Permalink | Reply

      Beautiful exposition! Using pi as motivation for the discussion was a great idea. The fact that, unlike primality, normality is associated with the base system involved fascinated me when I first came across normal numbers. Thanks!


      • Joseph Nebus 4:48 pm on Friday, 9 December, 2016 Permalink | Reply

        Aw, thank you. You’re most kind. π is a good number to use for explaining so many kinds of numbers. It’s familiar to people and it feels friendly, but it’s still an example of so many of the most interesting traits of numbers. Or, as with normality, it looks like it probably is. It’s easy to see why the number is so fascinating.


  • Joseph Nebus 6:00 pm on Wednesday, 30 November, 2016 Permalink | Reply
    Tags: A-To-Z, , , , , , , , Monster Group, , ,   

    The End 2016 Mathematics A To Z: Monster Group 


    Today’s is one of my requested mathematics terms. This one comes to us from group theory, by way of Gaurish, and as ever I’m thankful for the prompt.

    Monster Group.

    It’s hard to learn from an example. Examples are great, and I wouldn’t try teaching anything subtle without one. Might not even try teaching the obvious without one. But a single example is dangerous. The learner has trouble telling what parts of the example are the general lesson to learn and what parts are just things that happen to be true for that case. Having several examples, of different kinds of things, saves the student. The thing in common to many different examples is the thing to retain.

    The mathematics major learns group theory in Introduction To Not That Kind Of Algebra, MAT 351. A group extracts the barest essence of arithmetic: a bunch of things and the ability to add them together. So what’s an example? … Well, the integers do nicely. What’s another example? … Well, the integers modulo two, where the only things are 0 and 1 and we know 1 + 1 equals 0. What’s another example? … The integers modulo three, where the only things are 0 and 1 and 2 and we know 1 + 2 equals 0. How about another? … The integers modulo four? Modulo five?

    All true. All, also, basically the same thing. The whole set of integers, or of real numbers, is different. But as finite groups, the integers modulo anything are nice, easy-to-understand groups. They’re known as Cyclic Groups for reasons I’ll explain if asked. But all the Cyclic Groups are kind of the same.

    So how about another example? And here we get some good ones. There’s the Permutation Groups. These are fun. You start off with a set of things. You can label them anything you like, but you’re daft if you don’t label them the counting numbers. So, say, the set of things 1, 2, 3, 4, 5. Start with them in that order. A permutation is the swapping of any pair of those things. So swapping, say, the second and fifth things to get the list 1, 5, 3, 4, 2. The collection of all the swaps you can make is the Permutation Group on this set of things. The things in the group are not 1, 2, 3, 4, 5. The things in the permutation group are “swap the second and fifth thing” or “swap the third and first thing” or “swap the fourth and the third thing”. You maybe feel uneasy about this. That’s all right. I suggest playing with this until you feel comfortable because it is a lot of fun to play with. Playing in this case means writing out all the ways you can swap stuff, which you can always do as a string of swaps of exactly two things.
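
    If you’d like to play along in Python, here’s a tiny sketch of that swapping, with made-up helpers. It also hints at the parity of swaps that the Alternating Groups, below, care about:

        # Permutations of (1, 2, 3, 4, 5) built as strings of two-thing swaps.
        def swap(seq, i, j):
            """Swap the i-th and j-th things, counting from 1."""
            seq = list(seq)
            seq[i - 1], seq[j - 1] = seq[j - 1], seq[i - 1]
            return tuple(seq)

        things = (1, 2, 3, 4, 5)
        once = swap(things, 2, 5)   # "swap the second and fifth things"
        twice = swap(once, 4, 2)    # ... "and swap the fourth and second things"
        print(once)                 # (1, 5, 3, 4, 2); one swap so far: odd
        print(twice)                # two swaps so far: even, Alternating-worthy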

    (Some people may remember an episode of Futurama that involved a brain-swapping machine. Or a body-swapping machine, if you prefer. The gimmick of the episode is that two people could only swap bodies/brains exactly one time. The problem was how to get everybody back in their correct bodies. It turns out to be possible to do, and one of the show’s writers did write a proof of it. It’s shown on-screen for a moment. Many fans were awestruck by an episode of the show inspiring a Mathematical Theorem. They’re overestimating how rare theorems are. But it is fun when real mathematics gets done as a side effect of telling a good joke. Anyway, the theorem fits well in group theory and the study of these permutation groups.)

    So the student wanting examples of groups can get the Permutation Group on three elements. Or the Permutation Group on four elements. The Permutation Group on five elements. … You kind of see, this is certainly different from those Cyclic Groups. But they’re all kind of like each other.

    An “Alternating Group” is one where all the elements in it are built from an even number of swaps. So, “swap the second and fifth things” would not be in an alternating group. But “swap the second and fifth things, and swap the fourth and second things” would be. And so the student needing examples can look at the Alternating Group on two elements. Or the Alternating Group on three elements. The Alternating Group on four elements. And so on. It’s slightly different from the Permutation Group. It’s certainly different from the Cyclic Group. But still, if you’ve mastered the Alternating Group on five elements you aren’t going to see the Alternating Group on six elements as all that different.

    Cyclic Groups and Alternating Groups have some stuff in common. Permutation Groups not so much and I’m going to leave them in the above paragraph, waving, since they got me to the Alternating Groups I wanted.

    One is that they’re finite. At least they can be. I like finite groups. I imagine students like them too. It’s nice having a mathematical thing you can write out in full and know you aren’t missing anything.

    The second thing is that they are, or they can be, “simple groups”. That’s … a challenge to explain. This has to do with the structure of the group and the kinds of subgroup you can extract from it. It’s very very loosely and figuratively and do not try to pass this off at your thesis defense kind of like being a prime number. In fact, Cyclic Groups for a prime number of elements are simple groups. So are Alternating Groups on five or more elements.

    So we get to wondering: what are the finite simple groups? Turns out they come in four main families. One family is the Cyclic Groups for a prime number of things. One family is the Alternating Groups on five or more things. One family is this collection called the Chevalley Groups. Those are mostly things about projections: the ways to map one set of coordinates into another. We don’t talk about them much in Introduction To Not That Kind Of Algebra. They’re too deep into Geometry for people learning Algebra. The last family is this collection called the Twisted Chevalley Groups, or the Steinberg Groups. And they … uhm. Well, I never got far enough into Geometry I’m Guessing to understand what they’re for. I’m certain they’re quite useful to people working in the field of order-three automorphisms of the whatever exactly D4 is.

    And that’s it. That’s all the families there are. If it’s a finite simple group then it’s one of these. … Unless it isn’t.

    Because there are a couple of stragglers. There are a few finite simple groups that don’t fit in any of the four big families. And it really is only a few. I would have expected an infinite number of weird little cases that don’t fit into any family. Instead, there are 26. (27 if you decide a particular one of the Steinberg Groups, the Tits Group, doesn’t really belong in that family. I’m not familiar enough with the case to have an opinion.) Funny number to have turn up. It took something like ten thousand journal pages to prove there were just the 26 special cases. I haven’t read them all. (I haven’t read any of the pages. But my Algebra professors at Rutgers were proud to mention their department’s work in tracking down all these cases.)

    Some of these cases have some resemblance to one another. But not enough to see them as a family the way the Cyclic Groups are. We bundle all these together in a wastebasket taxon called “the sporadic groups”. The first five of them were worked out in the 1860s and 1870s. The last of them was worked out in 1980, seven years after its existence was first suspected.

    The sporadic groups all have weird sizes. The smallest one, known as M11 (for Mathieu, who found it and four of its siblings in the 1860s and 1870s), has 7,920 things in it. They get enormous soon after that.

    The biggest of the sporadic groups, and the last one described, is the Monster Group. It’s known as M. It has a lot of things in it. In particular it’s got 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 things in it. So, you know, it’s not like we’ve written out everything that’s in it. We’ve just got descriptions of how you would write out everything in it, if you wanted to try. And you can get a good argument going about what it means for a mathematical object to “exist”, or to be “created”. There are something like 10^54 things in it. That’s something like ten million trillion trillion times the number of stars in the observable universe. Not just the stars in our galaxy, but all the stars in all the galaxies we could in principle ever see.
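    That enormous number does at least have a tidy prime factorization, which is a matter of record. A quick Python check (my sketch) multiplies the factorization back out and counts the digits:

        from math import prod

        # The Monster's order, written as its standard prime factorization.
        factors = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
                   17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1,
                   47: 1, 59: 1, 71: 1}
        order = prod(p ** e for p, e in factors.items())

        print(order)            # the 808,017,... number quoted above
        print(len(str(order)))  # 54 digits, hence "something like 10^54"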

    It’s one of the rare things for which “Brobdingnagian” is an understatement. Everything about it is mind-boggling, the sort of thing that staggers the imagination more than infinitely large things do. We don’t really think of infinitely large things; we just picture “something big”. A number like that one above is definite, and awesomely big. Just read off the digits of that number; it sounds like what we imagine infinity ought to be.

    We can make a chart, called the “character table”, which summarizes how the group’s elements, sorted into classes, behave in each of its representations. The character table for the Monster Group is 194 rows tall and 194 columns wide. The Monster Group can be represented as this (I am solemnly assured) logical and beautiful algebraic structure. It’s something like a polyhedron in rather more than three dimensions of space. In particular it needs 196,884 dimensions to show off its particular beauty. I am taking experts’ word for it. I can’t quite imagine more than 196,883 dimensions for a thing.

    And it’s a thing full of mystery. This creature of group theory makes us think of the number 196,884. The same 196,884 turns up in number theory, the study of how integers are put together. It’s the first non-boring coefficient in a thing called the j-function. It’s not a coincidence. This bit of number theory and this bit of group theory are bound together, but it took some years for anyone to quite understand why.

    There are more mysteries. The character table has 194 rows and columns. Each column implies a function. Some of those functions are duplicated; there are 171 distinct ones. But some of the distinct ones, it turns out, you can find by adding together multiples of the others. That leaves 163 independent ones. 163 appears again in number theory, in the study of algebraic integers. These are, despite the name, not ordinary integers at all. They’re things that look like complex numbers: some real number plus some (possibly other) real number times the square root of some specified negative number. They’ve got neat properties. Or weird ones.

    You know how with integers there’s just one way to factor them? Like, fifteen is equal to three times five and no other set of prime numbers? Algebraic integers don’t work like that. There are usually multiple ways to factor a number. There are exceptions, systems of algebraic integers that still have unique factorings. They happen only for a few square roots of negative numbers. The largest (in size) of those negative numbers? Minus 163.
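    The classic first example of this breakdown uses minus 5 rather than minus 163, and it’s easy to check by hand or by machine. This sketch (my own illustration, not anything from the essay) represents a + b√−5 as a pair of integers and multiplies two such numbers:

        def multiply(x, y):
            # (a + b*r)(c + d*r), where r*r = -5.
            a, b = x
            c, d = y
            return (a * c - 5 * b * d, a * d + b * c)

        print(multiply((1, 1), (1, -1)))  # (6, 0): so 6 = (1 + r)(1 - r)
        print(multiply((2, 0), (3, 0)))   # (6, 0): but also 6 = 2 * 3

    None of 2, 3, 1 + √−5, or 1 − √−5 breaks down any further in this system, so 6 genuinely factors two different ways. In the algebraic integers built on minus 163, famously, that kind of ambiguity never happens.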

    I don’t know if this 163 appearance means something. As I understand the matter, neither does anybody else.

    There is some link to the mathematics of string theory. That’s an interesting but controversial and hard-to-experiment-upon model for how the physics of the universe may work. But I don’t know string theory well enough to say what it is or how surprising this should be.

    The Monster Group creates a monster essay. I suppose it couldn’t do otherwise. I suppose I can’t adequately describe all its sublime mystery. Dr Mark Ronan has written a fine web page describing much of the Monster Group and the history of our understanding of it. He also has written a book, Symmetry and the Monster, to explain all this in greater depth. I’ve not read the book. But I do mean to, now.

     
    • gaurish 9:17 am on Saturday, 10 December, 2016 Permalink | Reply

      It’s a shame that I somehow missed this blog post. Have you read “Symmetry and the Monster”? Would you recommend reading it?

      • Joseph Nebus 5:57 am on Saturday, 17 December, 2016 Permalink | Reply

        Not to fear. Given how I looked away a moment and got fourteen days behind writing comments I can’t fault anyone for missing a post or two here.

        I haven’t read Symmetry and the Monster, but from Dr Ronan’s web site about the Monster Group I’m interested and mean to get to it when I find a library copy. I keep getting farther behind in my reading, admittedly. Today I realized I’d rather like to read Dan Bouk’s How Our Days Became Numbered: Risk and the Rise of the Statistical Individual, which focuses in large part on the growth of the life insurance industry in the 19th century. And even so I just got a book about the sale of timing data that was so common back when standard time was being discovered-or-invented.


  • Joseph Nebus 6:00 pm on Monday, 28 November, 2016 Permalink | Reply
    Tags: A-To-Z, , , , , local, Niagara Falls   

    The End 2016 Mathematics A To Z: Local 


    Today’s is another of those words that means nearly what you would guess. There are still seven letters left, by the way, which haven’t had any requested terms. If you’d like something described, please try asking.

    Local.

    Stops at every station, rather than just the main ones.

    OK, I’ll take it seriously.

    So a couple years ago I visited Niagara Falls, and I stepped into the river, just above the really big drop.

    [Photo: a view (from the United States side) of Niagara Falls, with a lot of falling water and somehow even more mist. Caption: Niagara Falls, demonstrating some locally unsafe waters to be in. Background: Canada (left), United States (right).]

    I didn’t have any plans to go over the falls, and didn’t, but I liked the thrill of claiming I had. I’m not crazy, though; I picked a spot I knew was safe to step in. It’s only in the retelling that I went into the Niagara River just above the falls.

    Because yes, there is surely danger in certain spots of the Niagara River. But there are also spots that are perfectly safe. And not isolated spots either. I wouldn’t have been less safe if I’d stepped into the river a few feet closer to the edge. Nor if I’d stepped in a few feet farther away. Where I stepped in was locally safe.

    [Photo: speedy but not actually turbulent waters on the Niagara River, above the falls. Caption: The Niagara River, and some locally safe enough waters to be in. That’s not me in the picture; if you do know who it is, I have no way of challenging you. But it’s the area I stepped into and felt this lovely illicit thrill doing so.]

    Over in mathematics we do a lot of work on stuff that’s true or false depending on what some parameters are. We can look at bunches of those parameters, and they often look something like normal everyday space. There’s some values that are close to what we started from. There’s others that are far from that.

    So, a “neighborhood” of some point is that point and some set of points containing it. It needs to be an “open” set, which means it doesn’t contain its boundary. So, like, everything less than one minute’s walk away, but not the stuff that’s precisely one minute’s walk away. (If we include boundaries we break stuff that we don’t want broken, is why.) And certainly not the stuff more than one minute’s walk away. A neighborhood could have any shape. It’s easy to think of it as a little disc around the point you want. That’s usually the easiest to describe in a proof, because it’s “everything a distance less than (something) away”. (That “something” is usually written ‘δ’ or ‘ε’. Both Greek letters are called in to mean “a tiny distance”. They carry different connotations about which tiny distance they measure.) It’s easiest to draw as a little amoeba-like blob around a point, contained inside a bigger amoeba-like blob.
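    On the real line, that disc-shaped neighborhood is just an interval with its endpoints left off. A one-line Python sketch (mine, purely illustrative) makes the strictness of the inequality visible:

        def in_neighborhood(point, center, delta):
            # Strictly less than: the boundary itself stays out.
            return abs(point - center) < delta

        print(in_neighborhood(1.9, 2.0, 0.5))  # True: close enough
        print(in_neighborhood(2.5, 2.0, 0.5))  # False: exactly on the boundary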

    Anyway, something is true “locally” to a point if it’s true in that neighborhood. That means true for everything in that neighborhood. Which is what you’d expect. “Local” means just that. It’s the stuff that’s close to where we started out.

    Often we would like to know something “globally”, which means … er … everywhere. Universally so. But it’s usually easier to prove a thing locally. I suppose having a point where we know something is so makes it easier to prove things about what’s nearby. Distant stuff, who knows?

    “Local” serves as an adjective for many things. We think of a “local maximum”, for example, or “local minimum”. This is where whatever we’re studying takes a value bigger (or smaller) than it does anywhere else nearby. Or we speak of a function being “locally continuous”, meaning that we know it’s continuous near this point and we make no promises away from it. It might be “locally differentiable”, meaning we can take derivatives of it close to some interesting point. We say nothing about what happens far from it.
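    For sampled data the local-maximum idea turns into nothing more than a comparison with immediate neighbors. Here’s a small illustrative sketch (my own; the names are made up):

        def local_maxima(values):
            # A point is a local maximum if it beats both neighbors,
            # regardless of what happens far away.
            return [i for i in range(1, len(values) - 1)
                    if values[i] > values[i - 1] and values[i] > values[i + 1]]

        heights = [1, 3, 2, 5, 4, 4, 6, 0]
        print(local_maxima(heights))  # [1, 3, 6]: the 3 and 5 are only
                                      # local; the 6 happens to be global too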

    Unless we do. We can talk about something being “local to infinity”. Your first reaction to that should probably be to slap the table and declare that’s it, we’re done. But we can make it sensible, at least to other mathematicians. We do it by starting with a neighborhood that contains the origin, zero, that point in the middle of everything. So, what’s the complement of that? It’s everything that’s far enough away from the origin. (Don’t include the boundary; we don’t need those headaches.) So why not call that the “neighborhood of infinity”? Other than that it’s a weird set of words to put together? And if something is true in that “neighborhood of infinity”, what is that thing other than true “local to infinity”?
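    In the same illustrative spirit as before (my sketch; this is not standard terminology in any library), the “neighborhood of infinity” is just the far side of some large radius:

        def in_neighborhood_of_infinity(point, radius):
            # Strictly farther than radius from the origin; boundary excluded.
            return abs(point) > radius

        print(in_neighborhood_of_infinity(1e9, 100.0))  # True: way out there
        print(in_neighborhood_of_infinity(3.0, 100.0))  # False: too close to home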

    I don’t blame you for being skeptical.

     