Recent Updates Page 2

  • Joseph Nebus 6:00 pm on Wednesday, 23 August, 2017 Permalink | Reply

    The Summer 2017 Mathematics A To Z: Klein Bottle 


    Gaurish, of the For The Love Of Mathematics blog, takes me back into topology today. And it’s a challenging one, because what can I say about a shape this involved when I’m too lazy to draw pictures or include photographs most of the time?

    In 1958 Clifton Fadiman, a noted public intellectual and panelist on many fine old-time radio and early TV quiz shows, edited the book Fantasia Mathematica. It’s a pleasant read and you can likely find a copy in a public or university library nearby. It’s a collection of mathematically-themed stuff. Mostly short stories, a few poems, some essays, even that bit where Socrates works through a proof. And some of it is science fiction, this from an era when science fiction was really disreputable.

    If there’s a theme to the science fiction stories included it is: Möbius Strips, huh? There are so many stories in the book that amount to, “what is this crazy bizarre freaky weird ribbon-like structure that only has the one side? Huh?” As I remember even one of the non-science-fiction stories is a Möbius Strip story.

    I don’t want to sound hard on the writers, nor on Fadiman for collecting what he has. A story has to be about people doing something, even if it’s merely exploring some weird phenomenon. You can imagine people dealing with weird shapes. It’s hard to imagine what story you could tell about an odd perfect number. (Well, one that isn’t “here’s how we discovered the odd perfect number”, which amounts to a lot of thinking and false starts. Or one that doesn’t make the odd perfect number a MacGuffin, a role served equally well by letters of transit or a heap of gold or whatever.) Many of the stories that aren’t about the Möbius Strip are about four- and higher-dimensional shapes that people get caught in or pass through. One of the hyperdimensional stories, A J Deutsch’s “A Subway Named Möbius”, even pulls in the Möbius Strip. The name doesn’t fit, but it is catchy, and the story is one of the two best tall tales about the Boston subway system.

    Besides, it’s easy to see why the Möbius Strip is interesting. It’s a ribbon where both sides are the same side. What’s not neat about that? It forces us to realize that while we know what “sides” are, there’s stuff about them that isn’t obvious. That defies intuition. It’s so easy to make that it holds another mystery. How is this not a figure known to the ancients and used as a symbol of paradox for millennia? I have no idea; it’s hard to guess why something was not noticed when it could easily have been. It dates to 1858, when August Ferdinand Möbius and Johann Benedict Listing independently published on it.

    The Klein Bottle is newer by a generation. Felix Klein, who used group theory to enlighten geometry and vice-versa, described the surface in 1882. It has much in common with the Möbius Strip. It’s a thing that looks like a solid. But it’s impossible to declare one side to be outside and the other in, at least not in any logically coherent way. Take one and dab a spot with a magic marker. You could trace, with the marker, a continuous curve that gets around to the same spot on the “other” “side” of the thing. You see why I have to put quotes around “other” and “side”. I believe you know what I mean when I say this. But taken literally, it’s nonsense.

    The Klein Bottle’s a two-dimensional surface. By that I mean that you could cover it with what look like lines of longitude and latitude. Those coordinates would tell you, without confusion, where a point on the surface is. But it’s embedded in a four-dimensional space. (Or a higher-dimensional space, but everything past the fourth dimension is extravagance.) We have never seen a Klein Bottle in its whole. I suppose there are skilled people who can imagine it faithfully, but how would anyone else ever know?

    Big deal. We’ve never seen a tesseract either, but we know the shadow it casts in three-dimensional space. So it is with the Klein Bottle. Visit any university mathematics department. If they haven’t got a glass replica of one in the dusty cabinets welcoming guests to the department, never fear. At least one of the professors has one on an office shelf, probably beside some exams from eight years ago. They make nice-looking jars. Klein Bottles don’t have to. There are different shapes their projection into three dimensions can take. But the only really different one is this sort of figure-eight helical shape that looks like a roller coaster gone vicious. (There’s also a mirror image of this, the helix winding the opposite way.) These representations have the surface cross through itself. In four dimensions, it does no such thing, any more than the edges of a cube cross one another. It’s just the lines in a picture on a piece of paper that cross.

    The Möbius Strip is good practice for learning about the Klein Bottle. We can imagine creating a Bottle by the correct stitching-together of two strips. Or, if we feel destructive, we can start with a Bottle and slice it, producing a pair of Möbius Strips. Both are non-orientable. We can’t make a division between one side and another that reflects any particular feature of the shape. One of the helix-like representations of the Klein Bottle also looks like a pool-toy ring version of the Möbius Strip.

    And strange things happen on these surfaces. You might remember the four-color map theorem. Four colors are enough to color any two-dimensional map without adjacent territories having to share a color. (This isn’t actually so, as the territories have to be contiguous, with no enclaves of one territory inside another. Never mind.) This is so for territories on the sphere. It’s hard to prove (although the five-color theorem is easy). Not so for the Möbius Strip: territories on it might need as many as six colors. And likewise for the Klein Bottle. That’s a particularly neat result, as the Heawood Conjecture tells us the Klein Bottle might need seven. The Heawood Conjecture is otherwise dead-on in telling us how many colors different kinds of surfaces need for their map-colorings. The Klein Bottle is a strange surface. And yes, it was easier to prove the six-color theorem on the Klein Bottle than it was to prove the four-color theorem on the plane or sphere.

    (Though it’s got the tentative-sounding name of conjecture, the Heawood Conjecture is proven. Heawood put it out as a conjecture in 1890. It took until 1968 for the whole thing to be finally proved. I imagine all those decades of being thought but not proven true gave it a reputation. It’s not wrong for Klein Bottles. If six colors are enough for these maps, then so are seven colors. It’s just that Klein Bottles are the lone case where the bound is tighter than Heawood suggests.)
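
    For the record, the Heawood bound is easy to state, and worth seeing. A surface with Euler characteristic \chi never needs more than

    \left\lfloor \frac{7 + \sqrt{49 - 24\chi}}{2} \right\rfloor

    colors. The Klein Bottle has \chi equal to zero, so the formula says seven. The torus also has \chi equal to zero, and the torus genuinely needs its seven. The Klein Bottle, alone among the closed surfaces, gets by with one color fewer than the formula suggests.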

    All that said, do we care? Do Klein Bottles represent something of particular mathematical interest? Or are they imagination-capturing things we don’t really use? I confess I’m not enough of a topologist to say how useful they are. They are easily-understood examples of algebraic or geometric constructs. These are things with names like “quotient spaces” and “deck transformations” and “fiber bundles”. The thought of the essay I would need to write to say what a fiber bundle is makes me appreciate having good examples of the thing around. So if nothing else they are educationally useful.

    And perhaps they turn up more than I realize. The geometry of Möbius Strips turns up in many surprising places: music theory and organic chemistry, superconductivity and roller coasters. It would seem out of place if the kinds of connections which make a Klein Bottle don’t turn up in our twisty world.

     
    • gaurish 6:09 pm on Wednesday, 23 August, 2017 Permalink | Reply

      I am awestruck by your talent of explaining a popular topological object without using a single diagram.


  • Joseph Nebus 6:00 pm on Tuesday, 22 August, 2017 Permalink | Reply
    Tags: Elvis, Graffiti

    Reading the Comics, August 17, 2017: Professor Edition 


    To close out last week’s mathematically-themed comic strips … eh. There’s only a couple of them. One has a professor-y type and another has Albert Einstein. That’s enough for my subject line.

    Joe Martin’s Mr Boffo for the 15th I’m not sure should be here. I think it’s a mathematics joke. That the professor’s shown with a pie chart suggests some kind of statistics, at least, and maybe the symbols are mathematical in focus. I don’t know. What the heck. I also don’t know how to link to these comics in a way that gives attention to the comic strip artist. I like to link to the site from which I got the comic, but the Mr Boffo site is … let’s call it home-brewed. I can’t figure how to make it link to a particular archive page. But I feel bad enough losing Jumble. I don’t want to lose Joe Martin’s comics on top of that.

    Professor, by a pie chart, reading a letter: 'Dear Professor: We are excited about your new theory. Would you build us a prototype? And how much would you charge for each slice? - Sara Lee.'

    Joe Martin’s Mr Boffo for the 15th of August, 2017. I am curious what sort of breakthrough in pie-slicing would be worth the Sara Lee company’s attention. Occasionally you’ll see videos of someone who cuts a pie (or cake or whatever) into equal-area slices using some exotic curve, but that’s to show off that something can be done, not that something is practical.

    Charlie Podrebarac’s meat-and-Elvis-enthusiast comic Cow Town for the 15th is captioned “Elvis Disproves Relativity”. Of course it hasn’t anything to do with experimental results or even a good philosophical counterexample. It’s all about the famous equation. Have to expect that. Elvis Presley having an insight that challenges our understanding of why relativity should work is the stuff for sketch comedy, not single-panel daily comics.

    Paul Trap’s Thatababy for the 15th has Thatadad win his fight with Alexa by using the old Star Trek Pi Gambit. To give a computer an unending task, any number would work. Even the decimal digits of, say, five would do. They’d just be boring if written out in full, which is why we don’t write them out. But irrational numbers at least give us a nice variety of digits. We don’t know that Pi is normal, but it probably is. So there should be a never-ending variety of what Alexa reels out here.

    By the end of the strip Alexa has only got to the 55th digit of Pi after the decimal point. (For this I checked The Pi-Search Page, rather than working it out by myself.) That’s the digit that follows the string shown in the second panel, so the comic isn’t skipping any time.
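
    If you’d rather generate digits than trust a web page, a few lines of Python will do it. This is a minimal sketch using the mpmath library (it comes along with SymPy); the precision setting is just picked to comfortably cover 55 digits.

    from mpmath import mp

    # Work with more digits than we strictly need.
    mp.dps = 60
    print(mp.pi)   # 3.14159265358979... out to 60 significant digits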

    Gene Mora’s Graffiti for the 16th, if you count this as a comic strip, includes a pun, if you count this as a pun. Make of it what you like.

    Mark Anderson’s Andertoons for the 17th is a student-misunderstanding-things problem. That’s a clumsy way to describe the joke. I should look for a punchier description, since there are a lot of mathematics comics that amount to the student getting a silly wrong idea of things. Well, I learned greater-than and less-than with alligators that eat the smaller number first. Though they turned into fish eating the smaller number first because who wants to ask a second-grade teacher to draw alligators all the time? Cartoon goldfish are so much easier.

     
  • Joseph Nebus 6:00 pm on Monday, 21 August, 2017 Permalink | Reply
    Tags: Camille Jordan, Jordan Canonical Form

    The Summer 2017 Mathematics A To Z: Jordan Canonical Form 


    I made a mistake! I thought we had got to the end of the block of A To Z topics suggested by Gaurish, of the For The Love Of Mathematics blog. Not so. And, indeed, I wonder if it wouldn’t be a viable writing strategy around here for me to just ask Gaurish to throw out topics and give me two weeks to write about each. I don’t think there’s a single unpromising one in the set.

    Jordan Canonical Form.

    Before you ask, yes, this is named for the Camille Jordan.

    So this is a thing from algebra. Particularly, linear algebra. And more particularly, matrices. Matrices are so much of linear algebra that you could be forgiven thinking they’re all of linear algebra. The thing is, matrices are a really good way of describing linear transformations. That is, where you take a block of space and stretch it out, or squash it down, or rotate it, or do some combination of these things. And stretching and squashing and rotating is a lot of what you’d ever want to do. Refer to any book on how to draw animated cartoons. The only thing matrices can’t do is have their eyes bug out huge when an attractive region of space walks past.

    Thing about a matrix is if you want to do something with it, you’re going to write it as a grid of numbers. It doesn’t have to be a grid of numbers. But about all the matrices anyone does anything with are grids of numbers. And that’s fine. They do an incredible lot of stuff. What’s not fine is that on looking at a huge block of numbers, the mind sees: huh. That’s a big block of numbers. Good luck finding what’s meaningful in them. To help find meaning we have a set of standard forms. We call them “canonical” or “normal” or some other approving term. They rearrange and change the terms in the matrix so that more interesting stuff is more obvious.

    Now you’re justified asking: how can we rearrange and change the terms in a matrix without changing what the matrix is? We can get away with doing this because we can show some rearrangements don’t change what we’re interested in. That covers the “how dare we” part of “how”. We do it by using matrix multiplication. You might remember from high school algebra that matrix multiplication is this agonizing process of multiplying every pair of numbers that ever existed together, then adding them all up, and then maybe you multiply something by minus one because you’re thinking of determinants, and it all comes out wrong anyway and you have to do it over? Yeah. Well, matrix multiplication is defined the hard way it is precisely because it makes stuff like this work out. So that covers the “by what technique” part of “how”. We start out with some matrix, let me imaginatively name it A . And then we find some transformation matrix for which, eh, let’s say P is a good enough name. I’ll say why in a moment. Then we use that matrix and its multiplicative inverse P^{-1} . And we evaluate the product P^{-1} A P . This won’t just be the same old matrix we started with. Not usually. Promise. But what this will be, if we chose our matrix P correctly, is some new matrix that’s easier to read.
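
    If you’d like to watch that happen, here’s a minimal sketch in Python with NumPy. The particular A and P are made up for illustration; the point is that the transformed matrix looks different while its eigenvalues stay put.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    # Any invertible matrix will serve as P; this one is arbitrary.
    P = np.array([[1.0, 1.0],
                  [1.0, -2.0]])
    B = np.linalg.inv(P) @ A @ P

    print(B)                      # not the matrix we started with
    print(np.linalg.eigvals(A))   # [5. 2.]
    print(np.linalg.eigvals(B))   # [5. 2.], up to ordering and roundoff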

    The matrices involved here have to follow some rules. Most important, they’re all going to be square matrices. There’ll be more rules that your linear algebra textbook will tell you. Or your instructor will, after checking the textbook.

    So what makes a matrix easy to read? Zeroes. Lots and lots of zeroes. When we have a standardized form of a matrix it’s nearly all zeroes. This is for a good reason: zeroes are easy to multiply stuff by. And they’re easy to add stuff to. And almost everything we do with matrices, as a calculation, is a lot of multiplication and addition of the numbers in the matrix.

    What also makes a matrix easy to read? Everything important being on the diagonal. The diagonal is one of the two things you would imagine if you were told “here’s a grid of numbers, pick out the diagonal”. In particular it’s the one that goes from the upper left to the bottom right, that is, row one column one, and row two column two, and row three column three, and so on up to row 86 column 86 (or whatever). If everything is on the diagonal the matrix is incredibly easy to work with. If it can’t all be on the diagonal at least everything should be close to it. As close as possible.

    In the Jordan Canonical Form not everything is on the diagonal. I mean, it can be, but you shouldn’t count on that. But everything either will be on the diagonal or else it’ll be one row up from the diagonal. That is, row one column two, row two column three, row 85 column 86. Like that. There are a few other important pieces.

    First, the entries in the row above the diagonal will each be either 1 or 0. Second, on the diagonal you’ll have runs of the same number repeated. Like, you’ll get four instances of the number ‘2’ in a row along the diagonal. Third, you’ll get a 1 above every instance of that repeated number except the first. Fourth, you’ll get a 0 above the first instance of that number.

    Yeah, that’s fussy to visualize. This is one of those things easiest to show in a picture. A Jordan canonical form is a matrix that looks like this:

    2 1 0 0 0 0 0 0 0 0 0 0
    0 2 1 0 0 0 0 0 0 0 0 0
    0 0 2 1 0 0 0 0 0 0 0 0
    0 0 0 2 0 0 0 0 0 0 0 0
    0 0 0 0 3 1 0 0 0 0 0 0
    0 0 0 0 0 3 0 0 0 0 0 0
    0 0 0 0 0 0 4 1 0 0 0 0
    0 0 0 0 0 0 0 4 1 0 0 0
    0 0 0 0 0 0 0 0 4 0 0 0
    0 0 0 0 0 0 0 0 0 -1 0 0
    0 0 0 0 0 0 0 0 0 0 -2 1
    0 0 0 0 0 0 0 0 0 0 0 -2

    This may have you dazzled. It dazzles mathematicians too. When we have to write a matrix that’s almost all zeroes like this we drop nearly all the zeroes. If we have to write anything we just write a really huge 0 in the upper-right and the lower-left corners.

    What makes this the Jordan Canonical Form is that the matrix looks like it’s put together from what we call Jordan Blocks. Look around the diagonals. Here’s the first Jordan Block:

    2 1 0 0
    0 2 1 0
    0 0 2 1
    0 0 0 2

    Here’s the second:

    3 1
    0 3

    Here’s the third:

    4 1 0
    0 4 1
    0 0 4

    Here’s the fourth:

    -1

    And here’s the fifth:

    -2 1
    0 -2

    And we can represent the whole matrix as this might-as-well-be-diagonal thing:

    First Block 0 0 0 0
    0 Second Block 0 0 0
    0 0 Third Block 0 0
    0 0 0 Fourth Block 0
    0 0 0 0 Fifth Block

    These blocks can be as small as a single number. They can be as big as however many rows and columns you like. Each individual block is some repeated number on the diagonal, and a repeated one in the row above the diagonal. You can call this the “superdiagonal”.
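
    If it helps to see that construction spelled out, here’s a minimal sketch in Python with NumPy; the function name is my own invention.

    import numpy as np

    def jordan_block(eigenvalue, size):
        # The repeated number on the diagonal, 1's on the superdiagonal.
        J = eigenvalue * np.eye(size)
        for i in range(size - 1):
            J[i, i + 1] = 1.0
        return J

    print(jordan_block(2, 4))   # the first block from the example above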

    (Mathworld, and Wikipedia, assert that sometimes the row below the diagonal — the “subdiagonal” — gets the 1’s instead of the superdiagonal. That’s fine if you like it that way, and it won’t change any of the real work. I have not seen these subdiagonal 1’s in the wild. But I admit I don’t do a lot of this field and maybe there’s times it’s more convenient.)

    Using the Jordan Canonical Form for a matrix is a lot like putting an object in a standard reference pose for photographing. This is a good metaphor. We get a Jordan Canonical Form by matrix multiplication, which works like rotating and scaling volumes of space. You can view the Jordan Canonical Form for a matrix as how you represent the original matrix from a new viewing angle that makes it easy to recognize. And this is why P is not a bad name for the matrix that does this work. We can see all this as “projecting” the matrix we started with into a new frame of reference. The new frame is maybe rotated and stretched and squashed and whatnot, compared to how we started. But it’s as valid a basis. Projecting a mathematical object from one frame of reference to another usually involves calculating something that looks like P^{-1} A P so, projection. That’s our name.

    Mathematicians will speak of “the” Jordan Canonical Form for a matrix as if there were such a thing. I don’t mean that Jordan Canonical Forms don’t exist. They exist just as much as matrices do. It’s the “the” that misleads. You can put the Jordan Blocks in any order and have as valid, and as useful, a Jordan Canonical Form. But it’s easy to swap the orders of these blocks around — it’s another matrix multiplication, and a blessedly easy one — so it doesn’t matter which form you have. Get any one and you have them all.

    I haven’t said anything about what these numbers on the diagonal are. They’re the eigenvalues of the original matrix. I hope that clears things up.

    Yeah, not to anyone who didn’t know what a Jordan Canonical Form was to start with. Rather than get into calculations let me go to well-established metaphor. Take a sample of an unknown chemical and set it on fire. Put the light from this through a prism and photograph the spectrum. There will be lines, interruptions in the progress of colors. The locations of those lines and how intense they are tell you what the chemical is made of, and in what proportions. These are much like the eigenvectors and eigenvalues of a matrix. The eigenvectors tell you what the matrix is made of, and the eigenvalues how much of the matrix is those. This stuff gets you very far in proving a lot of great stuff. And part of what makes the Jordan Canonical Form great is that you get the eigenvalues right there in neat order, right where anyone can see them.

    So! All that’s left is finding the things. The best way to find the Jordan Canonical Form for a given matrix is to become an instructor for a class on linear algebra and assign it as homework. The second-best way is to give the problem to your TA, who will type it in to Mathematica and return the result. It’s too much work to do most of the time. Almost all the stuff you could learn from having the thing in the Jordan Canonical Form you work out in the process of finding the matrix P that would let you calculate what the Jordan Canonical Form is. And once you had that, why go on?
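
    Third-best, if you haven’t got a TA handy: a computer algebra system you can run yourself. A minimal sketch with SymPy, using a small made-up matrix whose lone eigenvalue is 2:

    from sympy import Matrix

    A = Matrix([[3, 1],
                [-1, 1]])
    P, J = A.jordan_form()   # so that A equals P * J * P.inv()
    print(J)                 # Matrix([[2, 1], [0, 2]]), a single 2-by-2 Jordan Block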

    Where the Jordan Canonical Form shines is in doing proofs about what matrices can do. We can always put a square matrix into a Jordan Canonical Form. So if we want to show something is true about matrices in general, we can show that it’s true for the simpler-to-work-with Jordan Canonical Form. Then show that shifting a matrix to or from the Jordan Canonical Form doesn’t change whether the thing we’re interested in is true. It exists in that strange space: it is quite useful, but never on a specific problem.

    Oh, all right. Yes, it’s the same Camille Jordan of the Jordan Curve and also of the Jordan Curve Theorem. That fellow.

     
    • elkement (Elke Stangl) 7:09 pm on Wednesday, 13 September, 2017 Permalink | Reply

      I really like the spectroscopy metaphor for eigenvectors and eigenvalues!


      • Joseph Nebus 1:40 am on Friday, 15 September, 2017 Permalink | Reply

        Thank you. It isn’t my metaphor originally, although I don’t know where I did pick it up. Very likely either a linear algebra text or my instructor.


  • Joseph Nebus 6:00 pm on Sunday, 20 August, 2017 Permalink | Reply

    Reading the Comics, August 15, 2017: Cake Edition 


    It was again a week just busy enough that I’m comfortable splitting the Reading The Comics thread into two pieces. It’s also a week that made me think about cake. So, I’m happy with the way last week shaped up, as far as comic strips go. Other stuff could have used a lot of work. Let’s read.

    Stephen Bentley’s Herb and Jamaal rerun for the 13th depicts “teaching the kids math” by having them divide up a cake fairly. I accept this as a viable way to make kids interested in the problem. Cake-slicing problems are a corner of game theory as it addresses questions we always find interesting. How can a resource be fairly divided? How can it be divided if there is not a trusted authority? How can it be divided if the parties do not trust one another? Why do we not have more cake? The kids seem to be trying to divide the cake by volume, which could be fair. If the cake slice is a small enough wedge they can likely get near enough a perfect split by ordinary measures. If it’s a bigger wedge they’d need calculus to get the answer perfect. It’ll be well-approximated by solids of revolution. But they likely don’t need perfection.

    This is assuming the value of the icing side is not held in greater esteem than the bare-cake sides. This is not how I would value the parts of the cake. They’ll need to work something out about that, too.

    Mac King and Bill King’s Magic in a Minute for the 13th features a bit of numerical wizardry. That the dates in a three-by-three block in a calendar will add up to nine times the centered date. Why this works is good for a bit of practice in simplifying algebraic expressions. The stunt will be more impressive if you can multiply by nine in your head. I’d do that by taking ten times the given date and then subtracting the original date. I won’t say I’m fond of the idea of subtracting 23 from 230, or 17 from 170. But a skilled performer could do something interesting while trying to do this subtraction. (And if you practice the trick you can get the hang of the … fifteen? … different possible answers.)
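
    The stunt is quick to check in code, too. A minimal sketch in Python, trusting that the block stays inside one month so the rows really are seven days apart:

    def calendar_block_sum(center):
        # A 3-by-3 calendar block around `center`: rows differ by 7, columns by 1.
        offsets = (-8, -7, -6, -1, 0, 1, 6, 7, 8)
        return sum(center + k for k in offsets)

    print(calendar_block_sum(17))   # 153
    print(9 * 17)                   # 153, as the trick promises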

    Bill Amend’s FoxTrot rerun for the 14th mentions mathematics. Young nerd Jason’s trying to get back into hand-raising form. Arithmetic has considerable advantages as a thing to practice answering teachers. The questions have clear, definitely right answers, that can be worked out or memorized ahead of time, and can be asked in under half a panel’s word balloon space. I deduce the strip first ran the 21st of August, 2006, although that image seems to be broken.

    Ed Allison’s Unstrange Phenomena for the 14th suggests changes in the definition of the mile and the gallon to effortlessly improve the fuel economy of cars. As befits Allison’s Dadaist inclinations the numbers don’t work out. As it is, if you defined a New Mile of 7,290 feet (and didn’t change what a foot was) and a New Gallon of 192 fluid ounces (and didn’t change what an old fluid ounce was) then a 20 old-miles-per-old-gallon car would come out to about 21.7 new-miles-per-new-gallon. Commenter Del_Grande points out that if the New Mile were 3,960 feet then the calculation would work out. This inspires in me curiosity. Did Allison figure out the numbers that would work and then make a mistake in the final art? Or did he pick funny-looking numbers and not worry about whether they made sense? No way to tell from here, I suppose. (Allison doesn’t mention ways to get in touch on the comic’s About page and I’ve only got the weakest links into the professional cartoon community.)
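
    If you’d like to check my 21.7 without trusting me, the arithmetic is a few lines of Python; the constants are the real definitions plus Allison’s new ones.

    OLD_MILE_FEET = 5280
    OLD_GALLON_OUNCES = 128
    NEW_MILE_FEET = 7290      # Allison's New Mile
    NEW_GALLON_OUNCES = 192   # Allison's New Gallon

    feet_per_old_gallon = 20 * OLD_MILE_FEET   # a 20 old-mpg car
    new_mpg = feet_per_old_gallon * NEW_GALLON_OUNCES / OLD_GALLON_OUNCES / NEW_MILE_FEET
    print(round(new_mpg, 1))   # 21.7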

    Todd the Dinosaur in the playground. 'Kickball, here we come!' Teacher's voice: 'Hold it right there! What is 128 divided by 4?' Todd: 'Long division?' He screams until he wakes. Trent: 'What's wrong?' Todd: 'I dreamed it was the first day of school! And my teacher made me do math ... DURING RECESS!' Trent: 'Stop! That's too scary!'

    Patrick Roberts’s Todd the Dinosaur for the 15th of August, 2017. Before you snipe that there’s no room on the teacher’s worksheet for Todd to actually give an answer, remember that it’s an important part of dream-logic that it’s impossible to actually do the commanded task.

    Patrick Roberts’s Todd the Dinosaur for the 15th mentions long division as the stuff of nightmares. So it is. I guess MathWorld and Wikipedia endorse calling 128 divided by 4 long division, although I’m not sure I’m comfortable with that. This may be idiosyncratic; I’d thought of long division as where the divisor is two or more digits. A three-digit number divided by a one-digit one doesn’t seem long to me. I’d just think that was division. I’m curious what readers’ experiences have been.

     
    • goldenoj 10:00 pm on Sunday, 20 August, 2017 Permalink | Reply

      When kids are first taught division outside the multiplication table, it’s called long division. And taught with a variety of horror inspiring, place value destroying algorithms.


      • Joseph Nebus 12:37 am on Thursday, 24 August, 2017 Permalink | Reply

        Mm, all right. Maybe I’m far enough from learning long division in the first place that I’ve substituted what I think it is for what it actually is.

        I do think of it as the part of division that’s first really baffling, since dividing by (like) 26 will often involve a first guess and then a revised guess. And that’s a deep shock, I think. Up to that point I’m not sure there’s anything that can’t be done exactly right the first time without revisions being needed.


  • Joseph Nebus 6:00 pm on Friday, 18 August, 2017 Permalink | Reply
    Tags: George Berkeley, numerical integration

    The Summer 2017 Mathematics A To Z: Integration 


    One more mathematics term suggested by Gaurish for the A-To-Z today, and then I’ll move on to a couple of others. Today’s is a good one.

    Integration.

    Stand on the edge of a plot of land. Walk along its boundary. As you walk the edge pay attention. Note how far you walk before changing direction, even in the slightest. When you return to where you started consult your notes. Contained within them is the area you circumnavigated.

    If that doesn’t startle you perhaps you haven’t thought about how odd that is. You don’t ever touch the interior of the region. You never do anything like see how many standard-size tiles would fit inside. You walk a path that is as close to one-dimensional as your feet allow. And encoded in there somewhere is an area. Stare at that incongruity and you realize why integrals baffle the student so. They have a deep strangeness embedded in them.
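
    The discrete version of that boundary walk is the shoelace formula, and it’s short enough to show whole. A minimal sketch in Python, with the walk recorded as the corners you turn at:

    def shoelace_area(corners):
        # Area enclosed by a polygon, given its corners in walking order.
        total = 0.0
        for i in range(len(corners)):
            x1, y1 = corners[i]
            x2, y2 = corners[(i + 1) % len(corners)]
            total += x1 * y2 - x2 * y1
        return abs(total) / 2.0

    # A 3-by-4 rectangular plot, walked around once:
    print(shoelace_area([(0, 0), (3, 0), (3, 4), (0, 4)]))   # 12.0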

    We who do mathematics have always liked integrals. They grow, in the western tradition, out of geometry. Given a shape, what is a square that has the same area? There are shapes it’s easy to find the area for, given only straightedge and compass: a rectangle? Easy. A triangle? Just as straightforward. A polygon? If you know triangles then you know polygons. A lune, the crescent-moon shape formed by taking a circular cut out of a circle? We can do that. (If the cut is the right size.) A circle? … All right, we can’t do that, but we spent two thousand years trying before we found that out for sure. And we can do some excellent approximations.

    That bit of finding-a-square-with-the-same-area was called “quadrature”. The name survives, mostly in the phrase “numerical quadrature”. We use that to mean that we computed an integral’s approximate value, instead of finding a formula that would get it exactly. The otherwise obvious choice of “numerical integration” we use already. It describes computing the solution of a differential equation. We’re not trying to be difficult about this. Solving a differential equation is a kind of integration, and we need to do that a lot. We could recast a solving-a-differential-equation problem as a find-the-area problem, and vice-versa. But that’s bother, if we don’t need to, and so we talk about numerical quadrature and numerical integration.
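
    To show what numerical quadrature looks like in practice, here’s a minimal sketch in Python of the composite Simpson’s rule, one of the standard workhorse formulas. The function and interval are made up for illustration.

    def simpson(f, a, b, n=100):
        # Composite Simpson's rule on [a, b], using n subintervals (n even).
        h = (b - a) / n
        total = f(a) + f(b)
        for i in range(1, n):
            total += (4 if i % 2 else 2) * f(a + i * h)
        return total * h / 3

    # The area under x^2 from 0 to 1 is exactly 1/3; Simpson gets it to roundoff.
    print(simpson(lambda x: x * x, 0, 1))   # about 0.3333333333333333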

    Integrals are built on two infinities. This is part of why it took so long to work out their logic. One is the infinity of number; we find an integral’s value, in principle, by adding together infinitely many things. The other is an infinity of smallness. The things we add together are infinitesimally small. That we need to take things, each smaller than any number yet somehow not zero, and in such quantity that they add up to something, seems paradoxical. Their geometric origins had to be merged into that of arithmetic, of algebra, and it is not easy. Bishop George Berkeley made a steady name for himself in calculus textbooks by pointing this out. We have worked out several logically consistent schemes for evaluating integrals. They work, mostly, by showing that we can make the error caused by approximating the integral smaller than any margin we like. This is a standard trick, or at least it is, now that we know it.

    That “in principle” above is important. We don’t actually work out an integral by finding the sum of infinitely many, infinitely tiny, things. It’s too hard. I remember in grad school the analysis professor working out by the proper definitions the integral of 1. This is as easy an integral as you can do without just integrating zero. He escaped with his life, but it was a close scrape. He offered the integral of x as a way to test our endurance, without actually doing it. I’ve never made it through that.

    But we do integrals anyway. We have tools on our side. We can show, for example, that if a function obeys some common rules then we can use simpler formulas. Ones that don’t demand so many symbols in such tight formation. Ones that we can use in high school. Also, ones we can adapt to numerical computing, so that we can let machines give us answers which are near enough right. We get to choose how near is “near enough”. But then the machines decide how long we’ll have to wait to get that answer.

    The greatest tool we have on our side is the Fundamental Theorem of Calculus. Even the name promises it’s the greatest tool we might have. This rule tells us how to connect integrating a function to differentiating another function. If we can find a function whose derivative is the thing we want to integrate, then we have a formula for the integral. It’s that function we found. What a fantastic result.
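
    To see it work on the standard first example: the derivative of \frac{1}{3}x^3 is x^2 . So the Fundamental Theorem tells us

    \int_0^1 x^2 dx = \frac{1}{3}\cdot 1^3 - \frac{1}{3}\cdot 0^3 = \frac{1}{3}

    and no infinite sum of infinitesimally tiny things ever appears.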

    The trouble is it’s so hard to find functions whose derivatives are the thing we wanted to integrate. There are a lot of functions we can find, mind you. If we want to integrate a polynomial it’s easy. Sine and cosine and even tangent? Yeah. Logarithms? A little tedious but all right. A constant number raised to the power x? Also tedious but doable. A constant number raised to the power x^2 ? Hold on there, that’s madness. No, we can’t do that.

    There is a weird grab-bag of functions we can find these integrals for. They’re mostly ones we can find some integration trick for. An integration trick is some way to turn the integral we’re interested in into a couple of integrals we can do and then mix back together. A lot of a Freshman Calculus course is a heap of tricks we’ve learned. They have names like “u-substitution” and “integration by parts” and “trigonometric substitution”. Some of them are really exotic, such as turning a single integral into a double integral because that leads us to something we can do. And there’s something called “differentiation under the integral sign” that I don’t know of anyone actually using. People know of it because Richard Feynman, in his fun memoir What Do You Care What Other People Think: 250 Pages Of How Awesome I Was In Every Situation Ever, mentions how awesome it made him in so many situations. Mathematics, physics, and engineering nerds are required to read this at an impressionable age, so we fall in love with a technique no textbook ever mentions. Sorry.

    I’ve written about all this as if we were interested just in areas. We’re not. We like calculating lengths and volumes and, if we dare venture into more dimensions, hypervolumes and the like. That’s all right. If we understand how to calculate areas, we have the tools we need. We can adapt them to as many or as few dimensions as we need. By weighting integrals we can do calculations that tell us about centers of mass and moments of inertia, about the most and least probable values of something, about all quantum mechanics.

    As often happens, this powerful tool starts with something anyone might ponder: what size square has the same area as this other shape? All it takes is thinking seriously about that.

     
    • gaurish 7:24 am on Saturday, 19 August, 2017 Permalink | Reply

      I myself tried to write about integration a couple of years ago, but failed. This is much better. My favourite statement: “Their geometric origins had to be merged into that of arithmetic, of algebra, and it is not easy.”


      • Joseph Nebus 12:33 am on Thursday, 24 August, 2017 Permalink | Reply

        Aw, thank you kindly. It may be worth your trying to write again. We all come to new perspectives with time, and a variety of views are good for people trying to find one that helps them understand a thing.


      • elkement (Elke Stangl) 7:01 pm on Wednesday, 30 August, 2017 Permalink | Reply

        My favorite statement from this article is: “Integrals are built on two infinities. This is part of why it took so long to work out their logic. “


        • Joseph Nebus 1:14 am on Friday, 8 September, 2017 Permalink | Reply

          That was one of those happy sentences that’s really the whole essay, and everything else was just the run-up and the relaxation from. Have one of those and the rest of the writing is easy.


  • Joseph Nebus 6:00 pm on Wednesday, 16 August, 2017 Permalink | Reply
    Tags: rank

    The Summer 2017 Mathematics A To Z: Height Function (elliptic curves) 


    I am one letter closer to the end of Gaurish’s main block of requests. They’re all good ones, mind you. This gets me back into elliptic curves and Diophantine equations. I might be writing about the wrong thing.

    Height Function.

    My love’s father has a habit of asking us to rate our hobbies. This turned into a new running joke over a family vacation this summer. It’s a simple joke: I shuffled the comparables. “Which is better, Bon Jovi or a roller coaster?” It’s still a good question.

    But as genial yet nasty as the spoof is, my love’s father asks natural questions. We always want to compare things. When we form a mathematical construct we look for ways to measure it. There’s typically something. We’ll put one together. We call this a height function.

    We start with an elliptic curve. The coordinates of the points on this curve satisfy some equation. Well, there are many equations they satisfy. We pick one representation for convenience. The convenient thing is to have an easy-to-calculate height. We’ll write the equation for the curve as

    y^2 = x^3 + Ax + B

    Here both ‘A’ and ‘B’ are some integers. This form might be unique, depending on whether a slightly fussy condition on prime numbers holds. (Specifically, if ‘p’ is a prime number and p^4 divides into ‘A’, then p^6 must not divide into ‘B’. Yes, I know you realized that right away. But I write to a general audience, some of whom are learning how to see these things.) Then the height of this curve is whichever is the larger number: four times the cube of the absolute value of ‘A’, or 27 times the square of ‘B’. I ask you to just run with it. I don’t know the implications of the height function well enough to say why, oh, 25 times the square of ‘B’ wouldn’t do as well. The usual reason for something like that is that some obvious manipulation makes the 27 appear right away, or disappear right away.
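
    The height itself, at least, is no trouble to compute. A minimal sketch in Python, with ‘A’ and ‘B’ as plain integers:

    def curve_height(A, B):
        # Height of y^2 = x^3 + A*x + B: the larger of 4*|A|^3 and 27*B^2.
        return max(4 * abs(A) ** 3, 27 * B ** 2)

    print(curve_height(-1, 0))   # 4, for the curve y^2 = x^3 - x
    print(curve_height(0, -2))   # 108, for the curve y^2 = x^3 - 2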

    This idea of height feeds into a measure called rank. “Rank” is a term the young mathematician encounters first while learning matrices. It’s the number of rows in a matrix that aren’t equal to some sum or multiple of other rows. That is, it’s how many different things there are among a set. You can see why we might find that interesting. So many topics have something called “rank” and it measures how many different things there are in a set of things. In elliptic curves, the rank is a measure of how complicated the curve is. We can imagine the rational points on the elliptic curve as things generated by some small set of starter points. The starter points have to be of infinite order. Starter points that aren’t don’t count toward the rank. Please don’t worry about what “infinite order” means here. I only mention this infinite-order business because if I don’t then something I have to say about two paragraphs from here will sound daft. So, the rank is how many of these starter points you need to generate the elliptic curve. (WARNING: Call them “generating points” or “generators” during your thesis defense.)

    There’s no known way of guessing what the rank is if you just know ‘A’ and ‘B’. There are algorithms that can calculate the rank given a particular ‘A’ and ‘B’. But it’s not something like the quadratic formula where you can just do a quick calculation and know what you’re looking for. We don’t even know if the algorithms we have will work for every elliptic curve.

    We think that there’s no limit to the rank of elliptic curves. We don’t know this. We know there exist curves with ranks as high as 28. They seem to be rare [*]. I don’t know if that’s proven. But we do know there are elliptic curves with rank zero. A lot of them, in fact. (See what I meant two paragraphs back?) These are the elliptic curves that have only finitely many rational points on them.

    And there’s a lot of those. There’s a well-respected conjecture that the average rank, of all the elliptic curves there are, is ½. It might be. What we have been able to prove is that the average rank is less than or equal to 1.17. Also that it should be larger than zero. So we’re maybe closing in on the ½ conjecture? At least we know something. I admit in writing this essay I’ve started wondering what we do know of elliptic curves.

    What do the height, and through it the rank, get us? I worry I’m repeating myself. By themselves they give us families of elliptic curves. Shapes that are similar in a particular and not-always-obvious way. And they feed into the Birch and Swinnerton-Dyer conjecture, which is the hipster’s Riemann Hypothesis. That is, it’s this big, unanswered, important problem that would, if answered, tell us things about a lot of questions that I’m not sure can be concisely explained. At least not why they’re interesting. We know some special cases, at least. Wikipedia tells me nothing’s proved for curves with rank greater than 1. Humanity’s ignorance on this point makes me feel slightly better pondering what I don’t know about elliptic curves.

    (There are some other things within the field of elliptic curves called height functions. There’s particularly a height of individual points. I was unsure which height Gaurish found interesting so chose one. The other starts by measuring something different; it views, for example, \frac{1}{2} as having a lower height than does \frac{51}{101} , even though the numbers are quite close in value. It develops along similar lines, trying to find classes of curves with similar behavior. And it gets into different unsolved conjectures. We have our ideas about how to think of fields.)


    [*] Wikipedia seems to suggest we only know of one, provided by Professor Noam Elkies in 2006, and let me quote it in full. I apologize that it isn’t in the format I suggested at top was standard. Elkies way outranks me academically so we have to do things his way:

    y^2 + xy + y = x^3 - x^2 -  20,067,762,415,575,526,585,033,208,209,338,542,750,930,230,312,178,956,502 x + 34,481,611,795,030,556,467,032,985,690,390,720,374,855,944,359,319,180,361,266,008,296,291,939,448,732,243,429

    I can’t figure how to get WordPress to present that larger. I sympathize. I’m tired just looking at an equation like that. This page lists records of known elliptic curve ranks. I don’t know if the lack of any records more recent than 2006 reflects the page not having been updated or nobody having found a rank-29 curve. I fully accept the field might be more difficult than even doing maintenance on a web page’s content is.

     
    • gaurish 6:45 pm on Thursday, 17 August, 2017 Permalink | Reply

      Yet another beautiful post. You may like this lecture about the BSD conjecture: https://youtu.be/2gbQWIzb6Dg


      • Joseph Nebus 3:05 am on Saturday, 19 August, 2017 Permalink | Reply

        Thank you! And also thank you for the link. I never think to look for videos that would explain topics. In many ways I still think of the Internet as being in about 1998, when video was nothing more than a theoretical possibility that might someday finish loading before it glitches out.


  • Joseph Nebus 6:00 pm on Tuesday, 15 August, 2017 Permalink | Reply
    Tags: accountants, octopus

    Reading the Comics, August 12, 2017: August 10 and 12 Edition 


    The other half of last week’s comic strips didn’t have any prominent pets in them. The six of them appeared on two days, though, so that’s as good as a particular theme. There’s also some π talk, but there’s enough of that I don’t want to overuse Pi Day as an edition name.

    Mark Anderson’s Andertoons for the 10th is a classroom joke. It’s built on a common problem in teaching by examples. The student can make the wrong generalization. I like the joke. There’s probably no particular reason seven was used as the example number to have zero interact with. Maybe it just sounded funnier than the other numbers under ten that might be used.

    Mike Baldwin’s Cornered for the 10th uses a chalkboard of symbols to imply deep thinking. The symbols on the board look to me like they’re drawn from some real mathematics or physics source. There’s force equations appropriate for gravity or electric interactions. I can’t explain the whole board, but that’s not essential to work out anyway.

    Marty Links’s Emmy Lou for the 17th of March, 1976 was rerun the 10th of August. It name-drops the mathematics teacher as the scariest of the set. Fortunately, Emmy Lou went to her classes in a day before Rate My Professor was a thing, so her teacher doesn’t have to hear about this.

    Scott Hilburn’s The Argyle Sweater for the 12th is a timely reminder that Scott Hilburn has way more Pi Day jokes than there are Pi Days to use them on. Also he has octopus jokes. It’s up to you to figure out whether the etymology of the caption makes sense.

    John Zakour and Scott Roberts’s Working Daze for the 12th presents the “accountant can’t do arithmetic” joke. People who ought to be good at arithmetic being lousy at figuring tips is an ancient joke. I’m a touch surprised that Christopher Miller’s American Cornball: A Laffopedic Guide to the Formerly Funny doesn’t have an entry for tips (or mathematics). But that might reflect Miller’s mission to catalogue jokes that have fallen out of the popular lexicon, not merely that are old.

    Michael Cavna’s Warped for the 12th is also a Pi Day joke that couldn’t wait. It’s cute and should fit on any mathematics teacher’s office door.

     
  • Joseph Nebus 6:00 pm on Monday, 14 August, 2017 Permalink | Reply

    The Summer 2017 Mathematics A To Z: Gaussian Primes 


    Once more do I have Gaurish to thank for the day’s topic. (There’ll be two more chances this week, providing I keep my writing just enough ahead of deadline.) This one doesn’t touch category theory or topology.

    Gaussian Primes.

    I keep touching on group theory here. It’s a field that’s about what kinds of things can work like arithmetic does. A group is a set of things that you can add together. At least, you can do something that works like adding regular numbers together does. A ring is a set of things that you can add and multiply together.

    There are many interesting rings. Here’s one. It’s called the Gaussian Integers. They’re made of numbers we can write as a + b\imath , where ‘a’ and ‘b’ are some integers. \imath is what you figure, that number that multiplied by itself is -1. These aren’t the complex-valued numbers, you notice, because ‘a’ and ‘b’ are always integers. But you add them together the way you add complex-valued numbers together. That is, a + b\imath plus c + d\imath is the number (a + c) + (b + d)\imath . And you multiply them the way you multiply complex-valued numbers together. That is, a + b\imath times c + d\imath is the number (a\cdot c - b\cdot d) + (a\cdot d + b\cdot c)\imath .
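
    That multiplication rule is easy to try out. A minimal sketch in Python, keeping the components as plain integers:

    def gauss_mul(a, b, c, d):
        # (a + b*i) times (c + d*i), returned as a pair (real part, imaginary part).
        return (a * c - b * d, a * d + b * c)

    print(gauss_mul(1, 1, 1, -1))   # (2, 0): that is, (1 + i)(1 - i) = 2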

    We created something that has addition and multiplication. It picks up subtraction for free. It doesn’t have division. We can create rings that do, but this one won’t, any more than regular old integers have division. But we can ask what other normal-arithmetic-like stuff these Gaussian integers do have. For instance, can we factor numbers?

    This isn’t an obvious one. No, we can’t expect to be able to divide one Gaussian integer by another. But we can’t expect to divide a regular old integer by another, not and get an integer out of it. That doesn’t mean we can’t factor them. It means we divide the regular old integers into a couple classes. There’s prime numbers. There’s composites. There’s the unit, the number 1. There’s zero. We know prime numbers; they’re 2, 3, 5, 7, and so on. Composite numbers are the ones you get by multiplying prime numbers together: 4, 6, 8, 9, 10, and so on. 1 and 0 are off on their own. Leave them there. We can’t divide any old integer by any old integer. But we can say an integer is equal to this string of prime numbers multiplied together. This gives us a handle by which we can prove a lot of interesting results.

    We can do the same with Gaussian integers. We can divide them up into Gaussian primes, Gaussian composites, units, and zero. The words mean what they mean for regular old integers. A Gaussian composite can be factored into the multiples of Gaussian primes. Gaussian primes can’t be factored any further.

    If we know what the prime numbers are for regular old integers we can tell whether something’s a Gaussian prime. Admittedly, knowing all the prime numbers is a challenge. But a Gaussian integer a + b\imath will be prime whenever a couple simple-to-test conditions are true. First: ‘a’ and ‘b’ are both not zero, and a^2 + b^2 is a prime number. So, for example, 5 + 4\imath is a Gaussian prime.

    You might ask, hey, would -5 - 4\imath also be a Gaussian prime? That’s also got components that are integers, and the squares of them add up to a prime number (41). Well-spotted. Gaussian primes appear in quartets. If a + b\imath is a Gaussian prime, so is -a -b\imath . And so are -b + a\imath and b - a\imath .

    There’s another group of Gaussian primes. These are the numbers a + b\imath where either ‘a’ or ‘b’ is zero. Then the other one is, if positive, three more than a whole multiple of four. If it’s negative, then it’s three less than a whole multiple of four. So ‘3’ is a Gaussian prime, as is -3, and as is 3\imath and so is -3\imath .
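
    Both tests fit in a few lines. A minimal sketch in Python, with a deliberately naive primality check standing in for knowing all the prime numbers:

    def is_prime(n):
        if n < 2:
            return False
        return all(n % k for k in range(2, int(n ** 0.5) + 1))

    def is_gaussian_prime(a, b):
        # The two conditions described above.
        if a != 0 and b != 0:
            return is_prime(a * a + b * b)
        n = abs(a + b)   # whichever component is not zero
        return n % 4 == 3 and is_prime(n)

    print(is_gaussian_prime(5, 4))   # True: 25 + 16 = 41, a prime
    print(is_gaussian_prime(3, 0))   # True: 3 is prime and three more than a multiple of four
    print(is_gaussian_prime(2, 0))   # False: 2 = (1 + i)(1 - i)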

    This has strange effects. Like, ‘3’ is a prime number in the regular old scheme of things. It’s also a Gaussian prime. But familiar other prime numbers like ‘2’ and ‘5’? Not anymore. Two is equal to (1 + \imath) \cdot (1 - \imath) ; both of those terms are Gaussian primes. Five is equal to (2 + \imath) \cdot (2 - \imath) . There are similar shocking results for 13. But, roughly, the world of composites and prime numbers translates into Gaussian composites and Gaussian primes. In this slightly exotic structure we have everything familiar about factoring numbers.

    You might have some nagging thoughts. Like, sure, two is equal to (1 + \imath) \cdot (1 - \imath) . But isn’t it also equal to (1 + \imath) \cdot (1 - \imath) \cdot \imath \cdot (-\imath) ? One of the important things about prime numbers is that every composite number is the product of a unique string of prime numbers. Do we have to give that up for Gaussian integers?

    Good nag. But no; the doubt is coming about because you’ve forgotten the difference between “the positive integers” and “all the integers”. If we stick to positive whole numbers then, yeah, (say) ten is equal to two times five and no other combination of prime numbers. But suppose we have all the integers, positive and negative. Then ten is equal to either two times five or it’s equal to negative two times negative five. Or, better, it’s equal to negative one times two times negative one times five. Or either of those times any even number of negative ones.

    Remember that bit about separating ‘one’ out from the world of primes and composites? That’s because the number one screws up these unique factorizations. You can always toss in extra factors of one, to taste, without changing the product of something. If we have positive and negative integers to use, then negative one does almost the same trick. We can toss in any even number of extra negative ones without changing the product. This is why we separate “units” out of the numbers. They’re not part of the prime factorization of any numbers.

    For the Gaussian integers there are four units. 1 and -1, \imath and -\imath . They are neither primes nor composites, and we don’t worry about how they would otherwise multiply the number of factorizations we get.

    But let me close with a neat, easy-to-understand puzzle. It’s called the moat-crossing problem. In the regular old integers it’s this: imagine that the prime numbers are islands in a dangerous sea. You start on the number ‘2’. Imagine you have a board that can be set down and safely crossed, then picked up to be put down again. Could you get from the start and go off to safety, which is infinitely far away? If your board is some, fixed, finite length?

    No, you can’t. The problem amounts to how big the gap between one prime number and the next largest prime number can be. It turns out there’s no limit to that. That is, you give me a number, as small or as large as you like. I can find some prime number that’s more than your number away from the next prime after it. There are infinitely large gaps between prime numbers.

    Gaussian primes, though? Since a Gaussian prime might have nearest neighbors in any direction? Nobody knows. We know there are arbitrarily large gaps. Pick a moat size; we can (eventually) find a Gaussian prime that’s at least that far away from its nearest neighbors. But this does not say whether it’s impossible to get from the smallest Gaussian primes — 1 + \imath and its companions -1 + \imath and on — infinitely far away. We know there’s a moat of width 6 separating the origin of things from infinity. We don’t know that there’s bigger ones.

    You’re not going to solve this problem. Unless I have more brilliant readers than I know about; if I have ones who can solve this problem then I might be too intimidated to write anything more. But there is surely a pleasant pastime, maybe a charming game, to be made from this. Try finding the biggest possible moats around some set of Gaussian prime islands.

    Ellen Gethner, Stan Wagon, and Brian Wick’s A Stroll Through the Gaussian Primes describes this moat problem. It also sports some fine pictures of where the Gaussian primes are and what kinds of moats you can find. If you don’t follow the reasoning, you can still enjoy the illustrations.

     
  • Joseph Nebus 6:00 pm on Sunday, 13 August, 2017 Permalink | Reply
    Tags: dimensional analysis, Dog Eat Doug, feminism, Sylvia

    Reading the Comics, August 9, 2017: Pets Doing Mathematics Edition 


    I had just enough comic strips to split this week’s mathematics comics review into two pieces. I like that. It feels so much to me like I have better readership when I have many days in a row with posting something, however slight. The A to Z is good for three days a week, and if comic strips can fill two of those other days then I get to enjoy a lot of regular publication days. … Though last week I accidentally set the Sunday comics post to appear on Monday, just before the A To Z post. I’m curious how that affected my readers. That nobody said anything is ominous.

    Border collies are, as we know, highly intelligent. (Looking over a chalkboard diagramming 'fetch', with symbols.) 'There MUST be some point to it, but I guess we don't have the mathematical tools to crack it at the moment.'

    Niklas Eriksson’s Carpe Diem for the 7th of August, 2017. I have to agree the border collies haven’t worked out the point of fetch. I also question whether they’ve worked out the simple ballistics of the tossed stick. If the variables mean what they suggest they mean, then dimensional analysis suggests they’ve got at least three fiascos going on here. Maybe they have an idiosyncratic use for variables like ‘v’.

    Niklas Eriksson’s Carpe Diem for the 7th of August uses mathematics as the signifier for intelligence. I’m intrigued by how the joke goes a little different: while the border collies can work out the mechanics of a tossed stick, they haven’t figured out what the point of fetch is. But working out people’s motivations gets into realms of psychology and sociology and economics. There the mathematics might not be harder, but knowing that one is calculating a relevant thing is. (Eriksson’s making a running theme of the intelligence of border collies.)

    Nicole Hollander’s Sylvia rerun for the 7th tosses off a mention that “we’re the first generation of girls who do math”. And that therefore there will be a cornucopia of new opportunities and good things to come to them. There’s a bunch of social commentary in there. One is the assumption that mathematics skill is a liberating thing. Perhaps it is the gloom of the times but I doubt that an oppressed group developing skills causes them to be esteemed. It seems more likely to me to make the skills become devalued. Social justice isn’t a matter of good exam grades.

    Then, too, it’s not as though women haven’t done mathematics since forever. Every mathematics department on a college campus has some faded posters about Emmy Noether and Sofia Kovalevskaya and maybe Sophie Germain. Probably high school mathematics rooms too. Again perhaps it’s the gloom of the times. But I keep coming back to the goddess’s cynical dismissal of all this young hope.

    Mort Walker and Dik Browne’s Hi and Lois for the 10th of February, 1960 and rerun the 8th portrays arithmetic as a grand-strategic imperative. Well, it frames education as a strategic imperative. But arithmetic is the thing Dot uses. I imagine that’s because it is so easy to teach as a series of trivia, and to quiz about. And it fits in a single panel with room to spare.

    Dot: 'Now try it again: two and two is four.' Trixie: 'Fwee!' Dot: 'You're not TRYING! Do you want the Russians to get AHEAD of US!?' Trixie looks back and thinks: 'I didn't even know there was anyone back there!'

    Mort Walker and Dik Browne’s Hi and Lois for the 10th of February, 1960 and rerun the 8th of August, 2017. Remember: you’re only young once, but you can be geopolitically naive forever!

    Paul Trap’s Thatababy for the 8th is not quite the anthropomorphic-numerals joke of the week. It circles around that territory, though, giving a couple of odd numbers some personality.

    Brian Anderson’s Dog Eat Doug for the 9th finally justifies my title for this essay, as cats ponder mathematics. Well, they ponder quantum mechanics. But it’s nearly impossible to have a serious thought about that without pondering its mathematics. This doesn’t mean calculation, mind you. It does mean understanding what kinds of functions have physical importance. And what kinds of things one can do to functions. Understand them and you can discuss quantum mechanics without being mathematically stupid. And there are enough ways to be stupid about quantum mechanics that cutting down any of them is progress.

     
  • Joseph Nebus 6:00 pm on Friday, 11 August, 2017 Permalink | Reply
    Tags: , , computer programming, contravariant, covariant, , functors, , ,   

    The Summer 2017 Mathematics A To Z: Functor 


    Gaurish gives me another topic for today. I’m no longer sure whether Gaurish hopes I’ll become a topology blogger or a category theory blogger. I have the last laugh, though. I’ve wanted to get better-versed in both fields and there’s nothing like explaining something to learn about it.

    Functor.

    So, category theory. It’s a foundational field. It talks about stuff that’s terribly abstract. This means it’s powerful, but it can be hard to think of interesting examples. I’ll try, though.

    It starts with categories. These have three parts. The first part is a set of things. (There always is.) The second part is a collection of matches between pairs of things in the set. They’re called morphisms. The third part is a rule that lets us combine two morphisms into a new, third one. That is: suppose ‘a’, ‘b’, and ‘c’ are things in the set. Then there’s a morphism that matches a \rightarrow b , and a morphism that matches b \rightarrow c . And we can combine them into another morphism that matches a \rightarrow c . So we have a set of things, and a collection of things we can do with those things. And that collection of things-we-can-do has a structure of its own.
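    If a concrete toy helps, here’s a bare-bones sketch in Python (my choice of language throughout), under the assumption that the things are sets of numbers and the morphisms are ordinary functions; the combining rule is just doing one function and then the other:

```python
def compose(f, g):
    # The combining rule: do f, then g.
    return lambda x: g(f(x))

a, b, c = {1, 2, 3}, {2, 4, 6}, {6, 12, 18}   # things in the set

double = lambda x: 2 * x   # a morphism matching a -> b
triple = lambda x: 3 * x   # a morphism matching b -> c

sextuple = compose(double, triple)   # their combination, matching a -> c
print(sextuple(2))   # 12: first 2 -> 4, then 4 -> 12
```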

    This describes a lot of stuff. Group theory fits seamlessly into this description. Most of what we do with numbers is a kind of group theory. Vector spaces do too. Most of what we do with analysis has vector spaces underneath it. Topology does too. Most of what we do with geometry is an expression of topology. So you see why category theory is so foundational.

    Functors enter our picture when we have two categories. Or more. They’re about the ways we can match up categories. But let’s start with two categories. One of them I’ll name ‘C’, and the other, ‘D’. A functor has to match everything that’s in the set of ‘C’ to something that’s in the set of ‘D’.

    And it does more. It has to match every morphism between things in ‘C’ to some other morphism, between corresponding things in ‘D’. It’s got to do it in a way that satisfies that combining, too. That is, suppose that ‘f’ and ‘g’ are morphisms for ‘C’. And that ‘f’ and ‘g’ combine to make ‘h’. Then, the functor has to match ‘f’ and ‘g’ and ‘h’ to some morphisms for ‘D’. The combination of whatever ‘f’ matches to and whatever ‘g’ matches to has to be whatever ‘h’ matches to.
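    Here’s a sketch of the classic example, the ‘list’ functor, in Python; all the names in it are my own inventions for illustration. It matches each thing with lists of that thing, and each morphism ‘f’ with “apply f to every entry”. The point is that it respects the combining:

```python
def F(f):
    # Match a morphism f with a morphism on lists.
    return lambda xs: [f(x) for x in xs]

def compose(f, g):
    return lambda x: g(f(x))

f = lambda n: n + 1
g = lambda n: 2 * n
h = compose(f, g)

xs = [1, 2, 3]
print(F(h)(xs))          # [4, 6, 8]
print(F(g)(F(f)(xs)))    # [4, 6, 8]: F of the combination is the
                         # combination of the F's
```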

    This might sound to you like a homomorphism. If it does, I admire your memory or mathematical prowess. Functors are about matching one thing to another in a way that preserves structure. Structure is the way that sets of things can interact. We naturally look for stuff made up of different things that have the same structure. Yes, functors are themselves a category. That is, you can make a brand-new category whose set of things are the functors between two other categories. This is a good spot to pause while the dizziness passes.

    There are two kingdoms of functor. You tell them apart by what they do with the morphisms. Here again I’m going to need my categories ‘C’ and ‘D’. I need a morphism for ‘C’. I’ll call that ‘f’. ‘f’ has to match something in the set of ‘C’ to something in the set of ‘C’. Let me call the first something ‘a’, and the second something ‘b’. That’s all right so far? Thank you.

    Let me call my functor ‘F’. ‘F’ matches all the elements in ‘C’ to elements in ‘D’. And it matches all the morphisms on the elements in ‘C’ to morphisms on the elements in ‘D’. So if I write ‘F(a)’, what I mean is look at the element ‘a’ in the set for ‘C’. Then look at what element in the set for ‘D’ the functor matches with ‘a’. If I write ‘F(b)’, what I mean is look at the element ‘b’ in the set for ‘C’. Then pick out whatever element in the set for ‘D’ gets matched to ‘b’. If I write ‘F(f)’, what I mean is to look at the morphism ‘f’ between elements in ‘C’. Then pick out whatever morphism between elements in ‘D’ it gets matched with.

    Here’s where I’m going with this. Suppose my morphism ‘f’ matches ‘a’ to ‘b’. Does the functor of that morphism, ‘F(f)’, match ‘F(a)’ to ‘F(b)’? Of course, you say, what else could it do? And the answer is: why couldn’t it match ‘F(b)’ to ‘F(a)’?

    No, it doesn’t break everything. Not if you’re consistent about swapping the order of the matchings. The normal everyday order, the one you’d thought couldn’t have an alternative, is a “covariant functor”. The crosswise order, this second thought, is a “contravariant functor”. Covariant and contravariant are distinctions that weave through much of mathematics. They particularly appear through tensors and the geometry they imply. In that introduction they tend to be difficult, even mean, creations, since in regular old Euclidean space they don’t mean anything different. They’re different for non-Euclidean spaces, and that’s important and valuable. The covariant versus contravariant difference is easier to grasp here.
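    Here’s a sketch of a contravariant functor, with everything in it assumed for illustration: match each set with the functions out of it into some fixed set, and match each morphism with precomposition. Watch the combining order swap:

```python
def F(f):
    # Match f with "precompose by f". The arrow turns around:
    # f matches a -> b, but F(f) takes functions-on-b to functions-on-a.
    return lambda phi: (lambda x: phi(f(x)))

compose = lambda f, g: (lambda x: g(f(x)))

f = lambda n: n + 1    # a morphism a -> b
g = lambda n: 2 * n    # a morphism b -> c
h = compose(f, g)      # the combined morphism a -> c

phi = lambda n: n * n  # a function on c
print(F(h)(phi)(3))            # 64
print(F(f)(F(g)(phi))(3))      # 64: F(h) is F(g) and THEN F(f),
                               # the crosswise order
```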

    Functors work their way into computer science. The avenue here is in functional programming. That’s a method of programming in which instead of the normal long list of commands, you write a single line of code that holds like fourteen “->” symbols that makes the computer stop and catch fire when it encounters a bug. The advantage is that when you have the code debugged it’s quite speedy and memory-efficient. The disadvantage is if you have to alter the function later, it’s easiest to throw everything out and start from scratch, beginning from vacuum-tube-based computing machines. But it works well while it does. You just have to get the hang of it.

     
    • gaurish 9:55 am on Saturday, 12 August, 2017 Permalink | Reply

      Can you suggest a nice introductory book on category theory for beginners? What I understand is that they generalize the notions defined concretely in algebra (which were motivated by arithmetic), but I lack any concrete understanding.

      Liked by 1 person

    • mathtuition88 2:56 pm on Sunday, 13 August, 2017 Permalink | Reply

      “Categories for the Working Mathematician” by Mac Lane is good and foundational (recommended for serious readers). Another book “Cakes, Custard and Category Theory” by Eugenia Cheng is accessible even to laymen.

      Like

      • Joseph Nebus 5:08 pm on Sunday, 13 August, 2017 Permalink | Reply

        I’m grateful to MathTuition88 for the suggestion. I’m afraid I’m poorly-enough read in category theory I don’t have any good idea where beginners ought to start.

        Liked by 1 person

    • elkement (Elke Stangl) 1:59 pm on Friday, 18 August, 2017 Permalink | Reply

      May I ask a computer science question ;-) ? I tried to understand how this functor from category theory would be mapped onto (Ha – another level of mapping!! ;-)) a functor in C++ but was not very successful. In this discussion https://stackoverflow.com/questions/356950/c-functors-and-their-uses somebody says that a functor in category theory ‘has nothing to do with the C++ concept of functor’.

      Would you agree? Or if not, can you maybe explain how an ‘implementation’ of your functor example would look like in C++ (or some pseudo-code in some language…). Or keep that in mind for a future post if you ever want to return to that subject!

      Anyway: I really enjoy this series!!

      Like

      • Joseph Nebus 3:29 am on Saturday, 19 August, 2017 Permalink | Reply

        Hoo, boy, that’s a good question. I’m afraid I don’t have proper computer science training; what I do know is what I’ve picked up trying to do specific problems. In my defense, many of them lately have been database-related stuff that can benefit from these tools. Any time I need to impress the boss, I do a crash course of reading Stack Overflow for a couple weeks and rewrite some core bit of code until it breaks differently. But I will try, with the warning that I am speaking outside my actual proper training.

        To me, I see a reasonably straightforward connection between category-theory functors and C++ functors. We can look at functors as ways to match unary functions to other unary functions. This seems to me a good bit of what we’d do with C++ functors, describing ways to manipulate data without needing to know much about what the data is. If I may offer a counterbalancing Stack Overflow thread, https://stackoverflow.com/questions/2030863/in-functional-programming-what-is-a-functor has several people who seem to know what they’re talking about arguing in favor of programming-functors being enough like category-theory-functors to be enlightening.

        My understanding is that the functors of programming language Haskell are more obviously category-theory functors. But I haven’t done anything in Haskell, so I can’t say what is particularly good about doing this.

        Liked by 1 person

        • elkement (Elke Stangl) 6:56 pm on Wednesday, 30 August, 2017 Permalink | Reply

          Thanks, that was very helpful! Reading this discussion on Stack Overflow reminded me of Lisp – and then I googled for Lisp + Functors … https://en.wikipedia.org/wiki/Function_object#In_Lisp_and_Scheme – think I got it now: Quote from Wikipedia: “Many uses of functors in languages like C++ are simply emulations of the missing closure constructor. Since the programmer cannot directly construct a closure, they must define a class that has all of the necessary state variables, and also a member function.”
          It’s funny that the concept of closure feels rather natural in Lisp – not that complicated, or at least less complicated than the explanations of Functor sound…

          Like

          • Joseph Nebus 1:11 am on Friday, 8 September, 2017 Permalink | Reply

            Thank you, and let me offer something I keep not being able to believe I forget. John D Cook offers, among his many Twitter feeds, the Functor Fact of the Day account: https://twitter.com/FunctorFact

            It does go through phases of being about category theory directly and phases of being about programming, which helps me feel better thinking of what I’ve said about functors.

            Liked by 1 person

  • Joseph Nebus 6:00 pm on Wednesday, 9 August, 2017 Permalink | Reply
    Tags: , , , , , , , ,   

    The Summer 2017 Mathematics A To Z: Elliptic Curves 


    Gaurish, of the For The Love Of Mathematics blog, gives me another subject today. It’s one that isn’t about ellipses. Sad to say it’s also not about elliptic integrals. This is sad to me because I have a cute little anecdote about a time I accidentally gave my class an impossible problem. I did apologize. No, nobody solved it anyway.

    Elliptic Curves.

    Elliptic Curves start, of course, with polynomials. Particularly, they’re polynomials with two variables. We call them ‘x’ and ‘y’ because we have no reason to be difficult. They’re of at most third degree. That is, we can have terms like ‘x’ and ‘y^2’ and ‘x^2 y’ and ‘y^3’. Something with higher powers, like ‘x^4’ or ‘x^2 y^2’ — a fourth power, all together — is right out. Doesn’t matter. Start from this and we can do some slick changes of variables so that we can rewrite it to look like this:

    y^2 = x^3 + Ax + B

    Here, ‘A’ and ‘B’ are some numbers that don’t change for this particular curve. Also, we need it to be true that 4A^3 + 27B^2 doesn’t equal zero. It avoids problems. What we’ll be looking at are coordinates, values of ‘x’ and ‘y’ together which make this equation true. That is, it’s points on the curve. If you pick some real numbers ‘A’ and ‘B’ and draw all the values of ‘x’ and ‘y’ that make the equation true you get … well, there are different shapes. They all look like those microscope photos of a water drop emerging and falling from a tap, only rotated clockwise ninety degrees.

    So. Pick any of these curves that you like. Pick a point. I’m going to name your point ‘P’. Now pick a point once more. I’m going to name that point ‘Q’. Now draw a line from P through Q. Keep drawing it. It’ll cross the original elliptic curve again. And that point is … not actually special. What is special is the reflection of that point. That is, the same x-coordinate, but flip the plus or minus sign for the y-coordinate. (WARNING! Do not call it “the reflection” at your thesis defense! Call it the “conjugate” point. It means “reflection”.) Your elliptic curve will be symmetric around the x-axis. If, say, the point with x-coordinate 4 and y-coordinate 3 is on the curve, so is the point with x-coordinate 4 and y-coordinate -3. So that reflected point is … something special.

    Kind of a curved-out less-than-sign shape.

    y^2 = x^3 - 1 . The water drop bulges out from the surface.

    This lets us do something wonderful. We can think of this reflected point as the sum of your ‘P’ and ‘Q’. You can ‘add’ any two points on the curve and get a third point. This means we can do something that looks like addition for points on the elliptic curve. And this means the points on this curve are a group, and we can bring all our group-theory knowledge to studying them. It’s a commutative group, too; ‘P’ added to ‘Q’ leads to the same point as ‘Q’ added to ‘P’.

    Let me head off some clever thoughts that make fair objections. What if ‘P’ and ‘Q’ are already reflections, so the line between them is vertical? That never touches the original elliptic curve again, right? Yeah, fair complaint. We patch this by saying that there’s one more point, ‘O’, that’s off “at infinity”. Where is infinity? It’s wherever your vertical lines end. Shut up, this can too be made rigorous. In any case it’s a common hack for this sort of problem. When we add that, everything’s nice. The ‘O’ serves the role in this group that zero serves in arithmetic: the sum of point ‘O’ and any point ‘P’ is going to be ‘P’ again.

    Second clever thought to head off: what if ‘P’ and ‘Q’ are the same point? There are infinitely many lines that go through a single point, so how do we pick one to find an intersection with the elliptic curve? Fair question. In that case we pick the tangent line to the elliptic curve at ‘P’, and carry on as before.
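    With both special cases patched, the whole addition rule fits in a few lines. Here’s a minimal sketch over the real numbers, with an ‘A’ and ‘B’ I picked arbitrarily and with None standing in for the point ‘O’:

```python
A, B = -1.0, 1.0   # the curve y^2 = x^3 - x + 1

def add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and y1 == -y2:
        return None                       # reflections sum to 'O'
    if P == Q:
        s = (3 * x1 * x1 + A) / (2 * y1)  # slope of the tangent at P
    else:
        s = (y2 - y1) / (x2 - x1)         # slope of the line through P, Q
    x3 = s * s - x1 - x2                  # the third intersection ...
    y3 = s * (x1 - x3) - y1               # ... reflected across the x-axis
    return (x3, y3)

# (0, 1) and (1, 1) are both on the curve; so is their sum:
print(add((0.0, 1.0), (1.0, 1.0)))   # (-1.0, -1.0)
```

(Note ‘B’ only pins down which points are on the curve; the slope formulas don’t need it.)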

    The curved-out less-than-sign shape has a noticeable c-shaped bulge on the end.

    y^2 = x^3 + 1 . The water drop is close to breaking off, but surface tension has not yet pinched off the falling form.

    There’s more. What kind of number is ‘x’? Or ‘y’? I’ll bet that you figured they were real numbers. You know, ordinary stuff. I didn’t say what they were, so I left it to our instinct, and that usually runs toward real numbers. Those are what I meant, yes. But we didn’t have to. ‘x’ and ‘y’ could be in other sets of numbers too. They could be complex-valued numbers. They could be just the rational numbers. They could even be part of a finite collection of possible numbers. As long as the equation y^2 = x^3 + Ax + B stays meaningful (and some technical points are met) we can carry on. The elliptic curves, and the points we “add” on them, might not look like the curves we started with anymore. They might not look like anything recognizable anymore. But the logic continues to hold. We still create these groups out of the points on these lines intersecting a curve.

    By now you probably admit this is neat stuff. You may also think: so what? We can take this thing you never thought about, draw points and lines on it, and make it look very loosely kind of like just adding numbers together. Why is this interesting? No appreciation just for the beauty of the structure involved? Well, we live in a fallen world.

    It comes back to number theory. The modern study of Diophantine equations grows out of studying elliptic curves on the rational numbers. It turns out the group of points you get for that looks like a finite collection of points with some collection of integers hanging on. How long that collection of numbers is is called the ‘rank’, and there are deep mysteries at work. We know there are elliptic equations that have a rank as big as 28. Nobody knows if the rank can be arbitrarily high, though. And I believe we don’t even know if there are any curves with rank of, like, 27, or 25.

    Yeah, I’m still sensing skepticism out there. Fine. We’ll go back to the only part of number theory everybody agrees is useful. Encryption. We have roughly the same goals for every encryption scheme. We want it to be easy to encode a message. We want it to be easy to decode the message if you have the key. We want it to be hard to decode the message if you don’t have the key.

    The curved-out sign has a bulge with convex loops to it, so that it resembles the cut of a jigsaw puzzle piece.

    y^2 = 3x^3 - 3x + 3 . The water drop is almost large enough that its weight overcomes the surface tension holding it to the main body of water.

    Take something inside one of these elliptic curve groups. Especially one built on a finite field. Let me call your thing ‘g’. It’s really easy for you, knowing what ‘g’ is and what your field is, to raise it to a power. You can pretty well impress me by sharing the value of ‘g’ raised to some whole number ‘m’. Call that ‘h’.

    Why am I impressed? Because if all I know are ‘g’ and ‘h’, I have a heck of a time figuring out what ‘m’ is. Especially on these finite field groups there’s no obvious connection between how big ‘h’ is and how big ‘g’ is and how big ‘m’ is. Start with a big enough finite field and you can encode messages in ways that are crazy hard to crack.
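    Here’s a toy sketch of that one-way quality. Every number in it is an arbitrary pick of mine, comically small by cryptographic standards, but the shape of the thing is right: the forward direction is quick, the reverse is a search. (It needs Python 3.8 or later for the modular inverse pow(x, -1, p).)

```python
p, A = 97, 2   # the curve y^2 = x^3 + 2x + 3 over the integers mod 97

def add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p
    return (x3, y3)

def times(m, g):
    # Add g to itself m times, by repeated doubling.
    result = None
    while m:
        if m & 1:
            result = add(result, g)
        g = add(g, g)
        m >>= 1
    return result

g = (3, 6)          # on the curve: 6^2 = 36 = 27 + 6 + 3 (mod 97)
h = times(20, g)    # the easy direction
# The hard direction: given only g and h, find a multiplier.
print(next(m for m in range(1, 200) if times(m, g) == h))
```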

    We trust. At least, if there are any ways to break the code quickly, nobody’s shared them. And there’s one of those enormous-money-prize awards waiting for someone who does know how to break such a code quickly. (I don’t know which. I’m going by what I expect from people.)

    And then there’s fame. These were used to prove Fermat’s Last Theorem. Suppose there are some non-boring numbers ‘a’, ‘b’, and ‘c’, so that for some prime number ‘p’ that’s five or larger, it’s true that a^p + b^p = c^p . (We can separately prove Fermat’s Last Theorem for a power that isn’t a prime number, or a power that’s 3 or 4.) Then this implies properties about the elliptic curve:

    y^2 = x(x - a^p)(x + b^p)

    This is a convenient way of writing things since it showcases the a^p and b^p. It’s equal to:

    y^2 = x^3 + \left(b^p - a^p\right)x^2 - a^p b^p x

    (I was so tempted to leave an arithmetic error in there so I could make sure someone commented.)

    A little ball off to the side of a curved-out less-than-sign shape.

    y^2 = 3x^3 - 4x . The water drop has broken off, and the remaining surface rebounds to its normal meniscus.

    If there’s a solution to Fermat’s Last Theorem, then this elliptic equation can’t be modular. I don’t have enough words to explain what ‘modular’ means here. Andrew Wiles and Richard Taylor showed that the equation was modular. So there is no solution to Fermat’s Last Theorem except the boring ones. (Like, where ‘b’ is zero and ‘a’ and ‘c’ equal each other.) And it all comes from looking close at these neat curves, none of which looks like an ellipse.

    They’re named elliptic curves because we first noticed them when Carl Jacobi — yes, that Carl Jacobi — was studying the length of arcs of an ellipse. That’s interesting enough on its own. But it is hard. Maybe I could have fit in that anecdote about giving my class an impossible problem after all.

     
  • Joseph Nebus 6:00 pm on Monday, 7 August, 2017 Permalink | Reply
    Tags: , , , , , , , ,   

    The Summer 2017 Mathematics A To Z: Diophantine Equations 


    I have another request from Gaurish, of the For The Love Of Mathematics blog, today. It’s another change of pace.

    Diophantine Equations

    A Diophantine equation is a polynomial. Well, of course it is. It’s an equation, or a set of equations, setting one polynomial equal to another. Possibly equal to a constant. What makes this different from “any old equation” is the coefficients. These are the constant numbers that you multiply the variables, your x and y and x^2 and z^8 and so on, by. To make a Diophantine equation all these coefficients have to be integers. You know one well, because it’s that x^n + y^n = z^n thing that Fermat’s Last Theorem is all about. And you’ve probably seen ax + by = 1 . It turns up a lot because that’s a line, and we do a lot of stuff with lines.

    Diophantine equations are interesting. There are a couple of cases that are easy to solve. I mean, at least that we can find solutions for. ax + by = 1 , for example, that’s easy to solve. x^n + y^n = z^n it turns out we can’t solve. Well, we can if n is equal to 1 or 2. Or if x or y or z are zero. Those solutions are obvious, that is, quite boring. That one took about three and a half centuries to solve, and the solution was “there aren’t any solutions”. This may convince you of how interesting these problems are. What, from looking at it, tells you that ax + by = 1 is simple while x^n + y^n = z^n is (most of the time) impossible?

    I don’t know. Nobody really does. There are many kinds of Diophantine equation, all different-looking polynomials. Some of them are special one-off cases, like x^n + y^n = z^n . For example, there’s x^4 + y^4 + z^4 = w^4 for some integers x, y, z, and w. Leonhard Euler conjectured this equation had only boring solutions. You’ll remember Euler. He wrote the foundational work for every field of mathematics. It turns out he was wrong. It has infinitely many interesting solutions. But the first one found was 2,682,440^4 + 15,365,639^4 + 18,796,760^4 = 20,615,673^4 , and it took a computer search to find. We can forgive Euler not noticing it.
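    You needn’t take the computer search on faith. Any language with arbitrary-size whole numbers — Python, say, where integers never overflow — can check the claim directly:

```python
print(2682440**4 + 15365639**4 + 18796760**4 == 20615673**4)   # True
```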

    Some are groups of equations that have similar shapes. There’s the Fermat’s Last Theorem formula, for example, which is a different equation for every different integer n. Then there’s what we call Pell’s Equation. This one is x^2 - D y^2 = 1 (or equals -1), for some counting number D that isn’t a perfect square. It’s named for the English mathematician John Pell, who did not discover the equation (even in the Western European tradition; Indian mathematicians were familiar with it for a millennium), did not solve the equation, and did not do anything particularly noteworthy in advancing human understanding of the solution. Pell owes his fame in this regard to Leonhard Euler, who mistook Pell’s revision of a translation of a book discussing a solution for Pell’s having authored the solution. I confess Euler isn’t looking very good on Diophantine equations.
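    If you’d like to watch solutions appear, here’s a sketch of the classical continued-fraction method for the fundamental solution, in Python. The method is the standard one; the bare-bones implementation is mine, and it assumes D is not a perfect square. (math.isqrt needs Python 3.8 or later.)

```python
from math import isqrt

def pell(D):
    # Fundamental solution of x^2 - D y^2 = 1, via the continued
    # fraction of sqrt(D). D must not be a perfect square.
    a0 = isqrt(D)
    m, d, a = 0, 1, a0
    p_prev, p = 1, a0      # convergents p/q of sqrt(D)
    q_prev, q = 0, 1
    while p * p - D * q * q != 1:
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
    return p, q

print(pell(2))    # (3, 2): 9 - 2*4 = 1
print(pell(61))   # (1766319049, 226153980), famously enormous
```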

    But nobody looks very good on Diophantine equations. Make up a Diophantine equation of your own. Use whatever whole numbers, positive or negative, that you like for your equation. Use whatever powers of however many variables you like for your equation. So you get something that looks maybe like this:

    7x^2 - 20y + 18y^2 - 38z = 9

    Does it have any solutions? I don’t know. Nobody does. There isn’t a general all-around solution. (And there can’t be: Yuri Matiyasevich’s 1970 resolution of Hilbert’s tenth problem showed no algorithm can decide every Diophantine equation.) You know how with a quadratic equation we have this formula where you recite some incantation about “b squared minus four a c” and get any roots that exist? Nothing like that exists for Diophantine equations in general. Specific ones, yes. But they’re all specialties, crafted to fit the equation that has just that shape.

    So for each equation we have to ask: is there a solution? Is there any solution that isn’t obvious? Are there finitely many solutions? Are there infinitely many? Either way, can we find all the solutions? And we have to answer anew, for each kind of equation, what the answers are, whether answers are known to exist, whether answers can exist. Knowing answers for one kind doesn’t help us for any others, except as inspiration. If some trick worked before, maybe it will work this time.

    There are a couple usually reliable tricks. Can the equation be rewritten in some way that it becomes the equation for a line? If it can we probably have a good handle on any solutions. Can we apply modulo arithmetic to the equation? If we can, we might be able to reduce the number of possible solutions that the equation has. In particular we might be able to reduce the number of possible solutions until we can just check every case. Can we use induction? That is, can we show there’s some parameter for the equations, and that knowing the solutions for one value of that parameter implies knowing solutions for larger values? And then find some small enough value we can test it out by hand? Or can we show that if there is a solution, then there must be a smaller solution, and smaller yet, until we can either find an answer or show there aren’t any? Sometimes. Not always. The field blends seamlessly into number theory. And number theory is all sorts of problems easy to pose and hard or impossible to solve.
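    The modulo trick, at least, is easy to see in action. Here’s a sketch, on an equation I made up to be hopeless: x^2 - 5y^2 = 2 has no integer solutions at all, because mod 5 it would demand that x^2 leave a remainder of 2, and checking every residue takes one line of Python:

```python
print({x * x % 5 for x in range(5)})   # {0, 1, 4} -- no 2, so no solutions
```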

    We name these equations after Diophantus of Alexandria, a 3rd century Greek mathematician. His writings, what we have of them, discuss how to solve equations. Not general solutions, the way we might want to solve ax^2 + bx + c = 0 , but specific ones, like 1x^2 - 5x + 6 = 0 . His books are among those whose rediscovery shaped the rebirth of mathematics. Pierre de Fermat scribbled his famous note in the too-small margins of Diophantus’s Arithmetica. (Well, a popular translation.)

    But the field predates Diophantus, at least if we look at specific problems. Of course it does. In mathematics, as in life, any search for a source ends in a vast, marshy ambiguity. The field stays vital. If we loosen ourselves to looking at inequalities — x - Dy^2 < A , let's say — then we start seeing optimization problems. What values of x and y will make this equation most nearly true? What values will come closest to satisfying this bunch of equations? The questions are about how to find the best possible fit to whatever our complicated sets of needs are. We can't always answer. We keep searching.

     
  • Joseph Nebus 4:00 pm on Monday, 7 August, 2017 Permalink | Reply
    Tags: , , , , , , , , , Ozy and Millie, , ,   

    Reading the Comics, August 5, 2017: Lazy Summer Week Edition 


    It wasn’t like the week wasn’t busy. Comic Strip Master Command sent out as many mathematically-themed comics as I might be able to use. But they were again ones that don’t leave me much to talk about. I’ll try anyway. It was looking like an anthropomorphic-symbols sort of week, too.

    Tom Thaves’s Frank and Ernest for the 30th of July is an anthropomorphic-symbols joke. The tick marks used for counting make an appearance and isn’t that enough? Maybe.

    Dan Thompson’s Brevity for the 31st is another entry in the anthropomorphic-symbols joke contest. This one sticks to mathematical symbols, so if the Frank and Ernest makes the cut this week so must this one.

    Eric the Circle for the 31st, this installment by “T daug”, gives the slightly anthropomorphic geometric figure a joke that at least mentions a radius, and isn’t that enough? What catches my imagination about this panel particularly is that the “fractured radius” is not just a legitimate pun but also resembles a legitimate geometry drawing. Drawing a diameter line is sensible enough. Drawing some other point on the circle and connecting that to the ends of the diameter is also something we might do.

    Scott Hilburn’s The Argyle Sweater for the 1st of August is one of the logical mathematics jokes you could make about snakes. The more canonical one runs like this: God in the Garden of Eden makes all the animals and bids them to be fruitful. And God inspects them all and finds rabbits and doves and oxen and fish and fowl all growing in number. All but a pair of snakes. God asks why they haven’t bred and they say they can’t, not without help. What help? They need some thick tree branches chopped down. The bemused God grants them this. God checks back in some time later and finds an abundance of baby snakes in the Garden. But why the delay? “We’re adders,” explain the snakes, “so we need logs to multiply”. This joke absolutely killed them in the mathematics library up to about 1978. I’m told.

    John Deering’s Strange Brew for the 1st is a monkeys-at-typewriters joke. It faintly reminds me that I might have pledged to retire mentions of the monkeys-at-typewriters joke. But I don’t remember so I’ll just have to depend on saying I don’t think I retired the monkeys-at-typewriters jokes and trust that someone will tell me if I’m wrong.

    Dana Simpson’s Ozy and Millie rerun for the 2nd name-drops multiplication tables as the sort of thing a nerd child wants to know. They may have fit the available word balloon space better than “know how to diagram sentences” would.

    Mark Anderson’s Andertoons for the 3rd is the reassuringly normal appearance of Andertoons for this week. It is a geometry class joke about rays: line segments with an end at one point and … a direction where it just doesn’t end. And it riffs on the notion of the existence of mathematical things. At least I can see it that way.

    Dad: 'How many library books have you read this summer, Hammie?' Hammie: 'About 47.' Zoe: 'HA!' Dad: 'Hammie ... ' Hammie: 'Okay ... two.' Dad: 'Then why did you say 47?' Hammie: 'I was rounding up.' Zoe: 'NOW he understands math!'

    Rick Kirkman and Jerry Scott’s Baby Blues for the 5th of August, 2017. Hammie totally blew it by saying “about forty-seven”. Too specific a number to be a plausible lie. “About forty” or “About fifty”, something you can see as the result of rounding off, yes. He needs to know there are rules about how to cheat.

    Rick Kirkman and Jerry Scott’s Baby Blues for the 5th is a rounding-up joke that isn’t about herds of 198 cattle.

    Stephen Bentley’s Herb and Jamaal for the 5th tosses off a mention of the New Math as something well out of fashion. There are fashions in mathematics, as in all human endeavors. It startles many to learn this.

     
  • Joseph Nebus 4:00 pm on Friday, 4 August, 2017 Permalink | Reply
    Tags: , , , , ,   

    The Summer 2017 Mathematics A To Z: Cohomology 


    Today’s A To Z topic is another request from Gaurish, of the For The Love Of Mathematics blog. Also part of what looks like a quest to make me become a topology blogger, at least for a little while. It’s going to be exciting and I hope not to faceplant as I try this.

    Also, a note about Thomas K Dye, who’s drawn the banner art for this and for the Why Stuff Can Orbit series: the publisher for collections of his comic strip is having a sale this weekend.

    Cohomology.

    The word looks intimidating, and faintly of technobabble. It’s less cryptic than it appears. We see parts of it in non-mathematical contexts. In biology class we would see “homology”, the sharing of structure in body parts that look superficially very different. We also see it in art class. The instructor points out that a dog’s leg looks like that because they stand on their toes. What looks like a backward-facing knee is just the ankle, and if we stand on our toes we see that in ourselves. We might see it in chemistry, as many interesting organic compounds differ only in how long or how numerous the boring parts are. The stuff that does work is the same, or close to the same. And this is a hint to what a mathematician means by cohomology. It’s something in shapes. It’s particularly something in how different things might have similar shapes. Yes, I am using a homology in language here.

    I often talk casually about the “shape” of mathematical things. Or their “structures”. This sounds weird and abstract to start and never really gets better. We can get some footing if we think about drawing the thing we’re talking about. Could we represent the thing we’re working on as a figure? Often we can. Maybe we can draw a polygon, with the vertices of the shape matching the pieces of our mathematical thing. We get the structure of our thing from thinking about what we can do to that polygon without changing the way it looks. Or without changing the way we can do whatever our original mathematical thing does.

    This leads us to homologies. We get them by looking for stuff that’s true even if we moosh up the original thing. The classic homology comes from polyhedrons, three-dimensional shapes. There’s a relationship between the number of vertices, the number of edges, and the number of faces of a polyhedron: vertices minus edges plus faces comes to 2. It doesn’t change even if you stretch the shape out longer, or squish it down, or, for that matter, slice off a corner. It only changes if you punch a new hole through the middle of it. Or if you plug one up. That would be unsporting. A homology describes something about the structure of a mathematical thing. It might even be literal. Topology, the study of what we know about shapes without bringing distance into it, has the number of holes that go through a thing as a homology. This gets feeling like a comfortable, familiar idea now.
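    If you want to see that invariant hold, here’s a quick sketch tallying it for a few familiar polyhedra; the vertex, edge, and face counts are the standard textbook ones:

```python
solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
}
for name, (v, e, f) in solids.items():
    print(name, v - e + f)   # 2 every time, however mooshed the shape
```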

    But that isn’t a cohomology. That ‘co’ prefix looks dangerous. At least it looks significant. When the ‘co’ prefix has turned up before it’s meant something is shaped by how it refers to something else. Coordinates aren’t just number lines; they’re collections of number lines that we can use to say where things are. If ‘a’ is a factor of the number ‘x’, its cofactor is the number you multiply ‘a’ by in order to get ‘x’. (For real numbers that’s just x divided by a. For other stuff it might be weirder.) A codomain is a set that a function maps a domain into (and must contain the range, at least). Cosets aren’t just sets; they’re ways we can divide (for example) the counting numbers into odds and evens.

    So what’s the ‘co’ part for a homology? I’m sad to say we start losing that comfortable feeling now. We have to look at something we’re used to thinking of as a process as though it were a thing. These things are morphisms: what are the ways we can match one mathematical structure to another? Sometimes the morphisms are easy. We can match the even numbers up with all the integers: match 0 with 0, match 2 with 1, match -6 with -3, and so on. Addition on the even numbers matches with addition on the integers: 4 plus 6 is 10; 2 plus 3 is 5. For that matter, we can match the integers with the multiples of three: match 1 with 3, match -1 with -3, match 5 with 15. 1 plus -2 is -1; 3 plus -6 is -9.

    What happens if we look at the sets of matchings that we can do as if that were a set of things? That is, not some human concept like ‘2’ but rather ‘match a number with one-half its value’? And ‘match a number with three times its value’? These can be the population of a new set of things.

    And these things can interact. Suppose we “match a number with one-half its value” and then immediately “match a number with three times its value”. Can we do that? … Sure, easily. 4 matches to 2 which goes on to 6. 8 matches to 4 which goes on to 12. Can we write that as a single matching? Again, sure. 4 matches to 6. 8 matches to 12. -2 matches to -3. We can write this as “match a number with three-halves its value”. We’ve taken “match a number with one-half its value” and combined it with “match a number with three times its value”. And it’s given us the new “match a number with three-halves its value”. These things we can do to the integers are themselves things that can interact.

    This is a good moment to pause and let the dizziness pass.

    It isn’t just you. There is something weird thinking of “doing stuff to a set” as a thing. And we have to get a touch more abstract than even this. We should be all right, but please do not try to use this to defend your thesis in category theory. Just use it to not look forlorn when talking to your friend who’s defending her thesis in category theory.

    Now, we can take this collection of all the ways we can relate one set of things to another. And we can combine this with an operation that works kind of like addition. Some way to “add” one way-to-match-things to another and get a way-to-match-things. There’s also something that works kind of like multiplication. It’s a different way to combine these ways-to-match-things. This forms a ring, which is a kind of structure that mathematicians learn about in Introduction to Not That Kind Of Algebra. There are many constructs that are rings. The integers, for example, are also a ring, with addition and multiplication the same old processes we’ve always used.

    And just as we can sort the integers into odds and evens — or into other groupings, like “multiples of three” and “one plus a multiple of three” and “two plus a multiple of three” — so we can sort the ways-to-match-things into new collections. And this is our cohomology. It’s the ways we can sort and classify the different ways to manipulate whatever we started on.

    I apologize that this sounds so abstract as to barely exist. I admit we’re far from a nice solid example such as “six”. But the abstractness is what gives cohomologies explanatory power. We depend very little on the specifics of what we might talk about. And therefore what we can prove is true for very many things. It takes a while to get there, is all.

     
  • Joseph Nebus 4:00 pm on Thursday, 3 August, 2017 Permalink | Reply
    Tags: , , July, , Portugal, ,   

    How July 2017 Treated My Mathematics Blog 


    July was a slightly better-read month around here than June was. I expected that. There weren’t any more posts in July — 13 both months — but the run-up to an A-to-Z sequence usually draws in readers. Not so many as might have been. I didn’t break back above the 1,000 threshold. But there were 911 page views from 568 distinct visitors, according to WordPress. In June there were 878 page views from only 542 visitors. May saw 1,029 page views from 662 visitors and I anticipate that August should be closer to that.

    The biggest measure of how engaged readers were rose dramatically. There were 45 comments posted here over the month. In June there were a meager 13 comments, and in May only eight. Asking questions that demand answers, and that hold out the prospect of making me do stuff, seems to be the trick. The number of likes rose less dramatically, with 118 things liked around here. In June there were only 99 likes; in May, 78. This isn’t like the peaks of the Summer 2015 A To Z (518 Likes in June!), but we’ll see what happens.

    The most popular posts in July were the usual mix of Reading the Comics posts, the number of grooves on a record, and A To Z publicity:

    There were 60 countries sending me readers in July, up from 52 in June and in May. In a twist, the United States sent the greatest number of them:

    Country Views
    United States 466
    Philippines 59
    United Kingdom 57
    Canada 45
    India 35
    Singapore 32
    Austria 31
    France 16
    Australia 15
    Brazil 14
    Germany 12
    Spain 12
    Hong Kong SAR China 8
    Italy 7
    Puerto Rico 7
    Argentina 6
    South Africa 6
    Belgium 5
    Netherlands 4
    Norway 4
    Russia 4
    Sweden 4
    Switzerland 4
    Chile 3
    Indonesia 3
    Nigeria 3
    Slovakia 3
    Colombia 2
    Czech Republic 2
    Denmark 2
    Estonia 2
    Lebanon 2
    Malaysia 2
    New Zealand 2
    Pakistan 2
    Poland 2
    Thailand 2
    Turkey 2
    United Arab Emirates 2
    Bangladesh 1
    Belarus 1
    Bulgaria 1
    Cambodia 1
    Cape Verde 1
    Costa Rica 1
    European Union 1
    Hungary 1 (*)
    Israel 1
    Japan 1 (**)
    Kazakhstan 1
    Latvia 1
    Mexico 1 (*)
    Oman 1
    Paraguay 1
    Romania 1
    Saudi Arabia 1
    Serbia 1
    South Korea 1
    Ukraine 1 (**)
    Vietnam 1

    There were 20 single-reader countries, up from 16 in June and down from May’s 21. Hungary and Mexico were single-reader countries the previous month. Japan and Ukraine have been single-reader countries three months running now. I’ve lost my monthly lone Portuguese reader. I hope she’s well and just busy with other projects. Still don’t know what “European Union” means in this context.

    The most popular day for reading was Monday, with 19 percent of page views coming in then. Why? Good question. In June it had been Sunday, with 18 percent. In May it was Sunday, with 16 percent. This is probably a meaningless flutter. The most popular hour was, again, 4 pm, when 19 percent of page views came. 4 pm Greenwich Time is when I set most stuff to appear so I understand that being a trendy hour. In June the 4 pm hour got 14 percent of my page views.

    August started with the blog having 51,034 page views from 23,322 distinct viewers that WordPress will admit to. And it lists me as having 676 followers on WordPress, up from the start of July’s triangular-number (thanks, IvaSallay!) 666. If you’d like this blog to appear in your WordPress reader, please use the little blue strip labelled “Follow nebusresearch” which should appear in the upper-right corner of the page. If following by e-mail is more your thing, there’s a strip labelled “Follow Blog Via E-mail” that you can use. I have finally looked up how to make that say e-mail instead of “email”. It required my trying. I’m also on Twitter, as @Nebusj. And I support a humor blog as well, a nice cozy little thing that includes useful bits of information like quick summaries of the current story comics so you can avoid sounding uninformed about the plot twists of Alley Oop. It’s a need which I can fill.

     
  • Joseph Nebus 4:00 pm on Wednesday, 2 August, 2017 Permalink | Reply
    Tags: , bookstores, , , , measurements, , ,   

    The Summer 2017 Mathematics A To Z: Benford's Law 


    Today’s entry in the Summer 2017 Mathematics A To Z is one for myself. I couldn’t post this any later.

    Benford’s Law.

    My car’s odometer first read 9 on my final test drive before buying it, in June of 2009. It flipped over to 10 barely a minute after that, somewhere near Jersey Freeze ice cream parlor at what used to be the Freehold Traffic Circle. Ask a Central New Jersey person of sufficient vintage about that place. Its odometer read 90 miles sometime that weekend, I think while I was driving to The Book Garden on Route 537. Ask a Central New Jersey person of sufficient reading habits about that place. It’s still there. It flipped over to 100 sometime when I was driving back later that day.

    The odometer read 900 about two months after that, probably while I was driving to work, as I had a longer commute in those days. It flipped over to 1000 a couple days after that. The odometer first read 9,000 miles sometime in spring of 2010 and I don’t remember what I was driving to for that. It flipped over from 9,999 to 10,000 miles several weeks later, as I pulled into the car dealership for its scheduled servicing. Yes, this kind of impressed the dealer that I got there exactly on the round number.

    The odometer first read 90,000 in late August of last year, as I was driving to some competitive pinball event in western Michigan. It’s scheduled to flip over to 100,000 miles sometime this week as I get to the dealer for its scheduled maintenance. While cars have gotten to be much more reliable and durable than they used to be, the odometer will never flip over to 900,000 miles. At least I can’t imagine owning it long enough, at my rate of driving the past eight years, that this would ever happen. It’s hard to imagine living long enough for the car to reach 900,000 miles. Thursday or Friday it should flip over to 100,000 miles. The leading digit on the odometer will be 1 or, possibly, 2 for the rest of my association with it.

    The point of this little autobiography is this observation. Imagine all the days that I have owned this car, from sometime in June 2009 to whatever day I sell, lose, or replace it. Pick one. What is the leading digit of my odometer on that day? It could be anything from 1 to 9. But it’s more likely to be 1 than it is 9. Right now it’s as likely to be any of the digits. But after this week the chance of ‘1’ being the leading digit will rise, and become quite more likely than that of ‘9’. And it’ll never lose that edge.

    This is a reflection of Benford’s Law. It is named, as most mathematical things are, imperfectly. The law-namer was Frank Benford, a physicist, who in 1938 published a paper The Law Of Anomalous Numbers. It confirmed the observation of Simon Newcomb. Newcomb was a 19th century astronomer and mathematician of an exhausting number of observations and developments. Newcomb observed the logarithm tables that anyone who needed to compute referred to often. The earlier pages were more worn-out and dirty and damaged than the later pages. People worked with numbers that start with ‘1’ more than they did numbers starting with ‘2’. And more those that start ‘2’ than start ‘3’. More that start with ‘3’ than start with ‘4’. And on. Benford showed this was not some fluke of calculations. It turned up in bizarre collections of data. The surface areas of rivers. The populations of thousands of United States municipalities. Molecular weights. The digits that turned up in an issue of Reader’s Digest. There is a bias in the world toward numbers that start with ‘1’.

    And this is, prima facie, crazy. How can the surface areas of rivers somehow prefer to be, say, 100-199 hectares instead of 500-599 hectares? A hundred is a human construct. (Indeed, it’s many human constructs.) That we think ten is an interesting number is an artefact of our society. To think that 100 is a nice round number and that, say, 81 or 144 are not is a cultural choice. Grant that the digits of street addresses of people listed in American Men of Science — one of Benford’s data sources — have some cultural bias. How can another of his sources, molecular weights, possibly?

    The bias sneaks in subtly. Don’t they all? It lurks at the edge of the table of data. The table header, perhaps, where it says “River Name” and “Surface Area (sq km)”. Or at the bottom where it says “Length (miles)”. Or it’s never explicit, because I take for granted people know my car’s mileage is measured in miles.

    What would be different in my introduction if my car were Canadian, and the odometer measured kilometers instead? … Well, I’d not have driven the 9th kilometer; someone else doing a test-drive would have. The 90th through 99th kilometers would have come a little earlier that first weekend. The 900th through 999th kilometers too. I would have passed the 99,999th kilometer years ago. In kilometers my car has been in the 100,000s for something like four years now. It’s less absurd that it could reach the 900,000th kilometer in my lifetime, but that still won’t happen.

    What would be different is the precise dates about when my car reached its milestones, and the amount of days it spent in the 1’s and the 2’s and the 3’s and so on. But the proportions? What fraction of its days it spends with a 1 as the leading digit versus a 2 or a 5? … Well, that’s changed a little bit. There is some final mile, or kilometer, my car will ever register and it makes a little difference whether that’s 239,000 or 385,000. But it’s only a little difference. It’s the difference in how many times a tossed coin comes up heads on the first 1,000 flips versus the second 1,000 flips. They’ll be different numbers, but not that different.

    What’s the difference between a mile and a kilometer? A mile is longer than a kilometer, but that’s it. They measure the same kinds of things. You can convert a measurement in miles to one in kilometers by multiplying by a constant. We could as well measure my car’s odometer in meters, or inches, or parsecs, or lengths of football fields. The difference is what number we multiply the original measurement by. We call this “scaling”.

    Whatever we measure, in whatever unit we measure, has to have a leading digit of something. So it’s got to have some chance of starting out with a ‘1’, some chance of starting out with a ‘2’, some chance of starting out with a ‘3’, and so on. But that chance can’t depend on the scale. Measuring something in smaller or larger units doesn’t change the proportion of how often each leading digit is there.

    These facts combine to imply that leading digits follow a logarithmic-scale law: the chance that the leading digit is ‘d’ works out to the base-ten logarithm of (1 + 1/d). So the leading digit should be a ‘1’ something like 30 percent of the time. And a ‘2’ about 18 percent of the time. A ‘3’ about one-eighth of the time. And it decreases from there. ‘9’ gets to take the lead a meager 4.6 percent of the time.

    Roughly. It’s not going to be so all the time. Measure the heights of humans in meters and there’ll be far more leading digits of ‘1’ than we should expect, as most people are between 1 and 2 meters tall. Measure them in feet and ‘5’ and ‘6’ take a great lead. The law works best when data can sprawl over many orders of magnitude. If we lived in a world where people could as easily be two inches as two hundred feet tall, Benford’s Law would make more accurate predictions about their heights. That something is a mathematical truth does not mean it’s independent of all reason.
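    You can watch the law emerge from a data set that does sprawl. Here’s a sketch using the powers of 2, a collection famous for following the law closely; each row gives the digit, the observed share among the first ten thousand powers, and the logarithmic-law prediction:

```python
from collections import Counter
from math import log10

counts = Counter(str(2 ** n)[0] for n in range(1, 10001))
for d in range(1, 10):
    predicted = log10(1 + 1 / d)
    print(d, counts[str(d)] / 10000, round(predicted, 4))
```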

    For example, the reader thinking back some may be wondering: granted that atomic weights and river areas and populations carry units with them that create this distribution. How do street addresses, one of Benford’s observed sources, carry any unit? Well, street addresses are, at least in the United States custom, a loose measure of distance. The 100 block (for example) of a street is within one … block … from whatever the more important street or river crossing that street is. The 900 block is farther away.

    This extends further. Block numbers are proxies for distance from the major cross feature. House numbers on the block are proxies for distance from the start of the block. We have a better chance to see street number 418 than 1418, to see 418 than 488, or to see 418 than to see 1488. We can look at Benford’s Law in the second and third and other minor digits of numbers. But we have to be more cautious. There is more room for variation and quirk events. A block-filling building in the downtown area can take whatever street number the owners think most auspicious. Smaller samples of anything are less predictable.

    Nevertheless, Benford’s Law has become famous to forensic accountants the past several decades, if we allow the use of the word “famous” in this context. But its fame is thanks to the economist Hal Varian and the accountancy scholar Mark Nigrini. They observed that real-world financial data should be expected to follow this same distribution. If they don’t, then there might be something suspicious going on. This is not an ironclad rule. There might be good reasons for the discrepancy. If your work trips are always to the same location, and always for one week, and there’s one hotel it makes sense to stay at, and you always learn you’ll need to make the trips about one month ahead of time, of course the hotel bill will be roughly the same. Benford’s Law is a simple, rough tool, a way to decide what data to scrutinize for mischief. With this in mind I trust none of my readers will make the obvious leading-digit mistake when padding their expense accounts anymore.

    Since I’ve done you that favor, anyone out there think they can pick me up at the dealer’s Thursday, maybe Friday? Thanks in advance.

     
    • ivasallay 6:12 pm on Wednesday, 2 August, 2017 Permalink | Reply

      Fascinating. I’ve never given this much thought, but it makes sense. Clearly, given any random whole number greater than 9, there will be at least as many numbers less than it that start with a 1 than any other number, too.

      Back to your comment about odometers. We owned a van until it started costing more in repairs than most people pay in car payments. The odometer read something like 97,000 miles. We should have suspected from the beginning that it wasn’t made to last because IF it had made it to 99,999, it would then start over at 00,000.

      Like

      • Joseph Nebus 6:40 pm on Wednesday, 2 August, 2017 Permalink | Reply

        Thank you. This is one of my favorite little bits of mathematics because it is something lurking around us all the time, just waiting to be discovered, and it’s really there once we try measuring things.

        I’m amused to hear of a car with that short an odometer reel. I do remember thinking as a child that there was trouble if a car’s odometer rolled past 999,999. I remember my father joking that when that happened you had a brand-new car. I also remember hearing vaguely of flags that would drop beside the odometer reels if that ever happened.

        Electromechanical and early solid-state pinball machines, with scoring reels or finitely many digits to display a score, can have this problem happen. Some of them handle it by having a light turn on to show, say, ‘100,000’ above the score and which does nothing to help with someone who rolls the score twice. Some just shrug and give up; when I’ve rolled our home Tri-Zone machine, its score just goes back to the 000,000 mark. Some of the pinball machines made by European manufacturer Zaccaria in the day would have the final digit — fixed at zero by long pinball custom — switch to a flashing 1, or (I trust) 2, or 3, or so on. It’s a bit odd to read at first, but it’s a good way to make the rollover problem a much better one to have.

        Like

  • Joseph Nebus 4:00 pm on Tuesday, 1 August, 2017 Permalink | Reply
    Tags: , , Boner's Ark, , ,   

    Reading the Comics, July 29, 2017: Not Really Mathematics Concluded Edition 


    It was a busy week at Comic Strip Master Command last week, since they wanted to be sure I was overloaded ahead of the start of the Summer 2017 A To Z project. So here’s the couple of comics I didn’t have time to review on Sunday.

    Mort (“Addison”) Walker’s Boner’s Ark for the 7th of September, 1971 was rerun the 27th of July. It mentions mathematics but just as a class someone might need more work on. Could be anything, but mathematics has the connotations of something everybody struggles with, and in an American comic strip needs only four letters to write. Most economical use of word balloon space.

    Boner: 'Your math could stand a lot more work, Spot.' Aardvark: 'Yeah! Let's get at it, Buddy! Get that old nose to the grindstone!' Spot: 'YOUR nose could use a little time at the grindstone, too, Buddy!'

    Mort (“Addison”) Walker’s Boner’s Ark for the 7th of September, 1971 and rerun the 27th of July, 2017. I suppose I’m glad that Boner is making sure his animals get as good an education as possible while they’re stranded on their Ark. I’m just wondering whether Boner’s comment is meant in the parental role of a concerned responsible caretaker figure, or whether he’s serving as a teacher or principal. What exactly is the social-service infrastructure of Boner’s Ark? The world may never know.

    Neil Kohney’s The Other End for the 28th also mentions mathematics without having any real mathematics content. Barry tries to make the argument that mathematics has a timeless and universal quality that makes for good aesthetic value. I support this principle. Art has many roles. One is to make us see things which are true and which are not about ourselves. This mathematics does. Whether it’s something as instantly accessible as, say, RobertLovesPi‘s illustrations of geometrical structures, or something as involved as the five-color map theorem, mathematics gives us something. This isn’t any excuse to slum, though.

    Rob Harrell’s Big Top rerun for the 29th features a word problem. It’s cast in terms of what a lion might find interesting. Cultural expectations are inseparable from the mathematics we do, however much we might find universal truths about them. Word problems make the cultural biases more explicit, though. Also, note that Harrell shows an important lesson for artists in the final panel: whenever possible, draw animals wearing glasses.

    Samson’s Dark Side Of The Horse for the 29th is another sheep-counting joke. As Samson will often do, this includes different representations of numbers before it all turns to chaos in the end. This is why some of us can’t sleep.

     
  • Joseph Nebus 4:00 pm on Monday, 31 July, 2017 Permalink | Reply
    Tags: , , , , , , ,   

    The Summer 2017 Mathematics A To Z: Arithmetic 


    And now as summer (United States edition) reaches its closing months I plunge into the fourth of my A To Z mathematics-glossary sequences. I hope I know what I’m doing! Today’s request is one of several from Gaurish, who’s got to be my top requester for mathematical terms and whom I thank for it. It’s a lot easier writing these things when I don’t have to think up topics. Gaurish hosts a fine blog, For the love of Mathematics, which you might consider reading.

    Arithmetic.

    Arithmetic is what people who aren’t mathematicians figure mathematicians do all day. I remember in my childhood a Berenstain Bears book about people’s jobs. Its mathematician was an adorable little bear adding up sums on the chalkboard, in an observatory, on the Moon. I liked every part of this. I wouldn’t say it’s the whole reason I became a mathematician but it did make the prospect look good early on.

    People who aren’t mathematicians are right. At least, the bulk of what mathematics people do is arithmetic. If we measure by volume. Arithmetic is about the calculations we do to evaluate or solve polynomials. And polynomials are everything that humans find interesting. Arithmetic is the adding and subtracting, the multiplying and dividing, the taking of powers and of roots. Arithmetic is the changing of the units of a thing, the breaking of something into several smaller units, the merging of several smaller units into one big one. Arithmetic’s role in commerce and in finance must overwhelm the higher mathematics. Higher mathematics offers cohomologies and Ricci tensors. Arithmetic offers a budget.

    This is old mathematics. There’s evidence of humans from twenty thousand years ago recording their arithmetic computations. My understanding is the evidence is ambiguous and interpretations vary. This seems fair. I assume that humans did such arithmetic then, granting that I do not know how to interpret archeological evidence. The thing is that arithmetic is older than humans. Animals are able to count, to do addition and subtraction, perhaps to do harder computations. (I crib this from The Number Sense: How the Mind Creates Mathematics, by Stanislas Dehaene.) We learn it first, refining our rough instinctively developed sense to something rigorous. At least we learn it at the same time we learn geometry, the other branch of mathematics that must predate human existence.

    The primacy of arithmetic governs how it becomes an adjective. We will have, for example, the “arithmetic progression” of terms in a sequence. This is a sequence of numbers such as 1, 3, 5, 7, 9, and so on. Or 4, 9, 14, 19, 24, 29, and so on. The difference between one term and its successor is the same as the difference between the predecessor and this term. Or we speak of the “arithmetic mean”. This is the one found by adding together all the numbers of a sample and dividing by the number of terms in the sample. These are important concepts, useful concepts. They are among the first concepts we have when we think of a thing. Their familiarity makes them easy tools to overlook.
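
    Both definitions are compact enough to spell out in code. Here is a minimal Python sketch, with function names and sample numbers of my own choosing, purely for illustration:

    def is_arithmetic_progression(seq):
        # An arithmetic progression has the same difference between
        # every term and its successor.
        diffs = [b - a for a, b in zip(seq, seq[1:])]
        return all(d == diffs[0] for d in diffs)

    def arithmetic_mean(sample):
        # Add together all the numbers; divide by how many there are.
        return sum(sample) / len(sample)

    print(is_arithmetic_progression([4, 9, 14, 19, 24, 29]))  # True: difference 5
    print(arithmetic_mean([1, 3, 5, 7, 9]))                   # 5.0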

    Consider the Fundamental Theorem of Arithmetic. There are many Fundamental Theorems; that of Algebra guarantees us the number of roots of a polynomial equation. That of Calculus guarantees us that derivatives and integrals are joined concepts. The Fundamental Theorem of Arithmetic tells us that every whole number greater than one is equal to one and only one product of prime numbers. If a number is equal to (say) two times two times thirteen times nineteen, it cannot also be equal to (say) five times eleven times seventeen. This may seem uncontroversial. The budding mathematician will convince herself it’s so by trying to work out all the ways to write 60 as the product of prime numbers. It’s hard to imagine mathematics for which it isn’t true.
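
    The budding mathematician’s exercise is also easy to hand to a machine. A few lines of Python, a throwaway sketch of my own, will grind out the factorization by trial division. However you go about it, 60 only ever yields the one multiset of primes:

    def prime_factors(n):
        # Factor n by trial division, collecting primes smallest-first.
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(prime_factors(60))  # [2, 2, 3, 5], and no other list will do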

    But it needn’t be true. As we study why arithmetic works we discover many strange things. This mathematics that we know even without learning is sophisticated. To build a logical justification for it requires a theory of sets and hundreds of pages of tight reasoning. Or a theory of categories and I don’t even know how much reasoning. The thing that is obvious from putting a couple objects on a table and then a couple more is hard to prove.

    As we continue studying arithmetic we start to ponder things like Goldbach’s Conjecture, about even numbers (other than two) being the sum of exactly two prime numbers. This brings us into number theory, a land of fascinating problems. Many of them are so accessible you could pose them to a person while waiting in a fast-food line. This befits a field that grows out of such simple stuff. Many of those are so hard to answer that no person knows whether they are true, or are false, or are even answerable.
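
    Goldbach’s Conjecture shows off that accessibility. Anyone can test it as far as patience holds out; whether it is true for every even number is the part nobody knows. A quick illustrative sketch, mine and not anything canonical:

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

    def goldbach_pair(n):
        # Return one pair of primes summing to the even number n.
        for p in range(2, n // 2 + 1):
            if is_prime(p) and is_prime(n - p):
                return (p, n - p)
        return None  # a counterexample, if one ever turned up

    for n in range(4, 31, 2):
        print(n, '=', goldbach_pair(n))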

    And it splits off other ideas. Arithmetic starts, at least, with the counting numbers. It moves into the whole numbers and soon all the integers. With division we soon get rational numbers. With roots we soon get certain irrational numbers. A close study of this implies there are other irrational numbers that must exist, at least as much as “four” exists. Yet they can’t be reached by studying polynomials. Not polynomials that don’t already use these exotic irrational numbers. These are transcendental numbers. If we were to say the transcendental numbers were the only real numbers we would be making only a very slight mistake. We learn they exist by thinking long enough and deep enough about arithmetic to realize there must be more there than we realized.

    Thought compounds thought. The integers and the rational numbers and the real numbers have a structure. They interact in certain ways. We can look for things that are not numbers, but which follow rules like that for addition and for multiplication. Sometimes even for powers and for roots. Some of these can be strange: polynomials themselves, for example, follow rules like those of arithmetic. Matrices, which we can represent as grids of numbers, can have powers and even something like roots. Arithmetic is inspiration to finding mathematical structures that look little like our arithmetic. We can find things that follow mathematical operations but which don’t have a Fundamental Theorem of Arithmetic.
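
    The classic first example of a structure missing that theorem, if you’d like one (my aside, not something this essay depends on): the numbers a + b√-5, with ‘a’ and ‘b’ ordinary integers. There the number 6 factors two genuinely different ways, into 2 times 3 and into (1 + √-5) times (1 - √-5), and none of those four factors breaks down any further in that system. A tiny Python sketch, storing such a number as the pair (a, b), confirms the products:

    def mul(x, y):
        # (a + b√-5)(c + d√-5) = (ac - 5bd) + (ad + bc)√-5
        a, b = x
        c, d = y
        return (a * c - 5 * b * d, a * d + b * c)

    print(mul((2, 0), (3, 0)))   # (6, 0): 2 times 3
    print(mul((1, 1), (1, -1)))  # (6, 0): a second, different factoring of 6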

    And there are more related ideas. These are often very useful. There’s modular arithmetic, in which we adjust the rules of addition and multiplication so that we can work with a finite set of numbers. There’s floating point arithmetic, in which we set machines to do our calculations. These calculations are no longer precise. But they are fast, and reliable, and that is often what we need.
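
    Both are easy to poke at from any Python prompt. A couple of illustrative lines:

    # Modular arithmetic: a clock with twelve hours on it.
    print((9 + 5) % 12)      # 2, five hours past nine o'clock

    # Floating point arithmetic: fast and reliable, not precise.
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False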

    So arithmetic is what people who aren’t mathematicians figure mathematicians do all day. And they are mistaken, but not by much. Arithmetic gives us an idea of what mathematics we can hope to understand. So it structures the way we think about mathematics.

     
    • ivasallay 5:34 pm on Monday, 31 July, 2017 Permalink | Reply

      I think you covered arithmetic in a very clear, scholarly way.

      When I was in the early elementary grades, we didn’t study math. We studied arithmetic.

      Here’s a couple more things some people might not know about arithmetic:
      1) How to remember the proper spelling of arithmetic: A Rat In The House May Eat The Ice Cream.
      2) How to pronounce arithmetic: https://www.quora.com/Why-does-the-pronunciation-of-arithmetic-depend-on-context

      • Joseph Nebus 6:27 pm on Wednesday, 2 August, 2017 Permalink | Reply

        Thanks! … My recollection is that in elementary school we called it mathematics (or just math), but the teachers were pretty clear about whether we were doing arithmetic or geometry. If that was clear, anyway; I grew up on the tail end of the New Math wave, and we could do stuff that was more playful than multiplication tables were.

        I hadn’t thought about the shifting pronunciations of ‘arithmetic’ as a word. I suppose it’s not different from many multi-syllable words in doing that. My suspicion is that the distinction between ‘arithmetic’ as an adjective and as a noun is spurious, though. My hunch is people shift the emphasis based on the structure of the whole sentence, with the words coming after ‘arithmetic’ having a big role to play. I’d expect that when ‘arithmetic’ is used as an adjective (as in ‘arithmetic mean’) an important word often follows it immediately, but that’s not infallible. As opposed to those many rules of English grammar and pronunciation that are infallible.

    • gaurish 9:48 am on Saturday, 12 August, 2017 Permalink | Reply

      A Beautiful introduction to Arithmetic!

    • Jayeesha 12:06 pm on Thursday, 31 August, 2017 Permalink | Reply

      Mental Arithmetic, I like it

      • Joseph Nebus 1:14 am on Friday, 8 September, 2017 Permalink | Reply

        It’s a fun pastime. Also a great way to find yourself reassuring the cashier that yes, you meant to give $20.17.

  • Joseph Nebus 4:00 pm on Sunday, 30 July, 2017 Permalink | Reply
    Tags: , , , , , , Savage Chickens, ,   

    Reading the Comics, July 30, 2017: Not Really Mathematics edition 


    It’s been a busy enough week at Comic Strip Master Command that I’ll need to split the results across two essays. Any other week I’d be glad for this, since, hey, free content. But this week it hits a busy time and shouldn’t I have expected that? The odd thing is that the mathematics mentions have been numerous but not exactly deep. So let’s watch as I make something big out of that.

    Mark Tatulli’s Heart of the City closed out its “Math Camp” storyline this week. It didn’t end up having much to do with mathematics and was instead about trust and personal responsibility issues. You know, like stories about kids who aren’t learning to believe in themselves and follow their dreams usually are. Since we never saw any real Math Camp activities we don’t get any idea what they were trying to do to interest kids in mathematics, which is a bit of a shame. My guess would be they’d play a lot of the logic-driven puzzles that are fun but that they never get to do in class. The story established that what I thought was an amusement park was instead a fair, so, that might be anywhere in Pennsylvania or a couple of other nearby states.

    Rick Kirkman and Jerry Scott’s Baby Blues for the 25th sees Hammie have “another” mathematics worksheet accident. Could be any subject, really, but I suppose it would naturally be the one that hey wait a minute, why is he doing mathematics worksheets in late July? How early does their school district come back from summer vacation, anyway?

    Hammie 'accidentally' taps a glass of water on his mathematics paper. Then tears it up. Then chews it. Mom: 'Another math worksheet accident?' Hammie: 'Honest, Mom, I think they're cursed!'

    Rick Kirkman and Jerry Scott’s Baby Blues for the 25th of July, 2017. Almost as alarming: Hammie is clearly way behind on his “faking plausible excuses” homework. If he doesn’t develop the skills to make a credible excuse for why he didn’t do something, how is he ever going to dodge texts from people too important not to reply to?

    Olivia Walch’s Imogen Quest for the 26th uses a spot of mathematics as the emblem for teaching. In this case it’s a bit of physics. And an important bit of physics, too: it’s the time-dependent Schrödinger Equation. This is the one that describes how, if you know the total energy of the system, and the rules that set its potential and kinetic energies, you can work out the function Ψ that describes it. Ψ is a function, and it’s a powerful one. It contains probability distributions: how likely whatever it is you’re modeling is to have a particle in this region, or in that region. How likely it is to have a particle with this much momentum, versus that much momentum. And so on. Each of these we find by applying a function to the function Ψ. It’s heady stuff, and amazing stuff to me. Ψ somehow contains everything we’d like to know. And different functions work like filters that make clear one aspect of that.
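
    For the record, here is the time-dependent Schrödinger Equation in its common single-particle form. This is my transcription of the standard textbook statement, not a reproduction of the strip:

    i\hbar \frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2 \Psi + V\Psi

    Here ‘V’ is the potential energy, ‘m’ the particle’s mass, and ‘ħ’ the reduced Planck constant. The right side is the total-energy operator, kinetic plus potential, applied to Ψ.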

    Dan Thompson’s Brevity for the 26th is a joke about Sesame Street‘s Count von Count. Also about how we can take people’s natural aptitudes and delights and turn them into sad, droning unpleasantness in the service of corporate overlords. It’s fun.

    Steve Sicula’s Home and Away rerun for the 26th is a misplaced Pi Day joke. It originally ran the 22nd of April, but in 2010, before Pi Day was nearly so much a thing.

    Doug Savage’s Savage Chickens for the 26th proves something “scientific” by putting numbers into it. Particularly, by putting statistics into it. Understandable impulse. One of the great trends of the past century has been embracing the idea that we only understand things when they are measured. And this implies statistics. Everything is unique. Only statistical measurement lets us understand what groups of similar things are like. Does something work better than the alternative? We have to run tests, and see how the something and the alternative work. Are they so similar that the differences between them could plausibly be chance alone? Are they so different that it strains belief that they’re equally effective? It’s one of science’s tools. It’s not everything which makes for science. But it is stuff easy to communicate in one panel.
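
    That question of whether differences could plausibly be chance alone is just what a significance test formalizes. A minimal sketch, with made-up numbers of my own and SciPy doing the work:

    from scipy import stats

    # Made-up measurements of 'the something' and 'the alternative'.
    something   = [5.1, 4.9, 5.3, 5.0, 5.2]
    alternative = [4.6, 4.8, 4.5, 4.9, 4.7]

    # Two-sample t-test: a small p-value says the difference between
    # the groups is hard to explain as chance alone.
    statistic, p_value = stats.ttest_ind(something, alternative)
    print(statistic, p_value)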

    Neil Kohney’s The Other End for the 26th is really a finance joke. It’s about the ways the finance industry can turn one thing into a dazzling series of trades and derivative trades. But this is a field that mathematics colonized, or that colonized mathematics, over the past generation. Mathematical finance has done a lot to shape ideas of how we might study risk, and probability, and how we might form strategies to use that risk. It’s also done a lot to shape finance. Pretty much any major financial crisis you’ve encountered since about 1990 has been driven by a brilliant new mathematical concept meant to govern risk crashing up against the fact that humans don’t behave the way some model said they should. Nor could they; models are simplified, abstracted concepts that let hard problems be approximated. Every model has its points of failure. Hopefully we’ll learn enough about them that major financial crises can become as rare as, for example, major bridge collapses or major airplane disasters.

     
    • ivasallay 5:00 am on Monday, 31 July, 2017 Permalink | Reply

      A pi joke that uses 22/7 as pi should run close to 22 July.

      • Joseph Nebus 6:22 pm on Wednesday, 2 August, 2017 Permalink | Reply

        You’re right, it ought. It’s got to be coincidence that it ran so close this year, though. The strip’s been in repeats a while now and as far as I know isn’t skipping or adjusting reruns to be seasonal.

  • Joseph Nebus 4:00 pm on Thursday, 27 July, 2017 Permalink | Reply
    Tags: , , , , , , ,   

    Why Stuff Can Orbit, Part 13: To Close A Loop 



    Today’s is one of the occasional essays in the Why Stuff Can Orbit sequence that just has a lot of equations. I’ve tried not to write everything around equations because I know what they’re like to read. They’re pretty to look at and after about four of them you might as well replace them with a big grey box that reads “just let your eyes glaze over and move down to the words”. It’s even more glaze-y than that for non-mathematicians.

    But we do need them. Equations are wonderfully compact, efficient ways to write about things that are true. Especially things that are only true if exacting conditions are met. They’re so good that I’ll often find myself checking a textbook for an explanation of something and looking only at the equations, letting my eyes glaze over the words. That’s a chilling thing to catch yourself doing. Especially when you’ve written some obscure textbooks and a slightly read mathematics blog.

    What I had been looking at was a perturbed central-force orbit. We have something, generically called a planet, that orbits the center of the universe. It’s attracted to the center of the universe by some potential energy, which we describe as ‘U(r)’. It’s some number that changes with the distance ‘r’ the planet has from the center of the universe. It usually depends on other stuff too, like some kind of mass of the planet or some constants or stuff. The planet has some angular momentum, which we can call ‘L’ and pretend is a simple number. It’s in truth a complicated number, but we’ve set up the problem where we can ignore the complicated stuff. This angular momentum implies the potential energy allows for a circular orbit at some distance which we’ll call ‘a’ from the center of the universe.

    From ‘U(r)’ and ‘L’ we can say whether this is a stable orbit. If it’s stable, a little perturbation, a nudging, from the circular orbit will stay small. If it’s unstable, a little perturbation will keep growing and never stop. If we perturb this circular orbit the planet will wobble back and forth around the circular orbit. Sometimes the radius will be a little smaller than ‘a’, and sometimes it’ll be a little larger than ‘a’. And now I want to see whether we get a stable closed orbit.

    The orbit will be closed if the planet ever comes back to the same position and same momentum that it started with. ‘Started’ is a weird idea in this case. But it’s common vocabulary. By it we mean “whatever properties the thing had when we started paying attention to it”. Usually in a problem like this we suppose there’s some measure of time. It’s typically given the name ‘t’ because we don’t want to make this hard on ourselves. The start is some convenient reference time, often ‘t = 0’. That choice usually makes the equations look simplest.

    The position of the planet we can describe with two variables. One is the distance from the center of the universe, ‘r’, which we know changes with time: ‘r(t)’. Another is the angle the planet makes with respect to some reference line. The angle we might call ‘θ’ and often do. This will also change in time, then, ‘θ(t)’. We can pick other variables to describe where something is. But they’re going to involve more algebra, more symbol work, than this choice does so who needs it?

    Momentum, now, that’s another set of variables we need to worry about. But we don’t need to worry about them. This particular problem is set up so that if we know the position of the planet we also have the momentum. We won’t be able to get both ‘r(t)’ and ‘θ(t)’ back to their starting values without also getting the momentum there. So we don’t have to worry about that. This won’t always work, as you’ll see in my future series, ‘Why Statistical Mechanics Works’.

    So. We know, because it’s not that hard to work out, how long it takes for ‘r(t)’ to get back to its original, ‘r(0)’, value. It’ll take a time we worked out to be (first big equation here, although we found it a couple essays back):

    T_r = 2\pi\sqrt{ \frac{m}{ -F'(a) - \frac{3}{a} F(a) }}

    Here ‘m’ is the mass of the planet. And ‘F’ is a useful little auxiliary function. It’s the force that the planet feels when it’s a distance ‘r’ from the origin. It’s defined as F(r) = -\frac{dU}{dr} . It’s convenient to have around. It makes equations like this one simpler, for one. And it’s weird to think of a central force problem where we never, ever see forces. The peculiar thing is we define ‘F’ for every distance the planet might be from the center of the universe. But all we care about is its value at the equilibrium, circular orbit distance of ‘a’. We also care about its first derivative, also evaluated at the distance ‘a’, which is that F'(a) term early on in the denominator.
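
    To make that auxiliary function concrete with an example that isn’t part of this essay’s worked problem: for the Kepler potential U(r) = -\frac{k}{r} , with ‘k’ a positive constant, we get F(r) = -\frac{dU}{dr} = -\frac{k}{r^2} . That’s negative everywhere, as an attractive force ought to be.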

    So in the time between time ‘0’ and time ‘Tr‘ the perturbed radius will complete a full loop. It’ll reach its biggest value and its smallest value and get back to the original. (It is so much easier to suppose the perturbation starts at its biggest value at time ‘0’ that we often assume it has. It doesn’t have to be. But if we don’t have something forcing the choice of what time to call ‘0’ on us, why not pick one that’s convenient?) The question is whether ‘θ(t)’ completes a full loop in that time. If it does then we’ve gotten back to the starting position exactly and we have a closed orbit.

    Thing is that the angle will never get back to its starting value. The angle ‘θ(t)’ is always increasing at a rate we call ‘ω’, the angular velocity. This number is constant, at least approximately. Last time we found out what this number was:

    \omega = \frac{L}{ma^2}

    So the angle, over time, is going to look like:

    \theta(t) = \frac{L}{ma^2} t

    And ‘θ(Tr)’ will never equal ‘θ(0)’ again, not unless ‘ω’ is zero. And if ‘ω’ is zero then the planet is racing away from the center of the universe never to be seen again. Or it’s plummeting into the center of the universe to be gobbled up by whatever resides there. In either case, not what we traditionally think of as orbits. Even if we allow these as orbits, these would be nudges too big to call perturbations.

    So here’s the resolution. Angles are right awful pains in mathematical physics. This is because increasing an angle by 2π — or decreasing it by 2π — has no visible effect. In the language of the hew-mon, adding 360 degrees to a turn leaves you back where you started. A 45 degree angle is indistinguishable from a 405 degree angle, or a 765 degree angle, or a -315 degree angle, or so on. This makes for all sorts of irritating alternate cases to consider when you try solving for where one thing meets another. But it allows us to have closed orbits.

    Because we can have a closed orbit, now, if the radius ‘r(t)’ completes a full oscillation in the time it takes ‘θ(t)’ to grow by 2π. Or to grow by π. Or to grow by ½π. Or a third of π. Or so on.

    So. Last time we worked out that the angular velocity had to be this number:

    \omega = \frac{L}{ma^2}

    And that looked weird because the central force doesn’t seem to be there. It’s in there. It’s just implicit. We need to know what the central force is to work out what ‘a’ is. But we can make it explicit by using that auxiliary little function ‘F(r)’. In particular, at the circular orbit radius of ‘a’ we have that:

    F(a) = -\frac{L^2}{ma^3}

    I am going to use this to work out what ‘L’ has to be, in terms of ‘F’ and ‘m’ and ‘a’. First, multiply both sides of this equation by ‘ma³’:

    F(a) \cdot ma^3 = -L^2

    And then both sides by -1:

    -ma^3 F(a) = L^2

    Take the square root — don’t worry, it will turn out that ‘F(a)’ is a negative number, so we’re not doing anything suspicious —

    \sqrt{-ma^3 F(a)} = L

    Now, take that ‘L’ we’ve got and put it back into the equation for angular velocity:

    \omega = \frac{L}{ma^2} = \frac{\sqrt{-ma^3 F(a)}}{ma^2}

    We might look stuck, at what seems like an even worse position. It’s not. When you do enough of these problems you get used to some tricks. For example, that ‘ma²’ in the denominator we could move under the square root if we liked. This we know because ma^2 = \sqrt{ \left(ma^2\right)^2 } at least as long as ‘ma²’ is positive. It is.

    So. We fall back on the trick of squaring and square-rooting the denominator and so generate this mess:

    \omega = \sqrt{\frac{-ma^3 F(a)}{\left(ma^2\right)^2}}

    \omega = \sqrt{\frac{-ma^3 F(a)}{m^2 a^4}}

    \omega = \sqrt{\frac{-F(a)}{ma}}

    That’s getting nice and simple. Let me go complicate matters. I’ll want to know the angle that the planet sweeps out as the radius goes from its largest to its smallest value. Or vice-versa. This time is going to be half of ‘Tr‘, the time it takes to do a complete oscillation. The oscillation might have started at time ‘t’ of zero, maybe not. But how long it takes will be the same. I’m going to call this angle ‘ψ’, because I’ve written “the angle that the planet sweeps out as the radius goes from its largest to its smallest value” enough times this essay. If ‘ψ’ is equal to π, or one-half π, or one-third π, or some other nice rational multiple of π we’ll get a closed orbit. If it isn’t, we won’t.

    So. ‘ψ’ will be one-half times the oscillation time times that angular velocity. This is easy:

    \psi = \frac{1}{2} \cdot T_r \cdot \omega

    Put in the formulas we have for ‘Tr‘ and for ‘ω’. Now it’ll be complicated.

    \psi = \frac{1}{2} 2\pi \sqrt{\frac{m}{-F'(a) - \frac{3}{a} F(a)}} \sqrt{\frac{-F(a)}{ma}}

    Now we’ll make this a little simpler again. We have two square roots of fractions multiplied by each other. That’s the same as the square root of the two fractions multiplied by each other. So we can take numerator times numerator and denominator times denominator, all underneath the square root sign. See if I don’t. Oh yeah and one-half of two π is π but you saw that coming.

    \psi = \pi \sqrt{ \frac{-m F(a)}{-\left(F'(a) + \frac{3}{a}F(a)\right)\cdot ma} }

    OK, so there are some minus signs in the numerator and denominator worth getting rid of. There’s an ‘m’ in the numerator and the denominator that we can divide out of both. There’s an ‘a’ in the denominator that can multiply into a term that has a denominator inside the denominator, and you know this would be easier if I could use little cross-out symbols in WordPress LaTeX. If you’re not following all this, try writing it out by hand and seeing what makes sense to cancel out.

    \psi = \pi \sqrt{ \frac{F(a)}{aF'(a) + 3F(a)} }

    This is getting not too bad. Start from a potential energy ‘U(r)’. Use an angular momentum ‘L’ to figure out the circular orbit radius ‘a’. From the potential energy find the force ‘F(r)’. And then, based on what ‘F’ and the first derivative of ‘F’ happen to be, at the radius ‘a’, we can see whether a closed orbit can be there.
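
    As a sanity check, here is that whole chain in a few lines of Python. This is my own throwaway sketch, not part of the original essay; it estimates F'(a) with a numerical central-difference derivative and tries the two textbook forces.

    import math

    def closed_orbit_angle(F, a, h=1e-6):
        # psi = pi * sqrt( F(a) / (a F'(a) + 3 F(a)) ), with F'(a)
        # approximated by a central-difference derivative.
        Fp = (F(a + h) - F(a - h)) / (2 * h)
        return math.pi * math.sqrt(F(a) / (a * Fp + 3 * F(a)))

    # Kepler force, F(r) = -k/r^2: psi should come out to pi.
    print(closed_orbit_angle(lambda r: -1.0 / r**2, a=2.0) / math.pi)  # ~1.0

    # Harmonic (spring) force, F(r) = -k r: psi should come out to pi/2.
    print(closed_orbit_angle(lambda r: -r, a=2.0) / math.pi)           # ~0.5

    The Kepler case sweeping out exactly π is why those orbits close after a single loop: perihelion to aphelion is half a radial oscillation and half a turn around the center.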

    I’ve gotten to some pretty abstract territory here. Next time I hope to make things simpler again.

     
    • howardat58 6:56 pm on Thursday, 27 July, 2017 Permalink | Reply

      I am getting fascinated with this, at last!
      Am I right in saying that a closed orbit can have many turns before it reaches the starting position? Rational numbers?
      And did I spot the F'(a)/F(a) lurking in there somewhere?

      • Joseph Nebus 6:21 pm on Wednesday, 2 August, 2017 Permalink | Reply

        Thank you, and I’m glad you enjoy.

        I’m not aware of any reason we can’t have many turns before the orbit closes. The only examples I can think of where this happens are Lissajous figures, from masses on springs connected in multiple dimensions, and those aren’t central forces the way I’ve set this frame up. But for peculiar enough powers ‘n’ there are what look like closed and complicated orbits to me.

        And yeah, F'(a)/F(a) is lurking in there. I’m hoping to fit at least one, maybe two, more of this sequence in-between A To Z posts and that should make the F'(a)/F(a) explicit.
