
  • Joseph Nebus 6:00 pm on Friday, 18 August, 2017
    Tags: George Berkeley, numerical integration

    The Summer 2017 Mathematics A To Z: Integration 


    One more mathematics term suggested by Gaurish for the A-To-Z today, and then I’ll move on to a couple of others. Today’s is a good one.

    Integration.

    Stand on the edge of a plot of land. Walk along its boundary. As you walk the edge pay attention. Note how far you walk before changing direction, even in the slightest. When you return to where you started consult your notes. Contained within them is the area you circumnavigated.

    If that doesn’t startle you perhaps you haven’t thought about how odd that is. You don’t ever touch the interior of the region. You never do anything like see how many standard-size tiles would fit inside. You walk a path that is as close to one-dimensional as your feet allow. And encoded in there somewhere is an area. Stare at that incongruity and you realize why integrals baffle the student so. They have a deep strangeness embedded in them.
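    The boundary walk is Green’s theorem wearing work clothes: the area equals a line integral around the edge. For a walk made of straight segments it collapses into the shoelace formula. Here’s a minimal sketch in Python; the function name is my own.

```python
def shoelace_area(vertices):
    """Area enclosed by a closed polygonal walk, using only boundary points.

    vertices: list of (x, y) corners, in the order you walk them.
    """
    twice_area = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]  # wrap around back to the start
        twice_area += x0 * y1 - x1 * y0
    return abs(twice_area) / 2.0

# A 3-by-2 rectangle, walked corner to corner: area 6,
# found without ever stepping inside.
print(shoelace_area([(0, 0), (3, 0), (3, 2), (0, 2)]))
```

    The notes from the walk, the distances and the turns, are exactly enough to reconstruct those corner coordinates, which is why the trick works.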

    We who do mathematics have always liked integrals. They grow, in the western tradition, out of geometry. Given a shape, what is a square that has the same area? There are shapes it’s easy to find the area for, given only straightedge and compass: a rectangle? Easy. A triangle? Just as straightforward. A polygon? If you know triangles then you know polygons. A lune, the crescent-moon shape formed by taking a circular cut out of a circle? We can do that. (If the cut is the right size.) A circle? … All right, we can’t do that, but we spent two thousand years trying before we found that out for sure. And we can do some excellent approximations.

    That bit of finding-a-square-with-the-same-area was called “quadrature”. The name survives, mostly in the phrase “numerical quadrature”. We use that to mean that we computed an integral’s approximate value, instead of finding a formula that would get it exactly. The otherwise obvious choice of “numerical integration” we use already. It describes computing the solution of a differential equation. We’re not trying to be difficult about this. Solving a differential equation is a kind of integration, and we need to do that a lot. We could recast a solving-a-differential-equation problem as a find-the-area problem, and vice-versa. But that’s bother, if we don’t need to, and so we talk about numerical quadrature and numerical integration.

    Integrals are built on two infinities. This is part of why it took so long to work out their logic. One is the infinity of number; we find an integral’s value, in principle, by adding together infinitely many things. The other is an infinity of smallness. The things we add together are infinitesimally small. That we need to take things, each smaller than any number yet somehow not zero, and in such quantity that they add up to something, seems paradoxical. Their geometric origins had to be merged into those of arithmetic and of algebra, and it is not easy. Bishop George Berkeley made a steady name for himself in calculus textbooks by pointing this out. We have worked out several logically consistent schemes for evaluating integrals. They work, mostly, by showing that we can make the error caused by approximating the integral smaller than any margin we like. This is a standard trick, or at least it is, now that we know it.

    That “in principle” above is important. We don’t actually work out an integral by finding the sum of infinitely many, infinitely tiny, things. It’s too hard. I remember in grad school the analysis professor working out by the proper definitions the integral of 1. This is as easy an integral as you can do without just integrating zero. He escaped with his life, but it was a close scrape. He offered the integral of x as a way to test our endurance, without actually doing it. I’ve never made it through that.

    But we do integrals anyway. We have tools on our side. We can show, for example, that if a function obeys some common rules then we can use simpler formulas. Ones that don’t demand so many symbols in such tight formation. Ones that we can use in high school. Also, ones we can adapt to numerical computing, so that we can let machines give us answers which are near enough right. We get to choose how near is “near enough”. But then the machines decide how long we’ll have to wait to get that answer.
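    Numerical quadrature in miniature: keep halving the step until two successive trapezoid-rule answers agree to within the tolerance we chose. A sketch only, not a production integrator; the names are mine.

```python
def trapezoid(f, a, b, n):
    """Trapezoid-rule estimate of the integral of f from a to b, n slices."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2.0
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

def quadrature(f, a, b, tol=1e-8):
    """Refine until successive estimates agree: we choose 'near enough',
    the machine decides how long we wait."""
    n, prev = 1, trapezoid(f, a, b, 1)
    while True:
        n *= 2
        cur = trapezoid(f, a, b, n)
        if abs(cur - prev) < tol:
            return cur
        prev = cur

# The integral of x^2 from 0 to 1 is exactly 1/3.
print(quadrature(lambda x: x * x, 0.0, 1.0))
```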

    The greatest tool we have on our side is the Fundamental Theorem of Calculus. Even the name promises it’s the greatest tool we might have. This rule tells us how to connect integrating a function to differentiating another function. If we can find a function whose derivative is the thing we want to integrate, then we have a formula for the integral. It’s that function we found. What a fantastic result.
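    A quick numerical check of the theorem, offered as illustration rather than proof: the derivative of sine is cosine, so the integral of cosine from 0 to b should equal sin(b) minus sin(0). A crude Riemann sum agrees.

```python
import math

def riemann_sum(f, a, b, n=100_000):
    """Left-endpoint Riemann sum: the 'add up many tiny things' picture."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

b = 2.0
by_sum = riemann_sum(math.cos, 0.0, b)      # many tiny things, added up
by_ftc = math.sin(b) - math.sin(0.0)        # Fundamental Theorem of Calculus
print(by_sum, by_ftc)
```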

    The trouble is it’s so hard to find functions whose derivatives are the thing we wanted to integrate. There are a lot of functions we can find, mind you. If we want to integrate a polynomial it’s easy. Sine and cosine and even tangent? Yeah. Logarithms? A little tedious but all right. A constant number raised to the power x? Also tedious but doable. A constant number raised to the power x^2? Hold on there, that’s madness. No, we can’t do that.

    There is a weird grab-bag of functions we can find these integrals for. They’re mostly ones we can find some integration trick for. An integration trick is some way to turn the integral we’re interested in into a couple of integrals we can do and then mix back together. A lot of a Freshman Calculus course is a heap of tricks we’ve learned. They have names like “u-substitution” and “integration by parts” and “trigonometric substitution”. Some of them are really exotic, such as turning a single integral into a double integral because that leads us to something we can do. And there’s something called “differentiation under the integral sign” that I don’t know of anyone actually using. People know of it because Richard Feynman, in his fun memoir What Do You Care What Other People Think: 250 Pages Of How Awesome I Was In Every Situation Ever, mentions how awesome it made him in so many situations. Mathematics, physics, and engineering nerds are required to read this at an impressionable age, so we fall in love with a technique no textbook ever mentions. Sorry.

    I’ve written about all this as if we were interested just in areas. We’re not. We like calculating lengths and volumes and, if we dare venture into more dimensions, hypervolumes and the like. That’s all right. If we understand how to calculate areas, we have the tools we need. We can adapt them to as many or as few dimensions as we need. By weighting integrals we can do calculations that tell us about centers of mass and moments of inertia, about the most and least probable values of something, about all quantum mechanics.
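    A weighted integral in the flesh: the center of mass of a rod on [0, 1] is the integral of x times the density, divided by the integral of the density. With density ρ(x) = x that comes out to (1/3)/(1/2) = 2/3. A quick numerical sketch; names are mine.

```python
def integrate(f, a, b, n=10_000):
    """Midpoint rule; plenty accurate for this illustration."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

density = lambda x: x                       # rod denser toward the right end
mass = integrate(density, 0.0, 1.0)         # integral of rho
moment = integrate(lambda x: x * density(x), 0.0, 1.0)  # integral of x * rho
print(moment / mass)                        # center of mass: 2/3 for this rod
```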

    As often happens, this powerful tool starts with something anyone might ponder: what size square has the same area as this other shape? The power comes from thinking seriously about it.

     
  • Joseph Nebus 6:00 pm on Wednesday, 16 August, 2017
    Tags: rank

    The Summer 2017 Mathematics A To Z: Height Function (elliptic curves) 


    I am one letter closer to the end of Gaurish’s main block of requests. They’re all good ones, mind you. This gets me back into elliptic curves and Diophantine equations. I might be writing about the wrong thing.

    Height Function.

    My love’s father has a habit of asking us to rate our hobbies. This turned into a new running joke over a family vacation this summer. It’s a simple joke: I shuffled the comparables. “Which is better, Bon Jovi or a roller coaster?” It’s still a good question.

    But as genial yet nasty as the spoof is, my love’s father asks natural questions. We always want to compare things. When we form a mathematical construct we look for ways to measure it. There’s typically something. We’ll put one together. We call this a height function.

    We start with an elliptic curve. The coordinates of the points on this curve satisfy some equation. Well, there are many equations they satisfy. We pick one representation for convenience. The convenient thing is to have an easy-to-calculate height. We’ll write the equation for the curve as

    y^2 = x^3 + Ax + B

    Here both ‘A’ and ‘B’ are some integers. This form might be unique, depending on whether a slightly fussy condition on prime numbers holds. (Specifically, if ‘p’ is a prime number and ‘p^4’ divides into ‘A’, then ‘p^6’ must not divide into ‘B’. Yes, I know you realized that right away. But I write to a general audience, some of whom are learning how to see these things.) Then the height of this curve is whichever is the larger number, four times the cube of the absolute value of ‘A’, or 27 times the square of ‘B’. I ask you to just run with it. I don’t know the implications of the height function well enough to say why, oh, 25 times the square of ‘B’ wouldn’t do as well. The usual reason for something like that is that some obvious manipulation makes the 27 appear right away, or disappear right away.
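    Both the height and the fussy condition are short enough to compute for small numbers. A sketch in Python; the function names are mine, and the primality testing is crude trial division checked only up to a bound.

```python
def curve_height(A, B):
    """Height of y^2 = x^3 + Ax + B: the larger of 4|A|^3 and 27 B^2."""
    return max(4 * abs(A) ** 3, 27 * B ** 2)

def is_minimal(A, B, prime_bound=1000):
    """Check the fussy condition: no prime p with p^4 dividing A
    and p^6 dividing B. Only tests primes below prime_bound, so this
    is an illustration, not a proof."""
    for p in range(2, prime_bound):
        if all(p % d for d in range(2, int(p ** 0.5) + 1)):  # p is prime
            if A % p ** 4 == 0 and B % p ** 6 == 0:
                return False
    return True

print(curve_height(-1, 1))    # max(4, 27) = 27
print(is_minimal(-1, 1))      # True
print(is_minimal(16, 64))     # False: 2^4 divides A and 2^6 divides B
```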

    This idea of height feeds into a measure called rank. “Rank” is a term the young mathematician encounters first while learning matrices. It’s the number of rows in a matrix that aren’t equal to some sum or multiple of other rows. That is, it’s how many different things there are among a set. You can see why we might find that interesting. So many topics have something called “rank” and it measures how many different things there are in a set of things. In elliptic curves, the rank is a measure of how complicated the curve is. We can imagine the rational points on the elliptic curve as things generated by some small set of starter points. The starter points have to be of infinite order. Starter points that don’t have infinite order don’t count for the rank. Please don’t worry about what “infinite order” means here. I only mention this infinite-order business because if I don’t then something I have to say about two paragraphs from here will sound daft. So, the rank is how many of these starter points you need to generate the elliptic curve. (WARNING: Call them “generating points” or “generators” during your thesis defense.)

    There’s no known way of guessing what the rank is if you just know ‘A’ and ‘B’. There are algorithms that can calculate the rank given a particular ‘A’ and ‘B’. But it’s not something like the quadratic formula where you can just do a quick calculation and know what you’re looking for. We don’t even know if the algorithms we have will work for every elliptic curve.

    We think that there’s no limit to the rank of elliptic curves. We don’t know this. We know there exist curves with ranks as high as 28. They seem to be rare [*]. I don’t know if that’s proven. But we do know there are elliptic curves with rank zero. A lot of them, in fact. (See what I meant two paragraphs back?) These are the elliptic curves that have only finitely many rational points on them.

    And there’s a lot of those. There’s a well-respected conjecture that the average rank, of all the elliptic curves there are, is ½. It might be. What we have been able to prove is that the average rank is less than or equal to 1.17. Also that it should be larger than zero. So we’re maybe closing in on the ½ conjecture? At least we know something. I admit that writing this essay has me wondering what we do know of elliptic curves.

    What do the height, and through it the rank, get us? I worry I’m repeating myself. By themselves they give us families of elliptic curves. Shapes that are similar in a particular and not-always-obvious way. And they feed into the Birch and Swinnerton-Dyer conjecture, which is the hipster’s Riemann Hypothesis. That is, it’s this big, unanswered, important problem that would, if answered, tell us things about a lot of questions that I’m not sure can be concisely explained. At least not why they’re interesting. We know some special cases, at least. Wikipedia tells me nothing’s proved for curves with rank greater than 1. Humanity’s ignorance on this point makes me feel slightly better pondering what I don’t know about elliptic curves.

    (There are some other things within the field of elliptic curves called height functions. There’s particularly a height of individual points. I was unsure which height Gaurish found interesting so I chose one. The other starts by measuring something different; it views, for example, \frac{1}{2} as having a lower height than does \frac{51}{101} , even though the numbers are quite close in value. It develops along similar lines, trying to find classes of curves with similar behavior. And it gets into different unsolved conjectures. We have our ideas about how to think of fields.)


    [*] Wikipedia seems to suggest we only know of one, provided by Professor Noam Elkies in 2006, and let me quote it in full. I apologize that it isn’t in the format I suggested at top was standard. Elkies way outranks me academically so we have to do things his way:

    y^2 + xy + y = x^3 - x^2 -  20,067,762,415,575,526,585,033,208,209,338,542,750,930,230,312,178,956,502 x + 34,481,611,795,030,556,467,032,985,690,390,720,374,855,944,359,319,180,361,266,008,296,291,939,448,732,243,429

    I can’t figure how to get WordPress to present that larger. I sympathize. I’m tired just looking at an equation like that. This page lists records of known elliptic curve ranks. I don’t know if the lack of any records more recent than 2006 reflects the page not having been updated or nobody having found a rank-29 curve. I fully accept the field might be more difficult than even doing maintenance on a web page’s content is.

     
  • Joseph Nebus 6:00 pm on Tuesday, 15 August, 2017
    Tags: accountants, octopus, Warped

    Reading the Comics, August 12, 2017: August 10 and 12 Edition 


    The other half of last week’s comic strips didn’t have any prominent pets in them. The six of them appeared on two days, though, so that’s as good as a particular theme. There’s also some π talk, but there’s enough of that I don’t want to overuse Pi Day as an edition name.

    Mark Anderson’s Andertoons for the 10th is a classroom joke. It’s built on a common problem in teaching by examples. The student can make the wrong generalization. I like the joke. There’s probably no particular reason seven was used as the example number to have zero interact with. Maybe it just sounded funnier than the other numbers under ten that might be used.

    Mike Baldwin’s Cornered for the 10th uses a chalkboard of symbols to imply deep thinking. The symbols on the board look to me like they’re drawn from some real mathematics or physics source. There’s force equations appropriate for gravity or electric interactions. I can’t explain the whole board, but that’s not essential to work out anyway.

    Marty Links’s Emmy Lou for the 17th of March, 1976 was rerun the 10th of August. It name-drops the mathematics teacher as the scariest of the set. Fortunately, Emmy Lou went to her classes in a day before Rate My Professor was a thing, so her teacher doesn’t have to hear about this.

    Scott Hilburn’s The Argyle Sweater for the 12th is a timely reminder that Scott Hilburn has way more Pi Day jokes than we have Pi Days for. Also he has octopus jokes. It’s up to you to figure out whether the etymology of the caption makes sense.

    John Zakour and Scott Roberts’s Working Daze for the 12th presents the “accountant can’t do arithmetic” joke. People who ought to be good at arithmetic being lousy at figuring tips is an ancient joke. I’m a touch surprised that Christopher Miller’s American Cornball: A Laffopedic Guide to the Formerly Funny doesn’t have an entry for tips (or mathematics). But that might reflect Miller’s mission to catalogue jokes that have fallen out of the popular lexicon, not merely that are old.

    Michael Cavna’s Warped for the 12th is also a Pi Day joke that couldn’t wait. It’s cute and should fit on any mathematics teacher’s office door.

     
  • Joseph Nebus 6:00 pm on Monday, 14 August, 2017
    Tags: Gaussian integers

    The Summer 2017 Mathematics A To Z: Gaussian Primes 


    Once more do I have Gaurish to thank for the day’s topic. (There’ll be two more chances this week, providing I keep my writing just enough ahead of deadline.) This one doesn’t touch category theory or topology.

    Gaussian Primes.

    I keep touching on group theory here. It’s a field that’s about what kinds of things can work like arithmetic does. A group is a set of things that you can add together. At least, you can do something that works like adding regular numbers together does. A ring is a set of things that you can add and multiply together.

    There are many interesting rings. Here’s one. It’s called the Gaussian Integers. They’re made of numbers we can write as a + b\imath , where ‘a’ and ‘b’ are some integers. \imath is what you figure, that number that multiplied by itself is -1. These aren’t the complex-valued numbers, you notice, because ‘a’ and ‘b’ are always integers. But you add them together the way you add complex-valued numbers together. That is, a + b\imath plus c + d\imath is the number (a + c) + (b + d)\imath . And you multiply them the way you multiply complex-valued numbers together. That is, a + b\imath times c + d\imath is the number (a\cdot c - b\cdot d) + (a\cdot d + b\cdot c)\imath .
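    Since these are just the complex-number rules restricted to integer parts, a sketch needs nothing fancier than pairs of integers. The function names are mine.

```python
def g_add(u, v):
    """(a + bi) + (c + di) = (a + c) + (b + d)i, with integer parts."""
    (a, b), (c, d) = u, v
    return (a + c, b + d)

def g_mul(u, v):
    """(a + bi)(c + di) = (ac - bd) + (ad + bc)i."""
    (a, b), (c, d) = u, v
    return (a * c - b * d, a * d + b * c)

print(g_add((1, 2), (3, -1)))   # (4, 1)
print(g_mul((1, 1), (1, -1)))   # (2, 0): ordinary two, in disguise
```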

    We created something that has addition and multiplication. It picks up subtraction for free. It doesn’t have division. We can create rings that do, but this one won’t, any more than regular old integers have division. But we can ask what other normal-arithmetic-like stuff these Gaussian integers do have. For instance, can we factor numbers?

    This isn’t an obvious one. No, we can’t expect to be able to divide one Gaussian integer by another. But we can’t expect to divide a regular old integer by another, not and get an integer out of it. That doesn’t mean we can’t factor them. It means we divide the regular old integers into a couple classes. There’s prime numbers. There’s composites. There’s the unit, the number 1. There’s zero. We know prime numbers; they’re 2, 3, 5, 7, and so on. Composite numbers are the ones you get by multiplying prime numbers together: 4, 6, 8, 9, 10, and so on. 1 and 0 are off on their own. Leave them there. We can’t divide any old integer by any old integer. But we can say an integer is equal to this string of prime numbers multiplied together. This gives us a handle by which we can prove a lot of interesting results.

    We can do the same with Gaussian integers. We can divide them up into Gaussian primes, Gaussian composites, units, and zero. The words mean what they mean for regular old integers. A Gaussian composite can be factored into the multiples of Gaussian primes. Gaussian primes can’t be factored any further.

    If we know what the prime numbers are for regular old integers we can tell whether something’s a Gaussian prime. Admittedly, knowing all the prime numbers is a challenge. But a Gaussian integer a + b\imath will be prime whenever a couple simple-to-test conditions are true. First is if ‘a’ and ‘b’ are both not zero, but a^2 + b^2 is a prime number. So, for example, 5 + 4\imath is a Gaussian prime.

    You might ask, hey, would -5 - 4\imath also be a Gaussian prime? That’s also got components that are integers, and the squares of them add up to a prime number (41). Well-spotted. Gaussian primes appear in quartets. If a + b\imath is a Gaussian prime, so is -a -b\imath . And so are -b + a\imath and b - a\imath .

    There’s another group of Gaussian primes. These are the numbers a + b\imath where either ‘a’ or ‘b’ is zero. Then the other one is, if positive, three more than a whole multiple of four. If it’s negative, then it’s three less than a whole multiple of four. So ‘3’ is a Gaussian prime, as is -3, and as is 3\imath and so is -3\imath .
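    Both conditions fit inside one short test. A sketch, with crude trial-division primality and names of my own choosing:

```python
def is_prime(n):
    """Ordinary primality by trial division; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_gaussian_prime(a, b):
    """a + bi is a Gaussian prime if:
    - both parts are nonzero and a^2 + b^2 is an ordinary prime, or
    - one part is zero and the other is, in absolute value,
      a prime congruent to 3 mod 4."""
    if a != 0 and b != 0:
        return is_prime(a * a + b * b)
    n = abs(a + b)          # whichever part is nonzero
    return is_prime(n) and n % 4 == 3

print(is_gaussian_prime(5, 4))    # True: 25 + 16 = 41 is prime
print(is_gaussian_prime(3, 0))    # True: 3 is three more than 0 times 4
print(is_gaussian_prime(2, 0))    # False: 2 = (1 + i)(1 - i)
```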

    This has strange effects. Like, ‘3’ is a prime number in the regular old scheme of things. It’s also a Gaussian prime. But familiar other prime numbers like ‘2’ and ‘5’? Not anymore. Two is equal to (1 + \imath) \cdot (1 - \imath) ; both of those terms are Gaussian primes. Five is equal to (2 + \imath) \cdot (2 - \imath) . There are similar shocking results for 13. But, roughly, the world of composites and prime numbers translates into Gaussian composites and Gaussian primes. In this slightly exotic structure we have everything familiar about factoring numbers.

    You might have some nagging thoughts. Like, sure, two is equal to (1 + \imath) \cdot (1 - \imath) . But isn’t it also equal to (1 + \imath) \cdot (1 - \imath) \cdot \imath \cdot (-\imath) ? One of the important things about prime numbers is that every composite number is the product of a unique string of prime numbers. Do we have to give that up for Gaussian integers?

    Good nag. But no; the doubt is coming about because you’ve forgotten the difference between “the positive integers” and “all the integers”. If we stick to positive whole numbers then, yeah, (say) ten is equal to two times five and no other combination of prime numbers. But suppose we have all the integers, positive and negative. Then ten is equal to either two times five or it’s equal to negative two times negative five. Or, better, it’s equal to negative one times two times negative one times five. Or any of those times any even number of negative ones.

    Remember that bit about separating ‘one’ out from the world of primes and composites? That’s because the number one screws up these unique factorizations. You can always toss in extra factors of one, to taste, without changing the product. If we have positive and negative integers to use, then negative one does almost the same trick. We can toss in any even number of extra negative ones without changing the product. This is why we separate “units” out of the numbers. They’re not part of the prime factorization of any numbers.

    For the Gaussian integers there are four units. 1 and -1, \imath and -\imath . They are neither primes nor composites, and we don’t worry about how they would otherwise multiply the number of factorizations we get.

    But let me close with a neat, easy-to-understand puzzle. It’s called the moat-crossing problem. In the regular old integers it’s this: imagine that the prime numbers are islands in a dangerous sea. You start on the number ‘2’. Imagine you have a board that can be set down and safely crossed, then picked up to be put down again. Could you get from the start off to safety, which is infinitely far away, if your board is of some fixed, finite length?

    No, you can’t. The problem amounts to how big the gap between one prime number and the next can be. It turns out there’s no limit to that. That is, you give me a number, as small or as large as you like. I can find some prime number that’s more than your number away from the next prime after it. There are arbitrarily large gaps between prime numbers.
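    The unbounded-gap fact has a constructive core: the numbers n! + 2 through n! + n are all composite, giving a gap of at least n − 1 wherever it lands. Hunting for the first gap of a requested size is easier on the computer. A brute-force sketch, names mine:

```python
def first_gap_at_least(g, limit=10_000):
    """First prime p below limit whose gap to the next prime is >= g.
    Brute force: an illustration of growing gaps, not an efficient tool."""
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    primes = [n for n in range(2, limit) if is_prime(n)]
    for p, q in zip(primes, primes[1:]):
        if q - p >= g:
            return p, q
    return None

print(first_gap_at_least(8))   # (89, 97): the first gap of width 8
```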

    Gaussian primes, though? Since a Gaussian prime might have nearest neighbors in any direction? Nobody knows. We know there are arbitrarily large gaps. Pick a moat size; we can (eventually) find a Gaussian prime that’s at least that far away from its nearest neighbors. But this does not say whether it’s impossible to get from the smallest Gaussian primes — 1 + \imath and its companions -1 + \imath and on — infinitely far away. We know there’s a moat of width 6 separating the origin of things from infinity. We don’t know whether there are bigger ones.

    You’re not going to solve this problem. Unless I have more brilliant readers than I know about; if I have ones who can solve this problem then I might be too intimidated to write anything more. But there is surely a pleasant pastime, maybe a charming game, to be made from this. Try finding the biggest possible moats around some set of Gaussian prime islands.
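    Anyone wanting to play could start with something like this sketch: collect the Gaussian primes in one quadrant square and find each one’s distance to its nearest neighbor; big distances hint at moats. It repeats the primality tests so it stands alone, it ignores the other three quadrants, and it suffers edge effects near the boundary, so treat it as a game board rather than research apparatus. Names mine.

```python
import math

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_gaussian_prime(a, b):
    if a != 0 and b != 0:
        return is_prime(a * a + b * b)
    n = abs(a + b)
    return is_prime(n) and n % 4 == 3

def widest_local_moat(size):
    """Largest nearest-neighbor distance among Gaussian primes with
    0 <= a, b <= size. O(n^2) comparison; fine for a small game board."""
    pts = [(a, b) for a in range(size + 1) for b in range(size + 1)
           if is_gaussian_prime(a, b)]
    best = 0.0
    for p in pts:
        nearest = min(math.dist(p, q) for q in pts if q != p)
        best = max(best, nearest)
    return best

print(widest_local_moat(20))
```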

    Ellen Gethner, Stan Wagon, and Brian Wick’s A Stroll Through the Gaussian Primes describes this moat problem. It also sports some fine pictures of where the Gaussian primes are and what kinds of moats you can find. If you don’t follow the reasoning, you can still enjoy the illustrations.

     
  • Joseph Nebus 6:00 pm on Sunday, 13 August, 2017
    Tags: dimensional analysis, Dog Eat Doug, feminism, Sylvia, Thatababy

    Reading the Comics, August 9, 2017: Pets Doing Mathematics Edition 


    I had just enough comic strips to split this week’s mathematics comics review into two pieces. I like that. It feels so much to me like I have better readership when I have many days in a row with posting something, however slight. The A to Z is good for three days a week, and if comic strips can fill two of those other days then I get to enjoy a lot of regular publication days. … Though last week I accidentally set the Sunday comics post to appear on Monday, just before the A To Z post. I’m curious how that affected my readers. That nobody said anything is ominous.

    Border collies are, as we know, highly intelligent. (Looking over a chalkboard diagramming 'fetch', with symbols.) 'There MUST be some point to it, but I guess we don't have the mathematical tools to crack it at the moment.'

    Niklas Eriksson’s Carpe Diem for the 7th of August, 2017. I have to agree the border collies haven’t worked out the point of fetch. I also question whether they’ve worked out the simple ballistics of the tossed stick. If the variables mean what they suggest they mean, then dimensional analysis suggests they’ve got at least three fiascos going on here. Maybe they have an idiosyncratic use for variables like ‘v’.

    Niklas Eriksson’s Carpe Diem for the 7th of August uses mathematics as the signifier for intelligence. I’m intrigued by how the joke goes a little different: while the border collies can work out the mechanics of a tossed stick, they haven’t figured out what the point of fetch is. But working out people’s motivations gets into realms of psychology and sociology and economics. There the mathematics might not be harder, but knowing that one is calculating a relevant thing is. (Eriksson’s making a running theme of the intelligence of border collies.)

    Nicole Hollander’s Sylvia rerun for the 7th tosses off a mention that “we’re the first generation of girls who do math”. And that therefore there will be a cornucopia of new opportunities and good things to come to them. There’s a bunch of social commentary in there. One is the assumption that mathematics skill is a liberating thing. Perhaps it is the gloom of the times but I doubt that an oppressed group developing skills causes them to be esteemed. It seems more likely to me to make the skills become devalued. Social justice isn’t a matter of good exam grades.

    Then, too, it’s not as though women haven’t done mathematics since forever. Every mathematics department on a college campus has some faded posters about Emmy Noether and Sofia Kovalevskaya and maybe Sophie Germain. Probably high school mathematics rooms too. Again perhaps it’s the gloom of the times. But I keep coming back to the goddess’s cynical dismissal of all this young hope.

    Mort Walker and Dik Browne’s Hi and Lois for the 10th of February, 1960 and rerun the 8th portrays arithmetic as a grand-strategic imperative. Well, it means education as a strategic imperative. But arithmetic is the thing Dot uses. I imagine because it is so easy to teach as a series of trivia and quiz about. And it fits in a single panel with room to spare.

    Dot: 'Now try it again: two and two is four.' Trixie: 'Fwee!' Dot: 'You're not TRYING! Do you want the Russians to get AHEAD of US!?' Trixie looks back and thinks: 'I didn't even know there was anyone back there!'

    Mort Walker and Dik Browne’s Hi and Lois for the 10th of February, 1960 and rerun the 8th of August, 2017. Remember: you’re only young once, but you can be geopolitically naive forever!

    Paul Trap’s Thatababy for the 8th is not quite the anthropomorphic-numerals joke of the week. It circles around that territory, though, giving a couple of odd numbers some personality.

    Brian Anderson’s Dog Eat Doug for the 9th finally justifies my title for this essay, as cats ponder mathematics. Well, they ponder quantum mechanics. But it’s nearly impossible to have a serious thought about that without pondering its mathematics. This doesn’t mean calculation, mind you. It does mean understanding what kinds of functions have physical importance. And what kinds of things one can do to functions. Understand them and you can discuss quantum mechanics without being mathematically stupid. And there’s enough ways to be stupid about quantum mechanics that any you can cut down is progress.

     
  • Joseph Nebus 6:00 pm on Friday, 11 August, 2017
    Tags: computer programming, contravariant, covariant, functors

    The Summer 2017 Mathematics A To Z: Functor 


    Gaurish gives me another topic for today. I’m now no longer sure whether Gaurish hopes me to become a topology blogger or a category theory blogger. I have the last laugh, though. I’ve wanted to get better-versed in both fields and there’s nothing like explaining something to learn about it.

    Functor.

    So, category theory. It’s a foundational field. It talks about stuff that’s terribly abstract. This means it’s powerful, but it can be hard to think of interesting examples. I’ll try, though.

    It starts with categories. These have three parts. The first part is a set of things. (There always is.) The second part is a collection of matches between pairs of things in the set. They’re called morphisms. The third part is a rule that lets us combine two morphisms into a new, third one. That is: suppose ‘a’, ‘b’, and ‘c’ are things in the set. Then there’s a morphism that matches a \rightarrow b , and a morphism that matches b \rightarrow c . And we can combine them into another morphism that matches a \rightarrow c . So we have a set of things, and a set of things we can do with those things. And the set of things we can do has a structure all its own.

    This describes a lot of stuff. Group theory fits seamlessly into this description. Most of what we do with numbers is a kind of group theory. Vector spaces do too. Most of what we do with analysis has vector spaces underneath it. Topology does too. Most of what we do with geometry is an expression of topology. So you see why category theory is so foundational.

    Functors enter our picture when we have two categories. Or more. They’re about the ways we can match up categories. But let’s start with two categories. One of them I’ll name ‘C’, and the other, ‘D’. A functor has to match everything that’s in the set of ‘C’ to something that’s in the set of ‘D’.

    And it does more. It has to match every morphism between things in ‘C’ to some other morphism, between corresponding things in ‘D’. It’s got to do it in a way that satisfies that combining, too. That is, suppose that ‘f’ and ‘g’ are morphisms for ‘C’. And that ‘f’ and ‘g’ combine to make ‘h’. Then, the functor has to match ‘f’ and ‘g’ and ‘h’ to some morphisms for ‘D’. The combination of whatever ‘f’ matches to and whatever ‘g’ matches to has to be whatever ‘h’ matches to.
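    A concrete, if humble, example: in Python, the list construction behaves like a functor, with element-wise application playing the part of F on morphisms. Combining morphisms is preserved, which a few lines can check. The names here are mine.

```python
def compose(g, f):
    """h = g after f: the category's rule for combining morphisms."""
    return lambda x: g(f(x))

def F(f):
    """The list functor on morphisms: apply f to every element."""
    return lambda xs: [f(x) for x in xs]

f = lambda x: x + 1        # a morphism a -> b
g = lambda x: x * 2        # a morphism b -> c
h = compose(g, f)          # the combined morphism a -> c

xs = [1, 2, 3]
print(F(h)(xs))                     # [4, 6, 8]
print(compose(F(g), F(f))(xs))      # the same list: F preserves combining
```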

    This might sound to you like a homomorphism. If it does, I admire your memory or mathematical prowess. Functors are about matching one thing to another in a way that preserves structure. Structure is the way that sets of things can interact. We naturally look for stuff made up of different things that have the same structure. Yes, functors are themselves a category. That is, you can make a brand-new category whose set of things are the functors between two other categories. This is a good spot to pause while the dizziness passes.

    There are two kingdoms of functor. You tell them apart by what they do with the morphisms. Here again I’m going to need my categories ‘C’ and ‘D’. I need a morphism for ‘C’. I’ll call that ‘f’. ‘f’ has to match something in the set of ‘C’ to something in the set of ‘C’. Let me call the first something ‘a’, and the second something ‘b’. That’s all right so far? Thank you.

    Let me call my functor ‘F’. ‘F’ matches all the elements in ‘C’ to elements in ‘D’. And it matches all the morphisms on the elements in ‘C’ to morphisms on the elements in ‘D’. So if I write ‘F(a)’, what I mean is look at the element ‘a’ in the set for ‘C’. Then look at what element in the set for ‘D’ the functor matches with ‘a’. If I write ‘F(b)’, what I mean is look at the element ‘b’ in the set for ‘C’. Then pick out whatever element in the set for ‘D’ gets matched to ‘b’. If I write ‘F(f)’, what I mean is look at the morphism ‘f’ between elements in ‘C’. Then pick out whatever morphism between elements in ‘D’ it gets matched with.

    Here’s where I’m going with this. Suppose my morphism ‘f’ matches ‘a’ to ‘b’. Does the functor of that morphism, ‘F(f)’, match ‘F(a)’ to ‘F(b)’? Of course, you say, what else could it do? And the answer is: why couldn’t it match ‘F(b)’ to ‘F(a)’?

    No, it doesn’t break everything. Not if you’re consistent about swapping the order of the matchings. The normal everyday order, the one you’d thought couldn’t have an alternative, is a “covariant functor”. The crosswise order, this second thought, is a “contravariant functor”. Covariant and contravariant are distinctions that weave through much of mathematics. They particularly appear through tensors and the geometry they imply. In that introduction they tend to be difficult, even mean, creations, since in regular old Euclidean space they don’t mean anything different. They’re different for non-Euclidean spaces, and that’s important and valuable. The covariant versus contravariant difference is easier to grasp here.
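Here’s a sketch of that crosswise order in Python (mine, not the post’s): “precompose with f” is a genuine contravariant functor. It sends a morphism f from ‘a’ to ‘b’ to a map that turns arrows out of ‘b’ into arrows out of ‘a’, and the combining order swaps.

```python
def compose(g, f):
    """First do f, then g."""
    return lambda x: g(f(x))

def F(f):
    """A contravariant functor: send f: a -> b to 'precompose with f',
    which turns an arrow b -> X into an arrow a -> X."""
    return lambda arrow: lambda x: arrow(f(x))

f = lambda n: n + 1          # f: a -> b
g = lambda n: n * 3          # g: b -> c
h = compose(g, f)            # h: a -> c

arrow = lambda n: n * n      # some arrow out of c

# Contravariance swaps the order: F(h) is F(f) combined with F(g),
# not F(g) combined with F(f).
left = F(h)(arrow)
right = F(f)(F(g)(arrow))
assert all(left(n) == right(n) for n in range(10))
```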

    Functors work their way into computer science. The avenue here is functional programming. That’s a method of programming in which, instead of the normal long list of commands, you write a single line of code that holds like fourteen “->” symbols and makes the computer stop and catch fire when it encounters a bug. The advantage is that when you have the code debugged it’s quite speedy and memory-efficient. The disadvantage is that if you have to alter the function later, it’s easiest to throw everything out and start from scratch, beginning from vacuum-tube-based computing machines. But it works well while it does. You just have to get the hang of it.

     
    • gaurish 9:55 am on Saturday, 12 August, 2017 Permalink | Reply

      Can you suggest a nice introductory book on category theory for beginners? What I understand is that they generalize the notions defined concretely in algebra (which were motivated by arithmetic), but I lack any concrete understanding.

      Liked by 1 person

    • mathtuition88 2:56 pm on Sunday, 13 August, 2017 Permalink | Reply

      “Categories for the Working Mathematician” by Mac Lane is good and foundational (recommended for serious readers). Another book “Cakes, Custard and Category Theory” by Eugenia Cheng is accessible even to laymen.

      Like

      • Joseph Nebus 5:08 pm on Sunday, 13 August, 2017 Permalink | Reply

        I’m grateful to MathTuition88 for the suggestion. I’m afraid I’m poorly-enough read in category theory I don’t have any good idea where beginners ought to start.

        Liked by 1 person

    • elkement (Elke Stangl) 1:59 pm on Friday, 18 August, 2017 Permalink | Reply

      May I ask a computer science question ;-) ? I tried to understand how this functor from category theory would be mapped onto (Ha – another level of mapping!! ;-)) a functor in C++ but was not very successful. In this discussion https://stackoverflow.com/questions/356950/c-functors-and-their-uses somebody says that a functor in category theory ‘has nothing to do with the C++ concept of functor’.

      Would you agree? Or if not, can you maybe explain how an ‘implementation’ of your functor example would look like in C++ (or some pseudo-code in some language…). Or keep that in mind for a future post if you ever want to return to that subject!

      Anyway: I really enjoy this series!!

      Like

  • Joseph Nebus 6:00 pm on Wednesday, 9 August, 2017 Permalink | Reply

    The Summer 2017 Mathematics A To Z: Elliptic Curves 


    Gaurish, of the For The Love Of Mathematics blog, gives me another subject today. It’s one that isn’t about ellipses. Sad to say it’s also not about elliptic integrals. This is sad to me because I have a cute little anecdote about a time I accidentally gave my class an impossible problem. I did apologize. No, nobody solved it anyway.

    Elliptic Curves.

    Elliptic Curves start, of course, with polynomials. Particularly, they’re polynomials with two variables. We call them ‘x’ and ‘y’ because we have no reason to be difficult. They’re of at most third degree. That is, we can have terms like ‘x’ and ‘y^2’ and ‘x^2 y’ and ‘y^3’. Something with higher powers, like ‘x^4’ or ‘x^2 y^2’ — a fourth power, all together — is right out. It doesn’t matter which of the allowed terms appear. Start from this and we can do some slick changes of variables so that we can rewrite it to look like this:

    y^2 = x^3 + Ax + B

    Here, ‘A’ and ‘B’ are some numbers that don’t change for this particular curve. Also, we need it to be true that 4A^3 + 27B^2 doesn’t equal zero. It avoids problems. What we’ll be looking at are coordinates, values of ‘x’ and ‘y’ together which make this equation true. That is, it’s points on the curve. If you pick some real numbers ‘A’ and ‘B’ and draw all the values of ‘x’ and ‘y’ that make the equation true you get … well, there’s different shapes. They all look like those microscope photos of a water drop emerging and falling from a tap, only rotated clockwise ninety degrees.
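Both conditions are one-liners to check (my sketch, not anything from the post):

```python
def is_smooth(A, B):
    """The 4A^3 + 27B^2 condition: nonzero means no cusp or self-crossing."""
    return 4 * A**3 + 27 * B**2 != 0

def on_curve(x, y, A, B):
    """Does (x, y) satisfy y^2 = x^3 + Ax + B? (Exact integer arithmetic.)"""
    return y * y == x**3 + A * x + B

assert is_smooth(-1, 0)        # y^2 = x^3 - x is a fine elliptic curve
assert not is_smooth(0, 0)     # y^2 = x^3 has a cusp at the origin
assert on_curve(2, 3, 1, -1)   # 3^2 = 2^3 + 2 - 1
```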

    So. Pick any of these curves that you like. Pick a point. I’m going to name your point ‘P’. Now pick a point once more. I’m going to name that point ‘Q’. Now draw a line from P through Q. Keep drawing it. It’ll cross the original elliptic curve again. And that point is … not actually special. What is special is the reflection of that point. That is, the same x-coordinate, but flip the plus or minus sign for the y-coordinate. (WARNING! Do not call it “the reflection” at your thesis defense! Call it the “conjugate” point. It means “reflection”.) Your elliptic curve will be symmetric around the x-axis. If, say, the point with x-coordinate 4 and y-coordinate 3 is on the curve, so is the point with x-coordinate 4 and y-coordinate -3. So that reflected point is … something special.

    Kind of a curved-out less-than-sign shape.

    y^2 = x^3 - 1 . The water drop bulges out from the surface.

    This lets us do something wonderful. We can think of this reflected point as the sum of your ‘P’ and ‘Q’. You can ‘add’ any two points on the curve and get a third point. This means we can do something that looks like addition for points on the elliptic curve. And this means the points on this curve are a group, and we can bring all our group-theory knowledge to studying them. It’s a commutative group, too; ‘P’ added to ‘Q’ leads to the same point as ‘Q’ added to ‘P’.

    Let me head off some clever thoughts that make fair objections. What if ‘P’ and ‘Q’ are already reflections, so the line between them is vertical? That never touches the original elliptic curve again, right? Yeah, fair complaint. We patch this by saying that there’s one more point, ‘O’, that’s off “at infinity”. Where is infinity? It’s wherever your vertical lines end. Shut up, this can too be made rigorous. In any case it’s a common hack for this sort of problem. When we add that, everything’s nice. The ‘O’ serves the role in this group that zero serves in arithmetic: the sum of point ‘O’ and any point ‘P’ is going to be ‘P’ again.

    Second clever thought to head off: what if ‘P’ and ‘Q’ are the same point? There are infinitely many lines that go through a single point, so how do we pick one to find an intersection with the elliptic curve? If that happens, we pick the tangent line to the elliptic curve at ‘P’, and carry on as before.
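Here’s a sketch of those rules in Python (mine, not from the post), using exact rational arithmetic so the checks come out clean. The names `ec_add`, `pt`, and `O` are my own:

```python
from fractions import Fraction

O = None  # the extra point "at infinity"

def ec_add(P, Q, A):
    """Add two points on y^2 = x^3 + Ax + B: draw the line through them, find
    where it crosses the curve a third time, and reflect across the x-axis."""
    if P is O:
        return Q
    if Q is O:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and y1 == -y2:
        return O                             # vertical line: the sum is O
    if P == Q:
        m = (3 * x1 * x1 + A) / (2 * y1)     # tangent line at P
    else:
        m = (y2 - y1) / (x2 - x1)            # secant line through P and Q
    x3 = m * m - x1 - x2
    y3 = m * (x1 - x3) - y1                  # the reflection is this minus sign
    return (x3, y3)

def pt(x, y):
    return (Fraction(x), Fraction(y))

# On y^2 = x^3 + 1 (A = 0, B = 1), doubling (2, 3) lands on (0, 1):
assert ec_add(pt(2, 3), pt(2, 3), 0) == pt(0, 1)
# Addition is commutative, and O behaves like zero:
assert ec_add(pt(2, 3), pt(0, 1), 0) == ec_add(pt(0, 1), pt(2, 3), 0)
assert ec_add(pt(2, 3), O, 0) == pt(2, 3)
```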

    The curved-out less-than-sign shape has a noticeable c-shaped bulge on the end.

    y^2 = x^3 + 1 . The water drop is close to breaking off, but surface tension has not yet pinched off the falling form.

    There’s more. What kind of number is ‘x’? Or ‘y’? I’ll bet that you figured they were real numbers. You know, ordinary stuff. I didn’t say what they were, so left it to our instinct, and that usually runs toward real numbers. Those are what I meant, yes. But we didn’t have to. ‘x’ and ‘y’ could be in other sets of numbers too. They could be complex-valued numbers. They could be just the rational numbers. They could even be part of a finite collection of possible numbers. As long as the equation y^2 = x^3 + Ax + B means something (and some technical points are met) we can carry on. The elliptic curves, and the points we “add” on them, might not look like the curves we started with anymore. They might not look like anything recognizable anymore. But the logic continues to hold. We still create these groups out of the points on these lines intersecting a curve.
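Over a finite collection of numbers you can simply list every point. A sketch (my code; the curve y^2 = x^3 + x + 1 over the integers mod 5 is a standard toy example, not one from the post):

```python
def points_mod_p(A, B, p):
    """All (x, y) with y^2 = x^3 + Ax + B (mod p), plus the point at infinity."""
    pts = [None]  # None stands in for O, the point at infinity
    for x in range(p):
        for y in range(p):
            if (y * y - (x**3 + A * x + B)) % p == 0:
                pts.append((x, y))
    return pts

# y^2 = x^3 + x + 1 over the field with five elements has nine group elements:
pts = points_mod_p(1, 1, 5)
assert len(pts) == 9   # eight finite points plus O
```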

    By now you probably admit this is neat stuff. You may also think: so what? We can take this thing you never thought about, draw points and lines on it, and make it look very loosely kind of like just adding numbers together. Why is this interesting? No appreciation just for the beauty of the structure involved? Well, we live in a fallen world.

    It comes back to number theory. The modern study of Diophantine equations grows out of studying elliptic curves on the rational numbers. It turns out the group of points you get for that looks like a finite collection of points with some collection of integers hanging on. How long that collection of numbers is is called the ‘rank’, and there are deep mysteries at work. We know there are elliptic equations that have a rank as big as 28. Nobody knows if the rank can be arbitrarily high, though. And I believe we don’t even know if there are any curves with rank of, like, 27, or 25.

    Yeah, I’m still sensing skepticism out there. Fine. We’ll go back to the only part of number theory everybody agrees is useful. Encryption. We have roughly the same goals for every encryption scheme. We want it to be easy to encode a message. We want it to be easy to decode the message if you have the key. We want it to be hard to decode the message if you don’t have the key.

    The curved-out sign has a bulge with convex loops to it, so that it resembles the cut of a jigsaw puzzle piece.

    y^2 = 3x^3 - 3x + 3 . The water drop is almost large enough that its weight overcomes the surface tension holding it to the main body of water.

    Take something inside one of these elliptic curve groups. Especially one built over a finite field. Let me call your thing ‘g’. It’s really easy for you, knowing what ‘g’ is and what your field is, to raise it to a power. You can pretty well impress me by sharing the value of ‘g’ raised to some whole-number power ‘m’. Call that ‘h’.

    Why am I impressed? Because even if I know ‘h’ and ‘g’, I have a heck of a time figuring out what ‘m’ is. Especially on these finite field groups there’s no obvious connection between how big ‘h’ is and how big ‘g’ is and how big ‘m’ is. Start with a big enough finite field and you can encode messages in ways that are crazy hard to crack.
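In the elliptic-curve setting, “raising ‘g’ to the power ‘m’” means adding the point ‘g’ to itself ‘m’ times. A sketch (my code, on a toy-sized curve where brute force is still easy):

```python
def inv_mod(a, p):
    """Multiplicative inverse mod a prime p, via Fermat's little theorem."""
    return pow(a, p - 2, p)

def add(P, Q, A, p):
    """Point addition on y^2 = x^3 + Ax + B over the integers mod p."""
    if P is None:
        return Q               # None stands in for O, the point at infinity
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None            # vertical line: the sum is O
    if P == Q:
        m = (3 * x1 * x1 + A) * inv_mod(2 * y1 % p, p) % p
    else:
        m = (y2 - y1) * inv_mod((x2 - x1) % p, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def times(m, P, A, p):
    """'Raise' P to the m-th power: add P to itself m times, by double-and-add."""
    R = None
    while m:
        if m & 1:
            R = add(R, P, A, p)
        P = add(P, P, A, p)
        m >>= 1
    return R

def discrete_log(h, g, A, p):
    """The attacker's problem: find m with m*g = h. Brute force only."""
    R, m = None, 0
    while R != h:
        R = add(R, g, A, p)
        m += 1
    return m

# On y^2 = x^3 + x + 1 over the integers mod 5, with g = (0, 1):
g = (0, 1)
h = times(2, g, 1, 5)
assert h == (4, 2)
assert discrete_log(h, g, 1, 5) == 2
```

With a real curve the field has hundreds of bits; `times` stays fast, and the brute-force `discrete_log` becomes hopeless. That asymmetry is the whole point.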

    We trust. At least, if there are any ways to break the code quickly, nobody’s shared them. And there’s one of those enormous-money-prize awards waiting for someone who does know how to break such a code quickly. (I don’t know which. I’m going by what I expect from people.)

    And then there’s fame. These were used to prove Fermat’s Last Theorem. Suppose there are some non-boring numbers ‘a’, ‘b’, and ‘c’, so that for some prime number ‘p’ that’s five or larger, it’s true that a^p + b^p = c^p . (We can separately prove Fermat’s Last Theorem for a power that isn’t a prime number, or a power that’s 3 or 4.) Then this implies properties about the elliptic curve:

    y^2 = x(x - a^p)(x + b^p)

    This is a convenient way of writing things since it showcases the a^p and b^p. It’s equal to:

    y^2 = x^3 + \left(b^p - a^p\right)x^2 - a^p b^p x

    (I was so tempted to leave an arithmetic error in there so I could make sure someone commented.)
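For the suspicious, a brute numeric check of that expansion (my code), with ‘u’ and ‘v’ standing in for a^p and b^p:

```python
# x(x - u)(x + v) expands to x^3 + (v - u)x^2 - u*v*x. Check it over a grid
# of small integers rather than trusting my algebra:
for x in range(-5, 6):
    for u in range(-3, 4):
        for v in range(-3, 4):
            assert x * (x - u) * (x + v) == x**3 + (v - u) * x**2 - u * v * x
```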

    A little ball off to the side of a curved-out less-than-sign shape.

    y^2 = 3x^3 - 4x . The water drop has broken off, and the remaining surface rebounds to its normal meniscus.

    If there’s a solution to Fermat’s Last Theorem, then this elliptic equation can’t be modular. I don’t have enough words to explain what ‘modular’ means here. Andrew Wiles and Richard Taylor showed that the equation was modular. So there is no solution to Fermat’s Last Theorem except the boring ones. (Like, where ‘b’ is zero and ‘a’ and ‘c’ equal each other.) And it all comes from looking close at these neat curves, none of which looks like an ellipse.

    They’re named elliptic curves because we first noticed them when Carl Jacobi — yes, that Carl Jacobi — was studying the length of arcs of an ellipse. That’s interesting enough on its own. But it is hard. Maybe I could have fit in that anecdote about giving my class an impossible problem after all.

     
  • Joseph Nebus 6:00 pm on Monday, 7 August, 2017 Permalink | Reply

    The Summer 2017 Mathematics A To Z: Diophantine Equations 


    I have another request from Gaurish, of the For The Love Of Mathematics blog, today. It’s another change of pace.

    Diophantine Equations

    A Diophantine equation is a polynomial. Well, of course it is. It’s an equation, or a set of equations, setting one polynomial equal to another. Possibly equal to a constant. What makes this different from “any old equation” is the coefficients. These are the constant numbers that you multiply the variables, your x and y and x^2 and z^8 and so on, by. To make a Diophantine equation all these coefficients have to be integers. You know one well, because it’s that x^n + y^n = z^n thing that Fermat’s Last Theorem is all about. And you’ve probably seen ax + by = 1 . It turns up a lot because that’s a line, and we do a lot of stuff with lines.
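That line equation is the friendly case: the extended Euclidean algorithm produces an integer solution of ax + by = 1 whenever the greatest common divisor of ‘a’ and ‘b’ is 1. A sketch (mine):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

# ax + by = 1 has integer solutions exactly when gcd(a, b) = 1:
g, x, y = ext_gcd(34, 21)
assert g == 1 and 34 * x + 21 * y == 1
```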

    Diophantine equations are interesting. There are a couple of cases that are easy to solve. I mean, at least that we can find solutions for. ax + by = 1 , for example, that’s easy to solve. x^n + y^n = z^n it turns out we can’t solve. Well, we can if n is equal to 1 or 2. Or if x or y or z are zero. These are obvious, that is, they’re quite boring. That one took about four hundred years to solve, and the solution was “there aren’t any solutions”. This may convince you of how interesting these problems are. What, from looking at it, tells you that ax + by = 1 is simple while x^n + y^n = z^n is (most of the time) impossible?

    I don’t know. Nobody really does. There are many kinds of Diophantine equation, all different-looking polynomials. Some of them are special one-off cases, like x^n + y^n = z^n . For example, there’s x^4 + y^4 + z^4 = w^4 for some integers x, y, z, and w. Leonhard Euler conjectured this equation had only boring solutions. You’ll remember Euler. He wrote the foundational work for every field of mathematics. It turns out he was wrong. It has infinitely many interesting solutions. But the first one found was 2,682,440^4 + 15,365,639^4 + 18,796,760^4 = 20,615,673^4 and it took a computer search to find. We can forgive Euler not noticing it.
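That identity was hard to find but is easy to verify; Python’s exact integers check it directly:

```python
# Elkies' counterexample to Euler's conjecture, checked with exact arithmetic:
assert 2682440**4 + 15365639**4 + 18796760**4 == 20615673**4
```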

    Some are groups of equations that have similar shapes. There’s the Fermat’s Last Theorem formula, for example, which is a different equation for every different integer n. Then there’s what we call Pell’s Equation. This one is x^2 - D y^2 = 1 (or equals -1), for some counting number D. It’s named for the English mathematician John Pell, who did not discover the equation (even in the Western European tradition; Indian mathematicians were familiar with it for a millennium), did not solve the equation, and did not do anything particularly noteworthy in advancing human understanding of the solution. Pell owes his fame in this regard to Leonhard Euler, who mistook Pell’s revision of a translation of a book discussing a solution for Pell’s having authored the solution. I confess Euler isn’t looking very good on Diophantine equations.
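A sketch of Pell-equation hunting (my code; brute force, where serious solvers use continued fractions):

```python
from math import isqrt

def pell(D, limit=10**6):
    """Smallest positive solution of x^2 - D*y^2 = 1, by brute-force search."""
    for y in range(1, limit):
        x2 = D * y * y + 1
        x = isqrt(x2)
        if x * x == x2:
            return (x, y)

assert pell(2) == (3, 2)    # 9 - 2*4 = 1
assert pell(3) == (2, 1)    # 4 - 3*1 = 1
assert pell(5) == (9, 4)    # 81 - 5*16 = 1
# D = 61's smallest solution is famously enormous; we can at least verify it:
assert 1766319049**2 - 61 * 226153980**2 == 1
```

The wild variation in the size of the smallest solution as D creeps up is part of what makes the equation interesting.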

    But nobody looks very good on Diophantine equations. Make up a Diophantine equation of your own. Use whatever whole numbers, positive or negative, that you like for your equation. Use whatever powers of however many variables you like for your equation. So you get something that looks maybe like this:

    7x^2 - 20y + 18y^2 - 38z = 9

    Does it have any solutions? I don’t know. Nobody does. There isn’t a general all-around solution. You know how with a quadratic equation we have this formula where you recite some incantation about “b squared minus four a c” and get any roots that exist? Nothing like that exists for Diophantine equations in general. Specific ones, yes. But they’re all specialties, crafted to fit the equation that has just that shape.

    So for each equation we have to ask: is there a solution? Is there any solution that isn’t obvious? Are there finitely many solutions? Are there infinitely many? Either way, can we find all the solutions? And we have to answer these anew for each kind of equation: which answers it has, whether answers are known to exist, whether answers can exist at all. Knowing answers for one kind doesn’t help us for any others, except as inspiration. If some trick worked before, maybe it will work this time.

    There are a couple of usually reliable tricks. Can the equation be rewritten in some way so that it becomes the equation for a line? If it can, we probably have a good handle on any solutions. Can we apply modular arithmetic to the equation? If we can, we might be able to reduce the number of possible solutions that the equation has. In particular we might be able to reduce the number of possible solutions until we can just check every case. Can we use induction? That is, can we show there’s some parameter for the equations, and that knowing the solutions for one value of that parameter implies knowing solutions for larger values? And then find some small enough value we can test it out by hand? Or can we show that if there is a solution, then there must be a smaller solution, and smaller yet, until we can either find an answer or show there aren’t any? Sometimes. Not always. The field blends seamlessly into number theory. And number theory is all sorts of problems easy to pose and hard or impossible to solve.
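Here’s the modular-arithmetic trick in miniature, on an equation of my own choosing rather than one from the post: x^2 + y^2 = 4z + 3 has no integer solutions at all, and reducing mod 4 leaves only a handful of cases to check.

```python
# Squares mod 4 can only be 0 or 1 ...
assert {x * x % 4 for x in range(4)} == {0, 1}
# ... so x^2 + y^2 mod 4 is 0, 1, or 2, never 3. The equation
# x^2 + y^2 = 4z + 3 therefore has no integer solutions; every case checked:
assert all((x * x + y * y) % 4 != 3 for x in range(4) for y in range(4))
```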

    We name these equations after Diophantus of Alexandria, a 3rd century Greek mathematician. His writings, what we have of them, discuss how to solve equations. Not general solutions, the way we might want to solve ax^2 + bx + c = 0 , but specific ones, like 1x^2 - 5x + 6 = 0 . His books are among those whose rediscovery shaped the rebirth of mathematics. Pierre de Fermat scribbled his famous note in the too-small margins of Diophantus’s Arithmetica. (Well, a popular translation.)

    But the field predates Diophantus, at least if we look at specific problems. Of course it does. In mathematics, as in life, any search for a source ends in a vast, marshy ambiguity. The field stays vital. If we loosen ourselves to looking at inequalities — x - Dy^2 < A , let's say — then we start seeing optimization problems. What values of x and y will make this equation most nearly true? What values will come closest to satisfying this bunch of equations? The questions are about how to find the best possible fit to whatever our complicated sets of needs are. We can't always answer. We keep searching.

     
  • Joseph Nebus 4:00 pm on Monday, 7 August, 2017 Permalink | Reply
    Tags: , , , , , , Herb and Jamaal, , , Ozy and Millie, , ,   

    Reading the Comics, August 5, 2017: Lazy Summer Week Edition 


    It wasn’t like the week wasn’t busy. Comic Strip Master Command sent out as many mathematically-themed comics as I might be able to use. But they were again ones that don’t leave me much to talk about. I’ll try anyway. It was looking like an anthropomorphic-symbols sort of week, too.

    Tom Thaves’s Frank and Ernest for the 30th of July is an anthropomorphic-symbols joke. The tick marks used for counting make an appearance and isn’t that enough? Maybe.

    Dan Thompson’s Brevity for the 31st is another entry in the anthropomorphic-symbols joke contest. This one sticks to mathematical symbols, so if the Frank and Ernest makes the cut this week so must this one.

    Eric the Circle for the 31st, this installment by “T daug”, gives the slightly anthropomorphic geometric figure a joke that at least mentions a radius, and isn’t that enough? What catches my imagination about this panel particularly is that the “fractured radius” is not just a legitimate pun but also resembles a legitimate geometry drawing. Drawing a diameter line is sensible enough. Drawing some other point on the circle and connecting that to the ends of the diameter is also something we might do.

    Scott Hilburn’s The Argyle Sweater for the 1st of August is one of the logical mathematics jokes you could make about snakes. The more canonical one runs like this: God in the Garden of Eden makes all the animals and bids them to be fruitful. And God inspects them all and finds rabbits and doves and oxen and fish and fowl all growing in number. All but a pair of snakes. God asks why they haven’t bred and they say they can’t, not without help. What help? They need some thick tree branches chopped down. The bemused God grants them this. God checks back in some time later and finds an abundance of baby snakes in the Garden. But why the delay? “We’re adders,” explain the snakes, “so we need logs to multiply”. This joke absolutely killed them in the mathematics library up to about 1978. I’m told.

    John Deering’s Strange Brew for the 1st is a monkeys-at-typewriters joke. It faintly reminds me that I might have pledged to retire mentions of the monkeys-at-typewriters joke. But I don’t remember so I’ll just have to depend on saying I don’t think I retired the monkeys-at-typewriters jokes and trust that someone will tell me if I’m wrong.

    Dana Simpson’s Ozy and Millie rerun for the 2nd name-drops multiplication tables as the sort of thing a nerd child wants to know. They may have fit the available word balloon space better than “know how to diagram sentences” would.

    Mark Anderson’s Andertoons for the 3rd is the reassuringly normal appearance of Andertoons for this week. It is a geometry class joke about rays, line segments with one point where there’s an end and … a direction where it just doesn’t. And it riffs on the notion of the existence of mathematical things. At least I can see it that way.

    Dad: 'How many library books have you read this summer, Hammie?' Hammie: 'About 47.' Zoe: 'HA!' Dad: 'Hammie ... ' Hammie: 'Okay ... two.' Dad: 'Then why did you say 47?' Hammie: 'I was rounding up.' Zoe: 'NOW he understands math!'

    Rick Kirkman and Jerry Scott’s Baby Blues for the 5th of August, 2017. Hammie totally blew it by saying “about forty-seven”. Too specific a number to be a plausible lie. “About forty” or “About fifty”, something you can see as the result of rounding off, yes. He needs to know there are rules about how to cheat.

    Rick Kirkman and Jerry Scott’s Baby Blues for the 5th is a rounding-up joke that isn’t about herds of 198 cattle.

    Stephen Bentley’s Herb and Jamaal for the 5th tosses off a mention of the New Math as something well out of fashion. There are fashions in mathematics, as in all human endeavors. It startles many to learn this.

     
  • Joseph Nebus 4:00 pm on Friday, 4 August, 2017 Permalink | Reply

    The Summer 2017 Mathematics A To Z: Cohomology 


    Today’s A To Z topic is another request from Gaurish, of the For The Love Of Mathematics blog. Also part of what looks like a quest to make me become a topology blogger, at least for a little while. It’s going to be exciting and I hope not to faceplant as I try this.

    Also, a note about Thomas K Dye, who’s drawn the banner art for this and for the Why Stuff Can Orbit series: the publisher for collections of his comic strip is having a sale this weekend.

    Cohomology.

    The word looks intimidating, and faintly of technobabble. It’s less cryptic than it appears. We see parts of it in non-mathematical contexts. In biology class we would see “homology”, the sharing of structure in body parts that look superficially very different. We also see it in art class. The instructor points out that a dog’s leg looks like that because they stand on their toes. What looks like a backward-facing knee is just the ankle, and if we stand on our toes we see that in ourselves. We might see it in chemistry, as many interesting organic compounds differ only in how long or how numerous the boring parts are. The stuff that does work is the same, or close to the same. And this is a hint to what a mathematician means by cohomology. It’s something in shapes. It’s particularly something in how different things might have similar shapes. Yes, I am using a homology in language here.

    I often talk casually about the “shape” of mathematical things. Or their “structures”. This sounds weird and abstract to start and never really gets better. We can get some footing if we think about drawing the thing we’re talking about. Could we represent the thing we’re working on as a figure? Often we can. Maybe we can draw a polygon, with the vertices of the shape matching the pieces of our mathematical thing. We get the structure of our thing from thinking about what we can do to that polygon without changing the way it looks. Or without changing the way we can do whatever our original mathematical thing does.

    This leads us to homologies. We get them by looking for stuff that’s true even if we moosh up the original thing. The classic homology comes from polyhedrons, three-dimensional shapes. There’s a relationship between the number of vertices, the number of edges, and the number of faces of a polyhedron. It doesn’t change even if you stretch the shape out longer, or squish it down, or, for that matter, slice off a corner. It only changes if you punch a new hole through the middle of it. Or if you plug one up. That would be unsporting. A homology describes something about the structure of a mathematical thing. It might even be literal. Topology, the study of what we know about shapes without bringing distance into it, has the number of holes that go through a thing as a homology. This starts feeling like a comfortable, familiar idea now.
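That vertices-edges-faces relationship is concrete enough to compute. A sketch (mine; the counts for the sliced cube and the picture-frame shape are standard examples, not from the post):

```python
def euler_characteristic(V, E, F):
    """Vertices minus edges plus faces: the classic invariant of a polyhedron."""
    return V - E + F

# Stretching, squishing, or slicing a corner changes the counts but not the value:
assert euler_characteristic(4, 6, 4) == 2      # tetrahedron
assert euler_characteristic(8, 12, 6) == 2     # cube
assert euler_characteristic(10, 15, 7) == 2    # cube with one corner sliced off
# Punching a hole through the middle is what changes it:
assert euler_characteristic(16, 32, 16) == 0   # square "picture frame" shape
```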

    But that isn’t a cohomology. That ‘co’ prefix looks dangerous. At least it looks significant. When the ‘co’ prefix has turned up before it’s meant something is shaped by how it refers to something else. Coordinates aren’t just number lines; they’re collections of number lines that we can use to say where things are. If ‘a’ is a factor of the number ‘x’, its cofactor is the number you multiply ‘a’ by in order to get ‘x’. (For real numbers that’s just x divided by a. For other stuff it might be weirder.) A codomain is a set that a function maps a domain into (and must contain the range, at least). Cosets aren’t just sets; they’re ways we can divide (for example) the counting numbers into odds and evens.

    So what’s the ‘co’ part for a homology? I’m sad to say we start losing that comfortable feeling now. We have to look at something we’re used to thinking of as a process as though it were a thing. These things are morphisms: what are the ways we can match one mathematical structure to another? Sometimes the morphisms are easy. We can match the even numbers up with all the integers: match 0 with 0, match 2 with 1, match -6 with -3, and so on. Addition on the even numbers matches with addition on the integers: 4 plus 6 is 10; 2 plus 3 is 5. For that matter, we can match the integers with the multiples of three: match 1 with 3, match -1 with -3, match 5 with 15. 1 plus -2 is -1; 3 plus -6 is -9.

    What happens if we look at the sets of matchings that we can do as if that were a set of things? That is, not some human concept like ‘2’ but rather ‘match a number with one-half its value’? And ‘match a number with three times its value’? These can be the population of a new set of things.

    And these things can interact. Suppose we “match a number with one-half its value” and then immediately “match a number with three times its value”. Can we do that? … Sure, easily. 4 matches to 2 which goes on to 6. 8 matches to 4 which goes on to 12. Can we write that as a single matching? Again, sure. 4 matches to 6. 8 matches to 12. -2 matches to -3. We can write this as “match a number with three-halves its value”. We’ve taken “match a number with one-half its value” and combined it with “match a number with three times its value”. And it’s given us the new “match a number with three-halves its value”. These things we can do to the integers are themselves things that can interact.
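Those two matchings and their combination can be checked in a couple of lines (my sketch):

```python
from fractions import Fraction

halve = lambda n: Fraction(n, 2)   # "match a number with one-half its value"
triple = lambda n: 3 * n           # "match a number with three times its value"

def combine(second, first):
    """Do one matching, then immediately do the other."""
    return lambda n: second(first(n))

# The combination is "match a number with three-halves its value":
three_halves = combine(triple, halve)
assert three_halves(4) == 6
assert three_halves(8) == 12
assert three_halves(-2) == -3
```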

    This is a good moment to pause and let the dizziness pass.

    It isn’t just you. There is something weird thinking of “doing stuff to a set” as a thing. And we have to get a touch more abstract than even this. We should be all right, but please do not try to use this to defend your thesis in category theory. Just use it to not look forlorn when talking to your friend who’s defending her thesis in category theory.

    Now, we can take this collection of all the ways we can relate one set of things to another. And we can combine this with an operation that works kind of like addition. Some way to “add” one way-to-match-things to another and get a way-to-match-things. There’s also something that works kind of like multiplication. It’s a different way to combine these ways-to-match-things. This forms a ring, which is a kind of structure that mathematicians learn about in Introduction to Not That Kind Of Algebra. There are many constructs that are rings. The integers, for example, are also a ring, with addition and multiplication the same old processes we’ve always used.

    And just as we can sort the integers into odds and evens — or into other groupings, like “multiples of three” and “one plus a multiple of three” and “two plus a multiple of three” — so we can sort the ways-to-match-things into new collections. And this is our cohomology. It’s the ways we can sort and classify the different ways to manipulate whatever we started on.

    I apologize that this sounds so abstract as to barely exist. I admit we’re far from a nice solid example such as “six”. But the abstractness is what gives cohomologies explanatory power. We depend very little on the specifics of what we might talk about. And therefore what we can prove is true for very many things. It takes a while to get there, is all.

     
  • Joseph Nebus 4:00 pm on Thursday, 3 August, 2017 Permalink | Reply
    Tags: , , July, , Portugal, ,   

    How July 2017 Treated My Mathematics Blog 


    July was a slightly better-read month around here than June was. I expected that. There weren’t any more posts in July — 13 both months — but the run-up to an A-to-Z sequence usually draws in readers. Not so many as might have been. I didn’t break back above the 1,000 threshold. But there were 911 page views from 568 distinct visitors, according to WordPress. In June there were 878 page views from only 542 visitors. May saw 1,029 page views from 662 visitors and I anticipate that August should be closer to that.

    The biggest measure of how engaged readers were rose dramatically. There were 45 comments posted here over the month. In June there were a meager 13 comments, and in May only eight. Asking questions that demand answers, and that hold out the prospect of making me do stuff, seems to be the trick. The number of likes rose less dramatically, with 118 things liked around here. In June there were only 99 likes; in May, 78. This isn’t like the peaks of the Summer 2015 A To Z (518 Likes in June!), but we’ll see what happens.

    The most popular posts in July were the usual mix of Reading the Comics posts, the number of grooves on a record, and A To Z publicity:

    There were 60 countries sending me readers in July, up from 52 in both June and May. In a twist, the United States sent the greatest number of them:

    Country Views
    United States 466
    Philippines 59
    United Kingdom 57
    Canada 45
    India 35
    Singapore 32
    Austria 31
    France 16
    Australia 15
    Brazil 14
    Germany 12
    Spain 12
    Hong Kong SAR China 8
    Italy 7
    Puerto Rico 7
    Argentina 6
    South Africa 6
    Belgium 5
    Netherlands 4
    Norway 4
    Russia 4
    Sweden 4
    Switzerland 4
    Chile 3
    Indonesia 3
    Nigeria 3
    Slovakia 3
    Colombia 2
    Czech Republic 2
    Denmark 2
    Estonia 2
    Lebanon 2
    Malaysia 2
    New Zealand 2
    Pakistan 2
    Poland 2
    Thailand 2
    Turkey 2
    United Arab Emirates 2
    Bangladesh 1
    Belarus 1
    Bulgaria 1
    Cambodia 1
    Cape Verde 1
    Costa Rica 1
    European Union 1
    Hungary 1 (*)
    Israel 1
    Japan 1 (**)
    Kazakhstan 1
    Latvia 1
    Mexico 1 (*)
    Oman 1
    Paraguay 1
    Romania 1
    Saudi Arabia 1
    Serbia 1
    South Korea 1
    Ukraine 1 (**)
    Vietnam 1

    There were 20 single-reader countries, up from 16 in June and down from May’s 21. Hungary and Mexico were single-reader countries the previous month. Japan and Ukraine have been single-reader countries three months running now. I’ve lost my monthly lone Portuguese reader. I hope she’s well and just busy with other projects. Still don’t know what “European Union” means in this context.

    The most popular day for reading was Monday, with 19 percent of page views coming in then. Why? Good question. In June it had been Sunday, with 18 percent. In May it was Sunday, with 16 percent. This is probably a meaningless flutter. The most popular hour was, again, 4 pm, when 19 percent of page views came. 4 pm Greenwich Time is when I set most stuff to appear so I understand that being a trendy hour. In June the 4 pm hour got 14 percent of my page views.

    August started with the blog having 51,034 page views from 23,322 distinct viewers that WordPress will admit to. And it lists me as having 676 followers on WordPress, up from the start of July’s triangular-number (thanks, IvaSally!) 666. If you’d like this blog to appear in your WordPress reader, please use the little blue strip labelled “Follow nebusresearch” which should appear in the upper-right corner of the page. If following by e-mail is more your thing, there’s a strip labelled “Follow Blog Via E-mail” that you can use. I have finally looked up how to make that e-mail instead of “email”. It required my trying. I’m also on Twitter, as @Nebusj. And I support a humor blog as well, a nice cozy little thing that includes useful bits of information like quick summaries of the current story comics so you can avoid sounding uninformed about the plot twists of Alley Oop. It’s a need which I can fill.

     
  • Joseph Nebus 4:00 pm on Wednesday, 2 August, 2017 Permalink | Reply
    Tags: , bookstores, , , , measurements, , ,   

    The Summer 2017 Mathematics A To Z: Benford's Law 


    Today’s entry in the Summer 2017 Mathematics A To Z is one for myself. I couldn’t post this any later.

    Benford’s Law.

    My car’s odometer first read 9 on my final test drive before buying it, in June of 2009. It flipped over to 10 barely a minute after that, somewhere near Jersey Freeze ice cream parlor at what used to be the Freehold Traffic Circle. Ask a Central New Jersey person of sufficient vintage about that place. Its odometer read 90 miles sometime that weekend, I think while I was driving to The Book Garden on Route 537. Ask a Central New Jersey person of sufficient reading habits about that place. It’s still there. It flipped over to 100 sometime when I was driving back later that day.

    The odometer read 900 about two months after that, probably while I was driving to work, as I had a longer commute in those days. It flipped over to 1000 a couple days after that. The odometer first read 9,000 miles sometime in spring of 2010 and I don’t remember what I was driving to for that. It flipped over from 9,999 to 10,000 miles several weeks later, as I pulled into the car dealership for its scheduled servicing. Yes, this kind of impressed the dealer that I got there exactly on the round number.

    The odometer first read 90,000 in late August of last year, as I was driving to some competitive pinball event in western Michigan. It’s scheduled to flip over to 100,000 miles sometime this week as I get to the dealer for its scheduled maintenance. While cars have gotten to be much more reliable and durable than they used to be, the odometer will never flip over to 900,000 miles. At least I can’t imagine owning it long enough, at my rate of driving the past eight years, that this would ever happen. It’s hard to imagine living long enough for the car to reach 900,000 miles. Thursday or Friday it should flip over to 100,000 miles. The leading digit on the odometer will be 1 or, possibly, 2 for the rest of my association with it.

    The point of this little autobiography is this observation. Imagine all the days that I have owned this car, from sometime in June 2009 to whatever day I sell, lose, or replace it. Pick one. What is the leading digit of my odometer on that day? It could be anything from 1 to 9. But it’s more likely to be 1 than it is 9. Right now it’s as likely to be any of the digits. But after this week the chance of ‘1’ being the leading digit will rise, and become quite a bit more likely than that of ‘9’. And it’ll never lose that edge.

    This is a reflection of Benford’s Law. It is named, as most mathematical things are, imperfectly. The law-namer was Frank Benford, a physicist, who in 1938 published a paper, The Law Of Anomalous Numbers. It confirmed an observation of Simon Newcomb, a 19th-century astronomer and mathematician of an exhausting number of observations and developments. Newcomb noticed something about the logarithm tables that anyone who needed to compute referred to often. The earlier pages were more worn-out and dirty and damaged than the later pages. People worked with numbers that start with ‘1’ more than they did numbers starting with ‘2’. More with numbers starting ‘2’ than starting ‘3’. More starting ‘3’ than ‘4’. And on. Benford showed this was not some fluke of calculations. It turned up in bizarre collections of data. The surface areas of rivers. The populations of thousands of United States municipalities. Molecular weights. The digits that turned up in an issue of Reader’s Digest. There is a bias in the world toward numbers that start with ‘1’.

    And this is, prima facie, crazy. How can the surface areas of rivers somehow prefer to be, say, 100-199 hectares instead of 500-599 hectares? A hundred is a human construct. (Indeed, it’s many human constructs.) That we think ten is an interesting number is an artefact of our society. To think that 100 is a nice round number and that, say, 81 or 144 are not is a cultural choice. Grant that the digits of street addresses of people listed in American Men of Science — one of Benford’s data sources — have some cultural bias. How can another of his sources, molecular weights, possibly?

    The bias sneaks in subtly. Don’t they all? It lurks at the edge of the table of data. The table header, perhaps, where it says “River Name” and “Surface Area (sq km)”. Or at the bottom where it says “Length (miles)”. Or it’s never explicit, because I take for granted people know my car’s mileage is measured in miles.

    What would be different in my introduction if my car were Canadian, and the odometer measured kilometers instead? … Well, I’d not have driven the 9th kilometer; someone else doing a test-drive would have. The 90th through 99th kilometers would have come a little earlier that first weekend. The 900th through 999th kilometers too. I would have passed the 99,999th kilometer years ago. In kilometers my car has been in the 100,000s for something like four years now. It’s less absurd that it could reach the 900,000th kilometer in my lifetime, but that still won’t happen.

    What would be different is the precise dates about when my car reached its milestones, and the amount of days it spent in the 1’s and the 2’s and the 3’s and so on. But the proportions? What fraction of its days it spends with a 1 as the leading digit versus a 2 or a 5? … Well, that’s changed a little bit. There is some final mile, or kilometer, my car will ever register and it makes a little difference whether that’s 239,000 or 385,000. But it’s only a little difference. It’s the difference in how many times a tossed coin comes up heads on the first 1,000 flips versus the second 1,000 flips. They’ll be different numbers, but not that different.

    What’s the difference between a mile and a kilometer? A mile is longer than a kilometer, but that’s it. They measure the same kinds of things. You can convert a measurement in miles to one in kilometers by multiplying by a constant. We could as well measure my car’s odometer in meters, or inches, or parsecs, or lengths of football fields. The difference is what number we multiply the original measurement by. We call this “scaling”.

    Whatever we measure, in whatever unit we measure, has to have a leading digit of something. So it’s got to have some chance of starting out with a ‘1’, some chance of starting out with a ‘2’, some chance of starting out with a ‘3’, and so on. But that chance can’t depend on the scale. Measuring something in smaller or larger units doesn’t change the proportion of how often each leading digit is there.

    These facts combine to imply that leading digits follow a logarithmic-scale law. The leading digit should be a ‘1’ something like 30 percent of the time. And a ‘2’ about 18 percent of the time. A ‘3’ about one-eighth of the time. And it decreases from there. ‘9’ gets to take the lead a meager 4.6 percent of the time.
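    The logarithmic law can be written P(d) = log₁₀(1 + 1/d). Here is a minimal sketch in Python of those percentages, checked against the leading digits of the powers of 2, a sequence that sprawls over many orders of magnitude (the choice of powers of 2 is my illustration, not one of Benford's data sets):

```python
import math
from collections import Counter

# Benford's Law: the leading digit is d with probability log10(1 + 1/d).
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
print({d: round(p, 3) for d, p in benford.items()})
# '1' comes out near 0.301, '2' near 0.176, '9' near 0.046.

# Empirical check: leading digits of the first 10,000 powers of 2.
leads = Counter(int(str(2 ** n)[0]) for n in range(1, 10001))
for d in range(1, 10):
    print(d, round(benford[d], 4), leads[d] / 10000)
```

    The two columns track each other closely, which is the point: nothing about powers of 2 was designed to prefer small leading digits.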

    Roughly. It’s not going to be so all the time. Measure the heights of humans in meters and there’ll be far more leading digits of ‘1’ than we should expect, as most people are between 1 and 2 meters tall. Measure them in feet and ‘5’ and ‘6’ take a great lead. The law works best when data can sprawl over many orders of magnitude. If we lived in a world where people could as easily be two inches as two hundred feet tall, Benford’s Law would make more accurate predictions about their heights. That something is a mathematical truth does not mean it’s independent of all reason.

    For example, the reader thinking back some may be wondering: granted that atomic weights and river areas and populations carry units with them that create this distribution. How do street addresses, one of Benford’s observed sources, carry any unit? Well, street addresses are, at least in the United States custom, a loose measure of distance. The 100 block (for example) of a street is within one … block … from whatever the more important street or river crossing that street is. The 900 block is farther away.

    This extends further. Block numbers are proxies for distance from the major cross feature. House numbers on the block are proxies for distance from the start of the block. We have a better chance to see street number 418 than 1418, to see 418 than 488, or to see 418 than to see 1488. We can look at Benford’s Law in the second and third and other minor digits of numbers. But we have to be more cautious. There is more room for variation and quirky events. A block-filling building in the downtown area can take whatever street number the owners think most auspicious. Smaller samples of anything are less predictable.

    Nevertheless, Benford’s Law has become famous among forensic accountants over the past several decades, if we allow the use of the word “famous” in this context. Its fame is thanks to the economist Hal Varian and the accountancy scholar Mark Nigrini. They observed that real-world financial data should be expected to follow this same distribution. If they don’t, then there might be something suspicious going on. This is not an ironclad rule. There might be good reasons for the discrepancy. If your work trips are always to the same location, and always for one week, and there’s one hotel it makes sense to stay at, and you always learn you’ll need to make the trips about one month ahead of time, of course the hotel bill will be roughly the same. Benford’s Law is a simple, rough tool, a way to decide what data to scrutinize for mischief. With this in mind I trust none of my readers will make the obvious leading-digit mistake when padding their expense accounts anymore.
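    The accountant's version of the tool can be sketched as a chi-square comparison between observed leading digits and the Benford expectation. This is a toy illustration of the idea, not any standard forensic package; the function names and the example data are mine:

```python
import math
from collections import Counter

def leading_digit(x):
    """First significant digit of a positive number."""
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_chi_square(amounts):
    """Chi-square distance between the leading digits of `amounts`
    and the Benford distribution. Large values flag data worth a
    closer look; they do not prove anything by themselves."""
    n = len(amounts)
    observed = Counter(leading_digit(a) for a in amounts)
    total = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        total += (observed[d] - expected) ** 2 / expected
    return total

# Naturally spread data (compound growth) score low;
# padded, clustered round numbers score high.
honest = [100 * 1.03 ** k for k in range(300)]
padded = [500, 550, 575, 525, 590, 560, 540, 580] * 30
print(benford_chi_square(honest), benford_chi_square(padded))
```

    The threshold for "suspicious" is a judgment call, which is exactly why this is a screening tool rather than a verdict.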

    Since I’ve done you that favor, anyone out there think they can pick me up at the dealer’s Thursday, maybe Friday? Thanks in advance.

     
    • ivasallay 6:12 pm on Wednesday, 2 August, 2017 Permalink | Reply

      Fascinating. I’ve never given this much thought, but it makes sense. Clearly, given any random whole number greater than 9, there will be at least as many numbers less than it that start with a 1 than any other number, too.

      Back to your comment about odometers. We owned a van until it started costing more in repairs than most people pay in car payments. The odometer read something like 97,000 miles. We should have suspected from the beginning that it wasn’t made to last because IF it had made it to 99,999, it would then start over at 00,000.

      Like

      • Joseph Nebus 6:40 pm on Wednesday, 2 August, 2017 Permalink | Reply

        Thank you. This is one of my favorite little bits of mathematics because it is something lurking around us all the time, just waiting to be discovered, and it’s really there once we try measuring things.

        I’m amused to hear of a car with that short an odometer reel. I do remember thinking as a child that there was trouble if a car’s odometer rolled past 999,999. I remember my father joking that when that happened you had a brand-new car. I also remember hearing vaguely of flags that would drop beside the odometer reels if that ever happened.

        Electromechanical and early solid-state pinball machines, with scoring reels or finitely many digits to display a score, can have this problem happen. Some of them handle it by having a light turn on to show, say, ‘100,000’ above the score and which does nothing to help with someone who rolls the score twice. Some just shrug and give up; when I’ve rolled our home Tri-Zone machine, its score just goes back to the 000,000 mark. Some of the pinball machines made by European manufacturer Zaccaria in the day would have the final digit — fixed at zero by long pinball custom — switch to a flashing 1, or (I trust) 2, or 3, or so on. It’s a bit odd to read at first, but it’s a good way to make the rollover problem a much better one to have.

        Like

  • Joseph Nebus 4:00 pm on Tuesday, 1 August, 2017 Permalink | Reply
    Tags: , , Boner's Ark, , ,   

    Reading the Comics, July 29, 2017: Not Really Mathematics Concluded Edition 


    It was a busy week at Comic Strip Master Command last week, since they wanted to be sure I was overloaded ahead of the start of the Summer 2017 A To Z project. So here’s the couple of comics I didn’t have time to review on Sunday.

    Mort (“Addison”) Walker’s Boner’s Ark for the 7th of September, 1971 was rerun the 27th of July. It mentions mathematics but just as a class someone might need more work on. Could be anything, but mathematics has the connotations of something everybody struggles with, and in an American comic strip needs only four letters to write. Most economical use of word balloon space.

    Boner: 'Your math could stand a lot more work, Spot.' Aardvark: 'Yeah! Let's get at it, Buddy! Get that old nose to the grindstone!' Spot: 'YOUR nose could use a little time at the grindstone, too, Buddy!'

    Mort (“Addison”) Walker’s Boner’s Ark for the 7th of September, 1971 and rerun the 27th of July, 2017. I suppose I’m glad that Boner is making sure his animals get as good an education as possible while they’re stranded on their Ark. I’m just wondering whether Boner’s comment is meant in the parental role of a concerned responsible caretaker figure, or whether he’s serving as a teacher or principal. What exactly is the social-service infrastructure of Boner’s Ark? The world may never know.

    Neil Kohney’s The Other End for the 28th also mentions mathematics without having any real mathematics content. Barry tries to make the argument that mathematics has a timeless and universal quality that makes for good aesthetic value. I support this principle. Art has many roles. One is to make us see things which are true which are not about ourselves. This mathematics does. Whether it’s something as instantly accessible as, say, RobertLovesPi‘s illustrations of geometrical structures, or something as involved as the five-color map theorem, mathematics gives us something. This isn’t any excuse to slum, though.

    Rob Harrell’s Big Top rerun for the 29th features a word problem. It’s cast in terms of what a lion might find interesting. Cultural expectations are inseparable from the mathematics we do, however much we might find universal truths about them. Word problems make the cultural biases more explicit, though. Also, note that Harrell shows an important lesson for artists in the final panel: whenever possible, draw animals wearing glasses.

    Samson’s Dark Side Of The Horse for the 29th is another sheep-counting joke. As Samson often does, this includes different representations of numbers before it all turns to chaos in the end. This is why some of us can’t sleep.

     
  • Joseph Nebus 4:00 pm on Monday, 31 July, 2017 Permalink | Reply
    Tags: , , , , , , ,   

    The Summer 2017 Mathematics A To Z: Arithmetic 


    And now as summer (United States edition) reaches its closing months I plunge into the fourth of my A To Z mathematics-glossary sequences. I hope I know what I’m doing! Today’s request is one of several from Gaurish, who’s got to be my top requester for mathematical terms and whom I thank for it. It’s a lot easier writing these things when I don’t have to think up topics. Gaurish hosts a fine blog, For the love of Mathematics, which you might consider reading.

    Arithmetic.

    Arithmetic is what people who aren’t mathematicians figure mathematicians do all day. I remember in my childhood a Berenstain Bears book about people’s jobs. Its mathematician was an adorable little bear adding up sums on the chalkboard, in an observatory, on the Moon. I liked every part of this. I wouldn’t say it’s the whole reason I became a mathematician but it did make the prospect look good early on.

    People who aren’t mathematicians are right. At least, the bulk of what mathematics people do is arithmetic, if we work by volume. Arithmetic is about the calculations we do to evaluate or solve polynomials. And polynomials are everything that humans find interesting. Arithmetic is adding and subtracting, multiplying and dividing, taking powers and taking roots. Arithmetic is changing the units of a thing, breaking something into several smaller units, or merging several smaller units into one big one. Arithmetic’s role in commerce and in finance must overwhelm the higher mathematics. Higher mathematics offers cohomologies and Ricci tensors. Arithmetic offers a budget.

    This is old mathematics. There’s evidence of humans twenty thousand years ago recording their arithmetic computations. My understanding is the evidence is ambiguous and interpretations vary. This seems fair. I assume that humans did such arithmetic then, granting that I do not know how to interpret archeological evidence. The thing is that arithmetic is older than humans. Animals are able to count, to do addition and subtraction, perhaps to do harder computations. (I crib this from The Number Sense: How the Mind Creates Mathematics, by Stanislas Dehaene.) We learn it first, refining our rough, instinctively developed sense to something rigorous. At least we learn it at the same time we learn geometry, the other branch of mathematics that must predate human existence.

    The primality of arithmetic governs how it becomes an adjective. We will have, for example, the “arithmetic progression” of terms in a sequence. This is a sequence of numbers such as 1, 3, 5, 7, 9, and so on. Or 4, 9, 14, 19, 24, 29, and so on. The difference between one term and its successor is the same as the difference between the predecessor and this term. Or we speak of the “arithmetic mean”. This is the one found by adding together all the numbers of a sample and dividing by the number of terms in the sample. These are important concepts, useful concepts. They are among the first concepts we have when we think of a thing. Their familiarity makes them easy tools to overlook.
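    Both adjectival uses are easy to make concrete. A trivial sketch, using the second example sequence above:

```python
seq = [4, 9, 14, 19, 24, 29]            # an arithmetic progression
diffs = [b - a for a, b in zip(seq, seq[1:])]
print(diffs)                             # [5, 5, 5, 5, 5]: constant difference

mean = sum(seq) / len(seq)               # the arithmetic mean
print(mean)                              # 16.5
```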

    Consider the Fundamental Theorem of Arithmetic. There are many Fundamental Theorems; that of Algebra guarantees us the number of roots of a polynomial equation. That of Calculus guarantees us that derivatives and integrals are joined concepts. The Fundamental Theorem of Arithmetic tells us that every whole number greater than one is equal to one and only one product of prime numbers. If a number is equal to (say) two times two times thirteen times nineteen, it cannot also be equal to (say) five times eleven times seventeen. This may seem uncontroversial. The budding mathematician will convince herself it’s so by trying to work out all the ways to write 60 as the product of prime numbers. It’s hard to imagine mathematics for which it isn’t true.
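    The budding mathematician's experiment is easy to automate. Trial division finds the one factorization the theorem promises (a sketch; the function name is my own):

```python
def prime_factors(n):
    """Factor n > 1 into primes by trial division. The Fundamental
    Theorem of Arithmetic says this list, in order, is the only one."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(60))                  # [2, 2, 3, 5]
print(prime_factors(2 * 2 * 13 * 19))     # [2, 2, 13, 19]
```

    However many times you run it, 60 never comes out as five times eleven times something; that is the uniqueness half of the theorem at work.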

    But it needn’t be true. As we study why arithmetic works we discover many strange things. This mathematics that we know even without learning is sophisticated. To build a logical justification for it requires a theory of sets and hundreds of pages of tight reasoning. Or a theory of categories and I don’t even know how much reasoning. The thing that is obvious from putting a couple objects on a table and then a couple more is hard to prove.

    As we continue studying arithmetic we start to ponder things like Goldbach’s Conjecture, about even numbers (other than two) being the sum of exactly two prime numbers. This brings us into number theory, a land of fascinating problems. Many of them are so accessible you could pose them to a person while waiting in a fast-food line. This befits a field that grows out of such simple stuff. Many of those are so hard to answer that no person knows whether they are true, or are false, or are even answerable.
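    Goldbach's Conjecture is exactly that sort of fast-food-line problem: easy to state, easy to test, unproven. A naive search for a witness pair, under the caveat that finding pairs for small numbers proves nothing about all of them (the function names are mine):

```python
def is_prime(k):
    """Trial-division primality check, fine for small numbers."""
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def goldbach_pair(n):
    """For even n > 2, return primes (p, q) with p + q = n, or None.
    Goldbach's Conjecture says the answer is never None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

print(goldbach_pair(100))    # (3, 97)
# Every even number up to 10,000 has a pair; nobody knows a proof
# that every even number does.
assert all(goldbach_pair(n) for n in range(4, 10001, 2))
```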

    And it splits off other ideas. Arithmetic starts, at least, with the counting numbers. It moves into the whole numbers and soon all the integers. With division we soon get rational numbers. With roots we soon get certain irrational numbers. A close study of this implies there are irrational numbers that must exist, at least as much as “four” exists, yet can’t be reached by studying polynomials. Not polynomials that don’t already use these exotic irrational numbers. These are transcendental numbers. If we were to say the transcendental numbers were the only real numbers we would be making only a very slight mistake. We learn they exist by thinking long enough and deep enough about arithmetic to realize there must be more there than we realized.

    Thought compounds thought. The integers and the rational numbers and the real numbers have a structure. They interact in certain ways. We can look for things that are not numbers, but which follow rules like that for addition and for multiplication. Sometimes even for powers and for roots. Some of these can be strange: polynomials themselves, for example, follow rules like those of arithmetic. Matrices, which we can represent as grids of numbers, can have powers and even something like roots. Arithmetic is inspiration to finding mathematical structures that look little like our arithmetic. We can find things that follow mathematical operations but which don’t have a Fundamental Theorem of Arithmetic.

    And there are more related ideas. These are often very useful. There’s modular arithmetic, in which we adjust the rules of addition and multiplication so that we can work with a finite set of numbers. There’s floating point arithmetic, in which we set machines to do our calculations. These calculations are no longer precise. But they are fast, and reliable, and that is often what we need.
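    Both relatives show up in a few lines. A toy sketch, using clock arithmetic modulo 12 as the classic finite example:

```python
# Modular arithmetic: the finite set {0, ..., 11} with wrapped-around
# addition and multiplication, like hours on a clock face.
m = 12
print((7 + 8) % m)        # 3: fifteen o'clock is three o'clock
print((5 * 9) % m)        # 9

# Floating point arithmetic: fast and reliable, but not exact.
print(0.1 + 0.2 == 0.3)   # False; the sum is off in the 17th decimal place
```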

    So arithmetic is what people who aren’t mathematicians figure mathematicians do all day. And they are mistaken, but not by much. Arithmetic gives us an idea of what mathematics we can hope to understand. So it structures the way we think about mathematics.

     
    • ivasallay 5:34 pm on Monday, 31 July, 2017 Permalink | Reply

      I think you covered arithmetic in a very clear, scholarly way.

      When I was in the early elementary grades, we didn’t study math. We studied arithmetic.

      Here’s a couple more things some people might not know about arithmetic:
      1) How to remember the proper spelling of arithmetic: A Rat In The House May Eat The Ice Cream.
      2) How to pronounce arithmetic: https://www.quora.com/Why-does-the-pronunciation-of-arithmetic-depend-on-context

      Like

      • Joseph Nebus 6:27 pm on Wednesday, 2 August, 2017 Permalink | Reply

        Thanks! … My recollection is that in elementary school we called it mathematics (or just math), but the teachers were pretty clear about whether we were doing arithmetic or geometry. If that was clear, since I grew up on the tail end of the New Math wave and we could do stuff that was more playful than multiplication tables were.

        I hadn’t thought about the shifting pronunciations of ‘arithmetic’ as a word. I suppose it’s not different from many multi-syllable words in doing that, though. My suspicion is that the distinction between ‘arithmetic’ as an adjective and as a noun is spurious, though. My hunch is people shift the emphasis based on the structure of the whole sentence, with the words coming after ‘arithmetic’ having a big role to play. I’d expect that an important word immediately follows ‘arithmetic’ often if it’s being used as an adjective (like, ‘arithmetic mean’), but that’s not infallible. As opposed to those many rules of English grammar and pronunciation that are infallible.

        Liked by 1 person

    • gaurish 9:48 am on Saturday, 12 August, 2017 Permalink | Reply

      A Beautiful introduction to Arithmetic!

      Like

  • Joseph Nebus 4:00 pm on Sunday, 30 July, 2017 Permalink | Reply
    Tags: , , , , , , Savage Chickens, ,   

    Reading the Comics, July 30, 2017: Not Really Mathematics edition 


    It’s been a busy enough week at Comic Strip Master Command that I’ll need to split the results across two essays. Any other week I’d be glad for this, since, hey, free content. But this week it hits a busy time and shouldn’t I have expected that? The odd thing is that the mathematics mentions have been numerous but not exactly deep. So let’s watch as I make something big out of that.

    Mark Tatulli’s Heart of the City closed out its “Math Camp” storyline this week. It didn’t end up having much to do with mathematics and was instead about trust and personal responsibility issues. You know, like stories about kids who aren’t learning to believe in themselves and follow their dreams usually are. Since we never saw any real Math Camp activities we don’t get any idea what they were trying to do to interest kids in mathematics, which is a bit of a shame. My guess would be they’d play a lot of the logic-driven puzzles that are fun but that they never get to do in class. The story established that what I thought was an amusement park was instead a fair, so, that might be anywhere in Pennsylvania or a couple of other nearby states.

    Rick Kirkman and Jerry Scott’s Baby Blues for the 25th sees Hammie have “another” mathematics worksheet accident. Could be any subject, really, but I suppose it would naturally be the one that hey wait a minute, why is he doing mathematics worksheets in late July? How early does their school district come back from summer vacation, anyway?

    Hammie 'accidentally' taps a glass of water on his mathematics paper. Then tears it up. Then chews it. Mom: 'Another math worksheet accident?' Hammie: 'Honest, Mom, I think they're cursed!'

    Rick Kirkman and Jerry Scott’s Baby Blues for the 25th of July, 2017 Almost as alarming: Hammie is clearly way behind on his “faking plausible excuses” homework. If he doesn’t develop the skills to make a credible reason why he didn’t do something how is he ever going to dodge texts from people too important not to reply to?

    Olivia Walch’s Imogen Quest for the 26th uses a spot of mathematics as the emblem for teaching. In this case it’s a bit of physics. And an important bit of physics, too: it’s the time-dependent Schrödinger Equation. This is the one that describes how, if you know the total energy of the system, and the rules that set its potential and kinetic energies, you can work out the function Ψ that describes it. Ψ is a function, and it’s a powerful one. It contains probability distributions: how likely whatever it is you’re modeling is to have a particle in this region, or in that region. How likely it is to have a particle with this much momentum, versus that much momentum. And so on. Each of these we find by applying a function to the function Ψ. It’s heady stuff, and amazing stuff to me. Ψ somehow contains everything we’d like to know. And different functions work like filters that make clear one aspect of that.
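    For reference, the time-dependent Schrödinger equation in its usual position-space form (a standard statement of the equation, not copied from the strip):

```latex
i\hbar \frac{\partial \Psi}{\partial t}
  = \hat{H}\Psi
  = \left( -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r}, t) \right)\Psi
```

    Here the Hamiltonian operator combines the kinetic term and the potential V, the "rules" mentioned above, and solving for Ψ given the system's energy bookkeeping is exactly the working-out the paragraph describes.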

    Dan Thompson’s Brevity for the 26th is a joke about Sesame Street‘s Count von Count. Also about how we can take people’s natural aptitudes and delights and turn them into sad, droning unpleasantness in the service of corporate overlords. It’s fun.

    Steve Sicula’s Home and Away rerun for the 26th is a misplaced Pi Day joke. It originally ran the 22nd of April, but in 2010, before Pi Day was nearly so much a thing.

    Doug Savage’s Savage Chickens for the 26th proves something “scientific” by putting numbers into it. Particularly, by putting statistics into it. Understandable impulse. One of the great trends of the past century has been taking seriously the idea that we only understand things when they are measured. And this implies statistics. Everything is unique. Only statistical measurement lets us understand what groups of similar things are like. Does something work better than the alternative? We have to run tests, and see how the something and the alternative work. Are they so similar that the differences between them could plausibly be chance alone? Are they so different that it strains belief that they’re equally effective? It’s one of science’s tools. It’s not everything which makes for science. But it is stuff easy to communicate in one panel.
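    The chance-alone question has a very direct computational answer: shuffle the group labels and see how often a difference as big as the real one turns up anyway. A bare-bones permutation test, sketching the idea rather than any production statistics routine (the names are mine):

```python
import random

def permutation_test(a, b, trials=2000, seed=0):
    """Estimate how often a difference in group means at least as
    large as the observed one would arise by chance, by repeatedly
    shuffling the pooled data into two groups of the same sizes."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / trials

# Plainly different groups: shuffling almost never reproduces the
# real gap, so the estimate comes out near zero.
print(permutation_test([1, 2, 3, 4, 5] * 4, [11, 12, 13, 14, 15] * 4))
```

    A result near zero says "chance alone strains belief"; a result near one says the difference is entirely unremarkable.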

    Neil Kohney’s The Other End for the 26th is really a finance joke. It’s about the ways the finance industry can turn one thing into a dazzling series of trades and derivative trades. But this is a field that mathematics colonized, or that colonized mathematics, over the past generation. Mathematical finance has done a lot to shape ideas of how we might study risk, and probability, and how we might form strategies to use that risk. It’s also done a lot to shape finance. Pretty much any major financial crisis you’ve encountered since about 1990 has been driven by a brilliant new mathematical concept meant to govern risk crashing up against the fact that humans don’t behave the way some model said they should. Nor could they; models are simplified, abstracted concepts that let hard problems be approximated. Every model has its points of failure. Hopefully we’ll learn enough about them that major financial crises can become as rare as, for example, major bridge collapses or major airplane disasters.

     
    • ivasallay 5:00 am on Monday, 31 July, 2017 Permalink | Reply

      A pi joke that uses 22/7 as pi should run close to 22 July.


      • Joseph Nebus 6:22 pm on Wednesday, 2 August, 2017 Permalink | Reply

        You’re right, it ought. It’s got to be coincidence that it ran so close this year, though. The strip’s been in repeats a while now and as far as I know isn’t skipping or adjusting reruns to be seasonal.


  • Joseph Nebus 4:00 pm on Thursday, 27 July, 2017 Permalink | Reply

    Why Stuff Can Orbit, Part 13: To Close A Loop 


    Previously:

    And the supplemental reading:


    Today’s is one of the occasional essays in the Why Stuff Can Orbit sequence that just has a lot of equations. I’ve tried not to write everything around equations because I know what they’re like to read. They’re pretty to look at and after about four of them you might as well replace them with a big grey box that reads “just let your eyes glaze over and move down to the words”. It’s even more glaze-y than that for non-mathematicians.

    But we do need them. Equations are wonderfully compact, efficient ways to write about things that are true. Especially things that are only true if exacting conditions are met. They’re so good that I’ll often find myself checking a textbook for an explanation of something and looking only at the equations, letting my eyes glaze over the words. That’s a chilling thing to catch yourself doing. Especially when you’ve written some obscure textbooks and a slightly read mathematics blog.

    What I had been looking at was a perturbed central-force orbit. We have something, generically called a planet, that orbits the center of the universe. It’s attracted to the center of the universe by some potential energy, which we describe as ‘U(r)’. It’s some number that changes with the distance ‘r’ the planet has from the center of the universe. It usually depends on other stuff too, like some kind of mass of the planet or some constants or stuff. The planet has some angular momentum, which we can call ‘L’ and pretend is a simple number. It’s in truth a complicated number, but we’ve set up the problem where we can ignore the complicated stuff. This angular momentum implies the potential energy allows for a circular orbit at some distance which we’ll call ‘a’ from the center of the universe.

    From ‘U(r)’ and ‘L’ we can say whether this is a stable orbit. If it’s stable, a little perturbation, a nudging, from the circular orbit will stay small. If it’s unstable, a little perturbation will keep growing and never stop. If we perturb this circular orbit the planet will wobble back and forth around the circular orbit. Sometimes the radius will be a little smaller than ‘a’, and sometimes it’ll be a little larger than ‘a’. And now I want to see whether we get a stable closed orbit.

    The orbit will be closed if the planet ever comes back to the same position and same momentum that it started with. ‘Started’ is a weird idea in this case. But it’s common vocabulary. By it we mean “whatever properties the thing had when we started paying attention to it”. Usually in a problem like this we suppose there’s some measure of time. It’s typically given the name ‘t’ because we don’t want to make this hard on ourselves. The start is some convenient reference time, often ‘t = 0’. That choice usually makes the equations look simplest.

    The position of the planet we can describe with two variables. One is the distance from the center of the universe, ‘r’, which we know changes with time: ‘r(t)’. Another is the angle the planet makes with respect to some reference line. The angle we might call ‘θ’ and often do. This will also change in time, then, ‘θ(t)’. We can pick other variables to describe where something is. But they’re going to involve more algebra, more symbol work, than this choice does so who needs it?

    Momentum, now, that’s another set of variables we need to worry about. But we don’t need to worry about them. This particular problem is set up so that if we know the position of the planet we also have the momentum. We won’t be able to get both ‘r(t)’ and ‘θ(t)’ back to their starting values without also getting the momentum there. So we don’t have to worry about that. This won’t always work; see my future series, ‘Why Statistical Mechanics Works’.

    So. We know, because it’s not that hard to work out, how long it takes for ‘r(t)’ to get back to its original, ‘r(0)’, value. It’ll take a time we worked out to be (first big equation here, although we found it a couple essays back):

    T_r = 2\pi\sqrt{ \frac{m}{ -F'(a) - \frac{3}{a} F(a) }}

    Here ‘m’ is the mass of the planet. And ‘F’ is a useful little auxiliary function. It’s the force that the planet feels when it’s a distance ‘r’ from the origin. It’s defined as F(r) = -\frac{dU}{dr} . It’s convenient to have around. It makes equations like this one simpler, for one. And it’s weird to think of a central force problem where we never, ever see forces. The peculiar thing is we define ‘F’ for every distance the planet might be from the center of the universe. But all we care about is its value at the equilibrium, circular orbit distance of ‘a’. We also care about its first derivative, also evaluated at the distance of ‘a’, which is the F'(a) appearing early on in that denominator.
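If you’d like to see that formula in action, here’s a quick numerical check in Python. The inverse-square force and all the constants below are made-up illustration numbers, not anything from the essay:

```python
import math

# A made-up inverse-square force, F(r) = -k/r^2, with illustration constants.
k = 1.0   # force constant
m = 1.0   # planet mass
a = 2.0   # circular-orbit radius

def F(r):
    return -k / r**2

def Fprime(r, h=1e-6):
    # Numerical first derivative of F; plenty accurate for a sketch.
    return (F(r + h) - F(r - h)) / (2 * h)

# T_r = 2 pi sqrt( m / ( -F'(a) - (3/a) F(a) ) )
T_r = 2 * math.pi * math.sqrt(m / (-Fprime(a) - (3 / a) * F(a)))
print(T_r)  # about 17.77
```

Reassuringly, for an inverse-square force this radial period comes out equal to the full orbital period, 2\pi\sqrt{ma^3/k}, which is Kepler’s third law in disguise.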

    So in the time between time ‘0’ and time ‘T_r’ the perturbed radius will complete a full loop. It’ll reach its biggest value and its smallest value and get back to the original. (It is so much easier to suppose the perturbation starts at its biggest value at time ‘0’ that we often assume it has. It doesn’t have to be. But if we don’t have something forcing the choice of what time to call ‘0’ on us, why not pick one that’s convenient?) The question is whether ‘θ(t)’ completes a full loop in that time. If it does then we’ve gotten back to the starting position exactly and we have a closed orbit.

    Thing is that the angle will never get back to its starting value. The angle ‘θ(t)’ is always increasing at a rate we call ‘ω’, the angular velocity. This number is constant, at least approximately. Last time we found out what this number was:

    \omega = \frac{L}{ma^2}

    So the angle, over time, is going to look like:

    \theta(t) = \frac{L}{ma^2} t

    And ‘θ(Tr)’ will never equal ‘θ(0)’ again, not unless ‘ω’ is zero. And if ‘ω’ is zero then the planet is racing away from the center of the universe never to be seen again. Or it’s plummeting into the center of the universe to be gobbled up by whatever resides there. In either case, not what we traditionally think of as orbits. Even if we allow these as orbits, these would be nudges too big to call perturbations.

    So here’s the resolution. Angles are right awful pains in mathematical physics. This is because increasing an angle by 2π — or decreasing it by 2π — has no visible effect. In the language of the hew-mon, adding 360 degrees to a turn leaves you back where you started. A 45 degree angle is indistinguishable from a 405 degree angle, or a 765 degree angle, or a -315 degree angle, or so on. This makes for all sorts of irritating alternate cases to consider when you try solving for where one thing meets another. But it allows us to have closed orbits.

    Because we can have a closed orbit, now, if the radius ‘r(t)’ completes a full oscillation in the time it takes ‘θ(t)’ to grow by 2π. Or to grow by π. Or to grow by ½π. Or a third of π. Or so on.

    So. Last time we worked out that the angular velocity had to be this number:

    \omega = \frac{L}{ma^2}

    And that looked weird because the central force doesn’t seem to be there. It’s in there. It’s just implicit. We need to know what the central force is to work out what ‘a’ is. But we can make it explicit by using that auxiliary little function ‘F(r)’. In particular, at the circular orbit radius of ‘a’ we have that:

    F(a) = -\frac{L^2}{ma^3}

    I am going to use this to work out what ‘L’ has to be, in terms of ‘F’ and ‘m’ and ‘a’. First, multiply both sides of this equation by ‘ma^3’:

    F(a) \cdot ma^3 = -L^2

    And then both sides by -1:

    -ma^3 F(a) = L^2

    Take the square root — don’t worry, it will turn out that ‘F(a)’ is a negative number so we’re not doing anything suspicious —

    \sqrt{-ma^3 F(a)} = L

    Now, take that ‘L’ we’ve got and put it back into the equation for angular velocity:

    \omega = \frac{L}{ma^2} = \frac{\sqrt{-ma^3 F(a)}}{ma^2}

    We might seem stuck, at what looks like an even worse position. It’s not. When you do enough of these problems you get used to some tricks. For example, that ‘ma^2’ in the denominator we could move under the square root if we liked. This we know because ma^2 = \sqrt{ \left(ma^2\right)^2 } at least as long as ‘ma^2’ is positive. It is.

    So. We fall back on the trick of squaring and square-rooting the denominator and so generate this mess:

    \omega = \sqrt{\frac{-ma^3 F(a)}{\left(ma^2\right)^2}}	\\ \omega = \sqrt{\frac{-ma^3 F(a)}{m^2 a^4}} \\ \omega = \sqrt{\frac{-F(a)}{ma}}
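As a sanity check on that simplification — with numbers I’ve made up, though any F(a) < 0 would do — the simplified ω should agree exactly with L/(ma^2):

```python
import math

# Made-up illustration numbers; any negative F(a) would do.
m, a, k = 1.0, 2.0, 1.0
F_a = -k / a**2                                # inverse-square force at r = a

L = math.sqrt(-m * a**3 * F_a)                 # L = sqrt(-m a^3 F(a))
omega_from_L = L / (m * a**2)                  # omega = L / (m a^2)
omega_simplified = math.sqrt(-F_a / (m * a))   # omega = sqrt(-F(a) / (m a))

print(omega_from_L, omega_simplified)          # both about 0.35355
```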

    That’s getting nice and simple. Let me go complicate matters. I’ll want to know the angle that the planet sweeps out as the radius goes from its largest to its smallest value. Or vice-versa. This time is going to be half of ‘T_r’, the time it takes to do a complete oscillation. The oscillation might have started at time ‘t’ of zero, maybe not. But how long it takes will be the same. I’m going to call this angle ‘ψ’, because I’ve written “the angle that the planet sweeps out as the radius goes from its largest to its smallest value” enough times this essay. If ‘ψ’ is equal to π, or one-half π, or one-third π, or some other nice rational multiple of π we’ll get a closed orbit. If it isn’t, we won’t.

    So. ‘ψ’ will be one-half times the oscillation time times that angular velocity. This is easy:

    \psi = \frac{1}{2} \cdot T_r \cdot \omega

    Put in the formulas we have for ‘T_r’ and for ‘ω’. Now it’ll be complicated.

    \psi = \frac{1}{2} 2\pi \sqrt{\frac{m}{-F'(a) - \frac{3}{a} F(a)}} \sqrt{\frac{-F(a)}{ma}}

    Now we’ll make this a little simpler again. We have two square roots of fractions multiplied by each other. That’s the same as the square root of the two fractions multiplied by each other. So we can take numerator times numerator and denominator times denominator, all underneath the square root sign. See if I don’t. Oh yeah and one-half of two π is π but you saw that coming.

    \psi = \pi \sqrt{ \frac{-m F(a)}{-\left(F'(a) + \frac{3}{a}F(a)\right)\cdot ma} }

    OK, so there’s some minus signs in the numerator and denominator worth getting rid of. There’s an ‘m’ in the numerator and the denominator that we can divide out of both sides. There’s an ‘a’ in the denominator that can multiply into a term that has a denominator inside the denominator and you know this would be easier if I could use little cross-out symbols in WordPress LaTeX. If you’re not following all this, try writing it out by hand and seeing what makes sense to cancel out.

    \psi = \pi \sqrt{ \frac{F(a)}{aF'(a) + 3F(a)} }

    This is getting not too bad. Start from a potential energy ‘U(r)’. Use an angular momentum ‘L’ to figure out the circular orbit radius ‘a’. From the potential energy find the force ‘F(r)’. And then, based on what ‘F’ and the first derivative of ‘F’ happen to be, at the radius ‘a’, we can see whether a closed orbit can be there.
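Here’s a small sketch, with constants I’ve made up, plugging two familiar forces into the ψ formula:

```python
import math

def psi(F, Fprime, a):
    # psi = pi * sqrt( F(a) / (a F'(a) + 3 F(a)) )
    return math.pi * math.sqrt(F(a) / (a * Fprime(a) + 3 * F(a)))

k, a = 1.0, 2.0   # made-up constants; psi turns out not to depend on them

# Inverse-square, gravity-like force: F(r) = -k/r^2, so F'(r) = 2k/r^3
psi_gravity = psi(lambda r: -k / r**2, lambda r: 2 * k / r**3, a)

# Spring-like force: F(r) = -k r, so F'(r) = -k
psi_spring = psi(lambda r: -k * r, lambda r: -k, a)

print(psi_gravity / math.pi, psi_spring / math.pi)  # 1.0 and 0.5
```

So ψ is π for a gravity-like force and π/2 for a spring-like one, whatever the constants: both nice rational multiples of π. Those two are in fact the only power-law forces for which every bounded orbit closes, a result known as Bertrand’s theorem.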

    I’ve gotten to some pretty abstract territory here. Next time I hope to make things simpler again.

     
    • howardat58 6:56 pm on Thursday, 27 July, 2017 Permalink | Reply

      I am getting fascinated with this, at last!
      Am I right in saying that a closed orbit can have many turns before it reaches the starting position? Rational numbers?
      And did I spot the F'(a)/F(a) lurking in there somewhere?


      • Joseph Nebus 6:21 pm on Wednesday, 2 August, 2017 Permalink | Reply

        Thank you, and I’m glad you enjoy.

        I’m not aware of any reason we can’t have many turns before the orbit closed. The only examples I can think of where this happens are Lissajous figures, from masses on springs connected in multiple dimensions, and those aren’t central forces the way I’ve set this frame up. But for peculiar enough powers ‘n’ there are what look like closed and complicated orbits to me.

        And yeah, F'(a)/F(a) is lurking in there. I’m hoping to fit at least one, maybe two, more of this sequence in-between A To Z posts and that should make the F'(a)/F(a) explicit.


  • Joseph Nebus 4:00 pm on Tuesday, 25 July, 2017 Permalink | Reply

    There’s Still Time To Ask For Things For The Mathematics A To Z 


    I’m figuring to begin my Summer 2017 Mathematics A To Z next week. And I’ve got the first several letters pinned down, in part by a healthy number of requests by Gaurish, a lover of mathematics. Partly by some things I wanted to talk about.

    There are many letters not yet spoken for, though. If you’ve got something you’d like me to talk about, please head over to my first appeal and add a comment. The letters crossed out have been committed, but many are free. And the challenges are so much fun.

     
  • Joseph Nebus 4:00 pm on Sunday, 23 July, 2017 Permalink | Reply
    Tags: , , , devilbunnies, ecology, , , Human Cull, , , ,   

    Reading the Comics, July 22, 2017: Counter-mudgeon Edition 


    I’m not sure there is an overarching theme to the past week’s gifts from Comic Strip Master Command. If there is, it’s that I feel like some strips are making cranky points and I want to argue against their cases. I’m not sure what the opposite of a curmudgeon is. So I shall dub myself, pending a better idea, a counter-mudgeon. This won’t last, as it’s not really a good name, but there must be a better one somewhere. We’ll see it, now that I’ve said I don’t know what it is.

    Rabbits at a chalkboard. 'The result is not at all what we expected, Von Thump. According to our calculations, parallel universes may exist, and we may also be able to link them with our own by wormholes that, in strictly mathematical terms, end up in a black top hat.'

    Niklas Eriksson’s Carpe Diem for the 17th of July, 2017. First, if anyone isn’t thinking of that Pixar short then I’m not sure we can really understand each other. Second, ‘von Thump’ is a fine name for a bunny scientist and if it wasn’t ever used in the rich lore of Usenet group alt.devilbunnies I shall be disappointed. Third, Eriksson made an understandable but unfortunate mistake in composing this panel. While both rabbits are wearing glasses, they’re facing away from the viewer. It’s always correct to draw animals wearing eyeglasses, or to photograph them so. But we should get to see them in full eyeglass pelage. You’d think they would teach that in Cartoonist School or something.

    Niklas Eriksson’s Carpe Diem for the 17th features the blackboard full of equations as icon for serious, deep mathematical work. It also features rabbits, although probably not for their role in shaping mathematical thinking. Rabbits and their breeding were used in the simple toy model that gave us Fibonacci numbers, famously. And the population of Arctic hares gives those of us who’ve reached differential equations a great problem to do. The ecosystem in which Arctic hares live can be modelled very simply, as hares and a generic predator. We can model how the populations of both grow with simple equations that nevertheless give us surprises. In a rich, diverse ecosystem we see a lot of population stability: one year where an animal is a little more fecund than usual doesn’t matter much. In the sparse ecosystem of the Arctic, and the one we’re building worldwide, small changes can matter enormously. We can even produce deterministic chaos, in which if we knew exactly how many hares and predators there were, and exactly how many of them would be born and exactly how many would die, we could predict future populations. But the tiny difference between our attainable estimate and the reality, even if it’s as small as one hare too many or too few in our model, makes our predictions worthless. It’s thrilling stuff.
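For the curious, the simplest version of that hare-and-predator model is the Lotka-Volterra system. A rough numerical sketch, with rate constants invented purely for illustration:

```python
# Lotka-Volterra hare-predator model, stepped forward with simple Euler
# time steps. The rate constants are made up for illustration only.
def step(hares, predators, dt=0.001):
    dh = (1.0 * hares - 0.1 * hares * predators) * dt       # hares breed, get eaten
    dp = (0.05 * hares * predators - 0.5 * predators) * dt  # predators eat, starve
    return hares + dh, predators + dp

h, p = 40.0, 9.0
for _ in range(20000):   # twenty simulated time units
    h, p = step(h, p)
print(h, p)   # the populations keep cycling rather than settling down
```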

    Vic Lee’s Pardon My Planet for the 17th reads, to me, as a word problem joke. The talk about how much change Marian should get back from Blake could be any kind of minor hassle in the real world where one friend covers the cost of something for another but expects to be repaid. But counting how many more nickels one person has than another? That’s of interest to kids and to story-problem authors. Who else worries about that count?

    Fortune teller: 'All of your money problems will soon be solved, including how many more nickels Beth has than Jonathan, and how much change Marian should get back from Blake.'

    Vic Lee’s Pardon My Planet for the 17th of July, 2017. I am surprised she had no questions about how many dimes Jonathan must have, although perhaps that will follow obviously from knowing the Beth nickel situation.

    Jef Mallet’s Frazz for the 17th straddles that triple point joining mathematics, philosophy, and economics. It seems sensible, in an age that embraces the idea that everything can be measured, to try to quantify happiness. And it seems sensible, in age that embraces the idea that we can model and extrapolate and act on reasonable projections, to try to see what might improve our happiness. This is so even if it’s as simple as identifying what we should or shouldn’t be happy about. Caulfield is circling around the discovery of utilitarianism. It’s a philosophy that (for my money) is better-suited to problems like how ought the city arrange its bus lines than matters too integral to life. But it, too, can bring comfort.

    Corey Pandolph’s Barkeater Lake rerun for the 20th features some mischievous arithmetic. I’m amused. It turns out that people do have enough of a number sense that very few people would let “17 plus 79 is 4,178” pass without comment. People might not be able to say exactly what it is, on a glance. If you answered that 17 plus 79 was 95, or 102, most people would need to stop and think about whether either was right. But they’re likely to know without thinking that it can’t be, say, 56 or 206. This, I understand, is so even for people who aren’t good at arithmetic. There is something amazing that we can do this sort of arithmetic so well, considering that there’s little obvious in the natural world that would need the human animal to add 17 and 79. There are things about how animals understand numbers which we don’t know yet.

    Alex Hallatt’s Human Cull for the 21st seems almost a direct response to the Barkeater Lake rerun. Somehow “making change” is treated as the highest calling of mathematics. I suppose it has a fair claim to the title of mathematics most often done. Still, I can’t get behind Hallatt’s crankiness here, and not just because Human Cull is one of the most needlessly curmudgeonly strips I regularly read. For one, store clerks don’t need to do mathematics. The cash registers do all the mathematics that clerks might need to do, and do it very well. The machines are cheap, fast, and reliable. Not using them is an affectation. I’ll grant it gives some charm to antiques shops and boutiques where they write your receipt out by hand, but that’s for atmosphere, not reliability. And it is useful for the clerk to have a rough idea what the change should be. But that’s just to avoid the risk of mistakes getting through. No matter how mathematically skilled the clerk is, there’ll sometimes be a price entered wrong, or the customer’s money counted wrong, or a one-dollar bill put in the five-dollar bill’s tray, or a clerk picking up two nickels when three would have been more appropriate. We should have empathy for the people doing this work.

     
    • goldenoj 8:05 pm on Sunday, 23 July, 2017 Permalink | Reply

      Human Cull may be the most disturbing idea for a comic ever.


      • Joseph Nebus 3:37 am on Tuesday, 25 July, 2017 Permalink | Reply

        It’s a strip that has really, deeply bothered me. I know the cartoonist is talking about “culling” in a comically great overreaction to people who might be a bit annoying in your daily life. But that started out feeling terribly nasty when we’re talking about, like, people who put their jackets over empty seats next to them at the movie theater. And given the turn the world’s taken for the nasty the past couple years it hasn’t improved the strip’s tone any.

        It’s not like “here’s an annoying thing people do” is an inherently bad idea for a comic strip. It drove They’ll Do It Every Time for much of its run, and it underlay comic strips like The Dinette Set or (in its implications) Pluggers. But I think there needs to be a bit more clearly expressed empathy and grace for the approach to really work.


    • The Chaos Realm 3:42 pm on Monday, 24 July, 2017 Permalink | Reply

      Now they have a machine that counts out the tills at shift changes or for money drops–I just saw it in action at a local grocery store. The “good old days” (aka massive headache) of counting it out by hand or trying to make the tills balance out are over, it seems. Whew! LOL


      • Joseph Nebus 3:47 am on Tuesday, 25 July, 2017 Permalink | Reply

        Those have turned up, where I buy stuff, sporadically over the last couple decades. I haven’t worked out what the rhyme or reason for a particular shop having one is, though. It’s not all the shops in a chain and it doesn’t seem to be particularly tied to how urban or rural the place is or how new the shop is. Maybe it’s tied to how often the management thinks cashiers are pocketing change and that’s too idiosyncratic for a mere customer to know.


  • Joseph Nebus 4:00 pm on Thursday, 20 July, 2017 Permalink | Reply

    Why Stuff Can Orbit, Part 12: How Fast Is An Orbit? 


    Previously:

    And the supplemental reading:


    On to the next piece of looking for stable, closed orbits of a central force. We start from a circular orbit of something around the sun or the mounting point or whatever. The center. I would have saved myself so much foggy writing if I had just decided to make this a sun-and-planet problem. But I had wanted to write the general problem. In this the force attracting the something towards the center has a strength that’s some constant times the distance to the center raised to a power. This is easy to describe in symbols. It’s cluttered to describe in words. This is why symbols are so nice.

    The perturbed orbit, the one I want to see close up, looks like an oscillation around that circle. The fact it is a perturbation, a small nudge away from the equilibrium, means how big the perturbation is will oscillate in time. How far the planet (whatever) is from the center will make a sine wave in time. Whether it closes depends on what it does in space.

    Part of what it does in space is easy. I just said what the distance from the planet to the center does. But to say where the planet is we need to know how far it is from the center and what angle it makes with respect to some reference direction. That’s a little harder. We also need to know where it is in the third dimension, but that’s so easy. An orbit like this is always in one plane, so we picked that plane to be that of our paper or whiteboard or tablet or whatever we’re using to sketch this out. That’s so easy to answer we don’t even count it as solved.

    The angle, though. Here, I mean the angle made by looking at the planet, the center, and some reference direction. This angle can be any real number, although a lot of those angles are going to point to the same direction in space. We’re coming at this from a mathematical view, or a physics view. Or a mathematical physics view. It means we measure this angle as radians instead of degrees. That is, a right angle is \frac{\pi}{2} , not 90 degrees, thank you. A full circle is 2\pi and not 360 degrees. We aren’t doing this to be difficult. There are good reasons to use radians. They make the mathematics simpler. What else could matter?

    We use \theta as the symbol for this angle. It’s a popular choice. \theta is going to change in time. We’ll want to know how fast it changes over time. This concept we call the angular velocity. For this there are a bunch of different possible notations. The one that I snuck in here two essays ago was ω.

    We came at the physics of this orbiting planet from a weird direction. Well, I came at it, and you followed along, and thank you for that. But I never did something like set the planet at a particular distance from the center of the universe and give it a set speed so it would have a circular enough orbit. I set up that we should have some potential energy. That energy implies a central force. It attracts things to the center of the universe. And that there should be some angular momentum that the planet has in its movement. And from that, that there would be some circular orbit. That circular orbit is one with just the right radius and just the right change in angle over time.

    From the potential energy and the angular momentum we can work out the radius of the circular orbit. Suppose your potential energy obeys a rule like V(r) = Cr^n for some number ‘C’ and some power, another number, ‘n’. Suppose your planet has the mass ‘m’. Then you’ll get a circular orbit when the planet’s a distance ‘a’ from the center, if a^{n + 2} = \frac{L^2}{n C m} . And it turns out we can also work out the angular velocity of this circular orbit. It’s all implicit in the amount of angular momentum that the planet has. This is part of why a mathematical physicist looks for concepts like angular momentum. They’re easy to work with, and they yield all sorts of interesting information, given the chance.
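A quick worked example of that circular-orbit formula, with made-up numbers. Taking n = -1 and C = -k (with k = 1) gives the gravity-like potential V(r) = -k/r, and the formula collapses to the familiar a = L²/(km):

```python
# Circular-orbit radius from a^(n+2) = L^2 / (n C m).
# Illustration numbers only; n = -1, C = -k gives the gravity-like V(r) = -k/r.
n, C = -1, -1.0      # so n*C = k = 1.0
m, L = 1.0, 2.0

a = (L**2 / (n * C * m)) ** (1.0 / (n + 2))
print(a)   # L^2/(k m) = 4.0
```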

    I first introduced angular momentum as this number that was how much of something that our something had. It’s got physical meaning, though, reflecting how much … uh … our something would like to keep rotating around the way it has. And this can be written as a formula. The angular momentum ‘L’ is equal to the moment of inertia ‘I’ times the angular velocity ‘ω’. ‘L’ and ‘ω’ are really vectors, and ‘I’ is really a tensor. But we don’t have to worry about this because this kind of problem is easy. We can pretend these are all real numbers and nothing more.

    The moment of inertia depends on how the mass of the thing rotating is distributed in space. And it depends on how far the mass is from whatever axis it’s rotating around. For real bodies this can be challenging to work out. It’s almost always a multidimensional integral, haunting students in Calculus III. For a mass in a central force problem, though, it’s easy once again. Please tell me you’re not surprised. If it weren’t easy I’d have some more supplemental reading pieces here first.

    For a planet of mass ‘m’ that’s a distance ‘r’ from the axis of rotation, the moment of inertia ‘I’ is equal to ‘mr^2’. I’m fibbing. Slightly. This is for a point mass, that is, something that doesn’t occupy volume. We always look at point masses in this sort of physics. At least when we start. It’s easier, for one thing. And it’s not far off. The Earth’s orbit has a radius just under 150,000,000 kilometers. The difference between the Earth’s actual radius of just over 6,000 kilometers and a point-mass radius of 0 kilometers is a minor correction.

    So since we know L = I\omega , and we know I = mr^2 , we have L = mr^2\omega and from this:

    \omega = \frac{L}{mr^2}

    We know that ‘r’ changes in time. It oscillates from a maximum to a minimum value like any decent sine wave. So ‘r^2’ is going to oscillate too, like a … sine-squared wave. And then dividing the constant ‘L’ by something oscillating like a sine-squared wave … this implies ω changes in time. So it does. In a possibly complicated and annoying way. So it does. I don’t want to deal with that. So I don’t.

    Instead, I am going to summon the great powers of approximation. This perturbed orbit is a tiny change from a circular orbit with radius ‘a’. Tiny. The difference between the actual radius ‘r’ and the circular-orbit radius ‘a’ should be small enough we don’t notice it at first glance. So therefore:

    \omega = \frac{L}{ma^2}

    And this is going to be close enough. You may protest: what if it isn’t? Why can’t the perturbation be so big that ‘a’ is a lousy approximation to ‘r’? To this I say: if the perturbation is that big it’s not a perturbation anymore. It might be an interesting problem. But it’s a different problem from what I’m doing here. It needs different techniques. The Earth’s orbit is different from Halley’s Comet’s orbit in ways we can’t ignore. I hope this answers your complaint. Maybe it doesn’t. I’m on your side there. A lot of mathematical physics, and of analysis, is about making approximations. We need to find perturbations big enough to give interesting results. But not so big they need harder mathematics than you can do. It’s a strange art. I’m not sure I know how to describe how to do it. What I know I’ve learned from doing a lot of problems. You start to learn what kinds of approaches usually pan out.
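To put a rough number on “close enough” — my example, with made-up constants, not part of the original setup — a one-percent perturbation in the radius changes the instantaneous ω = L/(mr^2) by only about two percent:

```python
# Made-up numbers: nudge the radius by one percent and compare the
# instantaneous omega = L/(m r^2) against the approximation L/(m a^2).
m, a, L = 1.0, 2.0, 1.5
omega_approx = L / (m * a**2)

r = 1.01 * a                   # a one-percent perturbation
omega_actual = L / (m * r**2)

print(abs(omega_actual - omega_approx) / omega_approx)   # about 0.02
```

Shrink the perturbation and the error shrinks proportionally, which is exactly the behavior the error-margin argument relies on.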

    But what we’re relying on is the same trick we use in analysis. We suppose there is some error margin in the orbit’s radius and angle that’s tolerable. Then if the perturbation means we’d fall outside that error margin, we just look instead at a smaller perturbation. If there is no perturbation small enough to stay within our error margin then the orbit isn’t stable. And we already know it is. Here, we’re looking for closed orbits. People could in good faith argue about whether some particular observed orbit is a small enough perturbation from the circular equilibrium. But they can’t argue about whether there exist some small enough perturbations.

    Let me suppose that you’re all right with my answer about big perturbations. There’s at least one more good objection to have here. It’s this: where is the central force? The mass of the planet (or whatever) is there. The angular momentum is there. The equilibrium orbit is there. But where’s the force? Where’s the potential energy we started with? Shouldn’t that appear somewhere in the description of how fast this planet moves around the center?

    It should. And it is there, in an implicit form. We get the radius of the circular, equilibrium orbit, ‘a’, from knowing the potential energy. But we’ll do well to tease it out more explicitly. I hope to get there next time.

     
  • Joseph Nebus 4:00 pm on Sunday, 16 July, 2017 Permalink | Reply
    Tags: , Beetle Bailey, , ,   

    Reading the Comics, July 15, 2017: Dawn Of Mathematics Jokes 


    So I try to keep up with nearly all the comic strips run on Comics Kingdom and on GoComics. This includes some vintage strips: take some ancient comic like Peanuts or Luann and rerun it, day at a time, from the beginning. This is always enlightening. It’s always interesting to see a comic in that first flush of creative energy, before the characters have quite settled in and before the cartoonist has found stock jokes that work so well they don’t even have to be jokes anymore. One of the most startling cases for me has been Johnny Hart’s B.C. which, in its Back To B.C. incarnation, has been pretty well knocking it out of the park.

    Not this week, I’m sad to admit. This week it’s been doing a bunch of mathematics jokes, which is what gives me my permission to talk about it here. The jokes have been, eh, the usual, given the setup. A bit fresher, I suppose, for the characters in the strip having had fewer of their edges worn down by time. Probably there’ll be at least one that gets a bit of a grin.

    Back To B.C. for the 11th sets the theme going. On the 12th it gets into word problems. And then for the 13th of July it turns violent and for my money funny.

    Mark Tatulli’s Heart of the City has a number appear on the 12th. That’s been about as much mathematical content as Heart’s experience at Math Camp has taken. The story’s been more about Dana, her camp friend, who’s presented as good enough at mathematics to be bored with it, and the attempt to sneak out to the nearby amusement park. What has me distracted is wondering what amusement park this could be, given that Heart’s from Philadelphia and the camp’s within bus-trip range and in the forest. I can’t rule out that it might be Knoebels Amusement Park, in Elysburg, Pennsylvania, in which case Heart and Dana are absolutely right to sneak out of camp because it is this amazing place.

TV Chef: 'Mix in one egg.' Cookie: 'See ... for us that would be 200 eggs.' TV Chef: 'Add a cup of flour.' Cookie: '200 cups of flour.' TV Chef: 'Now bake for two hours.' Cookie to Sarge: 'It'll be ready next week.'

    Mort Walker’s Beetle Bailey Vintage for the 21st of December, 1960 and rerun the 14th of July, 2017. Wow, I remember when they’d put recipes like this on the not-actual-news segment of the 5:00 news or so, and how much it irritated me that there wasn’t any practical way to write down the whole thing and even writing down the address to mail in for the recipe seemed like too much, what with how long it took on average to find a sheet of paper and a writing tool. In hindsight, I don’t know why this was so hard for me.

Mort Walker’s Beetle Bailey Vintage for the 21st of December, 1960 was rerun the 14th. I can rope this into mathematics. It’s about Cookie trying to scale up a recipe to fit Camp Swampy’s needs. Increasing the ingredient count is easy, or at least it is if your units scale nicely. I wouldn’t want to multiply a third of a teaspoon by 200 without a good stretching beforehand and maybe a rubdown afterwards. But the time needed to cook a multiplied recipe, that gets mysterious. As I understand it — the chemistry of cooking is largely a mystery to me — the center of the trouble is that to cook a thing, heat has to reach throughout the interior. But heat can only really be applied from the surfaces of the cooked thing. (Yes, theoretically, a microwave oven could bake through the entire volume of something. But this would require someone inventing a way to bake using a microwave.) So we must balance the heat that can be applied over what surface to the interior volume and any reasonable time to cook the thing. Won’t deny that at some point it seems easier to just make a smaller meal.
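If you want a back-of-the-envelope number for Cookie's problem, the usual physicist's heuristic is that heat reaches the interior by diffusion, so cooking time scales roughly with the square of the dish's linear size. That gives a rough sketch like this (the function name and the numbers are mine, not anything from the strip, and real cooking chemistry is, as I say, messier):

```python
def scaled_cook_time(base_minutes, volume_factor):
    """Rough diffusion-based estimate of cooking time after scaling
    a recipe's volume by volume_factor.

    Multiplying volume by N multiplies linear size by N**(1/3),
    and diffusion time goes as (linear size)**2, so time grows
    by about N**(2/3).
    """
    linear_factor = volume_factor ** (1.0 / 3.0)
    return base_minutes * linear_factor ** 2

# Cookie's two-hour bake, multiplied 200-fold:
minutes = scaled_cook_time(120, 200)
print(minutes / (60 * 24))  # on the order of three days
```

So "ready next week" is only a mild exaggeration, assuming Cookie bakes the whole thing as one enormous loaf rather than 200 normal ones.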

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 14th goes to the old “inference testing” well again. This comes up from testing whether something strange is going on. Measure something in a sample. Is the result appreciably different from what would be a plausible result if nothing interesting is going on? The null hypothesis is the supposition that there isn’t anything interesting going on: the measurement’s in the range of what you’d expect given that the world is big and complicated. I’m not sure what the physicist’s exact experiment would have been. I suppose it would be something like “you lose about as much heat through your head as you do any region of skin of about the same surface area”. So, yeah, freezing would be expected, considering.
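The inference-testing logic above can be sketched in a few lines. This is a minimal z-test, with made-up numbers standing in for whatever the physicist's head-heat-loss measurement would actually be; the function and its arguments are my own illustration, not anything from the comic:

```python
import math


def two_sided_p(sample_mean, null_mean, sd, n):
    """Two-sided p-value for a z-test of the null hypothesis that
    the true mean equals null_mean, given a sample of size n with
    known standard deviation sd."""
    z = (sample_mean - null_mean) / (sd / math.sqrt(n))
    # Two-sided tail probability of the standard normal,
    # via the complementary error function.
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical measurement: is the sample appreciably different
# from what we'd expect if nothing interesting is going on?
p = two_sided_p(sample_mean=10.8, null_mean=10.0, sd=2.0, n=25)
print(p < 0.05)  # True: reject the null at the 5 percent level
```

If the p-value is small, the measurement would be surprising under the null hypothesis; if not, the world being big and complicated explains it fine.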

    Percy Crosby’s Skippy for the 17th of May, 1930, and rerun the 15th, maybe doesn’t belong here. It’s just about counting. Never mind. I smiled at it, and I’m a fan of the strip. Give it a try; it’s that rare pre-Peanuts comic that still feels modern.

    And, before I forget: Have any mathematics words or terms you’d like to have explained? I’m doing a Summer 2017 A To Z and taking requests! Please offer them over there, for convenience. I mean mine.

     