
  • Joseph Nebus 4:00 pm on Thursday, 20 July, 2017 Permalink | Reply

    Why Stuff Can Orbit, Part 12: How Fast Is An Orbit? 



On to the next piece of looking for stable, closed orbits of a central force. We start from a circular orbit of something around the sun or the mounting point or whatever. The center. I would have saved myself so much foggy writing if I had just decided to make this a sun-and-planet problem. But I had wanted to write the general problem. In this problem, the force attracting the something towards the center has a strength that’s some constant times the distance to the center raised to a power. This is easy to describe in symbols. It’s cluttered to describe in words. This is why symbols are so nice.

    The perturbed orbit, the one I want to see close up, looks like an oscillation around that circle. The fact it is a perturbation, a small nudge away from the equilibrium, means how big the perturbation is will oscillate in time. How far the planet (whatever) is from the center will make a sine wave in time. Whether it closes depends on what it does in space.

Part of what it does in space is easy. I just said what the distance from the planet to the center does. But to say where the planet is we need to know both how far it is from the center and what angle it makes with respect to some reference direction. That’s a little harder. We would also need to know where it is in the third dimension, but that’s easy. An orbit like this is always in one plane, so we pick that plane to be that of our paper or whiteboard or tablet or whatever we’re using to sketch this out. That’s so easy to answer we don’t even count it as a problem.

    The angle, though. Here, I mean the angle made by looking at the planet, the center, and some reference direction. This angle can be any real number, although a lot of those angles are going to point to the same direction in space. We’re coming at this from a mathematical view, or a physics view. Or a mathematical physics view. It means we measure this angle as radians instead of degrees. That is, a right angle is \frac{\pi}{2} , not 90 degrees, thank you. A full circle is 2\pi and not 360 degrees. We aren’t doing this to be difficult. There are good reasons to use radians. They make the mathematics simpler. What else could matter?
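And the simplification shows up in code too: the standard trigonometric functions all assume radians, so working in degrees means converting at every step. A tiny Python illustration:

```python
import math

# Mathematics (and standard libraries) measure angles in radians.
right_angle = math.radians(90)    # degrees -> radians
full_circle = math.radians(360)

assert math.isclose(right_angle, math.pi / 2)
assert math.isclose(full_circle, 2 * math.pi)
# The trig functions expect radians: the sine of a right angle is 1.
assert math.isclose(math.sin(right_angle), 1.0)
```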

    We use \theta as the symbol for this angle. It’s a popular choice. \theta is going to change in time. We’ll want to know how fast it changes over time. This concept we call the angular velocity. For this there are a bunch of different possible notations. The one that I snuck in here two essays ago was ω.

    We came at the physics of this orbiting planet from a weird direction. Well, I came at it, and you followed along, and thank you for that. But I never did something like set the planet at a particular distance from the center of the universe and give it a set speed so it would have a circular enough orbit. I set up that we should have some potential energy. That energy implies a central force. It attracts things to the center of the universe. And that there should be some angular momentum that the planet has in its movement. And from that, that there would be some circular orbit. That circular orbit is one with just the right radius and just the right change in angle over time.

    From the potential energy and the angular momentum we can work out the radius of the circular orbit. Suppose your potential energy obeys a rule like V(r) = Cr^n for some number ‘C’ and some power, another number, ‘n’. Suppose your planet has the mass ‘m’. Then you’ll get a circular orbit when the planet’s a distance ‘a’ from the center, if a^{n + 2} = \frac{L^2}{n C m} . And it turns out we can also work out the angular velocity of this circular orbit. It’s all implicit in the amount of angular momentum that the planet has. This is part of why a mathematical physicist looks for concepts like angular momentum. They’re easy to work with, and they yield all sorts of interesting information, given the chance.
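The relationship is easy to check numerically. Here is a sketch in Python, with made-up illustrative values rather than anything physical: compute ‘a’ from the formula, then confirm the central force exactly supplies the centripetal force there.

```python
import math

# Check that a^(n+2) = L^2 / (n C m) gives a circular orbit.
# All values here are arbitrary illustrative choices.
m, C, n, L = 2.0, 3.0, 2.0, 5.0   # mass, potential constant, power, angular momentum

a = (L**2 / (n * C * m)) ** (1.0 / (n + 2))   # circular-orbit radius

# For V(r) = C r^n the inward force is n C r^(n-1); a circular orbit
# needs that to equal the centripetal force m * omega^2 * a.
omega = L / (m * a**2)
assert math.isclose(m * omega**2 * a, n * C * a**(n - 1))
```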

    I first introduced angular momentum as this number that was how much of something that our something had. It’s got physical meaning, though, reflecting how much … uh … our something would like to keep rotating around the way it has. And this can be written as a formula. The angular momentum ‘L’ is equal to the moment of inertia ‘I’ times the angular velocity ‘ω’. ‘L’ and ‘ω’ are really vectors, and ‘I’ is really a tensor. But we don’t have to worry about this because this kind of problem is easy. We can pretend these are all real numbers and nothing more.

    The moment of inertia depends on how the mass of the thing rotating is distributed in space. And it depends on how far the mass is from whatever axis it’s rotating around. For real bodies this can be challenging to work out. It’s almost always a multidimensional integral, haunting students in Calculus III. For a mass in a central force problem, though, it’s easy once again. Please tell me you’re not surprised. If it weren’t easy I’d have some more supplemental reading pieces here first.

For a planet of mass ‘m’ that’s a distance ‘r’ from the axis of rotation, the moment of inertia ‘I’ is equal to ‘mr^2‘. I’m fibbing. Slightly. This is for a point mass, that is, something that doesn’t occupy volume. We always look at point masses in this sort of physics. At least when we start. It’s easier, for one thing. And it’s not far off. The Earth’s orbit has a radius just under 150,000,000 kilometers. The difference between the Earth’s actual radius of just over 6,000 kilometers and a point-mass radius of 0 kilometers is a minor correction.
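A quick Python sketch of just how minor, using the round figures from the paragraph. Treating the Earth as a uniform sphere is my own simplifying assumption; with it, the moment of inertia per unit mass about the sun is a^2 plus a (2/5)R^2 correction, rather than the point-mass a^2 alone.

```python
# Size of the point-mass approximation error for the Earth's orbit.
# Round figures from the text; uniform-sphere Earth is my assumption.
a_orbit = 1.5e8     # orbital radius, km
r_earth = 6.371e3   # Earth's radius, km

i_point  = a_orbit**2                      # point mass: I/m = r^2
i_sphere = a_orbit**2 + 0.4 * r_earth**2   # uniform sphere, parallel-axis theorem

relative_error = (i_sphere - i_point) / i_sphere
assert relative_error < 1e-9   # roughly 7 parts in 10 billion
```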

    So since we know L = I\omega , and we know I = mr^2 , we have L = mr^2\omega and from this:

    \omega = \frac{L}{mr^2}

We know that ‘r’ changes in time. It oscillates from a maximum to a minimum value like any decent sine wave. So ‘r^2‘ is going to oscillate too, like a … sine-squared wave. And then dividing the constant ‘L’ by something oscillating like a sine-squared wave … this implies ω changes in time. So it does. In a possibly complicated and annoying way. I don’t want to deal with that. So I don’t.

    Instead, I am going to summon the great powers of approximation. This perturbed orbit is a tiny change from a circular orbit with radius ‘a’. Tiny. The difference between the actual radius ‘r’ and the circular-orbit radius ‘a’ should be small enough we don’t notice it at first glance. So therefore:

    \omega = \frac{L}{ma^2}

    And this is going to be close enough. You may protest: what if it isn’t? Why can’t the perturbation be so big that ‘a’ is a lousy approximation to ‘r’? To this I say: if the perturbation is that big it’s not a perturbation anymore. It might be an interesting problem. But it’s a different problem from what I’m doing here. It needs different techniques. The Earth’s orbit is different from Halley’s Comet’s orbit in ways we can’t ignore. I hope this answers your complaint. Maybe it doesn’t. I’m on your side there. A lot of mathematical physics, and of analysis, is about making approximations. We need to find perturbations big enough to give interesting results. But not so big they need harder mathematics than you can do. It’s a strange art. I’m not sure I know how to describe how to do it. What I know I’ve learned from doing a lot of problems. You start to learn what kinds of approaches usually pan out.
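To put a number on “close enough”: for a radius r = a + ε, the relative error of the approximation is about 2ε/a, so it shrinks right along with the perturbation. A Python sketch with made-up values:

```python
# How wrong is omega = L/(m a^2) when the true radius is a + eps?
m, L, a = 1.0, 2.0, 1.0   # arbitrary illustrative values
eps = 0.01                # a small perturbation of the radius

omega_exact  = L / (m * (a + eps)**2)   # omega at the perturbed radius
omega_approx = L / (m * a**2)           # the circular-orbit approximation

rel_err = abs(omega_exact - omega_approx) / omega_exact
assert rel_err < 3 * eps / a   # first-order small: about 2*eps/a here
```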

    But what we’re relying on is the same trick we use in analysis. We suppose there is some error margin in the orbit’s radius and angle that’s tolerable. Then if the perturbation means we’d fall outside that error margin, we just look instead at a smaller perturbation. If there is no perturbation small enough to stay within our error margin then the orbit isn’t stable. And we already know it is. Here, we’re looking for closed orbits. People could in good faith argue about whether some particular observed orbit is a small enough perturbation from the circular equilibrium. But they can’t argue about whether there exist some small enough perturbations.

    Let me suppose that you’re all right with my answer about big perturbations. There’s at least one more good objection to have here. It’s this: where is the central force? The mass of the planet (or whatever) is there. The angular momentum is there. The equilibrium orbit is there. But where’s the force? Where’s the potential energy we started with? Shouldn’t that appear somewhere in the description of how fast this planet moves around the center?

    It should. And it is there, in an implicit form. We get the radius of the circular, equilibrium orbit, ‘a’, from knowing the potential energy. But we’ll do well to tease it out more explicitly. I hope to get there next time.

     
  • Joseph Nebus 4:00 pm on Sunday, 16 July, 2017 Permalink | Reply
Tags: Beetle Bailey

    Reading the Comics, July 15, 2017: Dawn Of Mathematics Jokes 


    So I try to keep up with nearly all the comic strips run on Comics Kingdom and on GoComics. This includes some vintage strips: take some ancient comic like Peanuts or Luann and rerun it, day at a time, from the beginning. This is always enlightening. It’s always interesting to see a comic in that first flush of creative energy, before the characters have quite settled in and before the cartoonist has found stock jokes that work so well they don’t even have to be jokes anymore. One of the most startling cases for me has been Johnny Hart’s B.C. which, in its Back To B.C. incarnation, has been pretty well knocking it out of the park.

    Not this week, I’m sad to admit. This week it’s been doing a bunch of mathematics jokes, which is what gives me my permission to talk about it here. The jokes have been, eh, the usual, given the setup. A bit fresher, I suppose, for the characters in the strip having had fewer of their edges worn down by time. Probably there’ll be at least one that gets a bit of a grin.

    Back To B.C. for the 11th sets the theme going. On the 12th it gets into word problems. And then for the 13th of July it turns violent and for my money funny.

    Mark Tatulli’s Heart of the City has a number appear on the 12th. That’s been about as much mathematical content as Heart’s experience at Math Camp has taken. The story’s been more about Dana, her camp friend, who’s presented as good enough at mathematics to be bored with it, and the attempt to sneak out to the nearby amusement park. What has me distracted is wondering what amusement park this could be, given that Heart’s from Philadelphia and the camp’s within bus-trip range and in the forest. I can’t rule out that it might be Knoebels Amusement Park, in Elysburg, Pennsylvania, in which case Heart and Dana are absolutely right to sneak out of camp because it is this amazing place.

TV Chef: 'Mix in one egg.' Cookie: 'See ... for us that would be 200 eggs.' TV Chef: 'Add a cup of flour.' Cookie: '200 cups of flour.' TV Chef: 'Now bake for two hours.' Cookie to Sarge: 'It'll be ready next week.'

Mort Walker’s Beetle Bailey Vintage for the 21st of December, 1960 and rerun the 14th of July, 2017. Wow, I remember when they’d put recipes like this on the not-actual-news segment of the 5:00 news or so. It irritated me that there wasn’t any practical way to write down the whole thing; even writing down the address to mail in for the recipe seemed like too much, what with how long it took on average to find a sheet of paper and a writing tool. In hindsight, I don’t know why this was so hard for me.

Mort Walker’s Beetle Bailey Vintage for the 21st of December, 1960 was rerun the 14th. I can rope this into mathematics. It’s about Cookie trying to scale up a recipe to fit Camp Swampy’s needs. Increasing the ingredient count is easy, or at least it is if your units scale nicely. I wouldn’t want to multiply a third of a teaspoon by 200 without a good stretching beforehand and maybe a rubdown afterwards. But the time needed to cook a multiplied recipe, that gets mysterious. As I understand it — the chemistry of cooking is largely a mystery to me — the center of the trouble is that to cook a thing, heat has to reach throughout the interior. But heat can only really be applied from the surfaces of the cooked thing. (Yes, theoretically, a microwave oven could bake through the entire volume of something. But this would require someone inventing a way to bake using a microwave.) So we must balance the heat the surface can take in against the interior volume that heat has to reach, and against any reasonable time to cook the thing. Won’t deny that at some point it seems easier to just make a smaller meal.
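A crude geometric sketch of that balance, ignoring all the actual chemistry: scale the dish’s volume by 200 while keeping its shape, and each unit of volume gets noticeably less surface to receive heat through.

```python
# Surface-to-volume scaling for a 200x recipe, shape held fixed.
# Pure geometry; the real chemistry of cooking is, as the text says, murkier.
scale = 200.0
length_factor  = scale ** (1 / 3)     # each linear dimension grows ~5.85x
surface_factor = length_factor ** 2   # surface area grows ~34.2x

heat_per_volume = surface_factor / scale
# ~0.17: each bit of interior gets about a sixth the surface it had,
# which is why the cooking time balloons rather than staying at two hours.
assert heat_per_volume < 0.2
```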

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 14th goes to the old “inference testing” well again. This comes up from testing whether something strange is going on. Measure something in a sample. Is the result appreciably different from what would be a plausible result if nothing interesting is going on? The null hypothesis is the supposition that there isn’t anything interesting going on: the measurement’s in the range of what you’d expect given that the world is big and complicated. I’m not sure what the physicist’s exact experiment would have been. I suppose it would be something like “you lose about as much heat through your head as you do any region of skin of about the same surface area”. So, yeah, freezing would be expected, considering.
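The logic of that kind of test can be sketched in a few lines. Here is a minimal, hypothetical Python example, nothing to do with the strip’s actual experiment: ask whether 60 heads in 100 flips is surprising when the null hypothesis is a fair coin.

```python
from math import comb

# Two-sided binomial test against the null hypothesis "the coin is fair".
def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n flips of a fair coin."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, observed = 100, 60
# Probability, under the null, of a result at least as far from 50 as observed.
p_value = sum(binom_pmf(k, n) for k in range(n + 1)
              if abs(k - 50) >= abs(observed - 50))
# p_value comes out near 0.057: close, but not quite enough to reject
# the null at the usual 5 percent threshold.
```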

    Percy Crosby’s Skippy for the 17th of May, 1930, and rerun the 15th, maybe doesn’t belong here. It’s just about counting. Never mind. I smiled at it, and I’m a fan of the strip. Give it a try; it’s that rare pre-Peanuts comic that still feels modern.

    And, before I forget: Have any mathematics words or terms you’d like to have explained? I’m doing a Summer 2017 A To Z and taking requests! Please offer them over there, for convenience. I mean mine.

     
  • Joseph Nebus 4:00 pm on Thursday, 13 July, 2017 Permalink | Reply

    What Would You Like In The Summer 2017 Mathematics A To Z? 


    I would like to now announce exactly what everyone with the ability to draw conclusions expected after I listed the things covered in previous Mathematics A To Z summaries. I’m hoping to write essays about another 26 topics, one for each of the major letters of the alphabet. And, as ever, I’d like your requests. It’s great fun to be tossed out a subject and either know enough about it, or learn enough about it in a hurry, to write a couple hundred words about it.

    So that’s what this is for. Please, in comments, list something you’d like to see explained.

For the most part, I’ll do a letter on a first-come, first-served basis. I’ll try to keep this page updated so that people know which letters have already been taken. If I can’t cover a request under its original letter, I might reword or rephrase it, provided I can think of a legitimate way to cover it under another. I’m open to taking another try at something I’ve already defined in the three A To Z runs I’ve previously done, especially since many of the terms have different meanings in different contexts.

I’m always in need of requests for letters such as X and Y. But you knew that if you looked at how sparse MathWorld’s list of words for those letters is.

    Letters To Request:

    • A
    • B
    • C
    • D
    • E
    • F
    • G
    • H
    • I
    • J
    • K
    • L
    • M
    • N
    • O
    • P
    • Q
    • R
    • S
    • T
    • U
    • V
    • W
    • X
    • Y
    • Z

    I’m flexible about what I mean by “a word” or “a term” in requesting something, especially if it gives me a good subject to write about. And if you think of a clever way to get a particular word covered under a letter that’s really inappropriate, then, good. I like cleverness. I’m not sure what makes for the best kinds of glossary terms. Sometimes a broad topic is good because I can talk about how an idea expresses itself across multiple fields. Sometimes a narrow topic is good because I can dig in to a particular way of thinking. I’m just hoping I’m not going to commit myself to three 2500-word essays a week. Those are fun, but they’re exhausting, as the time between Why Stuff Can Orbit essays may have hinted.

And finally, I’d like to thank Thomas K Dye for creating banner art for this sequence. He’s the creator of the long-running web comic Newshounds. He’s also got the book version, Newshounds: The Complete Story, freshly published, a Patreon to support his comics habit, and plans to resume his Infinity Refugees spinoff strip shortly.

     
    • gaurish 2:12 pm on Monday, 17 July, 2017 Permalink | Reply

      A – Arithmetic
      C – Cohomology
      D – Diophantine Equations
      E – Elliptic curves
      F – Functor
      G – Gaussian primes/integers/distribution
      H – Height function (elliptic curves)
      I – integration
      L – L-function
      P – Prime number
      Z – zeta function

      I will tell more later. The banner art is very nice.


    • The Chaos Realm 4:53 pm on Monday, 17 July, 2017 Permalink | Reply

      I used the Riemann Tensor definition/explanation to front one of my sub-chapter pages in my poetry book (courtesy the guidance of a teacher I know). :-)


      • Joseph Nebus 5:35 pm on Tuesday, 18 July, 2017 Permalink | Reply

        Ah, that’s wonderful! There is this beauty in the way mathematical concepts are expressed — not the structure of the ideas, but the way we write them out, especially when we get a good idea of what we want to express. I’d like if more people could appreciate that without worrying that they don’t know, say, what a Ricci Flow would be.


        • The Chaos Realm 5:51 pm on Tuesday, 18 July, 2017 Permalink | Reply

          Thanks! I know there’s a really poetic beauty about astrophysics that I have loved for years. I may not understand all the equations, but I do feel I “get” physics in a way. looks up Ricci Flow. It’s definitely one of my major forms of inspirations…one of my most used muses!


  • Joseph Nebus 4:00 pm on Tuesday, 11 July, 2017 Permalink | Reply

    Why Stuff Can Orbit, Part 11: In Search Of Closure 



    I’m not ready to finish the series off yet. But I am getting closer to wrapping up perturbed orbits. So I want to say something about what I’m looking for.

    In some ways I’m done already. I showed how to set up a central force problem, where some mass gets pulled towards the center of the universe. It can be pulled by a force that follows any rule you like. The rule has to follow some rules. The strength of the pull changes with how far the mass is from the center. It can’t depend on what angle the mass makes with respect to some reference meridian. Once we know how much angular momentum the mass has we can find whether it can have a circular orbit. And we can work out whether that orbit is stable. If the orbit is stable, then for a small nudge, the mass wobbles around that equilibrium circle. It spends some time closer to the center of the universe and some time farther away from it.

    I want something a little more, else I can’t carry on this series. I mean, we can make central force problems with more things in them. What we have now is a two-body problem. A three-body problem is more interesting. It’s pretty near impossible to give exact, generally true answers about. We can save things by only looking at very specific cases. Fortunately one is a sun, planet, and moon, where each object is much more massive than the next one. We see a lot of things like that. Four bodies is even more impossible. Things start to clear up if we look at, like, a million bodies, because our idea of what “clear” is changes. I don’t want to do that right now.

    Instead I’m going to look for closed orbits. Closed orbits are what normal people would call “orbits”. We’re used to thinking of orbits as, like, satellites going around and around the Earth. We know those go in circles, or ellipses, over and over again. They don’t, but the difference between a closed orbit and what they do is small enough we don’t need to care.

    Here, “orbit” means something very close to but not exactly what normal people mean by orbits. Maybe I should have said something about that before. But the difference hasn’t counted for much before.

    Start off by thinking of what we need to completely describe what a particular mass is doing. You need to know the central force law that the mass obeys. You need to know, for some reference time, where it is. You also need to know, for that same reference time, what its momentum is. Once you have that, you can predict where it should go for all time to come. You can also work out where it must have been before that reference time. (This we call “retrodicting”. Or “predicting the past”. With this kind of physics problem time has an unnerving symmetry. The tools which forecast what the mass will do in the future are exactly the same as those which tell us what the mass has done in the past.)
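That symmetry is easy to demonstrate with a time-reversible integrator. A sketch in Python, using a linear central force (my choice, for simplicity): run a leapfrog scheme forward, then run the very same scheme with the time step negated, and the starting position and momentum come back.

```python
def leapfrog(x, p, dt, steps, m=1.0, k=1.0):
    """Leapfrog integration for the linear central force F = -k*x."""
    for _ in range(steps):
        p += 0.5 * dt * (-k * x)   # half kick
        x += dt * p / m            # drift
        p += 0.5 * dt * (-k * x)   # half kick
    return x, p

x0, p0 = 1.0, 0.0
x1, p1 = leapfrog(x0, p0, dt=0.01, steps=1000)    # predict the future...
x2, p2 = leapfrog(x1, p1, dt=-0.01, steps=1000)   # ...retrodict the past
assert abs(x2 - x0) < 1e-9 and abs(p2 - p0) < 1e-9
```

The same tool forecasts and retrodicts; only the sign of the time step changes.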

    Now imagine knowing all the sets of positions and momentums that the mass has had. Don’t look just at the reference time. Look at all the time before the reference time, and look at all the time after the reference time. Imagine highlighting all the sets of positions and momentums the mass ever took on or ever takes on. We highlight them against the universe of all the positions and momentums that the mass could have had if this were a different problem.

    What we get is this ribbon-y thread that passes through the universe of every possible setup. This universe of every possible setup we call a “phase space”. It’s easy to explain the “space” part of that name. The phase space obeys the rules we’d expect from a vector space. It also acts in a lot of ways like the regular old space that we live in. The “phase” part I’m less sure how to justify. I suspect we get it because this way of looking at physics problems comes from statistical mechanics. And in that field we’re looking, often, at the different ways a system can behave. This mathematics looks a lot like that of different phases of matter. The changes between solids and liquids and gases are some of what we developed this kind of mathematics to understand, in fact. But this is speculation on my part. I’m not sure why “phase” has attached to this name. I can think of other, harder-to-popularize reasons why the name would make sense too. Maybe it’s the convergence of several reasons. I’d love to hear if someone has a good etymology. If one exists; remember that we still haven’t got the story straight about why ‘m’ stands for the slope of a line.

    Anyway, this ribbon of all the arrangements of position and momentum that the mass does ever at any point have we call a “trajectory”. We call it a trajectory because it looks like a trajectory. Sometimes mathematics terms aren’t so complicated. We also call it an “orbit” since very often the problems we like involve trajectories that loop around some interesting area. It looks like a planet orbiting a sun.

    A “closed orbit” is an orbit that gets back to where it started. This means you can take some reference time, and wait. Eventually the mass comes back to the same position and the same momentum that you saw at that reference time. This might seem unavoidable. Wouldn’t it have to get back there? And it turns out, no, it doesn’t. A trajectory might wander all over phase space. This doesn’t take much imagination. But even if it doesn’t, if it stays within a bounded region, it could still wander forever without repeating itself. If you’re not sure about that, please consider an old sequence I wrote inspired by the Aardman Animation film Arthur Christmas. Also please consider seeing the Aardman Animation film Arthur Christmas. It is one of the best things this decade has offered us. The short version is, though, that there is a lot of room even in the smallest bit of space. A trajectory is, in a way, a one-dimensional thing that might get all coiled up. But phase space has got plenty of room for that.
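One way to see how an orbit can stay bounded yet never repeat: reduce the question to how far around the angle gets each time the radius completes one oscillation. A Python sketch, where the rational and irrational values are my own illustrative choices:

```python
import math

# If the angle swept per radial period is a rational multiple of 2*pi,
# the trajectory eventually returns to its start; an irrational multiple
# wanders the circle forever without ever landing on it again.
def closes(angle_per_period, max_periods=1000, tol=1e-9):
    theta = 0.0
    for _ in range(max_periods):
        theta = (theta + angle_per_period) % (2 * math.pi)
        if theta < tol or (2 * math.pi - theta) < tol:
            return True
    return False

assert closes(2 * math.pi * 2 / 3)              # rational: closes in 3 periods
assert not closes(2 * math.pi / math.sqrt(2))   # irrational: never closes
```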

And sometimes we will get a closed orbit. The mass can wander around the center of the universe and come back to wherever we first noticed it with the same momentum it first had. At that point it’s locked into doing that same thing again, forever. If it could ever break out of the closed orbit, it would have had to do so the first time around, after all.

    Closed orbits, I admit, don’t exist in the real world. Well, the real world is complicated. It has more than a single mass and a single force at work. Energy and momentum are conserved. But we effectively lose both to friction. We call the shortage “entropy”. Never mind. No person has ever seen a circle, and no person ever will. They are still useful things to study. So it is with closed orbits.

    An equilibrium orbit, the circular orbit of a mass that’s at exactly the right radius for its angular momentum, is closed. A perturbed orbit, wobbling around the equilibrium, might be closed. It might not. I mean next time to discuss what has to be true to close an orbit.

     
  • Joseph Nebus 4:00 pm on Sunday, 9 July, 2017 Permalink | Reply
Tags: Barney Google, Lucky Cow, Middletons

    Reading the Comics, July 8, 2017: Mostly Just Pointing Edition 


    Won’t lie: I was hoping for a busy week. While Comic Strip Master Command did send a healthy number of mathematically-themed comic strips, I can’t say they were a particularly deep set. Most of what I have to say is that here’s a comic strip that mentions mathematics. Well, you’re reading me for that, aren’t you? Maybe. Tell me if you’re not. I’m curious.

    Richard Thompson’s Cul de Sac rerun for the 2nd of July is the anthropomorphic numerals joke for the week. And a great one, as I’d expect of Thompson, since it also turns into a little bit about how to create characters.

    Ralph Dunagin and Dana Summers’s Middletons for the 2nd uses mathematics as the example of the course a kid might do lousy in. You never see this for Social Studies classes, do you?

    Mark Tatulli’s Heart of the City for the 3rd made the most overtly mathematical joke for most of the week at Math Camp. The strip hasn’t got to anything really annoying yet; it’s mostly been average summer-camp jokes. I admit I’ve been distracted trying to figure out if the minor characters are Tatulli redrawing Peanuts characters in his style. I mean, doesn’t Dana (the freckled girl in the third panel, here) look at least a bit like Peppermint Patty? I’ve also seen a Possible Marcie and a Possible Shermy, who’s the Peanuts character people draw when they want an obscure Peanuts character who isn’t 5. (5 is the Boba Fett of the Peanuts character set: an extremely minor one-joke character used for a week in 1963 but who appeared very occasionally in the background until 1983. You can identify him by the ‘5’ on his shirt. He and his sisters 3 and 4 are the ones doing the weird head-sideways dance in A Charlie Brown Christmas.)

    Mark Pett’s Lucky Cow rerun for the 4th is another use of mathematics, here algebra, as a default sort of homework assignment.

    Brant Parker and Johnny Hart’s Wizard of Id Classics for the 4th reruns the Wizard of Id for the 7th of July, 1967. It’s your typical calculation-error problem, this about the forecasting of eclipses. I admit the forecasting of eclipses is one of those bits of mathematics I’ve never understood, but I’ve never tried to understand either. I’ve just taken for granted that the Moon’s movements are too much tedious work to really enlighten me and maybe I should reevaluate that. Understanding when the Moon or the Sun could be expected to disappear was a major concern for people doing mathematics for centuries.

    Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 5th is a Special Relativity joke, which is plenty of mathematical content for me. I warned you it was a week of not particularly deep discussions.

    Ashleigh Brilliant’s Pot-Shots rerun for the 5th is a cute little metric system joke. And I’m going to go ahead and pretend that’s enough mathematical content. I’ve come to quite like Brilliant’s cheerfully despairing tone.

    Jason Chatfield’s Ginger Meggs for the 7th mentions fractions, so you can see how loose the standards get around here when the week is slow enough.

    Snuffy Smith: 'I punched Barlow 'cuz I knew in all probability he wuz about to punch me, yore honor!!' Judge: 'Th' law don't deal in probabilities, Smif, we deal in CERTAINTIES!!' Snuffy, to his wife: '... An' th'minute he said THAT, I was purty CERTAIN whar I wuz headed !!'

    John Rose’s Barney Google and Snuffy Smith for the 8th of July, 2017. So I know it’s a traditional bit of comic strip graphic design to avoid using a . at the end of sentences, as it could be too easily lost — or duplicated — in a printing error. Thus the long history of comic strip sentences that end with a ! mark, unambiguous even if the dot goes missing or gets misaligned. But double exclamation points for everything? What goes on here?

    John Rose’s Barney Google and Snuffy Smith for the 8th finally gives me a graphic to include this week. It’s about the joke you would expect from the topic of probability being mentioned. And, as might be expected, the comic strip doesn’t precisely accurately describe the state of the law. Any human endeavour has to deal with probabilities. They give us the ability to have reasonable certainty about the confusing and ambiguous information the world presents.

    Einstein At Eight: equations scribbled all over the wall. Einstein Mom: 'Just look at what a mess you made here!' Einstein Dad: 'You've got some explaining to do, young man.'

    Vic Lee’s Pardon My Planet for the 8th of July, 2017. I gotta say, I look at that equation in the middle with m raised to the 7th power and feel a visceral horror. And yet I dealt with exactly this horrible thing once and it came out all right.

Vic Lee’s Pardon My Planet for the 8th is another Albert Einstein mention. The bundle of symbols doesn’t mean much of anything, at least not as they’re presented, but of course the superstar equation E = mc^2 turns up. It could hardly not.

     
  • Joseph Nebus 4:00 pm on Friday, 7 July, 2017 Permalink | Reply
    Tags: , , , , ,   

    How June 2017 Treated My Mathematics Blog 


    I’m a little behind my usual review of the month’s readership and what’s popular around here, but I have good reason for it: I was busy earlier this week. Expect to be busy next week, too. Really, it’s going to be a bit of a mad month so do please watch this spot next week when I unleash some extra madness on myself. Thank you.

So. Readers in June 2017: how many did I have? Disappointingly few of them, it turns out. Only 878, down from the 1029 in May and 994 in April. Heck, that’s not even close to what I had been running in previous months. Not sure what happened there. Maybe it’s everybody getting out of (US) schools and not needing comic strips read to them anymore. The number of unique visitors fell too, to 542, down from May’s 662 and April’s 696. It’s not a phenomenon related to the number of things posted, either; I had 13 posts in June versus 12 in May, and 13 in April, and 12 in March, which suggests that July I can take relatively easy, come to think of it. I finally had an uptick in the number of likes, at least, with that rising to 99 from the 78 of May and the 90 of April. I don’t think that’s a statistically significant difference, though. The number of comments also rose, but to only 13; that beats May’s 8, though it falls short of April’s 16. Well, I have a scheme in mind to increase the number of comments too. You’ll know it when you see it. But, wow, a statistics page like that and I worry that I’ve passed my prime here.

    The popular stuff around here was about what I’d expected: the count of grooves in a record, and a bunch of Reading the Comics posts. And then one of the supplemental pieces in my Why Stuff Can Orbit series, which was helped by Elke Stangl’s most gracious words about it. The top articles, since there was a three-way tie for fourth place:

    Now the roster of the 52 countries that sent me readers in June, and how many each of them did. Spoiler: the United States tops the list.

    Country Views
    United States 472
    Turkey 74
    India 52
    United Kingdom 40
    Canada 38
    Austria 23
    Puerto Rico 17
    Australia 16
    Germany 15
    Singapore 12
    Brazil 11
    China 9
    France 7
    Italy 7
    Slovenia 7
    Philippines 5
    Norway 4
    Spain 4
    Switzerland 4
    Argentina 3
    Hong Kong SAR China 3
    Israel 3
    Netherlands 3
    New Zealand 3
    Russia 3
    Sweden 3
    Cambodia 2
    Chile 2
    Indonesia 2
    Kenya 2
    Malaysia 2
    Poland 2
    Saudi Arabia 2
    South Africa 2
    South Korea 2
    Thailand 2
    Azerbaijan 1
    Bahrain 1
    Bangladesh 1
    Belgium 1 (*)
    Colombia 1 (*)
    Estonia 1
    Ghana 1
    Hungary 1
    Ireland 1
    Japan 1 (*)
    Jordan 1
    Macedonia 1
    Mexico 1
    Palestinian Territories 1
    Portugal 1 (***)
    Ukraine 1 (*)

    I make that out as readers coming from 52 countries, same as in May and slightly more than there were in April. There were 16 single-reader countries in June, down from May’s 21 and up from April’s 10. Belgium, Colombia, Japan, and Ukraine have been single-reader countries for two months running now. Portugal is on a four-month single-reader streak. Hi, person from Portugal. I’m glad you like me a little bit. That’s better than not at all. I have no idea why I’m suddenly popular in Turkey.

    The most popular day for posts was Sunday, with 18 percent of page views. That’s marginally up from 16 percent in May, but the same as April’s count. The most popular hour was 4 pm, when 14 percent of my page views came. I rather suspected that would happen; I tried moving the default posting time two hours earlier this past month and sure enough, the readers followed. People stop in here right after something’s posted or not much at all. Hm.

    The mathematics blog started the month with 50,125 page views, so hey, finally broke 50,000! Nice. These came from something like 22,754 distinct viewers that WordPress is aware of existing.

    WordPress’s report of what search terms people are looking for has collapsed into uselessness. About all it admits to people wanting in June, besides “unknown search terms”, were Jumble — I want it too, but can’t find a good source that just gives me the day’s puzzle in a static picture — and “concept of pythagorean theorem” and “short conversation to explain algebra”. The Pythagorean theorem I can do, but a short conversation to explain algebra? … Well, which kind of algebra? I suppose they don’t want the fun kinds. They never do.

    The Insights panel thinks there are 666 WordPress.com followers to start the month. I can accept that. Not all of them seem to visit, but that might just be that they’re following me in their Readers rather than clicking individual links. I’ve given up on leaving a teaser of text out front and hiding the rest behind a click. That stuff might record, but nobody likes it, me included. If you’d like to follow this blog in your WordPress reader, there’s a little blue strip labelled “Follow nebusresearch” in the upper-right corner of the page. If you’d rather follow by e-mail, it’s under “Follow Blog Via Email” and don’t think I want a – in there. And I am on Twitter as well, as @Nebusj. That account sometimes gets into talking about non-mathematical stuff, including my humor blog which is a slightly more popular hangout, since I regularly explain what’s going on in the story strips. So if you looked at Mary Worth the last couple months and couldn’t figure out what the heck was going on, I can tell you: it’s CRUISE SHIPS. Only in more detail.

     
    • ivasallay 6:14 am on Saturday, 8 July, 2017 Permalink | Reply

      Should we 666 WordPress followers make everybody else a little nervous? It is a triangular number so that’s pretty cool and not beastly in the least.

      Liked by 1 person

      • Joseph Nebus 6:36 pm on Saturday, 8 July, 2017 Permalink | Reply

        Maybe it should! But I didn’t realize 666 was a triangular number and somehow that delights me, since 6 and 66 are triangular numbers also. Shame that 6666 isn’t.

        Liked by 1 person

    • elkement (Elke Stangl) 6:15 pm on Tuesday, 11 July, 2017 Permalink | Reply

      Ha ha – thanks for the mention – but I am not sure if my petty Twitter feed did help so much :-) But if it did I am happy as the series totally deserved many hits!

      Like

  • Joseph Nebus 6:00 pm on Tuesday, 4 July, 2017 Permalink | Reply
    Tags: , Daddy's Home, Gangbusters, , , , Mathmagic Land, , Pinocchio,   

    Reading the Comics, July 1, 2017: Deluge Edition, Part 2 


    Last week started off going like Gangbusters, a phrase I think that’s too old-fashioned for my father to say but that I’ve picked up because I like listening to old-time radio and, you know, Gangbusters really does get going like that. Give it a try sometime, if you’re open to that old-fashioned sort of narrative style and blatant FBI agitprop. You might want to turn the volume down a little before you do. It slowed down the second half of the week, which is mostly fine as I’d had other things taking up my time. Let me finish off last week and hope there’s a good set of comics to review for next Sunday and maybe Tuesday.

    Ted Shearer’s Quincy for the 4th of May, 1978 was rerun the 28th of June. It’s got the form of your student-resisting-the-word-problem joke. And mixes in a bit of percentages which is all the excuse I need to include it here. That and how Shearer uses halftone screening. It’s also a useful reminder of how many of our economic problems could be solved quickly if poor people got more money.

    Teacher explaining budgets: 'Quincy, does your granny have a budget?' Quincy: 'She sure does! 30% for rent, 30% for food, 30% for clothing, and 20% for the preacher, the doctor, lawyer, and undertaker!' 'That's 110%' 'That's our problem!'

    Ted Shearer’s Quincy for the 4th of May, 1978 Not answered: wait, Quincy’s Granny has to make regular payments to the undertaker? Is ‘the preacher, the doctor, lawyer and undertaker’ some traditional phrasing that I’m too young and white and suburban to recognize or should I infer that Granny has a shocking and possibly illicit hobby?

    Olivia Walch’s Imogen Quest for the 28th features Gottfried Leibniz — missing his birthday by three days, incidentally — and speaks of the priority dispute about the invention of calculus. I’m not sure there is much serious questioning anymore about Leibniz’s contributions to mathematics. I think they might be even more strongly appreciated these days than they ever used to be, as people learn more about his work in computing machines and the attempt to automate calculation.

    Mark Anderson’s Andertoons for the 28th is our soothing, familiar Andertoons for this essay. I remember in learning about equivalent forms of fractions wondering why anyone cared about reducing them. If two things have the same meaning, why do we need to go further? There are a couple answers. One is that it’s easier on us to understand a quantity if it’s a shorter, more familiar form. \frac{3}{4} has a meaning that \frac{1131}{1508} just does not. And another is that we often want to know whether two things are equivalent, or close. Is \frac{1147}{1517} more or less than \frac{1131}{1508} ? Good luck eyeballing that.
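
    Python’s `fractions` module makes that eyeballing challenge concrete; the numbers are the ones from the paragraph above, and the reduced forms are computed rather than guessed:

```python
from fractions import Fraction

# Fraction reduces to lowest terms automatically.
a = Fraction(1131, 1508)   # reduces to 3/4
b = Fraction(1147, 1517)   # reduces to 31/41

print(a, b)     # 3/4 31/41
print(b > a)    # True: 1147/1517 is the larger of the two
```

    So \frac{1147}{1517} is the (slightly) bigger one, which is much easier to see once both are in reduced form.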

    And we learn, later on, that a lot of mathematics is about finding different ways to write the same thing. Each way has its uses. Sometimes a slightly more complicated way to write a thing makes proving something easier. There’s about two solid months of Real Analysis, for example, where you keep on writing that x_{n} - x_{m} \equiv x_{n} - x + x - x_{m} and this “adding zero” turns out to make proofs possible. Even easy.
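
    As a sketch of how that trick pays off, not spelled out in the text above: in the usual Cauchy-sequence argument you pick x so both differences on the right are small, and the triangle inequality does the rest:

```latex
\left|x_n - x_m\right| = \left|x_n - x + x - x_m\right|
    \le \left|x_n - x\right| + \left|x - x_m\right|
    < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon
```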

    Mark Tatulli’s Heart of the City remains on my watch-with-caution list as the Math Camp story continues. But the strip from the 28th tickles me with the idea of crossing mathematics camp with Pinocchio‘s Pleasure Island. I’m imagining something where Heart starts laughing at something and ends up turning into something from Donald Duck’s Mathmagic land.

    Dave Blazek’s Loose Parts for the 28th is your traditional blackboard-full-of-symbols joke. I’m amused.

    Tony Rubino and Gary Markstein’s Daddy’s Home for the 1st of July is your traditional “mathematics is something hard” joke. I have the feeling it’s a rerun, but I lack the emotional investment in whether it is a rerun to check. The joke’s comfortable and familiar as it is, anyway.

     
  • Joseph Nebus 4:00 pm on Sunday, 2 July, 2017 Permalink | Reply
    Tags: , , , , , , , , Mom's Cancer, Perry Bible Fellowship, ,   

    Reading the Comics, June 26, 2017: Deluge Edition, Part 1 


    So this past week saw a lot of comic strips with some mathematical connection put forth. There were enough just for the 26th that I probably could have done an essay with exclusively those comics. So it’s another split-week edition, which suits me fine as I need to balance some of my writing loads the next couple weeks for convenience (mine).

    Tony Cochrane’s Agnes for the 25th of June is fun as the comic strip almost always is. And it’s even about estimation, one of the things mathematicians do way more than non-mathematicians expect. Mathematics has a reputation for precision, when in my experience it’s much more about understanding and controlling error. Even in analysis, the study of why calculus works, the typical proof amounts to showing that the difference between what you want to prove and what you can prove is smaller than your tolerance for an error. So: how do we go about estimating something difficult, like, the number of stars? If it’s true that nobody really knows, how do we know there are some wrong answers? And the underlying answer is that we always know some things, and those let us rule out answers that are obviously low or obviously high. We can make progress.

    Russell Myers’s Broom Hilda for the 25th is about one explanation given for why time keeps seeming to pass faster as one ages. This is a mathematical explanation, built on the idea that the same linear unit of time is a greater proportion of a young person’s life, so of course it seems to take longer. This is probably partly true. Most of our senses work by a sense of proportion: it’s easy to tell a one-kilogram from a two-kilogram weight by holding them, and easy to tell a five-kilogram from a ten-kilogram weight, but harder to tell a five- from a six-kilogram weight.

    As ever, though, I’m skeptical that anything really is that simple. My biggest doubt is that it seems to me time flies when we haven’t got stories to tell about our days, when they’re all more or less the same. When we’re doing new or exciting or unusual things we remember more of the days and more about the days. A kid has an easy time finding new things, and exciting or unusual things. Broom Hilda, at something like 1500-plus years old and really a dour, unsociable person, doesn’t do so much that isn’t just like she’s done before. Wouldn’t that be an influence? And I doubt that’s a complete explanation either. Real things are more complicated than that yet.

    Mac and Bill King’s Magic In A Minute for the 25th features a form-a-square puzzle using some triangles. Mathematics? Well, logic anyway. Also a good reminder about open-mindedness when you’re attempting to construct something.

    'Can you tell me how much this would be with the discount?' 'It would be ... $17.50.' 'How did you do that so fast?' 'Ten percent of 25 is $2.50 ... times three is $7.50 ... round that to $8.00 ... $25 minus $8 is $17 ... add back the 50 cents and you get $17.50.' 'So you're like a math genius?' (Thinking) 'I never thought so before I started working here.'

    Norm Feuti’s Retail for the 26th of June, 2017. So, one of my retail stories that I might well have already told because I only ever really had one retail job and there’s only so many stories you get working a year and a half in a dying mall’s book store. I was a clerk at Walden Books. The customer wanted to know for this book whether the sticker’s 10 percent discount was taken before or after the state’s 6 percent sales tax was applied. I said I thought the discount taken first and then tax applied, but it didn’t matter if I were wrong as the total would be the same amount. I calculated what it would be. The customer was none too sure about this, but allowed me to ring it up. The price encoded in the UPC was wrong, something like a dollar more than the cover price, and the subtotal came out way higher. The customer declared, “See?” And wouldn’t have any of my explaining that he was hit by a freak event. I don’t remember other disagreements between the UPC price and the cover price, but that might be because we just corrected the price and didn’t get a story out of it.
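
    The clerk’s claim, that the order of discount and tax can’t change the total, is just multiplication commuting. A quick check with illustrative numbers (the $25 price is made up for the example; the story doesn’t give the book’s price):

```python
price = 25.00
discount = 0.10   # the sticker's 10 percent off
tax = 0.06        # the state's 6 percent sales tax

discount_first = price * (1 - discount) * (1 + tax)
tax_first = price * (1 + tax) * (1 - discount)

# Multiplication commutes, so the order of the two factors can't matter.
print(round(discount_first, 2), round(tax_first, 2))
```

    Both orders come to the same total, $23.85 for these numbers.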

    Norm Feuti’s Retail for the 26th is about how you get good at arithmetic. I suspect there’s two natural paths; you either find it really interesting in your own right, or you do it often enough you want to find ways to do it quicker. Marla shows the signs of learning to do arithmetic quickly because she does it a lot: turning “30 percent off” into “subtract ten percent three times over” is definitely the easy way to go. The alternative is multiplying by seven and dividing by ten and you don’t want to multiply by seven unless the problem gives a good reason why you should. And I certainly don’t fault the customer not knowing offhand what 30 percent off $25 would be. Why would she be in practice doing this sort of problem?

    Johnny Hart’s Back To B.C. for the 26th reruns the comic from the 30th of December, 1959. In it … uh … one of the cavemen guys has found his calendar for the next year has too many days. (Think about what 1960 was.) It’s a common problem. Every calendar people have developed has too few or too many days, as the Earth’s daily rotations on its axis and annual revolution around the sun aren’t perfectly synchronized. We handle this in many different ways. Some calendars worry little about tracking solar time and just follow the moon. Some calendars would run deliberately short and leave a little stretch of un-named time before the new year started; the ancient Roman calendar, before the addition of February and January, is famous in calendar-enthusiast circles for this. We’ve now settled on a calendar which will let the nominal seasons and the actual seasons drift out of synch slowly enough that periodic changes in the Earth’s orbit will dominate the problem before the error between actual-year and calendar-year length will matter. That’s a pretty good sort of error control.

    8,978,432 is not anywhere near the number of days that would be taken between 4,000 BC and the present day. It’s not a joke about Bishop Ussher’s famous research into the time it would take to fit all the Biblically recorded events into history. The time is something like 24,600 years ago, a choice which intrigues me. It would make fair sense to declare, what the heck, they lived 25,000 years ago and use that as the nominal date for the comic strip. 24,600 is a weird number of years. Since it doesn’t seem to be meaningful I suppose Hart went, simply enough, with a number that was funny just for being riotously large.

    Mark Tatulli’s Heart of the City for the 26th places itself on my Grand Avenue warning board. There’s plenty of time for things to go a different way but right now it’s set up for a toxic little presentation of mathematics. Heart, after being grounded, was caught sneaking out to a slumber party and now her mother is sending her to two weeks of Math Camp. I’m supposing, from Tatulli’s general attitude about how stuff happens in Heart and in Lio that Math Camp will not be a horrible, penal experience. But it’s still ominous talk and I’m watching.

    Brian Fies’s Mom’s Cancer story for the 26th is part of the strip’s rerun on GoComics. (Many comic strips that have ended their run go into eternal loops on GoComics.) This is one of the strips with mathematical content. The spatial dimension of a thing implies relationships between the volume (area, hypervolume, whatever) of a thing and its characteristic linear measure, its diameter or radius or side length. It can be disappointing.

    Nicholas Gurewitch’s Perry Bible Fellowship for the 26th is a repeat of one I get on my mathematics Twitter friends now and then. Should warn, it’s kind of racy content, at least as far as my usual recommendations here go. It’s also a little baffling because while the reveal of the unclad woman is funny … what, exactly, does it mean? The symbols don’t mean anything; they’re just what fits graphically. I think the strip is getting at Dr Loring not being able to see even a woman presenting herself for sex as anything but mathematics. I guess that’s funny, but it seems like the idea isn’t quite fully developed.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal Again for the 26th has a mathematician snort about plotting a giraffe logarithmically. This is all about representations of figures. When we plot something we usually start with a linear graph: a couple of axes perpendicular to one another. A unit of movement in the direction of any of those axes represents a constant difference in whatever that axis measures. Something growing ten units larger, say. That’s fine for many purposes. But we may want to measure something that changes by a power law, or that grows (or shrinks) exponentially. Or something that has some region where it’s small and some region where it’s huge. Then we might switch to a logarithmic plot. Here the same difference in space along the axis represents a change that’s constant in proportion: something growing ten times as large, say. The effective result is to squash a shape down, making the higher points more nearly flat.

    And to completely smother Weinersmith’s fine enough joke: I would call that plot semilogarithmically. I’d use a linear scale for the horizontal axis, the gazelle or giraffe head-to-tail. But I’d use a logarithmic scale for the vertical axis, ears-to-hooves. So, linear in one direction, logarithmic in the other. I’d be more inclined to use “logarithmic” plots to mean logarithms in both the horizontal and the vertical axes. Those are useful plots for turning up power laws, like the relationship between a planet’s orbital radius and the length of its year. Relationships like that turn into straight lines when both axes are logarithmically spaced. But I might also describe that as a “log-log plot” in the hopes of avoiding confusion.
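
    A numerical illustration of why log-log plots turn power laws into straight lines, using the orbital-radius-and-year relationship mentioned above (Kepler’s third law, with the period proportional to the radius to the 3/2 power; the radii are arbitrary sample values):

```python
import math

radii = [1.0, 2.0, 4.0, 8.0]
periods = [r ** 1.5 for r in radii]   # Kepler's third law

# Slope between consecutive points in log-log coordinates.
# A constant slope means a straight line, and the slope is the power.
slopes = [
    (math.log(periods[i + 1]) - math.log(periods[i]))
    / (math.log(radii[i + 1]) - math.log(radii[i]))
    for i in range(len(radii) - 1)
]
print(slopes)   # every slope works out to 1.5
```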

     
  • Joseph Nebus 4:00 pm on Thursday, 29 June, 2017 Permalink | Reply
    Tags: 2015, 2016, , , terms   

    A Listing Of Mathematics Subjects I Have Covered In A To Z Sequences Of The Past 


    I am not saying why I am posting this recap of past lists just now. But now you know why I am posting this recap of past lists just now.

    Summer 2015 | Leap Day 2016 | End 2016
    Ansatz | Axiom | Algebra
    Bijection | Basis | Boundary value problems
    Characteristic | Conjecture | Cantor’s middle third
    Dual | Dedekind Domain | Distribution (statistics)
    Error | Energy | Ergodic
    Fallacy | Fractions (Continued) | Fredholm alternative
    Graph | Grammar | General covariance
    Hypersphere | Homomorphism | Hat
    Into | Isomorphism | Image
    Jump (discontinuity) | Jacobian | Jordan curve
    Knot | Kullback-Leibler Divergence | Kernel
    Locus | Lagrangian | Local
    Measure | Matrix | Monster Group
    N-tuple | Normal Subgroup | Normal numbers
    Orthogonal | Orthonormal | Osculating circle
    Proper | Polynomials | Principal
    Quintile | Quaternion | Quotient groups
    Ring | Riemann Sphere | Riemann sum
    Step | Surjective Map | Smooth
    Tensor | Transcendental Number | Tree
    Unbounded | Uncountable | Unlink
    Vertex (graph theory) | Vector | Voronoi diagram
    Well-Posed Problem | Wlog | Weierstrass Function
    Xor | X-Intercept | Xi function
    Y-Axis | Yukawa Potential | Yang Hui’s Triangle
    Z-Transform | Z-score | Zermelo-Fraenkel Axioms

    And do, please, watch this space.

     
    • The Chaos Realm 5:53 pm on Tuesday, 18 July, 2017 Permalink | Reply

      So, I did a search for “Ricci Flow” and it brought up this page, but I don’t see it in the list. I imagine it’s in one of these…could you narrow it down for me and my funky eyeballs? Thanks!

      Like

  • Joseph Nebus 4:00 pm on Tuesday, 27 June, 2017 Permalink | Reply
    Tags: , , , , , , , , ,   

    Why Stuff Can Orbit, Part 10: Where Time Comes From And How It Changes Things 


    Previously:

    And the supplemental reading:


    And again my thanks to Thomas K Dye, creator of the web comic Newshounds, for the banner art. He has a Patreon to support his creative habit.

    In the last installment I introduced perturbations. These are orbits that are a little off from the circles that make equilibriums. And they introduce something that’s been lurking, unnoticed, in all the work done before. That’s time.

    See, how do we know time exists? … Well, we feel it, so, it’s hard for us not to notice time exists. Let me rephrase it then, and put it in contemporary technology terms. Suppose you’re looking at an animated GIF. How do you know it’s started animating? Or that it hasn’t stalled out on some frame?

    If the picture changes, then you know. It has to be going. But if it doesn’t change? … Maybe it’s stalled out. Maybe it hasn’t. You don’t know. You know there’s time when you can see change. And that’s one of the little practical insights of physics. You can build an understanding of special relativity by thinking hard about that. Also think about the observation that the speed of light (in vacuum) doesn’t change.

    When something physical’s in equilibrium, it isn’t changing. That’s how we found equilibriums to start with. And that means we stop keeping track of time. It’s one more thing to keep track of that doesn’t tell us anything new. Who needs it?

    For the planet orbiting a sun, in a perfect circle, or its other little variations, we do still need time. At least some. How far the planet is from the sun doesn’t change, no, but where it is on the orbit will change. We can track where it is by setting some reference point. Where the planet is at the start of our problem. How big is the angle between where the planet is now, the sun (the center of our problem’s universe), and that origin point? That will change over time.

    But it’ll change in a boring way. The angle will keep increasing in magnitude at a constant speed. Suppose it takes five time units for the angle to grow from zero degrees to ten degrees. Then it’ll take ten time units for the angle to grow from zero to twenty degrees. It’ll take twenty time units for the angle to grow from zero to forty degrees. Nice to know if you want to know when the planet is going to be at a particular spot, and how long it’ll take to get back to the same spot. At this rate it’ll be 180 time units before the angle grows to 360 degrees, which looks the same as zero degrees. But it’s not anything interesting happening.
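
    The rate in that example works out mechanically; a tiny sketch of the arithmetic:

```python
# Ten degrees in five time units gives the constant angular speed.
omega = 10 / 5   # degrees per time unit

for angle in (10, 20, 40, 360):
    print(f"{angle} degrees takes {angle / omega:g} time units")
```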

    We’ll label this sort of change, where time passes, yeah, but it’s too dull to notice as a “dynamic equilibrium”. There’s change, but it’s so steady and predictable it’s not all that exciting. And I’d set up the circular orbits so that we didn’t even have to notice it. If the radius of the planet’s orbit doesn’t change, then the rate at which its apsidal angle changes, its “angular velocity”, also doesn’t change.

    Now, with perturbations, the distance between the planet and the center of the universe will change in time. That was the stuff at the end of the last installment. But also the apsidal angle is going to change. I’ve used ‘r(t)’ to represent the radial distance between the planet and the sun before, and to note that what value it is depends on the time. I need some more symbols.

    There’s two popular symbols to use for angles. Both are Greek letters because, I dunno, they’ve always been. (Florian Cajori’s A History of Mathematical Notation doesn’t seem to have anything. And when my default go-to for explaining mathematicians’ choices tells me nothing, what can I do? Look at Wikipedia? Sure, but that doesn’t enlighten me either.) One is to use theta, θ. The other is to use phi, φ. Both are good, popular choices, and in three-dimensional problems we’ll often need both. Here we don’t need both. The orbit of something moving under a central force might be complicated, but it’s going to be in a single plane of movement. The conservation of angular momentum gives us that. It’s not the last thing angular momentum will give us. The orbit might happen not to be in a horizontal plane. But that’s all right. We can tilt our heads until it is.

    So I’ll reach deep into the universe of symbols for angles and call on θ for the apsidal angle. θ will change with time, so, ‘θ(t)’ is the angular counterpart to ‘r(t)’.

    I’d said before the apsidal angle is the angle made between the planet, the center of the universe, and some reference point. What is my reference point? I dunno. It’s wherever θ(0) is, that is, where the planet is when my time ‘t’ is zero. There’s probably a bootstrapping fallacy here. I’ll cover it up by saying, you know, the reference point doesn’t matter. It’s like the choice of prime meridian. We have to have one, but we can pick whatever one is convenient. So why not pick one that gives us the nice little identity that ‘θ(0) = 0’? If you don’t buy that and insist I pick a reference point first, fine, go ahead. But you know what? The labels on my time axis are arbitrary. There’s no difference in the way physics works whether ‘t’ is ‘0’ or ‘2017’ or ‘21350’. (At least as long as I adjust any time-dependent forces, which there aren’t here.) So we get back to ‘θ(0) = 0’.

    For a circular orbit, the dynamic equilibrium case, these are pretty boring, but at least they’re easy to write. They’re:

    r(t) = a
    \theta(t) = \omega t

    Here ‘a’ is the radius of the circular orbit. And ω is a constant number, the angular velocity. It’s how much a bit of time changes the apsidal angle. And this set of equations is pretty dull. You can see why it barely rates a mention.

    The perturbed case gets more interesting. We know how ‘r(t)’ looks. We worked that out last time. It’s some function like:

    r(t) = a + A \cos\left(\sqrt{\frac{k}{m}} t\right) + B \sin\left(\sqrt{\frac{k}{m}} t\right)

    Here ‘A’ and ‘B’ are some numbers telling us how big the perturbation is, and ‘m’ is the mass of the planet, and ‘k’ is something related to how strong the central force is. And ‘a’ is that radius of the circular orbit, the thing we’re perturbed around.

    What about ‘θ(t)’? How’s that look? … We don’t seem to have a lot to go on. We could go back to Newton and all that force-equalling-the-change-in-momentum-over-time stuff. We can always do that. It’s tedious, though. We have something better. It’s another gift from the conservation of angular momentum. When we can turn a forces-over-time problem into a conservation-of-something problem we’re usually doing the right thing. The conservation-of-something is typically a lot easier to set up and to track. We’ve used the conservation of energy before, and we’ll use it again. The conservation of ordinary, ‘linear’, momentum helps other problems, though not, I’ll grant, this one. The conservation of angular momentum will help us here.

    So what is angular momentum? … It’s something about ice skaters twirling around and your high school physics teacher sitting on a bar stool spinning a bike wheel. All right. But it’s also a quantity. We can get some idea of it by looking at the formula for calculating linear momentum:

    \vec{p} = m\vec{v}

    The linear momentum of a thing is its inertia times its velocity. This is assuming the thing isn’t moving so fast that we have to notice relativity. Also that it isn’t, like, an electric or a magnetic field, so we have to notice it’s not precisely a thing. Also that it isn’t a massless particle like a photon, because see previous sentence. I’m talking about ordinary things like planets and blocks of wood on springs and stuff. The inertia, ‘m’, is rather happily the same thing as its mass. The velocity is how fast something is travelling and which direction it’s going in.

    Angular momentum, meanwhile, we calculate with this radically different-looking formula:

    \vec{L} = I\vec{\omega}

    Here, again, talking about stuff that isn’t moving so fast we have to notice relativity. That isn’t electric or magnetic fields. That isn’t massless particles. And so on. Here ‘I’ is the “moment of inertia” and \vec{\omega} is the angular velocity. The angular velocity is a vector that describes for us how fast the spinning is and the direction of the axis around which the thing spins. The moment of inertia describes how easy or hard it is to make the thing spin around each axis. It’s a tensor because real stuff can be easier to spin in some directions than in others. If you’re not sure that’s actually so, try tossing some stuff in the air so it spins in each of the three major directions. You’ll see.

    We’re fortunate. For central force problems the moment of inertia is easy to calculate. We don’t need the tensor stuff. And we don’t even need to notice that the angular velocity is a vector. We know what axis the planet’s rotating around; it’s the one pointing out of the plane of motion. We can focus on the size of the angular velocity, the number ‘ω’. See how they’re different, what with one not having an arrow over the symbol. The arrow-less version is easier. For a planet, or other object, with mass ‘m’ that’s orbiting a distance ‘r’ from the sun, the moment of inertia is:

    I = mr^2

    So we know this number is going to be constant:

    L = mr^2\omega

    The mass ‘m’ doesn’t change. We’re not doing those kinds of problem. So however ‘r’ changes in time, the angular velocity ‘ω’ has to change with it, so that this product stays constant. The angular velocity is how the apsidal angle ‘θ’ changes over time. So since we know ‘L’ doesn’t change, and ‘m’ doesn’t change, then the way ‘r’ changes must tell us something about how ‘θ’ changes. We’ll get into that next time.
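
    That trade-off can be sketched numerically. All of the specific numbers here (m, k, a, the perturbation amplitudes A and B, and L itself) are made up for illustration; the point is only that the product m r² ω stays fixed while r and ω each vary:

```python
import math

m, k = 1.0, 1.0            # mass and force constant, illustrative values
a, A, B = 1.0, 0.05, 0.0   # circular radius and perturbation amplitudes
L = 1.0                    # the conserved angular momentum, chosen arbitrarily

def r(t):
    # The perturbed radial distance from earlier in the essay.
    w = math.sqrt(k / m)
    return a + A * math.cos(w * t) + B * math.sin(w * t)

def omega(t):
    # Since L = m r^2 omega is constant, omega must vary inversely with r^2.
    return L / (m * r(t) ** 2)

for t in (0.0, 1.0, 2.0, 3.0):
    print(f"t={t}: r={r(t):.4f}  omega={omega(t):.4f}  "
          f"L={m * r(t) ** 2 * omega(t):.4f}")
```

    When r is largest the angle changes most slowly, and vice versa, which is Kepler’s second law hiding inside the formula.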

     
    • mathtuition88 5:57 am on Friday, 30 June, 2017 Permalink | Reply

      The math formulas look very nice on your blog. Do you use WordPress’s built in LaTeX or others?

      Liked by 1 person

      • elkement (Elke Stangl) 8:21 am on Saturday, 1 July, 2017 Permalink | Reply

        Thought so, too! Looks like the built-in WP functionality, but perhaps using a larger-than-default size. I am already pondering to go back to my recent posts and increase the size of all equations :-)
        To Joseph: Really an excellent series – it’s now more of a book, actually!!

        Liked by 1 person

        • Joseph Nebus 3:49 am on Monday, 3 July, 2017 Permalink | Reply

          It is entirely the built-in functionality, with the size made larger by adding &s=2 before the closing $ mark. I think you taught me that trick, and if you didn’t, then I’m not sure where I did pick it up from. My recollection is that s=3 also works and I don’t know just how big a line can get before the WordPress engine rejects it.

          And thanks for the kind words. I’m thinking about where to go with the series. It’d make a terribly slim book as it is, though. And slimmer still once I took out the redundant bits that cover for the occasional months-long gaps between essays.


          • elkement (Elke Stangl) 6:23 am on Monday, 3 July, 2017 Permalink | Reply

            Yes, I remember we discussed the size parameter before! But I admit I just used the defaults now. Before I have tried and tested for every equation which size works best – I felt that for some equation the default size seems OK and sometimes you need to increase it.
            Perhaps I will try next time to paste the source code of a post into an editor and replace all the $ signs by &s=2$ at the end – then it’s all consistent and can be done very fast.


      • Joseph Nebus 3:47 am on Monday, 3 July, 2017 Permalink | Reply

        Thank you! I’ve just been using WordPress’s built-in LaTeX. It’ll usually take what feels like two hundred passes through saving and previewing a page before I get all my syntax errors sorted out, but it’s the least inconvenient way of including equations that I’ve stumbled across so far. The only thing I’ve missed and that I haven’t figured out is how to get equation numbers at the end of lines and now I wonder if it isn’t something as simple as starting a bit of LaTeX with double $ marks instead of a single one.


        • mathtuition88 5:01 am on Monday, 3 July, 2017 Permalink | Reply

          I haven’t tried equation numbers with WordPress latex yet, I suspect it may not be possible since WordPress only supports very basic LaTeX, for instance the \begin{equation} environment is not supported I think.


  • Joseph Nebus 4:00 pm on Sunday, 25 June, 2017 Permalink | Reply
    Tags: , , Ollie and Quentin, , , , , Today's Dogg, ,   

    Reading the Comics, June 24, 2017: Saturday Morning Breakfast Cereal Edition 


    Somehow this is not the title of every Reading The Comics review! But it is for this post and we’ll explore why below.

    Piers Baker’s Ollie and Quentin for the 18th is a Zeno’s Paradox-based joke. This uses the most familiar of Zeno’s Paradoxes, about the problem of covering any distance needing infinitely many steps to be done in a finite time. Zeno’s Paradoxes are often dismissed these days (probably were then, too), on the grounds that the Ancient Greeks Just Didn’t Understand about convergence. Hardly; they were as smart as we were. Zeno had a set of paradoxes, built on the questions of whether space and time are infinitely divisible or whether they’re not. Any answer to one paradox implies problems in others. There’s things we still don’t really understand about infinity and infinitesimals and continuity. Someday I should do a proper essay about them.

    Dave Coverly’s Speed Bump for the 18th is not exactly an anthropomorphic-numerals joke. It is about making symbols manifest in the real world, at least. The greater-than and less-than signs as we know them were created by the English mathematician Thomas Harriot, and introduced to the world in his posthumous Artis Analyticae Praxis (1631). He also had an idea of putting a . between the numerals of an expression and the letters multiplied by them, for example, “4.x” to mean four times x. We mostly do without that now, taking multiplication as assumed if two meaningful quantities are put next to one another. But we will use, now, a vertically-centered dot to separate terms multiplied together when that helps our organization. The equals sign we trace to the 16th century mathematician Robert Recorde, whose 1557 Whetstone of Witte uses long but recognizable equals signs. The = sign went into hibernation after that, though, until the 17th century, and it took some time after that to become well-used. So it often is with symbols.

    Mr Tanner: 'Today we'll talk about where numbers come from. Take zero, for instance ... Quincy, do you know who invented the zero?' Quincy: 'I'm not sure, Mr Tanner, but from the grades I get it must have been one of my teachers.'

    Ted Shearer’s Quincy for the 25th of April, 1978 and rerun the 19th of June, 2017. The question does make me wonder how far Mr Tanner was going to go with this. The origins of zero and one are great stuff for class discussion. Two, also. But what about three? Five? Ten? Twelve? Minus one? Irrational numbers, if the class has got up to them? How many students are going to be called on to talk about number origins? And how many truly different stories are there?

    Ted Shearer’s Quincy for the 25th of April, 1978 and rerun the 19th of June, starts from the history of zero. It’s worth noting there are a couple of threads woven together in the concept of zero. One is the idea of “nothing”, which we’ve had just forever. I mean, the idea that there isn’t something to work with. Another is the idea of the … well, the additive identity, there being some number that’s one less than one and two less than two. That you can add to anything without changing the thing. And then there’s symbols. There’s the placeholder for “there are no examples of this quantity here”. There’s the denotation of … well, the additive identity. All these things are zeroes, and if you listen closely, they are not quite the same thing. Which is not weird. Most words mean a collection of several concepts. We’re lucky the concepts we mean by “zero” are so compatible in meaning. Think of the poor person trying to understand the word “bear”, or “cleave”.

    John Deering’s Strange Brew for the 19th is a “New Math” joke, fittingly done with cavemen. Well, numerals were new things once. Amusing to me is that — while I’m not an expert — in quite a few cultures the symbol for “one” was pretty much the same thing, a single slash mark. It’s hard not to suppose that numbers started out with simple tallies, and the first thing to tally might get dressed up a bit with serifs or such but is, at heart, the same thing you’d get jabbing a sharp thing into a soft rock.

    Guy Gilchrist’s Today’s Dogg for the 19th I’m sure is a rerun and I think I’ve featured it here before. So be it. It’s silly symbol-play and dog arithmetic. It’s a comic strip about how dogs are cute; embrace it or skip it.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal is properly speaking reruns when it appears on GoComics.com. For whatever reason Weinersmith ran a patch of mathematics strips there this past week. So let me bundle all that up. On the 19th he did a joke mathematicians get a lot, about how the only small talk anyone has about mathematics is how they hated mathematics. I’m not sure mathematicians have it any better than any other teachers, though. Have you ever known someone to say, “My high school gym class gave me a greater appreciation of the world”? Or talk about how grade school history opened their eyes to the wonders of the subject? It’s a sad thing. But there are a lot of things keeping teachers from making students feel joy in their subjects.

    For the 21st Weinersmith makes a statistician joke. I can wrangle some actual mathematics out of an otherwise correctly-formed joke. How do we ever know that something is true? Well, we gather evidence. But how do we know the evidence is relevant? Even if the evidence is relevant, how do we know we’ve interpreted it correctly? Even if we have interpreted it correctly, how do we know that it shows what we want to know? Statisticians become very familiar with hypothesis testing, which amounts to the question, “does this evidence indicate that some condition is implausibly unlikely”? And they can do great work with that. But “implausibly unlikely” is not the same thing as “false”. A person knowledgeable and honest enough turns out to have few things that can be said for certain.

    The June 23rd strip I’ve seen go around Mathematics Twitter several times, about the ways in which mathematical literacy would destroy modern society. It’s a cute and flattering portrait of mathematics’ power, probably why mathematicians like passing it back and forth. But … well, how would “logic” keep people from being fooled by scams? What makes a scam work is that the premise seems logical. And real-world problems — as opposed to logic-class problems — are rarely completely resolvable by deductive logic. There have to be the assumptions, the logical gaps, and the room for humbuggery that allow hoaxes and scams to slip through. And does anyone need a logic class to not “buy products that do nothing”? And what is “nothing”? I have more keychains than I have keys to chain, even if we allow for emergencies and reasonable unexpected extra needs. This doesn’t stop my buying keychains as souvenirs. Does a Penn Central-logo keychain “do nothing” merely because it sits on the windowsill rather than hold any sort of key? If so, was my love foolish to buy it as a present? Granted that buying a lottery ticket is a foolish use of money; is my life any worse for buying that than, say, a peanut butter cup that I won’t remember having eaten a week afterwards? As for credit cards — It’s not clear to me that people max out their credit cards because they don’t understand they will have to pay it back with interest. My experience has been people max out their credit cards because they have things they must pay for and no alternative but going further into debt. That people need more money is a problem of society, yes, but it’s not clear to me that a failure to understand differential equations is at the heart of it. (Also, really, differential equations are overkill to understand credit card debt. A calculator with a repeat-the-last-operation feature and ten minutes to play is enough.)
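    Since I claim a repeat-the-last-operation calculator suffices, here is that whole model as a sketch in Python. The balance, rate, and payment are invented numbers for illustration; nothing here is financial advice.

    ```python
    # The whole credit-card model, no differential equations: repeat one
    # operation. Balance, rate, and payment are made up for illustration.
    balance = 1000.00    # starting debt, dollars
    monthly_rate = 0.02  # 2% per month, roughly a 24% APR card
    payment = 25.00      # fixed monthly payment

    months = 0
    while balance > 0 and months < 600:
        balance = balance * (1 + monthly_rate) - payment
        months += 1

    print(months)  # how many repeats of that one operation clear the debt
    ```

    Pay less than the first month’s interest (here, twenty dollars) and the loop never ends; pay a bit more, as here, and it closes out, though slowly.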

     
  • Joseph Nebus 4:00 pm on Friday, 23 June, 2017 Permalink | Reply
    Tags: , deli, , ,   

    Why Shouldn’t We Talk About Mathematics In The Deli Line? 


    You maybe saw this picture going around your social media a couple days ago. I did, but I’m connected to a lot of mathematics people who were naturally interested. Everyone who did see it was speculating about what the story behind it was. Thanks to the CBC, now we know.

    So it’s the most obvious if least excitingly gossip-worthy explanation: this Middletown, Connecticut deli is close to the Wesleyan mathematics department’s office and at least one mathematician was too engrossed talking about the subject to actually place an order. We’ve all been stuck behind people like that. It’s enough to make you wonder whether the Cole slaw there is actually that good. (Don’t know, I haven’t been there, not sure I can dispatch my agent in Groton to check just for this.) The sign’s basically a loving joke, which is a relief. Could be any group of people who won’t stop talking about a thing they enjoy, really. And who have a good joking relationship with the deli-owner.

    The CBC’s interview gets into whether mathematicians have a sense of humor. I certainly think we do. I think the habit of forming proofs builds a habit of making a wild assumption and seeing where that gets you, often to a contradiction. And it’s hard not to see that the same skills that will let you go from, say, “suppose every angle can be trisected” to a nonsensical conclusion will also let you suppose something whimsical and get to a silly result.

    Dr Anna Haensch, who made the sign kind-of famous-ish, gave as an example of a quick little mathematician’s joke going to the board and declaring “let L be a group”. I should say that’s not a riotously funny mathematician’s joke, not the way (like) talking about things purple and commutative are. It’s just a little passing quip, like if you showed a map of New Jersey and labelled the big city just across the Hudson River as “East Hoboken” or something.

    But L would be a slightly disjoint name for a group. Not wrong, just odd, unless the context of the problem gave us good reason for the name. Names of stuff are labels, and so are arbitrary and may be anything. But we use them to carry information. If we know something is a group then we know something about the way it behaves. So if in a dense mass of symbols we see that something is given one of the standard names for groups — G, H, maybe G or H with a subscript or a ‘ or * on top of it — then however lost we might be, we know this thing is a group and should have these properties.

    It’s a bit of doing what science fiction fans term “incluing”. That’s giving someone the necessary backstory without drawing attention to the fact we’re doing it. To avoid G or H would be like avoiding “Jane [or John] Doe” as the name for a specific but unidentified person. You can do it, but it seems off.

     
  • Joseph Nebus 4:00 pm on Wednesday, 21 June, 2017 Permalink | Reply
    Tags: David Hilbert, ,   

    Great Stuff By David Hilbert That I’ll Never Finish Reading 


    And then this came across my Twitter feed (@Nebusj, for the record):

    It linked to Project Gutenberg’s edition of David Hilbert’s The Foundations Of Geometry. David Hilbert you may know as the guy who gave us 20th Century mathematics. He had help. But he worked hard on the axiomatizing of mathematics, getting rid of intuition and relying on nothing but logical deduction for all mathematical results. “Didn’t we do that already, like, with the Ancient Greeks and all?” you may ask. We aimed for that since the Ancient Greeks, yes, but it’s really hard to do. The Foundations Of Geometry is an example of Hilbert’s work of looking very critically at all of the things we assume, and all of the things that we need, and all of the things we need defined, and trying to get at it all.

    Hilbert gave much of 20th Century Mathematics its shape with a list presented at the 1900 International Congress of Mathematicians in Paris. This formed a great list of important unsolved problems. Some of them have been solved since. Some are still unsolved. Some have been proven unsolvable. Each of these results is very interesting. This tells you something about how great his questions were; only a great question is interesting however it turns out.

    The Project Gutenberg edition of The Foundations Of Geometry is, mercifully, not a stitched-together PDF version of an ancient library copy. It’s a PDF compiled by, if I’m reading the credits correctly, Joshua Hutchinson, Roger Frank, and David Starner. The text was copied into LaTeX, an incredibly powerful and standard mathematics-writing tool, and compiled into something that … looks a little bit like every mathematics paper and thesis you’ll read these days. It’s a bit odd for a 120-year-old text to look quite like that. But it does mean the formatting looks familiar, if you’re the sort of person who reads mathematics regularly.

    (There are a couple lines that read weird to me, but I can’t judge whether that owes to a typo in the preparation of the document or just that the translation from Hilbert’s original German to English produced odd effects. I’m thinking here of Axiom I, 2, shown on page 2, which I understand but feel weird about. Roll with it.)

     
  • Joseph Nebus 4:00 pm on Sunday, 18 June, 2017 Permalink | Reply
    Tags: , , , Flash Gordon, Francis, , ,   

    Reading the Comics, June 17, 2017: Icons Of Mathematics Edition 


    Comic Strip Master Command just barely missed being busy enough for me to split the week’s edition. Fine for them, I suppose, although it means I’m going to have to scramble together something for the Tuesday or the Thursday posting slot. Ah well. As befits the comics, there’s a fair bit of mathematics as an icon in the past week’s selections. So let’s discuss.

    Mark Anderson’s Andertoons for the 11th is our Mark Anderson’s Andertoons for this essay. Kind of a relief to have that in right away. And while the cartoon shows a real disaster of a student at the chalkboard, there is some truth to the caption. Ruling out plausible-looking wrong answers is progress, usually. So is coming up with plausible-looking answers to work out whether they’re right or wrong. The troubling part here, I’d say, is that the kid came up with pretty poor guesses about what the answer might be. He ought to be able to guess that it’s got to be an odd number, and has to be less than 10, and really ought to be less than 7. If you spot that then you can’t make more than two wrong guesses.

    Patrick J Marrin’s Francis for the 12th starts with what sounds like a logical paradox, about whether the Pope could make an infallibly true statement that he was not infallible. Really it sounds like a bit of nonsense. But the limits of what we can know about a logical system will often involve questions of this form. We ask whether something can prove whether it is provable, for example, and come up with a rigorous answer. So that’s the mathematical content which justifies my including this strip here.

    Border Collies are, as we know, highly intelligent. The dogs are gathered around a chalkboard full of mathematics. 'I've checked my calculations three times. Even if master's firm and calm and behaves like an alpha male, we *should* be able to whip him.'

    Niklas Eriksson’s Carpe Diem for the 13th of June, 2017. Yes, yes, it’s easy to get people excited for the Revolution, but it’ll come to a halt when someone asks about how they get the groceries afterwards.

    Niklas Eriksson’s Carpe Diem for the 13th is a traditional use of the blackboard full of mathematics as symbolic of intelligence. Of course ‘E = mc^2’ gets in there. I’m surprised that both π and 3.14 do, too, for as little as we see on the board.

    Mark Anderson’s Andertoons for the 14th is a nice bit of reassurance. Maybe the cartoonist was worried this would be a split-week edition. The kid seems to be the same one as the 11th, but the teacher looks different. Anyway there’s a lot you can tell about shapes from their perimeter alone. The one which most startles me comes up in calculus: by doing the right calculation about the lengths and directions of the edge of a shape you can tell how much area is inside the shape. There’s a lot of stuff in this field — multivariable calculus — that’s about swapping between “stuff you know about the boundary of a shape” and “stuff you know about the interior of the shape”. And finding area from tracing the boundary is one of them. It’s still glorious.
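    That boundary-to-area trade can be shown in a few lines. This sketch is mine, not anything from the strip: it uses the shoelace formula, the discrete cousin of the Green’s theorem calculation, and an invented polygon.

    ```python
    # Area of a simple polygon computed from nothing but its boundary,
    # via the shoelace formula (a discrete case of Green's theorem).
    def shoelace_area(points):
        """Walk the boundary vertices once; accumulate signed cross terms."""
        area = 0.0
        for i in range(len(points)):
            x1, y1 = points[i]
            x2, y2 = points[(i + 1) % len(points)]
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0

    # A 3-4-5 right triangle: tracing the edge gives the interior area.
    print(shoelace_area([(0, 0), (4, 0), (0, 3)]))  # 6.0
    ```

    Nothing in the calculation ever looks at the interior; the edge alone carries the area.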

    Samson’s Dark Side Of The Horse for the 14th is a counting-sheep joke and a Pi Day joke. I suspect the digits of π would be horrible for lulling one to sleep, though. They lack the just-enough-order that something needs for a semiconscious mind to drift off. Horace would probably be better off working out Collatz sequences.
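    For anyone who wants to try Horace’s alternative insomnia cure, here is a quick sketch of a Collatz sequence. The starting value is arbitrary; so far as anyone knows, every start falls to 1 eventually.

    ```python
    # A sketch of the Collatz rule: halve evens, triple-and-add-one odds.
    def collatz(n):
        seq = [n]
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            seq.append(n)
        return seq

    print(collatz(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
    ```

    The sequences have just enough unpredictable order to occupy a semiconscious mind, which is rather the point.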

    Dana Simpson’s Phoebe and her Unicorn for the 14th mentions mathematics as iconic of what you do at school. Book reports also make the cut.

    Dr Zarkov: 'Flash, this is Professor Quita, the inventor of the ... ' Prof Quita: 'Caramba! NO! I am a mere mathematician! With numbers, equations, paper, pencil, I work ... it is my good amigo, Dr Zarkov, who takes my theories and builds ... THAT!!' He points to a bigger TV screen.

    Dan Barry’s Flash Gordon for the 31st of July, 1962, rerun the 16th of June, 2017. I am impressed that Dr Zarkov can make a TV set capable of viewing alternate universes. I still literally do not know how it is possible that we have sound for our new TV set, and I labelled and connected every single wire in the thing. Oh, wouldn’t it be a kick if Dr Zarkov has the picture from one alternate universe but the sound from a slightly different other one?

    Dan Barry’s Flash Gordon for the 31st of July, 1962 and rerun the 16th I’m including just because I love the old-fashioned image of a mathematician in Professor Quita here. At this point in the comic strip’s run it was set in the far-distant future year of 1972, and the action here is on one of the busy multinational giant space stations. Flash himself is just back from Venus where he’d set up some dolphins as assistants to a fish-farming operation helping to feed that world and ours. And for all that early-60s futurism look at that gorgeous old adding machine he’s still got. (Professor Quita’s discovery is a way to peer into alternate universes, according to the next day’s strip. I’m kind of hoping this means they’re going to spend a week reading Buck Rogers.)

     
  • Joseph Nebus 5:00 pm on Friday, 16 June, 2017 Permalink | Reply
    Tags: , , , , , , , ,   

    Why Stuff Can Orbit, Part 9: How The Spring In The Cosmos Behaves 


    Previously:

    And the supplemental reading:


    First, I thank Thomas K Dye for the banner art I have for this feature! Thomas is the creator of the longrunning web comic Newshounds. He’s hoping soon to finish up special editions of some of the strip’s stories and to publish a definitive edition of the comic’s history. He’s also got a Patreon account to support his art habit. Please give his creations some of your time and attention.

    Now back to central forces. I’ve run out of obvious fun stuff to say about a mass that’s in a circular orbit around the center of the universe. Before you question my sense of fun, remember that I own multiple pop histories about the containerized cargo industry and last month I read another one that’s changed my mind about some things. These sorts of problems cover a lot of stuff. They cover planets orbiting a sun and blocks of wood connected to springs. That’s about all we do in high school physics anyway. Well, there’s spheres colliding, but there’s no making a central force problem out of those. You can also make some things that look like bad quantum mechanics models out of that. The mathematics is interesting even if the results don’t match anything in the real world.

    But I’m sticking with central forces that look like powers. These have potential energy functions with rules that look like V(r) = C rn. So far, ‘n’ can be any real number. It turns out ‘n’ has to be larger than -2 for a circular orbit to be stable, but that’s all right. There are lots of numbers larger than -2. ‘n’ carries the connotation of being an integer, a whole (positive or negative) number. But if we want to let it be any old real number like 0.1 or π or 18 and three-sevenths that’s fine. We make a note of that fact and remember it right up to the point we stop pretending to care about non-integer powers. I estimate that’s like two entries off.

    We get a circular orbit by setting the thing that orbits in … a circle. This sounded smarter before I wrote it out like that. Well. We set it moving perpendicular to the “radial direction”, which is the line going from wherever it is straight to the center of the universe. This perpendicular motion means there’s a non-zero angular momentum, which we write as ‘L’ for some reason. For each angular momentum there’s a particular radius that allows for a circular orbit. Which radius? It’s whatever one is a minimum for the effective potential energy:

    V_{eff}(r) = Cr^n + \frac{L^2}{2m}r^{-2}

    This we can find by taking the first derivative of ‘Veff‘ with respect to ‘r’ and finding where that first derivative is zero. This is standard mathematics stuff, quite routine. We can do it with any function, whether it represents something physical or not. So:

    \frac{dV_{eff}}{dr} = nCr^{n-1} - 2\frac{L^2}{2m}r^{-3} = 0

    And after some work, this gets us to the circular orbit’s radius:

    r = \left(\frac{L^2}{nCm}\right)^{\frac{1}{n + 2}}
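    If you’d rather not trust my algebra, a numeric sketch checks that radius. All the values here are invented; n = -1 with a negative C is the gravity-like case, and C times n comes out positive as the formula needs.

    ```python
    # Invented parameters for a gravity-like central force, V(r) = C r^n.
    n, C, m, L = -1.0, -5.0, 2.0, 3.0

    def V_eff(r):
        """Effective potential: C r^n plus the angular-momentum term."""
        return C * r**n + L**2 / (2 * m) * r**(-2)

    # The claimed circular-orbit radius:
    a = (L**2 / (n * C * m))**(1.0 / (n + 2))

    # The effective potential should be higher a little to either side of a:
    assert V_eff(a) < V_eff(a * 0.99)
    assert V_eff(a) < V_eff(a * 1.01)
    print(a)  # about 0.9 with these numbers
    ```

    Nudge the invented constants however you like (keeping n above -2) and the minimum tracks the formula.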

    What I’d like to talk about is if we’re not quite at that radius. If we set the planet (or whatever) a little bit farther from the center of the universe. Or a little closer. Same angular momentum though, so the equilibrium, the circular orbit, should be in the same spot. It happens there isn’t a planet there.

    This enters us into the world of perturbations, which is where most of the big money in mathematical physics is. A perturbation is a little nudge away from an equilibrium. What happens in response to the little nudge is interesting stuff. And here we already know, qualitatively, what’s going to happen: the planet is going to rock around the equilibrium. This is because the circular orbit is a stable equilibrium. I’d described that qualitatively last time. So now I want to talk quantitatively about how the perturbation changes given time.

    Before I get there I need to introduce another bit of notation. It is so convenient to be able to talk about the radius of the circular orbit that would be the equilibrium. I’d called that ‘r’ up above. But I also need to be able to talk about how far the perturbed planet is from the center of the universe. That’s also really hard not to call ‘r’. Something has to give. Since the radius of the circular orbit is not going to change I’m going to give that a new name. I’ll call it ‘a’. There’s several reasons for this. One is that ‘a’ is commonly used for describing the size of ellipses, which turn up in actual real-world planetary orbits. That’s something we know because this is like the thirteenth part of an essay series about the mathematics of orbits. You aren’t reading this if you haven’t picked up a couple things about orbits on your own. Also we’ve used ‘a’ before, in these sorts of approximations. It was handy in the last supplemental as the point of expansion’s name. So let me make that unmistakable:

    a \equiv r = \left(\frac{L^2}{nCm}\right)^{\frac{1}{n + 2}}

    The \equiv there means “defined to be equal to”. You might ask how this is different from “equals”. It seems like more emphasis to me. Also, there are other names for the circular orbit’s radius that I could have used. ‘re‘ would be good enough, as the subscript would suggest “radius of equilibrium”. Or ‘r0‘ would be another popular choice, the 0 suggesting that this is something of key, central importance and also looking kind of like a circle. (That’s probably coincidence.) I like the ‘a’ better there because I know how easy it is to drop a subscript. If you’re working on a problem for yourself that’s easy to fix, with enough cursing and redoing your notes. On a board in front of class it’s even easier to fix since someone will ask about the lost subscript within three lines. In a post like this? It would be a mess.

    So now I’m going to look at possible values of the radius ‘r’ that are close to ‘a’. How close? Close enough that ‘Veff‘, the effective potential energy, looks like a parabola. If it doesn’t look much like a parabola then I look at values of ‘r’ that are even closer to ‘a’. (Do you see how the game is played? If you don’t, look closer. Yes, this is actually valid.) If ‘r’ is that close to ‘a’, then we can get away with this polynomial expansion:

    V_{eff}(r) \approx V_{eff}(a) + m\cdot(r - a) + \frac{1}{2} m_2 (r - a)^2

    where

    m = \frac{dV_{eff}}{dr}\left(a\right)	\\ m_2  = \frac{d^2V_{eff}}{dr^2}\left(a\right)

    The “approximate” there is because this is an approximation. V_{eff}(r) is in truth equal to the thing on the right-hand-side there plus something that isn’t (usually) zero, but that is small.
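    Here is a numerical sketch of that smallness, with invented parameters (the gravity-like n = -1, C < 0 case): the parabola built from the value and second derivative at ‘a’ tracks the true effective potential closely for ‘r’ near ‘a’.

    ```python
    # Invented parameters for a gravity-like central force, V(r) = C r^n.
    n, C, m, L = -1.0, -5.0, 2.0, 3.0

    def V_eff(r):
        return C * r**n + L**2 / (2 * m) * r**(-2)

    a = (L**2 / (n * C * m))**(1.0 / (n + 2))

    # The slope term vanishes at a, so only the value and the second
    # derivative (taken here by finite differences) survive:
    h = 1e-5
    m2 = (V_eff(a + h) - 2 * V_eff(a) + V_eff(a - h)) / h**2

    def V_quad(r):
        return V_eff(a) + 0.5 * m2 * (r - a)**2

    r = a * 1.02  # two percent away from the circular orbit
    print(abs(V_eff(r) - V_quad(r)))  # tiny compared to V_eff(r) itself
    ```

    Push ‘r’ farther from ‘a’ and the discrepancy grows, which is exactly the game described above: when the parabola stops fitting, look closer.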

    I am sorry beyond my ability to describe that I didn’t make that ‘m’ and ‘m2‘ consistent last week. That’s all right. One of these is going to disappear right away.

    Now, what is V_{eff}(a) ? Well, that’s whatever you get from putting in ‘a’ wherever you start out seeing ‘r’ in the expression for V_{eff}(r) . I’m not going to bother with that. Call it math, fine, but that’s just a search-and-replace on the character ‘r’. Also, where I’m going next, it’s going to disappear, never to be seen again, so who cares? What’s important is that this is a constant number. If ‘r’ changes, the value of V_{eff}(a) does not, because ‘r’ doesn’t appear anywhere in V_{eff}(a) .

    How about ‘m’? That’s the value of the first derivative of ‘Veff‘ with respect to ‘r’, evaluated when ‘r’ is equal to ‘a’. That might be something. It’s not, because of what ‘a’ is. It’s the value of ‘r’ which would make \frac{dV_{eff}}{dr}(r) equal to zero. That’s why ‘a’ has that value instead of some other, any other.

    So we’ll have a constant part ‘Veff(a)’, plus a zero part, plus a part that’s a parabola. This is normal, by the way, when we do expansions around an equilibrium. At least it’s common. Good to see it. To find ‘m2‘ we have to take the second derivative of ‘Veff(r)’ and then evaluate it when ‘r’ is equal to ‘a’ and ugh but here it is.

    \frac{d^2V_{eff}}{dr^2}(r) = n (n - 1) C r^{n - 2} + 3\cdot\frac{L^2}{m}r^{-4}

    And at the point of approximation, where ‘r’ is equal to ‘a’, it’ll be:

    m_2 = \frac{d^2V_{eff}}{dr^2}(a) = n (n - 1) C a^{n - 2} + 3\cdot\frac{L^2}{m}a^{-4}

    We know exactly what ‘a’ is so we could write that out in a nice big expression. You don’t want to. I don’t want to. It’s a bit of a mess. I mean, it’s not hard, but it has a lot of symbols in it and oh all right. Here. Look fast because I’m going to get rid of that as soon as I can.

    m_2 = \frac{d^2V_{eff}}{dr^2}(a) = n (n - 1) C \left(\frac{L^2}{n C m}\right)^{\frac{n - 2}{n + 2}} + 3\cdot\frac{L^2}{m}\left(\frac{L^2}{n C m}\right)^{-\frac{4}{n + 2}}

    For the values of ‘n’ that we actually care about because they turn up in real actual physics problems this expression simplifies some. Enough, anyway. If we pretend we know nothing about ‘n’ besides that it is a number bigger than -2 then … ugh. We don’t have a lot that can clean it up.

    Here’s how. I’m going to define an auxiliary little function. Its role is to contain our symbolic sprawl. It has a legitimate role too, though. At least it represents something that it makes sense to give a name. It will be a new function, named ‘F’ and that depends on the radius ‘r’:

    F(r) \equiv -\frac{dV}{dr}

    Notice that’s the derivative of the original ‘V’, not the angular-momentum-equipped ‘Veff‘. This is the secret of its power. It doesn’t do anything to make V_{eff}(r) easier to work with. It starts being good when we take its derivatives, though:

    \frac{dV_{eff}}{dr} = -F(r) - \frac{L^2}{m}r^{-3}

    That already looks nicer, doesn’t it? It’s going to be really slick when you think about what ‘F(a)’ is. Remember that ‘a’ is the value for ‘r’ which makes the derivative of ‘Veff‘ equal to zero. So … I may not know much, but I know this:

    0 = \frac{dV_{eff}}{dr}(a) = -F(a) - \frac{L^2}{m}a^{-3}	\\ F(a) = -\frac{L^2}{ma^3}

    I’m not going to say what value F(r) has for other values of ‘r’ because I don’t care. But now look at what it does for the second derivative of ‘Veff‘:

    \frac{d^2 V_{eff}}{dr^2}(r) = -F'(r) + 3\frac{L^2}{mr^4}

    Here the ‘F'(r)’ is a shorthand way of writing ‘the derivative of F with respect to r’. You can do when there’s only the one free variable to consider. And now something magic that happens when we look at the second derivative of ‘Veff‘ when ‘r’ is equal to ‘a’ …

    \frac{d^2 V_{eff}}{dr^2}(a) = -F'(a) - \frac{3}{a} F(a)

    We get away with this because we happen to know that ‘F(a)’ is equal to -\frac{L^2}{ma^3} and doesn’t that work out great? We’ve turned a symbolic mess into a … less symbolic mess.
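    A quick numeric check of that identity, with invented values for the constants (again the gravity-like n = -1, C < 0 case):

    ```python
    # Check numerically that  d^2 V_eff/dr^2 (a)  =  -F'(a) - (3/a) F(a).
    # All parameter values are invented for illustration.
    n, C, m, L = -1.0, -5.0, 2.0, 3.0
    a = (L**2 / (n * C * m))**(1.0 / (n + 2))

    def F(r):            # F = -dV/dr, with V = C r^n
        return -n * C * r**(n - 1)

    def Fprime(r):       # dF/dr
        return -n * (n - 1) * C * r**(n - 2)

    # Second derivative of V_eff at a, written out directly:
    lhs = n * (n - 1) * C * a**(n - 2) + 3 * L**2 / m * a**(-4)
    # The same quantity through the force function:
    rhs = -Fprime(a) - (3.0 / a) * F(a)
    print(lhs, rhs)  # the two should agree
    ```

    The agreement leans on F(a) being exactly -L²/(ma³), which is the whole secret of picking ‘a’ as the equilibrium.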

    Now why do I say it’s legitimate to introduce ‘F(r)’ here? It’s because minus the derivative of the potential energy with respect to the position of something can be something of actual physical interest. It’s the amount of force exerted on the particle by that potential energy at that point. The amount of force on a thing is something that we could imagine being interested in. Indeed, we’d have used that except potential energy is usually so much easier to work with. I’ve avoided it up to this point because it wasn’t giving me anything I needed. Here, I embrace it because it will save me from some awful lines of symbols.

    Because with this expression in place I can write the approximation to the effective potential energy as:

    V_{eff}(r) \approx V_{eff}(a) + \frac{1}{2} \left( -F'(a) - \frac{3}{a}F(a) \right) (r - a)^2

    So if ‘r’ is close to ‘a’, then the polynomial on the right is a good enough approximation to the effective potential energy. And that potential energy has the shape of a spring’s potential energy. We can use what we know about springs to describe its motion. Particularly, we’ll have this be true:

    \frac{dp}{dt} = -\frac{dV_{eff}}{dr}(r) = -\left( -F'(a) - \frac{3}{a} F(a)\right)\left(r - a\right)

    Here, ‘p’ is the (radial) momentum of whatever’s orbiting, which we can treat as equal to ‘m\frac{dr}{dt}’, the mass of the orbiting thing times the rate at which its distance from the center changes. You may sense in me some reluctance about doing this, what with that ‘we can treat as equal to’ talk. There’s reasons for this and I’d have to get deep into geometry to explain why. I can get away with specifically this use because the problem allows it. If you’re trying to do your own original physics problem inspired by this thread, and it’s not orbits like this, be warned. This is a spot that could open up to a gigantic danger pit, lined at the bottom with sharp spikes and angry poison-clawed mathematical tigers and I bet it’s raining down there too.

    So we can rewrite all this as

    m\frac{d^2r}{dt^2} = -\frac{dV_{eff}}{dr}(r) = -\left( -F'(a) - \frac{3}{a} F(a)\right)\left(r - a\right)

    And when we learned everything interesting there was to know about springs we learned what the solutions to this look like. Oh, in that essay the variable that changed over time was called ‘x’ and here it’s called ‘r’, but that’s not an actual difference. ‘r’ will be some sinusoidal curve:

    r(t) = a + A \cos\left(\sqrt{\frac{k}{m}} t\right) + B \sin\left(\sqrt{\frac{k}{m}} t\right)

    where, here, ‘k’ is equal to that whole mass of constants on the right-hand side:

    k = -\left( F'(a) + \frac{3}{a} F(a)\right)

    I don’t know what ‘A’ and ‘B’ are. It’ll depend on just what the perturbation is like, how far the planet is from the circular orbit. But I can tell you what the behavior is like. The planet will wobble back and forth around the circular orbit, sometimes closer to the center, sometimes farther away. It’ll spend as much time closer to the center than the circular orbit as it does farther away. And the period of that oscillation will be

    T = 2\pi\sqrt{\frac{m}{k}} = 2\pi\sqrt{\frac{m}{-\left(F'(a) + \frac{3}{a}F(a)\right)}}
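One sanity check on that period, for the single force law where we already know the answer: Newtonian gravity, which is V(r) = Cr^n with n = -1 and C = -GMm. A small numerical sketch (the Earth-Sun figures below are rounded reference values I’m supplying for illustration, not anything from the post):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 / (kg s^2)
M = 1.989e30    # mass of the Sun, kg
m = 5.972e24    # mass of the Earth, kg
a = 1.496e11    # radius of the (nearly) circular orbit, m

# For V(r) = -G*M*m/r the force and its derivative are:
def F(r):
    return -G*M*m/r**2

def Fprime(r):
    return 2*G*M*m/r**3

# The 'k' from the text, and the period it predicts.
k = -(Fprime(a) + (3/a)*F(a))    # works out to G*M*m/a**3
T = 2*math.pi*math.sqrt(m/k)     # same as 2*pi*sqrt(a**3/(G*M))

print(T / 86400)                 # in days: close to 365, as we'd hope
```

That the formula collapses to Kepler’s third law for gravity is reassuring, and the fact that this radial wobble takes just as long as one trip around the orbit is tied to why gravity’s perturbed orbits manage to close up.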

    This tells us something about what the orbit of a thing not in a circular orbit will be like. Yes, I see you in the back there, quivering with excitement about how we’ve got to elliptical orbits. You’re moving too fast. We haven’t got that. There will be elliptical orbits, yes, but only for a very particular power ‘n’ for the potential energy. Not for most of them. We’ll see.

    It might strike you there’s something in that square root. We need to take the square root of a positive number, so maybe this will tell us something about what kinds of powers we’re allowed. It’s a good thought. It turns out not to tell us anything useful, though. Suppose we started with V(r) = Cr^n . Then F(r) = -nCr^{n - 1}, and F'(r) = -n(n - 1)Cr^{n - 2} . Sad to say, this leads us to a journey which reveals that we need ‘n’ to be larger than -2 or else we don’t get oscillations around a circular orbit. We already knew that, though. We already found we needed it to have a stable equilibrium before. We can see there not being a period for these oscillations around the circular orbit as another expression of the circular orbit not being stable. Sad to say, we haven’t got something new out of this.
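The journey that paragraph skips over can be done symbolically. With sympy (my tooling choice, for illustration) the constant under the square root collapses to nC(n + 2)a^{n - 2}, so for an attractive power-law force, where nC is positive, the square root exists exactly when ‘n’ is larger than -2:

```python
import sympy as sp

r, a, C, n = sp.symbols('r a C n', positive=True)

F = -sp.diff(C*r**n, r)   # F(r) = -n*C*r**(n - 1)

# k = -(F'(a) + (3/a) F(a)), the constant under the square root.
k = -(sp.diff(F, r).subs(r, a) + (3/a)*F.subs(r, a))

# It simplifies to n*(n + 2)*C*a**(n - 2), which for attractive forces
# (n*C positive) is positive exactly when n > -2.
assert sp.simplify(k - n*(n + 2)*C*a**(n - 2)) == 0
```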

    We will get to new stuff, though. Maybe even ellipses.

     
  • Joseph Nebus 4:00 pm on Tuesday, 13 June, 2017 Permalink | Reply
    Tags: , ,   

    My Mathematics Reading For The 13th of June 


    I’m working on the next Why Stuff Can Orbit post, this one to feature a special little surprise. In the meanwhile here’s some of the things I’ve read recently and liked.

    The Theorem of the Day is just what the name offers. They’re fit onto single slides, so there’s not much text to read. I’ll grant some of them might be hard reading at once, though, if you’re not familiar with the lingo. Anyway, this particular theorem, the Lindemann-Weierstrass Theorem, is one of the famous ones. Also one of the best-named ones. Karl Weierstrass is one of those names you find all over analysis. Over the latter half of the 19th century he attacked the logical problems that had bugged calculus for the previous three centuries and beat them all. I’m lying, but not by much. Ferdinand von Lindemann’s name turns up less often, but he’s known in mathematics circles for proving that π is transcendental (and so, ultimately, that the circle can’t be squared by compass and straightedge). And he was David Hilbert’s thesis advisor.

    The Lindemann-Weierstrass Theorem is one of those little utility theorems that’s neat on its own, yes, but is good for proving other stuff. This theorem says that if a given number is algebraic (ask about that some A To Z series) then e raised to that number has to be transcendental, and vice-versa. (The exception: e raised to 0 is equal to 1.) The page also mentions one of those fun things you run across when you have a scientific calculator and can repeat an operation on whatever the result of the last operation was.

    I’ve mentioned Maths By A Girl before, but, it’s worth checking in again. This is a piece about Apéry’s Constant, which is one of those numbers mathematicians have heard of, and that we don’t know whether it’s transcendental or not. It’s hard proving numbers are transcendental. If you go out trying to build a transcendental number it’s easy, but otherwise, you have to hope you know your number is the exponential of an algebraic number.

    I forget which Twitter feed brought this to my attention, but here’s a couple geometric theorems demonstrated and explained some by Dave Richeson. There’s something wonderful in a theorem that’s mostly a picture. It feels so supremely mathematical to me.

    And last, Katherine Bourzac writing for Nature.com reports the creation of a two-dimensional magnet. This delights me since one of the classic problems in statistical mechanics is a thing called the Ising model. It’s a basic model for the mathematics of how magnets would work. The one-dimensional version is simple enough that you can give it to undergrads and have them work through the whole problem. The two-dimensional version is a lot harder to solve and I’m not sure I ever saw it laid out even in grad school. (Mind, I went to grad school for mathematics, not physics, and the subject is a lot more physics.) The four- and higher-dimensional model can be solved by a clever approach called mean field theory. The three-dimensional model … I don’t think has any exact solution, which seems odd given how that’s the version you’d think was most useful.

    That there’s a real two-dimensional magnet (well, a one-molecule-thick magnet) doesn’t really affect the model of two-dimensional magnets. The model is interesting enough for its mathematics, which teaches us about all kinds of phase transitions. And it’s close enough to the way certain aspects of real-world magnets behave to enlighten our understanding. The topic couldn’t avoid drawing my eye, is all.

     
  • Joseph Nebus 4:00 pm on Sunday, 11 June, 2017 Permalink | Reply
    Tags: , , , Randolph Itch 2am, , Tank McNamara, The Flying McCoys, Wee Pals,   

    Reading the Comics, June 10, 2017: Some Vintage Comics Edition 


    It’s too many comics to call this a famine edition, after last week’s feast. But there’s not a lot of theme to last week’s mathematically-themed comic strips. There’s a couple that include vintage comic strips from before 1940, though, so let’s run with that as a title.

    Glenn McCoy and Gary McCoy’s The Flying McCoys for the 4th of June is your traditional blackboard full of symbols to indicate serious and deep thought on a subject. It’s a silly subject, but that’s fine. The symbols look to me like gibberish, but clown research will go along non-traditional paths, I suppose.

    Bill Hinds’s Tank McNamara for the 4th is built on mathematics’ successful invasion and colonization of sports management. Analytics, sabermetrics, Moneyball, whatever you want to call it, is built on ideas not far removed from the quality control techniques that changed corporate management so. Look for patterns; look for correlations; look for the things that seem to predict other things. It seems bizarre, almost inhuman, that we might be able to think of football players as being all of a kind, that what we know about (say) one running back will tell us something about another. But if we put roughly similarly capable people through roughly similar training and set them to work in roughly similar conditions, then we start to see why they might perform similarly. Models can help us make better, more rational, choices.

    Morrie Turner’s Wee Pals rerun for the 4th is another word-problem resistance joke. I suppose it’s also a reminder about the unspoken assumptions in a problem. It also points out why mathematicians end up speaking in an annoyingly precise manner. It’s an attempt to avoid being shown up like Oliver is.

    Which wouldn’t help with Percy Crosby’s Skippy for the 7th of April, 1930, and rerun the 5th. Skippy’s got a smooth line of patter to get out of his mother’s tutoring. You can see where Percy Crosby has the weird trait of drawing comics in 1930 that would make sense today still; few pre-World-War-II comics do.

    Why some of us don't like math. One part of the brain: 'I'm trying to solve an equation, but it's HARD when someone in here keeps shouting FIGHT, FLIGHT, FIGHT, FLIGHT the whole time.' Another part: 'I know, but we should fight or run away.' Another part: 'I just want to cry.'

    Niklas Eriksson’s Carpe Diem for the 7th of June, 2017. If I may intrude in someone else’s work, it seems to me that the problem-solver might find a hint to what ‘x’ is by looking to the upper right corner of the page and the x = \sqrt{13} already there.

    Niklas Eriksson’s Carpe Diem for the 7th is a joke about mathematics anxiety. I don’t know that it actually explains anything, but, eh. I’m not sure there is a rational explanation for mathematics anxiety; if there were, I suppose it wouldn’t be anxiety.

    George Herriman’s Krazy Kat for the 15th of July, 1939, and rerun the 8th, extends that odd little faintly word-problem-setup of the strips I mentioned the other day. I suppose identifying when two things moving at different speeds will intersect will always sound vaguely like a story problem.

    Krazy: 'The ida is that I run this way at fotty miles a hour eh?' Ignatz: 'Right, and my good arm will speed this brick behind you, at a sixty-mile gait - come on - get going - ' And Krazy runs past a traffic signal. The brick reaches the signal, which has changed to 'stop', and drops dead. Ignatz: 'According to the ballistic law, my projectile must be well up to him by now.' Officer Pupp: 'Unless the traffic law interferes, mousie.'

    George Herriman’s Krazy Kat for the 15th of July, 1939, as rerun the 8th of June, 2017. I know the comic isn’t to everyone’s taste, but I like it. I’m also surprised to see something as directly cartoonish as the brick stopping in midair like that in the third panel. The comic is usually surreal, yes, but not that way.

    Tom Toles’s Randolph Itch, 2 am rerun for the 9th is about the sometimes-considered third possibility from a fair coin toss, and how to rig the results of that.

     
    • goldenoj 9:01 pm on Sunday, 11 June, 2017 Permalink | Reply

      Skippy is fascinating. Had to check if it was really from the 30s http://www.gocomics.com/skippy/2017/06/06 might also be a math comic.

      You might want to put your Twitter handle in the sidebar – didn’t realize I had already seen you there via the blog-bot.


      • Joseph Nebus 11:57 pm on Monday, 12 June, 2017 Permalink | Reply

        I didn’t realize I didn’t have my Twitter handle in the sidebar. Thanks, though, I’m glad to do stuff that makes me easier to find or understand, especially if it doesn’t require ongoing work.

        Skippy, now, that’s not just a 1930s comic but one of the defining (American) comic strips. Basically every comic strip that stars kids who think is imitating it, either directly or through its influences, particularly Charles Schulz and Peanuts. It’s uncanny, especially when you compare it to its contemporaries, how nearly seamlessly it would fit into a modern comics page. It’s rather like Robert Benchley or Frank Fay in that way; now-obscure humorists or performers whose work is so modern and so influential that a wide swath of the modern genre is quietly imitating them.


      • Joseph Nebus 11:58 pm on Monday, 12 June, 2017 Permalink | Reply

        Oh yes, and you’re right; I could’ve fit the comic from the 6th of June into a Reading the Comics post if I’d thought a bit more about it. Good eye!


  • Joseph Nebus 4:00 pm on Friday, 9 June, 2017 Permalink | Reply
    Tags: , , , , Mr Boffo, , perfect numbers, Pop Culture Shock Therapy, ,   

    Reading the Comics, June 3, 2017: Feast Week Conclusion Edition 


    And now finally I can close out last week’s many mathematically-themed comic strips. I had hoped to post this Thursday, but the Why Stuff Can Orbit supplemental took up my writing energies and eventually timeslot. This also ends up being the first time I’ve had one of Joe Martin’s comic strips since the Houston Chronicle ended its comics pages and I admit I’m not sure how I’m going to work this. I’m also not perfectly sure what the comic strip means.

    So Joe Martin’s Mister Boffo for the 1st of June seems to be about a disastrous mathematics exam with a kid bad enough he hasn’t even got numbers exactly to express the score. Also I’m not sure there is a way to link to the strip I mean exactly; the archives for Martin’s strips are not … organized the way I would have done. Well, they’re his business.

    A Time To Worry: '[Our son] says he got a one-de-two-three-z on the math test.'

    So Joe Martin’s Mister Boffo for the 1st of June, 2017. The link is probably worthless, since I can’t figure out how to work its archives. Good luck yourselves with it.

    Greg Evans’s Luann Againn for the 1st reruns the strip from the 1st of June, 1989. It’s your standard resisting-the-word-problem joke. On first reading the strip I didn’t get what the problem was asking for, and supposed that the text had garbled the problem, if there were an original problem. That was my sloppiness is all; it’s a perfectly solvable question once you actually read it.

    J C Duffy’s Lug Nuts for the 1st — another day that threatened to be a Reading the Comics post all on its own — is a straggler Pi Day joke. It’s just some Dadaist clowning about.

    Doug Bratton’s Pop Culture Shock Therapy for the 1st is a wordplay joke that uses word problems as emblematic of mathematics. I’m okay with that; much of the mathematics that people actually want to do amounts to extracting from a situation the things that are relevant and forming an equation based on that. This is what a word problem is supposed to teach us to do.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 1st — maybe I should have done a Reading the Comics for that day alone — riffs on the idle speculation that God would be a mathematician. It does this by showing a God uninterested in two logical problems. The first is the question of whether there’s an odd perfect number. Perfect numbers are these things that haunt number theory. (Everything haunts number theory.) It starts with idly noticing what happens if you pick a number, find the numbers that divide into it, and add those up. For example, 4 can be divided by 1 and 2; those add to 3. 5 can only be divided by 1; that adds to 1. 6 can be divided by 1, 2, and 3; those add to 6. For a perfect number the divisors add up to the original number. Perfect numbers look rare; for a thousand years or so only four of them (6, 28, 496, and 8128) were known to exist.

    All the perfect numbers we know of are even. More, they’re all numbers that can be written as the product 2^{p - 1} \cdot \left(2^p - 1\right) for certain prime numbers ‘p’. (They’re the ones for which 2^p - 1 is itself a prime number.) What we don’t know, and haven’t got a hint about proving, is whether there are any odd perfect numbers. We know some things about odd perfect numbers, if they exist, the most notable of them being that they’ve got to be incredibly huge numbers, much larger than a googol, the standard idea of an incredibly huge number. Presumably an omniscient God would be able to tell whether there were an odd perfect number, or at least would be able to care whether there were. (It’s also not known if there are infinitely many perfect numbers, by the way. This reminds us that number theory is pretty much nothing but a bunch of easy-to-state problems that we can’t solve.)

    Some miscellaneous other things we know about an odd perfect number, other than whether any exist: if there are odd perfect numbers, they’re not divisible by 105. They’re equal to one more than a whole multiple of 12, or else 117 more than a whole multiple of 468, or 81 more than a whole multiple of 324. They’ve got to have at least 101 prime factors, and there have to be at least ten distinct prime factors. There have to be at least twelve distinct prime factors if 3 isn’t a factor of the odd perfect number. If this seems like a screwy list of things to know about a thing we don’t even know exists, then welcome to number theory.
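The pick-a-number-and-add-its-divisors game is easy to play by computer. A little Python sketch of my own, for illustration, that recovers the four perfect numbers known for that thousand years or so:

```python
def divisor_sum(n):
    """Sum of the proper divisors of n, for n > 1."""
    total = 1                      # 1 divides everything
    i = 2
    while i * i <= n:
        if n % i == 0:
            total += i             # count the divisor pair i and n // i
            if i != n // i:
                total += n // i
        i += 1
    return total

# A number is perfect when its proper divisors add back up to itself.
perfect = [n for n in range(2, 10000) if divisor_sum(n) == n]
print(perfect)   # [6, 28, 496, 8128]

# They all fit the 2**(p - 1) * (2**p - 1) pattern, for p = 2, 3, 5, 7.
assert perfect == [2**(p - 1) * (2**p - 1) for p in (2, 3, 5, 7)]
```

Run the search as far as you like; the next perfect number after 8128 doesn’t turn up until 33,550,336.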

    The beard question I believe is a reference to the logician’s paradox. This is the one postulating a village in which the village barber shaves all, but only, the people who do not shave themselves. Given that, who shaves the barber? It’s an old joke, but if you take it seriously you learn something about the limits of what a system of logic can tell you about itself.

    Tiger: 'I've got two plus four hours of homework. I won't be finished until ten minus three o'clock, or maybe even six plus one and a half o'clock.' Punkin: 'What subject?' Tiger: 'Arithmetic, stupid!'

    Bud Blake’s Tiger rerun for the 2nd of June, 2017. Bonus arithmetic problem: what’s the latest time that this could be? Also, don’t you like how the dog’s tail spills over the panel borders twice? I do.

    Bud Blake’s Tiger rerun for the 2nd has Tiger’s arithmetic homework spill out into real life. This happens sometimes.

    Officer Pupp: 'That Mouse is most sure an oaf of awful dumbness, Mrs Kwakk Wakk - y'know that?' Mrs Kwakk Wakk: 'By what means do you find proof of this, Officer Pupp?' 'His sense of speed is insipid - he doesn't seem to know that if I ran 60 miles an hour, and he only 40, that I would eventually catch up to him.' 'No-' 'Yes- I tell you- yes.' 'He seemed to know that a brick going 60 would catch up to a kat going 40.' 'Oh, he did, did he?' 'Why, yes.'

    George Herriman’s Krazy Kat for the 10th of July, 1939 and rerun the 2nd of June, 2017. I realize that by contemporary standards this is a very talky comic strip. But read Officer Pupp’s dialogue, particularly in the second panel. It just flows with a wonderful archness.

    George Herriman’s Krazy Kat for the 10th of July, 1939 was rerun the 2nd of June. I’m not sure that it properly fits here, but the talk about Officer Pupp running at 60 miles per hour and Ignatz Mouse running forty and whether Pupp will catch Mouse sure reads like a word problem. Later strips in the sequence, including the ways that a tossed brick could hit someone who’d be running faster than it, did not change my mind about this. Plus I like Krazy Kat so I’ll take a flimsy excuse to feature it.

     
    • Joshua K. 1:33 am on Saturday, 10 June, 2017 Permalink | Reply

      I thought that the second question in “Saturday Morning Breakfast Cereal” was meant to imply that mathematicians often have beards; therefore, if God would prefer not to have a beard, he probably isn’t a mathematician.


      • Joseph Nebus 11:48 pm on Monday, 12 June, 2017 Permalink | Reply

        Oh, you may have something there. I’m so used to thinking of beards as a logic problem that I didn’t think of them as a mathematician thing. (In my defense, back in grad school I’m not sure any of the faculty had beards.). I’ll take that interpretation too.


  • Joseph Nebus 4:00 pm on Wednesday, 7 June, 2017 Permalink | Reply
    Tags: , , , ,   

    What Second Derivatives Are And What They Can Do For You 


    Previous supplemental reading for Why Stuff Can Orbit:


    This is another supplemental piece because it’s too much to include in the next bit of Why Stuff Can Orbit. I need some more stuff about how a mathematical physicist would look at something.

    This is also a story about approximations. A lot of mathematics is really about approximations. I don’t mean numerical computing. We all know that when we compute we’re making approximations. We use 0.333333 instead of one-third and we use 3.141592 instead of π. But a lot of precise mathematics, what we call analysis, is also about approximations. We do this by a logical structure that works something like this: take something we want to prove. Now for every positive number ε we can find something — a point, a function, a curve — that’s no more than ε away from the thing we’re really interested in, and which is easier to work with. Then we prove whatever we want to with the easier-to-work-with thing. And since ε can be as tiny a positive number as we want, we can suppose ε is a tinier difference than we can hope to measure. And so the difference between the thing we’re interested in and the thing we’ve proved something interesting about is zero. (This is the part that feels like we’re pulling a scam. We’re not, but this is where it’s worth stopping and thinking about what we mean by “a difference between two things”. When you feel confident this isn’t a scam, continue.) So we proved whatever we proved about the thing we’re interested in. Take an analysis course and you will see this all the time.

    When we get into mathematical physics we do a lot of approximating functions with polynomials. Why polynomials? Yes, because everything is polynomials. But also because polynomials make so much mathematical physics easy. Polynomials are easy to calculate, if you need numbers. Polynomials are easy to integrate and differentiate, if you need analysis. Here that’s the calculus that tells you about patterns of behavior. If you want to approximate a continuous function you can always do it with a polynomial. The polynomial might have to be infinitely long to approximate the entire function. That’s all right. You can chop it off after finitely many terms. This finite polynomial is still a good approximation. It’s just good for a smaller region than the infinitely long polynomial would have been.

    Necessary qualifiers: pages 65 through 82 of any book on real analysis.

    So. Let me get to functions. I’m going to use a function named ‘f’ because I’m not wasting my energy coming up with good names. (When we get back to the main Why Stuff Can Orbit sequence this is going to be ‘U’ for potential energy or ‘E’ for energy.) It’s got a domain that’s the real numbers, and a range that’s the real numbers. To express this in symbols I can write f: \Re \rightarrow \Re . If I have some number called ‘x’ that’s in the domain then I can tell you what number in the range is matched by the function ‘f’ to ‘x’: it’s the number ‘f(x)’. You were expecting maybe 3.5? I don’t know that about ‘f’, not yet anyway. The one thing I do know about ‘f’, because I insist on it as a condition for appearing, is that it’s continuous. It hasn’t got any jumps, any gaps, any regions where it’s not defined. You could draw a curve representing it with a single, if wriggly, stroke of the pen.

    I mean to build an approximation to the function ‘f’. It’s going to be a polynomial expansion, a set of things to multiply and add together that’s easy to find. To make this polynomial expansion I need to choose some point to build the approximation around. Mathematicians call this the “point of expansion” because we froze up in panic when someone asked what we were going to name it, okay? But how are we going to make an approximation to a function if we don’t have some particular point we’re approximating around?

    (One answer we find in grad school when we pick up some stuff from linear algebra we hadn’t been thinking about. We’ll skip it for now.)

    I need a name for the point of expansion. I’ll use ‘a’. Many mathematicians do. Another popular name for it is ‘x0‘. Or if you’re using some other variable name for stuff in the domain then whatever that variable is with subscript zero.

    So my first approximation to the original function ‘f’ is … oh, shoot, I should have some new name for this. All right. I’m going to use ‘F0‘ as the name. This is because it’s one of a set of approximations, each of them a little better than the old. ‘F1‘ will be better than ‘F0‘, but ‘F2‘ will be even better, and ‘F2038‘ will be way better yet. I’ll also say something about what I mean by “better”, although you’ve got some sense of that already.

    I start off by calling the first approximation ‘F0‘ by the way because you’re going to think it’s too stupid to dignify with a number as big as ‘1’. Well, I have other reasons, but they’ll be easier to see in a bit. ‘F0‘, like all its sibling ‘Fn‘ functions, has a domain of the real numbers and a range of the real numbers. The rule defining how to go from a number ‘x’ in the domain to some real number in the range?

    F^0(x) = f(a)

    That is, this first approximation is simply whatever the original function’s value is at the point of expansion. Notice that’s an ‘x’ on the left side of the equals sign and an ‘a’ on the right. This seems to challenge the idea of what an “approximation” even is. But it’s legit. Supposing something to be constant is often a decent working assumption. If you failed to check what the weather for today will be like, supposing that it’ll be about like yesterday will usually serve you well enough. If you aren’t sure where your pet is, you look first wherever you last saw the animal. (Or, yes, where your pet most loves to be. A particular spot, though.)

    We can make this rigorous. A mathematician thinks this is rigorous: you pick any margin of error you like. Then I can find a region near enough to the point of expansion. The value for ‘f’ for every point inside that region is ‘f(a)’ plus or minus your margin of error. It might be a small region, yes. Doesn’t matter. It exists, no matter how tiny your margin of error was.

    But yeah, that expansion still seems too cheap to work. My next approximation, ‘F1‘, will be a little better. I mean that we can expect it will be closer than ‘F0‘ was to the original ‘f’. Or it’ll be as close for a bigger region around the point of expansion ‘a’. What it’ll represent is a line. Yeah, ‘F0‘ was a line too. But ‘F0‘ is a horizontal line. ‘F1‘ might be a line at some completely other angle. If that works better. The second approximation will look like this:

    F^1(x) = f(a) + m\cdot\left(x - a\right)

    Here ‘m’ serves its traditional yet poorly-explained role as the slope of a line. What the slope of that line should be we learn from the derivative of the original ‘f’. The derivative of a function is itself a new function, with the same domain and the same range. There’s a couple ways to denote this. Each way has its strengths and weaknesses about clarifying what we’re doing versus how much we’re writing down. And trying to write down almost anything can inspire confusion in analysis later on. There’s a part of analysis when you have to shift from thinking of particular problems to how problems work in general.

    So I will define a new function, spoken of as f-prime, this way:

    f'(x) = \frac{df}{dx}\left(x\right)

    If you look closely you realize there’s two different meanings of ‘x’ here. One is the ‘x’ that appears in parentheses. It’s the value in the domain of f and of f’ where we want to evaluate the function. The other ‘x’ is the one in the lower side of the derivative, in that \frac{df}{dx} . That’s my sloppiness, but it’s not uniquely mine. Mathematicians keep this straight by using the symbols \frac{df}{dx} so much they don’t even see the ‘x’ down there anymore so have no idea there’s anything to find confusing. Students keep this straight by guessing helplessly about what their instructors want and clinging to anything that doesn’t get marked down. Sorry. But what this means is to “take the derivative of the function ‘f’ with respect to its variable, and then, evaluate what that expression is for the value of ‘x’ that’s in parentheses on the left-hand side”. We can do some things that avoid the confusion in symbols there. They all require adding some more variables and some more notation in, and it looks like overkill for a measly definition like this.

    Anyway. We really just want the derivative evaluated at one point, the point of expansion. That is:

    m = f'(a) = \frac{df}{dx}\left(a\right)

    which by the way avoids that overloaded meaning of ‘x’ there. Put this together and we have what we call the tangent line approximation to the original ‘f’ at the point of expansion:

    F^1(x) = f(a) + f'(a)\cdot\left(x - a\right)

    This is also called the tangent line, because it’s a line that’s tangent to the original function. A plot of ‘F1‘ and the original function ‘f’ are guaranteed to touch one another only at the point of expansion. They might happen to touch again, but that’s luck. The tangent line will be close to the original function near the point of expansion. It might happen to be close again later on, but that’s luck, not design. Most stuff you might want to do with the original function you can do with the tangent line, but the tangent line will be easier to work with. It exactly matches the original function at the point of expansion, and its first derivative exactly matches the original function’s first derivative at the point of expansion.
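Here’s the pair of approximations so far in runnable form. The function (cosine) and the point of expansion (a = 1) are my picks for illustration, not anything from the post:

```python
import math

a = 1.0                 # point of expansion
f = math.cos            # the function to approximate

def fprime(x):          # its derivative
    return -math.sin(x)

def F0(x):
    """Constant approximation: just f(a), whatever x is."""
    return f(a)

def F1(x):
    """Tangent-line approximation."""
    return f(a) + fprime(a) * (x - a)

# Near a the tangent line does much better; far from a, all bets are off.
for x in (1.01, 1.1, 2.0):
    print(x, abs(f(x) - F0(x)), abs(f(x) - F1(x)))
```

At x = 1.01 the tangent line is off by a few hundred-thousandths where the constant misses by nearly a hundredth; by x = 2.0 neither is anything to brag about.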

    We can do better. We can find a parabola, a second-order polynomial that approximates the original function. This will be a function ‘F2(x)’ that looks something like:

    F^2(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 m_2 \left(x - a\right)^2

    What we’re doing is adding a parabola to the approximation. This is that curve that looks kind of like a loosely-drawn U. The ‘m2‘ there measures how spread out the U is. It’s not quite the slope, but it’s kind of like that, which is why I’m using the letter ‘m’ for it. Its value we get from the second derivative of the original ‘f’:

    m_2 = f''(a) = \frac{d^2f}{dx^2}\left(a\right)

    We find the second derivative of a function ‘f’ by evaluating the first derivative, and then, taking the derivative of that. We can denote it with two ‘ marks after the ‘f’ as long as we aren’t stuck wrapping the function name in ‘ marks to set it out. And so we can describe the function this way:

    F^2(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 f''(a) \left(x - a\right)^2

    This will be a better approximation to the original function near the point of expansion. Or it’ll make larger the region where the approximation is good.
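And the parabola term in action, with concrete stand-ins of my own choosing (f = cosine, point of expansion a = 1; the second derivative of cosine is minus cosine):

```python
import math

a = 1.0
f = math.cos

def fprime(x):
    return -math.sin(x)

def fsecond(x):
    return -math.cos(x)

def F2(x):
    """Second-order (parabola) approximation around a."""
    return f(a) + fprime(a)*(x - a) + 0.5*fsecond(a)*(x - a)**2

# At x = 1.1 the parabola misses cos(1.1) only around the fourth
# decimal place, where the tangent line alone misses in the third.
print(abs(f(1.1) - F2(1.1)))
```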

    If the first derivative of a function at a point is zero that means the tangent line is horizontal. In physics stuff this is an equilibrium. The second derivative can tell us whether the equilibrium is stable or not. If the second derivative at the equilibrium is positive it’s a stable equilibrium. The function looks like a bowl open at the top. If the second derivative at the equilibrium is negative then it’s an unstable equilibrium.

    We can make better approximations yet, by using even more derivatives of the original function ‘f’ at the point of expansion:

    F^3(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 f''(a) \left(x - a\right)^2 + \frac{1}{3\cdot 2} f'''(a) \left(x - a\right)^3

    There are better approximations yet. You can probably guess what the next, fourth-degree, polynomial would be. Or you can after I tell you the fraction in front of the new term will be \frac{1}{4\cdot 3\cdot 2} . The only big difference is that after about the third derivative we give up on adding ‘ marks after the function name ‘f’. It’s just too many little dots. We start writing, like, ‘f(iv)‘ instead. Or if the Roman numerals are too much then ‘f(2038)‘ instead. Or if we don’t want to pin things down to a specific value ‘f(j)‘ with the understanding that ‘j’ is some whole number.
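
    Those fractions — 1, 1, ½, 1/(3·2), 1/(4·3·2), and so on — are one over the factorial of the term’s degree. A minimal Python sketch of the general polynomial (my own illustration; it takes the derivatives at the point of expansion as a pre-computed list, which is an assumption, not something the post specifies):

    ```python
    import math

    def taylor_polynomial(derivs, a):
        """Build F_n from the derivatives of f at the point of expansion a.

        derivs[j] holds the j-th derivative of f at a; the coefficient of
        (x - a)^j is derivs[j] / j!, reproducing the 1/(3*2), 1/(4*3*2),
        ... fractions from the text.
        """
        return lambda x: sum(d / math.factorial(j) * (x - a) ** j
                             for j, d in enumerate(derivs))

    # Every derivative of e^x at a = 0 is 1, so F_4 is
    # 1 + x + x^2/2 + x^3/6 + x^4/24.
    F4 = taylor_polynomial([1, 1, 1, 1, 1], 0.0)
    print(F4(1.0))       # ≈ 2.7083, near e ≈ 2.7183
    ```

    Each extra term shrinks the error near the point of expansion, which is the whole point of carrying the series further.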

    We don’t need all of them. In physics problems we get equilibriums from the first derivative. We get stability from the second derivative. And we get springs in the second derivative too. And that’s what I hope to pick up on in the next installment of the main series.

     
    • elkement (Elke Stangl) 4:20 pm on Wednesday, 7 June, 2017 Permalink | Reply

      This is great – I’ve just written a very short version of that (a much too succinct one) … as a half-hearted attempt to explain Taylor expansions that I need in an upcoming post. But now I won’t feel bad anymore about its incomprehensibility and simply link to this post of yours :-)


  • Joseph Nebus 4:00 pm on Sunday, 4 June, 2017 Permalink | Reply
    Tags: Motley, The Norm

    Reading the Comics, May 31, 2017: Feast Week Edition 


    You know we’re getting near the end of the (United States) school year when Comic Strip Master Command orders everyone to clear out their mathematics jokes. I’m assuming that’s what happened here. Or else a lot of cartoonists had word problems on their minds eight weeks ago. Also eight weeks ago plus whenever they originally drew the comics, for those that are deep in reruns. It was busy enough to split this week’s load into two pieces and might have been worth splitting into three, if I thought I had publishing dates free for all that.

    Larry Wright’s Motley Classics for the 28th of May, a rerun from 1989, is a joke about using algebra. Occasionally mathematicians try to use the ability of people to catch things in midair as evidence of the sort of differential-equation solving that we can all do, if imperfectly, in our heads. But I’m not aware of evidence that anyone does anything that sophisticated. I would be stunned if we didn’t really work by a process of making a guess of where the thing should be and refining it as time allows, with experience helping us make better guesses. There’s good stuff to learn in modeling how to catch stuff, though.

    Michael Jantze’s The Norm Classics rerun for the 28th opines about why in algebra you had to not just have an answer but explain why that was the answer. I suppose mathematicians get trained to stop thinking about individual problems and instead look to classes of problems. Is it possible to work out a scheme that works for many cases instead of one? If it isn’t, can we at least say something interesting about why it’s not? And perhaps that’s part of what makes algebra classes hard. To think about a collection of things is usually harder than to think about one, and maybe instructors aren’t always clear about how to turn the specific into the general.

    Also I want to say some very good words about Jantze’s graphical design. The mock textbook cover for the title panel on the left is so spot-on for a particular era in mathematics textbooks it’s uncanny. The all-caps Helvetica, the use of two slightly different tans, the minimalist cover art … I know shelves stuffed full in the university mathematics library where every book looks like that. Plus, “[Mathematics Thing] And Their Applications” is one of the roughly four standard approved mathematics book titles. He paid good attention to his references.

    Gary Wise and Lance Aldrich’s Real Life Adventures for the 28th deploys a big old whiteboard full of equations for the “secret” of the universe. This makes a neat change from finding the “meaning” of the universe, or of life. The equations themselves look mostly like gibberish to me, but Wise and Aldrich make good uses of their symbols. The symbol \vec{B} , a vector-valued quantity named B, turns up a lot. This symbol we often use to represent magnetic flux density. The B without a little arrow above it would represent the intensity of the magnetic field. Similarly an \vec{H} turns up. This we often use for magnetic field strength. While I didn’t spot a \vec{E} — electric field — which would be the natural partner to all this, there are plenty of bare E symbols. Those would represent electric potential. And many of the other symbols are what would naturally turn up if you were trying to model how something is tossed around by a magnetic field. Q, for example, is often the electric charge. ω is a common symbol for how fast an electromagnetic wave oscillates. (It’s not the frequency, but it’s related to the frequency.) The uses of symbols is consistent enough, in fact, I wonder if Wise and Aldrich did use a legitimate sprawl of equations and I’m missing the referenced problem.

    John Graziano’s Ripley’s Believe It Or Not for the 28th mentions how many symbols are needed to write out the numbers from 1 to 100. Is this properly mathematics? … Oh, who knows. It’s just neat to know.

    Mark O’Hare’s Citizen Dog rerun for the 29th has the dog Fergus struggle against a word problem. Ordinary setup and everything, but I love the way O’Hare draws Fergus in that outfit and thinking hard.

    The Eric the Circle rerun for the 29th by ACE10203040 is a mistimed Pi Day joke.

    Bill Amend’s FoxTrot Classic for the 31st, a rerun from the 7th of June, 2006, shows the conflation of “genius” and “good at mathematics” in everyday use. Amend has picked a quixotic but in-character thing for Jason Fox to try doing. Euclid’s Fifth Postulate is one of the classic obsessions of mathematicians throughout history. Euclid admitted the thing — a confusing-reading mess of propositions — as a postulate because … well, there’s interesting geometry you can’t do without it, and there doesn’t seem any way to prove it from the rest of his geometric postulates. So it must be assumed to be true.

    There isn’t a way to prove it from the rest of the geometric postulates, but it took mathematicians over two thousand years of work at that to be convinced of the fact. But I know I went through a time of wanting to try finding a proof myself. It was a mercifully short-lived time that ended in my humbly understanding that as smart as I figured I was, I wasn’t that smart. We can suppose Euclid’s Fifth Postulate to be false and get interesting geometries out of that, particularly the geometries of the surface of the sphere, and the geometry of general relativity. Jason will surely sometime learn.

     
    • goldenoj 9:08 pm on Sunday, 4 June, 2017 Permalink | Reply

      Just found these recently. I really enjoy them and catching up is fun. Thanks!


      • Joseph Nebus 1:05 am on Wednesday, 7 June, 2017 Permalink | Reply

        Thanks for finding the pieces. I hope you enjoy; they’re probably my most reliable feature around here.

