Tagged: stability

  • Joseph Nebus 6:00 pm on Friday, 28 October, 2016 Permalink | Reply
    Tags: stability

    Why Stuff Can Orbit, Part 7: ALL the Circles 




    Last time around I showed how to do a central-force problem for normal gravity. That’s one where a planet, or moon, or satellite, or whatever is drawn towards the center of space. It’s drawn by a potential energy that equals some constant times the inverse of the distance from the origin. That is, V(r) = C r^{-1}. With a little bit of fussing around we could find out what distance from the center lets a circular orbit happen. And even Kepler’s Third Law, connecting how long an orbit takes to how big it must be.

    There are two natural follow-up essays. One is to work out elliptical orbits. We know there are such things; all real planets and moons have them, and nearly all satellites do. The other is to work out circular orbits for another easy-to-understand example, like a mass on a spring. That’s something with a potential energy that looks like V(r) = C r^2.

    I want to do the elliptical orbits later on. The mass-on-a-spring I could do now. So could you, if you follow last week’s essay and just change the numbers a little. But, you know, why bother working out one problem? Why not work out a lot of them? Why not work out every central-force problem, all at once?

    Because we can’t. I mean, I can describe how to do that, but it isn’t going to save us much time. Like, the quadratic formula is great because it’ll give you the roots of a quadratic polynomial in one step. You don’t have to do anything but a little arithmetic. We can’t get a formula that easy if we try to solve for every possible potential energy.

    But we can work out a lot of central-force potential energies all at once. That is, we can solve for a big set of similar problems, a “family” as we call them. The obvious family is potential energies that are powers of the planet’s distance from the center. That is, they’re potential energies that follow the rule

    V(r) = C r^n

    Here ‘C’ is some number. It might depend on the planet’s mass, or the sun’s mass. Doesn’t matter. All that’s important is that it not change over the course of the problem. So, ‘C’ for Constant. And ‘n’ is another constant number. Some numbers turn up a lot in useful problems. If ‘n’ is -1 then this can describe gravitational attraction. If ‘n’ is 2 then this can describe a mass on a spring. This ‘n’ can be any real number. That’s not an ideal choice of letter. ‘n’ usually designates a whole number. By using that letter I’m biasing people to think of numbers like ‘2’ at the expense of perfectly legitimate alternatives such as ‘2.1’. But now that I’ve made that explicit maybe we won’t make a casual mistake.

    So what I want is to find where there are stable circular orbits for an arbitrary radius-to-a-power force. I don’t know what ‘C’ and ‘n’ are, but they’re some numbers. To find where a planet can have a circular orbit I need to suppose the planet has some mass, ‘m’. And that its orbit has some angular momentum, a number called ‘L’. From this we get the effective potential energy. That’s what the potential energy looks like when we remember that angular momentum has to be conserved.

    V_{eff}(r) = C r^n + \frac{L^2}{2m} r^{-2}

    To find where a circular orbit can be we have to take the first derivative of Veff with respect to ‘r’. The circular orbit can happen at a radius for which this first derivative equals zero. So we need to solve this:

    \frac{dV_{eff}}{dr} = n C r^{n-1} - 2\frac{L^2}{2m} r^{-3} = 0

    That derivative we know from the rules of how to take derivatives. And from this point on we have to do arithmetic. We want to get something which looks like ‘r = (some mathematics stuff here)’. Hopefully it’ll be something not too complicated. And hey, in the second term there, the one with L^2 in it, we have a 2 in the numerator and a 2 in the denominator. So those cancel out and that’s simpler. That’s hopeful, isn’t it?

    n C r^{n-1} - \frac{L^2}{m}r^{-3} = 0

    OK. Add \frac{L^2}{m}r^{-3} to both sides of the equation; we’re used to doing that. At least in high school algebra we are.

    n C r^{n-1} = \frac{L^2}{m}r^{-3}

    Not looking much better? Try multiplying both left and right sides by ‘r^3’. This gets rid of all the ‘r’ terms on the right-hand side of the equation.

    n C r^{n+2} = \frac{L^2}{m}

    Now we’re getting close to the ideal of ‘r = (some mathematics stuff)’. Divide both sides by the constant number ‘n times C’.

    r^{n+2} = \frac{L^2}{n C m}

    I know how much everybody likes taking (n+2)-nd roots of a quantity. I’m sure you occasionally just pick an object at random — your age, your telephone number, a potato, a wooden block — and find its (n+2)-nd root. I know. I’ll spoil some of the upcoming paragraphs to say that it’s going to be more useful knowing ‘r^{n+2}’ than it is knowing ‘r’. But I’d like to have the radius of a circular orbit on the record. Here it is.

    r = \left(\frac{L^2}{n C m}\right)^{\frac{1}{n + 2}}

    Can we check that this is right? Well, we can at least check that things aren’t wrong. We can check against the example we already know. That’s the gravitational potential energy problem. For that one, ‘C’ is the number ‘-G M m’. That’s minus the gravitational constant of the universe times the mass of the sun times the mass of the planet. And for gravitational potential energy, ‘n’ is equal to -1. So the product ‘n C’ is the positive number ‘G M m’. This implies that, for a gravitational potential energy problem, we get a circular orbit when

    r_{grav} = \left(\frac{L^2}{G M m^2}\right)^{\frac{1}{1}} = \frac{L^2}{G M m^2}

    I’m labelling it ‘r_{grav}’ to point out it’s the radius of a circular orbit for gravitational problems. Might or might not need that in the future, but the label won’t hurt anything.

    Go ahead and guess whether that agrees with last week’s work. I’m feeling confident.
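    If you’d rather let a computer do the checking, here’s a little Python sketch of the whole setup. The particular numbers for ‘n’, ‘C’, ‘m’, and ‘L’ are ones I made up just to have something concrete; any values with ‘n C’ positive would do.

        # Numerical check: the first derivative of the effective potential
        # should vanish at the circular-orbit radius we worked out.
        n, C, m, L = 2.0, 1.5, 0.8, 3.0   # invented sample values, with n*C > 0

        def V_eff(r):
            # V_eff(r) = C r^n + (L^2 / 2m) r^-2
            return C * r**n + L**2 / (2 * m) * r**-2

        r_circ = (L**2 / (n * C * m))**(1 / (n + 2))

        # Approximate dV_eff/dr at r_circ with a small centered difference.
        h = 1e-6
        dV = (V_eff(r_circ + h) - V_eff(r_circ - h)) / (2 * h)
        print(r_circ, dV)   # dV comes out as something tiny, rounding-error size

    Swap in your own ‘n’ and ‘C’ if you like; as long as ‘n C’ is positive the derivative should still come out as nearly zero.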

    OK, so, we know where a circular orbit might turn up for an arbitrary power function potential energy. Is it stable? We know from the third “Why Stuff Can Orbit” essay that it’s not a sure thing. We can have potential energies that don’t have any circular orbits. So it must be possible there are unstable orbits.

    Whether our circular orbit is stable demands we do the same work we did last time. It will look a little harder to start, because there’s one more variable in it. What had been ‘-1’ last time is now an ‘n’, and stuff like ‘-2’ becomes ‘n-1’. Is that actually harder? Really?

    So here’s the second derivative of the effective potential:

    \frac{d^2V_{eff}}{dr^2} = (n-1)nCr^{n - 2} + 3\frac{L^2}{m}r^{-4}

    My first impulse when I worked this out was to take the ‘r’ for a circular orbit, the thing worked out five paragraphs above, and plug it in to that expression. This is madness. Don’t do it. Or, you know, go ahead and start doing it and see how long it takes before you regret the errors of your ways.

    The non-madness-inducing way to work out if this is a positive number? It involves noticing r^{n-2} is the same number as r^{n+2}\cdot r^{-4} . So we have this bit of distribution-law magic:

    \frac{d^2V_{eff}}{dr^2} = (n-1)nCr^{n + 2}r^{-4} + 3\frac{L^2}{m}r^{-4}

    \frac{d^2V_{eff}}{dr^2} = \left((n-1)nCr^{n + 2} + 3\frac{L^2}{m}\right) \cdot r^{-4}

    I’m sure we all agree that’s better, right? No, honestly, let me tell you why this is better. When will this expression be true?

    \left((n-1)nCr^{n + 2} + 3\frac{L^2}{m}\right) \cdot r^{-4} > 0

    That’s the product of two expressions. One of them is ‘r^{-4}’. ‘r’ is the radius of the planet’s orbit. That has to be a positive number. It’s how far the planet is from the origin. The number can’t be anything but positive. So we don’t have to worry about that.

    SPOILER: I just palmed a card there. Did you see me palm a card there? Because I totally did. Watch for where that card turns up. It’ll be after this next bit.

    So let’s look at the non-card-palmed part of this. We’re going to have a stable equilibrium when the other factor of that mess up above is positive. We need to know when this is true:

    (n-1)nCr^{n + 2} + 3\frac{L^2}{m}  > 0

    OK. Well. We do know what ‘r^{n+2}’ is. Worked that out … uhm … twelve(?) paragraphs ago. I’ll say twelve and hope I don’t mess that up in editing. Anyway, what’s important is r^{n+2} = \frac{L^2}{n C m} . So we put that in where ‘r^{n+2}’ appeared in that above expression.

    (n-1)nC\frac{L^2}{n C m} + 3 \frac{L^2}{m} > 0

    This is going to simplify down some. Look at that first term, with an ‘n C’ in the numerator and again in the denominator. We’re going to be happier as soon as we cancel those out.

    (n-1)\frac{L^2}{m} + 3\frac{L^2}{m} > 0

    And now we get to some fine distributive-law action, the kind everyone likes:

    \left( (n-1) + 3 \right)\frac{L^2}{m} > 0

    Well, we know \frac{L^2}{m} has to be positive. The angular momentum ‘L’ might be positive or might be negative but its square is certainly positive. The mass ‘m’ has to be a positive number. So we’ll get a stable equilibrium whenever (n - 1) + 3 is greater than 0. That is, whenever n > -2 . Done.

    No we’re not done. That’s nonsense. We knew that going in. We saw that a couple essays ago. If your potential energy were something like, say, V(r) = -2 r^3 you wouldn’t have any orbits at all, never mind stable orbits. But 3 is certainly greater than -2. So what’s gone wrong here?

    Let’s go back to that palmed card. Remember I mentioned how the radius of our circular orbit was a positive number. This has to be true, if there is a circular orbit. What if there isn’t one? Do we know there is a radius ‘r’ that the planet can orbit the origin? Here’s the formula giving us that circular orbit’s radius once again:

    r = \left(\frac{L^2}{n C m}\right)^{\frac{1}{n + 2}}

    Do we know that’s going to exist? … Well, sure. That’s going to be some meaningful number as long as we avoid obvious problems. Like, we can’t have the power ‘n’ be equal to zero, because dividing by zero is all sorts of bad. Also we can’t have the constant ‘C’ be zero, again because dividing by zero is bad.

    Not a problem, though. If either ‘C’ or ‘n’ were zero, or if both were, then the original potential energy would be a constant number. V(r) would be equal to ‘C’ (if ‘n’ were zero), or ‘0’ (if ‘C’ were zero). It wouldn’t change with the radius ‘r’. This is a case called the ‘free particle’. There’s no force pushing the planet in one direction or another. So if the planet were not moving it would never start. If the planet were already moving, it would keep moving in the same direction in a straight line. No circular orbits.

    Similarly if ‘n’ were equal to ‘-2’ there’d be problems because the power we raise that parenthetical expression to would be equal to one divided by zero, which is bad. Is there anything else that could be trouble there?

    What if the thing inside parentheses is a negative number? I may not know what ‘n’ is. I don’t. We started off by supposing we didn’t know beyond that it was a number. But I do know that the (n+2)-nd root of a negative number is going to be trouble. It might be negative. It might be complex-valued. But it won’t be a positive number. And we need a radius that’s a positive number. So that’s the palmed card. To have a circular orbit at all, stable or not, we have to have:

    \frac{L^2}{n C m} > 0

    ‘L’ is a regular old number, maybe positive, maybe negative. So ‘L^2’ is a positive number. And the mass ‘m’ is a positive number. We don’t know what ‘n’ and ‘C’ are. But as long as their product is positive we’re good. The whole equation will be true. So ‘n’ and ‘C’ can both be negative numbers. We saw that with gravity: V(r) = -\frac{GMm}{r} . ‘G’ is the gravitational constant of the universe, a positive number. ‘M’ and ‘m’ are masses, also positive.

    Or ‘n’ and ‘C’ can both be positive numbers. That turns up with spring problems: V(r) = K r^2 , where ‘K’ is the ‘spring constant’. That’s some positive number again.

    That time we found potential energies that didn’t have orbits? They were ones that had a positive ‘C’ and negative ‘n’, or a negative ‘C’ and positive ‘n’. Either way the product ‘n C’ comes out negative, and that case, we now see, doesn’t have circular orbits. It’s nice to have that sorted out at least.

    So what does it mean that we can’t have a stable orbit if ‘n’ is less than or equal to -2? Even if ‘C’ is negative? It turns out that if you have a negative ‘C’ and big negative ‘n’, like say -5, the potential energy drops way down to something infinitely large and negative at smaller and smaller radiuses. If you have a positive ‘C’, the potential energy goes way up at smaller and smaller radiuses. For large radiuses the potential drops to zero. But there’s never the little U-shaped valley in the middle, the way you get for gravity-like potentials or spring potentials or normal stuff like that. Yeah, who would have guessed?
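    If you’d like to see that valley, or its absence, with your own eyes, here’s a Python sketch that tabulates the effective potential for a gravity-like ‘n’ of -1 and for an ‘n’ of -5. The constants are invented for the demonstration.

        # Tabulate effective potentials: n = -1 has a valley, n = -5 does not.
        m, L = 1.0, 1.0   # invented mass and angular momentum

        def V_eff(r, C, n):
            return C * r**n + L**2 / (2 * m) * r**-2

        radii = [0.2 * k for k in range(1, 40)]        # r from 0.2 to 7.8
        grav  = [V_eff(r, -1.0, -1) for r in radii]    # gravity-like case
        steep = [V_eff(r, -1.0, -5) for r in radii]    # n = -5 case

        print(min(grav))    # a finite minimum: the bottom of the U-shaped valley
        print(steep[:3])    # enormous negative numbers near the center; no valley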

    What if we do have a stable orbit? How long does an orbit take? How does that relate to the radius of the orbit? We used this radius expression to work out Kepler’s Third Law for the gravity problem last week. We can do that again here.

    Last week we worked out what the angular momentum ‘L’ had to be in terms of the radius of the orbit and the time it takes to complete one orbit. The radius of the orbit we called ‘r’. The time an orbit takes we call ‘T’. The formula for angular momentum doesn’t depend on what problem we’re doing. It just depends on the mass ‘m’ of what’s spinning around and how it’s spinning. So:

    L = 2\pi m \frac{r^2}{T}

    And from this we know what ‘L^2’ is.

    L^2 = 4\pi^2 m^2 \frac{r^4}{T^2}

    That’s convenient because we have an ‘L^2’ term in the formula for what the radius is. I’m going to stick with the formula we got for ‘r^{n+2}’ because that is so, so much easier to work with than ‘r’ by itself. So we go back to that starting point and then substitute what we know ‘L^2’ to be in there.

    r^{n + 2} = \frac{L^2}{n C m}

    This we rewrite as:

    r^{n + 2} = \frac{4 \pi^2 m^2}{n C m}\frac{r^4}{T^2}

    Some stuff starts cancelling out again. One ‘m’ in the numerator and one in the denominator. Small thing but it makes our lives a bit better. We can multiply the left side and the right side by T^2. That’s more obviously an improvement. We can divide the left side and the right side by ‘r^{n+2}’. And yes that is too an improvement. Watch all this:

    r^{n + 2} = \frac{4 \pi^2 m}{n C}\frac{r^4}{T^2}

    T^2 \cdot r^{n + 2} = \frac{4 \pi^2 m}{n C}r^4

    T^2  = \frac{4 \pi^2 m}{n C}r^{2 - n}

    And that last bit is the equivalent of Kepler’s Third Law for our arbitrary power-law style force.

    Are we right? Hard to say offhand. We can check that we aren’t wrong, at least. We can check against the gravitational potential energy. For this ‘n’ is equal to -1. ‘C’ is equal to ‘-G M m’. Make those substitutions; what do we get?

    T^2  = \frac{4 \pi^2 m}{(-1) (-G M m)}r^{2 - (-1)}

    T^2  = \frac{4 \pi^2}{G M}r^{3}

    Well, that is what we expected for this case. So the work looks good, this far. Comforting.
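    And a computer can double-check the general law too. This Python sketch picks arbitrary constants, builds the circular-orbit radius and the period from the formulas above, and compares the two sides of our Kepler-like law. All the particular numbers are inventions.

        import math

        # Check T^2 = (4 pi^2 m / (n C)) r^(2 - n) with made-up constants.
        n, C, m, L = 2.0, 1.5, 0.8, 3.0

        r = (L**2 / (n * C * m))**(1 / (n + 2))   # circular-orbit radius
        T = 2 * math.pi * m * r**2 / L            # from L = 2 pi m r^2 / T

        lhs = T**2
        rhs = (4 * math.pi**2 * m / (n * C)) * r**(2 - n)
        print(lhs, rhs)   # the two agree to rounding error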

     
  • Joseph Nebus 6:00 pm on Friday, 21 October, 2016 Permalink | Reply
    Tags: stability

    Why Stuff Can Orbit, Part 6: Circles and Where To Find Them 




    So now we can work out orbits. At least orbits for a central force problem. Those are ones where a particle — it’s easy to think of it as a planet — is pulled towards the center of the universe. How strong that pull is depends on some constants. But it only changes as the distance the planet is from the center changes.

    What we’d like to know is whether there are circular orbits. By “we” I mean “mathematical physicists”. And I’m including you in that “we”. If you’re reading this far you’re at least interested in knowing how mathematical physicists think about stuff like this.

    It’s easiest describing when these circular orbits exist if we start with the potential energy. That’s a function named ‘V’. We write it as ‘V(r)’ to show it’s an energy that changes as ‘r’ changes. By ‘r’ we mean the distance from the center of the universe. We’d use ‘d’ for that except we’re so used to thinking of distance from the center as ‘radius’. So ‘r’ seems more compelling. Sorry.

    Besides the potential energy we need to know the angular momentum of the planet (or whatever it is) moving around the center. The amount of angular momentum is a number we call ‘L’. It might be positive, it might be negative. Also we need the planet’s mass, which we call ‘m’. The angular momentum and mass let us write a function called the effective potential energy, ‘Veff(r)’.

    And we’ll need to take derivatives of ‘Veff(r)’. Fortunately that “How Differential Calculus Works” essay explains all the symbol-manipulation we need to get started. That part is calculus, but the easy part. We can just follow the rules already there. So here’s what we do:

    • The planet (or whatever) can have a circular orbit around the center at any radius which makes the equation \frac{dV_{eff}}{dr} = 0 true.
    • The circular orbit will be stable if the radius of its orbit makes the second derivative of the effective potential, \frac{d^2V_{eff}}{dr^2} , some number greater than zero.

    We’re interested in stable orbits because usually unstable orbits are boring. They might exist but any little perturbation breaks them down. The mathematician, ordinarily, sees this as a useless solution except in how it describes different kinds of orbits. The physicist might point out that sometimes it can take a long time, possibly millions of years, before the perturbation becomes big enough to stand out. Indeed, it’s an open question whether our solar system is stable. While it seems to have gone millions of years without any planet changing its orbit very much we haven’t got the evidence to say it’s impossible that, say, Saturn will be kicked out of the solar system anytime soon. Or worse, that Earth might be. “Soon” here means geologically soon, like, in the next million years.

    (If it takes so long for the instability to matter then the mathematician might allow that as “metastable”. There are a lot of interesting metastable systems. But right now, I don’t care.)

    I realize now I didn’t explain the notation for the second derivative before. It looks funny because that’s just the best we can work out. In that fraction \frac{d^2V_{eff}}{dr^2} the ‘d’ isn’t a number so we can’t cancel it out. And the superscript ‘2’ doesn’t mean squaring, at least not the way we square numbers. There’s a functional analysis essay in there somewhere. Again I’m sorry about this but there’s a lot of things mathematicians want to write out and sometimes we can’t find a way that avoids all confusion. Roll with it.

    So that explains the whole thing clearly and easily and now nobody could be confused and yeah I know. If my Classical Mechanics professor left it at that we’d have open rebellion. Let’s do an example.

    There are two and a half good examples. That is, they’re central force problems with answers we know. One is gravitation: we have a planet orbiting a star that’s at the origin. Another is springs: we have a mass that’s connected by a spring to the origin. And the half is electric: put a positive electric charge at the center and have a negative charge orbit that. The electric case is only half a problem because it’s the same as the gravitation problem except for what the constants involved are. Electric charges attract each other crazy way stronger than gravitational masses do. But that doesn’t change the work we do.

    This is a lie. Electric charges accelerating, and just orbiting counts as accelerating, cause electromagnetic effects to happen. They give off light. That’s important, but it’s also complicated. I’m not going to deal with that.

    I’m going to do the gravitation problem. After all, we know the answer! By Kepler’s something law, something something radius cubed something G M … something … squared … After all, we can look up the answer!

    The potential energy for a planet orbiting a sun looks like this:

    V(r) = - G M m \frac{1}{r}

    Here ‘G’ is a constant, called the Gravitational Constant. It’s how strong gravity in the universe is. It’s not very strong. ‘M’ is the mass of the sun. ‘m’ is the mass of the planet. To make sense ‘M’ should be a lot bigger than ‘m’. ‘r’ is how far the planet is from the sun. And yes, that’s one-over-r, not one-over-r-squared. This is the potential energy of the planet being at a given distance from the sun. One-over-r-squared gives us how strong the force attracting the planet towards the sun is. Different thing. Related thing, but different thing. Just listing all these quantities one after the other means ‘multiply them together’, because mathematicians multiply things together a lot and get bored writing multiplication symbols all the time.

    Now for the effective potential we need to toss in the angular momentum. That’s ‘L’. The effective potential energy will be:

    V_{eff}(r) = - G M m \frac{1}{r} + \frac{L^2}{2 m r^2}

    I’m going to rewrite this in a way that means the same thing, but that makes it easier to take derivatives. At least easier to me. You’re on your own. But here’s what looks easier to me:

    V_{eff}(r) = - G M m r^{-1} + \frac{L^2}{2 m} r^{-2}

    I like this because it makes every term here look like “some constant number times r to a power”. That’s easy to take the derivative of. Check back on that “How Differential Calculus Works” essay. The first derivative of this ‘Veff(r)’, taken with respect to ‘r’, looks like this:

    \frac{dV_{eff}}{dr} = -(-1) G M m r^{-2} -2\frac{L^2}{2m} r^{-3}

    We can tidy that up a little bit: -(-1) is another way of writing 1. The second term has two times something divided by 2. We don’t need to be that complicated. In fact, when I worked out my notes I went directly to this simpler form, because I wasn’t going to be thrown by that. I imagine I’ve got people reading along here who are watching these equations warily, if at all. They’re ready to bolt at the first sign of something terrible-looking. There’s nothing terrible-looking coming up. All we’re doing from this point on is really arithmetic. It’s multiplying or adding or otherwise moving around numbers to make the equation prettier. It happens we only know those numbers by cryptic names like ‘G’ or ‘L’ or ‘M’. You can go ahead and pretend they’re ‘4’ or ‘5’ or ‘7’ if you like. You know how to do the steps coming up.

    So! We allegedly can have a circular orbit when this first derivative is equal to zero. What values of ‘r’ make true this equation?

    G M m r^{-2} - \frac{L^2}{m} r^{-3} = 0

    Not so helpful there. What we want is to have something like ‘r = (mathematics stuff here)’. We have to do some high school algebra moving-stuff-around to get that. So one thing we can do to get closer is add the quantity \frac{L^2}{m} r^{-3} to both sides of this equation. This gets us:

    G M m r^{-2} = \frac{L^2}{m} r^{-3}

    Things are getting better. Now multiply both sides by the same number. Which number? r^3. That’s because ‘r^{-3}’ times ‘r^3’ is going to equal 1, while ‘r^{-2}’ times ‘r^3’ will equal ‘r^1’, which normal people call ‘r’. I kid; normal people don’t think of such a thing at all, much less call it anything. But if they did, they’d call it ‘r’. We’ve got:

    G M m r = \frac{L^2}{m}

    And now we’re getting there! Divide both sides by whatever number ‘G M m’ is, as long as it isn’t zero. And then we have our circular orbit! It’s at the radius

    r = \frac{L^2}{G M m^2}

    Very good. I’d even say pretty. It’s got all those capital letters and one little lowercase. Something squared in the numerator and the denominator. Aesthetically pleasant. Stinks a little that it doesn’t look like anything we remember from Kepler’s Laws once we’ve looked them up. We can fix that, though.

    The key is the angular momentum ‘L’ there. I haven’t said anything about how that number relates to anything. It’s just been some constant of the universe. In a sense that’s fair enough. Angular momentum is conserved, exactly the same way energy is conserved, or the way linear momentum is conserved. Why not just let it be whatever number it happens to be?

    (A note for people who skipped earlier essays: Angular momentum is not a number. It’s really a three-dimensional vector. But in a central force problem with just one planet moving around we aren’t doing any harm by pretending it’s just a number. We set it up so that the angular momentum is pointing directly out of, or directly into, the sheet of paper we pretend the planet’s orbiting in. Since we know the direction before we even start work, all we have to care about is the size. That’s the number I’m talking about.)

    The angular momentum of a thing is its moment of inertia times its angular velocity. I’m glad to have cleared that up for you. The moment of inertia of a thing describes how easy it is to start it spinning, or stop it spinning, or change its spin. It’s a lot like inertia. What it is depends on the mass of the thing spinning, and how that mass is distributed, and what it’s spinning around. It’s the first part of physics that makes the student really have to know volume integrals.

    We don’t have to know volume integrals. A single point mass spinning at a constant speed at a constant distance from the origin is the easy angular momentum to figure out. A mass ‘m’ at a fixed distance ‘r’ from the center of rotation moving at constant speed ‘v’ has an angular momentum of ‘m’ times ‘r’ times ‘v’.

    So great; we’ve turned ‘L’ which we didn’t know into ‘m r v’, where we know ‘m’ and ‘r’ but don’t know ‘v’. We’re making progress, I promise. The planet’s tracing out a circle in some amount of time. It’s a circle with radius ‘r’. So it traces out a circle with perimeter ‘2 π r’. And it takes some amount of time to do that. Call that time ‘T’. So its speed will be the distance travelled divided by the time it takes to travel. That’s \frac{2 \pi r}{T} . Again we’ve changed one unknown number ‘L’ for another unknown number ‘T’. But at least ‘T’ is an easy familiar thing: it’s how long the orbit takes.

    Let me show you how this helps. Start off with what ‘L’ is:

    L = m r v = m r \frac{2\pi r}{T} = 2\pi m \frac{r^2}{T}

    Now let’s put that into the equation I got eight paragraphs ago:

    r = \frac{L^2}{G M m^2}

    Remember that one? Now put what I just said ‘L’ was, in where ‘L’ shows up in that equation.

    r = \frac{\left(2\pi m \frac{r^2}{T}\right)^2}{G M m^2}

    I agree, this looks like a mess and possibly a disaster. It’s not so bad. Do some cleaning up on that numerator.

    r = \frac{4 \pi^2 m^2}{G M m^2} \frac{r^4}{T^2}

    That’s looking a lot better, isn’t it? We even have something we can divide out: the mass of the planet is just about to disappear. This sounds bizarre, but remember Kepler’s laws: the mass of the planet never figures into things. We may be on the right path yet.

    r = \frac{4 \pi^2}{G M} \frac{r^4}{T^2}

    OK. Now I’m going to multiply both sides by ‘T^2’ because that’ll get that out of the denominator. And I’ll divide both sides by ‘r’ so that I only have the radius of the circular orbit on one side of the equation. Here’s what we’ve got now:

    T^2 = \frac{4 \pi^2}{G M} r^3

    And hey! That looks really familiar. A circular orbit’s radius cubed is some multiple of the square of the orbit’s time. Yes. This looks right. At least it looks reasonable. Someone else can check if it’s right. I like the look of it.
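    If you’d rather check with real numbers than trust my aesthetics, here’s a quick Python calculation. It uses the accepted values for ‘G’ and the Sun’s mass, and Earth’s mean orbital radius. Earth’s orbit isn’t a perfect circle, so landing near 365 days is the best we should expect.

        import math

        G = 6.674e-11    # gravitational constant, in m^3 / (kg s^2)
        M = 1.989e30     # mass of the Sun, in kg
        r = 1.496e11     # Earth's mean orbital radius, in m

        T = math.sqrt(4 * math.pi**2 * r**3 / (G * M))
        print(T / 86400)   # period in days; comes out right around 365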

    So this is the process you’d use to start understanding orbits for your own arbitrary potential energy. You can find the equivalent of Kepler’s Third Law, the one connecting orbit times and orbit radiuses. And it isn’t really hard. You need to know enough calculus to differentiate one function, and then you need to be willing to do a pile of arithmetic on letters. It’s not actually hard. Next time I hope to talk about more and different … um …

    I’d like to talk about the different … oh, dear. Yes. You’re going to ask about that, aren’t you?

    Ugh. All right. I’ll do it.

    How do we know this is a stable orbit? Well, it just is. If it weren’t the Earth wouldn’t have a Moon after all this. Heck, the Sun wouldn’t have an Earth. At least it wouldn’t have a Jupiter. If the solar system is unstable, Jupiter is probably the most stable part. But that isn’t convincing. I’ll do this right, though, and show what the second derivative tells us. It tells us this is too a stable orbit.

    So. The thing we have to do is find the second derivative of the effective potential. This we do by taking the derivative of the first derivative. Then we have to evaluate this second derivative and see what value it has for the radius of our circular orbit. If that’s a positive number, then the orbit’s stable. If that’s a negative number, then the orbit’s not stable. This isn’t hard to do, but it isn’t going to look pretty.

    First the pretty part, though. Here’s the first derivative of the effective potential:

    \frac{dV_{eff}}{dr} = G M m r^{-2} - \frac{L^2}{m} r^{-3}

    OK. So the derivative of this with respect to ‘r’ isn’t hard to evaluate again. This is again a function with a bunch of terms that are all a constant times r to a power. That’s the easiest sort of thing to differentiate that isn’t just something that never changes.

    \frac{d^2 V_{eff}}{dr^2} = -2 G M m r^{-3} - (-3)\frac{L^2}{m} r^{-4}

    Now the messy part. We need to work out what that line above is when our planet’s in our circular orbit. That circular orbit happens when r = \frac{L^2}{G M m^2} . So we have to substitute that mess in for ‘r’ wherever it appears in that above equation and you’re going to love this. Are you ready? It’s:

    -2 G M m \left(\frac{L^2}{G M m^2}\right)^{-3} + 3\frac{L^2}{m}\left(\frac{L^2}{G M m^2}\right)^{-4}

    This will get a bit easier promptly. That’s because something raised to a negative power is the same as its reciprocal raised to the positive of that power. So that terrible, terrible expression is the same as this terrible, terrible expression:

    -2 G M m \left(\frac{G M m^2}{L^2}\right)^3 + 3 \frac{L^2}{m}\left(\frac{G M m^2}{L^2}\right)^4

    Yes, yes, I know. Only thing to do is start hacking through all this because I promise it’s going to get better. Putting all those third- and fourth-powers into their parentheses turns this mess into:

    -2 G M m \frac{G^3 M^3 m^6}{L^6} + 3 \frac{L^2}{m} \frac{G^4 M^4 m^8}{L^8}

    Yes, my gut reaction when I see multiple things raised to the eighth power is to say I don’t want any part of this either. Hold on another line, though. Things are going to start cancelling out and getting shorter. Group all those things-to-powers together:

    -2 \frac{G^4 M^4 m^7}{L^6} + 3 \frac{G^4 M^4 m^7}{L^6}

    Oh. Well, now this is different. The second derivative of the effective potential, at this point, is the number

    \frac{G^4 M^4 m^7}{L^6}

    And I admit I don’t know what number that is. But here’s what I do know: ‘G’ is a positive number. ‘M’ is a positive number. ‘m’ is a positive number. ‘L’ might be positive or might be negative, but ‘L^6’ is a positive number either way. So this is a bunch of positive numbers multiplied and divided together.

    So this second derivative, whatever it is, must be a positive number. And so this circular orbit is stable. Give the planet a little nudge and that’s all right. It’ll stay near its orbit. I’m sorry to put you through that but some people raised the, honestly, fair question.

    So this is the process you’d use to start understanding orbits for your own arbitrary potential energy. You can find the equivalent of Kepler’s Third Law, the one connecting orbit times and orbit radiuses. And it isn’t really hard. You need to know enough calculus to differentiate one function, and then you need to be willing to do a pile of arithmetic on letters. It’s not actually hard. Next time I hope to talk about the other kinds of central forces that you might get. We only solved one problem here. We can solve way more than that.

     
    • howardat58 6:18 pm on Friday, 21 October, 2016 Permalink | Reply

      I love the chatty approach.

      Like

      • Joseph Nebus 5:03 am on Saturday, 22 October, 2016 Permalink | Reply

        Thank you. I realized doing Theorem Thursdays over the summer that it was hard to avoid that voice, and then that it was fun writing in it. So eventually I do learn, sometimes.

        Like

  • Joseph Nebus 6:00 pm on Friday, 14 October, 2016 Permalink | Reply
    Tags: harmonic motion, perturbations, stability

    How Mathematical Physics Works: Another Course In 2200 Words 


    OK, I need some more background stuff before returning to the Why Stuff Can Orbit series. Last week I explained how to take derivatives, which is one of the three legs of a Calculus I course. Now I need to say something about why we take derivatives. This essay won’t really qualify you to do mathematical physics, but it’ll at least let you bluff your way through a meeting with one.

    We care about derivatives because we’re doing physics a smart way. This involves thinking not about forces but instead potential energy. We have a function, called V or sometimes U, that changes based on where something is. If we need to know the forces on something we can take the derivative, with respect to position, of the potential energy. (Strictly, the force is minus that derivative; the sign won’t matter for finding equilibriums.)

    The way I’ve set up these central force problems makes it easy to shift between physical intuition and calculus. Draw a scribbly little curve, something going up and down as you like, as long as it doesn’t loop back on itself. Also, don’t take the pen from paper. Also, no corners. That’s just cheating. Smooth curves. That’s your potential energy function. Take any point on this scribbly curve. If you go to the right a little from that point, is the curve going up? Then your function has a positive derivative at that point. Is the curve going down? Then your function has a negative derivative. Find some other point where the curve is going in the other direction. If it was going up to start, find a point where it’s going down. Somewhere in-between there must be a point where the curve isn’t going up or going down. The Intermediate Value Theorem says you’re welcome.

    These points where the potential energy isn’t increasing or decreasing are the interesting ones. At least if you’re a mathematical physicist. They’re equilibriums. If whatever might be moving happens to be exactly there, then it’s not going to move. It’ll stay right there. Mathematically: the force is some fixed number times the derivative of the potential energy there. The potential energy’s derivative is zero there. So the force is zero and without a force nothing’s going to change. Physical intuition: imagine you laid out a track with exactly the shape of your curve. Put a marble at this point where the track isn’t rising and isn’t falling. Does the marble move? No, but if you’re not so sure about that read on past the next paragraph.

    Mathematical physicists learn to look for these equilibriums. We’re taught to not bother with what will happen if we release this particle at this spot with this velocity. That is, you know, not looking at any particular problem someone might want to know. We look instead at equilibriums because they help us describe all the possible behaviors of a system. Mathematicians are sometimes characterized as lazy in spirit. This is fair. Mathematicians will start out with a problem looking to see if it’s just like some other problem someone already solved. But the flip side is if one is going to go to the trouble of solving a new problem, she’s going to really solve it. We’ll work out not just what happens from some one particular starting condition. We’ll try to describe all the different kinds of thing that could happen, and how to tell which of them does happen for your measly little problem.

    If you actually do have a curvy track and put a marble down on its equilibrium it might yet move. Suppose the track rises a while and then falls back again; put the marble at the top and it’s likely to roll one way or the other. If it doesn’t it’s probably because of friction; the track sticks a little. If it were a really smooth track and the marble perfectly round then it’d fall. Give me this. But even with a perfectly smooth track and perfectly frictionless marble it’ll still roll one way or another. Unless you put it exactly at the spot that’s the top of the hill, not a bit to the left or the right. Good luck.

    What’s happening here is the difference between a stable and an unstable equilibrium. This is again something we all have a physical intuition for. Imagine you have something that isn’t moving. Give it a little shove. Does it stay about like it was? Then it’s stable. Does it break? Then it’s unstable. The marble at the top of the track is at an unstable equilibrium; a little nudge and it’ll roll away. If you had a marble at the bottom of a track, inside a valley, then it’s a stable equilibrium. A little nudge will make the marble rock back and forth but it’ll stay nearby.

    Yes, if you give it a crazy big whack the marble will go flying off, never to be seen again. We’re talking about small nudges. No, smaller than that. This maybe sounds like question-begging to you. But what makes for an unstable equilibrium is that no nudge is too small. The nudge — perturbation, in the trade — will just keep growing. In a stable equilibrium there’s nudges small enough that they won’t keep growing. They might not shrink, but they won’t grow either.

    So how to tell which is which? Well, look at your potential energy and imagine it as a track with a marble again. Where are the unstable equilibriums? They’re the ones at tops of hills. Near them the curve looks like a cup pointing down, to use the metaphor every Calculus I class takes. Where are the stable equilibriums? They’re the ones at bottoms of valleys. Near them the curve looks like a cup pointing up. Again, see Calculus I.

    We may be able to tell the difference between these kinds of equilibriums without drawing the potential energy. We can use the second derivative. To find the second derivative of a function you take the derivative of a function and then — you may want to think this one over — take the derivative of that. That is, you take the derivative of the original function a second time. Sometimes higher mathematics gives us terms that aren’t too hard.

    So if you have a spot where you know there’s an equilibrium, look at what the second derivative at that spot is. If it’s positive, you have a stable equilibrium. If it’s negative, you have an unstable equilibrium. This is called the “Second Derivative Test”, as it was named by a committee that figured it was close enough to 5 pm and why cause trouble?

    If the second derivative is zero there, um, we can’t say anything right now. The equilibrium may also be an inflection point. That’s where the growth of something pauses a moment before resuming. Or where the decline of something pauses a moment before resuming. In either case that’s still an unstable equilibrium. But it doesn’t have to be. It could still be a stable equilibrium. It might just have a very smoothly flat base. No telling just from that one piece of information and this is why we have to go on to other work.
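    Here’s the test at work, as a small Python sketch. The potential is one I made up so it would have a hill between two valleys; the code checks the derivative is zero at each candidate point and then applies the second derivative test.

        # Classify the equilibriums of the made-up potential V(x) = x^4 - 2 x^2.
        def V(x):
            return x**4 - 2 * x**2

        def dV(x):    # first derivative: 4 x^3 - 4 x
            return 4 * x**3 - 4 * x

        def d2V(x):   # second derivative: 12 x^2 - 4
            return 12 * x**2 - 4

        for x in (-1.0, 0.0, 1.0):    # the three spots where dV is zero
            kind = "stable" if d2V(x) > 0 else "unstable"
            print(x, dV(x), kind)
        # x = -1 and x = 1 are bottoms of valleys: stable.
        # x = 0 is the top of the hill between them: unstable.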

    But this gets at how we’d like to look at a system. We look for its equilibriums. We figure out which equilibriums are stable and which ones are unstable. With a little more work we can say, if the system starts out like this it’ll stay near that equilibrium. If it starts out like that it’ll stay near this whole other equilibrium. If it starts out this other way, it’ll go flying off to the end of the universe. We can solve every possible problem at once and never have to bother with a particular case. This feels good.

    It also gives us a little something more. You maybe have heard of a tangent line. That’s a line that’s, er, tangent to a curve. Again with the not-too-hard terms. What this means is there’s a point, called the “point of tangency”, again named by a committee that wanted to get out early. And the line just touches the original curve at that point, and it’s going in exactly the same direction as the original curve at that point. Typically this means the line just grazes the curve, at least around there. If you’ve ever rolled a pencil until it just touched the edge of your coffee cup or soda can, you’ve set up a tangent line to the curve of your beverage container. You just didn’t think of it as that because you’re not daft. Fair enough.

    Mathematicians will use tangents because a tangent line has values that are so easy to calculate. The function describing a tangent line is a polynomial and we llllllllove polynomials, correctly. The tangent line is always easy to understand, however hard the original function was. Its value, at the equilibrium, is exactly what the original function’s was. Its first derivative, at the equilibrium, is exactly what the original function’s was at that point. Its second derivative is zero, which might or might not be true of the original function. We don’t care.

    We don’t use tangent lines when we look at equilibriums. This is because in this case they’re boring. If it’s an equilibrium then its tangent line is a horizontal line. No matter what the original function was. It’s trivial: you know the answer before you’ve heard the question.

    Ah, but, there is something mathematical physicists do like. The tangent line is boring. Fine. But how about, using the second derivative, building a tangent … well, “parabola” is the proper term. This is a curve that’s a quadratic, that looks like an open bowl. It exactly matches the original function at the equilibrium. Its derivative exactly matches the original function’s derivative at the equilibrium. Its second derivative also exactly matches the original function’s second derivative, though. Third derivative we don’t care about. It’s so not important here I can’t even finish this sentence in a

    What this second-derivative-based approximation gives us is a parabola. It will look very much like the original function if we’re close to the equilibrium. And this gives us something great. The great thing is this is the same potential energy shape of a weight on a spring, or anything else that oscillates back and forth. It’s the potential energy for “simple harmonic motion”.

    And that’s great. We start studying simple harmonic motion, oh, somewhere in high school physics class because it’s so much fun to play with slinkies and springs and accidentally dropping weights on our lab partners. We never stop. The mathematics behind it is simple. It turns up everywhere. If you understand the mathematics of a mass on a spring you have a tool that’s relevant to pretty much every problem you ever have. This approximation is part of that. Close to a stable equilibrium, whatever system you’re looking at has the same behavior as a weight on a spring.
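    Here’s that approximation in action, using the same made-up potential V(x) = x^4 - 2 x^2 from the sketch above. Near its stable equilibrium the tangent parabola hugs the original curve, and the mismatch shrinks fast as you move in closer.

        # Tangent parabola at the stable equilibrium x0 = 1 of V(x) = x^4 - 2 x^2.
        x0 = 1.0
        V = lambda x: x**4 - 2 * x**2
        V0 = V(x0)              # value at the equilibrium: -1
        d2V0 = 12 * x0**2 - 4   # second derivative there: 8

        def parabola(x):
            # No linear term: the first derivative is zero at an equilibrium.
            return V0 + 0.5 * d2V0 * (x - x0)**2

        for dx in (0.5, 0.1, 0.01):
            x = x0 + dx
            print(dx, V(x) - parabola(x))   # the error shrinks fast as dx shrinks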

    It may strike you that a mass on a spring is itself a central force. And now I’m saying that within the central force problem I started out doing, stuff that orbits, there’s another central force problem. This is true. You’ll see that in a few Why Stuff Can Orbit essays.

    So far, by the way, I’ve talked entirely about a potential energy with a single variable. This is for a good reason: two or more variables is harder. Well of course it is. But the basic dynamics are still the same. There’s equilibriums. They can be stable or unstable. They might have inflection points. There is a new kind of behavior. Mathematicians call it a “saddle point”. This is where in one direction the potential energy makes it look like a stable equilibrium while in another direction the potential energy makes it look unstable. Examples of it kind of look like the shape of a saddle, if you haven’t looked at an actual saddle recently. (If you really want to know, get your computer to plot the function z = x^2 - y^2 and look at the origin, where x = 0 and y = 0.) Well, there’s points on an actual saddle that would be saddle points to a mathematician. It’s unstable, because there’s that direction where it’s definitely unstable.

    So everything about multivariable functions is longer, and a couple bits of it are harder. There’s more chances for weird stuff to happen. I think I can get through most of Why Stuff Can Orbit without having to know that. But do some reading up on that before you take a job as a mathematical physicist.

     
  • Joseph Nebus 3:00 pm on Friday, 6 November, 2015 Permalink | Reply
    Tags: stability

    The Set Tour, Part 7: Matrices 


    I feel a bit odd about this week’s guest in the Set Tour. I’ve been mostly concentrating on sets that get used as the domains or ranges for functions a lot. The ones I want to talk about here don’t tend to serve the role of domain or range. But they are used a great deal in some interesting functions. So I loosen my rule about what to talk about.

    R^{m x n} and C^{m x n}

    R^{m x n} might explain itself by this point. If it doesn’t, then this may help: the “x” here is the multiplication symbol. “m” and “n” are positive whole numbers. They might be the same number; they might be different. So, are we done here?

    Maybe not quite. I was fibbing a little when I said “x” was the multiplication symbol. R^{2 x 3} is not a longer way of saying R^6, an ordered collection of six real-valued numbers. The x does represent a kind of product, though. What we mean by R^{2 x 3} is an ordered collection, two rows by three columns, of real-valued numbers. Say the “x” here aloud as “by” and you’re pronouncing it correctly.

    What we get is called a “matrix”. If we put into it only real-valued numbers, it’s a “real matrix”, or a “matrix of reals”. Sometimes mathematical terminology isn’t so hard to follow. Just as with vectors, R^n, it matters just how the numbers are organized. R^{2 x 3} means something completely different from what R^{3 x 2} means. And swapping which positions the numbers in the matrix occupy changes what matrix we have, as you might expect.

    You can add together matrices, exactly as you can add together vectors. The same rules even apply. You can only add together two matrices of the same size. They have to have the same number of rows and the same number of columns. You add them by adding together the numbers in the corresponding slots. It’s exactly what you would do if you went in without preconceptions.

    You can also multiply a matrix by a single number. We called this scalar multiplication back when we were working with vectors. With matrices, we call this scalar multiplication. If it strikes you that we could see vectors as a kind of matrix, yes, we can. Sometimes that’s wise. We can see a vector as a matrix in the set R^{1 x n} or as one in the set R^{n x 1}, depending on just what we mean to do.

    It’s trickier to multiply two matrices together. As with vectors, multiplying the numbers in corresponding positions together doesn’t give us anything. What we do instead is a time-consuming but not actually hard process. But according to its rules, something in R^{m x n} we can multiply by something in R^{n x k}. “k” is another whole number. The second thing has to have exactly as many rows as the first thing has columns. What we get is a matrix in R^{m x k}.

    I grant you maybe didn’t see that coming. Also a potential complication: if you can multiply something in R^{m x n} by something in R^{n x k}, can you multiply the thing in R^{n x k} by the thing in R^{m x n}? … No, not unless k and m are the same number. Even if they are, you can’t count on getting the same product. Matrices are weird things this way. They’re also gateways to weirder things. But it is a productive weirdness, and I’ll explain why in a few paragraphs.
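    A couple of lines of Python, using the numpy package, make the shape rules concrete. The particular entries are arbitrary; only the shapes matter here.

        import numpy as np

        A = np.array([[1, 2, 3],
                      [4, 5, 6]])    # a matrix in R^{2 x 3}
        B = np.array([[1, 0],
                      [0, 1],
                      [2, 2]])       # a matrix in R^{3 x 2}

        print((A @ B).shape)   # (2, 2): R^{2 x 3} times R^{3 x 2} lands in R^{2 x 2}
        print((B @ A).shape)   # (3, 3): the other order gives a different size entirely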

    A matrix is a way of organizing terms. Those terms can be anything. Real matrices are surely the most common kind of matrix, at least in mathematical usage. Next in common use would be complex-valued matrices, much like how we get complex-valued vectors. These are written C^{m x n}. A complex-valued matrix is different from a real-valued matrix. The terms inside the matrix can be complex-valued numbers, instead of real-valued numbers. Again, sometimes, these mathematical terms aren’t so tricky.

    I’ve heard occasionally of people organizing matrices of other sets. The notation is similar. If you’re building a matrix of “m” rows and “n” columns out of the things you find inside a set we’ll call H, then you write that as H^{m x n}. I’m not saying you should do this, just that if you need to, that’s how to tell people what you’re doing.

    Now. We don’t really have a lot of functions that use matrices as domains, and I can think of fewer that use matrices as ranges. There are a couple of valuable ones, ones so valuable they get special names like “eigenvalue” and “eigenvector”. (Don’t worry about what those are.) They take in R^{m x n} or C^{m x n} and return a set of real- or complex-valued numbers, or real- or complex-valued vectors. Not even those, actually. Eigenvalues and eigenvectors are only meaningful if there are exactly as many rows as columns. That is, for R^{m x m} and C^{m x m}. These are known as “square” matrices, just as you might guess if you were shaken awake and ordered to say what you guessed a “square matrix” might be.

    They’re important functions. There are some other important functions, with names like “rank” and “condition number” and the like. But they’re not many. I believe they’re not even thought of as functions, any more than we think of “the length of a vector” as primarily a function. They’re just properties of these matrices, that’s all.

    So why are they worth knowing? Besides the joy that comes of knowing something, I mean?

    Here’s one answer, and the one that I find most compelling. There is cultural bias in this: I come from an applications-heavy mathematical heritage. We like differential equations, which study how stuff changes in time and in space. It’s very easy to go from differential equations to ordered sets of equations. The first equation may describe how the position of particle 1 changes in time. It might describe how the velocity of the fluid moving past point 1 changes in time. It might describe how the temperature measured by sensor 1 changes as it moves. It doesn’t matter. We get a set of these equations together and we have a majestic set of differential equations.

    Now, the dirty little secret of differential equations: we can’t solve them. Most interesting physical phenomena are nonlinear. Linear stuff is easy. Small change 1 has effect A; small change 2 has effect B. If we make small change 1 and small change 2 together, this has effect A plus B. Nonlinear stuff, though … it just doesn’t work. Small change 1 has effect A; small change 2 has effect B. Small change 1 and small change 2 together has effect … A plus B plus some weird A times B thing plus some effect C that nobody saw coming and then C does something with A and B and now maybe we’d best hide.

    There are some nonlinear differential equations we can solve. Those are the result of heroic work and brilliant insights. Compared to all the things we would like to solve there’s not many of them. Methods to solve nonlinear differential equations are as precious as ways to slay krakens.

    But here’s what we can do. What we usually like to know about in systems are equilibriums. Those are the conditions in which the system stops changing. Those are interesting. We can usually find those points by boring but not conceptually challenging calculations. If we can’t, we can declare x_0 represents the equilibrium. If we still care, we leave calculating its actual values to the interested reader or hungry grad student.

    But what’s really interesting is: what happens if we’re near but not exactly at the equilibrium? Sometimes, we stay near it. Think of pushing a swing. However good a push you give, it’s going to settle back to the boring old equilibrium of dangling straight down. Sometimes, we go racing away from it. Think of trying to balance a pencil on its tip; if we did this perfectly it would stay balanced. It never does. We’re never perfect, or there’s some wind or somebody walks by and the perfect balance is foiled. It falls down and doesn’t bounce back up. Sometimes, whether it stays near or goes away depends on what way it’s away from the equilibrium.

    And now we finally get back to matrices. Suppose we are starting out near an equilibrium. We can, usually, approximate the differential equations that describe what will happen. The approximation may only be good if we’re just a tiny bit away from the equilibrium, but that might be all we really want to know. That approximation will be some linear differential equations. (If they’re not, then we’re just wasting our time.) And that system of linear differential equations we can describe using matrices.

    If we can write what we are interested in as a set of linear differential equations, then we have won. We can use the many powerful tools of matrix arithmetic — linear algebra, specifically — to tell us everything we want to know about the system. We can say whether a small push away from the equilibrium stays small, or whether it grows, or whether it depends. We can say how fast the small push shrinks, or grows (for a while). We can say how the system will change, approximately.
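    Here’s a sketch, in Python with numpy, of the kind of question linear algebra then answers for us. The matrix is one I invented to stand in for a linearized system; the real parts of its eigenvalues tell us whether a small push shrinks away or grows.

        import numpy as np

        # An invented linearization near an equilibrium: d(x)/dt = A x.
        A = np.array([[-0.5,  1.0],
                      [-1.0, -0.5]])

        eigenvalues = np.linalg.eigvals(A)
        print(eigenvalues)                              # the pair -0.5 + 1j, -0.5 - 1j
        print(all(ev.real < 0 for ev in eigenvalues))   # True: perturbations decay
        # Negative real parts mean a small push spirals back in: stable.
        # Any positive real part would mean the push grows: unstable.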

    This is what I love in matrices. It’s not everything there is to them. But it’s enough to make matrices important to me.

     
  • Joseph Nebus 3:00 pm on Wednesday, 22 July, 2015 Permalink | Reply
    Tags: signals, stability

    A Summer 2015 Mathematics A To Z: z-transform 


    z-transform.

    The z-transform comes to us from signal processing. The signal we take to be a sequence of numbers, all representing something sampled at uniformly spaced times. The temperature at noon. The power being used, second-by-second. The number of customers in the store, once a month. Anything. The sequence of numbers we take to stretch back into the infinitely great past, and to stretch forward into the infinitely distant future. If it doesn’t, then we pad the sequence with zeroes, or some other safe number that we know means “nothing”. (That’s another classic mathematician’s trick.)

    It’s convenient to have a name for this sequence. “a” is a good one. The different sampled values are denoted by an index. a_0 represents whatever value we have at the “start” of the sample. That might represent the present. That might represent where sampling began. That might represent just some convenient reference point. It’s the equivalent of mile marker zero; we have to have something be the start.

    a_1, a_2, a_3, and so on are the first, second, third, and so on samples after the reference start. a_{-1}, a_{-2}, a_{-3}, and so on are the first, second, third, and so on samples from before the reference start. That might be the last couple of values before the present.

    So for example, suppose the temperatures the last several days were 77, 81, 84, 82, 78. Then we would probably represent this as a_{-4} = 77, a_{-3} = 81, a_{-2} = 84, a_{-1} = 82, a_0 = 78. We’ll hope this is Fahrenheit or that we are remotely sensing a temperature.

    The z-transform of a sequence of numbers is something that looks a lot like a polynomial, based on these numbers. For this five-day temperature sequence the z-transform would be the polynomial 77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0 . (z^1 is the same as z. z^0 is the same as the number “1”. I wrote it this way to make the pattern more clear.)

    I would not be surprised if you protested that this doesn’t merely look like a polynomial but actually is one. You’re right, of course, for this set, where all our samples are from negative (and zero) indices. If we had positive indices then we’d lose the right to call the transform a polynomial. Suppose we trust our weather forecaster completely, and add in a_1 = 83 and a_2 = 76. Then the z-transform for this set of data would be 77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0 + 83 \left(\frac{1}{z}\right)^1 + 76 \left(\frac{1}{z}\right)^2 . You’d probably agree that’s not a polynomial, although it looks a lot like one.

    The use of z for these polynomials is basically arbitrary. The main reason to use z instead of x is that we can learn interesting things if we imagine letting z be a complex-valued number. And z carries connotations of “a possibly complex-valued number”, especially if it’s used in ways that suggest we aren’t looking at coordinates in space. It’s not that there’s anything in the symbol x that refuses the possibility of it being complex-valued. It’s just that z appears so often in the study of complex-valued numbers that it reminds a mathematician to think of them.

    A sound question you might have is: why do this? And there’s not much advantage in going from a list of temperatures “77, 81, 84, 82, 78, 83, 76” over to a polynomial-like expression 77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0 + 83 \left(\frac{1}{z}\right)^1 + 76 \left(\frac{1}{z}\right)^2 .

    Where this starts to get useful is when we have an infinitely long sequence of numbers to work with. Yes, it does too. It will often turn out that an interesting sequence transforms into a polynomial that itself is equivalent to some easy-to-work-with function. My little temperature example there won’t do it, no. But consider the sequence that’s zero for all negative indices, and 1 for the zero index and all positive indices. This gives us the polynomial-like structure \cdots + 0z^2 + 0z^1 + 1 + 1\left(\frac{1}{z}\right)^1 + 1\left(\frac{1}{z}\right)^2 + 1\left(\frac{1}{z}\right)^3 + 1\left(\frac{1}{z}\right)^4 + \cdots . And that turns out to be the same as 1 \div \left(1 - \left(\frac{1}{z}\right)\right) . That’s much shorter to write down, at least.
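
    If you do not quite trust the geometric series doing the work there, a numerical check is easy. This check is mine, and it assumes z is bigger than 1 in size so that the infinite sum settles down.

        z = 1.5   # any z with size bigger than 1 will do
        partial_sum = sum((1 / z) ** k for k in range(200))
        closed_form = 1 / (1 - 1 / z)
        print(partial_sum, closed_form)   # the two agree to machine precision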

    Probably you’ll grant that, but still wonder what the point of doing that is. Remember that we started by thinking of signal processing. A processed signal is a matter of transforming your initial signal. By this we mean multiplying your original signal by something, or adding something to it. For example, suppose we want a five-day running average temperature. This we can find by taking one-fifth of today’s temperature, a_0, and adding to that one-fifth of yesterday’s temperature, a_{-1}, and one-fifth of the day before’s temperature, a_{-2}, and one-fifth of a_{-3}, and one-fifth of a_{-4}.
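
    That running average is short enough to write out directly. A sketch, mine, using the essay’s temperatures:

        temperatures = [77, 81, 84, 82, 78]   # a_{-4} through a_0

        # One-fifth of each of the last five days, just as described.
        running_average = sum(t / 5 for t in temperatures)
        print(running_average)   # 80.4

        # In transform language this multiplies the signal's z-transform by
        # (1/5) * (1 + 1/z + (1/z)**2 + (1/z)**3 + (1/z)**4).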

    The effect of processing a signal is equivalent to manipulating its z-transform. By studying properties of the z-transform, such as where its values are zero or where they are imaginary or where they are undefined, we learn things about what the processing is like. We can tell whether the processing is stable — does it keep a small error in the original signal small, or does it magnify it? Does it serve to amplify parts of the signal and not others? Does it dampen unwanted parts of the signal while keeping the main signal intact?
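
    Here is a toy version of that stability question, my example rather than anything from the essay. A feedback process whose output is c times yesterday’s output plus today’s input has transfer function 1 ÷ (1 - c/z), which has a pole at z = c. Feed it a single small blip of input and watch what happens with the pole inside and then outside the unit circle.

        for c in (0.5, 1.5):
            u = 0.0
            signal = [1.0] + [0.0] * 20   # one small blip of input, then nothing
            for a in signal:
                u = c * u + a             # the feedback process itself
            print(c, u)   # tiny leftover for c = 0.5, enormous for c = 1.5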

    We can understand how data will be changed by understanding the z-transform of the way we manipulate it. That z-transform turns a signal-processing idea into a complex-valued function. And we have a lot of tools for studying complex-valued functions. So we become able to say a lot about the processing. And that is what the z-transform gets us.

     
    • sheldonk2014 4:45 pm on Wednesday, 22 July, 2015 Permalink | Reply

      Do you go to that pinball place in New Jersey


      • Joseph Nebus 1:46 am on Thursday, 23 July, 2015 Permalink | Reply

        When I’m able to, yes! Fortunately work gives me occasional chances to revisit my ancestral homeland and from there it’s a quite reasonable drive to Asbury Park and the Silverball Museum. It’s a great spot and I recommend it highly.

        There’s apparently also a retro arcade in Redbank, with a dozen or so pinball machines and a fair number of old video games. I’ve not been there yet, though.


    • howardat58 2:18 am on Thursday, 23 July, 2015 Permalink | Reply

      Here is a bit more.

      z is used in dealing with recurrence relations and their active form, with input as well, in the form of the “z transfer function”:
      a(n) is the input at time n, u(n) is the output at time n, these can be viewed as sequences
      u(n+1) = u(n) + a(n+1) represents the integral/accumulation/sum of series for the input process
      z is considered as an operator which moves the whole sequence back one step,
      Applied to the sequence equation shown you get u(n+1) = zu(n),
      and the equation becomes
      zu(n) = u(n) + za(n)
      Now since everything has (n) we don’t need it, and get
      zu = u + za
      Solving for u gives
      u = z/(z-1)a
      which describes the behaviour of the output for a given sequence of inputs
      z/(z-1) is called the transfer function of the input/output system
      and in this case of summation or integration the expression z/(z-1) represents the process of adding up the terms of the sequence.
      One nice thing is that if you do all of this for the successive differences process
      u(n+1) = a(n+1) – a(n)
      you get the transfer function (z-1)/z, the discrete differentiation process.


      • Joseph Nebus 2:11 pm on Saturday, 25 July, 2015 Permalink | Reply

        That’s a solid example of using these ideas. May I bump it up to a main post in the next couple days so that (hopefully) more people catch it?


  • Joseph Nebus 2:35 pm on Wednesday, 15 July, 2015 Permalink | Reply
    Tags: , , inverse problems, problems, , stability, variations   

    A Summer 2015 Mathematics A To Z: well-posed problem 


    Well-Posed Problem.

    This is another mathematical term almost explained by what the words mean in English. Probably you’d guess a well-posed problem to be a question whose answer you can successfully find. This also implies that there is an answer, and that it can be found by some method other than guessing luckily.

    Mathematicians demand three things of a problem to call it “well-posed”. The first is that a solution exists. The second is that a solution has to be unique. It’s imaginable there might be several different answers that all fit a problem. In that case we weren’t specific enough about what we’re looking for. Or we should have been looking for a set of answers instead of a single answer.

    The third requirement takes some time to understand. It’s that the solution has to vary continuously with the initial conditions. That is, suppose we started with a slightly different problem. If the answer would look about the same, then the problem was well-posed to begin with. Suppose we’re looking at the problem of how a block of ice gets melted by a heater set in its center. The way that melts won’t change much if the heater is a little bit hotter, or if it’s moved a little bit off center. This heating problem is well-posed.

    There are problems that don’t have this continuous variation, though. Typically these are “inverse problems”. That is, they’re problems in which you look at the outcome of something and try to say what caused it. That would be looking at the puddle of melted water and the heater and trying to say what the original block of ice looked like. There are a lot of blocks of ice that all look about the same once melted, and there’s no way of telling which was the one you started with.

    You might think of these conditions as “there’s an answer, there’s only one answer, and you can find it”. That’s good enough as a memory aid, but it isn’t quite so. A problem’s solution might have this continuous variation, but still be “numerically unstable”. This is a difficulty you can run across when you try doing calculations on a computer.

    You know the thing where on a calculator you type in 1 / 3 and get back 0.333333? And you multiply that by three and get 0.999999 instead of exactly 1? That’s the thing that underlies numerical instability. We want to work with numbers, but the calculator or computer will let us work with only an approximation to them. 0.333333 is close to 1/3, but isn’t exactly that.

    For many calculations the difference doesn’t matter. 0.999999 is really quite close to 1. If you lost 0.000001 parts of every dollar you earned there’s a fine chance you’d never even notice. But in some calculations, numerically unstable ones, that difference matters. It gets magnified until the error created by the difference between the number you want and the number you can calculate with is too big to ignore. In that case we call the calculation we’re doing “ill-conditioned”.
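
    If you would like to watch this happen, here is a tiny demonstration of my own; the recurrence is chosen for illustration, not taken from any real application. The number 1/3 stays put under the rule that turns x into 10x - 3, so the iteration should change nothing. But the computer keeps only an approximation to 1/3, and each pass through the loop multiplies the approximation’s error by ten.

        x = 1.0 / 3.0         # not exactly one-third, just very close
        for _ in range(20):
            x = 10 * x - 3    # exactly one-third would stay put forever
        print(x)              # nowhere near 0.333...; the tiny error grew

        # Double-precision arithmetic is better-behaved than a six-digit
        # calculator: here 3 * (1.0 / 3.0) happens to round back to exactly 1.0.
        # The magnification effect is the same idea, though.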

    And it’s possible for a problem to be well-posed but ill-conditioned. This is annoying and is why numerical mathematicians earn the big money, or will tell you they should. Trying to calculate the answer will be so likely to give something meaningless that we can’t trust the work that’s done. But often it’s possible to rework a calculation into something equivalent but well-conditioned. And a well-posed, well-conditioned problem is great. Not only can we find its solution, but we can usually have a computer do the calculations, and that’s a great breakthrough.

     
  • Joseph Nebus 4:15 pm on Saturday, 13 June, 2015 Permalink | Reply
    Tags: , , , stability,   

    Conditions of equilibrium and stability 


    This month Peter Mander’s CarnotCycle blog talks about the interesting world of statistical equilibriums. And particularly it talks about stable equilibriums. A system’s in equilibrium if it isn’t going to change over time. It’s in a stable equilibrium if being pushed a little bit out of equilibrium isn’t going to make the system unpredictable.

    For simple physical problems these are easy to understand. For example, a marble resting at the bottom of a spherical bowl is in a stable equilibrium. At the exact bottom of the bowl, the marble won’t roll away. If you give the marble a little nudge, it’ll roll around, but it’ll stay near where it started. A marble sitting on the top of a sphere is in an equilibrium — if it’s perfectly balanced it’ll stay where it is — but it’s not a stable one. Give the marble a nudge and it’ll roll away, never to come back.
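
    In one dimension this marble picture turns into a tidy calculus test: an equilibrium sits where the potential energy has zero slope, and it is stable where the potential curves upward. Here is a sketch of that test, mine rather than CarnotCycle’s, using the sympy library.

        import sympy as sp

        x = sp.symbols('x')
        bowl = x**2     # marble at the bottom of a bowl
        dome = -x**2    # marble balanced on top of a sphere, roughly

        for V in (bowl, dome):
            for eq in sp.solve(sp.diff(V, x), x):    # where the slope is zero
                curvature = sp.diff(V, x, 2).subs(x, eq)
                print(V, "at", eq, "is", "stable" if curvature > 0 else "unstable")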

    In statistical mechanics we look at complicated physical systems, ones with thousands or millions or even really huge numbers of particles interacting. But there are still equilibriums, some stable, some not. In these, stuff will still happen, but the kind of behavior doesn’t change. Think of a steadily-flowing river: none of the water is staying still, or close to it, but the river isn’t changing.

    CarnotCycle describes how to tell, from properties like temperature and pressure and entropy, when systems are in a stable equilibrium. These are properties that don’t tell us a lot about what any particular particle is doing, but they can describe the whole system well. The essay is higher-level than usual for my blog. But if you’re taking a statistical mechanics or thermodynamics course this is just the sort of essay you’ll find useful.


    carnotcycle


    In terms of simplicity, purely mechanical systems have an advantage over thermodynamic systems in that stability and instability can be defined solely in terms of potential energy. For example the center of mass of the tower at Pisa, in its present state, must be higher than in some infinitely near positions, so we can conclude that the structure is not in stable equilibrium. This will only be the case if the tower attains the condition of metastability by returning to a vertical position or absolute stability by exceeding the tipping point and falling over.


    Thermodynamic systems lack this simplicity, but in common with purely mechanical systems, thermodynamic equilibria are always metastable or stable, and never unstable. This is equivalent to saying that every spontaneous (observable) process proceeds towards an equilibrium state, never away from it.

    If we restrict our attention to a thermodynamic system of unchanging composition and apply…

    View original post 2,534 more words

     
    • sheldonk2014 4:29 pm on Saturday, 13 June, 2015 Permalink | Reply

      I love these theories, great break down of physics, makes me want to look closer at life


      • Joseph Nebus 2:19 am on Tuesday, 16 June, 2015 Permalink | Reply

        Well, thank you. If you can feel inspired to learn about remarkable things then I’m quite happy.


  • Joseph Nebus 2:00 am on Tuesday, 5 June, 2012 Permalink | Reply
    Tags: , , , , parallel axis theorem, , , , stability,   

    A Second Way To Fall Over 


    I admit not being perfectly satisfied with my answer, about whether a box is easier to tip over by pushing on the middle of one of its top edges or by pushing on its corner, just by looking at the energy each approach needs to raise the box’s center of mass above the pivot. It’s straightforward enough, but I don’t do this sort of calculation often, so maybe I’m looking at the wrong things. Can I find another, independent, line of argument? If I can, does that get to the same answer? If it does, good. If it doesn’t, then I get to wonder which line of argument I believe in more. So here’s one.

    (More …)

     
  • Joseph Nebus 12:15 am on Saturday, 2 June, 2012 Permalink | Reply
    Tags: axle, center of mass, , , , , pivot point, , stability, , violin   

    One Way To Fall Over 


    [ Huh. My statistics page says that someone came to me yesterday looking for the “mathematics behind rap music”. I don’t doubt there is such mathematics, but I’ve never written anything approaching it. I admit that despite the long intertwining of mathematics and music, and my own childhood of playing a three-quarter size violin in a way that must be characterized as “technically playing”, I don’t know anything nontrivial about the mathematics of any music. So, whoever was searching for that, I’m sorry to have disappointed you. ]

    Now, let me try my first guess at saying whether it’s easier to tip the cube over by pushing along the middle of the edge or by pushing at the corner. Last time around I laid out the ground rules, and particularly the symbols used for the size of the box (it’s of length a) and how far the center of mass (the dead center of the box) is from the edges and the corners. Here’s my first thought about what has to be done to tip the box over: we have to make the box pivot on some point — along one edge, if we’re pushing on the edge; along one corner, if we’re pushing on the corner — and so make it start to roll. If we can raise the center of mass above the pivot then we can drop the box back down with some other face to the floor, which has to count as tipping the box over. If we don’t raise the center of mass we aren’t tipping the box at all, we’re just shoving it.
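
    The center-of-mass geometry that criterion needs is quick to work out. This sketch is mine and only does the bookkeeping described above; the post’s own analysis continues past the fold.

        import math

        a = 1.0          # side of the cube
        start = a / 2    # height of the center of mass at rest

        # Rolling over a pivot, the center of mass peaks at a height equal to
        # its distance from that pivot.
        over_edge = a / math.sqrt(2)          # pivot along a bottom edge
        over_corner = a * math.sqrt(3) / 2    # pivot at a bottom corner

        print("rise to tip over an edge:  ", over_edge - start)    # about 0.207
        print("rise to tip over a corner:", over_corner - start)   # about 0.366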

    (More …)

     
  • Joseph Nebus 12:54 am on Friday, 1 June, 2012 Permalink | Reply
    Tags: , cube, , , , Pythagorean Theorem, stability,   

    Tipping The Toy 


    My brother phoned to remind me how much more generally nervous I should be about things, as well as to ask my opinion in an utterly pointless dispute he was having with his significant other. The dispute was over no stakes whatsoever and had no consequences of any practical value so I can see why it’d call for an outside expert. It’s more one of physics, but I did major in physics long ago, and it’s easier to treat mathematically anyway, and it was interesting enough that I spent the rest of the night working it out and I’m still not positive I’m unambiguously right. I could probably find out for certain with some simple experiments, but that would be precariously near trying, and so is right out. Let me set up the problem, though, since it’s interesting and should offer room for people to argue I’m completely wrong.

    (More …)

     