Updates from June, 2017

  • Joseph Nebus 5:00 pm on Friday, 16 June, 2017

    Why Stuff Can Orbit, Part 9: How The Spring In The Cosmos Behaves 


    Previously:

    And the supplemental reading:


    First, I thank Thomas K Dye for the banner art I have for this feature! Thomas is the creator of the longrunning web comic Newshounds. He’s hoping soon to finish up special editions of some of the strip’s stories and to publish a definitive edition of the comic’s history. He’s also got a Patreon account to support his art habit. Please give his creations some of your time and attention.

    Now back to central forces. I’ve run out of obvious fun stuff to say about a mass that’s in a circular orbit around the center of the universe. Before you question my sense of fun, remember that I own multiple pop histories about the containerized cargo industry and last month I read another one that’s changed my mind about some things. These sorts of problems cover a lot of stuff. They cover planets orbiting a sun and blocks of wood connected to springs. That’s about all we do in high school physics anyway. Well, there’s spheres colliding, but there’s no making a central force problem out of those. You can also make some things that look like bad quantum mechanics models out of central forces. The mathematics is interesting even if the results don’t match anything in the real world.

    But I’m sticking with central forces that look like powers. These have potential energy functions with rules that look like V(r) = C r^n. So far, ‘n’ can be any real number. It turns out ‘n’ has to be larger than -2 for a circular orbit to be stable, but that’s all right. There are lots of numbers larger than -2. ‘n’ carries the connotation of being an integer, a whole (positive or negative) number. But if we want to let it be any old real number like 0.1 or π or 18 and three-sevenths that’s fine. We make a note of that fact and remember it right up to the point we stop pretending to care about non-integer powers. I estimate that’s like two entries off.

    We get a circular orbit by setting the thing that orbits in … a circle. This sounded smarter before I wrote it out like that. Well. We set it moving perpendicular to the “radial direction”, which is the line going from wherever it is straight to the center of the universe. This perpendicular motion means there’s a non-zero angular momentum, which we write as ‘L’ for some reason. For each angular momentum there’s a particular radius that allows for a circular orbit. Which radius? It’s whatever one is a minimum for the effective potential energy:

    V_{eff}(r) = Cr^n + \frac{L^2}{2m}r^{-2}

    This we can find by taking the first derivative of ‘Veff‘ with respect to ‘r’ and finding where that first derivative is zero. This is standard mathematics stuff, quite routine. We can do this with any function, whether it represents something physical or not. So:

    \frac{dV_{eff}}{dr} = nCr^{n-1} - 2\frac{L^2}{2m}r^{-3} = 0

    And after some work, this gets us to the circular orbit’s radius:

    r = \left(\frac{L^2}{nCm}\right)^{\frac{1}{n + 2}}
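
    If you like checking this sort of claim with a computer, here is a little Python sketch that does it numerically. The constants are made up (nothing in this essay fixes ‘n’, ‘C’, ‘m’, or ‘L’), but for any pick with nC positive the minimum of the effective potential lands right where the formula says.

```python
import numpy as np

# Made-up constants; the essay fixes none of these. We just need n*C > 0
# so the effective potential has a minimum at all.
n, C, m, L = 3.0, 2.0, 1.5, 4.0

def v_eff(r):
    """Effective potential V_eff(r) = C r^n + (L^2 / 2m) r^-2."""
    return C * r**n + L**2 / (2 * m) * r**-2

# The claimed circular-orbit radius.
a = (L**2 / (n * C * m))**(1 / (n + 2))

# Brute-force check: scan radii near a and find where V_eff is smallest.
rs = np.linspace(0.5 * a, 1.5 * a, 100001)
r_min = rs[np.argmin(v_eff(rs))]
print(a, r_min)   # the two agree to about the grid spacing
```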

    What I’d like to talk about is if we’re not quite at that radius. If we set the planet (or whatever) a little bit farther from the center of the universe. Or a little closer. Same angular momentum though, so the equilibrium, the circular orbit, should be in the same spot. It happens there isn’t a planet there.

    This enters us into the world of perturbations, which is where most of the big money in mathematical physics is. A perturbation is a little nudge away from an equilibrium. What happens in response to the little nudge is interesting stuff. And here we already know, qualitatively, what’s going to happen: the planet is going to rock around the equilibrium. This is because the circular orbit is a stable equilibrium. I’d described that qualitatively last time. So now I want to talk quantitatively about how the perturbation changes over time.

    Before I get there I need to introduce another bit of notation. It is so convenient to be able to talk about the radius of the circular orbit that would be the equilibrium. I’d called that ‘r’ up above. But I also need to be able to talk about how far the perturbed planet is from the center of the universe. That’s also really hard not to call ‘r’. Something has to give. Since the radius of the circular orbit is not going to change I’m going to give that a new name. I’ll call it ‘a’. There’s several reasons for this. One is that ‘a’ is commonly used for describing the size of ellipses, which turn up in actual real-world planetary orbits. That’s something we know because this is like the thirteenth part of an essay series about the mathematics of orbits. You aren’t reading this if you haven’t picked up a couple things about orbits on your own. Also we’ve used ‘a’ before, in these sorts of approximations. It was handy in the last supplemental as the point of expansion’s name. So let me make that unmistakable:

    a \equiv r = \left(\frac{L^2}{nCm}\right)^{\frac{1}{n + 2}}

    The \equiv there means “defined to be equal to”. You might ask how this is different from “equals”. It seems like more emphasis to me. Also, there are other names for the circular orbit’s radius that I could have used. ‘r_e’ would be good enough, as the subscript would suggest “radius of equilibrium”. Or ‘r_0’ would be another popular choice, the 0 suggesting that this is something of key, central importance and also looking kind of like a circle. (That’s probably coincidence.) I like the ‘a’ better there because I know how easy it is to drop a subscript. If you’re working on a problem for yourself that’s easy to fix, with enough cursing and redoing your notes. On a board in front of class it’s even easier to fix since someone will ask about the lost subscript within three lines. In a post like this? It would be a mess.

    So now I’m going to look at possible values of the radius ‘r’ that are close to ‘a’. How close? Close enough that ‘Veff‘, the effective potential energy, looks like a parabola. If it doesn’t look much like a parabola then I look at values of ‘r’ that are even closer to ‘a’. (Do you see how the game is played? If you don’t, look closer. Yes, this is actually valid.) If ‘r’ is that close to ‘a’, then we can get away with this polynomial expansion:

    V_{eff}(r) \approx V_{eff}(a) + m\cdot(r - a) + \frac{1}{2} m_2 (r - a)^2

    where

    m = \frac{dV_{eff}}{dr}\left(a\right)	\\ m_2  = \frac{d^2V_{eff}}{dr^2}\left(a\right)

    The “approximate” there is because this is an approximation. V_{eff}(r) is in truth equal to the thing on the right-hand-side there plus something that isn’t (usually) zero, but that is small.

    I am sorry beyond my ability to describe that I didn’t make that ‘m’ and ‘m2‘ consistent last week. That’s all right. One of these is going to disappear right away.

    Now, what is V_{eff}(a) ? Well, that’s whatever you get from putting in ‘a’ wherever you start out seeing ‘r’ in the expression for V_{eff}(r) . I’m not going to bother with that. Call it math, fine, but that’s just a search-and-replace on the character ‘r’. Also, where I’m going next, it’s going to disappear, never to be seen again, so who cares? What’s important is that this is a constant number. If ‘r’ changes, the value of V_{eff}(a) does not, because ‘r’ doesn’t appear anywhere in V_{eff}(a) .

    How about ‘m’? That’s the value of the first derivative of ‘Veff‘ with respect to ‘r’, evaluated when ‘r’ is equal to ‘a’. That might be something. It’s not, because of what ‘a’ is. It’s the value of ‘r’ which would make \frac{dV_{eff}}{dr}(r) equal to zero. That’s why ‘a’ has that value instead of some other, any other.

    So we’ll have a constant part ‘Veff(a)’, plus a zero part, plus a part that’s a parabola. This is normal, by the way, when we do expansions around an equilibrium. At least it’s common. Good to see it. To find ‘m2‘ we have to take the second derivative of ‘Veff(r)’ and then evaluate it when ‘r’ is equal to ‘a’ and ugh but here it is.

    \frac{d^2V_{eff}}{dr^2}(r) = n (n - 1) C r^{n - 2} + 3\cdot\frac{L^2}{m}r^{-4}

    And at the point of approximation, where ‘r’ is equal to ‘a’, it’ll be:

    m_2 = \frac{d^2V_{eff}}{dr^2}(a) = n (n - 1) C a^{n - 2} + 3\cdot\frac{L^2}{m}a^{-4}

    We know exactly what ‘a’ is so we could write that out in a nice big expression. You don’t want to. I don’t want to. It’s a bit of a mess. I mean, it’s not hard, but it has a lot of symbols in it and oh all right. Here. Look fast because I’m going to get rid of that as soon as I can.

    m_2 = \frac{d^2V_{eff}}{dr^2}(a) = n (n - 1) C \left(\frac{L^2}{n C m}\right)^{\frac{n - 2}{n + 2}} + 3\cdot\frac{L^2}{m}\left(\frac{L^2}{n C m}\right)^{-\frac{4}{n + 2}}
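
    That expression is checkable too. Here is a quick numerical confirmation, reusing the made-up constants from the sketch above: the closed form for ‘m2’ written with ‘a’, the big expanded version, and a finite-difference estimate of the second derivative all agree.

```python
import numpy as np

n, C, m, L = 3.0, 2.0, 1.5, 4.0   # same made-up constants as before

def v_eff(r):
    return C * r**n + L**2 / (2 * m) * r**-2

a = (L**2 / (n * C * m))**(1 / (n + 2))

# Closed form for m_2 written with a ...
m2 = n * (n - 1) * C * a**(n - 2) + 3 * L**2 / m * a**-4
# ... and with a expanded out, as in the big expression above.
base = L**2 / (n * C * m)
m2_expanded = (n * (n - 1) * C * base**((n - 2) / (n + 2))
               + 3 * L**2 / m * base**(-4 / (n + 2)))

# Central-difference estimate of the second derivative at r = a.
h = 1e-5
m2_numeric = (v_eff(a + h) - 2 * v_eff(a) + v_eff(a - h)) / h**2

print(m2, m2_expanded, m2_numeric)   # all three agree
```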

    For the values of ‘n’ that we actually care about because they turn up in real actual physics problems this expression simplifies some. Enough, anyway. If we pretend we know nothing about ‘n’ besides that it is a number bigger than -2 then … ugh. We don’t have a lot that can clean it up.

    So I’ll contain the mess instead, by defining an auxiliary little function. Its role is to hold our symbolic sprawl. It has a legitimate role too, though. At least it represents something that it makes sense to give a name. It will be a new function, named ‘F’, that depends on the radius ‘r’:

    F(r) \equiv -\frac{dV}{dr}

    Notice that’s the derivative of the original ‘V’, not the angular-momentum-equipped ‘Veff‘. This is the secret of its power. It doesn’t do anything to make V_{eff}(r) easier to work with. It starts being good when we take its derivatives, though:

    \frac{dV_{eff}}{dr} = -F(r) - \frac{L^2}{m}r^{-3}

    That already looks nicer, doesn’t it? It’s going to be really slick when you think about what ‘F(a)’ is. Remember that ‘a’ is the value for ‘r’ which makes the derivative of ‘Veff‘ equal to zero. So … I may not know much, but I know this:

    0 = \frac{dV_{eff}}{dr}(a) = -F(a) - \frac{L^2}{m}a^{-3} \qquad F(a) = -\frac{L^2}{ma^3}

    I’m not going to say what value F(r) has for other values of ‘r’ because I don’t care. But now look at what it does for the second derivative of ‘Veff‘:

    \frac{d^2 V_{eff}}{dr^2}(r) = -F'(r) + 3\frac{L^2}{mr^4}

    Here the ‘F'(r)’ is a shorthand way of writing ‘the derivative of F with respect to r’. You can do that when there’s only the one free variable to consider. And now something magic happens when we look at the second derivative of ‘Veff‘ when ‘r’ is equal to ‘a’ …

    \frac{d^2 V_{eff}}{dr^2}(a) = -F'(a) - \frac{3}{a} F(a)

    We get away with this because we happen to know that ‘F(a)’ is equal to -\frac{L^2}{ma^3} and doesn’t that work out great? We’ve turned a symbolic mess into a … less symbolic mess.

    Now why do I say it’s legitimate to introduce ‘F(r)’ here? It’s because minus the derivative of the potential energy with respect to the position of something can be something of actual physical interest. It’s the amount of force exerted on the particle by that potential energy at that point. The amount of force on a thing is something that we could imagine being interested in. Indeed, we’d have used that except potential energy is usually so much easier to work with. I’ve avoided it up to this point because it wasn’t giving me anything I needed. Here, I embrace it because it will save me from some awful lines of symbols.

    Because with this expression in place I can write the approximation to the effective potential energy as:

    V_{eff}(r) \approx V_{eff}(a) + \frac{1}{2} \left( -F'(a) - \frac{3}{a}F(a) \right) (r - a)^2
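
    Here is a sketch of that quadratic approximation in action, again with the made-up constants from above. The coefficient built from ‘F’ matches the second derivative, and the parabola hugs the true effective potential more tightly the closer ‘r’ sits to ‘a’.

```python
import numpy as np

n, C, m, L = 3.0, 2.0, 1.5, 4.0   # same made-up constants as before

def v_eff(r):
    return C * r**n + L**2 / (2 * m) * r**-2

a = (L**2 / (n * C * m))**(1 / (n + 2))

# F = -dV/dr for V = C r^n, and its derivative, both in closed form.
F  = lambda r: -n * C * r**(n - 1)
Fp = lambda r: -n * (n - 1) * C * r**(n - 2)

coeff = -Fp(a) - (3 / a) * F(a)         # this is m_2 in the F notation

def v_approx(r):
    return v_eff(a) + 0.5 * coeff * (r - a)**2

for r in (0.99 * a, 1.001 * a, 1.01 * a):
    print(v_eff(r), v_approx(r))        # close, and closer the nearer r is to a
```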

    So if ‘r’ is close to ‘a’, then the polynomial on the right is a good enough approximation to the effective potential energy. And that potential energy has the shape of a spring’s potential energy. We can use what we know about springs to describe its motion. Particularly, we’ll have this be true:

    \frac{dp}{dt} = -\frac{dV_{eff}}{dr}(r) = -\left( F'(a) + \frac{3}{a} F(a)\right) (r - a)

    Here, ‘p’ is the (linear) momentum of whatever’s orbiting, which we can treat as equal to ‘m\frac{dr}{dt}’, the mass of the orbiting thing times how fast its distance from the center is changing. You may sense in me some reluctance about doing this, what with that ‘we can treat as equal to’ talk. There’s reasons for this and I’d have to get deep into geometry to explain why. I can get away with specifically this use because the problem allows it. If you’re trying to do your own original physics problem inspired by this thread, and it’s not orbits like this, be warned. This is a spot that could open up to a gigantic danger pit, lined at the bottom with sharp spikes and angry poison-clawed mathematical tigers and I bet it’s raining down there too.

    So we can rewrite all this as

    m\frac{d^2r}{dt^2} = -\frac{dV_{eff}}{dr}(r) = -\left( F'(a) + \frac{3}{a} F(a)\right) (r - a)

    And when we learned everything interesting there was to know about springs we learned what the solutions to this look like. Oh, in that essay the variable that changed over time was called ‘x’ and here it’s the displacement ‘r - a’, but that’s not an actual difference. The displacement will be some sinusoidal curve:

    r(t) - a = A \cos\left(\sqrt{\frac{k}{m}} t\right) + B \sin\left(\sqrt{\frac{k}{m}} t\right)

    where, here, ‘k’ is equal to that whole mass of constants on the right-hand side:

    k = -\left( F'(a) + \frac{3}{a} F(a)\right)

    I don’t know what ‘A’ and ‘B’ are. It’ll depend on just what the perturbation is like, how far the planet is from the circular orbit. But I can tell you what the behavior is like. The planet will wobble back and forth around the circular orbit, sometimes closer to the center, sometimes farther away. It’ll spend as much time closer to the center than the circular orbit as it does farther away. And the period of that oscillation will be

    T = 2\pi\sqrt{\frac{m}{k}} = 2\pi\sqrt{\frac{m}{-\left(F'(a) + \frac{3}{a}F(a)\right)}}
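
    And that period is testable. The sketch below (same made-up constants as earlier) integrates the radial motion for a one-percent nudge with a simple semi-implicit Euler stepper and times the wobble; the measured period comes out close to the formula’s.

```python
import numpy as np

n, C, m, L = 3.0, 2.0, 1.5, 4.0   # same made-up constants as before

a = (L**2 / (n * C * m))**(1 / (n + 2))
F  = lambda r: -n * C * r**(n - 1)           # F = -dV/dr for V = C r^n
Fp = lambda r: -n * (n - 1) * C * r**(n - 2)

k = -(Fp(a) + (3 / a) * F(a))
T_predicted = 2 * np.pi * np.sqrt(m / k)

def dv_eff(r):                                # full dV_eff/dr, no approximation
    return n * C * r**(n - 1) - L**2 / m * r**-3

# Nudge the orbit out by one percent and watch it wobble.
r, v, dt, t = 1.01 * a, 0.0, 1e-5, 0.0
crossings = []
while len(crossings) < 3:
    v -= dv_eff(r) / m * dt                   # semi-implicit Euler step
    r_new = r + v * dt
    if (r - a) * (r_new - a) < 0:             # crossed the equilibrium radius
        crossings.append(t)
    r, t = r_new, t + dt

T_measured = 2 * (crossings[2] - crossings[1])  # half a period per crossing gap
print(T_predicted, T_measured)                  # close, as promised
```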

    This tells us something about what the orbit of a thing not in a circular orbit will be like. Yes, I see you in the back there, quivering with excitement about how we’ve got to elliptical orbits. You’re moving too fast. We haven’t got that. There will be elliptical orbits, yes, but only for a very particular power ‘n’ for the potential energy. Not for most of them. We’ll see.

    It might strike you there’s something in that square root. We need to take the square root of a positive number, so maybe this will tell us something about what kinds of powers we’re allowed. It’s a good thought. It turns out not to tell us anything useful, though. Suppose we started with V(r) = Cr^n . Then F(r) = -nCr^{n - 1}, and F'(r) = -n(n - 1)Cr^{n - 2} . This leads us on a journey which reveals that we need ‘n’ to be larger than -2 or else we don’t get oscillations around a circular orbit. We already knew that, though. We already found we needed it to have a stable equilibrium before. We can read the absence of a period for these oscillations as another way of saying the circular orbit isn’t stable. Sad to say, we haven’t got something new out of this.

    We will get to new stuff, though. Maybe even ellipses.

     
  • Joseph Nebus 4:00 pm on Wednesday, 7 June, 2017

    What Second Derivatives Are And What They Can Do For You 


    Previous supplemental reading for Why Stuff Can Orbit:


    This is another supplemental piece because it’s too much to include in the next bit of Why Stuff Can Orbit. I need some more stuff about how a mathematical physicist would look at something.

    This is also a story about approximations. A lot of mathematics is really about approximations. I don’t mean numerical computing. We all know that when we compute we’re making approximations. We use 0.333333 instead of one-third and we use 3.141592 instead of π. But a lot of precise mathematics, what we call analysis, is also about approximations. We do this by a logical structure that works something like this: take something we want to prove. Now for every positive number ε we can find something — a point, a function, a curve — that’s no more than ε away from the thing we’re really interested in, and which is easier to work with. Then we prove whatever we want to with the easier-to-work-with thing. And since ε can be as tiny a positive number as we want, we can suppose ε is a tinier difference than we can hope to measure. And so the difference between the thing we’re interested in and the thing we’ve proved something interesting about is zero. (This is the part that feels like we’re pulling a scam. We’re not, but this is where it’s worth stopping and thinking about what we mean by “a difference between two things”. When you feel confident this isn’t a scam, continue.) So we proved whatever we proved about the thing we’re interested in. Take an analysis course and you will see this all the time.

    When we get into mathematical physics we do a lot of approximating functions with polynomials. Why polynomials? Yes, because everything is polynomials. But also because polynomials make so much mathematical physics easy. Polynomials are easy to calculate, if you need numbers. Polynomials are easy to integrate and differentiate, if you need analysis. Here that’s the calculus that tells you about patterns of behavior. If you want to approximate a continuous function you can always do it with a polynomial. The polynomial might have to be infinitely long to approximate the entire function. That’s all right. You can chop it off after finitely many terms. This finite polynomial is still a good approximation. It’s just good for a smaller region than the infinitely long polynomial would have been.

    Necessary qualifiers: pages 65 through 82 of any book on real analysis.

    So. Let me get to functions. I’m going to use a function named ‘f’ because I’m not wasting my energy coming up with good names. (When we get back to the main Why Stuff Can Orbit sequence this is going to be ‘U’ for potential energy or ‘E’ for energy.) It’s got a domain that’s the real numbers, and a range that’s the real numbers. To express this in symbols I can write f: \Re \rightarrow \Re . If I have some number called ‘x’ that’s in the domain then I can tell you what number in the range is matched by the function ‘f’ to ‘x’: it’s the number ‘f(x)’. You were expecting maybe 3.5? I don’t know that about ‘f’, not yet anyway. The one thing I do know about ‘f’, because I insist on it as a condition for appearing, is that it’s continuous. It hasn’t got any jumps, any gaps, any regions where it’s not defined. You could draw a curve representing it with a single, if wriggly, stroke of the pen.

    I mean to build an approximation to the function ‘f’. It’s going to be a polynomial expansion, a set of things to multiply and add together that’s easy to find. To make this polynomial expansion I need to choose some point to build the approximation around. Mathematicians call this the “point of expansion” because we froze up in panic when someone asked what we were going to name it, okay? But how are we going to make an approximation to a function if we don’t have some particular point we’re approximating around?

    (One answer we find in grad school when we pick up some stuff from linear algebra we hadn’t been thinking about. We’ll skip it for now.)

    I need a name for the point of expansion. I’ll use ‘a’. Many mathematicians do. Another popular name for it is ‘x0‘. Or if you’re using some other variable name for stuff in the domain then whatever that variable is with subscript zero.

    So my first approximation to the original function ‘f’ is … oh, shoot, I should have some new name for this. All right. I’m going to use ‘F0‘ as the name. This is because it’s one of a set of approximations, each of them a little better than the old. ‘F1‘ will be better than ‘F0‘, but ‘F2‘ will be even better, and ‘F2038‘ will be way better yet. I’ll also say something about what I mean by “better”, although you’ve got some sense of that already.

    I start off by calling the first approximation ‘F0‘ by the way because you’re going to think it’s too stupid to dignify with a number as big as ‘1’. Well, I have other reasons, but they’ll be easier to see in a bit. ‘F0‘, like all its sibling ‘Fn‘ functions, has a domain of the real numbers and a range of the real numbers. The rule defining how to go from a number ‘x’ in the domain to some real number in the range?

    F^0(x) = f(a)

    That is, this first approximation is simply whatever the original function’s value is at the point of expansion. Notice that’s an ‘x’ on the left side of the equals sign and an ‘a’ on the right. This seems to challenge the idea of what an “approximation” even is. But it’s legit. Supposing something to be constant is often a decent working assumption. If you failed to check what the weather for today will be like, supposing that it’ll be about like yesterday will usually serve you well enough. If you aren’t sure where your pet is, you look first wherever you last saw the animal. (Or, yes, where your pet most loves to be. A particular spot, though.)

    We can make this rigorous. A mathematician thinks this is rigorous: you pick any margin of error you like. Then I can find a region near enough to the point of expansion. The value for ‘f’ for every point inside that region is ‘f(a)’ plus or minus your margin of error. It might be a small region, yes. Doesn’t matter. It exists, no matter how tiny your margin of error was.

    But yeah, that expansion still seems too cheap to work. My next approximation, ‘F1‘, will be a little better. I mean that we can expect it will be closer than ‘F0‘ was to the original ‘f’. Or it’ll be as close for a bigger region around the point of expansion ‘a’. What it’ll represent is a line. Yeah, ‘F0‘ was a line too. But ‘F0‘ is a horizontal line. ‘F1‘ might be a line at some completely other angle. If that works better. The second approximation will look like this:

    F^1(x) = f(a) + m\cdot\left(x - a\right)

    Here ‘m’ serves its traditional yet poorly-explained role as the slope of a line. What the slope of that line should be we learn from the derivative of the original ‘f’. The derivative of a function is itself a new function, with the same domain and the same range. There’s a couple ways to denote this. Each way has its strengths and weaknesses about clarifying what we’re doing versus how much we’re writing down. And trying to write down almost anything can inspire confusion in analysis later on. There’s a part of analysis where you have to shift from thinking about particular problems to thinking about how problems work in general.

    So I will define a new function, spoken of as f-prime, this way:

    f'(x) = \frac{df}{dx}\left(x\right)

    If you look closely you realize there’s two different meanings of ‘x’ here. One is the ‘x’ that appears in parentheses. It’s the value in the domain of f and of f’ where we want to evaluate the function. The other ‘x’ is the one in the lower side of the derivative, in that \frac{df}{dx} . That’s my sloppiness, but it’s not uniquely mine. Mathematicians keep this straight by using the symbols \frac{df}{dx} so much they don’t even see the ‘x’ down there anymore so have no idea there’s anything to find confusing. Students keep this straight by guessing helplessly about what their instructors want and clinging to anything that doesn’t get marked down. Sorry. But what this means is to “take the derivative of the function ‘f’ with respect to its variable, and then, evaluate what that expression is for the value of ‘x’ that’s in parentheses on the left-hand side”. We can do some things that avoid the confusion in symbols there. They all require adding some more variables and some more notation in, and it looks like overkill for a measly definition like this.

    Anyway. We really just want the derivative evaluated at one point, the point of expansion. That is:

    m = f'(a) = \frac{df}{dx}\left(a\right)

    which by the way avoids that overloaded meaning of ‘x’ there. Put this together and we have what we call the tangent line approximation to the original ‘f’ at the point of expansion:

    F^1(x) = f(a) + f'(a)\cdot\left(x - a\right)

    This is also called the tangent line, because it’s a line that’s tangent to the original function. A plot of ‘F1‘ and the original function ‘f’ are guaranteed to touch one another only at the point of expansion. They might happen to touch again, but that’s luck. The tangent line will be close to the original function near the point of expansion. It might happen to be close again later on, but that’s luck, not design. Most stuff you might want to do with the original function you can do with the tangent line, but the tangent line will be easier to work with. It exactly matches the original function at the point of expansion, and its first derivative exactly matches the original function’s first derivative at the point of expansion.

    We can do better. We can find a parabola, a second-order polynomial that approximates the original function. This will be a function ‘F2(x)’ that looks something like:

    F^2(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 m_2 \left(x - a\right)^2

    What we’re doing is adding a parabola to the approximation. This is that curve that looks kind of like a loosely-drawn U. The ‘m2‘ there measures how spread out the U is. It’s not quite the slope, but it’s kind of like that, which is why I’m using the letter ‘m’ for it. Its value we get from the second derivative of the original ‘f’:

    m_2 = f''(a) = \frac{d^2f}{dx^2}\left(a\right)

    We find the second derivative of a function ‘f’ by evaluating the first derivative, and then, taking the derivative of that. We can denote it with two ‘ marks after the ‘f’ as long as we aren’t stuck wrapping the function name in ‘ marks to set it out. And so we can describe the function this way:

    F^2(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 f''(a) \left(x - a\right)^2

    This will be a better approximation to the original function near the point of expansion. Or it’ll make larger the region where the approximation is good.
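
    A tiny sketch may make the progression concrete. I’ll use the exponential function as the stand-in ‘f’, purely because its derivatives are easy (they’re all e^x again); any twice-differentiable function would do, and the point of expansion here is an arbitrary pick.

```python
import math

f = math.exp      # sample f(x) = e^x; conveniently f'(a) = f''(a) = f(a)
a = 1.0           # the point of expansion, arbitrarily chosen

F0 = lambda x: f(a)
F1 = lambda x: f(a) + f(a) * (x - a)
F2 = lambda x: f(a) + f(a) * (x - a) + 0.5 * f(a) * (x - a)**2

# Errors shrink with each added term, and shrink again as x nears a.
for x in (1.5, 1.1, 1.01):
    print(x, f(x) - F0(x), f(x) - F1(x), f(x) - F2(x))
```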

    If the first derivative of a function at a point is zero that means the tangent line is horizontal. In physics stuff this is an equilibrium. The second derivative can tell us whether the equilibrium is stable or not. If the second derivative at the equilibrium is positive it’s a stable equilibrium. The function looks like a bowl open at the top. If the second derivative at the equilibrium is negative then it’s an unstable equilibrium. The function there looks like a dome, a bowl flipped over.

    We can make better approximations yet, by using even more derivatives of the original function ‘f’ at the point of expansion:

    F^3(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 f''(a) \left(x - a\right)^2 + \frac{1}{3\cdot 2} f'''(a) \left(x - a\right)^3

    There’s better approximations yet. You can probably guess what the next, fourth-degree, polynomial would be. Or you can after I tell you the fraction in front of the new term will be \frac{1}{4\cdot 3\cdot 2} . The only big difference is that after about the third derivative we give up on adding ‘ marks after the function name ‘f’. It’s just too many little dots. We start writing, like, ‘f(iv)‘ instead. Or if the Roman numerals are too much then ‘f(2038)‘ instead. Or if we don’t want to pin things down to a specific value ‘f(j)‘ with the understanding that ‘j’ is some whole number.

    We don’t need all of them. In physics problems we get equilibriums from the first derivative. We get stability from the second derivative. And we get springs in the second derivative too. And that’s what I hope to pick up on in the next installment of the main series.

     
    • elkement (Elke Stangl) 4:20 pm on Wednesday, 7 June, 2017

      This is great – I’ve just written a very short version of that (a much too succinct one) … as an half-hearted attempt to explain Taylor expansions that I need in an upcoming post. But now I won’t feel bad anymore about its incomprehensibility and simply link to this post of yours :-)


  • Joseph Nebus 6:00 pm on Thursday, 18 May, 2017

    Everything Interesting There Is To Say About Springs 


    I need another supplemental essay to get to the next part in Why Stuff Can Orbit. (Here’s the last part.) You probably guessed it’s about springs. They’re useful to know about. Why? That one killer Mystery Science Theater 3000 short, yes. But also because they turn up everywhere.

    Not because there are literally springs in everything. Not with the rise in anti-spring political forces. But what makes a spring is a force that pushes something back where it came from. It pushes with a force that grows just as fast as the distance from where it came grows. Most anything that’s stable, that has some normal state which it tends to look like, acts like this. A small nudging away from the normal state gets met with some resistance. A bigger nudge meets bigger resistance. And most stuff that we see is stable. If it weren’t stable it would have broken before we got there.

    (There are exceptions. Stable is, sometimes, about perspective. It can be that something is unstable but it takes so long to break that we don’t have to worry about it. Uranium, for example, is dying, turning slowly into stable elements like lead and helium. There will come a day there’s none left in the Earth. But it takes so long to break down that, barring surprises, the Earth will have broken down into something else first. And it may be that something is unstable, but it’s created by something that’s always going on. Oxygen in the atmosphere is always busy combining with other chemicals. But oxygen stays in the atmosphere because life keeps breaking it out of other chemicals.)

    Now I need to put in some terms. Start with your thing. It’s on a spring, literally or metaphorically. Don’t care. If it isn’t being pushed in any direction then it’s at rest. Or it’s at an equilibrium. I don’t want to call this the ideal or natural state. That suggests some moral superiority to one way of existing over another, and how do I know what’s right for your thing? I can tell you what it acts like. It’s your business whether it should. Anyway, your thing has an equilibrium.

    Next term is the displacement. It’s how far your thing is from the equilibrium. If it’s really a block of wood on a spring, like it is in high school physics, this displacement is how far the spring is stretched out. In equations I’ll represent this as ‘x’ because I’m not going to go looking deep for letters for something like this. What value ‘x’ has will change with time. This is what makes it a physics problem. If we want to make clear that ‘x’ does depend on time we might write ‘x(t)’. We might go all the way and start at the top of the page with ‘x = x(t)’, just in case.

    If ‘x’ is a positive number it means your thing is displaced in one direction. If ‘x’ is a negative number it was displaced in the opposite direction. By ‘one direction’ I mean ‘to the right, or else up’. By ‘the opposite direction’ I mean ‘to the left, or else down’. Yes, you can pick any direction you like but why are you making life harder for everyone? Unless there’s something compelling about the setup of your thing that makes another choice make sense just go along with what everyone else is doing. Apply your creativity and iconoclasm where it’ll make your life better instead.

    Also, we only have to worry about one direction. This might surprise you. If you’ve played much with springs you might have noticed how they’re three-dimensional objects. You can set stuff swinging back and forth in two directions at once. That’s all right. We can describe a two-dimensional displacement as a displacement in one direction plus a displacement perpendicular to that. And if there’s no such thing as friction, they won’t interact. We can pretend they’re two problems that happen to be running on the same spring at the same time. So here I declare: we can ignore friction and pretend it doesn’t matter. We don’t have to deal with more than one direction at a time.

    (It’s not only friction. There’s problems about how energy gets transmitted between ways the thing can oscillate. This is what causes what starts out as a big whack in one direction to turn into a middling little circular wobbling. That’s a higher level physics than I want to do right now. So here I declare: we can ignore that and pretend it doesn’t matter.)

    Whether your thing is displaced or not it’s got some potential energy. This can be as large or as small as you like, down to some minimum when your thing is at equilibrium. The potential energy we represent as a number named ‘U’ because of good reasons that somebody surely had. The potential energy of a spring depends on the square of the displacement. We can write its value as ‘U = ½ k x^2’. Here ‘k’ is a number known as the spring constant. It describes how strongly the spring reacts; the bigger ‘k’ is, the more any displacement’s met with a contrary force. It’ll be a positive number. ½ is that same old one-half that you know from ideas being half-baked or going-off being half-cocked.

    Potential energy is great. If you can describe a physics problem with its energy you’re in good shape. It lets us bring physical intuition into understanding things. Imagine a bowl or a Habitrail-type ramp that’s got the cross-section of your potential energy. Drop a little marble into it. How the marble rolls? That’s what your thingy does in that potential energy.

    Also we have mathematics. Calculus, particularly differential equations, lets us work out how the position of your thing will change. We need one more piece for this. That’s the momentum of your thing. Momentum is traditionally represented with the letter ‘p’. And now here’s how stuff moves when you know the potential energy ‘U’:

    \frac{dp}{dt} = - \frac{\partial U}{\partial x}

    Let me unpack that. \frac{dp}{dt} — also known as \frac{d}{dt}p if that looks better — is “the derivative of p with respect to t”. It means “how the value of the momentum changes as the time changes”. And that is equal to minus one times …

    You might guess that \frac{\partial U}{\partial x} — also written as \frac{\partial}{\partial x} U — is some kind of derivative. The \partial looks kind of like a cursive d, after all. It’s known as the partial derivative, because it means we look at how ‘U’ changes as ‘x’ and nothing else at all changes. With the normal, ‘d’ style full derivative, we have to track how all the variables change as the ‘t’ we’re interested in changes. In this particular problem the difference doesn’t matter. But there are problems where it does matter and that’s why I’m careful about the symbols.

    So now we fall back on how to take derivatives. This gives us the equation that describes how the physics of your thing on a spring works:

    \frac{dp}{dt} = - k x

    You’re maybe underwhelmed. This is because we haven’t got any idea how the momentum ‘p’ relates to the displacement ‘x’. Well, we do, because I know and if you’re still reading at this point you know full well what momentum is. But let me make it official. Momentum is, for this kind of thing, the mass ‘m’ of your thing times how its position is changing, which is \frac{dx}{dt} . The mass of your thing isn’t changing. If you’re going to let it change then we’re doing some screwy rocket problem and that’s a different article. So it’s easy to get the momentum out of that problem. We get instead the second derivative of the displacement with respect to time:

    m\frac{d^2 x}{dt^2} = - kx

    Fine, then. Does that tell us anything about what ‘x(t)’ is? Not yet, but I will now share with you one of the top secrets that only real mathematicians know. We will take a guess to what the answer probably is. Then we’ll see in what circumstances that answer could possibly be right. Does this seem ad hoc? Fine, so it’s ad hoc. Here is the secret of mathematicians:

    It’s fine if you get your answer by any stupid method you like, including guessing and getting lucky, as long as you check that your answer is right.

    Oh, sure, we’d rather you get an answer systematically, since a system might give us ideas how to find answers in new problems. But if all we want is an answer then, by definition, we don’t care where it came from. Anyway, we’re making a particular guess, one that’s very good for this sort of problem. Indeed, this guess is our system. A lot of guesses at solving differential equations use exactly this guess. Are you ready for my guess about what solves this? Because here it is.

    We should expect that

    x(t) = C e^{r t}

    Here ‘C’ is some constant number, not yet known. And ‘r’ is some constant number, not yet known. ‘t’ is time. ‘e’ is that number 2.71828(etc) that always turns up in these problems. Why? Because its derivative is very easy to take, and if we have to take derivatives we want them to be easy to take. The first derivative of Ce^{rt} with respect to ‘t’ is r Ce^{rt} . The second derivative with respect to ‘t’ is r^2 Ce^{rt} . So here’s what we have:

    m r^2 Ce^{rt} = - k Ce^{rt}

    What we’d like to find are the values for ‘C’ and ‘r’ that make this equation true. It’s got to be true for every value of ‘t’, yes. But this is actually an easy equation to solve. Why? Because the C e^{rt} on the left side has to equal the C e^{rt} on the right side. As long as they’re not equal to zero and hey, what do you know? C e^{rt} can’t be zero unless ‘C’ is zero. So as long as ‘C’ is any number at all in the world except zero we can divide this ugly lump of symbols out of both sides. (If ‘C’ is zero, then this equation is 0 = 0 which is true enough, I guess.) What’s left?

    m r^2 = -k

    OK, so, we have no idea what ‘C’ is and we’re not going to have any. That’s all right. We’ll get it later. What we can get is ‘r’. You’ve probably got there already. There’s two possible answers:

    r = \pm\sqrt{-\frac{k}{m}}

    You might not like that. You remember that ‘k’ has to be positive, and if mass ‘m’ isn’t positive something’s screwed up. So what are we doing with the square root of a negative number? Yes, we’re getting imaginary numbers. Two imaginary numbers, in fact:

    r = \imath \sqrt{\frac{k}{m}}, r = - \imath \sqrt{\frac{k}{m}}

    Which is right? Both. In some combination, too. It’ll be a bit with that first ‘r’ plus a bit with that second ‘r’. In the differential equations trade this is called superposition. We’ll have information that tells us how much uses the first ‘r’ and how much uses the second.

    You might still be upset. Hey, we’ve got these imaginary numbers here describing how a spring moves and while you might not be one of those high-price physicists you see all over the media you know springs aren’t imaginary. I’ve got a couple responses to that. Some are semantic. We only call these numbers “imaginary” because when we first noticed they were useful things we didn’t know what to make of them. The label is an arbitrary thing that doesn’t make any demands of the numbers. If we had called them, oh, “Cardanic numbers” instead would you be upset that you didn’t see any Cardanos in your springs?

    My high-class semantic response is to ask in exactly what way is the “square root of minus one” any less imaginary than “three”? Can you give me a handful of three? No? Didn’t think so.

    And then the practical response is: don’t worry. Exponentials raised to imaginary numbers do something amazing. They turn into sine waves. Well, sine and cosine waves. I’ll spare you just why. You can find it by looking at the first twelve or so posts of any pop mathematics blog and its article about how amazing Euler’s Formula is. Given that Euler published, like, 2,038 books and papers through his life and the fifty years after his death it took to clear the backlog you might think, “Euler had a lot of Formulas, right? Identities too?” Yes, he did, but you’ll know this one when you see it.

    What’s important is that the displacement of your thing on a spring will be described by a function which looks like this:

    x(t) = C_1 e^{\imath \sqrt{\frac{k}{m}} t} + C_2 e^{-\imath \sqrt{\frac{k}{m}} t}

    for two constants, ‘C1‘ and ‘C2‘. These were the things we called ‘C’ back when we thought the answer might be Ce^{rt} ; there’s two of them because there’s two r’s. I give you my word this is equivalent to a formula like this, but you can make me show my work if you must:

    x(t) = A \cos\left(\sqrt{\frac{k}{m}} t\right) + B \sin\left(\sqrt{\frac{k}{m}} t\right)

    for some (other) constants ‘A’ and ‘B’. Cosine and sine are the old things you remember from learning about cosine and sine.
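
    You can make the computer vouch for this form, at least. A finite-difference second derivative of the cosine-and-sine expression satisfies m x'' = -k x for whatever constants you pick; the ones below are arbitrary.

```python
import math

k, m, A, B = 5.0, 2.0, 0.7, -1.3     # arbitrary picks
w = math.sqrt(k / m)

def x(t):
    return A * math.cos(w * t) + B * math.sin(w * t)

h = 1e-5
for t in (0.0, 0.4, 1.7):
    x_dd = (x(t + h) - 2 * x(t) + x(t - h)) / h**2   # numerical x''
    print(m * x_dd, -k * x(t))                       # the two sides agree
```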

    OK, but what are ‘A’ and ‘B’?

    Generically? We don’t care. Some numbers. Maybe zero. Maybe not. The pattern, how the displacement changes over time, will be the same whatever they are. It’ll be regular oscillation. At one time your thing will be as far from the equilibrium as it gets, and not moving toward or away from the center. At one time it’ll be back at the center and moving as fast as it can. At another time it’ll be as far away from the equilibrium as it gets, but on the other side. At another time it’ll be back at the equilibrium and moving as fast as it ever does, but the other way. How far is that maximum? What’s the fastest it travels?

    The answer’s in how we started. If we start at the equilibrium without any kind of movement we’re never going to leave the equilibrium. We have to get nudged out of it. But what kind of nudge? There are three ways you can nudge something out.

    You can tug it out some and let it go from rest. This is the easiest: then ‘A’ is however big your tug was and ‘B’ is zero.

    You can let it start from equilibrium but give it a good whack so it’s moving at some initial velocity. This is the next-easiest: ‘A’ is zero, and ‘B’ is … no, not the initial velocity. You need to look at what the velocity of your thing is at the start. That’s the first derivative:

    \frac{dx}{dt} = -\sqrt{\frac{k}{m}} A \sin\left(\sqrt{\frac{k}{m}} t\right) + \sqrt{\frac{k}{m}} B \cos\left(\sqrt{\frac{k}{m}} t\right)

    The start is when time is zero because we don’t need to be difficult. When ‘t’ is zero the above velocity is \sqrt{\frac{k}{m}} B . So that product has to be the initial velocity. That’s not much harder.
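
    In code that second case looks like this; a minimal sketch, with made-up ‘k’, ‘m’, and starting whack.

```python
import math

k, m = 5.0, 2.0                  # made-up spring constant and mass
w = math.sqrt(k / m)

x0, v0 = 0.0, 3.0                # start at equilibrium, with a good whack
A = x0                           # cos(0) = 1 picks out A at t = 0
B = v0 / w                       # x'(0) = w * B picks out B

def x(t):
    return A * math.cos(w * t) + B * math.sin(w * t)

h = 1e-6
print((x(h) - x(-h)) / (2 * h))  # ~3.0, the initial velocity we asked for
```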

    The third case is when you start with some displacement and some velocity. A combination of the two. Then, ugh. You have to figure out ‘A’ and ‘B’ that make both the position and the velocity work out. That’s the simultaneous solutions of equations, and not even hard equations. It’s more work is all. I’m interested in other stuff anyway.

    Because, yeah, the spring is going to wobble back and forth. What I’d like to know is how long it takes to get back where it started. How long does a cycle take? Look back at that position function, for example. That’s all we need.

    x(t) = A \cos\left(\sqrt{\frac{k}{m}} t\right) + B \sin\left(\sqrt{\frac{k}{m}} t\right)

    Sine and cosine functions are periodic. They have a period of 2π. This means if you take the thing inside the parentheses after a sine or a cosine and increase it — or decrease it — by 2π, you’ll get the same value out. What’s the first time that the displacement and the velocity will be the same as their starting values? If they started at t = 0, then, they’re going to be back there at a time ‘T’ which makes true the equation

    \sqrt{\frac{k}{m}} T = 2\pi

    And that’s going to be

    T = 2\pi\sqrt{\frac{m}{k}}

    A maybe-surprising thing about this: the period doesn’t depend at all on how big the displacement is. That’s true for perfect springs, which don’t exist in the real world. You knew that. Imagine taking a Junior Slinky from the dollar store and sticking a block of something on one end. Imagine stretching it out to 500,000 times the distance between the Earth and Jupiter and letting go. Would it act like a spring or would it break? Yeah, we know. It’s sad. Think of the animated-cartoon joy a spring like that would produce.

    But this period not depending on the displacement is true for small enough displacements, in the real world. Or for good enough springs. Or things that work enough like springs. By “true” I mean “close enough to true”. We can give that a precise mathematical definition, which turns out to be what you would mean by “close enough” in everyday English. The difference is it’ll have Greek letters included.

    So to sum up: suppose we have something that acts like a spring. Then we know qualitatively how it behaves. It oscillates back and forth in a sine wave around the equilibrium. Suppose we know what the spring constant ‘k’ is. Suppose we also know ‘m’, which represents the inertia of the thing. If it’s a real thing on a real spring it’s mass. Then we know quantitatively how it moves. It has a period, based on this spring constant and this mass. And we can say how big the oscillations are based on how big the starting displacement and velocity are. That’s everything I care about in a spring. At least until I get into something wild like several springs wired together, which I am not doing now and might never do.

    And, as we’ll see when we get back to orbits, a lot of things work close enough to springs.

     
    • tkflor 8:13 pm on Saturday, 20 May, 2017

      “I don’t want to call this the ideal or natural state. That suggests some moral superiority to one way of existing over another, and how do I know what’s right for your thing? ”
      Why don’t you call it a “ground state”?


      • Joseph Nebus 6:49 am on Friday, 26 May, 2017

        You’re right. This is a ground state.

        For folks just joining in, the “ground state” is what the system looks like when it’s got the least possible energy. At least the least energy consistent with it being a system at all. For a spring problem that’s the one where the thing is at rest, at the center, not displaced at all.

        In a more complicated system you can have an equilibrium that’s stable and that isn’t the ground state. That isn’t the case here, but I wonder if thinking about that didn’t make me avoid calling it a ground state.


  • Joseph Nebus 6:00 pm on Thursday, 11 May, 2017
    Tags: Legendre transform

    Excuses, But Classed Up Some 


    Afraid I’m behind on resuming Why Stuff Can Orbit, mostly as a result of a power outage yesterday. It wasn’t a major one, but it did reshuffle all the week’s chores to yesterday when we could be places that had power, and kept me from doing as much typing as I wanted. I’m going to be riding this excuse for weeks.

    So instead, here, let me pass this on to you.

    It links to a post about the Legendre Transform, which is one of those cool advanced tools you get a couple years into a mathematics or physics major. It is, like many of these cool advanced tools, about solving differential equations. Differential equations turn up anytime the current state of something affects how it’s going to change, which is to say, anytime you’re looking at something not boring. It’s one of mathematics’s uses of “duals”, letting you swap between the function you’re interested in and what you know about how the function you’re interested in changes.

    On the linked page, Jonathan Manton tries to present reasons behind the Legendre transform, in ways he likes better. It might not explain the idea in a way you like, especially if you haven’t worked with it before. But I find reading multiple attempts to explain an idea helpful. Even if one perspective doesn’t help, having a cluster of ideas often does.

     
  • Joseph Nebus 6:00 pm on Thursday, 4 May, 2017

    Why Stuff Can Orbit, Part 8: Introducing Stability 


    Previously:

    And the supplemental reading:


    I bet you imagined I’d forgot this series, or that I’d quietly dropped it. Not so. I’ve just been finding the energy for this again. 2017 has been an exhausting year.

    With the last essay I finished the basic goal of “Why Stuff Can Orbit”. I’d described some of the basic stuff for central forces. These involve something — a planet, a mass on a spring, whatever — being pulled by the … center. Well, you can call anything the origin, the center of your coordinate system. Why put that anywhere but the place everything’s pulled towards? The key thing about a central force is it’s always in the direction of the center. It can point towards the center or away from the center, but the case worth studying is the pull towards the center, because the “away from” case is boring. (The thing gets pushed away from the center and goes far off, never to be seen again.) How strongly it’s pulled toward the center changes only with the distance from the center.

    Since the force only changes with the distance between the thing and the center it’s easy to think this is a one-dimensional sort of problem. You only need the coordinate describing this distance. We call that ‘r’, because we end up finding orbits that are circles. Since the distance between the center of a circle and its edge is the radius, it would be a shame to use any other letter.

    Forces are hard to work with. At least for a lot of stuff. We can represent central forces instead as potential energy. This is easier because potential energy doesn’t have any direction. It’s a lone number. When we can shift something complicated into one number chances are we’re doing well.

    But we are describing something in space. Something in three-dimensional space, although it turns out we’ll only need two. We don’t care about stuff that plunges right into the center; that’s boring. We like stuff that loops around and around the center. Circular orbits. We’ve seen that second dimension in the angular momentum, which we represent as ‘L’ for reasons I dunno. I don’t think I’ve ever met anyone who did. Maybe it was the first letter that came to mind when someone influential wrote a good textbook. Angular momentum is a vector, but for these problems we don’t need to care about that. We can use an ordinary number to carry all the information we need about it.

    We get that information from the potential energy plus a term that’s based on the square of the angular momentum divided by the square of the radius. This “effective potential energy” lets us find whether there can be a circular orbit at all, and where it’ll be. And it lets us get some other nice stuff like how the size of the orbit and the time it takes to complete an orbit relate to each other. See the earlier stuff for details. In short, though, we get an equilibrium, a circular orbit, whenever the effective potential energy is flat, neither rising nor falling. That happens when the effective potential energy changes from rising to falling, or changes from falling to rising. Well, if it isn’t rising and if it isn’t falling, what else can it be doing? It only does this for an infinitesimal moment, but that’s all we need. It also happens when the effective potential energy is flat for a while, but that like never happens.

    Where I want to go next is into closed orbits. That is, as the planet orbits a sun (or whatever it is goes around whatever it’s going around), does it come back around to exactly where it started? Moving with the same speed in the same direction? That is, does the thing orbit like a planet does?

    (Planets don’t orbit like this. When you have three, or more, things in the universe the mathematics of orbits gets way too complicated to do exactly. But this is the thing they’re approximating, we hope, well.)

    To get there I’ll have to put back a second dimension. Sorry. Won’t need a third, though. That’ll get named θ because that’s our first choice for an angle. And it makes too much sense to describe a planet’s position as its distance from the center and the angle it makes with respect to some reference line. Which reference line? Whatever works for you. It’s like measuring longitude. We could measure degrees east and west of some point other than Greenwich as well, and as correctly, as we do. We use the one we use because it was convenient.

    Along the way to closed orbits I have to talk about stability. There are many kinds of mathematical stability. My favorite is called Lyapunov Stability, because it’s such a mellifluous sound. They all circle around the same concept. It’s what you’d imagine from how we use the word in English. Start with an equilibrium, a system that isn’t changing. Give it a nudge. This disrupts it in some way. Does the disruption stay bounded? That is, does the thing still look somewhat like it did before? Or does the disruption grow so crazy big we have no idea what it’ll ever look like again? (A small nudge, by the way. You can break anything with a big enough nudge; that’s not interesting. It’s whether you can break it with a small nudge that we’d like to know.)

    One of the ways we can study this is by looking at the effective potential energy. By its shape we can say whether a central-force equilibrium is stable or not. It’s easy, too, as we’ve got this set up. (Warning before you go passing yourself off as a mathematical physicist: it is not always easy!) Look at the effective potential energy versus the radius. If it has a part that looks like a bowl, cupped upward, it’s got a stable equilibrium. If it doesn’t, it doesn’t have a stable equilibrium. If you aren’t sure, imagine the potential energy was a track, like for a toy car. And imagine you dropped a marble on it. If you give the marble a nudge, does it roll to a stop? If it does, stable. If it doesn’t, unstable.
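
    If you’d rather let the computer drop the marble, here is a sketch. It uses a gravity-like sample potential (the -C/r sort, with made-up constants), finds where the effective potential goes flat, and checks which way it curves there.

```python
import numpy as np

# Made-up constants for a gravity-like sample potential.
C, m, L = 1.0, 1.0, 1.0

def v_eff(r):
    """Effective potential V_eff(r) = -C/r + (L^2 / 2m) r^-2."""
    return -C / r + L**2 / (2 * m) * r**-2

rs = np.linspace(0.2, 5.0, 200001)
dr = rs[1] - rs[0]
dv = np.gradient(v_eff(rs), dr)          # first derivative
d2v = np.gradient(dv, dr)                # second derivative

# Equilibria are where the first derivative changes sign.
idx = np.where(np.diff(np.sign(dv)) != 0)[0]
for i in idx:
    kind = "stable" if d2v[i] > 0 else "unstable"
    print(f"equilibrium near r = {rs[i]:.3f}: {kind}")
# For these constants the one equilibrium sits near r = L^2/(Cm) = 1, stable.
```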

    [Figure: the sort of wiggly shape that serves as every mathematical physicist’s generic potential energy curve, showing off the different kinds of equilibrium.]

    A phony effective potential energy. Most are a lot less exciting than this; see some of the earlier pieces in this series. But some weird-shaped functions like this were toyed with by physicists in the 19th century who were hoping to understand chemistry. Why should gases behave differently at different temperatures? Why should some combinations of elements make new compounds while others don’t? We needed statistical mechanics and quantum mechanics to explain those, but we couldn’t get there without a lot of attempts and failures at explaining it with potential energies and classical mechanics.

    Stable is more interesting. We look at cases where there is this little bowl cupped upward. If we have a tiny nudge we only have to look at a small part of that cup. And that cup is going to look an awful lot like a parabola. If you don’t remember what a parabola is, think back to algebra class. Remember that curvey shape that was the only thing drawn on the board when you were dealing with the quadratic formula? That shape is a parabola.

    Who cares about parabolas? We care because we know something good about them. In this context, anyway. The potential energy for a mass on a spring is also a parabola. And we know everything there is to know about masses on springs. Seriously. You’d think it was all physics was about from like 1678 through 1859. That’s because it’s something calculus lets us solve exactly. We don’t need books of complicated integrals or computers to do the work for us.

    So here’s what we do. It’s something I did not get clearly when I was first introduced to these concepts. This left me badly confused and feeling lost in my first physics and differential equations courses. We are taking our original physics problem and building a new problem based on it. This new problem looks at how big our nudge away from the equilibrium is. How big the nudge is, how fast it grows, how it changes in time will follow rules. Those rules will look a lot like those for a mass on a spring. We started out with a radius that gives us a perfectly circular orbit. Now we get a secondary problem about how the difference between the nudged and the circular orbit changes in time.

    That secondary problem has the same shape, the same equations, as a mass on a spring does. A mass on a spring is a central force problem. All the tools we had for studying central-force problems are still available. There is a new central-force problem, hidden within our original one. Here the “center” is the equilibrium we’re nudged around. It will let us answer a new set of questions.

     
  • Joseph Nebus 6:00 pm on Sunday, 12 March, 2017 Permalink | Reply
    Tags: , Basic Instructions, , Little Iodine, , Piranha Club,   

    Reading the Comics, March 6, 2017: Blackboards Edition 


    I can’t say there’s a compelling theme to the first five mathematically-themed comics of last week. Screens full of mathematics turned up in a couple of them, so I’ll run with that. There were also just enough strips that I’m splitting the week again. It seems fair to me, and it gives me something to remember, Wednesday night, that I have to rush to complete.

    Jimmy Hatlo’s Little Iodine for the 1st of January, 1956 was rerun on the 5th of March. The setup demands Little Iodine pester her father for help with the “hard homework” and of course it’s arithmetic that gets to play the part of hard work. It’s a word problem in terms of who has how many apples, as you might figure. Don’t worry about Iodine’s father getting fired; Little Iodine gets her father fired every week. It’s their schtick.

    Little Iodine wakes her father early after a night at the lodge. 'You got to help me with my [hard] homework.' 'Ooh! My head! Wha'?' 'The first one is, if John has twice as many apples as Tom and Sue put together ... ' 'Huh? kay! Go on, let's get this over with.' They work through to morning. Iodine's teacher sees her asleep in class and demands she bring 'a note from your parents as to why you sleep in school instead of at home!' She goes to her father's office where her father's boss is saying, 'Well, Tremblechin, wake up! The hobo hotel is three blocks south and PS: DON'T COME BACK!'

    Jimmy Hatlo’s Little Iodine for the 1st of January, 1956. I guess class started right back up the 2nd, but it would’ve avoided so much trouble if she’d done her homework sometime during the winter break. That said, I never did.

    Dana Simpson’s Phoebe and her Unicorn for the 5th mentions the “most remarkable of unicorn confections”, a sugar dodecahedron. Dodecahedrons have long captured human imaginations, as one of the Platonic Solids. The Platonic Solids are one of the ways we can make a solid-geometry analogue to a regular polygon. The other shape Phoebe mentions, the cube, is another of the Platonic Solids, but that one’s common enough to encourage no sense of mystery or wonder. The cube’s the only one of the Platonic Solids that will fill space, though, that you can put into stacks that don’t leave gaps between them. Sugar cubes, Wikipedia tells me, have been made only since the 19th century; the Moravian sugar factory director Jakub Kryštof Rad got a patent for cutting block sugar into uniform pieces in 1843. I can’t dispute the fun of “dodecahedron” as a word to say. Many solid-geometric shapes have names that are merely descriptive, but which are rendered with Greek or Latin syllables so as to sound magical.

    Bud Grace’s Piranha Club for the 6th started a sequence in which the Future Disgraced Former President needs the most brilliant person in the world, Bud Grace. A word balloon full of mathematics is used as a symbol of this genius. I feel compelled to point out Bud Grace was a physics major. But while Grace could as easily have used something from the physics department to show his deep thinking abilities, that would all but certainly have been rendered as equations and graphs, the stuff of mathematics again.

    At the White Supremacist House: 'I have the smartest people I could find to help me run this soon-to-be-great-again country, but I'm worried that they're NOT SMART ENOUGH! I want the WORLD'S SMARTEST GENIUS to be my SPECIAL ADVISOR!' Meanwhile, cartoonist Bud Grace thinks of stuff like A = pi*r^2 and a^2 + b^2 = c^2 and tries working out 241 times 635, 'carry the one ... hmmmm ... '

    Bud Grace’s Piranha Club for the 6th of March, 2017. 241 times 635 is 153,035 by the way. I wouldn’t work that out in my head if I needed the number. I might work out an estimate of how big it was, in which case I’d do this: 241 is about 250, which is one-quarter of a thousand. One-quarter of 635 is something like 150, which times a thousand is 150,000. If I needed it exactly I’d get a calculator. Unless I just needed something to occupy my mind without having any particular emotional charge.
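
    If you want the computer to referee both the exact product and the estimating scheme, a couple of lines of Python (my choice of language, nothing the strip assumed) will do it. Nothing here is anything but the arithmetic of the paragraph above.

        exact = 241 * 635
        estimate = 635 * 1000 // 4     # round 241 up to 250, one-quarter of a thousand
        print("exact   :", exact)      # 153035
        print("estimate:", estimate)   # 158750, the right neighborhood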

    Scott Meyer’s Basic Instructions rerun for the 6th is aptly titled, “How To Unify Newtonian Physics And Quantum Mechanics”. Meyer’s advice is not bad, really, although generic enough it applies to any attempts to reconcile two different models of a phenomenon. Also there’s not particularly a problem reconciling Newtonian physics with quantum mechanics. It’s general relativity and quantum mechanics that are so hard to reconcile.

    Still, Basic Instructions is about how you can do a thing, or learn to do a thing. It’s not about how to allow anything to be done for the first time. And it’s true that, per quantum mechanics, we can’t predict exactly what any one particle will do at any time. We can say what possible things it might do and how relatively probable they are. But big stuff, the stuff for which Newtonian physics is relevant, involves so many particles that the unpredictability becomes too small to notice. We can see this as the Law of Large Numbers. That’s the probability rule that tells us we can’t predict any coin flip, but we know that a million fair tosses of a coin will not turn up 800,000 tails. There’s more to it than that (there’s always more to it), but that’s a starting point.
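
    That claim is easy to test for yourself, too. Here’s a minimal sketch in Python, using the standard random module; run it as many times as you like and you will not see 800,000 tails.

        import random

        tosses = 1000000
        tails = sum(random.random() < 0.5 for _ in range(tosses))
        print("tails:", tails)   # typically within about 1,500 of 500,000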

    Michael Fry’s Committed rerun for the 6th features Albert Einstein as the icon of genius. Natural enough. And it reinforces this with the blackboard full of mathematics. I’m not sure if that blackboard note of “E = md3” is supposed to be a reference to the famous Far Side panel of Einstein hearing the maid talk about everything being squared away. I’ll take it as such.

     
  • Joseph Nebus 6:00 pm on Thursday, 23 February, 2017 Permalink | Reply
    Tags: , , compulsions, , Over The Hedge, , , Wide Open   

    Reading the Comics, February 15, 2017: SMBC Does Not Cut In Line Edition 


    On reflection, that Saturday Morning Breakfast Cereal I was thinking about was not mathematically-inclined enough to be worth including here. Helping make my mind up on that was that I had enough other comic strips to discuss here that I didn’t need to pad my essay. Yes, on a slow week I let even more marginal stuff in. Here’s the comic I don’t figure to talk about. Enjoy!

    Jack Pullan’s Boomerangs rerun for the 16th is another strip built around the “algebra is useless in real life” notion. I’m too busy noticing Mom in the first panel saying “what are you doing play [sic] video games?” to respond.

    Ruben Bolling’s Super-Fun-Pak Comix excerpt for the 16th is marginal, yeah, but fun. Numeric coincidence and numerology can sneak into compulsions with terrible ease. I can believe easily the need to make the number of steps divisible by some favored number.

    Rich Powell’s Wide Open for the 16th is a caveman science joke, and it does rely on a chalkboard full of algebra for flavor. The symbols come tantalizingly close to meaningful. The amount of kinetic energy, K or KE, of a particle of mass m moving at speed v is indeed K = \frac{1}{2} m v^2 . Both 16 and 32 turn up often in the physics of falling bodies, at least if we’re using feet to measure. a = -\frac{k}{m} x turns up in physics too. It comes from the acceleration of a mass on a spring. But an equation of the same shape turns up whenever you describe things that go through tiny wobbles around the normal value. So the blackboard is gibberish, but it’s a higher grade of gibberish than usual.

    Rick Detorie’s One Big Happy rerun for the 17th is a resisting-the-word-problem joke, made fresher by setting it in little Ruthie’s playing at school.

    T Lewis and Michael Fry’s Over The Hedge for the 18th mentions the three-body problem. As Verne the turtle says, it’s a problem from physics. The way two objects — sun and planet, planet and moon, pair of planets, whatever — orbit each other if they’re the only things in the universe is easy. You can describe it all perfectly and without using more than freshman physics majors know. Introduce a third body, though, and we don’t know anymore. Chaos can happen.

    Emphasis on can. There’s no good way to solve the “general” three-body problem, the one where the star and planets can have any sizes and any starting positions and any starting speeds. We can do well for special cases, though. If you have a sun, a planet, and a satellite — each body’s mass negligible next to the one before it — we can predict orbits perfectly well. If the bodies have to stay in one plane of motion, instead of moving in three-dimensional space, we can do pretty well. If we know two of the bodies orbit each other tightly and the third is way off in the middle of nowhere we can do pretty well.
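
    For the curious, here’s roughly how one chases the general problem numerically, since there’s no formula to look up. This is my own sketch, not anything from the strip: Python with the NumPy and SciPy libraries, units in which the gravitational constant is 1, and masses and starting conditions chosen arbitrarily.

        import numpy as np
        from scipy.integrate import solve_ivp

        G = 1.0
        masses = np.array([1.0, 1.0, 1.0])      # three arbitrary, equal masses

        def accel(pos):
            # pos is a (3, 2) array of planar positions; return the accelerations
            a = np.zeros_like(pos)
            for i in range(3):
                for j in range(3):
                    if i != j:
                        d = pos[j] - pos[i]
                        a[i] += G * masses[j] * d / np.linalg.norm(d)**3
            return a

        def rhs(t, y):
            pos, vel = y[:6].reshape(3, 2), y[6:].reshape(3, 2)
            return np.concatenate([vel.ravel(), accel(pos).ravel()])

        pos0 = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 0.5]])   # made-up start
        vel0 = np.array([[0.0, 0.4], [0.0, -0.4], [0.0, 0.0]])
        y0 = np.concatenate([pos0.ravel(), vel0.ravel()])

        sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-9)
        print(sol.y[:6, -1].reshape(3, 2))   # final positions; nudge vel0 a
                                             # little and watch them diverge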

    But there’s still so many interesting cases for which we just can’t be sure chaos will not break out. Three interacting bodies just offer so much more chance for things to happen. (To mention something surely coincidental, it does seem to be a lot easier to write good comedy, or drama, with three important characters rather than two. Any pair of characters can gang up on the third, after all. I notice how much more energetic Over The Hedge became when Hammy the Squirrel joined RJ and Verne as the core cast.)

    Dave Whamond’s Reality Check for the 18th is your basic mathematics-illiteracy joke, done well enough.

     
  • Joseph Nebus 6:00 pm on Wednesday, 1 February, 2017 Permalink | Reply
    Tags: cables, , , podcasts   

    Mathematics Stuff To Read Or Listen To 


    I concede January was a month around here that could be characterized as “lazy”. Not that I particularly skimped on the Reading the Comics posts. But they’re relatively easy to do: the comics tell me what to write about, and I could do a couple paragraphs on most anything, apparently.

    While I get a couple things planned out for the coming month, though, here’s some reading for other people.

    The above links to a paper in the Proceedings of the National Academy of Sciences. It’s about something I’ve mentioned when talking about knots before. And it’s about something everyone with computer cables or, as the tweet suggests, holiday lights finds. The things coil up. Spontaneous knotting of an agitated string by Dorian M Raymer and Douglas E Smith examines when these knots are likely to form, and how likely they are. It’s not a paper for the lay audience, but there are a bunch of fine pictures. The paper doesn’t talk about Christmas lights, no matter what the tweet does, but the mathematics carries over to this.

    MathsByAGirl, meanwhile, had a post midmonth listing a couple of mathematics podcasts. I’m familiar with one of them, BBC Radio 4’s A Brief History of Mathematics, which was a set of ten-to-twenty-minute sketches of historically important mathematics figures. I’ll trust MathsByAGirl’s taste on other podcasts. I’d spent most of this month finishing off a couple of audio books (David Hackett Fischer’s Washington’s Crossing which I started listening to while I was in Trenton for a week, because that’s the sort of thing I think is funny, and Robert Louis Stevenson’s Doctor Jekyll and Mister Hyde And Other Stories) and so fell behind on podcasts. But now there’s some more stuff to listen forward to.

    And then I’ll wrap up with this from KeplerLounge. It looks to be the start of some essays about something outside the scope of my Why Stuff Can Orbit series. (Which I figure to resume soon.) We start off talking about orbits as if planets were “point masses”. Which is what the name suggests: a mass that fills up a single point, with no volume, no shape, no features. This makes the mathematics easier. The mathematics is just as easy if the planets are perfect spheres, whether hollow or solid. But real planets are not perfect spheres. They’re a tiny bit blobby. And they’re a little lumpy as well. We can ignore that if we’re doing rough estimates of how orbits work. But if we want to get them right we can’t ignore that anymore. And this essay describes some of how we go about dealing with that.

     
  • Joseph Nebus 6:00 pm on Tuesday, 27 December, 2016 Permalink | Reply
    Tags: , , , Riemann hypothesis,   

    The End 2016 Mathematics A To Z: Xi Function 


    I have today another request from gaurish, who’s also been good enough to give me requests for ‘Y’ and ‘Z’. I apologize for coming to this a day late. But it was Christmas and many things demanded my attention.

    Xi Function.

    We start with complex-valued numbers. People discovered them because they were useful tools to solve polynomials. They turned out to be more than useful fictions, if numbers are anything more than useful fictions. We can add and subtract them easily. Multiply and divide them less easily. We can even raise them to powers, or raise numbers to them.

    If you become a mathematics major then somewhere in Intro to Complex Analysis you’re introduced to an exotic, infinitely large sum. It’s spoken of reverently as the Riemann Zeta Function, and it connects to something named the Riemann Hypothesis. Then you remember that you’ve heard of this, because if you’re willing to become a mathematics major you’ve read mathematics popularizations. And you know the Riemann Hypothesis is an unsolved problem. It proposes something that might be true or might be false. Either way has astounding implications for the way numbers fit together.

    Riemann here is Bernhard Riemann, who’s turned up often in these A To Z sequences. We saw him in spheres and in sums, leading to integrals. We’ll see him again. Riemann just covered so much of 19th century mathematics; we can’t talk about calculus without him. Zeta, Xi, and later on, Gamma are the famous Greek letters. Mathematicians fall back on them because the Roman alphabet just hasn’t got enough letters for our needs. I’m writing them out as English words instead because if you aren’t familiar with them they look like an indistinct set of squiggles. Even if you are familiar, sometimes. I got confused in researching this some because I did slip between a lowercase-xi and a lowercase-zeta in my mind. All I can plead is it’s been a hard week.

    Riemann’s Zeta function is famous. It’s easy to approach. You can write it as a sum. An infinite sum, but still, those are easy to understand. Pick a complex-valued number. I’ll call it ‘s’ because that’s the standard. Next take each of the counting numbers: 1, 2, 3, and so on. Raise each of them to the power ‘s’. And take the reciprocal, one divided by those numbers. Add all that together. You’ll get something. Might be real. Might be complex-valued. Might be zero. We know many values of ‘s’ that would give us a zero. The Riemann Hypothesis is about characterizing all the possible values of ‘s’ that give us a zero. We know some of them, so boring we call them trivial: -2, -4, -6, -8, and so on. (This looks crazy. There’s another way of writing the Riemann Zeta function which makes it obvious instead.) The Riemann Hypothesis is about whether all the proper, that is, non-boring values of ‘s’ that give us a zero are 1/2 plus some imaginary number.
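
    Written out as a formula, with ‘s’ the complex-valued number you picked, that sum is:

    \zeta(s) = \sum_{n = 1}^{\infty} \frac{1}{n^s}

    Strictly speaking the sum only adds up to anything when the real part of ‘s’ is bigger than 1. The function gets extended to the rest of the complex-valued numbers from there, and that extension is where those boring zeroes at -2, -4, -6, and so on come from.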

    It’s a rare thing mathematicians have only one way of writing. If something’s been known and studied for a long time there are usually variations. We find different ways to write the problem. Or we find different problems which, if solved, would solve the original problem. The Riemann Xi function is an example of this.

    I’m going to spare you the formula for it. That’s in self-defense. I haven’t found an expression of the Xi function that isn’t a mess. The normal ways to write it themselves call on the Zeta function, as well as the Gamma function. The Gamma function looks like factorials, for the counting numbers. It does its own thing for other complex-valued numbers.

    That said, I’m not sure what the advantages are in looking at the Xi function. The one that people talk about is its symmetry. Its value at a particular complex-valued number ‘s’ is the same as its value at the number ‘1 – s’. This may not seem like much. But it gives us this way of rewriting the Riemann Hypothesis. Imagine all the complex-valued numbers with the same imaginary part. That is, all the numbers that we could write as, say, ‘x + 4i’, where ‘x’ is some real number. If the size of the value of Xi, evaluated at ‘x + 4i’, always increases as ‘x’ starts out equal to 1/2 and increases, then the Riemann hypothesis is true. (This has to be true not just for ‘x + 4i’, but for all possible imaginary numbers. So, ‘x + 5i’, and ‘x + 6i’, and even ‘x + 4.1 i’ and so on. But it’s easier to start with a single example.)
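
    In symbols, that symmetry is simply:

    \xi(s) = \xi\left(1 - s\right)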

    Or another way to write it. Suppose the size of the value of Xi, evaluated at ‘x + 4i’ (or whatever), always gets smaller as ‘x’ starts out at a negative infinitely large number and keeps increasing all the way to 1/2. If that’s true, and true for every imaginary number, including ‘x – i’, then the Riemann hypothesis is true.

    And it turns out if the Riemann hypothesis is true we can prove the two cases above. We’d write the theorem about this in our papers with the start ‘The Following Are Equivalent’. In our notes we’d write ‘TFAE’, which is just as good. Then we’d take whichever of them seemed easiest to prove and find out it isn’t that easy after all. But if we do get through we declare ourselves fortunate, sit back feeling triumphant, and consider going out somewhere to celebrate. But we haven’t got any of these alternatives solved yet. None of the equivalent ways to write it has helped so far.

    We know some things. For example, we know there are infinitely many roots for the Xi function with a real part that’s 1/2. This is what we’d need for the Riemann hypothesis to be true. But we don’t know that all of them are.

    The Xi function isn’t entirely about what it can tell us for the Zeta function. The Xi function has its own exotic and wonderful properties. In a 2009 paper on arxiv.org, for example, Drs Yang-Hui He, Vishnu Jejjala, and Djordje Minic describe how if the zeroes of the Xi function are all exactly where we expect them to be then we learn something about a particular kind of string theory. I admit not knowing just what to say about a genus-one free energy of the topological string past what I have read in this paper. In another paper they write of how the zeroes of the Xi function correspond to the description of the behavior for a quantum-mechanical operator that I just can’t find a way to describe clearly in under three thousand words.

    But mathematicians often speak of the strangeness that mathematical constructs can match reality so well. And here is surely a powerful one. We learned of the Riemann Hypothesis originally by studying how many prime numbers there are compared to the counting numbers. If it’s true, then the physics of the universe may be set up one particular way. Is that not astounding?

     
    • gaurish 5:34 am on Wednesday, 28 December, 2016 Permalink | Reply

      Yes it’s astounding. You have a very nice talent of talking about mathematical quantities without showing formulas :)


      • Joseph Nebus 5:15 am on Thursday, 5 January, 2017 Permalink | Reply

        You’re most kind, thank you. I’ve probably gone overboard in avoiding formulas lately though.


  • Joseph Nebus 6:00 pm on Friday, 11 November, 2016 Permalink | Reply
    Tags: , , , , , ,   

    The End 2016 Mathematics A To Z: Ergodic 


    This essay follows up on distributions, mentioned back on Wednesday. This is only one of the ideas which distributions serve. Do you have a word you’d like to request? I figure to close ‘F’ on Saturday afternoon, and ‘G’ is already taken. But give me a request for a free letter soon and I may be able to work it in.

    Ergodic.

    There comes a time a physics major, or a mathematics major paying attention to one of the field’s best non-finance customers, first works on a statistical mechanics problem. Instead of keeping track of the positions and momentums of one or two or four particles she’s given the task of tracking millions of particles. It’s listed as a distribution of all the possible values they can have. But she still knows what it really is. And she looks at how to describe the way this distribution changes in time. If she’s the slightest bit like me, or anyone I knew, she freezes up at this. Calculate the development of millions of particles? Impossible! She tries working out what happens to just one, instead, and hopes that gives some useful results.

    And then it does.

    It’s a bit much to call this luck. But it is because the student starts off with some simple problems. Particles of gas in a strong box, typically. They don’t interact chemically. Maybe they bounce off each other, but she’s never asked about that. She’s asked about how they bounce off the walls. She can find the relationship between the volume of the box, the pressure the gas exerts on its interior, and the temperature of the gas. And it comes out right.

    She goes on to some other problems and it suddenly fails. Eventually she re-reads the descriptions of how to do this sort of problem. And she does them again and again and it doesn’t feel useful. With luck there’s a moment, possibly while showering, that the universe suddenly changes. And the next time the problem works out. She’s working on distributions instead of toy little single-particle problems.

    But the problem remains: why did it ever work, even for that toy little problem?

    It’s because some systems of things are ergodic. It’s a property that some physics (or mathematics) problems have. Not all. It’s a bit hard to describe clearly. Part of what motivated me to take this topic is that I want to see if I can explain it clearly.

    Every part of some system has a set of possible values it might have. A particle of gas can be in any spot inside the box holding it. A person could be in any of the buildings of her city. A pool ball could be travelling in any direction on the pool table. Sometimes that will change. Gas particles move. People go to the store. Pool balls bounce off the edges of the table.

    These values will have some kind of distribution. Look at where the gas particle is now. And a second from now. And a second after that. And so on, to the limits of human knowledge. Or to when the box breaks open. Maybe the particle will be more often in some areas than in others. Maybe it won’t. Doesn’t matter. It has some distribution. Over time we can say how often we expect to find the gas particle in each of its possible places.

    The same with whatever our system is. People in buildings. Balls on pool tables. Whatever.

    Now instead of looking at one particle (person, ball, whatever) we have a lot of them. Millions of particles in the box. Tens of thousands of people in the city. A pool table that somehow supports ten thousand balls. Imagine they’re all settled to wherever they happen to be.

    So where are they? The gas particle one is easy to imagine. At least for a mathematics major. If you’re stuck on it I’m sorry. I didn’t know. I’ve thought about boxes full of gas particles for decades now and it’s hard to remember that isn’t normal. Let me know if you’re stuck, and where you are. I’d like to know where the conceptual traps are.

    But back to the gas particles in a box. Some fraction of them are in each possible place in the box. There’s a distribution here of how likely you are to find a particle in each spot.

    How does that distribution, the one you get from lots of particles at once, compare to the first, the one you got from one particle given plenty of time? If they agree the system is ergodic. And that’s why my hypothetical physics major got the right answers from the wrong work. (If you are about to write me to complain I’m leaving out important qualifiers let me say I know. Please pretend those qualifiers are in place. If you don’t see what someone might complain about thank you, but it wouldn’t hurt to think of something I might be leaving out here. Try taking a shower.)

    The person in a building is almost certainly not an ergodic system. There’s buildings any one person will never ever go into, however possible it might be. But nearly all buildings have some people who will go into them. The one-person-with-time distribution won’t be the same as the many-people-at-once distribution. Maybe there’s a way to qualify things so that it becomes ergodic. I doubt it.

    The pool table, now, that’s trickier to say. For a real pool table no, of course not. An actual ball on an actual table rolls to a stop pretty soon, either from the table felt’s friction or because it drops into a pocket. Tens of thousands of balls would form an immobile heap on the table that would be pretty funny to see, now that I think of it. Well, maybe those are the same. But they’re a pretty boring same.

    Anyway when we talk about “pool tables” in this context we don’t mean anything so sordid as something a person could play pool on. We mean something where the table surface hasn’t any friction. That makes the physics easier to model. It also makes the game unplayable, which leaves the mathematical physicist strangely unmoved. In this context anyway. We also mean a pool table that hasn’t got any pockets. This makes the game even more unplayable, but the physics even easier. (It makes it, really, like a gas particle in a box. Only without that difficult third dimension to deal with.)

    And that makes it clear. The one ball on a frictionless, pocketless table bouncing around forever maybe we can imagine. A huge number of balls on that frictionless, pocketless table? Possibly trouble. As long as we’re doing imaginary impossible unplayable pool we could pretend the balls don’t collide with each other. Then the distributions of what ways the balls are moving could be equal. If they do bounce off each other, or if they get so numerous they can’t squeeze past one another, well, that’s different.

    An ergodic system lets you do this neat, useful trick. You can look at a single example for a long time. Or you can look at a lot of examples at one time. And they’ll agree in their typical behavior. If one is easier to study than the other, good! Use the one that you can work with. Mathematicians like to do this sort of swapping between equivalent problems a lot.
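
    Here’s a toy version of the trick, a Python sketch of the particle-in-a-box case squashed down to one dimension, with every constant made up. It compares the long-time distribution of a single bouncing particle against a snapshot of a big crowd scattered at random through the box. For this ergodic setup the two histograms should come out close.

        import random

        BOX = 1.0                        # the box runs from 0 to BOX

        def bounce(x, v, dt):
            x += v * dt
            while x < 0.0 or x > BOX:    # reflect off the walls
                if x < 0.0:
                    x, v = -x, -v
                if x > BOX:
                    x, v = 2 * BOX - x, -v
            return x, v

        steps = 100000

        # One particle, watched for a long time.
        bins = [0] * 10
        x, v = 0.123, 0.7                # arbitrary start
        for _ in range(steps):
            x, v = bounce(x, v, 0.01)
            bins[min(int(10 * x / BOX), 9)] += 1
        print("one particle, long time :", [round(b / steps, 3) for b in bins])

        # Many particles, one instant.
        bins = [0] * 10
        for _ in range(steps):
            x = random.uniform(0.0, BOX)
            bins[min(int(10 * x / BOX), 9)] += 1
        print("many particles, one time:", [round(b / steps, 3) for b in bins])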

    The problem is it’s hard to find ergodic systems. We may have a lot of things that look ergodic, that feel like they should be ergodic. But proved ergodic, with a logic that we can’t shake? That’s harder to do. Often in practice we will include a note up top that we are assuming the system to be ergodic. With that “ergodic hypothesis” in mind we carry on with our work. It gives us a handle on a lot of problems that otherwise would be beyond us.

     
  • Joseph Nebus 6:00 pm on Wednesday, 9 November, 2016 Permalink | Reply
    Tags: , , , , , Josiah Willard Gibbs   

    The End 2016 Mathematics A To Z: Distribution (statistics) 


    As I’ve done before I’m using one of my essays to set up for another essay. It makes a later essay easier. What I want to talk about is worth some paragraphs on its own.

    Distribution (statistics)

    The 19th Century saw the discovery of some unsettling truths about … well, everything, really. If there is an intellectual theme of the 19th Century it’s that everything has an unsettling side. In the 20th Century craziness broke loose. The 19th Century, though, saw great reasons to doubt that we knew what we knew.

    But one of the unsettling truths grew out of mathematical physics. We start out studying physics the way Galileo or Newton might have, with falling balls. Ones that don’t suffer from air resistance. Then we move up to more complicated problems, like balls on a spring. Or two balls bouncing off each other. Maybe one ball, called a “planet”, orbiting another, called a “sun”. Maybe a ball on a lever swinging back and forth. We try a couple simple problems with three balls and find out that’s just too hard. We have to track so much information about the balls, about their positions and momentums, that we can’t solve any problems anymore. Oh, we can do the simplest ones, but we’re helpless against the interesting ones.

    And then we discovered something. By “we” I mean people like James Clerk Maxwell and Josiah Willard Gibbs. And that is that we can know important stuff about how millions and billions and even vaster numbers of things move around. Maxwell could work out how the enormously many chunks of rock and ice that make up Saturn’s rings move. Gibbs could work out how the trillions of trillions of trillions of trillions of particles of gas in a room move. We can’t work out how four particles move. How is it we can work out how a godzillion particles move?

    We do it by letting go. We stop looking for that precision and exactitude and knowledge down to infinitely many decimal points. Even though we think that’s what mathematicians and physicists should have. What we do instead is consider the things we would like to know. Where something is. What its momentum is. What side of a coin is showing after a toss. What card was taken off the top of the deck. What tile was drawn out of the Scrabble bag.

    There are possible results for each of these things we would like to know. Perhaps some of them are quite likely. Perhaps some of them are unlikely. We track how likely each of these outcomes are. This is called the distribution of the values. This can be simple. The distribution for a fairly tossed coin is “heads, 1/2; tails, 1/2”. The distribution for a fairly tossed six-sided die is “1/6 chance of 1; 1/6 chance of 2; 1/6 chance of 3” and so on. It can be more complicated. The distribution for a fairly tossed pair of six-sided dice starts out “1/36 chance of 2; 2/36 chance of 3; 3/36 chance of 4” and so on. If we’re measuring something that doesn’t come in nice discrete chunks we have to talk about ranges: the chance that a 30-year-old male weighs between 180 and 185 pounds, or between 185 and 190 pounds. The chance that a particle in the rings of Saturn is moving between 20 and 21 kilometers per second, or between 21 and 22 kilometers per second, and so on.
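
    Tabulating a distribution like the pair-of-dice one is a natural job for a short program. Here’s a Python sketch that just counts every equally likely pair:

        from collections import Counter

        counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
        for total in sorted(counts):
            print("%2d : %d/36" % (total, counts[total]))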

    We may be unable to describe how a system evolves exactly. But often we’re able to describe how the distribution of its possible values evolves. And the laws by which probability work conspire to work for us here. We can get quite precise predictions for how a whole bunch of things behave even without ever knowing what any thing is doing.

    That’s unsettling to start with. It’s made worse by one of the 19th Century’s late discoveries, that of chaos. That a system can be perfectly deterministic. That you might know what every part of it is doing as precisely as you care to measure. And you’re still unable to predict its long-term behavior. That’s unshakeable too, although statistical techniques will give you an idea of how likely different behaviors are. You can learn the distribution of what is likely, what is unlikely, and how often the outright impossible will happen.

    Distributions follow rules. Of course they do. They’re basically the rules you’d imagine from looking at and thinking about something with a range of values. Something like a chart of how many students got what grades in a class, or how tall the people in a group are, or so on. Each possible outcome turns up some fraction of the time. That fraction’s never less than zero nor greater than 1. Add up all the fractions representing all the times every possible outcome happens and the sum is exactly 1. Something happens, even if we never know just what. But we know how often each outcome will.

    There is something amazing to consider here. We can know and track everything there is to know about a physical problem. But we will be unable to do anything with it, except for the most basic and simple problems. We can choose to relax, to accept that the world is unknown and unknowable in detail. And this makes imaginable all sorts of problems that should be beyond our power. Once we’ve given up on this precision we get precise, exact information about what could happen. We can choose to see it as a moral about the benefits and costs and risks of how tightly we control a situation. It’s a surprising lesson to learn from one’s training in mathematics.

     
  • Joseph Nebus 6:00 pm on Wednesday, 2 November, 2016 Permalink | Reply
    Tags: , , eigenvalues, , , , ,   

    The End 2016 Mathematics A To Z: Algebra 


    So let me start the End 2016 Mathematics A To Z with a word everybody figures they know. As will happen, everybody’s right and everybody’s wrong about that.

    Algebra.

    Everybody knows what algebra is. It’s the point where suddenly mathematics involves spelling. Instead of long division we’re on a never-ending search for ‘x’. Years later we pass along gifs of either someone saying “stop asking us to find your ex” or someone who’s circled the letter ‘x’ and written “there it is”. And make jokes about how we got through life without using algebra. And we know it’s the thing mathematicians are always doing.

    Mathematicians aren’t always doing that. I expect the average mathematician would say she almost never does that. That’s a bit of a fib. We have a lot of work where we do stuff that would be recognizable as high school algebra. It’s just we don’t really care about that. We’re doing that because it’s how we get the problem we are interested in done. The most recent few pieces in my “Why Stuff can Orbit” series include a bunch of high school algebra-style work. But that was just because it was the easiest way to answer some calculus-inspired questions.

    Still, “algebra” is a much-used word. It comes back around the second or third year of a mathematics major’s career. It comes in two forms in undergraduate life. One form is “linear algebra”, which is a great subject. That field’s about how stuff moves. You get to imagine space as this stretchy material. You can stretch it out. You can squash it down. You can stretch it in some directions and squash it in others. You can rotate it. These are simple things to build on. You can spend a whole career building on that. It becomes practical in surprising ways. For example, it’s the field of study behind finding equations that best match some complicated, messy real data.

    The second form is “abstract algebra”, which comes in about the same time. This one is alien and baffling for a long while. It doesn’t help that the books all call it Introduction to Algebra or just Algebra and all your friends think you’re slumming. The mathematics major stumbles through confusing definitions and theorems that ought to sound comforting. (“Fermat’s Little Theorem”? That’s a good thing, right?) But the confusion passes, in time. There’s a beautiful subject here, one of my favorites. I’ve talked about it a lot.

    We start with something that looks like the loosest cartoon of arithmetic. We get a bunch of things we can add together, and an ‘addition’ operation. This lets us do a lot of stuff that looks like addition modulo numbers. Then we go on to stuff that looks like picking up floor tiles and rotating them. Add in something that we call ‘multiplication’ and we get rings. This is a bit more like normal arithmetic. Add in some other stuff and we get ‘fields’ and other structures. We can keep falling back on arithmetic and on rotating tiles to build our intuition about what we’re doing. This trains mathematicians to look for particular patterns in new, abstract constructs.
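
    If you’d like to poke at the loosest cartoon of arithmetic directly, here’s a Python sketch, with the modulus chosen arbitrarily. It checks the group-flavored properties of addition modulo 5: sums stay inside the set, there’s an identity, and everything has an inverse. Associativity would be one more loop in the same style.

        n = 5
        elements = range(n)

        def add(a, b):
            return (a + b) % n

        # Closure: the sum of any two elements is again an element.
        assert all(add(a, b) in elements for a in elements for b in elements)
        # Identity: adding 0 changes nothing.
        assert all(add(a, 0) == a for a in elements)
        # Inverses: everything has a partner that adds back to the identity.
        assert all(any(add(a, b) == 0 for b in elements) for a in elements)
        print("addition mod", n, "passes the group checks")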

    Linear algebra is not an abstract-algebra sort of algebra. Sorry about that.

    And there’s another kind of algebra that mathematicians talk about. At least once they get into grad school they do. There’s a huge family of these kinds of algebras. The family trait for them is that they share a particular rule about how you can multiply their elements together. I won’t get into that here. There are many kinds of these algebras. One that I keep trying to study on my own and crash hard against is Lie Algebra. That’s named for the Norwegian mathematician Sophus Lie. Pronounce it “lee”, as in “leaning”. You can understand quantum mechanics much better if you’re comfortable with Lie Algebras and so now you know one of my weaknesses. Another kind is the Clifford Algebra. This lets us create something called a “hypercomplex number”. It isn’t much like a complex number. Sorry. Clifford Algebra does lend itself to a construct called spinors. These help physicists understand the behavior of bosons and fermions. Every bit of matter seems to be either a boson or a fermion. So you see why this is something people might like to understand.

    Boolean Algebra is the algebra of this type that a normal person is likely to have heard of. It’s about what we can build using two values and a few operations. Those values by tradition we call True and False, or 1 and 0. The operations we call things like ‘and’ and ‘or’ and ‘not’. It doesn’t sound like much. It gives us computational logic. Isn’t that amazing stuff?
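
    It’s also the one algebra here you can try in any programming language at all. A tiny Python truth table:

        for p in (False, True):
            print("not", p, "=", not p)
            for q in (False, True):
                print(p, "and", q, "=", p and q, "|", p, "or", q, "=", p or q)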

    So if someone says “algebra” she might mean any of these. A normal person in a non-academic context probably means high school algebra. A mathematician speaking without further context probably means abstract algebra. If you hear something about “matrices” it’s more likely that she’s speaking of linear algebra. But abstract algebra can’t be ruled out yet. If you hear a word like “eigenvector” or “eigenvalue” or anything else starting “eigen” (or “characteristic”) she’s almost certainly speaking of linear algebra. And if there’s someone’s name before the word “algebra” then she’s probably speaking of the last of these. This is not a perfect guide. But it is the sort of context mathematicians expect other mathematicians notice.

     
    • John Friedrich 2:13 am on Thursday, 3 November, 2016 Permalink | Reply

      The cruelest trick that happened to me was when a grad school professor labeled the Galois Theory class “Algebra”. Until then, the lowest score I’d ever gotten in a math class was a B. After that, I decided to enter the work force and abandon my attempts at a master’s degree.


      • Joseph Nebus 3:32 pm on Friday, 4 November, 2016 Permalink | Reply

        Well, it’s true enough that it’s part of algebra. But I’d feel uncomfortable plunging right into that without the prerequisites being really clear. I’m not sure I’ve even run into a nice clear pop-culture explanation of Galois Theory past some notes about how there’s two roots to a quadratic equation and see how they mirror each other.


  • Joseph Nebus 6:00 pm on Friday, 28 October, 2016 Permalink | Reply
    Tags: , , , , , , ,   

    Why Stuff Can Orbit, Part 7: ALL the Circles 


    Previously:

    And some supplemental reading:


    Last time around I showed how to do a central-force problem for normal gravity. That’s one where a planet, or moon, or satellite, or whatever is drawn towards the center of space. It’s drawn by a potential energy that equals some constant times the inverse of the distance from the origin. That is, V(r) = C r-1. With a little bit of fussing around we could find out what distance from the center lets a circular orbit happen. And even Kepler’s Third Law, connecting how long an orbit takes to how big it must be.

    There are two natural follow-up essays. One is to work out elliptical orbits. We know there are such things; all real planets and moons have them, and nearly all satellites do. The other is to work out circular orbits for another easy-to-understand example, like a mass on a spring. That’s something with a potential energy that looks like V(r) = C r2.

    I want to do the elliptical orbits later on. The mass-on-a-spring I could do now. So could you, if you follow last week’s essay and just change the numbers a little. But, you know, why bother working out one problem? Why not work out a lot of them? Why not work out every central-force problem, all at once?

    Because we can’t. I mean, I can describe how to do that, but it isn’t going to save us much time. Like, the quadratic formula is great because it’ll give you the roots of a quadratic polynomial in one step. You don’t have to do anything but a little arithmetic. We can’t get a formula that easy if we try to solve for every possible potential energy.

    But we can work out a lot of central-force potential energies all at once. That is, we can solve for a big set of similar problems, a “family” as we call them. The obvious family is potential energies that are powers of the planet’s distance from the center. That is, they’re potential energies that follow the rule

    V(r) = C r^n

    Here ‘C’ is some number. It might depend on the planet’s mass, or the sun’s mass. Doesn’t matter. All that’s important is that it not change over the course of the problem. So, ‘C’ for Constant. And ‘n’ is another constant number. Some numbers turn up a lot in useful problems. If ‘n’ is -1 then this can describe gravitational attraction. If ‘n’ is 2 then this can describe a mass on a spring. This ‘n’ can be any real number. That’s not an ideal choice of letter. ‘n’ usually designates a whole number. By using that letter I’m biasing people to think of numbers like ‘2’ at the expense of perfectly legitimate alternatives such as ‘2.1’. But now that I’ve made that explicit maybe we won’t make a casual mistake.

    So what I want is to find where there are stable circular orbits for an arbitrary radius-to-a-power force. I don’t know what ‘C’ and ‘n’ are, but they’re some numbers. To find where a planet can have a circular orbit I need to suppose the planet has some mass, ‘m’. And that its orbit has some angular momentum, a number called ‘L’. From this we get the effective potential energy. That’s what the potential energy looks like when we remember that angular momentum has to be conserved.

    V_{eff}(r) = C r^n + \frac{L^2}{2m} r^{-2}

    To find where a circular orbit can be we have to take the first derivative of Veff with respect to ‘r’. The circular orbit can happen at a radius for which this first derivative equals zero. So we need to solve this:

    \frac{dV_{eff}}{dr} = n C r^{n-1} - 2\frac{L^2}{2m} r^{-3} = 0

    That derivative we know from the rules of how to take derivatives. And from this point on we have to do arithmetic. We want to get something which looks like ‘r = (some mathematics stuff here)’. Hopefully it’ll be something not too complicated. And hey, in the second term there, the one with L2 in it, we have a 2 in the numerator and a 2 in the denominator. So those cancel out and that’s simpler. That’s hopeful, isn’t it?

    n C r^{n-1} - \frac{L^2}{m}r^{-3} = 0

    OK. Add \frac{L^2}{m}r^{-3} to both sides of the equation; we’re used to doing that. At least in high school algebra we are.

    n C r^{n-1} = \frac{L^2}{m}r^{-3}

    Not looking much better? Try multiplying both left and right sides by ‘r3‘. This gets rid of all the ‘r’ terms on the right-hand side of the equation.

    n C r^{n+2} = \frac{L^2}{m}

    Now we’re getting close to the ideal of ‘r = (some mathematics stuff)’. Divide both sides by the constant number ‘n times C’.

    r^{n+2} = \frac{L^2}{n C m}

    I know how much everybody likes taking (n+2)-nd roots of a quantity. I’m sure you occasionally just pick an object at random — your age, your telephone number, a potato, a wooden block — and find its (n+2)-nd root. I know. I’ll spoil some of the upcoming paragraphs to say that it’s going to be more useful knowing ‘rn + 2‘ than it is knowing ‘r’. But I’d like to have the radius of a circular orbit on the record. Here it is.

    r = \left(\frac{L^2}{n C m}\right)^{\frac{1}{n + 2}}
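
    If you don’t trust the algebra, and a healthy mistrust of one’s own algebra is a professional virtue, here’s a quick Python check with arbitrary constants. It builds the radius from that formula and confirms the first derivative of the effective potential is zero there, up to rounding:

        C, n, L, m = 3.0, 2.0, 1.5, 2.0   # arbitrary constants with n*C positive

        r = (L**2 / (n * C * m))**(1.0 / (n + 2))

        def dVeff(r):
            # n C r^(n-1) - (L^2/m) r^(-3), the derivative worked out above
            return n * C * r**(n - 1) - L**2 / m * r**(-3)

        print("r =", r, "  dVeff(r) =", dVeff(r))   # the derivative prints as ~0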

    Can we check that this is right? Well, we can at least check that things aren’t wrong. We can check against the example we already know. That’s the gravitational potential energy problem. For that one, ‘C’ is the number ‘-G M m’. That’s minus the gravitational constant of the universe times the mass of the sun times the mass of the planet. And for gravitational potential energy, ‘n’ is equal to -1. This implies that, for a gravitational potential energy problem, we get a circular orbit when

    r_{grav} = \left(\frac{L^2}{(-1)\cdot\left(-G M m\right)\cdot m}\right)^{\frac{1}{1}} = \frac{L^2}{G M m^2}

    I’m labelling it ‘rgrav‘ to point out it’s the radius of a circular orbit for gravitational problems. Might or might not need that in the future, but the label won’t hurt anything.

    Go ahead and guess whether that agrees with last week’s work. I’m feeling confident.

    OK, so, we know where a circular orbit might turn up for an arbitrary power function potential energy. Is it stable? We know from the third “Why Stuff Can Orbit” essay that it’s not a sure thing. We can have potential energies that don’t have any circular orbits. So it must be possible there are unstable orbits.

    Whether our circular orbit is stable demands we do the same work we did last time. It will look a little harder to start, because there’s one more variable in it. What had been ‘-1’ last time is now an ‘n’, and stuff like ‘-2’ becomes ‘n-1’. Is that actually harder? Really?

    So here’s the second derivative of the effective potential:

    \frac{d^2V_{eff}}{dr^2} = (n-1)nCr^{n - 2} + 3\frac{L^2}{m}r^{-4}

    My first impulse when I worked this out was to take the ‘r’ for a circular orbit, the thing worked out five paragraphs above, and plug it in to that expression. This is madness. Don’t do it. Or, you know, go ahead and start doing it and see how long it takes before you regret the errors of your ways.

    The non-madness-inducing way to work out if this is a positive number? It involves noticing r^{n-2} is the same number as r^{n+2}\cdot r^{-4} . So we have this bit of distributive-law magic:

    \frac{d^2V_{eff}}{dr^2} = (n-1)nCr^{n + 2}r^{-4} + 3\frac{L^2}{m}r^{-4}

    \frac{d^2V_{eff}}{dr^2} = \left((n-1)nCr^{n + 2} + 3\frac{L^2}{m}\right) \cdot r^{-4}

    I’m sure we all agree that’s better, right? No, honestly, let me tell you why this is better. When will this expression be true?

    \left((n-1)nCr^{n + 2} + 3\frac{L^2}{m}\right) \cdot r^{-4} > 0

    That’s the product of two expressions. One of them is ‘r-4‘. ‘r’ is the radius of the planet’s orbit. That has to be a positive number. It’s how far the planet is from the origin. The number can’t be anything but positive. So we don’t have to worry about that.

    SPOILER: I just palmed a card there. Did you see me palm a card there? Because I totally did. Watch for where that card turns up. It’ll be after this next bit.

    So let’s look at the non-card-palmed part of this. We’re going to have a stable equilibrium when the other factor of that mess up above is positive. We need to know when this is true:

    (n-1)nCr^{n + 2} + 3\frac{L^2}{m}  > 0

    OK. Well. We do know what ‘rn+2‘ is. Worked that out … uhm … twelve(?) paragraphs ago. I’ll say twelve and hope I don’t mess that up in editing. Anyway, what’s important is r^{n+2} = \frac{L^2}{n C m} . So we put that in where ‘rn+2‘ appeared in that above expression.

    (n-1)nC\frac{L^2}{n C m} + 3 \frac{L^2}{m} > 0

    This is going to simplify down some. Look at that first term, with an ‘n C’ in the numerator and again in the denominator. We’re going to be happier soon as we cancel those out.

    (n-1)\frac{L^2}{m} + 3\frac{L^2}{m} > 0

    And now we get to some fine distributive-law action, the kind everyone likes:

    \left( (n-1) + 3 \right)\frac{L^2}{m} > 0

    Well, we know \frac{L^2}{m} has to be positive. The angular momentum ‘L’ might be positive or might be negative but its square is certainly positive. The mass ‘m’ has to be a positive number. So we’ll get a stable equilibrium whenever (n - 1) + 3 is greater than 0. That is, whenever n > -2 . Done.

    No we’re not done. That’s nonsense. We knew that going in. We saw that a couple essays ago. If your potential energy were something like, say, V(r) = -2 r^3 you wouldn’t have any orbits at all, never mind stable orbits. But 3 is certainly greater than -2. So what’s gone wrong here?

    Let’s go back to that palmed card. Remember I mentioned how the radius of our circular orbit was a positive number. This has to be true, if there is a circular orbit. What if there isn’t one? Do we know there is a radius ‘r’ that the planet can orbit the origin? Here’s the formula giving us that circular orbit’s radius once again:

    r = \left(\frac{L^2}{n C m}\right)^{\frac{1}{n + 2}}

    Do we know that’s going to exist? … Well, sure. That’s going to be some meaningful number as long as we avoid obvious problems. Like, we can’t have the power ‘n’ be equal to zero, because dividing by zero is all sorts of bad. Also we can’t have the constant ‘C’ be zero, again because dividing by zero is bad.

    Not a problem, though. If either ‘C’ or ‘n’ were zero, or if both were, then the original potential energy would be a constant number. V(r) would be equal to ‘C’ (if ‘n’ were zero), or ‘0’ (if ‘C’ were zero). It wouldn’t change with the radius ‘r’. This is a case called the ‘free particle’. There’s no force pushing the planet in one direction or another. So if the planet were not moving it would never start. If the planet were already moving, it would keep moving in the same direction in a straight line. No circular orbits.

    Similarly if ‘n’ were equal to ‘-2’ there’d be problems because the power we raise that parenthetical expression to would be equal to one divided by zero, which is bad. Is there anything else that could be trouble there?

    What if the thing inside parentheses is a negative number? I may not know what ‘n’ is. I don’t. We started off by supposing we didn’t know beyond that it was a number. But I do know that the (n+2)-nd root of a negative number is going to be trouble. It might be negative. It might be complex-valued. But it won’t be a positive number. And we need a radius that’s a positive number. So that’s the palmed card. To have a circular orbit at all we have to have:

    \frac{L^2}{n C m} > 0

    ‘L’ is a regular old number, maybe positive, maybe negative. So ‘L2‘ is a positive number. And the mass ‘m’ is a positive number. We don’t know what ‘n’ and ‘C’ are. But as long as their product is positive we’re good. The whole equation will be true. So ‘n’ and ‘C’ can both be negative numbers. We saw that with gravity: V(r) = -\frac{GMm}{r} . ‘G’ is the gravitational constant of the universe, a positive number. ‘M’ and ‘m’ are masses, also positive.

    Or ‘n’ and ‘C’ can both be positive numbers. That turns up with spring problems: V(r) = K r^2 , where ‘K’ is the ‘spring constant’. That’s some positive number again.

    That time we found potential energies that didn’t have orbits? They were ones that had a positive ‘C’ and negative ‘n’, or a negative ‘C’ and positive ‘n’. Those mixed-sign cases fail the condition we just worked out, so they have no circular orbits at all. It’s nice to have that sorted out at least.

    So what does it mean that we can’t have a stable orbit if ‘n’ is less than or equal to -2? Even if ‘C’ is negative? It turns out that if you have a negative ‘C’ and big negative ‘n’, like say -5, the potential energy drops way down to something infinitely large and negative at smaller and smaller radiuses. If you have a positive ‘C’, the potential energy goes way up at smaller and smaller radiuses. For large radiuses the potential drops to zero. But there’s never the little U-shaped dip in the middle, the way you get for gravity-like potentials or spring potentials or normal stuff like that. Yeah, who would have guessed?
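
    To wrap up the stability question with a numerical check: here’s a Python sketch, with arbitrary constants, that gives ‘C’ whatever sign makes ‘n C’ positive so the circular orbit exists. It evaluates the second derivative of the effective potential at that radius for several powers. The sign goes negative just below ‘n’ equals -2, as promised:

        L, m = 1.0, 1.0

        def second_deriv_at_orbit(n):
            C = 1.0 if n > 0 else -1.0   # keep n*C positive so an orbit exists
            r = (L**2 / (n * C * m))**(1.0 / (n + 2))
            return (n - 1) * n * C * r**(n - 2) + 3 * L**2 / m * r**(-4)

        for n in (-3.0, -2.5, -1.5, -1.0, 0.5, 2.0, 5.0):
            print("n = %5.2f   V_eff'' = %9.5f" % (n, second_deriv_at_orbit(n)))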

    What if we do have a stable orbit? How long does an orbit take? How does that relate to the radius of the orbit? We used this radius expression to work out Kepler’s Third Law for the gravity problem last week. We can do that again here.

    Last week we worked out what the angular momentum ‘L’ had to be in terms of the radius of the orbit and the time it takes to complete one orbit. The radius of the orbit we called ‘r’. The time an orbit takes we call ‘T’. The formula for angular momentum doesn’t depend on what problem we’re doing. It just depends on the mass ‘m’ of what’s spinning around and how it’s spinning. So:

    L = 2\pi m \frac{r^2}{T}

    And from this we know what ‘L2‘ is.

    L^2 = 4\pi^2 m^2 \frac{r^4}{T^2}

    That’s convenient because we have an ‘L2‘ term in the formula for what the radius is. I’m going to stick with the formula we got for ‘rn+2‘ because that is so, so much easier to work with than ‘r’ by itself. So we go back to that starting point and then substitute what we know ‘L2‘ to be in there.

    r^{n + 2} = \frac{L^2}{n C m}

    This we rewrite as:

    r^{n + 2} = \frac{4 \pi^2 m^2}{n C m}\frac{r^4}{T^2}

    Some stuff starts cancelling out again. One ‘m’ in the numerator and one in the denominator. Small thing but it makes our lives a bit better. We can multiply the left side and the right side by T2. That’s more obviously an improvement. We can divide the left side and the right side by ‘rn + 2‘. And yes that is too an improvement. Watch all this:

    r^{n + 2} = \frac{4 \pi^2 m}{n C}\frac{r^4}{T^2}

    T^2 \cdot r^{n + 2} = \frac{4 \pi^2 m}{n C}r^4

    T^2  = \frac{4 \pi^2 m}{n C}r^{2 - n}

    And that last bit is the equivalent of Kepler’s Third Law for our arbitrary power-law style force.

    Are we right? Hard to say offhand. We can check that we aren’t wrong, at least. We can check against the gravitational potential energy. For this ‘n’ is equal to -1. ‘C’ is equal to ‘-G M m’. Make those substitutions; what do we get?

    T^2  = \frac{4 \pi^2 m}{(-1) (-G M m)}r^{2 - (-1)}

    T^2  = \frac{4 \pi^2}{G M}r^{3}

    Well, that is what we expected for this case. So the work looks good, this far. Comforting.
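
    One more check we can run in code. Plug the spring case, ‘n’ equal to 2, into the general law and the radius appears to the zeroth power. The period shouldn’t depend on the orbit’s size at all, which is the familiar fact that a mass on a spring has a single natural frequency. A Python sketch with arbitrary constants:

        import math

        m, C, n = 1.0, 1.0, 2.0   # spring potential V(r) = C r^2, made-up numbers

        for r in (0.5, 1.0, 2.0, 4.0):
            T = math.sqrt(4 * math.pi**2 * m / (n * C) * r**(2 - n))
            print("r = %.1f   T = %.4f" % (r, T))
        # Every radius prints the same period, as the n = 2 case predicts.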

     
  • Joseph Nebus 6:00 pm on Friday, 21 October, 2016 Permalink | Reply
    Tags: , , , , , , , ,   

    Why Stuff Can Orbit, Part 6: Circles and Where To Find Them 


    Previously:

    And some supplemental reading:


    So now we can work out orbits. At least orbits for a central force problem. Those are ones where a particle — it’s easy to think of it as a planet — is pulled towards the center of the universe. How strong that pull is depends on some constants. But it only changes as the distance the planet is from the center changes.

    What we’d like to know is whether there are circular orbits. By “we” I mean “mathematical physicists”. And I’m including you in that “we”. If you’re reading this far you’re at least interested in knowing how mathematical physicists think about stuff like this.

    It’s easiest describing when these circular orbits exist if we start with the potential energy. That’s a function named ‘V’. We write it as ‘V(r)’ to show it’s an energy that changes as ‘r’ changes. By ‘r’ we mean the distance from the center of the universe. We’d use ‘d’ for that except we’re so used to thinking of distance from the center as ‘radius’. So ‘r’ seems more compelling. Sorry.

    Besides the potential energy we need to know the angular momentum of the planet (or whatever it is) moving around the center. The amount of angular momentum is a number we call ‘L’. It might be positive, it might be negative. Also we need the planet’s mass, which we call ‘m’. The angular momentum and mass let us write a function called the effective potential energy, ‘Veff(r)’.

    And we’ll need to take derivatives of ‘Veff(r)’. Fortunately that “How Differential Calculus Works” essay explains all the symbol-manipulation we need to get started. That part is calculus, but the easy part. We can just follow the rules already there. So here’s what we do:

    • The planet (or whatever) can have a circular orbit around the center at any radius which makes the equation \frac{dV_{eff}}{dr} = 0 true.
    • The circular orbit will be stable if the radius of its orbit makes the second derivative of the effective potential, \frac{d^2V_{eff}}{dr^2} , some number greater than zero. (A short computational sketch of both tests follows this list.)
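
    Here’s that sketch. It’s Python with the SymPy symbolic-mathematics library, my own choice rather than anything the essay requires, applied to a made-up spring-like effective potential:

        import sympy as sp

        r = sp.symbols('r', positive=True)
        K, L, m = 1, 1, 1   # made-up constants for a spring-like example

        V_eff = K * r**2 + L**2 / (2 * m) * r**(-2)

        first = sp.diff(V_eff, r)
        orbits = sp.solve(sp.Eq(first, 0), r)   # radiuses of circular orbits
        second = sp.diff(V_eff, r, 2)

        for r0 in orbits:
            print("circular orbit at r =", r0,
                  "| stable:", bool(second.subs(r, r0) > 0))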

    We’re interested in stable orbits because usually unstable orbits are boring. They might exist but any little perturbation breaks them down. The mathematician, ordinarily, sees this as a useless solution except in how it describes different kinds of orbits. The physicist might point out that sometimes it can take a long time, possibly millions of years, before the perturbation becomes big enough to stand out. Indeed, it’s an open question whether our solar system is stable. While it seems to have gone millions of years without any planet changing its orbit very much we haven’t got the evidence to say it’s impossible that, say, Saturn will be kicked out of the solar system anytime soon. Or worse, that Earth might be. “Soon” here means geologically soon, like, in the next million years.

    (If it takes so long for the instability to matter then the mathematician might allow that as “metastable”. There are a lot of interesting metastable systems. But right now, I don’t care.)

    I realize now I didn’t explain the notation for the second derivative before. It looks funny because that’s just the best we can work out. In that fraction \frac{d^2V_{eff}}{dr^2} the ‘d’ isn’t a number so we can’t cancel it out. And the superscript ‘2’ doesn’t mean squaring, at least not the way we square numbers. There’s a functional analysis essay in there somewhere. Again I’m sorry about this but there’s a lot of things mathematicians want to write out and sometimes we can’t find a way that avoids all confusion. Roll with it.

    So that explains the whole thing clearly and easily and now nobody could be confused and yeah I know. If my Classical Mechanics professor left it at that we’d have open rebellion. Let’s do an example.

    There are two and a half good examples. That is, they’re central force problems with answers we know. One is gravitation: we have a planet orbiting a star that’s at the origin. Another is springs: we have a mass that’s connected by a spring to the origin. And the half is electric: put a positive electric charge at the center and have a negative charge orbit that. The electric case is only half a problem because it’s the same as the gravitation problem except for what the constants involved are. Electric charges attract each other crazy way stronger than gravitational masses do. But that doesn’t change the work we do.

    This is a lie. Electric charges accelerating, and just orbiting counts as accelerating, cause electromagnetic effects to happen. They give off light. That’s important, but it’s also complicated. I’m not going to deal with that.

    I’m going to do the gravitation problem. After all, we know the answer! By Kepler’s something law, something something radius cubed something G M … something … squared … After all, we can look up the answer!

    The potential energy for a planet orbiting a sun looks like this:

    V(r) = - G M m \frac{1}{r}

    Here ‘G’ is a constant, called the Gravitational Constant. It’s how strong gravity in the universe is. It’s not very strong. ‘M’ is the mass of the sun. ‘m’ is the mass of the planet. To make sense ‘M’ should be a lot bigger than ‘m’. ‘r’ is how far the planet is from the sun. And yes, that’s one-over-r, not one-over-r-squared. This is the potential energy of the planet being at a given distance from the sun. One-over-r-squared gives us how strong the force attracting the planet towards the sun is. Different thing. Related thing, but different thing. Just listing all these quantities one after the other means ‘multiply them together’, because mathematicians multiply things together a lot and get bored writing multiplication symbols all the time.

    Now for the effective potential we need to toss in the angular momentum. That’s ‘L’. The effective potential energy will be:

    V_{eff}(r) = - G M m \frac{1}{r} + \frac{L^2}{2 m r^2}

    I’m going to rewrite this in a way that means the same thing, but that makes it easier to take derivatives. At least easier to me. You’re on your own. But here’s what looks easier to me:

    V_{eff}(r) = - G M m r^{-1} + \frac{L^2}{2 m} r^{-2}

    I like this because it makes every term here look like “some constant number times r to a power”. That’s easy to take the derivative of. Check back on that “How Differential Calculus Works” essay. The first derivative of this ‘Veff(r)’, taken with respect to ‘r’, looks like this:

    \frac{dV_{eff}}{dr} = -(-1) G M m r^{-2} -2\frac{L^2}{2m} r^{-3}

    We can tidy that up a little bit: -(-1) is another way of writing 1. The second term has two times something divided by 2. We don’t need to be that complicated. In fact, when I worked out my notes I went directly to this simpler form, because I wasn’t going to be thrown by that. I imagine I’ve got people reading along here who are watching these equations warily, if at all. They’re ready to bolt at the first sign of something terrible-looking. There’s nothing terrible-looking coming up. All we’re doing from this point on is really arithmetic. It’s multiplying or adding or otherwise moving around numbers to make the equation prettier. It happens we only know those numbers by cryptic names like ‘G’ or ‘L’ or ‘M’. You can go ahead and pretend they’re ‘4’ or ‘5’ or ‘7’ if you like. You know how to do the steps coming up.

    So! We allegedly can have a circular orbit when this first derivative is equal to zero. What values of ‘r’ make true this equation?

    G M m r^{-2} - \frac{L^2}{m} r^{-3} = 0

    Not so helpful there. What we want is to have something like ‘r = (mathematics stuff here)’. We have to do some high school algebra moving-stuff-around to get that. So one thing we can do to get closer is add the quantity \frac{L^2}{m} r^{-3} to both sides of this equation. This gets us:

    G M m r^{-2} = \frac{L^2}{m} r^{-3}

Things are getting better. Now multiply both sides by the same number. Which number? r^3. That's because 'r^{-3}' times 'r^3' is going to equal 1, while 'r^{-2}' times 'r^3' will equal 'r^1', which normal people call 'r'. I kid; normal people don't think of such a thing at all, much less call it anything. But if they did, they'd call it 'r'. We've got:

    G M m r = \frac{L^2}{m}

And now we're getting there! Divide both sides by whatever number 'G M m' is, as long as it isn't zero. And then we have our circular orbit! It's at the radius

    r = \frac{L^2}{G M m^2}

    Very good. I’d even say pretty. It’s got all those capital letters and one little lowercase. Something squared in the numerator and the denominator. Aesthetically pleasant. Stinks a little that it doesn’t look like anything we remember from Kepler’s Laws once we’ve looked them up. We can fix that, though.
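
For the skeptical, here's a Python sketch that doesn't trust the algebra. It builds the angular momentum of Earth's (nearly) circular orbit from stock numbers, using the standard circular-orbit speed v = sqrt(G M / r), which is textbook physics rather than anything derived in this essay, and asks whether the formula hands back the radius we started with:

import math

G = 6.674e-11    # gravitational constant
M = 1.989e30     # the Sun, kg
m = 5.972e24     # the Earth, kg
r = 1.496e11     # Earth's orbital radius, m

v = math.sqrt(G * M / r)    # speed of a circular orbit at radius r
L_ang = m * r * v           # its angular momentum, m times r times v

print(L_ang**2 / (G * M * m**2) / r)    # prints 1.0: the formula returns r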

    The key is the angular momentum ‘L’ there. I haven’t said anything about how that number relates to anything. It’s just been some constant of the universe. In a sense that’s fair enough. Angular momentum is conserved, exactly the same way energy is conserved, or the way linear momentum is conserved. Why not just let it be whatever number it happens to be?

(A note for people who skipped earlier essays: Angular momentum is not a number. It's really a three-dimensional vector. But in a central force problem with just one planet moving around we aren't doing any harm by pretending it's just a number. We set it up so that the angular momentum is pointing directly out of, or directly into, the sheet of paper we pretend the planet's orbiting in. Since we know the direction before we even start work, all we have to care about is the size. That's the number I'm talking about.)

    The angular momentum of a thing is its moment of inertia times its angular velocity. I’m glad to have cleared that up for you. The moment of inertia of a thing describes how easy it is to start it spinning, or stop it spinning, or change its spin. It’s a lot like inertia. What it is depends on the mass of the thing spinning, and how that mass is distributed, and what it’s spinning around. It’s the first part of physics that makes the student really have to know volume integrals.

    We don’t have to know volume integrals. A single point mass spinning at a constant speed at a constant distance from the origin is the easy angular momentum to figure out. A mass ‘m’ at a fixed distance ‘r’ from the center of rotation moving at constant speed ‘v’ has an angular momentum of ‘m’ times ‘r’ times ‘v’.

    So great; we’ve turned ‘L’ which we didn’t know into ‘m r v’, where we know ‘m’ and ‘r’ but don’t know ‘v’. We’re making progress, I promise. The planet’s tracing out a circle in some amount of time. It’s a circle with radius ‘r’. So it traces out a circle with perimeter ‘2 π r’. And it takes some amount of time to do that. Call that time ‘T’. So its speed will be the distance travelled divided by the time it takes to travel. That’s \frac{2 \pi r}{T} . Again we’ve changed one unknown number ‘L’ for another unknown number ‘T’. But at least ‘T’ is an easy familiar thing: it’s how long the orbit takes.

    Let me show you how this helps. Start off with what ‘L’ is:

    L = m r v = m r \frac{2\pi r}{T} = 2\pi m \frac{r^2}{T}

    Now let’s put that into the equation I got eight paragraphs ago:

    r = \frac{L^2}{G M m^2}

    Remember that one? Now put what I just said ‘L’ was, in where ‘L’ shows up in that equation.

    r = \frac{\left(2\pi m \frac{r^2}{T}\right)^2}{G M m^2}

    I agree, this looks like a mess and possibly a disaster. It’s not so bad. Do some cleaning up on that numerator.

    r = \frac{4 \pi^2 m^2}{G M m^2} \frac{r^4}{T^2}

    That’s looking a lot better, isn’t it? We even have something we can divide out: the mass of the planet is just about to disappear. This sounds bizarre, but remember Kepler’s laws: the mass of the planet never figures into things. We may be on the right path yet.

    r = \frac{4 \pi^2}{G M} \frac{r^4}{T^2}

OK. Now I'm going to multiply both sides by 'T^2' because that'll get that out of the denominator. And I'll divide both sides by 'r' so that I only have the radius of the circular orbit on one side of the equation. Here's what we've got now:

    T^2 = \frac{4 \pi^2}{G M} r^3

    And hey! That looks really familiar. A circular orbit’s radius cubed is some multiple of the square of the orbit’s time. Yes. This looks right. At least it looks reasonable. Someone else can check if it’s right. I like the look of it.
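
The formula earns its keep turned around, too. Here's a Python sketch that solves it for 'r' given a period, to ask where a satellite must sit to circle the Earth once per day. The Earth numbers are assumptions of the example:

import math

G = 6.674e-11    # gravitational constant
M = 5.972e24     # mass of the Earth this time, kg
T = 86164.0      # one sidereal day, in seconds

# T^2 = (4 pi^2 / (G M)) r^3, solved for r:
r = (G * M * T**2 / (4 * math.pi**2))**(1 / 3)
print(r / 1000)  # about 42,000 km: the geostationary-orbit radius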

    So this is the process you’d use to start understanding orbits for your own arbitrary potential energy. You can find the equivalent of Kepler’s Third Law, the one connecting orbit times and orbit radiuses. And it isn’t really hard. You need to know enough calculus to differentiate one function, and then you need to be willing to do a pile of arithmetic on letters. It’s not actually hard. Next time I hope to talk about more and different … um …

    I’d like to talk about the different … oh, dear. Yes. You’re going to ask about that, aren’t you?

    Ugh. All right. I’ll do it.

    How do we know this is a stable orbit? Well, it just is. If it weren’t the Earth wouldn’t have a Moon after all this. Heck, the Sun wouldn’t have an Earth. At least it wouldn’t have a Jupiter. If the solar system is unstable, Jupiter is probably the most stable part. But that isn’t convincing. I’ll do this right, though, and show what the second derivative tells us. It tells us this is too a stable orbit.

    So. The thing we have to do is find the second derivative of the effective potential. This we do by taking the derivative of the first derivative. Then we have to evaluate this second derivative and see what value it has for the radius of our circular orbit. If that’s a positive number, then the orbit’s stable. If that’s a negative number, then the orbit’s not stable. This isn’t hard to do, but it isn’t going to look pretty.

    First the pretty part, though. Here’s the first derivative of the effective potential:

    \frac{dV_{eff}}{dr} = G M m r^{-2} - \frac{L^2}{m} r^{-3}

    OK. So the derivative of this with respect to ‘r’ isn’t hard to evaluate again. This is again a function with a bunch of terms that are all a constant times r to a power. That’s the easiest sort of thing to differentiate that isn’t just something that never changes.

    \frac{d^2 V_{eff}}{dr^2} = -2 G M m r^{-3} - (-3)\frac{L^2}{m} r^{-4}

    Now the messy part. We need to work out what that line above is when our planet’s in our circular orbit. That circular orbit happens when r = \frac{L^2}{G M m^2} . So we have to substitute that mess in for ‘r’ wherever it appears in that above equation and you’re going to love this. Are you ready? It’s:

    -2 G M m \left(\frac{L^2}{G M m^2}\right)^{-3} + 3\frac{L^2}{m}\left(\frac{L^2}{G M m^2}\right)^{-4}

    This will get a bit easier promptly. That’s because something raised to a negative power is the same as its reciprocal raised to the positive of that power. So that terrible, terrible expression is the same as this terrible, terrible expression:

    -2 G M m \left(\frac{G M m^2}{L^2}\right)^3 + 3 \frac{L^2}{m}\left(\frac{G M m^2}{L^2}\right)^4

    Yes, yes, I know. Only thing to do is start hacking through all this because I promise it’s going to get better. Putting all those third- and fourth-powers into their parentheses turns this mess into:

    -2 G M m \frac{G^3 M^3 m^6}{L^6} + 3 \frac{L^2}{m} \frac{G^4 M^4 m^8}{L^8}

    Yes, my gut reaction when I see multiple things raised to the eighth power is to say I don’t want any part of this either. Hold on another line, though. Things are going to start cancelling out and getting shorter. Group all those things-to-powers together:

    -2 \frac{G^4 M^4 m^7}{L^6} + 3 \frac{G^4 M^4 m^7}{L^6}

    Oh. Well, now this is different. The second derivative of the effective potential, at this point, is the number

    \frac{G^4 M^4 m^7}{L^6}

And I admit I don't know what number that is. But here's what I do know: 'G' is a positive number. 'M' is a positive number. 'm' is a positive number. 'L' might be positive or might be negative, but 'L^6' is a positive number either way. So this is a bunch of positive numbers multiplied and divided together.

So this second derivative, whatever it is, must be a positive number. And so this circular orbit is stable. Give the planet a little nudge and that's all right. It'll stay near its orbit. I'm sorry to put you through that but some people raised the, honestly, fair question.
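
For anyone who'd like the substitution redone by machine, the Python library sympy will grind through it. A sketch; the symbols are this essay's letters, declared positive so the library doesn't fret:

import sympy as sp

G, M, m, L = sp.symbols('G M m L', positive=True)
r = sp.symbols('r', positive=True)

d2V = -2 * G * M * m * r**-3 + 3 * (L**2 / m) * r**-4   # the second derivative
r_circ = L**2 / (G * M * m**2)                          # the circular orbit

# Substitute the orbit radius in; the mess collapses to one positive term.
print(sp.simplify(d2V.subs(r, r_circ)))    # G**4*M**4*m**7/L**6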

    So this is the process you’d use to start understanding orbits for your own arbitrary potential energy. You can find the equivalent of Kepler’s Third Law, the one connecting orbit times and orbit radiuses. And it isn’t really hard. You need to know enough calculus to differentiate one function, and then you need to be willing to do a pile of arithmetic on letters. It’s not actually hard. Next time I hope to talk about the other kinds of central forces that you might get. We only solved one problem here. We can solve way more than that.

     
    • howardat58 6:18 pm on Friday, 21 October, 2016 Permalink | Reply

      I love the chatty approach.


      • Joseph Nebus 5:03 am on Saturday, 22 October, 2016 Permalink | Reply

        Thank you. I realized doing Theorem Thursdays over the summer that it was hard to avoid that voice, and then that it was fun writing in it. So eventually I do learn, sometimes.


  • Joseph Nebus 6:00 pm on Friday, 14 October, 2016 Permalink | Reply
Tags: harmonic motion

    How Mathematical Physics Works: Another Course In 2200 Words 


    OK, I need some more background stuff before returning to the Why Stuff Can Orbit series. Last week I explained how to take derivatives, which is one of the three legs of a Calculus I course. Now I need to say something about why we take derivatives. This essay won’t really qualify you to do mathematical physics, but it’ll at least let you bluff your way through a meeting with one.

    We care about derivatives because we’re doing physics a smart way. This involves thinking not about forces but instead potential energy. We have a function, called V or sometimes U, that changes based on where something is. If we need to know the forces on something we can take the derivative, with respect to position, of the potential energy.

    The way I’ve set up these central force problems makes it easy to shift between physical intuition and calculus. Draw a scribbly little curve, something going up and down as you like, as long as it doesn’t loop back on itself. Also, don’t take the pen from paper. Also, no corners. That’s just cheating. Smooth curves. That’s your potential energy function. Take any point on this scribbly curve. If you go to the right a little from that point, is the curve going up? Then your function has a positive derivative at that point. Is the curve going down? Then your function has a negative derivative. Find some other point where the curve is going in the other direction. If it was going up to start, find a point where it’s going down. Somewhere in-between there must be a point where the curve isn’t going up or going down. The Intermediate Value Theorem says you’re welcome.

    These points where the potential energy isn’t increasing or decreasing are the interesting ones. At least if you’re a mathematical physicist. They’re equilibriums. If whatever might be moving happens to be exactly there, then it’s not going to move. It’ll stay right there. Mathematically: the force is some fixed number times the derivative of the potential energy there. The potential energy’s derivative is zero there. So the force is zero and without a force nothing’s going to change. Physical intuition: imagine you laid out a track with exactly the shape of your curve. Put a marble at this point where the track isn’t rising and isn’t falling. Does the marble move? No, but if you’re not so sure about that read on past the next paragraph.

    Mathematical physicists learn to look for these equilibriums. We’re taught to not bother with what will happen if we release this particle at this spot with this velocity. That is, you know, not looking at any particular problem someone might want to know. We look instead at equilibriums because they help us describe all the possible behaviors of a system. Mathematicians are sometimes characterized as lazy in spirit. This is fair. Mathematicians will start out with a problem looking to see if it’s just like some other problem someone already solved. But the flip side is if one is going to go to the trouble of solving a new problem, she’s going to really solve it. We’ll work out not just what happens from some one particular starting condition. We’ll try to describe all the different kinds of thing that could happen, and how to tell which of them does happen for your measly little problem.

If you actually do have a curvy track and put a marble down on its equilibrium it might yet move. Suppose the track is rising a while and then falls back again; put the marble at the top and it's likely to roll one way or the other. If it doesn't it's probably because of friction; the track sticks a little. If it were a really smooth track and the marble perfectly round then it'd fall. Give me this. But even with a perfectly smooth track and perfectly frictionless marble it'll still roll one way or another. Unless you put it exactly at the spot that's the top of the hill, not a bit to the left or the right. Good luck.

    What’s happening here is the difference between a stable and an unstable equilibrium. This is again something we all have a physical intuition for. Imagine you have something that isn’t moving. Give it a little shove. Does it stay about like it was? Then it’s stable. Does it break? Then it’s unstable. The marble at the top of the track is at an unstable equilibrium; a little nudge and it’ll roll away. If you had a marble at the bottom of a track, inside a valley, then it’s a stable equilibrium. A little nudge will make the marble rock back and forth but it’ll stay nearby.

    Yes, if you give it a crazy big whack the marble will go flying off, never to be seen again. We’re talking about small nudges. No, smaller than that. This maybe sounds like question-begging to you. But what makes for an unstable equilibrium is that no nudge is too small. The nudge — perturbation, in the trade — will just keep growing. In a stable equilibrium there’s nudges small enough that they won’t keep growing. They might not shrink, but they won’t grow either.

    So how to tell which is which? Well, look at your potential energy and imagine it as a track with a marble again. Where are the unstable equilibriums? They’re the ones at tops of hills. Near them the curve looks like a cup pointing down, to use the metaphor every Calculus I class takes. Where are the stable equilibriums? They’re the ones at bottoms of valleys. Near them the curve looks like a cup pointing up. Again, see Calculus I.

    We may be able to tell the difference between these kinds of equilibriums without drawing the potential energy. We can use the second derivative. To find the second derivative of a function you take the derivative of a function and then — you may want to think this one over — take the derivative of that. That is, you take the derivative of the original function a second time. Sometimes higher mathematics gives us terms that aren’t too hard.

So if you have a spot where you know there's an equilibrium, look at what the second derivative at that spot is. If it's positive, you have a stable equilibrium. If it's negative, you have an unstable equilibrium. This is called the 'Second Derivative Test', as it was named by a committee that figured it was close enough to 5 pm and why cause trouble?

    If the second derivative is zero there, um, we can’t say anything right now. The equilibrium may also be an inflection point. That’s where the growth of something pauses a moment before resuming. Or where the decline of something pauses a moment before resuming. In either case that’s still an unstable equilibrium. But it doesn’t have to be. It could still be a stable equilibrium. It might just have a very smoothly flat base. No telling just from that one piece of information and this is why we have to go on to other work.
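
If you'd like the test as a recipe, here's a small Python sketch, with finite differences standing in for the calculus. The three examples are the marble-track cases: a valley, a hilltop, and a flat inflection the test can't decide:

def classify(f, x, h=1e-5, tol=1e-8):
    # Second Derivative Test at an equilibrium x, where f'(x) = 0
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    if d2 > tol:
        return "stable (a valley)"
    if d2 < -tol:
        return "unstable (a hilltop)"
    return "no telling from this test alone"

print(classify(lambda x: x**2, 0.0))     # stable
print(classify(lambda x: -x**2, 0.0))    # unstable
print(classify(lambda x: x**3, 0.0))     # the test shrugs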

    But this gets at how we’d like to look at a system. We look for its equilibriums. We figure out which equilibriums are stable and which ones are unstable. With a little more work we can say, if the system starts out like this it’ll stay near that equilibrium. If it starts out like that it’ll stay near this whole other equilibrium. If it starts out this other way, it’ll go flying off to the end of the universe. We can solve every possible problem at once and never have to bother with a particular case. This feels good.

    It also gives us a little something more. You maybe have heard of a tangent line. That’s a line that’s, er, tangent to a curve. Again with the not-too-hard terms. What this means is there’s a point, called the “point of tangency”, again named by a committee that wanted to get out early. And the line just touches the original curve at that point, and it’s going in exactly the same direction as the original curve at that point. Typically this means the line just grazes the curve, at least around there. If you’ve ever rolled a pencil until it just touched the edge of your coffee cup or soda can, you’ve set up a tangent line to the curve of your beverage container. You just didn’t think of it as that because you’re not daft. Fair enough.

    Mathematicians will use tangents because a tangent line has values that are so easy to calculate. The function describing a tangent line is a polynomial and we llllllllove polynomials, correctly. The tangent line is always easy to understand, however hard the original function was. Its value, at the equilibrium, is exactly what the original function’s was. Its first derivative, at the equilibrium, is exactly what the original function’s was at that point. Its second derivative is zero, which might or might not be true of the original function. We don’t care.

    We don’t use tangent lines when we look at equilibriums. This is because in this case they’re boring. If it’s an equilibrium then its tangent line is a horizontal line. No matter what the original function was. It’s trivial: you know the answer before you’ve heard the question.

    Ah, but, there is something mathematical physicists do like. The tangent line is boring. Fine. But how about, using the second derivative, building a tangent … well, “parabola” is the proper term. This is a curve that’s a quadratic, that looks like an open bowl. It exactly matches the original function at the equilibrium. Its derivative exactly matches the original function’s derivative at the equilibrium. Its second derivative also exactly matches the original function’s second derivative, though. Third derivative we don’t care about. It’s so not important here I can’t even finish this sentence in a

    What this second-derivative-based approximation gives us is a parabola. It will look very much like the original function if we’re close to the equilibrium. And this gives us something great. The great thing is this is the same potential energy shape of a weight on a spring, or anything else that oscillates back and forth. It’s the potential energy for “simple harmonic motion”.
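
Here's the idea as a Python sketch. The potential is a made-up gravity-like example with its equilibrium at r = 1, and the parabola is built from nothing but the value and the second derivative there:

def V(r):    # example potential; its equilibrium sits at r0 = 1
    return -1.0 / r + 0.5 / r**2

r0 = 1.0

def second_derivative(f, r, h=1e-4):
    return (f(r + h) - 2 * f(r) + f(r - h)) / h**2

k = second_derivative(V, r0)    # the parabola's "spring constant"

def V_parabola(r):
    # matches V's value, slope (zero), and curvature at r0
    return V(r0) + 0.5 * k * (r - r0)**2

for r in (0.90, 0.95, 1.00, 1.05, 1.10):
    print(r, V(r), V_parabola(r))    # nearly equal close to r0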

And that's great. We start studying simple harmonic motion, oh, somewhere in high school physics class because it's so much fun to play with slinkies and springs and accidentally dropping weights on our lab partners. We never stop. The mathematics behind it is simple. It turns up everywhere. If you understand the mathematics of a mass on a spring you have a tool that's relevant to pretty much every problem you ever have. This approximation is part of that. Close to a stable equilibrium, whatever system you're looking at has the same behavior as a weight on a spring.

    It may strike you that a mass on a spring is itself a central force. And now I’m saying that within the central force problem I started out doing, stuff that orbits, there’s another central force problem. This is true. You’ll see that in a few Why Stuff Can Orbit essays.

So far, by the way, I've talked entirely about a potential energy with a single variable. This is for a good reason: two or more variables is harder. Well of course it is. But the basic dynamics are still open. There's equilibriums. They can be stable or unstable. They might have inflection points. There is a new kind of behavior. Mathematicians call it a "saddle point". This is where in one direction the potential energy makes it look like a stable equilibrium while in another direction the potential energy makes it look unstable. Examples of it kind of look like the shape of a saddle, if you haven't looked at an actual saddle recently. (If you really want to know, get your computer to plot the function z = x^2 - y^2 and look at the origin, where x = 0 and y = 0.) Well, there's points on an actual saddle that would be saddle points to a mathematician. It's unstable, because there's that direction where it's definitely unstable.
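
You don't even need the plot to feel the saddle. A couple of lines of Python, poking at z = x^2 - y^2 near the origin, show its two personalities:

def z(x, y):
    return x**2 - y**2

# Along the x-direction the origin sits in a valley: uphill both ways.
print(z(0.1, 0.0), z(-0.1, 0.0))    # both positive
# Along the y-direction it sits on a hilltop: downhill both ways.
print(z(0.0, 0.1), z(0.0, -0.1))    # both negative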

    So everything about multivariable functions is longer, and a couple bits of it are harder. There’s more chances for weird stuff to happen. I think I can get through most of Why Stuff Can Orbit without having to know that. But do some reading up on that before you take a job as a mathematical physicist.

     
  • Joseph Nebus 6:00 pm on Friday, 7 October, 2016 Permalink | Reply

    How Differential Calculus Works 


    I’m at a point in my Why Stuff Can Orbit essays where I need to talk about derivatives. If I don’t then I’m going to be stuck with qualitative talk that’s too hard to focus on anything. But it’s a big subject. It’s about two-fifths of a Freshman Calculus course and one of the two main legs of calculus. So I want to pull out a quick essay explaining the important stuff.

Derivatives, also called differentials, are about how things change. By "things" I mean "functions". And particularly I mean functions which have a domain that's in the real numbers and a range that's in the real numbers. That's the starting point. We can define derivatives for functions with domains or ranges that are vectors of real numbers, or that are complex-valued numbers, or are even more exotic kinds of numbers. But those ideas are all built on the derivatives for real-valued functions. So if you get really good at them, everything else is easy.

    Derivatives are the part of calculus where we study how functions change. They’re blessed with some easy-to-visualize interpretations. Suppose you have a function that describes where something is at a given time. Then its derivative with respect to time is the thing’s velocity. You can build a pretty good intuitive feeling for derivatives that way. I won’t stop you from it.

    A function’s derivative is itself a function. The derivative doesn’t have to be constant, although it’s possible. Sometimes the derivative is positive. This means the original function increases as whatever its variable increases. Sometimes the derivative is negative. This means the original function decreases as its variable increases. If the derivative is a big number this means it’s increasing or decreasing fast. If the derivative is a small number it’s increasing or decreasing slowly. If the derivative is zero then the function isn’t increasing or decreasing, at least not at this particular value of the variable. This might be a local maximum, where the function’s got a larger value than it has anywhere else nearby. It might be a local minimum, where the function’s got a smaller value than it has anywhere else nearby. Or it might just be a little pause in the growth or shrinkage of the function. No telling, at least not just from knowing the derivative is zero.

    Derivatives tell you something about where the function is going. We can, and in practice often do, make approximations to a function that are built on derivatives. Many interesting functions are real pains to calculate. Derivatives let us make polynomials that are as good an approximation as we need and that a computer can do for us instead.

    Derivatives can change rapidly. That’s all right. They’re functions in their own right. Sometimes they change more rapidly and more extremely than the original function did. Sometimes they don’t. Depends what the original function was like.

    Not every function has a derivative. Functions that have derivatives don’t necessarily have them at every point. A function has to be continuous to have a derivative. A function that jumps all over the place doesn’t really have a direction, not that differential calculus will let us suss out. But being continuous isn’t enough. The function also has to be … well, we call it “differentiable”. The word at least tells us what the property is good for, even if it doesn’t say how to tell if a function is differentiable. The function has to not have any corners, points where the direction suddenly and unpredictably changes.

Otherwise-differentiable functions can have some corners, some non-differentiable points. For instance the height of a bouncing ball, over time, looks like a bunch of upside-down U-like shapes that suddenly rebound off the floor and go up again. Those points interrupting the upside-down U's aren't differentiable, even though the rest of the curve is.

    It’s possible to make functions that are nothing but corners and that aren’t differentiable anywhere. These are done mostly to win quarrels with 19th century mathematicians about the nature of the real numbers. We use them in Real Analysis to blow students’ minds, and occasionally after that to give a fanciful idea a hard test. In doing real-world physics we usually don’t have to think about them.

So why do we do them? Because they tell us how functions change. They can tell us where functions momentarily don't change, which are usually important. And they turn up in equations with differentials in them, known in the trade as "differential equations". They're also known to mathematics majors as "diffy Q's", a name which delights everyone. Diffy Q's let us describe physical systems where there's any kind of feedback. If something interacts with its surroundings, that interaction's probably described by differential equations.

    So how do we do them? We start with a function that we’ll call ‘f’ because we’re not wasting more of our lives figuring out names for functions and we’re feeling smugly confident Turkish authorities aren’t interested in us.

f's domain is the real numbers. Take one of them, for example, the one we'll call 'x'. Its range is also real numbers. f(x) is some number in that range. It's the one that the function f matches with the domain's value of x. We read f(x) aloud as "the value of f evaluated at the number x", if we're pretending or trying to scare Freshman Calculus classes. Really we say "f of x" or maybe "f at x".

    There’s a couple ways to write down the derivative of f. First, for example, we say “the derivative of f with respect to x”. By that we mean how does the value of f(x) change if there’s a small change in x. That difference matters if we have a function that depends on more than one variable, if we had an “f(x, y)”. Having several variables for f changes stuff. Mostly it makes the work longer but not harder. We don’t need that here.

    But there’s a couple ways to write this derivative. The best one if it’s a function of one variable is to put a little ‘ mark: f'(x). We pronounce that “f-prime of x”. That’s good enough if there’s only the one variable to worry about. We have other symbols. One we use a lot in doing stuff with differentials looks for all the world like a fraction. We spend a lot of time in differential calculus explaining why it isn’t a fraction, and then in integral calculus we do some shady stuff that treats it like it is. But that’s \frac{df}{dx} . This also appears as \frac{d}{dx} f . If you encounter this do not cross out the d’s from the numerator and denominator. In this context the d’s are not numbers. They’re actually operators, which is what we call a function whose domain is functions. They’re notational quirks is all. Accept them and move on.

    How do we calculate it? In Freshman Calculus we introduce it with this expression involving limits and fractions and delta-symbols (the Greek letter that looks like a triangle). We do that to scare off students who aren’t taking this stuff seriously. Also we use that formal proper definition to prove some simple rules work. Then we use those simple rules. Here’s some of them.

    1. The derivative of something that doesn’t change is 0.
2. The derivative of x^n, where n is any constant real number — positive, negative, whole, a fraction, rational, irrational, whatever — is n x^{n-1}.
    3. If f and g are both functions and have derivatives, then the derivative of the function f plus g is the derivative of f plus the derivative of g. This probably has some proper name but it’s used so much it kind of turns invisible. I dunno. I’d call it something about linearity because “linearity” is what happens when adding stuff works like adding numbers does.
    4. If f and g are both functions and have derivatives, then the derivative of the function f times g is the derivative of f times the original g plus the original f times the derivative of g. This is called the Product Rule.
    5. If f and g are both function and have derivatives we might do something called composing them. That is, for a given x we find the value of g(x), and then we put that value in to f. That we write as f(g(x)). For example, “the sine of the square of x”. Well, the derivative of that with respect to x is the derivative of f evaluated at g(x) times the derivative of g. That is, f'(g(x)) times g'(x). This is called the Chain Rule and believe it or not this turns up all the time.
    6. If f is a function and C is some constant number, then the derivative of C times f(x) is just C times the derivative of f(x), that is, C times f'(x). You don’t really need this as a separate rule. If you know the Product Rule and that first one about the derivatives of things that don’t change then this follows. But who wants to go through that many steps for a measly little result like this?
    7. Calculus textbooks say there’s this Quotient Rule for when you divide f by g but you know what? The one time you’re going to need it it’s easier to work out by the Product Rule and the Chain Rule and your life is too short for the Quotient Rule. Srsly.
8. There's some functions with special rules for the derivative. You can work them out from the Real And Proper Definition. Or you can work them out from infinitely-long polynomial approximations that somehow make sense in context. But what you really do is just remember the ones anyone actually uses. The derivative of e^x is e^x and we love it for that. That's why e is that 2.71828(etc) number and not something else. The derivative of the natural log of x is 1/x. That's what makes it natural. The derivative of the sine of x is the cosine of x. That's if x is measured in radians, which it always is in calculus, because it makes that work. The derivative of the cosine of x is minus the sine of x. Again, radians. The derivative of the tangent of x is um the square of the secant of x? I dunno, look that sucker up. The derivative of the arctangent of x is \frac{1}{1 + x^2} and you know what? The calculus book lists this in a table in the inside covers. Just use that. Arctangent. Sheesh. We just made up the "secant" and the "cosecant" to haze Freshman Calculus students anyway. We don't get a lot of fun. Let us have this. Or we'll make you hear of the inverse hyperbolic cosecant. (There's a machine check of a few of these rules right after this list.)
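
Here's that machine check, a sketch using the Python library sympy. The library and its diff function are real; the particular functions are just example picks:

import sympy as sp

x, n = sp.symbols('x n')
f = sp.Function('f')
g = sp.Function('g')

print(sp.diff(x**n, x))           # rule 2, the power rule
print(sp.diff(f(x) * g(x), x))    # rule 4, the Product Rule
print(sp.diff(sp.sin(x**2), x))   # rule 5, the Chain Rule: 2*x*cos(x**2)
print(sp.diff(sp.exp(x), x))      # rule 8: exp(x) comes back unchanged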

So. What's this all mean for central force problems? Well, here's what the effective potential energy Veff(r) usually looks like:

    V_{eff}(r) = C r^n + \frac{L^2}{2 m r^2}

    So, first thing. The variable here is ‘r’. The variable name doesn’t matter. It’s just whatever name is convenient and reminds us somehow why we’re talking about this number. We adapt our rules about derivatives with respect to ‘x’ to this new name by striking out ‘x’ wherever it appeared and writing in ‘r’ instead.

Second thing. C is a constant. So is n. So is L and so is m. 2 is also a constant, which you never think about until a mathematician brings it up. That second term is a little complicated in form. But what it means is "a constant, which happens to be the number \frac{L^2}{2m} , multiplied by r^{-2}".

    So the derivative of Veff, with respect to r, is the derivative of the first term plus the derivative of the second. The derivatives of each of those terms is going to be some constant time r to another number. And that’s going to be:

    V'_{eff}(r) = C n r^{n - 1} - 2 \frac{L^2}{2 m r^3}

And there you can cancel the 2 in the numerator against the 2 in the denominator. So we could make this look a little simpler yet, but don't worry about that.
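
And if you'd rather not trust even that much symbol-shuffling, the same sympy library will differentiate Veff whole. A sketch; declaring the symbols positive just keeps the library from worrying:

import sympy as sp

r, C, n, L, m = sp.symbols('r C n L m', positive=True)

V_eff = C * r**n + L**2 / (2 * m * r**2)
print(sp.diff(V_eff, r))    # C n r^(n-1) minus L^2/(m r^3), the simplified form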

    OK. So that’s what you need to know about differential calculus to understand orbital mechanics. Not everything. Just what we need for the next bit.

     
    • davekingsbury 1:06 pm on Saturday, 8 October, 2016 Permalink | Reply

      Good article. Just finished Morton Cohen’s biography of Lewis Carroll, who was a great populariser of mathematics, logic, etc. Started a shared poem in tribute to him, here is a cheeky plug, hope you don’t mind!

      https://davekingsbury.wordpress.com/2016/10/08/web-of-life/


      • Joseph Nebus 12:22 am on Tuesday, 11 October, 2016 Permalink | Reply

        Thanks for sharing and I’m quite happy to have your plug here. I know about Carroll’s mathematics-popularization side; his logic puzzles are particularly choice ones even today. (Granting that deductive logic really lends itself to being funny.)

        Oddly I haven’t read a proper biography of Carroll, or most of the other mathematicians I’m interested in. Which is strange since I’m so very interested in the history and the cultural development of mathematics.


  • Joseph Nebus 6:00 pm on Wednesday, 28 September, 2016 Permalink | Reply

    Why Stuff Can Orbit, Part 5: Why Physics Doesn’t Work And What To Do About It 


    Less way previously:


    My title’s hyperbole, to the extent it isn’t clickbait. Of course physics works. By “work” I mean “model the physical world in useful ways”. If it didn’t work then we would call it “pure” mathematics instead. Mathematicians would study it for its beauty. Physicists would be left to fend for themselves. “Useful” I’ll say means “gives us something interesting to know”. “Interesting” I’ll say if you want to ask what that means then I think you’re stalling.

    But what I mean is that Newtonian physics, the physics learned in high school, doesn’t work. Well, it works, in that if you set up a problem right and calculate right you get answers that are right. It’s just not efficient, for a lot of interesting problems. Don’t ask me about interesting again. I’ll just say the central-force problems from this series are interesting.

    Newtonian, high school type, physics works fine. It shines when you have only a few things to keep track of. In this central force problem we have one object, a planet-or-something, that moves. And only one force, one that attracts the planet to or repels the planet from the center, the Origin. This is where we’d put the sun, in a planet-and-sun system. So that seems all right as far as things go.

    It’s less good, though, if there’s constraints. If it’s not possible for the particle to move in any old direction, say. That doesn’t turn up here; we can imagine a planet heading in any direction relative to the sun. But it’s also less good if there’s a symmetry in what we’re studying. And in this case there is. The strength of the central force only changes based on how far the planet is from the origin. The direction only changes based on what direction the planet is relative to the origin. It’s a bit daft to bother with x’s and y’s and maybe even z’s when all we care about is the distance from the origin. That’s a number we’ve called ‘r’.

    So this brings us to Lagrangian mechanics. This was developed in the 18th century by Joseph-Louis Lagrange. He’s another of those 18th century mathematicians-and-physicists with his name all over everything. Lagrangian mechanics are really, really good when there’s a couple variables that describe both what we’d like to observe about the system and its energy. That’s exactly what we have with central forces. Give me a central force, one that’s pointing directly toward or away from the origin, and that grows or shrinks as the radius changes. I can give you a potential energy function, V(r), that matches that force. Give me an angular momentum L for the planet to have, and I can give you an effective potential energy function, Veff(r). And that effective potential energy lets us describe how the coordinates change in time.

    The method looks roundabout. It depends on two things. One is the coordinate you’re interested in, in this case, r. The other is how fast that coordinate changes in time. This we have a couple of ways of denoting. When working stuff out on paper that’s often done by putting a little dot above the letter. If you’re typing, dots-above-the-symbol are hard. So we mark it as a prime instead: r’. This works well until the web browser or the word processor assumes we want smart quotes and we already had the r’ in quote marks. At that point all hope of meaning is lost and we return to communicating by beating rocks with sticks. We live in an imperfect world.

    What we get out of this is a setup that tells us how fast r’, how fast the coordinate we’re interested in changes in time, itself changes in time. If the coordinate we’re interested in is the ordinary old position of something, then this describes the rate of change of the velocity. In ordinary English we call that the acceleration. What makes this worthwhile is that the coordinate doesn’t have to be the position. It also doesn’t have to be all the information we need to describe the position. For the central force problem r here is just how far the planet is from the center. That tells us something about its position, but not everything. We don’t care about anything except how far the planet is from the center, not yet. So it’s fine we have a setup that doesn’t tell us about the stuff we don’t care about.

    How fast r’ changes in time will be proportional to how fast the effective potential energy, Veff(r), changes with its coordinate. I so want to write “changes with position”, since these coordinates are usually the position. But they can be proxies for the position, or things only loosely related to the position. For an example that isn’t a central force, think about a spinning top. It spins, it wobbles, it might even dance across the table because don’t they all do that? The coordinates that most sensibly describe how it moves are about its rotation, though. What axes is it rotating around? How do those change in time? Those don’t have anything particular to do with where the top is. That’s all right. The mathematics works just fine.

    A circular orbit is one where the radius doesn’t change in time. (I’ll look at non-circular orbits later on.) That is, the radius is not increasing and is not decreasing. If it isn’t getting bigger and it isn’t getting smaller, then it’s got to be staying the same. Not all higher mathematics is tricky. The radius of the orbit is the thing I’ve been calling r all this time. So this means that r’, how fast r is changing with time, has to be zero. Now a slightly tricky part.

    How fast is r’, the rate at which r changes, changing? Well, r’ never changes. It’s always the same value. Anytime something is always the same value the rate of its change is zero. This sounds tricky. The tricky part is that it isn’t tricky. It’s coincidental that r’ is zero and the rate of change of r’ is zero, though. If r’ were any fixed, never-changing number, then the rate of change of r’ would be zero. It happens that we’re interested in times when r’ is zero.

    So we’ll find circular orbits where the change in the effective potential energy, as r changes, is zero. There’s an easy-to-understand intuitive idea of where to find these points. Look at a plot of Veff and imagine this is a smooth track or the cross-section of a bowl or the landscaping of a hill. Imagine dropping a ball or a marble or a bearing or something small enough to roll in it. Where does it roll to a stop? That’s where the change is zero.
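
The marble metaphor is a perfectly good algorithm, too. Here's a Python sketch that hands an effective potential to scipy's bounded minimizer and lets it roll downhill. The constants are arbitrary example values picked so the resting point should be 1:

from scipy.optimize import minimize_scalar

GMm, L, m = 1.0, 1.0, 1.0    # arbitrary example values

def V_eff(r):
    return -GMm / r + L**2 / (2 * m * r**2)

result = minimize_scalar(V_eff, bounds=(0.1, 10.0), method='bounded')
print(result.x)    # about 1.0: where the marble stops rolling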

    It’s too much bother to make a bowl or landscape a hill or whatnot for every problem we’re interested in. We might do it anyway. Mathematicians used to, to study problems that were too complicated to do by useful estimates. These were “analog computers”. They were big in the days before digital computers made it no big deal to simulate even complicated systems. We still need “analog computers” or models sometimes. That’s usually for problems that involve chaotic stuff like turbulent fluids. We call this stuff “wind tunnels” and the like. It’s all a matter of solving equations by building stuff.

    We’re not working with problems that complicated. There isn’t the sort of chaos lurking in this problem that drives us to real-world stuff. We can find these equilibriums by working just with symbols instead.

     
  • Joseph Nebus 6:00 pm on Saturday, 24 September, 2016 Permalink | Reply
Tags: handedness

    Some Thermomathematics Reading 


    I have been writing, albeit more slowly, this month. I’m also reading, also more slowly than usual. Here’s some things that caught my attention.

    One is from Elke Stangl, of the Elkemental blog. “Re-Visiting Carnot’s Theorem” is about one of the centerpieces of thermodynamics. It’s about how much work you can possibly get out of an engine, and how much must be lost no matter how good your engineering is. Thermodynamics is the secret spine of modern physics. It was born of supremely practical problems, many of them related to railroads or factories. And it teaches how much solid information can be drawn about a system if we know nothing about the components of the system. Stangl also brings ASCII art back from its Usenet and Twitter homes. There’s just stuff that is best done as a text picture.

    Meanwhile on the CarnotCycle blog Peter Mandel writes on “Le Châtelier’s principle”. This is related to the question of how temperatures affect chemical reactions: how fast they will be, how completely they’ll use the reagents. How a system that’s reached equilibrium will react to something that unsettles the equilibrium. We call that a perturbation. Mandel reviews the history of the principle, which hasn’t always been well-regarded, and explores why it might have gone under-appreciated for decades.

And lastly MathsByAGirl has published a couple of essays on spirals. Who doesn't like them? Three-dimensional spirals, that is, helixes, have some obvious things to talk about. A big one is that there's such a thing as handedness. The mirror image of a coil is not the same thing as the coil flipped around. This handedness has analogues and implications through chemistry and biology. Two-dimensional spirals, by contrast, don't have handedness like that. But we've grouped spirals into many different types, each with their own beauty. They're worth looking at.

     
  • Joseph Nebus 6:00 pm on Thursday, 8 September, 2016 Permalink | Reply

    Why Stuff Can Orbit, Part 4: On The L 


    Less way previously:


    We were chatting about central forces. In these a small object — a satellite, a planet, a weight on a spring — is attracted to the center of the universe, called the origin. We’ve been studying this by looking at potential energy, a function that in this case depends only on how far the object is from the origin. But to find circular orbits, we can’t just look at the potential energy. We have to modify this potential energy to account for angular momentum. This essay I mean to discuss that angular momentum some.

    Let me talk first about the potential energy. Mathematical physicists usually write this as a function named U or V. I’m using V. That’s what my professor used teaching this, back when I was an undergraduate several hundred thousand years ago. A central force, by definition, changes only with how far you are from the center. I’ve put the center at the origin, because I am not a madman. This lets me write the potential energy as V = V(r).

    V(r) could, in principle, be anything. In practice, though, I am going to want it to be r raised to a power. That is, V(r) is equal to C rn. The ‘C’ here is a constant. It’s a scaling constant. The bigger a number it is the stronger the central force. The closer the number is to zero the weaker the force is. In standard units, gravity has a constant incredibly close to zero. This makes orbits very big things, which generally works out well for planets. In the mathematics of masses on springs, the constant is closer to middling little numbers like 1.

    The ‘n’ here is a deceiver. It’s a constant number, yes, and it can be anything we want. But the use of ‘n’ as a symbol has connotations. Usually when a mathematician or a physicist writes ‘n’ it’s because she needs a whole number. Usually a positive whole number. Sometimes it’s negative. But we have a legitimate central force if ‘n’ is any real number: 2, -1, one-half, the square root of π, any of that is good. If you just write ‘n’ without explanation, the reader will probably think “integers”, possibly “counting numbers”. So it’s worth making explicit when this isn’t so. It’s bad form to surprise the reader with what kind of number you’re even talking about.

    (Some number of essays on we’ll find out that the only values ‘n’ can have that are worth anything are -1, 2, and 7. And 7 isn’t all that good. But we aren’t supposed to know that yet.)

    C rn isn’t the only kind of central force that could exist. Any function rule would do. But it’s enough. If we wanted a more complicated rule we could just add two, or three, or more potential energies together. This would give us V(r) = C_1 r^{n_1} + C_2 r^{n_2} , with C1 and C2 two possibly different numbers, and n1 and n2 two definitely different numbers. (If n1 and n2 were the same number then we should just add C1 and C2 together and stop using a more complicated expression than we need.) Remember that Newton’s Law of Motion about the sum of multiple forces being something vector something something direction? When we look at forces as potential energy functions, that law turns into just adding potential energies together. They’re well-behaved that way.

    And if we can add these r-to-a-power potential energies together then we’ve got everything we need. Why? Polynomials. We can approximate most any potential energy that would actually happen with a big enough polynomial. Or at least a polynomial-like function. These r-to-a-power forces are a basis set for all the potential energies we’re likely to care about. Understand how to work with one and you understand how to work with them all.

Well, one exception. The logarithmic potential, V(r) = C log(r), is really interesting. And it has real-world applicability. It describes how strongly two vortices, two whirlpools, attract each other. You can approximate the logarithm as closely as you like with polynomials. But logarithms are pretty well-behaved functions. You might be better off just doing that as a special case.

    Still, at least to start with, we’ll stick with V(r) = C rn and you know what I mean by all those letters now. So I’m free to talk about angular momentum.

    You’ve probably heard of momentum. It’s got something to do with movement, only sports teams and political campaigns are always gaining or losing it somehow. When we talk of that we’re talking of linear momentum. It describes how much mass is moving how fast in what direction. So it’s a vector, in three-dimensional space. Or two-dimensional space if you’re making the calculations easier. To find what the vector is, we make a list of every object that’s moving. We take its velocity — how fast it’s moving and in what direction — and multiply that by its mass. Mass is a single number, a scalar, and we’re always allowed to multiply a vector by a scalar. This gets us another vector. Once we’ve done that for everything that’s moving, we add all those product vectors together. We can always add vectors together. And this gives us a grand total vector, the linear momentum of the system.

And that's conserved. If one part of the system starts moving slower it's because other parts are moving faster, and vice-versa. In the real world momentum seems to evaporate. That's because some of the stuff moving faster turns out to be air the objects bumped into, or particles of the floor that get dragged along by friction, or other stuff we don't care about. That momentum can seem to evaporate is what makes its use in talking about sports teams or political campaigns make sense. It also annoys people who want you to know they understand science words better than you. So please consider this my authorization to use "gaining" and "losing" momentum in this sense. Ignore complainers. They're the people who complain the word "decimate" gets used to mean "destroy way more than ten percent of something", even though that's the least bad mutation of an English word's meaning in three centuries.

    Angular momentum is also a vector. It’s also conserved. We can calculate what that vector is by the same sort of process, that of calculating something on each object that’s spinning and adding it all up. In real applications it can seem to evaporate. But that’s also because the angular momentum is going into particles of air. Or it rubs off grease on the axle. Or it does other stuff we wish we didn’t have to deal with.

    The calculation is a little harder to deal with. There’s three parts to a spinning thing. There’s the thing, and there’s how far it is from the axis it’s spinning around, and there’s how fast it’s spinning. So you need to know how fast it’s travelling in the direction perpendicular to the shortest line between the thing and the axis it’s spinning around. Its angular momentum is going to be as big as the mass times the distance from the axis times the perpendicular speed. It’s going to be pointing in whichever axis direction makes its movement counterclockwise. (Because that’s how physicists started working this out and it would be too much bother to change now.)
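
In symbols that's the cross product \vec{L} = \vec{r} \times m\vec{v} , and a tiny numpy sketch shows all three parts at once. The numbers are arbitrary; the motion is counterclockwise in the x-y plane:

import numpy as np

m = 2.0
r = np.array([3.0, 0.0, 0.0])    # position relative to the axis
v = np.array([0.0, 4.0, 0.0])    # velocity, perpendicular to r here

L = np.cross(r, m * v)
print(L)    # [ 0.  0. 24.]: size m*r*v, pointing out of the plane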

    You might ask: wait, what about stuff like a wheel that’s spinning around its center? Or a ball being spun? That can’t be an angular momentum of zero? How do we work that out? The answer is: calculus. Also, we don’t need that. This central force problem I’ve framed so that we barely even need algebra for it.

    See, we only have a single object that’s moving. That’s the planet or satellite or weight or whatever it is. It’s got some mass, the value of which we call ‘m’ because why make it any harder on ourselves. And it’s spinning around the origin. We’ve been using ‘r’ to mean the number describing how far it is from the origin. That’s the distance to the axis it’s spinning around. Its velocity — well, we don’t have any symbols to describe what that is yet. But you can imagine working that out. Or you trust that I have some clever mathematical-physics tool ready to introduce to work it out. I have, kind of. I’m going to ignore it altogether. For now.

    The symbol we use for the total angular momentum in a system is \vec{L} . The little arrow above the symbol is one way to denote “this is a vector”. It’s a good scheme, what with arrows making people think of vectors and it being easy to write on a whiteboard. In books, sometimes, we make do just by putting the letter in boldface, L, which is easier for old-fashioned word processors to do. If we’re sure that the reader isn’t going to forget that L is this vector then we might stop highlighting the fact altogether. That’s even less work to do.

    It’s going to be less work yet. Central force problems like this mean the object can move only in a two-dimensional plane. (If it didn’t, it wouldn’t conserve angular momentum: the direction of \vec{L} would have to change. Sounds like magic, but trust me.) The angular momentum’s direction has to be perpendicular to that plane. If the object is spinning around on a sheet of paper, the angular momentum is pointing straight outward from the sheet of paper. It’s pointing toward you if the object is moving counterclockwise. It’s pointing away from you if the object is moving clockwise. What direction it’s pointing is locked in.

    All we need to know is how big this angular momentum vector is, and whether it points toward us or away from us. We can encode that direction as a sign, positive or negative. So we just care about this one number. We can call it ‘L’, no arrow, no boldface, no nothing. It’s just a number, the same as the mass ‘m’ or the distance from the origin ‘r’ or any of our other variables.

    If ‘L’ is zero, this means there’s no total angular momentum. This means the object can be moving directly out from the origin, or directly in. This is the only way that something can crash into the center. So if setting L to be zero doesn’t allow that then we know we did something wrong, somewhere. If ‘L’ isn’t zero, then the object can’t crash into the center. If it did we’d be losing angular momentum. The object’s mass times its distance from the center times its perpendicular speed would have to be some non-zero number, even when the distance was zero. We know better than to look for that.
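    If you want one line of algebra behind that reasoning, conservation fixes the product of the mass, the distance, and the perpendicular speed. So the perpendicular speed has to be

    v_{perp} = \frac{L}{m r}

    As ‘r’ shrinks toward zero, the perpendicular speed would have to grow without bound. Nothing can do that. Only ‘L’ equal to zero escapes the problem.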

    You maybe wonder why we use ‘L’ of all letters for the angular momentum. I do. I don’t know. I haven’t found any sources that say why this letter. Linear momentum, which we represent with \vec{p} , I know. Or, well, I know the story every physicist says about it. p is the designated letter for linear momentum because we used to use the word “impetus”, as in “impulse”, to mean what we mean by momentum these days. And “p” is the first letter in “impetus” that isn’t needed for some more urgent purpose. (“m” is too good a fit for mass. “i” has to work both as an index and as that number which, squared, gives us -1. And for that matter, “e” we need for that exponentials stuff, and “t” is too good a fit for time.) That said, while everybody, everybody, repeats this, I don’t know the source. Perhaps it is true. I can imagine, say, Euler or Lagrange in their writing settling on “p” for momentum and everybody copying them. I just haven’t seen a primary citation showing this is so.

    (I don’t mean to sound too unnecessarily suspicious. But just because everyone agrees on the impetus-thus-p story doesn’t mean it’s so. I mean, every Star Trek fan or space historian will tell you that the first space shuttle would have been named Constitution until the Trekkies wrote in and got it renamed Enterprise. But the actual primary documentation that the shuttle would have been named Constitution is weak to nonexistent. I’ve come to the conclusion NASA had no plan in mind to name space shuttles until the Trekkies wrote in and got one named. I’ve done less poking around the impetus-thus-p story, in that I’ve really done none, but I do want it on record that I would like more proof.)

    Anyway, “p” for momentum is well-established. So I would guess that when mathematical physicists needed a symbol for angular momentum they looked for letters close to “p”. When you get into more advanced corners of physics “q” gets called on to be position a lot. (Momentum and position, it turns out, are nearly-identical-twins mathematically. So making their symbols p and q offers aesthetic charm. Also great danger if you make one little slip with the pen.) “r” is called on for “radius” a lot. Looking on, “t” is going to be time.

    On the other side of the alphabet, well, “o” is just inviting danger. “n” we need to count stuff. “m” is mass or we’re crazy. “l” might have just been the nearest we could get to “p” without intruding on a more urgently-needed symbol. (“s” we use a lot for parameters like length of an arc that work kind of like time but aren’t time.) And then shift to the capital letter, I expect, because a lowercase l looks like a “1”, to everybody’s certain doom.

    The modified potential energy, then, is going to include the angular momentum L. At least, the amount of angular momentum. It’s also going to include the mass of the object moving, and the radius r that says how far the object is from the center. It will be:

    V_{eff}(r) = V(r) + \frac{L^2}{2 m r^2}

    V(r) was the original potential, whatever that was. The modifying term, with this square of the angular momentum and all that, I kind of hope you’ll just accept on my word. The L² means that whether the angular momentum is positive or negative, the potential will grow very large as the radius gets small. If it didn’t, there might not be orbits at all. And if the angular momentum is zero, then the effective potential is the same original potential that let stuff crash into the center.
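    As a minimal sketch in code, assuming you have the original potential as a function of r, the modification is one added term:

    # The effective potential: the original potential plus the
    # angular-momentum term. V is assumed given as a function of r.
    def V_eff(r, V, L, m):
        return V(r) + L**2 / (2.0 * m * r**2)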

    For the sort of r-to-a-power potentials I’ve been looking at, I get an effective potential of:

    V_{eff}(r) = C r^n + \frac{L^2}{2 m r^2}

    where n might be an integer. I’m going to pretend a while longer that it might not be, though. C is certainly some number, maybe positive, maybe negative.

    If you pick some values for C, n, L, and m you can sketch this out. If you just want a feel for how this Veff looks it doesn’t much matter what values you pick. Changing values just changes the scale, that is, where a circular orbit might happen. It doesn’t change whether it happens. Picking some arbitrary numbers is a good way to get a feel for how this sort of problem works. It’s good practice.
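    If you’d rather let a computer do the sketching, here’s one way in Python, with numpy and matplotlib assumed available and the values picked arbitrarily, as I suggested:

    import numpy as np
    import matplotlib.pyplot as plt

    C, n, L, m = 1.0, 2.0, 1.0, 1.0   # arbitrary values, just for the feel

    r = np.linspace(0.1, 3.0, 300)    # stay clear of r = 0, where the term blows up
    V_eff = C * r**n + L**2 / (2.0 * m * r**2)

    plt.plot(r, V_eff)
    plt.xlabel("r")
    plt.ylabel("effective potential")
    plt.show()                        # the dip is where a circular orbit can sit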

    Sketching will convince you there are energy minimums, where we can get circular orbits. It won’t say where to find them without some trial-and-error or building a model of this energy and seeing where a ball bearing dropped into it rolls to a stop. We can do this more efficiently.
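    One efficient route, sketched here with scipy assumed available: hand the effective potential to a numerical minimizer and let it find the bottom of the dip.

    from scipy.optimize import minimize_scalar

    C, n, L, m = 1.0, 2.0, 1.0, 1.0   # the same arbitrary values as before

    result = minimize_scalar(lambda r: C * r**n + L**2 / (2.0 * m * r**2),
                             bounds=(0.01, 10.0), method="bounded")
    print(result.x)   # about 0.841 for these values: the circular orbit's radius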

     
  • Joseph Nebus 6:00 pm on Sunday, 28 August, 2016 Permalink | Reply
    Tags: , ,   

    Reading the Comics, August 27, 2016: Calm Before The Term Edition 


    Here in the United States schools are just lurching back into the mode where they have students come in and do stuff all day. Perhaps this is why it was a routine week. Comic Strip Master Command wants to save up a bunch of story problems for us. But here’s what the last seven days sent into my attention.

    Jeff Harris’s Shortcuts educational feature for the 21st is about algebra. It’s got a fair enough blend of historical trivia and definitions and examples and jokes. I don’t remember running across the “number cruncher” joke before.

    Mark Anderson’s Andertoons for the 23rd is your typical student-in-lecture joke. But I do sympathize with students not understanding when a symbol gets used for different meanings. It throws everyone. But sometimes the things important to note clearly in one section are different from the needs in another section. No amount of warning will clear things up for everybody, but we try anyway.

    Tom Thaves’s Frank and Ernest for the 23rd tells a joke about collapsing wave functions, which is why you never see this comic in a newspaper but always see it on a physics teacher’s door. This is properly physics, specifically quantum mechanics. But it has mathematical import. The most practical model of quantum mechanics describes what state a system is in by something called a wave function. And we can turn this wave function into a probability distribution, which describes how likely the system is to be in each of its possible states. “Collapsing” the wave function is a somewhat mysterious and controversial practice. It comes about because if we know nothing about a system then it may have one of many possible values. If we observe, say, the position of something though, then we have one possible value. The wave functions before and after the observation are different. We call it collapsing, reflecting how a universe of possibilities collapsed into a mere fact. But it’s hard to find an explanation for what that is that’s philosophically and physically satisfying. This problem leads us to Schrödinger’s Cat, and to other challenges to our sense of how the world could make sense. So, if you want to make your mark here’s a good problem for you. It’s not going to be easy.
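    If a toy calculation helps fix the idea, here is a sketch for a two-state system; the amplitudes are invented, and real systems have far more states than this.

    import numpy as np

    # A wave function for a toy two-state system, as complex amplitudes.
    psi = np.array([3 + 4j, 1 - 2j])
    psi = psi / np.linalg.norm(psi)   # normalize so the probabilities sum to 1

    print(np.abs(psi)**2)             # the probability distribution over states

    # "Collapse": observe the system in state 0, and the wave function
    # becomes just that one state, whatever it was before.
    psi = np.array([1.0, 0.0])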

    John Allison’s Bad Machinery for the 24th tosses off a panel full of mathematics symbols as proof of hard thinking. In other routine references, John Deering’s Strange Brew for the 26th is just some talk about how hard fractions are.

    While it’s outside the proper bounds of mathematics talk, Tom Toles’s Randolph Itch, 2 am for the 23rd is a delight. My favorite strip of this bunch. Should go on the syllabus.

     