Tagged: energy

  • Joseph Nebus 3:00 pm on Friday, 27 May, 2016 Permalink | Reply
    Tags: energy

    Why Stuff Can Orbit, Part 2: Why Stuff Can’t Orbit 



    As I threatened last week, I want to talk some about central forces. They’re forces in which one particle attracts another with a force that depends only on how far apart the two are. Last week’s essay described some of the assumptions behind the model.

    Mostly, we can study two particles interacting as if it were one particle hovering around the origin. The origin is some central reference point. If we’re looking for a circular orbit then we only have to worry about one variable. This would be ‘r’, the radius of the orbit: how far the planet is from the sun it orbits.

    Now, central forces can follow any rule you like. Not in reality, of course. In reality there are only two central forces you ever see. One is gravity (electric attraction or repulsion follows the same rule) and the other is springs. But we can imagine there being others. At the end of this string of essays I hope to show what’s special about these gravity-type and spring-type forces. And by imagining others we can learn something about why these are the only ones we actually see.

    So now I’m going to stop talking about forces. I’ll talk about potential energy instead. There’s several reasons for this, but they all come back to this one: energy is easier to deal with. Energy is a scalar, a single number. A force is a vector, which for this kind of physics-based problem is an array of numbers. We have less to deal with if we stick to energy. If we need forces later on we can get them from the energy. We’ll need calculus to do that, but it won’t be the hard parts of calculus.
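    Getting the force back out of the energy is the one bit of calculus mentioned above, and it is worth a tiny illustration. This sketch is mine, not the essay’s: it approximates F(r) = -dV/dr with a centered finite difference and checks the result against a spring potential, where the answer is known.

```python
def force_from_potential(V, r, h=1e-6):
    """Central force from potential energy: F(r) = -dV/dr,
    approximated here by a centered finite difference."""
    return -(V(r + h) - V(r - h)) / (2 * h)

# Check against a case where the answer is known: a spring,
# V(r) = (1/2) k r^2, whose force is F(r) = -k r.
k = 3.0
V_spring = lambda r: 0.5 * k * r ** 2
print(force_from_potential(V_spring, 2.0))  # close to -k * 2.0 = -6.0
```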

    The potential energy will be some function. As a central force it’ll depend only on the distance, r, that a particle is from the origin. It’s convenient to have a name for this. So I will use a common name: V(r). V is a common symbol to use for potential energy. U is another. The (r) emphasizes that this is some function which depends on r. V(r) doesn’t commit us to any particular function, not at this point.

    You might ask: why is the potential energy represented with V, or with U? And I don’t really know. Sometimes we’ll use PE to mean potential energy, which is as clear a shorthand name as we could hope for. But a name that’s two letters like that tends to be viewed with suspicion when we have to do calculus work on it. The label looks like the product of P and E, and derivatives of products get tricky. So it’s a less popular label if you know you’re going to take the derivative of the potential energy anytime soon. E_P can also get used, and the subscript means it doesn’t look like the product of any two things. Still, at least in my experience, U and V are most often used.

    As I say, I don’t know just why it should be them. It might just be that the letters were available when someone wrote a really good physics textbook. If we want to assume there must be some reason behind this letter choice I have seen a plausible guess. Potential energy is used to produce work. Work is W. So potential energy should be a letter close to W. That suggests U and V, both letters that are part of the letter W. (Listen to the name of ‘W’, and remember that until fairly late in the game U and V weren’t clearly distinguished as letters.) But I do not know of manuscript evidence suggesting that’s what anyone ever thought. It is at best a maybe-useful mnemonic.

    Here’s an advantage that using potential energy will give us: we can postpone using calculus a little. Not for quantitative results. Not for ones that describe exactly where something should orbit. But it’s good for qualitative results. We can answer questions like “is there a circular orbit” and “are there maybe several plausible orbits” just by looking at a picture.

    That picture is a plot of the values of V(r) against r. And that can be anything. I mean it. Take your preferred drawing medium and draw any wiggly curve you like. It can’t loop back or cross itself or something like that, but it can be as smooth or as squiggly as you like. That’s your central-force potential energy V(r).

    Are there any circular orbits for this potential? Calculus gives us the answer, but we don’t need that. For a potential like our V(r), which depends on just one variable, we can just look. (We could also do this for a potential that depends on two variables.) Take your V(r). Imagine it’s the sides of a perfectly smooth bowl or track or something. Now imagine dropping a marble or a ball bearing or something nice and smooth on it. Does the marble come to a rest anywhere? That’s your equilibrium. That’s where a circular orbit can happen.

    A generic wiggly shape with a bunch of peaks and troughs.

    Figure 1. A generic yet complicated V(r). Spoiler: I didn’t draw this myself because I figured using Octave was easier than using ArtRage on my iPad.
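    The marble test can even be automated. Here is a sketch of my own, with an arbitrary made-up squiggle standing in for the hand-drawn curve: scan along r, and wherever the slope of V(r) turns from downhill to uphill there is a trough where the marble could rest, which is to say a candidate radius for a circular orbit.

```python
import math

def V(r):
    # An arbitrary wiggly potential standing in for the hand-drawn V(r).
    return math.sin(3 * r) + 0.5 * r

def equilibria(V, r_min=0.1, r_max=10.0, steps=100_000):
    """Find radii where the slope of V turns from negative to positive:
    the troughs where the marble-test marble comes to rest."""
    def slope(r):
        return (V(r + 1e-6) - V(r - 1e-6)) / 2e-6
    h = (r_max - r_min) / steps
    troughs = []
    prev = slope(r_min)
    for i in range(1, steps + 1):
        r = r_min + i * h
        cur = slope(r)
        if prev < 0 <= cur:   # downhill turning uphill: a resting point
            troughs.append(round(r, 3))
        prev = cur
    return troughs

print(equilibria(V))   # several troughs, one candidate orbit radius each
```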

    We’re using some real-world intuition to skip doing analysis. That’s all right in this case. Newtonian mechanics says that a particle’s momentum changes in the direction of the force it feels. If a particle doesn’t change its mass, then that means it accelerates where the force, uh, forces it. And this sort of imaginary bowl or track matches up the potential energy we want to study with a constrained gravitational potential energy.

    My generic V(r) was a ridiculous function. This sort of thing doesn’t happen in the real world. But it might have. Wiggly functions like that were explored in the 19th century by physicists trying to explain chemistry. They hoped complicated potentials would explain why gases expanded when they warmed and contracted when they cooled. The project failed. Atoms follow quantum-mechanics laws that only vaguely match Newtonian mechanics like this. But just because functions like these don’t happen doesn’t mean we can’t learn something from them.

    We can’t study every possible V(r). Not at once. Not without more advanced mathematics than I want to use right now. What I’d like to do instead is look at one family of V(r) functions. There will be infinitely many different functions here, but they’ll all resemble each other in important ways. If you’ll allow me to introduce two new numbers we can describe them all with a single equation. The new numbers I’ll name C and n. They’re both constants, at least for this problem. They’re some numbers and maybe at some point I’ll care which ones they are, but it doesn’t matter. If you want to pretend that C is another way to write “eight”, go ahead. n … well, you can pretend that’s just another way to write some promising number like “two” for now. I’ll say when I want to be more specific about it.

    The potential energy I want to look at has a form we call a power law, because it’s all about raising a variable to a power. And we only have the one variable, r. So the potential energy looks like this:

    V(r) = C r^n

    There are some values of n that will turn out to be meaningful. If n is equal to 2, then this is the potential energy for two particles connected by a spring. You might complain there are very few things in the world connected to other things by springs. True enough, but a lot of things act as if they were springs. This includes almost anything that’s near, but pushed a little away from, a stable equilibrium. It’s a potential worth studying.

    If n is equal to -1, then this is the potential energy for two particles attracting each other by gravity or by electric charges. And here there’s an important little point. If the force is attractive, like gravity or like two particles having opposite electric charges, then we need C to be a negative number. If the force is repulsive, like two particles having the same electric charge, then we need C to be a positive number.

    Although n equalling two, and n equalling negative one, are special cases they aren’t the only ones we can imagine. n may be any number, positive or negative. It could be zero, too, but in that case the potential is a flat line and there’s nothing happening there. That’s known as a “free particle”. It’s just something that moves around with no impetus to speed up or slow down or change direction or anything.

    So let me sketch the potentials for positive n, first for a positive C and second for a negative C. Don’t worry about the numbers on either the x- or the y-axes here; they don’t matter. The shape is all we care about right now.

    The curve starts at zero and rises ever upwards as the radius r increases.

    Figure 2. V(r) = C r^n for a positive C and a positive n.


    The curve starts at zero and drops ever downwards as the radius r increases.

    Figure 3. V(r) = C r^n for a negative C and a positive n.

    Now let me sketch the potentials for a negative n, first for a positive C and second for a negative C.

    The curve starts way high up and keeps dropping, but levelling out, as the radius r increases.

    Figure 4. V(r) = C r^n for a positive C and a negative n.


    The curve starts way down low and rises, but levelling out, as the radius r increases.

    Figure 5. V(r) = C r^n for a negative C and a negative n.

    And now we can look for equilibriums, for circular orbits. If we have a positive n and a positive C, then — well, do the marble-in-a-bowl test. Start from anywhere; the marble rolls down to the origin where it smashes and stops. The only circular orbit is at a radius r of zero.

    With a positive n and a negative C, start from anywhere except a radius r of exactly zero and the marble rolls off to the right, without ever stopping. The only circular orbit is at a radius r of zero.

    With a negative n and a positive C, the marble slides down a hill that gets more shallow but that never levels out. It rolls off getting ever farther from the origin. There are no circular orbits.

    With a negative n and a negative C, start from anywhere and the marble rolls off to the left. The marble will plummet down that ever-steeper hill. The only circular orbit is at a radius r of zero.

    So for all these cases, with a potential V(r) = C r^n, the only possible “orbits” have both particles zero distance apart. Otherwise the orbiting particle smashes right down into the center or races away never to be seen again. Clearly something has gone wrong with this little project.
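    That conclusion is easy to spot-check numerically. Below is my own sketch, pretending as suggested that C is eight and n is two (or negative one): the slope of V(r) = C r^n is C n r^(n-1), which for r > 0 never vanishes and never changes sign, so there is no trough anywhere for the marble to settle into.

```python
def slope(r, C, n):
    # Derivative of the power-law potential V(r) = C * r**n.
    return C * n * r ** (n - 1)

for C in (8.0, -8.0):            # illustrative values only
    for n in (2.0, -1.0):
        slopes = [slope(r, C, n) for r in (0.25, 0.5, 1.0, 2.0, 5.0)]
        # The slope is never zero and never changes sign for r > 0 ...
        assert all(s != 0 for s in slopes)
        assert all((s > 0) == (slopes[0] > 0) for s in slopes)
# ... so the only possible resting place is the degenerate r = 0.
print("no resting place found for any r > 0")
```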

    If you’ve spotted what’s gone wrong please don’t say what it is right away. I’d like people to ponder it a little before coming back to this next week. That will come, I expect, shortly after the first Theorem Thursday post. If you have any requests for that project, please get them in, the sooner the better.

     
  • Joseph Nebus 3:00 pm on Monday, 25 April, 2016 Permalink | Reply
    Tags: electromagnetism, energy

    A Leap Day 2016 Mathematics A To Z: Yukawa Potential 


    Yeah, ‘Y’ is a lousy letter in the Mathematics Glossary. I have a half-dozen mathematics books on the shelf by my computer. Some are semi-popular stuff like Richard Courant and Herbert Robbins’s What Is Mathematics? (the Ian Stewart revision). Some are fairly technical stuff, by which I mean Hidetoshi Nishimori’s Statistical Physics of Spin Glasses and Information Processing. There are just no ‘Y’ terms in any of them worth anything. But I can rope something into the field. For example …

    Yukawa Potential

    When you as a physics undergraduate first take mechanics it’s mostly about very simple objects doing things according to one rule. The objects are usually these indivisible chunks. They’re either perfectly solid or they’re points, too tiny to have a surface area or volume that might mess things up. We draw them as circles or as blocks because they’re too hard to see on the paper or board otherwise. We spend a little time describing how they fall in a room. This lends itself to demonstrations in which the instructor drops a rubber ball. Then we go on to a mass on a spring hanging from the ceiling. Then to a mass on a spring hanging from another mass.

    Then we go on to two things sliding on a surface and colliding, which would really lend itself to bouncing pool balls against one another. Instead we use smaller solid balls. Sometimes those “Newton’s Cradle” things with the five balls that dangle from wires and just barely touch each other. They give a good reason to start talking about vectors. I mean positional vectors, the ones that say “stuff moving this much in this direction”. Normal vectors, that is. Then we get into stars and planets and moons attracting each other by gravity. And then we get into the stuff that really needs calculus. The earlier stuff is helped by it, yes. It’s just that by this point we can’t do without it.

    The “things colliding” and “balls dropped in a room” are the odd cases in this. Most of the interesting stuff in an introduction to mechanics course is about things attracting, or repelling, other things. And, particularly, they’re particles that interact by “central forces”. Their attraction or repulsion is along the line that connects the two particles. (Impossible for a force to do otherwise? Just wait until Intro to Mechanics II, when magnetism gets in the game. After that, somewhere in a fluid dynamics course, you’ll see how a vortex interacts with another vortex.) The potential energies for these all vary with distance between the points.

    Yeah, they also depend on the mass, or charge, or some kind of strength-constant for the points. They also depend on some universal constant for the strength of the interacting force. But those are, well, constant. If you move the particles closer together or farther apart the potential changes just by how much you moved them, nothing else.

    Particles hooked together by a spring have a potential that looks like \frac{1}{2}k r^2 . Here ‘r’ is how far the particles are from each other. ‘k’ is the spring constant; it’s just how strong the spring is. The one-half makes some other stuff neater. It doesn’t do anything much for us here. A particle attracted by another gravitationally has a potential that looks like -G M \frac{1}{r} . Again ‘r’ is how far the particles are from each other. ‘G’ is the gravitational constant of the universe. ‘M’ is the mass of the other particle. (The particle’s own mass doesn’t enter into it.) The electric potential looks like the gravitational potential but we have different symbols for stuff besides the \frac{1}{r} bit.

    The spring potential and the gravitational/electric potential have an interesting property. You can have “closed orbits” with a pair of them. You can set a particle orbiting another and, with time, get back to exactly the original positions and velocities. (With three or more particles you’re not guaranteed anything.) The curious thing is this doesn’t always happen for potentials that look like “something or other times r to a power”. In fact, it never happens, except for the spring potential, the gravitational/electric potential, and — peculiarly — for the potential k r^7 . ‘k’ doesn’t mean anything there, and we don’t put a one-seventh or anything out front for convenience, because nobody knows anything that needs anything like that, ever. We can have stable orbits, ones that stay within a minimum and a maximum radius, for a potential k r^n whenever n is larger than -2, at least. And that’s it, for potentials that are nothing but r-to-a-power.
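    That last claim, about stable orbits for n larger than -2, can be checked with the effective potential, which folds a particle’s angular momentum L into an extra repulsive term: V_eff(r) = C r^n + L^2/(2 m r^2). A circular orbit sits where V_eff has zero slope; it is stable when that spot is a trough rather than a crest. Here is a sketch of mine with made-up numbers, not anything from the original text:

```python
def orbit_radius(C, n, L, m=1.0):
    """Radius where V_eff(r) = C r^n + L^2/(2 m r^2) has zero slope,
    i.e. where a circular orbit sits. Needs C*n > 0 (an attractive force)."""
    return (L * L / (m * C * n)) ** (1.0 / (n + 2))

def is_stable(C, n, L, m=1.0, h=1e-4):
    """Marble test on the effective potential: a trough (the potential is
    higher a nudge away on either side) means the orbit survives a push."""
    r0 = orbit_radius(C, n, L, m)
    V_eff = lambda r: C * r ** n + L * L / (2 * m * r * r)
    return V_eff(r0 - h) > V_eff(r0) < V_eff(r0 + h)

print(is_stable(C=1.0, n=2.0, L=1.0))    # spring-like: stable
print(is_stable(C=-1.0, n=-1.0, L=1.0))  # gravity-like: stable
print(is_stable(C=-1.0, n=-3.0, L=1.0))  # steeper than n = -2: not stable
```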

    Ah, but does the potential have to be r-to-a-power? And here we see Dr Hideki Yukawa’s potential energy. Like these springs and gravitational/electric potentials, it varies only with the distance between particles. Its strength isn’t just the radius to a power, though. It uses a more complicated expression:

    -K \frac{e^{-br}}{r}

    Here ‘K’ is a scaling constant for the strength of the whole force. It’s the kind of thing we have ‘G M’ for in the gravitational potential, or ‘k’ in the spring potential. The ‘b’ is a second kind of scaling, and it’s a kind of range. A range of what? It’ll help to look at this potential rewritten a little. It’s the same as -\left(K \frac{1}{r}\right) \cdot \left(e^{-br}\right) . That’s the gravitational/electric potential, times e^{-br}. That’s a number close to 1 when r is small, but it drops to zero surprisingly quickly as r gets larger. How quickly will depend on b. The larger a number b is, the faster this drops to zero. The smaller a number b is, the slower this drops to zero. And if b is equal to zero, then e^{-br} is equal to 1, and we have the gravitational/electric potential all over again.
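    A quick numeric check of that limit, with made-up values of K and b rather than anything physical: at b = 0 the Yukawa potential matches the plain 1/r potential exactly, and for b > 0 the ratio between the two is just the screening factor e^{-br}.

```python
import math

def yukawa(r, K=1.0, b=1.0):
    # Yukawa potential: the 1/r form, screened by a factor exp(-b*r).
    return -K * math.exp(-b * r) / r

def coulomb(r, K=1.0):
    return -K / r

# With b = 0 the screening factor is 1 and Yukawa reduces to 1/r exactly.
for r in (0.5, 1.0, 2.0, 10.0):
    assert abs(yukawa(r, b=0.0) - coulomb(r)) < 1e-12

# With b > 0 the potential dies off much faster than 1/r at large radius.
print(yukawa(1.0) / coulomb(1.0))    # exp(-1), about 0.37
print(yukawa(10.0) / coulomb(10.0))  # exp(-10), about 0.000045
```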

    Yukawa introduced this potential to physics in the 1930s. He was trying to model the forces which keep an atom’s nucleus together. It represents the potential we expect from particles that attract one another by exchanging some particles with a rest mass. This rest mass is hidden within that number ‘b’ there. If the rest mass is zero, the particles are exchanging something like light, and that’s just what we expect for the electric potential. For the gravitational potential … um. It’s complicated. It’s one of the reasons why we expect that gravitons, if they exist, have zero rest mass. But we don’t know that gravitons exist. We have a lot of trouble making theoretical gravitons and quantum mechanics work together. I’d rather be skeptical of the things until we need them.

    Still, the Yukawa potential is an interesting mathematical creature even if we ignore its important role in modern physics. When I took my Introduction to Mechanics final one of the exam problems was deriving the equivalent of Kepler’s Laws of Motion for the Yukawa Potential. I thought then it was a brilliant problem. I still do. It struck me while writing this that I don’t remember whether it allows for closed orbits, except when b is zero. I’m a bit afraid to try to work out whether it does, lest I learn that I can’t follow the reasoning for that anymore. That would be a terrible thing to learn.

     
    • elkement (Elke Stangl) 1:31 pm on Wednesday, 27 April, 2016 Permalink | Reply

      That’s an interesting one!! Re closed orbits: I just remember that there are only two potentials that will make sure that every bound orbit is closed: A quadratic (Hooke’s Law, a spring) and a gravitational 1/r potential. Other potentials can have closed orbits, but it depends on initial conditions.
      Proofs usually make use of all the constants – energy, angular momentum – to be substituted in the equations of motion (or the constants emerge from applying Lagrange’s formalism) and angular momentum gives rise to an effective ‘add-on’ potential. Then different substitutions are applied that better fit the geometry of the problem, like using 1/r rather than r and angles or polar coordinates … and the statement about closed orbits should be a consequence of calculating the change in angle for moving from maximum to minimum radius.
      The procedure felt a bit like so-called early quantum mechanics, where theorems about integer changes in angular momentum were ‘tacked on’ classical theory … and all worked out nicely (and only) with harmonic or 1/r potentials.


      • Joseph Nebus 7:01 pm on Friday, 29 April, 2016 Permalink | Reply

        Hm. On reading my copy of Davis’s Classical Mechanics — my old textbook on this — I see he says the k r^7 potential allows for closed orbits, but doesn’t say one thing or another about whether every orbit with that potential is closed.

        But the section has got that tone like you describe, about early quantum mechanics and other proofs like this, of being ad hoc. Describing where an equilibrium might be is fine. The added talk about what makes it stable? … I suppose that’s more obvious when you’ve got some experience in similar problems, but I remember as a freshman finding it baffling why this should be a calculation. And then the part about apsidal angles, to say whether the orbits are closed, seems to come from a particularly deep field of nowhere.

        This does remind me that I’ve got a book I mean to read, partly for education, partly for recreation, that is about introducing the most potent tools of mechanics while studying the simplest orbiting-bodies problems.


        • elkement (Elke Stangl) 2:08 pm on Tuesday, 3 May, 2016 Permalink | Reply

          I searched for a reference now – this is the theorem I meant and its proof (translated to English from French): https://arxiv.org/pdf/0704.2396v1.pdf
          Quote: “In 1873, Joseph Louis Francois Bertrand (1822-1900) published a short but important paper in which he proved that there are two central fields only for which all bounded orbits are closed, namely, the isotropic harmonic oscillator law and Newton’s universal gravitation law”


          • Joseph Nebus 3:50 pm on Wednesday, 4 May, 2016 Permalink | Reply

            Ooh, thank you. This is interesting. And remarkable for being so compact, too! Who knew there’d be results that interesting with barely five pages of work?


  • Joseph Nebus 3:00 pm on Friday, 25 March, 2016 Permalink | Reply
    Tags: energy, motion, Newtonian mechanics

    A Leap Day 2016 Mathematics A To Z: Lagrangian 


    It’s another of my handful of free choice days today. I’ll step outside the abstract algebra focus I’ve somehow gotten lately to look instead at mechanics.

    Lagrangian.

    So, you likely know Newton’s Laws of Motion. At least you know of them. We build physics out of them. So a lot of applied mathematics relies on them. There’s a law about bodies at rest staying at rest. There’s one about bodies in motion continuing in a straight line. There’s one about the force on a body changing its momentum. Something about F equalling m a. There’s something about equal and opposite forces. That’s all good enough, and that’s all correct. We don’t use them anyway.

    I’m overstating for the sake of a good hook. They’re all correct. And if the problem’s simple enough there’s not much reason to go past this F and m a stuff. It’s just that once you start looking at complicated problems this gets to be an awkward tool. Sometimes a system is just hard to describe using forces and accelerations. Sometimes it’s impossible to say even where to start.

    For example, imagine you have one of those pricey showpiece globes. The kind that’s a big ball that spins on an axis, and whose axis is on a ring that can tip forward or back. And it’s an expensive showpiece globe. That axis is itself in another ring that rotates clockwise and counterclockwise. Give the globe a good solid spin so it won’t slow down anytime soon. Then nudge the frame, so both the horizontal ring and the ring the axis is on wobble some. The whole shape is going to wobble and move in some way. We ought to be able to model that. How? Force and mass and acceleration barely seem to even exist.

    The Lagrangian we get from Joseph-Louis Lagrange, who in the 18th century saw a brilliant new way to understand physics. It doesn’t describe how things move in response to forces, at least not directly. It describes how things move using energy. In particular, it uses potential energy and kinetic energy.

    This is brilliant on many counts. The biggest is in switching from forces to energy. Forces are vectors; they carry information about their size and their direction. Energy is a scalar; it’s just a number. A number is almost always easier to work with than a number alongside a direction.

    The second big brilliance is that the Lagrangian gives us freedom in choosing coordinate systems. We have to know where things are and how they’re changing. The first obvious guess for how to describe things is their position in space. And that works fine until we look at stuff such as this spinning, wobbling globe. That never quite moves, although the spinning and the wobbling are some kind of motion. The problem begs us to think of the globe’s rotation around three different axes. Newton doesn’t help us with that. The Lagrangian, though —

    The Lagrangian lets us describe physics using “generalized coordinates”. By this we mean coordinates that make sense for the problem even if they don’t directly relate to where something or other is in space. Any pick of coordinates is good, as long as we can describe the potential energy and the kinetic energy of the system using them.
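    As a sketch of how generalized coordinates pay off (my own toy example, not from the post): for a pendulum the convenient coordinate is the angle theta from vertical, not the bob’s position in space. Writing kinetic and potential energy in theta, the Lagrangian L = T - V gives the equation of motion theta'' = -(g/l) sin(theta), and a short simulation confirms the total energy T + V stays put.

```python
import math

g, l, m = 9.81, 1.0, 1.0   # gravity, rod length, bob mass (made-up values)

def energy(theta, omega):
    # T + V written in the generalized coordinate theta (angle from vertical).
    T = 0.5 * m * (l * omega) ** 2
    V = -m * g * l * math.cos(theta)
    return T + V

# Equation of motion from the Lagrangian L = T - V:
#   theta'' = -(g / l) * sin(theta)
theta, omega, dt = 1.0, 0.0, 1e-4
E0 = energy(theta, omega)
for _ in range(100_000):           # ten simulated seconds
    omega += -(g / l) * math.sin(theta) * dt
    theta += omega * dt            # semi-implicit Euler, gentle on energy
print(abs(energy(theta, omega) - E0))   # drift stays small
```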

    I’ve been writing about this as if the Lagrangian were the cure for all hard work ever. It’s not, alas. For example, we often want to study big bunches of particles that all attract (or repel) each other. That attraction (or repulsion) we represent as potential energy. This is easier to deal with than forces, granted. But that’s easier, which is not the same as easy.

    Still, the Lagrangian is great. We can do all the physics we used to. And we have a new freedom to set up problems in convenient ways. And the perspective of looking at energy instead of forces gives us a fruitful view on physics problems.

     
    • howardat58 3:06 pm on Friday, 25 March, 2016 Permalink | Reply

      While with Lagrange, how about the Lagrange Multiplier? Maybe on the next A-Z


    • elkement (Elke Stangl) 7:42 am on Tuesday, 5 April, 2016 Permalink | Reply

      A question: When physicists say Lagrangian they often actually mean Lagrangian density – the ‘curly L’ to be integrated over space and time, and the ‘actual’ Lagrangian is the integral. How is the term used in math?


      • Joseph Nebus 2:05 am on Saturday, 9 April, 2016 Permalink | Reply

        My experience, which I admit is limited, is that a mathematician speaking of the Lagrangian is probably, by default, speaking of the energy, the kinetic energy minus the potential. She’d probably only mean the Lagrangian density if it were clear from context she were working in field theory.

        But my experience was mostly as a student, and grad student, just learning different ways to describe mechanics. I never got deeply enough into this kind of physics to do field theory, for example. I was over in statistical mechanics and fluid flow problems instead. And there I was doing much more with the Hamiltonian forms of physical systems.


        • elkement (Elke Stangl) 6:33 am on Sunday, 10 April, 2016 Permalink | Reply

          Thinking again about why I brought this up: Seems I had tacitly assumed you were actually using field theory without emphasizing it … especially since you had mentioned your experience in fluid dynamics before. I thought about the stress energy tensor (of something simple, like an ideal gas), as it is often one of the first things explained in field theory courses … after Noether’s Theorem has been introduced.

          So can I propose Noether’s Theorem for your next A-Z tour? ;-)


          • Joseph Nebus 2:46 am on Friday, 15 April, 2016 Permalink | Reply

            It isn’t a strange thought to have. I kind of backed into fluid dynamics the wrong way around, by looking for a problem well-handled by some statistical mechanics tools, and Monte Carlo simulations, that I had on hand rather than out of any particular interest in how fluids move. I never even had a proper course on the subject, although I’d picked up some basic stuff from differential equations and numerical methods courses. The result is a real fluid flow specialist would find me a staggering mix of knowledge and ignorance. I have considered relying on my love’s workplace benefits to see if I can take a proper fluid dynamics course without paying full tuition.

            I might do Noether’s Theorem. I’m kind of tempted to do a series of nothing but theorems, actually, although how to organize the theme is a challenge.


  • Joseph Nebus 4:15 pm on Saturday, 13 June, 2015 Permalink | Reply
    Tags: energy

    Conditions of equilibrium and stability 


    This month Peter Mander’s CarnotCycle blog talks about the interesting world of statistical equilibriums. And particularly it talks about stable equilibriums. A system’s in equilibrium if it isn’t going to change over time. It’s in a stable equilibrium if being pushed a little bit out of equilibrium isn’t going to make the system unpredictable.

    For simple physical problems these are easy to understand. For example, a marble resting at the bottom of a spherical bowl is in a stable equilibrium. At the exact bottom of the bowl, the marble won’t roll away. If you give the marble a little nudge, it’ll roll around, but it’ll stay near where it started. A marble sitting on the top of a sphere is in an equilibrium — if it’s perfectly balanced it’ll stay where it is — but it’s not a stable one. Give the marble a nudge and it’ll roll away, never to come back.

    In statistical mechanics we look at complicated physical systems, ones with thousands or millions or even really huge numbers of particles interacting. But there are still equilibriums, some stable, some not. In these, stuff will still happen, but the kind of behavior doesn’t change. Think of a steadily-flowing river: none of the water is staying still, or close to it, but the river isn’t changing.

    CarnotCycle describes how to tell, from properties like temperature and pressure and entropy, when systems are in a stable equilibrium. These are properties that don’t tell us a lot about what any particular particle is doing, but they can describe the whole system well. The essay is higher-level than usual for my blog. But if you’re taking a statistical mechanics or thermodynamics course this is just the sort of essay you’ll find useful.


    carnotcycle


    In terms of simplicity, purely mechanical systems have an advantage over thermodynamic systems in that stability and instability can be defined solely in terms of potential energy. For example the center of mass of the tower at Pisa, in its present state, must be higher than in some infinitely near positions, so we can conclude that the structure is not in stable equilibrium. This will only be the case if the tower attains the condition of metastability by returning to a vertical position or absolute stability by exceeding the tipping point and falling over.


    Thermodynamic systems lack this simplicity, but in common with purely mechanical systems, thermodynamic equilibria are always metastable or stable, and never unstable. This is equivalent to saying that every spontaneous (observable) process proceeds towards an equilibrium state, never away from it.

    If we restrict our attention to a thermodynamic system of unchanging composition and apply…

    View original post 2,534 more words

     
    • sheldonk2014 4:29 pm on Saturday, 13 June, 2015 Permalink | Reply

      I love these theories,great break down of physics,makes me want to look closer at life


      • Joseph Nebus 2:19 am on Tuesday, 16 June, 2015 Permalink | Reply

        Well, thank you. If you can feel inspired to learn about remarkable things then I’m quite happy.


  • Joseph Nebus 3:00 pm on Saturday, 28 December, 2013 Permalink | Reply
    Tags: energy, railroads

    CarnotCycle on the Gibbs-Helmholtz Equation 


    I’m a touch late discussing this and can only plead that it has been December after all. Over on the CarnotCycle blog — which is focused on thermodynamics in a way I rather admire — there was recently a discussion of the Gibbs-Helmholtz Equation, which turns up in thermodynamics classes. It goes a bit better than the class I remember by showing a couple examples of actually using it to understand how chemistry works. Well, it’s so easy in a class like this to get busy working with symbols and forget that thermodynamics is a supremely practical science [1].

    The Gibbs-Helmholtz Equation — named for Josiah Willard Gibbs and for Hermann von Helmholtz, both of whom developed it independently (Helmholtz first) — comes in a couple of different forms, which CarnotCycle describes. All these different forms are meant to describe whether a particular change in a system is likely to happen. CarnotCycle’s discussion gives a couple of examples of actually working out the numbers, including for the Haber process, which I don’t remember reading about in calculative detail before. So I wanted to recommend it as a bit of practical mathematics or physics.
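    For reference, in my own notation rather than anything quoted from either blog, the form of the Gibbs-Helmholtz Equation most often met in class ties the temperature dependence of the Gibbs free energy change to the enthalpy change, at constant pressure:

    ```latex
    \left( \frac{\partial (\Delta G / T)}{\partial T} \right)_{p} = -\frac{\Delta H}{T^{2}}
    ```

    Knowing the enthalpy change lets you work out how the free energy change, and with it the spontaneity of a reaction, shifts with temperature. That is what makes worked examples like the Haber process calculation possible.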

    [1] I think it was Stephen Brush who pointed out that many of the earliest papers in thermodynamics appeared in railroad industry journals, because the problems of efficiently getting power from engines, and of how materials change when they get below freezing, were critically important to turning railroads from experimental contraptions into a productive industry. The observation might not be original to him; it might instead have been Wolfgang Schivelbusch’s.

     
  • Joseph Nebus 3:56 am on Monday, 4 February, 2013 Permalink | Reply
    Tags: absolute zero, energy, enthalpy, , , ,   

    Fun With General Physics 


    I’m sure to let my interest in the Internet Archive version of Landau, Akhiezer, and Lifshiz General Physics wane soon enough. But for now I’m still digging around and finding stuff that delights me. For example, here, from the end of section 58 (Solids and Liquids):

    As the temperature decreases, the specific heat of a solid also decreases and tends to zero at absolute zero. This is a consequence of a remarkable general theorem (called Nernst’s theorem), according to which, at sufficiently low temperatures, any quantity representing a property of a solid or liquid becomes independent of temperature. In particular, as absolute zero is approached, the energy and enthalpy of a body no longer depend on the temperature; the specific heats cp and cV, which are the derivatives of these quantities with respect to temperature, therefore tend to zero.

    It also follows from Nernst’s theorem that, as T → 0, the coefficient of thermal expansion tends to zero, since the volume of the body ceases to depend on the temperature.
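    A quick way to see the specific heat vanishing numerically is the low-temperature Debye approximation for a solid, in which the specific heat goes as the cube of the temperature. This sketch is my own, not from the book; the function name is made up, and the Debye temperature of about 428 K is a textbook figure for aluminium, used here just for illustration:

    ```python
    import math

    def debye_cv_low_T(T, theta_D, n_k=1.0):
        """Specific heat (in units of n*k) in the low-temperature Debye limit.

        The Debye model gives c_V ~ (12*pi^4/5) * n*k * (T/theta_D)^3
        for temperatures well below the Debye temperature theta_D.
        """
        return (12 * math.pi ** 4 / 5) * n_k * (T / theta_D) ** 3

    # Cooling by a factor of ten shrinks the specific heat a thousandfold,
    # and it vanishes entirely at absolute zero, as Nernst's theorem demands.
    for T in (300.0, 30.0, 3.0):
        print(T, debye_cv_low_T(T, theta_D=428.0))
    ```

    The cubic law only holds well below the Debye temperature, but it makes the point: as the temperature heads to zero, so does the specific heat.
    
    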

     
  • Joseph Nebus 12:15 am on Saturday, 2 June, 2012 Permalink | Reply
    Tags: axle, center of mass, , energy, , , pivot point, , , , violin   

    One Way To Fall Over 


    [ Huh. My statistics page says that someone came to me yesterday looking for the “mathematics behind rap music”. I don’t doubt there is such mathematics, but I’ve never written anything approaching it. I admit that despite the long intertwining of mathematics and music, and my own childhood of playing a three-quarter size violin in a way that must be characterized as “technically playing”, I don’t know anything nontrivial about the mathematics of any music. So, whoever was searching for that, I’m sorry to have disappointed you. ]

    Now, let me try my first guess at saying whether it’s easier to tip the cube over by pushing along the middle of the edge or by pushing at the corner. Last time around I laid out the ground rules, and particularly the symbols used for the size of the box (it’s of length a) and for how far the center of mass (the dead center of the box) is from the edges and the corners. Here’s my first thought about what has to be done to tip the box over: we have to make the box pivot on some point — along one edge, if we’re pushing on the edge; along one corner, if we’re pushing on the corner — and so make it start to roll. If we can raise the center of mass above the pivot then we can drop the box back down with some other face to the floor, which has to count as tipping the box over. If we don’t raise the center of mass we aren’t tipping the box at all; we’re just shoving it.
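    The energy barrier in that argument is just the weight times the rise of the center of mass up to the balance point. Here is a sketch of that arithmetic, with a function name of my own invention and unit mass and side length chosen purely for illustration:

    ```python
    import math

    def tipping_work(m, a, over_corner=False, g=9.81):
        """Work to raise a uniform cube's center of mass over a pivot.

        The center of mass starts at height a/2. Balanced over an edge it
        sits at half the face diagonal, (a/2)*sqrt(2); balanced over a
        corner it sits at half the space diagonal, (a/2)*sqrt(3). The work
        against gravity is the weight times the rise.
        """
        balance_height = (a / 2) * (math.sqrt(3) if over_corner else math.sqrt(2))
        return m * g * (balance_height - a / 2)

    m, a = 1.0, 1.0
    print(tipping_work(m, a))                    # pivoting on an edge
    print(tipping_work(m, a, over_corner=True))  # pivoting on a corner
    ```

    By this potential-energy measure alone the corner is the harder pivot, since the center of mass must climb higher; whether the pushing is otherwise easier at the corner is what the rest of the post takes up.
    
    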

    (More …)

     