Search Results for: gibbs

  • Joseph Nebus 6:00 pm on Wednesday, 9 November, 2016
    Tags: Josiah Willard Gibbs

    The End 2016 Mathematics A To Z: Distribution (statistics) 


    As I’ve done before, I’m using one of my essays to set up for another; it makes the later essay easier. But what I want to talk about here is worth some paragraphs on its own.

    Distribution (statistics)

    The 19th Century saw the discovery of some unsettling truths about … well, everything, really. If there is an intellectual theme of the 19th Century it’s that everything has an unsettling side. In the 20th Century craziness broke loose. The 19th Century, though, saw great reasons to doubt that we knew what we knew.

    But one of the unsettling truths grew out of mathematical physics. We start out studying physics the way Galileo or Newton might have, with falling balls. Ones that don’t suffer from air resistance. Then we move up to more complicated problems, like balls on a spring. Or two balls bouncing off each other. Maybe one ball, called a “planet”, orbiting another, called a “sun”. Maybe a ball on a lever swinging back and forth. We try a couple simple problems with three balls and find out that’s just too hard. We have to track so much information about the balls, about their positions and momentums, that we can’t solve any problems anymore. Oh, we can do the simplest ones, but we’re helpless against the interesting ones.

    And then we discovered something. By “we” I mean people like James Clerk Maxwell and Josiah Willard Gibbs. And that is that we can know important stuff about how millions and billions and even vaster numbers of things move around. Maxwell could work out how the enormously many chunks of rock and ice that make up Saturn’s rings move. Gibbs could work out how the trillions of trillions of trillions of trillions of particles of gas in a room move. We can’t work out how four particles move. How is it we can work out how a godzillion particles move?

    We do it by letting go. We stop looking for that precision and exactitude and knowledge down to infinitely many decimal points. Even though we think that’s what mathematicians and physicists should have. What we do instead is consider the things we would like to know. Where something is. What its momentum is. What side of a coin is showing after a toss. What card was taken off the top of the deck. What tile was drawn out of the Scrabble bag.

    There are possible results for each of these things we would like to know. Perhaps some of them are quite likely. Perhaps some of them are unlikely. We track how likely each of these outcomes is. This is called the distribution of the values. This can be simple. The distribution for a fairly tossed coin is “heads, 1/2; tails, 1/2”. The distribution for a fairly tossed six-sided die is “1/6 chance of 1; 1/6 chance of 2; 1/6 chance of 3” and so on. It can be more complicated. The distribution for a fairly tossed pair of six-sided dice starts out “1/36 chance of 2; 2/36 chance of 3; 3/36 chance of 4” and so on. If we’re measuring something that doesn’t come in nice discrete chunks we have to talk about ranges: the chance that a 30-year-old male weighs between 180 and 185 pounds, or between 185 and 190 pounds. The chance that a particle in the rings of Saturn is moving between 20 and 21 kilometers per second, or between 21 and 22 kilometers per second, and so on.

    We may be unable to describe how a system evolves exactly. But often we’re able to describe how the distribution of its possible values evolves. And the laws by which probability works conspire to help us here. We can get quite precise predictions for how a whole bunch of things behave even without ever knowing what any one thing is doing.

    That’s unsettling to start with. It’s made worse by one of the 19th Century’s late discoveries, that of chaos. That a system can be perfectly deterministic. That you might know what every part of it is doing as precisely as you care to measure. And you’re still unable to predict its long-term behavior. That’s unshakeable too, although statistical techniques will give you an idea of how likely different behaviors are. You can learn the distribution of what is likely, what is unlikely, and how often the outright impossible will happen.

    Distributions follow rules. Of course they do. They’re basically the rules you’d imagine from looking at and thinking about something with a range of values. Something like a chart of how many students got what grades in a class, or how tall the people in a group are, or so on. Each possible outcome turns up some fraction of the time. That fraction’s never less than zero nor greater than 1. Add up all the fractions representing all the times every possible outcome happens and the sum is exactly 1. Something happens, even if we never know just what. But we know how often each outcome will.
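    Those rules are easy to check by brute force. Here’s a minimal sketch in Python — my own illustrative code, not anything from the essay — that builds the two-dice distribution mentioned above and verifies it behaves the way a distribution must:

        from fractions import Fraction
        from collections import Counter

        # Count how many of the 36 equally likely rolls give each sum.
        counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))

        # The distribution: each outcome mapped to the fraction of rolls producing it.
        distribution = {total: Fraction(n, 36) for total, n in counts.items()}

        for total in sorted(distribution):
            print(total, distribution[total])  # 2 -> 1/36, 3 -> 1/18 (2/36), and so on

        # Every fraction sits between 0 and 1 ...
        assert all(0 <= p <= 1 for p in distribution.values())
        # ... and they add up to exactly 1: something always happens.
        assert sum(distribution.values()) == 1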

    There is something amazing to consider here. We can know and track everything there is to know about a physical problem. But we will be unable to do anything with it, except for the most basic and simple problems. We can choose to relax, to accept that the world is unknown and unknowable in detail. And this makes imaginable all sorts of problems that should be beyond our power. Once we’ve given up on this precision we get precise, exact information about what could happen. We can choose to see it as a moral about the benefits and costs and risks of how tightly we control a situation. It’s a surprising lesson to learn from one’s training in mathematics.

     
  • Joseph Nebus 3:00 pm on Wednesday, 6 April, 2016
    Tags: Dublin, rotations

    A Leap Day 2016 Mathematics A To Z: Quaternion 


    I’ve got another request from Gaurish today. And it’s a word I had been thinking to do anyway. When one looks for mathematical terms starting with ‘q’ this is one that stands out. I’m a little surprised I didn’t do it for last summer’s A To Z. But here it is at last:

    Quaternion.

    I remember the seizing of my imagination the summer I learned imaginary numbers. If we could define a number i, so that i-squared equalled negative 1, and work out arithmetic which made sense out of that, why not do it again? Complex-valued numbers are great. Why not something more? Maybe we could also have some other non-real number. I reached deep into my imagination and picked j as its name. It could be something else. Maybe the logarithm of -1. Maybe the square root of i. Maybe something else. And maybe we could build arithmetic with a whole second other non-real number.

    My hopes of this brilliant idea petered out over the summer. It’s easy to imagine a super-complex number, something that’s “1 + 2i + 3j”. And it’s easy to work out adding two super-complex numbers like this together. But multiplying them together? What should i times j be? I couldn’t solve the problem. Also I learned that we didn’t need another number to be the logarithm of -1. It would be π times i. (Or some other numbers. There’s some surprising stuff in logarithms of negative or of complex-valued numbers.) We also don’t need something special to be the square root of i, either. \frac{1}{2}\sqrt{2} + \frac{1}{2}\sqrt{2}\imath will do. (So will another number.) So I shelved the project.

    Even if I hadn’t given up, I wouldn’t have invented something. Not along those lines. Finer minds had done the same work and had found a way to do it. The most famous of these constructs is the quaternions, and their discovery is itself famous. Sir William Rowan Hamilton — the namesake of “Hamiltonian mechanics”, so you already know what a fantastic mind he was — had a flash of insight that’s come down in the folklore and romance of mathematical history. He had the idea on the 16th of October, 1843, while walking with his wife along the Royal Canal in Dublin, Ireland. As they crossed a bridge he saw what was missing. It seems he lacked pencil and paper. He carved it into the bridge:

    i^2 = j^2 = k^2 = ijk = -1

    The bridge now has a plaque commemorating the moment. You can’t make a sensible system with two non-real numbers. But three? Three works.

    And they are a mysterious three! i, j, and k are somehow not the same number. But each of them, multiplied by itself, gives us -1. And the product of the three together is -1. They are even more mysterious than that. To work sensibly, i times j can’t be the same thing as j times i. Instead, i times j equals minus j times i. And j times k equals minus k times j. And k times i equals minus i times k. We must give up commutativity, the idea that the order in which we multiply things doesn’t matter.

    But if we’re willing to accept that the order matters, then quaternions are well-behaved things. We can add and subtract them just as we would think to do if we didn’t know they were strange constructs. If we keep the funny rules about the products of i and j and k straight, then we can multiply them as easily as we multiply polynomials together. We can even divide them. We can do all the things we do with real numbers, only with these odd sets of four real numbers.
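    To make those funny rules concrete, here is a minimal sketch of quaternion arithmetic in Python — my own illustrative code, with invented names. A quaternion a + bi + cj + dk is stored as the tuple (a, b, c, d), and the product is just Hamilton’s rules applied term by term:

        def qmul(p, q):
            # Hamilton product of p = (a, b, c, d) and q = (e, f, g, h),
            # each standing for a + bi + cj + dk.
            a, b, c, d = p
            e, f, g, h = q
            return (a*e - b*f - c*g - d*h,
                    a*f + b*e + c*h - d*g,
                    a*g - b*h + c*e + d*f,
                    a*h + b*g - c*f + d*e)

        i = (0, 1, 0, 0)
        j = (0, 0, 1, 0)
        k = (0, 0, 0, 1)

        assert qmul(i, i) == (-1, 0, 0, 0)           # i^2 = -1
        assert qmul(qmul(i, j), k) == (-1, 0, 0, 0)  # ijk = -1
        assert qmul(i, j) == k                       # ij = k ...
        assert qmul(j, i) == (0, 0, 0, -1)           # ... but ji = -k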

    The way they’re written, that pattern of 1 + 2i + 3j + 4k, makes them look a lot like vectors. And we can use them like vectors pointing to stuff in three-dimensional space. It’s not quite a comfortable fit, though. That plain old real number at the start of things seems like it ought to signify something, but it doesn’t. In practice, it doesn’t give us anything that regular old vectors don’t. And vectors allow us to ponder not just three- or maybe four-dimensional spaces, but as many as we need. You might wonder why we need more than four dimensions, even allowing for time. It’s because if we want to track a lot of interacting things, it’s surprisingly useful to put them all into one big vector in a very high-dimension space. It’s hard to draw, but the mathematics is nice. Hamiltonian mechanics, particularly, almost begs for it.

    That’s not to call them useless, or even a niche interest. They do some things fantastically well. One of them is rotations. We can represent rotating a point around an arbitrary axis by an arbitrary angle as the multiplication of quaternions. There are many ways to calculate rotations. But if we need to do three-dimensional rotations this is a great one because it’s easy to understand and easier to program. And as you’d imagine, being able to calculate what rotations do is useful in all sorts of applications.
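    Here’s a sketch of how that rotation works — again my own illustrative code, repeating the Hamilton product from above so it runs on its own. To rotate a point about a unit axis by an angle θ, build the quaternion q = cos(θ/2) + sin(θ/2)(uxi + uyj + uzk), treat the point as the purely imaginary quaternion xi + yj + zk, and compute q p q⁻¹:

        import math

        def qmul(p, q):
            a, b, c, d = p
            e, f, g, h = q
            return (a*e - b*f - c*g - d*h,
                    a*f + b*e + c*h - d*g,
                    a*g - b*h + c*e + d*f,
                    a*h + b*g - c*f + d*e)

        def rotate(point, axis, angle):
            # Rotate a 3D point about a unit axis by `angle` radians,
            # by conjugating with the rotation quaternion.
            ux, uy, uz = axis
            s, c = math.sin(angle / 2), math.cos(angle / 2)
            q = (c, s * ux, s * uy, s * uz)
            q_inv = (c, -s * ux, -s * uy, -s * uz)  # conjugate = inverse, for a unit quaternion
            _, x, y, z = qmul(qmul(q, (0.0, *point)), q_inv)
            return (x, y, z)

        # A quarter turn of the x unit vector about the z axis lands on the y axis.
        print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
        # -> approximately (0.0, 1.0, 0.0)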

    They’ve got good uses in number theory too. Quaternions whose components are all integers, for example, shed light on which whole numbers can be written as sums of four squares. They’re also popular in group theory. They might be the simplest rings that work like arithmetic but that don’t commute. So they can serve as ways to learn properties of more exotic ring structures.

    Knowing of these marvelous exotic creatures of the deep mathematics, your imagination might be fired. Can we do this again? Can we make something with, say, four unreal numbers? No, no we can’t. Four won’t work. Nor will five. If we keep going, though, we do hit upon success with seven unreal numbers.

    This is a set called the octonions. Hamilton had barely worked out the scheme for quaternions when John T Graves, a friend of his at least up through the 16th of December, 1843, wrote of this new scheme. (Graves didn’t publish before Arthur Cayley did. Cayley’s one of those unspeakably prolific 19th century mathematicians. He has at least 967 papers to his credit. And he was a lawyer doing mathematics on the side for about 250 of those papers. This depresses every mathematician who ponders it these days.)

    But where quaternions are peculiar, octonions are really peculiar. Let me call a few quaternions p, q, and r. p times q might not be the same thing as q times p. But p times the product of q and r will be the same thing as the product of p and q, itself times r. This we call associativity. Octonions don’t have that. Let me call a few octonions s, t, and u. s times the product of t and u may be either plus or minus the product of s and t, itself times u. (It depends on the octonions.)
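    You can watch associativity fail numerically by way of the Cayley–Dickson construction, the standard doubling recipe that builds complex numbers from reals, quaternions from complex numbers, and octonions from quaternions. This is a minimal sketch of my own, using one common sign convention (others exist), hunting for a basis triple where the two groupings disagree:

        def conj(x):
            # Cayley-Dickson conjugate: negate every coefficient but the real part.
            if len(x) == 1:
                return x
            n = len(x) // 2
            return conj(x[:n]) + [-v for v in x[n:]]

        def mul(x, y):
            # One common Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*),
            # where * is conjugation. Reals (length-1 lists) multiply as usual.
            if len(x) == 1:
                return [x[0] * y[0]]
            n = len(x) // 2
            a, b = x[:n], x[n:]
            c, d = y[:n], y[n:]
            first = [p - q for p, q in zip(mul(a, c), mul(conj(d), b))]
            second = [p + q for p, q in zip(mul(d, a), mul(b, conj(c)))]
            return first + second

        def e(i):
            # Basis octonion: 1 in slot i of eight, 0 elsewhere.
            return [1 if slot == i else 0 for slot in range(8)]

        # Hunt for basis octonions where (st)u differs from s(tu).
        for i in range(1, 8):
            for j in range(1, 8):
                for k in range(1, 8):
                    s, t, u = e(i), e(j), e(k)
                    if mul(mul(s, t), u) != mul(s, mul(t, u)):
                        print("(e%d e%d) e%d =" % (i, j, k), mul(mul(s, t), u))
                        print("e%d (e%d e%d) =" % (i, j, k), mul(s, mul(t, u)))
                        raise SystemExit  # one counterexample is enough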

    Octonions have some neat mathematical properties. But I don’t know of any general uses for them that are as catchy as understanding rotations. Not rotations in the three-dimensional world, anyway.

    Yes, yes, we can go farther still. There’s a construct called “sedenions”, which have fifteen non-real numbers in them. That’s 16 terms in each number. Where octonions are peculiar, sedenions are really peculiar. They work even less like regular old numbers than octonions do. With octonions, at least, when you multiply s by the product of s and t, you get the same number as you would multiplying s by s and then multiplying that by t. Sedenions don’t even offer that shred of normality. Besides being a way to learn about abstract algebra structures I don’t know what they’re used for.

    I also don’t know of further exotic terms along this line. It would seem to fit a pattern if there’s some 32-term construct that we can define something like multiplication for. But it would presumably be even less like regular multiplication than sedenion multiplication is. If you want to fiddle about with that please do enjoy yourself. I’d be interested to hear if you turn up anything, but I don’t expect it’ll revolutionize the way I look at numbers. Sorry. But the discovery might be the fun part anyway.

     
    • elkement (Elke Stangl) 7:04 am on Sunday, 10 April, 2016

      I wonder if quaternions would be useful in physics – as so often describing the same physics using different math leads to new insights. I vaguely remember some articles proposed by people who wanted to ‘revive’ quaternions for physics (sometimes this was close to … uhm … ‘outsider physics’, so I was reminded of people willing to apply Lord Kelvin’s theory of smoke rings to atomic physics…), but I have not encountered them in theoretical physics courses.


      • elkement (Elke Stangl) 7:41 am on Sunday, 10 April, 2016

        I should post an update – before somebody points out my ignorance of history of science and tells me to check out Wikipedia :-) https://en.wikipedia.org/wiki/Quaternion
        This quote explains it:
        “From the mid-1880s, quaternions began to be displaced by vector analysis, which had been developed by Josiah Willard Gibbs, Oliver Heaviside, and Hermann von Helmholtz. Vector analysis described the same phenomena as quaternions, so it borrowed some ideas and terminology liberally from the literature of quaternions. However, vector analysis was conceptually simpler and notationally cleaner, and eventually quaternions were relegated to a minor role in mathematics and physics.”


        • Joseph Nebus 2:57 am on Friday, 15 April, 2016

          I was going to say, but did figure you’d get to it soon enough. And it isn’t like quaternions are wrong. If you’ve got a programming language construct for quaternions, such as because you’re using Fortran, they’ll be fine for an array of three- or four-dimensional vectors as long as you’re careful about multiplications. It’s just that if you’ve turned your system into a 3N-dimensional vector, you might as well use a vector with 3N spots, instead of an array of N quaternions.


  • Joseph Nebus 6:31 pm on Wednesday, 30 September, 2015

    How Gibbs derived the Phase Rule 


    I knew I’d been forgetting something about the end of summer. I’m embarrassed to admit it was Peter Mander’s Carnot Cycle blog resuming its discussion of thermodynamics.

    The September article is about Gibbs’s phase rule. Gibbs here is Josiah Willard Gibbs, who established much of the mathematical vocabulary of thermodynamics. The phase rule here talks about the change of a substance from one phase to another. The classic example is water changing from liquid to solid, or solid to gas, or gas to liquid. Everything does that for some combination of pressure and temperature and available volume; water is just a good example because we can see its phase transitions happen whenever we want.

    The question that feels natural to mathematicians, and physicists, is about degrees of freedom. Suppose that we’re able to take a substance and change its temperature or its volume or its pressure. How many of those things can we change at once without making the material different? And the phase rule is a way to calculate that. It’s not always the same number because at some combinations of pressure and temperature and volume the substance can be equally well either liquid or gas, or be gas or solid, or be solid or liquid. These represent phase transitions, melting or solidifying or evaporating. There’s even one combination — the triple point — where the material can be solid, liquid, or gas simultaneously.
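    For reference — this is me stating the rule, not part of Mander’s post — the formula Gibbs derived is short enough to quote. With C independent chemical components and P phases coexisting, the number of degrees of freedom F, the intensive variables we can still vary freely, is

    F = C - P + 2

    Pure water has C = 1. One phase on its own gives F = 2: we can nudge both temperature and pressure. Liquid and vapor together give F = 1, a coexistence curve. All three phases at once give F = 0, which is why the triple point is a single, isolated combination.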

    Carnot Cycle presents the way that Gibbs originally derived his phase rule. And it’s remarkably neat and clean and accessible. The meat of it is really a matter of counting, keeping track of how much information we have and how much we want, and looking at the difference between them. I recommend reading it even if you are somehow not familiar with differential forms. Simply trust that a “d” followed by some other letter (or a letter with a subscript) is some quantity whose value we might be interested in, and you should follow the reasoning well.


    carnotcycle


    The Phase Rule formula was first stated by the American mathematical physicist Josiah Willard Gibbs in his monumental masterwork On the Equilibrium of Heterogeneous Substances (1875-1878), in which he almost single-handedly laid the theoretical foundations of chemical thermodynamics.

    In a paragraph under the heading “On Coexistent Phases of Matter”, Gibbs gives the derivation of his famous formula in just 77 words. Of all the many Phase Rule proofs in the thermodynamic literature, it is one of the simplest and shortest. And yet textbooks of physical science have consistently overlooked it in favor of more complicated, less lucid derivations.

    To redress this long-standing discourtesy to Gibbs, CarnotCycle here presents Gibbs’ original derivation of the Phase Rule in an up-to-date form. His purely prose description has been supplemented with clarifying mathematical content, and the outmoded symbols used in the single equation to which he refers have been replaced with their…

    View original post 863 more words

     
  • Joseph Nebus 3:00 pm on Monday, 21 September, 2015
    Tags: Brownian motion, symmetries

    Reading the Comics, September 16, 2015: Celebrity Appearance Edition 


    I couldn’t go on calling this Back To School Editions. A couple of the comic strips the past week have given me reason to mention people famous in mathematics or physics circles, and one who’s even famous in the real world too. That’ll do for a title.

    Jeff Corriveau’s Deflocked for the 15th of September tells what I want to call an old joke about geese formations. The thing is that I’m not sure it is an old joke. At least I can’t think of it being done much. It seems like it should have been.

    The formations that geese, or other birds, fly in have been a neat corner of mathematics. The question they inspire is “how do birds know what to do?” How can they form complicated groupings and, more, change their flight patterns at a moment’s notice? (Geese flying in V shapes don’t need to do that, but other flocking birds will.) One surprising answer is that if each bird is just trying to follow a couple of simple rules, then a large enough group of birds will do amazingly complex things. This is good for people who want to say how complex things come about: it suggests you don’t need very much to have robust and flexible systems. It’s also bad for people who want to say how complex things come about: it suggests that many things that would be interesting can’t be studied in simpler models. Use a smaller number of birds, or fewer rules, and the interesting behavior doesn’t appear. (I’ll sketch a version of those rules in code below.)

    The geese are flying in V, I, and X patterns. The guess is that they're Roman geese.

    Jeff Corriveau’s Deflocked for the 15th of September, 2015.
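    That “simple rules” idea is most famously realized in Craig Reynolds’s “boids” model of flocking. Here’s a bare-bones sketch — illustrative Python of my own, with made-up rule weights — in which each bird only pulls toward the others’ average position, matches their average heading, and backs away from anyone too close:

        import random

        def step(flock, dt=0.1):
            # flock: list of [x, y, vx, vy]. Each bird reacts to every other bird,
            # fine for a toy flock; real implementations use only near neighbors.
            new_flock = []
            for idx, (x, y, vx, vy) in enumerate(flock):
                others = [b for i, b in enumerate(flock) if i != idx]
                n = len(others)
                cx = sum(b[0] for b in others) / n   # center of the rest of the flock
                cy = sum(b[1] for b in others) / n
                avx = sum(b[2] for b in others) / n  # average velocity of the rest
                avy = sum(b[3] for b in others) / n
                sep_x = sum(x - b[0] for b in others
                            if abs(x - b[0]) + abs(y - b[1]) < 1.0)
                sep_y = sum(y - b[1] for b in others
                            if abs(x - b[0]) + abs(y - b[1]) < 1.0)
                # Cohesion, alignment, separation: each nudges the velocity a little.
                vx += 0.05 * (cx - x) + 0.10 * (avx - vx) + 0.30 * sep_x
                vy += 0.05 * (cy - y) + 0.10 * (avy - vy) + 0.30 * sep_y
                new_flock.append([x + vx * dt, y + vy * dt, vx, vy])
            return new_flock

        flock = [[random.uniform(0, 10), random.uniform(0, 10),
                  random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(30)]
        for _ in range(200):
            flock = step(flock)
        print(flock[0])  # one bird's final position and velocity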

    Scott Adams’s Dilbert Classics from the 15th and 16th of September (originally run the 22nd and 23rd of July, 1992) are about mathematical forecasts of the future. This is a hard field, and one people have been dreaming about for a long while. J Willard Gibbs, the renowned 19th century physicist who put the mathematics of thermodynamics in essentially its modern form, pondered whether a thermodynamics of history could be made. But attempts at making such predictions top out at demographic or rough economic forecasts, and for obvious reasons.

    The next day Dilbert’s garbageman, the smartest person in the world, asserts the problem is chaos theory, that “any complex iterative model is no better than a wild guess”. I wouldn’t put it that way, although I’m not sure what would convey the idea within the space available. One problem with predicting complicated systems, even if they are deterministic, is that there is a difference between what we can measure a system to be and what the system actually is. And for some systems that slight error will be magnified quickly to the point that a prediction based on our measurement is useless. (Fortunately this seems to affect only interesting systems, so we can still do things like study physics in high school usefully.)
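    To see how quickly a slight measurement error gets magnified, try the logistic map, a standard toy example of chaos (this snippet is mine, not anything from the strip): start two copies of the same deterministic rule a hair’s breadth apart and watch them part company.

        # Iterate x -> r*x*(1 - x) in its chaotic regime (r = 4) from two
        # starting points that differ by a 'measurement error' of 1e-10.
        r = 4.0
        x, y = 0.3, 0.3 + 1e-10

        for step in range(61):
            if step % 10 == 0:
                print(step, abs(x - y))
            x = r * x * (1 - x)
            y = r * y * (1 - y)

        # The gap grows from 1e-10 to order 1 within a few dozen steps, at which
        # point the computed trajectory says nothing useful about the 'true' one.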

    Maria Scrivan’s Half Full for the 16th of September makes the Common Core joke. A generation ago this was a New Math joke. It’s got me curious about the history of attempts to reform mathematics teaching, and how poorly they get received. Surely someone’s written a popular or at least semipopular book about the process? I need some friends in the anthropology or sociology departments to tell me, I suppose.

    In Mark Tatulli’s Heart of the City for the 16th of September, Heart is already feeling lost in mathematics. She’s in enough trouble she doesn’t recognize mathematics terms. That is an old joke, too, although I think the best version of it was done in a Bloom County with no mathematical content. (Milo Bloom met his idol Betty Crocker and learned that she was a marketing icon who knew nothing of cooking. She didn’t even recognize “shish kebob” as a cooking term.)

    Mell Lazarus’s Momma for the 16th of September sneers at the idea of predicting where specks of dust will land. But the motion of dust particles is interesting. What can be said about the way dust moves when the dust is being battered by air molecules that are moving as good as randomly? This becomes a problem in statistical mechanics, and one that depends on many things, including just how fast air particles move and how big molecules are. Now for the celebrity part of this story.

    Albert Einstein published four papers in his “Annus mirabilis” year of 1905. One of them was the Special Theory of Relativity, and another the mass-energy equivalence. Those, and the General Theory of Relativity, are surely why he became and still is a familiar name to people. One of his others was on the photoelectric effect. It’s a cornerstone of quantum mechanics. If Einstein had done nothing in relativity he’d still be renowned among physicists for that. The last paper, though, that was on Brownian motion, the movement of particles buffeted by random forces like this. And if he’d done nothing in relativity or quantum mechanics, he’d still probably be known in statistical mechanics circles for this work. Among other things this work gave the first good estimates for the size of atoms and molecules, and gave easily observable, macroscopic-scale evidence that molecules must exist. That took some work, though.
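    To give a taste of what Einstein showed — this example is my own, and far cruder than his argument — the mean squared displacement of a randomly buffeted particle grows linearly with time, \langle x^2 \rangle = 2Dt in one dimension. Even the simplest coin-flip random walk shows that linear growth:

        import random

        def mean_square_displacement(steps, walkers=5000):
            # Each walker takes `steps` unit steps left or right at random,
            # a crude stand-in for a dust speck battered by air molecules.
            total = 0
            for _ in range(walkers):
                position = sum(random.choice((-1, 1)) for _ in range(steps))
                total += position * position
            return total / walkers

        for steps in (10, 100, 1000):
            print(steps, mean_square_displacement(steps))
        # Prints roughly 10, 100, 1000: the average squared distance from the
        # start grows in proportion to the number of steps, that is, to time.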

    Dave Whamond’s Reality Check for the 16th of September shows off the Metropolitan Museum of Symmetry. This is probably meant to be an art museum. Symmetries are studied in mathematics too, though. Many symmetries, the ways you can swap shapes around, form interesting groups or rings. And in mathematical physics, symmetries give us useful information about the behavior of systems. That’s enough for me to claim this comic is mathematically linked.

     
  • Joseph Nebus 10:21 pm on Sunday, 21 September, 2014

    The Geometry of Thermodynamics (Part 2) 


    I should mention — I should have mentioned earlier, but it has been a busy week — that CarnotCycle has published the second part of “The Geometry of Thermodynamics”. This is a bit of a tougher read than the first part, admittedly, but it’s still worth reading. The essay reviews how James Clerk Maxwell — yes, that Maxwell — developed the thermodynamic relationships that would have made him famous in physics if it weren’t for his work in electromagnetism that ultimately overthrew the Newtonian paradigm of space and time.

    The ingenious thing is that the best part of this work is done on geometric grounds, on thinking of the spatial relationships between quantities that describe how a system moves heat around. “Spatial” may seem a strange word to describe this since we’re talking about things that don’t have any direct physical presence, like “temperature” and “entropy”. But if you draw pictures of how these quantities relate to one another, you have curves and parallelograms and figures that follow the same rules of how things fit together that you’re used to from ordinary everyday objects.

    A wonderful side point is a touch of human fallibility from a great mind: in working out his relations, Maxwell misunderstood just what was meant by “entropy”, and needed correction by the at-least-as-great Josiah Willard Gibbs. Many people don’t quite know what to make of entropy even today, and Maxwell was working when the word was barely a generation away from being coined, so it’s quite reasonable he might not understand a term that was relatively new and still getting its precise definition. It’s surprising nevertheless to see.


    carnotcycle

    James Clerk Maxwell and the geometrical figure with which he proved his famous thermodynamic relations

    Historical background

    Every student of thermodynamics sooner or later encounters the Maxwell relations – an extremely useful set of statements of equality among partial derivatives, principally involving the state variables P, V, T and S. They are general thermodynamic relations valid for all systems.

    The four relations originally stated by Maxwell are easily derived from the (exact) differential relations of the thermodynamic potentials:

    dU = TdS – PdV   ⇒   (∂T/∂V)_S = –(∂P/∂S)_V
    dH = TdS + VdP   ⇒   (∂T/∂P)_S = (∂V/∂S)_P
    dG = –SdT + VdP   ⇒   –(∂S/∂P)_T = (∂V/∂T)_P
    dA = –SdT – PdV   ⇒   (∂S/∂V)_T = (∂P/∂T)_V

    This is how we obtain these Maxwell relations today, but it disguises the history of their discovery. The thermodynamic state functions H, G and A were yet to…

    View original post 1,262 more words

     
    • elkement 11:24 am on Tuesday, 23 September, 2014

      carnotcycle has to be turned into a book :-)


      • Joseph Nebus 11:37 pm on Wednesday, 24 September, 2014

        It should become one! I wouldn’t be surprised if that’s in mind, actually, particularly given the deliberate pace with which the articles are being written.


  • Joseph Nebus 3:08 pm on Saturday, 9 August, 2014
    Tags: Maxwell's Equations

    The Geometry of Thermodynamics (Part 1) 


    I should mention that Peter Mander’s Carnot Cycle blog has a fine entry, “The Geometry of Thermodynamics (Part I)” which admittedly opens with a diagram that looks like the sort of thing you create when you want to present a horrifying science diagram. That’s a bit of flavor.

    Mander writes about part of what made J Willard Gibbs probably the greatest theoretical physicist that the United States has yet produced: Gibbs put much of thermodynamics into a logically neat system, the kind we still basically use today, and all the better saw how to represent it and understand it as a matter of surface geometries. This is an abstract kind of surface — looking at the curve traced out by, say, mapping the energy of a gas against its volume, or its temperature versus its entropy — but if you can accept the idea that we can draw curves representing these quantities then you get to use your understanding of how solid objects look and feel. (These surfaces even got made into solid objects — James Clerk Maxwell, of Maxwell’s Equations fame, sculpted some.)

    This is a reblogging of only part one, although as Mander’s on summer holiday you haven’t missed part two.


    carnotcycle


    Volume One of the Scientific Papers of J. Willard Gibbs, published posthumously in 1906, is devoted to Thermodynamics. Chief among its content is the hugely long and desperately difficult “On the equilibrium of heterogeneous substances (1876, 1878)”, with which Gibbs single-handedly laid the theoretical foundations of chemical thermodynamics.

    In contrast to James Clerk Maxwell’s textbook Theory of Heat (1871), which uses no calculus at all and hardly any algebra, preferring geometry as the means of demonstrating relationships between quantities, Gibbs’ magnum opus is stuffed with differential equations. Turning the pages of this calculus-laden work, one could easily be drawn to the conclusion that the writer was not a visual thinker.

    But in Gibbs’ case, this is far from the truth.

    The first two papers on thermodynamics that Gibbs published, in 1873, were in fact visually-led. Paper I deals with indicator diagrams and their comparative properties, while Paper II

    View original post 1,490 more words

     
  • Joseph Nebus 8:52 pm on Thursday, 6 February, 2014
    Tags: science history

    The Liquefaction of Gases – Part I 


    I know, or at least I’m fairly confident, there are a couple of readers here who like deeper mathematical subjects. It’s fine to come up with simulated Price is Right games or figure out what grades one needs to pass the course, but those aren’t particularly challenging subjects.

    But those are hard to write, so, while I stall, let me point you to CarnotCycle, which has a nice historical article about the problem of liquefaction of gases, a problem that’s not just steeped in thermodynamics but in engineering. If you’re a little familiar with thermodynamics you likely won’t be surprised to see names like William Thomson, James Joule, or Willard Gibbs turn up. I was surprised to see in the additional reading T O’Conor Sloane show up; science fiction fans might vaguely remember that name, as he was the editor of Amazing Stories for most of the 1930s, in between Hugo Gernsback and Raymond Palmer. It’s often a surprising world.


    carnotcycle

    On Monday 3 December 1877, the French Academy of Sciences received a letter from Louis Cailletet, a 45-year-old physicist from Châtillon-sur-Seine. The letter stated that Cailletet had succeeded in liquefying both carbon monoxide and oxygen.

    Liquefaction as such was nothing new to 19th century science, it should be said. The real news value of Cailletet’s announcement was that he had liquefied two gases previously considered ‘non condensable’.

    While a number of gases such as chlorine, carbon dioxide, sulfur dioxide, hydrogen sulfide, ethylene and ammonia had been liquefied by the simultaneous application of pressure and cooling, the principal gases comprising air – nitrogen and oxygen – together with carbon monoxide, nitric oxide, hydrogen and helium, had stubbornly refused to liquefy, despite the use of pressures up to 3000 atmospheres. By the mid-1800s, the general opinion was that these gases could not be converted into liquids under any circumstances.

    But in…

    View original post 1,342 more words

     
    • Damyanti 6:47 am on Thursday, 13 February, 2014

      I usually run far from all topics science-related, but I like this little bit of history here.


      • Joseph Nebus 11:46 pm on Thursday, 13 February, 2014

        I’m glad you do enjoy. I like a good bit of history myself, mathematics and science included, and might go looking for more topics that have a historical slant.


    • LFFL 10:43 pm on Sunday, 23 February, 2014

      You lost me at “deeper mathematical subjects”. I barely have addition & subtraction down.


      • Joseph Nebus 4:43 am on Monday, 24 February, 2014

        Aw, but the deeper stuff is fascinating. For example, imagine you have a parcel of land with some really complicated boundary, all sorts of nooks and crannies and corners and curves and all that. If you just walk around the outside, keeping track of how far you walk and in what direction, then, you can use a bit of calculus to tell exactly how much area is enclosed by the boundary, however complicated a shape it is.

        Isn’t that amazing? You never even have to set foot inside the property, just walk around its boundary.
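        If you’re curious, here’s the calculation spelled out — my own illustrative Python, and the result is Green’s theorem at work, in the guise of what’s often called the shoelace formula. Record the corners you pass while walking the boundary, and the area comes out of a single sum:

            def enclosed_area(corners):
                # Shoelace formula: corners listed in the order you walk them.
                # Green's theorem turns the area integral into a sum around the edge.
                n = len(corners)
                twice_area = 0.0
                for i in range(n):
                    x0, y0 = corners[i]
                    x1, y1 = corners[(i + 1) % n]
                    twice_area += x0 * y1 - x1 * y0
                return abs(twice_area) / 2

            # An L-shaped lot: walk its six corners, never stepping inside.
            print(enclosed_area([(0, 0), (4, 0), (4, 2), (2, 2), (2, 5), (0, 5)]))
            # -> 14.0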


        • LFFL 4:46 am on Monday, 24 February, 2014

          Wow. I’m impressed by your brain power. I just wasn’t born with a brain for much math beyond the basics.


          • Joseph Nebus 4:25 am on Tuesday, 25 February, 2014

            Aw, you’re kind to me, and unkind to you. It’s not my brainpower, at least. The result is a consequence of some pretty important work you learn early on in calculus, and I’d expect you could understand the important part of it without knowing more than the basics.


  • Joseph Nebus 3:00 pm on Saturday, 28 December, 2013
    Tags: railroads

    CarnotCycle on the Gibbs-Helmholtz Equation 


    I’m a touch late discussing this and can only plead that it has been December, after all. Over on the CarnotCycle blog — which is focused on thermodynamics in a way I rather admire — was recently a discussion of the Gibbs-Helmholtz Equation, which turns up in thermodynamics classes, and goes a bit better than the class I remember by showing a couple of examples of actually using it to understand how chemistry works. Well, it’s so easy in a class like this to get busy working with symbols and forget that thermodynamics is a supremely practical science [1].

    The Gibbs-Helmholtz Equation — named for Josiah Willard Gibbs and for Hermann von Helmholtz, both of whom developed it independently (Helmholtz first) — comes in a couple of different forms, which CarnotCycle describes. All these different forms are meant to describe whether a particular change in a system is likely to happen. CarnotCycle’s discussion gives a couple of examples of actually working out the numbers, including for the Haber process, which I don’t remember reading about in calculative detail before. So I wanted to recommend it as a bit of practical mathematics or physics.
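    For the record — my addition, not CarnotCycle’s — the form of the equation you’re likeliest to meet relates the Gibbs free energy G to the enthalpy H at constant pressure:

    \left( \frac{\partial (G/T)}{\partial T} \right)_P = -\frac{H}{T^2}

    Apply it to the change in G and H for a reaction and it tells you how the reaction’s equilibrium shifts as the temperature changes, which is what the Haber process example puts to work.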

    [1] I think it was Stephen Brush who pointed out that many of the earliest papers in thermodynamics appeared in railroad industry journals, because the problems of efficiently getting power from engines, and of how materials change when they get below freezing, are critically important to turning railroads from experimental contraptions into a productive industry. The observation might not be original to him. The observation also might have been Wolfgang Schivelbusch’s instead.

     
  • Joseph Nebus 5:43 am on Thursday, 21 March, 2013

    Gibbs’ Elementary Principles in Statistical Mechanics 


    I had another discovery from the collection of books at archive.org, now that I thought to look for it: Josiah Willard Gibbs’s Elementary Principles in Statistical Mechanics, originally published in 1902 and reprinted 1960 by Dover, which gives you a taste of Gibbs’s writings by its extended title, Developed With Especial Reference To The Rational Foundation of Thermodynamics. Gibbs was an astounding figure even in a field that seems to draw out astounding figures, and he’s a good candidate for the title of “greatest scientist to come from the United States”.

    He lived in walking distance of Yale (where his father and then he taught) nearly his whole life, working nearly isolated but with an astounding talent for organizing the many complex and confused ideas in the study of thermodynamics into a neat, logical science. Some great scientists have the knack for finding important work to do; some great scientists have the knack for finding ways to express work so the masses can understand it. Gibbs … well, perhaps it’s a bit much to say the masses understand it, but the language of modern thermodynamics and of quantum mechanics is very much the language he spoke a century-plus ago.

    My understanding is he published almost all his work in the journal Transactions of the Connecticut Academy of Arts and Sciences, in a show of hometown pride which probably left the editors baffled but, I suppose, happy to print something this fellow was very sure about.

    To give some idea why they might have found him baffling, though, consider the first paragraph of Chapter 1, which is accurate and certainly economical:

    We shall use Hamilton’s form of the equations of motion for a system of n degrees of freedom, writing q_1, \cdots q_n for the (generalized) coördinates, \dot{q}_1, \cdots \dot{q}_n for the (generalized) velocities, and

    F_1 \,dq_1 + F_2 \,dq_2 + \cdots + F_n \,dq_n [1]

    for the moment of the forces. We shall call the quantities F_1, \cdots F_n the (generalized) forces, and the quantities p_1 \cdots p_n , defined by the equations

    p_1 = \frac{d\epsilon_p}{d\dot{q}_1}, p_2 = \frac{d\epsilon_p}{d\dot{q}_2}, etc., [2]

    where \epsilon_p denotes the kinetic energy of the system, the (generalized) momenta. The kinetic energy is here regarded as a function of the velocities and coördinates. We shall usually regard it as a function of the momenta and coördinates, and on this account we denote it by \epsilon_p . This will not prevent us from occasionally using formulas like [2], where it is sufficiently evident the kinetic energy is regarded as function of the \dot{q}'s and q's. But in expressions like d\epsilon_p/dq_1 , where the denominator does not determine the question, the kinetic energy is always to be treated in the differentiation as function of the p's and q's.

    (There’s also a footnote I skipped because I don’t know an elegant way to include it in WordPress.) Your friend the physics major did not understand that on first read any more than you did, although she probably got it after going back and reading it a touch more slowly. And his writing is just like that: 240 pages and I’m not sure I could say any of them could be appreciably tightened.


    Also, I note I finally reached 9,000 page views! Thank you; I couldn’t have done it without at least twenty of you, since I’m pretty sure I’ve obsessively clicked on my own pages at minimum 8,979 times.

     
    • Peter Mander 8:05 pm on Thursday, 21 March, 2013

      Fully agree with your assessment of Gibbs’ greatness. The US should be immensely proud of him.


  • Joseph Nebus 5:46 pm on Tuesday, 5 March, 2013

    Reblog: Mixed-Up Views Of Entropy 


    The blog CarnotCycle, which tries to explain thermodynamics — a noble goal, since thermodynamics is a really big, really important, and criminally unpopularized part of science and mathematics — here starts from an “Unpublished Fragment” by the great Josiah Willard Gibbs to talk about entropy.

    Gibbs — a strong candidate for the greatest scientist the United States ever produced, complete with fascinating biographical quirks to make him seem accessibly peculiar — gave to statistical mechanics much of the mathematical form and power that it now has. Gibbs had planned to write something about “On entropy as mixed-up-ness”, which certainly puts in one word what people think of entropy as being. The concept is more powerful and more subtle than that, though, and CarnotCycle talks about some of the subtleties.


    carnotcycle


    Tucked away at the back of Volume One of The Scientific Papers of J. Willard Gibbs, is a brief chapter headed ‘Unpublished Fragments’. It contains a list of nine subject headings for a supplement that Professor Gibbs was planning to write to his famous paper “On the Equilibrium of Heterogeneous Substances”. Sadly, he completed his notes for only two subjects before his death in April 1903, so we will never know what he had in mind to write about the sixth subject in the list: On entropy as mixed-up-ness.

    Mixed-up-ness. It’s a catchy phrase, with an easy-to-grasp quality that brings entropy within the compass of minds less given to abstraction. That’s no bad thing, but without Gibbs’ guidance as to exactly what he meant by it, mixed-up-ness has taken on a life of its own and has led to entropy acquiring the derivative associations of chaos and disorder…

    View original post 627 more words

     