Updates from January, 2017

  • Joseph Nebus 6:00 pm on Sunday, 8 January, 2017 Permalink | Reply
    Tags: Birdbrains, Elderberries, Grand Avenue, Pot Shots, Quincy

    Reading the Comics, January 7, 2017: Just Before GoComics Breaks Everything Edition 


    Most of the comics I review here are printed on GoComics.com. Well, most of the comics I read online are from there. But even so, I think they have more comic strips that mention mathematical themes than anywhere else I read. Anyway, they’re unleashing a complete web site redesign on Monday. I don’t know just what the final version will look like. I know that the beta versions included the incredibly useful, that is to say dumb, feature where if a particular comic you do read doesn’t have an update for the day — and many of them don’t, as they’re weekly or three-times-a-week or so — then it’ll show some other comic in its place. I mean, the idea of encouraging people to find new comics is a good one. To some extent that’s what I do here. But the beta made no distinction between “comic you don’t read because you never heard of Microcosm” and “comic you don’t read because glancing at it makes your eyes bleed”. And on an idiosyncratic note, I read a lot of comics. I don’t need to see Dude and Dude reruns in fourteen spots on my daily comics page, even if I didn’t mind it at the start.

    Anyway. I am hoping, desperately hoping, that with the new site all my old links to comics are going to keep working. If they don’t then I suppose I’m just ruined. We’ll see. My suggestion is that if you’re at all curious about any of these comics you read them today (Sunday), just to be safe.

    Ashleigh Brilliant’s Pot-Shots is a curious little strip I never knew of until GoComics picked it up a few years ago. Its format is compellingly simple: a little illustration alongside a wry, often despairing, caption. I love it, but I also understand why it was the subject of endless queries to the Detroit Free Press (Or Whatever) about why this thing was taking up newspaper space. The strip rerun on the 31st of December is a typical example of the strip and amuses me at least. And it uses arithmetic as the way to communicate reasoning, both good and bad. Brilliant’s joke does address something that logicians have to face, too. Whether an argument is logically valid depends entirely on its structure. If the form is correct the reasoning may be excellent. But to be sound an argument has to be valid and must also have true assumptions. We can separate whether an argument is right from whether it could ever possibly be right. If you don’t see the value in that, you have never participated in an online debate about where James T Kirk was born and whether Spock was the first Vulcan in Star Fleet.

    Thom Bluemel’s Birdbrains for the 2nd of January, 2017, is a loaded-dice joke. Is this truly mathematics? Statistics, at least? Close enough for the start of the year, I suppose. Working out whether a die is loaded is one of the things any gambler would like to know, and that mathematicians might be called upon to identify or exploit. (I had a grandmother unshakably convinced that I would have some natural ability to beat the Atlantic City casinos if she could only sneak the underaged me in. I doubt I could do anything of value there besides see the stage magic show.)
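
    As an aside, the way a mathematician might actually check a die is a goodness-of-fit test: roll it a lot, count how often each face comes up, and measure how far those counts stray from the one-sixth-each you’d expect. Here’s a minimal sketch of that idea in Python; the roll data and the names are invented for illustration, not anything from the comic strip.

        # A sketch of a chi-squared goodness-of-fit check for a six-sided die.
        # The rolls here are made up for illustration.
        from collections import Counter

        rolls = [1, 6, 3, 6, 2, 6, 5, 6, 4, 6, 6, 1, 6, 3, 6, 2, 6, 5, 6, 6] * 5
        counts = Counter(rolls)
        n = len(rolls)
        expected = n / 6  # a fair die should give each face about one-sixth of the rolls

        # Chi-squared statistic: how far the observed counts stray from the fair-die expectation.
        chi2 = sum((counts.get(face, 0) - expected) ** 2 / expected for face in range(1, 7))

        # With five degrees of freedom, a value above about 11.07 would turn up
        # less than five percent of the time if the die really were fair.
        print(f"chi-squared statistic: {chi2:.2f}")
        print("suspiciously loaded" if chi2 > 11.07 else "consistent with a fair die")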

    Jack Pullan’s Boomerangs rerun for the 2nd is built on the one bit of statistical mechanics that everybody knows, that something or other about entropy always increasing. It’s not a quantum mechanics rule, but it’s a natural confusion. Quantum mechanics has the reputation as the source of all the most solid, irrefutable laws of the universe’s working. Statistical mechanics and thermodynamics have this musty odor of 19th-century steam engines, no matter how much there is to learn from there. Anyway, the collapse of systems into disorder is not an irrevocable thing. It takes only energy or luck to overcome disorderliness. And in many cases we can substitute time for luck.

    Scott Hilburn’s The Argyle Sweater for the 3rd is the anthropomorphic-geometry-figure joke that I’ve been waiting for. I had thought Hilburn did this all the time, although a quick review of Reading the Comics posts suggests he’s been more about anthropomorphic numerals the past year. This is why I log even the boring strips: you never know when I’ll need to check the last time Scott Hilburn used “acute” to mean “cute” in reference to triangles.

    Mike Thompson’s Grand Avenue uses some arithmetic as the visual cue for “any old kind of schoolwork, really”. Steve Breen’s name seems to have gone entirely from the comic strip. On Usenet group rec.arts.comics.strips Brian Henke found that Breen’s name hasn’t actually been on the comic strip since May, and D D Degg found a July 2014 interview indicating Thompson had mostly taken the strip over from originator Breen.

    Mark Anderson’s Andertoons for the 5th is another name-drop that doesn’t have any real mathematics content. But come on, we’re talking Andertoons here. If I skipped it the world might end or something untoward like that.

    'Now for my math homework. I've got a comfortable chair, a good light, plenty of paper, a sharp pencil, a new eraser, and a terrific urge to go out and play some ball.'

    Ted Shearer’s Quincy for the 14th of November, 1977, and reprinted the 7th of January, 2017. I kind of remember having a lamp like that. I don’t remember ever sitting down to do my mathematics homework with a paintbrush.

    Ted Shearer’s Quincy for the 14th of November, 1977, doesn’t have any mathematical content really. Just a mention. But I need some kind of visual appeal for this essay and Shearer is usually good for that.

    Corey Pandolph, Phil Frank, and Joe Troise’s The Elderberries rerun for the 7th is also a very marginal mention. But, what the heck, it’s got some of your standard wordplay about angles and it’ll get this week’s essay that much closer to 800 words.

     
  • Joseph Nebus 6:00 pm on Friday, 11 November, 2016 Permalink | Reply

    The End 2016 Mathematics A To Z: Ergodic 


    This essay follows up on distributions, mentioned back on Wednesday. This is only one of the ideas which distributions serve. Do you have a word you’d like to request? I figure to close ‘F’ on Saturday afternoon, and ‘G’ is already taken. But give me a request for a free letter soon and I may be able to work it in.

    Ergodic.

    There comes a time when a physics major, or a mathematics major paying attention to one of the field’s best non-finance customers, first works on a statistical mechanics problem. Instead of keeping track of the positions and momentums of one or two or four particles she’s given the task of tracking millions of particles. It’s listed as a distribution of all the possible values they can have. But she still knows what it really is. And she looks at how to describe the way this distribution changes in time. If she’s the slightest bit like me, or anyone I knew, she freezes up at this. Calculate the development of millions of particles? Impossible! She tries working out what happens to just one, instead, and hopes that gives some useful results.

    And then it does.

    It’s a bit much to call this luck. But it is because the student starts off with some simple problems. Particles of gas in a strong box, typically. They don’t interact chemically. Maybe they bounce off each other, but she’s never asked about that. She’s asked about how they bounce off the walls. She can find the relationship between the volume of the box, the gas pressure on its interior, and the temperature of the gas. And it comes out right.

    She goes on to some other problems and it suddenly fails. Eventually she re-reads the descriptions of how to do this sort of problem. And she does them again and again and it doesn’t feel useful. With luck there’s a moment, possibly while showering, when the universe suddenly changes. And the next time the problem works out. She’s working on distributions instead of toy little single-particle problems.

    But the problem remains: why did it ever work, even for that toy little problem?

    It’s because some systems of things are ergodic. It’s a property that some physics (or mathematics) problems have. Not all. It’s a bit hard to describe clearly. Part of what motivated me to take this topic is that I want to see if I can explain it clearly.

    Every part of some system has a set of possible values it might have. A particle of gas can be in any spot inside the box holding it. A person could be in any of the buildings of her city. A pool ball could be travelling in any direction on the pool table. Sometimes that will change. Gas particles move. People go to the store. Pool balls bounce off the edges of the table.

    These values will have some kind of distribution. Look at where the gas particle is now. And a second from now. And a second after that. And so on, to the limits of human knowledge. Or to when the box breaks open. Maybe the particle will be more often in some areas than in others. Maybe it won’t. Doesn’t matter. It has some distribution. Over time we can say how often we expect to find the gas particle in each of its possible places.

    The same with whatever our system is. People in buildings. Balls on pool tables. Whatever.

    Now instead of looking at one particle (person, ball, whatever) we have a lot of them. Millions of particles in the box. Tens of thousands of people in the city. A pool table that somehow supports ten thousand balls. Imagine they’re all settled to wherever they happen to be.

    So where are they? The gas particle one is easy to imagine. At least for a mathematics major. If you’re stuck on it I’m sorry. I didn’t know. I’ve thought about boxes full of gas particles for decades now and it’s hard to remember that isn’t normal. Let me know if you’re stuck, and where you are. I’d like to know where the conceptual traps are.

    But back to the gas particles in a box. Some fraction of them are in each possible place in the box. There’s a distribution here of how likely you are to find a particle in each spot.

    How does that distribution, the one you get from lots of particles at once, compare to the first, the one you got from one particle given plenty of time? If they agree the system is ergodic. And that’s why my hypothetical physics major got the right answers from the wrong work. (If you are about to write me to complain I’m leaving out important qualifiers let me say I know. Please pretend those qualifiers are in place. If you don’t see what someone might complain about thank you, but it wouldn’t hurt to think of something I might be leaving out here. Try taking a shower.)
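
    If you like seeing the idea in code, here’s a minimal sketch. It uses a toy system I’m supplying just for illustration, a point hopping around a circle by a fixed irrational step, rather than anything so grand as gas in a box. The long-time average for one point and the one-moment average over many points come out about the same, which is the ergodic property in miniature.

        # One system watched for a long time, versus many systems glanced at once.
        import random

        STEP = 0.618033988749895  # a fixed irrational-ish step around a circle of circumference 1

        def in_left_half(x):
            return x < 0.5

        # Time average: follow a single point for many steps.
        x = random.random()
        T = 100_000
        time_hits = 0
        for _ in range(T):
            x = (x + STEP) % 1.0
            time_hits += in_left_half(x)

        # Ensemble average: look once at many points scattered over the circle.
        N = 100_000
        ensemble_hits = sum(in_left_half(random.random()) for _ in range(N))

        print(f"fraction of time one point spends in the left half:  {time_hits / T:.3f}")
        print(f"fraction of many points in the left half right now:  {ensemble_hits / N:.3f}")
        # Both come out close to 0.5.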

    The person in a building is almost certainly not an ergodic system. There’s buildings any one person will never ever go into, however possible it might be. But nearly all buildings have some people who will go into them. The one-person-with-time distribution won’t be the same as the many-people-at-once distribution. Maybe there’s a way to qualify things so that it becomes ergodic. I doubt it.

    The pool table, now, that’s trickier to say. For a real pool table no, of course not. An actual ball on an actual table rolls to a stop pretty soon, either from the table felt’s friction or because it drops into a pocket. Tens of thousands of balls would form an immobile heap on the table that would be pretty funny to see, now that I think of it. Well, maybe those are the same. But they’re a pretty boring same.

    Anyway when we talk about “pool tables” in this context we don’t mean anything so sordid as something a person could play pool on. We mean something where the table surface hasn’t any friction. That makes the physics easier to model. It also makes the game unplayable, which leaves the mathematical physicist strangely unmoved. In this context anyway. We also mean a pool table that hasn’t got any pockets. This makes the game even more unplayable, but the physics even easier. (It makes it, really, like a gas particle in a box. Only without that difficult third dimension to deal with.)

    And that makes it clear. The one ball on a frictionless, pocketless table bouncing around forever maybe we can imagine. A huge number of balls on that frictionless, pocketless table? Possibly trouble. As long as we’re doing imaginary impossible unplayable pool we could pretend the balls don’t collide with each other. Then the distributions of what ways the balls are moving could be equal. If they do bounce off each other, or if they get so numerous they can’t squeeze past one another, well, that’s different.

    An ergodic system lets you do this neat, useful trick. You can look at a single example for a long time. Or you can look at a lot of examples at one time. And they’ll agree in their typical behavior. If one is easier to study than the other, good! Use the one that you can work with. Mathematicians like to do this sort of swapping between equivalent problems a lot.

    The problem is it’s hard to find ergodic systems. We may have a lot of things that look ergodic, that feel like they should be ergodic. But proved ergodic, with a logic that we can’t shake? That’s harder to do. Often in practice we will include a note up top that we are assuming the system to be ergodic. With that “ergodic hypothesis” in mind we carry on with our work. It gives us a handle on a lot of problems that otherwise would be beyond us.

     
  • Joseph Nebus 6:00 pm on Wednesday, 9 November, 2016 Permalink | Reply
    Tags: Josiah Willard Gibbs

    The End 2016 Mathematics A To Z: Distribution (statistics) 


    As I’ve done before I’m using one of my essays to set up for another essay. It makes a later essay easier. What I want to talk about is worth some paragraphs on its own.

    Distribution (statistics)

    The 19th Century saw the discovery of some unsettling truths about … well, everything, really. If there is an intellectual theme of the 19th Century it’s that everything has an unsettling side. In the 20th Century craziness broke loose. The 19th Century, though, saw great reasons to doubt that we knew what we knew.

    But one of the unsettling truths grew out of mathematical physics. We start out studying physics the way Galileo or Newton might have, with falling balls. Ones that don’t suffer from air resistance. Then we move up to more complicated problems, like balls on a spring. Or two balls bouncing off each other. Maybe one ball, called a “planet”, orbiting another, called a “sun”. Maybe a ball on a lever swinging back and forth. We try a couple simple problems with three balls and find out that’s just too hard. We have to track so much information about the balls, about their positions and momentums, that we can’t solve any problems anymore. Oh, we can do the simplest ones, but we’re helpless against the interesting ones.

    And then we discovered something. By “we” I mean people like James Clerk Maxwell and Josiah Willard Gibbs. And that is that we can know important stuff about how millions and billions and even vaster numbers of things move around. Maxwell could work out how the enormously many chunks of rock and ice that make up Saturn’s rings move. Gibbs could work out how the trillions of trillions of trillions of trillions of particles of gas in a room move. We can’t work out how four particles move. How is it we can work out how a godzillion particles move?

    We do it by letting go. We stop looking for that precision and exactitude and knowledge down to infinitely many decimal points. Even though we think that’s what mathematicians and physicists should have. What we do instead is consider the things we would like to know. Where something is. What its momentum is. What side of a coin is showing after a toss. What card was taken off the top of the deck. What tile was drawn out of the Scrabble bag.

    There are possible results for each of these things we would like to know. Perhaps some of them are quite likely. Perhaps some of them are unlikely. We track how likely each of these outcomes is. This is called the distribution of the values. This can be simple. The distribution for a fairly tossed coin is “heads, 1/2; tails, 1/2”. The distribution for a fairly tossed six-sided die is “1/6 chance of 1; 1/6 chance of 2; 1/6 chance of 3” and so on. It can be more complicated. The distribution for a fairly tossed pair of six-sided dice starts out “1/36 chance of 2; 2/36 chance of 3; 3/36 chance of 4” and so on. If we’re measuring something that doesn’t come in nice discrete chunks we have to talk about ranges: the chance that a 30-year-old male weighs between 180 and 185 pounds, or between 185 and 190 pounds. The chance that a particle in the rings of Saturn is moving between 20 and 21 kilometers per second, or between 21 and 22 kilometers per second, and so on.
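
    If you’d like to see where those thirty-sixths come from, here’s a minimal sketch in Python: list all 36 equally likely ways two fair dice can land and count how many give each sum.

        # Tally the distribution of the sum of two fair six-sided dice.
        from collections import Counter
        from fractions import Fraction

        counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
        for total in sorted(counts):
            print(total, Fraction(counts[total], 36))
        # 2 -> 1/36, 3 -> 1/18 (that is, 2/36), 4 -> 1/12 (3/36), ..., 7 -> 1/6 (6/36), and back down.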

    We may be unable to describe how a system evolves exactly. But often we’re able to describe how the distribution of its possible values evolves. And the laws by which probability works conspire in our favor here. We can get quite precise predictions for how a whole bunch of things behave even without ever knowing what any thing is doing.
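
    Here’s a minimal sketch of what “describe how the distribution evolves” can mean in practice. The system is a toy of my own choosing, a symmetric random walk on the whole numbers: instead of simulating any walkers we push the whole table of probabilities forward one step at a time.

        # Evolve the probability distribution of a symmetric random walk directly.
        dist = {0: 1.0}  # at time zero the walker is certainly at position 0

        def step(dist):
            new = {}
            for pos, p in dist.items():
                for nxt in (pos - 1, pos + 1):            # the walker moves left or right...
                    new[nxt] = new.get(nxt, 0.0) + p / 2  # ...each with probability one-half
            return new

        for _ in range(10):
            dist = step(dist)

        print(sum(dist.values()))       # still 1, up to rounding: something always happens
        print(dist[0], dist.get(1, 0))  # after an even number of steps the odd positions are empty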

    That’s unsettling to start with. It’s made worse by one of the 19th Century’s late discoveries, that of chaos. That a system can be perfectly deterministic. That you might know what every part of it is doing as precisely as you care to measure. And you’re still unable to predict its long-term behavior. That’s unshakeable too, although statistical techniques will give you an idea of how likely different behaviors are. You can learn the distribution of what is likely, what is unlikely, and how often the outright impossible will happen.

    Distributions follow rules. Of course they do. They’re basically the rules you’d imagine from looking at and thinking about something with a range of values. Something like a chart of how many students got what grades in a class, or how tall the people in a group are, or so on. Each possible outcome turns up some fraction of the time. That fraction’s never less than zero nor greater than 1. Add up all the fractions representing all the times every possible outcome happens and the sum is exactly 1. Something happens, even if we never know just what. But we know how often each outcome will.
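
    Written as formulas, with p_j standing for the fraction of the time that outcome number j turns up, those two rules read:

    0 \le p_j \le 1 \qquad \mbox{and} \qquad \sum_{j} p_j = 1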

    There is something amazing to consider here. We can know and track everything there is to know about a physical problem. But we will be unable to do anything with it, except for the most basic and simple problems. We can choose to relax, to accept that the world is unknown and unknowable in detail. And this makes imaginable all sorts of problems that should be beyond our power. Once we’ve given up on this precision we get precise, exact information about what could happen. We can choose to see it as a moral about the benefits and costs and risks of how tightly we control a situation. It’s a surprising lesson to learn from one’s training in mathematics.

     
  • Joseph Nebus 3:00 pm on Tuesday, 5 April, 2016 Permalink | Reply
    Tags: square root day

    JH van ‘t Hoff and the Gaseous Theory of Solutions; also, Pricing Games 


    Do you ever think about why stuff dissolves? Like, why a spoon of sugar in a glass of water should seem to disappear instead of turning into a slight change in the water’s clarity? Well, sure, in those moods when you look at the world as a child does, not accepting that life is just like that, and instead imagining it could be otherwise. Take that sort of question and put it to adult inquiry and you get great science.

    Peter Mander of the Carnot Cycle blog this month writes a tale about Jacobus Henricus van ‘t Hoff, the first winner of a Nobel Prize for Chemistry. In 1883, on hearing of an interesting experiment with semipermeable membranes, van ‘t Hoff had a brilliant insight about why things go into solution, and how. The insight had only one little problem. It makes for fine reading about the history of chemistry and of its mathematical study.


    In other, television-related news, the United States edition of The Price Is Right included a mention of “square root day” yesterday, 4/4/16. It was in the game “Cover-Up”, in which the contestant tries making successively better guesses at the price of a car. This they do by covering up wrong digits with new guesses. For the start of the game, before the contestant’s made any guesses, they need something irrelevant to the game to be on the board. So, they put up mock calendar pages for 1/1/2001, 2/2/2004, 3/3/2009, 4/4/2016, and finally a card reading \sqrt{DAY} . The game show also had a round devoted to Pi Day a few weeks back. So I suppose they’re trying to reach out to people into pop mathematics. It’s cute.

     
    • Marta Frant 5:27 am on Thursday, 7 April, 2016 Permalink | Reply

      Questions, questions, questions… The constant ‘why’ is what makes the world go around.


      • Joseph Nebus 2:07 am on Saturday, 9 April, 2016 Permalink | Reply

        ‘Why’ is indeed one of the big questions. ‘What’ and ‘The Heck?’ are also pretty important.

        Liked by 1 person

  • Joseph Nebus 11:04 pm on Monday, 22 February, 2016 Permalink | Reply
    Tags: caucuses, ensembles

    Ensembled 


    A couple weeks back voting in the Democratic party’s Iowa caucus had several districts tied between Clinton and Sanders supporters. The ties were broken by coin tosses. That fact produced a bunch of jokes at Iowa’s expense. I can’t join in this joking. If the votes don’t support one candidate over another, but someone must win, what’s left but an impartial tie-breaking scheme?

    After Clinton won six of the coin tosses people joked about the “impartial” idea breaking down. Well, we around here know that there are no unfair coins. And while it’s possible to have an unfair coin toss, I’m not aware of any reason to think any of the tosses were. It’s lucky to win six coin tosses. If the tosses are fair, the chance of getting any one right is one-half. Suppose the tosses are “independent”. That is, the outcome of one doesn’t change the chances of any other. Then the chance of getting six right in a row is the chance of getting one right, times itself, six times over. That is, the chance is one-half raised to the sixth power. That’s a small number, about 1.5 percent. But it’s not so riotously small as to deserve rioting.
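
    Worked out, that’s \left(\frac{1}{2}\right)^6 = \frac{1}{64} \approx 0.0156 , which is where the 1.5 percent figure comes from.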

    My love asked me about a claim about this made on a Facebook discussion. The writer asserted that six heads was exactly as likely as any other outcome of six coin tosses. My love wondered: is that true?

    Yes and no. It depends on what you mean by “any other outcome”. Grant that heads and tails are equally likely to come up. Grant also that coin tosses are independent. Then six heads, H H H H H H, are just as likely to come up as six tails, T T T T T T. I don’t think anyone will argue with me that far.

    But are both of these exactly as likely as the first toss coming up heads and all the others tails? As likely as H T T T T T? Yes, I would say they are. But I understand if you feel skeptical, and if you want convincing. The chance of getting heads once in a fair coin toss is one-half. We started with that. What’s the chance of getting five tails in a row? That must be one-half raised to the fifth power. The first coin toss and the last five don’t depend on one another. This means the chance of that first heads followed by those five tails is one-half times one-half to the fifth power. And that’s one-half to the sixth power.

    What about the first two tosses coming up heads and the next four tails? H H T T T T? We can run through the argument again. The chance of two coin tosses coming up heads would be one-half to the second power. The chance of four coin tosses coming up tails would be one-half to the fourth power. The chance of the first streak being followed by the second is the product of the two chances. One-half to the second power times one-half to the fourth power is one-half to the sixth power.

    We could go on like this and try out all the possible outcomes. There’s only 64 of them. That’s going to be boring. We could prove any particular string of outcomes is just as likely as any other. We need to make an argument that’s a little more clever, but also a little more abstract.

    Don’t think just now of a particular sequence of coin toss outcomes. Consider this instead: what is the chance you will call a coin toss right? You might call heads, you might call tails. The coin might come up heads, the coin might come up tails. The chance you call it right, though — well, won’t that be one-half? Stay at this point until you’re sure it is.

    So write out a sequence of possible outcomes. Don’t tell me what it is. It can be any set of H and T, as you like, as long as it’s six outcomes long.

    What is the chance you wrote down six correct tosses in a row? That’ll be the chance of calling one outcome right, one-half, times itself six times over. One-half to the sixth power. So I know the probability that your prediction was correct. Which of the 64 possible outcomes did you write down? I don’t know. I suspect you didn’t even write one down. I would’ve just pretended I had one in mind until the essay required me to do something too. But the exact same argument applies no matter which sequence you pretended to write down. (Look at it. I didn’t use any information about what sequence you would have picked. So how could the sequence affect the outcome?) Therefore each of the 64 possible outcomes has the same chance of coming up.

    So in this context, yes, six heads in a row is exactly as likely as any other sequence of six coin tosses.

    I will guess that you aren’t perfectly happy with this argument. It probably feels like something is unaccounted-for. What’s unaccounted-for is that nobody cares about the difference between the sequence H H T H H H and the sequence H H H T H H. Would you even notice the difference if I hadn’t framed the paragraph to make the difference stand out? In either case, the sequence is “one tail, five heads”. What’s the chance of getting “one tail, five heads”?

    Well, the chance of getting one of several mutually exclusive outcomes is the sum of the chance of each individual outcome. And these are mutually exclusive outcomes: you can’t get both H H T H H H and H H H T H H as the result of the same set of coin tosses.

    (There can be not-mutually-exclusive outcomes. Consider, for example, the chance of getting “at least three tails” and the chance of the third coin toss being heads. Calculating the chance of either of those outcomes happening demands more thinking. But we don’t have to deal with that here, so we won’t.)

    There are six distinct ways to get one tails and five heads. The tails can be the first toss’s result. Or the tails can be the second toss’s result. Or the tails can be the third toss’s result. And so on. Each of these possible outcomes has the same probability, one-half to the sixth power. So the chance of getting “one tails, five heads” is one-half to the sixth power, added to itself, six times over. That is, it’s six times one-half to the sixth power. That will come up about one time in eleven that you do a sequence of six coin tosses.

    There are fifteen ways to get two tails and four heads. So the chance of the outcome being “two tails, four heads” is fifteen times one-half to the sixth power. That will come up a bit less than one in four times.

    There are twenty, count ’em, ways to get three tails and three heads. So the chance of that is twenty times one-half to the sixth power. That’s a little more than three times in ten. There are fifteen ways to get four tails and two heads, so the chance of that drops again. There’s six ways to get five tails and one heads. And there’s just one way to get six tails and no heads on six coin tosses.
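
    If you don’t feel like counting combinations by hand, a few lines of Python will brute-force the tally over all 64 equally likely sequences:

        # Count the 64 possible six-toss sequences by how many tails each contains.
        from itertools import product
        from collections import Counter

        tally = Counter(seq.count('T') for seq in product('HT', repeat=6))
        print(sorted(tally.items()))
        # [(0, 1), (1, 6), (2, 15), (3, 20), (4, 15), (5, 6), (6, 1)]
        # One way to get no tails, six ways to get one tail, fifteen for two, twenty for three.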

    So if you think of the outcome as “this many tails and that many heads”, then, no, not all outcomes are equally likely. “Three tails and three heads” is a lot more likely than “no tails and six heads”. “Two tails and four heads” is more likely than “one tails and five heads”.

    Whether it’s right to say “every outcome is just as likely” depends on what you think “an outcome” is. If it’s a particular sequence of heads and tails, then yes, it is. If it’s the aggregate statistic of how many heads and tails, then no, it’s not.

    We see this kind of distinction all over the place. Every hand of cards, for example, might be as likely to turn up as every other hand of cards. But consider five-card poker hands. There are very few hands that have the interesting pattern of being a straight flush, five sequential cards of the same suit. There are more hands that have the interesting pattern of four-of-a-kind. There are a lot of hands that have the mildly interesting pattern of two-of-a-kind and nothing else going on. There’s a huge mass of hands that don’t have any pattern we’ve seen fit to notice. So a straight flush is regarded as a very unlikely hand to have, and four-of-a-kind more likely but still rare. Two-of-a-kind is none too rare. Nothing at all is most likely, at least in a five-card hand. (When you get seven cards, a hand with nothing at all becomes less likely. You have so many chances that you just have to hit something.)

    The distinction carries over into statistical mechanics. The field studies the state of things. Is a mass of material solid or liquid or gas? Is a solid magnetized or not, or is it trying to be? Are molecules in a high- or a low-energy state?

    Mathematicians use the name “ensemble” to describe a state of whatever it is we’re studying. But we have the same problem of saying what kind of description we mean. Suppose we are studying the magnetism of a solid object. We do this by imagining the object as a bunch of smaller regions, each with a tiny bit of magnetism. That bit might have the north pole pointing up, or the south pole pointing up. We might say the ensemble is that there are ten percent more north-pole-up regions than there are south-pole-up regions.

    But by that, do we mean we’re interested in “ten percent more north-pole-up than south-pole-up regions”? Or do we mean “these particular regions are north-pole-up, and these are south-pole-up”? We distinguish this by putting in some new words.

    The “canonical ensemble” is, generally, the kind of aggregate-statistical-average description of things. So, “ten percent more north-pole-up than south-pole-up regions” would be such a canonical ensemble. Or “one tails, five heads” would be a canonical ensemble. If we want to look at the fine details we speak of the “microcanonical ensemble”. That would be “these particular regions are north-pole-up, and these are south-pole-up”. Or that would be “the coin tosses came up H H H T H H”.

    Just what is a canonical and what is a microcanonical ensemble depends on context. Of course it would. Consider the standpoint of the city manager, hoping to estimate the power and water needs of neighborhoods and bringing the language of statistical mechanics to the city-planning world. There, it is enough detail to know how many houses on a particular street are occupied and how many residents there are. She could fairly consider that a microcanonical ensemble. From the standpoint of the letter carriers for the post office, though, that would be a canonical ensemble. It would give an idea how much time would be needed to deliver on that street. But it would be just short of useful in getting letters to recipients. The letter carrier would want to know which people are in which house before rating that a microcanonical ensemble.

    Much of statistical mechanics is studying ensembles, and which ensembles are more or less likely than others. And how that likelihood changes as conditions change.

    So let me answer the original question. In this coin-toss problem, yes, every microcanonical ensemble is just as likely as every other microcanonical ensemble. The sequence ‘H H H H H H’ is just as likely as ‘H T H H H T’ or ‘T T H T H H’ are. But not every canonical ensemble is as likely as every other one. Six heads in six tosses are less likely than two heads and four tails, or three heads and three tails, are. The answer depends on what you mean by the question.

     
    • Ken Dowell 3:52 am on Tuesday, 23 February, 2016 Permalink | Reply

      So it occurs to me that problem is not the coin toss but rather the system that demands a winner even though there really isn’t one.


      • Joseph Nebus 4:12 am on Tuesday, 23 February, 2016 Permalink | Reply

        Well, it is. But using any kind of voting scheme to make decisions leaves you vulnerable to problems. What to do in case of a tie is an obvious one. Once there’s three possible choices in play (and at least three voters) it becomes impossible to guarantee a perfectly fair outcome.


  • Joseph Nebus 3:00 pm on Monday, 21 September, 2015 Permalink | Reply
    Tags: Brownian motion, symmetries

    Reading the Comics, September 16, 2015: Celebrity Appearance Edition 


    I couldn’t go on calling this Back To School Editions. A couple of the comic strips the past week have given me reason to mention people famous in mathematics or physics circles, and one who’s even famous in the real world too. That’ll do for a title.

    Jeff Corriveau’s Deflocked for the 15th of September tells what I want to call an old joke about geese formations. The thing is that I’m not sure it is an old joke. At least I can’t think of it being done much. It seems like it should have been.

    The formations that geese, or other birds, fly in have been a neat corner of mathematics. The question they inspire is “how do birds know what to do?” How can they form complicated groupings and, more, change their flight patterns at a moment’s notice? (Geese flying in V shapes don’t need to do that, but other flocking birds will.) One surprising answer is that if each bird is just trying to follow a couple of simple rules, then if you have enough birds, the group will do amazingly complex things. This is good for people who want to say how complex things come about. It suggests you don’t need very much to have robust and flexible systems. It’s also bad for people who want to say how complex things come about. It suggests that many things that would be interesting can’t be studied in simpler models. Use a smaller number of birds or fewer rules or such and the interesting behavior doesn’t appear.

    The geese are flying in V, I, and X patterns. The guess is that they're Roman geese.

    Jeff Corriveau’s Deflocked for the 15th of September, 2015.

    Scott Adams’s Dilbert Classics from the 15th and 16th of September (originally run the 22nd and 23rd of July, 1992) are about mathematical forecasts of the future. This is a hard field. It’s one people have been dreaming of doing for a long while. J Willard Gibbs, the renowned 19th century physicist who put the mathematics of thermodynamics in essentially its modern form, pondered whether a thermodynamics of history could be made. But attempts at making such predictions top out at demographic or rough economic forecasts, and for obvious reasons.

    The next day Dilbert’s garbageman, the smartest person in the world, asserts the problem is chaos theory, that “any complex iterative model is no better than a wild guess”. I wouldn’t put it that way, although I’m not sure what would convey the idea within the space available. One problem with predicting complicated systems, even if they are deterministic, is that there is a difference between what we can measure a system to be and what the system actually is. And for some systems that slight error will be magnified quickly to the point that a prediction based on our measurement is useless. (Fortunately this seems to affect only interesting systems, so we can still do things like study physics in high school usefully.)
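
    Here’s a minimal sketch of that measurement problem, using the logistic map, a standard toy chaotic system that has nothing to do with the comic strip: two starting values that differ by a millionth soon disagree completely.

        # Two nearly identical starting points in the logistic map drift apart fast.
        r = 4.0
        x, y = 0.400000, 0.400001   # "the real system" and "our slightly-off measurement of it"
        for step in range(1, 51):
            x = r * x * (1 - x)
            y = r * y * (1 - y)
            if step % 10 == 0:
                print(f"step {step:2d}: {x:.6f} versus {y:.6f}")
        # By roughly step 30 the two trajectories have nothing to do with each other.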

    Maria Scrivan’s Half Full for the 16th of September makes the Common Core joke. A generation ago this was a New Math joke. It’s got me curious about the history of attempts to reform mathematics teaching, and how poorly they get received. Surely someone’s written a popular or at least semipopular book about the process? I need some friends in the anthropology or sociology departments to tell me, I suppose.

    In Mark Tatulli’s Heart of the City for the 16th of September, Heart is already feeling lost in mathematics. She’s in enough trouble she doesn’t recognize mathematics terms. That is an old joke, too, although I think the best version of it was done in a Bloom County with no mathematical content. (Milo Bloom met his idol Betty Crocker and learned that she was a marketing icon who knew nothing of cooking. She didn’t even recognize “shish kebob” as a cooking term.)

    Mell Lazarus’s Momma for the 16th of September sneers at the idea of predicting where specks of dust will land. But the motion of dust particles is interesting. What can be said about the way dust moves when the dust is being battered by air molecules that are moving as good as randomly? This becomes a problem in statistical mechanics, and one that depends on many things, including just how fast air particles move and how big molecules are. Now for the celebrity part of this story.

    Albert Einstein published four papers in his “Annus mirabilis” year of 1905. One of them was the Special Theory of Relativity, and another the mass-energy equivalence. Those, and the General Theory of Relativity, are surely why he became and still is a familiar name to people. One of his others was on the photoelectric effect. It’s a cornerstone of quantum mechanics. If Einstein had done nothing in relativity he’d still be renowned among physicists for that. The last paper, though, that was on Brownian motion, the movement of particles buffeted by random forces like this. And if he’d done nothing in relativity or quantum mechanics, he’d still probably be known in statistical mechanics circles for this work. Among other things this work gave the first good estimates for the size of atoms and molecules, and gave easily observable, macroscopic-scale evidence that molecules must exist. That took some work, though.

    Dave Whamond’s Reality Check for the 16th of September shows off the Metropolitan Museum of Symmetry. This is probably meant to be an art museum. Symmetries are studied in mathematics too, though. Many symmetries, the ways you can swap shapes around, form interesting groups or rings. And in mathematical physics, symmetries give us useful information about the behavior of systems. That’s enough for me to claim this comic is mathematically linked.

     
  • Joseph Nebus 3:00 pm on Tuesday, 18 August, 2015 Permalink | Reply
    Tags: detailed balance, rankings

    How Pinball Leagues and Chemistry Work: The Mathematics 


    My love and I play in several pinball leagues. I need to explain something of how they work.

    Most of them organize league nights by making groups of three or four players and having them play five games each on a variety of pinball tables. The groupings are made by order. The 1st through 4th highest-ranked players who’re present are the first group, the 5th through 8th the second group, the 9th through 12th the third group, and so on. For each table the player with the highest score gets some number of league points. The second-highest score earns a lesser number of league points, third-highest gets fewer points yet, and the lowest score earns the player comments about how the table was not being fair. The total number of points goes into the player’s season score, which gives her ranking.
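
    Just to make the grouping rule concrete, here’s a minimal sketch, with made-up player names, of slicing a rank-ordered list into fours the way described above.

        # Slice a rank-ordered list of players into groups of (up to) four.
        players = ["Ann", "Bob", "Cal", "Dee", "Eve", "Flo", "Gus", "Hal", "Ira", "Joy"]
        groups = [players[i:i + 4] for i in range(0, len(players), 4)]
        print(groups)
        # [['Ann', 'Bob', 'Cal', 'Dee'], ['Eve', 'Flo', 'Gus', 'Hal'], ['Ira', 'Joy']]
        # A leftover pair like that last group would need handling this sketch doesn't show;
        # the leagues described here use groups of three or four.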

    You might see the bootstrapping problem here. Where do the rankings come from? And what happens if someone joins the league mid-season? What if someone misses a competition day? (Some leagues give a fraction of points based on the player’s season average. Other leagues award no points.) How does a player get correctly ranked?

    (More …)

     
    • sheldonk2014 3:26 pm on Tuesday, 18 August, 2015 Permalink | Reply

      Interesting sounding


      • Joseph Nebus 9:13 pm on Saturday, 22 August, 2015 Permalink | Reply

        Glad you’re interested. My love and I dropped in the rankings last pinball league meeting, although we were a bit overvalued going into it. So that was just part of the process of getting back to our detailed balance.

        Still, there’s something terrible about putting up my best-ever game on Cirqus Voltaire and coming in third (of four) on that table. On the bright side I put in my best-ever game on Metallica too and came in first. And that was the final game of the night, so I could go out on a high note.

        Liked by 1 person

  • Joseph Nebus 5:00 pm on Sunday, 21 June, 2015 Permalink | Reply

    Reading the Comics, June 21, 2015: Blatantly Padded Edition, Part 2 


    I said yesterday I was padding one mathematics-comics post into two for silly reasons. And I was. But there were enough Sunday comics on point that splitting one entry into two has turned out to be legitimate. Nice how that works out sometimes.

    Mason Mastroianni, Mick Mastroianni, and Perri Hart’s B.C. (June 19) uses mathematics as something to heap upon a person until they yield to your argument. It’s a fallacious way to argue, but it does work. Even at a mathematical conference the terror produced by a screen full of symbols can chase follow-up questions away. On the 21st, they present mathematics as a more obviously useful thing. Well, mathematics with a bit of physics.

    Nate Frakes’s Break Of Day (June 19) is this week’s anthropomorphic algebra joke.

    Life at the quantum level: one subatomic particle suspects the other of being unfaithful because both know he could be in two places at once.

    Niklas Eriksson’s Carpe Diem for the 20th of June, 2015.

    Niklas Eriksson’s Carpe Diem (June 20) is captioned “Life at the Quantum Level”. And it’s built on the idea that quantum particles could be in multiple places at once. Whether something can be in two places at once depends on coming up with a clear idea about what you mean by “thing” and “places” and for that matter “at once”; when you try to pin the ideas down they prove to be slippery. But the mathematics of quantum mechanics is fascinating. It cries out for treating things we would like to know about, such as positions and momentums and energies of particles, as distributions instead of fixed values. That is, we know how likely it is a particle is in some region of space compared to how likely it is somewhere else. In statistical mechanics we resort to this because we want to study so many particles, or so many interactions, that it’s impractical to keep track of them all. In quantum mechanics we need to resort to this because it appears this is just how the world works.

    (It’s even less on point, but Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 21st of June has a bit of riffing on Schrödinger’s Cat.)

    Brian and Ron Boychuk’s Chuckle Brothers (June 20) name-drops algebra as the kind of mathematics kids still living with their parents have trouble with. That’s probably required by the desire to make a joking definition of “aftermath”, so that some specific subject has to be named. And it needs parents to still be watching closely over their kids, something that doesn’t quite fit for college-level classes like Intro to Differential Equations. So algebra, geometry, or trigonometry it must be. I am curious whether algebra reads as the funniest of that set of words, or if it just fits better in the space available. ‘Geometry’ is as long a word as ‘algebra’, but it may not have the same connotation of being an impossibly hard class.

    Little Iodine does badly in arithmetic in class. But she's very good at counting the calories, and the cost, of what her teacher eats.

    Jimmy Hatlo’s Little Iodine for the 18th of April, 1954, and rerun the 18th of June, 2015.

    And from the world of vintage comic strips, Jimmy Hatlo’s Little Iodine (June 21, originally run the 18th of April, 1954) reminds us that anybody can do any amount of arithmetic if it’s something they really want to calculate.

    Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney (June 21) is another strip using the idea of mathematics — and particularly word problems — to signify great intelligence. I suppose it’s easier to recognize the form of a word problem than it is to recognize a good paper on the humanities if you only have two dozen words to show it in.

    Juba’s Viivi and Wagner (June 21) is a timely reminder that while sudokus may be fun logic puzzles, they are ultimately the puzzle you decide to make of them.

     
  • Joseph Nebus 4:15 pm on Saturday, 13 June, 2015 Permalink | Reply

    Conditions of equilibrium and stability 


    This month Peter Mander’s CarnotCycle blog talks about the interesting world of statistical equilibriums. And particularly it talks about stable equilibriums. A system’s in equilibrium if it isn’t going to change over time. It’s in a stable equilibrium if being pushed a little bit out of equilibrium isn’t going to send it off into some wholly different state.

    For simple physical problems these are easy to understand. For example, a marble resting at the bottom of a spherical bowl is in a stable equilibrium. At the exact bottom of the bowl, the marble won’t roll away. If you give the marble a little nudge, it’ll roll around, but it’ll stay near where it started. A marble sitting on the top of a sphere is in an equilibrium — if it’s perfectly balanced it’ll stay where it is — but it’s not a stable one. Give the marble a nudge and it’ll roll away, never to come back.
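
    For what it’s worth, the marble picture has a tidy mathematical statement. If the marble’s potential energy is some function U(x) of its position, an equilibrium is a spot where U'(x_0) = 0 . It’s stable when U''(x_0) > 0 , a local minimum like the bottom of the bowl, and unstable when U''(x_0) < 0 , a local maximum like the top of the sphere.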

    In statistical mechanics we look at complicated physical systems, ones with thousands or millions or even really huge numbers of particles interacting. But there are still equilibriums, some stable, some not. In these, stuff will still happen, but the kind of behavior doesn’t change. Think of a steadily-flowing river: none of the water is staying still, or close to it, but the river isn’t changing.

    CarnotCycle describes how to tell, from properties like temperature and pressure and entropy, when systems are in a stable equilibrium. These are properties that don’t tell us a lot about what any particular particle is doing, but they can describe the whole system well. The essay is higher-level than usual for my blog. But if you’re taking a statistical mechanics or thermodynamics course this is just the sort of essay you’ll find useful.


    carnotcycle


    In terms of simplicity, purely mechanical systems have an advantage over thermodynamic systems in that stability and instability can be defined solely in terms of potential energy. For example the center of mass of the tower at Pisa, in its present state, must be higher than in some infinitely near positions, so we can conclude that the structure is not in stable equilibrium. This will only be the case if the tower attains the condition of metastability by returning to a vertical position or absolute stability by exceeding the tipping point and falling over.


    Thermodynamic systems lack this simplicity, but in common with purely mechanical systems, thermodynamic equilibria are always metastable or stable, and never unstable. This is equivalent to saying that every spontaneous (observable) process proceeds towards an equilibrium state, never away from it.

    If we restrict our attention to a thermodynamic system of unchanging composition and apply…

    View original post 2,534 more words

     
    • sheldonk2014 4:29 pm on Saturday, 13 June, 2015 Permalink | Reply

      I love these theories,great break down of physics,makes me want to look closer at life


      • Joseph Nebus 2:19 am on Tuesday, 16 June, 2015 Permalink | Reply

        Well, thank you. If you can feel inspired to learn about remarkable things then I’m quite happy.


  • Joseph Nebus 10:22 pm on Friday, 24 April, 2015 Permalink | Reply
    Tags: John von Neumann, Ludwig Boltzmann, Shannon entropy

    A Little More Talk About What We Talk About When We Talk About How Interesting What We Talk About Is 


    I had been talking about how much information there is in the outcome of basketball games, or tournaments, or the like. I wanted to fill in at least one technical term, to match some of the others I’d given.

    In this information-theory context, an experiment is just anything that could have different outcomes. A team can win or can lose or can tie in a game; that makes the game an experiment. The outcomes are the team wins, or loses, or ties. A team can get a particular score in the game; that makes that game a different experiment. The possible outcomes are the team scores zero points, or one point, or two points, or so on up to whatever the greatest possible score is.

    If you know the probability p of each of the different outcomes, and since this is a mathematics thing we suppose that you do, then we have what I was calling the information content of the outcome of the experiment. That’s a number, measured in bits, and given by the formula

    \sum_{j} - p_j \cdot \log\left(p_j\right)

    The sigma summation symbol means to evaluate the expression to the right of it for every value of some index j, and to add the results together. The p_j means the probability of outcome number j. And the logarithm may be that of any base, although if we use base two then we have an information content measured in bits. Those are the same bits as are in the bytes that make up the megabytes and gigabytes in your computer. You can see this number as an estimate of how many well-chosen yes-or-no questions you’d have to ask to pick the actual result out of all the possible ones.
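
    If you’d rather see that formula as code, here’s a minimal sketch in Python, using logarithms base two so the answers come out in bits:

        # Information content (Shannon entropy), in bits, of a list of outcome probabilities.
        from math import log2

        def entropy(probabilities):
            return sum(-p * log2(p) for p in probabilities if p > 0)

        print(entropy([0.5, 0.5]))   # a fair coin: 1.0 bit
        print(entropy([1/6] * 6))    # a fair six-sided die: about 2.585 bits
        print(entropy([0.9, 0.1]))   # a heavily lopsided coin: about 0.469 bits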

    I’d called this the information content of the experiment’s outcome. That’s an idiosyncratic term, chosen because I wanted to hide what it’s normally called. The normal name for this is the “entropy”.

    To be more precise, it’s known as the “Shannon entropy”, after Claude Shannon, pioneer of the modern theory of information. However, the equation defining it looks the same as one that defines the entropy of statistical mechanics, that thing everyone knows is always increasing and somehow connected with stuff breaking down. Well, almost the same. The statistical mechanics one multiplies the sum by a constant number called the Boltzmann constant, after Ludwig Boltzmann, who did so much to put statistical mechanics in its present and very useful form. We aren’t thrown by that. The statistical mechanics entropy describes energy that is in a system but that can’t be used. It’s almost background noise, present but nothing of interest.
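
    The statistical-mechanics version is commonly written with that constant out front, something like

    S = - k_B \sum_{j} p_j \cdot \log\left(p_j\right)

    with k_B the Boltzmann constant and the logarithm the natural one; the sum itself has the same shape as the formula above.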

    Is this Shannon entropy the same entropy as in statistical mechanics? This gets into some abstract grounds. If two things are described by the same formula, are they the same kind of thing? Maybe they are, although it’s hard to see what kind of thing might be shared by “how interesting the score of a basketball game is” and “how much unavailable energy there is in an engine”.

    The legend has it that when Shannon was working out his information theory he needed a name for this quantity. John von Neumann, the mathematician and pioneer of computer science, suggested, “You should call it entropy. In the first place, a mathematical development very much like yours already exists in Boltzmann’s statistical mechanics, and in the second place, no one understands entropy very well, so in any discussion you will be in a position of advantage.” There are variations of the quote, but they have the same structure and punch line. The anecdote appears to trace back to an April 1961 seminar at MIT given by one Myron Tribus, who claimed to have heard the story from Shannon. I am not sure whether it is literally true, but it does express a feeling about how people understand entropy that is true.

    Well, these entropies have the same form. And they’re given the same name, give or take a modifier of “Shannon” or “statistical” or some other qualifier. They’re even often given the same symbol; normally a capital S or maybe an H is used as the quantity of entropy. (H tends to be more common for the Shannon entropy, but your equation would be understood either way.)

    I’m not comfortable saying they’re the same thing, though. After all, we use the same formula to calculate a batting average and to work out the average time of a commute. But we don’t think those are the same thing, at least not more generally than “they’re both averages”. These entropies measure different kinds of things. They have different units that just can’t be sensibly converted from one to another. And the statistical mechanics entropy has many definitions that not just don’t have parallels for information, but wouldn’t even make sense for information. I would call these entropies siblings, with strikingly similar profiles, but not more than that.

    But let me point out something about the Shannon entropy. It is low when an outcome is predictable. If the outcome is unpredictable, presumably knowing the outcome will be interesting, because there is no guessing what it might be. This is where the entropy is maximized. But an absolutely random outcome also has a high entropy. And that’s boring. There’s no reason for the outcome to be one option instead of another. Somehow, as looked at by the measure of entropy, the most interesting of outcomes and the most meaningless of outcomes blur together. There is something wondrous and strange in that.

     
    • Angie Mc 9:43 pm on Saturday, 25 April, 2015 Permalink | Reply

      Clever title to go with an interesting post, Joseph :)


    • ivasallay 3:35 am on Sunday, 26 April, 2015 Permalink | Reply

      There is so much entropy in my life that I just didn’t know there were two different kinds.


      • Joseph Nebus 8:21 pm on Monday, 27 April, 2015 Permalink | Reply

        It’s worse than that: there’s many kinds of entropy out there. There’s even a kind of entropy that describes how large black holes are.


    • Aquileana 12:08 pm on Sunday, 26 April, 2015 Permalink | Reply

      Shannon Entropy is so interesting … The last paragraph of your post is eloquent… Thanks for teaching us about the The sigma summation in which the pj means the probability of outcome number j.
      Best wishes to you. Aquileana :star:


    • vagabondurges 7:55 pm on Monday, 27 April, 2015 Permalink | Reply

      I always enjoy trying to follow along with your math posts, and throwing some mathmatician anecdotes in there seasons it to perfection.


      • Joseph Nebus 8:24 pm on Monday, 27 April, 2015 Permalink | Reply

        Thank you. I’m fortunate with mathematician anecdotes that so many of them have this charming off-kilter logic. They almost naturally have the structure of a simple vaudeville joke.


    • elkement 7:41 pm on Wednesday, 29 April, 2015 Permalink | Reply

      I totally agree on your way of introducing the entropy ‘siblings’. Actually, I had once wondered why you call the ‘information entropy’ ‘entropy’ just because of similar mathematical definitions.

      Again Feynman comes to my mind: In his physics lectures he said that very rarely did work in engineering contribute to theoretical foundations in science: One time Carnot did it – describing his ideal cycle and introducing thermodynamical entropy – and the other thing Feynman mentioned was Shannon’s information theory.


      • Joseph Nebus 5:55 am on Tuesday, 5 May, 2015 Permalink | Reply

        It’s curious to me how this p-times-log-p form turns up in things that don’t seem related. I do wonder if there’s a common phenomenon we need to understand that we haven’t quite pinned down yet and that makes for a logical unification of the different kinds of entropy.

        I hadn’t noticed that Feynman quote before, but he’s surely right about Carnot and Shannon. They did much to give clear central models and definitions to fields that were forming, and put out problems so compelling that they shaped the fields.

        Liked by 1 person

    • LFFL 9:58 am on Friday, 1 May, 2015 Permalink | Reply

      Omg the TITLE of this! Lol :D I’m getting motion sickness as I speak.


      • Joseph Nebus 6:04 am on Tuesday, 5 May, 2015 Permalink | Reply

        Yeah, I was a little afraid of that. But it’s just so wonderful to say. And more fun to diagram.

        I hope the text came out all right.


  • Joseph Nebus 3:32 pm on Saturday, 7 February, 2015 Permalink | Reply
    Tags: life, memoir, ,   

    The Thermodynamics of Life 


    Peter Mander of the Carnot Cycle blog, which is primarily about thermodynamics, has a neat bit about constructing a mathematical model for how the body works. This model doesn’t look anything like a real body, as it’s concerned with basically the flow of heat, and how respiration fires the work our bodies need to do to live. Modeling at this sort of detail brings to mind an old joke told of mathematicians — that, challenged to design a maximally efficient dairy farm, the mathematician begins with “assume a spherical cow” — but great insights can come from models that look too simple to work.

    It also, sad to say, includes a bit of Bright Young Science-Minded Lad (in this case, the author’s partner of the time) reasoning his way through what traumatized people might think, in a way that’s surely well-intended but also has to be described as “surely well-intended”, so, know that the tags up top of the article aren’t misleading.

     
  • Joseph Nebus 10:21 pm on Sunday, 21 September, 2014 Permalink | Reply
    Tags: , , , ,   

    The Geometry of Thermodynamics (Part 2) 


    I should mention — I should have mentioned earlier, but it has been a busy week — that CarnotCycle has published the second part of “The Geometry of Thermodynamics”. This is a bit of a tougher read than the first part, admittedly, but it’s still worth reading. The essay reviews how James Clerk Maxwell — yes, that Maxwell — developed the thermodynamic relationships that would have made him famous in physics if it weren’t for his work in electromagnetism that ultimately overthrew the Newtonian paradigm of space and time.

    The ingenious thing is that the best part of this work is done on geometric grounds, on thinking of the spatial relationships between quantities that describe how a system moves heat around. “Spatial” may seem a strange word to describe this since we’re talking about things that don’t have any direct physical presence, like “temperature” and “entropy”. But if you draw pictures of how these quantities relate to one another, you have curves and parallelograms and figures that follow the same rules of how things fit together that you’re used to from ordinary everyday objects.

    A wonderful side point is a touch of human fallibility from a great mind: in working out his relations, Maxwell misunderstood just what was meant by “entropy”, and needed correction by the at-least-as-great Josiah Willard Gibbs. Many people don’t quite know what to make of entropy even today, and Maxwell was working when the word was barely a generation away from being coined, so it’s quite reasonable he might not understand a term that was relatively new and still getting its precise definition. It’s surprising nevertheless to see.


    carnotcycle

    James Clerk Maxwell and the geometrical figure with which he proved his famous thermodynamic relations

    Historical background

    Every student of thermodynamics sooner or later encounters the Maxwell relations – an extremely useful set of statements of equality among partial derivatives, principally involving the state variables P, V, T and S. They are general thermodynamic relations valid for all systems.

    The four relations originally stated by Maxwell are easily derived from the (exact) differential relations of the thermodynamic potentials:

    dU = T dS − P dV   ⇒   (∂T/∂V)_S = −(∂P/∂S)_V
    dH = T dS + V dP   ⇒   (∂T/∂P)_S = (∂V/∂S)_P
    dG = −S dT + V dP   ⇒   −(∂S/∂P)_T = (∂V/∂T)_P
    dA = −S dT − P dV   ⇒   (∂S/∂V)_T = (∂P/∂T)_V

    This is how we obtain these Maxwell relations today, but it disguises the history of their discovery. The thermodynamic state functions H, G and A were yet to…

    View original post 1,262 more words

     
    • elkement 11:24 am on Tuesday, 23 September, 2014 Permalink | Reply

      carnotcycle has to be turned into a book :-)


      • Joseph Nebus 11:37 pm on Wednesday, 24 September, 2014 Permalink | Reply

        It should become one! I wouldn’t be surprised if that’s in mind, actually, particularly given the deliberate pace with which the articles are being written.


  • Joseph Nebus 3:08 pm on Saturday, 9 August, 2014 Permalink | Reply
    Tags: , , , Maxwell's Equations,   

    The Geometry of Thermodynamics (Part 1) 


    I should mention that Peter Mander’s Carnot Cycle blog has a fine entry, “The Geometry of Thermodynamics (Part I)” which admittedly opens with a diagram that looks like the sort of thing you create when you want to present a horrifying science diagram. That’s a bit of flavor.

    Mander writes about part of what made J Willard Gibbs probably the greatest theoretical physicist the United States has yet produced: Gibbs put much of thermodynamics into a logically neat system, the kind we still basically use today, and, better yet, saw how to represent it and understand it as a matter of surface geometries. This is an abstract kind of surface — looking at the curve traced out by, say, mapping the energy of a gas against its volume, or its temperature against its entropy — but if you can accept the idea that we can draw curves representing these quantities, then you get to use your understanding of how solid objects look and feel. (Gibbs even had solid objects made — James Clerk Maxwell, of Maxwell’s Equations fame, sculpted some.)

    This is a reblogging of only part one, although as Mander’s on summer holiday you haven’t missed part two.


    carnotcycle


    Volume One of the Scientific Papers of J. Willard Gibbs, published posthumously in 1906, is devoted to Thermodynamics. Chief among its content is the hugely long and desperately difficult “On the equilibrium of heterogeneous substances (1876, 1878)”, with which Gibbs single-handedly laid the theoretical foundations of chemical thermodynamics.

    In contrast to James Clerk Maxwell’s textbook Theory of Heat (1871), which uses no calculus at all and hardly any algebra, preferring geometry as the means of demonstrating relationships between quantities, Gibbs’ magnum opus is stuffed with differential equations. Turning the pages of this calculus-laden work, one could easily be drawn to the conclusion that the writer was not a visual thinker.

    But in Gibbs’ case, this is far from the truth.

    The first two papers on thermodynamics that Gibbs published, in 1873, were in fact visually-led. Paper I deals with indicator diagrams and their comparative properties, while Paper II

    View original post 1,490 more words

     
  • Joseph Nebus 9:40 pm on Thursday, 8 May, 2014 Permalink | Reply
    Tags: , Kelvin, , , ,   

    The ideal gas equation 


    I did want to mention that the big CarnotCycle entry for the month is “The Ideal Gas Equation”. The ideal gas equation is one of the more famous equations that isn’t F = ma or E = mc², which I admit isn’t a very big group of really famous equations; but, at the very least, its content is familiar enough.

    If you keep a gas at constant temperature, and increase the pressure on it, its volume decreases, and vice-versa, known as Boyle’s Law. If you keep a gas at constant volume, and decrease its pressure, its temperature decreases, and vice-versa, known as Gay-Lussac’s law. Then Charles’s Law says if a gas is kept at constant pressure, and the temperature increases, then the volume increases, and vice-versa. (Each of these is probably named for the wrong person, because they always are.) The Ideal Gas equation combines all these relationships into one, neat, easily understood package.
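    As a toy numerical illustration, here is a small sketch of how the combined relationship holds all three of those laws at once. I have written it as PV = nRT, one common form of the ideal gas equation, with n the amount of gas and R the gas constant; the particular numbers and function names are mine, purely for the sketch.

```python
R = 8.314   # gas constant, J/(mol K); an assumed value, not from the post
n = 1.0     # amount of gas, in moles

def pressure(V, T):
    """Pressure (Pa) of n moles of ideal gas with volume V (m^3) at temperature T (K)."""
    return n * R * T / V

def volume(P, T):
    """Volume (m^3) of n moles at pressure P (Pa) and temperature T (K)."""
    return n * R * T / P

# Boyle's Law: constant temperature; halve the volume and the pressure doubles.
print(pressure(0.0224, 273.15), pressure(0.0112, 273.15))

# Gay-Lussac's Law: constant volume; halve the absolute temperature and the pressure halves.
print(pressure(0.0224, 273.15), pressure(0.0224, 273.15 / 2))

# Charles's Law: constant pressure; double the temperature and the volume doubles.
print(volume(101325.0, 273.15), volume(101325.0, 2 * 273.15))
```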

    Peter Mander describes some of the history of these concepts and equations, how they came together, and the interesting way they connect to the absolute temperature scale and to the idea of absolute zero. Absolute temperatures — Kelvin — and absolute zero are familiar enough these days that it’s difficult to remember they were ever new, controversial, and intellectually challenging ideas to develop. I hope you enjoy.


    carnotcycle


    If you received formal tuition in physical chemistry at school, then it’s likely that among the first things you learned were the 17th/18th century gas laws of Mariotte and Gay-Lussac (Boyle and Charles in the English-speaking world) and the equation that expresses them: PV = kT.

    It may be that the historical aspects of what is now known as the ideal (perfect) gas equation were not covered as part of your science education, in which case you may be surprised to learn that it took 174 years to advance from the pressure-volume law PV = k to the combined gas law PV = kT.


    The lengthy timescale indicates that putting together closely associated observations wasn’t regarded as a must-do in this particular era of scientific enquiry. The French physicist and mining engineer Émile Clapeyron eventually created the combined gas equation, not for its own sake, but because he needed an…

    View original post 1,628 more words

     
  • Joseph Nebus 11:19 pm on Thursday, 6 March, 2014 Permalink | Reply
    Tags: , , , ,   

    The Liquefaction of Gases – Part II 


    The CarnotCycle blog has a continuation of last month’s The Liquefaction of Gases, as you might expect, named The Liquefaction of Gases, Part II, and it’s another intriguing piece. The story here is about how the theory of cooling, and of phase changes — under what conditions gases will turn into liquids — was developed. There’s a fair bit of mathematics involved, although most of the important work is in polynomials. If you remember in algebra (or in pre-algebra) drawing curves for functions that had x³ in them, and finding how they sometimes had one and sometimes had three real roots, then you’re well on your way to understanding the work which earned Johannes van der Waals the 1910 Nobel Prize in Physics.
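    If you want to see that one-real-root versus three-real-roots business for yourself, here is a tiny sketch; the two cubics are simply ones I picked for illustration.

```python
import numpy as np

# Two example cubics: the first crosses the x-axis three times, the second only once.
three_real = np.roots([1, 0, -3, 1])   # roots of x**3 - 3*x + 1
one_real = np.roots([1, 0, 3, 1])      # roots of x**3 + 3*x + 1

print(three_real)   # three real roots
print(one_real)     # one real root and a complex-conjugate pair
```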


    carnotcycle

    Future Nobel Prize winners both. Kamerlingh Onnes and Johannes van der Waals in 1908.

    On Friday 10 July 1908, at Leiden in the Netherlands, Kamerlingh Onnes succeeded in liquefying the one remaining gas previously thought to be non-condensable – helium – using a sequential Joule-Thomson cooling technique to drive the temperature down to just 4 degrees above absolute zero. The event brought to a conclusion the race to liquefy the so-called permanent gases, following the revelation that all gases have a critical temperature below which they must be cooled before liquefaction is possible.

    This crucial fact was established by Dr. Thomas Andrews, professor of chemistry at Queen’s College Belfast, in his groundbreaking study of the liquefaction of carbon dioxide, “On the Continuity of the Gaseous and Liquid States of Matter”, published in the Philosophical Transactions of the Royal Society of London in 1869.

    As described in Part I of…

    View original post 2,047 more words

     
  • Joseph Nebus 8:52 pm on Thursday, 6 February, 2014 Permalink | Reply
    Tags: , , science history, ,   

    The Liquefaction of Gases – Part I 


    I know, or at least I’m fairly confident, there’s a couple readers here who like deeper mathematical subjects. It’s fine to come up with simulated Price is Right games or figure out what grades one needs to pass the course, but those aren’t particularly challenging subjects.

    But deeper subjects are hard to write about, so, while I stall, let me point you to CarnotCycle, which has a nice historical article about the problem of liquefying gases, a problem that’s steeped not just in thermodynamics but in engineering. If you’re a little familiar with thermodynamics you likely won’t be surprised to see names like William Thomson, James Joule, or Willard Gibbs turn up. I was surprised to see T O’Conor Sloane show up in the additional reading; science fiction fans might vaguely remember that name, as he was the editor of Amazing Stories for most of the 1930s, in between Hugo Gernsback and Raymond Palmer. It’s often a surprising world.


    carnotcycle

    On Monday 3 December 1877, the French Academy of Sciences received a letter from Louis Cailletet, a 45 year-old physicist from Châtillon-sur-Seine. The letter stated that Cailletet had succeeded in liquefying both carbon monoxide and oxygen.

    Liquefaction as such was nothing new to 19th century science, it should be said. The real news value of Cailletet’s announcement was that he had liquefied two gases previously considered ‘non condensable’.

    While a number of gases such as chlorine, carbon dioxide, sulfur dioxide, hydrogen sulfide, ethylene and ammonia had been liquefied by the simultaneous application of pressure and cooling, the principal gases comprising air – nitrogen and oxygen – together with carbon monoxide, nitric oxide, hydrogen and helium, had stubbornly refused to liquefy, despite the use of pressures up to 3000 atmospheres. By the mid-1800s, the general opinion was that these gases could not be converted into liquids under any circumstances.

    But in…

    View original post 1,342 more words

     
    • Damyanti 6:47 am on Thursday, 13 February, 2014 Permalink | Reply

      I usually run far from all topics science-related, but I like this little bit of history here.


      • Joseph Nebus 11:46 pm on Thursday, 13 February, 2014 Permalink | Reply

        I’m glad you do enjoy. I like a good bit of history myself, mathematics and science included, and might go looking for more topics that have a historical slant.


    • LFFL 10:43 pm on Sunday, 23 February, 2014 Permalink | Reply

      You lost me at “deeper mathematical subjects”. I barely have addition & subtraction down.


      • Joseph Nebus 4:43 am on Monday, 24 February, 2014 Permalink | Reply

        Aw, but the deeper stuff is fascinating. For example, imagine you have a parcel of land with some really complicated boundary, all sorts of nooks and crannies and corners and curves and all that. If you just walk around the outside, keeping track of how far you walk and in what direction, then, you can use a bit of calculus to tell exactly how much area is enclosed by the boundary, however complicated a shape it is.

        Isn’t that amazing? You never even have to set foot inside the property, just walk around its boundary.
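        Here is a minimal sketch of that boundary-walk trick, the “shoelace” formula that the calculus result boils down to for a polygon; the two lot shapes are made up purely for illustration.

```python
def shoelace_area(corners):
    """Area enclosed by a polygon, found by walking its boundary once.

    `corners` is a list of (x, y) points in order around the boundary; this is
    the discrete version of the line-integral trick described above."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(corners, corners[1:] + corners[:1]):
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# A 10-by-10 square lot: area 100, without ever stepping inside it.
print(shoelace_area([(0, 0), (10, 0), (10, 10), (0, 10)]))

# An L-shaped lot with a 5-by-5 notch cut out of one corner: area 75.
print(shoelace_area([(0, 0), (10, 0), (10, 5), (5, 5), (5, 10), (0, 10)]))
```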


        • LFFL 4:46 am on Monday, 24 February, 2014 Permalink | Reply

          Wow. I’m impressed by your brain power. I just wasn’t born with a brain for much math beyond the basics.


          • Joseph Nebus 4:25 am on Tuesday, 25 February, 2014 Permalink | Reply

            Aw, you’re kind to me, and unkind to you. It’s not my brainpower, at least. The result is a consequence of some pretty important work you learn early on in calculus, and I’d expect you could understand the important part of it without knowing more than the basics.


  • Joseph Nebus 3:00 pm on Saturday, 28 December, 2013 Permalink | Reply
    Tags: , , , railroads,   

    CarnotCycle on the Gibbs-Helmholtz Equation 


    I’m a touch late discussing this and can only plead that it has been December after all. Over on the CarnotCycle blog — which is focused on thermodynamics in a way I rather admire — was recently a discussion of the Gibbs-Helmholtz Equation, which turns up in thermodynamics classes, and goes a bit better than the class I remember by showing a couple examples of actually using it to understand how chemistry works. Well, it’s so easy in a class like this to get busy working with symbols and forget that thermodynamics is a supremely practical science [1].

    The Gibbs-Helmholtz Equation — named for Josiah Willard Gibbs and for Hermann von Helmholtz, both of whom developed it independently (Helmholtz first) — comes in a couple of different forms, which CarnotCycle describes. All these different forms are meant to describe whether a particular change in a system is likely to happen. CarnotCycle’s discussion gives a couple of examples of actually working out the numbers, including for the Haber process, which I don’t remember reading about in calculative detail before. So I wanted to recommend it as a bit of practical mathematics or physics.

    [1] I think it was Stephen Brush who pointed out that many of the earliest papers in thermodynamics appeared in railroad industry journals, because the problems of efficiently getting power from engines, and of how materials change when they get below freezing, are critically important to turning railroads from experimental contraptions into a productive industry. The observation might not be original to him; it might have been Wolfgang Schivelbusch’s instead.

     
  • Joseph Nebus 12:48 am on Thursday, 3 October, 2013 Permalink | Reply
    Tags: , , quantum field theory, , ,   

    From ElKement: May The Force Field Be With You 


    I’m derelict in mentioning this but ElKement’s blog, Theory And Practice Of Trying To Combine Just Anything, has published the second part of a non-equation-based description of quantum field theory. This one, titled “May The Force Field Be With You: Primer on Quantum Mechanics and Why We Need Quantum Field Theory”, is about introducing the idea of a field, and a bit of how they can be understood in quantum mechanics terms.

    A field, in this context, means some quantity that’s got a defined value for every point in space and time that you’re studying. As ElKement notes, the temperature is probably the most familiar example to people. I’d imagine that’s partly because it’s relatively easy to feel the temperature change as one goes about one’s business — after all, gravity is also a field, but almost none of us feel it appreciably change — and because weather maps make its changes in space and in time available in attractive pictures.

    The thing the field contains can be just about anything. The temperature would be just a plain old number, or as mathematicians would have it a “scalar”. But you can also have fields that describe stuff like the pull of gravity, which is a certain amount of pull and pointing, for us, toward the center of the earth. You can also have fields that describe, for example, how quickly and in what direction the water within a river is flowing. These strengths-and-directions are called “vectors” [1], and a field of vectors offers a lot of interesting mathematics and useful physics. You can also plunge into more exotic mathematical constructs, but you don’t have to. And you don’t need to understand any of this to read ElKement’s more robust introduction to all this.

    [1] The independent student newspaper for the New Jersey Institute of Technology is named The Vector, and has as motto “With Magnitude and Direction Since 1924”. I don’t know if other tech schools have newspapers which use a similar joke.
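    For a concrete, if toy, picture of the difference between those kinds of fields, here is a short Python sketch of a scalar field and a vector field, each just a rule assigning something to every point; the particular temperature and flow formulas are invented for illustration.

```python
import math

# A scalar field: one plain number (say, a temperature) at every point (x, y).
def temperature(x, y):
    return 20.0 + 5.0 * math.cos(x) * math.sin(y)

# A vector field: a strength and a direction at every point; here, a toy
# picture of water swirling around the origin.
def flow_velocity(x, y):
    return (-y, x)   # (x-component, y-component)

point = (1.0, 2.0)
print(temperature(*point))     # a single number: a scalar
print(flow_velocity(*point))   # a pair of numbers: a vector
```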

     
    • elkement 6:22 am on Thursday, 3 October, 2013 Permalink | Reply

      Thanks again for your kind pingback and publicity :-)
      I need to get to vectors and tensors in the next post(s) but I am still trying to figure out how to do this without mentioning those terms. Fluid dynamics is often a good starting point, e.g. to introduce, ‘derive’ or better motivate Schrödinger’s equation. On the other hand Feynman used to plunge directly into path integrals – presenting them as a rule along the lines of “This is the way nature works – live with it” – and deriving Schrödinger’s equation later.


      • Joseph Nebus 3:20 am on Saturday, 5 October, 2013 Permalink | Reply

        I’m not quite sure how I’d do either. I think I could probably explain vectors without having to use mathematical symbolism, since the idea of stuff moving at particular speeds in directions can call on physical intuition. Tensors I don’t know how I’d try to explain in popular terms, partly because I’m not really as proficient in them as I should be. I probably need to think seriously about my own understanding of them.


        • elkement 6:26 pm on Monday, 7 October, 2013 Permalink | Reply

          I have also always considered it easier to imagine the different aspects of a vector – the abstract object and the ‘arrow’ as it lives in a specific basis. But how do you really imagine the ‘abstract tensor object’ – in contrast to a ‘matrix’ (with more than 3 dimensions probably…)?

          I have started to read about general relativity (… will finish after I have finally understood the Higgs…) and it took me quite a while to comprehend that you are not allowed to shift a vector in curved space as you shift the ‘arrow’. Actually it made me think about vectors in a new way…


          • Joseph Nebus 2:46 am on Friday, 18 October, 2013 Permalink | Reply

            (I’m embarrassed that I lost this comment somehow.)

            I can sort of reconstruct the process when I think I started to get vectors as a concept, particularly in thinking of them as not tied to some particular point, or even containing information about a point, but somehow floating freely off that. If I get around to trying to explain vectors I might even be able to make all that explicit again.

            Tensors I keep feeling like I’m on the verge of having that intuitive leap to where I have some mental model for how they work but I keep finding I don’t quite do enough work with them that it gets past following the rules and into really understanding the rules.


  • Joseph Nebus 10:55 am on Thursday, 19 September, 2013 Permalink | Reply
    Tags: , , , , ,   

    From ElKement: Space Balls, Baywatch, and the Geekiness of Classical Mechanics 


    Over on Elkement’s blog, Theory and Practice of Trying To Combine Just Anything, is the start of a new series about quantum field theory. Elke Stangl is trying a pretty impressive trick here: describing a pretty advanced field without resorting to the piles of equations that maybe are needed to be precise, but which also fill the page with, well, piles of equations.

    The first entry is about classical mechanics, and contrasting the familiar way that it gets introduced to people — the whole force-equals-mass-times-acceleration bit — and an alternate description, based on what’s called the Principle of Least Action. This alternate description is as good as the familiar old Newton’s Laws in describing what’s going on, but it also makes a host of powerful new mathematical tools available. So when you get into serious physics work you tend to shift over to that model; and, if you want to start talking Modern Physics, stuff like quantum mechanics, you pretty nearly have to start with that if you want to do anything.

    So, since it introduces in clear language a fascinating and important part of physics and mathematics, I’d recommend folks try reading the essay. It’s building up to an explanation of fields, as the modern physicist understands them, too, which is similarly an important topic worth being informed about.
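    If you’d like a numerical taste of what “least action” means, here is a little sketch of my own devising, not anything from Elke’s essay: for a ball tossed straight up and caught one second later, the action, the time-integral of kinetic minus potential energy, comes out smallest along the true parabolic path when compared with slightly bent versions of it.

```python
import numpy as np

m, g, T = 1.0, 9.8, 1.0                 # mass (kg), gravity (m/s^2), flight time (s)
t = np.linspace(0.0, T, 2001)
dt = t[1] - t[0]

def action(x):
    """Discrete action: the time-integral of kinetic minus potential energy."""
    v = np.gradient(x, dt)
    return np.sum((0.5 * m * v**2 - m * g * x) * dt)

# The true path from height 0 back to height 0 is a parabola.
true_path = 0.5 * g * t * (T - t)

# Bend it with an extra sine wiggle that keeps the endpoints fixed.
for eps in (0.0, 0.1, 0.3):
    print(eps, action(true_path + eps * np.sin(np.pi * t / T)))
# The eps = 0.0 line, the unbent true path, has the smallest action of the three.
```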

     
    • elkement 11:03 am on Thursday, 19 September, 2013 Permalink | Reply

      Thanks a lot, Joseph – I am really honored :-) I hope I will be able to meet the expectations raised by your post :-D


    • elkement 11:06 am on Thursday, 19 September, 2013 Permalink | Reply

      Reblogged this on Theory and Practice of Trying to Combine Just Anything and commented:
      This is self-serving, but I can’t resist reblogging Joseph Nebus’ endorsement of my posts on Quantum Field Theory. Joseph is running a great blog on mathematics, and he manages to explain math in an accessible and entertaining way. I hope I will be able to do the same to theoretical physics!


  • Joseph Nebus 8:50 pm on Monday, 8 July, 2013 Permalink | Reply
    Tags: , ,   

    On exact and inexact differentials 


    The CarnotCycle blog recently posted a nice little article titled “On Exact And Inexact Differentials” and I’m bringing it to people’s attention because it’s the sort of thing which would have been extremely useful to me at a time when I was reading calculus-heavy texts that just assumed you knew what exact differentials were, without being aware that you probably missed the day in intro differential equations when they were explained. (That was by far my worst performance in a class. I have no excuse.)

    So this isn’t going to be the most accessible article you run across on my blog here, until I finish making the switch to a full-on advanced statistical mechanics course. But if you start getting into, particularly, thermodynamics and wonder where this particular and slightly funky string of symbols comes from, this is a nice little warmup. For extra help, CarnotCycle also explains what makes something an inexact differential.
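    And if you want to test a differential for exactness yourself, here is a small sketch using sympy; the example differentials are my own picks, and the check is the usual cross-derivative (Euler reciprocity) condition.

```python
import sympy as sp

x, y = sp.symbols('x y')

def is_exact(M, N):
    """True when M(x, y) dx + N(x, y) dy is an exact differential,
    i.e. when the cross-derivatives dM/dy and dN/dx agree."""
    return sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# d(x*y**2) = y**2 dx + 2*x*y dy, so this one is exact ...
print(is_exact(y**2, 2*x*y))   # True

# ... while y dx - x dy is not: no single function has it as its differential.
print(is_exact(y, -x))         # False
```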


    carnotcycle

    From the search term phrases that show up on this blog’s stats, CarnotCycle detects that a significant segment of visitors are studying foundation level thermodynamics  at colleges and universities around the world. So what better than a post that tackles that favorite test topic – exact and inexact differentials.

    When I was an undergraduate, back in the time of Noah, we were first taught the visual approach to these things. Later we dispensed with diagrams and got our answers purely through the operations of calculus, but either approach is equally instructive. CarnotCycle herewith presents them both.

    – – – –

    The visual approach

    Ok, let’s start off down the visual track by contemplating the following pair of pressure-volume diagrams:

    visual track

    The points A and B have identical coordinates on both diagrams, with A and B respectively representing the initial and final states of a closed PVT system, such as an…

    View original post 782 more words

     