## My Mathematics Reading For The 13th of June

I’m working on the next Why Stuff Can Orbit post, this one to feature a special little surprise. In the meanwhile here’s some of the things I’ve read recently and liked.

The Theorem of the Day is just what the name offers. They’re fit onto single slides, so there’s not much text to read. I’ll grant some of them might be hard to take in at once, though, if you’re not familiar with the lingo. Anyway, this particular theorem, the Lindemann-Weierstrass Theorem, is one of the famous ones. Also one of the best-named ones. Karl Weierstrass is one of those names you find all over analysis. Over the latter half of the 19th century he attacked the logical problems that had bugged calculus for the previous three centuries and beat them all. I’m lying, but not by much. Ferdinand von Lindemann’s name turns up less often, but he’s known in mathematics circles for proving that π is transcendental (and so, ultimately, that the circle can’t be squared by compass and straightedge). And he was David Hilbert’s thesis advisor.

The Lindemann-Weierstrass Theorem is one of those little utility theorems that’s neat on its own, yes, but is good for proving other stuff. This theorem says that if a given number is algebraic (ask about that some A To Z series) and isn’t zero, then e raised to that number has to be transcendental. Flip it around and it says that if e raised to some number is algebraic, that number has to be zero or transcendental. (The zero case is the exception: e raised to 0 is equal to 1, and both 0 and 1 are algebraic.) The page also mentions one of those fun things you run across when you have a scientific calculator and can repeat an operation on whatever the result of the last operation was.

I’ve mentioned Maths By A Girl before, but, it’s worth checking in again. This is a piece about Apéry’s Constant, which is one of those numbers mathematicians have heard of, and that we don’t know whether it’s transcendental or not. It’s hard proving numbers are transcendental. If you go out trying to build a transcendental number it’s easy, but otherwise, you have to hope you know your number is the exponential of a nonzero algebraic number.

I forget which Twitter feed brought this to my attention, but here’s a couple geometric theorems demonstrated and explained some by Dave Richeson. There’s something wonderful in a theorem that’s mostly a picture. It feels so supremely mathematical to me.

And last, Katherine Bourzac writing for Nature.com reports the creation of a two-dimensional magnet. This delights me since one of the classic problems in statistical mechanics is a thing called the Ising model. It’s a basic model for the mathematics of how magnets would work. The one-dimensional version is simple enough that you can give it to undergrads and have them work through the whole problem. The two-dimensional version is a lot harder to solve (Lars Onsager famously did it, for the zero-field case, in 1944) and I’m not sure I ever saw it laid out even in grad school. (Mind, I went to grad school for mathematics, not physics, and the subject is a lot more physics.) The four- and higher-dimensional model can be solved by a clever approach called mean field theory. The three-dimensional model … I don’t think has any exact solution, which seems odd given how that’s the version you’d think was most useful.
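If you’re curious how approachable the one-dimensional model really is, here’s a sketch of my own (nothing from the Nature piece): it checks a brute-force sum over every arrangement of spins against the exact transfer-matrix answer for the zero-field case. The coupling strength and temperature are arbitrary made-up values.

```python
import itertools
import math

def brute_force_Z(n, beta, J):
    """Partition function by brute force: sum exp(-beta * E) over all 2^n
    arrangements of n spins on a ring, with E = -J * (sum of neighbor products)."""
    total = 0.0
    for spins in itertools.product((-1, 1), repeat=n):
        energy = -J * sum(spins[i] * spins[(i + 1) % n] for i in range(n))
        total += math.exp(-beta * energy)
    return total

def transfer_matrix_Z(n, beta, J):
    """The exact zero-field answer: Z = lam_plus^n + lam_minus^n, where
    lam_plus = 2 cosh(beta J) and lam_minus = 2 sinh(beta J) are the
    eigenvalues of the 2-by-2 transfer matrix."""
    return (2 * math.cosh(beta * J)) ** n + (2 * math.sinh(beta * J)) ** n

n, beta, J = 10, 0.7, 1.0
print(brute_force_Z(n, beta, J))
print(transfer_matrix_Z(n, beta, J))
```

The brute force needs 2^n terms, which is why it only works for toy sizes; the transfer-matrix trick is what makes the model fit in an undergraduate problem set.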

That there’s a real two-dimensional magnet (well, a one-molecule-thick magnet) doesn’t really affect the model of two-dimensional magnets. The model is interesting enough for its mathematics, which teaches us about all kinds of phase transitions. And it’s close enough to the way certain aspects of real-world magnets behave to enlighten our understanding. The topic couldn’t avoid drawing my eye, is all.

## Reading the Comics, March 11, 2017: Accountants Edition

And now I can wrap up last week’s delivery from Comic Strip Master Command. It’s only five strips. One certainly stars an accountant. One stars a kid that I believe is being coded to read as an accountant. The rest, I don’t know. I pick Edition titles for flimsy reasons anyway. This’ll do.

Ryan North’s Dinosaur Comics for the 6th is about things that could go wrong. And every molecule of air zipping away from you at once is something which might possibly happen but which is indeed astronomically unlikely. This has been the stuff of nightmares since the late 19th century made probability an important part of physics. The chance that all the air near you would zip away at once is stupendously tiny. But such unlikely events challenge our intuitions about probability. An event that has zero chance of happening might still happen, given enough time and enough opportunities. But we’re not using our time well to worry about that. If nothing else, even if all the air around you did rush away at once, it would almost certainly rush back right away.

Mark Anderson’s Andertoons for the 7th is the Mark Anderson’s Andertoons for last week. It’s another kid-at-the-chalkboard panel. What gets me is that if the kid did keep one for himself then shouldn’t he have written 38?

Brian Basset’s Red and Rover for the 8th mentions fractions. It’s just there as the sort of thing a kid doesn’t find all that naturally compelling. That’s all right. I like the bug-eyed squirrel in the first panel.

Bill Holbrook’s On The Fastrack for the 9th concludes the wedding of accountant Fi. It uses the square root symbol so as to make the cake topper clearly mathematical as opposed to just an age.

## Reading the Comics, January 7, 2017: Just Before GoComics Breaks Everything Edition

Most of the comics I review here are printed on GoComics.com. Well, most of the comics I read online are from there. But even so I think they have more comic strips that mention mathematical themes than anywhere else I know of. Anyway, they’re unleashing a complete web site redesign on Monday. I don’t know just what the final version will look like. I know that the beta versions included the incredibly useful, that is to say dumb, feature where if a particular comic you do read doesn’t have an update for the day — and many of them don’t, as they’re weekly or three-times-a-week or so — then it’ll show some other comic in its place. I mean, the idea of encouraging people to find new comics is a good one. To some extent that’s what I do here. But the beta made no distinction between “comic you don’t read because you never heard of Microcosm” and “comic you don’t read because glancing at it makes your eyes bleed”. And on an idiosyncratic note, I read a lot of comics. I don’t need to see Dude and Dude reruns in fourteen spots on my daily comics page, even if I didn’t mind it to start with.

Anyway. I am hoping, desperately hoping, that with the new site all my old links to comics are going to keep working. If they don’t then I suppose I’m just ruined. We’ll see. My suggestion is that if you’re at all curious about the comics, you read them today (Sunday), just to be safe.

Ashleigh Brilliant’s Pot-Shots is a curious little strip I never knew of until GoComics picked it up a few years ago. Its format is compellingly simple: a little illustration alongside a wry, often despairing, caption. I love it, but I also understand why it was the subject of endless queries to the Detroit Free Press (Or Whatever) about why this thing was taking up newspaper space. The strip rerun the 31st of December is a typical example of the strip and amuses me at least. And it uses arithmetic as the way to communicate reasoning, both good and bad. Brilliant’s joke does address something that logicians have to face, too. Whether an argument is logically valid depends entirely on its structure. If the form is correct the reasoning may be excellent. But to be sound an argument has to be valid and must also have its assumptions be true. We can separate whether an argument is right from whether it could ever possibly be right. If you don’t see the value in that, you have never participated in an online debate about where James T Kirk was born and whether Spock was the first Vulcan in Star Fleet.

Thom Bluemel’s Birdbrains for the 2nd of January, 2017, is a loaded-dice joke. Is this truly mathematics? Statistics, at least? Close enough for the start of the year, I suppose. Working out whether a die is loaded is one of the things any gambler would like to know, and that mathematicians might be called upon to identify or exploit. (I had a grandmother unshakably convinced that I would have some natural ability to beat the Atlantic City casinos if she could only sneak the underaged me in. I doubt I could do anything of value there besides see the stage magic show.)

Jack Pullan’s Boomerangs rerun for the 2nd is built on the one bit of statistical mechanics that everybody knows, that something or other about entropy always increasing. It’s not a quantum mechanics rule, but it’s a natural confusion. Quantum mechanics has the reputation as the source of all the most solid, irrefutable laws of the universe’s working. Statistical mechanics and thermodynamics have this musty odor of 19th-century steam engines, no matter how much there is to learn from there. Anyway, the collapse of systems into disorder is not an irrevocable thing. It takes only energy or luck to overcome disorderliness. And in many cases we can substitute time for luck.

Scott Hilburn’s The Argyle Sweater for the 3rd is the anthropomorphic-geometry-figure joke that I’ve been waiting for. I had thought Hilburn did this all the time, although a quick review of Reading the Comics posts suggests he’s been more about anthropomorphic numerals the past year. This is why I log even the boring strips: you never know when I’ll need to check the last time Scott Hilburn used “acute” to mean “cute” in reference to triangles.

Mike Thompson’s Grand Avenue uses some arithmetic as the visual cue for “any old kind of schoolwork, really”. Steve Breen’s name seems to have gone entirely from the comic strip. On Usenet group rec.arts.comics.strips Brian Henke found that Breen’s name hasn’t actually been on the comic strip since May, and D D Degg found a July 2014 interview indicating Thompson had mostly taken the strip over from originator Breen.

Mark Anderson’s Andertoons for the 5th is another name-drop that doesn’t have any real mathematics content. But come on, we’re talking Andertoons here. If I skipped it the world might end or something untoward like that.

Ted Shearer’s Quincy for the 14th of November, 1977, doesn’t have any mathematical content really. Just a mention. But I need some kind of visual appeal for this essay and Shearer is usually good for that.

Corey Pandolph, Phil Frank, and Joe Troise’s The Elderberries rerun for the 7th is also a very marginal mention. But, what the heck, it’s got some of your standard wordplay about angles and it’ll get this week’s essay that much closer to 800 words.

## The End 2016 Mathematics A To Z: Ergodic

This essay follows up on distributions, mentioned back on Wednesday. This is only one of the ideas which distributions serve. Do you have a word you’d like to request? I figure to close ‘F’ on Saturday afternoon, and ‘G’ is already taken. But give me a request for a free letter soon and I may be able to work it in.

## Ergodic.

There comes a time a physics major, or a mathematics major paying attention to one of the field’s best non-finance customers, first works on a statistical mechanics problem. Instead of keeping track of the positions and momentums of one or two or four particles she’s given the task of tracking millions of particles. It’s listed as a distribution of all the possible values they can have. But she still knows what it really is. And she looks at how to describe the way this distribution changes in time. If she’s the slightest bit like me, or anyone I knew, she freezes up at this. Calculate the development of millions of particles? Impossible! She tries working out what happens to just one, instead, and hopes that gives some useful results.

And then it does.

It’s a bit much to call this luck. But it is because the student starts off with some simple problems. Particles of gas in a strong box, typically. They don’t interact chemically. Maybe they bounce off each other, but she’s never asked about that. She’s asked about how they bounce off the walls. She can find the relationship between the volume of the box and the pressure of the gas on its interior and the temperature of the gas. And it comes out right.

She goes on to some other problems and it suddenly fails. Eventually she re-reads the descriptions of how to do this sort of problem. And she does them again and again and it doesn’t feel useful. With luck there’s a moment, possibly while showering, that the universe suddenly changes. And the next time the problem works out. She’s working on distributions instead of toy little single-particle problems.

But the problem remains: why did it ever work, even for that toy little problem?

It’s because some systems of things are ergodic. It’s a property that some physics (or mathematics) problems have. Not all. It’s a bit hard to describe clearly. Part of what motivated me to take this topic is that I want to see if I can explain it clearly.

Every part of some system has a set of possible values it might have. A particle of gas can be in any spot inside the box holding it. A person could be in any of the buildings of her city. A pool ball could be travelling in any direction on the pool table. Sometimes that will change. Gas particles move. People go to the store. Pool balls bounce off the edges of the table.

These values will have some kind of distribution. Look at where the gas particle is now. And a second from now. And a second after that. And so on, to the limits of human knowledge. Or to when the box breaks open. Maybe the particle will be more often in some areas than in others. Maybe it won’t. Doesn’t matter. It has some distribution. Over time we can say how often we expect to find the gas particle in each of its possible places.

The same with whatever our system is. People in buildings. Balls on pool tables. Whatever.

Now instead of looking at one particle (person, ball, whatever) we have a lot of them. Millions of particles in the box. Tens of thousands of people in the city. A pool table that somehow supports ten thousand balls. Imagine they’re all settled to wherever they happen to be.

So where are they? The gas particle one is easy to imagine. At least for a mathematics major. If you’re stuck on it I’m sorry. I didn’t know. I’ve thought about boxes full of gas particles for decades now and it’s hard to remember that isn’t normal. Let me know if you’re stuck, and where you are. I’d like to know where the conceptual traps are.

But back to the gas particles in a box. Some fraction of them are in each possible place in the box. There’s a distribution here of how likely you are to find a particle in each spot.

How does that distribution, the one you get from lots of particles at once, compare to the first, the one you got from one particle given plenty of time? If they agree the system is ergodic. And that’s why my hypothetical physics major got the right answers from the wrong work. (If you are about to write me to complain I’m leaving out important qualifiers let me say I know. Please pretend those qualifiers are in place. If you don’t see what someone might complain about thank you, but it wouldn’t hurt to think of something I might be leaving out here. Try taking a shower.)
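If you’d like to play with the idea, here’s a toy sketch of my own, a different toy than the gas box: a point hopping around a circle by an irrational step, which is a standard example of an ergodic system. The step size, the arc watched, and the counts are all arbitrary choices.

```python
import random

ALPHA = 0.5 ** 0.5      # irrational step size; stepping by this around the circle is ergodic
LO, HI = 0.0, 0.25      # watch how often a point lands in this arc of the circle

# One point, followed for a long time.
x, hits, steps = 0.1, 0, 100_000
for _ in range(steps):
    x = (x + ALPHA) % 1.0
    hits += LO <= x < HI
time_average = hits / steps

# Many points, looked at all at once.
random.seed(1)
points = [random.random() for _ in range(100_000)]
ensemble_average = sum(LO <= p < HI for p in points) / len(points)

print(time_average, ensemble_average)  # both should come out near 0.25
```

That the two numbers agree is the ergodic property at work. For a non-ergodic system, say a point that only ever wanders around half the circle, the one-point-over-time number and the many-points-at-once number would disagree.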

The person in a building is almost certainly not an ergodic system. There’s buildings any one person will never ever go into, however possible it might be. But nearly all buildings have some people who will go into them. The one-person-with-time distribution won’t be the same as the many-people-at-once distribution. Maybe there’s a way to qualify things so that it becomes ergodic. I doubt it.

The pool table, now, that’s trickier to say. For a real pool table no, of course not. An actual ball on an actual table rolls to a stop pretty soon, either from the table felt’s friction or because it drops into a pocket. Tens of thousands of balls would form an immobile heap on the table that would be pretty funny to see, now that I think of it. Well, maybe in that case the one-ball and the many-ball distributions are the same: everything sitting still. But they’re a pretty boring same.

Anyway when we talk about “pool tables” in this context we don’t mean anything so sordid as something a person could play pool on. We mean something where the table surface hasn’t any friction. That makes the physics easier to model. It also makes the game unplayable, which leaves the mathematical physicist strangely unmoved. In this context anyway. We also mean a pool table that hasn’t got any pockets. This makes the game even more unplayable, but the physics even easier. (It makes it, really, like a gas particle in a box. Only without that difficult third dimension to deal with.)

And that makes it clear. The one ball on a frictionless, pocketless table bouncing around forever maybe we can imagine. A huge number of balls on that frictionless, pocketless table? Possibly trouble. As long as we’re doing imaginary impossible unplayable pool we could pretend the balls don’t collide with each other. Then the distributions of what ways the balls are moving could be equal. If they do bounce off each other, or if they get so numerous they can’t squeeze past one another, well, that’s different.

An ergodic system lets you do this neat, useful trick. You can look at a single example for a long time. Or you can look at a lot of examples at one time. And they’ll agree in their typical behavior. If one is easier to study than the other, good! Use the one that you can work with. Mathematicians like to do this sort of swapping between equivalent problems a lot.

The problem is it’s hard to find ergodic systems. We may have a lot of things that look ergodic, that feel like they should be ergodic. But proved ergodic, with a logic that we can’t shake? That’s harder to do. Often in practice we will include a note up top that we are assuming the system to be ergodic. With that “ergodic hypothesis” in mind we carry on with our work. It gives us a handle on a lot of problems that otherwise would be beyond us.

## The End 2016 Mathematics A To Z: Distribution (statistics)

As I’ve done before I’m using one of my essays to set up for another essay. It makes a later essay easier. What I want to talk about is worth some paragraphs on its own.

## Distribution (statistics)

The 19th Century saw the discovery of some unsettling truths about … well, everything, really. If there is an intellectual theme of the 19th Century it’s that everything has an unsettling side. In the 20th Century craziness broke loose. The 19th Century, though, saw great reasons to doubt that we knew what we knew.

But one of the unsettling truths grew out of mathematical physics. We start out studying physics the way Galileo or Newton might have, with falling balls. Ones that don’t suffer from air resistance. Then we move up to more complicated problems, like balls on a spring. Or two balls bouncing off each other. Maybe one ball, called a “planet”, orbiting another, called a “sun”. Maybe a ball on a lever swinging back and forth. We try a couple simple problems with three balls and find out that’s just too hard. We have to track so much information about the balls, about their positions and momentums, that we can’t solve any problems anymore. Oh, we can do the simplest ones, but we’re helpless against the interesting ones.

And then we discovered something. By “we” I mean people like James Clerk Maxwell and Josiah Willard Gibbs. And that is that we can know important stuff about how millions and billions and even vaster numbers of things move around. Maxwell could work out how the enormously many chunks of rock and ice that make up Saturn’s rings move. Gibbs could work out how the trillions of trillions of trillions of trillions of particles of gas in a room move. We can’t work out how four particles move. How is it we can work out how a godzillion particles move?

We do it by letting go. We stop looking for that precision and exactitude and knowledge down to infinitely many decimal points. Even though we think that’s what mathematicians and physicists should have. What we do instead is consider the things we would like to know. Where something is. What its momentum is. What side of a coin is showing after a toss. What card was taken off the top of the deck. What tile was drawn out of the Scrabble bag.

There are possible results for each of these things we would like to know. Perhaps some of them are quite likely. Perhaps some of them are unlikely. We track how likely each of these outcomes is. This is called the distribution of the values. This can be simple. The distribution for a fairly tossed coin is “heads, 1/2; tails, 1/2”. The distribution for a fairly tossed six-sided die is “1/6 chance of 1; 1/6 chance of 2; 1/6 chance of 3” and so on. It can be more complicated. The distribution for a fairly tossed pair of six-sided dice starts out “1/36 chance of 2; 2/36 chance of 3; 3/36 chance of 4” and so on. If we’re measuring something that doesn’t come in nice discrete chunks we have to talk about ranges: the chance that a 30-year-old male weighs between 180 and 185 pounds, or between 185 and 190 pounds. The chance that a particle in the rings of Saturn is moving between 20 and 21 kilometers per second, or between 21 and 22 kilometers per second, and so on.
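That two-dice distribution is small enough to enumerate outright, if you’d like to check the numbers:

```python
from collections import Counter
from itertools import product

# Every (first die, second die) pair is equally likely: 36 outcomes in all.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

for total in range(2, 13):
    print(f"P({total}) = {counts[total]}/36")
```

The counts rise to a peak of 6/36 at a total of 7 and fall off symmetrically after.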

We may be unable to describe how a system evolves exactly. But often we’re able to describe how the distribution of its possible values evolves. And the laws by which probability work conspire to work for us here. We can get quite precise predictions for how a whole bunch of things behave even without ever knowing what any thing is doing.

That’s unsettling to start with. It’s made worse by one of the 19th Century’s late discoveries, that of chaos. That a system can be perfectly deterministic. That you might know what every part of it is doing as precisely as you care to measure. And you’re still unable to predict its long-term behavior. That’s unsettling too, although statistical techniques will give you an idea of how likely different behaviors are. You can learn the distribution of what is likely, what is unlikely, and how often the outright impossible will happen.

Distributions follow rules. Of course they do. They’re basically the rules you’d imagine from looking at and thinking about something with a range of values. Something like a chart of how many students got what grades in a class, or how tall the people in a group are, or so on. Each possible outcome turns up some fraction of the time. That fraction’s never less than zero nor greater than 1. Add up all the fractions representing all the times every possible outcome happens and the sum is exactly 1. Something happens, even if we never know just what. But we know how often each outcome will.

There is something amazing to consider here. We can know and track everything there is to know about a physical problem. But we will be unable to do anything with it, except for the most basic and simple problems. We can choose to relax, to accept that the world is unknown and unknowable in detail. And this makes imaginable all sorts of problems that should be beyond our power. Once we’ve given up on this precision we get precise, exact information about what could happen. We can choose to see it as a moral about the benefits and costs and risks of how tightly we control a situation. It’s a surprising lesson to learn from one’s training in mathematics.

## JH van ‘t Hoff and the Gaseous Theory of Solutions; also, Pricing Games

Do you ever think about why stuff dissolves? Like, why a spoon of sugar in a glass of water should seem to disappear instead of turning into a slight change in the water’s clarity? Well, sure, in those moods when you look at the world as a child does, not accepting that life is just like that and instead can imagine it being otherwise. Take that sort of question and put it to adult inquiry and you get great science.

Peter Mander of the Carnot Cycle blog this month writes a tale about Jacobus Henricus van ‘t Hoff, the first winner of a Nobel Prize for Chemistry. In 1883, on hearing of an interesting experiment with semipermeable membranes, van ‘t Hoff had a brilliant insight about why things go into solution, and how. The insight had only one little problem. It makes for fine reading about the history of chemistry and of its mathematical study.

In other, television-related news, the United States edition of The Price Is Right included a mention of “square root day” yesterday, 4/4/16. It was in the game “Cover-Up”, in which the contestant tries making successively better guesses at the price of a car. This they do by covering up wrong digits with new guesses. For the start of the game, before the contestant’s made any guesses, they need something irrelevant to the game to be on the board. So, they put up mock calendar pages for 1/1/2001, 2/2/2004, 3/3/2009, 4/4/2016, and finally a card reading $\sqrt{DAY}$. The game show also had a round devoted to Pi Day a few weeks back. So I suppose they’re trying to reach out to people into pop mathematics. It’s cute.

## Ensembled

A couple weeks back voting in the Democratic party’s Iowa caucus had several districts tied between Clinton and Sanders supporters. The ties were broken by coin tosses. That fact produced a bunch of jokes at Iowa’s expense. I can’t join in this joking. If the votes don’t support one candidate over another, but someone must win, what’s left but an impartial tie-breaking scheme?

After Clinton won six of the coin tosses people joked about the “impartial” idea breaking down. Well, we around here know that there are no unfair coins. And while it’s possible to have an unfair coin toss, I’m not aware of any reason to think any of the tosses were. It’s lucky to win six coin tosses. If the tosses are fair, the chance of getting any one right is one-half. Suppose the tosses are “independent”. That is, the outcome of one doesn’t change the chances of any other. Then the chance of getting six right in a row is the chance of getting one right, times itself, six times over. That is, the chance is one-half raised to the sixth power. That’s a small number, about 1.5 percent. But it’s not so riotously small as to deserve rioting.
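If you want to check that figure, or just don’t trust arithmetic done in prose, here’s a quick sketch; the trial count and seed are arbitrary:

```python
import random

exact = 0.5 ** 6
print(exact)  # 0.015625, the "about 1.5 percent"

# A simulation for the skeptical: run many six-toss sequences and count
# how often all six tosses match a fixed call of heads.
random.seed(42)
trials = 200_000
wins = sum(
    all(random.random() < 0.5 for _ in range(6))
    for _ in range(trials)
)
print(wins / trials)  # should land close to 0.015625
```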

But is six heads in a row really as likely as any other outcome? Yes and no. It depends on what you mean by “any other outcome”. Grant that heads and tails are equally likely to come up. Grant also that coin tosses are independent. Then six heads, H H H H H H, are just as likely to come up as six tails, T T T T T T. I don’t think anyone will argue with me that far.

But are both of these exactly as likely as the first toss coming up heads and all the others tails? As likely as H T T T T T? Yes, I would say they are. But I understand if you feel skeptical, and if you want convincing. The chance of getting heads once in a fair coin toss is one-half. We started with that. What’s the chance of getting five tails in a row? That must be one-half raised to the fifth power. The first coin toss and the last five don’t depend on one another. This means the chance of that first heads followed by those five tails is one-half times one-half to the fifth power. And that’s one-half to the sixth power.

What about the first two tosses coming up heads and the next four tails? H H T T T T? We can run through the argument again. The chance of two coin tosses coming up heads would be one-half to the second power. The chance of four coin tosses coming up tails would be one-half to the fourth power. The chance of the first streak being followed by the second is the product of the two chances. One-half to the second power times one-half to the fourth power is one-half to the sixth power.

We could go on like this and try out all the possible outcomes. There’s only 64 of them. That’s going to be boring. If we want to prove that any particular string of outcomes is just as likely as any other, we need to make an argument that’s a little more clever, but also a little more abstract.

Don’t think just now of a particular sequence of coin toss outcomes. Consider this instead: what is the chance you will call a coin toss right? You might call heads, you might call tails. The coin might come up heads, the coin might come up tails. The chance you call it right, though — well, won’t that be one-half? Stay at this point until you’re sure it is.

So write out a sequence of possible outcomes. Don’t tell me what it is. It can be any set of H and T, as you like, as long as it’s six outcomes long.

What is the chance you wrote down six correct tosses in a row? That’ll be the chance of calling one outcome right, one-half, times itself six times over. One-half to the sixth power. So I know the probability that your prediction was correct. Which of the 64 possible outcomes did you write down? I don’t know. I suspect you didn’t even write one down. I would’ve just pretended I had one in mind until the essay required me to do something too. But the exact same argument applies no matter which sequence you pretended to write down. (Look at it. I didn’t use any information about what sequence you would have picked. So how could the sequence affect the outcome?) Therefore each of the 64 possible outcomes has the same chance of coming up.

So in this context, yes, six heads in a row is exactly as likely as any other sequence of six coin tosses.
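If you’d rather let a computer do the boring enumeration of all 64 sequences, it goes quickly:

```python
from itertools import product

sequences = list(product("HT", repeat=6))
print(len(sequences))  # 64

# Fairness plus independence gives every sequence the same probability.
prob = {seq: 0.5 ** 6 for seq in sequences}
print(prob[("H",) * 6] == prob[("H", "T", "T", "T", "T", "T")])  # True
print(sum(prob.values()))  # 1.0: some sequence always happens
```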

I will guess that you aren’t perfectly happy with this argument. It probably feels like something is unaccounted-for. What’s unaccounted-for is that nobody cares about the difference between the sequence H H T H H H and the sequence H H H T H H. Would you even notice the difference if I hadn’t framed the paragraph to make the difference stand out? In either case, the sequence is “one tail, five heads”. What’s the chance of getting “one tail, five heads”?

Well, the chance of getting one of several mutually exclusive outcomes is the sum of the chance of each individual outcome. And these are mutually exclusive outcomes: you can’t get both H H T H H H and H H H T H H as the result of the same set of coin tosses.

(There can be not-mutually-exclusive outcomes. Consider, for example, the chance of getting “at least three tails” and the chance of the third coin toss being heads. Calculating the chance of either of those outcomes happening demands more thinking. But we don’t have to deal with that here, so we won’t.)

There are six distinct ways to get one tails and five heads. The tails can be the first toss’s result. Or the tails can be the second toss’s result. Or the tails can be the third toss’s result. And so on. Each of these possible outcomes has the same probability, one-half to the sixth power. So the chance of getting “one tails, five heads” is one-half to the sixth power, added to itself, six times over. That is, it’s six times one-half to the sixth power. That will come up about one time in eleven that you do a sequence of six coin tosses.

There are fifteen ways to get two tails and four heads. So the chance of the outcome being “two tails, four heads” is fifteen times one-half to the sixth power. That will come up a bit less than one in four times.

There are twenty, count ’em, ways to get three tails and three heads. So the chance of that is twenty times one-half to the sixth power. That’s a little more than three times in ten. There are fifteen ways to get four tails and two heads, so the chance of that drops again. There’s six ways to get five tails and one heads. And there’s just one way to get six tails and no heads on six coin tosses.

So if you think of the outcome as “this many tails and that many heads”, then, no, not all outcomes are equally likely. “Three tails and three heads” is a lot more likely than “no tails and six heads”. “Two tails and four heads” is more likely than “one tails and five heads”.
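Those counts of ways, 1, 6, 15, 20, 15, 6, 1, are binomial coefficients, and a couple of lines reproduce the whole table:

```python
from math import comb

# comb(6, k) counts the six-toss sequences containing exactly k tails.
for k in range(7):
    ways = comb(6, k)
    print(f"{k} tails: {ways} ways, probability {ways}/64 = {ways / 64:.4f}")
```

The 6/64 line is the “about one time in eleven” from above, and the 20/64 line is the “little more than three times in ten”.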

Whether it’s right to say “every outcome is just as likely” depends on what you think “an outcome” is. If it’s a particular sequence of heads and tails, then yes, it is. If it’s the aggregate statistic of how many heads and tails, then no, it’s not.

We see this kind of distinction all over the place. Every hand of cards, for example, might be as likely to turn up as every other hand of cards. But consider five-card poker hands. There are very few hands that have the interesting pattern of being a straight flush, five sequential cards of the same suit. There are more hands that have the interesting pattern of four-of-a-kind. There are a lot of hands that have the mildly interesting pattern of two-of-a-kind and nothing else going on. And there's a huge mass of hands that don't have any pattern we've seen fit to notice. So a straight flush is regarded as a very unlikely hand to have, and four-of-a-kind more likely but still rare. Two-of-a-kind is none too rare. Nothing at all is most likely, at least in a five-card hand. (When you get seven cards, a hand with nothing at all becomes less likely. You have so many chances that you just have to hit something.)
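The poker counts come straight from combinatorics. A sketch in Python, with a few of the standard hand counts (I've picked the categories; the arithmetic is the classic textbook counting):

```python
from math import comb

total = comb(52, 5)   # all five-card hands: 2,598,960

# Straight flush: 10 choices of top card (five-high up to ace-high), 4 suits.
straight_flush = 10 * 4
# Four of a kind: pick the rank, then any one of the 48 remaining cards.
four_of_a_kind = 13 * 48
# One pair (and nothing better): pair rank and suits, then three other ranks.
one_pair = 13 * comb(4, 2) * comb(12, 3) * 4**3

for name, count in [("straight flush", straight_flush),
                    ("four of a kind", four_of_a_kind),
                    ("one pair", one_pair)]:
    print(f"{name}: {count} of {total} hands ({count / total:.5%})")
```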

The distinction carries over into statistical mechanics. The field studies the state of things. Is a mass of material solid or liquid or gas? Is a solid magnetized or not, or is it trying to be? Are molecules in a high- or a low-energy state?

Mathematicians use the name “ensemble” to describe a state of whatever it is we’re studying. But we have the same problem of saying what kind of description we mean. Suppose we are studying the magnetism of a solid object. We do this by imagining the object as a bunch of smaller regions, each with a tiny bit of magnetism. That bit might have the north pole pointing up, or the south pole pointing up. We might say the ensemble is that there are ten percent more north-pole-up regions than there are south-pole-up regions.

But by that, do we mean we’re interested in “ten percent more north-pole-up than south-pole-up regions”? Or do we mean “these particular regions are north-pole-up, and these are south-pole-up”? We distinguish this by putting in some new words.

The “canonical ensemble” is, generally, the kind of aggregate-statistical-average description of things. So, “ten percent more north-pole-up than south-pole-up regions” would be such a canonical ensemble. Or “one tails, five heads” would be a canonical ensemble. If we want to look at the fine details we speak of the “microcanonical ensemble”. That would be “these particular regions are north-pole-up, and these are south-pole-up”. Or that would be “the coin tosses came up H H H T H H”.

Just what is a canonical and what is a microcanonical ensemble depends on context. Of course it would. Consider the standpoint of a city manager, hoping to estimate the power and water needs of neighborhoods and bringing the language of statistical mechanics to the city-planning world. There, it is enough detail to know how many houses on a particular street are occupied and how many residents there are. She could fairly consider that a microcanonical ensemble. From the standpoint of the letter carriers for the post office, though, that would be a canonical ensemble. It would give an idea how much time would be needed to deliver on that street. But it would be just short of useful in getting letters to recipients. The letter carrier would want to know which people are in which house before rating that a microcanonical ensemble.

Much of statistical mechanics is studying ensembles, and which ensembles are more or less likely than others. And how that likelihood changes as conditions change.

So let me answer the original question. In this coin-toss problem, yes, every microcanonical ensemble is just as likely as every other microcanonical ensemble. The sequence ‘H H H H H H’ is just as likely as ‘H T H H H T’ or ‘T T H T H H’ are. But not every canonical ensemble is as likely as every other one. Six heads in six tosses is less likely than two heads and four tails, or three heads and three tails. The answer depends on what you mean by the question.

## Reading the Comics, September 16, 2015: Celebrity Appearance Edition

I couldn’t go on calling this Back To School Editions. A couple of the comic strips the past week have given me reason to mention people famous in mathematics or physics circles, and one who’s even famous in the real world too. That’ll do for a title.

Jeff Corriveau’s Deflocked for the 15th of September tells what I want to call an old joke about geese formations. The thing is that I’m not sure it is an old joke. At least I can’t think of it being done much. It seems like it should have been.

The formations that geese, and other birds, fly in have been a neat corner of mathematics. The question they inspire is “how do birds know what to do?” How can they form complicated groupings and, more, change their flight patterns at a moment’s notice? (Geese flying in V shapes don’t need to do that, but other flocking birds will.) One surprising answer is that if each bird is just trying to follow a couple of simple rules, then if you have enough birds, the group will do amazingly complex things. This is good for people who want to say how complex things come about. It suggests you don’t need very much to have robust and flexible systems. It’s also bad for people who want to say how complex things come about. It suggests that many things that would be interesting can’t be studied in simpler models. Use a smaller number of birds or fewer rules or such and the interesting behavior doesn’t appear.
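Those “couple of simple rules” are usually given as the three rules of the “boids” model: steer toward nearby birds, match their velocity, and keep from crowding them. Here's a minimal sketch; all the parameters are arbitrary choices of mine, not anything tuned to real birds:

```python
import numpy as np

rng = np.random.default_rng(42)

def flock_step(pos, vel, dt=0.1):
    """One update of a toy boids model: each bird follows three local rules."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        neighbors = (dist > 0) & (dist < 2.0)
        if neighbors.any():
            # Cohesion: steer toward the neighbors' center of mass.
            new_vel[i] += 0.05 * offsets[neighbors].mean(axis=0)
            # Alignment: match the neighbors' average velocity.
            new_vel[i] += 0.05 * (vel[neighbors].mean(axis=0) - vel[i])
        crowded = (dist > 0) & (dist < 0.5)
        if crowded.any():
            # Separation: move away from anyone too close.
            new_vel[i] -= 0.1 * offsets[crowded].sum(axis=0)
    # Cap each bird's speed so nothing runs off to infinity.
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = np.where(speed > 1.0, new_vel / np.maximum(speed, 1e-12), new_vel)
    return pos + dt * new_vel, new_vel

pos = rng.uniform(-3, 3, size=(30, 2))
vel = rng.uniform(-1, 1, size=(30, 2))
for _ in range(100):
    pos, vel = flock_step(pos, vel)
```

The point of the model is that any grouping behavior you see comes out of nothing but these local rules; no bird knows the shape of the flock.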

Scott Adams’s Dilbert Classics from the 15th and 16th of September (originally run the 22nd and 23rd of July, 1992) are about mathematical forecasts of the future. This is a hard field. It’s one people have been dreaming of doing for a long while. J Willard Gibbs, the renowned 19th century physicist who put the mathematics of thermodynamics in essentially its modern form, pondered whether a thermodynamics of history could be made. But attempts at making such predictions top out at demographic or rough economic forecasts, and for obvious reasons.

The next day Dilbert’s garbageman, the smartest person in the world, asserts the problem is chaos theory, that “any complex iterative model is no better than a wild guess”. I wouldn’t put it that way, although I’m not sure what would convey the idea within the space available. One problem with predicting complicated systems, even if they are deterministic, is that there is a difference between what we can measure a system to be and what the system actually is. And for some systems that slight error will be magnified quickly to the point that a prediction based on our measurement is useless. (Fortunately this seems to affect only interesting systems, so we can still do things like study physics in high school usefully.)
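The classic toy example of that measurement problem is the logistic map. Two starting points that differ by one part in ten billion stay together for a while, then end up having nothing to do with each other (the particular numbers here are mine, chosen for illustration):

```python
def logistic_orbit(x, steps):
    """Iterate the chaotic logistic map x -> 4x(1 - x)."""
    orbit = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.3, 60)
b = logistic_orbit(0.3 + 1e-10, 60)   # a tiny "measurement error"

for step in (10, 30, 50):
    print(step, abs(a[step] - b[step]))
# The gap roughly doubles each step, until the two orbits decorrelate entirely.
```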

Maria Scrivan’s Half Full for the 16th of September makes the Common Core joke. A generation ago this was a New Math joke. It’s got me curious about the history of attempts to reform mathematics teaching, and how poorly they get received. Surely someone’s written a popular or at least semipopular book about the process? I need some friends in the anthropology or sociology departments to tell me, I suppose.

In Mark Tatulli’s Heart of the City for the 16th of September, Heart is already feeling lost in mathematics. She’s in enough trouble she doesn’t recognize mathematics terms. That is an old joke, too, although I think the best version of it was done in a Bloom County with no mathematical content. (Milo Bloom met his idol Betty Crocker and learned that she was a marketing icon who knew nothing of cooking. She didn’t even recognize “shish kebob” as a cooking term.)

Mell Lazarus’s Momma for the 16th of September sneers at the idea of predicting where specks of dust will land. But the motion of dust particles is interesting. What can be said about the way dust moves when the dust is being battered by air molecules that are moving as good as randomly? This becomes a problem in statistical mechanics, and one that depends on many things, including just how fast air particles move and how big molecules are. Now for the celebrity part of this story.

Albert Einstein published four papers in his “Annus mirabilis” year of 1905. One of them was the Special Theory of Relativity, and another the mass-energy equivalence. Those, and the General Theory of Relativity, are surely why he became and still is a familiar name to people. One of his others was on the photoelectric effect. It’s a cornerstone of quantum mechanics. If Einstein had done nothing in relativity he’d still be renowned among physicists for that. The last paper, though, was on Brownian motion, the movement of particles buffeted by random forces like this. And if he’d done nothing in relativity or quantum mechanics, he’d still probably be known in statistical mechanics circles for this work. Among other things this work gave the first good estimates for the size of atoms and molecules, and gave easily observable, macroscopic-scale evidence that molecules must exist. That took some work, though.
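The macroscopic evidence comes down to a prediction: the mean squared displacement of a buffeted particle grows in proportion to time. A toy random-walk simulation shows the pattern (my numbers, nothing fitted to real dust):

```python
import numpy as np

rng = np.random.default_rng(0)

# 20,000 independent walkers take unit steps left or right at random,
# a crude stand-in for dust grains battered by air molecules.
steps = rng.choice([-1, 1], size=(20_000, 400))
paths = steps.cumsum(axis=1)

# Mean squared displacement after 100 steps and after 400 steps.
msd_100 = (paths[:, 99] ** 2).mean()
msd_400 = (paths[:, 399] ** 2).mean()
ratio = msd_400 / msd_100
print(ratio)   # close to 4: quadruple the time, quadruple the spread
```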

Dave Whamond’s Reality Check for the 16th of September shows off the Metropolitan Museum of Symmetry. This is probably meant to be an art museum. Symmetries are studied in mathematics too, though. Many symmetries, the ways you can swap shapes around, form interesting groups or rings. And in mathematical physics, symmetries give us useful information about the behavior of systems. That’s enough for me to claim this comic is mathematically linked.

## How Pinball Leagues and Chemistry Work: The Mathematics

My love and I play in several pinball leagues. I need to explain something of how they work.

Most of them organize league nights by making groups of three or four players and having them play five games each on a variety of pinball tables. The groupings are made by order. The 1st through 4th highest-ranked players who’re present are the first group, the 5th through 8th the second group, the 9th through 12th the third group, and so on. For each table the player with the highest score gets some number of league points. The second-highest score earns a lesser number of league points, third-highest gets fewer points yet, and the lowest score earns the player comments about how the table was not being fair. The total number of points goes into the player’s season score, which gives her ranking.
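For the curious, the bookkeeping is simple enough to sketch in a few lines of Python. The point values and names here are invented for illustration, since leagues differ on the exact scoring:

```python
def make_groups(players_by_rank, group_size=4):
    """Split the present players, already sorted by ranking, into groups."""
    return [players_by_rank[i:i + group_size]
            for i in range(0, len(players_by_rank), group_size)]

def award_points(scores, points=(4, 3, 2, 1)):
    """League points for one table: the best score earns the most points."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {player: points[place] for place, player in enumerate(order)}

present = ["Ann", "Bob", "Cal", "Dee", "Eve", "Flo", "Gus", "Hal"]
groups = make_groups(present)
one_table = {"Ann": 512_000, "Bob": 1_250_000, "Cal": 87_000, "Dee": 640_000}
table_points = award_points(one_table)
print(groups[0], table_points)
# → ['Ann', 'Bob', 'Cal', 'Dee'] {'Bob': 4, 'Dee': 3, 'Ann': 2, 'Cal': 1}
```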

You might see the bootstrapping problem here. Where do the rankings come from? And what happens if someone joins the league mid-season? What if someone misses a competition day? (Some leagues give a fraction of points based on the player’s season average. Other leagues award no points.) How does a player get correctly ranked?

## Reading the Comics, June 21, 2015: Blatantly Padded Edition, Part 2

I said yesterday I was padding one mathematics-comics post into two for silly reasons. And I was. But there were enough Sunday comics on point that splitting one entry into two has turned out to be legitimate. Nice how that works out sometimes.

Mason Mastroianni, Mick Mastroianni, and Perri Hart’s B.C. (June 19) uses mathematics as something to heap upon a person until they yield to your argument. It’s a fallacious way to argue, but it does work. Even at a mathematical conference the terror produced by a screen full of symbols can chase follow-up questions away. On the 21st, they present mathematics as a more obviously useful thing. Well, mathematics with a bit of physics.

Nate Frakes’s Break Of Day (June 19) is this week’s anthropomorphic algebra joke.

Niklas Eriksson’s Carpe Diem (June 20) is captioned “Life at the Quantum Level”. And it’s built on the idea that quantum particles could be in multiple places at once. Whether something can be in two places at once depends on coming up with a clear idea about what you mean by “thing” and “places” and for that matter “at once”; when you try to pin the ideas down they prove to be slippery. But the mathematics of quantum mechanics is fascinating. It cries out for treating things we would like to know about, such as positions and momentums and energies of particles, as distributions instead of fixed values. That is, we know how likely it is a particle is in some region of space compared to how likely it is somewhere else. In statistical mechanics we resort to this because we want to study so many particles, or so many interactions, that it’s impractical to keep track of them all. In quantum mechanics we need to resort to this because it appears this is just how the world works.

(It’s even less on point, but Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 21st of June has a bit of riffing on Schrödinger’s Cat.)

Brian and Ron Boychuk’s Chuckle Brothers (June 20) name-drops algebra as the kind of mathematics kids still living with their parents have trouble with. That’s probably required by the desire to make a joking definition of “aftermath”, so that some specific subject has to be named. And it needs parents to still be watching closely over their kids, something that doesn’t quite fit for college-level classes like Intro to Differential Equations. So algebra, geometry, or trigonometry it must be. I am curious whether algebra reads as the funniest of that set of words, or if it just fits better in the space available. ‘Geometry’ is as long a word as ‘algebra’, but it may not have the same connotation of being an impossibly hard class.

And from the world of vintage comic strips, Jimmy Hatlo’s Little Iodine (June 21, originally run the 18th of April, 1954) reminds us that anybody can do any amount of arithmetic if it’s something they really want to calculate.

Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney (June 21) is another strip using the idea of mathematics — and particularly word problems — to signify great intelligence. I suppose it’s easier to recognize the form of a word problem than it is to recognize a good paper on the humanities if you only have two dozen words to show it in.

Juba’s Viivi and Wagner (June 21) is a timely reminder that while sudokus may be fun logic puzzles, they are ultimately the puzzle you decide to make of them.

## Conditions of equilibrium and stability

This month Peter Mander’s CarnotCycle blog talks about the interesting world of statistical equilibriums. And particularly it talks about stable equilibriums. A system’s in equilibrium if it isn’t going to change over time. It’s in a stable equilibrium if being pushed a little bit out of equilibrium isn’t going to carry the system far from where it started.

For simple physical problems these are easy to understand. For example, a marble resting at the bottom of a spherical bowl is in a stable equilibrium. At the exact bottom of the bowl, the marble won’t roll away. If you give the marble a little nudge, it’ll roll around, but it’ll stay near where it started. A marble sitting on the top of a sphere is in an equilibrium — if it’s perfectly balanced it’ll stay where it is — but it’s not a stable one. Give the marble a nudge and it’ll roll away, never to come back.
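The marble picture translates into a rule about potential energy: equilibria sit where the derivative of the potential is zero, and they're stable where the second derivative is positive. A one-dimensional sketch, with a double-well potential I've picked for illustration:

```python
# Double-well potential V(x) = x^4/4 - x^2/2, derivatives worked out by hand.
def V_prime(x):
    return x**3 - x          # zero exactly at the equilibria

def V_double_prime(x):
    return 3 * x**2 - 1      # sign tells stable (positive) from unstable

for x in (-1.0, 0.0, 1.0):   # the three roots of V'(x) = 0
    kind = "stable" if V_double_prime(x) > 0 else "unstable"
    print(x, kind)
# → -1.0 stable, 0.0 unstable, 1.0 stable
```

The bottoms of the two wells play the part of the bowl; the hump between them plays the part of the top of the sphere.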

In statistical mechanics we look at complicated physical systems, ones with thousands or millions or even really huge numbers of particles interacting. But there are still equilibriums, some stable, some not. In these, stuff will still happen, but the kind of behavior doesn’t change. Think of a steadily-flowing river: none of the water is staying still, or close to it, but the river isn’t changing.

CarnotCycle describes how to tell, from properties like temperature and pressure and entropy, when systems are in a stable equilibrium. These are properties that don’t tell us a lot about what any particular particle is doing, but they can describe the whole system well. The essay is higher-level than usual for my blog. But if you’re taking a statistical mechanics or thermodynamics course this is just the sort of essay you’ll find useful.

In terms of simplicity, purely mechanical systems have an advantage over thermodynamic systems in that stability and instability can be defined solely in terms of potential energy. For example the center of mass of the tower at Pisa, in its present state, must be higher than in some infinitely near positions, so we can conclude that the structure is not in stable equilibrium. This will only be the case if the tower attains the condition of metastability by returning to a vertical position or absolute stability by exceeding the tipping point and falling over.

Thermodynamic systems lack this simplicity, but in common with purely mechanical systems, thermodynamic equilibria are always metastable or stable, and never unstable. This is equivalent to saying that every spontaneous (observable) process proceeds towards an equilibrium state, never away from it.

If we restrict our attention to a thermodynamic system of unchanging composition and apply…

View original post 2,534 more words

I had been talking about how much information there is in the outcome of basketball games, or tournaments, or the like. I wanted to fill in at least one technical term, to match some of the others I’d given.

In this information-theory context, an experiment is just anything that could have different outcomes. A team can win or can lose or can tie in a game; that makes the game an experiment. The outcomes are the team wins, or loses, or ties. A team can get a particular score in the game; that makes that game a different experiment. The possible outcomes are the team scores zero points, or one point, or two points, or so on up to whatever the greatest possible score is.

If you know the probability p of each of the different outcomes, and since this is a mathematics thing we suppose that you do, then we have what I was calling the information content of the outcome of the experiment. That’s a number, measured in bits, and given by the formula

$\sum_{j} - p_j \cdot \log\left(p_j\right)$

The sigma summation symbol means to evaluate the expression to the right of it for every value of some index $j$. The $p_j$ means the probability of outcome number $j$. And the logarithm may be that of any base, although if we use base two then we have an information content measured in bits. Those are the same bits as are in the bytes that make up the megabytes and gigabytes in your computer. You can see this number as an estimate of how many well-chosen yes-or-no questions you’d have to ask to pick the actual result out of all the possible ones.
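The formula is a one-liner in code. A sketch, using the coin-toss distributions from earlier in this post:

```python
from math import log2

def shannon_entropy(probabilities):
    """Information content in bits: the sum of -p * log2(p) over outcomes."""
    return sum(-p * log2(p) for p in probabilities if p > 0)

# One fair coin toss: two outcomes of probability 1/2 each is exactly one bit.
h_coin = shannon_entropy([0.5, 0.5])

# The "how many tails in six tosses" distribution: about 2.33 bits.
counts = [1, 6, 15, 20, 15, 6, 1]
h_tosses = shannon_entropy([c / 64 for c in counts])

print(h_coin, h_tosses)
```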

I’d called this the information content of the experiment’s outcome. That’s an idiosyncratic term, chosen because I wanted to hide what it’s normally called. The normal name for this is the “entropy”.

To be more precise, it’s known as the “Shannon entropy”, after Claude Shannon, pioneer of the modern theory of information. However, the equation defining it looks the same as one that defines the entropy of statistical mechanics, that thing everyone knows is always increasing and somehow connected with stuff breaking down. Well, almost the same. The statistical mechanics one multiplies the sum by a constant number called the Boltzmann constant, after Ludwig Boltzmann, who did so much to put statistical mechanics in its present and very useful form. We aren’t thrown by that. The statistical mechanics entropy describes energy that is in a system but that can’t be used. It’s almost background noise, present but nothing of interest.

Is this Shannon entropy the same entropy as in statistical mechanics? This gets into some abstract grounds. If two things are described by the same formula, are they the same kind of thing? Maybe they are, although it’s hard to see what kind of thing might be shared by “how interesting the score of a basketball game is” and “how much unavailable energy there is in an engine”.

The legend has it that when Shannon was working out his information theory he needed a name for this quantity. John von Neumann, the mathematician and pioneer of computer science, suggested, “You should call it entropy. In the first place, a mathematical development very much like yours already exists in Boltzmann’s statistical mechanics, and in the second place, no one understands entropy very well, so in any discussion you will be in a position of advantage.” There are variations of the quote, but they have the same structure and punch line. The anecdote appears to trace back to an April 1961 seminar at MIT given by one Myron Tribus, who claimed to have heard the story from Shannon. I am not sure whether it is literally true, but it does express a feeling about how people understand entropy that is true.

Well, these entropies have the same form. And they’re given the same name, give or take a modifier of “Shannon” or “statistical” or some other qualifier. They’re even often given the same symbol; normally a capital S or maybe an H is used as the quantity of entropy. (H tends to be more common for the Shannon entropy, but your equation would be understood either way.)

I’m not comfortable saying they’re the same thing, though. After all, we use the same formula to calculate a batting average and to work out the average time of a commute. But we don’t think those are the same thing, at least not more generally than “they’re both averages”. These entropies measure different kinds of things. They have different units that just can’t be sensibly converted from one to another. And the statistical mechanics entropy has many definitions that not just don’t have parallels for information, but wouldn’t even make sense for information. I would call these entropies siblings, with strikingly similar profiles, but not more than that.

But let me point out something about the Shannon entropy. It is low when an outcome is predictable. If the outcome is unpredictable, presumably knowing the outcome will be interesting, because there is no guessing what it might be. This is where the entropy is maximized. But an absolutely random outcome also has a high entropy. And that’s boring. There’s no reason for the outcome to be one option instead of another. Somehow, as looked at by the measure of entropy, the most interesting of outcomes and the most meaningless of outcomes blur together. There is something wondrous and strange in that.

## The Thermodynamics of Life

Peter Mander of the Carnot Cycle blog, which is primarily about thermodynamics, has a neat bit about constructing a mathematical model for how the body works. This model doesn’t look anything like a real body, as it’s concerned with basically the flow of heat, and how respiration powers the work our bodies need to do to live. Modeling at this sort of detail brings to mind an old joke told of mathematicians — that, challenged to design a maximally efficient dairy farm, the mathematician begins with “assume a spherical cow” — but great insights can come from models that look too simple to work.

It also, sad to say, includes a bit of Bright Young Science-Minded Lad (in this case, the author’s partner of the time) reasoning his way through what traumatized people might think, in a way that’s surely well-intended but also has to be described as “surely well-intended”, so, know that the tags up top of the article aren’t misleading.

## The Geometry of Thermodynamics (Part 2)

I should mention — I should have mentioned earlier, but it has been a busy week — that CarnotCycle has published the second part of “The Geometry of Thermodynamics”. This is a bit of a tougher read than the first part, admittedly, but it’s still worth reading. The essay reviews how James Clerk Maxwell — yes, that Maxwell — developed the thermodynamic relationships that would have made him famous in physics if it weren’t for his work in electromagnetism that ultimately overthrew the Newtonian paradigm of space and time.

The ingenious thing is that the best part of this work is done on geometric grounds, on thinking of the spatial relationships between quantities that describe how a system moves heat around. “Spatial” may seem a strange word to describe this since we’re talking about things that don’t have any direct physical presence, like “temperature” and “entropy”. But if you draw pictures of how these quantities relate to one another, you have curves and parallelograms and figures that follow the same rules of how things fit together that you’re used to from ordinary everyday objects.

A wonderful side point is a touch of human fallibility from a great mind: in working out his relations, Maxwell misunderstood just what was meant by “entropy”, and needed correction by the at-least-as-great Josiah Willard Gibbs. Many people don’t quite know what to make of entropy even today, and Maxwell was working when the word was barely a generation away from being coined, so it’s quite reasonable he might not understand a term that was relatively new and still getting its precise definition. It’s surprising nevertheless to see.

James Clerk Maxwell and the geometrical figure with which he proved his famous thermodynamic relations

Historical background

Every student of thermodynamics sooner or later encounters the Maxwell relations – an extremely useful set of statements of equality among partial derivatives, principally involving the state variables P, V, T and S. They are general thermodynamic relations valid for all systems.

The four relations originally stated by Maxwell are easily derived from the (exact) differential relations of the thermodynamic potentials:

dU = TdS – PdV   ⇒   (∂T/∂V)_S = –(∂P/∂S)_V
dH = TdS + VdP   ⇒   (∂T/∂P)_S = (∂V/∂S)_P
dG = –SdT + VdP   ⇒   –(∂S/∂P)_T = (∂V/∂T)_P
dA = –SdT – PdV   ⇒   (∂S/∂V)_T = (∂P/∂T)_V

This is how we obtain these Maxwell relations today, but it disguises the history of their discovery. The thermodynamic state functions H, G and A were yet to…

View original post 1,262 more words
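The four relations in the excerpt are all, at bottom, one fact: the mixed second derivatives of a smooth potential agree whichever order you take them in. A numerical check of the first relation, with an energy function U(S, V) I've made up purely for illustration:

```python
from math import exp

def U(S, V):
    """A made-up smooth internal energy; any smooth U(S, V) would do."""
    return exp(S) / V

h = 1e-5

def T(S, V):          # T = (dU/dS) at constant V, by central difference
    return (U(S + h, V) - U(S - h, V)) / (2 * h)

def P(S, V):          # P = -(dU/dV) at constant S
    return -(U(S, V + h) - U(S, V - h)) / (2 * h)

S0, V0 = 0.7, 1.3
dT_dV = (T(S0, V0 + h) - T(S0, V0 - h)) / (2 * h)
dP_dS = (P(S0 + h, V0) - P(S0 - h, V0)) / (2 * h)
print(dT_dV, -dP_dS)   # the first Maxwell relation: these agree
```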

## The Geometry of Thermodynamics (Part 1)

I should mention that Peter Mander’s Carnot Cycle blog has a fine entry, “The Geometry of Thermodynamics (Part I)” which admittedly opens with a diagram that looks like the sort of thing you create when you want to present a horrifying science diagram. That’s a bit of flavor.

Mander writes about part of what made J Willard Gibbs probably the greatest theoretical physicist that the United States has yet produced: Gibbs put much of thermodynamics into a logically neat system, the kind we still basically use today, and better still saw how to represent it and understand it as a matter of surface geometries. This is an abstract kind of surface — looking at the curve traced out by, say, mapping the energy of a gas against its volume, or its temperature against its entropy — but if you can accept the idea that we can draw curves representing these quantities then you get to use your understanding of how solid objects look and feel. (Gibbs even had solid models made — James Clerk Maxwell, of Maxwell’s Equations fame, even sculpted some.)

This is a reblogging of only part one, although as Mander’s on summer holiday you haven’t missed part two.

Volume One of the Scientific Papers of J. Willard Gibbs, published posthumously in 1906, is devoted to Thermodynamics. Chief among its content is the hugely long and desperately difficult “On the equilibrium of heterogeneous substances (1876, 1878)”, with which Gibbs single-handedly laid the theoretical foundations of chemical thermodynamics.

In contrast to James Clerk Maxwell’s textbook Theory of Heat (1871), which uses no calculus at all and hardly any algebra, preferring geometry as the means of demonstrating relationships between quantities, Gibbs’ magnum opus is stuffed with differential equations. Turning the pages of this calculus-laden work, one could easily be drawn to the conclusion that the writer was not a visual thinker.

But in Gibbs’ case, this is far from the truth.

The first two papers on thermodynamics that Gibbs published, in 1873, were in fact visually-led. Paper I deals with indicator diagrams and their comparative properties, while Paper II

View original post 1,490 more words

## The ideal gas equation

I did want to mention that the CarnotCycle big entry for the month is “The Ideal Gas Equation”. The Ideal Gas equation is one of the more famous equations that isn’t F = ma or E = mc², which I admit isn’t a very large group of equations; but, at the very least, its content is familiar enough.

If you keep a gas at constant temperature, and increase the pressure on it, its volume decreases, and vice-versa, known as Boyle’s Law. If you keep a gas at constant volume, and decrease its pressure, its temperature decreases, and vice-versa, known as Gay-Lussac’s law. Then Charles’s Law says if a gas is kept at constant pressure, and the temperature increases, then the volume increases, and vice-versa. (Each of these is probably named for the wrong person, because they always are.) The Ideal Gas equation combines all these relationships into one, neat, easily understood package.
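In symbols the package is PV = nRT, and the three named laws fall out by holding one quantity fixed. A quick numerical check (the particular n, T, and V are arbitrary):

```python
R = 8.314  # gas constant, J / (mol K)

def pressure(n, T, V):
    """Ideal gas law PV = nRT, solved for the pressure."""
    return n * R * T / V

# Boyle's law: at fixed temperature, P times V stays constant.
p1 = pressure(n=1.0, T=300.0, V=0.010)
p2 = pressure(n=1.0, T=300.0, V=0.020)
print(p1 * 0.010, p2 * 0.020)   # equal

# Charles's law: at fixed pressure, volume is proportional to temperature.
V2 = 1.0 * R * 600.0 / p1       # double the temperature, same pressure p1
print(V2)                        # 0.020, double the original 0.010 cubic meters
```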

Peter Mander describes some of the history of these concepts and equations, and how they came together, with the interesting way that they connect to the absolute temperature scale, and of absolute zero. Absolute temperatures — Kelvin — and absolute zero are familiar enough ideas these days that it’s difficult to remember they were ever new and controversial and intellectually challenging ideas to develop. I hope you enjoy.

If you received formal tuition in physical chemistry at school, then it’s likely that among the first things you learned were the 17th/18th century gas laws of Mariotte and Gay-Lussac (Boyle and Charles in the English-speaking world) and the equation that expresses them: PV = kT.

It may be that the historical aspects of what is now known as the ideal (perfect) gas equation were not covered as part of your science education, in which case you may be surprised to learn that it took 174 years to advance from the pressure-volume law PV = k to the combined gas law PV = kT.

The lengthy timescale indicates that putting together closely associated observations wasn’t regarded as a must-do in this particular era of scientific enquiry. The French physicist and mining engineer Émile Clapeyron eventually created the combined gas equation, not for its own sake, but because he needed an…

View original post 1,628 more words

## The Liquefaction of Gases – Part II

The CarnotCycle blog has a continuation of last month’s The Liquefaction of Gases, named, as you might expect, The Liquefaction of Gases, Part II, and it’s another intriguing piece. The story here is about how the theory of cooling, and of phase changes — under what conditions gases will turn into liquids — was developed. There’s a fair bit of mathematics involved, although most of the important work is in polynomials. If you remember in algebra (or in pre-algebra) drawing curves for functions that had x³ in them, and in finding how they sometimes had one and sometimes had three real roots, then you’re well on your way to understanding the work which earned Johannes van der Waals the 1910 Nobel Prize in Physics.
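In reduced units van der Waals's equation rearranges into exactly such a cubic in the volume, and below the critical temperature the cubic picks up three real roots, which is the liquid-versus-gas story. A sketch, with sample points I've chosen arbitrarily:

```python
import numpy as np

def volume_roots(P, T):
    """Real volume roots of the reduced van der Waals equation.

    P = 8T/(3V - 1) - 3/V^2 rearranges to 3P V^3 - (P + 8T) V^2 + 9V - 3 = 0.
    """
    roots = np.roots([3 * P, -(P + 8 * T), 9, -3])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

print(volume_roots(P=1.0, T=1.2))   # above the critical point: one volume
print(volume_roots(P=0.6, T=0.9))   # below it: three, roughly 0.61, 1.0, 2.72
```

The smallest root is the liquid's volume, the largest the gas's, and the middle one an unstable state the substance won't actually sit in.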

Future Nobel Prize winners both. Kamerlingh Onnes and Johannes van der Waals in 1908.

On Friday 10 July 1908, at Leiden in the Netherlands, Kamerlingh Onnes succeeded in liquefying the one remaining gas previously thought to be non-condensable – helium – using a sequential Joule-Thomson cooling technique to drive the temperature down to just 4 degrees above absolute zero. The event brought to a conclusion the race to liquefy the so-called permanent gases, following the revelation that all gases have a critical temperature below which they must be cooled before liquefaction is possible.

This crucial fact was established by Dr. Thomas Andrews, professor of chemistry at Queen’s College Belfast, in his groundbreaking study of the liquefaction of carbon dioxide, “On the Continuity of the Gaseous and Liquid States of Matter”, published in the Philosophical Transactions of the Royal Society of London in 1869.

As described in Part I of…

View original post 2,047 more words

## The Liquefaction of Gases – Part I

I know, or at least I’m fairly confident, there’s a couple readers here who like deeper mathematical subjects. It’s fine to come up with simulated Price is Right games or figure out what grades one needs to pass the course, but those aren’t particularly challenging subjects.

But those are hard to write, so, while I stall, let me point you to CarnotCycle, which has a nice historical article about the problem of liquefaction of gases, a problem that’s steeped not just in thermodynamics but in engineering. If you’re a little familiar with thermodynamics you likely won’t be surprised to see names like William Thomson, James Joule, or Willard Gibbs turn up. I was surprised to see T O’Conor Sloane turn up in the additional reading; science fiction fans might vaguely remember that name, as he was the editor of Amazing Stories for most of the 1930s, in between Hugo Gernsback and Raymond Palmer. It’s often a surprising world.

On Monday 3 December 1877, the French Academy of Sciences received a letter from Louis Cailletet, a 45-year-old physicist from Châtillon-sur-Seine. The letter stated that Cailletet had succeeded in liquefying both carbon monoxide and oxygen.

Liquefaction as such was nothing new to 19th century science, it should be said. The real news value of Cailletet’s announcement was that he had liquefied two gases previously considered ‘non condensable’.

While a number of gases such as chlorine, carbon dioxide, sulfur dioxide, hydrogen sulfide, ethylene and ammonia had been liquefied by the simultaneous application of pressure and cooling, the principal gases comprising air – nitrogen and oxygen – together with carbon monoxide, nitric oxide, hydrogen and helium, had stubbornly refused to liquefy, despite the use of pressures up to 3000 atmospheres. By the mid-1800s, the general opinion was that these gases could not be converted into liquids under any circumstances.

But in…


## CarnotCycle on the Gibbs-Helmholtz Equation

I’m a touch late discussing this and can only plead that it has been December, after all. Over on the CarnotCycle blog — which is focused on thermodynamics in a way I rather admire — there was recently a discussion of the Gibbs-Helmholtz Equation, which turns up in thermodynamics classes. It goes a bit better than the class I remember, by showing a couple of examples of actually using the equation to understand how chemistry works. It’s so easy in a class like this to get busy working with symbols and forget that thermodynamics is a supremely practical science [1].

The Gibbs-Helmholtz Equation — named for Josiah Willard Gibbs and for Hermann von Helmholtz, both of whom developed it independently (Helmholtz first) — comes in a couple of different forms, which CarnotCycle describes. All these different forms are meant to describe whether a particular change in a system is likely to happen. CarnotCycle’s discussion gives a couple of examples of actually working out the numbers, including for the Haber process, which I don’t remember reading about in calculative detail before. So I wanted to recommend it as a bit of practical mathematics or physics.
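For reference, here’s the form of the equation I remember from class, along with the reaction version that makes it practically useful (this is the standard textbook statement, not necessarily the exact form CarnotCycle works from):

```latex
\left( \frac{\partial}{\partial T} \frac{G}{T} \right)_p = -\frac{H}{T^2}
\qquad \text{and, for a reaction,} \qquad
\left( \frac{\partial}{\partial T} \frac{\Delta G}{T} \right)_p = -\frac{\Delta H}{T^2}
```

So measuring how the Gibbs free energy change over temperature, ΔG/T, varies as the temperature changes at constant pressure tells you the enthalpy change ΔH of the reaction, which is just the sort of thing you want to know for something like the Haber process.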

[1] I think it was Stephen Brush who pointed out that many of the earliest papers in thermodynamics appeared in railroad industry journals, because the problems of efficiently getting power from engines, and of how materials change when they get below freezing, are critically important to turning railroads from experimental contraptions into a productive industry. The observation might not be original to him. The observation also might have been Wolfgang Schivelbusch’s instead.

## From ElKement: May The Force Field Be With You

I’m derelict in mentioning this, but ElKement’s blog, Theory And Practice Of Trying To Combine Just Anything, has published the second part of a non-equation-based description of quantum field theory. This one, titled “May The Force Field Be With You: Primer on Quantum Mechanics and Why We Need Quantum Field Theory”, is about introducing the idea of a field, and a bit of how fields can be understood in quantum-mechanical terms.

A field, in this context, means some quantity that’s got a defined value for every point in space and time that you’re studying. As ElKement notes, the temperature is probably the field most familiar to people. I’d imagine that’s partly because it’s relatively easy to feel the temperature change as one goes about one’s business — after all, gravity is also a field, but almost none of us feel it appreciably change — and because weather maps make its changes in space and in time available in attractive pictures.

The thing the field contains can be just about anything. The temperature would be just a plain old number, or as mathematicians would have it a “scalar”. But you can also have fields that describe stuff like the pull of gravity, which is a certain amount of pull and pointing, for us, toward the center of the earth. You can also have fields that describe, for example, how quickly and in what direction the water within a river is flowing. These strengths-and-directions are called “vectors” [1], and a field of vectors offers a lot of interesting mathematics and useful physics. You can also plunge into more exotic mathematical constructs, but you don’t have to. And you don’t need to understand any of this to read ElKement’s more robust introduction to all this.
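If it helps make the scalar-versus-vector distinction concrete, here’s a toy sketch. The formulas are entirely made up for illustration; the point is just the shape of what each field hands back at a point:

```python
import math

# A "field" assigns a value to every point; here, points (x, y) in a plane.

def temperature(x, y):
    """Scalar field: one plain number at each point (made-up formula)."""
    return 20.0 + 5.0 * math.cos(x) * math.sin(y)

def river_flow(x, y):
    """Vector field: a strength-and-direction at each point, here as an
    (east, north) velocity pair (made-up formula)."""
    return (1.0 - 0.1 * y * y, 0.2 * math.sin(x))

print(temperature(0.0, math.pi / 2))   # a single number
print(river_flow(0.0, 1.0))            # a pair: magnitude and direction
```

Asking the temperature field about a point gets you one number; asking the river-flow field gets you a velocity, which has both a size and a direction. That extra structure is what makes vector fields so much richer to work with.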

[1] The independent student newspaper for the New Jersey Institute of Technology is named The Vector, and has as motto “With Magnitude and Direction Since 1924”. I don’t know if other tech schools have newspapers which use a similar joke.