I have been writing, albeit more slowly, this month. I’m also reading, also more slowly than usual. Here are some things that caught my attention.
One is from Elke Stangl, of the Elkemental blog. “Re-Visiting Carnot’s Theorem” is about one of the centerpieces of thermodynamics. It’s about how much work you can possibly get out of an engine, and how much must be lost no matter how good your engineering is. Thermodynamics is the secret spine of modern physics. It was born of supremely practical problems, many of them related to railroads or factories. And it teaches how much solid information can be drawn about a system even if we know nothing about the components of the system. Stangl also brings ASCII art back from its Usenet and Twitter homes. There’s just stuff that is best done as a text picture.
Meanwhile on the CarnotCycle blog Peter Mander writes on “Le Châtelier’s principle”. This is related to the question of how temperature affects chemical reactions: how fast they will be, how completely they’ll use the reagents. How a system that’s reached equilibrium will react to something that unsettles the equilibrium. We call that a perturbation. Mander reviews the history of the principle, which hasn’t always been well-regarded, and explores why it might have gone under-appreciated for decades.
And lastly MathsByAGirl has published a couple of essays on spirals. Who doesn’t like them? Three-dimensional spirals, that is, helixes, have some obvious things to talk about. A big one is that there’s such a thing as handedness. The mirror image of a coil is not the same thing as the coil flipped around. This handedness has analogues and implications through chemistry and biology. Two-dimensional spirals, by contrast, don’t have handedness like that. But we’ve grouped spirals into many different types, each with their own beauty. They’re worth looking at.
Stigler’s Law is a half-joking principle of mathematics and scientific history. It says that scientific discoveries are never named for the person who discovered them. It’s named for the statistician Stephen Stigler, who asserted that the principle was discovered by the sociologist Robert K Merton.
If you study much scientific history you start to wonder if anything is named correctly. There are reasons why. Often it’s very hard to say exactly what the discovery is, especially if it’s something fundamental. Often the earliest reports of something are unclear, at least to later eyes. People’s attention falls on a person who did very well describing the discovery or who effectively publicized it. Sometimes a discovery is just in the air, and many people have important pieces of it nearly simultaneously. And sometimes history just seems perverse. Pell’s Equation, for example, is named for John Pell, who did not discover it, did not solve it, and did not particularly advance our understanding of it. We seem to name it Pell’s because Pell had translated into English a book which included a solution of the problem, and Leonhard Euler mistakenly thought Pell had solved it.
The Carnot Cycle blog for this month is about a fine example of naming confusion. In this case it’s about Boyle’s Law. That’s one of the rules describing how gases work. It says that, if a gas is held at a constant temperature, and the amount of gas doesn’t change, then the pressure of the gas times its volume stays constant. Squeeze the gas into a smaller volume and it exerts more pressure on the container it’s in. Stretch it into a larger volume and it presses more weakly on the container.
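The rule is simple enough to play with in a few lines of code. This little Python sketch is my own illustration, not anything from Mander’s post; the function name and the numbers are made up for the occasion.

```python
# Boyle's Law in a function: with temperature and amount of gas fixed,
# pressure times volume stays constant, so P1 * V1 = P2 * V2.

def boyle_pressure(p1, v1, v2):
    """Pressure after an isothermal change from volume v1 to v2."""
    return p1 * v1 / v2

# Squeeze a gas at 100 kPa from 2.0 L down to 1.0 L: the pressure doubles.
print(boyle_pressure(100.0, 2.0, 1.0))  # 200.0
```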
Obvious? Perhaps. But it is a thing that had to be discovered. There’s a story behind that, and Peter Mander tells some of it.
And then three days pass and I have enough comic strips for another essay. That’s fine by me, really. I picked this edition’s name because there’s a comic strip that actually touches on information theory, and another that’s about a much-needed mathematical symbol, and another about the ways we represent numbers. That’s enough grounds for me to use the title.
Samson’s Dark Side Of The Horse for the 19th of November looks like this week’s bid for an anthropomorphic numerals joke. I suppose it’s actually numeral cosplay instead. I’m amused, anyway.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 19th of November makes a patent-law joke out of the invention of zero. It’s also an amusing joke. It may be misplaced, though. The origin of zero as a concept is hard enough to trace. We can at least trace the symbol for zero. In Finding Zero: A Mathematician’s Odyssey to Uncover the Origins of Numbers, Amir D Aczel traces out not just the (currently understood) history of Arabic numerals, but some of how the history of that history has evolved, and finally traces down the oldest known example of a written (well, carved) zero.
Tony Cochrane’s Agnes for the 20th of November is at heart just a joke about a student’s apocalyptically bad grades. It contains an interesting punch line, though, in Agnes’s statement that “math people are dreadful spellers”. I haven’t heard that before. It might be a joke about algebra introducing letters into numbers. But it does seem to me there’s a supposition that mathematics people aren’t very good writers or speakers. I do remember back as an undergraduate other people on the student newspaper being surprised I could write despite majoring in physics and mathematics. That may reflect people remembering bad experiences of sitting in class with no idea what the instructor was going on about. It’s easy to go from “I don’t understand this mathematics class” to “I don’t understand mathematics people”.
Steve Sicula’s Home and Away for the 20th of November is about using gambling as a way to teach mathematics. So it would be a late entry for the recent Gambling Edition of the Reading The Comics posts. Although this strip is a rerun from the 15th of August, 2008, so it’s actually an extremely early entry.
Ruben Bolling’s Tom The Dancing Bug for the 20th of November is a Super-Fun-Pak Comix installment. And for a wonder it hasn’t got a Chaos Butterfly sequence. Under the Guy Walks Into A Bar label is a joke about a horse doing arithmetic that itself swings into a base-ten joke. In this case it’s suggested the horse would count in base four, and I suppose that’s plausible enough. The joke depends on the horse pronouncing a base four “10” as “ten”, when the number is actually “four”. But the lure of the digits is very hard to resist, and saying “four” suggests the numeral “4” whatever the base is supposed to be.
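If you’d like to check the horse’s arithmetic, here is a quick converter from an integer to its digit string in another base. The function is just my own illustration, nothing from the strip.

```python
def to_base(n, base):
    """Digits of a non-negative integer n written in the given base."""
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(str(n % base))  # least significant digit first
        n //= base
    return "".join(reversed(digits))

print(to_base(4, 4))   # '10': what the horse should pronounce "four"
print(to_base(10, 4))  # '22': ten, written in base four
```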
Mark Leiknes’s Cow and Boy for the 21st of November is a rerun from the 9th of August, 2008. It mentions the holographic principle, which is a neat concept. The principle’s explained all right in the comic. The idea was first developed in the late 1970s, following the study of black hole thermodynamics. Black holes are fascinating because the mathematics of them suggest they have a temperature, and an entropy, and even information which can pass into and out of them. This study implied that information about the three-dimensional volume of the black hole was contained entirely in its two-dimensional surface. From here things get complicated, and I’m going to shy away from describing the whole thing because I’m not sure I can do it competently. It is an amazing thing that information about a volume can be encoded in its surface, and vice-versa. And it is astounding that we can imagine a logically consistent organization of the universe that has a structure completely unlike the one our senses suggest. It’s a lasting and hard-to-dismiss philosophical question. How much of the way the world appears to be structured is the result of our minds, our senses, imposing that structure on it? How much of it is because the world is ‘really’ like that? (And does ‘really’ mean anything that isn’t trivial, then?)
I should make clear that while we can imagine it, we haven’t been able to prove that this holographic universe is a valid organization. Explaining gravity in quantum mechanics terms is a difficult point, as it often is.
Dave Blazek’s Loose Parts for the 21st of November is a two- versus three-dimensions joke. The three-dimensional figure on the right is a standard way of drawing x-, y-, and z-axes, organized in an ‘isometric’ view. That’s one of the common ways of drawing three-dimensional figures on a two-dimensional surface. The two-dimensional figure on the left is a quirky representation, but it’s probably unavoidable as a way to make the whole panel read cleanly. Usually when the axes are drawn isometrically, the x- and y-axes are the lower ones, with the z-axis the one pointing vertically upward. That is, they’re the ones in the floor of the room. So the typical two-dimensional figure would be the lower axes.
I hate to report this but Peter Mander’s CarnotCycle blog has reached its last post for the calendar year. It’s a nice, practical one, though, explaining how antifreeze works. What’s important about antifreeze to us is that we can add something to water so that its freezing point is at some more convenient temperature. The logic of why it works is there in statistical mechanics, and the mathematics of it can be pretty simple. One of the things which awed me in high school chemistry class was learning how to use the formulas to describe how much different solutions would shift the freezing temperature around. It seemed all so very simple, and so practical too.
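Those high school formulas are simple things. The sketch below is my own illustration of the ideal-solution approximation, not anything from the CarnotCycle post; the numbers are the textbook values for water and table salt, and real antifreeze mixtures stray from the ideal formula.

```python
# The freezing-point depression formula from introductory chemistry:
# delta_T = i * Kf * m, with i the number of particles each dissolved unit
# breaks into, Kf the solvent's cryoscopic constant, and m the molality.

def freezing_point_depression(i, kf, molality):
    return i * kf * molality

# Water's Kf is about 1.86 K kg/mol. Dissolving 1 mol of NaCl (which splits
# into two ions, so i = 2) per kilogram of water lowers the freezing point
# by roughly 3.7 kelvin.
print(freezing_point_depression(2, 1.86, 1.0))  # 3.72
```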
The Carnot Cycle blog for this month is about chemical potential. “Chemical potential” in thermodynamics covers a lot of interesting phenomena. It gives a way to model chemistry using the mechanisms of statistical mechanics. It lets us study a substance that’s made of several kinds of particle. This potential is written with the symbol μ, and I admit I don’t know how that symbol got picked over all the possible alternatives.
The chemical potential varies with the number of particles. Each different type of particle gets its own chemical potential, so there may be a μ1 and μ2 and μ3 and so on. The chemical potential μ1 is how much the free energy varies as the count of particles-of-type-1 varies. μ2 is how much the free energy varies as the count of particles-of-type-2 varies, and so on. This might strike you as similar to the way pressure and volume of a gas depend on each other, or, if you retained a bit more of thermodynamics, to the way temperature and entropy vary. This is so. Pressure and volume are conjugate variables, as are temperature and entropy, and so are chemical potential and particle number. (And for a wonder, “particle number” means exactly what it suggests: the number of particles of that kind in the system.)
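In symbols, and taking the Gibbs free energy G as the relevant free energy (the natural choice at constant temperature and pressure), the standard definition reads:

```latex
\mu_i \;=\; \left( \frac{\partial G}{\partial n_i} \right)_{T,\,P,\,n_{j \neq i}},
\qquad
dG \;=\; -S\,dT \;+\; V\,dP \;+\; \sum_i \mu_i \, dn_i .
```

The pairing of each μ with its particle number in that last sum is the conjugate-variable relationship, the same way −S pairs with dT and V with dP.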
It was the American mathematical physicist Josiah Willard Gibbs who introduced the concepts of phase and chemical potential in his milestone monograph On the Equilibrium of Heterogeneous Substances (1875-1878) with which he almost single-handedly laid the theoretical foundations of chemical thermodynamics.
In a paragraph under the heading “On Coexistent Phases of Matter” Gibbs mentions – in passing – that for a system of coexistent phases in equilibrium at constant temperature and pressure, the chemical potential μ of any component must have the same value in every phase.
This simple statement turns out to have considerable practical value as we shall see. But first, let’s go through the formal proof of Gibbs’ assertion.
An important result
Consider a system of two phases, each containing the same components, in equilibrium at constant temperature and pressure. Suppose a small quantity dni moles of any component i is transferred from phase A in…
I knew I’d been forgetting something about the end of summer. I’m embarrassed that, once again, it took Peter Mander’s Carnot Cycle blog resuming its discussions of thermodynamics to remind me.
The September article is about Gibbs’s phase rule. Gibbs here is Josiah Willard Gibbs, who established much of the mathematical vocabulary of thermodynamics. The phase rule here talks about the change of a substance from one phase to another. The classic example is water changing from liquid to solid, or solid to gas, or gas to liquid. Everything does that for some combinations of pressure and temperature and available volume. Water is just a good example because we can see those phase transitions happen whenever we want.
The question that feels natural to mathematicians, and physicists, is about degrees of freedom. Suppose that we’re able to take a substance and change its temperature or its volume or its pressure. How many of those things can we change at once without making the material different? And the phase rule is a way to calculate that. It’s not always the same number because at some combinations of pressure and temperature and volume the substance can be equally well either liquid or gas, or be gas or solid, or be solid or liquid. These represent phase transitions, melting or solidifying or evaporating. There’s even one combination — the triple point — where the material can be solid, liquid, or gas simultaneously.
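The rule itself, once derived, is just arithmetic. Here is a sketch of my own; the function name is mine, and the examples are the classic ones for water.

```python
# Gibbs's phase rule: F = C - P + 2, where C is the number of chemical
# components, P the number of coexisting phases, and F the degrees of
# freedom, that is, how many properties we can still vary independently
# while keeping all those phases present.

def degrees_of_freedom(components, phases):
    return components - phases + 2

print(degrees_of_freedom(1, 1))  # 2: liquid water exists over ranges of T and P
print(degrees_of_freedom(1, 2))  # 1: along a melting or boiling curve
print(degrees_of_freedom(1, 3))  # 0: the triple point, one exact (T, P)
```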
Carnot Cycle presents the way that Gibbs originally derived his phase rule. And it’s remarkably neat and clean and accessible. The meat of it is really a matter of counting, keeping track of how much information we have and how much we want and looking at the difference between the two. I recommend reading it even if you are somehow not familiar with differentials. Simply trust that a “d” followed by some other letter (or a letter with a subscript) is some quantity whose value we might be interested in, and you should follow the reasoning well.
The Phase Rule formula was first stated by the American mathematical physicist Josiah Willard Gibbs in his monumental masterwork On the Equilibrium of Heterogeneous Substances (1875-1878), in which he almost single-handedly laid the theoretical foundations of chemical thermodynamics.
In a paragraph under the heading “On Coexistent Phases of Matter”, Gibbs gives the derivation of his famous formula in just 77 words. Of all the many Phase Rule proofs in the thermodynamic literature, it is one of the simplest and shortest. And yet textbooks of physical science have consistently overlooked it in favor of more complicated, less lucid derivations.
To redress this long-standing discourtesy to Gibbs, CarnotCycle here presents Gibbs’ original derivation of the Phase Rule in an up-to-date form. His purely prose description has been supplemented with clarifying mathematical content, and the outmoded symbols used in the single equation to which he refers have been replaced with their…
I couldn’t go on calling this Back To School Editions. A couple of the comic strips the past week have given me reason to mention people famous in mathematics or physics circles, and one who’s even famous in the real world too. That’ll do for a title.
Jeff Corriveau’s Deflocked for the 15th of September tells what I want to call an old joke about geese formations. The thing is that I’m not sure it is an old joke. At least I can’t think of it being done much. It seems like it should have been.
The formations that geese, and other birds, fly in have been a neat corner of mathematics. The question they inspire is “how do birds know what to do?” How can they form complicated groupings and, more, change their flight patterns at a moment’s notice? (Geese flying in V shapes don’t need to do that, but other flocking birds will.) One surprising answer is that if each bird is just trying to follow a couple of simple rules, then if you have enough birds, the group will do amazingly complex things. This is good for people who want to say how complex things come about. It suggests you don’t need very much to have robust and flexible systems. It’s also bad for people who want to say how complex things come about. It suggests that many things that would be interesting can’t be studied in simpler models. Use a smaller number of birds or fewer rules or such and the interesting behavior doesn’t appear.
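To show what “a couple of simple rules” can look like, here is a toy sketch in the spirit of Craig Reynolds’s “boids”. The rule weights and function names are invented for illustration; real flocking models add things like separation and limited fields of vision.

```python
import random

def step(positions, velocities, cohesion=0.01, matching=0.05):
    """One update: each bird drifts toward the flock's average position
    and nudges its velocity toward the flock's average velocity."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    avx = sum(v[0] for v in velocities) / n
    avy = sum(v[1] for v in velocities) / n
    new_p, new_v = [], []
    for (x, y), (vx, vy) in zip(positions, velocities):
        vx += cohesion * (cx - x) + matching * (avx - vx)
        vy += cohesion * (cy - y) + matching * (avy - vy)
        new_p.append((x + vx, y + vy))
        new_v.append((vx, vy))
    return new_p, new_v

def spread(velocities):
    """Largest deviation of any bird's velocity from the flock average."""
    n = len(velocities)
    avx = sum(v[0] for v in velocities) / n
    avy = sum(v[1] for v in velocities) / n
    return max(abs(vx - avx) + abs(vy - avy) for vx, vy in velocities)

random.seed(1)
pos = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(20)]
vel = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
start = spread(vel)
for _ in range(300):
    pos, vel = step(pos, vel)
print(spread(vel) < start)  # True: the velocities have fallen into line
```

No bird here knows anything about the flock as a whole; each one only leans toward a couple of averages, and coordinated motion falls out anyway.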
Scott Adams’s Dilbert Classics from the 15th and 16th of September (originally run the 22nd and 23rd of July, 1992) are about mathematical forecasts of the future. This is a hard field. It’s one people have been dreaming of doing for a long while. J Willard Gibbs, the renowned 19th century physicist who put the mathematics of thermodynamics in essentially its modern form, pondered whether a thermodynamics of history could be made. But attempts at making such predictions top out at demographic or rough economic forecasts, and for obvious reason.
The next day Dilbert’s garbageman, the smartest person in the world, asserts the problem is chaos theory, that “any complex iterative model is no better than a wild guess”. I wouldn’t put it that way, although I’m not sure what would convey the idea within the space available. One problem with predicting complicated systems, even if they are deterministic, is that there is a difference between what we can measure a system to be and what the system actually is. And for some systems that slight error will be magnified quickly to the point that a prediction based on our measurement is useless. (Fortunately this seems to affect only interesting systems, so we can still do things like study physics in high school usefully.)
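That magnification of measurement error is easy to see with the logistic map, the textbook chaotic system. This little sketch (mine, not anything from the strip) starts two orbits a billionth apart and watches them part ways:

```python
def logistic_orbit(x, steps):
    """Iterate the map x -> 4x(1-x), the classic chaotic example."""
    orbit = [x]
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.400000000, 50)
b = logistic_orbit(0.400000001, 50)
seps = [abs(p - q) for p, q in zip(a, b)]
print(seps[0], max(seps))  # a billionth apart at the start, order one later
```

A measurement good to nine decimal places, in other words, buys only a few dozen steps of useful prediction here.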
Maria Scrivan’s Half Full for the 16th of September makes the Common Core joke. A generation ago this was a New Math joke. It’s got me curious about the history of attempts to reform mathematics teaching, and how poorly they get received. Surely someone’s written a popular or at least semipopular book about the process? I need some friends in the anthropology or sociology departments to tell me, I suppose.
In Mark Tatulli’s Heart of the City for the 16th of September, Heart is already feeling lost in mathematics. She’s in enough trouble she doesn’t recognize mathematics terms. That is an old joke, too, although I think the best version of it was done in a Bloom County with no mathematical content. (Milo Bloom met his idol Betty Crocker and learned that she was a marketing icon who knew nothing of cooking. She didn’t even recognize “shish kebob” as a cooking term.)
Mell Lazarus’s Momma for the 16th of September sneers at the idea of predicting where specks of dust will land. But the motion of dust particles is interesting. What can be said about the way dust moves when the dust is being battered by air molecules that are moving as good as randomly? This becomes a problem in statistical mechanics, and one that depends on many things, including just how fast air particles move and how big molecules are. Now for the celebrity part of this story.
Albert Einstein published four papers in his “Annus mirabilis” year of 1905. One of them was the Special Theory of Relativity, and another the mass-energy equivalence. Those, and the General Theory of Relativity, are surely why he became and still is a familiar name to people. One of his others was on the photoelectric effect. It’s a cornerstone of quantum mechanics. If Einstein had done nothing in relativity he’d still be renowned among physicists for that. The last paper, though, that was on Brownian motion, the movement of particles buffeted by random forces like this. And if he’d done nothing in relativity or quantum mechanics, he’d still probably be known in statistical mechanics circles for this work. Among other things this work gave the first good estimates for the size of atoms and molecules, and gave easily observable, macroscopic-scale evidence that molecules must exist. That took some work, though.
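The heart of that Brownian-motion result survives in even the crudest model. A particle kicked left or right at random has a mean squared displacement that grows linearly with time, which is the scaling Einstein worked out. This sketch is my own discrete toy, not anything from Einstein’s paper.

```python
import random

def mean_squared_displacement(steps, walkers, rng):
    """Average squared distance from the start for many random walkers."""
    total = 0.0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            x += rng.choice((-1, 1))  # a random kick left or right
        total += x * x
    return total / walkers

rng = random.Random(42)
msd_short = mean_squared_displacement(100, 2000, rng)
msd_long = mean_squared_displacement(400, 2000, rng)
print(msd_short, msd_long)  # near 100 and near 400: growth linear in time
```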
My love and I play in several pinball leagues. I need to explain something of how they work.
Most of them organize league nights by making groups of three or four players and having them play five games each on a variety of pinball tables. The groupings are made by order. The 1st through 4th highest-ranked players who’re present are the first group, the 5th through 8th the second group, the 9th through 12th the third group, and so on. For each table the player with the highest score gets some number of league points. The second-highest score earns a lesser number of league points, third-highest gets fewer points yet, and the lowest score earns the player comments about how the table was not being fair. The total number of points goes into the player’s season score, which gives her ranking.
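That bookkeeping is easy enough to sketch in code. The grouping scheme follows the description above; the particular point values (4-3-2-1) and the player names are my own invention, since real leagues set their own scales.

```python
def make_groups(players_by_rank, size=4):
    """Split the attending players, already in rank order, into groups."""
    return [players_by_rank[i:i + size]
            for i in range(0, len(players_by_rank), size)]

def award_points(scores, points=(4, 3, 2, 1)):
    """scores maps player -> game score; returns player -> league points."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {player: points[place] for place, player in enumerate(order)}

attending = ["Ann", "Bob", "Cal", "Dee", "Eve", "Fay", "Gus", "Hal"]
groups = make_groups(attending)
pts = award_points({"Ann": 5_000_000, "Bob": 12_000_000,
                    "Cal": 800_000, "Dee": 3_000_000})
print(groups[0])  # ['Ann', 'Bob', 'Cal', 'Dee']
print(pts)        # Bob's top score earns 4 points; Cal's low score earns 1
```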
You might see the bootstrapping problem here. Where do the rankings come from? And what happens if someone joins the league mid-season? What if someone misses a competition day? (Some leagues give a fraction of points based on the player’s season average. Other leagues award no points.) How does a player get correctly ranked?
This month Peter Mander’s CarnotCycle blog talks about the interesting world of statistical equilibriums. And particularly it talks about stable equilibriums. A system’s in equilibrium if it isn’t going to change over time. It’s in a stable equilibrium if being pushed a little bit out of equilibrium isn’t going to send the system off to some completely different state.
For simple physical problems these are easy to understand. For example, a marble resting at the bottom of a spherical bowl is in a stable equilibrium. At the exact bottom of the bowl, the marble won’t roll away. If you give the marble a little nudge, it’ll roll around, but it’ll stay near where it started. A marble sitting on the top of a sphere is in an equilibrium — if it’s perfectly balanced it’ll stay where it is — but it’s not a stable one. Give the marble a nudge and it’ll roll away, never to come back.
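The bowl-and-dome picture can even be put in numbers: an equilibrium is a point where the potential energy has zero slope, and it is stable when the potential curves upward there. This finite-difference check is my own illustration, not anything from the CarnotCycle essay.

```python
def second_derivative(U, x, h=1e-5):
    """Central-difference estimate of U''(x)."""
    return (U(x + h) - 2.0 * U(x) + U(x - h)) / (h * h)

def bowl(x):
    return x * x    # marble at the bottom of a spherical bowl

def dome(x):
    return -x * x   # marble balanced on top of a sphere

print(second_derivative(bowl, 0.0) > 0)  # True: stable equilibrium
print(second_derivative(dome, 0.0) > 0)  # False: equilibrium, but unstable
```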
In statistical mechanics we look at complicated physical systems, ones with thousands or millions or even really huge numbers of particles interacting. But there are still equilibriums, some stable, some not. In these, stuff will still happen, but the kind of behavior doesn’t change. Think of a steadily-flowing river: none of the water is staying still, or close to it, but the river isn’t changing.
CarnotCycle describes how to tell, from properties like temperature and pressure and entropy, when systems are in a stable equilibrium. These are properties that don’t tell us a lot about what any particular particle is doing, but they can describe the whole system well. The essay is higher-level than usual for my blog. But if you’re taking a statistical mechanics or thermodynamics course this is just the sort of essay you’ll find useful.
In terms of simplicity, purely mechanical systems have an advantage over thermodynamic systems in that stability and instability can be defined solely in terms of potential energy. For example the center of mass of the tower at Pisa, in its present state, must be higher than in some infinitely near positions, so we can conclude that the structure is not in stable equilibrium. Stability will only be attained if the tower reaches the condition of metastability by returning to a vertical position, or absolute stability by exceeding the tipping point and falling over.
Thermodynamic systems lack this simplicity, but in common with purely mechanical systems, thermodynamic equilibria are always metastable or stable, and never unstable. This is equivalent to saying that every spontaneous (observable) process proceeds towards an equilibrium state, never away from it.
If we restrict our attention to a thermodynamic system of unchanging composition and apply…
Entropy is hard to understand. It’s deceptively easy to describe, and the concept is popular, but to understand it is challenging. In this month’s entry CarnotCycle talks about thermodynamic entropy and where it comes from. I don’t promise you will understand it after this essay, but you will be closer to understanding it.
Reversible change is a key concept in classical thermodynamics. It is important to understand what is meant by the term as it is closely allied to other important concepts such as equilibrium and entropy. But reversible change is not an easy idea to grasp – it helps to be able to visualize it.
Reversibility and mechanical systems
The simple mechanical system pictured above provides a useful starting point. The aim of the experiment is to see how much weight can be lifted by the fixed weight M1. Experience tells us that if a small weight M2 is attached – as shown on the left – then M1 will fall fast while M2 is pulled upwards at the same speed.
Experience also tells us that as the weight of M2 is increased, the lifting speed will decrease until a limit is reached when the weight difference between M2 and M1 becomes…
Peter Mander of the Carnot Cycle blog, which is primarily about thermodynamics, has a neat bit about constructing a mathematical model for how the body works. This model doesn’t look anything like a real body, as it’s concerned with basically the flow of heat, and how respiration powers the work our bodies need to do to live. Modeling at this sort of detail brings to mind an old joke told of mathematicians — that, challenged to design a maximally efficient dairy farm, the mathematician begins with “assume a spherical cow” — but great insights can come from models that look too simple to work.
It also, sad to say, includes a bit of Bright Young Science-Minded Lad (in this case, the author’s partner of the time) reasoning his way through what traumatized people might think, in a way that’s surely well-intended but also has to be described as “surely well-intended”, so, know that the tags up top of the article aren’t misleading.
I don’t ever try speaking for Comic Strip Master Command, and it almost never speaks to me, but it does seem like this week’s strips mentioning mathematical themes were trying to stick to the classic subjects: anthropomorphized numbers, word problems, ways to measure time and space, under-defined probability questions, and sudoku. It feels almost like a reunion weekend to have all these topics come together.
Dan Thompson’s Brevity (November 23) is a return to the world-of-anthropomorphic-numbers kind of joke, and a pun on the arithmetic mean, which is after all the statistic which most lends itself to puns, just edging out the “range” and the “single-factor ANOVA F-Test”.
Phil Frank and Joe Troise’s The Elderberries (November 23, rerun) brings out word problem humor, using train-leaves-the-station humor as a representative of the kinds of thinking academics do. Nagging slightly at me is that I think the strip had established the Professor as one of philosophy, and while it’s certainly not unreasonable for a philosopher to be interested in mathematics I wouldn’t expect this kind of mathematics to strike him as very interesting. But then there is the need to get the idea across in two panels, too.
Jonathan Lemon’s Rabbits Against Magic (November 25) brings up a way of identifying the time — “half seven” — which recalls one of my earliest essays around here, “How Many Numbers Have We Named?”, because the construction is one that I find charming and that I was glad to hear was still current. “Half seven” strikes me as similar in construction to saying a number as “five and twenty” instead of “twenty-five”, although I’m ignorant as to whether there actually is any similarity.
Scott Hilburn’s The Argyle Sweater (November 26) brings out a joke that I thought had faded out back around, oh, 1978, when the United States decided it wasn’t going to try converting to metric after all, now that we had two-liter bottles of soda. The curious thing about this sort of hyperconversion (it’s surely a satiric cousin to the hypercorrection that makes people mangle a sentence in the misguided hope of perfecting it) — besides that the “yard” in Scotland Yard is obviously not a unit of measure — is the notion that it’d be necessary to update idiomatic references that contain old-fashioned units of measurement. Part of what makes idioms anything interesting is that they can be old-fashioned while still making as much sense as possible; “in for a penny, in for a pound” is a sensible thing to say in the United States, where the pound hasn’t been legal tender since 1857; why would (say) “an ounce of prevention is worth a pound of cure” be any different? Other than that it’s about the only joke easily found on the ground once you’ve decided to look for jokes in the “systems of measurement” field.
Mark Heath’s Spot the Frog (November 26, rerun) I’m not sure actually counts as a mathematics joke, although it’s got me intrigued: Surly Toad claims to have a stick in his mouth to use to give the impression of a smile, or 37 (“Sorry, 38”) other facial expressions. The stick’s shown as a bundle of maple twigs, wound tightly together and designed to take shapes easily. This seems to me the kind of thing that’s grown as an application of knot theory, the study of, well, it’s almost right there in the name. Knots, the study of how strings of things can curl over and around and cross themselves (or other strings), seemed for a very long time to be a purely theoretical playground, not least because, to be addressable by theory, the knots had to be made of an imaginary material that could be stretched arbitrarily finely, and could be pushed frictionlessly through itself, which allows for good theoretical work but doesn’t act a thing like a shoelace. Then I think everyone was caught by surprise when it turned out the mathematics of these very abstract knots also describe the way proteins and other long molecules fold, and unfold; and from there it’s not too far to discovering wonderful structures that can change almost by magic with slight bits of pressure. (For my money, the most astounding thing about knots is that you can describe thermodynamics — the way heat works — on them, but I’m inclined towards thermodynamic problems.)
Henry Scarpelli and Craig Boldman’s Archie (November 28, rerun) offers an interesting problem: when Veronica was out of town for a week, Archie’s test scores improved. Is there a link? This kind of thing is awfully interesting to study, and awfully difficult to do: there’s no way to run a truly controlled experiment to see whether Veronica’s presence affects Archie’s test scores. After all, he never takes the same test twice, even if he re-takes a test on the same subject (and even if the re-test were the exact same questions, he would go into it the second time with relevant experience that he didn’t have the first time). And a couple good test scores might be relevant, or might just be luck, or it might be that something else happened to change that week that we haven’t noticed yet. How can you trace down plausible causal links in a complicated system?
One approach is an experimental design that, at least in the psychology textbooks I’ve read, gets called A-B-A, or A-B-A-B, experiment design: measure whatever it is you’re interested in during a normal time, “A”, before whatever it is whose influence you want to see has taken hold. Then measure it for a time “B” where something has changed, like, Veronica being out of town. Then go back as best as possible to the normal situation, “A” again; and, if your time and research budget allow, going back to another stretch of “B” (and, hey, maybe even “A” again) helps. If there is an influence, it ought to appear sometime after “B” starts, and fade out again after the return to “A”. The more you’re able to replicate this the sounder the evidence for a link is.
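The arithmetic on such data is nothing fancy: average the measurements within each phase and see whether the “B” phases move together relative to the “A” phases. The numbers below are invented, standing in for Archie’s test scores.

```python
def phase_means(measurements):
    """measurements: list of (phase_label, value) pairs, in time order."""
    sums, counts = {}, {}
    for label, value in measurements:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

data = [("A", 71), ("A", 68), ("A", 70),   # baseline test scores
        ("B", 84), ("B", 88), ("B", 86),   # Veronica out of town
        ("A", 72), ("A", 69),              # back to normal
        ("B", 85), ("B", 87)]              # out of town again
means = phase_means(data)
print(means["B"] - means["A"])  # a large, repeated gap suggests a real link
```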
(We’re actually in the midst of something like this around home: our pet rabbit was diagnosed with a touch of arthritis in his last checkup, but mildly enough and in a strange place, so we couldn’t tell whether it’s worth putting him on medication. So we got a ten-day prescription and let that run its course and have tried to evaluate whether it’s affected his behavior. This has proved difficult to say because we don’t really have a clear way of measuring his behavior, although we can say that the arthritis medicine is apparently his favorite thing in the world, based on his racing up to take the liquid and his trying to grab it if we don’t feed it to him fast enough.)
Ralph Hagen’s The Barn (November 28) has Rory the sheep wonder about the chances he and Stan the bull should be together in the pasture, given how incredibly vast the universe is. That’s a subtly tricky question to ask, though. If you want to show that everything that ever existed is impossibly unlikely you can work out, say, how many pastures there are on Earth, multiply that by an estimate of how many Earth-like planets there likely are in the universe, and take one divided by that number and marvel at Rory’s incredible luck. But that number’s fairly meaningless: among other obvious objections, wouldn’t Rory wonder the same thing if he were in a pasture with Dan the bull instead? And Rory wouldn’t be wondering anything at all if it weren’t for the accident by which he happened to be born; how impossibly unlikely was that? And that Stan was born too? (And, obviously, that all Rory and Stan’s ancestors were born and survived to the age of reproducing?)
Except that in this sort of question we seem to take it for granted, for instance, that all Stan’s ancestors would have existed and done their part to bring Stan around. And we’d take it for granted that the pasture should exist, rather than be a farmhouse or an outlet mall or a rocket base. To come up with odds that mean anything we have to work out what the probability space of all possible relevant outcomes is, and what set of conditions satisfies the concept of “we’re stuck here together in this pasture”.
Mark Pett’s Lucky Cow (November 28) brings up sudoku puzzles and the mystery of where, exactly, they come from. This prompted me to wonder about the mechanics of making sudoku puzzles, and while it certainly seems they could be automated pretty well, making your own amounts to just writing the digits one through nine nine times over, and then blanking out squares until the puzzle is hard. A casual search of the net suggests the most popular way of making sure you haven’t blanked out so many squares that the puzzle becomes ill-posed (meaning that two or more solutions fit the revealed information) is to let an automated sudoku solver tell you. That’s true enough, but I don’t see any mention of algorithms by which one could check in advance whether you’re blanking out a solution-spoiling set of squares. I don’t know whether that reflects there being no algorithm for this that’s more efficient than “try out possible solutions”, or just no algorithm being more practical. It’s relatively easy to make a computer try out possible solutions, after all.
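That automated check is easy enough to sketch. Here’s a minimal backtracking solver that counts solutions, stopping as soon as it finds a second one; this is my own toy illustration, not any particular published generator’s method:

```python
def valid(grid, r, c, v):
    """Can value v legally go in cell (r, c) of a 9x9 grid (0 = blank)?"""
    if any(grid[r][j] == v for j in range(9)):
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def count_solutions(grid, limit=2):
    """Count completions of the grid by backtracking, giving up once
    `limit` solutions are found. A puzzle is well-posed exactly when
    this returns 1."""
    for i in range(81):
        r, c = divmod(i, 9)
        if grid[r][c] == 0:
            found = 0
            for v in range(1, 10):
                if valid(grid, r, c, v):
                    grid[r][c] = v
                    found += count_solutions(grid, limit - found)
                    grid[r][c] = 0
                    if found >= limit:
                        break
            return found
    return 1  # no blanks left: this filled grid is itself one solution

# A generator would start from a full valid grid and blank squares one
# at a time, undoing any blanking that makes count_solutions exceed 1.
```

Since the count is capped at two, the solver never does more work than needed to tell “unique” from “not unique”.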
A paper published by Mária Ercsey-Ravasz and Zoltán Toroczkai in Nature Scientific Reports in 2012 describes recasting the problem of solving sudoku as a deterministic dynamical system, and matches the difficulty of a sudoku puzzle to chaotic behavior of that system. (If you’re looking at the article and despairing, don’t worry. Go to the ‘Puzzle hardness as transient chaotic dynamics’ section, and read the parts of the sentence that aren’t technical terms.) Ercsey-Ravasz and Toroczkai point out their chaos-theory-based definition of hardness matches pretty well, though not perfectly, the estimates of difficulty provided by sudoku editors and solvers. The most interesting (to me) result they report is that sudoku puzzles which give you the minimum information — 17 or 18 non-blank numbers to start — are generally not the hardest puzzles. Puzzles with 21 or 22 non-blank numbers seem to be the hardest, though they point out that difficulty has to depend on the positioning of the non-blank numbers and not just how many there are.
The above tweet is from the Analysis Fact of The Day feed, which for the 5th had a neat little bit taken from Joseph Fourier’s The Analytic Theory Of Heat, published in 1822. Fourier was trying to at least describe the way heat moves through objects, and along the way he developed things called Fourier series and a field called Fourier analysis. In these we treat functions — even ones we don’t yet know — as sums of sinusoidal waves, overlapping and interfering with and reinforcing one another.
If we have infinitely many of these waves we can approximate … well, not every function, but surprisingly close to all the functions that might represent real-world affairs, and surprisingly near all the functions we’re interested in anyway. The advantage of representing functions as sums of sinusoidal waves is that sinusoidal waves are very easy to differentiate and integrate, and to add together those differentials and integrals, and that means we can turn problems that are extremely hard into problems that may be longer, but are made up of much easier parts. Since usually it’s better to do something that’s got many easy steps than it is to do something with a few hard ones, Fourier series and Fourier analysis are some of the things you get to know well as you become a mathematician.
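A classic small example of this (my illustration, not one from Fourier’s book): the square wave that equals +1 on (0, π) and −1 on (−π, 0) has the Fourier series (4/π)(sin x + sin 3x/3 + sin 5x/5 + …), and its partial sums creep toward the true value:

```python
import math

def square_wave_partial(x, terms):
    """Sum the first `terms` nonzero Fourier terms of the square wave
    that equals +1 on (0, pi) and -1 on (-pi, 0)."""
    return (4 / math.pi) * sum(
        math.sin((2 * m + 1) * x) / (2 * m + 1) for m in range(terms)
    )

# At x = pi/2 the square wave equals 1; watch the partial sums approach it.
for n in (1, 10, 100, 1000):
    print(n, square_wave_partial(math.pi / 2, n))
```

Each term is trivially easy to differentiate or integrate, which is exactly the advantage described above; the price is that you may need many terms for a good approximation.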
The “Fourier Echoes Euler” page linked here shows simply one nice, sweet result that Fourier proved in that major work. It demonstrates what you get if, for absolutely any real number x, you add together a particular infinite series of sine terms. There’s one step in it — “integration by parts” — that you’ll have to remember from freshman calculus, or maybe I’ll get around to explaining it someday, but I would expect most folks reading this far could follow this neat result.
I should mention — I should have mentioned earlier, but it has been a busy week — that CarnotCycle has published the second part of “The Geometry of Thermodynamics”. This is a bit of a tougher read than the first part, admittedly, but it’s still worth reading. The essay reviews how James Clerk Maxwell — yes, that Maxwell — developed the thermodynamic relationships that would have made him famous in physics if it weren’t for his work in electromagnetism that ultimately overthrew the Newtonian paradigm of space and time.
The ingenious thing is that the best part of this work is done on geometric grounds, on thinking of the spatial relationships between quantities that describe how a system moves heat around. “Spatial” may seem a strange word to describe this since we’re talking about things that don’t have any direct physical presence, like “temperature” and “entropy”. But if you draw pictures of how these quantities relate to one another, you have curves and parallelograms and figures that follow the same rules of how things fit together that you’re used to from ordinary everyday objects.
A wonderful side point is a touch of human fallibility from a great mind: in working out his relations, Maxwell misunderstood just what was meant by “entropy”, and needed correction by the at-least-as-great Josiah Willard Gibbs. Many people don’t quite know what to make of entropy even today, and Maxwell was working when the word was barely a generation away from being coined, so it’s quite reasonable he might not understand a term that was relatively new and still getting its precise definition. It’s surprising nevertheless to see.
James Clerk Maxwell and the geometrical figure with which he proved his famous thermodynamic relations
Every student of thermodynamics sooner or later encounters the Maxwell relations – an extremely useful set of statements of equality among partial derivatives, principally involving the state variables P, V, T and S. They are general thermodynamic relations valid for all systems.
The four relations originally stated by Maxwell are easily derived from the (exact) differential relations of the thermodynamic potentials:
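(The equations themselves don’t survive in this excerpt; restating the four relations in their standard textbook form, they read:

```latex
\left(\frac{\partial T}{\partial V}\right)_S = -\left(\frac{\partial P}{\partial S}\right)_V ,
\qquad
\left(\frac{\partial T}{\partial P}\right)_S = \left(\frac{\partial V}{\partial S}\right)_P ,
```

```latex
\left(\frac{\partial S}{\partial V}\right)_T = \left(\frac{\partial P}{\partial T}\right)_V ,
\qquad
\left(\frac{\partial S}{\partial P}\right)_T = -\left(\frac{\partial V}{\partial T}\right)_P ,
```

following respectively from the exact differentials dU = T dS − P dV, dH = T dS + V dP, dF = −S dT − P dV, and dG = −S dT + V dP.)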
Mander writes about part of what made J Willard Gibbs probably the greatest theoretical physicist the United States has yet produced: Gibbs put much of thermodynamics into a logically neat system, the kind we still basically use today, and, better yet, saw how to represent it and understand it as a matter of surface geometries. These are abstract kinds of surfaces — looking at the curve traced out by, say, mapping the energy of a gas against its volume, or its temperature against its entropy — but if you can accept the idea that we can draw curves representing these quantities, then you get to use your understanding of how solid objects look and feel (and solid objects even got made: James Clerk Maxwell, of Maxwell’s Equations fame, sculpted some).
This is a reblogging of only part one, although as Mander’s on summer holiday you haven’t missed part two.
Volume One of the Scientific Papers of J. Willard Gibbs, published posthumously in 1906, is devoted to Thermodynamics. Chief among its content is the hugely long and desperately difficult “On the equilibrium of heterogeneous substances (1876, 1878)”, with which Gibbs single-handedly laid the theoretical foundations of chemical thermodynamics.
In contrast to James Clerk Maxwell’s textbook Theory of Heat (1871), which uses no calculus at all and hardly any algebra, preferring geometry as the means of demonstrating relationships between quantities, Gibbs’ magnum opus is stuffed with differential equations. Turning the pages of this calculus-laden work, one could easily be drawn to the conclusion that the writer was not a visual thinker.
But in Gibbs’ case, this is far from the truth.
The first two papers on thermodynamics that Gibbs published, in 1873, were in fact visually-led. Paper I deals with indicator diagrams and their comparative properties, while Paper II
And on to the tracking of how my little mathematics blog is doing. As readership goes, things are looking good — my highest number of page views since January 2013, and third-highest ever, and also my highest number of unique viewers since January 2013 (unique viewer counts aren’t provided for before December 2012, so who knows what happened before that). The total number of page views rose from 565 in April to 751, and the number of unique visitors rose from 238 to 315. This is a remarkably steady number of views per visitor, though — 2.37 rising to 2.38, as if that were a significant difference. I passed visitor number 15,000 somewhere around the 5th of May, and at number 15,682 right now that puts me on track to hit 16,000 somewhere around the 13th.
As with April, the blog’s felt pretty good to me. I think I’m hitting a pretty good mixture of writing about stuff that interests me and finding readers who’re interested to read it. I’m hoping I can keep that up another month.
The most popular articles of the month — well, I suspect someone was archive-binging on the mathematics comics ones because, here goes:
Where Does A Plane Touch A Sphere? is a nicely popular bit motivated by the realization that a tangent point is an important calculus concept and nevertheless a subtler thing than one might realize.
I think, without actually checking, this is the first month I’ve noticed with seven countries sending me twenty or more visitors each — the United States (438), Canada (39), Australia (38), Sweden (31), Denmark (21), and Singapore and the United Kingdom (20 each). Austria came in at 19, too. Seventeen countries sent me one visitor each: Antigua and Barbuda, Colombia, Guernsey, Hong Kong, Ireland, Italy, Jamaica, Japan, Kuwait, Lebanon, Mexico, Morocco, Norway, Peru, Poland, Swaziland, and Switzerland. Morocco’s the only one to have been there last month.
And while I lack for search term poetry, some of the interesting searches that brought people here include:
I did want to mention that the CarnotCycle big entry for the month is “The Ideal Gas Equation”. The Ideal Gas equation is one of the more famous equations that isn’t F = ma or E = mc², which I admit isn’t a group of really famous equations; but, at the very least, its content is familiar enough.
If you keep a gas at constant temperature, and increase the pressure on it, its volume decreases, and vice-versa, known as Boyle’s Law. If you keep a gas at constant volume, and decrease its pressure, its temperature decreases, and vice-versa, known as Gay-Lussac’s law. Then Charles’s Law says if a gas is kept at constant pressure, and the temperature increases, then the volume increases, and vice-versa. (Each of these is probably named for the wrong person, because they always are.) The Ideal Gas equation combines all these relationships into one, neat, easily understood package.
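Put numerically (my own sketch, in SI units, with the molar gas constant R ≈ 8.314 J/(mol·K)), the combined equation PV = nRT lets you solve for any one quantity from the others:

```python
R = 8.314  # molar gas constant, J/(mol K)

def ideal_gas_volume(n_moles, temp_kelvin, pressure_pascal):
    """Volume in cubic metres of an ideal gas, from PV = nRT."""
    return n_moles * R * temp_kelvin / pressure_pascal

# One mole at 0 degrees Celsius and one standard atmosphere:
v = ideal_gas_volume(1.0, 273.15, 101325.0)
print(v)  # about 0.0224 cubic metres, the familiar 22.4 litres
```

Holding any one of the three variables fixed in that formula recovers Boyle’s, Charles’s, or Gay-Lussac’s law directly.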
Peter Mander describes some of the history of these concepts and equations, and how they came together, with the interesting way that they connect to the absolute temperature scale, and of absolute zero. Absolute temperatures — Kelvin — and absolute zero are familiar enough ideas these days that it’s difficult to remember they were ever new and controversial and intellectually challenging ideas to develop. I hope you enjoy.
If you received formal tuition in physical chemistry at school, then it’s likely that among the first things you learned were the 17th/18th century gas laws of Mariotte and Gay-Lussac (Boyle and Charles in the English-speaking world) and the equation that expresses them: PV = kT.
It may be that the historical aspects of what is now known as the ideal (perfect) gas equation were not covered as part of your science education, in which case you may be surprised to learn that it took 174 years to advance from the pressure-volume law PV = k to the combined gas law PV = kT.
The lengthy timescale indicates that putting together closely associated observations wasn’t regarded as a must-do in this particular era of scientific enquiry. The French physicist and mining engineer Émile Clapeyron eventually created the combined gas equation, not for its own sake, but because he needed an…
I was reading a thermodynamics book (C Truesdell and S Bharatha’s The Concepts and Logic of Classical Thermodynamics as a Theory of Heat Engines, which is a fascinating read, for the field, and includes a number of entertaining, for the field, snipes at the stuff textbook writers put in because they’re just passing on stuff without rethinking it carefully), and ran across a couple proofs which mentioned equations that were true “almost everywhere”. That’s a construction it might be surprising to know even exists in mathematics, so, let me take a couple hundred words to talk about it.
The idea isn’t really exotic. You’ve seen a kind of version of it whenever an equation comes with a note attached that there’s an exception, a point or two where the formula doesn’t hold. If the exceptions are tedious to list — because there are many of them to write down, or because they’re wordy to describe (the thermodynamics book mentioned the exceptions were where a particular set of conditions on several differential equations happened simultaneously, if that ever happened) — and if they’re unlikely to come up, then we might just write whatever it is we want to say and add an “almost everywhere”, or for shorthand, put an “ae” after the line. This “almost everywhere” will, except in freak cases, propagate through the rest of the proof, but I only see people writing it out that way when they’re students working through the concept. In publications, the “almost everywhere” gets put in where the condition first stops being true everywhere-everywhere and becomes only almost-everywhere, and is taken as read after that.
I introduced this with an equation, but it can apply to any relationship: something is greater than something else, something is less than or equal to something else, even something is not equal to something else. (After all, a statement like “x is not equal to zero” is true almost everywhere, but there is that nagging exception.) A mathematical proof is normally about things which are true. Whether one thing is equal to another is often incidental to that.
What’s meant by “unlikely to come up” is actually rigorously defined, which is why we can get away with this. It’s otherwise a bit daft to think we can just talk about things that are true except where they aren’t and not even post warnings about where they’re not true. If we say something is true “almost everywhere” on the real number line, for example, that means that the set of exceptions has a total length of zero. So if the only exception is where x equals 1, sure enough, that’s a set with no length. Similarly if the exceptions are where x equals positive 1 or negative 1, that’s still a total length of zero. But if the set of exceptions were all values of x from 0 to 4, well, that’s a set of total length 4 and we can’t say “almost everywhere” for that.
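For the curious, the rigorous version (the standard measure-theory phrasing, supplied here rather than taken from the post) is that the exceptional set has Lebesgue measure zero:

```latex
P(x) \text{ holds almost everywhere}
\iff
\mu\bigl(\{\, x \in \mathbb{R} : P(x) \text{ fails} \,\}\bigr) = 0 ,
```

where μ is Lebesgue measure. A single point, or any countable set of points, has measure zero; an interval like [0, 4] has measure 4, which is why the last example above doesn’t qualify.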
This is all quite like saying that it can’t happen that if you flip a fair coin infinitely many times it will come up tails every single time. It won’t, even though properly speaking there’s no reason that it couldn’t. If something is true almost everywhere, then your chance of picking an exception out of all the possibilities is about like your chance of flipping that fair coin and getting tails infinitely many times over.
The CarnotCycle blog has a continuation of last month’s The Liquefaction of Gases, as you might expect, named The Liquefaction of Gases, Part II, and it’s another intriguing piece. The story here is about how the theory of cooling, and of phase changes — under what conditions gases will turn into liquids — was developed. There’s a fair bit of mathematics involved, although most of the important work is in polynomials. If you remember in algebra (or in pre-algebra) drawing curves for functions that had x3 in them, and in finding how they sometimes had one and sometimes had three real roots, then you’re well on your way to understanding the work which earned Johannes van der Waals the 1910 Nobel Prize in Physics.
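Those cubics can be poked at directly. Here’s a small sketch (my own numbers, using textbook van der Waals constants for carbon dioxide, so treat the values as illustrative): rewriting (P + a/V²)(V − b) = RT as a cubic in the molar volume V and finding its real roots.

```python
import numpy as np

# Van der Waals constants for carbon dioxide, textbook values in
# litre/bar/mol units; R in L*bar/(mol*K).
a, b, R = 3.640, 0.04267, 0.083145

def molar_volume_roots(pressure_bar, temp_kelvin):
    """Real roots of the van der Waals equation rearranged as a cubic:
    P V^3 - (P b + R T) V^2 + a V - a b = 0."""
    coeffs = [pressure_bar, -(pressure_bar * b + R * temp_kelvin), a, -a * b]
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# Below the critical temperature the cubic can have three real roots;
# the largest corresponds to the gas-phase molar volume.
print(molar_volume_roots(40.0, 280.0))
```

One real root versus three is exactly the distinction the essay turns on: above the critical temperature there’s only one, and the gas cannot be liquefied by pressure alone.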
Future Nobel Prize winners both. Kamerlingh Onnes and Johannes van der Waals in 1908.
On Friday 10 July 1908, at Leiden in the Netherlands, Kamerlingh Onnes succeeded in liquefying the one remaining gas previously thought to be non-condensable – helium – using a sequential Joule-Thomson cooling technique to drive the temperature down to just 4 degrees above absolute zero. The event brought to a conclusion the race to liquefy the so-called permanent gases, following the revelation that all gases have a critical temperature below which they must be cooled before liquefaction is possible.
This crucial fact was established by Dr. Thomas Andrews, professor of chemistry at Queen’s College Belfast, in his groundbreaking study of the liquefaction of carbon dioxide, “On the Continuity of the Gaseous and Liquid States of Matter”, published in the Philosophical Transactions of the Royal Society of London in 1869.
I know, or at least I’m fairly confident, there’s a couple readers here who like deeper mathematical subjects. It’s fine to come up with simulated Price is Right games or figure out what grades one needs to pass the course, but those aren’t particularly challenging subjects.
But those are hard to write, so, while I stall, let me point you to CarnotCycle, which has a nice historical article about the problem of liquefaction of gases, a problem that’s not just steeped in thermodynamics but in engineering. If you’re a little familiar with thermodynamics you likely won’t be surprised to see names like William Thomson, James Joule, or Willard Gibbs turn up. I was surprised to see in the additional reading T O’Conor Sloane show up; science fiction fans might vaguely remember that name, as he was the editor of Amazing Stories for most of the 1930s, in between Hugo Gernsback and Raymond Palmer. It’s often a surprising world.
On Monday 3 December 1877, the French Academy of Sciences received a letter from Louis Cailletet, a 45 year-old physicist from Châtillon-sur-Seine. The letter stated that Cailletet had succeeded in liquefying both carbon monoxide and oxygen.
Liquefaction as such was nothing new to 19th century science, it should be said. The real news value of Cailletet’s announcement was that he had liquefied two gases previously considered ‘non condensable’.
While a number of gases such as chlorine, carbon dioxide, sulfur dioxide, hydrogen sulfide, ethylene and ammonia had been liquefied by the simultaneous application of pressure and cooling, the principal gases comprising air – nitrogen and oxygen – together with carbon monoxide, nitric oxide, hydrogen and helium, had stubbornly refused to liquefy, despite the use of pressures up to 3000 atmospheres. By the mid-1800s, the general opinion was that these gases could not be converted into liquids under any circumstances.