My Little 2021 Mathematics A-to-Z: Inverse


I owe Iva Sallay thanks for the suggestion of today’s topic. Sallay is a longtime friend of my blog here. And runs the Find the Factors recreational mathematics puzzle site. If you haven’t been following, or haven’t visited before, this is a fun week to step in again. The puzzles this week include (American) Thanksgiving-themed pictures.

Inverse.

When we visit the museum made of a visual artist’s studio we often admire the tools. The surviving pencils and crayons, pens, brushes and such. We don’t often notice the eraser, the correction tape, the unused white-out, or the pages cut into scraps to cover up errors. To do something is to want to undo it. This is as true for the mathematics of a circle as it is for the drawing of one.

If not to undo something, we do often want to know where something comes from. A classic paper asks can one hear the shape of a drum? You hear a sound. Can you say what made that sound? Fine, dismiss the drum shape as idle curiosity. The same question applies to any sensory data. If our hand feels cooler here, where is the insulation of the building damaged? If we have this electrocardiogram reading, what can we say about the action of the heart producing that? If we see the banks of a river, what can we know about how the river floods?

And this is the point, and purpose, of inverses. We can understand them as finding the causes of what we observe.

The first inverse we meet is usually the inverse function. It’s introduced as a way to undo what a function does. That’s an odd introduction, if you’re comfortable with what a function is. A function is a mathematical construct. It’s two sets — a domain and a range — and a rule that links elements in the domain to the range. To “undo” a function is like “undoing” a rectangle. But a function has a compelling “physical” interpretation. It’s routine to introduce functions as machines that take some numbers in and give numbers out. We think of them as ways to transform the domain into the range. In functional analysis we get to thinking of domains as the most perfect putty. We expect functions to stretch and rotate and compress and slide along as though they were drawing a Betty Boop cartoon.

So we’re trained to speak of a function as a verb, acting on pieces of the domain. An element or point, or a region, or the whole domain. We think the function “maps”, or “takes”, or “transforms” this into its image in the range. And if we can turn one thing into another, surely we can turn it back.

Some things it’s obvious we can turn back. Suppose our function adds 2 to whatever we give it. We can get the original back by subtracting 2. If the function subtracts 32 and divides by 1.8, we can reverse it by multiplying by 1.8 and adding 32. If the function takes the reciprocal, we can take the reciprocal again. We have a bit of a problem if we started out taking the reciprocal of 0, but who would want to do such a thing anyway? If the function squares a number, we can undo that by taking the square root. Unless we started from a negative number. Then we have trouble.

The trouble is not every function has an inverse. Which we could have realized by thinking how to undo “multiply by zero”. For the inverse to be a well-defined function, its rule has to match each element in the range to exactly one element in the domain. This makes the original function, in the impenetrable jargon of the mathematician, a “one-to-one correspondence”. Or you can describe it with the more intuitive label of “bijective”.

But there’s no reason more than one thing in the domain can’t match to the same thing in the range. If I know the cosine of my angle is \frac{\sqrt{3}}{2}, my angle might be 30 degrees. Or -30 degrees. Or 390 degrees. Or 330 degrees. You may protest there’s no difference between a 30 degree and a 390 degree angle. I agree those angles point in the same direction. But a gear rotated 390 degrees has done something that a gear rotated 30 degrees hasn’t. If all I know is where the dot I’ve put on the gear is, how can I know how much it’s rotated?

So what we do is shift from the actual cosine into one branch of the cosine. By restricting the domain we can create a function that has the same rule as the one we want, but that’s also one-to-one and so has an inverse. What restriction to use? That depends on what you want. But mathematicians have some that come up so often they might as well be defaults. So the square root is the inverse of the square of nonnegative numbers. The inverse Cosine is the inverse of the cosine of angles from 0 to 180 degrees. The inverse Sine is the inverse of the sine of angles from -90 to 90 degrees. The capital letters are convention to say we’re doing this. If we want a different range, we write out that we’re looking for an inverse cosine from -180 to 0 degrees or whatever. (Yes, the mathematician will default to using radians, rather than degrees, for angles. That’s a different essay.) It’s an imperfect solution, but it often works well enough.
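If you like seeing the branch business numerically, here is a little sketch of mine (Python’s standard math library; nothing about it is canonical) showing how the principal branch throws away the other candidate angles:

```python
import math

x = math.cos(math.radians(390))          # the cosine of a 390-degree angle

# The capital-letter inverse Cosine (math.acos here) always answers between
# 0 and 180 degrees. It reports 30, and the 390 is unrecoverable.
print(math.degrees(math.acos(x)))        # 30.0, give or take rounding

# Want the branch from -180 to 0 degrees instead? Build it from the principal one.
print(-math.degrees(math.acos(x)))       # -30.0
```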

The trouble we had with cosines, and functions, continues through all inverses. There are almost always alternate causes. Many shapes of drums sound alike. Take two metal bars. Heat both with a blowtorch, one on the end and one in the center. Not to the point of melting, only to the point of being too hot to touch. Let them cool in insulated boxes for a couple weeks. There’ll be no measurement you can do on the remaining heat that tells you which one was heated on the end and which the center. That’s not because your thermometers are no good or the flow of heat is not deterministic or anything. It’s that both starting cases settle to the same end. So here there is no usable inverse.

This is not to call inverses futile. We can look for what we expect to find useful. We are inclined to find inverses of the cosine between 0 and 180 degrees, even though 4140 through 4320 degrees is as legitimate. We may not know what is wrong with a heart, but have some idea what a heart could do and still beat. And there’s a famous example in 19th-century astronomy. After the discovery of Uranus came the discovery it did not move right. For a while it moved across the sky too fast for its distance from the sun. Then it started moving too slow. The obvious supposition was that there was another, not-yet-seen, planet, affecting its orbit.

The trouble is finding it. Calculating the orbit from what data they had required solving equations with 13 unknown quantities. John Couch Adams and Urbain Le Verrier attempted this anyway, making suppositions about what they could not measure. They made great suppositions. Le Verrier made the better calculations, and persuaded an astronomer (Johann Gottfried Galle, assisted by Heinrich Louis d’Arrest) to go look. Took about an hour of looking. They also made lucky suppositions. Both, for example, supposed the trans-Uranian planet would obey “Bode’s Law”, a seeming pattern in the sizes of planetary orbits. The actual Neptune does not. It was near enough in the sky to where the calculated planet would be, though. The world is vaster than our imaginations.

That there are many ways to draw Betty Boop does not mean there’s nothing to learn about how this drawing was done. And so we keep having inverses as a vibrant field of mathematics.


Next week I hope to cover the letter ‘C’ and don’t think I’m not worried about what that ‘C’ will be. This week’s essay, and all the essays for the Little Mathematics A-to-Z, should be at this link. And all of this year’s essays, and all the A-to-Z essays from past years, should be at this link. Thank you for reading.

My All 2020 Mathematics A to Z: Unitary Matrix


I assume I disappointed Mr Wu, of the Singapore Maths Tuition blog, last week when I passed on a topic he suggested and instead unintentionally rewrote a good-enough essay. I hope to make it up this week with a piece of linear algebra.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x (division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Unitary Matrix.

A Unitary Matrix — note the article; there is not a singular the Unitary Matrix — starts with a matrix. This is an ordered collection of scalars. The scalars we call elements. I can’t think of a time I ever saw a matrix represented except as a rectangular grid of elements, or as a capital letter for the name of a matrix. Or a block inside a matrix. In principle the elements can be anything. In practice, they’re almost always either real numbers or complex numbers. To speak of Unitary Matrixes invokes complex-valued numbers. If a matrix that would be Unitary has only real-valued elements, we call that an Orthogonal Matrix. It’s not wrong to call an Orthogonal matrix “Unitary”. It’s like pointing to a known square, though, and calling it a parallelogram. Your audience will grant that’s true. But it will wonder what you’re getting at, unless you’re talking about a bunch of parallelograms and some of them happen to be squares.

As with polygons, though, there are many names for particular kinds of matrices. The flurry of them settles down on the Intro to Linear Algebra student and it takes three or four courses before most of them feel like familiar names. I will try to keep the flurry clear. First, we’re talking about square matrices, ones with the same number of rows as columns.

Start with any old square matrix. Give it the name U because you see where this is going. There are a couple of new matrices we can derive from it. One of them is the complex conjugate. This is the matrix you get by taking the complex conjugate of every term. So, if one element is 3 + 4\imath , in the complex conjugate, that element would be 3 - 4\imath . Reverse the plus or minus sign of the imaginary component. The shorthand for “the complex conjugate to matrix U” is U^* . Also we’ll often just say “the conjugate”, taking the “complex” part as implied.

Start back with any old square matrix, again called U. Another thing you can do with it is take the transposition. This matrix, U-transpose, you get by keeping the order of elements but changing rows and columns. That is, the elements in the first row become the elements in the first column. The elements in the second row become the elements in the second column. Third row becomes the third column, and so on. The diagonal — first row, first column; second row, second column; third row, third column; and so on — stays where it was. The shorthand for “the transposition of U” is U^T .

You can chain these together. If you start with U and take both its complex-conjugate and its transposition, you get the adjoint. We write that with a little dagger: U^{\dagger} = (U^*)^T . For a wonder, as matrices go, it doesn’t matter whether you take the transpose or the conjugate first. It’s the same U^{\dagger} = (U^T)^* . You may ask how people writing this out by hand never mistake U^T for U^{\dagger} . This is a good question and I hope to have an answer someday. (I would write it as U^{A} in my notes.)
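If you want to check that claim numerically, here’s a sketch of mine (Python with NumPy; the matrix is arbitrary, just something with imaginary parts worth conjugating):

```python
import numpy as np

U = np.array([[3 + 4j, 1 - 2j],
              [0 + 1j, 2 + 0j]])

conjugate = U.conj()       # flip the sign of every imaginary part
transpose = U.T            # swap rows and columns
adjoint   = U.conj().T     # the dagger: both operations together

# Conjugate-then-transpose equals transpose-then-conjugate, as claimed.
print(np.array_equal(U.conj().T, U.T.conj()))    # True
```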

And the last thing you can maybe do with a square matrix is take its inverse. This is like taking the reciprocal of a number. When you multiply a matrix by its inverse, you get the Identity Matrix. Not every matrix has an inverse, though. It’s worse than real numbers, where only zero doesn’t have a reciprocal. You can have a matrix that isn’t all zeroes and that doesn’t have an inverse. This is part of why linear algebra mathematicians command the big money. But if a matrix U has an inverse, we write that inverse as U^{-1} .

The Identity Matrix is one of a family of square matrices. Every element in an identity matrix is zero, except on the diagonal. That is, the element at row one, column one, is the number 1. The element at row two, column two is also the number 1. Same with row three, column three: another one. And so on. This is the “identity” matrix because it works like the multiplicative identity. Pick any matrix you like, and multiply it by the identity matrix; you get the original matrix right back. We use the name I for an identity matrix. If we have to be clear how many rows and columns the matrix has, we write that as a subscript: I_2 or I_3 or I_N or so on.

So this, finally, lets me say what a Unitary Matrix is. It’s any square matrix U where the adjoint, U^{\dagger} is the same matrix as the inverse, U^{-1} . It’s wonderful to learn you have a Unitary Matrix. Not just because, most of the time, finding the inverse of a matrix is a long and tedious procedure. Here? You have to write the elements in a different order and change the plus-or-minus sign on the imaginary numbers. The only way it would be easier is if you had only real numbers, and didn’t have to take the conjugates.
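Here’s a small check of the definition, with a 2-by-2 matrix I picked because it’s easy to type (my own sketch, Python with NumPy):

```python
import numpy as np

# A 2-by-2 unitary matrix; the 1/sqrt(2) keeps its columns at unit length.
U = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])

adjoint = U.conj().T
print(np.allclose(U @ adjoint, np.eye(2)))       # True: the adjoint undoes U
print(np.allclose(adjoint, np.linalg.inv(U)))    # True: it really is the inverse
```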

That’s all a nice heap of terms. What makes any of them important, other than so Intro to Linear Algebra professors can test their students?

Well, you know mathematicians. If we like something like this, it’s usually because it holds out the prospect of turning hard problems into easier ones. So it is. Start out with any old matrix. Call it A. Then there exist some unitary matrixes, call them U and V. And the product UAV does something wonderful: it is a “diagonal” matrix. A diagonal matrix has zeroes for every element except the diagonal ones. That is, row one, column one; row two, column two; row three, column three; and so on. The elements that trace a path from the upper-left to the lower-right corner of the matrix. (The diagonal from the upper-right to the lower-left we have nothing to do with.) Everything we might do with matrices is easier on a diagonal matrix. So we process our matrix A into this diagonal matrix D. Process it by whatever the heck we’re doing. If we then multiply this by the inverses of U and V? If we calculate U^{-1}DV^{-1} ? We get whatever our process would have given us had we done it to A. And, since U and V are unitary matrices, it’s easy to find these inverses. Wonderful!
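If you’d like to see such a U and V produced, the usual numerical route is the singular value decomposition, which NumPy will do for you. A sketch of mine — note that the library hands back the adjoints of the matrices I’m calling U and V here:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))   # any old matrix

U_left, s, V_right_adjoint = np.linalg.svd(A)   # A = U_left @ diag(s) @ V_right_adjoint

U = U_left.conj().T             # unitary matrix to multiply A by on the left
V = V_right_adjoint.conj().T    # unitary matrix to multiply A by on the right

D = U @ A @ V                   # UAV, which should be diagonal (up to rounding)
print(np.allclose(D, np.diag(s)))    # True
```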

Also this sounds like I just said Unitary Matrixes are great because they solve a problem you never heard of before.

The 20th Century’s first great use for Unitary Matrixes, and I imagine the impulse for Mr Wu’s suggestion, was quantum mechanics. (A later use would be data compression.) Unitary Matrixes help us calculate how quantum systems evolve. This should be a little easier to understand if I use a simple physics problem as demonstration.

So imagine three blocks, all the same mass. They’re connected in a row, left to right. There’s two springs, one between the left and the center mass, one between the center and the right mass. The springs have the same strength. The blocks can only move left-to-right. But, within those bounds, you can do anything you like with the blocks. Move them wherever you like and let go. Let them go with a kick moving to the left or the right. The only restraint is they can’t pass through one another; you can’t slide the center block to the right of the right block.

This is not quantum mechanics, by the way. But it’s not far, either. You can turn this into a fine toy of a molecule. For now, though, think of it as a toy. What can you do with it?

A bunch of things, but there’s two really distinct ways these blocks can move. These are the ways the blocks would move if you just hit them with some energy and let the system do what felt natural. One is to have the center block stay right where it is, and the left and right blocks swing out and in. We know they’ll swing symmetrically, the left block going as far to the left as the right block goes to the right. But all these symmetric oscillations look about the same. They’re one mode.

The other is … not quite antisymmetric. In this mode, the center block moves in one direction and the outer blocks move in the other, just enough to keep momentum conserved. Eventually the center block switches direction and swings the other way. But the outer blocks switch direction and swing the other way too. If you’re having trouble imagining this, imagine looking at it from the outer blocks’ point of view. To them, it’s just the center block wobbling back and forth. That’s the other mode.

And it turns out? It doesn’t matter how you started these blocks moving. The movement looks like a combination of the symmetric and the not-quite-antisymmetric modes. So if you know how the symmetric mode evolves, and how the not-quite-antisymmetric mode evolves? Then you know how every possible arrangement of this system evolves.
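The modes fall out of a small calculation, if you care to see one. This is a sketch of mine, with the masses and spring constants all set to 1 for convenience:

```python
import numpy as np

# Coupling matrix for three equal blocks joined by two equal springs.
# Row i says which displacements pull on block i.
K = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]])

frequencies_squared, modes = np.linalg.eigh(K)
print(frequencies_squared.round(6))   # roughly [0, 1, 3]
print(modes.round(3))
# Column with frequency 0: the whole assembly sliding along together, no oscillation.
# Column with frequency^2 = 1: the symmetric mode, center still, outer blocks opposed.
# Column with frequency^2 = 3: the not-quite-antisymmetric mode, center against both
# outer blocks, with twice their amplitude so momentum stays balanced.
# The matrix made of these columns is orthogonal: the all-real-numbers kind of unitary.
```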

So here’s where we get to quantum mechanics. Suppose we know the quantum mechanics description of a system at some time. This we can do as a vector. And we know the Hamiltonian, the description of all the potential and kinetic energy, for how the system evolves. The evolution in time of our quantum mechanics description we can see as a unitary matrix multiplied by this vector.

The Hamiltonian, by itself, won’t (normally) be a Unitary Matrix. It gets the boring name H. It’ll be some complicated messy thing. But perhaps we can find a Unitary Matrix U, so that UHU^{\dagger} is a diagonal matrix. And then that’s great. The original H is hard to work with. The diagonalized version? That one we can almost always work with. And then we can go from solutions on the diagonalized version back to solutions on the original. (If the time-evolution operator \psi describes the evolution under UHU^{\dagger} , then U^{\dagger}\psi U describes the evolution under H .) The work that U (and U^{\dagger} ) does to H is basically what we did with that three-block, two-spring model. It’s picking out the modes, and letting us figure out their behavior. Then put that together to work out the behavior of what we’re interested in.
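Here’s the trick in miniature — my own sketch, reusing the spring-system matrix from above as a stand-in Hamiltonian, units be hanged. Diagonalize once, and the time-evolution operator comes nearly for free, and it is unitary:

```python
import numpy as np

H = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)    # stand-in Hamiltonian; it is Hermitian

energies, U = np.linalg.eigh(H)              # diagonalization: H = U D U-dagger

t = 2.5
propagator = U @ np.diag(np.exp(-1j * energies * t)) @ U.conj().T

# The time-evolution operator built this way is unitary, as promised:
print(np.allclose(propagator @ propagator.conj().T, np.eye(3)))   # True
```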

There are other uses, besides time-evolution. For instance, an important part of quantum mechanics and thermodynamics is that we can swap particles of the same type. Like, there’s no telling an electron that’s on your nose from an electron that’s in one of the reflective mirrors the Apollo astronauts left on the Moon. If they swapped positions, somehow, we wouldn’t know. It’s important for calculating things like entropy that we consider this possibility. Two particles swapping positions is a permutation. We can describe that as multiplying the vector that describes what every electron on the Earth and Moon is doing by a Unitary Matrix. Here it’s a matrix that does nothing but swap the descriptions of these two electrons. I concede this doesn’t sound thrilling. But anything that goes into calculating entropy is first-rank important.

As with time-evolution and with permutation, though, any symmetry matches a Unitary Matrix. This includes obvious things like reflecting across a plane. But it also covers, like, being displaced a set distance. And some outright obscure symmetries too, such as the phase of the state function \Psi . I don’t have a good way to describe what this is, physically; we can’t observe it directly. This symmetry, though, manifests as the conservation of electric charge, a thing we rather like.

This, then, is the sort of problem that draws Unitary Matrixes to our attention.


Thank you for reading. This and all of my 2020 A-to-Z essays should be at this link. All the essays from every A-to-Z series should be at this link. Next week, I hope to have something to say for the letter V.

My All 2020 Mathematics A to Z: Renormalization


I have again Elke Stangl, author of elkemental Force, to thank for the subject this week. As ever, Stangl’s is a blog of wide-ranging themes and interests. And it’s got more poetry this week, this time haikus about the Dirac delta function.

I also have Kerson Huang, of the Massachusetts Institute of Technology and of Nanyang Technological University, to thank for much insight into the week’s subject. Huang published the paper A Critical History of Renormalization, which gave me much to think about. It’s likely a paper that would help anyone hoping to know the history of the technique better.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x (division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Renormalization.

There is a mathematical model, the Ising Model, for how magnets work. The model has the simplicity of a toy model given by a professor (Wilhelm Lenz) to his grad student (Ernst Ising). Suppose matter is a uniform, uniformly-spaced grid. At each point on the grid we have either a bit of magnetism pointed up (value +1) or down (value -1). It is a nearest-neighbor model. Each point interacts with its nearest neighbors and none of the other points. For a one-dimensional grid this is easy. It’s the stuff of thermodynamics homework for physics majors. They don’t understand it, because you need the hyperbolic trigonometric functions. But they could. For two dimensions … it’s hard. But doable. And interesting. It describes important things like phase changes. The way that you can take a perfectly good strong magnet and heat it up until it’s just a warm lump of iron with no pull to it, then cool it back down into a strong magnet again.

For such a simple model it works well. A lot of the solids we find interesting are crystals, or are almost crystals. These are molecules arranged in a grid. So that part of the model is fine. They do interact, foremost, with their nearest neighbors. But not exclusively. In principle, every molecule in a crystal interacts with every other molecule. Can we account for this? Can we make a better model?

Yes, many ways. Here’s one. It’s designed for a square grid, the kind you get by looking at the intersections on a normal piece of graph paper. Each point is in a row and a column. The rows are a distance ‘a’ apart. So are the columns.

Now draw a new grid, on top of the old. Do it by grouping together two-by-two blocks of the original. Draw new rows and columns through the centers of these new blocks. Put at the new intersections a bit of magnetism. Its value is the mean of whatever the four original points around it are. So it could be 1, could be -1, could be 0, could be ½, could be -½. That’s five options instead of the original two. But look at what we have. It’s still an Ising-like model, with interactions between nearest-neighbors. There’s more choices for what value each point can have. And the grid spacing is now 2a instead of a. But it all looks pretty similar.

And now the great insight, that we can trace to Leo P Kadanoff in 1966. What if we relabel the distance between grid points? We called it 2a before. Call it a, now, again. What’s different, in any important way, from the Ising model we started with?

There’s the not-negligible point that there’s five different values a point can have, instead of two. But otherwise? In the operations we do, not much is different. How about in what it models? And there it’s interesting. Think of the original grid points. In the original scaling, they interacted only with units one original-row or one original-column away. Now? Their average interacts with the average of grid points that were as far as three original-rows or three original-columns away. It’s a small change. But it’s closer to reflecting the reality of every molecule interacting with every other molecule.

You know what happens when mathematicians get one good trick. We figure what happens if we do it again. Take the rescaled grid, the one that represents two-by-two blocks of the original. Rescale it again, making two-by-two blocks of these two-by-two blocks. Do the same rules about setting the center points as a new grid. And then re-scaling. What we have now are blocks that represent averages of four-by-four blocks of the original. And that, imperfectly, lets a point interact with a point seven original-rows or original-columns away. (Or farther: seven original-rows down and three original-columns to the left, say. Have fun counting all the distances.) And again: we have eight-by-eight blocks and even more range. Again: sixteen-by-sixteen blocks and double the range again. Why not carry this on forever?
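If you’d like to watch the blocks collapse, here’s a toy version of the averaging step (my own sketch; a real renormalization-group calculation also rescales the couplings, which I skip):

```python
import numpy as np

def coarse_grain(grid):
    """Replace each two-by-two block of the grid with the mean of its four points."""
    half = grid.shape[0] // 2
    return grid.reshape(half, 2, half, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
spins = rng.choice([-1.0, 1.0], size=(16, 16))    # a 16-by-16 grid of up/down points

for step in range(4):
    print(step, spins.shape, np.unique(spins))
    spins = coarse_grain(spins)
# The grid shrinks 16x16 -> 8x8 -> 4x4 -> 2x2 while the menu of allowed values
# at each point grows, just as in the block-spin picture described above.
```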

This is renormalization. It’s a specific sort, called the block-spin renormalization group. It comes from condensed matter physics, where we try to understand how molecules come together to form bulks of matter. Kenneth Wilson stretched this over to studying the Kondo Effect. This is a problem in how magnetic impurities affect electrical resistance. (It’s named for Jun Kondo.) It’s great work. It (in part) earned Wilson a Nobel Prize. But the idea is simple. We can understand complex interactions by making them simple ones. The interactions have a natural scale, cutting off at the nearest neighbor. But we redefine ‘nearest neighbor’, again and again, until it reaches infinitely far away.

This problem, and its solution, come from thermodynamics. Particularly, statistical mechanics. This is a bit ahistoric. Physicists first used renormalization in quantum mechanics. This is all right. As a general guideline, everything in statistical mechanics turns into something in quantum mechanics, and vice-versa. What quantum mechanics lacked, for a generation, was logical rigor for renormalization. This statistical mechanics approach provided that.

Renormalization in quantum mechanics we needed because of virtual particles. Quantum mechanics requires that particles can pop into existence, carrying momentum, and then pop back out again. This gives us electromagnetism, and the strong nuclear force (which holds particles together), and the weak nuclear force (which causes nuclear decay). Leave gravity over on the side. The more momentum in the virtual particle, the shorter a time it can exist. It’s actually the more energy, the shorter the particle lasts. In that guise you know it as the Uncertainty Principle. But it’s momentum that’s important here. This means short-range interactions transfer more momentum, and long-range ones transfer less. And here we had thought forces got stronger as the particles interacting got closer together.

In principle, there is no upper limit to how much momentum one of these virtual particles can have. And, worse, the original particle can interact with its virtual particle. This by exchanging another virtual particle. Which is even higher-energy and shorter-range. The virtual particle can also interact with the field that’s around the original particle. Pairs of virtual particles can exchange more virtual particles. And so on. What we get, when we add this all together, seems like it should be infinitely large. Every particle the center of an infinitely great bundle of energy.

Renormalization, the original renormalization, cuts that off. Sets an effective limit on the system. The limit is not “only particles this close will interact” exactly. It’s more “only virtual particles with less than this momentum will”. (Yes, there’s some overlap between these ideas.) This seems different to us mere dwellers in reality. But to a mathematical physicist, knowing that position and momentum are conjugate variables? Limiting one is the same work as limiting the other.

This, when developed, left physicists uneasy. It’s for good reasons. The cutoff is arbitrary. That a cutoff exists is fine enough; we often deal with arbitrary cutoffs for things. When we calculate a weather satellite’s orbit we do not care that other star systems exist. We barely care that Jupiter exists. Still, where to put the cutoff? Quantum Electrodynamics, using this, could provide excellent predictions of physical properties. But shouldn’t we get different predictions with different cutoffs? How do we know we’re not picking a cutoff because it makes our test problem work right? That we’re not picking one that produces garbage for every other problem? Read the writing of a physicist of the time and — oh, why be coy? We all read Richard Feynman, his QED at least. We see him sulking about a technique he used to brilliant effect.

Wilson-style renormalization answered Feynman’s objections. (Though not to Feynman’s satisfaction, if I understand the history right.) The momentum cutoff serves as a scale. Or if you prefer, the scale of interactions we consider tells us the cutoff. Different scales give us different quantum mechanics. One scale, one cutoff, gives us the way molecules interact together, on the scale of condensed-matter physics. A different scale, with a different cutoff, describes the particles of Quantum Electrodynamics. Other scales describe something more recognizable as classical physics. Or the Yang-Mills gauge theory, as describes the Standard Model of subatomic particles, all those quarks and leptons.

Renormalization offers a capsule of much of mathematical physics, though. It started as an arbitrary trick to avoid calculation problems. In time, we found a rationale for the trick. But found it from looking at a problem that seemed unrelated. On learning the related trick well, though, we see they’re different aspects of the same problem. It’s a neat bit of work.


This and all the other 2020 A-to-Z essays should be at this link. Essays from every A-to-Z series should be gathered at this link. I am looking eagerly for topics for the letters S, T, and U, and am scouting ahead for V, W, and X topics also. Thanks for your thoughts, and thank you for reading.

Using my A to Z Archives: Nearest Neighbor Model


For the 2018 A-to-Z I spent some time talking about a big piece of thermodynamics. Anyone taking a statistical mechanics course learns about the Nearest Neighbor Model. It’s a way of handling big systems of things that all interact. This is really hard to do. But if you make the assumption that the nearest pairs are the most important ones, and everything else is sort of a correction or meaningless noise? You get … a problem that’s easier to simulate on a computer. It’s not necessarily easier to solve. But it’s a good starting point for a lot of systems.

The restaurant I was thinking of, when I wrote this, was Woody’s Oasis, which had been kicked out of East Lansing as part of the stage in gentrification where all the good stuff gets the rent raised out from under it, and you get chain restaurants instead. They had a really good vegetarian … thing … called smead, that we guess was some kind of cracked-wheat sandwich filling. No idea what it was. There are other Woody’s Oasises in the area, somehow all different, and before the pandemic we kept figuring we’d go and see if they had smead sometime.

My 2018 Mathematics A To Z: Nearest Neighbor Model


I had a free choice of topics for today! Nobody had a suggestion for the letter ‘N’, so, I’ll take one of my own. If you did put in a suggestion, I apologize; I somehow missed the comment in which you did. I’ll try to do better in future.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble titles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Nearest Neighbor Model.

Why are restaurants noisy?

It’s one of those things I wondered while at a noisy restaurant. I have heard it is because restaurateurs believe patrons buy more, and more expensive stuff, in a noisy place. I don’t know that I have heard this correctly, nor that what I heard was correct. I’ll leave it to people who work that end of restaurants to say. But I wondered idly whether mathematics could answer why.

It’s easy to form a rough model. Suppose I want my brilliant words to be heard by the delightful people at my table. Then I have to be louder, to them, than the background noise is. Fine. I don’t like talking loudly. My normal voice is soft enough even I have a hard time making it out. And I’ll drop the ends of sentences when I feel like I’ve said all the interesting parts of them. But I can overcome my instinct if I must.

The trouble comes from other people thinking of themselves the way I think of myself. They want to be heard over how loud I have been. And there’s no convincing them they’re wrong. If there’s bunches of tables near one another, we’re going to have trouble. We’ll each be talking loud enough to drown one another out, until the whole place is a racket. If we’re close enough together, that is. If the tables around mine are empty, chances are my normal voice is enough for the cause. If they’re not, we might have trouble.

So this inspires a model. The restaurant is a space. The tables are set positions, points inside it. Each table is making some volume of noise. Each table is trying to be louder than the background noise. At least until the people at the table reach the limits of their screaming. Or decide they can’t talk, they’ll just eat and go somewhere pleasant.

Making calculations on this demands some more work. Some is obvious: how do you represent “quiet” and “loud”? Some is harder: how far do voices carry? Grant that a loud table is still loud if you’re near it. How far away before it doesn’t sound loud? How far away before you can’t hear it anyway? Imagine a dining room that’s 100 miles long. There’s no possible party at one end that could ever be heard at the other. Never mind that a 100-mile-long restaurant would be absurd. It shows that the limits of people’s voices are a thing we have to consider.

There are many ways to model this distance effect. A realistic one would fall off with distance, sure. But it would also allow for echoes and absorption by the walls, and by other patrons, and maybe by restaurant decor. This would take forever to get answers from, but if done right it would get very good answers. A simpler model would give answers less fitted to your actual restaurant. But the answers may be close enough, and let you understand the system. And may be simple enough that you can get answers quickly. Maybe even by hand.

And so I come to the “nearest neighbor model”. The common English meaning of the words suggests what it’s about. We get it from models, like my restaurant noise problem. It’s made of a bunch of points that have some value. For my problem, tables and their noise level. And that value affects stuff in some region around these points.

In the “nearest neighbor model”, each point directly affects only its nearest neighbors. Saying which is the nearest neighbor is easy if the points are arranged in some regular grid. If they’re evenly spaced points on a line, say. Or a square grid. Or a triangular grid. If the points are in some other pattern, you need to think about what the nearest neighbors are. This is why people working in neighbor-nearness problems get paid the big money.

Suppose I use a nearest neighbor model for my restaurant problem. In this, I pretend the only background noise at my table is that of the people the next table over, in each direction. Two tables over? Nope. I don’t hear them at my table. I do get an indirect effect. Two tables over affects the table that’s between mine and theirs. But vice-versa, too. The table that’s 100 miles away can’t affect me directly, but it can affect a table in-between it and me. And that in-between table can affect the next one closer to me, and so on. The effect is attenuated, yes. Shouldn’t it be, if we’re looking at something farther away?

This sort of model is easy to work with numerically. I’m inclined toward problems that work numerically. Analytically … well, it can be easy. It can be hard. There’s a one-dimensional version of this problem, a bunch of evenly-spaced sites on an infinitely long line. If each site is limited to one of exactly two values, the problem becomes easy enough that freshman physics majors can solve it exactly. They don’t, not the first time out. This is because it requires recognizing a trigonometry trick that they don’t realize would be relevant. But once they know the trick, they agree it’s easy, when they go back two years later and look at it again. It just takes familiarity.
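The trick, for the curious, is the transfer matrix, and the hyperbolic trigonometry falls out of its eigenvalues. Here’s a sketch of mine for the zero-field case, with the coupling strength and temperature folded into one number:

```python
import numpy as np

beta_J = 0.7    # coupling strength divided by temperature, in convenient units

# Transfer matrix for the one-dimensional, two-value, nearest-neighbor chain:
# one entry for each way a site and its neighbor can agree or disagree.
T = np.array([[np.exp(beta_J),  np.exp(-beta_J)],
              [np.exp(-beta_J), np.exp(beta_J)]])

print(np.linalg.eigvalsh(T).max())    # the larger eigenvalue of T ...
print(2 * np.cosh(beta_J))            # ... is exactly 2 cosh(beta_J): there's the trig
```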

This comes up in thermodynamics, because it makes a nice model for how ferromagnetism can work. More realistic problems, like, two-dimensional grids? … That’s harder to solve exactly. Can be done, though not by undergraduates. Three-dimensional can’t, last time I looked. Weirdly, four-dimensional can. You expect problems to only get harder with more dimensions of space, and then you get a surprise like that.

The nearest-neighbor-model is a first choice. It’s hardly the only one. If I told you there were a next-nearest-neighbor model, what would you suppose it was? Yeah, you’d be right. As long as you supposed it was “things are affected by the nearest and the next-nearest neighbors”. Mathematicians have heard of loopholes too, you know.

As for my restaurant model? … I never actually modelled it. I did think about the model. I concluded my model wasn’t different enough from ferromagnetism models to need me to study it more. I might be mistaken. There may be interesting weird effects caused by the facts of restaurants. That restaurants are pretty small things. That they can have echo-y walls and ceilings. That they can have sound-absorbing things like partial walls or plants. Perhaps I gave up too easily when I thought I knew the answer. Some of my idle thoughts end up too idle.


I should have my next Fall 2018 Mathematics A-To-Z post on Tuesday. It’ll be available at this link, as are the rest of these glossary posts.

Some Thermomathematics Reading


I have been writing, albeit more slowly, this month. I’m also reading, also more slowly than usual. Here’s some things that caught my attention.

One is from Elke Stangl, of the Elkemental blog. “Re-Visiting Carnot’s Theorem” is about one of the centerpieces of thermodynamics. It’s about how much work you can possibly get out of an engine, and how much must be lost no matter how good your engineering is. Thermodynamics is the secret spine of modern physics. It was born of supremely practical problems, many of them related to railroads or factories. And it teaches how much solid information can be drawn about a system if we know nothing about the components of the system. Stangl also brings ASCII art back from its Usenet and Twitter homes. There’s just stuff that is best done as a text picture.

Meanwhile on the CarnotCycle blog Peter Mandel writes on “Le Châtelier’s principle”. This is related to the question of how temperatures affect chemical reactions: how fast they will be, how completely they’ll use the reagents. How a system that’s reached equilibrium will react to something that unsettles the equilibrium. We call that a perturbation. Mandel reviews the history of the principle, which hasn’t always been well-regarded, and explores why it might have gone under-appreciated for decades.

And lastly MathsByAGirl has published a couple of essays on spirals. Who doesn’t like them? Three-dimensional spirals, that is, helixes, have some obvious things to talk about. A big one is that there’s such a thing as handedness. The mirror image of a coil is not the same thing as the coil flipped around. This handedness has analogues and implications through chemistry and biology. Two-dimensional spirals, by contrast, don’t have handedness like that. But we’ve grouped spirals into many different sorts, each with their own beauty. They’re worth looking at.

Who Discovered Boyle’s Law?


Stigler’s Law is a half-joking principle of mathematics and scientific history. It says that scientific discoveries are never named for the person who discovered them. It’s named for the statistician Stephen Stigler, who asserted that the principle was discovered by the sociologist Robert K Merton.

If you study much scientific history you start to wonder if anything is named correctly. There are reasons why. Often it’s very hard to say exactly what the discovery is, especially if it’s something fundamental. Often the earliest reports of something are unclear, at least to later eyes. People’s attention falls on a person who did very well describing or who effectively publicized the discovery. Sometimes a discovery is just in the air, and many people have important pieces of it nearly simultaneously. And sometimes history just seems perverse. Pell’s Equation, for example, is named for John Pell, who did not discover it, did not solve it, and did not particularly advance our understanding of it. We seem to name it Pell’s because Pell had translated a book which included a solution of the problem into English, and Leonhard Euler mistakenly thought Pell had solved it.

The Carnot Cycle blog for this month is about a fine example of naming confusion. In this case it’s about Boyle’s Law. That’s one of the rules describing how gases work. It says that, if a gas is held at a constant temperature, and the amount of gas doesn’t change, then the pressure of the gas times its volume stays constant. Squeeze the gas into a smaller volume and it exerts more pressure on the container it’s in. Stretch it into a larger volume and it presses more weakly on the container.
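In symbols — my numbers here, just for illustration — the rule is P_1 V_1 = P_2 V_2 whenever the temperature and the amount of gas stay fixed. A sample of gas at 100 kilopascals filling two liters, squeezed down to one liter, pushes back at \frac{100 \times 2}{1} = 200 kilopascals.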

Obvious? Perhaps. But it is a thing that had to be discovered. There’s a story behind that. Peter Mander explains some of its tale.

Reading the Comics, November 21, 2015: Communication Edition


And then three days pass and I have enough comic strips for another essay. That’s fine by me, really. I picked this edition’s name because there’s a comic strip that actually touches on information theory, and another that’s about a much-needed mathematical symbol, and another about the ways we represent numbers. That’s enough grounds for me to use the title.

Samson’s Dark Side Of The Horse for the 19th of November looks like this week’s bid for an anthropomorphic numerals joke. I suppose it’s actually numeral cosplay instead. I’m amused, anyway.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 19th of November makes a patent-law joke out of the invention of zero. It’s also an amusing joke. It may be misplaced, though. The origins of zero as a concept are hard enough to trace. We can at least trace the symbol zero. In Finding Zero: A Mathematician’s Odyssey to Uncover the Origins of Numbers, Amir D Aczel traces out not just the (currently understood) history of Arabic numerals, but some of how the history of that history has evolved, and finally traces down the oldest known example of a written (well, carved) zero.

Tony Cochrane’s Agnes for the 20th of November is at heart just a joke about a student’s apocalyptically bad grades. It contains an interesting punch line, though, in Agnes’s statement that “math people are dreadful spellers”. I haven’t heard that before. It might be a joke about algebra introducing letters into numbers. But it does seem to me there’s a supposition that mathematics people aren’t very good writers or speakers. I do remember back as an undergraduate other people on the student newspaper being surprised I could write despite majoring in physics and mathematics. That may reflect people remembering bad experiences of sitting in class with no idea what the instructor was going on about. It’s easy to go from “I don’t understand this mathematics class” to “I don’t understand mathematics people”.

Steve Sicula’s Home and Away for the 20th of November is about using gambling as a way to teach mathematics. So it would be a late entry for the recent Gambling Edition of the Reading The Comics posts. Although this strip is a rerun from the 15th of August, 2008, so it’s actually an extremely early entry.

Ruben Bolling’s Tom The Dancing Bug for the 20th of November is a Super-Fun-Pak Comix installment. And for a wonder it hasn’t got a Chaos Butterfly sequence. Under the Guy Walks Into A Bar label is a joke about a horse doing arithmetic that itself swings into a base-ten joke. In this case it’s suggested the horse would count in base four, and I suppose that’s plausible enough. The joke depends on the horse pronouncing a base four “10” as “ten”, when the number is actually “four”. But the lure of the digits is very hard to resist, and saying “four” suggests the numeral “4” whatever the base is supposed to be.

Mark Leiknes’s Cow and Boy for the 21st of November is a rerun from the 9th of August, 2008. It mentions the holographic principle, which is a neat concept. The principle’s explained all right in the comic. The idea was first developed in the late 1970s, following the study of black hole thermodynamics. Black holes are fascinating because the mathematics of them suggest they have a temperature, and an entropy, and even information which can pass into and out of them. This study implied that information about the three-dimensional volume of the black hole was contained entirely in the two-dimensional surface, though. From here things get complicated, though, and I’m going to shy away from describing the whole thing because I’m not sure I can do it competently. It is an amazing thing that information about a volume can be encoded in the surface, though, and vice-versa. And it is astounding that we can imagine a logically consistent organization of the universe that has a structure completely unlike the one our senses suggest. It’s a lasting and hard-to-dismiss philosophical question. How much of the way the world appears to be structured is the result of our minds, our senses, imposing that structure on it? How much of it is because the world is ‘really’ like that? (And does ‘really’ mean anything that isn’t trivial, then?)

I should make clear that while we can imagine it, we haven’t been able to prove that this holographic universe is a valid organization. Explaining gravity in quantum mechanics terms is a difficult point, as it often is.

Dave Blazek’s Loose Parts for the 21st of November is a two- versus three-dimensions joke. The three-dimension figure on the right is a standard way of drawing x-, y-, and z-axes, organized in an ‘isometric’ view. That’s one of the common ways of drawing three-dimensional figures on a two-dimensional surface. The two-dimension figure on the left is a quirky representation, but it’s probably unavoidable as a way to make the whole panel read cleanly. Usually when the axes are drawn isometrically, the x- and y-axes are the lower ones, with the z-axis the one pointing vertically upward. That is, they’re the ones in the floor of the room. So the typical two-dimensional figure would be the lower axes.

How Antifreeze Works


I hate to report this but Peter Mander’s CarnotCycle blog has reached its last post for the calendar year. It’s a nice, practical one, though, explaining how antifreeze works. What’s important about antifreeze to us is that we can add something to water so that its freezing point is at some more convenient temperature. The logic of why it works is there in statistical mechanics, and the mathematics of it can be pretty simple. One of the things which awed me in high school chemistry class was learning how to use the formulas to describe how much different solutions would shift the freezing temperature around. It seemed all so very simple, and so practical too.
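The formula I remember, for the record — it’s the standard textbook relation, not something from Mander’s post — is the freezing-point depression rule \Delta T_f = i \cdot K_f \cdot m . The drop in freezing temperature is proportional to the concentration m of dissolved stuff, times a constant K_f that depends only on the solvent, times a factor i counting how many particles each unit of solute breaks into.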

Phase Equilibria and the usefulness of μ


The Carnot Cycle blog for this month is about chemical potential. “Chemical potential” in thermodynamics covers a lot of interesting phenomena. It gives a way to model chemistry using the mechanisms of statistical mechanics. It lets us study a substance that’s made of several kinds of particle. This potential is written with the symbol μ, and I admit I don’t know how that symbol got picked over all the possible alternatives.

The chemical potential varies with the number of particles. Each different type of particle gets its own chemical potential, so there may be a μ1 and μ2 and μ3 and so on. The chemical potential μ1 is how much the free energy varies as the count of particles-of-type-1 varies. μ2 is how much the free energy varies as the count of particles-of-type-2 varies, and so on. This might strike you as similar to the way pressure and volume of a gas depend on each other, or if you retained a bit more of thermodynamics how the temperature and entropy vary. This is so. Pressure and volume are conjugate variables, as are temperature and entropy, and so are chemical potential and particle number. (And for a wonder, “particle number” means exactly what it suggests: the number of particles of that kind in the system.)
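Written out, the chemical potential is a partial derivative. For particles of type 1, \mu_1 = \left(\frac{\partial G}{\partial n_1}\right)_{T, P, n_2, n_3, \ldots} : the change in the Gibbs free energy G per change in the particle count n_1, holding temperature, pressure, and the other particle counts fixed. (That’s the standard textbook definition, added here for reference; the other μ’s follow the same pattern.)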

From the CarnotCycle blog:

It was the American mathematical physicist Josiah Willard Gibbs who introduced the concepts of phase and chemical potential in his milestone monograph On the Equilibrium of Heterogeneous Substances (1876-1878) with which he almost single-handedly laid the theoretical foundations of chemical thermodynamics.

In a paragraph under the heading “On Coexistent Phases of Matter” Gibbs mentions – in passing – that for a system of coexistent phases in equilibrium at constant temperature and pressure, the chemical potential μ of any component must have the same value in every phase.

This simple statement turns out to have considerable practical value as we shall see. But first, let’s go through the formal proof of Gibbs’ assertion.

An important result


Consider a system of two phases, each containing the same components, in equilibrium at constant temperature and pressure. Suppose a small quantity dni moles of any component i is transferred from phase A in…


How Gibbs derived the Phase Rule


I knew I’d been forgetting something about the end of summer. I’m embarrassed again that what I’d forgotten was Peter Mander’s Carnot Cycle blog resuming its discussions of thermodynamics.

The September article is about Gibbs’s phase rule. Gibbs here is Josiah Willard Gibbs, who established much of the mathematical vocabulary of thermodynamics. The phase rule here talks about the change of a substance from one phase to another. The classic example is water changing from liquid to solid, or solid to gas, or gas to liquid. Everything does that for some combinations of pressure and temperature and available volume. It’s just a good example because we can see those phase transitions happen whenever we want.

The question that feels natural to mathematicians, and physicists, is about degrees of freedom. Suppose that we’re able to take a substance and change its temperature or its volume or its pressure. How many of those things can we change at once without making the material different? And the phase rule is a way to calculate that. It’s not always the same number because at some combinations of pressure and temperature and volume the substance can be equally well either liquid or gas, or be gas or solid, or be solid or liquid. These represent phase transitions, melting or solidifying or evaporating. There’s even one combination — the triple point — where the material can be solid, liquid, or gas simultaneously.
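For reference — this is the standard statement of the rule, not a quotation from Mander — the phase rule reads F = C - P + 2 , where C is the number of chemical components, P the number of coexisting phases, and F the number of degrees of freedom. Pure water is a single component, so at the triple point, with three phases coexisting, F = 1 - 3 + 2 = 0 . No freedom at all: the triple point is one exact combination of temperature and pressure.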

Carnot Cycle presents the way that Gibbs originally derived his phase rule. And it’s remarkably neat and clean and accessible. The meat of it is really a matter of counting, keeping track of how much information we have and how much we want and looking at the difference between the things. I recommend reading it even if you are somehow not familiar with differential forms. Simply trust that a “d” followed by some other letter (or a letter with a subscript) is some quantity whose value we might be interested in, and you should follow the reasoning well.

From the CarnotCycle blog:

The Phase Rule formula was first stated by the American mathematical physicist Josiah Willard Gibbs in his monumental masterwork On the Equilibrium of Heterogeneous Substances (1875-1878), in which he almost single-handedly laid the theoretical foundations of chemical thermodynamics.

In a paragraph under the heading “On Coexistent Phases of Matter”, Gibbs gives the derivation of his famous formula in just 77 words. Of all the many Phase Rule proofs in the thermodynamic literature, it is one of the simplest and shortest. And yet textbooks of physical science have consistently overlooked it in favor of more complicated, less lucid derivations.

To redress this long-standing discourtesy to Gibbs, CarnotCycle here presents Gibbs’ original derivation of the Phase Rule in an up-to-date form. His purely prose description has been supplemented with clarifying mathematical content, and the outmoded symbols used in the single equation to which he refers have been replaced with their…


Reading the Comics, September 16, 2015: Celebrity Appearance Edition


I couldn’t go on calling this Back To School Editions. A couple of the comic strips the past week have given me reason to mention people famous in mathematics or physics circles, and one who’s even famous in the real world too. That’ll do for a title.

Jeff Corriveau’s Deflocked for the 15th of September tells what I want to call an old joke about geese formations. The thing is that I’m not sure it is an old joke. At least I can’t think of it being done much. It seems like it should have been.

The formations that geese, or other birds, fly in have been a neat corner of mathematics. The question they inspire is “how do birds know what to do?” How can they form complicated groupings and, more, change their flight patterns at a moment’s notice? (Geese flying in V shapes don’t need to do that, but other flocking birds will.) One surprising answer is that if each bird is just trying to follow a couple of simple rules, then if you have enough birds, the group will do amazingly complex things. This is good for people who want to say how complex things come about. It suggests you don’t need very much to have robust and flexible systems. It’s also bad for people who want to say how complex things come about. It suggests that many things that would be interesting can’t be studied in simpler models. Use a smaller number of birds or fewer rules or such and the interesting behavior doesn’t appear.

The geese are flying in V, I, and X patterns. The guess is that they're Roman geese.
Jeff Corriveau’s Deflocked for the 15th of September, 2015.

Scott Adams’s Dilbert Classics from the 15th and 16th of September (originally run the 22nd and 23rd of July, 1992) are about mathematical forecasts of the future. This is a hard field. It’s one people have been dreaming of doing for a long while. J Willard Gibbs, the renowned 19th century physicist who put the mathematics of thermodynamics in essentially its modern form, pondered whether a thermodynamics of history could be made. But attempts at making such predictions top out at demographic or rough economic forecasts, and for obvious reason.

The next day Dilbert’s garbageman, the smartest person in the world, asserts the problem is chaos theory, that “any complex iterative model is no better than a wild guess”. I wouldn’t put it that way, although I’m not sure what would convey the idea within the space available. One problem with predicting complicated systems, even if they are deterministic, is that there is a difference between what we can measure a system to be and what the system actually is. And for some systems that slight error will be magnified quickly to the point that a prediction based on our measurement is useless. (Fortunately this seems to affect only interesting systems, so we can still do things like study physics in high school usefully.)
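If you want to watch that magnification happen, here’s a toy demonstration — mine, using the logistic map, a standard classroom example rather than anything from the strip. Two starting values that differ by one part in a million stop agreeing within a few dozen steps:

```python
# Two runs of the same deterministic rule (the logistic map), started a millionth apart.
x, y = 0.400000, 0.400001
for step in range(1, 31):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(step, abs(x - y))
# The gap grows from one part in a million to order one well before step 30,
# at which point the "forecast" y says nothing about the "true" state x.
```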

Maria Scrivan’s Half Full for the 16th of September makes the Common Core joke. A generation ago this was a New Math joke. It’s got me curious about the history of attempts to reform mathematics teaching, and how poorly they get received. Surely someone’s written a popular or at least semipopular book about the process? I need some friends in the anthropology or sociology departments to tell me, I suppose.

In Mark Tatulli’s Heart of the City for the 16th of September, Heart is already feeling lost in mathematics. She’s in enough trouble she doesn’t recognize mathematics terms. That is an old joke, too, although I think the best version of it was done in a Bloom County with no mathematical content. (Milo Bloom met his idol Betty Crocker and learned that she was a marketing icon who knew nothing of cooking. She didn’t even recognize “shish kebob” as a cooking term.)

Mell Lazarus’s Momma for the 16th of September sneers at the idea of predicting where specks of dust will land. But the motion of dust particles is interesting. What can be said about the way dust moves when the dust is being battered by air molecules that are moving as good as randomly? This becomes a problem in statistical mechanics, and one that depends on many things, including just how fast air particles move and how big molecules are. Now for the celebrity part of this story.

Albert Einstein published four papers in his “Annus mirabilis” year of 1905. One of them was the Special Theory of Relativity, and another the mass-energy equivalence. Those, and the General Theory of Relativity, are surely why he became and still is a familiar name to people. One of his others was on the photoelectric effect. It’s a cornerstone of quantum mechanics. If Einstein had done nothing in relativity he’d still be renowned among physicists for that. The last paper, though, that was on Brownian motion, the movement of particles buffeted by random forces like this. And if he’d done nothing in relativity or quantum mechanics, he’d still probably be known in statistical mechanics circles for this work. Among other things this work gave the first good estimates for the size of atoms and molecules, and gave easily observable, macroscopic-scale evidence that molecules must exist. That took some work, though.
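
If you want to poke at the dust problem yourself, the simplest stand-in is a random walk: kick a particle left or right at random each moment and watch the mean squared displacement grow in proportion to time, which is the sort of quantity Einstein’s analysis centered on. A rough sketch:

```python
import random

# A particle kicked one unit left or right at random each step; average the
# squared displacement over many independent particles.
def mean_squared_displacement(n_steps, n_particles=5_000):
    total = 0.0
    for _ in range(n_particles):
        x = sum(random.choice((-1, 1)) for _ in range(n_steps))
        total += x * x
    return total / n_particles

for t in (10, 100, 1000):
    print(t, mean_squared_displacement(t))   # comes out close to t itself
```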

Dave Whamond’s Reality Check for the 16th of September shows off the Metropolitan Museum of Symmetry. This is probably meant to be an art museum. Symmetries are studied in mathematics too, though. Many symmetries, the ways you can swap shapes around, form interesting groups or rings. And in mathematical physics, symmetries give us useful information about the behavior of systems. That’s enough for me to claim this comic is mathematically linked.

How Pinball Leagues and Chemistry Work: The Mathematics


My love and I play in several pinball leagues. I need to explain something of how they work.

Most of them organize league nights by making groups of three or four players and having them play five games each on a variety of pinball tables. The groupings are made by order. The 1st through 4th highest-ranked players who’re present are the first group, the 5th through 8th the second group, the 9th through 12th the third group, and so on. For each table the player with the highest score gets some number of league points. The second-highest score earns a lesser number of league points, third-highest gets fewer points yet, and the lowest score earns the player comments about how the table was not being fair. The total number of points goes into the player’s season score, which gives her ranking.
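
For the record-keeping-inclined, that bookkeeping fits in a few lines. The 4/3/2/1 point values here are a stand-in, since every league sets its own:

```python
# Group the attending players by rank, four to a group, then award points on
# each table by finishing order.
POINTS_BY_FINISH = [4, 3, 2, 1]   # best score on a table down to worst

def make_groups(players_by_rank):
    """Split an already-ranked list of attending players into groups of four."""
    return [players_by_rank[i:i + 4] for i in range(0, len(players_by_rank), 4)]

def award_points(scores):
    """scores: {player: table score} for one game -> {player: league points}."""
    finish_order = sorted(scores, key=scores.get, reverse=True)
    return {p: POINTS_BY_FINISH[i] for i, p in enumerate(finish_order)}

print(make_groups(["Ann", "Bob", "Cam", "Dee", "Eve", "Fay", "Gil"]))
print(award_points({"Ann": 5_000_000, "Bob": 7_250_000, "Cam": 900_000, "Dee": 3_100_000}))
```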

You might see the bootstrapping problem here. Where do the rankings come from? And what happens if someone joins the league mid-season? What if someone misses a competition day? (Some leagues give a fraction of points based on the player’s season average. Other leagues award no points.) How does a player get correctly ranked?

Continue reading “How Pinball Leagues and Chemistry Work: The Mathematics”

Conditions of equilibrium and stability


This month Peter Mander’s CarnotCycle blog talks about the interesting world of statistical equilibriums. And particularly it talks about stable equilibriums. A system’s in equilibrium if it isn’t going to change over time. It’s in a stable equilibrium if being pushed a little bit out of equilibrium isn’t going to carry it far from where it started.

For simple physical problems these are easy to understand. For example, a marble resting at the bottom of a spherical bowl is in a stable equilibrium. At the exact bottom of the bowl, the marble won’t roll away. If you give the marble a little nudge, it’ll roll around, but it’ll stay near where it started. A marble sitting on the top of a sphere is in an equilibrium — if it’s perfectly balanced it’ll stay where it is — but it’s not a stable one. Give the marble a nudge and it’ll roll away, never to come back.
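
If symbols work better for you than marbles, the same idea in one dimension: equilibria sit where the potential energy is flat, and they’re stable where it curves upward. A small sketch with an arbitrary double-well potential of my own choosing:

```python
import sympy as sp

x = sp.symbols('x', real=True)
U = x**4 - 2 * x**2          # a made-up double-well potential energy

equilibria = sp.solve(sp.diff(U, x), x)        # where U'(x) = 0
for x0 in equilibria:
    curvature = sp.diff(U, x, 2).subs(x, x0)
    print(x0, "stable" if curvature > 0 else "unstable")
# x = -1 and x = 1 sit at the bottoms of the wells (stable, like the marble in
# the bowl); x = 0 is the hump between them (unstable, the marble atop the sphere).
```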

In statistical mechanics we look at complicated physical systems, ones with thousands or millions or even really huge numbers of particles interacting. But there are still equilibriums, some stable, some not. In these, stuff will still happen, but the kind of behavior doesn’t change. Think of a steadily-flowing river: none of the water is staying still, or close to it, but the river isn’t changing.

CarnotCycle describes how to tell, from properties like temperature and pressure and entropy, when systems are in a stable equilibrium. These are properties that don’t tell us a lot about what any particular particle is doing, but they can describe the whole system well. The essay is higher-level than usual for my blog. But if you’re taking a statistical mechanics or thermodynamics course this is just the sort of essay you’ll find useful.

carnotcycle


In terms of simplicity, purely mechanical systems have an advantage over thermodynamic systems in that stability and instability can be defined solely in terms of potential energy. For example the center of mass of the tower at Pisa, in its present state, must be higher than in some infinitely near positions, so we can conclude that the structure is not in stable equilibrium. This will only be the case if the tower attains the condition of metastability by returning to a vertical position or absolute stability by exceeding the tipping point and falling over.


Thermodynamic systems lack this simplicity, but in common with purely mechanical systems, thermodynamic equilibria are always metastable or stable, and never unstable. This is equivalent to saying that every spontaneous (observable) process proceeds towards an equilibrium state, never away from it.

If we restrict our attention to a thermodynamic system of unchanging composition and apply…

View original post 2,534 more words

Reversible and irreversible change


Entropy is hard to understand. It’s deceptively easy to describe, and the concept is popular, but to understand it is challenging. In this month’s entry CarnotCycle talks about thermodynamic entropy and where it comes from. I don’t promise you will understand it after this essay, but you will be closer to understanding it.

carnotcycle


Reversible change is a key concept in classical thermodynamics. It is important to understand what is meant by the term as it is closely allied to other important concepts such as equilibrium and entropy. But reversible change is not an easy idea to grasp – it helps to be able to visualize it.

Reversibility and mechanical systems

The simple mechanical system pictured above provides a useful starting point. The aim of the experiment is to see how much weight can be lifted by the fixed weight M1. Experience tells us that if a small weight M2 is attached – as shown on the left – then M1 will fall fast while M2 is pulled upwards at the same speed.

Experience also tells us that as the weight of M2 is increased, the lifting speed will decrease until a limit is reached when the weight difference between M2 and M1 becomes…

View original post 692 more words

The Thermodynamics of Life


Peter Mander of the Carnot Cycle blog, which is primarily about thermodynamics, has a neat bit about constructing a mathematical model for how the body works. This model doesn’t look anything like a real body, as it’s concerned with basically the flow of heat, and how respiration fuels the work our bodies need to do to live. Modeling at this level of detail brings to mind an old joke told of mathematicians — that, challenged to design a maximally efficient dairy farm, the mathematician begins with “assume a spherical cow” — but great insights can come from models that look too simple to work.

It also, sad to say, includes a bit of Bright Young Science-Minded Lad (in this case, the author’s partner of the time) reasoning his way through what traumatized people might think, in a way that’s surely well-intended but also has to be described as “surely well-intended”, so, know that the tags up top of the article aren’t misleading.

Reading the Comics, November 28, 2014: Greatest Hits Edition?


I don’t ever try speaking for Comic Strip Master Command, and it almost never speaks to me, but it does seem like this week’s strips mentioning mathematical themes was trying to stick to the classic subjects: anthropomorphized numbers, word problems, ways to measure time and space, under-defined probability questions, and sudoku. It feels almost like a reunion weekend to have all these topics come together.

Dan Thompson’s Brevity (November 23) is a return to the world-of-anthropomorphic-numbers kind of joke, and a pun on the arithmetic mean, which is after all the statistic which most lends itself to puns, just edging out the “range” and the “single-factor ANOVA F-Test”.

Phil Frank and Joe Troise’s The Elderberries (November 23, rerun) brings out word problem humor, using train-leaves-the-station problems as a representative of the kinds of thinking academics do. Nagging slightly at me is that I think the strip had established the Professor as one of philosophy, and while it’s certainly not unreasonable for a philosopher to be interested in mathematics I wouldn’t expect this kind of mathematics to strike him as very interesting. But then there is the need to get the idea across in two panels, too.

Jonathan Lemon’s Rabbits Against Magic (November 25) brings up a way of identifying the time — “half seven” — which recalls one of my earliest essays around here, “How Many Numbers Have We Named?”, because the construction is one that I find charming and that I was glad to hear was still current. “Half seven” strikes me as similar in construction to saying a number as “five and twenty” instead of “twenty-five”, although I’m ignorant as to whether there actually is any similarity.

Scott Hilburn’s The Argyle Sweater (November 26) brings out a joke that I thought had faded out back around, oh, 1978, when the United States decided it wasn’t going to try converting to metric after all, now that we had two-liter bottles of soda. The curious thing about this sort of hyperconversion (it’s surely a satiric cousin to the hypercorrection that makes people mangle a sentence in the misguided hope of perfecting it) — besides that the “yard” in Scotland Yard is obviously not a unit of measure — is the notion that it’d be necessary to update idiomatic references that contain old-fashioned units of measurement. Part of what makes idioms interesting at all is that they can be old-fashioned while still making as much sense as possible; “in for a penny, in for a pound” is a sensible thing to say in the United States, where the pound hasn’t been legal tender since 1857; why would (say) “an ounce of prevention is worth a pound of cure” be any different? Other than that it’s about the only joke easily found on the ground once you’ve decided to look for jokes in the “systems of measurement” field.

Mark Heath’s Spot the Frog (November 26, rerun) I’m not sure actually counts as a mathematics joke, although it’s got me intrigued: Surly Toad claims to have a stick in his mouth to use to give the impression of a smile, or 37 (“Sorry, 38”) other facial expressions. The stick’s shown as a bundle of maple twigs, wound tightly together and designed to take shapes easily. This seems to me the kind of thing that’s grown as an application of knot theory, the study of, well, it’s almost right there in the name. Knots, the study of how strings of things can curl over and around and cross themselves (or other strings), seemed for a very long time to be a purely theoretical playground, not least because, to be addressable by theory, the knots had to be made of an imaginary material that could be stretched arbitrarily finely, and could be pushed frictionlessly through itself, which allows for good theoretical work but doesn’t act a thing like a shoelace. Then I think everyone was caught by surprise when it turned out the mathematics of these very abstract knots also describe the way proteins and other long molecules fold, and unfold; and from there it’s not too far to discovering wonderful structures that can change almost by magic with slight bits of pressure. (For my money, the most astounding thing about knots is that you can describe thermodynamics — the way heat works — on them, but I’m inclined towards thermodynamic problems.)

Veronica was out of town for a week; Archie's test scores improved. This demonstrates that test scores aren't everything.
Henry Scarpelli and Craig Boldman’s Archie for the 28th of November, 2014. Clearly we should subject this phenomenon to scientific inquiry!

Henry Scarpelli and Craig Boldman’s Archie (November 28, rerun) offers an interesting problem: when Veronica was out of town for a week, Archie’s test scores improved. Is there a link? This kind of thing is awfully interesting to study, and awfully difficult to study: there’s no way to run a truly controlled experiment to see whether Veronica’s presence affects Archie’s test scores. After all, he never takes the same test twice, even if he re-takes a test on the same subject (and even if the re-test were the exact same questions, he would go into it the second time with relevant experience that he didn’t have the first time). And a couple good test scores might be relevant, or might just be luck, or it might be that something else changed that week that we haven’t noticed yet. How can you trace down plausible causal links in a complicated system?

One approach is an experimental design that, at least in the psychology textbooks I’ve read, gets called A-B-A, or A-B-A-B, experiment design: measure whatever it is you’re interested in during a normal time, “A”, before whatever it is whose influence you want to see has taken hold. Then measure it for a time “B” where something has changed, like, Veronica being out of town. Then go back as best as possible to the normal situation, “A” again; and, if your time and research budget allow, going back to another stretch of “B” (and, hey, maybe even “A” again) helps. If there is an influence, it ought to appear sometime after “B” starts, and fade out again after the return to “A”. The more you’re able to replicate this the sounder the evidence for a link is.
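
The arithmetic involved is nothing fancy; the work is all in the design. Here’s a bare-bones look at what you’d do with A-B-A-B data, with numbers made up purely to show the bookkeeping:

```python
# Average the measurements inside each phase and see whether the "B" phases
# stand apart from both "A" phases.
phases = [
    ("A", [71, 68, 70, 69]),   # baseline stretch
    ("B", [78, 80, 77, 81]),   # the change in effect (Veronica out of town)
    ("A", [70, 72, 69, 71]),   # back to normal
    ("B", [79, 77, 82, 78]),   # the change again
]

for label, scores in phases:
    print(label, sum(scores) / len(scores))
# If the B averages sit consistently above both A averages, that's at least
# suggestive of a link; one good B stretch on its own could just be luck.
```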

(We’re actually in the midst of something like this around home: our pet rabbit was diagnosed with a touch of arthritis in his last checkup, but mildly enough and in a strange place, so we couldn’t tell whether it’s worth putting him on medication. So we got a ten-day prescription and let that run its course and have tried to evaluate whether it’s affected his behavior. This has proved difficult to say because we don’t really have a clear way of measuring his behavior, although we can say that the arthritis medicine is apparently his favorite thing in the world, based on his racing up to take the liquid and his trying to grab it if we don’t feed it to him fast enough.)

Ralph Hagen’s The Barn (November 28) has Rory the sheep wonder about the chances he and Stan the bull should be together in the pasture, given how incredibly vast the universe is. That’s a subtly tricky question to ask, though. If you want to show that everything that ever existed is impossibly unlikely you can work out, say, how many pastures there are on Earth, multiply it by an estimate of how many Earth-like planets there likely are in the universe, take one divided by that number, and marvel at Rory’s incredible luck. But that number’s fairly meaningless: among other obvious objections, wouldn’t Rory wonder the same thing if he were in a pasture with Dan the bull instead? And Rory wouldn’t be wondering anything at all if it weren’t for the accident by which he happened to be born; how impossibly unlikely was that? And that Stan was born too? (And, obviously, that all Rory and Stan’s ancestors were born and survived to the age of reproducing?)

Except that in this sort of question we seem to take it for granted, for instance, that all Stan’s ancestors would have existed and done their part to bring Stan about. And we’d take it for granted that the pasture should exist, rather than be a farmhouse or an outlet mall or a rocket base. To come up with odds that mean anything we have to work out what the probability space of all possible relevant outcomes is, and what the set of all conditions that satisfy the concept of “we’re stuck here together in this pasture” is.

Mark Pett’s Lucky Cow (November 28) brings up sudoku puzzles and the mystery of where they come from, exactly. This prompted me to wonder about the mechanics of making sudoku puzzles, and while it certainly seems they could be automated pretty well, making your own amounts to just writing the digits one through nine nine times over into a valid grid, and then blanking out squares until the puzzle is hard. A casual search of the net suggests the most popular way of making sure you haven’t blanked out squares so that the puzzle stops being uniquely solvable (that is, so that two or more completed grids fit the revealed information) is to let an automated sudoku solver tell you. That’s true enough but I don’t see any mention of any algorithms by which one could check if you’re blanking out a solution-foiling set of squares. I don’t know whether that reflects there being no algorithm for this that’s more efficient than “try out possible solutions”, or just no algorithm being more practical. It’s relatively easy to make a computer try out possible solutions, after all.
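
Here’s a minimal sketch of that uniqueness check, the sort of thing the automated solvers are doing behind the scenes: count completions of the blanked-out grid by backtracking, and give up as soon as a second one turns up:

```python
# The grid is a list of 81 ints, row by row, with 0 standing for a blank.
def candidates(grid, pos):
    """Digits that can legally go in the blank at index pos."""
    row, col = divmod(pos, 9)
    box = (row // 3) * 27 + (col // 3) * 3   # index of the box's top-left cell
    used = {grid[row * 9 + c] for c in range(9)}
    used |= {grid[r * 9 + col] for r in range(9)}
    used |= {grid[box + r * 9 + c] for r in range(3) for c in range(3)}
    return [d for d in range(1, 10) if d not in used]

def count_solutions(grid, limit=2):
    """Count completions of the grid by backtracking, stopping at `limit`."""
    try:
        pos = grid.index(0)
    except ValueError:
        return 1                  # no blanks left: exactly one finished grid
    total = 0
    for d in candidates(grid, pos):
        grid[pos] = d
        total += count_solutions(grid, limit - total)
        grid[pos] = 0
        if total >= limit:
            break
    return total

# Keep blanking out squares only while count_solutions(puzzle) stays at 1.
```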

A paper published by Mária Ercsey-Ravasz and Zoltán Toroczkai in Nature Scientific Reports in 2012 describes the recasting of the problem of solving sudoku into a deterministic, dynamical system, and matches the difficulty of a sudoku puzzle to chaotic behavior of that system. (If you’re looking at the article and despairing, don’t worry. Go to the ‘Puzzle hardness as transient chaotic dynamics’ section, and read the parts of the sentence that aren’t technical terms.) Ercsey-Ravasz and Toroczkai point out their chaos-theory-based definition of hardness matches pretty well, though not perfectly, the estimates of difficulty provided by sudoku editors and solvers. The most interesting (to me) result they report is that sudoku puzzles which give you the minimum information — 17 or 18 non-blank numbers to start — are generally not the hardest puzzles. 21 or 22 non-blank numbers seem to match the hardest of puzzles, though they point out that difficulty has got to depend on the positioning of the non-blank numbers and not just how many there are.

Echoing “Fourier Echoes Euler”


The above tweet is from the Analysis Fact of The Day feed, which for the 5th had a neat little bit taken from Joseph Fourier’s The Analytic Theory Of Heat, published 1822. Fourier was trying to at least describe the way heat moves through objects, and along the way he developed the things called Fourier series and the field called Fourier Analysis. In this we treat functions — even ones we don’t yet know — as sums of sinusoidal waves, overlapping and interfering with and reinforcing one another.

If we have infinitely many of these waves we can approximate … well, not every function, but surprisingly close to all the functions that might represent real-world affairs, and surprisingly near all the functions we’re interested in anyway. The advantage of representing functions as sums of sinusoidal waves is that sinusoidal waves are very easy to differentiate and integrate, and to add together those differentials and integrals, and that means we can turn problems that are extremely hard into problems that may be longer, but are made up of much easier parts. Since usually it’s better to do something that’s got many easy steps than it is to do something with a few hard ones, Fourier series and Fourier analysis are some of the things you get to know well as you become a mathematician.

The “Fourier Echoes Euler” page linked here shows simply one nice, sweet result that Fourier proved in that major work. It demonstrates what you get if, for absolutely any real number x (well, any x that isn’t an odd multiple of π, where the sum fails to converge), you add together \cos\left(x\right) - \frac12 \cos\left(2x\right) + \frac13 \cos\left(3x\right) - \frac14 \cos\left(4x\right) + \frac15 \cos\left(5x\right) - \cdots et cetera. There’s one step in it — “integration by parts” — that you’ll have to remember from freshman calculus, or maybe I’ll get around to explaining that someday, but I would expect most folks reading this far could follow this neat result.
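
If you’d like a numerical taste without the integration by parts, here’s a quick check of the partial sums. The closed form I’m comparing against — that the sum works out to ln(2 cos(x/2)) when x is between -π and π — is a standard result I’m supplying myself, not something the linked page asks you to take on faith:

```python
import math

def partial_sum(x, n_terms):
    """cos(x) - cos(2x)/2 + cos(3x)/3 - ... out to n_terms terms."""
    return sum((-1) ** (k + 1) * math.cos(k * x) / k for k in range(1, n_terms + 1))

for x in (0.5, 1.0, 2.0):
    print(x, partial_sum(x, 200_000), math.log(2 * math.cos(x / 2)))
# The two columns agree to several decimal places; the series converges
# slowly, so more terms buy only gradually better agreement.
```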

The Geometry of Thermodynamics (Part 2)


I should mention — I should have mentioned earlier, but it has been a busy week — that CarnotCycle has published the second part of “The Geometry of Thermodynamics”. This is a bit of a tougher read than the first part, admittedly, but it’s still worth reading. The essay reviews how James Clerk Maxwell — yes, that Maxwell — developed the thermodynamic relationships that would have made him famous in physics if it weren’t for his work in electromagnetism that ultimately overthrew the Newtonian paradigm of space and time.

The ingenious thing is that the best part of this work is done on geometric grounds, on thinking of the spatial relationships between quantities that describe how a system moves heat around. “Spatial” may seem a strange word to describe this since we’re talking about things that don’t have any direct physical presence, like “temperature” and “entropy”. But if you draw pictures of how these quantities relate to one another, you have curves and parallelograms and figures that follow the same rules of how things fit together that you’re used to from ordinary everyday objects.

A wonderful side point is a touch of human fallibility from a great mind: in working out his relations, Maxwell misunderstood just what was meant by “entropy”, and needed correction by the at-least-as-great Josiah Willard Gibbs. Many people don’t quite know what to make of entropy even today, and Maxwell was working when the word was barely a generation away from being coined, so it’s quite reasonable he might not understand a term that was relatively new and still getting its precise definition. It’s surprising nevertheless to see.
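
For a concrete check of one of the relations quoted in the excerpt below — the one that comes from dA — here’s a minimal symbolic computation using the textbook ideal-gas forms for pressure and entropy. The example is my own choice, not CarnotCycle’s:

```python
import sympy as sp

T, V = sp.symbols('T V', positive=True)
R, c_v = sp.symbols('R c_v', positive=True)

# One mole of ideal gas: pressure, and a standard form of the entropy
# (additive constant dropped, since it vanishes on differentiating).
P = R * T / V
S = c_v * sp.log(T) + R * sp.log(V)

# Maxwell relation from dA = -S dT - P dV: (dS/dV) at constant T = (dP/dT) at constant V.
lhs = sp.diff(S, V)
rhs = sp.diff(P, T)
print(lhs, rhs, sp.simplify(lhs - rhs) == 0)   # both R/V, so the difference is 0
```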

carnotcycle

James Clerk Maxwell and the geometrical figure with which he proved his famous thermodynamic relations.

Historical background

Every student of thermodynamics sooner or later encounters the Maxwell relations – an extremely useful set of statements of equality among partial derivatives, principally involving the state variables P, V, T and S. They are general thermodynamic relations valid for all systems.

The four relations originally stated by Maxwell are easily derived from the (exact) differential relations of the thermodynamic potentials:

dU = T\,dS - P\,dV \quad\Rightarrow\quad \left(\frac{\partial T}{\partial V}\right)_S = -\left(\frac{\partial P}{\partial S}\right)_V
dH = T\,dS + V\,dP \quad\Rightarrow\quad \left(\frac{\partial T}{\partial P}\right)_S = \left(\frac{\partial V}{\partial S}\right)_P
dG = -S\,dT + V\,dP \quad\Rightarrow\quad -\left(\frac{\partial S}{\partial P}\right)_T = \left(\frac{\partial V}{\partial T}\right)_P
dA = -S\,dT - P\,dV \quad\Rightarrow\quad \left(\frac{\partial S}{\partial V}\right)_T = \left(\frac{\partial P}{\partial T}\right)_V

This is how we obtain these Maxwell relations today, but it disguises the history of their discovery. The thermodynamic state functions H, G and A were yet to…

View original post 1,262 more words

The Geometry of Thermodynamics (Part 1)


I should mention that Peter Mander’s Carnot Cycle blog has a fine entry, “The Geometry of Thermodynamics (Part I)”, which admittedly opens with a diagram that looks like the sort of thing you create when you want science diagrams to look horrifying. That’s a bit of flavor.

Mander writes about part of what made J Willard Gibbs probably the greatest theoretical physicist that the United States has yet produced: Gibbs put much of thermodynamics into a logically neat system, the kind we still basically use today, and, better still, saw how to represent and understand it as a matter of surface geometries. This is an abstract kind of surface — looking at the curve traced out by, say, mapping the energy of a gas against its volume, or its temperature versus its entropy — but if you can accept the idea that we can draw curves representing these quantities, then you get to use your understanding of how solid objects look and feel (and solid models really were made — James Clerk Maxwell, of Maxwell’s Equations fame, even sculpted some).

This is a reblogging of only part one, although as Mander’s on summer holiday you haven’t missed part two.

carnotcycle


Volume One of the Scientific Papers of J. Willard Gibbs, published posthumously in 1906, is devoted to Thermodynamics. Chief among its content is the hugely long and desperately difficult “On the equilibrium of heterogeneous substances (1876, 1878)”, with which Gibbs single-handedly laid the theoretical foundations of chemical thermodynamics.

In contrast to James Clerk Maxwell’s textbook Theory of Heat (1871), which uses no calculus at all and hardly any algebra, preferring geometry as the means of demonstrating relationships between quantities, Gibbs’ magnum opus is stuffed with differential equations. Turning the pages of this calculus-laden work, one could easily be drawn to the conclusion that the writer was not a visual thinker.

But in Gibbs’ case, this is far from the truth.

The first two papers on thermodynamics that Gibbs published, in 1873, were in fact visually-led. Paper I deals with indicator diagrams and their comparative properties, while Paper II

View original post 1,490 more words

The Math Blog Statistics, May 2014


And on to the tracking of how my little mathematics blog is doing. As readership goes, things are looking good — my highest number of page views since January 2013, and third-highest ever, and also my highest number of unique viewers since January 2013 (unique viewer counts aren’t provided for before December 2012, so who knows what happened before that). The total number of page views rose from 565 in April to 751, and the number of unique visitors rose from 238 to 315. This is a remarkably steady number of views per visitor, though — 2.37 rising to 2.38, as if that were a significant difference. I passed visitor number 15,000 somewhere around the 5th of May, and at number 15,682 right now that puts me on track to hit 16,000 somewhere around the 13th.

As with April, the blog’s felt pretty good to me. I think I’m hitting a pretty good mixture of writing about stuff that interests me and finding readers who’re interested to read it. I’m hoping I can keep that up another month.

The most popular articles of the month — well, I suspect someone was archive-binging on the mathematics comics ones because, here goes:

  1. How Many Trapezoids I Can Draw, which will be my memorial
  2. Reading the Comics, May 13, 2014: Good Class Problems Edition, which was a tiny bit more popular than …
  3. Reading the Comics, May 26, 2014: Definitions Edition, the last big entry in the Math Comics of May sequence.
  4. Some Things About Joseph Nebus is just my little biographic page and I have no idea why anyone’s even looking at that.
  5. Reading the Comics, May 18, 2014: Pop Math of the 80s Edition is back on the mathematics comics, as these things should be,
  6. Reading the Comics, May 4, 2014: Summing the Series Edition and what the heck, let’s just mention this one too.
  7. The ideal gas equation is my heads-up to a good writer’s writings.
  8. Where Does A Plane Touch A Sphere? is a nicely popular bit motivated by the realization that a tangent point is an important calculus concept and nevertheless a subtler thing than one might realize.

I think without actually checking this is the first month I’ve noticed with seven countries sending me twenty or more visitors each — the United States (438), Canada (39), Australia (38), Sweden (31), Denmark (21), and Singapore and the United Kingdom (20 each). Austria came in at 19, too. Seventeen countries sent me one visitor each: Antigua and Barbuda, Colombia, Guernsey, Hong Kong, Ireland, Italy, Jamaica, Japan, Kuwait, Lebanon, Mexico, Morocco, Norway, Peru, Poland, Swaziland, and Switzerland. Morocco’s the only one to have been there last month.

And while I lack for search term poetry, some of the interesting searches that brought people here include:

  • working mathematically comics
  • https://nebusresearch.wordpress.com/ [ They’ve come to the right place, then. ]
  • how do you say 1898600000000000000000000000 in words [ I never do. ]
  • two trapezoids make a [ This is kind of beautiful as it is. ]
  • when you take a trapeziod apart how many trangles will you have?
  • -7/11,5/-8which is greter rational number and why
  • origin is the gateway to your entire gaming universe. [ This again is rather beautiful. ]
  • venn diagram on cartoons and amusement parks [ Beats me. ]

The ideal gas equation


I did want to mention that CarnotCycle’s big entry for the month is “The Ideal Gas Equation”. The Ideal Gas equation is one of the more famous equations that isn’t F = ma or E = mc², which I admit isn’t a group of really famous equations; but, at the very least, its content is familiar enough.

If you keep a gas at constant temperature and increase the pressure on it, its volume decreases, and vice-versa; that relationship is Boyle’s Law. If you keep a gas at constant volume and decrease its pressure, its temperature decreases, and vice-versa; that’s Gay-Lussac’s Law. And Charles’s Law says that if a gas is kept at constant pressure and its temperature increases, then its volume increases, and vice-versa. (Each of these is probably named for the wrong person, because they always are.) The Ideal Gas equation combines all these relationships into one neat, easily understood package.
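
In case the package is easier to see with numbers in it, here’s a toy calculation with the combined equation PV = nRT; the particular values are just illustrative:

```python
R = 8.314  # gas constant, J / (mol K)

def pressure(n_mol, T_kelvin, V_m3):
    """Pressure, in pascals, of an ideal gas: P = n R T / V."""
    return n_mol * R * T_kelvin / V_m3

# One mole near room temperature in about 24.8 liters is roughly one atmosphere.
p1 = pressure(1.0, 298.0, 0.0248)
# Halve the volume at the same temperature and the pressure doubles (Boyle's Law).
p2 = pressure(1.0, 298.0, 0.0124)
print(p1, p2, p2 / p1)   # about 99,900 Pa, about 199,800 Pa, and exactly 2.0
```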

Peter Mander describes some of the history of these concepts and equations, and how they came together, along with the interesting way that they connect to the absolute temperature scale and to the idea of absolute zero. Absolute temperatures — Kelvin — and absolute zero are familiar enough ideas these days that it’s difficult to remember they were ever new and controversial and intellectually challenging ideas to develop. I hope you enjoy.

carnotcycle


If you received formal tuition in physical chemistry at school, then it’s likely that among the first things you learned were the 17th/18th century gas laws of Mariotte and Gay-Lussac (Boyle and Charles in the English-speaking world) and the equation that expresses them: PV = kT.

It may be that the historical aspects of what is now known as the ideal (perfect) gas equation were not covered as part of your science education, in which case you may be surprised to learn that it took 174 years to advance from the pressure-volume law PV = k to the combined gas law PV = kT.


The lengthy timescale indicates that putting together closely associated observations wasn’t regarded as a must-do in this particular era of scientific enquiry. The French physicist and mining engineer Émile Clapeyron eventually created the combined gas equation, not for its own sake, but because he needed an…

View original post 1,628 more words

What Is True Almost Everywhere?


I was reading a thermodynamics book (C Truesdell and S Bharatha’s The Concepts and Logic of Classical Thermodynamics as a Theory of Heat Engines, which is a fascinating read, for the field, and includes a number of entertaining, for the field, snipes at the stuff textbook writers put in because they’re just passing on stuff without rethinking it carefully), and ran across a couple proofs which mentioned equations that were true “almost everywhere”. That’s a construction it might be surprising to know even exists in mathematics, so, let me take a couple hundred words to talk about it.

The idea isn’t really exotic. You’ve seen a kind of version of it when you see an equation containing the note that there’s an exception, such as, \frac{\left(x - 1\right)^2}{\left(x - 1\right)} = x - 1 \mbox{ for } x \neq 1 . If the exceptions are tedious to list — because there are many of them to write down, or because they’re wordy to describe (the thermodynamics book mentioned the exceptions were where a particular set of conditions on several differential equations happened simultaneously, if it ever happened) — and if they’re unlikely to come up, then, we might just write whatever it is we want to say and add an “almost everywhere”, or for shorthand, put an “ae” after the line. This “almost everywhere” will, except in freak cases, propagate through the rest of the proof, but I only see people writing that when they’re students working through the concept. In publications, the “almost everywhere” gets put in where the condition first stops being true everywhere-everywhere and becomes only almost-everywhere, and taken as read after that.

I introduced this with an equation, but it can apply to any relationship: something is greater than something else, something is less than or equal to something else, even something is not equal to something else. (After all, “x \neq -x” is true almost everywhere, but there is that nagging exception.) A mathematical proof is normally about things which are true. Whether one thing is equal to another is often incidental to that.

What’s meant by “unlikely to come up” is actually rigorously defined, which is why we can get away with this. It’s otherwise a bit daft to think we can just talk about things that are true except where they aren’t and not even post warnings about where they’re not true. If we say something is true “almost everywhere” on the real number line, for example, that means that the set of exceptions has a total length of zero. So if the only exception is where x equals 1, sure enough, that’s a set with no length. Similarly if the exceptions are where x equals positive 1 or negative 1, that’s still a total length of zero. But if the set of exceptions were all values of x from 0 to 4, well, that’s a set of total length 4 and we can’t say “almost everywhere” for that.
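
A rough way to feel out what “total length of the exception set” buys you is to sample numbers at random and see how often you land on an exception. The interval and the sample count below are arbitrary choices, there only for illustration:

```python
import random

N = 1_000_000
samples = [random.uniform(0.0, 5.0) for _ in range(N)]

# Exception set {1}: a single point, total length zero.
hits_point = sum(1 for x in samples if x == 1.0)

# Exception set [0, 4]: total length 4 out of the interval's length 5.
hits_interval = sum(1 for x in samples if 0.0 <= x <= 4.0)

print(hits_point / N)      # essentially always 0.0
print(hits_interval / N)   # close to 0.8
```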

This is all quite like saying that it can’t happen that if you flip a fair coin infinitely many times it will come up tails every single time. It won’t, even though properly speaking there’s no reason that it couldn’t. If something is true almost everywhere, then your chance of picking an exception out of all the possibilities is about like your chance of flipping that fair coin and getting tails infinitely many times over.

The Liquefaction of Gases – Part II


The CarnotCycle blog has a continuation of last month’s The Liquefaction of Gases, as you might expect, named The Liquefaction of Gases, Part II, and it’s another intriguing piece. The story here is about how the theory of cooling, and of phase changes — under what conditions gases will turn into liquids — was developed. There’s a fair bit of mathematics involved, although most of the important work is in polynomials. If you remember in algebra (or in pre-algebra) drawing curves for functions that had x³ in them, and in finding how they sometimes had one and sometimes had three real roots, then you’re well on your way to understanding the work which earned Johannes van der Waals the 1910 Nobel Prize in Physics.
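
If you’d like to see those one-or-three real roots appear, the van der Waals equation rearranges into a cubic in the molar volume, and a numerical root-finder does the rest. The constants below are rough textbook values for carbon dioxide; treat them, and the whole sketch, as illustrative rather than authoritative:

```python
import numpy as np

# Van der Waals: (P + a/V^2)(V - b) = R*T, which rearranges to the cubic
#   P*V^3 - (P*b + R*T)*V^2 + a*V - a*b = 0   in the molar volume V.
R = 0.08314           # L·bar / (mol·K)
a, b = 3.64, 0.04267  # roughly right for carbon dioxide, in L, bar, mol units

def molar_volumes(P, T):
    """Real roots of the van der Waals cubic at pressure P (bar), temperature T (K)."""
    roots = np.roots([P, -(P * b + R * T), a, -a * b])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

print(molar_volumes(50.0, 350.0))   # above the critical temperature: one real root
print(molar_volumes(50.0, 280.0))   # below it: the cubic picks up three real roots
```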

carnotcycle

Future Nobel Prize winners both. Kamerlingh Onnes and Johannes van der Waals in 1908.

On Friday 10 July 1908, at Leiden in the Netherlands, Kamerlingh Onnes succeeded in liquefying the one remaining gas previously thought to be non-condensable – helium – using a sequential Joule-Thomson cooling technique to drive the temperature down to just 4 degrees above absolute zero. The event brought to a conclusion the race to liquefy the so-called permanent gases, following the revelation that all gases have a critical temperature below which they must be cooled before liquefaction is possible.

This crucial fact was established by Dr. Thomas Andrews, professor of chemistry at Queen’s College Belfast, in his groundbreaking study of the liquefaction of carbon dioxide, “On the Continuity of the Gaseous and Liquid States of Matter”, published in the Philosophical Transactions of the Royal Society of London in 1869.

As described in Part I of…

View original post 2,047 more words

The Liquefaction of Gases – Part I


I know, or at least I’m fairly confident, there’s a couple readers here who like deeper mathematical subjects. It’s fine to come up with simulated Price is Right games or figure out what grades one needs to pass the course, but those aren’t particularly challenging subjects.

But those are hard to write, so, while I stall, let me point you to CarnotCycle, which has a nice historical article about the problem of liquefaction of gases, a problem that’s not just steeped in thermodynamics but in engineering. If you’re a little familiar with thermodynamics you likely won’t be surprised to see names like William Thomson, James Joule, or Willard Gibbs turn up. I was surprised to see in the additional reading T O’Conor Sloane show up; science fiction fans might vaguely remember that name, as he was the editor of Amazing Stories for most of the 1930s, in between Hugo Gernsback and Raymond Palmer. It’s often a surprising world.

carnotcycle

On Monday 3 December 1877, the French Academy of Sciences received a letter from Louis Cailletet, a 45 year-old physicist from Châtillon-sur-Seine. The letter stated that Cailletet had succeeded in liquefying both carbon monoxide and oxygen.

Liquefaction as such was nothing new to 19th century science, it should be said. The real news value of Cailletet’s announcement was that he had liquefied two gases previously considered ‘non condensable’.

While a number of gases such as chlorine, carbon dioxide, sulfur dioxide, hydrogen sulfide, ethylene and ammonia had been liquefied by the simultaneous application of pressure and cooling, the principal gases comprising air – nitrogen and oxygen – together with carbon monoxide, nitric oxide, hydrogen and helium, had stubbornly refused to liquefy, despite the use of pressures up to 3000 atmospheres. By the mid-1800s, the general opinion was that these gases could not be converted into liquids under any circumstances.

But in…

View original post 1,350 more words

CarnotCycle on the Gibbs-Helmholtz Equation


I’m a touch late discussing this and can only plead that it has been December after all. Over on the CarnotCycle blog — which is focused on thermodynamics in a way I rather admire — was recently a discussion of the Gibbs-Helmholtz Equation, which turns up in thermodynamics classes, and goes a bit better than the class I remember by showing a couple examples of actually using it to understand how chemistry works. Well, it’s so easy in a class like this to get busy working with symbols and forget that thermodynamics is a supremely practical science [1].

The Gibbs-Helmholtz Equation — named for Josiah Willard Gibbs and for Hermann von Helmholtz, both of whom developed it independently (Helmholtz first) — comes in a couple of different forms, which CarnotCycle describes. All these different forms are meant to describe whether a particular change in a system is likely to happen. CarnotCycle’s discussion gives a couple of examples of actually working out the numbers, including for the Haber process, which I don’t remember reading about in calculative detail before. So I wanted to recommend it as a bit of practical mathematics or physics.
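
One pleasant thing about the equation is that, in the form d(G/T)/dT = -H/T² at constant pressure, it follows from nothing more than the definitions of entropy and enthalpy. A minimal symbolic check, assuming only those definitions:

```python
import sympy as sp

T = sp.symbols('T', positive=True)
G = sp.Function('G')(T)      # Gibbs free energy as some unspecified function of T, at fixed pressure

S = -sp.diff(G, T)           # entropy:  S = -(dG/dT) at constant P
H = G + T * S                # enthalpy: H = G + T*S

# Gibbs-Helmholtz says d(G/T)/dT should equal -H/T^2; their sum should vanish.
print(sp.simplify(sp.diff(G / T, T) + H / T**2))   # prints 0
```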

[1] I think it was Stephen Brush pointed out many of the earliest papers in thermodynamics appeared in railroad industry journals, because the problems of efficiently getting power from engines, and of how materials change when they get below freezing, are critically important to turning railroads from experimental contraptions into a productive industry. The observation might not be original to him. The observation also might have been Wolfgang Schivelbusch’s instead.

My July 2013 Statistics


As I’ve started keeping track of my blog statistics here where it’s all public information, let me continue.

WordPress says that in July 2013 I had 341 pages read, which is down rather catastrophically from the June score of 713. The number of distinct visitors also dropped, though less alarmingly, from 246 down to 156; this also implies the number of pages each visitor viewed dropped from 2.90 down to 2.19. That’s still the second-highest number of pages-per-visitor that I’ve had recorded since WordPress started sharing that information with me, so, I’m going to suppose that the combination of school letting out (so fewer people are looking for help about trapezoids) and my relatively fewer posts this month hit me. There are presently 215 people following the blog, if my Twitter followers are counted among them. They hear about new posts, anyway.

My most popular posts over the past 30 days have been:

  1. John Dee, the ‘Mathematicall Praeface’ and the English School of Mathematics, which is primarily a pointer to the excellent mathematics history blog The Renaissance Mathematicus, and about the really quite fascinating Doctor John Dee, advisor to England’s Queen Elizabeth I.
  2. Counting From 52 To 11,108, some further work from Professor Inder J Taneja on a lovely bit of recreational mathematics. (Professor Taneja even pops in for the comments.)
  3. Geometry The Old-Fashioned Way, pointing to a fun little web page in which you can work out geometric constructions using straightedge and compass live and direct on the web.
  4. Reading the Comics, July 5, 2013, and finally; I was wondering if people actually still liked these posts.
  5. On Exact And Inexact Differentials, another “reblog” style pointer, this time to Carnot Cycle, a thermodynamics-oriented blog.
  6. And The $64 Question Was, in which I learned something about a classic game show and started to think about how it might be used educationally.

My all-time most popular post remains How Many Trapezoids I Can Draw, because I think there are people out there who worry about how many different kinds of trapezoids there are. I hope I can bring a little peace to their minds. (I make the answer out at six.)

The countries sending me the most viewers the past month have been the United States (165), then Denmark (32), Australia (24), India (18), and the United Kingdom and Brazil (12 each). Sorry, Canada (11). Sending me a single viewer each were Estonia, Slovenia, South Africa, the Netherlands, Argentina, Pakistan, Angola, France, and Switzerland. Argentina and Slovenia did the same for me last month too.

On exact and inexact differentials


The CarnotCycle blog recently posted a nice little article titled “On Exact And Inexact Differentials”, and I’m bringing it to people’s attention because it’s the sort of thing which would have been extremely useful to me at a time when I was reading calculus-heavy texts that just assumed you knew what exact differentials were, without being aware that you probably missed the day in intro differential equations when they were explained. (That was by far my worst performance in a class. I have no excuse.)

So this isn’t going to be the most accessible article you run across on my blog here, until I finish making the switch to a full-on advanced statistical mechanics course. But if you start getting into, particularly, thermodynamics and wonder where this particular and slightly funky string of symbols comes from, this is a nice little warmup. For extra help, CarnotCycle also explains what makes something an inexact differential.
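
To give away the punchline in symbols: a differential M dT + N dV is exact when ∂M/∂V equals ∂N/∂T. The classroom example below — heat is an inexact differential for an ideal gas, while entropy is exact — is my standard-issue illustration, not something lifted from the CarnotCycle post:

```python
import sympy as sp

T, V = sp.symbols('T V', positive=True)
R, c_v = sp.symbols('R c_v', positive=True)

# For one mole of ideal gas the heat differential is dq = c_v dT + (R T / V) dV.
M, N = c_v, R * T / V
print(sp.diff(M, V), sp.diff(N, T))    # 0 versus R/V: unequal, so dq is inexact

# Divide through by T and you get dS = (c_v / T) dT + (R / V) dV instead.
M2, N2 = c_v / T, R / V
print(sp.diff(M2, V), sp.diff(N2, T))  # both 0: equal, so S is a genuine state function
```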

carnotcycle

From the search term phrases that show up on this blog’s stats, CarnotCycle detects that a significant segment of visitors are studying foundation level thermodynamics  at colleges and universities around the world. So what better than a post that tackles that favorite test topic – exact and inexact differentials.

When I was an undergraduate, back in the time of Noah, we were first taught the visual approach to these things. Later we dispensed with diagrams and got our answers purely through the operations of calculus, but either approach is equally instructive. CarnotCycle herewith presents them both.

– – – –

The visual approach

Ok, let’s start off down the visual track by contemplating the following pair of pressure-volume diagrams:


The points A and B have identical coordinates on both diagrams, with A and B respectively representing the initial and final states of a closed PVT system, such as an…

View original post 782 more words

Reading the Comics, April 28, 2013


The flow of mathematics-themed comic strips almost dried up in April. I’m going to assume this reflects the kids of the cartoonists being on Spring Break, and teachers not placing exams immediately after the break, in early to mid-March, and that we were just seeing the lag from that. I’m joking a little bit, but surely there’s some explanation for the rash of did-you-get-your-taxes-done comics appearing two weeks after April 15, and I’m fairly sure it isn’t the high regard United States newspaper syndicates have for their Canadian readership.

Dave Whamond’s Reality Check (April 8) uses the “infinity” symbol and tossed pizza dough together. The ∞ symbol, I understand, is credited to the English mathematician John Wallis, who introduced it in the Treatise on the Conic Sections, a text that made clearer how conic sections could be described algebraically. Wikipedia claims that Wallis had the idea that negative numbers were, rather than less than zero, actually greater than infinity, which we’d regard as a quirky interpretation, but (if I can verify it) it makes for an interesting point in figuring out how long people took to understand negative numbers like we believe we do today.

Jonathan Lemon’s Rabbits Against Magic (April 9) does a playing-the-odds joke, in this case about the effectiveness of alligator repellent. The joke in this sort of thing comes down to the assumption of independence of events — whether the chance that a thing works this time is the same as the chance of it working last time — and a bit of the idea that you find the probability of something working by trying it many times and counting the successes. Trusting in the Law of Large Numbers (and the independence of the events), this empirically-generated probability can be expected to match the actual probability, once you spend enough time thinking about what you mean by a term like “the actual probability”.
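
Here’s a sketch of that empirical-probability idea, with an arbitrary stand-in for the chance the repellent works and with independence between trials baked in:

```python
import random

p_true = 0.95   # made-up chance that the repellent works on any one try

def empirical(n_trials):
    """Fraction of successes in n_trials independent tries."""
    hits = sum(1 for _ in range(n_trials) if random.random() < p_true)
    return hits / n_trials

for n in (10, 100, 10_000, 1_000_000):
    print(n, empirical(n))   # drifts toward 0.95 as n grows, per the Law of Large Numbers
```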

Continue reading “Reading the Comics, April 28, 2013”

Gibbs’ Elementary Principles in Statistical Mechanics


I had another discovery from the collection of books at archive.org, now that I thought to look for it: Josiah Willard Gibbs’s Elementary Principles in Statistical Mechanics, originally published in 1902 and reprinted 1960 by Dover, which gives you a taste of Gibbs’s writings by its extended title, Developed With Especial Reference To The Rational Foundation of Thermodynamics. Gibbs was an astounding figure even in a field that seems to draw out astounding figures, and he’s a good candidate for the title of “greatest scientist to come from the United States”.

He lived in walking distance of Yale (where his father and then he taught) nearly his whole life, working nearly isolated but with an astounding talent for organizing the many complex and confused ideas in the study of thermodynamics into a neat, logical science. Some great scientists have the knack for finding important work to do; some great scientists have the knack for finding ways to express work so the masses can understand it. Gibbs … well, perhaps it’s a bit much to say the masses understand it, but the language of modern thermodynamics and of quantum mechanics is very much the language he spoke a century-plus ago.

My understanding is he published almost all his work in the journal Transactions of the Connecticut Academy of Arts and Sciences, in a show of hometown pride which probably left the editors baffled but, I suppose, happy to print something this fellow was very sure about.

To give some idea why they might have found him baffling, though, consider the first paragraph of Chapter 1, which is accurate and certainly economical:

We shall use Hamilton’s form of the equations of motion for a system of n degrees of freedom, writing q_1, \cdots q_n for the (generalized) coördinates, \dot{q}_1, \cdots \dot{q}_n for the (generalized) velocities, and

F_1 dq_1 + F_2 dq_2 + \cdots + F_n dq_n [1]

for the moment of the forces. We shall call the quantities F_1, \cdots F_n the (generalized) forces, and the quantities p_1 \cdots p_n , defined by the equations

p_1 = \frac{d\epsilon_p}{d\dot{q}_1}, p_2 = \frac{d\epsilon_p}{d\dot{q}_2}, etc., [2]

where \epsilon_p denotes the kinetic energy of the system, the (generalized) momenta. The kinetic energy is here regarded as a function of the velocities and coördinates. We shall usually regard it as a function of the momenta and coördinates, and on this account we denote it by \epsilon_p . This will not prevent us from occasionally using formulas like [2], where it is sufficiently evident the kinetic energy is regarded as function of the \dot{q}‘s and q‘s. But in expressions like d\epsilon_p/dq_1 , where the denominator does not determine the question, the kinetic energy is always to be treated in the differentiation as function of the p’s and q’s.

(There’s also a footnote I skipped because I don’t know an elegant way to include it in WordPress.) Your friend the physics major did not understand that on first read any more than you did, although she probably got it after going back and reading it a touch more slowly. And his writing is just like that: 240 pages and I’m not sure I could say any of them could be appreciably tightened.
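
If the notation goes down easier with a concrete case, here is Gibbs’s definition [2] applied to a single particle moving in a plane, described in polar coordinates — an example of my choosing, not his:

```python
import sympy as sp

m, r, theta, rdot, thetadot = sp.symbols('m r theta rdot thetadot', real=True)

# Kinetic energy of the particle, written as a function of the generalized
# coordinates (r, theta) and the generalized velocities (rdot, thetadot).
eps = sp.Rational(1, 2) * m * (rdot**2 + r**2 * thetadot**2)

# Equations [2]: the generalized momenta are derivatives of the kinetic
# energy with respect to the corresponding generalized velocities.
p_r = sp.diff(eps, rdot)          # m*rdot
p_theta = sp.diff(eps, thetadot)  # m*r**2*thetadot
print(p_r, p_theta)
```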


Also, I note I finally reached 9,000 page views! Thank you; I couldn’t have done it without at least twenty of you, since I’m pretty sure I’ve obsessively clicked on my own pages at minimum 8,979 times.

Reblog: Mixed-Up Views Of Entropy


The blog CarnotCycle, which tries to explain thermodynamics — a noble goal, since thermodynamics is a really big, really important, and criminally unpopularized part of science and mathematics — here starts from an “Unpublished Fragment” by the great Josiah Willard Gibbs to talk about entropy.

Gibbs — a strong candidate for the greatest scientist the United States ever produced, complete with fascinating biographical quirks to make him seem accessibly peculiar — gave to statistical mechanics much of the mathematical form and power that it now has. Gibbs had planned to write something about “On entropy as mixed-up-ness”, which certainly puts in one word what people think of entropy as being. The concept is more powerful and more subtle than that, though, and CarnotCycle talks about some of the subtleties.

carnotcycle


Tucked away at the back of Volume One of The Scientific Papers of J. Willard Gibbs, is a brief chapter headed ‘Unpublished Fragments’. It contains a list of nine subject headings for a supplement that Professor Gibbs was planning to write to his famous paper “On the Equilibrium of Heterogeneous Substances”. Sadly, he completed his notes for only two subjects before his death in April 1903, so we will never know what he had in mind to write about the sixth subject in the list: On entropy as mixed-up-ness.

Mixed-up-ness. It’s a catchy phrase, with an easy-to-grasp quality that brings entropy within the compass of minds less given to abstraction. That’s no bad thing, but without Gibbs’ guidance as to exactly what he meant by it, mixed-up-ness has taken on a life of its own and has led to entropy acquiring the derivative associations of chaos and disorder…

View original post 627 more words
