Monte Carlo pioneer Arianna Wright Rosenbluth dead at 93


The New York Times carried an obituary for Dr Arianna Wright Rosenbluth. She died in December from Covid-19 and the United States’s mass-murderous handling of Covid-19. And she’s a person I realize I knew nothing substantial about. I had known her name, but not anything more. This is a chance to correct that a bit.

Rosenbluth held a PhD in physics (and was an Olympics-qualified fencer). Her postdoctoral work was with the Atomic Energy Commission, bringing her to a position at Los Alamos National Laboratory in the early 1950s. And to a moment in computer science that touches very many people’s work, my own included. This is what we call Metropolis-Hastings Markov Chain Monte Carlo.

Monte Carlo methods are numerical techniques that rely on randomness. The name references the casinos. Markov Chain refers to techniques that create a sequence of things. Each thing exists in some set of possibilities. If we’re talking about Markov Chain Monte Carlo this is usually an enormous set of possibilities, too many to deal with by hand, except for little tutorial problems. The trick is that what the next item in the sequence is depends on what the current item is, and nothing more. This may sound implausible — when does anything in the real world not depend on its history? — but the technique works well regardless. Metropolis-Hastings is a way of finding states that meet some condition well. Usually this is a maximum, or minimum, of some interesting property. The Metropolis-Hastings rule makes the chance of going to an improved state, one with more of whatever property we like, equal to 1, a certainty. The chance of going to a worsened state, with less of the property, is not zero. The worse the new state is, the less likely it is, but it’s never zero. The result is a sequence of states which, most of the time, improve whatever it is you’re looking for. It sometimes tries out some worse fits, in the hopes that this leads us to a better fit, for the same reason sometimes you have to go downhill to reach a larger hill. The technique works quite well at finding approximately-optimum states when it’s hard to find the best state directly but easy to judge which of two states is better. Also when you can have a computer do a lot of calculations, because it needs a lot of calculations.
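To make that concrete, here’s a minimal sketch in Python of the Metropolis-style rule described above. It’s my own toy illustration, not the 1953 algorithm as published and certainly not Rosenbluth’s MANIAC code; the function name, the step size, and the ‘temperature’ parameter are all choices I made for the example.

```python
import math
import random

def metropolis_search(f, x0, step=0.5, temperature=1.0, iterations=10_000):
    """Toy Metropolis-style search for a state with a large value of f.
    Improvements are always accepted; worsenings are accepted with a
    probability that shrinks the worse the new state is, but never hits zero."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)   # propose a nearby state
        fc = f(candidate)
        if fc >= fx or random.random() < math.exp((fc - fx) / temperature):
            x, fx = candidate, fc                     # accept the move
            if fx > fbest:
                best, fbest = x, fx
    return best, fbest

# A bumpy function with many local peaks; the search usually lands near the best one.
print(metropolis_search(lambda x: math.sin(3 * x) - 0.05 * x * x, x0=10.0))
```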

So here we come to Rosenbluth. She and her then-husband, according to an interview he gave in 2003, were the primary workers behind the 1953 paper that set out the technique. And, particularly, she wrote the MANIAC computer program which ran the algorithm. It’s important work, one that an uncounted number of mathematicians, physicists, chemists, biologists, economists, and other planners have followed. She would go on to study statistical mechanics problems, in particular simulations of molecules. It’s still a rich field of study.

My All 2020 Mathematics A to Z: Renormalization


I have again Elke Stangl, author of elkemental Force, to thank for the subject this week. Stangl’s is, again, a blog of wide-ranging interests. And it’s got more poetry this week, this time haikus about the Dirac delta function.

I also have Kerson Huang, of the Massachusetts Institute of Technology and of Nanyang Technological University, to thank for much insight into the week’s subject. Huang published A Critical History of Renormalization, which gave me much to think about. It’s likely a paper that would help anyone hoping to know the history of the technique better.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + × ÷ (division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Renormalization.

There is a mathematical model, the Ising Model, for how magnets work. The model has the simplicity of a toy model given by a professor (Wilhelm Lenz) to his grad student (Ernst Ising). Suppose matter is a uniform, uniformly-spaced grid. At each point on the grid we have either a bit of magnetism pointed up (value +1) or down (value -1). It is a nearest-neighbor model. Each point interacts with its nearest neighbors and none of the other points. For a one-dimensional grid this is easy. It’s the stuff of thermodynamics homework for physics majors. They don’t understand it, because you need the hyperbolic trigonometric functions. But they could. For two dimensions … it’s hard. But doable. And interesting. It describes important things like phase changes. The way that you can take a perfectly good strong magnet and heat it up until it’s an iron goo, then cool it down to being a strong magnet again.
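If it helps to see the nearest-neighbor idea in something executable, here’s a little Python sketch of the one-dimensional chain. The coupling constant J and the minus-sign convention are the standard textbook choices, not anything particular to this essay.

```python
import random

def ising_energy_1d(spins, J=1.0):
    """Energy of a one-dimensional nearest-neighbour Ising chain:
    each spin (+1 or -1) interacts only with the spin immediately next to it."""
    return -J * sum(s * t for s, t in zip(spins, spins[1:]))

chain = [random.choice([+1, -1]) for _ in range(20)]
print(chain)
print(ising_energy_1d(chain))   # lowest when neighboring spins all agree
```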

For such a simple model it works well. A lot of the solids we find interesting are crystals, or are almost crystals. These are molecules arranged in a grid. So that part of the model is fine. They do interact, foremost, with their nearest neighbors. But not exclusively. In principle, every molecule in a crystal interacts with every other molecule. Can we account for this? Can we make a better model?

Yes, many ways. Here’s one. It’s designed for a square grid, the kind you get by looking at the intersections on a normal piece of graph paper. Each point is in a row and a column. The rows are a distance ‘a’ apart. So are the columns.

Now draw a new grid, on top of the old. Do it by grouping together two-by-two blocks of the original. Draw new rows and columns through the centers of these new blocks. Put at the new intersections a bit of magnetism. Its value is the mean of the four original points in its block. So, could be 1, could be -1, could be 0, could be ½, could be -½. Those are all the options. But look at what we have. It’s still an Ising-like model, with interactions between nearest-neighbors. There’s more choices for what value each point can have. And the grid spacing is now 2a instead of a. But it all looks pretty similar.

And now the great insight, that we can trace to Leo P Kadanoff in 1966. What if we relabel the distance between grid points? We called it 2a before. Call it a, now, again. What, importantly, is different from the Ising model we started with?

There’s the not-negligible point that there’s five different values a point can have, instead of two. But otherwise? In the operations we do, not much is different. How about in what it models? And there it’s interesting. Think of the original grid points. In the original scaling, they interacted only with units one original-row or one original-column away. Now? Their average interacts with the average of grid points that were as far as three original-rows or three original-columns away. It’s a small change. But it’s closer to reflecting the reality of every molecule interacting with every other molecule.

You know what happens when mathematicians get one good trick. We figure what happens if we do it again. Take the rescaled grid, the one that represents two-by-two blocks of the original. Rescale it again, making two-by-two blocks of these two-by-two blocks. Apply the same rules about setting the center points of a new grid. And then re-scale. What we have now are blocks that represent averages of four-by-four blocks of the original. And that, imperfectly, lets a point interact with a point seven original-rows or original-columns away. (Or farther: seven original-rows down and three original-columns to the left, say. Have fun counting all the distances.) And again: we have eight-by-eight blocks and even more range. Again: sixteen-by-sixteen blocks and double the range again. Why not carry this on forever?
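Here’s the coarse-graining step as a few lines of Python, just to show how mechanical it is. This is my own sketch of the block-spin averaging described above; apply it twice and you get the four-by-four-block averages of the original grid.

```python
import random

def block_spin(grid):
    """One block-spin step: replace each 2x2 block of the grid with a single
    point whose value is the mean of the block (grid dimensions assumed even)."""
    return [[(grid[r][c] + grid[r][c + 1] + grid[r + 1][c] + grid[r + 1][c + 1]) / 4
             for c in range(0, len(grid[0]), 2)]
            for r in range(0, len(grid), 2)]

grid = [[random.choice([+1, -1]) for _ in range(8)] for _ in range(8)]
coarse = block_spin(grid)       # 8x8 becomes 4x4, with values in {-1, -1/2, 0, 1/2, 1}
coarser = block_spin(coarse)    # and then 2x2: the averages of 4x4 blocks of the original
print(coarse)
print(coarser)
```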

This is renormalization. It’s a specific sort, called the block-spin renormalization group. It comes from condensed matter physics, where we try to understand how molecules come together to form bulks of matter. Kenneth Wilson stretched this over to studying the Kondo Effect. This is a problem in how magnetic impurities affect electrical resistance. (It’s named for Jun Kondo.) It’s great work. It (in part) earned Wilson a Nobel Prize. But the idea is simple. We can understand complex interactions by making them simple ones. The interactions have a natural scale, cutting off at the nearest neighbor. But we redefine ‘nearest neighbor’, again and again, until it reaches infinitely far away.

This problem, and its solution, come from thermodynamics. Particularly, statistical mechanics. This is a bit ahistoric. Physicists first used renormalization in quantum mechanics. This is all right. As a general guideline, everything in statistical mechanics turns into something in quantum mechanics, and vice-versa. What quantum mechanics lacked, for a generation, was logical rigor for renormalization. This statistical mechanics approach provided that.

We needed renormalization in quantum mechanics because of virtual particles. Quantum mechanics requires that particles can pop into existence, carrying momentum, and then pop back out again. This gives us electromagnetism, and the strong nuclear force (which holds particles together), and the weak nuclear force (which causes nuclear decay). Leave gravity over on the side. The more momentum in the virtual particle, the shorter a time it can exist. Properly it’s the more energy, the shorter the particle lasts. In that guise you know it as the Uncertainty Principle. But it’s momentum that’s important here. This means short-range interactions transfer more momentum, and long-range ones transfer less. And here we had thought forces got stronger as the particles interacting got closer together.

In principle, there is no upper limit to how much momentum one of these virtual particles can have. And, worse, the original particle can interact with its virtual particle. This by exchanging another virtual particle. Which is even higher-energy and shorter-range. The virtual particle can also interact with the field that’s around the original particle. Pairs of virtual particles can exchange more virtual particles. And so on. What we get, when we add this all together, seems like it should be infinitely large. Every particle the center of an infinitely great bundle of energy.

Renormalization, the original renormalization, cuts that off. Sets an effective limit on the system. The limit is not “only particles this close will interact” exactly. It’s more “only virtual particles with less than this momentum will”. (Yes, there’s some overlap between these ideas.) This seems different to us mere dwellers in reality. But to a mathematical physicist, knowing that position and momentum are conjugate variables? Limiting one is the same work as limiting the other.

This, when developed, left physicists uneasy. It’s for good reasons. The cutoff is arbitrary. That it exists at all is fine; we often deal with arbitrary cutoffs for things. When we calculate a weather satellite’s orbit we do not care that other star systems exist. We barely care that Jupiter exists. Still, where to put the cutoff? Quantum Electrodynamics, using this, could provide excellent predictions of physical properties. But shouldn’t we get different predictions with different cutoffs? How do we know we’re not picking a cutoff because it makes our test problem work right? That we’re not picking one that produces garbage for every other problem? Read the writing of a physicist of the time and — oh, why be coy? We all read Richard Feynman, his QED at least. We see him sulking about a technique he used to brilliant effect.

Wilson-style renormalization answered Feynman’s objections. (Though not to Feynman’s satisfaction, if I understand the history right.) The momentum cutoff serves as a scale. Or if you prefer, the scale of interactions we consider tells us the cutoff. Different scales give us different quantum mechanics. One scale, one cutoff, gives us the way molecules interact together, on the scale of condensed-matter physics. A different scale, with a different cutoff, describes the particles of Quantum Electrodynamics. Other scales describe something more recognizable as classical physics. Or the Yang-Mills gauge theory, which describes the Standard Model of subatomic particles, all those quarks and leptons.

Renormalization offers a capsule of much of mathematical physics, though. It started as an arbitrary trick to avoid calculation problems. In time, we found a rationale for the trick. But we found it by looking at a problem that seemed unrelated. On learning that related trick well, we see they’re different aspects of the same problem. It’s a neat bit of work.


This and all the other 2020 A-to-Z essays should be at this link. Essays from every A-to-Z series should be gathered at this link. I am looking eagerly for topics for the letters S, T, and U, and am scouting ahead for V, W, and X topics also. Thanks for your thoughts, and thank you for reading.

Some Thermomathematics Reading


I have been writing, albeit more slowly, this month. I’m also reading, also more slowly than usual. Here’s some things that caught my attention.

One is from Elke Stangl, of the Elkemental blog. “Re-Visiting Carnot’s Theorem” is about one of the centerpieces of thermodynamics. It’s about how much work you can possibly get out of an engine, and how much must be lost no matter how good your engineering is. Thermodynamics is the secret spine of modern physics. It was born of supremely practical problems, many of them related to railroads or factories. And it teaches how much solid information can be drawn about a system if we know nothing about the components of the system. Stangl also brings ASCII art back from its Usenet and Twitter homes. There’s just stuff that is best done as a text picture.

Meanwhile on the CarnotCycle blog Peter Mandel writes on “Le Châtelier’s principle”. This is related to the question of how temperatures affect chemical reactions: how fast they will be, how completely they’ll use the reagents. How a system that’s reached equilibrium will react to something that unsettles the equilibrium. We call that a perturbation. Mandel reviews the history of the principle, which hasn’t always been well-regarded, and explores why it might have gone under-appreciated for decades.

And lastly MathsByAGirl has published a couple of essays on spirals. Who doesn’t like them? Three-dimensional spirals, that is, helixes, have some obvious things to talk about. A big one is that there’s such a thing as handedness. The mirror image of a coil is not the same thing as the coil flipped around. This handedness has analogues and implications through chemistry and biology. Two-dimensional spirals, by contrast, don’t have handedness like that. But we’ve grouped them into many different sorts, each with their own beauty. They’re worth looking at.

Phase Equilibria and the usefulness of μ


The Carnot Cycle blog for this month is about chemical potential. “Chemical potential” in thermodynamics covers a lot of interesting phenomena. It gives a way to model chemistry using the mechanisms of statistical mechanics. It lets us study a substance that’s made of several kinds of particle. This potential is written with the symbol μ, and I admit I don’t know how that symbol got picked over all the possible alternatives.

The chemical potential varies with the number of particles. Each different type of particle gets its own chemical potential, so there may be a μ1 and μ2 and μ3 and so on. The chemical potential μ1 is how much the free energy varies as the count of particles-of-type-1 varies. μ2 is how much the free energy varies as the count of particles-of-type-2 varies, and so on. This might strike you as similar to the way pressure and volume of a gas depend on each other, or if you retained a bit more of thermodynamics how the temperature and entropy vary. This is so. Pressure and volume are conjugate variables, as are temperature and entropy, and so are chemical potential and particle number. (And for a wonder, “particle number” means exactly what it suggests: the number of particles of that kind in the system.)
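Written out, the relation the paragraph describes looks like this. I’m using the Gibbs free energy G here, with temperature, pressure, and the other particle counts held fixed in the derivative; the CarnotCycle post may phrase it with a different thermodynamic potential.

```latex
% Chemical potential as the change of (Gibbs) free energy per particle of type i,
% and the corresponding conjugate-pair form of the differential:
\mu_i = \left( \frac{\partial G}{\partial n_i} \right)_{T,\,P,\,n_{j \ne i}}
\qquad\text{so that}\qquad
dG = -S\,dT + V\,dP + \sum_i \mu_i\,dn_i .
```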

carnotcycle


It was the American mathematical physicist Josiah Willard Gibbs who introduced the concepts of phase and chemical potential in his milestone monograph On the Equilibrium of Heterogeneous Substances (1876-1878) with which he almost single-handedly laid the theoretical foundations of chemical thermodynamics.

In a paragraph under the heading “On Coexistent Phases of Matter” Gibbs mentions – in passing – that for a system of coexistent phases in equilibrium at constant temperature and pressure, the chemical potential μ of any component must have the same value in every phase.

This simple statement turns out to have considerable practical value as we shall see. But first, let’s go through the formal proof of Gibbs’ assertion.

An important result


Consider a system of two phases, each containing the same components, in equilibrium at constant temperature and pressure. Suppose a small quantity dn_i moles of any component i is transferred from phase A in…

View original post 819 more words

Reversible and irreversible change


Entropy is hard to understand. It’s deceptively easy to describe, and the concept is popular, but really grasping it is challenging. In this month’s entry CarnotCycle talks about thermodynamic entropy and where it comes from. I don’t promise you will understand it after this essay, but you will be closer to understanding it.

carnotcycle


Reversible change is a key concept in classical thermodynamics. It is important to understand what is meant by the term as it is closely allied to other important concepts such as equilibrium and entropy. But reversible change is not an easy idea to grasp – it helps to be able to visualize it.

Reversibility and mechanical systems

The simple mechanical system pictured above provides a useful starting point. The aim of the experiment is to see how much weight can be lifted by the fixed weight M1. Experience tells us that if a small weight M2 is attached – as shown on the left – then M1 will fall fast while M2 is pulled upwards at the same speed.

Experience also tells us that as the weight of M2 is increased, the lifting speed will decrease until a limit is reached when the weight difference between M2 and M1 becomes…

View original post 692 more words

Spontaneity and the performance of work


I’d wanted just to point folks to the latest essay in the CarnotCycle blog. This thermodynamics piece is a bit about how work gets done, and how it relates to two kinds of variables describing systems. The two kinds are known as intensive and extensive variables, and considering them helps guide us to a different way to regard physical problems.

carnotcycle


Imagine a perfect gas contained by a rigid-walled cylinder equipped with a frictionless piston held in position by a removable external agency such as a magnet. There are finite differences in the pressure (P1>P2) and volume (V2>V1) of the gas in the two compartments, while the temperature can be regarded as constant.

If the constraint on the piston is removed, will the piston move? And if so, in which direction?

Common sense, otherwise known as dimensional analysis, tells us that differences in volume (dimensions L³) cannot give rise to a force. But differences in pressure (dimensions ML⁻¹T⁻²) certainly can. There will be a net force of P1–P2 per unit area of piston, driving it to the right.

– – – –

The driving force

In thermodynamics, there exists a set of variables which act as “generalised forces” driving a system from one state to…

View original post 290 more words

The ideal gas equation


I did want to mention that the CarnotCycle big entry for the month is "The Ideal Gas Equation". The Ideal Gas equation is one of the more famous equations that isn't F = ma or E = mc², which I admit isn't a group of really famous equations; but, at the very least, its content is familiar enough.

If you keep a gas at constant temperature, and increase the pressure on it, its volume decreases, and vice-versa, known as Boyle’s Law. If you keep a gas at constant volume, and decrease its pressure, its temperature decreases, and vice-versa, known as Gay-Lussac’s law. Then Charles’s Law says if a gas is kept at constant pressure, and the temperature increases, then the volume increases, and vice-versa. (Each of these is probably named for the wrong person, because they always are.) The Ideal Gas equation combines all these relationships into one, neat, easily understood package.
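For a quick numerical feel, here’s the combined relationship as a couple of lines of Python. The numbers are illustrative values of my own choosing, in SI units.

```python
R = 8.314  # gas constant, J / (mol K)

def pressure(n, T, V):
    """Ideal gas law P V = n R T, solved for pressure."""
    return n * R * T / V

n, T, V = 1.0, 300.0, 0.0248          # one mole, near room temperature
P = pressure(n, T, V)
print(P)                              # roughly 1.0e5 Pa, about one atmosphere
print(pressure(n, T, V / 2) / P)      # Boyle: halve the volume at fixed T, pressure doubles
print(pressure(n, 2 * T, V) / P)      # Gay-Lussac: double T at fixed V, pressure doubles
```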

Peter Mander describes some of the history of these concepts and equations, and how they came together, with the interesting way that they connect to the absolute temperature scale and to the idea of absolute zero. Absolute temperatures — Kelvin — and absolute zero are familiar enough ideas these days that it’s difficult to remember they were ever new and controversial and intellectually challenging ideas to develop. I hope you enjoy.

carnotcycle


If you received formal tuition in physical chemistry at school, then it’s likely that among the first things you learned were the 17th/18th century gas laws of Mariotte and Gay-Lussac (Boyle and Charles in the English-speaking world) and the equation that expresses them: PV = kT.

It may be that the historical aspects of what is now known as the ideal (perfect) gas equation were not covered as part of your science education, in which case you may be surprised to learn that it took 174 years to advance from the pressure-volume law PV = k to the combined gas law PV = kT.


The lengthy timescale indicates that putting together closely associated observations wasn’t regarded as a must-do in this particular era of scientific enquiry. The French physicist and mining engineer Émile Clapeyron eventually created the combined gas equation, not for its own sake, but because he needed an…

View original post 1,628 more words

The Liquefaction of Gases – Part I


I know, or at least I’m fairly confident, there’s a couple readers here who like deeper mathematical subjects. It’s fine to come up with simulated Price is Right games or figure out what grades one needs to pass the course, but those aren’t particularly challenging subjects.

But deeper subjects are hard to write, so, while I stall, let me point you to CarnotCycle, which has a nice historical article about the problem of liquefaction of gases, a problem that’s not just steeped in thermodynamics but in engineering. If you’re a little familiar with thermodynamics you likely won’t be surprised to see names like William Thomson, James Joule, or Willard Gibbs turn up. I was surprised to see T O’Conor Sloane show up in the additional reading; science fiction fans might vaguely remember that name, as he was the editor of Amazing Stories for most of the 1930s, in between Hugo Gernsback and Raymond Palmer. It’s often a surprising world.

carnotcycle

On Monday 3 December 1877, the French Academy of Sciences received a letter from Louis Cailletet, a 45 year-old physicist from Châtillon-sur-Seine. The letter stated that Cailletet had succeeded in liquefying both carbon monoxide and oxygen.

Liquefaction as such was nothing new to 19th century science, it should be said. The real news value of Cailletet’s announcement was that he had liquefied two gases previously considered ‘non condensable’.

While a number of gases such as chlorine, carbon dioxide, sulfur dioxide, hydrogen sulfide, ethylene and ammonia had been liquefied by the simultaneous application of pressure and cooling, the principal gases comprising air – nitrogen and oxygen – together with carbon monoxide, nitric oxide, hydrogen and helium, had stubbornly refused to liquefy, despite the use of pressures up to 3000 atmospheres. By the mid-1800s, the general opinion was that these gases could not be converted into liquids under any circumstances.

But in…

View original post 1,334 more words

Reading the Comics, November 13, 2013


For this week’s round of comic strips there’s almost a subtler theme than “they mention math in some way”: several have got links to statistical mechanics and the problem of recurrence. I’m not sure what’s gotten into Comic Strip Master Command that they sent out instructions to do comics that I can tie to the astounding interactions of infinities and improbable events, but it makes me wonder if I need to write a few essays about it.

Gene Weingarten, Dan Weingarten, and David Clark’s Barney and Clyde (October 30) summons the classic “infinite monkeys” problem of probability for its punch line. The concept is that if you had something producing strings of letters at random (taken to be monkeys because, I suppose, it’s assumed they would hit every key without knowing what sensibly comes next), it would, given enough time, produce any given result. The idea goes back a long way, and it’s blessed with a compelling mental image even though typewriters are a touch old-fashioned these days.
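If you want a sense of the time scales involved, here’s a little Monte Carlo sketch, with an alphabet and target word of my own choosing. Random typing finds a two-letter word in hundreds of keystrokes on average; every extra letter multiplies the wait by roughly another factor of 26, which is why the monkeys need ‘enough time’ to be a very long time indeed.

```python
import random
import string

def keystrokes_until(target, alphabet=string.ascii_lowercase):
    """Count random keystrokes until the target string appears in the output."""
    window, count = "", 0
    while window != target:
        window = (window + random.choice(alphabet))[-len(target):]
        count += 1
    return count

trials = [keystrokes_until("to") for _ in range(500)]
print(sum(trials) / len(trials))   # near 26 squared, that is, 676 keystrokes on average
```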

It seems to have gotten its canonical formulation in Émile Borel’s 1913 article “Statistical Mechanics and Irreversibility”, as you might expect since statistical mechanics brings up the curious problem of entropy. In short: every physical interaction, say, when two gases — let’s say clear air and some pink smoke as 1960s TV shows used to knock characters out — mix, is time-reversible. Look at the interaction of one clear-gas molecule and one pink-gas molecule and you can’t tell whether it’s playing forward or backward. But look at the entire room and it’s obvious whether they’re mixing or unmixing. How can something be time-reversible at every step of every interaction but not in whole?

The idea got a second compelling metaphor with Jorge Luis Borges’s Library of Babel, with a bit more literary class and in many printings fewer monkeys.

Continue reading “Reading the Comics, November 13, 2013”

Gibbs’ Elementary Principles in Statistical Mechanics


I had another discovery from the collection of books at archive.org, now that I thought to look for it: Josiah Willard Gibbs’s Elementary Principles in Statistical Mechanics, originally published in 1902 and reprinted 1960 by Dover, which gives you a taste of Gibbs’s writings by its extended title, Developed With Especial Reference To The Rational Foundation of Thermodynamics. Gibbs was an astounding figure even in a field that seems to draw out astounding figures, and he’s a good candidate for the title of “greatest scientist to come from the United States”.

He lived in walking distance of Yale (where his father and then he taught) nearly his whole life, working nearly isolated but with an astounding talent for organizing the many complex and confused ideas in the study of thermodynamics into a neat, logical science. Some great scientists have the knack for finding important work to do; some great scientists have the knack for finding ways to express work so the masses can understand it. Gibbs … well, perhaps it’s a bit much to say the masses understand it, but the language of modern thermodynamics and of quantum mechanics is very much the language he spoke a century-plus ago.

My understanding is he published almost all his work in the journal Transactions of the Connecticut Academy of Arts and Sciences, in a show of hometown pride which probably left the editors baffled but, I suppose, happy to print something this fellow was very sure about.

To give some idea why they might have found him baffling, though, consider the first paragraph of Chapter 1, which is accurate and certainly economical:

We shall use Hamilton’s form of the equations of motion for a system of n degrees of freedom, writing q_1, \cdots q_n for the (generalized) coördinates, \dot{q}_1, \cdots \dot{q}_n for the (generalized) velocities, and

F_1 dq_1 + F_2 dq_2 + \cdots + F_n dq_n [1]

for the moment of the forces. We shall call the quantities F_1, \cdots F_n the (generalized) forces, and the quantities p_1 \cdots p_n , defined by the equations

p_1 = \frac{d\epsilon_q}{d\dot{q}_1}, p_2 = \frac{d\epsilon_q}{d\dot{q}_2}, etc., [2]

where \epsilon_q denotes the kinetic energy of the system, the (generalized) momenta. The kinetic energy is here regarded as a function of the velocities and coördinates. We shall usually regard it as a function of the momenta and coördinates, and on this account we denote it by \epsilon_p . This will not prevent us from occasionally using formulas like [2], where it is sufficiently evident the kinetic energy is regarded as function of the \dot{q}'s and q's. But in expressions like d\epsilon_p/dq_1 , where the denominator does not determine the question, the kinetic energy is always to be treated in the differentiation as function of the p's and q's.

(There’s also a footnote I skipped because I don’t know an elegant way to include it in WordPress.) Your friend the physics major did not understand that on first read any more than you did, although she probably got it after going back and reading it a touch more slowly. And his writing is just like that: 240 pages and I’m not sure I could say any of them could be appreciably tightened.
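For whatever it’s worth, the definitions do become friendly in a small case. Take a single particle of mass m with one coordinate q, writing the kinetic energy simply as \epsilon to dodge Gibbs’s subscript bookkeeping; this worked example is mine, not Gibbs’s, but it’s the sort of thing definition [2] is generalizing.

```latex
% One degree of freedom, ordinary kinetic energy:
\epsilon = \tfrac{1}{2} m \dot{q}^2
\quad\Longrightarrow\quad
p = \frac{d\epsilon}{d\dot{q}} = m \dot{q},
```

which is the ordinary momentum; the ‘generalized’ momenta extend the same recipe to whatever coordinates q_1, \ldots, q_n happen to describe the system.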


Also, I note I finally reached 9,000 page views! Thank you; I couldn’t have done it without at least twenty of you, since I’m pretty sure I’ve obsessively clicked on my own pages at minimum 8,979 times.

Reblog: Mixed-Up Views Of Entropy


The blog CarnotCycle, which tries to explain thermodynamics — a noble goal, since thermodynamics is a really big, really important, and criminally unpopularized part of science and mathematics — here starts from an “Unpublished Fragment” by the great Josiah Willard Gibbs to talk about entropy.

Gibbs — a strong candidate for the greatest scientist the United States ever produced, complete with fascinating biographical quirks to make him seem accessibly peculiar — gave to statistical mechanics much of the mathematical form and power that it now has. Gibbs had planned to write something about “On entropy as mixed-up-ness”, which certainly puts in one word what people think of entropy as being. The concept is more powerful and more subtle than that, though, and CarnotCycle talks about some of the subtleties.

carnotcycle


Tucked away at the back of Volume One of The Scientific Papers of J. Willard Gibbs, is a brief chapter headed ‘Unpublished Fragments’. It contains a list of nine subject headings for a supplement that Professor Gibbs was planning to write to his famous paper “On the Equilibrium of Heterogeneous Substances”. Sadly, he completed his notes for only two subjects before his death in April 1903, so we will never know what he had in mind to write about the sixth subject in the list: On entropy as mixed-up-ness.

View original post 686 more words

Fun With General Physics


I’m sure to let my interest in the Internet Archive version of Landau, Akhiezer, and Lifshitz’s General Physics wane soon enough. But for now I’m still digging around and finding stuff that delights me. For example, here, from the end of section 58 (Solids and Liquids):

As the temperature decreases, the specific heat of a solid also decreases and tends to zero at absolute zero. This is a consequence of a remarkable general theorem (called Nernst’s theorem), according to which, at sufficiently low temperatures, any quantity representing a property of a solid or liquid becomes independent of temperature. In particular, as absolute zero is approached, the energy and enthalpy of a body no longer depend on the temperature; the specific heats c_p and c_V, which are the derivatives of these quantities with respect to temperature, therefore tend to zero.

It also follows from Nernst’s theorem that, as T \rightarrow 0 , the coefficient of thermal expansion tends to zero, since the volume of the body ceases to depend on the temperature.

General Physics from the Internet Archive


Mir Books is this company that puts out downloadable, translated copies of mostly Soviet mathematics and physics books. As often happens I started reading them kind of on a whim and kept following in the faith that someday I’d see a math text I just absolutely had to have. It hasn’t quite reached that, although a post from today identified one I do like which, naturally enough, they aren’t publishing. It’s from the Internet Archive instead.

The book is General Physics, by L D Landau, A I Akhiezer, and E M Lifshitz. The title is just right; it gets you from mechanics to fields to crystals to thermodynamics to chemistry to fluid dynamics in about 370 pages. The scope and size probably tell you this isn’t something for the mass audience; the book’s appropriate for an upper-level undergraduate or a grad student, or someone who needs a reference for a lot of physics.

So I can’t recommend this for normal readers, but if you’re the sort of person who sees beauty in a quote like:

Putting r here equal to the Earth’s radius R, we find a relation between the densities of the atmosphere at the Earth’s surface (n_E) and at infinity (n_∞):

n_{\infty} = n_E e^{-\frac{GMm}{kTR}}

then by all means read on.
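Out of curiosity, here’s what that ratio comes to with Earth-ish numbers plugged in. The particular values (molecular nitrogen, an isothermal 300 K, the usual constants) are my own choices for illustration, not the book’s.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # radius of the Earth, m
k = 1.381e-23   # Boltzmann constant, J/K
m = 4.65e-26    # mass of an N2 molecule, kg
T = 300.0       # temperature, K

exponent = G * M * m / (k * T * R)
print(exponent)             # around 700
print(math.exp(-exponent))  # so n_infinity is astonishingly small, around 10^-305 of n_E
```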

The Music Goes Round And Round


So. Here’s the really big flaw in my analysis of an “Infinite Jukebox” tune — one in which the song is free to jump between two points, with a probability of \frac13 of jumping from the one-minute mark to the two-minute mark, and an equal likelihood of jumping from the two-minute mark to the one-minute mark. I concluded that, on average, the song would lose a minute just as often as it gained one, and so we could expect the song to be just as long as the original. The flaw is that I made allowance for only the one jump. The three-minute song with two points at which it could jump, which I used for the model, can play straight through with no cuts or jumps (three minutes long), or it can play jumping from the one-minute to the two-minute mark (a two-minute version), or it can play from the start to the second minute, jump back to the first, and continue to the end (a four-minute version). But if you play any song on the Infinite Jukebox you see that more can happen.
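Since the whole point is that jumps can keep happening, the cleanest check I know is to simulate it. This is a sketch of the toy three-minute song with the two jump points as described above; run it and the sample mean gives a Monte Carlo estimate of the expected length to compare against the pencil-and-paper answer.

```python
import random

def play_length(p=1/3):
    """One playing of the toy song: jump points at the one- and two-minute
    marks, each taken with probability p every time that mark is reached."""
    minutes, mark = 1.0, 1      # the opening minute always plays, bringing us to the first mark
    while mark < 3:
        if mark == 1 and random.random() < p:
            mark = 2                # skip ahead: the middle minute is lost
        elif mark == 2 and random.random() < p:
            mark = 1                # jump back: the middle minute will play again
        else:
            minutes += 1.0          # play straight on to the next mark
            mark += 1
    return minutes

runs = [play_length() for _ in range(100_000)]
print(sum(runs) / len(runs))        # estimate of the expected playing time
```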

Continue reading “The Music Goes Round And Round”

Infinite Buggles


Working through my circle of friends have been links to The Infinite Jukebox, an amusing web site which takes a song, analyzes points at which clean edits can be made, and then randomly jumps through them so that the song just never ends. The idea is neat, and its visual representation of the song and the places where it can — but doesn’t have to — jump forward or back can be captivating. My Dearly Beloved has been particularly delighted with the results on “I Am A Camera”, by the Buggles, as it has many good edit points and can sound quite natural after the jumps if you aren’t paying close attention to the lyrics. I recommend playing that at least a bit so you get some sense of how it works, although listening to an infinitely long rendition of the Buggles, or any other band, is asking for a lot.

One question that comes naturally to mind, at least to my mind, is: given there are these various points where the song can skip ahead or skip back, how long should we expect such an “infinite” rendition of a song to take? What’s the average, that is, the expected value, of the song’s playing time? I wouldn’t dare jump into analyzing “I Am A Camera”, not without working on some easier problems to figure out how it should be done, but let’s look.

Continue reading “Infinite Buggles”

Reblog: Random matrix theory and the Coulomb gas


inordinatum’s guest blog post here discusses something which, I must confess, isn’t going to be accessible to most of my readers. But it’s interesting to me, since it addresses many topics that are either directly in or close to my mathematical research interests.

The random matrix theory discussed here is the study of what we can say about matrices when we aren’t told the numbers in the matrix, but are told the distribution of the numbers — how likely any cell within the matrix is to be within any particular range. From that start it might sound like almost nothing could be said; after all, couldn’t anything go? But in exactly the same way that we’re able to speak very precisely about random events in the context of probability and statistics — for example, that a (fair) coin flipped a million times will come up tails pretty near 500,000 times, and will not come up tails 600,000 times — we’re able to speak confidently about the properties of these random matrices.
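As a tiny taste of that regularity, here’s a sketch (mine, not anything from the guest post) that samples a few symmetric matrices with normally-distributed entries and looks at their eigenvalues. The individual matrices are completely different; their extreme eigenvalues barely budge.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
for _ in range(3):
    a = rng.standard_normal((n, n))
    h = (a + a.T) / np.sqrt(2 * n)               # a symmetric random matrix, suitably scaled
    eigenvalues = np.linalg.eigvalsh(h)
    print(eigenvalues.min(), eigenvalues.max())  # consistently close to -2 and +2
```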

In any event, please do not worry about understanding the whole post. I found it fascinating and that’s why I’ve reblogged it here.

inordinatum

Today I have the pleasure of presenting you a guest post by Ricardo, a good physicist friend of mine in Paris, who is working on random matrix theory. Enjoy!

After writing a nice piece of hardcore physics to my science blog (in Portuguese, I am sorry), Alex asked me to come by and repay the favor. I am happy to write a few lines about the basis of my research in random matrices, and one of the first nice surprises we have while learning the subject.

In this post, I intend to present a few neat tricks I learned while tinkering with Random Matrix Theory (RMT). It is a pretty vast subject, whose ramifications extend to nuclear physics, information theory, particle physics and, surely, mathematics as a whole. One of the main questions on this subject is: given a matrix M whose entries are taken randomly from a…

View original post 1,082 more words