A Summer 2015 Mathematics A to Z Roundup


Since I’ve run out of letters there’s little dignified to do except end the Summer 2015 Mathematics A to Z. I’m still organizing my thoughts about the experience. I’m quite glad to have done it, though.

For the sake of good organization, here’s the set of pages this project created:

Fibonacci’s Biased Scarf


Here is a neat bit of crochet work with a bunch of nice recreational-mathematics properties. The first is that the distance between yellow rows, or between blue rows, represents the start of the Fibonacci sequence of numbers. I’m not sure if the Fibonacci sequence is the most famous sequence of whole numbers but it’s certainly among the most famous, and it’s got interesting properties and historical context.
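If you want to see those stripe counts spelled out, a few lines of code will do it. This is just a sketch in Python, nothing taken from the pattern itself; the cutoff at 34 matches the widest stripe described in the excerpt below.

```python
# Generate the Fibonacci numbers used for the stripe widths, up to 34.
def fibonacci_up_to(limit):
    sequence = []
    a, b = 1, 1
    while a <= limit:
        sequence.append(a)
        a, b = b, a + b
    return sequence

print(fibonacci_up_to(34))  # [1, 1, 2, 3, 5, 8, 13, 21, 34]
```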

The second recreational-mathematics property is that the pattern is rotationally symmetric. Rotate it 180 degrees and you get back the original pattern, albeit with blue and yellow swapped. You can form a group out of the ways that it’s possible to rotate an object and get back something that looks like the original. Symmetry groups can be things of simple aesthetic beauty, describing scarf patterns and ways to tile floors and the like. They can also describe things of deep physical significance. Much of the ability of quantum chromodynamics to describe nuclear physics comes from these symmetry groups.

The logo at top of the page is of a trefoil knot, which I’d mentioned a couple weeks back. A trefoil knot isn’t perfectly described by its silhouette. Where the lines intersect you have to imagine the string (or whatever makes up the knot) passing twice, once above and once below itself. If you do that crossing-over and crossing-under consistently you get the trefoil knot, the simplest loop that isn’t an unknot, that can’t be shaken loose into a simple circle.

Knot Theorist

A photograph of the finished Fibonacci’s Biased scarf, striped in blue and yellow.

This scarf is totally biased. That’s not to say that it’s prejudiced, but that it was worked in the diagonal direction of the cloth.

My project was made from Julie Blagojevich’s free pattern Fibonacci’s Biased using Knit Picks Curio. The number of rows in each stripe is according to the numbers of the Fibonacci sequence up to 34. In other words, if you start at the blue side of the scarf and work your way right, the sequence of the number of yellow rows is 1, 1, 2, 3, 5, 8, 13, 21, 34. The sequence of the blue stripes is the same, but in the opposite direction. The effect is a rotationally symmetric scarf with few color changes at the edges and frequent color changes in the center. As I frequently tell my friends, math is beautiful.

If my geekiness hasn’t scared you away yet, here’s a random fun…

View original post 46 more words

A Summer 2015 Mathematics A To Z: tensor


Tensor.

The true but unenlightening answer first: a tensor is a regular, rectangular grid of numbers. The most common kind is a two-dimensional grid, so that it looks like a matrix, or like the times tables. It might be square, with as many rows as columns, or it might be rectangular.

It can also be one-dimensional, looking like a row or a column of numbers. Or it could be three-dimensional, rows and columns and whole levels of numbers. We don’t try to visualize that. It can be what we call zero-dimensional, in which case it just looks like a solitary number. It might be four- or more-dimensional, although I confess I’ve never heard of anyone who actually writes out such a thing. It’s just so hard to visualize.

You can add and subtract tensors if they’re of compatible sizes. You can also do something like multiplication. And this does mean that tensors of compatible sizes will form a ring. Of course, that doesn’t say why they’re interesting.

Tensors are useful because they can describe spatial relationships efficiently. The word comes from the same Latin root as “tension”, a hint about how we can imagine it. A common use of tensors is in describing the stress in an object. Applying stress in different directions to an object often produces different effects. The classic example there is a newspaper. Rip it in one direction and you get a smooth, clean tear. Rip it perpendicularly and you get a raggedy mess. The stress tensor represents this: it gives some idea of how a force put on the paper will create a tear.
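To make that a bit more concrete, here is a minimal sketch in Python with NumPy. The two-by-two grid below is a made-up, stress-like tensor, not data from any real sheet of newsprint; the point is only that one grid of numbers can respond very differently to pulls in different directions.

```python
import numpy as np

# A hypothetical 2-D "stress-like" tensor: strong response along one axis,
# weak response along the perpendicular axis.
T = np.array([[5.0, 0.0],
              [0.0, 0.5]])

along_the_grain  = np.array([1.0, 0.0])   # pull in one direction
across_the_grain = np.array([0.0, 1.0])   # pull in the perpendicular direction

# Applying the tensor to each direction gives very different magnitudes,
# the way newspaper tears cleanly one way and raggedly the other.
print(np.linalg.norm(T @ along_the_grain))   # 5.0
print(np.linalg.norm(T @ across_the_grain))  # 0.5
```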

Tensors show up a lot in physics, and so in mathematical physics. Technically they show up everywhere, since vectors and even plain old numbers (scalars, in the lingo) are kinds of tensors, but that’s not what I mean. Tensors can describe efficiently things whose magnitude and direction change based on where something is and where it’s looking. So they are a great tool to use if one wants to represent stress, or how well magnetic fields pass through objects, or how electrical fields are distorted by the objects they move in. And they describe space, as well: general relativity is built on tensors. The mathematics of a tensor allows one to describe how space is shaped, based on how to measure the distance between two points in space.

My own mathematical education happened to be pretty tensor-light. I never happened to have courses that forced me to get good with them, and I confess to feeling intimidated when a mathematical argument gets deep into tensor mathematics. Joseph C Kolecki, with NASA’s Glenn (Lewis) Research Center, published in 2002 a nice little booklet “An Introduction to Tensors for Students of Physics and Engineering”. This I think nicely bridges some of the gap between mathematical structures like vectors and matrices, that mathematics and physics majors know well, and the kinds of tensors that get called tensors and that can be intimidating.

Reading the Comics, July 7, 2015: Carrying On The Streak Edition


I admit I’ve been a little unnerved lately. Between the A To Z project and the flood of mathematics-themed jokes from Comic Strip Master Command — and miscellaneous follies like my WordPress statistics-reading issues — I’ve had a post a day for several weeks now. The streak has to end sometime, surely, right? So it must, but not today. I admit the bunch of comics mentioning mathematical topics the past couple days was more a matter of continuing well-explored jokes than of breaking new territory. But every comic strip is somebody’s first, isn’t it? (That’s an intimidating thought.)

Mickey Mouse promises to help a nephew with his mathematics homework. The word problem is also a tongue-twister. It haunts Mickey all night.
Disney’s Mickey Mouse rerun the 6th of July, 2015. Probably rerun many more times, too.

Disney’s Mickey Mouse (July 6, rerun from who knows when) is another example of the word problem that even adults can’t do. I think it’s an interesting one for being also a tongue-twister. I tend to think of this sort of problem as a calculus question, but that’s surely just that I spend more time with calculus than with algebra or simpler arithmetic.

Donald keeps his nephews awake by counting sheep all night. They all get to sleep when he counts sheep by fours.
Disney’s Donald Duck for the 6th of July, 2015. Also probably rerun many times.

And then Disney’s Donald Duck (July 6 also, but probably a rerun from some other date) is a joke built on counting sheep. Might help someone practice their four-times table, too. I like the internal logic of this one. Maybe I just like sheep in comic strips.

Eric Teitelbaum and Bill Teitelbaum’s Bottomliners (July 6) is a bit of wordplay based on the idiom that figures will “add up” if they’re correct. There are so many things one can do with figures, though, aren’t there? Surely something will be right.

Justin Thompson’s Mythtickle (July 6, again a rerun) is about the curious way that objects are mostly empty space. The first panel shows on the alien’s chalkboard legitimate equations from quantum mechanics. The first line introduces (in part) a function called psi that describes where a particle is likely to be found over time. The second and third lines describe how the probability distribution — where a particle is likely to be found — changes over time.

Doug Bratton’s Pop Culture Shock Therapy (July 7) just name-drops mathematics as something a kid will do badly in. In this case the kid is Calvin, from Calvin and Hobbes. While it’s true he did badly in mathematics I suspect that’s because it’s so easy to fit an elementary-school arithmetic question and a wrong answer in a single panel.

The idea of mathematics as a way to bludgeon people into accepting your arguments must have caught someone’s imagination over at the Parker studios. Jeff Parker’s The Wizard of Id for July 7 uses this joke, just as Mason Mastroianni, Mick Mastroianni, and Perri Hart’s B.C. did back on June 19th. (Both comic strips were created by the prolific Johnny Hart. I was surprised to learn they’re not still drawn and written by the same teams.) As I mentioned at the time, smothering people beneath mathematical symbols is logically fallacious. This is not to say it doesn’t work.

Reading the Comics, June 21, 2015: Blatantly Padded Edition, Part 2


I said yesterday I was padding one mathematics-comics post into two for silly reasons. And I was. But there were enough Sunday comics on point that splitting one entry into two has turned out to be legitimate. Nice how that works out sometimes.

Mason Mastroianni, Mick Mastroianni, and Perri Hart’s B.C. (June 19) uses mathematics as something to heap upon a person until they yield to your argument. It’s a fallacious way to argue, but it does work. Even at a mathematical conference the terror produced by a screen full of symbols can chase follow-up questions away. On the 21st, they present mathematics as a more obviously useful thing. Well, mathematics with a bit of physics.

Nate Frakes’s Break Of Day (June 19) is this week’s anthropomorphic algebra joke.

Life at the quantum level: one subatomic particle suspects the other of being unfaithful because both know he could be in two places at once.
Niklas Eriksson’s Carpe Diem for the 20th of June, 2015.

Niklas Eriksson’s Carpe Diem (June 20) is captioned “Life at the Quantum Level”. And it’s built on the idea that quantum particles could be in multiple places at once. Whether something can be in two places at once depends on coming up with a clear idea about what you mean by “thing” and “places” and for that matter “at once”; when you try to pin the ideas down they prove to be slippery. But the mathematics of quantum mechanics is fascinating. It cries out for treating things we would like to know about, such as positions and momentums and energies of particles, as distributions instead of fixed values. That is, we know how likely it is a particle is in some region of space compared to how likely it is somewhere else. In statistical mechanics we resort to this because we want to study so many particles, or so many interactions, that it’s impractical to keep track of them all. In quantum mechanics we need to resort to this because it appears this is just how the world works.

(It’s even less on point, but Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 21st of June has a bit of riffing on Schrödinger’s Cat.)

Brian and Ron Boychuk’s Chuckle Brothers (June 20) name-drops algebra as the kind of mathematics kids still living with their parents have trouble with. That’s probably required by the desire to make a joking definition of “aftermath”, so that some specific subject has to be named. And it needs parents to still be watching closely over their kids, something that doesn’t quite fit for college-level classes like Intro to Differential Equations. So algebra, geometry, or trigonometry it must be. I am curious whether algebra reads as the funniest of that set of words, or if it just fits better in the space available. ‘Geometry’ is about as long a word as ‘algebra’, but it may not have the same connotation of being an impossibly hard class.

Little Iodine does badly in arithmetic in class. But she's very good at counting the calories, and the cost, of what her teacher eats.
Jimmy Hatlo’s Little Iodine for the 18th of April, 1954, and rerun the 18th of June, 2015.

And from the world of vintage comic strips, Jimmy Hatlo’s Little Iodine (June 21, originally run the 18th of April, 1954) reminds us that anybody can do any amount of arithmetic if it’s something they really want to calculate.

Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney (June 21) is another strip using the idea of mathematics — and particularly word problems — to signify great intelligence. I suppose it’s easier to recognize the form of a word problem than it is to recognize a good paper on the humanities if you only have two dozen words to show it in.

Juba’s Viivi and Wagner (June 21) is a timely reminder that while sudokus may be fun logic puzzles, they are ultimately the puzzle you decide to make of them.

Conditions of equilibrium and stability


This month Peter Mander’s CarnotCycle blog talks about the interesting world of statistical equilibriums. And particularly it talks about stable equilibriums. A system’s in equilibrium if it isn’t going to change over time. It’s in a stable equilibrium if being pushed a little bit out of equilibrium isn’t going to send the system drifting far from where it started.

For simple physical problems these are easy to understand. For example, a marble resting at the bottom of a spherical bowl is in a stable equilibrium. At the exact bottom of the bowl, the marble won’t roll away. If you give the marble a little nudge, it’ll roll around, but it’ll stay near where it started. A marble sitting on the top of a sphere is in an equilibrium — if it’s perfectly balanced it’ll stay where it is — but it’s not a stable one. Give the marble a nudge and it’ll roll away, never to come back.

In statistical mechanics we look at complicated physical systems, ones with thousands or millions or even really huge numbers of particles interacting. But there are still equilibriums, some stable, some not. In these, stuff will still happen, but the kind of behavior doesn’t change. Think of a steadily-flowing river: none of the water is staying still, or close to it, but the river isn’t changing.

CarnotCycle describes how to tell, from properties like temperature and pressure and entropy, when systems are in a stable equilibrium. These are properties that don’t tell us a lot about what any particular particle is doing, but they can describe the whole system well. The essay is higher-level than usual for my blog. But if you’re taking a statistical mechanics or thermodynamics course this is just the sort of essay you’ll find useful.

carnotcycle


In terms of simplicity, purely mechanical systems have an advantage over thermodynamic systems in that stability and instability can be defined solely in terms of potential energy. For example the center of mass of the tower at Pisa, in its present state, must be higher than in some infinitely near positions, so we can conclude that the structure is not in stable equilibrium. This will only be the case if the tower attains the condition of metastability by returning to a vertical position or absolute stability by exceeding the tipping point and falling over.


Thermodynamic systems lack this simplicity, but in common with purely mechanical systems, thermodynamic equilibria are always metastable or stable, and never unstable. This is equivalent to saying that every spontaneous (observable) process proceeds towards an equilibrium state, never away from it.

If we restrict our attention to a thermodynamic system of unchanging composition and apply…

View original post 2,534 more words

Reading the Comics, June 4, 2015: Taking It Easy Edition


I do like looking for thematic links among the comic strips that mention mathematical topics that I gather for these posts. This time around all I can find is a theme of “nothing big going on”. I’m amused by some of them but don’t think there’s much depth to the topics. But I like them anyway.

Mark Anderson’s Andertoons gets its appearance here with the May 25th strip. And it’s a joke about the hatred of fractions. It’s a suitable one for posting in mathematics classes too, since it is right about naming three famous irrational numbers — pi, the “golden ratio” phi, and the square root of two — and the fact they can’t be written as fractions which use only whole numbers in the numerator and denominator. Pi is, well, pi. The square root of two is nice and easy to find, and has that famous legend about the Pythagoreans attached to it. And it’s probably the easiest number to prove is irrational.

The “golden ratio” is that number that’s about 1.618. It’s interesting because 1 divided by phi is about 0.618, and who can resist a symmetry like that? That’s about all there is to say for it, though. The golden ratio is otherwise a pretty boring number, really. It’s gained celebrity as an “ideal” ratio — that a rectangle with one side about 1.6 times as long as the other is somehow more appealing than other choices — but that’s rubbish. It’s a novelty act among numbers. Novelty acts are fun, but we should appreciate them for what they are.
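That reciprocal symmetry is easy enough to check. Here is the arithmetic as a short Python sketch, nothing more.

```python
from math import sqrt, isclose

phi = (1 + sqrt(5)) / 2           # the golden ratio
print(phi)                        # about 1.6180339887
print(1 / phi)                    # about 0.6180339887
print(isclose(1 / phi, phi - 1))  # True: the fractional parts agree
```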

Continue reading “Reading the Comics, June 4, 2015: Taking It Easy Edition”

Reversible and irreversible change


Entropy is hard to understand. It’s deceptively easy to describe, and the concept is popular, but to understand it is challenging. In this month’s entry CarnotCycle talks about thermodynamic entropy and where it comes from. I don’t promise you will understand it after this essay, but you will be closer to understanding it.

carnotcycle

A diagram of the simple mechanical system discussed below: a fixed weight M1 used to lift a smaller weight M2.

Reversible change is a key concept in classical thermodynamics. It is important to understand what is meant by the term as it is closely allied to other important concepts such as equilibrium and entropy. But reversible change is not an easy idea to grasp – it helps to be able to visualize it.

Reversibility and mechanical systems

The simple mechanical system pictured above provides a useful starting point. The aim of the experiment is to see how much weight can be lifted by the fixed weight M1. Experience tells us that if a small weight M2 is attached – as shown on the left – then M1 will fall fast while M2 is pulled upwards at the same speed.

Experience also tells us that as the weight of M2 is increased, the lifting speed will decrease until a limit is reached when the weight difference between M2 and M1 becomes…

View original post 692 more words

A Little More Talk About What We Talk About When We Talk About How Interesting What We Talk About Is


I had been talking about how much information there is in the outcome of basketball games, or tournaments, or the like. I wanted to fill in at least one technical term, to match some of the others I’d given.

In this information-theory context, an experiment is just anything that could have different outcomes. A team can win or can lose or can tie in a game; that makes the game an experiment. The outcomes are the team wins, or loses, or ties. A team can get a particular score in the game; that makes that game a different experiment. The possible outcomes are the team scores zero points, or one point, or two points, or so on up to whatever the greatest possible score is.

If you know the probability p of each of the different outcomes, and since this is a mathematics thing we suppose that you do, then we have what I was calling the information content of the outcome of the experiment. That’s a number, measured in bits, and given by the formula

\sum_{j} - p_j \cdot \log\left(p_j\right)

The sigma summation symbol means to add up the expression to the right of it, evaluated for every value of some index j. The p_j means the probability of outcome number j. And the logarithm may be that of any base, although if we use base two then we have an information content measured in bits. Those are the same bits as are in the bytes that make up the megabytes and gigabytes in your computer. You can see this number as an estimate of how many well-chosen yes-or-no questions you’d have to ask to pick the actual result out of all the possible ones.
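To give a feel for how the formula behaves, here is the sum worked out in Python for a few made-up experiments. The probabilities are illustrative only, not from any real game.

```python
from math import log2

def shannon_entropy(probabilities):
    """Information content, in bits, of an experiment with these outcome probabilities."""
    return sum(-p * log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin flip
print(shannon_entropy([0.9, 0.1]))   # about 0.47 bits: a predictable outcome
print(shannon_entropy([0.25] * 4))   # 2.0 bits: four equally likely outcomes
```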

I’d called this the information content of the experiment’s outcome. That’s an idiosyncratic term, chosen because I wanted to hide what it’s normally called. The normal name for this is the “entropy”.

To be more precise, it’s known as the “Shannon entropy”, after Claude Shannon, pioneer of the modern theory of information. However, the equation defining it looks the same as one that defines the entropy of statistical mechanics, that thing everyone knows is always increasing and somehow connected with stuff breaking down. Well, almost the same. The statistical mechanics one multiplies the sum by a constant number called the Boltzmann constant, after Ludwig Boltzmann, who did so much to put statistical mechanics in its present and very useful form. We aren’t thrown by that. The statistical mechanics entropy describes energy that is in a system but that can’t be used. It’s almost background noise, present but nothing of interest.

Is this Shannon entropy the same entropy as in statistical mechanics? This gets into some abstract grounds. If two things are described by the same formula, are they the same kind of thing? Maybe they are, although it’s hard to see what kind of thing might be shared by “how interesting the score of a basketball game is” and “how much unavailable energy there is in an engine”.

The legend has it that when Shannon was working out his information theory he needed a name for this quantity. John von Neumann, the mathematician and pioneer of computer science, suggested, “You should call it entropy. In the first place, a mathematical development very much like yours already exists in Boltzmann’s statistical mechanics, and in the second place, no one understands entropy very well, so in any discussion you will be in a position of advantage.” There are variations of the quote, but they have the same structure and punch line. The anecdote appears to trace back to an April 1961 seminar at MIT given by one Myron Tribus, who claimed to have heard the story from Shannon. I am not sure whether it is literally true, but it does express a feeling about how people understand entropy that is true.

Well, these entropies have the same form. And they’re given the same name, give or take a modifier of “Shannon” or “statistical” or some other qualifier. They’re even often given the same symbol; normally a capital S or maybe an H is used as the quantity of entropy. (H tends to be more common for the Shannon entropy, but your equation would be understood either way.)

I’m not comfortable saying they’re the same thing, though. After all, we use the same formula to calculate a batting average and to work out the average time of a commute. But we don’t think those are the same thing, at least not more generally than “they’re both averages”. These entropies measure different kinds of things. They have different units that just can’t be sensibly converted from one to another. And the statistical mechanics entropy has many definitions that not just don’t have parallels for information, but wouldn’t even make sense for information. I would call these entropies siblings, with strikingly similar profiles, but not more than that.

But let me point out something about the Shannon entropy. It is low when an outcome is predictable. If the outcome is unpredictable, presumably knowing the outcome will be interesting, because there is no guessing what it might be; that is where the entropy is highest. But an absolutely random outcome also has a high entropy, and that’s boring: there’s no reason for the outcome to be one option instead of another. Somehow, as looked at by the measure of entropy, the most interesting of outcomes and the most meaningless of outcomes blur together. There is something wondrous and strange in that.

Reading the Comics, April 20, 2015: History of Mathematics Edition


This is a bit of a broad claim, but it seems Comic Strip Master Command was thinking about the history of mathematics one lead-time ago. There’s a comic about the original invention of mathematics, and another showing off 20th century physics equations. This seems as much of the history of mathematics as one could reasonably expect from the comics page.

Mark Anderson’s Andertoons gets its traditional appearance around here with the April 17th strip. It features a bit of arithmetic that is indeed lovely but wrong.

Continue reading “Reading the Comics, April 20, 2015: History of Mathematics Edition”

Reading the Comics, January 6, 2015: First of the Year Edition


I apologize for not writing as thoughtfully about the comics this week as I’d like, but it’s been a bit of a rushed week and I haven’t had the chance to do pop-mathematics writing of the kind I like, which is part of why you aren’t right now seeing a post about goldfish. All should be back to normal soon. I’m as ever not sure which is my favorite comic of the bunch this week; I think Bewley may have the strongest, if meanest, joke in it, though as you can see by the text Candorville gave me the most to think about.

Ryan Pagelow’s Buni (December 31) saw out the year with a touch of anthropomorphic-numerals business. Never worry, 4; your time will come again.

He's been snoring not the letter 'Z', but the numeral '2'.
Daniel Beyer’s Long Story Short (January 1, 2015). Snoring humor.

Daniel Beyer’s Long Story Short (January 1) plays a little on the way a carelessly-written Z will morph so easily into a 2, and vice-versa, which serves as a reminder to the people who give out alphanumeric confirmation codes: stop using both 0’s and O’s, and 1’s and I’s, and 2’s and Z’s, in the same code already. I know in the database there’s no confusion about this but in the e-mail you sent out and in the note we wrote down at the airport transcribing this over the phone, there is. And now that it’s mentioned, why is the letter Z used to symbolize snoring? Nobody is sure, but Cecil Adams and The Straight Dope trace it back to the comics, with Rudolph Dirks’s The Katzenjammer Kids either the originator or at least the popularizer of the snoring Z.

Continue reading “Reading the Comics, January 6, 2015: First of the Year Edition”

The Arthur Christmas Problem


Since it’s the season for it I’d like to point new or new-ish readers to a couple of posts I did in 2012-13, based on the Aardman Animation film Arthur Christmas, which was just so very charming in every way. It also puts forth some good mathematical and mathematical-physics questions.

Opening the scene is “Could ‘Arthur Christmas’ Happen In Real Life?”, which begins with a scene in the movie: Arthur and Grand-Santa are stranded on a Caribbean island while the reindeer and sleigh, without them, go flying off in a straight line. This raises the question of what a straight line is if you’re on the surface of something spherical like the Earth.

“Returning To Arthur Christmas” was titled that because I’d left the subject for a couple weeks, as is my wont, and it gets a little bit more spoiler-y since the film seems to come down on the side of the reindeer moving on a path called a Great Circle. This forces us to ask another question: if the reindeer are moving around the Earth, are they moving with the Earth’s rotation, like an airplane does, or freely of it, like a satellite does?

“Arthur Christmas And The Least Common Multiple” starts by supposing that the reindeer are moving the way satellites do, independent of the Earth’s rotation, and on making some assumptions about the speed of the reindeer and the path they’re taking, works out how long Arthur and Grand-Santa would need to wait before the reindeer and sled are back if they’re lucky enough to be waiting on the equator.

“Six Minutes Off” shifts matters a little, by supposing that they’re not on the equator, which makes meeting up with the reindeer a much nastier bit of timing. If they’re willing to wait long enough the reindeer will come as close as they want to their position, but the wait can be impractically long, for example, eight years, or over five thousand years, which would really slow down the movie.

And finally “Arthur Christmas and the End of Time” wraps up matters with a bit of heady speculation about recurrence: the way that a physical system can, if the proper conditions are met, come back either to its starting point or to a condition arbitrarily close to its starting point, if you wait long enough. This offers some dazzling ideas about the really, really long-term fate of the universe, which is always a heady thought. I hope you enjoy.

Reading the Comics, December 5, 2014: Good Questions Edition


This week’s bundle of mathematics-themed comic strips has a pretty nice blend, to my tastes: about half of them get at good and juicy topics, and about half are pretty quick and easy things to describe. So, my thanks to Comic Strip Master Command for the distribution.

Bill Watterson’s Calvin and Hobbes (December 1, rerun) slips in a pretty good probability question, although the good part is figuring out how to word it: what are the chances Calvin’s Dad was thinking of 92,376,051 of all the possible numbers out there? Given that there’s infinitely many possible choices, if every one of them is equally likely to be drawn, then the chance he was thinking of that particular number is zero. But Calvin’s Dad couldn’t be picking from every possible number; all humanity, working for its entire existence, will only ever think of finitely many numbers, which is the kind of fact that humbles me when I stare too hard at it. And people, asked to pick a number, have ones they prefer: 37, for example, or 17. Christopher Miller’s American Cornball: A Laffopedic Guide To The Formerly Funny (a fine guide to jokes that you see lingering around long after they were actually funny) notes that what number people tend to pick seems to vary in time, and in the early 20th century 23 was just as funny a number as you could think of on a moment’s notice.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (December 1) is entitled “how introductory physics problems are written”, and yeah, that’s about the way that a fair enough problem gets rewritten so as to technically qualify as a word problem. I think I’ve mentioned my favorite example of quietly out-of-touch word problems, a 1970s-era probability book which asked the probability of two out of five transistors in a radio failing. That was blatantly a rewrite of a problem about a five-vacuum-tube radio (my understanding is many radios in that era used five tubes), each tube having a non-negligible chance of failing on any given day. But that’s a slightly different problem, as the original question would have made good sense when it was composed, and it became ridiculous only in the updating.

Julie Larson’s The Dinette Set (December 2) illustrates one of the classic sampling difficulties: how can something be generally true if, in your experience, it isn’t? If you make the reasonable assumption that there’s nothing exceptional about you, then, shouldn’t your experience of, say, fraction of people who exercise, or average length of commute, or neighborhood crime rate be tolerably close to what’s really going on? You could probably build an entire philosophy-of-mathematics course around this panel before even starting the question of how do you get a fair survey of a population.

Scott Hilburn’s The Argyle Sweater (December 3) tells a Roman numeral joke that actually I don’t remember encountering before. Huh.

Samson’s Dark Side Of The Horse (December 3) does some play with mathematical symbols and of course I got distracted by thinking what kind of problem Horace was working on in the first panel; it looks obvious to me that it’s something about the orbit of one body around another. In principle, it might be anything, since the great discovery of algebra is that you can replace numbers with symbols like “a” and work out relations without having to know anything about them. “G”, for example, tends to mean the gravitational constant of the universe, and “GM” makes this identification almost certain: gravitation problems need the masses of a main body, like a planet, and a smaller body, like a satellite, and that’s usually represented as either m1 and m2 or as M and m.

In orbital mechanics problems, “a” often refers to the semimajor axis — the long diameter of the ellipse the orbiting body makes — and “e” the eccentricity — a measure of how close to a circle the ellipse is (an eccentricity of zero means it’s a circle) — but the fact that there are subscripts of k makes that identification suspect: subscripts are often used to distinguish which of multiple similar things you mean to talk about, and if it’s just one body orbiting the other there’s no need for that. So what is Horace working on?

The answer is: Horace is working on an orbital perturbation problem, describing how far from the true circular orbit a satellite will drift when you consider things like atmospheric drag and the slightly non-spherical shape of the Earth. a_k is still a semimajor axis and e_k the eccentricity, but of the little changes from the original orbit, rather than of the original orbit itself. And now I wonder if Samson plucked the original symbol just because it looked so graphically pleasant, or if Samson was slipping in a further joke about the way an attractive body will alter another body’s course.

Jenny Campbell’s Flo and Friends (December 4) offers a less exciting joke: it’s a simple word problem joke, playing on the ambiguity of “calculate how many seconds there are in the year”. Now, the dull way to work this out is to multiply 60 seconds per minute times 60 minutes per hour times 24 hours per day times 365 (or 365.25, or 365.2422 if you want to start that argument) days per year. But we can do almost as well and purely in our heads, if we remember that a million seconds is almost twelve days long. How many twelve-day stretches are there in a year? Well, right about 31 — after all, the year is (nearly) 12 groups of 31 days, and therefore it’s also 31 groups of 12 days. Therefore the year is about 31 million seconds long. If we pull out the calculator we find that a 365-day year is 31,536,000 seconds, but isn’t it more satisfying to say “about 31 million seconds” like we just did?
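Here are both calculations as a quick check, just to see how close the mental estimate comes to the dull way.

```python
exact = 60 * 60 * 24 * 365   # seconds per minute, minutes per hour, hours per day, days per year
estimate = 31 * 1_000_000    # "about 31 million seconds"

print(exact)                 # 31536000
print(estimate / exact)      # roughly 0.98, so the estimate is within about two percent
```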

John Deering’s Strange Brew (December 4) took me the longest time to work out what the joke was supposed to be. I’m still not positive but I think it’s just one colleague sneering at the higher mathematics of another.

Todd the Dinosaur's abacus only goes up to 2.
Patrick Roberts’s Todd the Dinosaur (December 5) discovers there are numbers bigger than 2.

Patrick Roberts’s Todd the Dinosaur (December 5) discovers that some numbers are quite big ones, actually. There is a challenge in working with really big numbers, even if they’re usually bigger than 2. Usually we’re not interested in a number by itself, and would rather do some kind of calculation with it, and that’s boring to do too much of, but a computer can only work with so many digits at once. The average computer uses floating point arithmetic schemes which will track, at most, about 19 decimal digits, on the reasonable guess that twenty decimal digits is the difference between 3.1415926535897932384 and 3.1415926535897932385 and how often is that difference — a millionth of a millionth of a millionth of a percent — going to matter? If it does, then, you do the kind of work that gets numerical mathematicians their big paydays: using schemes that work with more digits, or chopping up a problem so that you never have to use all 86 relevant digits at once, or rewriting your calculation so that you don’t need so many digits of accuracy all at once.
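Here is a quick sketch of that limit in Python. Ordinary double-precision floats can’t tell those two values apart, while the standard decimal module, one of those schemes that carries more digits, can.

```python
from decimal import Decimal, getcontext

# As ordinary floats, the two values round to the same 64-bit number.
print(3.1415926535897932384 == 3.1415926535897932385)   # True

# Carrying more digits, the difference in the twentieth digit survives.
getcontext().prec = 25
a = Decimal("3.1415926535897932384")
b = Decimal("3.1415926535897932385")
print(a == b)   # False
print(b - a)    # 1E-19
```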

David Bloomier, former math teacher, has the number 4 car, identified by addition, division, square root, and finger count.
Daniel Beyer’s Offbeat Comics (December 5) gives four ways to represent the number 4. Five, if you count the caption.

Daniel Beyer’s Offbeat Comics (December 5) gives a couple of ways to express the number 4 — including, look closely, holding up fingers — as part of a joke about the driver being a former mathematics teacher.

Greg Cravens’s The Buckets (December 5) is the old, old, old joke about never using algebra in real life. Do English teachers get this same gag about never using the knowledge of how to diagram sentences? In any case, I did use my knowledge of sentence-diagramming, and for the best possible application: I made fun of a guy on the Internet with it.

I advise against reading the comments — I mean, that’s normally good advice, but comic strips attract readers who want to complain about how stupid kids are anymore and strips that mention education give plenty of grounds for it — but I noticed one of the early comments said “try to do any repair at home without some understanding of it”. I like the claim, but, I can’t think of any home repair I’ve done that’s needed algebra. The most I’ve needed has been working out the area of a piece of plywood, but if multiplying length by width is algebra then we’ve badly debased the term. Even my really ambitious project, building a PVC-frame pond cover, isn’t going to be one that uses algebra unless we take an extremely generous view of the subject.

Reading The Comics, November 9, 2014: Finally, A Picture Edition


I knew if I kept going long enough some cartoonist not on Gocomics.com would have to mention mathematics. That finally happened with one from Comics Kingdom, and then one from the slightly freak case of Rick Detorie’s One Big Happy. Detorie’s strip is on Gocomics.com, but a rerun from several years ago. He has a different one that runs on the normal daily pages. This is for sound economic reasons: actual newspapers pay much better than the online groupings of them (considering how cheap Comics Kingdom and Gocomics are for subscribers I’m not surprised) so he doesn’t want his current strips run on Gocomics.com. As for why his current strips do appear on, for example, the fairly good online comics page of AZcentral.com, that’s a good question, and one that deserves a full answer.

The psychiatric patient is looking for something in the middle of curved space-time.
Vic Lee’s Pardon My Planet for the 6th of November, 2014.

Vic Lee’s Pardon My Planet (November 9), which broke the streak of Comics Kingdom not making it into these pages, builds around a quote from Einstein I never heard of before but which sounds like the sort of vaguely inspirational message that naturally attaches to famous names. The patient talks about the difficulty of finding something in “the middle of four-dimensional curved space-time”, although properly speaking it could be tricky finding anything within a bounded space, whether it’s curved or not. The generic mathematics problem you’d build from this would be to have some function whose maximum in a region you want to find (if you want the minimum, just multiply your function by minus one and then find the maximum of that), and there’s multiple ways to do that. One obvious way is the mathematical equivalent of getting to the top of a hill by starting from wherever you are and walking the steepest way uphill. Another way is to just amble around, picking your next direction at random, always taking directions that get you higher and usually but not always refusing directions that bring you lower. You can probably see some of the obvious problems with either approach, and this is why finding the spot you want can be harder than it sounds, even if it’s easy to get started looking.
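Here is a rough sketch in Python of the first approach, walking uphill step by step. The hill itself is a made-up function with its peak at a known spot, so you can watch the search wander toward it; nothing here comes from the comic or from any real problem.

```python
import numpy as np

def hill_climb(f, start, step=0.1, iterations=1000):
    """Walk uphill on f by repeatedly moving to the best nearby point."""
    x = np.array(start, dtype=float)
    directions = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
                  np.array([0.0, 1.0]), np.array([0.0, -1.0])]
    for _ in range(iterations):
        candidates = [x + step * d for d in directions]
        best = max(candidates, key=f)
        if f(best) > f(x):
            x = best       # found higher ground; move there
        else:
            step /= 2      # no neighbor is higher; look more closely
    return x

# A made-up hill whose single peak is at (1, 2).
height = lambda p: -((p[0] - 1) ** 2 + (p[1] - 2) ** 2)
print(hill_climb(height, start=[0.0, 0.0]))   # ends up very near [1, 2]
```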

Ruben Bolling’s Super Fun-Pak Comix (November 6), which is technically a rerun since the Super Fun-Pak Comix have been a longrunning feature in his Tom The Dancing Bug pages, is primarily a joke about the Heisenberg Uncertainty Principle, that there is a limit to what information one can know about the universe. This limit can be understood mathematically, though. The wave formulation of quantum mechanics describes everything there is to know about a system in terms of a function, called the state function and normally designated Ψ, the value of which can vary with location and time. Determining the location or the momentum or anything about the system is done by a process called “applying an operator to the state function”. An operator is a function that turns one function into another, which sounds like pretty sophisticated stuff until you learn that, like, “multiply this function by minus one” counts.

In quantum mechanics anything that can be observed has its own operator, normally a bit trickier than just “multiply this function by minus one” (although some are not very much harder!), and applying that operator to the state function is the mathematical representation of making that observation. If you want to observe two distinct things, such as location and momentum, that’s a matter of applying the operator for the first thing to your state function, and then taking the result of that and applying the operator for the second thing to it. And here’s where it gets really interesting: it doesn’t have to, but it can depend on what order you do this in, so that you get different results applying the first operator and then the second from what you get applying the second operator and then the first. The operators for location and momentum are such a pair, and the result is that we can’t know to arbitrary precision both at once. But there are pairs of operators for which it doesn’t make a difference. You could, for example, know both the momentum and the electrical charge of Scott Baio simultaneously to as great a precision as your Scott-Baio-momentum-and-electrical-charge-determination needs are, and the mathematics will back you up on that.
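Here is a small illustration in Python of that order-dependence. The two-by-two matrices below are stand-in operators chosen for convenience, not the real position and momentum operators, which live in infinite-dimensional spaces.

```python
import numpy as np

# Two stand-in "operators" that do not commute: the order of application matters.
A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0, 0],
              [1, 0]])
print(A @ B)   # [[1, 0], [0, 0]]
print(B @ A)   # [[0, 0], [0, 1]]  -- a different answer

# ...and a pair that does commute, so the order of "observation" would not matter.
C = np.diag([2, 3])
D = np.diag([5, 7])
print(np.array_equal(C @ D, D @ C))   # True
```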

Ruben Bolling’s Tom The Dancing Bug (November 6), meanwhile, was a rerun from a few years back when it looked like the Large Hadron Collider might never get to working and the glitches started seeming absurd, as if an enormous project involving thousands of people and millions of parts could ever suffer annoying setbacks because not everything was perfectly right the first time around. There was an amusing notion going around, illustrated by Bolling nicely enough, that perhaps the results of the Large Hadron Collider would be so disastrous somehow that the universe would in a fit of teleological outrage prevent its successful completion. It’s still a funny idea, and a good one for science fiction stories: Isaac Asimov used the idea in a short story dubbed “Thiotimoline and the Space Age”, published in 1959, in which attempts to manipulate a compound that dissolves before water is added to it might have accidentally sent hurricanes Carol, Edna, and Diane into New England in 1954 and 1955.

Chip Sansom’s The Born Loser (November 7) gives me a bit of a writing break by just being a pun strip that you can save for next March 14.

Dan Thompson’s Brevity (November 7), out of reruns, is another pun strip, though with giant monsters.

Francesco Marciuliano’s Medium Large (November 7) is about two of the fads of the early 80s, that of turning everything into a breakfast cereal somehow and that of playing with Rubik’s Cubes. Rubik’s Cubes have long been loved by a certain streak of mathematicians because they are a nice tangible representation of group theory — the study of things that can do things that look like addition without necessarily being numbers — that’s more interesting than just picking up a square and rotating it one, two, three, or four quarter-turns. I still think it’s easier to just peel the stickers off (and yet, the die-hard Rubik’s Cube Popularizer can point out there are good questions about parity you can represent by working out the rules of how to peel off only some stickers and put them back on without being detected).

Ruthie questions whether she'd be friends with people taking carrot sticks from her plate, or whether anyone would take them in the first place. Word problems can be tricky things.
Rick Detorie’s One Big Happy for the 9th of November, 2014.

Rick Detorie’s One Big Happy (November 9), and I’m sorry, readers about a month in the future from now, because that link’s almost certainly expired, is another entry in the subject of word problems resisted because the thing used to make the problem seem less abstract has connotations that the student doesn’t like.

Fred Wagner’s Animal Crackers (November 9) is your rare comic that could be used to teach positional notation, although when you pay attention you realize it doesn’t actually require that.

Mac and Bill King’s Magic In A Minute (November 9) shows off a mathematically-based sleight-of-hand trick, describing a way to make it look like you’re reading your partner-monkey’s mind. This is probably a nice prealgebra problem to work out just why it works. You could also consider this a toe-step into the problem of encoding messages, finding a way to send information about something in a way that the original information can be recovered, although obviously this particular method isn’t terribly secure for more than a quick bit of stage magic.

Reading the Comics, October 14, 2014: Not Talking About Fourier Transforms Edition


I know that it’s disappointing to everyone, given that one of the comic strips in today’s roundup of mathematically-themed ones gives me such a good excuse to explain what Fourier Transforms are and why they’re interesting and well worth the time learning. But I’m not going to do that today. There’s enough other things to think about and besides you probably aren’t going to need Fourier Transforms in class for a couple more weeks yet. For today, though, no, I’ll go on to other things instead. Sorry to disappoint.

Glen McCoy and Gary McCoy’s The Flying McCoys (October 9) jokes about how one can go through life without ever using algebra. I imagine other departments get this, too, like, “I made it through my whole life without knowing anything about US History!” or “And did any of that time I spent learning Art do anything for me?” I admit a bias here: I like learning stuff even if it isn’t useful because I find it fun to learn stuff. I don’t insist that you share in finding that fun, but I am going to look at you weird if you feel some sense of triumph about not learning stuff.

Tom Thaves’s Frank and Ernest (October 10) does a gag about theoretical physics, and string theory, which is that field where physics merges almost imperceptibly into mathematics and philosophy. The rough idea of string theory is that it’d be nice to understand why the particles we actually observe exist, as opposed to things that we could imagine existing but that don’t seem to — like, why couldn’t there be something that’s just like an electron, but two times as heavy? Why couldn’t there be something with the mass of a proton but three-quarters the electric charge? — by supposing that what we see are the different natural modes of behavior of some more basic construct, these strings. A natural mode is, well, what something will do if it’s got a bunch of energy and is left to do what it will with it.

Probably the most familiar kind of natural mode is how if you strike a glass or a fork or such it’ll vibrate, if we’re lucky at a tone we can hear, and if we’re really lucky, at one that sounds good. Things can have more than one natural mode. String theory hopes to explain all the different kinds of particles, and the different ways in which they interact, as being different modes of a hopefully small and reasonable variety of “strings”. It’s a controversial theory because it’s been very hard to find experiments that prove, or soundly rule out, a particular model of it as a representation of reality, and the models require invoking exotic things like more dimensions of space than we notice. This could reflect string theory being an intriguing but ultimately non-physical model of the world; it could reflect that we just haven’t found the right way to go about proving it yet.

Charles Schulz’s Peanuts (October 10, originally run October 13, 1967) has Sally press Charlie Brown into helping her with her times tables. She does a fair bit of guessing, which isn’t by itself a bad approach. For one, if you don’t know the exact answer, but you can pin down a lower and an upper bound, you’re doing work that might be all you really need and you’re doing work that may give you a hint how to get what you really want. And for that matter, guessing at a solution can be the first step to finding one. One of my favorite areas of mathematics, Monte Carlo methods, finds solutions to complicated problems by starting with a wild guess and making incremental refinements. It’s not guaranteed to work, but when it does, it gets extremely good solutions and with a remarkable ease. Granted, this doesn’t really help the times tables much.

On the 11th (originally run October 14, 1967), Sally incidentally shows the hard part of refining guesses about a solution; there has to be some way of telling whether you’re getting warmer. In your typical problem for a Monte Carlo approach, for example, you have some objective function — say, the distance travelled by something going along a path, or the total energy of a system — and can measure whether an attempted change is improving your solution — say, minimizing your distance or reducing the potential energy — or is making it worse. Typically, you take any refinement that makes the provisional answer better, and reject most, but not all, refinements that make the provisional answer worse.
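As a sketch of that accept-and-occasionally-reject process, here is a bare-bones Monte Carlo refinement in Python. The objective function is made up purely for the example, and the cooling rule is one arbitrary choice among many.

```python
import math
import random

def refine(objective, guess, step=0.5, temperature=1.0, iterations=5000):
    """Monte Carlo refinement: keep improvements, occasionally keep a worse answer."""
    current = guess
    for _ in range(iterations):
        trial = current + random.uniform(-step, step)
        change = objective(trial) - objective(current)
        # Take any refinement that helps; take a harmful one only occasionally.
        if change < 0 or random.random() < math.exp(-change / temperature):
            current = trial
        temperature *= 0.999   # grow pickier as the search goes on
    return current

# A made-up objective to minimize, lowest at x = 3.
objective = lambda x: (x - 3) ** 2
print(refine(objective, guess=0.0))   # ends up close to 3
```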

That said, “Overly-Eight” is one of my favorite made-up numbers. A “Quillion” is also a pretty good one.

Jef Mallett’s Frazz (October 12) isn’t explicitly about mathematics, but it’s about mathematics. “Why do I have to show my work? I got the right answer?” There are good responses on two levels, the first of which is practical, and which blends into the second: if you give me-the-instructor the wrong answer then I can hopefully work out why you got it wrong. Did you get it wrong because you made a minor but ultimately meaningless slip in your calculations, or did you get it wrong because you misunderstood the problem and did not know what kind of calculation to do? Error comes in many forms; some are boring — wrote the wrong number down at the start and never noticed, missed a carry — some are revealing — doesn’t know the order of operations, doesn’t know how the chain rule applies in differentiation — and some are majestic.

These last are the great ones, the errors that I love seeing, even though they’re the hardest to give a fair grade to. Sometimes a student will go off on a tack that doesn’t look anything like what we did in class, or could have reasonably seen in the textbook, but that shows some strange and possibly mad burst of creative energy. Usually this is rubbish and reflects the student flailing around, but, sometimes the student is on to something, might be trying an approach that, all right, doesn’t work here, but which if it were cleaned of its logical flaws might be a new and different way to work out the problem.

And that blends to the second reason: finding answers is nice enough and if you’re good at that, I’m glad, but is it all that important? We have calculators, after all. What’s interesting, and what is really worth learning in mathematics, is how to find answers: what approaches can efficiently be used on this problem, and how do you select one, and how do you do it to get a correct answer? That’s what’s really worth learning, and what is being looked for when the instruction is to show your work. Caulfield had the right answer, great, but is it because he knew a good way to work out the problem, or is it because he noticed the answer was left on the blackboard from the earlier class when this one started, or is it because he guessed and got lucky, or is it because he thought of a clever new way to solve the problem? If he did have a clever new way to do the problem, shouldn’t other people get to see it? Coming up with clever new ways to find answers is the sort of thing that gets you mathematical immortality as a pioneer of some approach that gets mysteriously named for somebody else.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (October 14) makes fun of tenure, the process by which people with a long track record of skill, talent, and drive are rewarded with no longer having to fear being laid off or fired except for cause. (Though I should sometime write about Fourier Transforms, as they’re rather neat.)

'Albert, stop daydreaming and eat your soup', which is alphabet soup, apparently, and where you could find E = m c c if you looked just right.
Margaret Shulock’s Six Chix comic for the 14th of October, 2014: Albert Einstein is evoked alongside the origins of his famous equation about c and m and soup and stuff.

Margaret Shulock’s turn at Six Chix (October 14) (the comic strip is shared among six women because … we couldn’t have six different comic strips written and drawn by women all at the same time, I guess?) evokes the classic image of Albert Einstein, the genius, drawing his famous equation out of the ordinary stuff of daily life. (I snark a little; Shulock is also the writer for Apartment 3-G, to the extent that things can be said to be written in Apartment 3-G.)

Reading the Comics, September 28, 2014: Punning On A Sunday Edition


I honestly don’t intend this blog to become nothing but talk about the comic strips, but then something like this Sunday happens where Comic Strip Master Command decided to send out math joke priority orders and what am I to do? And here I had a wonderful bit about the natural logarithm of 2 that I meant to start writing sometime soon. Anyway, for whatever reason, there’s a lot of punning going on this time around; I don’t pretend to explain that.

Jason Poland’s Robbie and Bobby (September 25) puns off of a “meth lab explosion” in a joke that I’ve seen passed around Twitter and the like but not in a comic strip, possibly because I don’t tend to read web comics until they get absorbed into the Gocomics.com collective.

Brian Boychuk and Ron Boychuk’s The Chuckle Brothers (September 26) shows how an infinity pool offers the chance to finally, finally, do a one-point perspective drawing just like the art instruction book says.

Bill Watterson’s Calvin and Hobbes (September 27, rerun) wrapped up the latest round of Calvin not learning arithmetic with a gag about needing to know the difference between the numbers of things and the values of things. It also surely helps the confusion that the (United States) dime is a tiny coin, much smaller in size than the penny or nickel that it far out-values. I’m glad I don’t have to teach coin values to kids.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (September 27) mentions Lagrange points. These are mathematically (and physically) very interesting because they come about from what might be the first interesting physics problem. If you have two objects in the universe, attracting one another gravitationally, then you can describe their behavior perfectly, using just freshman or even high school calculus. For that matter, describing their behavior is practically what Isaac Newton invented his calculus to do.

Add in a third body, though, and you’ve suddenly created a problem that just can’t be done by freshman calculus, or really, done perfectly by anything but really exotic methods. You’re left with approximations, analytic or numerical. (Karl Fritiof Sundman proved in 1912 that one could create an infinite series solution, but it’s not a usable solution. To get a desired accuracy requires so many terms and so much calculation that you’re better off not using it. This almost sounds like the classical joke about mathematicians, coming up with solutions that are perfect but unusable. It is the most extreme case of a possible-but-not-practical solution I’m aware of, if stories I’ve heard about its convergence rate are accurate. I haven’t tried to follow the technique myself.)

But just because you can’t solve every problem of a type doesn’t mean you can’t solve some of them, and the ones you do solve might be useful anyway. Joseph-Louis Lagrange did that, studying the problem of one large body — like a sun, or a planet — and one middle-sized body — a planet, or a moon — and one tiny body — like an asteroid, or a satellite. If the middle-sized body is orbiting the large body in a nice circular orbit, then, there are five special points, dubbed the Lagrange points. A satellite that’s at one of those points (with the right speed) will keep on orbiting at the same rotational speed that the middle body takes around the large body; that is, the system will turn as if the large, middle, and tiny bodies were fixed in place, relative to each other.

Two of these spots, dubbed numbers 4 and 5, are stable: if your tiny body is not quite in the right location that’s all right, because it’ll stay nearby, much in the same way that if you roll a ball into a pit it’ll stay in the pit. But three of these spots, numbers 1, 2, and 3, are unstable: if your tiny body is not quite on those spots, it’ll fall away, in much the same way that if you set a ball on the peak of a roof it’ll roll off one way or another.

When Lagrange noticed these points there wasn’t any particular reason to think of them as anything but a neat mathematical construct. But the points do exist, and they can be stable even if the middle-sized body doesn’t have a perfectly circular orbit, or even if there are other planets in the universe, which throws the nice simple calculations off a bit. Something like 1700 asteroids are known to exist in the number 4 and 5 Lagrange points for the Sun and Jupiter, and there are a handful known for Saturn and Neptune, and apparently at least five known for Mars. For Earth apparently there’s just the one known to exist, catchily named 2010 TK7, discovered in October 2010, although I’d be surprised if that were the only one. They’re just small.

Professor Peter Peddle has the crazy idea of studying boxing scientifically and preparing strategy accordingly.
Elliot Caplin and John Cullen Murphy’s Big Ben Bolt, from the 23rd of August, 1953 (rerun the 28th of September, 2014).

Elliot Caplin and John Cullen Murphy’s Big Ben Bolt (September 28, originally run August 23, 1953) has been running a tale in its Sunday strips about a mathematics professor, Peter Peddle, who’s threatening to revolutionize Big Ben Bolt’s boxing world by reducing it to mathematical abstraction; past Sunday strips have even shown the rather stereotypically meek-looking professor overwhelming much larger boxers. The mathematics described here is nonsense, of course, but it’d be asking a bit much of the comic strip writers to have a plausible mathematical description of the perfect boxer, after all.

But it’s hard for me anyway to not notice that the professor’s approach is really hard to gainsay. The past generation of baseball, particularly, has been revolutionized by a very mathematical, very rigorous bit of study, looking at questions like how many pitches can a pitcher actually throw before he loses control, and where a batter is likely to hit based on past performance (of this batter and of batters in general), and how likely is this player to have a better or a worse season if he’s signed on for another year, and how likely is it he’ll have a better enough season than some cheaper or more promising player? Baseball is extremely well structured to ask these kinds of questions, with football almost as good for it — else there wouldn’t be fantasy football leagues — and while I am ignorant of modern boxing, I would be surprised if a lot of modern boxing strategy weren’t being studied in Professor Peddle’s spirit.

Eric the Circle (September 28), this one by Griffinetsabine, goes to the Shapes Singles Bar for a geometry pun.

Bill Amend’s FoxTrot (September 28) (and not a rerun; the strip still runs new on Sundays) jumps on the Internet Instructional Video bandwagon that I’m sure exists somewhere, with child prodigy Jason Fox having the idea that he could make mathematics instruction popular enough to earn millions of dollars. His instincts are probably right, too: instructional videos that feature someone who looks cheerful, seems to be having fun, and maybe acts a little crazy — well, let’s say eccentric — are probably the ones that will be most watched, at least. It’s fun to see people who are enjoying themselves, and the odder they act the better up to a point. I kind of hate to point out, though, that Jason Fox in the comic strip is supposed to be ten years old, implying that (this year, anyway) he was born nine years after Bob Ross died. I know that nothing ever really goes away anymore, but, would this be a pop culture reference that makes sense to Jason?

Tom Thaves’s Frank and Ernest (September 28) sets up the idea of Euclid as a playwright, offering a string of geometry puns.

Jef Mallet’s Frazz (September 28) wonders about why trains show up so often in story problems. I’m not sure that they do, actually — haven’t planes and cars taken their place here, too? — although the reasons aren’t that obscure. Questions about the distance between things changing over time let you test a good bit of arithmetic and algebra while being naturally about stuff it’s reasonable to imagine wanting to know. What more does the homework-assigner want?

Zach Weinersmith’s Saturday Morning Breakfast Cereal (September 28) pops back up again with the prospect of blowing one’s mind, and it is legitimately one of those amazing things, that e^{i \pi} = -1 . It is a remarkable relationship between a string of numbers each of which is mind-blowing in its own way — negative 1, and pi, and the base of the natural logarithms e, and dear old i (which, multiplied by itself, is equal to negative 1) — and here they are all bundled together in one, quite true, relationship. I do have to wonder, though, whether anyone who would in a social situation like this understand being told “e raised to the i times pi power equals negative one”, without the framing of “we’re talking now about exponentials raised to imaginary powers”, wouldn’t have already encountered this and had some of the mind-blowing potential worn off.
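If you’d like to see that relationship checked numerically, here’s a minimal sketch using Python’s standard complex-math module; it’s only an illustration, not anything from the strip.

```python
# A quick numerical check of e^(i*pi) = -1 with Python's complex-math module.
# Floating point leaves a tiny imaginary residue, which is why isclose() is used.
import cmath

value = cmath.exp(1j * cmath.pi)
print(value)                     # approximately (-1+1.2246e-16j)
print(cmath.isclose(value, -1))  # True
```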

The Geometry of Thermodynamics (Part 2)


I should mention — I should have mentioned earlier, but it has been a busy week — that CarnotCycle has published the second part of “The Geometry of Thermodynamics”. This is a bit of a tougher read than the first part, admittedly, but it’s still worth reading. The essay reviews how James Clerk Maxwell — yes, that Maxwell — developed the thermodynamic relationships that would have made him famous in physics if it weren’t for his work in electromagnetism that ultimately overthrew the Newtonian paradigm of space and time.

The ingenious thing is that the best part of this work is done on geometric grounds, on thinking of the spatial relationships between quantities that describe how a system moves heat around. “Spatial” may seem a strange word to describe this since we’re talking about things that don’t have any direct physical presence, like “temperature” and “entropy”. But if you draw pictures of how these quantities relate to one another, you have curves and parallelograms and figures that follow the same rules of how things fit together that you’re used to from ordinary everyday objects.

A wonderful side point is a touch of human fallibility from a great mind: in working out his relations, Maxwell misunderstood just what was meant by “entropy”, and needed correction by the at-least-as-great Josiah Willard Gibbs. Many people don’t quite know what to make of entropy even today, and Maxwell was working when the word was barely a generation away from being coined, so it’s quite reasonable he might not understand a term that was relatively new and still getting its precise definition. It’s surprising nevertheless to see.

carnotcycle

James Clerk Maxwell and the geometrical figure with which he proved his famous thermodynamic relations

Historical background

Every student of thermodynamics sooner or later encounters the Maxwell relations – an extremely useful set of statements of equality among partial derivatives, principally involving the state variables P, V, T and S. They are general thermodynamic relations valid for all systems.

The four relations originally stated by Maxwell are easily derived from the (exact) differential relations of the thermodynamic potentials:

dU = T dS – P dV   ⇒   (∂T/∂V)_S = –(∂P/∂S)_V
dH = T dS + V dP   ⇒   (∂T/∂P)_S = (∂V/∂S)_P
dG = –S dT + V dP   ⇒   –(∂S/∂P)_T = (∂V/∂T)_P
dA = –S dT – P dV   ⇒   (∂S/∂V)_T = (∂P/∂T)_V

This is how we obtain these Maxwell relations today, but it disguises the history of their discovery. The thermodynamic state functions H, G and A were yet to…

View original post 1,262 more words

Reading the Comics, August 16, 2014: Saturday Morning Breakfast Cereal Edition


Zach Weinersmith’s Saturday Morning Breakfast Cereal is a long-running and well-regarded web comic that I haven’t paid much attention to because I don’t read many web comics. XKCD, Newshounds, and a couple others are about it. I’m not opposed to web comics, mind you, I just don’t get around to following them typically. But Saturday Morning Breakfast Cereal started running on Gocomics.com recently, and Gocomics makes it easy to start adding comics, and I did, and that’s served me well for the mathematical comics collections since it’s been a pretty dry spell. I bet it’s the summer vacation.

Saturday Morning Breakfast Cereal (July 30) seems like a reach for inclusion in mathematical comics since its caption is “Physicists make lousy firemen” and it talks about the action of a fire — and of the “living things” caught in the fire — as processes producing wobbling and increases in disorder. That’s an effort at describing a couple of ideas, the first that the temperature of a thing is connected to the speed at which the molecules making it up are moving, and the second that the famous entropy is a never-decreasing quantity. We get these notions from thermodynamics and particularly the attempt to understand physically important quantities like heat and temperature in terms of particles — which have mass and position and momentum — and their interactions. You could write an entire blog about entropy and probably someone does.

Randy Glasbergen’s Glasbergen Cartoons (August 2) uses the word-problem setup for a strip of “Dog Math” and tries to remind everyone teaching undergraduates the quotient rule that it really could be worse, considering.

Nate Fakes’s Break of Day (August 4) takes us into an anthropomorphized world that isn’t numerals for a change, to play on the idea that skill in arithmetic is evidence of particular intelligence.

Jiggs tries to explain addition to his niece, and learns his brother-in-law is his brother-in-law.
George McManus’s _Bringing Up Father_, originally run the 12th of April, 1949.

George McManus’s Bringing Up Father (August 11, rerun from April 12, 1949) goes to the old motif of using money to explain addition problems. It’s not a bad strategy, of course: in a way, arithmetic is one of the first abstractions one does, in going from the idea that a hundred of something added to a hundred fifty of something will yield two hundred fifty of that thing, and it doesn’t matter what that something is: you’ve abstracted out the ideas of “a hundred plus a hundred fifty”. In algebra we start to think about whether we can add together numbers without knowing what one or both of the numbers are — “x plus y” — and later still we look at adding together things that aren’t necessarily numbers.

And back to Saturday Morning Breakfast Cereal (August 13), which has a physicist type building a model of his “lack of dates” based on random walks and, his colleague objects, “only works if we assume you’re an ideal gas molecule”. But models are often built on assumptions that might, taken literally, be nonsensical, like imagining the universe to have exactly three elements in it, supposing that people never act against their maximal long-term economic gain, or — to summon a traditional mathematics/physics joke — assuming a spherical cow. The point of a model is to capture some interesting behavior, and avoid the complicating factors that can’t be dealt with precisely or which don’t relate to the behavior being studied. Choosing how to simplify is the skill and art that earns mathematicians the big money.

And then for August 16, Saturday Morning Breakfast Cereal does a binary numbers joke. I confess my skepticism that there are any good alternate-base-number jokes, but you might like them.

Combining Matrices And Model Universes


I would like to resume talking about matrices and really old universes and the way nucleosynthesis in these model universes causes atoms to keep settling down to a peculiar but unchanging distribution of stuff.

I’d already described how a matrix offers a nice way to organize elements, and in ways that encode information about the context of the elements by where they’re placed. That’s useful and saves some writing, certainly, although by itself it’s not that interesting. Matrices start to get really powerful when, first, the elements being stored are things on which you can do something like arithmetic with pairs of them. Here I mostly just mean that you can add together two elements, or multiply them, and get back something meaningful.

This typically means that the matrix is made up of a grid of numbers, although that isn’t actually required, just, really common if we’re trying to do mathematics.

Then you get the ability to add together and multiply together the matrices themselves, turning pairs of matrices into some new matrix, and building something that works a lot like arithmetic on these matrices.

Adding one matrix to another is done in almost the obvious way: add the element in the first row, first column of the first matrix to the element in the first row, first column of the second matrix; that’s the first row, first column of your new matrix. Then add the element in the first row, second column of the first matrix to the element in the first row, second column of the second matrix; that’s the first row, second column of the new matrix. Add the element in the second row, first column of the first matrix to the element in the second row, first column of the second matrix, and put that in the second row, first column of the new matrix. And so on.

This means you can only add together two matrices that are the same size — the same number of rows and of columns — but that doesn’t seem unreasonable.
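If it helps to see that element-by-element rule another way, here is a minimal sketch in Python, using plain nested lists rather than any matrix library:

```python
# Element-by-element addition of two same-sized matrices, written with plain
# nested lists rather than a matrix library.
def add_matrices(A, B):
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("matrices must be the same size to add them")
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[10, 20], [30, 40]]
print(add_matrices(A, B))   # [[11, 22], [33, 44]]
```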

You can also do something called scalar multiplication of a matrix, in which you multiply every element in the matrix by the same number. A scalar is just a number that isn’t part of a matrix. This multiplication is useful, not least because it lets us talk about how to subtract one matrix from another: to find the difference of the first matrix and the second, scalar-multiply the second matrix by -1, and then add the first to that product. But you can do scalar multiplication by any number, by two or minus pi or by zero if you feel like it.
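And here is the scalar multiplication, with subtraction built from it by adding the (-1) multiple of the second matrix, sketched the same way; again a toy example, not a library implementation:

```python
# Scalar multiplication, and subtraction built from it by adding the (-1)
# multiple of the second matrix, as described above.
def scalar_multiply(r, A):
    return [[r * entry for entry in row] for row in A]

def subtract_matrices(A, B):
    neg_B = scalar_multiply(-1, B)
    return [[A[i][j] + neg_B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[10, 20], [30, 40]]
print(scalar_multiply(2, A))     # [[2, 4], [6, 8]]
print(subtract_matrices(A, B))   # [[-9, -18], [-27, -36]]
```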

I should say something about notation. When we want to write out these kinds of operations efficiently, of course, we turn to symbols to represent the matrices. We can, in principle, use any symbols, but by convention a matrix usually gets represented with a capital letter, A or B or M or P or the like. So to add matrix A to matrix B, with the result being matrix C, we can write out the equation “A + B = C”, which is about as simple as we could hope to see. Scalars are normally written in lowercase letters, often Greek letters, if we don’t know what the number is, so that the scalar multiplication of the number r and the matrix A would be the product “rA”, and we could write the difference between matrix A and matrix B as “A + (-1)B” or “A – B”.

Matrix multiplication, now, that is done by a process that sounds like doubletalk, and it takes a while of practice to do it right. But there are good reasons for doing it that way and we’ll get to one of those reasons by the end of this essay.

To multiply matrix A and matrix B together, we do multiply various pairs of elements from both matrix A and matrix B. The surprising thing is that we also add together sets of these products, per this rule.

Take the element in the first row, first column of A, and multiply it by the element in the first row, first column of B. Add to that the product of the element in the first row, second column of A and the second row, first column of B. Add to that total the product of the element in the first row, third column of A and the third row, first column of B, and so on. When you’ve run out of columns of A and rows of B, this total is the first row, first column of the product of the matrices A and B.

Plenty of work. But we have more to do. Take the product of the element in the first row, first column of A and the element in the first row, second column of B. Add to that the product of the element in the first row, second column of A and the element in the second row, second column of B. Add to that the product of the element in the first row, third column of A and the element in the third row, second column of B. And keep adding those up until you’re out of columns of A and rows of B. This total is the first row, second column of the product of matrices A and B.

This does mean that you can multiply matrices of different sizes, provided the first one has as many columns as the second has rows. And the product may be a completely different size from the first or second matrices. It also means it might be possible to multiply matrices in one order but not the other: if matrix A has four rows and three columns, and matrix B has three rows and two columns, then you can multiply A by B, but not B by A.
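Here is that row-times-column rule as a minimal Python sketch: the entry in row i, column j of the product is the sum over k of the products A[i][k] times B[k][j]. The matrices used are just made-up numbers matching the sizes mentioned above.

```python
# The row-times-column rule: entry (i, j) of the product is the sum over k
# of A[i][k] * B[k][j]. A must have as many columns as B has rows.
def multiply_matrices(A, B):
    rows_A, cols_A = len(A), len(A[0])
    rows_B, cols_B = len(B), len(B[0])
    if cols_A != rows_B:
        raise ValueError("A must have as many columns as B has rows")
    return [[sum(A[i][k] * B[k][j] for k in range(cols_A))
             for j in range(cols_B)]
            for i in range(rows_A)]

A = [[1, 2, 3],          # four rows, three columns
     [4, 5, 6],
     [7, 8, 9],
     [10, 11, 12]]
B = [[1, 0],             # three rows, two columns
     [0, 1],
     [1, 1]]
print(multiply_matrices(A, B))   # four rows, two columns
# multiply_matrices(B, A) would raise ValueError: the sizes don't line up.
```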

My recollection on learning this process was that this was crazy, and the workload ridiculous, and I imagine people who get this in Algebra II, and don’t go on to using mathematics later on, remember the process as nothing more than an unpleasant blur of doing a lot of multiplying and addition for some reason or other.

So here is one of the reasons why we do it this way. Let me define two matrices:

A = \begin{pmatrix} 3/4 & 0 & 2/5 \\ 1/4 & 3/5 & 2/5 \\ 0 & 2/5 & 1/5 \end{pmatrix}

B = \begin{pmatrix} 100 \\ 0 \\ 0 \end{pmatrix}

Then matrix A times B is

AB = \begin{pmatrix} 3/4 \cdot 100 + 0 \cdot 0 + 2/5 \cdot 0 \\ 1/4 \cdot 100 + 3/5 \cdot 0 + 2/5 \cdot 0 \\ 0 \cdot 100 + 2/5 \cdot 0 + 1/5 \cdot 0 \end{pmatrix} = \begin{pmatrix} 75 \\ 25 \\ 0 \end{pmatrix}

You’ve seen those numbers before, of course: the matrix A contains the probabilities I put in my first model universe to describe the chances that over the course of a billion years a hydrogen atom would stay hydrogen, or become iron, or become uranium, and so on. The matrix B contains the original distribution of atoms in the toy universe, 100 percent hydrogen and nothing else. And the product of A and B was exactly the distribution after that first billion years: 75 percent hydrogen, 25 percent iron, nothing uranium.

If we multiply the matrix A by that product again — well, you should expect we’re going to get the distribution of elements after two billion years, that is, 56.25 percent hydrogen, 33.75 percent iron, 10 percent uranium, but let me write it out anyway to show:

\begin{pmatrix} 3/4 & 0 & 2/5 \\ 1/4 & 3/5 & 2/5 \\ 0 & 2/5 & 1/5 \end{pmatrix}\begin{pmatrix} 75 \\ 25 \\ 0 \end{pmatrix} = \begin{pmatrix} 3/4 \cdot 75 + 0 \cdot 25 + 2/5 \cdot 0 \\ 1/4 \cdot 75 + 3/5 \cdot 25 + 2/5 \cdot 0 \\ 0 \cdot 75 + 2/5 \cdot 25 + 1/5 \cdot 0 \end{pmatrix} = \begin{pmatrix} 56.25 \\ 33.75 \\ 10 \end{pmatrix}

And if you don’t know just what would happen if we multiplied A by that product, you aren’t paying attention.

This also gives a reason why matrix multiplication is defined this way. The operation captures neatly the operation of making a new thing — in the toy universe case, hydrogen or iron or uranium — out of some combination of fractions of an old thing — again, the former distribution of hydrogen and iron and uranium.

Or here’s another reason. Since this matrix A has three rows and three columns, you can multiply it by itself and get a matrix of three rows and three columns out of it. That matrix — which we can write as A^2 — then describes how two billion years of nucleosynthesis would change the distribution of elements in the toy universe. A times A times A would give three billion years of nucleosynthesis; A^{10} ten billion years. The actual calculating of the numbers in these matrices may be tedious, but it describes a complicated operation very efficiently, which we always want to do.
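For anyone who wants to check the arithmetic, here is a minimal sketch using numpy (assumed to be available); the fractions are the transition chances from the model above, and the starting vector is the all-hydrogen universe.

```python
# The toy-universe arithmetic with numpy. A holds the transition chances from
# the model; b is the starting mix of elements.
import numpy as np

A = np.array([[3/4, 0,   2/5],
              [1/4, 3/5, 2/5],
              [0,   2/5, 1/5]])
b = np.array([100, 0, 0])                  # 100 percent hydrogen to start

print(A @ b)                               # [75. 25.  0.]    -- one billion years
print(A @ (A @ b))                         # [56.25 33.75 10.] -- two billion years
print(np.linalg.matrix_power(A, 10) @ b)   # close to the settled distribution
```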

I should mention another bit of notation. We usually use capital letters to represent matrices; but, a matrix that’s just got one column is also called a vector. That’s often written with a lowercase letter, with a little arrow above the letter, as in \vec{x} , or in bold typeface, as in x. (The arrows are easier to put in writing, the bold easier when you were typing on typewriters.) But if you’re doing a lot of writing this out, and know that (say) x isn’t being used for anything but vectors, then even that arrow or boldface will be forgotten. Then we’d write the product of matrix A and vector x as just Ax.  (There are also cases where you put a little caret over the letter; that’s to denote that it’s a vector that’s one unit of length long.)

When you start writing vectors without an arrow or boldface you start to run the risk of confusing which symbols mean scalars and which mean vectors. That’s one of the reasons that Greek letters are popular for scalars. It’s also common to put scalars to the left and vectors to the right. So if one saw “rMx”, it would be expected that r is a scalar, M a matrix, and x a vector, and if they’re not then this should be explained in text nearby, preferably before the equations. (And of course if it’s work you’re doing, you should know going in what you mean the letters to represent.)

The Geometry of Thermodynamics (Part 1)


I should mention that Peter Mander’s Carnot Cycle blog has a fine entry, “The Geometry of Thermodynamics (Part I)”, which admittedly opens with a diagram that looks like the sort of thing you’d create if you wanted to make a science diagram look horrifying. That’s a bit of flavor.

Mander writes about part of what made J Willard Gibbs probably the greatest theoretical physicist that the United States has yet produced: Gibbs put much of thermodynamics into a logically neat system, the kind we still basically use today, and all the better saw how to represent it and understand it as a matter of surface geometries. This is an abstract kind of surface — looking at the curve traced out by, say, mapping the energy of a gas against its volume, or its temperature versus its entropy — but if you can accept the idea that we can draw curves representing these quantities then you get to use your understanding of how solid objects (and Gibbs even had solid objects made — James Clerk Maxwell, of Maxwell’s Equations fame, sculpted some) look and feel.

This is a reblogging of only part one, although as Mander’s on summer holiday you haven’t missed part two.

carnotcycle


Volume One of the Scientific Papers of J. Willard Gibbs, published posthumously in 1906, is devoted to Thermodynamics. Chief among its content is the hugely long and desperately difficult “On the equilibrium of heterogeneous substances (1876, 1878)”, with which Gibbs single-handedly laid the theoretical foundations of chemical thermodynamics.

In contrast to James Clerk Maxwell’s textbook Theory of Heat (1871), which uses no calculus at all and hardly any algebra, preferring geometry as the means of demonstrating relationships between quantities, Gibbs’ magnum opus is stuffed with differential equations. Turning the pages of this calculus-laden work, one could easily be drawn to the conclusion that the writer was not a visual thinker.

But in Gibbs’ case, this is far from the truth.

The first two papers on thermodynamics that Gibbs published, in 1873, were in fact visually-led. Paper I deals with indicator diagrams and their comparative properties, while Paper II

View original post 1,490 more words

Reading the Comics, July 28, 2014: Homework in an Amusement Park Edition


I don’t think my standards for mathematics content in comic strips are seriously lowering, but the strips do seem to be coming pretty often for the summer break. I admit I’m including one of these strips just because it lets me talk about something I saw at an amusement park, though. I have my weaknesses.

Harley Schwadron’s 9 to 5 (July 25) builds its joke around the ambiguity of saying a salary is six (or some other number) of figures, if you don’t specify what side of the decimal they’re on. That’s an ordinary enough gag, although the size of a number can itself be an interesting thing to know. The number of digits it takes to write a number down corresponds, roughly, with the logarithm of the number, and in the olden days a lot of computations depended on logarithms: multiplying two numbers is equivalent to adding their logarithms; dividing two numbers, subtracting their logarithms. And addition and subtraction are normally easier than multiplication and division. Similarly, raising a number to a power becomes multiplying its logarithm by that power, and multiplication is easier than exponentiation. So counting the number of digits in a number might be worth doing anyway.
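As a minimal illustration of that digit-count-and-logarithm connection, in Python (the particular numbers are arbitrary):

```python
# The number of digits of a positive integer n is floor(log10(n)) + 1, and
# multiplying numbers corresponds to adding their logarithms.
import math

n, m = 48213, 907
print(len(str(n)), math.floor(math.log10(n)) + 1)          # 5 5
print(math.log10(n * m), math.log10(n) + math.log10(m))    # the same value twice
```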

Steve Breen and Mike Thompson’s Grand Avenue (July 25) has the kids mention something as being “like going to an amusement park to do math homework”, which gives me a chance to share this incident. Last year my love and I were in the Cedar Point amusement park (in Sandusky, Ohio), and went to the coffee shop. We saw one guy sitting at a counter, with his laptop and a bunch of papers sprawled out, looking pretty much like we do when we’re grading papers, and we thought initially that it was so very sad that someone would be so busy at work that (we presumed) he couldn’t even really participate in the family expedition to the amusement park.

And then we remembered: not everybody lives a couple hours away from an amusement park. If we lived, say, fifteen minutes from a park we had season passes to, we’d certainly at least sometimes take our grading work to the park, so we could get it done in an environment we liked and reward ourselves for getting done with a couple roller coasters and maybe the Cedar Downs carousel (which is worth an entry around these parts anyway). To grade, anyway; I’d never have the courage to bring my laptop to the coffee shop. So I guess all I’m saying is, I have a context in which yes, I could imagine going to an amusement park to grade math homework at least.

Wulff and Morgenthaler’s Truth Facts (July 25) makes a Venn diagram joke in service of asserting that only people who don’t understand statistics would play the lottery. This is an understandable attitude of Wulff and Morgenthaler, and of many, many people who make the same claim. The expectation value — the amount you might win, times the probability you will win that amount, minus the cost of the ticket — is negative for all but the most extremely oversized lottery payouts, and the most extremely oversized lottery payouts still give you odds of winning so tiny that you really aren’t hurting your chances by not buying a ticket. However, the smugness behind the attitude bothers me — I’m generally bothered by smugness — and jokes like this one contain the assumption that the only sensible way to live is to apply a ruthless profit-and-loss calculation to life that even Jeremy Bentham might say is a bit much. For the typical person, buying a lottery ticket is a bit of a lark, a couple dollars of disposable income spent because, what the heck, it’s about what you’d spend on one and a third sodas and you aren’t that thirsty. Lottery pools with coworkers or friends make it a small but fun social activity, too. That something is a net loss of money does not mean it is necessarily foolish. (This isn’t to say it’s wise, either, but I’d generally like a little more sympathy for people’s minor bits of recreational foolishness.)
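For the shape of that expectation-value calculation, here is a minimal sketch with made-up numbers; these are not any real lottery’s odds or payout, just placeholders.

```python
# Expected value of a ticket: payout times the chance of winning it, minus
# the ticket's cost. The numbers here are placeholders, not real lottery odds.
ticket_cost = 2.00
jackpot = 100_000_000
chance_of_jackpot = 1 / 300_000_000

expected_value = jackpot * chance_of_jackpot - ticket_cost
print(expected_value)   # about -1.67; on average the ticket loses money
```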

Marc Anderson’s Andertoons (July 27) does a spot of wordplay about the meaning of “aftermath”. I can’t think of much to say about this, so let me just mention that Florian Cajori’s A History of Mathematical Notations reports (section 201) that the + symbol for addition appears to trace from writing “et”, meaning and, a good deal and the letters merging together and simplifying from that. This seems plausible enough on its face, but it does cause me to reflect that the & symbol also is credited as a symbol born from writing “et” a lot. (Here, picture writing Et and letting the middle and lower horizontal strokes of the E merge with the cross bar and the lowest point of the t.)

Berkeley Breathed’s Bloom County (July 27, rerun from, I believe, July of 1988) is one of the earliest appearances I can remember of the Grand Unification appearing in popular culture, certainly in comic strips. Unifications have a long and grand history in mathematics and physics in explaining things which look very different by the same principles, with the first to really draw attention probably being Descartes showing that algebra and geometry could be understood as a single thing, and problems difficult in one field could be easy in the other. In physics, the most thrilling unification was probably the explaining of electricity, magnetism, and light as the same thing in the 19th century; being able to explain many varied phenomena with some simple principles is just so compelling. General relativity shows that we can interpret accelerations and gravitation as the same thing; and in the late 20th century, physicists found that it’s possible to use a single framework to explain both electromagnetism and the forces that hold subatomic particles together and that break them apart.

It’s not yet known how to explain gravity and quantum mechanics in the same, coherent, frame. It’s generally assumed they can be reconciled, although I suppose there’s no logical reason they have to be. Finding a unification — or a proof they can’t be unified — would certainly be one of the great moments of mathematical physics.

The idea of the grand unification theory as an explanation for everything is … well, fair enough. A grand unification theory should be able to explain what particles in the universe exist, and what forces they use to interact, and from there it would seem like the rest of reality is details. Perhaps so, but it’s a long way to go from a simple starting point to explaining something as complicated as a penguin. I guess what I’m saying is I doubt Oliver would notice the non-existence of Opus in the first couple pages of his work.

Thom Bluemel’s Birdbrains (July 28) takes us back to the origin of numbers. It also makes me realize I don’t know what’s the first number that we know of people discovering. What I mean is, it seems likely that humans are just able to recognize a handful of numbers, like one and two and maybe up to six or so, based on how babies and animals can recognize something funny if the counts of small numbers of things don’t make sense. And larger numbers were certainly known to antiquity; probably the fact that numbers keep going on forever was known to antiquity. And some special numbers with interesting or difficult properties, like pi or the square root of two, were known so long ago we can’t say who discovered them. But then there are numbers like the Euler-Mascheroni constant, which are known and recognized as important things, and we can say reasonably well who discovered them. So what is the first number with a known discoverer?

Lewis Carroll and my Playing With Universes


I wanted to explain what’s going on that my little toy universes with three kinds of elements changing to one another keep settling down to steady and unchanging distributions of stuff. I can’t figure a way to do that other than to introduce some actual mathematics notation, and I’m aware that people often find that sort of thing off-putting, or terrifying, or at the very least unnerving.

There’s fair reason to: the entire point of notation is to write down a lot of information in a way that’s compact or easy to manipulate. Using it at all assumes that the writer, and the reader, are familiar with enough of the background that they don’t have to have it explained at each reference. To someone who isn’t familiar with the topic, then, the notation looks like symbols written down without context and without explanation. It’s much like wandering into an Internet forum where all the local acronyms are unfamiliar, the in-jokes are heavy on the ground, and for some reason nobody actually spells out Dave Barry’s name in full.

Let me start by looking at the descriptions of my toy universe: it’s made up of a certain amount of hydrogen, a certain amount of iron, and a certain amount of uranium. Since I’m not trying to describe, like, where these elements are or how they assemble into toy stars or anything like that, I can describe everything that I find interesting about this universe with three numbers. I had written those out as “40% hydrogen, 35% iron, 25% uranium”, for example, or “10% hydrogen, 60% iron, 30% uranium”, or whatever the combination happens to be. If I write the elements in the same order each time, though, I don’t really need to add “hydrogen” and “iron” and “uranium” after the numbers, and if I’m always looking at percentages I don’t even need to add the percent symbol. I can just list the numbers and let the “percent hydrogen” or “percent iron” or “percent uranium” be implicit: “40, 35, 25”, for one universe’s distribution, or “10, 60, 30” for another.

Letting the position of where a number is written carry information is a neat and easy way to save effort, and when you notice what’s happening you realize it’s done all the time: it’s how writing the date as “7/27/14” makes any sense, or how a sports scoreboard might compactly describe the course of the game:

0 1 0   1 2 0   0 0 4   8 13 1
2 0 0   4 0 0   0 0 1   7 15 0

To use the notation you need to understand how the position encodes information. “7/27/14” doesn’t make sense unless you know the first number is the month, the second the day within the month, and the third the year in the current century; and the fact that there’s an equally strong convention putting the day first and the month second presents hazards when the date is ambiguous. Reading the box score requires knowing the top row reflects the performance of the visitor’s team, the bottom row the home team, and the first nine columns count the runs by each team in each inning, while the last three columns are the total count of runs, hits, and errors by that row’s team.

When you put together the numbers describing something into a rectangular grid, that’s termed a matrix of numbers. The box score for that imaginary baseball game is obviously one, but it’s also a matrix if I just write the numbers describing my toy universe in a row, or a column:

40
35
25

or

10
60
30

If a matrix has just the one column, it’s often called a vector. If a matrix has the same number of rows as it has columns, it’s called a square matrix. Matrices and vectors are also usually written with either straight brackets or curled parentheses around them, left and right, but that’s annoying to do in HTML so please just pretend.

The matrix as mathematicians know it today got put into a logically rigorous form around 1850 largely by the work of James Joseph Sylvester and Arthur Cayley, leading British mathematicians who also spent time teaching in the United States. Both are fascinating people, Sylvester for his love of poetry and language and for an alleged incident while briefly teaching at the University of Virginia which the MacTutor archive of mathematician biographies, citing L S Feuer, describes so: “A student who had been reading a newspaper in one of Sylvester’s lectures insulted him and Sylvester struck him with a sword stick. The student collapsed in shock and Sylvester believed (wrongly) that he had killed him. He fled to New York where one of his elder brothers was living.” MacTutor goes on to give reasons why this story may be somewhat distorted, although it does suggest one solution to the problem of students watching their phones in class.

Cayley, meanwhile, competes with Leonhard Euler for prolific range in a mathematician. MacTutor cites him as having at least nine hundred published papers, covering pretty much all of modern mathematics, including work that would underlie quantum mechanics and non-Euclidean geometry. He wrote about 250 papers in the fourteen years he was working as a lawyer, which would by itself have made him a prolific mathematician. If you need to bluff your way through a mathematical conversation, saying “Cayley” and following it with any random noun will probably allow you to pass.

MathWorld mentions, to my delight, that Lewis Carroll, in his secret guise as Charles Dodgson, came into the world of matrices in 1867 with an objection to the very word. In writing about them, Dodgson said, “I am aware that the word `Matrix’ is already in use to express the very meaning for which I use the word `Block’; but surely the former word means rather the mould, or form, into which algebraical quantities may be introduced, than an actual assemblage of such quantities”. He’s got a fair point, really, but there wasn’t much to be done in 1867 to change the word, and it’s only gotten more entrenched since then.

What’s Going On In The Old Universe


Last time in this infinitely-old universe puzzle, we found that by making a universe of only three kinds of atoms (hydrogen, iron, and uranium) which shifted to one another with fixed chances over the course of time, we’d end up with the same distribution of atoms regardless of what the distribution of hydrogen, iron, and uranium was to start with. That seems like it might require explanation.

(For people who want to join us late without re-reading: I got to wondering what the universe might look like if it just ran on forever, stars fusing lighter elements into heavier ones, heavier elements fissioning into lighter ones. So I looked at a toy model where there were three kinds of atoms, dubbed hydrogen for the lighter elements, iron for the middle, and uranium for the heaviest, and made up some numbers saying how likely hydrogen was to be turned into heavier atoms over the course of a billion years, how likely iron was to be turned into something heavier or lighter, and how likely uranium was to be turned into lighter atoms. And sure enough, if the rates of change stay constant, then the universe goes from any initial distribution of atoms to a single, unchanging-ever-after mix in surprisingly little time, considering it’s got a literal eternity to putter around.)

The first question, it seems, is whether I happened to pick a freak set of numbers for the change of one kind of atom to another. It’d be a stroke of luck, but, these things happen. In my first model, I gave hydrogen a 25 percent chance of turning to iron, and no chance of turning to uranium, in a billion years. Let’s change that so any given hydrogen atom has a 20 percent chance of turning to iron and a 20 percent chance of turning to uranium. Similarly, instead of iron having no chance of turning to hydrogen and a 40 percent chance of turning to uranium, let’s try giving each iron atom a 25 percent chance of becoming hydrogen and a 25 percent chance of becoming uranium. Uranium, first time around, had a 40 percent chance of becoming hydrogen and a 40 percent chance of becoming iron. Let me change that to a 60 percent chance of becoming hydrogen and a 20 percent chance of becoming iron.

With these chances of changing, a universe that starts out purely hydrogen settles on being about 50 percent hydrogen, a little over 28 percent iron, and a little over 21 percent uranium in about ten billion years. If the universe starts out with equal amounts of hydrogen, iron, and uranium, however, it settles over the course of eight billion years to … 50 percent hydrogen, a little over 28 percent iron, and a little over 21 percent uranium. If it starts out with no hydrogen and the rest of matter evenly split between iron and uranium, then over the course of twelve billion years it gets to … 50 percent hydrogen, a little over 28 percent iron, and a little over 21 percent uranium.

Perhaps the problem is that I’m picking these numbers, and I’m biased towards things that are pretty nice ones — halves and thirds and two-fifths and the like — and maybe that’s causing this state where the universe settles down very quickly and stops changing any. We should at least try that before supposing there’s necessarily something more than coincidence going on here.

So I set the random number generator to produce some element changes which can’t be susceptible to my bias for simple numbers. Give hydrogen a 44.5385 percent chance of staying hydrogen, a 10.4071 percent chance of becoming iron, and a 45.0544 percent chance of becoming uranium. Give iron a 25.2174 percent chance of becoming hydrogen, a 32.0355 percent chance of staying iron, and a 42.7471 percent chance of becoming uranium. Give uranium a 2.9792 percent chance of becoming hydrogen, a 48.9201 percent chance of becoming iron, and a 48.1007 percent chance of staying uranium. (Clearly, by the way, I’ve given up on picking numbers that might reflect some actual if simple version of nucleosynthesis and I’m just picking numbers for numbers’ sake. That’s all right; the question for this essay is whether we’re stuck with an unchanging yet infinitely old universe.)

And the same thing happens again: after nine billion years a universe starting from pure hydrogen will be about 18.7 percent hydrogen, about 35.7 percent iron, and about 45.6 percent uranium. Starting from no hydrogen, 50 percent iron, and 50 percent uranium, we get to the same distribution in again about nine billion years. A universe beginning with equal amounts hydrogen, iron, and uranium under these rules gets to the same distribution after only seven billion years.
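If you want to check that convergence yourself, here is a minimal Python sketch using the randomly chosen chances above (numpy assumed to be available); each column of the matrix holds the chances for what hydrogen, iron, and uranium turn into.

```python
# Iterate the randomly chosen transition chances from a few starting mixes.
# Columns are "from" hydrogen, iron, uranium; rows are "to".
import numpy as np

M = np.array([[0.445385, 0.252174, 0.029792],
              [0.104071, 0.320355, 0.489201],
              [0.450544, 0.427471, 0.481007]])

for start in ([100, 0, 0], [0, 50, 50], [100/3, 100/3, 100/3]):
    mix = np.array(start, dtype=float)
    for _ in range(9):           # nine billion years, one billion at a time
        mix = M @ mix
    print(np.round(mix, 1))      # each run ends near [18.7 35.7 45.6]
```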

The conclusion is this settling down doesn’t seem to be caused by picking numbers that are too particularly nice-looking or obviously attractive; and the distributions don’t seem to have an obvious link to what the probabilities of changing are. There seems to be something happening here, though admittedly we haven’t proven that rigorously. To spoil a successor article in this thread: there is something here, and it’s a big thing.

(Also, no, we’re not stuck with an unchanging universe, and good on you if you can see ways to keep the universe changing without, like, having the probability of one atom changing to another itself vary in time.)

To Build A Universe


So I kept thinking about what the distribution of elements might be in an infinitely old universe. It’s a tough problem to consider, if you want to do it exactly right, since you have to consider how stars turn lighter atoms into heavier ones in a blistering array of possibilities. Besides the nearly one hundred different elements — which represent the count of how many protons are in the nucleus — each element has multiple isotopes — representing how many neutrons are in the nucleus — and I don’t know how many there are to consider but it’s certainly at least several hundred to deal with. There’s probably a major work in the astrophysics literature describing all the ways atoms and their isotopes can get changed over the course of a star’s lifetime, either actually existing or waiting for an indefatigable writer to make it her life’s work.

But I can make a toy model, because I want to do mathematics, and I can see what I might learn from that. This is basically a test vehicle: I want to see whether building a more accurate model is likely to be worthwhile.

For my toy model of the universe I will pretend there are only three kinds of atoms in the universe: hydrogen, iron, and uranium. These represent the lighter elements — which can fuse together to release energy — and Iron-56 — which can’t release energy either by fusing into heavier elements or by fissioning into lighter ones — and the heavier elements — which can fission apart to release energy and lighter elements. I can describe the entire state of the universe with three numbers, saying what fraction of the universe is hydrogen, what fraction is iron, and what fraction is uranium. So these are pretty powerful toys.

Over time the stars in this universe will change some of their hydrogen into iron, and some of their iron into uranium. The uranium will change some of itself into hydrogen and into iron. How much? I’m going to make up some nice simple numbers and say that over the course of a billion years, one-quarter of all the hydrogen in the universe will be changed into iron; three-quarters of the hydrogen will remain hydrogen. Over that same time, let’s say two-fifths of all the iron in the universe will be changed to uranium, while the remaining three-fifths will remain iron. And the uranium? Well, that decays; let’s say that two-fifths of the uranium will become hydrogen, two-fifths will become iron, and the remaining one-fifth will stay uranium. If I had more elements in the universe I could make a more detailed, subtle model, and if I didn’t feel quite so lazy I might look up more researched figures for this, but, again: toy model.

I’m by the way assuming this change of elements is constant for all time and that it doesn’t depend on the current state of the universe. There are sound logical reasons behind this: to have the rate of nucleosynthesis vary in time would require me to do more work. As above: toy model.

So what happens? This depends on what we start with, sure. Let’s imagine the universe starts out made of nothing but hydrogen, so that the composition of the universe is 100% hydrogen, 0% iron, 0% uranium. After the first billion years, some of the hydrogen will be changed to iron, but since there wasn’t any iron before, there’s still no uranium. The universe’s composition would be 75% hydrogen, 25% iron, 0% uranium. After the next billion years a quarter of the hydrogen becomes iron and two-fifths of the iron becomes uranium, so we’ll be at 56.25% hydrogen, 33.75% iron, 10% uranium. Another billion years passes, and once again a quarter of the hydrogen becomes iron, two-fifths of the iron becomes uranium, and two-fifths of the uranium becomes hydrogen and another two-fifths becomes iron. This is a lot of arithmetic but the results are easy enough to find: 46.188% hydrogen, 38.313% iron, 15.5% uranium. After some more time we have 40.841% hydrogen, 40.734% iron, 18.425% uranium. It’s maybe a fair question whether the universe is going to run itself all the way down to have nothing but iron, but, the next couple billion years show things settling down. Let me put all this in a neat little table.

Composition of the Toy Universe
Age (billion years)   Hydrogen   Iron   Uranium
0 100% 0% 0%
1 75% 25% 0%
2 56.25% 33.75% 10%
3 46.188% 38.313% 15.5%
4 40.841% 40.734% 18.425%
5 38% 42.021% 19.979%
6 36.492% 42.704% 20.804%
7 35.691% 43.067% 21.242%
8 35.265% 43.260% 21.475%
9 35.039% 43.362% 21.599%
10 34.919% 43.417% 21.665%
11 34.855% 43.446% 21.700%
12 34.821% 43.461% 21.718%
13 34.803% 43.469% 21.728%
14 34.793% 43.473% 21.733%
15 34.788% 43.476% 21.736%
16 34.786% 43.477% 21.737%
17 34.784% 43.478% 21.738%
18 34.783% 43.478% 21.739%
19 34.783% 43.478% 21.739%
20 34.783% 43.478% 21.739%

We could carry on but there’s really no point: the numbers aren’t going to change again. Well, probably they’re changing a little bit, four or more places past the decimal point, but this universe has settled down to a point where just as much hydrogen is being lost to fusion as is being created by fission, and just as much uranium is created by fusion as is lost by fission, and just as much iron is being made as is being turned into uranium. There’s a balance in the universe.
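Here is a minimal Python sketch of the arithmetic behind that table, applying the stated rates one billion years at a time; it just prints the same percentages to three decimal places.

```python
# Reproduce the table above: each billion years, 1/4 of hydrogen becomes iron,
# 2/5 of iron becomes uranium, and uranium splits 2/5 to hydrogen, 2/5 to
# iron, with 1/5 staying uranium.
h, fe, u = 100.0, 0.0, 0.0
for age in range(21):
    print(f"{age:2d}  {h:7.3f}%  {fe:7.3f}%  {u:7.3f}%")
    h, fe, u = (3/4 * h + 2/5 * u,
                1/4 * h + 3/5 * fe + 2/5 * u,
                2/5 * fe + 1/5 * u)
```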

At least, that’s the balance if we start out with a universe made of nothing but hydrogen. What if it started out with a different breakdown, for example, a universe that started as one-third hydrogen, one-third iron, and one-third uranium? In that case, as the universe ages, the distribution of elements goes like this:

Composition of the Toy Universe
Age (billion years)   Hydrogen   Iron   Uranium
0 33.333% 33.333% 33.333%
1 38.333% 41.667% 20%
2 36.75% 42.583% 20.667%
3 35.829% 43.004% 21.167%
4 35.339% 43.226% 21.435%
5 35.078% 43.345% 21.578%
10 34.795% 43.473% 21.732%
15 34.783% 43.478% 21.739%

We’ve gotten to the same distribution, only a tiny bit faster. (It doesn’t quite get there after fourteen billion years.) I hope it won’t shock you if I say that we’d see the same thing if we started with a universe made of nothing but iron, or of nothing but uranium, or of other distributions. Some take longer to settle down than others, but, they all seem to converge on the same long-term fate for the universe.

Obviously there’s something special about this toy universe, with three kinds of atoms changing into one another at these rates, which causes it to end up at the same distribution of atoms.

In A Really Old Universe


So, my thinking about an “Olbers Universe” infinitely old and infinitely large in extent brought me to a depressing conclusion that such a universe has to be empty, or at least just about perfectly empty. But we can still ponder variations on the idea and see if that turns up anything. For example, what if we have a universe that’s infinitely old, but not infinite in extent, either because space is limited or because all the matter in the universe is in one neighborhood?

Suppose we have stars. Stars do many things. One of them is they turn elements into other elements, mostly heavier ones. For the lightest of atoms — hydrogen and helium, for example — stars create heavier elements by fusing them together. Making the heavier atoms from these lighter ones, in the net, releases energy, which is why fusion is constantly thought of as a potential power source. And that’s good for making elements up to as heavy as iron. After that point, fusion becomes a net energy drain. But heavier elements still get made as the star dies: when it can’t produce energy by fusion anymore the great mass of the star collapses on itself and that shoves together atom nucleuses regardless of the fact this soaks up more energy. (Warning: the previous description of nucleosynthesis, as it’s called, was very simplified and a little anthropomorphized, and wasn’t seriously cross-checked against developments in the field since I was a physics-and-astronomy undergraduate. Do not use it to attempt to pass your astrophysics qualifier. It’s good enough for everyday use, what with how little responsibility most of us have for stars.)

The important thing to me is that a star begins as a ball of dust, produced by earlier stars (and, in our finite universe, from the Big Bang, which produced a lot of hydrogen and helium and some of the other lightest elements), that condenses into a star, turns many of the elements in it into other elements, and then returns to a cloud of dust that mixes with other dust clouds and forms new stars.

Now. Over time, over the generations of stars, we tend to get heavier elements out of the mix. That’s pretty near straightforward mathematics: if you have nothing but hydrogen and helium — atoms that have one or two protons in the nucleus — it’s quite a trick to fuse them together into something with more than two, three, or four protons in the nucleus. If you have hydrogen, helium, lithium, and beryllium to work with — one, two, three, and four protons in the nucleus — it’s easier to get products with anywhere from two up to eight protons in the nucleus. And so on. The tendency is for each generation of stars to have relatively less hydrogen and helium and relatively more of the heavier atoms in its makeup.

So what happens if you have infinitely many generations? The first guess would be, well, stars will keep gathering together and fusing together as long as there are any elements lighter than iron, so that eventually there’d be a time when there were no (or at least no significant) amounts of elements lighter than iron, at which point the stars cease to shine. There’s nothing more to fuse together to release energy and we have a universe of iron- and heavier-metal ex-stars. I’m not sure if this is an even more depressing infinite universe than the infinitely large, infinitely old one which couldn’t have anything at all in it.

Except that this isn’t the whole story. Heavier elements than iron can release energy by fission, splitting into two or more lighter elements. Uranium and radium and a couple other elements are famous for it, but I believe every element has at least some radioactive isotopes. Popular forms of fission will produce alpha particles, which is what they named this particular type of radioactive product before they realized it was just the nucleus of a helium atom. Other types of radioactive decay will produce neutrons, which, if they’re not in the nucleus of an atom, will last an average of about fifteen minutes before decaying into a proton — a hydrogen nucleus — and some other stuff. Some more exotic forms of radioactive decay can produce protons by themselves, too. I haven’t checked the entire set of possible fission byproducts but I wouldn’t be surprised if most of the lighter elements can be formed by something’s breaking down.

In short, even if we fused the entire contents of the universe into atoms heavier than iron, we would still get out a certain amount of hydrogen and of helium, and also of other lighter elements. In short, stars turn hydrogen and helium, eventually, into very heavy elements; but the very heavy elements turn at least part of themselves back into hydrogen and helium.

So, it seems plausible, at least qualitatively, that given enough time to work there’d be a stable condition: hydrogen and helium being turned into heavier atoms at the same rate that heavier atoms are producing hydrogen and helium in their radioactive decay. And an infinitely old universe has enough time for anything.

And that’s, to me, anyway, an interesting question: what would the distribution of elements look like in an infinitely old universe?

(I should point out here that I don’t know. I would be surprised if no one in the astrophysics community has worked it out, at least in a rough form, for an as-authentic-as-possible set of assumptions about how nucleosynthesis works. But I am so ignorant of the literature I’m not sure how to even find whatever answers they’ve proposed. I can think of it as a mathematical puzzle at least, though.)

In A Really Big Universe


I’d got to thinking idly about Olbers’ Paradox, the classic question of why the night sky is dark. It’s named for Heinrich Wilhelm Olbers, 1758-1840, who of course was not the first person to pose the problem nor to give a convincing answer to it, but, that’s the way naming rights go.

It doesn’t sound like much of a question at first, after all, it’s night. But if we suppose the universe is infinitely large and is infinitely old, then, along the path of any direction you look in the sky, day or night, there’ll be a star. The star may be very far away, so that it’s very faint; but it takes up much less of the sky from being so far away. The result is that the star’s intensity, as a function of how much of the sky it takes up, is no smaller. And there’ll be stars shining just as intensely in directions that are near to that first star. The sky in an infinitely large, infinitely old universe should be a wall of stars.

Oh, some stars will be dimmer than average, and some brighter, but that doesn’t matter much. We can suppose the average star is of average brightness and average size for reasons that are right there in the name of the thing; it makes the reasoning a little simpler and doesn’t change the result.
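Here is a minimal sketch, in Python with arbitrary units, of why the distance to a star doesn’t change its brightness per patch of sky: the light reaching us and the patch of sky the star covers both fall off as one over the distance squared, so their ratio stays put.

```python
# A star's flux falls off as 1/r^2, but so does the patch of sky it covers,
# so its brightness per patch of sky doesn't change with distance.
# Luminosity and star radius are in arbitrary units.
import math

luminosity, star_radius = 1.0, 1.0
for r in (10, 100, 1000):
    flux = luminosity / (4 * math.pi * r**2)         # light reaching us
    solid_angle = math.pi * (star_radius / r)**2     # patch of sky it covers
    print(r, flux / solid_angle)                     # the same ratio each time
```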

The reason there is darkness is that our universe is neither infinitely large nor infinitely old. There aren’t enough stars to fill the sky and there’s not enough time for the light from all of them to get to us.

But we can still imagine standing on a planet in such an Olbers Universe (to save myself writing “infinitely large and infinitely old” too many times), with enough vastness and enough time to have a night sky that looks like a shell of starlight, and that’s what I was pondering. What might we see if we looked at the sky, in these conditions?

Well, light, obviously; we can imagine the sky looking as bright as the sun, but in all directions above the horizon. The sun takes up a very tiny piece of the sky — it’s about as wide across as your thumb, held at arm’s length, and try it if you don’t believe me (better, try it with the Moon, which is about the same size as the Sun and easier to look at) — so, multiply that brightness by the difference between your thumb and the sky and imagine the investment in sunglasses this requires.

It’s worse than that, though. Yes, in any direction you look there’ll be a star, but if you imagine going on in that direction there’ll be another star, eventually. And another one past that, and another past that yet. And the light — the energy — of those stars shining doesn’t disappear because there’s a star between it and the viewer. The heat will just go into warming up the stars in its path and get radiated through.

This is why interstellar dust, or planets, or other non-radiating bodies don’t answer why the sky could be dark in a vast enough universe. Anything that gets enough heat put into it will start to glow and shine from that heat. Such bodies will slow down the waves of heat from the stars behind them, but given enough time the heat will get through, and in an infinitely old universe there is enough time.

The conclusion, then, is that our planet in an Olbers Universe would get an infinite amount of heat pouring onto it, at all times. It’s hard to see how life could possibly exist in such circumstances; water would boil away — rock would boil away — and the planet would just evaporate into dust.

Things get worse, though: it’s not just our planet that would get boiled away like this, but as far as I can tell, the stars too. Each star would be getting an infinite amount of heat pouring into it. It seems to me this requires the matter making up the stars to get so hot it would boil away, just as the atmosphere and water and surface of the imagined planet would, until the star — until all stars — disintegrate. At this point I have to think of the great super-science space-opera writers of the early 20th century, listening to the description of a wave of heat that boils away a star, and sniffing, “Amateurs. Come back when you can boil a galaxy instead”. Well, the galaxy would boil too, for the same reasons.

Even once the stars have managed to destroy themselves, though, the remaining atoms would still have a temperature, and would still radiate faint light. And that faint light, multiplied by the infinitely many atoms and all the time they have, would still accumulate to an infinitely great heat. I don’t know how hot you have to get to boil a proton into nothingness — or a quark — but if there is any temperature that does it, this accumulated heat would reach it.

So the result, I had to conclude, is that an infinitely large, infinitely old universe could exist only if it didn’t have anything in it, or at least nothing that wasn’t at absolute zero. This seems like a pretty dismal result and left me looking pretty moody for a while, even if I was sure that EE “Doc” Smith would smile at me for working out the heat-death of quarks.

Of course, there’s no reason that a universe has to, or even should, be pleasing to imagine. And there is a little thread of hope for life, or at least existence, in an Olbers Universe.

All the destruction-of-everything comes about from the infinitely large number of stars, or other radiating bodies, in the universe. If there’s only finitely much matter in the universe, then its total energy doesn’t have to add up to the point of self-destruction. This means giving up an assumption that slipped into my Olbers Universe without anyone noticing: the idea that matter in it is about uniformly distributed, so that if you compare any two volumes of equal size, from any time, they have about the same number of stars in them. This is what cosmologists call homogeneity.

Our universe seems to have this homogeneity. Oh, there are spots where you can find many stars (like the center of a galaxy) and spots where there are few (like the space between galaxies), but the galaxies themselves seem to be pretty uniformly distributed.

But an imagined universe doesn’t have to have this property. If we suppose an Olbers Universe without it, then we can have stars and planets and maybe even life. It could even have many times the mass, the number of stars, and everything else that our universe has, spread across something much bigger than our universe. But it does mean that this infinitely large, infinitely old universe will have all its matter clumped together into some section, and nearly all the space — in a universe with an incredible amount of space — will be empty.

I suppose that’s better than a universe with nothing at all, but somehow only a little better. Even though it could be a universe with more stars and more space occupied than our universe has, that infinitely vast emptiness still haunts me.

(I’d like to note, by the way, that all this universe-building and reasoning hasn’t required any equations or anything like that. One could argue this has diverted from mathematics and cosmology into philosophy; I wouldn’t dispute that, though I can imagine philosophers might.)

Weightlessness at the Equator (Whiteboard Sketch #1)


The mathematics blog Scientific Finger Food has an interesting entry, “Weightlessness at the Equator (Whiteboard Sketch #1)”, which looks at the sort of question that’s easy to imagine when you’re young: gravity pulls you toward the center of the earth, and the earth’s spinning pushes you away from it (unless we’re speaking precisely, but you know what that means), so how fast would the planet have to spin for a person on the equator not to feel any weight?

It’s a straightforward problem, one a high school student ought to be able to do. Sebastian Templ works the problem out, including the all-important diagram that shows the key step, which is knowing what calculation to do.

In reality, the answer doesn’t much matter, since a planet spinning nearly fast enough to allow for weightlessness at the equator would be spinning so fast it couldn’t hold itself together. A more advanced version of this problem could make use of that: given some measure of how strongly rock holds itself together, what’s the fastest the planet can spin before it falls apart? A yet more advanced course might work out how such spin would affect other phenomena, such as tides or the precession of the poles. Eventually, one might go on to compose highly-regarded works of hard science fiction, if you’re willing to start from the questions that are easy to imagine when you’re young.
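
For the curious, here’s a minimal sketch of the standard back-of-the-envelope calculation, not Templ’s worked version: set the centripetal acceleration needed to keep you moving in an Earth-sized circle, ω²R, equal to the gravitational acceleration g, and solve for the rotation period.

```python
# How fast the Earth would have to spin for weightlessness at the equator:
# the condition is omega^2 * R = g.
import math

g = 9.81        # m/s^2, surface gravity
R = 6.371e6     # m, Earth's mean radius (close enough for a sketch)

omega = math.sqrt(g / R)          # required angular speed, radians per second
period = 2 * math.pi / omega      # the length of the resulting "day"

print(f"rotation period: {period / 60:.1f} minutes (the real day is 1440 minutes)")
print(f"speed at the equator: {omega * R / 1000:.1f} km/s")
```

That comes out to a “day” of roughly an hour and a half, with the ground at the equator moving at about the speed of a satellite in low orbit, which is no coincidence.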

scientific finger food

At the present time, our Earth does a full rotation every 24 hours, which results in day and night. Just like on a carrousel, its inhabitants (and, by the way, all the other stuff on and of the planet) are pushed “outwards” due to the centrifugal force. So we permanently feel an “upwards” pulling force thanks to the Earth’s rotation. However, the centrifugal force is much weaker than the centripetal force, which is directed towards the core of the planet and usually called “gravitation”. If this wasn’t the case, we would have serious problems holding our bodies down to the ground. (The ground, too, would have troubles holding itself “to the ground”.)

Especially on the equator, the centrifugal and the gravitational force are antagonistic forces: the one points “downwards” while the other points “upwards”.

How fast would the Earth have to spin in order to cause weightlessness at the…

View original post 201 more words

CarnotCycle on the Gibbs-Helmholtz Equation


I’m a touch late discussing this and can only plead that it has been December, after all. Over on the CarnotCycle blog — which is focused on thermodynamics in a way I rather admire — there was recently a discussion of the Gibbs-Helmholtz Equation, which turns up in thermodynamics classes. The post goes a bit better than the class I remember by showing a couple of examples of actually using the equation to understand how chemistry works. It’s so easy in a class like this to get busy working with symbols and forget that thermodynamics is a supremely practical science [1].

The Gibbs-Helmholtz Equation — named for Josiah Willard Gibbs and for Hermann von Helmholtz, both of whom developed it independently (Helmholtz first) — comes in a couple of different forms, which CarnotCycle describes. All these different forms are meant to describe whether a particular change in a system is likely to happen. CarnotCycle’s discussion gives a couple of examples of actually working out the numbers, including for the Haber process, which I don’t remember reading about in calculative detail before. So I wanted to recommend it as a bit of practical mathematics or physics.
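
For reference, one common form of the equation, the one I recall seeing in class (CarnotCycle works with a couple of equivalent versions), relates how the Gibbs free energy change varies with temperature to the enthalpy change:

```latex
\left( \frac{\partial}{\partial T} \left( \frac{\Delta G}{T} \right) \right)_{P} = -\,\frac{\Delta H}{T^{2}}
```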

[1] I think it was Stephen Brush who pointed out that many of the earliest papers in thermodynamics appeared in railroad industry journals, because the problems of efficiently getting power from engines, and of how materials change when they get below freezing, were critically important to turning railroads from experimental contraptions into a productive industry. The observation might not be original to him. The observation also might have been Wolfgang Schivelbusch’s instead.

What’s The Point Of Hamiltonian Mechanics?


The Diff_Eq twitter feed had a link the other day to a fine question put on StackExchange.com: What’s the Point of Hamiltonian Mechanics? Hamiltonian mechanics is a way of expressing the laws of Newtonian mechanics different from the one you learn in high school, the whole F equals m a business, and it gets introduced in the mechanics course you take early on as a physics major.

At this level of physics you’re mostly concerned with, well, the motion of falling balls, of masses hung on springs, of pendulums swinging back and forth, of satellites orbiting planets. This is all nice tangible stuff and you can work problems out pretty well if you know all the forces the moving things exert on one another, forming a lot of equations that tell you how the particles are accelerating, from which you can get how the velocities are changing, from which you can get how the positions are changing.

The Hamiltonian formulation starts out looking like it’s making life harder, because instead of looking just at the positions of particles, it looks at both the positions and the momenta (each the product of a particle’s mass and its velocity). However, instead of looking at the forces particularly, you look at the energy in the system, which typically is going to be the kinetic energy plus the potential energy. The energy is a nice thing to look at, because it’s got some obvious physical meaning, because you should know how it changes over time, and because it’s just a number (a scalar, in the trade) instead of a vector, the way forces are.

And here’s a neat thing: the way the position changes over time is found by looking at how the energy would change if you made a little change in the momentum; and the way the momentum changes over time is found by looking at how the energy would change if you made a little change in the position. As that sentence suggests, that’s awfully pretty; there’s something aesthetically compelling about treating position and momentum so very similarly. (They’re not treated in exactly the same way, but it’s close enough.) And writing the mechanics problem this way, as position and momentum changing in time, means we can use tools that come from linear algebra and the study of matrices to answer big questions like whether the way the system moves is stable, which are hard to answer otherwise.
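
Written out, and this is the standard notation rather than anything specific to the StackExchange discussion, those two statements are Hamilton’s equations, with H the energy, q the position, and p the momentum; the lone minus sign is the sense in which the two aren’t treated exactly the same way:

```latex
\frac{dq}{dt} = \frac{\partial H}{\partial p},
\qquad
\frac{dp}{dt} = -\,\frac{\partial H}{\partial q}
```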

The questioner who started the StackExchange discussion pointed out that before the course gets to Hamiltonian mechanics it also introduces the Euler-Lagrange formulation, which looks a lot like the Hamiltonian one, was developed first, and gets taught to students first; why not use that? Here I have to side with most of the commenters: the Hamiltonian turns out to be more useful when you go on to more advanced physics. The Euler-Lagrange form is neat, and particularly liberating because you get incredible freedom in how you set up the coordinates describing the action of your system. But it doesn’t have that same near-symmetry in treating the position and momentum, you don’t get the energy of the system built right into the equations you’re writing, and you can’t use the linear algebra and matrix tools so readily. Mostly, the good things the Euler-Lagrange form gives you, such as making it obvious when a particular coordinate doesn’t actually contribute to the behavior of the system, or letting you look at energy instead of forces, the Hamiltonian also gives you, and the Hamiltonian can be used to do more later on.

What is Physics all about?


Over on the Reading Penrose blog, Jean Louis Van Belle (and I apologize if I’ve got the name capitalized or punctuated wrong but I couldn’t find the author’s name except in a run-together, uncapitalized form) is trying to understand Roger Penrose’s Road to Reality, about the various laws of physics as we understand them. In the entry for the 6th of December, “Ordinary Differential equations (II)”, he gets to the question “What’s Physics All About?” and comes to what I have to agree is the harsh fact: a lot of physics is about solving differential equations.

Some of them are ordinary differential equations, some of them are partial differential equations, but really, a lot of it is differential equations. Some of it is setting up models for differential equations. Here, though, he looks at a number of ordinary differential equations and how they can be classified. The post is a bit cryptic — he intends the blog to be his working notes while he reads a challenging book — but I think it’s still worth recommending as a quick tour through some of the most common, physics-relevant kinds of ordinary differential equations.

From ElKement: On The Relation Of Jurassic Park and Alien Jelly Flowing Through Hyperspace


I’m frightfully late on following up on this, but ElKement has another entry in the series regarding quantum field theory, this one engagingly titled “On The Relation Of Jurassic Park and Alien Jelly Flowing Through Hyperspace”. The objective is to introduce the concept of phase space, a way of looking at physics problems that marks maybe the biggest thing one really needs to understand if one wants to be not just a physics major (or, for many parts of the field, a mathematics major) but a grad student.

As an undergraduate, it’s easy to get all sorts of problems in which, to pick an example, one models a damped harmonic oscillator. A good example of this is how one models the way a car bounces up and down after it goes over a bump, when the shock absorbers are working. You as a student are given some physical properties — how easily the car bounces, how well the shock absorbers soak up bounces — and how the first bounce went — how far the car bounced upward, how quickly it started going upward — and then work out from that what the motion will be ever after. It’s a bit of calculus and you might do it analytically, working out a complicated formula, or you might do it numerically, letting one of many different computer programs do the work and probably draw a picture showing what happens. That’s shown in class, and then for homework you do a couple problems just like that but with different numbers, and for the exam you get another one yet, and one more might turn up on the final exam.
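
Here’s a minimal sketch of that sort of homework problem done numerically: a car body bouncing on its shock absorbers, modeled as a damped harmonic oscillator. The mass, damping, stiffness, and the size of the first bounce are all invented numbers, and the solver is just a stock ordinary-differential-equation routine standing in for the “many different computer programs” mentioned above.

```python
# Damped harmonic oscillator: m x'' + c x' + k x = 0, solved numerically.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1200.0, 3000.0, 60000.0   # mass (kg), damping (N s/m), stiffness (N/m), all invented
x0, v0 = 0.05, 0.0                  # the first bounce: 5 cm up, starting at rest

def suspension(t, y):
    x, v = y
    return [v, -(c * v + k * x) / m]   # acceleration from the spring and the damper

sol = solve_ivp(suspension, (0.0, 3.0), [x0, v0], dense_output=True)

for t in np.linspace(0.0, 3.0, 7):
    x, _ = sol.sol(t)
    print(f"t = {t:.1f} s: displacement = {x * 100:+.2f} cm")
```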

Continue reading “From ElKement: On The Relation Of Jurassic Park and Alien Jelly Flowing Through Hyperspace”

From ElKement: May The Force Field Be With You


I’m derelict in mentioning this but ElKement’s blog, Theory And Practice Of Trying To Combine Just Anything, has published the second part of a non-equation-based description of quantum field theory. This one, titled “May The Force Field Be With You: Primer on Quantum Mechanics and Why We Need Quantum Field Theory”, is about introducing the idea of a field, and a bit of how they can be understood in quantum mechanics terms.

A field, in this context, means some quantity that’s got a defined value for every point in space and time that you’re studying. As ElKement notes, temperature is probably the most familiar example. I’d imagine that’s partly because it’s relatively easy to feel the temperature change as one goes about one’s business — after all, gravity is also a field, but almost none of us feel it appreciably change — and because weather maps make its changes in space and in time available in attractive pictures.

The thing the field contains can be just about anything. The temperature would be just a plain old number, or as mathematicians would have it a “scalar”. But you can also have fields that describe stuff like the pull of gravity, which has a certain strength and points, for us, toward the center of the earth. You can also have fields that describe, for example, how quickly and in what direction the water within a river is flowing. These strengths-and-directions are called “vectors” [1], and a field of vectors offers a lot of interesting mathematics and useful physics. You can also plunge into more exotic mathematical constructs, but you don’t have to. And you don’t need to understand any of it to read ElKement’s more robust introduction.

[1] The independent student newspaper for the New Jersey Institute of Technology is named The Vector, and has as motto “With Magnitude and Direction Since 1924”. I don’t know if other tech schools have newspapers which use a similar joke.
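
To make the scalar-versus-vector distinction above concrete, here’s a minimal sketch in code; the particular formulas are invented just to show the shape of the idea, namely that a field is a rule assigning something, a plain number or a strength-and-direction, to every point in space and time.

```python
# A field is a rule giving a value at every point in space and time.
import numpy as np

def temperature(x, y, z, t):
    """Scalar field: one plain number at each point and moment (made-up formula)."""
    return 20.0 + 5.0 * np.sin(0.1 * x) * np.exp(-0.05 * t)

def river_flow(x, y, z, t):
    """Vector field: a speed and a direction at each point and moment (made-up formula)."""
    return np.array([1.5, 0.2 * np.cos(0.3 * y), 0.0])   # meters per second

print(temperature(3.0, 0.0, 0.0, 10.0))   # a single number
print(river_flow(3.0, 4.0, 0.0, 10.0))    # three components: magnitude and direction
```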

From ElKement: Space Balls, Baywatch, and the Geekiness of Classical Mechanics


Over on ElKement’s blog, Theory and Practice of Trying To Combine Just Anything, is the start of a new series about quantum field theory. Elke Stangl is trying a pretty impressive trick here, describing a pretty advanced field without resorting to the piles of equations that are maybe needed to be precise, but which also fill the page with, well, piles of equations.

The first entry is about classical mechanics, contrasting the familiar way it gets introduced to people — the whole force-equals-mass-times-acceleration bit — with an alternate description based on what’s called the Principle of Least Action. This alternate description is as good as the familiar old Newton’s Laws at describing what’s going on, but it also makes a host of powerful new mathematical tools available. So when you get into serious physics work you tend to shift over to that model; and if you want to start talking about modern physics, stuff like quantum mechanics, you pretty nearly have to start from it if you want to do anything.
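
For readers who do want one equation’s worth of detail, and this is the standard textbook statement rather than anything from Elke Stangl’s deliberately equation-free post: the Principle of Least Action says the actual motion q(t) is the one that makes the action S stationary, where L is the kinetic energy minus the potential energy, and requiring that leads to the Euler-Lagrange equation.

```latex
S[q] = \int_{t_1}^{t_2} L\left(q, \dot{q}, t\right)\, dt,
\qquad
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
```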

So, since it introduces in clear language a fascinating and important part of physics and mathematics, I’d recommend folks try reading the essay. It’s building up to an explanation of fields, as the modern physicist understands them, too, which is similarly an important topic worth being informed about.