## How Differential Calculus Works

I’m at a point in my Why Stuff Can Orbit essays where I need to talk about derivatives. If I don’t, I’m going to be stuck with qualitative talk that’s too vague to focus on anything. But it’s a big subject. It’s about two-fifths of a Freshman Calculus course and one of the two main legs of calculus. So I want to pull out a quick essay explaining the important stuff.

Derivatives, also called differentials, are about how things change. By “things” I mean “functions”. And particularly I mean functions whose domain is in the real numbers and whose range is in the real numbers. That’s the starting point. We can define derivatives for functions with domains or ranges that are vectors of real numbers, or that are complex numbers, or that are even more exotic kinds of numbers. But those ideas are all built on the derivatives for real-valued functions. So if you get really good at them, everything else is easy.

Derivatives are the part of calculus where we study how functions change. They’re blessed with some easy-to-visualize interpretations. Suppose you have a function that describes where something is at a given time. Then its derivative with respect to time is the thing’s velocity. You can build a pretty good intuitive feeling for derivatives that way. I won’t stop you from it.

A function’s derivative is itself a function. The derivative doesn’t have to be constant, although it’s possible. Sometimes the derivative is positive. This means the original function increases as whatever its variable increases. Sometimes the derivative is negative. This means the original function decreases as its variable increases. If the derivative is a big number this means it’s increasing or decreasing fast. If the derivative is a small number it’s increasing or decreasing slowly. If the derivative is zero then the function isn’t increasing or decreasing, at least not at this particular value of the variable. This might be a local maximum, where the function’s got a larger value than it has anywhere else nearby. It might be a local minimum, where the function’s got a smaller value than it has anywhere else nearby. Or it might just be a little pause in the growth or shrinkage of the function. No telling, at least not just from knowing the derivative is zero.
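If you’d like to see that sign business numerically, here’s a quick sketch. The parabola and the step size are arbitrary choices, and a centered finite difference stands in for the true derivative:

```python
def approx_derivative(f, x, h=1e-6):
    """Centered-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2  # a parabola: decreasing left of 0, minimum at 0, increasing right

print(approx_derivative(f, -2.0))  # negative, about -4: f is decreasing there
print(approx_derivative(f, 0.0))   # zero: the local minimum
print(approx_derivative(f, 3.0))   # positive, about 6: f is increasing there
```

And notice the zero at the bottom of the parabola: that’s the derivative refusing to say, all by itself, whether you’re at a minimum, a maximum, or a mere pause.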

Derivatives tell you something about where the function is going. We can, and in practice often do, make approximations to a function that are built on derivatives. Many interesting functions are real pains to calculate. Derivatives let us make polynomials that are as good an approximation as we need and that a computer can do for us instead.
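Here’s a sketch of that approximation idea: the Taylor polynomial for sine, whose coefficients all come from sine’s derivatives at zero. The degree is an arbitrary choice:

```python
import math

def sin_taylor(x, degree=9):
    """Taylor polynomial of sine about 0: x - x^3/3! + x^5/5! - ..."""
    total = 0.0
    for n in range(1, degree + 1, 2):  # odd powers 1, 3, 5, ...
        total += (-1) ** (n // 2) * x ** n / math.factorial(n)
    return total

print(sin_taylor(0.5))   # a plain polynomial a computer can evaluate cheaply
print(math.sin(0.5))     # agrees to many decimal places
```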

Derivatives can change rapidly. That’s all right. They’re functions in their own right. Sometimes they change more rapidly and more extremely than the original function did. Sometimes they don’t. Depends what the original function was like.

Not every function has a derivative. Functions that have derivatives don’t necessarily have them at every point. A function has to be continuous to have a derivative. A function that jumps all over the place doesn’t really have a direction, not that differential calculus will let us suss out. But being continuous isn’t enough. The function also has to be … well, we call it “differentiable”. The word at least tells us what the property is good for, even if it doesn’t say how to tell if a function is differentiable. The function has to not have any corners, points where the direction suddenly and unpredictably changes.

Otherwise differentiable functions can have some corners, some non-differentiable points. For instance the height of a bouncing ball, over time, looks like a bunch of upside-down U-like shapes that suddenly rebound off the floor and go up again. Those points interrupting the upside-down U’s aren’t differentiable, even though the rest of the curve is.

It’s possible to make functions that are nothing but corners and that aren’t differentiable anywhere. These are done mostly to win quarrels with 19th century mathematicians about the nature of the real numbers. We use them in Real Analysis to blow students’ minds, and occasionally after that to give a fanciful idea a hard test. In doing real-world physics we usually don’t have to think about them.

So why do we do them? Because they tell us how functions change. They can tell us where functions momentarily don’t change, and those points are usually important. And derivatives give us equations with differentials in them, known in the trade as “differential equations”. They’re also known to mathematics majors as “diffy Q’s”, a name which delights everyone. Diffy Q’s let us describe physical systems where there’s any kind of feedback. If something interacts with its surroundings, that interaction’s probably described by differential equations.

So how do we do them? We start with a function that we’ll call ‘f’ because we’re not wasting more of our lives figuring out names for functions and we’re feeling smugly confident Turkish authorities aren’t interested in us.

f’s domain is the real numbers; a representative value in it is the one we’ll call ‘x’. Its range is also real numbers. f(x) is some number in that range. It’s the one that the function f matches with the domain’s value of x. We read f(x) aloud as “the value of f evaluated at the number x”, if we’re pretending or trying to scare Freshman Calculus classes. Really we say “f of x” or maybe “f at x”.

There’s a couple ways to write down the derivative of f. First, for example, we say “the derivative of f with respect to x”. By that we mean how does the value of f(x) change if there’s a small change in x. That difference matters if we have a function that depends on more than one variable, if we had an “f(x, y)”. Having several variables for f changes stuff. Mostly it makes the work longer but not harder. We don’t need that here.

But there’s a couple ways to write this derivative. The best one if it’s a function of one variable is to put a little ‘ mark: f'(x). We pronounce that “f-prime of x”. That’s good enough if there’s only the one variable to worry about. We have other symbols. One we use a lot in doing stuff with differentials looks for all the world like a fraction. We spend a lot of time in differential calculus explaining why it isn’t a fraction, and then in integral calculus we do some shady stuff that treats it like it is. But that’s $\frac{df}{dx}$. This also appears as $\frac{d}{dx} f$. If you encounter this do not cross out the d’s from the numerator and denominator. In this context the d’s are not numbers. They’re actually operators, which is what we call a function whose domain is functions. They’re notational quirks is all. Accept them and move on.

How do we calculate it? In Freshman Calculus we introduce it with this expression involving limits and fractions and delta-symbols (the Greek letter that looks like a triangle). We do that to scare off students who aren’t taking this stuff seriously. Also we use that formal proper definition to prove some simple rules work. Then we use those simple rules. Here’s some of them.
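For the record, that scary formal definition is a limit of a difference quotient, with $\Delta x$ the small change in x:

```latex
f'(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}
```

Anyway, the rules: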

1. The derivative of something that doesn’t change is 0.
2. The derivative of $x^n$, where n is any constant real number — positive, negative, whole, a fraction, rational, irrational, whatever — is $n x^{n-1}$.
3. If f and g are both functions and have derivatives, then the derivative of the function f plus g is the derivative of f plus the derivative of g. This probably has some proper name but it’s used so much it kind of turns invisible. I dunno. I’d call it something about linearity because “linearity” is what happens when adding stuff works like adding numbers does.
4. If f and g are both functions and have derivatives, then the derivative of the function f times g is the derivative of f times the original g plus the original f times the derivative of g. This is called the Product Rule.
5. If f and g are both functions and have derivatives we might do something called composing them. That is, for a given x we find the value of g(x), and then we put that value into f. That we write as f(g(x)). For example, “the sine of the square of x”. Well, the derivative of that with respect to x is the derivative of f evaluated at g(x) times the derivative of g. That is, f'(g(x)) times g'(x). This is called the Chain Rule and believe it or not this turns up all the time.
6. If f is a function and C is some constant number, then the derivative of C times f(x) is just C times the derivative of f(x), that is, C times f'(x). You don’t really need this as a separate rule. If you know the Product Rule and that first one about the derivatives of things that don’t change then this follows. But who wants to go through that many steps for a measly little result like this?
7. Calculus textbooks say there’s this Quotient Rule for when you divide f by g but you know what? The one time you’re going to need it it’s easier to work out by the Product Rule and the Chain Rule and your life is too short for the Quotient Rule. Srsly.
8. There’s some functions with special rules for the derivative. You can work them out from the Real And Proper Definition. Or you can work them out from infinitely-long polynomial approximations that somehow make sense in context. But what you really do is just remember the ones anyone actually uses. The derivative of $e^x$ is $e^x$ and we love it for that. That’s why e is that 2.71828(etc) number and not something else. The derivative of the natural log of x is 1/x. That’s what makes it natural. The derivative of the sine of x is the cosine of x. That’s if x is measured in radians, which it always is in calculus, because it makes that work. The derivative of the cosine of x is minus the sine of x. Again, radians. The derivative of the tangent of x is um the square of the secant of x? I dunno, look that sucker up. The derivative of the arctangent of x is $\frac{1}{1 + x^2}$ and you know what? The calculus book lists this in a table in the inside covers. Just use that. Arctangent. Sheesh. We just made up the “secant” and the “cosecant” to haze Freshman Calculus students anyway. We don’t get a lot of fun. Let us have this. Or we’ll make you hear of the inverse hyperbolic cosecant.
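If you want to see a couple of these rules earn their keep, here’s a quick numerical spot-check of the Product Rule and the Chain Rule. The sine and the square are arbitrary choices, and a centered finite difference stands in for the Real And Proper Definition:

```python
import math

def d(f, x, h=1e-6):
    """Centered finite difference, standing in for the true derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = math.sin
g = lambda x: x ** 2
x = 0.7  # an arbitrary point

# Product Rule: (f*g)' = f'*g + f*g'
lhs = d(lambda t: f(t) * g(t), x)
rhs = math.cos(x) * g(x) + f(x) * (2 * x)
print(lhs, rhs)  # agree to many decimal places

# Chain Rule: (f o g)' = f'(g(x)) * g'(x)
lhs2 = d(lambda t: f(g(t)), x)
rhs2 = math.cos(g(x)) * (2 * x)
print(lhs2, rhs2)  # agree to many decimal places
```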

So. What’s this all mean for central force problems? Well, here’s what the effective potential energy $V_{eff}(r)$ usually looks like:

$V_{eff}(r) = C r^n + \frac{L^2}{2 m r^2}$

So, first thing. The variable here is ‘r’. The variable name doesn’t matter. It’s just whatever name is convenient and reminds us somehow why we’re talking about this number. We adapt our rules about derivatives with respect to ‘x’ to this new name by striking out ‘x’ wherever it appeared and writing in ‘r’ instead.

Second thing. C is a constant. So is n. So is L and so is m. 2 is also a constant, which you never think about until a mathematician brings it up. That second term is a little complicated in form. But what it means is “a constant, which happens to be the number $\frac{L^2}{2m}$, multiplied by $r^{-2}$”.

So the derivative of $V_{eff}$, with respect to r, is the derivative of the first term plus the derivative of the second. The derivative of each of those terms is going to be some constant times r to another power. And that’s going to be:

$V'_{eff}(r) = C n r^{n - 1} - 2 \frac{L^2}{2 m r^3}$

And there you can cancel the 2 in the numerator against the 2 in the denominator. So we could make this look a little simpler yet, but don’t worry about that.
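If you’d like to check that derivative numerically, here’s a sketch with made-up values for C, n, L, and m (they’re placeholders, nothing physical):

```python
# Made-up constants, just for the check; not physical values.
C, n, L, m = 1.5, 2.0, 3.0, 2.0

def V_eff(r):
    return C * r ** n + L ** 2 / (2 * m * r ** 2)

def V_eff_prime(r):
    # the derivative worked out above, with the 2's cancelled
    return C * n * r ** (n - 1) - L ** 2 / (m * r ** 3)

def d(f, x, h=1e-6):
    """Centered finite difference, standing in for the true derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

r = 1.3  # an arbitrary radius
print(V_eff_prime(r))
print(d(V_eff, r))  # should match to several decimal places
```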

OK. So that’s what you need to know about differential calculus to understand orbital mechanics. Not everything. Just what we need for the next bit.

• #### davekingsbury 1:06 pm on Saturday, 8 October, 2016

Good article. Just finished Morton Cohen’s biography of Lewis Carroll, who was a great populariser of mathematics, logic, etc. Started a shared poem in tribute to him, here is a cheeky plug, hope you don’t mind!

https://davekingsbury.wordpress.com/2016/10/08/web-of-life/


• #### Joseph Nebus 12:22 am on Tuesday, 11 October, 2016

Thanks for sharing and I’m quite happy to have your plug here. I know about Carroll’s mathematics-popularization side; his logic puzzles are particularly choice ones even today. (Granting that deductive logic really lends itself to being funny.)

Oddly I haven’t read a proper biography of Carroll, or most of the other mathematicians I’m interested in. Which is strange since I’m so very interested in the history and the cultural development of mathematics.


## Spontaneity and the performance of work

I’d wanted just to point folks to the latest essay in the CarnotCycle blog. This thermodynamics piece is a bit about how work gets done, and how it relates to two kinds of variables describing systems. The two kinds are known as intensive and extensive variables, and considering them helps guide us to a different way to regard physical problems.


Imagine a perfect gas contained by a rigid-walled cylinder equipped with a frictionless piston held in position by a removable external agency such as a magnet. There are finite differences in the pressure ($P_1 > P_2$) and volume ($V_2 > V_1$) of the gas in the two compartments, while the temperature can be regarded as constant.

If the constraint on the piston is removed, will the piston move? And if so, in which direction?

Common sense, otherwise known as dimensional analysis, tells us that differences in volume (dimensions $L^3$) cannot give rise to a force. But differences in pressure (dimensions $M L^{-1} T^{-2}$) certainly can. There will be a net force of $P_1 - P_2$ per unit area of piston, driving it to the right.

– – – –

The driving force

In thermodynamics, there exists a set of variables which act as “generalised forces” driving a system from one state to…

View original post 290 more words

## Reading the Comics, September 8, 2014: What Is The Problem Edition

Must be the start of school or something. In today’s roundup of mathematically-themed comics there are a couple of strips that I think touch on the question of defining just what the problem is: what are you trying to measure, what are you trying to calculate, what are the rules of this sort of calculation? That’s a lot of what’s really interesting about mathematics, which is how I’m able to say something about a rerun Archie comic. It’s not easy work but that’s why I get that big math-blogger paycheck.

I’d have thought the universe to be at least three-dimensional.

John Hambrock’s The Brilliant Mind of Edison Lee (September 2) talks about the shape of the universe. Measuring the world, or the universe, is certainly one of the older influences on mathematical thought. From a handful of observations and some careful reasoning, for example, one can understand how large the Earth is, and how far away the Moon and the Sun must be, without going past the kinds of reasoning or calculations that a middle school student would probably be able to follow.

There is something deeper to consider about the shape of space, though: the geometry of the universe affects what things can happen in it, and can even be seen in the kinds of physics that happen. A famous, and astounding, result by the mathematical physicist Emmy Noether shows that symmetries in space correspond to conservation laws. That the universe is, apparently, rotationally symmetric — everything would look the same if the whole universe were picked up and rotated (say) 80 degrees along one axis — means that there is such a thing as the conservation of angular momentum. That the universe is time-symmetric — the universe would look the same if it had got started five hours later (please pretend that’s a statement that can have any coherent meaning) — means that energy is conserved. And so on. It may seem, superficially, like a cosmologist is engaged in some almost ancient-Greek-style abstract reasoning to wonder what shapes the universe could have and what it does, but (putting aside that it gets hard to divide mathematics, physics, and philosophy in this kind of field) we can imagine observable, testable consequences of the answer.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (September 5) tells a joke starting with “two perfectly rational perfectly informed individuals walk into a bar”, along the way to a joke about economists. The idea of “perfectly rational perfectly informed” people is part of the mathematical modeling that’s become a popular strain of economic thought in recent decades. It’s a model, and like many models, is properly speaking wrong, but it allows one to describe interesting behavior — in this case, how people will make decisions — without complications you either can’t handle or aren’t interested in. The joke goes on to the idea that one can assign costs and benefits to continuing in the joke. The idea that one can quantify preferences and pleasures and happiness I think of as being made concrete by Jeremy Bentham and the utilitarian philosophers, although trying to find ways to measure things has been a streak in Western thought for close to a thousand years now, and rather fruitfully so. But I wouldn’t have much to do with protagonists who can’t stay around through the whole joke either.

Marc Anderson’s Andertoons (September 6) was probably composed in the spirit of joking, but it does hit something that I understand baffles kids learning it every year: that subtracting a negative number does the same thing as adding a positive number. To be fair to kids who need a couple months to feel quite confident in what they’re doing, mathematicians needed a couple generations to get the hang of it too. We have now a pretty sound set of rules for how to work with negative numbers, that’s nice and logically tested and very successful at representing things we want to know, but there seems to be a strong intuition that says “subtracting a negative three” and “adding a positive three” might just be different somehow, and we won’t really know negative numbers until that sense of something being awry is resolved.

Andertoons pops up again the next day (September 7) with a completely different drawing of a chalkboard and this time a scientist and a rabbit standing in front of it. The rabbit’s shown to be able to do more than multiply and, indeed, the mathematics is correct. Cosines and sines have a rather famous link to exponentiation and to imaginary- and complex-valued numbers, and it can be useful to change an ordinary cosine or sine into this exponentiation of a complex-valued number. Why? Mostly, because exponentiation tends to be pretty nice, analytically: you can multiply and divide terms pretty easily, you can take derivatives and integrals almost effortlessly, and then if you need a cosine or a sine you can get that out at the end again. It’s a good trick to know how to do.
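That famous link is Euler’s formula, $e^{ix} = \cos(x) + i \sin(x)$, and it’s easy to spot-check; the angle here is an arbitrary choice:

```python
import cmath
import math

x = 1.2  # an arbitrary angle, in radians
left = cmath.exp(1j * x)                   # e^(ix)
right = complex(math.cos(x), math.sin(x))  # cos x + i sin x
print(left)
print(right)  # the same complex number, up to rounding
```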

Jeff Harris’s Shortcuts children’s activity panel (September 9) is a page of stuff about “Geometry”, and it’s got some nice facts (some mathematical, some historical), and a fair bunch of puzzles about the field.

Morrie Turner’s Wee Pals (September 7, perhaps a rerun; Turner died several months ago, though I don’t know how far ahead of publication he was working) features a word problem in terms of jellybeans that underlines the danger of unwarranted assumptions in this sort of problem-phrasing.

How far back is this rerun from if Moose got lunch for two for $8.95?

Craig Boldman and Henry Scarpelli’s Archie (September 8, rerun) goes back to one of arithmetic’s traditional comic strip applications, that of working out the tip. Poor Moose is driving himself crazy trying to work out 15 percent of $8.95, probably from a quiz-inspired fear that if he doesn’t get it correct to the penny he’s completely wrong. Being able to do a calculation precisely is useful, certainly, but he’s forgetting that in this real-world application he gets some flexibility in what has to be calculated. He’d save some effort if he realized the tip for $8.95 is probably close enough to the tip for $9.00 that he could afford the difference, most obviously, and (if his budget allows) that he could just as well work out one-sixth the bill instead of fifteen percent, and give up that workload in exchange for sixteen cents.
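For the record, here’s the arithmetic on those shortcuts, using the bill from the strip:

```python
bill = 8.95
exact = 0.15 * bill     # the penny-perfect 15% tip
rounded = 0.15 * 9.00   # 15% of the rounded-up bill
one_sixth = bill / 6    # the one-sixth shortcut

print(exact)      # about 1.34
print(rounded)    # 1.35
print(one_sixth)  # about 1.49
```

Close enough for a lunch counter, any of them.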

Mark Parisi’s Off The Mark (September 8) is another entry into the world of anthropomorphized numbers, so you can probably imagine just what π has to say here.

• #### howardat58 1:47 am on Tuesday, 9 September, 2014

If teachers understood that when you bring in negative numbers you don’t just stick them to the left of the numbers you had before, you have actually created a new number system, for a new purpose, and made two copies of the original numbers (the natural numbers), stuck the copies together at zero, with one lot going to the left and one lot going to the right. It is completely arbitrary whether the ones to the right are called the positive numbers or the negative numbers. But of course that is far too mathematical for most people.


• #### Joseph Nebus 6:50 pm on Friday, 12 September, 2014

You are right, that the introduction of negative numbers as this thing sort of slapped onto the left end of the number line sets up for a lot of trouble, particularly in subtraction and probably also in multiplication and even greater-than and less-than comparisons. But I also see why it’s so attractive to introduce it that way; it feels natural, or at least it looks that way.

I wonder if there’s a way to introduce the subject more rigorously but still at a level that elementary school students, who generally aren’t very strong on abstract reasoning, will still find comfortable, and that won’t cause their parents to get upset that they’re being taught something too weirdly different from what they kind of remember from school themselves.


• #### Thomas Anderson 2:08 am on Wednesday, 10 September, 2014

In the case of Moose there, I’m tempted to say that, at least in part, he’s the victim of his education. If I had my way, students at some level would have a math class all about quickly winging it in real life situations. Shortcuts, etc. That would have been very helpful for me. It wasn’t until college physics when I realized you can start fudging numbers until you got “close enough.”


• #### Joseph Nebus 6:56 pm on Friday, 12 September, 2014

He is, yes, certainly a victim there. The training to work out the exact problem specified is a good one, but what’s missed is knowledge that he actually gets to pick the exact problem. That’s a real shame since so much mathematics is a matter of picking the exact problem you want to solve, and how perfectly you have to solve it.

(Admittedly, Moose, by the definition of his character, would struggle with a problem simpler to calculate too.)


• #### Thomas Anderson 7:27 pm on Friday, 12 September, 2014

That’s a great way of putting it. I feel like a lot of my mathematics woes would have been avoided if I’d learned that the “right” answer is the one that’s as accurate as you need it to be.

Ugh, I think back to college physics and wasting time trying to juggle so many decimals just because I thought the most accurate answer was the most desirable one, when I could have done the question in a quarter the time by saying “okay pi is 3, g is 10, and air resistance is a figment of an overactive imagination.”


• #### Joseph Nebus 7:16 pm on Sunday, 14 September, 2014

Decimals have this strange hypnotic effect on the human psyche. I don’t know what TV Tropes would call it but you see a piece of this in a science fiction show where a character is shown to be very smart by reeling out far more digits than could be meaningful or even useful, like the time Commander Data gave the travel time between galaxies down to fractions of a second.

It’s kind of a shame that fractions are kind of clumsy to work with, since the description of something as, say, one-quarter can avoid the trap of thinking that you have more precision than 0.25 really entitles you to. (And, yes, you can add a note about your margin for error, but I’m not convinced that people really internalize that, not without a lot of practice.)


• #### elkement 8:14 am on Thursday, 11 September, 2014

The joke involving those perfectly rational perfectly informed beings is my favorite – as it has several ‘levels’…


• #### Joseph Nebus 6:58 pm on Friday, 12 September, 2014

Yeah, it has at that, hasn’t it?


## Reading the Comics, August 29, 2014: Recurring Jokes Edition

Well, I did say we were getting to the end of summer. It’s taken only a couple days to get a fresh batch of enough mathematics-themed comics to include here, although the majority of them are about mathematics in ways that we’ve seen before, sometimes many times. I suppose that’s fair; it’s hard to keep thinking of wholly original mathematics jokes, after all. When you’ve had one killer gag about “537”, it’s tough to move on to “539” and have it still feel fresh.

Tom Toles’s Randolph Itch, 2 am (August 27, rerun) presents Randolph suffering the nightmare of contracting a case of entropy. Entropy might be the 19th-century mathematical concept that’s most achieved popular recognition: everyone knows it as some kind of measure of how disorganized things are, and that it’s going to ever increase, and if pressed there’s maybe something about milk being stirred into coffee that’s linked with it. The mathematical definition of entropy is tied to the probability one will find whatever one is looking at in a given state. Work out the probability of finding a system in a particular state — having particles in these positions, with these speeds, maybe these bits of magnetism, whatever — and multiply that by the logarithm of that probability. Work out that product for all the possible ways the system could possibly be configured, however likely or however improbable, just so long as they’re not impossible states. Then add together all those products over all possible states, and flip the sign; the logarithm of a probability is negative, so this makes the entropy come out positive. (This is when you become grateful for learning calculus, since that makes it imaginable to do all these multiplications and additions.) That’s the entropy of the system. And it applies to things with stunning universality: it can be meaningfully measured for the stirring of milk into coffee, to heat flowing through an engine, to a body falling apart, to messages sent over the Internet, all the way to the outcomes of sports brackets. It isn’t just body parts falling off.
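Here’s that recipe sketched for a toy system with just four states. The probabilities are made up, and the minus sign is the convention that makes the total come out nonnegative:

```python
import math

def entropy(probs):
    """-sum of p * ln(p) over the states, skipping impossible (p = 0) states."""
    return -sum(p * math.log(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]  # four equally likely states: disorganized
peaked = [0.97, 0.01, 0.01, 0.01]   # almost surely in one state: organized

print(entropy(uniform))  # ln(4), about 1.386: the larger entropy
print(entropy(peaked))   # much smaller
```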

Randy Glasbergen’s _The Better Half_ For the 28th of August, 2014.

Randy Glasbergen’s The Better Half (August 28) does the old joke about not giving up on algebra someday being useful. Do teachers in other subjects get this? “Don’t worry, someday your knowledge of the Panic of 1819 will be useful to you!” “Never fear, someday they’ll all look up to you for being able to diagram a sentence!” “Keep the faith: you will eventually need to tell someone who only speaks French that the notebook of your uncle is on the table of your aunt!”

Eric the Circle (August 28, by “Gilly” this time) sneaks into my pages again by bringing a famous mathematical symbol into things. I’d like to make a mention of the links between mathematics and music which go back at minimum as far as the Ancient Greeks and the observation that a lyre string twice as long produced the same note one octave lower, but lyres and strings don’t fit the reference Gilly was going for here. Too bad.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (August 28) is another strip to use a “blackboard full of mathematical symbols” as visual shorthand for “is incredibly smart stuff going on”. The symbols look to me like they at least started out as being meaningful — they’re the kinds of symbols I expect in describing the curvature of space, and which you can find by opening up a book about general relativity — though I’m not sure they actually stay sensible. (It’s not the kind of mathematics I’ve really studied.) However, work in progress tends to be sloppy, the rough sketch of an idea which can hopefully be made sound.

Anthony Blades’s Bewley (August 29) has the characters stare into space pondering the notion that in the vastness of infinity there could be another of them out there. This is basically the same existentially troublesome question of the recurrence of the universe in enough time, something not actually prohibited by the second law of thermodynamics and the way entropy tends to increase with the passing of time, but we have already talked about that.

## Reading the Comics, July 24, 2014: Math Is Just Hard Stuff, Right? Edition

Maybe there is no pattern to how Comic Strip Master Command directs the making of mathematics-themed comic strips. It hasn’t quite been a week since I had enough to gather up again. But it’s clearly the summertime anyway; the most common theme this time seems to be just that mathematics is some hard stuff, without digging much into particular subjects. I can work with that.

Pab Sungenis’s The New Adventures of Queen Victoria (July 19) brings in Erwin Schrödinger and his in-strip cat Barfly for a knock-knock joke about proof, with Andrew Wiles’s name dropped probably because he’s the only person who’s gotten to be famous for a mathematical proof. Wiles certainly deserves fame for proving Fermat’s Last Theorem and opening up what I understand to be a useful new field for mathematical research (Fermat’s Last Theorem by itself is nice but unimportant; the tools developed to prove it, though, that’s worthwhile), but remembering only Wiles does slight Richard Taylor, whose help Wiles needed to close a flaw in his proof.

Incidentally I don’t know why the cat is named Barfly. It has the feel to me of a name that was a punchline for one strip and then Sungenis felt stuck with it. As Thomas Dye of the web comic Newshounds said, “Joke names’ll kill you”. (I’m inclined to think that funny names can work, as the Marx Brothers, Fred Allen, and Vic and Sade did well with them, but they have to be a less demanding kind of funny.)

John Deering’s Strange Brew (July 19) uses a panel full of mathematical symbols scrawled out as the representation of “this is something really hard being worked out”. I suppose this one could also be filed under “rocket science themed comics”, but it comes from almost the first problem of mathematical physics: if you shoot something straight up, how long will it take to fall back down? The faster the thing starts up, the longer it takes to fall back, until at some speed — the escape velocity — it never comes back. This is because the size of the gravitational attraction between two things decreases as they get farther apart. At or above the escape velocity, the thing has enough speed that all the pulling of gravity, from the planet or moon or whatever you’re escaping from, will not suffice to slow the thing down to a stop and make it fall back down.

The escape velocity depends on the size of the planet or moon or sun or galaxy or whatever you’re escaping from, of course, and how close to the surface (or center) you start from. It also assumes you’re talking about the speed when the thing starts flying away, that is, that the thing doesn’t fire rockets or get a speed boost by flying past another planet or anything like that. And things don’t have to reach the escape velocity to be useful. Nothing that’s in earth orbit has reached the earth’s escape velocity, for example. I suppose that last case is akin to how you can still get some stuff done without getting out of the recliner.
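The formula lurking behind those paragraphs is $v_{esc} = \sqrt{2GM/r}$; here’s the calculation for the Earth’s surface, with the standard rounded-off constants:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # mass of the Earth, kg
R_earth = 6.371e6   # mean radius of the Earth, m

def escape_velocity(mass, radius):
    """Speed needed at distance `radius` to coast away without falling back."""
    return math.sqrt(2 * G * mass / radius)

print(escape_velocity(M_earth, R_earth))  # about 11,200 m/s
```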

Mel Henze’s Gentle Creatures (July 21) uses mathematics as the standard for proving intelligence exists. I’ve got a vested interest in supporting that proposition, but I can’t bring myself to say more than that it shows a particular kind of intelligence exists. I appreciate the equation of the final panel, though, as it can be pretty well generalized.

Bill Holbrook’s _Safe Havens_ for the 22nd of July, 2014.

Bill Holbrook’s Safe Havens (July 22) plays on mathematics’ reputation of being not very much a crowd-pleasing activity. That’s all right, although I think Holbrook makes a mistake by having the arena claim to offer a “lecture on the actual odds of beating the casino”, since the mathematics of gambling is just the sort of mathematics I think would draw a crowd. Probability enjoys a particular sweet spot for popular treatment: many problems don’t require great amounts of background to understand, and have results that are surprising, but which have reasons that are easy to follow and don’t require sophisticated arguments, and are about problems that are easy to imagine or easy to find interesting: cards being drawn, dice being rolled, coincidences being found, or secrets being revealed. I understand Holbrook’s editorial cartoon-type point behind the lecture notice he put up, but the venue would have better scared off audiences if it offered a lecture on, say, “Chromatic polynomials for rigidly achiral graphs: new work on Yamada’s invariant”. I’m not sure I could even explain that title in 1200 words.

Missy Meyer’s Holiday Doodles (July 22) reveals to me that apparently the 22nd of July was “Casual Pi Day”. Yeah, I suppose that passes. I didn’t see much about it in my Twitter feed, but maybe I need some more acquaintances who don’t write dates American-fashion.

Thom Bluemel’s Birdbrains (July 24) again uses mathematics — particularly, Calculus — as not just the marker for intelligence but also as The Thing which will decide whether a kid goes on to success in life. I think the dolphin (I guess it’s a dolphin?) parent is being particularly horrible here, as it’s not as if a “B+” is in any way a grade to be ashamed of, and telling kids it is either drives them to give up on caring about grades, or makes them send whiny e-mails to their instructors about how they need this grade and don’t understand why they can’t just do some make-up work for it. Anyway, it makes the kid miserable, it makes the kid’s teachers or professors miserable, and for crying out loud, it’s a B+.

(I’m also not sure whether a dolphin would consider a career at Sea World success in life, but that’s a separate and very sad issue.)

• #### BunnyHugger 7:40 pm on Thursday, 24 July, 2014 Permalink | Reply

I take Holbrook’s point not as “math is unpopular” but as “math is an anathema to gamblers” — because it demonstrates that their behavior is irrational, while they prefer to hold on to superstition and vain hope.

• #### Joseph Nebus 8:13 pm on Saturday, 26 July, 2014 Permalink | Reply

I think that you’re right about Holbrook’s point.

But I suspect the claimed lecture title would still attract gamblers, out of a belief that if they knew a little more about how the industry works then they’d be able to beat the house. The belief may be irrational but it’s hard to fight, and sometimes (as with various Blackjack card-counting schemes) it even pays out.

• #### Donna 1:47 am on Sunday, 27 July, 2014 Permalink | Reply

My friend Vanessa put casual pi day in her Facebook feed. Does that help? I didn’t know about this before this year. I can get you guys to friend each other if need be. She’s also a math prof.

• #### Joseph Nebus 8:17 pm on Sunday, 27 July, 2014 Permalink | Reply

Helps some, sure. I don’t have any particular reason to disapprove of Casual Pi Day, although I suppose it’s worse-timed for fitting into classes than the fourteenth of March will be.

And, thank you, although I haven’t gotten on Facebook yet. I’m still getting Twitter to make sense, I’m afraid.

• #### Donna 1:48 am on Sunday, 27 July, 2014 Permalink | Reply

Oh, and I’m so glad I guessed my wordpress password… hmmm maybe it’s not too secure after all…

• #### Joseph Nebus 8:18 pm on Sunday, 27 July, 2014 Permalink | Reply

Well, if nobody else guessed it first, that’s as secure as it needs to be, isn’t it?

• #### elkement 9:02 am on Sunday, 3 August, 2014 Permalink | Reply

“Barfly” looks like “Garfield” – isn’t that a copyright issue? (I did not research any background, such as if both cartoons are perhaps created by the same illustrator so it is maybe a stupid comment…)

• #### Joseph Nebus 6:39 am on Monday, 4 August, 2014 Permalink | Reply

Yeah, “Barfly” is just Garfield, and there’s no connection I’m aware of between the Queen Victoria cartoonist and the Garfield owners. Possibly Pab Sungenis uses an image that was given out for free use for publicity purposes. Possibly the original use of the character was for satirical purposes (I could easily imagine making a joke about Garfield repeating the same pose for the characters over and over, since the comic strip does do that), which would make Barfly at least start out as a fair use. Possibly Sungenis just figures, if they were going to sue him, they’d have done it by now, so what’s he got to lose? Those big web-cartoonist paychecks?

## Reading the Comics, June 4, 2014: Intro Algebra Edition

I’m not sure that there is a theme to the most recent mathematically-themed comic strips that I’ve seen, all from GoComics in the past week, but they put me in mind of the stuff encountered in learning algebra, so let’s run with that. It’s either that or I start making these “edition” titles into something absolutely and utterly meaningless, which it could be.

Marc Anderson’s Andertoons (May 30) uses the classic setup of a board full of equations to indicate some serious, advanced thinking going on, and then puts in a cute animal twist on things. I don’t believe that the equation signifies anything, but I have to admit I’m not sure. It looks quite plausibly like something which might turn up in quantum mechanics (the “h” and “c” and lambda are awfully suggestive), so if Anderson made it up out of whole cloth he did an admirable job. If he didn’t make it up and someone recognizes it, please, let me know; I’m curious what it might be.

Marc Anderson reappears on the second of June with the classic reluctant student upset with the teacher who knew all along what x was. Knowledge of what x is is probably the source of most jokes about learning algebra, or maybe mathematics overall, and it’s amusing to me anyway that what we really care about is not what x is particularly — we don’t even do ourselves any harm if we call it some other letter, or for that matter an empty box — but learning how to figure out what values in the place of x would make the relationship true.

Jonathan Lemon’s Rabbits Against Magic (May 31) has the one-eyed rabbit Weenus doing miscellaneous arithmetic on the way to punning about things working out. I suppose to get to that punch line you have to either have mathematics or gym class as the topic, and I wouldn’t be surprised if Lemon’s done a version with those weight-lifting machines on screen. That’s not because I doubt his creativity, just that it’s the logical setup.

Eric Scott’s Back In The Day (June 2) has a pair of dinosaurs wondering about how many stars there are. Astronomy has always inspired mathematics. After one counts the number of stars one gets to wondering how big the universe could be — Archimedes, classically, estimated the universe was about big enough to hold 10^63 grains of sand — or how far away the sun might be — which the Ancient Greeks were able to estimate to the right order of magnitude on geometric grounds — and I imagine that looking deep into the sky can inspire the idea that the infinitely large and the infinitely small are at least things we can try to understand. Trying to count stars is a good start.

Steve Boreman’s Little Dog Lost (June 2) has a stick insect provide the excuse for some geometry puns.

Brian and Ron Boychuk’s The Chuckle Brothers (June 4) has a pie shop gag that I bet the Boychuks are kicking themselves for not having published back in mid-March.

• #### ivasallay 8:49 pm on Wednesday, 4 June, 2014 Permalink | Reply

My favorites were the Andertoons and the Chuckle Brothers.

• #### elkement 3:07 pm on Thursday, 5 June, 2014 Permalink | Reply

Of course I try to solve the puzzle – does the equations in the first cartoon mean anything? I confess – it does not ring a bell immediately.

The dimension of the first term is really ‘energy’ – so there has to be some truth to parts of it.
But what are these subscripts x,y,z?

E is typically used to denote constant energy – here it seems to be time-dependent. I first thought it’s some potential varying with time (as the last letter is V – typically potential energy)… but then I saw that E is also in the coefficients on the right-hand side.

If I ever spot something like this in a physics book I will try to find this post again and post another comment!

• #### Joseph Nebus 9:51 pm on Friday, 6 June, 2014 Permalink | Reply

Yeah, those are just about the points that stumped me: I could imagine, for example, the symbols having gotten a little confused and the E on the right-hand side of the equation meant to be strength of an electric field, in which case it makes sense to have x and y and z as spatial subscripts, and the E on the left-hand-side energy. This is sloppy, but it seems like the kind of sloppiness that plausibly happens in the middle of working out a problem. The second line, defining E and B again, seems like it’s consistent with that.

But then I’m not sure why an electric and magnetic field would be measured only in two dimensions while there’s a third, marked by z, in the problem.

I suppose it’s all nonsense, but it’s awfully good nonsense. If it does turn out to be something I’d love to hear it.

• #### irenehelenowski 10:36 am on Friday, 6 June, 2014 Permalink | Reply

I’ll have to check these out! I’m also a fan of Strange Quark comics from Dallin Durfee. Doesn’t always have a mathematics or physics theme but always gives an exercise in logic :)

• #### Joseph Nebus 9:40 pm on Friday, 6 June, 2014 Permalink | Reply

Oh, thank you. I’m really oblivious about web comics, for no really good reason, so have to count on people referring me to them.

• #### irenehelenowski 11:11 am on Saturday, 7 June, 2014 Permalink | Reply

You’re welcome. I am too but just discovered him on twitter, LOL. Hope you enjoy his posts.

• #### Joseph Nebus 6:01 am on Sunday, 8 June, 2014 Permalink | Reply

I’ll do my best. Thank you.

## The ideal gas equation

I did want to mention that the CarnotCycle big entry for the month is “The Ideal Gas Equation”. The Ideal Gas equation is one of the more famous equations that isn’t F = ma or E = mc^2, which I admit isn’t a group of really famous equations; but, at the very least, its content is familiar enough.

If you keep a gas at constant temperature, and increase the pressure on it, its volume decreases, and vice-versa, known as Boyle’s Law. If you keep a gas at constant volume, and decrease its pressure, its temperature decreases, and vice-versa, known as Gay-Lussac’s law. Then Charles’s Law says if a gas is kept at constant pressure, and the temperature increases, then the volume increases, and vice-versa. (Each of these is probably named for the wrong person, because they always are.) The Ideal Gas equation combines all these relationships into one, neat, easily understood package.
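The combined relationship is compact enough to sketch in code, and the one equation recovers each named law by holding the right quantity fixed:

```python
R = 8.314  # J / (mol K), the gas constant

def pressure(n_mol, temp_k, volume_m3):
    """Ideal gas law, P = n R T / V; Boyle's, Charles's, and Gay-Lussac's
    laws are what you get by holding one of the three quantities fixed."""
    return n_mol * R * temp_k / volume_m3

# Boyle's law: at fixed temperature, halving the volume doubles the pressure.
p_full = pressure(1.0, 300.0, 0.024)
p_half = pressure(1.0, 300.0, 0.012)
print(p_half / p_full)  # 2.0
```

Holding volume fixed instead and varying the temperature recovers Gay-Lussac’s law the same way.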

Peter Mander describes some of the history of these concepts and equations, and how they came together, with the interesting way that they connect to the absolute temperature scale, and of absolute zero. Absolute temperatures — Kelvin — and absolute zero are familiar enough ideas these days that it’s difficult to remember they were ever new and controversial and intellectually challenging ideas to develop. I hope you enjoy.

If you received formal tuition in physical chemistry at school, then it’s likely that among the first things you learned were the 17th/18th century gas laws of Mariotte and Gay-Lussac (Boyle and Charles in the English-speaking world) and the equation that expresses them: PV = kT.

It may be that the historical aspects of what is now known as the ideal (perfect) gas equation were not covered as part of your science education, in which case you may be surprised to learn that it took 174 years to advance from the pressure-volume law PV = k to the combined gas law PV = kT.

The lengthy timescale indicates that putting together closely associated observations wasn’t regarded as a must-do in this particular era of scientific enquiry. The French physicist and mining engineer Émile Clapeyron eventually created the combined gas equation, not for its own sake, but because he needed an…

View original post 1,628 more words

## Underground Maps, And Making Your Own

By way of the UK Ed Chat web site I’ve found something that’s maybe useful in mathematical education and yet just irresistible to my sort of mind. It’s the Metro Map Creator, a tool for creating maps in the style and format of those topographical, circuit-diagram-style maps made famous by the London Underground and many subway or rail systems with complex networks to describe.

UK Ed Chat is of the opinion it can be used to organize concepts by how they lead to one another and how they connect to other concepts. I can’t dispute that, but what tickles me is that it’s just so beautiful to create maps like this. There’s a reason I need to ration the time I spend playing Sid Meier’s Railroads.

It also brings to mind that in the early 90s I attended a talk by someone who was in the business of programming automated mapmaking software. If you accept that it’s simple to draw a map, as in a set of lines describing a location, at whatever scale and over whatever range is necessary, there’s still an important chore that’s less obviously simple: how do you label it? Labels for the things in the map have to be close to — but not directly on — the thing being labelled, but they also have to be positioned so as not to overlap other labels that have to be there, and also to not overlap other important features, although they might be allowed to run over something unimportant. Add some other constraints, such as the label being allowed to rotate a modest bit but not too much (we can’t really have a city name be upside-down), and working out a rule by which to set a label’s location becomes a neat puzzle.

As I remember from that far back, the solution (used then) was to model each label’s position as if it were an object on the end of a spring which had some resting length that wasn’t zero — so that the label naturally moves away from the exact position of the thing being labelled — but with a high amount of friction — as if the labels were being dragged through jelly — with some repulsive force given off by the things labels must not overlap, and simulate shaking the entire map until it pretty well settles down. (In retrospect I suspect the lecturer was trying to talk about Monte Carlo simulations without getting into too much detail about that, when the simulated physics of the labels were the point of the lecture.)

This always struck me as a neat solution as it introduced a bit of physics into a problem which hasn’t got any on the assumption that a stable solution to the imposed physical problem will turn out to be visually appealing. It feels like an almost comic reversal of the normal way that mathematics and physics interact.
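The label-settling scheme, as I remember it, can be sketched in a few lines. Every constant here (spring rest length, repulsion strength, damping) is a made-up illustrative number, not anything from the lecture:

```python
import math
import random

def settle_labels(points, rest=1.0, steps=400):
    """Toy version of the spring-and-friction label placer: each label hangs
    on a spring of nonzero rest length anchored to its point, labels repel
    one another, and heavy damping lets the whole map settle."""
    random.seed(1)  # reproducible jitter for the starting positions
    labels = [(x + random.uniform(-0.1, 0.1), y + random.uniform(-0.1, 0.1))
              for x, y in points]
    for _ in range(steps):
        moved = []
        for i, (lx, ly) in enumerate(labels):
            px, py = points[i]
            dx, dy = lx - px, ly - py
            d = math.hypot(dx, dy) or 1e-9
            # spring: push the label toward the rest length from its anchor
            fx = -(d - rest) * dx / d
            fy = -(d - rest) * dy / d
            # repulsion: push the label away from every other label
            for j, (ox, oy) in enumerate(labels):
                if j != i:
                    rx, ry = lx - ox, ly - oy
                    r = math.hypot(rx, ry) or 1e-9
                    fx += 0.2 * rx / r ** 2
                    fy += 0.2 * ry / r ** 2
            # heavy damping: a small step along the net force, as if in jelly
            moved.append((lx + 0.05 * fx, ly + 0.05 * fy))
        labels = moved
    return labels

# Two nearby towns: their labels settle near, but not on top of, their points.
labels = settle_labels([(0.0, 0.0), (1.0, 0.0)])
```

A real map-labelling system would add the other constraints (no overlapping important features, limited rotation), but the settle-and-shake idea is already visible here.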

Pardon me, now, though, as I have to go design many, many imaginary subway systems.

• #### ivasallay 2:50 am on Monday, 5 May, 2014 Permalink | Reply

I had no idea that labeling maps could be so involved! Have fun!

• #### Joseph Nebus 3:48 am on Tuesday, 6 May, 2014 Permalink | Reply

I wouldn’t have had any idea that map-labelling could be so involved either, if it weren’t for that lecture. I’m delighted, though.

And isn’t it wonderful how many things turn out to be fascinatingly involved once they have the chance?

• #### ivasallay 1:26 am on Thursday, 8 May, 2014 Permalink | Reply

Yes, indeed!

## Reading the Comics, April 21, 2014: Bill Amend In Name Only Edition

Recently the National Council of Teachers of Mathematics met in New Orleans. Among the panelists was Bill Amend, the cartoonist for FoxTrot, who gave a talk about the writing of mathematics comic strips. Among the items he pointed out as challenges for mathematics comics — and partly applicable to any kind of teaching of mathematics — were:

• Accessibility
• Stereotypes
• What is “easy” and “hard”?
• I’m not exactly getting smarter as I age
• Newspaper editors might not like them

Besides the talk (and I haven’t found a copy of the PowerPoint slides of his whole talk) he also offered a collection of FoxTrot comics with mathematical themes, good for download and use (with credit given) for people who need to stock up on them. The link might expire at any point, note, so if you want them, go now.

While that makes a fine lead-in to a collection of mathematics-themed comic strips around here I have to admit the ones I’ve seen the last couple weeks haven’t been particularly inspiring, and none of them are by Bill Amend. They’ve covered a fair slate of the things you can write mathematics comics about — physics, astronomy, word problems, insult humor — but there’s still interesting things to talk about. For example:

## How Dirac Made Every Number

A couple weeks back I offered a challenge taken from Graham Farmelo’s biography (The Strangest Man) of the physicist Paul Dirac. The physicist had been invited into a game to create whole numbers by using exactly four 2’s and the normal arithmetic operations, for example:

$1 = \frac{2 + 2}{2 + 2}$

$2 = 2^{2 - \left(2 \div 2\right)}$

$4 = 2^2 \div 2 + 2$

$8 = 2^{2^{2}} \div 2$

While four 2’s have to be used, and not any other numerals, it’s permitted to use the 2’s stupidly, as every one of my examples here does. Dirac went off and worked out a scheme for producing any positive integer from them. Now, if all goes well, Dirac’s answer should be behind this cut, and it won’t have been spoiled in RSS readers or in the e-mails sent out to subscribers.

• #### delarsea 9:29 pm on Friday, 18 April, 2014 Permalink | Reply

Reblogged this on How and Why?.

• #### abyssbrain 2:54 am on Friday, 10 April, 2015 Permalink | Reply

Nice post. I’m not familiar with the history of this problem and that it’s attributed the Dirac. Though I have seen the version of using the 4’s to produce every number.

• #### Joseph Nebus 3:58 am on Friday, 17 April, 2015 Permalink | Reply

I don’t know that the problem traces to Dirac, but at least a biography of him did credit him with a solution using exactly four 2’s. Using 4’s to make numbers is another wonderful puzzle, though.

## Weightlessness at the Equator (Whiteboard Sketch #1)

The mathematics blog Scientific Finger Food has an interesting entry, “Weightlessness at the Equator (Whiteboard Sketch #1)”, which looks at the sort of question that’s easy to imagine when you’re young: since gravity pulls you to the center of the earth, and the earth’s spinning pushes you away (unless we’re speaking precisely, but you know what that means), how fast would the planet have to spin so that a person on the equator wouldn’t feel any weight?

It’s a straightforward problem, one a high school student ought to be able to do. Sebastian Templ works the problem out, though, including the all-important diagram that shows the crucial part: which calculation to do.

In reality, the answer doesn’t much matter, since a planet spinning nearly fast enough to allow for weightlessness at the equator would be spinning so fast it couldn’t hold itself together. A more advanced version of this problem could make use of that: given some measure of how strongly rock will hold itself together, what’s the fastest that the planet can spin before it falls apart? And a yet more advanced course might work out how other phenomena, such as tides or the precession of the poles, come into play. Eventually, one might go on to compose highly-regarded works of hard science fiction, if you’re willing to start from the questions easy to imagine when you’re young.
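The calculation itself is short enough to sketch here: weightlessness means gravity is entirely used up supplying the centripetal acceleration, so ω²R = g. (Earth figures below are rounded; this is my sketch, not Templ's worksheet.)

```python
import math

# Weightlessness at the equator: the spin is so fast that gravity does
# nothing but supply the centripetal acceleration, i.e. w^2 R = g.
g = 9.81       # m/s^2, surface gravity
R = 6.371e6    # m, Earth's mean radius

omega = math.sqrt(g / R)                 # required angular speed, rad/s
period_min = 2 * math.pi / omega / 60.0  # one rotation, in minutes
print(round(period_min, 1))  # about 84.4 -- an 84-minute "day"
```

That 84-minute figure is, not coincidentally, the same as the orbital period of a satellite skimming the surface: at that spin rate the equator is effectively in orbit.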

At the present time, our Earth does a full rotation every 24 hours, which results in day and night. Just like on a carrousel, its inhabitants (and, by the way, all the other stuff on and of the planet) are pushed “outwards” due to the centrifugal force. So we permanently feel an “upwards” pulling force thanks to the Earth’s rotation. However, the centrifugal force is much weaker than the centripetal force, which is directed towards the core of the planet and usually called “gravitation”. If this wasn’t the case, we would have serious problems holding our bodies down to the ground. (The ground, too, would have troubles holding itself “to the ground”.)

Especially on the equator, the centrifugal and the gravitational force are antagonistic forces: the one points “downwards” while the other points “upwards”.

# How fast would the Earth have to spin in order to cause weightlessness at the…

View original post 201 more words

## Can You Be As Clever As Dirac For A Little Bit

I’ve been reading Graham Farmelo’s The Strangest Man: The Hidden Life of Paul Dirac, which is a quite good biography about a really interestingly odd man and important physicist. Among the things mentioned is that at one point Dirac was invited in to one of those number-challenge puzzles that even today sometimes make the rounds of the Internet. This one is to construct whole numbers using exactly four 2’s and the normal, non-exotic operations — addition, subtraction, exponentials, roots, the sort of thing you can learn without having to study calculus. For example:

$1 = \left(2 \div 2\right) \cdot \left(2 \div 2\right)$
$2 = 2 \cdot 2^{\left(2 - 2\right)}$
$3 = 2 + \left(\frac{2}{2}\right)^2$
$4 = 2 + 2 + 2 - 2$

Now these aren’t unique; for example, you could also form 2 by writing $2 \div 2 + 2 \div 2$, or as $\sqrt{2 + 2} \cdot \left(2 \div 2\right)$. But the game is to form as many whole numbers as you can, and to find the highest number you can.

Dirac went to work and, complained his friends, broke the game because he found a formula that can produce any positive whole number, using exactly four 2’s.

I couldn’t think of it, and had to look to the endnotes to find what it was, but you might be smarter than me, and might have fun playing around with it before giving up and looking in the endnotes yourself. The important things are, it has to produce any positive integer, it has to use exactly four 2’s (although they may be used stupidly, as in the examples I gave above), and it has to use only common arithmetic operators (an ambiguous term, I admit, but, if you can find it on a non-scientific calculator or in a high school algebra textbook outside the chapter warming you up to calculus you’re probably fine). Good luck.
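One way to play along, without spoiling Dirac's answer, is to check candidate expressions by machine. Here are a few of the examples from above, written with exact rational arithmetic so there's no floating-point fudging:

```python
from fractions import Fraction

# Spot-check a few four-2's expressions with exact arithmetic.
# (No spoilers for Dirac's general formula here.)
two = Fraction(2)
examples = {
    1: (two / two) * (two / two),
    2: two * two ** (two - two),
    3: two + (two / two) ** two,
    4: two + two + two - two,
}
for target, value in examples.items():
    assert value == target, (target, value)
print("all four-2's examples check out")
```

Each expression uses exactly four 2's, some of them stupidly, which the rules explicitly allow.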

• #### elkement 5:36 pm on Thursday, 10 April, 2014 Permalink | Reply

I have read the book last year – but I cannot remember! So I will play a bit before I will give up :-)

• #### Joseph Nebus 3:34 am on Friday, 11 April, 2014 Permalink | Reply

Oh, do. I’ll give the puzzle a couple more days before putting up a spoiler-protected answer.

It is one that made me slap my forehead and realize of course, if that helps any.

## The Liquefaction of Gases – Part II

The CarnotCycle blog has a continuation of last month’s The Liquefaction of Gases, as you might expect, named The Liquefaction of Gases, Part II, and it’s another intriguing piece. The story here is about how the theory of cooling, and of phase changes — under what conditions gases will turn into liquids — was developed. There’s a fair bit of mathematics involved, although most of the important work is in polynomials. If you remember in algebra (or in pre-algebra) drawing curves for functions that had x^3 in them, and finding how they sometimes had one and sometimes had three real roots, then you’re well on your way to understanding the work which earned Johannes van der Waals the 1910 Nobel Prize in Physics.
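That one-root/three-roots business can be read off the cubic's discriminant, without drawing anything. A little sketch (my own illustration, not anything from the CarnotCycle post) for the depressed cubic t³ + pt + q:

```python
def real_root_count(p, q):
    """Distinct real roots of the depressed cubic t^3 + p t + q = 0,
    read off from the sign of the discriminant -4 p^3 - 27 q^2."""
    disc = -4 * p ** 3 - 27 * q ** 2
    if disc > 0:
        return 3   # the curve crosses the axis three times
    if disc < 0:
        return 1   # it crosses only once
    return 2       # borderline case: a repeated root

print(real_root_count(-3, 1))  # 3
print(real_root_count(3, 1))   # 1
```

In the van der Waals picture, passing between the three-real-roots and one-real-root regimes is exactly what happens as an isotherm crosses the critical temperature.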

Future Nobel Prize winners both. Kamerlingh Onnes and Johannes van der Waals in 1908.

On Friday 10 July 1908, at Leiden in the Netherlands, Kamerlingh Onnes succeeded in liquefying the one remaining gas previously thought to be non-condensable – helium – using a sequential Joule-Thomson cooling technique to drive the temperature down to just 4 degrees above absolute zero. The event brought to a conclusion the race to liquefy the so-called permanent gases, following the revelation that all gases have a critical temperature below which they must be cooled before liquefaction is possible.

This crucial fact was established by Dr. Thomas Andrews, professor of chemistry at Queen’s College Belfast, in his groundbreaking study of the liquefaction of carbon dioxide, “On the Continuity of the Gaseous and Liquid States of Matter”, published in the Philosophical Transactions of the Royal Society of London in 1869.

As described in Part I of…

View original post 2,047 more words

## Reading the Comics, February 21, 2014: Circumferences and Monkeys Edition

And now to finish off the bundle of mathematics comics that I had run out of time for last time around. Once again the infinite monkeys situation comes into play; there’s also more talk about circumferences than average.

Brian and Ron Boychuk’s The Chuckle Brothers (February 13) does a little wordplay on how “circumference” sounds like it could kind of be a knightly name, which I remember seeing in a minor Bugs Bunny cartoon back in the day. “Circumference” the word derives from the Latin, “circum” meaning around and “fero” meaning “to carry”; and to my mind, the really interesting question is why do we have the words “perimeter” and “circumference” when it seems like either one would do? “Circumference” does have the connotation of referring to just the boundary of a circular or roughly circular form, but why should the perimeter of circular things be so exceptional as to usefully have its own distinct term? But English is just like that, I suppose.

Paul Trapp’s Thatababy (February 13) brings back the infinite-monkey metaphor. The infinite monkeys also appear in John Deering’s Strange Brew (February 20), which is probably just a coincidence based on how successfully tossing in lots of monkeys can produce giggling. Or maybe the last time Comic Strip Master Command issued its orders it sent out a directive, “more infinite monkey comics!”

Ruben Bolling’s Tom The Dancing Bug (February 14) delivers some satirical jabs about Biblical textual inerrancy by pointing out where the Bible makes mathematical errors. I tend to think nitpicking the Bible is mostly a waste of good time on everyone’s part, although the handful of arithmetic errors are a fair wedge against the idea that the text can’t have any errors and requires no interpretation or even forgiveness; of the two cases, the Ezra one is the stronger. The 1 Kings case gives both the circumference and the diameter for a vessel, and the two numbers are incompatible. But it isn’t hard to come up with a rationalization that brings them plausibly in line (you have to suppose that the diameter goes from outer wall to outer wall, while the circumference is that of an inner wall, which may be a bit odd but isn’t actually ruled out by the text), which is why I think it’s the weaker one.

Bill Whitehead’s Free Range (February 16) uses a blackboard full of mathematics as a generic “this is something really complicated” signifier. The symbols as written don’t make a lot of sense, although I admit it’s common enough while working out a difficult problem to work out weird bundles of partly-written expressions or abuses of notation (like on the middle left of the board, where a bracket around several equations is shown as being less than a bracket around fewer equations), just because ideas are exploding faster than they can be written out sensibly. Hopefully once the point is proven you’re able to go back and rebuild it all in a form which makes sense, either by going into standard notation or by discovering that you have some new kind of notation that has to be used. It’s very exciting to come up with some new bit of notation, even if it’s only you and a couple people you work with who ever use it. Developing a good way of writing a concept might be the biggest thrill in mathematics, even better than proving something obscure or surprising.

Jonathan Lemon’s Rabbits Against Magic (February 18) uses a blackboard full of mathematics symbols again to give the impression of someone working on something really hard. The first two lines of equations on 8-Ball’s board are the time-dependent Schrödinger Equations, describing how the probability distribution for something evolves in time. The last line is Euler’s formula, the curious and fascinating relationship between pi, the base of the natural logarithm e, imaginary numbers, one, and zero.
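Euler's formula, for the skeptical, is easy to check numerically: e^(iπ) + 1 really does come out to zero, up to floating-point rounding.

```python
import cmath
import math

# Euler's formula in one line: e^(i*pi) + 1 should be zero.
z = cmath.exp(1j * math.pi) + 1
print(abs(z))  # about 1.2e-16: zero, up to floating-point rounding
```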

Todd Clark’s Lola (February 20) uses the person-on-an-airplane setup for a word problem, in this case, about armrest squabbling. Interesting to me about this is that the commenters get into a squabble about how airplane speeds aren’t measured in miles per hour but rather in knots, nautical miles per hour, although nobody not involved in air traffic control really sees that. What amuses me about this is that what units you use to measure the speed of the plane don’t matter; the kind of work you’d do for a plane-travelling-at-speed problem is exactly the same whatever the units are. For that matter, none of the unique properties of the airplane, such as that it’s travelling through the air rather than on a highway or a train track, matter at all to the problem. The plane could be swapped out and replaced with any other method of travel without affecting the work — except that airplanes are more likely than trains (let’s say) to have an armrest shortage and so the mock question about armrest fights is one in which it matters that it’s on an airplane.

Bill Watterson’s Calvin and Hobbes (February 21) is one of the all-time classics, with Calvin wondering about just how fast his sledding is going, and being interested right up to the point that Hobbes identifies mathematics as the way to know. There’s a lot of mathematics to be seen in finding how fast they’re going downhill. Measuring the size of the hill and how long it takes to go downhill provides the average speed, certainly. Working out how far one drops, as opposed to how far one travels, is a trigonometry problem. Trying the run multiple times, and seeing how the speed varies, introduces statistics. Trying to answer questions like when are they travelling fastest — at a single instant, rather than over the whole run — introduces differential calculus. Integral calculus could be found from trying to tell what the exact distance travelled is. Working out the shortest or the fastest possible trips introduces the calculus of variations, which leads in remarkably quick steps to optics, statistical mechanics, and even quantum mechanics. It’s pretty heady stuff, but I admit, yeah, it’s math.
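The first couple of those steps are each a one-liner. The hill's length, steepness, and run time below are invented for illustration; nothing here is from the strip:

```python
import math

# The first steps of Calvin's problem: average speed from distance and
# time, vertical drop from slope length and angle. All numbers invented.
slope_length_m = 50.0   # how far the sled travels along the hill
run_time_s = 10.0       # how long the run takes
slope_angle_deg = 20.0  # how steep the hill is

average_speed = slope_length_m / run_time_s  # m/s, the simple average
vertical_drop = slope_length_m * math.sin(math.radians(slope_angle_deg))
print(average_speed)            # 5.0 m/s
print(round(vertical_drop, 1))  # about 17.1 m of altitude lost
```

Everything past this point — instantaneous speed, run-to-run variation — needs the calculus and statistics mentioned above.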

## The Liquefaction of Gases – Part I

I know, or at least I’m fairly confident, there’s a couple readers here who like deeper mathematical subjects. It’s fine to come up with simulated Price is Right games or figure out what grades one needs to pass the course, but those aren’t particularly challenging subjects.

But those are hard to write, so, while I stall, let me point you to CarnotCycle, which has a nice historical article about the problem of liquefaction of gases, a problem that’s not just steeped in thermodynamics but in engineering. If you’re a little familiar with thermodynamics you likely won’t be surprised to see names like William Thomson, James Joule, or Willard Gibbs turn up. I was surprised to see in the additional reading T O’Conor Sloane show up; science fiction fans might vaguely remember that name, as he was the editor of Amazing Stories for most of the 1930s, in between Hugo Gernsback and Raymond Palmer. It’s often a surprising world.


On Monday 3 December 1877, the French Academy of Sciences received a letter from Louis Cailletet, a 45 year-old physicist from Châtillon-sur-Seine. The letter stated that Cailletet had succeeded in liquefying both carbon monoxide and oxygen.

Liquefaction as such was nothing new to 19th century science, it should be said. The real news value of Cailletet’s announcement was that he had liquefied two gases previously considered ‘non condensable’.

While a number of gases such as chlorine, carbon dioxide, sulfur dioxide, hydrogen sulfide, ethylene and ammonia had been liquefied by the simultaneous application of pressure and cooling, the principal gases comprising air – nitrogen and oxygen – together with carbon monoxide, nitric oxide, hydrogen and helium, had stubbornly refused to liquefy, despite the use of pressures up to 3000 atmospheres. By the mid-1800s, the general opinion was that these gases could not be converted into liquids under any circumstances.

But in…


• #### Damyanti 6:47 am on Thursday, 13 February, 2014

I usually run far from all topics science-related, but I like this little bit of history here.


• #### Joseph Nebus 11:46 pm on Thursday, 13 February, 2014

I’m glad you do enjoy. I like a good bit of history myself, mathematics and science included, and might go looking for more topics that have a historical slant.


• #### LFFL 10:43 pm on Sunday, 23 February, 2014

You lost me at “deeper mathematical subjects”. I barely have addition & subtraction down.


• #### Joseph Nebus 4:43 am on Monday, 24 February, 2014

Aw, but the deeper stuff is fascinating. For example, imagine you have a parcel of land with some really complicated boundary, all sorts of nooks and crannies and corners and curves and all that. If you just walk around the outside, keeping track of how far you walk and in what direction, then you can use a bit of calculus to tell exactly how much area is enclosed by the boundary, however complicated a shape it is.

Isn’t that amazing? You never even have to set foot inside the property, just walk around its boundary.
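That walk-the-boundary trick is, in the continuous case, a consequence of Green’s theorem; for a boundary recorded as a list of corner points, the discrete version is the shoelace formula. A little sketch of my own (not anything from the comment thread):

```python
# Shoelace formula: the area of a polygon from its boundary corners alone,
# a discrete version of the walk-around-the-boundary calculus trick.
def shoelace_area(vertices):
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]   # wrap around back to the start
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# An L-shaped lot, walked corner to corner:
# a 3-by-1 strip along the bottom plus a 1-by-1 block on top, total area 4.
lot = [(0, 0), (3, 0), (3, 1), (1, 1), (1, 2), (0, 2)]
```

Here `shoelace_area(lot)` comes out to 4, matching the strip plus the block, without the formula ever looking anywhere inside the lot.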


• #### LFFL 4:46 am on Monday, 24 February, 2014

Wow. I’m impressed by your brain power. I just wasn’t born with a brain for much math beyond the basics.


• #### Joseph Nebus 4:25 am on Tuesday, 25 February, 2014

Aw, you’re kind to me, and unkind to you. It’s not my brainpower, at least. The result is a consequence of some pretty important work you learn early on in calculus, and I’d expect you could understand the important part of it without knowing more than the basics.


## CarnotCycle on the Gibbs-Helmholtz Equation

I’m a touch late discussing this and can only plead that it has been December after all. Over on the CarnotCycle blog — which is focused on thermodynamics in a way I rather admire — there was recently a discussion of the Gibbs-Helmholtz Equation, which turns up in thermodynamics classes. The post goes a bit better than the class I remember by showing a couple examples of actually using the equation to understand how chemistry works. Well, it’s so easy in a class like this to get busy working with symbols and forget that thermodynamics is a supremely practical science [1].

The Gibbs-Helmholtz Equation — named for Josiah Willard Gibbs and for Hermann von Helmholtz, both of whom developed it independently (Helmholtz first) — comes in a couple of different forms, which CarnotCycle describes. All these different forms are meant to describe whether a particular change in a system is likely to happen. CarnotCycle’s discussion gives a couple of examples of actually working out the numbers, including for the Haber process, which I don’t remember reading about in calculative detail before. So I wanted to recommend it as a bit of practical mathematics or physics.
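For reference — this is a standard statement of the equation, not something quoted from CarnotCycle’s post — one common form reads:

```latex
\left( \frac{\partial}{\partial T} \left( \frac{\Delta G}{T} \right) \right)_{P} = -\frac{\Delta H}{T^{2}}
```

That is, watching how ΔG/T varies with temperature at constant pressure tells you the enthalpy change ΔH, which is the sort of link between measurable quantities that makes worked examples like the Haber process calculation possible.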

[1] I think it was Stephen Brush who pointed out that many of the earliest papers in thermodynamics appeared in railroad industry journals, because the problems of efficiently getting power from engines, and of how materials change when they get below freezing, are critically important to turning railroads from experimental contraptions into a productive industry. The observation might not be original to him. The observation also might have been Wolfgang Schivelbusch’s instead.

## What’s The Point Of Hamiltonian Mechanics?

The Diff_Eq twitter feed had a link the other day to a fine question put on StackExchange.com: What’s the Point of Hamiltonian Mechanics? Hamiltonian Mechanics is a different way of expressing the laws of Newtonian mechanics from the F-equals-m-a business you learn in high school, and it gets introduced in the Mechanics course you take early on as a physics major.

At this level of physics you’re mostly concerned with, well, the motion of falling balls, of masses hung on springs, of pendulums swinging back and forth, of satellites orbiting planets. This is all nice tangible stuff and you can work problems out pretty well if you know all the forces the moving things exert on one another, forming a lot of equations that tell you how the particles are accelerating, from which you can get how the velocities are changing, from which you can get how the positions are changing.

The Hamiltonian formulation starts out looking like it’s making life harder, because instead of looking just at the positions of particles, it looks at both the positions and the momenta (each the product of the mass and the velocity). However, instead of looking at the forces, particularly, you look at the energy in the system, which typically is going to be the kinetic energy plus the potential energy. The energy is a nice thing to look at, because it’s got some obvious physical meaning, and you should know how it changes over time, and because it’s just a number (a scalar, in the trade) instead of a vector, the way forces are.

And here’s a neat thing: the way the position changes over time is found by looking at how the energy would change if you made a little change in the momentum; and the way the momentum changes over time is found by looking at how the energy would change if you made a little change in the position. As that sentence suggests, that’s awfully pretty; there’s something aesthetically compelling about treating position and momentum so very similarly. (They’re not treated in exactly the same way, but it’s close enough.) And writing the mechanics problem this way, as position and momentum changing in time, means we can use tools that come from linear algebra and the study of matrices to answer big questions like whether the way the system moves is stable, which are hard to answer otherwise.
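Written out, those rules are Hamilton’s equations: the time derivative of position is ∂H/∂p, and the time derivative of momentum is −∂H/∂q (that minus sign is the “not exactly the same way” part). As a toy sketch of my own — not anything from the StackExchange discussion — here’s a mass on a spring stepped forward in time using nothing but those two rules:

```python
# Hamilton's equations for a mass on a spring:
# H(q, p) = p^2/(2m) + k*q^2/2, so dq/dt = dH/dp = p/m and dp/dt = -dH/dq = -k*q.
# Semi-implicit Euler (update p first, then q) respects the symplectic structure,
# so the energy stays close to its starting value instead of drifting.

def simulate(q, p, m=1.0, k=1.0, dt=0.001, steps=6283):
    for _ in range(steps):
        p -= k * q * dt      # dp/dt = -dH/dq
        q += (p / m) * dt    # dq/dt =  dH/dp
    return q, p

# One full period is 2*pi seconds when m = k = 1, so after about 6.283 seconds
# the mass should be back about where it started.
q, p = simulate(1.0, 0.0)
energy = p * p / 2.0 + q * q / 2.0   # stays near the initial 0.5
```

The point of the sketch is that nothing in the loop mentions force: the whole motion comes out of how the energy would change with little changes in q and p.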

The questioner who started the StackExchange discussion pointed out that before they get to Hamiltonian mechanics, the course also introduced the Euler-Lagrange formulation, which looks a lot like the Hamiltonian, and which was developed first, and gets introduced to students first; why not use that? Here I have to side with most of the commenters about the Hamiltonian turning out to be more useful when you go on to more advanced physics. The Euler-Lagrange form is neat, and particularly liberating because you get an incredible freedom in how you set up the coordinates describing the action of your system. But it doesn’t have that same symmetry in treating the position and momentum, and you don’t get the energy of the system built right into the equations you’re writing, and you can’t use the linear algebra and matrix tools that were introduced. Mostly, the good things that the Euler-Lagrange formulation gives you, such as making it obvious when a particular coordinate doesn’t actually contribute to the behavior of the system, or letting you look at energy instead of forces, the Hamiltonian also gives you, and the Hamiltonian can be used to do more later on.

## What is Physics all about?

Over on the Reading Penrose blog, Jean Louis Van Belle (and I apologize if I’ve got the name capitalized or punctuated wrong but I couldn’t find the author’s name except in a run-together, uncapitalized form) is trying to understand Roger Penrose’s Road to Reality, about the various laws of physics as we understand them. In the entry for the 6th of December, “Ordinary Differential equations (II)”, he gets to the question “What’s Physics All About?” and comes to what I have to agree is the harsh fact: a lot of physics is about solving differential equations.

Some of them are ordinary differential equations, some of them are partial differential equations, but really, a lot of it is differential equations. Some of it is setting up models for differential equations. Here, though, he looks at a number of ordinary differential equations and how they can be classified. The post is a bit cryptic — he intends the blog to be his working notes while he reads a challenging book — but I think it’s still worth recommending as a quick tour through some of the most common, physics-relevant, kinds of ordinary differential equation.
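To make “solving differential equations” concrete with a sketch of my own (not anything from the post being discussed): the simplest ordinary differential equation worth solving numerically is dx/dt = −x, whose exact solution is e^(−t), and the simplest numerical scheme is Euler’s method:

```python
import math

# Forward Euler for dx/dt = f(x): repeatedly step x forward by f(x) * dt.
def euler(f, x0, t_end, n_steps):
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x += f(x) * dt
    return x

# dx/dt = -x with x(0) = 1 has the exact solution x(t) = exp(-t).
approx = euler(lambda x: -x, 1.0, 1.0, 10000)
exact = math.exp(-1.0)
```

With ten thousand steps the two agree to about four decimal places; much of the craft in numerical differential equations is getting that kind of agreement with far fewer steps.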

## The Intersecting Lines

I haven’t had much chance to sit and think about this, but that’s no reason to keep my readers away from it. Elke Stangl has been pondering a probability problem regarding three intersecting lines on a plane, a spinoff of a physics problem about finding the center of mass of an object by the method of pinning it up from a couple different points and dropping the plumb line. My first impulse, of turning this into a matrix equation, flopped for reasons that became obvious as soon as I worked out a determinant, but that hardly means I’m stuck just yet.

• #### elkement 9:13 pm on Saturday, 16 November, 2013

Thanks a lot for the pingback! I feel better now as the puzzle does not seem to be very easy to solve :-)


• #### elkement 7:03 pm on Thursday, 21 November, 2013

I have got very interesting comments by other physicists on my blog – including a promising solution. I have appended my post now with a drawing related to this idea.


• #### Joseph Nebus 5:00 am on Friday, 22 November, 2013

Oh, that’s great. I’m glad you are getting useful comments; I keep finding I can’t quite put enough focus on the problem for my tastes.


## October 2013’s Statistics

It’s been a month since I last looked over precisely how not-staggeringly-popular I am, so it’s time again.
For October 2013 I had 440 views, down from September’s total. These came from 220 distinct viewers, down again from the 237 that September gave me. This does mean there was a slender improvement in views per visitor, from 1.97 up to 2.00. Neither of these is a record, although given that I had a poor updating record again this month that’s all tolerable.

The most popular articles from the past month are … well, mostly the comics, and the trapezoids come back again. I’ve clearly got to start categorizing the other kinds of polygons. Or else plunge directly into dynamical systems as that’s the other thing people liked. October 2013’s top hits were:

The country sending me the most readers again was the United States (226 of them), with the United Kingdom coming up second (37). Austria popped into third for, I think, the first time (25 views), followed by Denmark (21) and at long last Canada (18). I hope they still like me in Canada.

Sending just the lone reader each were a bunch of countries: Bermuda, Chile, Colombia, Costa Rica, Finland, Guatemala, Hong Kong, Laos, Lebanon, Malta, Mexico, the Netherlands, Oman, Romania, Saudi Arabia, Slovenia, Sweden, Turkey, and Ukraine. Finland and the Netherlands are repeats from last month, and the Netherlands is going on at least three months like this.

• #### elkement 4:17 pm on Tuesday, 5 November, 2013

I am currently experimenting with software that blocks various tracking and “sniffing” programs – I have just realized that WordPress Stats is classified as sniffing software, too! Now I turned it on again for your site so my Austrian clicks get recorded again.

