Reading the Comics, November 4, 2017: Slow, Small Week Edition


It was a slow week for mathematically-themed comic strips. What I have are meager examples. Small topics to discuss. The end of the week didn’t have anything even under loose standards of being on-topic. Which is fine, since I lost an afternoon of prep time to thunderstorms that rolled through town and knocked out power for hours. Who saw that coming? … If I had, I’d have written more the day before.

Mac King and Bill King’s Magic in a Minute for the 29th of October looks like a word problem. Well, it is a word problem. It looks like a problem about extrapolating a thing (price) from another thing (quantity). Well, it is an extrapolation problem. The fun is in figuring out what quantities are relevant. Now I’ve spoiled the puzzle by explaining it all so.

Olivia Walch’s Imogen Quest for the 30th doesn’t say it’s about a mathematics textbook. But it’s got to be. What other kind of textbook will have at least 28 questions in a section and only give answers to the odd-numbered problems in back? You never see that in your social studies text.

Eric the Circle for the 30th, this one by Dennill, tests how slow a week this was. I guess there’s a geometry joke in Jane Austen? I’ll trust my literate readers to tell me. My doing the world’s most casual search suggests there’s no mention of triangles in Pride and Prejudice. The previous might be the most ridiculously mathematics-nerdy thing I have written in a long while.

Tony Murphy’s It’s All About You for the 31st does some advanced-mathematics name-dropping. In so doing, it’s earned a spot taped to the door of two people in any mathematics department with more than 24 professors across the country. Or will, when they hear there was a gap unification theory joke in the comics. I’m not sure whether Murphy was thinking of anything particular in naming the subject “gap unification theory”. It sounds like a field of mathematical study. But as far as I can tell there’s just one (1) paper written that even says “gap unification theory”. It’s in partition theory. Partition theory is a rich and developed field, which seems surprising considering it’s about breaking up sets of the counting numbers into smaller sets. It seems like a time-waster game. But the game sneaks into everything, so the field turns out to be important. Gap unification, in the paper I can find, is about studying the gaps between these smaller sets.

There’s also a “band-gap unification” problem. I could accept this name being shortened to “gap unification” by people who have to say its name a lot. It’s about the physics of semiconductors, or the chemistry of semiconductors, as you like. The physics or chemistry of them is governed by the energies that electrons can have. Some of these energies are precise levels. Some of these energies are bands, continuums of possible values. When will bands converge? When will they not? Ask a materials science person. Going to say that’s not mathematics? Don’t go looking at the papers.

Whether it’s partition theory or materials science, it seems like a weird topic. Maybe Murphy just put together words that sounded mathematical. Maybe he has a friend in the field.

Bill Amend’s FoxTrot Classics for the 1st of November is aiming to be taped up to the high school teacher’s door. It’s easy to show how the square root of two is irrational. Takes a bit longer to show the square root of three is. Turns out all the counting numbers are either perfect squares — 1, 4, 9, 16, and so on — or else have irrational square roots. There’s no whole number with a square root of, like, something-and-three-quarters or something-and-85-117ths. You can show that, easily if tediously, for any particular whole number. What’s it look like to show for all the whole numbers that aren’t perfect squares already? (This strip originally ran the 8th of November, 2006.)
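As to what that general argument looks like: here’s a sketch of the classic proof, which I’m confident of even though it’s nowhere in the strip. Suppose \sqrt{n} = \frac{a}{b} for whole numbers a and b sharing no common factor. Then n b^2 = a^2 . Any prime dividing b would have to divide a^2 , and therefore divide a, contradicting that a and b share no factors. So b must be 1, and n = a^2 is a perfect square. Flip that around: if n isn’t a perfect square, \sqrt{n} can’t be any ratio of whole numbers.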

Guy Gilchrist’s Nancy for the 1st does an alphabet soup joke, so like I said, it’s been a slow week around here.

John Zakour and Scott Roberts’s Maria’s Day for the 2nd is really just mathematics being declared hated, so like I said, it’s been a slow week around here.

Reading the Comics, October 14, 2017: Physics Equations Edition


So that busy Saturday I promised for the mathematically-themed comic strips? Here it is, along with a Friday that reached the lowest non-zero levels of activity.

Stephan Pastis’s Pearls Before Swine for the 13th is one of those equations-of-everything jokes. Naturally it features a panel full of symbols that, to my eye, don’t parse. There are what look like syntax errors; anyone can see, for example, the { mark that isn’t balanced by a }. But when someone works rough they will, often, write stuff that doesn’t quite parse. Think of it as an artist’s rough sketch of a complicated scene: the lines and anatomy may be gibberish, but if the major lines of the composition are right then all is well.

Most attempts to write an equation for everything are really about writing a description of the fundamental forces of nature. We trust that it’s possible to go from a description of how gravity and electromagnetism and the nuclear forces go to, ultimately, a description of why chemistry should work and why ecologies should form and there should be societies. There are, as you might imagine, a number of assumed steps along the way. I would accept the idea that we’ll have a unification of the fundamental forces of physics this century. I’m not sure I would believe having all the steps between the fundamental forces and, say, how nerve cells develop worked out in that time.

Mark Anderson’s Andertoons makes its overdue appearance for the week on the 14th, with a chalkboard word-problem joke. Amusing enough. And estimating an answer, getting it wrong, and refining it is good mathematics. It’s not just numerical mathematics that will look for an approximate solution and then refine it. As a first approximation, 15 minus 7 isn’t far off 10. And for mental arithmetic approximating 15 minus 7 as 10 is quite justifiable. It could be made more precise if a more exact answer were needed.
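If you’d like to see approximate-and-refine in its numerical-mathematics form, here’s a sketch in Python of Newton’s method for square roots. The function name is my own; the technique is the ancient standard one, not anything in Anderson’s strip.

```python
def refine_sqrt(target, guess, rounds=5):
    """Approximate-then-refine: Newton's method for a square root.
    Each pass replaces the guess with the average of the guess
    and the target divided by the guess."""
    for _ in range(rounds):
        guess = 0.5 * (guess + target / guess)
    return guess

# A deliberately rough first estimate becomes precise quickly:
print(refine_sqrt(2.0, 1.0))   # 1.41421356..., the square root of 2
```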

Maria Scrivan’s Half Full for the 14th I’m going to call the anthropomorphic geometry joke for the week. If it’s not then it’s just wordplay and I’d have no business including it here.

Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 14th tosses in the formula describing how strong the force of gravity between two objects is. In Newtonian gravity, which is why it’s the Newton Police. It’s close enough for most purposes. I’m not sure how this supports the cause of world peace.
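For reference, and assuming the panel quotes the standard Newtonian law, the formula is

F = G \frac{m_1 m_2}{r^2}

where m_1 and m_2 are the two masses, r the distance between their centers, and G the gravitational constant, roughly 6.674 \times 10^{-11} newton square meters per kilogram squared.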

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 14th names Riemann’s Quaternary Conjecture. I was taken in by the panel, trying to work out what the proposed conjecture could even mean. The reason it works is that Bernhard Riemann wrote like 150,000 major works in every field of mathematics, and about 149,000 of them are big, important foundational works. The most important Riemann conjecture would be the one about zeroes of the Riemann Zeta function. This is typically called the Riemann Hypothesis. But someone could probably write a book just listing the stuff named for Riemann, and that’s got to include a bunch of very specific conjectures.

Reading the Comics, October 4, 2017: Time-Honored Traditions Edition


It was another busy week in mathematically-themed comic strips last week. Busy enough I’m comfortable rating some as too minor to include. So it’s another week where I post two of these Reading the Comics roundups, which is fine, as I’m still recuperating from the Summer 2017 A To Z project. This first half of the week includes a lot of rerun comics, and you’ll see why my choice of title makes sense.

Lincoln Peirce’s Big Nate: First Class for the 1st of October reprints the strip from the 2nd of October, 1993. It’s got a well-formed story problem that, in the time-honored tradition of this setup, is subverted. I admit I kind of miss the days when exams would have problems typed out in monospace like this.

Ashleigh Brilliant’s Pot-Shots for the 1st is a rerun from sometime in 1975. And it’s an example of the time-honored tradition of specifying how many statistics are made up. Here it comes in at 43 percent of statistics being “totally worthless” and I’m curious how the number attached to this form of joke changes over time.

The Joey Alison Sayers Comic for the 2nd uses a blackboard with mathematics — a bit of algebra and a drawing of a sphere — as the designation for genius. That’s all I have to say about this. I remember being set straight about the difference between ponies and horses and it wasn’t by my sister, who’s got a professional interest in the subject.

Mark Pett’s Lucky Cow rerun for the 2nd is a joke about cashiers trying to work out change. As one of the GoComics.com commenters mentions, probably the best way to do this is to count up from the purchase to the amount you have to give change for. That is, work out $12.43 to $12.50 is seven cents, then from $12.50 to $13.00 is fifty more cents (57 cents total), then from $13.00 to $20.00 is seven dollars ($7.57 total) and then from $20 to $50 is thirty dollars ($37.57 total).
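Here’s that count-up scheme as a Python sketch, using the milestones from the paragraph above. The function and argument names are mine; a human cashier picks the friendly stopping points by instinct, but the bookkeeping is the same.

```python
def count_up_change(price, milestones, tendered):
    """Count-up change: step from the price through each milestone to
    the amount tendered, announcing each difference along the way."""
    change, total, at = [], 0.0, price
    for stop in milestones + [tendered]:
        change.append(round(stop - at, 2))
        total = round(total + (stop - at), 2)
        at = stop
    return change, total

# The worked example above: a $12.43 purchase paid with a $50 bill.
print(count_up_change(12.43, [12.50, 13.00, 20.00], 50.00))
# ([0.07, 0.5, 7.0, 30.0], 37.57)
```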

It does make me wonder, though: what did Neil enter as the amount tendered, if it wasn’t $50? Maybe he hit “exact change” or whatever the equivalent was. It’s been a long, long time since I worked a cash register job and while I would occasionally type in the wrong amount of money, the kinds of errors I would make would be easy to correct for. (Entering $30 instead of $20 for the tendered amount, that sort of thing.) But the cash register works however Mark Pett decides it works, so who am I to argue?

Keith Robinson’s Making It rerun for the 2nd includes a fair bit of talk about ratios and percentages, and how to inflate percentages. Also about the underpaying of employees by employers.

Mark Anderson’s Andertoons for the 3rd continues the streak of Mark Anderson’s Andertoons turning up for this sort of thing. It has the traditional form of the student explaining why the teacher’s wrong to say the answer was wrong.

Brian Fies’s The Last Mechanical Monster for the 4th includes a bit of legitimate physics in the mad scientist’s captioning. Ballistic arcs are about a thing given an initial speed in a particular direction, moving under constant gravity, without any of the complicating problems of the world involved. No air resistance, no curvature of the Earth, level surfaces to land on, and so on. So, if you start from a given height (‘y0’) and a given speed (‘v’) at a given angle (‘θ’) when the gravity is a given strength (‘g’), how far will you travel? That’s ‘d’. How long will you travel? That’s ‘t’, as worked out here.
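I can’t check the panel’s own symbols, but if they mean what they conventionally do, the standard no-air-resistance results are

t = \frac{v \sin\theta + \sqrt{v^2 \sin^2\theta + 2 g y_0}}{g} \qquad d = v t \cos\theta

The time comes from solving y_0 + v t \sin\theta - \frac{1}{2} g t^2 = 0 , asking when the flying thing reaches the ground; the distance is just the horizontal part of the speed carried for that long.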

(I should maybe explain the story. The mad scientist here is the one from the first, Fleischer Studios, Superman cartoon. In it the mad scientist sends mechanical monsters out to loot the city’s treasures and whatnot. As the cartoon has passed into the public domain, Brian Fies is telling a story of that mad scientist, finally out of jail, salvaging the one remaining usable robot. Here, training the robot to push aside bank tellers has gone awry. Also, the ground in his lair is not level.)

Tom Toles’s Randolph Itch, 2 am rerun for the 4th uses the time-honored tradition of Albert Einstein needing a bit of help for his work.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 4th uses the time-honored tradition of little bits of physics equations as designation of many deep thoughts. And then it gets into a bit more pure mathematics along the way. It also reflects the time-honored tradition of people who like mathematics and physics supposing that those are the deepest and most important kinds of thoughts to have. But I suppose we all figure the things we do best are the things it’s important to do best. It’s traditional.

And by the way, if you’d like more of these Reading the Comics posts, I put them all in the category ‘Comic Strips’ and I just now learned the theme I use doesn’t show categories for some reason? This is unsettling and unpleasant. Hm.

Reading the Comics, September 24, 2017: September 24, 2017 Edition


Comic Strip Master Command sent a nice little flood of comics this week, probably to make sure that I transitioned from the A To Z project to normal activity without feeling too lost. I’m going to cut the strips not quite in half because I’m always delighted when I can make a post that’s just a single day’s mathematically-themed comics. Last Sunday, the 24th of September, was such a busy day. I’m cheating a little on what counts as noteworthy enough to talk about here. But people like comic strips, and good on them for liking them.

Norm Feuti’s Gil for the 24th sees Gil discover and try to apply some higher mathematics. There’s probably a good discussion about what we mean by division to explain why Gil’s experiment didn’t pan out. I would pin it down to eliding the difference between “dividing in half” and “dividing by a half”, which is a hard one. Terms that seem almost alike but mean such different things are probably the hardest part of mathematics.
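The arithmetic behind the two phrases, just to make the contrast plain:

\frac{1/2}{1/2} = \frac{1}{2} \times \frac{2}{1} = 1 \qquad \frac{1}{2} \times \frac{1}{2} = \frac{1}{4}

Dividing a half by a half gives one. Dividing a half in half, which is multiplying by a half, gives a quarter of the original cookie.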

Gil, eating cookies and doing mathematics. 'Dividing fractions. 1/2 divided by 1/2', which he works out to be 1. 'One half divided in half equals one? Wait a minute. If these calculations are correct, then that means ... ' And he takes a half-cookie and snaps it in half, to his disappointment. 'Humph. what's the point of this advanced math if it only works on paper?'
Norm Feuti’s Gil for the 24th of September, 2017, didn’t appear on Gocomics.com or Comics Kingdom, my usual haunts for these comics. But I started reading the strip when it was on Comics Kingdom, and keep reading its reruns. Feuti has continued the comic strip on his own web site, and posts it on Twitter. So it’s quite easy to pick the strip back up, if you have a Twitter account or can read RSS from it. I assume you can read RSS from it.

Russell Myers’s Broom Hilda looks like my padding. But the last panel of the middle row gets my eye. The squirrels talk about how on the equinox night and day “can never be of identical length, due to the angular size of the sun and atmospheric refraction”. This is true enough for the equinox. While any spot on the Earth might see twelve hours facing the sun and twelve hours facing away, the fact the sun isn’t a point, and that the atmosphere carries light around to the “dark” side of the planet, means daylight lasts a little longer than night.

Ah, but. This gets my mathematical modelling interest going. Because it is true that, at least away from the equator, there’s times of year that day is way shorter than night. And there’s times of year that day is way longer than night. Shouldn’t there be some time in the middle when day is exactly equal to night?

The easy argument for this is built on the Intermediate Value Theorem. Let me define a function, with domain each of the days of the year. The range is real numbers. It’s defined to be the length of day minus the length of night. Let me say it’s in minutes, but it doesn’t change things if you argue that it’s seconds, or milliseconds, or hours, if you keep parts of hours in also. So, like, 12.015 hours or something. At the height of winter, this function is definitely negative; night is longer than day. At the height of summer, this function is definitely positive; night is shorter than day. So therefore there must be some time, between the height of winter and the height of summer, when the function is zero. And therefore there must be some day, even if it isn’t the equinox, when night and day are the same length.
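For the record, the theorem being leaned on, stated for a function continuous on an interval: if f is continuous on [a, b] and f(a) < 0 < f(b) , then there is at least one c between a and b for which f(c) = 0 . Whether our day-length function satisfies the hypotheses is part of the classroom discussion promised below.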

There’s a flaw here and I leave that to classroom discussions to work out. I’m also surprised to learn that my onetime colleague Dr Helmer Aslaksen’s grand page of mathematical astronomy and calendar essays doesn’t seem to have anything about length of day calculations. But go read that anyway; you’re sure to find something fascinating.

Mike Baldwin’s Cornered features an old-fashioned adding machine being used to drown an audience in calculations. Which makes for a curious pairing with …

Bill Amend’s FoxTrot, and its representation of “math hipsters”. I hate to encourage Jason or Marcus in being deliberately difficult. But there are arguments to make for avoiding digital calculators in favor of old-fashioned — let’s call them analog — calculators. One is that people understand tactile operations better, or at least sooner, than they do digital ones. The slide rule changes multiplication and division into combining or removing lengths of things, and we probably have an instinctive understanding of lengths. So this should train people into anticipating what a result is likely to be. This encourages sanity checks, verifying that an answer could plausibly be right. And since a calculation takes effort, it encourages people to think out how to arrange the calculation to require less work. This should make it less vulnerable to accidents.
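Here’s the length-combining trick as a Python sketch, the principle rather than any actual slide rule’s scales. The function name is mine.

```python
import math

def slide_rule_multiply(a, b):
    """What a slide rule mechanizes: each factor becomes a length, its
    logarithm; lay the lengths end to end and read off the number
    whose logarithm is the combined length."""
    combined_length = math.log10(a) + math.log10(b)
    return 10 ** combined_length

print(slide_rule_multiply(18, 7))   # about 126.0, up to floating-point fuzz
```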

I suspect that many of these benefits are what you get in the ideal case, though. Slide rules, and abacuses, are no less vulnerable to accidents than anything else is. And if you are skilled enough with the abacus you have no trouble multiplying 18 by 7, you probably would not find multiplying 17 by 8 any harder, and wouldn’t notice if you mistook one for the other.

Jef Mallett’s Frazz asserts that numbers are cool but the real insight is comparisons. And we can argue that comparisons are more basic than numbers. We can talk about one thing being bigger than another even if we don’t have a precise idea of numbers, or how to measure them. See every mathematics blog introducing the idea of different sizes of infinity.

Bill Whitehead’s Free Range features Albert Einstein, universal symbol for really deep thinking about mathematics and physics and stuff. And even a blackboard full of equations for the title panel. I’m not sure whether the joke is a simple absent-minded-professor joke, or whether it’s a relabelled joke about Werner Heisenberg. Absent-minded-professor jokes are not mathematical enough for me, so let me point once again to American Cornball. They’re the first subject in Christopher Miller’s encyclopedia of comic topics. So I’ll carry on as if the Werner Heisenberg joke were the one meant.

Heisenberg is famous, outside World War II history, for the Uncertainty Principle. This is one of the core parts of quantum mechanics, under which there’s a limit to how precisely one can know both the position and momentum of a thing. To identify, with absolutely zero error, where something is requires losing all information about what its momentum might be, and vice-versa. You see the application of this to a traffic cop’s question about knowing how fast someone was going. This makes some neat mathematics because all the information about something is bundled up in a quantity called the Psi function. To make a measurement is to modify the Psi function by having an “operator” work on it. An operator is what we call a function that has domains and ranges of other functions. To measure both position and momentum is equivalent to working on Psi with one operator and then another. But these operators don’t commute. You get different results in measuring momentum and then position than you do measuring position and then momentum. And so we can’t know both of these with infinite precision.
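The canonical example, for position and momentum in one dimension, is

\hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar \qquad \Delta x \, \Delta p \ge \frac{\hbar}{2}

The first equation says the two operators fail to commute; the second is the uncertainty bound that this failure forces, with \hbar the reduced Planck constant.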

There are pairs of operators that do commute. They’re not necessarily ones we care about, though. Like, the total energy commutes with the square of the angular momentum. So, you know, if you need to measure with infinite precision the energy and the angular momentum of something you can do it. If you had measuring tools that were perfect. You don’t, but you could imagine having them, and in that case, good. Underlying physics wouldn’t spoil your work.

Probably the panel was an absent-minded professor joke.

The Summer 2017 Mathematics A To Z: Young Tableau


I never heard of today’s entry topic three months ago. Indeed, three weeks ago I was still making guesses about just what Gaurish, author of For the love of Mathematics, was asking about. It turns out to be maybe the grand union of everything that’s ever been in one of my A To Z sequences. I overstate, but barely.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Young Tableau.

The specific thing that a Young Tableau is, is beautiful in its simplicity. It could almost be a recreational mathematics puzzle, except that it isn’t challenging enough.

Start with a couple of boxes laid in a row. As many or as few as you like.

Now set another row of boxes. You can have as many as the first row did, or fewer. You just can’t have more. Set the second row of boxes — well, your choice. Either below the first row, or else above. I’m going to assume you’re going below the first row, and will write my directions accordingly. If you do things the other way you’re following a common enough convention. I’m leaving it on you to figure out what the directions should be, though.

Now add in a third row of boxes, if you like. Again, as many or as few boxes as you like. There can’t be more than there are in the second row. Set it below the second row.

And a fourth row, if you want four rows. Again, no more boxes in it than the third row had. Keep this up until you’ve got tired of adding rows of boxes.

How many boxes do you have? I don’t know. But take the numbers 1, 2, 3, 4, 5, and so on, up to whatever the count of your boxes is. Can you fill in one number for each box? So that the numbers are always increasing as you go left to right in a single row? And as you go top to bottom in a single column? Yes, of course. Go in order: ‘1’ for the first box you laid down, then ‘2’, then ‘3’, and so on, increasing up to the last box in the last row.

Can you do it in another way? Any other order?

Except for the simplest of arrangements, like a single row of four boxes or three rows of one box atop another, the answer is yes. There can be many of them, turns out. Seven boxes, arranged three in the first row, two in the second, one in the third, and one in the fourth, have 35 possible arrangements. It doesn’t take a very big diagram to get an enormous number of possibilities. Could be fun drawing an arbitrary stack of boxes and working out how many arrangements there are, if you have some time in a dull meeting to pass.
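If the dull meeting runs long, there’s a shortcut I should name, since I don’t get into it above: the hook length formula, which counts these arrangements directly. Here’s a Python sketch of it, checked against the seven-box example just mentioned.

```python
from math import factorial

def count_standard_tableaux(shape):
    """Count the legal number-fillings (standard Young tableaux) of a
    stack of rows via the hook length formula: n! divided by the
    product, over every box, of that box's hook length. A box's hook
    is itself plus all boxes to its right plus all boxes below it.
    'shape' lists the rows top to bottom, each no longer than the
    row before it, following the rules of the game above."""
    n = sum(shape)
    cols = [sum(1 for row in shape if row > j) for j in range(shape[0])]
    hook_product = 1
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1        # boxes to the right in this row
            leg = cols[j] - i - 1    # boxes below in this column
            hook_product *= arm + leg + 1
    return factorial(n) // hook_product

print(count_standard_tableaux((3, 2, 1, 1)))   # 35, as claimed above
```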

Let me step away from filling boxes. In one of its later, disappointing, seasons Futurama finally did a body-swap episode. The gimmick: two bodies could only swap the brains within them one time. So would it be possible to put Bender’s brain back in his original body, if he and Amy (or whoever) had already swapped once? The episode drew minor amusement in mathematics circles, and a lot of amazement in pop-culture circles. The writer, a mathematics major, found a proof that showed it was indeed always possible, even after many pairs of people had swapped bodies. The idea that a theorem was created for a TV show impressed many people who think theorems are rarer and harder to create than they necessarily are.

It was a legitimate theorem, and in a well-developed field of mathematics. It’s about permutation groups. These are the study of the ways you can swap pairs of things. I grant this doesn’t sound like much of a field. There is a surprising lot of interesting things to learn just from studying how stuff can be swapped, though. It’s even of real-world relevance. Most subatomic particles of a kind — electrons, top quarks, gluons, whatever — are identical to every other particle of the same kind. Physics wouldn’t work if they weren’t. What would happen if we swap the electron on the left for the electron on the right, and vice-versa? How would that change our physics?

A chunk of quantum mechanics studies what kinds of swaps of particles would produce an observable change, and what kind of swaps wouldn’t. When the swap doesn’t make a change we can describe this as a symmetric operation. When the swap does make a change, that’s an antisymmetric operation. And — the Young Tableau that’s a single row of two boxes? That matches up well with this symmetric operation. The Young Tableau that’s two rows of a single box each? That matches up with the antisymmetric operation.

How many ways could you set up three boxes, according to the rules of the game? A single row of three boxes, sure. One row of two boxes and a row of one box. Three rows of one box each. How many ways are there to assign the numbers 1, 2, and 3 to those boxes, and satisfy the rules? One way to do the single row of three boxes. Also one way to do the three rows of a single box. There’s two ways to do the one-row-of-two-boxes, one-row-of-one-box case.

What if we have three particles? How could they interact? Well, all three could be symmetric with each other. This matches the first case, the single row of three boxes. All three could be antisymmetric with each other. This matches the three rows of one box. Or you could have two particles that are symmetric with each other and antisymmetric with the third particle. Or two particles that are antisymmetric with each other but symmetric with the third particle. Two ways to do that. Two ways to fill in the one-row-of-two-boxes, one-row-of-one-box case.

This isn’t merely a neat, aesthetically interesting coincidence. I wouldn’t spend so much time on it if it were. There’s a matching here that’s built on something meaningful. The different ways to arrange numbers in a set of boxes like this pair up with a select, interesting set of matrices whose elements are complex-valued numbers. You might wonder who introduced complex-valued numbers, let alone matrices of them, into evidence. Well, who cares? We’ve got them. They do a lot of work for us. So much work they have a common name: representations of the symmetric group over the complex numbers. As my leading example suggests, they’re all over the place in quantum mechanics. They’re good to have around in regular physics too, at least in the right neighborhoods.

These Young Tableaus turn up over and over in group theory. They match up with polynomials, because yeah, everything is polynomials. But they turn out to describe polynomial representations of some of the superstar groups out there. Groups with names like the General Linear Group (square matrices), or the Special Linear Group (square matrices with determinant equal to 1), or the Special Unitary Group (that thing where quantum mechanics says there have to be particles whose names are obscure Greek letters with superscripts of up to five + marks). If you’d care for more, here’s a chapter by Dr Frank Porter describing, in part, how you get from Young Tableaus to the obscure baryons.

Porter’s chapter also lets me tie this back to tensors. Tensors have varied ranks, the number of different indices you can have on the things. What happens when you swap pairs of indices in a tensor? How many ways can you swap them, and what does that do to what the tensor describes? Please tell me you already suspect this is going to match something in Young Tableaus. They do this by way of the symmetries and permutations mentioned above. But they are there.

As I say, three months ago I had no idea these things existed. If I ever ran across them it was from seeing the name at MathWorld’s list of terms that start with ‘Y’. The article shows some nice examples (with each row atop the previous one) but doesn’t make clear how much stuff this subject runs through. I can’t fit everything into a reasonable essay. (For example: the number of ways to arrange, say, 20 boxes into rows meeting these rules is itself a partition problem. Partition problems are probability and statistical mechanics. Statistical mechanics is the flow of heat, and the movement of the stars in a galaxy, and the chemistry of life.) I am delighted by what does fit.

The Summer 2017 Mathematics A To Z: Volume Forms


I’ve been reading Elke Stangl’s Elkemental Force blog for years now. Sometimes I even feel social-media-caught-up enough to comment, or at least to like posts. This is relevant today as I discuss one of Stangl’s suggestions for my letter-V topic.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Volume Forms.

So sometime in pre-algebra, or early in (high school) algebra, you start drawing equations. It’s a simple trick. Lay down a coordinate system, some set of axes for ‘x’ and ‘y’ and maybe ‘z’ or whatever letters are important. Look to the equation, made up of x’s and y’s and maybe z’s and so on. Highlight all the points with coordinates whose values make the equation true. This is the logical basis for saying (eg) that the straight line “is” y = 2x + 1 .

A short while later, you learn about polar coordinates. Instead of using ‘x’ and ‘y’, you have ‘r’ and ‘θ’. ‘r’ is the distance from the center of the universe. ‘θ’ is the angle made with respect to some reference axis. It’s as legitimate a way of describing points in space. Some classrooms even have a part of the blackboard (whiteboard, whatever) with a polar-coordinates “grid” on it. This looks like the lines of a dartboard. And you learn that some shapes are easy to describe in polar coordinates. A circle, centered on the origin, is ‘r = 2’ or something like that. A line through the origin is ‘θ = 1’ or whatever. The line that we’d called y = 2x + 1 before? … That’s … some mess. And now r = 2\theta + 1 … that’s not even a line. That’s some kind of spiral. Two spirals, really. Kind of wild.

And something to bother you a while. y = 2x + 1 is an equation that looks the same as r = 2\theta + 1 . You’ve changed the names of the variables, but not how they relate to each other. But one is a straight line and the other a spiral thing. How can that be?

The answer, ultimately, is that the letters in the equations aren’t these content-neutral labels. They carry meaning. ‘x’ and ‘y’ imply looking at space a particular way. ‘r’ and ‘θ’ imply looking at space a different way. A shape has different representations in different coordinate systems. Fair enough. That seems to settle the question.

But if you get to calculus the question comes back. You can integrate over a region of space that’s defined by Cartesian coordinates, x’s and y’s. Or you can integrate over a region that’s defined by polar coordinates, r’s and θ’s. The first time you try this, you find … well, that any region easy to describe in Cartesian coordinates is painful in polar coordinates. And vice-versa. Way too hard. But if you struggle through all that symbol manipulation, you get … different answers. Eventually the calculus teacher has mercy and explains. If you’re integrating in Cartesian coordinates you need to use “dx dy”. If you’re integrating in polar coordinates you need to use “r dr dθ”. If you’ve never taken calculus, never mind what this means. What is important is that “r dr dθ” looks like three things multiplied together, while “dx dy” is two.

We get this explained as a “change of variables”. If we want to go from one set of coordinates to a different one, we have to do something fiddly. The extra ‘r’ in “r dr dθ” is what we get going from Cartesian to polar coordinates. And we get formulas to describe what we should do if we need other kinds of coordinates. It’s some work that introduces us to the Jacobian, which looks like the most tedious possible calculation ever at that time. (In Intro to Differential Equations we learn we were wrong, and the Wronskian is the most tedious possible calculation ever. This is also wrong, but it might as well be true.) We typically move on after this and count ourselves lucky it got no worse than that.
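For the polar-coordinates case the fiddly something is this determinant, the Jacobian in its least terrifying form:

x = r\cos\theta \quad y = r\sin\theta \qquad \frac{\partial(x, y)}{\partial(r, \theta)} = \det\begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix} = r\cos^2\theta + r\sin^2\theta = r

And there’s the extra r in “r dr dθ”.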

None of this is wrong, even from the perspective of more advanced mathematics. It’s not even misleading, which is a refreshing change. But we can look a little deeper, and get something good from doing so.

The deeper perspective looks at “differential forms”. These are about how to encode information about how your coordinate system represents space. They’re tensors. I don’t blame you for wondering if they would be. A differential form uses interactions between some of the directions in a space. A volume form is a differential form that uses all the directions in a space. And satisfies some other rules too. I’m skipping those because some of the symbols involved I don’t even know how to look up, much less make WordPress present.

What’s important is the volume form carries information compactly. As symbols it tells us that this represents a chunk of space that’s constant no matter what the coordinates look like. This makes it possible to do analysis on how functions work. It also tells us what we would need to do to calculate specific kinds of problem. This makes it possible to describe, for example, how something moving in space would change.

The volume form, and the tools to do anything useful with it, demand a lot of supporting work. You can dodge having to explicitly work with tensors. But you’ll need a lot of tensor-related materials, like wedge products and exterior derivatives and stuff like that. If you’ve never taken freshman calculus don’t worry: the people who have taken freshman calculus never heard of those things either. So what makes this worthwhile?

Yes, person who called out “polynomials”. Good instinct. Polynomials are usually a reason for any mathematics thing. This is one of maybe four exceptions. I have to appeal to my other standard answer: “group theory”. These volume forms match up naturally with groups. There’s not only information about how coordinates describe a space to consider. There’s ways to set up coordinates that tell us things.

That isn’t all. These volume forms can give us new invariants. Invariants are what mathematicians say instead of “conservation laws”. They’re properties whose value for a given problem is constant. This can make it easier to work out how one variable depends on another, or to work out specific values of variables.

For example, classical physics problems like how a bunch of planets orbit a sun often have a “symplectic manifold” that matches the problem. This is a description of how the positions and momentums of all the things in the problem relate. The symplectic manifold has a volume form. That volume is going to be constant as time progresses. That is, there’s this way of representing the positions and speeds of all the planets that does not change, no matter what. It’s much like the conservation of energy or the conservation of angular momentum. And this has practical value. It’s the subject that brought my and Elke Stangl’s blogs into contact, years ago. It also has broader applicability.

There’s no way to provide an exact answer for the movement of, like, the sun and nine-ish planets and a couple major moons and all that. So there’s no known way to answer the question of whether the Earth’s orbit is stable. All the planets are always tugging one another, changing their orbits a little. Could this converge in a weird way suddenly, on geologic timescales? Might the planet go flying off out of the solar system? It doesn’t seem like the solar system could be all that unstable, or it would have already. But we can’t rule out that some freaky alignment of Jupiter, Saturn, and Halley’s Comet might tweak the Earth’s orbit just far enough for catastrophe to unfold. Granted there’s nothing we could do about the Earth flying out of the solar system, but it would be nice to know if we face it, we tell ourselves.

But we can answer this numerically. We can set a computer to simulate the movement of the solar system. But there will always be numerical errors. For example, we can’t use the exact value of π in a numerical computation. 3.141592 (and more digits) might be good enough for projecting stuff out a day, a week, a thousand years. But if we’re looking at millions of years? The difference can add up. We can imagine compensating for not having the value of π exactly right. But what about compensating for something we don’t know precisely, like, where Jupiter will be in 16 million years and two months?

Symplectic forms can help us. The volume form represented by this space has to be conserved. So we can rewrite our simulation so that these forms are conserved, by design. This does not mean we avoid making errors. But it means we avoid making certain kinds of errors. We’re more likely to make what we call “phase” errors. We predict Jupiter’s location in 16 million years and two months. Our simulation puts it thirty degrees farther in its circular orbit than it actually would be. This is a less serious mistake to make than putting Jupiter, say, eight-tenths as far from the Sun as it would really be.
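The flavor of this shows up even in toy problems. Here’s a Python sketch comparing an ordinary Euler step with a symplectic one on a frictionless spring. Neither is anything like a real solar-system integrator, and the names are mine, but the contrast in which kind of error accumulates is the point.

```python
# Integrate a unit harmonic oscillator (x'' = -x) two ways. The energy
# (x^2 + v^2)/2 should stay 0.5 forever. Ordinary Euler lets it grow
# without bound; symplectic Euler keeps it bounded, paying instead with
# a phase error: the oscillator drifts ahead of or behind schedule.

def euler_step(x, v, dt):
    return x + dt * v, v - dt * x

def symplectic_euler_step(x, v, dt):
    v = v - dt * x          # update the momentum first ...
    return x + dt * v, v    # ... then the position, with the new momentum

dt, steps = 0.01, 100_000
xe, ve = 1.0, 0.0
xs, vs = 1.0, 0.0
for _ in range(steps):
    xe, ve = euler_step(xe, ve, dt)
    xs, vs = symplectic_euler_step(xs, vs, dt)

print("ordinary Euler energy:  ", 0.5 * (xe * xe + ve * ve))   # far above 0.5
print("symplectic Euler energy:", 0.5 * (xs * xs + vs * vs))   # still near 0.5
```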

Volume forms seem, at first, a lot of mechanism for a small problem. And, unfortunately for students, they are. They’re more trouble than they’re worth for changing Cartesian to polar coordinates, or similar problems. You know, ones that the student already has some feel for. They pay off on more abstract problems. Tracking the movement of a dozen interacting things, say, or describing a space that’s very strangely shaped. Those make the effort to learn about forms worthwhile.

The Summer 2017 Mathematics A To Z: Ricci Tensor


Today’s is technically a request from Elke Stangl, author of the Elkemental Force blog. I think it’s also me setting out my own petard for self-hoisting, as my recollection is that I tossed off a mention of “defining the Ricci Tensor” as the sort of thing that’s got a deep beauty that’s hard to share with people. And that set off the search for where I had written about the Ricci Tensor. I hadn’t, and now look what trouble I’m in. Well, here goes.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Ricci Tensor.

Imagine if nothing existed.

You’re not doing that right, by the way. I expect what you’re thinking of is a universe that’s a big block of space that doesn’t happen to have any things clogging it up. Maybe you have a natural sense of volume in it, so that you know something is there. Maybe you even imagine something with grid lines or reticules or some reference points. What I imagine after a command like that is a sort of great rectangular expanse, dark and faintly purple-tinged, with small dots to mark its expanse. That’s fine. This is what I really want. But it’s not really imagining nothing existing. There’s space. There’s some sense of where things would be, if they happened to be in there. We’d have to get rid of the space to have “nothing” exist. And even then we have logical problems that sound like word games. (How can nothing have a property like “existing”? Or a property like “not existing”?) This is dangerous territory. Let’s not step there.

So take the empty space that’s what mathematics and physics people mean by “nothing”. What do we know about it? Unless we’re being difficult, it’s got some extent. There are points in it. There’s some idea of distance between these points. There’s probably more than one dimension of space. There’s probably some sense of time, too. At least we’re used to the expectation that things would change if we watched. It’s a tricky sense to have, though. It’s hard to say exactly what time is. We usually fall back on the idea that we know time has passed if we see something change. But if there isn’t anything to see change? How do we know there’s still time passing?

You maybe already answered. We know time is passing because we can see space changing. One of the legs of Modern Physics is geometry, how space is shaped and how its shape changes. This tells us how gravity works, and how electricity and magnetism propagate. If there were no matter, no energy, no things in the universe there would still be some kind of physics. And interesting physics, since the mathematics describing this stuff is even subtler and more challenging to the intuition than even normal Euclidean space. If you’re going to read a pop mathematics blog like this, you’re very used to this idea.

Probably haven’t looked very hard at the idea, though. How do you tell whether space is changing if there’s nothing in it? It’s all right to imagine a coordinate system put on empty space. Coordinates are our concept. They don’t affect the space any more than the names we give the squirrels in the yard affect their behavior. But how to make the coordinates move with the space? It seems question-begging at least.

We have a mathematical gimmick to resolve this. Of course we do. We call it a name like a “test mass” or a “test charge” or maybe just “test particle”. Imagine that we drop into space a thing. But it’s only barely a thing. It’s tiny in extent. It’s tiny in mass. It’s tiny in charge. It’s tiny in energy. It’s so slight in every possible trait that it can’t sully our nothingness. All it does is let us detect it. It’s a good question how. We have good eyes. But now, we could see the particle moving as the space it’s in moves.

But again we can ask how. Just one point doesn’t seem to tell us much. We need a bunch of test particles, a whole cloud of them. They don’t interact. They don’t carry energy or mass or anything. They just carry the sense of place. This is how we would perceive space changing in time. We can ask questions meaningfully.

Here’s an obvious question: how much volume does our cloud take up? If we’re going to be difficult about this, none at all, since it’s a finite number of particles that all have no extent. But you know what we mean. Draw a ball, or at least an ellipsoid, around the test particles. How big is that? Wait a while. Draw another ball around the now-moved test particles. How big is that now?

Here’s another question: has the cloud rotated any? The test particles, by definition, don’t have mass or anything. So they don’t have angular momentum. They aren’t pulling one another to the side any. If they rotate it’s because space has rotated, and that’s interesting to consider. And another question: might they swap positions? Could a pair of particles that go left-to-right swap so they go right-to-left? That I ask admits that I want to allow the possibility.

These are questions about coordinates. They’re about how one direction shifts to other directions. How it stretches or shrinks. That is to say, these are questions of tensors. Tensors are tools for many things, most of them about how things transmit through different directions. In this context, time is another direction.

All our questions about how space moves we can describe as curvature. How do directions fall away from being perpendicular to one another? From being parallel to themselves? How do their directions change in time? If we have three dimensions in space and one in time — a four-dimensional “manifold” — then there’s 20 different “directions” each with maybe their own curvature to consider. This may seem a lot. Every point on this manifold has this set of twenty numbers describing the curvature of space around it. There’s not much to do but accept that, though. If we could do with fewer numbers we would, but trying cheats us out of physics.

Ten of the numbers in that set are themselves a tensor. It’s known as the Weyl Tensor. It describes gravity’s equivalent to light waves. It’s about how the shape of our cloud will change as it moves. The other ten numbers form another tensor. That is, a thousand words into the essay, the Ricci Tensor. The Ricci Tensor describes how the volume of our cloud will change as the test particles move along. It may seem odd to need ten numbers for this, but that’s what we need. For three-dimensional space and one-dimensional time, anyway. We need fewer for two-dimensional space; more, for more dimensions of space.

The Ricci Tensor is a geometric construct. Most of us come to it, if we do, by way of physics. It’s a useful piece of general relativity. It has uses outside this, though. It appears in the study of Ricci Flows. Here space moves in ways akin to how heat flows. And the Ricci Tensor appears in projective geometry, in the study of what properties of shapes don’t depend on how we present them.

It’s still tricky stuff to get a feeling for. I’m not sure I have a good feel for it myself. There’s a long trail of mathematical symbols leading up to these tensors. The geometry of them becomes more compelling in four or more dimensions, which taxes the imagination. Yann Ollivier here has a paper that attempts to provide visual explanations for many of the curvatures and tensors that are part of the field. It might help.

The Summer 2017 Mathematics A To Z: Morse Theory


Today’s A To Z entry is a change of pace. It dives deeper into analysis than this round has been. The term comes from Mr Wu, of the Singapore Maths Tuition blog, whom I thank for the request.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Morse Theory.

An old joke, as most of my academia-related ones are. The young scholar says to his teacher how amazing it was in the old days, when people were foolish, and thought the Sun and the Stars moved around the Earth. How fortunate we are to know better. The elder says, ah yes, but what would it look like if it were the other way around?

There are many things to ponder packed into that joke. For one, the elder scholar’s awareness that our ancestors were no less smart or perceptive or clever than we are. For another, the awareness that there is a problem. We want to know about the universe. But we can only know what we perceive now, where we are at this moment. Even a note we’ve written in the past, or a message from a trusted friend, we can’t take uncritically. What we know is that we perceive this information in this way, now. When we pay attention to our friends in the philosophy department we learn that knowledge is even harder than we imagine. But I’ll stop there. The problem is hard enough already.

We can put it in a mathematical form, one that seems immune to many of the worst problems of knowledge. In this form it looks something like this: what can we know about the universe, if all we really know is what things in that universe are doing near us? The things that we look at are functions. The universe we’re hoping to understand is the domain of the functions. One filter we use to see the universe is Morse Theory.

We don’t look at every possible function. Functions are too varied and weird for that. We look at functions whose range is the real numbers. And they must be smooth. This is a term of art. It means the function has derivatives. It has to be continuous. It can’t have sharp corners. And it has to have lots of derivatives. The first derivative of a smooth function has to also be continuous, and has to also lack corners. And the derivative of that first derivative has to be continuous, and to lack corners. And the derivative of that derivative has to be the same. A smooth function can be differentiated over and over again, infinitely many times. None of those derivatives can have corners or jumps or missing patches or anything. This is what makes it smooth.

Most functions are not smooth, in much the same way most shapes are not circles. That’s all right. There are many smooth functions anyway, and they describe things we find interesting. Or we think they’re interesting, anyway. Smooth functions are easy for us to work with, and to know things about. There’s plenty of smooth functions. If you’re interested in something else there’s probably a smooth function that’s close enough for practical use.

Morse Theory builds on the “critical points” of these smooth functions. A critical point, in this context, is one where the derivative is zero. Derivatives being zero usually signal something interesting going on. Often they show where the function changes behavior. In freshman calculus they signal where a function changes from increasing to decreasing, so the critical point is a maximum. In physics they show where a moving body no longer has an acceleration, so the critical point is an equilibrium. Or where a system changes from one kind of behavior to another. And here — well, many things can happen.

So take a smooth function. And take a critical point that it’s got. (And, erg. Technical point. The derivative of your smooth function, at that critical point, shouldn’t be having its own critical point going on at the same spot. That makes stuff more complicated.) It’s possible to approximate your smooth function near that critical point with, of course, a polynomial. It’s always polynomials. The shape of these polynomials gives you an index for these points. And that can tell you something about the shape of the domain you’re on.
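The Morse Lemma is what makes this precise, and I’m confident stating it even though I’m keeping things informal. Near a critical point satisfying that technical condition, you can choose coordinates so the function looks like

f(x) = f(p) - x_1^2 - \cdots - x_k^2 + x_{k+1}^2 + \cdots + x_n^2

The number k of minus signs, the count of directions heading downhill, is the index.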

At least, it tells you something about what the shape is where you are. The universal model for this — based on skimming texts and papers and popularizations of this — is of a torus standing vertically. Like a doughnut that hasn’t tipped over, or like a tire on a car that’s working as normal. I suspect this is the best shape to use for teaching, as anyone can understand it while it still shows the different behaviors. I won’t resist.

Imagine slicing this tire horizontally. Slice it close to the bottom, below the central hole, and the part that drops down is a disc. At least, it could be flattened out tolerably well to a disc.

Slice it somewhere that intersects the hole, though, and you have a different shape. You can’t squash that down to a disc. You have a noodle shape. A cylinder at least. That’s different from what you got the first slice.

Slice the tire somewhere higher. Somewhere above the central hole, and you have … well, it’s still a tire. It’s got a hole in it, but you could imagine patching it and driving on. There’s another different shape that we’ve gotten from this.

Imagine we were confined to the surface of the tire, but did not know what surface it was. That we start at the lowest point on the tire and ascend it. From the way the smooth functions around us change we can tell how the surface we’re on has changed. We can see its change from “basically a disc” to “basically a noodle” to “basically a doughnut”. We could work out what the surface we’re on has to be, thanks to how these smooth functions around us change behavior.

Occasionally we mathematical-physics types want to act as though we’re not afraid of our friends in the philosophy department. So we deploy the second thing we know about Immanuel Kant. He observed that knowing the force of gravity falls off as the square of the distance between two things implies that the things should exist in a three-dimensional space. (Source: I dunno, I never read his paper or book or whatever and dunno I ever heard anyone say they did.) It’s a good observation. Geometry tells us what physics can happen, but what physics does happen tells us what geometry they happen in. And it tells the philosophy department that we’ve heard of Immanuel Kant. This impresses them greatly, we tell ourselves.

Morse Theory is a manifestation of how observable physics teaches us the geometry they happen on. And in an urgent way, too. Some of Edward Witten’s pioneering work in superstring theory was in bringing Morse Theory to quantum field theory. He showed a set of problems called the Morse Inequalities gave us insight into supersymmetric quantum mechanics. The link between physics and doughnut-shapes may seem vague. This is because you’re not remembering that mathematical physics sees “stuff happening” as curves drawn on shapes which represent the kind of problem you’re interested in. Learning what the shapes representing the problem look like is solving the problem.

If you’re interested in the substance of this, the universally-agreed reference is J Milnor’s 1963 text Morse Theory. I confess it’s hard going to read, because it’s a symbols-heavy textbook written before the existence of LaTeX. Each page reminds one why typesetters used to get hazard pay, and not enough of it.

Reading the Comics, August 9, 2017: Pets Doing Mathematics Edition


I had just enough comic strips to split this week’s mathematics comics review into two pieces. I like that. It feels so much to me like I have better readership when I have many days in a row with posting something, however slight. The A to Z is good for three days a week, and if comic strips can fill two of those other days then I get to enjoy a lot of regular publication days. … Though last week I accidentally set the Sunday comics post to appear on Monday, just before the A To Z post. I’m curious how that affected my readers. That nobody said anything is ominous.

Border collies are, as we know, highly intelligent. (Looking over a chalkboard diagramming 'fetch', with symbols.) 'There MUST be some point to it, but I guess we don't have the mathematical tools to crack it at the moment.'
Niklas Eriksson’s Carpe Diem for the 7th of August, 2017. I have to agree the border collies haven’t worked out the point of fetch. I also question whether they’ve worked out the simple ballistics of the tossed stick. If the variables mean what they suggest they mean, then dimensional analysis suggests they’ve got at least three fiascos going on here. Maybe they have an idiosyncratic use for variables like ‘v’.

Niklas Eriksson’s Carpe Diem for the 7th of August uses mathematics as the signifier for intelligence. I’m intrigued by how the joke goes a little different: while the border collies can work out the mechanics of a tossed stick, they haven’t figured out what the point of fetch is. But working out people’s motivations gets into realms of psychology and sociology and economics. There the mathematics might not be harder, but knowing that one is calculating a relevant thing is. (Eriksson’s making a running theme of the intelligence of border collies.)

Nicole Hollander’s Sylvia rerun for the 7th tosses off a mention that “we’re the first generation of girls who do math”. And that therefore there will be a cornucopia of new opportunities and good things to come to them. There’s a bunch of social commentary in there. One is the assumption that mathematics skill is a liberating thing. Perhaps it is the gloom of the times but I doubt that an oppressed group developing skills causes them to be esteemed. It seems more likely to me to make the skills become devalued. Social justice isn’t a matter of good exam grades.

Then, too, it’s not as though women haven’t done mathematics since forever. Every mathematics department on a college campus has some faded posters about Emmy Noether and Sofia Kovalevskaya and maybe Sophie Germain. Probably high school mathematics rooms too. Again perhaps it’s the gloom of the times. But I keep coming back to the goddess’s cynical dismissal of all this young hope.

Mort Walker and Dik Browne’s Hi and Lois for the 10th of February, 1960 and rerun the 8th portrays arithmetic as a grand-strategic imperative. Well, it means education as a strategic imperative. But arithmetic is the thing Dot uses. I imagine because it is so easy to teach as a series of trivia and to quiz about. And it fits in a single panel with room to spare.

Dot: 'Now try it again: two and two is four.' Trixie: 'Fwee!' Dot: 'You're not TRYING! Do you want the Russians to get AHEAD of US!?' Trixie looks back and thinks: 'I didn't even know there was anyone back there!'
Mort Walker and Dik Browne’s Hi and Lois for the 10th of February, 1960 and rerun the 8th of August, 2017. Remember: you’re only young once, but you can be geopolitically naive forever!

Paul Trap’s Thatababy for the 8th is not quite the anthropomorphic-numerals joke of the week. It circles around that territory, though, giving a couple of odd numbers some personality.

Brian Anderson’s Dog Eat Doug for the 9th finally justifies my title for this essay, as cats ponder mathematics. Well, they ponder quantum mechanics. But it’s nearly impossible to have a serious thought about that without pondering its mathematics. This doesn’t mean calculation, mind you. It does mean understanding what kinds of functions have physical importance. And what kinds of things one can do to functions. Understand them and you can discuss quantum mechanics without being mathematically stupid. And there’s enough ways to be stupid about quantum mechanics that any you can cut down is progress.

Reading the Comics, July 30, 2017: Not Really Mathematics Edition


It’s been a busy enough week at Comic Strip Master Command that I’ll need to split the results across two essays. Any other week I’d be glad for this, since, hey, free content. But this week it hits a busy time and shouldn’t I have expected that? The odd thing is that the mathematics mentions have been numerous but not exactly deep. So let’s watch as I make something big out of that.

Mark Tatulli’s Heart of the City closed out its “Math Camp” storyline this week. It didn’t end up having much to do with mathematics and was instead about trust and personal responsibility issues. You know, like stories about kids who aren’t learning to believe in themselves and follow their dreams usually are. Since we never saw any real Math Camp activities we don’t get any idea what they were trying to do to interest kids in mathematics, which is a bit of a shame. My guess would be they’d play a lot of the logic-driven puzzles that are fun but that they never get to do in class. The story established that what I thought was an amusement park was instead a fair, so, that might be anywhere in Pennsylvania or a couple of other nearby states.

Rick Kirkman and Jerry Scott’s Baby Blues for the 25th sees Hammie have “another” mathematics worksheet accident. Could be any subject, really, but I suppose it would naturally be the one that hey wait a minute, why is he doing mathematics worksheets in late July? How early does their school district come back from summer vacation, anyway?

Hammie 'accidentally' taps a glass of water on his mathematics paper. Then tears it up. Then chews it. Mom: 'Another math worksheet accident?' Hammie: 'Honest, Mom, I think they're cursed!'
Rick Kirkman and Jerry Scott’s Baby Blues for the 25th of July, 2017. Almost as alarming: Hammie is clearly way behind on his “faking plausible excuses” homework. If he doesn’t develop the skills to come up with a credible reason why he didn’t do something, how is he ever going to dodge texts from people too important not to reply to?

Olivia Walch’s Imogen Quest for the 26th uses a spot of mathematics as the emblem for teaching. In this case it’s a bit of physics. And an important bit of physics, too: it’s the time-dependent Schrödinger Equation. This is the one that describes how, if you know the total energy of the system, and the rules that set its potential and kinetic energies, you can work out the function Ψ that describes it. Ψ is a function, and it’s a powerful one. It contains probability distributions: how likely whatever it is you’re modeling is to have a particle in this region, or in that region. How likely it is to have a particle with this much momentum, versus that much momentum. And so on. Each of these we find by applying a function to the function Ψ. It’s heady stuff, and amazing stuff to me. Ψ somehow contains everything we’d like to know. And different functions work like filters that make clear one aspect of that.
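
For the record, since it’s the one piece of mathematics the strip shows off: the time-dependent Schrödinger equation is usually written something like

i\hbar \frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2 \Psi + V\Psi

The ‘V’ there is the potential energy, the rules of the system, and the left-hand side describes how Ψ changes in time.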

Dan Thompson’s Brevity for the 26th is a joke about Sesame Street’s Count von Count. Also about how we can take people’s natural aptitudes and delights and turn them into sad, droning unpleasantness in the service of corporate overlords. It’s fun.

Steve Sicula’s Home and Away rerun for the 26th is a misplaced Pi Day joke. It originally ran the 22nd of April, but in 2010, before Pi Day was nearly so much a thing.

Doug Savage’s Savage Chickens for the 26th proves something “scientific” by putting numbers into it. Particularly, by putting statistics into it. Understandable impulse. One of the great trends of the past century has been the idea that we only understand things when we can measure them. And this implies statistics. Everything is unique. Only statistical measurement lets us understand what groups of similar things are like. Does something work better than the alternative? We have to run tests, and see how the something and the alternative work. Are they so similar that the differences between them could plausibly be chance alone? Are they so different that it strains belief that they’re equally effective? It’s one of science’s tools. It’s not everything that makes for science. But it is stuff easy to communicate in one panel.

Neil Kohney’s The Other End for the 26th is really a finance joke. It’s about the ways the finance industry can turn one thing into a dazzling series of trades and derivative trades. But this is a field that mathematics colonized, or that colonized mathematics, over the past generation. Mathematical finance has done a lot to shape ideas of how we might study risk, and probability, and how we might form strategies to use that risk. It’s also done a lot to shape finance. Pretty much any major financial crisis you’ve encountered since about 1990 has been driven by a brilliant new mathematical concept meant to govern risk crashing up against the fact that humans don’t behave the way some model said they should. Nor could they; models are simplified, abstracted concepts that let hard problems be approximated. Every model has its points of failure. Hopefully we’ll learn enough about them that major financial crises can become as rare as, for example, major bridge collapses or major airplane disasters.

Why Stuff Can Orbit, Part 13: To Close A Loop


Why Stuff Can Orbit, featuring a dazed-looking coati (it's a raccoon-like creature from Latin America) and a starry background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.


Today’s is one of the occasional essays in the Why Stuff Can Orbit sequence that just has a lot of equations. I’ve tried not to write everything around equations because I know what they’re like to read. They’re pretty to look at and after about four of them you might as well replace them with a big grey box that reads “just let your eyes glaze over and move down to the words”. It’s even more glaze-y than that for non-mathematicians.

But we do need them. Equations are wonderfully compact, efficient ways to write about things that are true. Especially things that are only true if exacting conditions are met. They’re so good that I’ll often find myself checking a textbook for an explanation of something and looking only at the equations, letting my eyes glaze over the words. That’s a chilling thing to catch yourself doing. Especially when you’ve written some obscure textbooks and a slightly read mathematics blog.

What I had been looking at was a perturbed central-force orbit. We have something, generically called a planet, that orbits the center of the universe. It’s attracted to the center of the universe by some potential energy, which we describe as ‘U(r)’. It’s some number that changes with the distance ‘r’ the planet has from the center of the universe. It usually depends on other stuff too, like some kind of mass of the planet or some constants or stuff. The planet has some angular momentum, which we can call ‘L’ and pretend is a simple number. It’s in truth a complicated number, but we’ve set up the problem where we can ignore the complicated stuff. This angular momentum implies the potential energy allows for a circular orbit at some distance which we’ll call ‘a’ from the center of the universe.

From ‘U(r)’ and ‘L’ we can say whether this is a stable orbit. If it’s stable, a little perturbation, a nudging, from the circular orbit will stay small. If it’s unstable, a little perturbation will keep growing and never stop. If we perturb this circular orbit the planet will wobble back and forth around the circular orbit. Sometimes the radius will be a little smaller than ‘a’, and sometimes it’ll be a little larger than ‘a’. And now I want to see whether we get a stable closed orbit.

The orbit will be closed if the planet ever comes back to the same position and same momentum that it started with. ‘Started’ is a weird idea in this case. But it’s common vocabulary. By it we mean “whatever properties the thing had when we started paying attention to it”. Usually in a problem like this we suppose there’s some measure of time. It’s typically given the name ‘t’ because we don’t want to make this hard on ourselves. The start is some convenient reference time, often ‘t = 0’. That choice usually makes the equations look simplest.

The position of the planet we can describe with two variables. One is the distance from the center of the universe, ‘r’, which we know changes with time: ‘r(t)’. Another is the angle the planet makes with respect to some reference line. The angle we might call ‘θ’ and often do. This will also change in time, then, ‘θ(t)’. We can pick other variables to describe where something is. But they’re going to involve more algebra, more symbol work, than this choice does, so who needs it?

Momentum, now, that’s another set of variables we need to worry about. But we don’t need to worry about them. This particular problem is set up so that if we know the position of the planet we also have the momentum. We won’t be able to get both ‘r(t)’ and ‘θ(t)’ back to their starting values without also getting the momentum there. So we don’t have to worry about that. This won’t always work, as we’ll see in my future series, ‘Why Statistical Mechanics Works’.

So. We know, because it’s not that hard to work out, how long it takes for ‘r(t)’ to get back to its original, ‘r(0)’, value. It’ll take a time we worked out to be (first big equation here, although we found it a couple essays back):

T_r = 2\pi\sqrt{ \frac{m}{ -F'(a) - \frac{3}{a} F(a) }}

Here ‘m’ is the mass of the planet. And ‘F’ is a useful little auxiliary function. It’s the force that the planet feels when it’s a distance ‘r’ from the origin. It’s defined as F(r) = -\frac{dU}{dr} . It’s convenient to have around. It makes equations like this one simpler, for one. And it’s weird to think of a central force problem where we never, ever see forces. The peculiar thing is we define ‘F’ for every distance the planet might be from the center of the universe. But all we care about is its value at the equilibrium, circular orbit distance of ‘a’. We also care about its first derivative, also evaluated at the distance of ‘a’, which is that F'(a) term early on in that denominator.
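
If you’d like to see the formula produce an actual number, here’s a minimal sketch in Python. The Kepler-type potential U(r) = -C/r is my assumption for the example, not anything the essay requires, and the constants are made-up values:

import math

C, m, a = 1.0, 1.0, 2.0    # hypothetical force constant, mass, and orbit radius

def F(r):
    # the force for the assumed potential U(r) = -C/r, so F(r) = -dU/dr = -C/r**2
    return -C / r**2

def Fprime(r):
    # the first derivative of F with respect to r
    return 2.0 * C / r**3

T_r = 2.0 * math.pi * math.sqrt(m / (-Fprime(a) - (3.0 / a) * F(a)))
print(T_r)    # 2*pi*sqrt(m*a**3/C) for this potential, Kepler's third law in disguise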

So in the time between time ‘0’ and time ‘T_r’ the perturbed radius will complete a full loop. It’ll reach its biggest value and its smallest value and get back to the original. (It is so much easier to suppose the perturbation starts at its biggest value at time ‘0’ that we often assume it has. It doesn’t have to be. But if we don’t have something forcing the choice of what time to call ‘0’ on us, why not pick one that’s convenient?) The question is whether ‘θ(t)’ completes a full loop in that time. If it does then we’ve gotten back to the starting position exactly and we have a closed orbit.

Thing is that the angle will never get back to its starting value. The angle ‘θ(t)’ is always increasing at a rate we call ‘ω’, the angular velocity. This number is constant, at least approximately. Last time we found out what this number was:

\omega = \frac{L}{ma^2}

So the angle, over time, is going to look like:

\theta(t) = \frac{L}{ma^2} t

And ‘θ(Tr)’ will never equal ‘θ(0)’ again, not unless ‘ω’ is zero. And if ‘ω’ is zero then the planet is racing away from the center of the universe never to be seen again. Or it’s plummeting into the center of the universe to be gobbled up by whatever resides there. In either case, not what we traditionally think of as orbits. Even if we allow these as orbits, these would be nudges too big to call perturbations.

So here’s the resolution. Angles are right awful pains in mathematical physics. This is because increasing an angle by 2π — or decreasing it by 2π — has no visible effect. In the language of the hew-mon, adding 360 degrees to a turn leaves you back where you started. A 45 degree angle is indistinguishable from a 405 degree angle, or a 765 degree angle, or a -315 degree angle, or so on. This makes for all sorts of irritating alternate cases to consider when you try solving for where one thing meets another. But it allows us to have closed orbits.

Because we can have a closed orbit, now, if the radius ‘r(t)’ completes a full oscillation in the time it takes ‘θ(t)’ to grow by 2π. Or to grow by π. Or to grow by ½π. Or a third of π. Or so on.

So. Last time we worked out that the angular velocity had to be this number:

\omega = \frac{L}{ma^2}

And that looked weird because the central force doesn’t seem to be there. It’s in there. It’s just implicit. We need to know what the central force is to work out what ‘a’ is. But we can make it explicit by using that auxiliary little function ‘F(r)’. In particular, at the circular orbit radius of ‘a’ we have that:

F(a) = -\frac{L^2}{ma^3}

I am going to use this to work out what ‘L’ has to be, in terms of ‘F’ and ‘m’ and ‘a’. First, multiply both sides of this equation by ‘ma^3’:

F(a) \cdot ma^3 = -L^2

And then both sides by -1:

-ma^3 F(a) = L^2

Take the square root — don’t worry: it will turn out that ‘F(a)’ is a negative number, so we’re not doing anything suspicious —

\sqrt{-ma^3 F(a)} = L

Now, take that ‘L’ we’ve got and put it back into the equation for angular velocity:

\omega = \frac{L}{ma^2} = \frac{\sqrt{-ma^3 F(a)}}{ma^2}

We might look stuck and at what seems like an even worse position. It’s not. When you do enough of these problems you get used to some tricks. For example, that ‘ma^2’ in the denominator we could move under the square root if we liked. This we know because ma^2 = \sqrt{ \left(ma^2\right)^2 } at least as long as ‘ma^2’ is positive. It is.

So. We fall back on the trick of squaring and square-rooting the denominator and so generate this mess:

\omega = \sqrt{\frac{-ma^3 F(a)}{\left(ma^2\right)^2}}	\\ \omega = \sqrt{\frac{-ma^3 F(a)}{m^2 a^4}} \\ \omega = \sqrt{\frac{-F(a)}{ma}}
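
As a sanity check, put the F(a) = -\frac{L^2}{ma^3} identity from above back in, and we recover the angular velocity we started with:

\omega = \sqrt{\frac{-F(a)}{ma}} = \sqrt{\frac{L^2/\left(ma^3\right)}{ma}} = \sqrt{\frac{L^2}{m^2a^4}} = \frac{L}{ma^2}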

That’s getting nice and simple. Let me go complicate matters. I’ll want to know the angle that the planet sweeps out as the radius goes from its largest to its smallest value. Or vice-versa. This time is going to be half of ‘T_r’, the time it takes to do a complete oscillation. The oscillation might have started at time ‘t’ of zero, maybe not. But how long it takes will be the same. I’m going to call this angle ‘ψ’, because I’ve written “the angle that the planet sweeps out as the radius goes from its largest to its smallest value” enough times this essay. If ‘ψ’ is equal to π, or one-half π, or one-third π, or some other nice rational multiple of π we’ll get a closed orbit. If it isn’t, we won’t.

So. ‘ψ’ will be one-half times the oscillation time times that angular velocity. This is easy:

\psi = \frac{1}{2} \cdot T_r \cdot \omega

Put in the formulas we have for ‘T_r’ and for ‘ω’. Now it’ll be complicated.

\psi = \frac{1}{2} 2\pi \sqrt{\frac{m}{-F'(a) - \frac{3}{a} F(a)}} \sqrt{\frac{-F(a)}{ma}}

Now we’ll make this a little simpler again. We have two square roots of fractions multiplied by each other. That’s the same as the square root of the two fractions multiplied by each other. So we can take numerator times numerator and denominator times denominator, all underneath the square root sign. See if I don’t. Oh yeah and one-half of two π is π but you saw that coming.

\psi = \pi \sqrt{ \frac{-m F(a)}{-\left(F'(a) + \frac{3}{a}F(a)\right)\cdot ma} }

OK, so there’s some minus signs in the numerator and denominator worth getting rid of. There’s an ‘m’ in the numerator and the denominator that we can divide out of both sides. There’s an ‘a’ in the denominator that can multiply into a term that has a denominator inside the denominator and you know this would be easier if I could use little cross-out symbols in WordPress LaTeX. If you’re not following all this, try writing it out by hand and seeing what makes sense to cancel out.

\psi = \pi \sqrt{ \frac{F(a)}{aF'(a) + 3F(a)} }

This is getting not too bad. Start from a potential energy ‘U(r)’. Use an angular momentum ‘L’ to figure out the circular orbit radius ‘a’. From the potential energy find the force ‘F(r)’. And then, based on what ‘F’ and the first derivative of ‘F’ happen to be, at the radius ‘a’, we can see whether a closed orbit can be there.
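
Here’s a minimal sketch of that recipe in Python. I’m assuming, as the power-law installments did, that V(r) = Cr^n. The constant ‘C’ and the radius ‘a’ cancel out of the ratio (even their signs), and the angle collapses to π divided by the square root of n + 2:

import math

def psi(n, C=1.0, a=1.0):
    # the closing angle for an assumed power-law potential V(r) = C*r**n;
    # F(r) = -dV/dr = -n*C*r**(n - 1), and C and a cancel out of the ratio
    F  = -n * C * a**(n - 1)
    Fp = -n * (n - 1) * C * a**(n - 2)
    return math.pi * math.sqrt(F / (a * Fp + 3.0 * F))

print(psi(-1))     # pi: an inverse-square force; the orbit closes
print(psi(2))      # pi/2: a spring force; the orbit closes too
print(psi(0.5))    # an irrational multiple of pi; the orbit never closes

Only a few powers give a nice rational multiple of π, which is a hint of why closed orbits will turn out to be so special.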

I’ve gotten to some pretty abstract territory here. Next time I hope to make things simpler again.

Why Stuff Can Orbit, Part 12: How Fast Is An Orbit?


Why Stuff Can Orbit, featuring a dazed-looking coati (it's a raccoon-like creature from Latin America) and a starry background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.


On to the next piece of looking for stable, closed orbits of a central force. We start from a circular orbit of something around the sun or the mounting point or whatever. The center. I would have saved myself so much foggy writing if I had just decided to make this a sun-and-planet problem. But I had wanted to write the general problem. In this the force attracting the something towards the center has a strength that’s some constant times the distance to the center raised to a power. This is easy to describe in symbols. It’s cluttered to describe in words. This is why symbols are so nice.

The perturbed orbit, the one I want to see close up, looks like an oscillation around that circle. The fact it is a perturbation, a small nudge away from the equilibrium, means how big the perturbation is will oscillate in time. How far the planet (whatever) is from the center will make a sine wave in time. Whether it closes depends on what it does in space.

Part of what it does in space is easy. I just said what the distance from the planet to the center does. But to say where the planet is we need to know how far it is from the center and what angle it makes with respect to some reference direction. That’s a little harder. We also need to know where it is in the third dimension, but that’s so easy. An orbit like this is always in one plane, so we picked that plane to be that of our paper or whiteboard or tablet or whatever we’re using to sketch this out. That’s so easy to answer we don’t even count it as solved.

The angle, though. Here, I mean the angle made by looking at the planet, the center, and some reference direction. This angle can be any real number, although a lot of those angles are going to point to the same direction in space. We’re coming at this from a mathematical view, or a physics view. Or a mathematical physics view. It means we measure this angle as radians instead of degrees. That is, a right angle is \frac{\pi}{2} , not 90 degrees, thank you. A full circle is 2\pi and not 360 degrees. We aren’t doing this to be difficult. There are good reasons to use radians. They make the mathematics simpler. What else could matter?

We use \theta as the symbol for this angle. It’s a popular choice. \theta is going to change in time. We’ll want to know how fast it changes over time. This concept we call the angular velocity. For this there are a bunch of different possible notations. The one that I snuck in here two essays ago was ω.

We came at the physics of this orbiting planet from a weird direction. Well, I came at it, and you followed along, and thank you for that. But I never did something like set the planet at a particular distance from the center of the universe and give it a set speed so it would have a circular enough orbit. I set up that we should have some potential energy. That energy implies a central force. It attracts things to the center of the universe. And that there should be some angular momentum that the planet has in its movement. And from that, that there would be some circular orbit. That circular orbit is one with just the right radius and just the right change in angle over time.

From the potential energy and the angular momentum we can work out the radius of the circular orbit. Suppose your potential energy obeys a rule like V(r) = Cr^n for some number ‘C’ and some power, another number, ‘n’. Suppose your planet has the mass ‘m’. Then you’ll get a circular orbit when the planet’s a distance ‘a’ from the center, if a^{n + 2} = \frac{L^2}{n C m} . And it turns out we can also work out the angular velocity of this circular orbit. It’s all implicit in the amount of angular momentum that the planet has. This is part of why a mathematical physicist looks for concepts like angular momentum. They’re easy to work with, and they yield all sorts of interesting information, given the chance.

I first introduced angular momentum as this number that was how much of something that our something had. It’s got physical meaning, though, reflecting how much … uh … our something would like to keep rotating around the way it has. And this can be written as a formula. The angular momentum ‘L’ is equal to the moment of inertia ‘I’ times the angular velocity ‘ω’. ‘L’ and ‘ω’ are really vectors, and ‘I’ is really a tensor. But we don’t have to worry about this because this kind of problem is easy. We can pretend these are all real numbers and nothing more.

The moment of inertia depends on how the mass of the thing rotating is distributed in space. And it depends on how far the mass is from whatever axis it’s rotating around. For real bodies this can be challenging to work out. It’s almost always a multidimensional integral, haunting students in Calculus III. For a mass in a central force problem, though, it’s easy once again. Please tell me you’re not surprised. If it weren’t easy I’d have some more supplemental reading pieces here first.

For a planet of mass ‘m’ that’s a distance ‘r’ from the axis of rotation, the moment of inertia ‘I’ is equal to ‘mr^2’. I’m fibbing. Slightly. This is for a point mass, that is, something that doesn’t occupy volume. We always look at point masses in this sort of physics. At least when we start. It’s easier, for one thing. And it’s not far off. The Earth’s orbit has a radius just under 150,000,000 kilometers. The difference between the Earth’s actual radius of just over 6,000 kilometers and a point-mass radius of 0 kilometers is a minor correction.

So since we know L = I\omega , and we know I = mr^2 , we have L = mr^2\omega and from this:

\omega = \frac{L}{mr^2}

We know that ‘r’ changes in time. It oscillates from a maximum to a minimum value like any decent sine wave. So ‘r^2’ is going to oscillate too, like a … sine-squared wave. And then dividing the constant ‘L’ by something oscillating like a sine-squared wave … this implies ω changes in time. So it does. In a possibly complicated and annoying way. So it does. I don’t want to deal with that. So I don’t.

Instead, I am going to summon the great powers of approximation. This perturbed orbit is a tiny change from a circular orbit with radius ‘a’. Tiny. The difference between the actual radius ‘r’ and the circular-orbit radius ‘a’ should be small enough we don’t notice it at first glance. So therefore:

\omega = \frac{L}{ma^2}

And this is going to be close enough. You may protest: what if it isn’t? Why can’t the perturbation be so big that ‘a’ is a lousy approximation to ‘r’? To this I say: if the perturbation is that big it’s not a perturbation anymore. It might be an interesting problem. But it’s a different problem from what I’m doing here. It needs different techniques. The Earth’s orbit is different from Halley’s Comet’s orbit in ways we can’t ignore. I hope this answers your complaint. Maybe it doesn’t. I’m on your side there. A lot of mathematical physics, and of analysis, is about making approximations. We need to find perturbations big enough to give interesting results. But not so big they need harder mathematics than you can do. It’s a strange art. I’m not sure I know how to describe how to do it. What I know I’ve learned from doing a lot of problems. You start to learn what kinds of approaches usually pan out.

But what we’re relying on is the same trick we use in analysis. We suppose there is some error margin in the orbit’s radius and angle that’s tolerable. Then if the perturbation means we’d fall outside that error margin, we just look instead at a smaller perturbation. If there is no perturbation small enough to stay within our error margin then the orbit isn’t stable. And we already know it is. Here, we’re looking for closed orbits. People could in good faith argue about whether some particular observed orbit is a small enough perturbation from the circular equilibrium. But they can’t argue about whether there exist some small enough perturbations.

Let me suppose that you’re all right with my answer about big perturbations. There’s at least one more good objection to have here. It’s this: where is the central force? The mass of the planet (or whatever) is there. The angular momentum is there. The equilibrium orbit is there. But where’s the force? Where’s the potential energy we started with? Shouldn’t that appear somewhere in the description of how fast this planet moves around the center?

It should. And it is there, in an implicit form. We get the radius of the circular, equilibrium orbit, ‘a’, from knowing the potential energy. But we’ll do well to tease it out more explicitly. I hope to get there next time.

Why Stuff Can Orbit, Part 11: In Search Of Closure


Why Stuff Can Orbit, featuring a dazed-looking coati (it's a raccoon-like creature from Latin America) and a starry background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work.


I’m not ready to finish the series off yet. But I am getting closer to wrapping up perturbed orbits. So I want to say something about what I’m looking for.

In some ways I’m done already. I showed how to set up a central force problem, where some mass gets pulled towards the center of the universe. It can be pulled by a force that follows any rule you like. The rule has to follow some rules. The strength of the pull changes with how far the mass is from the center. It can’t depend on what angle the mass makes with respect to some reference meridian. Once we know how much angular momentum the mass has we can find whether it can have a circular orbit. And we can work out whether that orbit is stable. If the orbit is stable, then for a small nudge, the mass wobbles around that equilibrium circle. It spends some time closer to the center of the universe and some time farther away from it.

I want something a little more, else I can’t carry on this series. I mean, we can make central force problems with more things in them. What we have now is a two-body problem. A three-body problem is more interesting. It’s pretty near impossible to give exact, generally true answers about. We can save things by only looking at very specific cases. Fortunately one is a sun, planet, and moon, where each object is much more massive than the next one. We see a lot of things like that. Four bodies is even more impossible. Things start to clear up if we look at, like, a million bodies, because our idea of what “clear” is changes. I don’t want to do that right now.

Instead I’m going to look for closed orbits. Closed orbits are what normal people would call “orbits”. We’re used to thinking of orbits as, like, satellites going around and around the Earth. We know those go in circles, or ellipses, over and over again. They don’t, but the difference between a closed orbit and what they do is small enough we don’t need to care.

Here, “orbit” means something very close to but not exactly what normal people mean by orbits. Maybe I should have said something about that before. But the difference hasn’t counted for much before.

Start off by thinking of what we need to completely describe what a particular mass is doing. You need to know the central force law that the mass obeys. You need to know, for some reference time, where it is. You also need to know, for that same reference time, what its momentum is. Once you have that, you can predict where it should go for all time to come. You can also work out where it must have been before that reference time. (This we call “retrodicting”. Or “predicting the past”. With this kind of physics problem time has an unnerving symmetry. The tools which forecast what the mass will do in the future are exactly the same as those which tell us what the mass has done in the past.)
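
That forward-backward symmetry is easy to see in a toy sketch. Here a spring force stands in for the central force, and a kick-drift-kick stepping scheme, which is exactly time-reversible, plays the role of the physics. Step the state forward, then take the same-size step with time negated, and the original position and momentum come back:

def step(r, p, dt, m=1.0, k=1.0):
    # one kick-drift-kick step for a spring force F = -k*r;
    # running it again with -dt undoes it exactly
    p = p - 0.5 * dt * k * r
    r = r + dt * p / m
    p = p - 0.5 * dt * k * r
    return r, p

state = (1.0, 0.0)            # a hypothetical starting position and momentum
later = step(*state, 0.01)    # predicting
again = step(*later, -0.01)   # retrodicting right back
print(state, later, again)    # 'again' matches 'state', up to rounding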

Now imagine knowing all the sets of positions and momentums that the mass has had. Don’t look just at the reference time. Look at all the time before the reference time, and look at all the time after the reference time. Imagine highlighting all the sets of positions and momentums the mass ever took on or ever takes on. We highlight them against the universe of all the positions and momentums that the mass could have had if this were a different problem.

What we get is this ribbon-y thread that passes through the universe of every possible setup. This universe of every possible setup we call a “phase space”. It’s easy to explain the “space” part of that name. The phase space obeys the rules we’d expect from a vector space. It also acts in a lot of ways like the regular old space that we live in. The “phase” part I’m less sure how to justify. I suspect we get it because this way of looking at physics problems comes from statistical mechanics. And in that field we’re looking, often, at the different ways a system can behave. This mathematics looks a lot like that of different phases of matter. The changes between solids and liquids and gases are some of what we developed this kind of mathematics to understand, in fact. But this is speculation on my part. I’m not sure why “phase” has attached to this name. I can think of other, harder-to-popularize reasons why the name would make sense too. Maybe it’s the convergence of several reasons. I’d love to hear if someone has a good etymology. If one exists; remember that we still haven’t got the story straight about why ‘m’ stands for the slope of a line.

Anyway, this ribbon of all the arrangements of position and momentum that the mass does ever at any point have we call a “trajectory”. We call it a trajectory because it looks like a trajectory. Sometimes mathematics terms aren’t so complicated. We also call it an “orbit” since very often the problems we like involve trajectories that loop around some interesting area. It looks like a planet orbiting a sun.

A “closed orbit” is an orbit that gets back to where it started. This means you can take some reference time, and wait. Eventually the mass comes back to the same position and the same momentum that you saw at that reference time. This might seem unavoidable. Wouldn’t it have to get back there? And it turns out, no, it doesn’t. A trajectory might wander all over phase space. This doesn’t take much imagination. But even if it doesn’t, if it stays within a bounded region, it could still wander forever without repeating itself. If you’re not sure about that, please consider an old sequence I wrote inspired by the Aardman Animation film Arthur Christmas. Also please consider seeing the Aardman Animation film Arthur Christmas. It is one of the best things this decade has offered us. The short version is, though, that there is a lot of room even in the smallest bit of space. A trajectory is, in a way, a one-dimensional thing that might get all coiled up. But phase space has got plenty of room for that.

And sometimes we will get a closed orbit. The mass can wander around the center of the universe and come back to wherever we first noticed it with the same momentum it first had. At that point it’s locked into doing that same thing again, forever. If it could ever break out of the closed orbit it would have had to the first time around, after all.

Closed orbits, I admit, don’t exist in the real world. Well, the real world is complicated. It has more than a single mass and a single force at work. Energy and momentum are conserved. But we effectively lose both to friction. We call the shortage “entropy”. Never mind. No person has ever seen a circle, and no person ever will. They are still useful things to study. So it is with closed orbits.

An equilibrium orbit, the circular orbit of a mass that’s at exactly the right radius for its angular momentum, is closed. A perturbed orbit, wobbling around the equilibrium, might be closed. It might not. I mean next time to discuss what has to be true to close an orbit.

Why Stuff Can Orbit, Part 10: Where Time Comes From And How It Changes Things


Why Stuff Can Orbit, featuring a dazed-looking coati (it's a raccoon-like creature from Latin America) and a starry background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work.


And again my thanks to Thomas K Dye, creator of the web comic Newshounds, for the banner art. He has a Patreon to support his creative habit.

In the last installment I introduced perturbations. These are orbits that are a little off from the circles that make equilibriums. And they introduce something that’s been lurking, unnoticed, in all the work done before. That’s time.

See, how do we know time exists? … Well, we feel it, so, it’s hard for us not to notice time exists. Let me rephrase it then, and put it in contemporary technology terms. Suppose you’re looking at an animated GIF. How do you know it’s started animating? Or that it hasn’t stalled out on some frame?

If the picture changes, then you know. It has to be going. But if it doesn’t change? … Maybe it’s stalled out. Maybe it hasn’t. You don’t know. You know there’s time when you can see change. And that’s one of the little practical insights of physics. You can build an understanding of special relativity by thinking hard about that. Also think about the observation that the speed of light (in vacuum) doesn’t change.

When something physical’s in equilibrium, it isn’t changing. That’s how we found equilibriums to start with. And that means we stop keeping track of time. It’s one more thing to keep track of that doesn’t tell us anything new. Who needs it?

For the planet orbiting a sun, in a perfect circle, or its other little variations, we do still need time. At least some. How far the planet is from the sun doesn’t change, no, but where it is on the orbit will change. We can track where it is by setting some reference point. Where the planet is at the start of our problem. How big is the angle between where the planet is now, the sun (the center of our problem’s universe), and that origin point? That will change over time.

But it’ll change in a boring way. The angle will keep increasing in magnitude at a constant speed. Suppose it takes five time units for the angle to grow from zero degrees to ten degrees. Then it’ll take ten time units for the angle to grow from zero to twenty degrees. It’ll take twenty time units for the angle to grow from zero to forty degrees. Nice to know if you want to know when the planet is going to be at a particular spot, and how long it’ll take to get back to the same spot. At this rate it’ll be 180 time units before the angle grows to 360 degrees, which looks the same as zero degrees. But it’s not anything interesting happening.

We’ll label this sort of change, where time passes, yeah, but it’s too dull to notice as a “dynamic equilibrium”. There’s change, but it’s so steady and predictable it’s not all that exciting. And I’d set up the circular orbits so that we didn’t even have to notice it. If the radius of the planet’s orbit doesn’t change, then the rate at which its apsidal angle changes, its “angular velocity”, also doesn’t change.

Now, with perturbations, the distance between the planet and the center of the universe will change in time. That was the stuff at the end of the last installment. But also the apsidal angle is going to change. I’ve used ‘r(t)’ to represent the radial distance between the planet and the sun before, and to note that what value it is depends on the time. I need some more symbols.

There’s two popular symbols to use for angles. Both are Greek letters because, I dunno, they’ve always been. (Florian Cajori’s A History of Mathematical Notation doesn’t seem to have anything. And when my default go-to for explaining mathematician’s choices tells me nothing, what can I do? Look at Wikipedia? Sure, but that doesn’t enlighten me either.) One is to use theta, θ. The other is to use phi, φ. Both are good, popular choices, and in three-dimensional problems we’ll often need both. We don’t need both. The orbit of something moving under a central force might be complicated, but it’s going to be in a single plane of movement. The conservation of angular momentum gives us that. It’s not the last thing angular momentum will give us. The orbit might happen not to be in a horizontal plane. But that’s all right. We can tilt our heads until it is.

So I’ll reach deep into the universe of symbols for angles and call on θ for the apsidal angle. θ will change with time, so, ‘θ(t)’ is the angular counterpart to ‘r(t)’.

I’d said before the apsidal angle is the angle made between the planet, the center of the universe, and some reference point. What is my reference point? I dunno. It’s wherever θ(0) is, that is, where the planet is when my time ‘t’ is zero. There’s probably a bootstrapping fallacy here. I’ll cover it up by saying, you know, the reference point doesn’t matter. It’s like the choice of prime meridian. We have to have one, but we can pick whatever one is convenient. So why not pick one that gives us the nice little identity that ‘θ(0) = 0’? If you don’t buy that and insist I pick a reference point first, fine, go ahead. But you know what? The labels on my time axis are arbitrary. There’s no difference in the way physics works whether ‘t’ is ‘0’ or ‘2017’ or ‘21350’. (At least as long as I adjust any time-dependent forces, which there aren’t here.) So we get back to ‘θ(0) = 0’.

For a circular orbit, the dynamic equilibrium case, these are pretty boring, but at least they’re easy to write. They’re:

r(t) = a	\\ \theta(t) = \omega t

Here ‘a’ is the radius of the circular orbit. And ω is a constant number, the angular velocity. It’s how much a bit of time changes the apsidal angle. And this set of equations is pretty dull. You can see why it barely rates a mention.

The perturbed case gets more interesting. We know how ‘r(t)’ looks. We worked that out last time. It’s some function like:

r(t) = a + A \cos\left(\sqrt{\frac{k}{m}} t\right) + B \sin\left(\sqrt{\frac{k}{m}} t\right)

Here ‘A’ and ‘B’ are some numbers telling us how big the perturbation is, and ‘m’ is the mass of the planet, and ‘k’ is something related to how strong the central force is. And ‘a’ is that radius of the circular orbit, the thing we’re perturbed around.
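
Here’s a sketch of that wobble, with made-up values for every constant:

import math

a, A, B, k, m = 1.0, 0.05, 0.0, 1.0, 1.0    # all hypothetical values

def r_of_t(t):
    # the perturbed radius: the circular-orbit radius plus a small wobble
    w = math.sqrt(k / m)
    return a + A * math.cos(w * t) + B * math.sin(w * t)

for t in (0.0, 1.5, 3.0, 4.5, 6.0):
    print(t, round(r_of_t(t), 4))    # drifts between 0.95 and 1.05, around a = 1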

What about ‘θ(t)’? How’s that look? … We don’t seem to have a lot to go on. We could go back to Newton and all that force equalling the change in momentum over time stuff. We can always do that. It’s tedious, though. We have something better. It’s another gift from the conservation of angular momentum. When we can turn a forces-over-time problem into a conservation-of-something problem we’re usually doing the right thing. The conservation-of-something is typically a lot easier to set up and to track. We’ve used it in the conservation of energy, before, and we’ll use it again. The conservation of ordinary, ‘linear’, momentum helps other problems, though not, I’ll grant, this one. The conservation of angular momentum will help us here.

So what is angular momentum? … It’s something about ice skaters twirling around and your high school physics teacher sitting on a bar stool spinning a bike wheel. All right. But it’s also a quantity. We can get some idea of it by looking at the formula for calculating linear momentum:

\vec{p} = m\vec{v}

The linear momentum of a thing is its inertia times its velocity. This is if the thing isn’t moving so fast that we have to notice relativity. Also if it isn’t, like, an electric or a magnetic field so we have to notice it’s not precisely a thing. Also if it isn’t a massless particle like a photon because see previous sentence. I’m talking about ordinary things like planets and blocks of wood on springs and stuff. The inertia, ‘m’, is rather happily the same thing as its mass. The velocity is how fast something is travelling and which direction it’s going in.

Angular momentum, meanwhile, we calculate with this radically different-looking formula:

\vec{L} = I\vec{\omega}

Here, again, talking about stuff that isn’t moving so fast we have to notice relativity. That isn’t electric or magnetic fields. That isn’t massless particles. And so on. Here ‘I’ is the “moment of inertia” and \vec{\omega} is the angular velocity. The angular velocity is a vector that describes for us how fast the spinning is and what direction the axis around which the thing spins is. The moment of inertia describes how easy or hard it is to make the thing spin around each axis. It’s a tensor because real stuff can be easier to spin in some directions than in others. If you’re not sure that’s actually so, try tossing some stuff in the air so it spins in each of the three major directions. You’ll see.

We’re fortunate. For central force problems the moment of inertia is easy to calculate. We don’t need the tensor stuff. And we don’t even need to notice that the angular velocity is a vector. We know what axis the planet’s rotating around; it’s the one pointing out of the plane of motion. We can focus on the size of the angular velocity, the number ‘ω’. See how they’re different, what with one not having an arrow over the symbol. The arrow-less version is easier. For a planet, or other object, with mass ‘m’ that’s orbiting a distance ‘r’ from the sun, the moment of inertia is:

I = mr^2

So we know this number is going to be constant:

L = mr^2\omega

The mass ‘m’ doesn’t change. We’re not doing those kinds of problem. So however ‘r’ changes in time, the angular velocity ‘ω’ has to change with it, so that this product stays constant. The angular velocity is how the apsidal angle ‘θ’ changes over time. So since we know ‘L’ doesn’t change, and ‘m’ doesn’t change, then the way ‘r’ changes must tell us something about how ‘θ’ changes. We’ll get into that next time.
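
The bookkeeping itself is short enough to sketch right now, with placeholder numbers; what it does to the orbit is the next installment’s business:

L_total, m = 1.0, 1.0    # hypothetical angular momentum and mass

def omega(r):
    # the angular velocity that conservation of angular momentum forces
    return L_total / (m * r**2)

print(omega(1.0), omega(1.1))    # farther from the center means a slower sweep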

Why Stuff Can Orbit, Part 9: How The Spring In The Cosmos Behaves


Why Stuff Can Orbit, featuring a dazed-looking coati (it's a raccoon-like creature from Latin America) and a starry background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work.


First, I thank Thomas K Dye for the banner art I have for this feature! Thomas is the creator of the longrunning web comic Newshounds. He’s hoping soon to finish up special editions of some of the strip’s stories and to publish a definitive edition of the comic’s history. He’s also got a Patreon account to support his art habit. Please give his creations some of your time and attention.

Now back to central forces. I’ve run out of obvious fun stuff to say about a mass that’s in a circular orbit around the center of the universe. Before you question my sense of fun, remember that I own multiple pop histories about the containerized cargo industry and last month I read another one that’s changed my mind about some things. These sorts of problems cover a lot of stuff. They cover planets orbiting a sun and blocks of wood connected to springs. That’s about all we do in high school physics anyway. Well, there’s spheres colliding, but there’s no making a central force problem out of those. You can also make some things that look like bad quantum mechanics models out of that. The mathematics is interesting even if the results don’t match anything in the real world.

But I’m sticking with central forces that look like powers. These have potential energy functions with rules that look like V(r) = Cr^n. So far, ‘n’ can be any real number. It turns out ‘n’ has to be larger than -2 for a circular orbit to be stable, but that’s all right. There are lots of numbers larger than -2. ‘n’ carries the connotation of being an integer, a whole (positive or negative) number. But if we want to let it be any old real number like 0.1 or π or 18 and three-sevenths that’s fine. We make a note of that fact and remember it right up to the point we stop pretending to care about non-integer powers. I estimate that’s like two entries off.

We get a circular orbit by setting the thing that orbits in … a circle. This sounded smarter before I wrote it out like that. Well. We set it moving perpendicular to the “radial direction”, which is the line going from wherever it is straight to the center of the universe. This perpendicular motion means there’s a non-zero angular momentum, which we write as ‘L’ for some reason. For each angular momentum there’s a particular radius that allows for a circular orbit. Which radius? It’s whatever one is a minimum for the effective potential energy:

V_{eff}(r) = Cr^n + \frac{L^2}{2m}r^{-2}

This we can find by taking the first derivative of ‘V_eff’ with respect to ‘r’ and finding where that first derivative is zero. This is standard mathematics stuff, quite routine. We can do this with any function, whether it represents something physical or not. So:

\frac{dV_{eff}}{dr} = nCr^{n-1} - 2\frac{L^2}{2m}r^{-3} = 0

And after some work, this gets us to the circular orbit’s radius:

r = \left(\frac{L^2}{nCm}\right)^{\frac{1}{n + 2}}
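
As a quick numerical sketch, with made-up constants, you can check that this radius sits at the bottom of the effective potential:

C, n, L, m = 1.0, 2.0, 3.0, 1.0    # hypothetical constants

a = (L**2 / (n * C * m)) ** (1.0 / (n + 2.0))

def V_eff(r):
    # the effective potential energy from above
    return C * r**n + L**2 / (2.0 * m) / r**2

# V_eff should be larger a little to either side of a than it is at a
print(V_eff(0.99 * a), V_eff(a), V_eff(1.01 * a))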

What I’d like to talk about is if we’re not quite at that radius. If we set the planet (or whatever) a little bit farther from the center of the universe. Or a little closer. Same angular momentum though, so the equilibrium, the circular orbit, should be in the same spot. It happens there isn’t a planet there.

This enters us into the world of perturbations, which is where most of the big money in mathematical physics is. A perturbation is a little nudge away from an equilibrium. What happens in response to the little nudge is interesting stuff. And here we already know, qualitatively, what’s going to happen: the planet is going to rock around the equilibrium. This is because the circular orbit is a stable equilibrium. I’d described that qualitatively last time. So now I want to talk quantitatively about how the perturbation changes given time.

Before I get there I need to introduce another bit of notation. It is so convenient to be able to talk about the radius of the circular orbit that would be the equilibrium. I’d called that ‘r’ up above. But I also need to be able to talk about how far the perturbed planet is from the center of the universe. That’s also really hard not to call ‘r’. Something has to give. Since the radius of the circular orbit is not going to change I’m going to give that a new name. I’ll call it ‘a’. There’s several reasons for this. One is that ‘a’ is commonly used for describing the size of ellipses, which turn up in actual real-world planetary orbits. That’s something we know because this is like the thirteenth part of an essay series about the mathematics of orbits. You aren’t reading this if you haven’t picked up a couple things about orbits on your own. Also we’ve used ‘a’ before, in these sorts of approximations. It was handy in the last supplemental as the point of expansion’s name. So let me make that unmistakable:

a \equiv r = \left(\frac{L^2}{nCm}\right)^{\frac{1}{n + 2}}

The \equiv there means “defined to be equal to”. You might ask how this is different from “equals”. It seems like more emphasis to me. Also, there are other names for the circular orbit’s radius that I could have used. ‘r_e’ would be good enough, as the subscript would suggest “radius of equilibrium”. Or ‘r_0’ would be another popular choice, the 0 suggesting that this is something of key, central importance and also looking kind of like a circle. (That’s probably coincidence.) I like the ‘a’ better there because I know how easy it is to drop a subscript. If you’re working on a problem for yourself that’s easy to fix, with enough cursing and redoing your notes. On a board in front of class it’s even easier to fix since someone will ask about the lost subscript within three lines. In a post like this? It would be a mess.

So now I’m going to look at possible values of the radius ‘r’ that are close to ‘a’. How close? Close enough that ‘V_eff’, the effective potential energy, looks like a parabola. If it doesn’t look much like a parabola then I look at values of ‘r’ that are even closer to ‘a’. (Do you see how the game is played? If you don’t, look closer. Yes, this is actually valid.) If ‘r’ is that close to ‘a’, then we can get away with this polynomial expansion:

V_{eff}(r) \approx V_{eff}(a) + m\cdot(r - a) + \frac{1}{2} m_2 (r - a)^2

where

m = \frac{dV_{eff}}{dr}\left(a\right)	\\ m_2  = \frac{d^2V_{eff}}{dr^2}\left(a\right)

The “approximate” there is because this is an approximation. V_{eff}(r) is in truth equal to the thing on the right-hand-side there plus something that isn’t (usually) zero, but that is small.

I am sorry beyond my ability to describe that I didn’t make that ‘m’ and ‘m_2’ consistent last week. That’s all right. One of these is going to disappear right away.

Now, what is V_{eff}(a) ? Well, that’s whatever you get from putting in ‘a’ wherever you start out seeing ‘r’ in the expression for V_{eff}(r) . I’m not going to bother with that. Call it math, fine, but that’s just a search-and-replace on the character ‘r’. Also, where I’m going next, it’s going to disappear, never to be seen again, so who cares? What’s important is that this is a constant number. If ‘r’ changes, the value of V_{eff}(a) does not, because ‘r’ doesn’t appear anywhere in V_{eff}(a) .

How about ‘m’? That’s the value of the first derivative of ‘V_eff’ with respect to ‘r’, evaluated when ‘r’ is equal to ‘a’. That might be something. It’s not, because of what ‘a’ is. It’s the value of ‘r’ which would make \frac{dV_{eff}}{dr}(r) equal to zero. That’s why ‘a’ has that value instead of some other, any other.

So we’ll have a constant part ‘V_eff(a)’, plus a zero part, plus a part that’s a parabola. This is normal, by the way, when we do expansions around an equilibrium. At least it’s common. Good to see it. To find ‘m_2’ we have to take the second derivative of ‘V_eff(r)’ and then evaluate it when ‘r’ is equal to ‘a’ and ugh but here it is.

\frac{d^2V_{eff}}{dr^2}(r) = n (n - 1) C r^{n - 2} + 3\cdot\frac{L^2}{m}r^{-4}
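
If you’d like a check on my differentiation (wise), here’s a sketch using sympy, the symbolic-mathematics library for Python:

import sympy as sp

r, L, m, C = sp.symbols('r L m C', positive=True)
n = sp.symbols('n', real=True)
V_eff = C * r**n + L**2 / (2 * m) / r**2

# the difference between sympy's second derivative and the formula above
difference = sp.diff(V_eff, r, 2) - (n * (n - 1) * C * r**(n - 2) + 3 * L**2 / (m * r**4))
print(sp.simplify(difference))    # prints 0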

And at the point of approximation, where ‘r’ is equal to ‘a’, it’ll be:

m_2 = \frac{d^2V_{eff}}{dr^2}(a) = n (n - 1) C a^{n - 2} + 3\cdot\frac{L^2}{m}a^{-4}

We know exactly what ‘a’ is so we could write that out in a nice big expression. You don’t want to. I don’t want to. It’s a bit of a mess. I mean, it’s not hard, but it has a lot of symbols in it and oh all right. Here. Look fast because I’m going to get rid of that as soon as I can.

m_2 = \frac{d^2V_{eff}}{dr^2}(a) = n (n - 1) C \left(\frac{L^2}{n C m}\right)^{\frac{n - 2}{n + 2}} + 3\cdot\frac{L^2}{m}\left(\frac{L^2}{n C m}\right)^{-\frac{4}{n + 2}}

For the values of ‘n’ that we actually care about because they turn up in real actual physics problems this expression simplifies some. Enough, anyway. If we pretend we know nothing about ‘n’ besides that it is a number bigger than -2 then … ugh. We don’t have a lot that can clean it up.

Here’s how. I’m going to define an auxiliary little function. Its role is to contain our symbolic sprawl. It has a legitimate role too, though. At least it represents something that it makes sense to give a name. It will be a new function, named ‘F’ and that depends on the radius ‘r’:

F(r) \equiv -\frac{dV}{dr}

Notice that’s the derivative of the original ‘V’, not the angular-momentum-equipped ‘V_eff’. This is the secret of its power. It doesn’t do anything to make V_{eff}(r) easier to work with. It starts being good when we take its derivatives, though:

\frac{dV_{eff}}{dr} = -F(r) - \frac{L^2}{m}r^{-3}

That already looks nicer, doesn’t it? It’s going to be really slick when you think about what ‘F(a)’ is. Remember that ‘a’ is the value for ‘r’ which makes the derivative of ‘V_eff’ equal to zero. So … I may not know much, but I know this:

0 = \frac{dV_{eff}}{dr}(a) = -F(a) - \frac{L^2}{m}a^{-3} \\ F(a) = -\frac{L^2}{ma^3}

I’m not going to say what value F(r) has for other values of ‘r’ because I don’t care. But now look at what it does for the second derivative of ‘V_eff’:

\frac{d^2 V_{eff}}{dr^2}(r) = -F'(r) + 3\frac{L^2}{mr^4}

Here the ‘F'(r)’ is a shorthand way of writing ‘the derivative of F with respect to r’. You can do that when there’s only the one free variable to consider. And now something magic happens when we look at the second derivative of ‘V_eff’ when ‘r’ is equal to ‘a’ …

\frac{d^2 V_{eff}}{dr^2}(a) = -F'(a) - \frac{3}{a} F(a)

We get away with this because we happen to know that ‘F(a)’ is equal to -\frac{L^2}{ma^3} and doesn’t that work out great? We’ve turned a symbolic mess into a … less symbolic mess.

Now why do I say it’s legitimate to introduce ‘F(r)’ here? It’s because minus the derivative of the potential energy with respect to the position of something can be something of actual physical interest. It’s the amount of force exerted on the particle by that potential energy at that point. The amount of force on a thing is something that we could imagine being interested in. Indeed, we’d have used that except potential energy is usually so much easier to work with. I’ve avoided it up to this point because it wasn’t giving me anything I needed. Here, I embrace it because it will save me from some awful lines of symbols.

Because with this expression in place I can write the approximation to the effective potential energy as:

V_{eff}(r) \approx V_{eff}(a) + \frac{1}{2} \left( -F'(a) - \frac{3}{a}F(a) \right) (r - a)^2

So if ‘r’ is close to ‘a’, then the polynomial on the right is a good enough approximation to the effective potential energy. And that potential energy has the shape of a spring’s potential energy. We can use what we know about springs to describe its motion. Particularly, we’ll have this be true:

\frac{dp}{dt} = -\frac{dV_{eff}}{dr}(r) = \left( F'(a) + \frac{3}{a} F(a)\right)\left(r - a\right)

Here, ‘p’ is the (linear) momentum of whatever’s orbiting, which we can treat as equal to ‘m\frac{dr}{dt}’, the mass of the orbiting thing times the rate at which its distance from the center changes. You may sense in me some reluctance about doing this, what with that ‘we can treat as equal to’ talk. There’s reasons for this and I’d have to get deep into geometry to explain why. I can get away with specifically this use because the problem allows it. If you’re trying to do your own original physics problem inspired by this thread, and it’s not orbits like this, be warned. This is a spot that could open up to a gigantic danger pit, lined at the bottom with sharp spikes and angry poison-clawed mathematical tigers and I bet it’s raining down there too.

So we can rewrite all this as

m\frac{d^2r}{dt^2} = -\frac{dV_{eff}}{dr}(r) = \left( F'(a) + \frac{3}{a} F(a)\right)\left(r - a\right)

And when we learned everything interesting there was to know about springs we learned what the solutions to this look like. Oh, in that essay the variable that changed over time was called ‘x’ and here it’s called ‘r - a’, but that’s not an actual difference. ‘r’ will be some sinusoidal curve around ‘a’:

r(t) = a + A \cos\left(\sqrt{\frac{k}{m}} t\right) + B \sin\left(\sqrt{\frac{k}{m}} t\right)

where, here, ‘k’ is equal to that whole mass of constants on the right-hand side:

k = -\left( F'(a) + \frac{3}{a} F(a)\right)

I don’t know what ‘A’ and ‘B’ are. It’ll depend on just what the perturbation is like, how far the planet is from the circular orbit. But I can tell you what the behavior is like. The planet will wobble back and forth around the circular orbit, sometimes closer to the center, sometimes farther away. It’ll spend as much time closer to the center than the circular orbit as it does farther away. And the period of that oscillation will be

T = 2\pi\sqrt{\frac{m}{k}} = 2\pi\sqrt{\frac{m}{-\left(F'(a) + \frac{3}{a}F(a)\right)}}

This tells us something about what the orbit of a thing not in a circular orbit will be like. Yes, I see you in the back there, quivering with excitement about how we’ve got to elliptical orbits. You’re moving too fast. We haven’t got that. There will be elliptical orbits, yes, but only for a very particular power ‘n’ for the potential energy. Not for most of them. We’ll see.

It might strike you there’s something in that square root. We need to take the square root of a positive number, so maybe this will tell us something about what kinds of powers we’re allowed. It’s a good thought. It turns out not to tell us anything useful, though. Suppose we started with V(r) = Cr^n . Then F(r) = -nCr^{n - 1}, and F'(r) = -n(n - 1)Cr^{n - 2} . Sad to say, this leads us to a journey which reveals that we need ‘n’ to be larger than -2 or else we don’t get oscillations around a circular orbit. We already knew that, though. We already found we needed it to have a stable equilibrium before. We can read there being no period for these oscillations around the circular orbit as another way of saying the circular orbit isn’t stable. So, no, we haven’t got something new out of this.
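
If you’d rather check that symbol-grinding than trust it, a few lines of Python with the sympy package will do it. This is only a sketch of the power-law case above; the variable names are mine and nothing here is particular to any real force.

import sympy as sp

r, a = sp.symbols('r a', positive=True)
C, n = sp.symbols('C n')          # these may be any sign, so leave them general

V = C * r**n                      # the power-law potential energy
F = -sp.diff(V, r)                # the force: minus the derivative of V

# the 'spring constant' of the wobble around a circular orbit at r = a
k = -(sp.diff(F, r) + (3 / r) * F)
print(sp.simplify(k.subs(r, a)))  # something equivalent to C*n*(n + 2)*a**(n - 2)

An attractive force needs ‘n’ times ‘C’ to be positive. So ‘k’ is positive, and the square root untroubled, exactly when ‘n’ is larger than -2. Which is what we said.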

We will get to new stuff, though. Maybe even ellipses.

What Second Derivatives Are And What They Can Do For You


This is another supplemental piece because it’s too much to include in the next bit of Why Stuff Can Orbit. I need some more stuff about how a mathematical physicist would look at something.

This is also a story about approximations. A lot of mathematics is really about approximations. I don’t mean numerical computing. We all know that when we compute we’re making approximations. We use 0.333333 instead of one-third and we use 3.141592 instead of π. But a lot of precise mathematics, what we call analysis, is also about approximations. We do this by a logical structure that works something like this: take something we want to prove. Now for every positive number ε we can find something — a point, a function, a curve — that’s no more than ε away from the thing we’re really interested in, and which is easier to work with. Then we prove whatever we want to with the easier-to-work-with thing. And since ε can be as tiny a positive number as we want, we can suppose ε is a tinier difference than we can hope to measure. And so the difference between the thing we’re interested in and the thing we’ve proved something interesting about is zero. (This is the part that feels like we’re pulling a scam. We’re not, but this is where it’s worth stopping and thinking about what we mean by “a difference between two things”. When you feel confident this isn’t a scam, continue.) So we proved whatever we proved about the thing we’re interested in. Take an analysis course and you will see this all the time.

When we get into mathematical physics we do a lot of approximating functions with polynomials. Why polynomials? Yes, because everything is polynomials. But also because polynomials make so much mathematical physics easy. Polynomials are easy to calculate, if you need numbers. Polynomials are easy to integrate and differentiate, if you need analysis. Here that’s the calculus that tells you about patterns of behavior. If you want to approximate a continuous function you can always do it with a polynomial. The polynomial might have to be infinitely long to approximate the entire function. That’s all right. You can chop it off after finitely many terms. This finite polynomial is still a good approximation. It’s just good for a smaller region than the infinitely long polynomial would have been.

Necessary qualifiers: pages 65 through 82 of any book on real analysis.

So. Let me get to functions. I’m going to use a function named ‘f’ because I’m not wasting my energy coming up with good names. (When we get back to the main Why Stuff Can Orbit sequence this is going to be ‘U’ for potential energy or ‘E’ for energy.) It’s got a domain that’s the real numbers, and a range that’s the real numbers. To express this in symbols I can write f: \Re \rightarrow \Re . If I have some number called ‘x’ that’s in the domain then I can tell you what number in the range is matched by the function ‘f’ to ‘x’: it’s the number ‘f(x)’. You were expecting maybe 3.5? I don’t know that about ‘f’, not yet anyway. The one thing I do know about ‘f’, because I insist on it as a condition for appearing, is that it’s continuous. It hasn’t got any jumps, any gaps, any regions where it’s not defined. You could draw a curve representing it with a single, if wriggly, stroke of the pen.

I mean to build an approximation to the function ‘f’. It’s going to be a polynomial expansion, a set of things to multiply and add together that’s easy to find. To make this polynomial expansion I need to choose some point to build the approximation around. Mathematicians call this the “point of expansion” because we froze up in panic when someone asked what we were going to name it, okay? But how are we going to make an approximation to a function if we don’t have some particular point we’re approximating around?

(One answer we find in grad school when we pick up some stuff from linear algebra we hadn’t been thinking about. We’ll skip it for now.)

I need a name for the point of expansion. I’ll use ‘a’. Many mathematicians do. Another popular name for it is ‘x0‘. Or if you’re using some other variable name for stuff in the domain then whatever that variable is with subscript zero.

So my first approximation to the original function ‘f’ is … oh, shoot, I should have some new name for this. All right. I’m going to use ‘F0‘ as the name. This is because it’s one of a set of approximations, each of them a little better than the old. ‘F1‘ will be better than ‘F0‘, but ‘F2‘ will be even better, and ‘F2038‘ will be way better yet. I’ll also say something about what I mean by “better”, although you’ve got some sense of that already.

I start off by calling the first approximation ‘F0‘ by the way because you’re going to think it’s too stupid to dignify with a number as big as ‘1’. Well, I have other reasons, but they’ll be easier to see in a bit. ‘F0‘, like all its sibling ‘Fn‘ functions, has a domain of the real numbers and a range of the real numbers. The rule defining how to go from a number ‘x’ in the domain to some real number in the range?

F^0(x) = f(a)

That is, this first approximation is simply whatever the original function’s value is at the point of expansion. Notice that’s an ‘x’ on the left side of the equals sign and an ‘a’ on the right. This seems to challenge the idea of what an “approximation” even is. But it’s legit. Supposing something to be constant is often a decent working assumption. If you failed to check what the weather for today will be like, supposing that it’ll be about like yesterday will usually serve you well enough. If you aren’t sure where your pet is, you look first wherever you last saw the animal. (Or, yes, where your pet most loves to be. A particular spot, though.)

We can make this rigorous. A mathematician thinks this is rigorous: you pick any margin of error you like. Then I can find a region near enough to the point of expansion. The value for ‘f’ for every point inside that region is ‘f(a)’ plus or minus your margin of error. It might be a small region, yes. Doesn’t matter. It exists, no matter how tiny your margin of error was.
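
If you like you can watch that rigor play out numerically. Here’s a sketch, with the cosine function standing in for ‘f’, one-half standing in for ‘a’, and a margin of error I picked for no particular reason:

import math

f, a, eps = math.cos, 0.5, 0.01   # stand-ins: the function, the point of expansion, the margin

delta = 1.0                       # the region: everything within delta of a
while any(abs(f(a + s * delta) - f(a)) >= eps for s in (-1, 1)):
    delta /= 2                    # too big; shrink the region and try again
print(delta)                      # a region size that honors this margin of error

This only checks the edges of the region, which is fine for something as well-behaved as cosine is near one-half. A fussier function would demand a fussier check.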

But yeah, that expansion still seems too cheap to work. My next approximation, ‘F1‘, will be a little better. I mean that we can expect it will be closer than ‘F0‘ was to the original ‘f’. Or it’ll be as close for a bigger region around the point of expansion ‘a’. What it’ll represent is a line. Yeah, ‘F0‘ was a line too. But ‘F0‘ is a horizontal line. ‘F1‘ might be a line at some completely other angle. If that works better. The second approximation will look like this:

F^1(x) = f(a) + m\cdot\left(x - a\right)

Here ‘m’ serves its traditional yet poorly-explained role as the slope of a line. What the slope of that line should be we learn from the derivative of the original ‘f’. The derivative of a function is itself a new function, with the same domain and the same range. There’s a couple ways to denote this. Each way has its strengths and weaknesses about clarifying what we’re doing versus how much we’re writing down. And trying to write down almost anything can inspire confusion in analysis later on. There’s a point in analysis where you have to shift from thinking about particular problems to thinking about how problems in general work.

So I will define a new function, spoken of as f-prime, this way:

f'(x) = \frac{df}{dx}\left(x\right)

If you look closely you realize there’s two different meanings of ‘x’ here. One is the ‘x’ that appears in parentheses. It’s the value in the domain of f and of f’ where we want to evaluate the function. The other ‘x’ is the one in the lower side of the derivative, in that \frac{df}{dx} . That’s my sloppiness, but it’s not uniquely mine. Mathematicians keep this straight by using the symbols \frac{df}{dx} so much they don’t even see the ‘x’ down there anymore so have no idea there’s anything to find confusing. Students keep this straight by guessing helplessly about what their instructors want and clinging to anything that doesn’t get marked down. Sorry. But what this means is to “take the derivative of the function ‘f’ with respect to its variable, and then, evaluate what that expression is for the value of ‘x’ that’s in parentheses on the left-hand side”. We can do some things that avoid the confusion in symbols there. They all require adding some more variables and some more notation in, and it looks like overkill for a measly definition like this.
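
One way to keep the two jobs of ‘x’ straight is to do the two steps separately, the way a computer algebra system forces you to. A sketch, with a cubing function standing in for ‘f’:

import sympy as sp

x = sp.Symbol('x')
f = x**3                  # a stand-in; the essay's 'f' is any suitable function
fprime = sp.diff(f, x)    # step one: the derivative, itself a function of x
print(fprime.subs(x, 2))  # step two: evaluate at the point; prints 12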

Anyway. We really just want the derivative evaluated at one point, the point of expansion. That is:

m = f'(a) = \frac{df}{dx}\left(a\right)

which by the way avoids that overloaded meaning of ‘x’ there. Put this together and we have what we call the tangent line approximation to the original ‘f’ at the point of expansion:

F^1(x) = f(a) + f'(a)\cdot\left(x - a\right)

This is also called the tangent line, because it’s a line that’s tangent to the original function. Plots of ‘F1‘ and of the original function ‘f’ are guaranteed to touch one another only at the point of expansion. They might happen to touch again, but that’s luck. The tangent line will be close to the original function near the point of expansion. It might happen to be close again later on, but that’s luck, not design. Most stuff you might want to do with the original function you can do with the tangent line, but the tangent line will be easier to work with. It exactly matches the original function at the point of expansion, and its first derivative exactly matches the original function’s first derivative at the point of expansion.

We can do better. We can find a parabola, a second-order polynomial that approximates the original function. This will be a function ‘F2(x)’ that looks something like:

F^2(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 m_2 \left(x - a\right)^2

What we’re doing is adding a parabola to the approximation. This is that curve that looks kind of like a loosely-drawn U. The ‘m2‘ there measures how spread out the U is. It’s not quite the slope, but it’s kind of like that, which is why I’m using the letter ‘m’ for it. Its value we get from the second derivative of the original ‘f’:

m_2 = f''(a) = \frac{d^2f}{dx^2}\left(a\right)

We find the second derivative of a function ‘f’ by evaluating the first derivative, and then, taking the derivative of that. We can denote it with two ‘ marks after the ‘f’ as long as we aren’t stuck wrapping the function name in ‘ marks to set it out. And so we can describe the function this way:

F^2(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 f''(a) \left(x - a\right)^2

This will be a better approximation to the original function near the point of expansion. Or it’ll make larger the region where the approximation is good.

If the first derivative of a function at a point is zero that means the tangent line is horizontal. In physics stuff this is an equilibrium. The second derivative can tell us whether the equilibrium is stable or not. If the second derivative at the equilibrium is positive it’s a stable equilibrium. The function looks like a bowl open at the top. If the second derivative at the equilibrium is negative then it’s an unstable equilibrium; the function looks like a dome, and any nudge rolls the thing away.

We can make better approximations yet, by using even more derivatives of the original function ‘f’ at the point of expansion:

F^3(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 f''(a) \left(x - a\right)^2 + \frac{1}{3\cdot 2} f'''(a) \left(x - a\right)^3

There’s better approximations yet. You can probably guess what the next, fourth-degree, polynomial would be. Or you can after I tell you the fraction in front of the new term will be \frac{1}{4\cdot 3\cdot 2} . The only big difference is that after about the third derivative we give up on adding ‘ marks after the function name ‘f’. It’s just too many little dots. We start writing, like, ‘f(iv)‘ instead. Or if the Roman numerals are too much then ‘f(2038)‘ instead. Or if we don’t want to pin things down to a specific value ‘f(j)‘ with the understanding that ‘j’ is some whole number.
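
Here’s the whole ladder of approximations in a Python sketch. The cosine function is my stand-in for ‘f’ and one-half my stand-in for ‘a’; nothing here is special about either choice.

import math

a = 0.5                                  # the point of expansion
# cosine's value and first three derivatives, evaluated at a
derivs = [math.cos(a), -math.sin(a), -math.cos(a), math.sin(a)]

def F(j, x):
    # the j-th approximation: the first j+1 terms of the expansion
    return sum(derivs[i] * (x - a)**i / math.factorial(i) for i in range(j + 1))

x = 0.75                                 # near, but not at, the point of expansion
for j in range(4):
    print(j, abs(math.cos(x) - F(j, x)))   # the error shrinks with every rung

Each rung of the ladder shrinks the error at a point near the point of expansion. Move the point farther away and you’d need more rungs for the same comfort.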

We don’t need all of them. In physics problems we get equilibriums from the first derivative. We get stability from the second derivative. And we get springs in the second derivative too. And that’s what I hope to pick up on in the next installment of the main series.

Everything Interesting There Is To Say About Springs


I need another supplemental essay to get to the next part in Why Stuff Can Orbit. (Here’s the last part.) You probably guessed it’s about springs. They’re useful to know about. Why? That one killer Mystery Science Theater 3000 short, yes. But also because they turn up everywhere.

Not because there are literally springs in everything. Not with the rise in anti-spring political forces. But what makes a spring is a force that pushes something back where it came from. It pushes with a force that grows just as fast as the distance from where it came grows. Most anything that’s stable, that has some normal state which it tends to look like, acts like this. A small nudging away from the normal state gets met with some resistance. A bigger nudge meets bigger resistance. And most stuff that we see is stable. If it weren’t stable it would have broken before we got there.

(There are exceptions. Stable is, sometimes, about perspective. It can be that something is unstable but it takes so long to break that we don’t have to worry about it. Uranium, for example, is dying, turning slowly into stable elements like lead and helium. There will come a day there’s none left in the Earth. But it takes so long to break down that, barring surprises, the Earth will have broken down into something else first. And it may be that something is unstable, but it’s created by something that’s always going on. Oxygen in the atmosphere is always busy combining with other chemicals. But oxygen stays in the atmosphere because life keeps breaking it out of other chemicals.)

Now I need to put in some terms. Start with your thing. It’s on a spring, literally or metaphorically. Don’t care. If it isn’t being pushed in any direction then it’s at rest. Or it’s at an equilibrium. I don’t want to call this the ideal or natural state. That suggests some moral superiority to one way of existing over another, and how do I know what’s right for your thing? I can tell you what it acts like. It’s your business whether it should. Anyway, your thing has an equilibrium.

Next term is the displacement. It’s how far your thing is from the equilibrium. If it’s really a block of wood on a spring, like it is in high school physics, this displacement is how far the spring is stretched out. In equations I’ll represent this as ‘x’ because I’m not going to go looking deep for letters for something like this. What value ‘x’ has will change with time. This is what makes it a physics problem. If we want to make clear that ‘x’ does depend on time we might write ‘x(t)’. We might go all the way and start at the top of the page with ‘x = x(t)’, just in case.

If ‘x’ is a positive number it means your thing is displaced in one direction. If ‘x’ is a negative number it was displaced in the opposite direction. By ‘one direction’ I mean ‘to the right, or else up’. By ‘the opposite direction’ I mean ‘to the left, or else down’. Yes, you can pick any direction you like but why are you making life harder for everyone? Unless there’s something compelling about the setup of your thing that makes another choice make sense just go along with what everyone else is doing. Apply your creativity and iconoclasm where it’ll make your life better instead.

Also, we only have to worry about one direction. This might surprise you. If you’ve played much with springs you might have noticed how they’re three-dimensional objects. You can set stuff swinging back and forth in two directions at once. That’s all right. We can describe a two-dimensional displacement as a displacement in one direction plus a displacement perpendicular to that. And if there’s no such thing as friction, they won’t interact. We can pretend they’re two problems that happen to be running on the same spring at the same time. So here I declare: we can ignore friction and pretend it doesn’t matter. We don’t have to deal with more than one direction at a time.

(It’s not only friction. There’s problems about how energy gets transmitted between ways the thing can oscillate. This is what causes what starts out as a big whack in one direction to turn into a middling little circular wobbling. That’s a higher level physics than I want to do right now. So here I declare: we can ignore that and pretend it doesn’t matter.)

Whether your thing is displaced or not it’s got some potential energy. This can be as large or as small as you like, down to some minimum when your thing is at equilibrium. The potential energy we represent as a number named ‘U’ because of good reasons that somebody surely had. The potential energy of a spring depends on the square of the displacement. We can write its value as ‘U = ½ k x^2‘. Here ‘k’ is a number known as the spring constant. It describes how strongly the spring reacts; the bigger ‘k’ is, the more any displacement’s met with a contrary force. It’ll be a positive number. ½ is that same old one-half that you know from ideas being half-baked or going-off being half-cocked.

Potential energy is great. If you can describe a physics problem with its energy you’re in good shape. It lets us bring physical intuition into understanding things. Imagine a bowl or a Habitrail-type ramp that’s got the cross-section of your potential energy. Drop a little marble into it. How the marble rolls? That’s what your thingy does in that potential energy.

Also we have mathematics. Calculus, particularly differential equations, lets us work out how the position of your thing will change. We need one more piece for this. That’s the momentum of your thing. Momentum is traditionally represented with the letter ‘p’. And now here’s how stuff moves when you know the potential energy ‘U’:

\frac{dp}{dt} = - \frac{\partial U}{\partial x}

Let me unpack that. \frac{dp}{dt} — also known as \frac{d}{dt}p if that looks better — is “the derivative of p with respect to t”. It means “how the value of the momentum changes as the time changes”. And that is equal to minus one times …

You might guess that \frac{\partial U}{\partial x} — also written as \frac{\partial}{\partial x} U — is some kind of derivative. The \partial looks kind of like a cursive d, after all. It’s known as the partial derivative, because it means we look at how ‘U’ changes as ‘x’ and nothing else at all changes. With the normal, ‘d’ style full derivative, we have to track how all the variables change as the ‘t’ we’re interested in changes. In this particular problem the difference doesn’t matter. But there are problems where it does matter and that’s why I’m careful about the symbols.

So now we fall back on how to take derivatives. This gives us the equation that describes how the physics of your thing on a spring works:

\frac{dp}{dt} = - k x

You’re maybe underwhelmed. This is because we haven’t got any idea how the momentum ‘p’ relates to the displacement ‘x’. Well, we do, because I know and if you’re still reading at this point you know full well what momentum is. But let me make it official. Momentum is, for this kind of thing, the mass ‘m’ of your thing times how its position is changing, which is \frac{dx}{dt} . The mass of your thing isn’t changing. If you’re going to let it change then we’re doing some screwy rocket problem and that’s a different article. So it’s easy to get the momentum out of that problem. We get instead the second derivative of the displacement with respect to time:

m\frac{d^2 x}{dt^2} = - kx

Fine, then. Does that tell us anything about what ‘x(t)’ is? Not yet, but I will now share with you one of the top secrets that only real mathematicians know. We will take a guess to what the answer probably is. Then we’ll see in what circumstances that answer could possibly be right. Does this seem ad hoc? Fine, so it’s ad hoc. Here is the secret of mathematicians:

It’s fine if you get your answer by any stupid method you like, including guessing and getting lucky, as long as you check that your answer is right.

Oh, sure, we’d rather you get an answer systematically, since a system might give us ideas how to find answers in new problems. But if all we want is an answer then, by definition, we don’t care where it came from. Anyway, we’re making a particular guess, one that’s very good for this sort of problem. Indeed, this guess is our system. A lot of guesses at solving differential equations use exactly this guess. Are you ready for my guess about what solves this? Because here it is.

We should expect that

x(t) = C e^{r t}

Here ‘C’ is some constant number, not yet known. And ‘r’ is some constant number, not yet known. ‘t’ is time. ‘e’ is that number 2.71828(etc) that always turns up in these problems. Why? Because its derivative is very easy to take, and if we have to take derivatives we want them to be easy to take. The first derivative of Ce^{rt} with respect to ‘t’ is r Ce^{rt} . The second derivative with respect to ‘t’ is r^2 Ce^{rt} . So here’s what we have:

m r^2 Ce^{rt} = - k Ce^{rt}

What we’d like to find are the values for ‘C’ and ‘r’ that make this equation true. It’s got to be true for every value of ‘t’, yes. But this is actually an easy equation to solve. Why? Because the C e^{rt} on the left side has to equal the C e^{rt} on the right side. As long as they’re not equal to zero and hey, what do you know? C e^{rt} can’t be zero unless ‘C’ is zero. So as long as ‘C’ is any number at all in the world except zero we can divide this ugly lump of symbols out of both sides. (If ‘C’ is zero, then this equation is 0 = 0 which is true enough, I guess.) What’s left?

m r^2 = -k

OK, so, we have no idea what ‘C’ is and we’re not going to have any. That’s all right. We’ll get it later. What we can get is ‘r’. You’ve probably got there already. There’s two possible answers:

r = \pm\sqrt{-\frac{k}{m}}

You might not like that. You remember that ‘k’ has to be positive, and if mass ‘m’ isn’t positive something’s screwed up. So what are we doing with the square root of a negative number? Yes, we’re getting imaginary numbers. Two imaginary numbers, in fact:

r = \imath \sqrt{\frac{k}{m}}, r = - \imath \sqrt{\frac{k}{m}}

Which is right? Both. In some combination, too. It’ll be a bit with that first ‘r’ plus a bit with that second ‘r’. In the differential equations trade this is called superposition. We’ll have information that tells us how much uses the first ‘r’ and how much uses the second.

You might still be upset. Hey, we’ve got these imaginary numbers here describing how a spring moves and while you might not be one of those high-price physicists you see all over the media you know springs aren’t imaginary. I’ve got a couple responses to that. Some are semantic. We only call these numbers “imaginary” because when we first noticed they were useful things we didn’t know what to make of them. The label is an arbitrary thing that doesn’t make any demands of the numbers. If we had called them, oh, “Cardanic numbers” instead would you be upset that you didn’t see any Cardanos in your springs?

My high-class semantic response is to ask in exactly what way is the “square root of minus one” any less imaginary than “three”? Can you give me a handful of three? No? Didn’t think so.

And then the practical response is: don’t worry. Exponentials raised to imaginary numbers do something amazing. They turn into sine waves. Well, sine and cosine waves. I’ll spare you just why. You can find it by looking at the first twelve or so posts of any pop mathematics blog and its article about how amazing Euler’s Formula is. Given that Euler published, like, 2,038 books and papers through his life and the fifty years after his death it took to clear the backlog you might think, “Euler had a lot of Formulas, right? Identities too?” Yes, he did, but you’ll know this one when you see it.

What’s important is that the displacement of your thing on a spring will be described by a function which looks like this:

x(t) = C_1 e^{\imath \sqrt{\frac{k}{m}} t} + C_2 e^{-\imath \sqrt{\frac{k}{m}} t}

for two constants, ‘C1‘ and ‘C2‘. These were the things we called ‘C’ back when we thought the answer might be Ce^{rt} ; there’s two of them because there’s two r’s. I give you my word this is equivalent to a formula like this, but you can make me show my work if you must:

x(t) = A cos\left(\sqrt{\frac{k}{m}} t\right) + B sin\left(\sqrt{\frac{k}{m}} t\right)

for some (other) constants ‘A’ and ‘B’. Cosine and sine are the old things you remember from learning about cosine and sine.
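
And since I offered: here’s sympy showing the work, given my claim that ‘A’ is C1 + C2 and ‘B’ is \imath times C1 - C2. That’s Euler’s Formula doing the job.

import sympy as sp

t, w = sp.symbols('t w', positive=True)   # w standing in for sqrt(k/m)
C1, C2 = sp.symbols('C1 C2')

exp_form = C1 * sp.exp(sp.I * w * t) + C2 * sp.exp(-sp.I * w * t)
trig_form = (C1 + C2) * sp.cos(w * t) + sp.I * (C1 - C2) * sp.sin(w * t)

# rewriting the exponentials as sines and cosines shows the difference is zero
print(sp.expand((exp_form - trig_form).rewrite(sp.cos)))   # prints 0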

OK, but what are ‘A’ and ‘B’?

Generically? We don’t care. Some numbers. Maybe zero. Maybe not. The pattern, how the displacement changes over time, will be the same whatever they are. It’ll be regular oscillation. At one time your thing will be as far from the equilibrium as it gets, and not moving toward or away from the center. At one time it’ll be back at the center and moving as fast as it can. At another time it’ll be as far away from the equilibrium as it gets, but on the other side. At another time it’ll be back at the equilibrium and moving as fast as it ever does, but the other way. How far is that maximum? What’s the fastest it travels?

The answer’s in how we started. If we start at the equilibrium without any kind of movement we’re never going to leave the equilibrium. We have to get nudged out of it. But what kind of nudge? There’s three ways you can nudge something out.

You can tug it out some and let it go from rest. This is the easiest: then ‘A’ is however big your tug was and ‘B’ is zero.

You can let it start from equilibrium but give it a good whack so it’s moving at some initial velocity. This is the next-easiest: ‘A’ is zero, and ‘B’ is … no, not the initial velocity. You need to look at what the velocity of your thing is at the start. That’s the first derivative:

\frac{dx}{dt} = -\sqrt{\frac{k}{m}}A sin\left(\sqrt{\frac{k}{m}} t\right) + \sqrt{\frac{k}{m}} B cos\left(\sqrt{\frac{k}{m}} t\right)

The start is when time is zero because we don’t need to be difficult. When ‘t’ is zero the above velocity is \sqrt{\frac{k}{m}} B . So that product has to be the initial velocity. That’s not much harder.

The third case is when you start with some displacement and some velocity. A combination of the two. Then, ugh. You have to figure out ‘A’ and ‘B’ that make both the position and the velocity work out. That’s the simultaneous solutions of equations, and not even hard equations. It’s more work is all. I’m interested in other stuff anyway.
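
Though if you did want the combined case, it’s hardly any work, since at the start the cosine term carries all the displacement and the sine term all the velocity. A sketch, with function and variable names of my own invention:

import math

def wobble_constants(x0, v0, k, m):
    # x(t) = A cos(w t) + B sin(w t); at t = 0 the position is all A
    # and the velocity is all w times B, so just invert those two facts
    w = math.sqrt(k / m)
    return x0, v0 / w             # A and B

print(wobble_constants(x0=1.0, v0=0.5, k=4.0, m=1.0))   # (1.0, 0.25)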

Because, yeah, the spring is going to wobble back and forth. What I’d like to know is how long it takes to get back where it started. How long does a cycle take? Look back at that position function, for example. That’s all we need.

x(t) = A cos\left(\sqrt{\frac{k}{m}} t\right) + B sin\left(\sqrt{\frac{k}{m}} t\right)

Sine and cosine functions are periodic. They have a period of 2π. This means if you take the thing inside the parentheses after a sine or a cosine and increase it — or decrease it — by 2π, you’ll get the same value out. What’s the first time that the displacement and the velocity will be the same as their starting values? If they started at t = 0, then, they’re going to be back there at a time ‘T’ which makes true the equation

\sqrt{\frac{k}{m}} T = 2\pi

And that’s going to be

T = 2\pi\sqrt{\frac{m}{k}}

A maybe-surprising thing about this: the period doesn’t depend at all on how big the displacement is. That’s true for perfect springs, which don’t exist in the real world. You knew that. Imagine taking a Junior Slinky from the dollar store and sticking a block of something on one end. Imagine stretching it out to 500,000 times the distance between the Earth and Jupiter and letting go. Would it act like a spring or would it break? Yeah, we know. It’s sad. Think of the animated-cartoon joy a spring like that would produce.

But this period not depending on the displacement is true for small enough displacements, in the real world. Or for good enough springs. Or things that work enough like springs. By “true” I mean “close enough to true”. We can give that a precise mathematical definition, which turns out to be what you would mean by “close enough” in everyday English. The difference is it’ll have Greek letters included.
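
You can watch that “close enough to true” in a simulation, at least for a perfect spring, where it’s exactly true. Here’s a crude sketch: march the spring equation forward in time from tugs of very different sizes and clock one full cycle. All the numbers are arbitrary.

import math

def simulated_period(amplitude, k=4.0, m=1.0, dt=1e-4):
    # tug the spring out to 'amplitude', release from rest, and step
    # forward in time until the velocity comes back around to zero
    x, v, t = amplitude, 0.0, 0.0
    gone_negative = False
    while True:
        v += -(k / m) * x * dt    # the spring law: dp/dt = -kx
        x += v * dt
        t += dt
        if v < 0:
            gone_negative = True
        elif gone_negative:       # velocity back up to zero: one full cycle
            return t

for amplitude in (0.1, 1.0, 10.0):
    print(amplitude, simulated_period(amplitude))
print(2 * math.pi * math.sqrt(1.0 / 4.0))    # the predicted T, about 3.1416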

So to sum up: suppose we have something that acts like a spring. Then we know qualitatively how it behaves. It oscillates back and forth in a sine wave around the equilibrium. Suppose we know what the spring constant ‘k’ is. Suppose we also know ‘m’, which represents the inertia of the thing. If it’s a real thing on a real spring it’s mass. Then we know quantitatively how it moves. It has a period, based on this spring constant and this mass. And we can say how big the oscillations are based on how big the starting displacement and velocity are. That’s everything I care about in a spring. At least until I get into something wild like several springs wired together, which I am not doing now and might never do.

And, as we’ll see when we get back to orbits, a lot of things work close enough to springs.

Excuses, But Classed Up Some


Afraid I’m behind on resuming Why Stuff Can Orbit, mostly as a result of a power outage yesterday. It wasn’t a major one, but it did reshuffle all the week’s chores to yesterday when we could be places that had power, and kept me from doing as much typing as I wanted. I’m going to be riding this excuse for weeks.

So instead, here, let me pass a link on to you.

It links to a post about the Legendre Transform, which is one of those cool advanced tools you get a couple years into a mathematics or physics major. It is, like many of these cool advanced tools, about solving differential equations. Differential equations turn up anytime the current state of something affects how it’s going to change, which is to say, anytime you’re looking at something not boring. It’s one of mathematics’s uses of “duals”, letting you swap between the function you’re interested in and what you know about how the function you’re interested in changes.

On the linked page, Jonathan Manton tries to present reasons behind the Legendre transform, in ways he likes better. It might not explain the idea in a way you like, especially if you haven’t worked with it before. But I find reading multiple attempts to explain an idea helpful. Even if one perspective doesn’t help, having a cluster of ideas often does.

Why Stuff Can Orbit, Part 8: Introducing Stability



I bet you imagined I’d forgot this series, or that I’d quietly dropped it. Not so. I’ve just been finding the energy for this again. 2017 has been an exhausting year.

With the last essay I finished the basic goal of “Why Stuff Can Orbit”. I’d described some of the basic stuff for central forces. These involve something — a planet, a mass on a spring, whatever — being pulled by the … center. Well, you can call anything the origin, the center of your coordinate system. Why put that anywhere but the place everything’s pulled towards? The key thing about a central force is it’s always in the direction of the center. It can be towards the center or away from the center, but it’s always going to be towards the center because the “away from” case is boring. (The thing gets pushed away from the center and goes far off, never to be seen again.) How strongly it’s pulled toward the center changes only with the distance from the center.

Since the force only changes with the distance between the thing and the center it’s easy to think this is a one-dimensional sort of problem. You only need the coordinate describing this distance. We call that ‘r’, because we end up finding orbits that are circles. Since the distance between the center of a circle and its edge is the radius, it would be a shame to use any other letter.

Forces are hard to work with. At least for a lot of stuff. We can represent central forces instead as potential energy. This is easier because potential energy doesn’t have any direction. It’s a lone number. When we can shift something complicated into one number chances are we’re doing well.

But we are describing something in space. Something in three-dimensional space, although it turns out we’ll only need two. We don’t care about stuff that plunges right into the center; that’s boring. We like stuff that loops around and around the center. Circular orbits. We’ve seen that second dimension in the angular momentum, which we represent as ‘L’ for reasons I dunno. I don’t think I’ve ever met anyone who did. Maybe it was the first letter that came to mind when someone influential wrote a good textbook. Angular momentum is a vector, but for these problems we don’t need to care about that. We can use an ordinary number to carry all the information we need about it.

We get that information from the potential energy plus a term that’s based on the square of the angular momentum divided by the square of the radius. This “effective potential energy” lets us find whether there can be a circular orbit at all, and where it’ll be. And it lets us get some other nice stuff like how the size of the orbit and the time it takes to complete an orbit relate to each other. See the earlier stuff for details. In short, though, we get an equilibrium, a circular orbit, whenever the effective potential energy is flat, neither rising nor falling. That happens when the effective potential energy changes from rising to falling, or changes from falling to rising. Well, if it isn’t rising and if it isn’t falling, what else can it be doing? It only does this for an infinitesimal moment, but that’s all we need. It also happens when the effective potential energy is flat for a while, but that like never happens.

Where I want to go next is into closed orbits. That is, as the planet orbits a sun (or whatever it is goes around whatever it’s going around), does it come back around to exactly where it started? Moving with the same speed in the same direction? That is, does the thing orbit like a planet does?

(Planets don’t orbit like this. When you have three, or more, things in the universe the mathematics of orbits gets way too complicated to do exactly. But this is the thing they’re approximating, we hope, well.)

To get there I’ll have to put back a second dimension. Sorry. Won’t need a third, though. That’ll get named θ because that’s our first choice for an angle. And it makes too much sense to describe a planet’s position as its distance from the center and the angle it makes with respect to some reference line. Which reference line? Whatever works for you. It’s like measuring longitude. We could measure degrees east and west of some point other than Greenwich as well, and as correctly, as we do. We use the one we use because it was convenient.

Along the way to closed orbits I have to talk about stability. There are many kinds of mathematical stability. My favorite is called Lyapunov Stability, because it’s such a mellifluous sound. They all circle around the same concept. It’s what you’d imagine from how we use the word in English. Start with an equilibrium, a system that isn’t changing. Give it a nudge. This disrupts it in some way. Does the disruption stay bounded? That is, does the thing still look somewhat like it did before? Or does the disruption grow so crazy big we have no idea what it’ll ever look like again? (A small nudge, by the way. You can break anything with a big enough nudge; that’s not interesting. It’s whether you can break it with a small nudge that we’d like to know.)

One of the ways we can study this is by looking at the effective potential energy. By its shape we can say whether a central-force equilibrium is stable or not. It’s easy, too, as we’ve got this set up. (Warning before you go passing yourself off as a mathematical physicist: it is not always easy!) Look at the effective potential energy versus the radius. If it has a part that looks like a bowl, cupped upward, it’s got a stable equilibrium. If it doesn’t, it doesn’t have a stable equilibrium. If you aren’t sure, imagine the potential energy was a track, like for a toy car. And imagine you dropped a marble on it. If you give the marble a nudge, does it roll to a stop? If it does, stable. If it doesn’t, unstable.

The sort of wiggly shape that serves as every mathematical physicist's generic potential energy curve to show off the different kinds of equilibrium.
A phony effective potential energy. Most are a lot less exciting than this; see some of the earlier pieces in this series. But some weird-shaped functions like this were toyed with by physicists in the 19th century who were hoping to understand chemistry. Why should gases behave differently at different temperatures? Why should some combinations of elements make new compounds while others don’t? We needed statistical mechanics and quantum mechanics to explain those, but we couldn’t get there without a lot of attempts and failures at explaining it with potential energies and classical mechanics.
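
The marble test translates into a few lines of code, too. Here’s a sketch using numpy for a gravity-like potential energy; the particular numbers are all made up.

import numpy as np

L_ang, m = 1.0, 1.0              # angular momentum and mass, chosen arbitrarily

def V_eff(r):
    # a gravity-like potential, V(r) = -1/r, plus the angular momentum term
    return -1.0 / r + L_ang**2 / (2 * m * r**2)

r = np.linspace(0.1, 10.0, 100001)
slope = np.gradient(V_eff(r), r)     # the first derivative, taken numerically
i = np.argmin(np.abs(slope))         # where the curve is flattest
curvature = np.gradient(slope, r)
print(r[i], curvature[i] > 0)        # the equilibrium radius (about 1), and True: stable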

Stable is more interesting. We look at cases where there is this little bowl cupped upward. If we have a tiny nudge we only have to look at a small part of that cup. And that cup is going to look an awful lot like a parabola. If you don’t remember what a parabola is, think back to algebra class. Remember that curvy shape that was the only thing drawn on the board when you were dealing with the quadratic formula? That shape is a parabola.

Who cares about parabolas? We care because we know something good about them. In this context, anyway. The potential energy for a mass on a spring is also a parabola. And we know everything there is to know about masses on springs. Seriously. You’d think it was all physics was about from like 1678 through 1859. That’s because it’s something calculus lets us solve exactly. We don’t need books of complicated integrals or computers to do the work for us.

So here’s what we do. It’s something I did not get clearly when I was first introduced to these concepts. This left me badly confused and feeling lost in my first physics and differential equations courses. We are taking our original physics problem and building a new problem based on it. This new problem looks at how big our nudge away from the equilibrium is. How big the nudge is, how fast it grows, how it changes in time will follow rules. Those rules will look a lot like those for a mass on a spring. We started out with a radius that gives us a perfectly circular orbit. Now we get a secondary problem about how the difference between the nudged and the circular orbit changes in time.

That secondary problem has the same shape, the same equations, as a mass on a spring does. A mass on a spring is a central force problem. All the tools we had for studying central-force problems are still available. There is a new central-force problem, hidden within our original one. Here the “center” is the equilibrium we’re nudged around. It will let us answer a new set of questions.

Reading the Comics, March 6, 2017: Blackboards Edition


I can’t say there’s a compelling theme to the first five mathematically-themed comics of last week. Screens full of mathematics turned up in a couple of them, so I’ll run with that. There were also just enough strips that I’m splitting the week again. It seems fair to me and gives me something to remember Wednesday night that I have to rush to complete.

Jimmy Hatlo’s Little Iodine for the 1st of January, 1956 was rerun on the 5th of March. The setup demands Little Iodine pester her father for help with the “hard homework” and of course it’s arithmetic that gets to play hard work. It’s a word problem in terms of who has how many apples, as you might figure. Don’t worry about Iodine’s father getting fired; Little Iodine gets her father fired every week. It’s their schtick.

Little Iodine wakes her father early after a night at the lodge. 'You got to help me with my [hard] homework.' 'Ooh! My head! Wha'?' 'The first one is, if John has twice as many apples as Tom and Sue put together ... ' 'Huh? kay! Go on, let's get this over with.' They work through to morning. Iodine's teacher sees her asleep in class and demands she bring 'a note from your parents as to why you sleep in school instead of at home!' She goes to her father's office where her father's boss is saying, 'Well, Tremblechin, wake up! The hobo hotel is three blocks south and PS: DON'T COME BACK!'
Jimmy Hatlo’s Little Iodine for the 1st of January, 1956. I guess class started right back up the 2nd, but it would’ve avoided so much trouble if she’d done her homework sometime during the winter break. That said, I never did.

Dana Simpson’s Phoebe and her Unicorn for the 5th mentions the “most remarkable of unicorn confections”, a sugar dodecahedron. Dodecahedrons have long captured human imaginations, as one of the Platonic Solids. The Platonic Solids are one of the ways we can make a solid-geometry analogue to a regular polygon. Phoebe’s other mentioned shape of cubes is another of the Platonic Solids, but that one’s common enough to encourage no sense of mystery or wonder. The cube’s the only one of the Platonic Solids that will fill space, though, that you can put into stacks that don’t leave gaps between them. Sugar cubes, Wikipedia tells me, have been made only since the 19th century; the Moravian sugar factory director Jakub Kryštof Rad got a patent for cutting block sugar into uniform pieces in 1843. I can’t dispute the fun of “dodecahedron” as a word to say. Many solid-geometric shapes have names that are merely descriptive, but which are rendered with Greek or Latin syllables so as to sound magical.

Bud Grace’s Piranha Club for the 6th started a sequence in which the Future Disgraced Former President needs the most brilliant person in the world, Bud Grace. A word balloon full of mathematics is used as a symbol for this genius. I feel compelled to point out Bud Grace was a physics major. But while Grace could as easily have used something from the physics department to show his deep thinking abilities, that would all but certainly have been rendered as equation and graphs, the stuff of mathematics again.

At the White Supremacist House: 'I have the smartest people I could find to help me run this soon-to-be-great-again country, but I'm worried that they're NOT SMART ENOUGH! I want the WORLD'S SMARTEST GENIUS to be my SPECIAL ADVISOR!' Meanwhile, cartoonist Bud Grace thinks of stuff like A = pi*r^2 and a^2 + b^2 = c^2 and tries working out 241 times 635, 'carry the one ... hmmmm ... '
Bud Grace’s Piranha Club for the 6th of March, 2017. 241 times 635 is 153,035 by the way. I wouldn’t work that out in my head if I needed the number. I might work out an estimate of how big it was, in which case I’d do this: 241 is about 250, which is one-quarter of a thousand. One-quarter of 635 is something like 150, which times a thousand is 150,000. If I needed it exactly I’d get a calculator. Unless I just needed something to occupy my mind without having any particular emotional charge.

Scott Meyer’s Basic Instructions rerun for the 6th is aptly titled, “How To Unify Newtonian Physics And Quantum Mechanics”. Meyer’s advice is not bad, really, although generic enough it applies to any attempts to reconcile two different models of a phenomenon. Also there’s not particularly a problem reconciling Newtonian physics with quantum mechanics. It’s general relativity and quantum mechanics that are so hard to reconcile.

Still, Basic Instructions is about how you can do a thing, or learn to do a thing. It’s not about how to allow anything to be done for the first time. And it’s true that, per quantum mechanics, we can’t predict exactly what any one particle will do at any time. We can say what possible things it might do and how relatively probable they are. But big stuff, the stuff for which Newtonian physics is relevant, involves so many particles that the unpredictability becomes too small to notice. We can see this as the Law of Large Numbers. That’s the probability rule that tells us we can’t predict any coin flip, but we know that a million fair tosses of a coin will not turn up 800,000 tails. There’s more to it than that (there’s always more to it), but that’s a starting point.
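
That rule is easy to watch in action, too. A sketch, with toss counts and the seed chosen arbitrarily:

import random

random.seed(2038)                     # any seed will do; this one's for reproducibility
for tosses in (100, 10_000, 1_000_000):
    tails = sum(random.random() < 0.5 for _ in range(tosses))
    print(tosses, tails / tosses)     # the share of tails closes in on one-half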

Michael Fry’s Committed rerun for the 6th features Albert Einstein as the icon of genius. Natural enough. And it reinforces this with the blackboard full of mathematics. I’m not sure if that blackboard note of “E = md^3” is supposed to be a reference to the famous Far Side panel of Einstein hearing the maid talk about everything being squared away. I’ll take it as such.