## Why Stuff Can Orbit, Part 10: Where Time Comes From And How It Changes Things

Previously:

And again my thanks to Thomas K Dye, creator of the web comic Newshounds, for the banner art. He has a Patreon to support his creative habit.

In the last installment I introduced perturbations. These are orbits that are a little off from the circles that make equilibriums. And they introduce something that’s been lurking, unnoticed, in all the work done before. That’s time.

See, how do we know time exists? … Well, we feel it, so, it’s hard for us not to notice time exists. Let me rephrase it then, and put it in contemporary technology terms. Suppose you’re looking at an animated GIF. How do you know it’s started animating? Or that it hasn’t stalled out on some frame?

If the picture changes, then you know. It has to be going. But if it doesn’t change? … Maybe it’s stalled out. Maybe it hasn’t. You don’t know. You know there’s time when you can see change. And that’s one of the little practical insights of physics. You can build an understanding of special relativity by thinking hard about that. Also think about the observation that the speed of light (in vacuum) doesn’t change.

When something physical’s in equilibrium, it isn’t changing. That’s how we found equilibriums to start with. And that means we stop keeping track of time. It’s one more thing to keep track of that doesn’t tell us anything new. Who needs it?

For the planet orbiting a sun, in a perfect circle, or its other little variations, we do still need time. At least some. How far the planet is from the sun doesn’t change, no, but where it is on the orbit will change. We can track where it is by setting some reference point. Where the planet is at the start of our problem. How big is the angle between where the planet is now, the sun (the center of our problem’s universe), and that origin point? That will change over time.

But it’ll change in a boring way. The angle will keep increasing in magnitude at a constant speed. Suppose it takes five time units for the angle to grow from zero degrees to ten degrees. Then it’ll take ten time units for the angle to grow from zero to twenty degrees. It’ll take twenty time units for the angle to grow from zero to forty degrees. Nice to know if you want to know when the planet is going to be at a particular spot, and how long it’ll take to get back to the same spot. At this rate it’ll be 180 time units before the angle grows to 360 degrees, which looks the same as zero degrees. But it’s not anything interesting happening.

We’ll label this sort of change, where time passes, yeah, but it’s too dull to notice as a “dynamic equilibrium”. There’s change, but it’s so steady and predictable it’s not all that exciting. And I’d set up the circular orbits so that we didn’t even have to notice it. If the radius of the planet’s orbit doesn’t change, then the rate at which its apsidal angle changes, its “angular velocity”, also doesn’t change.

Now, with perturbations, the distance between the planet and the center of the universe will change in time. That was the stuff at the end of the last installment. But also the apsidal angle is going to change. I’ve used ‘r(t)’ to represent the radial distance between the planet and the sun before, and to note that what value it is depends on the time. I need some more symbols.

There’s two popular symbols to use for angles. Both are Greek letters because, I dunno, they’ve always been. (Florian Cajori’s A History of Mathematical Notations doesn’t seem to have anything. And when my default go-to for explaining mathematicians’ choices tells me nothing, what can I do? Look at Wikipedia? Sure, but that doesn’t enlighten me either.) One is to use theta, θ. The other is to use phi, φ. Both are good, popular choices, and in three-dimensional problems we’ll often need both. Here we don’t need both. The orbit of something moving under a central force might be complicated, but it’s going to be in a single plane of movement. The conservation of angular momentum gives us that. It’s not the last thing angular momentum will give us. The orbit might happen not to be in a horizontal plane. But that’s all right. We can tilt our heads until it is.

So I’ll reach deep into the universe of symbols for angles and call on θ for the apsidal angle. θ will change with time, so, ‘θ(t)’ is the angular counterpart to ‘r(t)’.

I’d said before the apsidal angle is the angle made between the planet, the center of the universe, and some reference point. What is my reference point? I dunno. It’s wherever θ(0) is, that is, where the planet is when my time ‘t’ is zero. There’s probably a bootstrapping fallacy here. I’ll cover it up by saying, you know, the reference point doesn’t matter. It’s like the choice of prime meridian. We have to have one, but we can pick whatever one is convenient. So why not pick one that gives us the nice little identity that ‘θ(0) = 0’? If you don’t buy that and insist I pick a reference point first, fine, go ahead. But you know what? The labels on my time axis are arbitrary. There’s no difference in the way physics works whether ‘t’ is ‘0’ or ‘2017’ or ‘21350’. (At least as long as I adjust any time-dependent forces, which there aren’t here.) So we get back to ‘θ(0) = 0’.

For a circular orbit, the dynamic equilibrium case, these are pretty boring, but at least they’re easy to write. They’re:

$r(t) = a \\ \theta(t) = \omega t$

Here ‘a’ is the radius of the circular orbit. And ω is a constant number, the angular velocity. It’s how much a bit of time changes the apsidal angle. And this set of equations is pretty dull. You can see why it barely rates a mention.
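Those two equations are simple enough to play with directly. Here’s a quick sketch, with made-up values for ‘a’ and ‘ω’ (illustrative numbers only, not from any real orbit):

```python
import math

a = 1.0        # radius of the circular orbit (arbitrary units)
omega = 0.5    # angular velocity (radians per time unit, made up)

def position(t):
    """Cartesian position of the planet at time t for the circular orbit:
    r(t) = a, theta(t) = omega * t."""
    r = a                  # the radius never changes
    theta = omega * t      # the angle grows at a constant rate
    return (r * math.cos(theta), r * math.sin(theta))

# The planet starts at the reference point theta(0) = 0 ...
print(position(0))   # (1.0, 0.0)
# ... and comes back to it after one full period, 2*pi/omega time units.
print(position(2 * math.pi / omega))
```

The dullness of the dynamic equilibrium shows up in the code: nothing in `position` depends on anything but a steady multiplication by `t`.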

The perturbed case gets more interesting. We know how ‘r(t)’ looks. We worked that out last time. It’s some function like:

$r(t) = a + A \cos\left(\sqrt{\frac{k}{m}} t\right) + B \sin\left(\sqrt{\frac{k}{m}} t\right)$

Here ‘A’ and ‘B’ are some numbers telling us how big the perturbation is, and ‘m’ is the mass of the planet, and ‘k’ is something related to how strong the central force is. And ‘a’ is that radius of the circular orbit, the thing we’re perturbed around.
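To see that this really is a small rocking motion around the circular orbit, here’s a sketch with made-up values for ‘a’, ‘A’, ‘B’, ‘k’, and ‘m’ (purely illustrative, not from any particular problem):

```python
import math

# Illustrative values only; nothing here comes from a real orbit.
a, A, B = 1.0, 0.05, 0.02   # equilibrium radius and perturbation sizes
k, m = 2.0, 1.0             # effective spring constant and planet mass

def r(t):
    """Perturbed radial distance: an oscillation of size A, B around a,
    at the natural frequency sqrt(k/m)."""
    w = math.sqrt(k / m)
    return a + A * math.cos(w * t) + B * math.sin(w * t)

# The radius rocks back and forth around a, never drifting away:
samples = [r(0.5 * i) for i in range(40)]
print(min(samples), max(samples))  # both stay within sqrt(A^2 + B^2) of a
```

However long you sample, the distance never strays farther from ‘a’ than the combined amplitude $\sqrt{A^2 + B^2}$, which is what a stable equilibrium promises.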

What about ‘θ(t)’? How’s that look? … We don’t seem to have a lot to go on. We could go back to Newton and all that force equalling the change in momentum over time stuff. We can always do that. It’s tedious, though. We have something better. It’s another gift from the conservation of angular momentum. When we can turn a forces-over-time problem into a conservation-of-something problem we’re usually doing the right thing. The conservation-of-something is typically a lot easier to set up and to track. We’ve used it in the conservation of energy, before, and we’ll use it again. The conservation of ordinary, ‘linear’, momentum helps with other problems, though I’ll grant not this one. The conservation of angular momentum will help us here.

So what is angular momentum? … It’s something about ice skaters twirling around and your high school physics teacher sitting on a bar stool spinning a bike wheel. All right. But it’s also a quantity. We can get some idea of it by looking at the formula for calculating linear momentum:

$\vec{p} = m\vec{v}$

The linear momentum of a thing is its inertia times its velocity. This is assuming the thing isn’t moving so fast that we have to notice relativity. Also if it isn’t, like, an electric or a magnetic field, so we have to notice it’s not precisely a thing. Also if it isn’t a massless particle like a photon, because see previous sentence. I’m talking about ordinary things like planets and blocks of wood on springs and stuff. The inertia, ‘m’, is rather happily the same thing as its mass. The velocity is how fast something is travelling and which direction it’s going in.

Angular momentum, meanwhile, we calculate with this radically different-looking formula:

$\vec{L} = I\vec{\omega}$

Here, again, we’re talking about stuff that isn’t moving so fast we have to notice relativity. That isn’t electric or magnetic fields. That isn’t massless particles. And so on. Here ‘I’ is the “moment of inertia” and $\vec{\omega}$ is the angular velocity. The angular velocity is a vector that describes for us how fast the spinning is and what direction the axis the thing spins around points. The moment of inertia describes how easy or hard it is to make the thing spin around each axis. It’s a tensor because real stuff can be easier to spin in some directions than in others. If you’re not sure that’s actually so, try tossing some stuff in the air so it spins in each of the three major directions. You’ll see.

We’re fortunate. For central force problems the moment of inertia is easy to calculate. We don’t need the tensor stuff. And we don’t even need to notice that the angular velocity is a vector. We know what axis the planet’s rotating around; it’s the one pointing out of the plane of motion. We can focus on the size of the angular velocity, the number ‘ω’. See how they’re different, what with one not having an arrow over the symbol. The arrow-less version is easier. For a planet, or other object, with mass ‘m’ that’s orbiting a distance ‘r’ from the sun, the moment of inertia is:

$I = mr^2$

So we know this number is going to be constant:

$L = mr^2\omega$

The mass ‘m’ doesn’t change. We’re not doing those kinds of problem. So however ‘r’ changes in time, the angular velocity ‘ω’ has to change with it, so that this product stays constant. The angular velocity is how the apsidal angle ‘θ’ changes over time. So since we know ‘L’ doesn’t change, and ‘m’ doesn’t change, then the way ‘r’ changes must tell us something about how ‘θ’ changes. We’ll get into that next time.
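That constant product is easy to watch in a few lines. A sketch, with a mass and angular momentum chosen purely for illustration:

```python
# Conservation of angular momentum: L = m * r^2 * omega is constant,
# so omega must change as r does. Illustrative numbers only.
m = 1.0   # mass of the planet
L = 2.0   # angular momentum, fixed for the whole orbit

def omega(r):
    """Angular velocity forced by conservation: omega = L / (m r^2)."""
    return L / (m * r * r)

# Closer in, the planet sweeps around faster; farther out, slower.
print(omega(1.0))  # 2.0
print(omega(2.0))  # 0.5

# Whatever r does, the product m r^2 omega stays put:
for r in (0.5, 1.0, 1.7, 3.0):
    assert abs(m * r**2 * omega(r) - L) < 1e-12
```

This is the ice-skater effect in one line: shrink `r` and `omega` has to grow to compensate.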

## Reading the Comics, June 24, 2017: Saturday Morning Breakfast Cereal Edition

Somehow this is not the title of every Reading The Comics review! But it is for this post and we’ll explore why below.

Dave Coverly’s Speed Bump for the 18th is not exactly an anthropomorphic-numerals joke. It is about making symbols manifest in the real world, at least. The greater-than and less-than signs as we know them were created by the English mathematician Thomas Harriot, and introduced to the world in his posthumous Artis Analyticae Praxis (1631). He also had an idea of putting a . between the numerals of an expression and the letters multiplied by them, for example, “4.x” to mean four times x. We mostly do without that now, taking multiplication as assumed if two meaningful quantities are put next to one another. But we will use, now, a vertically-centered dot to separate terms multiplied together when that helps our organization. The equals sign we trace to the 16th century mathematician Robert Recorde, whose 1557 Whetstone of Witte uses long but recognizable equals signs. The = sign went into hibernation after that, though, until the 17th century, and it took some time to become widely used. So it often is with symbols.

Ted Shearer’s Quincy for the 25th of April, 1978 and rerun the 19th of June, 2017. The question does make me wonder how far Mr Tanner was going to go with this. The origins of zero and one are great stuff for class discussion. Two, also. But what about three? Five? Ten? Twelve? Minus one? Irrational numbers, if the class has got up to them? How many students are going to be called on to talk about number origins? And how many truly different stories are there?

Ted Shearer’s Quincy for the 25th of April, 1978 and rerun the 19th of June, starts from the history of zero. It’s worth noting there are a couple of threads woven together in the concept of zero. One is the idea of “nothing”, which we’ve had just forever. I mean, the idea that there isn’t something to work with. Another is the idea of the … well, the additive identity, there being some number that’s one less than one and two less than two. That you can add to anything without changing the thing. And then there’s symbols. There’s the placeholder for “there are no examples of this quantity here”. There’s the denotation of … well, the additive identity. All these things are zeroes, and if you listen closely, they are not quite the same thing. Which is not weird. Most words mean a collection of several concepts. We’re lucky the concepts we mean by “zero” are so compatible in meaning. Think of the poor person trying to understand the word “bear”, or “cleave”.

John Deering’s Strange Brew for the 19th is a “New Math” joke, fittingly done with cavemen. Well, numerals were new things once. Amusing to me is that — while I’m not an expert — in quite a few cultures the symbol for “one” was pretty much the same thing, a single slash mark. It’s hard not to suppose that numbers started out with simple tallies, and the first thing to tally might get dressed up a bit with serifs or such but is, at heart, the same thing you’d get jabbing a sharp thing into a soft rock.

Guy Gilchrist’s Today’s Dogg for the 19th I’m sure is a rerun and I think I’ve featured it here before. So be it. It’s silly symbol-play and dog arithmetic. It’s a comic strip about how dogs are cute; embrace it or skip it.

Zach Weinersmith’s Saturday Morning Breakfast Cereal is properly speaking reruns when it appears on GoComics.com. For whatever reason Weinersmith ran a patch of mathematics strips there this past week. So let me bundle all that up. On the 19th he did a joke mathematicians get a lot, about how the only small talk anyone has about mathematics is how they hated mathematics. I’m not sure mathematicians have it any better than any other teachers, though. Have you ever known someone to say, “My high school gym class gave me a greater appreciation of the world”? Or talk about how grade school history opened their eyes to the wonders of the subject? It’s a sad thing. But there are a lot of things keeping teachers from making students feel joy in their subjects.

For the 21st Weinersmith makes a statistician joke. I can wrangle some actual mathematics out of an otherwise correctly-formed joke. How do we ever know that something is true? Well, we gather evidence. But how do we know the evidence is relevant? Even if the evidence is relevant, how do we know we’ve interpreted it correctly? Even if we have interpreted it correctly, how do we know that it shows what we want to know? Statisticians become very familiar with hypothesis testing, which amounts to the question, “does this evidence indicate that some condition is implausibly unlikely?” And they can do great work with that. But “implausibly unlikely” is not the same thing as “false”. A person knowledgeable enough and honest turns out to have few things that can be said for certain.

The June 23rd strip I’ve seen go around Mathematics Twitter several times, as in the tweet above, about the ways in which mathematical literacy would destroy modern society. It’s a cute and flattering portrait of mathematics’ power, probably why mathematicians like passing it back and forth. But … well, how would “logic” keep people from being fooled by scams? What makes a scam work is that the premise seems logical. And real-world problems — as opposed to logic-class problems — are rarely completely resolvable by deductive logic. There have to be the assumptions, the logical gaps, and the room for humbuggery that allow hoaxes and scams to slip through. And does anyone need a logic class to not “buy products that do nothing”? And what is “nothing”? I have more keychains than I have keys to chain, even if we allow for emergencies and reasonable unexpected extra needs. This doesn’t stop my buying keychains as souvenirs. Does a Penn Central-logo keychain “do nothing” merely because it sits on the windowsill rather than hold any sort of key? If so, was my love foolish to buy it as a present? Granted that buying a lottery ticket is a foolish use of money; is my life any worse for buying that than, say, a peanut butter cup that I won’t remember having eaten a week afterwards? As for credit cards — it’s not clear to me that people max out their credit cards because they don’t understand they will have to pay it back with interest. My experience has been people max out their credit cards because they have things they must pay for and no alternative but going further into debt. That people need more money is a problem of society, yes, but it’s not clear to me that a failure to understand differential equations is at the heart of it. (Also, really, differential equations are overkill to understand credit card debt. A calculator with a repeat-the-last-operation feature and ten minutes to play is enough.)
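On that last parenthetical, the repeat-the-last-operation point can be made concrete. A sketch with a made-up starting balance and interest rate (nothing here describes any real card):

```python
# Repeating one operation -- multiply by 1.02 -- is all it takes to see
# how a credit card balance grows. Figures are made up for illustration.
balance = 1000.00          # starting debt, dollars
monthly_rate = 0.02        # 2% interest per month, assumed

for month in range(1, 13):
    balance *= 1 + monthly_rate   # the repeated "last operation"

print(round(balance, 2))   # 1268.24 after one year
```

Twelve presses of the repeat key and the 2%-a-month card has added better than a quarter to the debt; no differential equations required.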

## Why Shouldn’t We Talk About Mathematics In The Deli Line?

You maybe saw this picture going around your social media a couple days ago. I did, but I’m connected to a lot of mathematics people who were naturally interested. Everyone who did see it was speculating about what the story behind it was. Thanks to the CBC, now we know.

So it’s the most obvious if least excitingly gossip-worthy explanation: this Middletown, Connecticut deli is close to the Wesleyan mathematics department’s office and at least one mathematician was too engrossed talking about the subject to actually place an order. We’ve all been stuck behind people like that. It’s enough to make you wonder whether the Cole slaw there is actually that good. (Don’t know, I haven’t been there, not sure I can dispatch my agent in Groton to check just for this.) The sign’s basically a loving joke, which is a relief. Could be any group of people who won’t stop talking about a thing they enjoy, really. And who have a good joking relationship with the deli-owner.

The CBC’s interview gets into whether mathematicians have a sense of humor. I certainly think we do. I think the habit of forming proofs builds a habit of making a wild assumption and seeing where that gets you, often to a contradiction. And it’s hard not to see that the same skills that will let you go from, say, “suppose every angle can be trisected” to a nonsensical conclusion will also let you suppose something whimsical and get to a silly result.

Dr Anna Haensch, who made the sign kind-of famous-ish, gave as an example of a quick little mathematician’s joke going to the board and declaring “let L be a group”. I should say that’s not a riotously funny mathematician’s joke, not the way (like) talking about things purple and commutative is. It’s just a little passing quip, like if you showed a map of New Jersey and labelled the big city just across the Hudson River as “East Hoboken” or something.

But L would be a slightly disjoint name for a group. Not wrong, just odd, unless the context of the problem gave us good reason for the name. Names of stuff are labels, and so are arbitrary and may be anything. But we use them to carry information. If we know something is a group then we know something about the way it behaves. So if in a dense mass of symbols we see that something is given one of the standard names for groups — G, H, maybe G or H with a subscript or a ‘ or * on top of it — then, however lost we might be, we know this thing is a group and should have these properties.

It’s a bit of doing what science fiction fans term “incluing”. That’s giving someone the necessary backstory without drawing attention to the fact we’re doing it. To avoid G or H would be like avoiding “Jane [or John] Doe” as the name for a specific but unidentified person. You can do it, but it seems off.

## Great Stuff By David Hilbert That I’ll Never Finish Reading

And then this came across my Twitter feed (@Nebusj, for the record):

It is to Project Gutenberg’s edition of David Hilbert’s The Foundations Of Geometry. David Hilbert you may know as the guy who gave us 20th Century mathematics. He had help. But he worked hard on the axiomatizing of mathematics, getting rid of intuition and relying on nothing but logical deduction for all mathematical results. “Didn’t we do that already, like, with the Ancient Greeks and all?” you may ask. We aimed for that since the Ancient Greeks, yes, but it’s really hard to do. The Foundations Of Geometry is an example of Hilbert’s work of looking very critically at all of the things we assume, and all of the things that we need, and all of the things we need defined, and trying to get at it all.

Hilbert gave much of 20th Century Mathematics its shape with a list presented at the 1900 International Congress of Mathematicians in Paris. This formed a great list of important unsolved problems. Some of them have been solved since. Some are still unsolved. Some have been proven unsolvable. Each of these results is very interesting. This tells you something about how great his questions were; only a great question is interesting however it turns out.

The Project Gutenberg edition of The Foundations Of Geometry is, mercifully, not a stitched-together PDF version of an ancient library copy. It’s a PDF compiled by, if I’m reading the credits correctly, Joshua Hutchinson, Roger Frank, and David Starner. The text was copied into LaTeX, an incredibly powerful and standard mathematics-writing tool, and compiled into something that … looks a little bit like every mathematics paper and thesis you’ll read these days. It’s a bit odd for a 120-year-old text to look quite like that. But it does mean the formatting looks familiar, if you’re the sort of person who reads mathematics regularly.

(There are a couple lines that read weird to me, but I can’t judge whether that owes to a typo in the preparation of the document or just that the translation from Hilbert’s original German to English produced odd effects. I’m thinking here of Axiom I, 2, shown on page 2, which I understand but feel weird about. Roll with it.)

## Reading the Comics, June 17, 2017: Icons Of Mathematics Edition

Comic Strip Master Command just barely missed being busy enough for me to split the week’s edition. Fine for them, I suppose, although it means I’m going to have to scramble together something for the Tuesday or the Thursday posting slot. Ah well. As befits the comics, there’s a fair bit of mathematics as an icon in the past week’s selections. So let’s discuss.

Mark Anderson’s Andertoons for the 11th is our Mark Anderson’s Andertoons for this essay. Kind of a relief to have that in right away. And while the cartoon shows a real disaster of a student at the chalkboard, there is some truth to the caption. Ruling out plausible-looking wrong answers is progress, usually. So is coming up with plausible-looking answers to work out whether they’re right or wrong. The troubling part here, I’d say, is that the kid came up with pretty poor guesses about what the answer might be. He ought to be able to guess that it’s got to be an odd number, and has to be less than 10, and really ought to be less than 7. If you spot that then you can’t make more than two wrong guesses.

Patrick J Marrin’s Francis for the 12th starts with what sounds like a logical paradox, about whether the Pope could make an infallibly true statement that he was not infallible. Really it sounds like a bit of nonsense. But the limits of what we can know about a logical system will often involve questions of this form. We ask whether something can prove whether it is provable, for example, and come up with a rigorous answer. So that’s the mathematical content which justifies my including this strip here.

Niklas Eriksson’s Carpe Diem for the 13th of June, 2017. Yes, yes, it’s easy to get people excited for the Revolution, but it’ll come to a halt when someone asks about how they get the groceries afterwards.

Niklas Eriksson’s Carpe Diem for the 13th is a traditional use of the blackboard full of mathematics as symbolic of intelligence. Of course ‘E = mc²’ gets in there. I’m surprised that both π and 3.14 do, too, for as little as we see on the board.

Mark Anderson’s Andertoons for the 14th is a nice bit of reassurance. Maybe the cartoonist was worried this would be a split-week edition. The kid seems to be the same one as the 11th, but the teacher looks different. Anyway there’s a lot you can tell about shapes from their perimeter alone. The one which most startles me comes up in calculus: by doing the right calculation about the lengths and directions of the edge of a shape you can tell how much area is inside the shape. There’s a lot of stuff in this field — multivariable calculus — that’s about swapping between “stuff you know about the boundary of a shape” and “stuff you know about the interior of the shape”. And finding area from tracing the boundary is one of them. It’s still glorious.
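That boundary-to-area trade has a discrete cousin that fits in a few lines: the shoelace formula, which computes a polygon’s area from nothing but a walk around its edge. A small sketch:

```python
def shoelace_area(vertices):
    """Area of a simple polygon from its boundary alone (shoelace formula),
    walking vertex to vertex around the edge."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]   # wrap around to close the boundary
        total += x1 * y2 - x2 * y1
    return abs(total) / 2

# A 3-by-2 rectangle: tracing the edge recovers the area, 6.
print(shoelace_area([(0, 0), (3, 0), (3, 2), (0, 2)]))  # 6.0
```

The continuous version of this — Green’s theorem — is the multivariable-calculus result the paragraph above is gesturing at.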

Samson’s Dark Side Of The Horse for the 14th is a counting-sheep joke and a Pi Day joke. I suspect the digits of π would be horrible for lulling one to sleep, though. They lack the just-enough-order that something needs for a semiconscious mind to drift off. Horace would probably be better off working out Collatz sequences.
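If you do want to count along with Horace, the Collatz rule is simple enough to doze off to: halve an even number, triple-and-add-one an odd one, repeat until you reach 1. A quick sketch:

```python
def collatz(n):
    """The Collatz sequence starting at n: halve if even, 3n+1 if odd,
    stopping when the sequence reaches 1."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```

Whether every starting number eventually reaches 1 is, famously, still an open problem, which may or may not help with the insomnia.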

Dana Simpson’s Phoebe and her Unicorn for the 14th mentions mathematics as iconic of what you do at school. Book reports also make the cut.

Dan Barry’s Flash Gordon for the 31st of July, 1962, rerun the 16th of June, 2017. I am impressed that Dr Zarkov can make a TV set capable of viewing alternate universes. I still literally do not know how it is possible that we have sound for our new TV set, and I labelled and connected every single wire in the thing. Oh, wouldn’t it be a kick if Dr Zarkov has the picture from one alternate universe but the sound from a slightly different other one?

Dan Barry’s Flash Gordon for the 31st of July, 1962 and rerun the 16th I’m including just because I love the old-fashioned image of a mathematician in Professor Quita here. At this point in the comic strip’s run it was set in the far-distant future year of 1972, and the action here is on one of the busy multinational giant space stations. Flash himself is just back from Venus where he’d set up some dolphins as assistants to a fish-farming operation helping to feed that world and ours. And for all that early-60s futurism look at that gorgeous old adding machine he’s still got. (Professor Quita’s discovery is a way to peer into alternate universes, according to the next day’s strip. I’m kind of hoping this means they’re going to spend a week reading Buck Rogers.)

## Why Stuff Can Orbit, Part 9: How The Spring In The Cosmos Behaves

Previously:

First, I thank Thomas K Dye for the banner art I have for this feature! Thomas is the creator of the longrunning web comic Newshounds. He’s hoping soon to finish up special editions of some of the strip’s stories and to publish a definitive edition of the comic’s history. He’s also got a Patreon account to support his art habit. Please give his creations some of your time and attention.

Now back to central forces. I’ve run out of obvious fun stuff to say about a mass that’s in a circular orbit around the center of the universe. Before you question my sense of fun, remember that I own multiple pop histories about the containerized cargo industry and last month I read another one that’s changed my mind about some things. These sorts of problems cover a lot of stuff. They cover planets orbiting a sun and blocks of wood connected to springs. That’s about all we do in high school physics anyway. Well, there’s spheres colliding, but there’s no making a central force problem out of those. You can also make some things that look like bad quantum mechanics models out of that. The mathematics is interesting even if the results don’t match anything in the real world.

But I’m sticking with central forces that look like powers. These have potential energy functions with rules that look like V(r) = C r^n. So far, ‘n’ can be any real number. It turns out ‘n’ has to be larger than -2 for a circular orbit to be stable, but that’s all right. There are lots of numbers larger than -2. ‘n’ carries the connotation of being an integer, a whole (positive or negative) number. But if we want to let it be any old real number like 0.1 or π or 18 and three-sevenths that’s fine. We make a note of that fact and remember it right up to the point we stop pretending to care about non-integer powers. I estimate that’s like two entries off.

We get a circular orbit by setting the thing that orbits in … a circle. This sounded smarter before I wrote it out like that. Well. We set it moving perpendicular to the “radial direction”, which is the line going from wherever it is straight to the center of the universe. This perpendicular motion means there’s a non-zero angular momentum, which we write as ‘L’ for some reason. For each angular momentum there’s a particular radius that allows for a circular orbit. Which radius? It’s whatever one is a minimum for the effective potential energy:

$V_{eff}(r) = Cr^n + \frac{L^2}{2m}r^{-2}$

This we can find by taking the first derivative of ‘V_eff’ with respect to ‘r’ and finding where that first derivative is zero. This is standard mathematics stuff, quite routine. We can do it with any function, whether it represents something physical or not. So:

$\frac{dV_{eff}}{dr} = nCr^{n-1} - 2\frac{L^2}{2m}r^{-3} = 0$

$r = \left(\frac{L^2}{nCm}\right)^{\frac{1}{n + 2}}$
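We can sanity-check that closed form numerically. A sketch with made-up values of ‘n’, ‘C’, ‘L’, and ‘m’ (chosen only for illustration), scanning radii on both sides of the formula’s answer:

```python
# Check, with made-up numbers, that the closed-form radius minimizes the
# effective potential V_eff(r) = C r^n + (L^2 / 2m) r^(-2).
n, C = 1, 1.0     # potential V(r) = C r^n; n = 1 is a steady inward pull
L, m = 2.0, 1.0   # angular momentum and mass, arbitrary choices

def v_eff(r):
    return C * r**n + (L**2 / (2 * m)) * r**(-2)

a = (L**2 / (n * C * m)) ** (1 / (n + 2))   # the closed-form minimum

# Scan radii near a: no candidate beats the closed-form answer.
candidates = [a + 0.01 * k for k in range(-50, 51)]
assert min(candidates, key=v_eff) == a

print(round(a, 4))  # 1.5874 for these made-up numbers
```

The scan is crude, but that’s the point: wherever you probe near the formula’s radius, the effective potential energy only goes up.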

What I’d like to talk about is if we’re not quite at that radius. If we set the planet (or whatever) a little bit farther from the center of the universe. Or a little closer. Same angular momentum though, so the equilibrium, the circular orbit, should be in the same spot. It happens there isn’t a planet there.

This enters us into the world of perturbations, which is where most of the big money in mathematical physics is. A perturbation is a little nudge away from an equilibrium. What happens in response to the little nudge is interesting stuff. And here we already know, qualitatively, what’s going to happen: the planet is going to rock around the equilibrium. This is because the circular orbit is a stable equilibrium. I’d described that qualitatively last time. So now I want to talk quantitatively about how the perturbation changes given time.

Before I get there I need to introduce another bit of notation. It is so convenient to be able to talk about the radius of the circular orbit that would be the equilibrium. I’d called that ‘r’ up above. But I also need to be able to talk about how far the perturbed planet is from the center of the universe. That’s also really hard not to call ‘r’. Something has to give. Since the radius of the circular orbit is not going to change I’m going to give that a new name. I’ll call it ‘a’. There’s several reasons for this. One is that ‘a’ is commonly used for describing the size of ellipses, which turn up in actual real-world planetary orbits. That’s something we know because this is like the thirteenth part of an essay series about the mathematics of orbits. You aren’t reading this if you haven’t picked up a couple things about orbits on your own. Also we’ve used ‘a’ before, in these sorts of approximations. It was handy in the last supplemental as the point of expansion’s name. So let me make that unmistakable:

$a \equiv r = \left(\frac{L^2}{nCm}\right)^{\frac{1}{n + 2}}$

The $\equiv$ there means “defined to be equal to”. You might ask how this is different from “equals”. It seems like more emphasis to me. Also, there are other names for the circular orbit’s radius that I could have used. ‘r_e’ would be good enough, as the subscript would suggest “radius of equilibrium”. Or ‘r_0’ would be another popular choice, the 0 suggesting that this is something of key, central importance and also looking kind of like a circle. (That’s probably coincidence.) I like the ‘a’ better there because I know how easy it is to drop a subscript. If you’re working on a problem for yourself that’s easy to fix, with enough cursing and redoing your notes. On a board in front of class it’s even easier to fix since someone will ask about the lost subscript within three lines. In a post like this? It would be a mess.

So now I’m going to look at possible values of the radius ‘r’ that are close to ‘a’. How close? Close enough that ‘V_eff’, the effective potential energy, looks like a parabola. If it doesn’t look much like a parabola then I look at values of ‘r’ that are even closer to ‘a’. (Do you see how the game is played? If you don’t, look closer. Yes, this is actually valid.) If ‘r’ is that close to ‘a’, then we can get away with this polynomial expansion:

$V_{eff}(r) \approx V_{eff}(a) + m\cdot(r - a) + \frac{1}{2} m_2 (r - a)^2$

where

$m = \frac{dV_{eff}}{dr}\left(a\right) \\ m_2 = \frac{d^2V_{eff}}{dr^2}\left(a\right)$

The “approximate” there is because this is an approximation. $V_{eff}(r)$ is in truth equal to the thing on the right-hand-side there plus something that isn’t (usually) zero, but that is small.

I am sorry beyond my ability to describe that I didn’t make that ‘m’ and ‘m2‘ consistent last week. That’s all right. One of these is going to disappear right away.

Now, what is $V_{eff}(a)$? Well, that’s whatever you get from putting in ‘a’ wherever you start out seeing ‘r’ in the expression for $V_{eff}(r)$. I’m not going to bother with that. Call it math, fine, but that’s just a search-and-replace on the character ‘r’. Also, where I’m going next, it’s going to disappear, never to be seen again, so who cares? What’s important is that this is a constant number. If ‘r’ changes, the value of $V_{eff}(a)$ does not, because ‘r’ doesn’t appear anywhere in $V_{eff}(a)$.

How about ‘m’? That’s the value of the first derivative of ‘Veff‘ with respect to ‘r’, evaluated when ‘r’ is equal to ‘a’. That might be something. It’s not, because of what ‘a’ is. It’s the value of ‘r’ which would make $\frac{dV_{eff}}{dr}(r)$ equal to zero. That’s why ‘a’ has that value instead of some other, any other.

So we’ll have a constant part ‘Veff(a)’, plus a zero part, plus a part that’s a parabola. This is normal, by the way, when we do expansions around an equilibrium. At least it’s common. Good to see it. To find ‘m2‘ we have to take the second derivative of ‘Veff(r)’ and then evaluate it when ‘r’ is equal to ‘a’ and ugh but here it is.

$\frac{d^2V_{eff}}{dr^2}(r) = n (n - 1) C r^{n - 2} + 3\cdot\frac{L^2}{m}r^{-4}$

And at the point of approximation, where ‘r’ is equal to ‘a’, it’ll be:

$m_2 = \frac{d^2V_{eff}}{dr^2}(a) = n (n - 1) C a^{n - 2} + 3\cdot\frac{L^2}{m}a^{-4}$
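As a sanity check, here’s a short Python sketch comparing that formula for ‘m2’ against a plain numerical second derivative of ‘Veff’. The constants n = 2, C = 1, m = 1, L = 3 are, again, hypothetical stand-ins:

```python
# Compare m_2 = n(n-1) C a^(n-2) + 3 (L^2/m) a^(-4) with a finite-difference
# second derivative of V_eff(r) = C r^n + L^2/(2 m r^2) at r = a.
n, C, m, L = 2, 1.0, 1.0, 3.0

def V_eff(r):
    return C * r**n + L**2 / (2 * m * r**2)

a = (L**2 / (n * C * m)) ** (1 / (n + 2))
m2_formula = n * (n - 1) * C * a**(n - 2) + 3 * (L**2 / m) * a**(-4)

# Central second difference; should agree with the formula closely.
h = 1e-4
m2_numeric = (V_eff(a + h) - 2 * V_eff(a) + V_eff(a - h)) / h**2
print(m2_formula, m2_numeric)
```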

We know exactly what ‘a’ is so we could write that out in a nice big expression. You don’t want to. I don’t want to. It’s a bit of a mess. I mean, it’s not hard, but it has a lot of symbols in it and oh all right. Here. Look fast because I’m going to get rid of that as soon as I can.

$m_2 = \frac{d^2V_{eff}}{dr^2}(a) = n (n - 1) C \left(\frac{L^2}{n C m}\right)^{\frac{n - 2}{n + 2}} + 3\cdot\frac{L^2}{m}\left(\frac{L^2}{n C m}\right)^{-\frac{4}{n + 2}}$

For the values of ‘n’ that we actually care about because they turn up in real actual physics problems this expression simplifies some. Enough, anyway. If we pretend we know nothing about ‘n’ besides that it is a number bigger than -2 then … ugh. We don’t have a lot that can clean it up.

Here’s how. I’m going to define an auxiliary little function. Its role is to contain our symbolic sprawl. It has a legitimate role too, though. At least it represents something that it makes sense to give a name. It will be a new function, named ‘F’ and that depends on the radius ‘r’:

$F(r) \equiv -\frac{dV}{dr}$

Notice that’s the derivative of the original ‘V’, not the angular-momentum-equipped ‘Veff‘. This is the secret of its power. It doesn’t do anything to make $V_{eff}(r)$ easier to work with. It starts being good when we take its derivatives, though:

$\frac{dV_{eff}}{dr} = -F(r) - \frac{L^2}{m}r^{-3}$

That already looks nicer, doesn’t it? It’s going to be really slick when you think about what ‘F(a)’ is. Remember that ‘a’ is the value for ‘r’ which makes the derivative of ‘Veff‘ equal to zero. So … I may not know much, but I know this:

$0 = \frac{dV_{eff}}{dr}(a) = -F(a) - \frac{L^2}{m}a^{-3} \\ F(a) = -\frac{L^2}{ma^3}$

I’m not going to say what value F(r) has for other values of ‘r’ because I don’t care. But now look at what it does for the second derivative of ‘Veff‘:

$\frac{d^2 V_{eff}}{dr^2}(r) = -F'(r) + 3\frac{L^2}{mr^4}$

Here the ‘F'(r)’ is a shorthand way of writing ‘the derivative of F with respect to r’. You can do that when there’s only the one free variable to consider. And now something magic happens when we look at the second derivative of ‘Veff‘ when ‘r’ is equal to ‘a’ …

$\frac{d^2 V_{eff}}{dr^2}(a) = -F'(a) - \frac{3}{a} F(a)$

We get away with this because we happen to know that ‘F(a)’ is equal to $-\frac{L^2}{ma^3}$ and doesn’t that work out great? We’ve turned a symbolic mess into a … less symbolic mess.

Now why do I say it’s legitimate to introduce ‘F(r)’ here? It’s because minus the derivative of the potential energy with respect to the position of something can be something of actual physical interest. It’s the amount of force exerted on the particle by that potential energy at that point. The amount of force on a thing is something that we could imagine being interested in. Indeed, we’d have used that except potential energy is usually so much easier to work with. I’ve avoided it up to this point because it wasn’t giving me anything I needed. Here, I embrace it because it will save me from some awful lines of symbols.

Because with this expression in place I can write the approximation to the effective potential energy as:

$V_{eff}(r) \approx V_{eff}(a) + \frac{1}{2} \left( -F'(a) - \frac{3}{a}F(a) \right) (r - a)^2$

So if ‘r’ is close to ‘a’, then the polynomial on the right is a good enough approximation to the effective potential energy. And that potential energy has the shape of a spring’s potential energy. We can use what we know about springs to describe its motion. Particularly, we’ll have this be true:

$\frac{dp}{dt} = -\frac{dV_{eff}}{dr}(r) \approx \left( F'(a) + \frac{3}{a} F(a)\right)\left(r - a\right)$

Here, ‘p’ is the (linear) momentum of whatever’s orbiting, which we can treat as equal to $m\frac{dr}{dt}$, the mass of the orbiting thing times the rate at which its distance from the center changes. You may sense in me some reluctance about doing this, what with that ‘we can treat as equal to’ talk. There’s reasons for this and I’d have to get deep into geometry to explain why. I can get away with specifically this use because the problem allows it. If you’re trying to do your own original physics problem inspired by this thread, and it’s not orbits like this, be warned. This is a spot that could open up to a gigantic danger pit, lined at the bottom with sharp spikes and angry poison-clawed mathematical tigers and I bet it’s raining down there too.

So we can rewrite all this as

$m\frac{d^2r}{dt^2} = -\frac{dV_{eff}}{dr}(r) \approx \left( F'(a) + \frac{3}{a} F(a)\right)\left(r - a\right)$

And when we learned everything interesting there was to know about springs we learned what the solutions to this look like. Oh, in that essay the variable that changed over time was called ‘x’ and here it’s called ‘r’, but that’s not an actual difference. ‘r’ will be some sinusoidal curve:

$r(t) = a + A \cos\left(\sqrt{\frac{k}{m}} t\right) + B \sin\left(\sqrt{\frac{k}{m}} t\right)$

where, here, ‘k’ is the spring constant, built from that whole mass of constants we’ve been carrying around:

$k = -\left( F'(a) + \frac{3}{a} F(a)\right)$

I don’t know what ‘A’ and ‘B’ are. It’ll depend on just what the perturbation is like, how far the planet is from the circular orbit. But I can tell you what the behavior is like. The planet will wobble back and forth around the circular orbit, sometimes closer to the center, sometimes farther away. It’ll spend as much time closer to the center than the circular orbit as it does farther away. And the period of that oscillation will be

$T = 2\pi\sqrt{\frac{m}{k}} = 2\pi\sqrt{\frac{m}{-\left(F'(a) + \frac{3}{a}F(a)\right)}}$
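To see that this period formula matches the actual motion, here’s a Python sketch that integrates the radial equation of motion for the full, unapproximated effective potential, starting just barely off the circular orbit, and times one oscillation. As before, the constants n = 2, C = 1, m = 1, L = 3 are arbitrary stand-ins rather than anything physical:

```python
import math

# Hypothetical stand-in constants; V(r) = C r^n, so F(r) = -n C r^(n-1).
n, C, m, L = 2, 1.0, 1.0, 3.0
a = (L**2 / (n * C * m)) ** (1 / (n + 2))

def dVeff_dr(r):
    # Derivative of V_eff(r) = C r^n + L^2/(2 m r^2).
    return n * C * r**(n - 1) - (L**2 / m) * r**(-3)

# k = -(F'(a) + (3/a) F(a)) and the predicted period T = 2 pi sqrt(m/k).
F_a = -n * C * a**(n - 1)
Fp_a = -n * (n - 1) * C * a**(n - 2)
k = -(Fp_a + (3 / a) * F_a)
T_pred = 2 * math.pi * math.sqrt(m / k)

# Leapfrog-integrate m r'' = -dV_eff/dr from a 0.1% perturbation and time
# the gap between successive upward crossings of r = a.
r, v, dt, t = a * 1.001, 0.0, 1e-4, 0.0
crossings, prev = [], r - a
while len(crossings) < 2 and t < 50.0:
    v_half = v - 0.5 * dt * dVeff_dr(r) / m
    r += dt * v_half
    v = v_half - 0.5 * dt * dVeff_dr(r) / m
    t += dt
    cur = r - a
    if prev < 0 <= cur:
        crossings.append(t)
    prev = cur

T_measured = crossings[1] - crossings[0]
print(T_pred, T_measured)
```

For this small a perturbation the measured period agrees with the prediction to a fraction of a percent; crank the perturbation up and the anharmonic terms the parabola approximation threw away start to show.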

This tells us something about what the orbit of a thing not in a circular orbit will be like. Yes, I see you in the back there, quivering with excitement about how we’ve got to elliptical orbits. You’re moving too fast. We haven’t got that. There will be elliptical orbits, yes, but only for a very particular power ‘n’ for the potential energy. Not for most of them. We’ll see.

It might strike you there’s something in that square root. We need to take the square root of a positive number, so maybe this will tell us something about what kinds of powers we’re allowed. It’s a good thought. It turns out not to tell us anything useful, though. Suppose we started with $V(r) = Cr^n$. Then $F(r) = -nCr^{n - 1}$, and $F'(r) = -n(n - 1)Cr^{n - 2}$. Sad to say, this leads us to a journey which reveals that we need ‘n’ to be larger than -2 or else we don’t get oscillations around a circular orbit. We already knew that, though. We already found we needed it to have a stable equilibrium before. We can see there not being a period for these oscillations around the circular orbit as another expression of the circular orbit not being stable. Alas, we haven’t got something new out of this.
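Here’s a little Python sketch of that last point, computing ‘k’ for a few powers ‘n’. The values of C are hypothetical, chosen so that nC is positive (which the equilibrium radius requires), with m = 1 and L = 3 as before. The sign of ‘k’ flips as ‘n’ drops below -2, and with it goes any hope of oscillation:

```python
m, L = 1.0, 3.0

def k_for(n, C):
    # Equilibrium radius and k = -(F'(a) + (3/a) F(a)) for V(r) = C r^n.
    a = (L**2 / (n * C * m)) ** (1 / (n + 2))
    F_a = -n * C * a**(n - 1)
    Fp_a = -n * (n - 1) * C * a**(n - 2)
    return -(Fp_a + (3 / a) * F_a)

# Attractive potentials: C > 0 when n > 0, C < 0 when n < 0.
for n, C in [(2, 1.0), (-1, -1.0), (-1.9, -1.0), (-2.5, -1.0)]:
    print(n, k_for(n, C))
```

The first three powers give a positive ‘k’, and so a real oscillation period; the last, below -2, gives a negative ‘k’ and no period at all.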

We will get to new stuff, though. Maybe even ellipses.

## My Mathematics Reading For The 13th of June

I’m working on the next Why Stuff Can Orbit post, this one to feature a special little surprise. In the meanwhile here’s some of the things I’ve read recently and liked.

The Theorem of the Day is just what the name offers. They’re fit onto single slides, so there’s not much text to read. I’ll grant some of them might be hard reading at once, though, if you’re not familiar with the lingo. Anyway, this particular theorem, the Lindemann-Weierstrass Theorem, is one of the famous ones. Also one of the best-named ones. Karl Weierstrass is one of those names you find all over analysis. Over the latter half of the 19th century he attacked the logical problems that had bugged calculus for the previous three centuries and beat them all. I’m lying, but not by much. Ferdinand von Lindemann’s name turns up less often, but he’s known in mathematics circles for proving that π is transcendental (and so, ultimately, that the circle can’t be squared by compass and straightedge). And he was David Hilbert’s thesis advisor.

The Lindemann-Weierstrass Theorem is one of those little utility theorems that’s neat on its own, yes, but is good for proving other stuff. This theorem says that if a given number is algebraic (ask about that some A To Z series) then e raised to that number has to be transcendental, and vice-versa. (The exception: e raised to 0 is equal to 1.) The page also mentions one of those fun things you run across when you have a scientific calculator and can repeat an operation on whatever the result of the last operation was.

I’ve mentioned Maths By A Girl before, but, it’s worth checking in again. This is a piece about Apéry’s Constant, which is one of those numbers mathematicians have heard of, and that we don’t know whether is transcendental or not. It’s hard proving numbers are transcendental. If you go out trying to build a transcendental number it’s easy, but otherwise, you have to hope you know your number is the exponential of an algebraic number.

I forget which Twitter feed brought this to my attention, but here’s a couple geometric theorems demonstrated and explained some by Dave Richeson. There’s something wonderful in a theorem that’s mostly a picture. It feels so supremely mathematical to me.

And last, Katherine Bourzac writing for Nature.com reports the creation of a two-dimensional magnet. This delights me since one of the classic problems in statistical mechanics is a thing called the Ising model. It’s a basic model for the mathematics of how magnets would work. The one-dimensional version is simple enough that you can give it to undergrads and have them work through the whole problem. The two-dimensional version is a lot harder to solve and I’m not sure I ever saw it laid out even in grad school. (Mind, I went to grad school for mathematics, not physics, and the subject is a lot more physics.) The four- and higher-dimensional model can be solved by a clever approach called mean field theory. The three-dimensional model … I don’t think has any exact solution, which seems odd given how that’s the version you’d think was most useful.

That there’s a real two-dimensional magnet (well, a one-molecule-thick magnet) doesn’t really affect the model of two-dimensional magnets. The model is interesting enough for its mathematics, which teaches us about all kinds of phase transitions. And it’s close enough to the way certain aspects of real-world magnets behave to enlighten our understanding. The topic couldn’t avoid drawing my eye, is all.

## Reading the Comics, June 10, 2017: Some Vintage Comics Edition

It’s too many comics to call this a famine edition, after last week’s feast. But there’s not a lot of theme to last week’s mathematically-themed comic strips. There’s a couple that include vintage comic strips from before 1940, though, so let’s run with that as a title.

Glenn McCoy and Gary McCoy’s The Flying McCoys for the 4th of June is your traditional blackboard full of symbols to indicate serious and deep thought on a subject. It’s a silly subject, but that’s fine. The symbols look like gibberish to me, but clown research will go along non-traditional paths, I suppose.

Bill Hinds’s Tank McNamara for the 4th is built on mathematics’ successful invasion and colonization of sports management. Analytics, sabermetrics, Moneyball, whatever you want to call it, is built on ideas not far removed from the quality control techniques that changed corporate management so. Look for patterns; look for correlations; look for the things that seem to predict other things. It seems bizarre, almost inhuman, that we might be able to think of football players as being all of a kind, that what we know about (say) one running back will tell us something about another. But if we put roughly similarly capable people through roughly similar training and set them to work in roughly similar conditions, then we start to see why they might perform similarly. Models can help us make better, more rational, choices.

Morrie Turner’s Wee Pals rerun for the 4th is another word-problem resistance joke. I suppose it’s also a reminder about the unspoken assumptions in a problem. It also points out why mathematicians end up speaking in an annoyingly precise manner. It’s an attempt to avoid being shown up like Oliver is.

Which wouldn’t help with Percy Crosby’s Skippy for the 7th of April, 1930, and rerun the 5th. Skippy’s got a smooth line of patter to get out of his mother’s tutoring. You can see where Percy Crosby has the weird trait of drawing comics in 1930 that would make sense today still; few pre-World-War-II comics do.

Niklas Eriksson’s Carpe Diem for the 7th of June, 2017. If I may intrude in someone else’s work, it seems to me that the problem-solver might find a hint to what ‘x’ is by looking to the upper right corner of the page and the $x = \sqrt{13}$ already there.

Niklas Eriksson’s Carpe Diem for the 7th is a joke about mathematics anxiety. I don’t know that it actually explains anything, but, eh. I’m not sure there is a rational explanation for mathematics anxiety; if there were, I suppose it wouldn’t be anxiety.

George Herriman’s Krazy Kat for the 15th of July, 1939, and rerun the 8th, extends that odd little faintly word-problem-setup of the strips I mentioned the other day. I suppose identifying when two things moving at different speeds will intersect will always sound vaguely like a story problem.

George Herriman’s Krazy Kat for the 15th of July, 1939, as rerun the 8th of June, 2017. I know the comic isn’t to everyone’s taste, but I like it. I’m also surprised to see something as directly cartoonish as the brick stopping in midair like that in the third panel. The comic is usually surreal, yes, but not that way.

Tom Toles’s Randolph Itch, 2 am rerun for the 9th is about the sometimes-considered third possibility from a fair coin toss, and how to rig the results of that.

• #### goldenoj 9:01 pm on Sunday, 11 June, 2017 Permalink | Reply

Skippy is fascinating. Had to check if it was really from the 30s http://www.gocomics.com/skippy/2017/06/06 might also be a math comic.

You might want to put your Twitter handle in the sidebar – didn’t realize I had already seen you there via the blog-bot.

Like

• #### Joseph Nebus 11:57 pm on Monday, 12 June, 2017 Permalink | Reply

I didn’t realize I didn’t have my Twitter handle in the sidebar. Thanks, though, I’m glad to do stuff that makes me easier to find or understand, especially if it doesn’t require ongoing work.

Skippy, now, that’s not just a 1930s comic but one of the defining (American) comic strips. Basically every comic strip that stars kids who think is imitating it, either directly or through its influences, particularly Charles Schulz and Peanuts. It’s uncanny, especially when you compare it to its contemporaries, how nearly seamlessly it would fit into a modern comics page. It’s rather like Robert Benchley or Frank Fay in that way; now-obscure humorists or performers whose work is so modern and so influential that a wide swath of the modern genre is quietly imitating them.

Liked by 1 person

• #### Joseph Nebus 11:58 pm on Monday, 12 June, 2017 Permalink | Reply

Oh yes, and you’re right; I could’ve fit the comic from the 6th of June into a Reading the Comics post if I’d thought a bit more about it. Good eye!

Like

## Reading the Comics, June 3, 2017: Feast Week Conclusion Edition

And now finally I can close out last week’s many mathematically-themed comic strips. I had hoped to post this Thursday, but the Why Stuff Can Orbit supplemental took up my writing energies and eventually timeslot. This also ends up being the first time I’ve had one of Joe Martin’s comic strips since the Houston Chronicle ended its comics pages and I admit I’m not sure how I’m going to work this. I’m also not perfectly sure what the comic strip means.

So Joe Martin’s Mister Boffo for the 1st of June seems to be about a disastrous mathematics exam, with a kid doing badly enough that he hasn’t even got numbers to express the score exactly. Also I’m not sure there is a way to link to the strip I mean exactly; the archives for Martin’s strips are not … organized the way I would have done. Well, they’re his business.

So Joe Martin’s Mister Boffo for the 1st of June, 2017. The link is probably worthless, since I can’t figure out how to work its archives. Good luck yourselves with it.

Greg Evans’s Luann Againn for the 1st reruns the strip from the 1st of June, 1989. It’s your standard resisting-the-word-problem joke. On first reading the strip I didn’t get what the problem was asking for, and supposed that the text had garbled the problem, if there were an original problem. That was my sloppiness is all; it’s a perfectly solvable question once you actually read it.

J C Duffy’s Lug Nuts for the 1st — another day that threatened to be a Reading the Comics post all on its own — is a straggler Pi Day joke. It’s just some Dadaist clowning about.

Doug Bratton’s Pop Culture Shock Therapy for the 1st is a wordplay joke that uses word problems as emblematic of mathematics. I’m okay with that; much of the mathematics that people actually want to do amounts to extracting from a situation the things that are relevant and forming an equation based on that. This is what a word problem is supposed to teach us to do.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 1st — maybe I should have done a Reading the Comics for that day alone — riffs on the idle speculation that God would be a mathematician. It does this by showing a God uninterested in two logical problems. The first is the question of whether there’s an odd perfect number. Perfect numbers are these things that haunt number theory. (Everything haunts number theory.) It starts with idly noticing what happens if you pick a number, find the numbers that divide into it, and add those up. For example, 4 can be divided by 1 and 2; those add to 3. 5 can only be divided by 1; that adds to 1. 6 can be divided by 1, 2, and 3; those add to 6. For a perfect number the divisors add up to the original number. Perfect numbers look rare; for a thousand years or so only four of them (6, 28, 496, and 8128) were known to exist.

All the perfect numbers we know of are even. More, they’re all numbers that can be written as the product $2^{p - 1} \cdot \left(2^p - 1\right)$ for certain prime numbers ‘p’. (They’re the ones for which $2^p - 1$ is itself a prime number.) What we don’t know, and haven’t got a hint about proving, is whether there are any odd perfect numbers. We know some things about odd perfect numbers, if they exist, the most notable of them being that they’ve got to be incredibly huge numbers, much larger than a googol, the standard idea of an incredibly huge number. Presumably an omniscient God would be able to tell whether there were an odd perfect number, or at least would be able to care whether there were. (It’s also not known if there are infinitely many perfect numbers, by the way. This reminds us that number theory is pretty much nothing but a bunch of easy-to-state problems that we can’t solve.)
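Both of those claims are easy to poke at computationally. Here’s a Python sketch that finds every perfect number up to ten thousand by brute force, then rebuilds the same list from the $2^{p - 1} \cdot \left(2^p - 1\right)$ form:

```python
def aliquot_sum(n):
    """Sum of the divisors of n that are smaller than n itself."""
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            total += d + (n // d if d != n // d else 0)
        d += 1
    return total

# A perfect number equals the sum of its proper divisors.
perfect = [n for n in range(2, 10_000) if aliquot_sum(n) == n]
print(perfect)  # the four known for a millennium: [6, 28, 496, 8128]

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# For each p making 2^p - 1 prime, 2^(p-1) * (2^p - 1) is perfect.
mersenne = [2**(p - 1) * (2**p - 1) for p in (2, 3, 5, 7) if is_prime(2**p - 1)]
print(mersenne)  # [6, 28, 496, 8128]
```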

Some miscellaneous other things we know about an odd perfect number, other than whether any exist: if there are odd perfect numbers, they’re not divisible by 105. They’re equal to one more than a whole multiple of 12, or else 117 more than a whole multiple of 468, or 81 more than a whole multiple of 324. They’ve got to have at least 101 prime factors, and there have to be at least ten distinct prime factors. There have to be at least twelve distinct prime factors if 3 isn’t a factor of the odd perfect number. If this seems like a screwy list of things to know about a thing we don’t even know exists, then welcome to number theory.

The beard question I believe is a reference to the logician’s paradox. This is the one postulating a village in which the village barber shaves all, but only, the people who do not shave themselves. Given that, who shaves the barber? It’s an old joke, but if you take it seriously you learn something about the limits of what a system of logic can tell you about itself.

Bud Blake’s Tiger rerun for the 2nd of June, 2017. Bonus arithmetic problem: what’s the latest time that this could be? Also, don’t you like how the dog’s tail spills over the panel borders twice? I do.

Bud Blake’s Tiger rerun for the 2nd has Tiger’s arithmetic homework spill out into real life. This happens sometimes.

George Herriman’s Krazy Kat for the 10th of July, 1939 and rerun the 2nd of June, 2017. I realize that by contemporary standards this is a very talky comic strip. But read Officer Pupp’s dialogue, particularly in the second panel. It just flows with a wonderful archness.

George Herriman’s Krazy Kat for the 10th of July, 1939 was rerun the 2nd of June. I’m not sure that it properly fits here, but the talk about Officer Pupp running at 60 miles per hour and Ignatz Mouse running forty and whether Pupp will catch Mouse sure reads like a word problem. Later strips in the sequence, including the ways that a tossed brick could hit someone who’d be running faster than it, did not change my mind about this. Plus I like Krazy Kat so I’ll take a flimsy excuse to feature it.

• #### Joshua K. 1:33 am on Saturday, 10 June, 2017 Permalink | Reply

I thought that the second question in “Saturday Morning Breakfast Cereal” was meant to imply that mathematicians often have beards; therefore, if God would prefer not to have a beard, he probably isn’t a mathematician.

Like

• #### Joseph Nebus 11:48 pm on Monday, 12 June, 2017 Permalink | Reply

Oh, you may have something there. I’m so used to thinking of beards as a logic problem that I didn’t think of them as a mathematician thing. (In my defense, back in grad school I’m not sure any of the faculty had beards.) I’ll take that interpretation too.

Like

## What Second Derivatives Are And What They Can Do For You

Previous supplemental reading for Why Stuff Can Orbit:

This is another supplemental piece because it’s too much to include in the next bit of Why Stuff Can Orbit. I need some more stuff about how a mathematical physicist would look at something.

This is also a story about approximations. A lot of mathematics is really about approximations. I don’t mean numerical computing. We all know that when we compute we’re making approximations. We use 0.333333 instead of one-third and we use 3.141592 instead of π. But a lot of precise mathematics, what we call analysis, is also about approximations. We do this by a logical structure that works something like this: take something we want to prove. Now for every positive number ε we can find something — a point, a function, a curve — that’s no more than ε away from the thing we’re really interested in, and which is easier to work with. Then we prove whatever we want to with the easier-to-work-with thing. And since ε can be as tiny a positive number as we want, we can suppose ε is a tinier difference than we can hope to measure. And so the difference between the thing we’re interested in and the thing we’ve proved something interesting about is zero. (This is the part that feels like we’re pulling a scam. We’re not, but this is where it’s worth stopping and thinking about what we mean by “a difference between two things”. When you feel confident this isn’t a scam, continue.) So we proved whatever we proved about the thing we’re interested in. Take an analysis course and you will see this all the time.

When we get into mathematical physics we do a lot of approximating functions with polynomials. Why polynomials? Yes, because everything is polynomials. But also because polynomials make so much mathematical physics easy. Polynomials are easy to calculate, if you need numbers. Polynomials are easy to integrate and differentiate, if you need analysis. Here that’s the calculus that tells you about patterns of behavior. If you want to approximate a continuous function you can always do it with a polynomial. The polynomial might have to be infinitely long to approximate the entire function. That’s all right. You can chop it off after finitely many terms. This finite polynomial is still a good approximation. It’s just good for a smaller region than the infinitely long polynomial would have been.

Necessary qualifiers: pages 65 through 82 of any book on real analysis.

So. Let me get to functions. I’m going to use a function named ‘f’ because I’m not wasting my energy coming up with good names. (When we get back to the main Why Stuff Can Orbit sequence this is going to be ‘U’ for potential energy or ‘E’ for energy.) It’s got a domain that’s the real numbers, and a range that’s the real numbers. To express this in symbols I can write $f: \Re \rightarrow \Re$. If I have some number called ‘x’ that’s in the domain then I can tell you what number in the range is matched by the function ‘f’ to ‘x’: it’s the number ‘f(x)’. You were expecting maybe 3.5? I don’t know that about ‘f’, not yet anyway. The one thing I do know about ‘f’, because I insist on it as a condition for appearing, is that it’s continuous. It hasn’t got any jumps, any gaps, any regions where it’s not defined. You could draw a curve representing it with a single, if wriggly, stroke of the pen.

I mean to build an approximation to the function ‘f’. It’s going to be a polynomial expansion, a set of things to multiply and add together that’s easy to find. To make this polynomial expansion I need to choose some point to build the approximation around. Mathematicians call this the “point of expansion” because we froze up in panic when someone asked what we were going to name it, okay? But how are we going to make an approximation to a function if we don’t have some particular point we’re approximating around?

(One answer we find in grad school when we pick up some stuff from linear algebra we hadn’t been thinking about. We’ll skip it for now.)

I need a name for the point of expansion. I’ll use ‘a’. Many mathematicians do. Another popular name for it is ‘x0‘. Or if you’re using some other variable name for stuff in the domain then whatever that variable is with subscript zero.

So my first approximation to the original function ‘f’ is … oh, shoot, I should have some new name for this. All right. I’m going to use ‘F0‘ as the name. This is because it’s one of a set of approximations, each of them a little better than the old. ‘F1‘ will be better than ‘F0‘, but ‘F2‘ will be even better, and ‘F2038‘ will be way better yet. I’ll also say something about what I mean by “better”, although you’ve got some sense of that already.

I start off by calling the first approximation ‘F0‘ by the way because you’re going to think it’s too stupid to dignify with a number as big as ‘1’. Well, I have other reasons, but they’ll be easier to see in a bit. ‘F0‘, like all its sibling ‘Fn‘ functions, has a domain of the real numbers and a range of the real numbers. The rule defining how to go from a number ‘x’ in the domain to some real number in the range?

$F^0(x) = f(a)$

That is, this first approximation is simply whatever the original function’s value is at the point of expansion. Notice that’s an ‘x’ on the left side of the equals sign and an ‘a’ on the right. This seems to challenge the idea of what an “approximation” even is. But it’s legit. Supposing something to be constant is often a decent working assumption. If you failed to check what the weather for today will be like, supposing that it’ll be about like yesterday will usually serve you well enough. If you aren’t sure where your pet is, you look first wherever you last saw the animal. (Or, yes, where your pet most loves to be. A particular spot, though.)

We can make this rigorous. A mathematician thinks this is rigorous: you pick any margin of error you like. Then I can find a region near enough to the point of expansion. The value for ‘f’ for every point inside that region is ‘f(a)’ plus or minus your margin of error. It might be a small region, yes. Doesn’t matter. It exists, no matter how tiny your margin of error was.

But yeah, that expansion still seems too cheap to work. My next approximation, ‘F1‘, will be a little better. I mean that we can expect it will be closer than ‘F0‘ was to the original ‘f’. Or it’ll be as close for a bigger region around the point of expansion ‘a’. What it’ll represent is a line. Yeah, ‘F0‘ was a line too. But ‘F0‘ is a horizontal line. ‘F1‘ might be a line at some completely other angle. If that works better. The second approximation will look like this:

$F^1(x) = f(a) + m\cdot\left(x - a\right)$

Here ‘m’ serves its traditional yet poorly-explained role as the slope of a line. What the slope of that line should be we learn from the derivative of the original ‘f’. The derivative of a function is itself a new function, with the same domain and the same range. There’s a couple ways to denote this. Each way has its strengths and weaknesses about clarifying what we’re doing versus how much we’re writing down. And trying to write down almost anything can inspire confusion in analysis later on. There’s a part of analysis where you have to shift from thinking about particular problems to thinking about how problems work.

So I will define a new function, spoken of as f-prime, this way:

$f'(x) = \frac{df}{dx}\left(x\right)$

If you look closely you realize there’s two different meanings of ‘x’ here. One is the ‘x’ that appears in parentheses. It’s the value in the domain of f and of f’ where we want to evaluate the function. The other ‘x’ is the one in the lower side of the derivative, in that $\frac{df}{dx}$. That’s my sloppiness, but it’s not uniquely mine. Mathematicians keep this straight by using the symbols $\frac{df}{dx}$ so much they don’t even see the ‘x’ down there anymore so have no idea there’s anything to find confusing. Students keep this straight by guessing helplessly about what their instructors want and clinging to anything that doesn’t get marked down. Sorry. But what this means is to “take the derivative of the function ‘f’ with respect to its variable, and then, evaluate what that expression is for the value of ‘x’ that’s in parentheses on the left-hand side”. We can do some things that avoid the confusion in symbols there. They all require adding some more variables and some more notation in, and it looks like overkill for a measly definition like this.

Anyway. We really just want the derivative evaluated at one point, the point of expansion. That is:

$m = f'(a) = \frac{df}{dx}\left(a\right)$

which by the way avoids that overloaded meaning of ‘x’ there. Put this together and we have what we call the tangent line approximation to the original ‘f’ at the point of expansion:

$F^1(x) = f(a) + f'(a)\cdot\left(x - a\right)$

This is also called the tangent line, because it’s a line that’s tangent to the original function. A plot of ‘F1‘ and the original function ‘f’ are guaranteed to touch one another only at the point of expansion. They might happen to touch again, but that’s luck. The tangent line will be close to the original function near the point of expansion. It might happen to be close again later on, but that’s luck, not design. Most stuff you might want to do with the original function you can do with the tangent line, but the tangent line will be easier to work with. It exactly matches the original function at the point of expansion, and its first derivative exactly matches the original function’s first derivative at the point of expansion.
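The tangent line is easy to play with numerically. Here is a minimal sketch, using f(x) = sin(x) as a made-up stand-in for the original function and a = 0.5 as the point of expansion:

```python
import math

def f(x):
    return math.sin(x)     # a stand-in for the original function

def f_prime(x):
    return math.cos(x)     # its derivative, known exactly in this case

a = 0.5                    # point of expansion

def F1(x):
    # tangent line approximation: F1(x) = f(a) + f'(a) * (x - a)
    return f(a) + f_prime(a) * (x - a)

# Near the point of expansion the tangent line hugs the original function.
print(abs(f(0.51) - F1(0.51)))   # tiny
print(abs(f(1.5) - F1(1.5)))     # noticeably bigger
```

Swap in any function and its derivative you like; the pattern is the same.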

We can do better. We can find a parabola, a second-order polynomial that approximates the original function. This will be a function ‘F2(x)’ that looks something like:

$F^2(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 m_2 \left(x - a\right)^2$

What we’re doing is adding a parabola to the approximation. This is that curve that looks kind of like a loosely-drawn U. The ‘m2‘ there measures how spread out the U is. It’s not quite the slope, but it’s kind of like that, which is why I’m using the letter ‘m’ for it. Its value we get from the second derivative of the original ‘f’:

$m_2 = f''(a) = \frac{d^2f}{dx^2}\left(a\right)$

We find the second derivative of a function ‘f’ by evaluating the first derivative, and then, taking the derivative of that. We can denote it with two ‘ marks after the ‘f’ as long as we aren’t stuck wrapping the function name in ‘ marks to set it out. And so we can describe the function this way:

$F^2(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 f''(a) \left(x - a\right)^2$

This will be a better approximation to the original function near the point of expansion. Or it’ll make larger the region where the approximation is good.

If the first derivative of a function at a point is zero that means the tangent line is horizontal. In physics stuff this is an equilibrium. The second derivative can tell us whether the equilibrium is stable or not. If the second derivative at the equilibrium is positive it’s a stable equilibrium. The function looks like a bowl open at the top. If the second derivative at the equilibrium is negative then it’s an unstable equilibrium. The function looks like that bowl flipped upside-down.
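That sign test is simple enough to automate with a finite-difference estimate of the second derivative. A sketch, with two made-up potential functions standing in for real physics:

```python
def second_derivative(f, x, h=1e-4):
    # central finite-difference estimate of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def classify_equilibrium(f, x):
    # assumes f'(x) is already zero, i.e. x is an equilibrium
    d2 = second_derivative(f, x)
    if d2 > 0:
        return "stable"       # bowl opening upward
    if d2 < 0:
        return "unstable"     # bowl flipped over
    return "inconclusive"     # need higher derivatives to decide

print(classify_equilibrium(lambda x: x * x, 0.0))    # stable
print(classify_equilibrium(lambda x: -x * x, 0.0))   # unstable
```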

We can make better approximations yet, by using even more derivatives of the original function ‘f’ at the point of expansion:

$F^3(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 f''(a) \left(x - a\right)^2 + \frac{1}{3\cdot 2} f'''(a) \left(x - a\right)^3$

There’s better approximations yet. You can probably guess what the next, fourth-degree, polynomial would be. Or you can after I tell you the fraction in front of the new term will be $\frac{1}{4\cdot 3\cdot 2}$. The only big difference is that after about the third derivative we give up on adding ‘ marks after the function name ‘f’. It’s just too many little dots. We start writing, like, ‘f^(iv)’ instead. Or if the Roman numerals are too much then ‘f^(2038)’ instead. Or if we don’t want to pin things down to a specific value ‘f^(j)’ with the understanding that ‘j’ is some whole number.
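The whole family of approximations follows one pattern: the j-th term is the j-th derivative at the point of expansion, times (x - a) to the j-th power, divided by j factorial. A sketch of that pattern, again leaning on sin as a made-up stand-in since its derivatives at zero are easy to list by hand:

```python
import math

a = 0.0   # point of expansion

# derivatives of sin at 0 cycle through: sin, cos, -sin, -cos, ...
derivs_at_a = [0.0, 1.0, 0.0, -1.0] * 3   # f(a), f'(a), f''(a), ...

def taylor(x, order):
    # F^n(x) = sum over j of f^(j)(a) * (x - a)^j / j!
    return sum(derivs_at_a[j] * (x - a) ** j / math.factorial(j)
               for j in range(order + 1))

x = 0.8
for n in (1, 3, 5, 7):
    print(n, abs(math.sin(x) - taylor(x, n)))   # error shrinks as n grows
```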

We don’t need all of them. In physics problems we get equilibriums from the first derivative. We get stability from the second derivative. And we get springs in the second derivative too. And that’s what I hope to pick up on in the next installment of the main series.

• #### elkement (Elke Stangl) 4:20 pm on Wednesday, 7 June, 2017

This is great – I’ve just written a very short version of that (a much too succinct one) … as an half-hearted attempt to explain Taylor expansions that I need in an upcoming post. But now I won’t feel bad anymore about its incomprehensibility and simply link to this post of yours :-)

• #### Joseph Nebus 12:17 am on Friday, 9 June, 2017

Aw, goodness, thank you so. That’s so kind of you.

## Reading the Comics, May 31, 2017: Feast Week Edition

You know we’re getting near the end of the (United States) school year when Comic Strip Master Command orders everyone to clear out their mathematics jokes. I’m assuming that’s what happened here. Or else a lot of cartoonists had word problems on their minds eight weeks ago. Also eight weeks ago plus whenever they originally drew the comics, for those that are deep in reruns. It was busy enough to split this week’s load into two pieces and might have been worth splitting into three, if I thought I had publishing dates free for all that.

Larry Wright’s Motley Classics for the 28th of May, a rerun from 1989, is a joke about using algebra. Occasionally mathematicians try to use the ability of people to catch things in midair as evidence of the sort of differential-equation solving that we all can do, if imperfectly, in our heads. But I’m not aware of evidence that anyone does anything that sophisticated. I would be stunned if we didn’t really work by a process of making a guess of where the thing should be and refining it as time allows, with experience helping us make better guesses. There’s good stuff to learn in modeling how to catch stuff, though.

Also I want to say some very good words about Jantze’s graphical design. The mock textbook cover for the title panel on the left is so spot-on for a particular era in mathematics textbooks it’s uncanny. The all-caps Helvetica, the use of two slightly different tans, the minimalist cover art … I know shelves stuffed full in the university mathematics library where every book looks like that. Plus, “[Mathematics Thing] And Their Applications” is one of the roughly four standard approved mathematics book titles. He paid good attention to his references.

Gary Wise and Lance Aldrich’s Real Life Adventures for the 28th deploys a big old whiteboard full of equations for the “secret” of the universe. This makes a neat change from finding the “meaning” of the universe, or of life. The equations themselves look mostly like gibberish to me, but Wise and Aldrich make good use of their symbols. The symbol $\vec{B}$, a vector-valued quantity named B, turns up a lot. This symbol we often use to represent magnetic flux density. The B without a little arrow above it would represent the intensity of the magnetic field. Similarly an $\vec{H}$ turns up. This we often use for magnetic field strength. While I didn’t spot an $\vec{E}$ — electric field — which would be the natural partner to all this, there are plenty of bare E symbols. Those would represent electric potential. And many of the other symbols are what would naturally turn up if you were trying to model how something is tossed around by a magnetic field. Q, for example, is often the electric charge. ω is a common symbol for how fast an electromagnetic wave oscillates. (It’s not the frequency, but it’s related to the frequency.) The use of symbols is consistent enough, in fact, I wonder if Wise and Aldrich did use a legitimate sprawl of equations and I’m missing the referenced problem.

John Graziano’s Ripley’s Believe It Or Not for the 28th mentions how many symbols are needed to write out the numbers from 1 to 100. Is this properly mathematics? … Oh, who knows. It’s just neat to know.

Mark O’Hare’s Citizen Dog rerun for the 29th has the dog Fergus struggle against a word problem. Ordinary setup and everything, but I love the way O’Hare draws Fergus in that outfit and thinking hard.

The Eric the Circle rerun for the 29th by ACE10203040 is a mistimed Pi Day joke.

Bill Amend’s FoxTrot Classic for the 31st, a rerun from the 7th of June, 2006, shows the conflation of “genius” and “good at mathematics” in everyday use. Amend has picked a quixotic but in-character thing for Jason Fox to try doing. Euclid’s Fifth Postulate is one of the classic obsessions of mathematicians throughout history. Euclid admitted the thing — a confusing-reading mess of propositions — as a postulate because … well, there’s interesting geometry you can’t do without it, and there doesn’t seem any way to prove it from the rest of his geometric postulates. So it must be assumed to be true.

There isn’t a way to prove it from the rest of the geometric postulates, but it took mathematicians over two thousand years of work at that to be convinced of the fact. But I know I went through a time of wanting to try finding a proof myself. It was a mercifully short-lived time that ended in my humbly understanding that as smart as I figured I was, I wasn’t that smart. We can suppose Euclid’s Fifth Postulate to be false and get interesting geometries out of that, particularly the geometries of the surface of the sphere, and the geometry of general relativity. Jason will surely sometime learn.

• #### goldenoj 9:08 pm on Sunday, 4 June, 2017

Just found these recently. I really enjoy them and catching up is fun. Thanks!

• #### Joseph Nebus 1:05 am on Wednesday, 7 June, 2017

Thanks for finding the pieces. I hope you enjoy; they’re probably my most reliable feature around here.

## How May 2017 Treated My Mathematics Blog

The big news is that in May my mathematics blog crept back above a thousand page views. It had been a whole month since it had reached this threshold of purely imaginary significance. For what was a slow writing month — only twelve posts — marred by my computer dying and a nasty cold the final week, the numbers aren’t bad.

In May there were 1,029 pages viewed here. That’s up from April’s 994 and March’s 1,026. The number of unique visitors is down for the third month running, though, down to 662 from April’s 696 and March’s 699. The happy implication: people reading more posts as they visit. You know, liking my writing more.

I still feel like trying to rig up some compensation for that bizarre event back in … September 2015, wasn’t it? … when suddenly everybody’s statistics everywhere dropped and we blamed it on them no longer counting mobile devices. But if that were so, surely they’ve put them back? There’s no way the non-mobile-device readership is growing fast enough that these numbers should be about stable.

I’d think, anyway. There were 78 posts liked in May, down from April’s 90 and March’s 85. Not to pout or anything but WordPress does tell me that in June 2015 there were 518 likes around here and I can’t think, gosh, what was different then? … Well, it was one of my A To Z months, with posts 28 days of the month, and that usually encourages cross-reading. The number of comments just cratered, though: there were only 8 all month, down from 16 in April and 15 in March. Clearly I’m failing to encourage conversation and I don’t know how to turn that around.

According to Insights the most popular day for reading stuff was Thursday, with 16 percent of page views then. In April Sunday was the busiest day again with 16 percent of page views; in March it was 18 percent, on Tuesdays. I may give up on tracking this; obviously, each day is about equally likely to be the most popular. The most popular reading time was the hour of 6 pm, with 11 percent of page views coming before 7 pm. In April the same hour got 11 percent of page views again. In March it got 12 percent. I might experiment with the designated posting hour to find a more popular time, but obviously most people are going to read right after the thing is published.

So what was popular writing around here in May? I don’t want to say I knew this would happen, but one of the top five posts was one for which I wrote eleven words, and which I predicted to myself would be among the month’s top posts.

1. How Many Grooves Are On A Record’s Side? People want simple answers to their questions.
2. Reading the Comics, May 27, 2017: Panels Edition and I’m surprised this took the lead in the month’s Reading the Comics races, given how little time it had to do it.
3. How Many Trapezoids I Can Draw, as per the above comment about people wanting answers
4. Theorem Thursday: The Jordan Curve Theorem which I was thinking about at the mall on Thursday. Something or other made me think of it and how much I liked my description of how you prove the theorem.
5. Dabbing and the Pythagorean Theorem which, really, I should do more of, given how popular this kind of post is.

Now the roster of the 52 countries that sent me readers in May, and how many each of them did. Spoiler: the United States tops the list.

Country Views
United States 658
United Kingdom 38
Australia 28
Italy 23
India 19
Singapore 15
Slovenia 13
Turkey 13
Spain 12
South Africa 11
Austria 10
Switzerland 10
Denmark 7
Mexico 7
New Zealand 7
Puerto Rico 7
Philippines 6
Brazil 5
Oman 5
Russia 5
Sweden 5
Germany 4
Chile 3
France 3
Netherlands 3
European Union 2
Indonesia 2
Pakistan 2
Peru 2
Argentina 1 (*)
Bahamas 1
Belgium 1
Colombia 1
Czech Republic 1
Finland 1 (**)
Iceland 1
Israel 1
Japan 1
Nigeria 1
Poland 1
Portugal 1 (**)
Saudi Arabia 1
Slovakia 1
Sri Lanka 1
St. Kitts & Nevis 1
Taiwan 1
US Virgin Islands 1
Ukraine 1
Uruguay 1
Venezuela 1

There had been 45 countries sending readers in April and 56 in March. European Union makes its big return.

There were 21 single-reader countries in May, way up from April’s 10 but still down from March’s 26. Argentina was a single-reader country in April also. Finland and Portugal have been single-reader countries for three months.

The month starts with 49,247 page views from some 22,212 logged distinct visitors since WordPress started telling us about those. WordPress tells me also there are 662 followers on WordPress, people who’ve gone and clicked the ‘Follow On WordPress’ button at the top right of the page in the hopes that I’ll follow back and increase their readership count. We all know how the game works.

And then what are popular search terms bringing folks here? What you’d expect given the most popular posts.

• comics conversation
• how many grooves are on typical record or cd ? how they are arranged?
• origin is the gateway to your entire gaming universe.
• peacetips football prediction
• only yestetday dividing fractions
• animated rolling dice 7

Plus some 146 unknown search terms. I’d be interested to know what those are too.

Well, thanks all of you for being around for this. I hope it’s a good month ahead.

You know, the arrangement of CDs is probably an interesting subject. I love that sort of technical-detail stuff too. It’s probably only slightly mathematics but I bet I can find a pretext to include it here. If someone’s interested.

## Something Cute I Never Noticed Before About Infinite Sums

This is a trifle, for which I apologize. I’ve been sick. But I ran across this while reading Carl B Boyer’s The History of the Calculus and its Conceptual Development. This is from the chapter “A Century Of Anticipation”, developments leading up to Newton and Leibniz and The Calculus As We Know It. In particular, while working out the indefinite integrals for simple powers — x raised to a whole number — John Wallis, whom you’ll remember from such things as the first use of the ∞ symbol and beating up Thomas Hobbes for his lunch money, noted this:

$\frac{0 + 1}{1 + 1} = \frac{1}{2}$

Which is fine enough. But then Wallis also noted that

$\frac{0 + 1 + 2}{2 + 2 + 2} = \frac{1}{2}$

And furthermore that

$\frac{0 + 1 + 2 + 3}{3 + 3 + 3 + 3} = \frac{1}{2}$

$\frac{0 + 1 + 2 + 3 + 4}{4 + 4 + 4 + 4 + 4} = \frac{1}{2}$

$\frac{0 + 1 + 2 + 3 + 4 + 5}{5 + 5 + 5 + 5 + 5 + 5} = \frac{1}{2}$

And isn’t that neat? Wallis goes on to conclude that this is true not just for finitely many terms in the numerator and denominator, but also if you carry on infinitely far. This seems like a dangerous leap to make, but they treated infinities and infinitesimals dangerously in those days.

What makes this work is — well, it’s just true; explaining how that can be is kind of like explaining how it is circles have a center point. All right. But we can prove that this has to be true at least for finite terms. A sum like 0 + 1 + 2 + 3 is an arithmetic progression. It’s the sum of a finite number of terms, each of them an equal difference from the one before or the one after (or both).

Its sum will be equal to the number of terms times the arithmetic mean of the first and last. That is, it’ll be the number of terms times the sum of the first and the last terms, divided by two. So if we have the sum 0 + 1 + 2 + 3 + up to whatever number you like, which we’ll call ‘N’, we know its value has to be (N + 1) times N divided by 2. That takes care of the numerator.

The denominator, well, that’s (N + 1) cases of the number N being added together. Its value has to be (N + 1) times N. So the fraction is (N + 1) times N divided by 2, itself divided by (N + 1) times N. That’s got to be one-half except when N is zero. And if N were zero, well, that fraction would be 0 over 0 and we know what kind of trouble that is.
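Wallis’s pattern is easy to check by brute force, at least for finitely many terms. A quick sketch:

```python
def wallis_ratio(n):
    # (0 + 1 + ... + n) divided by (n + 1) copies of n added together
    numerator = sum(range(n + 1))
    denominator = (n + 1) * n
    return numerator / denominator

for n in (1, 2, 3, 4, 5, 100):
    print(n, wallis_ratio(n))   # always one-half
```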

It’s a tiny bit of mathematics, although you can use it to make an argument about what to expect from $\int{x^n dx}$, as Wallis did. And it delighted me to see and to understand why it should be so.

• #### elkement (Elke Stangl) 4:38 pm on Monday, 5 June, 2017

It reminds me of the famous story about young Gauss – when he baffled his teacher with this somewhat related ‘trick’ of adding up numbers between 1 to 100 very quickly (by actually calculating 101*50).

• #### Joseph Nebus 1:09 am on Wednesday, 7 June, 2017

That’s exactly what crossed my mind, especially as I realized I was doing the sum of 1 through 100 at least implicitly. It feels so playful to have something like that turn up.

## Reading the Comics, May 27, 2017: Panels Edition

Can’t say this was too fast or too slow a week for mathematically-themed comic strips. A bunch of the strips were panel comics, so that’ll do for my theme.

Norm Feuti’s Retail for the 21st mentions every (not that) algebra teacher’s favorite vague introduction to group theory, the Rubik’s Cube. Well, the ways you can rotate the various sides of the cube do form a group, which is something that acts like arithmetic without necessarily being numbers. And it gets into value judgements. There exist algorithms to solve Rubik’s cubes. Is it a show of intelligence that someone can learn an algorithm and solve any cube? — But then, how is solving a Rubik’s cube, with or without the help of an algorithm, a show of intelligence? At least of any intelligence more than the bit of spatial recognition that’s good for rotating cubes around?

Norm Feuti’s Retail for the 21st of May, 2017. A few weeks ago I ran across a book about the world of competitive Rubik’s Cube solving. I haven’t had the chance to read it, but am interested by the ways people form rules for what would seem like a naturally shapeless feature such as solving Rubik’s Cubes. Not featured: the early 80s Saturday morning cartoon that totally existed because somehow that made sense back then.

I don’t see that learning an algorithm for a problem is a lack of intelligence. No more than using a photo reference shows a lack of drawing skill. It’s still something you need to learn, and to apply, and to adapt to the cube as you have it to deal with. Anyway, I never learned any techniques for solving it either. Would just play for the joy of it. Here’s a page with one approach to solving the cube, if you’d like to give it a try yourself. Good luck.

Bob Weber Jr and Jay Stephens’s Oh, Brother! for the 22nd is a word-problem avoidance joke. It’s a slight thing to include, but the artwork is nice.

Brian and Ron Boychuk’s Chuckle Brothers for the 23rd is a very slight thing to include, but it’s looking like a slow week. I need something here. If you don’t see it then things picked up. They similarly tried sprucing things up the 27th, with another joke for taping onto the door.

Nate Fakes’s Break of Day for the 24th features the traditional whiteboard full of mathematics scrawls as a sign of intelligence. The scrawl on the whiteboard looks almost meaningful. The integral, particularly, looks like it might have been copied from a legitimate problem in polar or cylindrical coordinates. I say “almost” because while I think that some of the r symbols there are r’ I’m not positive those aren’t just stray marks. If they are r’ symbols, it’s the sort of integral that comes up when you look at surfaces of spheres. It would be the electric field of a conductive metal ball given some charge, or the gravitational field of a shell. These are tedious integrals to solve, but fortunately after you do them in a couple of introductory physics-for-majors classes you can just look up the answers instead.

Samson’s Dark Side of the Horse for the 26th is the Roman numerals joke for this installment. I feel like it ought to be a pie chart joke too, but I can’t find a way to make it one.

Izzy Ehnes’s The Best Medicine Cartoon for the 27th is the anthropomorphic numerals joke for this paragraph.

## Getting Into Shapes

This is, in part, a post for myself. They all are, but this is moreso. My day job includes some Geographic Information Services stuff, which is how we say “maps” when we want to be taken seriously as Information Technology professionals. When we make maps, what we really do is have a computer draw polygons, and then put dots on them. A common need is to put a dot in the middle of a polygon. Yes, this sounds silly, but describe your job this abstractly and see how it comes out.

The trouble is polygons can be complicated stuff. Can be, not are. If the polygon is, like, the border of your building’s property it’s probably not too crazy. It’s probably a rectangle, or at least a trapezoid. Maybe there’s a curved boundary. If you need a dot, such as to place the street address or a description of the property, you can make a good guess about where to put it so it’s inside the property and not too close to an edge.

But you can’t always. The polygons can be complicated. Especially if you’re representing stuff that reflects government or scientific or commercial interest. There’s good reasons to be interested in the boundaries between the low-low tide and the high-high tide lines of a beach, but that’s not going to look like anything simple for any realistic property. Finding a representative spot to fix labels or other business gets tricky.

So this crossed my Twitter feed and I’ll probably want to refer back to it at some point. It’s an algorithm, published last August by Vladimir Agafonkin at Mapbox, which uses some computation tricks to find a reasonable center.

The approach is, broadly, of a kind with many numerical methods. It tries to find an answer by taking a guess and then seeing if any obvious variations will make it a little better. If you can, then, repeat these variations. Eventually, usually, you’ll get to a pretty good answer. It may not be the exact best possible answer, but that’s all right. We accept that we’ll have a merely approximate answer, but we’ll get it more quickly than we otherwise would have. Often this is fine. Nobody will be upset that the label on a map would be “better” moved one pixel to the right if they get the map ten seconds faster. Optimization is often like that.
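I can’t vouch for the details of Agafonkin’s actual algorithm here, but the general guess-and-refine flavor fits in a few lines. A toy version: find the point of a unit square farthest from its boundary (which we know is the center) by searching a coarse grid of guesses, then re-searching a finer grid around the best one:

```python
def distance_to_boundary(x, y):
    # for the unit square, the nearest edge is whichever of these is smallest
    return min(x, 1.0 - x, y, 1.0 - y)

def refine_center(steps=8):
    best_x, best_y = 0.2, 0.3   # deliberately bad first guess
    half = 0.5                  # half-width of the current search window
    for _ in range(steps):
        # try a 5x5 grid of candidates around the current best guess
        candidates = [(best_x + dx * half / 2, best_y + dy * half / 2)
                      for dx in (-2, -1, 0, 1, 2)
                      for dy in (-2, -1, 0, 1, 2)]
        best_x, best_y = max(candidates,
                             key=lambda p: distance_to_boundary(*p))
        half /= 2   # shrink the search window and repeat
    return best_x, best_y

print(refine_center())   # homes in on (0.5, 0.5), the square's center
```

The answer isn’t exact, but each round of shrinking roughly halves the error, which is the trade the essay describes: a merely approximate answer, delivered quickly.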

I have not tried putting this code into mine yet; I’ve just now read it and I have some higher-priority tasks at work. But I’m hoping to remember that this exists and to see whether I can use it.

## Dabbing and the Pythagorean Theorem

The picture explains itself nicely. Just a thought on an average day.

I enjoyed this article from Fox Sports. Apparently, a French Precalculus textbook created a homework problem asking if football (soccer) superstar Paul Pogba is doing the perfect dab by creating two right triangles.


## Reading the Comics, May 20, 2017: Major Computer Malfunction Week Edition

I was hit by a massive computer malfunction this week, the kind that forced me to buy a new computer and spend half a week copying stuff over from a limping hard drive and hoping it would maybe work if I held things just right. Mercifully, Comic Strip Master Command gave me a relatively easy week. No huge rush of mathematically-themed comic strips and none that are going to take a thousand words of writing to describe. Let’s go.

Sam Hepburn’s Questionable Quotebook for the 14th includes this week’s anthropomorphic geometry sketch underneath its big text block.

Eric the Circle for the 15th, this one by “Claire the Square”, is the rare Eric the Circle to show off properties of circles. So maybe that’s the second anthropomorphic geometry sketch for the week. If the week hadn’t been dominated by my computer woes that might have formed the title for this edition.

Werner Wejp-Olsen’s Inspector Danger’s Crime Quiz for the 15th puts a mathematician in mortal peril and leaves him there to die. As is traditional for this sort of puzzle the mathematician left a dying clue. (Mathematicians were similarly kind to their investigators on the 4th of July, 2016 and the 9th of July, 2012.) I was expecting the answer to be someone with a four-letter and an eight-letter name, none of which anybody here had. Doesn’t matter. It’ll never stand up in court.

John Graziano’s Ripley’s Believe It Or Not for the 17th features one of those astounding claims that grows out of number theory. Graziano asserts that there are an astounding 50,613,244,155,051,856 ways to score exactly 100 points in (ten-pin) bowling. I won’t deny that this seems high to me. But partitioning a number — that is, taking a (positive) whole number and writing down the different ways one can add up (positive) whole numbers to get that sum — often turns up a lot of possibilities. That there should be many ways to get a score of 100 by adding between ten and twenty numbers that could be between zero and ten each, plus the possibility of adding pairs of the numbers (for spares) or trios of numbers (for strikes) makes this less astonishing.
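To get a feel for how fast these counts grow, here is a much simpler count than real bowling scoring: the number of ways twenty rolls, each knocking down zero through ten pins, can total exactly 100, ignoring spares, strikes, and frame rules entirely. A dynamic-programming sketch:

```python
def ways_to_total(rolls, pins_per_roll, target):
    # ways[t] = number of roll sequences seen so far that total t
    ways = [0] * (target + 1)
    ways[0] = 1
    for _ in range(rolls):
        new_ways = [0] * (target + 1)
        for t in range(target + 1):
            if ways[t]:
                for pins in range(min(pins_per_roll, target - t) + 1):
                    new_ways[t + pins] += ways[t]
        ways = new_ways
    return ways[target]

print(ways_to_total(20, 10, 100))   # a staggeringly large number
```

This isn’t the bowling figure, since real scoring rules change the count, but it shows why sums with many bounded terms pile up so many possibilities.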

Wikipedia led me to this page, from Balmoral Software, about all the different ways there are to score different numbers. The most surprising thing it reveals to me is that 100 isn’t even the score with the greatest number of ways to reach it. 77 is. There are 172,542,309,343,731,946 ways to score exactly 77 points. I agree this ought to make me feel better about my game. It doesn’t. It turns out there are, altogether, something like 5,726,805,883,325,784,576 possible different outcomes for a bowling game. And how we can tell that, given there’s no practical way to go and list all of them, is described at the end of the page.

The technique is called “divide and conquer”. There’s no way to list all the outcomes of ten frames of bowling, but there’s certainly a way to list all the outcomes of one. Or two. Or three. So, work out how many possible scores there would be in few enough frames you can handle that. Then combine these shortened games into one that’s the full ten frames. There’s some trouble in matching up the ends of the short games. A spare or a strike in the last frame of a shortened game means one has to account for the first or first two frames of the next one. But this is still an easier problem than the one we started with.

Bill Amend’s FoxTrot Classics for the 18th (rerun from the 25th of May, 2006) is your standard percentages and infinities joke. Really would have expected Paige’s mother to be wise to this game by now, but this sort of thing happens.

## Everything Interesting There Is To Say About Springs

I need another supplemental essay to get to the next part in Why Stuff Can Orbit. (Here’s the last part.) You probably guessed it’s about springs. They’re useful to know about. Why? That one killer Mystery Science Theater 3000 short, yes. But also because they turn up everywhere.

Not because there are literally springs in everything. Not with the rise in anti-spring political forces. But what makes a spring is a force that pushes something back where it came from. It pushes with a force that grows just as fast as the distance from where it came grows. Most anything that’s stable, that has some normal state which it tends to look like, acts like this. A small nudging away from the normal state gets met with some resistance. A bigger nudge meets bigger resistance. And most stuff that we see is stable. If it weren’t stable it would have broken before we got there.

(There are exceptions. Stable is, sometimes, about perspective. It can be that something is unstable but it takes so long to break that we don’t have to worry about it. Uranium, for example, is dying, turning slowly into stable elements like lead and helium. There will come a day there’s none left in the Earth. But it takes so long to break down that, barring surprises, the Earth will have broken down into something else first. And it may be that something is unstable, but it’s created by something that’s always going on. Oxygen in the atmosphere is always busy combining with other chemicals. But oxygen stays in the atmosphere because life keeps breaking it out of other chemicals.)

Now I need to put in some terms. Start with your thing. It’s on a spring, literally or metaphorically. Don’t care. If it isn’t being pushed in any direction then it’s at rest. Or it’s at an equilibrium. I don’t want to call this the ideal or natural state. That suggests some moral superiority to one way of existing over another, and how do I know what’s right for your thing? I can tell you what it acts like. It’s your business whether it should. Anyway, your thing has an equilibrium.

Next term is the displacement. It’s how far your thing is from the equilibrium. If it’s really a block of wood on a spring, like it is in high school physics, this displacement is how far the spring is stretched out. In equations I’ll represent this as ‘x’ because I’m not going to go looking deep for letters for something like this. What value ‘x’ has will change with time. This is what makes it a physics problem. If we want to make clear that ‘x’ does depend on time we might write ‘x(t)’. We might go all the way and start at the top of the page with ‘x = x(t)’, just in case.

If ‘x’ is a positive number it means your thing is displaced in one direction. If ‘x’ is a negative number it was displaced in the opposite direction. By ‘one direction’ I mean ‘to the right, or else up’. By ‘the opposite direction’ I mean ‘to the left, or else down’. Yes, you can pick any direction you like but why are you making life harder for everyone? Unless there’s something compelling about the setup of your thing that makes another choice make sense just go along with what everyone else is doing. Apply your creativity and iconoclasm where it’ll make your life better instead.

Also, we only have to worry about one direction. This might surprise you. If you’ve played much with springs you might have noticed how they’re three-dimensional objects. You can set stuff swinging back and forth in two directions at once. That’s all right. We can describe a two-dimensional displacement as a displacement in one direction plus a displacement perpendicular to that. And if there’s no such thing as friction, they won’t interact. We can pretend they’re two problems that happen to be running on the same spring at the same time. So here I declare: we can ignore friction and pretend it doesn’t matter. We don’t have to deal with more than one direction at a time.

(It’s not only friction. There’s problems about how energy gets transmitted between ways the thing can oscillate. This is what causes what starts out as a big whack in one direction to turn into a middling little circular wobbling. That’s a higher level physics than I want to do right now. So here I declare: we can ignore that and pretend it doesn’t matter.)

Whether your thing is displaced or not it’s got some potential energy. This can be as large or as small as you like, down to some minimum when your thing is at equilibrium. The potential energy we represent as a number named ‘U’ because of good reasons that somebody surely had. The potential energy of a spring depends on the square of the displacement. We can write its value as $U = \frac{1}{2} k x^2$. Here ‘k’ is a number known as the spring constant. It describes how strongly the spring reacts; the bigger ‘k’ is, the more any displacement’s met with a contrary force. It’ll be a positive number. ½ is that same old one-half that you know from ideas being half-baked or going-off being half-cocked.

Potential energy is great. If you can describe a physics problem with its energy you’re in good shape. It lets us bring physical intuition into understanding things. Imagine a bowl or a Habitrail-type ramp that’s got the cross-section of your potential energy. Drop a little marble into it. How the marble rolls? That’s what your thingy does in that potential energy.

Also we have mathematics. Calculus, particularly differential equations, lets us work out how the position of your thing will change. We need one more piece for this. That’s the momentum of your thing. Momentum is traditionally represented with the letter ‘p’. And now here’s how stuff moves when you know the potential energy ‘U’:

$\frac{dp}{dt} = - \frac{\partial U}{\partial x}$

Let me unpack that. $\frac{dp}{dt}$ — also known as $\frac{d}{dt}p$ if that looks better — is “the derivative of p with respect to t”. It means “how the value of the momentum changes as the time changes”. And that is equal to minus one times …

You might guess that $\frac{\partial U}{\partial x}$ — also written as $\frac{\partial}{\partial x} U$ — is some kind of derivative. The $\partial$ looks kind of like a cursive d, after all. It’s known as the partial derivative, because it means we look at how ‘U’ changes as ‘x’ and nothing else at all changes. With the normal, ‘d’ style full derivative, we have to track how all the variables change as the ‘t’ we’re interested in changes. In this particular problem the difference doesn’t matter. But there are problems where it does matter and that’s why I’m careful about the symbols.

So now we fall back on how to take derivatives. This gives us the equation that describes how the physics of your thing on a spring works:

$\frac{dp}{dt} = - k x$
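If you’d rather not trust my derivative-taking, a computer algebra system will do it for you. Here’s a sketch in Python using the sympy library (assuming you have it installed; the variable names are my own):

```python
import sympy

# Spring potential energy U = (1/2) k x^2, with k and x as symbols
x, k = sympy.symbols('x k', positive=True)
U = sympy.Rational(1, 2) * k * x**2

# The force is minus the derivative of U with respect to x
force = -sympy.diff(U, x)
print(force)  # -k*x
```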

You’re maybe underwhelmed. This is because we haven’t got any idea how the momentum ‘p’ relates to the displacement ‘x’. Well, we do, because I know and if you’re still reading at this point you know full well what momentum is. But let me make it official. Momentum is, for this kind of thing, the mass ‘m’ of your thing times how its position is changing, which is $\frac{dx}{dt}$. The mass of your thing isn’t changing. If you’re going to let it change then we’re doing some screwy rocket problem and that’s a different article. So it’s easy to get the momentum out of that problem. We get instead the second derivative of the displacement with respect to time:

$m\frac{d^2 x}{dt^2} = - kx$
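You don’t have to solve this equation to see what it does; you can step it forward in time numerically and watch. A crude sketch, with made-up numbers and a simple update rule (this is my own illustration, not anything the problem demands):

```python
import math

k, m = 1.0, 1.0          # made-up spring constant and mass
x, p = 1.0, 0.0          # start displaced one unit, at rest
dt = 1e-4                # small time step

# Step dp/dt = -k x and dx/dt = p/m forward for one period (T = 2*pi here)
for _ in range(int(2 * math.pi / dt)):
    p += -k * x * dt
    x += (p / m) * dt

# After a full period the thing is back about where it started
print(x, p)
```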

Fine, then. Does that tell us anything about what ‘x(t)’ is? Not yet, but I will now share with you one of the top secrets that only real mathematicians know. We will take a guess to what the answer probably is. Then we’ll see in what circumstances that answer could possibly be right. Does this seem ad hoc? Fine, so it’s ad hoc. Here is the secret of mathematicians:

It’s fine if you get your answer by any stupid method you like, including guessing and getting lucky, as long as you check that your answer is right.

Oh, sure, we’d rather you get an answer systematically, since a system might give us ideas how to find answers in new problems. But if all we want is an answer then, by definition, we don’t care where it came from. Anyway, we’re making a particular guess, one that’s very good for this sort of problem. Indeed, this guess is our system. A lot of guesses at solving differential equations use exactly this guess. Are you ready for my guess about what solves this? Because here it is.

We should expect that

$x(t) = C e^{r t}$

Here ‘C’ is some constant number, not yet known. And ‘r’ is some constant number, not yet known. ‘t’ is time. ‘e’ is that number 2.71828(etc) that always turns up in these problems. Why? Because its derivative is very easy to take, and if we have to take derivatives we want them to be easy to take. The first derivative of $Ce^{rt}$ with respect to ‘t’ is $r Ce^{rt}$. The second derivative with respect to ‘t’ is $r^2 Ce^{rt}$. So here’s what we have:

$m r^2 Ce^{rt} = - k Ce^{rt}$

What we’d like to find are the values for ‘C’ and ‘r’ that make this equation true. It’s got to be true for every value of ‘t’, yes. But this is actually an easy equation to solve. Why? Because the $C e^{rt}$ on the left side has to equal the $C e^{rt}$ on the right side. As long as they’re not equal to zero and hey, what do you know? $C e^{rt}$ can’t be zero unless ‘C’ is zero. So as long as ‘C’ is any number at all in the world except zero we can divide this ugly lump of symbols out of both sides. (If ‘C’ is zero, then this equation is 0 = 0 which is true enough, I guess.) What’s left?

$m r^2 = -k$

OK, so, we have no idea what ‘C’ is and we’re not going to have any. That’s all right. We’ll get it later. What we can get is ‘r’. You’ve probably got there already. There’s two possible answers:

$r = \pm\sqrt{-\frac{k}{m}}$

You might not like that. You remember that ‘k’ has to be positive, and if mass ‘m’ isn’t positive something’s screwed up. So what are we doing with the square root of a negative number? Yes, we’re getting imaginary numbers. Two imaginary numbers, in fact:

$r = \imath \sqrt{\frac{k}{m}}, r = - \imath \sqrt{\frac{k}{m}}$
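Here’s a quick check of that algebra with sympy, solving the characteristic equation $m r^2 = -k$ directly (again assuming you have sympy handy):

```python
import sympy

r = sympy.Symbol('r')
m, k = sympy.symbols('m k', positive=True)

# Solve m r^2 = -k for r; we expect the two imaginary roots
roots = sympy.solve(sympy.Eq(m * r**2, -k), r)
print(roots)

# Both roots should satisfy the original equation
for root in roots:
    assert sympy.simplify(m * root**2 + k) == 0
```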

Which is right? Both. In some combination, too. It’ll be a bit with that first ‘r’ plus a bit with that second ‘r’. In the differential equations trade this is called superposition. We’ll have information that tells us how much uses the first ‘r’ and how much uses the second.

You might still be upset. Hey, we’ve got these imaginary numbers here describing how a spring moves and while you might not be one of those high-price physicists you see all over the media you know springs aren’t imaginary. I’ve got a couple responses to that. Some are semantic. We only call these numbers “imaginary” because when we first noticed they were useful things we didn’t know what to make of them. The label is an arbitrary thing that doesn’t make any demands of the numbers. If we had called them, oh, “Cardanic numbers” instead would you be upset that you didn’t see any Cardanos in your springs?

My high-class semantic response is to ask in exactly what way is the “square root of minus one” any less imaginary than “three”? Can you give me a handful of three? No? Didn’t think so.

And then the practical response is: don’t worry. Exponentials raised to imaginary numbers do something amazing. They turn into sine waves. Well, sine and cosine waves. I’ll spare you just why. You can find it by looking at the first twelve or so posts of any pop mathematics blog and its article about how amazing Euler’s Formula is. Given that Euler published, like, 2,038 books and papers through his life and the fifty years after his death it took to clear the backlog you might think, “Euler had a lot of Formulas, right? Identities too?” Yes, he did, but you’ll know this one when you see it.

What’s important is that the displacement of your thing on a spring will be described by a function which looks like this:

$x(t) = C_1 e^{\imath\sqrt{\frac{k}{m}} t} + C_2 e^{-\imath\sqrt{\frac{k}{m}} t}$

for two constants, ‘C1’ and ‘C2’. These were the things we called ‘C’ back when we thought the answer might be $Ce^{rt}$; there’s two of them because there’s two r’s. I give you my word this is equivalent to a formula like this, but you can make me show my work if you must:

$x(t) = A \cos\left(\sqrt{\frac{k}{m}} t\right) + B \sin\left(\sqrt{\frac{k}{m}} t\right)$

for some (other) constants ‘A’ and ‘B’. Cosine and sine are the old things you remember from learning about cosine and sine.
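If you’d like to see the exponential and the sine-and-cosine forms agree without doing the algebra, here’s a numerical spot-check with made-up numbers. The matching rule, which you can verify from Euler’s Formula, is $C_1 = \frac{1}{2}(A - \imath B)$ and $C_2 = \frac{1}{2}(A + \imath B)$:

```python
import cmath
import math

k, m = 2.0, 0.5                  # made-up spring constant and mass
w = math.sqrt(k / m)             # sqrt(k/m)
A, B = 0.3, -1.2                 # arbitrary constants for the trig form

# Complex-exponential constants that match A and B
C1 = (A - 1j * B) / 2
C2 = (A + 1j * B) / 2

for t in (0.0, 0.7, 3.1):
    via_exp = C1 * cmath.exp(1j * w * t) + C2 * cmath.exp(-1j * w * t)
    via_trig = A * math.cos(w * t) + B * math.sin(w * t)
    assert abs(via_exp - via_trig) < 1e-12   # same function, two costumes
```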

OK, but what are ‘A’ and ‘B’?

Generically? We don’t care. Some numbers. Maybe zero. Maybe not. The pattern, how the displacement changes over time, will be the same whatever they are. It’ll be regular oscillation. At one time your thing will be as far from the equilibrium as it gets, and not moving toward or away from the center. At one time it’ll be back at the center and moving as fast as it can. At another time it’ll be as far away from the equilibrium as it gets, but on the other side. At another time it’ll be back at the equilibrium and moving as fast as it ever does, but the other way. How far is that maximum? What’s the fastest it travels?

The answer’s in how we started. If we start at the equilibrium without any kind of movement we’re never going to leave the equilibrium. We have to get nudged out of it. But what kind of nudge? There are three ways you can nudge something out.

You can tug it out some and let it go from rest. This is the easiest: then ‘A’ is however big your tug was and ‘B’ is zero.

You can let it start from equilibrium but give it a good whack so it’s moving at some initial velocity. This is the next-easiest: ‘A’ is zero, and ‘B’ is … no, not the initial velocity. You need to look at what the velocity of your thing is at the start. That’s the first derivative:

$\frac{dx}{dt} = -\sqrt{\frac{k}{m}} A \sin\left(\sqrt{\frac{k}{m}} t\right) + \sqrt{\frac{k}{m}} B \cos\left(\sqrt{\frac{k}{m}} t\right)$

The start is when time is zero because we don’t need to be difficult. When ‘t’ is zero the above velocity is $\sqrt{\frac{k}{m}} B$. So that product has to be the initial velocity. That’s not much harder.
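So, as a sketch with made-up numbers: ‘A’ is the starting displacement and ‘B’ is the starting velocity divided by $\sqrt{\frac{k}{m}}$. In Python:

```python
import math

k, m = 4.0, 1.0            # made-up spring constant and mass
w = math.sqrt(k / m)       # sqrt(k/m)
x0, v0 = 0.5, 2.0          # made-up starting displacement and velocity

A = x0                     # x(0) = A
B = v0 / w                 # x'(0) = w * B, so B = v0 / w

def x(t):
    return A * math.cos(w * t) + B * math.sin(w * t)

def v(t):
    return -w * A * math.sin(w * t) + w * B * math.cos(w * t)

# Check we reproduce the starting conditions
assert abs(x(0.0) - x0) < 1e-12
assert abs(v(0.0) - v0) < 1e-12
```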

The third case is when you start with some displacement and some velocity. A combination of the two. Then, ugh. You have to figure out ‘A’ and ‘B’ that make both the position and the velocity work out. That’s solving simultaneous equations, and not even hard equations. It’s more work is all. I’m interested in other stuff anyway.

Because, yeah, the spring is going to wobble back and forth. What I’d like to know is how long it takes to get back where it started. How long does a cycle take? Look back at that position function, for example. That’s all we need.

$x(t) = A \cos\left(\sqrt{\frac{k}{m}} t\right) + B \sin\left(\sqrt{\frac{k}{m}} t\right)$

Sine and cosine functions are periodic. They have a period of 2π. This means if you take the thing inside the parentheses after a sine or a cosine and increase it — or decrease it — by 2π, you’ll get the same value out. What’s the first time that the displacement and the velocity will be the same as their starting values? If they started at t = 0, then, they’re going to be back there at a time ‘T’ which makes true the equation

$\sqrt{\frac{k}{m}} T = 2\pi$

And that’s going to be

$T = 2\pi\sqrt{\frac{m}{k}}$

A maybe-surprising thing about this: the period doesn’t depend at all on how big the displacement is. That’s true for perfect springs, which don’t exist in the real world. You knew that. Imagine taking a Junior Slinky from the dollar store and sticking a block of something on one end. Imagine stretching it out to 500,000 times the distance between the Earth and Jupiter and letting go. Would it act like a spring or would it break? Yeah, we know. It’s sad. Think of the animated-cartoon joy a spring like that would produce.

But this period not depending on the displacement is true for small enough displacements, in the real world. Or for good enough springs. Or things that work enough like springs. By “true” I mean “close enough to true”. We can give that a precise mathematical definition, which turns out to be what you would mean by “close enough” in everyday English. The difference is it’ll have Greek letters included.
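Here’s a little numerical sketch of that amplitude-independence, with made-up numbers: whatever ‘A’ and ‘B’ are, waiting one period $T = 2\pi\sqrt{\frac{m}{k}}$ brings the displacement right back to where it was:

```python
import math

k, m = 3.0, 2.0                        # made-up spring constant and mass
w = math.sqrt(k / m)
T = 2 * math.pi * math.sqrt(m / k)     # the period

def x(t, A, B):
    return A * math.cos(w * t) + B * math.sin(w * t)

# Three very different amplitudes, same period every time
for A, B in ((1.0, 0.0), (0.2, 5.0), (-3.0, 1.5)):
    for t in (0.0, 1.3):
        assert abs(x(t + T, A, B) - x(t, A, B)) < 1e-9
```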

So to sum up: suppose we have something that acts like a spring. Then we know qualitatively how it behaves. It oscillates back and forth in a sine wave around the equilibrium. Suppose we know what the spring constant ‘k’ is. Suppose we also know ‘m’, which represents the inertia of the thing. If it’s a real thing on a real spring it’s mass. Then we know quantitatively how it moves. It has a period, based on this spring constant and this mass. And we can say how big the oscillations are based on how big the starting displacement and velocity are. That’s everything I care about in a spring. At least until I get into something wild like several springs wired together, which I am not doing now and might never do.

And, as we’ll see when we get back to orbits, a lot of things work close enough to springs.

• #### tkflor 8:13 pm on Saturday, 20 May, 2017

“I don’t want to call this the ideal or natural state. That suggests some moral superiority to one way of existing over another, and how do I know what’s right for your thing? ”
Why don’t you call it a “ground state”?


• #### Joseph Nebus 6:49 am on Friday, 26 May, 2017

You’re right. This is a ground state.

For folks just joining in, the “ground state” is what the system looks like when it’s got the least possible energy. At least the least energy consistent with it being a system at all. For a spring problem that’s the one where the thing is at rest, at the center, not displaced at all.

In a more complicated system you can have an equilibrium that’s stable and that isn’t the ground state. That isn’t the case here, but I wonder if thinking about that didn’t make me avoid calling it a ground state.


## Reading the Comics, May 13, 2017: Quiet Tuesday Through Saturday Edition

From the Sunday and Monday comics pages I was expecting another banner week. And then there was just nothing from Tuesday on, at least not among the comic strips I read. Maybe Comic Strip Master Command has ordered jokes saved up for the last weeks before summer vacation.

Tony Cochrane’s Agnes for the 7th is a mathematics anxiety strip. It’s well-expressed, since Cochrane writes this sort of hyperbole well. It also shows a common attitude that words and stories are these warm, friendly things, while mathematics and numbers are cold and austere. Perhaps Agnes is right to say some of the problem is familiarity. It’s surely impossible to go a day without words, if you interact with people or their legacies; to go without numbers … well, properly impossible. There’s too many things that have to be counted. Or places where arithmetic sneaks in, such as getting enough money to buy a thing. But those don’t seem to be the kinds of mathematics people get anxious about. Figuring out how much change, that’s different.

I suppose some of it is familiarity. It’s easier to dislike stuff you don’t do often. The unfamiliar is frightening, or at least annoying. And humans are story-oriented. Even nonfiction forms stories well. Mathematics … has stories, as do all human projects. But the mathematics itself? I don’t know. There’s just beautiful ingenuity and imagination in a lot of it. I’d just been thinking of the beautiful scheme for calculating logarithms from a short table. But it takes time to get to that beauty.

Gary Wise and Lance Aldrich’s Real Life Adventures for the 7th is a fractions joke. It might also be a joke about women concealing their ages. Or perhaps it’s about mathematicians expressing things in needlessly complicated ways. I think that’s less a mathematician’s trait than a common human trait. If you’re expert in a thing it’s hard to resist the puckish fun of showing that expertise off. Or just sowing confusion where one may.

Daniel Shelton’s Ben for the 8th is a kid-doing-arithmetic problem. Even I can’t squeeze some deeper subject meaning out of it, but it’s a slow week so I’ll include the strip anyway. Sorry.

Brian Boychuk and Ron Boychuk’s Chuckle Brothers for the 8th is the return of the anthropomorphic-geometry joke after what feels like months without. I haven’t checked how long it’s been without but I’m assuming you’ll let me claim that. Thank you.

• #### Joshua K. 4:53 am on Thursday, 18 May, 2017

Perhaps the father in the “Ben” strip, rather than snoring, was telling his son about the set of integers.


## Excuses, But Classed Up Some

Afraid I’m behind on resuming Why Stuff Can Orbit, mostly as a result of a power outage yesterday. It wasn’t a major one, but it did reshuffle all the week’s chores to yesterday when we could be places that had power, and kept me from doing as much typing as I wanted. I’m going to be riding this excuse for weeks.

So instead, here, let me pass this on to you.

It links to a post about the Legendre Transform, which is one of those cool advanced tools you get a couple years into a mathematics or physics major. It is, like many of these cool advanced tools, about solving differential equations. Differential equations turn up anytime the current state of something affects how it’s going to change, which is to say, anytime you’re looking at something not boring. It’s one of mathematics’s uses of “duals”, letting you swap between the function you’re interested in and what you know about how the function you’re interested in changes.

On the linked page, Jonathan Manton tries to present reasons behind the Legendre transform, in ways he likes better. It might not explain the idea in a way you like, especially if you haven’t worked with it before. But I find reading multiple attempts to explain an idea helpful. Even if one perspective doesn’t help, having a cluster of ideas often does.
