
  • Joseph Nebus 6:00 pm on Tuesday, 25 October, 2016 Permalink | Reply
    Tags: origami

    Reading the Comics, October 22, 2016: The Jokes You Can Make About Fractions Edition 

    Last week had a whole bundle and a half of mathematically-themed comics so let me finish off the set. Also let me refresh my appeal for words for my End Of 2016 Mathematics A To Z. There’s all sorts of letters not yet claimed; please think of a mathematical term and request it!

    David L Hoyt and Jeff Knurek’s Jumble for the 19th gives us a chance to do some word puzzle games again. If you like getting the big answer without doing the individual words then pay attention to the blackboard in the comic. Just saying.

    DUEGN O-O-O; NERDT OOO--; NINBUO OO--O-; MUURQO ---OO; The teacher was happy that those who did poorly on the math test were -----------.

    David L Hoyt and Jeff Knurek’s Jumble for the 19th of October, 2016. The link will probably expire in about a month. Have to say, it’s not a big class. I’m not surprised the students are doing well.

    Patrick J Marran’s Francis for the 20th features origami, as well as some of the more famous polyhedrons. The study of what shapes you can make from a flat sheet by origami processes — just folding, no cutting — is a neat one. Apparently origami geometry can be built out of seven axioms. I’m delighted to learn that the axioms were laid out as recently as 1992, with the exception of one that went unnoticed until 2002.

    Gabby describes her shape as an isocahedron, which must be a typo. We all make them. There are icosahedrons, which look like that figure, and I’ve certainly slipped consonants around that way.

    I’m surprised and delighted to find there are ways to make an origami icosahedron. Her figure doesn’t look much like the origami icosahedron of those instructions, but there are many icosahedrons. The name just means there are 20 faces to the polyhedron so there’s a lot of room for variants.

    If you were wondering, yes, the Francis of the title is meant to be the Pope. It’s kind of a Pope Francis fan comic. I cannot explain this phenomenon.

    Rick Detorie’s One Big Happy rerun for the 21st retells one of the standard jokes you can always make about fractions. Fortunately it uses that only as part of the setup, which shows off why I’ve long liked Detorie’s work. Good cartoonists — good writers — take a stock joke and add something to make it fit their characters.

    I’ve featured Richard Thompson’s Poor Richard’s Almanac rerun from the 21st before. I’ll surely feature it again. I just like Richard Thompson art like this. This is my dubious inclusion for this essay. In “What’s New At The Zoo” he tosses off a mention of chimpanzees now typing at 120 words per minute. A comic reference to the famous thought experiment of a monkey, or a hundred monkeys, or infinitely many monkeys given typewriters and time to write all the works of literature? Maybe. Or it might just be that it’s a funny idea. It is, of course.

    'Dad, will you check my math homework?' 'Um, it looks like you wrote two different answers to every problem. Shouldn't there be just one?' 'I like to increase my odds.'

    Rick Kirkman and Jerry Scott’s Baby Blues for the 22nd of October, 2016. I’m not quite curious enough to look, but do wonder how far into the comments you have to go before someone slags on the Common Core. But then I would say if Hammie were to write down first an initial-impression guess of about what the answer should be — say, that “37 + 42” should be a number somewhere around 80 — and then an exact answer, then that would be consistent with what I understand Common Core techniques encourage and a pretty solid approach.

    In Rick Kirkman and Jerry Scott’s Baby Blues for the 22nd Hammie offers multiple answers to each mathematics problem. “I like to increase my odds,” he says. For arithmetic problems, that’s not really helping. But it is often useful, especially in modeling complicated systems, to work out multiple answers. If you’re not sure how something should behave, and it’s troublesome to run experiments, then try to develop several different models. If the models all describe similar behavior, then, good! It’s reason to believe you’re probably right, or at least close to right. If the models disagree about their conclusions then you need information. You need experimental results. The ways your models disagree can inspire new experiments.

    Mark Leiknes’s Cow and Boy rerun for the 22nd is another with one of the standard jokes you can make about fractions. I suspect I’ve featured this before too, but I quite like Cow and Boy. It’s sad that the strip was cancelled, and couldn’t make a go of it as a web comic. I’m not surprised; the strip had so many running jokes it might as well have had a deer and an orca shooting rocket-propelled grenades at new readers. But it’s grand seeing the many, many, many running jokes as they were first established. This is part of the sequence in which Billy, the Boy of the title, discovers there’s another kid named Billy in the class, quickly dubbed Smart Billy for reasons the strip makes clear.

  • Joseph Nebus 6:00 pm on Sunday, 23 October, 2016 Permalink | Reply
    Tags: cosines, math anxiety

    Reading the Comics, October 19, 2016: An Extra Day Edition 

    I didn’t make noise about it, but last Sunday’s mathematics comic strip roundup was short one day. I was away from home and normal computer stuff Saturday. So I posted without that day’s strips under review. There was just the one, anyway.

    Also I want to remind folks I’m doing another Mathematics A To Z, and taking requests for words to explain. There are many appealing letters still unclaimed, including ‘A’, ‘T’, and ‘O’. Please put requests in over on that page, because it’s easier for me to keep track of what’s been claimed that way.

    Matt Janz’s Out of the Gene Pool rerun for the 15th missed last week’s cut. It does mention the Law of Cosines, which is what the Pythagorean Theorem looks like if you don’t have a right triangle. You still have to have a triangle. Bobby-Sue recites the formula correctly, if you know the notation. The formula’s c^2 = a^2 + b^2 - 2 a b \cos\left(C\right) . Here ‘a’ and ‘b’ and ‘c’ are the lengths of the sides of the triangle. ‘C’, the capital letter, is the size of the angle opposite the side with length ‘c’. That’s a common notation. ‘A’ would be the size of the angle opposite the side with length ‘a’. ‘B’ is the size of the angle opposite the side with length ‘b’. The Law of Cosines is a generalization of the Pythagorean Theorem. It’s a result that tells us something like the original theorem but for cases the original theorem can’t cover. And if it happens to be a right triangle the Law of Cosines gives us back the original Pythagorean Theorem. In a right triangle C is the size of a right angle, the cosine of that is 0, and the extra term drops out.
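    If you like seeing a formula turned into something you can poke at, here’s a little Python sketch of the Law of Cosines. (The function name is my own invention, not anything from the strip.)

```python
import math

def third_side(a, b, C):
    """Length of the side opposite angle C (in radians), given the other
    two sides, by the Law of Cosines: c^2 = a^2 + b^2 - 2 a b cos(C)."""
    return math.sqrt(a*a + b*b - 2*a*b*math.cos(C))

# When C is a right angle, cos(C) = 0 and we get the Pythagorean Theorem back:
third_side(3, 4, math.pi / 2)  # about 5, the classic 3-4-5 right triangle
```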

    That said Bobby-Sue is being fussy about the drawings. No geometrical drawing is ever perfectly right. The universe isn’t precise enough to let us draw a right triangle. Come to it we can’t even draw a triangle, not really. We’re meant to use these drawings to help us imagine the true, Platonic ideal, figure. We don’t always get there. Mock proofs, the kind of geometric puzzle showing something we know to be nonsense, rely on that. Give chalkboard art a break.

    Samson’s Dark Side of the Horse for the 17th is the return of Horace-counting-sheep jokes. So we get a π joke. I’m amused, although I couldn’t sleep trying to remember digits of π out quite that far. I do better working out Collatz sequences.
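    If you’d like to work out Collatz sequences yourself, the rule fits in a few lines of Python (a sketch of mine, purely for the curious):

```python
def collatz_sequence(n):
    """Start from n; halve it if it's even, triple-it-and-add-one if it's odd.
    Stop when (if!) it reaches 1. Nobody's proved it always does."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

collatz_sequence(6)  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```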

    Hilary Price’s Rhymes With Orange for the 19th at least shows the attempt to relieve mathematics anxiety. I’m sympathetic. It does seem like there should be ways to relieve this (or any other) anxiety, but finding which ones work, and which ones work best, is partly a mathematical problem. As often happens with Price’s comics I’m particularly tickled by the gag in the title panel.

    The Help Session ('Be sure to show your work'). 'It's simple --- if 3 deep breaths take 4.2 seconds, and your dread to confidence ratio is 2:1, how long will it take to alleviate your math anxiety?'

    Hilary Price’s Rhymes With Orange for the 19th of October, 2016. I don’t think there’s enough data given to solve the problem. But it’s a start at least. Start by making a note of it on your suspiciously large sheet of paper.

    Norm Feuti’s Gil rerun for the 19th builds on the idea calculators are inherently cheating on arithmetic homework. I’m sympathetic to both sides here. If Gil just wants to know that his answers are right there’s not much reason not to use a calculator. But if Gil wants to know that he followed the right process then the calculator’s useless. By the right process I mean, well, the work to be done. Did he start out trying to calculate the right thing? Did he pick an appropriate process? Did he carry out all the steps in that process correctly? If he made mistakes on any of those he probably didn’t get to the right answer, but it’s not impossible that he would. Sometimes multiple errors conspire and cancel one another out. That may not hurt you with any one answer, but it does mean you aren’t doing the problem right and a future problem might not be so lucky.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal rerun for the 19th has God crashing a mathematics course to proclaim there’s a largest number. We can suppose there is such a thing. That’s how arithmetic modulo a number is done, for one. It can produce weird results in which stuff we just naturally rely on doesn’t work anymore. For example, in ordinary arithmetic we know that if one number times another equals zero, then either the first number or the second, or both, were zero. We use this in solving polynomials all the time. But in arithmetic modulo 8 (say), 4 times 2 is equal to 0.
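    You can watch that familiar rule break down with a couple lines of Python, if you like; this is my own little check, not anything from the comic:

```python
# In arithmetic modulo 8, two nonzero numbers can multiply to zero,
# which ordinary arithmetic never allows.
print((4 * 2) % 8)  # prints 0

# Every pair of nonzero numbers (mod 8) whose product is zero:
zero_divisor_pairs = [(a, b) for a in range(1, 8) for b in range(1, 8)
                      if (a * b) % 8 == 0]
print(zero_divisor_pairs)  # [(2, 4), (4, 2), (4, 4), (4, 6), (6, 4)]
```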

    And if we recklessly talk about “infinity” as a number then we get outright crazy results, some of them teased in Weinersmith’s comic. “Infinity plus one”, for example, is “infinity”. So is “infinity minus one”. And “infinity minus infinity” can be “infinity”, or maybe zero, or really any number you want, depending on how you sneak up on it. We can avoid these logical disasters — so far, anyway — by being careful. We have to understand that “infinity” is not a number, though we can use numbers growing infinitely large.

    Induction, meanwhile, is a great, powerful, yet baffling form of proof. When it solves a problem it solves it beautifully. And easily, too, usually by doing something like testing two special cases. Maybe three. At least a couple special cases of whatever you want to know. But picking the cases, and setting them up so that the proof is valid, is not easy. There’s logical pitfalls and it is so hard to learn how to avoid them.
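    For anyone who’d like a concrete instance, here’s the classic first induction proof, the formula for the sum of the first n whole numbers, sketched out:

```latex
\text{Claim: } 1 + 2 + \cdots + n = \frac{n(n+1)}{2} \text{ for every positive integer } n.

\text{Base case } (n = 1): \quad 1 = \frac{1 \cdot 2}{2}. \text{ True.}

\text{Inductive step: suppose } 1 + 2 + \cdots + k = \frac{k(k+1)}{2} \text{. Then}

1 + 2 + \cdots + k + (k + 1) = \frac{k(k+1)}{2} + (k + 1) = \frac{(k+1)(k+2)}{2},

\text{which is the claim for } n = k + 1 \text{. By induction the claim holds for every } n.
```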

    Jon Rosenberg’s Scenes from a Multiverse for the 19th plays on a wonderful paradox of randomness. Randomness is … well, unpredictable. If I tried to sell you a sequence of random numbers and they were ‘1, 2, 3, 4, 5, 6, 7’ you’d be suspicious at least. And yet, perfect randomness will sometimes produce patterns. If there were no little patches of order we’d have reason to suspect the randomness was faked. There is no reason that a message like “this monkey evolved naturally” couldn’t be encoded into a genome by chance. It may just be so unlikely we don’t buy it. The longer the patch of order the less likely it is. And yet, incredibly unlikely things do happen. The study of impossibly unlikely events is a good way to quickly break your brain, in case you need one.
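    If you want to see the paradox in action, here’s a little experiment of mine in Python: flip a fair coin a thousand times and look for the longest patch of order.

```python
import random

def longest_run(bits):
    """Length of the longest streak of identical symbols in a sequence."""
    best = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(42)  # fixed seed so the experiment is repeatable
flips = [random.randint(0, 1) for _ in range(1000)]
# A fair 1000-flip sequence typically has a streak of nine or ten identical
# flips somewhere: a little patch of order, produced honestly at random.
print(longest_run(flips))
```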

  • Joseph Nebus 6:00 pm on Friday, 21 October, 2016 Permalink | Reply
    Tags: angular momentum, Kepler's Laws

    Why Stuff Can Orbit, Part 6: Circles and Where To Find Them 



    So now we can work out orbits. At least orbits for a central force problem. Those are ones where a particle — it’s easy to think of it as a planet — is pulled towards the center of the universe. How strong that pull is depends on some constants. But it only changes as the distance the planet is from the center changes.

    What we’d like to know is whether there are circular orbits. By “we” I mean “mathematical physicists”. And I’m including you in that “we”. If you’re reading this far you’re at least interested in knowing how mathematical physicists think about stuff like this.

    It’s easiest describing when these circular orbits exist if we start with the potential energy. That’s a function named ‘V’. We write it as ‘V(r)’ to show it’s an energy that changes as ‘r’ changes. By ‘r’ we mean the distance from the center of the universe. We’d use ‘d’ for that except we’re so used to thinking of distance from the center as ‘radius’. So ‘r’ seems more compelling. Sorry.

    Besides the potential energy we need to know the angular momentum of the planet (or whatever it is) moving around the center. The amount of angular momentum is a number we call ‘L’. It might be positive, it might be negative. Also we need the planet’s mass, which we call ‘m’. The angular momentum and mass let us write a function called the effective potential energy, ‘Veff(r)’.

    And we’ll need to take derivatives of ‘Veff(r)’. Fortunately that “How Differential Calculus Works” essay explains all the symbol-manipulation we need to get started. That part is calculus, but the easy part. We can just follow the rules already there. So here’s what we do:

    • The planet (or whatever) can have a circular orbit around the center at any radius which makes the equation \frac{dV_{eff}}{dr} = 0 true.
    • The circular orbit will be stable if the radius of its orbit makes the second derivative of the effective potential, \frac{d^2V_{eff}}{dr^2} , some number greater than zero.

    We’re interested in stable orbits because usually unstable orbits are boring. They might exist but any little perturbation breaks them down. The mathematician, ordinarily, sees this as a useless solution except in how it describes different kinds of orbits. The physicist might point out that sometimes it can take a long time, possibly millions of years, before the perturbation becomes big enough to stand out. Indeed, it’s an open question whether our solar system is stable. While it seems to have gone millions of years without any planet changing its orbit very much we haven’t got the evidence to say it’s impossible that, say, Saturn will be kicked out of the solar system anytime soon. Or worse, that Earth might be. “Soon” here means geologically soon, like, in the next million years.

    (If it takes so long for the instability to matter then the mathematician might allow that as “metastable”. There are a lot of interesting metastable systems. But right now, I don’t care.)

    I realize now I didn’t explain the notation for the second derivative before. It looks funny because that’s just the best we can work out. In that fraction \frac{d^2V_{eff}}{dr^2} the ‘d’ isn’t a number so we can’t cancel it out. And the superscript ‘2’ doesn’t mean squaring, at least not the way we square numbers. There’s a functional analysis essay in there somewhere. Again I’m sorry about this but there’s a lot of things mathematicians want to write out and sometimes we can’t find a way that avoids all confusion. Roll with it.

    So that explains the whole thing clearly and easily and now nobody could be confused and yeah I know. If my Classical Mechanics professor left it at that we’d have open rebellion. Let’s do an example.

    There are two and a half good examples. That is, they’re central force problems with answers we know. One is gravitation: we have a planet orbiting a star that’s at the origin. Another is springs: we have a mass that’s connected by a spring to the origin. And the half is electric: put a positive electric charge at the center and have a negative charge orbit that. The electric case is only half a problem because it’s the same as the gravitation problem except for what the constants involved are. Electric charges attract each other crazy way stronger than gravitational masses do. But that doesn’t change the work we do.

    This is a lie. Electric charges accelerating, and just orbiting counts as accelerating, cause electromagnetic effects to happen. They give off light. That’s important, but it’s also complicated. I’m not going to deal with that.

    I’m going to do the gravitation problem. After all, we know the answer! By Kepler’s something law, something something radius cubed something G M … something … squared … After all, we can look up the answer!

    The potential energy for a planet orbiting a sun looks like this:

    V(r) = - G M m \frac{1}{r}

    Here ‘G’ is a constant, called the Gravitational Constant. It’s how strong gravity in the universe is. It’s not very strong. ‘M’ is the mass of the sun. ‘m’ is the mass of the planet. To make sense ‘M’ should be a lot bigger than ‘m’. ‘r’ is how far the planet is from the sun. And yes, that’s one-over-r, not one-over-r-squared. This is the potential energy of the planet being at a given distance from the sun. One-over-r-squared gives us how strong the force attracting the planet towards the sun is. Different thing. Related thing, but different thing. Just listing all these quantities one after the other means ‘multiply them together’, because mathematicians multiply things together a lot and get bored writing multiplication symbols all the time.
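    The link between the two is that the force is minus the derivative of the potential energy. Here’s a quick numerical check of that, in Python, with all the constants set to 1 for convenience (my own sanity check, nothing more):

```python
def V(r, G=1.0, M=1.0, m=1.0):
    """Gravitational potential energy, -G M m / r (illustrative unit constants)."""
    return -G * M * m / r

def force(r, h=1e-6):
    """Force as minus the numerical derivative of the potential energy."""
    return -(V(r + h) - V(r - h)) / (2 * h)

# At r = 2 this should match the inverse-square law, -G M m / r^2 = -0.25:
print(force(2.0))
```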

    Now for the effective potential we need to toss in the angular momentum. That’s ‘L’. The effective potential energy will be:

    V_{eff}(r) = - G M m \frac{1}{r} + \frac{L^2}{2 m r^2}

    I’m going to rewrite this in a way that means the same thing, but that makes it easier to take derivatives. At least easier to me. You’re on your own. But here’s what looks easier to me:

    V_{eff}(r) = - G M m r^{-1} + \frac{L^2}{2 m} r^{-2}

    I like this because it makes every term here look like “some constant number times r to a power”. That’s easy to take the derivative of. Check back on that “How Differential Calculus Works” essay. The first derivative of this ‘Veff(r)’, taken with respect to ‘r’, looks like this:

    \frac{dV_{eff}}{dr} = -(-1) G M m r^{-2} -2\frac{L^2}{2m} r^{-3}

    We can tidy that up a little bit: -(-1) is another way of writing 1. The second term has two times something divided by 2. We don’t need to be that complicated. In fact, when I worked out my notes I went directly to this simpler form, because I wasn’t going to be thrown by that. I imagine I’ve got people reading along here who are watching these equations warily, if at all. They’re ready to bolt at the first sign of something terrible-looking. There’s nothing terrible-looking coming up. All we’re doing from this point on is really arithmetic. It’s multiplying or adding or otherwise moving around numbers to make the equation prettier. It happens we only know those numbers by cryptic names like ‘G’ or ‘L’ or ‘M’. You can go ahead and pretend they’re ‘4’ or ‘5’ or ‘7’ if you like. You know how to do the steps coming up.

    So! We allegedly can have a circular orbit when this first derivative is equal to zero. What values of ‘r’ make this equation true?

    G M m r^{-2} - \frac{L^2}{m} r^{-3} = 0

    Not so helpful there. What we want is to have something like ‘r = (mathematics stuff here)’. We have to do some high school algebra moving-stuff-around to get that. So one thing we can do to get closer is add the quantity \frac{L^2}{m} r^{-3} to both sides of this equation. This gets us:

    G M m r^{-2} = \frac{L^2}{m} r^{-3}

    Things are getting better. Now multiply both sides by the same number. Which number? r^3. That’s because ‘r^{-3}’ times ‘r^{3}’ is going to equal 1, while ‘r^{-2}’ times ‘r^{3}’ will equal ‘r^{1}’, which normal people call ‘r’. I kid; normal people don’t think of such a thing at all, much less call it anything. But if they did, they’d call it ‘r’. We’ve got:

    G M m r = \frac{L^2}{m}

    And now we’re getting there! Divide both sides by whatever number ‘G M m’ is, as long as it isn’t zero. And then we have our circular orbit! It’s at the radius

    r = \frac{L^2}{G M m^2}

    Very good. I’d even say pretty. It’s got all those capital letters and one little lowercase. Something squared in the numerator and the denominator. Aesthetically pleasant. Stinks a little that it doesn’t look like anything we remember from Kepler’s Laws once we’ve looked them up. We can fix that, though.
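    If you’d rather let a computer do the symbol-shuffling, here’s the same calculation run through Python’s sympy library. This is a check of mine, not part of the original derivation:

```python
import sympy as sp

G, M, m, L, r = sp.symbols('G M m L r', positive=True)

# The effective potential energy from above:
V_eff = -G * M * m / r + L**2 / (2 * m * r**2)

# Circular orbits happen where the first derivative is zero:
r_circ = sp.solve(sp.Eq(sp.diff(V_eff, r), 0), r)[0]
print(r_circ)  # the radius L^2 / (G M m^2), matching the hand calculation
```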

    The key is the angular momentum ‘L’ there. I haven’t said anything about how that number relates to anything. It’s just been some constant of the universe. In a sense that’s fair enough. Angular momentum is conserved, exactly the same way energy is conserved, or the way linear momentum is conserved. Why not just let it be whatever number it happens to be?

    (A note for people who skipped earlier essays: Angular momentum is not a number. It’s really a three-dimensional vector. But in a central force problem with just one planet moving around we aren’t doing any harm by pretending it’s just a number. We set it up so that the angular momentum is pointing directly out of, or directly into, the sheet of paper we pretend the planet’s orbiting in. Since we know the direction before we even start work, all we have to care about is the size. That’s the number I’m talking about.)

    The angular momentum of a thing is its moment of inertia times its angular velocity. I’m glad to have cleared that up for you. The moment of inertia of a thing describes how easy it is to start it spinning, or stop it spinning, or change its spin. It’s a lot like inertia. What it is depends on the mass of the thing spinning, and how that mass is distributed, and what it’s spinning around. It’s the first part of physics that makes the student really have to know volume integrals.

    We don’t have to know volume integrals. A single point mass spinning at a constant speed at a constant distance from the origin is the easy angular momentum to figure out. A mass ‘m’ at a fixed distance ‘r’ from the center of rotation moving at constant speed ‘v’ has an angular momentum of ‘m’ times ‘r’ times ‘v’.

    So great; we’ve turned ‘L’ which we didn’t know into ‘m r v’, where we know ‘m’ and ‘r’ but don’t know ‘v’. We’re making progress, I promise. The planet’s tracing out a circle in some amount of time. It’s a circle with radius ‘r’. So it traces out a circle with perimeter ‘2 π r’. And it takes some amount of time to do that. Call that time ‘T’. So its speed will be the distance travelled divided by the time it takes to travel. That’s \frac{2 \pi r}{T} . Again we’ve changed one unknown number ‘L’ for another unknown number ‘T’. But at least ‘T’ is an easy familiar thing: it’s how long the orbit takes.

    Let me show you how this helps. Start off with what ‘L’ is:

    L = m r v = m r \frac{2\pi r}{T} = 2\pi m \frac{r^2}{T}

    Now let’s put that into the equation I got eight paragraphs ago:

    r = \frac{L^2}{G M m^2}

    Remember that one? Now put what I just said ‘L’ was, in where ‘L’ shows up in that equation.

    r = \frac{\left(2\pi m \frac{r^2}{T}\right)^2}{G M m^2}

    I agree, this looks like a mess and possibly a disaster. It’s not so bad. Do some cleaning up on that numerator.

    r = \frac{4 \pi^2 m^2}{G M m^2} \frac{r^4}{T^2}

    That’s looking a lot better, isn’t it? We even have something we can divide out: the mass of the planet is just about to disappear. This sounds bizarre, but remember Kepler’s laws: the mass of the planet never figures into things. We may be on the right path yet.

    r = \frac{4 \pi^2}{G M} \frac{r^4}{T^2}

    OK. Now I’m going to multiply both sides by ‘T^2’ because that’ll get that out of the denominator. And I’ll divide both sides by ‘r’ so that I only have the radius of the circular orbit on one side of the equation. Here’s what we’ve got now:

    T^2 = \frac{4 \pi^2}{G M} r^3

    And hey! That looks really familiar. A circular orbit’s radius cubed is some multiple of the square of the orbit’s time. Yes. This looks right. At least it looks reasonable. Someone else can check if it’s right. I like the look of it.
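    As a sanity check of my own (the figures are rough textbook values, not from the essay): plug the Sun’s mass and the Earth’s orbital radius into that formula and see whether a familiar time pops out.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 / (kg s^2)
M = 1.989e30       # mass of the Sun, kg
r = 1.496e11       # Earth's (roughly circular) orbital radius, m

# T^2 = (4 pi^2 / (G M)) r^3, so:
T = 2 * math.pi * math.sqrt(r**3 / (G * M))
print(T / 86400)   # about 365 days, reassuringly
```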

    So this is the process you’d use to start understanding orbits for your own arbitrary potential energy. You can find the equivalent of Kepler’s Third Law, the one connecting orbit times and orbit radiuses. And it isn’t really hard. You need to know enough calculus to differentiate one function, and then you need to be willing to do a pile of arithmetic on letters. It’s not actually hard. Next time I hope to talk about more and different … um …

    I’d like to talk about the different … oh, dear. Yes. You’re going to ask about that, aren’t you?

    Ugh. All right. I’ll do it.

    How do we know this is a stable orbit? Well, it just is. If it weren’t the Earth wouldn’t have a Moon after all this. Heck, the Sun wouldn’t have an Earth. At least it wouldn’t have a Jupiter. If the solar system is unstable, Jupiter is probably the most stable part. But that isn’t convincing. I’ll do this right, though, and show what the second derivative tells us. It tells us this is too a stable orbit.

    So. The thing we have to do is find the second derivative of the effective potential. This we do by taking the derivative of the first derivative. Then we have to evaluate this second derivative and see what value it has for the radius of our circular orbit. If that’s a positive number, then the orbit’s stable. If that’s a negative number, then the orbit’s not stable. This isn’t hard to do, but it isn’t going to look pretty.

    First the pretty part, though. Here’s the first derivative of the effective potential:

    \frac{dV_{eff}}{dr} = G M m r^{-2} - \frac{L^2}{m} r^{-3}

    OK. So the derivative of this with respect to ‘r’ isn’t hard to evaluate again. This is again a function with a bunch of terms that are all a constant times r to a power. That’s the easiest sort of thing to differentiate that isn’t just something that never changes.

    \frac{d^2 V_{eff}}{dr^2} = -2 G M m r^{-3} - (-3)\frac{L^2}{m} r^{-4}

    Now the messy part. We need to work out what that line above is when our planet’s in our circular orbit. That circular orbit happens when r = \frac{L^2}{G M m^2} . So we have to substitute that mess in for ‘r’ wherever it appears in that above equation and you’re going to love this. Are you ready? It’s:

    -2 G M m \left(\frac{L^2}{G M m^2}\right)^{-3} + 3\frac{L^2}{m}\left(\frac{L^2}{G M m^2}\right)^{-4}

    This will get a bit easier promptly. That’s because something raised to a negative power is the same as its reciprocal raised to the positive of that power. So that terrible, terrible expression is the same as this terrible, terrible expression:

    -2 G M m \left(\frac{G M m^2}{L^2}\right)^3 + 3 \frac{L^2}{m}\left(\frac{G M m^2}{L^2}\right)^4

    Yes, yes, I know. Only thing to do is start hacking through all this because I promise it’s going to get better. Putting all those third- and fourth-powers into their parentheses turns this mess into:

    -2 G M m \frac{G^3 M^3 m^6}{L^6} + 3 \frac{L^2}{m} \frac{G^4 M^4 m^8}{L^8}

    Yes, my gut reaction when I see multiple things raised to the eighth power is to say I don’t want any part of this either. Hold on another line, though. Things are going to start cancelling out and getting shorter. Group all those things-to-powers together:

    -2 \frac{G^4 M^4 m^7}{L^6} + 3 \frac{G^4 M^4 m^7}{L^6}

    Oh. Well, now this is different. The second derivative of the effective potential, at this point, is the number

    \frac{G^4 M^4 m^7}{L^6}

    And I admit I don’t know what number that is. But here’s what I do know: ‘G’ is a positive number. ‘M’ is a positive number. ‘m’ is a positive number. ‘L’ might be positive or might be negative, but ‘L^6’ is a positive number either way. So this is a bunch of positive numbers multiplied and divided together.

    So this second derivative, whatever it is, must be a positive number. And so this circular orbit is stable. Give the planet a little nudge and that’s all right. It’ll stay near its orbit. I’m sorry to put you through that but some people raised the, honestly, fair question.
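    And here’s that second-derivative check run through sympy, in case you don’t trust my hand arithmetic. (I wouldn’t blame you.)

```python
import sympy as sp

G, M, m, L, r = sp.symbols('G M m L r', positive=True)
V_eff = -G * M * m / r + L**2 / (2 * m * r**2)

# Second derivative, evaluated at the circular-orbit radius L^2 / (G M m^2):
second = sp.diff(V_eff, r, 2).subs(r, L**2 / (G * M * m**2))
print(sp.simplify(second))  # G^4 M^4 m^7 / L^6: positive, so the orbit's stable
```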

    So this is the process you’d use to start understanding orbits for your own arbitrary potential energy. You can find the equivalent of Kepler’s Third Law, the one connecting orbit times and orbit radiuses. And it isn’t really hard. You need to know enough calculus to differentiate one function, and then you need to be willing to do a pile of arithmetic on letters. It’s not actually hard. Next time I hope to talk about the other kinds of central forces that you might get. We only solved one problem here. We can solve way more than that.

    • howardat58 6:18 pm on Friday, 21 October, 2016 Permalink | Reply

      I love the chatty approach.


      • Joseph Nebus 5:03 am on Saturday, 22 October, 2016 Permalink | Reply

        Thank you. I realized doing Theorem Thursdays over the summer that it was hard to avoid that voice, and then that it was fun writing in it. So eventually I do learn, sometimes.


  • Joseph Nebus 6:00 pm on Tuesday, 18 October, 2016 Permalink | Reply
    Tags: y

    The End Of 2016 Mathematics A To Z: Any Requests? 

    I have fun with these, and they’re surprisingly easy to write once I get started. So I’d like to send November and December out with another of my Mathematics A To Z projects. To that end I’m open for suggestions. Have you got a mathematical term that you’d like to dare me to explain? I’m happy to give it a try. Please leave a comment here with the word, or words, you’d like to suggest. And please send folks who might like a couple paragraphs on some mathematical term over this way. I’ll need help filling out the alphabet, especially, based on past experience, the letter ‘y’. There’s not enough mathematical terms starting with ‘y’ to make it easy for me. I’ll need help.

    I usually take this as a first-come, first-serve sort of thing. But I reserve the right to fiddle with synonyms or alternate phrasings if I’m intrigued by something whose letter was already taken. I’ll try to update the roster here with what’s taken and what’s free.

    Here’s the roundup for the Summer 2015 A To Z, and then here’s the roundup for the Leap Day 2016 A To Z. So those cover words I’ve already gotten to. And now I’ve discovered I was inconsistent about whether to use the WordPress tag ‘atoz’ or ‘a-to-z’ and don’t think that isn’t going to drive me crazy.

    Here I’ll try to keep an updated roster of what letters have gotten knocked out:

    A, B, D, E, I, J, K, L, R, S, T, U, V, W

    Is there anything sure to get a word explained by me? Not sure. Picking a letter that’s free helps considerably. I am so going to need a ‘y’ word. Picking a letter early helps. I’m not sure whether it’s better to pick a word that’s got broad applicability or a specialized term. A broadly applicable word tends to be an important one. A specialized term lets me dig into a field I might not actually know much about. That can be fun. Mostly, though, pick the lousy letters of the alphabet. ‘x’ and ‘z’ don’t make things easy for me either. ‘q’ could use some help too.

    • howardat58 6:15 pm on Tuesday, 18 October, 2016 Permalink | Reply

      Cantor’s middle third
      (that would be C)


    • gaurish 5:07 pm on Wednesday, 19 October, 2016 Permalink | Reply

      Normal numbers: I came across it while studying work of Erdős, and wanted to write a post explaining it but could not put it together.


    • gaurish 5:37 pm on Wednesday, 19 October, 2016 Permalink | Reply

      Quotient groups, monster groups, zero or zermelo-fraenkel axioms, Yang Hui’s Triangle: Primitive form of Pascal’s Triangle, commonly known in China; XOR Boolean operation.


    • elkement (Elke Stangl) 7:30 am on Friday, 21 October, 2016 Permalink | Reply

      General Covariance (as used in general relativity), Tensor or Metric Tensor – depending on which of them you had already covered before. Apologies for not cross-checking with previous series ;-)


      • Joseph Nebus 5:14 am on Saturday, 22 October, 2016 Permalink | Reply

        General covariance I can do! If I can understand it myself. I confess tensors are one of those subjects I never got properly trained in, and that I’ve tried repeatedly to learn without ever mastering. But something like this is a good way to make me buckle down and work.

        Liked by 1 person

    • howardat58 6:46 pm on Sunday, 23 October, 2016 Permalink | Reply

      Osculating circle


  • Joseph Nebus 6:00 pm on Sunday, 16 October, 2016 Permalink | Reply
    Tags: beauty, , , ,   

    Reading the Comics, October 14, 2016: Classics Edition 

    The mathematically-themed comic strips of the past week tended to touch on some classic topics and classic motifs. That’s enough for me to declare a title for these comics. Enjoy, won’t you please?

    John McPherson’s Close To Home for the 9th uses the classic board full of mathematics to express deep thinking. And it’s deep thinking about sports. Nerds like to dismiss sports as trivial and so we get the punch line out of this. But models of sports have been one of the biggest growth fields in mathematics the past two decades. And they’ve shattered many longstanding traditional understandings of strategy. It’s not proper mathematics on the board, but that’s all right. It’s not proper sabermetrics either.

    'The mathematical formula seems to be not only correct, but beautiful and elegant as well. However, is there any way we can cut it down to 140 characters or less?'

    Vic Lee’s Pardon My Planet for the 10th of October, 2016. Follow-up questions: why does the scientist have a spoon in his ear, and why are they standing outside the door marked ‘Research Laboratory’? And are they trying to pick a fight with people who’d say it should be ‘140 characters or fewer’? Because I’m happy to see them fight it out, I admit.

Vic Lee’s Pardon My Planet for the 10th is your classic joke about putting mathematics in marketable terms. There is an idea that a mathematical idea, to be really good, must be beautiful. And it’s hard to say exactly what beauty is, but “short” and “simple” seem to be parts of it. That’s a fine idea, as long as you don’t forget how context-laden these are. Whether an idea is short depends on what ideas and what concepts you have as background. Whether it’s simple depends on how much you’ve seen similar ideas before. π looks simple. “The smallest positive root of the solution to the differential equation y''(x) = -y(x) where y(0) = 0 and y'(0) = 1” looks hard. But unless you work the problem out yourself, rhetoric alone tells you those are the same thing.
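If you don’t want to take the rhetoric on faith, a little numerical check can back it up. This is my own sketch, not anything from the strip: integrate the differential equation with a standard Runge-Kutta step and look for where the solution first crosses zero again. The exact solution is sin(x), so the first positive root should be π.

```python
import math

def first_positive_root(h=1e-4):
    """Integrate y'' = -y with y(0) = 0, y'(0) = 1 by fourth-order
    Runge-Kutta, and return the first x > 0 where y crosses zero."""
    x, y, v = 0.0, 0.0, 1.0        # v is y'
    def f(y, v):                   # rewrite as the system (y, v)' = (v, -y)
        return v, -y
    while True:
        k1y, k1v = f(y, v)
        k2y, k2v = f(y + 0.5 * h * k1y, v + 0.5 * h * k1v)
        k3y, k3v = f(y + 0.5 * h * k2y, v + 0.5 * h * k2v)
        k4y, k4v = f(y + h * k3y, v + h * k3v)
        y_new = y + (h / 6) * (k1y + 2 * k2y + 2 * k3y + k4y)
        v_new = v + (h / 6) * (k1v + 2 * k2v + 2 * k3v + k4v)
        if y > 0.0 >= y_new:       # sign change: the root is in this step
            return x + h * y / (y - y_new)   # linear interpolation
        x, y, v = x + h, y_new, v_new

print(first_positive_root())       # very close to 3.14159...
```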

    Scott Hilburn’s The Argyle Sweater for the 10th is your classic anthropomorphic-numerals joke. Well, anthropomorphic-symbols in this case. But it’s the same genre of joke.

    Randy Glasbergen’s Glasbergen Cartoons rerun for the 10th is your classic sudoku-and-arithmetic-as-hard-work joke. And it’s neat to see “programming a VCR” used as an example of the difficult-to-impossible task for a comic strip drawn late enough that it’s into the era of flat-screen, flat-bodied desktop computers.

Bill Holbrook’s On The Fastrack for the 11th is your classic grumbling-about-how-mathematics-is-understood joke. Well, statistics, but most people consider that part of mathematics. (One could mount a strong argument that statistics is as independent of mathematics as physics or chemistry are.) Statistics offers many chances for intellectual mischief, whether deliberately or just from not thinking matters through. That may be inevitable. Sampling, as in political surveys, must talk about distributions, about ranges of possible results. It’s hard to be flawless about that.

    Fi's Math Speech: 'Say you have a political poll with a three-point margin of error. If the poll says one candidate is ahead by three, they always say the poll is TIED ... even when it's just as likely the lead is actuallY SIX!' 'What kind of math is that?' 'Cable news math.'

    Bill Holbrook’s On The Fastrack for the 11th of October, 2016. I don’t know that anyone is going around giving lectures as ‘The Weapon Of Math Instruction’ but it sure seems like somebody ought to be. Then we can get that joke about the mathematician being kicked off an airplane flight out of my Twitter timeline.

That said, I’m not sure I can agree with Fi here. Take her example: a political poll with a three-point margin of error. If the poll says one candidate’s ahead by three points, Fi asserts, they’ll say it’s tied when it’s as likely the lead is six. I don’t see that that’s quite true, though. When we sample something we estimate the value of something in a population based on what it is in the sample. Obviously we’ll be very lucky if the population and the sample have exactly the same value. But the margin of error gives us a range of how far from the sample value it’s plausible the whole population’s value is, or would be if we could measure it. Usually “plausible” means 95 percent; that is, 95 percent of the time the actual value will be within the margin of error of the sample’s value.

    So here’s where I disagree with Fi. Let’s suppose that the first candidate, Kirk, polls at 43 percent. The second candidate, Picard, polls at 40 percent. (Undecided or third-party candidates make up the rest.) I agree that Kirk’s true, whole-population, support is equally likely to be 40 percent or 46 percent. But Picard’s true, whole-population, support is equally likely to be 37 percent or 43 percent. Kirk’s lead is actually six points if his support was under-represented in the sample and Picard’s was over-represented, by the same measures. But suppose Kirk was just as over-represented and Picard just as under-represented as they were in the previous case. This puts Kirk at 40 percent and Picard at 43 percent, a Kirk-lead of minus three percentage points.

    So what’s the actual chance these two candidates are tied? Well, you have to say what a tie is. It’s vanishingly impossible they have precisely the same true support and we can’t really calculate that. Don’t blame statisticians. You tell me an election in which one candidate gets three more votes than the other isn’t really tied, if there are more than seven votes cast. We can work on “what’s the chance their support is less than some margin?” And then you’d have all the possible chances where Kirk gets a lower-than-surveyed return while Picard gets a higher-than-surveyed return. I can’t say what that is offhand. We haven’t said what this margin-of-tying is, for one thing.

    But it is certainly higher than the chance the lead is actually six; that only happens if the actual vote is different from the poll in one particular way. A tie can happen if the actual vote is different from the poll in many different ways.

    Doing a quick and dirty little numerical simulation suggests to me that, if we assume the sampling respects the standard normal distribution, then in this situation Kirk probably is ahead of Picard. Given a three-point lead and a three-point margin for error Kirk would be expected to beat Picard about 92 percent of the time, while Picard would win about 8 percent of the time.
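For anyone who wants to replay that kind of quick and dirty simulation, here’s a sketch of a setup. The assumptions are mine: a 95 percent confidence level, so each candidate’s standard error is 3/1.96 points, and the two candidates’ sampling errors are independent, which makes the lead’s standard error √2 times that.

```python
import random

def chance_kirk_leads(observed_lead=3.0, margin=3.0, trials=200_000, seed=42):
    """Monte Carlo estimate of the chance the true lead is positive,
    given an observed lead and a 95% margin of error per candidate."""
    rng = random.Random(seed)
    se_candidate = margin / 1.96            # 95% margin -> standard error
    se_lead = se_candidate * 2 ** 0.5       # independent errors add in quadrature
    wins = sum(rng.gauss(observed_lead, se_lead) > 0 for _ in range(trials))
    return wins / trials

print(chance_kirk_leads())   # roughly 0.92, matching the figure above
```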

Here I have been making the assumption that Kirk’s and Picard’s support are to an extent independent. That is, a vote might be for Kirk or for Picard or for neither. There’s this bank of neither-candidate voters that either could draw on. If there are no undecided voters, if every voter is either Kirk’s or Picard’s, then all of this breaks down: Kirk can be up by six only if Picard is down by six. But I don’t know of surveys that work like that.

    Not to keep attacking this particular strip, which doesn’t deserve harsh treatment, but it gives me so much to think about. Assuming by “they” Fi means news anchors — and from what we get on panel, it’s not actually clear she does — I’m not sure they actually do “say the poll is tied”. What I more often remember hearing is that the difference is equal to, or less than, the survey’s margin of error. That might get abbreviated to “a statistical tie”, a usage that I think is fair. But Fi might mean the candidates or their representatives in saying “they”. I can’t fault the campaigns for interpreting data in ways useful for their purposes. The underdog needs to argue that the election can yet be won. The leading candidate needs to argue against complacency. In either case a tie is a viable selling point and a reasonable interpretation of the data.

Gene Weingarten, Dan Weingarten, and David Clark’s Barney and Clyde for the 12th is a classic use of Einstein and general relativity to explain human behavior. Everyone’s tempted by this. Usually it’s thermodynamics that inspires thoughts that society could be explained mathematically. There’s good reason for this. Thermodynamics builds great and powerful models of complicated systems by supposing that we never know, or need to know, what any specific particle of gas or fluid is doing. We care only about aggregate data. That statistics shows we can understand much about humanity without knowing fine details reinforces this idea. The Weingartens and Clark probably shifted from thermodynamics to general relativity because Einstein is recognizable to normal people. And we’ve all at least heard of mass warping space and can follow the metaphor to money warping law.

'Lolly, we can't stay alive up here on ten credits! You *must* help me beat the roulette wheel!' 'Flash-honey, you yo'self taught me never to use my PSI talent fo' CROOKEDNESS!' 'Then I have to do it the hard way! This should be no tougher than computing a spaceship's orbit!' The Computer For Gamblers Only works. 'Red 88 plus infinity 6 sqrt(13) - E = Mc^2 (etc) ... hmmm ... which gives the probable winning number of THIRTEEN!' 'Hunh?'

Dan Barry’s Flash Gordon for the 28th of November, 1961. Um, Flash and Lolly are undercover on the space station that mobsters have put in an orbit above the 1,000-mile limit past which no laws apply. You know, the way they did in the far-distant future year of 1971. Also Lolly has psychic powers that let her see the future because that’s totally a for-real scientific possibility. Also she’s kind of a dope. Finally I would think a Computer that can predict roulette wheel outcomes wouldn’t be open for the public to use on the gambling space station but perhaps I’m just anticipating the next stunning plot twist of Flash losing their last ten credits betting on false predictions.

In vintage comics, Dan Barry’s Flash Gordon for the 14th (originally run the 28th of November, 1961) uses the classic idea that sufficient mathematics talent will outwit games of chance. Many believe it. I remember my grandmother’s disappointment that she couldn’t bring the underaged me into the casinos in Atlantic City. This did save her the disappointment of learning I haven’t got any gambling skill besides occasionally buying two lottery tickets if the jackpot is high enough. I admit that’s an irrational move on my part, but I can spare two dollars for foolishness once or twice a year. The idea of beating a roulette wheel, at least a fair wheel, isn’t absurd. In principle if you knew enough about how the wheel was set up and how the ball was weighted and how it was launched into the spin you could predict where it would land. In practice, good luck. I wouldn’t be surprised if a good roulette wheel weren’t chaotic, or close to it. If it’s chaotic then while the outcome could be predicted if the wheel’s spin and the ball’s initial speed were known well enough, they can’t be measured well enough for a prediction to be meaningful. The comic also uses the classic word balloon full of mathematical symbols to suggest deep reasoning. I spotted Einstein’s famous quote there.

    • Chiaroscuro 6:02 am on Monday, 17 October, 2016 Permalink | Reply

      It’s been managed. Briefly.


      • Joseph Nebus 4:08 am on Tuesday, 18 October, 2016 Permalink | Reply

I was considering whether to get into that. It is possible to find biases, in mechanical or electronic systems, and that gives the bettor with deep enough pockets or enough time an advantage. (Blackjack was similarly and famously hacked.) That’s not so helpful if all you’ve got is ten credits to build up something that can break the bank.

        It happens I was wrong about the Computer guiding Flash to the wrong number. Which is fascinating but raises questions about the plausible worldbuilding of this thousand-mile-high gambling space station that the law can’t touch.


  • Joseph Nebus 6:00 pm on Friday, 14 October, 2016 Permalink | Reply
    Tags: , , , , , harmonic motion, perturbations,   

    How Mathematical Physics Works: Another Course In 2200 Words 

    OK, I need some more background stuff before returning to the Why Stuff Can Orbit series. Last week I explained how to take derivatives, which is one of the three legs of a Calculus I course. Now I need to say something about why we take derivatives. This essay won’t really qualify you to do mathematical physics, but it’ll at least let you bluff your way through a meeting with one.

We care about derivatives because we’re doing physics a smart way. This involves thinking not about forces but instead potential energy. We have a function, called V or sometimes U, that changes based on where something is. If we need to know the forces on something we can take the derivative, with respect to position, of the potential energy; the force is the negative of that derivative.

    The way I’ve set up these central force problems makes it easy to shift between physical intuition and calculus. Draw a scribbly little curve, something going up and down as you like, as long as it doesn’t loop back on itself. Also, don’t take the pen from paper. Also, no corners. That’s just cheating. Smooth curves. That’s your potential energy function. Take any point on this scribbly curve. If you go to the right a little from that point, is the curve going up? Then your function has a positive derivative at that point. Is the curve going down? Then your function has a negative derivative. Find some other point where the curve is going in the other direction. If it was going up to start, find a point where it’s going down. Somewhere in-between there must be a point where the curve isn’t going up or going down. The Intermediate Value Theorem says you’re welcome.

    These points where the potential energy isn’t increasing or decreasing are the interesting ones. At least if you’re a mathematical physicist. They’re equilibriums. If whatever might be moving happens to be exactly there, then it’s not going to move. It’ll stay right there. Mathematically: the force is some fixed number times the derivative of the potential energy there. The potential energy’s derivative is zero there. So the force is zero and without a force nothing’s going to change. Physical intuition: imagine you laid out a track with exactly the shape of your curve. Put a marble at this point where the track isn’t rising and isn’t falling. Does the marble move? No, but if you’re not so sure about that read on past the next paragraph.

    Mathematical physicists learn to look for these equilibriums. We’re taught to not bother with what will happen if we release this particle at this spot with this velocity. That is, you know, not looking at any particular problem someone might want to know. We look instead at equilibriums because they help us describe all the possible behaviors of a system. Mathematicians are sometimes characterized as lazy in spirit. This is fair. Mathematicians will start out with a problem looking to see if it’s just like some other problem someone already solved. But the flip side is if one is going to go to the trouble of solving a new problem, she’s going to really solve it. We’ll work out not just what happens from some one particular starting condition. We’ll try to describe all the different kinds of thing that could happen, and how to tell which of them does happen for your measly little problem.

If you actually do have a curvy track and put a marble down on its equilibrium it might yet move. Suppose the track is rising a while and then falls back again; put the marble at the top and it’s likely to roll one way or the other. If it doesn’t it’s probably because of friction; the track sticks a little. If it were a really smooth track and the marble perfectly round then it’d fall. Give me this. But even with a perfectly smooth track and perfectly frictionless marble it’ll still roll one way or another. Unless you put it exactly at the spot that’s the top of the hill, not a bit to the left or the right. Good luck.

    What’s happening here is the difference between a stable and an unstable equilibrium. This is again something we all have a physical intuition for. Imagine you have something that isn’t moving. Give it a little shove. Does it stay about like it was? Then it’s stable. Does it break? Then it’s unstable. The marble at the top of the track is at an unstable equilibrium; a little nudge and it’ll roll away. If you had a marble at the bottom of a track, inside a valley, then it’s a stable equilibrium. A little nudge will make the marble rock back and forth but it’ll stay nearby.

    Yes, if you give it a crazy big whack the marble will go flying off, never to be seen again. We’re talking about small nudges. No, smaller than that. This maybe sounds like question-begging to you. But what makes for an unstable equilibrium is that no nudge is too small. The nudge — perturbation, in the trade — will just keep growing. In a stable equilibrium there’s nudges small enough that they won’t keep growing. They might not shrink, but they won’t grow either.

    So how to tell which is which? Well, look at your potential energy and imagine it as a track with a marble again. Where are the unstable equilibriums? They’re the ones at tops of hills. Near them the curve looks like a cup pointing down, to use the metaphor every Calculus I class takes. Where are the stable equilibriums? They’re the ones at bottoms of valleys. Near them the curve looks like a cup pointing up. Again, see Calculus I.

    We may be able to tell the difference between these kinds of equilibriums without drawing the potential energy. We can use the second derivative. To find the second derivative of a function you take the derivative of a function and then — you may want to think this one over — take the derivative of that. That is, you take the derivative of the original function a second time. Sometimes higher mathematics gives us terms that aren’t too hard.

    So if you have a spot where you know there’s an equilibrium, look at what the second derivative at that spot is. If it’s positive, you have a stable equilibrium. If it’s negative, you have an unstable equilibrium. This is called “Second Derivative Test”, as it was named by a committee that figured it was close enough to 5 pm and why cause trouble?
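Here’s a tiny sketch of the test in code, using a made-up double-well potential of my own rather than anything from the orbit problem. V(x) = x⁴ - 2x² has V'(x) = 4x³ - 4x, which is zero at x = -1, 0, and +1.

```python
def classify_equilibrium(d2V_value):
    """Second Derivative Test for a point already known to be an
    equilibrium (that is, V'(x) = 0 there)."""
    if d2V_value > 0:
        return "stable"        # bottom of a valley
    if d2V_value < 0:
        return "unstable"      # top of a hill
    return "inconclusive"      # zero: need more information

def d2V(x):
    # Second derivative of the toy potential V(x) = x**4 - 2*x**2.
    return 12 * x**2 - 4

print(classify_equilibrium(d2V(-1.0)))   # stable
print(classify_equilibrium(d2V(0.0)))    # unstable
print(classify_equilibrium(d2V(1.0)))    # stable
```

The two valley bottoms at x = ±1 come out stable and the hump between them at x = 0 unstable, just as the marble picture says.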

    If the second derivative is zero there, um, we can’t say anything right now. The equilibrium may also be an inflection point. That’s where the growth of something pauses a moment before resuming. Or where the decline of something pauses a moment before resuming. In either case that’s still an unstable equilibrium. But it doesn’t have to be. It could still be a stable equilibrium. It might just have a very smoothly flat base. No telling just from that one piece of information and this is why we have to go on to other work.

    But this gets at how we’d like to look at a system. We look for its equilibriums. We figure out which equilibriums are stable and which ones are unstable. With a little more work we can say, if the system starts out like this it’ll stay near that equilibrium. If it starts out like that it’ll stay near this whole other equilibrium. If it starts out this other way, it’ll go flying off to the end of the universe. We can solve every possible problem at once and never have to bother with a particular case. This feels good.

    It also gives us a little something more. You maybe have heard of a tangent line. That’s a line that’s, er, tangent to a curve. Again with the not-too-hard terms. What this means is there’s a point, called the “point of tangency”, again named by a committee that wanted to get out early. And the line just touches the original curve at that point, and it’s going in exactly the same direction as the original curve at that point. Typically this means the line just grazes the curve, at least around there. If you’ve ever rolled a pencil until it just touched the edge of your coffee cup or soda can, you’ve set up a tangent line to the curve of your beverage container. You just didn’t think of it as that because you’re not daft. Fair enough.

    Mathematicians will use tangents because a tangent line has values that are so easy to calculate. The function describing a tangent line is a polynomial and we llllllllove polynomials, correctly. The tangent line is always easy to understand, however hard the original function was. Its value, at the equilibrium, is exactly what the original function’s was. Its first derivative, at the equilibrium, is exactly what the original function’s was at that point. Its second derivative is zero, which might or might not be true of the original function. We don’t care.

    We don’t use tangent lines when we look at equilibriums. This is because in this case they’re boring. If it’s an equilibrium then its tangent line is a horizontal line. No matter what the original function was. It’s trivial: you know the answer before you’ve heard the question.

    Ah, but, there is something mathematical physicists do like. The tangent line is boring. Fine. But how about, using the second derivative, building a tangent … well, “parabola” is the proper term. This is a curve that’s a quadratic, that looks like an open bowl. It exactly matches the original function at the equilibrium. Its derivative exactly matches the original function’s derivative at the equilibrium. Its second derivative also exactly matches the original function’s second derivative, though. Third derivative we don’t care about. It’s so not important here I can’t even finish this sentence in a

    What this second-derivative-based approximation gives us is a parabola. It will look very much like the original function if we’re close to the equilibrium. And this gives us something great. The great thing is this is the same potential energy shape of a weight on a spring, or anything else that oscillates back and forth. It’s the potential energy for “simple harmonic motion”.
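Here’s a sketch of that approximation with a toy potential of my own choosing, V(x) = -cos(x), which has a stable equilibrium at x = 0. The tangent parabola there is V(0) + ½V''(0)x², which works out to -1 + x²/2: exactly a spring potential.

```python
import math

def V(x):
    return -math.cos(x)        # toy potential, stable equilibrium at x = 0

def V_quad(x):
    # Second-derivative approximation at the equilibrium:
    # V(0) + (1/2) * V''(0) * x**2 = -1 + x**2 / 2.
    return -1.0 + 0.5 * x**2

for x in (0.0, 0.1, 0.5):
    print(x, V(x), V_quad(x))  # nearly identical close to the equilibrium
```

The mismatch grows like x⁴, so the closer you stay to the equilibrium the better the weight-on-a-spring stand-in works.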

And that’s great. We start studying simple harmonic motion, oh, somewhere in high school physics class because it’s so much fun to play with slinkies and springs and to accidentally drop weights on our lab partners. We never stop. The mathematics behind it is simple. It turns up everywhere. If you understand the mathematics of a mass on a spring you have a tool that’s relevant to pretty much every problem you ever have. This approximation is part of that. Close to a stable equilibrium, whatever system you’re looking at has the same behavior as a weight on a spring.

    It may strike you that a mass on a spring is itself a central force. And now I’m saying that within the central force problem I started out doing, stuff that orbits, there’s another central force problem. This is true. You’ll see that in a few Why Stuff Can Orbit essays.

So far, by the way, I’ve talked entirely about a potential energy with a single variable. This is for a good reason: two or more variables is harder. Well of course it is. But the basic dynamics are still open. There’s equilibriums. They can be stable or unstable. They might have inflection points. There is a new kind of behavior. Mathematicians call it a “saddle point”. This is where in one direction the potential energy makes it look like a stable equilibrium while in another direction the potential energy makes it look unstable. Examples of it kind of look like the shape of a saddle, if you haven’t looked at an actual saddle recently. (If you really want to know, get your computer to plot the function z = x^2 - y^2 and look at the origin, where x = 0 and y = 0.) Well, there’s points on an actual saddle that would be saddle points to a mathematician. It’s unstable, because there’s that direction where it’s definitely unstable.
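If you’d rather not plot it, a quick numerical check of that function’s curvature at the origin shows the mixed behavior. This is just a sketch using central differences.

```python
def z(x, y):
    return x**2 - y**2

h = 1e-3
# Second derivatives at the origin by central differences:
zxx = (z(h, 0) - 2 * z(0, 0) + z(-h, 0)) / h**2   # +2: curves up along x
zyy = (z(0, h) - 2 * z(0, 0) + z(0, -h)) / h**2   # -2: curves down along y
print(zxx, zyy)   # opposite signs mark a saddle point
```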

    So everything about multivariable functions is longer, and a couple bits of it are harder. There’s more chances for weird stuff to happen. I think I can get through most of Why Stuff Can Orbit without having to know that. But do some reading up on that before you take a job as a mathematical physicist.

  • Joseph Nebus 6:00 pm on Tuesday, 11 October, 2016 Permalink | Reply
    Tags: , , ,   

    Reading the Comics, October 8, 2016: Split Week Edition Part 2 

    And now I can finish off last week’s comics. It was a busy week. The first few days of this week have been pretty busy too. Meanwhile, Dave Kingsbury has recently read a biography of Lewis Carroll, and been inspired to form a haiku/tanka project. You might enjoy.

    Susan Camilleri Konar is a new cartoonist for the Six Chix collective. Her first strip to get mentioned around these parts is from the 5th. It’s a casual mention of the Fibonacci sequence, which is one of the few sequences that a normal audience would recognize as something going on forever. And yes, I noticed the spiral in the background. That’s one of the common visual representations of the Fibonacci sequence: it starts from the center. The rectangles inside have dimensions 1 by 2, then 2 by 3, then 3 by 5, then 5 by 8, and so on; the spiral connects vertices of these rectangles. It’s an attractive spiral and you can derive the overrated Golden Ratio from the dimensions of larger rectangles. This doesn’t make the Golden Ratio important or anything, but it is there.

    'It seems like Fibonacci's been entering his password for days now.'

Susan Camilleri Konar’s Six Chix for the 5th of October, 2016. And yet what distracts me is both how much food Fibonacci has on his desk and how much of it is hidden behind his computer where he can’t get at it. He’s going to end up spilling his coffee on something important fiddling around like that. And that’s not even getting at his computer being this weird angle relative to the walls.

    Ryan North’s Dinosaur Comics for the 6th is part of a story about T-Rex looking for certain truth. Mathematics could hardly avoid coming up. And it does offer what look like universal truths: given the way deductive logic works, and some starting axioms, various things must follow. “1 + 1 = 2” is among them. But there are limits to how much that tells us. If we accept the rules of Monopoly, then owning four railroads means the rent for landing on one is a game-useful $200. But if nobody around you cares about Monopoly, so what? And so it is with mathematics. Utahraptor and Dromiceiomimus point out that the mathematics we know is built on premises we have selected because we find them interesting or useful. We can’t know that the mathematics we’ve deduced has any particular relevance to reality. Indeed, it’s worse than North points out: How do we know whether an argument is valid? Because we believe that its conclusions follow from its premises according to our rules of deduction. We rely on our possibly deceptive senses to tell us what the argument even was. We rely on a mind possibly upset by an undigested bit of beef, a crumb of cheese, or a fragment of an underdone potato to tell us the rules are satisfied. Mathematics seems to offer us absolute truths, but it’s hard to see how we can get there.

Rick Stromoski’s Soup to Nutz for the 6th has a mathematics cameo in a student-resisting-class-questions problem. But the teacher’s question is related to the figure that made my first fame around these parts.

    Mark Anderson’s Andertoons for the 7th is the long-awaited Andertoon for last week. It is hard getting education in through all the overhead.

    Bill Watterson’s Calvin and Hobbes rerun for the 7th is a basic joke about Calvin’s lousy student work. Fun enough. Calvin does show off one of those important skills mathematicians learn, though. He does do a sanity check. He may not know what 12 + 7 and 3 + 4 are, but he does notice that 12 + 7 has to be something larger than 3 + 4. That’s a starting point. It’s often helpful before starting work on a problem to have some idea of what you think the answer should be.

    • davekingsbury 5:57 pm on Wednesday, 12 October, 2016 Permalink | Reply

      Thank you for the mention. Good advice about starting work on a problem knowing roughly what the answer is … though my post demonstrated the opposite!


      • Joseph Nebus 3:43 am on Saturday, 15 October, 2016 Permalink | Reply

        Quite welcome. And, well, usually having an idea what answer you expect helps. Sometimes it misfires, I admit. But all rules of thumb sometimes misfire. If your expectation misfires it’s probably because you expect the answer to be something that’s not just wrong, but wrong in a significant way. That is, not wrong because you’re thinking 12 when it should be 14, but rather wrong because you’re thinking 12 when you should be thinking of doughnut shapes. But figuring that out is another big learning experience.

        Liked by 1 person

  • Joseph Nebus 6:00 pm on Sunday, 9 October, 2016 Permalink | Reply
    Tags: , , ,   

    Reading the Comics, October 4, 2016: Split Week Edition Part 1 

    The last week in mathematically themed comics was a pleasant one. By “a pleasant one” I mean Comic Strip Master Command sent enough comics out that I feel comfortable splitting them across two essays. Look for the other half of the past week’s strips in a couple days at a very similar URL.

Mac King and Bill King’s Magic in a Minute feature for the 2nd shows off a bit of number-pattern wonder. Set numbers in order on a four-by-four grid and select four as directed and add them up. You get the same number every time. It’s a cute trick. I would not be surprised if there’s some good group theory questions underlying this, like about what different ways one could arrange the numbers 1 through 16. Or what other size grids the pattern will work for: 2 by 2? (Obviously.) 3 by 3? 5 by 5? 6 by 6? I’m not saying I actually have been having fun doing this. I just sense there’s fun to be had there.
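Here’s a sketch of why the trick works, on my assumption that “select four as directed” amounts to picking one number from each row and each column, which is how versions of this trick I know of go. Every such pick is a permutation of the columns, and every permutation gives the same total.

```python
from itertools import permutations

# Fill a 4-by-4 grid with 1 through 16 in order: entry = 4*row + col + 1.
grid = [[4 * r + c + 1 for c in range(4)] for r in range(4)]

# One number from each row and each column means choosing, for each
# row r, a distinct column p[r]. Check every possible choice.
sums = {sum(grid[r][c] for r, c in enumerate(p)) for p in permutations(range(4))}
print(sums)   # a single value: the row indices and column indices
              # each contribute a fixed total no matter the permutation
```

The algebra behind it: the sum is Σ(4r + p(r) + 1) over r = 0..3, and both Σr and Σp(r) equal 0+1+2+3 = 6 regardless of the permutation, so every pick totals 4·6 + 6 + 4 = 34.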

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 2nd is based on one of those weirdnesses of the way computers add. I remember in the 90s being on a Java mailing list. Routinely it would draw questions from people worried that something was very wrong, as adding 0.01 to a running total repeatedly wouldn’t get to exactly 1.00. Java was working correctly, in that it was doing what the specifications said. It’s just the specifications didn’t work quite like new programmers expected.

    What’s going on here is the same problem you get if you write down 1/3 as 0.333. You know that 1/3 plus 1/3 plus 1/3 ought to be 1 exactly. But 0.333 plus 0.333 plus 0.333 is 0.999. 1/3 is really a little bit more than 0.333, but we skip that part because it’s convenient to use only a few points past the decimal. Computers normally represent real-valued numbers with a scheme called floating point representation. At heart, that’s representing numbers with a couple of digits. Enough that we don’t normally see the difference between the number we want and the number the computer represents.

    Every number base has some rational numbers it can’t represent exactly using finitely many digits. Our normal base ten, for example, has “one-third” and “two-thirds”. Floating point arithmetic is built on base two, and that has some problems with tenths and hundredths and thousandths. That’s embarrassing but in the main harmless. Programmers learn about these problems and how to handle them. And if they ask the mathematicians we tell them how to write code so as to keep these floating-point errors from growing uncontrollably. If they ask nice.
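Here’s the same surprise those Java mailing-list posters hit, sketched in Python for concreteness:

```python
# Adding 0.01 a hundred times does not land exactly on 1.0, because
# 0.01 has no finite representation in base two.
total = 0.0
for _ in range(100):
    total += 0.01

print(total == 1.0)             # False
print(total)                    # close to, but not exactly, 1.0
print(abs(total - 1.0) < 1e-9)  # True: close enough for most purposes
```

The fix isn’t to distrust the language; it’s to compare against a tolerance rather than demand exact equality, which is part of that advice mathematicians hand out when asked nicely.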

    Random Acts of Nancy for the 3rd is a panel from Ernie Bushmiller’s Nancy. That panel’s from the 23rd of November, 1946. And it just uses mathematics in passing, arithmetic serving the role of most of Nancy’s homework. There’s a bit of spelling (I suppose) in there too, which probably just represents what’s going to read most cleanly. Random Acts is curated by Ernie Bushmiller fans Guy Gilchrist (who draws the current Nancy) and John Lotshaw.

    Thom Bluemel’s Birdbrains for the 4th depicts the discovery of a new highest number. When humans discovered ‘1’ is, I would imagine, probably unknowable. Given the number sense that animals have it’s probably something that predates humans, that it’s something we’re evolved to recognize and understand. A single stroke for 1 seems to be a common symbol for the number. I’ve read histories claiming that a culture’s symbol for ‘1’ is often what they use for any kind of tally mark. Obviously nothing in human cultures is truly universal. But when I look at number symbols other than the Arabic and Roman schemes I’m used to, it is usually the symbol for ‘1’ that feels familiar. Then I get to the Thai numeral and shrug at my helplessness.

    Bill Amend’s FoxTrot Classics for the 4th is a rerun of the strip from the 11th of October, 2005. And it’s made for mathematics people to clip out and post on the walls. Jason and Marcus are in their traditional nerdly way calling out sequences of numbers. Jason’s is the Fibonacci Sequence, which is as famous as mathematics sequences get. That’s the sequence of numbers in which every number is the sum of the previous two terms. You can start that sequence with 0 and 1, or with 1 and 1, or with 1 and 2. It doesn’t matter.
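A minimal sketch of the sequence Jason calls out; the adjustable starting pair matches the point that 0-and-1 or 1-and-1 both work. (The function name and parameters are mine.)

```python
# Fibonacci sequence: each term is the sum of the previous two.
def fibonacci(count, a=0, b=1):
    """Return the first `count` terms starting from the pair (a, b)."""
    terms = []
    for _ in range(count):
        terms.append(a)
        a, b = b, a + b
    return terms

print(fibonacci(10))       # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fibonacci(8, 1, 1))  # the same sequence, shifted by one term
```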

    Marcus calls out the Perrin Sequence, which I never heard of before either. It’s like the Fibonacci Sequence. Each term in it is the sum of two other terms. Specifically, each term is the sum of the second-previous and the third-previous terms. And it starts with the numbers 3, 0, and 2. The sequence is named for François Perrin, who described it in 1899, and that’s as much as I know about him. The sequence describes some interesting stuff. Take n points and put them in a ‘cycle graph’, which looks to the untrained eye like a polygon with n corners and n sides. You can pick subsets of those points in which no two chosen points are connected; these are called independent sets. A maximal independent set is an independent set you can’t enlarge by adding another point. And the number of these maximal independent sets in a cyclic graph is the n-th number in the Perrin sequence. I admit this seems like a nice but not compelling thing to know. But I’m not a cyclic graph kind of person so what do I know?
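Nothing from the strip, but here’s a brute-force sketch one could use to check that claim for small n; the function names are my own, and the subset search is exponential, so this is only for little cycle graphs.

```python
# Check: the number of maximal independent sets in the cycle graph on
# n points matches the n-th Perrin number, for small n.
from itertools import combinations

def perrin(n):
    """Perrin sequence: starts 3, 0, 2; each later term is the sum of
    the second-previous and third-previous terms."""
    p = [3, 0, 2]
    while len(p) <= n:
        p.append(p[-2] + p[-3])
    return p[n]

def maximal_independent_sets(n):
    """Count maximal independent sets in the cycle graph C_n."""
    adjacent = lambda a, b: (a - b) % n in (1, n - 1)

    def independent(s):
        return all(not adjacent(a, b) for a, b in combinations(s, 2))

    count = 0
    for size in range(1, n + 1):
        for s in combinations(range(n), size):
            # maximal: independent, and no outside point can be added
            if independent(s) and all(
                    not independent(s + (v,))
                    for v in range(n) if v not in s):
                count += 1
    return count

for n in range(3, 9):
    print(n, perrin(n), maximal_independent_sets(n))  # the columns agree
```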

    Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 4th is the anthropomorphic numerals joke for this essay and I was starting to worry we wouldn’t get one.

  • Joseph Nebus 6:00 pm on Friday, 7 October, 2016 Permalink | Reply
    Tags: , , ,   

    How Differential Calculus Works 

    I’m at a point in my Why Stuff Can Orbit essays where I need to talk about derivatives. If I don’t then I’m going to be stuck with qualitative talk that’s too hard to focus on anything. But it’s a big subject. It’s about two-fifths of a Freshman Calculus course and one of the two main legs of calculus. So I want to pull out a quick essay explaining the important stuff.

    Derivatives, also called differentials, are about how things change. By “things” I mean “functions”. And particularly I mean functions whose domain is in the real numbers and whose range is in the real numbers. That’s the starting point. We can define derivatives for functions with domains or ranges that are vectors of real numbers, or that are complex-valued numbers, or are even more exotic kinds of numbers. But those ideas are all built on the derivatives for real-valued functions. So if you get really good at them, everything else is easy.

    Derivatives are the part of calculus where we study how functions change. They’re blessed with some easy-to-visualize interpretations. Suppose you have a function that describes where something is at a given time. Then its derivative with respect to time is the thing’s velocity. You can build a pretty good intuitive feeling for derivatives that way. I won’t stop you from it.

    A function’s derivative is itself a function. The derivative doesn’t have to be constant, although it’s possible. Sometimes the derivative is positive. This means the original function increases as whatever its variable increases. Sometimes the derivative is negative. This means the original function decreases as its variable increases. If the derivative is a big number this means it’s increasing or decreasing fast. If the derivative is a small number it’s increasing or decreasing slowly. If the derivative is zero then the function isn’t increasing or decreasing, at least not at this particular value of the variable. This might be a local maximum, where the function’s got a larger value than it has anywhere else nearby. It might be a local minimum, where the function’s got a smaller value than it has anywhere else nearby. Or it might just be a little pause in the growth or shrinkage of the function. No telling, at least not just from knowing the derivative is zero.

    Derivatives tell you something about where the function is going. We can, and in practice often do, make approximations to a function that are built on derivatives. Many interesting functions are real pains to calculate. Derivatives let us make polynomials that are as good an approximation as we need and that a computer can do for us instead.

    Derivatives can change rapidly. That’s all right. They’re functions in their own right. Sometimes they change more rapidly and more extremely than the original function did. Sometimes they don’t. Depends what the original function was like.

    Not every function has a derivative. Functions that have derivatives don’t necessarily have them at every point. A function has to be continuous to have a derivative. A function that jumps all over the place doesn’t really have a direction, not that differential calculus will let us suss out. But being continuous isn’t enough. The function also has to be … well, we call it “differentiable”. The word at least tells us what the property is good for, even if it doesn’t say how to tell if a function is differentiable. The function has to not have any corners, points where the direction suddenly and unpredictably changes.
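A quick numerical illustration of a corner, a sketch of my own and not anything from the essay: the absolute value function at zero. Difference quotients from the right and from the left settle on different slopes, so there’s no single derivative there.

```python
# One-sided difference quotients for abs at zero: the two sides
# disagree about the slope, which is what a corner looks like.
def slope(f, x, h):
    """Difference quotient (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

for h in (0.1, 0.001, 0.00001):
    print(h, slope(abs, 0.0, h), slope(abs, 0.0, -h))
# from the right the slope is +1, from the left it is -1
```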

    Otherwise differentiable functions can have some corners, some non-differentiable points. For instance the height of a bouncing ball, over time, looks like a bunch of upside-down U-like shapes that suddenly rebound off the floor and go up again. Those points interrupting the upside-down U’s aren’t differentiable, even though the rest of the curve is.

    It’s possible to make functions that are nothing but corners and that aren’t differentiable anywhere. These are done mostly to win quarrels with 19th century mathematicians about the nature of the real numbers. We use them in Real Analysis to blow students’ minds, and occasionally after that to give a fanciful idea a hard test. In doing real-world physics we usually don’t have to think about them.

    So why do we do them? Because they tell us how functions change. They can tell us where functions momentarily don’t change, which are usually important points. And they let us write equations with differentials in them, known in the trade as “differential equations”. They’re also known to mathematics majors as “diffy Q’s”, a name which delights everyone. Diffy Q’s let us describe physical systems where there’s any kind of feedback. If something interacts with its surroundings, that interaction’s probably described by differential equations.

    So how do we do them? We start with a function that we’ll call ‘f’ because we’re not wasting more of our lives figuring out names for functions and we’re feeling smugly confident Turkish authorities aren’t interested in us.

    f’s domain is the real numbers; a typical element of it is the one we’ll call ‘x’. Its range is also real numbers. f(x) is some number in that range. It’s the one that the function f matches with the domain’s value of x. We read f(x) aloud as “the value of f evaluated at the number x”, if we’re pretending or trying to scare Freshman Calculus classes. Really we say “f of x” or maybe “f at x”.

    There’s a couple ways to write down the derivative of f. First, for example, we say “the derivative of f with respect to x”. By that we mean how does the value of f(x) change if there’s a small change in x. That difference matters if we have a function that depends on more than one variable, if we had an “f(x, y)”. Having several variables for f changes stuff. Mostly it makes the work longer but not harder. We don’t need that here.

    But there’s a couple ways to write this derivative. The best one if it’s a function of one variable is to put a little ‘ mark: f'(x). We pronounce that “f-prime of x”. That’s good enough if there’s only the one variable to worry about. We have other symbols. One we use a lot in doing stuff with differentials looks for all the world like a fraction. We spend a lot of time in differential calculus explaining why it isn’t a fraction, and then in integral calculus we do some shady stuff that treats it like it is. But that’s \frac{df}{dx} . This also appears as \frac{d}{dx} f . If you encounter this do not cross out the d’s from the numerator and denominator. In this context the d’s are not numbers. They’re actually operators, which is what we call a function whose domain is functions. They’re notational quirks is all. Accept them and move on.

    How do we calculate it? In Freshman Calculus we introduce it with this expression involving limits and fractions and delta-symbols (the Greek letter that looks like a triangle). We do that to scare off students who aren’t taking this stuff seriously. Also we use that formal proper definition to prove some simple rules work. Then we use those simple rules. Here’s some of them.

    1. The derivative of something that doesn’t change is 0.
    2. The derivative of x^n, where n is any constant real number — positive, negative, whole, a fraction, rational, irrational, whatever — is n x^{n-1}.
    3. If f and g are both functions and have derivatives, then the derivative of the function f plus g is the derivative of f plus the derivative of g. This probably has some proper name but it’s used so much it kind of turns invisible. I dunno. I’d call it something about linearity because “linearity” is what happens when adding stuff works like adding numbers does.
    4. If f and g are both functions and have derivatives, then the derivative of the function f times g is the derivative of f times the original g plus the original f times the derivative of g. This is called the Product Rule.
    5. If f and g are both functions and have derivatives we might do something called composing them. That is, for a given x we find the value of g(x), and then we put that value in to f. That we write as f(g(x)). For example, “the sine of the square of x”. Well, the derivative of that with respect to x is the derivative of f evaluated at g(x) times the derivative of g. That is, f'(g(x)) times g'(x). This is called the Chain Rule and believe it or not this turns up all the time.
    6. If f is a function and C is some constant number, then the derivative of C times f(x) is just C times the derivative of f(x), that is, C times f'(x). You don’t really need this as a separate rule. If you know the Product Rule and that first one about the derivatives of things that don’t change then this follows. But who wants to go through that many steps for a measly little result like this?
    7. Calculus textbooks say there’s this Quotient Rule for when you divide f by g but you know what? The one time you’re going to need it it’s easier to work out by the Product Rule and the Chain Rule and your life is too short for the Quotient Rule. Srsly.
    8. There’s some functions with special rules for the derivative. You can work them out from the Real And Proper Definition. Or you can work them out from infinitely-long polynomial approximations that somehow make sense in context. But what you really do is just remember the ones anyone actually uses. The derivative of e^x is e^x and we love it for that. That’s why e is that 2.71828(etc) number and not something else. The derivative of the natural log of x is 1/x. That’s what makes it natural. The derivative of the sine of x is the cosine of x. That’s if x is measured in radians, which it always is in calculus, because it makes that work. The derivative of the cosine of x is minus the sine of x. Again, radians. The derivative of the tangent of x is um the square of the secant of x? I dunno, look that sucker up. The derivative of the arctangent of x is \frac{1}{1 + x^2} and you know what? The calculus book lists this in a table in the inside covers. Just use that. Arctangent. Sheesh. We just made up the “secant” and the “cosecant” to haze Freshman Calculus students anyway. We don’t get a lot of fun. Let us have this. Or we’ll make you hear of the inverse hyperbolic cosecant.
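A rough numerical spot-check of rules 2, 4, and 5 above, using a central difference quotient as a stand-in for the true derivative. This is an illustration, not a proof, and the helper name `numderiv` is my own.

```python
import math

def numderiv(f, x, h=1e-6):
    """Central difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.3
# Rule 2, the power rule: the derivative of x^3 is 3 x^2.
assert abs(numderiv(lambda t: t**3, x) - 3 * x**2) < 1e-6
# Rule 4, the Product Rule, with f = sine and g = squaring.
lhs = numderiv(lambda t: math.sin(t) * t**2, x)
rhs = math.cos(x) * x**2 + math.sin(x) * 2 * x
assert abs(lhs - rhs) < 1e-6
# Rule 5, the Chain Rule: the derivative of sin(x^2) is cos(x^2) * 2x.
assert abs(numderiv(lambda t: math.sin(t**2), x)
           - math.cos(x**2) * 2 * x) < 1e-6
print("rules 2, 4, and 5 check out numerically")
```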

    So. What’s this all mean for central force problems? Well, here’s what the effective potential energy V(r) usually looks like:

    V_{eff}(r) = C r^n + \frac{L^2}{2 m r^2}

    So, first thing. The variable here is ‘r’. The variable name doesn’t matter. It’s just whatever name is convenient and reminds us somehow why we’re talking about this number. We adapt our rules about derivatives with respect to ‘x’ to this new name by striking out ‘x’ wherever it appeared and writing in ‘r’ instead.

    Second thing. C is a constant. So is n. So is L and so is m. 2 is also a constant, which you never think about until a mathematician brings it up. That second term is a little complicated in form. But what it means is “a constant, which happens to be the number \frac{L^2}{2m} , multiplied by r^{-2}”.

    So the derivative of Veff, with respect to r, is the derivative of the first term plus the derivative of the second. The derivative of each of those terms is going to be some constant times r to another power. And that’s going to be:

    V'_{eff}(r) = C n r^{n - 1} - 2 \frac{L^2}{2 m r^3}

    And there you can cancel the 2 in the numerator against the 2 in the denominator. So we could make this look a little simpler yet, but don’t worry about that.
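For concreteness, here’s a sketch comparing that analytic derivative with a numerical one. The constants C, n, L, and m are made-up values for the demonstration, chosen so the potential is gravity-like.

```python
# Made-up constants: C = -1 and n = -1 give the potential V = -1/r.
C, n, L, m = -1.0, -1.0, 2.0, 1.0

def v_eff(r):
    """Effective potential V_eff(r) = C r^n + L^2 / (2 m r^2)."""
    return C * r**n + L**2 / (2 * m * r**2)

def v_eff_prime(r):
    """Its derivative, with the 2s already cancelled."""
    return C * n * r**(n - 1) - L**2 / (m * r**3)

r, h = 1.7, 1e-6
numeric = (v_eff(r + h) - v_eff(r - h)) / (2 * h)
print(numeric, v_eff_prime(r))  # the two agree to several decimal places
```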

    OK. So that’s what you need to know about differential calculus to understand orbital mechanics. Not everything. Just what we need for the next bit.

    • davekingsbury 1:06 pm on Saturday, 8 October, 2016 Permalink | Reply

      Good article. Just finished Morton Cohen’s biography of Lewis Carroll, who was a great populariser of mathematics, logic, etc. Started a shared poem in tribute to him, here is a cheeky plug, hope you don’t mind!


      • Joseph Nebus 12:22 am on Tuesday, 11 October, 2016 Permalink | Reply

        Thanks for sharing and I’m quite happy to have your plug here. I know about Carroll’s mathematics-popularization side; his logic puzzles are particularly choice ones even today. (Granting that deductive logic really lends itself to being funny.)

        Oddly I haven’t read a proper biography of Carroll, or most of the other mathematicians I’m interested in. Which is strange since I’m so very interested in the history and the cultural development of mathematics.

        Liked by 1 person

  • Joseph Nebus 6:00 pm on Tuesday, 4 October, 2016 Permalink | Reply
    Tags: , , , , ,   

    How September 2016 Treated My Mathematics Blog 

    I put together another low-key, low-volume month in September. In trade, I got a low readership: my lowest in the past twelve months, according to WordPress, and less than a thousand readers for the first time since May. Well, that’s a lesson to me about something or other.

    Readership Numbers:

    So there were only 922 page views around here, down from August’s 1,002 and July’s 1,057. The number of distinct readers rose, at least, to 575. There had been only 531 in August. But there were 585 in July, which is the sort of way it goes.

    The number of likes rose to 115, which is technically up from August’s 107. It’s well down from July’s 177. There were 20 comments in September, up from August’s 16 yet down from July’s 33. I think this mostly reflects how many fewer posts I’ve been publishing. There were just eleven original posts in August and September, compared to, for example, July’s boom of 17. I am thinking about doing a new A To Z round to close out the year, which if past performance is any indication would bring me all sorts of readers as well as make me spend every day writing, writing, writing and hoping for any kind of mathematics word that starts with ‘y’.

    Popular Posts:

    I’m not surprised that my most popular post for September was a Reading the Comics post. With hindsight I realize it’s almost perfectly engineered for reliable readership. It’s about something light but lets me, at least in principle, bring up the whole spectrum of mathematics. That said I am surprised two of the most popular posts were steeped deep in Freshman Calculus, threatening to be inaccessible to casual readers. But then both of those posts started out when online friends needed help with their calculus work, so maybe it better matches stuff people need to know. The most-read articles around here in September were:

    Listing Countries:

    Country Readers
    United States 808
    India 53
    Canada 46
    United Kingdom 34
    New Zealand 24
    Australia 23
    Germany 18
    Philippines 17
    France 9
    Argentina 8
    Spain 7
    Singapore 6
    Brazil 6
    Kenya 5
    Switzerland 5
    Austria 3
    Denmark 3
    Indonesia 3
    Italy 3
    Netherlands 3
    South Africa 3
    Uruguay 3
    Bulgaria 2
    Croatia 2
    Cyprus 2
    Greece 2
    Israel 2
    Japan 2
    Malaysia 2
    Mexico 2
    Norway 2
    Puerto Rico 2
    Sweden 2
    Turkey 2
    Costa Rica 1
    Czech Republic 1 (*)
    Estonia 1
    European Union 1
    Hong Kong SAR China 1
    Hungary 1
    Mauritius 1
    Poland 1
    Portugal 1
    Romania 1
    Taiwan 1

    Czech Republic was the only single-reader country last month, and no country’s on a two- or more-month single-reader streak. European Union dropped from three page views so I don’t know what they’re looking for that they aren’t finding here.

    Search Term Non-Poetry:

    Nothing all that thrilling among the search terms, although someone’s on a James Clerk Maxwell kick. Among things that brought people here in September were:

    • how many grooves on a record
    • james clerk maxwell comics strip
    • james clerk maxwell comics
    • james clerk maxwell comics stript about scientiest
    • james clerk maxwell comics streip photos
    • james clerk maxwell comics script scientist
    • record groove width in micrometers
    • example of comics strip of maxwell

    Definitely have to commission someone to draw a bunch of James Clerk Maxwell comics.

    Counting Readers:

    October starts with the mathematics blog at 41,318 page views from 17,189 recorded distinct visitors. (The first year or so of the blog WordPress didn’t keep track of distinct visitors, though, or at least didn’t tell us about them.)

    WordPress’s “Insights” tab tells me the most popular day for reading stuff here is Sunday, with 18 percent of page views coming then. Since that’s the designated day for Reading the Comics posts that doesn’t surprise me. The most popular hour is 6 pm, which gets 14 percent of readers in. That must be because I’ve set 6 pm Universal Time as the standard moment when new posts should be published.

    WordPress says I start October with 624 total followers, up modestly from September’s 614 base. There’s a button on the upper-right corner to follow this blog on WordPress. Below that is a button to follow this blog by e-mail. And if you’d like you can follow me on Twitter too, where I try to do more than just point out I’ve posted stuff here. But also to not post so often that you wonder if or when I rest.

    • davekingsbury 9:52 pm on Wednesday, 5 October, 2016 Permalink | Reply

      I wonder if readership is down generally. My own seems to have slumped a bit …


      • Joseph Nebus 12:13 am on Tuesday, 11 October, 2016 Permalink | Reply

        I wonder. I ought to poke around other people’s readership reports and see what their figures are like, and whether there’s any correlations. But that’s also a lot of work, by which I mean any work at all. I’m not sure about going to the trouble of actually doing it.

        Liked by 1 person

  • Joseph Nebus 6:00 pm on Sunday, 2 October, 2016 Permalink | Reply
    Tags: , , ,   

    Reading the Comics, October 1, 2016: Jumble Is Back Edition 

    Comic Strip Master Command sent another normal-style week for mathematics references. There’s not much that lets me get really chatty or gossipy about mathematics lore. That’s all right. The important thing is: we’ve got Jumble back.

    Greg Cravens’s The Buckets for the 25th features a bit of parental nonsense-telling. The rather annoying noise inside a car’s cabin when there’s one window open is the sort of thing fluid mechanics ought to be able to study. I see references claiming this noise to be a Helmholtz Resonance. This is a kind of oscillation in the air that comes from wind blowing across the lone hole in a solid object. Wikipedia says it’s even the same phenomenon producing an ocean-roar in a seashell held up to the ear. It’s named for Hermann von Helmholtz, who described it while studying sound and vortices. Helmholtz is also renowned for making a clear statement of the conservation of energy — an idea many were working towards, mind — and in thermodynamics and electromagnetism and for that matter how the eye works. Also how fast nerves transmit signals. All that said, I’m not sure that all the unpleasant sound heard and pressure felt from a single opened car window is Helmholtz Resonance. Real stuff is complicated and the full story is always more complicated than that. I wouldn’t go farther than saying that Helmholtz Resonance is one thing to look at.

    Michael Cavna’s Warped for the 25th uses two mathematics-cliché equations as “amazingly successful formulas”. One can quibble with whether Einstein should be counted under mathematics. Pythagoras, at least for the famous theorem named for him, nobody would argue. John Grisham, I don’t know, the joke seems dated to me but we are talking about the comics.

    Tony Carrillo’s F Minus for the 28th uses arithmetic as something no reasonable person can claim is incorrect. I haven’t read the comments, but I am slightly curious whether someone says something snarky about Common Core mathematics — or even the New Math for crying out loud — before or after someone finds a base other than ten that makes the symbols correct.

    Cory Thomas’s college-set soap-opera strip Watch Your Head for the 28th name-drops Introduction to Functional Analysis. It won’t surprise you it’s a class nobody would take on impulse. It’s an upper-level undergraduate or a grad-student course, something only mathematics majors would find interesting. But it is very interesting. It’s the reward students have for making it through Real Analysis, the spirit-crushing course about why calculus works. Functional Analysis is about what we can do with functions. We can make them work like numbers. We can define addition and multiplication, we can measure their size, we can create sequences of them. We can treat functions almost as if they were numbers. And while we’re working on things more abstract and more exotic than the ordinary numbers Real Analysis depends on, somehow, Functional Analysis is easier than Real Analysis. It’s a wonder.

    Mark Anderson’s Andertoons for the 29th features a student getting worried about the order of arithmetic operations. I appreciate how kids get worried about the feelings of things like that. Although, truly, subtraction doesn’t go “last”; addition and subtraction have the same priority. They share the bottom of the pile, though. Multiplication and division similarly share a priority, above addition-and-subtraction. Many guides to the order of operations say to do addition-and-subtraction in order left to right, but that’s not so. Setting a left-to-right order is okay for deciding where to start. But you could do a string of additions and subtractions in any order and get the same answer, as long as each minus sign stays attached to the term it modifies.
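A tiny demonstration of the point:

```python
# Reordering a string of additions and subtractions is fine so long as
# each minus sign travels with its number; regrouping it is not.
a = 8 - 3 + 2    # left to right: (8 - 3) + 2
b = 8 + 2 - 3    # reordered; the minus sign stays attached to the 3
c = 8 - (3 + 2)  # grouping 3 and 2 together changes the meaning
print(a, b, c)   # 7 7 3
```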

    Four people sitting at a table divided up as a pie chart. The one sitting behind the overwhelming majority of the chunk says, 'C'mon guys ... I feel like I'm doing all the work here.'

    Daniel Beyer’s Long Story Short for the 30th of September, 2016. I think Randolph Itch, 2am did this joke too but then had everyone retire to the bar chart.

    Daniel Beyer’s Long Story Short for the 30th is a pie chart joke. There’s not a lot of mathematics to it, but I’m amused.

    Justin Boyd’s Invisible Bread for the 30th has maybe my favorite dumb joke of the week. It’s just a kite that’s proven its knowledge of mathematics. I’m a little surprised the kite didn’t call out a funnier number, by which I mean 37, but perhaps … no, that doesn’t work, actually. Of course the kite would be comfortable with higher mathematics.

    LIPOS O-O-O; PURTE OO---; VONPER -OO---; YETMSS --O-OO. Her students were beginning to understand addition and subtraction OOOO OO OOOO.

    David L Hoyt and Jeff Knurek’s Jumble for the 1st of October, 2016. I don’t know that there even is a permanent link for this that would be any good.

    And as promised, David L Hoyt and Jeff Knurek’s Jumble for the 1st of October mentions mathematics. That’s enough for me to include here.

  • Joseph Nebus 6:00 pm on Wednesday, 28 September, 2016 Permalink | Reply
    Tags: , , , , , , ,   

    Why Stuff Can Orbit, Part 5: Why Physics Doesn’t Work And What To Do About It 

    Less way previously:

    My title’s hyperbole, to the extent it isn’t clickbait. Of course physics works. By “work” I mean “model the physical world in useful ways”. If it didn’t work then we would call it “pure” mathematics instead. Mathematicians would study it for its beauty. Physicists would be left to fend for themselves. “Useful” I’ll say means “gives us something interesting to know”. “Interesting” I’ll say if you want to ask what that means then I think you’re stalling.

    But what I mean is that Newtonian physics, the physics learned in high school, doesn’t work. Well, it works, in that if you set up a problem right and calculate right you get answers that are right. It’s just not efficient, for a lot of interesting problems. Don’t ask me about interesting again. I’ll just say the central-force problems from this series are interesting.

    Newtonian, high school type, physics works fine. It shines when you have only a few things to keep track of. In this central force problem we have one object, a planet-or-something, that moves. And only one force, one that attracts the planet to or repels the planet from the center, the Origin. This is where we’d put the sun, in a planet-and-sun system. So that seems all right as far as things go.

    It’s less good, though, if there’s constraints. If it’s not possible for the particle to move in any old direction, say. That doesn’t turn up here; we can imagine a planet heading in any direction relative to the sun. But it’s also less good if there’s a symmetry in what we’re studying. And in this case there is. The strength of the central force only changes based on how far the planet is from the origin. The direction only changes based on what direction the planet is relative to the origin. It’s a bit daft to bother with x’s and y’s and maybe even z’s when all we care about is the distance from the origin. That’s a number we’ve called ‘r’.

    So this brings us to Lagrangian mechanics. This was developed in the 18th century by Joseph-Louis Lagrange. He’s another of those 18th century mathematicians-and-physicists with his name all over everything. Lagrangian mechanics are really, really good when there’s a couple variables that describe both what we’d like to observe about the system and its energy. That’s exactly what we have with central forces. Give me a central force, one that’s pointing directly toward or away from the origin, and that grows or shrinks as the radius changes. I can give you a potential energy function, V(r), that matches that force. Give me an angular momentum L for the planet to have, and I can give you an effective potential energy function, Veff(r). And that effective potential energy lets us describe how the coordinates change in time.

    The method looks roundabout. It depends on two things. One is the coordinate you’re interested in, in this case, r. The other is how fast that coordinate changes in time. This we have a couple of ways of denoting. When working stuff out on paper that’s often done by putting a little dot above the letter. If you’re typing, dots-above-the-symbol are hard. So we mark it as a prime instead: r’. This works well until the web browser or the word processor assumes we want smart quotes and we already had the r’ in quote marks. At that point all hope of meaning is lost and we return to communicating by beating rocks with sticks. We live in an imperfect world.

    What we get out of this is a setup that tells us how fast r’, how fast the coordinate we’re interested in changes in time, itself changes in time. If the coordinate we’re interested in is the ordinary old position of something, then this describes the rate of change of the velocity. In ordinary English we call that the acceleration. What makes this worthwhile is that the coordinate doesn’t have to be the position. It also doesn’t have to be all the information we need to describe the position. For the central force problem r here is just how far the planet is from the center. That tells us something about its position, but not everything. We don’t care about anything except how far the planet is from the center, not yet. So it’s fine we have a setup that doesn’t tell us about the stuff we don’t care about.

    How fast r’ changes in time will be proportional to how fast the effective potential energy, Veff(r), changes with its coordinate. I so want to write “changes with position”, since these coordinates are usually the position. But they can be proxies for the position, or things only loosely related to the position. For an example that isn’t a central force, think about a spinning top. It spins, it wobbles, it might even dance across the table because don’t they all do that? The coordinates that most sensibly describe how it moves are about its rotation, though. What axes is it rotating around? How do those change in time? Those don’t have anything particular to do with where the top is. That’s all right. The mathematics works just fine.

    A circular orbit is one where the radius doesn’t change in time. (I’ll look at non-circular orbits later on.) That is, the radius is not increasing and is not decreasing. If it isn’t getting bigger and it isn’t getting smaller, then it’s got to be staying the same. Not all higher mathematics is tricky. The radius of the orbit is the thing I’ve been calling r all this time. So this means that r’, how fast r is changing with time, has to be zero. Now a slightly tricky part.

    How fast is r’, the rate at which r changes, changing? Well, r’ never changes. It’s always the same value. Anytime something is always the same value the rate of its change is zero. This sounds tricky. The tricky part is that it isn’t tricky. It’s coincidental that r’ is zero and the rate of change of r’ is zero, though. If r’ were any fixed, never-changing number, then the rate of change of r’ would be zero. It happens that we’re interested in times when r’ is zero.

    So we’ll find circular orbits where the change in the effective potential energy, as r changes, is zero. There’s an easy-to-understand intuitive idea of where to find these points. Look at a plot of Veff and imagine this is a smooth track or the cross-section of a bowl or the landscaping of a hill. Imagine dropping a ball or a marble or a bearing or something small enough to roll in it. Where does it roll to a stop? That’s where the change is zero.

    It’s too much bother to make a bowl or landscape a hill or whatnot for every problem we’re interested in. We might do it anyway. Mathematicians used to, to study problems that were too complicated to do by useful estimates. These were “analog computers”. They were big in the days before digital computers made it no big deal to simulate even complicated systems. We still need “analog computers” or models sometimes. That’s usually for problems that involve chaotic stuff like turbulent fluids. We call this stuff “wind tunnels” and the like. It’s all a matter of solving equations by building stuff.

    We’re not working with problems that complicated. There isn’t the sort of chaos lurking in this problem that drives us to real-world stuff. We can find these equilibriums by working just with symbols instead.
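Working with symbols, the circular-orbit condition is just "solve dVeff/dr = 0". Here's a minimal sketch using sympy, assuming the standard gravitational effective potential Veff(r) = L²/(2mr²) − GMm/r (the post hasn't pinned down a specific force here, so that form is my assumption):

```python
import sympy as sp

r, L, m, G, M = sp.symbols('r L m G M', positive=True)

# Assumed effective potential for the gravitational central-force problem:
# angular-momentum barrier plus the ordinary gravitational potential
V_eff = L**2 / (2*m*r**2) - G*M*m / r

# Circular orbits sit where the change of V_eff with r is zero,
# the spot where the marble rolls to a stop
equilibria = sp.solve(sp.diff(V_eff, r), r)
print(equilibria)  # [L**2/(G*M*m**2)]
```

That single equilibrium radius, L²/(GMm²), is exactly where the ball dropped into the Veff bowl would settle.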

  • Joseph Nebus 6:00 pm on Sunday, 25 September, 2016 Permalink | Reply
    Tags: , , , ,   

    Reading the Comics, September 24, 2016: Infinities Happen Edition 

    I admit it’s a weak theme. But two of the comics this week give me reason to talk about infinitely large things and how the fact of being infinitely large affects the probability of something happening. That’s enough for a mid-September week of comics.

    Kieran Meehan’s Pros and Cons for the 18th of September is a lottery problem. There’s a fun bit of mathematical philosophy behind it. Supposing that a lottery runs long enough without changing its rules, and that it does draw its numbers randomly, it does seem to follow that any valid set of numbers will come up eventually. At least, the probability is 1 that the pre-selected set of numbers will come up if the lottery runs long enough. But that doesn’t mean it’s assured. There’s not any law, physical or logical, compelling every set of numbers to come up. But that is exactly akin to tossing a fair coin infinitely many times and having it come up tails every single time. There’s no reason that can’t happen. It just won’t.
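The "probability 1 but not assured" idea can be made concrete with a toy lottery. The odds below are made up for illustration; the point is that the chance the chosen ticket has never come up shrinks toward zero as the draws pile up, yet never actually reaches it:

```python
from fractions import Fraction

# Made-up toy lottery: the chosen ticket wins one draw in a thousand
p_win = Fraction(1, 1000)

def prob_never_yet(n):
    """Chance the chosen numbers have NOT come up in n independent draws."""
    return (1 - p_win) ** n

for n in (10**3, 10**4, 10**5):
    # Shrinks toward zero with n, but the exact Fraction is always positive
    print(n, float(prob_never_yet(n)))
```

However many draws you wait, the exact probability of "tails every single time" stays strictly greater than zero; it only has limit zero.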

    'It's true, Dr Peel. I'm a bit of a psychic.' 'Would you share the winning lottery numbers with me?' '1, 10, 17, 39, 43, and 47'. 'Those are the winning lottery numbers?' 'Yes!' 'For this Tuesday?' 'Ah! That's where it gets a bit fuzzy.'

    Kieran Meehan’s Pros and Cons for the 18th of September, 2016. I can’t say whether any of these are supposed to be the PowerBall number. (The comic strip’s title is a revision of its original, which more precisely described its gimmick but was harder to remember: A Lawyer, A Doctor, and a Cop.)

    Leigh Rubin’s Rubes for the 19th name-drops chaos theory. It’s wordplay, as of course it is, since the mathematical chaos isn’t the confusion-and-panicky-disorder of the colloquial term. Mathematical chaos is about the bizarre idea that a system can follow exactly perfectly known rules, and yet still be impossible to predict. Henri Poincaré brought this disturbing possibility to mathematicians’ attention in the 1890s, in studying the question of whether the solar system is stable. But it lay mostly fallow until the 1960s when computers made it easy to work this out numerically and really see chaos unfold. The mathematician type in the drawing evokes Einstein without being too close to him, to my eye.

    Allison Barrows’s PreTeena rerun of the 20th shows some motivated calculations. It’s always fun to see people getting excited over what a little multiplication can do. Multiplying a little change by a lot of chances is one of the ways to understand integral calculus, and there’s much that’s thrilling in that. But cutting four hours a night of sleep is not a little thing and I wouldn’t advise it for anyone.

    Jason Poland’s Robbie and Bobby for the 20th riffs on Jorge Luis Borges’s Library of Babel. It’s a great image, the idea of the library containing every book possible. And it’s good mathematics also; it’s a good way to probe one’s understanding of infinity and of probability. Probably logic, also. After all, grant that the index to the Library of Babel is a book, and therefore in the library somehow. How do you know you’ve found the index that hasn’t got any errors in it?

    Ernie Bushmiller’s Nancy Classics for the 21st originally ran the 21st of September, 1949. It’s another example of arithmetic as a proof of intelligence. Routine example, although it’s crafted with the usual Bushmiller precision. Even the close-up, peering-into-your-soul image of Professor Stroodle in the second panel serves the joke; without it the stress on his wrinkled brow would be diffused. I can’t fault anyone not caring for the joke; it’s not much of one. But wow is the comic strip optimized to deliver it.

    Thom Bluemel’s Birdbrains for the 23rd is also a mathematics-as-proof-of-intelligence strip, although this one name-drops calculus. It’s also a strip that probably would have played better had it come out before Blackfish got people asking unhappy questions about Sea World and other aquariums keeping large, deep-ocean animals. I would’ve thought Comic Strip Master Command would have sent an advisory out on the topic.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 23rd is, among other things, a guide for explaining the difference between speed and velocity. Speed’s a simple number, a scalar in the parlance. Velocity is (most often) a two- or three-dimensional vector, a speed in some particular direction. This has implications for understanding how things move, such as pedestrians.
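The scalar-versus-vector distinction fits in a few lines of code; the numbers here are made up for illustration:

```python
import math

# Velocity as a two-dimensional vector: so much east, so much north
velocity = (3.0, 4.0)   # metres per second, say

# Speed is the magnitude of the velocity: a single scalar number
speed = math.hypot(*velocity)
print(speed)  # 5.0

# Same speed, opposite direction: a different velocity entirely
opposite = (-velocity[0], -velocity[1])
assert math.hypot(*opposite) == speed
```

Two pedestrians walking toward each other have the same speed but very different velocities, which is why the distinction matters for predicting where anything goes.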

  • Joseph Nebus 6:00 pm on Saturday, 24 September, 2016 Permalink | Reply
    Tags: handedness, ,   

    Some Thermomathematics Reading 

    I have been writing, albeit more slowly, this month. I’m also reading, also more slowly than usual. Here’s some things that caught my attention.

    One is from Elke Stangl, of the Elkemental blog. “Re-Visiting Carnot’s Theorem” is about one of the centerpieces of thermodynamics. It’s about how much work you can possibly get out of an engine, and how much must be lost no matter how good your engineering is. Thermodynamics is the secret spine of modern physics. It was born of supremely practical problems, many of them related to railroads or factories. And it teaches how much solid information can be drawn about a system if we know nothing about the components of the system. Stangl also brings ASCII art back from its Usenet and Twitter homes. There’s just stuff that is best done as a text picture.

    Meanwhile on the CarnotCycle blog Peter Mandel writes on “Le Châtelier’s principle”. This is related to the question of how temperatures affect chemical reactions: how fast they will be, how completely they’ll use the reagents. How a system that’s reached equilibrium will react to something that unsettles the equilibrium. We call that a perturbation. Mandel reviews the history of the principle, which hasn’t always been well-regarded, and explores why it might have gone under-appreciated for decades.

    And lastly MathsByAGirl has published a couple of essays on spirals. Who doesn’t like them? Three-dimensional spirals, that is, helixes, have some obvious things to talk about. A big one is that there’s such a thing as handedness. The mirror image of a coil is not the same thing as the coil flipped around. This handedness has analogues and implications through chemistry and biology. Two-dimensional spirals, by contrast, don’t have handedness like that. But we’ve grouped spirals into many different types, each with its own beauty. They’re worth looking at.

  • Joseph Nebus 6:00 pm on Wednesday, 21 September, 2016 Permalink | Reply
    Tags: , , L'Hopital's Rule, ,   

    L’Hopital’s Rule Without End: Is That A Thing? 

    I was helping a friend learn L’Hôpital’s Rule. This is a Freshman Calculus thing. (A different one from last week, it happens. Folks are going back to school, I suppose.) The friend asked me a point I thought shouldn’t come up. I’m certain it won’t come up in the exam my friend was worried about, but I couldn’t swear it wouldn’t happen at all. So this is mostly a note to myself to think it over and figure out whether the trouble could come up. And also so this won’t be my most accessible post; I’m sorry for that, for folks who aren’t calculus-familiar.

    L’Hôpital’s Rule is a way of evaluating the limit of one function divided by another, of f(x) divided by g(x). If the limit of \frac{f(x)}{g(x)} has either the form \frac{0}{0} or \frac{\infty}{\infty} then you’re not stuck. You can take the first derivative of the numerator and the denominator separately. The limit of \frac{f'(x)}{g'(x)}, if it exists, will be the same value.

    But it’s possible to have to do this several times over. I used the example of finding the limit, as x grows infinitely large, where f(x) = x^2 and g(x) = e^x. \frac{x^2}{e^x} goes to \frac{\infty}{\infty} as x grows infinitely large. The first derivatives, \frac{2x}{e^x} , also go to \frac{\infty}{\infty} . You have to repeat the process again, taking the first derivatives of numerator and denominator again. \frac{2}{e^x} finally goes to 0 as x gets infinitely large. You might have to do this a bunch of times. If f(x) were x^7 and g(x) again e^x you’d properly need to do this seven times over. With experience you figure out you can skip some steps. Of course students don’t have the experience to know they can skip ahead to the punch line there, but that’s what the practice in homework is for.
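If you want to watch the repeated applications of the rule happen, sympy can grind through them. This is a quick sketch of the x-squared-over-e-to-the-x example, not anything you'd need for the exam:

```python
import sympy as sp

x = sp.symbols('x')
f, g = x**2, sp.exp(x)

# Apply L'Hopital's Rule step by step: differentiate top and bottom
for _ in range(3):
    print(f / g)
    f, g = sp.diff(f, x), sp.diff(g, x)

# After two rounds the ratio is 2/e^x, whose limit is plainly zero
print(sp.limit(x**2 / sp.exp(x), x, sp.oo))  # 0
```

The loop prints the successive ratios x²/eˣ, 2x/eˣ, and 2/eˣ, which is exactly the chain of infinity-over-infinity forms worked through above.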

    Anyway, my friend asked whether it’s possible to get a pattern that always ends up with \frac{0}{0} or \frac{\infty}{\infty} and never breaks out of this. And that’s what’s got me stuck. I can think of a few patterns that would. Start out, for example, with f(x) = e^{3x} and g(x) = e^{2x}. Properly speaking, that would never end. You’d get an infinity-over-infinity pattern every derivative you took. Similarly, if you started with f(x) = \frac{1}{x} and g(x) = e^{-x} you’d never come to an end. As x got infinitely large both f(x) and g(x) would go to zero, and all their derivatives would go to zero over and over and over and over again.

    But those are special cases. Anyone looking at what they were doing instead of just calculating would look at, say, \frac{e^{3x}}{e^{2x}} and realize that’s the same as e^x which falls out of the L’Hôpital’s Rule formulas. Or \frac{\frac{1}{x}}{e^{-x}} would be the same as \frac{e^x}{x} which is an infinity-over-infinity form. But it takes only one derivative to break out of the infinity-over-infinity pattern.
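A computer algebra system makes the same move the thinking student does: simplify first, then take the limit. A quick sketch with sympy on that first trap:

```python
import sympy as sp

x = sp.symbols('x')

# Differentiating top and bottom of e^(3x)/e^(2x) never terminates ...
ratio = sp.exp(3*x) / sp.exp(2*x)

# ... but simplifying first shows it's just e^x, no rule needed
print(sp.simplify(ratio))         # exp(x)
print(sp.limit(ratio, x, sp.oo))  # oo
```

The simplification is what breaks the endless pattern, which is the heart of the question: are there ratios where no amount of simplifying helps?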

    So I can construct examples that never break out of a zero-over-zero or an infinity-over-infinity pattern if you calculate without thinking. And calculating without thinking is a common problem students have. Arguably it’s the biggest problem mathematics students have. But what I wonder is, are there ratios that end up in an endless zero-over-zero or infinity-over-infinity pattern even if you do think it out?

    And thus this note; I’d like to nag myself into thinking about that.

  • Joseph Nebus 6:00 pm on Sunday, 18 September, 2016 Permalink | Reply
    Tags: , , , , , ,   

    Reading the Comics, September 17, 2016: Show Your Work Edition 

    As though to reinforce how nothing was basically wrong, Comic Strip Master Command sent a normal number of mathematically themed comics around this past week. They bunched the strips up in the first half of the week, but that will happen. It was a fun set of strips in any event.

    Rob Harrell’s Adam @ Home for the 11th tells of a teacher explaining division through violent means. I’m all for visualization tools and if we are going to use them, the more dramatic the better. But I suspect Mrs Clark’s students will end up confused about what exactly they’ve learned. If a doll is torn into five parts, is that communicating that one divided by five is five? If the students were supposed to identify the mass of the parts of the torn-up dolls as the result of dividing one by five, was that made clear to them? Maybe it was. But there’s always the risk in a dramatic presentation that the audience will misunderstand the point. The showier the drama the greater the risk, it seems to me. But I did only get the demonstration secondhand; who knows how well it was done?

    Greg Cravens’ The Buckets for the 11th has the kid, Toby, struggling to turn a shirt backwards and inside-out without taking it off. As the commenters note, this is the sort of problem we get into all the time in topology. The field is about what we can say about shapes when we don’t worry about distance. If all we know about a shape is the ways it’s connected, the number of holes it has, whether we can distinguish one side from another, what else can we conclude? I believe commenter Mike is right: take one hand out the bottom of the shirt and slide it into the other sleeve from the outside end, and proceed from there. But I have not tried it myself. I haven’t yet started wearing long-sleeve shirts for the season.

    Bill Amend’s FoxTrot for the 11th — a new strip — does a story problem featuring pizzas cut into some improbable numbers of slices. I don’t say it’s unrealistic someone might get this homework problem. Just that the story writer should really ask whether they’ve ever seen a pizza cut into sevenths. I have a faint memory of being served a pizza cut into tenths by some daft pizza shop, which implies fifths is at least possible. Sevenths I refuse, though.

    Mark Tatulli’s Heart of the City for the 12th plays on the show-your-work directive many mathematics assignments carry. I like Heart’s showiness. But the point of showing your work is because nobody cares what (say) 224 divided by 14 is. What’s worth teaching is the ability to recognize what approaches are likely to solve what problems. What’s tested is whether someone can identify a way to solve the problem that’s likely to succeed, and whether that can be carried out successfully. This is why it’s always a good idea, if you are stumped on a problem, to write out how you think this problem should be solved. Writing out what you mean to do can clarify the steps you should take. And it can guide your instructor to whether you’re misunderstanding something fundamental, or whether you just missed something small, or whether you just had a bad day.

    Norm Feuti’s Gil for the 12th, another rerun, has another fanciful depiction of showing your work. The teacher’s got a fair complaint in the note. We moved away from tally marks as a way to denote numbers for reasons. Twelve depictions of apples are harder to read than the number 12. And they’re terrible if we need to depict numbers like one-half or one-third. Might be an interesting side lesson in that.

    Brian Basset’s Red and Rover for the 14th is a rerun and one I’ve mentioned in these parts before. I understand Red getting fired up to be an animator by the movie. It’s been a while since I watched Donald Duck in Mathmagic Land but my recollection is that while it was breathtaking and visually inventive it didn’t really get at mathematics. I mean, not at noticing interesting little oddities and working out whether they might be true always, or sometimes, or almost never. There is a lot of play in mathematics, especially in the exciting early stages where one looks for a thing to prove. But it’s also in seeing how an ingenious method lets you get just what you wanted to know. I don’t know that the short demonstrates enough of that.

    Punkinhead: 'Can you answer an arithmetic question for me, Julian?' Julian: 'Sure.' Punkinhead: 'What is it?'

    Bud Blake’s Tiger rerun for the 15th of September, 2016. I don’t get to talking about the art of the comics here, but, I quite like Julian’s expressions here. And Bud Blake drew fantastic rumpled clothes.

    Bud Blake’s Tiger rerun for the 15th gives Punkinhead the chance to ask a question. And it’s a great question. I’m not sure what I’d say arithmetic is, not if I’m going to be careful. Offhand I’d say arithmetic is a set of rules we apply to a set of things we call numbers. The rules are mostly about how we can take two numbers and a rule and replace them with a single number. And these turn out to correspond uncannily well with the sorts of things we do with counting, combining, separating, and doing some other stuff with real-world objects. That it’s so useful is why, I believe, arithmetic and geometry were the first mathematics humans learned. But much of geometry we can see. We can look at objects and see how they fit together. Arithmetic we have to infer from the way the stuff we like to count works. And that’s probably why it’s harder to do when we start school.

    What’s not good about that as an answer is that it actually applies to a lot of mathematical constructs, including those crazy exotic ones you sometimes see in science press. You know, the ones where there’s this impossibly complicated tangle with ribbons of every color and a headline about “It’s Revolutionary. It’s 46-Dimensional. It’s Breaking The Rules Of Geometry. Is It The Shape That Finally Quantizes Gravity?” or something like that. Well, describe a thing vaguely and it’ll match a lot of other things. But also when we look to new mathematical structures, we tend to look for things that resemble arithmetic. Group theory, for example, is one of the cornerstones of modern mathematical thought. It’s built around having a set of things on which we can do something that looks like addition. So it shouldn’t be a surprise that many groups have a passing resemblance to arithmetic. Mathematics may produce universal truths. But the ones we see are also ones we are readied to see by our common experience. Arithmetic is part of that common experience.
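The "set of things with something that looks like addition" idea is concrete enough to check by brute force. Here's a minimal sketch using the integers mod 5, one of the smallest groups that resembles ordinary arithmetic:

```python
from itertools import product

# Integers mod 5 under addition: a tiny group resembling arithmetic
elements = range(5)
op = lambda a, b: (a + b) % 5

# Closure: combining two elements always gives another element
assert all(op(a, b) in elements for a, b in product(elements, repeat=2))

# Associativity: grouping doesn't matter, just as with ordinary addition
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a, b, c in product(elements, repeat=3))

# An identity element (0) and an inverse for every element
assert all(op(a, 0) == a for a in elements)
assert all(any(op(a, b) == 0 for b in elements) for a in elements)

print("(Z_5, +) satisfies the group axioms")
```

Those four checked properties are the group axioms, and they're satisfied here precisely because the operation was built out of ordinary addition.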

    'Dude, you have something on your face.' 'Food? Ink? Zit? What??' 'I think it's math.' 'Oh, yeah. I fell asleep on my Calculus book.'

    Jerry Scott and Jim Borgman’s Zits for the 14th of September, 2016. Properly speaking that is ink on his face, but I suppose saying it’s calculus pins down where it came from. Just observing.

    Also Jerry Scott and Jim Borgman’s Zits for the 14th I think doesn’t really belong here. It’s just got a cameo appearance by the concept of mathematics. Dave Whamond’s Reality Check for the 17th similarly just mentions the subject. But I did want to reassure any readers worried after last week that Pierce recovered fine. Also that, you know, for not having a stomach for mathematics he’s doing well carrying on. Discipline will carry one far.

    • ivasallay 3:44 am on Monday, 19 September, 2016 Permalink | Reply

      You said, “Twelve depictions of apples are harder to read than the number 12.” It might be a little difficult to see at first, but the twelve apples were arranged to form the numerals 1 and 2. I thought it was rather clever.


  • Joseph Nebus 6:00 pm on Friday, 16 September, 2016 Permalink | Reply
    Tags: , ,   

    Dark Secrets of Mathematicians: Something About Integration By Parts 

    A friend took me up last night on my offer to help with any mathematics she was unsure about. I’m happy to do it, though of course it came as I was trying to shut things down for bed. But that part was inevitable and besides the exam was today. I thought it worth sharing here, though.

    There’s going to be some calculus in this. There’s no avoiding that. If you don’t know calculus, relax about what the symbols exactly mean. It’s a good trick. Pretend anything you don’t know is just a symbol for “some mathematics thingy I can find out about later, if I need to, and I don’t necessarily have to”.

    “Integration by parts” is one of the standard tricks mathematicians learn in calculus. It comes in handy if you want to integrate a function that itself looks like the product of two other functions. You find the integral of a function by breaking it up into two parts, one of which you differentiate and one of which you integrate. This gives you a product of functions and then a new integral to do. A product of functions is easy to deal with. The new integral … well, if you’re lucky, it’s an easier integral than you started with.

    As you learn integration by parts you learn to look for ways to break up functions so the new integral is easier. There’s no hard and fast rule for this. But bet on “the part that has a polynomial in it” as the part that’s better differentiated. “The part that has sines and cosines in it” is probably the part that’s better integrated. An exponential, like 2^x, is as easily differentiated as integrated. The exponential of a function, like say 2^{x^2}, is better differentiated. These usually turn out impossible to integrate anyway. At least impossible without using crazy exotic functions.

    So your classic integration-by-parts problem gives you an expression like this:

    \int x \sin(x) dx = -x \cos(x) + \int \cos(x) dx

    If you weren’t a mathematics major that might not look better to you, what with it still having integrals and cosines and stuff in it. But ask your mathematics friend. She’ll tell you. The thing on the right-hand side is way better. That last term, the integral of the cosine of x? She can do that in her sleep. It barely counts as work, at least by the time you’ve got in class to doing integration by parts. It’ll be -x\cos(x) + \sin(x) .
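If you have sympy handy you can check the classic example directly; this is just a sanity check, not part of the technique:

```python
import sympy as sp

x = sp.symbols('x')

# sympy's answer to the classic integration-by-parts example
result = sp.integrate(x * sp.sin(x), x)
print(result)   # -x*cos(x) + sin(x)

# And the easy leftover piece, the integral of cos(x)
print(sp.integrate(sp.cos(x), x))   # sin(x)
```

Differentiating the result gets x sin(x) back, which is the real test of any antiderivative.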

    But sometimes, especially if the function being integrated — the “integrand”, by the way, and good luck playing that in Scrabble — is a bunch of trig functions and exponentials, you get some sad situation like so:

    \int \sin(x) \cos(x) dx = \sin^2(x) - \int \sin(x) \cos(x) dx

    That is, the thing we wanted to integrate, on the left, turns up on the right too. The student sits down, feeling the futility of modern existence. We’re stuck with the original problem all over again and we’re short of tools to do something about it.

    This is the point my friend was confused by, and is the bit of dark magic I want to talk about here. We’re not stumped! We can fall back on one of those mathematics tricks we are always allowed to do. And it’s a trick that’s so simple it seems like it can’t do anything.

    It’s substitution. We are always allowed to substitute one thing for something else that’s equal to it. So in that above equation, what can we substitute, and for what? … Well, nothing in that particular bunch of symbols. We’re going to introduce a new one. It’s going to be the value of the integral we want to evaluate. Since it’s an integral, I’m going to call it ‘I’. You don’t have to call it that, but you’re going to anyway. It doesn’t need a more thoughtful name.

    So I shall define:

    I \equiv  \int \sin(x) \cos(x) dx

    The triple-equals-sign there is an extravagance, I admit. But it’s a common one. Mathematicians use it to say “this is defined to be equal to that”. Granted, that’s what the = sign means. But the triple sign connotes how we emphasize the definition part. That is, ‘I’ might have been anything at all, and we choose this of the universe of possibilities.

    How does this help anything? Well, it turns the integration-by-parts problem into this equation:

    I = \sin^2(x) - I

    And we want to know what ‘I’ equals. And now suddenly it’s easier to see that we don’t actually have to do any calculus from here on out. We can solve it the way we’d solve any problem in high school algebra, which is, move ‘I’ to the other side. Formally, we add the same thing to the left- and the right-hand sides. That’s ‘I’ …

    2I = \sin^2(x)

    … and then divide both sides by the same number, 2 …

    I = \frac{1}{2}\sin^2(x)

    And now remember that substitution is a free action. We can do it whenever we like, and we can undo it whenever we like. This is a good time to undo it. Putting the whole expression back in for ‘I’ we get …

    \int \sin(x) \cos(x) dx = \frac{1}{2}\sin^2(x)

    … which is the integral, evaluated.

    (Someone would like to point out there should be a ‘plus C’ in there. This someone is right, for reasons that would take me too far afield to describe right now. We can overlook it for now anyway. I just want that someone to know I know what you’re thinking and you’re not catching me on this one.)
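As a sanity check on the dark magic, sympy agrees with the substitution trick; this is just verification, done the unenlightening brute-force way:

```python
import sympy as sp

x = sp.symbols('x')

# The integral we called 'I', evaluated directly
antideriv = sp.integrate(sp.sin(x) * sp.cos(x), x)
print(antideriv)

# The real test: differentiating must give the original integrand back
assert sp.simplify(sp.diff(antideriv, x) - sp.sin(x)*sp.cos(x)) == 0
```

(Antiderivatives are only unique up to that ‘plus C’, so the differentiate-and-compare check is the honest one; sympy might in principle hand back an equivalent form like -cos²(x)/2 and still be right.)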

    Sometimes, the integration by parts will need two or even three rounds before you get back the original integrand. This is because the instructor has chosen a particularly nasty problem for homework or the exam. It is not hopeless! But you will see strange constructs like 4/5 I equalling something. Carry on.

    What makes this a bit of dark magic? I think it’s because of habits. We write down something simple on the left-hand side of an equation. We get an expression for what the right-hand side should be, and it’s usually complicated. And then we try making the right-hand side simpler and simpler. The left-hand side started simple so we never even touch it again. Indeed, working out something like this it’s common to write the left-hand side once, at the top of the page, and then never again. We just write an equals sign, underneath the previous line’s equals sign, and stuff on the right. We forget the left-hand side is there, and that we can do stuff with it and to it.

    I think also we get into a habit of thinking an integral and integrand and all that is some quasi-magic mathematical construct. But it isn’t. It’s just a function. It may even be just a number. We don’t know what it is, but it will follow all the same rules of numbers, or functions. Moving it around may be more writing but it’s not different work to moving ‘4’ or ‘x^2’ around. That’s the value of replacing the integral with a symbol like ‘I’. It’s not that there’s something we can do with ‘I’ that we can’t do with ‘\int \sin(x)\cos(x) dx ‘, other than write it in under four pen strokes. It’s that in algebra we learned the habit of moving a letter around to where it’s convenient. Moving a whole integral expression around seems different.

    But it isn’t. It’s the same work done, just on a different kind of mathematics. I suspect finding out that it could be a trick that simple throws people off.

    • elkement (Elke Stangl) 8:37 am on Sunday, 18 September, 2016 Permalink | Reply

      Ha – spotted that immediately! Finally all that sloppy calculus you do as a physicist has paid off :-) You are not afraid using huge integrals ‘just as a number’, e.g. in a series or as an exponent … always silently assuming that functions are well behaved, converge or whatever terms you mathematicians use for that! ;-)
      That trick is used over and over in quantum physics, in perturbation theories, when you turn a differential equation into an integral equation, and then just use the first few summands if, typically, a potential / perturbation is small (as operators would be defined recursively in the original equation and you finally want the true solution to be defined in relation to the undisturbed solution). One challenge is to disentangle double integrals by ‘time-ordering’ so that a double integral becomes just the square of two integrals times a factor.


      • Joseph Nebus 2:50 am on Monday, 19 September, 2016 Permalink | Reply

        Well, I must admit, I went to grad school in a program with a very strong applied mathematics tradition. (The joke around the department was that it had two tracks, Applied Mathematics and More Applied Mathematics.) It definitely helped my getting used to thinking of a definite integral as just a number, that could be manipulated or moved around as needed. An indefinite integral … well, it’s not properly a number, but it might as well be for this context.

        (I was considering explaining the differences between definite and indefinite integrals, but that seemed a little too far a diversion and too confusing a one. Might make that a separate post sometime when I need to fill out a slow week.)


  • Joseph Nebus 6:00 pm on Tuesday, 13 September, 2016 Permalink | Reply
    Tags: , , , story problems   

    Reading the Comics, September 10, 2016: Finishing The First Week Of School Edition 

    I understand in places in the United States last week wasn’t the first week of school. It was the second or third or even worse. These places are crazy, in that they do things differently from the way my elementary school did it. So, now, here’s the other half of last week’s comics.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 8th is a little freak-out about existence. Mathematicians rely on the word “exists”. We suppose things to exist. We draw conclusions about other things that do exist or do not exist. And these things that “exist” are not things that exist the way physical objects do. It’s a bit heady to realize nobody can point to, or trap in a box, or even draw a line around “3”. We can at best talk about stuff that expresses some property of three-ness. We talk about things like “triangles” and we even draw and use representations of them. But those drawings we make aren’t Triangles, the thing mathematicians mean by the concept. They’re at best cartoons, little training wheels to help us get the idea down. Here I regret that as an undergraduate I didn’t take philosophy courses that challenged me. It seems certain to me mathematicians are using some notion of the Platonic Ideal when we speak of things “existing”. But what does that mean, to a mathematician, to a philosopher, and to the person who needs an attractive tile pattern on the floor?

    Cathy Thorne’s Everyday People Cartoons for the 9th is about another bit of the philosophy of mathematics. What are the chances of something that did happen? What does it mean to talk about the chance of something happening? When introducing probability mathematicians like to set it up as “imagine this experiment, which has a bunch of possible outcomes. One of them will happen and the other possibilities will not” and we go on to define a probability from that. That seems reasonable, perhaps because we’re accepting ignorance. We may know (say) that a coin toss is, in principle, perfectly deterministic. If we knew exactly how the coin is made. If we knew exactly how it is tossed. If we knew exactly how the air currents would move during its fall. If we knew exactly what the surface it might bounce off before coming to rest is like. Instead we pretend all this knowable stuff is not, and call the result unpredictability.

    But what about events in the past? We can imagine them coming out differently. But the imagination crashes hard when we try to say why they would. If we gave the exact same coin the exact same toss in the exact same circumstances how could it land on anything but the exact same face? In which case how can there have been any outcome other than what did happen? Yes, I know, someone wants to rush in and say “Quantum!” Say back to that person, “waveform collapse” and wait for a clear explanation of what exactly that is. There are things we understand poorly about the transition between the future and the past. The language of probability is a reminder of this.

    Hilary Price’s Rhymes With Orange for the 10th uses the classic story-problem setup of a train leaving the station. It does make me wonder how far back this story setup goes, and what they did before trains were common. Horse-drawn carriages leaving stations, I suppose, or maybe ships at sea. I quite like the teaser joke in the first panel more.

    Caption: Lorraine felt like God was always testing her. She's in a car. God's voice calls, 'A train leaves the station travelling east at 70 mph. At the same time ...' The intro panel, 'The Journey', features Lorraine thinking, 'Shouldn't you be busy rooting for some pro athlete?'

    Hilary Price’s Rhymes With Orange for the 10th of September, 2016. 70 mph? Why not some nice easy number like 60 mph instead? God must really be testing.

    Dan Collins’s Looks Good on Paper for the 10th is the first Möbius Strip joke we’ve had in a while. I’m amused and I do like how much incidental stuff there is. The joke would read just fine without the opossum family crossing the road, but it’s a better strip for having it. Somebody in the comments complained that as drawn it isn’t a Möbius Strip proper; there should be (from our perspective) another half-twist in the road. I’m willing to grant it’s there and just obscured by the crossing-over where the car is, because — as Collins points out — it’s really hard to draw a Möbius Strip recognizably. You try it, and then try making it read cleanly while there’s, at minimum, a road and a car on the strip. That said, I can’t see that the road sign in the lower-left, by the opossums, is facing the right direction. Maybe for a road as narrow as this it still counts as a two-lane road.

    Tom Toles’s Randolph Itch, 2 am rerun for the 10th is an Einstein The Genius comic. It felt familiar to me, but I don’t seem to have included it in previous Reading The Comics posts. Perhaps I noticed it some week that I figured a mere appearance of Einstein didn’t rate inclusion. Randolph certainly fell asleep while reading about mathematics, though.

    It’s popular to tell tales of Einstein not being a very good student, and of not being that good in mathematics. It’s easy to see why. We’d all like to feel a little more like a superlative mind such as that. And Einstein worked hard to develop an image of being accessible and personable. It fits with the charming absent-minded professor image everybody but forgetful professors loves. It feels dramatically right that Einstein should struggle with arithmetic like so many of us do. It’s nonsense, though. When Einstein struggled with mathematics, it was on the edge of known mathematics. He needed advice and consultations for the non-Euclidean geometries core to general relativity? Who doesn’t? I can barely make my way through the basic notation.

    Anyway, it’s pleasant to see Toles holding up Einstein for his amazing mathematical prowess. It was a true thing.

  • Joseph Nebus 6:00 pm on Sunday, 11 September, 2016 Permalink | Reply
    Tags: , , , , ,   

    Reading the Comics, September 6, 2016: Oh Thank Goodness We’re Back Edition 

    That’s a relief. After the previous week’s suspicious silence Comic Strip Master Command sent a healthy number of mathematically-themed comics my way. They cover a pretty normal spread of topics. So this makes for a nice normal sort of roundup.

    Mac King and Bill King’s Magic In A Minute for the 4th is an arithmetic-magic trick. Like most arithmetic magic it depends on some true but, to me, dull bit of mathematics. In this case, that 81,234,567 minus 12,345,678 is equal to something. As a kid this sort of trick never impressed me because, well, anyone can do subtraction. I didn’t appreciate that the fun of stage magic is in presenting the mundane well.

    Jerry Scott and Jim Borgman’s Zits for the 5th is an ordinary mathematics-is-hard joke. But it’s elevated by the artwork, which shows off the expressive and slightly surreal style that makes the comic so reliable and popular. The formulas look fair enough, the sorts of things someone might’ve been cramming before class. If they’re a bit jumbled up, well, Pierce hasn’t been well.

    'Are you okay, Pierce? You don't look so good.' Pierce indeed throws up, nastily. 'I don't have a stomach for math.' He's vomited a table full of trigonometry formulas, some of them gone awry.

    Jerry Scott and Jim Borgman’s Zits for the 5th of September, 2016. It sure looks to me like there’s more things being explicitly multiplied by ‘1’ than are needed, but it might be the formulas got a little scrambled as Pierce vomited. We’ve all been there. Fun fact: apart from a bit in Calculus I where they drill you on differentiation formulas you never really need the secant. It makes a couple formulas a little more compact and that’s it, so if it’s been nagging at your mind go ahead and forget it.

    Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 6th is an anthropomorphic-shapes joke and I feel like it’s been here before. Ah, yeah, there it is, from about this time last year. It’s a fair one to rerun.

    Mustard and Boloney popped back in on the 8th with a strip I don’t have in my archive at least. It’s your standard Pi Pun, though. If they’re smart they’ll rerun it in March. I like the coloring; it’s at least a pleasant panel to look at.

    Percy Crosby’s Skippy from the 9th of July, 1929 was rerun the 6th of September. It seems like a simple kid-saying-silly-stuff strip: what is the difference between the phone numbers Clinton 2651 and Clinton 2741 when they add to the same number? (And if Central knows what the number is why do they waste Skippy’s time correcting him? And why, 87 years later, does the phone yell at me for not guessing correctly whether I need the area code for a local number and whether I need to dial 1 before that?) But then who cares what the digits in a telephone number add to? What could that tell us about anything?

    As phone numbers historically developed, the sum can’t tell us anything at all. But if we had designed telephone numbers correctly we could have made it … not impossible to dial a wrong number, but at least made it harder. This insight comes to us from information theory, which, to be fair, we have because telephone companies spent decades trying to work out solutions to problems like people dialing numbers wrong or signals getting garbled in the transmission. We can allow for error detection by schemes as simple as passing along, besides the numbers, the sum of the numbers. This can allow for the detection of a single error: had Skippy called for number 2641 instead of 2741 the problem would be known. But it’s helpless against two errors, calling for 2541 instead of 2741. But we could detect a second error by calculating some second term based on the number we wanted, and sending that along too.

    By adding some more information, other modified sums of the digits we want, we can even start correcting errors. We understand the logic of this intuitively. When we repeat a message twice after sending it, we are trusting that even if one copy of the message is garbled the recipient will take the version received twice as more likely what’s meant. We can design subtler schemes, ones that don’t require we repeat the number three times over. But that should convince you that we can do it.

    The tradeoff is obvious. We have to say more digits of the number we want. It isn’t hard to reach the point we’re sending more error-detecting and error-correcting numbers than we are numbers we want. And what if we make a mistake in the error-correcting numbers? (If we used a smart enough scheme, we could work out that the error was in the error-correcting number, and relax.) If it’s important that we get the message through, we shrug and accept this. If there’s no real harm done in getting the message wrong — if we can shrug off the problem of accidentally getting the wrong phone number — then we don’t worry about making a mistake.
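    If you’d like to play with the idea, here’s a toy version in Python. This is nothing any phone company ever deployed; appending the digit sum, modulo 10, as one extra “check” digit is about the simplest scheme imaginable, and the numbers are the ones from Skippy’s strip:

```python
def with_check_digit(number: str) -> str:
    """Append the digit sum, mod 10, as a single check digit."""
    return number + str(sum(int(d) for d in number) % 10)

def looks_valid(dialed: str) -> bool:
    """Recompute the check digit and compare it to the one dialed."""
    body, check = dialed[:-1], dialed[-1]
    return str(sum(int(d) for d in body) % 10) == check

sent = with_check_digit("2741")   # "27414": digits 2+7+4+1 = 14, check digit 4
print(looks_valid(sent))          # True: nothing went wrong
print(looks_valid("26414"))       # False: one digit wrong, and the scheme catches it
print(looks_valid("26514"))       # True: two errors that cancel slip right through
```

    Notice that last case: 2651 and 2741 add to the same number, which is exactly Skippy’s point, and why a single check digit can’t catch every pair of errors.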

    And at this point we’re only a few days into the week. I have enough hundreds of words on the close of the week I’ll put off posting that a couple of days. It’s quite good having the comics back to normal.

  • Joseph Nebus 6:00 pm on Thursday, 8 September, 2016 Permalink | Reply
    Tags: , , , , , , , , ,   

    Why Stuff Can Orbit, Part 4: On The L 

    Less way previously:

    We were chatting about central forces. In these a small object — a satellite, a planet, a weight on a spring — is attracted to the center of the universe, called the origin. We’ve been studying this by looking at potential energy, a function that in this case depends only on how far the object is from the origin. But to find circular orbits, we can’t just look at the potential energy. We have to modify this potential energy to account for angular momentum. This essay I mean to discuss that angular momentum some.

    Let me talk first about the potential energy. Mathematical physicists usually write this as a function named U or V. I’m using V. That’s what my professor used teaching this, back when I was an undergraduate several hundred thousand years ago. A central force, by definition, changes only with how far you are from the center. I’ve put the center at the origin, because I am not a madman. This lets me write the potential energy as V = V(r).

    V(r) could, in principle, be anything. In practice, though, I am going to want it to be r raised to a power. That is, V(r) is equal to C r^n. The ‘C’ here is a constant. It’s a scaling constant. The bigger a number it is the stronger the central force. The closer the number is to zero the weaker the force is. In standard units, gravity has a constant incredibly close to zero. This makes orbits very big things, which generally works out well for planets. In the mathematics of masses on springs, the constant is closer to middling little numbers like 1.

    The ‘n’ here is a deceiver. It’s a constant number, yes, and it can be anything we want. But the use of ‘n’ as a symbol has connotations. Usually when a mathematician or a physicist writes ‘n’ it’s because she needs a whole number. Usually a positive whole number. Sometimes it’s negative. But we have a legitimate central force if ‘n’ is any real number: 2, -1, one-half, the square root of π, any of that is good. If you just write ‘n’ without explanation, the reader will probably think “integers”, possibly “counting numbers”. So it’s worth making explicit when this isn’t so. It’s bad form to surprise the reader with what kind of number you’re even talking about.

    (Some number of essays on we’ll find out that the only values ‘n’ can have that are worth anything are -1, 2, and 7. And 7 isn’t all that good. But we aren’t supposed to know that yet.)

    C r^n isn’t the only kind of central force that could exist. Any function rule would do. But it’s enough. If we wanted a more complicated rule we could just add two, or three, or more potential energies together. This would give us V(r) = C_1 r^{n_1} + C_2 r^{n_2} , with C_1 and C_2 two possibly different numbers, and n_1 and n_2 two definitely different numbers. (If n_1 and n_2 were the same number then we should just add C_1 and C_2 together and stop using a more complicated expression than we need.) Remember that Newton’s Law of Motion about the sum of multiple forces being something vector something something direction? When we look at forces as potential energy functions, that law turns into just adding potential energies together. They’re well-behaved that way.

    And if we can add these r-to-a-power potential energies together then we’ve got everything we need. Why? Polynomials. We can approximate most any potential energy that would actually happen with a big enough polynomial. Or at least a polynomial-like function. These r-to-a-power forces are a basis set for all the potential energies we’re likely to care about. Understand how to work with one and you understand how to work with them all.

    Well, one exception. The logarithmic potential, V(r) = C log(r), is really interesting. And it has real-world applicability. It describes how strongly two vortices, two whirlpools, attract each other. You can approximate the logarithm as closely as you like with a polynomial. But logarithms are pretty well-behaved functions. You might be better off just treating it as a special case.

    Still, at least to start with, we’ll stick with V(r) = C r^n and you know what I mean by all those letters now. So I’m free to talk about angular momentum.

    You’ve probably heard of momentum. It’s got something to do with movement, only sports teams and political campaigns are always gaining or losing it somehow. When we talk of that we’re talking of linear momentum. It describes how much mass is moving how fast in what direction. So it’s a vector, in three-dimensional space. Or two-dimensional space if you’re making the calculations easier. To find what the vector is, we make a list of every object that’s moving. We take its velocity — how fast it’s moving and in what direction — and multiply that by its mass. Mass is a single number, a scalar, and we’re always allowed to multiply a vector by a scalar. This gets us another vector. Once we’ve done that for everything that’s moving, we add all those product vectors together. We can always add vectors together. And this gives us a grand total vector, the linear momentum of the system.

    And that’s conserved. If one part of the system starts moving slower it’s because other parts are moving faster, and vice-versa. In the real world momentum seems to evaporate. That’s because some of the stuff moving faster turns out to be air objects bumped into, or particles of the floor that get dragged along by friction, or other stuff we don’t care about. That momentum can seem to evaporate is what makes its use in talking about sports teams or political campaigns make sense. It also annoys people who want you to know they understand science words better than you. So please consider this my authorization to use “gaining” and “losing” momentum in this sense. Ignore complainers. They’re the people who complain the word “decimate” gets used to mean “destroy way more than ten percent of something”, even though that’s the least bad mutation of an English word’s meaning in three centuries.

    Angular momentum is also a vector. It’s also conserved. We can calculate what that vector is by the same sort of process, that of calculating something on each object that’s spinning and adding it all up. In real applications it can seem to evaporate. But that’s also because the angular momentum is going into particles of air. Or it rubs off grease on the axle. Or it does other stuff we wish we didn’t have to deal with.

    The calculation is a little harder to deal with. There’s three parts to a spinning thing. There’s the thing, and there’s how far it is from the axis it’s spinning around, and there’s how fast it’s spinning. So you need to know how fast it’s travelling in the direction perpendicular to the shortest line between the thing and the axis it’s spinning around. Its angular momentum is going to be as big as the mass times the distance from the axis times the perpendicular speed. It’s going to be pointing in whichever axis direction makes its movement counterclockwise. (Because that’s how physicists started working this out and it would be too much bother to change now.)
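    For a single object moving in a plane, that mass-times-distance-times-perpendicular-speed rule collapses into one line of arithmetic: the distance times the perpendicular speed is the two-dimensional cross product of position and velocity. A quick Python sketch, with a function name and numbers I’ve made up just for illustration:

```python
# Angular momentum of one object about the origin, for motion in a plane.
# For position (x, y) and velocity (vx, vy), distance-times-perpendicular-speed
# works out to the two-dimensional cross product x*vy - y*vx.
def angular_momentum(m, x, y, vx, vy):
    return m * (x * vy - y * vx)

# Mass 2 sitting at distance 3 along the x-axis, moving counterclockwise at speed 5:
print(angular_momentum(2.0, 3.0, 0.0, 0.0, 5.0))   # 30.0, positive for counterclockwise
# The same setup moving clockwise instead:
print(angular_momentum(2.0, 3.0, 0.0, 0.0, -5.0))  # -30.0
```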

    You might ask: wait, what about stuff like a wheel that’s spinning around its center? Or a ball being spun? That can’t be an angular momentum of zero? How do we work that out? The answer is: calculus. Also, we don’t need that. This central force problem I’ve framed so that we barely even need algebra for it.

    See, we only have a single object that’s moving. That’s the planet or satellite or weight or whatever it is. It’s got some mass, the value of which we call ‘m’ because why make it any harder on ourselves. And it’s spinning around the origin. We’ve been using ‘r’ to mean the number describing how far it is from the origin. That’s the distance to the axis it’s spinning around. Its velocity — well, we don’t have any symbols to describe what that is yet. But you can imagine working that out. Or you trust that I have some clever mathematical-physics tool ready to introduce to work it out. I have, kind of. I’m going to ignore it altogether. For now.

    The symbol we use for the total angular momentum in a system is \vec{L} . The little arrow above the symbol is one way to denote “this is a vector”. It’s a good scheme, what with arrows making people think of vectors and it being easy to write on a whiteboard. In books, sometimes, we make do just by putting the letter in boldface, L, which is easier for old-fashioned word processors to do. If we’re sure that the reader isn’t going to forget that L is this vector then we might stop highlighting the fact altogether. That’s even less work to do.

    It’s going to be less work yet. Central force problems like this mean the object can move only in a two-dimensional plane. (If it didn’t, it wouldn’t conserve angular momentum: the direction of \vec{L} would have to change. Sounds like magic, but trust me.) The angular momentum’s direction has to be perpendicular to that plane. If the object is spinning around on a sheet of paper, the angular momentum is pointing straight outward from the sheet of paper. It’s pointing toward you if the object is moving counterclockwise. It’s pointing away from you if the object is moving clockwise. What direction it’s pointing is locked in.

    All we need to know is how big this angular momentum vector is, and whether it’s positive or negative. So we just care about this number. We can call it ‘L’, no arrow, no boldface, no nothing. It’s just a number, the same as is the mass ‘m’ or distance from the origin ‘r’ or any of our other variables.

    If ‘L’ is zero, this means there’s no total angular momentum. This means the object can be moving directly out from the origin, or directly in. This is the only way that something can crash into the center. So if setting L to be zero doesn’t allow that then we know we did something wrong, somewhere. If ‘L’ isn’t zero, then the object can’t crash into the center. If it did we’d be losing angular momentum. The object’s mass times its distance from the center times its perpendicular speed would have to be some non-zero number, even when the distance was zero. We know better than to look for that.

    You maybe wonder why we use ‘L’ of all letters for the angular momentum. I do. I don’t know. I haven’t found any sources that say why this letter. Linear momentum, which we represent with \vec{p} , I know. Or, well, I know the story every physicist says about it. p is the designated letter for linear momentum because we used to use the word “impetus”, as in “impulse”, to mean what we mean by momentum these days. And “p” is the first letter in “impetus” that isn’t needed for some more urgent purpose. (“m” is too good a fit for mass. “i” has to work both as an index and as that number which, squared, gives us -1. And for that matter, “e” we need for that exponentials stuff, and “t” is too good a fit for time.) That said, while everybody, everybody, repeats this, I don’t know the source. Perhaps it is true. I can imagine, say, Euler or Lagrange in their writing settling on “p” for momentum and everybody copying them. I just haven’t seen a primary citation showing this is so.

    (I don’t mean to sound too unnecessarily suspicious. But just because everyone agrees on the impetus-thus-p story doesn’t mean it’s so. I mean, every Star Trek fan or space historian will tell you that the first space shuttle would have been named Constitution until the Trekkies wrote in and got it renamed Enterprise. But the actual primary documentation that the shuttle would have been named Constitution is weak to nonexistent. I’ve come to the conclusion NASA had no plan in mind to name space shuttles until the Trekkies wrote in and got one named. I’ve done less poking around the impetus-thus-p story, in that I’ve really done none, but I do want it on record that I would like more proof.)

    Anyway, “p” for momentum is well-established. So I would guess that when mathematical physicists needed a symbol for angular momentum they looked for letters close to “p”. When you get into more advanced corners of physics “q” gets called on to be position a lot. (Momentum and position, it turns out, are nearly-identical-twins mathematically. So making their symbols p and q offers aesthetic charm. Also great danger if you make one little slip with the pen.) “r” is called on for “radius” a lot. Looking on, “t” is going to be time.

    On the other side of the alphabet, well, “o” is just inviting danger. “n” we need to count stuff. “m” is mass or we’re crazy. “l” might have just been the nearest we could get to “p” without intruding on a more urgently-needed symbol. (“s” we use a lot for parameters like length of an arc that work kind of like time but aren’t time.) And then shift to the capital letter, I expect, because a lowercase l looks like a “1”, to everybody’s certain doom.

    The modified potential energy, then, is going to include the angular momentum L. At least, the amount of angular momentum. It’s also going to include the mass of the object moving, and the radius r that says how far the object is from the center. It will be:

    V_{eff}(r) = V(r) + \frac{L^2}{2 m r^2}

    V(r) was the original potential, whatever that was. The modifying term, with this square of the angular momentum and all that, I kind of hope you’ll just accept on my word. The L^2 means that whether the angular momentum is positive or negative, the potential will grow very large as the radius gets small. If it didn’t, there might not be orbits at all. And if the angular momentum is zero, then the effective potential is the same original potential that let stuff crash into the center.

    For the sort of r-to-a-power potentials I’ve been looking at, I get an effective potential of:

    V_{eff}(r) = C r^n + \frac{L^2}{2 m r^2}

    where n might be an integer. I’m going to pretend a while longer that it might not be, though. C is certainly some number, maybe positive, maybe negative.

    If you pick some values for C, n, L, and m you can sketch this out. If you just want a feel for how this V_eff looks it doesn’t much matter what values you pick. Changing values just changes the scale, that is, where a circular orbit might happen. It doesn’t change whether it happens. Picking some arbitrary numbers is a good way to get a feel for how this sort of problem works. It’s good practice.

    Sketching will convince you there are energy minimums, where we can get circular orbits. It won’t say where to find them without some trial-and-error or building a model of this energy and seeing where a ball bearing dropped into it rolls to a stop. We can do this more efficiently.
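    If you’d rather let a computer do the sketching, here’s a crude search in Python. The values C = 1, n = 2, L = 1, m = 1 are arbitrary picks of mine, and scanning a grid of radii is the ball-bearing-in-a-bowl method, not the efficient way:

```python
def v_eff(r, C=1.0, n=2.0, L=1.0, m=1.0):
    """Effective potential V_eff(r) = C r^n + L^2 / (2 m r^2)."""
    return C * r**n + L**2 / (2.0 * m * r**2)

# Crude minimum search: try a grid of radii, keep the one with the lowest energy.
radii = [0.01 * k for k in range(1, 1000)]   # r from 0.01 out to 9.99
r_min = min(radii, key=v_eff)

print(round(r_min, 2))   # about 0.84 for these picks: a circular orbit lives there
```

    Changing C, n, L, or m slides that minimum in or out, but for values that allow orbits at all the dip stays a dip, which is the point of the sketch.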
