Tagged: calculus

  • Joseph Nebus 6:00 pm on Monday, 18 September, 2017 Permalink | Reply
    Tags: calculus, Jacobian

    The Summer 2017 Mathematics A To Z: Volume Forms 


    I’ve been reading Elke Stangl’s Elkemental Force blog for years now. Sometimes I even feel social-media-caught-up enough to comment, or at least to like posts. This is relevant today as I discuss one of Stangl’s suggestions for my letter-V topic.

    Volume Forms.

    So sometime in pre-algebra, or early in (high school) algebra, you start drawing equations. It’s a simple trick. Lay down a coordinate system, some set of axes for ‘x’ and ‘y’ and maybe ‘z’ or whatever letters are important. Look to the equation, made up of x’s and y’s and maybe z’s and so on. Highlight all the points with coordinates whose values make the equation true. This is the logical basis for saying (e.g.) that the straight line “is” y = 2x + 1 .

    A short while later, you learn about polar coordinates. Instead of using ‘x’ and ‘y’, you have ‘r’ and ‘θ’. ‘r’ is the distance from the center of the universe. ‘θ’ is the angle made with respect to some reference axis. It’s as legitimate a way of describing points in space. Some classrooms even have a part of the blackboard (whiteboard, whatever) with a polar-coordinates “grid” on it. This looks like the lines of a dartboard. And you learn that some shapes are easy to describe in polar coordinates. A circle, centered on the origin, is ‘r = 2’ or something like that. A line through the origin is ‘θ = 1’ or whatever. The line that we’d called y = 2x + 1 before? … That’s … some mess. And now r = 2\theta + 1 … that’s not even a line. That’s some kind of spiral. Two spirals, really. Kind of wild.
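
    To see just what kind of mess, substitute the standard relations x = r\cos\theta and y = r\sin\theta into the line’s equation and solve for ‘r’:

    r\sin\theta = 2r\cos\theta + 1 \quad\Rightarrow\quad r = \frac{1}{\sin\theta - 2\cos\theta}

    The tidy line demands a ratio of trigonometric functions once it’s written in polar coordinates.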

    And something to bother you a while. y = 2x + 1 is an equation that looks the same as r = 2\theta + 1 . You’ve changed the names of the variables, but not how they relate to each other. But one is a straight line and the other a spiral thing. How can that be?

    The answer, ultimately, is that the letters in the equations aren’t these content-neutral labels. They carry meaning. ‘x’ and ‘y’ imply looking at space a particular way. ‘r’ and ‘θ’ imply looking at space a different way. A shape has different representations in different coordinate systems. Fair enough. That seems to settle the question.

    But if you get to calculus the question comes back. You can integrate over a region of space that’s defined by Cartesian coordinates, x’s and y’s. Or you can integrate over a region that’s defined by polar coordinates, r’s and θ’s. The first time you try this, you find … well, that any region easy to describe in Cartesian coordinates is painful in polar coordinates. And vice-versa. Way too hard. But if you struggle through all that symbol manipulation, you get … different answers. Eventually the calculus teacher has mercy and explains. If you’re integrating in Cartesian coordinates you need to use “dx dy”. If you’re integrating in polar coordinates you need to use “r dr dθ”. If you’ve never taken calculus, never mind what this means. What is important is that “r dr dθ” looks like three things multiplied together, while “dx dy” is two.
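
    A quick worked case shows why the extra ‘r’ matters. Take the disc of radius R centered on the origin, as easy a region as polar coordinates offer:

    \int_0^{2\pi} \int_0^R r \, dr \, d\theta = \int_0^{2\pi} \frac{R^2}{2} \, d\theta = \pi R^2

    Drop the ‘r’ and the same calculation gives 2\pi R, the circumference, which isn’t even the right kind of quantity for an area.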

    We get this explained as a “change of variables”. If we want to go from one set of coordinates to a different one, we have to do something fiddly. The extra ‘r’ in “r dr dθ” is what we get going from Cartesian to polar coordinates. And we get formulas to describe what we should do if we need other kinds of coordinates. It’s some work that introduces us to the Jacobian, which looks like the most tedious possible calculation ever at that time. (In Intro to Differential Equations we learn we were wrong, and the Wronskian is the most tedious possible calculation ever. This is also wrong, but it might as well be true.) We typically move on after this and count ourselves lucky it got no worse than that.
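
    For the record, the polar-coordinates case of that tedious calculation: the Jacobian is the determinant of the matrix of partial derivatives of x = r\cos\theta and y = r\sin\theta, and it’s exactly the extra factor we were told to use:

    \frac{\partial(x, y)}{\partial(r, \theta)} = \det\begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix} = r\cos^2\theta + r\sin^2\theta = r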

    None of this is wrong, even from the perspective of more advanced mathematics. It’s not even misleading, which is a refreshing change. But we can look a little deeper, and get something good from doing so.

    The deeper perspective looks at “differential forms”. These are about how to encode information about how your coordinate system represents space. They’re tensors. I don’t blame you for wondering if they would be. A differential form uses interactions between some of the directions in a space. A volume form is a differential form that uses all the directions in a space. And satisfies some other rules too. I’m skipping those because some of the symbols involved I don’t even know how to look up, much less make WordPress present.

    What’s important is the volume form carries information compactly. As symbols it tells us that this represents a chunk of space that’s constant no matter what the coordinates look like. This makes it possible to do analysis on how functions work. It also tells us what we would need to do to calculate specific kinds of problem. This makes it possible to describe, for example, how something moving in space would change.

    The volume form, and the tools to do anything useful with it, demand a lot of supporting work. You can dodge having to explicitly work with tensors. But you’ll need a lot of tensor-related materials, like wedge products and exterior derivatives and stuff like that. If you’ve never taken freshman calculus don’t worry: the people who have taken freshman calculus never heard of those things either. So what makes this worthwhile?

    Yes, person who called out “polynomials”. Good instinct. Polynomials are usually a reason for any mathematics thing. This is one of maybe four exceptions. I have to appeal to my other standard answer: “group theory”. These volume forms match up naturally with groups. There’s not only information about how coordinates describe a space to consider. There are ways to set up coordinates that tell us things.

    That isn’t all. These volume forms can give us new invariants. Invariants are what mathematicians say instead of “conservation laws”. They’re properties whose value for a given problem is constant. This can make it easier to work out how one variable depends on another, or to work out specific values of variables.

    For example, classical physics problems like how a bunch of planets orbit a sun often have a “symplectic manifold” that matches the problem. This is a description of how the positions and momentums of all the things in the problem relate. The symplectic manifold has a volume form. That volume is going to be constant as time progresses. That is, there’s this way of representing the positions and speeds of all the planets that does not change, no matter what. It’s much like the conservation of energy or the conservation of angular momentum. And this has practical value. It’s the subject that brought my and Elke Stangl’s blogs into contact, years ago. It also has broader applicability.

    There’s no way to provide an exact answer for the movement of, like, the sun and nine-ish planets and a couple major moons and all that. So there’s no known way to answer the question of whether the Earth’s orbit is stable. All the planets are always tugging one another, changing their orbits a little. Could this converge in a weird way suddenly, on geologic timescales? Might the planet go flying off out of the solar system? It doesn’t seem like the solar system could be all that unstable, or it would have fallen apart already. But we can’t rule out that some freaky alignment of Jupiter, Saturn, and Halley’s Comet might tweak the Earth’s orbit just far enough for catastrophe to unfold. Granted there’s nothing we could do about the Earth flying out of the solar system, but it would be nice to know if we face it, we tell ourselves.

    But we can answer this numerically. We can set a computer to simulate the movement of the solar system. But there will always be numerical errors. For example, we can’t use the exact value of π in a numerical computation. 3.141592 (and more digits) might be good enough for projecting stuff out a day, a week, a thousand years. But if we’re looking at millions of years? The difference can add up. We can imagine compensating for not having the value of π exactly right. But what about compensating for something we don’t know precisely, like, where Jupiter will be in 16 million years and two months?

    Symplectic forms can help us. The volume form this space carries has to be conserved. So we can rewrite our simulation so that these forms are conserved, by design. This does not mean we avoid making errors. But it means we avoid making certain kinds of errors. We’re more likely to make what we call “phase” errors. We predict Jupiter’s location in 16 million years and two months. Our simulation puts it thirty degrees farther in its circular orbit than it actually would be. This is a less serious mistake to make than putting Jupiter, say, eight-tenths as far from the Sun as it would really be.
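
    If you’d like to see the effect in miniature, here’s a sketch (my own toy example in Python, nothing from any real solar-system simulator) comparing an ordinary Euler step with a symplectic Euler step on a frictionless oscillator. The ordinary method lets the energy, and with it the phase-space volume, drift; the symplectic one keeps the motion bounded and pushes its error into the phase.

    dt = 0.1  # time step; deliberately coarse so the errors show quickly

    def euler_step(q, p):
        # ordinary Euler: both updates read the old values
        return q + dt * p, p - dt * q

    def symplectic_euler_step(q, p):
        # symplectic Euler: update the momentum first,
        # then the position using the *new* momentum
        p_new = p - dt * q
        return q + dt * p_new, p_new

    for step in (euler_step, symplectic_euler_step):
        q, p = 1.0, 0.0  # unit displacement, zero momentum; energy = 0.5
        for _ in range(10000):
            q, p = step(q, p)
        print(step.__name__, 0.5 * (q * q + p * p))
    # euler_step's energy has grown enormously by the end;
    # symplectic_euler_step's stays near the true value 0.5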

    Volume forms seem, at first, a lot of mechanism for a small problem. And, unfortunately for students, they are. They’re more trouble than they’re worth for changing Cartesian to polar coordinates, or similar problems. You know, ones that the student already has some feel for. They pay off on more abstract problems. Tracking the movement of a dozen interacting things, say, or describing a space that’s very strangely shaped. Those make the effort to learn about forms worthwhile.

     
    • elkement (Elke Stangl) 7:56 am on Tuesday, 19 September, 2017 Permalink | Reply

      That was again very intriguing! I only came across volume forms as a technical term when learning General Relativity. It seems in theoretical statistical mechanics it was not necessary to use that term – despite these fancy integrals in phase spaces with trillions of dimensions that you need when proving things about the canonical ensemble, then move on to the grand canonical one etc. I wonder why. Because the geometry of the spaces or volumes considered is fairly simple after all? “Only highly symmetrical N-balls”?

      And – continuing from my comment on your post on Topology – again, Landau and Lifshitz had pulled it off in GR without “Volume Forms” if I recall correctly. But they explain the integration of tensors of different ranks in spaces with different dimensions separately, which is still doable when having 3D space or 4D spacetime in mind (they also put much emphasis on working out the 3D space-only tensor part of the 4D tensors) – but perhaps exactly these insights and generalizations that you allude to are lost without introducing Volume Forms.


      • Joseph Nebus 8:23 pm on Friday, 22 September, 2017 Permalink | Reply

        I suspect that, for most problems, the geometry of the phase spaces in statistical mechanics is pretty simple. The problems I’ve worked on have been easy enough in that regard, although there is a lot in the field (especially non-equilibrium statistical mechanics) that I just don’t know.

        Probably it does all come back to the perception of how hard these things are to pick up versus how much one wants to do with them. Or an estimate of the audience, and how likely they are to be familiar with something, and how much book space they’re willing to spend bringing readers up to speed.


  • Joseph Nebus 6:00 pm on Sunday, 20 August, 2017 Permalink | Reply
    Tags: calculus

    Reading the Comics, August 15, 2017: Cake Edition 


    It was again a week just busy enough that I’m comfortable splitting the Reading The Comics thread into two pieces. It’s also a week that made me think about cake. So, I’m happy with the way last week shaped up, as far as comic strips go. Other stuff could have used a lot of work. Let’s read.

    Stephen Bentley’s Herb and Jamaal rerun for the 13th depicts “teaching the kids math” by having them divide up a cake fairly. I accept this as a viable way to make kids interested in the problem. Cake-slicing problems are a corner of game theory as it addresses questions we always find interesting. How can a resource be fairly divided? How can it be divided if there is not a trusted authority? How can it be divided if the parties do not trust one another? Why do we not have more cake? The kids seem to be trying to divide the cake by volume, which could be fair. If the cake slice is a small enough wedge they can likely get near enough a perfect split by ordinary measures. If it’s a bigger wedge they’d need calculus to get the answer perfect. It’ll be well-approximated by solids of revolution. But they likely don’t need perfection.

    This is assuming the value of the icing side is not held in greater esteem than the bare-cake sides. This is not how I would value the parts of the cake. They’ll need to work something out about that, too.

    Mac King and Bill King’s Magic in a Minute for the 13th features a bit of numerical wizardry. That the dates in a three-by-three block in a calendar will add up to nine times the centered date. Why this works is good for a bit of practice in simplifying algebraic expressions. The stunt will be more impressive if you can multiply by nine in your head. I’d do that by taking ten times the given date and then subtracting the original date. I won’t say I’m fond of the idea of subtracting 23 from 230, or 17 from 170. But a skilled performer could do something interesting while trying to do this subtraction. (And if you practice the trick you can get the hang of the … fifteen? … different possible answers.)
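
    The simplifying-expressions practice is short. Call the centered date c; the other eight dates in the block sit a fixed number of days away, and their offsets cancel in pairs:

    (c - 8) + (c - 7) + (c - 6) + (c - 1) + c + (c + 1) + (c + 6) + (c + 7) + (c + 8) = 9c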

    Bill Amend’s FoxTrot rerun for the 14th mentions mathematics. Young nerd Jason’s trying to get back into hand-raising form. Arithmetic has considerable advantages as a thing to practice answering teachers. The questions have clear, definitely right answers, that can be worked out or memorized ahead of time, and can be asked in under half a panel’s word balloon space. I deduce the strip first ran the 21st of August, 2006, although that image seems to be broken.

    Ed Allison’s Unstrange Phenomena for the 14th suggests changes in the definition of the mile and the gallon to effortlessly improve the fuel economy of cars. As befits Allison’s Dadaist inclinations the numbers don’t work out. As it is, if you defined a New Mile of 7,290 feet (and didn’t change what a foot was) and a New Gallon of 192 fluid ounces (and didn’t change what an old fluid ounce was) then a 20 old-miles-per-old-gallon car would come out to about 21.7 new-miles-per-new-gallon. Commenter Del_Grande points out that if the New Mile were 3,960 feet then the calculation would work out. This inspires in me curiosity. Did Allison figure out the numbers that would work and then make a mistake in the final art? Or did he pick funny-looking numbers and not worry about whether they made sense? No way to tell from here, I suppose. (Allison doesn’t mention ways to get in touch on the comic’s About page and I’ve only got the weakest links into the professional cartoon community.)
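
    For anyone who wants to check the numbers the way I did, with 5,280 feet to the old mile and 128 fluid ounces to the old gallon:

    20 \,\frac{\mbox{old mi}}{\mbox{old gal}} \times \frac{5280}{7290} \,\frac{\mbox{new mi}}{\mbox{old mi}} \times \frac{192}{128} \,\frac{\mbox{old gal}}{\mbox{new gal}} \approx 21.7 \,\frac{\mbox{new mi}}{\mbox{new gal}}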

    Todd the Dinosaur in the playground. 'Kickball, here we come!' Teacher's voice: 'Hold it right there! What is 128 divided by 4?' Todd: 'Long division?' He screams until he wakes. Trent: 'What's wrong?' Todd: 'I dreamed it was the first day of school! And my teacher made me do math ... DURING RECESS!' Trent: 'Stop! That's too scary!'

    Patrick Roberts’s Todd the Dinosaur for the 15th of August, 2017. Before you snipe that there’s no room on the teacher’s worksheet for Todd to actually give an answer, remember that it’s an important part of dream-logic that it’s impossible to actually do the commanded task.

    Patrick Roberts’s Todd the Dinosaur for the 15th mentions long division as the stuff of nightmares. So it is. I guess MathWorld and Wikipedia endorse calling 128 divided by 4 long division, although I’m not sure I’m comfortable with that. This may be idiosyncratic; I’d thought of long division as where the divisor is two or more digits. A three-digit number divided by a one-digit one doesn’t seem long to me. I’d just think that was division. I’m curious what readers’ experiences have been.

     
    • goldenoj 10:00 pm on Sunday, 20 August, 2017 Permalink | Reply

      When kids are first taught division outside the multiplication table, it’s called long division. And taught with a variety of horror-inspiring, place-value-destroying algorithms.


      • Joseph Nebus 12:37 am on Thursday, 24 August, 2017 Permalink | Reply

        Mm, all right. Maybe I’m far enough from learning long division in the first place that I’ve substituted what I think it is for what it actually is.

        I do think of it as the part of division that’s first really baffling, since dividing by (like) 26 will often involve a first guess and then a revised guess. And that’s a deep shock, I think. Up to that point I’m not sure there’s anything that can’t be done exactly right the first time without revisions being needed.


  • Joseph Nebus 6:00 pm on Friday, 18 August, 2017 Permalink | Reply
    Tags: calculus, George Berkeley, numerical integration

    The Summer 2017 Mathematics A To Z: Integration 


    One more mathematics term suggested by Gaurish for the A-To-Z today, and then I’ll move on to a couple of others. Today’s is a good one.

    Integration.

    Stand on the edge of a plot of land. Walk along its boundary. As you walk the edge pay attention. Note how far you walk before changing direction, even in the slightest. When you return to where you started consult your notes. Contained within them is the area you circumnavigated.

    If that doesn’t startle you perhaps you haven’t thought about how odd that is. You don’t ever touch the interior of the region. You never do anything like see how many standard-size tiles would fit inside. You walk a path that is as close to one-dimensional as your feet allow. And encoded in there somewhere is an area. Stare at that incongruity and you realize why integrals baffle the student so. They have a deep strangeness embedded in them.
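
    The incongruity has a name, or at least a formula. One consequence of Green’s theorem is that the area enclosed by a simple closed curve, traced counterclockwise, is a line integral that never leaves the boundary:

    A = \frac{1}{2} \oint \left( x \, dy - y \, dx \right)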

    We who do mathematics have always liked integration. Integrals grow, in the western tradition, out of geometry. Given a shape, what is a square that has the same area? There are shapes it’s easy to find the area for, given only straightedge and compass: a rectangle? Easy. A triangle? Just as straightforward. A polygon? If you know triangles then you know polygons. A lune, the crescent-moon shape formed by taking a circular cut out of a circle? We can do that. (If the cut is the right size.) A circle? … All right, we can’t do that, but we spent two thousand years trying before we found that out for sure. And we can do some excellent approximations.

    That bit of finding-a-square-with-the-same-area was called “quadrature”. The name survives, mostly in the phrase “numerical quadrature”. We use that to mean that we computed an integral’s approximate value, instead of finding a formula that would get it exactly. The otherwise obvious choice of “numerical integration” we use already. It describes computing the solution of a differential equation. We’re not trying to be difficult about this. Solving a differential equation is a kind of integration, and we need to do that a lot. We could recast a solving-a-differential-equation problem as a find-the-area problem, and vice-versa. But that’s bother, if we don’t need to, and so we talk about numerical quadrature and numerical integration.

    Integrals are built on two infinities. This is part of why it took so long to work out their logic. One is the infinity of number; we find an integral’s value, in principle, by adding together infinitely many things. The other is an infinity of smallness. The things we add together are infinitesimally small. That we need to take things, each smaller than any number yet somehow not zero, and in such quantity that they add up to something, seems paradoxical. Their geometric origins had to be merged into that of arithmetic, of algebra, and it is not easy. Bishop George Berkeley made a steady name for himself in calculus textbooks by pointing this out. We have worked out several logically consistent schemes for evaluating integrals. They work, mostly, by showing that we can make the error caused by approximating the integral smaller than any margin we like. This is a standard trick, or at least it is, now that we know it.
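
    The standard scheme students meet first, the Riemann integral, packages both infinities in a single limit: chop the interval into n pieces of width \Delta x = \frac{b - a}{n}, sample the function once per piece, and let n grow without bound:

    \int_a^b f(x) \, dx = \lim_{n \to \infty} \sum_{i = 1}^n f\left(x_i^*\right) \, \Delta x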

    That “in principle” above is important. We don’t actually work out an integral by finding the sum of infinitely many, infinitely tiny, things. It’s too hard. I remember in grad school the analysis professor working out by the proper definitions the integral of 1. This is as easy an integral as you can do without just integrating zero. He escaped with his life, but it was a close scrape. He offered the integral of x as a way to test our endurance, without actually doing it. I’ve never made it through that.

    But we do integrals anyway. We have tools on our side. We can show, for example, that if a function obeys some common rules then we can use simpler formulas. Ones that don’t demand so many symbols in such tight formation. Ones that we can use in high school. Also, ones we can adapt to numerical computing, so that we can let machines give us answers which are near enough right. We get to choose how near is “near enough”. But then the machines decide how long we’ll have to wait to get that answer.

    The greatest tool we have on our side is the Fundamental Theorem of Calculus. Even the name promises it’s the greatest tool we might have. This rule tells us how to connect integrating a function to differentiating another function. If we can find a function whose derivative is the thing we want to integrate, then we have a formula for the integral. It’s that function we found. What a fantastic result.
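
    In symbols, if F is a function with F' = f on the interval, then:

    \int_a^b f(x) \, dx = F(b) - F(a)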

    The trouble is it’s so hard to find functions whose derivatives are the thing we wanted to integrate. There are a lot of functions we can find, mind you. If we want to integrate a polynomial it’s easy. Sine and cosine and even tangent? Yeah. Logarithms? A little tedious but all right. A constant number raised to the power x? Also tedious but doable. A constant number raised to the power x^2? Hold on there, that’s madness. No, we can’t do that.

    There is a weird grab-bag of functions we can find these integrals for. They’re mostly ones we can find some integration trick for. An integration trick is some way to turn the integral we’re interested in into a couple of integrals we can do and then mix back together. A lot of a Freshman Calculus course is a heap of tricks we’ve learned. They have names like “u-substitution” and “integration by parts” and “trigonometric substitution”. Some of them are really exotic, such as turning a single integral into a double integral because that leads us to something we can do. And there’s something called “differentiation under the integral sign” that I don’t know of anyone actually using. People know of it because Richard Feynman, in his fun memoir What Do You Care What Other People Think: 250 Pages Of How Awesome I Was In Every Situation Ever, mentions how awesome it made him in so many situations. Mathematics, physics, and engineering nerds are required to read this at an impressionable age, so we fall in love with a technique no textbook ever mentions. Sorry.
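
    Integration by parts, to take the most famous of these tricks, trades one integral for another by running the product rule backwards; the hope is that the new integral is easier than the old:

    \int u \, dv = uv - \int v \, du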

    I’ve written about all this as if we were interested just in areas. We’re not. We like calculating lengths and volumes and, if we dare venture into more dimensions, hypervolumes and the like. That’s all right. If we understand how to calculate areas, we have the tools we need. We can adapt them to as many or as few dimensions as we need. By weighting integrals we can do calculations that tell us about centers of mass and moments of inertia, about the most and least probable values of something, about all quantum mechanics.

    As often happens, this powerful tool starts with something anyone might ponder: what size square has the same area as this other shape? And then think seriously about it.

     
    • gaurish 7:24 am on Saturday, 19 August, 2017 Permalink | Reply

      I myself tried to write about integration a couple of years ago, but failed. This is much better. My favourite statement: “Their geometric origins had to be merged into that of arithmetic, of algebra, and it is not easy.”


      • Joseph Nebus 12:33 am on Thursday, 24 August, 2017 Permalink | Reply

        Aw, thank you kindly. It may be worth your trying to write again. We all come to new perspectives with time, and a variety of views are good for people trying to find one that helps them understand a thing.


      • elkement (Elke Stangl) 7:01 pm on Wednesday, 30 August, 2017 Permalink | Reply

        My favorite statement from this article is: “Integrals are built on two infinities. This is part of why it took so long to work out their logic. “


        • Joseph Nebus 1:14 am on Friday, 8 September, 2017 Permalink | Reply

          That was one of those happy sentences that’s really the whole essay; everything else was just the run-up to it and the relaxation from it. Have one of those and the rest of the writing is easy.


  • Joseph Nebus 4:00 pm on Wednesday, 7 June, 2017 Permalink | Reply
    Tags: calculus

    What Second Derivatives Are And What They Can Do For You 


    Previous supplemental reading for Why Stuff Can Orbit:


    This is another supplemental piece because it’s too much to include in the next bit of Why Stuff Can Orbit. I need some more stuff about how a mathematical physicist would look at something.

    This is also a story about approximations. A lot of mathematics is really about approximations. I don’t mean numerical computing. We all know that when we compute we’re making approximations. We use 0.333333 instead of one-third and we use 3.141592 instead of π. But a lot of precise mathematics, what we call analysis, is also about approximations. We do this by a logical structure that works something like this: take something we want to prove. Now for every positive number ε we can find something — a point, a function, a curve — that’s no more than ε away from the thing we’re really interested in, and which is easier to work with. Then we prove whatever we want to with the easier-to-work-with thing. And since ε can be as tiny a positive number as we want, we can suppose ε is a tinier difference than we can hope to measure. And so the difference between the thing we’re interested in and the thing we’ve proved something interesting about is zero. (This is the part that feels like we’re pulling a scam. We’re not, but this is where it’s worth stopping and thinking about what we mean by “a difference between two things”. When you feel confident this isn’t a scam, continue.) So we proved whatever we proved about the thing we’re interested in. Take an analysis course and you will see this all the time.

    When we get into mathematical physics we do a lot of approximating functions with polynomials. Why polynomials? Yes, because everything is polynomials. But also because polynomials make so much mathematical physics easy. Polynomials are easy to calculate, if you need numbers. Polynomials are easy to integrate and differentiate, if you need analysis. Here that’s the calculus that tells you about patterns of behavior. If you want to approximate a continuous function you can always do it with a polynomial. The polynomial might have to be infinitely long to approximate the entire function. That’s all right. You can chop it off after finitely many terms. This finite polynomial is still a good approximation. It’s just good for a smaller region than the infinitely long polynomial would have been.

    Necessary qualifiers: pages 65 through 82 of any book on real analysis.

    So. Let me get to functions. I’m going to use a function named ‘f’ because I’m not wasting my energy coming up with good names. (When we get back to the main Why Stuff Can Orbit sequence this is going to be ‘U’ for potential energy or ‘E’ for energy.) It’s got a domain that’s the real numbers, and a range that’s the real numbers. To express this in symbols I can write f: \Re \rightarrow \Re . If I have some number called ‘x’ that’s in the domain then I can tell you what number in the range is matched by the function ‘f’ to ‘x’: it’s the number ‘f(x)’. You were expecting maybe 3.5? I don’t know that about ‘f’, not yet anyway. The one thing I do know about ‘f’, because I insist on it as a condition for appearing, is that it’s continuous. It hasn’t got any jumps, any gaps, any regions where it’s not defined. You could draw a curve representing it with a single, if wriggly, stroke of the pen.

    I mean to build an approximation to the function ‘f’. It’s going to be a polynomial expansion, a set of things to multiply and add together that’s easy to find. To make this polynomial expansion I need to choose some point to build the approximation around. Mathematicians call this the “point of expansion” because we froze up in panic when someone asked what we were going to name it, okay? But how are we going to make an approximation to a function if we don’t have some particular point we’re approximating around?

    (One answer we find in grad school when we pick up some stuff from linear algebra we hadn’t been thinking about. We’ll skip it for now.)

    I need a name for the point of expansion. I’ll use ‘a’. Many mathematicians do. Another popular name for it is ‘x0‘. Or if you’re using some other variable name for stuff in the domain then whatever that variable is with subscript zero.

    So my first approximation to the original function ‘f’ is … oh, shoot, I should have some new name for this. All right. I’m going to use ‘F0‘ as the name. This is because it’s one of a set of approximations, each of them a little better than the old. ‘F1‘ will be better than ‘F0‘, but ‘F2‘ will be even better, and ‘F2038‘ will be way better yet. I’ll also say something about what I mean by “better”, although you’ve got some sense of that already.

    I start off by calling the first approximation ‘F0‘ by the way because you’re going to think it’s too stupid to dignify with a number as big as ‘1’. Well, I have other reasons, but they’ll be easier to see in a bit. ‘F0‘, like all its sibling ‘Fn‘ functions, has a domain of the real numbers and a range of the real numbers. The rule defining how to go from a number ‘x’ in the domain to some real number in the range?

    F^0(x) = f(a)

    That is, this first approximation is simply whatever the original function’s value is at the point of expansion. Notice that’s an ‘x’ on the left side of the equals sign and an ‘a’ on the right. This seems to challenge the idea of what an “approximation” even is. But it’s legit. Supposing something to be constant is often a decent working assumption. If you failed to check what the weather for today will be like, supposing that it’ll be about like yesterday will usually serve you well enough. If you aren’t sure where your pet is, you look first wherever you last saw the animal. (Or, yes, where your pet most loves to be. A particular spot, though.)

    We can make this rigorous. A mathematician thinks this is rigorous: you pick any margin of error you like. Then I can find a region near enough to the point of expansion. The value for ‘f’ for every point inside that region is ‘f(a)’ plus or minus your margin of error. It might be a small region, yes. Doesn’t matter. It exists, no matter how tiny your margin of error was.

    But yeah, that expansion still seems too cheap to work. My next approximation, ‘F1‘, will be a little better. I mean that we can expect it will be closer than ‘F0‘ was to the original ‘f’. Or it’ll be as close for a bigger region around the point of expansion ‘a’. What it’ll represent is a line. Yeah, ‘F0‘ was a line too. But ‘F0‘ is a horizontal line. ‘F1‘ might be a line at some completely other angle. If that works better. The second approximation will look like this:

    F^1(x) = f(a) + m\cdot\left(x - a\right)

    Here ‘m’ serves its traditional yet poorly-explained role as the slope of a line. What the slope of that line should be we learn from the derivative of the original ‘f’. The derivative of a function is itself a new function, with the same domain and the same range. There’s a couple ways to denote this. Each way has its strengths and weaknesses about clarifying what we’re doing versus how much we’re writing down. And trying to write down almost anything can inspire confusion in analysis later on. There’s a part of analysis where you have to shift from thinking about particular problems to thinking about how problems in general work.

    So I will define a new function, spoken of as f-prime, this way:

    f'(x) = \frac{df}{dx}\left(x\right)

    If you look closely you realize there’s two different meanings of ‘x’ here. One is the ‘x’ that appears in parentheses. It’s the value in the domain of f and of f’ where we want to evaluate the function. The other ‘x’ is the one in the lower side of the derivative, in that \frac{df}{dx} . That’s my sloppiness, but it’s not uniquely mine. Mathematicians keep this straight by using the symbols \frac{df}{dx} so much they don’t even see the ‘x’ down there anymore so have no idea there’s anything to find confusing. Students keep this straight by guessing helplessly about what their instructors want and clinging to anything that doesn’t get marked down. Sorry. But what this means is to “take the derivative of the function ‘f’ with respect to its variable, and then, evaluate what that expression is for the value of ‘x’ that’s in parentheses on the left-hand side”. We can do some things that avoid the confusion in symbols there. They all require adding some more variables and some more notation in, and it looks like overkill for a measly definition like this.

    Anyway. We really just want the derivative evaluated at one point, the point of expansion. That is:

    m = f'(a) = \frac{df}{dx}\left(a\right)

    which by the way avoids that overloaded meaning of ‘x’ there. Put this together and we have what we call the tangent line approximation to the original ‘f’ at the point of expansion:

    F^1(x) = f(a) + f'(a)\cdot\left(x - a\right)

    This is also called the tangent line, because it’s a line that’s tangent to the original function. Plots of ‘F1‘ and the original function ‘f’ are guaranteed to touch one another only at the point of expansion. They might happen to touch again, but that’s luck. The tangent line will be close to the original function near the point of expansion. It might happen to be close again later on, but that’s luck, not design. Most stuff you might want to do with the original function you can do with the tangent line, but the tangent line will be easier to work with. It exactly matches the original function at the point of expansion, and its first derivative exactly matches the original function’s first derivative at the point of expansion.

    We can do better. We can find a parabola, a second-order polynomial that approximates the original function. This will be a function ‘F2(x)’ that looks something like:

    F^2(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 m_2 \left(x - a\right)^2

    What we’re doing is adding a parabola to the approximation. This is that curve that looks kind of like a loosely-drawn U. The ‘m2‘ there measures how spread out the U is. It’s not quite the slope, but it’s kind of like that, which is why I’m using the letter ‘m’ for it. Its value we get from the second derivative of the original ‘f’:

    m_2 = f''(a) = \frac{d^2f}{dx^2}\left(a\right)

    We find the second derivative of a function ‘f’ by evaluating the first derivative, and then, taking the derivative of that. We can denote it with two ‘ marks after the ‘f’ as long as we aren’t stuck wrapping the function name in ‘ marks to set it out. And so we can describe the function this way:

    F^2(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 f''(a) \left(x - a\right)^2

    This will be a better approximation to the original function near the point of expansion. Or it’ll make larger the region where the approximation is good.

    If the first derivative of a function at a point is zero that means the tangent line is horizontal. In physics stuff this is an equilibrium. The second derivative can tell us whether the equilibrium is stable or not. If the second derivative at the equilibrium is positive it’s a stable equilibrium. The function looks like a bowl open at the top. If the second derivative at the equilibrium is negative then it’s an unstable equilibrium.

    We can make better approximations yet, by using even more derivatives of the original function ‘f’ at the point of expansion:

    F^3(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 f''(a) \left(x - a\right)^2 + \frac{1}{3\cdot 2} f'''(a) \left(x - a\right)^3

    There’s better approximations yet. You can probably guess what the next, fourth-degree, polynomial would be. Or you can after I tell you the fraction in front of the new term will be \frac{1}{4\cdot 3\cdot 2} . The only big difference is that after about the third derivative we give up on adding ‘ marks after the function name ‘f’. It’s just too many little dots. We start writing, like, ‘f(iv)‘ instead. Or if the Roman numerals are too much then ‘f(2038)‘ instead. Or if we don’t want to pin things down to a specific value ‘f(j)‘ with the understanding that ‘j’ is some whole number.
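
    If you’d like to watch the approximations improve, here’s a sketch (my own example in Python; the function, the point of expansion, and the test point are all arbitrary choices) building ‘F0’, ‘F1’, and ‘F2’ for the exponential function, whose derivatives are all conveniently itself:

    import math

    a = 0.0                  # point of expansion
    fa = math.exp(a)         # f(a); for exp(x) this also equals f'(a) and f''(a)

    def F0(x):
        return fa

    def F1(x):
        return fa + fa * (x - a)                           # the tangent line

    def F2(x):
        return fa + fa * (x - a) + 0.5 * fa * (x - a)**2   # add the parabola

    x = 0.5
    for F in (F0, F1, F2):
        print(F.__name__, abs(math.exp(x) - F(x)))
    # the errors shrink: about 0.649, then 0.149, then 0.024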

    We don’t need all of them. In physics problems we get equilibriums from the first derivative. We get stability from the second derivative. And we get springs in the second derivative too. And that’s what I hope to pick up on in the next installment of the main series.

     
    • elkement (Elke Stangl) 4:20 pm on Wednesday, 7 June, 2017 Permalink | Reply

      This is great – I’ve just written a very short version of that (a much too succinct one) … as a half-hearted attempt to explain Taylor expansions that I need in an upcoming post. But now I won’t feel bad anymore about its incomprehensibility and simply link to this post of yours :-)


  • Joseph Nebus 6:00 pm on Wednesday, 31 May, 2017 Permalink | Reply
    Tags: calculus

    Something Cute I Never Noticed Before About Infinite Sums 


    This is a trifle, for which I apologize. I’ve been sick. But I ran across this while reading Carl B Boyer’s The History of the Calculus and its Conceptual Development. This is from the chapter “A Century Of Anticipation”, developments leading up to Newton and Leibniz and The Calculus As We Know It. In particular, while working out the indefinite integrals for simple powers — x raised to a whole number — John Wallis, whom you’ll remember from such things as the first use of the ∞ symbol and beating up Thomas Hobbes for his lunch money, noted this:

    \frac{0 + 1}{1 + 1} = \frac{1}{2}

    Which is fine enough. But then Wallis also noted that

    \frac{0 + 1 + 2}{2 + 2 + 2} = \frac{1}{2}

    And furthermore that

    \frac{0 + 1 + 2 + 3}{3 + 3 + 3 + 3} = \frac{1}{2}

    \frac{0 + 1 + 2 + 3 + 4}{4 + 4 + 4 + 4 + 4} = \frac{1}{2}

    \frac{0 + 1 + 2 + 3 + 4 + 5}{5 + 5 + 5 + 5 + 5 + 5} = \frac{1}{2}

    And isn’t that neat? Wallis goes on to conclude that this is true not just for finitely many terms in the numerator and denominator, but also if you carry on infinitely far. This seems like a dangerous leap to make, but they treated infinities and infinitesimals dangerously in those days.

    What makes this work is — well, it’s just true; explaining how that can be is kind of like explaining how it is circles have a center point. All right. But we can prove that this has to be true at least for finite terms. A sum like 0 + 1 + 2 + 3 is an arithmetic progression. It’s the sum of a finite number of terms, each of them an equal difference from the one before or the one after (or both).

    Its sum will be equal to the number of terms times the arithmetic mean of the first and last. That is, it’ll be the number of terms times the sum of the first and last terms, divided by two. If we have the sum 0 + 1 + 2 + 3 + up to whatever number you like, which we’ll call ‘N’, we know its value has to be (N + 1) times N divided by 2. That takes care of the numerator.

    The denominator, well, that’s (N + 1) cases of the number N being added together. Its value has to be (N + 1) times N. So the fraction is (N + 1) times N divided by 2, itself divided by (N + 1) times N. That’s got to be one-half except when N is zero. And if N were zero, well, that fraction would be 0 over 0 and we know what kind of trouble that is.
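
    Putting the two counts together for a general finite N, the whole pattern collapses into one line:

    \frac{0 + 1 + 2 + \cdots + N}{N + N + N + \cdots + N} = \frac{N(N + 1)/2}{(N + 1) \cdot N} = \frac{1}{2}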

    It’s a tiny bit of mathematics, although you can use it to make an argument about what to expect from \int{x^n dx} , as Wallis did. And it delighted me to see and to understand why it should be so.

     
    • elkement (Elke Stangl) 4:38 pm on Monday, 5 June, 2017 Permalink | Reply

      It reminds me of the famous story about young Gauss – when he baffled his teacher with this somewhat related ‘trick’ of adding up the numbers from 1 to 100 very quickly (by actually calculating 101*50).


      • Joseph Nebus 1:09 am on Wednesday, 7 June, 2017 Permalink | Reply

        That’s exactly what crossed my mind, especially as I realized I was doing the sum of 1 through 100 at least implicitly. It feels so playful to have something like that turn up.


  • Joseph Nebus 6:00 pm on Sunday, 28 May, 2017 Permalink | Reply
    Tags: Best Medicine Cartoon, Break of Day, calculus, Oh Brother

    Reading the Comics, May 27, 2017: Panels Edition 


    Can’t say this was too fast or too slow a week for mathematically-themed comic strips. A bunch of the strips were panel comics, so that’ll do for my theme.

    Norm Feuti’s Retail for the 21st mentions every (not that) algebra teacher’s favorite vague introduction to group theory, the Rubik’s Cube. Well, the ways you can rotate the various sides of the cube do form a group, which is something that acts like arithmetic without necessarily being numbers. And it gets into value judgements. There exist algorithms to solve Rubik’s cubes. Is it a show of intelligence that someone can learn an algorithm and solve any cube? — But then, how is solving a Rubik’s cube, with or without the help of an algorithm, a show of intelligence? At least of any intelligence more than the bit of spatial recognition that’s good for rotating cubes around?

    'Rubik's cube, huh? I never could solve one of those.' 'I'm just fidgeting with it. I never bothered learning the algorithm either.' 'What algorithm?' 'The pattern you use to solve it.' 'Wait. All you have to do to solve it is memorize a pattern?' 'Of course. How did you think people solved it?' 'I always thought you had to be super smart to figure it out.' 'Well, memorizing the pattern does take a degree of intelligence.' 'Yeah, but that's not the same thing as solving it on your own.' 'I'm sure some people figured out the algorithm without help.' 'I KNEW Chad Gustafson was a liar! He was no eighth-grade prodigy, he just memorized the pattern!' 'Sounds like you and the CUBE have some unresolved issues.'

    Norm Feuti’s Retail for the 21st of May, 2017. A few weeks ago I ran across a book about the world of competitive Rubik’s Cube solving. I haven’t had the chance to read it, but am interested by the ways people form rules for what would seem like a naturally shapeless feature such as solving Rubik’s Cubes. Not featured: the early 80s Saturday morning cartoon that totally existed because somehow that made sense back then.

    I don’t see that learning an algorithm for a problem is a lack of intelligence. No more than using a photo reference shows a lack of drawing skill. It’s still something you need to learn, and to apply, and to adapt to the cube as you have it to deal with. Anyway, I never learned any techniques for solving it either. Would just play for the joy of it. Here’s a page with one approach to solving the cube, if you’d like to give it a try yourself. Good luck.

    Bob Weber Jr and Jay Stephens’s Oh, Brother! for the 22nd is a word-problem avoidance joke. It’s a slight thing to include, but the artwork is nice.

    Brian and Ron Boychuk’s Chuckle Brothers for the 23rd is a very slight thing to include, but it’s looking like a slow week. I need something here. If you don’t see it then things picked up. They similarly tried sprucing things up the 27th, with another joke for taping onto the door.

    Nate Fakes’s Break of Day for the 24th features the traditional whiteboard full of mathematics scrawls as a sign of intelligence. The scrawl on the whiteboard looks almost meaningful. The integral, particularly, looks like it might have been copied from a legitimate problem in polar or cylindrical coordinates. I say “almost” because while I think that some of the r symbols there are r’ I’m not positive those aren’t just stray marks. If they are r’ symbols, it’s the sort of integral that comes up when you look at surfaces of spheres. It would be the electric field of a conductive metal ball given some charge, or the gravitational field of a shell. These are tedious integrals to solve, but fortunately after you do them in a couple of introductory physics-for-majors classes you can just look up the answers instead.

    Samson’s Dark Side of the Horse for the 26th is the Roman numerals joke for this installment. I feel like it ought to be a pie chart joke too, but I can’t find a way to make it one.

    Izzy Ehnes’s The Best Medicine Cartoon for the 27th is the anthropomorphic numerals joke for this paragraph.

     
  • Joseph Nebus 6:00 pm on Friday, 21 April, 2017 Permalink | Reply
    Tags: calculus

    In Which I Offer Excuses Instead Of Mathematics 


    I’d been hoping to get back into longer-form essays. And then the calculations I meant to do on one problem turned out more complicated than I’d wanted. And they’re hard to square with the approach I used in some earlier work. Not that the results I was looking at were wrong, mind, just that an approach I’d used as “convenient for this sort of problem” turned inconvenient here.

    So while I have the whole piece back in the shop for re-thinking, which is harder than even thinking, let me give you some other stuff to read. Or look at. One is from regular Singaporean correspondent MathTuition88. If you know anything about topology it’s because you’ve heard about Möbius strips. Surfaces with a single side are neat, and form the base of 95 percent of all science fiction stories in which the mathematics is the fantastic element. Klein bottles are often mentioned as a four-dimensional analogue to the Möbius strip, a closed surface with no distinguishable interior or exterior. And a Klein bottle can be divided into two Möbius strips. MathTuition88 showcases a picture about how to turn two strips into a bottle. Or at least the best approximation of a bottle we can do; the actual Klein bottle is a four-dimensional structure and we can just make a three-dimensional imitation of the thing.

    For something a bit more vector-analytic Joe Heafner’s Tensor Time has an essay about vectors. It’s about Heafner’s dislike for the way some vector problems are presented. Some common and easy ways to solve vector equations lead to spurious solutions that have to be weeded out by ad hoc reasoning; can’t we do better? Heafner argues that we can and should. The suggested alternative looks a little stuffy, but as often happens, spending more time on the setup means one spends less time confused later on. Worth pondering.

    And this is a late addition, but I couldn’t resist.

    Now I have a new favorite first chapter for a calculus text.

     
    • sheldonk2014 7:43 pm on Monday, 1 May, 2017 Permalink | Reply

      Thank you for visiting Joseph
      Sorry it has taken me so long to get back to you
      But I know you understand
      As Sheldon Always


      • Joseph Nebus 3:23 am on Tuesday, 9 May, 2017 Permalink | Reply

        Oh yes. Nothing you ever need to apologize for. I don’t check in on everyone often enough myself. Just glad you’re here and basically all right.


  • Joseph Nebus 6:00 pm on Sunday, 26 February, 2017 Permalink | Reply
    Tags: calculus, Flo and Friends, Promises Promises

    Reading the Comics, February 23, 2017: The Week At Once Edition 


    For the first time in ages there aren’t enough mathematically-themed comic strips to justify my cutting the week’s roundup in two. No, I have no idea what I’m going to write about for Thursday. Let’s find out together.

    Jenny Campbell’s Flo and Friends for the 19th faintly irritates me. Flo wants to make sure her granddaughter understands that just because it takes people on average 14 minutes to fall asleep doesn’t mean that anyone actually does, by listing all sorts of reasons that a person might need more than fourteen minutes to sleep. It makes me think of a behavior John Allen Paulos notes in Innumeracy, wherein the statistically wise points out that someone has, say, a one-in-a-hundred-million chance of being killed by a terrorist (or whatever) and is answered, “ah, but what if you’re that one?” That is, it’s a response that has the form of wisdom without the substance. I notice Flo doesn’t mention the many reasons someone might fall asleep in less than fourteen minutes.

    But there is something wise in there nevertheless. For most stuff, the average is the most common value. By “the average” I mean the arithmetic mean, because that is what anyone means by “the average” unless they’re being difficult. (Mathematicians acknowledge the existence of an average called the mode, which is the most common value (or values), and that’s most common by definition.) But just because something is the most common result does not mean that it must be common. Toss a coin fairly a hundred times and it’s most likely to come up tails 50 times. But you shouldn’t be surprised if it actually turns up tails 51 or 49 or 45 times. This doesn’t make 50 a poor estimate for the average number of times something will happen. It just means that it’s not a guarantee.
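
    Numbers make the point sharper. The chance of exactly 50 tails in 100 fair tosses is the binomial probability

    \binom{100}{50} \left(\frac{1}{2}\right)^{100} \approx 0.08

    so the single most likely outcome turns up only about eight percent of the time.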

    Gary Wise and Lance Aldrich’s Real Life Adventures for the 19th shows off an unusually dynamic camera angle. It’s in service for a class of problem you get in freshman calculus: find the longest pole that can fit around a corner. Oh, a box-spring mattress up a stairwell is a little different, what with box-spring mattresses being three-dimensional objects. It’s the same kind of problem. I want to say the most astounding furniture-moving event I’ve ever seen was when I moved a fold-out couch down one and a half flights of stairs single-handed. But that overlooks the caged mouse we had one winter, who moved a Chinese finger-trap full of crinkle paper up the tight curved plastic to his nest by sheer determination. The trap was far longer than could possibly be curved around the tube. We have no idea how he managed it.

    J R Faulkner’s Promises, Promises for the 20th jokes that one could use Roman numerals to obscure calculations. So you could. Roman numerals are terrible things for doing arithmetic, at least past addition and subtraction. This is why accountants and mathematicians abandoned them pretty soon after learning there were alternatives.

    Mark Anderson’s Andertoons for the 21st is the Mark Anderson’s Andertoons for the week. Probably anything would do for the blackboard problem, but something geometric reads very well.

    Jef Mallett’s Frazz for the 21st makes some comedy out of the sort of arithmetic error we all make. It’s so easy to pair up, like, 7 and 3 make 10 and 8 and 2 make 10. It takes a moment, or experience, to realize 78 and 32 will not make 100. Forgive casual mistakes.

    Bud Fisher’s Mutt and Jeff rerun for the 22nd is a similar-in-tone joke built on arithmetic errors. It’s got the form of vaudeville-style sketch compressed way down, which is probably why the third panel could be made into a satisfying final panel too.

    'How did you do on the math test?' 'Terrible.' 'Will your mom be mad?' 'Maybe. But at least she'll know I didn't cheat!'

    Bud Blake’s Tiger for the 23rd of February, 2017. I want to blame the colorists for making Hugo’s baby tooth look so weird in the second and third panels, but the coloring is such a faint thing at that point I can’t. I’m sorry to bring it to your attention if you didn’t notice and weren’t bothered by it before.

    Bud Blake’s Tiger rerun for the 23rd just name-drops mathematics; it could be any subject. But I need some kind of picture around here, don’t I?

    Mike Baldwin’s Cornered for the 23rd is the anthropomorphic numerals joke for the week.

     
  • Joseph Nebus 6:00 pm on Sunday, 12 February, 2017 Permalink | Reply
    Tags: calculus, Lay Lines, Pooch Cafe, Rabbits Against Magic

    Reading the Comics, February 6, 2017: Another Pictureless Half-Week Edition 


    Got another little flood of mathematically-themed comic strips last week and so once again I’ll split them along something that looks kind of middle-ish. Also this is another bunch of GoComics.com-only posts. Since those seem to be accessible to anyone whether or not they’re subscribers indefinitely far into the future I don’t feel like I can put the comics directly up and will trust you all to click on the links that you find interesting. Which is fine; the new GoComics.com design makes it annoyingly hard to download a comic strip. I don’t think that was their intention. But that’s one of the two nagging problems I have with their new design. So you know.

    Tony Cochran’s Agnes for the 5th sees a brand-new mathematics. Always dangerous stuff. But mathematicians do invent, or discover, new things in mathematics all the time. Part of the task is naming the things in it. That’s something which takes talent. Some people, such as Leonhard Euler, had the knack a great novelist has for putting names to things. The rest of us muddle along. Often if there’s any real-world inspiration, or resemblance to anything, we’ll rely on that. And we look for terminology that evokes similar ideas in other fields. … And, Agnes would like to know, there is mathematics that’s about approximate answers, being “right around” the desired answer. Unfortunately, that’s hard. (It’s all hard, if you’re going to take it seriously, much like everything else people do.)

    Scott Hilburn’s The Argyle Sweater for the 5th is the anthropomorphic numerals joke for this essay.

    Carol Lay’s Lay Lines for the 6th depicts the hazards of thinking deeply and hard about the infinitely large and the infinitesimally small. They’re hard. Our intuition seems well-suited to handling a modest bunch of household-sized things. Logic guides us when thinking about the infinitely large or small, but it takes a long time to get truly conversant and comfortable with it all.

    Paul Gilligan’s Pooch Cafe for the 6th sees Poncho try to argue there’s thermodynamical reasons for not being kind. Reasoning about why one should be kind (or not) is the business of philosophers and I won’t overstep my expertise. Poncho’s mathematics, that’s something I can write about. He argues “if you give something of yourself, you inherently have less”. That seems to be arguing for a global conservation of self-ness, that the thing can’t be created or lost, merely transferred around. That’s fair enough as a description of what the first law of thermodynamics tells us about energy. The equation he reads off reads, “the change in the internal energy (Δ U) equals the heat added to the system (U) minus the work done by the system (W)”. Conservation laws aren’t unique to thermodynamics. But Poncho may be aware of just how universal and powerful thermodynamics is. I’m open to an argument that it’s the most important field of physics.

    Jonathan Lemon’s Rabbits Against Magic for the 6th is another strip Intro to Calculus instructors can use for their presentation on instantaneous versus average velocities. There’s been a bunch of them recently. I wonder if someone at Comic Strip Master Command got a speeding ticket.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 6th is about numeric bases. They’re fun to learn about. There’s an arbitrariness in the way we represent concepts. I think we can understand better what kinds of problems seem easy and what kinds seem harder if we write them out different ways. But base eleven is only good for jokes.

     
    • davekingsbury 10:01 pm on Monday, 13 February, 2017 Permalink | Reply

      He argues “if you give something of yourself, you inherently have less”. That seems to be arguing for a global conservation of self-ness, that the thing can’t be created or lost, merely transferred around.

      How, I wonder, to marry that with Juliet’s declaration of love for Juliet?

      “My bounty is as boundless as the sea,
      My love as deep; the more I give to thee,
      The more I have, for both are infinite.”

      Like

      • Joseph Nebus 11:08 pm on Thursday, 16 February, 2017 Permalink | Reply

        Oh, well, infinities are just trouble no matter what. Anything can happen with them.

        I suppose there’s also the question of how the Banach-Tarski Paradox affects love.

        Liked by 1 person

    • Downpuppy (@Downpuppy) 12:30 am on Tuesday, 14 February, 2017 Permalink | Reply

      Agnes is the first Fuzzy Math reference I’ve seen in about 10 years.

      Squirrel Girl counted to 31 on one hand to defeat Count Nefario, but SMBC is more an ASL snub

      Like

      • Joseph Nebus 11:12 pm on Thursday, 16 February, 2017 Permalink | Reply

        I’m a little surprised fuzzy mathematics doesn’t get used for more comic strips, but I don’t suppose it lends itself to too many different jokes. On the other hand, neither does Pi Day and we’ll see a bunch of those over the coming month.

        I had expected, really, Saturday Morning Breakfast Cereal to go with 1,024 as a natural base if you use your hands in a particularly digit-efficient way.

        Like

  • Joseph Nebus 6:00 pm on Thursday, 26 January, 2017 Permalink | Reply
    Tags: calculus, Clear Blue Water, , , , One Big Family, ,   

    Reading the Comics, January 21, 2017: Homework Edition 


    Now to close out what Comic Strip Master Command sent my way through last Saturday. And I’m glad I’ve shifted to a regular schedule for these. They ordered a mass of comics with mathematical themes for Sunday and Monday this current week.

    Karen Montague-Reyes’s Clear Blue Water rerun for the 17th describes trick-or-treating as “logarithmic”. The intention is to say that the difficulty in wrangling kids from house to house grows incredibly fast as the number of kids increases. Fair enough, but should it be “logarithmic” or “exponential”? Because the logarithm grows slowly as the number you take the logarithm of grows. It grows all the slower the bigger the number gets. The exponential of a number, though, that grows faster and faster still as the number underlying it grows. So is this mistaken?

    I say no. It depends what the logarithm is, and is of. If the number of kids is the logarithm of the difficulty of hauling them around, then the intent and the mathematics are in perfect alignment. Five kids are (let’s say) ten times harder to deal with than four kids. Sensible and, from what I can tell of packs of kids, correct.
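
To put that in symbols (my toy model, nothing from the strip): suppose

\text{kids} = \log_{10}\left(\text{difficulty}\right) \qquad \text{or, equivalently,} \qquad \text{difficulty} = 10^{\text{kids}}

Then each extra kid multiplies the difficulty by ten, and calling the whole business “logarithmic” is fair.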

    'Anne has six nickels. Sue has 41 pennies. Who has more money?' 'That's not going to be easy to figure out. It all depends on how they're dressed!'

Rick Detorie’s One Big Happy for the 17th of January, 2017. The section was about how the appearance and trappings of wealth matter more than the actual substance of wealth, so everyone’s really up to speed in the course.

    Rick Detorie’s One Big Happy for the 17th is a resisting-the-word-problem joke. There’s probably some warning that could be drawn about this in how to write story problems. It’s hard to foresee all the reasonable confounding factors that might get a student to the wrong answer, or to see a problem that isn’t meant to be there.

    Bill Holbrook’s On The Fastrack for the 19th continues Fi’s story of considering leaving Fastrack Inc, and finding a non-competition clause that’s of appropriate comical absurdity. As an auditor there’s not even a chance Fi could do without numbers. Were she a pure mathematician … yeah, no. There’s fields of mathematics in which numbers aren’t all that important. But we never do without them entirely. Even if we exclude cases where a number is just used as an index, for which Roman numerals would be almost as good as regular numerals. If nothing else numbers would keep sneaking in by way of polynomials.

    'Uh, Fi? Have you looked at the non-compete clause in your contract?' 'I wouldn't go to one of Fastrack's competitors.' 'No, but, um ... you'd better read this.' 'I COULDN'T USE NUMBERS FOR TWO YEARS???' 'Roman numerals would be okay.'

    Bill Holbrook’s On The Fastrack for the 19th of January, 2017. I feel like someone could write a convoluted story that lets someone do mathematics while avoiding any actual use of any numbers, and that it would probably be Greg Egan who did it.

    Dave Whamond’s Reality Check for the 19th breaks our long dry spell without pie chart jokes.

Mort Walker and Dik Browne’s Vintage Hi and Lois for the 27th of July, 1959 uses calculus as a stand-in for what college is all about. Lois’s particular example is about a second derivative. Suppose we have a function named ‘y’ that depends on a variable named ‘x’. Probably it’s a function with domain and range both real numbers. If complex numbers were involved then the variable would more likely be called ‘z’. The first derivative of a function is about how fast its values change with small changes in the variable. The second derivative is about how fast the values of the first derivative change with small changes in the variable.

    'I hope our kids are smart enough to win scholarships for college.' 'We can't count on that. We'll just have to save the money!' 'Do you know it costs about $10,000 to send one child through college?!' 'That's $40,000 we'd have to save!' Lois reads to the kids: (d^2/dx^2)y = 6x - 2.

    Mort Walker and Dik Browne’s Vintage Hi and Lois for the 27th of July, 1959. Fortunately Lois discovered the other way to avoid college costs: simply freeze the ages of your children where they are now, so they never face student loans. It’s an appealing plan until you imagine being Trixie.

    The ‘d’ in this equation is more of an instruction than it is a number, which is why it’s a mistake to just divide those out. Instead of writing it as \frac{d^2 y}{dx^2} it’s permitted, and common, to write it as \frac{d^2}{dx^2} y . This means the same thing. I like that because, to me at least, it more clearly suggests “do this thing (take the second derivative) to the function we call ‘y’.” That’s a matter of style and what the author thinks needs emphasis.

    There are infinitely many possible functions y that would make the equation \frac{d^2 y}{dx^2} = 6x - 2 true. They all belong to one family, though. They all look like y(x) = \frac{1}{6} 6 x^3 - \frac{1}{2} 2 x^2 + C x + D , where ‘C’ and ‘D’ are some fixed numbers. There’s no way to know, from what Lois has given, what those numbers should be. It might be that the context of the problem gives information to use to say what those numbers should be. It might be that the problem doesn’t care what those numbers should be. Impossible to say without the context.
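
For the record, here are the intermediate steps spelled out, since they come up in the comments below. Integrating 6x - 2 once, and then once more, in the same coefficients-left-visible style:

\frac{dy}{dx} = \frac{1}{2} 6 x^2 - \frac{1}{1} 2 x + C \qquad y(x) = \frac{1}{2\cdot 3} 6 x^3 - \frac{1}{1\cdot 2} 2 x^2 + C x + D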

     
    • Joshua K. 6:26 am on Monday, 30 January, 2017 Permalink | Reply

      Why is the function in the Hi & Lois discussion stated as y(x) = (1/6)6x^3 – (1/2)2x^2 + Cx +D? Why not just y(x) = x^3 – x^2 + Cx + D?

      Like

      • Joseph Nebus 5:43 pm on Friday, 3 February, 2017 Permalink | Reply

        Good question! I actually put a fair bit of thought into this. If I were doing the problem myself I’d have cut right to x^3 – x^2 + Cx + D. But I thought there’s a number of people reading this for whom calculus is a perfect mystery and I thought that if I put an intermediate step it might help spot the pattern at work, that the coefficients in front of the x^3 and x^2 terms don’t vanish without cause.

That said, I probably screwed up by writing them as 1/6 and 1/2. That looks too much like I’m just dividing by what the coefficients are. If I had taken more time to think out the post I should have written 1/(2·3) and 1/(1·2). This might’ve given a slightly better chance at connecting the powers of x and the fractions in the denominator. I’m not sure how much help that would give, since I didn’t describe how to take antiderivatives here. But I think it’d be a better presentation and I should remember that in future situations like that.

        Like

  • Joseph Nebus 6:00 pm on Friday, 23 December, 2016 Permalink | Reply
    Tags: , calculus, , , ,   

    The End 2016 Mathematics A To Z: Weierstrass Function 


    I’ve teased this one before.

    Weierstrass Function.

    So you know how the Earth is a sphere, but from our normal vantage point right up close to its surface it looks flat? That happens with functions too. Here I mean the normal kinds of functions we deal with, ones with domains that are the real numbers or a Euclidean space. And ranges that are real numbers. The functions you can draw on a sheet of paper with some wiggly bits. Let the function wiggle as much as you want. Pick a part of it and zoom in close. That zoomed-in part will look straight. If it doesn’t look straight, zoom in closer.

    We rely on this. Functions that are straight, or at least straight enough, are easy to work with. We can do calculus on them. We can do analysis on them. Functions with plots that look like straight lines are easy to work with. Often the best approach to working with the function you’re interested in is to approximate it with an easy-to-work-with function. I bet it’ll be a polynomial. That serves us well. Polynomials are these continuous functions. They’re differentiable. They’re smooth.

That thing about the Earth looking flat, though? That’s a lie. I’ve never been to any of the really great cuts in the Earth’s surface, but I have been to some decent gorges. I went to grad school in the Hudson River Valley. I’ve driven I-80 over Pennsylvania’s scariest bridges. There’s points where the surface of the Earth just drops a great distance between one footstep and the next.

Functions do that too. We can have points where a function isn’t differentiable, where it’s impossible to define the direction it’s headed. We can have points where a function isn’t continuous, where it jumps from one region of values to another region. Everyone knows this. We can’t dismiss those as aberrations not worthy of the name “function”; too many of them are too useful. Typically we handle this by admitting there’s points that aren’t continuous and we chop the function up. We make it into a couple of functions, each stretching from discontinuity to discontinuity. Between them we have continuous regions and we can go about our business as before.

Then came the 19th century when things got crazy. This particular craziness we credit to Karl Weierstrass. Weierstrass’s name is all over 19th century analysis. He had that talent for probing the limits of our intuition about basic mathematical ideas. We have a calculus that is logically rigorous because he found great counterexamples to things we had assumed without proof.

    The Weierstrass function challenges this idea that any function is going to eventually level out. Or that we can even smooth a function out into basically straight, predictable chunks in-between sudden changes of direction. The function is continuous everywhere; you can draw it perfectly without lifting your pen from paper. But it always looks like a zig-zag pattern, jumping around like it was always randomly deciding whether to go up or down next. Zoom in on any patch and it still jumps around, zig-zagging up and down. There’s never an interval where it’s always moving up, or always moving down, or even just staying constant.
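
I should write the function down, for the curious. Weierstrass’s classical example is

f(x) = \sum_{n = 0}^{\infty} a^n \cos\left(b^n \pi x\right)

where 0 < a < 1 , b is a positive odd integer, and ab > 1 + \frac{3\pi}{2} . A truncated partial sum is enough to see the endless zig-zag. Here’s a minimal Python sketch; the parameter choices are mine:

```python
import math

def weierstrass_partial(x, a=0.5, b=13, terms=30):
    # Partial sum of  sum_n a^n * cos(b^n * pi * x).
    # With a = 0.5 and b = 13 we have a*b = 6.5 > 1 + 3*pi/2,
    # satisfying Weierstrass's original conditions.
    return sum(a**n * math.cos(b**n * math.pi * x)
               for n in range(terms))

# Sample a few points; plot a fine grid of them and zoom in
# anywhere, and the jaggedness never smooths out.
for x in (0.0, 0.1, 0.2):
    print(x, weierstrass_partial(x))
```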

Despite being continuous it’s not differentiable. I’ve described that casually as it being impossible to predict where the function is going. That’s an abuse of words, yes. The function is defined. Its value at a point isn’t any more random than the value of “x^2” is for any particular x. The unpredictability I’m talking about here is a side effect of ignorance. Imagine I showed you a plot of “x^2” with a part of it concealed and asked you to fill in the gap. You’d probably do pretty well estimating it. The Weierstrass function, though? No; your guess would be lousy. My guess would be lousy too.

    That’s a weird thing to have happen. A century and a half later it’s still weird. It gets weirder. The Weierstrass function isn’t differentiable generally. But there are exceptions. There are little dots of differentiability, where the rate at which the function changes is known. Not intervals, though. Single points. This is crazy. Derivatives are about how a function changes. We work out what they should even mean by thinking of a function’s value on strips of the domain. Those strips are small, but they’re still, you know, strips. But on almost all of that strip the derivative isn’t defined. It’s only at isolated points, a set with measure zero, that this derivative even exists. It evokes the medieval Mysteries, of how we are supposed to try, even though we know we shall fail, to understand how God can have contradictory properties.

    It’s not quite that Mysterious here. Properties like this challenge our intuition, if we’ve gotten any. Once we’ve laid out good definitions for ideas like “derivative” and “continuous” and “limit” and “function” we can work out whether results like this make sense. And they — well, they follow. We can avoid weird conclusions like this, but at the cost of messing up our definitions for what a “function” and other things are. Making those useless. For the mathematical world to make sense, we have to change our idea of what quite makes sense.

    That’s all right. When we look close we realize the Earth around us is never flat. Even reasonably flat areas have slight rises and falls. The ends of properties are marked with curbs or ditches, and bordered by streets that rise to a center. Look closely even at the dirt and we notice that as level as it gets there are still rocks and scratches in the ground, clumps of dirt an infinitesimal bit higher here and lower there. The flatness of the Earth around us is a useful tool, but we miss a lot by pretending it’s everything. The Weierstrass function is one of the ways a student mathematician learns that while smooth, predictable functions are essential, there is much more out there.

     
  • Joseph Nebus 6:00 pm on Monday, 12 December, 2016 Permalink | Reply
    Tags: , , calculus, definite integrals,   

    The End 2016 Mathematics A To Z: Riemann Sum 


    I see for the other A To Z I did this year I did something else named for Riemann. So I did. Bernhard Riemann did a lot of work that’s essential to how we see mathematics today. We name all kinds of things for him, and correctly so. Here’s one of his many essential bits of work.

    Riemann Sum.

    The Riemann Sum is a thing we learn in Intro to Calculus. It’s essential in getting us to definite integrals. We’re introduced to it in functions of a single variable. The functions have a domain that’s an interval of real numbers and a range that’s somewhere in the real numbers. The Riemann Sum — and from it, the integral — is a real number.

We get this number by following a couple steps. The first is we chop the interval up into a bunch of smaller intervals. That chopping-up we call a “partition”, which is another of those times mathematicians use a word pretty much the way ordinary people would. From each one of those chopped-up pieces we pick a representative point. Now, for each piece, evaluate what the function is at that representative point. Multiply that by the width of the piece it came from. Then take those products for each of those pieces and add them all together. If you’ve done it right you’ve got a number.

    You need a couple pieces in place to have “the” Riemann Sum for something. You need a function, which is fair enough. And you need a partitioning of the interval. And you need some representative point for each of the partitions. Change any of them — function, partition, or point — and you may change the sum you get. You expect that for changing the function. Changing the partition? That’s less obvious. But draw some wiggly curvy function on a sheet of paper. Draw a couple of partitions of the horizontal axis. (You’ll probably want to use different colors for different partitions.) That should coax you into it. And you’d probably take it on my word that different representative points give you different sums.

    Very different? It’s possible. There’s nothing stopping it from happening. But if the results aren’t very different then we might just have an integrable function. That’s a function that gives us the same Riemann Sum no matter how we pick representative points, as long as we pick partitions that get finer and finer enough. We measure how fine a partition is by how big the widest chopped-up piece is. To be integrable the Riemann Sum for a function has to get to the same number whenever the partition’s size gets small enough and however we pick points inside. We get the lovely quiet paradox in which we add together infinitely many things, each of them infinitesimally tiny, and get a regular old number out of all that work.

    We use the Riemann Sum for what we call numerical quadrature. That’s working out integrals on the computer. Or calculator. Or by hand. When we do it by evaluating numbers instead of using analysis. It’s very easy to program. And we can do some tricks based on the Riemann Sum to make the numerical estimate a closer match to the actual integral.
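
To show how easy to program it is, here’s a bare-bones version in Python. It’s a sketch of the Intro Calc setup, equal-width pieces and a choice of representative point, not a production quadrature routine:

```python
def riemann_sum(f, a, b, n=1000, rule="midpoint"):
    # Chop [a, b] into n equal pieces, pick a representative
    # point in each piece by `rule`, and add up f(point) * width.
    width = (b - a) / n
    total = 0.0
    for i in range(n):
        left = a + i * width
        if rule == "left":
            x = left
        elif rule == "right":
            x = left + width
        else:   # midpoint
            x = left + width / 2
        total += f(x) * width
    return total

# Example: integrate x^2 from 0 to 1; the exact answer is 1/3.
print(riemann_sum(lambda x: x**2, 0.0, 1.0))
```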

    And we use the Riemann Sum to learn how the Riemann Integral works. It’s a blessedly straightforward thing. It appeals to intuition well. It lets us draw all sorts of curves with rectangular boxes overlaying them. It’s so easy to work out the area of a rectangular box. We can imagine adding up these areas without being confused.

    We don’t use the Riemann Sum to actually do integrals, though. Numerical approximations to an integral, yes. For the actual integral it’s too hard to use. What makes it hard is you need to evaluate this for every possible partition and every possible pick of representative points. In grad school my analysis professor worked through — once — using this to integrate the number 1. This is the easiest possible thing to integrate and it was barely manageable. He gave a good try at integrating the function ‘f(x) = x’ but admitted he couldn’t do it. None of us could.

    When you see the Riemann Sum in an Introduction to Calculus course you see it in simplified form. You get partitions that are very easy to work with. Like, you break the interval up into some number of equally-sized chunks. You get representative points that follow one of a couple good choices. The left end of the partition. The right end of the partition. The middle of the partition.

    That’s fine, numerically. If the function is integrable it doesn’t matter what partition or representative points we pick. And it’s fine for learning about whether functions are integrable. If it matters whether you pick left or middle or right ends of the partition then the function isn’t integrable. The instructor can give functions that break integrability based on a given partition or endpoint choice or whatever.

    But that isn’t every possible partition and every possible pick of representative points. I suppose it’s possible to work all that out for a couple of really, really simple functions. But it’s so much work. We’re better off using the Riemann Sum to get to formulas about integrals that don’t depend on actually using the Riemann Sum.

    So that is the curious position the Riemann Sum has. It is a fundament of integral calculus. It is the way we first define the definite integral. We rely on it to learn what definite integrals are like. We use it all the time numerically. We never use it analytically. It’s too hard. I hope you appreciate the strange beauty of that.

     
  • Joseph Nebus 6:00 pm on Sunday, 11 December, 2016 Permalink | Reply
    Tags: , calculus, , pranks, , titles   

    Reading the Comics, December 5, 2016: Cameo Appearances Edition 


    Comic Strip Master Command sent a bunch of strips my way this past week. They’ll get out to your way over this week. The first bunch are all on Gocomics.com, so I don’t feel quite fair including the strips themselves. This set also happens to be a bunch in which mathematics gets a passing mention, or is just used because they need some subject and mathematics is easy to draw into a joke. That’s all right.

    Jef Mallet’s Frazz for the 4th uses blackboard arithmetic and the iconic minor error of arithmetic. It’s also strikingly well-composed; look at the art from a little farther away. Forgetting to carry the one is maybe a perfect minor error for this sort of thing. Everyone does it, experienced mathematicians included. It’s very gradable. When someone’s learning arithmetic making this mistake is considered evidence that someone doesn’t know how to add. When someone’s learned it, making the mistake isn’t considered evidence the person doesn’t know how to add. A lot of mistakes work that way, somehow.

Rick Stromoski’s Soup to Nutz for the 4th name-drops Fundamentals of Algebra as a devilish, ban-worthy book. Everyone feels that way. Mathematics majors get that way around two months into their Introduction To Not That Kind Of Algebra course too. I doubt Stromoski has any particular algebra book in mind, but it doesn’t matter. The convention in mathematics books is to make titles that are ruthlessly descriptive, with not a touch of poetry to them. Among the mathematics books I have on my nearest shelf are Resnikoff and Wells’s Mathematics in Civilization; Koks’ Explorations in Mathematical Physics: The Concepts Behind An Elegant Language; Enderton’s A Mathematical Introduction To Logic; Courant, Robbins, and Stewart’s What Is Mathematics?; Murasugi’s Knot Theory And Its Applications; Nishimori’s Statistical Physics of Spin Glasses and Information Processing; Brush’s The Kind Of Motion We Call Heat, and so on. Only the Brush title has the slightest poetry to it, and it’s a history (of thermodynamics and statistical mechanics). The Courant/Robbins/Stewart has a title you could imagine on a bookstore shelf, but it’s also in part a popularization.

It’s the convention, and it’s all right in its domain. If you are deep in the library stacks and don’t know what a book is about, the spine will tell you what the subject is. You might not know what level or depth the book is pitched at, but you’ll know what the book is. The down side is if you remember having liked a book but not who wrote it you’re lost. Methods of Functional Analysis? Techniques in Modern Functional Analysis? … You could probably make a bingo game out of mathematics titles.

    Johnny Hart’s Back to B.C. for the 5th, a rerun from 1959, plays on the dawn of mathematics and the first thoughts of parallel lines. If parallel lines stir feelings in people they’re complicated feelings. One’s either awed at the resolute and reliable nature of the lines’ interaction, or is heartbroken that the things will never come together (or, I suppose, break apart). I can feel both sides of it.

Dave Blazek’s Loose Parts for the 5th features the arithmetic blackboard as inspiration for a prank. It’s the sort of thing harder to do with someone’s notes for an English essay. But, to spoil the fun, I have to say in my experience something fiddled with in the middle of a board wouldn’t even register. In much the way people will read over typos, their minds seeing what should be there instead of what is, a minor mathematical error will often not be seen. The mathematician will carry on with what she thought should be there. Especially if the error is a few lines back of the latest work. Not always, though, and when it doesn’t it’s a heck of a problem. (And here I am thinking of the week, the week, I once spent stymied by a problem because I was differentiating the function e^x wrong. The hilarious thing here is it is impossible to find something easier to differentiate than e^x. After you differentiate it correctly you get e^x. An advanced squirrel could do it right, and here I was in grad school doing it wrong.)

    Nate Creekmore’s Maintaining for the 5th has mathematics appear as the sort of homework one does. And a word problem that uses coins for whatever work it does. Coins should be good bases for word problems. They’re familiar enough and people do think about them, and if all else fails someone could in principle get enough dimes and quarters and just work it out by hand.

    Sam Hepburn’s Questionable Quotebook for the 5th uses a blackboard full of mathematics to signify a monkey’s extreme intelligence. There’s a little bit of calculus in there, an appearance of “\frac{df}{dx} ” and a mention of the limit. These are things you get right up front of a calculus course. They’ll turn up in all sorts of problems you try to do.

    Charles Schulz’s Peanuts for the 5th is not really about mathematics. Peppermint Patty just mentions it on the way to explaining the depths of her not-understanding stuff. But it’s always been one of my favorite declarations of not knowing what’s going on so I do want to share it. The strip originally ran the 8th of December, 1969.

     
  • Joseph Nebus 6:00 pm on Saturday, 5 November, 2016 Permalink | Reply
    Tags: calculus, , , , October, , , ,   

    How October 2016 Treated My Mathematics Blog 


    I do try to get these monthly readership review posts done close to the start of the month. I was busy the 1st of the month, though, and had to fit around the End 2016 Mathematics A To Z. And then I meant to set this to post on Thursday, since I didn’t have anything else going that day, and forgot.

    Readership Numbers:

    The number of page views declined again in October, part of a trend that’s been steady since June. There were only 907 views, down a slight amount from September’s 922 or more significantly from August’s 1002. I’ll find my way back above a thousand in a month if I can. A To Z months are usually pretty good ones, possibly because of all the fresh posts reminding people I exist.

    The number of unique visitors dropped to 536. There had been 576 in September, but then there were only 531 unique visitors in August, if you believe that sort of thing. The number of likes was 115, exactly the same as in September and slightly up from August’s 107. The number of comments rose to 24, up from September’s 20 and August’s 16. That’s certainly been helped by people making requests for the End 2016 Mathematics A To Z. But that counts too.

    Popular Posts:

The most popular post of the month was a surprise to me and dates back to September of 2012, incredibly. I suspect someone on a popular web site linked to it without my ever noticing. And the Reading the Comics posts were popular as ever.

    I’ve been trying to limit these most-popular posts to just five pieces. But How Mathematical Physics Works was the next piece to make the top ten and I am proud of it, so there.

    Listing Countries:

    Where did my readers come from in October? All over, but mostly, from 46 particular countries. Here’s the oddly popular list of them:

    Country Readers
    United States 466
    United Kingdom 78
    Philippines 55
    India 52
    Canada 32
    Germany 27
    Austria 23
    Puerto Rico 19
    Australia 14
    France 12
    Slovenia 10
    Spain 9
    Brazil 7
    Netherlands 7
    Italy 6
    New Zealand 5
    Singapore 5
    Denmark 4
    Sweden 4
    Bulgaria 3
    Poland 3
    Serbia 3
    Argentina 2
    European Union 2
    Indonesia 2
    Norway 2
    Bahamas 1
    Belgium 1
    Czech Republic 1 (**)
    Estonia 1 (*)
    Finland 1
    Greece 1
    Ireland 1
    Israel 1
    Jamaica 1
    Japan 1
    Mexico 1
    Portugal 1 (*)
    Russia 1
    Saudi Arabia 1
    Slovakia 1
    South Africa 1
    Ukraine 1
    United Arab Emirates 1
    Uruguay 1
    Zambia 1

Estonia and Portugal are on two-month streaks as single-read countries. The Czech Republic’s on a three-month streak. Nobody’s on a four-month streak, not yet.

    Search Term Non-Poetry:

    Once again it wasn’t a truly poetic sort of month. But it was one that taught me what people are looking for, and it’s comics about James Clerk Maxwell. Look at these queries:

    • comic strips of the scientist maxwell
    • comics trip of james clerk maxwell
    • comics about maxwell the scientist
    • james clerk maxwell comics trip
    • log 10 times 10 to the derivative of 10000
    • problems with vinyl lp with too many grooves
    • comics about integers
    • comic strip in advance algebra

    I admit I don’t know why someone sees James Clerk Maxwell as a figure for a comics trip. He’s famous for the laws of electromagnetism, of course. Also for great work in thermodynamics and statistical mechanics. Also for color photography. And explaining how the rings of Saturn could work. And for working out the physics of truss bridges, which may sound boring but is important. Great subject for a biography. Just, a comic?

    Counting Readers:

    November sees the blog start with 42,250 page views, from 17,747 unique visitors if you can believe that. I’m surprised the mathematics blog still has a higher view count than my humor blog has, just now. That one’s consistently more popular; this one’s just been around longer.

    WordPress says I started November with 626 followers, barely up from October’s 624. If you have wanted to follow me, there’s a button on the upper-right corner of the blog for that, at least until I change to a different theme. Also if you know a WordPress theme that would work better for the kind of blog I write let me know. I have a vague itch to change things around and that always precedes trouble. Also you can follow me on Twitter, @Nebusj, or check that out to make sure I’m not one of those people who somehow is hard to Twitter-read.

According to the “Insights” tab my readership’s largest on Sundays, which makes sense. I’ve standardized on Sundays for the Reading the Comics essays. That gets 18 percent of page views, a bit more than the one-in-seven share a uniform week would give. The most popular hour is again 6 pm, I assume Universal Time. 14 percent of page views come in that hour. That’s the same percentage as last month and it must reflect when my standard posting hour is.

     
    • davekingsbury 10:52 pm on Sunday, 6 November, 2016 Permalink | Reply

      Perhaps your wide readership shows that mathematics is a universal language?

      Like

      • Joseph Nebus 5:56 am on Wednesday, 9 November, 2016 Permalink | Reply

        Conceivable! Although I suppose I’ve probably hit on a couple of topics that people are perennially if slightly looking for. And I have the advantage of writing in English, which so much of the Internet still depends upon. (I suppose it can’t hurt I’ve been trying to write sentences easier to understand, which is good for all readers as long as I don’t get simpler than the idea I mean to express.)

        Like

    • davekingsbury 5:03 pm on Wednesday, 9 November, 2016 Permalink | Reply

      Popularising maths and the sciences is a valuable art – long may you continue!

      Like

  • Joseph Nebus 6:00 pm on Friday, 21 October, 2016 Permalink | Reply
    Tags: , calculus, , , , , , ,   

    Why Stuff Can Orbit, Part 6: Circles and Where To Find Them 




    So now we can work out orbits. At least orbits for a central force problem. Those are ones where a particle — it’s easy to think of it as a planet — is pulled towards the center of the universe. How strong that pull is depends on some constants. But it only changes as the distance the planet is from the center changes.

    What we’d like to know is whether there are circular orbits. By “we” I mean “mathematical physicists”. And I’m including you in that “we”. If you’re reading this far you’re at least interested in knowing how mathematical physicists think about stuff like this.

    It’s easiest describing when these circular orbits exist if we start with the potential energy. That’s a function named ‘V’. We write it as ‘V(r)’ to show it’s an energy that changes as ‘r’ changes. By ‘r’ we mean the distance from the center of the universe. We’d use ‘d’ for that except we’re so used to thinking of distance from the center as ‘radius’. So ‘r’ seems more compelling. Sorry.

Besides the potential energy we need to know the angular momentum of the planet (or whatever it is) moving around the center. The amount of angular momentum is a number we call ‘L’. It might be positive, it might be negative. Also we need the planet’s mass, which we call ‘m’. The angular momentum and mass let us write a function called the effective potential energy, ‘V_eff(r)’.

And we’ll need to take derivatives of ‘V_eff(r)’. Fortunately that “How Differential Calculus Works” essay explains all the symbol-manipulation we need to get started. That part is calculus, but the easy part. We can just follow the rules already there. So here’s what we do:

    • The planet (or whatever) can have a circular orbit around the center at any radius which makes the equation \frac{dV_{eff}}{dr} = 0 true.
    • The circular orbit will be stable if the radius of its orbit makes the second derivative of the effective potential, \frac{d^2V_{eff}}{dr^2} , some number greater than zero.

    We’re interested in stable orbits because usually unstable orbits are boring. They might exist but any little perturbation breaks them down. The mathematician, ordinarily, sees this as a useless solution except in how it describes different kinds of orbits. The physicist might point out that sometimes it can take a long time, possibly millions of years, before the perturbation becomes big enough to stand out. Indeed, it’s an open question whether our solar system is stable. While it seems to have gone millions of years without any planet changing its orbit very much we haven’t got the evidence to say it’s impossible that, say, Saturn will be kicked out of the solar system anytime soon. Or worse, that Earth might be. “Soon” here means geologically soon, like, in the next million years.

    (If it takes so long for the instability to matter then the mathematician might allow that as “metastable”. There are a lot of interesting metastable systems. But right now, I don’t care.)

    I realize now I didn’t explain the notation for the second derivative before. It looks funny because that’s just the best we can work out. In that fraction \frac{d^2V_{eff}}{dr^2} the ‘d’ isn’t a number so we can’t cancel it out. And the superscript ‘2’ doesn’t mean squaring, at least not the way we square numbers. There’s a functional analysis essay in there somewhere. Again I’m sorry about this but there’s a lot of things mathematicians want to write out and sometimes we can’t find a way that avoids all confusion. Roll with it.

    So that explains the whole thing clearly and easily and now nobody could be confused and yeah I know. If my Classical Mechanics professor left it at that we’d have open rebellion. Let’s do an example.

    There are two and a half good examples. That is, they’re central force problems with answers we know. One is gravitation: we have a planet orbiting a star that’s at the origin. Another is springs: we have a mass that’s connected by a spring to the origin. And the half is electric: put a positive electric charge at the center and have a negative charge orbit that. The electric case is only half a problem because it’s the same as the gravitation problem except for what the constants involved are. Electric charges attract each other crazy way stronger than gravitational masses do. But that doesn’t change the work we do.

    This is a lie. Electric charges accelerating, and just orbiting counts as accelerating, cause electromagnetic effects to happen. They give off light. That’s important, but it’s also complicated. I’m not going to deal with that.

    I’m going to do the gravitation problem. After all, we know the answer! By Kepler’s something law, something something radius cubed something G M … something … squared … After all, we can look up the answer!

    The potential energy for a planet orbiting a sun looks like this:

    V(r) = - G M m \frac{1}{r}

    Here ‘G’ is a constant, called the Gravitational Constant. It’s how strong gravity in the universe is. It’s not very strong. ‘M’ is the mass of the sun. ‘m’ is the mass of the planet. To make sense ‘M’ should be a lot bigger than ‘m’. ‘r’ is how far the planet is from the sun. And yes, that’s one-over-r, not one-over-r-squared. This is the potential energy of the planet being at a given distance from the sun. One-over-r-squared gives us how strong the force attracting the planet towards the sun is. Different thing. Related thing, but different thing. Just listing all these quantities one after the other means ‘multiply them together’, because mathematicians multiply things together a lot and get bored writing multiplication symbols all the time.

    Now for the effective potential we need to toss in the angular momentum. That’s ‘L’. The effective potential energy will be:

    V_{eff}(r) = - G M m \frac{1}{r} + \frac{L^2}{2 m r^2}

    I’m going to rewrite this in a way that means the same thing, but that makes it easier to take derivatives. At least easier to me. You’re on your own. But here’s what looks easier to me:

    V_{eff}(r) = - G M m r^{-1} + \frac{L^2}{2 m} r^{-2}

I like this because it makes every term here look like “some constant number times r to a power”. That’s easy to take the derivative of. Check back on that “How Differential Calculus Works” essay. The first derivative of this ‘V_eff(r)’, taken with respect to ‘r’, looks like this:

    \frac{dV_{eff}}{dr} = -(-1) G M m r^{-2} -2\frac{L^2}{2m} r^{-3}

    We can tidy that up a little bit: -(-1) is another way of writing 1. The second term has two times something divided by 2. We don’t need to be that complicated. In fact, when I worked out my notes I went directly to this simpler form, because I wasn’t going to be thrown by that. I imagine I’ve got people reading along here who are watching these equations warily, if at all. They’re ready to bolt at the first sign of something terrible-looking. There’s nothing terrible-looking coming up. All we’re doing from this point on is really arithmetic. It’s multiplying or adding or otherwise moving around numbers to make the equation prettier. It happens we only know those numbers by cryptic names like ‘G’ or ‘L’ or ‘M’. You can go ahead and pretend they’re ‘4’ or ‘5’ or ‘7’ if you like. You know how to do the steps coming up.

So! We allegedly can have a circular orbit when this first derivative is equal to zero. What values of ‘r’ make this equation true?

    G M m r^{-2} - \frac{L^2}{m} r^{-3} = 0

    Not so helpful there. What we want is to have something like ‘r = (mathematics stuff here)’. We have to do some high school algebra moving-stuff-around to get that. So one thing we can do to get closer is add the quantity \frac{L^2}{m} r^{-3} to both sides of this equation. This gets us:

    G M m r^{-2} = \frac{L^2}{m} r^{-3}

Things are getting better. Now multiply both sides by the same number. Which number? r^3. That’s because ‘r^{-3}’ times ‘r^3’ is going to equal 1, while ‘r^{-2}’ times ‘r^3’ will equal ‘r^1’, which normal people call ‘r’. I kid; normal people don’t think of such a thing at all, much less call it anything. But if they did, they’d call it ‘r’. We’ve got:

    G M m r = \frac{L^2}{m}

    And now we’re getting there! Divide both sides by whatever number ‘G M’ is, as long as it isn’t zero. And then we have our circular orbit! It’s at the radius

    r = \frac{L^2}{G M m^2}

    Very good. I’d even say pretty. It’s got all those capital letters and one little lowercase. Something squared in the numerator and the denominator. Aesthetically pleasant. Stinks a little that it doesn’t look like anything we remember from Kepler’s Laws once we’ve looked them up. We can fix that, though.
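
(If you’d rather make a computer do the symbol-shuffling, by the way, here’s that derivative-and-solve step sketched in Python with sympy. The variable names are mine.)

```python
import sympy as sp

r, G, M, m, L = sp.symbols('r G M m L', positive=True)

# Effective potential for the gravitational central-force problem.
V_eff = -G*M*m/r + L**2/(2*m*r**2)

# Circular orbits happen where the first derivative vanishes.
print(sp.solve(sp.diff(V_eff, r), r))   # [L**2/(G*M*m**2)]
```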

    The key is the angular momentum ‘L’ there. I haven’t said anything about how that number relates to anything. It’s just been some constant of the universe. In a sense that’s fair enough. Angular momentum is conserved, exactly the same way energy is conserved, or the way linear momentum is conserved. Why not just let it be whatever number it happens to be?

    (A note for people who skipped earlier essays: Angular momentum is not a number. It’s really a three-dimensional vector. But in a central force problem with just one planet moving around we aren’t doing any harm by pretending it’s just a number. We set it up so that the angular momentum is pointing directly out of, or directly into, the sheet of paper we pretend the planet’s orbiting in. Since we know the direction before we even start work, all we have to care about is the size. That’s the number I’m talking about.)

    The angular momentum of a thing is its moment of inertia times its angular velocity. I’m glad to have cleared that up for you. The moment of inertia of a thing describes how easy it is to start it spinning, or stop it spinning, or change its spin. It’s a lot like inertia. What it is depends on the mass of the thing spinning, and how that mass is distributed, and what it’s spinning around. It’s the first part of physics that makes the student really have to know volume integrals.

    We don’t have to know volume integrals. A single point mass spinning at a constant speed at a constant distance from the origin is the easy angular momentum to figure out. A mass ‘m’ at a fixed distance ‘r’ from the center of rotation moving at constant speed ‘v’ has an angular momentum of ‘m’ times ‘r’ times ‘v’.

    So great; we’ve turned ‘L’ which we didn’t know into ‘m r v’, where we know ‘m’ and ‘r’ but don’t know ‘v’. We’re making progress, I promise. The planet’s tracing out a circle in some amount of time. It’s a circle with radius ‘r’. So it traces out a circle with perimeter ‘2 π r’. And it takes some amount of time to do that. Call that time ‘T’. So its speed will be the distance travelled divided by the time it takes to travel. That’s \frac{2 \pi r}{T} . Again we’ve changed one unknown number ‘L’ for another unknown number ‘T’. But at least ‘T’ is an easy familiar thing: it’s how long the orbit takes.

    Let me show you how this helps. Start off with what ‘L’ is:

    L = m r v = m r \frac{2\pi r}{T} = 2\pi m \frac{r^2}{T}

    Now let’s put that into the equation I got eight paragraphs ago:

    r = \frac{L^2}{G M m^2}

    Remember that one? Now put what I just said ‘L’ was, in where ‘L’ shows up in that equation.

    r = \frac{\left(2\pi m \frac{r^2}{T}\right)^2}{G M m^2}

    I agree, this looks like a mess and possibly a disaster. It’s not so bad. Do some cleaning up on that numerator.

    r = \frac{4 \pi^2 m^2}{G M m^2} \frac{r^4}{T^2}

    That’s looking a lot better, isn’t it? We even have something we can divide out: the mass of the planet is just about to disappear. This sounds bizarre, but remember Kepler’s laws: the mass of the planet never figures into things. We may be on the right path yet.

    r = \frac{4 \pi^2}{G M} \frac{r^4}{T^2}

OK. Now I’m going to multiply both sides by ‘T^2’ because that’ll get that out of the denominator. And I’ll divide both sides by ‘r’ so that I only have the radius of the circular orbit on one side of the equation. Here’s what we’ve got now:

    T^2 = \frac{4 \pi^2}{G M} r^3

    And hey! That looks really familiar. A circular orbit’s radius cubed is some multiple of the square of the orbit’s time. Yes. This looks right. At least it looks reasonable. Someone else can check if it’s right. I like the look of it.
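
Or a computer can check it. Here’s a quick numerical sanity check in Python, with round values for the Sun and the Earth; the numbers are mine, not anything from the essay:

```python
import math

G = 6.674e-11    # m^3 / (kg s^2), gravitational constant
M = 1.989e30     # kg, mass of the Sun
r = 1.496e11     # m, Earth's mean orbital radius

T = math.sqrt(4 * math.pi**2 / (G * M) * r**3)
print(T / 86400)   # roughly 365 days, as hoped
```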

    So this is the process you’d use to start understanding orbits for your own arbitrary potential energy. You can find the equivalent of Kepler’s Third Law, the one connecting orbit times and orbit radiuses. And it isn’t really hard. You need to know enough calculus to differentiate one function, and then you need to be willing to do a pile of arithmetic on letters. It’s not actually hard. Next time I hope to talk about more and different … um …

    I’d like to talk about the different … oh, dear. Yes. You’re going to ask about that, aren’t you?

    Ugh. All right. I’ll do it.

    How do we know this is a stable orbit? Well, it just is. If it weren’t the Earth wouldn’t have a Moon after all this. Heck, the Sun wouldn’t have an Earth. At least it wouldn’t have a Jupiter. If the solar system is unstable, Jupiter is probably the most stable part. But that isn’t convincing. I’ll do this right, though, and show what the second derivative tells us. It tells us this is too a stable orbit.

    So. The thing we have to do is find the second derivative of the effective potential. This we do by taking the derivative of the first derivative. Then we have to evaluate this second derivative and see what value it has for the radius of our circular orbit. If that’s a positive number, then the orbit’s stable. If that’s a negative number, then the orbit’s not stable. This isn’t hard to do, but it isn’t going to look pretty.

    First the pretty part, though. Here’s the first derivative of the effective potential:

    \frac{dV_{eff}}{dr} = G M m r^{-2} - \frac{L^2}{m} r^{-3}

    OK. So the derivative of this with respect to ‘r’ isn’t hard to evaluate again. This is again a function with a bunch of terms that are all a constant times r to a power. That’s the easiest sort of thing to differentiate that isn’t just something that never changes.

    \frac{d^2 V_{eff}}{dr^2} = -2 G M m r^{-3} - (-3)\frac{L^2}{m} r^{-4}

    Now the messy part. We need to work out what that line above is when our planet’s in our circular orbit. That circular orbit happens when r = \frac{L^2}{G M m^2} . So we have to substitute that mess in for ‘r’ wherever it appears in that above equation and you’re going to love this. Are you ready? It’s:

    -2 G M m \left(\frac{L^2}{G M m^2}\right)^{-3} + 3\frac{L^2}{m}\left(\frac{L^2}{G M m^2}\right)^{-4}

    This will get a bit easier promptly. That’s because something raised to a negative power is the same as its reciprocal raised to the positive of that power. So that terrible, terrible expression is the same as this terrible, terrible expression:

    -2 G M m \left(\frac{G M m^2}{L^2}\right)^3 + 3 \frac{L^2}{m}\left(\frac{G M m^2}{L^2}\right)^4

    Yes, yes, I know. Only thing to do is start hacking through all this because I promise it’s going to get better. Putting all those third- and fourth-powers into their parentheses turns this mess into:

    -2 G M m \frac{G^3 M^3 m^6}{L^6} + 3 \frac{L^2}{m} \frac{G^4 M^4 m^8}{L^8}

    Yes, my gut reaction when I see multiple things raised to the eighth power is to say I don’t want any part of this either. Hold on another line, though. Things are going to start cancelling out and getting shorter. Group all those things-to-powers together:

    -2 \frac{G^4 M^4 m^7}{L^6} + 3 \frac{G^4 M^4 m^7}{L^6}

    Oh. Well, now this is different. The second derivative of the effective potential, at this point, is the number

    \frac{G^4 M^4 m^7}{L^6}

And I admit I don’t know what number that is. But here’s what I do know: ‘G’ is a positive number. ‘M’ is a positive number. ‘m’ is a positive number. ‘L’ might be positive or might be negative, but ‘L^6’ is a positive number either way. So this is a bunch of positive numbers multiplied and divided together.

So this second derivative, whatever it is, must be a positive number. And so this circular orbit is stable. Give the planet a little nudge and that’s all right. It’ll stay near its orbit. I’m sorry to put you through that but some people raised the, honestly, fair question.
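
And, continuing the sympy sketch from earlier, a computer agrees about the sign; again the variable names are mine:

```python
import sympy as sp

r, G, M, m, L = sp.symbols('r G M m L', positive=True)
V_eff = -G*M*m/r + L**2/(2*m*r**2)

radius = sp.solve(sp.diff(V_eff, r), r)[0]
second = sp.diff(V_eff, r, 2).subs(r, radius)
print(sp.simplify(second))   # G**4*M**4*m**7/L**6, manifestly positive
```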

    So this is the process you’d use to start understanding orbits for your own arbitrary potential energy. You can find the equivalent of Kepler’s Third Law, the one connecting orbit times and orbit radiuses. And it isn’t really hard. You need to know enough calculus to differentiate one function, and then you need to be willing to do a pile of arithmetic on letters. It’s not actually hard. Next time I hope to talk about the other kinds of central forces that you might get. We only solved one problem here. We can solve way more than that.

     
    • howardat58 6:18 pm on Friday, 21 October, 2016 Permalink | Reply

      I love the chatty approach.

      Like

      • Joseph Nebus 5:03 am on Saturday, 22 October, 2016 Permalink | Reply

        Thank you. I realized doing Theorem Thursdays over the summer that it was hard to avoid that voice, and then that it was fun writing in it. So eventually I do learn, sometimes.

        Like

  • Joseph Nebus 6:00 pm on Friday, 14 October, 2016 Permalink | Reply
    Tags: , calculus, , , , harmonic motion, ,   

    How Mathematical Physics Works: Another Course In 2200 Words 


    OK, I need some more background stuff before returning to the Why Stuff Can Orbit series. Last week I explained how to take derivatives, which is one of the three legs of a Calculus I course. Now I need to say something about why we take derivatives. This essay won’t really qualify you to do mathematical physics, but it’ll at least let you bluff your way through a meeting with one.

We care about derivatives because we’re doing physics a smart way. This involves thinking not about forces but instead potential energy. We have a function, called V or sometimes U, that changes based on where something is. If we need to know the forces on something we can take the derivative, with respect to position, of the potential energy. (Strictly, the force is minus that derivative; the sign just tells us which way the force points.)

    The way I’ve set up these central force problems makes it easy to shift between physical intuition and calculus. Draw a scribbly little curve, something going up and down as you like, as long as it doesn’t loop back on itself. Also, don’t take the pen from paper. Also, no corners. That’s just cheating. Smooth curves. That’s your potential energy function. Take any point on this scribbly curve. If you go to the right a little from that point, is the curve going up? Then your function has a positive derivative at that point. Is the curve going down? Then your function has a negative derivative. Find some other point where the curve is going in the other direction. If it was going up to start, find a point where it’s going down. Somewhere in-between there must be a point where the curve isn’t going up or going down. The Intermediate Value Theorem says you’re welcome.

    These points where the potential energy isn’t increasing or decreasing are the interesting ones. At least if you’re a mathematical physicist. They’re equilibriums. If whatever might be moving happens to be exactly there, then it’s not going to move. It’ll stay right there. Mathematically: the force is some fixed number times the derivative of the potential energy there. The potential energy’s derivative is zero there. So the force is zero and without a force nothing’s going to change. Physical intuition: imagine you laid out a track with exactly the shape of your curve. Put a marble at this point where the track isn’t rising and isn’t falling. Does the marble move? No, but if you’re not so sure about that read on past the next paragraph.

    Mathematical physicists learn to look for these equilibriums. We’re taught to not bother with what will happen if we release this particle at this spot with this velocity. That is, you know, not looking at any particular problem someone might want to know. We look instead at equilibriums because they help us describe all the possible behaviors of a system. Mathematicians are sometimes characterized as lazy in spirit. This is fair. Mathematicians will start out with a problem looking to see if it’s just like some other problem someone already solved. But the flip side is if one is going to go to the trouble of solving a new problem, she’s going to really solve it. We’ll work out not just what happens from some one particular starting condition. We’ll try to describe all the different kinds of thing that could happen, and how to tell which of them does happen for your measly little problem.

If you actually do have a curvy track and put a marble down on its equilibrium it might yet move. Suppose the track is rising a while and then falls back again; put the marble at the top and it’s likely to roll one way or the other. If it doesn’t it’s probably because of friction; the track sticks a little. If it were a really smooth track and the marble perfectly round then it’d fall. Give me this. But even with a perfectly smooth track and perfectly frictionless marble it’ll still roll one way or another. Unless you put it exactly at the spot that’s the top of the hill, not a bit to the left or the right. Good luck.

    What’s happening here is the difference between a stable and an unstable equilibrium. This is again something we all have a physical intuition for. Imagine you have something that isn’t moving. Give it a little shove. Does it stay about like it was? Then it’s stable. Does it break? Then it’s unstable. The marble at the top of the track is at an unstable equilibrium; a little nudge and it’ll roll away. If you had a marble at the bottom of a track, inside a valley, then it’s a stable equilibrium. A little nudge will make the marble rock back and forth but it’ll stay nearby.

    Yes, if you give it a crazy big whack the marble will go flying off, never to be seen again. We’re talking about small nudges. No, smaller than that. This maybe sounds like question-begging to you. But what makes for an unstable equilibrium is that no nudge is too small. The nudge — perturbation, in the trade — will just keep growing. In a stable equilibrium there’s nudges small enough that they won’t keep growing. They might not shrink, but they won’t grow either.

    So how to tell which is which? Well, look at your potential energy and imagine it as a track with a marble again. Where are the unstable equilibriums? They’re the ones at tops of hills. Near them the curve looks like a cup pointing down, to use the metaphor every Calculus I class takes. Where are the stable equilibriums? They’re the ones at bottoms of valleys. Near them the curve looks like a cup pointing up. Again, see Calculus I.

    We may be able to tell the difference between these kinds of equilibriums without drawing the potential energy. We can use the second derivative. To find the second derivative of a function you take the derivative of a function and then — you may want to think this one over — take the derivative of that. That is, you take the derivative of the original function a second time. Sometimes higher mathematics gives us terms that aren’t too hard.

    So if you have a spot where you know there’s an equilibrium, look at what the second derivative at that spot is. If it’s positive, you have a stable equilibrium. If it’s negative, you have an unstable equilibrium. This is called “Second Derivative Test”, as it was named by a committee that figured it was close enough to 5 pm and why cause trouble?
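
A worked example, for concreteness: V(x) = x^2 has derivative 2x , which is zero at x = 0 , and second derivative 2 , which is positive, so that equilibrium is stable. V(x) = -x^2 also has an equilibrium at zero, but its second derivative there is -2 , negative, so that one’s unstable. This matches the marble intuition: the first is a valley, the second a hilltop.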

    If the second derivative is zero there, um, we can’t say anything right now. The equilibrium may also be an inflection point. That’s where the growth of something pauses a moment before resuming. Or where the decline of something pauses a moment before resuming. In either case that’s still an unstable equilibrium. But it doesn’t have to be. It could still be a stable equilibrium. It might just have a very smoothly flat base. No telling just from that one piece of information and this is why we have to go on to other work.

    But this gets at how we’d like to look at a system. We look for its equilibriums. We figure out which equilibriums are stable and which ones are unstable. With a little more work we can say, if the system starts out like this it’ll stay near that equilibrium. If it starts out like that it’ll stay near this whole other equilibrium. If it starts out this other way, it’ll go flying off to the end of the universe. We can solve every possible problem at once and never have to bother with a particular case. This feels good.

    It also gives us a little something more. You maybe have heard of a tangent line. That’s a line that’s, er, tangent to a curve. Again with the not-too-hard terms. What this means is there’s a point, called the “point of tangency”, again named by a committee that wanted to get out early. And the line just touches the original curve at that point, and it’s going in exactly the same direction as the original curve at that point. Typically this means the line just grazes the curve, at least around there. If you’ve ever rolled a pencil until it just touched the edge of your coffee cup or soda can, you’ve set up a tangent line to the curve of your beverage container. You just didn’t think of it as that because you’re not daft. Fair enough.

    Mathematicians will use tangents because a tangent line has values that are so easy to calculate. The function describing a tangent line is a polynomial and we llllllllove polynomials, correctly. The tangent line is always easy to understand, however hard the original function was. Its value, at the point of tangency, is exactly what the original function's was. Its first derivative, at the point of tangency, is exactly what the original function's was at that point. Its second derivative is zero, which might or might not be true of the original function. We don't care.
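
    In symbols: if the point of tangency is at x = a, the tangent line is the function T(x) = f(a) + f'(a)\cdot(x - a) . You can read those properties right off it. Plug in x = a and you get f(a) back. Take one derivative and you get f'(a). Take two and you get zero.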

    We don’t use tangent lines when we look at equilibriums. This is because in this case they’re boring. If it’s an equilibrium then its tangent line is a horizontal line. No matter what the original function was. It’s trivial: you know the answer before you’ve heard the question.

    Ah, but, there is something mathematical physicists do like. The tangent line is boring. Fine. But how about, using the second derivative, building a tangent … well, “parabola” is the proper term. This is a curve that’s a quadratic, that looks like an open bowl. It exactly matches the original function at the equilibrium. Its derivative exactly matches the original function’s derivative at the equilibrium. Its second derivative also exactly matches the original function’s second derivative, though. Third derivative we don’t care about. It’s so not important here I can’t even finish this sentence in a

    What this second-derivative-based approximation gives us is a parabola. It will look very much like the original function if we're close to the equilibrium. And this gives us something great. The great thing is this is the same potential-energy shape as a weight on a spring, or anything else that oscillates back and forth. It's the potential energy for "simple harmonic motion".
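
    In symbols, this parabola is the second-order Taylor polynomial of the potential energy, taken around the equilibrium. The first derivative is zero at an equilibrium, so the linear term drops out. What's left, for an equilibrium at r = r_0, is:

    V(r) \approx V(r_0) + \frac{1}{2} V''(r_0) \left(r - r_0\right)^2

    Compare that to the potential energy of a mass on a spring, \frac{1}{2} k x^2 , and you can read off the effective spring constant: it's V''(r_0), the second derivative at the equilibrium.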

    And that’s great. We start studying simple harmonic motion, oh, somewhere in high school physics class because it’s so much fun to play with slinkies and springs and accidentally dropping weights on our lab partners. We never stop. The mathematics behind it is simple. It turns up everywhere. If you understand the mathematics of a mass on a spring you have a tool that relevant to pretty much every problem you ever have. This approximation is part of that. Close to a stable equilibrium, whatever system you’re looking at has the same behavior as a weight on a spring.

    It may strike you that a mass on a spring is itself a central-force problem. And now I'm saying that within the central force problem I started out doing, stuff that orbits, there's another central force problem. This is true. You'll see that in a few Why Stuff Can Orbit essays.

    So far, by the way, I've talked entirely about a potential energy with a single variable. This is for a good reason: two or more variables is harder. Well of course it is. But the basic dynamics are still the same. There's equilibriums. They can be stable or unstable. They might have inflection points. There is a new kind of behavior. Mathematicians call it a "saddle point". This is where in one direction the potential energy makes it look like a stable equilibrium while in another direction the potential energy makes it look unstable. Examples of it kind of look like the shape of a saddle, if you haven't looked at an actual saddle recently. (If you really want to know, get your computer to plot the function z = x^2 - y^2 and look at the origin, where x = 0 and y = 0.) Well, there's points on an actual saddle that would be saddle points to a mathematician. It's unstable, because there's that direction where it's definitely unstable.
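
    If you'd rather have the computer do that looking for you, here's a minimal sketch in Python using the z = x^2 - y^2 example from above. The matrix of second derivatives (called the Hessian, though we don't need the name here) has one positive and one negative eigenvalue at a saddle point. It assumes numpy is available.

        import numpy as np

        def hessian(f, x, y, h=1e-4):
            """Centered finite-difference estimate of the 2x2 matrix of second derivatives."""
            fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
            fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
            fxy = (f(x + h, y + h) - f(x + h, y - h)
                   - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
            return np.array([[fxx, fxy], [fxy, fyy]])

        f = lambda x, y: x**2 - y**2                      # the saddle from the text
        print(np.linalg.eigvalsh(hessian(f, 0.0, 0.0)))   # about [-2, 2]: mixed signs, a saddle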

    So everything about multivariable functions is longer, and a couple bits of it are harder. There’s more chances for weird stuff to happen. I think I can get through most of Why Stuff Can Orbit without having to know that. But do some reading up on that before you take a job as a mathematical physicist.

     
  • Joseph Nebus 6:00 pm on Friday, 7 October, 2016 Permalink | Reply
    Tags: calculus, , ,   

    How Differential Calculus Works 


    I’m at a point in my Why Stuff Can Orbit essays where I need to talk about derivatives. If I don’t then I’m going to be stuck with qualitative talk that’s too hard to focus on anything. But it’s a big subject. It’s about two-fifths of a Freshman Calculus course and one of the two main legs of calculus. So I want to pull out a quick essay explaining the important stuff.

    Derivatives, also called differentials, are about how things change. By "things" I mean "functions". And particularly I mean functions whose domain is the real numbers and whose range is the real numbers. That's the starting point. We can define derivatives for functions whose domains or ranges are vectors of real numbers, or complex numbers, or even more exotic kinds of numbers. But those ideas are all built on the derivatives for real-valued functions. So if you get really good at them, everything else is easy.

    Derivatives are the part of calculus where we study how functions change. They’re blessed with some easy-to-visualize interpretations. Suppose you have a function that describes where something is at a given time. Then its derivative with respect to time is the thing’s velocity. You can build a pretty good intuitive feeling for derivatives that way. I won’t stop you from it.

    A function’s derivative is itself a function. The derivative doesn’t have to be constant, although it’s possible. Sometimes the derivative is positive. This means the original function increases as whatever its variable increases. Sometimes the derivative is negative. This means the original function decreases as its variable increases. If the derivative is a big number this means it’s increasing or decreasing fast. If the derivative is a small number it’s increasing or decreasing slowly. If the derivative is zero then the function isn’t increasing or decreasing, at least not at this particular value of the variable. This might be a local maximum, where the function’s got a larger value than it has anywhere else nearby. It might be a local minimum, where the function’s got a smaller value than it has anywhere else nearby. Or it might just be a little pause in the growth or shrinkage of the function. No telling, at least not just from knowing the derivative is zero.

    Derivatives tell you something about where the function is going. We can, and in practice often do, make approximations to a function that are built on derivatives. Many interesting functions are real pains to calculate. Derivatives let us make polynomials that are as good an approximation as we need and that a computer can do for us instead.

    Derivatives can change rapidly. That’s all right. They’re functions in their own right. Sometimes they change more rapidly and more extremely than the original function did. Sometimes they don’t. Depends what the original function was like.

    Not every function has a derivative. Functions that have derivatives don’t necessarily have them at every point. A function has to be continuous to have a derivative. A function that jumps all over the place doesn’t really have a direction, not that differential calculus will let us suss out. But being continuous isn’t enough. The function also has to be … well, we call it “differentiable”. The word at least tells us what the property is good for, even if it doesn’t say how to tell if a function is differentiable. The function has to not have any corners, points where the direction suddenly and unpredictably changes.

    Otherwise differentiable functions can have some corners, some non-differentiable points. For instance the height of a bouncing ball, over time, looks like a bunch of upside-down U-like shapes that suddenly rebound off the floor and go up again. Those points interrupting the upside-down U’s aren’t differentiable, even though the rest of the curve is.

    It’s possible to make functions that are nothing but corners and that aren’t differentiable anywhere. These are done mostly to win quarrels with 19th century mathematicians about the nature of the real numbers. We use them in Real Analysis to blow students’ minds, and occasionally after that to give a fanciful idea a hard test. In doing real-world physics we usually don’t have to think about them.

    So why do we do them? Because they tell us how functions change. They can tell us where functions momentarily don't change, which are usually the important points. And they let us write equations with differentials in them, known in the trade as "differential equations". They're also known to mathematics majors as "diffy Q's", a name which delights everyone. Diffy Q's let us describe physical systems where there's any kind of feedback. If something interacts with its surroundings, that interaction's probably described by differential equations.

    So how do we do them? We start with a function that we’ll call ‘f’ because we’re not wasting more of our lives figuring out names for functions and we’re feeling smugly confident Turkish authorities aren’t interested in us.

    f’s domain is real numbers, for example, the one we’ll call ‘x’. Its range is also real numbers. f(x) is some number in that range. It’s the one that the function f matches with the domain’s value of x. We read f(x) aloud as “the value of f evaluated at the number x”, if we’re pretending or trying to scare Freshman Calculus classes. Really we say “f of x” or maybe “f at x”.

    There’s a couple ways to write down the derivative of f. First, for example, we say “the derivative of f with respect to x”. By that we mean how does the value of f(x) change if there’s a small change in x. That difference matters if we have a function that depends on more than one variable, if we had an “f(x, y)”. Having several variables for f changes stuff. Mostly it makes the work longer but not harder. We don’t need that here.

    But there’s a couple ways to write this derivative. The best one if it’s a function of one variable is to put a little ‘ mark: f'(x). We pronounce that “f-prime of x”. That’s good enough if there’s only the one variable to worry about. We have other symbols. One we use a lot in doing stuff with differentials looks for all the world like a fraction. We spend a lot of time in differential calculus explaining why it isn’t a fraction, and then in integral calculus we do some shady stuff that treats it like it is. But that’s \frac{df}{dx} . This also appears as \frac{d}{dx} f . If you encounter this do not cross out the d’s from the numerator and denominator. In this context the d’s are not numbers. They’re actually operators, which is what we call a function whose domain is functions. They’re notational quirks is all. Accept them and move on.

    How do we calculate it? In Freshman Calculus we introduce it with this expression involving limits and fractions and delta-symbols (the Greek letter that looks like a triangle). We do that to scare off students who aren’t taking this stuff seriously. Also we use that formal proper definition to prove some simple rules work. Then we use those simple rules. Here’s some of them.

    1. The derivative of something that doesn’t change is 0.
    2. The derivative of x^n, where n is any constant real number — positive, negative, whole, a fraction, rational, irrational, whatever — is n x^{n-1}.
    3. If f and g are both functions and have derivatives, then the derivative of the function f plus g is the derivative of f plus the derivative of g. This probably has some proper name but it’s used so much it kind of turns invisible. I dunno. I’d call it something about linearity because “linearity” is what happens when adding stuff works like adding numbers does.
    4. If f and g are both functions and have derivatives, then the derivative of the function f times g is the derivative of f times the original g plus the original f times the derivative of g. This is called the Product Rule.
    5. If f and g are both functions and have derivatives we might do something called composing them. That is, for a given x we find the value of g(x), and then we put that value in to f. That we write as f(g(x)). For example, "the sine of the square of x". Well, the derivative of that with respect to x is the derivative of f evaluated at g(x) times the derivative of g. That is, f'(g(x)) times g'(x). This is called the Chain Rule and believe it or not this turns up all the time. (There's a quick numerical check of this and the Product Rule just after this list.)
    6. If f is a function and C is some constant number, then the derivative of C times f(x) is just C times the derivative of f(x), that is, C times f'(x). You don’t really need this as a separate rule. If you know the Product Rule and that first one about the derivatives of things that don’t change then this follows. But who wants to go through that many steps for a measly little result like this?
    7. Calculus textbooks say there’s this Quotient Rule for when you divide f by g but you know what? The one time you’re going to need it it’s easier to work out by the Product Rule and the Chain Rule and your life is too short for the Quotient Rule. Srsly.
    8. There’s some functions with special rules for the derivative. You can work them out from the Real And Proper Definition. Or you can work them out from infinitely-long polynomial approximations that somehow make sense in context. But what you really do is just remember the ones anyone actually uses. The derivative of ex is ex and we love it for that. That’s why e is that 2.71828(etc) number and not something else. The derivative of the natural log of x is 1/x. That’s what makes it natural. The derivative of the sine of x is the cosine of x. That’s if x is measured in radians, which it always is in calculus, because it makes that work. The derivative of the cosine of x is minus the sine of x. Again, radians. The derivative of the tangent of x is um the square of the secant of x? I dunno, look that sucker up. The derivative of the arctangent of x is \frac{1}{1 + x^2} and you know what? The calculus book lists this in a table in the inside covers. Just use that. Arctangent. Sheesh. We just made up the “secant” and the “cosecant” to haze Freshman Calculus students anyway. We don’t get a lot of fun. Let us have this. Or we’ll make you hear of the inverse hyperbolic cosecant.

    So. What’s this all mean for central force problems? Well, here’s what the effective potential energy V(r) usually looks like:

    V_{eff}(r) = C r^n + \frac{L^2}{2 m r^2}

    So, first thing. The variable here is ‘r’. The variable name doesn’t matter. It’s just whatever name is convenient and reminds us somehow why we’re talking about this number. We adapt our rules about derivatives with respect to ‘x’ to this new name by striking out ‘x’ wherever it appeared and writing in ‘r’ instead.

    Second thing. C is a constant. So is n. So is L and so is m. 2 is also a constant, which you never think about until a mathematician brings it up. That second term is a little complicated in form. But what it means is "a constant, which happens to be the number \frac{L^2}{2m} , multiplied by r^{-2}".

    So the derivative of Veff, with respect to r, is the derivative of the first term plus the derivative of the second. The derivative of each of those terms is going to be some constant times r to some other power. And that's going to be:

    V'_{eff}(r) = C n r^{n - 1} - 2 \frac{L^2}{2 m r^3}

    And there you can cancel the 2 in the numerator against the 2 in the denominator, making this a little simpler yet: V'_{eff}(r) = C n r^{n - 1} - \frac{L^2}{m r^3} .
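
    If you'd like a sanity check on that differentiation, a few lines of Python will do it. The constants here are arbitrary values picked just for the example.

        C, n, L, m = 2.0, 3.0, 1.5, 0.5   # arbitrary example constants

        V_eff = lambda r: C * r**n + L**2 / (2 * m * r**2)
        V_eff_prime = lambda r: C * n * r**(n - 1) - L**2 / (m * r**3)

        r, h = 1.3, 1e-6
        print((V_eff(r + h) - V_eff(r - h)) / (2 * h))   # finite-difference slope
        print(V_eff_prime(r))                            # the formula above; they should agree closely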

    OK. So that’s what you need to know about differential calculus to understand orbital mechanics. Not everything. Just what we need for the next bit.

     
    • davekingsbury 1:06 pm on Saturday, 8 October, 2016 Permalink | Reply

      Good article. Just finished Morton Cohen’s biography of Lewis Carroll, who was a great populariser of mathematics, logic, etc. Started a shared poem in tribute to him, here is a cheeky plug, hope you don’t mind!

      https://davekingsbury.wordpress.com/2016/10/08/web-of-life/

      Like

      • Joseph Nebus 12:22 am on Tuesday, 11 October, 2016 Permalink | Reply

        Thanks for sharing and I’m quite happy to have your plug here. I know about Carroll’s mathematics-popularization side; his logic puzzles are particularly choice ones even today. (Granting that deductive logic really lends itself to being funny.)

        Oddly I haven’t read a proper biography of Carroll, or most of the other mathematicians I’m interested in. Which is strange since I’m so very interested in the history and the cultural development of mathematics.

        Liked by 1 person

  • Joseph Nebus 6:00 pm on Tuesday, 4 October, 2016 Permalink | Reply
    Tags: calculus, , , , ,   

    How September 2016 Treated My Mathematics Blog 


    I put together another low-key, low-volume month in September. In trade, I got a low readership: my lowest in the past twelve months, according to WordPress, and less than a thousand readers for the first time since May. Well, that’s a lesson to me about something or other.

    Readership Numbers:

    So there were only 922 page views around here, down from August’s 1,002 and July’s 1,057. The number of distinct readers rose, at least, to 575. There had been only 531 in August. But there were 585 in July, which is the sort of way it goes.

    The number of likes rose to 115, which is technically up from August’s 107. It’s well down from July’s 177. There were 20 comments in September, up from August’s 16 yet down from July’s 33. I think this mostly reflects how many fewer posts I’ve been publishing. There were just eleven original posts in August and September, compared to, for example, July’s boom of 17. I am thinking about doing a new A To Z round to close out the year, which if past performance is any indication would bring me all sorts of readers as well as make me spend every day writing, writing, writing and hoping for any kind of mathematics word that starts with ‘y’.

    Popular Posts:

    I’m not surprised that my most popular post for September was a Reading the Comics post. With hindsight I realize it’s almost perfectly engineered for reliable readership. It’s about something light but lets me, at least in principle, bring up the whole spectrum of mathematics. That said I am surprised two of the most popular posts were stepped deep into Freshman Calculus, threatening to be inaccessible to casual readers. But then both of those posts started out when online friends needed help with their calculus work, so maybe it better matches stuff people need to know. The most-read articles around here in September were:

    Listing Countries:

    Country Readers
    United States 808
    India 53
    Canada 46
    United Kingdom 34
    New Zealand 24
    Australia 23
    Germany 18
    Philippines 17
    France 9
    Argentina 8
    Spain 7
    Singapore 6
    Brazil 6
    Kenya 5
    Switzerland 5
    Austria 3
    Denmark 3
    Indonesia 3
    Italy 3
    Netherlands 3
    South Africa 3
    Uruguay 3
    Bulgaria 2
    Croatia 2
    Cyprus 2
    Greece 2
    Israel 2
    Japan 2
    Malaysia 2
    Mexico 2
    Norway 2
    Puerto Rico 2
    Sweden 2
    Turkey 2
    Costa Rica 1
    Czech Republic 1 (*)
    Estonia 1
    European Union 1
    Hong Kong SAR China 1
    Hungary 1
    Mauritius 1
    Poland 1
    Portugal 1
    Romania 1
    Taiwan 1

    Czech Republic was the only single-reader country last month, and no country’s on a two- or more-month single-reader streak. European Union dropped from three page views so I don’t know what they’re looking for that they aren’t finding here.

    Search Term Non-Poetry:

    Nothing all that thrilling among the search terms, although someone's on a James Clerk Maxwell kick. Among things that brought people here in September were:

    • how many grooves on a record
    • james clerk maxwell comics strip
    • james clerk maxwell comics
    • james clerk maxwell comics stript about scientiest
    • james clerk maxwell comics streip photos
    • james clerk maxwell comics script scientist
    • record groove width in micrometers
    • example of comics strip of maxwell

    Definitely have to commission someone to draw a bunch of James Clerk Maxwell comics.

    Counting Readers:

    October starts with the mathematics blog at 41,318 page views from 17,189 recorded distinct visitors. (The first year or so of the blog WordPress didn’t keep track of distinct visitors, though, or at least didn’t tell us about them.)

    WordPress’s “Insights” tab tells me the most popular day for reading stuff here is Sunday, with 18 percent of page views coming then. Since that’s the designated day for Reading the Comics posts that doesn’t surprise me. The most popular hour is 6 pm, which gets 14 percent of readers in. That must be because I’ve set 6 pm Universal Time as the standard moment when new posts should be published.

    WordPress says I start October with 624 total followers, up modestly from September’s 614 base. There’s a button on the upper-right corner to follow this blog on WordPress. Below that is a button to follow this blog by e-mail. And if you’d like you can follow me on Twitter too, where I try to do more than just point out I’ve posted stuff here. But also to not post so often that you wonder if or when I rest.

     
    • davekingsbury 9:52 pm on Wednesday, 5 October, 2016 Permalink | Reply

      I wonder if readership is down generally. My own seems to have slumped a bit …

      Like

      • Joseph Nebus 12:13 am on Tuesday, 11 October, 2016 Permalink | Reply

        I wonder. I ought to poke around other people’s readership reports and see what their figures are like, and whether there’s any correlations. But that’s also a lot of work, by which I mean any work at all. I’m not sure about going to the trouble of actually doing it.

        Liked by 1 person

  • Joseph Nebus 6:00 pm on Wednesday, 28 September, 2016 Permalink | Reply
    Tags: calculus, , , , , , ,   

    Why Stuff Can Orbit, Part 5: Why Physics Doesn’t Work And What To Do About It 


    Less way previously:


    My title’s hyperbole, to the extent it isn’t clickbait. Of course physics works. By “work” I mean “model the physical world in useful ways”. If it didn’t work then we would call it “pure” mathematics instead. Mathematicians would study it for its beauty. Physicists would be left to fend for themselves. “Useful” I’ll say means “gives us something interesting to know”. “Interesting” I’ll say if you want to ask what that means then I think you’re stalling.

    But what I mean is that Newtonian physics, the physics learned in high school, doesn’t work. Well, it works, in that if you set up a problem right and calculate right you get answers that are right. It’s just not efficient, for a lot of interesting problems. Don’t ask me about interesting again. I’ll just say the central-force problems from this series are interesting.

    Newtonian, high school type, physics works fine. It shines when you have only a few things to keep track of. In this central force problem we have one object, a planet-or-something, that moves. And only one force, one that attracts the planet to or repels the planet from the center, the Origin. This is where we’d put the sun, in a planet-and-sun system. So that seems all right as far as things go.

    It’s less good, though, if there’s constraints. If it’s not possible for the particle to move in any old direction, say. That doesn’t turn up here; we can imagine a planet heading in any direction relative to the sun. But it’s also less good if there’s a symmetry in what we’re studying. And in this case there is. The strength of the central force only changes based on how far the planet is from the origin. The direction only changes based on what direction the planet is relative to the origin. It’s a bit daft to bother with x’s and y’s and maybe even z’s when all we care about is the distance from the origin. That’s a number we’ve called ‘r’.

    So this brings us to Lagrangian mechanics. This was developed in the 18th century by Joseph-Louis Lagrange. He’s another of those 18th century mathematicians-and-physicists with his name all over everything. Lagrangian mechanics are really, really good when there’s a couple variables that describe both what we’d like to observe about the system and its energy. That’s exactly what we have with central forces. Give me a central force, one that’s pointing directly toward or away from the origin, and that grows or shrinks as the radius changes. I can give you a potential energy function, V(r), that matches that force. Give me an angular momentum L for the planet to have, and I can give you an effective potential energy function, Veff(r). And that effective potential energy lets us describe how the coordinates change in time.

    The method looks roundabout. It depends on two things. One is the coordinate you’re interested in, in this case, r. The other is how fast that coordinate changes in time. This we have a couple of ways of denoting. When working stuff out on paper that’s often done by putting a little dot above the letter. If you’re typing, dots-above-the-symbol are hard. So we mark it as a prime instead: r’. This works well until the web browser or the word processor assumes we want smart quotes and we already had the r’ in quote marks. At that point all hope of meaning is lost and we return to communicating by beating rocks with sticks. We live in an imperfect world.

    What we get out of this is a setup that tells us how fast r’, how fast the coordinate we’re interested in changes in time, itself changes in time. If the coordinate we’re interested in is the ordinary old position of something, then this describes the rate of change of the velocity. In ordinary English we call that the acceleration. What makes this worthwhile is that the coordinate doesn’t have to be the position. It also doesn’t have to be all the information we need to describe the position. For the central force problem r here is just how far the planet is from the center. That tells us something about its position, but not everything. We don’t care about anything except how far the planet is from the center, not yet. So it’s fine we have a setup that doesn’t tell us about the stuff we don’t care about.

    How fast r’ changes in time will be proportional to how fast the effective potential energy, Veff(r), changes with its coordinate. I so want to write “changes with position”, since these coordinates are usually the position. But they can be proxies for the position, or things only loosely related to the position. For an example that isn’t a central force, think about a spinning top. It spins, it wobbles, it might even dance across the table because don’t they all do that? The coordinates that most sensibly describe how it moves are about its rotation, though. What axes is it rotating around? How do those change in time? Those don’t have anything particular to do with where the top is. That’s all right. The mathematics works just fine.

    A circular orbit is one where the radius doesn’t change in time. (I’ll look at non-circular orbits later on.) That is, the radius is not increasing and is not decreasing. If it isn’t getting bigger and it isn’t getting smaller, then it’s got to be staying the same. Not all higher mathematics is tricky. The radius of the orbit is the thing I’ve been calling r all this time. So this means that r’, how fast r is changing with time, has to be zero. Now a slightly tricky part.

    How fast is r’, the rate at which r changes, changing? Well, r’ never changes. It’s always the same value. Anytime something is always the same value the rate of its change is zero. This sounds tricky. The tricky part is that it isn’t tricky. It’s coincidental that r’ is zero and the rate of change of r’ is zero, though. If r’ were any fixed, never-changing number, then the rate of change of r’ would be zero. It happens that we’re interested in times when r’ is zero.

    So we’ll find circular orbits where the change in the effective potential energy, as r changes, is zero. There’s an easy-to-understand intuitive idea of where to find these points. Look at a plot of Veff and imagine this is a smooth track or the cross-section of a bowl or the landscaping of a hill. Imagine dropping a ball or a marble or a bearing or something small enough to roll in it. Where does it roll to a stop? That’s where the change is zero.

    It’s too much bother to make a bowl or landscape a hill or whatnot for every problem we’re interested in. We might do it anyway. Mathematicians used to, to study problems that were too complicated to do by useful estimates. These were “analog computers”. They were big in the days before digital computers made it no big deal to simulate even complicated systems. We still need “analog computers” or models sometimes. That’s usually for problems that involve chaotic stuff like turbulent fluids. We call this stuff “wind tunnels” and the like. It’s all a matter of solving equations by building stuff.

    We’re not working with problems that complicated. There isn’t the sort of chaos lurking in this problem that drives us to real-world stuff. We can find these equilibriums by working just with symbols instead.

     
  • Joseph Nebus 6:00 pm on Sunday, 25 September, 2016 Permalink | Reply
    Tags: , calculus, , ,   

    Reading the Comics, September 24, 2016: Infinities Happen Edition 


    I admit it’s a weak theme. But two of the comics this week give me reason to talk about infinitely large things and how the fact of being infinitely large affects the probability of something happening. That’s enough for a mid-September week of comics.

    Kieran Meehan’s Pros and Cons for the 18th of September is a lottery problem. There’s a fun bit of mathematical philosophy behind it. Supposing that a lottery runs long enough without changing its rules, and that it does draw its numbers randomly, it does seem to follow that any valid set of numbers will come up eventually. At least, the probability is 1 that the pre-selected set of numbers will come up if the lottery runs long enough. But that doesn’t mean it’s assured. There’s not any law, physical or logical, compelling every set of numbers to come up. But that is exactly akin to tossing a coin fairly infinity many times and having it come up tails every single time. There’s no reason that can’t happen, but it can’t happen.

    'It's true, Dr Peel. I'm a bit of a psychic.' 'Would you share the winning lottery numbers with me?' '1, 10, 17, 39, 43, and 47'. 'Those are the winning lottery numbers?' 'Yes!' 'For this Tuesday?' 'Ah! That's where it gets a bit fuzzy.'

    Kieran Meehan’s Pros and Cons for the 18th of September, 2016. I can’t say whether any of these are supposed to be the PowerBall number. (The comic strip’s title is a revision of its original, which more precisely described its gimmick but was harder to remember: A Lawyer, A Doctor, and a Cop.)

    Leigh Rubin’s Rubes for the 19th name-drops chaos theory. It’s wordplay, as of course it is, since the mathematical chaos isn’t the confusion-and-panicky-disorder of the colloquial term. Mathematical chaos is about the bizarre idea that a system can follow exactly perfectly known rules, and yet still be impossible to predict. Henri Poincaré brought this disturbing possibility to mathematicians’ attention in the 1890s, in studying the question of whether the solar system is stable. But it lay mostly fallow until the 1960s when computers made it easy to work this out numerically and really see chaos unfold. The mathematician type in the drawing evokes Einstein without being too close to him, to my eye.

    Allison Barrows’s PreTeena rerun of the 20th shows some motivated calculations. It’s always fun to see people getting excited over what a little multiplication can do. Multiplying a little change by a lot of chances is one of the ways to understanding integral calculus, and there’s much that’s thrilling in that. But cutting four hours a night of sleep is not a little thing and I wouldn’t advise it for anyone.

    Jason Poland’s Robbie and Bobby for the 20th riffs on Jorge Luis Borges’s Library of Babel. It’s a great image, the idea of the library containing every book possible. And it’s good mathematics also; it’s a good way to probe one’s understanding of infinity and of probability. Probably logic, also. After all, grant that the index to the Library of Babel is a book, and therefore in the library somehow. How do you know you’ve found the index that hasn’t got any errors in it?

    Ernie Bushmiller’s Nancy Classics for the 21st originally ran the 21st of September, 1949. It’s another example of arithmetic as a proof of intelligence. Routine example, although it’s crafted with the usual Bushmiller precision. Even the close-up, peering-into-your-soul image if Professor Stroodle in the second panel serves the joke; without it the stress on his wrinkled brow would be diffused. I can’t fault anyone not caring for the joke; it’s not much of one. But wow is the comic strip optimized to deliver it.

    Thom Bluemel’s Birdbrains for the 23rd is also a mathematics-as-proof-of-intelligence strip, although this one name-drops calculus. It’s also a strip that probably would have played better had it come out before Blackfish got people asking unhappy questions about Sea World and other aquariums keeping large, deep-ocean animals. I would’ve thought Comic Strip Master Command to have sent an advisory out on the topic.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 23rd is, among other things, a guide for explaining the difference between speed and velocity. Speed’s a simple number, a scalar in the parlance. Velocity is (most often) a two- or three-dimensional vector, a speed in some particular direction. This has implications for understanding how things move, such as pedestrians.

     