Wronski’s Formula For Pi: How Close We Came


Previously:

  • Wronski’s Formula For Pi: Two Weird Tricks For Limits That Mathematicians Keep Using

  • Józef Maria Hoëne-Wronski had an idea for a new, universal, culturally-independent definition of π. It was this formula, which nobody went along with because they had looked at it:

    \pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} -  \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}

    I made some guesses about what he would want this to mean. And how we might put that in terms of modern, conventional mathematics. I describe those in the above links. In terms of limits of functions, I got this:

    \displaystyle  \lim_{x \to \infty} f(x) = \lim_{x \to \infty} -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

    The trouble is that limit took more work than I wanted to do to evaluate. If you try evaluating that ‘f(x)’ at ∞, you get an expression that looks like zero times ∞. This begs for the use of L’Hôpital’s Rule, which tells you how to find the limit for something that looks like zero divided by zero, or like ∞ divided by ∞. Do a little rewriting — replacing that first ‘x’ with ‘\frac{1}{1 / x}’ — and this ‘f(x)’ behaves like L’Hôpital’s Rule needs.

    The trouble is, that’s a pain to evaluate. L’Hôpital’s Rule works on functions that look like one function divided by another function. It does this by calculating the derivative of the numerator function divided by the derivative of the denominator function. And I decided that was more work than I wanted to do.

    Where trouble comes up is all those parts where \frac{1}{x} turns up. The derivatives of functions with a lot of \frac{1}{x} terms in them get more complicated than the original functions were. Is there a way to get rid of some or all of those?

    And there is. Do a change of variables. Let me summon the variable ‘y’, whose value is exactly \frac{1}{x} . And then I’ll define a new function, ‘g(y)’, whose value is whatever ‘f’ would be at \frac{1}{y} . That is, and this is just a little bit of algebra:

    g(y) = -2 \cdot \frac{1}{y} \cdot 2^{\frac{1}{2} y } \cdot \sin\left(\frac{\pi}{4} y\right)

    The limit of ‘f(x)’ for ‘x’ at ∞ should be the same number as the limit of ‘g(y)’ for ‘y’ at … you’d really like it to be zero. If ‘x’ is incredibly huge, then \frac{1}{x} has to be incredibly small. But we can’t just swap the limit of ‘x’ at ∞ for the limit of ‘y’ at 0. The limit of a function at a point reflects the value of the function at a neighborhood around that point. If the point’s 0, this includes positive and negative numbers. But looking for the limit at ∞ gets at only positive numbers. You see the difference?

    … For this particular problem it doesn’t matter. But it might. Mathematicians handle this by taking a “one-sided limit”, or a “directional limit”. The normal limit at 0 of ‘g(y)’ is based on what ‘g(y)’ looks like in a neighborhood of 0, positive and negative numbers. In the one-sided limit, we just look at a neighborhood of 0 that’s all values greater than 0, or less than 0. In this case, I want the neighborhood that’s all values greater than 0. And we write that by adding a little + in superscript to the limit. For the other side, the neighborhood less than 0, we add a little – in superscript. So I want to evaluate:

    \displaystyle  \lim_{y \to 0^+} g(y) = \lim_{y \to 0^+}  -2\cdot\frac{2^{\frac{1}{2}y} \cdot \sin\left(\frac{\pi}{4} y\right)}{y}

    Limits and L’Hôpital’s Rule and stuff work for one-sided limits the way they do for regular limits. So there’s that mercy. The first attempt at this limit, seeing what ‘g(y)’ is if ‘y’ happens to be 0, gives -2 \cdot \frac{1 \cdot 0}{0} . A zero divided by a zero is promising. That’s not defined, no, but it’s exactly the format that L’Hôpital’s Rule likes. The numerator is:

    -2 \cdot 2^{\frac{1}{2}y} \sin\left(\frac{\pi}{4} y\right)

    And the denominator is:

    y

    The first derivative of the denominator is blessedly easy: the derivative of y, with respect to y, is 1. The derivative of the numerator is a little harder. It demands the use of the Product Rule and the Chain Rule, just as last time. But these chains are easier.

    The first derivative of the numerator is going to be:

    -2 \cdot 2^{\frac{1}{2}y} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} y\right) + -2 \cdot 2^{\frac{1}{2}y} \cdot \cos\left(\frac{\pi}{4} y\right) \cdot \frac{\pi}{4}

    Yeah, this is the simpler version of the thing I was trying to figure out last time. Because this is what’s left if I write the derivative of the numerator over the derivative of the denominator:

    \displaystyle  \lim_{y \to 0^+} \frac{ -2 \cdot 2^{\frac{1}{2}y} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} y\right) + -2 \cdot 2^{\frac{1}{2}y} \cdot \cos\left(\frac{\pi}{4} y\right) \cdot \frac{\pi}{4} }{1}

    And now this is easy. Promise. There’s no expressions of ‘y’ divided by other expressions of ‘y’ or anything else tricky like that. There’s just a bunch of ordinary functions, all of them defined for when ‘y’ is zero. If this limit exists, it’s got to be equal to:

    \displaystyle  -2 \cdot 2^{\frac{1}{2} 0} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} \cdot 0\right) + -2 \cdot 2^{\frac{1}{2} 0 } \cdot \cos\left(\frac{\pi}{4} \cdot 0\right) \cdot \frac{\pi}{4}

    \frac{\pi}{4} \cdot 0 is 0. And the sine of 0 is 0. The cosine of 0 is 1. So all this gets to be a lot simpler, really fast.

    \displaystyle  -2 \cdot 2^{0} \cdot \log(2) \cdot \frac{1}{2} \cdot 0 + -2 \cdot 2^{ 0 } \cdot 1 \cdot \frac{\pi}{4}

    And 2^0 is equal to 1. So the part to the left of the + sign there is all zero. What remains is:

    \displaystyle   0 + -2 \cdot \frac{\pi}{4}

    And so, finally, we have it. Wronski’s formula, as best I make it out, is a function whose value is …

    -\frac{\pi}{2}

    … So, what Wronski had been looking for, originally, was π. This is … oh, so very close to right. I mean, there’s π right there, it’s just multiplied by an unwanted -\frac{1}{2} . The question is, where’s the mistake? Was Wronski wrong to start with? Did I parse him wrongly? Is it possible that the book I copied Wronski’s formula from made a mistake?

    Could be any of them. I’d particularly suspect I parsed him wrongly. I returned the library book I had got the original claim from, and I can’t find it again before this is set to publish. But I should check whether Wronski was thinking to find π, the ratio of the circumference to the diameter of a circle. Or might he have looked to find the ratio of the circumference to the radius of a circle? Either is an interesting number worth finding. We’ve settled on the circumference-over-diameter as valuable, likely for practical reasons. It’s much easier to measure the diameter than the radius of a thing. (Yes, I have read the Tau Manifesto. No, I am not impressed by it.) But if you know 2π, then you know π, or vice-versa.

    The next question: yeah, but I turned up -½π. What am I talking about 2π for? And the answer there is, I’m not the first person to try working out Wronski’s stuff. You can try putting the expression, as best you parse it, into a tool like Mathematica and see what makes sense. Or you can read, for example, Quora commenters giving answers with way less exposition than I do. And I’m convinced: somewhere along the line I messed up. Not in an important way, but, essentially, doing something equivalent to dividing by -2 when I should have multiplied by it.
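    Here is the sort of check I mean, as a minimal sketch in Python rather than Mathematica (the function name wronski is just my label; the parsing is the one worked out in the posts below). Evaluating the original complex-valued expression at ever-larger x, the values settle down near 2π:

        # A quick numerical try at Wronski's expression, as parsed in these posts
        # (a sketch in plain Python; the tiny imaginary parts in the output are
        # floating-point round-off).
        import math

        def wronski(x):
            return -4j * x * ((1 + 1j)**(1 / x) - (1 - 1j)**(1 / x))

        for x in (10, 1000, 100000):
            print(x, wronski(x))
        print("2*pi =", 2 * math.pi)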

    I’ve spotted my mistake. I figure to come back around to explaining where it is and how I made it.


    Wronski’s Formula For Pi: Two Weird Tricks For Limits That Mathematicians Keep Using


    Previously:


    So now a bit more on Józef Maria Hoëne-Wronski’s attempted definition of π. I had got it rewritten to this form:

    \displaystyle  \lim_{x \to \infty} f(x) = \lim_{x \to \infty} -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

    And I’d tried the first thing mathematicians do when trying to evaluate the limit of a function at a point. That is, take the value of that point and put it in whatever the formula is. If that formula evaluates to something meaningful, then that value is the limit. That attempt gave this:

    -2 \cdot \infty \cdot 1 \cdot 0

    Because the limit of ‘x’, for ‘x’ at ∞, is infinitely large. The limit of ‘2^{\frac{1}{2}\cdot\frac{1}{x}}’ for ‘x’ at ∞ is 1. The limit of ‘\sin(\frac{\pi}{4}\cdot\frac{1}{x})’ for ‘x’ at ∞ is 0. We can take limits that are 0, or limits that are some finite number, or limits that are infinitely large. But multiplying a zero times an infinity is dangerous. Could be anything.

    Mathematicians have a tool. We know it as L’Hôpital’s Rule. It’s named for the French mathematician Guillaume de l’Hôpital, who discovered it in the works of his tutor, Johann Bernoulli. (They had a contract giving l’Hôpital publication rights. If Wikipedia’s right the preface of the book credited Bernoulli, although it doesn’t appear to be specifically for this. The full story is more complicated and ambiguous. The previous sentence may be said about most things.)

    So here’s the first trick. Suppose you’re finding the limit of something that you can write as the quotient of one function divided by another. So, something that looks like this:

    \displaystyle  \lim_{x \to a} \frac{h(x)}{g(x)}

    (Normally, this gets presented as ‘f(x)’ divided by ‘g(x)’. But I’m already using ‘f(x)’ for another function and I don’t want to muddle what that means.)

    Suppose it turns out that at ‘a’, both ‘h(x)’ and ‘g(x)’ are zero, or both ‘h(x)’ and ‘g(x)’ are ∞. Zero divided by zero, or ∞ divided by ∞, looks like danger. It’s not necessarily so, though. If this limit exists, then we can find it by taking the first derivatives of ‘h’ and ‘g’, and evaluating:

    \displaystyle  \lim_{x \to a} \frac{h'(x)}{g'(x)}

    That ‘ mark is a common shorthand for “the first derivative of this function, with respect to the only variable we have around here”.

    This doesn’t look like it should help matters. Often it does, though. There’s an excellent chance that ‘h'(x)’ and ‘g'(x)’ aren’t both zero, or both ∞, at ‘a’. And once that’s so, we’ve got a meaningful limit. This doesn’t always work. Sometimes we have to use this l’Hôpital’s Rule trick a second time, or a third or so on. But it works so very often for the kinds of problems we like to do. Reaches the point that if it doesn’t work, we have to suspect we’re calculating the wrong thing.
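    If you would like to see the rule in action on something small first, here is a toy sketch of my own (assuming you have sympy installed; sin(x)/x is the classic zero-divided-by-zero case and has nothing to do with Wronski yet):

        # L'Hopital's Rule on a toy 0/0 case: sin(x)/x at x = 0. The derivatives
        # are cos(x) and 1, and cos(0)/1 = 1, which matches the limit.
        import sympy as sp

        x = sp.symbols('x')
        h, g = sp.sin(x), x
        print(sp.limit(h / g, x, 0))                        # 1
        print((sp.diff(h, x) / sp.diff(g, x)).subs(x, 0))   # cos(0)/1 = 1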

    But wait, you protest, reasonably. This is fine for problems where the limit looks like 0 divided by 0, or ∞ divided by ∞. What Wronski’s formula got me was 0 times 1 times ∞. And I won’t lie: I’m a little unsettled by having that 1 there. I feel like multiplying by 1 shouldn’t be a problem, but I have doubts.

    That zero times ∞ thing, though? That’s easy. Here’s the second trick. Let me put it this way: isn’t ‘x’ really the same thing as \frac{1}{ 1 / x } ?

    I expect your answer is to slam your hand down on the table and glare at my writing with contempt. So be it. I told you it was a trick.

    And it’s a perfectly good one. And it’s perfectly legitimate, too. \frac{1}{x} is a meaningful number if ‘x’ is any finite number other than zero. So is \frac{1}{ 1 / x } . Mathematicians accept a definition of limit that doesn’t really depend on the value of your expression at a point. So that \frac{1}{x} wouldn’t be meaningful for ‘x’ at zero doesn’t mean we can’t evaluate its limit for ‘x’ at zero. And just because we might not be sure what \frac{1}{x} would mean for infinitely large ‘x’ doesn’t mean we can’t evaluate its limit for ‘x’ at ∞.

    I see you, person who figures you’ve caught me. The first thing I tried was putting ∞ in for ‘x’, all ready to declare that the result was the limit of ‘f(x)’. I know my caveats, though. Plugging the value you want the limit at into the function whose limit you’re evaluating is a shortcut. If you get something meaningful, then that’s the same answer you would get finding the limit properly. Which is done by looking at the neighborhood around but not at that point. So that’s why this reciprocal-of-the-reciprocal trick works.

    So back to my function, which looks like this:

    \displaystyle  f(x) = -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

    Do I want to replace ‘x’ with \frac{1}{1 / x} , or do I want to replace \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right) with \frac{1}{1 / \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)} ? I was going to say something about how many times in my life I’ve been glad to take the reciprocal of the sine of an expression of x. But just writing the symbols out like that makes the case better than being witty would.

    So here is a new, L’Hôpital’s Rule-friendly, version of my version of Wronski’s formula:

    \displaystyle f(x) = -2 \frac{2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)}{\frac{1}{x}}

    I put that -2 out in front because it’s not really important. The limit of a constant number times some function is the same as that constant number times the limit of that function. We can put that off to the side, work on other stuff, and hope that we remember to bring it back in later. I manage to remember it about four-fifths of the time.

    So these are the numerator and denominator functions I was calling ‘h(x)’ and ‘g(x)’ before:

    h(x) = 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

    g(x) = \frac{1}{x}

    The limit of both of these at ∞ is 0, just as we might hope. So we take the first derivatives. That for ‘g(x)’ is easy. Anyone who’s reached week three in Intro Calculus can do it. This may only be because she’s gotten bored and leafed through the formulas on the inside front cover of the textbook. But she can do it. It’s:

    g'(x) = -\frac{1}{x^2}

    The derivative for ‘h(x)’ is a little more involved. ‘h(x)’ we can write as the product of two expressions, that 2^{\frac{1}{2}\cdot \frac{1}{x}} and that \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right) . And each of those expressions contains within themselves another expression, that \frac{1}{x} . So this is going to require the Product Rule, of two expressions that each require the Chain Rule.

    This is as far as I got with that before slamming my hand down on the table and glaring at the problem with disgust:

    h'(x) = 2^{\frac{1}{2}\frac{1}{x}} \cdot \log(2) \cdot \frac{1}{2} \cdot (-1) \cdot \frac{1}{x^2} + 2^{\frac{1}{2}\frac{1}{x}} \cdot \cos( arg ) bleah

    Yeah I’m not finishing that. Too much work. I’m going to reluctantly try thinking instead.

    (If you want to do that work — actually, it isn’t much more past there, and if you followed that first half you’re going to be fine. And you’ll see an echo of it in what I do next time.)
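    (Or let the computer grind through it. Here is a sketch, assuming sympy is installed, with h and g the numerator and denominator from above; it finishes the derivative I abandoned and then takes the limit of the rewritten function while it is at it.)

        # Finishing the abandoned derivative, and taking the limit of
        # f(x) = -2 h(x) / g(x), with sympy.
        import sympy as sp

        x = sp.symbols('x', positive=True)
        h = 2**(1 / (2 * x)) * sp.sin(sp.pi / (4 * x))
        g = 1 / x
        print(sp.simplify(sp.diff(h, x)))      # the h'(x) I gave up on writing out
        print(sp.limit(-2 * h / g, x, sp.oo))  # -pi/2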

    Wronski’s Formula For Pi: A First Limit


    Previously:

    When I last looked at Józef Maria Hoëne-Wronski’s attempted definition of π I had gotten it to this. Take the function:

    f(x) = -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

    And find its limit when ‘x’ is ∞. Formally, you want to do this by proving there’s some number, let’s say ‘L’. And ‘L’ has the property that you can pick any margin-of-error number ε that’s bigger than zero. And whatever that ε is, there’s some number ‘N’ so that whenever ‘x’ is bigger than ‘N’, ‘f(x)’ is larger than ‘L – ε’ and also smaller than ‘L + ε’. This can be a lot of mucking about with expressions to prove.

    Fortunately we have shortcuts. There’s work we can do that gets us ‘L’, and we can rely on other proofs that show that this must be the limit of ‘f(x)’ at some value ‘a’. I use ‘a’ because that doesn’t commit me to talking about ∞ or any other particular value. The first approach is to just evaluate ‘f(a)’. If you get something meaningful, great! We’re done. That’s the limit of ‘f(x)’ at ‘a’. This approach is called “substitution” — you’re substituting ‘a’ for ‘x’ in the expression of ‘f(x)’ — and it’s great. Except that if your problem’s interesting then substitution won’t work. Still, maybe Wronski’s formula turns out to be lucky. Fit in ∞ where ‘x’ appears and we get:

    f(\infty) = -2 \infty 2^{\frac{1}{2}\cdot \frac{1}{\infty}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{\infty}\right)

    So … all right. Not quite there yet. But we can get there. For example, \frac{1}{\infty} has to be — well. It’s what you would expect if you were a kid and not worried about rigor: 0. We can make it rigorous if you like. (It goes like this: Pick any ε larger than 0. Then whenever ‘x’ is larger than \frac{1}{\epsilon} then \frac{1}{x} is less than ε. So the limit of \frac{1}{x} at ∞ has to be 0.) So let’s run with this: replace all those \frac{1}{\infty} expressions with 0. Then we’ve got:

    f(\infty) = -2 \infty 2^{0} \sin\left(0\right)

    The sine of 0 is 0. 2^0 is 1. So substitution tells us the limit is -2 times ∞ times 1 times 0. That there’s an ∞ in there isn’t a problem. A limit can be infinitely large. Think of the limit of ‘x^2’ at ∞. An infinitely large thing times an infinitely large thing is fine. The limit of ‘x e^x’ at ∞ is infinitely large. A zero times a zero is fine; that’s zero again. But having an ∞ times a 0? That’s trouble. ∞ times something should be huge; anything times zero should be 0; which term wins?
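    Here is a tiny sketch of that which-term-wins problem, with toy examples of my own (assuming sympy): three limits that each look like ∞ times 0, with three different answers.

        # Three infinity-times-zero forms, three different limits.
        import sympy as sp

        x = sp.symbols('x', positive=True)
        print(sp.limit(x * sp.sin(1 / x), x, sp.oo))      # 1
        print(sp.limit(x**2 * sp.sin(1 / x), x, sp.oo))   # oo
        print(sp.limit(x * sp.sin(1 / x**2), x, sp.oo))   # 0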

    So we have to fall back on alternate plans. Fortunately there’s a tool we have for limits when we’d otherwise have to face an infinitely large thing times a zero.

    I hope to write about this next time. I apologize for not getting through it today but time wouldn’t let me.

    As I Try To Make Wronski’s Formula For Pi Into Something I Like


    Previously:

    I remain fascinated with Józef Maria Hoëne-Wronski’s attempted definition of π. It had started out like this:

    \pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} -  \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}

    And I’d translated that into something that modern mathematicians would accept without flinching. That is to evaluate the limit of a function that looks like this:

    \displaystyle \lim_{x \to \infty} f(x)

    where

    f(x) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{x}} -  \left(1 - \imath\right)^{\frac{1}{x}} \right\}

    So. I don’t want to deal with that f(x) as it’s written. I can make it better. One thing that bothers me is seeing the complex number 1 + \imath raised to a power. I’d like to work with something simpler than that. And I can’t see that number without also noticing that I’m subtracting from it 1 - \imath raised to the same power. 1 + \imath and 1 - \imath are a “conjugate pair”. It’s usually nice to see those. It often hints at ways to make your expression simpler. That’s one of those patterns you pick up from doing a lot of problems as a mathematics major, and that then look like magic to the lay audience.

    Here’s the first way I figure to make my life simpler. It’s in rewriting that 1 + \imath and 1 - \imath stuff so it’s simpler. It’ll be simpler by using exponentials. Shut up, it will too. I get there through Gauss, Descartes, and Euler.

    At least I think it was Gauss who pointed out how you can match complex-valued numbers with points on the two-dimensional plane. On a sheet of graph paper, if you like. The number 1 + \imath matches to the point with x-coordinate 1, y-coordinate 1. The number 1 - \imath matches to the point with x-coordinate 1, y-coordinate -1. Yes, yes, this doesn’t sound like much of an insight Gauss had, but his work goes on. I’m leaving it off here because that’s all that I need for right now.

    So these two numbers that offended me I can think of as points. They have Cartesian coordinates (1, 1) and (1, -1). But there’s never only one coordinate system for something. There may be only one that’s good for the problem you’re doing. I mean that makes the problem easier to study. But there are always infinitely many choices. For points on a flat surface like a piece of paper, and where the points don’t represent any particular physics problem, there’s two good choices. One is the Cartesian coordinates. In it you refer to points by an origin, an x-axis, and a y-axis. How far is the point from the origin in a direction parallel to the x-axis? (And in which direction? This gives us a positive or a negative number) How far is the point from the origin in a direction parallel to the y-axis? (And in which direction? Same positive or negative thing.)

    The other good choice is polar coordinates. For that we need an origin and a positive x-axis. We refer to points by how far they are from the origin, heedless of direction. And then to get direction, what angle the line segment connecting the point with the origin makes with the positive x-axis. The first of these numbers, the distance, we normally label ‘r’ unless there’s compelling reason otherwise. The other we label ‘θ’. ‘r’ is always going to be a positive number or, possibly, zero. ‘θ’ might be any number, positive or negative. By convention, we measure angles so that positive numbers are counterclockwise from the x-axis. I don’t know why. I guess it seemed less weird for, say, the point with Cartesian coordinates (0, 1) to have a positive angle rather than a negative angle. That angle would be \frac{\pi}{2} , because mathematicians like radians more than degrees. They make other work easier.

    So. The point 1 + \imath corresponds to the polar coordinates r = \sqrt{2} and \theta = \frac{\pi}{4} . The point 1 - \imath corresponds to the polar coordinates r = \sqrt{2} and \theta = -\frac{\pi}{4} . Yes, the θ coordinates being negative one times each other is common in conjugate pairs. Also, if you have doubts about my use of the word “the” before “polar coordinates”, well-spotted. If you’re not sure about that thing where ‘r’ is not negative, again, well-spotted. I intend to come back to that.
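    If you would rather not draw the picture, Python's standard cmath module will check those coordinates for you. A quick sketch:

        # The polar coordinates of 1+i and 1-i: both have r = sqrt(2), and the
        # angles are pi/4 and -pi/4.
        import cmath, math

        for z in (1 + 1j, 1 - 1j):
            r, theta = cmath.polar(z)
            print(z, r, theta)
        print("sqrt(2) =", math.sqrt(2), "  pi/4 =", math.pi / 4)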

    With the polar coordinates ‘r’ and ‘θ’ to describe a point I can go back to complex numbers. I can match the point to the complex number with the value given by r e^{\imath\theta} , where ‘e’ is that old 2.71828something number. Superficially, this looks like a big dumb waste of time. I had some problem with imaginary numbers raised to powers, so now, I’m rewriting things with a number raised to imaginary powers. Here’s why it isn’t dumb.

    It’s easy to raise a number written like this to a power. r e^{\imath\theta} raised to the n-th power is going to be equal to r^n e^{\imath\theta \cdot n} . (Because (a \cdot b)^n = a^n \cdot b^n and we’re going to go ahead and assume this stays true if ‘b’ is a complex-valued number. It does, but you’re right to ask how we know that.) And this turns into raising a real-valued number to a power, which we know how to do. And it involves multiplying a number by that power, which is also easy.

    And we can get back to something that looks like 1 + \imath too. That is, something that’s a real number plus \imath times some real number. This is through one of the many Euler’s Formulas. The one that’s relevant here is that e^{\imath \phi} = \cos(\phi) + \imath \sin(\phi) for any real number ‘φ’. So, that’s true also for ‘θ’ times ‘n’. Or, looking to where everybody knows we’re going, also true for ‘θ’ divided by ‘x’.

    OK, on to the people so anxious about all this. I talked about the angle made between the line segment that connects a point and the origin and the positive x-axis. “The” angle. “The”. If that wasn’t enough explanation of the problem, mention how your thinking’s done a 360 degree turn and you see it different now. In an empty room, if you happen to be in one. Your pedantic know-it-all friend is explaining it now. There’s an infinite number of angles that correspond to any given direction. They’re all separated by 360 degrees or, to a mathematician, 2π.

    And more. What’s the difference between going out five units of distance in the direction of angle 0 and going out minus-five units of distance in the direction of angle -π? That is, between walking forward five paces while facing east and walking backward five paces while facing west? Yeah. So if we let ‘r’ be negative we’ve got twice as many infinitely many sets of coordinates for each point.

    This complicates raising numbers to powers. θ times n might match with some point that’s very different from θ-plus-2-π times n. There might be a whole ring of powers. This seems … hard to work with, at least. But it’s, at heart, the same problem you get thinking about the square root of 4 and concluding it’s both plus 2 and minus 2. If you want “the” square root, you’d like it to be a single number. At least if you want to calculate anything from it. You have to pick out a preferred θ from the family of possible candidates.

    For me, that’s whatever set of coordinates has ‘r’ that’s positive (or zero), and that has ‘θ’ between -π and π. Or between 0 and 2π. It could be any strip of numbers that’s 2π wide. Pick what makes sense for the problem you’re doing. It’s going to be the strip from -π to π. Perhaps the strip from 0 to 2π.

    What this all amounts to is that I can turn this:

    f(x) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{x}} -  \left(1 - \imath\right)^{\frac{1}{x}} \right\}

    into this:

    f(x) = -4 \imath x \left\{ \left(\sqrt{2} e^{\imath \frac{\pi}{4}}\right)^{\frac{1}{x}} -  \left(\sqrt{2} e^{-\imath \frac{\pi}{4}} \right)^{\frac{1}{x}} \right\}

    without changing its meaning any. Raising a number to the one-over-x power looks different from raising it to the n power. But the work isn’t different. The function I wrote out up there is the same as this function:

    f(x) = -4 \imath x \left\{ \sqrt{2}^{\frac{1}{x}} e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} - \sqrt{2}^{\frac{1}{x}} e^{-\imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}

    I can’t look at that number, \sqrt{2}^{\frac{1}{x}} , sitting there, multiplied by two things added together, and leave that. (OK, subtracted, but same thing.) I want to something something distributive law something and that gets us here:

    f(x) = -4 \imath x \sqrt{2}^{\frac{1}{x}} \left\{ e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} -  e^{- \imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}

    Also, yeah, that square root of two raised to a power looks weird. I can turn that square root of two into “two to the one-half power”. That gets to this rewrite:

    f(x) = -4 \imath x 2^{\frac{1}{2}\cdot \frac{1}{x}} \left\{ e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} -  e^{- \imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}

    And then. Those parentheses. e raised to an imaginary number minus e raised to minus-one-times that same imaginary number. This is another one of those magic tricks that mathematicians know because they see it all the time. Part of what we know from Euler’s Formula, the one I waved at back when I was talking about coordinates, is this:

    \sin\left(\phi\right) = \frac{e^{\imath \phi} - e^{-\imath \phi}}{2\imath }

    That’s good for any real-valued φ. For example, it’s good for the number \frac{\pi}{4}\cdot\frac{1}{x} . And that means we can rewrite that function into something that, finally, actually looks a little bit simpler. It looks like this:

    f(x) = -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

    And that’s the function whose limit I want to take at ∞. No, really.
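    If that sine-from-exponentials identity looks like sleight of hand, it is easy to spot-check numerically. A sketch with Python's standard cmath module, for a few arbitrary real values of φ:

        # Euler's sine identity: sin(phi) versus (e^(i phi) - e^(-i phi)) / (2i).
        # The right-hand side prints as a complex number whose imaginary part is
        # zero, or round-off-tiny.
        import cmath, math

        for phi in (0.1, 0.75, math.pi / 4):
            lhs = math.sin(phi)
            rhs = (cmath.exp(1j * phi) - cmath.exp(-1j * phi)) / 2j
            print(phi, lhs, rhs)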

    Deciphering Wronski, Non-Standardly


    I ran out of time to do my next bit on Wronski’s attempted definition of π. Next week, if all goes well. But I have something to share anyway. The author of the Boxing Pythagoras blog (which often takes up William Lane Craig’s arguments about infinity) was intrigued by the starting point. And as a fan of studying how people understand infinity and infinitesimals (and how they don’t), this two-century-old example of mixing the numerous and the tiny set his course.

    So here’s his essay, trying to work out Wronski’s beautiful weird formula from a non-standard analysis perspective. Non-standard analysis is a field that’s grown in the last fifty years. It’s probably fairly close in spirit to what (I think) Wronski might have been getting at, too. Non-standard analysis works with ideas that seem to match many people’s intuitive feelings about infinitesimals and infinities.

    For example, can we speak of a number that’s larger than zero, but smaller than the reciprocal of any positive integer? It’s hard to imagine such a thing. But what if we can show that if we suppose such a number exists, then we can do this logically sound work with it? If you want to say that isn’t enough to show a number exists, then I have to ask how you know imaginary numbers or negative numbers exist.

    Standard analysis, you probably guessed, doesn’t do that. It developed over the 19th century when the logical problems of these kinds of numbers seemed unsolvable. Mostly that’s done by limits, showing that a thing must be true whenever some quantity is small enough, or large enough. It seems safe to trust that the infinitesimally small is small enough, and the infinitely large is large enough. And it’s not like mathematicians back then were bad at their job. Mathematicians learned a lot of things about how infinitesimals and infinities work over the late 19th and early 20th century. It makes modern work possible.

    Anyway, Boxing Pythagoras goes over what a non-standard analysis treatment of the formula suggests. I think it’s accessible even if you haven’t had much non-standard analysis in your background. At least it worked for me and I haven’t had much of the stuff. I think it’s also accessible if you’re good at following logical argument and won’t be thrown by Greek letters as variables. Most of the hard work is really arithmetic with funny letters. I recommend going and seeing if he did get to π.

    As I Try To Figure Out What Wronski Thought ‘Pi’ Was


    A couple weeks ago I shared a fascinating formula for π. I got it from Carl B Boyer’s The History of Calculus and its Conceptual Development. He got it from Józef Maria Hoëne-Wronski, an early 19th-century Polish mathematician. His idea was that an absolute, culturally-independent definition of π would come not from thinking about circles and diameters but rather from this formula:

    \pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} -  \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}

    Now, this formula is beautiful, at least to my eyes. It’s also gibberish. At least it’s ungrammatical. Mathematicians don’t like to write stuff like “four times infinity”, at least not as more than a rough draft on the way to a real thought. What does it mean to multiply four by infinity? Is arithmetic even a thing that can be done on infinitely large quantities? Among Wronski’s problems is that mathematicians of his day didn’t have a clear answer to this. We’re a little more advanced in our mathematics now. We’ve had a century and a half of rather sound treatment of infinitely large and infinitely small things. Can we save Wronski’s work?

    Start with the easiest thing. I’m offended by those \sqrt{-1} bits. Well, no, I’m more unsettled by them. I would rather have \imath in there. The difference? … More taste than anything sound. I prefer, if I can get away with it, using the square root symbol to mean the positive square root of the thing inside. There is no positive square root of -1, so, pfaugh, away with it. Mere style? All right, well, how do you know whether those \sqrt{-1} terms are meant to be \imath or its additive inverse, -\imath ? How do you know they’re all meant to be the same one? See? … As with all style preferences, it’s impossible to be perfectly consistent. I’m sure there are times I accept a big square root symbol over a negative or a complex-valued quantity. But I’m not forced to have it here so I’d rather not. First step:

    \pi = \frac{4\infty}{\imath}\left\{ \left(1 + \imath\right)^{\frac{1}{\infty}} -  \left(1 - \imath\right)^{\frac{1}{\infty}} \right\}

    Also dividing by \imath is the same as multiplying by -\imath so the second easy step gives me:

    \pi = -4 \imath \infty \left\{ \left(1 + \imath\right)^{\frac{1}{\infty}} -  \left(1 - \imath\right)^{\frac{1}{\infty}} \right\}

    Now the hard part. All those infinities. I don’t like multiplying by infinity. I don’t like dividing by infinity. I really, really don’t like raising a quantity to the one-over-infinity power. Most mathematicians don’t. We have a tool for dealing with this sort of thing. It’s called a “limit”.

    Mathematicians developed the idea of limits over … well, since they started doing mathematics. In the 19th century limits got sound enough that we still trust the idea. Here’s the rough way it works. Suppose we have a function which I’m going to name ‘f’ because I have better things to do than give functions good names. Its domain is the real numbers. Its range is the real numbers. (We can define functions for other domains and ranges, too. Those definitions look like what they do here.)

    I’m going to use ‘x’ for the independent variable. It’s any number in the domain. I’m going to use ‘a’ for some point. We want to know the limit of the function “at a”. ‘a’ might be in the domain. But — and this is genius — it doesn’t have to be. We can talk sensibly about the limit of a function at some point where the function doesn’t exist. We can say “the limit of f at a is the number L”. I hadn’t introduced ‘L’ into evidence before, but … it’s a number. It has some specific set value. Can’t say which one without knowing what ‘f’ is and what its domain is and what ‘a’ is. But I know this about it.

    Pick any error margin that you like. Call it ε because mathematicians do. However small this (positive) number is, there’s at least one neighborhood in the domain of ‘f’ that surrounds ‘a’. Check every point in that neighborhood other than ‘a’. The value of ‘f’ at all those points in that neighborhood other than ‘a’ will be larger than L – ε and smaller than L + ε.

    Yeah, pause a bit there. It’s a tricky definition. It’s a nice common place to crash hard in freshman calculus. Also again in Intro to Real Analysis. It’s not just you. Perhaps it’ll help to think of it as a kind of mutual challenge game. Try this.

    1. You draw whatever error bar, as big or as little as you like, around ‘L’.
    2. But I always respond by drawing some strip around ‘a’.
    3. You then pick absolutely any ‘x’ inside my strip, other than ‘a’.
    4. Is f(x) always within the error bar you drew?

    Suppose f(x) is. Suppose that you can pick any error bar however tiny, and I can answer with a strip however tiny, and every single ‘x’ inside my strip has an f(x) within your error bar … then, L is the limit of f at a.

    Again, yes, tricky. But mathematicians haven’t found a better definition that doesn’t break something mathematicians need.

    To write “the limit of f at a is L” we use the notation:

    \displaystyle \lim_{x \to a} f(x) = L

    The ‘lim’ part probably makes perfect sense. And you can see where ‘f’ and ‘a’ have to enter into it. ‘x’ here is a “dummy variable”. It’s the falsework of the mathematical expression. We need some name for the independent variable. It’s clumsy to do without. But it doesn’t matter what the name is. It’ll never appear in the answer. If it does then the work went wrong somewhere.

    What I want to do, then, is turn all those appearances of ‘∞’ in Wronski’s expression into limits of something at infinity. And having just said what a limit is I have to do a patch job. In that talk about the limit at ‘a’ I talked about a neighborhood containing ‘a’. What’s it mean to have a neighborhood “containing ∞”?

    The answer is exactly what you’d think if you got this question and were eight years old. The “neighborhood of infinity” is “all the big enough numbers”. To make it rigorous, it’s “all the numbers bigger than some finite number that let’s just call N”. So you give me an error bar around ‘L’. I’ll give you back some number ‘N’. Every ‘x’ that’s bigger than ‘N’ has f(x) inside your error bars. And note that I don’t have to say what ‘f(∞)’ is or even commit to the idea that such a thing can be meaningful. I only ever have to think directly about values of ‘f(x)’ where ‘x’ is some real number.
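    The game translates into code pretty directly for limits at infinity. Here is a toy sketch of my own, with f(x) = 1/x and the claimed limit L = 0: whatever ε you name, answering with N = 1/ε wins.

        # The challenge-and-response game for "the limit of 1/x at infinity is 0".
        def respond_with_N(epsilon):
            return 1.0 / epsilon

        for epsilon in (0.5, 0.01, 1e-6):
            N = respond_with_N(epsilon)
            test_xs = [2 * N, 10 * N, 1000 * N]    # a few x's bigger than N
            inside = all(abs(1.0 / x - 0.0) < epsilon for x in test_xs)
            print(epsilon, N, inside)              # inside comes out True every time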

    So! First, let me rewrite Wronski’s formula as a function, defined on the real numbers. Then I can replace each ∞ with the limit of something at infinity and … oh, wait a minute. There’s three ∞ symbols there. Do I need three limits?

    Ugh. Yeah. Probably. This can be all right. We can do multiple limits. This can be well-defined. It can also be a right pain. The challenge-and-response game needs a little modifying to work. You still draw error bars. But I have to draw multiple strips. One for each of the variables. And every combination of values inside all those strips has to give an ‘f’ that’s inside your error bars. There’s room for great mischief. You can arrange combinations of variables that look likely to break ‘f’ outside the error bars.

    So. Three independent variables, all taking a limit at ∞? That’s not guaranteed to be trouble, but I’d expect trouble. At least I’d expect something to keep the limit from existing. That is, we could find there’s no number ‘L’ so that this drawing-neighborhoods thing works for all three variables at once.

    Let’s try. One of the ∞ will be a limit of a variable named ‘x’. One of them a variable named ‘y’. One of them a variable named ‘z’. Then:

    f(x, y, z) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{y}} -  \left(1 - \imath\right)^{\frac{1}{z}} \right\}

    Without doing the work, my hunch is: this is utter madness. I expect it’s probably possible to make this function take on many wildly different values by the judicious choice of ‘x’, ‘y’, and ‘z’. Particularly ‘y’ and ‘z’. You maybe see it already. If you don’t, you maybe see it now that I’ve said you maybe see it. If you don’t, I’ll get there, but not in this essay. But let’s suppose that it’s possible to make f(x, y, z) take on wildly different values like I’m getting at. This implies that there’s not any limit ‘L’, and therefore Wronski’s work is just wrong.
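    To give a hint of the madness without working it out properly, here is a sketch of my own: evaluate the three-variable version along two different routes out to infinity. Along x = y = z = n the values settle down; along x = n², y = z = n they grow without bound.

        # Two routes to "all three variables at infinity", two very different
        # behaviors for the three-variable reading of Wronski's formula.
        def f(x, y, z):
            return -4j * x * ((1 + 1j)**(1 / y) - (1 - 1j)**(1 / z))

        for n in (10, 100, 1000):
            print(n, abs(f(n, n, n)), abs(f(n**2, n, n)))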

    Thing is, Wronski wouldn’t have thought that. Deep down, I am certain, he thought the three appearances of ∞ were the same “value”. And that to translate him fairly we’d use the same name for all three appearances. So I am going to do that. I shall use ‘x’ as my variable name, and replace all three appearances of ∞ with the same variable and a common limit. So this gives me the single function:

    f(x) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{x}} -  \left(1 - \imath\right)^{\frac{1}{x}} \right\}

    And then I need to take the limit of this at ∞. If Wronski is right, and if I’ve translated him fairly, it’s going to be π. Or something easy to get π from.

    I hope to get there next week.

    What Only One Person Ever Has Thought ‘Pi’ Means, And Who That Was


    I’ve been reading Carl B Boyer’s The History of Calculus and its Conceptual Development. It’s been slow going, because reading about how calculus’s ideas developed is hard. The ideas underlying it are subtle to start with. And the ideas have to be discussed using vague, unclear definitions. That’s not because dumb people were making arguments. It’s because these were smart people studying ideas at the limits of what we understood. When we got clear definitions we had the fundamentals of calculus understood. (By our modern standards. The future will likely see us as accepting strange ambiguities.) And I still think Boyer whiffs the discussion of Zeno’s Paradoxes in a way that mathematics and science-types usually do. (The trouble isn’t imagining that infinite series can converge. The trouble is that things are either infinitely divisible or they’re not. Either way implies things that seem false.)

    Anyway. Boyer got to a part about the early 19th century. This was when mathematicians were discovering infinities and infinitesimals are amazing tools. Also that mathematicians should maybe learn whether they follow any rules. Because you can just plug symbols into formulas, grind out what it looks like they might mean, and get answers. Sometimes this works great. Grind through the formulas for solving cubic polynomials as though square roots of negative numbers make sense. You get good results. Later, we worked out a coherent scheme of “complex-valued numbers” that justified it all. We can get lucky with infinities and infinitesimals, sometimes.

    And this brought Boyer to an argument made by Józef Maria Hoëne-Wronski. He was a Polish mathematician whose fantastic ambition in … everything … didn’t turn out many useful results. Algebra, the Longitude Problem, building a rival to the railroad, even the Kosciuszko Uprising, none quite panned out. (And that’s not quite his name. The ‘n’ in ‘Wronski’ should have an acute mark over it. But WordPress’s HTML engine doesn’t want to imagine such a thing exists. Nor do many typesetters writing calculus or differential equations books, Boyer’s included.)

    But anyone who studies differential equations knows his name, for a concept called the Wronskian. It’s a matrix determinant that anyone who studies differential equations hopes they won’t ever have to do after learning it. And, says Boyer, Wronski had this notion for an “absolute meaning of the number π”. (By “absolute” Wronski means one that’s not drawn from cultural factors like the weird human interest in circle perimeters and diameters. Compare it to the way we speak of “absolute temperature”, where the zero means something not particular to western European weather.)

    \pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} -  \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}

    Well.

    I will admit I’m not fond of “real” alternate definitions of π. They seem to me mostly to signal how clever the definition-originator is. The only one I like at all defines π as the smallest positive root of the simple-harmonic-motion differential equation. (With the right starting conditions and all that.) And I’m not sure that isn’t “circumference over diameter” in a hidden form.

    And yes, that definition is a mess of early-19th-century wild, untamed casualness in the use of symbols. But I admire the crazypants beauty of it. If I ever get a couple free hours I should rework it into something grammatical. And then see if, turned into something tolerable, Wronski’s idea is something even true.

    Boyer allows that “perhaps” because of the strange notation and “bizarre use of the symbol ∞” Wronski didn’t make much headway on this point. I can’t fault people for looking at that and refusing to go further. But isn’t it enchanting as it is?

    Reading the Comics, September 29, 2017: Anthropomorphic Mathematics Edition


    The rest of last week had more mathematically-themed comic strips than Sunday alone did. As sometimes happens, I noticed an objectively unimportant detail in one of the comics and got to thinking about it. Whether I could solve the equation as posted, or whether at least part of it made sense as a mathematics problem. Well, you’ll see.

    Patrick McDonnell’s Mutts for the 25th of September I include because it’s cute and I like when I can feature some comic in these roundups. Maybe there’s some discussion that could be had about what “equals” means in ordinary English versus what it means in mathematics. But I admit that’s a stretch.

    Professor Earl's Math Class. (Earl is the dog.) 'One belly rub equals two pats on the head!'
    Patrick McDonnell’s Mutts for the 25th of September, 2017. I should be interested in other people’s research on this. My love’s parents’ dogs are the ones I’ve had the most regular contact with the last few years, and the dogs have all been moderately to extremely alarmed by my doing suspicious things, such as existing or being near them or being away from them or reaching a hand to them or leaving a treat on the floor for them. I know this makes me sound worrisome, but my love’s parents are very good about taking care of dogs others would consider just too much trouble.

    Olivia Walch’s Imogen Quest for the 25th uses, and describes, the mathematics of a famous probability problem. This is the surprising result of how few people you need to have a 50 percent chance that some pair of people have a birthday in common. It then goes over to some other probability problems. The examples are silly. But the reasoning is sound. And the approach is useful. To find the chance of something happens it’s often easiest to work out the chance it doesn’t. Which is as good as knowing the chance it does, since a thing can either happen or not happen. At least in probability problems, which define “thing” and “happen” so there’s not ambiguity about whether it happened or not.
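    The computation behind that surprising result is short enough to sketch out (my own code, ignoring leap days and assuming all 365 birthdays are equally likely): work out the chance that nobody shares a birthday, then subtract from 1.

        # Chance that, among n people, at least two share a birthday.
        def shared_birthday_chance(n):
            no_match = 1.0
            for k in range(n):
                no_match *= (365 - k) / 365
            return 1 - no_match

        for n in (10, 22, 23, 30, 50):
            print(n, round(shared_birthday_chance(n), 3))
        # 23 is the smallest group for which the chance passes 50 percent.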

    Piers Baker’s Ollie and Quentin rerun for the 26th I’m pretty sure I’ve written about before, although back before I included pictures of the Comics Kingdom strips. (The strip moved from Comics Kingdom over to GoComics, which I haven’t caught removing old comics from their pages.) Anyway, it plays on a core piece of probability. It sets out the world as things, “events”, that can have one of multiple outcomes, and which must have one of those outcomes. Coin tossing is taken to mean, by default, an event that has exactly two possible outcomes, each equally likely. And that is near enough true for real-world coin tossing. But there is a little gap between “near enough” and “true”.

    Rick Stromoski’s Soup To Nutz for the 27th is your standard sort of Dumb Royboy joke, in this case about him not knowing what percentages are. You could do the same joke about fractions, including with the same breakdown of what part of the mathematics geek population ruins it for the remainder.

    Nate Fakes’s Break of Day for the 28th is not quite the anthropomorphic-numerals joke for the week. Anthropomorphic mathematics problems, anyway. The intriguing thing to me is that the difficult, calculus, problem looks almost legitimate to me. On the right-hand-side of the first two lines, for example, the calculation goes from

    \int -8 e^{-\frac{ln 3}{14} t}

    to
    -8 \cdot -\frac{14}{ln 3} e^{-\frac{ln 3}{14} t}

    This is a little sloppy. The first line ought to end in a ‘dt’, and the second ought to have a constant of integration. If you don’t know what these calculus things are let me explain: they’re calculus things. You need to include them to express the work correctly. But if you’re just doing a quick check of something, the mathematical equivalent of a very rough preliminary sketch, it’s common enough to leave that out.

    It doesn’t quite parse or mean anything precisely as it is. But it looks like the sort of thing that some context would make meaningful. That there’s repeated appearances of - \frac{ln 3}{14} , or - \frac{14}{ln 3} , particularly makes me wonder if Fakes used a problem he (or a friend) was doing for some reason.
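    For whatever it is worth, a computer-algebra check of that step is quick. A sketch assuming sympy (which, like the comic, leaves off the constant of integration):

        # The antiderivative from the Break of Day panel, checked with sympy.
        import sympy as sp

        t = sp.symbols('t')
        print(sp.integrate(-8 * sp.exp(-sp.log(3) / 14 * t), t))
        # The result is -8 times -14/log(3) times the original exponential,
        # matching the strip's second line (sympy may write the exponential
        # as a power of 3).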

    Mark Anderson’s Andertoons for the 29th is a welcome reassurance that something like normality still exists. Something something student blackboard story problem something.

    Anthony Blades’s Bewley rerun for the 29th depicts a parent once again too eager to help with arithmetic homework.

    Maria Scrivan’s Half Full for the 29th gives me a proper anthropomorphic numerals panel for the week, and none too soon.

    The Summer 2017 Mathematics A To Z: Volume Forms


    I’ve been reading Elke Stangl’s Elkemental Force blog for years now. Sometimes I even feel social-media-caught-up enough to comment, or at least to like posts. This is relevant today as I discuss one of Stangl’s suggestions for my letter-V topic.

    Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
    Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

    Volume Forms.

    So sometime in pre-algebra, or early in (high school) algebra, you start drawing equations. It’s a simple trick. Lay down a coordinate system, some set of axes for ‘x’ and ‘y’ and maybe ‘z’ or whatever letters are important. Look to the equation, made up of x’s and y’s and maybe z’s and so. Highlight all the points with coordinates whose values make the equation true. This is the logical basis for saying (eg) that the straight line “is” y = 2x + 1 .

    A short while later, you learn about polar coordinates. Instead of using ‘x’ and ‘y’, you have ‘r’ and ‘θ’. ‘r’ is the distance from the center of the universe. ‘θ’ is the angle made with respect to some reference axis. It’s as legitimate a way of describing points in space. Some classrooms even have a part of the blackboard (whiteboard, whatever) with a polar-coordinates “grid” on it. This looks like the lines of a dartboard. And you learn that some shapes are easy to describe in polar coordinates. A circle, centered on the origin, is ‘r = 2’ or something like that. A line through the origin is ‘θ = 1’ or whatever. The line that we’d called y = 2x + 1 before? … That’s … some mess. And now r = 2\theta + 1 … that’s not even a line. That’s some kind of spiral. Two spirals, really. Kind of wild.

    And something to bother you a while. y = 2x + 1 is an equation that looks the same as r = 2\theta + 1 . You’ve changed the names of the variables, but not how they relate to each other. But one is a straight line and the other a spiral thing. How can that be?

    The answer, ultimately, is that the letters in the equations aren’t these content-neutral labels. They carry meaning. ‘x’ and ‘y’ imply looking at space a particular way. ‘r’ and ‘θ’ imply looking at space a different way. A shape has different representations in different coordinate systems. Fair enough. That seems to settle the question.

    But if you get to calculus the question comes back. You can integrate over a region of space that’s defined by Cartesian coordinates, x’s and y’s. Or you can integrate over a region that’s defined by polar coordinates, r’s and θ’s. The first time you try this, you find … well, that any region easy to describe in Cartesian coordinates is painful in polar coordinates. And vice-versa. Way too hard. But if you struggle through all that symbol manipulation, you get … different answers. Eventually the calculus teacher has mercy and explains. If you’re integrating in Cartesian coordinates you need to use “dx dy”. If you’re integrating in polar coordinates you need to use “r dr dθ”. If you’ve never taken calculus, never mind what this means. What is important is that “r dr dθ” looks like three things multiplied together, while “dx dy” is two.

    We get this explained as a “change of variables”. If we want to go from one set of coordinates to a different one, we have to do something fiddly. The extra ‘r’ in “r dr dθ” is what we get going from Cartesian to polar coordinates. And we get formulas to describe what we should do if we need other kinds of coordinates. It’s some work that introduces us to the Jacobian, which looks like the most tedious possible calculation ever at that time. (In Intro to Differential Equations we learn we were wrong, and the Wronskian is the most tedious possible calculation ever. This is also wrong, but it might as well be true.) We typically move on after this and count ourselves lucky it got no worse than that.
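    Here is the r-dr-dθ business in a small numerical sketch of my own (assuming scipy is installed): integrate the constant 1 over the unit disc. In Cartesian coordinates the area element is dx dy; in polar coordinates it has to be r dr dθ, and only then do both answers come out to the known area, π.

        # Area of the unit disc two ways. scipy's dblquad integrates the inner
        # variable first, so the integrand is written func(inner, outer).
        import math
        from scipy import integrate

        cartesian, _ = integrate.dblquad(
            lambda y, x: 1.0, -1, 1,
            lambda x: -math.sqrt(max(0.0, 1 - x * x)),
            lambda x: math.sqrt(max(0.0, 1 - x * x)))

        polar, _ = integrate.dblquad(
            lambda r, theta: r,              # the extra r in "r dr dtheta"
            0, 2 * math.pi,
            lambda theta: 0.0, lambda theta: 1.0)

        print(cartesian, polar, math.pi)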

    None of this is wrong, even from the perspective of more advanced mathematics. It’s not even misleading, which is a refreshing change. But we can look a little deeper, and get something good from doing so.

    The deeper perspective looks at “differential forms”. These are about how to encode information about how your coordinate system represents space. They’re tensors. I don’t blame you for wondering if they would be. A differential form uses interactions between some of the directions in a space. A volume form is a differential form that uses all the directions in a space. And satisfies some other rules too. I’m skipping those because some of the symbols involved I don’t even know how to look up, much less make WordPress present.

    What’s important is the volume form carries information compactly. As symbols it tells us that this represents a chunk of space that’s constant no matter what the coordinates look like. This makes it possible to do analysis on how functions work. It also tells us what we would need to do to calculate specific kinds of problem. This makes it possible to describe, for example, how something moving in space would change.

    The volume form, and the tools to do anything useful with it, demand a lot of supporting work. You can dodge having to explicitly work with tensors. But you’ll need a lot of tensor-related materials, like wedge products and exterior derivatives and stuff like that. If you’ve never taken freshman calculus don’t worry: the people who have taken freshman calculus never heard of those things either. So what makes this worthwhile?

    Yes, person who called out “polynomials”. Good instinct. Polynomials are usually a reason for any mathematics thing. This is one of maybe four exceptions. I have to appeal to my other standard answer: “group theory”. These volume forms match up naturally with groups. There’s not only information about how coordinates describe a space to consider. There’s ways to set up coordinates that tell us things.

    That isn’t all. These volume forms can give us new invariants. Invariants are what mathematicians say instead of “conservation laws”. They’re properties whose value for a given problem is constant. This can make it easier to work out how one variable depends on another, or to work out specific values of variables.

    For example, classical physics problems like how a bunch of planets orbit a sun often have a “symplectic manifold” that matches the problem. This is a description of how the positions and momentums of all the things in the problem relate. The symplectic manifold has a volume form. That volume is going to be constant as time progresses. That is, there’s this way of representing the positions and speeds of all the planets that does not change, no matter what. It’s much like the conservation of energy or the conservation of angular momentum. And this has practical value. It’s the subject that brought my and Elke Stangl’s blogs into contact, years ago. It also has broader applicability.

    There’s no way to provide an exact answer for the movement of, like, the sun and nine-ish planets and a couple major moons and all that. So there’s no known way to answer the question of whether the Earth’s orbit is stable. All the planets are always tugging one another, changing their orbits a little. Could this converge in a weird way suddenly, on geologic timescales? Might the planet go flying off out of the solar system? It doesn’t seem like the solar system could be all that unstable, or it would have already. But we can’t rule out that some freaky alignment of Jupiter, Saturn, and Halley’s Comet might tweak the Earth’s orbit just far enough for catastrophe to unfold. Granted there’s nothing we could do about the Earth flying out of the solar system, but it would be nice to know if we face it, we tell ourselves.

    But we can answer this numerically. We can set a computer to simulate the movement of the solar system. But there will always be numerical errors. For example, we can’t use the exact value of π in a numerical computation. 3.141592 (and more digits) might be good enough for projecting stuff out a day, a week, a thousand years. But if we’re looking at millions of years? The difference can add up. We can imagine compensating for not having the value of π exactly right. But what about compensating for something we don’t know precisely, like, where Jupiter will be in 16 million years and two months?

    Symplectic forms can help us. The volume form represented by this space has to be conserved. So we can rewrite our simulation so that these forms are conserved, by design. This does not mean we avoid making errors. But it means we avoid making certain kinds of errors. We’re more likely to make what we call “phase” errors. We predict Jupiter’s location in 16 million years and two months. Our simulation puts it thirty degrees farther in its circular orbit than it actually would be. This is a less serious mistake to make than putting Jupiter, say, eight-tenths as far from the Sun as it would really be.
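    Here is a toy version of that trade-off, as a sketch of my own: a one-dimensional harmonic oscillator standing in for the solar system, and the simplest symplectic integrator standing in for the fancy ones. The ordinary Euler method lets the energy drift steadily upward; the symplectic variant keeps it bounded and instead lets the phase slip a little.

        # Plain Euler versus symplectic Euler for q'' = -q, tracking the
        # "energy" (q^2 + p^2)/2, which the true motion keeps at exactly 0.5.
        def euler_step(q, p, dt):
            return q + dt * p, p - dt * q

        def symplectic_step(q, p, dt):
            p = p - dt * q           # update the momentum first ...
            return q + dt * p, p     # ... then the position, with the new momentum

        def energy(q, p):
            return 0.5 * (q * q + p * p)

        dt, steps = 0.01, 200000     # a few hundred full oscillations
        qa, pa = 1.0, 0.0
        qb, pb = 1.0, 0.0
        for _ in range(steps):
            qa, pa = euler_step(qa, pa, dt)
            qb, pb = symplectic_step(qb, pb, dt)

        print("plain Euler energy:     ", energy(qa, pa))   # far above 0.5
        print("symplectic Euler energy:", energy(qb, pb))   # still about 0.5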

    Volume forms seem, at first, a lot of mechanism for a small problem. And, unfortunately for students, they are. They’re more trouble than they’re worth for changing Cartesian to polar coordinates, or similar problems. You know, ones that the student already has some feel for. They pay off on more abstract problems. Tracking the movement of a dozen interacting things, say, or describing a space that’s very strangely shaped. Those make the effort to learn about forms worthwhile.

    Reading the Comics, August 15, 2017: Cake Edition


    It was again a week just busy enough that I’m comfortable splitting the Reading The Comics thread into two pieces. It’s also a week that made me think about cake. So, I’m happy with the way last week shaped up, as far as comic strips go. Other stuff could have used a lot of work. Let’s read.

    Stephen Bentley’s Herb and Jamaal rerun for the 13th depicts “teaching the kids math” by having them divide up a cake fairly. I accept this as a viable way to make kids interested in the problem. Cake-slicing problems are a corner of game theory as it addresses questions we always find interesting. How can a resource be fairly divided? How can it be divided if there is not a trusted authority? How can it be divided if the parties do not trust one another? Why do we not have more cake? The kids seem to be trying to divide the cake by volume, which could be fair. If the cake slice is a small enough wedge they can likely get near enough a perfect split by ordinary measures. If it’s a bigger wedge they’d need calculus to get the answer perfect. It’ll be well-approximated by solids of revolution. But they likely don’t need perfection.

    This is assuming the value of the icing side is not held in greater esteem than the bare-cake sides. This is not how I would value the parts of the cake. They’ll need to work something out about that, too.

    Mac King and Bill King’s Magic in a Minute for the 13th features a bit of numerical wizardry. That the dates in a three-by-three block in a calendar will add up to nine times the centered date. Why this works is good for a bit of practice in simplifying algebraic expressions. The stunt will be more impressive if you can multiply by nine in your head. I’d do that by taking ten times the given date and then subtracting the original date. I won’t say I’m fond of the idea of subtracting 23 from 230, or 17 from 170. But a skilled performer could do something interesting while trying to do this subtraction. (And if you practice the trick you can get the hang of the … fifteen? … different possible answers.)
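
    To see why the trick works, call the center date ‘n’. The other eight dates in the block sit at fixed offsets from it, and the offsets cancel in pairs:

    \left(n - 8\right) + \left(n - 7\right) + \left(n - 6\right) + \left(n - 1\right) + n + \left(n + 1\right) + \left(n + 6\right) + \left(n + 7\right) + \left(n + 8\right) = 9n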

    Bill Amend’s FoxTrot rerun for the 14th mentions mathematics. Young nerd Jason’s trying to get back into hand-raising form. Arithmetic has considerable advantages as a thing to practice answering teachers. The questions have clear, definitely right answers, that can be worked out or memorized ahead of time, and can be asked in under half a panel’s word balloon space. I deduce the strip first ran the 21st of August, 2006, although that image seems to be broken.

    Ed Allison’s Unstrange Phenomena for the 14th suggests changes in the definition of the mile and the gallon to effortlessly improve the fuel economy of cars. As befits Allison’s Dadaist inclinations the numbers don’t work out. As it is, if you defined a New Mile of 7,290 feet (and didn’t change what a foot was) and a New Gallon of 192 fluid ounces (and didn’t change what an old fluid ounce was) then a 20 old-miles-per-old-gallon car would come out to about 21.7 new-miles-per-new-gallon. Commenter Del_Grande points out that if the New Mile were 3,960 feet then the calculation would work out. This inspires in me curiosity. Did Allison figure out the numbers that would work and then make a mistake in the final art? Or did he pick funny-looking numbers and not worry about whether they made sense? No way to tell from here, I suppose. (Allison doesn’t mention ways to get in touch on the comic’s About page and I’ve only got the weakest links into the professional cartoon community.)
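
    If you’d like to check the arithmetic yourself, here it is in a few lines of Python; the figures are the ones from the strip and from Del_Grande’s comment.

```python
# Converting 20 old-miles-per-old-gallon into the redefined units.
OLD_MILE_FT, OLD_GALLON_OZ = 5280, 128
NEW_GALLON_OZ = 192

def new_mpg(old_mpg, new_mile_ft):
    feet_per_old_gallon = old_mpg * OLD_MILE_FT
    feet_per_new_gallon = feet_per_old_gallon * (NEW_GALLON_OZ / OLD_GALLON_OZ)
    return feet_per_new_gallon / new_mile_ft

print(new_mpg(20, 7290))   # about 21.7 new-miles-per-new-gallon
print(new_mpg(20, 3960))   # exactly 40, with Del_Grande's New Mile
```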

    Todd the Dinosaur in the playground. 'Kickball, here we come!' Teacher's voice: 'Hold it right there! What is 128 divided by 4?' Todd: 'Long division?' He screams until he wakes. Trent: 'What's wrong?' Todd: 'I dreamed it was the first day of school! And my teacher made me do math ... DURING RECESS!' Trent: 'Stop! That's too scary!'
    Patrick Roberts’s Todd the Dinosaur for the 15th of August, 2017. Before you snipe that there’s no room on the teacher’s worksheet for Todd to actually give an answer, remember that it’s an important part of dream-logic that it’s impossible to actually do the commanded task.

    Patrick Roberts’s Todd the Dinosaur for the 15th mentions long division as the stuff of nightmares. So it is. I guess MathWorld and Wikipedia endorse calling 128 divided by 4 long division, although I’m not sure I’m comfortable with that. This may be idiosyncratic; I’d thought of long division as where the divisor is two or more digits. A three-digit number divided by a one-digit one doesn’t seem long to me. I’d just think that was division. I’m curious what readers’ experiences have been.

    The Summer 2017 Mathematics A To Z: Integration


    One more mathematics term suggested by Gaurish for the A-To-Z today, and then I’ll move on to a couple of others. Today’s is a good one.

    Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
    Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

    Integration.

    Stand on the edge of a plot of land. Walk along its boundary. As you walk the edge pay attention. Note how far you walk before changing direction, even in the slightest. When you return to where you started consult your notes. Contained within them is the area you circumnavigated.

    If that doesn’t startle you perhaps you haven’t thought about how odd that is. You don’t ever touch the interior of the region. You never do anything like see how many standard-size tiles would fit inside. You walk a path that is as close to one-dimensional as your feet allow. And encoded in there somewhere is an area. Stare at that incongruity and you realize why integrals baffle the student so. They have a deep strangeness embedded in them.
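
    The fact doing the quiet work here is Green’s theorem, which trades an integral over a region for an integral around its boundary. When the walk is made of straight segments it collapses to the shoelace formula, which fits in a few lines of code; the rectangle at the end is my own example.

```python
# Shoelace formula: the area of a polygon from an ordered walk of its corners.
def shoelace_area(corners):
    twice_area = 0.0
    n = len(corners)
    for i in range(n):
        x0, y0 = corners[i]
        x1, y1 = corners[(i + 1) % n]   # wrap around to close the boundary
        twice_area += x0 * y1 - x1 * y0
    return abs(twice_area) / 2.0

# A 4-by-3 rectangle walked counterclockwise: prints 12.0.
print(shoelace_area([(0, 0), (4, 0), (4, 3), (0, 3)]))
```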

    We who do mathematics have always liked integration. Integrals grow, in the western tradition, out of geometry. Given a shape, what is a square that has the same area? There are shapes it’s easy to find the area for, given only straightedge and compass: a rectangle? Easy. A triangle? Just as straightforward. A polygon? If you know triangles then you know polygons. A lune, the crescent-moon shape formed by taking a circular cut out of a circle? We can do that. (If the cut is the right size.) A circle? … All right, we can’t do that, but we spent two thousand years trying before we found that out for sure. And we can do some excellent approximations.

    That bit of finding-a-square-with-the-same-area was called “quadrature”. The name survives, mostly in the phrase “numerical quadrature”. We use that to mean that we computed an integral’s approximate value, instead of finding a formula that would get it exactly. The otherwise obvious choice of “numerical integration” we use already. It describes computing the solution of a differential equation. We’re not trying to be difficult about this. Solving a differential equation is a kind of integration, and we need to do that a lot. We could recast a solving-a-differential-equation problem as a find-the-area problem, and vice-versa. But that’s bother, if we don’t need to, and so we talk about numerical quadrature and numerical integration.

    Integrals are built on two infinities. This is part of why it took so long to work out their logic. One is the infinity of number; we find an integral’s value, in principle, by adding together infinitely many things. The other is an infinity of smallness. The things we add together are infinitesimally small. That we need to take things, each smaller than any number yet somehow not zero, and in such quantity that they add up to something, seems paradoxical. Their geometric origins had to be merged into those of arithmetic, of algebra, and it is not easy. Bishop George Berkeley made a steady name for himself in calculus textbooks by pointing this out. We have worked out several logically consistent schemes for evaluating integrals. They work, mostly, by showing that we can make the error caused by approximating the integral smaller than any margin we like. This is a standard trick, or at least it is, now that we know it.

    That “in principle” above is important. We don’t actually work out an integral by finding the sum of infinitely many, infinitely tiny, things. It’s too hard. I remember in grad school the analysis professor working out by the proper definitions the integral of 1. This is as easy an integral as you can do without just integrating zero. He escaped with his life, but it was a close scrape. He offered the integral of x as a way to test our endurance, without actually doing it. I’ve never made it through that.

    But we do integrals anyway. We have tools on our side. We can show, for example, that if a function obeys some common rules then we can use simpler formulas. Ones that don’t demand so many symbols in such tight formation. Ones that we can use in high school. Also, ones we can adapt to numerical computing, so that we can let machines give us answers which are near enough right. We get to choose how near is “near enough”. But then the machines decide how long we’ll have to wait to get that answer.

    The greatest tool we have on our side is the Fundamental Theorem of Calculus. Even the name promises it’s the greatest tool we might have. This rule tells us how to connect integrating a function to differentiating another function. If we can find a function whose derivative is the thing we want to integrate, then we have a formula for the integral. It’s that function we found. What a fantastic result.
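
    In symbols, for a function f continuous on the interval from a to b, and any function F whose derivative is f:

    \int_a^b f(x)\,dx = F(b) - F(a)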

    The trouble is it’s so hard to find functions whose derivatives are the thing we wanted to integrate. There are a lot of functions we can find, mind you. If we want to integrate a polynomial it’s easy. Sine and cosine and even tangent? Yeah. Logarithms? A little tedious but all right. A constant number raised to the power x? Also tedious but doable. A constant number raised to the power x²? Hold on there, that’s madness. No, we can’t do that.

    There is a weird grab-bag of functions we can find these integrals for. They’re mostly ones we can find some integration trick for. An integration trick is some way to turn the integral we’re interested in into a couple of integrals we can do and then mix back together. A lot of a Freshman Calculus course is a heap of tricks we’ve learned. They have names like “u-substitution” and “integration by parts” and “trigonometric substitution”. Some of them are really exotic, such as turning a single integral into a double integral because that leads us to something we can do. And there’s something called “differentiation under the integral sign” that I don’t know of anyone actually using. People know of it because Richard Feynman, in his fun memoir What Do You Care What Other People Think: 250 Pages Of How Awesome I Was In Every Situation Ever, mentions how awesome it made him in so many situations. Mathematics, physics, and engineering nerds are required to read this at an impressionable age, so we fall in love with a technique no textbook ever mentions. Sorry.
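
    For a small taste of one of those tricks, here is integration by parts, \int u\,dv = uv - \int v\,du , applied to an integral it handles nicely (my example, not one from any textbook in particular):

    \int x e^x\,dx = x e^x - \int e^x\,dx = \left(x - 1\right)e^x + C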

    I’ve written about all this as if we were interested just in areas. We’re not. We like calculating lengths and volumes and, if we dare venture into more dimensions, hypervolumes and the like. That’s all right. If we understand how to calculate areas, we have the tools we need. We can adapt them to as many or as few dimensions as we need. By weighting integrals we can do calculations that tell us about centers of mass and moments of inertia, about the most and least probable values of something, about all quantum mechanics.

    As often happens, this powerful tool starts with something anyone might ponder: what size square has the same area as this other shape? And then think seriously about it.

    What Second Derivatives Are And What They Can Do For You


    Previous supplemental reading for Why Stuff Can Orbit:


    This is another supplemental piece because it’s too much to include in the next bit of Why Stuff Can Orbit. I need some more stuff about how a mathematical physicist would look at something.

    This is also a story about approximations. A lot of mathematics is really about approximations. I don’t mean numerical computing. We all know that when we compute we’re making approximations. We use 0.333333 instead of one-third and we use 3.141592 instead of π. But a lot of precise mathematics, what we call analysis, is also about approximations. We do this by a logical structure that works something like this: take something we want to prove. Now for every positive number ε we can find something — a point, a function, a curve — that’s no more than ε away from the thing we’re really interested in, and which is easier to work with. Then we prove whatever we want to with the easier-to-work-with thing. And since ε can be as tiny a positive number as we want, we can suppose ε is a tinier difference than we can hope to measure. And so the difference between the thing we’re interested in and the thing we’ve proved something interesting about is zero. (This is the part that feels like we’re pulling a scam. We’re not, but this is where it’s worth stopping and thinking about what we mean by “a difference between two things”. When you feel confident this isn’t a scam, continue.) So we proved whatever we proved about the thing we’re interested in. Take an analysis course and you will see this all the time.

    When we get into mathematical physics we do a lot of approximating functions with polynomials. Why polynomials? Yes, because everything is polynomials. But also because polynomials make so much mathematical physics easy. Polynomials are easy to calculate, if you need numbers. Polynomials are easy to integrate and differentiate, if you need analysis. Here that’s the calculus that tells you about patterns of behavior. If you want to approximate a continuous function you can always do it with a polynomial. The polynomial might have to be infinitely long to approximate the entire function. That’s all right. You can chop it off after finitely many terms. This finite polynomial is still a good approximation. It’s just good for a smaller region than the infinitely long polynomial would have been.

    Necessary qualifiers: pages 65 through 82 of any book on real analysis.

    So. Let me get to functions. I’m going to use a function named ‘f’ because I’m not wasting my energy coming up with good names. (When we get back to the main Why Stuff Can Orbit sequence this is going to be ‘U’ for potential energy or ‘E’ for energy.) It’s got a domain that’s the real numbers, and a range that’s the real numbers. To express this in symbols I can write f: \Re \rightarrow \Re . If I have some number called ‘x’ that’s in the domain then I can tell you what number in the range is matched by the function ‘f’ to ‘x’: it’s the number ‘f(x)’. You were expecting maybe 3.5? I don’t know that about ‘f’, not yet anyway. The one thing I do know about ‘f’, because I insist on it as a condition for appearing, is that it’s continuous. It hasn’t got any jumps, any gaps, any regions where it’s not defined. You could draw a curve representing it with a single, if wriggly, stroke of the pen.

    I mean to build an approximation to the function ‘f’. It’s going to be a polynomial expansion, a set of things to multiply and add together that’s easy to find. To make this polynomial expansion I need to choose some point to build the approximation around. Mathematicians call this the “point of expansion” because we froze up in panic when someone asked what we were going to name it, okay? But how are we going to make an approximation to a function if we don’t have some particular point we’re approximating around?

    (One answer we find in grad school when we pick up some stuff from linear algebra we hadn’t been thinking about. We’ll skip it for now.)

    I need a name for the point of expansion. I’ll use ‘a’. Many mathematicians do. Another popular name for it is ‘x0‘. Or if you’re using some other variable name for stuff in the domain then whatever that variable is with subscript zero.

    So my first approximation to the original function ‘f’ is … oh, shoot, I should have some new name for this. All right. I’m going to use ‘F0‘ as the name. This is because it’s one of a set of approximations, each of them a little better than the old. ‘F1‘ will be better than ‘F0‘, but ‘F2‘ will be even better, and ‘F2038‘ will be way better yet. I’ll also say something about what I mean by “better”, although you’ve got some sense of that already.

    I start off by calling the first approximation ‘F0‘ by the way because you’re going to think it’s too stupid to dignify with a number as big as ‘1’. Well, I have other reasons, but they’ll be easier to see in a bit. ‘F0‘, like all its sibling ‘Fn‘ functions, has a domain of the real numbers and a range of the real numbers. The rule defining how to go from a number ‘x’ in the domain to some real number in the range?

    F^0(x) = f(a)

    That is, this first approximation is simply whatever the original function’s value is at the point of expansion. Notice that’s an ‘x’ on the left side of the equals sign and an ‘a’ on the right. This seems to challenge the idea of what an “approximation” even is. But it’s legit. Supposing something to be constant is often a decent working assumption. If you failed to check what the weather for today will be like, supposing that it’ll be about like yesterday will usually serve you well enough. If you aren’t sure where your pet is, you look first wherever you last saw the animal. (Or, yes, where your pet most loves to be. A particular spot, though.)

    We can make this rigorous. A mathematician thinks this is rigorous: you pick any margin of error you like. Then I can find a region near enough to the point of expansion. The value for ‘f’ for every point inside that region is ‘f(a)’ plus or minus your margin of error. It might be a small region, yes. Doesn’t matter. It exists, no matter how tiny your margin of error was.
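
    Written the way an analysis text would put it, that claim is just the continuity of ‘f’ at the point of expansion: for every margin of error ε greater than zero there is a distance δ greater than zero such that

    \left|x - a\right| < \delta \implies \left|f(x) - F^0(x)\right| = \left|f(x) - f(a)\right| < \epsilon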

    But yeah, that expansion still seems too cheap to work. My next approximation, ‘F1‘, will be a little better. I mean that we can expect it will be closer than ‘F0‘ was to the original ‘f’. Or it’ll be as close for a bigger region around the point of expansion ‘a’. What it’ll represent is a line. Yeah, ‘F0‘ was a line too. But ‘F0‘ is a horizontal line. ‘F1‘ might be a line at some completely other angle. If that works better. The second approximation will look like this:

    F^1(x) = f(a) + m\cdot\left(x - a\right)

    Here ‘m’ serves its traditional yet poorly-explained role as the slope of a line. What the slope of that line should be we learn from the derivative of the original ‘f’. The derivative of a function is itself a new function, with the same domain and the same range. There’s a couple ways to denote this. Each way has its strengths and weaknesses about clarifying what we’re doing versus how much we’re writing down. And trying to write down almost anything can inspire confusion in analysis later on. There’s a part of analysis where you have to shift from thinking about particular problems to thinking about how problems work in general.

    So I will define a new function, spoken of as f-prime, this way:

    f'(x) = \frac{df}{dx}\left(x\right)

    If you look closely you realize there’s two different meanings of ‘x’ here. One is the ‘x’ that appears in parentheses. It’s the value in the domain of f and of f’ where we want to evaluate the function. The other ‘x’ is the one in the lower side of the derivative, in that \frac{df}{dx} . That’s my sloppiness, but it’s not uniquely mine. Mathematicians keep this straight by using the symbols \frac{df}{dx} so much they don’t even see the ‘x’ down there anymore so have no idea there’s anything to find confusing. Students keep this straight by guessing helplessly about what their instructors want and clinging to anything that doesn’t get marked down. Sorry. But what this means is to “take the derivative of the function ‘f’ with respect to its variable, and then, evaluate what that expression is for the value of ‘x’ that’s in parentheses on the left-hand side”. We can do some things that avoid the confusion in symbols there. They all require adding some more variables and some more notation in, and it looks like overkill for a measly definition like this.

    Anyway. We really just want the derivative evaluated at one point, the point of expansion. That is:

    m = f'(a) = \frac{df}{dx}\left(a\right)

    which by the way avoids that overloaded meaning of ‘x’ there. Put this together and we have what we call the tangent line approximation to the original ‘f’ at the point of expansion:

    F^1(x) = f(a) + f'(a)\cdot\left(x - a\right)

    This is also called the tangent line, because it’s a line that’s tangent to the original function. A plot of ‘F1‘ and the original function ‘f’ are guaranteed to touch one another only at the point of expansion. They might happen to touch again, but that’s luck. The tangent line will be close to the original function near the point of expansion. It might happen to be close again later on, but that’s luck, not design. Most stuff you might want to do with the original function you can do with the tangent line, but the tangent line will be easier to work with. It exactly matches the original function at the point of expansion, and its first derivative exactly matches the original function’s first derivative at the point of expansion.

    We can do better. We can find a parabola, a second-order polynomial that approximates the original function. This will be a function ‘F2(x)’ that looks something like:

    F^2(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 m_2 \left(x - a\right)^2

    What we’re doing is adding a parabola to the approximation. This is that curve that looks kind of like a loosely-drawn U. The ‘m2‘ there measures how spread out the U is. It’s not quite the slope, but it’s kind of like that, which is why I’m using the letter ‘m’ for it. Its value we get from the second derivative of the original ‘f’:

    m_2 = f''(a) = \frac{d^2f}{dx^2}\left(a\right)

    We find the second derivative of a function ‘f’ by evaluating the first derivative, and then, taking the derivative of that. We can denote it with two ‘ marks after the ‘f’ as long as we aren’t stuck wrapping the function name in ‘ marks to set it out. And so we can describe the function this way:

    F^2(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 f''(a) \left(x - a\right)^2

    This will be a better approximation to the original function near the point of expansion. Or it’ll make larger the region where the approximation is good.
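
    Here’s a quick numerical look at how the approximations improve, using f(x) = exp(x) and the point of expansion a = 0 as my own example, chosen so that f(a), f'(a), and f''(a) are all 1.

```python
# Comparing the constant, tangent-line, and parabolic approximations of
# f(x) = exp(x) around the point of expansion a = 0.
import math

a, fa, fpa, fppa = 0.0, 1.0, 1.0, 1.0   # f(a), f'(a), f''(a) for exp at 0

def F0(x): return fa
def F1(x): return fa + fpa * (x - a)
def F2(x): return fa + fpa * (x - a) + 0.5 * fppa * (x - a) ** 2

for x in (0.1, 0.5, 1.0):
    print(f"x={x}:  f={math.exp(x):.5f}  F0={F0(x):.5f}  "
          f"F1={F1(x):.5f}  F2={F2(x):.5f}")
# Close to a the errors shrink quickly: at x = 0.1 the parabola is already
# within about 0.0002 of the true value.
```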

    If the first derivative of a function at a point is zero that means the tangent line is horizontal. In physics stuff this is an equilibrium. The second derivative can tell us whether the equilibrium is stable or not. If the second derivative at the equilibrium is positive it’s a stable equilibrium. The function looks like a bowl open at the top. If the second derivative at the equilibrium is negative then it’s an unstable equilibrium.
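
    Putting that in symbols, for an equilibrium at the point of expansion ‘a’ (and if the second derivative happens to be zero the test is silent, and we need more information):

    f'(a) = 0 \mbox{ and } f''(a) > 0: \mbox{ stable, a local minimum}; \qquad f'(a) = 0 \mbox{ and } f''(a) < 0: \mbox{ unstable, a local maximum}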

    We can make better approximations yet, by using even more derivatives of the original function ‘f’ at the point of expansion:

    F^3(x) = f(a) + f'(a)\cdot\left(x - a\right) + \frac12 f''(a) \left(x - a\right)^2 + \frac{1}{3\cdot 2} f'''(a) \left(x - a\right)^3

    There’s better approximations yet. You can probably guess what the next, fourth-degree, polynomial would be. Or you can after I tell you the fraction in front of the new term will be \frac{1}{4\cdot 3\cdot 2} . The only big difference is that after about the third derivative we give up on adding ‘ marks after the function name ‘f’. It’s just too many little dots. We start writing, like, ‘f(iv)‘ instead. Or if the Roman numerals are too much then ‘f(2038)‘ instead. Or if we don’t want to pin things down to a specific value ‘f(j)‘ with the understanding that ‘j’ is some whole number.
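
    The pattern, if you want it all in one line: the n-th approximation gathers the first n derivatives at the point of expansion, each divided by the factorial of its order. Here f(0) means the original ‘f’, and j! means j times j minus 1 and so on down to 1.

    F^n(x) = \sum_{j = 0}^{n} \frac{f^{(j)}(a)}{j!} \left(x - a\right)^j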

    We don’t need all of them. In physics problems we get equilibriums from the first derivative. We get stability from the second derivative. And we get springs in the second derivative too. And that’s what I hope to pick up on in the next installment of the main series.

    Something Cute I Never Noticed Before About Infinite Sums


    This is a trifle, for which I apologize. I’ve been sick. But I ran across this while reading Carl B Boyer’s The History of the Calculus and its Conceptual Development. This is from the chapter “A Century Of Anticipation”, developments leading up to Newton and Leibniz and The Calculus As We Know It. In particular, while working out the indefinite integrals for simple powers — x raised to a whole number — John Wallis, whom you’ll remember from such things as the first use of the ∞ symbol and beating up Thomas Hobbes for his lunch money, noted this:

    \frac{0 + 1}{1 + 1} = \frac{1}{2}

    Which is fine enough. But then Wallis also noted that

    \frac{0 + 1 + 2}{2 + 2 + 2} = \frac{1}{2}

    And furthermore that

    \frac{0 + 1 + 2 + 3}{3 + 3 + 3 + 3} = \frac{1}{2}

    \frac{0 + 1 + 2 + 3 + 4}{4 + 4 + 4 + 4 + 4} = \frac{1}{2}

    \frac{0 + 1 + 2 + 3 + 4 + 5}{5 + 5 + 5 + 5 + 5 + 5} = \frac{1}{2}

    And isn’t that neat? Wallis goes on to conclude that this is true not just for finitely many terms in the numerator and denominator, but also if you carry on infinitely far. This seems like a dangerous leap to make, but they treated infinities and infinitesimals dangerously in those days.

    What makes this work is — well, it’s just true; explaining how that can be is kind of like explaining how it is circles have a center point. All right. But we can prove that this has to be true at least for finite terms. A sum like 0 + 1 + 2 + 3 is an arithmetic progression. It’s the sum of a finite number of terms, each of them an equal difference from the one before or the one after (or both).

    Its sum will be equal to the number of terms times the arithmetic mean of the first and last. That is, it’ll be the number of terms times the sum of the first and last terms, divided by two. If we have the sum 0 + 1 + 2 + 3 + up to whatever number you like, which we’ll call ‘N’, we know its value has to be (N + 1) times N divided by 2. That takes care of the numerator.

    The denominator, well, that’s (N + 1) cases of the number N being added together. Its value has to be (N + 1) times N. So the fraction is (N + 1) times N divided by 2, itself divided by (N + 1) times N. That’s got to be one-half except when N is zero. And if N were zero, well, that fraction would be 0 over 0 and we know what kind of trouble that is.
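
    You can also check a few cases by brute force, which is comforting even after the algebra:

```python
# Wallis's ratio for the first several N: the answer is always one-half.
for N in range(1, 9):
    numerator = sum(range(N + 1))    # 0 + 1 + 2 + ... + N
    denominator = N * (N + 1)        # N added to itself (N + 1) times
    print(N, numerator / denominator)
```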

    It’s a tiny result, although you can use it to make an argument about what to expect from \int{x^n dx} , as Wallis did. And it delighted me to see and to understand why it should be so.

    Reading the Comics, May 27, 2017: Panels Edition


    Can’t say this was too fast or too slow a week for mathematically-themed comic strips. A bunch of the strips were panel comics, so that’ll do for my theme.

    Norm Feuti’s Retail for the 21st mentions every (not that) algebra teacher’s favorite vague introduction to group theory, the Rubik’s Cube. Well, the ways you can rotate the various sides of the cube do form a group, which is something that acts like arithmetic without necessarily being numbers. And it gets into value judgements. There exist algorithms to solve Rubik’s cubes. Is it a show of intelligence that someone can learn an algorithm and solve any cube? — But then, how is solving a Rubik’s cube, with or without the help of an algorithm, a show of intelligence? At least of any intelligence more than the bit of spatial recognition that’s good for rotating cubes around?

    'Rubik's cube, huh? I never could solve one of those.' 'I'm just fidgeting with it. I never bothered learning the algorithm either.' 'What algorithm?' 'The pattern you use to solve it.' 'Wait. All you have to do to solve it is memorize a pattern?' 'Of course. How did you think people solved it?' 'I always thought you had to be super smart to figure it out.' 'Well, memorizing the pattern does take a degree of intelligence.' 'Yeah, but that's not the same thing as solving it on your own.' 'I'm sure some people figured out the algorithm without help.' 'I KNEW Chad Gustafson was a liar! He was no eighth-grade prodigy, he just memorized the pattern!' 'Sounds like you and the CUBE have some unresolved issues.'
    Norm Feuti’s Retail for the 21st of May, 2017. A few weeks ago I ran across a book about the world of competitive Rubik’s Cube solving. I haven’t had the chance to read it, but am interested by the ways people form rules for what would seem like a naturally shapeless feature such as solving Rubik’s Cubes. Not featured: the early 80s Saturday morning cartoon that totally existed because somehow that made sense back then.

    I don’t see that learning an algorithm for a problem is a lack of intelligence. No more than using a photo reference shows a lack of drawing skill. It’s still something you need to learn, and to apply, and to adapt to the cube as you have it to deal with. Anyway, I never learned any techniques for solving it either. Would just play for the joy of it. Here’s a page with one approach to solving the cube, if you’d like to give it a try yourself. Good luck.

    Bob Weber Jr and Jay Stephens’s Oh, Brother! for the 22nd is a word-problem avoidance joke. It’s a slight thing to include, but the artwork is nice.

    Brian and Ron Boychuk’s Chuckle Brothers for the 23rd is a very slight thing to include, but it’s looking like a slow week. I need something here. If you don’t see it then things picked up. They similarly tried sprucing things up the 27th, with another joke for taping onto the door.

    Nate Fakes’s Break of Day for the 24th features the traditional whiteboard full of mathematics scrawls as a sign of intelligence. The scrawl on the whiteboard looks almost meaningful. The integral, particularly, looks like it might have been copied from a legitimate problem in polar or cylindrical coordinates. I say “almost” because while I think that some of the r symbols there are r’ I’m not positive those aren’t just stray marks. If they are r’ symbols, it’s the sort of integral that comes up when you look at surfaces of spheres. It would be the electric field of a conductive metal ball given some charge, or the gravitational field of a shell. These are tedious integrals to solve, but fortunately after you do them in a couple of introductory physics-for-majors classes you can just look up the answers instead.

    Samson’s Dark Side of the Horse for the 26th is the Roman numerals joke for this installment. I feel like it ought to be a pie chart joke too, but I can’t find a way to make it one.

    Izzy Ehnes’s The Best Medicine Cartoon for the 27th is the anthropomorphic numerals joke for this paragraph.

    In Which I Offer Excuses Instead Of Mathematics


    I’d been hoping to get back into longer-form essays. And then the calculations I meant to do on one problem turned out more complicated than I’d wanted. And they’re hard to square with the approach I used in some earlier work. Not that the results I was looking at were wrong, mind, just that an approach I’d used as “convenient for this sort of problem” turned inconvenient here.

    So while I have the whole piece back in the shop for re-thinking, which is harder than even thinking, let me give you some other stuff to read. Or look at. One is from regular Singaporean correspondent MathTuition88. If you know anything about topology it’s because you’ve heard about Möbius strips. Surfaces with a single side are neat, and form the base of 95 percent of all science fiction stories in which the mathematics is the fantastic element. Klein bottles are often mentioned as a four-dimensional analogue to the Möbius strip, a closed surface with no distinguishable interior or exterior. And a Klein bottle can be divided into two Möbius strips. MathTuition88 showcases a picture about how to turn two strips into a bottle. Or at least the best approximation of a bottle we can do; the actual Klein bottle needs a fourth dimension to exist without passing through itself, and we can just make a three-dimensional imitation of the thing.

    For something a bit more vector-analytic Joe Heafner’s Tensor Time has an essay about vectors. It’s about Heafner’s dislike for the way some vector problems are presented. Some common and easy ways to solve vector equations lead to spurious solutions that have to be weeded out by ad hoc reasoning; can’t we do better? Heafner argues that we can and should. The suggested alternative looks a little stuffy, but as often happens, spending more time on the setup means one spends less time confused later on. Worth pondering.

    And this is a late addition, but I couldn’t resist.

    Now I have a new favorite first chapter for a calculus text.

    Reading the Comics, February 23, 2017: The Week At Once Edition


    For the first time in ages there aren’t enough mathematically-themed comic strips to justify my cutting the week’s roundup in two. No, I have no idea what I’m going to write about for Thursday. Let’s find out together.

    Jenny Campbell’s Flo and Friends for the 19th faintly irritates me. Flo wants to make sure her granddaughter understands that just because it takes people on average 14 minutes to fall asleep doesn’t mean that anyone actually does, by listing all sorts of reasons that a person might need more than fourteen minutes to sleep. It makes me think of a behavior John Allen Paulos notes in Innumeracy, wherein the statistically wise points out that someone has, say, a one-in-a-hundred-million chance of being killed by a terrorist (or whatever) and is answered, “ah, but what if you’re that one?” That is, it’s a response that has the form of wisdom without the substance. I notice Flo doesn’t mention the many reasons someone might fall asleep in less than fourteen minutes.

    But there is something wise in there nevertheless. For most stuff, the average is the most common value. By “the average” I mean the arithmetic mean, because that is what anyone means by “the average” unless they’re being difficult. (Mathematicians acknowledge the existence of an average called the mode, which is the most common value (or values), and that’s most common by definition.) But just because something is the most common result does not mean that it must be common. Toss a coin fairly a hundred times and it’s most likely to come up tails 50 times. But you shouldn’t be surprised if it actually turns up tails 51 or 49 or 45 times. This doesn’t make 50 a poor estimate for the average number of times something will happen. It just means that it’s not a guarantee.
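
    If you want numbers to go with that, here’s the chance of exactly 50 tails and the chance of landing anywhere from 45 to 55, worked out from the binomial distribution. The 45-to-55 window is my own choice, just to make the point.

```python
# Tossing a fair coin 100 times: one exact count versus a range of counts.
from math import comb

def prob_tails(k, n=100):
    return comb(n, k) / 2 ** n

print(prob_tails(50))                              # about 0.08
print(sum(prob_tails(k) for k in range(45, 56)))   # about 0.73
```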

    Gary Wise and Lance Aldrich’s Real Life Adventures for the 19th shows off an unusually dynamic camera angle. It’s in service for a class of problem you get in freshman calculus: find the longest pole that can fit around a corner. Oh, a box-spring mattress up a stairwell is a little different, what with box-spring mattresses being three-dimensional objects. It’s the same kind of problem. I want to say the most astounding furniture-moving event I’ve ever seen was when I moved a fold-out couch down one and a half flights of stairs single-handed. But that overlooks the caged mouse we had one winter, who moved a Chinese finger-trap full of crinkle paper up the tight curved plastic to his nest by sheer determination. The trap was far longer than could possibly be curved around the tube. We have no idea how he managed it.

    J R Faulkner’s Promises, Promises for the 20th jokes that one could use Roman numerals to obscure calculations. So you could. Roman numerals are terrible things for doing arithmetic, at least past addition and subtraction. This is why accountants and mathematicians abandoned them pretty soon after learning there were alternatives.

    Mark Anderson’s Andertoons for the 21st is the Mark Anderson’s Andertoons for the week. Probably anything would do for the blackboard problem, but something geometric reads very well.

    Jef Mallett’s Frazz for the 21st makes some comedy out of the sort of arithmetic error we all make. It’s so easy to pair up, like, 7 and 3 make 10 and 8 and 2 make 10. It takes a moment, or experience, to realize 78 and 32 will not make 100. Forgive casual mistakes.

    Bud Fisher’s Mutt and Jeff rerun for the 22nd is a similar-in-tone joke built on arithmetic errors. It’s got the form of vaudeville-style sketch compressed way down, which is probably why the third panel could be made into a satisfying final panel too.

    'How did you do on the math test?' 'Terrible.' 'Will your mom be mad?' 'Maybe. But at least she'll know I didn't cheat!'
    Bud Blake’s Tiger for the 23rd of February, 2017. I want to blame the colorists for making Hugo’s baby tooth look so weird in the second and third panels, but the coloring is such a faint thing at that point I can’t. I’m sorry to bring it to your attention if you didn’t notice and weren’t bothered by it before.

    Bud Blake’s Tiger rerun for the 23rd just name-drops mathematics; it could be any subject. But I need some kind of picture around here, don’t I?

    Mike Baldwin’s Cornered for the 23rd is the anthropomorphic numerals joke for the week.

    Reading the Comics, February 6, 2017: Another Pictureless Half-Week Edition


    Got another little flood of mathematically-themed comic strips last week and so once again I’ll split them along something that looks kind of middle-ish. Also this is another bunch of GoComics.com-only posts. Since those seem to be accessible to anyone whether or not they’re subscribers indefinitely far into the future I don’t feel like I can put the comics directly up and will trust you all to click on the links that you find interesting. Which is fine; the new GoComics.com design makes it annoyingly hard to download a comic strip. I don’t think that was their intention. But that’s one of the two nagging problems I have with their new design. So you know.

    Tony Cochran’s Agnes for the 5th sees a brand-new mathematics. Always dangerous stuff. But mathematicians do invent, or discover, new things in mathematics all the time. Part of the task is naming the things in it. That’s something which takes talent. Some people, such as Leonhard Euler, had the knack a great novelist has for putting names to things. The rest of us muddle along. Often if there’s any real-world inspiration, or resemblance to anything, we’ll rely on that. And we look for terminology that evokes similar ideas in other fields. … And, Agnes would like to know, there is mathematics that’s about approximate answers, being “right around” the desired answer. Unfortunately, that’s hard. (It’s all hard, if you’re going to take it seriously, much like everything else people do.)

    Scott Hilburn’s The Argyle Sweater for the 5th is the anthropomorphic numerals joke for this essay.

    Carol Lay’s Lay Lines for the 6th depicts the hazards of thinking deeply and hard about the infinitely large and the infinitesimally small. They’re hard. Our intuition seems well-suited to handling a modest bunch of household-sized things. Logic guides us when thinking about the infinitely large or small, but it takes a long time to get truly conversant and comfortable with it all.

    Paul Gilligan’s Pooch Cafe for the 6th sees Poncho try to argue there’s thermodynamical reasons for not being kind. Reasoning about why one should be kind (or not) is the business of philosophers and I won’t overstep my expertise. Poncho’s mathematics, that’s something I can write about. He argues “if you give something of yourself, you inherently have less”. That seems to be arguing for a global conservation of self-ness, that the thing can’t be created or lost, merely transferred around. That’s fair enough as a description of what the first law of thermodynamics tells us about energy. The equation he reads off reads, “the change in the internal energy (Δ U) equals the heat added to the system (Q) minus the work done by the system (W)”. Conservation laws aren’t unique to thermodynamics. But Poncho may be aware of just how universal and powerful thermodynamics is. I’m open to an argument that it’s the most important field of physics.

    Jonathan Lemon’s Rabbits Against Magic for the 6th is another strip Intro to Calculus instructors can use for their presentation on instantaneous versus average velocities. There’s been a bunch of them recently. I wonder if someone at Comic Strip Master Command got a speeding ticket.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 6th is about numeric bases. They’re fun to learn about. There’s an arbitrariness in the way we represent concepts. I think we can understand better what kinds of problems seem easy and what kinds seem harder if we write them out different ways. But base eleven is only good for jokes.

    Reading the Comics, January 21, 2017: Homework Edition


    Now to close out what Comic Strip Master Command sent my way through last Saturday. And I’m glad I’ve shifted to a regular schedule for these. They ordered a mass of comics with mathematical themes for Sunday and Monday this current week.

    Karen Montague-Reyes’s Clear Blue Water rerun for the 17th describes trick-or-treating as “logarithmic”. The intention is to say that the difficulty in wrangling kids from house to house grows incredibly fast as the number of kids increases. Fair enough, but should it be “logarithmic” or “exponential”? Because the logarithm grows slowly as the number you take the logarithm of grows. It grows all the slower the bigger the number gets. The exponential of a number, though, that grows faster and faster still as the number underlying it grows. So is this mistaken?

    I say no. It depends what the logarithm is, and is of. If the number of kids is the logarithm of the difficulty of hauling them around, then the intent and the mathematics are in perfect alignment. Five kids are (let’s say) ten times harder to deal with than four kids. Sensible and, from what I can tell of packs of kids, correct.
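
    Putting that reading in symbols, with base ten as my illustrative pick: if the number of kids n is the logarithm of the difficulty D, then

    n = \log_{10} D \quad\Leftrightarrow\quad D = 10^n

    and going from four kids to five multiplies the difficulty by ten.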

    'Anne has six nickels. Sue has 41 pennies. Who has more money?' 'That's not going to be easy to figure out. It all depends on how they're dressed!'
    Rick Detorie’s One Big Happy for the 17th of January, 2017. The section was about how the appearance and trappings of wealth count for more than the actual substance of wealth, so everyone’s really up to speed in the course.

    Rick Detorie’s One Big Happy for the 17th is a resisting-the-word-problem joke. There’s probably some warning that could be drawn about this in how to write story problems. It’s hard to foresee all the reasonable confounding factors that might get a student to the wrong answer, or to see a problem that isn’t meant to be there.

    Bill Holbrook’s On The Fastrack for the 19th continues Fi’s story of considering leaving Fastrack Inc, and finding a non-competition clause that’s of appropriate comical absurdity. As an auditor there’s not even a chance Fi could do without numbers. Were she a pure mathematician … yeah, no. There’s fields of mathematics in which numbers aren’t all that important. But we never do without them entirely. Even if we exclude cases where a number is just used as an index, for which Roman numerals would be almost as good as regular numerals. If nothing else numbers would keep sneaking in by way of polynomials.

    'Uh, Fi? Have you looked at the non-compete clause in your contract?' 'I wouldn't go to one of Fastrack's competitors.' 'No, but, um ... you'd better read this.' 'I COULDN'T USE NUMBERS FOR TWO YEARS???' 'Roman numerals would be okay.'
    Bill Holbrook’s On The Fastrack for the 19th of January, 2017. I feel like someone could write a convoluted story that lets someone do mathematics while avoiding any actual use of any numbers, and that it would probably be Greg Egan who did it.

    Dave Whamond’s Reality Check for the 19th breaks our long dry spell without pie chart jokes.

    Mort Walker and Dik Browne’s Vintage Hi and Lois for the 27th of July, 1959 uses calculus as stand-in for what college is all about. Lois’s particular example is about a second derivative. Suppose we have a function named ‘y’ and that depends on a variable named ‘x’. Probably it’s a function with domain and range both real numbers. If complex numbers were involved then the variable would more likely be called ‘z’. The first derivative of a function is about how fast its values change with small changes in the variable. The second derivative is about how fast the values of the first derivative change with small changes in the variable.

    'I hope our kids are smart enough to win scholarships for college.' 'We can't count on that. We'll just have to save the money!' 'Do you know it costs about $10,000 to send one child through college?!' 'That's $40,000 we'd have to save!' Lois reads to the kids: (d^2/dx^2)y = 6x - 2.
    Mort Walker and Dik Browne’s Vintage Hi and Lois for the 27th of July, 1959. Fortunately Lois discovered the other way to avoid college costs: simply freeze the ages of your children where they are now, so they never face student loans. It’s an appealing plan until you imagine being Trixie.

    The ‘d’ in this equation is more of an instruction than it is a number, which is why it’s a mistake to just divide those out. Instead of writing it as \frac{d^2 y}{dx^2} it’s permitted, and common, to write it as \frac{d^2}{dx^2} y . This means the same thing. I like that because, to me at least, it more clearly suggests “do this thing (take the second derivative) to the function we call ‘y’.” That’s a matter of style and what the author thinks needs emphasis.

    There are infinitely many possible functions y that would make the equation \frac{d^2 y}{dx^2} = 6x - 2 true. They all belong to one family, though. They all look like y(x) = \frac{1}{6} 6 x^3 - \frac{1}{2} 2 x^2 + C x + D , where ‘C’ and ‘D’ are some fixed numbers. There’s no way to know, from what Lois has given, what those numbers should be. It might be that the context of the problem gives information to use to say what those numbers should be. It might be that the problem doesn’t care what those numbers should be. Impossible to say without the context.
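
    You can check the family by differentiating twice; the ‘C’ and ‘D’ fall away along the way, which is exactly why the equation can’t pin them down:

    y(x) = x^3 - x^2 + Cx + D \quad\Rightarrow\quad \frac{dy}{dx} = 3x^2 - 2x + C \quad\Rightarrow\quad \frac{d^2y}{dx^2} = 6x - 2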

    The End 2016 Mathematics A To Z: Weierstrass Function


    I’ve teased this one before.

    Weierstrass Function.

    So you know how the Earth is a sphere, but from our normal vantage point right up close to its surface it looks flat? That happens with functions too. Here I mean the normal kinds of functions we deal with, ones with domains that are the real numbers or a Euclidean space. And ranges that are real numbers. The functions you can draw on a sheet of paper with some wiggly bits. Let the function wiggle as much as you want. Pick a part of it and zoom in close. That zoomed-in part will look straight. If it doesn’t look straight, zoom in closer.

    We rely on this. Functions that are straight, or at least straight enough, are easy to work with. We can do calculus on them. We can do analysis on them. Functions with plots that look like straight lines are easy to work with. Often the best approach to working with the function you’re interested in is to approximate it with an easy-to-work-with function. I bet it’ll be a polynomial. That serves us well. Polynomials are these continuous functions. They’re differentiable. They’re smooth.

    That thing about the Earth looking flat, though? That’s a lie. I’ve never been to any of the really great cuts in the Earth’s surface, but I have been to some decent gorges. I went to grad school in the Hudson River Valley. I’ve driven I-80 over Pennsylvania’s scariest bridges. There’s points where the surface of the Earth just drops a great distance between your one footstep and your last.

    Functions do that too. We can have points where a function isn’t differentiable, where it’s impossible to define the direction it’s headed. We can have points where a function isn’t continuous, where it jumps from one region of values to another region. Everyone knows this. We can’t dismiss those as aberrations not worthy of the name “function”; too many of them are too useful. Typically we handle this by admitting there’s points that aren’t continuous and we chop the function up. We make it into a couple of functions, each stretching from discontinuity to discontinuity. Between them we have continuous regions and we can go about our business as before.

    Then came the 19th century when things got crazy. This particular craziness we credit to Karl Weierstrass. Weierstrass’s name is all over 19th century analysis. He had that talent for probing the limits of our intuition about basic mathematical ideas. We have a calculus that is logically rigorous because he found great counterexamples to what we had assumed without proving.

    The Weierstrass function challenges this idea that any function is going to eventually level out. Or that we can even smooth a function out into basically straight, predictable chunks in-between sudden changes of direction. The function is continuous everywhere; you can draw it perfectly without lifting your pen from paper. But it always looks like a zig-zag pattern, jumping around like it was always randomly deciding whether to go up or down next. Zoom in on any patch and it still jumps around, zig-zagging up and down. There’s never an interval where it’s always moving up, or always moving down, or even just staying constant.
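
    For the record, Weierstrass’s original example was an infinite sum of ever-faster cosines; the conditions here are the classical ones he used. The shrinking powers of a keep the sum convergent and continuous, while the rapidly growing powers of b keep it from ever settling down enough to have a slope.

    W(x) = \sum_{n = 0}^{\infty} a^n \cos\left(b^n \pi x\right), \qquad 0 < a < 1, \quad b \mbox{ a positive odd integer}, \quad ab > 1 + \frac{3}{2}\pi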

    Despite being continuous it’s not differentiable. I’ve described that casually as it being impossible to predict where the function is going. That’s an abuse of words, yes. The function is defined. Its value at a point isn’t any more random than the value of “x²” is for any particular x. The unpredictability I’m talking about here is a side effect of ignorance. Imagine I showed you a plot of “x²” with a part of it concealed and asked you to fill in the gap. You’d probably do pretty well estimating it. The Weierstrass function, though? No; your guess would be lousy. My guess would be lousy too.

    That’s a weird thing to have happen. A century and a half later it’s still weird. It gets weirder. The Weierstrass function isn’t differentiable generally. But there are exceptions. There are little dots of differentiability, where the rate at which the function changes is known. Not intervals, though. Single points. This is crazy. Derivatives are about how a function changes. We work out what they should even mean by thinking of a function’s value on strips of the domain. Those strips are small, but they’re still, you know, strips. But on almost all of that strip the derivative isn’t defined. It’s only at isolated points, a set with measure zero, that this derivative even exists. It evokes the medieval Mysteries, of how we are supposed to try, even though we know we shall fail, to understand how God can have contradictory properties.

    It’s not quite that Mysterious here. Properties like this challenge our intuition, if we’ve gotten any. Once we’ve laid out good definitions for ideas like “derivative” and “continuous” and “limit” and “function” we can work out whether results like this make sense. And they — well, they follow. We can avoid weird conclusions like this, but at the cost of messing up our definitions for what a “function” and other things are. Making those useless. For the mathematical world to make sense, we have to change our idea of what quite makes sense.

    That’s all right. When we look close we realize the Earth around us is never flat. Even reasonably flat areas have slight rises and falls. The ends of properties are marked with curbs or ditches, and bordered by streets that rise to a center. Look closely even at the dirt and we notice that as level as it gets there are still rocks and scratches in the ground, clumps of dirt an infinitesimal bit higher here and lower there. The flatness of the Earth around us is a useful tool, but we miss a lot by pretending it’s everything. The Weierstrass function is one of the ways a student mathematician learns that while smooth, predictable functions are essential, there is much more out there.

    The End 2016 Mathematics A To Z: Riemann Sum


    I see for the other A To Z I did this year I did something else named for Riemann. So I did. Bernhard Riemann did a lot of work that’s essential to how we see mathematics today. We name all kinds of things for him, and correctly so. Here’s one of his many essential bits of work.

    Riemann Sum.

    The Riemann Sum is a thing we learn in Intro to Calculus. It’s essential in getting us to definite integrals. We’re introduced to it in functions of a single variable. The functions have a domain that’s an interval of real numbers and a range that’s somewhere in the real numbers. The Riemann Sum — and from it, the integral — is a real number.

    We get this number by following a couple steps. The first is we chop the interval up into a bunch of smaller intervals. That chopping-up we call a “partition” because it’s another of those times mathematicians use a word the way people might use the same word. From each one of those chopped-up pieces we pick a representative point. Now with each piece evaluate what the function is for that representative point. Multiply that by the width of the partition it was in. Then take those products for each of those pieces and add them all together. If you’ve done it right you’ve got a number.
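
    Written out: if the partition points are a = t0 < t1 < … < tn = b, and x*i is the representative point chosen from the i-th piece, the sum is

    \sum_{i = 1}^{n} f\left(x_i^*\right) \left(t_i - t_{i - 1}\right)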

    You need a couple pieces in place to have “the” Riemann Sum for something. You need a function, which is fair enough. And you need a partitioning of the interval. And you need some representative point for each of the partitions. Change any of them — function, partition, or point — and you may change the sum you get. You expect that for changing the function. Changing the partition? That’s less obvious. But draw some wiggly curvy function on a sheet of paper. Draw a couple of partitions of the horizontal axis. (You’ll probably want to use different colors for different partitions.) That should coax you into it. And you’d probably take it on my word that different representative points give you different sums.

    Very different? It’s possible. There’s nothing stopping it from happening. But if the results aren’t very different then we might just have an integrable function. That’s a function that gives us the same Riemann Sum no matter how we pick representative points, as long as we pick partitions that get finer and finer enough. We measure how fine a partition is by how big the widest chopped-up piece is. To be integrable the Riemann Sum for a function has to get to the same number whenever the partition’s size gets small enough and however we pick points inside. We get the lovely quiet paradox in which we add together infinitely many things, each of them infinitesimally tiny, and get a regular old number out of all that work.

    We use the Riemann Sum for what we call numerical quadrature. That’s working out integrals on the computer. Or calculator. Or by hand. When we do it by evaluating numbers instead of using analysis. It’s very easy to program. And we can do some tricks based on the Riemann Sum to make the numerical estimate a closer match to the actual integral.
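
    Easy to program indeed. Here’s a left-endpoint version with an equal-width partition, applied to an integral whose true value is one-third; the function and interval are my own example.

```python
# A left-endpoint Riemann sum over an equal-width partition.
def riemann_sum(f, a, b, n):
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

# Integral of x^2 from 0 to 1: prints about 0.3328, versus the true 1/3.
print(riemann_sum(lambda x: x ** 2, 0.0, 1.0, 1000))
```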

    And we use the Riemann Sum to learn how the Riemann Integral works. It’s a blessedly straightforward thing. It appeals to intuition well. It lets us draw all sorts of curves with rectangular boxes overlaying them. It’s so easy to work out the area of a rectangular box. We can imagine adding up these areas without being confused.

    We don’t use the Riemann Sum to actually do integrals, though. Numerical approximations to an integral, yes. For the actual integral it’s too hard to use. What makes it hard is you need to evaluate this for every possible partition and every possible pick of representative points. In grad school my analysis professor worked through — once — using this to integrate the number 1. This is the easiest possible thing to integrate and it was barely manageable. He gave a good try at integrating the function ‘f(x) = x’ but admitted he couldn’t do it. None of us could.

    When you see the Riemann Sum in an Introduction to Calculus course you see it in simplified form. You get partitions that are very easy to work with. Like, you break the interval up into some number of equally-sized chunks. You get representative points that follow one of a couple good choices. The left end of the partition. The right end of the partition. The middle of the partition.

    That’s fine, numerically. If the function is integrable it doesn’t matter what partition or representative points we pick. And it’s fine for learning about whether functions are integrable. If it matters whether you pick left or middle or right ends of the partition then the function isn’t integrable. The instructor can give functions that break integrability based on a given partition or endpoint choice or whatever.

    But that isn’t every possible partition and every possible pick of representative points. I suppose it’s possible to work all that out for a couple of really, really simple functions. But it’s so much work. We’re better off using the Riemann Sum to get to formulas about integrals that don’t depend on actually using the Riemann Sum.

    So that is the curious position the Riemann Sum has. It is a fundament of integral calculus. It is the way we first define the definite integral. We rely on it to learn what definite integrals are like. We use it all the time numerically. We never use it analytically. It’s too hard. I hope you appreciate the strange beauty of that.