Wronski’s Formula For Pi: How Close We Came


Previously:


Józef Maria Hoëne-Wronski had an idea for a new, universal, culturally independent definition of π. It was this formula, which nobody went along with because they had looked at it:

\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} -  \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}

I made some guesses about what he would want this to mean. And how we might put that in terms of modern, conventional mathematics. I describe those in the above links. In terms of limits of functions, I got this:

\displaystyle  \lim_{x \to \infty} f(x) = \lim_{x \to \infty} -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

The trouble is that limit took more work than I wanted to do to evaluate. If you try evaluating that ‘f(x)’ at ∞, you get an expression that looks like zero times ∞. This begs for the use of L’Hôpital’s Rule, which tells you how to find the limit for something that looks like zero divided by zero, or like ∞ divided by ∞. Do a little rewriting — replacing that first ‘x’ with ‘\frac{1}{1 / x}’ — and this ‘f(x)’ behaves like L’Hôpital’s Rule needs.

The trouble is, that’s a pain to evaluate. L’Hôpital’s Rule works on functions that look like one function divided by another function. It does this by calculating the derivative of the numerator function divided by the derivative of the denominator function. And I decided that was more work than I wanted to do.

Where trouble comes up is all those parts where \frac{1}{x} turns up. The derivatives of functions with a lot of \frac{1}{x} terms in them get more complicated than the original functions were. Is there a way to get rid of some or all of those?

And there is. Do a change of variables. Let me summon the variable ‘y’, whose value is exactly \frac{1}{x} . And then I’ll define a new function, ‘g(y)’, whose value is whatever ‘f’ would be at \frac{1}{y} . That is, and this is just a little bit of algebra:

g(y) = -2 \cdot \frac{1}{y} \cdot 2^{\frac{1}{2} y } \cdot \sin\left(\frac{\pi}{4} y\right)
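If you want reassurance that the substitution didn’t change anything, a few lines of Python make a decent spot check. This is a minimal sketch, with ‘f’ and ‘g’ just naming the two functions above:

import math

# f(x), my version of Wronski's formula, meant for large x
def f(x):
    return -2 * x * 2**(0.5 / x) * math.sin(math.pi / 4 / x)

# g(y), the same thing after the substitution y = 1/x
def g(y):
    return -2 * (1 / y) * 2**(0.5 * y) * math.sin(math.pi / 4 * y)

for x in (10, 100, 1000):
    print(x, f(x), g(1 / x))   # f(x) and g(1/x) should agree

Each row prints the same value twice, up to floating-point dust, which is all the algebra above claims.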

The limit of ‘f(x)’ for ‘x’ at ∞ should be the same number as the limit of ‘g(y)’ for ‘y’ at … you’d really like it to be zero. If ‘x’ is incredibly huge, then \frac{1}{x} has to be incredibly small. But we can’t just swap the limit of ‘x’ at ∞ for the limit of ‘y’ at 0. The limit of a function at a point reflects the values of the function in a neighborhood around that point. If the point’s 0, this includes positive and negative numbers. But looking for the limit at ∞ gets at only positive numbers. You see the difference?

… For this particular problem it doesn’t matter. But it might. Mathematicians handle this by taking a “one-sided limit”, or a “directional limit”. The normal limit at 0 of ‘g(y)’ is based on what ‘g(y)’ looks like in a neighborhood of 0, positive and negative numbers. In the one-sided limit, we just look at a neighborhood of 0 that’s all values greater than 0, or all values less than 0. In this case, I want the neighborhood that’s all values greater than 0. And we write that by adding a little + in superscript to the limit. For the other side, the neighborhood less than 0, we add a little – in superscript. So I want to evaluate:

\displaystyle  \lim_{y \to 0^+} g(y) = \lim_{y \to 0^+}  -2\cdot\frac{2^{\frac{1}{2}y} \cdot \sin\left(\frac{\pi}{4} y\right)}{y}
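A one-sided limit is easy to at least peek at numerically: sneak up on 0 from the positive side and watch what ‘g(y)’ does. A sketch, nothing rigorous:

import math

def g(y):
    return -2 * 2**(0.5 * y) * math.sin(math.pi / 4 * y) / y

# y creeping toward 0 from above, as the 0^+ in the limit demands
for y in (0.1, 0.001, 0.00001):
    print(y, g(y))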

Limits and L’Hôpital’s Rule and stuff work for one-sided limits the way they do for regular limits. So there’s that mercy. The first attempt at this limit, seeing what ‘g(y)’ is if ‘y’ happens to be 0, gives -2 \cdot \frac{1 \cdot 0}{0} . A zero divided by a zero is promising. That’s not defined, no, but it’s exactly the format that L’Hôpital’s Rule likes. The numerator is:

-2 \cdot 2^{\frac{1}{2}y} \sin\left(\frac{\pi}{4} y\right)

And the denominator is:

y

The first derivative of the denominator is blessedly easy: the derivative of y, with respect to y, is 1. The derivative of the numerator is a little harder. It demands the use of the Product Rule and the Chain Rule, just as last time. But these chains are easier.

The first derivative of the numerator is going to be:

-2 \cdot 2^{\frac{1}{2}y} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} y\right) + -2 \cdot 2^{\frac{1}{2}y} \cdot \cos\left(\frac{\pi}{4} y\right) \cdot \frac{\pi}{4}
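If you don’t trust my Product Rule and Chain Rule work, and you shouldn’t have to, a computer algebra system can check it. A minimal sketch using sympy (assuming you have it installed), with ‘hand’ holding the derivative as I computed it above:

import sympy as sp

y = sp.symbols('y')
numerator = -2 * 2**(y / 2) * sp.sin(sp.pi * y / 4)

# my hand-computed derivative, copied from the line above
hand = (-2 * 2**(y / 2) * sp.log(2) * sp.Rational(1, 2) * sp.sin(sp.pi * y / 4)
        + -2 * 2**(y / 2) * sp.cos(sp.pi * y / 4) * sp.pi / 4)

print(sp.simplify(sp.diff(numerator, y) - hand))   # prints 0 if they agree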

Yeah, this is the simpler version of the thing I was trying to figure out last time. Because this is what’s left if I write the derivative of the numerator over the derivative of the denominator:

\displaystyle  \lim_{y \to 0^+} \frac{ -2 \cdot 2^{\frac{1}{2}y} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} y\right) + -2 \cdot 2^{\frac{1}{2}y} \cdot \cos\left(\frac{\pi}{4} y\right) \cdot \frac{\pi}{4} }{1}

And now this is easy. Promise. There are no expressions of ‘y’ divided by other expressions of ‘y’, or anything else tricky like that. There’s just a bunch of ordinary functions, all of them defined for when ‘y’ is zero. If this limit exists, it’s got to be equal to:

\displaystyle  -2 \cdot 2^{\frac{1}{2} 0} \cdot \log(2) \cdot \frac{1}{2} \cdot \sin\left(\frac{\pi}{4} \cdot 0\right) + -2 \cdot 2^{\frac{1}{2} 0 } \cdot \cos\left(\frac{\pi}{4} \cdot 0\right) \cdot \frac{\pi}{4}

\frac{\pi}{4} \cdot 0 is 0. And the sine of 0 is 0. The cosine of 0 is 1. So all this gets to be a lot simpler, really fast.

\displaystyle  -2 \cdot 2^{0} \cdot \log(2) \cdot \frac{1}{2} \cdot 0 + -2 \cdot 2^{ 0 } \cdot 1 \cdot \frac{\pi}{4}

And 2^0 is equal to 1. So the part to the left of the + sign there is all zero. What remains is:

\displaystyle   0 + -2 \cdot \frac{\pi}{4}

And so, finally, we have it. Wronski’s formula, as best I make it out, is a function whose value is …

-\frac{\pi}{2}

… So, what Wronski had been looking for, originally, was π. This is … oh, so very close to right. I mean, there’s π right there, it’s just multiplied by an unwanted -\frac{1}{2} . The question is, where’s the mistake? Was Wronski wrong to start with? Did I parse him wrongly? Is it possible that the book I copied Wronski’s formula from made a mistake?

Could be any of them. I’d particularly suspect I parsed him wrongly. I returned the library book I had got the original claim from, and I can’t find it again before this is set to publish. But I should check whether Wronski was thinking to find π, the ratio of the circumference to the diameter of a circle. Or might he have looked to find the ratio of the circumference to the radius of a circle? Either is an interesting number worth finding. We’ve settled on the circumference-over-diameter as valuable, likely for practical reasons. It’s much easier to measure the diameter than the radius of a thing. (Yes, I have read the Tau Manifesto. No, I am not impressed by it.) But if you know 2π, then you know π, or vice-versa.

The next question: yeah, but I turned up -½π. What am I talking about 2π for? And the answer there is, I’m not the first person to try working out Wronski’s stuff. You can try putting the expression, as best you parse it, into a tool like Mathematica and see what makes sense. Or you can read, for example, Quora commenters giving answers with way less exposition than I do. And I’m convinced: somewhere along the line I messed up. Not in an important way, but, essentially, doing something equivalent to dividing by -2 when I should have multiplied by it.
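If you’d rather run that check without Mathematica, sympy will do. A sketch, using the expression exactly as I parsed it; it confirms that the calculus from ‘f(x)’ onward really gives -½π, so whatever slip I made has to be upstream, in the parsing:

import sympy as sp

x = sp.symbols('x', positive=True)

# my parsing of Wronski's formula, in the limit-of-f(x) form above
f = -2 * x * 2**(sp.Rational(1, 2) / x) * sp.sin(sp.pi / 4 / x)

print(sp.limit(f, x, sp.oo))   # -pi/2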

I’ve spotted my mistake. I figure to come back around to explaining where it is and how I made it.

Wronski’s Formula For Pi: Two Weird Tricks For Limits That Mathematicians Keep Using


Previously:


So now a bit more on Józef Maria Hoëne-Wronski’s attempted definition of π. I had got it rewritten to this form:

\displaystyle  \lim_{x \to \infty} f(x) = \lim_{x \to \infty} -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

And I’d tried the first thing mathematicians do when trying to evaluate the limit of a function at a point. That is, take the value of that point and put it in whatever the formula is. If that formula evaluates to something meaningful, then that value is the limit. That attempt gave this:

-2 \cdot \infty \cdot 1 \cdot 0

Because the limit of ‘x’, for ‘x’ at ∞, is infinitely large. The limit of ‘2^{\frac{1}{2}\cdot\frac{1}{x}}’ for ‘x’ at ∞ is 1. The limit of ‘\sin(\frac{\pi}{4}\cdot\frac{1}{x})’ for ‘x’ at ∞ is 0. We can take limits that are 0, or limits that are some finite number, or limits that are infinitely large. But multiplying a zero times an infinity is dangerous. Could be anything.

Mathematicians have a tool. We know it as L’Hôpital’s Rule. It’s named for the French mathematician Guillaume de l’Hôpital, who discovered it in the works of his tutor, Johann Bernoulli. (They had a contract giving l’Hôpital publication rights. If Wikipedia’s right, the preface of the book credited Bernoulli, although it doesn’t appear to have been specifically for this. The full story is more complicated and ambiguous. The previous sentence may be said about most things.)

So here’s the first trick. Suppose you’re finding the limit of something that you can write as the quotient of one function divided by another. So, something that looks like this:

\displaystyle  \lim_{x \to a} \frac{h(x)}{g(x)}

(Normally, this gets presented as ‘f(x)’ divided by ‘g(x)’. But I’m already using ‘f(x)’ for another function and I don’t want to muddle what that means.)

Suppose it turns out that at ‘a’, both ‘h(x)’ and ‘g(x)’ are zero, or both ‘h(x)’ and ‘g(x)’ are ∞. Zero divided by zero, or ∞ divided by ∞, looks like danger. It’s not necessarily so, though. If the limit of the quotient of the first derivatives of ‘h’ and ‘g’ exists, then it’s also the limit we wanted. That is, we evaluate:

\displaystyle  \lim_{x \to a} \frac{h'(x)}{g'(x)}

That ‘ mark is a common shorthand for “the first derivative of this function, with respect to the only variable we have around here”.

This doesn’t look like it should help matters. Often it does, though. There’s an excellent chance that ‘h'(x)’ and ‘g'(x)’ won’t both be zero, or both be ∞, at ‘a’. And once that’s so, we’ve got a meaningful limit. This doesn’t always work. Sometimes we have to use this L’Hôpital’s Rule trick a second time, or a third, or so on. But it works so very often for the kinds of problems we like to do. It reaches the point that if it doesn’t work, we have to suspect we’re calculating the wrong thing.
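If you’d like to see the rule do its job once, without the Wronski baggage, here’s a tiny sympy sketch for the classic zero-divided-by-zero case of \frac{\sin(x)}{x} at 0. The example is mine, not Wronski’s:

import sympy as sp

x = sp.symbols('x')
h, g = sp.sin(x), x

# both pieces vanish at 0: the dangerous 0/0 form
print(h.subs(x, 0), g.subs(x, 0))                    # 0 0

# but the quotient of first derivatives evaluates cleanly
print((sp.diff(h, x) / sp.diff(g, x)).subs(x, 0))    # cos(0)/1, which is 1

So the limit of \frac{\sin(x)}{x} at 0 is 1.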

But wait, you protest, reasonably. This is fine for problems where the limit looks like 0 divided by 0, or ∞ divided by ∞. What Wronski’s formula got me was 0 times 1 times ∞. And I won’t lie: I’m a little unsettled by having that 1 there. I feel like multiplying by 1 shouldn’t be a problem, but I have doubts.

That zero times ∞ thing, though? That’s easy. Here’s the second trick. Let me put it this way: isn’t ‘x’ really the same thing as \frac{1}{ 1 / x } ?

I expect your answer is to slam your hand down on the table and glare at my writing with contempt. So be it. I told you it was a trick.

And it’s a perfectly good one. And it’s perfectly legitimate, too. \frac{1}{x} is a meaningful number if ‘x’ is any finite number other than zero. So is \frac{1}{ 1 / x } . Mathematicians accept a definition of limit that doesn’t really depend on the value of your expression at a point. So that \frac{1}{x} wouldn’t be meaningful for ‘x’ at zero doesn’t mean we can’t evaluate its limit for ‘x’ at zero. And just because we might not be sure what \frac{1}{x} would mean for infinitely large ‘x’ doesn’t mean we can’t evaluate its limit for ‘x’ at ∞.

I see you, person who figures you’ve caught me. The first thing I tried was putting the value ∞ in for ‘x’, all ready to declare that this was the limit of ‘f(x)’. I know my caveats, though. Plugging the value you want the limit at into the function whose limit you’re evaluating is a shortcut. If you get something meaningful, then that’s the same answer you would get finding the limit properly. Which is done by looking at the neighborhood around but not at that point. So that’s why this reciprocal-of-the-reciprocal trick works.

So back to my function, which looks like this:

\displaystyle  f(x) = -2 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

Do I want to replace ‘x’ with \frac{1}{1 / x} , or do I want to replace \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right) with \frac{1}{1 / \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)} ? I was going to say something about how many times in my life I’ve been glad to take the reciprocal of the sine of an expression of x. But just writing the symbols out like that makes the case better than being witty would.

So here is a new, L’Hôpital’s Rule-friendly, version of my version of Wronski’s formula:

\displaystyle f(x) = -2 \frac{2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)}{\frac{1}{x}}

I put that -2 out in front because it’s not really important. The limit of a constant number times some function is the same as that constant number times the limit of that function. We can put that off to the side, work on other stuff, and hope that we remember to bring it back in later. I manage to remember it about four-fifths of the time.

So these are the numerator and denominator functions I was calling ‘h(x)’ and ‘g(x)’ before:

h(x) = 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)

g(x) = \frac{1}{x}
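These should both go to zero at ∞, which a quick sympy sketch can confirm before we commit to any differentiating (names matching the ‘h’ and ‘g’ above):

import sympy as sp

x = sp.symbols('x', positive=True)
h = 2**(sp.Rational(1, 2) / x) * sp.sin(sp.pi / 4 / x)
g = 1 / x

# both limits should be 0, the form L'Hopital's Rule wants
print(sp.limit(h, x, sp.oo), sp.limit(g, x, sp.oo))   # 0 0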

The limit of both of these at ∞ is 0, just as we might hope. So we take the first derivatives. That for ‘g(x)’ is easy. Anyone who’s reached week three in Intro Calculus can do it. This may only be because she’s gotten bored and leafed through the formulas on the inside front cover of the textbook. But she can do it. It’s:

g'(x) = -\frac{1}{x^2}

The derivative for ‘h(x)’ is a little more involved. ‘h(x)’ we can write as the product of two expressions, that 2^{\frac{1}{2}\cdot \frac{1}{x}} and that \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right) . And each of those expressions contains within itself another expression, that \frac{1}{x} . So this is going to require the Product Rule, of two expressions that each require the Chain Rule.

This is as far as I got with that before slamming my hand down on the table and glaring at the problem with disgust:

h'(x) = 2^{\frac{1}{2}\frac{1}{x}} \cdot \log(2) \cdot \frac{1}{2} \cdot (-1) \cdot \frac{1}{x^2} \cdot \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right) + 2^{\frac{1}{2}\frac{1}{x}} \cdot \cos( arg ) bleah

Yeah I’m not finishing that. Too much work. I’m going to reluctantly try thinking instead.

(If you want to do that work: it actually isn’t much more past there, and if you followed that first half you’re going to be fine. And you’ll see an echo of it in what I do next time.)
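Or, if you’d rather make a machine grind out the rest, sympy will push through the whole thing. This sketch is my shortcut, not the route I take next time; it finishes the derivative quotient and the limit in one go:

import sympy as sp

x = sp.symbols('x', positive=True)
h = 2**(sp.Rational(1, 2) / x) * sp.sin(sp.pi / 4 / x)
g = 1 / x

# the quotient of first derivatives that L'Hopital's Rule asks for,
# with the -2 restored from out front
quotient = sp.diff(h, x) / sp.diff(g, x)
print(sp.limit(-2 * quotient, x, sp.oo))

It prints -pi/2, matching the change-of-variables computation above.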

L’Hôpital’s Rule Without End: Is That A Thing?


I was helping a friend learn L’Hôpital’s Rule. This is a Freshman Calculus thing. (A different friend from last week, it happens. Folks are going back to school, I suppose.) The friend asked about a point I thought shouldn’t come up. I’m certain it won’t come up in the exam my friend was worried about, but I couldn’t swear it wouldn’t happen at all. So this is mostly a note to myself, to think it over and figure out whether the trouble could come up. Also, this won’t be my most accessible post; I’m sorry for that, for folks who aren’t calculus-familiar.

L’Hôpital’s Rule is a way of evaluating the limit of one function divided by another, of f(x) divided by g(x). If the limit of \frac{f(x)}{g(x)} has the form \frac{0}{0} or \frac{\infty}{\infty}, then you’re not stuck. You can take the first derivative of the numerator and the denominator separately. The limit of \frac{f'(x)}{g'(x)}, if it exists, will be the same value.

But it’s possible to have to do this several times over. I used the example of finding the limit, as x grows infinitely large, where f(x) = x^2 and g(x) = e^x. \frac{x^2}{e^x} goes to \frac{\infty}{\infty} as x grows infinitely large. The first derivatives, \frac{2x}{e^x} , also go to \frac{\infty}{\infty} . You have to repeat the process again, taking the first derivatives of numerator and denominator again. \frac{2}{e^x} finally goes to 0 as x gets infinitely large. You might have to do this a bunch of times. If f(x) were x^7 and g(x) again e^x you’d properly need to do this seven times over. With experience you figure out you can skip some steps. Of course students don’t have the experience to know they can skip ahead to the punch line there, but that’s what the practice in homework is for.
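That repetition is mechanical enough that you can hand it to a computer. Here’s a sympy sketch of the \frac{x^2}{e^x} example, blindly applying the rule until the ∞-over-∞ form breaks:

import sympy as sp

x = sp.symbols('x', positive=True)
num, den = x**2, sp.exp(x)

# keep differentiating top and bottom while both still blow up
while sp.limit(num, x, sp.oo) == sp.oo and sp.limit(den, x, sp.oo) == sp.oo:
    num, den = sp.diff(num, x), sp.diff(den, x)
    print(num, '/', den)          # 2*x / exp(x), then 2 / exp(x)

print(sp.limit(num / den, x, sp.oo))   # 0

Two rounds of derivatives, just as above, and then the limit drops out.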

Anyway, my friend asked whether it’s possible to get a pattern that always ends up with \frac{0}{0} or \frac{\infty}{\infty} and never breaks out of this. And that’s what’s got me stuck. I can think of a few patterns that would. Start out, for example, with f(x) = e^{3x} and g(x) = e^{2x}. Properly speaking, that would never end. You’d get an infinity-over-infinity pattern with every derivative you took. Similarly, if you started with f(x) = \frac{1}{x} and g(x) = e^{-x} you’d never come to an end. As x got infinitely large, both f(x) and g(x) would go to zero, and so would all their derivatives, over and over and over and over again.

But those are special cases. Anyone looking at what they were doing instead of just calculating would look at, say, \frac{e^{3x}}{e^{2x}} and realize that’s the same as e^x , which falls out of the L’Hôpital’s Rule formulas. Or \frac{\frac{1}{x}}{e^{-x}} would be the same as \frac{e^x}{x} , which is an infinity-over-infinity form. But it takes only one derivative to break out of the infinity-over-infinity pattern.
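My friend’s worry, in code: here’s what the blind calculation looks like for \frac{e^{3x}}{e^{2x}}, next to the one step of thinking that dissolves the problem. A sketch:

import sympy as sp

x = sp.symbols('x', positive=True)
num, den = sp.exp(3 * x), sp.exp(2 * x)

# blind L'Hopital: every round gives another infinity-over-infinity form
for _ in range(3):
    num, den = sp.diff(num, x), sp.diff(den, x)
    print(num, '/', den)    # 3*exp(3*x) / 2*exp(2*x), and so on, forever

# thinking first: simplify the quotient instead, and the rule isn't needed
print(sp.simplify(sp.exp(3 * x) / sp.exp(2 * x)))   # exp(x)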

So I can construct examples that never break out of a zero-over-zero or an infinity-over-infinity pattern if you calculate without thinking. And calculating without thinking is a common problem students have. Arguably it’s the biggest problem mathematics students have. But what I wonder is, are there ratios that end up in an endless zero-over-zero or infinity-over-infinity pattern even if you do think it out?

And thus this note; I’d like to nag myself into thinking about that.
