My All 2020 Mathematics A to Z: Exponential

GoldenOj suggested the exponential as a topic. It seemed like a good important topic, but one that was already well-explored by other people. Then I realized I could spend time thinking about something which had bothered me.

In here I write about “the” exponential, which is a bit like writing about “the” multiplication. We can talk about $2^3$ and $10^2$ and many other such exponential functions. One secret of algebra, not appreciated until calculus (or later), is that all these different functions are a single family. Understanding one exponential function lets you understand them all. Mathematicians pick one, the exponential with base e, because we find that convenient. e itself isn’t a convenient number — it’s a bit over 2.718 — but it has some wonderful properties. When I write “the exponential” here, I mean this one function, $e^{t}$.

This piece will have a bit more mathematics, as in equations, than usual. If you like me writing about mathematics more than reading equations, you’re hardly alone. I recommend letting your eyes drop to the next sentence, or at least the next sentence that makes sense. You should be fine.

Exponential.

My professor for real analysis, in grad school, gave us one of those brilliant projects. Starting from the definition of the logarithm, as an integral, prove at least thirty things. They could be as trivial as “the log of 1 is 0”. They could be as subtle as how to calculate the log of one number in a different base. It was a great project for testing what we knew about why calculus works.

And it gives me the structure to write about the exponential function. Anyone reading a pop-mathematics blog about exponentials knows them. They’re these functions that, as the independent variable grows, grow ever-faster. Or that decay asymptotically to zero. Some readers know that, if the independent variable is an imaginary number, the exponential is a complex number too. As the independent variable grows, becoming a bigger imaginary number, the exponential doesn’t grow. It oscillates, a sine wave.

That’s weird. I’d like to see why that makes sense.

To say “why” this makes sense is doomed. It’s like explaining “why” 36 is divisible by three and six and nine but not eight. It follows from what the words we have mean. The “why” I’ll offer is reasons why this strange behavior is plausible. It’ll be a mix of deductive reasoning and heuristics. This is a common blend when trying to understand why a result happens, or why we should accept it.

I’ll start with the definition of the logarithm, as used in real analysis. The natural logarithm, if you’re curious. It has a lot of nice properties. You can use this to prove over thirty things. Here it is:

$\log\left(x\right) = \int_{1}^{x} \frac{1}{s} ds$

The “s” is a dummy variable. It never appears outside the integral.
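If you want to see this definition in action, here’s a quick numerical check; the midpoint-rule integration and the step count are my own choices, nothing special:

```python
import math

def log_by_integral(x, n=100_000):
    """Approximate log(x) as the integral from 1 to x of 1/s ds (midpoint rule)."""
    h = (x - 1) / n
    return sum(h / (1 + (i + 0.5) * h) for i in range(n))

print(log_by_integral(2.0))  # close to math.log(2), about 0.693147
print(log_by_integral(1.0))  # the log of 1 is 0
```

That second line is the “trivial” item from the class project: with x = 1 the integral runs over no distance at all.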

So now let me summon into existence a new function. I want to call it g. This is because I’ve worked this out before and I want to label something else as f. There is something coming ahead that’s a bit of a syntactic mess. This is the best way around it that I can find.

$g(x) = \frac{1}{c} \int_{1}^{x} \frac{1}{s} ds$

Here, ‘c’ is a constant. It might be real. It might be imaginary. It might be complex. I’m using ‘c’ rather than ‘a’ or ‘b’ so that I can later on play with possibilities.

So the alert reader noticed that g(x) here means “take the logarithm of x, and divide it by a constant”. So it does. I’ll need two things built off of g(x), though. The first is its derivative. That’s taken with respect to x, the only variable. Finding the derivative of an integral sounds intimidating but, happy to say, we have a theorem to make this easy. It’s the Fundamental Theorem of Calculus, and it tells us:

$g'(x) = \frac{1}{c}\cdot\frac{1}{x}$

We can use the ‘ to denote “first derivative” if a function has only one variable. Saves time to write and is easier to type.

The other thing that I need, and the thing I really want, is the inverse of g. I’m going to call this function f(t). A more common notation would be to write $g^{-1}(t)$ but we already have $g'(x)$ in the works here. There is a limit to how many little one-stroke superscripts we need above g. This is the tradeoff to using ‘ for first derivatives. But here’s the important thing:

$x = f(t) = g^{-1}(t)$

Here, we have some extratextual information. We know the inverse of a logarithm is an exponential. We even have a standard notation for that. We’d write

$x = f(t) = e^{ct}$

in any context besides this essay as I’ve set it up.

What I would like to know next is: what is the derivative of f(t)? This sounds impossible to know, if we’re thinking of “the inverse of this integration”. It’s not. We have the Inverse Function Theorem to come to our aid. We encounter the Inverse Function Theorem briefly, in freshman calculus. There we use it to do as many as two problems and then hide away forever from the Inverse Function Theorem. (This is why it’s not mentioned in my quick little guide to how to take derivatives.) It reappears in real analysis for this sort of contingency. The Inverse Function Theorem tells us, if f is the inverse of g, that:

$f'(t) = \frac{1}{g'(f(t))}$

That g'(f(t)) means, use the rule for g'(x), with f(t) substituted in place of ‘x’. And now we see something magic:

$f'(t) = \frac{1}{\frac{1}{c}\cdot\frac{1}{f(t)}}$

$f'(t) = c\cdot f(t)$
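You can spot-check this property with a centered finite difference; the values of c and t below are arbitrary picks of mine:

```python
import cmath

def derivative_gap(c, t, h=1e-6):
    """How far a finite-difference derivative of f(t) = e^{ct} is from c*f(t)."""
    f = lambda s: cmath.exp(c * s)
    numeric = (f(t + h) - f(t - h)) / (2 * h)
    return abs(numeric - c * f(t))

print(derivative_gap(2.0, 0.5))      # tiny: real c
print(derivative_gap(3j, 0.5))       # tiny: imaginary c
print(derivative_gap(-1 + 2j, 0.5))  # tiny: complex c
```

The same check passes whether c is real, imaginary, or complex, which is why it was worth leaving c unspecified.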

And that is the wonderful thing about the exponential. Its derivative is a constant times its original value. That alone would make the exponential one of mathematics’ favorite functions. It allows us, for example, to transform differential equations into polynomials. (If you want everlasting fame, albeit among mathematicians, invent a new way to turn differential equations into polynomials.) Because we could turn, say,

$f'''(t) - 3f''(t) + 3f'(t) - f(t) = 0$

into

$c^3 e^{ct} - 3c^2 e^{ct} + 3c e^{ct} - e^{ct} = 0$

and then

$\left(c^3 - 3c^2 + 3c - 1\right) e^{ct} = 0$

by supposing that f(t) has to be $e^{ct}$ for the correct value of c. Then all you need do is find a value of ‘c’ that makes that last equation true.
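For this particular equation the search is short; here’s the arithmetic, as a sketch:

```python
import math

# The polynomial c^3 - 3c^2 + 3c - 1 factors as (c - 1)^3, so c = 1 works
p = lambda c: c**3 - 3 * c**2 + 3 * c - 1
print(p(1.0))  # 0.0

# With c = 1, f(t) = e^t, and every derivative of e^t is e^t again,
# so f''' - 3 f'' + 3 f' - f = (1 - 3 + 3 - 1) e^t = 0
t = 0.7  # arbitrary sample point
f = math.exp(t)
print(f - 3 * f + 3 * f - f)  # zero, up to rounding
```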

Supposing that the answer has this convenient form may remind you of searching for the lost keys over here where the light is better. But we find so many keys in this good light. If you carry on in mathematics you will never stop seeing this trick, although it may be disguised.

In part because it’s so easy to work with. In part because exponentials like this cover so much of what we might like to do. Let’s go back to looking at the derivative of the exponential function.

$f'(t) = c\cdot f(t)$

There are many ways to understand what a derivative is. One compelling way is to think of it as the rate of change. If you make a tiny change in t, how big is the change in f(t)? So what is the rate of change here?

We can pose this as a pretend-physics problem. This lets us use our physical intuition to understand things. This also is the transition between careful reasoning and ad-hoc arguments. Imagine a particle that, at time ‘t’, is at the position $x = f(t)$. What is its velocity? That’s the first derivative of its position, so, $x' = f'(t) = c\cdot f(t)$.

If we are using our physics intuition to understand this it helps to go all the way. Where is the particle? Can we plot that? … Sure. We’re used to matching real numbers with points on a number line. Go ahead and do that. Not to give away spoilers, but we will want to think about complex numbers too. Mathematicians are used to matching complex numbers with points on the Cartesian plane, though. The real part of the complex number matches the horizontal coordinate. The imaginary part matches the vertical coordinate.

So how is this particle moving?

To say for sure we need some value of t. All right. Pick your favorite number. That’s our t. f(t) follows from whatever your t was. What’s interesting is that the change also depends on c. There’s a couple possibilities. Let me go through them.

First, what if c is zero? Well, then the definition of g(x) was gibberish and we can’t have that. All right.

What if c is a positive real number? Well, then, f'(t) is some positive multiple of whatever f(t) was. The change is “away from zero”. The particle will push away from the origin. As t increases, f(t) increases, so it pushes away faster and faster. This is exponential growth.

What if c is a negative real number? Well, then, f'(t) is some negative multiple of whatever f(t) was. The change is “towards zero”. The particle pulls toward the origin. But the closer it gets the more slowly it approaches. If t is large enough, f(t) will be so tiny that $c\cdot f(t)$ is too small to notice. The motion declines into imperceptibility.

What if c is an imaginary number, though?

So let’s suppose that c is equal to some real number b times $\imath$, where $\imath^2 = -1$.

I need some way to describe what value f(t) has, for whatever your pick of t was. Let me say it’s equal to $\alpha + \beta\imath$, where $\alpha$ and $\beta$ are some real numbers whose value I don’t care about. What’s important here is that $f(t) = \alpha + \beta\imath$.

And, then, what’s the first derivative? The magnitude and direction of motion? That’s easy to calculate; it’ll be $\imath b f(t) = -b\beta + b\alpha\imath$. This is an interesting complex number. Do you see what’s interesting about it? I’ll get there next paragraph.

So f(t) matches some point on the Cartesian plane. And we can plot whatever complex number f'(t) is as another point on the plane; it shows the direction our particle moves with a small change in t. The line segment connecting the origin to f(t) is perpendicular to the one connecting the origin to f'(t). The ‘motion’ of this particle is perpendicular to its position. And it always is. There’s several ways to show this. An easy one is to just pick some values for $\alpha$ and $\beta$ and b and try it out. This proof is not rigorous, but it is quick and convincing.
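Picking some values and trying it out, as suggested, looks like this; the numbers are arbitrary:

```python
def dot(z, w):
    """Dot product of two complex numbers viewed as plane vectors."""
    return z.real * w.real + z.imag * w.imag

b = 3.0                   # some real multiplier b, with c = i*b
f_val = 2.0 + 5.0j        # a pretend value of f(t): alpha = 2, beta = 5
f_prime = 1j * b * f_val  # the derivative, c*f(t)
print(f_prime)              # (-15+6j): minus b*beta, plus b*alpha times i
print(dot(f_val, f_prime))  # 0.0: position and motion are perpendicular
```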

If your direction of motion is always perpendicular to your position, then what you’re doing is moving in a circle around the origin. This we pick up in physics, but it applies to the pretend-particle moving here. The exponentials of $\imath t$ and $2 \imath t$ and $-40 \imath t$ will all be points on a locus that’s a circle centered on the origin. The values will look like the cosine of an angle plus $\imath$ times the sine of an angle.

And there, I think, we finally get some justification for the exponential of an imaginary number being a complex number. And for why exponentials might have anything to do with cosines and sines.

You might ask what if c is a complex number, if it’s equal to $a + b\imath$ for some real numbers a and b. In this case, you get spirals as t changes. If a is positive, you get points spiralling outward as t increases. If a is negative, you get points spiralling inward toward zero as t increases. If b is positive the spirals go counterclockwise. If b is negative the spirals go clockwise. $e^{(a + \imath b) t}$ is the same as $e^{at} \cdot e^{\imath b t}$.
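A few sample values show both the spiral and the factoring into growth-times-rotation; a and b here are my arbitrary picks:

```python
import cmath, math

a, b = -0.5, 2.0  # a < 0: inward spiral; b > 0: counterclockwise

for t in [0.0, 1.0, 2.0, 3.0]:
    z = cmath.exp((a + b * 1j) * t)
    # the distance from the origin is e^{at}, shrinking as t grows
    print(t, abs(z), math.exp(a * t))
    # and the exponential of the sum is the product of the exponentials
    assert abs(z - math.exp(a * t) * cmath.exp(1j * b * t)) < 1e-12
```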

This does depend on knowing the exponential of a sum of terms, such as of $a + \imath b$, is equal to the product of the exponential of those terms. This is a good thing to have in your portfolio. If I remember right, it comes in around the 25th thing. It’s an easy result to have if you already showed something about the logarithms of products.

Thank you for reading. I have this and all my A-to-Z topics for the year at this link. All my essays for this and past A-to-Z sequences are at this link. And I am still interested in topics to discuss in the coming weeks. Take care, please.

I Don’t Have Any Good Ideas For Finding Cube Roots By Trigonometry

So I did a bit of thinking. There’s a prosthaphaeretic rule that lets you calculate square roots using nothing more than trigonometric functions. Is there one that lets you calculate cube roots?

And I don’t know. I don’t see where there is one. I may be overlooking an approach, though. Let me outline what I’ve thought out.

First is square roots. It’s possible to find the square root of a number between 0 and 1 using arc-cosine and cosine functions. This is done by using a trigonometric identity called the double-angle formula. This formula, normally, you use if you know the cosine of a particular angle named θ and want the cosine of double that angle:

$\cos\left(2\theta\right) = 2 \cos^2\left(\theta\right) - 1$

If we suppose the number whose square root we want is $\cos^2\left(\theta\right)$ then we can find $\cos\left(\theta\right)$. The calculation on the right-hand side of this is easy; double your number and subtract one. Then to the lookup table; find the angle whose cosine is that number. That angle is two times θ. So divide that angle in two. Cosine of that is, well, $\cos\left(\theta\right)$ and most people would agree that’s a square root of $\cos^2\left(\theta\right)$ without any further work.
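The whole procedure is short enough to write out; a sketch, with the function name my own:

```python
import math

def sqrt_by_cosines(s):
    """Square root of s in (0, 1] via the double-angle identity:
    if s = cos^2(theta), then cos(2*theta) = 2s - 1."""
    two_theta = math.acos(2 * s - 1)  # the lookup-table step
    return math.cos(two_theta / 2)    # cos(theta), the square root

print(sqrt_by_cosines(0.25))  # 0.5, up to rounding
```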

Why can’t I do the same thing with a triple-angle formula? … Well, here’s my choices among the normal trig functions:

$\cos\left(3\theta\right) = 4 \cos^3\left(\theta\right) - 3\cos\left(\theta\right)$

$\sin\left(3\theta\right) = 3 \sin\left(\theta\right) - 4\sin^3\left(\theta\right)$

$\tan\left(3\theta\right) = \frac{3 \tan\left(\theta\right) - \tan^3\left(\theta\right)}{1 - 3 \tan^2\left(\theta\right)}$

Yes, I see you in the corner, hopping up and down and asking about the cosecant. It’s not any better. Trust me.

So you see the problem here. The number whose cube root I want has to be the $\cos^3\left(\theta\right)$. Or the cube of the sine of theta, or the cube of the tangent of theta. Whatever. The trouble is I don’t see a way to calculate cosine (sine, tangent) of 3θ, or 3 times the cosine (etc) of θ. Nor to get some other simple expression out of that. I can get mixtures of the cosine of 3θ plus the cosine of θ, sure. But that doesn’t help me figure out what θ is.

Can it be worked out? Oh, sure, yes. There’s absolutely approximation schemes that would let me find a value of θ which makes true, say,

$4 \cos^3\left(\theta\right) - 3 \cos\left(\theta\right) = 0.5$

But: is there a way that takes less work than some ordinary method of calculating a cube root? Even if you allow some work to be done by someone else ahead of time, such as by computing a table of trig functions? … If there is, I don’t see it. So there’s another point in favor of logarithms. Finding a cube root using a logarithm table is no harder than finding a square root, or any other root.
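Here is the sort of approximation scheme I mean, a bisection search, as a sketch; notice how much arithmetic it takes compared to two table lookups:

```python
import math

def solve_triple_angle(target, tol=1e-12):
    """Find theta in [0, pi/3] with 4*cos^3(theta) - 3*cos(theta) = target.
    The left side equals cos(3*theta), which decreases on this interval,
    so bisection applies."""
    f = lambda t: 4 * math.cos(t) ** 3 - 3 * math.cos(t) - target
    lo, hi = 0.0, math.pi / 3
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid  # f is decreasing, so the root lies to the right
        else:
            hi = mid
    return (lo + hi) / 2

theta = solve_triple_angle(0.5)
print(theta)  # pi/9, since cos(3*theta) = 0.5 means 3*theta = pi/3
```

Each step costs cosine evaluations and multiplications, and it takes dozens of steps to converge. That’s nothing like the square-root trick’s two lookups.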

If you’re using trig tables, you can find a square root, or a fourth root, or an eighth root. Cube roots, if I’m not missing something, are beyond us. So are, I imagine, fifth roots and sixth roots and seventh roots and so on. I could protest that I have never in my life cared what the seventh root of a thing is, but it would sound like a declaration of sour grapes. Too bad.

If I have missed something, it’s probably obvious. Please go ahead and tell me what it is.

How To Calculate A Square Root By A Method You Will Never Actually Use

Sunday’s comics post got me thinking about ways to calculate square roots besides using the square root function on a calculator. I wondered if I could find my own little approach. Maybe something that isn’t iterative. Iterative methods are great in that they tend to forgive numerical errors. All numerical calculations carry errors with them. But they can involve a lot of calculation and, in principle, never finish. You just give up when you think the answer is good enough. A non-iterative method carries the promise that things will, someday, end.

And I found one! It’s a neat little way to find the square root of a number between 0 and 1. Call the number ‘S’, as in square. I’ll give you the square root from it. Here’s how.

First, take S. Multiply S by two. Then subtract 1 from this.

Next. Find the angle — I shall call it 2A — whose cosine is this number 2S – 1.

You have 2A? Great. Divide that in two, so that you get the angle A.

Now take the cosine of A. This will be the (positive) square root of S. (You can find the negative square root by taking minus this.)

Let me show it in action. Let’s say you want the square root of 0.25. So let S = 0.25. And then 2S – 1 is two times 0.25 (which is 0.50) minus 1. That’s -0.50. What angle has cosine of -0.50? Well, that’s an angle of 2 π / 3 radians. Mathematicians think in radians. People think in degrees. And you can do that too. This is 120 degrees. Divide this by two. That’s an angle of π / 3 radians, or 60 degrees. The cosine of π / 3 is 0.5. And, indeed, 0.5 is the square root of 0.25.

I hear you protesting already: what if we want the square root of something larger than 1? Like, how is this any good in finding the square root of 81? Well, if we add a little step before and after this work, we’re in good shape. Here’s what.

So we start with some number larger than 1. Say, 81. Fine. Divide it by 100. If it’s still larger than 1, divide it again, and again, until you get a number smaller than 1. Keep track of how many times you did this. In this case, 81 just has to be divided by 100 the one time. That gives us 0.81, a number which is smaller than 1.

Twice 0.81 minus 1 is equal to 0.62. The angle which has 0.62 as cosine is roughly 0.90205. Half this angle is about 0.45103. And the cosine of 0.45103 is 0.9. This is looking good, but obviously 0.9 is no square root of 81.

Ah, but? We divided 81 by 100 to get it smaller than 1. So we balance that by multiplying 0.9 by 10 to get it back larger than 1. If we had divided by 100 twice to start with, we’d multiply by 10 twice to finish. If we had divided by 100 six times to start with, we’d multiply by 10 six times to finish. Yes, 10 is the square root of 100. You see what’s going on here.
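The recipe, scaling included, fits in a few lines; a sketch, with names of my own choosing:

```python
import math

def sqrt_by_trig_table(s):
    """Square root via cos(arccos(2s - 1) / 2), after scaling s into (0, 1]."""
    scale = 1.0
    while s > 1.0:
        s /= 100.0     # divide by 100 until the number is at most 1...
        scale *= 10.0  # ...and remember to multiply the answer by 10 each time
    return scale * math.cos(math.acos(2.0 * s - 1.0) / 2.0)

print(sqrt_by_trig_table(81.0))  # 9.0, up to rounding
print(sqrt_by_trig_table(0.25))  # 0.5, up to rounding
```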

(And if you want the square root of a tiny number, something smaller than 0.01, it’s not a bad idea to multiply it by 100, maybe several times over. Then calculate the square root, and divide the result by 10 a matching number of times. It’s hard to calculate with very big or with very small numbers. If you must calculate, do it on very medium numbers. This is one of those little things you learn in numerical mathematics.)

So maybe now you’re convinced this works. You may not be convinced of why this works. What I’m using here is a trigonometric identity, one of the angle-doubling formulas. Its heart is this identity. It’s familiar to students whose Intro to Trigonometry class is making them finally, irrecoverably hate mathematics:

$\cos\left(2\theta\right) = 2 \cos^2\left(\theta\right) - 1$

Here, I let ‘S’ be the squared number, $\cos^2\left(\theta\right)$. So then anything I do to find $\cos\left(\theta\right)$ gets me the square root. The algebra here is straightforward. Since ‘S’ is that cosine-squared thing, all I have to do is double it, subtract one, and then find what angle 2θ has that number as cosine. Then the cosine of θ has to be the square root.

Oh, yeah, all right. There’s an extra little objection. In what world is it easier to take an arc-cosine (to figure out what 2θ is) and then later to take a cosine? … And the answer is, well, any world where you’ve already got a table printed out of cosines of angles and don’t have a calculator on hand. This would be a common condition through to about 1975. And not all that ridiculous through to about 1990.

This is an example of a prosthaphaeretic rule. These are calculation tools. They’re used to convert multiplication or division problems into addition and subtraction. The idea is exactly like that of logarithms and exponents. Using trig functions predates logarithms. People knew about sines and cosines long before they knew about logarithms and exponentials. But the impulse is the same. And you might, if you squint, see in my little method here an echo of what you’d do more easily with a logarithm table. If you had a log table, you’d calculate $\exp\left(\frac{1}{2}\log\left(S\right)\right)$ instead. But if you don’t have a log table, and only have a table of cosines, you can calculate $\cos\left(\frac{1}{2}\arccos\left(2 S - 1 \right)\right)$ at least.

Is this easier than normal methods of finding square roots? … If you have a table of cosines, yes. Definitely. You have to scale the number into range (divide by 100 some), do an easy multiplication (S times 2), an easy subtraction (minus 1), a table lookup (arccosine), an easy division (divide by 2), another table lookup (cosine), and scale the number up again (multiply by 10 some). That’s all. Seven steps, and two of them are reading. Two of the rest are multiplying or dividing by 10’s. Using logarithm tables has it beat, yes, at five steps (two that are scaling, two that are reading, one that’s dividing by 2). But if you can’t find your table of logarithms, and do have a table of cosines, you’re set.

This may not be practical, since who has a table of cosines anymore? Who hasn’t also got a calculator that does square roots faster? But it delighted me to work this scheme out. Give me a while and maybe I’ll think about cube roots.

As I Try To Make Wronski’s Formula For Pi Into Something I Like

Previously:

I remain fascinated with Józef Maria Hoëne-Wronski’s attempted definition of π. It had started out like this:

$\pi = \frac{4\infty}{\sqrt{-1}}\left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} - \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}$

And I’d translated that into something that modern mathematicians would accept without flinching. That is to evaluate the limit of a function that looks like this:

$\displaystyle \lim_{x \to \infty} f(x)$

where

$f(x) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{x}} - \left(1 - \imath\right)^{\frac{1}{x}} \right\}$

So. I don’t want to deal with that f(x) as it’s written. I can make it better. One thing that bothers me is seeing the complex number $1 + \imath$ raised to a power. I’d like to work with something simpler than that. And I can’t see that number without also noticing that I’m subtracting from it $1 - \imath$ raised to the same power. $1 + \imath$ and $1 - \imath$ are a “conjugate pair”. It’s usually nice to see those. It often hints at ways to make your expression simpler. That’s one of those patterns you pick up from doing a lot of problems as a mathematics major, and that then look like magic to the lay audience.

Here’s the first way I figure to make my life simpler. It’s in rewriting that $1 + \imath$ and $1 - \imath$ stuff so it’s simpler. It’ll be simpler by using exponentials. Shut up, it will too. I get there through Gauss, Descartes, and Euler.

At least I think it was Gauss who pointed out how you can match complex-valued numbers with points on the two-dimensional plane. On a sheet of graph paper, if you like. The number $1 + \imath$ matches to the point with x-coordinate 1, y-coordinate 1. The number $1 - \imath$ matches to the point with x-coordinate 1, y-coordinate -1. Yes, yes, this doesn’t sound like much of an insight Gauss had, but his work goes on. I’m leaving it off here because that’s all that I need for right now.

So these two numbers that offended me I can think of as points. They have Cartesian coordinates (1, 1) and (1, -1). But there’s never only one coordinate system for something. There may be only one that’s good for the problem you’re doing. I mean that makes the problem easier to study. But there are always infinitely many choices. For points on a flat surface like a piece of paper, and where the points don’t represent any particular physics problem, there’s two good choices. One is the Cartesian coordinates. In it you refer to points by an origin, an x-axis, and a y-axis. How far is the point from the origin in a direction parallel to the x-axis? (And in which direction? This gives us a positive or a negative number) How far is the point from the origin in a direction parallel to the y-axis? (And in which direction? Same positive or negative thing.)

The other good choice is polar coordinates. For that we need an origin and a positive x-axis. We refer to points by how far they are from the origin, heedless of direction. And then to get direction, what angle the line segment connecting the point with the origin makes with the positive x-axis. The first of these numbers, the distance, we normally label ‘r’ unless there’s compelling reason otherwise. The other we label ‘θ’. ‘r’ is always going to be a positive number or, possibly, zero. ‘θ’ might be any number, positive or negative. By convention, we measure angles so that positive numbers are counterclockwise from the x-axis. I don’t know why. I guess it seemed less weird for, say, the point with Cartesian coordinates (0, 1) to have a positive angle rather than a negative angle. That angle would be $\frac{\pi}{2}$, because mathematicians like radians more than degrees. They make other work easier.

So. The point $1 + \imath$ corresponds to the polar coordinates $r = \sqrt{2}$ and $\theta = \frac{\pi}{4}$. The point $1 - \imath$ corresponds to the polar coordinates $r = \sqrt{2}$ and $\theta = -\frac{\pi}{4}$. Yes, the θ coordinates being negative one times each other is common in conjugate pairs. Also, if you have doubts about my use of the word “the” before “polar coordinates”, well-spotted. If you’re not sure about that thing where ‘r’ is not negative, again, well-spotted. I intend to come back to that.

With the polar coordinates ‘r’ and ‘θ’ to describe a point I can go back to complex numbers. I can match the point to the complex number with the value given by $r e^{\imath\theta}$, where ‘e’ is that old 2.71828something number. Superficially, this looks like a big dumb waste of time. I had some problem with imaginary numbers raised to powers, so now, I’m rewriting things with a number raised to imaginary powers. Here’s why it isn’t dumb.

It’s easy to raise a number written like this to a power. $r e^{\imath\theta}$ raised to the n-th power is going to be equal to $r^n e^{\imath\theta \cdot n}$. (Because $(a \cdot b)^n = a^n \cdot b^n$ and we’re going to go ahead and assume this stays true if ‘b’ is a complex-valued number. It does, but you’re right to ask how we know that.) And this turns into raising a real-valued number to a power, which we know how to do. And multiplying the angle by that power, which is also easy.

And we can get back to something that looks like $1 + \imath$ too. That is, something that’s a real number plus $\imath$ times some real number. This is through one of the many Euler’s Formulas. The one that’s relevant here is that $e^{\imath \phi} = \cos(\phi) + \imath \sin(\phi)$ for any real number ‘φ’. So, that’s true also for ‘θ’ times ‘n’. Or, looking to where everybody knows we’re going, also true for ‘θ’ divided by ‘x’.
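Python’s complex-number library makes a handy sandbox for all of this; the sample power and angle are arbitrary:

```python
import cmath, math

z = 1 + 1j
r, theta = abs(z), cmath.phase(z)  # sqrt(2) and pi/4
print(r, theta)

# Raising to a power in polar form: r^n times e^{i*theta*n}
n = 5
by_polar = r**n * cmath.exp(1j * theta * n)
print(abs(by_polar - z**n))  # tiny

# Euler's Formula: e^{i*phi} = cos(phi) + i*sin(phi)
phi = 0.3
print(abs(cmath.exp(1j * phi) - complex(math.cos(phi), math.sin(phi))))  # tiny
```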

OK, on to the people so anxious about all this. I talked about the angle made between the positive x-axis and the line segment that connects a point and the origin. “The” angle. “The”. If that wasn’t enough explanation of the problem, mention, in an empty room if you happen to be in one, how your thinking’s done a 360 degree turn and you see it different now. Your pedantic know-it-all friend will materialize to explain the problem: there’s an infinite number of angles that correspond to any given direction. They’re all separated by 360 degrees or, to a mathematician, 2π.

And more. What’s the difference between going out five units of distance in the direction of angle 0 and going out minus-five units of distance in the direction of angle -π? That is, between walking forward five paces while facing east and walking backward five paces while facing west? Yeah. So if we let ‘r’ be negative we’ve got twice as many infinitely many sets of coordinates for each point.

This complicates raising numbers to powers. θ times n might match with some point that’s very different from θ-plus-2-π times n. There might be a whole ring of powers. This seems … hard to work with, at least. But it’s, at heart, the same problem you get thinking about the square root of 4 and concluding it’s both plus 2 and minus 2. If you want “the” square root, you’d like it to be a single number. At least if you want to calculate anything from it. You have to pick out a preferred θ from the family of possible candidates.

For me, that’s whatever set of coordinates has ‘r’ that’s positive (or zero), and that has ‘θ’ between -π and π. Or between 0 and 2π. It could be any strip of numbers that’s 2π wide. Pick what makes sense for the problem you’re doing. It’s going to be the strip from -π to π. Perhaps the strip from 0 to 2π.

What this all amounts to is that I can turn this:

$f(x) = -4 \imath x \left\{ \left(1 + \imath\right)^{\frac{1}{x}} - \left(1 - \imath\right)^{\frac{1}{x}} \right\}$

into this:

$f(x) = -4 \imath x \left\{ \left(\sqrt{2} e^{\imath \frac{\pi}{4}}\right)^{\frac{1}{x}} - \left(\sqrt{2} e^{-\imath \frac{\pi}{4}} \right)^{\frac{1}{x}} \right\}$

without changing its meaning any. Raising a number to the one-over-x power looks different from raising it to the n power. But the work isn’t different. The function I wrote out up there is the same as this function:

$f(x) = -4 \imath x \left\{ \sqrt{2}^{\frac{1}{x}} e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} - \sqrt{2}^{\frac{1}{x}} e^{-\imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}$

I can’t look at that number, $\sqrt{2}^{\frac{1}{x}}$, sitting there, multiplied by two things added together, and leave that. (OK, subtracted, but same thing.) I want to something something distributive law something and that gets us here:

$f(x) = -4 \imath x \sqrt{2}^{\frac{1}{x}} \left\{ e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} - e^{- \imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}$

Also, yeah, that square root of two raised to a power looks weird. I can turn that square root of two into “two to the one-half power”. That gets to this rewrite:

$f(x) = -4 \imath x 2^{\frac{1}{2}\cdot \frac{1}{x}} \left\{ e^{\imath \frac{\pi}{4}\cdot\frac{1}{x}} - e^{- \imath \frac{\pi}{4}\cdot\frac{1}{x}} \right\}$
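All these rewrites are supposed to leave the function’s value alone, and that’s easy to sanity-check numerically; note that Python’s complex power happens to use the same principal-branch convention chosen above:

```python
import cmath, math

def f_original(x):
    """The translated Wronski expression, as first written."""
    return -4j * x * ((1 + 1j) ** (1 / x) - (1 - 1j) ** (1 / x))

def f_rewritten(x):
    """The same expression after the polar-coordinate rewrites."""
    phi = (math.pi / 4) / x
    return -4j * x * 2 ** (0.5 / x) * (cmath.exp(1j * phi) - cmath.exp(-1j * phi))

for x in [1.0, 2.0, 10.0, 1000.0]:
    print(x, abs(f_original(x) - f_rewritten(x)))  # all tiny
```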

And then. Those parentheses. e raised to an imaginary number minus e raised to minus-one-times that same imaginary number. This is another one of those magic tricks that mathematicians know because they see it all the time. Part of what we know from Euler’s Formula, the one I waved at back when I was talking about coordinates, is this:

$\sin\left(\phi\right) = \frac{e^{\imath \phi} - e^{-\imath \phi}}{2\imath }$

That’s good for any real-valued φ. For example, it’s good for the number $\frac{\pi}{4}\cdot\frac{1}{x}$. And that means we can rewrite that function into something that, finally, actually looks a little bit simpler. It looks like this:

$f(x) = 8 x 2^{\frac{1}{2}\cdot \frac{1}{x}} \sin\left(\frac{\pi}{4}\cdot \frac{1}{x}\right)$

And that’s the function whose limit I want to take at ∞. No, really.

Reading the Comics, November 19, 2016: Thought I Featured This Already Edition

For the second half of last week Comic Strip Master Command sent me a couple comics I would have sworn I showed off here before.

Jason Poland’s Robbie and Bobby for the 16th I would have sworn I’d featured around here before. I still think it’s a rerun but apparently I haven’t written it up. It’s a pun, I suppose, playing on the use of “power” to mean both exponentials and the thing knowledge is. I’m curious why Poland used 10 for the new exponent. Normally if there isn’t an exponent explicitly written we take that to be “1”, and incrementing 1 would give 2. Possibly that would have made a less-clear illustration. Or possibly the idea of sleeping squared lacked the Brobdingnagian excess of sleeping to the tenth power.

Exponentials have been written as a small number elevated from the baseline since 1636. James Hume then published an edition of François Viète’s text on algebra. Hume used a Roman numeral in the superscript, $x^{ii}$ instead of $x^2$, but apart from that it’s the scheme we use today. The scheme was in the air, though. René Descartes also used the notation, but with Arabic numerals throughout, from 1637. (With quirks; he would write “xx” instead of “$x^2$”, possibly because it’s the same number of characters to write.) And Pierre Hérigone just wrote the exponent after the variable: x2, like you see in bad character-recognition texts. That isn’t a bad scheme, particularly since it’s so easy to type, although we would add a caret: x^2. (I draw all this history, as ever, from Florian Cajori’s A History of Mathematical Notations, particularly sections 297 through 299.)

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 16th has a fun concept about statisticians running wild and causing chaos. I appreciate a good healthy prank myself. It does point out something valuable, though. People in general have gotten to understand the idea that there are correlations between things. An event happening and some effect happening seem to go together. This is sometimes because the event causes the effect. Sometimes they’re both caused by some other factor; the event and effect are spuriously linked. Sometimes there’s just no meaningful connection. Coincidences do happen. But there’s no good common understanding of how strong those effects are. And that’s not just a pop culture thing. For example, doing anything other than driving while driving increases the risk of crashing. But by how much? It’s easy to take something with the shape of a fact. Suppose it’s “looking at a text quadruples your risk of crashing”. (I don’t know what the risk increase is. Pretend it’s quadruple for the sake of this.) That’s easy to remember. But what’s my risk of crashing? Suppose it’s a clear, dry day, no winds, and I’m on a limited-access highway with light traffic. What’s the risk of crashing? Can’t be very high, considering how long I’ve done that without a crash. Quadruple that risk? That doesn’t seem terrifying. But I don’t know what that is, or how to express it in a way that helps make decisions. It’s not just newscasters who have this weakness.
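The arithmetic gap here, between relative risk and absolute risk, is easy to show with numbers. The baseline figure below is invented purely for illustration; it’s not a real crash statistic.

```python
# Relative vs absolute risk: a made-up illustration. The baseline
# figure is invented for the sake of the arithmetic, not real data.
baseline_risk = 1e-6       # hypothetical chance of a crash on one clear-day highway trip
relative_factor = 4        # "quadruples your risk"

quadrupled_risk = baseline_risk * relative_factor
extra_risk = quadrupled_risk - baseline_risk

# A fourfold relative increase of a tiny absolute risk is still tiny.
print(f"baseline risk:   {baseline_risk:.6%}")
print(f"quadrupled risk: {quadrupled_risk:.6%}")
print(f"extra risk:      {extra_risk:.6%}")
```

The point is only that “quadruple” sounds alarming while the absolute change may stay small; whether that small change matters is exactly the judgment the strip says we’re bad at making.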

Mark Anderson’s Andertoons for the 18th is the soothing appearance of Andertoons for this essay. And while it’s the familiar form of the student protesting the assignment, the kid does have a point. There are times an estimate is all we need, and there are times an exact answer is necessary. When are those times? That’s another skill that people have to develop.

Arthur C Clarke, in his semi-memoir Astounding Days, wrote of how his early-1940s civil service job had him auditing schoolteacher pension contributions. He worked out that he really didn’t need to get the answers exactly. If the contribution was within about one percent of right it wasn’t worth his time to track it down more precisely. I’m not sure that his supervisors would take the same attitude. But the war soon took everyone to other matters without clarifying just how exactly he was supposed to audit.

Mark Anderson’s Mr Lowe rerun for the 18th is another I would have sworn I’ve brought up before. The strip was short-lived and this is at least its second time through. But then mathematics is only mentioned here as a dull thing students must suffer through. It might not have seemed interesting enough for me to mention before.

Rick Detorie’s One Big Happy rerun for the 19th is another sort of pun. At least it plays on the multiple meanings of “negative”. And I suspect that negative numbers acquired a name with, er, negative connotations because the numbers were suspicious. It took centuries for mathematicians to move them from “obvious nonsense” to “convenient but meaningless tools for useful calculations” to “acceptable things” to “essential stuff”. Non-mathematicians can be forgiven for needing time to work through that progression. Also I’m not sure I didn’t show this one off here when it was first-run. Might be wrong.

Saturday Morning Breakfast Cereal pops back into my attention for the 19th. That’s with a bit about Dad messing with his kid’s head. Not much to say about that so let me bury the whimsy with my earnestness. The strip does point out that what we name stuff is arbitrary. We would say that 4 and 12 and 6 are “composite numbers”, while 2 and 3 are “prime numbers”. But if we all decided one day to swap the meanings of the terms around we wouldn’t be making any mathematics wrong. Or linguistics either. We would probably want to clarify what “a really good factor” is, but all the comic really does is mess with the labels of groups of numbers we’re already interested in.

Calculus without limits 5: log and exp

I’ve been on a bit of a logarithms kick lately, and I should say I’m not the only one. HowardAt58 has had a good number of articles about it, too, and I wanted to point some out to you. In this particular reblogging he brings a bit of calculus to show why the logarithm of the product of two numbers has to be the sum of the logarithms of the two separate numbers, in a way that’s more rigorous (if you’re comfortable with freshman calculus) than just writing down a couple examples along the lines of how 10^2 times 10^3 is equal to 10^5. (I won’t argue that having a couple specific examples might be better at communicating the point, but there’s a difference between believing something is so and being able to prove that it’s true.)
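The identity itself is easy to check numerically, if checking is all you want. Here’s a quick sketch of my own (not from the reblogged post), trying the rule on a handful of pairs:

```python
import math

# Check that log(a * b) equals log(a) + log(b) for several pairs,
# up to floating-point rounding.
pairs = [(2, 3), (10, 1000), (0.5, 8), (math.e, math.pi)]
for a, b in pairs:
    lhs = math.log(a * b)
    rhs = math.log(a) + math.log(b)
    assert abs(lhs - rhs) < 1e-12, (a, b)
    print(f"log({a} * {b}) = {lhs:.12f} = log({a}) + log({b})")
```

Of course, as the post above says, passing a few spot checks is belief, not proof; the calculus argument is what makes it certain.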

The derivative of the log function can be investigated informally, as log(x) is seen as the inverse of the exponential function, written here as exp(x). The exponential function appears naturally from numbers raised to varying powers, but formal definitions of the exponential function are difficult to achieve. For example, what exactly is the meaning of exp(π) or exp(√2)?
So we look at the log function:-

View original post
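That integral definition of the logarithm — the one my real analysis professor’s project started from — can be approximated directly, which makes a nice sanity check. This is my own sketch, using a simple midpoint rule; the slice count n is an arbitrary choice, not anything from the reblogged post.

```python
import math

def log_via_integral(x, n=100_000):
    """Approximate log(x) as the integral of 1/t from t = 1 to t = x,
    using the midpoint rule with n slices. Assumes x > 0."""
    h = (x - 1) / n
    return h * sum(1.0 / (1.0 + (i + 0.5) * h) for i in range(n))

# Compare against the library logarithm.
for x in (2.0, math.e, 10.0):
    print(x, log_via_integral(x), math.log(x))
```

Everything in the project — log 1 = 0, the product rule for logs, change of base — falls out of manipulating that integral, which is what made it such a good exercise.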