The Magic Realism Bot twitter feed, which every four hours generates a fanciful plot, offered this bit of whimsy earlier this month.
A good number of people would like that crystal ball.
Today’s A To Z term was suggested by Dina Yagodich, whose YouTube channel features many topics, including calculus and differential equations, statistics, discrete math, and Matlab. Matlab is especially valuable to know as a good quick calculation can answer many questions.
The Wallis named here is John Wallis, an English clergyman, mathematician, and cryptographer. His most tweetable work is the symbol ∞ for infinity, which we still use following his lead. But he did much in calculus. And it’s a piece of that which brings us to today. He particularly noticed this:

$\frac{\pi}{2} = \frac{2}{1} \cdot \frac{2}{3} \cdot \frac{4}{3} \cdot \frac{4}{5} \cdot \frac{6}{5} \cdot \frac{6}{7} \cdot \frac{8}{7} \cdots$
This is an infinite product. It’s multiplication’s answer to the infinite series. It always amazes me when an infinite product works. There are dangers when you do anything with an infinite number of terms. Even the basics of arithmetic, like that you can change the order in which you calculate but still get the same result, break down. Series, in which you add together infinitely many things, are risky, but I’m comfortable with the rules to know when the sum can be trusted. Infinite products seem more mysterious. Then you learn an infinite product converges if and only if the series made from the logarithms of the terms in it also converges. Then infinite products seem less exciting.
There are many infinite products that give us π. Some work quite efficiently, giving us lots of digits for a few terms’ work. Wallis’s formula does not. We need about a thousand terms for it to get us a π of about 3.141. This is a bit much to calculate even today. In 1656, when he published it in Arithmetica Infinitorum (a book I have never read), it was all the more so. Wallis was able to do mental arithmetic well. His biography at St Andrews says that once, when having trouble sleeping, he calculated the square root of a 53-digit number in his head, and in the morning remembered it, and was right. Still, this would be a lot of work. How could Wallis possibly do it? And what work could possibly convince anyone else that he was right?
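The thousand-terms claim is easy to check in Python. This is a sketch of my own, not Wallis’s procedure; the function name is mine:

```python
from math import pi

# Wallis's product: pi/2 = (2/1)*(2/3)*(4/3)*(4/5)*(6/5)*(6/7)*...
# Each n contributes the paired factor (2n/(2n-1)) * (2n/(2n+1)).
def wallis_pi(terms):
    product = 1.0
    for n in range(1, terms + 1):
        product *= (2 * n) / (2 * n - 1) * ((2 * n) / (2 * n + 1))
    return 2 * product

print(wallis_pi(10))     # still quite rough
print(wallis_pi(1000))   # about 3.1408, good to roughly three digits
print(pi)                # for comparison
```

A thousand terms buys only the first few digits; the error shrinks roughly like 1/N, so each further digit costs about ten times as many terms, which is why nobody calls this an efficient formula.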
As is common with striking discoveries, it was a mixture of insight and luck and persistence and pattern recognition. He seems to have started with pondering the value of

$\int_0^1 \left(1 - x^2\right)^{\frac{1}{2}}\, dx$
Happily, he knew exactly what this was: $\frac{\pi}{4}$. He knew this because of a bit of insight. We can interpret the integral here as asking for the area that’s enclosed, on a Cartesian coordinate system, by the positive x-axis, the positive y-axis, and the set of points which makes true the equation $y = \left(1 - x^2\right)^{\frac{1}{2}}$. This curve is the upper half of a circle with radius 1 and centered on the origin. The area enclosed by all this is one-fourth the area of a circle of radius 1. So that’s how he could know the value of the integral, without doing any symbol manipulation.
The question, in modern notation, would be whether he could do that integral. And, for this? He couldn’t. But, unable to do the problem he wanted, he tried doing the most similar problem he could, to see what that proved. $\left(1 - x^2\right)^{\frac{1}{2}}$ was beyond his power to integrate; but what if he swapped those exponents? Worked on $\left(1 - x^{\frac{1}{2}}\right)^2$ instead? This would not — could not — give him what he was interested in. But it would give him something he could calculate. So can we:

$\int_0^1 \left(1 - x^{\frac{1}{2}}\right)^2\, dx = \int_0^1 1 - 2x^{\frac{1}{2}} + x\, dx = 1 - \frac{4}{3} + \frac{1}{2} = \frac{1}{6}$
And now here comes persistence. What if it’s not $x^{\frac{1}{2}}$ inside the parentheses there? If it’s x raised to some other unit fraction instead? What if the parentheses aren’t raised to the second power, but to some other whole number? Might that reveal something useful? Each of these integrals is calculable, and he calculated them. He worked out a table for many values of

$\int_0^1 \left(1 - x^{\frac{1}{p}}\right)^q\, dx$
for different sets of whole numbers p and q. He trusted that if he kept this up, he’d find some interesting pattern. And he did. The integral, for example, always turns out to be a unit fraction. And there’s a deeper pattern. Let me share results for different values of p and q; the integral is the reciprocal of the number inside the table. The topmost row is values of q; the leftmost column is values of p.
There is a deep pattern here, although I’m not sure Wallis noticed that one. Look along the diagonals, running from lower-left to upper-right. These are the coefficients of the binomial expansion. Yang Hui’s triangle, if you prefer. Pascal’s triangle, if you prefer that. Let me call the term in row p, column q of this table $w_{p, q}$. Then

$w_{p, q} = \binom{p + q}{p} = \frac{(p + q)!}{p!\, q!}$
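That table of reciprocals takes moments to rebuild in Python. The helper name and the numerical check below are mine, not anything from Wallis:

```python
from math import comb

# The reciprocal of the integral in row p, column q is the binomial
# coefficient C(p + q, p).
def w(p, q):
    return comb(p + q, p)

for p in range(5):
    print([w(p, q) for q in range(5)])
# Row p = 2, say, reads 1, 3, 6, 10, 15; the diagonals (constant p + q)
# read off rows of Pascal's triangle.

# Check one entry numerically: the p = q = 2 integral of (1 - sqrt(x))^2,
# by a midpoint Riemann sum, should be close to 1/w(2, 2) = 1/6.
n = 100_000
midpoint = sum((1 - ((k + 0.5) / n) ** 0.5) ** 2 for k in range(n)) / n
print(midpoint)
```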
Great material, anyway. The trouble is that it doesn’t help Wallis with the original problem, which — in this notation — would have $p = \frac{1}{2}$ and $q = \frac{1}{2}$. What he really wanted was the Binomial Theorem, but western mathematicians didn’t know it yet. Here a bit of luck comes in. He had noticed there’s a relationship between terms in one column and terms in another: particularly, that each entry is $\frac{p + q}{q}$ times the entry one column to its left.
So why shouldn’t that hold if p and q aren’t whole numbers? … We would today say why should they hold? But Wallis was working with a different idea of mathematical rigor. He made assumptions that it turned out in this case were correct. Of course, had he been wrong, we wouldn’t have heard of any of this and I would have an essay on some other topic.
With luck in Wallis’s favor we can go back to making a table. What would the row for $p = \frac{1}{2}$ look like? We’ll need both whole and half-integer values of q. The $q = 0$ entry is easy; its reciprocal is 1. The $q = \frac{1}{2}$ entry is also easy; that’s the insight Wallis had to start with. Its reciprocal is $\frac{\pi}{4}$. What about the rest? Use the equation just up above, relating each entry to the one a column to its left; then we can start to fill in:
Anything we can learn from this? … Well, sure. For one, as we go left to right, all these entries are increasing. So, like, the second column is less than the third which is less than the fourth. Here’s a triple inequality for you:

$\frac{4}{\pi} < \frac{3}{2} < \frac{4}{3} \cdot \frac{4}{\pi}$
Multiply all that through by π. And then divide it all through by 3. What have we got?

$\frac{4}{3} < \frac{\pi}{2} < \frac{4}{3} \cdot \frac{4}{3}$
I did some rearranging of terms, but, that’s the pattern. One-half π has to be between $\frac{4}{3}$ and four-thirds that.
Move over a little. Start from the column where $q = 1$. This starts us out with

$\frac{3}{2} < \frac{4}{3} \cdot \frac{4}{\pi} < \frac{15}{8}$
Multiply and divide everything through by the right constants, and follow with some symbol manipulation. And here’s a tip which would have saved me some frustration working out my notes: $4 = 2 \cdot 2$. Also, 6 equals 2 times 3. Later on, you may want to remember that 8 equals 2 times 4. All this gets us eventually to

$\frac{2 \cdot 2 \cdot 4 \cdot 4}{1 \cdot 3 \cdot 3 \cdot 5} < \frac{\pi}{2} < \frac{2 \cdot 2 \cdot 4 \cdot 4}{1 \cdot 3 \cdot 3 \cdot 5} \cdot \frac{5}{4}$
Move over to the next terms, starting from the column where $q = \frac{3}{2}$. This will get us eventually to

$\frac{2 \cdot 2 \cdot 4 \cdot 4}{1 \cdot 3 \cdot 3 \cdot 5} < \frac{\pi}{2} < \frac{2 \cdot 2 \cdot 4 \cdot 4}{1 \cdot 3 \cdot 3 \cdot 5} \cdot \frac{6}{5}$
You see the pattern here. Whatever the value of $\frac{\pi}{2}$ is, it’s squeezed between some number, on the left side of this triple inequality, and that same number times … uh … something like $\frac{4}{3}$ or $\frac{5}{4}$ or $\frac{6}{5}$ or $\frac{7}{6}$. Carry on far enough and the multiplier is a number very close to 1. So the conclusion is that $\frac{\pi}{2}$ has to equal whatever that pattern is making for the number on the left there.
We can make this more rigorous. Like, we don’t have to just talk about squeezing the number we want between two nearly-equal values. We can rely on the use of the … Squeeze Theorem … to prove this is okay. And there’s much we have to straighten out. Particularly, we really don’t want to write out expressions like

$\frac{\pi}{2} = \frac{2 \cdot 2 \cdot 4 \cdot 4 \cdot 6 \cdot 6 \cdot 8 \cdot 8 \cdots}{1 \cdot 3 \cdot 3 \cdot 5 \cdot 5 \cdot 7 \cdot 7 \cdot 9 \cdots}$
Put that way, it looks like, well, we can divide each 3 in the denominator into a 6 in the numerator to get a 2, each 5 in the denominator to a 10 in the numerator to get a 2, and so on. We get a product that’s infinitely large, instead of anything to do with π. This is that problem where arithmetic on infinitely long strings of things becomes dangerous. To be rigorous, we need to write this product as the limit of a sequence, with finite numerator and denominator, and be careful about how to compose the numerators and denominators.
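To see that danger in numbers, here is a sketch (function names mine) comparing a matched truncation, numerator and denominator kept in step, with the reckless cancellation described above:

```python
# Matched truncation: keep numerator and denominator factors paired,
# then take the limit. This is the safe way; it heads toward pi.
def matched(n):
    prod = 1.0
    for k in range(1, n + 1):
        prod *= (2 * k) * (2 * k) / ((2 * k - 1) * (2 * k + 1))
    return 2 * prod

# Reckless cancellation: divide each odd denominator into a numerator
# term twice its size, leaving a 2 behind each time (never mind the
# numerator factors left over). Even those 2s alone grow without bound;
# pi is nowhere in sight.
def reckless(n):
    return 2.0 ** n

print(matched(1000))   # close to pi
print(reckless(50))    # astronomically large
```

Same factors, different bookkeeping; only the version whose finite numerator and denominator stay in step converges to the product’s actual value.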
But this is all right. Wallis found a lovely result and in a way that’s common to much work in mathematics. It used a combination of insight and persistence, with pattern recognition and luck making a great difference. Often when we first find something the proof of it is rough, and we need considerable work to make it rigorous. The path that got Wallis to these products is one we still walk.
There’s just three more essays to go this year! I hope to have the letter X published here, Thursday. All the other A-to-Z essays for this year are also at that link. And past A-to-Z essays are at this link. Thanks for reading.
Today’s A To Z term was suggested by Peter Mander. Mander authors CarnotCycle, which when I first joined WordPress was one of the few blogs discussing thermodynamics in any detail. When I last checked it still was, which is a shame. Thermodynamics is a fascinating field. It’s as deeply weird and counter-intuitive and important as quantum mechanics. Yet its principles are as familiar as a mug of warm tea on a chilly day. Mander writes at a more technical level than I usually do. But if you’re comfortable with calculus, or if you’re comfortable nodding at a line and agreeing that he wouldn’t fib to you about a thing like calculus, it’s worth reading.
I’ve written of my fondness for boredom. A bored mind is not one lacking stimulation. It is one stimulated by anything, however petty. And in petty things we can find great surprises.
I do not know what caused Georges-Louis Leclerc, Comte de Buffon, to discover the needle problem named for him. It seems like something born of a bored but active mind. Buffon had an active mind: he was one of Europe’s most important naturalists of the 1700s. He also worked in mathematics, and astronomy, and optics. It shows what one can do with an engaged mind and a large inheritance from one’s childless uncle who’s the tax farmer for all Sicily.
The problem, though. Imagine dropping a needle on a floor that has equally spaced parallel lines. What is the probability that the needle will land on any of the lines? It could occur to anyone with a wood floor who’s dropped a thing. (There is a similar problem which would occur to anyone with a tile floor.) They have only to be ready to ask the question. Buffon did this in 1733. He had it solved by 1777. We, with several centuries’ insight into probability and calculus, need less than 44 years to solve the question.
Let me use L as the length of the needle. And d as the spacing of the parallel lines. If the needle’s length is less than the spacing then this is an easy formula to write, and not too hard to calculate. The probability, P, of the needle crossing some line is:

$P = \frac{2}{\pi} \cdot \frac{L}{d}$
I won’t derive it rigorously. You don’t need me for that. The interesting question is whether this formula makes sense. That L and d are in it? Yes, that makes sense. The length of the needle and the gap between lines have to be in there. More, the probability has to have the ratio between the two. There’s different ways to argue this. Dimensional analysis convinces me, at least. Probability is a pure number. L is a measurement of length; d is a measurement of length. To get a pure number starting with L and d means one of them has to divide into the other. That L is in the numerator and d the denominator makes sense. A tiny needle has a tiny chance of crossing a line. A large needle has a large chance. That $\frac{L}{d}$ is raised to the first power, rather than the second or third or such … well, that’s fair. A needle twice as long having twice the chance of crossing a line? That sounds more likely than a needle twice as long having four times the chance, or eight times the chance.
Does the 2 belong there? Hard to say. 2 seems like a harmless enough number. It appears in many respectable formulas. That π, though …
That π …
π comes to us from circles. We see it in calculations about circles and spheres all the time. We’re doing a problem with lines and line segments. What business does π have showing up?
We can find reasons. One way is to look at a similar problem. Imagine dropping a disc on these lines. What’s the chance the disc falls across some line? That’s the chance that the center of the disc is less than one radius from any of the lines. What if the disc has an equal chance of landing anywhere on the floor? Then, for a disc of radius r, it has a probability of $\frac{2r}{d}$ of crossing a line. If the radius is smaller than half the distance between lines, anyway. If the radius is larger than that, the probability is 1.
Now draw a diameter line on this disc. What’s the chance that this diameter line crosses this floor line? That depends on a couple things. Whether the center of the disc is near enough a floor line. And what angle the diameter line makes with respect to the floor lines. If the diameter line is parallel the floor line there’s almost no chance. If the diameter line is perpendicular to the floor line there’s the best possible chance. But that angle might be anything.
Let me call that angle θ. The diameter line crosses the floor line if the distance from the disc’s center to the floor line is less than half the diameter times the sine of θ. … Oh. Sine. Sine and cosine and all the trigonometry functions we get from studying circles, and how to draw triangles within circles. And this diameter-line problem looks the same as the needle problem. So that’s where π comes from.
I’m being figurative. I don’t think one can make a rigorous declaration that the π in the probability formula “comes from” this sine, any more than you can declare that the square-ness of a shape comes from any one side. But it gives a reason to believe that π belongs in the probability.
If the needle’s longer than the gap between floor lines, if $L > d$, there’s still a probability that the needle crosses at least one line. It never becomes certain. No matter how long the needle is it could fall parallel to all the floor lines and miss them all. The probability is instead:

$P = \frac{2}{\pi} \left( \frac{L}{d} \left( 1 - \sqrt{1 - \left(\frac{d}{L}\right)^2} \right) + \operatorname{arcsec}\left(\frac{L}{d}\right) \right)$
Here is the world-famous arcsecant function. That is, it’s whatever angle has as its secant the number $\frac{L}{d}$. I don’t mean to insult you. I’m being kind to the person reading this first thing in the morning. I’m not going to try justifying this formula. You can play with numbers, though. You’ll see that if $\frac{L}{d}$ is a little bit bigger than 1, the probability is a little more than what you get if $\frac{L}{d}$ is a little smaller than 1. This is reassuring.
The exciting thing is arithmetic, though. Use the probability of a needle crossing a line, for short needles. You can re-write it as this:

$\pi = \frac{2}{P} \cdot \frac{L}{d}$
L and d you can find by measuring needles and the lines. P you can estimate. Drop a needle many times over. Count how many times you drop it, and how many times it crosses a line. P is roughly the number of crossings divided by the number of needle drops. Doing this gives you a way to estimate π. This gives you something to talk about on Pi Day.
It’s a rubbish way to find π. It’s a lot of work, plus you have to sweep needles off the floor. Well, you can do it in simulation and avoid the risk of stepping on an overlooked needle. But it takes a lot of needle-drops to get good results. To be certain you’ve calculated the first two decimal points correctly requires 3,380,000 needle-drops. Yes, yes. You could get lucky and happen to hit on an estimate of 3.14 for π with fewer needle-drops. But if you were sincerely trying to calculate the digits of π this way? If you did not know what they were? You would need the three and a third million tries to be confident you had the number correct.
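Here is a quick simulation sketch in Python; the function name, defaults, and seed are my own choices:

```python
import random
from math import sin, pi

def buffon_pi(drops, needle=1.0, gap=2.0, seed=42):
    """Estimate pi by dropping `drops` needles of length `needle`
    on a floor with lines spaced `gap` apart (assumes needle <= gap)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(drops):
        center = rng.uniform(0, gap / 2)   # distance to the nearest line
        theta = rng.uniform(0, pi / 2)     # needle's angle to the lines
        if (needle / 2) * sin(theta) >= center:
            crossings += 1
    # P = 2L/(pi d) rearranges to pi = 2 L drops / (d crossings).
    return 2 * needle * drops / (gap * crossings)

print(buffon_pi(1_000_000))  # a slow, wobbly road toward 3.14...
```

Run it a few times with different seeds and watch how slowly the estimate settles; that wobble is the three-million-drop problem in action.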
So this result is, as a practical matter, useless. It’s a heady concept, though. We think casually of randomness as … randomness. Unpredictability. Sometimes we will speak of the Law of Large Numbers. This is several theorems in probability. They all point to the same result. That if some event has (say) a probability of one-third of happening, then given 30 million chances, it will happen quite close to 10 million times.
This π result is another casting of the Law of Large Numbers, and of the apparent paradox that true unpredictability is itself predictable. There is no way to predict whether any one dropped needle will cross any line. It doesn’t even matter whether any one needle crosses any line. An enormous number of needles, tossed without fear or favor, will fall in ways that embed π. The same π you get from comparing the circumference of a circle to its diameter. The same π you get from looking at the arc-cosine of a negative one.
I suppose we could use this also to calculate the value of 2, but that somehow seems to touch lesser majesties.
Thank you again for reading. All of the Fall 2019 A To Z posts should be at this link. This year’s and all past A To Z sequences should be at this link. I’ve made my picks for next week’s topics, and am fooling myself into thinking I have a rough outline for them already. But I’m still open for suggestions for the letters E through H and appreciate suggestions.
The first two comics for this essay have titles of the form Name’s Thing, so, that’s why this edition title. That’s good enough, isn’t it? And besides this series there was a Perry Bible Fellowship which at least depicted mathematical symbols. It’s a rerun, though, even among those shown on GoComics.com. It was rerun recently enough that I featured it around here back in June. It’s a bit risque. But the strip was rerun the 12th. Maybe I also need to drop Perry Bible Fellowship from the roster of comics I read for this.
On to the comics I haven’t dropped.
Tony Buino and Gary Markstein’s Daddy’s Home for the 11th tries using specific examples to teach mathematics. There’s strangeness to arithmetic. It’s about these abstract things like “thirty” and “addition” and such. But these things match very well the behaviors of discrete objects, ones that don’t blend together or shatter by themselves. So we can use the intuition we have for specific things to get comfortable working with the abstract. This doesn’t stop, either. Mathematicians like to work on general, abstract questions; they let us answer big swaths of questions all at once. But working out a specific case is usually easier, both to prove and to understand. I don’t know what’s the most advanced mathematics that could be usefully practiced by thinking about cupcakes. Probably something in group theory, in studying the rotations of objects that are perfectly, or nearly, rotationally symmetric.
John Zakour and Scott Roberts’s Maria’s Day for the 11th is a follow-up to a strip featured last week. Maria’s been getting help on her mathematics from one of her closet monsters. And includes the usual joke about Common Core being such a horrible thing that it must come from monsters. I don’t know whether in the comic strip’s universe the monster is supposed to be imaginary. (Usually, in a comic strip, the question of whether a character is imaginary-or-real is pointless. I think Richard Thompson’s Cul de Sac is the only one to have done something good with it.) But if the closet monster is in Maria’s imagination, it’s quite in line for her to think that teaching comes from some malevolent and inscrutable force.
Olivia Jaimes’s Nancy for the 12th features one of the first interesting mathematics questions you do in physics. This is often done with calculus. Not much, but more than Nancy and Esther could realistically have. It could be worked out experimentally, and that’s likely what the teacher was hoping for. Calculus isn’t really necessary, although it does show skeptical students there’s some value in all this d-dx business they’ve been working through. You can find the same answers by dimensional analysis, which is less intimidating. But you’d still need to know some trigonometry functions. That’s beyond whatever Nancy’s grade level is too. In any case, Nancy is an expert at identifying unstated assumptions, and working out loopholes in them. I’m curious whether the teacher would respect Nancy’s skill here. (The way the writing’s been going, I think she would.)
Francesco Marciuliano and Jim Keefe’s Sally Forth for the 13th is about new-friend Jenny trying to work out her relationship with Hilary-Faye-and-Nona. It’s a good bit of character work, but that is outside my subject here. In the last panel Nona admits she’s been talking, or at least thinking about τ versus π. This references a minor nerd-squabble that’s been going on a couple years. π is an incredibly well-known, useful number. It’s the only transcendental number you can expect a normal person to have ever heard of. Humans noticed it, historically, because the length of the circumference of a circle is π times the length of its diameter. Going between “the distance across” and “the distance around” turns out to be useful.
The thing is, many mathematical and physics formulas find it more convenient to write things in terms of the radius of a circle or sphere. And this makes 2π show up in formulas. A lot. Even in things that don’t obviously have circles in them. For example, the Gaussian distribution, which describes how much a sample looks like the population it’s sampled from, has 2π in it. So, the τ argument goes, why write out 2π in all these places? Why not decide that that’s the useful number to think about, give it the catchy name τ, and use that instead? All the interesting questions about π have exact, obvious parallel questions about τ. Any answers about one give us answers about the other. So why not make this switch and then … pocket the savings in having shorter formulas?
You may sense in me a certain skepticism. I don’t see where changing over gets us anything worth the bother. But there are fashions in mathematics as with everything else. Perhaps τ has some ability to clarify things in ways we’ll come to better appreciate.
This and my other Reading the Comics posts are this link. Essays inspired by Daddy’s Home are at this link. Other essays that mention Maria’s Day discussions should be at this link. Essays with a mention of Nancy, old and new, are at this link. And essays in which Sally Forth gets discussed will be at this link. It’s a new tag today, which does surprise me.
So, I must confess failure. Not about deciphering Józef Maria Hoëne-Wronski’s attempted definition of π. He’d tried this crazy method throwing a lot of infinities and roots of infinities and imaginary numbers together. I believe I translated it into the language of modern mathematics fairly. And my failure is not that I found the formula actually described the number -½π.
Oh, I had an error in there, yes. And I’d found where it was. It was all the way back in the essay which first converted Wronski’s formula into something respectable. It was a small error, first appearing in the last formula of that essay and never corrected from there. This reinforces my suspicion that when normal people see formulas they mostly look at them to confirm there is a formula there. With luck they carry on and read the sentences around them.
My failure is I wanted to write a bit about boring mistakes. The kinds which you make all the time while doing mathematics work, but which you don’t worry about. Dropped signs. Constants which aren’t divided out, or which get multiplied in incorrectly. Stuff like this which you only detect because you know, deep down, that you should have gotten to an attractive simple formula and you haven’t. Mistakes which are tiresome to make, but never make you wonder if you’re in the wrong job.
The trouble is I can’t think of how to make an essay of that. We don’t tend to rate little mistakes like the wrong sign or the wrong multiple or a boring unnecessary added constant as important. This is because they’re not. The interesting stuff in a mathematical formula is usually the stuff representing variations. Change is interesting. The direction of the change? Eh, nice to know. A swapped plus or minus sign alters your understanding of the direction of the change, but that’s all. Multiplying or dividing by a constant wrongly changes your understanding of the size of the change. But that doesn’t alter what the change looks like. Just the scale of the change. Adding or subtracting the wrong constant alters what you think the change is varying from, but not what the shape of the change is. Once more, not a big deal.
But you also know that instinctively, or at least you get it from seeing how it’s worth one or two points on an exam to write -sin where you mean +sin. Or how if you ask the instructor in class about that 2 where a ½ should be, she’ll say, “Oh, yeah, you’re right” and do a hurried bit of erasing before going on.
Thus my failure: I don’t know what to say about boring mistakes that has any insight.
For the record here’s where I got things wrong. I was creating a function, named ‘f’ and using as a variable ‘x’, to represent Wronski’s formula. I’d gotten to this point:
And then I observed how the stuff in curly braces there is “one of those magic tricks that mathematicians know because they see it all the time”. And I wanted to call in this formula, correctly:

$\sin\left(\theta\right) = \frac{1}{2\mathbf{i}} \left( e^{\mathbf{i}\theta} - e^{-\mathbf{i}\theta} \right)$
So here’s where I went wrong. I took the way off in the front of that first formula and combined it with the stuff in braces to make 2 times a sine of some stuff. I apologize for this. I must have been writing stuff out faster than I was thinking about it. If I had thought, I would have gone through this intermediate step:
Because with that form in mind, it’s easy to take the stuff in curled braces and the in the denominator. From that we get, correctly, . And then the on the far left of that expression and the on the right multiply together to produce the number 8.
So the function ought to have been, all along:
Not very different, is it? Ah, but it makes a huge difference. Carry through with all the L’Hôpital’s Rule stuff described in previous essays. All the complicated formula work is the same. There’s a different number hanging off the front, waiting to multiply in. That’s all. And what you find, redoing all the work but using this corrected function, is that Wronski’s original mess —
— should indeed equal:

$2\pi$
All right, there’s an extra factor of 2 here. And I don’t think that is my mistake. Or if it is, other people come to the same mistake without my prompting.
Possibly the book I drew this from misquoted Wronski. It’s at least as good to have a formula for 2π as it is to have one for π. Or Wronski had a mistake in his original formula, and had a constant multiplied out front which he didn’t want. It happens to us all.
Józef Maria Hoëne-Wronski had an idea for a new, universal, culturally-independent definition of π. It was this formula that nobody went along with because they had looked at it:

$\pi = \frac{4\infty}{\sqrt{-1}} \left\{ \left(1 + \sqrt{-1}\right)^{\frac{1}{\infty}} - \left(1 - \sqrt{-1}\right)^{\frac{1}{\infty}} \right\}$
I made some guesses about what he would want this to mean. And how we might put that in terms of modern, conventional mathematics. I describe those in the above links. In terms of limits of functions, I got this:
The trouble is that limit took more work than I wanted to do to evaluate. If you try evaluating that ‘f(x)’ at ∞, you get an expression that looks like zero times ∞. This begs for the use of L’Hôpital’s Rule, which tells you how to find the limit for something that looks like zero divided by zero, or like ∞ divided by ∞. Do a little rewriting — replacing that first ‘x’ with ‘$\frac{1}{1 / x}$’ — and this ‘f(x)’ behaves like L’Hôpital’s Rule needs.
The trouble is, that’s a pain to evaluate. L’Hôpital’s Rule works on functions that look like one function divided by another function. It does this by calculating the derivative of the numerator function divided by the derivative of the denominator function. And I decided that was more work than I wanted to do.
Where trouble comes up is all those parts where $\frac{1}{x}$ turns up. The derivatives of functions with a lot of $\frac{1}{x}$ terms in them get more complicated than the original functions were. Is there a way to get rid of some or all of those?
And there is. Do a change of variables. Let me summon the variable ‘y’, whose value is exactly $\frac{1}{x}$. And then I’ll define a new function, ‘g(y)’, whose value is whatever ‘f’ would be at $x = \frac{1}{y}$. That is, and this is just a little bit of algebra:
The limit of ‘f(x)’ for ‘x’ at ∞ should be the same number as the limit of ‘g(y)’ for ‘y’ at … you’d really like it to be zero. If ‘x’ is incredibly huge, then $\frac{1}{x}$ has to be incredibly small. But we can’t just swap the limit of ‘x’ at ∞ for the limit of ‘y’ at 0. The limit of a function at a point reflects the value of the function at a neighborhood around that point. If the point’s 0, this includes positive and negative numbers. But looking for the limit at ∞ gets at only positive numbers. You see the difference?
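A toy example, far simpler than anything in Wronski’s formula, shows the difference numerically:

```python
# y/|y| is +1 for every positive y and -1 for every negative y, so its
# one-sided limits at 0 exist but disagree; no two-sided limit exists.
def sign_ratio(y):
    return y / abs(y)

print([sign_ratio(y) for y in (0.1, 0.01, 0.001)])     # approach from the right
print([sign_ratio(y) for y in (-0.1, -0.01, -0.001)])  # approach from the left
```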
… For this particular problem it doesn’t matter. But it might. Mathematicians handle this by taking a “one-sided limit”, or a “directional limit”. The normal limit at 0 of ‘g(y)’ is based on what ‘g(y)’ looks like in a neighborhood of 0, positive and negative numbers. In the one-sided limit, we just look at a neighborhood of 0 that’s all values greater than 0, or less than 0. In this case, I want the neighborhood that’s all values greater than 0. And we write that by adding a little + in superscript to the limit. For the other side, the neighborhood less than 0, we add a little – in superscript. So I want to evaluate:

$\lim_{y \to 0^+} g(y)$
Limits and L’Hôpital’s Rule and stuff work for one-sided limits the way they do for regular limits. So there’s that mercy. The first attempt at this limit, seeing what ‘g(y)’ is if ‘y’ happens to be 0, gives $\frac{0}{0}$. A zero divided by a zero is promising. That’s not defined, no, but it’s exactly the format that L’Hôpital’s Rule likes. The numerator is:
And the denominator is:

$y$
The first derivative of the denominator is blessedly easy: the derivative of y, with respect to y, is 1. The derivative of the numerator is a little harder. It demands the use of the Product Rule and the Chain Rule, just as last time. But these chains are easier.
The first derivative of the numerator is going to be:
Yeah, this is the simpler version of the thing I was trying to figure out last time. Because this is what’s left if I write the derivative of the numerator over the derivative of the denominator:
And now this is easy. Promise. There’s no expressions of ‘y’ divided by other expressions of ‘y’ or anything else tricky like that. There’s just a bunch of ordinary functions, all of them defined for when ‘y’ is zero. If this limit exists, it’s got to be equal to:
At $y = 0$, each of those exponents and arguments is 0. And the sine of 0 is 0. The cosine of 0 is 1. So all this gets to be a lot simpler, really fast.
And $2^0$ is equal to 1. So the part to the left of the + sign there is all zero. What remains is:
And so, finally, we have it. Wronski’s formula, as best I make it out, is a function whose value is …

$-\frac{1}{2}\pi$
… So, what Wronski had been looking for, originally, was π. This is … oh, so very close to right. I mean, there’s π right there, it’s just multiplied by an unwanted $-\frac{1}{2}$. The question is, where’s the mistake? Was Wronski wrong to start with? Did I parse him wrongly? Is it possible that the book I copied Wronski’s formula from made a mistake?
Could be any of them. I’d particularly suspect I parsed him wrongly. I returned the library book I had got the original claim from, and I can’t find it again before this is set to publish. But I should check whether Wronski was thinking to find π, the ratio of the circumference to the diameter of a circle. Or might he have looked to find the ratio of the circumference to the radius of a circle? Either is an interesting number worth finding. We’ve settled on the circumference-over-diameter as valuable, likely for practical reasons. It’s much easier to measure the diameter than the radius of a thing. (Yes, I have read the Tau Manifesto. No, I am not impressed by it.) But if you know 2π, then you know π, or vice-versa.
The next question: yeah, but I turned up -½π. What am I talking about 2π for? And the answer there is, I’m not the first person to try working out Wronski’s stuff. You can try putting the expression, as best you parse it, into a tool like Mathematica and see what makes sense. Or you can read, for example, Quora commenters giving answers with way less exposition than I do. And I’m convinced: somewhere along the line I messed up. Not in an important way, but, essentially, doing something equivalent to dividing by -2 when I should have multiplied by it.
I’ve spotted my mistake. I figure to come back around to explaining where it is and how I made it.
So now a bit more on Józef Maria Hoëne-Wronski’s attempted definition of π. I had got it rewritten to this form: -2 · x · 2^(1/(2x)) · sin(π/(4x)).
And I’d tried the first thing mathematicians do when trying to evaluate the limit of a function at a point. That is, take the value of that point and put it in whatever the formula is. If that formula evaluates to something meaningful, then that value is the limit. That attempt gave this: -2 · ∞ · 1 · 0.
Because the limit of ‘x’, for ‘x’ at ∞, is infinitely large. The limit of 2^(1/(2x)) for ‘x’ at ∞ is 1. The limit of sin(π/(4x)) for ‘x’ at ∞ is 0. We can take limits that are 0, or limits that are some finite number, or limits that are infinitely large. But multiplying a zero times an infinity is dangerous. Could be anything.
Mathematicians have a tool. We know it as L’Hôpital’s Rule. It’s named for the French mathematician Guillaume de l’Hôpital, who discovered it in the works of his tutor, Johann Bernoulli. (They had a contract giving l’Hôpital publication rights. If Wikipedia’s right, the book’s preface credited Bernoulli, though not specifically for this rule. The full story is more complicated and ambiguous. The previous sentence may be said about most things.)
So here’s the first trick. Suppose you’re finding the limit of something that you can write as the quotient of one function divided by another. So, something that looks like this: the limit, at ‘a’, of h(x) / g(x).
(Normally, this gets presented as ‘f(x)’ divided by ‘g(x)’. But I’m already using ‘f(x)’ for another function and I don’t want to muddle what that means.)
Suppose it turns out that at ‘a’, both ‘h(x)’ and ‘g(x)’ are zero, or both ‘h(x)’ and ‘g(x)’ are ∞. Zero divided by zero, or ∞ divided by ∞, looks like danger. It’s not necessarily so, though. If this limit exists, then we can find it by taking the first derivatives of ‘h’ and ‘g’, and evaluating the limit, at ‘a’, of h'(x) / g'(x).
That ‘ mark is a common shorthand for “the first derivative of this function, with respect to the only variable we have around here”.
This doesn’t look like it should help matters. Often it does, though. There’s an excellent chance that ‘h'(x)’ and ‘g'(x)’ aren’t simultaneously zero, or ∞, at ‘a’. And once that’s so, we’ve got a meaningful limit. This doesn’t always work. Sometimes we have to use this l’Hôpital’s Rule trick a second time, or a third or so on. But it works so very often for the kinds of problems we like to do. Reaches the point that if it doesn’t work, we have to suspect we’re calculating the wrong thing.
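A toy illustration of the rule (mine, not part of Wronski’s problem): sin(t)/t is a 0/0 shape at t = 0, and differentiating top and bottom gives cos(t)/1, which at 0 is 1. The quotient really does creep toward that value:

```python
import math

def ratio(t: float) -> float:
    """The 0/0-at-zero quotient sin(t) / t."""
    return math.sin(t) / t

# l'Hôpital says the limit is cos(0) / 1 = 1.  Numerically:
for t in (0.1, 0.01, 0.001):
    print(t, ratio(t))  # values approach 1
```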
But wait, you protest, reasonably. This is fine for problems where the limit looks like 0 divided by 0, or ∞ divided by ∞. What Wronski’s formula got me was 0 times 1 times ∞. And I won’t lie: I’m a little unsettled by having that 1 there. I feel like multiplying by 1 shouldn’t be a problem, but I have doubts.
That zero times ∞ thing, though? That’s easy. Here’s the second trick. Let me put it this way: isn’t ‘x’ really the same thing as 1/(1/x)?
I expect your answer is to slam your hand down on the table and glare at my writing with contempt. So be it. I told you it was a trick.
And it’s a perfectly good one. And it’s perfectly legitimate, too. 1/x is a meaningful number if ‘x’ is any finite number other than zero. So is 1/(1/x). Mathematicians accept a definition of limit that doesn’t really depend on the value of your expression at a point. So that 1/(1/x) wouldn’t be meaningful for ‘x’ at zero doesn’t mean we can’t evaluate its limit for ‘x’ at zero. And just because we might not be sure what 1/(1/x) would mean for infinitely large ‘x’ doesn’t mean we can’t evaluate its limit for ‘x’ at ∞.
I see you, person who figures you’ve caught me. The first thing I tried was putting the value ∞ in for ‘x’, all ready to declare that this was the limit of ‘f(x)’. I know my caveats, though. Plugging the value you want the limit at into the function whose limit you’re evaluating is a shortcut. If you get something meaningful, then that’s the same answer you would get finding the limit properly. Which is done by looking at the neighborhood around, but not at, that point. So that’s why this reciprocal-of-the-reciprocal trick works.
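The trick is easy to watch numerically. A sketch (my example, using the sine expression that’s about to matter): the ∞·0-shaped product and the 0/0-shaped quotient are the same numbers, and both creep toward π/4:

```python
import math

def product_form(x: float) -> float:
    """The ∞·0 shape: x times sin(π/(4x))."""
    return x * math.sin(math.pi / (4 * x))

def quotient_form(x: float) -> float:
    """The same thing rewritten as a 0/0 shape: sin(π/(4x)) divided by 1/x."""
    return math.sin(math.pi / (4 * x)) / (1 / x)

# Identical values, but the second is in l'Hôpital-friendly form:
print(product_form(1e6), quotient_form(1e6), math.pi / 4)
```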
So back to my function, which looks like this: -2 · x · 2^(1/(2x)) · sin(π/(4x)).
Do I want to replace ‘x’ with 1/(1/x), or do I want to replace sin(π/(4x)) with 1/(1/sin(π/(4x)))? I was going to say something about how many times in my life I’ve been glad to take the reciprocal of the sine of an expression of x. But just writing the symbols out like that makes the case better than being witty would.
So here is a new, L’Hôpital’s Rule-friendly, version of my version of Wronski’s formula: -2 · ( 2^(1/(2x)) · sin(π/(4x)) ) / (1/x).
I put that -2 out in front because it’s not really important. The limit of a constant number times some function is the same as that constant number times the limit of that function. We can put that off to the side, work on other stuff, and hope that we remember to bring it back in later. I manage to remember it about four-fifths of the time.
So these are the numerator and denominator functions I was calling ‘h(x)’ and ‘g(x)’ before: h(x) = 2^(1/(2x)) · sin(π/(4x)) and g(x) = 1/x.
The limit of both of these at ∞ is 0, just as we might hope. So we take the first derivatives. That for ‘g(x)’ is easy. Anyone who’s reached week three in Intro Calculus can do it. This may only be because she’s gotten bored and leafed through the formulas on the inside front cover of the textbook. But she can do it. It’s g'(x) = -1/x^2.
The derivative for ‘h(x)’ is a little more involved. ‘h(x)’ we can write as the product of two expressions, that 2^(1/(2x)) and that sin(π/(4x)). And each of those expressions contains within itself another expression, that 1/x. So this is going to require the Product Rule, of two expressions that each require the Chain Rule.
This is as far as I got with that before slamming my hand down on the table and glaring at the problem with disgust:
Yeah I’m not finishing that. Too much work. I’m going to reluctantly try thinking instead.
(If you want to do that work — actually, it isn’t much more past there, and if you followed that first half you’re going to be fine. And you’ll see an echo of it in what I do next time.)
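For the curious, here is that derivative ground out anyway (my own product-and-chain-rule work, checked against a finite difference rather than trusted on faith):

```python
import math

def h(x: float) -> float:
    """The numerator function: 2^(1/(2x)) · sin(π/(4x))."""
    return 2 ** (1 / (2 * x)) * math.sin(math.pi / (4 * x))

def h_prime(x: float) -> float:
    """Product rule over two chain rules:
    d/dx 2^(1/(2x))  = 2^(1/(2x)) · ln(2) · (-1/(2x^2))
    d/dx sin(π/(4x)) = cos(π/(4x)) · (-π/(4x^2))"""
    a = 2 ** (1 / (2 * x))
    s = math.sin(math.pi / (4 * x))
    c = math.cos(math.pi / (4 * x))
    return a * (-math.log(2) / (2 * x * x)) * s + a * c * (-math.pi / (4 * x * x))

# Sanity check against a centered finite difference at x = 3:
x, dx = 3.0, 1e-6
approx = (h(x + dx) - h(x - dx)) / (2 * dx)
print(h_prime(x), approx)  # should agree closely
```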
When I last looked at Józef Maria Hoëne-Wronski’s attempted definition of π I had gotten it to this. Take the function: f(x) = -2 · x · 2^(1/(2x)) · sin(π/(4x)).
And find its limit when ‘x’ is ∞. Formally, you want to do this by proving there’s some number, let’s say ‘L’. And ‘L’ has the property that you can pick any margin-of-error number ε that’s bigger than zero. And whatever that ε is, there’s some number ‘N’ so that whenever ‘x’ is bigger than ‘N’, ‘f(x)’ is larger than ‘L – ε’ and also smaller than ‘L + ε’. This can be a lot of mucking about with expressions to prove.
Fortunately we have shortcuts. There’s work we can do that gets us ‘L’, and we can rely on other proofs that show that this must be the limit of ‘f(x)’ at some value ‘a’. I use ‘a’ because that doesn’t commit me to talking about ∞ or any other particular value. The first approach is to just evaluate ‘f(a)’. If you get something meaningful, great! We’re done. That’s the limit of ‘f(x)’ at ‘a’. This approach is called “substitution” — you’re substituting ‘a’ for ‘x’ in the expression of ‘f(x)’ — and it’s great. Except that if your problem’s interesting then substitution won’t work. Still, maybe Wronski’s formula turns out to be lucky. Fit in ∞ where ‘x’ appears and we get: -2 · ∞ · 2^(1/(2·∞)) · sin(π/(4·∞)).
So … all right. Not quite there yet. But we can get there. For example, 1/(2·∞) has to be — well. It’s what you would expect if you were a kid and not worried about rigor: 0. We can make it rigorous if you like. (It goes like this: Pick any ε larger than 0. Then whenever ‘x’ is larger than 1/ε, 1/x is less than ε. So the limit of 1/x at ∞ has to be 0.) So let’s run with this: replace all those one-over-infinitely-large expressions with 0. Then we’ve got: -2 · ∞ · 2^0 · sin(0).
The sine of 0 is 0. 2^0 is 1. So substitution tells us the limit is -2 times ∞ times 1 times 0. That there’s an ∞ in there isn’t a problem. A limit can be infinitely large. Think of the limit of x^2 at ∞. An infinitely large thing times an infinitely large thing is fine. The limit of x·e^x at ∞ is infinitely large. A zero times a zero is fine; that’s zero again. But having an ∞ times a 0? That’s trouble. ∞ times something should be huge; anything times zero should be 0; which term wins?
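Although substitution fails, the limit itself behaves. A quick numeric look (my own check, using the function with the -2 out front as this series has it):

```python
import math

def f(x: float) -> float:
    """This series' version of Wronski's function: -2 · x · 2^(1/(2x)) · sin(π/(4x))."""
    return -2 * x * 2 ** (1 / (2 * x)) * math.sin(math.pi / (4 * x))

# Substitution gives the indeterminate -2 · ∞ · 1 · 0, but the
# function itself settles down: the values creep toward -π/2.
for x in (10.0, 1000.0, 100000.0):
    print(x, f(x))
```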
So we have to fall back on alternate plans. Fortunately there’s a tool we have for limits when we’d otherwise have to face an infinitely large thing times a zero.
I hope to write about this next time. I apologize for not getting through it today but time wouldn’t let me.
I remain fascinated with Józef Maria Hoëne-Wronski’s attempted definition of π. It had started out like this: π = (4∞/√-1) · ( (1 + √-1)^(1/∞) − (1 − √-1)^(1/∞) ).
And I’d translated that into something that modern mathematicians would accept without flinching. That is to evaluate the limit of a function that looks like this: f(x) = -4 · i · x · ( (1 + i)^(1/x) − (1 − i)^(1/x) ).
So. I don’t want to deal with that f(x) as it’s written. I can make it better. One thing that bothers me is seeing the complex number 1 + i raised to a power. I’d like to work with something simpler than that. And I can’t see that number without also noticing that I’m subtracting from it 1 − i raised to the same power. 1 + i and 1 − i are a “conjugate pair”. It’s usually nice to see those. It often hints at ways to make your expression simpler. That’s one of those patterns you pick up from doing a lot of problems as a mathematics major, and that then look like magic to the lay audience.
Here’s the first way I figure to make my life simpler. It’s in rewriting that 1 + i and 1 − i stuff so it’s simpler. It’ll be simpler by using exponentials. Shut up, it will too. I get there through Gauss, Descartes, and Euler.
At least I think it was Gauss who pointed out how you can match complex-valued numbers with points on the two-dimensional plane. On a sheet of graph paper, if you like. The number 1 + i matches to the point with x-coordinate 1, y-coordinate 1. The number 1 − i matches to the point with x-coordinate 1, y-coordinate -1. Yes, yes, this doesn’t sound like much of an insight Gauss had, but his work goes on. I’m leaving it off here because that’s all that I need for right now.
So these two numbers that offended me I can think of as points. They have Cartesian coordinates (1, 1) and (1, -1). But there’s never only one coordinate system for something. There may be only one that’s good for the problem you’re doing. I mean that makes the problem easier to study. But there are always infinitely many choices. For points on a flat surface like a piece of paper, and where the points don’t represent any particular physics problem, there’s two good choices. One is the Cartesian coordinates. In it you refer to points by an origin, an x-axis, and a y-axis. How far is the point from the origin in a direction parallel to the x-axis? (And in which direction? This gives us a positive or a negative number) How far is the point from the origin in a direction parallel to the y-axis? (And in which direction? Same positive or negative thing.)
The other good choice is polar coordinates. For that we need an origin and a positive x-axis. We refer to points by how far they are from the origin, heedless of direction. And then to get direction, what angle the line segment connecting the point with the origin makes with the positive x-axis. The first of these numbers, the distance, we normally label ‘r’ unless there’s compelling reason otherwise. The other we label ‘θ’. ‘r’ is always going to be a positive number or, possibly, zero. ‘θ’ might be any number, positive or negative. By convention, we measure angles so that positive numbers are counterclockwise from the x-axis. I don’t know why. I guess it seemed less weird for, say, the point with Cartesian coordinates (0, 1) to have a positive angle rather than a negative angle. That angle would be π/2, because mathematicians like radians more than degrees. They make other work easier.
So. The point 1 + i corresponds to the polar coordinates r = √2 and θ = π/4. The point 1 − i corresponds to the polar coordinates r = √2 and θ = -π/4. Yes, the θ coordinates being negative one times each other is common in conjugate pairs. Also, if you have doubts about my use of the word “the” before “polar coordinates”, well-spotted. If you’re not sure about that thing where ‘r’ is not negative, again, well-spotted. I intend to come back to that.
With the polar coordinates ‘r’ and ‘θ’ to describe a point I can go back to complex numbers. I can match the point to the complex number with the value given by r · e^(iθ), where ‘e’ is that old 2.71828something number. Superficially, this looks like a big dumb waste of time. I had some problem with imaginary numbers raised to powers, so now, I’m rewriting things with a number raised to imaginary powers. Here’s why it isn’t dumb.
It’s easy to raise a number written like this to a power. r · e^(iθ) raised to the n-th power is going to be equal to r^n · e^(iθ·n). (Because (a^b)^c = a^(b·c), and we’re going to go ahead and assume this stays true if ‘b’ is a complex-valued number. It does, but you’re right to ask how we know that.) And this turns into raising a real-valued number to a power, which we know how to do. And it involves multiplying the angle by that power, which is also easy.
And we can get back to something that looks like a + b·i too. That is, something that’s a real number plus ‘i’ times some real number. This is through one of the many Euler’s Formulas. The one that’s relevant here is that e^(iφ) = cos(φ) + i·sin(φ) for any real number ‘φ’. So, that’s true also for ‘θ’ times ‘n’. Or, looking to where everybody knows we’re going, also true for ‘θ’ divided by ‘x’.
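Python’s cmath module makes all of this concrete, if you want to poke at it (my illustration, not part of the original posts):

```python
import cmath
import math

# 1 + i in polar form: r = √2, θ = π/4.
r, theta = abs(1 + 1j), cmath.phase(1 + 1j)
assert math.isclose(r, math.sqrt(2)) and math.isclose(theta, math.pi / 4)

# Raising r·e^(iθ) to a power just powers r and scales θ:
x = 7.0
lhs = (1 + 1j) ** (1 / x)
rhs = r ** (1 / x) * cmath.exp(1j * theta / x)
print(lhs, rhs)  # the same complex number

# Euler's Formula: e^(iφ) = cos(φ) + i·sin(φ)
phi = 0.3
assert cmath.isclose(cmath.exp(1j * phi), complex(math.cos(phi), math.sin(phi)))
```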
OK, on to the people so anxious about all this. I talked about the angle made between the line segment that connects a point and the origin and the positive x-axis. “The” angle. “The”. If that wasn’t enough explanation of the problem, mention how your thinking’s done a 360 degree turn and you see it different now. In an empty room, if you happen to be in one. Your pedantic know-it-all friend is explaining it now. There’s an infinite number of angles that correspond to any given direction. They’re all separated by 360 degrees or, to a mathematician, 2π.
And more. What’s the difference between going out five units of distance in the direction of angle 0 and going out minus-five units of distance in the direction of angle -π? That is, between walking forward five paces while facing east and walking backward five paces while facing west? Yeah. So if we let ‘r’ be negative we’ve got twice as many infinitely many sets of coordinates for each point.
This complicates raising numbers to powers. θ times n might match with some point that’s very different from θ-plus-2-π times n. There might be a whole ring of powers. This seems … hard to work with, at least. But it’s, at heart, the same problem you get thinking about the square root of 4 and concluding it’s both plus 2 and minus 2. If you want “the” square root, you’d like it to be a single number. At least if you want to calculate anything from it. You have to pick out a preferred θ from the family of possible candidates.
For me, that’s whatever set of coordinates has ‘r’ that’s positive (or zero), and that has ‘θ’ between -π and π. Or between 0 and 2π. It could be any strip of numbers that’s 2π wide. Pick what makes sense for the problem you’re doing. It’s going to be the strip from -π to π. Perhaps the strip from 0 to 2π.
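For what it’s worth, Python’s cmath picks that same -π-to-π strip, which makes for an easy demonstration (mine, not the post’s):

```python
import cmath
import math

# The conjugate pair from above: the angles come back as π/4 and -π/4.
print(cmath.phase(1 + 1j) / math.pi)   # 0.25
print(cmath.phase(1 - 1j) / math.pi)   # -0.25

# A point on the negative real axis gets π, the top of the strip,
# rather than the equally valid -π:
print(cmath.phase(-1 + 0j) / math.pi)  # 1.0
```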
What this all amounts to is that I can turn this: (1 + i)^(1/x), into this: (√2 · e^(iπ/4))^(1/x), and likewise (1 − i)^(1/x) into (√2 · e^(-iπ/4))^(1/x), without changing its meaning any. Raising a number to the one-over-x power looks different from raising it to the n power. But the work isn’t different. The function I wrote out up there is the same as this function: f(x) = -4 · i · x · ( (√2)^(1/x) · e^(iπ/(4x)) − (√2)^(1/x) · e^(-iπ/(4x)) ).
I can’t look at that number, (√2)^(1/x), sitting there, multiplied by two things added together, and leave that. (OK, subtracted, but same thing.) I want to something something distributive law something and that gets us here: f(x) = -4 · i · x · (√2)^(1/x) · ( e^(iπ/(4x)) − e^(-iπ/(4x)) ).
Also, yeah, that square root of two raised to a power looks weird. I can turn that square root of two into “two to the one-half power”. That gets to this rewrite: f(x) = -4 · i · x · 2^(1/(2x)) · ( e^(iπ/(4x)) − e^(-iπ/(4x)) ).
And then. Those parentheses. e raised to an imaginary number minus e raised to minus-one-times that same imaginary number. This is another one of those magic tricks that mathematicians know because they see it all the time. Part of what we know from Euler’s Formula, the one I waved at back when I was talking about coordinates, is this: e^(iφ) − e^(-iφ) = 2i · sin(φ).
That’s good for any real-valued φ. For example, it’s good for the number π/(4x). And that means we can rewrite that function into something that, finally, actually looks a little bit simpler. It looks like this: f(x) = -2 · x · 2^(1/(2x)) · sin(π/(4x)).
And that’s the function whose limit I want to take at ∞. No, really.
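That last Euler’s-Formula step is easy to spot-check numerically (my sketch): e^(iφ) − e^(-iφ) should equal 2i·sin(φ) for any real φ, including φ = π/(4x):

```python
import cmath
import math

def lhs(phi: float) -> complex:
    """e^(iφ) minus e^(-iφ)."""
    return cmath.exp(1j * phi) - cmath.exp(-1j * phi)

def rhs(phi: float) -> complex:
    """2i times sin(φ)."""
    return 2j * math.sin(phi)

# Try it with the φ that shows up in the formula, π/(4x) at x = 5:
phi = math.pi / (4 * 5)
print(lhs(phi), rhs(phi))  # the same purely imaginary number
```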
I ran out of time to do my next bit on Wronski’s attempted definition of π. Next week, all goes well. But I have something to share anyway.
The author of the Boxing Pythagoras blog was intrigued by the starting point. And as a fan of studying how people understand infinity and infinitesimals (and how they don’t), this two-century-old example of mixing the enormous and the tiny set his course.
So here’s his essay, trying to work out Wronski’s beautiful weird formula from a non-standard analysis perspective. Non-standard analysis is a field that’s grown in the last fifty years. It’s probably fairly close in spirit to what (I think) Wronski might have been getting at, too. Non-standard analysis works with ideas that seem to match many people’s intuitive feelings about infinitesimals and infinities.
For example, can we speak of a number that’s larger than zero, but smaller than the reciprocal of any positive integer? It’s hard to imagine such a thing. But what if we can show that if we suppose such a number exists, then we can do this logically sound work with it? If you want to say that isn’t enough to show a number exists, then I have to ask how you know imaginary numbers or negative numbers exist.
Standard analysis, you probably guessed, doesn’t do that. It developed over the 19th century when the logical problems of these kinds of numbers seemed unsolvable. Mostly that’s done by limits, showing that a thing must be true whenever some quantity is small enough, or large enough. It seems safe to trust that the infinitesimally small is small enough, and the infinitely large is large enough. And it’s not like mathematicians back then were bad at their job. Mathematicians learned a lot of things about how infinitesimals and infinities work over the late 19th and early 20th century. It makes modern work possible.
Anyway, Boxing Pythagoras goes over what a non-standard analysis treatment of the formula suggests. I think it’s accessible even if you haven’t had much non-standard analysis in your background. At least it worked for me and I haven’t had much of the stuff. I think it’s also accessible if you’re good at following logical argument and won’t be thrown by Greek letters as variables. Most of the hard work is really arithmetic with funny letters. I recommend going and seeing if he did get to π.
A couple weeks ago I shared a fascinating formula for π. I got it from Carl B Boyer’s The History of Calculus and its Conceptual Development. He got it from Józef Maria Hoëne-Wronski, an early 19th-century Polish mathematician. His idea was that an absolute, culturally-independent definition of π would come not from thinking about circles and diameters but rather this formula: π = (4∞/√-1) · ( (1 + √-1)^(1/∞) − (1 − √-1)^(1/∞) ).
Now, this formula is beautiful, at least to my eyes. It’s also gibberish. At least it’s ungrammatical. Mathematicians don’t like to write stuff like “four times infinity”, at least not as more than a rough draft on the way to a real thought. What does it mean to multiply four by infinity? Is arithmetic even a thing that can be done on infinitely large quantities? Among Wronski’s problems is that they didn’t have a clear answer to this. We’re a little more advanced in our mathematics now. We’ve had a century and a half of rather sound treatment of infinitely large and infinitely small things. Can we save Wronski’s work?
Start with the easiest thing. I’m offended by those √-1 bits. Well, no, I’m more unsettled by them. I would rather have ‘i’ in there. The difference? … More taste than anything sound. I prefer, if I can get away with it, using the square root symbol to mean the positive square root of the thing inside. There is no positive square root of -1, so, pfaugh, away with it. Mere style? All right, well, how do you know whether those √-1 terms are meant to be ‘i’ or its additive inverse, ‘-i’? How do you know they’re all meant to be the same one? See? … As with all style preferences, it’s impossible to be perfectly consistent. I’m sure there are times I accept a big square root symbol over a negative or a complex-valued quantity. But I’m not forced to have it here so I’d rather not. First step: π = (4∞/i) · ( (1 + i)^(1/∞) − (1 − i)^(1/∞) ).
Also dividing by ‘i’ is the same as multiplying by ‘-i’, so the second easy step gives me: π = -4 · ∞ · i · ( (1 + i)^(1/∞) − (1 − i)^(1/∞) ).
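That dividing-by-i step is worth a two-line sanity check (mine):

```python
import cmath

# Dividing by i and multiplying by -i are the same operation:
z = 3 + 4j
print(z / 1j, z * -1j)  # both (4-3j)

# which is just because i · (-i) = 1:
print(1j * -1j == 1)  # True
```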
Now the hard part. All those infinities. I don’t like multiplying by infinity. I don’t like dividing by infinity. I really, really don’t like raising a quantity to the one-over-infinity power. Most mathematicians don’t. We have a tool for dealing with this sort of thing. It’s called a “limit”.
Mathematicians developed the idea of limits over … well, since they started doing mathematics. In the 19th century limits got sound enough that we still trust the idea. Here’s the rough way it works. Suppose we have a function which I’m going to name ‘f’ because I have better things to do than give functions good names. Its domain is the real numbers. Its range is the real numbers. (We can define functions for other domains and ranges, too. Those definitions look like what they do here.)
I’m going to use ‘x’ for the independent variable. It’s any number in the domain. I’m going to use ‘a’ for some point. We want to know the limit of the function “at a”. ‘a’ might be in the domain. But — and this is genius — it doesn’t have to be. We can talk sensibly about the limit of a function at some point where the function doesn’t exist. We can say “the limit of f at a is the number L”. I hadn’t introduced ‘L’ into evidence before, but … it’s a number. It has some specific set value. Can’t say which one without knowing what ‘f’ is and what its domain is and what ‘a’ is. But I know this about it.
Pick any error margin that you like. Call it ε because mathematicians do. However small this (positive) number is, there’s at least one neighborhood in the domain of ‘f’ that surrounds ‘a’. Check every point in that neighborhood other than ‘a’. The value of ‘f’ at all those points in that neighborhood other than ‘a’ will be larger than L – ε and smaller than L + ε.
Yeah, pause a bit there. It’s a tricky definition. It’s a nice common place to crash hard in freshman calculus. Also again in Intro to Real Analysis. It’s not just you. Perhaps it’ll help to think of it as a kind of mutual challenge game. Try this.
Suppose that you can pick any error bar, however tiny, and I can answer with a strip around ‘a’, however tiny, and every single ‘x’ inside my strip (other than ‘a’ itself) has an f(x) within your error bar. Then L is the limit of f at a.
Again, yes, tricky. But mathematicians haven’t found a better definition that doesn’t break something mathematicians need.
To write “the limit of f at a is L” we use the notation:
The ‘lim’ part probably makes perfect sense. And you can see where ‘f’ and ‘a’ have to enter into it. ‘x’ here is a “dummy variable”. It’s the falsework of the mathematical expression. We need some name for the independent variable. It’s clumsy to do without. But it doesn’t matter what the name is. It’ll never appear in the answer. If it does then the work went wrong somewhere.
What I want to do, then, is turn all those appearances of ‘∞’ in Wronski’s expression into limits of something at infinity. And having just said what a limit is I have to do a patch job. In that talk about the limit at ‘a’ I talked about a neighborhood containing ‘a’. What’s it mean to have a neighborhood “containing ∞”?
The answer is exactly what you’d think if you got this question and were eight years old. The “neighborhood of infinity” is “all the big enough numbers”. To make it rigorous, it’s “all the numbers bigger than some finite number that let’s just call N”. So you give me an error bar around ‘L’. I’ll give you back some number ‘N’. Every ‘x’ that’s bigger than ‘N’ has f(x) inside your error bars. And note that I don’t have to say what ‘f(∞)’ is or even commit to the idea that such a thing can be meaningful. I only ever have to think directly about values of ‘f(x)’ where ‘x’ is some real number.
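The challenge-and-response game can even be mechanized for an easy case. A sketch (my own, for f(x) = 1/x with L = 0, where answering with N = 1/ε works):

```python
def n_for(epsilon: float) -> float:
    """Respond to an error bar ε with an N: past it, 1/x stays within ε of 0."""
    return 1.0 / epsilon

def within_error_bar(x: float, epsilon: float, limit: float = 0.0) -> bool:
    """Is f(x) = 1/x inside the error bar around the claimed limit?"""
    return abs(1.0 / x - limit) < epsilon

# Whatever ε you pick, every x bigger than N lands inside the bar:
for eps in (0.5, 1e-3, 1e-9):
    n = n_for(eps)
    assert all(within_error_bar(n * k, eps) for k in (1.5, 10.0, 1e6))
```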
So! First, let me rewrite Wronski’s formula as a function, defined on the real numbers. Then I can replace each ∞ with the limit of something at infinity and … oh, wait a minute. There’s three ∞ symbols there. Do I need three limits?
Ugh. Yeah. Probably. This can be all right. We can do multiple limits. This can be well-defined. It can also be a right pain. The challenge-and-response game needs a little modifying to work. You still draw error bars. But I have to draw multiple strips. One for each of the variables. And every combination of values inside all those strips has to give an ‘f’ that’s inside your error bars. There’s room for great mischief. You can arrange combinations of variables that look likely to push ‘f’ outside the error bars.
So. Three independent variables, all taking a limit at ∞? That’s not guaranteed to be trouble, but I’d expect trouble. At least I’d expect something to keep the limit from existing. That is, we could find there’s no number ‘L’ so that this drawing-neighborhoods thing works for all three variables at once.
Let’s try. One of the ∞ will be a limit of a variable named ‘x’. One of them a variable named ‘y’. One of them a variable named ‘z’. Then: f(x, y, z) = -4 · i · x · ( (1 + i)^(1/y) − (1 − i)^(1/z) ).
Without doing the work, my hunch is: this is utter madness. I expect it’s probably possible to make this function take on many wildly different values by the judicious choice of ‘x’, ‘y’, and ‘z’. Particularly ‘y’ and ‘z’. You maybe see it already. If you don’t, you maybe see it now that I’ve said you maybe see it. If you don’t, I’ll get there, but not in this essay. But let’s suppose that it’s possible to make f(x, y, z) take on wildly different values like I’m getting at. This implies that there’s not any limit ‘L’, and therefore Wronski’s work is just wrong.
Thing is, Wronski wouldn’t have thought that. Deep down, I am certain, he thought the three appearances of ∞ were the same “value”. And that to translate him fairly we’d use the same name for all three appearances. So I am going to do that. I shall use ‘x’ as my variable name, and replace all three appearances of ∞ with the same variable and a common limit. So this gives me the single function: f(x) = -4 · i · x · ( (1 + i)^(1/x) − (1 − i)^(1/x) ).
And then I need to take the limit of this at ∞. If Wronski is right, and if I’ve translated him fairly, it’s going to be π. Or something easy to get π from.
I hope to get there next week.
I’ve been reading Carl B Boyer’s The History of Calculus and its Conceptual Development. It’s been slow going, because reading about how calculus’s ideas developed is hard. The ideas underlying it are subtle to start with. And the ideas have to be discussed using vague, unclear definitions. That’s not because dumb people were making arguments. It’s because these were smart people studying ideas at the limits of what we understood. When we got clear definitions we had the fundamentals of calculus understood. (By our modern standards. The future will likely see us as accepting strange ambiguities.) And I still think Boyer whiffs the discussion of Zeno’s Paradoxes in a way that mathematics and science-types usually do. (The trouble isn’t imagining that infinite series can converge. The trouble is that things are either infinitely divisible or they’re not. Either way implies things that seem false.)
Anyway. Boyer got to a part about the early 19th century. This was when mathematicians were discovering infinities and infinitesimals are amazing tools. Also that mathematicians should maybe learn whether they follow any rules. Because you can just plug symbols into formulas, grind out what it looks like they might mean, and get answers. Sometimes this works great. Grind through the formulas for solving cubic polynomials as though square roots of negative numbers make sense, and you get good results. Later, we worked out a coherent scheme of “complex-valued numbers” that justified it all. We can get lucky with infinities and infinitesimals, sometimes.
And this brought Boyer to an argument made by Józef Maria Hoëne-Wronski. He was a Polish mathematician whose fantastic ambition in … everything … didn’t turn out many useful results. Algebra, the Longitude Problem, building a rival to the railroad, even the Kosciuszko Uprising, none quite panned out. (And that’s not quite his name. The ‘n’ in ‘Wronski’ should have an acute mark over it. But WordPress’s HTML engine doesn’t want to imagine such a thing exists. Nor do many typesetters writing calculus or differential equations books, Boyer’s included.)
But anyone who studies differential equations knows his name, for a concept called the Wronskian. It’s a matrix determinant that anyone who studies differential equations hopes they won’t ever have to do after learning it. And, says Boyer, Wronski had this notion for an “absolute meaning of the number π”. (By “absolute” Wronski means one that’s not drawn from cultural factors like the weird human interest in circle perimeters and diameters. Compare it to the way we speak of “absolute temperature”, where the zero means something not particular to western European weather.)
I will admit I’m not fond of “real” alternate definitions of π. They seem to me mostly to signal how clever the definition-originator is. The only one I like at all defines π as the smallest positive root of the simple-harmonic-motion differential equation. (With the right starting conditions and all that.) And I’m not sure that isn’t “circumference over diameter” in a hidden form.
And yes, that definition, π = (4∞/√-1) · ( (1 + √-1)^(1/∞) − (1 − √-1)^(1/∞) ), is a mess of early-19th-century wild, untamed casualness in the use of symbols. But I admire the crazypants beauty of it. If I ever get a couple free hours I should rework it into something grammatical. And then see if, turned into something tolerable, Wronski’s idea is something even true.
Boyer allows that “perhaps” because of the strange notation and “bizarre use of the symbol ∞” Wronski didn’t make much headway on this point. I can’t fault people for looking at that and refusing to go further. But isn’t it enchanting as it is?
It was looking like another slow week for something so early in the (United States) school year. Then Comic Strip Master Command sent a flood of strips in for Friday and Saturday, so I’m splitting the load. It’s not a heavy one, as back-to-school jokes are on people’s minds. But here goes.
Marcus Hamilton and Scott Ketcham’s Dennis the Menace for the 3rd of September, 2017 is a fair strip for this early in the school year. It’s an old joke about making subtraction understandable.
Mark Anderson’s Andertoons for the 3rd is the Mark Anderson installment for this week, so I’m glad to have that. It’s a good old classic cranky-students setup and it reminds me that “unlike fractions” is a thing. I’m not quibbling with the term, especially not after the whole long-division mess a couple weeks back. I just hadn’t thought in a long while about how different denominators do make adding fractions harder.
Jeff Harris’s Shortcuts informational feature for the 3rd I couldn’t remember why I put on the list of mathematically-themed comic strips. The reason’s in there. There’s a Pi Joke. But my interest was more in learning that strawberries are a hybrid created in France from a North American and a Chilean breed. Isn’t that intriguing stuff?
Bill Abbott’s Specktickles for the 8th uses arithmetic — multiplication flash cards — as emblem of stuff to study. About all I can say for that.
To close out last week’s mathematically-themed comic strips … eh. There’s only a couple of them. One has a professor-y type and another has Albert Einstein. That’s enough for my subject line.
Joe Martin’s Mr Boffo for the 15th I’m not sure should be here. I think it’s a mathematics joke. That the professor’s shown with a pie chart suggests some kind of statistics, at least, and maybe the symbols are mathematical in focus. I don’t know. What the heck. I also don’t know how to link to these comics that gives attention to the comic strip artist. I like to link to the site from which I got the comic, but the Mr Boffo site is … let’s call it home-brewed. I can’t figure how to make it link to a particular archive page. But I feel bad enough losing Jumble. I don’t want to lose Joe Martin’s comics on top of that.
Charlie Podrebarac’s meat-and-Elvis-enthusiast comic Cow Town for the 15th is captioned “Elvis Disproves Relativity”. Of course it hasn’t anything to do with experimental results or even a good philosophical counterexample. It’s all about the famous equation. Have to expect that. Elvis Presley having an insight that challenges our understanding of why relativity should work is the stuff for sketch comedy, not single-panel daily comics.
Paul Trap’s Thatababy for the 15th has Thatadad win his fight with Alexa by using the old Star Trek Pi Gambit. To give a computer an unending task any number would work. Even the decimal digits of, say, five would do. They’d just be boring if written out in full, which is why we don’t. But irrational numbers at least give us a nice variety of digits. We don’t know that Pi is normal, but it probably is. So there should be a never-ending variety of what Alexa reels out here.
By the end of the strip Alexa has only got to the 55th digit of Pi after the decimal point — that’s what follows the digits shown in the second panel. (For this I used The Pi-Search Page, rather than working it out myself.) So the comic isn’t skipping any time.
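The kind of lookup The Pi-Search Page performs is easy to sketch. Here’s a toy version in Python; the digit string is the first 100 decimal digits of π, hardcoded, so anything past digit 100 reports as not found. (The function name and interface are my own invention, not the actual site’s.)

```python
# A toy Pi-Search: find where a digit string first appears in a
# hardcoded prefix of pi's decimal digits.
PI_DIGITS = (
    "1415926535897932384626433832795028841971"
    "6939937510582097494459230781640628620899"
    "86280348253421170679"
)

def pi_search(pattern):
    """1-based position of pattern in pi's decimal digits,
    or -1 if it's not in this 100-digit prefix."""
    i = PI_DIGITS.find(pattern)
    return i + 1 if i >= 0 else -1

print(pi_search("14159"))   # 1: pi starts 3.14159...
print(pi_search("999999"))  # -1: the run of six 9's comes after digit 100
```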
Gene Mora’s Graffiti for the 16th, if you count this as a comic strip, includes a pun, if you count this as a pun. Make of it what you like.
Mark Anderson’s Andertoons for the 17th is a student-misunderstanding-things problem. That’s a clumsy way to describe the joke. I should look for a punchier description, since there are a lot of mathematics comics that amount to the student getting a silly wrong idea of things. Well, I learned greater-than and less-than with alligators that eat the smaller number first. Though they turned into fish eating the smaller number first because who wants to ask a second-grade teacher to draw alligators all the time? Cartoon goldfish are so much easier.
The other half of last week’s comic strips didn’t have any prominent pets in them. The six of them appeared on two days, though, so that’s as good as a particular theme. There’s also some π talk, but there’s enough of that I don’t want to overuse Pi Day as an edition name.
Mark Anderson’s Andertoons for the 10th is a classroom joke. It’s built on a common problem in teaching by examples. The student can make the wrong generalization. I like the joke. There’s probably no particular reason seven was used as the example number to have zero interact with. Maybe it just sounded funnier than the other numbers under ten that might be used.
Mike Baldwin’s Cornered for the 10th uses a chalkboard of symbols to imply deep thinking. The symbols on the board look to me like they’re drawn from some real mathematics or physics source. There are force equations appropriate for gravity or electric interactions. I can’t explain the whole board, but that’s not essential to work out anyway.
Marty Links’s Emmy Lou for the 17th of March, 1976 was rerun the 10th of August. It name-drops the mathematics teacher as the scariest of the set. Fortunately, Emmy Lou went to her classes in a day before Rate My Professor was a thing, so her teacher doesn’t have to hear about this.
Scott Hilburn’s The Argyle Sweater for the 12th is a timely reminder that Scott Hilburn has way more Pi Day jokes than there are Pi Days to run them. Also he has octopus jokes. It’s up to you to figure out whether the etymology of the caption makes sense.
John Zakour and Scott Roberts’s Working Daze for the 12th presents the “accountant can’t do arithmetic” joke. People who ought to be good at arithmetic being lousy at figuring tips is an ancient joke. I’m a touch surprised that Christopher Miller’s American Cornball: A Laffopedic Guide to the Formerly Funny doesn’t have an entry for tips (or mathematics). But that might reflect Miller’s mission to catalogue jokes that have fallen out of the popular lexicon, not merely that are old.
Michael Cavna’s Warped for the 12th is also a Pi Day joke that couldn’t wait. It’s cute and should fit on any mathematics teacher’s office door.
As the 14th of March comes around it’s the time for mathematics bloggers to put up whatever they can about π. I will stir from my traditional crankiness about Pi Day (look, we don’t write days of the year as 3.14 unless we’re doing fake stardates) to bring back my two most π-relevant posts:
It was another of those curious weeks when Comic Strip Master Command didn’t send quite enough comics my way. Among those they did send were a couple of strips in pairs. I can work with that.
Samson’s Dark Side Of The Horse for the 26th is the Roman Numerals joke for this essay. I apologize to Horace for being so late in writing about Roman Numerals but I did have to wait for Cecil Adams to publish first.
In Jef Mallett’s Frazz for the 26th Caulfield ponders what we know about Pythagoras. It’s hard to say much about the historical figure: he built a cult that sounds outright daft around himself. But it’s hard to say how much of their craziness was actually their craziness, how much was just that any ancient society had a lot of what seems nutty to us, and how much was jokes (or deliberate slander) directed against some weirdos. What does seem certain is that Pythagoras’s followers attributed many of their discoveries to him. And what’s certain is that the Pythagorean Theorem was known, at least as a thing that could be used to measure things, long before Pythagoras was on the scene. I’m not sure if it was proved as a theorem or whether it was just known that making triangles with the right relative lengths meant you had a right triangle.
Greg Evans’s Luann Againn for the 28th of February — reprinting the strip from the same day in 1989 — uses a bit of arithmetic as generic homework. It’s an interesting change of pace that the mathematics homework is what keeps one from sleep. I don’t blame Luann or Puddles for not being very interested in this, though. Those sorts of complicated-fraction-manipulation problems, at least when I was in middle school, were always slogs of shuffling stuff around. They rarely got to anything we’d like to know.
Jef Mallett’s Frazz for the 1st of March is one of those little revelations that statistics can give one. Myself, I was always haunted by the line in Carl Sagan’s Cosmos about how, in the future, with the Sun ageing and (presumably) swelling in size and heat, the Earth would see one last perfect day. That there would most likely be quite fine days after that didn’t matter, and that different people might disagree on what made a day perfect didn’t matter. Setting out the idea of a “perfect day” and realizing there would someday be a last gave me chills. It still does.
Richard Thompson’s Poor Richard’s Almanac for the 1st and the 2nd of March have appeared here before. But I like the strip so I’ll reuse them too. They’re from the strip’s guide to types of Christmas trees. The Cubist Fur is described as “so asymmetrical it no longer inhabits Euclidean space”. Properly neither do we, but we can’t tell by eye the difference between our space and a Euclidean space. “Non-Euclidean” has picked up connotations of being so bizarre or even horrifying that we can’t hope to understand it. In practice, it means we have to go a little slower and think about, like, what would it look like if we drew a triangle on a ball instead of a sheet of paper. The Platonic Fir, in the 2nd of March strip, looks like a geometry diagram and I doubt that’s coincidental. It’s very hard to avoid thoughts of Platonic Ideals when one does any mathematics with a diagram. We know our drawings aren’t very good triangles or squares or circles especially. And three-dimensional shapes are worse, as see every ellipsoid ever done on a chalkboard. But we know what we mean by them. And then we can get into a good argument about what we mean by saying “this mathematical construct exists”.
Mark Litzler’s Joe Vanilla for the 3rd uses a chalkboard full of mathematics to represent the deep thinking behind a silly little thing. I can’t make any of the symbols out to mean anything specific, but I do like the way it looks. It’s quite well-done in looking like the shorthand that, especially, physicists would use while roughing out a problem. That there are subscripts with forms like “12” and “22” with a bar over them reinforces that. I would, knowing nothing else, expect this to represent some interaction between particles 1 and 2, and 2 with itself, and that the bar means some kind of complement. This doesn’t mean much to me, but with luck, it means enough to the scientist working it out that it could be turned into a coherent paper.
Bill Holbrook’s On The Fastrack is this week about the wedding of the accounting-minded Fi. And she’s having last-minute doubts, which is why the strip of the 3rd brings in irrational and anthropomorphized numerals. π gets called in to serve as emblematic of the irrational numbers. Can’t fault that. I think the only more famously irrational number is the square root of two, and π anthropomorphizes more easily. Well, you can draw an established character’s face onto π. The square root of 2 is, necessarily, at least two disconnected symbols and you don’t want to raise distracting questions about whether the root sign or the 2 gets the face.
That said, it’s a lot easier to prove that the square root of 2 is irrational. Even the Pythagoreans knew it, and a bright child can follow the proof. A really bright child could create a proof of it. To prove that π is irrational is not at all easy; it took mathematicians until the 18th century, when Johann Lambert managed it. And the best proof I know of the fact does it by a roundabout method. We prove that if a number (other than zero) is rational then the tangent of that number must be irrational. The tangent of π/4 is 1, which is rational, so π/4 can’t be rational, so therefore π must be irrational. I know you’ll all trust me on that argument, but I wouldn’t want to sell it to a bright child.
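That square-root-of-2 proof, the one even the Pythagoreans had, runs in sketch like this:

```latex
% Suppose \sqrt{2} = p/q with p, q whole numbers sharing no common factor.
\[
\sqrt{2} = \frac{p}{q} \;\Rightarrow\; p^2 = 2q^2,
\]
% so p^2 is even, hence p is even: write p = 2r. Then
\[
4r^2 = 2q^2 \;\Rightarrow\; q^2 = 2r^2,
\]
% so q is even too. But then p and q share the factor 2, contradicting
% the choice of lowest terms. So \sqrt{2} cannot be rational.
```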
Holbrook continues the thread on the 4th, extending the anthropomorphic-mathematics stuff to call people variables. There’s ways that this is fair. We use a variable for a number whose value we don’t know or don’t care about. A “random variable” is one that could take on any of a set of values. We don’t know which one it does, in any particular case. But we do know — or we can find out — how likely each of the possible values is. We can use this to understand the behavior of systems even if we never actually know what any one of them does. You see how I’m going to defend this metaphor, then, especially if we allow that what people are likely or unlikely to do will depend on context and evolve in time.
It’s another busy enough week for mathematically-themed comic strips that I’m dividing the harvest in two. There’s a natural cutting point since there weren’t any comics I could call relevant for the 15th. But I’m moving a Saturday Morning Breakfast Cereal, of course, from the 16th into this pile. That’s because there’s another Saturday Morning Breakfast Cereal, of course, from after the 16th that I might include. I’m still deciding if it’s close enough to on topic. We’ll see.
John Graziano’s Ripley’s Believe It Or Not for the 12th mentions the “Futurama Theorem”. The trivia is true, in that writer Ken Keeler did create a theorem for a body-swap plot he had going. The premise was that any two bodies could swap minds at most one time. So, after a couple people had swapped bodies, was there any way to get everyone back to their correct original body? There is, if you bring two more people in to the body-swapping party. It’s clever.
From reading comment threads about the episode I conclude people are really awestruck by the idea of creating a theorem for a TV show episode. The thing is that “a theorem” isn’t necessarily a mind-boggling piece of work. It’s just the name mathematicians give when we have a clearly-defined logical problem and its solution. A theorem and its proof can be a mind-wrenching bit of work, like Fermat’s Last Theorem or the Four-Color Map Theorem are. Or it can be on the verge of obvious. Keeler’s proof isn’t on the obvious side of things. But it is the reasoning one would have to do to solve the body-swap problem the episode posited without cheating. Logic and good story-telling are, as often, good partners.
Teresa Burritt’s Frog Applause is a Dadaist nonsense strip. But for the 13th it hit across some legitimate words, about a 14 percent false-positive rate. This is something run across in hypothesis testing. The hypothesis is something like “is whatever we’re measuring so much above (or so far below) the average that it’s not plausibly just luck?” A false positive is what it sounds like: our analysis said yes, this can’t just be luck, and it turns out it was just luck after all. This turns up most notoriously in medical screenings, when we want to know if there’s reason to suspect a health risk, and in forensic analysis, when we want to know if a particular person can be shown to have been at a particular place at a particular time. A 14 percent false positive rate doesn’t sound very good — except.
Suppose we are looking for a rare condition. Say, something one person out of 500 will have. A test that’s 99 percent accurate will turn up a positive for the one person who has got it and for about five of the 499 people who haven’t. It’s not that the test is bad; it’s just there are so many negatives to work through. If you can screen out a good number of the negatives, though, the people who haven’t got the condition, then the good test will turn up fewer false positives. So suppose you have a cheap or easy or quick test that doesn’t miss any true positives but does have a 14 percent false positive rate. That would screen out about 429 of the people who haven’t got whatever we’re testing for, leaving only 71 people who need the 99-percent-accurate test. This can make for a more effective use of resources.
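The arithmetic can be checked in a few lines. This sketch assumes, as the paragraph does, one true case per 500 people and a cheap screen that misses no true positives:

```python
# Back-of-envelope screening arithmetic.
# Assumptions: 1 true case per 500 people; the cheap screen has
# perfect sensitivity but flags 14% of the healthy people.
population = 500
true_cases = 1
healthy = population - true_cases              # 499

false_positive_rate = 0.14
false_flags = round(healthy * false_positive_rate)  # about 70
need_followup = true_cases + false_flags            # about 71
screened_out = healthy - false_flags                # about 429

print(need_followup, screened_out)
```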
Gary Wise and Lance Aldrich’s Real Life Adventures for the 13th is an algebra-in-real-life joke and I can’t make something deeper out of that.
Mike Shiell’s The Wandering Melon for the 13th is a spot of wordplay built around statisticians. Good for taping to the mathematics teacher’s walls.
Eric the Circle for the 14th, this one by “zapaway”, is another bit of wordplay. Tans and tangents.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 16th identifies, aptly, a difference between scientists and science fans. Weinersmith is right that loving trivia is a hallmark of a fan. Expertise — in any field, not just science — is more about recognizing patterns of problems and concepts, ways to bring approaches from one field into another, this sort of thing. And the digits of π are great examples of trivia. There’s no need for anyone to know the 1,681st digit of π. There are few calculations you could ever do where you needed more than three dozen digits. But if memorizing digits seems like fun then π is a great set to learn. e is the only other number at all compelling.
The thing is, it’s very hard to become an expert in something without first being a fan of it. It’s possible, but if a field doesn’t delight you why would you put that much work into it? So even though the scientist might have long since gotten past caring how many digits of π they know, it’s awfully hard to get something memorized in the flush of fandom out of your head.
I know you’re curious. I can only remember π out to 3.14158926535787962. I might have gotten farther if I’d tried, but I actually got a digit wrong, inserting a ‘3’ before that last ’62’, and the effort to get that mistake out of my head obliterated any desire to waste more time memorizing digits. For e I can only give you 2.718281828. But there’s almost no hope I’d know that far if it weren’t for how e happens to repeat that 1828 stanza right away.
Today’s A To Z term is another of gaurish’s requests. It’s also a fun one so I’m glad to have reason to write about it.
A normal number is any real number you never heard of.
Yeah, that’s not what we say a normal number is. But that’s what a normal number is. If we could imagine the real numbers to be a stream, and that we could reach into it and pluck out a water-drop that was a single number, we know what we would likely pick. It would be an irrational number. It would be a transcendental number. And it would be a normal number.
We know normal numbers — or we would, anyway — by looking at their representation in digits. For example, π is a number that starts out 3.1415926535897932384626433832795028841971 and so on forever. Look at those digits. Some of them are 1’s. How many? How many are 2’s? How many are 3’s? Are there more than you would expect? Are there fewer? What would you expect?
Expect. That’s the key. What should we expect in the digits of any number? The numbers we work with don’t offer much help. A whole number, like 2? That has a decimal representation of a single ‘2’ and infinitely many zeroes past the decimal point. Two and a half? A single ‘2’, a single ‘5’, and then infinitely many zeroes past the decimal point. One-seventh? Well, we get infinitely many 1’s, 4’s, 2’s, 8’s, 5’s, and 7’s. Never any 3’s, nor any 0’s, nor 6’s or 9’s. This doesn’t tell us anything about how often we would expect ‘8’ to appear in the digits of π.
In a normal number we get all the decimal digits. And we get each of them about one-tenth of the time. If all we had was a chart of how often digits turn up we couldn’t tell the summary of one normal number from the summary of any other normal number. Nor could we tell either from the summary of a perfectly uniform randomly drawn number.
It goes beyond single digits, though. Look at pairs of digits. How often does ’14’ turn up in the digits of a normal number? … Well, something like once for every hundred pairs of digits you draw from the number. Look at triplets of digits. ‘141’ should turn up about once in every thousand sets of three digits. ‘1415’ should turn up about once in every ten thousand sets of four digits. Any finite string of digits should turn up, and exactly as often as any other finite string of the same length.
That’s in the full representation, if you look at all the infinitely many digits the normal number has to offer. If all you have is a slice then some digits are going to be more common and some less common. That’s similar to how if you fairly toss a coin (say) forty times, there’s a good chance you’ll get tails something other than exactly twenty times. Look at the first 31 digits of π and there’s not a zero to be found. But as you survey more digits you get closer and closer to the expected average frequency. It’s the same way coin flips get closer and closer to 50 percent tails. Zero is absent from the first 31 digits. It’s about one-tenth of the first 3500 digits.
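You can watch the wobble in a short slice yourself. This Python sketch counts digit frequencies in the first 100 decimal digits of π (hardcoded; normality is a claim about the whole infinite expansion, so a slice this short can only hint at it):

```python
from collections import Counter

# First 100 decimal digits of pi, hardcoded.
PI_DIGITS = (
    "1415926535897932384626433832795028841971"
    "6939937510582097494459230781640628620899"
    "86280348253421170679"
)

counts = Counter(PI_DIGITS)
for d in "0123456789":
    print(d, counts[d])
# A normal number has each digit turning up about 1/10 of the time in
# the long run. In a slice this short the counts wobble: '9' shows up
# 14 times here, '0' only 8.
```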
The digits of a specific number are not random, not if we know what the number is. But we can be presented with a subset of its digits and have no good way of guessing what the next digit might be. That is getting into the same strange territory in which we can speak about the “chance” of a month having a Friday the 13th even though the appearances of Fridays the 13th have absolutely no randomness to them.
This has staggering implications. Some of them inspire an argument in science fiction Usenet newsgroup rec.arts.sf.written every two years or so. Probably it does so in other venues; Usenet is just my first home and love for this. In a minor point in Carl Sagan’s novel Contact possibly-imaginary aliens reveal there’s a pattern hidden in the digits of π. (It’s not in the movie version, which is a shame. But to include it would require people watching a computer. So that could not make for a good movie scene, we now know.) Look far enough into π, says the book, and there’s suddenly a string of digits that are nearly all zeroes, interrupted with a few ones. Arrange the zeroes and ones into a rectangle and it draws a pixel-art circle. And the aliens don’t know how something astounding like that could be.
Nonsense, respond the kind of science fiction reader that likes to identify what the nonsense in science fiction stories is. (Spoiler: it’s the science. In this case, the mathematics too.) In a normal number every finite string of digits appears. It would be truly astounding if there weren’t an encoded circle in the digits of π. Indeed, it would be impossible for there not to be infinitely many circles of every possible size encoded in every possible way in the digits of π. If the aliens are amazed by that, they would be amazed to find that every triangle has three corners.
I’m a more forgiving reader. And I’ll give Sagan this amazingness. I have two reasons. The first reason is on the grounds of discoverability. Yes, the digits of a normal number will have in them every possible finite “message” encoded every possible way. (I put the quotes around “message” because it feels like an abuse to call something a message if it has no sender. But it’s hard to not see as a “message” something that seems to mean something, since we live in an era that accepts the Death of the Author as a concept at least.) Pick your classic cypher ‘1 = A, 2 = B, 3 = C’ and so on, and take any normal number. If you look far enough into its digits you will find every message you might ever wish to send, every book you could read. Every normal number holds Jorge Luis Borges’s Library of Babel, and almost every real number is a normal number.
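The cypher is simple enough to sketch in Python. (The function name here is my own, just for illustration.)

```python
# The classic cypher: 1 = A, 2 = B, ..., 26 = Z. Any text becomes a
# digit string, and a normal number's digits must eventually contain
# that string somewhere.
def encode(message):
    """Turn an uppercase message into its 1=A, 2=B, ... digit string."""
    return "".join(str(ord(c) - ord('A') + 1) for c in message)

print(encode("HI"))   # "89"
print(encode("CAB"))  # "312"
```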
But. The key there is if you look far enough. Look above; the first 31 digits of π have no 0’s, when you would expect three of them. There’s no 22’s, even though that pair has as much right to appear as does 15, which does get in. And we will only ever know finitely many digits of π. It may be staggeringly many digits, sure. It already is. But it will never be enough to be confident that a circle, or any other long enough “message”, must appear. It is staggering that a detectable “message” that long should be in the tiny slice of digits that we might ever get to see.
And it’s harder than that. Sagan’s book says the circle appears in whatever base π gets represented in. So not only does the aliens’ circle pop up in base ten, but also in base two and base sixteen and all the other, even less important bases. The circle happening to appear in the accessible digits of π might be an imaginable coincidence in some base. There’s infinitely many bases, one of them has to be lucky, right? But to appear in the accessible digits of π in every one of them? That’s staggeringly impossible. I say the aliens are correct to be amazed.
Now to my second reason to side with the book. It’s true that any normal number will have any “message” contained in it. So who says that π is a normal number?
We think it is. It looks like a normal number. We have figured out many, many digits of π and they’re distributed the way we would expect from a normal number. And we know that nearly all real numbers are normal numbers. If I had to put money on it I would bet π is normal. It’s the clearly safe bet. But nobody has ever proved that it is, nor that it isn’t. Whether π is normal or not is a fit subject for conjecture. A writer of science fiction may suppose anything she likes about its normality without current knowledge saying she’s wrong.
It’s easy to imagine numbers that aren’t normal. Rational numbers aren’t, for example. If you followed my instructions and made your own transcendental number then you made a non-normal number. It’s possible that π should be non-normal. The first thirty million digits or so look good, though, if you think normal is good. But what’s thirty million against infinitely many possible counterexamples? For all we know, there comes a time when π runs out of interesting-looking digits and turns into an unpredictable little fluttering between 6 and 8.
It’s hard to prove that any numbers we’d like to know about are normal. We don’t know about π. We don’t know about e, the base of the natural logarithm. We don’t know about the natural logarithm of 2. There is a proof that the square root of two (and other non-square whole numbers, like 3 or 5) is normal in base two. But my understanding is it’s a nonstandard approach that isn’t quite satisfactory to experts in the field. I’m not expert so I can’t say why it isn’t quite satisfactory. If the proof’s authors or grad students wish to quarrel with my characterization I’m happy to give space for their rebuttal.
It’s much the way transcendental numbers were in the 19th century. We understand there to be this class of numbers that comprises nearly every number, yet we don’t have many examples. We’re still a bit short on interesting examples of transcendental numbers, for that matter. So maybe we’re not that badly off with normal numbers.
We can construct normal numbers. For example, there’s the Champernowne Constant. It’s the number you would make if you wanted to show you could make a normal number. It’s 0.12345678910111213141516171819202122232425 and I bet you can imagine how that develops from that point. (David Gawen Champernowne proved it was normal, which is the hard part.) There’s other ways to build normal numbers too, if you like. But those numbers aren’t of any interest except that we know them to be normal.
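The Champernowne Constant is also about the easiest “interesting” number to generate, since its digits are just the counting numbers written out in a row. A Python sketch:

```python
# Digits of the Champernowne Constant, 0.123456789101112...
# (Champernowne's proof that it's normal in base ten is the hard part;
# generating the digits is the easy part.)
def champernowne_digits(n):
    """First n digits after the decimal point of 0.12345678910111213..."""
    digits = ""
    k = 1
    while len(digits) < n:
        digits += str(k)
        k += 1
    return digits[:n]

print(champernowne_digits(25))  # 1234567891011121314151617
```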
Mere normality is tied to a base. A number might be normal in base ten (the way normal people write numbers) but not in base two or base sixteen (which computers and people working on computers use). It might be normal in base twelve, used by nobody except mathematics popularizers of the 1960s explaining bases, but not normal in base ten. There can be numbers normal in every base. They’re called “absolutely normal”. Nearly all real numbers are absolutely normal. Wacław Sierpiński constructed the first known absolutely normal number in 1917. If you got in on the fractals boom of the 80s and 90s you know his name, although without the Polish spelling. He did stuff with gaskets and curves and carpets you wouldn’t believe. I’ve never seen Sierpiński’s construction of an absolutely normal number. From my references I’m not sure if we know how to construct any other absolutely normal numbers.
So that is the strange state of things. Nearly every real number is normal. Nearly every number is absolutely normal. We know a couple normal numbers. We know at least one absolutely normal number. But we haven’t (to my knowledge) proved any number that’s otherwise interesting is also a normal number. This is why I say: a normal number is any real number you never heard of.
I didn’t make noise about it, but last Sunday’s mathematics comic strip roundup was short one day. I was away from home and normal computer stuff Saturday. So I posted without that day’s strips under review. There was just the one, anyway.
Also I want to remind folks I’m doing another Mathematics A To Z, and taking requests for words to explain. There are many appealing letters still unclaimed, including ‘A’, ‘T’, and ‘O’. Please put requests in over on that page, because it’s easier for me to keep track of what’s been claimed that way.
Matt Janz’s Out of the Gene Pool rerun for the 15th missed last week’s cut. It does mention the Law of Cosines, which is what the Pythagorean Theorem looks like if you don’t have a right triangle. You still have to have a triangle. Bobby-Sue recites the formula correctly, if you know the notation. The formula’s c² = a² + b² − 2ab·cos(C). Here ‘a’ and ‘b’ and ‘c’ are the lengths of legs of the triangle. ‘C’, the capital letter, is the size of the angle opposite the leg with length ‘c’. That’s a common notation. ‘A’ would be the size of the angle opposite the leg with length ‘a’. ‘B’ is the size of the angle opposite the leg with length ‘b’. The Law of Cosines is a generalization of the Pythagorean Theorem. It’s a result that tells us something like the original theorem but for cases the original theorem can’t cover. And if it happens to be a right triangle the Law of Cosines gives us back the original Pythagorean Theorem. In a right triangle C is the size of a right angle, and the cosine of that is 0.
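If you want to watch the reduction happen, here’s a quick Python check (the function name is mine, just for illustration):

```python
import math

def law_of_cosines(a, b, C):
    """Length of the side opposite angle C (in radians), given legs a and b."""
    return math.sqrt(a*a + b*b - 2*a*b*math.cos(C))

# With C a right angle, cos(C) is 0 and we get Pythagoras back:
print(law_of_cosines(3, 4, math.pi/2))  # very nearly 5.0
```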
That said Bobby-Sue is being fussy about the drawings. No geometrical drawing is ever perfectly right. The universe isn’t precise enough to let us draw a right triangle. Come to it we can’t even draw a triangle, not really. We’re meant to use these drawings to help us imagine the true, Platonic ideal, figure. We don’t always get there. Mock proofs, the kind of geometric puzzle showing something we know to be nonsense, rely on that. Give chalkboard art a break.
Samson’s Dark Side of the Horse for the 17th is the return of Horace-counting-sheep jokes. So we get a π joke. I’m amused, although I couldn’t sleep trying to remember digits of π out quite that far. I do better working out Collatz sequences.
Hilary Price’s Rhymes With Orange for the 19th at least shows the attempt to relieve mathematics anxiety. I’m sympathetic. It does seem like there should be ways to relieve this (or any other) anxiety, but finding which ones work, and which ones work best, is partly a mathematical problem. As often happens with Price’s comics I’m particularly tickled by the gag in the title panel.
Norm Feuti’s Gil rerun for the 19th builds on the idea calculators are inherently cheating on arithmetic homework. I’m sympathetic to both sides here. If Gil just wants to know that his answers are right there’s not much reason not to use a calculator. But if Gil wants to know that he followed the right process then the calculator’s useless. By the right process I mean, well, the work to be done. Did he start out trying to calculate the right thing? Did he pick an appropriate process? Did he carry out all the steps in that process correctly? If he made mistakes on any of those he probably didn’t get to the right answer, but it’s not impossible that he would. Sometimes multiple errors conspire and cancel one another out. That may not hurt you with any one answer, but it does mean you aren’t doing the problem right and a future problem might not be so lucky.
Zach Weinersmith’s Saturday Morning Breakfast Cereal rerun for the 19th has God crashing a mathematics course to proclaim there’s a largest number. We can suppose there is such a thing. That’s how arithmetic modulo a number is done, for one. It can produce weird results in which stuff we just naturally rely on doesn’t work anymore. For example, in ordinary arithmetic we know that if one number times another equals zero, then either the first number or the second, or both, were zero. We use this in solving polynomials all the time. But in arithmetic modulo 8 (say), 4 times 2 is equal to 0.
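A couple of lines of Python show the weirdness:

```python
# In ordinary arithmetic a product is zero only if a factor is zero.
# In arithmetic modulo 8 that breaks down:
print((4 * 2) % 8)  # 0, though neither 4 nor 2 is 0 modulo 8

# All the pairs of nonzero residues that multiply to zero modulo 8:
pairs = [(a, b) for a in range(1, 8) for b in range(1, 8)
         if (a * b) % 8 == 0]
print(pairs)  # [(2, 4), (4, 2), (4, 4), (4, 6), (6, 4)]
```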
And if we recklessly talk about “infinity” as a number then we get outright crazy results, some of them teased in Weinersmith’s comic. “Infinity plus one”, for example, is “infinity”. So is “infinity minus one”. And “infinity minus infinity” can be made out to be “infinity”, or maybe zero, or really any number you want. We can avoid these logical disasters — so far, anyway — by being careful. We have to understand that “infinity” is not a number, though we can use numbers growing infinitely large.
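Python’s floating-point infinity teases at the same paradoxes, though it’s a computing convention rather than a resolution of them:

```python
import math

inf = float('inf')
print(inf + 1 == inf)          # True: "infinity plus one" is still infinity
print(inf - 1 == inf)          # True: so is "infinity minus one"
print(math.isnan(inf - inf))   # True: "infinity minus infinity" is undefined
```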
Induction, meanwhile, is a great, powerful, yet baffling form of proof. When it solves a problem it solves it beautifully. And easily, too, usually by doing something like testing two special cases. Maybe three. At least a couple special cases of whatever you want to know. But picking the cases, and setting them up so that the proof is valid, is not easy. There’s logical pitfalls and it is so hard to learn how to avoid them.
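Induction’s two-step shape shows up in a standard first example (my choice, not one from the comic): proving 1 + 2 + ⋯ + n = n(n+1)/2.

```latex
% Base case: n = 1, and indeed 1 = \frac{1 \cdot 2}{2}.
% Inductive step: assume the formula holds for n; then
\[
\underbrace{1 + 2 + \cdots + n}_{\frac{n(n+1)}{2}\ \text{by hypothesis}} + (n+1)
  = \frac{n(n+1)}{2} + (n+1)
  = \frac{(n+1)(n+2)}{2},
\]
% which is the formula with n replaced by n + 1. Together the two steps
% cover every whole number n.
```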
Jon Rosenberg’s Scenes from a Multiverse for the 19th plays on a wonderful paradox of randomness. Randomness is … well, unpredictable. If I tried to sell you a sequence of random numbers and they were ‘1, 2, 3, 4, 5, 6, 7’ you’d be suspicious at least. And yet, perfect randomness will sometimes produce patterns. If there were no little patches of order we’d have reason to suspect the randomness was faked. There is no reason that a message like “this monkey evolved naturally” couldn’t be encoded into a genome by chance. It may just be so unlikely we don’t buy it. The longer the patch of order the less likely it is. And yet, incredibly unlikely things do happen. The study of impossibly unlikely events is a good way to quickly break your brain, in case you need one.
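You can watch randomness produce patches of order yourself. This Python sketch flips a fair coin a thousand times and finds the longest run of identical results; the seed is fixed only so the sketch is repeatable.

```python
import random

# Perfect randomness still produces little patches of order: in 1000
# fair coin flips, a surprisingly long run of heads or tails almost
# always turns up.
random.seed(13)  # fixed seed so the sketch is repeatable
flips = [random.choice("HT") for _ in range(1000)]

longest = run = 1
for a, b in zip(flips, flips[1:]):
    run = run + 1 if a == b else 1
    longest = max(longest, run)

print(longest)  # typically somewhere around 9 or 10 for 1000 flips
```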