How To Wreck Your Idea About What ‘Continuous’ Means


This attractive little tweet came across my feed yesterday:

This function — I guess it’s the “popcorn” function — is a challenge to our ideas about what a “continuous” function is. I’ve mentioned “continuous” functions before and said something like they’re functions you could draw without lifting your pen from the paper. That’s the colloquial, and the intuitive, idea of what they mean. And that’s all right for ordinary uses.

But the best definition mathematicians have thought of for a “continuous function” has some quirks. And here’s one of them. Define a function named ‘f’. Its domain is the real numbers. Its range is the real numbers. And the rule matching things in the domain to things in the range is, as pictured:

  • If ‘x’ is zero then f(x) = 1
  • If ‘x’ is an irrational number then f(x) = 0
  • If ‘x’ is a nonzero rational number, then it’s equal in lowest terms to the whole number ‘p’ divided by the positive whole number ‘q’. For this ‘x’, f(x) = \frac{1}{q}

And as the tweet from Fermat’s Library says, this is a function that’s continuous on all the irrational numbers. It’s not continuous on any rational numbers. This seems like a prank. But it’s a common approach to finding intuition-testing ideas about continuity. Setting different rules for rational and irrational numbers works well for making these strange functions. And thinking of rational numbers as their representation in lowest terms is also common. (Writing it as ‘p divided by q’ suggests that ‘p’ and ‘q’ are going to be prime, but, no! Think of \frac{3}{8} or of \frac{4}{9} .) If you stare at the plot you can maybe convince yourself that “continuous on the irrational numbers” makes sense here. That heavy line of dots at the bottom looks like it’s approaching a continuous blur, at least.
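If you would like to poke at the function yourself, here is a minimal sketch of the rule for rational inputs. (The name ‘popcorn’ is mine, not anything standard; Python’s Fraction type conveniently keeps everything in lowest terms, which is exactly the ‘p’ and ‘q’ the rule asks for.)

```python
from fractions import Fraction

def popcorn(x):
    """The rule above, evaluated at a rational x. (At an irrational x the
    value is 0, but a computer can only hand us rationals anyway, so this
    sketch sticks to Fractions.)"""
    x = Fraction(x)
    if x == 0:
        return Fraction(1)
    # Fraction reduces to lowest terms automatically, so the denominator
    # is the 'q' of the definition.
    return Fraction(1, x.denominator)

print(popcorn(Fraction(3, 8)))   # 1/8
print(popcorn(Fraction(4, 9)))   # 1/9
print(popcorn(0))                # 1
```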

It can get weirder. It’s possible to create a function that’s continuous at only a single point of all the real numbers. This is why Real Analysis is such a good subject to crash hard against. But we accept weird conclusions like this because the alternative is to give up calling “continuous” some functions that we just know have to be continuous. Mathematical definitions are things we make for our use.
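If you would like a concrete example of that, here is the standard textbook one (nothing specific to the tweet): take the function that matches every rational number ‘x’ to x itself, and every irrational number to 0. Near zero, every output is at least as close to zero as the input is, so the function is continuous there. Anywhere else, rational inputs give values near the point while irrational inputs give 0, and that gap never closes. Continuous at zero, and nowhere else.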

A Leap Day 2016 Mathematics A To Z: Fractions (Continued)


Another request! I was asked to write about continued fractions for the Leap Day 2016 A To Z. The request came from Keilah, of the Knot Theorist blog. But I’d already had a c-word request in (conjecture). So you see my elegant workaround to talk about continued fractions anyway.

Fractions (continued).

There are fashions in mathematics. There are fashions in all human endeavors. But mathematics almost begs people to forget that it is a human endeavor. Sometimes a field of mathematics will be popular a while and then fade. Some fade almost to oblivion. Continued fractions are one of them.

A continued fraction comes from a simple enough starting point. Start with a whole number. Add a fraction to it. 1 + \frac{2}{3}. Everyone knows what that is. But then look at the denominator. In this case, that’s the ‘3’. Why couldn’t that be a sum, instead? No reason. Imagine then the number 1 + \frac{2}{3 + 4}. Is there a reason we couldn’t have a fraction there, instead of the ‘4’? No reason beyond our own timidity. Let’s be courageous. Does 1 + \frac{2}{3 + \frac{4}{5}} even mean anything?

Well, sure. It’s getting a little hard to read, but 3 + \frac{4}{5} is a fine enough number. It’s 3.8. \frac{2}{3.8} is a less friendly number, but it’s a number anyway. It’s a little over 0.526. (Its decimal expansion never comes to an end; it repeats, every eighteen digits as it happens. But trust me, it’s a perfectly good number.) And we can add 1 to that easily. So 1 + \frac{2}{3 + \frac{4}{5}} means a number a slight bit more than 1.526.
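If you want to check that arithmetic, or push the stack a level or two deeper, the natural way is to work from the bottom of the stack upward. Here is a quick sketch; the function and its name are mine, just for illustration.

```python
from fractions import Fraction

def continued_fraction(wholes, numerators):
    """Evaluate a finite continued fraction: wholes[0] + numerators[0] /
    (wholes[1] + numerators[1] / (wholes[2] + ...)), working from the
    bottom of the stack upward."""
    value = Fraction(wholes[-1])
    for whole, numerator in zip(reversed(wholes[:-1]), reversed(numerators)):
        value = whole + Fraction(numerator) / value
    return value

x = continued_fraction([1, 3, 5], [2, 4])   # 1 + 2/(3 + 4/5)
print(x, float(x))                          # 29/19, about 1.5263
```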

Dare we replace the “5” in that expression with a sum? Better, with the sum of a whole number and a fraction? If we don’t fear being audacious, yes. Could we replace the denominator of that with another sum? Yes. Can we keep doing this forever, creating this never-ending stack of whole numbers plus fractions? … If we want an irrational number, anyway. If we want a rational number, this stack will eventually end. But suppose we feel like creating an infinitely long stack of continued fractions. Can we do it? Why not? Who dares, wins!

OK. Wins what, exactly?

Well … um. Continued fractions certainly had a fashionable time. John Wallis, the 17th century mathematician famous for introducing the ∞ symbol, and for an interminable quarrel with Thomas Hobbes over Hobbes’s attempts to reform mathematics, did much to establish continued fractions as a field of study. (He’s credited with inventing the field. But all claims to inventing something big are misleading. Real things are complicated and go back farther than people realize, and inventions are more ambiguous than people think.) The astronomer Christiaan Huygens showed how to use continued fractions to design better gear ratios. This may strike you as the dullest application of mathematics ever. Let it. It’s also important stuff. People who need to scale one movement to another need this.

In the 18th and 19th century continued fractions became interesting for higher mathematics. Continued fractions were the approach Leonhard Euler used to prove that e had to be irrational. That’s one of the superstar numbers of mathematics. Johann Heinrich Lambert used this to show that if θ is a rational number (other than zero) then the tangent of θ must be irrational. This is one path to showing that π must be irrational. Many of the astounding theorems of Srinivasa Ramanujan were about continued fractions, or ideas which built on continued fractions.

But since the early 20th century the field’s evaporated. I don’t have a good answer why. The best speculation I’ve heard is that the field seems to fit poorly into any particular topic. Continued fractions get interesting when you have an infinitely long stack of nesting denominators. You don’t want to work with infinitely long strings of things before you’ve studied calculus. You have to be comfortable with these things. But that means students don’t encounter it until college, at least. And at that point fractions seem beneath the grade level. There’s a handful of proofs best done by them. But those proofs can be shown as odd, novel approaches to these particular problems. Studying the whole field is hardly needed.

So, perhaps because it seems like an odd fit, the subject’s dried up and blown away. Even enthusiasts seem to be resigned to its oblivion. Professor Adam Van Tuyl, then at Queen’s University in Kingston, Ontario, composed a nice set of introductory pages about continued fractions. But the page is defunct. Dr Ron Knott has a more thorough page, though, and one with calculators that work well.

Will continued fractions make a comeback? Maybe. It might take the discovery of some interesting new results, or some better visualization tools, to reignite interest. Chaos theory, the study of deterministic yet unpredictable systems, first grew (we now recognize) in the 1890s. But it fell into obscurity. When we got some new theoretical papers and the ability to do computer simulations, it flowered again. For a time it looked ready to take over all mathematics, although we’ve got things under better control now. Could continued fractions do the same? I’m skeptical, but won’t rule it out.

Postscript: something you notice quickly with continued fractions is they’re a pain to typeset. We’re all right with 1 + \frac{2}{3 + \frac{4}{5}} . But after that the LaTeX engine that WordPress uses to render mathematical symbols is doomed. A real LaTeX engine gets another couple nested denominators in before the situation is hopeless. If you’re writing this out on paper, the way people did in the 19th century, that’s all right. But there’s no typing it out that way.

But notation is made for us, not us for notation. If we want to write a continued fraction in which the numerators are all 1, we have a bracket shorthand available. In this we would write 2 + \frac{1}{3 + \frac{1}{4 + \cdots }} as [2; 3, 4, … ]. The numbers listed are the whole numbers added at each level of the stack. Another option, and one that lends itself to having numerators which aren’t 1, is to write out a string of fractions. In this we’d write 2 + \frac{1}{3 +} \frac{1}{4 +} \frac{1}{\cdots + }. We have to trust people notice the + sign is in the denominator there. But if people know we’re doing continued fractions then they know to look for the peculiar notation.
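Here is a small sketch of how you might recover that bracket form from a rational number, by peeling off the whole part and flipping what remains. The function is mine, just for illustration; a rational input always reaches an end.

```python
from fractions import Fraction

def bracket_form(x, max_terms=10):
    """Expand a rational number into the bracket shorthand [a0; a1, a2, ...],
    meaning a0 + 1/(a1 + 1/(a2 + ...))."""
    x = Fraction(x)
    terms = []
    for _ in range(max_terms):
        whole = x.numerator // x.denominator   # the whole-number part
        terms.append(whole)
        x = x - whole
        if x == 0:          # a rational number's expansion stops eventually
            break
        x = 1 / x           # flip the leftover fraction and keep going
    return terms

print(bracket_form(Fraction(157, 68)))   # [2, 3, 4, 5], i.e. 2 + 1/(3 + 1/(4 + 1/5))
```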

Reading the Comics, June 4, 2015: Taking It Easy Edition


I do like looking for thematic links among the comic strips that mention mathematical topics that I gather for these posts. This time around all I can find is a theme of “nothing big going on”. I’m amused by some of them but don’t think there’s much depth to the topics. But I like them anyway.

Mark Anderson’s Andertoons gets its appearance here with the May 25th strip. And it’s a joke about the hatred of fractions. It’s a suitable one for posting in mathematics classes too, since it is right about naming three famous irrational numbers — pi, the “golden ratio” phi, and the square root of two — and the fact they can’t be written as fractions which use only whole numbers in the numerator and denominator. Pi is, well, pi. The square root of two is nice and easy to find, and has that famous legend about the Pythagoreans attached to it. And it’s probably the easiest number to prove is irrational.

The “golden ratio” is that number that’s about 1.618. It’s interesting because 1 divided by phi is about 0.618, and who can resist a symmetry like that? That’s about all there is to say for it, though. The golden ratio is otherwise a pretty boring number, really. It’s gained celebrity as an “ideal” ratio — that a rectangle with one side about 1.6 times as long as the other is somehow more appealing than other choices — but that’s rubbish. It’s a novelty act among numbers. Novelty acts are fun, but we should appreciate them for what they are.
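One way to see that the symmetry has to be exact, and not just a coincidence of the first few digits: \phi is the positive number satisfying \phi^2 = \phi + 1 . Divide both sides by \phi and you get \phi = 1 + \frac{1}{\phi} , which is to say \frac{1}{\phi} = \phi - 1 . So \phi and its reciprocal differ by exactly 1, decimal digits and all.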


The Short, Unhappy Life Of A Doomed Conjecture


So last month, amongst the talk about the radius of a circle inscribed in a Pythagorean right triangle, I mentioned that I had, briefly, floated a conjecture that might have spun off it. It didn’t pan out, but I promised to describe the chain of thought I had while exploring it, on the grounds that the process of coming up with mathematical ideas doesn’t get described much, and certainly doesn’t get described for the sorts of fiddling little things that make up a trifle like this.

[Figure: a right triangle with sides a, b, and c and an inscribed circle of radius r. Lines from the circle’s center to the triangle’s vertices divide the triangle into three smaller triangles, with bases of lengths a, b, and c respectively and all with the same height, r.]

The point from which I started was a question about the radius of a circle inscribed in the right triangle with sides of length 5, 12, and 13. This turns out to have a radius of 2, which is interesting because it’s a whole number. It turns out to be simple to show that for a Pythagorean right triangle, that is, a right triangle whose sides are a Pythagorean triple — like (3, 4, 5), or (5, 12, 13), any set where the square of the biggest number is the same as what you get adding together the squares of the two smaller numbers — the inscribed circle has a radius that’s a whole number. For example, the circle you could inscribe in a triangle of sides 3, 4, and 5 would have radius 1. The circle inscribed in a triangle of sides 8, 15, and 17 would have radius 3; so does the circle inscribed in a triangle of sides 7, 24, and 25.
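Here is a quick check of that claim, using nothing beyond the decomposition in the figure. The function is mine, just for illustration.

```python
def inradius(a, b, c):
    """Radius of the circle inscribed in a right triangle with legs a, b and
    hypotenuse c. The three smaller triangles meeting at the circle's center
    have total area r*(a+b+c)/2, and the whole triangle has area a*b/2, so
    r = a*b/(a+b+c)."""
    return a * b / (a + b + c)

for triple in [(3, 4, 5), (5, 12, 13), (8, 15, 17), (7, 24, 25)]:
    print(triple, inradius(*triple))   # radii 1.0, 2.0, 3.0, 3.0
```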

Since I now knew that (and in multiple ways: HowardAt58 had his own geometric solution, and you can also do this algebraically) I started to wonder about the converse. A Pythagorean right triangle’s inscribed circle has a whole number for a radius; does knowing that a circle has a whole number for a radius tell us anything about the triangle it’s inscribed in? This is an easy way to build new conjectures: given that “if A is true, then B must be true”, can it also be that “if B is true, then A must be true”? Only rarely will that be so — it’s neat when it is — but we might be able to patch something up, like, “if B, C, and D are all simultaneously true, then A must be true”, or perhaps, “if B is true, then at least E must be true”, where E resembles A but maybe doesn’t make such a strong claim. Thus are tiny little advances in mathematics created.


How Richard Feynman Got From The Square Root of 2 to e


I wanted to bring to greater prominence something that might have got lost in comments. Elke Stangl, author of the Theory And Practice Of Trying To Combine Just Anything blog, noticed that among the Richard Feynman Lectures on Physics, and available online, is his derivation of how to discover e — the base of natural logarithms — from playing around.

e is an important number, certainly, but it’s tricky to explain why it’s important; it hasn’t got a catchy definition like pi has, and even the description that most efficiently says why it’s interesting (“the base of the natural logarithm”) sounds perilously close to technobabble. As an explanation for why e should be interesting Feynman’s text isn’t economical — I make it out as something around two thousand words — but it’s a really good explanation since it starts from a good starting point.

That point is: it’s easy to understand what you mean by raising a number, say 10, to a positive whole-number power: 10^4 , for example, is four tens multiplied together. And it doesn’t take much work to extend that to negative numbers: 10^{-4} is one divided by the product of four tens multiplied together. Fractions aren’t too bad either: 10^{1/2} would be the number which, multiplied by itself, gives you 10. 10^{3/2} would be 10^{1/2} times 10^{1/2} times 10^{1/2} ; or if you think this is easier (it might be!), the number which, multiplied by itself, gives you 10^3 . But what about the number 10^{\sqrt{2}} ? And if you can work that out, what about the number 10^{\pi} ?

There’s a pretty good, natural way to go about working that out, and as Feynman shows, doing so reveals there’s something special about some particular number pretty close to 2.71828.
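Here is a sketch in that spirit, though it is my own toy and not Feynman’s actual procedure: build 10^{\sqrt{2}} out of nothing fancier than repeated square roots and multiplication, picking off the binary digits of the exponent.

```python
import math

def power_by_square_roots(base, exponent, bits=40):
    """Approximate base**exponent using only whole-number powers, repeated
    square roots, and multiplication. (A sketch under my own assumptions,
    not the lecture's exact procedure.)"""
    whole = int(exponent)
    frac = exponent - whole
    result = base ** whole          # the easy, whole-number part
    root = base
    for _ in range(bits):
        root = math.sqrt(root)      # root is now base**(1/2), base**(1/4), ...
        frac *= 2
        if frac >= 1:               # this binary digit of the exponent is a 1
            result *= root
            frac -= 1
    return result

print(power_by_square_roots(10, math.sqrt(2)))   # roughly 25.95
print(10 ** math.sqrt(2))                        # library answer, for comparison
```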

I Know The Square Root Of Five, Too


Now I want to do a little more complicated problem of showing two numbers are equal because the difference between them is so tiny. It struck me that if I wanted to do that, I’d have to do some setup to even start. What I really meant to do was to show that some number was equal to the square root of five. I picked the square root of five because I had it burned into my memory from a children’s book that knowing the first few digits of an irrational number would be sufficient to immobilize the mind-controlled population of an all-powerful computer dictator, and I’ve kept it in mind just in case ever since. I’m also glad to know on double-checking that I remembered the first couple digits of the square root of five well (2.236). I’m shakier on the square root of seven (2.something) so if it’s a more advanced computer we’re up against I’m in trouble.

Still, most square roots would do. It’s a neat little property of the whole numbers that the square roots of them are either whole numbers themselves — the square root of 4 is 2, the square root of 169 is 13, the square root of 4,153,444 is not worth thinking about — or else they’re irrational numbers, going on without ending and without repetition. Most people who’d read a mathematics blog on purpose have heard about how the irrationality of the square root of 2 was proven in ancient days, and maybe heard the story of how the Pythagoreans murdered the person who let slip the horrifying secret that there were irrational numbers and they represented real things that might be of interest, and a few are even aware we don’t really know with certainty that the story’s actually true. (At this point, I suspect it’s too strong a claim to say we know anything about the Pythagoreans for certain, but I haven’t looked closely. Maybe matters are not quite that dismal.) Whether true or not the legend of the Pythagoreans turning to murder is a fine way to get an algebra class’s attention. I just fear that what the students take away from it is, “if you learn any of this math stuff a cabal of mathematicians will murder you” and they stay oblivious for reasonable self-protection.

But anyone who’s understood a proof that the square root of two is irrational is perfectly able to show that the square root of three is irrational as well, or the square root of five, or any other such desired number. The proof that way runs just about the same route, but takes longer to get there.
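For the record, here is roughly how that route goes for five. Suppose \sqrt{5} were the rational number \frac{p}{q} , written in lowest terms. Then p^2 = 5 q^2 , so 5 divides p^2 ; and since 5 is prime, 5 must divide p itself. Write p = 5k ; then 25 k^2 = 5 q^2 , so q^2 = 5 k^2 , and 5 divides q too. But then p and q weren’t in lowest terms after all, which is the contradiction we wanted.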

Similarly, if you have a rational number that comes to an end, such as 0.49, then the square root either is a rational number that comes to an end, in this case 0.7; or else it never comes to an end and never repeats. That’s easy to prove, if you have that idea about the square roots of whole numbers. The square root of 4.9, for example, is not a rational number, although I can’t promise anything for its ability to halt world-spanning computers.
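Here is a small sketch of that claim, leaning on Python’s Fraction type to keep numbers in lowest terms. The function is mine, just for illustration.

```python
from fractions import Fraction
from math import isqrt

def rational_square_root(x):
    """Return the square root of a nonnegative fraction if that root is
    rational, else None. In lowest terms, a fraction is the square of a
    rational exactly when its numerator and denominator are both perfect
    squares; and if x's decimal terminates, so does its rational root's."""
    x = Fraction(x)
    n, d = x.numerator, x.denominator
    if isqrt(n) ** 2 == n and isqrt(d) ** 2 == d:
        return Fraction(isqrt(n), isqrt(d))
    return None   # the square root is irrational

print(rational_square_root(Fraction(49, 100)))   # 7/10, which is 0.7
print(rational_square_root(Fraction(49, 10)))    # None: sqrt(4.9) is irrational
```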