My 2019 Mathematics A To Z: Julia set


Today’s A To Z term is my pick again. So I choose the Julia Set. This is named for Gaston Julia, one of the pioneers in chaos theory and fractals. He was born earlier than you imagine. No, earlier than that: he was born in 1893.

The early 20th century saw amazing work done. We think of chaos theory and fractals as modern things, things that require vast computing power to understand. The computers help, yes. But the foundational work was done more than a century ago. Some of these pioneering mathematicians may have been able to get some numerical computing done. But many did not. They would have to do the hard work of thinking about things which they could not visualize. Things which surely did not look like they imagined.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written ‘MATHEMATIC A TO Z’; the kite, with the letter ‘S’ on it, completes the word ‘MATHEMATICS’.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Julia set.

We think of things as moving. Even static things we consider as implying movement. Else we’d think it odd to ask, “Where does that road go?” This carries over to abstract things, like mathematical functions. A function is a domain, a range, and a rule matching things in the domain to things in the range. It “moves” things as much as a dictionary moves words.

Yet we still think of a function as expressing motion. A common way for mathematicians to write functions uses little arrows, and describes what’s done as “mapping”. We might write f: D \rightarrow R . This is a general idea. We’re expressing that it maps things in the set D to things in the set R. We can use the notation to write something more specific. If ‘z’ is in the set D, we might write f : z \rightarrow z^2 + \frac{1}{2} . This describes the rule that matches things in the domain to things in the range. f(2) represents the evaluation of this rule at a specific point, the one where the independent variable has the value ‘2’. f(z) represents the evaluation of this rule at a specific point without committing to what that point is. f(D) represents a collection of points. It’s the set you get by evaluating the rule at every point in D.
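If it helps to see all that in code, here is the same rule as a tiny Python sketch; the names f and D are just the ones from the paragraph above:

```python
def f(z):
    """The rule matching z to z^2 + 1/2."""
    return z ** 2 + 0.5

D = {0, 1, 2}
print(f(2))               # evaluating the rule at the specific point 2: gives 4.5
print({f(z) for z in D})  # f(D): the rule evaluated at every point of D
```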

And it’s not bad to think of motion. Many functions are models of things that move. Particles in space. Fluids in a room. Populations changing in time. Signal strengths varying with a sensor’s position. Often we’ll calculate the development of something iteratively, too. If the domain and the range of a function are the same set? There’s no reason that we can’t take our z, evaluate f(z), and then take whatever that thing is and evaluate f(f(z)). And again. And again.

My age cohort, at least, learned to do this almost instinctively when we discovered you could take the result on a calculator and hit a function key again. Calculate something and keep hitting square root; you get a string of numbers that eventually settles on 1, unless you started at zero. Calculate something and keep hitting square; you settle at 0 or 1, or grow off to infinity. Hitting sine over and over … well, that was interesting, since you might seem to settle on 0 or some other, weird number. Same with tangent. Cosine, though, wouldn’t settle down to zero.
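If your calculator has gone missing, a few lines of Python recreate the game; mash is a made-up name, and forty presses is an arbitrary amount of button-mashing:

```python
import math

def mash(button, start, presses=40):
    """Hit the same function key over and over, starting from some number."""
    x = start
    for _ in range(presses):
        x = button(x)
    return x

print(mash(math.sqrt, 5.0))         # settles on 1
print(mash(lambda x: x * x, 0.5))   # dwindles to 0
print(mash(math.cos, 5.0))          # settles near 0.739085
```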

Serious mathematicians look at this stuff too, though. Take any set ‘D’, and find what its image is, f(D). Then iterate this, figuring out what f(f(D)) is. Then f(f(f(D))). f(f(f(f(D)))). And so on. What happens if you keep doing this? Like, forever?

We can say some things, at least. Even without knowing what f is. There could be a part of D that all these many iterations of f will send out to infinity. There could be a part of D that all these many iterations will send to some fixed point. And there could be a part of D that just keeps getting shuffled around without ever finishing.

Some of these might not exist. Like, f: z \rightarrow z + 4 doesn’t have any fixed points or shuffled-around points. It sends everything off to infinity. f: z \rightarrow \frac{1}{10} z has only a fixed point; nothing from it goes off to infinity and nothing’s shuffled back and forth. f: z \rightarrow -z has a fixed point and a lot of points that shuffle back and forth.

Thinking about these fixed points and these shuffling points gets us Julia Sets. These sets are the fixed points and shuffling-around points for certain kinds of functions. These functions are ones that have domain and range of the complex-valued numbers. Complex-valued numbers are the sum of a real number plus an imaginary number. A real number is just what it says on the tin. An imaginary number is a real number multiplied by \imath . What is \imath ? It’s the imaginary unit. It has the neat property that \imath^2 = -1 . That’s all we need to know about it.

Oh, also, zero times \imath is zero again. So if you really want, you can say all real numbers are complex numbers; they’re just themselves plus 0 \imath . Complex-valued functions are worth a lot of study in their own right. Better, they’re easier to study (at the introductory level) than real-valued functions are. This is such a relief to the mathematics major.

And now let me explain some little nagging weird thing. I’ve been using ‘z’ to represent the independent variable here. You know, using it as if it were ‘x’. This is a convention mathematicians use, when working with complex-valued numbers. An arbitrary complex-valued number tends to be called ‘z’. We haven’t forgotten x, though. We just in this context use ‘x’ to mean “the real part of z”. We also use “y” to carry information about the imaginary part of z. When we write ‘z’ we hold in trust an ‘x’ and ‘y’ for which z = x + y\imath . This all comes in handy.

But we still don’t have Julia Sets for every complex-valued function. We need it to be a rational function. The name evokes rational numbers, but that doesn’t seem like much guidance. f:z \rightarrow \frac{3}{5} is a rational function. It seems too boring to be worth studying, though, and it is. A “rational function” is a function that’s one polynomial divided by another polynomial. This whether they’re real-valued or complex-valued polynomials.

So. Start with an ‘f’ that’s one complex-valued polynomial divided by another complex-valued polynomial. Start with the domain D, all of the complex-valued numbers. Find f(D). And f(f(D)). And f(f(f(D))). And so on. If you iterated this ‘f’ without limit, what’s the set of points that never go off to infinity? Strictly speaking, for the polynomial case at least, that’s the ‘filled’ Julia Set for that function ‘f’; the Julia Set proper is the boundary of that set, the points teetering between staying put and escaping. Pop mathematics, and this essay, blur the distinction.

There are some famous Julia sets, though. There are the Julia sets that we heard about during the great fractal boom of the 1980s. This was when computers got cheap enough, and their graphic abilities good enough, to automate the calculation of points in these sets. At least to approximate the points in these sets. And these are based on some nice, easy-to-understand functions. First, you have to pick a constant C. This C is drawn from the complex-valued numbers. But that can still be, like, ½, if that’s what interests you. For whatever your C is? Define this function:

f_C: z \rightarrow z^2 + C

And that’s it. Yes, this is a rational function. The numerator function is z^2 + C . The denominator function is 1 .

This produces many different patterns. If you picked C = 0, you get a circle. Good on you for starting out with something you could double-check. If you picked C = -2? You get a long skinny line, again, easy enough to check. If you picked C = -1? Well, now you have a nice interesting weird shape, several bulging ovals with peninsulas of other bulging ovals all over. Pick other numbers. Pick numbers with interesting imaginary components. You get pinwheels. You get jagged streaks of lightning. You can even get separate islands, whole clouds of disjoint threatening-looking blobs.
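If you’d like to grow your own, the 1980s approach still works: iterate f_C at a grid of starting points and keep the ones that haven’t fled past a safe radius (for this family, once |z| exceeds 2 it’s never coming back, at least for the usual |C| \le 2 ). A rough text-mode sketch in Python, with the grid size and iteration cap as arbitrary knobs; this draws the filled set:

```python
def filled_julia(c, size=60, span=1.8, max_iter=80):
    """Character-cell picture of the filled Julia set of z -> z*z + c."""
    picture = []
    for i in range(size):
        row = []
        for j in range(size):
            # Map the cell (i, j) to a complex starting point in the square.
            z = complex(-span + 2 * span * j / (size - 1),
                        -span + 2 * span * i / (size - 1))
            for _ in range(max_iter):
                if abs(z) > 2.0:        # escaped for good (when |c| <= 2)
                    row.append(' ')
                    break
                z = z * z + c
            else:
                row.append('#')         # still bounded at the iteration cap
        picture.append(''.join(row))
    return '\n'.join(picture)

print(filled_julia(-1.0 + 0.0j))        # the bulging ovals for C = -1
```

Terminal characters are taller than they are wide, so the picture comes out stretched; it’s the shape, not the proportions, that matters here.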

There is some predicting what you’ll get, though. If you work out a Julia Set for a particular C, you’ll see a similar-looking Julia Set for a different C that’s very close to it. This is a comfort.

You can create a Julia Set for any rational function. I’ve only ever seen anyone actually do it for functions that look like what we already had. z^3 + C . Sometimes z^4 + C . I suppose once, in high school, I might have tried z^5 + C but I don’t remember what it looked like. If someone’s done, say, \frac{1}{z^2 + C} please write in and let me know what it looks like.

The Julia Set has a famous partner. Maybe the most famous fractal of them all, the Mandelbrot Set. That’s the strange blobby sea surrounded by lightning bolts that you see on the cover of every pop mathematics book from the 80s and 90s. If a C gives us a Julia Set that’s one single, contiguous patch? Then that C is in the Mandelbrot Set. Also vice-versa.

The ideas behind these sets are old. Julia’s paper about the iterations of rational functions first appeared in 1918. Julia died in 1978, the same year that the first computer rendering of the Mandelbrot set was done. I haven’t been able to find whether that rendering existed before his death. Nor have I decided which I would think the better sequence.


Thanks for reading. All of Fall 2019 A To Z posts should be at this link. And next week I hope to get to the letters ‘K’ and ‘L’. Sunday, yes, I hope to get back to the comics.

Reading the Comics, January 3, 2018: Explaining Things Edition


There were a good number of mathematically-themed comic strips in the syndicated comics last week. Those from the first part of the week gave me topics I could really sink my rhetorical teeth into, too. So I’m going to lop those off into the first essay for last week and circle around to the other comics later on.

Jef Mallett’s Frazz started a week of calendar talk on the 31st of December. I’ve usually counted that as mathematical enough to mention here. The 1st of January as we know it derives, as best I can figure, from the 1st of January as Julius Caesar established it for 45 BCE. This was the first Roman calendar to run basically automatically. Its length was quite close to the solar year’s length. It had leap days added according to a rule that should have been easy enough to understand (one day every fourth year). Before then the Roman calendar year drifted far enough off the solar year that it had to be kept in sync by interventions. Mostly, by that time, adding a short extra month to put things more nearly right. This had gotten all confusingly messed up, and Caesar took the chance to set things right, stretching 46 BCE to 445 days long.

But why 445 and not, say, 443 or 457? And I find on research that my recollection might not be right. That is, I recall that the plan was to set the 1st of January, Reformed, to the first new moon after the winter solstice. A choice that makes sense only for that one year, since where to set the 1st is literally arbitrary. While that apparently passes astronomical muster (the new moon as seen from Rome would have been just after midnight on the 2nd of January, but hitting the night of 1/2 January is good enough), there’s apparently dispute about whether that was the objective. It might have been to set the winter solstice to the 25th of December. Or it might have been that the extra days neatly matched the length of two intercalated months that by rights should have gone into earlier years. It’s a good reminder of the difficulty of reading motivation.

Brian Fies’s The Last Mechanical Monster for the 1st of January, 2018, continues his story about the mad scientist from the Fleischer studios’ first Superman cartoon, back in 1941. In this panel he’s describing how he realized, over the course of his long prison sentence, that his intelligence was fading with age. He uses the ability to do arithmetic in his head as proof of that. These types never try naming, like, rulers of the Byzantine Empire. Anyway, to calculate the cube root of 50,653 in his head? As he used to be able to do? … guh. It’s not the sort of mental arithmetic that I find fun.

But I could think of a couple ways to do it. The one I’d use is based on a technique called Newton-Raphson iteration that can often be used to find where a function’s value is zero. Raphson here is Joseph Raphson, a late 17th century English mathematician known for the Newton-Raphson method. Newton is that falling-apples fellow. It’s an iterative scheme because you start with a guess about what the answer would be, and do calculations to make the answer better. I don’t say this is the best method, but it’s the one that demands I remember the least stuff to re-generate the algorithm. And it’ll work for any positive number ‘A’ and any n-th root.

So you want the n-th root of ‘A’. Start with your current guess about what this root is. (If you have no idea, try ‘1’ or ‘A’.) Call that guess ‘x’. Then work out this number:

\frac{1}{n}\left( (n - 1) \cdot x + \frac{A}{x^{n - 1}} \right)

Ta-da! You have, probably, now a better guess of the n-th root of ‘A’. If you want a better guess yet, take the result you just got and call that ‘x’, and go back to calculating that again. Stop when you feel like your answer is good enough. This is going to be tedious but, hey, if you’re serving a prison term of the length of US copyright you’ve got time. (It’s possible with this sort of iterator to get a worse approximation, although I don’t think that happens with the n-th root process. Most of the time, a couple more iterations will get you back on track.)
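For the impatient, or the paroled, the scheme is a few lines of Python; a sketch, with nth_root a hypothetical name and the stopping test the lazy one of quitting when the guess stops moving:

```python
def nth_root(a, n, x=1.0):
    """Newton-Raphson iteration for the n-th root of a positive number a."""
    while True:
        better = ((n - 1) * x + a / x ** (n - 1)) / n
        if abs(better - x) < 1e-12:
            return better
        x = better

print(nth_root(50653, 3))   # 37.0, as the mad scientist once did in his head
```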

But that’s work. Can we think instead? Now, most n-th roots of whole numbers aren’t going to be whole numbers. Most integers aren’t perfect powers of some other integer. If you think 50,653 is a perfect cube of something, though, you can say some things about it. For one, it’s going to have to be a two-digit number. 10^3 is 1,000; 100^3 is 1,000,000. The second digit has to be a 7. 7^3 is 343. The cube of any number ending in 7 has to end in 3. There’s not another number from 1 to 9 with a cube that ends in 3. That’s one of those things you learn from playing with arithmetic. (A number ending in 1 cubes to something ending in 1. A number ending in 2 cubes to something ending in 8. And so on.)

So the cube root has to be one of 17, 27, 37, 47, 57, 67, 77, 87, or 97. Again, if 50,653 is a perfect cube. And we can do better than saying it’s merely one of those nine possibilities. 40 times 40 times 40 is 64,000. This means, first, that 47 and up are definitely too large. But it also means that 40 is just a little more than the cube root of 50,653. So, if 50,653 is a perfect cube, then it’s most likely going to be the cube of 37. (And it is: 37^3 = 50,653.)
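That last-digit business is easy to check by churning through the digits; a line of Python, if you don’t trust your own arithmetic play:

```python
# The last digit of d^3 for each possible last digit d; only 7 cubes to a 3.
print({d: (d ** 3) % 10 for d in range(10)})
# {0: 0, 1: 1, 2: 8, 3: 7, 4: 4, 5: 5, 6: 6, 7: 3, 8: 2, 9: 9}
```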

Bill Watterson’s Calvin and Hobbes rerun for the 2nd is a great sequence of Hobbes explaining arithmetic to Calvin. There is nothing which could be added to Hobbes’s explanation of 3 + 8 which would make it better. I will, though, modify Hobbes’s explanation of what the numerator is. It’s ridiculous to think it’s Latin for “number eighter”. The reality is possibly more ridiculous, as it means “a numberer”. Apparently it derives from “numeratus”, meaning “to number”. The “denominator” comes from “de nomen”, as in “name”. So, you know, “the thing that’s named”. Which does show the terms mean something. A poet could turn “numerator over denominator” into “the number of parts of the thing we name”, or something near enough that.

Hobbes continues the next day, introducing Calvin to imaginary numbers. The term “imaginary numbers” tells us their history: they looked, when first noticed in formulas for finding roots of third- and fourth-degree polynomials, like obvious nonsense. But if you carry on, following the rules as best you can, that nonsense would often shake out and you’d get back to normal numbers again. And as generations of mathematicians grew up realizing these acted like numbers we started to ask: well, how is an imaginary number any less real than, oh, the square root of six?

Hobbes’s particular examples of imaginary numbers — “eleventeen” and “thirty-twelve” — are great-sounding compositions. They put me in mind, as many of Watterson’s best words do, of a 1960s Peanuts in which Charlie Brown is trying to help Sally practice arithmetic. (I can’t find it online, as that meme with edited text about Sally Brown and the sixty grapefruits confounds my web searches.) She offers suggestions like “eleventy-Q” and asks if she’s close, which Charlie Brown admits is hard to say.

Cherry Trail: 'Good morning, honey! Where's Dad?' Mark Trail: 'He's out on the porch reading the paper!' Cherry: 'Rusty sure is excited about our upcoming trip to Mexico!' Mark: 'Did you get everything worked out with the school?' Cherry: 'Rusty will need to do some math assignments, but he'll get credit for his other subjects since it's an educational trip!'
James Allen’s Mark Trail for the 3rd of January, 2018. James Allen has changed many things about the comic strip since Jack Elrod’s retirement, as I have observed over on the other blog. The stories are less ruthlessly linear. There’s no more odd word balloon placement implying that giant squirrels are talking about the poachers. Mark Trail sometimes has internal thoughts. I’m glad that he does still choose to over-emphasize declarations like “[Your Dad]’s out on the porch reading the paper!” There are some traditions.
And finally, James Allen’s Mark Trail for the 3rd just mentions mathematics as the subject that Rusty Trail is going to have to do some work on instead of allowing the experience of a family trip to Mexico to count. This is of extremely marginal relevance, but it lets me include a picture of a comic strip, and I always like getting to do that.

Theorem Thursday: A First Fixed Point Theorem


I’m going to let the Mean Value Theorem slide a while. I feel more like a Fixed Point Theorem today. As with the Mean Value Theorem there’s several of these. Here I’ll start with an easy one.

The Fixed Point Theorem.

Back when the world and I were young I would play with electronic calculators. They encouraged play. They made it so easy to enter a number and hit an operation, and then hit that operation again, and again and again. Patterns appeared. Start with, say, ‘2’ and hit the ‘squared’ button, the smaller ‘2’ raised up from the key’s baseline. You got 4. And again: 16. And again: 256. And again and again and you got ever-huger numbers. This happened whenever you started from a number bigger than 1. Start from something smaller than 1, however tiny, and it dwindled down to zero, whatever you tried. Start at ‘1’ and it just stays there. The results were similar if you started with negative numbers. The first squaring put you in positive numbers and everything carried on as before.

This sort of thing happened a lot. Keep hitting the mysterious ‘exp’ and the numbers would keep growing forever. Keep hitting ‘sqrt’; if you started above 1, the numbers dwindled to 1. Start below and the numbers rise to 1. Or you started at zero, but who’s boring enough to do that? ‘log’ would start with positive numbers and keep dropping until it turned into a negative number. The next step was the calculator’s protest that we were unleashing madness on the world.

But you didn’t always get zero, one, infinity, or madness, from repeatedly hitting the calculator button. Sometimes, some functions, you’d get an interesting number. If you picked any old number and hit cosine over and over the digits would eventually settle down to around 0.739085. Cosine’s great. Tangent … tangent is weird. Tangent does all sorts of bizarre stuff. But at least cosine is there, giving us this interesting number.

(Something you might wonder: this is the cosine of an angle measured in radians, which is how mathematicians naturally think of angles. Normal people measure angles in degrees, and that will have a different fixed point. We write both the cosine-in-radians and the cosine-in-degrees using the shorthand ‘cos’. We get away with this because people who are confused by this are too embarrassed to call us out on it. If we’re thoughtful we write, say, ‘cos x’ for radians and ‘cos x°’ for degrees. This makes the difference obvious. It doesn’t really, but at least we gave some hint to the reader.)

This all is an example of a fixed point theorem. Fixed point theorems turn up in a lot of fields. They were most impressed upon me in dynamical systems, studying how a complex system changes in time. A fixed point, for these problems, is an equilibrium. It’s where things aren’t changed by a process. You can see where that’s interesting.

In this series I haven’t stated theorems exactly much, and I haven’t given them real proofs. But this is an easy one to state and to prove. Start off with a function, which I’ll name ‘f’, because yes, that is exactly how much effort goes into naming functions. It has as domain the interval [a, b] for some real numbers ‘a’ and ‘b’. And it has as range the same interval, [a, b]. It might use the whole range; it might use only a subset of it. And we have to require that f is continuous.

Then there has to be at least one fixed point. There must be at least one number ‘c’, somewhere in the interval [a, b], for which f(c) equals c. There may be more than one; we don’t say anything about how many there are. And it can happen that c is equal to a. Or that c equals b. We don’t know that it is or that it isn’t. We just know there’s at least one ‘c’ that makes f(c) equal c.
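Stated compactly, in the little mapping-arrow notation:

f : [a, b] \rightarrow [a, b] \text{ continuous} \quad \Rightarrow \quad f(c) = c \text{ for at least one } c \in [a, b]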

You get that in my various examples. If the function f has the rule that any given x is matched to x^2, then we do get two fixed points: f(0) = 0^2 = 0, and f(1) = 1^2 = 1. Or if f has the rule that any given x is matched to the square root of x, then again we have: f(0) = \sqrt{0} = 0 and f(1) = \sqrt{1} = 1 . Same old boring fixed points. The cosine is a little more interesting. For that we have f(0.739085...) = \cos\left(0.739085...\right) = 0.739085... .

How to prove it? The easiest way I know is to summon the Intermediate Value Theorem. Since I wrote a couple hundred words about that a few weeks ago I can assume you understand it perfectly and have no question about how it makes this problem easy. I don’t even need to go on, do I?

… Yeah, fair enough. Well, here’s how to do it. We’ll take the original function f and create, based on it, a new function. We’ll dig deep in the alphabet and name that ‘g’. It has the same domain as f, [a, b]. Its range is … oh, well, something in the real numbers. Don’t care. The wonder comes from the rule we use.

The rule for ‘g’ is this: match the given number ‘x’ with the number ‘f(x) – x’. That is, g(a) equals whatever f(a) would be, minus a. g(b) equals whatever f(b) would be, minus b. We’re allowed to define a function in terms of some other function, as long as the symbols are meaningful. And we aren’t doing anything wrong here, like dividing by zero or taking the logarithm of a negative number or asking for f where it isn’t defined.

You might protest that we don’t know what the rule for f is. We’re told there is one, and that it’s a continuous function, but nothing more. So how can I say I’ve defined g in terms of a function I don’t know?

In the first place, I already know everything about f that I need to. I know it’s a continuous function defined on the interval [a, b]. I won’t use any more than that about it. And that’s great. A theorem that doesn’t require knowing much about a function is one that applies to more functions. It’s like the difference between being able to say something true of all living things in North America, and being able to say something true of all persons born in Red Bank, New Jersey, on the 18th of February, 1944, who are presently between 68 and 70 inches tall and working on their rock operas. Both things may be true, but one of those things you probably use more.

In the second place, suppose I gave you a specific rule for f. Let me say, oh, f matches x with the arccosecant of x. Are you feeling any more enlightened now? Didn’t think so.

Back to g. Here’s some things we can say for sure about it. g is a function defined on the interval [a, b]. That’s how we set it up. Next point: g is a continuous function on the interval [a, b]. Remember, g is just the function f, which was continuous, minus x, which is also continuous. The difference of two continuous functions is still going to be continuous. (This is obvious, although it may take some considered thinking to realize why it is obvious.)

Now some interesting stuff. What is g(a)? Well, it’s whatever number f(a) is minus a. I can’t tell you what number that is. But I can tell you this: it’s not negative. Remember that f(a) has to be some number in the interval [a, b]. That is, it’s got to be no smaller than a. So the smallest f(a) can be is equal to a, in which case f(a) minus a is zero. And f(a) might be larger than a, in which case f(a) minus a is positive. So g(a) is either zero or a positive number.

(If you’ve just realized where I’m going and gasped in delight, well done. If you haven’t, don’t worry. You will. You’re just out of practice.)

What about g(b)? Since I don’t know what f(b) is, I can’t tell you what specific number it is. But I can tell you it’s not a positive number. The reasoning is just like above: f(b) is some number on the interval [a, b]. So the biggest number f(b) can equal is b. And in that case f(b) minus b is zero. If f(b) is any smaller than b, then f(b) minus b is negative. So g(b) is either zero or a negative number.

(Smiling at this? Good job. If you aren’t, again, not to worry. This sort of argument is not the kind of thing you do in Boring Algebra. It takes time and practice to think this way.)

And now the Intermediate Value Theorem works. g(a) is a positive number. g(b) is a negative number. g is continuous from a to b. Therefore, there must be some number ‘c’, between a and b, for which g(c) equals zero. And remember what g(c) means: f(c) – c equals 0. Therefore f(c) has to equal c. There has to be a fixed point.

And some tidying up. Like I said, g(a) might be positive. It might also be zero. But if g(a) is zero, then f(a) – a = 0. So a would be a fixed point. And similarly if g(b) is zero, then f(b) – b = 0. So then b would be a fixed point. The important thing is there must be at least some fixed point.

Now that calculator play starts taking on purposeful shape. Squaring a number could find a fixed point only if you started with a number from -1 to 1. The square of a number outside this range, such as ‘2’, would be bigger than you started with, and the Fixed Point Theorem doesn’t apply. Similarly with exponentials. But square roots? The square root of any number from 0 to a positive number ‘b’ is a number between 0 and ‘b’, at least as long as b is bigger than 1. So there was a fixed point, at 1. The cosine of a real number is some number between -1 and 1, and the cosines of all the numbers between -1 and 1 are themselves between -1 and 1. The Fixed Point Theorem applies. Tangent isn’t continuous on the sort of closed interval we need. And so the calculator play never settles on anything.

As with the Intermediate Value Theorem, this is an existence proof. It guarantees there is a fixed point. It doesn’t tell us how to find one. Calculator play does, though. Start from any old number that looks promising and work out f for that number. Then take that and put it back into f. And again. And again. This is known as “fixed point iteration”. It won’t give you the exact answer.

Not usually, anyway. In some freak cases it will. But what it will give, provided some extra conditions are satisfied, is a sequence of values that get closer and closer to the fixed point. When you’re close enough, then you stop calculating. How do you know you’re close enough? If you know something about the original f you can work out some logically rigorous estimates. Or you just keep calculating until all the decimal places you want stop changing between iterations. That’s not logically sound, but it’s easy to program.
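That easy-to-program test looks like, say, this in Python; a sketch, with fixed_point a hypothetical name and the tolerance standing in for “the decimal places I care about”:

```python
import math

def fixed_point(f, x, tol=1e-9, max_iter=100_000):
    """Iterate f from x until successive values agree to within tol."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx - x) < tol:
            return fx
        x = fx
    raise ValueError("not settling; f may change too fast on this interval")

print(fixed_point(math.cos, 1.0))   # 0.7390851..., the cosine fixed point
```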

That won’t always work. It’ll only work if the function f is differentiable on the interval (a, b). That is, it can’t have corners. And there have to be limits on how fast the function changes on the interval (a, b). If the function changes too fast, iteration can’t be guaranteed to work. But often if we’re interested in a function at all then these conditions will be true, or we can think of a related function for which they are true.

And even if it works it won’t always work well. It can take an enormous pile of calculations to get near the fixed point. But this is why we have computers, and why we can leave them to work overnight.

And yet such a simple idea works. It appears in ancient times, in a formula for finding the square root of an arbitrary positive number ‘N’. (Find the fixed point for f(x) = \frac{1}{2}\left(\frac{N}{x} + x\right) ). It creeps into problems that don’t look like fixed points. Calculus students learn of something called the Newton-Raphson Iteration. It finds roots, points where a function f(x) equals zero. Mathematics majors learn of numerical methods to solve ordinary differential equations. The most stable of these are again fixed-point iteration schemes, albeit in disguise.
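That ancient square-root recipe drops right into such an iterator. A self-contained sketch, with old_sqrt a made-up name, finding the square root of 2:

```python
def old_sqrt(n, x=1.0):
    """Square root of n as the fixed point of x -> (n/x + x) / 2."""
    while abs(x * x - n) > 1e-12:
        x = (n / x + x) / 2
    return x

print(old_sqrt(2))   # 1.41421356..., agreeing with the ancients
```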

They all share this almost playful backbone.
