My Little 2021 Mathematics A-to-Z: Analysis

I’m fortunate this week to have another topic suggested by Mr Wu, blogger and Singaporean mathematics tutor. It’s a big field, so forgive me for not explaining the entire subject.


Analysis is about proving why the rest of mathematics works. It’s a hard field. My experience, a typical one, included crashing against real analysis as an undergraduate and again as a graduate student. It turns out mathematics works by throwing a lot of \epsilon symbols around.

Let me give an example. If you read pop mathematics blogs you know about the number represented by 0.999999\cdots . You’ve seen proofs, some of them even convincing, that this number equals 1. Not a tiny bit less than 1, but exactly 1. Here’s a real-analysis treatment. And — I may regret this — I recommend you don’t read it. Not closely, at least. Instead, look at its shape. Look at the words and symbols as graphic design elements, and trust that what I say is not nonsense. Resume reading after the horizontal rule.

It’s convenient to have a name for the number 0.999999\cdots . I’ll call that r , for “repeating”. 1 we’ll call 1. I think you’ll grant that whatever r is, it can’t be more than 1. I hope you’ll accept that if the difference between 1 and r is zero, then r equals 1. So what is the difference between 1 and r?

Give me some number \epsilon . It has to be a positive number. The implication in the letter \epsilon is that it’s a small number. This isn’t actually required in general. We expect it. We feel surprise and offense if it’s ever not the case.

I can show that the difference between 1 and r is less than \epsilon . I know there is some smallest counting number N so that \epsilon > \frac{1}{10^{N}} . For example, say \epsilon is 0.125. Then we can let N = 1, and 0.125 > \frac{1}{10^{1}} . Or suppose \epsilon is 0.00625. Then we can let N = 3, and 0.00625 > \frac{1}{10^{3}} . (If \epsilon is bigger than 1, let N = 1.) Now we have to ask why I want this N.
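If you like seeing that hunt for the smallest N as code, here is a minimal sketch in Python. Nothing here is from the essay beyond the rule itself; the function name smallest_n is my own invention.

```python
def smallest_n(epsilon):
    # Smallest counting number N with epsilon > 1/10**N.
    # Assumes epsilon is a positive number, as the essay requires.
    n = 1
    while epsilon <= 1 / 10**n:
        n += 1
    return n

print(smallest_n(0.125))    # 1
print(smallest_n(0.00625))  # 3
print(smallest_n(2.0))      # 1, matching the parenthetical rule above
```

Searching upward from N = 1 is the laziest possible approach, but it mirrors the prose: keep trying counting numbers until \frac{1}{10^{N}} drops below \epsilon .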

Whatever the value of r is, I know that it is more than 0.9. And that it is more than 0.99. And that it is more than 0.999. In fact, it’s more than the number you get by truncating r after any whole number N of digits. Let me call r_N the number you get by truncating r after N digits. So, r_1 = 0.9 and r_2 = 0.99 and r_5 = 0.99999 and so on.

Since r > r_N , it has to be true that 1 - r < 1 - r_N . And since we know what r_N is, we can say exactly what 1 - r_N is. It’s \frac{1}{10^{N}} . And we picked N so that \frac{1}{10^{N}} < \epsilon . So 1 - r < 1 - r_N = \frac{1}{10^{N}} < \epsilon . But all we know of \epsilon is that it’s a positive number. It can be any positive number. So 1 - r has to be smaller than each and every positive number. The biggest number that’s smaller than every positive number is zero. So the difference between 1 and r must be zero and so they must be equal.
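If you’d rather check the arithmetic than trust me, Python’s fractions module does it exactly, with no rounding to muddy the waters. This is only a numerical illustration of the argument, with test values of my own choosing; r_trunc is a name I made up for the truncations r_N .

```python
from fractions import Fraction

def r_trunc(n):
    # The truncation r_N: 0.99...9 with n nines, as an exact fraction.
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# 1 - r_N is exactly 1/10**N, as the argument above claims
for n in (1, 2, 5):
    assert 1 - r_trunc(n) == Fraction(1, 10**n)

# and for, say, epsilon = 0.125, the N = 1 truncation already gets below it
epsilon = Fraction(1, 8)
assert 1 - r_trunc(1) < epsilon
```

No choice of positive epsilon escapes: whatever you pick, some truncation’s gap \frac{1}{10^{N}} falls beneath it.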

That is a compelling argument. Granted, it compels much the way your older brother kneeling on your chest and pressing your head into the ground compels. But this argument gives the flavor of what much of analysis is like.

For one, it is fussy, leaning to technical. You see why the subject has the reputation of driving off all but the most intent mathematics majors. Once you get comfortable with this sort of argument, though, it’s hard to notice the fussiness anymore.

For another, the argument shows that the difference between two things is less than every positive number. Therefore the difference is zero and so the things are equal. This is one of mathematics’ most important tricks. And another point, there’s a lot of talk about \epsilon . And about finding differences that are, it usually turns out, smaller than some \epsilon . (As an undergraduate I found something wasteful in how the differences were so often so much less than \epsilon . We can’t exhaust the small numbers, though. It still feels uneconomic.)

Something this misses is another trick, though. That’s adding zero. I couldn’t think of a good way to use that here. What we often get is the need to show that, say, function f and function g are equal. That is, that they are less than \epsilon apart, for every positive \epsilon . What we can often do is show that f is close to some related function, which let me call f_n .

I know what you’re suspecting: f_n must be a polynomial. Good thought! Although in my experience, it’s actually more likely to be a piecewise constant function. That is, it’s some number, e.g., “2”, for part of the domain, and then “2.5” in some other region, with no transition between them. It takes some other values, even values not starting with “2”, in other parts of the domain. Usually this is easier to prove stuff about than even polynomials are.
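Here’s a sketch, in Python, of what such a piecewise constant approximation might look like. The particular function, interval, and names are all my own choices for illustration, not anything standard.

```python
def piecewise_constant(f, n):
    # Approximate f on [0, 1) by a step function: constant on each of
    # n equal pieces, using f's value at the piece's left endpoint.
    def f_n(x):
        k = min(int(x * n), n - 1)   # which piece x falls in
        return f(k / n)
    return f_n

f = lambda x: x * x
f_4 = piecewise_constant(f, 4)
print(f_4(0.3))   # 0.0625, since [0.25, 0.5) gets the value f(0.25)

# the more pieces, the closer the step function hugs f
xs = [i / 1000 for i in range(1000)]
err = lambda g: max(abs(f(x) - g(x)) for x in xs)
assert err(piecewise_constant(f, 100)) < err(piecewise_constant(f, 10))
```

The step function never pretends to be smooth. But each flat piece is as easy as arithmetic gets, and the gap between it and f shrinks as you take more pieces.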

Now for g . It gets the same deal as f : some approximation g_n that’s easier to prove stuff about. Then we want to show that g is close to that g_n . And then show that f_n is close to g_n . So — watch this trick. Or, again, watch the shape of this trick. Read again after the horizontal rule.

The difference | f - g | is equal to | f - f_n + f_n - g | since adding zero, that is, adding the number ( -f_n + f_n ) , can’t change a quantity. And | f - f_n + f_n - g | is equal to | f - f_n + f_n -g_n + g_n - g | . Same reason: ( -g_n + g_n ) is zero. So:

| f - g | = |f - f_n + f_n -g_n + g_n - g |

Now we use the “triangle inequality”. If a, b, and c are the lengths of a triangle’s sides, the sum of any two of those numbers is at least as large as the third. Applied to absolute values, that tells us:

|f - f_n + f_n  -g_n + g_n - g | \le |f - f_n| + |f_n - g_n|  + | g_n - g |

And then if you can show that | f - f_n | is less than \frac{1}{3}\epsilon ? And that | f_n - g_n | is also less than \frac{1}{3}\epsilon ? And you see where this is going for | g_n - g | ? Then you’ve shown that | f - g | < \epsilon . With luck, each of these little pieces is something you can prove.
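The adding-zero-and-triangle-inequality shape can be checked on some numbers, too. The values of f , g , f_n , and g_n at a sample point below are pure invention on my part; the point is only that the three little pieces together bound the big difference.

```python
# hypothetical values of f, g and their approximations at one sample point
f, g = 1.00, 1.02
f_n, g_n = 0.99, 1.01

pieces = (abs(f - f_n), abs(f_n - g_n), abs(g_n - g))

# the triangle inequality: |f - g| is at most the sum of the three pieces
assert abs(f - g) <= sum(pieces)

# and if each piece is under epsilon/3, the whole difference is under epsilon
epsilon = 0.12
assert all(piece < epsilon / 3 for piece in pieces)
assert abs(f - g) < epsilon
```

The thirds aren’t magic; any split of \epsilon into three positive pieces would do. Thirds are just tidy.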

Don’t worry about what all this means. It’s meant to give a flavor of what you do in an analysis course. It looks hard, but most of that is because it’s a different sort of work than you’d done before. If you hadn’t seen the adding-zero and triangle-inequality tricks? I don’t know how long you’d need to imagine them.

There are other tricks too. An old reliable one is showing that one thing is bounded by another. That is, that f \le g . You use this trick all the time because if you can also show that g \le f , then those two have to be equal.

The good thing — and there is good — is that once you get the hang of these tricks analysis starts to come together. And even get easier. The first course you take as a mathematics major is real analysis, all about functions of real numbers. The next course in this track is complex analysis, about functions of complex numbers. And it is easy. Compared to what comes before, yes. But also on its own. Every theorem in complex analysis is named after Augustin-Louis Cauchy. They all show that the integral of your function, calculated along a closed loop, is zero. I exaggerate by \epsilon .

In grad school, if you make it, you get to functional analysis, which examines functions on functions and other abstractions like that. This, too, is easy, possibly because by then you’ve seen all the basic approaches several courses over. Or it feels easy after all that mucking around with the real numbers.

This is not the entirety of explaining how mathematics works. Since all these proofs depend on how numbers work, we need to show how numbers work. How logic works. But those are subjects we can leave for grad school, for someone who’s survived this gauntlet.

I hope to return in a week with a fresh A-to-Z essay. This week’s essay, and all the essays for the Little Mathematics A-to-Z, should be at this link. And all this year’s essays, and all A-to-Z essays from past years, should be at this link. Thank you once more for reading.

Before Drawing a Graph

I want to talk about drawing graphs, specifically, drawing curves on graphs. We know roughly what’s meant by that: it’s about wiggly shapes with a faint rectangular grid, usually in grey or maybe drawn in dotted lines, behind them. Sometimes the wiggly shapes will be in bright colors, to clarify a complicated figure or to justify printing the textbook in color. Those graphs.

I clarify because there is a type of math called graph theory in which, yes, you might draw graphs, but there what’s meant by a graph is just any sort of group of points, called vertices, connected by lines or curves. It makes great sense as a name, but it’s not what someone who talks about drawing a graph means, until graph theory gets into consideration. Those graphs are fun, particularly because they’re insensitive to exactly where the vertices are, so you get to exercise some artistic talent instead of figuring out whatever you were trying to prove in the problem.

The ordinary kind of graphs offer some wonderful advantages. The obvious one is that they’re pictures. People can very often understand a picture of something much faster than they can understand other sorts of descriptions. This probably doesn’t need any demonstration; if it does, try looking at a map of the boundaries of South Carolina versus reading a description of its boundaries. Some problems are much easier to work out if we can approach them as geometric problems. (And I admit feeling a particular delight when I can prove a problem geometrically; it feels cleverer.)

