Reading the Comics, October 14, 2017: Physics Equations Edition


So that busy Saturday I promised for the mathematically-themed comic strips? Here it is, along with a Friday that reached the lowest non-zero levels of activity.

Stephan Pastis’s Pearls Before Swine for the 13th is one of those equations-of-everything jokes. Naturally it features a panel full of symbols that, to my eye, don’t parse. There are what look like syntax errors; the one anyone could see is the { mark that isn’t balanced by a }. But when someone works rough they will, often, write stuff that doesn’t quite parse. Think of it as an artist’s rough sketch of a complicated scene: the lines and anatomy may be gibberish, but if the major lines of the composition are right then all is well.

Most attempts to write an equation for everything are really about writing a description of the fundamental forces of nature. We trust that it’s possible to go from a description of how gravity and electromagnetism and the nuclear forces go to, ultimately, a description of why chemistry should work and why ecologies should form and there should be societies. There are, as you might imagine, a number of assumed steps along the way. I would accept the idea that we’ll have a unification of the fundamental forces of physics this century. I’m not sure I would believe having all the steps between the fundamental forces and, say, how nerve cells develop worked out in that time.

Mark Anderson’s Andertoons makes its overdue appearance for the week on the 14th, with a chalkboard word-problem joke. Amusing enough. And estimating an answer, getting it wrong, and refining it is good mathematics. It’s not just numerical mathematics that will look for an approximate solution and then refine it. As a first approximation, 15 minus 7 isn’t far off 10. And for mental arithmetic approximating 15 minus 7 as 10 is quite justifiable. It could be made more precise if a more exact answer were needed.

Maria Scrivan’s Half Full for the 14th I’m going to call the anthropomorphic geometry joke for the week. If it’s not then it’s just wordplay and I’d have no business including it here.

Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 14th tosses in the formula describing how strong the force of gravity between two objects is. In Newtonian gravity, which is why it’s the Newton Police. It’s close enough for most purposes. I’m not sure how this supports the cause of world peace.
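That formula, for the record, is Newton’s law of universal gravitation:

\[ F = \frac{G m_1 m_2}{r^2} \]

where $G$ is the gravitational constant, $m_1$ and $m_2$ are the two masses, and $r$ is the distance between them.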

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 14th names Riemann’s Quaternary Conjecture. I was taken in by the panel, trying to work out what the proposed conjecture could even mean. The reason it works is that Bernhard Riemann wrote like 150,000 major works in every field of mathematics, and about 149,000 of them are big, important foundational works. The most important Riemann conjecture would be the one about zeroes of the Riemann Zeta function. This is typically called the Riemann Hypothesis. But someone could probably write a book just listing the stuff named for Riemann, and that’s got to include a bunch of very specific conjectures.

The End 2016 Mathematics A To Z: Riemann Sum


I see for the other A To Z I did this year I did something else named for Riemann. So I did. Bernhard Riemann did a lot of work that’s essential to how we see mathematics today. We name all kinds of things for him, and correctly so. Here’s one of his many essential bits of work.

Riemann Sum.

The Riemann Sum is a thing we learn in Intro to Calculus. It’s essential in getting us to definite integrals. We’re introduced to it in functions of a single variable. The functions have a domain that’s an interval of real numbers and a range that’s somewhere in the real numbers. The Riemann Sum — and from it, the integral — is a real number.

We get this number by following a couple steps. The first is we chop the interval up into a bunch of smaller intervals. That chopping-up we call a “partition” because it’s another of those times mathematicians use a word roughly the way ordinary people use it. From each one of those chopped-up pieces we pick a representative point. Now with each piece evaluate what the function is for that representative point. Multiply that by the width of the piece it came from. Then take those products for each of those pieces and add them all together. If you’ve done it right you’ve got a number.
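Written out in symbols, for whoever likes symbols: if the partition chops the interval $[a, b]$ into pieces $a = x_0 < x_1 < \cdots < x_n = b$, and we pick a representative point $x_i^*$ from each piece $[x_{i-1}, x_i]$, the sum is

\[ S = \sum_{i=1}^{n} f(x_i^*) \, (x_i - x_{i-1}) . \]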

You need a couple pieces in place to have “the” Riemann Sum for something. You need a function, which is fair enough. And you need a partitioning of the interval. And you need some representative point for each of the partitions. Change any of them — function, partition, or point — and you may change the sum you get. You expect that for changing the function. Changing the partition? That’s less obvious. But draw some wiggly curvy function on a sheet of paper. Draw a couple of partitions of the horizontal axis. (You’ll probably want to use different colors for different partitions.) That should coax you into it. And you’d probably take it on my word that different representative points give you different sums.

Very different? It’s possible. There’s nothing stopping it from happening. But if the results aren’t very different then we might just have an integrable function. That’s a function that gives us the same Riemann Sum no matter how we pick representative points, as long as we pick partitions that are fine enough. We measure how fine a partition is by how big the widest chopped-up piece is. To be integrable the Riemann Sum for a function has to get to the same number whenever the partition’s size gets small enough and however we pick points inside. We get the lovely quiet paradox in which we add together infinitely many things, each of them infinitesimally tiny, and get a regular old number out of all that work.
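One way to pin that down: the function $f$ is integrable on $[a, b]$, with integral $I$, if for every $\epsilon > 0$ there is a $\delta > 0$ so that

\[ \left| \sum_{i=1}^{n} f(x_i^*) \, (x_i - x_{i-1}) - I \right| < \epsilon \]

for every partition whose widest piece is narrower than $\delta$, and every choice of representative points $x_i^*$.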

We use the Riemann Sum for what we call numerical quadrature. That’s working out integrals on the computer. Or calculator. Or by hand. When we do it by evaluating numbers instead of using analysis. It’s very easy to program. And we can do some tricks based on the Riemann Sum to make the numerical estimate a closer match to the actual integral.
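Here’s a minimal sketch of that in Python. The function name riemann_sum and the example integrand are my own inventions, picked just to show how little programming the method needs.

```python
def riemann_sum(f, a, b, n, rule="mid"):
    """Estimate the integral of f over [a, b] using n equal-width pieces.

    rule picks the representative point in each piece:
    "left", "right", or "mid" (the midpoint).
    """
    width = (b - a) / n
    total = 0.0
    for i in range(n):
        left = a + i * width
        if rule == "left":
            point = left
        elif rule == "right":
            point = left + width
        else:
            point = left + width / 2
        total += f(point) * width
    return total

# Example: integrate f(x) = x^2 over [0, 1]; the exact answer is 1/3.
print(riemann_sum(lambda x: x * x, 0.0, 1.0, 1000))          # about 0.3333332
print(riemann_sum(lambda x: x * x, 0.0, 1.0, 1000, "left"))  # about 0.3328335, a little low
```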

And we use the Riemann Sum to learn how the Riemann Integral works. It’s a blessedly straightforward thing. It appeals to intuition well. It lets us draw all sorts of curves with rectangular boxes overlaying them. It’s so easy to work out the area of a rectangular box. We can imagine adding up these areas without being confused.

We don’t use the Riemann Sum to actually do integrals, though. Numerical approximations to an integral, yes. For the actual integral it’s too hard to use. What makes it hard is you need to evaluate this for every possible partition and every possible pick of representative points. In grad school my analysis professor worked through — once — using this to integrate the number 1. This is the easiest possible thing to integrate and it was barely manageable. He gave a good try at integrating the function ‘f(x) = x’ but admitted he couldn’t do it. None of us could.

When you see the Riemann Sum in an Introduction to Calculus course you see it in simplified form. You get partitions that are very easy to work with. Like, you break the interval up into some number of equally-sized chunks. You get representative points that follow one of a couple good choices. The left end of the partition. The right end of the partition. The middle of the partition.
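With the interval $[a, b]$ chopped into $n$ equal pieces of width $\Delta x = (b - a)/n$, those three choices give the sums

\[ S_{\text{left}} = \sum_{i=0}^{n-1} f(a + i\,\Delta x)\,\Delta x, \qquad S_{\text{right}} = \sum_{i=1}^{n} f(a + i\,\Delta x)\,\Delta x, \qquad S_{\text{mid}} = \sum_{i=0}^{n-1} f\!\left(a + \left(i + \tfrac{1}{2}\right)\Delta x\right)\Delta x . \]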

That’s fine, numerically. If the function is integrable it doesn’t matter what partition or representative points we pick. And it’s fine for learning about whether functions are integrable. If it matters whether you pick left or middle or right ends of the partition then the function isn’t integrable. The instructor can give functions that break integrability based on a given partition or endpoint choice or whatever.
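The classic breaking example, if you want one, is the function on the interval from 0 to 1 that’s 1 at every rational number and 0 at every irrational number:

\[ f(x) = \begin{cases} 1 & \text{if } x \text{ is rational} \\ 0 & \text{if } x \text{ is irrational} \end{cases} \]

Whatever partition you use, picking rational representative points gives a sum of 1 and picking irrational representative points gives a sum of 0. The sums never settle down, so the function isn’t Riemann integrable.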

But that isn’t every possible partition and every possible pick of representative points. I suppose it’s possible to work all that out for a couple of really, really simple functions. But it’s so much work. We’re better off using the Riemann Sum to get to formulas about integrals that don’t depend on actually using the Riemann Sum.

So that is the curious position the Riemann Sum has. It is a fundament of integral calculus. It is the way we first define the definite integral. We rely on it to learn what definite integrals are like. We use it all the time numerically. We never use it analytically. It’s too hard. I hope you appreciate the strange beauty of that.

A Leap Day 2016 Mathematics A To Z: Riemann Sphere


To my surprise nobody requested any terms beginning with `R’ for this A To Z. So I take this free day to pick on a concept I’d imagine nobody saw coming.

Riemann Sphere.

We need to start with the complex plane. This is just, well, a plane. All the points on the plane correspond to a complex-valued number. That’s a real number plus a real number times i. And i is one of those numbers which, squared, equals -1. It’s like the real number line, only in two directions at once.

Take that plane. Now put a sphere on it. The sphere has radius one-half. And it sits on top of the plane. Its lowest point, the south pole, sits on the origin. That’s whatever point corresponds to the number 0 + 0i, or as humans know it, “zero”.

We’re going to do something amazing with this. We’re going to make a projection, something that maps every point on the sphere to every point on the plane, and vice-versa. In other words, we can match every complex-valued number to one point on the sphere. And every point on the sphere to one complex-valued number. Here’s how.

Imagine sitting at the north pole. And imagine that you can see through the sphere. Pick any point on the plane. Look directly at it. Shine a laser beam, if that helps you pick the point out. The laser beam is going to go into the sphere — you’re squatting down to better look through the sphere — and come out somewhere on the sphere, before going on to the point in the plane. The point where the laser beam emerges? That’s the mapping of the point on the plane to the sphere.
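For this particular sphere, with radius one-half and resting on the origin, the matching works out to a tidy pair of formulas. A point $(X, Y, Z)$ on the sphere, other than the north pole, matches the complex number

\[ z = \frac{X + iY}{1 - Z}, \]

and the number $z = x + iy$ matches the sphere point

\[ \left( \frac{x}{1 + |z|^2}, \; \frac{y}{1 + |z|^2}, \; \frac{|z|^2}{1 + |z|^2} \right). \]

You can check that this point really is on the sphere, and really is on the line from the north pole through $z$.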

There’s one point with an obvious match. The south pole is going to match zero. They touch, after all. Other points … it’s less obvious. But some are easy enough to work out. The equator of the sphere, for instance, is going to match all the points a distance of 1 from the origin. So it’ll have the point matching the number 1 on it. It’ll also have the point matching the number -1, and the point matching i, and the point matching -i. And some other numbers.
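The formulas above bear this out: the equator is where $Z = \frac{1}{2}$, and

\[ \frac{|z|^2}{1 + |z|^2} = \frac{1}{2} \quad\Longleftrightarrow\quad |z| = 1 . \]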

All the numbers that are less than 1 from the origin, in fact, will have matches somewhere in the southern hemisphere. If you don’t see why that is, draw some sketches and think about it. You’ll convince yourself. If you write down what convinced you and sprinkle the word “continuity” in here and there, you’ll convince a mathematician. (WARNING! Don’t actually try getting through your Intro to Complex Analysis class doing this. But this is what you’ll be doing.)

What about the numbers more than 1 from the origin? … Well, they all match to points on the northern hemisphere. And tell me that doesn’t stagger you. It’s one thing to match the southern hemisphere to all the points in a circle of radius 1 away from the origin. But we can match everything outside that little circle to the northern hemisphere. And it all fits in!

Not amazed enough? How about this: draw a circle on the plane. Then look at the points on the Riemann sphere that match it. That set of points? It’s also a circle. A line on the plane? That also becomes a circle on the sphere, one that passes through the north pole. (A line through the origin becomes a great circle, the closest thing a sphere has to a straight line.)

How about this? Take a pair of intersecting lines or circles in the plane. Look at what they map to. That mapping, squashed as it might be to the northern hemisphere of the sphere? The projection of the lines or circles will intersect at the same angles as the original. As much as space gets stretched out (near the south pole) or squashed down (near the north pole), angles stay intact.

OK, but besides being stunning, what good is all this?

Well, one is that it’s a good thing to learn on. Geometry gets interested in things that look, at least in places, like planes, but aren’t necessarily. These spheres are, and the way a sphere matches a plane is obvious. We can learn the tools for geometry on the Möbius strip or the Klein bottle or other exotic creations by the tools we prove out on this.

And then physics comes in, being all weird. Much of quantum mechanics makes sense if you imagine it as things on the sphere. (I admit I don’t know exactly how. I went to grad school in mathematics, not in physics, and I didn’t get to the physics side of mathematics much at that time.) The strange ways distance can get mushed up or stretched out have echoes in relativity. They’ll continue having these echoes in other efforts to explain physics as geometry, the way that string theory will.

Also important is that the sphere has a top, the north pole. That point matches … well, what? It’s got to be something infinitely far away from the origin. And this makes sense. We can use this projection to make a logically coherent, sensible description of things “approaching infinity”, the way we want to when we first learn about infinitely big things. Wrapping all the complex-valued numbers to this ball makes the vast manageable.
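The formulas above say the same thing: as $|z|$ grows, the height $\frac{|z|^2}{1 + |z|^2}$ creeps up toward 1, which is the north pole. So the north pole stands in for a single “point at infinity”, and the sphere as a whole matches the extended complex plane

\[ \hat{\mathbb{C}} = \mathbb{C} \cup \{ \infty \} . \]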

It’s also good numerical practice. Computer simulations have problems with infinitely large things, for the obvious reason. We have a couple of tools to handle this. One is to model a really big but not infinitely large space and hope we aren’t breaking anything. One is to create a “tiling”, making the space we are able to simulate repeat itself in a perfect grid forever and ever. But recasting the problem from the infinitely large plane onto the sphere can also work. This requires some ingenuity, to be sure we do the recasting correctly, but that’s all right. If we need to run a simulation over all of space, we can often get away with doing a simulation on a sphere. And isn’t that also grand?
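Here’s a minimal sketch, in Python, of the bookkeeping such a recasting needs, using the radius-one-half sphere described above. The function names are mine, just for illustration; a real simulation would need much more on top of this.

```python
def plane_to_sphere(z):
    """Map a complex number z to a point (X, Y, Z) on the radius-1/2
    sphere resting on the origin, as described above."""
    s = abs(z) ** 2
    return (z.real / (1 + s), z.imag / (1 + s), s / (1 + s))

def sphere_to_plane(point):
    """Map a sphere point (X, Y, Z), with Z < 1, back to a complex number."""
    X, Y, Z = point
    return complex(X, Y) / (1 - Z)

# A point far from the origin lands just shy of the north pole ...
z = complex(300.0, -400.0)
p = plane_to_sphere(z)
print(p[2])                # about 0.999996
# ... and the round trip recovers the original number, up to rounding.
print(sphere_to_plane(p))  # (300-400j)
```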

The Riemann named here is Bernhard Riemann, yet another of those absurdly prolific 19th century mathematicians, especially considering how young he was when he died. His name is all over the fundamentals of analysis and geometry. When you take Introduction to Calculus you get introduced pretty quickly to the Riemann Sum, which is how we first learn how to calculate integrals. It’s that guy. General relativity, and much of modern physics, is based on advanced geometries that again fall back on principles Riemann noticed or set out or described so well that we still think of them as his discoveries.